{"status": "ok",
 "message-type": "work",
 "message-version": "1.0.0",
 "message": {
  "indexed": {"date-parts": [[2024, 6, 12]], "date-time": "2024-06-12T11:14:27Z", "timestamp": 1718190867939},
  "reference-count": 56,
  "publisher": "Elsevier BV",
  "license": [
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "tdm", "delay-in-days": 0, "URL": "https://www.elsevier.com/tdm/userlicense/1.0/"},
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "stm-asf", "delay-in-days": 0, "URL": "https://doi.org/10.15223/policy-017"},
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "stm-asf", "delay-in-days": 0, "URL": "https://doi.org/10.15223/policy-037"},
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "stm-asf", "delay-in-days": 0, "URL": "https://doi.org/10.15223/policy-012"},
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "stm-asf", "delay-in-days": 0, "URL": "https://doi.org/10.15223/policy-029"},
   {"start": {"date-parts": [[2022, 6, 1]], "date-time": "2022-06-01T00:00:00Z", "timestamp": 1654041600000}, "content-version": "stm-asf", "delay-in-days": 0, "URL": "https://doi.org/10.15223/policy-004"}
  ],
  "content-domain": {"domain": ["elsevier.com", "sciencedirect.com"], "crossmark-restriction": true},
  "short-container-title": ["Neurocomputing"],
  "published-print": {"date-parts": [[2022, 6]]},
  "DOI": "10.1016/j.neucom.2021.10.121",
  "type": "journal-article",
  "created": {"date-parts": [[2022, 2, 12]], "date-time": "2022-02-12T16:04:51Z", "timestamp": 1644681891000},
  "page": "54-65",
  "update-policy": "http://dx.doi.org/10.1016/elsevier_cm_policy",
  "source": "Crossref",
  "is-referenced-by-count": 2,
  "title": ["You should know more: Learning external knowledge for visual dialogue"],
  "prefix": "10.1016",
  "volume": "488",
  "author": [
   {"given": "Lei", "family": "Zhao", "sequence": "first", "affiliation": []},
   {"given": "Haonan", "family": "Zhang", "sequence": "additional", "affiliation": []},
   {"given": "Xiangpeng", "family": "Li", "sequence": "additional", "affiliation": []},
   {"given": "Sen", "family": "Yang", "sequence": "additional", "affiliation": []}
  ],
  "reference": [
   {"key": "10.1016/j.neucom.2021.10.121_b0005", "first-page": "6077", "article-title": "Bottom-up and top-down attention for image captioning and visual question answering, in", "author": "Anderson", "year": "2018", "journal-title": "CVPR"},
   {"issue": "10", "key": "10.1016/j.neucom.2021.10.121_b0010", "doi-asserted-by": "crossref", "first-page": "3047", "DOI": "10.1109/TNNLS.2018.2851077", "article-title": "From deterministic to generative: Multi-modal stochastic RNNs for video captioning", "volume": "30", "author": "Song", "year": "2019", "journal-title": "IEEE Trans. Neural Networks Learn. Syst."},
   {"key": "10.1016/j.neucom.2021.10.121_b0015", "doi-asserted-by": "crossref", "first-page": "56", "DOI": "10.1016/j.neucom.2018.03.078", "article-title": "Image captioning by incorporating affective concepts learned from both visual and textual components", "volume": "328", "author": "Yang", "year": "2019", "journal-title": "Neurocomputing"},
   {"issue": "5", "key": "10.1016/j.neucom.2021.10.121_b0020", "article-title": "Hierarchical LSTMs with adaptive attention for visual captioning", "volume": "42", "author": "Gao", "year": "2020", "journal-title": "IEEE Trans. Pattern Anal. Mach. Intell."},
   {"key": "10.1016/j.neucom.2021.10.121_b0025", "series-title": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", "first-page": "1339", "article-title": "Tree-augmented cross-modal encoding for complex-query video retrieval, in", "author": "Yang", "year": "2020"},
   {"key": "10.1016/j.neucom.2021.10.121_b0030", "series-title": "Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval", "first-page": "1", "article-title": "Deconfounded video moment retrieval with causal intervention, in", "author": "Yang", "year": "2021"},
   {"key": "10.1016/j.neucom.2021.10.121_b0035", "unstructured": "J. Dong, X. Li, C. Xu, X. Yang, G. Yang, X. Wang, M. Wang, Dual encoding for video retrieval by text, IEEE Transactions on Pattern Analysis and Machine Intelligence."},
   {"key": "10.1016/j.neucom.2021.10.121_b0040", "doi-asserted-by": "crossref", "unstructured": "Y. Qiao, Z. Yu, J. Liu, RankVQA: Answer re-ranking for visual question answering, in: ICME, 2020, pp. 1–6.", "DOI": "10.1109/ICME46284.2020.9102814"},
   {"key": "10.1016/j.neucom.2021.10.121_b0045", "doi-asserted-by": "crossref", "first-page": "8658", "DOI": "10.1609/aaai.v33i01.33018658", "article-title": "Beyond RNNs: Positional self-attention with co-attention for video question answering, in", "author": "Li", "year": "2019", "journal-title": "AAAI"},
   {"key": "10.1016/j.neucom.2021.10.121_b0050", "doi-asserted-by": "crossref", "unstructured": "X. Li, L. Gao, X. Wang, W. Liu, X. Xu, H.T. Shen, J. Song, Learnable aggregating net with diversity learning for video question answering, in: ACM MM, ACM, 2019, pp. 1166–1174.", "DOI": "10.1145/3343031.3350971"},
   {"key": "10.1016/j.neucom.2021.10.121_b0055", "doi-asserted-by": "crossref", "first-page": "187", "DOI": "10.1016/j.neucom.2019.03.035", "article-title": "Exploiting hierarchical visual features for visual question answering", "volume": "351", "author": "Hong", "year": "2019", "journal-title": "Neurocomputing"},
   {"key": "10.1016/j.neucom.2021.10.121_b0060", "first-page": "1564", "article-title": "Visual dialogue with targeted objects, in", "author": "Wang", "year": "2019", "journal-title": "ICME"},
   {"key": "10.1016/j.neucom.2021.10.121_b0065", "first-page": "2039", "article-title": "Factor graph attention, in", "author": "Schwartz", "year": "2019", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0070", "first-page": "520", "article-title": "Learning goal-oriented visual dialog agents: Imitating and surpassing analytic experts, in", "author": "Chang", "year": "2019", "journal-title": "ICME"},
   {"key": "10.1016/j.neucom.2021.10.121_b0075", "first-page": "10434", "article-title": "Image-question-answer synergistic network for visual dialog, in", "author": "Guo", "year": "2019", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0080", "doi-asserted-by": "crossref", "unstructured": "A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J.M.F. Moura, D. Parikh, D. Batra, Visual dialog, in: CVPR, 2017, pp. 1080–1089."},
   {"key": "10.1016/j.neucom.2021.10.121_b0085", "unstructured": "S. Kottur, J.M.F. Moura, D. Parikh, D. Batra, M. Rohrbach, Visual coreference resolution in visual dialog using neural module networks, in: ECCV, vol. 11219, 2018, pp. 160–178."},
   {"key": "10.1016/j.neucom.2021.10.121_b0090", "year": "2019", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0095", "first-page": "2024", "article-title": "Dual attention networks for visual reference resolution in visual dialog, in", "author": "Kang", "year": "2019", "journal-title": "EMNLP-IJCNLP"},
   {"key": "10.1016/j.neucom.2021.10.121_b0100", "series-title": "Proceedings of the 28th ACM International Conference on Multimedia", "first-page": "1939", "article-title": "Weakly-supervised video object grounding by exploring spatio-temporal contexts, in", "author": "Yang", "year": "2020"},
   {"issue": "3", "key": "10.1016/j.neucom.2021.10.121_b0105", "first-page": "1", "article-title": "Deep neighborhood component analysis for visual similarity modeling", "volume": "11", "author": "Liu", "year": "2020", "journal-title": "ACM Transactions on Intelligent Systems and Technology (TIST)"},
   {"key": "10.1016/j.neucom.2021.10.121_b0110", "first-page": "447", "article-title": "Visual relation grounding in videos, in", "author": "Xiao", "year": "2020", "journal-title": "European Conference on Computer Vision"},
   {"key": "10.1016/j.neucom.2021.10.121_b0115", "article-title": "Interventional video relation detection, in", "author": "Li", "year": "2021", "journal-title": "ACM International Conference on Multimedia"},
   {"key": "10.1016/j.neucom.2021.10.121_b0120", "first-page": "4466", "article-title": "GuessWhat?! Visual object discovery through multi-modal dialogue, in", "author": "de Vries", "year": "2017", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0125", "doi-asserted-by": "crossref", "unstructured": "B. Zhuang, Q. Wu, C. Shen, I.D. Reid, A. van den Hengel, Parallel attention: A unified framework for visual object discovery through dialogs and queries, in: CVPR, 2018, pp. 4252–4261."},
   {"key": "10.1016/j.neucom.2021.10.121_b0130", "unstructured": "A. Das, S. Kottur, J.M.F. Moura, S. Lee, D. Batra, Learning cooperative visual dialog agents with deep reinforcement learning, in: ICCV, 2017, pp. 2970–2979."},
   {"key": "10.1016/j.neucom.2021.10.121_b0140", "unstructured": "P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, in: ICLR, OpenReview.net, 2018."},
   {"issue": "6", "key": "10.1016/j.neucom.2021.10.121_b0145", "doi-asserted-by": "crossref", "first-page": "1137", "DOI": "10.1109/TPAMI.2016.2577031", "article-title": "Faster R-CNN: Towards real-time object detection with region proposal networks", "volume": "39", "author": "Ren", "year": "2017", "journal-title": "IEEE Trans. Pattern Anal. Mach. Intell."},
   {"issue": "1", "key": "10.1016/j.neucom.2021.10.121_b0150", "doi-asserted-by": "crossref", "first-page": "32", "DOI": "10.1007/s11263-016-0981-7", "article-title": "Visual Genome: Connecting language and vision using crowdsourced dense image annotations", "volume": "123", "author": "Krishna", "year": "2017", "journal-title": "Int. J. Comput. Vis."},
   {"key": "10.1016/j.neucom.2021.10.121_b0155", "first-page": "4444", "article-title": "ConceptNet 5.5: An open multilingual graph of general knowledge, in", "author": "Speer", "year": "2017", "journal-title": "AAAI"},
   {"key": "10.1016/j.neucom.2021.10.121_b0160", "unstructured": "T.N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, in: ICLR, OpenReview.net, 2017."},
   {"issue": "1", "key": "10.1016/j.neucom.2021.10.121_b0165", "doi-asserted-by": "crossref", "first-page": "4", "DOI": "10.1007/s11263-016-0966-6", "article-title": "VQA: Visual question answering - www.visualqa.org", "volume": "123", "author": "Agrawal", "year": "2017", "journal-title": "Int. J. Comput. Vis."},
   {"key": "10.1016/j.neucom.2021.10.121_b0170", "first-page": "3567", "article-title": "Learning to answer questions from image using convolutional neural network, in", "author": "Ma", "year": "2016", "journal-title": "AAAI"},
   {"key": "10.1016/j.neucom.2021.10.121_b0175", "article-title": "DualNet: Domain-invariant network for visual question answering, in", "author": "Saito", "year": "2017", "journal-title": "ICME"},
   {"key": "10.1016/j.neucom.2021.10.121_b0180", "first-page": "289", "article-title": "Hierarchical question-image co-attention for visual question answering, in", "author": "Lu", "year": "2016", "journal-title": "NeurIPS"},
   {"key": "10.1016/j.neucom.2021.10.121_b0185", "doi-asserted-by": "crossref", "unstructured": "Z. Yu, J. Yu, J. Fan, D. Tao, Multi-modal factorized bilinear pooling with co-attention learning for visual question answering, in: ICCV, 2017, pp. 1839–1848.", "DOI": "10.1109/ICCV.2017.202"},
   {"key": "10.1016/j.neucom.2021.10.121_b0190", "first-page": "6281", "article-title": "Deep modular co-attention networks for visual question answering, in", "author": "Yu", "year": "2019", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0195", "first-page": "6087", "article-title": "Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering, in", "author": "Nguyen", "year": "2018", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0200", "first-page": "6679", "article-title": "Recursive visual attention in visual dialog, in", "author": "Niu", "year": "2019", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0205", "first-page": "10052", "article-title": "Iterative context-aware graph inference for visual dialog, in", "author": "Guo", "year": "2020", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0210", "first-page": "5754", "article-title": "Two can play this game: Visual dialog with discriminative question generation and answering, in", "author": "Jain", "year": "2018", "journal-title": "CVPR"},
   {"key": "10.1016/j.neucom.2021.10.121_b0215", "unstructured": "D. Massiceti, N. Siddharth, P.K. Dokania, P.H.S. Torr, FlipDial: A generative model for two-way visual dialogue, in: CVPR, 2018, pp. 6097–6105."},
   {"key": "10.1016/j.neucom.2021.10.121_b0220", "first-page": "1218", "article-title": "Ask no more: Deciding when to guess in referential visual dialogue, in", "author": "Shekhar", "year": "2018", "journal-title": "COLING"},
   {"key": "10.1016/j.neucom.2021.10.121_b0225", "doi-asserted-by": "crossref", "DOI": "10.1016/j.patcog.2021.107823", "article-title": "Guess which? Visual dialog with attentive memory network", "volume": "114", "author": "Zhao", "year": "2021", "journal-title": "Pattern Recognition"},
   {"key": "10.1016/j.neucom.2021.10.121_b0230", "first-page": "1042", "article-title": "Community regularization of visually-grounded dialog, in", "author": "Agarwal", "year": "2019", "journal-title": "AAMAS"},
   {"key": "10.1016/j.neucom.2021.10.121_b0235", "doi-asserted-by": "crossref", "unstructured": "S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, Z.G. Ives, DBpedia: A nucleus for a web of open data, in: The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11–15, 2007, vol. 4825, 2007, pp. 722–735.", "DOI": "10.1007/978-3-540-76298-0_52"},
   {"key": "10.1016/j.neucom.2021.10.121_b0240", "doi-asserted-by": "crossref", "first-page": "3027"},
   {"key": "10.1016/j.neucom.2021.10.121_b0245", "unstructured": "S. Bhakthavatsalam, C. Anastasiades, P. Clark, GenericsKB: A knowledge base of generic statements, arXiv preprint arXiv:2005.00660."},
   {"key": "10.1016/j.neucom.2021.10.121_b0250", "doi-asserted-by": "crossref", "unstructured": "S. Shah, A. Mishra, N. Yadati, P. Talukdar, KVQA: Knowledge-aware visual question answering, in: AAAI, 2019, pp. 8876–8884.", "DOI": "10.1609/aaai.v33i01.33018876"},
   {"key": "10.1016/j.neucom.2021.10.121_b0255", "doi-asserted-by": "crossref", "unstructured": "J. Pennington, R. Socher, C.D. Manning, GloVe: Global vectors for word representation, in: EMNLP, 2014, pp. 1532–1543.", "DOI": "10.3115/v1/D14-1162"},
   {"key": "10.1016/j.neucom.2021.10.121_b0260", "doi-asserted-by": "crossref", "unstructured": "B.Y. Lin, X. Chen, J. Chen, X. Ren, KagNet: Knowledge-aware graph networks for commonsense reasoning, in: EMNLP-IJCNLP, 2019, pp. 2829–2839."},
   {"key": "10.1016/j.neucom.2021.10.121_b0270", "doi-asserted-by": "crossref", "first-page": "229", "DOI": "10.1007/BF00992696", "article-title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "volume": "8", "author": "Williams", "year": "1992", "journal-title": "Mach. Learn."},
   {"key": "10.1016/j.neucom.2021.10.121_b0275", "doi-asserted-by": "crossref", "unstructured": "Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: ICML, vol. 382, 2009, pp. 41–48.", "DOI": "10.1145/1553374.1553380"},
   {"key": "10.1016/j.neucom.2021.10.121_b0280", "doi-asserted-by": "crossref", "first-page": "11125", "DOI": "10.1609/aaai.v34i07.6769", "article-title": "DualVD: An adaptive dual encoding model for deep visual understanding in visual dialogue, in", "author": "Jiang", "year": "2020", "journal-title": "AAAI"}
  ],
  "container-title": ["Neurocomputing"],
  "original-title": [],
  "language": "en",
  "link": [
   {"URL": "https://api.elsevier.com/content/article/PII:S0925231222001795?httpAccept=text/xml", "content-type": "text/xml", "content-version": "vor", "intended-application": "text-mining"},
   {"URL": "https://api.elsevier.com/content/article/PII:S0925231222001795?httpAccept=text/plain", "content-type": "text/plain", "content-version": "vor", "intended-application": "text-mining"}
  ],
  "deposited": {"date-parts": [[2023, 3, 6]], "date-time": "2023-03-06T05:14:02Z", "timestamp": 1678079642000},
  "score": 1,
  "resource": {"primary": {"URL": "https://linkinghub.elsevier.com/retrieve/pii/S0925231222001795"}},
  "subtitle": [],
  "short-title": [],
  "issued": {"date-parts": [[2022, 6]]},
  "references-count": 56,
  "alternative-id": ["S0925231222001795"],
  "URL": "http://dx.doi.org/10.1016/j.neucom.2021.10.121",
  "relation": {},
  "ISSN": ["0925-2312"],
  "issn-type": [{"value": "0925-2312", "type": "print"}],
  "subject": [],
  "published": {"date-parts": [[2022, 6]]},
  "assertion": [
   {"value": "You should know more: Learning external knowledge for visual dialogue", "name": "articletitle", "label": "Article Title"},
   {"value": "Neurocomputing", "name": "journaltitle", "label": "Journal Title"},
   {"value": "https://doi.org/10.1016/j.neucom.2021.10.121", "name": "articlelink", "label": "CrossRef DOI link to publisher maintained version"},
   {"value": "article", "name": "content_type", "label": "Content Type"},
   {"value": "© 2022 Elsevier B.V. All rights reserved.", "name": "copyright", "label": "Copyright"}
  ]
 }
}
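The object above is a Crossref REST API "work" record. As a minimal sketch (not part of the record itself), the Python snippet below shows how such a record can be fetched and its main fields read back using only the standard library; it assumes network access to the public api.crossref.org endpoint, and field availability can vary between records:

import json
import urllib.request

# DOI of the work described by the record above.
DOI = "10.1016/j.neucom.2021.10.121"
url = "https://api.crossref.org/works/" + DOI

# The endpoint returns the same top-level shape: status / message-type / message.
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

work = record["message"]
print(work["title"][0])              # article title
print(work["container-title"][0])    # journal: Neurocomputing
print(work["volume"], work["page"])  # 488, 54-65

# Each item in work["reference"] is a dict like the "_b0005" entries above.
for ref in work.get("reference", [])[:3]:
    print(ref.get("key"), ref.get("article-title") or ref.get("unstructured"))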