"CCF on Campus" Lecture Forum — Visiting 365体育官网
Time: Wednesday, September 14, 2022, 15:00-18:00
Venue: Tencent Meeting, ID 569-641-719
Agenda:
(1) 14:30-15:00 Online participants join the meeting.
(2) 15:00-16:00 Prof. Wanchun Dou, Department of Computer Science and Technology, Nanjing University. Talk title: Development Trends of the Metaverse and Applications of Intelligent Technology. Abstract: As a major application scenario of the future Web 3.0, the metaverse has become a hot topic across many fields, including IT. Starting from basic data concepts, and drawing on the progress of the national key R&D program project led by the speaker, the talk presents the speaker's views on current metaverse concepts and the state of their development. It then turns to the future application needs of the industrial internet, introduces the technologies and development trends of the industrial metaverse, and discusses the deployment of intelligent technology in further industrial scenarios.
(3) 16:10-17:10 Prof. Bin Jiang, College of Information Science and Engineering, Hunan University. Talk title: Natural-Language-Based Video Moment Retrieval. Abstract: Video Moment Retrieval (VMR) aims to retrieve a temporal moment that semantically corresponds to a language query from an untrimmed video. Connecting computer vision and natural language, VMR has drawn significant attention from researchers in both communities. Existing solutions can be roughly divided into two categories based on whether candidate moments are generated: moment-based approaches and clip-based approaches. Both frameworks have respective shortcomings: moment-based models suffer from heavy computation, while the performance of clip-based models is generally inferior to their moment-based counterparts. To this end, we design an intuitive and efficient Dual-Channel Localization Network (DCLN) to balance computational cost and retrieval performance. Meanwhile, despite their effectiveness, moment-based and clip-based methods mostly focus on aligning the query with single-level clip or moment features, ignoring the different granularities present in the video itself, such as clip, moment, or video, resulting in insufficient cross-modal interaction. To this end, we also propose a Temporal Localization Network with Hierarchical Contrastive Learning (HCLNet) for the VMR task. This talk will detail these two works and share our deeper insights.
(4) 17:10-18:00 Q&A: participants discuss the two talks with the speakers.
(5) 18:00 Meeting ends.
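The clip-based VMR formulation from talk (3) can be illustrated with a minimal sketch. This is not DCLN or HCLNet; it assumes precomputed clip embeddings and a query embedding, scores each clip against the query by cosine similarity, and returns the longest contiguous run of clips above a threshold as the retrieved moment. The function name, the thresholding heuristic, and the feature shapes are all illustrative assumptions.

```python
import numpy as np

def retrieve_moment(clip_feats, query_feat, tau=0.5):
    """Toy clip-based moment retrieval.

    clip_feats: (num_clips, dim) array of per-clip embeddings.
    query_feat: (dim,) array embedding the language query.
    Returns (start, end) indices of the retrieved moment, or None.
    """
    # Cosine similarity between the query and every clip embedding.
    clips = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    query = query_feat / np.linalg.norm(query_feat)
    scores = clips @ query  # shape: (num_clips,)

    # Keep clips scoring above tau; the moment is the longest
    # contiguous run of such clips.
    above = scores >= tau
    best_span, best_len, start = None, 0, None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        if start is not None and (not flag or i == len(above) - 1):
            end = i if flag else i - 1
            if end - start + 1 > best_len:
                best_len, best_span = end - start + 1, (start, end)
            start = None
    return best_span
```

Real clip-based models instead learn to predict span boundaries end-to-end, which is exactly where the computation/accuracy trade-off discussed in the abstract arises.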