Cognitive-inspired Graph Redundancy Networks for Multi-source Information Fusion | Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
DOI: 10.1145/3583780.3614815

Cognitive-inspired Graph Redundancy Networks for Multi-source Information Fusion

Published: 21 October 2023
  Abstract

    Recent technological developments bring not only an increasing amount of information but also multiple information sources for graph representation learning. With the success of Graph Neural Networks (GNNs), there have been growing attempts to learn representations of multi-source information by leveraging its graph structures. However, existing graph methods mostly combine multi-source information with different contribution scores and over-simplify the graph structures based on prior knowledge, and therefore fail to unify complex and conflicting multi-source information. Multisensory Processing theory in cognitive neuroscience reveals the human mechanism for learning multi-source information by identifying redundancy and complementarity. Inspired by this, we propose the Graph Redundancy Network (GRN), which: 1) learns a suitable representation space that maximizes multi-source interactions; 2) encodes redundant and complementary information according to the Graph Intersection and Difference of their graph structures; and 3) further reinforces and explores the redundant and complementary information through low-pass and high-pass graph filters. The empirical study shows that GRN outperforms existing methods on various tasks.
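
The page carries no code, so the following is a minimal illustrative sketch of the structural operations named in the abstract: the Graph Intersection and Graph Difference of two source graphs, followed by low-pass and high-pass graph filtering. It uses plain NumPy; the function names and the concrete filter forms (a GCN-style normalized adjacency as the low-pass filter, identity minus that normalized adjacency as the high-pass filter) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def normalize(adj):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def graph_intersection(adj_a, adj_b):
    # Edges present in both source graphs: the redundant structure.
    return ((adj_a > 0) & (adj_b > 0)).astype(float)

def graph_difference(adj_a, adj_b):
    # Edges present in source A but not in source B: the complementary structure.
    return ((adj_a > 0) & ~(adj_b > 0)).astype(float)

def low_pass(adj, x):
    # Low-pass filtering smooths features over neighbors, reinforcing redundant signals.
    return normalize(adj) @ x

def high_pass(adj, x):
    # High-pass filtering (I - A_hat) emphasizes how a node differs from its neighbors.
    return (np.eye(adj.shape[0]) - normalize(adj)) @ x

# Toy usage: two 4-node source graphs (hypothetical data) and 2-d node features.
adj_a = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
adj_b = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
x = np.random.randn(4, 2)

z_redundant     = low_pass(graph_intersection(adj_a, adj_b), x)
z_complementary = high_pass(graph_difference(adj_a, adj_b), x)
```

In the full model these filtered representations would presumably come from learnable GNN layers and be fused for downstream tasks; the sketch only makes the intersection/difference and low-/high-pass intuition concrete.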



      Published In

      CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
      October 2023
      5508 pages
      ISBN: 9798400701245
      DOI: 10.1145/3583780
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. graph neural networks
      2. multi-source information fusion

      Qualifiers

      • Research-article

      Conference

      CIKM '23

      Acceptance Rates

      Overall Acceptance Rate 1,861 of 8,427 submissions, 22%

