A Summary of Recent SDA Contributions towards improving Entity Disambiguation, Linking and Prediction

SDA Research
Nov 27, 2020

This blog post was written by several authors in the SDA Team.

In 2020, the SDA team contributed several improvements to the state of the art on individual AI challenges on standard community datasets. We reported these results in several papers and blog posts; in this post, we want to collect them and provide pointers to further relevant information. A common theme is that we achieved better performance on various tasks by making better use of knowledge graph structures. Below is a table listing improvements on particular tasks and (very briefly) how we achieved them. For each entry, you can find more information and a link to the paper below.

Improvements of the state of the art on different tasks.

Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models (Paper)
Authors: Isaiah Onando Mulang’, Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart and Jens Lehmann
Conference: 29th ACM International Conference on Information and Knowledge Management (CIKM 2020)

Summary: The paper presents varying modulations of two major Entity Disambiguation models (a transformer model and an attention model dynamically augmented with context), enhanced with extra knowledge graph context (Figure 1). It achieves a new state of the art on the entity disambiguation task on standard datasets: an F1 score of 94.94 on the CoNLL-AIDA dataset (Wikipedia) and 92.35 on the Wikidata-Disamb dataset (Wikidata). The work experiments with both one-hop and two-hop triples as model inputs; the input representation allows the models to focus on relevant entity-specific signals from the KG, which improves model performance.

Figure 1: Overall approach. Φ refers to the ordered set of triples from the KG for a candidate entity, while Φmax ⊆ Φ is the maximal subset of triples that fits in the sequence length. For brevity: N → "National", H → "Highway", desc → "description".
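The truncation of Φ to Φmax can be pictured as a simple token-budget loop. The sketch below is illustrative (function and token conventions are assumptions, not the paper's actual preprocessing): triples are appended to the model input until the sequence length is exhausted.

```python
# Hypothetical sketch of serialising one-hop KG triples into a transformer
# input sequence, in the spirit of the Phi_max <= Phi truncation above.
def build_input(sentence, candidate_label, triples, max_tokens=64):
    """Append as many KG triples as fit within the token budget (Phi_max)."""
    tokens = sentence.split() + ["[SEP]"] + candidate_label.split()
    for subj, pred, obj in triples:          # Phi: ordered triples for the candidate
        triple_tokens = [";"] + f"{subj} {pred} {obj}".split()
        if len(tokens) + len(triple_tokens) > max_tokens:
            break                            # only Phi_max triples fit the sequence
        tokens += triple_tokens
    return " ".join(tokens)

example = build_input(
    "They drove along National Highway 8",
    "National Highway 8 (India)",
    [("NH8", "instanceOf", "National Highway"),
     ("NH8", "country", "India")],
)
```

The ordering of the triples matters here: because the loop stops at the budget, the most entity-specific triples should come first.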

PNEL: Pointer Network based End-To-End Entity Linking over Knowledge Graphs (Paper)
Authors: Debayan Banerjee, Debanjan Chaudhuri, Mohnish Dubey, Jens Lehmann
Conference: The 19th International Semantic Web Conference (ISWC 2020)

Summary: This work implements an entity linking model that takes a natural language sentence as input, vectorises its entity candidates, and lets a pointer network mark the correct entities in the candidate list (Figure 2).

Figure 2: The red and green dots represent entity candidate vectors for the given question. The green vectors are the correct entity vectors. Although they belong to the same entity, they are not the same dots because they come from different n-grams. At each time step, the Pointer Network points to one of the input candidate entities as the linked entity, or to the END symbol to indicate no choice.
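A single decoding step of such a pointer network can be sketched as dot-product attention over the candidate vectors, with the attention distribution itself serving as the output. This is a minimal toy version (the names and the attention form are illustrative, not PNEL's actual architecture):

```python
import numpy as np

# Minimal sketch of one pointer-network decoding step: the decoder state
# attends over candidate-entity vectors and "points" at the best match.
# An END symbol would simply be one extra row in the candidate matrix.
def pointer_step(decoder_state, candidate_vectors):
    """Return a probability distribution over the candidate entities."""
    scores = candidate_vectors @ decoder_state   # dot-product attention
    exp = np.exp(scores - scores.max())          # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
cands = rng.normal(size=(5, 8))                          # 5 candidate vectors
cands /= np.linalg.norm(cands, axis=1, keepdims=True)    # unit length
state = cands[2]                                         # state aligned with candidate 2
probs = pointer_step(state, cands)
picked = int(probs.argmax())                             # index the network points to
```

The key property is that the output vocabulary is the input itself, so the model can link against an arbitrary, question-specific candidate list.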

Knowledge Graph Entity Aliases in Attentive Neural Networks for Wikidata Entity Linking (Paper)
Authors: Isaiah Onando Mulang’, Kuldeep Singh, Akhilesh Vyas, Saeedeh Shekarpour, Maria Esther Vidal, Jens Lehmann and Sören Auer
Conference: 21st International Conference on Web Information Systems Engineering (WISE 2020)

Summary: The paper aims to identify and link entities in a web document to the Wikidata KG. The proposed approach, Arjun, sets a new state of the art on the T-REx Wikidata dataset with an F-score of 0.713, compared to the previous baseline's 0.579. The work empirically illustrates that extra entity-specific context improves the EL task. Such information (readily available in the KG, e.g. entity aliases) is derived into a local, enriched background KG and used at the interchange between the mention detection and entity disambiguation stages of entity linking. This enables the attentive neural network to effectively capture better signals for the linking process, especially in challenging KGs like Wikidata. The overall approach is depicted in Figure 3.

Figure 3: Proposed Approach Arjun: Arjun consists of three tasks. First task identifies the surface forms using an attentive neural network. Second task induces background knowledge from the Local KG and associates each surface form with potential entity candidates. Third task links the potential entity candidates to the correct entity labels.
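The second task, associating surface forms with candidates via the alias-enriched local KG, can be illustrated with a simple alias index. This is a hypothetical sketch of the idea (the data and function names are illustrative), showing why aliases widen candidate recall:

```python
# Illustrative sketch: indexing entity labels *and* aliases (as in Wikidata's
# "also known as" field) so that more surface forms match candidate entities.
def build_alias_index(entities):
    """Map every label and alias (lower-cased) to the IDs of matching entities."""
    index = {}
    for ent_id, label, aliases in entities:
        for name in [label, *aliases]:
            index.setdefault(name.lower(), set()).add(ent_id)
    return index

def candidates(surface_form, index):
    """Look up candidate entity IDs for a detected surface form."""
    return sorted(index.get(surface_form.lower(), set()))

local_kg = [
    ("Q30", "United States of America", ["USA", "United States"]),
    ("Q148", "People's Republic of China", ["China", "PRC"]),
]
index = build_alias_index(local_kg)
```

Without the alias entries, a mention such as "USA" would miss Q30 entirely, which is exactly the gap the enriched background KG closes.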

Message Passing for Hyper-Relational Knowledge Graphs (Paper)

Authors: Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, Jens Lehmann
Conference: Empirical Methods in Natural Language Processing (EMNLP 2020)

Summary: The paper introduces a GNN encoder architecture that aggregates additional attributes over KG edges, e.g., entity-relation qualifier pairs in Wikidata statements (Figure 4). The aggregated entity and relation representations are then decoded through the transformer for a downstream link prediction task.

Figure 4: The mechanism by which STARE encodes a hyper-relational fact from Fig. 1.B. Qualifier pairs are passed through a composition function φq, summed, and transformed by Wq. The resulting vector is then merged via γ and φr with the relation and object vector, respectively. Finally, node Q937 aggregates messages from this and other hyper-relational edges.
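The qualifier aggregation in the caption can be sketched numerically. In this toy version the composition φq is simplified to an element-wise product and the merge γ to a convex combination; STARE itself uses learned composition functions, so treat every choice here as an illustrative assumption:

```python
import numpy as np

# Toy sketch of qualifier aggregation: compose each (qualifier relation,
# qualifier entity) pair, sum the results, transform by W_q, then merge the
# aggregate back into the main relation vector.
def encode_relation(rel_vec, qualifiers, W_q, alpha=0.8):
    """Return a qualifier-aware relation representation (gamma as a weighted sum)."""
    if not qualifiers:
        return rel_vec                              # no qualifiers: plain relation
    q = sum(qr * qe for qr, qe in qualifiers)       # phi_q as element-wise product
    q = W_q @ q                                     # linear transform W_q
    return alpha * rel_vec + (1 - alpha) * q        # gamma as convex combination

d = 4
rng = np.random.default_rng(1)
rel = rng.normal(size=d)                            # main relation vector
quals = [(rng.normal(size=d), rng.normal(size=d))]  # one qualifier pair
W_q = np.eye(d)
merged = encode_relation(rel, quals, W_q)
```

The design point this illustrates: a triple-only encoder would use `rel` unchanged, whereas here the same relation gets a different representation depending on its qualifiers.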

MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs (Paper)
Authors: Afshin Sadeghi, Damien Graux, Hamed Shariat Yazdi and Jens Lehmann
Conference: European Conference on Artificial Intelligence (ECAI 2020)

Summary: In this paper, we propose the Multiple Distance Embedding model (MDE), a knowledge graph embedding model that combines several latent distance-based terms (Figure 5). MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns. MDE_adv, a neural network version of the model, can map nonlinear relations between the embedding vectors and the expected output of the score function. The model performs competitively with state-of-the-art embedding models on several benchmark datasets.

Figure 5: Geometric illustration of the translation terms considered in MDE.
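The idea of combining several translation terms can be sketched as a weighted sum of distances. The specific terms, weights, and limit value below are illustrative simplifications, not MDE's exact score function:

```python
import numpy as np

# Toy sketch of a score combining multiple latent distance terms, in the
# spirit of MDE. Each term captures a different relational pattern; the
# weights w and limit psi are illustrative placeholders.
def mde_like_score(h, r, t, w=(1.0, 1.0, 1.0), psi=1.0):
    s1 = np.linalg.norm(h + r - t)   # TransE-style forward translation
    s2 = np.linalg.norm(t + r - h)   # inverse-direction term
    s3 = np.linalg.norm(h + t - r)   # symmetric term
    return (w[0] * s1 + w[1] * s2 + w[2] * s3) / sum(w) - psi

h = np.array([0.2, 0.1])
r = np.array([0.3, -0.1])
t = h + r                            # a "true" triple under forward translation
true_score = mde_like_score(h, r, t)
corrupt_score = mde_like_score(h, r, t + 5.0)   # corrupted tail entity
```

Because each term on its own can only model some relation patterns, the combination lets one embedding satisfy several patterns at once; low scores indicate plausible triples.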

Temporal Knowledge Graph Completion based on Time Series Gaussian Embedding (Paper)
Authors: Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi and Jens Lehmann
Conference: The 19th International Semantic Web Conference (ISWC 2020)

Summary: This paper introduces ATiSE, a temporal KGE model that incorporates time information into KG representations by fitting the temporal evolution of entity/relation representations over time as additive time series. Considering the uncertainty during the temporal evolution of KG representations, ATiSE maps the representations of temporal KGs into the space of multi-dimensional Gaussian distributions. The covariance of an entity/relation representation represents its randomness component (Figure 6). Experimental results demonstrate that our method significantly outperforms the state-of-the-art methods on link prediction over four TKG benchmarks.

Figure 6: Illustration of the means and (diagonal) variances of entities and relations in a temporal Gaussian embedding space. The labels indicate their positions. From the representations, we might infer that Bill Clinton was presidentOf USA in 1998 and Barack Obama was presidentOf USA in 2010.
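The two ingredients above (an additive time-series mean and a Gaussian uncertainty) can be sketched in a few lines. The parameterisation below is an illustrative assumption; ATiSE's actual decomposition and KL-divergence-based score are more elaborate:

```python
import numpy as np

# Illustrative sketch of an additive-time-series entity mean: a static base
# plus a linear trend plus a seasonal component. A diagonal variance (not
# shown in the mean) captures the randomness of the temporal evolution.
def entity_mean(base, trend, amp, freq, t):
    """Mean of the entity's Gaussian embedding at time t."""
    return base + trend * t + amp * np.sin(2 * np.pi * freq * t)

def kl_diag_gauss(mu0, var0, mu1, var1):
    """KL divergence between diagonal Gaussians, usable as a (dis)similarity."""
    return 0.5 * np.sum(var0 / var1 + (mu1 - mu0) ** 2 / var1
                        - 1.0 + np.log(var1 / var0))

base = np.array([0.5, -0.2])
trend = np.array([0.01, 0.0])
amp = np.array([0.1, 0.1])
mean_now = entity_mean(base, trend, amp, freq=0.25, t=0.0)
var = np.ones(2) * 0.1
```

Scoring a temporal triple then amounts to comparing Gaussians (entity vs. relation-shifted entity) at the fact's timestamp, so uncertainty directly influences plausibility.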



SDA Research

The Smart Data Analytics (SDA) research group at the University of Bonn works on #semantics, #machinelearning and #bigdata.