Learning to Extract Structured Entities Using Language Models (2024)

Haolun Wu1,2*, Ye Yuan1,2*, Liana Mikaelyan3, Alexander Meulemans4,
Xue Liu1,2, James Hensman3, Bhaskar Mitra3
1McGill University, 2Mila - Quebec AI Institute, 3Microsoft Research, 4ETH Zürich.
{haolun.wu, ye.yuan3}@mail.mcgill.ca,
xueliu@cs.mcgill.ca, ameulema@ethz.ch,
{lmikaelyan, jameshensman, bhaskar.mitra}@microsoft.com
*Equal contribution with random order.

Abstract

Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior works typically represent information extraction as triplet-centric and use classical metrics such as precision and recall for evaluation. We reformulate the task to be entity-centric, enabling the use of diverse metrics that can provide more insights from various perspectives. We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric, designed to appropriately assess model performance. We then introduce a new Multi-stage Structured Entity Extraction (MuSEE) model that harnesses the power of LMs for enhanced effectiveness and efficiency by decomposing the extraction task into multiple stages. Quantitative and human side-by-side evaluations confirm that our model outperforms baselines, offering promising directions for future advancements in structured entity extraction. Our source code and datasets are available at https://github.com/microsoft/Structured-Entity-Extraction.

1 Introduction

Information extraction refers to a broad family of challenging natural language processing (NLP) tasks that aim to extract structured information from unstructured text (Cardie, 1997; Eikvil, 1999; Chang et al., 2006; Sarawagi et al., 2008; Grishman, 2015; Niklaus et al., 2018; Nasar et al., 2018; Wang et al., 2018; Martinez-Rodriguez et al., 2020). Examples of information extraction tasks include: (i) named-entity recognition (Li et al., 2020), (ii) relation extraction (Kumar, 2017), (iii) event extraction (Li et al., 2022), and (iv) coreference resolution (Stylianou and Vlahavas, 2021; Liu et al., 2023), as well as higher-order challenges, such as automated knowledge base (KB) and knowledge graph (KG) construction from text (Weikum and Theobald, 2010; Ye et al., 2022; Zhong et al., 2023). The latter may in turn necessitate solving a combination of the former, more fundamental extraction tasks, as well as require other capabilities like entity linking (Shen et al., 2014, 2021; Oliveira et al., 2021; Sevgili et al., 2022).

Figure 1: An illustration of structured entity extraction.

Previous formulations and evaluations of information extraction have predominantly centered around the extraction of ⟨subject, relation, object⟩ triplets. The conventional metrics used to evaluate triplet-level extraction, such as recall and precision, however, might be insufficient to represent a model’s understanding of the text from a holistic perspective. For example, consider a paragraph that mentions ten entities, where one entity is associated with 10 relations as the subject, while each of the other nine entities is associated with only 1 relation as the subject. Imagine a system that accurately predicts all ten triplets for the heavily linked entity but overlooks the other entities. Technically, this system achieves a recall of more than 50% (i.e., 10 out of 19) and a precision of 100%. However, when compared to another system that recognizes one correct triplet for each of the ten entities and achieves the same recall and precision, it becomes evident that both systems, despite showing identical evaluation scores, offer significantly different insights into text comprehension. Moreover, implementing entity-level normalization within traditional metrics is not always easy due to challenges like coreference resolution (Stylianou and Vlahavas, 2021; Liu et al., 2023), particularly in scenarios where multiple entities share the same name or lack primary identifiers such as names. Therefore, we advocate for alternatives that can offer insights from diverse perspectives.

In this work, we propose Structured Entity Extraction, an entity-centric formulation of (strict) information extraction, which facilitates diverse evaluations. We define a structured entity as a named entity with associated properties and relationships with other named entities. Fig. 1 shows an illustration of structured entity extraction. Given a text description, we aim to first identify the two entities “Bill Gates” and “Microsoft”. Then, given a predefined schema of all possible entity types and property keys (referred to as a strict setting in our scenario), the model is expected to predict the exact types, property keys, and property values of all identified entities in the text, as well as the relations between these two entities (i.e., Bill Gates co-founded Microsoft). Such extracted structured entities may be further linked and merged to automatically construct KBs from text corpora. Along with this, we propose a new evaluation metric, Approximate Entity Set OverlaP (AESOP), with numerous variants for measuring the similarity between the predicted set of entities and the ground-truth set, which is more flexible in accommodating different levels of normalization (see the default AESOP in Sec. 3 and other variants in Appendix A).

In recent years, deep learning has garnered significant interest in the realm of information extraction tasks. Techniques based on deep learning for entity extraction have consistently outperformed traditional methods that rely on features and kernel functions, showcasing superior capability in feature extraction and overall accuracy (Yang et al., 2022). Building upon these developments, our study employs language models (LMs) to solve structured entity extraction. We introduce a Multi-stage Structured Entity Extraction (MuSEE) model, a novel architecture that enhances both effectiveness and efficiency. Our model decomposes the entire information extraction task into multiple stages, enabling parallel predictions within each stage for enhanced focus and accuracy. Additionally, we reduce the number of tokens needed for generation, which further improves efficiency for both training and inference. Human side-by-side evaluations show results consistent with our AESOP metric, which not only further confirms our model’s effectiveness but also validates the AESOP metric.

In summary, our main contributions are:

  • We introduce an entity-centric formulation of the information extraction task within a strict setting, where the schema for all possible entity types and property keys is predefined.

  • We propose an evaluation metric, Approximate Entity Set OverlaP (AESOP), which offers more flexibility and is tailored for assessing structured entity extraction.

  • We propose a new model leveraging the capabilities of LMs, improving the effectiveness and efficiency for structured entity extraction.

2 Related work

In this section, we first review the formulation of existing information extraction tasks and the metrics used, followed by a discussion of current methods for solving information extraction tasks.

Information extraction tasks are generally divided into open and closed settings. Open information extraction (OIE), first proposed by Banko et al. (2007), is designed to derive relation triplets from unstructured text by directly utilizing entities and relationships from the sentences themselves, without adherence to a fixed schema. Conversely, closed information extraction (CIE) focuses on extracting factual data from text that fits into a predetermined set of relations or entities, as detailed by Josifoski et al. (2022). While open and closed information extraction vary, both seek to convert unstructured text into structured knowledge, which is typically represented as triplets. These triplets are useful for outlining relationships but offer limited insight at the entity level. It is often assumed that two triplets refer to the same entity if their subjects match. However, this assumption does not always hold. Additionally, the evaluation of these tasks relies on precision, recall, and F1 scores at the triplet level. As previously mentioned, evaluating solely on triplet metrics can yield misleading insights regarding entity understanding. Thus, it is essential to introduce a metric that assesses understanding at the entity level through entity-level normalization. In this work, we introduce the AESOP metric, which is elaborated in Sec. 3.2.

Various strategies have been employed in existing research to address the challenges of information extraction. TextRunner (Yates et al., 2007) initially spearheaded the development of unsupervised methods. Recent progress has been made with the use of manual annotations and Transformer-based models (Vasilkovsky et al., 2022; Kolluru et al., 2020a). Sequence generation approaches, like IMoJIE (Kolluru et al., 2020b) and GEN2OIE (Kolluru et al., 2022), have refined open information extraction by converting it into a sequence-to-sequence task (Cui et al., 2018). GenIE (Josifoski et al., 2022) focuses on integrating named-entity recognition, relation extraction, and entity linking within a closed setting where a knowledge base is provided. Recent work, PIVOINE (Lu et al., 2023), focuses on improving the language model’s generality to various (or unseen) instructions for open information extraction, whereas our focus is on designing a new model architecture for improving the effectiveness and efficiency of language models’ information extraction in a strict setting.

3 Structured Entity Extraction

In this section, we first describe the structured entity extraction formulation, followed by the Approximate Entity Set OverlaP (AESOP) metric for evaluation. We would like to emphasize that structured entity extraction is not an entirely new task, but rather a novel entity-centric formulation of information extraction.

3.1 Task Formulation

Given a document $d$, the goal of structured entity extraction is to generate a set of structured entities $\mathcal{E} = \{e_1, e_2, \ldots, e_n\}$ that are mentioned in the document text. Each structured entity $e$ is a dictionary of property keys $p \in \mathcal{P}$ and property values $v \in \mathcal{V}$, and we let $v_{e,p}$ be the value of property $p$ of entity $e$. In this work we consider only text properties, and hence $\mathcal{V}$ is the set of all possible text property values. If a property of an entity is common knowledge but does not appear in the input document, it is not considered in the structured entity extraction. Depending on the particular situation, the property values may themselves be other entities, although this is not always the case.

The goal then becomes to learn a function $f: d \to \mathcal{E}' = \{e'_1, e'_2, \ldots, e'_m\}$, and we expect the predicted set $\mathcal{E}'$ to be as close as possible to the target set $\mathcal{E}$, where the closeness is measured by some similarity metric $\Psi(\mathcal{E}', \mathcal{E})$. Note that the predicted set of entities $\mathcal{E}'$ and the ground-truth set $\mathcal{E}$ may differ in their cardinality, and our definition of $\Psi$ should allow for the case when $|\mathcal{E}'| \neq |\mathcal{E}|$. Finally, both $\mathcal{E}'$ and $\mathcal{E}$ are unordered sets, and hence we also want $\Psi$ to be order-invariant over $\mathcal{E}'$ and $\mathcal{E}$. As we do not constrain $f$ to produce the entities in any strict order, it is reasonable for $\Psi$ to assume the most optimistic assignment of $\mathcal{E}'$ with respect to $\mathcal{E}$. We denote by $\vec{E}'$ and $\vec{E}$ some arbitrary but fixed orderings over the items in the prediction set $\mathcal{E}'$ and the ground-truth set $\mathcal{E}$ to allow indexing.
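To make the formulation concrete, the following is a minimal sketch of the data structures it assumes; the names `Entity` and `extract_entities` and the example values are illustrative, not part of any released implementation.

```python
from typing import Dict, List

# A structured entity is a dictionary from property keys to text property values.
# By convention it carries a "name" and, in the strict setting, a "type" drawn
# from the predefined schema.
Entity = Dict[str, str]

def extract_entities(document: str) -> List[Entity]:
    """The learned function f: d -> E', mapping a document to a set of entities."""
    raise NotImplementedError  # realized by a model such as MuSEE (Sec. 4)

# A possible target set E for the example of Fig. 1 (property keys are illustrative).
target = [
    {"name": "Bill Gates", "type": "human", "employer": "Microsoft"},
    {"name": "Microsoft", "type": "organization", "founded by": "Bill Gates"},
]
```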

3.2 Approximate Entity Set OverlaP (AESOP) Metric

We propose a formal definition of the Approximate Entity Set OverlaP (AESOP) metric, which focuses on the entity level and is more flexible in accommodating different levels of normalization:

$$
\Psi(\mathcal{E}', \mathcal{E}) = \frac{1}{\mu} \bigoplus_{i,j}^{m,n} \mathbf{F}_{i,j} \cdot \psi_{\text{ent}}(\vec{E}'_{i}, \vec{E}_{j}), \qquad (1)
$$

which is composed of two phases: (i) optimal entity assignment, which obtains the assignment matrix $\mathbf{F}$ indicating which entity in $\mathcal{E}'$ is matched with which one in $\mathcal{E}$, and (ii) pairwise entity comparison through $\psi_{\text{ent}}(\vec{E}'_{i}, \vec{E}_{j})$, a similarity measure defined between any two arbitrary entities $e'$ and $e$. We describe the details of these two phases in this section. We implement $\Psi$ as a linear sum $\bigoplus$ over individual pairwise entity comparisons $\psi_{\text{ent}}$, and $\mu$ is the maximum of the sizes of the target set and the predicted set, i.e., $\mu = \max\{m, n\}$.

Phase 1: Optimal Entity Assignment.

The optimal entity assignment is directly derived from a matrix $\mathbf{F} \in \mathbb{R}^{m \times n}$, which is obtained by solving an assignment problem between $\mathcal{E}'$ and $\mathcal{E}$. Here, $\mathbf{F}$ is a binary matrix where each element $\mathbf{F}_{i,j}$ is 1 if the entity $\vec{E}'_{i}$ is matched with the entity $\vec{E}_{j}$, and 0 otherwise. Before formulating the assignment problem, we first define a similarity matrix $\mathbf{S} \in \mathbb{R}^{m \times n}$ where each element $\mathbf{S}_{i,j}$ quantifies the similarity between the $i$-th entity in $\vec{E}'$ and the $j$-th entity in $\vec{E}$ for the assignment phase. For practical implementation, we consider the union set of property keys from both the $i$-th entity in $\vec{E}'$ and the $j$-th entity in $\vec{E}$ for each of these entities. When a property key is absent, its corresponding property value is set to an empty string. The similarity is then computed as a weighted average of the Jaccard index (Murphy, 1996) between the lists of tokens of the property values associated with the same property key in both entities. The Jaccard index involving empty strings is defined as zero in our case. We assign a weight of 0.9 to the entity name, while all other properties collectively receive a total weight of 0.1. This ensures that the entity name holds the highest importance for matching, while still acknowledging the contributions of other properties. It is worth noting that the weight values 0.9 and 0.1 are not universal standards; one can tailor these weights to specific requirements. The optimal assignment matrix $\mathbf{F}$ is then found by maximizing the following equation:

$$
\mathbf{F} = \operatorname*{arg\,max}_{\mathbf{F}} \sum_{i=1}^{m} \sum_{j=1}^{n} \mathbf{F}_{i,j} \cdot \mathbf{S}_{i,j}, \qquad (2)
$$

subject to the following four constraints to ensure a one-to-one assignment between entities in the prediction set and the ground-truth set: (i) $\mathbf{F}_{i,j} \in \{0, 1\}$; (ii) $\sum_{i=1}^{m} \mathbf{F}_{i,j} \leq 1, \forall j \in \{1, 2, \ldots, n\}$; (iii) $\sum_{j=1}^{n} \mathbf{F}_{i,j} \leq 1, \forall i \in \{1, 2, \ldots, m\}$; (iv) $\sum_{i=1}^{m} \sum_{j=1}^{n} \mathbf{F}_{i,j} = \min\{m, n\}$. One can draw an analogy between maximizing Eq. 2 and finding the optimal flow in the Earth Mover’s Distance (EMD). In EMD, the optimal flow is the one that minimizes the total “cost” of moving the dirt, while in our case, the optimal assignment is the one that maximizes the total “similarity” in the best possible way.
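Because constraints (i)-(iv) describe a standard rectangular assignment problem, Eq. 2 can be solved exactly with the Hungarian algorithm. Below is a minimal sketch in Python, assuming the weighting described above (0.9 on the entity name, 0.1 shared equally by the remaining properties); function names are illustrative rather than the reference implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def jaccard(a: str, b: str) -> float:
    """Jaccard index over token lists; defined as zero when either value is empty."""
    ta, tb = set(a.split()), set(b.split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def similarity_matrix(pred, gold, name_weight=0.9):
    """S[i, j]: weighted Jaccard similarity between predicted entity i and gold entity j."""
    S = np.zeros((len(pred), len(gold)))
    for i, ep in enumerate(pred):
        for j, eg in enumerate(gold):
            # union of property keys; missing keys behave like empty strings
            keys = (set(ep) | set(eg)) - {"name"}
            rest = np.mean([jaccard(ep.get(k, ""), eg.get(k, "")) for k in keys]) if keys else 0.0
            S[i, j] = name_weight * jaccard(ep.get("name", ""), eg.get("name", "")) \
                      + (1 - name_weight) * rest
    return S

def optimal_assignment(S):
    """Binary matrix F maximizing Eq. 2 under constraints (i)-(iv)."""
    rows, cols = linear_sum_assignment(-S)   # negate: SciPy minimizes
    F = np.zeros(S.shape, dtype=int)
    F[rows, cols] = 1
    return F
```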

Phase 2: Pairwise Entity Comparison.

After obtaining the optimal entity assignment, we focus on the pairwise entity comparison. We define $\psi_{\text{ent}}(\vec{E}'_{i}, \vec{E}_{j})$ as a similarity metric between any two arbitrary entities $e'$ and $e$ from $\mathcal{E}'$ and $\mathcal{E}$.

The pairwise entity similarity function $\psi_{\text{ent}}$ is defined as a linear average $\bigotimes$ over individual pairwise property similarities $\psi_{\text{prop}}$ as follows:

$$
\psi_{\text{ent}}(e', e) = \bigotimes_{p \in \mathcal{P}} \psi_{\text{prop}}(v_{e',p}, v_{e,p}), \qquad (3)
$$

where $\psi_{\text{prop}}(v_{e',p}, v_{e,p})$ is defined as the Jaccard index between the lists of tokens of the predicted and ground-truth values for the corresponding property. We define the score as zero for missing properties.

It should be noted that while both $\mathbf{S}$ and $\psi_{\text{ent}}$ are used to calculate similarities between pairs of entities, they are not identical. During the entity assignment phase, it is more important to ensure that the entity names are aligned, while during the pairwise entity comparison it is more acceptable to treat all properties equally without differentiation. The separation of the two similarity measures allows us to tailor our metric more precisely to the specific requirements of each phase of the process. The definition of similarity and different variants of our proposed AESOP metric are elaborated in Appendix A. We discuss the relationship between traditional metrics, such as precision and recall, and AESOP in Appendix B.
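Putting the two phases together, a minimal sketch of the default AESOP score could look as follows, reusing `jaccard`, `similarity_matrix`, and `optimal_assignment` from the sketch above; it illustrates the metric's structure rather than reproducing the reference implementation.

```python
import numpy as np

def psi_ent(e_pred, e_gold):
    """Eq. 3: unweighted average of per-property Jaccard scores; missing properties score zero."""
    keys = set(e_pred) | set(e_gold)
    return float(np.mean([jaccard(e_pred.get(k, ""), e_gold.get(k, "")) for k in keys]))

def aesop(pred, gold):
    """Eq. 1 with a linear-sum aggregation and mu = max(m, n) (AESOP-MultiProp-Max)."""
    if not pred or not gold:
        return 0.0
    F = optimal_assignment(similarity_matrix(pred, gold))
    matched = zip(*np.nonzero(F))
    return sum(psi_ent(pred[i], gold[j]) for i, j in matched) / max(len(pred), len(gold))
```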

4 Multi-stage Structured Entity Extraction using Language Models

Figure 2: An illustration of the MuSEE model pipeline.

In this section, we elaborate on the methodology for structured entity extraction using LMs. We introduce a novel model architecture leveraging LMs, MuSEE, for Multi-stage Structured Entity Extraction. MuSEE is built on an encoder-decoder architecture, whose pipeline incorporates two pivotal enhancements to improve effectiveness and efficiency: (i) reducing output tokens by introducing additional special tokens, each of which can replace multiple tokens, and (ii) multi-stage parallel generation, which lets the model focus on one sub-task at each stage, where all predictions within a stage can be processed in parallel.

Reducing output tokens.

Our model condenses the output by translating entity types and property keys into unique, predefined tokens. Specifically, for the entity type, we add the prefix “ent_type_”, while for each property key, we add the prefix “pk_”. By doing so, the type and each property key of an entity are represented by a single token, which significantly reduces the number of output tokens during generation, thus improving efficiency. For instance, if the original entity type is “artificial object”, which is decomposed into 4 tokens (i.e., “_art”, “if”, “ical”, “_object”) using the T5 tokenizer, we now only need one special token, “ent_type_artificial_object”, to represent the entire sequence. All of these special tokens can be derived from the predefined schema before model training.
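As an illustration, the schema-derived special tokens could be registered with a Hugging Face T5 tokenizer roughly as follows; the schema lists shown here are made-up examples rather than the datasets' actual schemas, and the exact registration procedure is an assumption.

```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

entity_types = ["artificial object", "human", "organization"]    # illustrative schema
property_keys = ["occupation", "founded by", "employer"]         # illustrative schema

special_tokens = (
    [f"ent_type_{t.replace(' ', '_')}" for t in entity_types]
    + [f"pk_{k.replace(' ', '_')}" for k in property_keys]
    + ["pred_ent_names", "pred_type_and_property", "pred_val"]   # stage prompts (see below)
)
tokenizer.add_tokens(special_tokens, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))  # add embedding rows for the new tokens
```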

Multi-stage parallel generation.

In addition to reducing the number of generated tokens, MuSEE further decomposes the generation process into three stages: (i) identifying all entities, (ii) determining entity types and property keys, and (iii) predicting property values. To demonstrate this pipeline more clearly, we use the same text shown in Fig. 1 as an example to walk through the process of structured entity extraction as follows:

Stage 1: Entity Identification.

Stage 2: Type and property key prediction.

Stage 3: Property value prediction.

Among the three stages depicted, pred_ent_names, pred_type_and_property, and pred_val are special tokens that indicate the task. For each model prediction, the first “⇒” indicates inputting the text into the encoder of MuSEE, while the second “⇒” means inputting the encoded outputs into the decoder. All tokens in blue are the prompt tokens input into the decoder, which do not need to be predicted, while all tokens in bold are the model predictions. For stage 1, we emphasize that MuSEE outputs a unique identifier for each entity in the given text. Taking the example in Fig. 1, the first stage outputs “Bill Gates” only, rather than both “Bill Gates” and “Gates”. This requires the model to implicitly learn how to do coreference resolution, namely learning that “Bill Gates” and “Gates” refer to the same entity. Therefore, our approach uses neither surface forms, as the outputs of the first stage are unique identifiers, nor entity titles followed by entity linking. For stage 2, the MuSEE model predicts the entity types and property keys, which are all represented by special tokens. Hence, the prediction can be made by sampling the token with the highest probability over the special tokens for entity types and property keys only, rather than over all tokens. Notice that we do not need to predict the values for “type” and “name” in stage 3, since the type can be directly derived from the “ent_type_” special token itself, and the name is obtained during stage 1. The tokens in the brackets “{..}” are also part of the prompt tokens and are obtained in different ways during training and inference. During training, these inputs are obtained from the ground truth due to the teacher forcing technique (Raffel et al., 2023). During inference, they are obtained from the output predictions of the previous stages. The full training loss is a sum of three cross-entropy losses, one for each stage. An illustration of our model’s pipeline is shown in Fig. 2. More implementation details are elaborated in Appendix C.
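A hedged sketch of the resulting inference flow is given below; `musee_decode` stands in for a (batched) call to the MuSEE decoder on the cached encoder output, and all names are illustrative rather than the released implementation.

```python
def musee_decode(encoder_output, prompt: str) -> list[str]:
    """Decode with a task-specific prompt; in stage 2, sampling is restricted
    to the special tokens for entity types and property keys."""
    raise NotImplementedError  # placeholder for the actual decoder call

def extract(document: str, encoder) -> list[dict]:
    enc = encoder(document)                           # the text is encoded only once
    entities = musee_decode(enc, "pred_ent_names")    # Stage 1: unique entity identifiers
    results = []
    for name in entities:
        # Stage 2: one special token for the type, then one per property key
        ent_type, *prop_keys = musee_decode(enc, f"pred_type_and_property {name}")
        entity = {"name": name, "type": ent_type.removeprefix("ent_type_")}
        for pk in prop_keys:                          # Stage 3: batched in practice
            entity[pk.removeprefix("pk_")] = musee_decode(
                enc, f"pred_val {name} {ent_type} {pk}")[0]
        results.append(entity)
    return results
```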

Benefits for Training and Inference.

MuSEE’s unique design benefits both training and inference. In particular, each stage in MuSEE is finely tuned to concentrate on a specific facet of the extraction process, thereby enhancing the overall effectiveness. Most importantly, all predictions within the same stage can be processed in batch, thus largely improving efficiency. The adoption of a teacher forcing strategy enables parallel training even across different stages, further enhancing training efficiency. During inference, the model’s approach of breaking down long sequences into shorter segments significantly reduces the generation time. It is also worth mentioning that each text in the above three stages needs to be encoded only once by MuSEE’s encoder, and the encoded output is reused across the different stages. This streamlined approach ensures a concise and clear delineation of entity information, facilitating the transformation of unstructured text into a manageable and structured format.

5 Experiments

In this section, we describe the datasets used in our experiment, followed by the discussion of baseline methods and training details.

5.1 Data

In adapting structured entity extraction, we repurpose the NYT (Riedel et al., 2010), CoNLL04 (Roth and Yih, 2004), and REBEL (Huguet Cabot and Navigli, 2021) datasets, which were originally developed for relation extraction. For NYT and CoNLL04, since each entity in these two datasets has a predefined type, we simply reformat them to our entity-centric formulation by treating the subjects as entities, relations as property keys, and objects as property values. REBEL connects entities identified in Wikipedia abstracts as hyperlinks, along with dates and values, to entities in Wikidata and extracts the relations among them. For entities without types in the REBEL dataset, we categorize their types as “unknown”. Additionally, we introduce a new dataset, named Wikidata-based. The Wikidata-based dataset is crafted using an approach similar to REBEL but with two primary distinctions: (i) property values are not necessarily entities; (ii) we simplify the entity types by consolidating them into broader categories based on the Wikidata taxonomy graph, resulting in less specific types. The process for developing the Wikidata-based dataset is detailed in Appendix D. The predefined schemas for NYT, CoNLL04, and REBEL use all entity types and property keys from these datasets. The details of the predefined schema for the Wikidata-based dataset are provided in Appendix D. Comprehensive statistics for all four datasets are available in Appendix E.

5.2 Baseline

We benchmark our methodology against two distinct classes of baseline approaches. The first category considers adaptations from general seq2seq task models: (i) LM-JSON: this approach involves fine-tuning pre-trained language models; the input is a textual description, and the output is a JSON string containing all entities. The second category includes techniques designed for different information extraction tasks, which we adapt to address our challenge: (ii) GEN2OIE (Kolluru et al., 2022), which employs a two-stage generative model that initially outputs relations for each sentence, followed by all extractions in the subsequent stage; (iii) IMoJIE (Kolluru et al., 2020b), an extension of CopyAttention (Cui et al., 2018), which sequentially generates new extractions based on previously extracted tuples; (iv) GenIE (Josifoski et al., 2022), an end-to-end autoregressive generative model using a bi-level constrained generation strategy to produce triplets that align with a predefined schema for relations. GenIE is crafted for closed information extraction, so it includes an entity linking step. However, in our strict setting, there is only a schema of entity types and relations. Therefore, we repurpose GenIE for our setting by maintaining the constrained generation strategy and omitting the entity linking step. We do not compare our method with non-generative models, primarily due to the task differences.

5.3 Training

We follow existing studies (Huguet Cabot and Navigli, 2021) in using the encoder-decoder architecture in our experiments. We choose the T5 (Raffel et al., 2023) series of LMs and employ the pre-trained T5-Base (T5-B) and T5-Large (T5-L) as the base models underlying every method discussed in Section 5.2 and our proposed MuSEE. LM-JSON and MuSEE are trained with Low-Rank Adaptation (Hu et al., 2021), where $r = 16$ and $\alpha = 32$. For GEN2OIE, IMoJIE, and GenIE, we follow all training details of their original implementations. For all methods, we employ a linear warm-up and the Adam optimizer (Kingma and Ba, 2017), tuning the learning rates between 3e-4 and 1e-4, and weight decays between 1e-2 and 0. All experiments are run on an NVIDIA A100 GPU.
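A hedged sketch of this fine-tuning setup using the `transformers` and `peft` libraries is shown below; the LoRA rank and alpha follow the values above, while the target modules and warm-up length are assumptions for illustration.

```python
import torch
from transformers import T5ForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = T5ForConditionalGeneration.from_pretrained("t5-base")   # or "t5-large"
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],     # assumed: adapt the attention query/value projections
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=1e-2)
# linear warm-up over an assumed 500 steps, then a constant learning rate
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=500)
```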

It is worth mentioning that MuSEE can also be built upon a decoder-only architecture by managing the KV cache and modifying the position encodings (Xiao et al., 2024), though this requires additional management and is not the main focus of this study.

6 Results

In this section, we show the results for both quantitative and human side-by-side evaluation.

Table 1: Overall comparison of effectiveness (AESOP / Precision / Recall / F1) on the four datasets and inference efficiency (samples per second).

| Model | REBEL (AESOP / P / R / F1) | NYT (AESOP / P / R / F1) | CoNLL04 (AESOP / P / R / F1) | Wikidata-based (AESOP / P / R / F1) | Samples per sec |
|---|---|---|---|---|---|
| LM-JSON (T5-B) | 41.91 / 38.33 / 51.29 / 43.87 | 66.33 / 73.10 / 52.66 / 61.22 | 68.80 / 61.63 / 48.04 / 53.99 | 36.98 / 43.95 / 29.82 / 35.53 | 19.08 |
| GEN2OIE (T5-B) | 44.52 / 35.23 / 40.28 / 37.56 | 67.04 / 72.08 / 53.02 / 61.14 | 68.39 / 62.35 / 42.20 / 50.26 | 37.07 / 40.87 / 28.37 / 33.55 | 28.21 |
| IMoJIE (T5-B) | 46.11 / 34.10 / 48.61 / 40.08 | 63.86 / 72.28 / 48.99 / 58.40 | 63.68 / 52.00 / 42.62 / 46.85 | 37.08 / 41.61 / 28.23 / 33.64 | 5.36 |
| GenIE (T5-B) | 48.82 / 57.55 / 38.70 / 46.28 | 79.41 / 87.68 / 73.24 / 79.81 | 74.74 / 72.49 / 59.39 / 65.29 | 40.60 / 50.27 / 29.75 / 37.38 | 10.19 |
| MuSEE (T5-B) | 55.24 / 56.93 / 42.31 / 48.54 | 81.33 / 88.29 / 72.21 / 79.44 | 78.38 / 73.18 / 60.28 / 66.01 | 46.95 / 53.27 / 29.33 / 37.99 | 52.93 |
| LM-JSON (T5-L) | 45.92 / 39.49 / 40.82 / 40.14 | 67.73 / 73.38 / 53.22 / 61.69 | 68.88 / 61.50 / 47.77 / 53.77 | 38.19 / 43.24 / 31.63 / 36.54 | 11.24 |
| GEN2OIE (T5-L) | 46.70 / 37.28 / 41.12 / 39.09 | 68.27 / 73.97 / 53.32 / 61.88 | 68.52 / 62.76 / 43.31 / 51.16 | 38.25 / 41.23 / 28.54 / 33.77 | 18.56 |
| IMoJIE (T5-L) | 48.13 / 38.55 / 49.73 / 43.43 | 65.72 / 73.46 / 50.03 / 59.52 | 67.31 / 53.00 / 43.44 / 47.75 | 38.18 / 41.74 / 30.10 / 34.98 | 3.73 |
| GenIE (T5-L) | 50.06 / 58.00 / 42.56 / 49.09 | 79.64 / 84.82 / 75.69 / 80.00 | 72.92 / 77.75 / 55.64 / 64.86 | 43.50 / 54.05 / 30.98 / 39.38 | 5.09 |
| MuSEE (T5-L) | 57.39 / 57.11 / 42.89 / 48.96 | 82.67 / 89.43 / 73.32 / 80.60 | 79.87 / 74.89 / 60.72 / 67.08 | 50.94 / 53.72 / 31.12 / 39.24 | 33.96 |

6.1 Quantitative Evaluation

Effectiveness comparison.

The overall effectiveness comparison is shown in Table 1. We report traditional metrics, including precision, recall, and F1 score, in addition to our proposed AESOP metric. From the results, the MuSEE model consistently outperforms other baselines in terms of AESOP across all datasets. For instance, MuSEE achieves the highest AESOP scores on REBEL with 55.24 (T5-B) and 57.39 (T5-L), on NYT with 81.33 (T5-B) and 82.67 (T5-L), on CoNLL04 with 78.38 (T5-B) and 79.87 (T5-L), and on the Wikidata-based dataset with 46.95 (T5-B) and 50.94 (T5-L). These scores significantly surpass those of the competing models, indicating MuSEE’s stronger entity extraction capability. The other three traditional metrics further underscore the efficacy of the MuSEE model. For instance, on CoNLL04, MuSEE (T5-B) achieves a precision of 73.18, a recall of 60.28, and an F1 score of 66.01, surpassing all the other baselines. Similar improvements are observed on REBEL, NYT, and the Wikidata-based dataset. Nevertheless, while MuSEE consistently excels in the AESOP metric, it does not invariably surpass the baselines across all the traditional metrics of precision, recall, and F1 score. Specifically, within the REBEL dataset, GenIE (T5-B) achieves the highest precision at 57.55, and LM-JSON (T5-B) records the best recall at 51.29. Furthermore, on the NYT dataset, GenIE (T5-B) outperforms other models in F1 score. These variances highlight the unique insights provided by our AESOP metric, which benefits from our entity-centric formulation. We expand on this discussion in Section 6.2.

As discussed in Sec. 4, our MuSEE model is centered around two main enhancements: reducing output tokens and multi-stage parallel generation. By simplifying output sequences, MuSEE tackles the challenge of managing the long sequences that often hinder baseline models such as LM-JSON, GenIE, and IMoJIE, thus reducing errors associated with sequence length. Additionally, by breaking down the extraction process into three focused stages, MuSEE efficiently processes each aspect of entity extraction, leveraging contextual clues for more accurate predictions. In contrast, GEN2OIE’s two-stage approach, though similar, falls short because it extracts relations first and then attempts to pair entities with these relations. However, a single relation may exist among different pairs of entities, which can lead to low performance with this approach. A supplemental ablation study is provided in Appendix F.

Efficiency comparison.

As shown in the last column of Table 1, we provide a comparison of inference efficiency, measured as the number of samples the model can process per second. The MuSEE model outperforms all baseline models in terms of efficiency, processing 52.93 samples per second with T5-B and 33.96 samples per second with T5-L. It shows a 10x speed-up compared to IMoJIE, and a 5x speed-up compared to the strongest baseline, GenIE. This high efficiency can be attributed to MuSEE’s architecture, specifically its multi-stage parallel generation. By breaking down the task into parallelizable stages, MuSEE minimizes computational overhead, allowing for faster processing of each sample. The benefit of this design is also supported by the observation that the other multi-stage model, GEN2OIE, shows the second highest efficiency.

To better illustrate our model’s strength, we show scatter plots in Fig. 3 comparing the effectiveness and efficiency of all models with various backbones. We use the Wikidata-based dataset, and effectiveness is measured by AESOP. As depicted, our model outperforms all baselines by a large margin. This advantage makes MuSEE particularly suitable for applications requiring rapid processing of large volumes of data, such as processing web-scale datasets, or integration into interactive systems where response time is critical.

Figure 3: Effectiveness (AESOP) versus inference efficiency on the Wikidata-based dataset for all models and backbones.
Figure 4: Grounding check on the Wikidata-based dataset, comparing AESOP and F1 scores on the original and perturbed data.

Grounding check.

As the family of T5 models is pre-trained on a Wikipedia corpus (Raffel et al., 2023), we are curious whether the models extract information from the given texts, or whether they leverage their prior knowledge to generate information that cannot be grounded in the given description. We use T5-L as the backbone in this experiment. We develop a simple approach to conduct this grounding check by perturbing the original test dataset with the following strategy. We first systematically extract and categorize all entities and their respective properties, based on their entity types. Then, we generate a perturbed version of the dataset by randomly modifying entity properties based on the categorization we built. We introduce controlled perturbations into the dataset by selecting alternative property values from the same category but different entities, and subsequently replacing the original values in the texts. The results of our grounding study on the Wikidata-based dataset, as illustrated in Fig. 4, reveal clear findings regarding the performance of the various models under the AESOP and F1 metrics. Our model, MuSEE, shows the smallest performance gap between the perturbed and original data compared to its counterparts, suggesting its stronger capability to understand and extract structured information from the given texts.
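A minimal sketch of this perturbation strategy is given below, assuming entities are dictionaries as in Sec. 3.1; the helper names and the exact replacement policy are illustrative assumptions.

```python
import random
from collections import defaultdict

def build_value_pools(entities):
    """Categorise observed property values by (entity type, property key)."""
    pools = defaultdict(set)
    for e in entities:
        for key, value in e.items():
            if key not in ("name", "type"):
                pools[(e["type"], key)].add(value)
    return pools

def perturb(text, entity, pools, rng=random):
    """Swap each grounded property value for an alternative from the same category
    (same type and property key, different entity) and mirror the change in the text."""
    entity = dict(entity)
    for key, value in list(entity.items()):
        if key in ("name", "type") or value not in text:
            continue
        alternatives = [v for v in pools[(entity["type"], key)] if v != value]
        if alternatives:
            new_value = rng.choice(alternatives)
            text = text.replace(value, new_value)
            entity[key] = new_value
    return text, entity
```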

6.2 Human Evaluation

To further analyze our approach, we randomly select 400 test passages from the Wikidata-based dataset and generate outputs with our model MuSEE and the strongest baseline GenIE. Human evaluators are presented with a passage and two randomly flipped extracted sets of entities with properties. Evaluators are then prompted to choose the output they prefer, or to express no preference, based on three criteria: Completeness, Correctness, and Hallucinations (details shown in Appendix G). Among all 400 passages, the output of MuSEE is preferred 61.75% of the time on completeness, 59.32% on correctness, and 57.13% on hallucinations. For a complete comparison, we also report the percentage of samples in which MuSEE’s results are preferred by the quantitative metrics when compared with GenIE’s, as summarized in Table 2. As shown, our proposed AESOP metric aligns more closely with human judgment than the traditional metrics. These observations provide additional confirmation of the quantitative results evaluated using the AESOP metric, namely that our model significantly outperforms existing baselines, and they illustrate the inadequacy of traditional metrics due to their oversimplified assessment of extraction quality. A case study of the human evaluation is shown in Appendix G.

Table 2: Percentage of samples in which MuSEE’s output is preferred over GenIE’s. The first three columns are from human evaluation; the last four are from quantitative metrics.

| | Complete. | Correct. | Halluc. | AESOP | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| MuSEE preferred | 61.75 | 59.32 | 57.13 | 61.28 | 45.33 | 37.24 | 40.57 |

7 Discussion and Conclusion

We introduce Structured Entity Extraction (SEE), an entity-centric formulation of information extraction in a strict setting. We then propose the Approximate Entity Set OverlaP (AESOP) metric, which focuses on the entity level and is more flexible in accommodating different levels of normalization. Building on these, we propose a novel model architecture, MuSEE, that enhances both effectiveness and efficiency. Both quantitative evaluation and human side-by-side evaluation confirm that our model outperforms the baselines.

An additional advantage of our formulation is its potential to address coreference resolution challenges, particularly in scenarios where multiple entities share the same name or lack primary identifiers such as names. Models trained with the prior triplet-centric formulation cannot solve these challenges. However, due to a scarcity of relevant data, we were unable to assess this aspect in our current study.

8 Limitations

A limitation of our work lies in the assumption that each property possesses a single value. However, there are instances where a property’s value might consist of a set, such as varying “names”. Adapting our method to accommodate these scenarios presents a promising research direction.

9 Acknowledgement

We would like to thank all reviewers for their professional review work, constructive comments, and valuable suggestions on our manuscript. This work is supported by the MSR-Mila Research Grant. We thank Compute Canada for the computing resources.

References

  • Banko et al. (2007) Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI’07, pages 2670–2676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
  • Cardie (1997) Claire Cardie. 1997. Empirical methods in information extraction. AI Magazine, 18(4):65–65.
  • Chang et al. (2006) Chia-Hui Chang, Mohammed Kayed, Moheb R. Girgis, and Khaled F. Shaalan. 2006. A survey of web information extraction systems. IEEE Transactions on Knowledge and Data Engineering, 18(10):1411–1428.
  • Cui et al. (2018) Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction.
  • Eikvil (1999) Line Eikvil. 1999. Information extraction from World Wide Web – a survey. Technical Report 945, Norwegian Computing Center.
  • Elsahar et al. (2018) Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
  • Grishman (2015) Ralph Grishman. 2015. Information extraction. IEEE Intelligent Systems, 30(5):8–15.
  • Hu et al. (2021) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models.
  • Huguet Cabot and Navigli (2021) Pere-Lluís Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2370–2381, Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Josifoski et al. (2022) Martin Josifoski, Nicola De Cao, Maxime Peyrard, Fabio Petroni, and Robert West. 2022. GenIE: Generative information extraction.
  • Kingma and Ba (2017) Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.
  • Kolluru et al. (2020a) Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, Mausam, and Soumen Chakrabarti. 2020a. OpenIE6: Iterative grid labeling and coordination analysis for open information extraction.
  • Kolluru et al. (2020b) Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam, and Soumen Chakrabarti. 2020b. IMoJIE: Iterative memory-based joint open information extraction.
  • Kolluru et al. (2022) Keshav Kolluru, Muqeeth Mohammed, Shubham Mittal, Soumen Chakrabarti, and Mausam. 2022. Alignment-augmented consistent translation for multilingual open information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2502–2517, Dublin, Ireland. Association for Computational Linguistics.
  • Kumar (2017) Shantanu Kumar. 2017. A survey of deep learning methods for relation extraction. arXiv preprint arXiv:1705.03645.
  • Li et al. (2020) Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, 34(1):50–70.
  • Li et al. (2022) Qian Li, Jianxin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, et al. 2022. A survey on deep learning event extraction: Approaches and applications. IEEE Transactions on Neural Networks and Learning Systems.
  • Liu et al. (2023) Ruicheng Liu, Rui Mao, Anh Tuan Luu, and Erik Cambria. 2023. A brief survey on recent advances in coreference resolution. Artificial Intelligence Review, pages 1–43.
  • Lu et al. (2023) Keming Lu, Xiaoman Pan, Kaiqiang Song, Hongming Zhang, Dong Yu, and Jianshu Chen. 2023. PIVOINE: Instruction tuning for open-world information extraction. arXiv preprint arXiv:2305.14898.
  • Martinez-Rodriguez et al. (2020) Jose L. Martinez-Rodriguez, Aidan Hogan, and Ivan Lopez-Arevalo. 2020. Information extraction meets the Semantic Web: a survey. Semantic Web, 11(2):255–335.
  • Murphy (1996) Allan H. Murphy. 1996. The Finley affair: A signal event in the history of forecast verification. Weather and Forecasting, 11(1):3–20.
  • Nasar et al. (2018) Zara Nasar, Syed Waqar Jaffry, and Muhammad Kamran Malik. 2018. Information extraction from scientific articles: a survey. Scientometrics, 117:1931–1990.
  • Niklaus et al. (2018) Christina Niklaus, Matthias Cetto, André Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3866–3878.
  • Oliveira et al. (2021) Italo L. Oliveira, Renato Fileto, René Speck, Luís P. F. Garcia, Diego Moussallem, and Jens Lehmann. 2021. Towards holistic entity linking: Survey and directions. Information Systems, 95:101624.
  • Raffel et al. (2023) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text transformer.
  • Riedel et al. (2010) Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163, Berlin, Heidelberg. Springer Berlin Heidelberg.
  • Roth and Yih (2004) Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 1–8, Boston, Massachusetts, USA. Association for Computational Linguistics.
  • Sarawagi et al. (2008) Sunita Sarawagi et al. 2008. Information extraction. Foundations and Trends® in Databases, 1(3):261–377.
  • Sevgili et al. (2022) Özge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, and Chris Biemann. 2022. Neural entity linking: A survey of models based on deep learning. Semantic Web, 13(3):527–570.
  • Shen et al. (2021) Wei Shen, Yuhan Li, Yinan Liu, Jiawei Han, Jianyong Wang, and Xiaojie Yuan. 2021. Entity linking meets deep learning: Techniques and solutions. IEEE Transactions on Knowledge and Data Engineering.
  • Shen et al. (2014) Wei Shen, Jianyong Wang, and Jiawei Han. 2014. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443–460.
  • Stylianou and Vlahavas (2021) Nikolaos Stylianou and Ioannis Vlahavas. 2021. A neural entity coreference resolution review. Expert Systems with Applications, 168:114466.
  • Trisedya et al. (2019) Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural relation extraction for knowledge base enrichment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 229–240, Florence, Italy. Association for Computational Linguistics.
  • Vasilkovsky et al. (2022) Michael Vasilkovsky, Anton Alekseev, Valentin Malykh, Ilya Shenbin, Elena Tutubalina, Dmitriy Salikhov, Mikhail Stepnov, Andrey Chertok, and Sergey Nikolenko. 2022. DetIE: Multilingual open information extraction inspired by object detection.
  • Wang et al. (2018) Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, et al. 2018. Clinical information extraction applications: a literature review. Journal of Biomedical Informatics, 77:34–49.
  • Weikum and Theobald (2010) Gerhard Weikum and Martin Theobald. 2010. From information to knowledge: harvesting entities and relationships from web sources. In Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 65–76.
  • Xiao et al. (2024) Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks.
  • Yang et al. (2022) Yang Yang, Zhilei Wu, Yuexiang Yang, Shuangshuang Lian, Fengjie Guo, and Zhiwei Wang. 2022. A survey of information extraction based on deep learning. Applied Sciences, 12(19):9691.
  • Yates et al. (2007) Alexander Yates, Michele Banko, Matthew Broadhead, Michael Cafarella, Oren Etzioni, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 25–26, Rochester, New York, USA. Association for Computational Linguistics.
  • Ye et al. (2022) Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative knowledge graph construction: A review. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1–17.
  • Zhong et al. (2023) Lingfeng Zhong, Jia Wu, Qian Li, Hao Peng, and Xindong Wu. 2023. A comprehensive survey on automatic knowledge graph construction. arXiv preprint arXiv:2302.05019.

Appendix A Variants of AESOP

The AESOP metric detailed in Section 3.2 matches entities by considering all properties and normalizes by the maximum of the sizes of the target set and the predicted set. We denote it as AESOP-MultiProp-Max. In this section, we describe further variants of the AESOP metric, categorized based on two criteria: the definition of entity similarity used for entity assignment, and the normalization approach when computing the final metric value between $\mathcal{E}'$ and $\mathcal{E}$. These variants allow for flexibility and adaptability to different scenarios and requirements in structured entity extraction.

Variants Based on Entity Assignment.

The first category of variants is based on the criteria for matching entities between the prediction $\mathcal{E}'$ and the ground truth $\mathcal{E}$. We define three variants; a short code sketch after the list illustrates how the assignment similarity differs across them:

  • AESOP-ExactName: Two entities are considered a match if their names are identical, disregarding case sensitivity. This variant is defined as $\mathbf{S}_{i,j} = 1$ if $v_{e'_i,\text{name}} = v_{e_j,\text{name}}$, and 0 otherwise.

  • AESOP-ApproxName: Entities are matched based on the similarity of their “name” property values. This similarity can be measured using a text similarity metric, such as the Jaccard index.

  • AESOP-MultiProp: Entities are matched based on the similarity of all their properties, with a much higher weight given to the “entity name” property due to its higher importance.
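A hedged sketch of how the assignment similarity $\mathbf{S}_{i,j}$ differs across these three variants is shown below, reusing `jaccard` from the sketch in Sec. 3.2; the 0.9/0.1 weighting for MultiProp follows the default described there, and the function name is illustrative.

```python
def assignment_similarity(e_pred, e_gold, variant="MultiProp"):
    """S[i, j] under the three entity-assignment variants of AESOP."""
    name_p, name_g = e_pred.get("name", ""), e_gold.get("name", "")
    if variant == "ExactName":
        return 1.0 if name_p.lower() == name_g.lower() else 0.0
    if variant == "ApproxName":
        return jaccard(name_p, name_g)
    # MultiProp: all properties, with a much higher weight on the entity name
    keys = (set(e_pred) | set(e_gold)) - {"name"}
    rest = sum(jaccard(e_pred.get(k, ""), e_gold.get(k, "")) for k in keys) / max(len(keys), 1)
    return 0.9 * jaccard(name_p, name_g) + 0.1 * rest
```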

Variants Based on Normalization.

The second category of variants involves different normalization approaches for computing the final metric value through Eq. 1:

  • AESOP-Precision: The denominator is the size of the predicted set $\mathcal{E}'$, i.e., $\mu = m$.

  • AESOP-Recall: The denominator is the size of the target set $\mathcal{E}$, i.e., $\mu = n$.

  • AESOP-Max: The denominator is the maximum of the sizes of the target set and the predicted set, i.e., $\mu = \max\{m, n\}$.

Given these choices, we can obtain $3 \times 3 = 9$ variants of the AESOP metric. To avoid excessive complexity, we regard AESOP-MultiProp-Max as the default. For clarity, we illustrate the two phases of computing the AESOP metric and its variants in Fig. 5. We also show that precision and recall are specific instances of the AESOP metric in Appendix B.

Figure 5: An illustration of the two phases of computing the AESOP metric and its variants.

Appendix B Relationship between Precision/Recall and AESOP

In this section, we show that the traditional metrics, precision and recall, are specific instances of the AESOP metric. To calculate precision and recall, we use the following equations on the number of triplets, where each triplet contains a subject, relation, and object.

$$
\text{precision} = \frac{\text{\# of correctly predicted triplets}}{\text{\# of triplets in the prediction}}, \qquad (4)
$$
$$
\text{recall} = \frac{\text{\# of correctly predicted triplets}}{\text{\# of triplets in the target}}. \qquad (5)
$$

In the framework of the AESOP metric, precision and recall are effectively equivalent to treating each triplet as an entity, where the subject serves as the entity name, and the relation and object form a pair of property key and value. For optimal entity assignment (phase 1), precision and recall use the AESOP-MultiProp variant but match entities based on the similarity of all their properties with the same weight. For pairwise entity comparison (phase 2), $\psi_{\text{ent}}(e', e)$ (Eq. 3) can be defined as 1 if $v' = v$ and 0 otherwise, where $v'$ and $v$ are the only property values in $e'$ and $e$, respectively. For Eq. 1, the $\bigoplus$ aggregation can be defined as a linear sum, which essentially counts how many triplets are correctly predicted. If $\mu$ in Eq. 1 is set to the number of triplets in the prediction, this corresponds to the calculation of precision. Similarly, when $\mu$ equals the number of triplets in the target, it corresponds to the calculation of recall.

Appendix C Implementation Details of MuSEE

To implement the approach of our MuSEE model, one may extend an existing encoder-decoder model with additional modules and processing steps designed for entity and property prediction. Specifically, given a predefined schema, we first add all necessary special tokens to customize the tokenizer, as detailed before. The generation process involves three main stages: entity prediction, property key prediction, and property value prediction.

  1. Entity Prediction: We first encode the input sequence using the encoder to obtain hidden states for the entire sequence. We generate the prompt “pred_ent_names” and transform it into token ids using the tokenizer. This prompt, repeated for each sample in the batch, is concatenated with the encoded input sequence and processed through the decoder to produce entity name predictions as a sequence of tokens.

  2. Property Key Prediction: For each predicted entity name, we generate prompts in the format “pred_type_and_property [entity_name]”. These prompts are tokenized, padded to a fixed length, and concatenated with the encoded input sequence. The concatenated sequences are then passed through the decoder to predict the entity type and property keys as a sequence of special tokens. Rather than training a separate classifier head, we sample the token with the highest probability over all special tokens for entity types and property keys.

  3. Property Value Prediction: For each predicted entity and its corresponding property keys, we create prompts in the format “pred_val [entity_name] [entity_type] [property_key]”. These prompts are tokenized, padded, and concatenated with the encoded input sequence. The concatenated sequences are processed by the decoder to generate property value predictions.

The training loss is the sum of the cross-entropy losses from the three stages, and the stages can be trained in parallel, as we elaborate in Section 4. A minimal sketch of the resulting generation loop is shown below.
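The following is a hypothetical sketch of this three-stage loop using a Hugging Face T5 checkpoint; it is not the released implementation. In particular, it re-encodes the prompt together with the input text at each stage instead of reusing cached encoder states, the special-token names and the “;”-separated entity-name format are illustrative assumptions, and the model is assumed to have already been fine-tuned with the stage-wise losses described above.

# Minimal sketch of MuSEE-style three-stage generation (not the released code).
# Assumes a T5 encoder-decoder from `transformers` and a tokenizer extended with
# hypothetical schema-specific special tokens.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
tokenizer.add_special_tokens({
    # Hypothetical special tokens for entity types and property keys.
    "additional_special_tokens": ["<et_human>", "<pk_given_name>", "<pk_country>"]
})
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.resize_token_embeddings(len(tokenizer))  # account for the added special tokens

def run_stage(prompt: str, text: str) -> str:
    # Simplification: each stage re-encodes "<prompt> <input text>" instead of
    # concatenating prompt tokens with cached encoder states as described above.
    inputs = tokenizer(prompt + " " + text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=False)

text = "Elizabeth or Elizaveta Petrovna reigned as Empress of Russia from 1741 ..."

# Stage 1: predict all entity names in one pass (assume a ";"-separated output).
entity_names = [n.strip() for n in run_stage("pred_ent_names", text).split(";") if n.strip()]

entities = {}
for name in entity_names:  # the per-entity stages are independent and parallelizable
    # Stage 2: predict the entity type and property keys as special tokens.
    type_and_keys = run_stage(f"pred_type_and_property {name}", text)
    # Stage 3: predict one value per predicted property key (illustrative key shown).
    value = run_stage(f"pred_val {name} <et_human> <pk_given_name>", text)
    entities[name] = {"type_and_keys": type_and_keys, "<pk_given_name>": value}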

Appendix D Details of Wikidata-based Dataset

We build a new Wikidata-based dataset. This dataset is inspired by methodologies employed in previous works such as Wiki-NRE (Trisedya et al., 2019), T-REx (Elsahar et al., 2018), and REBEL (Huguet Cabot and Navigli, 2021), leveraging the extensive information available on Wikipedia and Wikidata. The primary objective is to establish systematic alignments between the textual content of Wikipedia articles, the hyperlinks embedded within these articles, and their associated entities and properties as cataloged in Wikidata. The procedure is divided into three steps.

(i) Parsing Articles: We commence by parsing English Wikipedia articles from the dump file (version 20230720, the most recent available during the development of our work), focusing specifically on text descriptions and omitting disambiguation and redirect pages. The text from each selected article is purified of Wiki markup to extract plain text, and hyperlinks within these articles are identified as associated entities. Subsequently, the text descriptions are truncated to the initial ten sentences, with entity selection confined to those referenced within this truncated text. This approach ensures a more concentrated and manageable dataset.

(ii) Mapping Wikidata IDs to English Labels: Concurrently, we process the Wikidata dump file (same version) to establish a mapping (termed the id-label map) between Wikidata IDs and their corresponding English labels. This mapping allows for efficient translation of Wikidata IDs to their English equivalents.

(iii) Interconnecting Wikipedia Articles with Wikidata Properties: For each associated entity within the text descriptions, we utilize Wikidata's API to ascertain its properties and retrieve their respective Wikidata IDs. The previously established id-label map is then employed to convert these property IDs into English labels. Each entity's type is determined using the value associated with instance of (P31). Given the highly specific nature of these entity types (e.g., small city (Q18466176), town (Q3957), big city (Q1549591)), we implement a recursive merging process to generalize them into broader categories, referencing the subclass of (P279) property. Specifically, we first construct a hierarchical taxonomy graph, where each node represents an entity type, annotated with a count reflecting the total number of entities it encompasses. Second, a priority queue is utilized, in which nodes are sorted in descending order based on their entity count. We then determine whether the top $n$ nodes represent an ideal set of entity types, ensuring the resulting types are not overly specific. Two key metrics are considered for this evaluation: the percentage of total entities covered by the top $n$ nodes, and the skewness of the distribution of entity counts within the top $n$ nodes. If the distribution is skewed, we dequeue the top node and enqueue its child nodes back into the priority queue. This iterative process allows for a dynamic exploration of the taxonomy, ensuring that the most representative nodes are always at the forefront (a simplified sketch of this merging procedure is given below). Finally, our Wikidata-based dataset is refined to contain the top-10 (i.e., $n = 10$) most prevalent entity types according to our hierarchical taxonomy graph and the top-10 property keys in terms of occurrence frequency, excluding entity name and type. The 10 entity types are talk, system, spatio-temporal entity, product, natural object, human, geographical feature, corporate body, concrete object, and artificial object. The 10 property keys are capital, family name, place of death, part of, location, country, given name, languages spoken, written or signed, occupation, and named after.
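As an illustration of the merging step, the following is a simplified sketch under assumed data structures (a per-type entity-count map and a subclass-of adjacency map); the coverage and skewness thresholds are hypothetical, not the values used to build the dataset.

# Simplified sketch of the recursive type merging over the taxonomy graph
# (assumed data structures and thresholds; not the exact dataset-building code).
import heapq

def select_top_types(counts, children, n=10, coverage=0.9, skew_ratio=20):
    """counts: {type_id: #entities}; children: {type_id: [subclass type_ids]}."""
    heap = [(-c, t) for t, c in counts.items()]   # max-heap via negated counts
    heapq.heapify(heap)
    total = sum(counts.values())
    while True:
        top = heapq.nsmallest(n, heap)            # the n most frequent types
        top_counts = [-c for c, _ in top]
        covered = sum(top_counts) / total
        skewed = max(top_counts) > skew_ratio * max(min(top_counts), 1)
        if covered >= coverage and not skewed:
            return [t for _, t in top]
        neg_c, t = heapq.heappop(heap)            # split the most frequent type
        kids = children.get(t, [])
        if not kids:                              # cannot generalize further; stop
            heapq.heappush(heap, (neg_c, t))
            return [t for _, t in heapq.nsmallest(n, heap)]
        for child in kids:
            heapq.heappush(heap, (-counts.get(child, 0), child))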

Appendix E Statistics of Datasets

NYT is under the CC-BY-SA license. CoNLL04 is under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. REBEL is under the Creative Commons Attribution 4.0 International License. The dataset statistics presented in Table 3 compare the NYT, CoNLL04, REBEL, and Wikidata-based datasets. All datasets feature a minimum of one entity per sample, but they differ in their average and maximum number of entities, with the Wikidata-based dataset showing a higher mean of 3.84 entities. They also differ in the maximum number of entities, where REBEL has a maximum of 65. Property counts also vary, with REBEL having a slightly higher average number of properties per entity at 3.40.

Table 3: Statistics of the four datasets.

Statistics            | NYT    | CoNLL04 | REBEL     | Wikidata-based
# of Entity Min       | 1      | 1       | 1         | 1
# of Entity Mean      | 1.25   | 1.22    | 2.37      | 3.84
# of Entity Max       | 12     | 5       | 65        | 20
# of Property Min     | 3      | 3       | 2         | 2
# of Property Mean    | 3.19   | 3.02    | 3.40      | 2.80
# of Property Max     | 6      | 4       | 17        | 8
# of Training Samples | 56,196 | 922     | 2,000,000 | 23,477
# of Testing Samples  | 5,000  | 288     | 5,000     | 4,947

Appendix F Ablation Study

The ablation study conducted on the MuSEE model, using the Wikidata-based dataset, evaluates the model's two core components: the introduction of special tokens and the multi-stage parallel generation. By comparing the performance of the full MuSEE model against its ablated version, where only the special-tokens feature is retained, we aim to isolate the individual contributions of these design choices to the model's overall efficacy. The ablated version simplifies the output format by eliminating punctuation such as commas, double quotes, and curly brackets, and by converting all entity types and property keys into special tokens; this corresponds to the reducing-output-tokens design discussed in Sec. 4. Results from the ablation study, shown in Table 4, reveal significant performance disparities between the complete MuSEE model and its ablated counterpart across both model sizes (T5-B and T5-L) and all evaluation metrics. The full MuSEE model markedly outperforms the ablated version with notable improvements, underscoring the critical role of multi-stage parallel generation in enhancing the model's ability to accurately and comprehensively extract entity-related information. These findings highlight the synergistic effect of the MuSEE model's design elements, demonstrating that both reducing output tokens and multi-stage parallel generation are pivotal for achieving optimal performance in structured entity extraction.

Table 4: Ablation study of MuSEE on the Wikidata-based dataset.

Model                  | AESOP-ExactName         | AESOP-ApproxName        | AESOP-MultiProp
                       | Max   Precision Recall  | Max   Precision Recall  | Max   Precision Recall
w/o Multi-stage (T5-B) | 25.19   40.87   27.64   | 25.75   42.14   28.26   | 26.93   44.49   29.72
MuSEE (T5-B)           | 44.95   50.63   58.99   | 45.75   51.57   60.10   | 46.95   53.00   61.75
w/o Multi-stage (T5-L) | 27.74   53.04   28.81   | 28.14   54.10   29.22   | 29.14   56.90   30.29
MuSEE (T5-L)           | 49.35   57.97   59.63   | 49.89   58.69   60.35   | 50.94   60.11   61.68

Appendix G Human Evaluation Criteria and Case Study

The details for the three human evaluation criteria are shown below:

  • Completeness: Which set of entities includes all relevant entities and has the fewest missing important entities? Which set of entities is more useful for further analysis or processing? Focus on the set that contains fewer unimportant and/or irrelevant entities.

  • Correctness: Which set of entities more correctly represents the information in the passage? Focus on consistency with the context of the passage. Do extracted properties correctly represent each entity or are there more specific property values available? Are property values useful?

  • Hallucinations: Which set of entities contains fewer hallucinations? That is, are there any entities or property values that do not exist in, or cannot be inferred from, the text?

We provide a case study for the human evaluation analysis, comparing the outputs of GenIE (T5-L) and MuSEE (T5-L) given a specific text description. MuSEE accurately identifies seven entities, surpassing GenIE's two, thus demonstrating greater completeness. Additionally, we identify an error in GenIE's output: it incorrectly assigns Bartolomeo Rastrelli's place of death as Moscow, whereas the actual location, Saint Petersburg, is not referenced in the text. This error by GenIE likely stems from hallucination, an issue not present in MuSEE's output. In this example, it is evident that MuSEE outperforms GenIE in terms of completeness, correctness, and resistance to hallucinations.

Text Description: The ceremonial attire of Elizabeth, Catherine Palace, Tsarskoye Selo; fot. Ivonna Nowicka Elizabeth or Elizaveta Petrovna (; ) reigned as Empress of Russia from 1741 until her death in 1762. She remains one of the most popular Russian monarchs because of her decision not to execute a single person during her reign, her numerous construction projects, and her strong opposition to Prussian policies. The second-eldest daughter of Tsar Peter the Great (), Elizabeth lived through the confused successions of her father’s descendants following her half-brother Alexei’s death in 1718. The throne first passed to her mother Catherine I of Russia (), then to her nephew Peter II, who died in 1730 and was succeeded by Elizabeth’s first cousin Anna (). After the brief rule of Anna’s infant great-nephew, Ivan VI, Elizabeth seized the throne with the military’s support and declared her own nephew, the future Peter III, her heir. During her reign Elizabeth continued the policies of her father and brought about a remarkable Age of Enlightenment in Russia. Her domestic policies allowed the nobles to gain dominance in local government while shortening their terms of service to the state. She encouraged Mikhail Lomonosov’s foundation of the University of Moscow, the highest-ranking Russian educational institution. Her court became one of the most splendid in all Europe, especially regarding architecture: she modernised Russia’s roads, encouraged Ivan Shuvalov’s foundation of the Imperial Academy of Arts, and financed grandiose Baroque projects of her favourite architect, Bartolomeo Rastrelli, particularly in Peterhof Palace.

GenIE (T5-L):

{
  "0": {
    "entity name": "Bartolomeo Rastrelli",
    "place of death": "Moscow",
    "given name": "Bartolomeo",
    "family name": "Rastrelli",
    "languages spoken, written or signed": "Russian",
    "country": "Russia",
    "type": "human"
  },
  "1": {
    "entity name": "Catherine I of Russia",
    "given name": "Catherine",
    "languages spoken, written or signed": "Russian",
    "type": "human"
  }
}

MuSEE (T5-L):

{
  "0": {
    "type": "human",
    "entity name": "Peter the Great",
    "given name": "Peter"
  },
  "1": {
    "type": "human",
    "entity name": "Bartolomeo Rastrelli",
    "family name": "Rastrelli",
    "given name": "Bartolomeo"
  },
  "2": {
    "type": "human",
    "entity name": "Catherine I of Russia",
    "given name": "Catherine"
  },
  "3": {
    "type": "human",
    "entity name": "Peter III",
    "given name": "Peter"
  },
  "4": {
    "type": "human",
    "entity name": "Ivan Shuvalov",
    "family name": "Shuvalov",
    "given name": "Ivan",
    "languages spoken, written or signed": "Russian"
  },
  "5": {
    "type": "concrete object",
    "entity name": "Peterhof Palace",
    "country": "Russia"
  },
  "6": {
    "type": "human",
    "entity name": "Mikhail Lomonosov",
    "family name": "Lomonosov",
    "given name": "Mikhail",
    "languages spoken, written or signed": "Russian"
  }
}

[Figures 8–11: Correlation analysis between AESOP metric variants across all models on the four datasets.]

Appendix H Metric Correlation Analysis

We show the correlation analysis between AESOP metric variants across all models on all four datasets in Fig. 8, Fig. 9, Fig. 10, and Fig. 11, respectively. Specifically, we focus on the variants based on the entity assignment choices in Phase 1 of AESOP, as described in Sec. 3. For Phase 2, the “Max” normalization method is employed by default; observations for the other two normalization variants are similar. In the associated figures, AESOP-MultiProp-Max is uniformly used as the x-axis measure, while AESOP-ExactName-Max or AESOP-ApproxName-Max serves as the y-axis metric. The scatter plots in all figures cluster near the diagonal, indicating a robust correlation among the various metric variants we have introduced.
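For reference, the correlation underlying such scatter plots can be computed as in the following sketch; the per-sample scores shown here are hypothetical placeholders, not values from our experiments.

# Hypothetical per-sample scores under two AESOP variants, compared with Pearson's r.
import numpy as np

multiprop_max = np.array([0.46, 0.51, 0.62, 0.39])  # AESOP-MultiProp-Max per sample
exactname_max = np.array([0.44, 0.49, 0.60, 0.35])  # AESOP-ExactName-Max per sample
r = np.corrcoef(multiprop_max, exactname_max)[0, 1]
print(f"Pearson r = {r:.3f}")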
