All the material (code, dataset, results) for our benchmark of nested NER approaches accepted at ICDAR 2023

soduco/paper-nestedner-icdar23-code

Code and Data for the paper "A Benchmark of Nested NER Approaches in Historical Structured Documents" presented at ICDAR 2023

Abstract

Named Entity Recognition (NER) is a key step in the creation of structured data from digitised historical documents. Traditional NER approaches deal with flat named entities, whereas entities are often nested. For example, a postal address might contain a street name and a number. This work compares three nested NER approaches, including two state-of-the-art approaches using Transformer-based architectures. We introduce a new Transformer-based approach based on joint labelling and semantic weighting of errors, evaluated on a collection of 19th-century Paris trade directories. We evaluate the approaches with regard to the impact of supervised fine-tuning, unsupervised pre-training with noisy texts, and variation of IOB tagging formats. Our results show that while nested NER approaches enable extracting structured data directly, they do not benefit from the extra knowledge provided during training and reach a performance similar to the base approach on flat entities. Even though all three approaches perform well in terms of F1 scores, joint labelling is most suitable for hierarchically structured data. Finally, our experiments reveal the superiority of the IO tagging format on such data.
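
As a toy illustration of the difference between flat and joint labelling, and between the IOB2 and IO tagging formats, consider a directory entry whose address nests a street name and a number. This is a minimal sketch: the tag names (PER, ADDR, STREET, NUM) and the "+" separator for joint labels are hypothetical and do not reproduce the exact tag set used in the paper.

# Minimal sketch of flat vs. joint labelling and IOB2 vs. IO tagging.
# The tag names and the "+" separator for joint labels are hypothetical
# illustrations, not the paper's actual tag set.

tokens = ["Dupont", "rue", "de", "Rivoli", "12"]

# Flat NER (IOB2): only top-level entities are labelled, so the street
# name and the number nested inside the address are invisible.
flat_iob2 = ["B-PER", "B-ADDR", "I-ADDR", "I-ADDR", "I-ADDR"]

# Joint labelling (IOB2): each token carries the full path of nested
# labels, so structured data can be read off the tags directly.
joint_iob2 = ["B-PER",
              "B-ADDR+B-STREET", "I-ADDR+I-STREET", "I-ADDR+I-STREET",
              "I-ADDR+B-NUM"]

# The same joint labels in the IO format: no begin/inside distinction,
# which shrinks the tag set at the cost of merging adjacent entities.
joint_io = ["I-PER",
            "I-ADDR+I-STREET", "I-ADDR+I-STREET", "I-ADDR+I-STREET",
            "I-ADDR+I-NUM"]

for token, flat, joint in zip(tokens, flat_iob2, joint_iob2):
    print(f"{token:10} {flat:8} {joint}")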

Full extraction pipeline

Source documents

  • Paper pre-print (PDF): HAL-03994759 & arXiv
  • Final paper (Springer edition): DOI 10.1007/978-3-031-41682-8_8
  • Full dataset (images and transcribed texts): DOI

Code


Installation

Download the latest stable release of the code HERE

pip install --requirement requirements.txt

Models

Project Structure

Structure of this repository:

├── dataset                        <- Data used for training and validation (except dataset_full.json)
│   ├── 10-ner_ref                 <- Full ground-truth dataset
│   ├── 31-ner_align_pero          <- Full Pero-OCR dataset
│   ├── 41-ner_ref_from_pero       <- Subset of ground-truth entries with a valid Pero-OCR equivalent
│   ├── qualitative_analysis       <- Tests and entries for the qualitative analysis
│   └── dataset_full.json          <- Published data
│
├── img                            <- Images
│
├── src                            <- Jupyter notebooks and Python scripts
│   ├── m0_flat_ner                <- Flat NER approach notebook and scripts
│   ├── m1_independant_ner_layers  <- M1 approach notebook and scripts
│   ├── m2_joint-labelling_for_ner <- M2 approach notebook and scripts
│   ├── m3_hierarchical_ner        <- M3 approach notebook and scripts
│   ├── t1_dataset_tools           <- Scripts to format the dataset
│   ├── t2_metrics                 <- Benchmark results tables
│   └── requirements.txt
│
└── README.md

Please note that, for each approach, the qualitative analysis notebook and the demo notebook can be run without preparing the source data or training the models.
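
To take a first look at the published data, the file can be loaded with the Python standard library. This is a minimal sketch, assuming dataset_full.json is a regular JSON file; the exact record schema is the one documented with the published dataset.

import json
from pathlib import Path

# Minimal sketch, assuming dataset_full.json is a regular JSON file;
# the exact record schema is documented with the published dataset.
path = Path("dataset/dataset_full.json")
with path.open(encoding="utf-8") as f:
    data = json.load(f)

# Inspect the top-level structure and a first record.
print(type(data).__name__)
sample = data[0] if isinstance(data, list) else next(iter(data.values()))
print(sample)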

Reference

If you use this software, please cite it as below.

@inproceedings{nner_benchmark_2023,
    title     = {A Benchmark of Nested Named Entity Recognition Approaches in Historical Structured Documents},
    author    = {Tual, Solenn and Abadie, Nathalie and Carlinet, Edwin and Chazalon, Joseph and Duménieu, Bertrand},
    booktitle = {Proceedings of the 17th International Conference on Document Analysis and Recognition (ICDAR'23)},
    year      = {2023},
    month     = aug,
    address   = {San José, California, USA},
    url       = {https://hal.science/hal-03994759},
    doi       = {10.1007/978-3-031-41682-8_8}
}

Acknowledgment

This work is supported by the French National Research Agency (ANR), as part of the SODUCO project (grant ANR-18-CE38-0013).