RUJA: Institutional Repository of Scientific Production


LOcative Reference Extractor (LORE) [Software]

Date

2020

Abstract

LORE, my rule-based model, and nLORE, its neural counterpart, are locative reference extractors designed to detect and extract any kind of locative reference (e.g. geopolitical entities, landforms, points of interest, traffic ways, etc.) together with their surrounding locative markers (e.g. directional, distance and temporal markers) from tweets or any other piece of microtext. LORE handles English, Spanish and French texts, whereas nLORE only handles English texts. This software tool is the result of my PhD thesis project, titled A linguistically-aware computational approach to microtext location detection, defended on 21 October 2020 at the University of Granada (UGR) with cum laude distinction, under the PhD programme in Languages, Texts and Contexts (Programa de Doctorado en Lenguajes, Textos y Contextos).
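As a toy illustration of the idea behind a rule-based locative reference extractor, the sketch below pairs a place name from a hypothetical gazetteer with nearby locative markers. The gazetteer, marker list and pairing logic are invented for illustration and are far simpler than LORE's actual rules.

```python
import re

# Hypothetical gazetteer and marker patterns, for illustration only;
# LORE's real rule set covers geopolitical entities, landforms, points
# of interest, traffic ways, etc., and richer marker types.
GAZETTEER = {"granada", "madrid", "main street"}
MARKERS = re.compile(r"\b(north|south|east|west|near|km|miles?|from|to)\b", re.I)

def toy_extract(text):
    """Return (locative reference, surrounding markers) pairs from a microtext."""
    found = []
    lowered = text.lower()
    for place in GAZETTEER:
        if place in lowered:
            # Collect directional/distance markers appearing in the text
            markers = MARKERS.findall(text)
            found.append((place, markers))
    return found

print(toy_extract("Traffic jam 3 km north from Granada"))
```

A real extractor would also resolve marker scope (which marker modifies which reference) rather than collecting all markers in the text, as this toy version does.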

Description

How to use this software:
1. Download the precompiled application by requesting access to the file here: https://drive.google.com/file/d/1zy66ezgdKW5roYeFoAh7p2KqA7_L3dch/view?usp=sharing
2. Decompress the zip file into a folder and execute the exe file.
3. Using it is fairly intuitive: the main app window lets you upload a dataset of tweets, or any other dataset in txt format, by clicking on the File icon and then perform locative reference extraction. You can select either LORE or nLORE and the language of the dataset, as well as the output format: token-based (typically used in NER) or entity-based. The extracted locative references are saved to the data/output folder as a txt file.
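The record does not document the exact layout of the two output formats, but the distinction can be sketched as follows: an entity-based format lists each locative reference as a labeled span, while a token-based format (common in NER) assigns one BIO tag per token. The conversion below is a hypothetical illustration with an invented `LOC` label, not LORE's actual file format.

```python
def to_token_based(tokens, entities):
    """Convert entity-based spans (start, end, label) to per-token BIO tags.

    Hypothetical illustration of the token-based vs entity-based
    distinction; LORE's actual output layout may differ.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in entities:
        tags[start] = f"B-{label}"          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # tokens inside the entity
    return list(zip(tokens, tags))

tokens = ["Flooding", "near", "Main", "Street", "in", "Granada"]
entities = [(2, 4, "LOC"), (5, 6, "LOC")]   # entity-based view: spans + labels
print(to_token_based(tokens, entities))
```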

Keywords

Locative reference extractor, LORE, Location detection, Natural language processing, Computational linguistics

Citation

Fernández-Martínez, Nicolás José & Periñán-Pascual, Carlos. (2021). LORE: A model for the detection of fine-grained locative references in tweets. Onomazein 52, 195–225. https://doi.org/10.7764/onomazein.52.11