In the aviation domain, there are many applications for machine learning and artificial intelligence tools that operate on natural language. For example, there is a desire to identify commonalities across written safety reports, such as voluntary post-incident reports, or to create more accurate transcripts of air traffic management conversations. Another use case is extracting airspace procedures and constraints currently written in documents such as Letters of Agreement (LOAs); this use case serves as the evaluation case in this paper. These applications can benefit from state-of-the-art Natural Language Processing (NLP) techniques when those techniques are adapted to the language and phraseology specific to the aviation domain. This paper evaluates the viability of transferring pre-trained large language models to the aviation domain by adapting transformer-based models using aviation datasets. Three datasets are used, all built upon Letters of Agreement, which are Federal Aviation Administration (FAA) documents that formalize airspace operations across the National Airspace System. The first two datasets are used to adapt a ‘Robustly Optimized Bidirectional Encoder Representations from Transformers Approach’ (RoBERTa) model to the aviation domain in an unsupervised fashion; they differ in size, to assess how many documents are needed for the adaptation, and contain many examples of ‘aviation English’ with domain-specific terminology and phrasing, which serves as a representative basis for the adaptation. The third dataset is a separate set of LOA documents with two sets of classification labels used for evaluation, one at the document level and one at the line level. These downstream classification tasks measure the improvement gained by adapting RoBERTa: accuracy increases by 4–6% on both tasks, and the F1 score on the class of interest increases by 4–8%.
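
The record does not include an implementation, but the adaptation step the abstract describes, continued unsupervised pre-training of RoBERTa on aviation text before fine-tuning on the labeled tasks, can be sketched as below. This is a minimal illustration using the Hugging Face transformers and datasets libraries; the corpus file name (loa_corpus.txt), sequence length, and training hyperparameters are assumptions made for the sketch, not details taken from the paper.

    # Minimal sketch of domain-adaptive pre-training (masked language
    # modeling) of RoBERTa on an aviation corpus. Assumes one LOA line
    # per row in loa_corpus.txt; all hyperparameters are illustrative.
    from datasets import load_dataset
    from transformers import (
        DataCollatorForLanguageModeling,
        RobertaForMaskedLM,
        RobertaTokenizerFast,
        Trainer,
        TrainingArguments,
    )

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForMaskedLM.from_pretrained("roberta-base")

    # Load the raw aviation text and tokenize it into fixed-length inputs.
    corpus = load_dataset("text", data_files={"train": "loa_corpus.txt"})["train"]
    corpus = corpus.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True,
        remove_columns=["text"],
    )

    # Dynamically mask 15% of tokens each step, matching RoBERTa's
    # masked-language-modeling objective.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="roberta-aviation", num_train_epochs=3),
        train_dataset=corpus,
        data_collator=collator,
    )
    trainer.train()
    trainer.save_model("roberta-aviation")

For the downstream evaluation, the adapted checkpoint would then be loaded with a classification head (e.g. RobertaForSequenceClassification.from_pretrained("roberta-aviation", num_labels=...)) and fine-tuned separately on the document-level and line-level label sets.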





    Title:

    Towards an Aviation Large Language Model by Fine-tuning and Evaluating Transformers


    Contributors:


    Publication date:

    2024-09-29


    Size:

    220401 bytes


    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English



    Similar titles:

    Airspace homing sorting method based on large language model fine tuning

    Ji, Yulong / Zhou, Wentao / Wang, Jinlin et al. | European Patent Office | 2024

    AviationGPT: A Large Language Model for the Aviation Domain

    Wang, Liya / Chou, Jason / Tien, Alex et al. | AIAA | 2024


    Drive as Veteran: Fine-tuning of an Onboard Large Language Model for Highway Autonomous Driving

    Wang, Yujin / Huang, Zhaoyan / Liu, Quanfeng et al. | IEEE | 2024


    Adapting Sentence Transformers for the Aviation Domain

    Wang, Liya / Chou, Jason / Rouck, David et al. | AIAA | 2024


    Aviation-BERT: A Preliminary Aviation-Specific Natural Language Model

    Chandra, Chetan / Jing, Xiao / Bendarkar, Mayank V. et al. | AIAA | 2023