The transformer architecture based on self-attention offers a versatile structure that has given rise to a range of deep learning models for natural language processing tasks. This work analyzes two pretraining objectives for bidirectional encoders such as BERT: the Masked Language Model (MLM) and the Conditional Masked Language Model (CMLM), the latter designed for learning sentence embeddings. Our investigation focuses on how sentence-level representations affect sequence classification: is there a significant difference in quality between these two pretrained language models? We evaluate the pretrained models by fine-tuning them on sequence classification as a downstream task.
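
As a rough illustration of the fine-tuning setup described in the abstract, the sketch below loads an MLM-pretrained encoder and trains it on a toy sequence classification task. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; a CMLM-pretrained encoder would be loaded the same way from its own checkpoint. The data, label count, and hyperparameters are illustrative placeholders, not the configuration evaluated in the paper.

# Minimal sketch: fine-tuning a pretrained BERT encoder for sequence
# classification with Hugging Face transformers. Checkpoint, data, and
# hyperparameters are assumptions for illustration only.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # assumed MLM-pretrained encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2)  # binary classification head on top of BERT

# Toy labelled examples standing in for a real classification corpus.
texts = ["the plot was gripping from start to finish",
         "a dull and predictable film"]
labels = [1, 0]

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                        torch.tensor(labels))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs of fine-tuning
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask,
                    labels=y)
        out.loss.backward()  # cross-entropy loss from the classification head
        optimizer.step()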


    Title: Fine-Tuning of BERT models for Sequence Classification

    Contributors:

    Publication date: 2022-12-05

    Size: 358,821 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    BERT for Aviation Text Classification

    Jing, Xiao / Chennakesavan, Akul / Chandra, Chetan et al. | TIBKAT | 2023


    BERT for Aviation Text Classification

    Jing, Xiao / Chennakesavan, Akul / Chandra, Chetan et al. | AIAA | 2023



    Fine tuning mechanical design

    McCormick, D. | Automotive engineering | 1981


    Platform fine-tuning mechanism

    MA HAIHUA / GE CHUNMING / GU HAIYANG et al. | European Patent Office | 2020