Transformer networks such as CodeBERT already achieve very good results for code clone detection on benchmark datasets, which might suggest that the task is solved. However, code clone detection is not trivial; semantic code clones in particular remain difficult to detect. We show that the generalizability of CodeBERT is limited by evaluating it on two differently constructed subsets of Java code clones from BigCloneBench. We observe a significant drop in F1 score when the model is evaluated on code snippets and functionality IDs different from those used for model building.
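
The evaluation setup described in the abstract amounts to a grouped train/test split: clone pairs are partitioned so that the test set contains only functionality IDs, and therefore only code snippets, never seen during fine-tuning. The following is a minimal sketch of such a split, not the paper's actual pipeline; the record schema (`func_id`, `code1`, `code2`, `label`) is an illustrative assumption rather than the real BigCloneBench format.

    import random
    from collections import defaultdict

    def split_by_functionality(pairs, test_ratio=0.2, seed=42):
        """Split clone pairs so train and test share no functionality IDs.

        `pairs` is assumed to be a list of dicts with keys
        'func_id', 'code1', 'code2', 'label' (hypothetical schema).
        """
        # Group all pairs by their functionality ID.
        by_func = defaultdict(list)
        for p in pairs:
            by_func[p["func_id"]].append(p)

        # Shuffle the functionality IDs, not the individual pairs,
        # so whole functionalities end up on one side of the split.
        func_ids = sorted(by_func)
        random.Random(seed).shuffle(func_ids)
        n_test = max(1, int(len(func_ids) * test_ratio))

        test = [p for fid in func_ids[:n_test] for p in by_func[fid]]
        train = [p for fid in func_ids[n_test:] for p in by_func[fid]]
        return train, test

Under such a functionality-disjoint split, a drop in F1 relative to a random pair-level split suggests the model has learned functionality-specific patterns rather than a general notion of clone similarity, which is the effect the abstract reports.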


    Title: Generalizability of Code Clone Detection on CodeBERT

    Contributors:

    Conference: 2022; Michigan, USA

    Publication date: 2022-10-10

    Media type: Conference paper

    Format: Electronic resource

    Language: English




    Similar titles:

    Enhancing generalizability of machine-learning turbulence models

    Li, Jiaqi / Bin, Yuanwei / Huang, George et al. | AIAA | 2024




    Clone detection in automotive model-based development

    Deissenboeck, F. / Hummel, B. / Jurgens, E. et al. | Tema Archiv | 2008