Traditional implicit aspect extraction methods require extensive manual feature engineering and are inefficient when processing large-scale data. To address this problem, a BERT-based model combined with different kinds of classifiers is proposed. In this paper, the text is fed into BERT to obtain embeddings, and the embeddings from the final hidden layer are then combined with different classifiers to identify the implicit aspect. Five classic text classifiers are used for comparison. The experimental results show that the proposed method outperforms state-of-the-art works, reaching an accuracy of 78.11% and a macro-F1 score of 73.19% on the SemEval implicit aspect extraction data set.
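
The abstract describes a two-stage pipeline: encode the sentence with BERT, pool the final hidden layer into a fixed-size embedding, and hand that embedding to a separate classifier that predicts the implicit aspect category. The following is a minimal sketch of that idea, not the authors' code: the Hugging Face `bert-base-uncased` checkpoint, mean pooling, the logistic-regression classifier, and the toy aspect labels are all illustrative assumptions standing in for the five classic classifiers compared in the paper.

```python
# Sketch of the abstract's pipeline: BERT final-hidden-layer embeddings + a classic classifier.
# Model name, pooling strategy, labels, and classifier choice are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def embed(sentences):
    """Return one mean-pooled final-hidden-layer embedding per sentence."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)                       # last_hidden_state: (batch, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)  # ignore padding tokens when pooling
    summed = (out.last_hidden_state * mask).sum(dim=1)
    return (summed / mask.sum(dim=1)).numpy()

# Toy training data: sentences whose aspect is implicit (no explicit aspect word).
train_texts = ["It died after two days.", "Way too expensive for what you get."]
train_labels = ["quality", "price"]

clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
print(clf.predict(embed(["Stopped working within a week."])))
```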


    Title:

    BERT-based implicit aspect extraction


    Contributors:
    Wang, Lanlan (author) / Yao, Chunlong (author) / Li, Xu (author) / Yu, Xiaoqiang (author)


    Publication date:

    20.10.2021


    Format / Extent:

    984290 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Grusswort Bert Breuer

    Online Contents | 1998




    BERT for Aviation Text Classification

    Jing, Xiao / Chennakesavan, Akul / Chandra, Chetan et al. | TIBKAT | 2023


    BERT for Aviation Text Classification

    Jing, Xiao / Chennakesavan, Akul / Chandra, Chetan et al. | AIAA | 2023