Advances in Pre-trained Language Models for Domain-Specific Text Classification: A Systematic Review
Rostam, Zhyar Rzgar K. and Kertész, Gábor (2025) Advances in Pre-trained Language Models for Domain-Specific Text Classification: A Systematic Review. ACM Transactions on Intelligent Systems and Technology. ISSN 2157-6904. DOI: 10.1145/3763002
Text: Rostam_1_36365049_z.pdf (restricted to registered users)
Abstract
The exponential increase in scientific literature and online information necessitates efficient methods for extracting knowledge from textual data. Natural Language Processing (NLP) plays a crucial role in addressing this challenge, particularly in text classification tasks. While Large Language Models (LLMs) have achieved remarkable success in NLP, their accuracy can suffer in domain-specific contexts due to specialized vocabulary, unique grammatical structures, and imbalanced data distributions. In this Systematic Literature Review (SLR), we investigate the use of Pre-trained Language Models (PLMs) for domain-specific text classification. We systematically review 41 articles published between 2018 and January 2024, adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. The review methodology involved rigorous inclusion criteria and a multi-step selection process supported by AI-powered tools. We trace the evolution of text classification techniques and differentiate between traditional and modern approaches. We emphasize transformer-based models and explore the challenges and considerations associated with using LLMs for domain-specific text classification. Furthermore, we categorize existing research by PLM and propose a taxonomy of techniques used in the field. To validate our findings, we conducted a comparative experiment comparing BERT, SciBERT, and BioBERT on biomedical sentence classification. We also present a comparative study of LLM performance on text classification tasks across different domains. Finally, we examine recent advancements in PLMs for domain-specific text classification and offer insights into future directions and limitations in this rapidly evolving field.
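To make the comparative experiment concrete, the sketch below shows one plausible way to run such a BERT vs. SciBERT vs. BioBERT comparison with Hugging Face Transformers. It is a minimal illustration, not the authors' actual setup: the checkpoint IDs are the standard public ones, but the toy dataset, label scheme, and hyperparameters are assumptions made for demonstration.

```python
# Minimal sketch of a BERT / SciBERT / BioBERT comparison on biomedical
# sentence classification. The toy dataset, labels, and hyperparameters are
# illustrative assumptions, not the paper's actual experimental setup.
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoints = {
    "BERT": "bert-base-uncased",
    "SciBERT": "allenai/scibert_scivocab_uncased",
    "BioBERT": "dmis-lab/biobert-v1.1",
}

# Toy stand-in for a biomedical corpus: label 1 = reports a finding,
# label 0 = background statement.
data = Dataset.from_dict({
    "text": [
        "Metformin reduced HbA1c levels in the treatment group.",
        "Diabetes mellitus is a chronic metabolic disorder.",
        "The inhibitor significantly decreased tumor growth in mice.",
        "Prior studies have examined gene expression in cancer cells.",
    ],
    "label": [1, 0, 1, 0],
})

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    tokenized = data.map(
        lambda batch: tokenizer(batch["text"], truncation=True,
                                padding="max_length", max_length=64),
        batched=True,
    )
    model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"out/{name}", num_train_epochs=1,
                               per_device_train_batch_size=4, report_to=[]),
        train_dataset=tokenized,
        eval_dataset=tokenized,  # toy data: same split, just to show the flow
        compute_metrics=accuracy,
    )
    trainer.train()
    print(name, trainer.evaluate()["eval_accuracy"])
```

The intuition such an experiment probes is that SciBERT and BioBERT were pre-trained on scientific and biomedical corpora, so their vocabularies and weights tend to match such sentences better than general-domain BERT's.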
| Field | Value |
|---|---|
| Item Type: | Article |
| Subjects: | Q Science > QA Mathematics and Computer Science > QA75 Electronic computers. Computer science (computing, computer science) |
| Divisions: | Laboratory of Parallel and Distributed Systems |
| SWORD Depositor: | MTMT Injector |
| Depositing User: | MTMT Injector |
| Date Deposited: | 13 Jan 2026 07:34 |
| Last Modified: | 13 Jan 2026 07:34 |
| URI: | https://eprints.sztaki.hu/id/eprint/11038 |