Large Language Models: Architectures and Applications in Electrical and Computer Engineering - A Short Review
Abstract
Large Language Models (LLMs) have rapidly evolved from research curiosities into general-purpose AI components that are increasingly embedded in engineering workflows. Although several surveys address theoretical foundations and broad application domains, concise, engineering-oriented perspectives tailored to electrical and computer engineering remain scarce. This short review outlines the evolution and taxonomy of current LLM families, including proprietary and open-source models, code-oriented variants and multimodal extensions. Core techniques for adapting LLMs to domain-specific tasks are summarized, with emphasis on instruction tuning, parameter-efficient fine-tuning (e.g., LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA)) and retrieval-augmented generation, together with recent advances in tool use and LLM-based agents. Evaluation methodologies are briefly reviewed, covering general benchmarks, trustworthiness and safety aspects, and domain-specific assessment of coding, control and signal processing tasks. Representative application patterns in electrical engineering, control and computer science are highlighted, and key challenges and future research directions related to hallucination mitigation, robustness, efficiency and secure deployment are discussed. The review is intended as a practical reference for integrating LLMs into engineering systems and educational environments.