Generative AI Models for Automated Software Testing

Authors

  • Krish Saxena Bundeli, Akash Institute of Engineering and Technology, Devanahalli, Bengaluru, India

DOI:

https://doi.org/10.15662/IJEETR.2023.0502001

Keywords:

Generative AI, Software Testing, Test Case Generation, Large Language Models, Test Automation, Code Coverage, GANs, Continuous Integration, CodeBERT, GPT

Abstract

The growing complexity of modern software systems necessitates more intelligent, efficient, and scalable testing approaches. Traditional software testing methods are often time-consuming, resource-intensive, and limited by human bias. Generative AI models—particularly large language models (LLMs) and generative adversarial networks (GANs)—are emerging as powerful tools for automated software testing, offering the ability to autonomously generate test cases, test scripts, and even simulate user behavior. This paper investigates the potential of generative AI in enhancing test automation processes across various stages of the software development life cycle (SDLC).

We review recent applications of generative AI models for test case generation, code coverage improvement, regression testing, and fuzz testing. Language models like GPT, Codex, and CodeBERT are shown to produce syntactically and semantically valid test cases for diverse programming languages and frameworks. GANs, on the other hand, are utilized for generating realistic input data to uncover edge-case bugs and vulnerabilities.
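To make the LLM-based workflow concrete, the following is a minimal sketch of prompting a chat model to draft a pytest suite for a single function. It assumes the openai Python client (version 1.x) with an API key in the environment; the target function, prompt wording, and model name are illustrative choices, not the implementation evaluated in the surveyed studies.

```python
# Minimal sketch: asking an LLM to draft a unit test for a given function.
# Assumes the `openai` Python client (>= 1.0) and OPENAI_API_KEY in the
# environment; the target function and prompt are illustrative only.
from openai import OpenAI

TARGET_SOURCE = '''
def normalize_email(address: str) -> str:
    """Lower-case an e-mail address and strip surrounding whitespace."""
    return address.strip().lower()
'''

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a pytest test module for the following Python function. "
    "Cover typical input, surrounding whitespace, and mixed case:\n\n"
    + TARGET_SOURCE
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # any code-capable chat model
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,              # keep generations close to deterministic
)

generated_test = response.choices[0].message.content
print(generated_test)  # review (or execute in a sandbox) before adding to the suite
```

In practice the generated module is executed and inspected before being committed, since syntactic validity does not guarantee that the asserted behavior is correct.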

Our research synthesizes state-of-the-art contributions from academic and industrial sources, analyzing their effectiveness based on key metrics such as fault detection rate, test coverage, and execution efficiency. The paper also presents a taxonomy of generative AI techniques used in software testing, categorizing them based on architecture, domain of application, and level of autonomy.
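The sketch below illustrates how two of these metrics can be computed when comparing a generated suite against a baseline. The definitions (seeded-fault detection rate, line coverage) follow common conventions and the figures are made up for illustration; they are not the paper's exact formulas or results.

```python
# Illustrative metric definitions; field names, formulas, and the example
# numbers are assumptions for demonstration, not results from the paper.
from dataclasses import dataclass

@dataclass
class TestRunResult:
    faults_seeded: int         # known faults injected into the system under test
    faults_detected: int       # faults exposed by at least one failing test
    lines_total: int           # executable lines in the code under test
    lines_executed: int        # lines hit while the suite ran
    wall_clock_seconds: float  # end-to-end execution time of the suite

def fault_detection_rate(r: TestRunResult) -> float:
    return r.faults_detected / r.faults_seeded if r.faults_seeded else 0.0

def line_coverage(r: TestRunResult) -> float:
    return r.lines_executed / r.lines_total if r.lines_total else 0.0

# Hypothetical comparison of an LLM-generated suite with a rule-based baseline.
llm_suite = TestRunResult(50, 41, 1200, 1068, 95.0)
baseline  = TestRunResult(50, 29, 1200, 804, 310.0)
for name, r in [("LLM suite", llm_suite), ("Baseline", baseline)]:
    print(f"{name}: FDR={fault_detection_rate(r):.0%}, "
          f"coverage={line_coverage(r):.0%}, time={r.wall_clock_seconds:.0f}s")
```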

Findings suggest that generative AI models significantly outperform traditional and rule-based test generation approaches in terms of speed, coverage, and adaptability. However, challenges remain in ensuring the correctness, explainability, and maintainability of AI-generated artifacts. This work concludes by identifying future directions for integrating generative AI into continuous integration/continuous deployment (CI/CD) pipelines and developing more transparent, human-in-the-loop frameworks.
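One way to realize such a human-in-the-loop CI/CD gate is sketched below: AI-generated tests are executed in isolation, and only a cleanly passing batch is forwarded for human review. The directory layout, review step, and gate script are hypothetical illustrations, not a framework described in the paper.

```python
# Hypothetical CI gate for AI-generated tests: run the generated suite in
# isolation and only propose it for human review if it executes cleanly.
import subprocess
import sys
from pathlib import Path

GENERATED_DIR = Path("tests/generated")  # assumed location of AI-drafted tests

def generated_suite_passes() -> bool:
    """Run only the generated tests; any non-zero pytest exit code means rejection."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", str(GENERATED_DIR), "-q"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    if generated_suite_passes():
        print("Generated tests pass; opening a pull request for human review.")
        sys.exit(0)
    print("Generated tests failed or errored; discarding this batch.")
    sys.exit(1)
```

A gate of this kind keeps the final accept/reject decision with a human reviewer while letting the pipeline filter out generations that do not even execute.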


Published

2023-03-01

How to Cite

Generative AI Models for Automated Software Testing. (2023). International Journal of Engineering & Extended Technologies Research (IJEETR), 5(2), 6253-6256. https://doi.org/10.15662/IJEETR.2023.0502001