Computer Science | Open Access | Peer Reviewed

Self-Correcting Code Generation with Iterative Testing


Authors

Jothi, Maheswari, Swetha, Senthil Prakash*


Abstract

Recent advances in Large Language Models (LLMs) have significantly improved automated code generation capabilities. However, despite producing syntactically correct and logically plausible code, LLMs frequently generate programs that fail during execution, suffer from edge-case errors, or do not satisfy functional requirements. This gap between apparent correctness and actual executability presents a major limitation for real-world software development applications. This project introduces a Self-Correcting Code Generation System that applies iterative feedback loops and automated testing to dramatically improve code reliability. Instead of generating code only once, the system follows an agentic workflow in which the LLM generates both the source code and corresponding unit tests, executes them in a secure sandbox, analyzes failures, and iteratively refines the code until correctness is achieved or a retry limit is reached. Research demonstrates that such iterative refinement can improve code accuracy from approximately 40% to over 90% on benchmarks such as HumanEval.
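The generate–test–refine loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `self_correct` and `run_in_sandbox` are hypothetical names, the "sandbox" is simplified to a subprocess, and a stub stands in for the LLM so the control flow is visible.

```python
import subprocess
import sys
import tempfile

MAX_RETRIES = 3  # retry limit from the workflow description

def run_in_sandbox(code: str, tests: str) -> tuple[bool, str]:
    """Run generated code plus its unit tests in a subprocess;
    return (passed, error trace)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + tests + "\n")
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode == 0, proc.stderr

def self_correct(generate, max_retries=MAX_RETRIES):
    """Iteratively generate, execute, and refine until tests pass
    or the retry limit is reached."""
    feedback = None
    for attempt in range(1, max_retries + 1):
        code, tests = generate(feedback)       # LLM produces code + unit tests
        passed, trace = run_in_sandbox(code, tests)
        if passed:
            return code, attempt
        feedback = trace                       # failure trace drives refinement
    raise RuntimeError("retry limit reached without passing tests")

# Hypothetical stand-in for the LLM: the first draft contains a bug
# (recursing with n - 2); after seeing the failure trace it emits a fix.
def fake_llm(feedback):
    if feedback is None:
        code = ("def factorial(n):\n"
                "    return 1 if n <= 1 else n * factorial(n - 2)")
    else:
        code = ("def factorial(n):\n"
                "    return 1 if n <= 1 else n * factorial(n - 1)")
    tests = "assert factorial(5) == 120"
    return code, tests

final_code, attempts = self_correct(fake_llm)
print(attempts)  # the buggy first draft is corrected on the second pass
```

In a full system, `fake_llm` would be replaced by a prompted model call, and the sandbox would enforce resource and import restrictions rather than a bare subprocess.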


Keywords

Self-correcting code generation, iterative testing, large language models, automated testing, error detection, software development.

Publication Details

Published In

Volume 1, Issue 1