Computer Science | Open Access | Peer Reviewed

Self-Correcting Code Generation with Iterative Testing


Authors

Jothi, Maheswari, Swetha, Senthil Prakash*


Abstract

Recent advances in Large Language Models (LLMs) have significantly improved automated code generation capabilities. However, despite producing syntactically correct and logically plausible code, LLMs frequently generate programs that fail during execution, suffer from edge-case errors, or do not satisfy functional requirements. This gap between apparent correctness and actual executability presents a major limitation for real-world software development applications. This project introduces a Self-Correcting Code Generation System that applies iterative feedback loops and automated testing to dramatically improve code reliability. Instead of generating code only once, the system follows an agentic workflow in which the LLM generates both the source code and corresponding unit tests, executes them in a secure sandbox, analyzes failures, and iteratively refines the code until correctness is achieved or a retry limit is reached. Research demonstrates that such iterative refinement can improve code accuracy from approximately 40% to over 90% on benchmarks such as HumanEval.
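The generate–test–refine loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate` stands in for an LLM call that receives the previous attempt's error trace as feedback, `run_unit_tests` uses `exec` in a fresh namespace as a placeholder for the secure sandbox the abstract mentions, and the stub generator, function names, and retry limit are all hypothetical.

```python
import traceback

def run_unit_tests(code: str, tests: str) -> tuple[bool, str]:
    """Run the candidate code and its unit tests in one namespace.

    Placeholder for a real sandbox: returns (passed, error_trace),
    where error_trace is empty when all assertions pass.
    """
    namespace = {}
    try:
        exec(code, namespace)   # load the candidate implementation
        exec(tests, namespace)  # assertions raise on failure
        return True, ""
    except Exception:
        return False, traceback.format_exc()

def self_correct(generate, tests: str, max_retries: int = 3):
    """Iteratively refine code until tests pass or retries run out.

    `generate(feedback)` models the LLM call; feedback is the previous
    attempt's error trace (empty string on the first attempt).
    """
    feedback = ""
    for _ in range(max_retries):
        code = generate(feedback)
        passed, feedback = run_unit_tests(code, tests)
        if passed:
            return code  # correctness achieved
    return None  # retry limit reached without passing

# Demo with a stub "LLM": the first draft is buggy, the second corrected.
attempts = iter([
    "def add(a, b):\n    return a - b",   # buggy first draft
    "def add(a, b):\n    return a + b",   # refined second draft
])
stub_llm = lambda feedback: next(attempts)
unit_tests = "assert add(2, 3) == 5"

fixed = self_correct(stub_llm, unit_tests)
print(fixed is not None)  # True: the second attempt passed the tests
```

In a production system the `exec` calls would be replaced by execution in an isolated sandbox (e.g. a container or restricted subprocess), since generated code must never run with the orchestrator's privileges.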


Keywords

Self-correcting code generation, iterative testing, large language models, automated testing, error detection, software development.

Publication Details

Published In

Volume 1, Issue 1