Self-Correcting Code Generation with Iterative Testing
Authors
Jothi, Maheswari, Swetha, Senthil Prakash*
Abstract
Recent advances in Large Language Models (LLMs) have significantly improved automated code generation capabilities. However, despite producing syntactically correct and logically plausible code, LLMs frequently generate programs that fail during execution, suffer from edge-case errors, or do not satisfy functional requirements. This gap between apparent correctness and actual executability presents a major limitation for real-world software development applications. This project introduces a Self-Correcting Code Generation System that applies iterative feedback loops and automated testing to dramatically improve code reliability. Instead of generating code only once, the system follows an agentic workflow in which the LLM generates both the source code and corresponding unit tests, executes them in a secure sandbox, analyzes failures, and iteratively refines the code until correctness is achieved or a retry limit is reached. Research demonstrates that such iterative refinement can improve code accuracy from approximately 40% to over 90% on benchmarks such as HumanEval.
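The generate-test-refine loop described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: `generate_code`, `generate_tests`, and `refine` are hypothetical stand-ins for LLM calls, and the sandboxed execution is approximated here with a bare `exec` (a real system would isolate execution in a container or subprocess with resource limits).

```python
MAX_RETRIES = 3  # retry limit assumed for illustration


def run_tests(code: str, tests: str) -> tuple[bool, str]:
    """Execute generated code and its unit tests in a shared namespace.

    Returns (passed, error_message). NOTE: plain exec() is NOT a secure
    sandbox; it merely stands in for one in this sketch.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)   # define the generated functions
        exec(tests, namespace)  # run the generated assertions
        return True, ""
    except Exception as exc:
        return False, repr(exc)


def self_correct(task: str, generate_code, generate_tests, refine) -> str:
    """Iteratively refine generated code until its tests pass
    or the retry limit is reached."""
    code = generate_code(task)
    tests = generate_tests(task)
    for _ in range(MAX_RETRIES):
        passed, error = run_tests(code, tests)
        if passed:
            return code
        # Feed the failure back to the model for the next attempt
        code = refine(task, code, error)
    return code  # retry limit reached; return best effort
```

For example, plugging in stub "LLM" functions where the first attempt contains a bug shows the loop converging: the failing assertion's error message is passed to `refine`, which returns a corrected version that then passes the tests.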
Keywords
Publication Details
Published In
Volume 1, Issue 1