Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms
DOI: https://doi.org/10.4114/intartif.vol29iss77pp108-132

Keywords: Combinatorial Optimization, Large Language Models, LLM, Travelling Salesman Problem, Metaheuristic

Abstract
We investigate whether Large Language Models (LLMs) can refine a given optimization-algorithm codebase without requiring specialized user expertise. This contrasts with works that study generating optimization algorithm code from scratch. To this end, 10 baseline algorithms covering metaheuristics, reinforcement learning, and exact methods are applied to the Traveling Salesman Problem. The results demonstrate that our simple methodology leads to improved algorithm variants in 9 of the 10 cases analyzed. Notably, the LLMs autonomously incorporated advanced techniques---such as heuristic initializations in exact methods---leading to significant runtime reductions. Furthermore, this performance enhancement did not come at the cost of software quality: the generated code preserved a high maintainability index (averaging 53.40) and, for certain models, coincided with simplified structures, with average cyclomatic complexity reduced by up to 19.4%, all without requiring specialized optimization knowledge from the user.
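To illustrate the kind of refinement the abstract describes, the sketch below shows a heuristic initialization warm-starting an exact TSP method: a nearest-neighbour tour supplies an initial upper bound that lets a branch-and-bound search prune earlier. This is our own minimal illustration of the general technique, not code from the paper; all function names are hypothetical.

```python
import math

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def nearest_neighbour(dist, start=0):
    """Greedy constructive heuristic: always visit the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def branch_and_bound(dist, initial_tour=None):
    """Exact TSP search; an initial tour (warm start) tightens the bound early."""
    n = len(dist)
    best_len = math.inf
    best_tour = None
    if initial_tour is not None:
        best_len = tour_length(initial_tour, dist)
        best_tour = list(initial_tour)

    def extend(partial, remaining, length):
        nonlocal best_len, best_tour
        if length >= best_len:          # bound: partial path already too long
            return
        if not remaining:               # complete the cycle back to the start
            total = length + dist[partial[-1]][partial[0]]
            if total < best_len:
                best_len, best_tour = total, partial[:]
            return
        for city in sorted(remaining, key=lambda j: dist[partial[-1]][j]):
            partial.append(city)
            extend(partial, remaining - {city}, length + dist[partial[-2]][city])
            partial.pop()

    extend([0], set(range(1, n)), 0)
    return best_tour, best_len

# Usage: warm-started search on a small symmetric instance.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
warm = nearest_neighbour(dist)
tour, length = branch_and_bound(dist, initial_tour=warm)
```

The warm start never changes the optimum found, only how quickly suboptimal branches are discarded, which is consistent with the runtime reductions the abstract reports.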
Copyright (c) 2026 Iberamia & The Authors

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Inteligencia Artificial (Ed. IBERAMIA)
ISSN: 1988-3064 (on line).

