Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms

Authors

  • Camilo Chacón Sartori, IIIA-CSIC, Spain
  • Christian Blum, IIIA-CSIC, Spain

DOI:

https://doi.org/10.4114/intartif.vol29iss77pp108-132

Keywords:

Combinatorial Optimization, Large Language Models, LLM, Traveling Salesman Problem, Metaheuristic

Abstract

We investigate whether Large Language Models (LLMs) can refine a given optimization algorithm codebase without requiring specialized expertise from the user. This is in contrast to works that study generating optimization algorithm code from scratch. To this end, we apply 10 baseline algorithms, covering metaheuristics, reinforcement learning, and exact methods, to the Traveling Salesman Problem. The results demonstrate that our simple methodology leads to improved algorithm variants in 9 out of the 10 cases analyzed. Notably, the LLMs autonomously incorporated advanced techniques, such as heuristic initializations in exact methods, leading to significant runtime reductions. Furthermore, this performance gain did not come at the cost of software quality: the generated code preserved a high maintainability index (53.40 on average) and, for certain models, exhibited a simplified structure, with reductions in average cyclomatic complexity of up to 19.4%, all without requiring specialized optimization knowledge from the user.
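To make the abstract's central example concrete, the sketch below illustrates what a heuristic initialization of an exact method can look like for the TSP: a nearest-neighbor tour supplies the initial incumbent for a plain depth-first branch-and-bound, so partial tours that already exceed the heuristic length are pruned immediately. This is a minimal Python sketch under our own assumptions, not code from the paper; the solver formulation and all names are illustrative.

```python
import math
import random

def tour_length(tour, dist):
    """Length of the closed tour visiting the cities in `tour`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor_tour(dist, start=0):
    """Cheap constructive heuristic: always move to the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def branch_and_bound_tsp(dist):
    """Exact depth-first branch-and-bound. The heuristic tour provides the
    initial incumbent (a warm start), so any partial tour at least as long
    as the incumbent is pruned without being extended."""
    n = len(dist)
    best_tour = nearest_neighbor_tour(dist)  # heuristic initialization
    best_len = tour_length(best_tour, dist)

    def extend(partial, length):
        nonlocal best_tour, best_len
        if length >= best_len:  # bound: cannot beat the incumbent
            return
        if len(partial) == n:
            total = length + dist[partial[-1]][partial[0]]  # close the tour
            if total < best_len:
                best_len, best_tour = total, partial[:]
            return
        last = partial[-1]
        # branch on the remaining cities, nearest first, to find improvements early
        for j in sorted(set(range(n)) - set(partial), key=lambda j: dist[last][j]):
            partial.append(j)
            extend(partial, length + dist[last][j])
            partial.pop()

    extend([0], 0.0)
    return best_tour, best_len

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(10)]
    dist = [[math.dist(a, b) for b in pts] for a in pts]
    tour, length = branch_and_bound_tsp(dist)
    print(f"optimal tour length: {length:.4f}")
```

Without the warm start, `best_len` would begin at infinity and the search would explore many hopeless branches before completing its first tour; seeding the bound with the heuristic length is exactly the kind of runtime-reducing refinement the abstract describes.

The software-quality figures quoted above, maintainability index and cyclomatic complexity, can be computed with off-the-shelf static analysis. A minimal sketch using the Python tool radon (our assumption as an example tool; the paper's measurement setup is not described on this page):

```python
# Assumes `pip install radon`; "solver.py" is a hypothetical file name.
from radon.complexity import cc_visit
from radon.metrics import mi_visit

source = open("solver.py").read()

# Maintainability index for the whole file (higher is more maintainable).
print("maintainability index:", mi_visit(source, multi=True))

# Cyclomatic complexity per function/class block (lower is simpler).
for block in cc_visit(source):
    print(block.name, "cyclomatic complexity:", block.complexity)
```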

Published

2026-02-26

How to Cite

Chacón Sartori, C., & Blum, C. (2026). Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms. Inteligencia Artificial, 29(77), 108–132. https://doi.org/10.4114/intartif.vol29iss77pp108-132