
Meet ‘AutoAgent’: The Open-Source Library That Lets an AI Engineer and Optimize Its Own Agent Harness Overnight

There’s a specific type of tedium that every AI engineer knows intimately: the prompt-tuning loop. You write a system prompt, run your agent against a benchmark, read the failure traces, tweak the prompt, add a tool, rerun. Repeat this a few dozen times and you might move the needle. It’s grunt work dressed up in Python files. Now, a new open-source library called AutoAgent, built by Kevin Gu at thirdlayer.inc, proposes an unsettling alternative — don’t do this work yourself. Let an AI do it.

AutoAgent is an open-source library for autonomously improving an agent on any domain. In a 24-hour run, it hit #1 on SpreadsheetBench with a score of 96.5%, and achieved the #1 GPT-5 score on TerminalBench with 55.1%.

https://x.com/kevingu/status/2039843234760073341

What Is AutoAgent, Really?

AutoAgent is described as being ‘like autoresearch but for agent engineering.’ The idea: give an AI agent a task, then let it build and iterate on an agent harness autonomously overnight. It modifies the system prompt, tools, agent configuration, and orchestration, runs the benchmark, checks the score, keeps or discards the change, and repeats.

To understand the analogy: Andrej Karpathy’s autoresearch does the same thing for ML training — it loops through propose-train-evaluate cycles, keeping only changes that improve validation loss. AutoAgent ports that same ratchet loop from ML training into agent engineering. Instead of optimizing a model’s weights or training hyperparameters, it optimizes the harness — the system prompt, tool definitions, routing logic, and orchestration strategy that determine how an agent behaves on a task.
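A minimal sketch of that ratchet loop in Python, with `propose_harness_edit` and `run_benchmark` as hypothetical stand-ins rather than AutoAgent’s actual API:

```python
# Sketch of the keep-or-discard ratchet loop. The two helpers are
# placeholders for what the meta-agent and benchmark runner would do.
import shutil

def propose_harness_edit(path: str) -> None:
    """Placeholder: the meta-agent rewrites the prompt/tools/routing in-place."""

def run_benchmark(path: str) -> float:
    """Placeholder: run the benchmark's task suites and return a numeric score."""
    return 0.0

def ratchet_loop(iterations: int = 50) -> float:
    best = run_benchmark("agent.py")                  # baseline score
    for _ in range(iterations):
        shutil.copy("agent.py", "agent.py.bak")       # snapshot current harness
        propose_harness_edit("agent.py")              # propose a change
        score = run_benchmark("agent.py")             # evaluate it
        if score > best:
            best = score                              # keep: new best harness
        else:
            shutil.copy("agent.py.bak", "agent.py")   # discard: roll back
    return best
```

The only signal the loop needs is a scalar score per run, which is why any benchmark that emits one can be dropped in.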

A harness, in this context, is the scaffolding around an LLM: what system prompt it receives, what tools it can call, how it routes between sub-agents, and how tasks are formatted as inputs. Most agent engineers hand-craft this scaffolding. AutoAgent automates the iteration on that scaffolding itself.

The Architecture: Two Agents, One File, One Directive

The GitHub repo has a deliberately simple structure. agent.py is the entire harness under test in a single file — it contains config, tool definitions, agent registry, routing/orchestration, and the Harbor adapter boundary. The adapter section is explicitly marked as fixed; the rest is the primary edit surface for the meta-agent. program.md contains instructions for the meta-agent plus the directive (what kind of agent to build), and that is the only file the human edits.
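As a rough illustration of that single-file layout, here is what a harness in that spirit might look like; every name below is an assumption based on the article’s description, not code from the repo:

```python
# Illustrative skeleton of a single-file harness in the spirit of agent.py.
# Section names and contents are assumptions, not the repo's actual code.

# --- config (meta-agent may edit) ---
SYSTEM_PROMPT = "You are a spreadsheet-editing agent. Think step by step."
MODEL = "gpt-5"  # hypothetical model id

# --- tool definitions (meta-agent may edit) ---
def run_shell(cmd: str) -> str:
    """Example tool: execute a shell command and return its stdout."""
    import subprocess
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

TOOLS = {"run_shell": run_shell}

# --- agent registry / routing (meta-agent may edit) ---
def route(task: str) -> str:
    """Pick which sub-agent or prompt variant handles this task."""
    return "default"

# --- Harbor adapter boundary (FIXED: the meta-agent must not edit below) ---
def harbor_entrypoint(instruction: str) -> str:
    """Receive a Harbor task instruction, run the agent, return its answer."""
    ...
```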

Think of it as a separation of concerns between human and machine. The human sets the direction in program.md. The meta-agent (a separate, higher-level AI) then reads that directive, inspects agent.py, runs the benchmark, diagnoses what failed, rewrites the relevant parts of agent.py, and repeats. The human never touches agent.py directly.

A critical piece of infrastructure that keeps the loop coherent across iterations is results.tsv — an experiment log automatically created and maintained by the meta-agent. It tracks every experiment run, giving the meta-agent a history to learn from and calibrate what to try next. The full project structure also includes Dockerfile.base, an optional .agent/ directory for reusable agent workspace artifacts like prompts and skills, a tasks/ folder for benchmark payloads (added per benchmark branch), and a jobs/ directory for Harbor job outputs.
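The article doesn’t document the columns of results.tsv, but an append-only experiment log of roughly this shape would serve the purpose; the schema below is an assumption:

```python
# Hypothetical sketch of an append-only experiment log like results.tsv.
# The actual columns AutoAgent uses are not documented in the article.
import csv
import datetime

def log_experiment(path: str, change: str, score: float, kept: bool) -> None:
    with open(path, "a", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            change,          # short description of the harness edit
            f"{score:.3f}",  # benchmark score for this run
            "keep" if kept else "discard",
        ])

log_experiment("results.tsv", "shorter system prompt; added retry tool", 0.72, True)
```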

The metric is the total score produced by the benchmark’s task test suites. The meta-agent hill-climbs on this score. Every experiment produces a numeric score: keep if better, discard if not — the same loop as autoresearch.

The Task Format and Harbor Integration

Benchmarks are expressed as tasks in Harbor format. Each task lives under tasks/my-task/ and includes a task.toml for config like timeouts and metadata, an instruction.md which is the prompt sent to the agent, a tests/ directory with a test.sh entry point that writes a score to /logs/reward.txt, and a test.py for verification using either deterministic checks or LLM-as-judge. An environment/Dockerfile defines the task container, and a files/ directory holds reference files mounted into the container. Tests write a score between 0.0 and 1.0 to the verifier logs. The meta-agent hill-climbs on this.
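Putting those conventions together, a minimal deterministic tests/test.py might look like the sketch below; the checked file path and expected answer are invented for illustration:

```python
# Hypothetical tests/test.py for a Harbor-style task: run a deterministic
# check and write a score in [0.0, 1.0] to /logs/reward.txt.
from pathlib import Path

def score_task() -> float:
    # Illustrative check: did the agent write the expected answer file?
    answer = Path("/app/output.txt")
    if not answer.exists():
        return 0.0
    return 1.0 if answer.read_text().strip() == "42" else 0.0

if __name__ == "__main__":
    Path("/logs").mkdir(parents=True, exist_ok=True)
    Path("/logs/reward.txt").write_text(str(score_task()))
```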

The LLM-as-judge pattern here is worth flagging: instead of only checking answers deterministically (like unit tests), the test suite can use another LLM to evaluate whether the agent’s output is ‘correct enough.’ This is common in agentic benchmarks where correct answers aren’t reducible to string matching.
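As a sketch of the pattern (not AutoAgent’s own verifier), here is an LLM-as-judge check built on the OpenAI Python client; the grader model, prompt, and binary scoring scheme are all assumptions:

```python
# Hypothetical LLM-as-judge verifier: ask a grader model whether the agent's
# output satisfies the task, and map its verdict to a score in [0.0, 1.0].
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(task: str, output: str) -> float:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable grader model
        messages=[{
            "role": "user",
            "content": (
                f"Task: {task}\n\nAgent output: {output}\n\n"
                "Does the output correctly solve the task? Answer YES or NO."
            ),
        }],
    )
    verdict = (response.choices[0].message.content or "").strip().upper()
    return 1.0 if verdict.startswith("YES") else 0.0
```

A graded rubric (partial credit between 0.0 and 1.0) would drop in the same way, as long as the verdict is parsed into a number the hill-climber can compare.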

Key Takeaways

  • Autonomous harness engineering works — AutoAgent shows that a meta-agent can replace the human prompt-tuning loop entirely, iterating on agent.py overnight without any human touching the harness files directly.
  • Benchmark results validate the approach — In a 24-hour run, AutoAgent hit #1 on SpreadsheetBench (96.5%) and the top GPT-5 score on TerminalBench (55.1%), beating every other entry that was hand-engineered by humans.
  • ‘Model empathy’ may be a real phenomenon — A Claude meta-agent optimizing a Claude task agent appeared to diagnose failures more accurately than when optimizing a GPT-based agent, suggesting same-family model pairing may matter when designing your AutoAgent loop.
  • The human’s job shifts from engineer to director — You don’t write or edit agent.py. You write program.md, a plain Markdown directive that steers the meta-agent. The distinction mirrors the broader shift in agentic engineering from writing code to setting goals.
  • It’s plug-and-play with any benchmark — Because tasks follow Harbor’s open format and agents run in Docker containers, AutoAgent is domain-agnostic. Any scorable task — spreadsheets, terminal commands, or your own custom domain — can become a target for autonomous self-optimization.

Check out the Repo and Tweet.
