What are ‘Computer-Use Agents’? From Web to OS—A Technical Explainer

TL;DR: Computer-use agents are VLM-driven UI agents that act like users on unmodified software. Baselines on OSWorld started at 12.24% (human 72.36%); Claude Sonnet 4.5 now reports 61.4%. Gemini 2.5 Computer Use leads several web benchmarks (Online-Mind2Web 69.0%, WebVoyager 88.9%) but is not yet OS-optimized. Next steps center on OS-level robustness, sub-second action loops, and hardened safety policies, with clear training/evaluation recipes emerging from the open community.
Definition
Computer-use agents (a.k.a. GUI agents) are vision-language models that observe the screen, ground UI elements, and execute bounded UI actions (click, type, scroll, key-combos) to complete tasks in unmodified applications and browsers. Public implementations include Anthropic’s Computer Use, Google’s Gemini 2.5 Computer Use, and OpenAI’s Computer-Using Agent powering Operator.
Control Loop
Typical runtime loop: (1) capture a screenshot + state, (2) plan the next action with spatial/semantic grounding, (3) act via a constrained action schema, (4) verify and retry on failure. Vendors document standardized action sets and guardrails; audited harnesses normalize comparisons.
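A minimal sketch of that loop in Python follows; the capture/plan/execute/verify callables are placeholders for whatever vendor or open-source stack you wire in, not any real API:

```python
# Minimal observe-plan-act-verify loop (illustrative only; the callables
# passed in are hypothetical placeholders, not a vendor API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    verb: str                       # e.g. "click_at", "type", "key_combo", or "done"
    args: dict = field(default_factory=dict)

def run_task(goal: str,
             capture: Callable[[], bytes],             # (1) observe: screenshot bytes (+ optional state)
             plan: Callable[[str, bytes], Action],     # (2) plan: VLM grounds the next bounded action
             execute: Callable[[Action], None],        # (3) act through the constrained schema
             verify: Callable[[Action, bytes], bool],  # (4) check the action's expected on-screen effect
             max_steps: int = 25,
             max_retries: int = 2) -> bool:
    for _ in range(max_steps):
        action = plan(goal, capture())
        if action.verb == "done":                      # planner signals task completion
            return True
        for attempt in range(max_retries + 1):
            execute(action)
            if verify(action, capture()):              # post-condition holds on a fresh frame
                break
            if attempt == max_retries:
                return False                           # give up; a real harness might escalate to a human
    return False                                       # step budget exhausted
```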
Benchmark Landscape
- OSWorld (HKU, Apr 2024): 369 real desktop/web tasks spanning OS file I/O and multi-app workflows. At launch, humans scored 72.36% while the best model reached 12.24%.
- State of play (2025): Anthropic Claude Sonnet 4.5 reports 61.4% on OSWorld (sub-human, but a big jump from 42.2%).
- Live-web benchmarks: Google’s Gemini 2.5 Computer Use reports 69.0% on Online-Mind2Web (official leaderboard), 88.9% on WebVoyager, and 69.7% on AndroidWorld; the current model is browser-optimized and not yet optimized for OS-level control.
- Online-Mind2Web spec: 300 tasks across 136 live websites; results are verified via Princeton’s HAL leaderboard and a public Hugging Face Space.
Architecture Components
- Perception & Grounding: periodic screenshots, OCR/textual content extraction, component localization, coordinate inference.
- Planning: multi-step coverage with restoration; usually post-trained/RL-tuned for UI management.
- Action Schema: bounded verbs (
click_at
,kind
,key_combo
,open_app
), benchmark-specific exclusions to stop software shortcuts. - Evaluation Harness: live-web/VM sandboxes with third-party auditing and reproducible execution scripts.
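As a concrete illustration of the bounded-verb idea, here is a typed action schema sketch; the verb names are loosely modeled on the list above and are assumptions, not any vendor’s actual schema:

```python
# Illustrative bounded action schema: a closed set of verbs with typed
# payloads, so the model cannot emit arbitrary shell commands or tool calls.
from dataclasses import dataclass
from typing import Union

@dataclass
class ClickAt:
    x: int
    y: int

@dataclass
class TypeText:
    text: str

@dataclass
class KeyCombo:
    keys: list[str]            # e.g. ["ctrl", "s"]

@dataclass
class OpenApp:
    name: str                  # the kind of verb a benchmark may exclude to prevent tool shortcuts

UIAction = Union[ClickAt, TypeText, KeyCombo, OpenApp]

def validate(action: UIAction, allowed: tuple[type, ...]) -> UIAction:
    """Reject any verb that the current benchmark or policy excludes."""
    if not isinstance(action, allowed):
        raise ValueError(f"{type(action).__name__} is not permitted in this run")
    return action

# Example: a browser-only run that disallows OpenApp.
# validate(ClickAt(x=412, y=188), allowed=(ClickAt, TypeText, KeyCombo))
```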
Enterprise Snapshot
- Anthropic: Computer Use API; Sonnet 4.5 at 61.4% on OSWorld; docs emphasize pixel-accurate grounding, retries, and safety confirmations.
- Google DeepMind: Gemini 2.5 Computer Use API + model card with Online-Mind2Web 69.0%, WebVoyager 88.9%, AndroidWorld 69.7%, latency measurements, and safety mitigations.
- OpenAI: Operator research preview for U.S. Pro users, powered by the Computer-Using Agent; separate system card and developer surface via the Responses API; availability is limited/preview.

Where They’re Headed: Web → OS
- Few-/one-shot workflow cloning: the near-term direction is robust task imitation from a single demonstration (screen capture + narration). Treat this as an active research claim, not a fully solved product feature.
- Latency budgets for collaboration: to preserve direct manipulation, actions should land within the 0.1–1 s HCI thresholds; current stacks often exceed this because of vision and planning overhead. Expect engineering on incremental vision (diff frames), cache-aware OCR, and action batching; see the frame-diff sketch after this list.
- OS-level breadth: file dialogs, multi-window focus, non-DOM UIs, and system policies add failure modes absent from browser-only agents. Gemini’s current “browser-optimized, not OS-optimized” status underscores this next step.
- Safety: prompt injection from web content, dangerous actions, and data exfiltration. Model cards describe allow/deny lists, confirmations, and blocked domains; expect typed action contracts and “consent gates” for irreversible steps (a consent-gate sketch also follows this list).
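On the latency point, one low-effort form of incremental vision is a frame-diff gate that skips the expensive OCR and planning call when the screen has barely changed. The sketch below assumes Pillow and NumPy are available; the thresholds are placeholders to tune, not recommended values:

```python
# Illustrative frame-diff gate: only re-run vision/OCR + planning when
# enough pixels changed since the last processed frame.
import numpy as np
from PIL import Image

def changed_fraction(prev: Image.Image, curr: Image.Image) -> float:
    """Fraction of pixels whose grayscale intensity moved by more than a small delta."""
    a = np.asarray(prev.convert("L"), dtype=np.int16)
    b = np.asarray(curr.convert("L"), dtype=np.int16)
    if a.shape != b.shape:
        return 1.0                                   # resolution changed; force a full refresh
    return float((np.abs(a - b) > 12).mean())        # 12/255 intensity delta is an arbitrary placeholder

def should_reprocess(prev: Image.Image | None, curr: Image.Image,
                     threshold: float = 0.02) -> bool:
    """Only pay for OCR/VLM planning when >2% of pixels changed (threshold is a tunable assumption)."""
    return prev is None or changed_fraction(prev, curr) > threshold
```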
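On the safety point, a consent gate plus deny list can sit between the planner and the executor. The verbs, domains, and confirm callback below are illustrative assumptions, not any vendor’s published policy:

```python
# Illustrative consent gate: block denied domains outright and require an
# explicit human confirmation before irreversible verbs.
from typing import Callable
from urllib.parse import urlparse

DENY_DOMAINS = {"bank.example.com"}                                     # example entries, not a real policy
IRREVERSIBLE_VERBS = {"delete_file", "submit_payment", "send_email"}    # hypothetical verb names

def guard(verb: str, target_url: str | None,
          confirm: Callable[[str], bool]) -> bool:
    """Return True if the planned action may proceed."""
    if target_url and urlparse(target_url).hostname in DENY_DOMAINS:
        return False                                                    # hard block, no override
    if verb in IRREVERSIBLE_VERBS:
        return confirm(f"Agent wants to run '{verb}'. Allow it?")       # consent gate for irreversible steps
    return True
```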
Practical Build Notes
- Start with a browser-first agent using a documented action schema and a verified harness (e.g., Online-Mind2Web).
- Add recoverability: explicit post-conditions, on-screen verification, and rollback plans for long workflows; a post-condition sketch follows this list.
- Treat metrics with skepticism: prefer audited leaderboards or third-party harnesses over self-reported scripts; OSWorld uses execution-based evaluation for reproducibility.
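For the recoverability bullet, a small wrapper can make per-step post-conditions and rollbacks explicit; all names here are hypothetical:

```python
# Illustrative recoverability wrapper: each workflow step declares an explicit
# post-condition and an optional rollback, so failures are detected on-screen
# rather than assumed.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    run: Callable[[], None]                          # performs the UI action(s) for this step
    post: Callable[[], bool]                         # on-screen verification of the expected result
    rollback: Optional[Callable[[], None]] = None    # optional undo for long workflows

def run_workflow(steps: list[Step]) -> bool:
    completed: list[Step] = []
    for step in steps:
        step.run()
        if not step.post():                          # post-condition failed: unwind what we can
            for prior in reversed(completed):
                if prior.rollback:
                    prior.rollback()
            return False
        completed.append(step)
    return True
```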
Open Research & Tooling
Hugging Face’s Smol2Operator provides an open post-training recipe that upgrades a small VLM into a GUI-grounded operator, useful for labs/startups prioritizing reproducible training over leaderboard records.
Key Takeaways
- Computer-use (GUI) agents are VLM-driven systems that perceive screens and emit bounded UI actions (click/type/scroll) to operate unmodified apps; current public implementations include Anthropic Computer Use, Google Gemini 2.5 Computer Use, and OpenAI’s Computer-Using Agent.
- OSWorld (HKU) benchmarks 369 real desktop/web tasks with execution-based evaluation; at launch humans achieved 72.36% while the best model reached 12.24%, highlighting grounding and procedural gaps.
- Anthropic Claude Sonnet 4.5 reports 61.4% on OSWorld, sub-human but a big jump from prior Sonnet 4 results.
- Gemini 2.5 Computer Use leads several live-web benchmarks (Online-Mind2Web 69.0%, WebVoyager 88.9%, AndroidWorld 69.7%) and is explicitly optimized for browsers, not yet for OS-level control.
- OpenAI Operator is a research preview powered by the Computer-Using Agent (CUA) model, which uses screenshots to interact with GUIs; availability remains limited.
- Open-source trajectory: Hugging Face’s Smol2Operator provides a reproducible post-training pipeline that turns a small VLM into a GUI-grounded operator, standardizing action schemas and datasets.
References:
Benchmarks (OSWorld & Online-Mind2Web)
- OSWorld homepage: https://os-world.github.io/
- OSWorld paper (arXiv): https://arxiv.org/abs/2404.07972
- OSWorld NeurIPS paper PDF: https://proceedings.neurips.cc/paper_files/paper/2024/file/5d413e48f84dc61244b6be550f1cd8f5-Paper-Datasets_and_Benchmarks_Track.pdf
- Online-Mind2Web (HAL leaderboard): https://hal.cs.princeton.edu/online_mind2web
- Online-Mind2Web (HF leaderboard): https://huggingface.co/spaces/osunlp/Online_Mind2Web_Leaderboard
- Online-Mind2Web (arXiv): https://arxiv.org/abs/2504.01382
- Online-Mind2Web GitHub: https://github.com/OSU-NLP-Group/Online-Mind2Web
Anthropic (Computer Use & Sonnet 4.5)
- Introducing Computer Use (Oct 2024): https://www.anthropic.com/news/3-5-models-and-computer-use
- Developing a Computer Use Model: https://www.anthropic.com/news/developing-computer-use
- Introducing Claude Sonnet 4.5 (benchmarks incl. OSWorld 61.4%): https://www.anthropic.com/news/claude-sonnet-4-5
- Claude Sonnet 4.5 System Card: https://www.anthropic.com/claude-sonnet-4-5-system-card
Google DeepMind (Gemini 2.5 Computer Use)
- Launch blog: https://blog.google/technology/google-deepmind/gemini-computer-use-model/
- Model card (PDF): https://storage.googleapis.com/deepmind-media/Model-Cards/Gemini-2-5-Computer-Use-Model-Card.pdf
- Evaluation & methodology addendum (PDF): https://storage.googleapis.com/deepmind-media/gemini/computer_use_eval_additional_info.pdf
- Gemini API docs — Computer Use: https://ai.google.dev/gemini-api/docs/computer-use
- Vertex AI docs — Computer Use: https://cloud.google.com/vertex-ai/generative-ai/docs/computer-use
OpenAI (Operator / CUA)
- Computer-Using Agent overview: https://openai.com/index/computer-using-agent/
- Operator system card: https://openai.com/index/operator-system-card/
- Introducing Operator (research preview): https://openai.com/index/introducing-operator/
Open-source: Hugging Face Smol2Operator
- Smol2Operator blog: https://huggingface.co/blog/smol2operator
- Smol2Operator repo: https://github.com/huggingface/smol2operator
- Smol2Operator demo Space: https://huggingface.co/spaces/A-Mahla/Smol2Operator