
Can a Small Language Model Predict Kernel Latency, Memory, and Model Accuracy from Code? A New Regression Language Model (RLM) Says Yes

Researchers from Cornell and Google introduce a unified Regression Language Model (RLM) that predicts numeric outcomes directly from code strings—covering GPU kernel latency, program memory usage, and even neural network accuracy and latency—without hand-engineered features. A 300M-parameter encoder–decoder initialized from T5-Gemma achieves strong rank correlations across heterogeneous tasks and languages, using a single text-to-number decoder that emits digits with constrained decoding.

What exactly is new?

  • Unified code-to-metric regression: One RLM predicts (i) peak memory from high-level code (Python/C/C++ and more), (ii) latency for Triton GPU kernels, and (iii) accuracy and hardware-specific latency from ONNX graphs—by reading raw text representations and decoding numeric outputs. No feature engineering, graph encoders, or zero-cost proxies are required.
  • Concrete results: Reported correlations include Spearman ρ ≈ 0.93 on APPS LeetCode memory, ρ ≈ 0.52 for Triton kernel latency, ρ > 0.5 on average across 17 CodeNet languages, and Kendall τ ≈ 0.46 across five classic NAS search spaces—competitive with, and in some cases surpassing, graph-based predictors.
  • Multi-objective decoding: Because the decoder is autoregressive, the model conditions later metrics on earlier ones (e.g., accuracy → per-device latencies), capturing realistic trade-offs along Pareto fronts (a toy Pareto-filtering sketch follows the paper link below).
https://arxiv.org/abs/2509.26476
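
To make the Pareto-front framing concrete—this is our own toy illustration, not code from the paper—the snippet below takes hypothetical predicted (accuracy, latency) pairs for candidate architectures and keeps only the non-dominated ones, the kind of triage a multi-metric predictor enables:

```python
# Minimal sketch: Pareto filtering of predicted (accuracy, latency) pairs.
# Candidate names and numbers are made up for illustration.
def pareto_front(candidates):
    """Keep candidates not dominated by any other (higher accuracy AND lower latency)."""
    front = []
    for name, acc, lat in candidates:
        dominated = any(
            other_acc >= acc and other_lat <= lat and (other_acc, other_lat) != (acc, lat)
            for _, other_acc, other_lat in candidates
        )
        if not dominated:
            front.append((name, acc, lat))
    return front

predictions = [
    ("arch_a", 0.76, 12.4),  # (accuracy, latency in ms) — hypothetical RLM outputs
    ("arch_b", 0.74, 8.1),
    ("arch_c", 0.71, 9.9),   # dominated by arch_b
    ("arch_d", 0.78, 20.3),
]
print(pareto_front(predictions))
# [('arch_a', 0.76, 12.4), ('arch_b', 0.74, 8.1), ('arch_d', 0.78, 20.3)]
```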

Why is this important?

Performance prediction pipelines in compilers, GPU kernel selection, and NAS often rely on bespoke features, syntax trees, or GNN encoders that are brittle to new ops and languages. Treating regression as next-token prediction over numbers standardizes the stack: tokenize inputs as plain text (source code, Triton IR, ONNX), then decode calibrated numeric strings digit by digit with constrained sampling. This reduces maintenance cost and improves transfer to new tasks via fine-tuning.
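
As a rough illustration of constrained digit-by-digit decoding—a simplified sketch with assumed token names and grammar, not the paper's implementation—at each step the logits of tokens that would break the numeral grammar are masked out before sampling:

```python
import math
import random

# Simplified vocabulary: sign, ten digits, and an end-of-number marker.
VOCAB = ["+", "-", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "<end>"]

def allowed_tokens(prefix):
    """Toy constraint grammar: a sign first, then digits, then optionally <end>."""
    if not prefix:
        return {"+", "-"}
    if len(prefix) < 4:                      # require a few digits before allowing <end>
        return set("0123456789")
    return set("0123456789") | {"<end>"}

def constrained_sample(logits_fn, max_len=8):
    """Sample one numeric string, masking invalid tokens at every step."""
    prefix = []
    while len(prefix) < max_len:
        logits = logits_fn(prefix)           # stand-in for the decoder forward pass
        allowed = allowed_tokens(prefix)
        masked = [l if t in allowed else -math.inf for t, l in zip(VOCAB, logits)]
        weights = [math.exp(l) for l in masked]
        tok = random.choices(VOCAB, weights=weights)[0]
        if tok == "<end>":
            break
        prefix.append(tok)
    return "".join(prefix)

# Dummy "model" with uniform logits; a real decoder would condition on the code string.
print(constrained_sample(lambda prefix: [0.0] * len(VOCAB)))
```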

Data and benchmarks

  • Code-Regression dataset (HF): Curated to support code-to-metric tasks spanning APPS/LeetCode runs, Triton kernel latencies (KernelBook-derived), and CodeNet memory footprints.
  • NAS/ONNX suite: Architectures from NASBench-101/201, FBNet, Once-for-All (MB/PN/RN), Two-Path, HiAML, Inception, and NDS are exported to ONNX text to predict accuracy and device-specific latency (a minimal export sketch follows this list).
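
The architecture-to-text step can be pictured with standard tooling. The sketch below is our own, with a placeholder toy model and file name, not the paper's pipeline: it exports a PyTorch module to ONNX and prints the graph as a string—the kind of raw textual input a code-to-metric regressor reads.

```python
import torch
import torch.nn as nn
import onnx

# Placeholder architecture standing in for a NAS candidate.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

# Export to ONNX, then render the graph as text.
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy, "candidate.onnx", opset_version=13)
graph_text = onnx.helper.printable_graph(onnx.load("candidate.onnx").graph)
print(graph_text[:500])  # truncated preview of the textual graph
```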

How does it work?

  • Backbone: Encoder–decoder with a T5-Gemma encoder initialization (~300M params). Inputs are raw strings (code or ONNX). Outputs are numbers emitted as sign/exponent/mantissa digit tokens; constrained decoding enforces valid numerals and supports uncertainty via sampling (a minimal tokenization sketch follows this list).
  • Ablations: (i) language pretraining speeds up convergence and improves Triton latency prediction; (ii) emitting numbers via the decoder outperforms MSE regression heads, even with y-normalization; (iii) learned tokenizers specialized for ONNX operators increase the effective context; (iv) longer contexts help; (v) scaling to a larger Gemma encoder further improves correlation with sufficient tuning.
  • Training code: The regress-lm library provides text-to-text regression utilities, constrained decoding, and multi-task pretraining/fine-tuning recipes.
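
Here is a minimal sketch of the sign/exponent/mantissa idea referenced above: encode a float as a short sequence of discrete tokens a decoder can emit one by one, and decode it back. The exact token format is our simplification, not the paper's tokenizer.

```python
import math

def float_to_tokens(y, mantissa_digits=4):
    """Encode y as [<sign>, <exponent sign>, <exponent>, <mantissa digits...>]."""
    sign = "+" if y >= 0 else "-"
    y = abs(y)
    if y == 0:
        return [sign, "E+", "0"] + ["0"] * mantissa_digits
    exponent = math.floor(math.log10(y))
    mantissa = y / (10 ** exponent)                         # normalized to [1, 10)
    digits = f"{mantissa:.{mantissa_digits - 1}f}".replace(".", "")
    return [sign, "E+" if exponent >= 0 else "E-", str(abs(exponent))] + list(digits)

def tokens_to_float(tokens):
    """Invert float_to_tokens: recover the numeric value from its token sequence."""
    sign = 1.0 if tokens[0] == "+" else -1.0
    exponent = int(tokens[2]) * (1 if tokens[1] == "E+" else -1)
    mantissa = int("".join(tokens[3:])) / 10 ** (len(tokens) - 4)
    return sign * mantissa * 10 ** exponent

latency_ms = 3.721                            # a hypothetical measured kernel latency
toks = float_to_tokens(latency_ms)            # ['+', 'E+', '0', '3', '7', '2', '1']
print(toks, tokens_to_float(toks))            # round-trips back to 3.721
```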

Stats that matter

  • APPS (Python) memory: Spearman ρ > 0.9.
  • CodeNet (17 languages) memory: average ρ > 0.5; strongest languages include C/C++ (~0.74–0.75).
  • Triton kernels (A6000) latency: ρ ≈ 0.52.
  • NAS ranking: average Kendall τ ≈ 0.46 across NASNet, Amoeba, PNAS, ENAS, and DARTS; competitive with FLAN and GNN baselines.
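
For reference, the rank correlations quoted above can be computed with scipy; the arrays below are made-up measurements and predictions, not the paper's data:

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical measured latencies (ms) and a predictor's outputs for five kernels.
measured  = [1.8, 3.2, 0.9, 4.5, 2.6]
predicted = [2.0, 1.9, 1.1, 5.2, 2.4]

rho, _ = spearmanr(measured, predicted)    # rank correlation: 1.0 = identical ordering
tau, _ = kendalltau(measured, predicted)   # pairwise ordering agreement
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
# Spearman rho = 0.70, Kendall tau = 0.60 for this toy example
```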

Key Takeaways

  1. Unified code-to-metric regression works. A single ~300M-parameter T5Gemma-initialized model (“RLM”) predicts (a) memory from high-level code, (b) Triton GPU kernel latency, and (c) model accuracy + device latency from ONNX—directly from text, with no hand-engineered features.
  2. The evaluation reports Spearman ρ > 0.9 on APPS memory, ≈0.52 on Triton latency, >0.5 on average across 17 CodeNet languages, and Kendall τ ≈ 0.46 across five NAS spaces.
  3. Numbers are decoded as text with constraints. Instead of a regression head, RLM emits numeric tokens under constrained decoding, enabling multi-metric, autoregressive outputs (e.g., accuracy followed by multi-device latencies) and uncertainty via sampling.
  4. The Code-Regression dataset unifies APPS/LeetCode memory, Triton kernel latency, and CodeNet memory; the regress-lm library provides the training/decoding stack.

Our Comments

It is genuinely interesting how this work reframes performance prediction as text-to-number generation: a compact T5Gemma-initialized RLM reads source code (Python/C++), Triton kernels, or ONNX graphs and emits calibrated numeric outputs via constrained decoding. The reported correlations—APPS memory (ρ > 0.9), Triton latency on an RTX A6000 (≈0.52), and NAS Kendall τ ≈ 0.46—are strong enough to matter for compiler heuristics, kernel pruning, and multi-objective NAS triage without bespoke features or GNNs. The open dataset and library make replication easy and lower the barrier to fine-tuning on new hardware or languages.


Check out the Paper, GitHub Page, and Dataset Card.

