Cache-to-Cache (C2C): Direct Semantic Communication Between Large Language Models via KV-Cache Fusion
Can large language models collaborate without sending a single token of text? A team of researchers from Tsinghua University, Infinigence AI, The Chinese University of Hong Kong, Shanghai AI Laboratory, and Shanghai Jiao Tong University says yes. Cache-to-Cache (C2C) is a new communication paradigm in which large language models exchange information through their KV-Cache rather than through generated text.

Text communication is the bottleneck in multi-LLM systems
Current multi-LLM systems mostly communicate through text: one model writes an explanation, and another model reads it as context.
This design has three practical costs:
- Internal activations are compressed into short natural-language messages, so much of the semantic signal in the KV-Cache never crosses the interface.
- Natural language is ambiguous. Even with structured protocols, a coder model may encode structural signals, such as the role of an HTML <p> tag, that do not survive a vague textual description.
- Every communication step requires token-by-token decoding, which dominates latency in long analytical exchanges.
The C2C work asks a direct question: can we treat the KV-Cache itself as the communication channel instead?
Oracle experiments: can the KV-Cache carry communication?
The research team first runs two oracle-style experiments to test whether the KV-Cache is a useful communication medium.
Cache enrichment oracle
They compare three setups on multiple-choice benchmarks:
- Direct: prefill on the question only.
- Few-shot: prefill on exemplars plus the question, giving a longer cache.
- Oracle: prefill on exemplars plus the question, then discard the exemplar segment and keep only the question-aligned slice of the cache, so the cache length matches Direct.

Oracle improves accuracy from 58.42 percent to 62.34 percent at the same cache length, while Few-shot reaches 63.39 percent. This demonstrates that enriching the question's KV-Cache itself, even without extra tokens, improves performance. Layer-wise analysis shows that enriching only selected layers is better than enriching all layers, which later motivates a gating mechanism.
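The oracle idea is easy to prototype against a Hugging Face-style KV-Cache interface. The sketch below is an illustration only, not the paper's code: the checkpoint name, the probe token, and the way the question-aligned slice is carved out of the cache are assumptions, and the exact cache API varies across transformers versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

model_name = "Qwen/Qwen3-0.6B"  # assumed checkpoint; any small causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

exemplars = "Q: What is 2 + 2? A: 4\n"   # few-shot prefix
question = "Q: What is 3 + 5? A:"

# Prefill on exemplars + question to build an "enriched" cache.
ids_full = tok(exemplars + question, return_tensors="pt").input_ids
n_prefix = tok(exemplars, return_tensors="pt").input_ids.shape[1]
with torch.no_grad():
    out = model(ids_full, use_cache=True)

# Oracle step: drop the exemplar segment and keep only the question-aligned slice,
# so the cache is as long as in the Direct setup but still carries few-shot context.
sliced = DynamicCache.from_legacy_cache(tuple(
    (k[:, :, n_prefix:, :], v[:, :, n_prefix:, :]) for k, v in out.past_key_values
))

# Decode one step conditioned on the enriched, question-length cache.
next_ids = tok(" The", return_tensors="pt", add_special_tokens=False).input_ids
with torch.no_grad():
    step = model(next_ids, past_key_values=sliced, use_cache=True)
print(step.logits[:, -1].argmax(-1))  # next-token prediction under the enriched cache
```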
Cache transformation oracle
Next, they test whether the KV-Cache of one model can be transformed into the space of another model. A three-layer MLP is trained to map KV-Cache from Qwen3-4B to Qwen3-0.6B. t-SNE plots show that the transformed cache lies inside the target cache manifold, but only in a sub-region.
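A minimal version of such a projector can be written as a plain three-layer MLP. The hidden width, activation, and the flattened per-token KV dimensions below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class KVProjector(nn.Module):
    """Maps per-token Sharer KV vectors into the Receiver's cache space."""
    def __init__(self, sharer_dim: int, receiver_dim: int, hidden_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sharer_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, receiver_dim),
        )

    def forward(self, sharer_kv: torch.Tensor) -> torch.Tensor:
        # sharer_kv: (batch, seq, sharer_dim) flattened key or value vectors
        return self.net(sharer_kv)

# Example: map larger-model KV vectors (assumed dim 2048) into a smaller model's space (dim 1024).
proj = KVProjector(sharer_dim=2048, receiver_dim=1024)
fake_sharer_keys = torch.randn(2, 16, 2048)
print(proj(fake_sharer_keys).shape)  # torch.Size([2, 16, 1024])
```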

C2C: direct semantic communication through the KV-Cache
Based on these oracles, the research team defines Cache-to-Cache communication between a Sharer and a Receiver model.
During prefill, both models read the same input and produce layer-wise KV-Caches. For each Receiver layer, C2C selects a mapped Sharer layer and applies a C2C Fuser to produce a fused cache. During decoding, the Receiver predicts tokens conditioned on this fused cache instead of its original cache.
The C2C Fuser follows a residual integration principle and has three modules (a minimal sketch follows the list):
- A projection module concatenates Sharer and Receiver KV-Cache vectors, applies a projection layer, then a feature-fusion layer.
- A dynamic weighting module modulates attention heads based on the input so that some heads rely more on Sharer information.
- A learnable gate decides, per layer, whether to inject Sharer context into that layer. The gate uses a Gumbel-sigmoid during training and becomes binary at inference.
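Below is a minimal, self-contained sketch of what such a fuser could look like for a single Receiver layer, assuming flattened per-token cache vectors. The shapes, activation, gating temperature, and exact form of the head weighting are assumptions based on the description above, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class C2CFuser(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        # Projection module: concatenate Sharer and Receiver cache vectors, project,
        # then apply a feature-fusion layer back to the Receiver's cache dimension.
        self.project = nn.Linear(2 * dim, dim)
        self.fuse = nn.Linear(dim, dim)
        # Dynamic weighting: input-conditioned per-head weights on the fused stream.
        self.head_weights = nn.Linear(dim, num_heads)
        # Learnable per-layer gate: Gumbel-sigmoid during training, binary at inference.
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def gate(self, tau: float = 1.0) -> torch.Tensor:
        if self.training:
            u = torch.rand_like(self.gate_logit)
            noise = torch.log(u) - torch.log1p(-u)  # logistic noise for the Gumbel-sigmoid relaxation
            return torch.sigmoid((self.gate_logit + noise) / tau)
        return (self.gate_logit > 0).float()

    def forward(self, recv_kv: torch.Tensor, sharer_kv: torch.Tensor) -> torch.Tensor:
        # recv_kv, sharer_kv: (batch, seq, dim) flattened key or value caches for one layer
        b, s, d = recv_kv.shape
        fused = self.fuse(torch.relu(self.project(torch.cat([recv_kv, sharer_kv], dim=-1))))
        # Per-head weights conditioned on the Receiver's own cache content.
        w = torch.sigmoid(self.head_weights(recv_kv))              # (b, s, heads)
        w = w.repeat_interleave(d // self.num_heads, dim=-1)       # broadcast to cache dim
        # Residual integration: Receiver cache plus gated, head-weighted Sharer contribution.
        return recv_kv + self.gate() * w * fused

fuser = C2CFuser(dim=1024, num_heads=16)
out = fuser(torch.randn(1, 8, 1024), torch.randn(1, 8, 1024))
print(out.shape)  # torch.Size([1, 8, 1024])
```

The residual form is the key design choice: when a layer's gate is closed, the Receiver's cache passes through unchanged, so an unhelpful Sharer cannot destabilize the Receiver's own representation.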
Sharer and Receiver can come from different model families and sizes, so C2C also defines:
- Token alignment: decode Receiver tokens to strings, re-encode them with the Sharer tokenizer, and pick the Sharer tokens with maximal string coverage.
- Layer alignment: a terminal strategy that pairs the top layers first and walks backward until the shallower model is fully covered, as illustrated in the sketch below.
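The layer-alignment rule can be stated in a few lines. The helper below is a hypothetical illustration of the terminal strategy described above; the paper's exact pairing may differ.

```python
def terminal_layer_alignment(n_sharer: int, n_receiver: int) -> list[tuple[int, int]]:
    """Return (receiver_layer, sharer_layer) pairs, aligned from the top layers downward."""
    pairs = []
    depth = min(n_sharer, n_receiver)
    for offset in range(depth):
        # Pair the topmost layers first, then walk backward toward the bottom.
        pairs.append((n_receiver - 1 - offset, n_sharer - 1 - offset))
    return sorted(pairs)

# Example: a 28-layer Receiver paired with a 36-layer Sharer.
print(terminal_layer_alignment(n_sharer=36, n_receiver=28)[:3])
# [(0, 8), (1, 9), (2, 10)] -- the Sharer's bottom 8 layers are left unpaired
```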

During training, both LLMs are frozen. Only the C2C module is trained, using a next-token prediction loss on the Receiver's outputs. The main C2C fusers are trained on the first 500k samples of the OpenHermes2.5 dataset and evaluated on OpenBookQA, ARC-Challenge, MMLU-Redux, and C-Eval.
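In code, this setup amounts to freezing both base models and handing the optimizer only the fuser parameters. The sketch below reuses the hypothetical C2CFuser class from the earlier sketch; the model names, dimensions, and learning rate are assumptions, not the paper's exact settings.

```python
import torch
from transformers import AutoModelForCausalLM

receiver = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
sharer = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
for p in list(receiver.parameters()) + list(sharer.parameters()):
    p.requires_grad_(False)            # both base LLMs stay frozen

# One fuser per mapped Receiver layer (C2CFuser is the hypothetical module sketched above).
fusers = torch.nn.ModuleList(
    [C2CFuser(dim=1024, num_heads=16) for _ in range(receiver.config.num_hidden_layers)]
)
optimizer = torch.optim.AdamW(fusers.parameters(), lr=1e-4)

# The loss is ordinary next-token prediction on the Receiver's outputs: cross-entropy
# between the Receiver's shifted logits (computed over the fused cache) and the labels,
# back-propagated only into the fuser parameters.
```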
Accuracy and latency: C2C versus text communication
Across many Sharer-Receiver combinations built from Qwen2.5, Qwen3, Llama3.2, and Gemma3, C2C consistently improves Receiver accuracy and reduces latency:
- C2C achieves about 8.5 to 10.5 percent higher average accuracy than the individual models.
- C2C outperforms text communication by about 3.0 to 5.0 percent on average.
- C2C delivers around a 2x average latency speedup compared with text-based collaboration, and in some configurations the speedup is larger.
A concrete example uses Qwen3-0.6B as the Receiver and Qwen2.5-0.5B as the Sharer. On MMLU-Redux, the Receiver alone reaches 35.53 percent, text-to-text communication reaches 41.03 percent, and C2C reaches 42.92 percent. Average time per query for text-to-text is 1.52 units, while C2C stays close to the single model at 0.40. Similar patterns appear on OpenBookQA, ARC-Challenge, and C-Eval.
On LongBenchV1, with the same pair, C2C outperforms text communication across all sequence-length buckets. For sequences of 0 to 4k tokens, text communication reaches 29.47 while C2C reaches 36.64. The gains persist for 4k to 8k tokens and for longer contexts.

Key Takeaways
- Cache-to-Cache communication lets a Sharer model send information to a Receiver model directly through the KV-Cache, so collaboration does not need intermediate text messages, which removes the token bottleneck and reduces semantic loss in multi-model systems.
- Two oracle studies show that enriching only the question-aligned slice of the cache improves accuracy at a fixed sequence length, and that the KV-Cache of a larger model can be mapped into a smaller model's cache space through a learned projector, confirming the cache as a viable communication medium.
- The C2C Fuser architecture combines Sharer and Receiver caches with a projection module, dynamic head weighting, and a learnable per-layer gate, and integrates everything in a residual manner, which lets the Receiver selectively absorb Sharer semantics without destabilizing its own representation.
- Consistent accuracy and latency gains are observed across Qwen2.5, Qwen3, Llama3.2, and Gemma3 model pairs, with about 8.5 to 10.5 percent average accuracy improvement over a single model, 3 to 5 percent gains over text-to-text communication, and around 2x faster responses because unnecessary decoding is removed.
Editorial Comments
Cache-to-Cache reframes multi-LLM communication as a direct semantic transfer problem, not a prompt-engineering problem. By projecting and fusing KV-Cache between Sharer and Receiver with a neural fuser and learnable gating, C2C uses the deep, specialized semantics of both models while avoiding explicit intermediate text generation, which is both an information bottleneck and a latency cost. With 8.5 to 10.5 percent higher accuracy and about 2x lower latency than text communication, C2C is a strong systems-level step toward KV-native collaboration between models.
Check out the Paper and Codes.
