
An End-to-End Coding Guide to NVIDIA KVPress for Long-Context LLM Inference, KV Cache Compression, and Memory-Efficient Generation

In this tutorial, we take an in-depth, hands-on approach to exploring NVIDIA’s KVPress and understanding how it can make long-context language model inference more efficient. We begin by setting up the full environment, installing the required libraries, loading a compact instruct model, and preparing a simple workflow that runs in Colab while still demonstrating the real value of KV cache compression. As we move through the implementation, we create a synthetic long-context corpus, define targeted extraction questions, and run several inference experiments to directly compare standard generation with different KVPress strategies. By the end of the tutorial, we will have built a stronger intuition for how long-context optimization works in practice, how different press methods affect performance, and how this kind of workflow can be adapted for real-world retrieval, document analysis, and memory-sensitive LLM applications.

import os, sys, subprocess, textwrap, time, gc, json, math, random, warnings, inspect
warnings.filterwarnings("ignore")


def run(cmd):
   print("n[RUN]", " ".be part of(cmd))
   subprocess.check_call(cmd)


run([sys.executable, "-m", "pip", "install", "-q", "--upgrade", "pip"])
run([sys.executable, "-m", "pip", "install", "-q", "torch", "transformers", "accelerate", "bitsandbytes", "sentencepiece", "kvpress==0.4.0"])


try:
    from google.colab import userdata
    hf_token = userdata.get("HF_TOKEN")
except Exception:
    hf_token = os.environ.get("HF_TOKEN", "")


if not hf_token:
    try:
        import getpass
        hf_token = getpass.getpass("Enter your Hugging Face token (leave empty if the model is public and accessible): ").strip()
    except Exception:
        hf_token = ""


if hf_token:
   os.environ["HF_TOKEN"] = hf_token
   os.environ["HUGGINGFACEHUB_API_TOKEN"] = hf_token


import torch
import transformers
import kvpress


from transformers import pipeline, BitsAndBytesConfig
from kvpress import ExpectedAttentionPress, KnormPress


print("Python:", sys.model.cut up()[0])
print("Torch:", torch.__version__)
print("Transformers:", transformers.__version__)
print("CUDA obtainable:", torch.cuda.is_available())
if torch.cuda.is_available():
   print("GPU:", torch.cuda.get_device_name(0))


MODEL_ID = "Qwen/Qwen2.5-1.5B-Instruct"
MAX_NEW_TOKENS = 96
SEED = 42
random.seed(SEED)
torch.manual_seed(SEED)

We set up the Colab environment and install all the required libraries to run the KVPress workflow successfully. We securely collect the Hugging Face token, set environment variables, and import the core modules needed for model loading, pipeline execution, and compression experiments. We also print the runtime and hardware details so we clearly understand the setup in which we run the tutorial.
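As an optional sanity check of our own, we can also enumerate the press classes this kvpress build exposes. The snippet below is a small sketch, not an official API listing; attribute names and the presence of a version attribute can differ across kvpress releases.

# Optional sanity check: list the press classes exposed by this kvpress build.
available_presses = sorted(name for name in dir(kvpress) if name.endswith("Press"))
print("kvpress version:", getattr(kvpress, "__version__", "unknown"))
print("Available presses:", available_presses)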

if torch.cuda.is_available():
   torch.cuda.empty_cache()
   quantization_config = BitsAndBytesConfig(
       load_in_4bit=True,
       bnb_4bit_compute_dtype=torch.float16,
       bnb_4bit_quant_type="nf4",
       bnb_4bit_use_double_quant=True,
   )
   pipe = pipeline(
       "kv-press-text-generation",
        model=MODEL_ID,
       device_map="auto",
       token=hf_token if hf_token else None,
       model_kwargs={
           "quantization_config": quantization_config,
           "attn_implementation": "sdpa",
       },
   )
else:
   pipe = pipeline(
       "kv-press-text-generation",
        model=MODEL_ID,
       device_map="auto",
       torch_dtype=torch.float32,
       token=hf_token if hf_token else None,
       model_kwargs={
           "attn_implementation": "sdpa",
       },
   )


def cuda_mem():
   if not torch.cuda.is_available():
       return {"allocated_gb": None, "reserved_gb": None, "peak_gb": None}
   return {
       "allocated_gb": spherical(torch.cuda.memory_allocated() / 1024**3, 3),
       "reserved_gb": spherical(torch.cuda.memory_reserved() / 1024**3, 3),
       "peak_gb": spherical(torch.cuda.max_memory_allocated() / 1024**3, 3),
   }


def reset_peak():
   if torch.cuda.is_available():
       torch.cuda.reset_peak_memory_stats()


def extract_answer(x):
    if isinstance(x, list) and len(x) > 0:
       x = x[0]
   if isinstance(x, dict):
        for k in ["answer", "generated_text", "text", "output_text"]:
            if k in x:
               return x[k]
       return json.dumps(x, indent=2, ensure_ascii=False)
   return str(x)


def generate_once(context, question, press=None, label="run"):
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    reset_peak()
    start = time.time()
    out = pipe(
        context,
        question=question,
        press=press,
        max_new_tokens=MAX_NEW_TOKENS,
        do_sample=False,
        temperature=None,
        return_full_text=False,
    )
    elapsed = time.time() - start
    answer = extract_answer(out)
    stats = cuda_mem()
    result = {
        "label": label,
        "elapsed_sec": round(elapsed, 2),
        "allocated_gb": stats["allocated_gb"],
        "reserved_gb": stats["reserved_gb"],
        "peak_gb": stats["peak_gb"],
        "answer": answer.strip(),
    }
    return result

We initialize the kv-press-text-generation pipeline and configure it differently depending on whether GPU support is available. We define the helper functions that measure CUDA memory usage, reset peak memory, extract answers from model outputs, and run a single generation pass cleanly. This part provides the reusable execution logic that powers the rest of the tutorial and enables us to compare baseline inference with KV cache compression.
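Before committing to the long corpus, a tiny smoke test confirms that the pipeline and helpers work end to end. The toy context and question below are placeholders we invent just for this check; they are not part of the experiments.

# Quick smoke test on a throwaway context; the content here is arbitrary.
smoke = generate_once(
    "The demo deployment region for this sanity check is Oman North.",
    "Which deployment region is mentioned?",
    press=None,
    label="smoke_test",
)
print(smoke["answer"])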

company_records = [
   {"company": "Arcturus Dynamics", "hq": "Bengaluru", "founded": 2017, "focus": "warehouse robotics"},
   {"company": "BlueMesa Energy", "hq": "Muscat", "founded": 2014, "focus": "grid analytics"},
   {"company": "CinderPeak Health", "hq": "Pune", "founded": 2019, "focus": "clinical imaging AI"},
   {"company": "DeltaForge Marine", "hq": "Kochi", "founded": 2012, "focus": "autonomous vessel telemetry"},
   {"company": "EonCircuit Labs", "hq": "Hyderabad", "founded": 2020, "focus": "edge silicon tooling"},
   {"company": "Frostline Aero", "hq": "Jaipur", "founded": 2016, "focus": "drone inspection"},
]


needle_facts = [
   "PROJECT NEEDLE 1: The internal codename for the confidential pilot program is SAFFRON-17.",
   "PROJECT NEEDLE 2: The audit escalation owner is Meera Vashisht.",
   "PROJECT NEEDLE 3: The approved deployment region for the first production rollout is Oman North.",
   "PROJECT NEEDLE 4: The emergency rollback phrase is amber lantern.",
   "PROJECT NEEDLE 5: The signed commercial start date is 17 September 2026.",
]


background_block = """
Long-context methods typically comprise repeated operational notes, historic data, coverage sections, and noisy retrieval artifacts.
The objective of this demo is to create a realistically lengthy immediate the place just a few particulars matter for downstream answering.
KV cache compression reduces reminiscence utilization by pruning cached key-value pairs whereas preserving reply high quality.
"""


policy_block = """
Operational coverage abstract:
1. Safety overrides throughput when sensor confidence falls beneath threshold.
2. Logs ought to protect area, timestamp, gadget class, and operator approval state.
3. Field trials might comprise duplicated annexes, OCR-style artifacts, and repeated compliance summaries.
4. A great long-context mannequin should ignore irrelevant repetition and retrieve the particular particulars that matter.
"""


records_text = []
for i in range(120):
    rec = company_records[i % len(company_records)]
    records_text.append(
        f"Record {i+1}: {rec['company']} is headquartered in {rec['hq']}, founded in {rec['founded']}, and focuses on {rec['focus']}. "
        f"Quarterly memo {i+1}: retention remained stable, operator training progressed, and the compliance appendix was reattached for review."
   )


needle_insert_positions = {18, 41, 73, 96, 111}
full_corpus = []
for i, para in enumerate(records_text):
   full_corpus.append(background_block.strip())
   full_corpus.append(policy_block.strip())
   full_corpus.append(para)
   if i in needle_insert_positions:
       full_corpus.append(needle_facts[len([x for x in needle_insert_positions if x <= i]) - 1])

We create a synthetic long-context dataset to test the KVPress system in a controlled yet realistic way. We define company records, insert important hidden facts at different positions, and mix them with repeated background and policy blocks, making the prompt long and noisy. This helps us simulate the context in which memory-efficient inference matters and the model must retrieve only the truly relevant details.
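As a small self-check of our own, we can verify that every needle actually landed in the corpus before relying on it for evaluation; this assertion sketch uses nothing beyond the variables defined above.

# Verify that every hidden fact was inserted into the corpus as intended.
for fact in needle_facts:
    assert any(fact in para for para in full_corpus), f"Missing needle: {fact[:40]}"
print("All", len(needle_facts), "needles are present in the corpus.")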

context = "nn".be part of(full_corpus)


question = textwrap.dedent("""
Answer using only the provided context.
Give a compact JSON object with exactly these keys:
commercial_start_date
deployment_region
audit_owner
rollback_phrase
pilot_codename
""").strip()


print("nContext characters:", len(context))
print("Approx phrases:", len(context.cut up()))


experiments = []


baseline = generate_once(context, question, press=None, label="baseline_no_compression")
experiments.append(baseline)


presses = [
   ("expected_attention_0.7", ExpectedAttentionPress(compression_ratio=0.7)),
   ("expected_attention_0.5", ExpectedAttentionPress(compression_ratio=0.5)),
   ("knorm_0.5", KnormPress(compression_ratio=0.5)),
]


for label, press in presses:
    try:
        result = generate_once(context, question, press=press, label=label)
        experiments.append(result)
    except Exception as e:
        experiments.append({
            "label": label,
            "elapsed_sec": None,
            "allocated_gb": None,
            "reserved_gb": None,
            "peak_gb": None,
            "answer": f"FAILED: {type(e).__name__}: {e}"
        })


try:
    from kvpress import DecodingPress
    sig = inspect.signature(DecodingPress)
   kwargs = {"base_press": KnormPress()}
   if "compression_interval" in sig.parameters:
       kwargs["compression_interval"] = 10
   elif "compression_steps" in sig.parameters:
       kwargs["compression_steps"] = 10
   if "target_size" in sig.parameters:
       kwargs["target_size"] = 512
   elif "token_buffer_size" in sig.parameters:
       kwargs["token_buffer_size"] = 512
   if "hidden_states_buffer_size" in sig.parameters:
       kwargs["hidden_states_buffer_size"] = 0
   decoding_press = DecodingPress(**kwargs)
    decoding_result = generate_once(context, question, press=decoding_press, label="decoding_knorm")
   experiments.append(decoding_result)
except Exception as e:
   experiments.append({
       "label": "decoding_knorm",
       "elapsed_sec": None,
       "allocated_gb": None,
       "reserved_gb": None,
       "peak_gb": None,
       "reply": f"SKIPPED_OR_FAILED: {kind(e).__name__}: {e}"
   })

We assemble the final context, define the structured extraction question, and launch the core set of inference experiments. We first run the baseline without compression, then apply several press strategies to observe how different compression ratios affect the results. We also conduct a decoding-oriented compression experiment, which extends the tutorial beyond prefilling and provides a broader view of the KVPress framework.
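To build intuition for what these compression ratios mean in memory terms, a back-of-the-envelope estimate of KV cache size helps. The sketch below assumes common architecture figures for Qwen2.5-1.5B-Instruct (28 layers, 2 KV heads under grouped-query attention, head dimension 128); in practice, read them from pipe.model.config. kvpress interprets compression_ratio as the fraction of KV pairs pruned, so a ratio of 0.7 retains roughly 30% of the cache.

# Rough KV cache size: 2 (keys + values) * layers * KV heads * head_dim
# * sequence length * bytes per element. Architecture values are assumptions.
def kv_cache_gb(seq_len, num_layers=28, num_kv_heads=2, head_dim=128, bytes_per_elem=2):
    total_bytes = 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem
    return round(total_bytes / 1024**3, 4)

approx_tokens = len(context.split())  # crude word-count proxy for the tokenized length
print("Approx. KV cache at full length:", kv_cache_gb(approx_tokens), "GB")
print("With 70% of KV pairs pruned:", kv_cache_gb(int(approx_tokens * 0.3)), "GB")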

print("n" + "=" * 120)
print("RESULTS")
print("=" * 120)


for r in experiments:
   print(f"n[{r['label']}]")
   print("elapsed_sec:", r["elapsed_sec"])
   print("allocated_gb:", r["allocated_gb"])
   print("reserved_gb:", r["reserved_gb"])
   print("peak_gb:", r["peak_gb"])
   print("reply:")
   print(r["answer"])


print("n" + "=" * 120)
print("SIMPLE SUMMARY")
print("=" * 120)


def safe_float(x):
    try:
        return float(x)
    except Exception:
       return None


base_peak = safe_float(baseline["peak_gb"]) if baseline.get("peak_gb") is not None else None
base_time = safe_float(baseline["elapsed_sec"]) if baseline.get("elapsed_sec") is not None else None


for r in experiments[1:]:
   peak = safe_float(r["peak_gb"])
   t = safe_float(r["elapsed_sec"])
    peak_delta = None if base_peak is None or peak is None else round(base_peak - peak, 3)
    time_delta = None if base_time is None or t is None else round(base_time - t, 2)
   print({
       "label": r["label"],
       "peak_gb_saved_vs_baseline": peak_delta,
       "time_sec_saved_vs_baseline": time_delta,
       "answer_preview": r["answer"][:180].substitute("n", " ")
   })


print("n" + "=" * 120)
print("OPTIONAL NEXT STEPS")
print("=" * 120)
print("1. Swap MODEL_ID to a stronger long-context instruct mannequin that matches your GPU.")
print("2. Increase context size by duplicating records_text extra instances.")
print("3. Try different presses from kvpress, comparable to SnapKVPress, StreamingLLMPress, QFilterPress, or ChunkKVPress.")
print("4. Replace the artificial corpus with your personal lengthy PDF/textual content chunks and hold the identical analysis loop.")

We print all experiment outputs in a readable format and summarize the runtime and memory differences relative to the baseline. We calculate simple comparison metrics to quickly see how much memory or time each compression strategy saves. We then conclude with suggested next steps to extend the tutorial to stronger models, longer contexts, additional press methods, and real-world document workloads.
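Optionally, we can persist the collected results so that separate runs can be compared later; the filename below is an arbitrary choice of ours.

# Save the experiment records to disk for later comparison across runs.
with open("kvpress_experiments.json", "w") as f:
    json.dump(experiments, f, indent=2, ensure_ascii=False)
print("Saved", len(experiments), "experiment records to kvpress_experiments.json")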

In conclusion, we developed a strong practical understanding of how NVIDIA’s KVPress can be used to optimize long-context inference in a realistic Colab-based setting. We did more than simply run a model: we built an end-to-end workflow that installs the framework, loads the pipeline correctly, constructs a meaningful long-context input, applies several compression presses, and evaluates the results in terms of answer quality, runtime, and memory behavior. By comparing baseline generation with compressed KV-cache generation, we clearly observed the trade-offs involved. We gained useful intuition about when these methods can help reduce resource pressure without severely harming output fidelity. We also explored the framework’s flexibility by testing different press configurations and including an optional decoding-oriented compression path, providing a broader view of how KVPress can be used beyond a single static example.

