Accelerating Language Models with KV Caching

Published: 2025-12-15

Let's dive into the architecture of Transformers to understand one of the most important optimizations in the LLM world: KV Caching.

The Problem: The Model's Sisyphean Task

Models like GPT or Llama work in an autoregressive manner. This means they generate one token (part of a word) at a time, and each new token depends on all previous ones.

Imagine you're writing a sentence: "Artificial intelligence is..."

  1. Model sees "Artificial" -> generates "intelligence".
  2. Model sees "Artificial intelligence" -> generates "is".
  3. Model sees "Artificial intelligence is" -> generates "the", and so on.

In the standard (naive) approach, in step 3, the model would have to recalculate the same mathematical operations for the words "Artificial" and "intelligence", even though it did this just a fraction of a second earlier!

It's like reading a book where you have to start from the first page every time you want to read the next word. A waste of time and computational power.

The Solution: Attention Mechanism and Q, K, V Matrices

The heart of Transformers is the Self-Attention mechanism. It allows the model to understand relationships between words (e.g., that the pronoun "she" refers to "Kate" from the beginning of the sentence).

For each token, the model calculates three vectors (numerical representations):

  1. Query (Q): What is this token looking for?
  2. Key (K): What does this token contain/represent?
  3. Value (V): What is the content of this token that will be passed forward?

Computing attention boils down to matrix multiplication: the Query of the current token is scored (via dot products) against the Keys of all previous tokens, and the resulting weights decide how much of each Value is passed forward.
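To make this concrete, here is a minimal single-head sketch in PyTorch: the new token's Query is scored against the Keys of earlier tokens, and the weights are used to blend their Values. All sizes and tensors below are illustrative, not taken from any particular model.

import torch
import torch.nn.functional as F

d_model = 64                              # embedding size (illustrative)
k_past = torch.randn(10, d_model)         # Keys of the 10 previous tokens
v_past = torch.randn(10, d_model)         # Values of the 10 previous tokens
q_new = torch.randn(1, d_model)           # Query of the current token

# Compare the new Query with every previous Key, scale, and normalize...
scores = q_new @ k_past.T / d_model**0.5  # shape: (1, 10)
weights = F.softmax(scores, dim=-1)       # attention weights over the history
# ...then use those weights to mix the Values.
output = weights @ v_past                 # shape: (1, d_model)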

Enter KV Cache

Instead of calculating the K and V vectors for the entire conversation history from scratch at each step, we can do something clever: remember them.

This is exactly what KV Caching is.

  1. When the model processes the first words, it calculates their K and V and saves them in cache memory.
  2. When it generates a new token, it calculates K and V only for this new token.
  3. Then it "appends" the new K and V to those already in memory.
  4. It performs attention calculations using the ready-made set.
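In code, those four steps look roughly like the toy sketch below. Everything here (the random projection matrices, the dimensions, the single attention head) is illustrative; real frameworks such as Hugging Face Transformers manage this cache for you.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model = 64                                          # illustrative size
W_q = torch.randn(d_model, d_model) / d_model**0.5    # toy Q projection
W_k = torch.randn(d_model, d_model) / d_model**0.5    # toy K projection
W_v = torch.randn(d_model, d_model) / d_model**0.5    # toy V projection

k_cache, v_cache = [], []                             # the KV cache

def generate_step(x):
    # Step 2: compute Q, K, V only for the new token embedding x.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    # Step 3: append the new K and V to the cached history.
    k_cache.append(k)
    v_cache.append(v)
    K, V = torch.stack(k_cache), torch.stack(v_cache)
    # Step 4: attention over the ready-made set.
    weights = F.softmax(q @ K.T / d_model**0.5, dim=-1)
    return weights @ V

# Step 1: the first tokens fill the cache; every later call reuses it.
for token_embedding in torch.randn(5, d_model):
    out = generate_step(token_embedding)              # shape: (d_model,)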

Visual Comparison

Without KV Cache (for each new word):

Step 1: Calculate [Word A]
Step 2: Calculate [Word A, Word B]
Step 3: Calculate [Word A, Word B, Word C]

This grows quadratically: at every step the model reprocesses the entire sequence so far!

With KV Cache:

Step 1: Calculate [Word A] -> Save cache
Step 2: Retrieve cache + Calculate [Word B] -> Update cache
Step 3: Retrieve cache + Calculate [Word C] -> Update cache
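To put rough numbers on the difference, here is a tiny count of how many per-token K and V computations each approach performs over a 1000-token generation (illustrative arithmetic only):

n = 1000  # tokens generated (illustrative)

# Without cache: step t recomputes K and V for all t tokens seen so far.
naive = sum(t for t in range(1, n + 1))   # 500500, grows roughly as n^2 / 2

# With cache: step t computes K and V only for the single new token.
cached = n                                # 1000, grows linearly

print(naive, cached)                      # 500500 vs 1000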

The Price of Speed: VRAM Memory

In computer science, there's no free lunch. We gain a huge performance boost (total generation work drops from quadratic to roughly linear in the sequence length), but we pay for it with memory.

KV Cache must be stored in graphics card memory (VRAM/GPU memory) so the graphics processor has fast access to it.

The longer the conversation context (the more words the model remembers) and the larger the model (more layers), the more space the cache takes up.

Simple math:

If your model has 32 layers and stores a Key and a Value vector per token in every layer, then with a long conversation (e.g., 8000 tokens) the KV Cache alone can occupy gigabytes of VRAM, in extreme cases even more than the model weights themselves!
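As a back-of-the-envelope check, assume Llama-2-7B-like dimensions (32 layers, 32 key/value heads of size 128, 16-bit values); the numbers are illustrative and real deployments vary:

# KV cache size under the assumed, Llama-2-7B-like dimensions.
n_layers   = 32      # transformer layers
n_kv_heads = 32      # key/value heads per layer
head_dim   = 128     # size of each head
bytes_per  = 2       # 16-bit (fp16/bf16) values
seq_len    = 8000    # tokens in the conversation

# The factor of 2 covers storing both Keys and Values.
per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per
total = per_token * seq_len
print(per_token // 1024, "KiB per token")     # 512 KiB per token
print(round(total / 1e9, 1), "GB in total")   # ~4.2 GB for 8000 tokens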

Summary: Why Is This Worth Knowing?

KV Caching is a fundamental optimization technique. Thanks to it:

  • Interaction is smooth: You don't wait forever for the next word.
  • We save energy: The GPU performs fewer unnecessary matrix multiplication operations.

Understanding this mechanism also helps understand the limitations of modern systems – for example, why "long context" (e.g., pasting an entire book into a prompt) is so computationally and memory expensive. That's where KV Cache swells to enormous sizes.

Inspiration: Hugging Face Blog - KV Caching
