5/18/2025

AlphaEvolve by DeepMind cuts 4×4 complex-valued matrix multiplication to 48 scalar multiplications, the first advance over Strassen's 49 in 56 years. Plain-English explainer with Python examples.

AlphaEvolve: A 4×4 Matrix‑Multiplication Revolution in Just 48 Products

1. What exactly did AlphaEvolve achieve?

AlphaEvolve is a 2025 Google DeepMind AI system that discovered a brand‑new formula for multiplying two 4×4 complex‑valued matrices using only 48 scalar multiplications. The previous record of 49, obtained by applying Strassen’s 1969 algorithm recursively, had stood ever since; that single saved multiplication is the first improvement in 56 years.

In one sentence: AlphaEvolve turns a “minus‑one” into a milestone for linear algebra.


2. Does saving one multiplication really matter?

| Algorithm              | 4×4 multiplications | Recursive asymptotic complexity |
| ---------------------- | ------------------- | ------------------------------- |
| Standard               | 64                  | $\mathcal O(n^3)$               |
| Strassen (1969)        | 49                  | $\mathcal O(n^{2.807})$         |
| **AlphaEvolve (2025)** | **48**              | $\mathcal O(n^{2.792})$         |

When a faster 4×4 kernel is applied recursively to larger matrices, the “minus‑one” advantage compounds: every level of recursion multiplies the saving, so an n = 4^k matrix needs 48^k scalar multiplications instead of 49^k. At scale this translates into measurable compute and energy savings for deep‑learning training, physics simulation, and graphics rendering.
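The compounding is easy to quantify: a base kernel with M scalar multiplications, applied recursively, multiplies an n = 4^k matrix using M^k scalar products. A quick back‑of‑the‑envelope sketch:

```python
import math

def scalar_mults(n, base_mults):
    """Multiplications used when a 4x4 kernel with `base_mults`
    products is applied recursively: T(4**k) = base_mults**k."""
    k = round(math.log(n, 4))
    return base_mults ** k

n = 4 ** 5  # 1024 x 1024 matrices
strassen = scalar_mults(n, 49)
alphaevolve = scalar_mults(n, 48)
print(f"{strassen:,} vs {alphaevolve:,}")           # 282,475,249 vs 254,803,968
print(f"saving: {1 - alphaevolve / strassen:.1%}")  # ~9.8%, and growing with n
print(f"exponents: log4(49)={math.log(49, 4):.3f}, log4(48)={math.log(48, 4):.3f}")
```

The exponents printed on the last line are exactly the table entries above: the recursive complexity of a 4×4 kernel with M products is $\mathcal O(n^{\log_4 M})$.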


3. Three lines of code to feel AlphaEvolve

A tiny excerpt from matrix_multiplication_algorithms.py shows how clean the API is:

from matrix_multiplication_algorithms import alphaevolve_4x4

C = alphaevolve_4x4(A, B)   # exactly 48 scalar products under the hood

Digging into the source you’ll see a flat, loop‑free structure:

def alphaevolve_4x4(A, B):
    """
    AlphaEvolve’s 4×4 matrix‑multiplication kernel.
    Executes precisely 48 scalar multiplications.
    """
    # ... build linear combinations a0–a47, b0–b47
    m0  = a0  * b0
    #  ⋮   m1–m46 omitted for brevity
    m47 = a47 * b47   # total 48
    # ... assemble the 4×4 result C
    return C

Whether the inputs are real or complex, the function performs exactly 48 scalar multiplications; everything else is additions, subtractions, and multiplications by fixed constants. Because those constants are complex‑valued, intermediate results can be complex even when A and B are real.
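One way to check a “48 multiplies” claim independently is to run the kernel on scalars that count their own multiplications. A minimal sketch (the Count harness is hypothetical, not part of the repo; shown here on the textbook algorithm, which registers 64):

```python
class Count:
    """Scalar wrapper that tallies every scalar-by-scalar product."""
    n = 0
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return Count(self.v + other.v)
    def __mul__(self, other):
        Count.n += 1
        return Count(self.v * other.v)

def naive_matmul(A, B):
    """Textbook triple-loop product: 4**3 = 64 multiplies for 4x4."""
    size = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(size)), Count(0))
             for j in range(size)] for i in range(size)]

A = [[Count(i + j) for j in range(4)] for i in range(4)]
B = [[Count(i * j) for j in range(4)] for i in range(4)]
naive_matmul(A, B)
print(Count.n)  # 64, versus 49 (Strassen) and 48 (AlphaEvolve)
```

Feeding Count objects through any duck-typed kernel gives the same kind of audit: the counter should land on exactly 48 for AlphaEvolve’s formula.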


4. How are those 48 products found?

AlphaEvolve’s raw discovery comes as a huge tensor factorisation: three coefficient matrices whose columns encode the linear combinations a_r and b_r and the reassembly of C. The helper script decomposition_analyzer.py automatically translates it into human‑readable Python:

function_lines = [
    "def alphaevolve_4x4_optimized(A, B):",
    "    \"\"\"AlphaEvolve’s 4×4 kernel with 48 multiplies.\"\"\"",
    "    C = np.zeros((4, 4), dtype=complex)",
    # generates 48 pairs (a_r, b_r) then m_r = a_r * b_r
]

The script scans the tensor coefficients, synthesises 48 linear‑combination pairs, emits m_r = a_r * b_r, and finally pieces together each C[i, j]. The resulting alphaevolve_4x4_optimized is branchless, loop‑free, and numerically well‑behaved, though as pure Python it is a reference implementation rather than a rival to NumPy’s BLAS‑backed matmul.
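To see what such a tensor factorisation encodes, here is the same construction at rank 7 for 2×2 matrices, i.e. Strassen’s algorithm written as three coefficient matrices (a sketch: multiply_via_factorization is an illustrative helper, not the repo’s API, and AlphaEvolve’s own factors are the rank‑48 analogue):

```python
import numpy as np

# Strassen's rank-7 factorisation of the 2x2 matmul tensor.
# Column r of U and V gives the linear combinations a_r and b_r;
# each row of W reassembles one entry of C from m_r = a_r * b_r.
U = np.array([[1, 0, 1, 0, 1, -1,  0],   # coefficients of A[0,0]
              [0, 0, 0, 0, 1,  0,  1],   # A[0,1]
              [0, 1, 0, 0, 0,  1,  0],   # A[1,0]
              [1, 1, 0, 1, 0,  0, -1]])  # A[1,1]
V = np.array([[1, 1, 0, -1, 0, 1, 0],    # coefficients of B[0,0]
              [0, 0, 1,  0, 0, 1, 0],    # B[0,1]
              [0, 0, 0,  1, 0, 0, 1],    # B[1,0]
              [1, 0, -1, 0, 1, 0, 1]])   # B[1,1]
W = np.array([[1,  0, 0, 1, -1, 0, 1],   # C[0,0]
              [0,  0, 1, 0,  1, 0, 0],   # C[0,1]
              [0,  1, 0, 1,  0, 0, 0],   # C[1,0]
              [1, -1, 1, 0,  0, 1, 0]])  # C[1,1]

def multiply_via_factorization(A, B, U, V, W):
    """Multiply via a rank-R factorisation: exactly R scalar products."""
    m = (U.T @ A.flatten()) * (V.T @ B.flatten())  # the R = 7 products
    return (W @ m).reshape(A.shape)

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(multiply_via_factorization(A, B, U, V, W), A @ B)
```

AlphaEvolve’s discovery is the same object one size up: 4×4 blocks, rank 48, with complex‑valued entries in the three factor matrices.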


5. Closing thoughts

  • AlphaEvolve shows that AI‑guided search can still squeeze hidden gems out of half‑century‑old math problems.
  • Pushing 8×8 or larger base blocks below today’s counts would trigger an even bigger energy‑efficiency wave across scientific computing and AI.
  • Follow this blog for ongoing coverage of AlphaEvolve and future breakthroughs.

Tags: AlphaEvolve, 48 multiplications, matrix multiplication, Strassen, tensor decomposition, DeepMind

6. How to get the AlphaEvolve verification Python code

You can clone this repo.