5/21/2025

After cracking 4×4 matrix multiplication, DeepMind’s AlphaEvolve may chase faster FFTs, tensor contractions, graph paths, sorting networks & more. Here’s where it could strike.

AlphaEvolve has already rewritten the 56-year-old playbook on matrix multiplication by finding a 48-multiplication scheme for 4 × 4 complex matrices — beating the long-standing 49-step record. That proof-of-concept suggests the Gemini-powered evolutionary agent can roam well beyond linear algebra. Below we map ten fertile algorithm families where a self-improving search could discover shorter circuits, smaller constant factors, or entirely new paradigms.


1. Scaling Up Matrix Multiplication

Optimising 5 × 5, 6 × 6 or highly rectangular products would reverberate through BLAS libraries and deep-learning accelerators. Each reduction in the scalar-multiplication count shrinks everything from graphics pipelines to attention layers.
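To make the target concrete, here is the classic Strassen scheme at the 2 × 2 level: seven multiplications where the schoolbook method needs eight. AlphaEvolve's 48-multiplication 4 × 4 result plays the same game one size up. The sketch below is purely illustrative, not DeepMind's code:

```python
# Illustrative: Strassen's 7-multiplication scheme for a 2x2 product.
# The schoolbook method needs 8 scalar multiplications.
def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return (m1 + m4 - m5 + m7, m3 + m5,   # c11, c12
            m2 + m4, m1 - m2 + m3 + m6)   # c21, c22
```

Applied recursively to blocks instead of scalars, the 7-vs-8 saving is what drives the matrix-multiplication exponent below 3.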


2. Fast Fourier & Number-Theoretic Transforms

The Cooley–Tukey FFT still rules, but its butterfly graph has many degrees of freedom. Evolutionary search could prune twiddle factors or fuse stages, just as it did for matrix blocks, especially for power-of-two sizes where hardware FFT IP is ubiquitous. Cryptographers would celebrate leaner Number-Theoretic Transforms (NTTs) that speed up lattice-based schemes and homomorphic encryption.
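For orientation, a minimal radix-2 Cooley–Tukey FFT makes the butterfly stages and twiddle factors explicit; these are exactly the degrees of freedom a search would manipulate. An illustrative sketch, not optimised code:

```python
import cmath

# Illustrative: recursive radix-2 Cooley-Tukey FFT. Each level is one
# "butterfly" stage; the cmath.exp(...) terms are the twiddle factors.
def fft(x):
    n = len(x)  # assumed to be a power of two
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```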


3. Winograd & Other Small-Kernel Convolutions

Winograd minimal filtering halves the multiplications in 3 × 3 CNN layers but suffers numerical drift. AlphaEvolve could evolve mixed-precision variants or entirely new tile layouts, delivering instant gains for edge-AI inference without extra FLOPs.
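The flavour of the trick: the 1-D Winograd F(2,3) transform computes two outputs of a 3-tap filter in four multiplications instead of six. The halved constants in the sketch below are one source of the drift mentioned above:

```python
# Illustrative: Winograd F(2,3) minimal filtering. Two outputs of a
# 3-tap convolution in 4 multiplications (direct computation needs 6).
def winograd_f23(d0, d1, d2, d3, g0, g1, g2):
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2   # the /2 constants are where
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2   # numerical drift creeps in
    m4 = (d1 - d3) * g2
    return m1 + m2 + m3, m2 - m3 - m4     # y0, y1
```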


4. Tensor-Network Contraction Ordering

Choosing the optimal contraction path for high-rank tensors is NP-hard. Simulated annealing and genetic algorithms already beat greedy heuristics. AlphaEvolve’s code-gen plus proof framework is tailor-made to output provably optimal sequences for physics, quantum simulation and automatic differentiation graphs.
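A toy illustration of why ordering matters so much: for the matrix chain A (10 × 1000) times B (1000 × 10) times C (10 × 1000), the two parenthesisations differ by a factor of 100 in scalar multiplications. The dimensions here are illustrative:

```python
# Illustrative: scalar-multiplication cost of the two parenthesisations
# of a three-matrix chain with shapes (d0 x d1)(d1 x d2)(d2 x d3).
def chain_cost(dims, order):
    d0, d1, d2, d3 = dims
    if order == "left":                    # (A @ B) @ C
        return d0 * d1 * d2 + d0 * d2 * d3
    return d1 * d2 * d3 + d0 * d1 * d3     # A @ (B @ C)
```

For high-rank tensor networks the number of candidate orders explodes combinatorially, which is why the problem is NP-hard in general.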


5. Graph Shortest-Path Kernels

Floyd–Warshall runs in O(n³), and even the fastest Strassen-based APSP variants leave room for lower constants. Recent papers push deterministic 2-approximations below n^2.2. An evolutionary search might splice dynamic-update tricks into dense-graph solvers, accelerating social-network analytics and routing.
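The dense baseline whose constants an evolved kernel would attack is the textbook Floyd–Warshall triple loop:

```python
# Illustrative: classic O(n^3) Floyd-Warshall all-pairs shortest paths
# on a dense distance matrix (float('inf') marks a missing edge).
def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):          # allow node k as an intermediate hop
        for i in range(n):
            dik = dist[i][k]
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    return dist
```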


6. Sorting Network Size & Depth

Every extra comparator in a fixed network costs ASIC silicon and branchless SIMD time. Research keeps shaving layers off 128-item networks. Evolving comparator layouts could yield minimal-depth networks for GPU warp sizes (32, 64), boosting radix sort and database joins.
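For scale, the optimal 4-input network needs just five comparators in three layers, and the zero-one principle lets a search verify any candidate by testing only 0/1 inputs rather than all permutations. A sketch:

```python
# Illustrative: the optimal 4-input sorting network -- 5 comparators,
# depth 3 -- plus exhaustive verification via the zero-one principle.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(vals, network):
    v = list(vals)
    for i, j in network:        # each comparator swaps an out-of-order pair
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# Zero-one principle: a network sorts everything iff it sorts all 0/1 inputs.
assert all(
    apply_network([(m >> b) & 1 for b in range(4)], NETWORK_4)
    == sorted((m >> b) & 1 for b in range(4))
    for m in range(16)
)
```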


7. Boolean Matrix & Transitive-Closure Tricks

Boolean matrix multiplication underpins context-free grammar parsing and static code analysis. Any reduction in logical AND/OR gates drops compile-time and security-audit costs—fertile ground for AlphaEvolve’s discrete search.
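One reason this is fertile ground for discrete search: packing rows and columns into machine words turns the inner loop into a single AND plus a nonzero test. A sketch using Python integers as bitsets (the packing convention here is illustrative):

```python
# Illustrative: Boolean matrix multiplication with bit-packed operands.
# a_rows[i] holds row i of A as an n-bit mask; b_cols[j] holds column j
# of B. C[i][j] = OR_k (A[i][k] AND B[k][j]) becomes one AND + nonzero test.
def bool_matmul(a_rows, b_cols, n):
    return [
        sum(((a_rows[i] & b_cols[j]) != 0) << j for j in range(n))
        for i in range(len(a_rows))
    ]
```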


8. Polynomial & Modular Arithmetic Primitives

Fast Karatsuba-like recurrences govern RSA, ECC and homomorphic crypto cores. Auto-discovered short-cut recurrences would cascade into quicker key exchanges and lighter secure messaging apps, all without waiting for post-quantum standards.
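The prototype of such a shortcut recurrence is Karatsuba's: three half-size multiplications instead of four. A sketch over decimal digits (real crypto cores work on machine-word limbs):

```python
# Illustrative: Karatsuba multiplication -- 3 recursive multiplications
# instead of 4, the original "shortcut recurrence".
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b  # cross term: 1 multiply
    return a * 10 ** (2 * m) + c * 10 ** m + b
```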


9. Convex Optimisation Inner Loops

Interior-point and quasi-Newton solvers spend the bulk of their cycles factoring KKT matrices. Better block-elimination formulas or preconditioners would ripple through logistics, LLM fine-tuning and reinforcement-learning policy optimisers. AlphaEvolve’s proof harness could even certify numerical stability.
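Block elimination in these solvers reduces to forming a Schur complement. The sketch below shows the formula with scalar "blocks" for clarity; real interior-point code applies the same identity to large sparse blocks:

```python
# Illustrative: Schur-complement elimination for a 2x2 block system
# [[a, b], [c, d]] @ [x, y] = [r1, r2], with scalars standing in for blocks.
def solve_2x2_block(a, b, c, d, r1, r2):
    s = d - c * (b / a)           # Schur complement of block a
    y = (r2 - c * (r1 / a)) / s   # solve the reduced system for y
    x = (r1 - b * y) / a          # back-substitute for x
    return x, y
```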


10. Back-Prop & Attention Microkernels

Transformer training is a choreography of GEMM, softmax, layer-norm and depthwise convolutions. With tensor remappings akin to its 4 × 4 breakthrough, AlphaEvolve could co-design fused kernels that reduce global memory traffic on TPUs and GPUs.
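One ingredient of such fused kernels is an online softmax, which keeps a running maximum and rescaled sum so a full attention row never has to round-trip through global memory. A scalar sketch; real kernels apply the same recurrence tile-by-tile on chip:

```python
import math

# Illustrative: single-pass "online" softmax. The running sum s is
# rescaled whenever a new maximum m appears, so only (m, s) live in
# fast memory while the inputs stream past.
def online_softmax(xs):
    m, s = float("-inf"), 0.0
    for x in xs:
        new_m = max(m, x)
        s = s * math.exp(m - new_m) + math.exp(x - new_m)
        m = new_m
    return [math.exp(x - m) / s for x in xs]
```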


Outlook

DeepMind calls AlphaEvolve a “general-purpose algorithm-design agent”. If its search space expands from matrix-multiplication tensors to graph, Boolean and spectral arenas, the next decade of compute efficiency could be driven less by silicon and more by synthetic creativity. The chessboard of classical algorithms is wide, and this new player has only moved its first pawn.