On October 21, 2025, Cloudflare found itself at the center of a technical deep dive and a high-level industry conversation, both converging on a single theme: the evolving challenges and opportunities of building a faster, safer, and more resilient internet. The day saw the publication of a detailed Cloudflare blog post dissecting the performance hurdles of a key data structure underpinning modern network security, while CEO Matthew Prince took the stage at Bloomberg Tech in London to share his vision for cybersecurity and digital resilience in an increasingly complex online world.
For engineers and internet users alike, the stakes couldn’t be higher. At the heart of Cloudflare’s technical revelations is the BPF LPM trie—a data structure that, while not exactly a household name, plays a crucial role in how your data finds its way across the web. These tries, formally known as BPF_MAP_TYPE_LPM_TRIE, are essential for matching IP addresses and ports in real time, ensuring that network packets are routed swiftly and securely through the right digital pathways. In practical terms, they help power services like Cloudflare’s Magic Firewall, acting as sentinels that evaluate traffic and enforce security rules to keep threats at bay.
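The longest-prefix-match (LPM) semantics such a map implements can be sketched in a few lines of Python. This is purely illustrative—the real map lives inside the Linux kernel and is populated and queried via BPF syscalls—but it shows the rule being applied: among all prefixes that contain an address, the most specific one wins.

```python
import ipaddress

def lpm_lookup(prefixes, addr):
    """Return the most specific (longest) prefix containing addr, or None.
    Illustrative linear scan; the kernel trie answers the same question
    by walking the key's bits instead of scanning every rule."""
    ip = ipaddress.ip_address(addr)
    best = None
    for p in prefixes:
        net = ipaddress.ip_network(p)
        if ip in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return best

rules = ["10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24"]
print(lpm_lookup(rules, "10.1.2.3"))   # most specific match: 10.1.2.0/24
print(lpm_lookup(rules, "10.9.9.9"))   # falls back to 10.0.0.0/8
```

The naive scan above costs time proportional to the number of rules; the whole point of the trie is to answer the same query in time proportional to the key length, regardless of how many millions of entries are loaded.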
But as Cloudflare’s engineers revealed in their October 21 blog post, the current implementation of BPF LPM tries has been running into some serious performance bottlenecks. When the company’s systems were flooded with millions of entries—think massive lists of IP addresses and rules—the time it took to look up or free entries ballooned alarmingly. In some cases, freeing up a BPF LPM trie map could lock up a CPU for over 10 seconds, a veritable eternity in the high-speed world of internet infrastructure. Lookup latency, though measured in nanoseconds per operation, could accumulate at scale into packet loss and degraded service for customers.
Cloudflare’s benchmarks, run on formidable AMD EPYC 9684X 96-Core machines, put the numbers into perspective. With 10,000 entries, lookup throughput was an impressive 7.423 million operations per second, with each lookup taking about 134.7 nanoseconds. Updates and deletions were slower, and freeing the map was the slowest operation of all, clocking in at 1.743 milliseconds per operation. Yet as the number of entries scaled up—to the hundreds of thousands and beyond—lookup throughput dropped sharply, sagging to around 1.5 million operations per second at a million entries. The culprit? Not just the sheer size of the data, but the way the trie is built and traversed.
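Those throughput and latency figures are two views of the same measurement: single-threaded per-operation latency is simply the reciprocal of throughput. A quick sanity check against the numbers quoted above:

```python
# Reciprocal relationship between throughput and per-op latency,
# checked against the benchmark figures quoted in the text.
ops_per_sec = 7.423e6            # lookups/sec at 10,000 entries
latency_ns = 1e9 / ops_per_sec   # nanoseconds per lookup
print(f"{latency_ns:.1f} ns")    # ~134.7 ns, matching the quoted figure

# At ~1.5 Mops/s with a million entries, each lookup costs ~667 ns:
print(f"{1e9 / 1.5e6:.0f} ns")
```

In other words, scaling from ten thousand entries to a million made each individual lookup roughly five times slower—the degradation the rest of the post sets out to explain.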
To appreciate the challenge, it helps to understand what a trie is—a data structure that organizes data (like IP addresses) in a tree-like fashion, allowing for efficient searches by following paths based on shared prefixes. Unlike a binary search tree, where each node holds a full key and comparisons are made at every step, tries break keys into smaller chunks, storing only what’s necessary at each node. This makes them especially useful for tasks like IP routing, where the goal is often to find the longest matching prefix rather than an exact key.
Cloudflare’s blog post offers a quick refresher on the nuances of trie design, including concepts like path compression (which skips over redundant nodes when data is sparse) and level compression (which collapses densely populated levels into a single node with many children). These optimizations are more than academic—they can make the difference between a system that hums along smoothly and one that grinds to a halt under heavy load.
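To make the path-compression opportunity concrete, here is a toy binary trie over IPv4 prefixes—one bit of the key per node, nothing like the kernel’s actual memory layout—where counting the single-child “pass-through” nodes shows how much structure exists only to chain one bit to the next:

```python
import ipaddress

class Node:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = [None, None]  # binary trie: one key bit per level
        self.value = None

def insert(root, prefix):
    """Insert a CIDR prefix by walking one key bit per node."""
    net = ipaddress.ip_network(prefix)
    bits = int(net.network_address)
    node = root
    for i in range(net.prefixlen):
        bit = (bits >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
    node.value = prefix

def count_nodes(node):
    """Return (total nodes, single-child pass-through nodes).
    The pass-through nodes are exactly what path compression elides."""
    kids = [c for c in node.children if c is not None]
    total = 1
    passthrough = int(len(kids) == 1 and node.value is None)
    for c in kids:
        t, p = count_nodes(c)
        total += t
        passthrough += p
    return total, passthrough

root = Node()
for p in ["10.0.0.0/8", "10.1.0.0/16", "192.168.1.0/24"]:
    insert(root, p)
total, passthrough = count_nodes(root)
print(total, passthrough)  # 41 37
```

With just three sparse prefixes, 37 of the 41 nodes are pass-through chains; a path-compressed trie would store the skipped bits inside a single node, shortening every traversal accordingly.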
Unfortunately, the current BPF LPM trie implementation in the Linux kernel falls short on both counts. With only two child pointers per node, these tries are forced into a binary branching pattern, even when more efficient multi-way branching would be possible. This limitation means the tree can become much deeper than necessary, increasing the number of steps—and memory accesses—required for each lookup. As Cloudflare’s engineers put it, "the current BPF LPM trie implementation uses only 2-child nodes, limiting branching and causing deeper trie heights and slower searches compared to tries with more children per node."
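The effect of fanout on trie height is easy to quantify: a node with 2^k children consumes k key bits per level, so a W-bit key needs at most ⌈W/k⌉ levels. A small illustrative calculation (arithmetic only, not kernel code):

```python
import math

def max_depth(key_bits, children_per_node):
    """Worst-case trie height: each level consumes log2(fanout) key bits."""
    bits_per_level = int(math.log2(children_per_node))
    return math.ceil(key_bits / bits_per_level)

# IPv4 keys (32 bits) and IPv6 keys (128 bits) at various fanouts:
for fanout in (2, 16, 256):
    print(fanout, max_depth(32, fanout), max_depth(128, fanout))
```

Binary branching means up to 32 dependent node visits for an IPv4 key (128 for IPv6); 16-way branching would cut those worst cases to 8 and 32 respectively.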
Worse still, the tries don’t fully implement level compression, missing out on a key optimization used in other parts of the Linux kernel, such as the net/ipv4/fib_trie.c code. As a result, as the trie fills up with more entries, the performance degrades not just because there’s more data to search, but because the structure itself isn’t making the most of the data’s natural patterns. The blog notes, "BPF LPM tries do not implement level compression and only partially implement path compression."
There’s another wrinkle: memory access patterns. As tries grow, traversing them means jumping between nodes that may be scattered across memory, leading to frequent CPU cache and data translation lookaside buffer (dTLB) misses. These low-level hardware bottlenecks can add hundreds of nanoseconds to every operation, and as Cloudflare’s benchmarks show, they become the dominant factor as the number of entries climbs.
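Why do these misses hurt so much? Because each step of a trie lookup depends on the pointer loaded in the previous step, the loads cannot overlap—misses serialize. A rough back-of-envelope model (the ~80 ns miss penalty is an illustrative assumption, not a measured Cloudflare figure):

```python
# Rough cost model for a pointer-chasing trie lookup: every level is a
# dependent load, so cache misses serialize rather than overlap.
MISS_NS = 80  # illustrative main-memory miss penalty, not a measured figure

def lookup_miss_cost_ns(depth, miss_fraction):
    """Estimated time spent stalled on misses for one lookup."""
    return depth * miss_fraction * MISS_NS

# A 32-level binary trie where half the node visits miss cache:
print(f"{lookup_miss_cost_ns(32, 0.5):.0f} ns")  # 1280 ns
# The same miss rate on an 8-level (16-way) trie:
print(f"{lookup_miss_cost_ns(8, 0.5):.0f} ns")   # 320 ns
```

Under these assumed numbers, memory stalls alone would dwarf the ~135 ns lookups measured at small map sizes—which is consistent with misses dominating once the trie outgrows the caches.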
So what’s the path forward? Cloudflare’s engineers are already laying out a roadmap for improvement. They’ve contributed their benchmarks to the upstream Linux kernel and are planning to refactor the code to support a common Level Compressed trie implementation—essentially borrowing best practices from other parts of the kernel to boost performance where it’s needed most. As they put it, "We have plans to improve the performance of BPF LPM tries, particularly the lookup function which is heavily used for our workloads." Expect more technical explorations in future blog posts, as the company continues to push for a faster, more reliable internet backbone.
While the engineers wrestle with the nuts and bolts, Cloudflare’s leadership is just as focused on the bigger picture. In a video interview published by Bloomberg on October 21, CEO Matthew Prince sat down at Bloomberg Tech in London to discuss how cybersecurity, data privacy, and digital resilience are shaping the future of the open internet. Prince emphasized Cloudflare’s commitment to building infrastructure that not only performs at scale but also safeguards the privacy and security of users worldwide. The conversation highlighted Cloudflare’s dual role: innovator at the technical frontier and steward of a safer, more open digital ecosystem.
It’s a reminder that the challenges of internet infrastructure aren’t just about speed or efficiency—they’re about trust, reliability, and staying one step ahead of ever-evolving threats. Whether it’s optimizing a trie for nanosecond lookups or redefining what it means to be secure in the cloud era, Cloudflare’s efforts underscore the complexity and urgency of keeping the internet running smoothly for everyone. As both the blog post and Prince’s interview make clear, the work is far from done, but the direction is set: onward to a faster, safer, and more resilient internet.