One of the most distinctive new features of Rust is its slowly growing support for regions—or lifetimes, as some of us core developers like to call them. As lifetimes aren't found in any mainstream language, I thought I'd expand on why we want them and how they can be used to improve memory management for performance (especially interactive performance) without sacrificing safety. In this first post I'll explain why existing memory models weren't enough and why we went searching for alternatives. Here I'm assuming basic knowledge of garbage collection, reference counting, and `malloc`/`free`, but nothing more.
The programming models that the current crop of mainstream programming languages expose can be divided pretty evenly into two camps: explicitly-managed environments and garbage-collected environments. By far the most common programming languages built around explicitly-managed environments are C, C++, and Objective-C, and explicit memory management is so associated with these languages that it's often just called "the C memory model". Almost all other languages in mainstream use are garbage collected—Java, C#, JavaScript, Python, Ruby, Lisp, Perl, and tons of other languages all fall into this camp. (Note that here I'm using "garbage collection" in the general sense of automatic memory management; some of these languages don't have tracing garbage collection and instead use reference counting.)
Now C and its derivatives famously offer a huge amount of control over memory usage—the built-in language features make it easy to implement stack allocation, ownership (i.e. explicit `new` and `delete`), memory pools, and reference counting (manually, with smart pointers, or with Objective-C's Automatic Reference Counting). Most large C/C++/Objective-C codebases use all four strategies. Some programs (like Firefox and OS kernels) even implement their own general-purpose memory allocators. (A few use conservative garbage collectors, like the Boehm GC, but these are in the minority, so I'll leave them aside.) This flexibility has a real benefit, especially for real-time and interactive apps (like web browsers!). Not only does explicit memory management tend to spread out the load so that pauses associated with tracing GC don't appear, but it also provides a clear path toward improving performance whenever `malloc` and `free` do become expensive. In C++, for example, if you profile a program and see lots of expensive calls to `operator new` near the top, you can often just drop the Boost pool library into your code, change `new` to `new (pool)`, and call it a day.
Of course, all this power comes at a huge cost: namely, memory safety. Dangling pointers, wild pointers, and buffer overruns are not only annoying and costly in terms of hard-to-find bugs but also deadly from a security perspective. Heap spray attacks turn any vtable dispatch on a freed object into an exploitable security vulnerability. Think about that for a second: in C++, you're always one virtual method call away from an exploitable security vulnerability. You can, of course, mitigate this with sandboxing, but sandboxing carries performance and maintenance costs of its own, and those costs aren't easy to reduce.
Recognizing the huge costs of manual memory management, a great deal of programming these days has shifted to languages that require garbage-collected environments. These include all of the scripting languages, as well as Java and C#. Garbage collection brings enormous productivity gains (because the programmer doesn't have to think as much about memory management) and also enormous security benefits. An entire class of security vulnerabilities (buffer overruns, use-after-free, stack overflow) basically ceases to exist for programs running in a garbage-collected environment (to be replaced by exciting new security vulnerabilities such as SQL injection, but that's another story).
The problem with garbage collection is that, once memory management is no longer explicit (i.e. the compiler can no longer determine statically when memory can be recycled), lifetimes have to be discovered at runtime—and that entails a performance cost. Tracing stop-the-world garbage collectors (and cycle collectors) have to suspend the entire program for pauses that can last hundreds of milliseconds, which hurts lots of programs—for instance, mobile apps really need to be able to draw at 60 frames per second, ruling out any pause longer than 16 ms. Incremental garbage collection is better, but it's tricky to implement and costs throughput, because the compiler has to insert a write barrier on every modification of a pointer. And because everything essentially has to be done dynamically (barring simple static analyses like escape analysis), there will always be scenarios in which a fully garbage-collected system loses to a manually managed one—and both major open source web browser engines have zero tolerance for performance regressions.
There are many workarounds in garbage-collected languages for the lack of manual memory management. For example, free lists are a popular technique in languages like Java for reducing GC pause times. The idea is simple: when you have a large number of objects of the same type that you constantly allocate and deallocate, you keep a pool of old objects around and reuse them. The programmer is then responsible for manually allocating objects from this free list and returning them to it. This is definitely an effective way to reduce allocations where it matters. But, unfortunately, there are a number of downsides to this approach.
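Before getting into those downsides, here's roughly what the pattern looks like in Java. This is a minimal, hypothetical sketch—the `Particle` class and the pool API are made up for illustration, not taken from any particular project:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A hypothetical object that gets allocated and thrown away constantly,
// e.g. once per frame in a game or renderer.
class Particle {
    double x, y, velocityX, velocityY;
}

// A minimal free list: instead of letting dead Particles become garbage,
// we keep them around and hand them back out on the next allocation.
class ParticlePool {
    private final Deque<Particle> free = new ArrayDeque<>();

    // Reuse a recycled Particle if one is available; otherwise allocate.
    Particle acquire() {
        Particle p = free.poll();
        return (p != null) ? p : new Particle();
    }

    // The caller promises that no other references to p remain.
    void release(Particle p) {
        free.push(p);
    }
}
```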
First of all, garbage-collected languages usually don't have any built-in syntax for creating objects out of a free list instead of the heap: the built-in constructor can only be called on a fresh heap allocation. The usual workaround is to give the object an `init` method or to create a factory object, but both approaches tend to look awkward syntactically. This isn't a deal-breaker by itself—after all, Java programmers frequently write factory classes for other reasons—but it does compound the awkwardness of the free list pattern, and on its own it wouldn't be sufficient grounds to add a large amount of complexity to a garbage-collected language to support this pattern.
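To see the awkwardness concretely, here's how the hypothetical `Particle` from the sketch above tends to grow an `init` method: the constructor only runs when the object is first allocated on the heap, so the real setup logic has to live somewhere that can be re-run on a recycled instance.

```java
// The constructor only runs on a fresh heap allocation, so the real
// "construction" logic moves into an init() method that callers (or a
// factory wrapping the pool) must remember to invoke on recycled objects.
class Particle {
    double x, y, velocityX, velocityY;

    Particle init(double x, double y) {
        this.x = x;
        this.y = y;
        this.velocityX = 0;
        this.velocityY = 0;
        return this;
    }
}

// Call sites stop looking like `new Particle(x, y)` and start looking like:
//
//     Particle p = pool.acquire().init(x, y);
//
// which is exactly the kind of syntactic awkwardness described above.
```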
But there's a much worse problem: free lists are inherently unsafe. They aren't unsafe in the same way as C++, to be sure—in C++ there are serious security vulnerabilities to contend with—but they still allow many of the same bugs that dangling pointers cause. To see why, notice that a free list has no idea when no more references remain to the objects it hands out. In fact, it can't know how many references remain to the objects allocated from it—at least, not without reference counting or tracing the object graph, which would lead right back to GC and defeat the purpose of the free list! So a free list requires manual memory management. When the programmer frees an object that's managed by a free list, it's the programmer's responsibility to ensure that no other references to it remain. If the programmer accidentally leaks a reference, that object might be reused for a new instance, and a potentially hard-to-find bug will result. It's `malloc` and `free` all over again.
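Continuing the hypothetical sketch above (and reusing its `Particle` and `ParticlePool`), here's the failure mode in miniature. Nothing in the language stops a second reference from outliving `release()`, so when the pool recycles the object, the stale reference silently observes the new owner's data:

```java
class Renderer {
    private final ParticlePool pool = new ParticlePool();
    private Particle trail;  // a second, longer-lived reference

    void update() {
        Particle p = pool.acquire().init(10, 20);
        trail = p;          // accidentally keep a reference around...
        pool.release(p);    // ...and then "free" the object anyway

        Particle q = pool.acquire().init(300, 400);
        // The pool recycled the same object: q and trail now alias each
        // other, and trail.x has silently changed from 10.0 to 300.0.
        // No crash, no exception -- just the GC-language version of a
        // dangling pointer.
        System.out.println(trail.x);  // prints 300.0
    }
}
```

The program never crashes and the garbage collector never complains; the bug is purely logical, which is exactly what makes it hard to find.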
The end result is that we seem to be trapped between the rock of unpredictable performance and the hard place of programmer burden and security vulnerabilities. The current crop of commonly used languages doesn't provide a solution here.
Fortunately, the research landscape offers some promising potential solutions, which I'll cover next time.