
---
layout: post
title: "Allocations in Rust"
description: "An introduction to the memory model"
category:
tags: [rust]
---

There's an alchemy of distilling complex technical topics into articles and videos that change the way programmers see the tools they interact with on a regular basis. I knew what a linker was, but there's a staggering amount of complexity in between main() and your executable. Rust programmers use the Box type all the time, but there's a rich history of the Rust language itself wrapped up in how special it is.

In a similar vein, I want you to look at code and understand how memory is used; the complex choreography of operating system, compiler, and program that frees you to focus on functionality far-flung from frivolous book-keeping. The Rust compiler relieves a great deal of the cognitive burden associated with memory management, but we're going to step into its world for a while.

Let's learn a bit about memory in Rust.

Table of Contents

This post is intended as both guide and reference material; we'll work to establish an understanding of the different memory types Rust makes use of, then summarize each section for easy citation in the future. To that end, a table of contents is provided to assist in easy navigation:

  • Foreword
  • The Whole World: Global Memory Usage
  • Stacking Up: Non-Heap Memory
  • A Heaping Helping: Rust and Dynamic Memory
  • Compiler Optimizations: What It's Done For You Lately

Foreword

Rust's three defining features of Performance, Reliability, and Productivity are all driven to a great degree by how the Rust compiler understands memory ownership. Unlike managed-memory languages (Java, Python), Rust doesn't really garbage collect, leading to fast code when dynamic (heap) memory isn't necessary. When heap memory is necessary, Rust ensures you can't accidentally mismanage it. And because the compiler handles memory "ownership" for you, developers never need to worry about accidentally deleting data that was needed somewhere else.

That said, there are situations where you won't benefit from work the Rust compiler is doing. If you:

  1. Never use unsafe
  2. Never use #![feature(alloc)] or the alloc crate

...then it's not possible for you to use dynamic memory!

For some uses of Rust, typically embedded devices, these constraints make sense. They have very limited memory, and the program binary size itself may significantly affect what's available! There's no operating system able to manage this "virtual memory" junk, but that's not an issue because there's only one running application. The embedonomicon is ever in mind, and interacting with the "real world" through extra peripherals is accomplished by reading and writing to specific memory addresses.

Most Rust programs find these requirements overly burdensome though. C++ developers would struggle without access to std::vector (except those hardcore no-STL people), and Rust developers would struggle without std::vec. But in this scenario, std::vec is actually aliased to a part of the alloc crate, and thus off-limits. Box, Rc, etc., are also unusable for the same reason.

Whether writing code for embedded devices or not, the important thing in both situations is how much you know before your application starts about what its memory usage will look like. In embedded devices, there's a small, fixed amount of memory to use. In a browser, you have no idea how large google.com's home page is until you start trying to download it. The compiler uses this information (or lack thereof) to optimize how memory is used; put simply, your code runs faster when the compiler can guarantee exactly how much memory your program needs while it's running. This post is all about understanding how the compiler reasons about your program, with an emphasis on how to design your programs for performance.

Now let's address some conditions and caveats before going much further:

  • We'll focus on "safe" Rust only; unsafe lets you use platform-specific allocation APIs (malloc) that we'll ignore.
  • We'll assume a "debug" build of Rust code (what you get with cargo run and cargo test) and address (pun intended) release mode at the end (cargo run --release and cargo test --release).
  • All content will be run using Rust 1.32, as that's the highest version currently supported in the Compiler Explorer. As such, we'll avoid upcoming innovations like compile-time evaluation of static that are available in nightly.
  • Because of the nature of the content, some (very simple) assembly-level code is involved. We'll keep this simple, but I found a refresher on the push and pop instructions was helpful while writing this post.

Finally, I'll do what I can to flag potential future changes, but the Rust docs have a notice worth repeating:

Rust does not currently have a rigorously and formally defined memory model.

-- the docs

The Whole World: Global Memory Usage

The first memory type we'll look at is pretty special: when Rust can prove that a value is fixed for the life of a program (const), and when a reference is valid for the duration of the program (static as a declaration, not 'static as a lifetime). Understanding the distinction between value and reference is important for reasons we'll go into below. The full specification for these two memory types is available, but we'll take a hands-on approach to the topic.

const

The quick summary is this: const declares a read-only block of memory that is loaded as part of your program binary (during the call to exec(3)). Any const value resulting from calling a const fn is guaranteed to be materialized at compile-time (meaning that access at runtime will not invoke the const fn), even though const fn functions are also available at run-time. The compiler can choose to copy the constant value wherever it is deemed practical. Getting the address of a const value is legal, but the address is not guaranteed to be the same, even when referring to the same named identifier.

The first point is a bit strange - "read-only memory". The Rust book mentions in a couple places that using mut with constants is illegal, but it's also important to demonstrate just how immutable they are. Typically in Rust you can use "inner mutability" to modify things that aren't declared mut. RefCell provides an API to guarantee at runtime that some consistency rules are enforced:

use std::cell::RefCell;

fn my_mutator(cell: &RefCell<u8>) {
    // Even though we're given an immutable reference,
    // the `replace` method allows us to modify the inner value.
    cell.replace(14);
}

fn main() {
    let cell = RefCell::new(25);
    // Prints out 25
    println!("Cell: {:?}", cell);
    my_mutator(&cell);
    // Prints out 14
    println!("Cell: {:?}", cell);
}

-- Rust Playground

When const is involved though, modifications are silently ignored:

use std::cell::RefCell;

const CELL: RefCell<u8> = RefCell::new(25);

fn my_mutator(cell: &RefCell<u8>) {
    cell.replace(14);
}

fn main() {
    // First line prints 25 as expected
    println!("Cell: {:?}", &CELL);
    my_mutator(&CELL);
    // Second line *still* prints 25
    println!("Cell: {:?}", &CELL);
}

-- Rust Playground

And a second example using Once:

use std::sync::Once;

const SURPRISE: Once = Once::new();

fn main() {
    // This is how `Once` is supposed to be used
    SURPRISE.call_once(|| println!("Initializing..."));
    // Because `Once` is a `const` value, we never record it
    // having been initialized the first time, and this closure
    // will also execute.
    SURPRISE.call_once(|| println!("Initializing again???"));
}

-- Rust Playground

When the const specification refers to "rvalues", this is what they mean. Clippy will treat this as an error, but it's still something to be aware of.

The next thing to mention is that const values are loaded into memory as part of your program binary. Because of this, any const values declared in your program will be "realized" at compile-time; accessing them may trigger a main-memory lookup (with a fixed address, so your CPU may be able to prefetch the value), but that's it.

use std::cell::RefCell;

const CELL: RefCell<u32> = RefCell::new(24);

pub fn multiply(value: u32) -> u32 {
    value * (*CELL.get_mut())
}

-- Compiler Explorer

The compiler only creates one RefCell and uses it everywhere. However, that value is fully realized at compile time, and is stored behind the .L__unnamed_1 label.

If it's helpful though, the compiler can choose to copy const values.

const FACTOR: u32 = 1000;

pub fn multiply(value: u32) -> u32 {
    value * FACTOR
}

pub fn multiply_twice(value: u32) -> u32 {
    value * FACTOR * FACTOR
}

-- Compiler Explorer

In this example, the FACTOR value is turned into the mov edi, 1000 instruction in both the multiply and multiply_twice functions; the "1000" value is never "stored" anywhere, as it's small enough to inline into the assembly instructions.

Finally, getting the address of a const value is possible but not guaranteed to be unique (given that the compiler can choose to copy values). In my testing I was never able to get the compiler to copy a const value and get differing pointers, but the specifications are clear enough: don't rely on pointers to const values being consistent. To be frank, caring about locations for const values is almost certainly a code smell.
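
To make that concrete, here's a minimal sketch (mine, not from the specification) that prints the address of a const from two call sites; the pointers may or may not match, and nothing should depend on them:

const FACTOR: u32 = 1000;

fn print_factor_addr() {
    // Each use of a `const` may materialize a separate copy,
    // so this address is not guaranteed to be stable.
    println!("FACTOR is at: {:p}", &FACTOR);
}

fn main() {
    print_factor_addr();
    print_factor_addr();
}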

static

Static variables are related to const variables, but take a slightly different approach. When the compiler can guarantee that a reference is fixed for the life of a program, you end up with a static variable (as opposed to values that are fixed for the duration that a program is running). Because of this reference/value distinction, static variables behave much more like what people expect from "global" variables. We'll look at regular static variables first, and then address the lazy_static!() and thread_local!() macros later.

More generally, static variables are globally unique locations in memory, the contents of which are loaded as part of your program being read into main memory. They allow initialization with both raw values and const fn calls, and the initial value is loaded along with the program/library binary. All static variables must be of a type that implements the Sync marker trait. And while static mut variables are allowed, mutating a static is considered an unsafe operation.

The single biggest difference between const and static is the guarantees provided about uniqueness. Where const variables may or may not be copied in code, static variables are guaranteed to be unique. If we take a previous const example and change it to static, the difference should be clear:

static FACTOR: u32 = 1000;

pub fn multiply(value: u32) -> u32 {
    value * FACTOR
}

pub fn multiply_twice(value: u32) -> u32 {
    value * FACTOR * FACTOR
}

-- Compiler Explorer

Where previously there were plenty of references to multiplying by 1000, the new assembly refers to FACTOR as a named memory location instead. No initialization work needs to be done, but the compiler can no longer prove the value never changes during execution.

Next, let's talk about initialization. The simplest case is initializing static variables with either scalar or struct notation:

#[derive(Debug)]
struct MyStruct {
    x: u32
}

static MY_STRUCT: MyStruct = MyStruct {
    // You can even reference other statics
    // declared later
    x: MY_VAL
};

static MY_VAL: u32 = 24;

fn main() {
    println!("Static MyStruct: {:?}", MY_STRUCT);
}

-- Rust Playground

Things get a bit weirder when using const fn. In most cases, though, it just works:

#[derive(Debug)]
struct MyStruct {
    x: u32
}

impl MyStruct {
    const fn new() -> MyStruct {
        MyStruct { x: 24 }
    }
}

static MY_STRUCT: MyStruct = MyStruct::new();

fn main() {
    println!("const fn Static MyStruct: {:?}", MY_STRUCT);
}

-- Rust Playground

However, there's a caveat: you're currently not allowed to use const fn to initialize static variables of types that aren't marked Sync. As an example, even though RefCell::new() is const fn, because RefCell isn't Sync, you'll get an error at compile time:

use std::cell::RefCell;

// error[E0277]: `std::cell::RefCell<u8>` cannot be shared between threads safely
static MY_LOCK: RefCell<u8> = RefCell::new(0);

-- Rust Playground

It's likely that this will change in the future though.

Which leads well to the next point: static variable types must implement the Sync marker. Because they're globally unique, it must be safe for you to access static variables from any thread at any time. Most struct definitions automatically implement the Sync trait because they contain only elements which themselves implement Sync. This is why earlier examples could get away with initializing statics, even though we never included an impl Sync for MyStruct in the code. For more on the Sync trait, the Nomicon has a much more thorough treatment. But as an example, Rust refuses to compile our earlier example if we add a non-Sync element to the struct definition:

use std::cell::RefCell;

struct MyStruct {
    x: u32,
    y: RefCell<u8>,
}

// error[E0277]: `std::cell::RefCell<u8>` cannot be shared between threads safely
static MY_STRUCT: MyStruct = MyStruct {
    x: 8,
    y: RefCell::new(8)
};

-- Rust Playground

Finally, while static mut variables are allowed, mutating them is an unsafe operation. Unlike const, however, interior mutability is acceptable. To demonstrate:

use std::sync::Once;

// This example adapted from https://doc.rust-lang.org/std/sync/struct.Once.html#method.call_once
static INIT: Once = Once::new();

fn main() {
    // Note that while `INIT` is declared immutable, we're still allowed
    // to mutate its interior
    INIT.call_once(|| println!("Initializing..."));
    // This code won't panic, as the interior of INIT was modified
    // as part of the previous `call_once`
    INIT.call_once(|| panic!("INIT was called twice!"));
}

-- Rust Playground

Stacking Up: Non-Heap Memory

const and static are perfectly fine, but it's very rare that we know at compile-time about either values or references that will be the same for the entire time our program is running. Put another way, it's not often the case that either you or your compiler know how much memory your entire program will need.

However, there are still some optimizations the compiler can do if it knows how much memory individual functions will need. Specifically, the compiler can make use of "stack" memory (as opposed to "heap" memory), which can be managed far faster in both the short and long term. When requesting memory, the push instruction can typically complete in 1 or 2 cycles (less than a nanosecond on modern CPUs). Heap memory instead requires using an allocator (specialized software to track what memory is in use) to reserve space. And when you're finished with your memory, the pop instruction likewise runs in 1-3 cycles, as opposed to an allocator needing to worry about memory fragmentation and other issues. All sorts of incredibly sophisticated techniques have been used to design allocators.

But no matter how fast your allocator is, the principle remains: the fastest allocator is the one you never use. As such, we're not going to go into detail on how exactly the push and pop instructions work, and we'll focus instead on the conditions that enable the Rust compiler to use the faster stack-based allocation for variables.

With that in mind, let's get into the details. How do we know when Rust will or will not use stack allocation for objects we create? Looking at other languages, it's often easy to delineate between stack and heap. Managed memory languages (Python, Java, C#) assume everything is on the heap. JIT compilers (PyPy, HotSpot) may optimize some heap allocations away, but you should never assume it will happen. C makes things clear with calls to special functions (malloc(3) is one) being the way to use heap memory. Old C++ has the new keyword, though modern C++/C++11 is more complicated with RAII.

For Rust specifically, the principle is this: stack allocation will be used for everything that doesn't involve "smart pointers" and collections. If we're interested in dissecting it though, there are three things we pay attention to:

  1. Stack manipulation instructions (push, pop, and add/sub of the rsp register) indicate allocation of stack memory:

    pub fn stack_alloc(x: u32) -> u32 {
        // Space for `y` is allocated by subtracting from `rsp`,
        // and then populated
        let y = [1u8, 2, 3, 4];
        // Space for `y` is deallocated by adding back to `rsp`
        x
    }
    

    -- Compiler Explorer

  2. Tracking when exactly heap allocation calls happen is difficult. It's typically easier to watch for call core::ptr::real_drop_in_place, and infer that a heap allocation happened in the recent past:

    pub fn heap_alloc(x: usize) -> usize {
        // Space for elements in a vector has to be allocated
        // on the heap, and is then de-allocated once the
        // vector goes out of scope
        let y: Vec<u8> = Vec::with_capacity(x);
        x
    }
    

    -- Compiler Explorer (real_drop_in_place happens on line 1317) Note: While the Drop trait is called for stack-allocated objects, the Rust standard library only defines Drop implementations for types that involve heap allocation.

  3. If you don't want to inspect the assembly, use a custom allocator that's able to track and alert when heap allocations occur. As an unashamed plug, qadapt was designed for exactly this purpose; a minimal sketch of the underlying idea follows this list.
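
To illustrate that third strategy, here's a minimal sketch of a tracking allocator built on the stable GlobalAlloc trait. This is a toy of my own for illustration (qadapt's real API is richer); it simply counts every heap request:

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATION_COUNT: AtomicUsize = AtomicUsize::new(0);

struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Record the allocation, then defer to the system allocator
        ALLOCATION_COUNT.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn main() {
    let before = ALLOCATION_COUNT.load(Ordering::SeqCst);
    let _stack = [0u8; 64]; // stack only; shouldn't touch the counter
    let _heap: Vec<u8> = Vec::with_capacity(64); // one heap allocation
    let after = ALLOCATION_COUNT.load(Ordering::SeqCst);
    // Other code (printing, for example) can allocate too, so
    // measure tight regions like this one.
    assert_eq!(after - before, 1);
}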

With all that in mind, let's talk about situations in which we're guaranteed to use stack memory:

  • Structs are created on the stack.
  • Function arguments are passed on the stack.
  • Enums and unions are stack-allocated.
  • Arrays are always stack-allocated.
  • Using the #[inline] attribute will not change the memory region used.
  • Closures capture their arguments on the stack.
  • Generics will use stack allocation, even with dynamic dispatch.

Structs
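
There's not much drama here: constructing a struct writes its fields directly into stack (or register) space. A minimal sketch with a made-up Point type; checking the assembly as described earlier should show only stack manipulation, never an allocator call:

pub struct Point {
    x: u64,
    y: u64,
}

pub fn make_point(x: u64, y: u64) -> Point {
    // The struct is assembled in place on the stack (or in
    // registers); no allocator is involved.
    Point { x, y }
}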

Enums

It's been a worry of mine that I'd manage to trigger a heap allocation simply by wrapping an underlying type in an enum. But given that you're not using smart pointers, enums and other wrapper types will never use heap allocations. This shows up most often with Option and Result types, but generalizes to any other types as well.

Because the size of an enum is the size of its largest element plus the size of a discriminator, the compiler can predict how much memory is used. If enums were sized as tightly as possible, heap allocations would be needed to handle the fact that enum variants were of dynamic size!
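
We can sanity-check that claim with std::mem::size_of; the exact numbers below assume a typical 64-bit target:

use std::mem::size_of;

fn main() {
    // Space for the largest variant (a u32) plus a discriminant,
    // rounded up for alignment: 4 + 4 = 8 bytes.
    assert_eq!(size_of::<Option<u32>>(), 8);
    // Niche optimization: a `Box` is never null, so `None` can be
    // represented by the null pointer and no extra space is needed.
    assert_eq!(size_of::<Option<Box<u32>>>(), size_of::<Box<u32>>());
}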

Arrays

The array type is guaranteed to be stack allocated, which is why the array size must be declared. Interestingly enough, this can be used to cause safe Rust programs to crash:

// 256 bytes
#[derive(Default)]
struct TwoFiftySix {
    _a: [u64; 32]
}

// 8 kilobytes
#[derive(Default)]
struct EightK {
    _a: [TwoFiftySix; 32]
}

// 256 kilobytes
#[derive(Default)]
struct TwoFiftySixK {
    _a: [EightK; 32]
}

// 8 megabytes - exceeds space typically provided for the stack,
// though the kernel can be instructed to allocate more.
// On Linux, you can check stack size using `ulimit -s`
#[derive(Default)]
struct EightM {
    _a: [TwoFiftySixK; 32]
}

fn main() {
    // Because we already have things in stack memory
    // (like the current function), allocating another
    // eight megabytes of stack memory crashes the program
    let _x = EightM::default();
}

-- Rust Playground

There aren't any security implications of this (no memory corruption occurs, just running out of memory), but it's good to note that the Rust compiler won't move arrays into heap memory even if they can be reasonably expected to overflow the stack.
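
If you genuinely need a buffer that large, the fix is to request heap memory explicitly; a quick sketch using a collection (covered later in this post):

fn main() {
    // Eight megabytes, but on the heap: only the (pointer, length,
    // capacity) triple occupies stack space.
    let big: Vec<u8> = vec![0; 8 * 1024 * 1024];
    println!("Allocated {} bytes on the heap", big.len());
}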

inline attributes
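
There's little to add beyond the rule stated in the list above: inlining changes where code is emitted, not where its variables live. A minimal sketch; with or without the attribute, y is stack memory (in this function's frame, or the caller's frame once inlined):

#[inline(always)]
pub fn double_first(x: u8) -> u8 {
    // Inlining copies these instructions into the caller, but `y`
    // still occupies stack space; no heap allocation appears.
    let y = [x, 0, 0, 0];
    y[0] * 2
}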

Closures

Rules for how anonymous functions capture their arguments are typically language-specific. In Java, Lambda Expressions are actually objects created on the heap that capture local primitives by copying, and capture local non-primitives as (final) references. Python and JavaScript both bind everything by reference normally, but Python can also capture values and JavaScript has Arrow functions.

In Rust, arguments to closures are the same as arguments to other functions; closures are simply functions that don't have a declared name. Some weird ordering of the stack may be required to handle them, but it's the compiler's responsibility to figure it out.

Each example below has the same effect, but compiles to a very different program. In the simplest case, we immediately run a closure returned by another function. Because we don't store a reference to the closure, the stack memory needed to store the captured values is contiguous:

fn my_func() -> impl FnOnce() {
    let x = 24;
    // Note that this closure in assembly looks exactly like
    // any other function; you even use the `call` instruction
    // to start running it.
    move || { x; }
}

pub fn immediate() {
    my_func()();
    my_func()();
}

-- Compiler Explorer, 25 total assembly instructions

If we store a reference to the bound closure though, the Rust compiler has to work a bit harder to make sure everything is correctly laid out in stack memory:

pub fn simple_reference() {
    let x = my_func();
    let y = my_func();
    y();
    x();
}

-- Compiler Explorer, 55 total assembly instructions

In more complex cases, even things like variable order matter:

pub fn complex() {
    let x = my_func();
    let y = my_func();
    x();
    y();
}

-- Compiler Explorer, 70 total assembly instructions

In every circumstance though, the compiler ensured that no heap allocations were necessary.

Generics
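
As a sketch of the final rule from the list above (the trait and types here are invented for illustration): monomorphized generics compile down to ordinary functions, and even trait objects are just a (data pointer, vtable pointer) pair on the stack unless you explicitly box them:

trait Doubler {
    fn double(&self, x: u32) -> u32;
}

struct Simple;

impl Doubler for Simple {
    fn double(&self, x: u32) -> u32 {
        x * 2
    }
}

// Static dispatch: a separate copy is compiled per concrete `T`,
// and `d` is passed like any other stack argument.
pub fn static_dispatch<T: Doubler>(d: &T, x: u32) -> u32 {
    d.double(x)
}

// Dynamic dispatch: the vtable lookup happens at runtime, but the
// `&dyn Doubler` fat pointer itself is plain stack data; no `Box`
// is required to use trait objects.
pub fn dynamic_dispatch(d: &dyn Doubler, x: u32) -> u32 {
    d.double(x)
}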

A Heaping Helping: Rust and Dynamic Memory

Opening question: How many allocations happen before fn main() is called?

Now, one question I hope you're asking is "how do we distinguish stack- and heap-based allocations in Rust code?" There are two strategies I'm going to use for this: inspecting the assembly output (as in the Compiler Explorer examples above), and using an allocator like qadapt that can flag heap allocations as they occur.

Summary section:

  • Smart pointers hold their contents in the heap
  • Collections are smart pointers for many objects at a time, and reallocate when they need to grow
  • Boxed closures (FnBox, others?) are heap allocated
  • "Move" semantics don't trigger new allocation; just a change of ownership, so are incredibly fast
  • Stack-based alternatives to standard library types should be preferred (spin, parking_lot)

Smart pointers

The first thing to note are the "smart pointer" types. When you have data that must outlive the scope in which it is declared, or your data is of unknown or dynamic size, you'll make use of these types.

The term smart pointer comes from C++, and is used to describe objects that are responsible for managing ownership of data allocated on the heap. The smart pointers available in the alloc crate should look mostly familiar: Box, Rc, Arc, and Cow.

The standard library also defines some smart pointers of its own, more than can be covered in this article.

Finally, there is one gotcha: cell types (like RefCell) look and behave like smart pointers, but don't actually require heap allocation. Check out the core::cell docs for more information.
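
A quick sketch to back that up: a cell wraps its contents inline, so the whole value sits wherever its owner does, with no pointer into the heap:

use std::cell::RefCell;
use std::mem::size_of;

fn main() {
    // The RefCell's contents live inline, next to a small
    // borrow-tracking flag; there's no heap indirection.
    let cell = RefCell::new([0u8; 16]);
    assert!(size_of::<RefCell<[u8; 16]>>() >= 16);
    *cell.borrow_mut() = [1; 16];
    assert_eq!(cell.borrow()[0], 1);
}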

When a smart pointer is created, the data it is given is placed in heap memory and the location of that data is recorded in the smart pointer. Once the smart pointer has determined it's safe to deallocate that memory (when a Box has gone out of scope or when the reference count for an object goes to zero), the heap space is reclaimed. We can prove these types use heap memory by looking at code:

use std::rc::Rc;
use std::sync::Arc;
use std::borrow::Cow;

pub fn my_box() {
    // Drop at line 1640
    Box::new(0);
}

pub fn my_rc() {
    // Drop at line 1650
    Rc::new(0);
}

pub fn my_arc() {
    // Drop at line 1660
    Arc::new(0);
}

pub fn my_cow() {
    // Drop at line 1672
    Cow::from("drop");
}

-- Compiler Explorer

Collections

Collection types use heap memory because they have dynamic size; they will request more memory when needed, and can release memory when it's no longer necessary. This dynamic memory usage forces Rust to heap-allocate everything they contain. In a way, collections are smart pointers for many objects at once. Common types that fall under this umbrella are Vec, HashMap, and String (not &str).
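
A short sketch of that growth behavior; pushing past the current capacity makes the vector request a larger heap block and move its elements over:

fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(4);
    for i in 0..5 {
        v.push(i);
    }
    // Five elements no longer fit in the original request of four,
    // so a reallocation was needed along the way.
    assert!(v.capacity() >= 5);
}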

But while collections store the objects they own in heap memory, creating new collections will not allocate on the heap. This is a bit weird, because if we call Vec::new() the assembly shows a corresponding call to drop_in_place:

pub fn my_vec() {
    // Drop in place at line 481
    Vec::<u8>::new();
}

-- Compiler Explorer

But because the vector has no elements it is managing, no calls to the allocator will ever be dispatched. A couple of places to look at for confirming this behavior: Vec::new(), HashMap::new(), and String::new().
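
A sketch of what that looks like in practice; a freshly created vector is just three words of stack data (on typical targets) and reports zero capacity until the first element forces a real allocation:

use std::mem::size_of;

fn main() {
    let v: Vec<u8> = Vec::new();
    // No heap request has been made yet; the (pointer, length,
    // capacity) triple lives entirely on the stack.
    assert_eq!(v.capacity(), 0);
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
}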

Compiler Optimizations: What It's Done For You Lately

  1. Box<> getting inlined into stack allocations
  2. Vec::push() === Vec::with_capacity() for fixed/predictable capacities
  3. Inlining statics that don't change value