---
layout: post
title: "Stacking Up: Fixed Memory"
description: "Going fast in Rust"
---
`const` and `static` are perfectly fine, but it's very rare that we know
at compile-time about either values or references that will be the same for the entire
time our program is running. Put another way, it's not often the case that either you
or your compiler knows how much memory your entire program will need.
However, there are still some optimizations the compiler can do if it knows how much
memory individual functions will need. Specifically, the compiler can make use of
"stack" memory (as opposed to "heap" memory), which can be managed far faster in
both the short- and long-term. When requesting memory, the `push` instruction
can typically complete in 1 or 2 cycles
(less than a nanosecond on modern CPUs). Heap memory instead requires using an allocator
(specialized software to track what memory is in use) to reserve space.
And when you're finished with your memory, the `pop` instruction likewise runs in
1-3 cycles, as opposed to an allocator needing to worry about memory fragmentation
and other issues. All sorts of incredibly sophisticated techniques have been used
to design allocators:
- Garbage collection strategies like tracing (used in Java) and reference counting (used in Python)
- Thread-local structures to prevent locking the allocator in tcmalloc
- Arena structures used in jemalloc, which until recently was the primary allocator for Rust programs!
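As an aside, Rust lets you choose which allocator backs your program. Here's a minimal sketch of opting back into jemalloc, assuming the `jemallocator` crate has been added as a dependency:

```rust
// A minimal sketch, assuming the `jemallocator` crate is available;
// the `#[global_allocator]` attribute routes every heap allocation
// in the program through jemalloc instead of the system allocator.
use jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // This `Vec` is now allocated by jemalloc
    let v = vec![1, 2, 3];
    println!("{:?}", v);
}
```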
But no matter how fast your allocator is, the principle remains: the
fastest allocator is the one you never use. As such, we're not going to discuss how exactly the
`push` and `pop` instructions work,
but we'll focus instead on the conditions that enable the Rust compiler to use
the faster stack-based allocation for variables.
With that in mind, let's get into the details. How do we know when Rust will or will not use
stack allocation for objects we create? Looking at other languages, it's often easy to delineate
between stack and heap. Managed-memory languages (Python, Java, C#) assume
everything is on the heap. JIT compilers (PyPy, HotSpot) may
optimize some heap allocations away, but you should never assume it will happen.
C makes things clear: calls to special functions (`malloc(3)` is one) are
the way to use heap memory. Old C++ uses the `new`
keyword, though modern C++/C++11 is more complicated with RAII.
For Rust specifically, the principle is this: stack allocation will be used for everything that doesn't involve "smart pointers" and collections. If we're interested in dissecting it though, there are three things we pay attention to:

1. Stack manipulation instructions (`push`, `pop`, and `add`/`sub` of the `rsp` register) indicate allocation of stack memory:

```rust
pub fn stack_alloc(x: u32) -> u32 {
    // Space for `y` is allocated by subtracting from `rsp`,
    // and then populated
    let y = [1u8, 2, 3, 4];
    // Space for `y` is deallocated by adding back to `rsp`
    x
}
```

2. Tracking when exactly heap allocation calls happen is difficult. It's typically easier to watch for `call core::ptr::real_drop_in_place`, and infer that a heap allocation happened in the recent past:

```rust
pub fn heap_alloc(x: usize) -> usize {
    // Space for elements in a vector has to be allocated
    // on the heap, and is then de-allocated once the
    // vector goes out of scope
    let y: Vec<u8> = Vec::with_capacity(x);
    x
}
```

-- Compiler Explorer (`real_drop_in_place` happens on line 1317)

Note: While the `Drop` trait is called for stack-allocated objects, the Rust standard library only defines `Drop` implementations for types that involve heap allocation.

3. If you don't want to inspect the assembly, use a custom allocator that's able to track and alert when heap allocations occur. As an unashamed plug, qadapt was designed for exactly this purpose.
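You don't need a full crate to see the idea, though. Here's a minimal sketch of a tracking allocator built only on the standard library; the `CountingAlloc` and `ALLOCATIONS` names are illustrative, not qadapt's actual API:

```rust
// A minimal sketch of the tracking-allocator idea using only the
// standard library; `CountingAlloc` and `ALLOCATIONS` are
// illustrative names, not qadapt's actual API.
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Count (or panic, if you want qadapt-style alerts) on
        // every heap allocation in the program
        ALLOCATIONS.fetch_add(1, Ordering::SeqCst);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let before = ALLOCATIONS.load(Ordering::SeqCst);
    let _v: Vec<u8> = Vec::with_capacity(16); // heap allocation
    assert!(ALLOCATIONS.load(Ordering::SeqCst) > before);
}
```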
With all that in mind, let's talk about situations in which we're guaranteed to use stack memory:

- Structs are created on the stack.
- Function arguments are passed on the stack, meaning the `#[inline]` attribute will not change the memory region used.
- Enums and unions are stack-allocated.
- Arrays are always stack-allocated.
- Closures capture their arguments on the stack.
- Generics will use stack allocation, even with dynamic dispatch.
## Structs

The simplest case comes first. When creating vanilla `struct` objects, we use stack memory
to hold their contents:
```rust
struct Point {
    x: u64,
    y: u64,
}

struct Line {
    a: Point,
    b: Point,
}

pub fn make_line() {
    // `origin` is stored in the first 16 bytes of memory
    // starting at location `rsp`
    let origin = Point { x: 0, y: 0 };
    // `point` makes up the next 16 bytes of memory
    let point = Point { x: 1, y: 2 };

    // When creating `ray`, we just move the content out of
    // `origin` and `point` into the next 32 bytes of memory
    let ray = Line { a: origin, b: point };
}
```
Note that while some extra-fancy instructions are used for memory manipulation in the assembly,
the `sub rsp, 64` instruction indicates we're still working with the stack.
## Function arguments
Have you ever wondered how functions communicate with each other? Like, once the variables are
given to you, everything's fine. But how do you "give" those variables to another function?
How do you get the results back afterward? The answer: the compiler arranges memory and
assembly instructions using a pre-determined calling convention.
This convention governs the rules around where arguments needed by a function will be located
(either in memory offsets relative to the stack pointer `rsp`, or in other registers), and
where the results can be found once the function has finished. And when multiple languages
agree on what the calling conventions are, you can do things like having
Go call Rust code!

Put simply: it's the compiler's job to figure out how to call other functions, and you can assume that the compiler is good at its job.
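To make the cross-language point concrete, here's a minimal sketch of pinning down a calling convention in Rust so that C (or Go via cgo) can call it; the function name is mine, not from any example above:

```rust
// A minimal sketch: `extern "C"` fixes the calling convention to the
// platform's C ABI, and `#[no_mangle]` keeps the symbol name stable
// so other languages can find it. The function name is illustrative.
#[no_mangle]
pub extern "C" fn distance_squared(x: i64, y: i64) -> i64 {
    // Arguments arrive in the registers the C ABI dictates,
    // and the result goes back in `rax` on x86-64
    x * x + y * y
}
```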
We can see this in action using a simple example:
```rust
struct Point {
    x: i64,
    y: i64,
}

// We use integer division operations to keep
// the assembly clean, understanding the result
// isn't accurate.
fn distance(a: &Point, b: &Point) -> i64 {
    // Immediately subtract from `rsp` the bytes needed
    // to hold all the intermediate results - this is
    // the stack allocation step

    // The compiler used the `rdi` and `rsi` registers
    // to pass our arguments, so read them in
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    // Do the actual math work
    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared

    // Our final result will be stored in the `rax` register
    // so that our caller knows where to retrieve it.
    // Finally, add back to `rsp` the stack memory that is
    // now ready to be used by other functions.
}

pub fn total_distance() {
    let start = Point { x: 1, y: 2 };
    let middle = Point { x: 3, y: 4 };
    let end = Point { x: 5, y: 6 };

    let _dist_1 = distance(&start, &middle);
    let _dist_2 = distance(&middle, &end);
}
```
As a consequence of function arguments never using heap memory, we can also
infer that functions using the `#[inline]` attribute do not heap-allocate.
But better than inferring, we can look at the assembly to prove it:
```rust
struct Point {
    x: i64,
    y: i64,
}

// Note that there is no `distance` function in the assembly output,
// and the total line count goes from 229 with inlining off
// to 306 with inline on. Even still, no heap allocations occur.
#[inline(always)]
fn distance(a: &Point, b: &Point) -> i64 {
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared
}

pub fn total_distance() {
    let start = Point { x: 1, y: 2 };
    let middle = Point { x: 3, y: 4 };
    let end = Point { x: 5, y: 6 };

    let _dist_1 = distance(&start, &middle);
    let _dist_2 = distance(&middle, &end);
}
```
Finally, passing by value (arguments with type `Copy`)
and passing by reference (either moving ownership or passing a pointer) may have
slightly different layouts in assembly, but will
still use either stack memory or CPU registers.
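As a quick illustration of that last point, here's a minimal sketch contrasting the two; the function names are mine, not from the examples above:

```rust
// A minimal sketch: `Point` is `Copy`, so `by_value` receives the
// struct itself (in registers or stack memory), while `by_reference`
// receives a pointer to the caller's stack. Neither touches the heap.
#[derive(Clone, Copy)]
struct Point {
    x: i64,
    y: i64,
}

fn by_value(p: Point) -> i64 {
    p.x + p.y
}

fn by_reference(p: &Point) -> i64 {
    p.x + p.y
}

pub fn compare() -> i64 {
    let p = Point { x: 1, y: 2 };
    by_value(p) + by_reference(&p)
}
```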
## Enums
If you've ever worried that wrapping your types in `Option` or `Result` would
finally make them large enough that Rust decides to use heap allocation instead,
fear no longer: `enum` and union types don't use heap allocation:
```rust
enum MyEnum {
    Small(u8),
    Large(u64),
}

struct MyStruct {
    x: MyEnum,
    y: MyEnum,
}

pub fn enum_compare() {
    let x = MyEnum::Small(0);
    let y = MyEnum::Large(0);

    let z = MyStruct { x, y };
    let opt = Option::Some(z);
}
```
Because the size of an `enum` is the size of its largest variant plus a tag,
the compiler can predict how much memory is used no matter which variant
of an enum is currently stored in a variable. Thus, enums and unions have no
need of heap allocation. There's unfortunately not a great way to show this
in assembly, so I'll instead point you to the
`core::mem::size_of` documentation.
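That said, `size_of` makes the accounting easy to check yourself. A minimal sketch, where the asserted size assumes a typical 64-bit target:

```rust
// A minimal sketch of the "largest variant plus a tag" rule; the
// asserted size assumes a typical 64-bit target.
use std::mem::size_of;

enum MyEnum {
    Small(u8),
    Large(u64),
}

fn main() {
    // The largest variant holds a u64 (8 bytes); the tag plus
    // alignment padding rounds the total up to 16 bytes
    assert_eq!(size_of::<MyEnum>(), 16);
    // Whether `Option<MyEnum>` needs extra space depends on niche
    // optimizations, so print it rather than assume:
    println!("Option<MyEnum>: {} bytes", size_of::<Option<MyEnum>>());
}
```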
## Arrays

The array type is guaranteed to be stack-allocated, which is why the array size must be declared. Interestingly enough, this can be used to cause safe Rust programs to crash:
```rust
// 256 bytes
#[derive(Default)]
struct TwoFiftySix {
    _a: [u64; 32],
}

// 8 kilobytes
#[derive(Default)]
struct EightK {
    _a: [TwoFiftySix; 32],
}

// 256 kilobytes
#[derive(Default)]
struct TwoFiftySixK {
    _a: [EightK; 32],
}

// 8 megabytes - exceeds space typically provided for the stack,
// though the kernel can be instructed to allocate more.
// On Linux, you can check stack size using `ulimit -s`
#[derive(Default)]
struct EightM {
    _a: [TwoFiftySixK; 32],
}

fn main() {
    // Because we already have things in stack memory
    // (like the current function call stack), allocating another
    // eight megabytes of stack memory crashes the program
    let _x = EightM::default();
}
```
There aren't any security implications of this (no memory corruption occurs), but it's good to note that the Rust compiler won't move arrays into heap memory even if they can be reasonably expected to overflow the stack.
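If you genuinely need that much contiguous storage, the fix is to request it on the heap yourself. A minimal sketch:

```rust
// A minimal sketch of the workaround: `Vec` places its elements on
// the heap, so eight megabytes of u64 values is no problem here.
fn main() {
    // 1024 * 1024 u64 values = 8 MiB, heap-allocated by `Vec`
    let v: Vec<u64> = vec![0; 1024 * 1024];
    assert_eq!(v.len(), 1024 * 1024);
}
```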
## Closures
Rules for how anonymous functions capture their arguments are typically language-specific.
In Java, Lambda Expressions are actually objects created on the heap that capture local
primitives by copying, and capture local non-primitives as (`final`) references.
Python and JavaScript both bind everything by reference normally, but Python can also
capture values, and JavaScript has Arrow functions.

In Rust, arguments to closures are the same as arguments to other functions; closures are simply functions that don't have a declared name. Some weird ordering of the stack may be required to handle them, but it's the compiler's responsibility to figure it out.
Each example below has the same effect, but compiles to a very different program. In the simplest case, we immediately run a closure returned by another function. Because we don't store a reference to the closure, the stack memory needed to store the captured values is contiguous:
```rust
fn my_func() -> impl FnOnce() {
    let x = 24;
    // Note that this closure in assembly looks exactly like
    // any other function; you even use the `call` instruction
    // to start running it.
    move || { x; }
}

pub fn immediate() {
    my_func()();
    my_func()();
}
```
-- Compiler Explorer, 25 total assembly instructions
If we store a reference to the bound closure though, the Rust compiler has to work a bit harder to make sure everything is correctly laid out in stack memory:
```rust
pub fn simple_reference() {
    let x = my_func();
    let y = my_func();

    y();
    x();
}
```
-- Compiler Explorer, 55 total assembly instructions
In more complex cases, even things like variable order matter:
```rust
pub fn complex() {
    let x = my_func();
    let y = my_func();

    x();
    y();
}
```
-- Compiler Explorer, 70 total assembly instructions
In every circumstance though, the compiler ensured that no heap allocations were necessary.
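The boundary is worth knowing, though: per the principle from earlier, wrapping a closure in a smart pointer is exactly what forces it onto the heap. A minimal sketch of the contrast:

```rust
// A minimal sketch of where closures stop being stack-only: storing
// one behind a smart pointer like `Box<dyn FnOnce()>` requires a
// heap allocation, per the "smart pointers and collections" rule.
fn stack_closure() -> impl FnOnce() {
    let x = 24;
    move || { let _ = x; } // capture lives on the stack
}

fn heap_closure() -> Box<dyn FnOnce()> {
    let x = 24;
    Box::new(move || { let _ = x; }) // capture moved to the heap
}

pub fn contrast() {
    stack_closure()();
    heap_closure()();
}
```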