---
layout: post
title: "A Heaping Helping: Dynamic Memory"
description: "The reason Rust exists."
---
Managing dynamic memory is hard. Some languages assume users will do it themselves (C, C++), and some go to extreme lengths to protect users from themselves (Java, Python). In Rust, dynamic memory (also referred to as the heap) is managed through a system called ownership. And as the docs mention, ownership is Rust's most unique feature.
The heap is used in two situations: when the compiler is unable to predict either the total size of memory needed or how long the memory is needed for, it will allocate space in the heap. This happens pretty frequently; if you want to download the Google home page, you won't know how large it is until your program runs. And when you're finished with Google, whenever that might be, the memory is deallocated so it can be used to store other webpages.
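As a minimal sketch of that first case (reading from stdin stands in for downloading a page, since the networking itself isn't the point), a `String` grows its heap buffer to hold however much input shows up, and releases that buffer when it goes out of scope:

```rust
use std::io::Read;

fn main() {
    // The compiler can't know how much text will arrive,
    // so the String grows its heap buffer as bytes come in.
    let mut page = String::new();
    std::io::stdin()
        .read_to_string(&mut page)
        .expect("couldn't read stdin");
    println!("read {} bytes", page.len());
} // `page` goes out of scope here and its heap buffer is released
```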
We won't go into detail on how the heap is managed; the ownership documentation does a phenomenal job explaining both the "why" and "how" of memory management. Instead, we're going to focus on understanding "when" heap allocations occur in Rust.
To start off, take a guess at how many allocations happen in the program below:
```rust
fn main() {}
```
It's obviously a trick question: while no heap allocations happen as a result of the code listed above, the setup needed to call `main` does allocate on the heap. Here's a way to show it:
```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicU64, Ordering};

// Every call to `alloc` bumps this counter before deferring to the system allocator.
static ALLOCATION_COUNT: AtomicU64 = AtomicU64::new(0);

struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATION_COUNT.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
    }
}

// Install the counting allocator as the global allocator for this program.
#[global_allocator]
static A: CountingAllocator = CountingAllocator;

fn main() {
    let x = ALLOCATION_COUNT.load(Ordering::SeqCst);
    println!("There were {} allocations before calling main!", x);
}
```
As of the time of writing, there are five allocations that happen before `main` is ever called.
But to understand more practically where heap allocation happens, we'll follow this guide:
- Smart pointers hold their contents in the heap
- Collections are smart pointers for many objects at a time, and reallocate when they need to grow
- `lazy_static!` and `thread_local!` force heap allocation for everything
- Stack-based alternatives to standard library types should be preferred (spin, parking_lot)
## Smart pointers
The first things to note are the "smart pointer" types. When you have data that must outlive the scope in which it is declared, or data of unknown or dynamic size, you'll make use of these types.
The term "smart pointer" comes from C++, and while it's closely linked to a general design pattern of "Resource Acquisition Is Initialization" (RAII), we'll use it here specifically to describe objects that are responsible for managing ownership of data allocated on the heap. The smart pointers available in the `alloc` crate should look mostly familiar: `Box`, `Rc`, `Arc`, and `Cow`.
The standard library also defines some smart pointers to manage heap objects (`Mutex` and `RwLock` among them), though there are more than can be covered here.
Finally, there is one "gotcha": cell types (like `RefCell`) follow the RAII pattern, but don't involve heap allocation. Check out the `core::cell` docs for more information.
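As a quick sketch of that gotcha, a `RefCell` and its contents live wherever the `RefCell` itself is declared (here, on the stack), and borrowing from it never touches the allocator:

```rust
use std::cell::RefCell;

fn main() {
    // The RefCell and the u32 inside it live on the stack; no allocator calls happen here.
    let counter = RefCell::new(0u32);
    *counter.borrow_mut() += 1;
    println!("counter = {}", *counter.borrow());
}
```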
When a smart pointer is created, the data it is given is placed in heap memory and the location of that data is recorded in the smart pointer. Once the smart pointer has determined it's safe to deallocate that memory (when a `Box` has gone out of scope or when the reference count for an object goes to zero), the heap space is reclaimed. We can prove these types use heap memory by looking at code:
```rust
use std::borrow::Cow;
use std::rc::Rc;
use std::sync::Arc;

pub fn my_box() {
    // Allocates space for the value on the heap; freed when the Box drops at the end of this function
    Box::new(0);
}

pub fn my_rc() {
    // Allocates the value plus its reference counts on the heap
    Rc::new(0);
}

pub fn my_arc() {
    // Same as Rc, but with atomic reference counts
    Arc::new(0);
}

pub fn my_cow() {
    // Borrows a &'static str here; a Cow only allocates once it needs an owned copy
    Cow::from("drop");
}
```
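To tie that back to the reference-counted case, here's a minimal sketch showing that cloning an `Rc` doesn't allocate a second time, and that the single heap allocation is reclaimed only once the last owner is dropped:

```rust
use std::rc::Rc;

fn main() {
    // One heap allocation holds the value and its reference counts.
    let first = Rc::new(42);
    // Cloning bumps the count; it does not allocate again.
    let second = Rc::clone(&first);
    println!("strong count: {}", Rc::strong_count(&first)); // prints 2
    drop(second);
    drop(first); // the count hits zero here and the heap memory is reclaimed
}
```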
## Collections
Collection types use heap memory because they have dynamic size: they will request more memory when needed, and can release memory when it's no longer necessary. This dynamic memory usage forces Rust to heap allocate everything they contain. In a way, collections are smart pointers for many objects at once.
Common types that fall under this umbrella are `Vec`, `HashMap`, and `String` (not `&str`).
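A small sketch of the reallocation behavior: watching `capacity()` jump as elements are pushed marks each point where the `Vec` had to ask the allocator for a larger buffer and move its contents over (the exact growth factors are an implementation detail):

```rust
fn main() {
    let mut numbers: Vec<u32> = Vec::new();
    let mut last_capacity = numbers.capacity();

    for i in 0..32 {
        numbers.push(i);
        // A change in capacity means the Vec requested a new, larger buffer.
        if numbers.capacity() != last_capacity {
            last_capacity = numbers.capacity();
            println!("len = {:2}, capacity = {:2}", numbers.len(), last_capacity);
        }
    }
}
```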
But while collections store the objects they own in heap memory, creating new collections will not allocate on the heap. This is a bit weird, because if we call `Vec::new()` the assembly shows a corresponding call to `drop_in_place`:
```rust
pub fn my_vec() {
    // The generated assembly still includes a `drop_in_place` call for this Vec
    Vec::<u8>::new();
}
```
But because the vector has no elements it is managing, no calls to the allocator will ever be dispatched. A couple of places to look at for confirming this behavior: `Vec::new()`, `HashMap::new()`, and `String::new()`.
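One quick way to see this without digging through the standard library source: a freshly created collection reports a capacity of zero, meaning it hasn't requested a buffer from the allocator yet:

```rust
fn main() {
    let v: Vec<u8> = Vec::new();
    let s = String::new();

    // No buffer has been requested from the allocator yet.
    assert_eq!(v.capacity(), 0);
    assert_eq!(s.capacity(), 0);
    // The first push (or push_str) is what triggers the initial allocation.
}
```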
## `lazy_static!` and `thread_local!`
There are two macros worth addressing in a conversation about heap memory. The first isn't part of the standard library, but it's the 5th most downloaded crate in Rust; the second ships with the standard library itself.

Whether `lazy_static!` truly forces a heap allocation is less clear-cut than it might seem: in `no_std` mode it relies on the spin crate, while in std mode it builds on `std::sync::Once`, so the static value itself can end up in static memory rather than on the heap. `thread_local!`, on the other hand, does appear to always allocate heap space for its per-thread storage.
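For reference, basic `thread_local!` usage looks like the sketch below; every thread that touches `COUNTER` gets its own independent copy, which is exactly the per-thread storage in question:

```rust
use std::cell::Cell;

thread_local! {
    // Each thread that uses COUNTER gets its own copy of this value.
    static COUNTER: Cell<u64> = Cell::new(0);
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| println!("this thread's count: {}", c.get()));
}
```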
## Heap Alternatives
While it is a bit strange for us to talk of the stack after spending so much time with the heap, it's worth pointing out that some heap-allocated objects in Rust have stack-based counterparts provided by other crates. There are a number of cases where this may be helpful, so it's useful to know that alternatives exist if you need them.
When it comes to some of the standard library smart pointers (`RwLock` and `Mutex`), stack-based alternatives are provided in crates like parking_lot and spin. You can check out `lock_api::RwLock`, `lock_api::Mutex`, and `spin::Once` if you're in need of synchronization primitives.
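Here's a minimal sketch of using both crates, assuming `parking_lot` and `spin` have been added as dependencies in Cargo.toml:

```rust
use parking_lot::Mutex;
use spin::Mutex as SpinMutex;

fn main() {
    // Unlike std::sync::Mutex, parking_lot's lock() doesn't return a Result.
    let counter = Mutex::new(0u32);
    *counter.lock() += 1;

    // spin's mutex busy-waits instead of parking the thread.
    let spin_counter = SpinMutex::new(0u32);
    *spin_counter.lock() += 1;

    println!("{} {}", *counter.lock(), *spin_counter.lock());
}
```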
The `thread_id` crate may still be necessary if you're implementing an allocator (cough cough the author cough cough), because `thread::current().id()` uses a `thread_local!` structure that needs heap allocation.