---
layout: post
title: "Allocations in Rust"
description: "An introduction to the memory model."
category:
tags: []
---
There's an alchemy of distilling complex technical topics into articles and videos that change the way programmers see the tools they interact with on a regular basis. I knew what a linker was, but there's a staggering amount of complexity in between the OS and `main()`. Rust programmers use the `Box` type all the time, but there's a rich history of the Rust language itself wrapped up in how special it is.
In a similar vein, this series attempts to look at code and understand how memory is used; the complex choreography of operating system, compiler, and program that frees you to focus on functionality far-flung from frivolous book-keeping. The Rust compiler relieves a great deal of the cognitive burden associated with memory management, but we're going to step into its world for a while.
Let's learn a bit about memory in Rust.
## Table of Contents
This series is intended as both learning and reference material; we'll work through the different memory types Rust uses, and explain the implications of each. Ultimately, a summary will be provided as a cheat sheet for easy future reference. To that end, a table of contents is in order:
- Foreword
- Global Memory Usage: The Whole World
- Fixed Memory: Stacking Up
- Dynamic Memory: A Heaping Helping
- Compiler Optimizations: What It's Done For You Lately
- Summary: What Are the Rules?
## Foreword
Rust's three defining features of Performance, Reliability, and Productivity are all driven to a great degree by how the Rust compiler understands memory usage. Unlike managed-memory languages (Java, Python), Rust doesn't garbage collect; instead, it uses an ownership system to reason about how long objects will last in your program. In some cases, if the life of an object is fairly transient, Rust can make use of a very fast region called the "stack." When that's not possible, Rust uses dynamic (heap) memory and the ownership system to ensure you can't accidentally corrupt memory. It's not as fast, but it is important to have available.
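To make that distinction concrete, here's a small sketch of my own (not from the original series): a value whose size is known at compile time can live on the stack, while `Box::new` explicitly asks for heap memory; in both cases ownership decides when the memory is released.

```rust
fn main() {
    // `x` has a fixed, compile-time-known size, so it can live on the stack.
    // It is freed automatically when it goes out of scope at the end of `main`.
    let x: [u64; 4] = [1, 2, 3, 4];

    // `Box::new` explicitly requests heap memory; the ownership system
    // guarantees the allocation is freed exactly once, when `boxed` is dropped.
    let boxed: Box<[u64; 4]> = Box::new(x);

    println!("stack: {:?}, heap: {:?}", x, boxed);
}
```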
That said, there are specific situations in Rust where you'd never need to worry about the stack/heap distinction! If you:

- Never use `unsafe`
- Never use `#![feature(alloc)]` or the `alloc` crate

...then it's not possible for you to use dynamic memory!
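As a rough sketch of what code under those constraints looks like (my own example; in practice these constraints usually also come with `#![no_std]`, which is omitted here so the snippet runs on a normal desktop toolchain):

```rust
fn main() {
    // Everything here has a size known at compile time, so no heap is needed.
    let readings: [i32; 8] = [3, -1, 4, 1, -5, 9, 2, -6];

    // Iterator adapters work on plain arrays without allocating.
    let positive_sum: i32 = readings.iter().filter(|&&r| r > 0).sum();
    assert_eq!(positive_sum, 19);

    // The lines below are the kind of thing that would require `alloc`:
    // let growable = vec![1, 2, 3];   // `Vec` needs the heap
    // let boxed = Box::new(readings); // so does `Box`
}
```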
For some uses of Rust, typically embedded devices, these constraints are OK. They have very limited memory, and the program binary size itself may significantly affect what's available! There's no operating system able to manage this "virtual memory" thing, but that's not an issue because there's only one running application. The embedonomicon is ever in mind, and interacting with the "real world" through extra peripherals is accomplished by reading and writing to specific memory addresses.
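As a rough illustration of that last point (the address below is entirely made up; real register addresses come from the target device's datasheet, and this should never be run on a hosted OS), peripheral access typically looks like a volatile read or write at a fixed memory address:

```rust
use core::ptr;

// Hypothetical memory-mapped register address, for illustration only.
const STATUS_REGISTER: *mut u32 = 0x4000_0000 as *mut u32;

/// Read a status value directly from a memory-mapped peripheral register.
pub fn read_status() -> u32 {
    // A volatile read tells the compiler not to elide or reorder the access,
    // since the hardware can change this value at any time.
    unsafe { ptr::read_volatile(STATUS_REGISTER) }
}
```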
Most Rust programs find these requirements overly burdensome though. C++ developers would struggle without access to `std::vector` (except those hardcore no-STL people), and Rust developers would struggle without `std::vec`. But with the constraints above, `std::vec` is actually a part of the `alloc` crate, and thus off-limits. `Box`, `Rc`, etc., are also unusable for the same reason.
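As a quick illustration of my own (not from the original series): in an ordinary `std` program all three of these types hand out heap memory, and under the hood they are defined in the `alloc` crate and re-exported by `std`.

```rust
use std::rc::Rc;

fn main() {
    // All three of these types allocate on the heap.
    let numbers: Vec<u32> = vec![1, 2, 3]; // growable buffer
    let boxed: Box<u32> = Box::new(42);    // single heap-allocated value
    let shared: Rc<u32> = Rc::new(7);      // reference-counted heap value

    println!("{:?} {} {}", numbers, boxed, shared);
}
```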
Whether writing code for embedded devices or not, the important thing in both situations is how much you know before your application starts about what its memory usage will look like. In embedded devices, there's a small, fixed amount of memory to use. In a browser, you have no idea how large google.com's home page is until you start trying to download it. The compiler uses this knowledge (or lack thereof) to optimize how memory is used; put simply, your code runs faster when the compiler can guarantee exactly how much memory your program needs while it's running. This series is all about understanding how the compiler reasons about your program, with an emphasis on the implications for performance.
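To make that contrast concrete, here's a small sketch of my own, with stdin standing in for the "size unknown until runtime" case:

```rust
use std::io::{self, Read};

fn main() -> io::Result<()> {
    // Size known before the program runs: the compiler can reserve exactly
    // 1024 bytes up front, with no bookkeeping needed at runtime.
    let fixed = [0u8; 1024];

    // Size unknown until runtime: we have no idea how much data will arrive
    // on stdin, so the buffer lives on the heap and grows as needed.
    let mut dynamic = Vec::new();
    io::stdin().read_to_end(&mut dynamic)?;

    println!(
        "fixed buffer: {} bytes, heap buffer: {} bytes",
        fixed.len(),
        dynamic.len()
    );
    Ok(())
}
```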
Now let's address some conditions and caveats before going much further:
- We'll focus on "safe" Rust only; `unsafe` lets you use platform-specific allocation APIs (`malloc`) that we'll ignore.
- We'll assume a "debug" build of Rust code (what you get with `cargo run` and `cargo test`) and address (pun intended) release mode at the end (`cargo run --release` and `cargo test --release`).
- All content will be run using Rust 1.32, as that's the highest currently supported in the Compiler Explorer. As such, we'll avoid upcoming innovations like compile-time evaluation of `static` that are available in nightly.
- Because of the nature of the content, being able to read assembly is helpful. We'll keep it simple, but I found a refresher on the `push` and `pop` instructions helpful while writing this.
- I've tried to be precise in saying only what I can prove using the tools (ASM, docs) that are available, but if there's something said in error it will be corrected expeditiously. Please let me know at bradlee@speice.io.
Finally, I'll do what I can to flag potential future changes, but the Rust docs have a notice worth repeating:
> Rust does not currently have a rigorously and formally defined memory model.
>
> -- the docs