"use strict";(self.webpackChunkspeice_io=self.webpackChunkspeice_io||[]).push([["1874"],{22763:function(e,n,t){t.r(n),t.d(n,{assets:function(){return l},contentTitle:function(){return i},default:function(){return d},frontMatter:function(){return o},metadata:function(){return s},toc:function(){return c}});var s=t(82340),r=t(85893),a=t(50065);let o={slug:"2019/02/stacking-up",title:"Allocations in Rust: Fixed memory",date:new Date("2019-02-06T12:00:00.000Z"),authors:["bspeice"],tags:[]},i=void 0,l={authorsImageUrls:[void 0]},c=[{value:"Structs",id:"structs",level:2},{value:"Function arguments",id:"function-arguments",level:2},{value:"Enums",id:"enums",level:2},{value:"Arrays",id:"arrays",level:2},{value:"Closures",id:"closures",level:2},{value:"Generics",id:"generics",level:2},{value:"Copy types",id:"copy-types",level:2},{value:"Iterators",id:"iterators",level:2}];function h(e){let n={a:"a",code:"code",em:"em",h2:"h2",li:"li",ol:"ol",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,a.a)(),...e.components};return(0,r.jsxs)(r.Fragment,{children:[(0,r.jsxs)(n.p,{children:[(0,r.jsx)(n.code,{children:"const"})," and ",(0,r.jsx)(n.code,{children:"static"})," are perfectly fine, but it's relatively rare that we know at compile-time about\neither values or references that will be the same for the duration of our program. Put another way,\nit's not often the case that either you or your compiler knows how much memory your entire program\nwill ever need."]}),"\n",(0,r.jsx)(n.p,{children:'However, there are still some optimizations the compiler can do if it knows how much memory\nindividual functions will need. Specifically, the compiler can make use of "stack" memory (as\nopposed to "heap" memory) which can be managed far faster in both the short- and long-term.'}),"\n",(0,r.jsxs)(n.p,{children:["When requesting memory, the ",(0,r.jsxs)(n.a,{href:"http://www.cs.virginia.edu/~evans/cs216/guides/x86.html",children:[(0,r.jsx)(n.code,{children:"push"})," instruction"]}),"\ncan typically complete in ",(0,r.jsx)(n.a,{href:"https://agner.org/optimize/instruction_tables.ods",children:"1 or 2 cycles"})," (<1ns\non modern CPUs). Contrast that to heap memory which requires an allocator (specialized\nsoftware to track what memory is in use) to reserve space. When you're finished with stack memory,\nthe ",(0,r.jsx)(n.code,{children:"pop"})," instruction runs in 1-3 cycles, as opposed to an allocator needing to worry about memory\nfragmentation and other issues with the heap. 
All sorts of incredibly sophisticated techniques have been used to design allocators:

- [Garbage Collection](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science))
  strategies like [Tracing](https://en.wikipedia.org/wiki/Tracing_garbage_collection) (used in
  [Java](https://www.oracle.com/technetwork/java/javase/tech/g1-intro-jsp-135488.html)) and
  [Reference counting](https://en.wikipedia.org/wiki/Reference_counting) (used in
  [Python](https://docs.python.org/3/extending/extending.html#reference-counts))
- Thread-local structures to prevent locking the allocator in
  [tcmalloc](https://jamesgolick.com/2013/5/19/how-tcmalloc-works.html)
- Arena structures used in [jemalloc](http://jemalloc.net/), which
  [until recently](https://blog.rust-lang.org/2019/01/17/Rust-1.32.0.html#jemalloc-is-removed-by-default)
  was the primary allocator for Rust programs!

But no matter how fast your allocator is, the principle remains: the fastest allocator is the one
you never use. As such, we're not going to discuss how exactly the
[`push` and `pop` instructions work](http://www.cs.virginia.edu/~evans/cs216/guides/x86.html), but
we'll focus instead on the conditions that enable the Rust compiler to use faster stack-based
allocation for variables.

So, **how do we know when Rust will or will not use stack allocation for objects we create?**
Looking at other languages, it's often easy to delineate between stack and heap. Managed memory
languages (Python, Java,
[C#](https://blogs.msdn.microsoft.com/ericlippert/2010/09/30/the-truth-about-value-types/)) place
everything on the heap. JIT compilers ([PyPy](https://www.pypy.org/),
[HotSpot](https://www.oracle.com/technetwork/java/javase/tech/index-jsp-136373.html)) may optimize
some heap allocations away, but you should never assume it will happen. C makes things clear with
calls to special functions (like [malloc(3)](https://linux.die.net/man/3/malloc)) needed to access
heap memory. Old C++ has the [`new`](https://stackoverflow.com/a/655086/1454178) keyword, though
modern C++/C++11 is more complicated with [RAII](https://en.cppreference.com/w/cpp/language/raii).

For Rust, we can summarize as follows: **stack allocation will be used for everything that doesn't
involve "smart pointers" and collections**.
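To make that rule of thumb concrete before digging into the details, here's a small sketch (the
specific types are just common examples I've picked, not an exhaustive list): plain values,
fixed-size arrays, and structs live in stack memory, while smart pointers and collections own heap
memory:

```rust
pub fn stack_and_heap() {
    // Stack: plain values, fixed-size arrays, and tuples/structs of them
    let a = 512u64;
    let b = [0u8; 16];
    let _pair = (a, b);

    // Heap: "smart pointers" and collections own memory that comes
    // from the allocator
    let _boxed = Box::new(512u64);
    let _vector = vec![0u8; 16];
    let _string = String::from("hello");
}
```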
We'll skip over a precise definition of the term "smart pointer" for now, and instead discuss what
we should watch for to understand when stack and heap memory regions are used:

1. Stack manipulation instructions (`push`, `pop`, and `add`/`sub` of the `rsp` register) indicate
   allocation of stack memory:

   ```rust
   pub fn stack_alloc(x: u32) -> u32 {
       // Space for `y` is allocated by subtracting from `rsp`,
       // and then populated
       let y = [1u8, 2, 3, 4];
       // Space for `y` is deallocated by adding back to `rsp`
       x
   }
   ```

   -- [Compiler Explorer](https://godbolt.org/z/5WSgc9)

2. Tracking when exactly heap allocation calls occur is difficult. It's typically easier to watch
   for `call core::ptr::real_drop_in_place`, and infer that a heap allocation happened in the
   recent past:

   ```rust
   pub fn heap_alloc(x: usize) -> usize {
       // Space for elements in a vector has to be allocated
       // on the heap, and is then de-allocated once the
       // vector goes out of scope
       let y: Vec<u8> = Vec::with_capacity(x);
       x
   }
   ```

   -- [Compiler Explorer](https://godbolt.org/z/epfgoQ) (`real_drop_in_place` happens on line 1317)

   <small>Note: While the [`Drop` trait](https://doc.rust-lang.org/std/ops/trait.Drop.html) is
   [called for stack-allocated objects](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=87edf374d8983816eb3d8cfeac657b46),
   the Rust standard library only defines `Drop` implementations for types that involve heap
   allocation.</small>

3. If you don't want to inspect the assembly, use a custom allocator that's able to track and alert
   when heap allocations occur. Crates like
   [`alloc_counter`](https://crates.io/crates/alloc_counter) are designed for exactly this purpose.
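As a rough sketch of what such a tracking allocator does under the hood -- this is hand-rolled
illustration code, not `alloc_counter`'s actual implementation -- we can wrap the system allocator
and count every call made to it:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

/// Forward everything to the system allocator, but count each allocation.
struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn main() {
    let before = ALLOCATIONS.load(Ordering::SeqCst);

    // Stack only: the counter doesn't move
    let _array = [1u8, 2, 3, 4];
    assert_eq!(ALLOCATIONS.load(Ordering::SeqCst), before);

    // Heap: the vector's buffer triggers exactly one allocation
    let _vector: Vec<u8> = Vec::with_capacity(4);
    assert_eq!(ALLOCATIONS.load(Ordering::SeqCst), before + 1);
}
```

If the counter doesn't move across a block of code, the allocator was never involved; crates like
`alloc_counter` package up the same idea behind a friendlier interface.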
With all that in mind, let's talk about situations in which we're guaranteed to use stack memory:

- Structs are created on the stack.
- Function arguments are passed on the stack, meaning the
  [`#[inline]` attribute](https://doc.rust-lang.org/reference/attributes.html#inline-attribute)
  will not change the memory region used.
- Enums and unions are stack-allocated.
- [Arrays](https://doc.rust-lang.org/std/primitive.array.html) are always stack-allocated.
- Closures capture their arguments on the stack.
- Generics will use stack allocation, even with dynamic dispatch.
- [`Copy`](https://doc.rust-lang.org/std/marker/trait.Copy.html) types are guaranteed to be
  stack-allocated, and copying them will be done in stack memory.
- [`Iterator`s](https://doc.rust-lang.org/std/iter/trait.Iterator.html) in the standard library are
  stack-allocated even when iterating over heap-based collections.

## Structs

The simplest case comes first. When creating vanilla `struct` objects, we use stack memory to hold
their contents:

```rust
struct Point {
    x: u64,
    y: u64,
}

struct Line {
    a: Point,
    b: Point,
}

pub fn make_line() {
    // `origin` is stored in the first 16 bytes of memory
    // starting at location `rsp`
    let origin = Point { x: 0, y: 0 };
    // `point` makes up the next 16 bytes of memory
    let point = Point { x: 1, y: 2 };

    // When creating `ray`, we just move the content out of
    // `origin` and `point` into the next 32 bytes of memory
    let ray = Line { a: origin, b: point };
}
```

-- [Compiler Explorer](https://godbolt.org/z/vri9BE)

Note that while some extra-fancy instructions are used for memory manipulation in the assembly, the
`sub rsp, 64` instruction indicates we're still working with the stack.

## Function arguments

Have you ever wondered how functions communicate with each other? Like, once the variables are
given to you, everything's fine. But how do you "give" those variables to another function? How do
you get the results back afterward? The answer: the compiler arranges memory and assembly
instructions using a pre-determined
[calling convention](http://llvm.org/docs/LangRef.html#calling-conventions). This convention
governs the rules around where arguments needed by a function will be located (either in memory
offsets relative to the stack pointer `rsp`, or in other registers), and where the results can be
found once the function has finished. And when multiple languages agree on what the calling
conventions are, you can do things like having [Go call Rust code](https://blog.filippo.io/rustgo/)!
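As a small illustration of what "agreeing on a calling convention" looks like from the Rust side
(a hypothetical function, not one of this post's examples): opting into the C convention pins down
exactly where the argument and the return value live, so any language that speaks the C ABI can
call it:

```rust
// Disable name mangling so the symbol is easy to find from other languages.
#[no_mangle]
pub extern "C" fn add_one(x: i64) -> i64 {
    // With the x86-64 System V C convention, `x` arrives in `rdi`
    // and the result goes back in `rax`; no heap memory is involved.
    x + 1
}
```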
Put simply: it's the compiler's job to figure out how to call other functions, and you can assume
that the compiler is good at its job.

We can see this in action using a simple example:

```rust
struct Point {
    x: i64,
    y: i64,
}

// We use integer division operations to keep the assembly
// clean, understanding that the result isn't mathematically
// accurate.
fn distance(a: &Point, b: &Point) -> i64 {
    // Immediately subtract from `rsp` the bytes needed
    // to hold all the intermediate results - this is
    // the stack allocation step

    // The compiler used the `rdi` and `rsi` registers
    // to pass our arguments, so read them in
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    // Do the actual math work
    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared

    // Our final result will be stored in the `rax` register
    // so that our caller knows where to retrieve it.
    // Finally, add back to `rsp` the stack memory that is
    // now ready to be used by other functions.
}

pub fn total_distance() {
    let start = Point { x: 1, y: 2 };
    let middle = Point { x: 3, y: 4 };
    let end = Point { x: 5, y: 6 };

    let _dist_1 = distance(&start, &middle);
    let _dist_2 = distance(&middle, &end);
}
```

-- [Compiler Explorer](https://godbolt.org/z/Qmx4ST)

As a consequence of function arguments never using heap memory, we can also infer that functions
using the `#[inline]` attribute do not heap allocate either. But better than inferring, we can look
at the assembly to prove it:
```rust
struct Point {
    x: i64,
    y: i64,
}

// Note that there is no `distance` function in the assembly output,
// and the total line count goes from 229 with inlining off
// to 306 with inlining on. Even still, no heap allocations occur.
#[inline(always)]
fn distance(a: &Point, b: &Point) -> i64 {
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared
}

pub fn total_distance() {
    let start = Point { x: 1, y: 2 };
    let middle = Point { x: 3, y: 4 };
    let end = Point { x: 5, y: 6 };

    let _dist_1 = distance(&start, &middle);
    let _dist_2 = distance(&middle, &end);
}
```

-- [Compiler Explorer](https://godbolt.org/z/30Sh66)

Finally, whether arguments are passed by value (moving or copying the data, as with
[`Copy`](https://doc.rust-lang.org/std/marker/trait.Copy.html) types) or by reference (passing a
pointer), the assembly layouts may differ slightly, but both approaches still use only stack memory
and CPU registers:

```rust
pub struct Point {
    x: i64,
    y: i64,
}

// Moving values
pub fn distance_moved(a: Point, b: Point) -> i64 {
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared
}

// Borrowing values has two extra `mov` instructions on lines 21 and 22
pub fn distance_borrowed(a: &Point, b: &Point) -> i64 {
    let x1 = a.x;
    let x2 = b.x;
    let y1 = a.y;
    let y2 = b.y;

    let x_pow = (x1 - x2) * (x1 - x2);
    let y_pow = (y1 - y2) * (y1 - y2);
    let squared = x_pow + y_pow;
    squared / squared
}
```

-- [Compiler Explorer](https://godbolt.org/z/06hGiv)

## Enums

If you've ever worried that wrapping your types in
[`Option`](https://doc.rust-lang.org/stable/core/option/enum.Option.html) or
[`Result`](https://doc.rust-lang.org/stable/core/result/enum.Result.html) would finally make them
large enough that Rust decides to use heap allocation instead, fear no longer: `enum` and union
types don't use heap allocation:

```rust
enum MyEnum {
    Small(u8),
    Large(u64)
}

struct MyStruct {
    x: MyEnum,
    y: MyEnum,
}

pub fn enum_compare() {
    let x = MyEnum::Small(0);
    let y = MyEnum::Large(0);

    let z = MyStruct { x, y };

    let opt = Option::Some(z);
}
```

-- [Compiler Explorer](https://godbolt.org/z/HK7zBx)

Because the size of an `enum` is the size of its largest element plus a flag, the compiler can
predict how much memory is used no matter which variant of an enum is currently stored in a
variable. Thus, enums and unions have no need of heap allocation. There's unfortunately not a great
way to show this in assembly, so I'll instead point you to the
[`core::mem::size_of`](https://doc.rust-lang.org/stable/core/mem/fn.size_of.html#size-of-enums)
documentation.
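Or, to check the numbers directly -- the exact sizes below reflect what current rustc produces
rather than a language-level guarantee -- `std::mem::size_of` reports a fixed, compile-time size no
matter which variant a value happens to hold:

```rust
use std::mem::size_of;

enum MyEnum {
    Small(u8),
    Large(u64),
}

fn main() {
    // Room for the largest variant (a u64) plus a discriminant,
    // rounded up to u64's 8-byte alignment: 16 bytes total.
    assert_eq!(size_of::<MyEnum>(), 16);
    // Wrapping in Option may grow the size, but the total is still
    // known at compile time -- no heap allocation required.
    println!("Option<MyEnum>: {} bytes", size_of::<Option<MyEnum>>());
}
```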
## Arrays

The array type is guaranteed to be stack-allocated, which is why the array size must be declared.
Interestingly enough, this can be used to cause safe Rust programs to crash:

```rust
// 256 bytes
#[derive(Default)]
struct TwoFiftySix {
    _a: [u64; 32]
}

// 8 kilobytes
#[derive(Default)]
struct EightK {
    _a: [TwoFiftySix; 32]
}

// 256 kilobytes
#[derive(Default)]
struct TwoFiftySixK {
    _a: [EightK; 32]
}

// 8 megabytes - exceeds space typically provided for the stack,
// though the kernel can be instructed to allocate more.
// On Linux, you can check stack size using `ulimit -s`
#[derive(Default)]
struct EightM {
    _a: [TwoFiftySixK; 32]
}

fn main() {
    // Because we already have things in stack memory
    // (like the current function call stack), allocating another
    // eight megabytes of stack memory crashes the program
    let _x = EightM::default();
}
```

--
[Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=587a6380a4914bcbcef4192c90c01dc4)

There aren't any security implications of this (no memory corruption occurs), but it's good to note
that the Rust compiler won't move arrays into heap memory even if they can be reasonably expected
to overflow the stack.

## Closures

Rules for how anonymous functions capture their arguments are typically language-specific. In Java,
[Lambda Expressions](https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html)
are actually objects created on the heap that capture local primitives by copying, and capture
local non-primitives as (`final`) references.
[Python](https://docs.python.org/3.7/reference/expressions.html#lambda) and
[JavaScript](https://javascriptweblog.wordpress.com/2010/10/25/understanding-javascript-closures/)
both bind *everything* by reference normally, but Python can also
[capture values](https://stackoverflow.com/a/235764/1454178) and JavaScript has
[Arrow functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions).

In Rust, arguments to closures are the same as arguments to other functions; closures are simply
functions that don't have a declared name. Some weird ordering of the stack may be required to
handle them, but it's the compiler's responsibility to figure that out.
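One way to convince yourself that a closure is just an ordinary stack value is to measure it; this
is a quick sketch, and the exact size is an implementation detail of rustc rather than a guarantee.
A `move` closure behaves like an anonymous struct holding its captures:

```rust
use std::mem::size_of_val;

pub fn closure_size() {
    // 32 bytes, sitting in this function's stack frame
    let x = [0u64; 4];

    // The closure captures `x` by value, so the closure itself occupies
    // those 32 bytes -- also in this stack frame, no allocator involved.
    let f = move || x[0];
    assert_eq!(size_of_val(&f), 32);
    assert_eq!(f(), 0);
}
```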
Each example below has the same effect, but a different assembly implementation. In the simplest
case, we immediately run a closure returned by another function. Because we don't store a reference
to the closure, the stack memory needed to store the captured values is contiguous:

```rust
fn my_func() -> impl FnOnce() {
    let x = 24;
    // Note that this closure in assembly looks exactly like
    // any other function; you even use the `call` instruction
    // to start running it.
    move || { x; }
}

pub fn immediate() {
    my_func()();
    my_func()();
}
```

-- [Compiler Explorer](https://godbolt.org/z/mgJ2zl), 25 total assembly instructions

If we store a reference to the closure, the Rust compiler keeps values it needs in the stack memory
of the original function. Getting the details right is a bit harder, so the instruction count goes
up even though this code is functionally equivalent to our original example:

```rust
pub fn simple_reference() {
    let x = my_func();
    let y = my_func();
    y();
    x();
}
```

-- [Compiler Explorer](https://godbolt.org/z/K_dj5n), 55 total assembly instructions

Even things like variable order can make a difference in instruction count:

```rust
pub fn complex() {
    let x = my_func();
    let y = my_func();
    x();
    y();
}
```

-- [Compiler Explorer](https://godbolt.org/z/p37qFl), 70 total assembly instructions

In every circumstance though, the compiler ensured that no heap allocations were necessary.
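For contrast, the usual way a closure *does* end up on the heap is when we explicitly box it as a
trait object. This is a hypothetical variant of the example above, not something the compiler ever
does on its own:

```rust
fn my_boxed_func() -> Box<dyn FnOnce()> {
    let x = 24;
    // Boxing erases the closure's concrete type; the captured `x` now
    // lives inside a heap allocation owned by the Box.
    Box::new(move || { x; })
}

pub fn boxed() {
    my_boxed_func()();
}
```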
## Generics

Traits in Rust come in two broad forms: static dispatch (monomorphization, `impl Trait`) and
dynamic dispatch (trait objects, `dyn Trait`). While dynamic dispatch is often *associated* with
trait objects being stored in the heap, dynamic dispatch can be used with stack-allocated objects
as well:

```rust
trait GetInt {
    fn get_int(&self) -> u64;
}

// vtable stored at section L__unnamed_1
struct WhyNotU8 {
    x: u8
}
impl GetInt for WhyNotU8 {
    fn get_int(&self) -> u64 {
        self.x as u64
    }
}

// vtable stored at section L__unnamed_2
struct ActualU64 {
    x: u64
}
impl GetInt for ActualU64 {
    fn get_int(&self) -> u64 {
        self.x
    }
}

// `&dyn` declares that we want to use dynamic dispatch
// rather than monomorphization, so there is only one
// `retrieve_int` function that shows up in the final assembly.
// If we used generics, there would be one implementation of
// `retrieve_int` for each type that implements `GetInt`.
pub fn retrieve_int(u: &dyn GetInt) {
    // In the assembly, we just call an address given to us
    // in the `rsi` register and hope that it was set up
    // correctly when this function was invoked.
    let x = u.get_int();
}

pub fn do_call() {
    // Note that even though the vtable for `WhyNotU8` and
    // `ActualU64` includes a pointer to
    // `core::ptr::real_drop_in_place`, it is never invoked.
    let a = WhyNotU8 { x: 0 };
    let b = ActualU64 { x: 0 };

    retrieve_int(&a);
    retrieve_int(&b);
}
```

-- [Compiler Explorer](https://godbolt.org/z/u_yguS)
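For comparison, a statically dispatched counterpart might look like the sketch below (it reuses the
types above; `retrieve_int_generic` is a name I've made up). The compiler monomorphizes one copy of
the function per concrete type, and the arguments are still plain stack values:

```rust
// One copy of this function is generated per concrete `T` it's called with:
// retrieve_int_generic::<WhyNotU8> and retrieve_int_generic::<ActualU64>.
pub fn retrieve_int_generic<T: GetInt>(u: &T) -> u64 {
    // No vtable lookup here -- the call target is resolved at compile time.
    u.get_int()
}

pub fn do_call_generic() {
    let a = WhyNotU8 { x: 0 };
    let b = ActualU64 { x: 0 };

    retrieve_int_generic(&a);
    retrieve_int_generic(&b);
}
```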
It's hard to imagine practical situations where dynamic dispatch would be used for objects that
aren't heap allocated, but it technically can be done.

## Copy types

Understanding move semantics and copy semantics in Rust is weird at first. The Rust docs
[go into detail](https://doc.rust-lang.org/stable/core/marker/trait.Copy.html) far better than can
be addressed here, so I'll leave them to do the job. From a memory perspective though, their
guideline is reasonable:
[if your type can implement `Copy`, it should](https://doc.rust-lang.org/stable/core/marker/trait.Copy.html#when-should-my-type-be-copy).
While there are potential speed tradeoffs to *benchmark* when discussing `Copy` (move semantics for
stack objects vs. copying stack pointers vs. copying stack `struct`s), *it's impossible for `Copy`
to introduce a heap allocation*.

But why is this the case? Fundamentally, it's because the language controls what `Copy` means -
["the behavior of `Copy` is not overloadable"](https://doc.rust-lang.org/std/marker/trait.Copy.html#whats-the-difference-between-copy-and-clone)
because it's a marker trait. From there we'll note that a type
[can implement `Copy`](https://doc.rust-lang.org/std/marker/trait.Copy.html#when-can-my-type-be-copy)
if (and only if) its components implement `Copy`, and that
[no heap-allocated types implement `Copy`](https://doc.rust-lang.org/std/marker/trait.Copy.html#implementors).
Thus, assignments involving heap types are always move semantics, and new heap allocations won't
occur because of implicit operator behavior.

```rust
#[derive(Clone)]
struct Cloneable {
    x: Box<u64>
}

// error[E0204]: the trait `Copy` may not be implemented for this type
#[derive(Copy, Clone)]
struct NotCopyable {
    x: Box<u64>
}
```

-- [Compiler Explorer](https://godbolt.org/z/VToRuK)
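To drive the point home, here's a small sketch (`Meters` is a made-up type, not from the examples
above): copying a `Copy` struct just duplicates a handful of bytes in stack memory or registers,
and the original stays usable:

```rust
#[derive(Copy, Clone)]
struct Meters(u64);

pub fn copies_stay_on_the_stack() {
    let x = Meters(5);
    // Because `Meters` is `Copy`, this assignment duplicates 8 bytes of
    // stack memory (or a register); no allocator is ever involved.
    let y = x;
    // `x` is still usable after the copy -- no move occurred.
    let _total = x.0 + y.0;
}
```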
## Iterators

In managed memory languages (like
[Java](https://www.youtube.com/watch?v=bSkpMdDe4g4&feature=youtu.be&t=357)), there's a subtle
difference between these two code samples:

```java
public static long sum_for(List<Long> vals) {
    long sum = 0;
    // Regular for loop
    for (int i = 0; i < vals.size(); i++) {
        sum += vals.get(i);
    }
    return sum;
}

public static long sum_foreach(List<Long> vals) {
    long sum = 0;
    // "Foreach" loop - uses iteration
    for (Long l : vals) {
        sum += l;
    }
    return sum;
}
```

In the `sum_for` function, nothing terribly interesting happens. In `sum_foreach`, an object of
type
[`Iterator`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Iterator.html)
is allocated on the heap, and will eventually be garbage-collected. This isn't a great design;
iterators are often transient objects that you need during a function and can discard once the
function ends. Sounds exactly like the issue stack-allocated objects address, no?

In Rust, iterators are allocated on the stack. The objects to iterate over are almost certainly in
heap memory, but the iterator itself
([`Iter`](https://doc.rust-lang.org/std/slice/struct.Iter.html)) doesn't need to use the heap. In
each of the examples below we iterate over a collection, but never use heap allocation:

```rust
use std::collections::HashMap;
// There's a lot of assembly generated, but if you search in the text,
// there are no references to `real_drop_in_place` anywhere.

pub fn sum_vec(x: &Vec<u32>) {
    let mut s = 0;
    // Basic iteration over vectors doesn't need allocation
    for y in x {
        s += y;
    }
}

pub fn sum_enumerate(x: &Vec<u32>) {
    let mut s = 0;
    // More complex iterators are just fine too
    for (_i, y) in x.iter().enumerate() {
        s += y;
    }
}

pub fn sum_hm(x: &HashMap<String, u32>) {
    let mut s = 0;
    // And it's not just Vec; all types will allocate the iterator
    // in stack memory
    for y in x.values() {
        s += y;
    }
}
```

-- [Compiler Explorer](https://godbolt.org/z/FTT3CT)