---
layout: post
title: "On Building High Performance Systems"
description: ""
category:
tags: []
---

Prior to working in the trading industry, my assumption was that High Frequency Trading (HFT) is made up of people who have access to secret techniques the rest of us mortal developers could only dream of. There had to be some secret art that could only be learned if one had an appropriately tragic backstory:

<img src="/assets/images/2019-04-24-kung-fu.webp" alt="kung-fu fight">
> How I assumed HFT people learn their secret techniques

How else do you explain people working on systems that complete the round trip from market data in to orders out (a.k.a. tick-to-trade) [consistently within 750-800 nanoseconds](https://stackoverflow.com/a/22082528/1454178)?
In roughly the time it takes other computers to access [main memory 8 times](https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html), trading systems are capable of reading the market data packets, deciding what orders to send, (presumably) doing risk checks, creating new packets for exchange-specific protocols, and putting those packets on the wire.

Having now worked in the trading industry, I can confirm the developers are mortal; I've made some simple mistakes at the very least. But more to the point, what comes through in public discussions is that philosophy, not technique, separates high-performance systems from everything else. Performance-critical systems don't rely on C++ optimization tricks to make code fast (though they're definitely useful); rather, there are two governing principles I want to point out:

1. Optimize for variance (consistent latency) first; overall speed comes later.
2. Don't do unnecessary work.

# Variance First

Don't get me wrong, I'm a much happier person when things are fast. Computer now boots up in 9 seconds after switching from spinning platters to solid-state? Awesome. But if the computer takes a full 60 seconds to boot up tomorrow? Not so great. When it comes to code, speeding up a function by 10 milliseconds doesn't mean much if the variance of that function is ±1000ms; you simply won't know until you call the function how long it takes to complete. **High-performance systems should first optimize for time variance**. Once you're consistent at the time scale you care about, then you can focus on improving overall time.
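
To make that concrete, here's the kind of reporting I have in mind; a toy sketch (the workload and numbers are invented) showing how a mean can look healthy while only the tail statistics pin down what actually happened:

```rust
/// Summarize per-call latencies. The mean hides outliers; the percentiles
/// and max are what tell you whether timing is *consistent*.
fn report(mut samples_ns: Vec<u64>) {
    samples_ns.sort_unstable();
    let pct = |q: f64| samples_ns[((samples_ns.len() - 1) as f64 * q) as usize];
    let mean = samples_ns.iter().sum::<u64>() / samples_ns.len() as u64;
    println!(
        "mean: {}ns, p50: {}ns, p99: {}ns, max: {}ns",
        mean,
        pct(0.50),
        pct(0.99),
        samples_ns[samples_ns.len() - 1]
    );
}

fn main() {
    // 999 fast calls and one catastrophic one:
    let mut samples = vec![500u64; 999];
    samples.push(1_000_000);
    // Prints: mean: 1499ns, p50: 500ns, p99: 500ns, max: 1000000ns
    report(samples);
}
```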

But you don't have to take my word for it (emphasis added in all quotes below):

- The company PolySync, which is working on autonomous vehicles, [mentions why](https://polysync.io/blog/session-types-for-hearty-codecs/) they picked their specific messaging format:
> In general, high performance is almost always desirable for serialization. But in the world of autonomous vehicles, **steady timing performance is even more important** than peak throughput. This is because safe operation is sensitive to timing outliers. Nobody wants the system that decides when to slam on the brakes to occasionally take 100 times longer than usual to encode its commands.

- [Solarflare](https://solarflare.com/), which makes highly-specialized network hardware, points out variance as a big concern for [electronic trading](https://solarflare.com/electronic-trading/):
> The high stakes world of electronic trading, investment banks, market makers, hedge funds and exchanges demand the **lowest possible latency and jitter** while utilizing the highest bandwidth and return on their investment.

So how exactly does one go about looking for and eliminating performance variance? To tell the truth, I don't think a systematic answer or flow-chart exists. There's no substitute for (A) building a deep understanding of the entire technology stack, and (B) actually measuring performance using benchmarks (or (C) watching a lot of [CppCon](https://www.youtube.com/channel/UCMlGfpWw-RUdWX_JbLCukXg) videos). Even then, each project cares about performance to a different degree; you may need to build an entire [replica production system](https://www.youtube.com/watch?v=NH1Tta7purM&feature=youtu.be&t=3015) to accurately benchmark at nanosecond precision. Alternatively, you may be content to simply [avoid garbage collection](https://www.youtube.com/watch?v=BD9cRbxWQx8&feature=youtu.be&t=1335) in your Java code.
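
As a starting point for (B): even a basic benchmark harness can surface the distribution rather than just an average. Here's a sketch using the third-party [criterion](https://github.com/bheisler/criterion.rs) crate (my choice, not something this post depends on), which reports mean/median estimates and flags outliers:

```rust
// benches/parse.rs -- run with `cargo bench`
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Stand-in workload; substitute whatever your hot path actually does.
fn parse_packet(buf: &[u8]) -> u64 {
    buf.iter().map(|&b| b as u64).sum()
}

fn bench(c: &mut Criterion) {
    let sample = [42u8; 64];
    // Criterion prints "found N outliers among M measurements" alongside
    // its estimates -- exactly the variance signal we care about here.
    c.bench_function("parse_packet", |b| {
        b.iter(|| parse_packet(black_box(&sample)))
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```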

Even though everyone has different needs, there are still common things to look for when trying to isolate variance. In no particular order, these are places to focus on when building high-performance/low-latency systems:

**Allocation**: Every language has a different way of interacting with "heap" memory, but the principle is the same: figuring out what chunks of memory are available to give to a program is complex. Allocation libraries use a number of sophisticated strategies to deal with this, but there's no way to know in advance how long a given allocation will take. Understanding when your language interacts with the allocator is crucial (and I wrote [a guide for Rust](https://speice.io/2019/02/understanding-allocations-in-rust.html)).
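
As an illustration, here's a sketch (function names invented) of the same logic written to hit the allocator on every call versus only once up front:

```rust
use std::fmt::Write;

// Hits the allocator on every iteration: each `format!` requests a fresh
// heap buffer, and how long that takes is up to the allocator.
fn encode_allocating(ids: &[u32]) -> Vec<String> {
    ids.iter().map(|id| format!("order:{id}")).collect()
}

// Allocates once; `clear()` keeps the buffer's capacity, so the loop body
// stays away from the allocator entirely.
fn encode_reusing(ids: &[u32], buf: &mut String) {
    for id in ids {
        buf.clear();
        let _ = write!(buf, "order:{id}");
        // ... hand `buf` off to the next stage here ...
    }
}
```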

**Data Layout**: How your data is arranged in memory matters; [data-oriented design](https://www.youtube.com/watch?v=yy8jQgmhbAU) and [cache locality](https://www.youtube.com/watch?v=2EWejmkKlxs&feature=youtu.be&t=1185) can have huge impacts on performance. The C family of languages (C, value types in C#, C++) and Rust all have guarantees about the shape every object takes in memory that others (like Java and Python) can't make. [Cachegrind](http://valgrind.org/docs/manual/cg-manual.html) and kernel [perf](https://perf.wiki.kernel.org/index.php/Main_Page) counters are both great for understanding how performance relates to memory layout.
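
To make that concrete, here's a sketch of the classic array-of-structs versus struct-of-arrays tradeoff (types and fields invented for the example):

```rust
// Array-of-structs: each `Order` occupies 16 bytes, so a 64-byte cache
// line holds only 4 quantities; summing them drags `price` and `live`
// through the cache as dead weight.
struct Order {
    price: f64,
    quantity: u32,
    live: bool,
}

fn total_quantity_aos(orders: &[Order]) -> u64 {
    orders.iter().map(|o| o.quantity as u64).sum()
}

// Struct-of-arrays: quantities are densely packed, 16 per cache line,
// so the same sum touches a quarter of the memory.
struct Orders {
    price: Vec<f64>,
    quantity: Vec<u32>,
    live: Vec<bool>,
}

fn total_quantity_soa(orders: &Orders) -> u64 {
    orders.quantity.iter().map(|&q| q as u64).sum()
}
```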

**Just-In-Time Compilation**: Languages that are compiled on the fly (LuaJIT, C#, Java, PyPy) are great because they optimize your program for how it's actually being used. However, there's a variance cost associated with this; the virtual machine may stop executing while it waits for translation from VM bytecode to native code. As a remedy, some languages now support ahead-of-time compilation ([CoreRT](https://github.com/dotnet/corert) in C# and [GraalVM](https://www.graalvm.org/) in Java). On the other hand, LLVM supports [Profile Guided Optimization](https://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization), which should bring JIT-like benefits to non-JIT languages. Benchmarking is incredibly important here.

**System calls**: Reading from a UNIX socket? Writing to a file? In addition to not knowing how long the I/O operation takes, these all trigger expensive [system calls (syscalls)](https://en.wikipedia.org/wiki/System_call). To handle these, the CPU must [context switch](https://en.wikipedia.org/wiki/Context_switch) to the kernel, let the kernel operation complete, then context switch back to your program. We'd rather keep these to a minimum. [Strace](https://linux.die.net/man/1/strace) is your friend for understanding when and where syscalls happen.
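
For example (a sketch; the path and counts are arbitrary), buffering is the classic way to amortize syscalls, and running both versions under `strace -c` shows the difference immediately:

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("/tmp/ticks.log")?;

    // Writing to `file` directly would issue one write(2) syscall per
    // line: 10,000 trips into the kernel.
    let mut out = BufWriter::new(file);
    for i in 0..10_000 {
        // With the buffer, lines accumulate in userspace and flush in
        // ~8 KiB chunks, collapsing those syscalls into a handful.
        writeln!(out, "tick {i}")?;
    }
    out.flush()?; // one final write(2)
    Ok(())
}
```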

**Signal Handling**: Far less likely to be an issue, but signals do trigger a context switch if your code has a handler registered. This will be highly dependent on the application, but you can [block signals](https://www.linuxprogrammingblog.com/all-about-linux-signals?page=show#Blocking_signals) if it's an issue.
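
If it does become an issue, blocking looks roughly like this; a sketch using the `libc` crate (my addition, not something the post mentions), with SIGALRM chosen arbitrarily:

```rust
// Block SIGALRM for the calling thread: a blocked signal is held pending
// instead of context-switching into a handler mid-hot-path.
fn block_sigalrm() {
    unsafe {
        let mut set: libc::sigset_t = std::mem::zeroed();
        libc::sigemptyset(&mut set);
        libc::sigaddset(&mut set, libc::SIGALRM);
        libc::pthread_sigmask(libc::SIG_BLOCK, &set, std::ptr::null_mut());
    }
}
```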

**Interrupts**: System interrupts are how devices connected to your computer notify the CPU that something has happened. It's then up to the CPU to pause whatever program is running so the operating system can handle the interrupt. We don't want our program to be the one paused, so make sure that [SMP affinity](http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux) is set and the interrupts are handled on a CPU core not running the program we care about.
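
Setting that affinity is just a file write (sketch below; IRQ 24 is a made-up example, root is required, and `/proc/interrupts` lists the real numbers for your devices):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let irq = 24; // hypothetical NIC interrupt; see /proc/interrupts
    // The mask "1" (CPU 0) steers this interrupt onto core 0, away from
    // the cores where the latency-sensitive process is pinned.
    fs::write(format!("/proc/irq/{irq}/smp_affinity"), "1")
}
```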

**[NUMA](https://www.kernel.org/doc/html/latest/vm/numa.html)**: While NUMA is good at making multi-cell systems transparent, there are variance implications; if the kernel moves a process across nodes, future memory accesses must wait for the controller on the original node. Use [numactl](https://linux.die.net/man/8/numactl) to handle memory/CPU pinning.

## Hardware

**CPU Pipelining/Speculation**: Speculative execution in modern processors gave us vulnerabilities like Spectre, but it also gave us performance improvements like [branch prediction](https://stackoverflow.com/a/11227902/1454178). However, there's variance involved because the CPU might mis-predict a branch and have to rewind. And while the compiler knows a lot about how your CPU [pipelines instructions](https://youtu.be/nAbCKa0FzjQ?t=4467), code can be [structured to help](https://www.youtube.com/watch?v=NH1Tta7purM&feature=youtu.be&t=755) the branch predictor.
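
The canonical demonstration, from the Stack Overflow answer linked above, is that the same loop runs measurably faster over sorted input purely because the branch becomes predictable. A sketch:

```rust
// For shuffled input, `v >= 128` is taken essentially at random and the
// predictor keeps guessing wrong. Sort the data first and the identical
// code speeds up: the branch settles into one long run of not-taken
// followed by one long run of taken.
fn sum_big(values: &[u8]) -> u64 {
    let mut total = 0u64;
    for &v in values {
        if v >= 128 {
            total += v as u64;
        }
    }
    total
}
```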

**Paging**: For most systems, virtual memory is incredible. Applications live in their own worlds, and the CPU/[MMU](https://en.wikipedia.org/wiki/Memory_management_unit) figures out the details afterward. However, there's a variance penalty associated with memory paging and caching; if you access more memory pages than the [TLB](https://en.wikipedia.org/wiki/Translation_lookaside_buffer) can store, you'll have to wait for the page walk. Kernel perf tools are necessary to figure out if this is an issue, but techniques like [huge pages](https://blog.pythian.com/performance-tuning-hugepages-in-linux/) can reduce TLB burdens. Alternately, running applications in a hypervisor like [Jailhouse](https://github.com/siemens/jailhouse) allows one to skip virtual memory entirely, but this is potentially more work than the benefits are worth.
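
For example, requesting a huge page directly looks roughly like this; a sketch using the `libc` crate (my addition), which assumes huge pages were reserved beforehand (e.g. via `vm.nr_hugepages`):

```rust
use libc::{
    mmap, MAP_ANONYMOUS, MAP_FAILED, MAP_HUGETLB, MAP_PRIVATE, PROT_READ, PROT_WRITE,
};

fn main() {
    let len = 2 * 1024 * 1024; // one 2 MiB huge page
    let ptr = unsafe {
        mmap(
            std::ptr::null_mut(),
            len,
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
            -1,
            0,
        )
    };
    // Success means these 2 MiB sit behind a single TLB entry instead of
    // 512 4 KiB entries, cutting the odds of a page-table walk.
    assert_ne!(ptr, MAP_FAILED);
}
```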

**Network Interfaces**: When more than one computer is involved, variance can go up dramatically. Tuning kernel [network parameters](https://github.com/leandromoreira/linux-network-performance-parameters) may be helpful, but modern systems more frequently opt to skip the kernel altogether with a technique called [kernel bypass](https://blog.cloudflare.com/kernel-bypass/). This typically requires specialized hardware and [custom drivers](https://www.openonload.org/), but even industries like [telecom](https://www.bbc.co.uk/rd/blog/2018-04-high-speed-networking-open-source-kernel-bypass) are finding the benefits.

**Networks**: Once packets leave the machine, variance compounds. Out on the open Internet you have no idea what the network path looks like, which is why financial firms pay big money to make sure they have straight-line connections. Even latency within the switch matters: [cut-through designs](https://www.networkworld.com/article/2241573/latency-and-jitter--cut-through-design-pays-off-for-arista--blade.html) begin forwarding a frame before it has fully arrived, where store-and-forward switches buffer the entire frame first.