diff --git a/blog/2015-11-14-welcome/index.mdx b/blog/2015-11-14-welcome/index.mdx
index f3c8ec1..041b910 100644
--- a/blog/2015-11-14-welcome/index.mdx
+++ b/blog/2015-11-14-welcome/index.mdx
@@ -1,11 +1,11 @@
---
+slug: 2015/11/welcome
title: Welcome, and an algorithm
date: 2015-11-19 12:00:00
last_update:
  date: 2015-12-05 12:00:00
-slug: 2015/11/welcome
authors: [bspeice]
-tags: [trading]
+tags: []
---
Hello! Glad to meet you. I'm currently a student at Columbia University studying Financial Engineering, and want to give an overview of the projects I'm working on!
@@ -17,7 +17,7 @@ To start things off, Columbia has been hosting a trading competition that myself
The competition is scored in 3 areas:
- Total return
-- [Sharpe ratio](1)
+- [Sharpe ratio](https://en.wikipedia.org/wiki/Sharpe_ratio)
- Maximum drawdown
Our algorithm uses a basic momentum strategy: in the given list of potential portfolios, pick the stocks that have been performing well in the past 30 days. Then, optimize for return subject to the drawdown being below a specific level. We didn't include the Sharpe ratio as a constraint, mostly because we were a bit late entering the competition.
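As a rough illustration of the idea (not our actual competition code, and with made-up prices), the momentum pick and the drawdown check might look like:

```rust
// Hypothetical sketch: rank series by trailing 30-day return, then
// verify the winner's maximum drawdown stays under a threshold.
fn trailing_return(prices: &[f64], window: usize) -> f64 {
    let n = prices.len();
    let start = n.saturating_sub(window + 1);
    prices[n - 1] / prices[start] - 1.0
}

fn max_drawdown(prices: &[f64]) -> f64 {
    let mut peak = f64::MIN;
    let mut worst = 0.0_f64;
    for &p in prices {
        peak = peak.max(p);
        worst = worst.max((peak - p) / peak);
    }
    worst
}

fn main() {
    let a = [100.0, 110.0, 121.0];
    let b = [100.0, 95.0, 99.0];
    // Momentum: keep the series with the better trailing return...
    let pick = if trailing_return(&a, 30) > trailing_return(&b, 30) {
        &a[..]
    } else {
        &b[..]
    };
    assert_eq!(pick, &a[..]);
    // ...subject to the drawdown constraint (10% here, purely illustrative).
    assert!(max_drawdown(pick) < 0.10);
}
```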
diff --git a/blog/2018-01-16-captains-cookbok-part-1/index.mdx b/blog/2018-01-16-captains-cookbok-part-1/index.mdx
index 76d1ea7..042f23a 100644
--- a/blog/2018-01-16-captains-cookbok-part-1/index.mdx
+++ b/blog/2018-01-16-captains-cookbok-part-1/index.mdx
@@ -1,6 +1,6 @@
---
slug: 2018/01/captains-cookbook-part-1
-title: Captain's cookbook - part 1
+title: "Captain's Cookbook: Project setup"
date: 2018-01-16 12:00:00
authors: [bspeice]
tags: []
diff --git a/blog/2018-01-16-captains-cookbook-part-2/index.mdx b/blog/2018-01-16-captains-cookbook-part-2/index.mdx
index 52c3d2e..80d7f06 100644
--- a/blog/2018-01-16-captains-cookbook-part-2/index.mdx
+++ b/blog/2018-01-16-captains-cookbook-part-2/index.mdx
@@ -1,6 +1,6 @@
---
slug: 2018/01/captains-cookbook-part-2
-title: Captain's cookbook - part 2
+title: "Captain's Cookbook: Practical usage"
date: 2018-01-16 13:00:00
authors: [bspeice]
tags: []
diff --git a/blog/2018-06-25-dateutil-parser-to-rust/index.mdx b/blog/2018-06-25-dateutil-parser-to-rust/index.mdx
index ed919af..7d4f1a2 100644
--- a/blog/2018-06-25-dateutil-parser-to-rust/index.mdx
+++ b/blog/2018-06-25-dateutil-parser-to-rust/index.mdx
@@ -1,6 +1,6 @@
---
slug: 2018/06/dateutil-parser-to-rust
-title: "What I Learned: Porting Dateutil Parser to Rust"
+title: "What I learned porting dateutil to Rust"
date: 2018-06-25 12:00:00
authors: [bspeice]
tags: []
diff --git a/blog/2018-12-15-allocation-safety/_article.md b/blog/2018-12-15-allocation-safety/_article.md
new file mode 100644
index 0000000..7892856
--- /dev/null
+++ b/blog/2018-12-15-allocation-safety/_article.md
@@ -0,0 +1,218 @@
+---
+layout: post
+title: "QADAPT - debug_assert! for your memory usage"
+description: "...and why you want an allocator that goes 💥."
+category:
+tags: []
+---
+
+I think it's part of the human condition to ignore perfectly good advice when it comes our way. A
+bit over a month ago, I was dispensing sage wisdom for the ages:
+
+> I had a really great idea: build a custom allocator that allows you to track your own allocations.
+> I gave it a shot, but learned very quickly: **never write your own allocator.**
+>
+> -- [me](/2018/10/case-study-optimization.html)
+
+I proceeded to ignore it, because we never really learn from our mistakes.
+
+There's another part of the human condition that derives joy from seeing things explode.
+
+
+
+And _that's_ the part I'm going to focus on.
+
+# Why an Allocator?
+
+So why, after complaining about allocators, would I still want to write one? There are three reasons
+for that:
+
+1. Allocation/dropping is slow
+2. It's difficult to know exactly when Rust will allocate or drop, especially when using code that
+ you did not write
+3. I want automated tools to verify behavior, instead of inspecting by hand
+
+When I say "slow," it's important to define the terms. If you're writing web applications, you'll
+spend orders of magnitude more time waiting for the database than you will the allocator. However,
+there's still plenty of code where microseconds or nanoseconds matter; think
+[finance](https://www.youtube.com/watch?v=NH1Tta7purM),
+[real-time audio](https://www.reddit.com/r/rust/comments/9hg7yj/synthesizer_progress_update/e6c291f),
+[self-driving cars](https://polysync.io/blog/session-types-for-hearty-codecs/), and
+[networking](https://carllerche.github.io/bytes/bytes/index.html). In these situations it's simply
+unacceptable for you to spend time doing things that are not your program, and waiting on the
+allocator is not cool.
+
+As I continue to learn Rust, it's difficult for me to predict where exactly allocations will happen.
+So, I propose we play a quick trivia game: **Does this code invoke the allocator?**
+
+## Example 1
+
+```rust
+fn my_function() {
+    let v: Vec<u8> = Vec::new();
+}
+```
+
+**No**: Rust [knows how big](https://doc.rust-lang.org/std/mem/fn.size_of.html) the `Vec` type is,
+and reserves a fixed amount of memory on the stack for the `v` vector. However, if we wanted to
+reserve extra space (using `Vec::with_capacity`) the allocator would get invoked.
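To see this concretely, a counting allocator can report whether anything was allocated. This is an illustrative sketch (the `Counting` allocator here is made up for the example, not part of QADAPT):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wraps the system allocator and counts calls to `alloc`.
struct Counting;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: Counting = Counting;

fn main() {
    let before = ALLOCS.load(Ordering::SeqCst);

    // `Vec::new()` reserves nothing, so the counter doesn't move...
    let _v: Vec<u8> = Vec::new();
    assert_eq!(ALLOCS.load(Ordering::SeqCst), before);

    // ...but `with_capacity` reserves heap space up front.
    let _w: Vec<u8> = Vec::with_capacity(4);
    assert!(ALLOCS.load(Ordering::SeqCst) > before);
}
```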
+
+## Example 2
+
+```rust
+fn my_function() {
+    let v: Box<Vec<u8>> = Box::new(Vec::new());
+}
+```
+
+**Yes**: Because a `Box` lets us work with values whose size is unknown at compile time, it has to
+allocate on the heap. While the `Box` is unnecessary in this snippet (release builds will optimize
+out the allocation), reserving heap space is generally needed to pass a dynamically sized type to
+another function.
+
+## Example 3
+
+```rust
+fn my_function(mut v: Vec<u8>) {
+    v.push(5);
+}
+```
+
+**Maybe**: Depending on whether the vector we were given has spare capacity, we may or may not
+allocate. Especially when dealing with code that you did not author, it's difficult to verify that
+things behave as you expect them to.
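One way to observe the "maybe" at runtime is to compare `len()` against `capacity()`: pushes within the reservation reuse the buffer, while the next push forces a reallocation. A minimal sketch:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(2);
    let cap = v.capacity();
    assert!(cap >= 2);

    // Fill the reserved capacity: none of these pushes reallocate.
    while v.len() < cap {
        v.push(0);
    }
    assert_eq!(v.capacity(), cap);

    // This push exceeds the reservation and must reallocate.
    v.push(0);
    assert!(v.capacity() > cap);
}
```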
+
+# Blowing Things Up
+
+So, how exactly does QADAPT solve these problems? **Whenever an allocation or drop occurs in code
+marked allocation-safe, QADAPT triggers a thread panic.** We don't want to let the program continue
+as if nothing strange happened, _we want things to explode_.
+
+However, you don't want code to panic in production because of circumstances you didn't predict.
+Just like [`debug_assert!`](https://doc.rust-lang.org/std/macro.debug_assert.html), **QADAPT will
+strip out its own code when building in release mode to guarantee no panics and no performance
+impact.**
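The compile-mode gating that `debug_assert!` (and QADAPT) relies on can be sketched directly; `expensive_check` here is a made-up placeholder:

```rust
fn expensive_check() -> bool {
    // Stand-in for a check you only want to pay for in debug builds.
    true
}

fn main() {
    // The `debug_assertions` cfg flag tells you which mode you're in.
    if cfg!(debug_assertions) {
        println!("debug build: debug_assert! panics on failure");
    } else {
        println!("release build: debug_assert! is compiled out");
    }

    // In a release build this line is removed entirely,
    // side effects included.
    debug_assert!(expensive_check());
}
```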
+
+Finally, there are three ways to have QADAPT check that your code will not invoke the allocator:
+
+## Using a procedural macro
+
+The easiest method: watch an entire function for allocator invocations:
+
+```rust
+use qadapt::no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+#[no_alloc]
+fn push_vec(v: &mut Vec<u8>) {
+ // This triggers a panic if v.len() == v.capacity()
+ v.push(5);
+}
+
+fn main() {
+    let mut v = Vec::with_capacity(1);
+
+    // This will *not* trigger a panic
+    push_vec(&mut v);
+
+    // This *will* trigger a panic
+    push_vec(&mut v);
+}
+```
+
+## Using a regular macro
+
+For times when you need more precision:
+
+```rust
+use qadapt::assert_no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+fn main() {
+    let mut v = Vec::with_capacity(1);
+
+ // No allocations here, we already have space reserved
+ assert_no_alloc!(v.push(5));
+
+ // Even though we remove an item, it doesn't trigger a drop
+ // because it's a scalar. If it were a `Box<_>` type,
+ // a drop would trigger.
+ assert_no_alloc!({
+ v.pop().unwrap();
+ });
+}
+```
+
+## Using function calls
+
+This is both the most precise method and the most tedious:
+
+```rust
+use qadapt::enter_protected;
+use qadapt::exit_protected;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+fn main() {
+ // This triggers an allocation (on non-release builds)
+    let mut v = Vec::with_capacity(1);
+
+ enter_protected();
+ // This does not trigger an allocation because we've reserved size
+ v.push(0);
+ exit_protected();
+
+ // This triggers an allocation because we ran out of size,
+ // but doesn't panic because we're no longer protected.
+ v.push(1);
+}
+```
+
+## Caveats
+
+It's important to point out that QADAPT code is synchronous, so please be careful when mixing in
+asynchronous functions:
+
+```rust
+use futures::future::Future;
+use futures::future::ok;
+use qadapt::assert_no_alloc;
+use qadapt::no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+#[no_alloc]
+fn async_capacity() -> impl Future<Item = Vec<u8>, Error = ()> {
+ ok(12).and_then(|e| Ok(Vec::with_capacity(e)))
+}
+
+fn main() {
+ // This doesn't trigger a panic because the `and_then` closure
+ // wasn't run during the function call.
+ async_capacity();
+
+ // Still no panic
+ assert_no_alloc!(async_capacity());
+
+ // This will panic because the allocation happens during `unwrap`
+ // in the `assert_no_alloc!` macro
+ assert_no_alloc!(async_capacity().poll().unwrap());
+}
+```
+
+# Conclusion
+
+While there's a lot more to writing high-performance code than managing your usage of the allocator,
+it's critical that you do use the allocator correctly. QADAPT will verify that your code is doing
+what you expect. It's usable even on stable Rust from version 1.31 onward, which isn't the case for
+most allocators. Version 1.0 was released today, and you can check it out over at
+[crates.io](https://crates.io/crates/qadapt) or on [github](https://github.com/bspeice/qadapt).
+
+I'm hoping to write more about high-performance Rust in the future, and I expect that QADAPT will
+help guide that. If there are topics you're interested in, let me know in the comments below!
+
diff --git a/blog/2018-12-15-allocation-safety/index.mdx b/blog/2018-12-15-allocation-safety/index.mdx
new file mode 100644
index 0000000..9333459
--- /dev/null
+++ b/blog/2018-12-15-allocation-safety/index.mdx
@@ -0,0 +1,222 @@
+---
+slug: 2018/12/allocation-safety
+title: "QADAPT - debug_assert! for allocations"
+date: 2018-12-15 12:00:00
+authors: [bspeice]
+tags: []
+---
+
+I think it's part of the human condition to ignore perfectly good advice when it comes our way. A
+bit over a month ago, I was dispensing sage wisdom for the ages:
+
+> I had a really great idea: build a custom allocator that allows you to track your own allocations.
+> I gave it a shot, but learned very quickly: **never write your own allocator.**
+>
+> -- [me](../2018-10-08-case-study-optimization)
+
+I proceeded to ignore it, because we never really learn from our mistakes.
+
+
+
+There's another part of the human condition that derives joy from seeing things explode.
+
+
+![Explosions](./watch-the-world-burn.webp)
+
+
+And _that's_ the part I'm going to focus on.
+
+## Why an Allocator?
+
+So why, after complaining about allocators, would I still want to write one? There are three reasons
+for that:
+
+1. Allocation/dropping is slow
+2. It's difficult to know exactly when Rust will allocate or drop, especially when using code that
+ you did not write
+3. I want automated tools to verify behavior, instead of inspecting by hand
+
+When I say "slow," it's important to define the terms. If you're writing web applications, you'll
+spend orders of magnitude more time waiting for the database than you will the allocator. However,
+there's still plenty of code where microseconds or nanoseconds matter; think
+[finance](https://www.youtube.com/watch?v=NH1Tta7purM),
+[real-time audio](https://www.reddit.com/r/rust/comments/9hg7yj/synthesizer_progress_update/e6c291f),
+[self-driving cars](https://polysync.io/blog/session-types-for-hearty-codecs/), and
+[networking](https://carllerche.github.io/bytes/bytes/index.html). In these situations it's simply
+unacceptable for you to spend time doing things that are not your program, and waiting on the
+allocator is not cool.
+
+As I continue to learn Rust, it's difficult for me to predict where exactly allocations will happen.
+So, I propose we play a quick trivia game: **Does this code invoke the allocator?**
+
+### Example 1
+
+```rust
+fn my_function() {
+    let v: Vec<u8> = Vec::new();
+}
+```
+
+**No**: Rust [knows how big](https://doc.rust-lang.org/std/mem/fn.size_of.html) the `Vec` type is,
+and reserves a fixed amount of memory on the stack for the `v` vector. However, if we wanted to
+reserve extra space (using `Vec::with_capacity`) the allocator would get invoked.
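To see this concretely, a counting allocator can report whether anything was allocated. This is an illustrative sketch (the `Counting` allocator here is made up for the example, not part of QADAPT):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wraps the system allocator and counts calls to `alloc`.
struct Counting;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: Counting = Counting;

fn main() {
    let before = ALLOCS.load(Ordering::SeqCst);

    // `Vec::new()` reserves nothing, so the counter doesn't move...
    let _v: Vec<u8> = Vec::new();
    assert_eq!(ALLOCS.load(Ordering::SeqCst), before);

    // ...but `with_capacity` reserves heap space up front.
    let _w: Vec<u8> = Vec::with_capacity(4);
    assert!(ALLOCS.load(Ordering::SeqCst) > before);
}
```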
+
+### Example 2
+
+```rust
+fn my_function() {
+    let v: Box<Vec<u8>> = Box::new(Vec::new());
+}
+```
+
+**Yes**: Because a `Box` lets us work with values whose size is unknown at compile time, it has to
+allocate on the heap. While the `Box` is unnecessary in this snippet (release builds will optimize
+out the allocation), reserving heap space is generally needed to pass a dynamically sized type to
+another function.
+
+### Example 3
+
+```rust
+fn my_function(mut v: Vec<u8>) {
+    v.push(5);
+}
+```
+
+**Maybe**: Depending on whether the vector we were given has spare capacity, we may or may not
+allocate. Especially when dealing with code that you did not author, it's difficult to verify that
+things behave as you expect them to.
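One way to observe the "maybe" at runtime is to compare `len()` against `capacity()`: pushes within the reservation reuse the buffer, while the next push forces a reallocation. A minimal sketch:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(2);
    let cap = v.capacity();
    assert!(cap >= 2);

    // Fill the reserved capacity: none of these pushes reallocate.
    while v.len() < cap {
        v.push(0);
    }
    assert_eq!(v.capacity(), cap);

    // This push exceeds the reservation and must reallocate.
    v.push(0);
    assert!(v.capacity() > cap);
}
```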
+
+## Blowing Things Up
+
+So, how exactly does QADAPT solve these problems? **Whenever an allocation or drop occurs in code
+marked allocation-safe, QADAPT triggers a thread panic.** We don't want to let the program continue
+as if nothing strange happened, _we want things to explode_.
+
+However, you don't want code to panic in production because of circumstances you didn't predict.
+Just like [`debug_assert!`](https://doc.rust-lang.org/std/macro.debug_assert.html), **QADAPT will
+strip out its own code when building in release mode to guarantee no panics and no performance
+impact.**
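The compile-mode gating that `debug_assert!` (and QADAPT) relies on can be sketched directly; `expensive_check` here is a made-up placeholder:

```rust
fn expensive_check() -> bool {
    // Stand-in for a check you only want to pay for in debug builds.
    true
}

fn main() {
    // The `debug_assertions` cfg flag tells you which mode you're in.
    if cfg!(debug_assertions) {
        println!("debug build: debug_assert! panics on failure");
    } else {
        println!("release build: debug_assert! is compiled out");
    }

    // In a release build this line is removed entirely,
    // side effects included.
    debug_assert!(expensive_check());
}
```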
+
+Finally, there are three ways to have QADAPT check that your code will not invoke the allocator:
+
+### Using a procedural macro
+
+The easiest method: watch an entire function for allocator invocations:
+
+```rust
+use qadapt::no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+#[no_alloc]
+fn push_vec(v: &mut Vec<u8>) {
+ // This triggers a panic if v.len() == v.capacity()
+ v.push(5);
+}
+
+fn main() {
+    let mut v = Vec::with_capacity(1);
+
+    // This will *not* trigger a panic
+    push_vec(&mut v);
+
+    // This *will* trigger a panic
+    push_vec(&mut v);
+}
+```
+
+### Using a regular macro
+
+For times when you need more precision:
+
+```rust
+use qadapt::assert_no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+fn main() {
+    let mut v = Vec::with_capacity(1);
+
+ // No allocations here, we already have space reserved
+ assert_no_alloc!(v.push(5));
+
+ // Even though we remove an item, it doesn't trigger a drop
+ // because it's a scalar. If it were a `Box<_>` type,
+ // a drop would trigger.
+ assert_no_alloc!({
+ v.pop().unwrap();
+ });
+}
+```
+
+### Using function calls
+
+This is both the most precise method and the most tedious:
+
+```rust
+use qadapt::enter_protected;
+use qadapt::exit_protected;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+fn main() {
+ // This triggers an allocation (on non-release builds)
+    let mut v = Vec::with_capacity(1);
+
+ enter_protected();
+ // This does not trigger an allocation because we've reserved size
+ v.push(0);
+ exit_protected();
+
+ // This triggers an allocation because we ran out of size,
+ // but doesn't panic because we're no longer protected.
+ v.push(1);
+}
+```
+
+### Caveats
+
+It's important to point out that QADAPT code is synchronous, so please be careful when mixing in
+asynchronous functions:
+
+```rust
+use futures::future::Future;
+use futures::future::ok;
+use qadapt::assert_no_alloc;
+use qadapt::no_alloc;
+use qadapt::QADAPT;
+
+#[global_allocator]
+static Q: QADAPT = QADAPT;
+
+#[no_alloc]
+fn async_capacity() -> impl Future<Item = Vec<u8>, Error = ()> {
+ ok(12).and_then(|e| Ok(Vec::with_capacity(e)))
+}
+
+fn main() {
+ // This doesn't trigger a panic because the `and_then` closure
+ // wasn't run during the function call.
+ async_capacity();
+
+ // Still no panic
+ assert_no_alloc!(async_capacity());
+
+ // This will panic because the allocation happens during `unwrap`
+ // in the `assert_no_alloc!` macro
+ assert_no_alloc!(async_capacity().poll().unwrap());
+}
+```
+
+## Conclusion
+
+While there's a lot more to writing high-performance code than managing your usage of the allocator,
+it's critical that you do use the allocator correctly. QADAPT will verify that your code is doing
+what you expect. It's usable even on stable Rust from version 1.31 onward, which isn't the case for
+most allocators. Version 1.0 was released today, and you can check it out over at
+[crates.io](https://crates.io/crates/qadapt) or on [github](https://github.com/bspeice/qadapt).
+
+I'm hoping to write more about high-performance Rust in the future, and I expect that QADAPT will
+help guide that. If there are topics you're interested in, let me know in the comments below!
+
diff --git a/blog/2018-12-15-allocation-safety/watch-the-world-burn.webp b/blog/2018-12-15-allocation-safety/watch-the-world-burn.webp
new file mode 100644
index 0000000..df663f3
Binary files /dev/null and b/blog/2018-12-15-allocation-safety/watch-the-world-burn.webp differ
diff --git a/src/css/custom.css b/src/css/custom.css
index 2deadc8..bd7bd02 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -1,6 +1,8 @@
:root {
--ifm-container-width: 1280px;
+ --ifm-container-width-xl: 1440px;
--ifm-footer-padding-vertical: .5rem;
+ --ifm-spacing-horizontal: .8rem;
}
.header-github-link:hover {
diff --git a/src/theme/BlogLayout/index.tsx b/src/theme/BlogLayout/index.tsx
deleted file mode 100644
index 302875d..0000000
--- a/src/theme/BlogLayout/index.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-import React from 'react';
-import clsx from 'clsx';
-import Layout from '@theme/Layout';
-import BlogSidebar from '@theme/BlogSidebar';
-
-import type {Props} from '@theme/BlogLayout';
-
-export default function BlogLayout(props: Props): JSX.Element {
- const {sidebar, toc, children, ...layoutProps} = props;
- const hasSidebar = sidebar && sidebar.items.length > 0;
-
-  return (
-    <Layout {...layoutProps}>
-      <div className="container margin-vert--lg">
-        <div className="row">
-          <BlogSidebar sidebar={sidebar} />
-          <main
-            className={clsx('col', {
-              'col--7': hasSidebar,
-              'col--9 col--offset-1': !hasSidebar,
-            })}>
-            {children}
-          </main>
-          {toc && <div className="col col--2">{toc}</div>}
-        </div>
-      </div>
-    </Layout>
-  );
-}
diff --git a/src/theme/BlogPostPaginator/index.tsx b/src/theme/BlogPostPaginator/index.tsx
index d9af31b..3e8da78 100644
--- a/src/theme/BlogPostPaginator/index.tsx
+++ b/src/theme/BlogPostPaginator/index.tsx
@@ -1,3 +1,7 @@
+/**
+ * Docusaurus typically puts the newer post on the left button,
+ * and the older post on the right. This file exists to swap them.
+ */
import React from 'react';
import Translate, {translate} from '@docusaurus/Translate';
import PaginatorNavLink from '@theme/PaginatorNavLink';
diff --git a/src/theme/BlogSidebar/Content/index.tsx b/src/theme/BlogSidebar/Content/index.tsx
new file mode 100644
index 0000000..2ebf4a9
--- /dev/null
+++ b/src/theme/BlogSidebar/Content/index.tsx
@@ -0,0 +1,120 @@
+/**
+ * Use post titles to infer blog post series
+ */
+import React, { memo, type ReactNode } from 'react';
+import Heading, { HeadingType } from '@theme/Heading';
+import type { Props } from '@theme/BlogSidebar/Content';
+import { BlogSidebarItem } from '@docusaurus/plugin-content-blog';
+
+
+function BlogSidebarGroup({ title, headingType, children }: { title: string, headingType: HeadingType, children: ReactNode }) {
+  return (
+    <div role="group">
+      <Heading as={headingType}>
+        {title}
+      </Heading>
+      {children}
+    </div>
+  );
+}
+
+function groupBySeries(items: BlogSidebarItem[], ListComponent: Props["ListComponent"]) {
+  const returnItems: ReactNode[] = [];
+  let seriesItems: BlogSidebarItem[] = [];
+
+ function flushSeries() {
+ if (seriesItems.length === 0) {
+ return;
+ }
+
+ const seriesTitle = seriesItems[0].title.split(":")[0];
+
+ // Strip the series name from the titles
+ seriesItems = seriesItems.map(item => {
+ return {
+ ...item,
+        title: item.title.split(":").slice(1).join(":").trim(),
+ }
+ });
+
+ // Reverse the display ordering - normally blog items are shown in descending time order,
+ // but for a series, we want to show ascending order
+ seriesItems = seriesItems.reverse();
+
+ returnItems.push(<>
+
+