tags: [python]
---

Complaining about the [Global Interpreter Lock](https://wiki.python.org/moin/GlobalInterpreterLock) (GIL) seems like a rite of passage for Python developers. It's easy to criticize a design decision made before multi-core CPUs were widely available, but the fact that it's still around indicates that it generally works [Good](https://wiki.c2.com/?PrematureOptimization) [Enough](https://wiki.c2.com/?YouArentGonnaNeedIt). Besides, there are simple and effective workarounds; it's not hard to start a [new process](https://docs.python.org/3/library/multiprocessing.html) and use message passing to synchronize code running in parallel.

Still, wouldn't it be nice to have more than a single active interpreter thread? In an age of asynchronicity and $M:N$ threading, Python seems lacking. The ideal scenario is to take advantage of both Python's productivity and true parallel execution.

Presented below are two strategies for releasing the GIL's icy grip without giving up on what makes Python a nice language to start with. Bear in mind: these are just the tools, and no claim is made about whether it's a good idea to use them. Very often, unlocking the GIL is an [XY problem](https://en.wikipedia.org/wiki/XY_problem); you want application performance, and the GIL seems like an obvious bottleneck. Remember that any gains from running code in parallel come at the expense of project complexity; messing with the GIL is ultimately messing with Python's memory model.
```python
%load_ext Cython

# `Thread` is used by the parallel examples below
from threading import Thread

N = 1_000_000_000
```

# Cython
Put simply, [Cython](https://cython.org/) is a programming language that looks a lot like Python, gets [transpiled](https://en.wikipedia.org/wiki/Source-to-source_compiler) to C/C++, and integrates well with the [CPython](https://en.wikipedia.org/wiki/CPython) API. It's great for building Python wrappers to C and C++ libraries, writing optimized code for numerical processing, and tons more. And when it comes to managing the GIL, there are two special features:

- The `nogil` [function annotation](https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html#declaring-a-function-as-callable-without-the-gil) asserts that a Cython function is safe to use without the GIL, and compilation will fail if it interacts with vanilla Python
- The `with nogil` [context manager](https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html#releasing-the-gil) explicitly unlocks the CPython GIL while active

Whenever Cython code runs inside a `with nogil` block on a separate thread, the Python interpreter is unblocked and allowed to continue work elsewhere. We'll define a "busy work" function that demonstrates this principle in action:
```python
%%cython
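# NOTE: `fibonacci` and `cython_nogil` below are reconstructed as a sketch --
# an iterative Fibonacci marked `nogil`, plus a wrapper that releases the
# GIL around the call with `with nogil`
cdef unsigned long fibonacci(unsigned long n) nogil:
    cdef unsigned long a = 0, b = 1, c = 0, _i = 0

    if n <= 1:
        return n

    c = a + b
    for _i in range(2, n):
        a = b
        b = c
        c = a + b

    return c


def cython_nogil(unsigned long n):
    cdef unsigned long value

    # Explicitly release the GIL while the `nogil` helper runs
    with nogil:
        value = fibonacci(n)

    return value
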

def cython_gil(unsigned long n):
    # Because the GIL is not explicitly released, it implicitly
    # remains acquired when running the `fibonacci` function
    return fibonacci(n)
```
First, let's time how long it takes Cython to calculate the billionth Fibonacci number:
```python
%%time
_ = cython_gil(N);
```
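To see the GIL actually being released, the two functions can be run at the same time on separate threads, with the GIL-releasing `cython_nogil` started first. A minimal sketch of that cell (reusing `Thread` and `N` from the setup above; the exact code is an assumption) might look like:

```python
%%time
# Start the GIL-releasing thread first so the second thread can run
# alongside it
t1 = Thread(target=cython_nogil, args=[N])
t2 = Thread(target=cython_gil, args=[N])
t1.start(); t2.start()
t1.join(); t2.join()
```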
> Wall time: 358 ms

Because `user` time represents the sum of processing time on all threads, it doesn't change much. The ["wall time"](https://en.wikipedia.org/wiki/Elapsed_real_time) has been cut roughly in half because the code is now running in parallel.

Keep in mind that the **order in which threads are started** makes a difference!
```python
%%time
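# Sketch (assumed): the same two threads as above, but with the GIL-holding
# `cython_gil` thread started first
t1 = Thread(target=cython_gil, args=[N])
t2 = Thread(target=cython_nogil, args=[N])
t1.start(); t2.start()
t1.join(); t2.join()
```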
> Wall time: 672 ms

Even though the second thread releases the GIL while active, it can't start until the first has completed. Thus, the overall runtime is the same as running two GIL-locked threads.

Finally, be aware that attempting to unlock the GIL from a thread that doesn't own it will crash the **interpreter**, not just the thread attempting the unlock:
> File "/usr/lib/python3.7/threading.py", line 890 in _bootstrap

In practice, avoiding this issue is simple. First, `nogil` functions likely shouldn't contain `with nogil` blocks themselves. Second, Cython can [conditionally acquire/release](https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html#conditional-acquiring-releasing-the-gil) the GIL, so synchronizing access shouldn't be problematic. Finally, Cython's documentation for [external C code](https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html#acquiring-and-releasing-the-gil) contains more detail on how to safely manage the GIL.

To conclude: use Cython's `nogil` annotation to assert that functions are safe for calling when the GIL is unlocked, and `with nogil` to actually unlock the GIL.

# Numba

Like Cython, [Numba](https://numba.pydata.org/) is a "compiled Python." Where Cython works by compiling a Python-like language to C/C++, Numba compiles Python bytecode *directly to machine code* at runtime. Behavior is controlled with a special `@jit` decorator; calling a decorated function first compiles it to machine code, and then runs it. Calling the function a second time re-uses that machine code, but will recompile if the argument types change.

Numba works best when a `nopython=True` argument is added to the `@jit` decorator; functions compiled in [`nopython`](http://numba.pydata.org/numba-doc/latest/user/jit.html?#nopython) mode avoid the CPython API and have performance comparable to C. Further, adding `nogil=True` to the `@jit` decorator unlocks the GIL while that function is running. Note that `nogil` and `nopython` are separate arguments; while code must be compiled in `nopython` mode in order to release the lock, the GIL will remain locked if `nogil=False` (the default).

Let's repeat the same experiment, this time using Numba instead of Cython:
```python
# The `int` type annotation is only for humans and is ignored
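# NOTE: the `numba_nogil` definition below is a reconstructed sketch -- the
# same iterative Fibonacci used in the Cython example, compiled in
# `nopython` mode with the GIL released via `nogil=True`
from numba import jit

@jit(nopython=True, nogil=True)
def numba_nogil(n: int) -> int:
    if n <= 1:
        return n

    a = 0
    b = 1

    c = a + b
    for _i in range(2, n):
        a = b
        b = c
        c = a + b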
    return c


# Run using `nopython` mode to receive a performance boost,
# but GIL remains locked due to `nogil=False` by default.
@jit(nopython=True)
def numba_gil(n: int) -> int:
    if n <= 1:
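        return n

    # Sketch: the body continues identically to `numba_nogil` above
    a = 0
    b = 1

    c = a + b
    for _i in range(2, n):
        a = b
        b = c
        c = a + b

    return c
```

Timing the GIL-locked version on its own (a sketch of the cell that produces the output below, reusing `N` from the setup):

```python
%%time
_ = numba_gil(N)
```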
> Wall time: 251 ms

<span style="font-size: .8em">
Aside: it's not immediately clear why Numba takes ~20% less time to run than Cython for code that should be effectively identical after compilation.
</span>

When running two GIL-locked threads in parallel, the result (as expected) takes around twice as long to compute:
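Assuming a cell along these lines (a sketch; both threads target the GIL-holding `numba_gil`):

```python
%%time
t1 = Thread(target=numba_gil, args=[N])
t2 = Thread(target=numba_gil, args=[N])
t1.start(); t2.start()
t1.join(); t2.join()
```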
> Wall time: 541 ms

And if the GIL-unlocking thread runs first, both threads run in parallel:
```python
%%time
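# Sketch (assumed): the GIL-releasing `numba_nogil` thread starts first
t1 = Thread(target=numba_nogil, args=[N])
t2 = Thread(target=numba_gil, args=[N])
t1.start(); t2.start()
t1.join(); t2.join()
```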
> Wall time: 279 ms

Just like Cython, starting a GIL-locked thread first leads to overall runtime taking twice as long:
```python
%%time
t1 = Thread(target=numba_gil, args=[N])
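# Sketch (assumed): the rest of the cell mirrors the earlier examples
t2 = Thread(target=numba_nogil, args=[N])
t1.start(); t2.start()
t1.join(); t2.join()
```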
> Wall time: 522 ms

Finally, unlike Cython, Numba will unlock the GIL if and only if it is currently acquired; recursively calling `@jit(nogil=True)` functions is perfectly safe:
```python
from numba import jit
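
# Sketch: a directly self-recursive function compiled with `nogil=True`;
# per the note above, Numba only releases the GIL when it is actually held,
# so the nested calls are safe (the function body is an assumption)
@jit(nopython=True, nogil=True)
def numba_recurse(n: int) -> int:
    if n <= 0:
        return 0
    return numba_recurse(n - 1)

numba_recurse(2);
```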

# Conclusion

While unlocking the GIL is often a solution in search of a problem, both Cython and Numba provide simple means to manage the GIL when appropriate. This enables true parallelism (not just [concurrency](https://stackoverflow.com/a/1050257)) that is impossible in vanilla Python.

Before finishing, it's important to address pain points that will show up if these techniques are used in a more realistic project:

First, code running in a GIL-free context will likely also need non-trivial data structures; GIL-free functions aren't useful if they're constantly interacting with Python objects that need the GIL for access. Cython provides [extension types](http://docs.cython.org/en/latest/src/tutorial/cdef_classes.html) and Numba provides a [`@jitclass`](https://numba.pydata.org/numba-doc/dev/user/jitclass.html) decorator to address this need.
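As an illustration, a minimal `jitclass` sketch (assuming a recent Numba release, where `jitclass` lives in `numba.experimental`) might look like:

```python
from numba import int64
from numba.experimental import jitclass

# Field types are declared up front so compiled methods never have to
# touch Python objects
@jitclass([("value", int64)])
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self, by):
        self.value += by
        return self.value

counter = Counter()
counter.increment(3)
```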

Second, building and distributing applications that make use of Cython/Numba can be complicated. Cython packages require running the compiler, (potentially) linking/packaging external dependencies, and distributing a binary wheel. Numba is generally simpler because the code being distributed is pure Python that isn't compiled until it runs. However, errors aren't detected until runtime, and debugging can be problematic.