Mirror of https://github.com/bspeice/ecbm4040, synced 2024-12-04 21:18:12 -05:00

Commit 6064585459 (parent ea3eede557): Add some additional inline comments
@@ -4,7 +4,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Problem 1"
+"# Problem 1\n",
+"\n",
+"First, we run both `logistic_sgd.py` and `convolutional_mlp.py` on the GPU. Note that we're using IPython to both time and run each command."
 ]
 },
 {
@@ -639,6 +641,13 @@
 "!THEANO_FLAGS=device=gpu,floatX=float32 python DeepLearningTutorials/code/convolutional_mlp.py"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"After running the GPU code, we note that it took ~43 minutes all told for 200 epochs. We'll see when running the CPU version that it is significantly slower."
+]
+},
 {
 "cell_type": "code",
 "execution_count": 3,
@@ -846,7 +855,18 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Problem 2"
+"While we could run the entire model through the CPU, we cut it off after 20 minutes. During extended testing, it was found that the CPU model took approximately 6 hours to run. In this example, 10 epochs were trained in 20 minutes.\n",
+"\n",
+"This leads to the conclusion that the GPU is about an order of magnitude faster than the CPU (12s / epoch GPU vs. 2m / epoch CPU)."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Problem 2\n",
+"\n",
+"We set up variable `x` to be a user-supplied value. It is initialized as a Tensor, since we only need deal with it as a symbolic unit at the moment. Then, `a` and `b` are created as columns of uniform random numbers, with `a_shared` and `b_shared` set up to hold their values later. Note that we must specify `dtype=np.float32` to avoid precision issues."
 ]
 },
 {
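The speedup quoted in the new markdown cell can be sanity-checked with a little arithmetic. The script below is illustrative only, using the timings stated in the cell text rather than any new measurement:

```python
# Timings quoted in the notebook's markdown cells (not measured here).
gpu_seconds, gpu_epochs = 43 * 60, 200   # ~43 minutes for 200 epochs on GPU
cpu_seconds, cpu_epochs = 20 * 60, 10    # CPU run cut off: 10 epochs in 20 minutes

gpu_per_epoch = gpu_seconds / gpu_epochs  # ~12.9 s/epoch
cpu_per_epoch = cpu_seconds / cpu_epochs  # 120 s/epoch

# Roughly an order of magnitude, matching the cell's conclusion.
print(f"{gpu_per_epoch:.1f} s/epoch GPU, {cpu_per_epoch:.0f} s/epoch CPU, "
      f"{cpu_per_epoch / gpu_per_epoch:.1f}x speedup")
```

The ratio comes out to roughly 9x, consistent with the "12s / epoch GPU vs. 2m / epoch CPU" figure in the cell.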
@@ -896,6 +916,13 @@
 "f(np.ones(10))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"After getting a result using Theano's calculation, we use `a_shared` and `b_shared` to verify Theano ran correctly. Note that `a_shared` and `b_shared` are concrete Numpy arrays, and are in no way connected to Theano (other than being set initially through the function `updates` method)."
+]
+},
 {
 "cell_type": "code",
 "execution_count": 6,
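The point this cell makes, that `a_shared` and `b_shared` are concrete NumPy arrays, and the earlier note about `dtype=np.float32`, can be illustrated without Theano. The array shape below is an assumption for illustration, not taken from the notebook:

```python
import numpy as np

# np.random.uniform returns float64 by default; Theano's GPU backend of this
# era required float32, hence the explicit cast in the notebook.
a_shared = np.random.uniform(size=(10, 1)).astype(np.float32)
b_shared = np.random.uniform(size=(10, 1)).astype(np.float32)

print(a_shared.dtype)                    # float32: a plain, concrete array
print(np.random.uniform(size=3).dtype)   # float64 without the cast

# Being ordinary arrays, they can check a compiled function's output directly,
# e.g. np.allclose(f(np.ones(10)), <the same expression in plain NumPy>).
```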
@@ -968,7 +995,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Problem 3"
+"# Problem 3\n",
+"\n",
+"Finally, we write a trivial version of the Fibonacci sequence generator. Because all updates are idempotent (i.e. order doesn't matter) we simply need to update the second number in the sequence to be itself plus the first number, and update the first number to what the second number was. This continues until we're done."
 ]
 },
 {
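The update scheme described in the Problem 3 cell, with both new values computed from the old ones and applied together, can be sketched in plain Python as a simultaneous assignment. This is an illustrative stand-in, not the notebook's actual Theano `updates` code:

```python
def fibonacci(n):
    """First n Fibonacci numbers via the simultaneous update described above."""
    first, second = 0, 1
    out = []
    for _ in range(n):
        out.append(first)
        # Both right-hand sides use the *old* values, mirroring how all
        # entries of a Theano updates dictionary are applied from the
        # pre-update state (so their order doesn't matter).
        first, second = second, first + second
    return out

print(fibonacci(10))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```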
@@ -1009,21 +1038,21 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 2",
+"display_name": "Python 3",
 "language": "python",
-"name": "python2"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
-"version": 2
+"version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
-"pygments_lexer": "ipython2",
-"version": "2.7.11"
+"pygments_lexer": "ipython3",
+"version": "3.5.1"
 }
 },
 "nbformat": 4,