Fix an issue with MathJax messing up on "(", ")"

In a JavaScript string literal, '\(' is not a recognized escape sequence and
collapses to a plain '(', so the old inlineMath setting effectively told
MathJax to treat every bare parenthesis as a math delimiter. Restrict inline
math to '$...$' on the page, and backslash-escape the literal parentheses in
the notebook's markdown so they render as plain text.

Bradlee Speice 2016-03-05 11:54:06 -05:00
parent adcf1dea8a
commit f5ed17704d
2 changed files with 5 additions and 4 deletions

@@ -9,7 +9,8 @@ Summary: My first real-world data challenge: predicting whether a bank's custome
 {% notebook 2016-3-5-predicting-santander-customer-happiness.ipynb %}
 <script type="text/x-mathjax-config">
-MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\(','\)']]}});
+# MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\(','\)']]}});
+MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$']]}});
 </script>
 <script async src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML'></script>

@@ -10,7 +10,7 @@
 "\n",
 "# Data Exploration\n",
 "\n",
-"First up: we need to load our data and do some exploratory work. Because we're going to be using this data for model selection prior to testing, we need to make a further split. I've already gone ahead and done this work, please see the code in the appendix below.\n",
+"First up: we need to load our data and do some exploratory work. Because we're going to be using this data for model selection prior to testing, we need to make a further split. I've already gone ahead and done this work, please see the code in the [appendix below](#Appendix).\n",
 "\n",
 "[1]: https://www.kaggle.com/c/santander-customer-satisfaction"
 ]
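
As a concrete illustration of the "further split" this cell describes, here is a minimal sketch using pandas and scikit-learn. The file name train.csv, the ID index, and the TARGET label column follow the Santander competition's data layout; the 25% validation fraction and the random seed are illustrative assumptions, not the notebook's actual appendix code.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumes the Kaggle Santander "train.csv", with an "ID" index column
# and a binary "TARGET" label (0 = happy customer, 1 = unhappy).
data = pd.read_csv('train.csv', index_col='ID')
X, y = data.drop('TARGET', axis=1), data['TARGET']

# Hold out a validation set now, so model selection never touches it.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.25, random_state=0)
```
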
@@ -94,7 +94,7 @@
 "\n",
 "## Dimensionality Reduction pt. 1 - Binary Classifiers\n",
 "\n",
-"My first attempt to reduce the data dimensionality is to find all the binary classifiers in the dataset (i.e. 0 or 1 values) and see if any of those are good (or anti-good) predictors of the final data.\n",
+"My first attempt to reduce the data dimensionality is to find all the binary classifiers in the dataset \\(i.e. 0 or 1 values\\) and see if any of those are good \\(or anti-good\\) predictors of the final data.\n",
 "\n",
 "[2]: https://www.kaggle.com/c/santander-customer-satisfaction/data"
 ]
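
To make the binary-classifier idea concrete, a small sketch of how one might find those 0/1 columns and eyeball their predictive value, continuing from the X_train/y_train split sketched earlier. The column scan and the 0.05 rate-gap threshold are illustrative assumptions, not the notebook's exact code.

```python
# Find columns that only ever take the values 0 or 1.
binary_cols = [col for col in X_train.columns
               if set(X_train[col].unique()) <= {0, 1}]

# For each binary flag, compare the mean TARGET rate within each group;
# a large gap either way marks a good (or "anti-good") predictor.
for col in binary_cols:
    rates = y_train.groupby(X_train[col]).mean()
    if len(rates) == 2 and abs(rates[0] - rates[1]) > 0.05:
        print(col, rates.to_dict())
```
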
@@ -268,7 +268,7 @@
 "\n",
 "Instead, let's take a different approach to dimensionality reduction: [principle components analysis][4]. This allows us to perform the dimensionality reduction without worrying about the number of classes. Then, we'll use a [Gaussian Naive Bayes][5] model to actually do the prediction. This model is chosen simply because it doesn't take a long time to fit and compute; because PCA will take so long, I just want a prediction at the end of this. We can worry about using a more sophisticated LDA/QDA/SVM model later.\n",
 "\n",
-"Now into the actual process: We're going to test out PCA dimensionality reduction from 1 - 20 dimensions, and then predict using a Gaussian Naive Bayes model. The 20 dimensions upper limit was selected because the accuracy never improves after you get beyond that (I found out by running it myself). Hopefully, we'll find that we can create a model better than the naive guess.\n",
+"Now into the actual process: We're going to test out PCA dimensionality reduction from 1 - 20 dimensions, and then predict using a Gaussian Naive Bayes model. The 20 dimensions upper limit was selected because the accuracy never improves after you get beyond that \\(I found out by running it myself\\). Hopefully, we'll find that we can create a model better than the naive guess.\n",
 "\n",
 "[3]:http://scikit-learn.org/stable/modules/lda_qda.html\n",
 "[4]:http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\n",