{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An experiment in creating a robot that will imitate me on Twitter.\n",
    "\n",
    "---\n",
    "\n",
    "So, I'm taking a Machine Learning course this semester in school, and one of the topics we keep coming back to is natural language processing and the 'bag of words' data structure. That is, given a sentence:\n",
    "\n",
    "`How much wood would a woodchuck chuck if a woodchuck could chuck wood?`\n",
    "\n",
    "We can represent that sentence as the following set of word counts:\n",
    "\n",
    "`{\n",
    "    How: 1\n",
    "    much: 1\n",
    "    wood: 2\n",
    "    would: 1\n",
    "    a: 2\n",
    "    woodchuck: 2\n",
    "    chuck: 2\n",
    "    if: 1\n",
    "    could: 1\n",
    "}`\n",
    "\n",
    "Ignoring *where* the words appear, we're just interested in how *often* they occur. That got me thinking: I wonder what would happen if I built a robot that just imitated how often I said things? It's dangerous territory when computer scientists ask \"what if,\" but I got curious enough that I wanted to follow through.\n",
    "\n",
    "## The Objective\n",
    "\n",
    "Given an input list of tweets, build up the following things:\n",
    "\n",
    "1. The distribution of starting words; since there are no \"prior\" words to go from, we need to treat this as a special case.\n",
    "2. The distribution of words given a previous word; for example, every time I use the word `woodchuck` in the example sentence, there is a 50% chance it is followed by `chuck` and a 50% chance it is followed by `could`. I need this distribution for all words (see the short sketch after this list).\n",
    "3. The distribution of hashtag counts; do I most often use just one? Two? Do they follow something like a Poisson distribution?\n",
    "4. The distribution of the hashtags themselves; given a number of hashtags, what is the actual content? I'll treat hashtags as separate from the content of a tweet.\n",
    "\n",
    "## The Data\n",
    "\n",
    "I'm using my tweet history as input. I don't really use Twitter anymore, but it seems like a fun use of the dataset. I'd like to eventually build this up to the point where I can imitate anyone on Twitter using their last 100 tweets or so, but I'll start with this as example code.\n",
    "\n",
    "## The Algorithm\n",
    "\n",
    "I'll be using the [NLTK](http://www.nltk.org/) library for doing a lot of the heavy lifting. First, let's import the data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "tweets = pd.read_csv('tweets.csv')\n",
    "text = tweets.text\n",
    "\n",
    "# Don't include tweets in reply to or mentioning people\n",
    "replies = text.str.contains('@')\n",
    "text_norep = text.loc[~replies]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And now that we've got data, let's start crunching. First, tokenize and build out the distribution of first words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from nltk.tokenize import TweetTokenizer\n",
    "tknzr = TweetTokenizer()\n",
    "tokens = text_norep.map(tknzr.tokenize)\n",
    "\n",
    "# Empirical distribution of starting words, restricted to alphabetic tokens\n",
    "first_words = tokens.map(lambda x: x[0])\n",
    "first_words_alpha = first_words[first_words.str.isalpha()]\n",
    "first_word_dist = first_words_alpha.value_counts() / len(first_words_alpha)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we need to build out the conditional distributions. That is, what is the probability of the next word given that the current word is $X$? This one is a bit more involved. First, find all unique words, and then find the words that follow them. This can probably be done in a more efficient manner than I'm currently doing here, but we'll ignore that for the moment.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from functools import reduce\n",
    "\n",
    "# Get all possible words\n",
    "all_words = reduce(lambda x, y: x+y, tokens, [])\n",
    "unique_words = set(all_words)\n",
    "# Skip tokens starting with '.' (we stop generating when we hit one later on)\n",
    "actual_words = set([x for x in unique_words if x[0] != '.'])\n",
    "\n",
    "word_dist = {}\n",
    "for word in actual_words:\n",
    "    indices = [i for i, j in enumerate(all_words) if j == word]\n",
    "    # Words that immediately follow each occurrence (guard against the final token)\n",
    "    proceeding = [all_words[i+1] for i in indices if i + 1 < len(all_words)]\n",
    "    word_dist[word] = proceeding"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we've got the tweet analysis done, it's time for the fun part: hashtags! Let's count how many hashtags are in each tweet to get a sense of the distribution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.axes._subplots.AxesSubplot at 0x18e59dc28d0>"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "(base64 PNG omitted: histogram of the number of hashtags per tweet)",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x18e59dabe80>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "hashtags = text_norep.str.count('#')\n",
    "bins = hashtags.unique().max()\n",
    "hashtags.plot(kind='hist', bins=bins)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That looks like a Poisson distribution, kind of as I expected. I'm guessing my number of hashtags per tweet is $\\sim Poi(1)$, but let's actually find the [maximum likelihood estimator](https://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood), which in this case is just the sample mean $\\hat{\\lambda} = \\bar{x}$.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.870236869207003"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "mle = hashtags.mean()\n",
    "mle"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pretty close! So we can now simulate how many hashtags are in a tweet. Let's also find what hashtags are actually used:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "603"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Note: this re-binds `hashtags` from the per-tweet counts above to the hashtag tokens themselves\n",
    "hashtags = [x for x in all_words if x[0] == '#']\n",
    "n_hashtags = len(hashtags)\n",
    "\n",
    "unique_hashtags = list(set([x for x in unique_words if x[0] == '#']))\n",
    "hashtag_dist = pd.DataFrame({'hashtags': unique_hashtags,\n",
    "                             'prob': [all_words.count(h) / n_hashtags\n",
    "                                      for h in unique_hashtags]})\n",
    "len(hashtag_dist)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Turns out I have used 603 different hashtags during my time on Twitter. That means I was using a unique hashtag for about every third tweet.\n",
    "\n",
    "In better news though, we now have all the data we need to go about actually constructing tweets! The process will happen in a few steps:\n",
    "\n",
    "1. Randomly select what the first word will be.\n",
    "2. Randomly select the number of hashtags for this tweet, and then select the actual hashtags.\n",
    "3. Fill in the remaining space of 140 characters with random words taken from my tweets.\n",
    "\n",
    "And hopefully, we won't have anything too crazy come out the other end. The way we do the selection follows a [Multinomial Distribution](https://en.wikipedia.org/wiki/Multinomial_distribution): given a set of values, each with its own probability, pick one. Let's give a quick example:\n",
    "\n",
    "```\n",
    "x: .33\n",
    "y: .5\n",
    "z: .17\n",
    "```\n",
    "\n",
    "That is, I pick `x` with probability 33%, `y` with probability 50%, and so on. In the context of our sentence construction, I've built out the probabilities of specific words already - now I just need to simulate from that distribution. Time to actually develop the engine!\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def multinom_sim(n, vals, probs):\n",
    "    # Draw n values according to `probs`; with n=1 this picks a single value\n",
    "    occurrences = np.random.multinomial(n, probs)\n",
    "    results = occurrences * vals\n",
    "    return ' '.join(results[results != ''])\n",
    "\n",
    "def sim_n_hashtags(hashtag_freq):\n",
    "    # Number of hashtags in a tweet, modeled as Poisson\n",
    "    return np.random.poisson(hashtag_freq)\n",
    "\n",
    "def sim_hashtags(n, hashtag_dist):\n",
    "    return multinom_sim(n, hashtag_dist.hashtags, hashtag_dist.prob)\n",
    "\n",
    "def sim_first_word(first_word_dist):\n",
    "    probs = np.float64(first_word_dist.values)\n",
    "    return multinom_sim(1, first_word_dist.reset_index()['index'], probs)\n",
    "\n",
    "def sim_next_word(current, word_dist):\n",
    "    # Uniform over the observed followers; duplicates give the empirical distribution\n",
    "    dist = pd.Series(word_dist[current])\n",
    "    probs = np.ones(len(dist)) / len(dist)\n",
    "    return multinom_sim(1, dist, probs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pulling it all together\n",
    "\n",
    "I've now built out all the code I need to actually simulate a sentence written by me. Let's try an example with five words and a single hashtag:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'My first all-nighter of friends #oldschool'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "first = sim_first_word(first_word_dist)\n",
    "second = sim_next_word(first, word_dist)\n",
    "third = sim_next_word(second, word_dist)\n",
    "fourth = sim_next_word(third, word_dist)\n",
    "fifth = sim_next_word(fourth, word_dist)\n",
    "hashtag = sim_hashtags(1, hashtag_dist)\n",
    "\n",
    "' '.join((first, second, third, fourth, fifth, hashtag))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's go ahead and put everything together! We're going to simulate a first word, simulate the hashtags, and then keep simulating words to fill the remaining space until we've either used up all 140 characters or hit a sentence-ending token."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def simulate_tweet():\n",
    "    chars_remaining = 140\n",
    "    first = sim_first_word(first_word_dist)\n",
    "    n_hash = sim_n_hashtags(mle)\n",
    "    hashtags = sim_hashtags(n_hash, hashtag_dist)\n",
    "    \n",
    "    chars_remaining -= len(first) + len(hashtags)\n",
    "    \n",
    "    tweet = first\n",
    "    current = first\n",
    "    # Keep adding words until we run out of room or hit a sentence-ending token\n",
    "    while chars_remaining > len(tweet) + len(hashtags) and current[0] != '.' and current[0] != '!':\n",
    "        current = sim_next_word(current, word_dist)\n",
    "        tweet += ' ' + current\n",
    "    \n",
    "    # Drop the space before the final punctuation mark\n",
    "    tweet = tweet[:-2] + tweet[-1]\n",
    "    \n",
    "    return ' '.join((tweet, hashtags)).strip()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The results\n",
    "\n",
    "And now for something completely different: twenty random tweets dreamed up by my computer and my Twitter data. Here you go:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Also , I'm at 8 this morning. #thursdaysgohard #ornot\n",
      "\n",
      "Turns out of us breathe the code will want to my undergraduate career is becoming more night trying ? Religion is now as a chane #HYPE\n",
      "\n",
      "You know what recursion is to review the UNCC. #ornot\n",
      "\n",
      "There are really sore 3 bonfires in my first writing the library ground floor if awesome. #realtalk #impressed\n",
      "\n",
      "So we can make it out there's nothing but I'm not let us so hot I could think I may be good. #SwingDance\n",
      "\n",
      "Happy Christmas , at Harris Teeter to be be godly or Roman Catholic ). #4b392b#4b392b #Isaiah26\n",
      "\n",
      "For context , I in the most decisive factor of the same for homework. #accomplishment\n",
      "\n",
      "Freaking done. #loveyouall\n",
      "\n",
      "New blog post : Don't jump in a quiz in with a knife fight. #haskell #earlybirthday\n",
      "\n",
      "God shows me legitimately want to get some food and one day.\n",
      "\n",
      "Stormed the queen city. #mindblown\n",
      "\n",
      "The day of a cold at least outside right before the semester ..\n",
      "\n",
      "Finished with the way back. #winners\n",
      "\n",
      "Waking up , OJ , I feel like Nick Jonas today.\n",
      "\n",
      "First draft of so hard drive. #humansvszombies\n",
      "\n",
      "Eric Whitacre is the wise creation.\n",
      "\n",
      "Ethics paper first , music in close to everyone who just be posting up with my sin , and Jerry Springr #TheLittleThings\n",
      "\n",
      "Love that you know enough time I've eaten at 8 PM. #deepthoughts #stillblownaway\n",
      "\n",
      "Lead. #ThinkingTooMuch #Christmas\n",
      "\n",
      "Aamazing conference when you married #DepartmentOfRedundancyDepartment Yep , but there's a legitimate challenge.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for i in range(0, 20):\n",
    "    print(simulate_tweet())\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "...Which all ended up being a whole lot more nonsensical than I had hoped for. There are some good ones, so I'll call that an accomplishment! I was banking on grammar not being an issue: since my tweets use impeccable grammar, the program modeled off them should have pretty good grammar as well. There are going to be some hilarious edge cases (I'm looking at you, `Ethics paper first, music in close to everyone`) that make no sense, and some others (`Waking up, OJ, I feel like Nick Jonas today`) that make me feel like I should have a Twitter rap career. On the whole though, the structure came out alright.\n",
    "\n",
    "## Moving on from here\n",
    "\n",
    "During class we also talked about an interesting idea: trying to analyze corporate documents and corporate speech. I'd be interested to see what this analysis could do when applied to something like a couple of bank press releases. In any case, the code needs some cleanup before I get that far.\n",
    "\n",
    "## For further reading\n",
    "\n",
    "I'm pretty confident I re-invented a couple of wheels along the way - what I'm doing is essentially a word-level [Markov chain](https://en.wikipedia.org/wiki/Markov_chain), and it feels a lot like what [Markov Chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) is intended to do. But I've never worked explicitly with that before, so more research is needed."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}