Category Archives: Statistics

Amazon, Google and Apple vs the Big 5 Unicorns on Hiring and Churn

I’ve received multiple requests to analyze employee churn and new hiring rates for big companies and unicorns with the approach I took earlier for studying engineering and sales retention rates. I figured I’d give it a shot – and combine all of the key metrics in one chart …

[Chart: Amazon, Google, and Apple vs. the Big 5 unicorns – expected monthly hires, churn, and net change]

How to read this:

The blue bars represent the number of expected new hires each company will make in a 30 day (one month) period. The black bars (negative values) indicate how many employees will churn in a one month period. The orange line (the topmost numerical labels correspond to the orange line plot) represents the net change in hires per month (new hires less churn). The companies are ranked by churn from left to right in descending order (so highest churn is on the left).

As you can see in the chart, the big three companies included in this analysis are Amazon, Apple and Google. The unicorns are Uber, Lyft, Airbnb, Pinterest and Snapchat. “Big 5” combines these unicorns together as if they were one whole company. Also note that this analysis looks at employees worldwide with any job title.

Key Insights:

  1. Apple is not hiring enough new heads when compared with Amazon and Google. In fact, the Big 5 unicorns combined will hire more net heads than Apple with almost 50% less employee churn.
  2. Amazon’s churn is the highest – losing a little over 10 people a day. However, this is not bad relatively speaking – Google loses 8-9 people a day, and Apple is a tad over 9 a day (and Amazon has 36% more employees than Google). Given the recent press bashing Amazon’s culture and the periodic press envying Google’s great benefits, their retention rates tell a different story – that it’s closer to a wash. It seems big tech companies with great talent churn people at pretty similar high rates regardless (I have some more thoughts on this but will save those for another post).
  3. At these current rates, all of the companies here will collectively increase their employee size by 20K heads (19,414 to be precise) by year end (this is new hires less churn). That’s a measly 5% increase in their current collective employee size – and this is across the Big 3 Tech Companies and Big 5 Unicorns.
  4. Let’s compare Amazon to the Big 5 Unicorns. The Big 5 will hire 79% as many incremental heads as Amazon in a month, even though their collective employee size is 24% of Amazon’s. Amazon has been in business for much longer (2-3x the days since incorporation), and the Big 5’s churn is 43% of Amazon’s figure – both factors contribute to the closeness in the incremental head rates between the two.

Want more details?

How did I calculate these figures? Take a look at my previous post on engineering retention for more details. The same caveats listed there apply here, and then some (such as the dependence on LinkedIn participation rates, which may differ considerably internationally compared to the US market that my previous posts exclusively focused on). Feel free to connect or email me if you have any questions or feedback.

 

Leave a comment

Filed under Data Mining, Economics, Entrepreneurship, Google, LinkedIn, Management, Non-Technical-Read, Research, Startups, Statistics, Trends

Ranking Companies on Sales Culture & Retention

A company’s sales retention rate is a very important indicator of business health. If you have a good gauge on this, you could better answer questions such as: should I join that company’s sales department, will I be able to progress up the ladder, are reps hitting their numbers, are they providing effective training, should I invest money in this business, etc. But how does one measure this rate especially from an outside vantage point? This is where LinkedIn comes to the rescue. I essentially cross applied the approach I took to measuring engineering retention to sales.

[Chart: technology companies ranked by sales churn rate]

This chart shows several key technology companies ranked in reverse order of sales churn – the higher a company sits on the chart (and the longer its bar), the higher its churn (so worst at the top, best at the bottom).

So how are we defining sales churn here? I calculated the measurement as follows: take the number of people who have ever churned from a sales role at the company and divide that by the number of days since that company’s incorporation (call this Churn Per Day). Then compute the ratio of how many sales people will churn in one year (the run rate, i.e. Churn Per Day * 365) to the number of sales people currently employed.

For example, the top row is Zenefits, with a value of 0.40 – meaning 40% of the current sales team size will churn in a one year period. To maintain that sales team size and corresponding revenue, the company will need to re-hire 40% of its team – and sooner than in a year, since that churn likely spreads throughout the year and new sales hires need ramping periods (if you’re churning a ramped rep and it takes one quarter to ramp a new rep, then you need to hire a new head at least one quarter beforehand to avoid a revenue dip).
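In code, the run-rate math is just a few lines (a sketch with made-up inputs, not Zenefits’ actual counts):

# Sketch: the sales churn run-rate calculation described above.
ever_churned_sales = 220          # people who ever churned from a sales role (hypothetical)
days_since_incorporation = 2000   # company age in days (hypothetical)
current_sales_heads = 200         # current sales team size (hypothetical)

churn_per_day = ever_churned_sales / days_since_incorporation
annual_run_rate = churn_per_day * 365              # expected churners per year
churn_ratio = annual_run_rate / current_sales_heads
print(round(churn_ratio, 2))  # 0.2 -> 20% of the current team churns in a year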

A few more notes:

The color saturation indicates Churn Per Day – the darker the color, the higher the Churn Per Day.

Caveats listed in the previous post on engineering retention apply to this analysis too.

Leave a comment

Filed under Data Mining, Economics, Enterprise, Entrepreneurship, Job Stuff, LinkedIn, Non-Technical-Read, Startups, Statistics, Trends, Venture Capital

Top Tech Companies Ranked By Engineering Retention

(TL;DR) Here’s the ranking going from top to bottom (so higher / longer the better):

[Chart: top tech companies ranked by engineering retention – Wipeout Period in years]

How did you measure this?

By running advanced LinkedIn searches and counting up the hits. Specifically, for each company, at their headquarters location only, I searched for profiles that were or are software engineers and had at least 1+ years of experience. Then I filtered these results in two ways:

1) Counting how many of those profiles used to work at the company in question (and not currently). Call this result Past Not Current Count.

2) Separately (not applying the above filter), filtering to those who are currently working at the company for at least 1+ years. Call this Current Count.

I also computed the number of days since incorporation for each respective company to be able to compute Churn Per Day – which is simply dividing Past Not Current Count by the number of days since incorporation.

Then I took this rate and computed how long, in years, it would take each company to churn through all of their Current Count – the current heads who were or are software engineers and have been with the company for at least 1 year (those who possess the most tribal wisdom and arguably deserve more retention benefits). Call this the Wipeout Period (in years) figure. This is what’s plotted in the chart above, represented by the length of the bars – so the longer the better for a company.
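Putting the whole pipeline together, the computation is tiny (a sketch with hypothetical counts standing in for the real LinkedIn search hits):

# Sketch: the Wipeout Period calculation.
past_not_current = 5000           # profiles that used to work at the company (hypothetical)
current_count = 8000              # profiles currently 1+ years at the company (hypothetical)
days_since_incorporation = 7300   # ~20 year old company (hypothetical)

churn_per_day = past_not_current / days_since_incorporation
wipeout_years = current_count / churn_per_day / 365
print(round(wipeout_years, 1))  # years to churn through all current heads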

What does the color hue indicate?

The Churn Per Day (described in the previous answer). The darker the color the higher the churn rate.

Who’s safe and who’s at risk?

I would think a wipeout period under 10 years (especially if you’re a larger, mature company) would be very scary.

In general (disclaimer – this is subjective; I’d like to run this over more comps), greater than 20 years feels safe. But if you’re dark green (and hence experience more churn per day), then in order to keep your wipeout period long you need to be hiring many new engineering heads constantly (and you may not always be hot in tech to be able to maintain such a hiring pace!).

What are the caveats with this analysis?

There are several, but to mention a few:

Past Not Current Count biases against older companies – for example, Microsoft has had more churn than its # of present heads because it has been in business for a long time.

I needed more precise filtering options than what was available from LinkedIn to properly remove software internships (although one could argue that’s still valid churn – it means the company wasn’t able to pipeline them into another internship or full-time position), and to ensure that the Past Not Current Count factored in only software engineers at the time they were working at that company. Given the lack of these filters, a better description for the above chart would be Ranking Retention of Folks with Software Experience.

Also, this analysis assumes the Churn Per Day figure is the same for all folks currently 1+ years at their respective company, even though the churn rate likely differs depending on the # of years you’ve been at the company (I’m essentially assuming it’s a wash – that the distributions of the historical Past Not Current vs Current populations are similar).

1 Comment

Filed under Blog Stuff, Computer Science, Data Mining, Entrepreneurship, Job Stuff, LinkedIn, Management, Non-Technical-Read, Research, Statistics, Trends, VC

Betting on UFC Fights – A Statistical Data Analysis

Mixed Martial Arts (MMA) is an incredibly entertaining and technical sport to watch. It’s become one of the fastest growing sports in the world. I’ve been following MMA organizations like the Ultimate Fighting Championship (UFC) for almost eight years now, and in that time have developed a great appreciation for MMA techniques. After watching dozens of fights, you begin to pick up on what moves win and when, and spot strengths and weaknesses in certain fighters. However, I’ve always wanted to test my knowledge against the actual stats – like do accomplished wrestlers really beat fighters with little wrestling experience?

To do this, we need fight data, so I crawled and parsed all the MMA fights from Sherdog.com. This data includes fighter profiles (birth date, weight, height, disciplines, training camp, location) and fight records (challenger, opponent, time, round, outcome, event). After some basic data cleaning, I had a dataset of 11,886 fight records, 1,390 of which correspond to the UFC.

I then trained a random forest classifier over this data to see if a state-of-the-art machine learning model could identify any winning and losing characteristics. Over cross-validation with 10 folds, the resulting model scored a surprisingly decent AUC of 0.69; an AUC closer to 0.5 would indicate that the model can’t predict winning fights any better than random or fair coin flips.
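For reference, the evaluation setup looks roughly like this in scikit-learn (a sketch – the placeholder features stand in for the real ones parsed from fighter profiles and fight records):

# Sketch: 10-fold cross-validated AUC for a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(1000, 20)        # placeholder fight features
y = rng.randint(0, 2, 1000)   # placeholder 1/0 challenger-won labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
aucs = cross_val_score(model, X, y, cv=10, scoring='roc_auc')
print(aucs.mean())  # ~0.5 on random data; 0.69 on the real fight data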

So there may be interesting patterns in this data … Feeling motivated, I ran exhaustive searches over the data to find feature combinations that indicate winning or losing behaviors. Many hours later, several dozen such insights were found.

Here are the most interesting ones (stars indicate statistical significance at the 5% level):

Top UFC Insights

Fighters older than 32 years of age will more likely lose

This was validated in 173 out of 277 (62%) fights*

Fighters with more than 6 TKO victories fighting opponents older than 32 years of age will more likely win

This was validated in 47 out of 60 (78%) fights*

Fighters from Japan will more likely lose

This was validated in 36 out of 51 (71%) fights*

Fighters who have lost 2 or more KOs will more likely lose

This was validated in 54 out of 84 (64%) fights*

Fighters with 3x or more decision wins and are greater than 3% taller than their opponents will more likely win

This was validated in 32 out of 38 (84%) fights*

Fighters who have won 3x or more decisions than their opponent will more likely win

This was validated in 142 out of 235 (60%) fights*

Fighters with no wrestling background facing fighters who do have one will more likely lose

This was validated in 136 out of 212 (64%) fights*

Fighters fighting opponents who have 3x or fewer decision wins and are on a 6 fight (or better) winning streak will more likely win

This was validated in 30 out of 39 (77%) fights*

Fighters younger than their opponents by 3 or more years in age will more likely win

This was validated in 324 out of 556 (58%) fights*

Fighters who haven’t fought in more than 210 days will more likely lose

This was validated in 162 out of 276 (59%) fights*

Fighters taller than their opponents by 3% will more likely win

This was validated in 159 out of 274 (58%) fights*

Fighters who have lost less by submission than their opponents will more likely win

This was validated in 295 out of 522 (57%) fights*

Fighters who have lost 6 or more fights will more likely lose

This was validated in 172 out of 291 (60%) fights*

Fighters who have 18 or more wins and have never had a 2 fight losing streak will more likely win

This was validated in 79 out of 126 (63%) fights*

Fighters who have lost back to back fights will more likely lose

This was validated in 514 out of 906 (57%) fights*

Fighters with 0 TKO victories will more likely lose

This was validated in 90 out of 164 (55%) fights

Fighters fighting opponents out of Greg Jackson’s camp will more likely lose

This was validated in 38 out of 63 (60%) fights

 

Top Insights over All Fights

Fighters with 15 or more wins that have 50% fewer losses than their opponents will more likely win

This was validated in 239 out of 307 (78%) fights*

Fighters fighting American opponents will more likely win

This was validated in 803 out of 1303 (62%) fights*

Fighters with 2x more (or better) wins than their opponents and those opponents lost their last fights will more likely win

This was validated in 709 out of 1049 (68%) fights*

Fighters who’ve lost their last 4 fights in a row will more likely lose

This was validated in 345 out of 501 (68%) fights*

Fighters currently on a 5 fight (or better) winning streak will more likely win

This was validated in 1797 out of 2960 (61%) fights*

Fighters with 3x or more wins than their opponents will more likely win

This was validated in 2831 out of 4764 (59%) fights*

Fighters who have lost 7 or more times will more likely lose

This was validated in 2551 out of 4547 (56%) fights*

Fighters with no jiu jitsu in their background versus fighters who do have it will more likely lose

This was validated in 334 out of 568 (59%) fights*

Fighters who have lost by submission 5 or more times will more likely lose

This was validated in 1166 out of 1982 (59%) fights*

Fighters in the Middleweight division who fought their last fight more recently will more likely win

This was validated in 272 out of 446 (61%) fights*

Fighters in the Lightweight division fighting 6 foot tall fighters (or higher) will more likely win

This was validated in 50 out of 83 (60%) fights

 

Note – I separated UFC fights from all fights because regulations and rules can vary across MMA organizations.

Most of these insights are intuitive, except for maybe the last one and an earlier one which states that 77% of the time fighters beat opponents who are on 6 fight (or better) winning streaks but have 3x fewer decision wins.

Many of these insights demonstrate statistically significant winning biases. I couldn’t help but wonder – could we use these insights to effectively bet on UFC fights? For the sake of simplicity, what happens if we make bets based on just the very first insight which states that fighters older than 32 years old will more likely lose (with a 62% chance)?

To evaluate this betting rule, I pulled the most recent UFC fights where in each fight there’s a fighter that’s at least 33 years old. I found 52 such fights, spanning 2/5/2011 – 8/14/2011. I placed a $10K bet on the younger fighter in each of these fights.

Surprisingly, this rule calls 33 of these 52 fights correctly (63% – very close to the rule’s observed 62% overall win rate). Each fight called incorrectly results in a loss of $10,000, and for each of the fights called correctly I obtained the corresponding Bodog money line (betting odds) to compute the actual winning amount.
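(For anyone checking the payout math, here’s a sketch of how an American money line converts to winnings on a $10K stake – the two example lines below are made up; the real ones are in the spreadsheet linked next.)

# Sketch: winnings from an American money line on a winning bet.
def winnings(stake, money_line):
    if money_line > 0:                       # underdog: +150 pays $150 per $100 staked
        return stake * money_line / 100.0
    return stake * 100.0 / abs(money_line)   # favorite: -200 pays $100 per $200 staked

print(winnings(10000, -250))  # hypothetical favorite line -> $4,000
print(winnings(10000, 180))   # hypothetical underdog line -> $18,000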

I’ve compiled the betting data for these fights in this Google spreadsheet.

Note, for 6 of the fights that our rule called correctly, the money lines favored the losing fighters.

Let’s compute the overall return of our simple betting rule:

For each of these 52 fights, we risked $10,000, or in all $520,000
We lost 19 times, or a total of $190,000
Based on the betting odds of the 33 fights we called correctly (see spreadsheet), we won $255,565.44
Profit = $255,565.44 – $190,000 = $65,565.44
Return on investment (ROI) = 100 * 65,565.44 / 520,000 = 12.6%

 

That’s a very decent return.

For kicks, let’s compare this to investing in the stock market over the same period of time. If we buy the S&P 500 with a conventional dollar cost averaging strategy to spread out the $520,000 investment, then we get a ROI of -7.31%. Ouch.

Keep in mind that we’re using a simple betting rule that’s based on a single insight. The random forest model, which optimizes over many insights, should predict better and be applicable to more fights.

Please note that I’m just poking fun at stocks – I’m not saying betting on UFC fights with this rule is a more sound investment strategy (risk should be thoroughly examined – the variance of the performance of the rule should be evaluated over many periods of time).

The main goal here is to demonstrate the effectiveness of data driven approaches for better understanding the patterns in a sport like MMA. The UFC could leverage these data mining approaches for coming up with fairer matches (dismiss fights that match obvious winning and losing biases). I don’t favor this, but given many fans want to see knockouts, the UFC could even use these approaches to design fights that will likely avoid decisions or submissions.

Anyways, there’s so much more analysis I’ve done (and haven’t done) over this data. Will post more results when cycles permit. Stay tuned.

26 Comments

Filed under AI, Blog Stuff, Computer Science, Data Mining, Economics, Machine Learning, Research, Science, Statistics, Trends

Ranking High Schools Based On Outcomes

High school is arguably the most important phase of your education. Some families will move just to be in the district of the best ranked high school in the area. However, the factors that these rankings are based on, such as test scores, tuition amount, average class size, teacher to student ratio, location, etc. do not measure key outcomes such as what colleges or jobs the students get into.

Unfortunately, measuring outcomes is tough – there’s no data source that I know of that describes how all past high school students ended up. However, I thought it would be a fun experiment to approximate using LinkedIn data. I took eight top high schools in the Bay Area (see the table below) and ran a whole bunch of advanced LinkedIn search queries to find graduates from these high schools while also counting up their key outcomes like what colleges they graduated from, what companies they went on to work for, what industries are they in, what job titles have they earned, etc.

The results are quite interesting. Here are a few statistics:

College Statistics

  • The top 5 high schools that have the largest share of users going to top private schools (Ivy League’s + Stanford + Caltech + MIT) are (1) Harker (2) Gunn (3) Saratoga (4) Lynbrook (5) Bellarmine.
  • The top 5 high schools that have the largest share of users going to the top 3 UC’s (Berkeley, LA, San Diego) are (1) Mission (2) Gunn (3) Saratoga (4) Lynbrook (5) Leland.
  • Although Harker has the highest share of users going to top privates (30%), their share of users going to the top UC’s is below average. It’s worth noting that Harker’s tuition is the highest at $36K a year.
  • Bellarmine, an all men’s high school with tuition of $15K a year, is below average in its share of users going on to top private universities as well as to the UC system.
  • Gunn has the highest share of users (11%) going on to Stanford. That’s more than 2x the second place high school (Harker).
  • Mission has the highest share of users (31%) going to the top 3 UC’s and to UC Berkeley alone (14%).

Career Statistics

  • In rank order (1) Saratoga (2) Bellarmine (3) Leland have the biggest share of users who hold job titles that allude to leadership positions (CEO, VP, Manager, etc.).
  • The highest share of lawyers comes from (1) Bellarmine (2) Lynbrook (3) Leland. Gunn has 0 lawyers and Harker is second lowest at 6%.
  • Saratoga has the best overall balance of users in each industry (median share of users).
  • Hardware is fading – 5 schools (Leland, Gunn, Harker, Mission, Lynbrook) have zero users in this industry.
  • Harker has the highest share of its users in the Internet, Financial, and Medical industries.
  • Harker has the lowest percentage of Engineers and below average share of users in the Software industry.
  • Gunn has the highest share of users in the Software and Media industries.
  • Harker high school is relatively new (formed in 1998), so its graduates are still early in the workforce. Leadership takes time to earn, so the leadership statistic is unfairly biased against Harker.

You can see all the stats I collected in the table below. Keep in mind that percentages correspond to the share of users from the high school that match that column’s criteria. Yellow highlights correspond to the best score; blue shaded boxes correspond to scores that are above average. There are quite a few caveats which I’ll note in more detail later, so take these results with a grain of salt. However, as someone who grew up in the Bay Area his whole life, I will say that many of these results make sense to me.

6 Comments

Filed under Blog Stuff, Data Mining, Education, Job Stuff, LinkedIn, Research, Science, Social, Statistics

An Evaluation of Google’s Realtime Search

How timely are the results returned from Google’s Realtime (RT) Search Engine? How often do Twitter results appear in these results? Over the weekend I developed a few basic experiments to find out and published the results below.

Key Findings

  • For location-based queries, there’s nearly a flip of a coin chance (43%) that a Twitter result will be the #1 ranked result.
  • For general knowledge queries, there’s a 23% chance that a Twitter result will be #1.
  • The newest Twitter results are usually 4 seconds old. The newest Web results are 10x older (41 seconds).
  • A top ranking Twitter result for a location-based query is usually 2 minutes old (compared with Web which is 22 minutes old – again nearly 10x older).
  • When Twitter results appear, at least one of them is in the top ranked position.

Experiment #1 – General Knowledge

I crawled 1,370 article titles from Wikipedia and ran each title as a query into Google RT search.

Market Shares

81% of all queries returned search results that included web page results
23% of all queries returned search results that included Twitter results
7% of all queries returned 0 search results

70% of all queries had a web page result in the #1 ranked position
When Twitter results appeared there was always at least one result in the #1 ranked position (so 23% of queries)

Time Lag

When a web page was the #1 ranked result, that result on average was 6736 seconds (or 1 hr and 52 minutes) old.
When a Tweet was the #1 ranked result, that result on average was 261 seconds (or 4 minutes and 21 seconds) old.

The average age of the top 10% newest web page results (across all queries) is 41 seconds
The average age of the top 10% newest Twitter results (across all queries) is 2 seconds

Tail

Query length was between 1 – 12 words (where 1-2 word long queries are most popular)
Worth noting that no Twitter results appear for queries with greater than 5 words

Experiment #2 – Location

I crawled 265 major populated U.S. cities from the U.S. Census Bureau and ran each city name as a query into Google RT search.

Market Shares

73% of all queries returned search results that included web page results
43% of all queries returned search results that included Twitter results
5% of all queries returned 0 search results

52% of all queries had a web page result in the #1 ranked position
When Twitter results appeared there was always at least one result in the #1 ranked position (so 43% of queries)

Time Lag

When a web page was the #1 ranked result, that result on average was 1341 seconds (or 22 minutes and 21 seconds) old.
When a Tweet was the #1 ranked result, that result on average was 138 seconds (or 2 minutes and 18 seconds) old.

The average age of the top 10% newest web page results (across all queries) is 41 seconds
The average age of the top 10% newest Twitter results (across all queries) is 4 seconds

Tail

Query length was between 1 – 3 words
Worth noting that no Twitter results appear for 3 word long queries

Implementation Details

  • Generated Wiki queries by running “site:en.wikipedia.org” searches on Google and Blekko, and extracting the titles (en.wikipedia.org/{title_is_here}) from the result links. Side point: I tried Bing but the result links had mostly one word long titles (Bing seems to really bias query length in their ranking) and I wanted more diversity to test out tail queries.
  • Crawled cities (for the location-based queries) from http://www.census.gov/popest/cities/tables/SUB-EST2009-01.csv

Caveats

  • I ran these experiments at 2:45a PST on a Monday. The location-based queries all relate to the U.S., so probably not many people were up at that time generating up-to-date information. The time lag stats could vary depending on when these experiments are run. I did, however, re-run the experiments in the late morning and didn’t see much difference in the timings.
  • I ran all queries through Google’s normal web search engine with ‘Latest’ on (in the left bar under Search Tools). These results are not exactly the same as those generated from the standalone Google Realtime Search portal, which seems to bias Tweets more while the ‘Latest’ results seems to find middle ground between real-time Twitter results and web page results. I used ‘Latest’ because it seems like it would be the most popular gateway to Google’s Realtime search results.

5 Comments

Filed under Blog Stuff, Computer Science, Data Mining, Google, Information Retrieval, Research, Search, Social, Statistics, Twitter, Wikipedia

Some Stats about Twitter’s Content

Near the end of July, I crawled a sample of ~10M tweets. On my way over from Open Hack Day NYC yesterday I finally got some time to do some preliminary analysis of this data. Several posts have analyzed Twitter’s traffic stats [TechCrunch] [Mashable] [zooie], so I thought I’d focus more on the content here.

Duplication

By compressing the data and comparing the before and after sizes, one can get a pretty decent understanding of the duplication factor. To do this, I extracted just the raw text messages, sorted them, and then ran gzip over the sorted set.

Compression ratio

>>> 284023259 / 739273532   # compressed bytes / original bytes

0.38419238171778614

Typically, for text compression, gzip-like programs can achieve around 50% without the sort (and sorting typically helps), and here we get 38%. A standard text corpus consists of much larger document sizes, so it’s interesting to see a similar or larger duplication factor for tweets.
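For reference, the measurement itself is only a few lines (a sketch – tweets.txt is a hypothetical file holding one raw message per line):

# Sketch: estimate duplication by comparing sizes before and after sort + gzip.
import gzip

with open('tweets.txt', 'rb') as f:   # hypothetical extracted message file
    messages = f.read().splitlines()
data = b'\n'.join(sorted(messages))   # sorting groups duplicates together
ratio = len(gzip.compress(data)) / len(data)
print(ratio)  # ~0.38 on the tweet corpus described above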

We can dive even deeper into this area by analyzing the term overlap statistics to measure near duplication, or messages that aren’t necessarily identical but are close enough.

To do this, I first cleaned the text (removed stopwords, stemmed terms, normalized case). Interestingly, after cleaning the text, the average number of tokens per message is just 6.28, or 2.5x the size of a standard web search query.

Then, I employed consistent term sampling to select N representatives for each cleaned message and coalesced the representatives together as a single key. By comparing the total number of unique keys to messages, one can infer the near duplication factor. Also, the higher the N, the higher the threshold is to match (so N >= 6, 6 being the average number of tokens per message, probably means that two messages that generate the same key are exact duplicates).

You’ll notice N >=6 converges around 84%, implying that after cleaning the text, 16% of the messages exactly match some other message. Additionally, when N = 2 (or requiring 2 / 6 tokens or 33% of the text on average) to match, 45% of the messages collide with other messages in the corpus. At N = 2, matching often means the messages discuss the same general topic, but aren’t close near duplicates.

N (term samples)   Unique Keys   Coverage
8                  8,548,695     0.8356
6                  8,512,672     0.8321
5                  8,476,590     0.8286
4                  8,366,391     0.8177
3                  8,098,400     0.7916
2                  5,716,566     0.5588
1                  1,013,783     0.0991
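The sampling step itself is essentially a min-hash style fingerprint. Here’s a sketch of one way to implement it (the toy messages and the md5-based sampling are my own stand-ins, not the original code):

# Sketch: consistent term sampling for near-duplicate detection.
# Keep the N tokens with the smallest hash values and join them into a key;
# messages that generate the same key are considered near duplicates.
import hashlib

def sample_key(tokens, n):
    picked = sorted(tokens, key=lambda t: hashlib.md5(t.encode()).hexdigest())[:n]
    return ' '.join(sorted(picked))

msgs = [['obama', 'speech', 'tonight', 'watch'],
        ['watch', 'obama', 'speech', 'live']]  # toy cleaned messages
keys = {sample_key(m, 2) for m in msgs}
print(len(keys), 'unique keys for', len(msgs), 'messages')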


URLs

URLs are present in ~18% of the tweets

Of those, ~65% of the URLs are unique

70K Unique Domains covering 2M URLs

Top Domains:

[‘bit.ly’, ‘tinyurl.com’, ‘twitpic.com’, ‘is.gd’, ‘myloc.me’, ‘ow.ly’, ‘ustre.am’, ‘cli.gs’, ‘tr.im’, ‘plurk.com’, ‘ff.im’, ‘tumblr.com’, ‘yfrog.com’, ‘140mafia.com’, ‘u.mavrev.com’, ‘twurl.nl’, ‘tweeterfollow.com’, ‘mypict.me’, ‘viagracan.com’, ‘vipfollowers.com’, ‘morefollowers.net’, ‘digg.com’, ‘tweeteradder.com’, ‘ping.fm’, ‘tiny.cc’, ‘followersnow.com’, ‘short.to’, ‘twit.ac’, ‘snipr.com’, ‘wefollow.com’, ‘tweet.sg’, ‘url4.eu’, ‘the-twitter-follow-train.info’, ‘fwix.com’, ‘budurl.com’, ‘su.pr’, ‘shar.es’, ‘tinychat.com’, ‘snipurl.com’, ‘loopt.us’, ‘migre.me’, ‘flic.kr’, ‘myspace.com’, ‘snurl.com’, ‘twitgoo.com’, ‘zshare.net’, ‘post.ly’, ‘bkite.com’, ‘yes.com’, ‘flickr.com’, ‘twitter.com’, ‘artistsforschapelle.com’, ‘140army.com’, ‘youtube.com’, ‘x.imeem.com’, ‘pic.gd’, ‘TwitterBackgrounds.com’, ‘raptr.com’, ‘twt.gs’, ‘twitthis.com’, ‘mobypicture.com’, ‘tobtr.com’, ‘ad.vu’, ‘sml.vg’, ‘rubyurl.com’, ‘tinylink.com’, ‘redirx.com’, ‘a2a.me’, ‘eCa.sh’, ‘vimeo.com’, ‘meadd.com’, ‘hotjobs.yahoo.com’, ‘doiop.com’, ‘myurl.in’, ‘urlpire.com’, ‘buzzup.com’, ‘freead.im’, ‘youradder.com’, ‘facebook.com’, ‘adf.ly’, ‘justin.tv’, ‘twitvid.com’, ‘adjix.com’, ‘twcauses.com’, ‘lkbk.nu’, ‘tlre.us’, ‘htxt.it’, ‘stickam.com’, ‘twubs.com’, ‘isy.gs’, ‘reverbnation.com’, ‘news.bbc.co.uk’, ‘sn.im’, ‘twibes.com’, ‘ustream.tv’, ‘trim.su’, ‘hashjobs.com’, ‘blogtv.com’, ‘jobs-cb.de’, ‘xsaimex.com’]

Retweets

~4% of messages are retweets

Replied @Users

~1M total replied-to users in this data set

37% of tweets contain ‘@x’ terms

Most Popular Replied-to Users (almost all celebrities):

[‘@mileycyrus’, ‘@jonasbrothers’, ‘@ddlovato’, ‘@mitchelmusso’, ‘@donniewahlberg’, ‘@souljaboytellem’, ‘@tommcfly’, ‘@addthis’, ‘@officialtila’, ‘@johncmayer’, ‘@shanedawson’, ‘@bowwow614’, ‘@jordanknight’, ‘@ryanseacrest’, ‘@perezhilton’, ‘@jonathanrknight’, ‘@petewentz’, ‘@tweetmeme’, ‘@adamlambert’, ‘@david_henrie’, ‘@dealsplus’, ‘@dwighthoward’, ‘@iamdiddy’, ‘@lancearmstrong’, ‘@songzyuuup’, ‘@imeem’, ‘@blakeshelton’, ‘@dannymcfly’, ‘@lilduval’, ‘@selenagomez’, ‘@markhoppus’, ‘@yelyahwilliams’, ‘@therealpickler’, ‘@stephenfry’, ‘@mrtweet.’, ‘@taylorswift13’, ‘@michaelsarver1’, ‘@davidarchie’, ‘@the_real_shaq’, ‘@tyrese4real’, ‘@britneyspears’, ‘@106andpark’, ‘@ashleytisdale’, ‘@mariahcarey’, ‘@kimkardashian’, ‘@wale’, ‘@mashable’, ‘@programapanico’, ‘@therealjordin’, ‘@listensto’, ‘@misskeribaby’, ‘@alyssa_milano’, ‘@alexalltimelow’, ‘@aplusk’, ‘@thisisdavina’, ‘@breakingnews:’, ‘@peterfacinelli’, ‘@truebloodhbo’, ‘@mgiraudofficial’, ‘@tonyspallelli’, ‘@mtv’, ‘@jackalltimelow’, ‘@dfizzy’, ‘@youngq’, ‘@tomfelton’, ‘@pooch_dog’, ‘@jonaskevin’, ‘@princesammie’, ‘@nkotb’, ‘@christianpior’, ‘@cthagod’, ‘@johnlloydtaylor’, ‘@neilhimself’, ‘@moontweet’, ‘@katyperry’, ‘@danilogentili’, ‘@mchammer’, ‘@rainnwilson’, ‘@joeymcintyre’, ‘@30secondstomars’, ‘@phillyd’, ‘@heidimontag’, ‘@mrpeterandre’, ‘@andyclemmensen’, ‘@crystalchappell’, ‘@kevindurant35’, ‘@huckluciano’, ‘@dannygokey’, ‘@jaketaustin’, ‘@revrunwisdom’, ‘@jamesmoran’, ‘@musewire’, ‘@dannywood’, ‘@nickiminaj’, ‘@akgovsarahpalin’, ‘@terrencej106’, ‘@mashable:’, ‘@drewryanscott’, ‘@mrtweet’, ‘@necolebitchie’, ‘@lilduval:’, ‘@willie_day26’, ‘@kirstiealley’, ‘@betthegame’, ‘@radiomsn’, ‘@alancarr’, ‘@rafinhabastos’, ‘@krisallen4real’, ‘@iamjericho’, ‘@breakingnews’, ‘@babygirlparis’, ‘@ladygaga’, ‘@chris_daughtry’, ‘@hypem’, ‘@danecook’, ‘@imcudi’, ‘@jeepersmedia’, ‘@buckhollywood’, ‘@kimmyt22’, ‘@giulianarancic’, ‘@chrisbrogan’, ‘@nasa’, ‘@addtoany’, ‘@nickcarter’, ‘@debbiefletcher’, ‘@marcoluque’, ‘@shaundiviney’, ‘@ogochocinco’, ‘@twitter’, ‘@eddieizzard’, ‘@youngbillymays’, ‘@real_ron_artest’, ‘@pink’, ‘@laurenconrad’, ‘@rubarrichello’, ‘@ianjamespoulter’, ‘@liltwist’, ‘@teyanataylor’, ‘@dougiemcfly’, ‘@theellenshow’, ‘@robkardashian’, ‘@sherrieshepherd’, ‘@justinbieber’, ‘@paulaabdul’, ‘@jason_manford’, ‘@jaredleto’, ‘@tracecyrus’, ‘@itsonalexa’, ‘@ddlovato:’, ‘@khloekardashian’, ‘@revrunwisdom:’, ‘@solangeknowles’, ‘@allison4realzzz’, ‘@nickjonas’, ‘@reply’, ‘@anarbor’, ‘@donlemoncnn’, ‘@gfalcone601’, ‘@moonfrye’, ‘@symphnysldr’, ‘@iamspectacular’, ‘@honorsociety’, ‘@questlove’, ‘@guykawasaki’, ‘@dawnrichard’, ‘@_maxwell_’, ‘@somaya_reece’, ‘@mandyyjirouxx’, ‘@teemwilliams’, ‘@greggarbo’, ‘@pennjillette’, ‘@mikeyway’, ‘@matthardybrand’, ‘@iamjonwalker’, ‘@andyroddick’, ‘@kohnt01’, ‘@chris_gorham’, ‘@seankingston’, ‘@joshgroban’, ‘@mousebudden’, ‘@misskatieprice’, ‘@spencerpratt’, ‘@wilw’, ‘@jgshock’, ‘@swear_bot’, ‘@joelmadden’, ‘@techcrunch’, ‘@americanwomannn’, ‘@kelly__rowland’, ‘@mionzera’, ‘@astro_127’, ‘@_@’, ‘@spam’, ‘@sookiebontemps’, ‘@drakkardnoir’, ‘@noh8campaign’, ‘@kayako’, ‘@trvsbrkr’, ‘@qbkilla’, ‘@mw55’, ‘@guykawasaki:’, ‘@donttrythis’, ‘@cv31’, ‘@liljjdagreat’, ‘@tiamowry’, ‘@nickensimontwit’, ‘@holdemtalkradio’, ‘@bradiewebbstack’, ‘@nytimes’, ‘@riskybizness23’, ‘@radityadika’, ‘@adrienne_bailon’, ‘@riccklopes’, ‘@jessicasimpson’, ‘@sportsnation’, ‘@jasonbradbury’, ‘@huffingtonpost’, ‘@oceanup’, ‘@gilbirmingham’, ‘@iconic88’, ‘@the’, ‘@thebrandicyrus’, ‘@gordela’, ‘@thedebbyryan’, ‘@jessemccartney’, ‘@?’, 
‘@caiquenogueira’, ‘@celsoportiolli’, ‘@shontelle_layne’, ‘@calvinharris’, ‘@chattyman’, ‘@ali_sweeney’, ‘@anamariecox’, ‘@joshthomas87’, ‘@emilyosment’, ‘@nasa:’, ‘@sevinnyne6126’, ‘@thebiggerlights’, ‘@theboygeorge’, ‘@jbarsodmg’, ‘@goldenorckus’, ‘@warrenwhitlock’, ‘@bobbyedner’, ‘@myfabolouslife’, ‘@descargaoficial’, ‘@ochonflcinco85’, ‘@ninabrown’, ‘@billycurrington’, ‘@oprah’, ‘@junior_lima’, ‘@asherroth’, ‘@starbucks’, ‘@jason_pollock’, ‘@intanalwi’, ‘@harrislacewell’, ‘@serenajwilliams’, ‘@kevinruddpm’, ‘@bigbrotherhoh’, ‘@oliviamunn’, ‘@chamillionaire’, ‘@tamekaraymond’, ‘@teamwinnipeg’, ‘@littlefletcher’, ‘@piercethemind’, ‘@brookandthecity’, ‘@iranbaan:’, ‘@tonyrobbins’, ‘@maestro’, ‘@glennbeck’, ‘@1omarion’, ‘@nadhiyamali’, ‘@slimthugga’, ‘@jason_mraz’, ‘@profbrendi’, ‘@djaaries’, ‘@juanestwiter’, ‘@davegorman’, ‘@zackalltimelow’, ‘@mamajonas’, ‘@itschristablack’, ‘@skydiver’, ‘@gigva’, ‘@currensy_spitta’, ‘@paulwallbaby’, ‘@rpattzproject’, ‘@petewentz:’, ‘@rodrigovesgo’, ‘@drdrew’, ‘@sportsguy33’, ‘@cthagod:’, ‘@hollymadison123’, ‘@mjjnews’, ‘@itsbignicholas’, ‘@_supernatural_’, ‘@santoevandro’, ‘@demar_derozan’, ‘@marthastewart’, ‘@billganz62’, ‘@oodle’, ‘@davidleibrandt’]

Hashtags

~7% of messages contain hashtags

Total Unique Hashtags found: ~94k

Top Hashtags:

[‘#lies’, ‘#fb’, ‘#musicmonday’, ‘#truth’, ‘#iranelection’, ‘#moonfruit’, ‘#tendance’, ‘#jobs’, ‘#ihavetoadmit’, ‘#mariomarathon’, ‘#140mafia’, ‘#tcot’, ‘#zyngapirates’, ‘#followfriday’, ‘#spymaster’, ‘#ff’, ‘#1’, ‘#sotomayor’, ‘#turnon’, ‘#notagoodlook’, ‘#tweetmyjobs’, ‘#hiring:’, ‘#iran’, ‘#fun140’, ‘#jesus’, ‘#72b381.’, ‘#quote’, ‘#tinychat’, ‘#neda’, ‘#militarymon’, ‘#gr88’, ‘#trueblood’, ‘#fail’, ‘#news’, ‘#140army’, ‘#livestrong’, ‘#noh8’, ‘#wpc09’, ‘#music’, ‘#turnoff’, ‘#unacceptable’, ‘#twables’, ‘#masterchef’, ‘#noh84kradison’, ‘#writechat’, ‘#job’, ‘#squarespace’, ‘#michaeljackson’, ‘#2’, ‘#nothingpersonal’, ‘#iphone’, ‘#ala2009’, ‘#mj’, ‘#tdf’, ‘#blogtalkradio’, ‘#mlb’, ‘#1stdraftmovielines’, ‘#p2’, ‘#secretagent’, ‘#tlot’, ‘#72b381’, ‘#honduras’, ‘#twitter’, ‘#jtv’, ‘#tehran’, ‘#gorillapenis’, ‘#porn’, ‘#bb11’, ‘#sotoshow’, ‘#brazillovesatl’, ‘#google’, ‘#oneandother’, ‘#bb10’, ‘#chucknorris’, ‘#cmonbrazil’, ‘#agendasource’, ‘#travel’, ‘#ashes’, ‘#dumbledore’, ‘#freeschapelle’, ‘#tl’, ‘#dealsplus’, ‘#nsfw’, ‘#entourage’, ‘#tech’, ‘#hottest100’, ‘#3693dh…’, ‘#torchwood’, ‘#design’, ‘#teaparty’, ‘#love’, ‘#dontyouhate’, ‘#mileycyrus’, ‘#sgp’, ‘#harrypottersequels’, ‘#peteandinvisiblechildren’, ‘#stopretweets’, ‘#tscc’, ‘#wimbledon’, ‘#hive’, ‘#cubs’, ‘#3’, ‘#redsox’, ‘#photography’, ‘#voss’, ‘#snods’, ‘#lol’, ‘#socialmedia’, ‘#gop’, ‘#health’, ‘#esriuc’, ‘#green’, ‘#follow’, ‘#echo!’, ‘#obama’, ‘#digg’, ‘#shazam’, ‘#hhrs’, ‘#video’, ‘#moonfruit.’, ‘#swineflu’, ‘#politics’, ‘#ebuyer683’, ‘#umad’, ‘#quizdostandup’, ‘#thankyoumichael’, ‘#blogchat’, ‘#wordpress’, ‘#3693dh’, ‘#haiku’, ‘#ttparty’, ‘#lastfm:’, ‘#healthcare’, ‘#hcr’, ‘#ecgc’, ‘#seo’, ‘#apple’, ‘#chuck’, ‘#wine’, ‘#sammie’, ‘#h1n1’, ‘#marketing’, ‘#twitition’, ‘#happybirthdaymitchel18’, ‘#cnn’, ‘#lie’, ‘#rt:’, ‘#art’, ‘#nasa’, ‘#blog’, ‘#quotes’, ‘#bruno’, ‘#business’, ‘#palin’, ‘#mw2’, ‘#hcsm’, ‘#harrypotter’, ‘#4’, ‘#lastfm’, ‘#askclegg’, ‘#photo’, ‘#jobfeedr’, ‘#lgbt’, ‘#lies:’, ‘#ihavetoadmit.i’, ‘#jamlegend,’, ‘#truthbetold’, ‘#mcfly’, ‘#microsoft’, ‘#fashion’, ‘#tweetphoto’, ‘#ebuyer167201’, ‘#noh84adison’, ‘#5’, ‘#mets’, ‘#china’, ‘#bigprize’, ‘#whythehell’, ‘#money’, ‘#sophiasheart’, ‘#finance’, ‘#michael’, ‘#f1’, ‘#adamlambert100k’, ‘#web’, ‘#urwashed’, ‘#moonfruit!’, ‘#1:’, ‘#kayako’, ‘#lies.’, ‘#thankyouaaron’, ‘#food’, ‘#wow’, ‘#moonfruit,’, ‘#facebook’, ‘#ebuyer291’, ‘#ecomonday’, ‘#ihave’, ‘#happybdaydenise’, ‘#postcrossing’, ‘#ichc’, ‘#912’, ‘#demilovatolive’, ‘#gijoemoviefan’, ‘#funny’, ‘#media’, ‘#meowmonday’, ‘#israel’, ‘#blogger’, ‘#forasarney’, ‘#tv’, ‘#topgear’, ‘#chrisisadouche’, ‘#stlcards’, ‘#wec09’, ‘#forex’, ‘#aots1000’, ‘#celebrity’, ‘#dwarffilmtitles’, ‘#6’, ‘#yeg’, ‘#slaughterhouse’, ‘#nfl’, ‘#photog’, ‘#ny’, ‘#firstdraftmovies’, ‘#ufc’, ‘#reddit’, ‘#free’, ‘#iwish’, ‘#etsy’, ‘#rulez’, ‘#sports’, ‘#icmillion’, ‘#mmot’, ‘#webdesign’, ‘#deals’, ‘#moonfruit?’, ‘#pawpawty’, ‘#twitterfahndung’, ‘#billymaystribute’, ‘#sytycd’, ‘#runkeeper’, ‘#scotus’, ‘#yoconfieso’, ‘#mariomarathon,’, ‘#musicmondays’, ‘#lies,’, ‘#findbob’, ‘#realestate’, ‘#sohrab’, ‘#sales’, ‘#metal’, ‘#runescape’, ‘#hypem’, ‘#threadless’, ‘#gay’, ‘#isyouserious’, ‘#hollywood,’, ‘#2:’, ‘#ca,’, ‘#golf’, ‘#diadorock’, ‘#newyork,’, ‘#meteor’, ‘#dailyquestion’, ‘#photoshop’, ‘#saveiantojones’, ‘#musicmonday:’, ‘#rock’, ‘#sex’, ‘#mlbfutures’, ‘#ilove’, ‘#mikemozart’, ‘#nascar’, ‘#indico’, ‘#crossfitgames’, ‘#gratitude’, ‘#quote:’, ‘#creativetechs’, ‘#truth:’, ‘#sharepoint’, ‘#mkt’, ‘#why’, ‘#bigbrother’, ‘#tam7’, ‘#ihate’, 
‘#futureruby’, ‘#slickrick’, ‘#105.3’, ‘#youareinatl’, ‘#vegan’, ‘#dontletmefindout’, ‘#imustadmit’, ‘#7’, ‘#twitterafterdark’, ‘#sunnyfacts’, ‘#gilad’, ‘#japan’, ‘#iremember’, ‘#97.3’, ‘#puffdaddy’, ‘#blogher’, ‘#ade2009’, ‘#aaliyah’, ‘#alfredosms’, ‘#95.1’, ‘#truth,’, ‘#twine’, ‘#hiring’]

Questions

Hard to infer exactly whether a message is a question or not, so I ran a couple of different filters:

5W’s, H, ? present ANYWHERE in tweet:

0.102789281948 or 10%

5W’s, H first token or ? last token:

0.0238229662219 or 2%

Just ? ANYWHERE in tweet:

0.0040984928533 or 0.4%
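For the curious, the first two filters look roughly like this (a sketch of the heuristics, not the exact code I ran):

# Sketch: question heuristics over a tokenized tweet.
QW = {'who', 'what', 'when', 'where', 'why', 'how'}

def anywhere(tokens):   # 5W's, H, or '?' present anywhere in the tweet
    return any(t.lower() in QW or '?' in t for t in tokens)

def strict(tokens):     # 5W's / H as the first token, or '?' ending the last token
    return tokens[0].lower() in QW or tokens[-1].endswith('?')

print(anywhere('anyone know a good pizza place ?'.split()))  # True
print(strict('how do i change my avatar'.split()))           # True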

Users

Discovered ~2M unique users

Top Sending Users (many bots):

[‘followermonitor’, ‘Tweet_Words’, ‘currentcet’, ‘currentutc’, ‘whattimeisitnow’, ‘ItIsNow’, ‘ThinkingStiff’, ‘otvrecorder’, ‘delicious50’, ‘Porngus’, ‘craigslistjobs’, ‘GorPen’, ‘hashjobs’, ‘TransAlchemy2’, ‘bot_theta’, ‘CHRISVOSS’, ‘bot_iota’, ‘bot_kappa’, ‘TIPAS’, ‘VeolaJBanner’, ‘StacyDWatson’, ‘LMAObot’, ‘SarahJSlonecker’, ‘AllisonMRussell’, ‘bot_eta’, ‘SandraHOakley’, ‘bot_psi’, ‘bot_tau’, ‘LoreleiRMercer’, ‘bot_zeta’, ‘bot_gamma’, ‘bot_sigma’, ‘bot_lambda’, ‘bot_pi’, ‘bot_epsilon’, ‘bot_nu’, ‘bot_rho’, ‘bot_omicron’, ‘bot_khi’, ‘LindaTYoung’, ‘mensrightsindia’, ‘bot_omega’, ‘bot_ksi’, ‘bot_delta’, ‘bot_alpha’, ‘bot_phi’, ‘CindaDJenkins’, ‘bot_mu’, ‘ImogeneDPetit’, ‘bot_upsilon’, ‘OPENLIST_CA’, ‘openlist’, ‘isygs’, ‘dq_jumon’, ‘gamingscoop’, ‘MildredSLogan’, ‘ObiWanKenobi_’, ‘pulseSearch’, ‘MaryEVo’, ‘ImeldaGMcward’, ‘MaryJNewman’, ‘SharonTForde’, ‘LoriJCornelius’, ‘BrandyWPulliam’, ‘RhondaTLopez’, ‘AprilKOropeza’, ‘CarolETrotman’, ‘SusanATouvell’, ‘dinoperna’, ‘buzzurls’, ‘_Freelance_’, ‘DrSnooty’, ‘illstreet’, ‘bibliotaph_eyes’, ‘loc4lhost’, ‘bsiyo’, ‘BOTHOUSE’, ‘post_ads’, ‘qazkm’, ‘frugaldonkey’, ‘free_post’, ‘groovera’, ‘wonkawonkawonka’, ‘ForksGirlBella’, ‘casinopokera’, ‘dermdirectoryny’, ‘Yoowalk_chat’, ‘mstehr’, ‘hashgoogle’, ‘perry1949’, ‘ensiz_news’, ‘Bezplatno_net’, ‘timesmirror’, ‘work_freelance’, ‘cockbot’, ‘pdurham’, ‘bombtter_raw’, ‘ocha1’, ‘AlairAneko24’, ‘HaiIAmDelicious’, ‘Freshestjobs’, ‘fast_followers’, ‘LeadsForFree’, ‘RideOfYourLife’, ‘AlastairBotan30’, ‘helpmefast25’, ‘TheMLMWizard’, ‘uitrukken’, ‘adoptedALICE’, ‘TKATI’, ‘ezadsncash’, ‘tweetshelp’, ‘LAmetro_traffic’, ‘thinkpozzitive’, ‘StarrNeishaa’, ‘AldenCho36’, ‘JobHits’, ‘wootboot’, ‘smacula’, ‘faithclubdotnet’, ‘DmitriyVoronov’, ‘brownthumbgirl’, ‘NYCjobfeed’, ‘hfradiospacewx’, ‘FakeeKristenn’, ‘MLBDAILYTIMES’, ‘wildingp’, ‘JacksonsReview’, ‘EarthTimesPR’, ‘friedretweet’, ‘Wealthy23’, ‘RokpoolFM’, ‘HDOLLAZ’, ‘_MrSpacely’, ‘Bestdocnyc’, ‘Rabidgun’, ‘flygatwick’, ‘live_china’, ‘friendlinks’, ‘retweetinator’, ‘iamamro’, ‘thayferreira’, ‘AldisDai39’, ‘AndersHana60’, ‘nonstopNEWS’, ‘VivaLaCash’, ‘TravelNewsFeeds’, ‘vuelosplus’, ‘threeporcupines’, ‘DemiAuzziefan’, ‘worldofprint’, ‘KevinEdwardsJr’, ‘REDDITSPAMMOR’, ‘NatValentine’, ‘ChanelLebrun’, ‘nowbot’, ‘hollyswansonUK’, ‘youngrhome’, ‘M_Abricot’, ‘thefakemandyv’, ‘scrapbookingpas’, ‘Naughtytimes’, ‘Opcode1300_bot’, ‘tellsecret’, ‘tboogie937’, ‘Climber_IT’, ‘comlist’, ‘with_a_smile’, ‘USN_retired’, ‘Climber_EngJobs’, ‘Climber_Finance’, ‘Climber_HRJobs’, ‘intanalwi’, ‘Climber_Sales’, ‘nadhiyamali’, ‘wonderfulquotes’, ‘MRAustria’, ‘O2Q’, ‘GL0’, ‘SookieBonTemps’, ‘MRSchweiz’, ‘latinasabor’, ‘nineleal’, ‘casservice’, ‘AltonGin54’, ‘KulerFeed’, ‘_cesaum’, ‘HFMONAIR’, ‘DeeOnDreeYah’, ‘rockstalgica’, ‘iamword’, ‘rpattzproject’, ‘madblackcatcom’, ‘ftfradio’, ‘marciomtc’, ‘SocialNetCircus’, ‘AnotherYearOver’, ‘ichig’, ‘tcikcik’, ‘HelenaMarie210’, ‘mrbax0’, ‘SWBot’, ‘DayTrends’, ‘_Embry_Call_’, ‘eProducts24’, ‘The_Sims_3’, ‘tom_ssa’, ‘woxy_vintage’, ‘urbanmusic2000’, ‘dopeguhxfresh’, ‘erections’, ‘DudeBroChill’, ‘lookingformoney’, ‘drnschneider’, ‘MosesMaimonides’, ’92Blues’, ‘elarmelar’, ‘rock937fm’, ‘sonicfm’, ‘erikadotnet’, ‘sky0311’, ‘weqx’, ‘brandamc’, ‘Hot106’, ‘woxy_live’, ‘ksopthecowboy’, ‘vixalius’, ‘cogourl’, ‘Cashintoday’, ‘Andrewdaflirt’, ‘oodle’, ‘mkephart25’, ‘doomed’, ‘spotifyuri’, ‘mangelat’, ‘Cody_K’, ‘swayswaystacey’, ‘KLLY953’, ‘onlaa’, ‘Ginger_Swan’, ‘Call_Embry’, ‘conservatweet’, ‘weerinlelystad’, ‘ruhanirabin’, ‘tmgadops’, ‘wakemeupinside1’, 
‘horaoficial’, ‘xstex’, ‘franzidee’, ‘tommytrc’, ‘khopmusic’, ‘tez19’, ‘GaryGotnought’, ‘UnemployKiller’, ‘felloff’, ‘Kalediscope’, ‘TheRealSherina’, ‘jasonsfreestuff’, ‘johnkennick’, ‘sel_gomezx3’, ‘OE3’, ‘AddisonMontg’, ‘_rosieCAKES’, ‘neownblog’, ‘PrinceP23’, ‘ontd_fluffy’, ‘USofAl’, ‘Kacizzle88’, ‘somalush’, ‘FrankieNichelle’, ‘jiva_music’, ‘itz_cookie’, ‘soundOfTheTone’, ‘knowheremom’, ‘Jayme1988’, ‘TrafficPilot’, ‘tweetalot’, ‘TheStation1610’, ‘lasvegasdivorce’, ‘1000_LINKS_NOW2’, ‘KeepOnTweeting’, ‘uFreelance’, ‘ChocoKouture’, ‘Magic983’, ‘SnarkySharky’, ‘agthekid’, ‘cashinnow’, ‘jamokie’, ‘jessicastanely’, ‘Q103Albany’, ‘GPGTwit’, ‘xAmberNicholex’, ‘wjtlplaylist’, ‘sjAimee’, ‘chrisduhhh’, ‘failbus’, ‘1stwave’, ‘RichardBejah’, ‘nyanko_love’]

Web Queries Overlap

How much overlap is there between tweets and trending web search queries?

I took the top trending queries during the days of my twitter crawl from Google Trends, then query expanded each trending query until the length was 6 tokens so as to equalize the average lengths. Then, I simply counted how many tweets match at least 2 (cleaned) tokens of any of these query-expanded trends:

0.0185654981775 or 2%
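The matching itself is simple token-set overlap (a sketch – the trend list and tweets below are toy examples, and the cleaning/expansion steps are assumed to have already happened):

# Sketch: count tweets sharing >= 2 cleaned tokens with any expanded trend.
def matches(tweet_tokens, trends, min_overlap=2):
    tw = set(tweet_tokens)
    return any(len(tw & set(t)) >= min_overlap for t in trends)

trends = [['iran', 'election', 'protest', 'tehran', 'news', 'rally']]  # toy expanded trend
tweets = [['iran', 'protest', 'tonight'], ['pizza', 'dinner', 'tonight']]
hits = sum(matches(t, trends) for t in tweets)
print(hits / len(tweets))  # fraction of tweets overlapping a trending query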

That’s it for now. I have some more stats but need a bit more time to clean those up before publishing here.

Notes

Can’t distribute my data set unfortunately, but it shouldn’t take too long to assemble a comparable set via Twitter’s spritzer feed – that’ll probably be more useful as it’ll be more up-to-date than the one I analyzed here. Feel free to pull my stats if you find them useful (the top hashtags and users are in JSON format).

10 Comments

Filed under Data Mining, Research, Search, Social, Statistics, Trends, Twitter

Build an Automatic Tagger in 200 lines with BOSS

My colleagues and I will be giving a talk on BOSS at Yahoo!’s Hack Day in NYC on October 9. To show developers the versatility of an open search API, I developed a simple toy example (see my past ones: TweetNews, Q&A) on the flight over that uses BOSS to generate data for training a machine learned text classifier. The resulting application basically takes two tags, some text, and tells you which tag best classifies that text. For example, you can ask the system if some piece of text is more liberal or conservative.

How does it work? BOSS offers delicious metadata for many search results that have been saved in delicious. This includes top tags, their frequencies, and the number of user saves. Additionally, BOSS makes available an option to retrieve extended search result abstracts. So, to generate a training set, I first build up a query list (100 delicious popular tags), search each query through BOSS (asking for 500 results per), and filter the results to just those that have delicious tags.

Basically, the collection logically looks like this:

[(result_1, delicious_tags), (result_2, delicious_tags) …]

Then, I invert the collection on the tags while retaining each result’s extended abstract and title fields (concatenated together).

This logically looks like this now:

[(tag_1, result_1.abstract + result_1.title), (tag_2, result_1.abstract + result_1.title), …, (tag_1, result_2.abstract + result_2.title), (tag_2, result_2.abstract + result_2.title) …]

To build a model comparing 2 tags, the system selects pairs from the above collection that have matching tags, converts the abstract + title text into features, and then passes the resulting pairs over to LibSVM to train a binary classification model.
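In code, the inversion is a couple of lines (a sketch – the result dictionaries below stand in for parsed BOSS responses):

# Sketch: invert (result, delicious_tags) pairs into (tag, text) pairs.
results = [
    {'title': 'Health care bill', 'abstract': '...', 'tags': ['liberal', 'politics']},
    {'title': 'Tax cut op-ed', 'abstract': '...', 'tags': ['conservative']},
]  # hypothetical parsed BOSS results with delicious tags

pairs = [(tag, r['abstract'] + ' ' + r['title'])
         for r in results for tag in r['tags']]
# pairs whose tags match the two requested labels then become the
# labeled instances handed off to LibSVM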

Here’s how it works:

tagger viksi$ python gen_training_test_set.py liberal conservative

tagger viksi$ python autosvm.py training_data.txt test_data.txt

__Searching / Training Best Model

____Trained A Better Model: 60.5263

____Trained A Better Model: 68.4211

__Predicting Test Data

__Evaluation

____Right: 16

____Wrong: 4

____Total: 20

____Accuracy: 0.800000

gen_training_test_set finds the pairs with matching tags and splits those results into a training set (80% of the pairs) and a test set (20%), saving the data as training_data.txt and test_data.txt respectively. autosvm learns the best model (brute forcing the parameters for you – this could be handy by itself as a general learning tool) and then applies it to the test set, reporting how well it did. In the above case, the system achieved 80% accuracy over 20 test instances.

Here’s another way to use it:

tagger viksi$ python classify.py apple microsoft bill gates steve ballmer windows vista xp

microsoft

tagger viksi$ python classify.py apple microsoft steve jobs ipod iphone macbook

apple

classify combines the above steps into an application that, given two tags and some text, will return which tag more likely describes the text. Or, in command line form, ‘python classify.py [tag1] [tag2] [some free text]’ => ‘tag1’ or ‘tag2’

My main goal here is not to build a perfect experiment or classifier (see caveats below), but to show a proof of concept of how BOSS or open search can be leveraged to build intelligent applications. BOSS isn’t just a search API, but really a general data API for powering any application that needs to party on a lot of the world’s knowledge.

I’ve open sourced the code here:

http://github.com/zooie/tagger

Caveats

Although the total lines of code is ~200, the system is fairly state-of-the-art as it employs LibSVM for its learning model. However, this classifier setup has several caveats due to my time constraints and goals, as my main intention for this example was to show the awesomeness of the BOSS data. For example, training and testing on abstracts and titles means the top features will probably be inclusive of the query, so the test set may be fairly easy to score well on and may not be representative of real input data. I did later add code to remove query related features from the test set, and the accuracy seemed to dip just slightly. For classify.py, the ‘some free text’ input needs to be fairly large (about an extended abstract’s size) to be more accurate.

Another caveat is what happens when both tags have been used to label a particular search result. The current system may only choose one tag, which may incur an error depending on what’s selected in the test set. Furthermore, the features I’m using are super simple and could be greatly improved with TFIDF scaling, normalization, feature selection (mutual information gain), etc. Also, more training / test instances (with a check on the distribution of the labels), baselines and evaluation measures should be tested.

I could have made this code a lot cleaner and shorter if I just used LibSVM’s python interface, but I for some reason forgot about that and wrote up scripts that parsed the stdout messages of the binaries to get something working fast (but dirty).

Leave a comment

Filed under AI, Boss, Code, CS, Data Mining, delicious, Information Retrieval, Machine Learning, Open Source, Research, Search, Social, Statistics, Talk, Tutorial, Yahoo

A Comparison of Open Source Search Engines

Updated: sphinx setup wasn’t exactly ‘out of the box’. Sphinx searches the fastest now and its relevancy increased (charts updated below).

Motivation

Later this month we will be presenting a half day tutorial on Open Search at SIGIR. It’ll basically focus on how to use open source software and cloud services for building and quickly prototyping advanced search applications. Open Search isn’t just about building a Google-like search box on a free technology stack, but encouraging the community to extend and embrace search technology to improve the relevance of any application.

For example, one non-search application of BOSS leveraged the Spelling service to spell correct video comments before handing them off to their Spam filter. The Spelling correction process normalizes popular words that spammers intentionally misspell to get around spam models that rely on term statistics, and thus, can increase spam detection accuracy.

We have split up our upcoming talk into two sections:

  • Services: Open Search Web APIs (Yahoo! BOSS, Twitter, Bing, and Google AJAX Search), interesting mashup examples, ranking models and academic research that leverage or could benefit from such services.
  • Software: How to use popular open source packages for vertical indexing your own data.

While researching for the Software section, I was quite surprised by the number of open source vertical search solutions I found.

And I was even more surprised by the lack of comparisons between these solutions. Many of these platforms advertise their performance benchmarks, but they are in isolation, use different data sets, and seem to be more focused on speed as opposed to say relevance.

The best paper I could find that compared performance and relevance of many open source search engines was Middleton+Baeza’07, but the paper is quite old now and didn’t make its source code and data sets publicly available.

So, I developed a couple of fun, off the wall experiments to test (for building code examples – this is just a simple/quick evaluation and not for SIGIR – read disclaimer in the conclusion section) some of the popular vertical indexing solutions. Here’s a table of the platforms I selected to study, with some high level feature breakdowns:

High level feature comparison among the vertical search solutions I studied; the support rating and scale are based on information I collected from web sites and conversations (please feel free to comment). I tested each solution's latest stable release as of this week (Indri is TODO).

One key design decision I made was not to change any numerical tuning parameters. I really wanted to test “Out of the Box” performance to simulate the common developer scenario. Plus, it takes forever to optimize parameters fairly across multiple platforms and different data sets esp. for an over-the-weekend benchmark (see disclaimer in the Conclusion section).

Also, I tried my best to write each experiment natively for each platform using the expected library routines or binary commands.

Twitter Experiment

For the first experiment, I wanted to see how well these platforms index Twitter data. Twitter is becoming very mainstream, and its real time nature and brevity differs greatly from traditional web content (which these search platforms are overall more tailored for) so its data should make for some interesting experiments.

So I proceeded to crawl Twitter to generate a sample data set. After about a full day and night, I had downloaded ~1M tweets (~10/second).

But before indexing, I did some quick analysis of my acquired Twitter data set:

# of Tweets: 968,937

Indexable Text Size (user, name, text message): 92MB

Average Tweet Size: 12 words

Types of Tweets based on simple word filters:

Out of a 1M sample, what types of Tweets do we find? Unique Users means that there were ~600k users that authored all of the 1M tweets in this sample.

Very interesting stats here – especially the high percentage of tweets that seem to be asking questions. Could Twitter (or an application) better serve this need?

Here’s a table comparing the indexing performance over this Twitter data set across the select vertical search solutions:

Indexing 1M twitter messages on a variety of open source search solutions; measuring time and space for each.

Lucene was the only solution that produced an index smaller than the input data size. It shaves an additional 5 megabytes if run in optimize mode, but at the cost of adding another ten seconds to indexing. sphinx and zettair index the fastest. Interestingly, I ran zettair in big-and-fast mode (which sucks up 300+ megabytes of RAM) but it ran slower by 3 seconds (maybe because of the nature of tweets).

Xapian ran 5x slower than sqlite (which stores the raw input data in addition to the index) and produced the largest index file sizes. The default index_text method in Xapian stores positional information, which blew the index size up to 529 megabytes. One must use index_text_without_positions to make the size more reasonable. I checked my Xapian code against the examples and documentation to see if I was doing something wrong, but I couldn’t find any discrepancies.

I also included a column about development issues I encountered. zettair was by far the easiest to use (a simple command line) but required transforming the input data into a new format. I had some text issues with sqlite (which also needs to be recompiled with FTS3 enabled) and sphinx, given their strict input constraints. sphinx also requires a conf file, which took some searching to find full examples of. Lucene, zettair, and Xapian were the most forgiving when it came to accepting text inputs (zero errors).

Measuring Relevancy: Medical Data Set

While this is a fun performance experiment for indexing short text, this test does not measure search performance and relevancy.

To measure relevancy, we need judgment data that tells us how relevant a document result is to a query. The best data set I could find that was publicly available for download (almost all of them require mailing in CD’s) was from the TREC-9 Filtering track, which provides a collection of 196,403 medical journal references – totaling ~300MB of indexable text (titles, authors, abstracts, keywords) with an average of 215 tokens per record. More importantly, this data set provides judgment data for 63 query-like tasks in the form of “<task, document, 2|1|0 rating>” (2 is very relevant, 1 is somewhat relevant, 0 is not rated). An example task is “37 yr old man with sickle cell disease.” To turn this into a search benchmark, I treat these tasks as OR’ed queries. To measure relevancy, I compute the Average DCG across the 63 queries for results in positions 1-10.
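For reference, here’s a sketch of the DCG computation (using the common log2-discounted form – the exact discount variant I used may differ, and the ratings below are made up):

# Sketch: average DCG over the top 10 positions, given judged ratings per query.
import math

def dcg_at_10(ratings):   # ratings (2, 1 or 0) of the results at positions 1..10
    return sum(r / math.log2(i + 2) for i, r in enumerate(ratings[:10]))

# Hypothetical judgments for two of the 63 query tasks:
queries = [[2, 2, 1, 0, 1, 0, 0, 2, 0, 0],
           [1, 0, 0, 2, 0, 0, 0, 0, 1, 0]]
print(sum(dcg_at_10(q) for q in queries) / len(queries))  # Average DCG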

Performance and Relevancy marks on the TREC OHSUMED Data Set; Lucene is the smallest, most relevant and fastest to search; Xapian is very close to Lucene on the search side but 3x slower on indexing and 4x bigger in index space; zettair is the fastest indexer.

With this larger data set (3x larger than the Twitter one), we see zettair's indexing performance improve (which makes sense, as it's designed more for larger corpora). zettair's measured search speed is probably a bit understated, since its search command line utility prints some unnecessary stats. For multi-searching in sphinx, I developed a Java client (with the hope of making it competitive with Lucene, the one to beat) which connects to the sphinx searchd server via a socket (that's the API model in their examples). sphinx returned searches the fastest, ~3x faster than Lucene, and its indexing time was also on par with zettair's. Lucene obtained the highest relevance and the smallest index size. Its index time could probably be improved by fiddling with its merge parameters, but I wanted to avoid numerical adjustments in this evaluation. Xapian has very similar search performance to Lucene but with significant indexing costs (both time and space are more than 3x Lucene's). sqlite has the worst relevance because it doesn't sort by relevance, nor does it seem to expose a relevance score to ORDER BY.

Conclusion & Downloads

Based on these preliminary results and anecdotal information I've collected from the web and from people in the field (with more emphasis on the latter), I would probably recommend Lucene for many vertical search indexing applications, especially if you need something that runs decently well out of the box (which is mainly what I'm evaluating here) and has community support. Keep in mind that Lucene is an IR library; use a wrapper platform like Solr with Nutch if you need all the search dressings (snippets, crawlers, servlets).

Keep in mind that these experiments are still very early (done on a weekend budget) and can and should be improved greatly with bigger and better data sets, tuned implementations, and community support (I'd be the first to say these are far from perfect, which is why I open sourced my code below). It's pretty hard to make a benchmark that everybody likes (especially in this space, where there haven't really been many … and I'm starting to see why :)), not necessarily because there are always winners/losers and biases in benchmarks, but because there are so many different types of data sets, platform APIs, and tuning parameters (at least databases support SQL!). This is just a start. I see this as a very evolutionary project that requires community support to get right. Take the results here for what they're worth, and still run your own tuned benchmarks.

To encourage further search development and benchmarks, I’ve open sourced all the code here:

http://github.com/zooie/opensearch/tree/master

Happy to post any new and interesting results.

146 Comments

Filed under Blog Stuff, Boss, Code, CS, Data Mining, Databases, Information Retrieval, Job Stuff, Open, Open Source, Performance, Research, Search, Statistics, Talk, Tutorial, Twitter

Is the Facebook Application Platform Fair?

Take a look at this stats deck from O’Reilly’s Graphing Social Patterns conference:

http://en.oreilly.com/gspeast2008/public/asset/attachment/2950

A fairly in-depth and recent [6/01/2008] analysis of application usage on Facebook and MySpace.

As expected, lots of power law behavior.

I found the slides describing churn to be pretty interesting. Since October 2007, nine of the top fifteen most popular applications are new. However, only three of those new applications debuted after March 2008. I expect the amount of churn in the top spots to continue to drop based on the recent declining active usage trends and Facebook’s efforts to curb application spam (new UI that puts applications in a separate profile tab, app module minimizing, viral friend messaging limits, security compliance, etc).

What I would find even more interesting is a study of the number of applications users install, and how those moving averages have changed over time. Say the typical user installs 4 applications. Once a user reaches that threshold, what's the churn like? Specifically, what are the chances that the user will add a new app? Or maybe an even better metric: how long does it take, and how does this length of time compare to when the user went from 1 app to 2, or from 2 apps to 3, etc.? Basically, what are the adoption rates and times as a function of current application count?

I believe it becomes harder to influence a user to add or replace an app as the number of apps the user already has grows. I think most users, without even knowing it, have a threshold for how many total apps they are willing to display on their profile, and that this threshold is based on an ongoing evaluation of the utility and efficiency of the page. Each app takes up real estate on the profile page, and a “rational” user will only show so many before page load times degrade and/or core modules (wall, general information, albums, networks) get drowned in clutter and become difficult for visitors to locate. Of course, social networks like MySpace, which place very minimal constraints on profile page design, prove that most users are irrational 😉 – but it's this design control that greatly helped Facebook dominate the market, IMO.

If this is true, then it means that first movers really, really win in the Facebook apps world. Companies like Slide and RockYou manage many of the top applications, and given the power law market share phenomenon, they control a majority stake of application usage and installs. Many of these companies had the early-bird advantage, and once winners, always winners: they acquire emerging applications and leverage their branding and existing audiences (a.k.a. monopoly power) to cross-promote potentially copy-cat applications faster and wider than the competition. Monopolies inside Facebook have unsettling ramifications, as they block newcomers from capturing profile space. If they fail to innovate (as most monopolies do), then next-gen application development may never get through.

Now, if users do have an application count threshold, and it becomes successively more difficult to replace or add an app as this count increases, then any apps developed now have a substantially smaller chance of gaining market share. If winners win, first movers reap, and churn becomes improbable over time, then the early top apps have most likely already filled up users' allocated app slots.

I find thinking of the profile page as a resource allocation problem rather fascinating. Essentially, there are finite resources on a page, and we expect rational users to allocate those resources so as to maximize utility for themselves and for others (a potential game theory link). Once users fill up these resources, human laziness kicks in. Another argument for improbable churn is that users who want to add new applications after hitting their resource limit will need to remove an existing app to make space. The standard for change is higher now: the user must compare the new app to an existing preferred app (which is probably a popular early-bird app that friends use), so the decision incurs a trade-off.

One could also argue that with more apps available now (the second slide shows that, despite sluggish usage, the number of apps being developed is still growing at an insane rate), users are burdened with more choices. Or one could argue that because most users have reached their app limit, and churn has thus become improbable, the discoverability of new apps among friends (a critical channel for adoption) also becomes improbable.

Under this theory, especially in context of Facebook’s current efforts and app stats, the growth of new app adoption in social networks will continue to slow down.

So what can be done here?

The platform needs to encourage more churn by building a fairer market that matches users to high quality apps that satisfy their expressed intents. At the end of the day, these applications are really just web pages, but unlike the web, they do not leverage important primitives like linking and meta tags. Search engines like Google and Yahoo use these features extensively to calculate authority and relevance. In the long run, as the number of sources increases, advanced ranking algorithms and marketplaces are necessary to scale and ensure fairness to worthy tail publishers. Maybe social networks should inherit these system properties to bolster their tail applications.

Also, Facebook needs to encourage users to vary or add more applications on their profile pages. Facebook's move to put applications in their own profile tab may very well achieve this goal, but at the cost of lowering their visibility.

Anyways, just some random thoughts about the current state of Facebook apps. It’ll be very interesting to see how their platform progresses and how it will be perceived by end users and developers in the future.

3 Comments

Filed under Economics, Facebook, Non-Technical-Read, Social, Statistics, Trends