Category Archives: Trends

Why Atlassian will be a $50B company in 10 years

An op-ed I wrote for VentureBeat:

Last week, Atlassian made a very smart move by acquiring Trello. While $425 million implies a high multiple (given Trello’s revenue run rate was around $10 million last year), I believe it positions Atlassian to become the next big enterprise software company. I project it will reach a $50 billion market cap in 10 years by taking over software for teams. Here are four reasons why:

Read more

Leave a comment

Filed under Entrepreneurship, News, Non-Technical-Read, Startups, Trends, VC

AI First, the Overhype and the Last Mile Problem

AI is hot, I mean really hot. VCs love it, pouring in over $1.5B in just the first half of this year. Consumer companies like Google and Facebook also love AI, with notable apps like News Feed, Messenger, Google Photos, Gmail and Search leveraging machine learning to improve their relevance. And it’s now spreading into the enterprise, with moves like Salesforce unveiling Einstein, Microsoft’s Cortana / Azure ML, Oracle with Intelligent App Cloud, SAP’s Application Intelligence, and Google with TensorFlow (and their TPUs).

As a founder of an emerging AI company in the enterprise space, I’ve been following these recent moves by the big titans closely because they put us (as well as many other ventures) in an interesting spot. How do we position ourselves and compete in this environment?

In this post, I’ll share some of my thoughts and experiences around the whole concept of AI-First, the “last mile” problems of AI that many companies ignore, the overhype issue that’s facing our industry today (especially as larger players enter the game), and my predictions for when we’ll reach mass AI adoption.

Defining AI-First vs. AI-Later

A few years ago, I wrote about the key tenets of building Predictive-First applications, something that’s synonymous with the idea of AI-First, which Google is pushing. A great example of Predictive-First is Pandora (disclosure: Infer customer). Pandora didn’t try to redo the music player UI — there were many services that did that, and arguably better. Instead, they focused on making their service intelligent by providing relevant recommendations. No need to build or manage playlists. This key differentiation led to their rise in popularity, and that differentiation depended on data intelligence that started on day one. Predictive wasn’t sprinkled on later (that’s AI-Later, not AI-First, and there’s a big difference … keep reading).

If you are building an AI-First application, you need to follow the data. And because you need a lot of data, you will likely gravitate toward integrating with big platforms (as in big companies with many customers) that have APIs to pull data from.

For example, a system like CRM.

There’s so much valuable data in a CRM system, but five years ago, pretty much no one was applying machine learning to this data to improve sales. The data was, and still is for many companies, untapped. There’s got to be more to CRM than basic data entry and reporting, right? If we could apply machine learning, and if it worked, it could drive more revenue for companies. Who would say no to this?

So naturally, we (Infer) went after CRM (Salesforce, Dynamics, SAP C4C), along with the marketing automation platforms (Marketo, Eloqua, Pardot, HubSpot) and even custom sales and marketing databases (via REST APIs). We helped usher in a new category around Predictive Sales and Marketing.

We can’t complain much: we’ve amassed the largest customer base in our space, and have published dozens of case studies showcasing customers achieving results like 9x improvements in conversion rates and 12x ROI via vastly better qualification and nurturing programs.

But it was hard to build our solutions, and it remains hard to do so at scale. It’s not because the data science is hard (although that’s an area we take pride in going deep on); it’s the end-to-end product and packaging that’s really tough to get right. We call this the last mile problem, and I believe it’s an issue for any AI product, whether in the enterprise or consumer space.

Now, with machine learning infrastructure out in the open, along with free documentation, how-to guides, online courses, open source libraries, cloud services and more, machine learning is being democratized.

Anyone can model data. Some do it better than others, especially those with more infrastructure (for deep learning and huge data sets) and a better understanding of the algorithms and the underlying data. You may occasionally get pretty close with off-the-shelf approaches, but it’s almost always better to optimize for a particular problem. By doing so, you’ll not only squeeze out better (or at least slightly better) performance, but the understanding you gain from going deep will help you generalize and handle new data inputs better, which is key for knowing how to explain, fix, tweak and train the model over time to maintain or improve performance.

But still, this isn’t the hardest part. This is the sexy, fun part (well, for the most part … the data cleaning and matching may or may not be, depending on who you talk to :).

The hardest part is creating stickiness.

The Last Mile of AI

How do you get regular business users to depend on your predictions, even though they won’t understand all of the science that went into calculating them? You want them to trust the predictions, to understand how to best leverage them to drive value, and to change their workflows to depend on them.

This is the last mile problem. It is a very hard problem and it’s a product problem, not a data scientist problem. Having an army of data scientists isn’t going to make this problem better. In fact, it may make it worse, as data scientists typically want to focus on modeling, which may lead to over-investing in that aspect versus thinking about the end-to-end user experience.

To solve last mile problems, vendors need to successfully tackle three critical components:

1)  Getting “predictive everywhere” with integrations

It’s very important to understand where the user needs their predictions and this may not be in just one system, but many. We had to provide open APIs and build direct integrations for Marketo, Eloqua, Salesforce, Microsoft Dynamics, HubSpot, Pardot, Google Analytics and Microsoft Power BI.

Integrating into these systems is not fun. Each one has its own challenges: how to push predictions into records without locking out users who are editing at the same time; how to pull all the behavioral activity data out to determine when a prospect will be ready to buy (without exceeding the API limits); how to populate predictions across millions of records in minutes, not hours; etc.

These are hard software and systems problems (99% perspiration). In fact, the integration work likely consumed more time than our modeling work.

This is what it means to be truly “predictive everywhere.” Some companies like Salesforce are touting this idea, but it’s closed off to their own stack. For specific solutions like predictive lead scoring, this falls apart quickly, because most mid-market and enterprise companies run lead scoring in marketing automation systems like Marketo, Eloqua and HubSpot.

Last mile here means you’re investing more in integrating predictions into other systems than in your own user experience or portal. You go to where the user already is; that’s how you get sticky, not by trying to create new behavior for them on your own site (even if you can make your site look way prettier and function better). What matters is stickiness. Period.

2)  Building trust

Trust is paramount to achieving success with predictive solutions. It doesn’t matter if your model works if the user doesn’t act on it or believe in it. A key area to establish trust around is the data, and specifically the external data (i.e., signals not in the CRM or marketing automation platforms, a big trick we employ to improve our models and to de-noise dirty CRM data).

Sometimes, customers want external signals that aren’t just useful for improving model performance. Signals like whether a business offers a Free Trial on their website might also play an important operational role in helping a company take different actions for specific types of leads or contacts. For example, with profiling and predictive scoring solutions, they could filter and define a segment, predict the winners from that group and prioritize personalized sales and marketing programs to target those prospects.

In addition to exposing our tens of thousands of external signals, another way we build trust is by making it easy and flexible to customize our solution to the unique needs and expectations of each customer. Some companies may need multiple models, split by region / market / product line (when there is enough training data), or “lenses” (essentially, normalizing against another model that has more data) when there isn’t enough data. They then need a system that guides them through those choices and tradeoffs. Some companies care about the timing of deals; they may have particular cycle times they want to optimize for, or they may want their predictions to bias towards higher deal size, higher LTV, etc.

Some customers want the models to update as they close more deals. This is known as retraining the model, but over-retraining can result in bad performance. For example, say you’re continuously and automatically retraining with every new example, but the customer is in the middle of a messy data migration. It would be better to wait until that migration completes to avoid incorrectly skewing the model during that period. What you need is model monitoring, which gauges live performance and notices dips, or opportunities to improve performance when there’s new data. The platform then alerts the vendor and the customer, and only then triggers a proper retraining.
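To make that concrete, here is a minimal sketch (not Infer’s actual system; the threshold, function names and callbacks are hypothetical) of gating retraining on live performance so a temporary data problem doesn’t silently skew the model:

```python
from sklearn.metrics import roc_auc_score

AUC_DROP_TOLERANCE = 0.05  # hypothetical tolerance before anyone gets alerted

def monitor_and_maybe_retrain(model, baseline_auc, new_X, new_y, retrain_fn, alert_fn):
    """Check live performance on newly labeled outcomes before retraining."""
    live_auc = roc_auc_score(new_y, model.predict_proba(new_X)[:, 1])
    if live_auc < baseline_auc - AUC_DROP_TOLERANCE:
        # Performance dipped (e.g. mid data migration): alert the vendor and
        # customer instead of blindly retraining on possibly bad data.
        alert_fn(f"Live AUC {live_auc:.2f} fell below baseline {baseline_auc:.2f}")
        return model
    # Performance looks healthy, so retraining on the new examples is safe.
    return retrain_fn(model, new_X, new_y)
```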

Additionally, keep in mind that not all predictions will be accurate, and the customer will sometimes see these errors. It’s important to provide them with options to report such feedback via an active process that actually results in improvements in the models. Customers expect their vendor to be deep on details like these. Remember, for many people AI still feels like voodoo, science fiction and too blackbox-like (despite the industry’s best efforts to visualize and explain models). Customers want transparent controls that support a variety of configurations in order to believe, and thus, operationalize a machine-learned model.

3)  Making predictive disappear with proven use cases

Finally, let’s talk about use cases and making predictive disappear in a product. This is a crucial dimension and a clear sign of a mature AI-First company. There are a lot of early startups selling AI as their product to business users. However, most business users don’t want (or shouldn’t want) AI; they want a solution to a problem. AI is not a solution, but an optimization technique. At Infer, we support three primary applications (or use cases) to help sales and marketing teams: Qualification, Nurturing and Net New. We provide workflows that you can install in your automation systems to leverage our predictive tech and make each of these use cases more intelligent. However, we could position and sell these apps without even mentioning the word predictive, because it’s all about the business value.

In our space, most VPs of Sales or Marketing don’t have Ph.D.s in computer science or statistics. They want more revenue, not a machine learning tutorial. Our pitch then goes something like this …

“Here are three apps for driving more revenue. Here’s how each app looks in our portal and here are the workflows in action in your automation systems … here are the ROI visualizations for each app … let’s run through a bunch of customer references and success studies for the apps that you care about. Oh, and our apps happen to leverage a variety of predictive models that we’ll expose to you too if you want to go deep on those.”

Predictive is core to the value but not what we lead with. Where we are different is in the lengths we go to guide our customers with real-world playbooks, to formulate and vet models that best serve their individual use cases, and to help them establish sticky workflows that drive consistent success. We’ll initially sell customers one application, and hopefully, over time, the depth of our use cases will impress them so much that we’ll cross-sell them into all three apps. This approach has been huge for us. It’s also been a major differentiator: we achieved our best-ever competitive win rate this year (despite 2016 being the most competitive) by talking less about predictive.

Vendors that are overdoing the predictive and AI talk are missing the point and don’t realize that data science is a behind-the-scenes optimization. Don’t get me wrong, it’s sexy tech, it’s a fun category to be in (certainly helps with engineering recruiting) and it makes for great marketing buzz, but that positioning is not terribly helpful in the later stages of a deal or for driving customer success.

The focus needs to be on the value. When I hear companies just talking about predictive, and not about value or use cases / applications, I think they’re playing a dangerous game for themselves as well as for the market. It hurts them because that’s not something you can differentiate on anymore (remember, anyone can model). Sure, your model may be better, but the end buyer can’t tell the difference, or may not be willing (or understand how) to run a rigorous evaluation to see those differences.

The Overhype Issue

Vendors in our space often over-promise and under-deliver, resulting in many churn cases, which, in turn, hurts the reputation of the predictive category overall. At first, this was just a problem with the startups in our space, but now we’re seeing it from the big companies as well. That’s even more dangerous, as they have bigger voice boxes and reach. It makes sense that the incumbents want to sprinkle AI-powered features into their existing products in order to quickly impact thousands of their customers. But with predictive, trust is paramount.

Historically, in the enterprise, the market has been accustomed to overhyped products that don’t ship for years from their initial marketing debuts. However, in this space, I’d argue that overhyping is the last thing you should do. You need to build trust and success first. You need to under-promise and over-deliver.

Can the Giants Really Go Deep on AI?

The key is to hyper focus on one end-to-end use case and go deep to start, do that well with a few customers, learn, repeat with more, and keep going. You can’t just usher out an AI solution to many business customers at once, although that temptation is there for a bigger company. Why only release something to 5% of your base when you can generate way more revenue if it’s rolled out to everyone? This forces a big company to build a more simplified, “checkbox” predictive solution for the sake of scale, but that won’t work for mid-market and enterprise companies, which need many more controls to address complex, but common, scenarios like multiple markets and objective targets.

Such a simplified approach caters better to smaller customers that desire turnkey products, but unlike non-predictive enterprise solutions, predictive solutions face a big problem with smaller companies: a data-limiting challenge. You need a lot of data for AI, and most small businesses don’t have enough transactions in their databases to machine-learn patterns from (I would also contend that most small companies shouldn’t be focusing on optimizing their sales and marketing functions anyway, but rather on building a product and a team).

So, inherently, AI is biased towards mid-market / enterprise accounts, but their demands are so particular that they need a deeper solution that’s harder to productize for thousands. Figuring out how to build such a scalable product is much better done within a startup vs. in a big company, given the incredible focus and patience that’s needed.

AI really does work for many applications, but more vendors need to get good at solving the last mile: the 80% that depends less on AI and more on building the vehicle that runs with AI. This is where emerging companies like Infer have an advantage. We have the patience, focus, and depth to solve these last mile problems end-to-end and to do it in a manner that’s open to every platform, not closed off to one company’s ecosystem. This matters (especially with respect to the sales and marketing space, in which almost every company runs a fragmented stack with many vendors).

It’s also much easier to solve these end-to-end problems without the legacy issues of an industry giant. At Infer, we started out with AI from the very beginning (AI-First), not AI-Later like most of these bigger companies. Many of them will encounter challenges when it comes to processing data in a way that’s amenable to modeling, monitoring, etc. We’re already seeing these large vendors having to forge big cloud partnerships to overhaul their backends in order to address their scaling issues. I actually think some of the marketing automation companies still won’t be able to improve their scale, given how dependent they are on legacy backend designs that weren’t meant to handle expensive data mining workloads.

Many of these companies will also need to curtail security requirements stemming from the days of moving companies over to the cloud. Some of their legacy security provisions may prevent them from even looking at or analyzing a customer’s data (which is obviously important for modeling).

When you solve one problem really well, the predictive piece almost disappears to the end user (like with our three applications). That’s the litmus test of a good AI-powered business application. But that’s not what we’re seeing from the big companies and most startups. It’s quite the opposite, in fact: we’re seeing more over-generalization.

They’re making machine learning feel like AWS infrastructure. Just build a model in their cloud and connect it somehow to your business database like CRM. After five years of experience in this game, I’ll bet our bank that this approach won’t result in sticky adoption. Machine learning is not like AWS, which you can just spin up and magically connect to some system. “It’s not commoditizable like EC2” (Prof. Manning at Stanford). It’s much more nuanced and personalized based on each use case. And this approach doesn’t address the last mile problems, which are harder and typically more expensive than the modeling part!

From AI Hype to Mass Adoption

There aren’t yet thousands of companies running their growth with AI. It will take time, just like it took Eloqua and Marketo time to build up the marketing automation category. We’re grateful that the bigger companies like Microsoft, Oracle, Salesforce, Adobe, IBM and SAP are helping market this industry better than we could ever do.

I strongly believe every company will be using predictive to drive growth within the next 10 years. It just doesn’t make sense not to, when we can get a company up and running in a week, show them the ROI value via simulations, and only then ask them to pay for it. Additionally, there are a variety of lightweight ways to leverage predictive for growth (such as powering key forecasting metrics and dashboards) that don’t require process changes if you’re in the middle of org changes or data migrations.

In an AI-First world, every business must ask the question: What if our competitor is using predictive and achieving 3x better conversion rates as a result? The solution is simple: adopt AI as well and prop up the arms race.

I encourage all emerging AI companies to remain heads down and focus on customer success and last mile product problems. Go deep, iterate with a few companies and grow the base wisely. Under-promise and over-deliver. Let the bigger companies pay for your marketing with their big voice boxes, which they’re really flexing now. In doing so, you’ll likely succeed beyond measure, and who knows, we may even replace the incumbents in the process.

3 Comments

Filed under AI, Blog Stuff, Entrepreneurship, Google, Machine Learning, Microsoft, Non-Technical-Read, Trends, VC

Amazon, Google and Apple vs the Big 5 Unicorns on Hiring and Churn

I’ve received multiple requests to analyze employee churn and new hiring rates for big companies and unicorns with the approach I took earlier for studying engineering and sales retention rates. I figured I’d give it a shot – and combine all of the key metrics in one chart …

[Chart: Amazon, Google and Apple vs. the Big 5 unicorns on retention and hiring]

How to read this:

The blue bars represent the number of expected new hires that particular company will make in a 30-day (one-month) period. The black bars (negative values) indicate how many employees will churn in a one-month period. The orange line (the topmost numerical labels correspond to the orange line plot) represents the net change in hires per month (new hires less churn). The companies are ranked by churn from left to right in descending order (so highest churn is on the left).

As you can see in the chart, the big three companies included in this analysis are Amazon, Apple and Google. The unicorns are Uber, Lyft, Airbnb, Pinterest and Snapchat. “Big 5” combines these unicorns together as if they were one whole company. Also note, this is looking at employees worldwide with any job title.

Key Insights:

  1. Apple is not hiring enough new heads when compared with Amazon and Google. In fact, the Big 5 unicorns combined will hire more net heads than Apple, with almost 50% less employee churn.
  2. Amazon’s churn is the highest – losing a little over 10 people a day. However, this is not bad relatively speaking – Google loses 8-9 people a day, and Apple is a tad over 9 a day (and Amazon has 36% more employees than Google). Given the recent press bashing Amazon’s culture and the periodic press envying Google’s great benefits, their retention rates tell a different story – it’s closer to a wash. Big tech companies with great talent seem to churn people at similarly high rates regardless (I have some more thoughts on this but will save those for another post).
  3. At these current rates, all of the companies here (collectively) will increase their employee size by 20K (19,414 to be precise) heads by year end (this is new hires less churn). That’s a measly 5% increase in their current collective employee size – and this is across the Big 3 Tech Companies and Big 5 Unicorns.
  4. Let’s compare Amazon to the Big 5 Unicorns. The Big 5 will hire 79% as many incremental heads as Amazon in a month, even though their collective employee size is 24% that of Amazon’s. Amazon has been in business for much longer (2-3x the days since incorporation), and the Big 5’s churn is 43% of Amazon’s figure – both factors contribute to the closeness in the incremental head rate between the two.

Want more details?

How did I calculate these figures? Take a look at my previous post on engineering retention for more details. The same caveats listed there apply here, and then some (such as how this depends on LinkedIn participation rates, which may differ considerably internationally compared to the US market that my previous posts exclusively focused on). Feel free to connect or email me if you have any questions or feedback.


Leave a comment

Filed under Data Mining, Economics, Entrepreneurship, Google, LinkedIn, Management, Non-Technical-Read, Research, Startups, Statistics, Trends

Ranking Companies on Sales Culture & Retention

A company’s sales retention rate is a very important indicator of business health. If you have a good gauge on this, you can better answer questions such as: should I join that company’s sales department, will I be able to progress up the ladder, are reps hitting their numbers, is the company providing effective training, should I invest money in this business, etc. But how does one measure this rate, especially from an outside vantage point? This is where LinkedIn comes to the rescue. I essentially cross-applied the approach I took to measuring engineering retention to sales.

[Chart: technology companies ranked by sales churn and retention]

This chart ranks several key technology companies in reverse order of sales churn – the higher on the chart (or the longer the bar), the higher the churn (so from worst at the top to best at the bottom).

So how are we defining sales churn here? I calculated the measurement as follows: I took the number of people who have ever churned from a sales role at the company and divided that by the number of days since incorporation for that respective company (call this Churn Per Day), and then I computed the ratio of how many sales people will churn in one year (the run rate, i.e. Churn Per Day * 365) over the number of current sales people employed.

For example, if you look at the top row, which is Zenefits, the value is 0.40 – which means that 40% of the current sales team will churn in a one-year period. In order to maintain that sales team size and corresponding revenue, the company will need to hire 40% of their team – and sooner than in a year, both because that churn likely spreads throughout the year and because of new-hire ramping periods (if you’re churning a ramped rep and it takes one quarter to ramp a new sales rep, then you need to hire a new head at least one quarter beforehand to avoid a revenue dip).
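For clarity, here is a minimal Python sketch of that calculation; the input numbers below are hypothetical, not the actual LinkedIn counts behind the chart:

```python
from datetime import date

def sales_churn_run_rate(total_sales_churned, incorporation_date,
                         current_sales_headcount, today=None):
    """Fraction of the current sales team expected to churn over one year."""
    today = today or date.today()
    days_since_incorporation = (today - incorporation_date).days
    churn_per_day = total_sales_churned / days_since_incorporation
    return (churn_per_day * 365) / current_sales_headcount

# Hypothetical inputs, just to illustrate the formula.
rate = sales_churn_run_rate(total_sales_churned=120,
                            incorporation_date=date(2013, 1, 1),
                            current_sales_headcount=100,
                            today=date(2015, 12, 31))
print(f"{rate:.0%} of the current sales team will churn within a year")
```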

A few more notes:

The color saturation indicates Churn Per Day – the darker the color, the higher the Churn Per Day.

Caveats listed in the previous post on engineering retention apply to this analysis too.

Leave a comment

Filed under Data Mining, Economics, Enterprise, Entrepreneurship, Job Stuff, LinkedIn, Non-Technical-Read, Startups, Statistics, Trends, Venture Capital

Top Tech Companies Ranked By Engineering Retention

(TL;DR) Here’s the ranking going from top to bottom (so higher / longer the better):

[Chart: top tech companies ranked by engineering retention]

How did you measure this?

By running advanced LinkedIn searches and counting up the hits. Specifically, for each company, at their headquarters location only, I searched for profiles that were or are software engineers and had at least 1+ years of experience. Then I filtered these results in two ways:

1) Counting how many of those profiles used to work at the company in question (and not currently). Call this result Past Not Current Count.

2) Separately (not applying the above filter), filtering to those who are currently working at the company for at least 1+ years. Call this Current Count.

I also computed the number of days since incorporation for each respective company to be able to compute Churn Per Day – which is simply dividing Past Not Current Count by the number of days since incorporation.

Then I took this rate and computed how long, in years, it would take for each company to churn through all of their Current Count – the current heads who were or are software engineers and who’ve been with the company for at least 1 year (those who possess the most tribal wisdom and arguably deserve more retention benefits). Call this the Wipeout Period (in years) figure. This is what’s plotted in the chart above and is represented by the size of the bars – so the longer, the better for a company.
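Here is a small sketch of that arithmetic, using hypothetical counts only, to show how Churn Per Day turns into a Wipeout Period:

```python
def wipeout_period_years(past_not_current_count, current_count, days_since_incorporation):
    """Years to churn through all current 1+ year engineers at the historical rate."""
    churn_per_day = past_not_current_count / days_since_incorporation
    return current_count / (churn_per_day * 365)

# Hypothetical example: a mature company with heavy historical churn.
years = wipeout_period_years(past_not_current_count=3000,
                             current_count=2000,
                             days_since_incorporation=7300)
print(f"Wipeout period: {years:.1f} years")  # ~13.3 years
```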

What does the color hue indicate?

The Churn Per Day (described in the previous answer). The darker the color the higher the churn rate.

Who’s safe and who’s at risk?

I would think a wipeout period under 10 years (especially if you’re a larger, mature company) would be very scary.

In general (disclaimer: this is subjective, and I’d like to run it over more comps), greater than 20 years feels safe. But if you’re dark green (and hence experience more churn per day), then in order to keep your wipeout period long you need to be hiring many new engineering heads constantly (and you may not always be hot enough in tech to maintain such a hiring pace!).

What are the caveats with this analysis?

There are several, but to mention a few:

Past Not Current Count biases against older companies – for example, Microsoft has had more total churn than its number of present heads because it has been in business for a long time.

I needed more precise filtering options than what was available from LinkedIn to properly remove software internships (although one could argue that’s still valid churn; it means the company wasn’t able to pipeline them into another internship or full-time position), as well as to ensure that the Past Not Current Count factored in only software engineers at the time they were working at that company. So, given the lack of these filters, a better description for the above chart would be Ranking Retention of Folks with Software Experience.

Also, this analysis assumes the Churn Per Day figure is the same for all folks currently 1+ years at their respective company, even though it’s likely that the churn rate differs depending on the number of years you’ve been at the company (I’m essentially assuming it’s a wash – that the distributions of the historical Past Not Current vs. Current are similar).

1 Comment

Filed under Blog Stuff, Computer Science, Data Mining, Entrepreneurship, Job Stuff, LinkedIn, Management, Non-Technical-Read, Research, Statistics, Trends, VC

Betting on UFC Fights – A Statistical Data Analysis

Mixed Martial Arts (MMA) is an incredibly entertaining and technical sport to watch. It’s become one of the fastest growing sports in the world. I’ve been following MMA organizations like the Ultimate Fighting Championship (UFC) for almost eight years now, and in that time have developed a great appreciation for MMA techniques. After watching dozens of fights, you begin to pick up on what moves win and when, and spot strengths and weaknesses in certain fighters. However, I’ve always wanted to test my knowledge against the actual stats – for example, do accomplished wrestlers really beat fighters with little wrestling experience?

To do this, we need fight data, so I crawled and parsed all the MMA fights from Sherdog.com. This data includes fighter profiles (birth date, weight, height, disciplines, training camp, location) and fight records (challenger, opponent, time, round, outcome, event). After some basic data cleaning, I had a dataset of 11,886 fight records, 1,390 of which correspond to the UFC.

I then trained a random forest classifier on this data to see if a state-of-the-art machine learning model could identify any winning and losing characteristics. With 10-fold cross-validation, the resulting model scored a surprisingly decent AUC of 0.69; an AUC closer to 0.5 would indicate that the model can’t predict winning fights any better than random or fair coin flips.
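For those curious, a minimal sketch of that modeling step looks roughly like the following. It assumes the fight records have already been flattened into a numeric feature matrix X (fighter and opponent attributes, records, etc.) with y = 1 when the challenger won, so the feature engineering here is an assumption; only the random forest plus 10-fold cross-validated AUC mirrors what’s described above:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_fight_model(X, y):
    """10-fold cross-validated AUC; ~0.5 would mean no better than coin flips."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    aucs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    return aucs.mean()
```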

So there may be interesting patterns in this data … Feeling motivated, I ran exhaustive searches over the data to find feature combinations that indicate winning or losing behaviors. Many hours later, several dozen such insights were found.

Here are the most interesting ones (stars indicate statistical significance at the 5% level):

Top UFC Insights

Fighters older than 32 years of age will more likely lose

This was validated in 173 out of 277 (62%) fights*

Fighters with more than 6 TKO victories fighting opponents older than 32 years of age will more likely win

This was validated in 47 out of 60 (78%) fights*

Fighters from Japan will more likely lose

This was validated in 36 out of 51 (71%) fights*

Fighters who have lost 2 or more KOs will more likely lose

This was validated in 54 out of 84 (64%) fights*

Fighters with 3x or more decision wins and are greater than 3% taller than their opponents will more likely win

This was validated in 32 out of 38 (84%) fights*

Fighters who have won 3x or more decisions than their opponent will more likely win

This was validated in 142 out of 235 (60%) fights*

Fighters with no wrestling background vs. fighters who do have one will more likely lose

This was validated in 136 out of 212 (64%) fights*

Fighters fighting opponents who have 3x or fewer decision wins and are on a 6 fight (or better) winning streak will more likely win

This was validated in 30 out of 39 (77%) fights*

Fighters younger than their opponents by 3 or more years in age will more likely win

This was validated in 324 out of 556 (58%) fights*

Fighters who haven’t fought in more than 210 days will more likely lose

This was validated in 162 out of 276 (59%) fights*

Fighters taller than their opponents by 3% will more likely win

This was validated in 159 out of 274 (58%) fights*

Fighters who have lost less by submission than their opponents will more likely win

This was validated in 295 out of 522 (57%) fights*

Fighters who have lost 6 or more fights will more likely lose

This was validated in 172 out of 291 (60%) fights*

Fighters who have 18 or more wins and never had a 2 fight losing streak will more likely win

This was validated in 79 out of 126 (63%) fights*

Fighters who have lost back to back fights will more likely lose

This was validated in 514 out of 906 (57%) fights*

Fighters with 0 TKO victories will more likely lose

This was validated in 90 out of 164 (55%) fights

Fighters fighting opponents out of Greg Jackson’s camp will more likely lose

This was validated in 38 out of 63 (60%) fights


Top Insights over All Fights

Fighters with 15 or more wins that have 50% less losses than their opponents will more likely win

This was validated in 239 out of 307 (78%) fights*

Fighters fighting American opponents will more likely win

This was validated in 803 out of 1303 (62%) fights*

Fighters with 2x more (or better) wins than their opponents and those opponents lost their last fights will more likely win

This was validated in 709 out of 1049 (68%) fights*

Fighters who’ve lost their last 4 fights in a row will more likely lose

This was validated in 345 out of 501 (68%) fights*

Fighters currently on a 5 fight (or better) winning streak will more likely win

This was validated in 1797 out of 2960 (61%) fights*

Fighters with 3x or more wins than their opponents will more likely win

This was validated in 2831 out of 4764 (59%) fights*

Fighters who have lost 7 or more times will more likely lose

This was validated in 2551 out of 4547 (56%) fights*

Fighters with no jiu jitsu in their background versus fighters who do have it will more likely lose

This was validated in 334 out of 568 (59%) fights*

Fighters who have lost by submission 5 or more times will more likely lose

This was validated in 1166 out of 1982 (59%) fights*

Fighters in the Middleweight division who fought their last fight more recently will more likely win

This was validated in 272 out of 446 (61%) fights*

Fighters in the Lightweight division fighting 6 foot tall fighters (or higher) will more likely win

This was validated in 50 out of 83 (60%) fights


Note – I separated UFC fights from all fights because regulations and rules can vary across MMA organizations.

Most of these insights are intuitive, except for maybe the last one and an earlier one which states that, 77% of the time, fighters beat opponents who are on 6 fight or better winning streaks but have 3x fewer decision wins.

Many of these insights demonstrate statistically significant winning biases. I couldn’t help but wonder – could we use these insights to effectively bet on UFC fights? For the sake of simplicity, what happens if we make bets based on just the very first insight which states that fighters older than 32 years old will more likely lose (with a 62% chance)?

To evaluate this betting rule, I pulled the most recent UFC fights where in each fight there’s a fighter that’s at least 33 years old. I found 52 such fights, spanning 2/5/2011 – 8/14/2011. I placed a $10K bet on the younger fighter in each of these fights.

Surprisingly, this rule calls 33 of these 52 fights correctly (63% – very close to the rule’s observed 62% overall win rate). Each fight called incorrectly results in a loss of $10,000, and for each of the fights called correctly I obtained the corresponding Bodog money line (betting odds) to compute the actual winning amount.

I’ve compiled the betting data for these fights in this Google spreadsheet.

Note, for 6 of the fights that our rule called correctly, the money lines favored the losing fighters.

Let’s compute the overall return of our simple betting rule:

For each of these 52 fights, we risked $10,000, or in all $520,000
We lost 19 times, or a total of $190,000
Based on the betting odds of the 33 fights we called correctly (see spreadsheet), we won $255,565.44
Profit = $255,565.44 – $190,000 = $65,565.44
Return on investment (ROI) = 100 * 65,565.44 / 520,000 = 12.6%
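For reference, here is a small sketch that reproduces that bottom line; the money-line helper shows how each correctly called fight’s winnings would be computed from its Bodog odds, and the totals are the ones listed above:

```python
def moneyline_winnings(stake, line):
    """Winnings (excluding the returned stake) for an American money-line bet."""
    return stake * (line / 100 if line > 0 else 100 / abs(line))

stake = 10_000
total_risked = 52 * stake                  # $520,000 across all 52 fights
losses = 19 * stake                        # $190,000 from the 19 incorrect calls
winnings = 255_565.44                      # summed money-line payouts from the spreadsheet
profit = winnings - losses                 # $65,565.44
roi = 100 * profit / total_risked          # ~12.6%
print(f"Profit: ${profit:,.2f}  ROI: {roi:.1f}%")
```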


That’s a very decent return.

For kicks, let’s compare this to investing in the stock market over the same period of time. If we buy the S&P 500 with a conventional dollar cost averaging strategy to spread out the $520,000 investment, then we get an ROI of -7.31%. Ouch.
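As a quick illustration of how that comparison would be computed, here is a sketch of a dollar-cost-averaging ROI calculation; the prices below are made up for the example, not the actual S&P 500 series from that period:

```python
def dollar_cost_average_roi(periodic_prices, total_investment):
    """ROI (%) from spreading a fixed total across equal buys at each price."""
    per_period = total_investment / len(periodic_prices)
    shares = sum(per_period / price for price in periodic_prices)
    final_value = shares * periodic_prices[-1]
    return 100 * (final_value - total_investment) / total_investment

# Hypothetical monthly prices, purely to show the mechanics.
print(dollar_cost_average_roi([130, 128, 133, 135, 127, 118, 120], 520_000))
```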

Keep in mind that we’re using a simple betting rule that’s based on a single insight. The random forest model, which optimizes over many insights, should predict better and be applicable to more fights.

Please note that I’m just poking fun at stocks – I’m not saying betting on UFC fights with this rule is a more sound investment strategy (risk should be thoroughly examined – the variance of the performance of the rule should be evaluated over many periods of time).

The main goal here is to demonstrate the effectiveness of data-driven approaches for better understanding the patterns in a sport like MMA. The UFC could leverage these data mining approaches to come up with fairer matches (dismissing fights that match obvious winning and losing biases). I don’t favor this, but given that many fans want to see knockouts, the UFC could even use these approaches to design fights that will likely avoid decisions or submissions.

Anyway, there’s so much more analysis I’ve done (and haven’t done) over this data. Will post more results when cycles permit. Stay tuned.

24 Comments

Filed under AI, Blog Stuff, Computer Science, Data Mining, Economics, Machine Learning, Research, Science, Statistics, Trends

anymeme: Breaking News, Tweets in your URLs

A very basic experiment that pads URLs with messages:

or more appropriately http://anymeme.appspot.com/anymeme.appspot.com

Notes

  • This is not related to any work I’ve been pursuing during my EIR gig.
  • It’s kind of like the opposite of bit.ly (there is a shortener available on the site though). It’s better tailored for shorter URLs where there’s enough address bar space to display a message at the end of the URL.
  • I tested this on the top 30 or so sites using a mix of Firefox and Chrome.
  • This could easily be the dumbest thing I’ve ever developed, but then again there are a lot of dumb things on the web. It took longer for me to write these posts describing anymeme than to develop the code for it. This is more of an experiment to see:
    • Whether users, publishers, and advertisers like it
    • Whether URLs can be made more interesting and valuable
  • It would be so cool:
    • To generate enough cash via sponsored messages to make meaningful contributions to great causes
    • To see an important breaking news headline or an interesting tweet as you load up Hulu to check for new episodes – visible in the previously half-empty address bar, so there’s no need to frame or change the destination page to show the content.
  • It currently runs on Google App Engine

5 Comments

Filed under Blog Stuff, Google, News, Social, Trends, Twitter, Uncategorized, Web2.0