AI First, the Overhype and the Last Mile Problem

AI is hot, I mean really hot. VCs love it, pouring in over $1.5B in just the first half of this year. Consumer companies like Google and Facebook also love AI, with notable apps like Newsfeed, Messenger, Google Photos, Gmail and Search leveraging machine learning to improve their relevance. And it’s now spreading into the enterprise, with moves like Salesforce unveiling Einstein, Microsoft’s Cortana / Azure ML, Oracle with Intelligent App Cloud, SAP’s Application Intelligence, and Google with Tensorflow (and their TPUs).

As a founder of an emerging AI company in the enterprise space, I’ve been following these recent moves by the big titans closely because they put us (as well as many other ventures) in an interesting spot. How do we position ourselves and compete in this environment?

In this post, I’ll share some of my thoughts and experiences around the whole concept of AI-First, the “last mile” problems of AI that many companies ignore, the overhype issue that’s facing our industry today (especially as larger players enter the game), and my predictions for when we’ll reach mass AI adoption.

Defining AI-First vs. AI-Later

A few years ago, I wrote about the key tenets of building Predictive-First applications, something that's synonymous with the idea of AI-First, which Google is pushing. A great example of Predictive-First is Pandora (disclosure: Infer customer). Pandora didn't try to redo the music player UI — there were many services that did that, and arguably better. Instead, they focused on making their service intelligent, by providing relevant recommendations. No need to build or manage playlists. This key differentiation led to their rise in popularity, and that differentiation depended on data intelligence that started on day one. Predictive wasn't sprinkled on later (that's AI-Later, not AI-First, and there's a big difference … keep reading).

If you are building an AI-First application, you need to follow the data, and you need a lot of it, so you will likely gravitate towards integrating with big platforms (as in big companies with customers) that have APIs to pull data from.

For example, a system like CRM.

There’s so much valuable data in a CRM system, but five years ago, pretty much no one was applying machine learning to this data to improve sales. The data was, and still is for many companies, untapped. There’s got to be more to CRM than basic data entry and reporting, right? If we could apply machine learning, and if it worked, it could drive more revenue for companies. Who would say no to this?

So naturally, we (Infer) went after CRM (Salesforce, Dynamics, SAP C4C), along with the marketing automation platforms (Marketo, Eloqua, Pardot, HubSpot) and even custom sales and marketing databases (via REST APIs). We helped usher in a new category around Predictive Sales and Marketing.

We can't complain much: we've amassed the largest customer base in our space, and have published dozens of case studies showcasing customers achieving results like 9x improvements in conversion rates and 12x ROI via vastly better qualification and nurturing programs.

But it was hard to build our solutions, and remains hard to do so at scale. It’s not because the data science is hard (although that’s an area we take pride in going deep on), it’s the end-to-end product and packaging that’s really tough to get right. We call this the last mile problem, and I believe this is an issue for any AI product whether in the enterprise or consumer space.

Now, with the infrastructure out in the open, alongside free documentation, how-to guides, online courses, open source libraries, cloud services and more, machine learning is being democratized.

Anyone can model data. Some do it better than others, especially those with more infrastructure (for deep learning and huge data sets) and a better understanding of the algorithms and the underlying data. You may occasionally get pretty close with off-the-shelf approaches, but it's almost always better to optimize for a particular problem. By doing so, you'll not only squeeze out better (or at least slightly better) performance, but the understanding you gain from going deep will help you generalize and handle new data inputs, which is key for knowing how to explain, fix, tweak and train the model over time to maintain or improve performance.

But still, this isn't the hardest part. This is the sexy, fun part (well, for the most part … the data cleaning and matching may or may not be, depending on who you talk to :).

The hardest part is creating stickiness.

The Last Mile of AI

How do you get regular business users to depend on your predictions, even though they won’t understand all of the science that went into calculating them? You want them to trust the predictions, to understand how to best leverage them to drive value, and to change their workflows to depend on them.

This is the last mile problem. It is a very hard problem and it’s a product problem, not a data scientist problem. Having an army of data scientists isn’t going to make this problem better. In fact, it may make it worse, as data scientists typically want to focus on modeling, which may lead to over-investing in that aspect versus thinking about the end-to-end user experience.

To solve last mile problems, vendors need to successfully tackle three critical components:

1)  Getting “predictive everywhere” with integrations

It’s very important to understand where the user needs their predictions and this may not be in just one system, but many. We had to provide open APIs and build direct integrations for Marketo, Eloqua, Salesforce, Microsoft Dynamics, HubSpot, Pardot, Google Analytics and Microsoft Power BI.

Integrating into these systems is not fun. Each one has its own challenges: how to push predictions into records without locking out users who are editing at the same time; how to pull all the behavioral activity data out to determine when a prospect will be ready to buy (without exceeding the API limits); how to populate predictions across millions of records in minutes, not hours; etc.
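To make the flavor of that work concrete, here's a minimal sketch (not our actual integration code) of just one of these chores: writing scores back onto CRM records in batches while respecting an API quota. The endpoint, field name and limits are hypothetical stand-ins, not any particular vendor's API.

```python
import time
import requests

API_BASE = "https://crm.example.com/api"  # hypothetical CRM REST endpoint
BATCH_SIZE = 200                          # keep each request comfortably small
REQUESTS_PER_MINUTE = 60                  # assumed quota budget for this job

def push_scores(records):
    """Write predicted scores back onto CRM records in paced batches."""
    session = requests.Session()
    delay = 60.0 / REQUESTS_PER_MINUTE
    for i in range(0, len(records), BATCH_SIZE):
        batch = records[i:i + BATCH_SIZE]
        payload = [{"id": r["id"], "prediction_score": r["score"]} for r in batch]
        resp = session.patch(f"{API_BASE}/records/batch", json=payload)
        if resp.status_code == 429:       # throttled: back off, then retry once
            time.sleep(delay * 5)
            resp = session.patch(f"{API_BASE}/records/batch", json=payload)
        resp.raise_for_status()
        time.sleep(delay)                 # pace requests so other integrations keep their quota
```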

These are hard software and systems problems (99% perspiration). In fact, the integration work likely consumed more time than our modeling work.

This is what it means to be truly "predictive everywhere." Some companies like Salesforce are touting this idea, but it's closed to their stack. For specific solutions like predictive lead scoring, this falls apart quickly, because most mid-market and enterprise companies run lead scoring in marketing automation systems like Marketo, Eloqua and HubSpot.

Last mile here means you're investing more in integrating predictions into other systems than in your own user experience or portal. You go to where the user already is; that's how you get sticky, not by trying to create new behavior for them on your own site (even if you can make your site look way prettier and function better). What matters is stickiness. Period.

2)  Building trust

Trust is paramount to achieving success with predictive solutions. It doesn't matter if your model works if the user doesn't act on it or believe in it. A key area to establish trust around is the data, and specifically the external data (i.e. signals not in the CRM or marketing automation platforms, a big trick we employ to improve our models and to de-noise dirty CRM data).

Sometimes, customers want external signals that aren’t just useful for improving model performance. Signals like whether a business offers a Free Trial on their website might also play an important operational role in helping a company take different actions for specific types of leads or contacts. For example, with profiling and predictive scoring solutions, they could filter and define a segment, predict the winners from that group and prioritize personalized sales and marketing programs to target those prospects.

In addition to exposing our tens of thousands of external signals, another way we build trust is by making it easy and flexible to customize our solution to the unique needs and expectations of each customer. Some companies may need multiple models, by region / market / product line (when there is enough training data) or “lenses” (essentially, normalizing another model that has more data) when there isn’t enough data. They then need a system that guides them on how to determine those solutions and tradeoffs. Some companies care about the timing of deals; they may have particular cycle times they want to optimize for or they may want their predictions to bias towards higher deal size, higher LTV, etc.

Some customers want the models to update as they close more deals. This is known as retraining the model, but retraining too aggressively can result in bad performance. For example, say you're continuously and automatically retraining with every new example, but the customer is in the middle of a messy data migration. It would be better to wait until that migration completes to avoid incorrectly skewing the model for that period of time. What you need is model monitoring, which gauges live performance and notices dips or opportunities to improve performance when there's new data. The platform then alerts the vendor and the customer, and finally results in a proper retraining.
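As an illustration only (not a description of Infer's actual system), a monitoring gate along those lines might look like the sketch below: score the most recent closed deals with the live model, compare a rolling metric against the baseline recorded at deployment, and raise an alert for a human to review rather than blindly retraining. It assumes an sklearn-style model with predict_proba.

```python
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.80   # assumed performance measured when the model went live
ALERT_DROP = 0.05     # tolerated degradation before anyone gets paged

def monitor(model, recent_deals):
    """recent_deals: list of (feature_vector, won_or_lost) pairs from a rolling window."""
    X = [features for features, _ in recent_deals]
    y = [outcome for _, outcome in recent_deals]
    live_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    if live_auc < BASELINE_AUC - ALERT_DROP:
        # Alert the vendor and the customer; a human decides whether this is real
        # drift worth retraining on, or an artifact like a mid-flight data migration.
        return "alert"
    return "ok"
```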

Additionally, keep in mind that not all predictions will be accurate, and the customer will sometimes see these errors. It’s important to provide them with options to report such feedback via an active process that actually results in improvements in the models. Customers expect their vendor to be deep on details like these. Remember, for many people AI still feels like voodoo, science fiction and too blackbox-like (despite the industry’s best efforts to visualize and explain models). Customers want transparent controls that support a variety of configurations in order to believe, and thus, operationalize a machine-learned model.

3)  Making predictive disappear with proven use cases

Finally, let's talk about use cases and making predictive disappear in a product. This is a crucial dimension and a clear sign of a mature AI-First company. There are a lot of early startups selling AI as their product to business users. However, most business users don't want (or shouldn't want) AI; they want a solution to a problem. AI is not a solution, but an optimization technique. At Infer, we support three primary applications (or use cases) to help sales and marketing teams: Qualification, Nurturing and Net New. We provide workflows that you can install in your automation systems to leverage our predictive tech and make each of these use cases more intelligent. However, we could position and sell these apps without even mentioning the word predictive, because it's all about the business value.

In our space, most VPs of Sales or Marketing don’t have Ph.Ds in computer science or statistics. They want more revenue, not a machine learning tutorial. Our pitch then goes something like this …

"Here are three apps for driving more revenue. Here's how each app looks in our portal and here are the workflows in action in your automation systems … here are the ROI visualizations for each app … let's run through a bunch of customer references and success studies for the apps that you care about. Oh, and our apps happen to leverage a variety of predictive models that we'll expose to you too if you want to go deep on those."

Predictive is core to the value but not what we lead with. Where we are different is in the lengths we go to guide our customers with real-world playbooks, to formulate and vet models that best serve their individual use cases, and to help them establish sticky workflows that drive consistent success. We'll initially sell customers one application, and hopefully, over time, the depth of our use cases will impress them so much that we'll cross-sell them into all three apps. This approach has been huge for us. It's also been a major differentiator: we achieved our best-ever competitive win rate this year (despite 2016 being the most competitive yet) by talking less about predictive.

Vendors that are overdoing the predictive and AI talk are missing the point and don’t realize that data science is a behind-the-scenes optimization. Don’t get me wrong, it’s sexy tech, it’s a fun category to be in (certainly helps with engineering recruiting) and it makes for great marketing buzz, but that positioning is not terribly helpful in the later stages of a deal or for driving customer success.

The focus needs to be on the value. When I hear companies just talking about predictive, and not about value or use cases / applications, I think they’re playing a dangerous game for themselves as well as for the market. It hurts them as that’s not something you can differentiate on any more (remember, anyone can model). Sure, your model may be better, but the end buyer can’t tell the difference or may not be willing (or understand how) to run a rigorous evaluation to see those differences.

The Overhype Issue

Vendors in our space often over-promise and under-deliver, resulting in many churn cases, which, in turn, hurts the reputation of the predictive category overall. At first, this was just a problem with the startups in our space, but now we’re seeing it from the big companies as well. That’s even more dangerous, as they have bigger voice boxes and reach. It makes sense that the incumbents want to sprinkle AI-powered features into their existing products in order to quickly impact thousands of their customers. But with predictive, trust is paramount.

Historically, in the enterprise, the market has been accustomed to overhyped products that don’t ship for years from their initial marketing debuts. However, in this space, I’d argue that overhyping is the last thing you should do. You need to build trust and success first. You need to under-promise and over-deliver.

Can the Giants Really Go Deep on AI?

The key is to hyper focus on one end-to-end use case and go deep to start, do that well with a few customers, learn, repeat with more, and keep going. You can’t just usher out an AI solution to many business customers at once, although that temptation is there for a bigger company. Why only release something to 5% of your base when you can generate way more revenue if it’s rolled out to everyone? This forces a big company to build a more simplified, “checkbox” predictive solution for the sake of scale, but that won’t work for mid-market and enterprise companies, which need many more controls to address complex, but common, scenarios like multiple markets and objective targets.

Such a simplified approach caters better to smaller customers that desire turnkey products, but unlike non-predictive enterprise solutions, predictive solutions face a big problem with smaller companies: a data-limitation challenge. You need a lot of data for AI, and most small businesses don't have enough transactions in their databases to machine-learn patterns from (I would also contend that most small companies shouldn't be focusing on optimizing their sales and marketing functions anyway, but rather on building a product and a team).

So, inherently, AI is biased towards mid-market / enterprise accounts, but their demands are so particular that they need a deeper solution that’s harder to productize for thousands. Figuring out how to build such a scalable product is much better done within a startup vs. in a big company, given the incredible focus and patience that’s needed.

AI really does work for many applications, but more vendors need to get good at solving the last mile: the 80% that depends less on AI and more on building the vehicle that runs with AI. This is where emerging companies like Infer have an advantage. We have the patience, focus, and depth to solve these last mile problems end-to-end, and to do it in a manner that's open to every platform, not closed off to one company's ecosystem. This matters (especially in the sales and marketing space, where almost every company runs a fragmented stack with many vendors).

It's also much easier to solve these end-to-end problems without the legacy issues of an industry giant. At Infer, we started out with AI from the very beginning (AI-First), not AI-Later like most of these bigger companies. Many of them will encounter challenges when it comes to processing data in a way that's amenable to modeling, monitoring, etc. We're already seeing these large vendors having to forge big cloud partnerships to overhaul their backends in order to address their scaling issues. I actually think some of the marketing automation companies still won't be able to improve their scale, given how dependent they are on legacy backend designs that weren't meant to handle expensive data mining workloads.

Many of these companies will also need to curtail security requirements stemming from the days of moving companies over to the cloud. Some of their legacy security provisions may prevent them from even looking at or analyzing a customer’s data (which is obviously important for modeling).

When you solve one problem really well, the predictive piece almost disappears to the end user (like with our three applications). That's the litmus test of a good AI-powered business application. But that's not what we're seeing from the big companies and most startups. It's quite the opposite, in fact: we're seeing more over-generalization.

They’re making machine learning feel like AWS infrastructure. Just build a model in their cloud and connect it somehow to your business database like CRM. After five years of experience in this game, I’ll bet our bank that approach won’t result in sticky adoption. Machine learning is not like AWS, which you can just spin up and magically connect to some system. “It’s not commoditizable like EC2” (Prof. Manning at Stanford). It’s much more nuanced and personalized based on each use case. And this approach doesn’t address the last mile problems which are harder and typically more expensive than the modeling part!

From AI Hype to Mass Adoption

There aren’t yet thousands of companies running their growth with AI. It will take time, just like it took Eloqua and Marketo time to build up the marketing automation category. We’re grateful that the bigger companies like Microsoft, Oracle, Salesforce, Adobe, IBM and SAP are helping market this industry better than we could ever do.

I strongly believe every company will be using predictive to drive growth within the next 10 years. It just doesn’t make sense not to, when we can get a company up and running in a week, show them the ROI value via simulations, and only then ask them to pay for it. Additionally, there are a variety of lightweight ways to leverage predictive for growth (such as powering key forecasting metrics and dashboards) that don’t require process changes if you’re in the middle of org changes or data migrations.

In an AI-First world, every business must ask the question: what if our competitor is using predictive and achieving 3x better conversion rates as a result? The solution is simple: adopt AI as well and join the arms race.

I encourage all emerging AI companies to remain heads down and focus on customer success and last mile product problems. Go deep, iterate with a few companies and grow the base wisely. Under-promise and over-deliver. Let the bigger companies pay for your marketing with their big voice boxes, which they're really flexing now. Do so, and you'll likely succeed beyond measure; who knows, we may even replace the incumbents in the process.

Amazon, Google and Apple vs the Big 5 Unicorns on Hiring and Churn

I’ve received multiple requests to analyze employee churn and new hiring rates for big companies and unicorns with the approach I took earlier for studying engineering and sales retention rates. I figured I’d give it a shot – and combine all of the key metrics in one chart …

[Chart: monthly hiring, churn and net hiring for Amazon, Google, Apple and the Big 5 unicorns]

How to read this:

The blue bars represent the number of expected new hires each company will make in a 30-day (one-month) period. The black bars (negative values) indicate how many employees will churn in a one-month period. The orange line (the topmost numerical labels correspond to the orange line plot) represents the net change in hires per month (new hires less churn). The companies are ranked by churn from left to right in descending order (highest churn on the left).

As you can see in the chart, the big three companies included in this analysis are Amazon, Apple and Google. The unicorns are Uber, Lyft, Airbnb, Pinterest and Snapchat. “Big 5” combines these unicorns together as if they were one whole company. Also note, this is looking at employees worldwide with any job title.

Key Insights:

  1. Apple is not hiring enough new heads when compared with Amazon and Google. In fact, the Big 5 unicorns combined will hire more net heads than Apple with almost 50% less employee churn.
  2. Amazon's churn is the highest – losing a little over 10 people a day. However, this is not bad relatively speaking – Google loses 8-9 people a day, and Apple is a tad over 9 a day (and Amazon has 36% more employees than Google). Given the recent press bashing Amazon's culture and the periodic press envying Google's great benefits, their retention rates tell a different story – that it's closer to a wash. Big tech companies with great talent churn people at pretty similar high rates regardless, it seems (I have some more thoughts on this but will save those for another post).
  3. At these current rates, all of the companies here (collectively) will increase their employee size by 20K (19,414 to be precise) heads by year end (this is new hires less churn). That's a measly 5% increase in their current collective employee size – and this is across the Big 3 Tech Companies and Big 5 Unicorns.
  4. Let's compare Amazon to the Big 5 Unicorns. The Big 5 will hire 79% as many incremental heads as Amazon in a month, even though their collective employee size is 24% that of Amazon's. Amazon has been in business for much longer (2-3x as many days since incorporation), and the Big 5's churn is 43% of Amazon's figure – both factors contributing to the closeness in the incremental head rate between the two.

Want more details?

How did I calculate these figures? Take a look at my previous post on engineering retention for more details. The same caveats listed there apply here, and then some (such as how this depends on LinkedIn participation rates, which may differ considerably internationally compared to the US market that my previous posts exclusively focused on). Feel free to connect or email me if you have any questions or feedback.

 

3 Gmail Tricks That Save Me Hours A Week

A guest piece I wrote for The Next Web on 3 Gmail tricks I use that save me literally hours of my life a week:

It’s totally unbearable and massively inefficient to process countless emails every day. And yet, to have any chance of success in today’s information world, you must communicate via email.

As you succeed, you become more networked, and more dependent on others to achieve even bigger milestones. As a result, your email volume just increases, while higher expectations require even faster responses and decision making. It’s a seemingly impossible cycle.

This is especially true for C-level and executive leaders. I was chatting recently with Suresh Khanna, Chief Revenue Officer at AdRoll, and he said it best: “Management is about making decisions – not executing. You need to delegate execution efficiently. You need to listen and keep everyone aligned on the same page.

“So, when it comes to doing this over email, you mainly serve as an email routing and forwarding agent.” (Read More)

An Evaluation of Google’s Realtime Search

How timely are the results returned from Google’s Realtime (RT) Search Engine? How often do Twitter results appear in these results? Over the weekend I developed a few basic experiments to find out and published the results below.

Key Findings

  • For location-based queries, there's nearly a coin-flip chance (43%) that a Twitter result will be the #1 ranked result.
  • For general knowledge queries, there’s a 23% chance that a Twitter result will be #1.
  • The newest Twitter results are usually 4 seconds old. The newest Web results are 10x older (41 seconds).
  • A top ranking Twitter result for a location-based query is usually 2 minutes old (compared with Web which is 22 minutes old – again nearly 10x older).
  • When Twitter results appear, at least one of them is in the top-ranked position.

Experiment #1 – General Knowledge

I crawled 1,370 article titles from Wikipedia and ran each title as a query into Google RT search.

Market Shares

81% of all queries returned search results that included web page results
23% of all queries returned search results that included Twitter results
7% of all queries returned 0 search results

70% of all queries had a web page result in the #1 ranked position
When Twitter results appeared there was always at least one result in the #1 ranked position (so 23% of queries)

Time Lag

When a web page was the #1 ranked result, that result on average was 6736 seconds (or 1 hr and 52 minutes) old.
When a Tweet was the #1 ranked result, that result on average was 261 seconds (or 4 minutes and 21 seconds) old.

The average age of the top 10% newest web page results (across all queries) is 41 seconds
The average age of the top 10% newest Twitter results (across all queries) is 2 seconds

Tail

Query length was between 1 – 12 words (where 1-2 word long queries are most popular)
Worth noting that no Twitter results appear for queries with greater than 5 words

Experiment #2 – Location

I crawled 265 major populated U.S. cities from the U.S. Census Bureau and ran each city name as a query into Google RT search.

Market Shares

73% of all queries returned search results that included web page results
43% of all queries returned search results that included Twitter results
5% of all queries returned 0 search results

52% of all queries had a web page result in the #1 ranked position
When Twitter results appeared there was always at least one result in the #1 ranked position (so 43% of queries)

Time Lag

When a web page was the #1 ranked result, that result on average was 1341 seconds (or 22 minutes and 21 seconds) old.
When a Tweet was the #1 ranked result, that result on average was 138 seconds (or 2 minutes and 18 seconds) old.

The average age of the top 10% newest web page results (across all queries) is 41 seconds
The average age of the top 10% newest Twitter results (across all queries) is 4 seconds

Tail

Query length was between 1 – 3 words
Worth noting that no Twitter results appear for 3 word long queries

Implementation Details

  • Generated Wiki queries by running “site:en.wikipedia.org” searches on Google and Blekko, and extracting the titles (en.wikipedia.org/{title_is_here}) from the result links. Side point: I tried Bing but the result links had mostly one word long titles (Bing seems to really bias query length in their ranking) and I wanted more diversity to test out tail queries.
  • Crawled cities (for the location-based queries) from http://www.census.gov/popest/cities/tables/SUB-EST2009-01.csv (a rough sketch of both extraction steps follows this list)
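For anyone who wants to reproduce the query generation, something like the sketch below covers both steps. The column index for the census CSV and the shape of the Wikipedia result links are assumptions; adjust them to match the actual files and result pages.

```python
import csv
from urllib.parse import urlparse

def wiki_titles(result_links):
    """Pull article titles out of en.wikipedia.org result URLs, assuming the
    title is the last path segment as described above."""
    titles = []
    for link in result_links:
        path = urlparse(link).path.rstrip("/")
        if "en.wikipedia.org" in link and path:
            titles.append(path.split("/")[-1].replace("_", " "))
    return titles

def city_queries(csv_path, name_column=0):
    """Read city names from the downloaded census CSV; the column index is an
    assumption -- check it against the actual file layout."""
    cities = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) > name_column and row[name_column].strip():
                cities.append(row[name_column].strip())
    return cities
```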

Caveats

  • I ran these experiments at 2:45a PST on Monday. The location-based queries all relate to the U.S., so probably not many people were up at that time generating up-to-date information. The time lag stats could vary depending on when these experiments are run. I did, however, re-run the experiments in the late morning and didn't see much difference in the timings.
  • I ran all queries through Google’s normal web search engine with ‘Latest’ on (in the left bar under Search Tools). These results are not exactly the same as those generated from the standalone Google Realtime Search portal, which seems to bias Tweets more while the ‘Latest’ results seems to find middle ground between real-time Twitter results and web page results. I used ‘Latest’ because it seems like it would be the most popular gateway to Google’s Realtime search results.

anymeme: Breaking News, Tweets in your URLs

A very basic experiment that pads URLs with messages:

or more appropriately http://anymeme.appspot.com/anymeme.appspot.com

Notes

  • This is not related to any work I’ve been pursuing during my EIR gig.
  • It’s kind of like the opposite of bit.ly (there is a shortener available on the site though). It’s better tailored for shorter URLs where there’s enough address bar space to display a message at the end of the URL.
  • I tested this on the top 30 or so sites using a mix of Firefox and Chrome.
  • This could easily be the dumbest thing I’ve ever developed, but then again there are a lot of dumb things on the web. It took longer for me to write these posts describing anymeme than to develop the code for it. This is more of an experiment to see:
    • Whether users, publishers, and advertisers like it
    • Whether URLs can be made more interesting and valuable
  • It would be so cool:
    • To generate enough cash via sponsored messages to make meaningful contributions to great causes
    • To see an important breaking news headline or an interesting tweet as you load up hulu to check for new episodes – visible in the previously half empty address bar so there’s no need to frame or change the destination page to show the content.
  • It currently runs on Google App Engine

Twitter + BOSS = Real Time Search

Try it

Update: (6/25) This application has been updated. Go here to learn more. The description below though still applies.

Update: (6/11) In case you’re bored, here’s a discussion we had with Google and Twitter about Open & Real-time Search.

Update: (1/19) If you have issues try again in 5-10 minutes. You can also check out the screenshots below. (1/15) App Engine limits were reached (and fast). Appreciate the love and my apologies for not fully anticipating that. Google was nice enough though to temporarily raise the quota for this application. Anyways, this was more to show a cool BOSS developer example using code libraries I released earlier, but there might be more here. Stay tuned.

Here’s a screenshot as well (which should hopefully be stale by the time you read this).

Basically this service boosts Yahoo's freshest news search results (which typically don't have much relevance since they are ordered by timestamp and that's it) based on how similar they are to the emerging topics found on Twitter for the same query (hence using Twitter to determine authority for content that doesn't yet have links because it is so fresh). It also overlays related tweets via an AJAX expando button (big thanks to Greg Walloch at Yahoo! for the design) under results if they exist. A nice added feature of the overlay functionality is near-duplicate removal, to ensure message threads on any given result provide as much comment diversity as possible.

Freshness (especially in the context of search) is a challenging problem. Traditional PageRank style algorithms don’t really work here as it takes time for a fresh URL to garner enough links to beat an older high ranking URL. One approach is to use cluster sizes as a feature for measuring the popularity of a story (i.e. Google News). Although quite effective IMO this may not be fast enough all the time. For the cluster size to grow requires other sources to write about the same story. Traditional media can be slow however, especially on local topics. I remember when I saw breaking Twitter messages describing the California Wildfires. When I searched Google/Yahoo/Microsoft right at that moment I barely got anything (< 5 results spanning 3 search results pages). I had a similar episode when I searched on the Mumbai attacks. Specifically, the Twitter messages were providing incredible focus on the important subtopics that had yet to become popular in the traditional media and news search worlds. What I found most interesting in both of these cases was that news articles did exist on these topics, but just weren’t valued highly enough yet or not focusing on the right stories (as the majority of tweets were). So why not just do that? Order these fresh news articles (which mostly provide authority and in-depth coverage) based on the number of related fresh tweets as well as show the tweets under each. That’s this service.
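Stripped of the BOSS and Twitter plumbing, the core ranking idea reduces to something like the sketch below: take the freshest news results, count how many recent tweets for the same query look similar to each result's title, and reorder by that count. The two fetch functions are placeholders for the actual BOSS and Twitter API calls, and the term-overlap threshold is an arbitrary choice.

```python
def rerank_by_tweets(query, fetch_fresh_news, fetch_recent_tweets):
    """fetch_fresh_news / fetch_recent_tweets are stand-ins for the BOSS news
    search (ordered by timestamp) and the Twitter search API."""
    news = fetch_fresh_news(query)         # e.g. [{'title': ..., 'url': ...}, ...]
    tweets = fetch_recent_tweets(query)    # e.g. ['tweet text', ...]

    def overlap(title, tweet):
        return len(set(title.lower().split()) & set(tweet.lower().split()))

    def tweet_support(result):
        # How many tweets share enough terms with this headline to count as
        # "talking about" the same story (the threshold of 2 is arbitrary).
        return sum(1 for t in tweets if overlap(result["title"], t) >= 2)

    # Input is freshest-first; reorder by how much Twitter is buzzing about each story.
    return sorted(news, key=tweet_support, reverse=True)
```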

To illustrate the need, here’s a quick before and after shot. I searched for ‘nba’ using Yahoo’s news search ordered by latest results (first image). Very fresh (within a minute) but subpar quality. The first result talks about teams that are in a different league of basketball than the NBA. However, search for ‘nba’ on TweetNews (second image) and you get the Kings/Warriors triple OT game highlight which was buzzing more in Twitter at that minute.

[Screenshot: 'NBA' on Y! News latest]
[Screenshot: 'NBA' on TweetNews (Y! News latest enhanced by Twitter)]

There’s something very interesting here … Twitter as a ranking signal for search freshness may prove to be very useful if constructed properly. Definitely deserves more exploration – hence this service, which took < 100 lines of code to represent all the search logic thanks to Yahoo! BOSS, Twitter’s API, and the BOSS Mashup Framework.

To sum up, the contributions of this service are: (1) Real-time search + freshness (2) Stitching social commentary to authoritative sources of information (3) Another (hopefully cool) BOSS example.

The code is packaged for general open consumption and has been ported to run on App Engine (which powers this service actually). You can download all the source here.

Yahoo Boss – Google App Engine Integrated

Updated: I see blogs doing evaluations of the Q&A engine. I have to admit, that wasn’t my focus here. The service is merely 50 lines of code … just to demonstrate the integration of BMF and GAE.

Updated: Direct link to the example Question-Answering Service

Today I finally plugged the Yahoo Boss Mashup Framework into the Google App Engine environment. Google App Engine (GAE) provides a pretty sweet yet simple platform for executing Python applications on Google's infrastructure. The Boss Mashup Framework (BMF) provides Python APIs for accessing Yahoo's Search APIs as well as remixing data a la SQL constructs. Running BMF on top of GAE is a seemingly natural progression, and quite arguably the easiest way to deploy Boss – so I spent today porting BMF to the GAE platform.

Here’s the full BMF-GAE integrated project source download.

There’s a README file included. Just unzip, put your appid’s in the config files, and you’re done. No setup or dependencies (easier than installing BMF standalone!). It’s a complete GAE project directory which includes a directory called yos which holds all the ported BMF code. Also made a number of improvements to the BMF code (SQL ‘where’ support, stopwords, yql.db refactoring, util & templates in yos namespace, yos.crawl.rest refactored & optimized, etc.).

The next natural thing to do is to develop a test application on top of this united framework. In the original BMF package, there’s an examples directory. In particular, ex6.py was able to answer some ‘when’ style questions. I simply wrapped that code as a function and referenced it as a GAE handler in main.py.

Here’s the ‘when’ q&a source code as a webpage (less than 25 lines).

The algorithm is quite easy – use the question as the search query and fetch 50 results via the Boss API. Count the dates that occur in the results’ abstracts, and simply return the most popular one.
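A minimal re-creation of that 'when' logic might look like the following. The boss_search function is a placeholder for the actual Boss API wrapper, and the date regex is deliberately crude:

```python
import re
from collections import Counter

MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
DATE_RE = re.compile(r"\b(?:%s)\s+\d{1,2},?\s+\d{4}\b" % MONTHS)

def answer_when(question, boss_search):
    """boss_search(query, count) is a placeholder for the Boss API wrapper;
    each result is assumed to expose an 'abstract' field."""
    results = boss_search(question, count=50)
    dates = Counter()
    for result in results:
        dates.update(DATE_RE.findall(result["abstract"]))
    # The date mentioned most often across the 50 abstracts is the best guess.
    return dates.most_common(1)[0][0] if dates else None
```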

For fun, following a similar pattern to the ‘when’ code, I developed another handler to answer ‘who’ or ‘what’ or ‘where’ style questions (finding the most popular capitalized phrase).

Here’s the complete example (just ~50 lines of code – bundled in project download):

Q&A Running Service Example

Keep in mind that this is just a quick proof of concept to hopefully showcase the power of BMF and the idea of Open Web Search.

If you’re interested in learning more about this Q&A system (or how to improve it), check out AskMSR – the original inspiration behind this example.

Also, shoutout to Sam for his very popular Yuil example, which is powered by BMF + GAE. The project download linked above aims to make it easier for people to build these types of web services.

How Google is putting us back into the Stone Age

Yeah, I know – what a linkbait title. If that’s what it takes these days to get visitors and diggs then so be it. Also, just to forewarn, as you read this you might find that a better title choice for this post would have been “How Web 2.0 is putting us back into the Stone Age” since many of these thoughts generalize to Web 2.0 companies as a whole. I used Google in the title mainly because they are the big daddy in the web world, the model many web 2.0 companies strive to be like, the one to beat. Plus, the title just looks and sounds cooler with ‘Google’ in it.

Here’s the main problem I have with web applications coming from companies like Google: About 2 years ago I bought a pretty good box – which is now fairly standard and cheap these days – 2 gigs of ram, dual core AMD-64 3400+’s, 250 gigs hd, nVidia 6600 GT PCI Express, etc. It’s a beast. However, because I don’t play games, its potential isn’t being utilized – not even close. Most of the applications I use are web-based, mainly because the web provides a medium which is cross platform (all machines have a web browser), synchronized (since the data is stored server side I can access it from anywhere like the library, friend’s computer, my laptop) and it keeps my machine pretty light (no need to install anything and waste disk and risk security issues). The web UI experience for the most part isn’t too bad either – in fact, I find that the browser’s restrictions force many UI’s to be far simpler and easier to use. To me, the benefits mentioned above clearly compensate for any UI deficiencies. Unfortunately, this doesn’t mean that Web 2.0 is innovating the user’s experience. Visualizing data – search results, semantic networks, social networks, excel data sheets – is still very primitive, and a lot can be done to improve this experience by taking advantage of the user’s hardware.

My machine, and most likely yours, is very powerful and underutilized. For instance, my graphics card has tons of cores. We live in an age where GPU’s like mine can sort terabytes of data faster than the top-of-the-line Xeon based workstation (refer to Jim Gray’s GPUTerasort paper). For sorting, which is typically the bottleneck in database query plans and MapReduce jobs, it’s all about I/O – or in this case, how fast you can swap memory (for example, a 2-pass bitonic radix sort iteratively swaps the lows and the highs). Say you call memcpy in your C program on a $6,000 Xeon machine. The memory bandwidth is about 4 GB/s. Do the equivalent on a $200 graphics co-processor and you get about 50 GB/s. Holy smokes! I know I’m getting off-topic here, but why is it so much faster on a GPU? Well, in CPU world, memory access can be quite slow. You have almost these random jumps in memory, which can result in expensive TLB/cache misses, page faults, etc. You also have context switching for multi-processing. Lots of overhead going on here. Now compare this with a GPU, which has the memory almost stream directly to tons of cores. The cores on a GPU are fairly cheap, dumb processing units in comparison to the cores found in a CPU. But the GPU uses hundreds of these cores, in parallel, to drastically speed-up the overall processing. This, coupled with its specialized memory architecture, results in amazing performance bandwidth. Also, interestingly, since these cores are cheap (bad), there’s a lot of room for improvement. At the current rate, GPU advancements are occurring 3-4x faster than Moore’s law for CPU’s. Additionally, the graphical experience is near real-life quality. Current API’s enable developers to draw 3D triangles directly off the video card! This is some amazing hardware folks. GPU’s, and generally this whole notion of co-processing to optimize for operations that lag on CPU’s (memory bandwidth, I/O) promise to make future computers even faster than ever.

OK, so the basic story here is our computers are really powerful machines. The web world doesn't take advantage of this, and considering how much time we spend there, it's an unfortunate waste of computing potential. Because of this, I feel we are losing an appreciation for our computer's capabilities. For example, when my friend first started using Gmail, he was non-stop clicking on the 'Invite a friend' drop-down. He couldn't believe how the page could change without a browser refresh. Although this is quite an extreme example, I've seen this same phenomenon for many users on other websites. IMHO, this is completely pathetic, especially when considering how powerful client-end applications can be in comparison.

Again, I’m not against web-based applications. I love Gmail, Google Maps, Reader, etc. However, there are applications which I do not think should be web-based. An example of this is YouOS, which is an OS accessible through the web-browser. I mean, there’s some potential here, but the way it’s currently implemented is very limiting and unnecessary.

To me, people are developing web services with the mindset 'can it hurt?', when I think a better mantra is 'will it advance computing and communication?'. Here's the big web 2.0 problem: just because you can make something web 2.0'ish doesn't mean you should. I think of this along the lines of Turing completeness, a notion in computer science for determining whether a system can express any computation. Basically, as long as you can process an input, store state, and return an output (i.e. a potentially stateful function), you can do any computation. Now web pages provide an input form, perform calculations server side, and can generate output pages – enough to do anything according to this paradigm, but with extreme limitations on visualization and performance (like with games). AJAX makes web views richer, but it is not only a terribly hacked-up programming model; it also, for some reason, compels developers to convert previously successful client-end applications into web-based services. Sometimes this makes sense from an end-user perspective, but it consequently results in dumbing down the user experience.

We have amazing hardware that’s not being leveraged in web-based services. Browsers provide an emulation for a real application. However, given the proliferation of AJAX web 2.0 services, we’re starting to see applications only appear in the browser and not on the client. I think this current architecture view is unfortunate, because what I see in a browser is typically static content – something I could capture the essence of with a camera shot. In some sense, Web 2.0 is a surreal hack on what the real online experience should be.

I feel we really deserve truly rich applications that deliver 'Minority Report' style interfaces that utilize the client's hardware. Movies predating the 1970's predicted so much more for our current state's user experience level. It's up to us, the end-consumer, to encourage innovation in this space. It's up to us, the developer, to build killer applications that require tapping into a computer's powerful hardware. The more we hype up web 2.0 and dumbed-down webpage experiences, the more website-based services we get – and consequently, less innovation in hardware-driven UIs.

But there's hope. I think there exists a fair compromise between client-end applications and server-side web services. The internet is getting faster, and the browser + Flash are getting fine-tuned to make better use of a computer's resources. Soon, the internet will be well suited for thin-client computing. A great example of this already exists today, and I'm sure many of you have used it: Google Earth. It's a client-end application – taking advantage of the computer's graphics and processing power to make the user feel like he/she is traveling in and out of space – while being a server-side service since it gathers updated geographical data from the web. The only problem is there's no cross-platform, preexisting layer to build applications like this. How do we make these services without forcing the user to do an interventionist, slow installation? How do we make them run over different platforms? Personally, I think Microsoft completely missed the boat here with .NET. If MS had recognized the web phenomenon early on, they could have built this layer into Vista to encourage developers to create these rich thin-client applications, while also promoting Vista. I have no reason to change my OS – this could have been my reason! Even if it were cross-platform, better performance alone would still be a reason to prefer it (providing some business case). Instead, they treated .NET as a Java-based replacement for MFC, thereby forcing developers to resort to building their cross-platform, no-installation-required services through AJAX and Flash.

Now, even if this layer existed, which would enable developers to build and instantly deploy Google Earth style applications in a cross-platform manner, there would be security concerns. I mean, one could make the case that ActiveX attempted to do this – allowing developers to run arbitrary code on the client’s machines. Unfortunately, this led to numerous viruses. Security violations and spyware scare(d) all of us – so much so that we now do traditionally client-end functions through a dumb-downed web browser interface. But, I think we made some serious inroads in security since then. The fact that we even recognize security in current development makes us readily prepared to support such a platform. I am confident that the potential security issues can be tackled.

To make a final point, I think we all really need higher expectations on the user experience front. We need to develop killer applications that push the limitations of our hardware – to promote innovation and progress. We're currently at a standstill in my opinion. This isn't how the internet should be. This is not how I envisioned the future to be like 5 years ago. We can do better. We can build richer applications. But to do this, we as consumers must demand it in order for companies to have a business case to further pursue it. We need developers to come up with innovative ways of visualizing the large amounts of data being generated, with the use of hardware – thereby delivering long-awaited killer applications for our idle computers. Let's take our futuristic dreams and finally translate them into our present reality.

Google Co-op just got del.icio.us!

Update: Sorry, link is going up and down. Worth trying, but will try to find a more stable option when time cycles free up.

This past week I decided to cook up a service (link in bold near the middle of this post) I feel will greatly assist users in developing advanced Google Custom Search Engines (CSE’s). I read through the Co-op discussion posts, digg/blog comments, reviews, emails, etc. and learned many of our users are fascinated by the refinements feature – in particular, building search engines that produce results like this:

"linear regression" on my Machine Learning Search Engine

… but unfortunately, many do not know how to do this nor understand/want to hack up the XML. Additionally, I think it’s fair to say many users interested in building advanced CSE’s have already done similar site tagging/bookmarking through services like del.icio.us. del.icio.us really is great. Here are a couple of reasons why people should (and do) use del.icio.us:

  • It’s simple and clean
  • You can multi-tag a site quickly (comma separated field; don’t have to keep reopening the bookmarklet like with Google’s)
  • You can create new tags on the fly (don’t choose the labels from a fixed drop-down like with Google’s)
  • The bookmarklet provides auto-complete tag suggestions; shows you the popular tags others have used for that current site
  • Can have bundles (two level tag hierarchies)
  • Can see who else has bookmarked the site (can also view their comments); builds a user community
  • Generates a public page serving all your bookmarks

Understandably, we received several requests to support del.icio.us bookmark importing. My part-time role with Google just ended last Friday, so, as a non-Googler, I decided to build this project. Initially, I was planning to write a simple service to convert del.icio.us bookmarks into CSE annotations – and that’s it – but realized, as I learned more about del.icio.us, that there were several additional features I could develop that would make our users’ lives even easier. Instead of just generating the annotations, I decided to also generate the CSE contexts as well.

Ok, enough talk, here’s the final product:
http://basundi.com:8000/login.html

If you don’t have a del.icio.us account, and just want to see how it works, then shoot me an email (check the bottom of the Bio page) and I’ll send you a dummy account to play with (can’t publicize it or else people might spam it or change the password).

Here’s a quick feature list:

  • Can build a full search engine (like the machine learning one above) in two steps, without having to edit any XML, and in less than two minutes
  • Auto-generates the CSE annotations XML from your del.icio.us bookmarks and tags
  • Provides an option to auto-generate CSE annotations just for del.icio.us bookmarks that have a particular tag
  • Provides an option to Auto-calculate each annotation’s boost score (log normalizes over the max # of Others per bookmark)
  • Provides an option to Auto-expand links (appends a wildcard * to any links that point to a directory)
  • Auto-generates the CSE context XML
  • Auto-generates facet titles
  • Since there’s a four facet by five labels restriction (that’s the max that one can fit in the refinements display on the search results page), I provide two options for automatic facet/refinement generation:
    • The first uses a machine learning algorithm to find the four most frequent disjoint 5-item-sets (based on the # of del.icio.us tag co-occurrences; it then does query-expansion over the tag sets to determine good facet titles); see the sketch after this list for the basic idea
    • The other option returns the user's most popular del.icio.us bundles and corresponding tags
    • Any refinements that do not make it in the top 4 facets are dumped in a fifth facet in order of popularity. If you don’t understand this then don’t worry, you don’t need to! The point is all of this is automated for you (just use the default Cluster option). If you want control over which refinements/facets get displayed, then just choose Bundle.
  • Provides help documentation links at key steps
  • And best of all … You don’t need to understand the advanced options of Google CSE/Co-op to build an advanced CSE! This seriously does all the hard, tedious work for you!
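For the curious, the Cluster option above boils down to greedy frequent-itemset selection over tag co-occurrences. The sketch below is a simplified stand-in for the actual algorithm (no query expansion, and the grouping heuristic is my own): count how often tags are bookmarked together, then greedily pick four disjoint groups of up to five tags. It also includes one possible reading of the log-normalized boost score mentioned above.

```python
import math
from collections import Counter
from itertools import combinations

def boost_score(others, max_others):
    """One possible reading of the log-normalized boost: scale each bookmark's
    'Others' count (how many people saved it) against the maximum seen."""
    return math.log(1 + others) / math.log(1 + max_others) if max_others else 0.0

def build_facets(bookmarks, n_facets=4, tags_per_facet=5):
    """bookmarks: list of tag lists, one per del.icio.us bookmark."""
    # Count pairwise co-occurrence of tags across bookmarks.
    pair_counts = Counter()
    for tags in bookmarks:
        for a, b in combinations(sorted(set(tags)), 2):
            pair_counts[(a, b)] += 1

    def affinity(tag, group):
        return sum(pair_counts[tuple(sorted((tag, other)))] for other in group)

    tag_popularity = Counter(t for tags in bookmarks for t in tags)
    used, facets = set(), []
    for _ in range(n_facets):
        # Seed each facet with the most popular unused tag, then grow it with
        # the unused tags that co-occur with the group most often.
        seed = next((t for t, _ in tag_popularity.most_common() if t not in used), None)
        if seed is None:
            break
        group = [seed]
        used.add(seed)
        while len(group) < tags_per_facet:
            candidates = [t for t in tag_popularity if t not in used]
            if not candidates:
                break
            best = max(candidates, key=lambda t: affinity(t, group))
            if affinity(best, group) == 0:
                break
            group.append(best)
            used.add(best)
        facets.append(group)
    return facets  # leftover tags would go into a fifth "overflow" facet by popularity
```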

In my opinion, there’s no question that this is the easiest way to make a fancy search engine. If I make any future examples I’m using this – I can simply use del.icio.us, sign-in to this service, and voila I have a search engine with facets and multi-label support.


Please note that this tool is not officially endorsed by nor affiliated with Google or Yahoo! It was just something I wanted to work on for fun that I think will benefit many users (including myself). Also, send your feedback/issues/bugs to me or post them on this blog.

Google Co-op — An Intro & Some Insider Hacks

http://www.google.com/coop

So what is it? It’s called Google Co-op, a platform which enables users to build their own vertical search engines and make money off the advertisements. It provides a clean, easy interface for simple site restrictions (like what Yahoo! Search Builder and Live Macros offer) plus a number of power user features for tweaking the search results. The user has control over the look and feel (to embed the search box on their own site), can rank results, and even (multi) tag sites to let viewers filter out results by category.

But talk is cheap. So let me show you some examples of what you can do with Co-op:

http://vik.singh.googlepages.com/techstuff

This is a technology-specific search engine, which lets users refine results based off Google Topics (global labels which anyone can annotate with). Basically, I was lazy here. I didn't feel like multi-tagging sites/domains individually, so instead I just collected a laundry list of popular technology site domains in a flat file and pasted it into Google Co-op's Custom Search Engine control panel/sites page. In addition, something I think is really useful, Google Co-op allows users to bulk upload links from OPML files. So, to make my life easier when building this, I uploaded Scoble's and Matt Cutts's OPMLs. Tons of great links there (and close to 1000 total). Then I clicked on the 'filter results to just the sites I listed' option (which I recommend you use, since if you muddle your results with normal Google web search results you typically won't see your results popping up on the first page despite the higher priority level promised for hand-chosen sites). To enable the filters you see on the results page (Reviews, Forums, Shopping, Blogs, etc.), I did an intersection with the background label of my search engine and the Google Topics labels. How do you do that? The XML context configuration exposes a <BackgroundLabels> tag. Any labels listed in the BackgroundLabels block will be AND'ed (how cool is that). So I added the label of my search engine (each search engine has a unique background label – it can be found bolded on the Advanced Tab page) and a Google Topic label (News, Reviews, Stores, Shopping_Comparison, Blogs, Forums, etc.) in the BackgroundLabels XML block. I made a separate XML context file for each Google Topic intersection. By doing this, I didn't have to tag any of my results and was still able to provide search filters. Google Topics does most of the hard work and gives me search refinements for free!
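Since I made a separate context file per topic, generating them is easy to script. Here's a rough sketch of that generation step; the <BackgroundLabels> element is the piece described above, while the surrounding structure, label names and file names are schematic placeholders rather than the exact CSE schema:

```python
TOPICS = ["News", "Reviews", "Stores", "Shopping_Comparison", "Blogs", "Forums"]
ENGINE_LABEL = "_cse_mybackgroundlabel"  # placeholder for the engine's unique background label

# Only the <BackgroundLabels> block (whose labels get AND'ed) comes from the post
# above; the wrapper elements are illustrative, not the exact Custom Search Engine schema.
TEMPLATE = """<CustomSearchEngine>
  <Context>
    <BackgroundLabels>
      <Label name="{engine_label}" />
      <Label name="{topic}" />
    </BackgroundLabels>
  </Context>
</CustomSearchEngine>
"""

for topic in TOPICS:
    filename = "tech-{}-context.xml".format(topic.lower())
    with open(filename, "w") as f:
        f.write(TEMPLATE.format(engine_label=ENGINE_LABEL, topic=topic))
```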

But say you’re not lazy. Here’s an example of what you can do with multi-tagging and refinements.

http://vik.singh.googlepages.com/machinelearningsearch2

This one is more of a power user example – notice the refinements onebox on the search results page, and the labels with “>>” at the end. These labels redirect to another label hierarchy (a hack, I used the label redirect XML option to link to other custom search engine contexts – basically I’m nesting search engines here)

Now, say you want to get fancy with the search results presentation. Here’s a way to do it with Google’s Ajax Search API:

http://www.google.com/uds/samples/cse/index.html

Thanks to Mark Lucovsky and Matt Wytock for developing that great example.
For more information about how to use the Ajax Search API with Custom Search, please take a look at this informative post: http://googleajaxsearchapi.blogspot.com/2006/10/custom-search-engine-support.html

While writing this blog post, I realized it would take me forever to go over the number of tricks one can pull with Co-op. Instead, I’ll summarize some of the big selling point features to encourage everyone to start hacking away. Also, to help jump start power users, I’ve linked the XML files I used to make my featured search examples at the bottom of this post.

Key Feature Summary (in no particular order):

and much much more (especially for power users).

If you need a search engine for your site, and your content has been indexed by Google, then seriously consider using this rather than building your own index – or worse, using the crappy full-text functions available in relational databases.

Here are my XML files:

ml-context.xml

ml-pop-context.xml

ml-complx-context.xml

ml-source-context.xml

tech-stuff-context.xml

techreviews.xml

techforums.xml

techshopping.xml

techblogs.xml

technews.xml

tech-stuff-scoble-annotations.xml

tech-stuff-matcutts-annotations.xml

Happy Coop hacking!