First it was Microsoft, followed a day later by Google. At the end of last week, both giants signed deals with "micro-blogging" phenomenon Twitter to reuse its vast repository of data for their own purposes.
The terms of the deals aren't known, but they probably represent the first time Twitter has managed to leverage its user base - around 5.1 million strong, if you go by publicly accessible profiles - as a major source of income. Monetising a large audience has proved a stumbling block for other social media favourites such as YouTube and Facebook, but by acting as a single gathering point for the type of information that search engines can't typically get at, Twitter has made itself extremely attractive to Google and Microsoft.
Time Sensitive Queries
Search engines exist in part to satisfy a need for what might be termed "static" search: informational or transactional searches where the user isn't looking for breaking news, merely something useful. Time-sensitive searches, on the other hand, can sometimes dwarf these everyday searches. In the 25 minutes following the reported death of Michael Jackson in June, the surge in searches for news of the singer was so great that Google interpreted the spike as a denial-of-service attack.
The ability to respond usefully to users looking for more or less real-time information in this way has been part of the search engines' raison d'être ever since the start of the decade, when they were barely able to manage monthly updates to their indexes (the pool of data from which search results are drawn).
Solutions Have Been Inadequate
Past answers to this problem have included the introduction of News and Blog search results, and algorithm changes that give some brand-new content a short-term ranking boost (popularly known as QDF, for "query deserves freshness") - but these have been half measures at best. With News and Blog results drawn from a wide range of disparate sources, it's difficult for search engines to reach a consensus on what's really newsworthy, and despite the speed with which many bloggers and news websites can publish new content, it's not always fast enough.
Twitter Steps Into The Gap
Twitter suffers neither of these drawbacks, essentially bringing millions of bloggers under one roof, where a news story can be picked up and "re-Tweeted" thousands of times within a matter of minutes. In essence, Twitter brings search engines as close to "real-time" search as they've ever been.
There's another facet to this, though: what search engines can infer about the popularity and usefulness of other websites from the frequency with which those sites are mentioned in tweets - information that has until now been off limits to them. This is a new spin on the old idea of link citation analysis, some variation of which forms an important part of every modern search engine algorithm: a link is treated as a vote in favour of the site it points to.
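The link-as-vote idea can be sketched in a few lines of Python. This is purely illustrative - the link graph below is invented, and real engines use far more sophisticated weighted analysis than a raw count of inbound links:

```python
# Toy link citation analysis: each link is a "vote" for the site it
# points to, and sites are ranked by how many votes they receive.
# The (source, target) pairs below are an invented example graph.
from collections import Counter

links = [
    ("blog-a", "news-site"),
    ("blog-b", "news-site"),
    ("blog-c", "news-site"),
    ("blog-a", "shop"),
]

# Count one vote per inbound link, ignoring who cast it.
votes = Counter(target for _source, target in links)

# Rank sites from most-cited to least-cited.
ranking = [site for site, _count in votes.most_common()]
print(ranking)  # news-site ranks first: it received the most inbound links
```

The same counting could just as easily run over (tweet author, mentioned site) pairs, which is exactly the kind of signal the Twitter deals put within the search engines' reach.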
The problem with using links this way is that, over time, many of the types of links search engines used to rely on have been subverted by companies looking to manipulate search results in their favour. This has left search engines fighting a constant rearguard battle, trying to discount manipulative links while concealing which factors they really take into account. Any new source of information about what real users like is therefore a welcome weapon in the search engines' arsenal.
But will the same thing happen to Twitter now that it's open to search engines? Only time will tell, but one thing is certain: with the increased exposure Twitter will receive from Bing and Google, and the intriguing possibility that Twitter users could wind up directly influencing search results, Twitter is looking like a much more attractive proposition for online marketers.