Has Curation Finally Arrived?

Curation, as in content curation, digital curation, search curation and so forth has been all the rage in 2011.

But nothing says that curation has finally arrived better than this recent Dilbert cartoon:

(Embedded Dilbert cartoon, via Dilbert.com)

Has curation reached the tipping point?  What do you think?

The Social Search Buzz Dance

Today I was looking at the recent trend in social media buzz about “social search”, and I noticed something interesting.

Social buzz on "social search"

The blue line in this graph is the overall mentions of the phrase “social search” in all blogs tracked by BlogPulse.com.

The orange line is the portion of those mentions that also mention Google.

The green line is the portion of those mentions that also mention either Microsoft or Bing.

I noticed a few things:

  1. The buzz in blogs about “social search” overall tracks closely with work done by Google or Microsoft in that area.
  2. The spikes in buzz are usually related to some news or announcement from one of these companies.
  3. The spike in buzz on Jul 15th is related to an accidental reveal of an internal social search engine research project at Microsoft (first discovered by Fusible, and then picked up by Search Engine Land, and covered in many outlets including here at Bing Watch and Microsoft-Watch).

There are a lot of startups and other entrepreneurs working on social search related technologies and services.  Do these generate any buzz at all in social media?

Social media buzz on "Social search" - buzz generated about something other than Microsoft's or Google's work

In the trend graph above, the blue line is the same as in the first graph, and the orange line is buzz about “social search” that does not mention Google, Microsoft or Bing.

There’s a trickle of non-Google, non-Microsoft buzz regarding “social search”.
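The filtering behind these trend lines is simple boolean matching over post text. Here is a minimal sketch of that logic in Python, using a tiny made-up sample of posts (BlogPulse is a hosted tool, so the data and the naive substring matching here are purely illustrative):

```python
from collections import Counter

# Hypothetical (date, text) blog posts standing in for a BlogPulse-style feed.
posts = [
    ("2011-07-15", "Microsoft's internal social search project leaked today."),
    ("2011-07-15", "Social search buzz around Bing is growing fast."),
    ("2011-07-16", "A startup take on social search, no giants involved."),
]

def mentions(text, *terms):
    # Naive case-insensitive substring match; a real tracker would
    # tokenize to avoid false positives like "bing" inside "throbbing".
    t = text.lower()
    return any(term in t for term in terms)

overall = Counter()   # blue line: all posts mentioning "social search"
google = Counter()    # orange line: ... that also mention Google
ms_bing = Counter()   # green line: ... that also mention Microsoft or Bing
neither = Counter()   # the non-Google, non-Microsoft trickle

for date, text in posts:
    if not mentions(text, "social search"):
        continue
    overall[date] += 1
    if mentions(text, "google"):
        google[date] += 1
    if mentions(text, "microsoft", "bing"):
        ms_bing[date] += 1
    if not mentions(text, "google", "microsoft", "bing"):
        neither[date] += 1
```

Plotting the daily counters over time gives curves like the ones in the graphs above.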

But the overall public discourse on “social search”, the social search buzz dance if you will, is dominated by Google and Microsoft.  I see this as further evidence that “social search” as we are currently viewing it in the market will be mainstream search engine features, as I wrote about it in this post on The Evolution of Social Search.

In related news, my startup Zakta officially announced the public availability of SearchTeam.com, the world’s first real-time collaborative search and curation engine.  It is the first search engine on the Web to allow friends, classmates, family members, colleagues and other trusted people to search together and to find and share what they need.

Here is a (~1 minute) concept video that shows the idea behind SearchTeam:

Here is a (~3 minute) guided tour video of the capabilities of SearchTeam:

For those of you who prefer to browse through a set of screenshots at your pace, rather than watch the videos, here’s a pdf file with annotated screenshots of SearchTeam.com.

Early reviews have been quite positive, with promising applications of SearchTeam in School, at Home and at Work.  I’d like to personally extend my gratitude to all of you who have tweeted, blogged or otherwise shared SearchTeam with your social circles.  Innovation like SearchTeam can thrive only through the support of people like you.  Thank you for your support!

Social Search and Bing

Bing recently made major updates to their social search capability which leverages their relationship with Facebook.  With this latest update, Bing now uses “Likes” from a user’s Facebook Friends, as well as the collective wisdom gained from opinions of users at large, to better rank and present search results.

As widely reported, Bing’s updates also include the availability of the Bing Bar, which makes it easy for users to Like any page on the Web. This is another source of social signals for Bing. On the heels of this major update to Bing.com, Microsoft has also added social search features to its mobile search.

Equally noteworthy has been the aggressive ad campaign around the themes “Bing and decide with your friends“, and “Friends don’t let friends decide alone“.

In the days following these major updates to Bing, there’s been a lot of discussion on the impact of these changes to the search landscape.  Here are a few topics related to this, that interested me greatly:

  • Users seem to like these new social search features in general. But is this enough to convert regular Google search users to Bing?  This is covered in a good post at Brafton.com.
  • All these social search additions from Bing (and earlier from Google) are changing the nature of search itself. The social impact on search was a hot topic at SEMPO, and that is covered well at this post from SearchAdvisory.net.
  • So, whose search engine is the more social one now, Google’s or Bing’s?  This is a topic covered in this post, also from SearchAdvisory.net.
  • Twitter is impacting Web search results.  Facebook is altering Web search results.  Other social signals are increasingly changing the search results we see from Google and Bing. SearchEngineWatch has a collection of posts on this page related to this topic of how social signals are impacting mainstream Web search engines.
  • With Bing’s aggressive integration of social search, comes the natural question around the impact of the Facebook-Microsoft alliance on Google.  This AdAge article calls out why Microsoft’s Facebook alliance is a real threat to Google.

In my blog post on The Evolution of Social Search, I predicted that “Social Search, as we now know it, becomes a mainstream search engine feature”. Bing’s recent social search moves seem to cement that claim.

The current wave of “social search” has been around the concept of using social signals of recommendation from friends and the Web at large to alter the rank ordering and presentation of Web search results.  Good strides have been made in this regard, and I expect even more activity and integration from Google and Bing in the coming months.

My startup, Zakta, has taken the next step in deepening social search.  Where the current generation of social search leverages signals of recommendation from friends and social connections in presenting Web search results, SearchTeam.com from Zakta enables users to search the Web together with their friends and other trusted people.  SearchTeam lets friends search together, classmates research together, and colleagues work together, in real time or asynchronously, curating the best search results from the Web.

Just as the current generation of social search features promises to improve the quality of transactional searches and some simple informational searches by leveraging social signals, SearchTeam improves the quality, experience and value of deeper informational searches through its collaborative search and curation paradigm.

What is your take on social search and its long term impact on the search landscape?

The Evolution of Social Search

I was going to write a post earlier this year about social search, and it was going to be titled “Does anyone care about social search anymore?”  I was genuinely wondering what had happened to the “social search” meme, which was all the rage in 2009!  As it turns out, I never did write that post.  And just as well.  You can see why in the BlogPulse trend graph below:

You will notice two spikes in the trend graph, one in mid-February, and another in early April.

In mid-February, Google announced deeper integration of social data from Twitter, Flickr, and Quora.  MG Siegler wrote on TechCrunch about this update:

What Google is sort of downplaying as just an “update” to social search, is actually much more. Google is taking those social circle links at the bottom of the page, pumping them with social steroids, and shoving them towards the top of results pages. For the first time, social is actually going to affect Google Search in a meaningful way.

In early-April, Google announced its +1 button to rival Facebook’s Like button.  I wrote about this in this earlier post on Social Search and Google +1.

… Google has demonstrated that they consider social signals as an important element of their ranking of search results.  So, does the Google +1 launch officially make Google a social search engine? 

After a long lull in “social search” buzz, we have heard two big announcements related to social search from Google in the span of two months in 2011.  What does this mean for “social search”?  It is fair to say that “social search” is a real phenomenon, and that it is rapidly evolving.

By the way, other people have pondered the evolution of social search over the past few years, and here are a couple of earlier posts on this topic you might find interesting:

  • October 2010, Lauren Fisher, TNW Social Media: The Evolution of Social Search – Lauren wrote about the potential business impact of the emerging social search phenomenon. Among the observations Lauren makes is this: “The impact that social search can have on the SEO industry is huge, and it represents a fundamental shift in the way this operates. While SEO has typically been a longer-term strategy, often taking weeks or months to see the fruits of your labour, social search has changed all that.”  Clearly, we are seeing signs in the SEO market that the impact of social on search is a key part of modern SEO work.
  • March 2011, Jennifer Van Grove, Mashable: The Future of Social Search – Jennifer argues that since search is rapidly changing, so is social search, and that we should be thinking of social search in broader terms than just “socially ranked search results”.  Her parting remarks in this post: “We’re just now scratching the surface of what’s possible when one’s expanding social graph becomes intertwined with search. But as time goes on, the social search experience will be so fluid — it will seem more like discovering than searching — we won’t even know it’s happening.”

Here is my own take (thoughts and predictions) about the evolution of social search:

  • Social search, as we now know it, becomes a mainstream search engine feature:  It is evident that Google is fully integrating social signals to alter their search results ranking.  We can only expect this integration to go broader (more social signals) and deeper (better integration of social signals).  This will drive a flurry of interest and activity on the part of companies and content creators to learn and incorporate “social search” related elements in their own online content and marketing strategies.
  • Aggregate social signals will continue to impact search result ranking: I think that using aggregate social signals to alter search result ranking is an idea that is here to stay – this is what Zakta.com does – because it can be done in a way that delivers value without being undermined by privacy or spam issues.
  • Social circle recommendations will aid a minority of search results:  I think that integrating recommendation signals from people in my social circle into my search results is interesting – but the percentage of queries for which a user’s social circle has a meaningful recommendation will be low, owing to the wide range of topics we typically search for and the constitution of our social circles.
  • Privacy concerns will hamper broad adoption:  I think that a large percentage of users are going to be wary of opening up their social circles, and the content flows within them, to mainstream search engines. In turn, this will be a hurdle for broad adoption of social circles in search.
  • Facebook social search will be here:  Social search won’t remain solely the bastion of search engines.  Facebook will be a huge player in this.  As I see it, Facebook has at least two major assets as they pertain to social search: (1) a growing base of registered users with their growing social graphs, and (2) an enormous, growing set of social signals fueled by extensive social sharing within Facebook, their seemingly ubiquitous Like button, and the new social sharing widgets they are deploying in the market.  How long before we see an innovative “social search” tool from Facebook that leverages these massive assets?
  • Social search startups will innovate along different paths: “Social search” has so far meant the incorporation of social signals into search results.  But that is a rather limiting view of what is possible when social and search are combined.  I expect new solutions to enter the market in the coming months and years that will vastly expand the definition and understanding of social search, as startups innovate along paths not yet taken by mainstream search engines.

Speaking of different paths of innovation with social search, here’s a shameless plug for what we are doing at Zakta, my startup.  There are two directions Zakta is taking that differ from mainstream approaches to social search:

  1. Curation:  I think that personal and social curation of search results is key to delivering relevance and ongoing value for informational searches.
  2. Collaboration: I think that real-time and asynchronous collaboration between trusted people (social circle / professional circle) is key to leveraging group knowledge and work as it pertains to informational searching and Web-based information research.

Zakta’s new service, SearchTeam, is a real-time collaborative search and curation engine that applies these principles of curation and collaboration to informational search and Web-based information research.  SearchTeam is not officially launched yet, but you can try it out today at SearchTeam.com.

What do you think about social search and where it is going?

Social Search and Google +1

A few weeks ago, the market was all abuzz with the announcement of Google +1.

Danny Sullivan wrote a customarily thorough article about Google +1 in this SearchEngineLand post:

The idea makes a lot of sense. If you’re searching, it’s nice to see if there are any answers that are recommended by your friends. Indeed, it makes so much sense that Google’s already been kind of offering this already through Google Social Search for nearly two years. But now these explicit recommendations become part of that.

Further in the article, Danny Sullivan talks about an aspect of Google +1 that is of great interest to me:

Social search signals, including the new +1 recommendations, will also continue to influence the first two things below plus power the new, third option:

  1. Influence the ranking of results, causing you to see things others might not, based on your social connections
  2. Influence the look of results, showing names of those in your social network who created, shared or now recommend a link
  3. Influence the look of results, showing an aggregate number of +1s from all people, not just your social network, for some links
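Influences 1 and 3 above amount to re-ranking by social signals. As a purely illustrative sketch (the function name, weights and damping are my own invention; Google’s actual ranking formula is not public), aggregate +1 counts and friends’ recommendations might nudge a base relevance score like this:

```python
import math

def social_score(base_relevance, total_plus_ones, friend_plus_ones,
                 w_aggregate=0.1, w_friends=0.5):
    """Toy re-ranking: boost a base relevance score with social signals.

    Log-damping the aggregate count keeps wildly popular pages from
    turning ranking into a pure popularity contest; signals from one's
    own social circle get a larger, but capped, boost.
    All weights here are invented for illustration only.
    """
    aggregate_boost = w_aggregate * math.log1p(total_plus_ones)
    friend_boost = w_friends * min(friend_plus_ones, 3)  # cap friend effect
    return base_relevance + aggregate_boost + friend_boost

# A page with a couple of friend recommendations can outrank a page
# with far more anonymous +1s but slightly higher base relevance.
results = [
    {"url": "a.example", "rel": 2.0, "plus": 5000, "friends": 0},
    {"url": "b.example", "rel": 1.8, "plus": 40, "friends": 2},
]
results.sort(key=lambda r: social_score(r["rel"], r["plus"], r["friends"]),
             reverse=True)
```

This is also where the spam and privacy tensions discussed below come from: every term in such a formula is something the SEO industry will try to manipulate.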

Zakta.com, a personal and social search engine created by my startup Zakta (released in 2009), was based on three core ideas, parts of which overlap with what Google is now doing:

  1. Allow users to control their own search results (through Zakta Personal Web Search)
  2. Allow users to organize their informational search results and share them back with the search community (through Zakta Guides)
  3. Incorporate social signals from the user’s trust network and also in aggregate from the user community at large to improve search result ranking for everyone

It is heartening to see key elements of Zakta’s direction (particularly related to social signals from #3 above) from 2+ years ago be embodied in the world’s largest search engine today!

At their scale, Google has both problems and opportunities with their Google +1 direction.  The opportunities are quite evident:

  • Boosting their sagging (and broken / manipulated) PageRank with social signals.  To their credit, Google has been aggressively doing this for over 2 years.
  • Applying this same +1 methodology to ads, to gain more social signals around ad relevance as well

The problems with this for Google at their scale include:

  • Manipulation of social signals – how long before the SEO community figures out how to manipulate the signals derived from +1?
  • How to prevent Web search result ranking from becoming a mere social popularity contest?

Much has already been written about Google +1 by others.  I’ve had a set of questions in this regard, which have been answered quite nicely by others:

  • How might Google use +1 data for search result ranking? In his post How Google Plus One Works For Ranking, Ruud Hein probes the question of how Google Plus One data might affect search result ranking: “Is there a correlation between relevance and social shares? Traffic and social shares? Are social shares maybe only relevant and correlated within one’s social network; you visit what I visit but outside of our relationship people could care less? Do pages with more links get equally more social shares? Are too many social shares a sign of web spam?”
  • Can Google +1 really compete with Facebook’s Like? In his post Can Google’s Plus One Take On The Facebook Like?, Nick O’Neill writes: “With Google’s major influence, there’s no doubt that they will be able to get any online publication on the phone in a heartbeat. The only question now is how fast the search company can move. With no add-on for publishers available yet, it’s clear that Google has a long way to go before they put a serious dent in the massive lead that Facebook already has when it comes to measuring consumers’ interest in content around the web.”
  • Can the Google +1 Button succeed, given the lack of success of Google’s previous social solutions? In his post Google +1 Button – 5 Questions Surrounding Its Potential Success, Chris Crum at WebProNews summarizes the success potential for the +1 button as follows: “Facebook’s “like” button works because of Facebook’s social nature. Google’s nature is largely search. Google has also been careful to position the button as heavily search-oriented. Probably the biggest question of them all is: Do people care about interacting with search like they care about interacting with their friends?”
  • Does Google finally “get” social?  In his post Google +1 Button, Phil Bradley is very critical of Google’s +1 button, citing problems with everything from the name of the feature to the fuzziness of exactly whose social network your +1’ing influences: “I’ve said it plenty of times before, and I’m saying it again. Google doesn’t understand social. They have absolutely no clue as to how it works, how to use it, or how to work with it. If Google has a downfall at any time in the future, this is what’s going to cause it. Orkut, Google Wave, Google Buzz, and now this latest mess.”

All said and done, Google has demonstrated that they consider social signals as an important element of their ranking of search results.  So, does the Google +1 launch officially make Google a social search engine?  What do you think?

Does the Web need Collaborative Search Tools?

Search engine interfaces have historically been designed to let an individual search the Web alone.  In the more than 15 years since the first Web search engine hit the market, search engine use has become ubiquitous, and many searches are actually collaborative in nature. Yet search engines have remained in the domain of individual use only.  Why are search engines designed only to be used alone?

Before answering this, I think it is useful to ask whether search engines really are being used collaboratively today. Let us look at one example in a bit of detail.

Planning a vacation with friends / family: Whether it is spring break with friends or a summer vacation with family, vacation planning involves Web searching along with communication, coordination and collaboration with friends or family members. When my family went on a summer vacation to Toronto, Canada recently, I had to engage my family members in the process, seeking input about places to go, places to stay, and myriad other details. Here’s how I ended up doing this job:

  • Suffering from Google addiction as many out there are, I googled many times to find interesting information about places in and around Toronto, day trips of interest, interesting places to stay etc.
  • I copied links of interest over into my email and pruned that list and would periodically pass it around for comments from the family
  • I visited many different specialty sites like Expedia, Travelocity, Hotels.com, Priceline, Kayak etc. to find possible flight itineraries, and places to stay
  • And in turn, I copied interesting links of places to stay, as well as possible travel itineraries, in email and sent that around for comments from the family
  • My wife or son would pass along interesting links via email along the way from some searches they did, or tidbits they heard from other family members / friends who had been to Toronto before.  Some more conversations would ensue.
  • Many iterations, email threads and in-person conversations later – many days from when we started this process – we arrived at the decisions we needed.  We had firmed up an itinerary, places to stay, details of places to see, day trips to try out, and lists of links of interest for our visit (all scattered across multiple emails).

Does this sound familiar?  This is collaborative searching at work.  Albeit with search engines that weren’t built to support it.

Let’s look at another example in a little detail.

Researching a disease or medical condition: It is not uncommon these days for a good friend or family member to be diagnosed with a new disease or medical condition. That kicks off the process of trying to learn more about the disease or condition, finding treatment options, and finding ways to cope.  Recently, a relative of mine was diagnosed with high cholesterol and diabetes at the same time. They reached out to me for input on food or lifestyle changes that might help manage these conditions alongside their regular mainstream treatment. They were keen to know about herbs or supplements that might help, or how methods like Yoga or energy healing might contribute to a faster return to wellness.  The process that ensued went like this:

  • I googled many different queries related to these conditions, and additional queries related to diet, nutrition, supplements / herbs, lifestyle changes, looking for good authoritative information that I could pass along
  • I started collecting links into an email and sent them along in small batches to my relative
  • In turn, I’d get emails back with links and questions about the legitimacy / believability of various claims made about certain supplements or herbs.  And I’d check them out to see the sources and citations and so forth and write back about each
  • Occasionally, they would find me online on Skype and reach out to me to chat about some additional things they had read.  In the process, we’d discover some more interesting resources to keep for future use, which I’d go copy into an open email or new email
  • Dozens of queries, hundreds of pages sifted, and many email threads later, we had collected dozens of links of use for my relative. They finally had the information they needed to make their own decision in concert with their doctor

Sound familiar again?  This too is an example of collaborative search in action today.

The problem is that this process is inefficient, time consuming, and prone to redundant work (people running the same queries, seeing the same unhelpful sites, etc.), and at the end of it all, the useful information is spread across multiple emails and possibly some instant messaging / chat sessions, not easily discoverable or usable when you need to consult it later.

Here are more examples at home or in other personal contexts, where I’ve run into this need:  Shopping for an appliance or a big ticket item;  Looking for a new home; Finding suppliers for a craft project;  Finding learning resources for gifted kids etc.

Plenty of such examples also exist in the academic context or business context as well.

What is common across all of these examples is that more than one person is involved in finding, collecting, organizing, sharing or using the information.  In other words, these are prime examples of collaborative searching, and they cry out for a new breed of collaborative search tools.

So, yes, I think that the Web needs collaborative search tools now.  What do you think?

My startup Zakta is about to launch SearchTeam (sometimes mistakenly referred to as Search Team), a real-time collaborative search and curation engine.  It combines traditional search engine features with semantics, curation tools, and real-time and asynchronous collaboration tools to deliver the world’s first commercial tool for real-time collaborative searching with trusted people.  SearchTeam is designed from the ground up to enable users to search the Web together with others they trust, curating, sharing and collaborating on what they need on any given topic.  I’ll be sharing more information about this in the coming days and weeks.

Search Quality, SEO, The Google Farmer Update and The Aftermath

The declining search quality on Google

In the last couple of months, the online world seemed to be buzzing about Google’s declining search quality.

Google’s Response:  The Google Farmer Update

In late February, Google announced a major update to improve search result quality, and tighten the screws on content farms on the Web.  This algorithmic update, dubbed the “Farmer Update” (presumably because it tried to address the issue related to content farms) has created a scenario with winners and losers, and has also left a trail of devastation.

Analysis of the effects of the Google Farmer Update

Given that nearly 12% of search results were affected by this update, many industry experts have chimed in with analysis of winners and losers in the aftermath of this Google Farmer Update:

  • Google Farmer Update: Quest for Quality – SEO company SISTRIX published a list of big losers (based on their SISTRIX VisibilityIndex, calculated from traffic on keywords, ranking and click-through rate on specific positions). Web 2.0 company, Mahalo.com is one of the companies in the losers list, which according to SISTRIX, lost nearly 70% of their top-ranking keywords on Google.
  • Number Crunchers: Who Lost in Google’s “Farmer” Algorithm Change? — Danny Sullivan wrote a comprehensive post analyzing winners and losers from this update, citing data from multiple sources.
  • Google Farmer Update: Who’s really affected? – SearchMetrics SEO blog shares analysis of specific sites that have been hurt badly in this update, including Suite101.com, Helium.com and others.
  • Google’s Farmer Update: Analysis of Winners and Losers – Rand Fishkin of SEOmoz shares his company’s analysis of the effect of this algorithmic update on the rankings of sites.  Of particular note is the analysis of possible factors that could have caused lost rankings.  Initial speculation is that factors like “user/usage data”, “quality raters’ inputs”, and “content analysis” are likely to be involved, and that “link analysis” of sites was not a likely factor.
  • Correlation between Google Farmer Update and Social Media Buzz — Liam Veitch at Zen Web Solutions has done some analysis on whether Google may have considered social buzz as a factor in determining which sites to demote or reward.  His initial analysis seems to support the hypothesis, but requires more study.
  • How Demand Media Used PR Spin to Have Google Kill Their Competitors — Aaron Wall at SEOBook.com presents a provocative analysis about how eHow (a service of Demand Media which had an IPO recently, and often cited in the context of content farms) not only escaped the Google axe with this update, but might actually be thriving in an environment where many competitors have been killed.

Collateral Damage?

A lot of sites seem to have been caught up in this “cleanup” act – collateral damage, as it were, in Google’s act of slashing “content farms”.  Here are some telling sample comments from site owners on various blogs:

My Personal blog was almost completely removed from Google’s SERPs.

Searching for my name IN QUOTES will not pull up my Blog (url is the same exact as my name).

I didn’t do anything to my site, nor did I do any SEO (white or black hat) but my search traffic is now 1 hit a day from 15-20 a day.

Google probably axed a lot of innocents in this update.

BLueSS on SEOmoz.org

We made the Sistrix list and I am currently freaking out right now. We literally lost about 70% of our US-based traffic overnight. What’s worse, we are a discussion forum with editorial who employs absolutely no black hat techniques, no duplicate content, we’re really tough on spam, and I don’t know what on earth I can do to get back into Google’s good graces. I’m convinced we somehow got caught up in the mix because I was under the impression Google was targeting “content farms” and “Made-for-AdSense” sites, and not forums. In fact, like most forum owners, I was eagerly awaiting this update with anticipation because I thought it would help us sites that deliver 100% unique, quality content.

Dani of Daniweb.com on SearchEngineLand

Well, there may be anecdotal reports of recovery, but not for my site (freegeographytools.com; 4 years old, 100% original content all by me, no farming or scraping). Google referrals are still down 20%, and AdSense earnings are down 40%+, and the trend is downwards. Thanks, Google!

leszekmp on SearchEngineLand

The real impact on hundreds of thousands of small sites may not be known for a long time.  But this has turned out to be a situation where for every loser a winner seems to have emerged – and whether the winner deserved to win, or the loser deserved to lose, will remain debatable for some time for many sites.

Is search quality all good now?

Turning to a question I’m personally interested in: now that Google has deployed an update (that by some accounts has been a year in development), is search quality all cleaned up?  Noted sites such as Technorati, Songkick and PRNewswire have all been hit in this recent update, and it is difficult to consider them similar to content farms.  So, personally, I’m not sure.  I’ve not run tests myself with the new update yet, so I can’t speak to this from personal experience.  Some commentators, like AJ Kohn, point out that this update was more about demoting content deemed of low quality than promoting better content.  According to AJ Kohn, the results are different, not necessarily better.  Joe Devon comments on a ReadWriteWeb post about this:

The new results are different, but not better. I think it has exposed that Google has an immense problem.

They’ve taken care of many of the open content farms…yes. But it just pushed up a bunch of scrapers that are being a little more low key than the content farms going public or selling for millions. Results are awful…

In the meantime, Google is claiming that they are working to help good sites caught by this cleanup operation.

The Google – SEO Industry dance

We’ve seen this dance before:

  • The SEO industry at large doggedly pursues the task of figuring out how Google’s ranking algorithms might work, finds loopholes in the process, and soon, large numbers of sites are exploiting those loopholes.
  • Search quality declines, the whining from users begins, and sometimes reaches a crescendo.
  • Google pays attention, comes back at it with some algorithmic updates, fixes some issues, opens up other issues, leaves some collateral damage along the way.
  • The SEO industry (again, I mean this broadly to include all manner of SEO specialists, white-hat, gray-hat, black-hat) goes to work again to learn about the ranking updates … and the cycle continues!

This is a classic cat-and-mouse game.

To think that a chunk of the business transacted online depends on this, or that the livelihood of many solopreneurs and small companies (maybe even larger companies too) might hinge on the outcome of this game at any given time: to me, this is frightening!

What do you think?

Disclosure: Readers of this blog know that my startup, Zakta, will soon officially launch SearchTeam, a real-time collaborative search engine that enables personal as well as collaborative content curation.  It represents a very different approach to solving the information search problem and the attendant search quality and search relevance issues.  I’ll be writing more about SearchTeam here in the coming weeks.

Newsflash: Searcher dies of old age waiting for “the next Google”!

The “next Google” is coming, but it isn’t what you think.  And if you are waiting for what you think is coming, you are not going to see it, ever!

I wrote a post recently about the current buzz over declining search results quality. I followed that with a post on what I thought were big problems with search that have still not been addressed. With all this, we can certainly agree that search faces some pressing current issues, many unsolved problems, and plenty of untapped opportunities.

Who’s going to solve all these search problems?

The vast amount of discussion about search is myopically centered on Google, as though it were the sole tool and the savior of humanity for all its search needs, as current social attention on it shows.

The blue line is buzz about Google.  The orange line is buzz about Bing, Yahoo, and other mainstream engines like Ask and AOL.  The little green line at the bottom of the graph above is the rather minuscule amount of buzz about any manner of search tools other than Google and its mainstream cousins (including, but not limited to, new and alternative search engines, specialized search tools, specialized databases, and much more).  That is how dominant Google and the other mainstream engines are in our discourse.

This is particularly absurd and illogical as I see it!

Let me explain what I mean with a personal example and analogy.  I am not a handy person, and I usually call upon a handyman to fix problems in our home, whether they are electrical, plumbing, or other issues.  The handyman comes into my home with an array of common tools, and when he doesn’t have a tool in his tool belt to solve a specific problem, he steps out to his vehicle and comes back with a more specialized tool or set of tools to get the job done.  How absurd would it be to imagine a world where the handyman has just one or two tools in his tool belt and that’s it!  How would he fix all my varied fix-it needs?  It is by carrying a wide range of relevant tools that the handyman is able to deal with the wide range of fix-it needs I have.

Not only is the obsession with a single tool / brand illogical, it is also quite dangerous.  Alan Patrick writes about this in his post “On the increasing uselessness of Google …….”:

Google is like a monoculture, and thus parasites have a major impact once they have adapted to it – especially if Google has “lost the war”. If search was more heterogenous, spamsites would find it more costly to scam every site. That is a very interesting argument against the level of Google market dominance

So, why do we really look only to Google to solve all our varied search problems?  Or when that doesn’t happen, seek a “better Google”, which is just as nutty?

Wouldn’t we be better serving ourselves by recognizing that we are trying to get one tool or a few tools from one company (or another) to solve all our past, present and future search problems?  Shouldn’t we be thinking about using the best search tools for different search jobs?  And then talking about our search toolset and educating others to think likewise?

Mass Google addiction?

Conrad Saam writes in the article “Google vs Bing: The Fallacy Of The Superior Search Engine” that in a test of search results quality, Bing bested Google. Even though Bing beat out Google, this hasn’t translated into a notable migration of searchers to Bing!  Such is the grip Google has on the mindset and habits of people.  (Sidebar: Hitwise recently reported that by their “search success rate” metric, Bing and Yahoo were more successful than Google. This is the subject of some heated discussion and controversy, and I’ll write about it separately.)

In the same article, Conrad Saam hints at the notion of using a search toolset for his own search needs:

My personal search approach uses Google as the default while using other sites for specialty searches. On Bing, image search is far superior and Wikipedia for 101 style information.

Using a toolset of search engines and search tools just makes more sense.  Wouldn’t that be the right way to break free from the Google euphoria and the Google addiction on the one hand, and from the Google disappointments, the disillusionment, and the continuous hankering after the next Google or a Google-killer on the other?

The mindset of a “search toolset”

A thoughtfully assembled search toolset can put together a wide range of useful search tools from the following:

  • General-purpose search engines — Google, Bing, Yahoo, and others such as Cuil and Blekko would be included here
  • Vertical search engines and tools — Specialty engines from auto, travel, health, and a wide range of other verticals would be included here
  • Special-purpose / specialty search engines and tools — Fact search engines, human-curated collections, patent search engines, publication archives, and much more would be included here
  • Other search engines and tools — A catchall for all sorts of current and emerging engines and tools
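The toolset idea above can be sketched as a simple routing table: pick the tools that suit the search task, and fall back to the general-purpose engines otherwise. The category keys mirror the list above; the tool names and routing rules are illustrative placeholders of my own, not real API integrations.

```python
# A minimal sketch of the "search toolset" mindset: route each search task to
# the most suitable tools instead of sending everything to one general engine.
# Tool names here are illustrative, not working API clients.

TOOLSET = {
    "general": ["Google", "Bing", "Yahoo"],  # general-purpose engines
    "travel": ["TravelVerticalEngine"],      # vertical engines (hypothetical)
    "facts": ["FactEngine"],                 # special-purpose engines (hypothetical)
    "images": ["BingImages"],                # per Conrad Saam's preference above
}

def pick_tools(task: str) -> list:
    """Return the tools to try for a task, falling back to general engines."""
    return TOOLSET.get(task, []) or TOOLSET["general"]

print(pick_tools("images"))   # the specialized choice for image search
print(pick_tools("unknown"))  # an unrecognized task falls back to general engines
```

The point of the sketch is the fallback: a toolset doesn’t abandon the general-purpose engines, it simply stops treating them as the only tool on the belt.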

The “next Google”

As I see it, the “next Google” isn’t a single search engine replacement for the world’s #1 tool of choice.  It is a search toolset that is going to be as vibrant and as varied as what the world has embraced with the diversity of tools and applications on their mobile devices.

The sooner we usher that mentality into our personal search practice, the sooner we will be part of the revolution that lies ahead.

If you are a reader of this blog, you will have noted that I have a stake in this future.  My startup, Zakta, has created a specialized search tool for your toolset called SearchTeam, a real-time collaborative search engine.  It is perfect for all those times you have to search the Web with friends, classmates, and colleagues, or when you need to research the Web more deeply and efficiently.  It doesn’t aspire to be the next Google, but to be a highly desired part of the search toolset that will be the “next Google”.

About the Newsflash

No!  No one has died yet waiting for “the next Google”.  But they could, if they remain myopically anticipating a Google revival and return to the glory days of Google as the sole king of search, OR if they remain waiting for the Google-killer which will come and deliver them from all their search problems!  That isn’t happening, as far as I can see it.

But this is all just my opinion.

What do you think?

Beyond spam: Big Problems with Search

The current discussion around declining search quality on Google goes to the bread-and-butter issue in organic search: how good are the search results on the first page?  In this context, the discussion is dominated by search spam, content farms, and the gaming of the Google algorithm. That makes sense!

In my opinion, there are a lot of unaddressed “big problems” in search beyond fixing the spam issue.  I’m citing just a few of these here.

The content explosion: There is a growing diversity of content types, explosive growth of online content, and ever more multilingual content, all of which add to the complexity that current and next-generation search engines need to handle. No single search engine is really able to cover the complete set of information on the Web today, and this will remain a big challenge for search engines into the future.

Hidden content sources: Part of the content explosion is the continuing proliferation of specialized content sources and databases whose content we can’t readily discover through mainstream search engines. This phenomenon, called the Invisible Web or the Deep Web, was first written about in the late ’90s (my previous startup, Intelliseek, delivered the first search engine for the Invisible Web in 1999), and it remains a big open issue. Attention on it has lessened only because of the sheer noise around other memes, like social search and real-time search, in the past few years.

Understanding user intent: Then there are age-old issues around understanding user intent that haven’t been addressed.  Much of the quality of search results has to do with not knowing what the heck the searcher really needs.  We are still feeding keywords into a single search box and expecting the search engine to magically give us what we need.  Not finding our answers, more of us are writing longer queries, hoping those will give us the answers we need. In other words, we as users are compensating for something that search engines fundamentally do not understand today: our search intent.

Understanding the content: More than 16 years since the first Web search engine, we are still processing textual information with little understanding of the semantics involved. Search engines do not understand the meaning of the content they index. This is another factor limiting the quality of the results a search engine delivers to users. There has long been buzz about the semantic Web, which is supposed to usher in richer search and information experiences, starting from more meaningful data and sophisticated software that can make inferences from that data in ways not possible today. Hailed as “Web 3.0”, it is seen as the next phase in the evolution of the Web, and it is a realm of new problems and opportunities for search engines.

Handling user input: For the most part, search interfaces have continued to use the age-old search box for typing keywords. While promising work has been done on accepting natural-language questions as input, nothing commercially viable has really turned up that works at Web scale. Without solving this problem first, there is no hope of being able to speak to a search engine and have it bring back what you are looking for.

Presenting search results: The read-only, ten-results-per-page SERP interface that first appeared in the mid-’90s is essentially what we are stuck with even today (granted, there have been recent touches such as page previews and summaries, and videos and images shown alongside links to pages and sites).  A retrospective look at this 2007 interview with usability expert Jakob Nielsen, which considered possible changes to search result interfaces by 2010, is very revealing about the relatively slow pace of change in the SERP interface.  Others have attempted purely visual searches, and still others have tried to categorize or cluster search results. Still, what the mainstream search engines offer as an interface for consuming search results is not noticeably innovative.

Personalizing: For the most part, search results are one-size-fits-all.  Everyone gets the same results regardless of their interests and connections.  Attempts have been made to personalize search results, based both on models of individual interests and on the likes and recommendations of a user’s social group, but that is a really challenging problem to solve well.  At Zakta, our Zakta.com service made the SERP read-write and personalizable. Other services have tried to bypass the search engine itself with Q&A services that flow through a user’s social network.

Leveraging social connections and recommendations: First generation attempts have been made to have search results be influenced by the recommendations of others in a person’s social circle. Some speculate that Facebook might be sitting on so much recommendation data that they might have a potent alternative to Google in the search arena.  Regardless, this remains an unsolved search problem today.

Facilitating collaboration in search: Web searching has been a lonely activity since its inception. Combined with the limiting read-only SERP interface, searchers have never really been able to leverage the work, findings, or knowledge of others (including those they deeply trust) in the search process.  In the post-Web 2.0 world we are in, this remains a noticeable gap in search. One area of opportunity is for search engines to let people search together to find what they need.

Specialized searches in verticals and niches: For a while in the early and mid-2000s, the buzz was all about vertical search engines, and somehow that meme just faded away. The core reasons for the attractiveness of vertical and specialized search engines remain. Shopping, travel, and plenty of other verticals could benefit from continued development of specialized search solutions that go beyond the mainstream search engine experience.

These are but a few examples of the many open “big problems” with Search.  Seeing this, we cannot but acknowledge that we are still in our infancy in meeting the search needs of an increasingly online, connected and mobile populace.

At Zakta, my startup, we are working on solutions for some aspects of these big search problems.  We are combining semantics, curation, and collaboration technologies with traditional Web searching to deliver a new search engine called SearchTeam.  Perfect for collaborative search, and as a research tool for personal or collaborative curation of Web content, SearchTeam is one we hope will become a very useful part of people’s search toolset.  At this time, SearchTeam is in private beta.

What do you think are open problems, big or small, with search engines?

The Buzz about Google’s Search Quality

We may have reached a tipping point in our tolerance of the declining quality of web search results on Google. At least that is how it appears with the growing commentary on the subject from influential bloggers, writers in the news media and searchers as well.

A meme in the making?

Anil Dash writes about the decline of Google search quality citing the negative experiences and observations of Paul Kedrosky, Alan Patrick, and Jeff Atwood:

What is worth noting now is that, half a decade after so many people began unquestioningly modifying their sites to serve Google’s needs better, there may start to be enough critical mass for the pendulum to swing back to earlier days, when Google modified its workings to suit the web’s existing behaviors.

Many more bloggers are chiming in about the same matter, as this BlogPulse Conversation Tracker listing shows.

Meanwhile, at LifeHacker, over 77% of readers say that Google’s search results are less useful lately:

We asked readers last week whether what influential bloggers said was true—that Google was losing the war against search result spam. Your response? More than three quarters found Google prone to spam, with one-third tagging the decline as significant.

Michael Rosenwald wrote in the Washington Post about the losing battle against spam in search results:

Google’s success rate, as measured by the percentage of users visiting a Web site after executing a search, fell 13 percent last year, according to Experian Hitwise, which monitors Web traffic. Microsoft’s Bing search engine increased its search efficiency by 9 percent over the same period.

Although there could be several reasons for the disparity, one is most certainly spam in Google’s results, analysts said.

“It’s clear that Google is losing some kind of war with the spammers,” said tech guru Tim O’Reilly, who often cheers Google’s technology. “I think Google has in some ways taken their eye off the ball, and I’d be worried about it if I were them.”

For years, Google’s organic search results have been experiencing a slow decline in quality. Paul Kedrosky writes about this in a recent blog post:

What has happened is that Google’s ranking algorithm, like any trading algorithm, has lost its alpha. It no longer has lists to draw and, on its own, it no longer generates the same outperformance — in part because it is, for practical purposes, reverse-engineered, well-understood and operating in an adaptive content landscape. Search results in many categories are now honey pots embedded in ruined landscapes — traps for the unwary. It has turned search back into something like it was in the dying days of first-generation algorithmic search, like Excite and Altavista: results so polluted by spam that you often started looking at results only on the second or third page — the first page was a smoking hulk of algo-optimized awfulness.

Would Google care?

One thing I personally wonder about is just how important this issue is to Google in 2011 and beyond, compared to when they started in 1998!  In my earlier post on this blog, I wrote about the ongoing relevance of search relevance to Google:

I have no doubt that Google’s relevance with organic search results will improve yet again, given the rise in negative commentary about it in influential pockets of the Internet.  However, the pertinent question to ask is this: Why would an advertising giant care more about the relevance of organic search results than is absolutely necessary to keep its revenue proposition intact?  Or, asked another way, is search relevance ever likely to be as relevant as ad relevance to Google?

Should Google care?

What is the practical threat to Google? It seems that all this isn’t really affecting Google’s stable search market share or its growing ad revenues in any meaningful way! But there are others who see 2011 as a turning point for Google’s invincibility.

Niall Harbison wrote recently about a perfect storm coming together to unhinge Google, and in 2011 no less.  He cites real-time search, friend recommendations, Facebook’s Like button, the rise of spam, the possibility of categorized human knowledge, and Bing as some of the key factors that could unseat Google as the search king:

For years it seemed as if Google could do no wrong and the competition be it search start ups, Yahoo or Microsoft was generally batted away with disdain. The landscape has changed over the last 18 months though and Google faces a very real danger that it’s core product could come under threat and I think 2011 will be the year where we see the first cracks start to appear in Google’s once invincible search armor.

I am not ready to predict anything about Google’s future.  I have been a Google addict myself, and I greatly respect their talent and innovations to date.  I am interested in all this in a very deep and personal way.  I have been in the search engine space, dabbling with search engine technologies, since 1996. But more importantly, my startup, Zakta, is set to introduce an innovative search tool that we hope is very relevant to the problems we face with search today, and very useful as well.  SearchTeam.com is in private beta now, and offers unique ways to search the Web and curate what you need, personally or in collaboration with others you trust.

What is your take on this?
