The Sharer

Journey to Spirituality, Healing and Awakening

Social Search and Google +1

A few weeks ago, the market was all abuzz with the announcement of Google +1.

Danny Sullivan wrote a customarily thorough article about Google +1 in this SearchEngineLand post:

The idea makes a lot of sense. If you’re searching, it’s nice to see if there are any answers that are recommended by your friends. Indeed, it makes so much sense that Google’s already been kind of offering this already through Google Social Search for nearly two years. But now these explicit recommendations become part of that.

Further in the article, Danny Sullivan talks about an aspect of Google +1 that is of great interest to me:

Social search signals, including the new +1 recommendations, will also continue to influence the first two things below plus power the new, third option:

  1. Influence the ranking of results, causing you to see things others might not, based on your social connections
  2. Influence the look of results, showing names of those in your social network who created, shared or now recommend a link
  3. Influence the look of results, showing an aggregate number of +1s from all people, not just your social network, for some links

Zakta.com, a personal and social search engine created by my startup Zakta (released in 2009) was based on three core ideas, parts of which overlap with what Google is now doing:

  1. Allow users to control their own search results (through Zakta Personal Web Search)
  2. Allow users to organize their informational search results and share them back with the search community (through Zakta Guides)
  3. Incorporate social signals from the user’s trust network and also in aggregate from the user community at large to improve search result ranking for everyone
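The third idea, folding social signals from a user's trust network and from the community at large into ranking, can be sketched as a simple score-blending function. This is purely an illustrative sketch: the function, weights and vote counts are hypothetical, not how Zakta or Google actually compute rankings.

```python
import math

def blended_score(base_relevance, trust_votes, global_votes,
                  w_trust=0.3, w_global=0.1):
    """Blend classic relevance with two social signals.

    Votes are log-damped so that a huge global vote count cannot
    drown out the core relevance score -- a simple guard against
    ranking becoming a mere popularity contest.
    """
    return (base_relevance
            + w_trust * math.log1p(trust_votes)
            + w_global * math.log1p(global_votes))

# A result endorsed by the searcher's own trust network outranks an
# otherwise identical result that has only anonymous endorsements.
a = blended_score(1.0, trust_votes=5, global_votes=0)
b = blended_score(1.0, trust_votes=0, global_votes=5)
print(a > b)  # True, because w_trust > w_global
```

Weighting trust-network votes above aggregate votes mirrors the intuition behind both Zakta's approach and Google's: a recommendation from someone you know should count for more than one from a stranger.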

It is heartening to see key elements of Zakta’s direction from 2+ years ago (particularly the social signals in #3 above) now embodied in the world’s largest search engine!

At their scale, Google has both problems and opportunities with their Google +1 direction.  The opportunities are quite evident:

  • Boosting their sagging (and broken / manipulated) PageRank with social signals.  To their credit, Google has been quite aggressively doing this for over 2 years.
  • Applying this same +1 methodology to ads, gaining more social signals around ad relevance as well

The problems with this for Google at their scale include:

  • Manipulation of social signals – how long before the SEO community figures out how to manipulate the signals derived from +1?
  • How to prevent Web search result ranking from becoming a mere social popularity contest?

Much has already been written about Google +1 by others.  I’ve had a set of questions in this regard, which have been answered quite nicely by others:

  • How might Google use +1 data for search result ranking? In his post How Google Plus One Works For Ranking, Ruud Hein probes the question of how Google Plus One data might affect search result ranking:  “Is there a correlation between relevance and social shares? Traffic and social shares? Are social shares maybe only relevant and correlated within one’s social network; you visit what I visit but outside of our relationship people could care less? Do pages with more links get equally more social shares? Are too many social shares a sign of web spam?”
  • Can Google +1 really compete with Facebook’s Like? In his post Can Google’s Plus One Take On The Facebook Like?, Nick O’Neill writes: “With Google’s major influence, there’s no doubt that they will be able to get any online publication on the phone in a heartbeat. The only question now is how fast the search company can move. With no add-on for publishers available yet, it’s clear that Google has a long way to go before they put a serious dent in the massive lead that Facebook already has when it comes to measuring consumers’ interest in content around the web.”
  • Can the Google +1 Button succeed, given the lack of success of Google’s previous social solutions? In his post Google +1 Button – 5 Questions Surrounding Its Potential Success, Chris Crum at WebProNews summarizes the success potential for the +1 button as follows: “Facebook’s “like” button works because of Facebook’s social nature. Google’s nature is largely search. Google has also been careful to position the button as heavily search-oriented. Probably the biggest question of them all is: Do people care about interacting with search like they care about interacting with their friends?”
  • Does Google finally “get” social?  In his post Google +1 Button, Phil Bradley is very critical of Google’s +1 Button, citing problems with everything from the name of the feature to the fuzziness of exactly whose social network your +1’ing influences. “I’ve said it plenty of times before, and I’m saying it again. Google doesn’t understand social. They have absolutely no clue as to how it works, how to use it, or how to work with it. If Google has a downfall at any time in the future, this is what’s going to cause it. Orkut, Google Wave, Google Buzz, and now this latest mess.”

All said and done, Google has demonstrated that they consider social signals as an important element of their ranking of search results.  So, does the Google +1 launch officially make Google a social search engine?  What do you think?

Does the Web need Collaborative Search Tools?

Search engine interfaces have historically been designed for an individual searching the Web alone.  In the 15+ years since the first Web search engine hit the market, search engine use has become ubiquitous, and many searches are actually collaborative in nature. Yet search engines have remained tools for individual use only.  Why are search engines designed only to be used alone?

Before answering this, I think it is useful to ask whether search engines really are being used collaboratively today. Let us look at one example in a bit of detail.

Planning a vacation with friends / family: Whether it is spring break with friends or a summer vacation with family, vacation planning involves Web searching along with communication, coordination and collaboration with friends or family members. When my family went on a summer vacation to Toronto, Canada recently, I had to engage my family members in the process, seeking input about places to go, places to stay, and myriad other details. Here’s how I ended up doing this job:

  • Suffering from Google addiction as many out there are, I googled many times to find interesting information about places in and around Toronto, day trips of interest, interesting places to stay etc.
  • I copied links of interest over into my email and pruned that list and would periodically pass it around for comments from the family
  • I visited many different specialty sites like Expedia, Travelocity, Hotels.com, Priceline, Kayak etc. to find possible flight itineraries, and places to stay
  • And in turn, I copied interesting links of places to stay, as well as possible travel itineraries, in email and sent that around for comments from the family
  • My wife or son would pass along interesting links via email along the way from some searches they did, or tidbits they heard from other family members / friends who had been to Toronto before.  Some more conversations would ensue.
  • Many iterations of this, and many email conversations and many in-person conversations (where that was possible) later, many days from when we started this process, we arrived at the decisions we needed.  We had firmed up an itinerary, places to stay, details of places to see, day trips to try out, and lists of links of interest towards our visit (all scattered across multiple emails).

Does this sound familiar?  This is collaborative searching at work.  Albeit with search engines that weren’t built to support it.

Let’s look at another example in a little detail.

Researching a disease or medical condition: It is not uncommon these days for a good friend or family member to be diagnosed with a new disease or medical condition. That kicks off the process of trying to learn more about the condition, finding treatment options, and finding ways to cope with it.  Recently, a relative of mine was diagnosed with high cholesterol and diabetes at the same time. They reached out to me for input on food or lifestyle changes that might help with managing the conditions alongside their regular mainstream treatment. They were keen to know about herbs or supplements that might help, or how methods like Yoga or energy healing might contribute to a faster return to wellness.  The process that ensued went like this:

  • I googled many different queries related to these conditions, and additional queries related to diet, nutrition, supplements / herbs, lifestyle changes, looking for good authoritative information that I could pass along
  • I started collecting links into an email and sent them along in small batches to my relative
  • In turn, I’d get emails back with links and questions about the legitimacy / believability of various claims made about certain supplements or herbs.  And I’d check them out to see the sources and citations and so forth and write back about each
  • Occasionally, they would find me online on Skype and reach out to me to chat about some additional things they had read.  In the process, we’d discover some more interesting resources to keep for future use, which I’d go copy into an open email or new email
  • Dozens of queries, hundreds of pages sifted, and many email threads later, we had collected dozens of links of use for my relative. They finally had the information they needed to make their own decision in concert with their doctor

Sound familiar again?  This too is an example of collaborative search in action today.

The problem is that this process is inefficient, time-consuming, and prone to redundant work (people running the same queries, seeing the same unhelpful sites, etc.), and at the end of it all, the useful information is spread across multiple emails and possibly some instant messaging / chat sessions, not easily discoverable or usable when you need to consult it later on.

Here are more examples at home or in other personal contexts, where I’ve run into this need:  Shopping for an appliance or a big ticket item;  Looking for a new home; Finding suppliers for a craft project;  Finding learning resources for gifted kids etc.

Plenty of such examples also exist in the academic context or business context as well.

What is common across all of these examples is that more than one person is involved in finding, collecting, organizing, sharing or using the information.  In other words, these are prime examples of collaborative searching, which cry out for a new breed of collaborative search tools.

So, yes, I think that the Web needs collaborative search tools now.  What do you think?

My startup Zakta is about to launch SearchTeam (sometimes mistakenly referred to as Search Team), a real-time collaborative search and curation engine.  It combines traditional search engine features with semantics, curation tools, and real-time and asynchronous collaboration tools to deliver the world’s first commercial tool for real-time collaborative searching with trusted people.  SearchTeam is designed from the ground up to enable users to search the Web together with others they trust, curating, sharing and collaborating on what they need on any given topic.  I’ll be sharing more information about this in the coming days and weeks.

Search Quality, SEO, The Google Farmer Update and The Aftermath

The declining search quality on Google

In the last couple of months, the online world seemed to be buzzing about Google’s declining search quality.

Google’s Response:  The Google Farmer Update

In late February, Google announced a major update to improve search result quality, and tighten the screws on content farms on the Web.  This algorithmic update, dubbed the “Farmer Update” (presumably because it tried to address the issue related to content farms) has created a scenario with winners and losers, and has also left a trail of devastation.

Analysis of the effects of the Google Farmer Update

Given that nearly 12% of search results were affected by this update, many industry experts have chimed in with analysis of winners and losers in the aftermath of this Google Farmer Update:

  • Google Farmer Update: Quest for Quality — SEO company SISTRIX published a list of big losers (based on their SISTRIX VisibilityIndex, calculated from traffic on keywords, ranking and click-through rate on specific positions). Web 2.0 company Mahalo.com is on the losers list, having lost, according to SISTRIX, nearly 70% of its top-ranking keywords on Google.
  • Number Crunchers: Who Lost in Google’s “Farmer” Algorithm Change? — Danny Sullivan wrote a comprehensive post analyzing winners and losers from this update, citing data from multiple sources.
  • Google Farmer Update: Who’s really affected? — SearchMetrics SEO blog shares analysis of specific sites that have been hurt badly in this update, including Suite101.com, Helium.com and others.
  • Google’s Farmer Update: Analysis of Winners and Losers — Rand Fishkin of SEOmoz shares his company’s analysis of the effect of this algorithmic update on the rankings of sites.  Of particular note is the analysis of possible factors that could have caused lost rankings.  Initial speculation is that factors like “user/usage data”, “quality raters’ inputs”, and “content analysis” are likely involved, and that “link analysis” of sites was not a likely factor.
  • Correlation between Google Farmer Update and Social Media Buzz — Liam Veitch at Zen Web Solutions has analyzed whether Google considered social buzz as a factor in determining which sites to whack or reward.  His initial analysis seems to support the hypothesis, but requires more study.
  • How Demand Media Used PR Spin to Have Google Kill Their Competitors — Aaron Wall at SEOBook.com presents a provocative analysis about how eHow (a service of Demand Media which had an IPO recently, and often cited in the context of content farms) not only escaped the Google axe with this update, but might actually be thriving in an environment where many competitors have been killed.

Collateral Damage?

There are a lot of sites that seem to have been caught up in this “cleanup” act of Google – collateral damage as it were, in Google’s act of slashing “content farms”.  Here are some sample comments from site owners on various blogs that are telling:

My Personal blog was almost completely removed from Google’s SERPs.

Searching for my name IN QUOTES will not pull up my Blog (url is the same exact as my name).

I didn’t do anything to my site, nor did I do any SEO (white or black hat) but my search traffic is now 1 hit a day from 15-20 a day.

Google probably axed a lot of innocents in this update.

BLueSS on SEOmoz.org

We made the Sistrix list and I am currently freaking out right now. We literally lost about 70% of our US-based traffic overnight. What’s worse, we are a discussion forum with editorial who employs absolutely no black hat techniques, no duplicate content, we’re really tough on spam, and I don’t know what on earth I can do to get back into Google’s good graces. I’m convinced we somehow got caught up in the mix because I was under the impression Google was targeting “content farms” and “Made-for-AdSense” sites, and not forums. In fact, like most forum owners, I was eagerly awaiting this update with anticipation because I thought it would help us sites that deliver 100% unique, quality content.

Dani of Daniweb.com on SearchEngineLand

Well, there may be anecdotal reports of recovery, but not for my site (freegeographytools.com; 4 years old, 100% original content all by me, no farming or scraping). Google referrals are still down 20%, and AdSense earnings are down 40%+, and the trend is downwards. Thanks, Google!

leszekmp on SearchEngineLand

The real impact on hundreds of thousands of small sites may not be known for a long time.  But this has turned out to be a situation where for every loser there seems to have emerged a winner – and whether the winner deserved to win, or the loser deserved to lose, will remain debatable for many sites for some time.

Is search quality all good now?

Turning to a question which I’m personally interested in: now that Google has deployed an update (that by some accounts was a year in development), is search quality all cleaned up?  Noted sites such as Technorati, Songkick and PRNewswire have all been hit in this recent update, and it is difficult to consider them similar to content farms.  So, personally, I’m not sure.  I’ve not run tests myself with the new update, so I can’t speak to this from personal experience yet.  Some commentators like AJ Kohn point out that this update was more about demoting content deemed of low quality, not promoting better content.  According to AJ Kohn, the results are different, not necessarily better. Joe Devon comments on a ReadWriteWeb post about this:

The new results are different, but not better. I think it has exposed that Google has an immense problem.

They’ve taken care of many of the open content farms…yes. But it just pushed up a bunch of scrapers that are being a little more low key than the content farms going public or selling for millions. Results are awful…

In the meantime, Google claims that it is working to help good sites caught by this cleanup operation.

The Google – SEO Industry dance

We’ve seen this dance before:

  • The SEO industry at large doggedly pursues the task of finding how Google’s ranking algorithms might be working, figures out loopholes in the process, and soon, large numbers of sites out there are exploiting those loopholes.
  • Search Quality declines, and the whining from users begins, and sometimes reaches a crescendo.
  • Google pays attention, comes back at it with some algorithmic updates, fixes some issues, opens up other issues, leaves some collateral damage along the way.
  • The SEO industry (again, I mean this broadly to include all manner of SEO specialists, white-hat, gray-hat, black-hat) goes to work again to learn about the ranking updates … and the cycle continues!

This is a classic cat-and-mouse game.

To think that a chunk of the business transacted online is dependent on this, or to consider that the livelihood of many small companies (maybe even larger companies too) or solopreneurs might depend on the outcome of this game at any given time – to me this is frightening!

What do you think?

Disclosure: Readers of this blog know that my startup Zakta, will soon officially launch SearchTeam, a real-time collaborative search engine that enables personal as well as collaborative content curation.  It represents a very different approach to solve the information search problem and the attendant search quality and search relevance issues.  I’ll be writing more about SearchTeam here in the coming weeks.

Newsflash: Searcher dies of old age waiting for “the next Google”!

The “next Google” is coming, but it isn’t what you think.  And if you are waiting for what you think is coming, you are not going to see it, ever!

I wrote a post recently about the current buzz around declining search result quality. I followed that with a post on what I thought were big problems with search that have still not been addressed. With all this, we can certainly agree that we have some pressing current issues with search, along with many unsolved problems and untapped opportunities.

Who’s going to solve all these search problems?

The vast amount of discussion about search is myopically centered on Google, as if it were the sole tool and the savior of humanity for all its search needs, as the current social attention on it shows.

The blue line is buzz about Google.  The orange line is buzz / discussion on Bing, Yahoo and other mainstream engines like Ask, AOL etc.  That little green line at the bottom of the graph above is the rather minuscule amount of buzz on any other search tools beyond Google and its mainstream cousins (including but not limited to new / alternative search engines, specialized search tools, specialized databases, and much more).  This is how dominant our discourse about Google and the other mainstream engines is.

This is particularly absurd and illogical as I see it!

Let me explain what I mean with this personal example and analogy.  I am not a handy person and usually call upon a handyman to fix problems in our home, whether they are electrical, plumbing, or other issues.  The handyman comes with an array of common tools into my home, and often when he doesn’t have a tool in his tool belt to solve a specific problem, he steps out to his vehicle and comes back with a more specialized tool or set of tools to get the job done.  How absurd would it be to imagine a world where the handyman has just one tool or a couple of tools in his tool belt and that’s it!  How would he fix all my varied fixit needs? It is by having a wide range of relevant tools that the handyman who steps into my home is able to deal with the wide range of fixit needs I have.

Not only is the obsession with a single tool / brand illogical, it is also quite dangerous.  Alan Patrick writes about this in his post “On the increasing uselessness of Google …….”:

Google is like a monoculture, and thus parasites have a major impact once they have adapted to it – especially if Google has “lost the war”. If search was more heterogenous, spamsites would find it more costly to scam every site. That is a very interesting argument against the level of Google market dominance

So, why do we really look only to Google to solve all our varied search problems?  Or when that doesn’t happen, seek a “better Google”, which is just as nutty?

Wouldn’t we be better serving ourselves by recognizing that we are trying to get one tool or a few tools from one company (or another) to solve all our past, present and future search problems?  Shouldn’t we be thinking about using the best search tools for different search jobs?  And then talking about our search toolset and educating others to think likewise?

Mass Google addiction?

Conrad Saam writes in his article “Google vs Bing: The Fallacy Of The Superior Search Engine” that in a test of search results quality, Bing bested Google. Even so, this hasn’t translated into a notable migration of searchers to Bing!  Such is the grip Google has on the mindset and habits of people.   (Sidebar: Hitwise recently reported that according to their “search success rate” metric, Bing and Yahoo were more successful than Google. This is the subject of some heated discussion and controversy, and I’ll write about it separately.)

In the same article, Conrad Saam hints at the notion of using a search toolset for his own search needs:

My personal search approach uses Google as the default while using other sites for specialty searches. On Bing, image search is far superior and Wikipedia for 101 style information.

Using a toolset of search engines and search tools just makes more sense.  Wouldn’t that be the right way to break free from the Google euphoria and addiction on the one hand, and from the Google disappointments, the disillusionment, and the continuous hankering after the next Google or a Google-killer on the other?

The mindset of a “search toolset”

A thoughtfully assembled search toolset can put together a wide range of useful search tools from the following:

  • General purpose search engines — Google, Bing and Yahoo, as well as Cuil, Blekko and others, would be included here
  • Vertical search engines and tools — Specialty engines from Auto, Travel, Health and a wide range of other verticals would be included here
  • Special purpose / Specialty search engines and tools — Fact search engines, Human curated collections, patent search engines, publication archives and much more would be included here
  • Other search engines and tools — A catchall for all sorts of current and emerging engines and tools
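The toolset mindset described above can be illustrated with a minimal query router that picks the most appropriate tools by category. The registry entries and keyword triggers below are made up purely for illustration; a real toolset would be personal, much richer, and far smarter about classifying queries:

```python
# A minimal sketch of the "search toolset" mindset: route each query
# to the tools best suited for it instead of one engine for everything.
TOOLSET = {
    "general": ["Google", "Bing", "Yahoo"],
    "travel":  ["Kayak", "Expedia"],
    "health":  ["PubMed"],
    "facts":   ["Wolfram|Alpha"],
}

# Hypothetical keyword triggers that map a query to a category.
TRIGGERS = {
    "travel": {"flight", "hotel", "itinerary"},
    "health": {"cholesterol", "diabetes", "treatment"},
    "facts":  {"population", "distance", "conversion"},
}

def pick_tools(query: str) -> list[str]:
    """Return the tools to try for this query, falling back to
    general-purpose engines when no specialty category matches."""
    words = set(query.lower().split())
    for category, keywords in TRIGGERS.items():
        if words & keywords:
            return TOOLSET[category]
    return TOOLSET["general"]

print(pick_tools("cheap flight to Toronto"))       # ['Kayak', 'Expedia']
print(pick_tools("history of search engines"))     # ['Google', 'Bing', 'Yahoo']
```

The point is not the routing logic itself but the habit it encodes: the right tool for the job, rather than one engine for every job.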

The “next Google”

As I see it, the “next Google” isn’t a single search engine replacement for the world’s #1 tool of choice.  It is a search toolset that is going to be as vibrant and as varied as what the world has embraced with the diversity of tools and applications on their mobile devices.

The sooner we usher in that mentality into our personal search practice, the sooner we are going to be part of the revolution that is coming ahead.

If you are a reader of this blog, you must have noted that I have a stake in this future.  My startup, Zakta, has created a specialized search tool for your toolset called SearchTeam, a real-time collaborative search engine.  It is perfect for all those times you have to search the Web with friends, classmates and colleagues, or just when you need to research the Web more deeply and efficiently.  It doesn’t aspire to be the next Google, but to be a highly desired part of the search toolset that will be the “next Google”.

About the Newsflash

No!  No one has died yet waiting for “the next Google”.  But they could, if they remain myopically anticipating a Google revival and return to the glory days of Google as the sole king of search, OR if they remain waiting for the Google-killer which will come and deliver them from all their search problems!  That isn’t happening, as far as I can see it.

But this is all just my opinion.

What do you think?

Beyond spam: Big Problems with Search

The current discussion around declining search quality on Google goes to the main bread-and-butter issue in organic search: How good are the search results on the first page?  And in this context, the discussion is dominated by the topics of search spam, content farms and gaming of the Google algorithm. That makes sense!

In my opinion, there are a lot of unaddressed “big problems” in search beyond fixing the spam issue.  I’m citing just a few of these here.

The content explosion: There is a growing diversity of content types, explosive growth of online content, and more multilingual content, all of which add to the complexity of what current and next-generation search engines need to handle. No single search engine is able to cover the complete set of information on the Web today, and this will remain a big challenge for search engines into the future.

Hidden content sources: Part of the content explosion continues to be the proliferation of specialized content sources and databases whose content we can’t readily discover through mainstream search engines. This phenomenon is called the Invisible Web or the Deep Web, first written about in the late ’90s (my previous startup Intelliseek delivered the first search engine for the Invisible Web in 1999), and it remains a big open issue. Attention on it has lessened only because of the sheer noise around other memes like social search, real-time search and so forth in the past few years.

Understanding user intent: Then there are age-old issues that haven’t been addressed around understanding user intent.  Much of the quality of search results has to do with not knowing what the heck the searcher really needs.  We are still feeding keywords into a single search box and expecting magic to happen on the part of the search engine to give us what we need.  Not finding our answers, more of us are doing longer queries, hoping that will give us the answers we need. In other words, we are compensating as users for something that search engines fundamentally do not understand today: our search intent.

Understanding the content: 16+ years since the first Web search engine, we are still processing textual information with little understanding of the semantics involved. Search engines do not understand the meaning of the content that they index. This is another factor limiting the quality of results a search engine delivers to users. For a long time, there’s been buzz about the semantic Web, which is supposed to usher in richer search and information experiences, starting from more meaningful data and sophisticated software that can make inferences from the data in ways that are not possible today. Hailed as “Web 3.0”, it is seen as the next phase in the evolution of the Web, and it is a realm of new problems and opportunities for search engines.

Handling user input: For the most part, search interfaces have continued to use the age-old search box for typing keywords. While promising work has been done on accepting natural language questions as input, nothing commercially viable has turned up that works at Web scale. Without solving this problem first, there is no hope of being able to speak to a search engine and have it bring back what you are looking for.

Presenting search results: The 10-results-per-page, read-only SERP interface that first came about in the mid-’90s is essentially what we are stuck with even today (granted, there have been recent touches like page previews / summaries, and videos / images shown along with links to pages / sites).  A retrospective look at this 2007 interview with usability expert Jakob Nielsen, which looks into possible changes in search result interfaces by 2010, is very revealing about the relatively slow pace of change with the SERP interface.  Others have attempted purely visual searches, and still others have tried to categorize / cluster search results. Still, what the mainstream search engines offer as an interface for consuming search results is not noticeably innovative.

Personalizing: For the most part, search results are one-size-fits-all.  Everyone gets the same results, regardless of their interests and connections.  Some attempts have been made to personalize search results, both based on a model of individual interests and on the likes / recommendations of a user’s social group, but that is a really challenging problem to solve well.  At Zakta, our Zakta.com service made the SERP read-write and personalizable. Other services have tried to bypass the search engine itself with Q&A services that flow through a user’s social network.

Leveraging social connections and recommendations: First generation attempts have been made to have search results be influenced by the recommendations of others in a person’s social circle. Some speculate that Facebook might be sitting on so much recommendation data that they might have a potent alternative to Google in the search arena.  Regardless, this remains an unsolved search problem today.

Facilitating collaboration in search: Web searching has been a lonely activity since its inception. Combined with the limiting read-only SERP interface, searchers have never really been able to leverage the work, findings or knowledge of others (including those they deeply trust) in the search process.  In the post-Web 2.0 world we are in, this remains a noticeable gap in search. One area of opportunity is for search engines to let people search together to find what they need.

Specialized searches in verticals and niches: For a while in the early and mid-2000s, the buzz was all about vertical search engines, and somehow that meme just faded away. The core reasons for the attractiveness of vertical / specialized search engines remain. Shopping, Travel, and plenty of other verticals could benefit from continued development of specialized search solutions that go beyond the mainstream search engine experience.

These are but a few examples of the many open “big problems” with Search.  Seeing this, we cannot but acknowledge that we are still in our infancy in meeting the search needs of an increasingly online, connected and mobile populace.

At Zakta, my startup, we are working on solutions for some aspects of these big search problems.  We are combining semantics, curation, and collaboration technologies with traditional Web searching to deliver a new search engine called SearchTeam.  Designed for collaborative search, and as a research tool for personal or collaborative curation of web content, SearchTeam, we hope, will become a very useful part of people’s search toolset.  At this time, SearchTeam is in private beta.

What do you think are open problems, big or small, with search engines?

The Buzz about Google’s Search Quality

We may have reached a tipping point in our tolerance of the declining quality of web search results on Google. At least that is how it appears with the growing commentary on the subject from influential bloggers, writers in the news media and searchers as well.

A meme in the making?

Anil Dash writes about the decline of Google search quality citing the negative experiences and observations of Paul Kedrosky, Alan Patrick, and Jeff Atwood:

What is worth noting now is that, half a decade after so many people began unquestioningly modifying their sites to serve Google’s needs better, there may start to be enough critical mass for the pendulum to swing back to earlier days, when Google modified its workings to suit the web’s existing behaviors.

Many more bloggers are chiming in about the same matter, as this BlogPulse Conversation Tracker listing shows.

Meanwhile, at LifeHacker, over 77% of readers say that Google’s search results are less useful lately:

We asked readers last week whether what influential bloggers said was true—that Google was losing the war against search result spam. Your response? More than three quarters found Google prone to spam, with one-third tagging the decline as significant.

Michael Rosenwald wrote in the Washington Post about the losing battle against spam in search results:

Google’s success rate, as measured by the percentage of users visiting a Web site after executing a search, fell 13 percent last year, according to Experian Hitwise, which monitors Web traffic. Microsoft’s Bing search engine increased its search efficiency by 9 percent over the same period.

Although there could be several reasons for the disparity, one is most certainly spam in Google’s results, analysts said.

“It’s clear that Google is losing some kind of war with the spammers,” said tech guru Tim O’Reilly, who often cheers Google’s technology. “I think Google has in some ways taken their eye off the ball, and I’d be worried about it if I were them.”

For years, Google’s organic search results have been experiencing a slow decline in quality. Paul Kedrosky writes about this in a recent blog post:

What has happened is that Google’s ranking algorithm, like any trading algorithm, has lost its alpha. It no longer has lists to draw and, on its own, it no longer generates the same outperformance — in part because it is, for practical purposes, reverse-engineered, well-understood and operating in an adaptive content landscape. Search results in many categories are now honey pots embedded in ruined landscapes — traps for the unwary. It has turned search back into something like it was in the dying days of first-generation algorithmic search, like Excite and Altavista: results so polluted by spam that you often started looking at results only on the second or third page — the first page was a smoking hulk of algo-optimized awfulness.

Would Google care?

One thing I personally do wonder about is just how important this issue is to Google in 2011 and beyond, compared to when they started out!  In my earlier post on this blog, I wrote about the ongoing relevance of search relevance to Google:

I have no doubt that Google’s relevance with organic search results will improve yet again, given the rise in negative commentary about it in influential pockets of the Internet.  However, the pertinent question to ask is this: Why would an advertising giant care more about relevance of organic search results any more than absolutely necessary to keep their revenue proposition intact?  Or, asked another way, is search relevance ever likely to be as relevant as ad relevance to Google?

Should Google care?

What is the practical threat to Google? It seems that all this isn’t really affecting Google’s stable search market share or its growing ad revenues in any meaningful way! But there are others who see 2011 as a turning point for Google’s invincibility.

Niall Harbison wrote recently about the perfect storm coming together to unhinge Google in 2011.  He cites real-time search, friend recommendations, Facebook’s Like button, the rise of spam, the possibility of categorized human knowledge, and Bing as some of the key factors that could unseat Google as the search king:

For years it seemed as if Google could do no wrong and the competition be it search start ups, Yahoo or Microsoft was generally batted away with disdain. The landscape has changed over the last 18 months though and Google faces a very real danger that it’s core product could come under threat and I think 2011 will be the year where we see the first cracks start to appear in Google’s once invincible search armor.

I am not ready to predict anything about Google’s future.  I have been a Google addict myself, and I greatly respect their talent and innovations to date.  I am interested in all this in a very deep and personal way.  I have been in the search engine space, dabbling with search engine technologies, since 1996. But more importantly, my startup, Zakta, is set to introduce an innovative search tool that we hope is very relevant to the problems we face with search today, and very useful as well.  SearchTeam.com is in private beta now, and offers unique ways to search the Web and curate what you need, personally or in collaboration with others you trust.

What is your take on this?

On The Ongoing Relevance Of Google Search Relevancy

“Google Sucks All The Way To The Bank!” declared SEO Consultant Jill Whalen in her recent blog post:

It was done gradually over many years, but Google now provides organic search results that often look relevant on the surface, but either lead to made-for-AdSense content pages or somewhat sketchy companies who are great at article spinning and comment spamming.

Matt Cutts even admitted at a recent conference that Google web spam resources had been moved away from his team.  While I doubt Matt himself was happy about this, those whose bright idea it was are likely laughing all the way to the bank.

Later in the article, Jill Whalen wonders if Google has gone too far in ignoring relevance issues with its core search results:

Since their poor results are being talked about with more fervor outside of the search marketing industry, it’s possible that they have indeed crossed the line. Numerous mainstream publications and highly regarded bloggers have taken notice and written about the putrid results. While Google is used to negative press, the current wave of stories hits them at their core — or at least what most people believe to be their core — their search results.

Even though today Google is technically just an advertising platform that happens to offer Internet search, they built their reputation on providing superior results. Because fixing what’s broken in the current algorithm can’t be very difficult for the brilliant minds that work at Google (Hint: ignore all anchor text links in blog comments, for one thing), we can only assume that they don’t want to fix them — at least not yet.

Google made its mark by providing relevant results really fast. This excellence is what killed AltaVista and the other search engines of the day, and it has also effectively stifled search engine innovation from outside Google to date.

Google continues to thrive despite these issues with its core search product.  Google doesn’t seem to be losing searchers readily, and its commanding market share remains intact.  Even Bing, with all its marketing muscle, some thoughtful innovations and the Yahoo search deal, hasn’t been able to lure many searchers away from Google. Why?

One reason for this is what I call the Google seduction.  In short, people are hooked on Google through years of familiarity, and even if there were legitimate alternative search engines for different needs, Google is their starting point on the Web and they can’t break the habit, at least not readily.

Another reason is that Google continues to introduce innovations in organic search results. An example is the continuous push to perfect the top result or two, which has now brought us “instant search results” (aka Google Instant).  It is awesome to get results in milliseconds as you type a query, but all of that simply makes Google addicts like me even more addicted, and masks the real cost of Google searches, as we put up with poor results for our more involved, serious, or commercially relevant searches.

Search guru Danny Sullivan wrote about this a few months ago in a post titled How the “Focus on First” helps hide Google relevancy problems. Danny gives very specific examples of how Google’s results aren’t always relevant, but points out how Google is saved by what he calls their “Focus on First”:

At its press conference, Google emphasized how people would move their eyes from what they entered into the search box to the first result that was listed, using that first result in a way to effectively judge if all the results they might get matched their query. Google’s really just got to make that first result hum, for most people, most of the time. If results 2-10 are so-so, it’s not a mission critical matter.

It shouldn’t be that way, however. We ought to get 10 solid results on the first page. That’s what I expect from Google. But maybe I expect too much. Maybe good is good enough, especially given how people search.

So, is this what we’ve come to with organic Web search results?  A good first hit, and then whatever else on the first page!  And we are to believe that this is how the vast majority of the world searches, and that satisfying them with a good first hit is enough!  Wow!  Unbelievable!

I have no doubt that Google’s relevance with organic search results will improve yet again, given the rise in negative commentary about it in influential pockets of the Internet.  However, the pertinent question to ask is this: Why would an advertising giant care more about relevance of organic search results any more than absolutely necessary to keep their revenue proposition intact?  Or, asked another way, is search relevance ever likely to be as relevant as ad relevance to Google?

Personally, I believe that relevance in search is ultimately for the individual searcher to judge.  While it is important for a good search engine to deliver a strong set of relevant results from the get-go, the Web has gotten complex enough that people will be better served by a better set of tools to help them find and curate what they need.

What do you think?

At Zakta, my startup, we have developed a new search engine called SearchTeam that lets people search the Web together with others they trust. Our approach to improved search relevance is to deliver a suite of tools that enables people to find, collaborate and curate information they need from the Web easily.  Stay tuned for more information about SearchTeam here, and at the official SearchTeam blog.

The Google seduction – Are instant search results blinding you to the real cost of your searches?

I’ll admit it.  I’m hooked on Google!

To start with, back in 1999, I was impressed with the speedy and mostly relevant search results delivered by Google.  I was mesmerized by their growing coverage, the advanced tools, and mostly by their simple, uncluttered, spartan interface that stayed unchanged for years.  I got lazier, and even more hooked, when Google started suggesting queries I could use; now I often don’t even type my query fully, picking from the query suggestions list instead.  My addiction to Google has only gotten worse with Google Instant, where I get results as I type (granted, this works well sometimes, and is downright irritating at other times, when it slows down my typing of the query).  There are delightful situations where I type in the name of a company, and I get the company’s web site, address, phone number and a map, ready for use: exactly what I needed.  Examples like this abound.  In all, these are the ingredients of the Google seduction!  Masterfully crafted tools and features that keep me hooked and coming back for more.

You know what I mean!  It’s highly likely, from seeing market share stats, that you, like me, are hooked on Google too!

If you’ve seen the previous posts here, you might have noticed that I’m an entrepreneur, and my team has created an alternative search engine that will be released to the market soon.  Why am I admitting to being seduced by and hooked on Google?  Because it is true, and it is a reality I expect to encounter again and again in releasing our brand new search engine to market soon.

As productive and satisfying as searches like these are on Google, there are other types of searches I do often that are downright frustrating and painful.

Example 1:

Take, for instance, the time when I wanted to find a good laptop suitable for my son, who was heading to college this past summer.  I was buying a laptop after a 2-year gap, so I had to find out what laptop technologies were out there, research specific features of interest to us, find reviews of models, find deals, etc.; the whole effort lasted many hours. During this time, I used Google out of habit, and had to wade through tons of irrelevant and commercially hijacked results.  It wasn’t easy keeping together the things I found interesting along the way, and it certainly wasn’t easy to share findings readily with my son.  In parallel, my son did some searches of his own, and we couldn’t easily get in sync with our efforts.  Irrelevant results, a lot of time spent sifting through them, no easy way to save what was useful, no easy way to share, no easy way to find things together, and no easy way to pause the searching process and continue where I left off.

Example 2:

Here’s another example.  Recently I had to research hypoallergenic sunscreen / sunblock solutions because my family had developed an allergy to something in traditional sunscreen products.  I recall spending over 8 hours, Googling over and over across dozens of different queries, poring over pages and pages of results, sifting through the gunk to isolate useful nuggets.  Since I had to do this across multiple sessions, I often had to repeat my searches and sift through the same results, often irrelevant, over and over again.  Post-it notes, clippings in a Word document, patchy email notes sent to other family members — this was my toolset for collecting, sharing and collaborating.  I did finally find 2-3 products, but it wasn’t easy.  And I’m an above-average searcher myself!

Example 3:

In late 2009, my extended family decided to have a family reunion in Florida.  In that context, we had to search for travel options, accommodations for 16 people, attractions, food and eating choices, ideas and a whole lot more.  Of course, I Googled over and over across many days doggedly, used emails to collect information and share it with other family members across the country, and eventually we did have a great reunion in the Orlando area.  But Googling offered little support in accomplishing the whole task!

If you think these searches are outliers, think of your own experiences researching the purchase of a gadget, an appliance or any big-ticket item.  Think about the time you started researching places for a vacation with family or friends.  Or when you had to find out more about a disease or medical condition and treatment options for a dear one.  Or the time when you had to find a supplier for a product / service at work.  The list of searches like these, where the search itself is a process and not something that yields an answer with a single query and the desired result on page 1, is quite large.  Whether these searches emerge from our needs as consumers, as students, or as business professionals, the problem is the same!  The instant gratification that seduced me and you into using Google, and has us addicted to it, doesn’t do a darned thing to help here.

I don’t know how to say it, except quite bluntly – Google isn’t designed for searches like these, at least not today! Neither are other mainstream search engines.

The irrelevant results, coming from the search result pages that have been hijacked for queries with a commercial intent, make it worse.  Check out Paul Kedrosky’s post: Dishwashers, and How Google Eats Its Own Tail, Alan Patrick’s post: On the increasing uselessness of Google and Jeff Atwood’s post: Trouble in the House of Google for more examples where Googling isn’t helping as it might have some time ago. The subject of declining relevance in search results is a separate topic that I’ll write more about later.

These searches are expensive in terms of time: your precious time and mine, lost in Googling over and over.  And yet, each one of us goes back to Google every so often for a need like this, and goes through the same time-wasting process again and again!

So, when you factor in these sorts of searches that you do, what is the real cost of your searches in all?

Seduced by the tools of instant gratification, I personally believe we have been enslaved into using Google (and the same could be said for regular Bing, Yahoo, Ask or AOL users too) even when it is not the right tool for the job.  “If the only tool you have is a hammer, everything looks like a nail,” the saying goes, and it seems that the vast majority of searchers are using the Google hammer all the time, even when it is clearly not the right tool for the job. The cost of doing so?  I don’t have concrete numbers to share yet, but I think it is safe to say that there’s a HUGE collective productivity loss from using the wrong tool for searching jobs like these!

What is your opinion on this matter?

My startup, Zakta, is set to launch SearchTeam, the world’s first real-time collaborative search engine soon.  By combining tools to search, collaborate and curate into a single integrated solution, we hope to provide a useful search tool for finding information like this individually, or together with friends, family members, colleagues or other trusted people. I’ll share more information about this in the coming days and weeks.

Is Google domination stifling search engine innovation?

Google is the undisputed leader in the search engine market.

US search engine market share, 6-month trend (Source: StatOwl)

As this trend chart shows, in the past few months, Google has lost a little bit of ground to Bing and Yahoo.  And that is after a mega spend on the Bing launch and the Yahoo-Microsoft deal.

For me, that is not the noteworthy observation in this chart.  It is that the “Other” group charted here holds only 0.02% or less of market share / usage!

And this is particularly interesting when you consider the continuous stream of well-funded search engine companies that have come into the market over time.  Take the well-funded and much-hyped Cuil, or the recently launched Blekko.  Or the veteran meta search engine DogPile. Or the clustering search engine Clusty. Or niche search engines like the discussion search engine Omgili, the people search engine Pipl, or the real-time search engine Topsy.  There are literally over one hundred such search engines in existence now, not to mention the many that have come and gone!

The combined effect of all the dollars poured into these companies, their creativity and their innovations, as measured by their actual impact on the market, is but a small, unnoticeable blip.

The dominance of Google, with Bing, Yahoo, AOL and Ask bringing up the rear to complete the canvas of “mainstream search engines”, seems to leave no room whatsoever for innovations from outside to thrive!

One could argue that none of the hundreds of search engine companies in the past decade offered a compelling enough alternative to Google.  Clearly that is true at many levels.  The sheer coverage of the Web that Google provides, the high-performance sub-second result pages, and the continuous stream of small innovations (like Wonder Wheel, query autosuggestions, Google Instant, etc.) maintain a high bar that not even the other “mainstream search engines” have been able to match and exceed consistently.

On the other hand, Google isn’t without flaws, holes and deficiencies.  As recent articles point out well, Google’s search results are manipulated every single minute.  What yielded superior results consistently back in 1999-2001 has been steadily compromised over the past 8-9 years, especially for queries with a commercial intent.  Relevance, the thing Google was originally most famous for, seems to be slipping away from it at this juncture.  This, and other factors like the growth of social media, social networking, real-time information, video and more, have made the search engine problem more complex.  In this complexity, and in these gaps, search engine startups see opportunities, and investors continue to invest money.

But will any of these search engine startups and their innovations really become part of the mainstream in terms of user adoption?

It does seem like a tall order.  The graph above speaks volumes about the rather poor odds of a new search engine becoming part of the mainstream!   It is in this vein that I wonder whether Google’s domination, followed by the four other “mainstream search engines”, stifles lasting search engine innovation from the broader market of search engine companies.  What do you think?

I have a vested interest in this matter.  My second startup, Zakta, is about to release a new search engine to market soon.  I’ll be writing more about this here in the coming days and weeks.
