Has Curation Finally Arrived?

Curation, as in content curation, digital curation, search curation and so forth, has been all the rage in 2011.

But nothing says that curation has finally arrived quite like this recent Dilbert cartoon:

[Dilbert cartoon, via Dilbert.com]

Has curation reached the tipping point?  What do you think?

The Evolution of Social Search

I was going to write a post earlier this year about social search, and it was going to be titled: “Does anyone care about social search anymore?“.  I was genuinely wondering what had happened to the “social search” meme, which was all the rage in 2009!  As it turns out, I never did write that post.  And just as well.  You can see why in this BlogPulse trend graph below:

You will notice two spikes in the trend graph, one in mid-February, and another in early April.

In mid-February, Google announced deeper integration of social data from Twitter, Flickr, and Quora.  MG Siegler wrote this on TechCrunch about the mid-February social search update:

What Google is sort of downplaying as just an “update” to social search, is actually much more. Google is taking those social circle links at the bottom of the page, pumping them with social steroids, and shoving them towards the top of results pages. For the first time, social is actually going to affect Google Search in a meaningful way.

In early April, Google announced its +1 button to rival Facebook’s Like button.  I wrote about it in an earlier post on Social Search and Google +1.

… Google has demonstrated that they consider social signals as an important element of their ranking of search results.  So, does the Google +1 launch officially make Google a social search engine? 

After a long lull in “social search” buzz, we heard two big announcements related to social search from Google in the span of two months in 2011.  What does this mean for “social search”?  It is fair to say that “social search” is a real phenomenon, and one that is rapidly evolving.

By the way, other people have pondered the evolution of social search over the past few years, and here are a couple of earlier posts on this topic you might find interesting:

  • October 2010, Lauren Fisher, TNW Social Media: The Evolution of Social Search – Lauren wrote about the potential business impacts of the emerging social search phenomenon. Among the observations Lauren makes is this: “The impact that social search can have on the SEO industry is huge, and it represents a fundamental shift in the way this operates. While SEO has typically been a longer-term strategy, often taking weeks or months to see the fruits of your labour, social search has changed all that.” Clearly, we are seeing signs in the SEO market that the impact of social on search is a key part of modern SEO work.
  • March 2011, Jennifer Van Grove, Mashable: The Future of Social Search – Jennifer argues that since search is rapidly changing, so is social search, and that we should be thinking of social search in broader terms than just “socially ranked search results”.  Her parting remarks in this post: “We’re just now scratching the surface of what’s possible when one’s expanding social graph becomes intertwined with search. But as time goes on, the social search experience will be so fluid — it will seem more like discovering than searching — we won’t even know it’s happening.”

Here is my own take (thoughts and predictions) about the evolution of social search:

  • Social search, as we now know it, becomes a mainstream search engine feature:  It is evident that Google is fully integrating social signals to alter their search results ranking.  We can only expect this integration to go broader (more social signals) and deeper (better integration of social signals).  This will drive a flurry of interest and activity on the part of companies and content creators to learn and incorporate “social search” related elements in their own online content and marketing strategies.
  • Aggregate social signals will continue to impact search result ranking: I think that using aggregate social signals to alter search result ranking is an idea that is here to stay (it is what Zakta.com does), because it can be done in a way that delivers value without being destroyed by privacy or spam issues.
  • Social circle recommendations will aid a minority of search results:  I think that integrating recommendation signals from people in my social circle into my search results is interesting, but the percentage of queries for which a user’s social circle has a meaningful recommendation will be low. This is due to the very nature of the wide range of topics we typically search for, and the constitution of our social circles.
  • Privacy concerns will hamper broad adoption:  I think that a large percentage of users are going to be concerned about opening up their social circles, and the content flows within them, to mainstream search engines. In turn, this will be a hurdle for broad adoption of social circles into search.
  • Facebook social search will be here:  Social search won’t remain the bastion of search engines alone.  Facebook will be a huge player in this.  As I see it, Facebook has at least two major assets as it pertains to social search: (1) a growing base of registered users with their growing social graphs, and (2) an enormous, growing set of social signals fueled by a lot of social sharing within Facebook, its seemingly ubiquitous Like button, and the new social sharing widgets it is deploying in the market.  How long before we see an innovative “social search” tool from Facebook that leverages these massive assets?
  • Social search startups will innovate along different paths: Social search is a buzzword that has come to mean the incorporation of social signals in search results.  But that is a rather limiting view of what is possible when social and search are combined.  I think we can expect new solutions to enter the market that will vastly expand the definition and understanding of social search in the coming months and years, as social search startups innovate along paths not taken by mainstream search engines so far.
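To make the second prediction above concrete, the idea of folding aggregate social signals into ranking can be sketched as a score-blending step. This is a minimal, hypothetical sketch: the signal names (shares, likes, bookmarks), the weights, and the blending factor are all illustrative assumptions, not how Google, Zakta, or any real engine actually weights them.

```python
import math

# Hypothetical sketch of blending aggregate social signals into ranking.
# Signal names and weights are illustrative assumptions only.
DEFAULT_WEIGHTS = {"shares": 0.5, "likes": 0.3, "bookmarks": 0.2}

def social_score(signals, weights=DEFAULT_WEIGHTS):
    """Collapse raw social counts into one score; log1p damps huge
    counts so a single viral page cannot dominate purely on volume."""
    return sum(w * math.log1p(signals.get(name, 0))
               for name, w in weights.items())

def rerank(results, social_weight=0.2):
    """Sort results by base relevance blended with the social score."""
    def blended(r):
        return ((1 - social_weight) * r["relevance"]
                + social_weight * social_score(r["signals"]))
    return sorted(results, key=blended, reverse=True)

results = [
    {"url": "a.example", "relevance": 0.9, "signals": {"shares": 2}},
    {"url": "b.example", "relevance": 0.7, "signals": {"shares": 500, "likes": 300}},
]
print([r["url"] for r in rerank(results)])  # ['b.example', 'a.example']
```

Note how the heavily shared page outranks the nominally more relevant one; the damped `log1p` scale and a small `social_weight` are one way to let social signals matter without letting spammable counts take over, which is exactly the privacy/spam tension mentioned above.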

Talking of different paths of innovation with social search, here’s a shameless plug for what we are doing at Zakta, my startup.  There are two directions Zakta is taking that are different from mainstream approaches to social search:

  1. Curation:  I think that personal and social curation of search results is key to delivering relevance and ongoing value for informational searches.
  2. Collaboration: I think that real-time and asynchronous collaboration between trusted people (social circle / professional circle) is key to leveraging group knowledge and work as it pertains to informational searching and Web-based information research.

Zakta’s new service, SearchTeam, is a real-time collaborative search and curation engine based on these two principles, applied to informational search and Web-based research.  SearchTeam is not officially launched yet, but you can try it out today at SearchTeam.com.

What do you think about social search and where it is going?

Does the Web need Collaborative Search Tools?

Search engine interfaces have historically been designed to let an individual search the Web for their own needs.  In the more than 15 years since the first Web search engine hit the market, search engine use has become ubiquitous, and many searches are actually collaborative in nature. Yet search engines have remained in the domain of individual use.  Why are search engines designed only to be used alone?

Before answering this, I think it is useful to see whether search engines really are being used collaboratively today. Let us look at one example in a bit of detail.

Planning a vacation with friends / family: Whether it is spring break with friends or a summer vacation with family, vacation planning involves web searching along with communication, coordination and collaboration with friends or family members. When my family went on a summer vacation to Toronto, Canada recently, I had to engage my family members in the process, seeking input about places to go, places to stay, and myriad other details. Here’s how I ended up doing this job:

  • Suffering from Google addiction as many out there are, I googled many times to find interesting information about places in and around Toronto, day trips of interest, interesting places to stay etc.
  • I copied links of interest over into my email and pruned that list and would periodically pass it around for comments from the family
  • I visited many different specialty sites like Expedia, Travelocity, Hotels.com, Priceline, Kayak etc. to find possible flight itineraries, and places to stay
  • And in turn, I copied interesting links of places to stay, as well as possible travel itineraries, in email and sent that around for comments from the family
  • My wife or son would pass along interesting links via email along the way from some searches they did, or tidbits they heard from other family members / friends who had been to Toronto before.  Some more conversations would ensue.
  • Many iterations, many email conversations and many in-person conversations (where possible) later, and many days from when we started this process, we arrived at the decisions we needed.  We had firmed up an itinerary, places to stay, details of places to see, day trips to try out, and lists of links of interest for our visit (all scattered across multiple emails).

Does this sound familiar?  This is collaborative searching at work.  Albeit with search engines that weren’t built to support it.

Let’s look at another example in a little detail.

Researching a disease or medical condition: It is not uncommon these days to have a good friend or family member get diagnosed with a new disease or medical condition. That kicks off the process of trying to learn more about the disease or condition, finding treatment options, and finding ways to cope with it.  Recently, a relative of mine was diagnosed with high cholesterol and diabetes at the same time. They reached out to me for input on food or lifestyle changes that might help with managing the diseases alongside their regular mainstream treatment. They were keen to know about herbs or supplements that might help, or how methods like Yoga or energy healing might contribute to a faster return to wellness.  The process that ensued was like this:

  • I googled many different queries related to these conditions, and additional queries related to diet, nutrition, supplements / herbs, lifestyle changes, looking for good authoritative information that I could pass along
  • I started collecting links into an email and sent them along in small batches to my relative
  • In turn, I’d get emails back with links and questions about the legitimacy / believability of various claims made about certain supplements or herbs.  And I’d check them out to see the sources and citations and so forth and write back about each
  • Occasionally, they would find me online on Skype and reach out to me to chat about some additional things they had read.  In the process, we’d discover some more interesting resources to keep for future use, which I’d go copy into an open email or new email
  • Dozens of queries, hundreds of pages sifted, and many email threads later, we had collected dozens of links of use for my relative. They finally had the information they needed to make their own decision in concert with their doctor

Sound familiar again?  This too is an example of collaborative search in action today.

The problem is that this process is inefficient, time-consuming, and prone to redundant work (people running the same queries, seeing the same unhelpful sites, etc.). At the end of it all, the useful information is spread across multiple emails and possibly some instant messaging / chat sessions, and is not easily discoverable or usable when you need to consult it later.

Here are more examples at home or in other personal contexts, where I’ve run into this need:  Shopping for an appliance or a big ticket item;  Looking for a new home; Finding suppliers for a craft project;  Finding learning resources for gifted kids etc.

Plenty of such examples exist in academic and business contexts as well.

What is common across all of these examples is that more than one person is involved in the finding, collecting, organizing, sharing or using of the information.  In other words, these are prime examples of collaborative searching, which cry out for a new breed of collaborative search tools.

So, yes, I think that the Web needs collaborative search tools now.  What do you think?

My startup Zakta is about to launch SearchTeam (sometimes mistakenly referred to as Search Team), a real-time collaborative search and curation engine.  It combines traditional search engine features with semantics, curation tools, and real-time and asynchronous collaboration tools to deliver the world’s first commercial tool for real-time collaborative searching with trusted people.  SearchTeam is designed from the ground up to enable users to search the Web together with others they trust, curating, sharing and collaborating on what they need on any given topic.  I’ll be sharing more information about this in the coming days and weeks.

Beyond spam: Big Problems with Search

The current discussion around declining search quality on Google goes to the main bread-and-butter issue in organic search: How good are the search results on the first page?  And in this context, the discussion is dominated by the topics of search spam, content farms and gaming of the Google algorithm. That makes sense!

In my opinion, there are a lot of unaddressed “big problems” in search beyond fixing the spam issue.  I’m citing just a few of these here.

The content explosion: There is a growing diversity of content types, explosive growth of online content, and increasingly multilingual content, all of which add to the complexity of what current and next-generation search engines need to handle. No single search engine is really able to cover the complete set of information on the Web today, and this will remain a big challenge for search engines into the future.

Hidden content sources: Part of the content explosion continues to be the proliferation of specialized content sources and databases whose content we can’t readily discover through mainstream search engines. This phenomenon is called the Invisible Web or the Deep Web, first written about in the late ’90s (my previous startup, Intelliseek, delivered the first search engine for the Invisible Web in 1999), and it remains a big open issue. Attention on it has lessened only because of the sheer noise around other memes like social search, real-time search and so forth in the past few years.

Understanding user intent: Then there are age-old issues that haven’t been addressed around understanding user intent.  Much of the quality of search results has to do with not knowing what the searcher really needs.  We are still feeding keywords into a single search box and expecting the search engine to magically give us what we need.  Not finding our answers, more of us are writing longer queries, hoping they will give us the answers we need. In other words, we as users are compensating for something that search engines fundamentally do not understand today: our search intent.

Understanding the content: More than 16 years since the first Web search engine, we are still processing textual information with little understanding of the semantics involved. Search engines do not understand the meaning of the content they index. This is another factor that limits the quality of results a search engine delivers to users. For a long time there has been buzz about the semantic Web, which is supposed to usher in richer search and information experiences, starting from more meaningful data and sophisticated software that can make inferences from the data in ways that are not possible today. Hailed as “Web 3.0”, it is seen as the next phase in the evolution of the Web, and it is a realm of new problems and opportunities for search engines.

Handling user input: For the most part, search interfaces have continued to use the age-old search box for typing keywords as input. While promising work has been done on accepting natural language questions as input, nothing commercially viable has really turned up that works at Web scale. Without solving this problem first, there is no hope of being able to speak to a search engine and have it bring back what you are looking for.

Presenting search results: The read-only, ten-results-per-page SERP interface that first came about in the mid-’90s is essentially what we are stuck with even today (granted, there have been recent touches like page previews / summaries, and videos / images shown alongside links to pages / sites).  A retrospective look at this 2007 interview with usability expert Jakob Nielsen, which considers possible changes in search result interfaces by 2010, is very revealing about the relatively slow pace of change in the SERP interface.  Others have attempted purely visual searches, and still others have tried to categorize / cluster search results. Still, what the mainstream search engines offer as an interface for consuming search results is not noticeably innovative.

Personalizing: For the most part, search results are one-size-fits-all.  Everyone gets the same results regardless of their interests and connections.  Some attempts have been made to personalize search results, both based on a model of individual interests and on the likes / recommendations of a user’s social group, but that is a really challenging problem to solve well.  At Zakta, our Zakta.com service made the SERP read-write and personalizable. Other services have tried to bypass the search engine itself with Q&A services that flow through a user’s social network.

Leveraging social connections and recommendations: First generation attempts have been made to have search results be influenced by the recommendations of others in a person’s social circle. Some speculate that Facebook might be sitting on so much recommendation data that they might have a potent alternative to Google in the search arena.  Regardless, this remains an unsolved search problem today.
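A first-generation attempt of the kind described above can be sketched very simply: boost any result that someone in the searcher’s own circle has recommended. This is a hypothetical illustration; the boost value, data shapes, and example URLs are all assumptions, not any real engine’s method.

```python
# Hypothetical sketch: boost results recommended by people in the
# searcher's social circle. Boost value and data shapes are assumptions.

def circle_boost(results, circle_recs, boost=0.25):
    """Re-sort results, adding a fixed boost to any URL that someone
    in the user's social circle has recommended."""
    def score(r):
        return r["relevance"] + (boost if circle_recs.get(r["url"]) else 0.0)
    return sorted(results, key=score, reverse=True)

circle_recs = {"toronto-tips.example": ["alice", "bob"]}  # url -> recommenders
results = [
    {"url": "generic-travel.example", "relevance": 0.85},
    {"url": "toronto-tips.example", "relevance": 0.70},
]
print([r["url"] for r in circle_boost(results, circle_recs)])
# ['toronto-tips.example', 'generic-travel.example']
```

The friend-recommended page wins despite a lower base score, which also illustrates the limits noted earlier: for most queries `circle_recs` will simply be empty, and the ranking falls back to plain relevance.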

Facilitating collaboration in search: Web searching has been a lonely activity since its inception. Combined with the limiting read-only SERP interface, searchers have never really been able to leverage the work, findings or knowledge of others (including those they deeply trust) in the search process.  In the post-Web 2.0 world we are in, this remains a noticeable gap in search. One area of opportunity is for search engines to let people search together to find what they need.

Specialized searches in verticals and niches: For a while in the early and mid-2000s, the buzz was all about vertical search engines, and somehow that meme just faded away. The core reasons for the attractiveness of vertical / specialized search engines remain. Shopping, travel, and plenty of other verticals could benefit from continued development of specialized search solutions that go beyond the mainstream search engine experience.

These are but a few examples of the many open “big problems” with Search.  Seeing this, we cannot but acknowledge that we are still in our infancy in meeting the search needs of an increasingly online, connected and mobile populace.

At Zakta, my startup, we are working on solutions for some aspects of these big search problems.  We are combining semantics, curation, and collaboration technologies with traditional Web searching to deliver a new search engine called SearchTeam.  Suited for collaborative search, and as a research tool for personal or collaborative curation of Web content, SearchTeam, we hope, will become a very useful part of people’s search toolset.  At this time, SearchTeam is in private beta.

What do you think are open problems, big or small, with search engines?

The Buzz about Google’s Search Quality

We may have reached a tipping point in our tolerance of the declining quality of web search results on Google. At least that is how it appears with the growing commentary on the subject from influential bloggers, writers in the news media and searchers as well.

A meme in the making?

Anil Dash writes about the decline of Google search quality citing the negative experiences and observations of Paul Kedrosky, Alan Patrick, and Jeff Atwood:

What is worth noting now is that, half a decade after so many people began unquestioningly modifying their sites to serve Google’s needs better, there may start to be enough critical mass for the pendulum to swing back to earlier days, when Google modified its workings to suit the web’s existing behaviors.

Many more bloggers are chiming in about the same matter, as this BlogPulse Conversation Tracker listing shows.

Meanwhile, at LifeHacker, over 77% of readers say that Google’s search results are less useful lately:

We asked readers last week whether what influential bloggers said was true—that Google was losing the war against search result spam. Your response? More than three quarters found Google prone to spam, with one-third tagging the decline as significant.

Michael Rosenwald wrote in the Washington Post about the losing battle against spam in search results:

Google’s success rate, as measured by the percentage of users visiting a Web site after executing a search, fell 13 percent last year, according to Experian Hitwise, which monitors Web traffic. Microsoft’s Bing search engine increased its search efficiency by 9 percent over the same period.

Although there could be several reasons for the disparity, one is most certainly spam in Google’s results, analysts said.

“It’s clear that Google is losing some kind of war with the spammers,” said tech guru Tim O’Reilly, who often cheers Google’s technology. “I think Google has in some ways taken their eye off the ball, and I’d be worried about it if I were them.”

For years, Google’s organic search results have been experiencing a slow decline in quality. Paul Kedrosky writes about this in a recent blog post:

What has happened is that Google’s ranking algorithm, like any trading algorithm, has lost its alpha. It no longer has lists to draw and, on its own, it no longer generates the same outperformance — in part because it is, for practical purposes, reverse-engineered, well-understood and operating in an adaptive content landscape. Search results in many categories are now honey pots embedded in ruined landscapes — traps for the unwary. It has turned search back into something like it was in the dying days of first-generation algorithmic search, like Excite and Altavista: results so polluted by spam that you often started looking at results only on the second or third page — the first page was a smoking hulk of algo-optimized awfulness.

Would Google care?

One thing I personally wonder about is just how important this issue is to Google in 2011 and beyond, compared to when they started in 1998.  In my earlier post on this blog, I wrote about the ongoing relevance of search relevance to Google:

I have no doubt that Google’s relevance with organic search results will improve yet again, given the rise in negative commentary about it in influential pockets of the Internet.  However, the pertinent question to ask is this: Why would an advertising giant care about the relevance of organic search results any more than is absolutely necessary to keep their revenue proposition intact?  Or, asked another way, is search relevance ever likely to be as relevant as ad relevance to Google?

Should Google care?

What is the practical threat to Google? It seems that all this isn’t really affecting Google’s stable search market share or its growing ad revenues in any meaningful way! But there are others who see 2011 as a turning point for Google’s invincibility.

Niall Harbison wrote recently about a perfect storm coming together to unhinge Google in 2011.  He cites real-time search, friend recommendations, Facebook’s Like button, the rise of spam, the possibility of categorized human knowledge, and Bing as some of the key factors that could unseat Google as the search king:

For years it seemed as if Google could do no wrong and the competition be it search start ups, Yahoo or Microsoft was generally batted away with disdain. The landscape has changed over the last 18 months though and Google faces a very real danger that it’s core product could come under threat and I think 2011 will be the year where we see the first cracks start to appear in Google’s once invincible search armor.

I am not ready to predict anything about Google’s future.  I myself have been a Google addict, and I greatly respect their talent and innovations to date.  I am interested in all this in a very deep and personal way: I have been in the search engine space, dabbling with search engine technologies, since 1996. But more importantly, my startup Zakta is set to introduce an innovative search tool that we hope is very relevant to the problems we face with search today, and very useful as well.  SearchTeam.com is in private beta now, and offers unique ways to search the Web and curate what you need, personally or in collaboration with others you trust.

What is your take on this?

On The Ongoing Relevance Of Google Search Relevancy

“Google Sucks All The Way To The Bank!” declared SEO consultant Jill Whalen in her recent blog post:

It was done gradually over many years, but Google now provides organic search results that often look relevant on the surface, but either lead to made-for-AdSense content pages or somewhat sketchy companies who are great at article spinning and comment spamming.

Matt Cutts even admitted at a recent conference that Google web spam resources had been moved away from his team.  While I doubt Matt himself was happy about this, those whose bright idea it was are likely laughing all the way to the bank.

Later in the article, Jill Whalen wonders if Google has gone too far in ignoring relevance issues with its core search results:

Since their poor results are being talked about with more fervor outside of the search marketing industry, it’s possible that they have indeed crossed the line. Numerous mainstream publications and highly regarded bloggers have taken notice and written about the putrid results. While Google is used to negative press, the current wave of stories hits them at their core — or at least what most people believe to be their core — their search results.

Even though today Google is technically just an advertising platform that happens to offer Internet search, they built their reputation on providing superior results. Because fixing what’s broken in the current algorithm can’t be very difficult for the brilliant minds that work at Google (Hint: ignore all anchor text links in blog comments, for one thing), we can only assume that they don’t want to fix them — at least not yet.

Google made its mark by providing relevant results really fast. This excellence is what killed AltaVista and the other search engines of the day, and it also effectively stifled search engine innovation outside Google to date.

Google continues to thrive despite these issues with its core search product.  Google doesn’t seem to be losing searchers readily, and its commanding market share remains intact.  Even Bing, with all its marketing muscle, some thoughtful innovations and the Yahoo search deal, hasn’t been able to wean many searchers away from Google. Why?

One reason for this is what I call the Google seduction.  In short, people are hooked on Google through years of familiarity, and even if there were legitimate alternative search engines for different needs, Google is their starting point on the Web and they can’t break the habit, at least not readily.

Another reason is that Google continues to introduce innovations in organic search results. An example is the continuous push for the best top result or two, which has brought us “instant search results” (aka Google Instant).  It is awesome to get results in milliseconds as you type a query, but all that simply makes Google addicts like me further addicted, and it masks the real cost of Google searches, as we put up with poor results for our more involved, serious, or commercially relevant searches.

Search guru Danny Sullivan wrote about this a few months ago in a post titled: How the “Focus on First” helps hide Google relevancy problems. Danny gives very specific examples of how Google’s results aren’t always relevant.  But he also points out how Google is saved by what he calls their “Focus on First”:

At its press conference, Google emphasized how people would move their eyes from what they entered into the search box to the first result that was listed, using that first result in a way to effectively judge if all the results they might get matched their query. Google’s really just got to make that first result hum, for most people, most of the time. If results 2-10 are so-so, it’s not a mission critical matter.

It shouldn’t be that way, however. We ought to get 10 solid results on the first page. That’s what I expect from Google. But maybe I expect too much. Maybe good is good enough, especially given how people search.

So, is this what we’ve come to with organic Web search results?  A good first hit, and then whatever else on the first page!  And we are to believe that this is how the vast majority of the world searches, and that satisfying them with a good first hit is enough!  Wow!  Unbelievable!

I have no doubt that Google’s relevance with organic search results will improve yet again, given the rise in negative commentary about it in influential pockets of the Internet.  However, the pertinent question to ask is this: Why would an advertising giant care about the relevance of organic search results any more than is absolutely necessary to keep their revenue proposition intact?  Or, asked another way, is search relevance ever likely to be as relevant as ad relevance to Google?

Personally, it is my belief that relevance in search is ultimately for the individual searcher to judge. While it is important for a good search engine to deliver a strong set of relevant results from the get-go, the Web has gotten complex enough that people will be better served by a better set of tools to help them find and curate what they need.

What do you think?

At Zakta, my startup, we have developed a new search engine called SearchTeam that lets people search the Web together with others they trust. Our approach to improved search relevance is to deliver a suite of tools that enables people to easily find, collaborate on, and curate the information they need from the Web.  Stay tuned for more information about SearchTeam here and on the official SearchTeam blog.
