With the ever-growing concern over privacy online, it wasn’t a great shock that Google announced that their browser, Chrome, is moving to an entirely encrypted search service. Currently the beta version of the browser provides private search for logged-in users, and they’re quickly working towards making that the default for all users, signed into a Google account or not.
It’s a mildly distressing point when you drill down into your analytics, because at present somewhere between 20-30% of search traffic is coming up as “Not Provided”. Up until the last year or so, when a user conducted a search and clicked a result, our analytics tools would show the URL that “referred” the visitor to that page, and it would typically include what the visitor searched for. Now when someone performs that same search, the referring URL just looks like www.google.com. Analytics tools can’t provide a proper breakdown of traffic with that referrer, so instead they started reporting “not provided”. And when you’re dealing with online optimization, not knowing what your target audience is searching for can be a distressing blow, at least in the short term.
It’s highly likely that the number of users taking advantage of secure search will continue to grow, especially after Chrome makes it a default setting for all users of its browser. But just because the referrer is no longer being provided, it doesn’t mean all is lost. As a website owner, you’re losing the ability to easily see how search trends are shifting in your particular niche, but you can counter that simply by staying up to date with your clients and customers. You should be on the cusp of shifting trends in your industry if you expect to be a leader. Additionally, those search terms are not entirely lost, you just need to look in a different place. You should have Google Webmaster Tools set up to monitor your website, and within that toolkit you can see the last 90 days of search terms for your site, with up to 2000 key terms. The data isn’t gone, it’s just in a different spot, and with the utilization of your entire toolkit you can still find the answers you seek.
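The mechanics behind “not provided” can be sketched in a few lines. This is a minimal illustration, not how any particular analytics package is implemented: the referrer URLs and the `q` parameter convention below reflect how Google historically passed search terms, and the function name is ours.

```python
from urllib.parse import urlparse, parse_qs

def search_terms_from_referrer(referrer):
    """Pull the search query out of a referrer URL, if one is present."""
    query = parse_qs(urlparse(referrer).query)
    terms = query.get("q")  # Google historically carried the terms in "q"
    return terms[0] if terms else "(not provided)"

# A plain-HTTP referrer still carries the query string:
print(search_terms_from_referrer("http://www.google.com/search?q=ice+cream"))
# → ice cream

# A secure search strips it, so analytics sees only the bare domain:
print(search_terms_from_referrer("https://www.google.com/"))
# → (not provided)
```

Nothing about the visit itself changed; the query simply never reaches your server, which is why the terms have to be recovered from a different tool like Webmaster Tools.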
The new Facebook Graph Search has a grip on the social world of course, assisted on the web search side by Bing. It doesn’t take more than a few minutes of typing “Facebook Graph” into a search bar to find more than a few people hailing the new service as taking the fight to Google. The substance of the new Facebook feature, however, doesn’t have me convinced that Google has a whole lot to worry about.
The first part of the feature that struck me as a tad odd was the use of your social signals to deliver search results tailored to suit you. Based on your friends’ activity and online postings, your search becomes filtered, delivering you only the topics and trends which apply to you. It places you within a search bubble, and there has been a growing outcry against Google delivering personalized results for the last year and a bit, so why the introduction of a Facebook search should suddenly change people’s opinions strikes me as a little odd.
They put together a convincing trailer about Graph Search on their site, but the offering from Facebook left me rather unconvinced as to its full usefulness. With somewhere around a billion users, and with profiles nearly five times larger in friend count than they used to be (from 100 friends on average 5 years ago to 500 now), it seemed to me that the “searches” performed in the video would have been questions you already knew the answers to. Regardless of how many friends you might have on your list in Facebook, you already know and can identify those nearest to you who share your interests. You don’t need to search for them or what they may like, because you likely already know.
One of the greatest advantages to the introduction of a search service to Facebook, however, is the cost of advertising. Google has long been an incredibly dominant force in the web advertising space with its AdWords and AdSense programs, and depending on the terms you wanted to pursue, your costs could climb rapidly. The ideal revenue model for Facebook Graph Search would be the same idea, an advertising model that caters to what you may be searching for at the moment. On the surface that may look like it spells doom for Google and the way it does business, but it is actually a very good thing. Competition drives creativity, promotes change and spurs innovation. Google has always said they want more players to enter the search space, and Facebook is working on it now. We’ll have to keep watch to see just how far it can go.
Later today there is going to be an announcement, one that could change the way you conduct yourself online and will likely affect your friends and family as well. Late yesterday I saw word of the impending news, and in just a few hours the tech world will get its answer: just what could Facebook have planned?
There have been a lot of ideas thrown around about the future of Facebook, with discussions covering almost everything from phone hardware and/or software to search engines. In case you’ve been living under a rock, Facebook is the largest social site on the web, and with somewhere near a billion members for a user base they have the potential market to influence massive online change. Which option really makes sense for the company is anyone’s guess, but you can bet there is going to be a massive audience tuning in.
Facebook likely won’t be going down the road of building their own phone; while the company has a strong digital presence, it wouldn’t likely translate into as strong an audience in the hardware market. A great option, and one that makes sense especially since they recently closed their purchase of Instagram, would be to add video support to the platform. Instagram is already globally accepted as a way to rapidly share the photos you take, and it would make sense in a number of ways to offer the same feature for videos. Not only would this allow Facebook to monetize uploaded videos by placing ads in the stream, it would give YouTube, which seems to have been the dominant online video source forever, a reason to step up its game as well.
And then there’s the elephant in the room, the question that has been asked of Zuckerberg and the Facebook machine a number of times: are you building a search engine? When the question was pointedly asked, they have sometimes shied away by avoiding it, and other times answered no, not yet. Perhaps today is the day Facebook announces their own search engine, driven entirely by social signals?

Even if today is the day Facebook does let loose with a new search engine, or even a coming one, the true effect on the online landscape is unknown. A search engine driven by social signals would be prone to massive manipulation, both positive and negative, and with a user base of somewhere around a billion members, that’s a lot of leverage that can sway an algorithm one way or the other. The other question worth asking is what happens to the social links bestowed by people who remove their accounts, either by deleting or deactivating them? Over the last month in the UK more than half a million accounts were deleted from the service; what would happen to a search service if a mass migration hit the site? So many unknown variables. Stay tuned to the web in a few hours and the picture will begin to become clearer.
The next frontier Facebook needs to conquer is search. That would help it significantly expand revenues and, in turn, its market value. Search, I would say, is a very high priority for Facebook, and the announcement due Tuesday may well be exactly that. Facebook has an incredible treasure trove of unstructured data on the site, but can it finally put it to good use?
Research firm eMarketer estimates that Facebook, the No. 2 company in the U.S. mobile advertising market, had an 8.8 percent share last year, up from zero in 2011. That compared with No. 1 Google’s 56.6 percent. This year, Facebook is expected to grow its share to 12.2 percent while remaining far behind Google, but we all know the real dollars are in search.
Facebook’s biggest challenge, however, is potentially its most lucrative opportunity: a chance to topple Google as the king of search. Will that ever happen?
Since Google has been given the all-clear signal from the FTC on the charges of anti-competitive behaviour, the reins have loosened a bit for the company. To be completely fair, the evolution of search and its ever-present forward advancements should be evidence that the industry has never really stopped evolving.
Bing sold itself initially as a decision engine: conduct your search and you can make a decision then and there instead of digging through results pages. Then, just a short time later, they started to re-brand themselves again, into the “do” engine. It’s been a year since then, and while they’ve had their hiccups (and tantrums) along the way, they’re also growing and changing with the web. It’s not just the internet that’s evolving, with technologies like IPv6, fiber connections and whatnot; users are evolving and changing at just as frantic a pace. Bing recognized this, and has been trying to tap into the market of people who are ready to make a choice now. Google also recognized this in online users when they introduced their “instant” version of search results. Instant search begins displaying cached search results, if you have the feature turned on, as quickly as you can type your query. It was just one step of many to come, by both search engines, to engage a quickly growing user base: those who want information now, not just options to dig through.
So what’s to come with search in the future? No one really knows for sure, but Google and Bing both have their teams working furiously to embrace the changing landscape. Amit Singhal, Google’s head of search, went so far as to say:
I would be so bold to predict that in the next two years, you’ll have a conversational search engine that you can talk to like you’re talking to me.
As much as things change in the search world, however, for the time being there are a few points you need to continue to work with. Remember the basics, and follow the best practices guidelines for building and maintaining your website. Your keywords are important; you can’t just slam a ton of text on your site and expect the search engines to sort it out for you. It needs to be properly written and useful to your users for the engines to take notice. Your website titles should bear some relation to your business or service, but again, shouldn’t be filled to overflowing with keywords, as that’s a no-no in the guidelines. Your URL structure is important too, as it can be used to create quick, simple navigation for users and for crawlers as they go over your site. Having your pages properly named, instead of using query strings on a dynamic site, only helps your site gain brownie points online. As an example, which is easier to remember: a www.yourwebsite.com/about-us/ URL, or www.yourwebsite.com/?q=7s9b992? And lastly, though their influence is slowly fading, your meta tags still have some information to share with the search engines and your users, as you can lay out what keywords you deem important and the description used for your website. The future is definitely on the way for search, but you can’t move forward and completely forget what got you there in the first place.
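The readable-URL point above is easy to automate. Here’s a minimal sketch of turning a page title into a clean URL slug; the helper name and the exact cleanup rules are our own choices, and real CMS platforms apply more elaborate rules (accent folding, stop words, and so on).

```python
import re

def slugify(title):
    """Turn a page title into a clean, readable URL path segment."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse spaces/punctuation to hyphens
    return slug.strip("-")

print("www.yourwebsite.com/" + slugify("About Us") + "/")
# → www.yourwebsite.com/about-us/
```

The result is exactly the kind of URL a visitor can remember and a crawler can interpret, versus an opaque query string.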
An interesting point to notice about a search engine is just how many results are returned when performing a search. Between them, Google and Bing have indexed trillions of pages, an always increasing amount. Some articles have called it a problem that Google and Bing rarely display any results past the 1000 mark, even when they say they found 25,000,000 results for your query.
It isn’t so much a problem that they don’t display more than 1000 results; the question should really be, ‘Do they need to?’ The search engines like to pride themselves on delivering the most relevant results, based on what you’ve searched for, your past history and so on. If you’re a fan of a no-strings-attached type of search, a search engine like DuckDuckGo may be more up your alley, but the first point still remains: what purpose does it serve for a search engine to tell you it found millions of results, yet not show them to you?
Let’s take the following quick search on Google, for ice cream. We’re in Winnipeg, so we were returned the Wikipedia entry, and then we got into the local restaurants and dessert places that purvey ice cream. But when you look at how many results are returned, 463,000,000, is that entirely relevant? I don’t need that many results about ice cream; it’s not that varied a product, but Google says it has that many results. This is where some written pieces have said there is a problem: even though it says it has 463,000,000 results, I can’t browse past the first 1,000, even though so many have supposedly been returned. It’s more a personal preference, but some very basic math (default search results pages show 10 results) raises the question: why would I be looking on page 46 million to see what has been indexed about ice cream?
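The arithmetic behind that “page 46 million” remark is straightforward, assuming the default of 10 results per page and the roughly-1,000-result display cap described above:

```python
# Claimed result count vs. what you can actually page through.
claimed_results = 463_000_000
results_per_page = 10          # Google's default page size
display_cap = 1000             # engines rarely serve results past this

total_pages = claimed_results // results_per_page
browsable_pages = display_cap // results_per_page

print(total_pages)      # pages implied by the claimed count → 46300000
print(browsable_pages)  # pages you can actually reach → 100
```

Forty-six million pages claimed, one hundred reachable: the headline number is an estimate of index coverage, not an offer to show you everything.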
Where online marketing and your brand are concerned, you shouldn’t worry about how many indexed pages a search reports. There are sites and pages from the early 90s that can still be found, horrid where aesthetics and usability are concerned, if you’d really like to find them. The vast majority of search users don’t go to page 2, let alone page 3 or 4; chances are if their result hasn’t been found on their first search, they’re going to revise their terms and try again. Focus on your content, focus on being relevant, and focus on the basics. Don’t worry about the other 463 million results.
There’s been a defamation case in an Australian court where it was claimed that Google (knowingly) defamed someone by tying him to organized crime, in both organic and image search. Google was found guilty by a jury and has been ordered to pay a fine that amounts to about 30 seconds of work for them ($200,000), but it’s not the fine that has the company a bit worried, it’s the precedent it would set. Google is currently in the process of appealing the decision, so we’ll all have to wait to see what happens.
The case was launched over the search results, both organic and image listings, which showed the claimant with ties to the local crime scene. Google responded that they’re not in control of the results page, that they merely list what has been observed as popular search activity for the area. It sounds like a weak argument, but you can see how Google tracks these patterns by looking at Google Trends, which gives a very brief glimpse into the top searches of the last day or so.
Back to why it’s a bad idea, however, to hold Google accountable as a publisher rather than an information provider. The jury in this case decided that Google was guilty as a publisher because it created the page which delivered the false information, and the image pages served up when you search are Google-specific creations. As anyone with experience working with images online can tell you, the alt attribute can be used to give an image a text value, which can then be indexed by the search engines. The image results page has actually been the most recent target of black hat manipulators over the last couple of months, not only because of this feature but because it helps them get listed much quicker than pushing for listings in the center of the page.
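To make the alt-attribute point concrete, here is a rough sketch of how an indexer gets a textual handle on an image: it simply reads the alt text out of the page markup. The class name and the sample page below are our own illustration, not any engine’s actual crawler code.

```python
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Collect alt text from <img> tags, roughly the way an
    indexer derives searchable text for an image."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.alts.append(alt)

page = '<p>Dessert menu</p><img src="cone.jpg" alt="vanilla ice cream cone">'
collector = AltTextCollector()
collector.feed(page)
print(collector.alts)  # → ['vanilla ice cream cone']
```

Because that text is supplied by whoever wrote the page, not by the engine, it is an obvious lever for manipulation, which is exactly what makes the image results a soft target.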
Google being declared a publisher of the search results pages makes them accountable for the results that came up in search, even though they never actually created the content themselves. It’s happened a handful of newsworthy times in recent years, with Rick Santorum being the most recent victim of results page manipulation by spammers and other unscrupulous methods, but those results pages were driven by users and by the most frequently used search terms. Blaming any search engine, not just Google, for the aforementioned issue is like blaming your mechanic because your bus was late getting you to work. Once something is on the internet it’s also notoriously difficult to remove; ask any of the stars out there who have unflattering photos pop up from time to time. Once it’s online, it is forever. This also brings up the point of online brand protection and the importance of a positive relationship in the local scene; with proper brand management, mistakes like this can be caught and stopped before they begin.
The ruling sets a scary precedent, as if it stands it opens the door to an increasingly censored internet. Add into the mix that the ITU will be meeting in just a couple of weeks, and the issues of net neutrality and freedom of use and access start to look threatened.
It’s no secret that Google is the big kahuna where search is concerned, and they make enough money year after year that they should have their own printing press. But for the last year or so especially, Google has been the target of anti-trust and privacy complaints across the globe, with advocates pushing for more from the search giant: claims that it takes too long to clean up your past from the search engine, and blame directed at the provider for results deemed inappropriate.
The web is, at its core, a giant repository of everything. Pictures, videos, text, scripts, code and trillions upon trillions of 1s and 0s that make up websites and documents. It is often a strange sensation to go back to an old website you used to frequent, read some of your past ramblings and wonder, what was wrong with me, or, why would I write something like that? With the way the internet holds onto its history, you can often find information about anything, or anyone for that matter. You would be hard pressed to think up a legitimate search topic that wouldn’t appear on a search engine somewhere, and it’s highly likely that Google has it indexed and stored in one of its multitude of data centers across the globe.
It’s that level of access to information that seems to have the hackles of some of the population up, and has them calling for regulations on search engines. Soon it won’t be just Google caught up in these privacy and antitrust regulation talks; Google is being made an example of because they’re the biggest target out there, and so, who better to hit. The plain and simple point of contention, access to information, isn’t a search problem; I’d blame it more on a generational divide. The youngest users of the web, those 13-18 year olds, have grown up with 24/7 access to the web and all of its content, while the top end of the user range, the 65+ age group, sees the internet in a completely different way.
40 years ago, when a family went on vacation and took snapshots, they didn’t share them with 400 of their friends on a social network. It was maybe the 6-10 close family friends that they shared their details with, so they could control their information and had a semblance of privacy. Flash forward to now with the same family, and you have little sister posting pictures to Instagram and Facebook, while the 17 year old son is watching a streaming Netflix movie. Mom and dad are using a GPS navigation system with turn-by-turn directions, and are setting up a video chat with the friends they’re on their way to visit. Every time that photo is viewed on Facebook or Instagram, it’s being saved with another web address, in another location. Every time you’ve used Skype or your iPhone to conduct a video call, the connection and duration have been saved on a data server, and every movie or show you stream online has helped define your likes and dislikes with the service, so you can be shown a better targeted product at a later date. It used to be called personal accountability: if you didn’t want to be viewed in a certain way, you just didn’t act that way, and conducting yourself well has become even more important.
Privacy hasn’t disappeared, but it’s definitely not the same as it was 40 years ago, as a person living in the digital age you need to be acutely aware of your online conduct. Because everything you say, do, or post is saved somewhere. Google, Bing, Yahoo, and all of the other search engines just search for information. They do not operate with bias or under the control of some megalomaniac with a god complex who is out to control the world. All they do is take a mess of 1s and 0s, and display them in a way that a person can understand them. And just remember that the information that people are trying so hard to push Google to bury, erase and hide, can be found just as quickly on the other major search engines out there.
So the election is finally finished, and the winner has been decided. If for some reason you’ve been living in a cave the last couple of days, Obama took the crown and is set to begin his second term as President of the United States. And regardless of who you were rooting for, there were some interesting discoveries over the last couple of months of the battle, which have their roots in search.
A few days back, the Wall Street Journal ran a story about how Google was serving up results pages in what some thought was a strange coincidence. It seemed that even when signed out of a Google account, and on a cookie-free browser, the results when searching for Obama almost became personalized. The article even went on to say that the search engine was biased when searching for Obama and related news, with one story coming right out and saying that the candidates were being treated unfairly. While it would make for a great conspiracy story, the unexciting truth is that it’s just how the Google algorithm works. Google simply displayed results based on how people searched for terms, the example being
more people searched for “Obama” followed by searches for “Iran” than the number of people who searched for “Romney” followed by “Iran.”
That was the first interesting point, the second follows in a similar vein.
It’s not really news anymore that the candidates spent hundreds of millions of dollars on campaigning between them, but it was interesting to find that Obama outbid Romney on search ads online by nearly three to one. Both were bidding on big hitters like “2012 election” and “2012 presidential polls” to lead people to their campaign websites, but it was the incumbent President who owned the paid advertisements on the results pages. Sticking with the trend of online visibility, Obama had Romney beat across the board with more Facebook fans, website visitors and YouTube video views.
The largest demographic in the voting populace is shifting to a much younger, information-hungry crowd, so being findable online should be an integral cog in any party’s agenda. When you shake out all the numbers, from organic results to paid search, it looks like in the end Obama simply out-optimized his opponent, and that helped secure him a second term.
With Google making a gaffe and releasing their earnings numbers in the middle of the day as opposed to the end of the day, it caused a bit of excitement. So much so, in fact, that trading on their stock had to be halted, due to their earnings coming in lower than expected.
The market had already been aggressive with the stock, estimating positive growth in the company. With the final numbers coming in lower than what was expected, it caused the knee jerk reaction that the stock experienced. But just how is it, that one of the most powerful online properties failed to increase earnings when they picked up notable acquisitions like Motorola? Perhaps the answer isn’t as complex as it seems on the surface.
When it comes to search there is only a handful of (viable) options for being found online: Google, Bing, etc. But one of the avenues that mostly levels the search playing field is paid search, or PPC. Pay per click is almost the great equalizer, as it’s limited only by daily budget and doesn’t have any real bearing on age of domain or rely on heavy back-linking strategies; you just need to write a better ad than the other guy. The issue we’ve been seeing in the last 8 months or so is the cost per click on client campaigns: previous costs ran in the 35 to 40 cent range, where now we’re seeing increases to the 3 dollar plus range.
That makes it vastly more difficult for anyone who doesn’t have a budget of several hundred dollars per day, which equates to several thousand dollars per month. Short term gains are much more difficult for the small to mid-sized business owner, and who knows, maybe there’s a direct correlation with Google’s bottom line.
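The squeeze described above is just division. Using the CPC figures from our own campaigns and a hypothetical $3,000 monthly budget (the dollar amounts here are illustrative, not client data):

```python
# How the CPC jump changes what a fixed monthly PPC budget actually buys.
monthly_budget = 3000.00  # dollars; a hypothetical small-business budget

old_cpc = 0.40  # the 35-40 cent range campaigns used to see
new_cpc = 3.00  # the 3-dollar-plus range seen more recently

print(round(monthly_budget / old_cpc))  # clicks per month at the old rate → 7500
print(round(monthly_budget / new_cpc))  # clicks per month at the new rate → 1000
```

Same spend, roughly an 87% drop in traffic bought. That is the kind of pricing shift that makes the channel painful for small advertisers while fattening the platform’s revenue per click.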