Tagged with "internet"
While Google is undoubtedly the largest search engine on the web, with its trillion pages indexed, it is not the only tool out there with which to make your way around the web. But while there are hundreds of millions of web users out there, only a handful of search engines garner any real use.
Google, as mentioned previously, clearly holds the dominant spot online and has for a number of years. With more than two-thirds of the market share in search, it has a massive presence on the web. Given the clout they have with the worldwide market, any business that has a website is keen to try and make a place for itself on the front page. And the bigger the target, the more detractors one is bound to have, and Google definitely has the majority share. Between privacy issues, a social platform that (at first) floundered and has grown somewhat stale, and a long list of competitors claiming anti-competitive behaviour, it seems amazing that they could still be in business. But while they haven't made friends with every user on the web, 66% is more than enough.
The second most widely used search engine is really two: together, Bing and Yahoo gobble up the majority of the remaining search activity. For more than a year now, Yahoo's results pages have been provided by the Bing search bot, as opposed to Yahoo running its own bot and building its own index of websites on the web. And while this still allows the adopters of the Yahoo portal a way to browse the web, they're not being delivered Yahoo's own true results. The new CEO at Yahoo, however, seeks to change all of that; hopefully 2013 brings some shaking up in the search world. Bing, as a search service, has been trying hard for a couple of years to break into the market that Google dominates. Despite some clever ideas with image search, flyout snippets of search pages, and results that at times differ widely from Google's, Bing has a share of the market that hasn't shifted much in a number of years. Perhaps they can rekindle their search agreement with Facebook and together develop a full-fledged social search service; only time will tell.
In the last little corner of the search world, you have some of the little guys who are trying to shake up the web. Blekko, one of the more interesting search services out there, bills its results pages as being spam-free. Your experience will vary wildly based on what, and how, you search, but their use of what they describe as "slashtags" allows you to greatly fine-tune your search parameters. It's an interesting technology and definitely gives a different view of the web and its offerings. Another small fry in the search landscape, but one which caters to those concerned with privacy, is DuckDuckGo. It has the same clean search UI as the others, with a basic text input box, but it delivers results from outside the "search bubble" that, as they describe it, other search engines put you in. It's a great option for seeing what the web might look like with no search history to go on; the results can be interesting, to say the least.
The next frontier that Facebook needs to conquer is search. That would help it significantly expand revenues and, in turn, its market value. Search, I would say, is a very high priority for Facebook, and the announcement due Tuesday might well be about it. Facebook has this incredible treasure trove of unstructured data on the site, but can it finally put it to good use?
Research firm eMarketer estimates that Facebook, the No. 2 company in the U.S. mobile advertising market, had an 8.8 percent share last year, up from zero in 2011. That compared with No. 1 Google's 56.6 percent. This year, Facebook is expected to grow its share to 12.2 percent while remaining far behind Google, but we all know the real dollars are in search.
Facebook's biggest challenge, however, and potentially its most lucrative opportunity, is a chance to topple Google as the king of search. Will that ever happen?
When you've decided to build yourself a new site, whether due to needing an update or just looking for a new image, there's a very important step to watch. You need to ensure, before you get too far into the process, that you're not making a rookie mistake and allowing the search engines to index both versions of your website. Doing so can cause you grief and could ultimately see both websites penalized for duplicate content.
When you've begun working on the newest version of your site, you need to ensure it's not being indexed by the search engines, so you can work all you like without worry. The simplest way is to use your .htaccess file to block the bots; alternatively, if you have the means, you could work on a local server where the site isn't technically on the internet. Duplicate content can leave Google or Bing unsure which page to list in response to a search. The search engines suddenly have two versions of your website and content to consider, and need to determine which they feel is the more relevant of the two. Seeing as your old site originally had the content, you stand to injure your brand's reputation and new URL simply by working on a new site or look.
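As a sketch of that .htaccess approach (assuming an Apache server; the directives below go in the .htaccess file at the root of the development copy only, never the live site):

```apacheconf
# Block all visitors, including search bots, from the dev copy.
# Apache 2.4 syntax; on Apache 2.2 use "Order deny,allow" and "Deny from all" instead.
Require all denied
```

A gentler option is a robots.txt on the dev host containing `User-agent: *` and `Disallow: /`, though that only asks well-behaved bots to stay away rather than actually blocking them.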
Duplicate content isn't just a concern when you're working on your own website; it's actually something you should make a point of occasionally monitoring. A bothersome trait, and a difficult problem to tackle, is your own, original content being scraped by a bot and winding up on an aggregator site. You can check for this by searching for key phrases and terms you've used within the content and/or title, and hopefully the only sites which come up are your own, or those you've given permission to reproduce it. Typically scraper sites don't rank that highly in search anymore, however there are still occasions where they show up higher in the results than the original creators. When this happens, you often become trapped in a tedious cycle of trying to get the scraped copies of your hard-earned content removed from the index, and of having credit given where credit is due.
There are a few basic rules and ideas you should always keep in mind when working on the web. Sometimes, no matter how often you've done the same steps before, you make a mistake. Depending on the severity, you can take down a website, mess up a web page, or make just minor little code mistakes which break your page layout in the odd browser.
One of the most basic points to keep in mind while working on your website is to keep it simple. A less repeated, but just as important, lesson is to always back up your work. No matter how basic or simple your steps may be, you should always keep a backup before you push your changes live. Not keeping a backup of your original site or content before getting to work on it is a simple mistake, and one which can cost you more work if you're not careful. Even seasoned coders make mistakes, and when they happen, a blog for example *cough* can be offline until a backup can be restored.
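A minimal sketch of that backup habit, assuming a Unix-like host where the site lives in a `site/` directory (the paths and names here are placeholders, not a recommendation for any particular tool):

```shell
# Archive the current site into a dated tarball before touching anything.
SITE_DIR="site"
mkdir -p "$SITE_DIR"   # placeholder so the sketch runs standalone; a real site dir already exists
BACKUP="backup-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -czf "$BACKUP" "$SITE_DIR"
echo "Saved $SITE_DIR to $BACKUP"
```

Restoring after a bad push is then just `tar -xzf` on the most recent archive.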
But enough about completely crashing a website or losing content and materials; there are small errors you can make which can hamper your site without being as immediately obvious. If you've been rewriting your simple tags, say your title, description and keywords (yes, I know, the internet says they don't really matter anymore), and you happen to mix them up with the wrong content, you could see a negative impact on your rankings. And even the loss of a single position in the search results can equate to lost conversions. Another common error, one which doesn't directly impact your rankings and website performance but is a tad more difficult to detect, is mis-tagging elements on your pages. It may seem a small and innocuous step to miss in a website or page, but every little thing does add up. And when it comes to optimization and your online competition, every little bit helps.
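For reference, the head tags in question look something like this (the site name and wording are placeholder examples only):

```html
<head>
  <!-- Shown in the browser tab and as the headline of a search result -->
  <title>Acme Widgets | Hand-made widgets since 1999</title>
  <!-- Often used as the snippet shown under the search result -->
  <meta name="description" content="Hand-made widgets, shipped worldwide.">
  <!-- Largely ignored by modern engines, but still commonly filled in -->
  <meta name="keywords" content="widgets, hand-made, acme">
</head>
```

Mixing these up, say pasting one page's description into another page's title, is exactly the kind of error that's easy to miss and quietly costs positions.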
It should be no surprise to anyone out there that Google has their share of privacy concerns. People worry about their search history and their emails being read; for some who use Google's browser, the worry extends to their entire online activity profile. Everybody has always assumed that Google knew what you were doing and kept track of everything, and they never really helped their case by saying either way. But now you can get an insight into just how much Google does know about you.
Earlier today, Google announced a new service they call Account Activity, which does exactly as its name suggests. For users who opt in to the service, once a month Google will send you a report about the information it has collected on you for that month while you were signed into your Google account. Being ever curious, I opted in for the report, and a few hours later I received all of the data that had been gathered about my activity. Bearing in mind that I also use an Android device, the amount of data that could be collected about my usage is quite large. Yet when I went through the report, I found the information was vague at best, at least in terms of what they keep. It tracked the top three people I email, how many emails I had coming and going (note: not the content thereof), and the devices and platforms I've used while signed in. Way down at the bottom of the report is Web History, and since I'd opted out of allowing them to collect any data there, it was completely blank.
Since Google unified their privacy policies across their products, there seemed to be a sudden surge of concern about what data Google collects about their users. Personally, it was never a concern, because while true privacy online doesn't exist, as a user you still have an incredible amount of control over what information you share with the world and with the services out there. The disconnect between reality and paranoia occurs where people stop reading about their services and just run amok with what's trending, whether on Twitter, Facebook, or any other social media network. Every service on the internet, not just Google, every single one, is only viable because of the users who share information with them. Even if it's something as simple as a username, without even that fragment of information they couldn't exist. The next time you read about some internet company stealing your information or selling it to third parties, instead of jumping on the bandwagon, have a look at your settings if you're a part of the network. It's the user who has the control at the end of the day; if you don't want to be a part of a service, leave it.
In all the ruckus made about the privacy issues people keep bringing up, it always comes back to the same question: if you're so unhappy, why don't you just stop using it? The real issue with privacy and being online, one the vast majority don't, or won't, realize, is that it doesn't truly exist. If you want your information to be private, never sign anything. Never use the internet, don't get an email address, and move to a mountainside. And even then, even if you lived all alone in a shack on the side of a mountain, if someone sees you and writes a blog post about you, sorry, no more privacy. All you can do to maintain control online is be aware of the sites you use, what their policies are, and what those policies change to when they change. Google didn't change anything about how they do their work; they simply streamlined it to make it easier for the user, and for themselves. Facebook, Apple, Microsoft, Yahoo: all massive companies, all of which became that way because you've used their products and given them your information. Companies don't grow like trees; they grow with your personal, private information.
There's a new type of search engine making its debut on the web, dubbed Trapit. It's unique in its own right simply because of the premise it has been built on: by learning what it is that you search for, it delivers similar results for you to look through.
It's not an unheard-of idea, or even really a unique one at that; Trapit, however, takes it a step further and tries to make educated guesses as to your preferences. It's the same kind of algorithm that Apple's new Siri technology uses to deliver your answers to you as you ask for them. While Trapit specifically typecasts itself as a discovery engine, not a search engine, that doesn't take what they have deemed an upcoming competition with Google out of the picture. Trapit co-founder Gary Griffiths called Google an online yellow pages, saying that it works well for direct queries but not for getting to new content.
It's an interesting idea and a different perspective on delivering search results, to be sure. But it's rather curious that general users are, so far, okay with the way Trapit works. The puzzlement comes from remembering that the public enjoys having its privacy protected, as it should, and that more than one concern or complaint has been registered in Google's realm about privacy and about how your search terms are saved and/or indexed as part of your search history. My question to the early adopters and testers of Trapit would then be: how do you expect Trapit learns what you may enjoy? It saves your searches, either in a cookie on your computer or within their members database, and extrapolates from there via its algorithm.
But then again, it seems it's alright for a little player out there to have access to your searches and (potentially) your information, but not the big guys, who are frequently held accountable. Perhaps it's just another case of wanting to eat your cake and have it too.