Search Engines - What's in a Name?
by Brian D. Chmielewski
The term search engine is often incorrectly used interchangeably to
describe every device that allows you to locate information on the
Internet. Just as all brown sodas are not truly Cokes and all copying
machines are not truly Xeroxes, all searching devices are not truly
search engines. In reality, nearly all query sites can be placed into
a few generic categories: search engines, directories, yellow pages,
metacrawlers, free links pages and what's new announcement sites.
Disregard the titles for a moment, because the real difference that
we will concentrate on is in how listings are compiled.
Search engines create their listings automatically. A
true search engine gathers its database by accepting a web address
or URL. The engine then sends an electronic scout -- also known as a
webcrawler, spider, or robot -- to roam the Internet in search of the
respective URL. Upon locating it, the scout stores links to and
information about each page it visits in the engine's index. The scout
returns to the site on a regular basis to look for changes. This is
what is commonly referred to as a "spider-based" search.
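The crawl-and-index loop described above can be sketched in a few lines of Python. This is a toy illustration, not any real engine's crawler: the in-memory PAGES dictionary and the example.com URLs stand in for actual fetches over the network.

```python
from html.parser import HTMLParser

# Hypothetical in-memory "web" standing in for real HTTP fetches.
PAGES = {
    "http://example.com/":
        '<html><title>Home</title><a href="http://example.com/about">About</a></html>',
    "http://example.com/about":
        '<html><title>About</title><a href="http://example.com/">Home</a></html>',
}

class LinkParser(HTMLParser):
    """Collects the page title and every outgoing link the scout should follow."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def crawl(start_url, fetch):
    """Breadth-first crawl: store each page's title and links, queue new links."""
    index = {}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        if url in index:          # already spidered; skip revisits
            continue
        parser = LinkParser()
        parser.feed(fetch(url))
        index[url] = {"title": parser.title, "links": parser.links}
        queue.extend(parser.links)  # follow links found on this page
    return index

index = crawl("http://example.com/", PAGES.get)
```

A real scout would also honor revisit schedules and robots exclusion rules; the loop structure, though, is the same.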
When you begin a search by typing in a keyword, the respective search
engine will return results from its index based upon the greatest
similarity between your word and the scout's findings. Most search
engines return this relationship rating in a percentage, or relevancy
ranking, beside each result for a given search. As a business owner,
you want your pages to earn a high relevancy score because they will
then appear "near the top" of the search return list, improving your
odds of being chosen by the user and getting traffic to your site.
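The relevancy-ranking idea can be illustrated with a deliberately naive scoring function: the percentage of query words found in a page. Real engines weigh many more signals, so treat this only as a sketch of how a percentage score turns into a ranked result list; the page texts below are invented for the example.

```python
def relevancy(query, page_text):
    """Toy relevancy: share of query words that appear in the page (0-100%)."""
    query_words = query.lower().split()
    page_words = set(page_text.lower().split())
    if not query_words:
        return 0.0
    hits = sum(1 for word in query_words if word in page_words)
    return 100.0 * hits / len(query_words)

def rank(query, pages):
    """Return (url, score) pairs sorted best-first, like a results page."""
    scored = [(url, relevancy(query, text)) for url, text in pages.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

pages = {
    "http://example.com/shop": "buy cheap widgets online today",
    "http://example.com/blog": "a short history of widgets",
}
results = rank("cheap widgets", pages)
```

Here the shop page matches both query words (100%) and the blog page only one (50%), so the shop page lands "near the top."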
Getting listed at the top of every search engine simultaneously is
virtually impossible. While scouts from different search engines use
similar methods to gather information from your site, the engines differ radically
in search indexing and search software. This is why different search
engines return different results when searching with the identical
keyword. Among the most popular things that these electronic parasites
search for are your page's HTML codes -- particularly the META tag,
title tag and comment tag information -- and the full text of every page
at your site. According to The WWW Robot Page, scouts normally start
with a historical list of links, such as server lists, and lists of
the most popular or best sites, and follow the links on these pages to
find more links to add to the database. Without a doubt, this makes
most engines biased toward more popular sites.
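Since the META, title, and comment information mentioned above is exactly what a scout reads first, it is worth seeing what that extraction looks like. The sketch below uses Python's standard HTML parser on an invented sample page; it is an illustration of the idea, not any particular engine's parser.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Pulls out the pieces a spider typically reads: title, META tags, comments."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.metas = {}       # META name -> content
        self.comments = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.metas[attrs["name"].lower()] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

    def handle_comment(self, data):
        self.comments.append(data.strip())

# Invented sample page for illustration.
sample = ('<html><head><title>Widget Shop</title>'
          '<meta name="keywords" content="widgets,shop">'
          '<!-- best widgets in town --></head></html>')
extractor = MetaExtractor()
extractor.feed(sample)
```

Everything the extractor collects -- the title text, the keywords META tag, and the comment -- is material a business owner controls, which is why these tags matter so much for how a page is indexed.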
Along the same lines, search engines favor more recent submissions.
Sites that practice rigorous maintenance rituals, keeping their
pages fresh and live, will either resubmit their URL or be visited
more often by the scout. To successfully market your web site you need
to run an on-going campaign, just as you would for a product or service.
Keeping the scouts busy at your site improves your odds of remaining
near the top of the index for your keywords.
Sometimes it can take a while for new pages or changes to be added to
the index. Thus, a web page may have been "spidered" but not yet
"indexed." Until it is indexed -- added to the index -- it is not
available to those searching with the search engine. Some factors that
determine the actual indexing time are the size of the search engine's
database, its technological advantages, frequency of updates, employee
base and level of impartiality.
In the meantime, visit uPromote's Directory Listings
Service to index your site in the major search engines, directories,
yellow pages and more.