
Table of Contents

Acknowledgement
Preface
1. Abstract
2. Introduction
   2.1 What is a search engine?
   2.2 How a search engine works
3. Problems and Limitations of Search Engines
4. Search Engine Optimization
5. A New Way: Meta Search Engines
   5.1 Working algorithms
   5.2 Implementation
6. Recent Trends of Technology
7. Case Study of Some Meta-Search Engines
8. References
9. Bibliography

ABSTRACT
As World Wide Web (WWW) based Internet services become more popular, information overload also becomes a pressing research problem. Difficulties with searching on the Internet get worse as the amount of available information increases. A new approach to building an intelligent personal spider (agent), based on automatic textual analysis of Internet documents, is proposed. These personal spiders are able to dynamically and intelligently analyze the contents of the user's selected homepages, using them as the starting point to search for the most relevant homepages based on links and indexing. It is straightforward to define an evaluation function that is a mathematical formulation of the user request and to define a steady-state genetic algorithm; new individuals are created by querying a standard search engine.

1. Introduction

To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms, and they answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and the proliferation of the web, creating a web search engine today is very different from creating one three years ago.

1.1 What is a search engine?


A Web search engine is a tool designed to search for information on the World Wide Web. The search results are usually presented in a list and are commonly called hits. Internet search engines are special sites on the Web that are designed to help people find information stored on other sites. There are differences in the ways various search engines work, but they all perform three basic tasks: They search the Internet -- or select pieces of the Internet -- based on important words. They keep an index of the words they find, and where they find them. They allow users to look for words or combinations of words found in that index. A top search engine will index hundreds of millions of pages, and respond to tens of millions of queries per day.

1.2 How does a search engine work?


To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages. How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web. Words occurring in the title, subtitles, Meta tags and other positions of relative importance were noted for special consideration during a subsequent user search.

A search engine operates in the following order: 1. Web crawling, 2. Indexing, 3. Searching.

1.2.1 Web Crawling


Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees. Exclusions can be made by the use of robots.txt.
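To make the crawling process concrete, here is a minimal sketch of such a spider in Python. It is an illustration only: the seed URL, page limit, user-agent name and helper functions are assumptions of this sketch, and the third-party requests and BeautifulSoup libraries are used purely for convenience.

# Minimal illustrative crawler: follows links breadth-first and honors robots.txt.
import urllib.robotparser
from collections import deque
from urllib.parse import urljoin, urlparse

import requests                      # third-party: pip install requests
from bs4 import BeautifulSoup        # third-party: pip install beautifulsoup4


def allowed(url, agent="demo-spider"):
    """Check robots.txt before fetching (the Robot Exclusion Rule)."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(url, "/robots.txt"))
    try:
        rp.read()
    except OSError:
        return True                  # robots.txt unreachable: assume allowed
    return rp.can_fetch(agent, url)


def crawl(seeds, max_pages=50):
    """Breadth-first crawl starting from a few popular seed pages."""
    queue, seen, pages = deque(seeds), set(seeds), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if not allowed(url):
            continue
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        pages[url] = html
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages


if __name__ == "__main__":
    print(len(crawl(["https://example.com/"])))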

1.2.2 Building the Index


The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called Meta tags). Data about web pages are stored in an index database for use in later queries. An index has a single purpose: It allows information to be found as quickly as possible. There are quite a few ways for an index to be built, but one of the most effective ways is to build a hash table. In hashing, a formula is applied to attach a numerical value to each word.
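A minimal sketch of such a hash-based index in Python is shown below; the sample page data and the choice of Python's built-in hash as the "formula" attaching a number to each word are assumptions made only for illustration.

# Sketch of index building: each word is reduced to a numerical key by hashing,
# and the index maps that key to the pages (and positions) where the word occurs.
from collections import defaultdict


def build_index(pages):
    """pages: {url: list of words already extracted from titles, headings, meta tags, body}."""
    index = defaultdict(list)
    for url, words in pages.items():
        for position, word in enumerate(words):
            key = hash(word.lower())        # the "formula" attaching a number to each word
            index[key].append((url, position))
    return index


pages = {
    "http://example.com/skydiving": ["Skydiving", "101", "skydiving", "equipment"],
    "http://example.com/cooking":   ["Cooking", "basics", "recipes"],
}
index = build_index(pages)
print(index[hash("skydiving")])      # -> occurrences of "skydiving" with their positions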

1.2.3 Searching
When a user enters a query into a search engine (typically by using key words), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text.
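The lookup step can be sketched as follows; the tiny hand-built index and the occurrence-count scoring are illustrative assumptions, not the ranking criteria of any real engine.

# Sketch of query answering: look each query word up in the index and rank pages
# by how many query-word occurrences they contain (toy scoring).
from collections import Counter

# A tiny index of the same shape as in the previous sketch: hashed word -> [(url, position), ...]
index = {
    hash("skydiving"): [("http://example.com/skydiving", 0), ("http://example.com/skydiving", 2)],
    hash("equipment"): [("http://example.com/skydiving", 3)],
    hash("recipes"):   [("http://example.com/cooking", 2)],
}


def search(index, query):
    scores = Counter()
    for word in query.lower().split():
        for url, _position in index.get(hash(word), []):
            scores[url] += 1
    return [url for url, _ in scores.most_common()]


print(search(index, "skydiving equipment"))   # best-matching pages first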

1.3 Search Engine Optimization


Search engines are successful only if they provide a user with links to the best Web sites related to the user's search terms. You just have to find a way to show search engines that your site belongs at the top of the heap. That's where search engine optimization (SEO) comes in -- it's a collection of techniques a webmaster can use to improve his or her site's SERP position.

1.3.1 How Search Engine Optimization Works


SEO techniques rely on how search engines work. Some are legitimate methods that are a great way to let search engines know your Web page exists. Other techniques aren't good ways to get noticed and might involve exploiting a search engine so that it gives the page a higher ranking. Sometimes it's tough to tell if an approach is legitimate. If it seems a little questionable, it's probably a bad idea.

1.3.2 How can a site be optimized for search engines?


Most search engines use computer programs called spiders or crawlers to search the Web and analyze individual pages. These programs read Web pages and index them according to the terms that show up often and in important sections of the page. There's no way for a search engine spider to know your page is about skydiving unless you use the right keywords in the right places.

1.4 Introduction to Genetic Algorithms


Genetic algorithms are a part of evolutionary computing, which is a rapidly growing area of artificial intelligence. Genetic algorithms are a particular class of evolutionary algorithms (also known as evolutionary computation) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes or the genotype of the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions.
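As a concrete illustration of these ideas, the sketch below evolves a population of bit-string chromosomes toward a toy fitness function (the number of 1 bits); the population size, rates and fitness are arbitrary assumptions chosen only to show selection, crossover and mutation.

# Minimal genetic algorithm: binary chromosomes evolve toward all-ones (toy fitness).
import random


def fitness(chrom):
    return sum(chrom)                       # count of 1-bits


def evolve(length=20, pop_size=30, generations=50, pc=0.7, pm=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # selection: binary tournament
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # crossover (recombination)
            if random.random() < pc:
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # mutation: flip each bit with small probability
            child = [bit ^ 1 if random.random() < pm else bit for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)


print(fitness(evolve()))                    # approaches 20 as the population evolves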

2. Problems and Limitations of Search Engines:


Maintaining freshness with respect to the change frequency of the web is a gargantuan task. In the current information age, the web is growing at a very rapid pace, while the indexing of current search engines is not scaling up at the same pace, resulting in the loss of access to a good fraction of the documents on the web. Current technology is inadequate for indexing the entire web. Crawling consumes huge bandwidth, and crawlers consume the majority of web server time. The same resources can be fetched many times due to mirroring and aliasing. There are several limitations to using web crawlers to collect data for search engines: they are not scalable, updates are slow, the hidden (deep) web is missed, the robot exclusion rule restricts access, and maintenance costs are high. A successful search engine system requires a large data cache with tens of thousands of processors to build inverted text indices, to measure page quality, and to execute user queries. Centralized systems present a single point of failure. Failures may be network outages, denial-of-service attacks, or censorship by domestic or foreign authorities.

Client/server network architectures, because of their focus on the server, create a bottleneck and are therefore not fault tolerant. They also have a complex architecture and suffer delays across remote networks. Information overlap: during this research, searching for different words in various search engines returned a large number of results, and surely most of them do not fit the user's real requests. Information on the web is also heterogeneous and distributed.

3. Search Engine Optimization


How Search Engine Optimization Works

One of the most reliable ways to improve traffic is to achieve a high ranking on search engine return pages (SERPs). Search engines are successful only if they provide a user with links to the best Web sites related to the user's search terms. You just have to find a way to show search engines that your site belongs at the top of the heap. That's where search engine optimization (SEO) comes in -- it's a collection of techniques a webmaster can use to improve his or her site's SERP position.

SEO Overview

Without strong content, SEO tips and tricks will provide a temporary boost in your site's ranking at best. SEO techniques rely on how search engines work. Some are legitimate methods that are a great way to let search engines know your Web page exists. Other techniques aren't good ways to get noticed and might involve exploiting a search engine so that it gives the page a higher ranking. Sometimes it's tough to tell if an approach is legitimate. If it seems a little questionable, it's probably a bad idea.

To improve a Web page's position in a SERP, you have to know how search engines work. Search engines categorize Web pages based on keywords -- important terms that are relevant to the content of the page. In our example, the term "skydiving" should be a keyword, but a term like "bungee jumping" wouldn't be relevant. Most search engines use computer programs called spiders or crawlers to search the Web and analyze individual pages. These programs read Web pages and index them according to the terms that show up often and in important sections of the page. There's no way for a search engine spider to know your page is about skydiving unless you use the right keywords in the right places.

Here are some general tips about keyword placement: One place you should definitely include keywords is in the title of your Web page. You might want to choose something like "Skydiving 101" or "The Art of Skydiving."

Another good place to use keywords is in headers. If your page has several sections, consider using header tags and include important keywords in them. In our example, headers might include "Skydiving Equipment" or "Skydiving Classes." Most SEO experts recommend that you use important keywords throughout the Web page, particularly at the top, but it's possible to overuse keywords. Your skydiving site would obviously use the word "skydiving" as a keyword, but it might also include other keywords like "base jumping" or "parachute." If you use a keyword too many times, some search engine spiders will flag your page as spam. That's because of a black hat technique called keyword stuffing, but more on that later. Keywords aren't the only important factor search engines take into account when generating SERPs. Just because a site uses keywords well doesn't mean it's one of the best resources on the Web. To determine the quality of a Web page, most automated search engines use link analysis. Link analysis means the search engine looks to see how many other Web pages link to the page in question.

Going back to our skydiving example, if a search engine sees that hundreds of other Web pages related to skydiving are linking to your Web page, the engine will give your page a higher rank. Search engines like Google weigh the importance of links based on the rank of the linking pages. In other words, if the pages linking to your site are themselves ranked high in Google's system, they boost your page's rank more than lesser-ranked pages. So, how do you get sites to link to your page? That's a tricky task, but make sure your page is a destination people want to link to, and you're halfway there. Another way is to offer link exchanges with other sites that cover material related to your content. You don't want to trade links with just anyone because many search engines look to see how relevant the links to and from your page are to the information within your page. Too many irrelevant links and the search engine will think you're trying to cheat the system. In the next section, we'll look more closely at ways people try to fool search engines into ranking their pages higher on a SERP.

Black Hat SEO Techniques

Some people seem to believe that on the Web, the ends justify the means. There are lots of ways webmasters can try to trick search engines into listing their Web pages high in SERPs, though such a victory doesn't usually last very long. One of these methods is called keyword stuffing, which skews search results by overusing keywords on the page. Usually webmasters will put repeated keywords toward the bottom of the page where most visitors won't see them. They can also use invisible text, text with a color matching the page's background. Since search engine spiders read content through the page's HTML code, they detect text even if people can't see it. Some search engine spiders can identify and ignore text that matches the page's background color. Webmasters might include irrelevant keywords to trick search engines. The webmasters look to see which search terms are the most popular and then use those words on their Web pages. While search engines might index the page under more keywords, people who follow the SERP links often leave the site once they realize it has little or nothing to do with their search terms. A webmaster might create Web pages that redirect visitors to another page. The webmaster creates a simple page that includes certain keywords to get listed on a SERP. The page also includes a program that redirects visitors to a different page that often has nothing to do with the original search term. With several pages that each focus on a current hot topic, the webmaster can get a lot of traffic to a particular Web site. Page stuffing also cheats people out of a fair search engine experience. Webmasters first create a Web page that appears high up on a SERP. Then, the webmaster duplicates the page in the hopes that both pages will make the top results. The webmaster does this repeatedly with the intent to push other results off the top of the SERP and eliminate the competition. Most search engine spiders are able to compare pages against each other and determine if two different pages have the same content. Selling and farming links are popular black hat SEO techniques. Because many search engines look at links to determine a Web page's relevancy, some webmasters buy links from other sites to boost a page's rank. A link farm is a collection of Web pages that all interlink with one another in order to increase each page's rank. Small link farms seem pretty harmless, but some link farms include hundreds of Web sites, each with a Web page dedicated just to listing links to every other site in the farm. When search engines detect a link selling scheme or link farm, they flag every site involved. Sometimes the search engine will simply demote every page's rank. In other cases, it might ban all the sites from its indexes. Cheating the system might result in a temporary increase in visitors, but since people normally don't like to be fooled, the benefits are questionable at best. Who wants to return to a site that isn't what it claims to be? Plus, most search engines penalize Web pages that use black hat techniques, which means the webmaster trades a short success for a long-term failure. In the next section, we'll look at some factors that make SEO more difficult.

SEO Obstacles

The biggest challenge in SEO approaches is finding a content balance that satisfies both the visitors to the Web page and search engine spiders. A site that's entertaining to users might not merit a blip on a search engine's radar. A site that's optimized for search engines may come across as dry and uninteresting to users. It's usually a good idea to first create an engaging experience for visitors, then tweak the page's design so that search engines can find it easily. One potential problem with the way search engine spiders crawl through sites deals with media files. Most people browsing Web pages don't want to look at page after page of text. They want pages that include photos, video or other forms of media to enhance the browsing experience. Unfortunately, most search engines skip over image and video content when indexing a site. For sites that use a lot of media files to convey information, this is a big problem. Some interactive Web pages don't have a lot of text, which gives search engine spiders very little to go on when building an index.

Webmasters with sites that rely on media files might be tempted to use some of the black hat techniques to help even the playing field, but it's usually a bad idea to do that. For one thing, the major search engines are constantly upgrading spider programs to detect and ignore (or worse, penalize) sites that use black hat approaches. The best approach for these webmasters is to use keywords in important places like the title of the page and to get links from other pages that focus on relevant content. Optimizing a site isn't always straightforward or easy, which is why some webmasters use an SEO consultant. When relying on an SEO consultant, it's important to check the consultant's credentials, track record and client list. It's also a good idea to stay as informed as possible about SEO issues -- if the consultant recommends a black hat approach and the webmaster takes the advice, search engines might hold both parties accountable. Many SEO firms are completely legitimate businesses that only follow the white hat optimization philosophy. They help webmasters tweak Web page layout, choose the right words to increase traffic, and help facilitate link exchanges between sites with complementary content. If you have a Web page that needs a little help, it's a good idea to find someone who really knows how to leverage legitimate techniques to increase your page's SERP ranking.

Search Engine Optimization Step 1 - Keyword Analysis

Knowing what words and phrases people use to search the Web is an essential component of any well-executed search engine positioning campaign. Deeho Design's keyword analysis program takes the "guess work" out of determining which keyword phrases you should be targeting. The keyword analysis and selection phase of our search engine positioning process identifies the proper keywords to target, ensuring that the most qualified users will find the specific pages within your site that are relevant to their search query. Deeho Design will analyze your Website log files and perform a comprehensive competitive analysis using our proprietary web analytics technology to identify the most effective keywords for your search engine positioning campaign.

Search Engine Optimization Step 2 - Competitive Analysis

The competitive analysis phase of the search engine optimization (SEO) process will help you size up your competition and provide you with the tools necessary to achieve and maintain your rightful position at the top of the search engines. After establishing the most effective keywords for your Website, Deeho Design will prepare a comprehensive competitive analysis and baseline report showing you exactly where you stand relative to your competitors for these keywords. This phase of the search engine optimization (SEO) process will yield the foundational data necessary to achieve the competitive advantages needed to outperform your competitors on the search engines and gain valuable market share.


Search Engine Optimization Step 3 - Content Enhancement

Once effective keywords have been agreed upon, and a strategy has been formulated for outperforming competitors, Deeho Design will perform the content enhancement phase of the search engine optimization (SEO) process. As the title implies, content enhancement involves the modification of Website content. During this phase of the process, Deeho Design will make recommendations to manipulate the current placement of existing content, add new content, or even remove existing content in certain circumstances. Since the end goal of the search engine optimization (SEO) process is to achieve prominent placement for the right keywords, rather than tricking the search engines into listing your pages, we will need to ensure that the appropriate keywords are incorporated into your site in an appropriate manner. Simply including these keywords in your content is not enough. Deeho Design has years of experience in constructing site content in a search engine friendly format. Deeho Design will also work with your Website editors so that future content (press releases, new products, etc.) will remain synergistic with your newly optimized content.

Search Engine Optimization Step 4 - Code Enhancement

Effective copywriting is only part of the battle when it comes to making your Website search engine friendly. Regardless of how complex search engines seem, they are not yet complex enough to read all content that the web browsers display. While Flash, JavaScript, DHTML, and framing technologies may look great to the end user, they can stop a search engine spider dead in its tracks and prohibit your Website from being crawled effectively. Even if your site has been programmed in pure, easy-to-read HTML, you may not be presenting the spider with the appropriate information. Title, H1, and meta tags are all important elements in effective code writing. Search engines rely on specific code to determine the content within a Web page. Database-driven Websites present yet another obstacle for search engine spiders. Without proper search engine optimization, much of your site may be unreadable to search engines. This can have a significant adverse impact on your site's ability to generate search engine traffic. Deeho Design has years of experience in making your code work with your content rather than against it.

Search Engine Optimization Step 5 - Link Enhancement

Once the content contained within your site is geared towards the appropriate keywords and search engine spiders can effectively navigate your site, the search engines will attempt to determine the relative importance of your site. Your site may contain information about a subject, but how qualified is your site to talk about it?

Relevancy-based search engines use linking to determine the relative credibility of one site versus another with respect to subject matter. You may have heard the terms "Link Popularity" and "Page Rank" through search engine optimization (SEO) research. These terms refer to the existence of external links from sites that link to yours. These links can have a substantial impact on your ability to achieve and maintain prominent positioning for the keywords you covet. Deeho Design works with you to help increase the number, quality and manner in which external sites are linked to your Website. Deeho Design works with you to identify relevant sources for links, where they should direct traffic to, and how they link.

Search Engine Optimization Step 6 - Code Implementation

Once the content, code, and link enhancement processes have been completed, the next step in the search engine optimization (SEO) process is implementation. Deeho Design offers varying levels of implementation to address the individual needs of our clients. Deeho Design can implement search engine optimization (SEO) by preparing a deliverable that contains a hard copy document and a custom-tailored CD with all the code the client needs to make changes. This process enables your Web team to simply paste the optimized code from the CD into your current pages. Depending on the level of access, Deeho Design can also implement search engine optimization by accessing your Web servers and uploading the coded pages. Deeho Design also offers the option of having one of our search engine optimization (SEO) consultants implement the optimized code on your premises. If the client chooses this option, a Deeho Design search engine optimization (SEO) consultant will travel to your facility, implement code changes, and work with your Web team to train them on the fine points of updating content. Deeho Design's search engine optimization (SEO) consultants have years of experience in working with content management systems such as Vignette and Interwoven, as well as shopping cart systems that can be the source of issues related to changes in content.

Search Engine Optimization Step 7 - Web Page and Directory Submission

The final step in the search engine optimization (SEO) process is to make sure that each of the newly optimized pages is included in the indexes of all the relevant search engines. Deeho Design hand-submits all pages into search engines and directories. Directory submission requires both effective copywriting and a comprehensive understanding of the search engines. Deeho Design ensures that your directory submissions are in compliance with the directory's technical parameters and ensures that the client's listings are as keyword-rich and relevant as possible. Proper directory submission is critical for search engine success.


Additional Steps For Search Engine Placement

The 7 steps outlined above represent the core components of an effective search engine optimization process. Deeho Design sets itself apart by the quality with which we perform these tasks, as well as the additional steps we take to ensure the success of our clients.

Search Media Overview

While proper search engine optimization (SEO) is crucial, it is only one component of an effective overall search engine marketing strategy. The research that Deeho Design conducts during a search engine optimization (SEO) consultation engagement typically yields additional recommendations for other pertinent search media opportunities.

Ongoing Analytics

Deeho Design further differentiates itself from other search engine optimization (SEO) firms by providing advanced analytics on an ongoing basis. Our analytical services extend far beyond just counting clicks from search engines. Deeho Design prepares reports to support ad spending through ongoing analysis, identifies potential site navigation problems, and uncovers opportunities to increase conversion and gain market share. On a monthly basis, Deeho Design's natural search optimization clients receive positioning reports, competitive analysis reports, detailed traffic analysis, & recommendations for code improvements. At Deeho Design, we don't believe search engine optimization is a one-time effort. We make it our mission to ensure that your search engine marketing initiatives continue to yield a substantial return on your initial investment.


Search Engine Optimization Services: Search Engine Marketing

Search engines score web site design on a vast range of criteria, which are constantly assessed to ensure that the most suitable sites appear in your search results. Although Google tries to think like a human being, it is still only a computer & so relies upon a complex algorithm to compile the necessary data. This algorithm relies upon looking for over one hundred different factors on each page, which it then scores in order to be able to rank each site. These factors can also be overdone, so it is very important to seek professional guidance before attempting to optimize a site yourself.

Google uses electronic "spiders" to search for links on web sites, & then follows them, reading all the text it can find on the way. It likes to find keywords in groups, headings, image labels etc., but not too many, otherwise it will think that you are trying too hard & will then begin to count them against you. Google has a huge problem with reading images & JavaScript (Flash buttons/links etc.). The problem is it can't read them or follow them. This means that whilst you may have the richest content on the web for your given topic, if your navigation devices are images or JavaScript then all Google will see is a blank page with no links & you will forever wallow at number 1,458,000 in the rankings.


All the major search engines use two types of scoring to evaluate web site design: on-page & off-page. On-page scoring is, as I have said above, just a matter of making sure that your pages are in a format that Google likes to see & values highly. Off-page scoring is the second method of valuation for your site, namely "Is your site good enough so that other sites link to it?" Google looks at each & every one of those links & the "Page Rank" of the site that the link is on & forms an opinion on that basis as to how popular your site is likely to be. If, for example, you have a link from a High Street Bank's site that has a PR of 7/10, it will be worth far more than 100 links from your friend's blog pages (PR 0/10). There are so many factors to consider when getting involved in the Search Engine Optimization (SEO) process that it can be too easy to miss a step along the way. This is not an overnight process, however; from first contact to a top ten ranking can take up to a year, as building a positive image for your site is a cumulative ongoing process that cannot be rushed. You should treat anyone who claims quick results with caution, as it is not possible within the strict parameters set by Google. Only by combining all of the above-mentioned factors can you build your Page Rank within Google & thus feature well.


4. A New way for search engine optimization: Meta Search Engines


The meta-search engine gives the user the needed documents based on a multi-stage mechanism. The merging of the results obtained from the search engines in the network is done in parallel. Using a parallel reduction algorithm, the efficiency of this method is increased. Furthermore, a feedback mechanism gives the meta-search engine the user's suggestions about the found documents, which leads to a new query generated using a genetic algorithm. In the new search stage, more relevant documents are given to the user. The practical experiments were performed in the Aglets programming environment. The results achieved from these experiments confirm the efficiency and adaptability of the method. Search tools for the web can be classified as search engines, directory services, meta-search engines, and hybrid search services.

4.1 Need of Meta search engines


As the web continues to increase in size, the relative coverage of any single web search engine is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable.

4.2 Working of Meta search Engines


A meta-search engine is a search tool that sends user requests to several other search engines and/or databases and aggregates the results into a single list or displays them according to their source. Meta search engines enable users to enter search criteria once and access several search engines simultaneously. Meta search engines operate on the premise that the Web is too large for any one search engine to index it all and that more comprehensive search results can be obtained by combining the results from several search engines. This also may save the user from having to use multiple search engines separately.
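The dispatch-and-merge idea can be sketched as follows; the engine_a and engine_b functions are hypothetical stand-ins for real engine back-ends (each real engine needs its own API call or result-page parser), and the round-robin merge is just one simple aggregation policy.

# Sketch of a meta-search dispatcher: send one query to several engines in parallel
# and merge the result lists into a single list.
from concurrent.futures import ThreadPoolExecutor


def engine_a(query):                 # placeholder for a real engine's client
    return [f"http://a.example/{query}/1", "http://shared.example/page"]


def engine_b(query):
    return ["http://shared.example/page", f"http://b.example/{query}/1"]


def metasearch(query, engines):
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda e: e(query), engines))
    merged, seen = [], set()
    # round-robin merge so every source contributes near the top, duplicates removed
    for rank in range(max(len(r) for r in result_lists)):
        for results in result_lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged


print(metasearch("skydiving", [engine_a, engine_b]))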


Results can vary between meta search engines based on a large number of variables. Still, even the most basic metasearch engine will allow more of the web to be searched at once than any one stand-alone search engine. On the other hand, the results are said to be less relevant, since a metasearch engine can't know the internal alchemy a search engine applies to its results (a metasearch engine does not have any direct access to the search engines' databases).

4.3 Problem with Meta search Engines


Though meta-search engines address some of the main drawbacks of search engines, they may sometimes give poor precision owing to the heterogeneity of the underlying search engines. In other words, the query that optimally describes a user's particular information need may vary from one search engine to another. The search space is often complicated, and one doesn't know where to look for the solution or where to start from. Here a GA comes to help. Traditional methods often require some domain knowledge of the problem, which might not be readily available. Many traditional methods are also sensitive to the initial guess made, and given an inappropriate guess the method may not converge to the solution.


4.4 Solution of Meta search using genetic Algorithm


It is straightforward to define an evaluation function that is a mathematical formulation of the user request and to define a steady-state genetic algorithm (GA) that evolves a population of pages with binary tournament selection. New individuals are created by querying a standard search engine. The crossover operator, applied with crossover probability Pc, selects two parent individuals (web pages) from the population, chooses one crossover position within the page at random, and exchanges the links after that position between both individuals (web pages). We present a comparative evaluation that is performed with the same protocol as used in optimization. Our tool leads to pages whose qualities are significantly better than those of the standard search engines.
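A sketch of this crossover operator, assuming a page is represented simply by its list of outgoing links, might look like the following; the representation and the default Pc value are assumptions of the sketch.

# Crossover on "pages": two parents, each a list of outgoing links, swap their
# links after a random crossover position with probability Pc.
import random


def crossover(parent1, parent2, pc=0.8):
    if random.random() >= pc or min(len(parent1), len(parent2)) < 2:
        return parent1[:], parent2[:]            # no crossover: copy parents
    point = random.randrange(1, min(len(parent1), len(parent2)))
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2


page_a = ["http://a.example/1", "http://a.example/2", "http://a.example/3"]
page_b = ["http://b.example/1", "http://b.example/2", "http://b.example/3"]
print(crossover(page_a, page_b))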

5. Implementation of genetic algorithm in Meta search engines


A metasearch engine searches the web by making requests to multiple search engines such as AltaVista, Yahoo, etc. The results of the individual search engines are combined into a single result set. The advantages of metasearch engines include a consistent interface to multiple engines and improved coverage. Genetic search is characterized by the fact that a number N of potential solutions of an optimization problem simultaneously sample the search space. The search space S of our optimization problem is the set of web pages, and it is structured by a neighborhood relationship V: S -> S^k (the links going out of a page). We associate with S an evaluation or fitness function, which can numerically evaluate web pages. A search engine tries to output pages which maximize this function, and thus tries to solve that optimization problem. To scan S, optimization algorithms and search engines both make use of the following similar search operators:

5.1 The proposed genetic algorithm for web search


1. Get the user request and define the evaluation function F.
2. t <- 0 (iteration number = 0, population size = 0).
3. Initialize P(t).
4. Evaluate P(t) (pages obtained from a standard search engine).
5. Generate an offspring page O.
6. t <- t + 1 (new population).
7. Select P(t) from P(t-1).
8. Crossover P(t).
9. Evaluate P(t).
10. Go to step 5 while the termination condition (number of iterations) is not met.
11. Sort P(t) (sort the pages given to the user in descending order according to their quality values).
12. Stop P(t) and give the outputs to the user.
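Read as code, steps 1-12 correspond roughly to the loop sketched below. The query to the standard search engine and the offspring-generation step are stubbed with toy data, the crossover on links is omitted for brevity (see the sketch in section 4.4), and the fitness simply counts keyword occurrences as in the next section; this is an illustration under those assumptions, not the authors' actual implementation.

# Toy rendering of steps 1-12 of the proposed GA for web search.
import random


def F(page_text, keywords):
    """Fitness: count keyword occurrences (see the evaluation functions below)."""
    return sum(page_text.lower().count(k.lower()) for k in keywords)


def seed_population(keywords):
    """Steps 3-4: initialize P(t) by querying a standard search engine (stubbed)."""
    return ["skydiving equipment and skydiving classes",
            "parachute shop", "cooking recipes"]


def offspring(page_text):
    """Step 5: generate an offspring page, e.g. by following a link (stubbed)."""
    return page_text + " skydiving"


def web_search_ga(keywords, iterations=10):
    population = seed_population(keywords)                 # steps 1-4
    for _ in range(iterations):                            # step 10: loop
        child = offspring(random.choice(population))       # step 5
        population.append(child)                           # steps 6-7: new population
        population.sort(key=lambda p: F(p, keywords), reverse=True)
        population = population[:3]                        # keep only the fitter pages
    return population                                      # steps 11-12: sorted output


print(web_search_ga(["skydiving", "equipment"]))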


5.2 Evaluation Function According To the User Request


The fitness function F that evaluates web pages is a mathematical formulation of the user query; numerous evaluation functions are possible. We have used a function F close to the evaluation functions used in standard search engines. First, let us define the following in the simplest forms for practical considerations.

1) Link quality F(L):
F(L) = Sum over i = 1 to n of #(k_i)   ... (1)
where n is the total number of input keywords, #(k_i) is the number of occurrences of keyword k_i in the link, and k_1, k_2, ..., k_n are the keywords given by the user.

2) Page quality F(P):
F(P) = Sum over j = 1 to m of F_j(L)   ... (2)
where m is the total number of links per page.

3) Mean quality function Mq:
Mq = (Fmax(P) + Fmin(P)) / 2
where Fmax(P) and Fmin(P) are the maximum and minimum values of the pages' qualities, respectively, after applying the GA. It should be noted that the upper value of Fmax(P) is m*n and the least value of Fmin(P) is zero. Hence, the upper limit of Mq is (m*n)/2. Application of the GA to web pages will increase the qualities of some pages and decrease those of others.
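Under the assumption that a page is represented by the anchor texts of its m links, equations (1) and (2) and the mean quality Mq can be transcribed directly into code; the sample keywords and pages below are illustrative only.

# Direct transcription of the evaluation functions above.
def link_quality(link_text, keywords):
    """F(L) = sum over i of #(k_i): keyword occurrences in one link."""
    words = link_text.lower().split()
    return sum(words.count(k.lower()) for k in keywords)


def page_quality(page_links, keywords):
    """F(P) = sum over the page's m links of F_j(L)."""
    return sum(link_quality(link, keywords) for link in page_links)


def mean_quality(page_qualities):
    """Mq = (Fmax(P) + Fmin(P)) / 2."""
    return (max(page_qualities) + min(page_qualities)) / 2


keywords = ["skydiving", "parachute"]
pages = [["skydiving classes", "buy a parachute"], ["cooking recipes"]]
qualities = [page_quality(p, keywords) for p in pages]
print(qualities, mean_quality(qualities))     # -> [2, 0] 1.0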


6. Recent Trends of Technology


Are "Smarter" Meta-Searchers Still Smarter?

"Smarter" meta-searcher technology includes clustering and linguistic analysis that attempts to show you themes within results, and some fancy textual analysis and display that can help you dig deeply into a set of results. However, neither of these technologies is any better than the quality of the search engine databases they obtain results from. This is the topic of an insightful article titled "Some Cautionary Notes on Vivisimo," by librarian and professional researcher Rita Vine of Working Faster. But there is another viewpoint favoring meta-searching, which says that "more heads are better than one." Few meta-searchers allow you to delve into the largest, most useful search engine databases. They tend to return results from smaller and/or free search engines and miscellaneous free directories, often small and highly commercial. (But see Dogpile, below. Dogpile also offers a unique parallel mode for viewing and comparing each search engine's results, useful to see how little or how much overlap there is.) Although we respect the potential of textual analysis and clustering technologies, we have ceased recommending any meta-searchers in our drop-in workshops at UC Berkeley. We recommend directly searching each search engine to get the most precise results, and using meta-searchers if you want to explore more broadly. The meta-search tools listed here are "use at your own risk." We are not endorsing or recommending them.

Better Meta-Searchers
(What each tool searches is as of the date at the bottom of the page; they change often.)

Clusty (clusty.com)
What's searched: Currently searches a number of free search engines and directories, not Google or Yahoo.
Complex search ability: Accepts and "translates" complex searches with Boolean operators and field limiting.
Results display: Results are accompanied by subject subdivisions based on words in the search results, intended to give the major themes. Click on these to search within results on each theme.

Dogpile (www.dogpile.com)
What's searched: Google, Yahoo, LookSmart, Ask.com, MSN Search, and more. Sites that have purchased ranking and inclusion are blended in. Watch for "Sponsored by..." links below search results.
Complex search ability: Accepts Boolean logic, especially in advanced search modes.


Meta-Search Engines for SERIOUS Deep Digging


(What each tool searches is as of the date at the bottom of the page; they change often.)

SurfWax (www.surfwax.com)
What's searched: A better-than-average set of search engines. Can mix in educational and US Govt tools, news sources, or many other categories.
Complex search ability: Accepts " ", +/-. Default is AND between words. Fairly simple searches are recommended, allowing SurfWax's SiteSnaps and other features to help you dig deeply into results.
Results display: Click on a source link to view the complete search results there. Click to view a helpful "SiteSnap" extracted from most sites in a frame on the right. Many additional features for probing within a site.

Copernic Agent (www.copernic.com)
What's searched: Select from a list of search engines by clicking the Properties button under Refine following the Advanced Search box.
Complex search ability: ALL, ANY, Phrase, and more. Also Boolean searching within results (powerful!).
Results display: Must be downloaded and installed, but the Basic version is free of charge. Table comparing versions.


The Pandia Newsfinder is a news metasearch engine that lets you search many of the best search sites on the Net in one go, including Yahoo!, BBC and the Washington Post. The news metasearch engine will also query Moreover and Newshub, services that contain a wide array of news sources, including USAToday, Reuters, New York Times, ABC, The Financial Times and many more. We have now excluded our direct search of CNN and MSNBC, as these services do not include the dates of the news messages in their listings. We must therefore use the current date instead, which is not satisfactory. However, you may nonetheless include these searches by selecting "Pandia News Extended Search" on the pull-down menu. Headlines from CNN and MSNBC may also be included in listings from Moreover and Newshub.

Advanced news searching


The Pandia News Metasearch Engine will not try to "translate" your search phrase into the search query language preferred by the various search engines. Most of them will, however, recognize the following search terms:

Must be present: Use the plus sign to force a word to be present in the results, like this: +Apple (Note: there should be no space between the + and the word.)
May be present: Simply separate words by spaces for optional inclusion: +Apple Jobs. Please note, however, that CNN will return only documents containing all of the words in the query.
Must not be present: Use the minus sign to expressly exclude a term from the results: +Apple Jobs -fruit
Phrases: Wrap phrases in quotation marks: "Steve Jobs" "Apple Macintosh"

See Pandia Goalgetter for more help on using "search engine math". BBC, MSNBC and The Wall Street Journal also accept the Boolean queries AND, NOT and OR. Enter more words to get more focused results. Words are not necessarily searched in the order in which you type them.

1. TurboScout.com Makes Web Searching 20 Times Faster

A 21-year-old undergraduate from Singapore launched TurboScout.com, a new search tool that helps Internet users to access and compare original results from over 90 search engines across 7 categories on a single web page, removing the hassle of retyping keywords into different search engines.

Comparing results from different search engines like Google, Yahoo, MSN Search, and Ask Jeeves is a common practice for many users. This is because different engines use different ranking methods and thus no single engine can give users exactly what they're looking for. "That's where TurboScout.com comes in handy," says William Chee, founder of TurboScout.com. "Grown out of the frustration of typing and retyping keywords into different search engines, I decided to create an online search tool to get rid of these hassles and make such searching 20 times faster." Users who visit http://www.turboscout.com only need to enter keywords once, and getting original results from different search engines is as simple as clicking the engine's name. No more retyping keywords into different search engines. Search engine marketing firms also find TurboScout.com an invaluable tool for search engine optimization and for finding out the rankings of their clients' web sites in different engines. Users who set their browser home page to their favorite site can even customize TurboScout.com to load together with their favorite web site. This way, users can have all the benefits of TurboScout.com and access to their favorite site at the same time. In addition to web page search, TurboScout.com also helps users to find images, search encyclopedias, check out the latest news, look for interesting blogs, find songs and videos, and even compare prices from major online retailers and auction sites. With a growing number of users who search online, TurboScout.com is slated to become the preferred time-saving search tool for many across the globe.

2. Access 90 Search Engines' Results With Firefox's Search Box

TurboScout.com launches a Firefox extension which empowers over 27 million Firefox users to access original results from more than 90 search engines with just a click. According to Nielsen//NetRatings, the majority of Internet searchers use multiple search engines to compare results from search engines like Google, Yahoo, MSN Search and Ask Jeeves. Different search engines use different ranking methods and thus no single search engine can give users exactly what they're looking for. Users who visit http://www.turboscout.com only need to enter keywords once. Getting original results from different search engines is as simple as clicking on the engine's name. No more retyping of URLs and keywords into different search engines. "To further simplify the searching process and enhance TurboScout.com's usability, a Firefox extension has been developed. This allows Firefox users to customize the browser's in-built search box to search using TurboScout.com," said William Chee, founder of TurboScout.com. Users can add this extension with just a click from http://www.turboscout.com/firefox; no download or installation is needed.

"TurboScout.com saved my time and made searching a breeze," said Ben M. Bois, the extension's developer and founder of hooboo.com. "This prompted me to develop the extension to help fellow Firefox users around the world. Over 27 million Firefox users can now harness TurboScout.com's benefits by adding the extension to their Firefox search box with just a click," Ben added. By incorporating TurboScout.com's capability with the convenience of searching with Firefox's in-built search box, Firefox users will be able to access original search results of multiple search engines right from their browser's tool bar. About TurboScout.com: TurboScout.com is a search tool that saves you time and makes your Web searches easy. With TurboScout you only need to enter keywords once to access and compare original results from over 90 search engines across 7 categories by simply clicking on the engine's name. No more retyping of URLs and keywords into different search engines. About HooBoo.com: hooboo.com is a free online bookmark Web site, founded by Ben M. Bois, who also developed the free TurboScout.com Firefox extension.


7. Case study of Meta search engines:


7.1 Alexa web search
Alexa Internet, Inc. is a California-based subsidiary company of Amazon.com that is known for its toolbar and website. Once installed, the toolbar collects data on browsing behavior, which is transmitted to the website, where it is stored and analyzed and is the basis for the company's web traffic reporting.

Operations and history


Alexa Internet was founded in 1996 by Brewster Kahle and Bruce Gilliat.[2] The company's name was chosen in homage to the Library of Alexandria,[3] drawing a parallel between the largest repository of knowledge in the ancient world and the potential of the Internet. The company offered a toolbar that gave Internet users suggestions on where to go next, based on the traffic patterns of its user community. Alexa also offered context for each site visited: to whom it was registered, how many pages it had, how many other sites pointed to it, and how frequently it was updated.[4] Alexa's operation includes archiving of webpages as they are crawled. This database served as the basis for the creation of the Internet Archive, accessible through the Wayback Machine.[5] In 1998 the company donated a copy of the archive, then 2 terabytes in size, to the Library of Congress.[3] Alexa continues to supply the Internet Archive with web crawls. In 1999, Alexa was acquired by Amazon.com for about $250 million in Amazon stock[6] as the company moved away from its original vision of providing an 'intelligent' search engine. Alexa began a partnership with Google in spring 2002, and with the Open Directory Project in January 2003.[1] In May 2006, Amazon replaced Google with Live Search as a provider of search results.[7] In September 2006, they began using their own Search Platform to serve results. In December 2006, they released Alexa Image Search. Built in-house, it is the first major application to be built on their Web Platform. In December 2005, Alexa opened its extensive search index and web-crawling facilities to third-party programs through a comprehensive set of web services and APIs. These could be used, for instance, to construct vertical search engines that could run on Alexa's own servers or elsewhere. In May 2007, Alexa changed their API to require that comparisons be limited to 3 sites, that reduced-size embedded graphs be shown using Flash, and that BritePic ads be embedded.[8] In April 2007, the lawsuit Alexa v. Hornbaker was filed to stop trademark infringement by the Statsaholic service.[9] In the lawsuit, Alexa alleges that Hornbaker is stealing traffic graphs for profit, and that the primary purpose of his site is to display graphs that are generated by Alexa's servers.[10][11] Hornbaker removed the term Alexa from his service name on March 19, 2007.[12] Nevertheless, Alexa expressly grants permission to refer to its data in third-party work, subject to suitable credits.[13]


On November 27, 2008, Amazon announced that Alexa Web Search was no longer accepting new customers, and the service would be deprecated or discontinued for existing customers on January 26, 2009.[14]

Accuracy of ranking by the Alexa Toolbar



Alexa ranks sites based on tracking information of users of its Alexa Toolbar for Internet Explorer and from integrated sidebars in Mozilla and Netscape.[15][16] There is some controversy over how representative Alexa's user base is of typical Internet behavior,[17] especially for less trafficked sites.[16] In 2007 Michael Arrington provided a few examples of relative Alexa ranking known to contradict data from comScore, including ranking YouTube ahead of Google.[18] On April 16, 2008, many users reported dramatic shifts in their Alexa rankings. Alexa confirmed this later in the day with an announcement that they had released the new Alexa ranking system, claiming that they now take into account more data sources "beyond Alexa Toolbar users".[19]

Redesign and new statistics


On March 31, 2009, Alexa.com underwent a complete redesign with new metrics including Pageviews per User, Bounce Rate, and Time on Site.[20] In the following weeks they added new features including Demographics, Clickstream and Search Traffic stats.[21] These new features were introduced in order to compete with other services such as Compete.com and Quantcast.[22]

Spyware
The Alexa toolbar is regarded by many vendors, such as Symantec and McAfee, as spyware. Symantec classifies the toolbar as trackware.[23] McAfee classifies it as Adware, a "Potentially Unwanted Program."[24] McAfee Site Advisor rates the Alexa website as yellow, with the warning: "In our tests, we found downloads on this site that some people consider adware, spyware or other potentially unwanted programs".[25]


7.2 All The Web


AlltheWeb is an Internet search engine that made its debut in mid-1999. It grew out of FTP Search, Tor Egge's doctoral thesis at the Norwegian University of Science and Technology, which he started in 1994 and which in turn resulted in the formation of Fast Search and Transfer, established on July 16, 1997.[1] It was used primarily as a showpiece site for FAST's enterprise search engine. Although rivaling Google in size and technology,[2] AlltheWeb never became as popular. When AlltheWeb started in 1999, Fast Search and Transfer aimed to provide their database to other search engines, copying the successful case of Inktomi. Indeed, in January 2000, Lycos used their results in the Lycos PRO search. By that time, the AlltheWeb database had grown from 80 million URIs to 200 million. Their aim was to index all of the publicly accessible web. Their crawler indexed over 2 billion pages by June 2002[2] and started a fresh round of the search engine size war. Before their purchase by Yahoo!, the database contained about 3.3 billion URIs. AlltheWeb had a few advantages over Google, such as a fresher database, more advanced search features, search clustering and a completely customizable look.[2][3][4] In February 2003 Fast's web search division was bought by Overture. In March 2004 Overture itself was taken over by Yahoo!. Shortly after Yahoo!'s acquisition, the AlltheWeb site started using Yahoo!'s database and some of the advanced features were removed, such as FTP search.

7.3 AltaVista
AltaVista is a web search engine owned by Yahoo!. AltaVista was once one of the most popular search engines but its popularity has waned with the rise of Google.

Origins
AltaVista was created by researchers at Digital Equipment Corporation's Western Research Laboratory who were trying to provide services to make finding files on the public network easier.[1] Although there is some dispute about who was responsible for the original idea,[2] two key participants were Louis Monier, who wrote the crawler, and Michael Burrows, who wrote the indexer. The name AltaVista was chosen in relation to the surroundings of their company at Palo Alto. AltaVista was publicly launched as an internet search engine on 15 December 1995 at altavista.digital.com.[3][4] At launch, the service had two innovations which set it ahead of the other search engines: it used a fast, multi-threaded crawler (Scooter), which could cover many more Web pages than were believed to exist at the time, and an efficient search back-end running on advanced hardware. As of 1998, it used 20 multi-processor machines using DEC's 64-bit Alpha processor. Together, the back-end machines had 130 GB of RAM and 500 GB of hard disk space, and received 13 million queries per day.[5] This made AltaVista the first searchable, full-text database of a large part of the World Wide Web. The distinguishing feature of AltaVista was its minimalistic interface
compared with other search engines of the time; a feature which was lost when it became a portal, but was regained when it refocused its efforts on its search function. AltaVista's site was an immediate success. Traffic increased steadily from 300,000 hits on the first day to more than 80 million hits a day two years later. The ability to search the web, and AltaVista's service in particular, became the subject of numerous articles and even some books.[1] AltaVista itself became one of the top destinations on the web, and by 1997 would earn US$50 million in sponsorship revenue.[2]

Business transactions
In 1996, AltaVista became the exclusive provider of search results for Yahoo!. In 1998, Digital was sold to Compaq and in 1999, Compaq relaunched AltaVista as a web portal, hoping to compete with Yahoo!. Under CEO Rod Schrock, AltaVista abandoned its streamlined search page and focused on features like shopping and free email.[6] In June 1998, Compaq paid AltaVista Technology Incorporated ("ATI") $3.3 million for the domain name altavista.com; Jack Marshall, co-founder of ATI, had registered the name in 1994. In June 1999, Compaq sold a majority stake in AltaVista to CMGI, an internet investment company.[7] CMGI filed for an initial public offering for AltaVista to take place in April 2000, but as the internet bubble collapsed, the IPO was cancelled.[8] Meanwhile, it became clear that AltaVista's portal strategy was unsuccessful, and the search service began losing market share, especially to Google. After a series of layoffs and several management changes, AltaVista gradually shed its portal features and refocused on search. By 2002, AltaVista had improved the quality and freshness of its results and redesigned its user interface.[9] In February 2003, AltaVista was bought by Overture Services, Inc.[10] In July 2003, Overture itself was taken over by Yahoo!.[11]

Free services
AltaVista provides a free translation service, branded Babel Fish, which automatically translates text between several languages. In May 2008, this service was renamed Yahoo! Babel Fish, after the parent company.

7.4 AOL
AOL Inc. (NYSE: AOL), formerly known as America Online, is an American global Internet services and media company. The company was based in Northern Virginia from its founding until 2007.[3][4] It is currently headquartered at 770 Broadway in New York.[5][6] Founded in 1983 as Quantum Computer Services, it has franchised its services to companies in several nations around the world or set up international versions of its services.[7]


AOL is best known for its online software suite, also called AOL, that allowed millions of customers around the world to access the world's largest "walled garden" online community and eventually reach out to the internet as a whole. At its zenith, AOL's membership was over 30 million members worldwide,[8] most of whom accessed the AOL service through the AOL software suite. On May 28, 2009, Time Warner announced that it would spin off AOL into a separate public company, and the spinoff occurred on December 9, 2009,[9] ending the eight-year relationship between the two companies.[10]

Description

Original logo for AOL, used from 1991 to 2006

With regional branches around the world, the former American "goliath among Internet service providers"[8] once had more than 30 million subscribers[8] on several continents. In January 2000, AOL and Time Warner announced plans to merge. The terms of the negotiated deal called for AOL shareholders to own 55% of the new, combined company. The deal closed on January 11, 2001, after receiving regulatory approval from the FTC, the FCC and the European Union. America Online, Inc., as the company was then called, was led by executives from AOL, SBI and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as Chairman, J. Michael Kelly (from AOL) was the Chief Financial Officer, and Robert W. Pittman (from AOL) and Dick Parsons (from Time Warner) served as Co-Chief Operating Officers. The total value of AOL stock subsequently went from $226 billion to about $20 billion.[11] Similarly, its customer base had decreased to 10.1 million subscribers as of November 2007,[12] just narrowly ahead of Comcast and AT&T Yahoo!. AOL is a company in transition, made evident by discussions of buy-outs and joint ventures during a period of dramatic decline in AOL's subscriber base.[8]


The next logo for AOL, used from 2006 to 2009

News reports in late 2005 identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture;[13] those plans were apparently abandoned when it was revealed on December 20, 2005 that Google would purchase a 5% share of AOL for $1 billion. AOL was rated both one of the best and worst Internet suppliers in the UK, according to a poll by BBC Watchdog.[14] On March 31, 1997, the short-lived eWorld was purchased by AOL. The ISP side of AOL UK was bought by The Carphone Warehouse in October 2006 to take advantage of its 100,000 LLU customers, which made The Carphone Warehouse the biggest LLU provider in the UK.[15] On May 28, 2009,[16] Time Warner announced that it would spin AOL off as an independent company once Google's shares ceased at the end of the fiscal year, and AOL's page and logo changed afterward.[17] AOL ceased to be a part of Time Warner on December 9, 2009, and was listed on the 9th under the stock symbol NYSE: AOL.[18]

History
AOL release timeline

1989: America Online for Macintosh released as a popular Apple Macintosh BBS
February 1991: AOL for DOS launched
January 1993: AOL 2.0 for the Apple Macintosh released; AOL 1.0 for Microsoft Windows 3.x launched
June 1994: AOL 1.5 for Microsoft Windows 3.x released
September 1994: AOL 2.0 for Microsoft Windows 3.x released
June 1995: AOL 2.5 for Microsoft Windows 3.x released
June 1995: AOL 3.0 (Win16) for Windows 3.x/Windows 95/Windows NT released
June 1996: AOL 3.0 for Windows 95 released
July 1998 / June 1999: AOL 4.0 (Casablanca) and Refresh 2 released
September 1999: AOL 5.0 (Kilimanjaro) released
June 2000: AOL 5.0 for 9x/NT/2K (Niagara) released
October and December 2000: AOL 6.0 (K2 Karakorum) and Refresh released
September 2001: AOL 6.0.2 for XP launched
October and December 2001, May and July 2002: AOL 7.0 (Taz) and Refresh 1, Refresh 2, and Refresh 2 Plus released
October 2002: AOL 8.0 (Spacely) released
April 2003: AOL 8.0 Plus (Elroy) launched
August and September 2003: AOL 9.0 Optimized (Bunker Hill / Blue Hawaii) and Refresh released
May 2004: AOL 9.0 Optimized SE/LE (Thailand / Tahiti) released
November 2004, July 2005: AOL 9.0 Security Edition SE/LE (Strauss) and Refresh released
August 2005 to March 2006: AOL Suite Beta launched (cancelled)
September 2006, March 2007: AOL OpenRide (Streamliner) launched
November 2006, April 2007: AOL 9.0 VR and Refresh (Raga) released (AOL 9.0 for Microsoft Windows Vista, but also works with Microsoft Windows 98, ME, 2000 and XP)
September 2007: AOL Desktop for Mac Beta released
October 31, 2007: AOL 9.1 (Tarana) released
December 2007: AOL Desktop launched (a.k.a. AOL 10.0)
May 2008: AOL Desktop for Mac 1.0 officially launched
September 2008: AOL Desktop 10.1 released
February 2009: AOL 9.5 released (Beta refreshed sometime in 2009)
October 2009: AOL 9.5 Refresh released, compatible with XP, Vista and Windows 7

AOL began life as a short-lived venture called Control Video Corporation (or CVC), founded by Bill von Meister. Its sole product was an online service called Gameline for the Atari 2600 video game console, created after von Meister's idea of buying music on demand was rejected by Warner Brothers.[19] Subscribers bought a modem from the company for $49.95 and paid a one-time $15 setup fee. Gameline permitted subscribers to temporarily download games and keep track of high scores, at a cost of $1 per game. The telephone line would then disconnect, and the downloaded game would remain in Gameline's Master Module, playable until the user turned off the console or downloaded another game. The original technical team was composed of Marc Seriff, Tom Ralston, Ken Huntsman, Janet Hunter, Dave Brown, Steve Trus, Ray Heinrich, Craig Dykstra, and Doug Coward.

In January 1983, Steve Case was hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May 1983, Jim Kimsey became a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought
in by his West Point friend Frank Caufield, an investor in the company.[19] Von Meister quietly left the company in early 1985. Control Video was reorganized as Quantum Computer Services, Inc. on May 24, 1985, with Kimsey as Chief Executive Officer and Marc Seriff as Chief Technology Officer. Out of 100 employees from Control Video, only 10 remained in the new company.[19] Case himself rose quickly through the ranks; Kimsey promoted him to vice-president of marketing not long after becoming CEO, and later promoted him further to executive vice-president in 1987. Kimsey soon began to groom Case to ascend to the rank of CEO, which he did when Kimsey retired in 1991.

Kimsey changed the company's strategy, and in 1985 launched a dedicated online service for Commodore 64 and 128 computers, originally called Quantum Link ("Q-Link" for short). The Quantum Link software was based on software licensed from PlayNet, Inc. In May 1988, Quantum and Apple launched AppleLink Personal Edition for Apple II and Macintosh computers. In August 1988, Quantum launched PC Link, a service for IBM-compatible PCs developed in a joint venture with the Tandy Corporation. After the company parted ways with Apple in October 1989, Quantum changed the service's name to America Online.[20][21]

From the beginning, AOL included online games in its mix of products; many classic and casual games were included in the original PlayNet software system. In the early years of AOL the company introduced many innovative online interactive titles and games, including:

Graphical chat environments Habitat (1986–1988) and Club Caribe (1988) from LucasArts.
The first online interactive fiction series, QuantumLink Serial by Tracy Reed (1988).
Quantum Space, the first fully automated play-by-email game (1989–1991).
The original Dungeons & Dragons title Neverwinter Nights from Stormfront Studios (1991–1997), the first Massively Multiplayer Online Role Playing Game (MMORPG) to depict the adventure with graphics instead of text (1991).
The first chat room-based text role-playing game Black Bayou (1996–2004), a horror role-playing game from Hecklers Online and ANTAGONIST, Inc.

In 2008, Neverwinter Nights was honored (along with EverQuest and World of Warcraft) at the 59th Annual Technology & Engineering Emmy Awards for advancing the art form of MMORPG games. In February 1991, AOL for DOS was launched using a GeoWorks interface, followed a year later by AOL for Windows. This coincided with growth in pay-based online services such as Prodigy, CompuServe, and GEnie. AOL discontinued Q-Link and PC Link in the fall of 1994.

Growth


Steve Case positioned AOL as the online service for people unfamiliar with computers, in particular contrast to CompuServe, which had long served the technical community. The PlayNet system that AOL licensed was the first online service to require use of proprietary software rather than a standard terminal program; as a result, it was able to offer a graphical user interface (GUI) instead of command lines, and it was well ahead of the competition in emphasizing communication among members as a feature. Of particular note was the Chat Room concept from PlayNet, as opposed to the previous paradigm of CB-style channels. Chat Rooms allowed a large group of people with similar interests to convene and hold conversations in real time, and came in several forms:

Private rooms: created by any user; held up to 23 people.
Conference rooms: created with permission of AOL; held up to 48 people and were often moderated.
Auditoriums: created with permission of AOL; consisted of a stage and an unlimited number of rows. What happened on the stage was viewable by everybody in the auditorium, but what happened within individual rows, of up to 27 people, was viewable only by the people within those rows.

In September 1993, AOL added USENET access to its features.[22] AOL quickly surpassed GEnie, and by the mid-1990s it passed Prodigy (which for several years allowed AOL advertising) and CompuServe. Originally, AOL charged its users an hourly fee, but in 1996 this changed to a flat monthly rate of $19.99. Within three years, AOL's userbase grew to 10 million people. During this time, AOL connections would be flooded with users trying to get on, and many canceled their accounts due to constant busy signals (it was often joked that "AOL" stood for "Always Off-Line").

In 1995, AOL was headquartered at 8619 Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia,[23][24] near the Town of Vienna.[25] By 1996, AOL was quickly running out of room for its network at the Fairfax County campus, and it moved to 22000 AOL Way in Dulles, unincorporated Loudoun County, Virginia.[26] The move to Dulles took place in mid-1996 and provided room for future growth. In a landmark five-year agreement, AOL was bundled with Windows, by then the reigning operating system.

Change in focus, decline, and rebranding




Chart: U.S. AOL subscribers, 2Q 2001 – 2Q 2009

Since its merger with Time Warner (the owners of the aforementioned Warner Bros.) in 2001, the value of AOL has dropped significantly from its $240 billion high. Its subscriber base has seen no quarterly growth since 2002. AOL has since attempted to reposition itself as a content provider similar to companies such as Yahoo! as opposed to an Internet service provider.

In 2004, along with the launch of AOL 9.0 Optimized, AOL also made available the option of personalized greetings, which enabled the user to hear his or her name while accessing basic functions and mail alerts, or while logging in or out. AOL later announced plans to offer subscribers classic television programs for free, with commercials inserted, via its new IN2TV service. At the time of launch, AOL made available Warner Bros. Television's vast library of programs, with Welcome Back Kotter as its marquee offering. In 2005, AOL broadcast the Live 8 concert live over the Internet, and thousands of users downloaded clips of the concert over the following months. Also in 2005, AOL (along with Telepictures Productions) launched TMZ.com, one of the leading celebrity news and gossip sources on the web. TMZ.com has become known for its quickness to break celebrity news, often accompanied by exclusive videos and photos.

In 2006, AOL informed its American customers that it would be increasing the price of its dial-up access to $25.90. The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same price it had been charging for monthly DSL access.[27] However, AOL has since started offering its services for $9.95 a month for unlimited dial-up access.[28] On April 3, 2006, AOL announced that it was retiring the full name "America Online"; the official name of the service became "AOL", and the full name of the Time Warner subdivision became "AOL, LLC".[29]

On August 2, 2006, AOL announced that it would give away e-mail accounts and software previously available only to its paying customers, provided the customer accesses AOL or AOL.com through a non-AOL-owned access method (otherwise known as "third party transit", "bring your own access", or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative advertising provider, AOL.com. The change from paid to free was also designed to slow the rate of members canceling their accounts and defecting to Microsoft Hotmail, Yahoo!, or other free e-mail providers. According to AOL CEO Randy Falco, as of December 2007 the conversion rate of accounts from paid access to free access was over 80%.[30]

In December 2006, in order to cut operating costs, AOL decided to cease using U.S.-based call centers to provide customer service, and it drastically downsized U.S. corporate operations as well. On January 28, 2007, the last domestic AOL-owned and operated call center (based in Oklahoma City) closed its doors, and during October 2007 the last call center in Canada was also shut down. All customer service calls are now handled by outsourced representatives in Romania, the Philippines, and India.

On September 17, 2007, AOL announced that it was moving one of its corporate headquarters from Dulles, Virginia to New York, New York[31] and combining its various advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notably Advertising.com, and highlighted the company's new focus on advertising-driven business models. AOL management stressed that "significant operations" would remain in Dulles, including the company's access services and modem banks.


AOL created animated cartoons in 2008 to explain behavioral targeting to its users, showing how a user's past visits to other Web sites may determine the content of advertising they see in the future.[32] Later that year, AOL initiated privacy research and extended the animated penguin campaign to the United Kingdom.[33]

AOL closed one of its three Northern Virginia data centers, Reston Technology Center, and sold it to CRG West in January 2008.[34] This sale enabled AOL to consolidate its Northern Virginia operations from three sites (Dulles, Manassas, Reston) to two. AOL took advantage of the move both to reduce its overall hardware inventory and to determine a "right size" for its Network Operations Center staff after consolidating the three sites into two. In 2007, AOL announced that it would move one of its other headquarters from Loudoun County, Virginia to New York City; it would continue to operate its Virginia offices.[6]

As part of the impending move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, AOL CEO Randy Falco announced on October 15, 2007 plans to lay off 2,000 employees worldwide by the end of 2007, beginning "immediately".[35] That evening, over 750 employees at Dulles alone received notices to attend early morning meetings the next day;[36] those employees were laid off on October 16, 2007, though they would remain on the payroll until December 14, 2007 in accordance with the Worker Adjustment and Retraining Notification Act. Other employees whose groups were due for phase-out as part of the restructuring were informed on October 16, 2007 that they would be kept on until December 14, 2007 to complete any outstanding tasks, after which they would be laid off. The reduction in force was so large that virtually every conference room within the Dulles complex was reserved for the day as a "Special Purpose Room", where various aspects of the layoff process were conducted for outgoing employees; remaining employees at Dulles were quick to dub the mass layoff "Bloody Tuesday" in online blogs and news reports.[36] An unspecified number of staff at the former CompuServe facility in Columbus, OH were also released, as well as the entire Tucson Quality Analysis shop, a number of AOL employees working at the former Netscape facility in Mountain View, CA, the development team in France, and practically the entire Moncton, New Brunswick, Canada member services call center. The end result was a near 40% layoff across the board at AOL, including a substantial number of Systems Operations personnel, a significant change from previous layoffs in which SysOps employees routinely suffered only minor personnel reductions. An additional round of layoffs, mostly confined to analysis groups and the staff at AOL Voice Services in Halifax, Nova Scotia, occurred on December 11 and 12, 2007.

On February 6, 2008, Time Warner CEO Jeff Bewkes announced that Time Warner would split AOL's internet access and advertising businesses into two, with the possibility of later selling the internet access division.[37] On November 23, 2009, AOL unveiled a sneak preview of a new brand identity featuring the new logo "Aol." superimposed onto figures (e.g., a goldfish, a rainbow, a tree, a postcard). The new identity was applied to all of AOL's services on December 10, 2009, right after the split from Time Warner.

Controversies


Community leaders
Prior to mid-2005, AOL used online volunteers called Community Leaders, or CLs, to monitor chatrooms, message boards, and libraries. AOL's use of remote volunteers dated back to the establishment of its Quantum Link service in 1985. Some community leaders were recruited for content design and maintenance using a proprietary language and interface called RAINMAN, although most content maintenance was performed by partner and internal employees. In 1999, a class action lawsuit was filed against AOL citing violations of U.S. labor laws in its usage of CLs. The Department of Labor investigated but came to no conclusions, closing its investigation in 2001.[38] AOL began drastically reducing the responsibilities and privileges of its volunteers in 2000, and the program was eventually ended on June 8, 2005; Community Leaders active at the time were offered 12 months of credit on their accounts. In February 2010, America Online settled claims with Community Leaders for $15 million.

Billing disputes
AOL has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. In addition, AOL changed its method of calculating used minutes in response to a class action lawsuit. Previously, AOL would add fifteen seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for 11 minutes and 46 seconds would be charged for 13 minutes). AOL claimed this was to account for sign on/sign off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take 15 seconds, especially when connecting via another ISP). AOL disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. In addition, the AOL software would notify the user of exactly how long they were connected and how many minutes they were being charged. AOL was sued by the Ohio Attorney General in October 2003 for improper billing practices. The case was settled on June 8, 2005. AOL agreed to resolve any consumer complaints filed with the Ohio AG's office. In December 2006, AOL agreed to provide restitution to Florida consumers to settle the case filed against them by the Florida Attorney General.[39]
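To make the rounding rule described above concrete, the following short Python sketch reproduces the calculation as the paragraph states it: 15 seconds are added to the measured connection time and the result is rounded up to the next whole minute. The function name and structure are purely illustrative assumptions for this report, not AOL's actual billing code.

import math

def billed_minutes(connected_seconds):
    # Add the 15 seconds AOL said covered sign-on/sign-off time,
    # then round up to the next whole minute.
    return math.ceil((connected_seconds + 15) / 60)

# 11 minutes 46 seconds becomes 12:01 after padding and is billed as 13 minutes,
# matching the example in the text above.
assert billed_minutes(11 * 60 + 46) == 13
# Even an exact 10-minute session becomes 10:15 and is billed as 11 minutes.
assert billed_minutes(10 * 60) == 11

Under this rule every session is billed for at least one extra minute beyond the time actually connected, which is the crux of the class action described above.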

Account cancellation
In response to approximately 300 consumer complaints, the New York Attorney General's office began an inquiry into AOL's customer service policies. The investigation revealed that the company had an elaborate scheme for rewarding employees who purported to retain or "save" subscribers who had called to cancel their Internet service. In many instances, such retention was done against subscribers' wishes, or without their consent. Under the scheme, consumer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service. For several years, AOL had instituted minimum retention or "save" percentages, which consumer representatives were expected to
meet. These bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers. Many customers complained that AOL personnel ignored their demands to cancel service and stop billing. On August 24, 2005, America Online agreed to pay $1.25 million to the state of New York and reformed its customer service procedures. Under the agreement, AOL would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus.[40] On June 13, 2006, a man named Vincent Ferrari documented his account cancellation phone call in a blog post, stating he had switched to broadband years earlier. In the recorded phone call, the AOL representative refused to cancel the account unless the 30-year-old Ferrari explained why AOL hours were still being recorded on it. Ferrari insisted that AOL software was not even installed on the computer. When Ferrari demanded that the account be canceled regardless, the AOL representative asked to speak with Ferrari's father, for whom the account had been set up. The conversation was aired on CNBC. When CNBC reporters tried to have an account on AOL cancelled, they were hung up on immediately and it ultimately took more than 45 minutes to cancel the account.[41] On July 19, 2006, AOL's entire retention manual was released on the Internet.[42] (7MB PDF). On August 3, 2006, Time Warner announced that the company would be dissolving AOL's retention centers due to its profits hinging on $1 billion in cost cuts. The company estimated that it would lose more than six million subscribers over the following year.[43]

Direct marketing of disks


Main article: AOL disk collecting

Prior to 2006, AOL was infamous for the unsolicited mass direct mailing of CD-ROMs (and, earlier, 3½-inch floppy disks) containing its software. It was the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign.[44]

Software

In 2000, AOL was served with an $8 billion lawsuit alleging that its AOL 5.0 software caused significant difficulties for users attempting to use third-party Internet service providers. The lawsuit sought damages of up to $1000 for each user that had downloaded the software cited at the time of the lawsuit. AOL later agreed to a settlement of $15 million, without admission of wrongdoing.[45]

Now, the AOL software has a feature called AOL Dialer, or AOL Connect on Mac OS X. This feature allows users to connect to the ISP without running the full interface, so they can use only the applications they wish to use, especially if they do not favor the AOL Browser. AOL 9.0 was once identified by Stopbadware as being under investigation[46] for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. However, as of the release of AOL 9.0 VR (Vista Ready) on 26 January 2007, it is no longer considered badware due to changes AOL made in the software.[47]

Usenet newsgroups
When AOL gave clients access to Usenet in 1993, it hid at least one newsgroup from the standard list view: alt.aol-sucks. AOL did list the newsgroup in the alternative description view, but changed the description to "Flames and complaints about America Online". With AOL clients swarming Usenet newsgroups, the old, existing user base started to develop a strong distaste for both AOL and its clients, referring to the new state of affairs as the Eternal September. AOL later discontinued access to Usenet, on 25 June 2005.[48] No official details were provided as to the reason for decommissioning Usenet access, other than a suggestion that users access Usenet services from a third party, Google Groups. Currently, AOL provides community-based Message Boards in lieu of Usenet.

Terms of Service (TOS)


AOL has a detailed set of guidelines and expectations for users of its service, known as the Terms of Service (TOS; known as the Conditions of Service, or COS, in the UK). It is separated into three sections: the Member Agreement, Community Guidelines and Privacy Policy.[49][50] All three agreements are presented to users at the time of registration, and digital acceptance is achieved when they access the AOL service. There have been many complaints over the rules that govern an AOL user's conduct. Some users disagree with the TOS, citing that the guidelines are too strict to follow, coupled with the fact that the TOS may change without users being made aware. A considerable cause for this was likely the alleged censorship of user-generated content during AOL's earlier years of growth.[51][52][53][54]

Certified e-mail
In early 2005, AOL stated its intention to implement a certified e-mail system called Goodmail, which would allow companies to send email to users with whom they have pre-existing business relationships, with a visual indication that the email is from a trusted source and without the risk that the messages might be blocked or stripped by spam filters. The decision drew fire from MoveOn, which characterized the program as an "e-mail tax", and the EFF, which characterized it as a shakedown of non-profits.[55] A website called Dearaol.com was launched, with an online petition and a blog, and it garnered hundreds of signatures from people and organizations expressing their opposition to AOL's use of Goodmail.


Esther Dyson defended the move in a New York Times editorial, saying: "I hope Goodmail succeeds, and that it has lots of competition. I also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. Instead of the fees going to Goodmail and EON, they will also be shared with the individual recipients."[56] Other members of the antispam and blogging community were broadly critical of MoveOn's and the EFF's attempts to characterize this as a "shakedown". Tim Lee of the Technology Liberation Front posted an article that questioned the EFF's adoption of a confrontational posture when dealing with private companies. Lee's article cited a series of discussions on Declan McCullagh's Politechbot mailing list on this subject between the EFF's Danny O'Brien and antispammer Suresh Ramasubramanian, who also compared the EFF's tactics in opposing Goodmail to tactics used by Republican political strategist Karl Rove. SpamAssassin developer Justin Mason posted some criticism of the EFF's and MoveOn's "going overboard" in their opposition to the scheme. The dearaol.com campaign lost momentum and disappeared, with the last post to the now-defunct dearaol.com blog, "AOL starts the shakedown", being made on 9 May 2006.

Search data
Main article: AOL search data scandal

On August 4, 2006, AOL released a compressed text file on one of its websites containing twenty million search keywords for over 650,000 users, collected over a three-month period between March 1, 2006 and May 31, 2006 and intended for research purposes. AOL pulled the file from public access by August 7, but not before its wide distribution on the Internet by others. Derivative research, titled "A Picture of Search", was published by authors Pass, Chowdhury and Torgeson for The First International Conference on Scalable Information Systems. The data have since been used by web sites such as AOLstalker for entertainment purposes, where users of AOLstalker are encouraged to judge AOL clients based on the humorousness of personal details revealed by search behavior.

Company sales
AOL (Time Warner) has sold a number of its sub-companies in Europe. AOL Europe had six million users, but its subscription base had been steadily declining; in 2005, 287,000 European AOL online users migrated to other service providers.[57] In September 2006, AOL Germany's ISP business (AOL Deutschland GmbH & Co. KG) was sold for $863m (€675m) to Telecom Italia.[58] AOL's German web portal (AOL Deutschland), however, is now operated by the then newly founded AOL Deutschland Medien GmbH, which is still a subsidiary of Time Warner. Today, AOL Deutschland offers virtually all of the free services of AOL.com in German versions, as well as some products of its own, such as an AOL VISA card.[59]


In October 2006, AOL UK's ISP business was sold for $688m (£370m) to The Carphone Warehouse.[60][61]

7.5 Ask.com
Ask.com (or Ask Jeeves in the United Kingdom) is a search engine founded in 1996 by Garrett Gruener and David Warthen in Berkeley, California. The original search engine software was implemented by Gary Chevsky from his own design. Chevsky, Justin Grant, and others built the early AskJeeves.com website around that core engine. Three venture capital firms, Highland Capital Partners, Institutional Venture Partners, and The RODA Group were early investors.[2] Ask.com is currently owned by InterActiveCorp under the NASDAQ symbol IACI.

History
Ask.com was originally known as Ask Jeeves, where "Jeeves" is the name of the "gentleman's personal gentleman", or valet, fetching answers to any question asked. The character was based on Jeeves, Bertie Wooster's fictional valet from the works of P. G. Wodehouse. The original idea behind Ask Jeeves was to allow users to get answers to questions posed in everyday, natural language, as well as traditional keyword searching. The current Ask.com still supports this, with added support for math, dictionary, and conversion questions.

In 2005, the company announced plans to phase out Jeeves. On February 27, 2006, the character disappeared from Ask.com and was stated to be "going into retirement"; the website prominently brought the character back in 2009. InterActiveCorp owns a variety of Ask sites, including country-specific sites for the UK, Germany, Italy, Japan, the Netherlands, and Spain, along with Ask Kids, Teoma (now ExpertRank[3]) and several others. On June 5, 2007, Ask.com relaunched with a 3D look.[4] On May 16, 2006, Ask implemented a "Binoculars Site Preview" into its search results. On search results pages, the "Binoculars" let searchers catch a sneak peek of the page they could visit, with a mouse-over activating a screenshot pop-up.[5] In December 2007, Ask released the AskEraser feature,[6] allowing users to opt out of the tracking of search queries and IP and cookie values; Ask also vowed to erase this data after 18 months if the AskEraser option is not set. The Center for Democracy and Technology's positive evaluation of AskEraser[7] differed from that of privacy groups, including the Electronic Privacy Information Center, who found problems such as the requirement that HTTP cookies be enabled for AskEraser to function.[8]
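Since the paragraph above describes how an opt-out feature of this kind behaves, the following minimal Python sketch illustrates the general mechanism: queries are only logged when no opt-out cookie is present, and logged entries carry a retention deadline. AskEraser's real implementation is not public, so the cookie name, field names, and retention constant here are all hypothetical assumptions for illustration only.

from datetime import datetime, timedelta

RETENTION = timedelta(days=548)  # roughly the 18-month retention window described above

def record_search(cookies, query, ip, log):
    # If the opt-out cookie is present, nothing is stored. This check only
    # works when the browser sends cookies at all, which is the dependency
    # that privacy groups criticised.
    if cookies.get("askeraser_optout") == "1":
        return
    log.append({
        "query": query,
        "ip": ip,
        "delete_after": datetime.utcnow() + RETENTION,
    })

# Example: the first user has not opted out and is logged; the second has opted out.
log = []
record_search({}, "metasearch engines", "203.0.113.7", log)
record_search({"askeraser_optout": "1"}, "metasearch engines", "203.0.113.8", log)
assert len(log) == 1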

On July 4, 2008 InterActiveCorp announced the acquisition of Lexico Publishing Group, which owns Dictionary.com, Thesaurus.com, and Reference.com.[9][10] On April 20, 2009, the "Jeeves" character re-appeared on the homepage, standing on the left side of the page. His name, however, is still not mentioned on ask.com. The United Kingdom site uk.ask.com still calls itself "Ask Jeeves", featuring the same character.

International
The company operates different websites offering localized services for certain countries and their associated languages, including:

fr.ask.com (France)
uk.ask.com (Ask Jeeves) (United Kingdom)
de.ask.com (Germany)
es.ask.com (Spain)
it.ask.com (Italy)
ru.ask.com (Russia)

Corporate details
Ask Jeeves, Inc. stock traded on the NASDAQ stock exchange from July 1999 to July 2005, under the ticker symbol ASKJ. In July 2005, the ASKJ ticker was retired upon the acquisition by InterActiveCorp, valuing ASKJ at $1.85 billion.

Ask Sponsored Listings


Ask Sponsored Listings is the search engine marketing tool offered to advertisers to increase the visibility of their websites (and the associated businesses, services, and products) by producing more prominent and frequent search engine listing results.

Marketing and promotion


Information-revolution.org campaign
In early 2007, a number of advertisements appeared on London Underground trains warning commuters that 75% of all the information on the web flowed through one site (implied to be Google), with a URL for www.information-revolution.org.[11]


Advertising
Apostolos Gerasoulis, the co-creator of Ask's Teoma algorithmic search technology, starred in four television advertisements in 2007, extolling Ask.com's usefulness for finding relevant information.[12] There was also a Jeeves balloon in the 2001 Macy's Thanksgiving Day Parade.

NASCAR sponsorship
On January 14, 2009, Ask.com became the official sponsor of NASCAR driver Bobby Labonte's No. 96 car, and Ask became the official search engine of NASCAR.[13] Ask.com was the primary sponsor of the No. 96 for 18 of the first 21 races and had rights to increase this to a total of 29 races that season.[14] The Ask.com car debuted in the 2009 Bud Shootout, where it failed to finish the race, but it subsequently came back strong, placing as high as 5th in the March 1, 2009 Shelby 427 race at Las Vegas Motor Speedway.[15] Ask.com's foray into NASCAR was the first instance of its venture into what it calls Super Verticals.[16]

Toolbar

The Ask Toolbar is a free internet browser toolbar from Ask.com, available for both the Internet Explorer and Firefox web browsers. Features include web, image, news, and dictionary searches, a wide variety of US and international content served in widgets, weather forecasts, RSS/Atom feeds and related services. The Ask Toolbar can be installed from the toolbar.ask.com website. Installation of the Ask Toolbar is optional and always requires end-user consent (in the form of an "opt-out" check box) when bundled with other third-party software. The Ask Toolbar can be uninstalled from Internet Explorer through the Windows control panel, and from Firefox through the Add-ons menu and via an uninstall link in more recent builds. Software which changes the browser's behaviour may still remain on the computer after the toolbar is uninstalled, requiring further uninstalls or file deletions.[17] An older version of the Ask Toolbar is incompatible with Kaspersky Internet Security; the presence of the toolbar causes license key corruption.[18]

7.6 Lycos


Lycos is a search engine and web portal with broadband entertainment content.

History
Lycos began as a search engine research project by Dr. Michael Loren Mauldin of Carnegie Mellon University in 1994. Bob Davis joined the company as its CEO and first employee in 1995. Lycos then enjoyed several years of growth and, in 1999, became the most visited online destination in the world, with a global presence in more than 40 countries. Lycos was sold to Terra Networks of Spain in May 2000 for $13 billion, forming a new company, Terra Lycos, and maintaining a position as one of the world's largest Internet companies. Shortly after the merger, Davis left the company to become a venture capitalist with Highland Capital Partners in Boston. In October 2004, Lycos was sold by Terra's parent company Telefonica to Daum Communications Corporation, the second largest Internet portal in Korea, becoming once again Lycos Inc.

Corporate development
Shortly after the development of the search engine, Lycos Inc. was formed with approximately US $2 million in venture capital funding from CMGI. The CEO of Lycos from its inception was Bob Davis, a Boston native who joined the company after its incorporation in Massachusetts and, after attempting to turn the business into a software company selling an enterprise version of the search software, concentrated on building the company into an advertising-supported web portal. Lycos grew from a crowded field in 1995 to become the most-visited web portal in the world in the spring of 1999 (as measured by visits to all of its sites). In 1996, the company completed the fastest IPO from inception to offering in NASDAQ history, and in 1997 it became one of the first profitable internet businesses in the world.

In 1998, it paid $58 million for Tripod in an attempt to "break into the portal market", which was rapidly developing;[2] over the course of the next few years this was followed by nearly two dozen acquisitions of high-profile internet brands including Gamesville, WhoWhere, Wired Digital (sold to Wired), Quote.com, Angelfire and Raging Bull. Lycos Europe was a joint venture between Bertelsmann and Lycos but has always been a distinct corporate entity; although Lycos Europe is the largest of the overseas ventures, several other companies also entered into joint venture agreements, including Lycos Canada, Lycos Korea and Lycos Asia.

Near the peak of the internet bubble in May 2000, Lycos announced its intent to be acquired by Terra Networks, the internet arm of the Spanish telecommunications giant Telefónica, for $5.4 billion. The acquisition price represented a nearly 3,000-times return on the initial venture capital investment in Lycos and about 20 times its initial public offering valuation. The transaction closed in October 2000. The merged company was renamed Terra Lycos, although the Lycos brand remained the US franchise; overseas, the company continued to be known as Terra Networks. Davis left the company shortly after the merger was completed to join Highland Capital Partners, a venture capital fund where he now serves as a Managing General Partner and concentrates on internet investments.

On August 2, 2004, Terra announced that it was selling Lycos to Seoul, South Korea-based Daum Communications Corporation for $95.4 million in cash, less than 2% of Terra's initial multi-billion-dollar investment. In October 2004, the transaction closed and the company name was changed back to Lycos Inc. The remaining Terra half of the business was subsequently reacquired by Telefónica.

Under new ownership, Lycos began to refocus its strategy in 2005, moving away from a search-centric portal towards a community destination for broadband entertainment content. With a new management team in place, Lycos also began divesting properties that were not core to its new strategy. In July 2006, Wired News, which had been part of Lycos since the purchase of Wired Digital in 1998, was sold to Condé Nast and re-merged with Wired magazine. The Lycos Finance division, best known for Quote.com and Raging Bull.com, was sold to FT Interactive Data Corporation in February 2006, while its online dating site, Matchmaker.com, was sold to Date.com. In 2006, Lycos also regained ownership of the Lycos trademark from Carnegie Mellon University, becoming Lycos Inc. once again.

During 2006, Lycos introduced services including Lycos Phone, which combined IM video chat, real-time video on demand and an MP3 player. In August of the same year, a new version of Lycos Mail was released, which allowed sending and receiving large "mega files", including unlimited-size file attachments. In November 2006, Lycos began to roll out applications centered around social media, including the web's first watch-and-chat video application, with the launch of its "Lycos Cinema" platform. In February 2007, "Lycos MIX" was launched, a tool allowing users to pull video clips from YouTube, Google Video, Yahoo! Video and

Lycos Network sites


Angelfire, a Lycos property providing free webhosting, blogging and web publishing tools
Gamesville, Lycos's massive multiplayer gaming site
Hotbot, a Lycos-owned search engine
HtmlGear, a Lycos property providing web-page add-ons (guestbooks, etc.)
Tripod.com, a Lycos property providing free webhosting, blogging and web publishing tools
Webon, a next-generation webhosting and publishing platform
WhoWhere.com, a people search engine
InsiderInfo

References

www.wikipedia.org/Genetic_algorithm
http://infolab.stanford.edu/~backrub/google.html
http://computer.howstuffworks.com/search-engine.htm
http://www.obitko.com/tutorials/genetic-algorithms/index.htm
http://en.wikipedia.org/wiki/Metasearch_engine
http://www.searchengineshowdown.com/multi/


Bibliography
1. Lakhmi C. Jain and N. M. Martin, Fusion of Neural Networks, Fuzzy Systems and Genetic Algorithms: Industrial Applications, CRC Press LLC.
2. Ulrich Bodenhofer, Genetic Algorithms: Theory and Applications.
3. Randy L. Haupt and Sue Ellen Haupt, Practical Genetic Algorithms.
4. Melanie Mitchell, An Introduction to Genetic Algorithms, A Bradford Book, The MIT Press, Cambridge, Massachusetts / London, England.

