Before a search engine can judge whether a webpage will satisfy a user’s search query, it must first be able to locate the content. As we discussed, search engines use robot software (also called spiders or crawlers) to scour the Internet looking for websites.
But how does a search engine know where to find websites? In some cases a marketer can send a message to the search engine letting it know the web address of content. For instance, all three major search engines (Google, MSN, and Yahoo) have online forms that allow marketers to submit a site’s main URL to trigger crawling. But this only gets the search engine spider to a site’s front door. To reach pages inside the site, the robot must either: 1) follow links found internally on the site, or 2) follow links appearing on an external site (i.e., another website). Unless a website consists of only one or a few pages, it is unlikely that all of its pages can be found through external links. Rather, search engine robots must rely on the site itself for guidance in locating the content within it. This means a site must contain an internal linking system to guide a search engine as it indexes the site.
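The link-following behavior described above can be sketched in a few lines. The example below (a simplified illustration, not an actual search engine crawler) uses Python’s standard-library HTML parser to extract the links a robot would discover on a sample page; the page markup and URLs are made up for demonstration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page markup a crawler might fetch from a site's front door
page = """
<html><body>
  <a href="/about.html">About Us</a>
  <a href="/products/index.html">Products</a>
  <a href="http://other-site.example/partner">Partner Site</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)

# Internal links (site-relative paths) are what guide the robot deeper
# into the site; the external link leads to another website entirely.
internal = [link for link in parser.links if link.startswith("/")]
print(internal)  # ['/about.html', '/products/index.html']
```

A real crawler would fetch each discovered internal URL in turn and repeat the extraction, which is why a coherent internal linking system matters: pages no link points to are effectively invisible to the robot.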
To ensure search engine robots can find webpages through internal links, marketers should consider the following issues:
Building a Menu System
Creating a Sitemap
Using Page Redirects
Managing Broken Links
Restricting Crawling Activity
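The last item on the list, restricting crawling activity, is conventionally handled with a robots.txt file on the web server. As a rough illustration of how a well-behaved robot interprets those rules, the sketch below uses Python’s standard-library `urllib.robotparser`; the directory names and URLs are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: allow all robots everywhere
# except the /private/ directory.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A compliant crawler checks each URL against the rules before fetching it.
allowed = rp.can_fetch("*", "http://www.example.com/products.html")
blocked = rp.can_fetch("*", "http://www.example.com/private/draft.html")
print(allowed, blocked)  # True False
```

Placing such a file at the site root lets a marketer keep duplicate, unfinished, or sensitive pages out of the index without removing them from the server.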
We should note that some of the material discussed below will require adjustments to a website’s operational side (e.g., adjustments on the web server). Marketers who are not familiar with the technical aspects of operating a website are encouraged to discuss these issues with their technical contacts.
Source: http://www.knowthis.com/tutorials/search-engine-marketing/site-navigation-for-sem/1.htm