Sitemapxml2
The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs on the site. This allows search engines to crawl the site more intelligently. Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol; both formats are sketched in the examples below.

Sitemaps are particularly beneficial on websites where some areas of the site are not reachable through the browsable interface, or where webmasters use rich Ajax, Silverlight, or Flash content that is not normally processed by search engines. Sitemaps supplement, but do not replace, the existing crawl-based mechanisms that search engines already use to discover URLs. Using the protocol does not guarantee that web pages will be included in search indexes, nor does it influence the way pages are ranked in search results.

Google first introduced Sitemaps 0.84 in June 2005 so that web developers could publish lists of links from across their sites. Google, MSN, and Yahoo announced joint support for the Sitemaps protocol in November 2006; the schema version was changed to Sitemap 0.90, but no other changes were made. In April 2007, Ask.com and IBM announced support for Sitemaps, and Google, Yahoo, and Microsoft announced auto-discovery of Sitemaps through robots.txt. In May 2007, the state governments of Arizona, California, Utah, and Virginia announced that they would use Sitemaps on their web sites. The Sitemaps protocol is based on ideas from "Crawler-friendly Web Servers".
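As a minimal sketch of the XML format described above: a Sitemap is a UTF-8 XML file whose <urlset> root declares the protocol namespace and holds one <url> entry per page, with optional <lastmod>, <changefreq>, and <priority> children. The host below is an assumed placeholder, not a site named in the text:

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Placeholder sitemap; www.example.com is an assumed example host. -->
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <!-- loc is required; the three children below are optional hints. -->
      <loc>http://www.example.com/</loc>
      <lastmod>2007-05-01</lastmod>
      <changefreq>weekly</changefreq>
      <priority>0.8</priority>
    </url>
  </urlset>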
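The robots.txt auto-discovery announced in April 2007 works by adding a single Sitemap directive to the site's robots.txt file; the path here is again a placeholder:

  Sitemap: http://www.example.com/sitemap.xml

Because the directive carries a full URL, a crawler that fetches robots.txt can locate the Sitemap on its own, without the webmaster submitting it to each search engine separately.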