Hi all,
I have two seed URLs:
http://www.mofa.gov.la and http://www.mixsports.net
But when the fetch starts, all I see in the Cygwin log is:
--
Stopping at depth=0 - no more URLs to fetch.
No URLs to fetch - check your seed list and URL filters.
crawl finished: new_crawl
--
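For reference, I start the crawl with the usual one-step command, roughly like this ("urls" is the directory holding my urls.txt, and the depth/topN values are just the ones I typically pass; the crawl directory matches the log above):

--
bin/nutch crawl urls -dir new_crawl -depth 3 -topN 50
--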
My urls.txt and crawl-urlfilter.txt are attached below. Please help me, thank you!

urls.txt:
http://www.mofa.gov.la
http://www.mixsports.net
crawl-urlfilter.txt:

# The url filter file used by the crawl command.
# Better for intranet crawling.
# Be sure to change MY.DOMAIN.NAME to your domain name.
# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'. The first matching pattern in the file
# determines whether a URL is included or ignored. If no pattern
# matches, the URL is ignored.
# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
-\.(swf|SWF|js|JS|gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
-\.(gpx|GPX|nb|PDF|pdf|m|java|JAVA|doc|DOC|ps|tex|jpeg|JPEG|bmp|BMP)$
# skip URLs containing certain characters as probable queries, etc.
# -[?*!@=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
# -.*(/.+?)/.*?\1/.*?\1/
# accept hosts in MY.DOMAIN.NAME
+^http://www.mofa.gov.la/
+^http://www.mixsports.net/
# skip everything else
-.
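In case it helps with diagnosing, I believe the filter rules can be tested against a seed from the command line with Nutch's filter checker, something like this (a sketch only; I have not verified the exact class name and flag for my Nutch version):

--
echo "http://www.mofa.gov.la/" | bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined
--

As far as I understand, a leading '+' in the output means the URL is accepted by the filters and '-' means it is rejected.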