>> Well, if there are only seconds and no objections, I think it should 
>> be done. Somebody could just "rm robots.txt"...
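For context, the blocking in question is just a robots.txt directive, not
code; an entry that keeps all crawlers out of the Bugzilla CGI scripts
would look something like this (the paths are illustrative, not the
actual mozilla.org file):

```
User-agent: *
Disallow: /show_bug.cgi
Disallow: /buglist.cgi
```

Deleting the file (or removing those Disallow lines) is all it would take
to let Google start crawling.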
> This is not a good idea. Google's index of a bug would rapidly go out of 
> date.

That's not a problem: all Google needs to return a search result is some 
keywords from that bug, and the result will simply link back to the 
original source. Besides, Google is very good at detecting which pages 
should be re-indexed frequently.
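To illustrate, once the bug pages were indexed, finding a bug could be
as simple as a site-restricted query (a hypothetical example, assuming
Bugzilla lives at bugzilla.mozilla.org):

```
site:bugzilla.mozilla.org crash on startup bookmarks
```

The hit links straight to the canonical bug page, so staleness of the
indexed snippet matters little.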

> You are solving the wrong problem. If Bugzilla's querying system makes 
> it hard for you to find the bug you want, we should simplify the 
> querying system, not reinvent it.

Taking advantage of an existing resource can hardly be called 
"reinventing" something. In fact, one could argue the reverse: 
implementing a site-specific search is reinventing something that Google 
already provides. Someday the semantic web will arrive (!) and we will 
be able to use search engines for powerful structured queries. We need 
an ad-hoc search engine now only because that day hasn't arrived yet and 
there's no way Google can process all of a bug's metadata.

Again, this is not reinventing. Someone at mozilla.org has gone to a 
small extra effort to prevent Google from indexing Bugzilla and thereby 
providing a useful service to the community.

Anyway, what's wrong with having choices?

