Hi!

Parts of our site, sesam.no, are being crawled more and more by search engines like Google. That's a good thing, but it puts some extra load on our servers, and much of that load could be avoided with a small new search feature.

The idea:

Near the top of the search command hierarchy there could be a flag, "executeOnCrawl", defaulting to "true" (backward compatible). In addition, a method "isCrawlRequest" could be implemented that checks the user agent and returns "true" if the request comes from a crawl bot.

It would then be really simple for skins to skip enrichments etc. on crawl requests.
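To make the idea concrete, here is a minimal sketch of what an "isCrawlRequest" check could look like. The class name, the bot patterns, and the overall shape are my assumptions for illustration, not existing Sesat code:

```java
import java.util.regex.Pattern;

/**
 * Hypothetical sketch (not existing Sesat code): detect crawler requests
 * from the User-Agent header so skins can skip expensive work such as
 * enrichments when a bot is crawling.
 */
public final class CrawlDetector {

    // A few common crawler signatures; a real list would be configurable.
    private static final Pattern BOT_PATTERN = Pattern.compile(
            "(?i)googlebot|bingbot|slurp|yandex|baiduspider|crawler|spider");

    private CrawlDetector() {
    }

    /** Returns true if the given User-Agent string looks like a crawl bot. */
    public static boolean isCrawlRequest(final String userAgent) {
        return userAgent != null && BOT_PATTERN.matcher(userAgent).find();
    }
}
```

A command could then combine this with the proposed flag, along the lines of `executeOnCrawl || !CrawlDetector.isCrawlRequest(userAgent)`, to decide whether to run.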

I know this can be implemented at the skin level, but I thought the idea might be good enough to implement in Sesat itself.

What do you think?

--
ENDRE MIDTGÅRD MECKELBORG
Senior Developer
Schibsted Søk AS
sesam.no

+47 930 14 504



_______________________________________________
Kernel-development mailing list
[email protected]
http://sesat.no/mailman/listinfo/kernel-development
