: The easiest way is to run maybe 100,000 or more queries and take an
: average. A single microsecond value for a query would be incredibly
: inaccurate.

that can be useful for doing the timing externally, if you're interested 
in averaging over all X queries in a sequential batch, but it doesn't help 
with things like replaying a live log (so you see accurate cache 
behavior) and then trying to evaluate whether a subset of those queries 
(the ones that use faceting, maybe) are faster with config X than with 
config Y.  for that you really want to be able to crunch the logs to 
extract just the requests you are interested in and then generate stats -- 
but as Ahmet points out, with millisecond resolution something that takes 
about 1 millisecond is hard to profile for possible improvements.
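
just to sketch what i mean by crunching the logs -- assuming your request 
log lines contain the usual params={...} and QTime=N fields, a quick and 
dirty cruncher might look like the following (the class name, the args, 
and the "facet=true" filter are all made up for the example):

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class CrunchQTimes {
    // matches the QTime=N field Solr logs for each request
    private static final Pattern QTIME = Pattern.compile("QTime=(\\d+)");

    public static void main(String[] args) throws Exception {
      String logFile = args[0];   // e.g. logs/solr.log
      String filter  = args[1];   // e.g. "facet=true" -- the subset to keep
      List<Long> times = new ArrayList<Long>();
      BufferedReader in = new BufferedReader(new FileReader(logFile));
      String line;
      while ((line = in.readLine()) != null) {
        if (!line.contains(filter)) continue;  // skip requests we don't care about
        Matcher m = QTIME.matcher(line);
        if (m.find()) times.add(Long.parseLong(m.group(1)));
      }
      in.close();
      if (times.isEmpty()) {
        System.out.println("no matching requests");
        return;
      }
      Collections.sort(times);
      long sum = 0;
      for (long t : times) sum += t;
      System.out.println("requests:          " + times.size());
      System.out.println("mean QTime (ms):   " + ((double) sum / times.size()));
      System.out.println("median QTime (ms): " + times.get(times.size() / 2));
    }
  }

run that once against the log from config X and once against config Y and 
you can compare just the subset you care about directly.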

with the Solr code as written the best suggestion i can make is to see if 
your servlet container can log its requests at microsecond resolution -- 
that will include the ResponseWriter timing and the network overhead, but 
that may be better anyway.
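
if your container can't do that natively, a workaround is a servlet 
Filter that wraps each request with System.nanoTime() -- just a sketch 
(the class name and the println are made up, and you'd have to register 
it in web.xml in front of Solr's dispatch filter yourself):

  import java.io.IOException;
  import javax.servlet.Filter;
  import javax.servlet.FilterChain;
  import javax.servlet.FilterConfig;
  import javax.servlet.ServletException;
  import javax.servlet.ServletRequest;
  import javax.servlet.ServletResponse;
  import javax.servlet.http.HttpServletRequest;

  public class MicroTimingFilter implements Filter {
    public void init(FilterConfig cfg) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
      long start = System.nanoTime();
      try {
        chain.doFilter(req, res);  // the actual request handling
      } finally {
        long micros = (System.nanoTime() - start) / 1000L;
        HttpServletRequest http = (HttpServletRequest) req;
        // getQueryString() can be null for requests with no params
        System.out.println(http.getRequestURI() + "?" + http.getQueryString()
                           + " took " + micros + "us");
      }
    }
  }

note that unlike the container's own access log this won't necessarily 
capture all the network overhead, since the response may still be 
buffered when doFilter returns.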

A patch to change Solr to use System.nanoTime() for all the internal 
timing would probably be pretty straightforward -- we'd just have to 
consider whether it will screw people up if we change the format that gets 
logged or included in the response.
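
the idiom such a patch would use is just this (a toy example, not actual 
Solr code):

  public class NanoTimingExample {
    public static void main(String[] args) throws Exception {
      long start = System.nanoTime();
      Thread.sleep(1);                       // stand-in for the actual query work
      long elapsedNanos = System.nanoTime() - start;

      // nanoTime has no fixed epoch, so it's only meaningful as a
      // difference -- but unlike currentTimeMillis it's monotonic, so it
      // can't jump backwards if the system clock is adjusted mid-query.

      // keeping the logged/response value in milliseconds (just with
      // sub-ms precision) is one way to avoid breaking existing clients:
      double qtimeMillis = elapsedNanos / 1000000.0;
      System.out.println("QTime=" + qtimeMillis);
    }
  }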


-Hoss
