There are two radically distinct use cases:

1. Consumers on the open Internet. They do stupid things. Give them a very constrained search experience, enforced with query preprocessing (a rough sketch follows below). Maybe give them only dismax queries.

2. Professional power users. They typically have "credentials" for using the application, so if they are detected running long or stupid queries, log the details and take administrative action, such as denying them access (or billing them for excessive resource usage).
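
To make the first point concrete, here is a minimal sketch of the kind of query preprocessing meant for the consumer case, written as a plain Java helper. The class name, the term cap, and the rejected operators are illustrative assumptions, not anything Solr provides out of the box:

    // Hypothetical pre-filter applied to consumer queries before they reach Solr.
    // Rejects obviously expensive patterns and caps the number of terms.
    public final class QueryGate {

        private static final int MAX_TERMS = 8;  // arbitrary cap, for illustration only

        /** Returns a sanitized query string, or null if the query should be rejected. */
        public static String preprocess(String rawQuery) {
            if (rawQuery == null || rawQuery.trim().isEmpty()) {
                return null;
            }
            String q = rawQuery.trim();

            // Leading wildcards and fuzzy operators are typical "killer" constructs.
            if (q.startsWith("*") || q.startsWith("?") || q.contains("~")) {
                return null;
            }

            // Cap the number of whitespace-separated terms.
            String[] terms = q.split("\\s+");
            if (terms.length > MAX_TERMS) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < MAX_TERMS; i++) {
                    if (i > 0) sb.append(' ');
                    sb.append(terms[i]);
                }
                q = sb.toString();
            }
            return q;
        }
    }

On the Solr side, the consumer-facing request handler could then pin defType to dismax via an invariant, so user input can never switch parsers.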

-- Jack Krupansky

-----Original Message----- From: Bernd Fehling
Sent: Monday, June 03, 2013 4:39 AM
To: solr-user@lucene.apache.org
Subject: how are you handling killer queries?

How are you handling "killer queries" with solr?

While solr/lucene (currently 4.2.1) is trying to do its best, I sometimes see stupid queries
in my logs, identifiable by their extremely long query times.

Example:
q=???????+and+??+and+???+and+????+and+???????+and+??????????

I even get hits for this (hits=34091309 status=0 QTime=88667).

But the jetty log says:
WARN:oejs.Response:Committed before 500 {msg=Datenübergabe unterbrochen
(broken pipe),trace=org.eclipse.jetty.io.EofException...
org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:838)|?... 35 more|,code=500}
WARN:oejs.ServletHandler:/solr/base/select
java.lang.IllegalStateException: Committed
       at org.eclipse.jetty.server.Response.resetBuffer(Response.java:1136)

Because I get hits and a QTime, the search itself was successful, right?

But jetty/http has already closed the connection, and solr doesn't know about this?

How are you handling "killer queries", just ignoring them?
Or is there something to tune (a jetty timeout setting) or filter (query filtering)?
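
One concrete knob of that kind, on the Solr side rather than in jetty, is the timeAllowed parameter, which caps the time spent collecting results and marks the response as partial when the budget is exceeded. A rough SolrJ sketch, assuming SolrJ 4.x; the URL and the 5000 ms budget are illustrative only:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class TimeBoxedSearch {
        public static void main(String[] args) throws SolrServerException {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/base");

            SolrQuery query = new SolrQuery("some very broad consumer query");
            query.setTimeAllowed(5000);  // stop collecting after ~5 s; results may be partial

            QueryResponse rsp = solr.query(query);
            // When the budget is hit, the response header carries partialResults=true.
            System.out.println("hits=" + rsp.getResults().getNumFound()
                    + " qtime=" + rsp.getQTime());
        }
    }

Note that in 4.x timeAllowed only bounds the result-collection phase, so a pathological query can still burn time in parsing and expansion; combining it with query preprocessing is the safer route.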

Would be pleased to hear your comments.

Bernd
