a2l007 commented on a change in pull request #10746:
URL: https://github.com/apache/druid/pull/10746#discussion_r556043383



##########
File path: processing/src/main/java/org/apache/druid/query/BadQueryException.java
##########
@@ -20,12 +20,16 @@
 package org.apache.druid.query;
 
 /**
- * This exception is thrown when the requested operation cannot be completed due to a lack of available resources.
+ * An abstract class for all query exceptions that should return a bad request status code (404).

Review comment:
       Shouldn't it be 400 here?
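
   If the intent is indeed 400, the constant and the javadoc would presumably agree along these lines. This is a minimal sketch, not the actual Druid source; the field name `STATUS_CODE` is taken from its use later in this diff, and the constructor is illustrative:

```java
// Sketch only: an abstract marker exception whose subclasses should map to
// HTTP 400 (Bad Request), matching the reviewer's point that the javadoc's
// "404" (Not Found) looks like a typo for a malformed-query status.
abstract class BadQueryException extends RuntimeException
{
  // 400 = Bad Request; 404 would be wrong for a query the server rejects.
  public static final int STATUS_CODE = 400;

  protected BadQueryException(String message)
  {
    super(message);
  }
}
```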

##########
File path: processing/src/main/java/org/apache/druid/query/ResourceLimitExceededException.java
##########
@@ -19,18 +19,33 @@
 
 package org.apache.druid.query;
 
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
 import org.apache.druid.java.util.common.StringUtils;
 
 /**
 * Exception indicating that an operation failed because it exceeded some configured resource limit.
  *
- * This is used as a marker exception by {@link QueryInterruptedException} to report the "Resource limit exceeded"
- * error code.
+ * This is a {@link BadQueryException} because it is likely a user's mistake when this exception is thrown.
+ * Druid cluster operators set resource limitations for good reasons. When a user query requires
+ * too many resources, it likely means that the query should be modified to not use excessive resources.

Review comment:
       Based on the description, shouldn't [this exception](https://github.com/apache/druid/blob/master/server/src/main/java/org/apache/druid/client/DirectDruidClient.java#L447) also be categorized as ResourceLimitExceeded?
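
   What the comment proposes would look roughly like this. This is a hypothetical sketch, not the actual DirectDruidClient code; the class and method names other than `ResourceLimitExceededException` are placeholders:

```java
// Sketch only: rethrow a low-level failure as the marker exception type, so
// that QueryResource's BadQueryException handling would map it to HTTP 400
// instead of falling through to a generic 500.
class ResourceLimitExceededException extends RuntimeException
{
  ResourceLimitExceededException(String message)
  {
    super(message);
  }
}

class DirectDruidClientSketch
{
  // Placeholder for the call site the reviewer links to; 'reason' would be
  // the actual failure message observed on the data node channel.
  static RuntimeException categorize(String reason)
  {
    return new ResourceLimitExceededException(reason);
  }
}
```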

##########
File path: server/src/main/java/org/apache/druid/server/QueryResource.java
##########
@@ -463,36 +481,36 @@ Response ok(Object object) throws IOException
 
     Response gotError(Exception e) throws IOException
     {
-      return Response.serverError()
-                     .type(contentType)
-                     .entity(
-                         newOutputWriter(null, null, false)
-                             .writeValueAsBytes(QueryInterruptedException.wrapIfNeeded(e))
-                     )
-                     .build();
+      return buildNonOkResponse(
+          Status.INTERNAL_SERVER_ERROR.getStatusCode(),
+          QueryInterruptedException.wrapIfNeeded(e)
+      );
     }
 
     Response gotTimeout(QueryTimeoutException e) throws IOException
     {
-      return Response.status(QueryTimeoutException.STATUS_CODE)
-                     .type(contentType)
-                     .entity(
-                         newOutputWriter(null, null, false)
-                             .writeValueAsBytes(e)
-                     )
-                     .build();
+      return buildNonOkResponse(QueryTimeoutException.STATUS_CODE, e);
     }
 
     Response gotLimited(QueryCapacityExceededException e) throws IOException
     {
-      return Response.status(QueryCapacityExceededException.STATUS_CODE)
-                     .entity(newOutputWriter(null, null, false).writeValueAsBytes(e))
-                     .build();
+      return buildNonOkResponse(QueryCapacityExceededException.STATUS_CODE, e);
     }
 
     Response gotUnsupported(QueryUnsupportedException e) throws IOException
     {
-      return Response.status(QueryUnsupportedException.STATUS_CODE)
+      return buildNonOkResponse(QueryUnsupportedException.STATUS_CODE, e);
+    }
+
+    Response gotBadQuery(BadQueryException e) throws IOException
+    {
+      return buildNonOkResponse(BadQueryException.STATUS_CODE, e);

Review comment:
       I was curious whether this would catch ResourceLimitExceeded exceptions wrapped under `QueryInterruptedException`, so I ran the PR on my test cluster. It looks like such queries still give us 500 instead of 400. Related stack trace:
   
   ```
   HTTP/1.1 500 Internal Server Error
   < Content-Type: application/json
   < Content-Length: 372
   {
     "error" : "Resource limit exceeded",
     "trace":
   org.apache.druid.query.QueryInterruptedException: Not enough aggregation buffer space to execute this query. Try increasing druid.processing.buffer.sizeBytes or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_231]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_231]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_231]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_231]
        at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:124) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromObjectWith(StdValueInstantiator.java:283) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:229) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:198) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:422) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:65) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4189) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2476) ~[jackson-databind-2.10.5.1.jar:2.10.5.1]
        at org.apache.druid.client.JsonParserIterator.init(JsonParserIterator.java:182) [classes/:?]
        at org.apache.druid.client.JsonParserIterator.hasNext(JsonParserIterator.java:90) [classes/:?]
   ```
   Maybe as a follow-up we could check the wrapped error code while handling QueryInterruptedException to determine the correct exception type.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
