Hello,
While evaluating the performance of the 1.20.0-SNAPSHOT release, I ran the
Mongo query below, which runs in 15 minutes on the 1.19 release.
SELECT `Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
`Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
`Elements`.`ElementTypeName` AS `ElementTypeName`,
`Elements`.`PlanID` AS `PlanID`
FROM `mongo.grounds`.`Elements` `Elements`
INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts` ON
(`Elements`.`_id` = `Elements_Efforts`.`_id`)
WHERE (`Elements`.`PlanID` = '1623263140')
GROUP BY `Elements_Efforts`.`EffortTypeName`,
`Elements`.`ElementSubTypeName`,
`Elements`.`ElementTypeName`,
`Elements`.`PlanID`
On the 1.20.0-SNAPSHOT build, the same query runs for 34 minutes before
failing with this error: "Sort exceeded memory limit of 104857600 bytes, but
did not opt in to external sorting. Aborting operation. Pass
allowDiskUse:true to opt in." on server localhost:27017. Any ideas? I realize
it's a MongoDB error, but the Mongo database doesn't raise it with the 1.19
release, and nothing in my environment has changed. I was expecting improved
performance from the Mongo storage plugin in the upcoming 1.20 release. I've
attached the full stack trace below.
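For reference, here is a minimal sketch of how the underlying aggregation
could opt in to external sorting when issued directly through the MongoDB
Java driver (the same driver that appears in the stack trace). This isn't
something Drill exposes to me as a user; the pipeline, the $sort on _id, and
the connection string are placeholders for illustration only.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;

public class AllowDiskUseSketch {
    public static void main(String[] args) {
        // Credentials omitted; localhost:27017 matches the server in the log.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> elements =
                    client.getDatabase("grounds").getCollection("Elements");

            // Hypothetical pipeline standing in for whatever Drill pushes down.
            // The relevant part is allowDiskUse(true), which lets the server
            // spill the sort to disk instead of aborting at the ~100 MB limit.
            Iterable<Document> results = elements.aggregate(Arrays.asList(
                    new Document("$match", new Document("PlanID", "1623263140")),
                    new Document("$sort", new Document("_id", 1))
            )).allowDiskUse(true);

            for (Document doc : results) {
                System.out.println(doc.toJson());
            }
        }
    }
}

If the 1.20 plugin now pushes the sort/aggregation down to MongoDB, I assume
the equivalent of this option would need to be set somewhere on the Drill
side, but I haven't found where.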
2022-01-27 14:35:09,332 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.drill.exec.work.foreman.Foreman - Query text for query with id
1e0d0c12-6553-4a9e-22a4-20809a813521 issued by clarkddc: SELECT
`Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
`Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
`Elements`.`ElementTypeName` AS `ElementTypeName`,
`Elements`.`PlanID` AS `PlanID`
FROM `mongo.grounds`.`Elements` `Elements`
INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts` ON
(`Elements`.`_id` = `Elements_Efforts`.`_id`)
WHERE (`Elements`.`PlanID` = '1623263140')
GROUP BY `Elements_Efforts`.`EffortTypeName`,
`Elements`.`ElementSubTypeName`,
`Elements`.`ElementTypeName`,
`Elements`.`PlanID`
2022-01-27 14:35:09,604 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.drill.exec.store.PluginHandle - Creating storage plugin for mongo
2022-01-27 14:35:09,662 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.e.s.mongo.MongoStoragePlugin - Created connection to
[address=localhost:27017, user=web-user].
2022-01-27 14:35:09,663 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.e.s.mongo.MongoStoragePlugin - Number of open connections 1.
2022-01-27 14:35:09,922 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] WARN
o.a.d.e.s.m.s.MongoSchemaFactory - Failure while getting collection names from
'admin'. Command failed with error 13 (Unauthorized): 'not authorized on admin
to execute command { listCollections: 1, cursor: {}, nameOnly: true, $db:
"admin", $clusterTime: { clusterTime: Timestamp(1643312104, 1), signature: {
hash: BinData(0, 9BABBD8A4621B367CD595FC6A0F8A6B034F9B0EC), keyId:
7034735910600048641 } }, lsid: { id:
UUID("b992f7a8-5645-457d-a04f-75f65de4aeaf") } }' on server localhost:27017.
The full response is {"operationTime": {"$timestamp": {"t": 1643312104, "i":
1}}, "ok": 0.0, "errmsg": "not authorized on admin to execute command {
listCollections: 1, cursor: {}, nameOnly: true, $db: \"admin\", $clusterTime: {
clusterTime: Timestamp(1643312104, 1), signature: { hash: BinData(0,
9BABBD8A4621B367CD595FC6A0F8A6B034F9B0EC), keyId: 7034735910600048641 } },
lsid: { id: UUID(\"b992f7a8-5645-457d-a04f-75f65de4aeaf\") } }", "code": 13,
"codeName": "Unauthorized", "$clusterTime": {"clusterTime": {"$timestamp":
{"t": 1643312104, "i": 1}}, "signature": {"hash": {"$binary": {"base64":
"m6u9ikYhs2fNWV/GoPimsDT5sOw=", "subType": "00"}}, "keyId":
7034735910600048641}}}
2022-01-27 14:35:10,571 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.drill.exec.store.PluginHandle - Creating storage plugin for dfs
2022-01-27 14:35:10,605 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 20 classes for
org.apache.drill.exec.store.dfs.FormatPlugin took 13ms
2022-01-27 14:35:10,621 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 21 classes for
org.apache.drill.common.logical.FormatPluginConfig took 0ms
2022-01-27 14:35:10,624 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 21 classes for
org.apache.drill.common.logical.FormatPluginConfig took 0ms
2022-01-27 14:35:10,625 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 21 classes for
org.apache.drill.common.logical.FormatPluginConfig took 0ms
2022-01-27 14:35:10,625 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.drill.exec.store.PluginHandle - Creating storage plugin for cp
2022-01-27 14:35:10,640 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 20 classes for
org.apache.drill.exec.store.dfs.FormatPlugin took 0ms
2022-01-27 14:35:10,643 [1e0d0c12-6553-4a9e-22a4-20809a813521:foreman] INFO
o.a.d.c.s.persistence.ScanResult - loading 21 classes for
org.apache.drill.common.logical.FormatPluginConfig took 0ms
2022-01-27 15:09:16,536 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.s.m.MongoScanBatchCreator - Number of record readers initialized : 1
2022-01-27 15:09:16,548 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.s.m.MongoScanBatchCreator - Number of record readers initialized : 1
2022-01-27 15:09:16,601 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.w.fragment.FragmentExecutor - 1e0d0c12-6553-4a9e-22a4-20809a813521:0:0:
State change requested AWAITING_ALLOCATION --> RUNNING
2022-01-27 15:09:16,608 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.w.f.FragmentStatusReporter - 1e0d0c12-6553-4a9e-22a4-20809a813521:0:0:
State to report: RUNNING
2022-01-27 15:09:17,080 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.exec.physical.impl.ScanBatch - User Error Occurred: Command failed with
error 16819 (Location16819): 'Sort exceeded memory limit of 104857600 bytes,
but did not opt in to external sorting. Aborting operation. Pass
allowDiskUse:true to opt in.' on server localhost:27017. The full response is
{"operationTime": {"$timestamp": {"t": 1643314154, "i": 1}}, "ok": 0.0,
"errmsg": "Sort exceeded memory limit of 104857600 bytes, but did not opt in to
external sorting. Aborting operation. Pass allowDiskUse:true to opt in.",
"code": 16819, "codeName": "Location16819", "$clusterTime": {"clusterTime":
{"$timestamp": {"t": 1643314154, "i": 1}}, "signature": {"hash": {"$binary":
{"base64": "FOXdk3SnWHMsJo6W6HGtqNLLCMY=", "subType": "00"}}, "keyId":
7034735910600048641}}} (Command failed with error 16819 (Location16819): 'Sort
exceeded memory limit of 104857600 bytes, but did not opt in to external
sorting. Aborting operation. Pass allowDiskUse:true to opt in.' on server
localhost:27017. The full response is {"operationTime": {"$timestamp": {"t":
1643314154, "i": 1}}, "ok": 0.0, "errmsg": "Sort exceeded memory limit of
104857600 bytes, but did not opt in to external sorting. Aborting operation.
Pass allowDiskUse:true to opt in.", "code": 16819, "codeName": "Location16819",
"$clusterTime": {"clusterTime": {"$timestamp": {"t": 1643314154, "i": 1}},
"signature": {"hash": {"$binary": {"base64": "FOXdk3SnWHMsJo6W6HGtqNLLCMY=",
"subType": "00"}}, "keyId": 7034735910600048641}}})
org.apache.drill.common.exceptions.UserException: INTERNAL_ERROR ERROR: Command
failed with error 16819 (Location16819): 'Sort exceeded memory limit of
104857600 bytes, but did not opt in to external sorting. Aborting operation.
Pass allowDiskUse:true to opt in.' on server localhost:27017. The full response
is {"operationTime": {"$timestamp": {"t": 1643314154, "i": 1}}, "ok": 0.0,
"errmsg": "Sort exceeded memory limit of 104857600 bytes, but did not opt in to
external sorting. Aborting operation. Pass allowDiskUse:true to opt in.",
"code": 16819, "codeName": "Location16819", "$clusterTime": {"clusterTime":
{"$timestamp": {"t": 1643314154, "i": 1}}, "signature": {"hash": {"$binary":
{"base64": "FOXdk3SnWHMsJo6W6HGtqNLLCMY=", "subType": "00"}}, "keyId":
7034735910600048641}}}
Please, refer to logs for more information.
[Error Id: d6465445-20ea-4b0e-94f0-951e183fc90e ]
at
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:657)
at
org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:305)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.RecordIterator.nextBatch(RecordIterator.java:102)
at
org.apache.drill.exec.record.RecordIterator.next(RecordIterator.java:191)
at
org.apache.drill.exec.physical.impl.join.JoinStatus.initialize(JoinStatus.java:76)
at
org.apache.drill.exec.physical.impl.join.MergeJoinBatch.buildSchema(MergeJoinBatch.java:169)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.buildSchema(ExternalSortBatch.java:320)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.buildSchema(StreamingAggBatch.java:166)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)
at
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:323)
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:310)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:310)
at
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mongodb.MongoCommandException: Command failed with error 16819
(Location16819): 'Sort exceeded memory limit of 104857600 bytes, but did not
opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt
in.' on server localhost:27017. The full response is {"operationTime":
{"$timestamp": {"t": 1643314154, "i": 1}}, "ok": 0.0, "errmsg": "Sort exceeded
memory limit of 104857600 bytes, but did not opt in to external sorting.
Aborting operation. Pass allowDiskUse:true to opt in.", "code": 16819,
"codeName": "Location16819", "$clusterTime": {"clusterTime": {"$timestamp":
{"t": 1643314154, "i": 1}}, "signature": {"hash": {"$binary": {"base64":
"FOXdk3SnWHMsJo6W6HGtqNLLCMY=", "subType": "00"}}, "keyId":
7034735910600048641}}}
at
com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:195)
at
com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:400)
at
com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:324)
at
com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:114)
at
com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:603)
at
com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:81)
at
com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:252)
at
com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:214)
at
com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:123)
at
com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:113)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:328)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:318)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:201)
at
com.mongodb.internal.operation.CommandOperationHelper.lambda$executeCommand$4(CommandOperationHelper.java:189)
at
com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:189)
at
com.mongodb.internal.operation.AggregateOperationImpl.execute(AggregateOperationImpl.java:195)
at
com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:306)
at
com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:46)
at
com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:184)
at
com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)
at
com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)
at
org.apache.drill.exec.store.mongo.MongoRecordReader.next(MongoRecordReader.java:205)
at
org.apache.drill.exec.physical.impl.ScanBatch.internalNext(ScanBatch.java:234)
at
org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:298)
... 42 common frames omitted
2022-01-27 15:09:17,081 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] ERROR
o.a.d.e.physical.impl.BaseRootExec - Batch dump started: dumping last 2 failed
batches
2022-01-27 15:09:17,081 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] ERROR
o.a.d.exec.physical.impl.ScanBatch -
ScanBatch[container=org.apache.drill.exec.record.VectorContainer@58bf341c[recordCount
= 0, schemaChanged = true, schema = null, wrappers = [], ...],
currentReader=MongoRecordReader[reader=BsonRecordReader[]], schema=null]
2022-01-27 15:09:17,081 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] ERROR
o.a.d.exec.physical.impl.ScanBatch -
ScanBatch[container=org.apache.drill.exec.record.VectorContainer@4e4bc217[recordCount
= 0, schemaChanged = true, schema = null, wrappers = [], ...],
currentReader=null, schema=null]
2022-01-27 15:09:17,081 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] ERROR
o.a.d.e.physical.impl.BaseRootExec - Batch dump completed.
2022-01-27 15:09:17,082 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.w.fragment.FragmentExecutor - 1e0d0c12-6553-4a9e-22a4-20809a813521:0:0:
State change requested RUNNING --> FAILED
2022-01-27 15:09:17,092 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] INFO
o.a.d.e.w.fragment.FragmentExecutor - 1e0d0c12-6553-4a9e-22a4-20809a813521:0:0:
State change requested FAILED --> FINISHED
2022-01-27 15:09:17,112 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] WARN
o.a.d.exec.rpc.control.WorkEventBus - Fragment
1e0d0c12-6553-4a9e-22a4-20809a813521:0:0 manager is not found in the work bus.
2022-01-27 15:09:17,121 [qtp1826822932-51] ERROR
o.a.d.e.server.rest.QueryResources - Query from Web UI Failed: {}
org.apache.drill.common.exceptions.UserRemoteException: INTERNAL_ERROR ERROR:
Command failed with error 16819 (Location16819): 'Sort exceeded memory limit of
104857600 bytes, but did not opt in to external sorting. Aborting operation.
Pass allowDiskUse:true to opt in.' on server localhost:27017. The full response
is {"operationTime": {"$timestamp": {"t": 1643314154, "i": 1}}, "ok": 0.0,
"errmsg": "Sort exceeded memory limit of 104857600 bytes, but did not opt in to
external sorting. Aborting operation. Pass allowDiskUse:true to opt in.",
"code": 16819, "codeName": "Location16819", "$clusterTime": {"clusterTime":
{"$timestamp": {"t": 1643314154, "i": 1}}, "signature": {"hash": {"$binary":
{"base64": "FOXdk3SnWHMsJo6W6HGtqNLLCMY=", "subType": "00"}}, "keyId":
7034735910600048641}}}
Fragment: 0:0
Please, refer to logs for more information.
[Error Id: d6465445-20ea-4b0e-94f0-951e183fc90e on localhost:31010]
at
org.apache.drill.exec.server.rest.RestQueryRunner.submitQuery(RestQueryRunner.java:99)
at
org.apache.drill.exec.server.rest.RestQueryRunner.run(RestQueryRunner.java:54)
at
org.apache.drill.exec.server.rest.QueryResources.submitQuery(QueryResources.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
at
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at
org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
at
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
at
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at
org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
at
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
at
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
at
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
at
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
at
org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
at
org.apache.drill.exec.server.rest.header.ResponseHeadersSettingFilter.doFilter(ResponseHeadersSettingFilter.java:71)
at
org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
at
org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
at
org.apache.drill.exec.server.rest.CsrfTokenValidateFilter.doFilter(CsrfTokenValidateFilter.java:55)
at
org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
at
org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
at
org.apache.drill.exec.server.rest.CsrfTokenInjectFilter.doFilter(CsrfTokenInjectFilter.java:54)
at
org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
at
org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
at
org.apache.drill.exec.server.rest.auth.DrillHttpSecurityHandlerProvider.handle(DrillHttpSecurityHandlerProvider.java:163)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
at
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1435)
at
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
at
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1350)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Command failed with error 16819
(Location16819): 'Sort exceeded memory limit of 104857600 bytes, but did not
opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt
in.' on server localhost:27017. The full response is {"operationTime":
{"$timestamp": {"t": 1643314154, "i": 1}}, "ok": 0.0, "errmsg": "Sort exceeded
memory limit of 104857600 bytes, but did not opt in to external sorting.
Aborting operation. Pass allowDiskUse:true to opt in.", "code": 16819,
"codeName": "Location16819", "$clusterTime": {"clusterTime": {"$timestamp":
{"t": 1643314154, "i": 1}}, "signature": {"hash": {"$binary": {"base64":
"FOXdk3SnWHMsJo6W6HGtqNLLCMY=", "subType": "00"}}, "keyId":
7034735910600048641}}}
at
com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:195)
at
com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:400)
at
com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:324)
at
com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:114)
at
com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:603)
at
com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:81)
at
com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:252)
at
com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:214)
at
com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:123)
at
com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:113)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:328)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:318)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:201)
at
com.mongodb.internal.operation.CommandOperationHelper.lambda$executeCommand$4(CommandOperationHelper.java:189)
at
com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)
at
com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:189)
at
com.mongodb.internal.operation.AggregateOperationImpl.execute(AggregateOperationImpl.java:195)
at
com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:306)
at
com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:46)
at
com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:184)
at
com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)
at
com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)
at
org.apache.drill.exec.store.mongo.MongoRecordReader.next(MongoRecordReader.java:205)
at
org.apache.drill.exec.physical.impl.ScanBatch.internalNext(ScanBatch.java:234)
at
org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:298)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.RecordIterator.nextBatch(RecordIterator.java:102)
at
org.apache.drill.exec.record.RecordIterator.next(RecordIterator.java:191)
at
org.apache.drill.exec.physical.impl.join.JoinStatus.initialize(JoinStatus.java:76)
at
org.apache.drill.exec.physical.impl.join.MergeJoinBatch.buildSchema(MergeJoinBatch.java:169)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.buildSchema(ExternalSortBatch.java:320)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.buildSchema(StreamingAggBatch.java:166)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:153)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
at
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)
at
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:323)
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:310)
at .......(:0)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:310)
at
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
at .......(:0)
2022-01-27 15:09:17,156 [1e0d0c12-6553-4a9e-22a4-20809a813521:frag:0:0] WARN
o.a.d.e.w.f.QueryStateProcessor - Dropping request to move to COMPLETED state
as query is already at FAILED state (which is terminal).
2022-01-28 08:42:47,435 [qtp1826822932-74] INFO
o.a.d.e.s.r.a.DrillRestLoginService - WebUser clarkddc logged in from
[0:0:0:0:0:0:0:1]:54462