Hi James,

After setting the config option exec.enable_union_type = true and re-running
the original query:

SELECT `Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
  `Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
  `Elements`.`ElementTypeName` AS `ElementTypeName`,
  `Elements`.`PlanID` AS `PlanID`
FROM `mongo.grounds`.`Elements` `Elements`
  INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts` ON
(`Elements`.`_id` = `Elements_Efforts`.`_id`)
WHERE (`Elements`.`PlanID` = '1623263140')
GROUP BY `Elements_Efforts`.`EffortTypeName`,
  `Elements`.`ElementSubTypeName`,
  `Elements`.`ElementTypeName`,
  `Elements`.`PlanID`

I get the following error: "SYSTEM ERROR: RuntimeException: Schema change
not currently supported for schemas with complex types". I've attached both
the stack trace and the query profile.
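For reference, this is how I enabled the option (at session scope, from
sqlline; I believe ALTER SYSTEM would set it cluster-wide instead):

```sql
ALTER SESSION SET `exec.enable_union_type` = true;
```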

On Thu, Feb 3, 2022 at 10:05 AM Daniel Clark <[email protected]> wrote:

> I tried again with this query:
>
> with element as (
> select
>    _id,
>    ElementTypeName,
>    ElementSubTypeName,
>    PlanId
> FROM
>    `mongo.grounds`.`Elements`
> ), element_effort as (
> select
>    _id,
>    EffortTypeName
> FROM
>    `mongo.grounds`.`Elements_Efforts`
> )
> select
>    *
> from
>    element
> join
>    element_effort on element._id = element_effort._id
> where element.PlanId = '1623263140'
>
> The query completed successfully, but it did not return any rows. I've
> attached the log and the profile.
>
> On Thu, Feb 3, 2022 at 9:02 AM James Turton <[email protected]> wrote:
>
>> It looks like there is a FLOAT field called MinutesTotal that is only
>> present in some documents.  Can you try writing a query that uses
>> explicit column specs like this?
>>
>> with element as (
>> select
>>    _id,
>>    ElementTypeName,
>>    PlanId,
>>    ...
>> FROM
>>    `mongo.grounds`.`Elements`
>> ), element_effort as (
>> select
>>    _id,
>>    EffortTypeName
>> FROM
>>    `mongo.grounds`.`Elements_Efforts`
>> )
>> select
>>    *
>> from
>>    element
>> join
>>    element_effort on element._id = element_effort._id;
>>
>> (Needs some fleshing out).  You can also experiment with the UNION type
>> for this situation but I understand that one should be cautious about
>> using it in production.
>>
>> I still cannot say why 1.19 has no problem here but it could perhaps be
>> a batch ordering thing.  I think that whether or not the _first_ batch
>> includes a MinutesTotal field can make a difference to subsequent schema
>> handling (would need to confirm this last bit).
>>
>>
>> On 2022/02/03 15:33, Daniel Clark wrote:
>> > Hi James,
>> >
>> > Please see the attached.
>> >
>> > On Wed, Feb 2, 2022 at 2:35 AM Daniel Clark <[email protected]> wrote:
>> >
>> >     Hi James,
>> >
>> >     There initially weren’t any differences between the 1.19 environment
>> >     and the 1.20.0-SNAPSHOT environment. The config options that worked
>> >     in the 1.19 environment were carried over when I installed the
>> >     snapshot build.  The recent change made to the snapshot build was
>> >     setting store.mongo.bson.record.reader to true. The original query
>> >     worked in the 1.19 environment, with the parameter set to false.
>> >
>> >     Yes, I’m running the exact same query against the exact same data
>> >     sources. I’ll attach a copy of the stack trace and profile, later
>> >     this morning. I’ll also see about reducing the dataset. Thanks for
>> >     following up.
>> >
>> >     Sent from my iPhone
>> >
>> >      > On Feb 2, 2022, at 2:03 AM, James Turton <[email protected]> wrote:
>> >      >
>> >      > Okay.  It's always a good idea to attach a stack trace and a
>> >     query profile when you have an error to send in, so maybe you can
>> >     add those?
>> >      >
>> >      > Next, we're left with a reproducibility challenge.  Are there
>> >     other config option differences between your two Drill environments,
>> >     beyond the one we've uncovered?  Are you running exactly the same
>> >     query against exactly the same data source in both environments?
>> >     Can you reduce the collections involved in the query to minimal (and
>> >     obfuscated if need be) datasets that we can use to reproduce the
>> >     problem?
>> >      >
>> >      >> On 2022/02/01 18:15, Daniel Clark wrote:
>> >      >> No, exec.enable_union_type is set to false.
>> >      >> On Tue, Feb 1, 2022 at 10:59 AM James Turton <[email protected]> wrote:
>> >      >>    Do you have exec.enable_union_type = true in your 1.19
>> >     environment?
>> >      >>    On 2022/02/01 17:30, Daniel Clark wrote:
>> >      >>     > Hi James,
>> >      >>     >
>> >      >>     > Yes, the store.mongo.bson.record.reader was set to false.
>> >     I set
>> >      >>    it to true
>> >      >>     > and re-ran the original query. It returned an error:
>> >      >>     > UNSUPPORTED_OPERATION ERROR: Schema changes not supported
>> in
>> >      >>    External Sort.
>> >      >>     > Please enable Union type.
>> >      >>     >
>> >      >>     >
>> >      >>     >
>> >      >>     > On Tue, Feb 1, 2022 at 9:19 AM James Turton <[email protected]> wrote:
>> >      >>     >
>> >      >>     >> Hi Daniel
>> >      >>     >>
>> >      >>     >> Please let us know if you have set the config option
>> >      >>    store.mongo.bson.record.reader
>> >      >>     >> = false and, if so, please set it to true.
>> >      >>     >>
>> >      >>     >> Thanks
>> >      >>     >> James
>> >      >>     >>
>> >      >>     >> On 2022/01/31 17:45, Daniel Clark wrote:
>> >      >>     >>
>> >      >>     >> Here it is. Please see the attached file.
>> >      >>     >>
>> >      >>     >> On Mon, Jan 31, 2022 at 4:22 AM James Turton <[email protected]> wrote:
>> >      >>     >>
>> >      >>     >>> Please also attach the query profile if you can.
>> >      >>     >>>
>> >      >>     >>> Thanks
>> >      >>     >>> James
>> >      >>     >>>
>> >      >>     >>> On 2022/01/31 08:09, luoc wrote:
>> >      >>     >>>> Hi Daniel,
>> >      >>     >>>>     What is the data type of the `_id` field? The
>> default
>> >      >>    ObjectId, or
>> >      >>     >>> String or key-value pair (Struct)?
>> >      >>     >>>>
>> >      >>     >>>>> On Jan 31, 2022, at 11:12, Daniel Clark <[email protected]> wrote:
>> >      >>     >>>>>
>> >      >>     >>>>>
>> >      >>     >>>>> Hello,
>> >      >>     >>>>>
>> >      >>     >>>>> I'm running this mongo query on the 1.20.0-SNAPSHOT
>> >     build. It
>> >      >>    runs
>> >      >>     >>> without error on the 1.19 release.
>> >      >>     >>>>>
>> >      >>     >>>>> SELECT `Elements_Efforts`.`EffortTypeName` AS
>> >     `EffortTypeName`,
>> >      >>     >>>>>     `Elements`.`ElementSubTypeName` AS
>> >     `ElementSubTypeName`,
>> >      >>     >>>>>     `Elements`.`ElementTypeName` AS `ElementTypeName`,
>> >      >>     >>>>>     `Elements`.`PlanID` AS `PlanID`
>> >      >>     >>>>> FROM `mongo.grounds`.`Elements` `Elements`
>> >      >>     >>>>>     INNER JOIN `mongo.grounds`.`Elements_Efforts`
>> >      >>    `Elements_Efforts` ON
>> >      >>     >>> (`Elements`.`_id` = `Elements_Efforts`.`_id`)
>> >      >>     >>>>> WHERE (`Elements`.`PlanID` = '1623263140')
>> >      >>     >>>>> GROUP BY `Elements_Efforts`.`EffortTypeName`,
>> >      >>     >>>>>     `Elements`.`ElementSubTypeName`,
>> >      >>     >>>>>     `Elements`.`ElementTypeName`,
>> >      >>     >>>>>     `Elements`.`PlanID`
>> >      >>     >>>>>
>> >      >>     >>>>> The error message returned is:
>> >      >>     >>>>>
>> >      >>     >>>>>
>> org.apache.drill.common.exceptions.UserRemoteException:
>> >      >>    SYSTEM ERROR:
>> >      >>     >>> UnsupportedOperationException: Map, Array, Union or
>> repeated
>> >      >>    scalar type
>> >      >>     >>> should not be used in group by, order by or in a
>> comparison
>> >      >>    operator. Drill
>> >      >>     >>> does not support compare between MAP:REQUIRED and
>> >     MAP:REQUIRED.
>> >      >>     >>>>>
>> >      >>     >>>>> Fragment: 0:0
>> >      >>     >>>>>
>> >      >>     >>>>> Please, refer to logs for more information.
>> >      >>     >>>>>
>> >      >>     >>>>> [Error Id: 21b3260d-9ebf-4156-a5fa-4748453b5465 on
>> >      >>    localhost:31010]
>> >      >>     >>>>>
>> >      >>     >>>>> I've tried searching the mailing list archives, as
>> well as
>> >      >>    googling
>> >      >>     >>> the error. The stack trace mentions that memory was
>> >     leaked by
>> >      >>    the query.
>> >      >>     >>> Any ideas? Full stack trace attached.
>> >      >>     >>>>> <stacktrace.txt>
>> >      >>     >>>
>> >      >>     >>>
>> >      >>     >>
>> >      >>     >
>> >
>>
>
2022-02-03 13:48:39,470 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
1e03dc77-953c-b877-e1ac-1cb1ae3475e3 issued by clarkddc: SELECT 
`Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
  `Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
  `Elements`.`ElementTypeName` AS `ElementTypeName`,
  `Elements`.`PlanID` AS `PlanID`
FROM `mongo.grounds`.`Elements` `Elements`
  INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts` ON 
(`Elements`.`_id` = `Elements_Efforts`.`_id`)
WHERE (`Elements`.`PlanID` = '1623263140')
GROUP BY `Elements_Efforts`.`EffortTypeName`,
  `Elements`.`ElementSubTypeName`,
  `Elements`.`ElementTypeName`,
  `Elements`.`PlanID`
2022-02-03 13:48:39,485 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:foreman] WARN  
o.a.d.e.s.m.s.MongoSchemaFactory - Failure while getting collection names from 
'admin'. Command failed with error 13 (Unauthorized): 'not authorized on admin 
to execute command { listCollections: 1, cursor: {}, nameOnly: true, $db: 
"admin", $clusterTime: { clusterTime: Timestamp(1643914112, 1), signature: { 
hash: BinData(0, 635DF23A29CFFFFE0BD9A216233A52F8EEFDAD04), keyId: 
7034735910600048641 } }, lsid: { id: 
UUID("be5e822b-6b70-498c-bf3c-cbad4fb17d3b") } }' on server localhost:27017. 
The full response is {"operationTime": {"$timestamp": {"t": 1643914112, "i": 
1}}, "ok": 0.0, "errmsg": "not authorized on admin to execute command { 
listCollections: 1, cursor: {}, nameOnly: true, $db: \"admin\", $clusterTime: { 
clusterTime: Timestamp(1643914112, 1), signature: { hash: BinData(0, 
635DF23A29CFFFFE0BD9A216233A52F8EEFDAD04), keyId: 7034735910600048641 } }, 
lsid: { id: UUID(\"be5e822b-6b70-498c-bf3c-cbad4fb17d3b\") } }", "code": 13, 
"codeName": "Unauthorized", "$clusterTime": {"clusterTime": {"$timestamp": 
{"t": 1643914112, "i": 1}}, "signature": {"hash": {"$binary": {"base64": 
"Y13yOinP//4L2aIWIzpS+O79rQQ=", "subType": "00"}}, "keyId": 
7034735910600048641}}}
2022-02-03 14:12:35,853 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.s.m.MongoScanBatchCreator - Number of record readers initialized : 1
2022-02-03 14:12:35,881 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.s.m.MongoScanBatchCreator - Number of record readers initialized : 1
2022-02-03 14:12:35,908 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested AWAITING_ALLOCATION --> RUNNING
2022-02-03 14:12:35,909 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State to report: RUNNING
2022-02-03 14:13:02,507 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] ERROR 
o.a.d.e.physical.impl.BaseRootExec - Batch dump started: dumping last 2 failed 
batches
2022-02-03 14:13:02,507 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] ERROR 
o.a.d.e.p.i.xsort.ExternalSortBatch - ExternalSortBatch[schema=BatchSchema 
[fields=[[`EffortTypeID` (INT:OPTIONAL)], [`EffortTypeName` 
(VARCHAR:OPTIONAL)], [`EffortSubTypeID` (INT:OPTIONAL)], [`EffortSubTypeName` 
(VARCHAR:OPTIONAL)], [`CrewTypeID` (INT:OPTIONAL)], [`CrewType` 
(VARCHAR:OPTIONAL)], [`Unit` (VARCHAR:OPTIONAL)], [`MaterialID` 
(INT:OPTIONAL)], [`EquipmentID` (INT:OPTIONAL)], [`Equipment` 
(VARCHAR:OPTIONAL)], [`MinutesPerUnit` (FLOAT8:OPTIONAL)], [`Frequency` 
(MAP:REQUIRED), children=([`Jan` (FLOAT8:OPTIONAL)], [`Feb` (FLOAT8:OPTIONAL)], 
[`Mar` (FLOAT8:OPTIONAL)], [`Apr` (FLOAT8:OPTIONAL)], [`May` 
(FLOAT8:OPTIONAL)], [`Jun` (FLOAT8:OPTIONAL)], [`Jul` (FLOAT8:OPTIONAL)], 
[`Aug` (FLOAT8:OPTIONAL)], [`Sep` (FLOAT8:OPTIONAL)], [`Oct` 
(FLOAT8:OPTIONAL)])], [`_id` (VARBINARY:OPTIONAL)], [`Material` 
(VARCHAR:OPTIONAL)]], selectionVector=NONE], sortState=LOAD, 
sortConfig=SortConfig[spillFileSize=268435456, spillBatchSize=1048576, 
mergeBatchSize=16777216, mSortBatchSize=65535], 
outputWrapperContainer=org.apache.drill.exec.record.VectorContainer@3004ba17[recordCount
 = 0, schemaChanged = false, schema = BatchSchema [fields=[], 
selectionVector=NONE], wrappers = [], ...], 
outputSV4=SelectionVector4[data=DrillBuf[1], udle: [1 0..0], recordCount=0, 
start=0, length=0], 
container=org.apache.drill.exec.record.VectorContainer@24e796d1[recordCount = 
0, schemaChanged = false, schema = BatchSchema [fields=[[`EffortTypeID` 
(INT:OPTIONAL)], [`EffortTypeName` (VARCHAR:OPTIONAL)], [`EffortSubTypeID` 
(INT:OPTIONAL)], [`EffortSubTypeName` (VARCHAR:OPTIONAL)], [`CrewTypeID` 
(INT:OPTIONAL)], [`CrewType` (VARCHAR:OPTIONAL)], [`Unit` (VARCHAR:OPTIONAL)], 
[`MaterialID` (INT:OPTIONAL)], [`EquipmentID` (INT:OPTIONAL)], [`Equipment` 
(VARCHAR:OPTIONAL)], [`MinutesPerUnit` (FLOAT8:OPTIONAL)], [`Frequency` 
(MAP:REQUIRED), children=([`Jan` (FLOAT8:OPTIONAL)], [`Feb` (FLOAT8:OPTIONAL)], 
[`Mar` (FLOAT8:OPTIONAL)], [`Apr` (FLOAT8:OPTIONAL)], [`May` 
(FLOAT8:OPTIONAL)], [`Jun` (FLOAT8:OPTIONAL)], [`Jul` (FLOAT8:OPTIONAL)], 
[`Aug` (FLOAT8:OPTIONAL)], [`Sep` (FLOAT8:OPTIONAL)], [`Oct` 
(FLOAT8:OPTIONAL)])], [`_id` (VARBINARY:OPTIONAL)], [`Material` 
(VARCHAR:OPTIONAL)]], selectionVector=NONE], wrappers = 
[org.apache.drill.exec.vector.NullableIntVector@a508f80[field = [`EffortTypeID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@160628f3[field = 
[`EffortTypeName` (VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@3c7ec26[field = 
[`EffortSubTypeID` (INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@1b8eb391[field = 
[`EffortSubTypeName` (VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@15ac03ef[field = [`CrewTypeID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@324ae019[field = [`CrewType` 
(VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@e509caa[field = [`Unit` 
(VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@60aca4ca[field = [`MaterialID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@15ea604b[field = [`EquipmentID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@70c510c7[field = 
[`Equipment` (VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableFloat8Vector@7a7a77b9[field = 
[`MinutesPerUnit` (FLOAT8:OPTIONAL)], ...], 
org.apache.drill.exec.vector.complex.MapVector@7ff516c8, 
org.apache.drill.exec.vector.NullableVarBinaryVector@39dbd945[field = [`_id` 
(VARBINARY:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@6140b591[field = [`Material` 
(VARCHAR:OPTIONAL)], ...]], ...]]
2022-02-03 14:13:02,508 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] ERROR 
o.a.d.e.p.i.s.RemovingRecordBatch - 
RemovingRecordBatch[container=org.apache.drill.exec.record.VectorContainer@48aed1bf[recordCount
 = 0, schemaChanged = false, schema = BatchSchema [fields=[[`EffortTypeID` 
(INT:OPTIONAL)], [`EffortTypeName` (VARCHAR:OPTIONAL)], [`EffortSubTypeID` 
(INT:OPTIONAL)], [`EffortSubTypeName` (VARCHAR:OPTIONAL)], [`CrewTypeID` 
(INT:OPTIONAL)], [`CrewType` (VARCHAR:OPTIONAL)], [`Unit` (VARCHAR:OPTIONAL)], 
[`MaterialID` (INT:OPTIONAL)], [`EquipmentID` (INT:OPTIONAL)], [`Equipment` 
(VARCHAR:OPTIONAL)], [`MinutesPerUnit` (FLOAT8:OPTIONAL)], [`Frequency` 
(MAP:REQUIRED), children=([`Jan` (FLOAT8:OPTIONAL)], [`Feb` (FLOAT8:OPTIONAL)], 
[`Mar` (FLOAT8:OPTIONAL)], [`Apr` (FLOAT8:OPTIONAL)], [`May` 
(FLOAT8:OPTIONAL)], [`Jun` (FLOAT8:OPTIONAL)], [`Jul` (FLOAT8:OPTIONAL)], 
[`Aug` (FLOAT8:OPTIONAL)], [`Sep` (FLOAT8:OPTIONAL)], [`Oct` 
(FLOAT8:OPTIONAL)])], [`_id` (VARBINARY:OPTIONAL)], [`Material` 
(VARCHAR:OPTIONAL)]], selectionVector=NONE], wrappers = 
[org.apache.drill.exec.vector.NullableIntVector@1fbadd34[field = 
[`EffortTypeID` (INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@784c2e77[field = 
[`EffortTypeName` (VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@1c44f0d2[field = 
[`EffortSubTypeID` (INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@22d9e9cc[field = 
[`EffortSubTypeName` (VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@6bc3c3de[field = [`CrewTypeID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@72e56747[field = [`CrewType` 
(VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@499ccba9[field = [`Unit` 
(VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@24e6ccdf[field = [`MaterialID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableIntVector@514e26ae[field = [`EquipmentID` 
(INT:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@8c8ac9f[field = [`Equipment` 
(VARCHAR:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableFloat8Vector@2726efac[field = 
[`MinutesPerUnit` (FLOAT8:OPTIONAL)], ...], 
org.apache.drill.exec.vector.complex.MapVector@35b45fa3, 
org.apache.drill.exec.vector.NullableVarBinaryVector@15ef74ed[field = [`_id` 
(VARBINARY:OPTIONAL)], ...], 
org.apache.drill.exec.vector.NullableVarCharVector@580e8e9b[field = [`Material` 
(VARCHAR:OPTIONAL)], ...]], ...], state=NOT_FIRST, 
copier=org.apache.drill.exec.physical.impl.svremover.StraightCopier@484551b9]
2022-02-03 14:13:02,508 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] ERROR 
o.a.d.e.physical.impl.BaseRootExec - Batch dump completed.
2022-02-03 14:13:02,508 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested RUNNING --> FAILED
2022-02-03 14:13:02,524 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested FAILED --> FAILED
2022-02-03 14:13:02,524 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested FAILED --> FAILED
2022-02-03 14:13:02,524 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested FAILED --> FAILED
2022-02-03 14:13:02,524 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0: 
State change requested FAILED --> FINISHED
2022-02-03 14:13:02,525 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] ERROR 
o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: RuntimeException: Schema 
change not currently supported for schemas with complex types

Fragment: 0:0

Please, refer to logs for more information.

[Error Id: 656236f6-f669-49e4-91a9-0faf1c68ca65 on localhost:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
RuntimeException: Schema change not currently supported for schemas with 
complex types

Fragment: 0:0

Please, refer to logs for more information.

[Error Id: 656236f6-f669-49e4-91a9-0faf1c68ca65 on localhost:31010]
        at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:657)
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:392)
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:244)
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:359)
        at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Schema change not currently supported 
for schemas with complex types
        at 
org.apache.drill.exec.record.SchemaUtil.mergeSchemas(SchemaUtil.java:71)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.setupSchema(ExternalSortBatch.java:476)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.loadBatch(ExternalSortBatch.java:449)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.load(ExternalSortBatch.java:400)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:355)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.RecordIterator.nextBatch(RecordIterator.java:102)
        at 
org.apache.drill.exec.record.RecordIterator.next(RecordIterator.java:191)
        at 
org.apache.drill.exec.record.RecordIterator.prepare(RecordIterator.java:175)
        at 
org.apache.drill.exec.physical.impl.join.JoinStatus.prepare(JoinStatus.java:86)
        at 
org.apache.drill.exec.physical.impl.join.MergeJoinBatch.innerNext(MergeJoinBatch.java:184)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.loadBatch(ExternalSortBatch.java:441)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.load(ExternalSortBatch.java:400)
        at 
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:355)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:214)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)
        at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.lambda$run$0(FragmentExecutor.java:321)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:310)
        ... 4 common frames omitted
        Suppressed: org.apache.drill.exec.ops.QueryCancelledException: null
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor$ExecutorStateImpl.checkContinue(FragmentExecutor.java:533)
                at 
org.apache.drill.exec.record.AbstractRecordBatch.checkContinue(AbstractRecordBatch.java:256)
                at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:110)
                at 
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
                at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
                at 
org.apache.drill.exec.record.RecordIterator.clearInflightBatches(RecordIterator.java:359)
                at 
org.apache.drill.exec.record.RecordIterator.close(RecordIterator.java:365)
                at 
org.apache.drill.exec.physical.impl.join.MergeJoinBatch.close(MergeJoinBatch.java:300)
                at 
org.apache.drill.common.DeferredException.suppressingClose(DeferredException.java:159)
                at 
org.apache.drill.exec.physical.impl.BaseRootExec.close(BaseRootExec.java:169)
                at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.close(ScreenCreator.java:124)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:407)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:239)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:359)
                ... 4 common frames omitted
        Suppressed: java.lang.IllegalStateException: Memory was leaked by 
query. Memory leaked: (995328)
Allocator(op:0:0:6:MergeJoinPOP) 1000000/995328/16150528/10000000000 
(res/actual/peak/limit)

                at 
org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:520)
                at 
org.apache.drill.exec.ops.BaseOperatorContext.close(BaseOperatorContext.java:159)
                at 
org.apache.drill.exec.ops.OperatorContextImpl.close(OperatorContextImpl.java:77)
                at 
org.apache.drill.exec.ops.FragmentContextImpl.suppressingClose(FragmentContextImpl.java:581)
                at 
org.apache.drill.exec.ops.FragmentContextImpl.close(FragmentContextImpl.java:571)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:414)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:239)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:359)
                ... 4 common frames omitted
        Suppressed: java.lang.IllegalStateException: Memory was leaked by 
query. Memory leaked: (1000000)
Allocator(frag:0:0) 69000000/1000000/277999104/93904515723 
(res/actual/peak/limit)

                at 
org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:520)
                at 
org.apache.drill.exec.ops.FragmentContextImpl.suppressingClose(FragmentContextImpl.java:581)
                at 
org.apache.drill.exec.ops.FragmentContextImpl.close(FragmentContextImpl.java:574)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:414)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:239)
                at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:359)
                ... 4 common frames omitted
2022-02-03 14:13:02,530 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] WARN  
o.a.d.exec.rpc.control.WorkEventBus - Fragment 
1e03dc77-953c-b877-e1ac-1cb1ae3475e3:0:0 manager is not found in the work bus.
2022-02-03 14:13:02,533 [qtp1799477491-169] ERROR 
o.a.d.e.server.rest.QueryResources - Query from Web UI Failed: {}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
RuntimeException: Schema change not currently supported for schemas with 
complex types

Fragment: 0:0

Please, refer to logs for more information.

[Error Id: 656236f6-f669-49e4-91a9-0faf1c68ca65 on localhost:31010]
        at org.apache.drill.exec.server.rest.RestQueryRunner.submitQuery(RestQueryRunner.java:99)
        at org.apache.drill.exec.server.rest.RestQueryRunner.run(RestQueryRunner.java:54)
        at org.apache.drill.exec.server.rest.QueryResources.submitQuery(QueryResources.java:158)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
        at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
        at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
        at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
        at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
        at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
        at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631)
        at org.apache.drill.exec.server.rest.header.ResponseHeadersSettingFilter.doFilter(ResponseHeadersSettingFilter.java:71)
        at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
        at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
        at org.apache.drill.exec.server.rest.CsrfTokenValidateFilter.doFilter(CsrfTokenValidateFilter.java:55)
        at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
        at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
        at org.apache.drill.exec.server.rest.CsrfTokenInjectFilter.doFilter(CsrfTokenInjectFilter.java:54)
        at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
        at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:571)
        at org.apache.drill.exec.server.rest.auth.DrillHttpSecurityHandlerProvider.handle(DrillHttpSecurityHandlerProvider.java:163)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.Server.handle(Server.java:516)
        at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
        at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Schema change not currently supported for schemas with complex types
        at org.apache.drill.exec.record.SchemaUtil.mergeSchemas(SchemaUtil.java:71)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.setupSchema(ExternalSortBatch.java:476)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.loadBatch(ExternalSortBatch.java:449)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.load(ExternalSortBatch.java:400)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:355)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.RecordIterator.nextBatch(RecordIterator.java:102)
        at org.apache.drill.exec.record.RecordIterator.next(RecordIterator.java:191)
        at org.apache.drill.exec.record.RecordIterator.prepare(RecordIterator.java:175)
        at org.apache.drill.exec.physical.impl.join.JoinStatus.prepare(JoinStatus.java:86)
        at org.apache.drill.exec.physical.impl.join.MergeJoinBatch.innerNext(MergeJoinBatch.java:184)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.loadBatch(ExternalSortBatch.java:441)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.load(ExternalSortBatch.java:400)
        at org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:355)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:214)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:111)
        at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:59)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:85)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:170)
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103)
        at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93)
        at org.apache.drill.exec.work.fragment.FragmentExecutor.lambda$run$0(FragmentExecutor.java:321)
        at .......(:0)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:310)
        at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
        at .......(:0)
2022-02-03 14:13:02,543 [1e03dc77-953c-b877-e1ac-1cb1ae3475e3:frag:0:0] WARN  o.a.d.e.w.f.QueryStateProcessor - Dropping request to move to COMPLETED state as query is already at FAILED state (which is terminal).
     

Attachment: profile_2.json
Description: application/json
