Hi Anup,

Since your original query worked on 1.6 and fails on 1.9, could you please
file a JIRA for this? Based on the stack trace, it looks like a regression
in the evaluation of a Project expression, and since the query has several
CASE expressions, it is quite likely something related to their evaluation.
It would be great if you could provide some sample data for someone to
debug with.
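
For the JIRA, a small self-contained reproduction is the most useful thing
to attach. A minimal sketch of the shape to aim for (the table, columns,
and values below are hypothetical placeholders, and the dfs.tmp workspace
and VALUES syntax may need adjusting for your setup; the actual failing
CASE conditions plus a few rows of real data would go here):

    -- hypothetical source table with a couple of rows of sample data
    create table dfs.tmp.repro_src as
    select 'e.a' as event, '/ab/pL?t=r' as ajaxUrl, 's1' as sessionid
    from (values(1));

    -- minimal CTAS that exercises a CASE + LIKE inside count(distinct ...)
    create table dfs.tmp.repro as
    select sessionid,
           count(distinct (case when event = 'e.a'
                                and ajaxUrl like '%/ab/pL%t=r%'
                                then sessionid end)) as regs
    from dfs.tmp.repro_src
    group by sessionid;
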
Thanks.

On Thu, Dec 8, 2016 at 12:50 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:

> Hi,
>
> I removed a few conditions from my query, and then it worked fine.
>
> Also, can someone tell me in which scenarios Drill throws
> "*IllegalReferenceCountException*", and how to handle it in each of those
> scenarios?
>
> I hit this in another query as well: removing some conditions made it
> work, but when I executed those removed conditions alone in a CTAS, the
> CTAS completed successfully.
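>
> For reference, the isolation test was shaped roughly like this (a
> hypothetical simplification to illustrate the approach, not the exact
> removed conditions):
>
>     create table a_tt3_login_only as
>     select sessionid,
>            count(distinct (case when ajaxUrl like '%/signup/poLo%t=log%'
>                                 and event = 'e.a'
>                                 then sessionid end)) as login
>     from tt2
>     group by sessionid;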
>
> Regards,
> *Anup Tiwari*
>
> On Wed, Dec 7, 2016 at 12:22 PM, Anup Tiwari <anup.tiw...@games24x7.com>
> wrote:
>
> > Hi Team,
> >
> > I am getting the below two errors in one of my queries, which was
> > working fine on 1.6. Please help me out with this:
> >
> > 1. UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
> > 2. SYSTEM ERROR: IOException: Failed to shutdown streamer
> >
> > Please find the query and its stack trace below:
> >
> > *Query:*
> >
> > create table a_tt3_reg_login as
> > select sessionid,
> >
> > count(distinct (case when
> >     ((( event = 'e.a' and ajaxUrl like '%/ab/pL%t=r%' )
> >       or ( (Base64Conv(Response) like '%st%tr%'
> >             and Base64Conv(Response) not like '%error%')
> >            and ajaxUrl like '%/sign/ter%' ))
> >      OR ( event = 'e.a' and ajaxUrl like '%/player/ter/ter.htm%'
> >           and Base64Conv(Response) like '%st%tr%ter%tr%')
> >      OR (id = '/ter/thyou.htm' and url = '/pla/natlob.htm'))
> >     then sessionid end)) as regs,
> >
> > count(distinct (case when
> >     ( ajaxUrl like '%/signup/poLo%t=log%' and event = 'e.a' )
> >      or ( event = 'e.a' and ajaxUrl like '%j_spring_security_check%'
> >           and Base64Conv(Response) like '%st%tr%')
> >     then sessionid end)) as login,
> >
> > count(distinct (case when
> >     ((ajaxUrl like '/pl%/loadResponsePage.htm%fD=true&sta=yes%'
> >       or ajaxUrl like '/pl%/loadResponsePage.htm%fD=true&sta=YES%')
> >      OR (ajaxUrl like 'loadSuccessPage.do%fD=true&sta=yes%'
> >          or ajaxUrl like 'loadSuccessPage.do%fD=true&sta=YES%'))
> >     then sessionid end)) as fd,
> >
> > count(distinct (case when
> >     ((ajaxUrl like '/pl%/loadResponsePage.htm%fD=false&sta=yes%'
> >       or ajaxUrl like '/pl%/loadResponsePage.htm%fD=false&sta=YES%')
> >      OR (ajaxUrl like 'loadSuccessPage.do%fD=false&sta=yes%'
> >          or ajaxUrl like 'loadSuccessPage.do%fD=false&sta=YES%'))
> >     then sessionid end)) as rd
> >
> > from tt2
> > group by sessionid;
> >
> > Error: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
> >
> > Fragment 14:19
> >
> > [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010]
> > (state=,code=0)
> >
> >
> > *Stack Trace From drillbit.log:*
> >
> > [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010]
> > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
> >
> > Fragment 14:19
> >
> > [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010]
> >         at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.9.0.jar:1.9.0]
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_74]
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_74]
> >         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
> > Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
> >         at io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1178) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
> >         at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:115) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
> >         at io.netty.buffer.DrillBuf.chk(DrillBuf.java:147) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
> >         at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:775) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
> >         at org.apache.drill.exec.expr.fn.impl.CharSequenceWrapper.isAscii(CharSequenceWrapper.java:143) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.expr.fn.impl.CharSequenceWrapper.setBuffer(CharSequenceWrapper.java:106) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.test.generated.ProjectorGen980.doEval(ProjectorTemplate.java:776) ~[na:na]
> >         at org.apache.drill.exec.test.generated.ProjectorGen980.projectRecords(ProjectorTemplate.java:62) ~[na:na]
> >         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:199) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.test.generated.HashAggregatorGen33.doWork(HashAggTemplate.java:313) ~[na:na]
> >         at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:144) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_74]
> >         at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_74]
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.9.0.jar:1.9.0]
> >         ... 4 common frames omitted
> > 2016-12-07 11:47:54,616 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,616 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,617 [27b85671-2a57-7d5f-18a5-680566b07067:frag:2:1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,617 [27b85671-2a57-7d5f-18a5-680566b07067:frag:2:1] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State to report: CANCELLED
> > 2016-12-07 11:47:54,617 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,663 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,664 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,722 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:1] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State to report: CANCELLED
> > 2016-12-07 11:47:54,675 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,727 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,727 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:5] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,727 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:5] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State to report: CANCELLED
> > 2016-12-07 11:47:54,733 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,733 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,733 [drill-executor-652] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:2:1 not found in the work bus.
> > 2016-12-07 11:47:54,734 [drill-executor-624] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:5 not found in the work bus.
> > 2016-12-07 11:47:54,734 [drill-executor-621] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:1 not found in the work bus.
> > 2016-12-07 11:47:54,780 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:9] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,780 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:9] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State to report: CANCELLED
> > 2016-12-07 11:47:54,781 [drill-executor-625] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:9 not found in the work bus.
> > 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,797 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:13] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,797 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,847 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:13] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLED
> > 2016-12-07 11:47:54,847 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,847 [drill-executor-626] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:13 not found in the work bus.
> > 2016-12-07 11:47:54,855 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,855 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:17] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:17] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State to report: CANCELLED
> > 2016-12-07 11:47:54,855 [drill-executor-628] WARN  o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:17 not found in the work bus.
> > 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:21] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State change requested CANCELLATION_REQUESTED --> FINISHED
> > 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:21] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State to report: CANCELLED
> > 2016-12-07 11:47:54,856 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State change requested RUNNING --> CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,856 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State to report: CANCELLATION_REQUESTED
> > 2016-12-07 11:47:54,857 [27b85671-2a57-7d5f-18a5-680566b07067:frag:8:1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State change requested CANCELLATION_REQUESTED --> FINISHED
> > ....
> >
> > 2016-12-07 11:47:55,172 [27b85671-2a57-7d5f-18a5-680566b07067:frag:1:1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:1:1: State change requested FAILED --> FINISHED
> > 2016-12-07 11:47:55,174 [27b85671-2a57-7d5f-18a5-680566b07067:frag:1:1] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IOException: Failed to shutdown streamer
> >
> > Fragment 1:1
> >
> > [Error Id: 594a2ba9-6e58-4602-861e-8333f4356752 on namenode:31010]
> > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IOException: Failed to shutdown streamer
> >
> > Fragment 1:1
> >
> > [Error Id: 594a2ba9-6e58-4602-861e-8333f4356752 on namenode:31010]
> >         at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.9.0.jar:1.9.0]
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_74]
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_74]
> >         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
> > Caused by: java.io.IOException: Failed to shutdown streamer
> >         at org.apache.hadoop.hdfs.DFSOutputStream.closeThreads(DFSOutputStream.java:2187) ~[hadoop-hdfs-2.7.1.jar:na]
> >         at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2235) ~[hadoop-hdfs-2.7.1.jar:na]
> >         at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2204) ~[hadoop-hdfs-2.7.1.jar:na]
> >         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[hadoop-common-2.7.1.jar:na]
> >         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) ~[hadoop-common-2.7.1.jar:na]
> >         at org.apache.drill.exec.store.easy.json.JsonRecordWriter.cleanup(JsonRecordWriter.java:246) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.WriterRecordBatch.closeWriter(WriterRecordBatch.java:180) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:128) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.9.0.jar:1.9.0]
> >         at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_74]
> >         at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_74]
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
> >         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.9.0.jar:1.9.0]
> >         ... 4 common frames omitted
> >
> >
> > Regards,
> > *Anup Tiwari*
> >
>
