[jira] [Commented] (DRILL-5420) all cores at 100% of all servers
[ https://issues.apache.org/jira/browse/DRILL-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981172#comment-15981172 ]

Hugo Bellomusto commented on DRILL-5420:
----------------------------------------

Disabling pagereader.async fixes the problem.

> all cores at 100% of all servers
> --------------------------------
>
>                 Key: DRILL-5420
>                 URL: https://issues.apache.org/jira/browse/DRILL-5420
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.10.0
>         Environment: linux, cluster with 5 servers over hdfs/parquet
>            Reporter: Hugo Bellomusto
>            Assignee: Parth Chandra
>         Attachments: 2709a36d-804a-261a-64e5-afa271e782f8.json
>
> We have a Drill cluster with five servers over hdfs/parquet.
> Each machine has 8 cores, and all cores are at 100% utilization.
> Each thread is looping in the while loop at line 314 of AsyncPageReader.java, inside the clear() method:
> https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java#L314
> {noformat}
> jstack -l 19255 | grep -A 50 $(printf "%x" 29250)
> "271d6262-ff19-ad24-af36-777bfe6c6375:frag:1:4" daemon prio=10 tid=0x7f5b2adec800 nid=0x7242 runnable [0x7f5aa33e8000]
>    java.lang.Thread.State: RUNNABLE
>         at java.lang.Throwable.fillInStackTrace(Native Method)
>         at java.lang.Throwable.fillInStackTrace(Throwable.java:783)
>         - locked <0x0007374bfcb0> (a java.lang.InterruptedException)
>         at java.lang.Throwable.<init>(Throwable.java:250)
>         at java.lang.Exception.<init>(Exception.java:54)
>         at java.lang.InterruptedException.<init>(InterruptedException.java:57)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
>         at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
>         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
>         at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.clear(AsyncPageReader.java:317)
>         at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.clear(ColumnReader.java:140)
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.close(ParquetRecordReader.java:632)
>         at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:183)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>         at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>         at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java
> {noformat}
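Both stack traces reported on this issue are consistent with a self-interrupt loop: take() throws InterruptedException as soon as the thread's interrupt status is set, and if the catch block re-interrupts the thread and retries, the next take() throws again immediately without ever blocking. The sketch below (class and method names are illustrative, not Drill's actual clear() code) shows how such a loop pins a core; a retry cap is added only so the demo terminates.

```java
import java.util.concurrent.LinkedBlockingQueue;

public class SpinDemo {
    // Simulates the suspected spin: take() on an empty queue throws
    // InterruptedException immediately while the interrupt status is set.
    // If the catch block restores the status and the loop retries, take()
    // throws again at once and the thread burns a full core.
    static int spinCount(int cap) {
        LinkedBlockingQueue<Object> queue = new LinkedBlockingQueue<>();
        Thread.currentThread().interrupt();          // interrupt status is now set
        int attempts = 0;
        while (attempts < cap) {                     // cap added so the demo terminates
            try {
                queue.take();                        // would block if not interrupted
                break;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // re-set status -> next take() throws instantly
                attempts++;
            }
        }
        Thread.interrupted();                        // clear the status before returning
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println(spinCount(1000));         // prints 1000: never blocked once
    }
}
```

Without the cap, attempts would grow without bound, which matches the 100%-CPU threads seen stuck between LinkedBlockingQueue.take() and AsyncPageReader.clear() in the traces.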
[jira] [Comment Edited] (DRILL-5420) all cores at 100% of all servers
[ https://issues.apache.org/jira/browse/DRILL-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973402#comment-15973402 ]

Hugo Bellomusto edited comment on DRILL-5420 at 4/18/17 8:16 PM:
-----------------------------------------------------------------

I have attached the profile. I pasted the stack trace too:
{noformat}
"2709a36d-804a-261a-64e5-afa271e782f8:frag:4:8" daemon prio=10 tid=0x01597800 nid=0x45b4 runnable [0x7f0a254de000]
   java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.interrupt0(Native Method)
        at java.lang.Thread.interrupt(Thread.java:941)
        at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.clear(AsyncPageReader.java:322)
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.clear(ColumnReader.java:140)
        at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.close(ParquetRecordReader.java:632)
        at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:183)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
        at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92)
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
        at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

   Locked ownable synchronizers:
        - <0x000745598c48> (a java.util.concurrent.ThreadPoolExecutor$Worker)
{noformat}
[jira] [Updated] (DRILL-5420) all cores at 100% of all servers
[ https://issues.apache.org/jira/browse/DRILL-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hugo Bellomusto updated DRILL-5420:
-----------------------------------
    Attachment: 2709a36d-804a-261a-64e5-afa271e782f8.json
[jira] [Commented] (DRILL-5420) all cores at 100% of all servers
[ https://issues.apache.org/jira/browse/DRILL-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972798#comment-15972798 ]

Hugo Bellomusto commented on DRILL-5420:
----------------------------------------

I understand that drill.exec.scan.threadpool_size is an "IO bound" thread pool. As you said, I also run HDFS on those servers. In addition, I now see the same issue:
- 5 servers (hdfs+drill)
- No running queries (according to the Drill web UI)
- 3 servers have 3 cores at 100% CPU, all in the same "while" line.
[jira] [Updated] (DRILL-5420) all cores at 100% of all servers
[ https://issues.apache.org/jira/browse/DRILL-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hugo Bellomusto updated DRILL-5420:
-----------------------------------
    Description: We have a Drill cluster with five servers over hdfs/parquet. Each machine has 8 cores, and all cores are at 100% utilization. Each thread is looping in the while loop at line 314 of AsyncPageReader.java, inside the clear() method:
https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java#L314
[jira] [Created] (DRILL-5420) all cores at 100% of all servers
Hugo Bellomusto created DRILL-5420:
--------------------------------------

             Summary: all cores at 100% of all servers
                 Key: DRILL-5420
                 URL: https://issues.apache.org/jira/browse/DRILL-5420
             Project: Apache Drill
          Issue Type: Bug
    Affects Versions: 1.10.0
         Environment: linux, cluster with 5 servers over hdfs/parquet
            Reporter: Hugo Bellomusto

We have a Drill cluster with five servers over hdfs/parquet.
Each machine has 8 cores, and all cores are at 100% utilization.
Each thread is looping in the while loop at line 314 of AsyncPageReader.java, inside the clear() method:
https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java#L314
{noformat}
jstack -l 19255 | grep -A 50 $(printf "%x" 29250)
"271d6262-ff19-ad24-af36-777bfe6c6375:frag:1:4" daemon prio=10 tid=0x7f5b2adec800 nid=0x7242 runnable [0x7f5aa33e8000]
   java.lang.Thread.State: RUNNABLE
        at java.lang.Throwable.fillInStackTrace(Native Method)
        at java.lang.Throwable.fillInStackTrace(Throwable.java:783)
        - locked <0x0007374bfcb0> (a java.lang.InterruptedException)
        at java.lang.Throwable.<init>(Throwable.java:250)
        at java.lang.Exception.<init>(Exception.java:54)
        at java.lang.InterruptedException.<init>(InterruptedException.java:57)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
        at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
        at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.clear(AsyncPageReader.java:317)
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.clear(ColumnReader.java:140)
        at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.close(ParquetRecordReader.java:632)
        at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:183)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
        at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
{noformat}
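A close-time drain written defensively avoids this kind of spin: treat interruption as a signal to stop, and use a non-blocking poll() so an empty queue ends the loop instead of parking in take(). This is a minimal sketch of the pattern only (DrainOnClose and its drain method are illustrative names, not Drill's actual fix):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DrainOnClose {
    // Drain pending items at close time. poll() never blocks, so an empty
    // queue ends the loop; if the thread is interrupted we stop immediately
    // instead of retrying, leaving the interrupt status for the caller.
    static int drain(BlockingQueue<Object> queue) {
        int released = 0;
        while (!Thread.currentThread().isInterrupted()) {
            Object page = queue.poll();   // non-blocking: returns null when empty
            if (page == null) {
                break;                    // nothing left to release
            }
            released++;                   // real code would free the page buffer here
        }
        return released;
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < 3; i++) q.add(new Object());
        System.out.println(drain(q));     // prints 3
        System.out.println(q.size());     // prints 0
    }
}
```

The key design point is that neither branch of the loop can re-enter a blocking call with the interrupt status set, which is the combination the jstack output above suggests.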
[jira] [Created] (DRILL-5244) negative sign disappears depending on the cast order
Hugo Bellomusto created DRILL-5244:
--------------------------------------

             Summary: negative sign disappears depending on the cast order
                 Key: DRILL-5244
                 URL: https://issues.apache.org/jira/browse/DRILL-5244
             Project: Apache Drill
          Issue Type: Bug
    Affects Versions: 1.9.0
            Reporter: Hugo Bellomusto

# Create a simple table with a double column:
{code:sql}
CREATE TABLE dfs.raw_data.number_test AS
SELECT cast(-6 AS DOUBLE) AS val FROM (VALUES(1))
{code}
# This query returns the wrong sign in the second column:
{code:sql}
SELECT CAST(val AS int) / 2 ok, val / 2 error
FROM dfs.raw_data.number_test
{code}
||ok||error||
|-3|3|
# If the order is inverted, both results are correct:
{code:sql}
SELECT val / 2 now_ok, CAST(val AS int) / 2 ok
FROM dfs.raw_data.number_test
{code}
||now_ok||ok||
|-3|-3|

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
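For reference, ordinary numeric semantics preserve the sign regardless of whether the cast happens before or after the division. A plain Java sketch (not Drill code) of the values both query columns should produce for the `val` column above:

```java
public class CastOrderExpected {
    public static void main(String[] args) {
        double val = -6.0;                  // same value as the `val` column above

        // Cast to int first, then divide: integer division -6 / 2
        System.out.println((int) val / 2);  // prints -3

        // Divide the double directly: floating-point division
        System.out.println(val / 2);        // prints -3.0
    }
}
```

Both expressions keep the negative sign, so the `error` column returning 3 in the second query is inconsistent with the `ok` column returning -3.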
[jira] [Commented] (DRILL-4917) Unable to alias columns from subquery
[ https://issues.apache.org/jira/browse/DRILL-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15545397#comment-15545397 ]

Hugo Bellomusto commented on DRILL-4917:
----------------------------------------

Same behavior with group by and the *sum* function:
{code:sql}
SELECT EXPR$0 col1, sum(EXPR$0) ignoredAlias
FROM (VALUES(1))
GROUP BY EXPR$0
{code}
||col1||{color:red}$f1{color}||
|1|1|
With count, avg, min, and max it works well.

> Unable to alias columns from subquery
> -------------------------------------
>
>                 Key: DRILL-4917
>                 URL: https://issues.apache.org/jira/browse/DRILL-4917
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.8.0
>            Reporter: Dan Wild
>
> Column aliasing works in a simple query (without subqueries):
> {code}
> select 1 as myVal from (values(1))
> {code}
> My result set gives me one column called myVal, as expected:
> ||myVal||
> |1|
> However, when I run the query
> {code}
> select myVal as myValAlias FROM (select 1 as myVal from (values(1)))
> {code}
> the alias myValAlias is not applied, and the resulting column is still called myVal:
> ||myVal||
> |1|
> This is problematic because if my query instead looked like
> {code}
> select myVal, SUM(myVal) as mySum FROM (select 1 as myVal from (values(1))) GROUP BY myVal
> {code}
> I would get a result set back that looks like this, with no way to alias the second column:
> ||myVal||$f1||
> |1|1|
[jira] [Commented] (DRILL-4575) alias not working on field.
[ https://issues.apache.org/jira/browse/DRILL-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228817#comment-15228817 ]

Hugo Bellomusto commented on DRILL-4575:
----------------------------------------

It sounds different: in DRILL-4572 the error happens when using functions, whereas here I use a function to make it work.

> alias not working on field.
> ---------------------------
>
>                 Key: DRILL-4575
>                 URL: https://issues.apache.org/jira/browse/DRILL-4575
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>         Environment: Apache drill 1.6.0
>                      java 1.7.0_40
>            Reporter: Hugo Bellomusto
>
> {code:sql}
> create table dfs.tmp.a_field as
> select 'hello' field from (VALUES(1));
>
> select field my_field from dfs.tmp.a_field;
> {code}
> The result is:
> ||field||
> |hello|
> when it should be:
> ||my_field||
> |hello|
> {noformat:title=physical plan}
> 00-00    Screen : rowType = RecordType(ANY field): rowcount = 1.0, cumulative cost = {1.1 rows, 1.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1635
> 00-01      Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=hdfs://10.70.168.69:8020/tmp/a_field]], selectionRoot=hdfs://10.70.168.69:8020/tmp/a_field, numFiles=1, usedMetadataFile=false, columns=[`field`]]]) : rowType = RecordType(ANY field): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1634
> {noformat}
> But this works well:
> {code:sql}
> select concat(field, ' world') my_field from dfs.tmp.a_field;
> {code}
> returns:
> ||my_field||
> |hello world|
> Additional info:
> {code:sql}
> select * from sys.options where name like '%parquet%' or string_val like '%parquet%';
> {code}
> ||name||string_val||
> |store.format|parquet|
> |store.parquet.block-size| |
> |store.parquet.compression|snappy|
> |store.parquet.dictionary.page-size| |
> |store.parquet.enable_dictionary_encoding| |
> |store.parquet.page-size| |
> |store.parquet.use_new_reader| |
> |store.parquet.vector_fill_check_threshold| |
> |store.parquet.vector_fill_threshold| |
[jira] [Created] (DRILL-4575) alias not working on field.
Hugo Bellomusto created DRILL-4575:
--------------------------------------

             Summary: alias not working on field.
                 Key: DRILL-4575
                 URL: https://issues.apache.org/jira/browse/DRILL-4575
             Project: Apache Drill
          Issue Type: Bug
    Affects Versions: 1.6.0
         Environment: Apache drill 1.6.0
                      java 1.7.0_40
            Reporter: Hugo Bellomusto

{code:sql}
create table dfs.tmp.a_field as
select 'hello' field from (VALUES(1));

select field my_field from dfs.tmp.a_field;
{code}
The result is:
||field||
|hello|
when it should be:
||my_field||
|hello|
{noformat:title=physical plan}
00-00    Screen : rowType = RecordType(ANY field): rowcount = 1.0, cumulative cost = {1.1 rows, 1.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1635
00-01      Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=hdfs://10.70.168.69:8020/tmp/a_field]], selectionRoot=hdfs://10.70.168.69:8020/tmp/a_field, numFiles=1, usedMetadataFile=false, columns=[`field`]]]) : rowType = RecordType(ANY field): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 1634
{noformat}
But this works well:
{code:sql}
select concat(field, ' world') my_field from dfs.tmp.a_field;
{code}
returns:
||my_field||
|hello world|
Additional info:
{code:sql}
select * from sys.options where name like '%parquet%' or string_val like '%parquet%';
{code}
||name||string_val||
|store.format|parquet|
|store.parquet.block-size| |
|store.parquet.compression|snappy|
|store.parquet.dictionary.page-size| |
|store.parquet.enable_dictionary_encoding| |
|store.parquet.page-size| |
|store.parquet.use_new_reader| |
|store.parquet.vector_fill_check_threshold| |
|store.parquet.vector_fill_threshold| |