[ https://issues.apache.org/jira/browse/DRILL-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994858#comment-14994858 ]
Sudheesh Katkam edited comment on DRILL-4046 at 11/7/15 1:43 AM:
-----------------------------------------------------------------
The only thing that is consistent about these regressions is that there are a lot of fragments in this state (waiting to get the ISOChronology instance):
{code}
"29c6bb10-3447-65c9-1e7c-985afdcb83ea:frag:7:137" daemon prio=10 tid=0x00007f593290e000 nid=0x2132 waiting for monitor entry [0x00007f58998f2000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:104)
	- waiting to lock <0x000000060d627e90> (a java.util.HashMap)
	at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:86)
	at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:283)
	at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:942)
	at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:851)
	at org.joda.time.DateTime.parse(DateTime.java:144)
	at org.apache.drill.exec.test.generated.FiltererGen87.doEval(FilterTemplate2.java:144)
	at org.apache.drill.exec.test.generated.FiltererGen87.filterBatchNoSV(FilterTemplate2.java:99)
	at org.apache.drill.exec.test.generated.FiltererGen87.filterBatch(FilterTemplate2.java:72)
	at org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.doWork(FilterRecordBatch.java:80)
	at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93)
	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:147)
	...
{code}
This has been fixed in the latest version of [ISOChronology|https://github.com/JodaOrg/joda-time/commit/634066471f2941eddfcca3ed2a62c9d254cabccb]. What I don't understand is why DRILL-3242 (or patches around that) would make this bug appear more frequently.
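To illustrate the contention pattern in the stack trace: every thread that parses a date goes through ISOChronology.getInstance, which (in the affected Joda-Time version) guards its instance cache with a single monitor on a shared HashMap, so all fragment threads serialize on one lock. The sketch below is hypothetical (class and field names are illustrative, not Joda-Time's actual implementation); it contrasts the globally-locked lookup with a lock-free ConcurrentHashMap lookup of the kind the upstream fix moves toward.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: ChronologyCacheSketch is not Joda-Time code.
public class ChronologyCacheSketch {

    // Old pattern: one global monitor guards every cache lookup, so every
    // parse on every fragment thread blocks on the same lock (the
    // "waiting to lock <...> (a java.util.HashMap)" frame in the dump).
    static final Map<String, Object> SYNC_CACHE = new HashMap<>();

    static Object getInstanceSynchronized(String zoneId) {
        synchronized (SYNC_CACHE) {  // the monitor all threads contend on
            Object chrono = SYNC_CACHE.get(zoneId);
            if (chrono == null) {
                chrono = new Object();  // stand-in for building a chronology
                SYNC_CACHE.put(zoneId, chrono);
            }
            return chrono;
        }
    }

    // Fixed pattern: reads are lock-free; threads only contend briefly on
    // the first insert for a given key.
    static final ConcurrentHashMap<String, Object> CONCURRENT_CACHE =
            new ConcurrentHashMap<>();

    static Object getInstanceConcurrent(String zoneId) {
        return CONCURRENT_CACHE.computeIfAbsent(zoneId, z -> new Object());
    }

    public static void main(String[] args) {
        // Both schemes return the same cached instance on repeat lookups;
        // they differ only in how readers synchronize.
        System.out.println(getInstanceSynchronized("UTC") == getInstanceSynchronized("UTC"));
        System.out.println(getInstanceConcurrent("UTC") == getInstanceConcurrent("UTC"));
    }
}
```

Under this reading, anything that lets more fragments reach the filter's DateTime.parse concurrently would amplify the contention, which may be why the regression shows up intermittently rather than on every run.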
> Performance regression in some tpch queries with 1.3rc0 build
> -------------------------------------------------------------
>
>                 Key: DRILL-4046
>                 URL: https://issues.apache.org/jira/browse/DRILL-4046
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Jacques Nadeau
>            Assignee: Jacques Nadeau
>         Attachments: profiles.tar.gz
>
>
> ||commit/query||14||15||18||20||
> |[839f8da|https://github.com/apache/drill/commit/839f8dac2e2d0479a1552701a5274ebe8416fea6]|10,253|14,642|32,993|21,251|
> |[e7db9dc|https://github.com/apache/drill/commit/e7db9dcacbc39c4797de1aa29b119a7428451dea]|85,061|211,400|900,020|34,066|
> (Times in milliseconds; 900-second timeout.)
> + These regressions are not consistent, i.e. on multiple runs, some runs do not vary from the baseline.
> + TPCH 18 did not regress without timing out (on runs so far).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)