[jira] [Closed] (DRILL-2208) Error message must be updated when query contains operations on a flattened column

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-2208.
--

Marked as duplicate. Closing. 

 Error message must be updated when query contains operations on a flattened 
 column
 --

 Key: DRILL-2208
 URL: https://issues.apache.org/jira/browse/DRILL-2208
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Affects Versions: 0.8.0
Reporter: Abhishek Girish
Assignee: Jason Altekruse
Priority: Minor
 Attachments: drillbit_flatten.log


 Currently I observe that if a flatten/kvgen operation is applied to a 
 column, no further operations can be performed on that column unless it 
 is wrapped inside a nested query. 
 Consider a simple flatten/kvgen operation on a complex JSON file :
  select flatten(kvgen(f.`people`)) as p from `factbook/world.json` f limit 1;
 ++
 | p  |
 ++
 | {key:languages,value:{text:Mandarin Chinese 12.44%, Spanish 4.85%, 
 English 4.83%, Arabic 3.25%, Hindi 2.68%, Bengali 2.66%, Portuguese 2.62%, 
 Russian 2.12%, Japanese 1.8%, Standard German 1.33%, Javanese 1.25% (2009 
 est.),note_1:percents are for \first language\ speakers only; the six 
 UN languages - Arabic, Chinese (Mandarin), English, French, Russian, and 
 Spanish (Castilian) - are the mother tongue or second language of about half 
 of the world's population, and are the official languages in more than half 
 the states in the world; some 150 to 200 languages have more than a million 
 speakers,note_2:all told, there are an estimated 7,100 languages spoken 
 in the world; aproximately 80% of these languages are spoken by less than 
 100,000 people; about 50 languages are spoken by only 1 person; communities 
 that are isolated from each other in mountainous regions often develop 
 multiple languages; Papua New Guinea, for example, boasts about 836 separate 
 languages,note_3:approximately 2,300 languages are spoken in Asia, 2,150, 
 in Africa, 1,311 in the Pacific, 1,060 in the Americas, and 280 in Europe}} |
 | {key:religions,value:{text:Christian 33.39% (of which Roman 
 Catholic 16.85%, Protestant 6.15%, Orthodox 3.96%, Anglican 1.26%), Muslim 
 22.74%, Hindu 13.8%, Buddhist 6.77%, Sikh 0.35%, Jewish 0.22%, Baha'i 0.11%, 
 other religions 10.95%, non-religious 9.66%, atheists 2.01% (2010 est.)}} |
 | {key:population,value:{text:7,095,217,980 (July 2013 
 est.),top_ten_most_populous_countries_in_millions:China 1,349.59; India 
 1,220.80; United States 316.67; Indonesia 251.16; Brazil 201.01; Pakistan 
 193.24; Nigeria 174.51; Bangladesh 163.65; Russia 142.50; Japan 127.25}} |
 | {key:age_structure,value:{0_14_years:26% (male 953,496,513/female 
 890,372,474),15_24_years:16.8% (male 614,574,389/female 
 579,810,490),25_54_years:40.6% (male 1,454,831,900/female 
 1,426,721,773),55_64_years:8.4% (male 291,435,881/female 
 305,185,398),65_years_and_over:8.2% (male 257,035,416/female 321,753,746) 
 (2013 est.)}} |
 | {key:dependency_ratios,value:{total_dependency_ratio:52 
 %,youth_dependency_ratio:39.9 %,elderly_dependency_ratio:12.1 
 %,potential_support_ratio:8.3 (2013)}} |
 ++
 *Adding a WHERE clause with conditions on this column fails:*
  select flatten(kvgen(f.`people`)) as p from `factbook/world.json` f where 
  f.p.`key` = 'languages';
 Query failed: RemoteRpcException: Failure while running fragment., languages 
 [ 686bcd40-c23b-448c-93d8-b98a3b092657 on abhi5.qa.lab:31010 ]
 [ 686bcd40-c23b-448c-93d8-b98a3b092657 on abhi5.qa.lab:31010 ]
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 Logs indicate a NumberFormat Exception in the above case.
 *And query fails to parse in the below case*
  select flatten(kvgen(f.`people`)).`value` as p from `factbook/world.json` f 
  limit 5;
 Query failed: ParseException: Encountered . at line 1, column 34.
 Was expecting one of:
 FROM ...
 , ...
 AS ...
  
  
 OVER ...
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 Rewriting using an inner query succeeds:
 select g.p.`value`.`note_3` from (select flatten(kvgen(f.`people`)) as p from 
 `factbook/world.json` f) g where g.p.`key`='languages';
 ++
 |   EXPR$0   |
 ++
 | approximately 2,300 languages are spoken in Asia, 2,150, in Africa, 1,311 
 in the Pacific, 1,060 in the Americas, and 280 in Europe |
 ++
 *In both failure cases, the error message needs to be updated to indicate 
 that the operation is not supported. The current error message and logs are 
 not clear for an end user.*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1615) Drill throws NPE on select * when JSON All-Text-Mode is turned on

2015-05-08 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534569#comment-14534569
 ] 

Abhishek Girish commented on DRILL-1615:


This has been working for a long time now and has been tested thoroughly. 
Closing. 

 Drill throws NPE on select * when JSON All-Text-Mode is turned on
 -

 Key: DRILL-1615
 URL: https://issues.apache.org/jira/browse/DRILL-1615
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Jason Altekruse
 Fix For: 0.7.0


  alter system set `store.json.all_text_mode` = true
  select * from `textmode.json` limit 1;
 ++++++
 |  field_1   |  field_2   |  field_3   |  field_4   |  field_5   |
 ++++++
 java.lang.NullPointerException
 at org.apache.drill.exec.vector.UInt4Vector$Accessor.get(UInt4Vector.java:297)
 at 
 org.apache.drill.exec.vector.RepeatedVarCharVector$Accessor.getObject(RepeatedVarCharVector.java:326)
 at 
 org.apache.drill.exec.vector.RepeatedVarCharVector$Accessor.getObject(RepeatedVarCharVector.java:305)
 at 
 org.apache.drill.exec.vector.complex.MapVector$Accessor.getObject(MapVector.java:368)
 at 
 org.apache.drill.exec.vector.accessor.GenericAccessor.getObject(GenericAccessor.java:38)
 at 
 org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
 at 
 net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
 at sqlline.SqlLine$Rows$Row.init(SqlLine.java:2388)
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2971) If BitBit connection is unexpectedly closed and we were already blocked on writing to socket, we'll stay forever in ResettableBarrier.await()

2015-05-08 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-2971:
--
Assignee: Deneche A. Hakim  (was: Jacques Nadeau)

 If BitBit connection is unexpectedly closed and we were already blocked on 
 writing to socket, we'll stay forever in ResettableBarrier.await()
 ---

 Key: DRILL-2971
 URL: https://issues.apache.org/jira/browse/DRILL-2971
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Jacques Nadeau
Assignee: Deneche A. Hakim
 Fix For: 1.0.0

 Attachments: DRILL-2971.patch


 We need to reset the ResettableBarrier if the connection dies so that the 
 message can be failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2971) If BitBit connection is unexpectedly closed and we were already blocked on writing to socket, we'll stay forever in ResettableBarrier.await()

2015-05-08 Thread Jacques Nadeau (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534714#comment-14534714
 ] 

Jacques Nadeau commented on DRILL-2971:
---

Review board: https://reviews.apache.org/r/33985/

 If BitBit connection is unexpectedly closed and we were already blocked on 
 writing to socket, we'll stay forever in ResettableBarrier.await()
 ---

 Key: DRILL-2971
 URL: https://issues.apache.org/jira/browse/DRILL-2971
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Jacques Nadeau
Assignee: Deneche A. Hakim
 Fix For: 1.0.0

 Attachments: DRILL-2971.patch


 We need to reset the ResettableBarrier if the connection dies so that the 
 message can be failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2757) Verify operators correctly handle low memory conditions and cancellations

2015-05-08 Thread Deneche A. Hakim (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534660#comment-14534660
 ] 

Deneche A. Hakim commented on DRILL-2757:
-

Marked as blocker, as the DRILL-2878 and DRILL-2476 patches depend on this.

 Verify operators correctly handle low memory conditions and cancellations
 -

 Key: DRILL-2757
 URL: https://issues.apache.org/jira/browse/DRILL-2757
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow, Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Chris Westin
Assignee: Steven Phillips
Priority: Blocker
 Fix For: 1.0.0

 Attachments: DRILL-2757.1.patch.txt


 Check the path through sort that notices low memory conditions and 
 causes the sort to spill (out-of-memory condition management).
 Also check to make sure we handle query and fragment failures properly 
 under these conditions.
 hashjoin, hashagg, and topn use large amounts of memory, and may be unable to
 complete if their memory needs can't be met; for all others, the idea is that 
 they can complete if they get their reservation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2757) Verify operators correctly handle low memory conditions and cancellations

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2757:

Priority: Blocker  (was: Major)

 Verify operators correctly handle low memory conditions and cancellations
 -

 Key: DRILL-2757
 URL: https://issues.apache.org/jira/browse/DRILL-2757
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow, Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Chris Westin
Assignee: Steven Phillips
Priority: Blocker
 Fix For: 1.0.0

 Attachments: DRILL-2757.1.patch.txt


 Check the path through sort that notices low memory conditions and 
 causes the sort to spill (out-of-memory condition management).
 Also check to make sure we handle query and fragment failures properly 
 under these conditions.
 hashjoin, hashagg, and topn use large amounts of memory, and may be unable to
 complete if their memory needs can't be met; for all others, the idea is that 
 they can complete if they get their reservation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2979) Storage HBase doesn't support customized hbase property zookeeper.znode.parent

2015-05-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534645#comment-14534645
 ] 

Aditya Kishore commented on DRILL-2979:
---

Have you tried this?

{noformat}
{
  "type": "hbase",
  "config": {
    "hbase.zookeeper.quorum": "myhostname",
    "hbase.zookeeper.property.clientPort": "2181",
    "zookeeper.znode.parent": "/hbase-unsecure"
  },
  "size.calculator.enabled": false,
  "enabled": true
}
{noformat}

 Storage HBase doesn't support customized hbase property zookeeper.znode.parent
 --

 Key: DRILL-2979
 URL: https://issues.apache.org/jira/browse/DRILL-2979
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - HBase
Affects Versions: 0.9.0
Reporter: Yi Song
Assignee: Aditya Kishore
Priority: Minor

 When the HBase property zookeeper.znode.parent is set to /hbase-unsecure, we 
 get the errors below:
 org.apache.hadoop.hbase.MasterNotRunningException: 
 org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in 
 ZooKeeper. It should have been written by the master. Check the value 
 configured in 'zookeeper.znode.parent'. There could be a mismatch with 
 the one configured in the master.
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1628)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1654)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1861)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2649)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:397)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:402)
  ~[hbase-client-0.98.7-hadoop2.jar:0.98.7-hadoop2]
 at 
 org.apache.drill.exec.store.hbase.DrillHBaseTable.init(DrillHBaseTable.java:40)
  ~[drill-storage-hbase-0.8.0.jar:0.8.0]
 at 
 org.apache.drill.exec.store.hbase.HBaseSchemaFactory$HBaseSchema.getTable(HBaseSchemaFactory.java:77)
  [drill-storage-hbase-0.8.0.jar:0.8.0]
 at 
 net.hydromatic.optiq.jdbc.SimpleOptiqSchema.getTable(SimpleOptiqSchema.java:75)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 net.hydromatic.optiq.prepare.OptiqCatalogReader.getTableFrom(OptiqCatalogReader.java:87)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 net.hydromatic.optiq.prepare.OptiqCatalogReader.getTable(OptiqCatalogReader.java:70)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 net.hydromatic.optiq.prepare.OptiqCatalogReader.getTable(OptiqCatalogReader.java:42)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.EmptyScope.getTableNamespace(EmptyScope.java:67) 
 [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:75)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:85)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:785)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:774)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:2605)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:2590)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:2813)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:85)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:785)
  [optiq-core-0.9-drill-r20.jar:na]
 at 
 org.eigenbase.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:774)
  [optiq-core-0.9-drill-r20.jar:na]
 at 

[jira] [Commented] (DRILL-1580) Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException

2015-05-08 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534576#comment-14534576
 ] 

Abhishek Girish commented on DRILL-1580:


This has been resolved for a long time now. Closing.

 Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException
 --

 Key: DRILL-1580
 URL: https://issues.apache.org/jira/browse/DRILL-1580
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Jacques Nadeau
 Fix For: 0.7.0


  select count(*) from store_sales;
 Query failed: Failure while running fragment. Schema is currently null.  You 
 must call buildSchema(SelectionVectorMode) before this container can return a 
 schema. [289a6c1c-46e9-469d-8a3b-a23292f608f7]
 Error: exception while executing query: Failure while trying to get next 
 result batch. (state=,code=0)
 Stack trace attached. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1615) Drill throws NPE on select * when JSON All-Text-Mode is turned on

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1615.
--

 Drill throws NPE on select * when JSON All-Text-Mode is turned on
 -

 Key: DRILL-1615
 URL: https://issues.apache.org/jira/browse/DRILL-1615
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Jason Altekruse
 Fix For: 0.7.0


  alter system set `store.json.all_text_mode` = true
  select * from `textmode.json` limit 1;
 ++++++
 |  field_1   |  field_2   |  field_3   |  field_4   |  field_5   |
 ++++++
 java.lang.NullPointerException
 at org.apache.drill.exec.vector.UInt4Vector$Accessor.get(UInt4Vector.java:297)
 at 
 org.apache.drill.exec.vector.RepeatedVarCharVector$Accessor.getObject(RepeatedVarCharVector.java:326)
 at 
 org.apache.drill.exec.vector.RepeatedVarCharVector$Accessor.getObject(RepeatedVarCharVector.java:305)
 at 
 org.apache.drill.exec.vector.complex.MapVector$Accessor.getObject(MapVector.java:368)
 at 
 org.apache.drill.exec.vector.accessor.GenericAccessor.getObject(GenericAccessor.java:38)
 at 
 org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
 at 
 net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
 at sqlline.SqlLine$Rows$Row.init(SqlLine.java:2388)
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1614) CTAS is broken - throws RuntimeException and Reader error

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1614.
--

 CTAS is broken - throws RuntimeException and Reader error
 -

 Key: DRILL-1614
 URL: https://issues.apache.org/jira/browse/DRILL-1614
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Parth Chandra
Priority: Critical
 Fix For: 0.7.0

 Attachments: drillbit.log


 CTAS is broken 
 Create table as statements fail with the below error:
 Query failed: Failure while running fragment.
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 LOG:
 2014-10-29 16:24:20,236 [f9e93ef8-20c8-46da-9675-5384269f8c21:frag:0:0] WARN  
 o.a.d.e.vector.complex.fn.JsonReader - Error reported. Quit writing
 2014-10-29 16:24:20,413 [f9e93ef8-20c8-46da-9675-5384269f8c21:frag:0:0] WARN  
 o.a.d.e.w.fragment.FragmentExecutor - Error while initializing or executing 
 fragment
 The first CTAS statement usually succeeds and all subsequent statements fail.
 Tried on JSON and Text 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1614) CTAS is broken - throws RuntimeException and Reader error

2015-05-08 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534571#comment-14534571
 ] 

Abhishek Girish commented on DRILL-1614:


This has been resolved for a long time now. Closing. 

 CTAS is broken - throws RuntimeException and Reader error
 -

 Key: DRILL-1614
 URL: https://issues.apache.org/jira/browse/DRILL-1614
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Parth Chandra
Priority: Critical
 Fix For: 0.7.0

 Attachments: drillbit.log


 CTAS is broken 
 Create table as statements fail with the below error:
 Query failed: Failure while running fragment.
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 LOG:
 2014-10-29 16:24:20,236 [f9e93ef8-20c8-46da-9675-5384269f8c21:frag:0:0] WARN  
 o.a.d.e.vector.complex.fn.JsonReader - Error reported. Quit writing
 2014-10-29 16:24:20,413 [f9e93ef8-20c8-46da-9675-5384269f8c21:frag:0:0] WARN  
 o.a.d.e.w.fragment.FragmentExecutor - Error while initializing or executing 
 fragment
 The first CTAS statement usually succeeds and all subsequent statements fail.
 Tried on JSON and Text 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2971) If BitBit connection is unexpectedly closed and we were already blocked on writing to socket, we'll stay forever in ResettableBarrier.await()

2015-05-08 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-2971:
--
Attachment: DRILL-2971.patch

 If BitBit connection is unexpectedly closed and we were already blocked on 
 writing to socket, we'll stay forever in ResettableBarrier.await()
 ---

 Key: DRILL-2971
 URL: https://issues.apache.org/jira/browse/DRILL-2971
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Jacques Nadeau
Assignee: Jacques Nadeau
 Fix For: 1.0.0

 Attachments: DRILL-2971.patch


 We need to reset the ResettableBarrier if the connection dies so that the 
 message can be failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1096) Drill query from sqlline hangs while reading an invalid Json file

2015-05-08 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534881#comment-14534881
 ] 

Rahul Challapalli commented on DRILL-1096:
--

Verified! Still need to add a negative test case.

 Drill query from sqlline hangs while reading an invalid Json file
 -

 Key: DRILL-1096
 URL: https://issues.apache.org/jira/browse/DRILL-1096
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
 Environment: CentOS release 6.5
Reporter: Amit Katti
Assignee: DrillCommitter
 Fix For: 0.4.0

 Attachments: DRILL-1096.1.patch.txt, DRILL-1096.2.patch.txt


 When I submit a query from sqlline which reads an invalid JSON file, it simply 
 hangs. Eventually I have to kill the sqlline process.
 Example JSON file:
 { ["rownum":1,"name":"fred ovid"] }
 { ["rownum":2,"name":"bob brown"] }
 Query: Select * from `invalid.json`;
 Expected Behavior: Drill should throw an error message instead of hanging.
 git.commit.id=f492ca5174e6a815397b741c7c6f4aac4120a1fa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2602) Throw an error on schema change during streaming aggregation

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2602:

Assignee: Jason Altekruse  (was: Deneche A. Hakim)

 Throw an error on schema change during streaming aggregation
 

 Key: DRILL-2602
 URL: https://issues.apache.org/jira/browse/DRILL-2602
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Jason Altekruse
 Fix For: 1.0.0

 Attachments: DRILL-2602.1.patch.txt, DRILL-2602.2.patch.txt, 
 DRILL-2602.3.patch.txt, DRILL-2602.4.patch.txt, optional.parquet, 
 required.parquet


 We don't recognize a schema change during streaming aggregation when a column is 
 a mix of required and optional types.
 Hash aggregation does throw the correct error message.
 I have a table 'mix' where:
 {code}
 [Fri Mar 27 09:46:07 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ls -ltr
 total 753
 -rwxr-xr-x 1 root root 759879 Mar 27 09:41 optional.parquet
 -rwxr-xr-x 1 root root   9867 Mar 27 09:41 required.parquet
 [Fri Mar 27 09:46:09 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ~/parquet-tools-1.5.1-SNAPSHOT/parquet-schema optional.parquet
 message root {
   optional binary c_varchar (UTF8);
   optional int32 c_integer;
   optional int64 c_bigint;
   optional float c_float;
   optional double c_double;
   optional int32 c_date (DATE);
   optional int32 c_time (TIME);
   optional int64 c_timestamp (TIMESTAMP);
   optional boolean c_boolean;
   optional double d9;
   optional double d18;
   optional double d28;
   optional double d38;
 }
 [Fri Mar 27 09:46:41 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ~/parquet-tools-1.5.1-SNAPSHOT/parquet-schema required.parquet
 message root {
   required binary c_varchar (UTF8);
   required int32 c_integer;
   required int64 c_bigint;
   required float c_float;
   required double c_double;
   required int32 c_date (DATE);
   required int32 c_time (TIME);
   required int64 c_timestamp (TIMESTAMP);
   required boolean c_boolean;
   required double d9;
   required double d18;
   required double d28;
   required double d38;
 }
 {code}
 Nice error message on hash aggregation:
 {code}
 0: jdbc:drill:schema=dfs select count(*) from mix group by c_integer;
 ++
 |   EXPR$0   |
 ++
 Query failed: Query stopped., Hash aggregate does not support schema changes 
 [ 2bc255ce-c7f9-47bf-80b0-a5c87cfa67be on atsqa4-134.qa.lab:31010 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 {code}
 On streaming aggregation, the exception is hard for the end user to 
 understand:
 {code}
 0: jdbc:drill:schema=dfs alter session set `planner.enable_hashagg` = false;
 +++
 | ok |  summary   |
 +++
 | true   | planner.enable_hashagg updated. |
 +++
 1 row selected (0.067 seconds)
 0: jdbc:drill:schema=dfs select count(*) from mix group by c_integer;
 ++
 |   EXPR$0   |
 ++
 Query failed: RemoteRpcException: Failure while running fragment., Failure 
 while reading vector.  Expected vector class of 
 org.apache.drill.exec.vector.IntVector but was holding vector class 
 org.apache.drill.exec.vector.NullableIntVector. [ 
 5610e589-38e0-4dc5-a560-649516180ba4 on atsqa4-134.qa.lab:31010 ]
 [ 5610e589-38e0-4dc5-a560-649516180ba4 on atsqa4-134.qa.lab:31010 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1176) Jackson failing in some json reading when first character of quoted string is bracket.

2015-05-08 Thread Rahul Challapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Challapalli closed DRILL-1176.


Verified and added the test case below:

Functional/Passing/json_storage/jsonbug_DRILL-1176.q

 Jackson failing in some json reading when first character of quoted string is 
 bracket.
 --

 Key: DRILL-1176
 URL: https://issues.apache.org/jira/browse/DRILL-1176
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
Reporter: Jacques Nadeau
Assignee: DrillCommitter
 Fix For: 0.4.0

 Attachments: DRILL-1176.1.patch.txt


 Values similar to 
 {
   "happy": "[my value 1]", "[my value 2]"
 }
 will sometimes cause Jackson to parse incorrectly.  This is either an issue 
 internal to Jackson or part of the Record Splitter we use before feeding to 
 Jackson.  I suggest starting by upgrading from 2.2 to 2.4.1 and seeing if that 
 solves the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2691) Source files with Windows line endings

2015-05-08 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534865#comment-14534865
 ] 

Jason Altekruse commented on DRILL-2691:


The patch is not applying cleanly; it doesn't look like git works well when 
trying to track line-ending changes with patch files. I might be doing something 
wrong, but I think whoever commits this is going to have to just make the 
change manually and commit it from their repo. I am +1 on the change and will 
try to make this change myself soon and commit it.

 Source files with Windows line endings
 --

 Key: DRILL-2691
 URL: https://issues.apache.org/jira/browse/DRILL-2691
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Affects Versions: 0.6.0
Reporter: Deneche A. Hakim
Assignee: Jason Altekruse
 Fix For: 1.0.0

 Attachments: DRILL-2691.1.patch.txt


 The following files:
 {noformat}
 common/src/main/java/org/apache/drill/common/util/DrillStringUtils.java
 contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseCFAsJSONString.java
 {noformat}
 Have Windows line endings in them. Trying to apply a patch that contains 
 changes in one of those files will fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2878) FragmentExecutor.closeOutResources() is not called if an exception happens in the Foreman before the fragment executor starts running

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2878:

Attachment: DRILL-2878.2.patch.txt

Added more information in the comment. Also added a unit test.

Note: the unit test assumes DRILL-2757 has been committed, as it tries to inject 
an exception at a position defined in DRILL-2757.

 FragmentExecutor.closeOutResources() is not called if an exception happens in 
 the Foreman before the fragment executor starts running
 -

 Key: DRILL-2878
 URL: https://issues.apache.org/jira/browse/DRILL-2878
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Reporter: Deneche A. Hakim
Assignee: Deneche A. Hakim
 Fix For: 1.0.0

 Attachments: DRILL-2878.1.patch.txt, DRILL-2878.2.patch.txt


 When the Foreman sets up the root FragmentExecutor and it needs to wait for 
 data from the remote fragments, the fragment manager is recorded in the work 
 bus and the root fragment executor is not run immediately.
 If an exception happens in the Foreman while setting up the remote fragments, 
 the Foreman cancels all fragments and returns a FAILED message to the client.
 Because the root fragment executor was not run, it will never call its 
 closeOutResources() method and its fragment context will never be closed.
 You can easily reproduce this by running the following unit test:
 {noformat}
 org.apache.drill.exec.server.TestDrillbitResilience#failsWhenSendingFragments
 {noformat}
 Although the test passes successfully because Drill does report the correct 
 failure to the client, the memory leak is not detected and will show up after 
 the test finishes.
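 A schematic illustration of the leak described above (the names here are 
 hypothetical, not the actual FragmentExecutor code): cleanup only happens 
 inside run(), so an executor that is registered but never run never releases 
 its fragment context.
 {code}
// Hypothetical sketch: if run() is never invoked, closeOutResources() is never
// reached, so the fragment context (and the memory it holds) is never released.
class FragmentExecutorSketch implements Runnable {
  private final AutoCloseable fragmentContext;

  FragmentExecutorSketch(AutoCloseable fragmentContext) {
    this.fragmentContext = fragmentContext;
  }

  public void run() {
    try {
      // ... execute the fragment ...
    } finally {
      closeOutResources();  // only reached when run() actually executes
    }
  }

  void closeOutResources() {
    try {
      fragmentContext.close();
    } catch (Exception e) {
      // log and continue
    }
  }
}
 {code}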



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2602) Throw an error on schema change during streaming aggregation

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2602:

Attachment: DRILL-2602.4.patch.txt

Updated StreamingAggBatch to throw a proper user exception.

All unit tests are passing, along with customer/tpch.

 Throw an error on schema change during streaming aggregation
 

 Key: DRILL-2602
 URL: https://issues.apache.org/jira/browse/DRILL-2602
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Deneche A. Hakim
 Fix For: 1.0.0

 Attachments: DRILL-2602.1.patch.txt, DRILL-2602.2.patch.txt, 
 DRILL-2602.3.patch.txt, DRILL-2602.4.patch.txt, optional.parquet, 
 required.parquet


 We don't recognize a schema change during streaming aggregation when a column is 
 a mix of required and optional types.
 Hash aggregation does throw the correct error message.
 I have a table 'mix' where:
 {code}
 [Fri Mar 27 09:46:07 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ls -ltr
 total 753
 -rwxr-xr-x 1 root root 759879 Mar 27 09:41 optional.parquet
 -rwxr-xr-x 1 root root   9867 Mar 27 09:41 required.parquet
 [Fri Mar 27 09:46:09 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ~/parquet-tools-1.5.1-SNAPSHOT/parquet-schema optional.parquet
 message root {
   optional binary c_varchar (UTF8);
   optional int32 c_integer;
   optional int64 c_bigint;
   optional float c_float;
   optional double c_double;
   optional int32 c_date (DATE);
   optional int32 c_time (TIME);
   optional int64 c_timestamp (TIMESTAMP);
   optional boolean c_boolean;
   optional double d9;
   optional double d18;
   optional double d28;
   optional double d38;
 }
 [Fri Mar 27 09:46:41 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/mix 
 ] # ~/parquet-tools-1.5.1-SNAPSHOT/parquet-schema required.parquet
 message root {
   required binary c_varchar (UTF8);
   required int32 c_integer;
   required int64 c_bigint;
   required float c_float;
   required double c_double;
   required int32 c_date (DATE);
   required int32 c_time (TIME);
   required int64 c_timestamp (TIMESTAMP);
   required boolean c_boolean;
   required double d9;
   required double d18;
   required double d28;
   required double d38;
 }
 {code}
 Nice error message on hash aggregation:
 {code}
 0: jdbc:drill:schema=dfs select count(*) from mix group by c_integer;
 ++
 |   EXPR$0   |
 ++
 Query failed: Query stopped., Hash aggregate does not support schema changes 
 [ 2bc255ce-c7f9-47bf-80b0-a5c87cfa67be on atsqa4-134.qa.lab:31010 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 {code}
 On streaming aggregation, the exception is hard for the end user to 
 understand:
 {code}
 0: jdbc:drill:schema=dfs alter session set `planner.enable_hashagg` = false;
 +++
 | ok |  summary   |
 +++
 | true   | planner.enable_hashagg updated. |
 +++
 1 row selected (0.067 seconds)
 0: jdbc:drill:schema=dfs select count(*) from mix group by c_integer;
 ++
 |   EXPR$0   |
 ++
 Query failed: RemoteRpcException: Failure while running fragment., Failure 
 while reading vector.  Expected vector class of 
 org.apache.drill.exec.vector.IntVector but was holding vector class 
 org.apache.drill.exec.vector.NullableIntVector. [ 
 5610e589-38e0-4dc5-a560-649516180ba4 on atsqa4-134.qa.lab:31010 ]
 [ 5610e589-38e0-4dc5-a560-649516180ba4 on atsqa4-134.qa.lab:31010 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
 at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
 at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
 at sqlline.SqlLine.print(SqlLine.java:1809)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
 at sqlline.SqlLine.dispatch(SqlLine.java:889)
 at sqlline.SqlLine.begin(SqlLine.java:763)
 at sqlline.SqlLine.start(SqlLine.java:498)
 at sqlline.SqlLine.main(SqlLine.java:460)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (DRILL-2662) Exception type not being included when propagating exception message

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2662:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Exception type not being included when propagating exception message
 

 Key: DRILL-2662
 URL: https://issues.apache.org/jira/browse/DRILL-2662
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 0.8.0
Reporter: Daniel Barclay (Drill)
Assignee: Deneche A. Hakim
 Fix For: 1.1.0


 A query that tries to cast a non-numeric string (e.g., col4) to an integer 
 fails (as expected), with the root exception being a NumberFormatException 
 whose exception trace printout would begin with:
   java.lang.NumberFormatException: col4
 However, one of the higher-level chained/wrapping exceptions shows up like 
 this:
   Query failed: RemoteRpcException: Failure while running fragment., col4 [ 
 99343f97-5c70-4454-b67f-ae550b2252fb on dev-linux2:31013 ]
 In particular, note that the most important information, that there was a 
 numeric syntax error, is not present in the message, even though some details 
 (the string with the invalid syntax) are present.
 This usually comes from taking getMessage() of an exception rather than 
 toString() when making a higher-level message.
 The toString() method normally includes the class name--and frequently the 
 class name contains key information that is not given in the exception 
 message.  (Maybe Sun/Oracle should have always put the full information in 
 the message part, but they didn't.)
 _If_ all our exceptions were just for developers, then I'd suggest always 
 wrapping exceptions like this:
    throw new WrappingException( "higher-level problem: " + e, e );
  rather than
    throw new WrappingException( "higher-level problem: " + e.getMessage(), e );
 to avoid losing information.  (Then the top-most exception's message string 
 always includes all the information from the lower-level exception's message 
 strings.)
 However, since that would inject class names (irrelevant to users) into 
 message strings (shown to users), for Drill we should probably make sure that 
 exceptions like NumberFormatException (for expected conversion errors) are 
 always wrapped in or replaced by exceptions that are meant for users (e.g., 
 an InvalidIntegerFormatDataException (from standard SQL exception conditions 
 like data exception — invalid datetime format) whose message string stands 
 on its own (independent of whether the class name appears with it)).  
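 A minimal sketch of the contrast described above (illustrative only; 
 Integer.parseInt is used here just to produce a NumberFormatException, so the 
 message text differs from Drill's, but the class-name behavior is the same):
 {code}
// Illustrative only: wrapping with getMessage() drops the exception type,
// while concatenating the exception itself (its toString()) keeps it.
public class MessageWrappingDemo {
  public static void main(String[] args) {
    try {
      Integer.parseInt("col4");  // not a number -> NumberFormatException
    } catch (NumberFormatException e) {
      // Prints: higher-level problem: For input string: "col4"
      System.out.println("higher-level problem: " + e.getMessage());
      // Prints: higher-level problem: java.lang.NumberFormatException: For input string: "col4"
      System.out.println("higher-level problem: " + e);
    }
  }
}
 {code}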



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-2997) Remove references to groupCount from SerializedField

2015-05-08 Thread Hanifi Gunes (JIRA)
Hanifi Gunes created DRILL-2997:
---

 Summary: Remove references to groupCount from SerializedField
 Key: DRILL-2997
 URL: https://issues.apache.org/jira/browse/DRILL-2997
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Data Types
Reporter: Hanifi Gunes
Assignee: Hanifi Gunes


Now that RVVs do not have the notion of group count, we should remove obsolete 
code that makes use of group count from SerializedField and other classes (if 
any).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2994) Incorrect error message when disconnecting from server (using direct connection to drillbit)

2015-05-08 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra updated DRILL-2994:
-
Assignee: Hanifi Gunes  (was: Parth Chandra)

 Incorrect error message when disconnecting from server (using direct 
 connection to drillbit)
 

 Key: DRILL-2994
 URL: https://issues.apache.org/jira/browse/DRILL-2994
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Reporter: Parth Chandra
Assignee: Hanifi Gunes
Priority: Minor
 Fix For: 1.0.0

 Attachments: DRILL-2994.1.patch.diff


 If connected to the server using a direct drillbit connection, the JDBC client 
 (sqlline) prints an "already disconnected" error when disconnecting.
 This happens because of an exception caused by the client trying to close 
 the ZK cluster coordinator, which is null.
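 A minimal sketch of the kind of guard that avoids this; the class and field 
 names are hypothetical, not the actual client code or the attached patch:
 {code}
// Hypothetical sketch: only shut down a cluster coordinator that was actually
// created; direct drillbit connections leave it null, so close() must not
// touch it.
public class ClientCloseSketch {
  private AutoCloseable clusterCoordinator;  // stays null for direct connections

  public void close() throws Exception {
    if (clusterCoordinator != null) {  // only close what was actually opened
      clusterCoordinator.close();
    }
  }
}
 {code}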



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2994) Incorrect error message when disconnecting from server (using direct connection to drillbit)

2015-05-08 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra updated DRILL-2994:
-
Attachment: DRILL-2994.1.patch.diff

 Incorrect error message when disconnecting from server (using direct 
 connection to drillbit)
 

 Key: DRILL-2994
 URL: https://issues.apache.org/jira/browse/DRILL-2994
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Reporter: Parth Chandra
Assignee: Parth Chandra
Priority: Minor
 Fix For: 1.0.0

 Attachments: DRILL-2994.1.patch.diff


 If connected to the server using a direct drillbit connection, the JDBC client 
 (sqlline) prints an "already disconnected" error when disconnecting.
 This happens because of an exception caused by the client trying to close 
 the ZK cluster coordinator, which is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1010) Query throws exception after displaying result

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish updated DRILL-1010:
---
Labels: no_verified_test  (was: )

 Query throws exception after displaying result
 --

 Key: DRILL-1010
 URL: https://issues.apache.org/jira/browse/DRILL-1010
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Reporter: Abhishek Girish
  Labels: no_verified_test
 Fix For: 0.4.0


 This issue was observed with TPC-DS dataset. After displaying results, the 
 query seems to fail with exceptions. 
 Query:
 select * from item limit 10;
 Result:
 Outputs 10 rows
 Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while 
 running query.[error_id: bf83170d-b644-4145-b99c-377d60e342e4
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  ArrayIndexOutOfBoundsException
 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 Log:
 2014-06-12 00:38:14,654 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.e.w.fragment.FragmentExecutor - Caught exception while running fragment
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,655 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] ERROR 
 o.a.d.e.w.f.AbstractStatusReporter - Error 
 67e30181-bcb9-4cde-bdcd-dcab2b44d28a: Failure while running fragment.
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,656 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to 
 Foreman of profile {
   state: FAILED
   error {
 error_id: 67e30181-bcb9-4cde-bdcd-dcab2b44d28a
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  
 ArrayIndexOutOfBoundsException
   }
   operator_profile {
 input_profile {
   records: 0
   batches: 0
   schemas: 0
 }
 operator_id: 3
 operator_type: 21
 setup_nanos: 0
 process_nanos: 0
   }
   operator_profile {
 input_profile {
   records: 8191
   batches: 1
   schemas: 1
 }
 operator_id: 2
 operator_type: 7
 setup_nanos: 186912
 process_nanos: 176768368
   }
   operator_profile {
 input_profile {
   records: 10
   batches: 1
   schemas: 1
 }
 operator_id: 1
 operator_type: 14
 setup_nanos: 110173355
 process_nanos: 182528775
   }
   start_time: 1402533494300
   end_time: 1402533494655
   memory_used: 53641636
 }
 handle {
   query_id {
 part1: 5381226858754228917
 part2: -6757939115204015376
   }
   major_fragment_id: 0
   minor_fragment_id: 0
 }
 2014-06-12 00:38:14,674 [WorkManager-8] WARN  
 o.a.d.e.w.fragment.FragmentExecutor - Failure while closing context in failed 
 state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2996) ValueVectors shouldn't call reAlloc() in a while() loop

2015-05-08 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535167#comment-14535167
 ] 

Jason Altekruse commented on DRILL-2996:


Along with this functional refactoring, we need to come up with a hard 
specification of how much repetition we support. It is a combination of large 
varlength values and large lists that are bringing out these scenarios with 
excessive allocation.

We have a hard limit that a batch of records can only have 65K elements (so 
that we know that we can use a two-byte unsigned int to index into them). 
Currently we impose no specific limit on the number of child values we can have 
in a list, but these will hit limits as their inner values are stored in vectors 
themselves (so they can also hit this 65K limit; if we have 65K lists, this could 
happen with a single-element list at every index and a few extra elements in 
just one of the lists). Most cases are handled in regular execution by the 
field read/writer abstractions, as well as our default behavior to only fill 
vectors with ~4000 values. At the Value Vector level we do not have enforcement 
of a hard limit for these inner values and I think that is part of the problem.

 ValueVectors shouldn't call reAlloc() in a while() loop
 ---

 Key: DRILL-2996
 URL: https://issues.apache.org/jira/browse/DRILL-2996
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Reporter: Chris Westin
Assignee: Hanifi Gunes

 Instead, reAlloc() should be changed to take a new minimum size as an 
 argument. This value is just the value currently used to determine the while 
 loop's termination. Then reAlloc() can figure out how much more to allocate 
 once and for all, instead of possibly reallocating and copying more than once, 
 and it can make sure that the size doesn't overflow (we've seen some instances 
 of the allocator being called with negative sizes).
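 A minimal sketch of that proposal (illustrative only, not the actual 
 ValueVector code), assuming the vector tracks its current capacity in bytes:
 {code}
// Grow to at least minCapacity in a single step, and reject sizes that would
// overflow an int (the "negative sizes" mentioned above).
public class ReallocSketch {
  private int capacity = 4096;  // current allocation in bytes (illustrative)

  void reAlloc(int minCapacity) {
    if (minCapacity <= capacity) {
      return;  // already large enough
    }
    long newCapacity = capacity;
    while (newCapacity < minCapacity) {
      newCapacity *= 2;  // compute the final target with a long; no copying yet
    }
    if (newCapacity > Integer.MAX_VALUE) {
      throw new IllegalStateException("vector size overflow: " + newCapacity);
    }
    // allocate the new buffer and copy the old contents exactly once here
    capacity = (int) newCapacity;
  }
}
 {code}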



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1047) Amplab - Queries 10-12 fail with exception

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1047.
--

Issue is invalid. Closing. 

 Amplab - Queries 10-12 fail with exception
 --

 Key: DRILL-1047
 URL: https://issues.apache.org/jira/browse/DRILL-1047
 Project: Apache Drill
  Issue Type: Bug
  Components: SQL Parser
Reporter: Abhishek Girish
 Fix For: 0.4.0


 Amplab queries Q10 - Q12 fail with the following errors:
 Query10.q
 SELECT SUBSTRING(sourceIP, 1, 8), SUM(adRevenue) FROM uservisits GROUP BY 
 SUBSTRING(sourceIP, 1, 10);
 Query11.q
 SELECT SUBSTRING(sourceIP, 1, 10), SUM(adRevenue) FROM uservisits GROUP BY 
 SUBSTRING(sourceIP, 1, 10);
 Query12.q
 SELECT SUBSTRING(sourceIP, 1, 12), SUM(adRevenue) FROM uservisits GROUP BY 
 SUBSTRING(sourceIP, 1, 12);
 Errors:
 message: Failure while setting up Foreman.  AssertionError:[ 
 typeName.allowsPrecScale(true, false) ]
 ]
   at 
 org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:72)
   at 
 org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:89)
   at 
 org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52)
   at 
 org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34)
   at 
 org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:154)
   at 
 org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:139)
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
   at 
 io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
   at 
 io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
   at 
 io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
   at 
 io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
   at 
 io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:173)
   at 
 io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
   at 
 io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
   at 
 io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
   at 
 io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:334)
   at 
 io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:320)
   at 
 io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
   at 
 io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:100)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:497)
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:465)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:359)
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
   at java.lang.Thread.run(Thread.java:744)
 Log:
 2014-06-20 04:02:38,017 [825846ba-d65b-4c80-bf42-6fe6942c7a17:foreman] ERROR 
 o.a.drill.exec.work.foreman.Foreman - Error 
 1a79a2e4-cf80-4616-94a8-88a3949e1cc2: Failure while setting up Foreman.
 java.lang.AssertionError: typeName.allowsPrecScale(true, false)
 at org.eigenbase.sql.type.BasicSqlType.init(BasicSqlType.java:66) 
 ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 org.eigenbase.sql.type.SqlTypeFactoryImpl.createSqlType(SqlTypeFactoryImpl.java:59)
  ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 org.eigenbase.sql.type.SqlTypeTransforms$4.transformType(SqlTypeTransforms.java:107)
  ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 org.eigenbase.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:62)
  ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 org.eigenbase.sql.SqlOperator.inferReturnType(SqlOperator.java:451) 
 ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 org.eigenbase.sql.SqlOperator.validateOperands(SqlOperator.java:418) 
 ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at org.eigenbase.sql.SqlFunction.deriveType(SqlFunction.java:290) 
 ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at org.eigenbase.sql.SqlFunction.deriveType(SqlFunction.java:206) 
 ~[optiq-core-0.7-20140617.012959-7.jar:na]
 at 
 

[jira] [Commented] (DRILL-1010) Query throws exception after displaying result

2015-05-08 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535333#comment-14535333
 ] 

Abhishek Girish commented on DRILL-1010:


The issue described here is resolved for the query mentioned. Verified 
manually on Git.Commit.ID d12bee0 (May 7 build). Closing. 

 Query throws exception after displaying result
 --

 Key: DRILL-1010
 URL: https://issues.apache.org/jira/browse/DRILL-1010
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Reporter: Abhishek Girish
  Labels: no_verified_test
 Fix For: 0.4.0


 This issue was observed with TPC-DS dataset. After displaying results, the 
 query seems to fail with exceptions. 
 Query:
 select * from item limit 10;
 Result:
 Outputs 10 rows
 Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while 
 running query.[error_id: bf83170d-b644-4145-b99c-377d60e342e4
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  ArrayIndexOutOfBoundsException
 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 Log:
 2014-06-12 00:38:14,654 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.e.w.fragment.FragmentExecutor - Caught exception while running fragment
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,655 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] ERROR 
 o.a.d.e.w.f.AbstractStatusReporter - Error 
 67e30181-bcb9-4cde-bdcd-dcab2b44d28a: Failure while running fragment.
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,656 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to 
 Foreman of profile {
   state: FAILED
   error {
 error_id: 67e30181-bcb9-4cde-bdcd-dcab2b44d28a
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  
 ArrayIndexOutOfBoundsException
   }
   operator_profile {
 input_profile {
   records: 0
   batches: 0
   schemas: 0
 }
 operator_id: 3
 operator_type: 21
 setup_nanos: 0
 process_nanos: 0
   }
   operator_profile {
 input_profile {
   records: 8191
   batches: 1
   schemas: 1
 }
 operator_id: 2
 operator_type: 7
 setup_nanos: 186912
 process_nanos: 176768368
   }
   operator_profile {
 input_profile {
   records: 10
   batches: 1
   schemas: 1
 }
 operator_id: 1
 operator_type: 14
 setup_nanos: 110173355
 process_nanos: 182528775
   }
   start_time: 1402533494300
   end_time: 1402533494655
   memory_used: 53641636
 }
 handle {
   query_id {
 part1: 5381226858754228917
 part2: -6757939115204015376
   }
   major_fragment_id: 0
   minor_fragment_id: 0
 }
 2014-06-12 00:38:14,674 [WorkManager-8] WARN  
 o.a.d.e.w.fragment.FragmentExecutor - Failure while closing context in failed 
 state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1010) Query throws exception after displaying result

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1010.
--

 Query throws exception after displaying result
 --

 Key: DRILL-1010
 URL: https://issues.apache.org/jira/browse/DRILL-1010
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet
Reporter: Abhishek Girish
  Labels: no_verified_test
 Fix For: 0.4.0


 This issue was observed with TPC-DS dataset. After displaying results, the 
 query seems to fail with exceptions. 
 Query:
 select * from item limit 10;
 Result:
 Outputs 10 rows
 Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while 
 running query.[error_id: bf83170d-b644-4145-b99c-377d60e342e4
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  ArrayIndexOutOfBoundsException
 ]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 Log:
 2014-06-12 00:38:14,654 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.e.w.fragment.FragmentExecutor - Caught exception while running fragment
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,655 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] ERROR 
 o.a.d.e.w.f.AbstractStatusReporter - Error 
 67e30181-bcb9-4cde-bdcd-dcab2b44d28a: Failure while running fragment.
 java.lang.ArrayIndexOutOfBoundsException: null
 2014-06-12 00:38:14,656 [4aadf551-2bd6-42b5-a236-fa2e088c0af0:frag:0:0] DEBUG 
 o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to 
 Foreman of profile {
   state: FAILED
   error {
 error_id: 67e30181-bcb9-4cde-bdcd-dcab2b44d28a
 endpoint {
   address: drillats2.qa.lab
   user_port: 31010
   control_port: 31011
   data_port: 31012
 }
 error_type: 0
 message: Failure while running fragment.  
 ArrayIndexOutOfBoundsException
   }
   operator_profile {
 input_profile {
   records: 0
   batches: 0
   schemas: 0
 }
 operator_id: 3
 operator_type: 21
 setup_nanos: 0
 process_nanos: 0
   }
   operator_profile {
 input_profile {
   records: 8191
   batches: 1
   schemas: 1
 }
 operator_id: 2
 operator_type: 7
 setup_nanos: 186912
 process_nanos: 176768368
   }
   operator_profile {
 input_profile {
   records: 10
   batches: 1
   schemas: 1
 }
 operator_id: 1
 operator_type: 14
 setup_nanos: 110173355
 process_nanos: 182528775
   }
   start_time: 1402533494300
   end_time: 1402533494655
   memory_used: 53641636
 }
 handle {
   query_id {
 part1: 5381226858754228917
 part2: -6757939115204015376
   }
   major_fragment_id: 0
   minor_fragment_id: 0
 }
 2014-06-12 00:38:14,674 [WorkManager-8] WARN  
 o.a.d.e.w.fragment.FragmentExecutor - Failure while closing context in failed 
 state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: DRILL-2343.1.patch.txt

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.1.patch.txt


 Create JDBC driver that functions as proxy to Drill (or other) JDBC driver in 
 order to report calls made across the JDBC API.
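 The attached patch itself is not reproduced in this thread. As a rough illustration of the proxy-driver idea only (the class name, logging destination, and use of a dynamic proxy below are assumptions, not taken from DRILL-2343.1.patch.txt), a wrapper over java.sql.Driver might look like this:
 {code}
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
 import java.sql.Connection;
 import java.sql.Driver;
 import java.sql.DriverPropertyInfo;
 import java.sql.SQLException;
 import java.sql.SQLFeatureNotSupportedException;
 import java.util.Properties;
 import java.util.logging.Logger;

 // Illustrative tracing proxy: delegates to an underlying JDBC driver and
 // reports every method call made on the Connection it returns.
 public class TracingProxyDriver implements Driver {
   private final Driver delegate;

   public TracingProxyDriver(Driver delegate) {
     this.delegate = delegate;
   }

   @Override
   public Connection connect(String url, Properties info) throws SQLException {
     final Connection real = delegate.connect(url, info);
     if (real == null) {
       return null;  // JDBC contract: the delegate does not handle this URL
     }
     InvocationHandler tracer = new InvocationHandler() {
       @Override
       public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
         System.err.println("TRACE: Connection." + method.getName());
         try {
           return method.invoke(real, args);
         } catch (InvocationTargetException e) {
           throw e.getCause();  // rethrow the real SQLException, not the reflection wrapper
         }
       }
     };
     return (Connection) Proxy.newProxyInstance(
         TracingProxyDriver.class.getClassLoader(),
         new Class<?>[] { Connection.class }, tracer);
   }

   @Override
   public boolean acceptsURL(String url) throws SQLException {
     return delegate.acceptsURL(url);
   }

   @Override
   public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
     return delegate.getPropertyInfo(url, info);
   }

   @Override
   public int getMajorVersion() { return delegate.getMajorVersion(); }

   @Override
   public int getMinorVersion() { return delegate.getMinorVersion(); }

   @Override
   public boolean jdbcCompliant() { return delegate.jdbcCompliant(); }

   @Override
   public Logger getParentLogger() throws SQLFeatureNotSupportedException {
     throw new SQLFeatureNotSupportedException("not supported by this sketch");
   }
 }
 {code}
 Statements and result sets could be wrapped the same way, so that calls across the whole JDBC API are reported, not just those on Connection.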



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-2996) ValueVectors shouldn't call reAlloc() in a while() loop

2015-05-08 Thread Chris Westin (JIRA)
Chris Westin created DRILL-2996:
---

 Summary: ValueVectors shouldn't call reAlloc() in a while() loop
 Key: DRILL-2996
 URL: https://issues.apache.org/jira/browse/DRILL-2996
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Reporter: Chris Westin
Assignee: Daniel Barclay (Drill)


Instead, reAlloc() should be changed to take a new minimum size as an 
argument. This value is just the value currently used to determine the while loop's 
termination. Then reAlloc() can figure out how much more to allocate once and 
for all, instead of possibly reallocating and copying more than once, and it 
can make sure that the size doesn't overflow (we've seen some instances of the 
allocator being called with negative sizes).
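A minimal sketch of the proposed shape, using a plain byte[] in place of the actual DrillBuf-backed ValueVector code (the class, field, and doubling policy below are illustrative assumptions):
{code}
import java.util.Arrays;

// Illustration only: the size computation still loops, but the
// allocate-and-copy happens exactly once, and the overflow check prevents
// ever asking the allocator for a negative size.
final class GrowableBufferSketch {
  private byte[] data = new byte[4096];

  void reAlloc(int minimumCapacity) {
    if (minimumCapacity <= data.length) {
      return;  // already large enough
    }
    long newCapacity = data.length;
    while (newCapacity < minimumCapacity) {
      newCapacity *= 2;  // pick the target size first...
    }
    if (newCapacity > Integer.MAX_VALUE) {
      throw new IllegalStateException("requested vector size overflows int: " + newCapacity);
    }
    data = Arrays.copyOf(data, (int) newCapacity);  // ...then reallocate and copy once
  }
}
{code}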



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-2569) Minor fragmentId in Profile UI gets truncated to the last 2 digits

2015-05-08 Thread Krystal (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krystal closed DRILL-2569.
--

git.commit.id.abbrev=79a712a

Verified that the minor fragments > 99 are displayed correctly on the profile 
UI.

 Minor fragmentId in Profile UI gets truncated to the last 2 digits
 --

 Key: DRILL-2569
 URL: https://issues.apache.org/jira/browse/DRILL-2569
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - HTTP
Affects Versions: 0.9.0
Reporter: Krystal
Assignee: Jason Altekruse
  Labels: no_verified_test
 Fix For: 1.0.0

 Attachments: DRILL-2569.1.patch.txt


 git.commit.id.abbrev=8493713
 When the profile json contains minorFragmentId > 99, the UI only displays the 
 last 2 digits. For example, if minorFragmentId=100, it is displayed as 
 00. Here is a snippet of such data from the profile UI:
 04-xx-03 - PARQUET_ROW_GROUP_SCAN
 Minor Fragment  Setup   Process  Wait   Max Batches  Max Records  Peak Mem
 04-98-03  0.000   3.807   1.795   0   0   15MB
 04-99-03  0.000   3.247   3.111   0   0   24MB
 04-00-03  0.000   3.163   2.545   0   0   20MB
 04-01-03  0.000   3.272   2.278   0   0   15MB
 04-02-03  0.000   3.496   2.004   0   0   15MB



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2985) NPE seen for project distinct values from CSV

2015-05-08 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535423#comment-14535423
 ] 

Khurram Faraaz commented on DRILL-2985:
---

This is a regression. I disabled the new text reader and I don't see the NPE; 
the NPE is seen only with the new text reader.

{code}
0: jdbc:drill:> alter session set `exec.storage.enable_new_text_reader` = false;
+------------+------------------------------------------------+
|     ok     |                    summary                     |
+------------+------------------------------------------------+
| true       | exec.storage.enable_new_text_reader updated.   |
+------------+------------------------------------------------+
1 row selected (0.129 seconds)
{code}

We see a proper error message when the new text reader is disabled.

{code}
0: jdbc:drill:> select distinct type from `airports.csv`;
Error: SYSTEM ERROR: Selected column(s) must have name 'columns' or must be 
plain '*'

Fragment 0:0

[Error Id: a43e56d0-31f5-4aec-b881-9269080546dd on centos-04.qa.lab:31010] 
(state=,code=0)
{code}

 NPE seen for project distinct values from CSV
 -

 Key: DRILL-2985
 URL: https://issues.apache.org/jira/browse/DRILL-2985
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Text & CSV
Affects Versions: 1.0.0
 Environment: d12bee05a8f6e974c70d5d2a94176b176d7dba5b | DRILL-2508: 
 Added a wrapper class for OptionValue to include status Option status: BOOT, 
 DEFAULT, CHANGED | 07.05.2015 @ 13:08:36 EDT
Reporter: Khurram Faraaz
Assignee: Steven Phillips

 I am seeing an NPE when we project distinct values. Test was run on a 4 node 
 cluster on CentOS.
 {code}
  0: jdbc:drill:> select distinct type from `airports.csv`;
 Error: SYSTEM ERROR: null
 Fragment 0:0
 [Error Id: 9f6e6929-41f6-4821-8a31-8bd45143f3d1 on centos-01.qa.lab:31010] 
 (state=,code=0)
 {code}
 Stacktrace from drillbit.log 
 {code}
 2015-05-07 20:03:21,790 [2ab43af5-b6a9-0578-c45b-454b1a1a7b35:frag:0:0] ERROR 
 o.a.d.c.e.DrillRuntimeException - SYSTEM ERROR: null
 Fragment 0:0
 [Error Id: 9f6e6929-41f6-4821-8a31-8bd45143f3d1 on centos-01.qa.lab:31010]
 org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: null
 Fragment 0:0
 [Error Id: 9f6e6929-41f6-4821-8a31-8bd45143f3d1 on centos-01.qa.lab:31010]
 at 
 org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:465)
  ~[drill-common-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:262)
  [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:232)
  [drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
  [drill-common-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_75]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_75]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
 Caused by: java.lang.NullPointerException: null
 at 
 org.apache.drill.exec.store.easy.text.compliant.CompliantTextRecordReader.cleanup(CompliantTextRecordReader.java:147)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ScanBatch.init(ScanBatch.java:104) 
 ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.store.dfs.easy.EasyFormatPlugin.getReaderBatch(EasyFormatPlugin.java:189)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.store.dfs.easy.EasyReaderBatchCreator.getBatch(EasyReaderBatchCreator.java:35)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.store.dfs.easy.EasyReaderBatchCreator.getBatch(EasyReaderBatchCreator.java:28)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:140)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:163)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:121)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:163)
  ~[drill-java-exec-1.0.0-SNAPSHOT-rebuffed.jar:1.0.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:121)
  

[jira] [Closed] (DRILL-1180) Case messes up the datatype returned by function surrounding it

2015-05-08 Thread Ramana Inukonda Nagaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Inukonda Nagaraj closed DRILL-1180.
--

Verified as part of TPCH SF100 queries

 Case messes up the datatype returned by function surrounding it
 ---

 Key: DRILL-1180
 URL: https://issues.apache.org/jira/browse/DRILL-1180
 Project: Apache Drill
  Issue Type: Bug
Reporter: Ramana Inukonda Nagaraj
Assignee: DrillCommitter
Priority: Critical
 Fix For: 0.4.0

 Attachments: DRILL-1180.patch


 Hit this while investigating tpch data variation between postgres and drill
 Simplified tpch14 to the following query:
 select 
  sum(case
 when l.L_RETURNFLAG like 'R%'
   then l.l_extendedprice * (1 - l.l_discount)
 else 0
   end)
 from lineitem l;
 returns bigint in the case of drill and double in the case of postgres. 
 Extendedprice and discount are double though.
 Drill:507996494
 Postgres:507996454.406699
 However when the case is removed and we use an equivalent filter instead 
 drill and postgres return the same results:
 select 
 sum(l.l_extendedprice * (1 - l.l_discount))
 from lineitem l where l.L_RETURNFLAG like 'R%';
 Postgres: 507996454.406699
 Drill: 5.0799645440669966E8
 This would explain the data mismatch for both TPCH14 and 8
 git.commit.id.abbrev=e5c2da0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: (was: DRILL-2343.2.patch.txt)

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.3.patch.txt


 Create JDBC driver that functions as proxy to Drill (or other) JDBC driver in 
 order to report calls made across the JDBC API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: DRILL-2343.3.patch.txt

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.3.patch.txt


 Create JDBC driver that functions as proxy to Drill (or other) JDBC driver in 
 order to report calls made across the JDBC API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-2972) Error message not clear when we try to select a field within a map without using an alias

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) resolved DRILL-2972.
---
Resolution: Fixed

Fixed by DRILL-2932 fix.

 Error message not clear when we try to select a field within a map without 
 using an alias
 -

 Key: DRILL-2972
 URL: https://issues.apache.org/jira/browse/DRILL-2972
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Reporter: Rahul Challapalli
Assignee: Jinfeng Ni
 Fix For: 1.2.0


 git.commit.id.abbrev=3b19076
 When I try to access a field within a map (without using an alias), below is 
 the error I get.
 {code}
 0: jdbc:drill:schema=dfs_eea> select map.rm from `data.json`;
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}
 There is no message from Drill and the chances of users hitting this scenario 
 are very high. So I am marking this as critical.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1580) Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish updated DRILL-1580:
---
Labels: no_verified_test  (was: )

 Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException
 --

 Key: DRILL-1580
 URL: https://issues.apache.org/jira/browse/DRILL-1580
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Jacques Nadeau
  Labels: no_verified_test
 Fix For: 0.7.0


  select count(*) from store_sales;
 Query failed: Failure while running fragment. Schema is currently null.  You 
 must call buildSchema(SelectionVectorMode) before this container can return a 
 schema. [289a6c1c-46e9-469d-8a3b-a23292f608f7]
 Error: exception while executing query: Failure while trying to get next 
 result batch. (state=,code=0)
 Stack trace attached. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-2249) Parquet reader hit IOBE when reading decimal type columns.

2015-05-08 Thread Rahul Challapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Challapalli closed DRILL-2249.


Verified and added the below test case

Functional/Passing/ctas/ctas_t18_DRILL-2249.sql


 Parquet reader hit IOBE when reading decimal type columns. 
 ---

 Key: DRILL-2249
 URL: https://issues.apache.org/jira/browse/DRILL-2249
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Parquet, Storage - Writer
Reporter: Jinfeng Ni
Assignee: Steven Phillips

 On today's master branch:
 select commit_id from sys.version;
 +------------------------------------------+
 |                commit_id                 |
 +------------------------------------------+
 | 4ed0a8d68ec5ef344fb54ff7c9d857f7f3f153aa |
 +------------------------------------------+
 If I create a parquet file containing two decimal(10,2) columns as:
 {code}
 create table my_dec_table as select *, cast(o_totalprice as decimal(10,2)) 
 dec1, cast(o_totalprice as decimal(10,2)) dec2 from cp.`tpch/orders.parquet`;
 +------------+---------------------------+
 |  Fragment  | Number of records written |
 +------------+---------------------------+
 | 0_0        | 15000                     |
 +------------+---------------------------+
 1 row selected (1.977 seconds)
 {code}
 However, when I try to read from the newly created parquet file, Drill reports 
 an IOBE in the parquet reader.
 {code}
 select * from my_dec_table;
 Query failed: Query stopped., index: 22531, length: 1 (expected: range(0, 
 22531)) [ ee35bc67-5c70-4677-bf7f-8db12e4a5491 on 10.250.0.8:31010 ]
 {code}
 The plan looks fine to me for this query:
 {code}
 explain plan for select * from my_dec_table;
 +------------+------------+
 |    text    |    json    |
 +------------+------------+
 | 00-00    Screen
 00-01  Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
 [path=file:/Users/jni/work/data/tpcds/my_dec_table]], 
 selectionRoot=/Users/jni/work/data/tpcds/my_dec_table, numFiles=1, 
 columns=[`*`]]])
 {code}
 Here is part of the stack trace:
 {code}
 java.lang.IndexOutOfBoundsException: index: 22531, length: 1 (expected: 
 range(0, 22531))
   at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156) 
 ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final]
   at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178) 
 ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final]
   at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:673) 
 ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:4.0.24.Final]
   at 
 org.apache.drill.exec.store.parquet.columnreaders.FixedByteAlignedReader$DateReader.readIntLittleEndian(FixedByteAlignedReader.java:144)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
   at 
 org.apache.drill.exec.store.parquet.columnreaders.FixedByteAlignedReader...
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-2995) RepeatedVector should not expose low level details

2015-05-08 Thread Hanifi Gunes (JIRA)
Hanifi Gunes created DRILL-2995:
---

 Summary: RepeatedVector should not expose low level details
 Key: DRILL-2995
 URL: https://issues.apache.org/jira/browse/DRILL-2995
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Data Types
Reporter: Hanifi Gunes
Assignee: Hanifi Gunes
Priority: Minor


Currently ParquetReader & Flatten are the two consumers of RVVs that need low-level 
access to offsets and data vectors. We should update the interface such 
that exposing low-level details won't be necessary.
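As a rough sketch only (the interface and method names below are assumptions for illustration, not the actual RepeatedValueVector API), a narrower surface could expose group boundaries and values without handing out the underlying vectors:
{code}
// Hypothetical accessor surface; consumers read group boundaries and values
// through these methods instead of touching the offsets vector and the data
// vector directly.
interface RepeatedAccessorSketch<V> {
  int getGroupCount();                 // number of repeated groups (rows)
  int getValueCount();                 // total number of underlying values
  int getGroupStart(int groupIndex);   // index of the first value in a group
  int getGroupLength(int groupIndex);  // number of values in a group
  V getValue(int valueIndex);          // a single underlying value
}
{code}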



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-2994) Incorrect error message when disconnecting from server (using direct connection to drillbit)

2015-05-08 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-2994:


 Summary: Incorrect error message when disconnecting from server 
(using direct connection to drillbit)
 Key: DRILL-2994
 URL: https://issues.apache.org/jira/browse/DRILL-2994
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Reporter: Parth Chandra
Assignee: Parth Chandra
Priority: Minor
 Fix For: 1.0.0


If connected to the server using a direct drillbit connection, the JDBC client 
(sqlline) prints an already-disconnected error when disconnecting.
This happens because an exception is thrown when the client tries to close the 
ZK cluster coordinator, which is null for a direct connection.
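A minimal, self-contained sketch of the guard this implies (the names below are assumptions, not the actual DrillClient code):
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// A direct drillbit connection never creates a ZK cluster coordinator, so
// close() must tolerate a null handle instead of surfacing an error.
final class ClientShutdownSketch implements AutoCloseable {
  private final AutoCloseable clusterCoordinator;  // null for direct connections
  private final AtomicBoolean closed = new AtomicBoolean(false);

  ClientShutdownSketch(AutoCloseable clusterCoordinator) {
    this.clusterCoordinator = clusterCoordinator;
  }

  @Override
  public void close() throws Exception {
    if (!closed.compareAndSet(false, true)) {
      return;  // second close is a no-op, not an "already disconnected" error
    }
    if (clusterCoordinator != null) {
      clusterCoordinator.close();  // only when the client actually owns a ZK coordinator
    }
    // ... release the remaining client resources here ...
  }
}
{code}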



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2996) ValueVectors shouldn't call reAlloc() in a while() loop

2015-05-08 Thread Chris Westin (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Westin updated DRILL-2996:

Assignee: Hanifi Gunes  (was: Daniel Barclay (Drill))

 ValueVectors shouldn't call reAlloc() in a while() loop
 ---

 Key: DRILL-2996
 URL: https://issues.apache.org/jira/browse/DRILL-2996
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Reporter: Chris Westin
Assignee: Hanifi Gunes

 Instead, reAlloc() should be changed to take a new minimum size as an 
 argument. This value is just the value currently used to determine the while loop's 
 termination. Then reAlloc() can figure out how much more to allocate once and 
 for all, instead of possibly reallocating and copying more than once, and it 
 can make sure that the size doesn't overflow (we've seen some instances of 
 the allocator being called with negative sizes).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1579) Count(*) on TPCDS JSON dataset (table time_dim) throws IndexOutOfBoundsException

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish updated DRILL-1579:
---
Labels: no_verified_test  (was: )

 Count(*) on TPCDS JSON dataset (table time_dim) throws 
 IndexOutOfBoundsException
 

 Key: DRILL-1579
 URL: https://issues.apache.org/jira/browse/DRILL-1579
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
Reporter: Abhishek Girish
Assignee: Hanifi Gunes
  Labels: no_verified_test
 Fix For: 0.7.0

 Attachments: drillbit.log, drillbit.log


  select count(*) from time_dim;
 +------------+
 |   EXPR$0   |
 +------------+
 | 18429      |
 Query failed: Screen received stop request sent. index: 16384, length: 4 
 (expected: range(0, 16384)) [b516b27b-937b-4cbe-9850-d9e823545b43]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 SF100
  select count(*) from time_dim;
 +------------+
 |   EXPR$0   |
 +------------+
 | 18429      |
 Query failed: Screen received stop request sent. index: 16384, length: 4 
 (expected: range(0, 16384)) [3c7ef863-3397-4a8b-a667-347886b2635a]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 Stack trace attached. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1579) Count(*) on TPCDS JSON dataset (table time_dim) throws IndexOutOfBoundsException

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1579.
--

The issue described here is resolved, for the query mentioned. Verified 
manually on Git.Commit.ID d12bee0 (May 7 build). Closing.

 Count(*) on TPCDS JSON dataset (table time_dim) throws 
 IndexOutOfBoundsException
 

 Key: DRILL-1579
 URL: https://issues.apache.org/jira/browse/DRILL-1579
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
Reporter: Abhishek Girish
Assignee: Hanifi Gunes
  Labels: no_verified_test
 Fix For: 0.7.0

 Attachments: drillbit.log, drillbit.log


  select count(*) from time_dim;
 +------------+
 |   EXPR$0   |
 +------------+
 | 18429      |
 Query failed: Screen received stop request sent. index: 16384, length: 4 
 (expected: range(0, 16384)) [b516b27b-937b-4cbe-9850-d9e823545b43]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 SF100
  select count(*) from time_dim;
 +------------+
 |   EXPR$0   |
 +------------+
 | 18429      |
 Query failed: Screen received stop request sent. index: 16384, length: 4 
 (expected: range(0, 16384)) [3c7ef863-3397-4a8b-a667-347886b2635a]
 java.lang.RuntimeException: java.sql.SQLException: Failure while trying to 
 get next result batch.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 Stack trace attached. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-2453) hbase queries in certain env result in NPE at FragmentWritableBatch.getEmptyBatchWithSchema()

2015-05-08 Thread Ramana Inukonda Nagaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Inukonda Nagaraj closed DRILL-2453.
--

Verified as working as of 7abd7cf4e3a6e67b4168d3d598ee01eb62e346e6

 hbase queries in certain env result in NPE at 
 FragmentWritableBatch.getEmptyBatchWithSchema()
 -

 Key: DRILL-2453
 URL: https://issues.apache.org/jira/browse/DRILL-2453
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - HBase
Affects Versions: 0.8.0
Reporter: Ramana Inukonda Nagaraj
Assignee: Venki Korukanti
Priority: Critical
 Fix For: 0.8.0

 Attachments: DRILL-2453-1.patch


 Sounds similar to DRILL-1932, 
 but seems to be from a different place.
 Stacktrace:
 {code}
 java.lang.NullPointerException: null
 at 
 org.apache.drill.exec.record.FragmentWritableBatch.getEmptyBatchWithSchema(FragmentWritableBatch.java:86)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.sendEmptyBatch(PartitionSenderRootExec.java:276)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.j
 ar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.innerNext(PartitionSenderRootExec.java:133)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.
 8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:57) 
 ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:121)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.work.WorkManager$RunnableWrapper.run(WorkManager.java:303)
  [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 2015-03-13 13:41:50,740 [2afcb471-b3a1-f719-a4ce-bbd75c36637a:frag:2:2] ERROR 
 o.a.drill.exec.ops.FragmentContext - Fragment Context received failure.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1942) Improve off-heap memory usage tracking

2015-05-08 Thread Chris Westin (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Westin updated DRILL-1942:

Attachment: DRILL-1942.patch

 Improve off-heap memory usage tracking
 --

 Key: DRILL-1942
 URL: https://issues.apache.org/jira/browse/DRILL-1942
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Reporter: Chris Westin
Assignee: Chris Westin
 Fix For: 1.0.0

 Attachments: DRILL-1942.patch


 We're using a lot more memory than we think we should. We may be leaking it, 
 or not releasing it as soon as we could. 
 This is a call to come up with some improved tracking so that we can get 
 statistics out about exactly where we're using it, and whether or not we can 
 release it earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2408) CTAS should not create empty folders when underlying query returns no results

2015-05-08 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535451#comment-14535451
 ] 

Rahul Challapalli commented on DRILL-2408:
--

Verified!

 CTAS should not create empty folders when underlying query returns no results
 -

 Key: DRILL-2408
 URL: https://issues.apache.org/jira/browse/DRILL-2408
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Writer
Affects Versions: 0.8.0
Reporter: Aman Sinha
Assignee: Aman Sinha
 Fix For: 1.0.0

 Attachments: DRILL-2408.1.patch.txt, DRILL-2408.2.patch.txt, 
 DRILL-2408.3.patch.txt, DRILL-2408.4.patch.txt, DRILL-2408.5.patch.txt, 
 DRILL-2408.6.patch.txt, DRILL-2408.7.patch.txt, DRILL-2408.8.patch.txt


 {noformat}
 0: jdbc:drill:schema=dfs> select c_integer, c_bigint, c_date, c_time, 
 c_varchar from j4 where c_bigint is null;
 +------------+------------+------------+------------+------------+
 | c_integer  |  c_bigint  |   c_date   |   c_time   | c_varchar  |
 +------------+------------+------------+------------+------------+
 +------------+------------+------------+------------+------------+
 No rows selected (0.126 seconds)
 0: jdbc:drill:schema=dfs> create table ctas_t6(c1,c2,c3,c4,c5) as select 
 c_integer, c_bigint, c_date, c_time, c_varchar from j4 where c_bigint is null;
 +------------+---------------------------+
 |  Fragment  | Number of records written |
 +------------+---------------------------+
 | 0_0        | 0                         |
 +------------+---------------------------+
 1 row selected (0.214 seconds)
 0: jdbc:drill:schema=dfs> select * from ctas_t6;
 Query failed: IndexOutOfBoundsException: Index: 0, Size: 0
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {noformat}
 parquet file was not created, but directory was:
 {noformat}
 [Mon Apr 06 09:03:41 
 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/ctas_t6 ] # pwd
 /mapr/vmarkman.cluster.com/drill/testdata/joins/ctas_t6
 [Mon Apr 06 09:03:45 
 root@/mapr/vmarkman.cluster.com/drill/testdata/joins/ctas_t6 ] # ls -l
 total 0
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: DRILL-2343.2.patch.txt

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.2.patch.txt


 Create JDBC driver that functions as proxy to Drill (or other) JDBC driver in 
 order to report calls made across the JDBC API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: (was: DRILL-2343.1.patch.txt)

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.2.patch.txt


 Create JDBC driver that functions as proxy to Drill (or other) JDBC driver in 
 order to report calls made across the JDBC API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1662) drillbit.sh stop should timeout

2015-05-08 Thread Ramana Inukonda Nagaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Inukonda Nagaraj updated DRILL-1662:
---
Attachment: patch.diff

Please find a patch with the fix. 

 drillbit.sh stop should timeout
 ---

 Key: DRILL-1662
 URL: https://issues.apache.org/jira/browse/DRILL-1662
 Project: Apache Drill
  Issue Type: Improvement
  Components: Tools, Build & Test
Reporter: Ramana Inukonda Nagaraj
Assignee: Ramana Inukonda Nagaraj
 Fix For: 1.0.0

 Attachments: patch.diff


 We need a timeout as part of the drillbit.sh stop.
 Can we have a configurable parameter with a default of 30 seconds, after 
 which the timeout should force-kill the drillbit?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2691) Source files with Windows line endings

2015-05-08 Thread Jason Altekruse (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Altekruse updated DRILL-2691:
---
Fix Version/s: (was: 1.0.0)
   1.1.0

 Source files with Windows line endings
 --

 Key: DRILL-2691
 URL: https://issues.apache.org/jira/browse/DRILL-2691
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Affects Versions: 0.6.0
Reporter: Deneche A. Hakim
Assignee: Jason Altekruse
 Fix For: 1.1.0

 Attachments: DRILL-2691.1.patch.txt


 The following files:
 {noformat}
 common/src/main/java/org/apache/drill/common/util/DrillStringUtils.java
 contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseCFAsJSONString.java
 {noformat}
 Have Windows line endings in them. Trying to apply a patch that contains 
 changes in one of those files will fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (DRILL-2089) Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place for doc.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14488333#comment-14488333
 ] 

Daniel Barclay (Drill) edited comment on DRILL-2089 at 5/8/15 9:21 PM:
---

DRILL-2089 move started in DRILL-2613 work:  old class DrillResultSet split 
into interface DrillResultSet and class DrillResultSetImpl.


was (Author: dsbos):
DRILL-2098 move started in DRILL-2613 work:  old class DrillResultSet split 
into interface DrillResultSet and class DrillResultSetImpl.

 Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place 
 for doc.
 --

 Key: DRILL-2089
 URL: https://issues.apache.org/jira/browse/DRILL-2089
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future


 The JDBC implementation classes and interfaces that are not part of Drill's 
 published JDBC interface should be moved out of package org.apache.drill.jdbc.
 This will support using Javadoc to produce end-user documentation of 
 Drill-specific JDBC API behavior (e.g., what's implemented or not, plus any 
 extensions), and keep clear what is part of Drill's published JDBC interface 
 vs. what is not (i.e., items that are technically accessible (public or 
 protected) but _not_ meant to be used by Drill users).
 Parts:
 1.  Move most classes and packages in {{org.apache.drill.jdbc}} (e.g., 
 {{DrillHandler}}, {{DrillConnectionImpl}}) to an implementation package 
 (e.g., {{org.apache.drill.jdbc.impl}}).
 2.  Split the current {{org.apache.drill.jdbc.Driver}} into a 
 published-interface portion still at {{org.apache.drill.jdbc.Driver}} plus an 
 implementation portion at {{org.apache.drill.jdbc.impl.DriverImpl}}.
 ({{org.apache.drill.jdbc.Driver}} would expose only the published interface 
 (e.g., its constructor and methods from {{java.sql.Driver}}).  
 {{org.apache.drill.jdbc.impl.DriverImpl}} would contain methods that are not 
 part of Drill's published JDBC interface (including methods that need to be 
 public or protected because of using Avatica but which shouldn't be used by 
 Drill users).)
 3.  As needed (for Drill extensions and for documentation), create 
 Drill-specific interfaces extending standard JDBC interfaces.
 For example, to create a place for documenting Drill-specific behavior of 
 methods defined in {{java.sql.Connection}}, create an interface, e.g., 
 {{org.apache.drill.jdbc.DrillConnection}}, that extends interface 
 {{java.sql.Connection}}, adjust the internal implementation class in 
 {{org.apache.drill.jdbc.impl}} to implement that Drill-specified interface 
 rather than directly implementing {{java.sql.Connection}}, and then add a 
 method declaration with the Drill-specific documentation to the 
 Drill-specific subinterface.
 4.  In Drill-specific interfaces created per part 3, _consider_ using 
 co-variant return types to narrow return types to the Drill-specific 
 interfaces.
 For example:  {{java.sql.Connection}}'s {{createStatement()}} method returns 
 type {{java.sql.Statement}}.  Drill's implementation of that method will 
 always return a Drill-specific implementation of {{java.sql.Statement}}, 
 which will also be an implementation of the Drill-specific interface that 
 extends {{java.sql.Statement}}.  Therefore, the Drill-specific {{Connection}} 
 interface can re-declare {{createStatement()}} as returning the 
 Drill-specific {{Statement}} interface type (because the Drill-specific 
 {{Statement}} type is a subtype of {{java.sql.Statement}}).
 That would likely make it easier for client code to access any Drill 
 extension methods:  Although the client might have to cast or do something 
 else special to get to the first Drill-specific interface or class, it could 
 traverse to other objects (e.g., from connection to statement, from statement 
 to result set, etc.) still using Drill-specific types, not needing casts or 
 whatever as each step.
 Note:  Steps 1 and 2 have already been prototyped.
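 As a rough sketch of parts 3 and 4 above (the interface names follow the proposal; the bodies are placeholders, not the prototyped code), the Drill-specific subinterfaces and the co-variant narrowing could look like:
 {code}
 import java.sql.Connection;
 import java.sql.SQLException;
 import java.sql.Statement;

 // Drill-specific subinterfaces carry the documentation and narrow the return
 // type, so client code can keep Drill types while traversing the JDBC objects.
 interface DrillStatement extends Statement {
   // Drill-specific documentation and extension methods would live here.
 }

 interface DrillConnection extends Connection {
   @Override
   DrillStatement createStatement() throws SQLException;  // co-variant return type
 }
 {code}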



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1818) Parquet files generated by Drill ignore field names when nested elements are queried

2015-05-08 Thread Rahul Challapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Challapalli closed DRILL-1818.


Verified and added the below testcase

Functional/Passing/parquet_storage/parquet_generic/parquet_DRILL-1818.q

 Parquet files generated by Drill ignore field names when nested elements are 
 queried
 

 Key: DRILL-1818
 URL: https://issues.apache.org/jira/browse/DRILL-1818
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Writer
Reporter: Neeraja
Assignee: Steven Phillips
Priority: Blocker
 Fix For: 0.7.0

 Attachments: 0_0_0.parquet, DRILL-1818.patch


 I observed this with this parquet file and more comprehensive testing might 
 be needed here. The issue is that Drill seems to simply ignore field names at 
 the leaf level and accesses data in a positional fashion.
 Below is the repro.
 1. Generate  a parquet file using Drill. Input is the JSON doc below
 create  table dfs.tmp.sampleparquet as (select trans_id, cast(`date` as date) 
 transdate,cast(`time` as time) transtime, cast(amount as double) 
 amount,`user_info`,`marketing_info`, `trans_info` from 
 dfs.`/Users/nrentachintala/Downloads/sample.json` )
 2. Now do queries. 
 Note in query below, there is no field name called 'keywords' in trans_info, 
 but data is just positionally returned (the data returned from prod_id 
 column).
 0: jdbc:drill:zk=local> select t.`trans_info`.keywords from 
 dfs.tmp.sampleparquet t where t.`trans_info`.keywords is not null;
 +------------+
 |   EXPR$0   |
 +------------+
 | [16]       |
 | []         |
 | [293,90]   |
 | [173,18,121,84,115,226,464,525,35,11,94,45] |
 | [311,29,5,41] |
 0: jdbc:drill:zk=local> select t.`marketing_info`.keywords from 
 dfs.tmp.sampleparquet t;
 Note in the query below, it is trying to return the first element in 
 marketing_Info which is camp_id which is of int type for keywords columns. 
 But keywords schema is string, so it throws error with type mismatch.
 Query failed: Query failed: Failure while running fragment., You tried to 
 write a VarChar type when you are using a ValueWriter of type 
 NullableBigIntWriterImpl. [ c3761403-b8c5-43c1-8e90-2c4918d1f85c on 
 10.0.0.20:31010 ]
 [ c3761403-b8c5-43c1-8e90-2c4918d1f85c on 10.0.0.20:31010 ]
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 0: jdbc:drill:zk=local> select 
 t.`marketing_info`.`camp_id`,t.`marketing_info`.keywords from 
 dfs.tmp.sampleparquet t;
 +------------+------------+
 |   EXPR$0   |   EXPR$1   |
 +------------+------------+
 | 4          | [go,to,thing,watch,made,laughing,might,pay,in,your,hold] |
 | 6          | [pronounce,tree,instead,games,sigh] |
 | 17         | [] |
 | 17         | [it's]   |
 | 8          | [fallout] |
 +------------+------------+
 Sample.json is below
 {"trans_id":0,"date":"2013-07-26","time":"04:56:59","amount":80.5,"user_info":{"cust_id":28,"device":"IOS5","state":"mt"},"marketing_info":{"camp_id":4,"keywords":["go","to","thing","watch","made","laughing","might","pay","in","your","hold"]},"trans_info":{"prod_id":[16],"purch_flag":false}}
 {"trans_id":1,"date":"2013-05-16","time":"07:31:54","amount":100.40,"user_info":{"cust_id":86623,"device":"AOS4.2","state":"mi"},"marketing_info":{"camp_id":6,"keywords":["pronounce","tree","instead","games","sigh"]},"trans_info":{"prod_id":[],"purch_flag":false}}
 {"trans_id":2,"date":"2013-06-09","time":"15:31:45","amount":20.25,"user_info":{"cust_id":11,"device":"IOS5","state":"la"},"marketing_info":{"camp_id":17,"keywords":[]},"trans_info":{"prod_id":[293,90],"purch_flag":true}}
 {"trans_id":3,"date":"2013-07-19","time":"11:24:22","amount":500.75,"user_info":{"cust_id":666,"device":"IOS5","state":"nj"},"marketing_info":{"camp_id":17,"keywords":["it's"]},"trans_info":{"prod_id":[173,18,121,84,115,226,464,525,35,11,94,45],"purch_flag":false}}
 {"trans_id":4,"date":"2013-07-21","time":"08:01:13","amount":34.20,"user_info":{"cust_id":999,"device":"IOS7","state":"ct"},"marketing_info":{"camp_id":8,"keywords":["fallout"]},"trans_info":{"prod_id":[311,29,5,41],"purch_flag":false}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2570) Broken JDBC-All Jar packaging can cause missing XML classes

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535408#comment-14535408
 ] 

Daniel Barclay (Drill) commented on DRILL-2570:
---

[~pwong-mapr]  Yes, that change seems to solve the problem.  (The 
META-INF/services/ tree no longer exists in 
exec/jdbc-all/target/drill-jdbc-all-1.0.0-SNAPSHOT.jar.)

[~parthc] Can you merge this patch in?




 Broken JDBC-All Jar packaging can cause missing XML classes
 ---

 Key: DRILL-2570
 URL: https://issues.apache.org/jira/browse/DRILL-2570
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: 1.0.0

 Attachments: DRILL-2570.1.patch.txt, ElementTraversal.rtf, 
 xerces-error.rtf


 [Transcribed from other medium:]
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
 When starting Spotfire Server using the JDBC driver, an error (see attachment 
 xerces-error) is produced.
 This error is then resolved by adding the jars/3rdparty/xercesImpl-2.11.0.jar 
 from the drillbit package to the classpath for the JDBC client driver.
 Then the following error is observed. See attachment (ElementTraversal).
 This requires adding jars/3rdparty/xml-apis-1.4.01.jar to the classpath from 
 the drillbit package.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
 The issue is that Tomcat and Spotfire Server do not show any errors and start 
 up fine without the Drill JDBC driver. Once the Drill driver is added, the 
 application server fails to start with the errors shown.
 Adding the 2 jars to the classpath then resolves the issue.
 I have not looked at all the JDBC driver classes, but it is important to note 
 that the error occurs when the JDBC driver is added and resolved by adding 2 
 jars from the drillbit package.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
  I do not see Drill classes in the stack trace. This seems to be a Tomcat 
  configuration issue.
 I suspect another possibility: that the Drill JDBC-all Jar file contains a 
 stray reference to the unfound class (SAXParserFactoryImpl) in some file in 
 META-INF/services (left over from some package whose classes we either 
 excluded or renamed (with shading)).
 Xxx, Yyy: Can you try this?:
 (Temporarily) removing the added XML Jar files from the class path to 
 re-confirm the problem.
 Move the Drill JDBC-all Jar file to be last on the class path (and remove 
 ).
 Report whether the symptoms change.
 Thanks.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2512) Client connecting to same node through zookeeper

2015-05-08 Thread Ramana Inukonda Nagaraj (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535413#comment-14535413
 ] 

Ramana Inukonda Nagaraj commented on DRILL-2512:


Verified as of commit id 7abd7cf4e3a6e67b4168d3d598ee01eb62e346e6 that a new 
drillbit is selected at random on each connection. 
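For illustration only (this is not the actual ZKClusterCoordinator/DrillClient code), random selection over the registered endpoints amounts to something like:
{code}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Choose a random endpoint from the drillbits currently registered in
// ZooKeeper instead of always taking the first entry of the list.
final class EndpointChooserSketch {
  static <T> T pickRandom(List<T> registeredDrillbits) {
    if (registeredDrillbits.isEmpty()) {
      throw new IllegalStateException("no drillbits registered in ZooKeeper");
    }
    int index = ThreadLocalRandom.current().nextInt(registeredDrillbits.size());
    return registeredDrillbits.get(index);
  }
}
{code}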


 Client connecting to same node through zookeeper
 

 Key: DRILL-2512
 URL: https://issues.apache.org/jira/browse/DRILL-2512
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Ramana Inukonda Nagaraj
Assignee: Jason Altekruse
Priority: Critical
 Fix For: 0.9.0

 Attachments: DRILL-2512.1.patch.txt


 When connecting to multiple drillbits and using ZK in the connection string, 
 it looks like Drill is connecting to the same drillbit every time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-2512) Client connecting to same node through zookeeper

2015-05-08 Thread Ramana Inukonda Nagaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Inukonda Nagaraj closed DRILL-2512.
--
Assignee: Ramana Inukonda Nagaraj  (was: Jason Altekruse)

 Client connecting to same node through zookeeper
 

 Key: DRILL-2512
 URL: https://issues.apache.org/jira/browse/DRILL-2512
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Ramana Inukonda Nagaraj
Assignee: Ramana Inukonda Nagaraj
Priority: Critical
 Fix For: 0.9.0

 Attachments: DRILL-2512.1.patch.txt


 When connecting to multiple drillbits and using ZK in the connection string, 
 it looks like Drill is connecting to the same drillbit every time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1580) Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException

2015-05-08 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish closed DRILL-1580.
--

 Count(*) on TPCDS JSON dataset (table store_sales) throws NullPointerException
 --

 Key: DRILL-1580
 URL: https://issues.apache.org/jira/browse/DRILL-1580
 Project: Apache Drill
  Issue Type: Bug
Reporter: Abhishek Girish
Assignee: Jacques Nadeau
  Labels: no_verified_test
 Fix For: 0.7.0


  select count(*) from store_sales;
 Query failed: Failure while running fragment. Schema is currently null.  You 
 must call buildSchema(SelectionVectorMode) before this container can return a 
 schema. [289a6c1c-46e9-469d-8a3b-a23292f608f7]
 Error: exception while executing query: Failure while trying to get next 
 result batch. (state=,code=0)
 Stack trace attached. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2089) Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place for doc.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535569#comment-14535569
 ] 

Daniel Barclay (Drill) commented on DRILL-2089:
---

DRILL-2089 move incremented in DRILL-2961 work: old class 
...jdbc.DrillStatement split into interface ...jdbc.DrillStatement and class 
...jdbc.impl.DrillStatementImpl.


 Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place 
 for doc.
 --

 Key: DRILL-2089
 URL: https://issues.apache.org/jira/browse/DRILL-2089
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future


 The JDBC implementation classes and interfaces that are not part of Drill's 
 published JDBC interface should be moved out of package org.apache.drill.jdbc.
 This will support using Javadoc to produce end-user documentation of 
 Drill-specific JDBC API behavior (e.g., what's implemented or not, plus any 
 extensions), and keep clear what is part of Drill's published JDBC interface 
 vs. what is not (i.e., items that are technically accessible (public or 
 protected) but _not_ meant to be used by Drill users).
 Parts:
 1.  Move most classes and packages in {{org.apache.drill.jdbc}} (e.g., 
 {{DrillHandler}}, {{DrillConnectionImpl}}) to an implementation package 
 (e.g., {{org.apache.drill.jdbc.impl}}).
 2.  Split the current {{org.apache.drill.jdbc.Driver}} into a 
 published-interface portion still at {{org.apache.drill.jdbc.Driver}} plus an 
 implementation portion at {{org.apache.drill.jdbc.impl.DriverImpl}}.
 ({{org.apache.drill.jdbc.Driver}} would expose only the published interface 
 (e.g., its constructor and methods from {{java.sql.Driver}}).  
 {{org.apache.drill.jdbc.impl.DriverImpl}} would contain methods that are not 
 part of Drill's published JDBC interface (including methods that need to be 
 public or protected because of using Avatica but which shouldn't be used by 
 Drill users).)
 3.  As needed (for Drill extensions and for documentation), create 
 Drill-specific interfaces extending standard JDBC interfaces.
 For example, to create a place for documenting Drill-specific behavior of 
 methods defined in {{java.sql.Connection}}, create an interface, e.g., 
 {{org.apache.drill.jdbc.DrillConnection}}, that extends interface 
 {{java.sql.Connection}}, adjust the internal implementation class in 
 {{org.apache.drill.jdbc.impl}} to implement that Drill-specified interface 
 rather than directly implementing {{java.sql.Connection}}, and then add a 
 method declaration with the Drill-specific documentation to the 
 Drill-specific subinterface.
 4.  In Drill-specific interfaces created per part 3, _consider_ using 
 co-variant return types to narrow return types to the Drill-specific 
 interfaces.
 For example:  {{java.sql.Connection}}'s {{createStatement()}} method returns 
 type {{java.sql.Statement}}.  Drill's implementation of that method will 
 always return a Drill-specific implementation of {{java.sql.Statement}}, 
 which will also be an implementation of the Drill-specific interface that 
 extends {{java.sql.Statement}}.  Therefore, the Drill-specific {{Connection}} 
 interface can re-declare {{createStatement()}} as returning the 
 Drill-specific {{Statement}} interface type (because the Drill-specific 
 {{Statement}} type is a subtype of {{java.sql.Statement}}).
 That would likely make it easier for client code to access any Drill 
 extension methods:  Although the client might have to cast or do something 
 else special to get to the first Drill-specific interface or class, it could 
 traverse to other objects (e.g., from connection to statement, from statement 
 to result set, etc.) still using Drill-specific types, not needing casts or 
 whatever as each step.
 Note:  Steps 1 and 2 have already been prototyped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (DRILL-2089) Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place for doc.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14488333#comment-14488333
 ] 

Daniel Barclay (Drill) edited comment on DRILL-2089 at 5/8/15 9:27 PM:
---

DRILL-2089 move was started in DRILL-2613 work:  old class DrillResultSet split 
into interface DrillResultSet and class DrillResultSetImpl.


was (Author: dsbos):
DRILL-2089 move started in DRILL-2613 work:  old class DrillResultSet split 
into interface DrillResultSet and class DrillResultSetImpl.

 Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place 
 for doc.
 --

 Key: DRILL-2089
 URL: https://issues.apache.org/jira/browse/DRILL-2089
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future


 The JDBC implementation classes and interfaces that are not part of Drill's 
 published JDBC interface should be moved out of package org.apache.drill.jdbc.
 This will support using Javadoc to produce end-user documentation of 
 Drill-specific JDBC API behavior (e.g., what's implemented or not, plus any 
 extensions), and keep clear what is part of Drill's published JDBC interface 
 vs. what is not (i.e., items that are technically accessible (public or 
 protected) but _not_ meant to be used by Drill users).
 Parts:
 1.  Move most classes and packages in {{org.apache.drill.jdbc}} (e.g., 
 {{DrillHandler}}, {{DrillConnectionImpl}}) to an implementation package 
 (e.g., {{org.apache.drill.jdbc.impl}}).
 2.  Split the current {{org.apache.drill.jdbc.Driver}} into a 
 published-interface portion still at {{org.apache.drill.jdbc.Driver}} plus an 
 implementation portion at {{org.apache.drill.jdbc.impl.DriverImpl}}.
 ({{org.apache.drill.jdbc.Driver}} would expose only the published interface 
 (e.g., its constructor and methods from {{java.sql.Driver}}).  
 {{org.apache.drill.jdbc.impl.DriverImpl}} would contain methods that are not 
 part of Drill's published JDBC interface (including methods that need to be 
 public or protected because of using Avatica but which shouldn't be used by 
 Drill users).)
 3.  As needed (for Drill extensions and for documentation), create 
 Drill-specific interfaces extending standard JDBC interfaces.
 For example, to create a place for documenting Drill-specific behavior of 
 methods defined in {{java.sql.Connection}}, create an interface, e.g., 
 {{org.apache.drill.jdbc.DrillConnection}}, that extends interface 
 {{java.sql.Connection}}, adjust the internal implementation class in 
 {{org.apache.drill.jdbc.impl}} to implement that Drill-specified interface 
 rather than directly implementing {{java.sql.Connection}}, and then add a 
 method declaration with the Drill-specific documentation to the 
 Drill-specific subinterface.
 4.  In Drill-specific interfaces created per part 3, _consider_ using 
 co-variant return types to narrow return types to the Drill-specific 
 interfaces.
 For example:  {{java.sql.Connection}}'s {{createStatement()}} method returns 
 type {{java.sql.Statement}}.  Drill's implementation of that method will 
 always return a Drill-specific implementation of {{java.sql.Statement}}, 
 which will also be an implementation of the Drill-specific interface that 
 extends {{java.sql.Statement}}.  Therefore, the Drill-specific {{Connection}} 
 interface can re-declare {{createStatement()}} as returning the 
 Drill-specific {{Statement}} interface type (because the Drill-specific 
 {{Statement}} type is a subtype of {{java.sql.Statement}}).
 That would likely make it easier for client code to access any Drill 
 extension methods:  Although the client might have to cast or do something 
 else special to get to the first Drill-specific interface or class, it could 
 traverse to other objects (e.g., from connection to statement, from statement 
 to result set, etc.) still using Drill-specific types, not needing casts or 
 whatever as each step.
 Note:  Steps 1 and 2 have already been prototyped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (DRILL-2089) Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place for doc.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535569#comment-14535569
 ] 

Daniel Barclay (Drill) edited comment on DRILL-2089 at 5/8/15 9:27 PM:
---

DRILL-2089 move was implemented a bit more in DRILL-2961 work: old class 
...jdbc.DrillStatement split into interface ...jdbc.DrillStatement and class 
...jdbc.impl.DrillStatementImpl.



was (Author: dsbos):
DRILL-2089 move was incremented in DRILL-2961 work: old class 
...jdbc.DrillStatement split into interface ...jdbc.DrillStatement and class 
...jdbc.impl.DrillStatementImpl.


 Split JDBC implementation out of org.apache.drill.jdbc, so that pkg. is place 
 for doc.
 --

 Key: DRILL-2089
 URL: https://issues.apache.org/jira/browse/DRILL-2089
 Project: Apache Drill
  Issue Type: Improvement
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future


 The JDBC implementation classes and interfaces that are not part of Drill's 
 published JDBC interface should be moved out of package org.apache.drill.jdbc.
 This will support using Javadoc to produce end-user documentation of 
 Drill-specific JDBC API behavior (e.g., what's implemented or not, plus any 
 extensions), and keep clear what is part of Drill's published JDBC interface 
 vs. what is not (i.e., items that are technically accessible (public or 
 protected) but _not_ meant to be used by Drill users).
 Parts:
 1.  Move most classes and packages in {{org.apache.drill.jdbc}} (e.g., 
 {{DrillHandler}}, {{DrillConnectionImpl}}) to an implementation package 
 (e.g., {{org.apache.drill.jdbc.impl}}).
 2.  Split the current {{org.apache.drill.jdbc.Driver}} into a 
 published-interface portion still at {{org.apache.drill.jdbc.Driver}} plus an 
 implementation portion at {{org.apache.drill.jdbc.impl.DriverImpl}}.
 ({{org.apache.drill.jdbc.Driver}} would expose only the published interface 
 (e.g., its constructor and methods from {{java.sql.Driver}}).  
 {{org.apache.drill.jdbc.impl.DriverImpl}} would contain methods that are not 
 part of Drill's published JDBC interface (including methods that need to be 
 public or protected because of using Avatica but which shouldn't be used by 
 Drill users).)
 3.  As needed (for Drill extensions and for documentation), create 
 Drill-specific interfaces extending standard JDBC interfaces.
 For example, to create a place for documenting Drill-specific behavior of 
 methods defined in {{java.sql.Connection}}, create an interface, e.g., 
 {{org.apache.drill.jdbc.DrillConnection}}, that extends interface 
 {{java.sql.Connection}}, adjust the internal implementation class in 
 {{org.apache.drill.jdbc.impl}} to implement that Drill-specified interface 
 rather than directly implementing {{java.sql.Connection}}, and then add a 
 method declaration with the Drill-specific documentation to the 
 Drill-specific subinterface.
 4.  In Drill-specific interfaces created per part 3, _consider_ using 
 co-variant return types to narrow return types to the Drill-specific 
 interfaces.
 For example:  {{java.sql.Connection}}'s {{createStatement()}} method returns 
 type {{java.sql.Statement}}.  Drill's implementation of that method will 
 always return a Drill-specific implementation of {{java.sql.Statement}}, 
 which will also be an implementation of the Drill-specific interface that 
 extends {{java.sql.Statement}}.  Therefore, the Drill-specific {{Connection}} 
 interface can re-declare {{createStatement()}} as returning the 
 Drill-specific {{Statement}} interface type (because the Drill-specific 
 {{Statement}} type is a subtype of {{java.sql.Statement}}).
 That would likely make it easier for client code to access any Drill 
 extension methods:  Although the client might have to cast or do something 
 else special to get to the first Drill-specific interface or class, it could 
 traverse to other objects (e.g., from connection to statement, from statement 
 to result set, etc.) still using Drill-specific types, not needing casts or 
 whatever as each step.
 Note:  Steps 1 and 2 have already been prototyped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2046) Merge join inconsistent results

2015-05-08 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-2046:
--
Fix Version/s: (was: 1.0.0)
   1.1.0

 Merge join inconsistent results
 ---

 Key: DRILL-2046
 URL: https://issues.apache.org/jira/browse/DRILL-2046
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Reporter: Rahul Challapalli
Assignee: Aman Sinha
Priority: Critical
 Fix For: 1.1.0

 Attachments: widestrings_small.parquet


 git.commit.id.abbrev=a418af1
 The queries below should return the same number of records. However, the counts 
 do not match when we use merge join.
 {code}
 alter session set `planner.enable_hashjoin` = false;
 select ws1.* from widestrings_small ws1 INNER JOIN widestrings_small ws2 on 
 ws1.str_fixed_null_empty=ws2.str_var_null_empty where 
 ws1.str_fixed_null_empty is not null and ws2.str_var_null_empty is not null 
 and ws1.tinyint_var  120;
 6 records
 select count(*) from widestrings_small ws1 INNER JOIN widestrings_small ws2 
 on ws1.str_fixed_null_empty=ws2.str_var_null_empty where 
 ws1.str_fixed_null_empty is not null and ws2.str_var_null_empty is not null 
 and ws1.tinyint_var  120;
 60 records
 select count(ws1.str_var) from widestrings_small ws1 INNER JOIN 
 widestrings_small ws2 on ws1.str_fixed_null_empty=ws2.str_var_null_empty 
 where ws1.str_fixed_null_empty is not null and ws2.str_var_null_empty is not 
 null and ws1.tinyint_var  120;
 4 records
 {code}
 For hash join all the above queries result in 60 records. I attached the 
 dataset used. Let me know if you have any questions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1942) Improve off-heap memory usage tracking

2015-05-08 Thread Chris Westin (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Westin updated DRILL-1942:

Attachment: DRILL-1942.2.patch.txt

After some jiggery-pokery, I managed to get review board to accept this patch: 
https://reviews.apache.org/r/34004/ .

 Improve off-heap memory usage tracking
 --

 Key: DRILL-1942
 URL: https://issues.apache.org/jira/browse/DRILL-1942
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Reporter: Chris Westin
Assignee: Chris Westin
 Fix For: 1.0.0

 Attachments: DRILL-1942.1.patch.txt, DRILL-1942.2.patch.txt


 We're using a lot more memory than we think we should. We may be leaking it, 
 or not releasing it as soon as we could. 
 This is a call to come up with some improved tracking so that we can get 
 statistics out about exactly where we're using it, and whether or not we can 
 release it earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2974) Make OutOfMemoryException an unchecked exception and remove OutOfMemoryRuntimeException

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2974:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Make OutOfMemoryException an unchecked exception and remove 
 OutOfMemoryRuntimeException
 ---

 Key: DRILL-2974
 URL: https://issues.apache.org/jira/browse/DRILL-2974
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Flow
Reporter: Deneche A. Hakim
Assignee: Sudheesh Katkam
 Fix For: 1.1.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2915) Regression: Mondrian query5614.q - Query failed: SYSTEM ERROR: This query cannot be planned possibly due to either a cartesian join or an inequality join

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2915:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Regression: Mondrian query5614.q - Query failed: SYSTEM ERROR: This query 
 cannot be planned possibly due to either a cartesian join or an inequality 
 join
 -

 Key: DRILL-2915
 URL: https://issues.apache.org/jira/browse/DRILL-2915
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Affects Versions: 0.9.0
Reporter: Chun Chang
Assignee: Aman Sinha
Priority: Critical
 Fix For: 1.1.0

 Attachments: mondrian_query5614.explain


 #Wed Apr 29 14:39:22 EDT 2015
 git.commit.id.abbrev=f5b0f49
 The following mondrian query fails now.
 {code}
 SELECT store.store_state   AS c0, 
Count(DISTINCT sales_fact_1997.customer_id) AS m0 
 FROM   store AS store, 
sales_fact_1997 AS sales_fact_1997, 
time_by_day AS time_by_day, 
product_class AS product_class, 
product AS product 
 WHERE  sales_fact_1997.store_id = store.store_id 
AND store.store_state = 'CA' 
AND sales_fact_1997.time_id = time_by_day.time_id 
AND sales_fact_1997.product_id = product.product_id 
AND product.product_class_id = product_class.product_class_id 
AND ( ( product_class.product_family = 'Food' 
AND time_by_day.quarter = 'Q1' 
AND time_by_day.the_year = 1997 ) 
   OR ( product_class.product_family = 'Drink' 
AND time_by_day.month_of_year = 4 
AND time_by_day.quarter = 'Q2' 
AND time_by_day.the_year = 1997 ) ) 
 GROUP  BY store.store_state; 
 {code}
 postgres:
 {code}
 foodmart=# select store.store_state as c0, count(distinct 
 sales_fact_1997.customer_id) as m0 from store as store, sales_fact_1997 as 
 sales_fact_1997, time_by_day as time_by_day, product_class as product_class, 
 product as product where sales_fact_1997.store_id = store.store_id and 
 store.store_state = 'CA' and sales_fact_1997.time_id = time_by_day.time_id 
 and sales_fact_1997.product_id = product.product_id and 
 product.product_class_id = product_class.product_class_id and 
 ((product_class.product_family = 'Food' and time_by_day.quarter = 'Q1' and 
 time_by_day.the_year = 1997) or (product_class.product_family = 'Drink' and 
 time_by_day.month_of_year = 4 and time_by_day.quarter = 'Q2' and 
 time_by_day.the_year = 1997)) group by store.store_state;
  c0 |  m0
 +--
  CA | 1175
 (1 row)
 {code}
 drill failed
 {code}
 0: jdbc:drill:schema=dfs.drillTestDirAdvanced select store.store_state as 
 c0, count(distinct sales_fact_1997.customer_id) as m0 from store as store, 
 sales_fact_1997 as sales_fact_1997, time_by_day as time_by_day, product_class 
 as product_class, product as product where sales_fact_1997.store_id = 
 store.store_id and store.store_state = 'CA' and sales_fact_1997.time_id = 
 time_by_day.time_id and sales_fact_1997.product_id = product.product_id and 
 product.product_class_id = product_class.product_class_id and 
 ((product_class.product_family = 'Food' and time_by_day.quarter = 'Q1' and 
 time_by_day.the_year = 1997) or (product_class.product_family = 'Drink' and 
 time_by_day.month_of_year = 4 and time_by_day.quarter = 'Q2' and 
 time_by_day.the_year = 1997)) group by store.store_state;
 Query failed: SYSTEM ERROR: This query cannot be planned possibly due to 
 either a cartesian join or an inequality join
 [3eb99963-92aa-4129-844f-fe43839537b9 on qa-node119.qa.lab:31010]
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2903) Update TestDrillbitResilience tests so that they correctly manage canceled queries that get to complete too quickly.

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2903:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Update TestDrillbitResilience tests so that they correctly manage canceled 
 queries that get to complete too quickly.
 

 Key: DRILL-2903
 URL: https://issues.apache.org/jira/browse/DRILL-2903
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Reporter: Jacques Nadeau
Assignee: Sudheesh Katkam
Priority: Critical
 Fix For: 1.1.0


 Due to timing issues, this test currently appears to flap.  We need to update 
 it so that this isn't an issue and then reenable it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1673) Flatten function can not work well with nested arrays.

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-1673:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Flatten function can not work well with nested arrays.
 --

 Key: DRILL-1673
 URL: https://issues.apache.org/jira/browse/DRILL-1673
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 0.7.0
 Environment: 0.7.0
Reporter: Hao Zhu
Assignee: Jason Altekruse
Priority: Blocker
 Fix For: 1.1.0

 Attachments: DRILL-1673.patch, error.log


 The flatten function fails to scan nested arrays, for example something like 
 [[ ]].
 The only difference between the JSON files in the 2 tests below is 
 "num":[1,2,3]
 vs.
 "num":[[1,2,3]]
 ==Test 1 (Works well):==
 file:
 {code}
 {"fixed_column":"abc", "list_column":[{"id1":1,"name":"zhu", "num": 
 [1,2,3]}, {"id1":2,"name":"hao", "num": [4,5,6]} ]}
 {code}
 SQL:
 {code}
 0: jdbc:drill:zk=local select t.`fixed_column` as fixed_column, 
 flatten(t.`list_column`)  from 
 dfs.root.`/Users/hzu/Documents/sharefolder/hp/n2.json` as t;
 +--++
 | fixed_column |   EXPR$1   |
 +--++
 | abc  | {id1:1,name:zhu,num:[1,2,3]} |
 | abc  | {id1:2,name:hao,num:[4,5,6]} |
 +--++
 2 rows selected (0.154 seconds)
 {code}
 ==Test 2 (Failed):==
 file:
 {code}
 {"fixed_column":"abc", "list_column":[{"id1":1,"name":"zhu", "num": 
 [[1,2,3]]}, {"id1":2,"name":"hao", "num": [[4,5,6]]} ]}
 {code}
 SQL:
 {code}
 0: jdbc:drill:zk=local  select t.`fixed_column` as fixed_column, 
 flatten(t.`list_column`)  from 
 dfs.root.`/Users/hzu/Documents/sharefolder/hp/n3.json` as t;
 +--++
 | fixed_column |   EXPR$1   |
 +--++
 Query failed: Failure while running fragment.[ 
 df28347b-fac1-497d-b9c5-a313ba77aa4d on 10.250.0.115:31010 ]
   (java.lang.UnsupportedOperationException) 
 
 org.apache.drill.exec.vector.complex.RepeatedListVector$RepeatedListTransferPair.splitAndTransfer():339
 
 org.apache.drill.exec.vector.complex.RepeatedMapVector$SingleMapTransferPair.splitAndTransfer():305
 org.apache.drill.exec.test.generated.FlattenerGen22.flattenRecords():93
 
 org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.doWork():152
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():89
 
 org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.innerNext():118
 org.apache.drill.exec.record.AbstractRecordBatch.next():106
 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():124
 org.apache.drill.exec.record.AbstractRecordBatch.next():86
 org.apache.drill.exec.record.AbstractRecordBatch.next():76
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():52
 
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
 org.apache.drill.exec.record.AbstractRecordBatch.next():106
 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():124
 org.apache.drill.exec.physical.impl.BaseRootExec.next():67
 
 org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():122
 org.apache.drill.exec.physical.impl.BaseRootExec.next():57
 org.apache.drill.exec.work.fragment.FragmentExecutor.run():105
 org.apache.drill.exec.work.WorkManager$RunnableWrapper.run():249
 ...():0
 java.lang.RuntimeException: java.sql.SQLException: Failure while executing 
 query.
   at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
   at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
   at sqlline.SqlLine.print(SqlLine.java:1809)
   at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
   at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
   at sqlline.SqlLine.dispatch(SqlLine.java:889)
   at sqlline.SqlLine.begin(SqlLine.java:763)
   at sqlline.SqlLine.start(SqlLine.java:498)
   at sqlline.SqlLine.main(SqlLine.java:460)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2677) Query does not go beyond 4096 lines in small JSON files

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2677:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Query does not go beyond 4096 lines in small JSON files
 ---

 Key: DRILL-2677
 URL: https://issues.apache.org/jira/browse/DRILL-2677
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - JSON
 Environment: drill 0.8 official build
Reporter: Alexander Reshetov
Assignee: Jason Altekruse
 Fix For: 1.1.0

 Attachments: dataset_4095_and_1.json, dataset_4096_and_1.json, 
 dataset_sample.json.gz.part-aa, dataset_sample.json.gz.part-ab, 
 dataset_sample.json.gz.part-ac, dataset_sample.json.gz.part-ad, 
 dataset_sample.json.gz.part-ae, dataset_sample.json.gz.part-af


 Hello,
 I'm trying to execute the following query:
 {code}
 select * from (select source.pck, source.`timestamp`, 
 flatten(source.HostUpdateTypeNW.Transfers) as entry from 
 dfs.`/mnt/data/dataset_4095_and_1.json` as source) as parsed;
 {code}
 It works as expected and I get this result:
 {code}
 ++++
 |pck | timestamp  |   entry|
 ++++
 | 3547   | 1419807470286356 | 
 {TransferingPurpose:8,TransferingImpact:88,TransferingKind:8,TransferingTime:8,PackageOrigSenderID:8,TransferingID:8,TransitCN:888,PackageChkPnt:,PackageFullSize:8,TransferingSessionID:8,SubpackagesCounter:8}
  |
 ++++
 1 row selected (0.188 seconds)
 {code}
 This file contains 4095 identical lines of one JSON string plus, at the end, another 
 JSON line (see attached file dataset_4095_and_1.json).
 The problem is that when the first string repeats more than 4095 times, the query gets an 
 exception. Here is the query for a file with 4096 strings of the first type plus 1 string 
 of the other (see attached file dataset_4096_and_1.json).
 {code}
 select * from (select source.pck, source.`timestamp`, 
 flatten(source.HostUpdateTypeNW.Transfers) as entry from 
 dfs.`/mnt/data/dataset_4096_and_1.json` as source) as parsed;
 Exception in thread 2ae108ff-b7ea-8f07-054e-84875815d856:frag:0:0 
 java.lang.RuntimeException: Error closing fragment context.
   at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:224)
   at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:187)
   at 
 org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.ClassCastException: 
 org.apache.drill.exec.vector.NullableIntVector cannot be cast to 
 org.apache.drill.exec.vector.RepeatedVector
   at 
 org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.getFlattenFieldTransferPair(FlattenRecordBatch.java:274)
   at 
 org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.setupNewSchema(FlattenRecordBatch.java:296)
   at 
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
   at 
 org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.innerNext(FlattenRecordBatch.java:122)
   at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
   at 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
   at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
   at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
   at 
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
   at 
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
   at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
   at 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
   at 
 org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:68)
   at 
 org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:96)
   at 
 org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:58)
   at 
 org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:163)
   ... 4 more
 Query failed: RemoteRpcException: Failure while running fragment., 
 org.apache.drill.exec.vector.NullableIntVector cannot be cast to 
 

[jira] [Updated] (DRILL-2167) Order by on a repeated index from the output of a flatten on large no of records results in incorrect results

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2167:

Fix Version/s: (was: 1.0.0)
   1.1.0

 Order by on a repeated index from the output of a flatten on large no of 
 records results in incorrect results
 -

 Key: DRILL-2167
 URL: https://issues.apache.org/jira/browse/DRILL-2167
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Reporter: Rahul Challapalli
Assignee: Jason Altekruse
Priority: Critical
 Fix For: 1.1.0

 Attachments: data.json


 git.commit.id.abbrev=3e33880
 The below query results in 26 records. Based on the data set we should 
 only receive 20 records. 
 {code}
 select s.uid from (select d.uid, flatten(d.map.rm) rms from `data.json` d) s 
 order by s.rms.rptd[1].d;
 {code}
 When I removed the order by part, drill correctly reported 20 records.
 {code}
 select s.uid from (select d.uid, flatten(d.map.rm) rms from `data.json` d) s;
 {code}
 I attached the data set with 2 records. I copied over the data set 5 
 times and ran the queries on top of it. Let me know if you have any other 
 questions



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2941) Update RPC layer to avoid writing local data messages to socket

2015-05-08 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-2941:
--
Assignee: Steven Phillips  (was: Jacques Nadeau)

 Update RPC layer to avoid writing local data messages to socket
 ---

 Key: DRILL-2941
 URL: https://issues.apache.org/jira/browse/DRILL-2941
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Jacques Nadeau
Assignee: Steven Phillips
 Fix For: 1.0.0

 Attachments: DRILL-2941.patch


 Right now, if we send a fragment record batch to localhost, we still traverse 
 the RPC layer.   We should short-circuit this path.  This is especially 
 important in light of the mux and demux exchanges.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2971) If BitBit connection is unexpectedly closed and we were already blocked on writing to socket, we'll stay forever in ResettableBarrier.await()

2015-05-08 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-2971:

Assignee: Steven Phillips  (was: Deneche A. Hakim)

 If BitBit connection is unexpectedly closed and we were already blocked on 
 writing to socket, we'll stay forever in ResettableBarrier.await()
 ---

 Key: DRILL-2971
 URL: https://issues.apache.org/jira/browse/DRILL-2971
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Reporter: Jacques Nadeau
Assignee: Steven Phillips
 Fix For: 1.0.0

 Attachments: DRILL-2971.patch


 We need to reset the ResettableBarrier if the connection dies so that the 
 message can be failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-3000) 3k!

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)
Daniel Barclay (Drill) created DRILL-3000:
-

 Summary: 3k!
 Key: DRILL-3000
 URL: https://issues.apache.org/jira/browse/DRILL-3000
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2999) Parse-error exception logged to stdout/stderr (visible in SQLLine output)

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2999:
--
Description: 
For some Calcite/parsing exceptions that seem to be internal (seem to be caught 
and processed (translated) at a higher level), Calcite or parsing logging is 
writing SEVERE-level logging messages to stdout or stderr.  

When SQLLine runs Drill in embedded mode, those logging lines show up 
intermixed in the SQLLine output

{noformat}
0: jdbc:drill:zk=local bad syntax;
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteException: Non-query expression 
encountered in illegal context
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, column 
1 to line 1, column 3: Non-query expression encountered in illegal context
Error: SYSTEM ERROR: Failure parsing SQL. Non-query expression encountered in 
illegal context


[Error Id: 87c20db6-58b1-4042-9060-42ee29945377 on dev-linux2:31016] 
(state=,code=0)
0: jdbc:drill:zk=local 
{noformat}

(The Error: SYSTEM ... lines are the normal error from SQLLine including 
exception message text from Drill.  The four lines starting with May or 
SEVERE are the extraneous logging output.)


  was:
For some Calcite/parsing exceptions that seem to be internal (seem to be caught 
and processed at a higher level), Calcite or parsing logging is writing 
SEVERE-level logging messages to stdout or stderr.  

When SQLLine runs Drill in embedded mode, those logging lines show up 
intermixed in the SQLLine output

{noformat}
0: jdbc:drill:zk=local bad syntax;
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteException: Non-query expression 
encountered in illegal context
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, column 
1 to line 1, column 3: Non-query expression encountered in illegal context
Error: SYSTEM ERROR: Failure parsing SQL. Non-query expression encountered in 
illegal context


[Error Id: 87c20db6-58b1-4042-9060-42ee29945377 on dev-linux2:31016] 
(state=,code=0)
0: jdbc:drill:zk=local 
{noformat}

(The Error: SYSTEM ... lines are the normal error from SQLLine including 
exception message text from Drill.  The four lines starting with May or 
SEVERE are the extraneous logging output.)



 Parse-error exception logged to stdout/stderr (visible in SQLLine output)
 -

 Key: DRILL-2999
 URL: https://issues.apache.org/jira/browse/DRILL-2999
 Project: Apache Drill
  Issue Type: Bug
  Components: SQL Parser
Reporter: Daniel Barclay (Drill)

 For some Calcite/parsing exceptions that seem to be internal (seem to be 
 caught and processed (translated) at a higher level), Calcite or parsing 
 logging is writing SEVERE-level logging messages to stdout or stderr.  
 When SQLLine runs Drill in embedded mode, those logging lines show up 
 intermixed in the SQLLine output
 {noformat}
 0: jdbc:drill:zk=local bad syntax;
 May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
 SEVERE: org.apache.calcite.runtime.CalciteException: Non-query expression 
 encountered in illegal context
 May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
 SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, 
 column 1 to line 1, column 3: Non-query expression encountered in illegal 
 context
 Error: SYSTEM ERROR: Failure parsing SQL. Non-query expression encountered in 
 illegal context
 [Error Id: 87c20db6-58b1-4042-9060-42ee29945377 on dev-linux2:31016] 
 (state=,code=0)
 0: jdbc:drill:zk=local 
 {noformat}
 (The Error: SYSTEM ... lines are the normal error from SQLLine including 
 exception message text from Drill.  The four lines starting with May or 
 SEVERE are the extraneous logging output.)
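 As an illustration of the mechanism only (not a fix), the extraneous lines appear to come
 from java.util.logging; a client could hide them by raising the level of the relevant
 loggers. The logger name below is an assumption inferred from the class shown in the
 messages.
 {code}
 import java.util.logging.Level;
 import java.util.logging.Logger;

 public class MuteCalciteLogging {
   // Assumption: the SEVERE lines come from java.util.logging loggers under
   // the org.apache.calcite namespace (inferred from the class name printed).
   // A strong reference is kept so the level setting is not lost to GC.
   private static final Logger CALCITE = Logger.getLogger("org.apache.calcite");

   public static void main(String[] args) {
     CALCITE.setLevel(Level.OFF);  // hides the output; does not address the cause
   }
 }
 {code}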



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DRILL-3000) 3k!

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) reassigned DRILL-3000:
-

Assignee: Daniel Barclay (Drill)

 3k!
 ---

 Key: DRILL-3000
 URL: https://issues.apache.org/jira/browse/DRILL-3000
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2999) Parse-error exception logged to stdout/stderr (visible in SQLLine output)

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2999:
--
Component/s: SQL Parser

 Parse-error exception logged to stdout/stderr (visible in SQLLine output)
 -

 Key: DRILL-2999
 URL: https://issues.apache.org/jira/browse/DRILL-2999
 Project: Apache Drill
  Issue Type: Bug
  Components: SQL Parser
Reporter: Daniel Barclay (Drill)

 For some Calcite/parsing exceptions that seem to be internal (seem to be 
 caught and processed at a higher level), Calcite or parsing logging is 
 writing SEVERE-level logging messages to stdout or stderr.  
 When SQLLine runs Drill in embedded mode, those logging lines show up 
 intermixed in the SQLLine output
 {noformat}
 0: jdbc:drill:zk=local bad syntax;
 May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
 SEVERE: org.apache.calcite.runtime.CalciteException: Non-query expression 
 encountered in illegal context
 May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
 SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, 
 column 1 to line 1, column 3: Non-query expression encountered in illegal 
 context
 Error: SYSTEM ERROR: Failure parsing SQL. Non-query expression encountered in 
 illegal context
 [Error Id: 87c20db6-58b1-4042-9060-42ee29945377 on dev-linux2:31016] 
 (state=,code=0)
 0: jdbc:drill:zk=local 
 {noformat}
 (The Error: SYSTEM ... lines are the normal error from SQLLine including 
 exception message text from Drill.  The four lines starting with May or 
 SEVERE are the extraneous logging output.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-2999) Parse-error exception logged to stdout/stderr (visible in SQLLine output)

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)
Daniel Barclay (Drill) created DRILL-2999:
-

 Summary: Parse-error exception logged to stdout/stderr (visible in 
SQLLine output)
 Key: DRILL-2999
 URL: https://issues.apache.org/jira/browse/DRILL-2999
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)


For some Calcite/parsing exceptions that seem to be internal (seem to be caught 
and processed at a higher level), Calcite or parsing logging is writing 
SEVERE-level logging messages to stdout or stderr.  

When SQLLine runs Drill in embedded mode, those logging lines show up 
intermixed in the SQLLine output

{noformat}
0: jdbc:drill:zk=local bad syntax;
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteException: Non-query expression 
encountered in illegal context
May 08, 2015 2:42:23 PM org.apache.calcite.runtime.CalciteException init
SEVERE: org.apache.calcite.runtime.CalciteContextException: From line 1, column 
1 to line 1, column 3: Non-query expression encountered in illegal context
Error: SYSTEM ERROR: Failure parsing SQL. Non-query expression encountered in 
illegal context


[Error Id: 87c20db6-58b1-4042-9060-42ee29945377 on dev-linux2:31016] 
(state=,code=0)
0: jdbc:drill:zk=local 
{noformat}

(The Error: SYSTEM ... lines are the normal error from SQLLine including 
exception message text from Drill.  The four lines starting with May or 
SEVERE are the extraneous logging output.)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2792) Killing the drillbit which is the foreman results in direct memory being held on

2015-05-08 Thread Chris Westin (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Westin updated DRILL-2792:

Assignee: Sudheesh Katkam  (was: Chris Westin)

 Killing the drillbit which is the foreman results in direct memory being held 
 on
 

 Key: DRILL-2792
 URL: https://issues.apache.org/jira/browse/DRILL-2792
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 0.8.0
Reporter: Ramana Inukonda Nagaraj
Assignee: Sudheesh Katkam
 Fix For: 1.0.0


 Killed one of the drillbits which is the foreman for the query.
 The Profiles page reports that the query has been cancelled.
 Due to bug DRILL-2778, sqlline hangs. However, after killing sqlline, the 
 current direct memory used does not go down to pre-query levels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2961) Statement.setQueryTimeout() should throw a SQLException

2015-05-08 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-2961:
--
Assignee: Parth Chandra  (was: Daniel Barclay (Drill))

 Statement.setQueryTimeout() should throw a SQLException
 ---

 Key: DRILL-2961
 URL: https://issues.apache.org/jira/browse/DRILL-2961
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Affects Versions: 1.0.0
 Environment: RHEL 6.4
Reporter: Kunal Khatua
Assignee: Parth Chandra
 Fix For: 1.0.0

 Attachments: DRILL-2961.1Prep.2.patch.txt, 
 DRILL-2961.1Prep.3.patch.txt, DRILL-2961.1Prep.4.patch.txt, 
 DRILL-2961.2Core.2.patch.txt, DRILL-2961.2Core.3.patch.txt, 
 DRILL-2961.2Core.4.patch.txt


 When trying to set the timeout for a Drill Statement object, Drill does not 
 report any SQLException, which makes the developer incorrectly believe that a 
 timeout has been set. 
 The operation should throw the exception:
 java.sql.SQLException: Method not supported
 at 
 org.apache.drill.jdbc.DrillStatement.setQueryTimeout(DrillStatement.java)
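 A minimal sketch of the expected caller-side behavior once this is fixed; the connection
 URL and class name are illustrative, not part of the report.
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.sql.Statement;

 public class QueryTimeoutCheck {
   public static void main(String[] args) throws SQLException {
     // Illustrative URL; any Drill JDBC connection should behave the same way.
     try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
          Statement stmt = conn.createStatement()) {
       try {
         stmt.setQueryTimeout(30);
         System.out.println("timeout silently accepted (current behavior)");
       } catch (SQLException e) {
         // Expected once fixed: the driver reports that timeouts are unsupported.
         System.out.println("timeout rejected: " + e.getMessage());
       }
     }
   }
 }
 {code}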
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2425) Wrong results when identifier change cases within the same data file

2015-05-08 Thread Hanifi Gunes (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535746#comment-14535746
 ] 

Hanifi Gunes commented on DRILL-2425:
-

[~sphillips] this seems a duplicate of DRILL-2036.

 Wrong results when identifier change cases within the same data file
 

 Key: DRILL-2425
 URL: https://issues.apache.org/jira/browse/DRILL-2425
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 0.8.0
Reporter: Chun Chang
Assignee: Steven Phillips
Priority: Critical
 Fix For: 1.0.0


 #Fri Mar 06 16:51:10 EST 2015
 git.commit.id.abbrev=fb293ba
 I have the following JSON file in which one of the identifiers changes case:
 {code}
 [root@qa-node120 md-83]# hadoop fs -cat 
 /drill/testdata/complex_type/json/schema/a.json
 {"SOURCE": "ebm","msAddressIpv6Array": null}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msAddressIpv6_1":"99.111.222.0", 
 "msAddressIpv6_2":"88.222.333.0"}}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msAddressIpv6_1":"99.111.222.1", 
 "msAddressIpv6_2":"88.222.333.1"}}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msaddressipv6_1":"99.111.222.2", 
 "msAddressIpv6_2":"88.222.333.2"}}
 {code}
 Querying this file through Drill gives wrong results:
 {code}
 0: jdbc:drill:schema=dfs.drillTestDirComplexJ select 
 t.msAddressIpv6Array.msAddressIpv6_1 as msAddressIpv6_1 from `schema/a.json` 
 t;
 +-+
 | msAddressIpv6_1 |
 +-+
 | null|
 | null|
 | null|
 | 99.111.222.2|
 +-+
 {code}
 plan:
 {code}
 0: jdbc:drill:schema=dfs.drillTestDirComplexJ explain plan for select 
 t.msAddressIpv6Array.msAddressIpv6_1 as msAddressIpv6_1 from `schema/a.json` 
 t;
 +++
 |text|json|
 +++
 | 00-00Screen
 00-01  Project(msAddressIpv6_1=[ITEM($0, 'msAddressIpv6_1')])
 00-02Scan(groupscan=[EasyGroupScan 
 [selectionRoot=/drill/testdata/complex_type/json/schema/a.json, numFiles=1, 
 columns=[`msAddressIpv6Array`.`msAddressIpv6_1`], 
 files=[maprfs:/drill/testdata/complex_type/json/schema/a.json]]])
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-3000) I got JIRA report #3000. Now ... to use it for good or evil?

2015-05-08 Thread Hanifi Gunes (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanifi Gunes closed DRILL-3000.
---
Resolution: Fixed

This was a tough fix.

 I got JIRA report #3000.  Now ... to use it for good or evil?
 -

 Key: DRILL-3000
 URL: https://issues.apache.org/jira/browse/DRILL-3000
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-3000) 3k!

2015-05-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535666#comment-14535666
 ] 

Aditya Kishore commented on DRILL-3000:
---

You beat Ramana and me to it :)

 3k!
 ---

 Key: DRILL-3000
 URL: https://issues.apache.org/jira/browse/DRILL-3000
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-3000) I got JIRA report #3000. Now ... to use it for good or evil?

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-3000:
--
Summary: I got JIRA report #3000.  Now ... to use it for good or evil?  
(was: 3k!)

 I got JIRA report #3000.  Now ... to use it for good or evil?
 -

 Key: DRILL-3000
 URL: https://issues.apache.org/jira/browse/DRILL-3000
 Project: Apache Drill
  Issue Type: Bug
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2441) Throw unsupported error message in case of inequality join

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2441:

Labels: no_verified_test  (was: )

 Throw unsupported error message in case of inequality join
 --

 Key: DRILL-2441
 URL: https://issues.apache.org/jira/browse/DRILL-2441
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Reporter: Victoria Markman
Assignee: Aman Sinha
  Labels: no_verified_test
 Fix For: 0.8.0

 Attachments: DRILL-2441.1.patch


 Since we don't support inequality joins, this whole class of queries will throw a 
 huge, page-long "can't plan" exception.
 This is a request to throw the nice error message that we throw in the case of a 
 cartesian join in these cases as well.
 {code} 
 select * from t1 left outer join t2  on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 select * from t1 right outer join t2 on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 {code}
 Example of an exception:
 {code}
 0: jdbc:drill:schema=dfs select * from t1 inner join t2 on(t1.b1  t2.b2);
 Query failed: UnsupportedRelOperatorException: This query cannot be planned 
 possibly due to either a cartesian join or an inequality join
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2441) Throw unsupported error message in case of inequality join

2015-05-08 Thread Victoria Markman (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535690#comment-14535690
 ] 

Victoria Markman commented on DRILL-2441:
-

Verified fixed in: 1.0.0

#Thu May 07 19:07:43 EDT 2015
git.commit.id.abbrev=79a712a

0: jdbc:drill:schema=dfs select * from t1 left outer join t2  on (t1.a1 = 
t2.a2 and t1.b2  t2.b2);
Error: SYSTEM ERROR: This query cannot be planned possibly due to either a 
cartesian join or an inequality join
[Error Id: ee0aa885-1aca-419b-b5f0-65fd992451dc on atsqa4-133.qa.lab:31010] 
(state=,code=0)


 Throw unsupported error message in case of inequality join
 --

 Key: DRILL-2441
 URL: https://issues.apache.org/jira/browse/DRILL-2441
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Reporter: Victoria Markman
Assignee: Aman Sinha
  Labels: no_verified_test
 Fix For: 0.8.0

 Attachments: DRILL-2441.1.patch


 Since we don't support inequality joins, this whole class of queries will throw a 
 huge, page-long "can't plan" exception.
 This is a request to throw the nice error message that we throw in the case of a 
 cartesian join in these cases as well.
 {code} 
 select * from t1 left outer join t2  on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 select * from t1 right outer join t2 on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 {code}
 Example of an exception:
 {code}
 0: jdbc:drill:schema=dfs select * from t1 inner join t2 on(t1.b1  t2.b2);
 Query failed: UnsupportedRelOperatorException: This query cannot be planned 
 possibly due to either a cartesian join or an inequality join
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1986) Natural join query returns wrong result

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman closed DRILL-1986.
---

 Natural join query returns wrong result
 ---

 Key: DRILL-1986
 URL: https://issues.apache.org/jira/browse/DRILL-1986
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Sean Hsuan-Yi Chu
  Labels: no_verified_test

 Natural join returns wrong result:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 | 2  | 2  | 2015-01-02 |
 ++++
 2 rows selected (0.087 seconds)
 0: jdbc:drill:schema=dfs select * from `t2.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 ++++
 1 row selected (0.112 seconds)
 0: jdbc:drill:schema=dfs select * from `t1.json` natural join `t2.json`;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 +++++++
 No rows selected (0.223 seconds)
 {code}
 Equivalent inner join query returns one row:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json` t1, `t2.json` t2 where 
 t1.a1=t2.a1 and t1.b1=t2.b1 and t1.c1=t2.c1;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 | 1  | 1  | 2015-01-01 | 1  | 1  | 2015-01-01 
 |
 +++++++
 1 row selected (0.732 seconds)
 {code}
 Natural join is listed as supported in our documentation. 
 If we decide not to support it, we need to make sure to remove it from docs 
 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1986) Natural join query returns wrong result

2015-05-08 Thread Victoria Markman (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535710#comment-14535710
 ] 

Victoria Markman commented on DRILL-1986:
-

Verified fixed in 1.0.0

#Thu May 07 19:07:43 EDT 2015
git.commit.id.abbrev=79a712a

0: jdbc:drill:schema=dfs select * from t1 natural join t2;
Error: SYSTEM ERROR: NATURAL JOIN is not supported
See Apache Drill JIRA: DRILL-1986
[Error Id: 3f642783-5ac6-46b2-9497-94fba6ac2a72 on atsqa4-133.qa.lab:31010] 
(state=,code=0)


 Natural join query returns wrong result
 ---

 Key: DRILL-1986
 URL: https://issues.apache.org/jira/browse/DRILL-1986
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Sean Hsuan-Yi Chu
  Labels: no_verified_test

 Natural join returns wrong result:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 | 2  | 2  | 2015-01-02 |
 ++++
 2 rows selected (0.087 seconds)
 0: jdbc:drill:schema=dfs select * from `t2.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 ++++
 1 row selected (0.112 seconds)
 0: jdbc:drill:schema=dfs select * from `t1.json` natural join `t2.json`;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 +++++++
 No rows selected (0.223 seconds)
 {code}
 Equivalent inner join query returns one row:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json` t1, `t2.json` t2 where 
 t1.a1=t2.a1 and t1.b1=t2.b1 and t1.c1=t2.c1;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 | 1  | 1  | 2015-01-01 | 1  | 1  | 2015-01-01 
 |
 +++++++
 1 row selected (0.732 seconds)
 {code}
 Natural join is listed as supported in our documentation. 
 If we decide not to support it, we need to make sure to remove it from docs 
 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1986) Natural join query returns wrong result

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-1986:

Labels: no_verified_test  (was: )

 Natural join query returns wrong result
 ---

 Key: DRILL-1986
 URL: https://issues.apache.org/jira/browse/DRILL-1986
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning  Optimization
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Sean Hsuan-Yi Chu
  Labels: no_verified_test

 Natural join returns wrong result:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 | 2  | 2  | 2015-01-02 |
 ++++
 2 rows selected (0.087 seconds)
 0: jdbc:drill:schema=dfs select * from `t2.json`;
 ++++
 | a1 | b1 | c1 |
 ++++
 | 1  | 1  | 2015-01-01 |
 ++++
 1 row selected (0.112 seconds)
 0: jdbc:drill:schema=dfs select * from `t1.json` natural join `t2.json`;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 +++++++
 No rows selected (0.223 seconds)
 {code}
 Equivalent inner join query returns one row:
 {code}
 0: jdbc:drill:schema=dfs select * from `t1.json` t1, `t2.json` t2 where 
 t1.a1=t2.a1 and t1.b1=t2.b1 and t1.c1=t2.c1;
 +++++++
 | a1 | b1 | c1 |a10 |b10 |c10 
 |
 +++++++
 | 1  | 1  | 2015-01-01 | 1  | 1  | 2015-01-01 
 |
 +++++++
 1 row selected (0.732 seconds)
 {code}
 Natural join is listed as supported in our documentation. 
 If we decide not to support it, we need to make sure to remove it from docs 
 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2570) Broken JDBC-All Jar packaging can cause missing XML classes

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2570:
--
Assignee: Parth Chandra  (was: Daniel Barclay (Drill))

 Broken JDBC-All Jar packaging can cause missing XML classes
 ---

 Key: DRILL-2570
 URL: https://issues.apache.org/jira/browse/DRILL-2570
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build  Test
Reporter: Daniel Barclay (Drill)
Assignee: Parth Chandra
 Fix For: 1.0.0

 Attachments: DRILL-2570.1.patch.txt, ElementTraversal.rtf, 
 xerces-error.rtf


 [Transcribed from other medium:]
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
 When starting Spotfire Server using the JDBC driver, an error (see attachment 
 xerces-error) is produced.
 This error is then resolved by adding the jars/3rdparty/xercesImpl-2.11.0.jar 
 from the drillbit package to the classpath for the JDBC client driver.
 Then the following error is observed (see attachment ElementTraversal).
 This requires adding jars/3rdparty/xml-apis-1.4.01.jar from the drillbit 
 package to the classpath.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
 The issue is that Tomcat and Spotfire Server do not show any errors and start 
 up fine without the Drill JDBC driver. Once the Drill driver is added, the 
 application server fails to start with the errors shown.
 Adding the two jars to the classpath then resolves the issue.
 I have not looked at all the JDBC driver classes, but it is important to note 
 that the error occurs when the JDBC driver is added and resolved by adding 2 
 jars from the drillbit package.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
  I do not see Drill classes in the stack trace. This seems to be a Tomcat 
  configuration issue.
 I suspect another possibility: that the Drill JDBC-all Jar file contains a 
 stray reference to the unfound class (SAXParserFactoryImpl) in some file in 
 META-INF/services (left over from some package whose classes we either 
 excluded or renamed (with shading)).
 Xxx, Yyy: Can you try this?:
 (Temporarily) removing the added XML Jar files from the class path to 
 re-confirm the problem.
 Move the Drill JDBC-all Jar file to be last on the class path (and remove 
 ).
 Report whether the symptoms change.
 Thanks.
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
 - - - - - - - - - - - 
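 One hedged way to check the stray-reference theory above is to list every
 META-INF/services entry for SAXParserFactory visible on a classpath that includes the
 JDBC-all jar. The resource name is the standard JAXP one; the class name in the snippet
 is illustrative.
 {code}
 import java.net.URL;
 import java.util.Enumeration;

 public class ListSaxParserFactories {
   public static void main(String[] args) throws Exception {
     // Prints each classpath entry that advertises a SAXParserFactory
     // implementation via META-INF/services; a stale entry pointing at an
     // excluded or shaded-away class would show up here.
     Enumeration<URL> urls = ListSaxParserFactories.class.getClassLoader()
         .getResources("META-INF/services/javax.xml.parsers.SAXParserFactory");
     while (urls.hasMoreElements()) {
       System.out.println(urls.nextElement());
     }
   }
 }
 {code}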



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2310) Drill fails to start in embedded mode on windows

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2310:

Assignee: Aditya Kishore  (was: Steven Phillips)

 Drill fails to start in embedded mode on windows
 

 Key: DRILL-2310
 URL: https://issues.apache.org/jira/browse/DRILL-2310
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build  Test
Affects Versions: 0.8.0
Reporter: Krystal
Assignee: Aditya Kishore
 Fix For: 1.0.0


 git.commit.id.abbrev=c8d2fe1
 I installed it on windows 7
 I invoked sqlline in embedded mode via:
 C:\drill\apache-drill-0.8.0-SNAPSHOT\bin>sqlline.bat -u jdbc:drill:zk=local
 I got the following error:
 Error: Failure while attempting to start Drillbit in embedded mode. 
 (state=,code=0)
 With debug turned on, the following error is displayed:
 15:36:53.608 [main] WARN  org.apache.hadoop.fs.FSInputChecker - Problem opening 
 checksum file: /tmp/drill/sys.storage_plugins/hbase.sys.drill.  Ignoring exception:
 java.io.EOFException: null
 at java.io.DataInputStream.readFully(Unknown Source) ~[na:1.7.0_10]
 at java.io.DataInputStream.readFully(Unknown Source) ~[na:1.7.0_10]
 at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:146) ~[hadoop-common-2.4.1.jar:na]
 at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339) [hadoop-common-2.4.1.jar:na]
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764) [hadoop-common-2.4.1.jar:na]
 at org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:145) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.exec.store.sys.local.FilePStore.get(FilePStore.java:136) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.exec.store.sys.local.FilePStore$Iter$DeferredEntry.getValue(FilePStore.java:218) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.exec.store.StoragePluginRegistry.createPlugins(StoragePluginRegistry.java:166) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.exec.store.StoragePluginRegistry.init(StoragePluginRegistry.java:130) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:155) [drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.jdbc.DrillConnectionImpl.init(DrillConnectionImpl.java:79) [drill-jdbc-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.jdbc.DrillJdbc41Factory$DrillJdbc41Connection.init(DrillJdbc41Factory.java:94) [drill-jdbc-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:57) [drill-jdbc-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.jdbc.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:43) [drill-jdbc-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
 at org.apache.drill.jdbc.DrillFactory.newConnection(DrillFactory.java:54) [drill-jdbc-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
 at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126) [optiq-avatica-0.9-drill-r20.jar:na]
 at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4732) 
 [sqlline-1.1.6.jar:na]
 at 
 sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4786) 
 [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine$Commands.connect(SqlLine.java:4026) 
 [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine$Commands.connect(SqlLine.java:3935) 
 [sqlline-1.1.6.jar:na]
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
 ~[na:1.7.0_10]
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) 
 ~[na:1.7.0_10]
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) 
 ~[na:1.7.0_10]
 at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.7.0_10]
 at 
 sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2884) 
 [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine.dispatch(SqlLine.java:885) [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine.initArgs(SqlLine.java:693) [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine.begin(SqlLine.java:745) [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine.start(SqlLine.java:498) [sqlline-1.1.6.jar:na]
 at sqlline.SqlLine.main(SqlLine.java:460) [sqlline-1.1.6.jar:na]
 Drill successfully created the /tmp/drill/sys.storage_
 ugins/hbase.sys.drill file so not sure why it can't access it.  The 
 file/directory has appropriate permissions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: (was: DRILL-2343.3.patch.txt)

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.4.patch.txt


 Create a JDBC driver that functions as a proxy to the Drill (or another) JDBC 
 driver in order to report calls made across the JDBC API.
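 The general technique would look roughly like the sketch below (illustrative 
 only, not the attached patch; the class and method names are invented): wrap a 
 delegate JDBC Connection in a dynamic proxy whose handler logs each call 
 before and after forwarding it to the real connection.
 {code}
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.Proxy;
 import java.sql.Connection;

 // Minimal tracing-proxy sketch: every method invoked on the returned Connection
 // is logged, then forwarded to the wrapped (delegate) connection.
 public final class TracingConnections {
   public static Connection trace(final Connection delegate) {
     InvocationHandler handler = (proxy, method, args) -> {
       System.out.println("CALL   Connection." + method.getName());
       Object result = method.invoke(delegate, args);
       System.out.println("RETURN Connection." + method.getName() + " -> " + result);
       return result;
     };
     return (Connection) Proxy.newProxyInstance(
         Connection.class.getClassLoader(), new Class<?>[] {Connection.class}, handler);
   }
 }
 {code}
 To cover the whole API, the same handler pattern would also have to wrap the 
 Statement, PreparedStatement, and ResultSet objects handed back through the 
 traced connection.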



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-1315) Allow specifying Zookeeper root znode and cluster-id as JDBC parameters

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-1315:

Fix Version/s: 1.1.0  (was: 1.0.0)

 Allow specifying Zookeeper root znode and cluster-id as JDBC parameters
 ---

 Key: DRILL-1315
 URL: https://issues.apache.org/jira/browse/DRILL-1315
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Affects Versions: 0.5.0
Reporter: Aditya Kishore
Assignee: Parth Chandra
 Fix For: 1.1.0


 Currently there is no way to specify a different root z-node and cluster-id 
 to the Drill JDBC driver, and it always attempts to connect using the default 
 values (unless a {{drill-override.conf}} with the correct values is also 
 included early in the classpath).
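 The kind of usage being requested would look something like the sketch below 
 (illustrative only; the URL syntax for passing the root znode and cluster-id 
 is hypothetical, since this issue is precisely the request to define and 
 support it, and it assumes the Drill JDBC driver is on the classpath):
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;

 public class NonDefaultClusterConnect {
   public static void main(String[] args) throws Exception {
     // Hypothetical URL: root znode "myroot" and cluster-id "mycluster" are passed
     // with the zk= parameter instead of relying on drill-override.conf defaults.
     String url = "jdbc:drill:zk=zkhost1:2181,zkhost2:2181/myroot/mycluster";
     try (Connection conn = DriverManager.getConnection(url)) {
       System.out.println("Connected: " + !conn.isClosed());
     }
   }
 }
 {code}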



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-2849) Difference in query results over CSV file created by CTAS, compared to results over original CSV file

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman resolved DRILL-2849.
-
Resolution: Fixed

 Difference in query results over CSV file created by CTAS, compared to 
 results over original CSV file 
 --

 Key: DRILL-2849
 URL: https://issues.apache.org/jira/browse/DRILL-2849
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Text & CSV
Affects Versions: 0.9.0
 Environment: 64e3ec52b93e9331aa5179e040eca19afece8317 | DRILL-2611: 
 value vectors should report valid value count | 16.04.2015 @ 13:53:34 EDT
Reporter: Khurram Faraaz
Assignee: Khurram Faraaz
Priority: Critical
 Fix For: 1.0.0


 Different results are seen for the same query when it is run over a CSV data 
 file and over another CSV data file created by CTAS from that same file.
 Tests were executed on a 4-node cluster on CentOS.
 I removed the header row that CTAS writes into the new CSV file, and then ran 
 my queries over the CTAS output.
 query over uncompressed CSV file, deletions/deletions-0-of-00020.csv
 {code}
  select count(cast(columns[0] as double)),max(cast(columns[0] as 
  double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
  columns[7] from `deletions/deletions-0-of-00020.csv` group by 
  columns[7];
 88 rows selected (6.893 seconds)
 =
 {code}
 query over CSV file that was created by CTAS. (input to CTAS was 
 deletions/deletions-0-of-00020.csv)
 Notice there is one more record returned.
 {code}
  select count(cast(columns[0] as double)),max(cast(columns[0] as 
  double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
  columns[7] from `csvToCSV_0_of_00020/0_0_0.csv` group by columns[7];
  
 89 rows selected (6.623 seconds)
 ==
 {code}
 query over compressed CSV file
 {code}
  select count(cast(columns[0] as double)),max(cast(columns[0] as 
  double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
  columns[7] from `deletions-0-of-00020.csv.gz` group by columns[7];
 88 rows selected (10.526 seconds)
 ==
 {code}
 In the cases below, the count and aggregate results are different when the 
 query is executed over the CSV file that was created by CTAS (this may 
 explain the difference in results in the queries above).
 {code}
 0: jdbc:drill: select count(cast(columns[0] as double)),max(cast(columns[0] 
 as double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
 columns[7] from `deletions/deletions-0-of-00020.csv` where columns[7] is 
 null group by columns[7];
 +--------+--------------------+--------------------+-----------------------+--------+
 | EXPR$0 | EXPR$1             | EXPR$2             | EXPR$3                | EXPR$4 |
 +--------+--------------------+--------------------+-----------------------+--------+
 | 252    | 1.362983396001E12  | 1.165768779027E12  | 1.293794515595635E12  | null   |
 +--------+--------------------+--------------------+-----------------------+--------+
 1 row selected (6.013 seconds)
 0: jdbc:drill: select count(cast(columns[0] as double)),max(cast(columns[0] 
 as double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
 columns[7] from `deletions-0-of-00020.csv.gz` where columns[7] is null 
 group by columns[7];
 +--------+--------------------+--------------------+-----------------------+--------+
 | EXPR$0 | EXPR$1             | EXPR$2             | EXPR$3                | EXPR$4 |
 +--------+--------------------+--------------------+-----------------------+--------+
 | 252    | 1.362983396001E12  | 1.165768779027E12  | 1.293794515595635E12  | null   |
 +--------+--------------------+--------------------+-----------------------+--------+
 1 row selected (8.899 seconds)
 {code}
 Notice that the count and aggregate results are different (from those above) 
 when the query is executed over the CSV file created by CTAS.
 {code}
 0: jdbc:drill: select count(cast(columns[0] as double)),max(cast(columns[0] 
 as double)),min(cast(columns[0] as double)),avg(cast(columns[0] as double)), 
 columns[7] from `csvToCSV_0_of_00020/0_0_0.csv` where columns[7] is null 
 group by columns[7];
 +--------+--------------------+--------------------+-----------------------+--------+
 | EXPR$0 | EXPR$1             | EXPR$2             | EXPR$3                | EXPR$4 |
 +--------+--------------------+--------------------+-----------------------+--------+
 | 245    | 1.349670663E12     | 1.165768779027E12  | 1.2930281335065144E12 | null   |
 +--------+--------------------+--------------------+-----------------------+--------+
 1 row selected (5.736 seconds)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2234) IOOB when streaming aggregate is on the left side of hash join

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2234:

Fix Version/s: 1.1.0  (was: 1.0.0)

 IOOB when streaming aggregate is on the left side of hash join
 --

 Key: DRILL-2234
 URL: https://issues.apache.org/jira/browse/DRILL-2234
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Reporter: Mehant Baid
Assignee: Mehant Baid
 Fix For: 1.1.0


 This issue is similar to DRILL-2107. 
 Issue can be reproduced by enabling SwapJoinRule in DrillRuleSets and running 
 the following query.
 alter session set `planner.slice_target` = 1;
 alter session set `planner.enable_hashagg` = false;
 alter session set `planner.enable_streamagg` = true;
 select l_suppkey, sum(l_extendedprice)/sum(l_quantity) as avg_price 
 from cp.`tpch/lineitem.parquet` where l_orderkey in
 (select o_orderkey from cp.`tpch/orders.parquet` where o_custkey = 2) 
 group by l_suppkey having sum(l_extendedprice)/sum(l_quantity)  1850.0;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2343) Create JDBC tracing proxy driver.

2015-05-08 Thread Daniel Barclay (Drill) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Barclay (Drill) updated DRILL-2343:
--
Attachment: DRILL-2343.4.patch.txt

 Create JDBC tracing proxy driver.
 -

 Key: DRILL-2343
 URL: https://issues.apache.org/jira/browse/DRILL-2343
 Project: Apache Drill
  Issue Type: New Feature
  Components: Client - JDBC
Reporter: Daniel Barclay (Drill)
Assignee: Daniel Barclay (Drill)
 Fix For: Future

 Attachments: DRILL-2343.4.patch.txt


 Create a JDBC driver that functions as a proxy to the Drill (or another) JDBC 
 driver in order to report calls made across the JDBC API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2838) Applying flatten after joining 2 sub-queries returns empty maps

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2838:

Fix Version/s: 1.1.0  (was: 1.0.0)

 Applying flatten after joining 2 sub-queries returns empty maps
 ---

 Key: DRILL-2838
 URL: https://issues.apache.org/jira/browse/DRILL-2838
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Reporter: Rahul Challapalli
Assignee: Jason Altekruse
Priority: Critical
 Fix For: 1.1.0

 Attachments: data.json


 git.commit.id.abbrev=5cd36c5
 The query below applies flatten after joining 2 subqueries. It generates 
 empty maps, which is wrong.
 {code}
 select v1.uid, flatten(events), flatten(transactions) from 
 (select uid, events from `data.json`) v1
 inner join
 (select uid, transactions from `data.json`) v2
 on v1.uid = v2.uid;
 +-----+--------+--------+
 | uid | EXPR$1 | EXPR$2 |
 +-----+--------+--------+
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 1   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 | 2   | {}     | {}     |
 +-----+--------+--------+
 36 rows selected (0.244 seconds)
 {code}
 I attached the data set. Let me know if you have any questions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2100) Drill not deleting spooling files

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2100:

Fix Version/s: 1.1.0  (was: 1.0.0)

 Drill not deleting spooling files
 -

 Key: DRILL-2100
 URL: https://issues.apache.org/jira/browse/DRILL-2100
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Abhishek Girish
Assignee: Steven Phillips
 Fix For: 1.1.0


 Currently, forcing queries to use an external sort by switching off hash 
 join/agg causes spill-to-disk files to accumulate. 
 This causes issues with disk space availability when the spill is configured 
 to be on the local file system (/tmp/drill). It is also not optimal when the 
 spill is configured to use DFS (custom). 
 Drill must clean up all temporary files it created, either after the query 
 completes or after a drillbit restart. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2774) Updated drill-patch-review.py to use git-format-patch

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2774:

Fix Version/s: 1.1.0  (was: 1.0.0)

 Updated drill-patch-review.py to use git-format-patch
 -

 Key: DRILL-2774
 URL: https://issues.apache.org/jira/browse/DRILL-2774
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Reporter: Steven Phillips
Assignee: Steven Phillips
Priority: Minor
 Fix For: 1.1.0

 Attachments: DRILL-2774.patch, DRILL-2774.patch


 The tool currently uses git diff to generate the patches, which does not 
 preserve commit information; preserving that information is required for 
 submitting patches in the Drill community.
 This doesn't work properly when there are multiple commits, so as part of 
 this change we enforce the requirement that the branch used to create the 
 patch is exactly one commit ahead of, and zero commits behind, the remote 
 branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2516) CTAS fails with NPE when select statement contains join and columns are not specified explicitly

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman updated DRILL-2516:

Fix Version/s: 1.1.0  (was: 1.0.0)

 CTAS fails with NPE when select statement contains join and columns are not 
 specified explicitly
 

 Key: DRILL-2516
 URL: https://issues.apache.org/jira/browse/DRILL-2516
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 0.8.0
Reporter: Victoria Markman
Assignee: Jason Altekruse
 Fix For: 1.1.0

 Attachments: t1.parquet, t2.parquet


 {code}
 0: jdbc:drill:schema=dfs select * from t1, t2 where t1.a1 = t2.a2;
 +----+------+------------+----+------+------------+
 | a1 | b1   | c1         | a2 | b2   | c2         |
 +----+------+------------+----+------+------------+
 | 1  | a    | 2015-01-01 | 1  | a    | 2015-01-01 |
 | 2  | b    | 2015-01-02 | 2  | b    | 2015-01-02 |
 | 2  | b    | 2015-01-02 | 2  | b    | 2015-01-02 |
 | 2  | b    | 2015-01-02 | 2  | b    | 2015-01-02 |
 | 3  | c    | 2015-01-03 | 3  | c    | 2015-01-03 |
 | 4  | null | 2015-01-04 | 4  | d    | 2015-01-04 |
 | 5  | e    | 2015-01-05 | 5  | e    | 2015-01-05 |
 | 6  | f    | 2015-01-06 | 6  | f    | 2015-01-06 |
 | 7  | g    | 2015-01-07 | 7  | g    | 2015-01-07 |
 | 7  | g    | 2015-01-07 | 7  | g    | 2015-01-07 |
 | 9  | i    | null       | 9  | i    | 2015-01-09 |
 +----+------+------------+----+------+------------+
 11 rows selected (0.253 seconds)
 {code}
 CTAS assert:
 {code}
 0: jdbc:drill:schema=dfs create table test as select * from t1, t2 where 
 t1.a1 = t2.a2;
 Query failed: RemoteRpcException: Failure while running fragment.[ 
 83a1a356-b427-4dce-822f-6ae35ef9ca38 on atsqa4-134.qa.lab:31010 ]
 [ 83a1a356-b427-4dce-822f-6ae35ef9ca38 on atsqa4-134.qa.lab:31010 ]
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}
 From drillbit.log
 {code}
 2015-03-20 23:40:30,017 [2af35011-91c9-7834-14b3-863cb0cf41d2:frag:0:0] ERROR 
 o.a.d.e.w.f.AbstractStatusReporter - Error 
 83a1a356-b427-4dce-822f-6ae35ef9ca38: Failure while running fragment.
 java.lang.AssertionError: null
 at 
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:347)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:102)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
  ~[drill-java-exec-0.8.0-SNAPSHOT-rebuffed.jar:0.8.0-SNAPSHOT]
 at 
 

[jira] [Commented] (DRILL-2897) Update Limit 0 to avoid parallelization

2015-05-08 Thread Aman Sinha (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535804#comment-14535804
 ] 

Aman Sinha commented on DRILL-2897:
---

Another thing to note: although tools such as Tableau add LIMIT 0 to a query, 
they typically do not disable hash join, so they are unlikely to encounter the 
cannot-plan exception. 
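For reference, the LIMIT 0 issued by such tools is just a schema probe along 
these lines (a sketch; the connection URL and table name are placeholders):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class Limit0Probe {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
         Statement stmt = conn.createStatement();
         // LIMIT 0 returns no rows; the tool only wants the column names and types.
         ResultSet rs = stmt.executeQuery("SELECT * FROM cp.`employee.json` LIMIT 0")) {
      ResultSetMetaData md = rs.getMetaData();
      for (int i = 1; i <= md.getColumnCount(); i++) {
        System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
      }
    }
  }
}
{code}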

 Update Limit 0 to avoid parallelization
 ---

 Key: DRILL-2897
 URL: https://issues.apache.org/jira/browse/DRILL-2897
 Project: Apache Drill
  Issue Type: Bug
Reporter: Jacques Nadeau
Assignee: Jacques Nadeau
 Fix For: 1.0.0

 Attachments: DRILL-2897.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-3001) Some functional tests fail when new text reader is disabled

2015-05-08 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-3001:
-

 Summary: Some functional tests fail when new text reader is 
disabled
 Key: DRILL-3001
 URL: https://issues.apache.org/jira/browse/DRILL-3001
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Text & CSV
Affects Versions: 1.0.0
Reporter: Khurram Faraaz
Assignee: Steven Phillips


I am seeing several tests fail in the Functional/Passing suite when I disable 
the new text reader. Some of those failures are listed here.

{code}
alter system set `exec.storage.enable_new_text_reader` = false;
+-------+-----------------------------------------------+
| ok    | summary                                       |
+-------+-----------------------------------------------+
| true  | exec.storage.enable_new_text_reader updated. |
+-------+-----------------------------------------------+
1 row selected (1.442 seconds)
{code}

{code}
running test 
/root/private-sql-hadoop-test/framework/resources/Functional/Passing/data-shapes/wide-columns/5000/1000rows/parquet/q219.q
 1463647375
Query failed: 
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: null

Fragment 5:1

[Error Id: 48e103c5-0c5f-4d60-832c-ef41dc642fd3 on atsqa6c85.qa.lab:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:112)
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:102)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34)
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:744)
{code}

Another test also fails with the same stack trace

{code}
Completed test 
/root/private-sql-hadoop-test/framework/resources/Functional/Passing/json_kvgenflatten/convert/convert_json_text1.q.
 Status PASS
Query failed: 
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: null

Fragment 0:0

[Error Id: c9747707-8071-410e-84da-5a0aaec3f77b on atsqa6c87.qa.lab:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:112)
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:102)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34)
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:57)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:194)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:173)
at 

[jira] [Closed] (DRILL-2441) Throw unsupported error message in case of inequality join

2015-05-08 Thread Victoria Markman (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Markman closed DRILL-2441.
---

 Throw unsupported error message in case of inequality join
 --

 Key: DRILL-2441
 URL: https://issues.apache.org/jira/browse/DRILL-2441
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Reporter: Victoria Markman
Assignee: Aman Sinha
  Labels: no_verified_test
 Fix For: 0.8.0

 Attachments: DRILL-2441.1.patch


 Since we don't support inequality joins, this whole class of queries throws a 
 huge, page-long Can't plan exception.
 This is a request to throw, in these cases as well, the same friendly error 
 message that we throw for cartesian joins.
 {code} 
 select * from t1 left outer join t2  on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 select * from t1 right outer join t2 on (t1.a1 = t2.a2 and t1.b2  t2.b2);
 {code}
 Example of an exception:
 {code}
 0: jdbc:drill:schema=dfs select * from t1 inner join t2 on(t1.b1  t2.b2);
 Query failed: UnsupportedRelOperatorException: This query cannot be planned 
 possibly due to either a cartesian join or an inequality join
 Error: exception while executing query: Failure while executing query. 
 (state=,code=0)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-2425) Wrong results when identifier change cases within the same data file

2015-05-08 Thread Hanifi Gunes (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanifi Gunes updated DRILL-2425:

Assignee: Steven Phillips  (was: Hanifi Gunes)

 Wrong results when identifier change cases within the same data file
 

 Key: DRILL-2425
 URL: https://issues.apache.org/jira/browse/DRILL-2425
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Data Types
Affects Versions: 0.8.0
Reporter: Chun Chang
Assignee: Steven Phillips
Priority: Critical
 Fix For: 1.0.0


 #Fri Mar 06 16:51:10 EST 2015
 git.commit.id.abbrev=fb293ba
 I have the following JSON file in which one of the identifiers changes case:
 {code}
 [root@qa-node120 md-83]# hadoop fs -cat 
 /drill/testdata/complex_type/json/schema/a.json
 {"SOURCE": "ebm","msAddressIpv6Array": null}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msAddressIpv6_1":"99.111.222.0", "msAddressIpv6_2":"88.222.333.0"}}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msAddressIpv6_1":"99.111.222.1", "msAddressIpv6_2":"88.222.333.1"}}
 {"SOURCE": "ebm","msAddressIpv6Array": {"msaddressipv6_1":"99.111.222.2", "msAddressIpv6_2":"88.222.333.2"}}
 {code}
 Query this file through drill gives wrong results:
 {code}
 0: jdbc:drill:schema=dfs.drillTestDirComplexJ select 
 t.msAddressIpv6Array.msAddressIpv6_1 as msAddressIpv6_1 from `schema/a.json` 
 t;
 +-----------------+
 | msAddressIpv6_1 |
 +-----------------+
 | null            |
 | null            |
 | null            |
 | 99.111.222.2    |
 +-----------------+
 {code}
 plan:
 {code}
 0: jdbc:drill:schema=dfs.drillTestDirComplexJ explain plan for select 
 t.msAddressIpv6Array.msAddressIpv6_1 as msAddressIpv6_1 from `schema/a.json` 
 t;
 +------+------+
 | text | json |
 +------+------+
 | 00-00  Screen
 00-01  Project(msAddressIpv6_1=[ITEM($0, 'msAddressIpv6_1')])
 00-02    Scan(groupscan=[EasyGroupScan 
 [selectionRoot=/drill/testdata/complex_type/json/schema/a.json, numFiles=1, 
 columns=[`msAddressIpv6Array`.`msAddressIpv6_1`], 
 files=[maprfs:/drill/testdata/complex_type/json/schema/a.json]]])
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-2994) Incorrect error message when disconnecting from server (using direct connection to drillbit)

2015-05-08 Thread Hanifi Gunes (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535729#comment-14535729
 ] 

Hanifi Gunes commented on DRILL-2994:
-

+1

 Incorrect error message when disconnecting from server (using direct 
 connection to drillbit)
 

 Key: DRILL-2994
 URL: https://issues.apache.org/jira/browse/DRILL-2994
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Reporter: Parth Chandra
Assignee: Hanifi Gunes
Priority: Minor
 Fix For: 1.0.0

 Attachments: DRILL-2994.1.patch.diff


 If connected to the server using a direct drillbit connection, the JDBC client 
 (sqlline) prints an already disconnected error when disconnecting.
 This happens because of an exception: the client tries to close the ZK cluster 
 coordinator, which is null for a direct connection.
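 A minimal sketch of the kind of guard involved (illustrative only; the class 
 and field names are hypothetical, not the actual client code): skip the 
 coordinator shutdown when a direct drillbit connection never created one.
 {code}
 public class ClientCloseSketch {
   // Null when the client connected directly to a drillbit (no ZooKeeper involved).
   private AutoCloseable clusterCoordinator;

   public void close() throws Exception {
     if (clusterCoordinator != null) {  // guard avoids the spurious error on disconnect
       clusterCoordinator.close();
     }
   }
 }
 {code}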



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

