[jira] [Created] (DRILL-7211) Batch Sizing in MergingReceiver
Karthikeyan Manivannan created DRILL-7211: - Summary: Batch Sizing in MergingReceiver Key: DRILL-7211 URL: https://issues.apache.org/jira/browse/DRILL-7211 Project: Apache Drill Issue Type: Sub-task Reporter: Karthikeyan Manivannan Changes required in MergingReceiver to perform output batch sizing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-7210) Batch Sizing in HashPartitionSender
Karthikeyan Manivannan created DRILL-7210: - Summary: Batch Sizing in HashPartitionSender Key: DRILL-7210 URL: https://issues.apache.org/jira/browse/DRILL-7210 Project: Apache Drill Issue Type: Sub-task Reporter: Karthikeyan Manivannan Jira to track the changes required in HashPartitionSender for performing batch sizing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-7137) Implement unit test case to test Drill client <-> interaction in a secure setup
Karthikeyan Manivannan created DRILL-7137: - Summary: Implement unit test case to test Drill client <-> interaction in a secure setup Key: DRILL-7137 URL: https://issues.apache.org/jira/browse/DRILL-7137 Project: Apache Drill Issue Type: Improvement Reporter: Karthikeyan Manivannan Implement a unit test case for DRILL-7101. From the PR https://github.com/apache/drill/pull/1702: "Writing a test where the Drillbits (inside ClusterFixture) are setup with ZK_APPLY_SECURE_ACL=false (to avoid the need to setup a secure ZK server within the unit test) and the ClientFixture is setup with ZK_APPLY_SECURE_ACL=true (to simulate the failure). Starting a test with different values for the same property turns out to be quite hard because the ClusterFixture internally instantiates a ClientFixture. Changing this behavior might affect other tests." -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-7107) Unable to connect to Drill 1.15 through ZK
Karthikeyan Manivannan created DRILL-7107: - Summary: Unable to connect to Drill 1.15 through ZK Key: DRILL-7107 URL: https://issues.apache.org/jira/browse/DRILL-7107 Project: Apache Drill Issue Type: Bug Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan

After upgrading to Drill 1.15, users report that they are no longer able to connect to Drill using the ZK quorum. They get the following "Unable to setup ZK for client" error:

[~]$ sqlline -u "jdbc:drill:zk=172.16.2.165:5181;auth=maprsasl"
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLNonTransientConnectionException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
    at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:174)
    at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
    at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
    at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
    at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
    at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
    at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
    at sqlline.Commands.connect(Commands.java:1247)
    at sqlline.Commands.connect(Commands.java:1139)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
    at sqlline.SqlLine.dispatch(SqlLine.java:722)
    at sqlline.SqlLine.initArgs(SqlLine.java:416)
    at sqlline.SqlLine.begin(SqlLine.java:514)
    at sqlline.SqlLine.start(SqlLine.java:264)
    at sqlline.SqlLine.main(SqlLine.java:195)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
    at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:340)
    at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:165)
    ... 18 more
Caused by: java.lang.NullPointerException
    at org.apache.drill.exec.coord.zk.ZKACLProviderFactory.findACLProvider(ZKACLProviderFactory.java:68)
    at org.apache.drill.exec.coord.zk.ZKACLProviderFactory.getACLProvider(ZKACLProviderFactory.java:47)
    at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.<init>(ZKClusterCoordinator.java:114)
    at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.<init>(ZKClusterCoordinator.java:86)
    at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:337)
    ... 19 more
Apache Drill 1.15.0.0
"This isn't your grandfather's SQL."
sqlline>
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-7099) Resource Management in Exchange Operators
Karthikeyan Manivannan created DRILL-7099: - Summary: Resource Management in Exchange Operators Key: DRILL-7099 URL: https://issues.apache.org/jira/browse/DRILL-7099 Project: Apache Drill Issue Type: Bug Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan This Jira will be used to track the changes required for implementing Resource Management in Exchange operators. The design can be found here: https://docs.google.com/document/d/1N9OXfCWcp68jsxYVmSt9tPgnZRV_zk8rwwFh0BxXZeE/edit?usp=sharing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-7093) Batch Sizing in SingleSender
Karthikeyan Manivannan created DRILL-7093: - Summary: Batch Sizing in SingleSender Key: DRILL-7093 URL: https://issues.apache.org/jira/browse/DRILL-7093 Project: Apache Drill Issue Type: Bug Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan SingleSender batch sizing: SingleSender has no mechanism to control the size of the batches it sends to the receiver, which results in excessive memory use. This bug captures the changes required in SingleSender to control batch size using the RecordBatchSizer. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
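The batch-sizing sub-tasks above (DRILL-7211, DRILL-7210, DRILL-7093) all reduce to the same idea: cap each outgoing batch at a memory budget instead of a fixed row count. A minimal sketch of that calculation, with hypothetical names (this is not Drill's actual RecordBatchSizer API):

```java
// Sketch: derive an output row limit from a per-batch memory budget and
// the average row width measured on incoming data. Names are hypothetical;
// Drill's RecordBatchSizer exposes similar per-batch size estimates.
public class BatchSizingSketch {
    static final int MAX_ROWS = 65536; // value-vector row-count ceiling (64K)

    // budgetBytes: per-batch memory budget; avgRowWidthBytes: measured width
    static int targetRowCount(long budgetBytes, int avgRowWidthBytes) {
        long rows = budgetBytes / Math.max(1, avgRowWidthBytes);
        return (int) Math.min(Math.max(1, rows), MAX_ROWS);
    }

    public static void main(String[] args) {
        // A 16 MB budget with ~1 KB rows caps batches at 16384 rows.
        System.out.println(targetRowCount(16L << 20, 1024));
    }
}
```

The sketch collapses per-column estimates into one average row width for illustration; a real sender would re-measure as incoming batches change shape.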
[jira] [Created] (DRILL-7027) TPCH Queries hit IOB when planner.enable_demux_exchange = true
Karthikeyan Manivannan created DRILL-7027: - Summary: TPCH Queries hit IOB when planner.enable_demux_exchange = true Key: DRILL-7027 URL: https://issues.apache.org/jira/browse/DRILL-7027 Project: Apache Drill Issue Type: Bug Components: Execution - Flow Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan

Running TPCH queries on the SF100 dataset, a few queries (13, 14, and 19) hit an IOB exception:

{code}
java.sql.SQLException: SYSTEM ERROR: IndexOutOfBoundsException: index 154

Fragment 7:0

Please, refer to logs for more information.

[Error Id: e312dc77-0cad-4bc0-b90e-fb0d477ef272 on ucs-node2.perf.lab:31010]
    at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:528)
    at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:632)
    at org.apache.calcite.avatica.AvaticaResultSet.next(AvaticaResultSet.java:217)
    at org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:151)
    at PipSQueak.fetchRows(PipSQueak.java:346)
    at PipSQueak.runTest(PipSQueak.java:113)
    at PipSQueak.main(PipSQueak.java:477)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: IndexOutOfBoundsException: index 154

Fragment 7:0

Please, refer to logs for more information.

[Error Id: e312dc77-0cad-4bc0-b90e-fb0d477ef272 on ucs-node2.perf.lab:31010]
    at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
    at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
    at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
    at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
    at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at
{code}
[jira] [Created] (DRILL-6896) Extraneous columns being projected in Drill 1.15
Karthikeyan Manivannan created DRILL-6896: - Summary: Extraneous columns being projected in Drill 1.15 Key: DRILL-6896 URL: https://issues.apache.org/jira/browse/DRILL-6896 Project: Apache Drill Issue Type: Improvement Affects Versions: 1.15.0 Reporter: Karthikeyan Manivannan Assignee: Aman Sinha

[~rhou] noted that TPCH13 on Drill 1.15 was running slower than on Drill 1.14. Analysis revealed that an extra column was being projected in 1.15, and the slowdown occurred because the extra column was being unnecessarily pushed across an exchange. Here is a simplified query written by [~amansinha100] that exhibits the same problem. In the first plan (on 1.15.0), o_custkey and o_comment are both extraneous projections. The second plan (on 1.14.0) also has an extraneous projection, o_custkey, but not o_comment.

On 1.15.0:

explain plan without implementation for select c.c_custkey from cp.`tpch/customer.parquet` c left outer join cp.`tpch/orders.parquet` o on c.c_custkey = o.o_custkey and o.o_comment not like '%special%requests%';

DrillScreenRel
  DrillProjectRel(c_custkey=[$0])
    DrillProjectRel(c_custkey=[$2], o_custkey=[$0], o_comment=[$1])
      DrillJoinRel(condition=[=($2, $0)], joinType=[right])
        DrillFilterRel(condition=[NOT(LIKE($1, '%special%requests%'))])
          DrillScanRel(table=[[cp, tpch/orders.parquet]], groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=classpath:/tpch/orders.parquet]], selectionRoot=classpath:/tpch/orders.parquet, numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`o_custkey`, `o_comment`]]])
        DrillScanRel(table=[[cp, tpch/customer.parquet]], groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=classpath:/tpch/customer.parquet]], selectionRoot=classpath:/tpch/customer.parquet, numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`c_custkey`]]])

On 1.14.0:

DrillScreenRel
  DrillProjectRel(c_custkey=[$0])
    DrillProjectRel(c_custkey=[$1], o_custkey=[$0])
      DrillJoinRel(condition=[=($1, $0)], joinType=[right])
        DrillProjectRel(o_custkey=[$0])
          DrillFilterRel(condition=[NOT(LIKE($1, '%special%requests%'))])
            DrillScanRel(table=[[cp, tpch/orders.parquet]], groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=classpath:/tpch/orders.parquet]], selectionRoot=classpath:/tpch/orders.parquet, numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`o_custkey`, `o_comment`]]])
        DrillScanRel(table=[[cp, tpch/customer.parquet]], groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=classpath:/tpch/customer.parquet]], selectionRoot=classpath:/tpch/customer.parquet, numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`c_custkey`]]])
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6823) Tests that require ZK client-specific configuration have to be run standalone
Karthikeyan Manivannan created DRILL-6823: - Summary: Tests that require ZK client-specific configuration have to be run standalone Key: DRILL-6823 URL: https://issues.apache.org/jira/browse/DRILL-6823 Project: Apache Drill Issue Type: Improvement Reporter: Karthikeyan Manivannan The ZK libraries support only one client instance per machine per server, and that instance is cached. Tests that require client-specific configuration will therefore fail when run after other ZK tests that have already set up the client with a conflicting configuration. Some investigation is needed to determine whether the ZK ACL tests, and any other such tests, can be run standalone in our test framework. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (DRILL-4897) NumberFormatException in Drill SQL while casting to BIGINT when its actually a number
[ https://issues.apache.org/jira/browse/DRILL-4897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-4897.
---
Resolution: Not A Problem

> NumberFormatException in Drill SQL while casting to BIGINT when its actually
> a number
> -
>
> Key: DRILL-4897
> URL: https://issues.apache.org/jira/browse/DRILL-4897
> Project: Apache Drill
> Issue Type: Bug
> Components: Functions - Drill
> Reporter: Srihari Karanth
> Assignee: Karthikeyan Manivannan
> Priority: Blocker
> Fix For: 1.15.0
>
> In the following SQL, drill cribs when trying to convert a number which is in
> varchar
> select cast (case IsNumeric(Delta_Radio_Delay)
> when 0 then 0 else Delta_Radio_Delay end as BIGINT)
> from datasource.`./sometable`
> where Delta_Radio_Delay='4294967294';
> BIGINT should be able to take very large number. I dont understand how it
> throws the below error:
> 0: jdbc:drill:> select cast (case IsNumeric(Delta_Radio_Delay)
> when 0 then 0 else Delta_Radio_Delay end as BIGINT)
> from datasource.`./sometable`
> where Delta_Radio_Delay='4294967294';
> Error: SYSTEM ERROR: NumberFormatException: 4294967294
> Fragment 1:29
> [Error Id: a63bb113-271f-4d8b-8194-2c9728543200 on cluster-3:31010]
> (state=,code=0)
> How can i modify SQL to fix this?
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
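For context on the quoted error: 4294967294 is 2^32 - 2, which fits comfortably in a 64-bit BIGINT but overflows a signed 32-bit int. The reported NumberFormatException is exactly what Java produces when such a value is parsed as an int somewhere in the cast path, as a quick plain-Java sketch (not Drill code) shows:

```java
// Sketch: 4294967294 parses fine as a 64-bit long (BIGINT-sized) but
// fails as a 32-bit int, matching the error text in the report.
public class ParseRange {
    public static void main(String[] args) {
        String v = "4294967294"; // 2^32 - 2
        System.out.println(Long.parseLong(v)); // succeeds: fits in 64 bits
        try {
            Integer.parseInt(v); // overflows a signed 32-bit int
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + v);
        }
    }
}
```

Whether a 32-bit parse was actually in the cast path here is an inference from the error message; the issue itself was closed as Not A Problem.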
[jira] [Created] (DRILL-6754) Add a field to SV2 to indicate if the SV2 reorders the Record Batch
Karthikeyan Manivannan created DRILL-6754: - Summary: Add a field to SV2 to indicate if the SV2 reorders the Record Batch Key: DRILL-6754 URL: https://issues.apache.org/jira/browse/DRILL-6754 Project: Apache Drill Issue Type: Improvement Environment: The optimization in DRILL-6687 is not correct if an SV2 is used to re-order rows in the record batch. Currently, this is not a problem because none of the reordering operators (SORT, TOPN) use an SV2. SORT has code for SV2 but it is disabled. Adding a field to SV2 to indicate if the SV2 reorders the Record Batch would allow the safe application of the DRILL-6687 optimization. Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan -- This message was sent by Atlassian JIRA (v7.6.3#76005)
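A sketch of the proposed flag, with hypothetical names (this is not Drill's actual SelectionVector2 API): an SV2 carries 2-byte row indexes, and a boolean recording whether those indexes preserve input order would let an order-dependent optimization like DRILL-6687 check safety before firing.

```java
// Sketch of an SV2-like selection vector that records whether it
// preserves the original row order. Field and method names are
// hypothetical, not Drill's SelectionVector2.
public class SelectionVectorSketch {
    private final char[] indexes;          // SV2 uses 2-byte row indexes
    private final boolean preservesOrder;  // the flag proposed in DRILL-6754

    public SelectionVectorSketch(char[] indexes, boolean preservesOrder) {
        this.indexes = indexes.clone();
        this.preservesOrder = preservesOrder;
    }

    // An order-dependent optimization (e.g. DRILL-6687) may only apply
    // when the selection does not reorder the underlying batch.
    public boolean canApplyOrderDependentOptimization() {
        return preservesOrder;
    }

    public int index(int i) {
        return indexes[i];
    }
}
```

A filter produces an order-preserving SV2 (indexes ascending), while a sort that adopted SV2s would not; the flag makes that distinction explicit to downstream operators.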
[jira] [Created] (DRILL-6629) BitVector split and transfer does not work correctly for transfer length < 8
Karthikeyan Manivannan created DRILL-6629: - Summary: BitVector split and transfer does not work correctly for transfer length < 8 Key: DRILL-6629 URL: https://issues.apache.org/jira/browse/DRILL-6629 Project: Apache Drill Issue Type: Improvement Components: Execution - Data Types Environment: BitVector split and transfer does not work correctly for transfer length < 8. Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan -- This message was sent by Atlassian JIRA (v7.6.3#76005)
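To see why sub-byte transfer lengths are the hard case: copying whole bytes over-copies, so the tail bits must be shifted and masked individually. A self-contained illustration of a correct bit-range copy (LSB-first packing; this is an illustration of the technique, not Drill's BitVector implementation):

```java
// Sketch: copy `length` bits starting at bit `offset` from a bit-packed
// source into a fresh bit-packed array (LSB-first within each byte).
// This per-bit shift-and-mask is what a splitAndTransfer must get right
// when length is not a byte multiple, e.g. length < 8.
public class BitSplit {
    static byte[] splitBits(byte[] src, int offset, int length) {
        byte[] dst = new byte[(length + 7) / 8];
        for (int i = 0; i < length; i++) {
            int bit = (src[(offset + i) >> 3] >> ((offset + i) & 7)) & 1;
            if (bit != 0) {
                dst[i >> 3] |= (byte) (1 << (i & 7));
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        byte[] src = { (byte) 0b10110100 };
        // Take 3 bits starting at bit 2: bits 2,3,4 are 1,0,1 -> 0b101 = 5
        System.out.println(splitBits(src, 2, 3)[0]); // 5
    }
}
```

A faster implementation would copy whole bytes and fix up only the head and tail; the per-bit loop keeps the masking logic obvious.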
[jira] [Created] (DRILL-6601) LargeFileCompilation testProject times out
Karthikeyan Manivannan created DRILL-6601: - Summary: LargeFileCompilation testProject times out Key: DRILL-6601 URL: https://issues.apache.org/jira/browse/DRILL-6601 Project: Apache Drill Issue Type: Improvement Reporter: Karthikeyan Manivannan The number of columns projected by testProject was bumped up from 5K to 10K in DRILL-6529. Changing this back to 5K should reduce the stress on this test yet stay within the threshold needed to test constant-pool constraints. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6529) Project Batch Sizing causes two LargeFileCompilation tests to fail
Karthikeyan Manivannan created DRILL-6529: - Summary: Project Batch Sizing causes two LargeFileCompilation tests to fail Key: DRILL-6529 URL: https://issues.apache.org/jira/browse/DRILL-6529 Project: Apache Drill Issue Type: Improvement Components: Execution - Relational Operators Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan Timeout failures are seen in TestLargeFileCompilation testExternal_Sort and testTop_N_Sort. These tests are stress tests for compilation, where the queries cover projections over 5000 columns and sorts over 500 columns. The tests pass when run stand-alone; something triggers the timeouts when they are run in parallel as part of a unit test run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6493) Replace BitVector with UInt1Vector
Karthikeyan Manivannan created DRILL-6493: - Summary: Replace BitVector with UInt1Vector Key: DRILL-6493 URL: https://issues.apache.org/jira/browse/DRILL-6493 Project: Apache Drill Issue Type: Improvement Reporter: Karthikeyan Manivannan BitVector stores each value in a single bit of storage space. UInt1Vector is an alternate implementation that uses a full byte to store each bit. Recently discovered bugs in BitVector and anecdotal evidence of performance issues suggest that this code is slow and buggy. I am opening this issue to analyze the impact of replacing BitVector with UInt1Vector. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
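The trade-off under analysis shows up directly in the accessors: a bit-packed vector needs a shift-and-mask on every access, while a byte-per-value vector is a plain array load at 8x the memory. An illustrative sketch (not Drill's vector code):

```java
// Illustrative accessors and storage footprints for the two layouts
// compared in DRILL-6493. Not Drill's actual vector implementations.
public class BitVsByte {
    // BitVector-style: one bit per value, shift + mask on each access.
    static int getBitPacked(byte[] buf, int i) {
        return (buf[i >> 3] >> (i & 7)) & 1;
    }

    // UInt1-style: one byte per value, a plain load on each access.
    static int getBytePerValue(byte[] buf, int i) {
        return buf[i];
    }

    static int bitPackedFootprint(int valueCount) {
        return (valueCount + 7) / 8;
    }

    static int bytePerValueFootprint(int valueCount) {
        return valueCount;
    }

    public static void main(String[] args) {
        byte[] packed = { (byte) 0b00000101 }; // values 1,0,1,0,...
        byte[] bytes  = { 1, 0, 1 };
        System.out.println(getBitPacked(packed, 2));   // 1
        System.out.println(getBytePerValue(bytes, 2)); // 1
        // For 64K values: 8 KB bit-packed vs 64 KB byte-per-value.
        System.out.println(bitPackedFootprint(65536));
        System.out.println(bytePerValueFootprint(65536));
    }
}
```

Whether the simpler accessor outweighs the 8x memory cost in Drill's workloads is precisely what the issue proposes to measure.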
[jira] [Resolved] (DRILL-6486) BitVector split and transfer does not work correctly for non byte-multiple transfer lengths
[ https://issues.apache.org/jira/browse/DRILL-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-6486.
---
Resolution: Fixed

> BitVector split and transfer does not work correctly for non byte-multiple
> transfer lengths
> ---
>
> Key: DRILL-6486
> URL: https://issues.apache.org/jira/browse/DRILL-6486
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Data Types
> Affects Versions: 1.13.0
> Reporter: Karthikeyan Manivannan
> Assignee: Karthikeyan Manivannan
> Priority: Major
> Fix For: 1.14.0
>
> Attachments: TestSplitAndTransfer.java
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> BitVector splitAndTransfer does not correctly handle transfers where the
> transfer-length is not a multiple of 8. The attached BitVector tests will
> expose this problem.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6486) BitVector split and transfer does not work correctly for non byte-multiple transfer lengths
Karthikeyan Manivannan created DRILL-6486: - Summary: BitVector split and transfer does not work correctly for non byte-multiple transfer lengths Key: DRILL-6486 URL: https://issues.apache.org/jira/browse/DRILL-6486 Project: Apache Drill Issue Type: Bug Components: Execution - Data Types Affects Versions: 1.13.0 Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan Fix For: 1.14.0 Attachments: TestSplitAndTransfer.java BitVector splitAndTransfer does not correctly handle transfers where the transfer-length is not a multiple of 8. The attached bitVector tests will expose this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6340) Output Batch Control in Project using the RecordBatchSizer
Karthikeyan Manivannan created DRILL-6340: - Summary: Output Batch Control in Project using the RecordBatchSizer Key: DRILL-6340 URL: https://issues.apache.org/jira/browse/DRILL-6340 Project: Apache Drill Issue Type: Improvement Components: Execution - Relational Operators Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan This bug is for tracking the changes required to implement Output Batch Sizing in Project using the RecordBatchSizer. The challenge in doing this mainly lies in dealing with expressions that produce variable-length columns. The following doc talks about some of the design approaches for dealing with such variable-length columns. [https://docs.google.com/document/d/1h0WsQsen6xqqAyyYSrtiAniQpVZGmQNQqC1I2DJaxAA/edit?usp=sharing] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (DRILL-3928) OutOfMemoryException should not be derived from FragmentSetupException
[ https://issues.apache.org/jira/browse/DRILL-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-3928.
---
Resolution: Not A Problem

OutOfMemoryException is not derived from FragmentSetupException

> OutOfMemoryException should not be derived from FragmentSetupException
> --
>
> Key: DRILL-3928
> URL: https://issues.apache.org/jira/browse/DRILL-3928
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Flow
> Affects Versions: 1.2.0
> Reporter: Chris Westin
> Assignee: Karthikeyan Manivannan
> Priority: Major
>
> Discovered while working on DRILL-3927.
> The client and server both use the same direct memory allocator code. But the
> allocator's OutOfMemoryException is derived from FragmentSetupException
> (which is derived from ForemanException).
> Firstly, OOM situations don't only happen during setup.
> Secondly, Fragment and Foreman classes shouldn't exist on the client side.
> (This is causing unnecessary dependencies in the jdbc-all jar on server-only
> code).
> There's nothing special in those base classes that OutOfMemoryException
> depends on. This looks like it was just a cheap way to avoid extra catch
> clauses in Foreman and FragmentExecutor by catching the base classes only.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (DRILL-520) ceiling/ceil and floor functions return decimal value instead of an integer
[ https://issues.apache.org/jira/browse/DRILL-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-520.
--
Resolution: Later

> ceiling/ceil and floor functions return decimal value instead of an integer
> ---
>
> Key: DRILL-520
> URL: https://issues.apache.org/jira/browse/DRILL-520
> Project: Apache Drill
> Issue Type: Bug
> Components: Functions - Drill
> Affects Versions: 1.0.0
> Reporter: Krystal
> Assignee: Karthikeyan Manivannan
> Priority: Critical
> Fix For: Future
>
> Attachments: DRILL-520.patch
>
> Ran the following queries in drill:
> 0: jdbc:drill:schema=dfs> select ceiling(55.8) from dfs.`student` where
> rownum=11;
> ++
> | EXPR$0 |
> ++
> | 56.0 |
> ++
> 0: jdbc:drill:schema=dfs> select floor(55.8) from dfs.`student` where
> rownum=11;
> ++
> | EXPR$0 |
> ++
> | 55.0 |
> ++
> The same queries executed from oracle, postgres and mysql returned integer
> values of 56 and 55.
> Found the following description of the two functions from
> http://users.atw.hu/sqlnut/sqlnut2-chp-4-sect-4.html :
> Ceil/Ceiling:
> Rounds a noninteger value upwards to the next greatest integer. Returns an
> integer value unchanged.
> Floor:
> Rounds a noninteger value downwards to the next least integer. Returns an
> integer value unchanged.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
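The reported behavior mirrors Java's own convention: Math.ceil and Math.floor return double, and an integer result requires an explicit narrowing cast. Drill's decimal output is consistent with the function preserving the argument's numeric type rather than narrowing it:

```java
// Java's ceil/floor keep the floating-point type of the argument;
// narrowing to an integer is a separate, explicit step.
public class CeilFloorTypes {
    public static void main(String[] args) {
        System.out.println(Math.ceil(55.8));         // 56.0 (a double)
        System.out.println(Math.floor(55.8));        // 55.0 (a double)
        System.out.println((long) Math.ceil(55.8));  // 56   (integer after cast)
    }
}
```

On the SQL side the analogous workaround would be wrapping the call in a CAST to an integer type, though the issue was deferred (Resolution: Later) rather than changed.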
[jira] [Resolved] (DRILL-6083) RestClientFixture does not connect to the correct webserver port
[ https://issues.apache.org/jira/browse/DRILL-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-6083.
---
Resolution: Not A Problem

> RestClientFixture does not connect to the correct webserver port
>
> Key: DRILL-6083
> URL: https://issues.apache.org/jira/browse/DRILL-6083
> Project: Apache Drill
> Issue Type: Bug
> Components: Tools, Build & Test
> Affects Versions: Future
> Reporter: Karthikeyan Manivannan
> Assignee: Karthikeyan Manivannan
> Priority: Major
> Fix For: 1.13.0
>
> RestClientFixture always connects to the default http port (8047) instead of
> connecting to the webserver-port of the cluster. The cluster's webserver port
> won't be 8047 if there are other Drillbits running when the cluster is
> launched.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (DRILL-6083) RestClientFixture does not connect to the correct webserver port
Karthikeyan Manivannan created DRILL-6083: - Summary: RestClientFixture does not connect to the correct webserver port Key: DRILL-6083 URL: https://issues.apache.org/jira/browse/DRILL-6083 Project: Apache Drill Issue Type: Bug Components: Tools, Build & Test Affects Versions: Future Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan Fix For: 1.13.0 RestClientFixture always connects to the default http port (8047) instead of connecting to the webserver-port of the cluster. The cluster's webserver port won't be 8047 if there are other Drillbits running when the cluster is launched. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (DRILL-4708) connection closed unexpectedly
[ https://issues.apache.org/jira/browse/DRILL-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-4708.
---
Resolution: Works for Me

> connection closed unexpectedly
> --
>
> Key: DRILL-4708
> URL: https://issues.apache.org/jira/browse/DRILL-4708
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - RPC
> Affects Versions: 1.7.0
> Reporter: Chun Chang
> Assignee: Karthikeyan Manivannan
> Priority: Critical
> Attachments: data.tgz
>
> Running DRILL functional automation, we often see query failed randomly due
> to the following unexpected connection close error.
> {noformat}
> Execution Failures:
> /root/drillAutomation/framework/framework/resources/Functional/ctas/ctas_flatten/10rows/filter5.q
> Query:
> select * from dfs.ctas_flatten.`filter5_10rows_ctas`
> Failed with exception
> java.sql.SQLException: CONNECTION ERROR: Connection /10.10.100.171:36185 <-->
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly.
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
> at
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
> at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:321)
> at
> oadd.net.hydromatic.avatica.AvaticaResultSet.next(AvaticaResultSet.java:187)
> at
> org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:172)
> at
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:210)
> at
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:99)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserException: CONNECTION
> ERROR: Connection /10.10.100.171:36185 <-->
> drillats4.qa.lab/10.10.100.174:31010 (user client) closed unexpectedly.
> Drillbit down?
> [Error Id: 3d5dad8e-80d0-4c7f-9012-013bf01ce2b7 ]
> at
> oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
> at
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373)
> at
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at
> oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at
> oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> at
> oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
> at
> oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
> at
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
> at
> oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
> at
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71)
> at
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89)
> at
> oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
> at
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at
> oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at
> oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> ... 1 more
> {noformat}
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5671) Set a secure ACL (Access Control List) for Drill ZK nodes in a secure cluster
Karthikeyan Manivannan created DRILL-5671: - Summary: Set a secure ACL (Access Control List) for Drill ZK nodes in a secure cluster Key: DRILL-5671 URL: https://issues.apache.org/jira/browse/DRILL-5671 Project: Apache Drill Issue Type: New Feature Components: Server Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan

Currently, all Drill ZK nodes are assigned a default [world:all] ACL, i.e., anyone gets to do CDRWA (create, delete, read, write, admin). This means that even on a secure cluster anyone can perform all CDRWA actions on the znodes. This should be changed such that:
- In a non-secure cluster, Drill continues to use the current default [world:all] ACL.
- In a secure cluster, all znodes get an [authid:all] ACL, i.e., the authenticated user that created the znode gets full access. The discovery znodes, i.e., the znodes holding the list of Drillbits, get an additional [world:read] ACL, so the list of Drillbits is readable by anyone.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
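The policy above can be summarized in ZooKeeper's textual scheme:id:perms ACL notation. A plain-Java sketch of the selection logic (method names are illustrative, and "creator" is a placeholder id; ZooKeeper's real `auth` scheme typically ignores the id field and stands for the authenticated session user):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the DRILL-5671 ACL policy as scheme:id:perms strings.
// Illustrative only; not Drill's ZKACLProvider implementation.
public class ZkAclPolicy {
    static List<String> aclsFor(boolean secureCluster, boolean discoveryZnode) {
        List<String> acls = new ArrayList<>();
        if (!secureCluster) {
            // Current default: [world:all], open to everyone.
            acls.add("world:anyone:cdrwa");
            return acls;
        }
        // [authid:all]: the authenticated creator gets full CDRWA access.
        acls.add("auth:creator:cdrwa");
        if (discoveryZnode) {
            // [world:read]: the Drillbit list stays readable by anyone.
            acls.add("world:anyone:r");
        }
        return acls;
    }

    public static void main(String[] args) {
        System.out.println(aclsFor(true, true));
    }
}
```

Keeping the discovery znodes world-readable is what lets unauthenticated clients still enumerate Drillbits for connection setup while everything else is locked down.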
[jira] [Resolved] (DRILL-5567) Review changes for DRILL 5514
[ https://issues.apache.org/jira/browse/DRILL-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthikeyan Manivannan resolved DRILL-5567.
---
Resolution: Done

> Review changes for DRILL 5514
> -
>
> Key: DRILL-5567
> URL: https://issues.apache.org/jira/browse/DRILL-5567
> Project: Apache Drill
> Issue Type: Sub-task
> Reporter: Karthikeyan Manivannan
> Assignee: Karthikeyan Manivannan
> Fix For: 1.11.0
>
> Original Estimate: 2h
> Remaining Estimate: 2h
>
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (DRILL-5567) Review changes for DRILL 5514
Karthikeyan Manivannan created DRILL-5567: - Summary: Review changes for DRILL 5514 Key: DRILL-5567 URL: https://issues.apache.org/jira/browse/DRILL-5567 Project: Apache Drill Issue Type: Sub-task Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (DRILL-5547) Drill config options and session options do not work as intended
Karthikeyan Manivannan created DRILL-5547: - Summary: Drill config options and session options do not work as intended Key: DRILL-5547 URL: https://issues.apache.org/jira/browse/DRILL-5547 Project: Apache Drill Issue Type: Bug Components: Server Affects Versions: 1.10.0 Reporter: Karthikeyan Manivannan Assignee: Venkata Jyothsna Donapati Fix For: Future In Drill, session options should take precedence over config options, but several session options are assigned hard-coded default values when the option validators are initialized. Because of this, the config options will never be read and honored, even when the user did not explicitly set the session option. For example, ClassCompilerSelector.JAVA_COMPILER_VALIDATOR uses CompilerPolicy.DEFAULT as its default value. This default gets into the session options map via the initialization of validators in SystemOptionManager. Any code that checks whether a session option is set will then never see a null, so it always uses that value and never falls back to the config options. For instance, in the following piece of code from ClassCompilerSelector, the policy will never be read from the config file: policy = CompilerPolicy.valueOf((value != null) ? value.string_val.toUpperCase() : config.getString(JAVA_COMPILER_CONFIG).toUpperCase()); -- This message was sent by Atlassian JIRA (v6.3.15#6346)
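The failure mode described above is easy to see in isolation: once the option map is pre-seeded with a hard-coded default, the `value != null` test can never choose the config branch. An illustrative sketch (names are not Drill's OptionManager API):

```java
// Sketch of the precedence bug in DRILL-5547: if the session option map
// is pre-seeded with a hard-coded default, the config-file fallback
// becomes dead code. Illustrative names, not Drill's option classes.
public class OptionPrecedence {
    // Intended behavior: the session option wins only when the user
    // actually set it; otherwise fall back to the config file.
    static String resolve(String sessionValue, String configValue) {
        return sessionValue != null ? sessionValue : configValue;
    }

    public static void main(String[] args) {
        // Validator seeded "DEFAULT" even though the user set nothing,
        // so the config value "JDK" is silently ignored:
        System.out.println(resolve("DEFAULT", "JDK"));
        // Without the seeded default the config value would be honored:
        System.out.println(resolve(null, "JDK"));
    }
}
```

The fix direction implied by the report is to keep "user never set this" representable (e.g. leave the entry null, or track set-ness separately) so the fallback branch stays reachable.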
[jira] [Created] (DRILL-5121) A memory leak is observed when exact case is not specified for a column in a filter condition
Karthikeyan Manivannan created DRILL-5121: - Summary: A memory leak is observed when exact case is not specified for a column in a filter condition Key: DRILL-5121 URL: https://issues.apache.org/jira/browse/DRILL-5121 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.8.0, 1.6.0 Reporter: Karthikeyan Manivannan Assignee: Karthikeyan Manivannan Fix For: Future When the query SELECT XYZ FROM dfs.`/tmp/foo` WHERE xYZ LIKE 'abc' is executed on a setup where /tmp/foo has two Parquet files, 1.parquet and 2.parquet, and 1.parquet has the column XYZ but 2.parquet does not, there is a memory leak. This seems to happen because xYZ is treated as a new column. -- This message was sent by Atlassian JIRA (v6.3.4#6332)