[jira] [Created] (DRILL-4696) Four-table inner join: after waiting a long time, Drill reports java.lang.OutOfMemoryError: Java heap space

2016-05-26 Thread david_hudavy (JIRA)
david_hudavy created DRILL-4696:
---

 Summary: Four-table inner join: after waiting a long time, Drill reports 
java.lang.OutOfMemoryError: Java heap space
 Key: DRILL-4696
 URL: https://issues.apache.org/jira/browse/DRILL-4696
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.6.0
 Environment: Test Environment:
SUSE Linux Enterprise Server 11 SP3  (x86_64) cluster
MySQL 5.7.11 Enterprise Server - Advanced Edition 
Drill cluster

Reporter: david_hudavy


Test Environment:
cluster 10-3
MySQL 5.7.11 Enterprise Server - Advanced Edition 
Drill cluster

Test Scope:
SELECT performance on huge tables (30M records each).

MySQL tables: Eps, Eps_EpsImei, mscIden, EpsStatic inner join (each of the four 
tables has 30M records)

-- four-table inner join (takes a long time, then Drill crashes)
0: jdbc:drill:zk=SC-1:6181,SC-2:6181,PL-3:618> select
. . . . . . . . . . . . . . . . . . . . . . .> EpsStatic.EpsProfileId,
. . . . . . . . . . . . . . . . . . . . . . .> mscIden.mscId,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeAddr,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeRealm,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsLastInsertSent,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMobilityNotifInfo,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsAaaAddr ,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsAaaRealm ,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeRegServ,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsHomoImsVoip ,
. . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsUeSrVccCap,
. . . . . . . . . . . . . . . . . . . . . . .> Eps_EpsImei.EpsImeiSv
. . . . . . . . . . . . . . . . . . . . . . .> from 
mysql.user_data.Eps,mysql.user_data.Eps_EpsImei,mysql.user_data.mscIden,mysql.user_data.EpsStatic
. . . . . . . . . . . . . . . . . . . . . . .> where 
mscIden.mscId=Eps.mscId and Eps.mscId =Eps_EpsImei.mscId and 
Eps_EpsImei.mscId=EpsStatic.mscId
. . . . . . . . . . . . . . . . . . . . . . .> and  mscIden.mscId='0';
Drill crash:
2016-05-13 09:52:35,131 [28cacd19-0f04-cbb1-b418-73a76dcd6ebe:frag:0:0] ERROR 
o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, exiting. 
Information message: Unable to handle out of memory condition in 
FragmentExecutor.
java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2157) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1964) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3316) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:463) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at 
com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3040) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2288) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2681) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2547) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2505) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1370) 
~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
 ~[commons-dbcp-1.4.jar:1.4]
at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
 ~[commons-dbcp-1.4.jar:1.4]
at 
org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup(JdbcRecordReader.java:177)
 ~[drill-jdbc-storage-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.physical.impl.ScanBatch.<init>(ScanBatch.java:108) 
~[drill-java-exec-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.physical.impl.ScanBatch.<init>(ScanBatch.java:136) 
~[drill-java-exec-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch(JdbcBatchCreator.java:40)
 ~[drill-jdbc-storage-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch(JdbcBatchCreator.java:33)
 ~[drill-jdbc-storage-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:146)
 ~[drill-java-exec-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:169)
 ~[drill-java-exec-1.6.0.jar:1.6.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:
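
For context on where the heap goes: MySQL Connector/J by default reads the 
entire result set into the JVM heap before handing any row to the caller (the 
MysqlIO.readAllResults frame above). Below is a minimal plain-JDBC sketch of 
the driver's streaming mode; the connection details are placeholders and this 
only illustrates the driver behaviour, not Drill's JdbcRecordReader code path.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingFetchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, for illustration only.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/user_data", "user", "pass");

        // A forward-only, read-only statement with fetch size Integer.MIN_VALUE
        // switches Connector/J to row-by-row streaming instead of buffering
        // the whole result set in heap.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

        try (ResultSet rs = stmt.executeQuery("SELECT mscId FROM Eps")) {
            while (rs.next()) {
                // process one row at a time without holding 30M rows in memory
            }
        }
        conn.close();
    }
}
{code}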

[jira] [Closed] (DRILL-4577) Improve performance for query on INFORMATION_SCHEMA when HIVE is plugged in

2016-05-26 Thread Dechang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dechang Gu closed DRILL-4577.
-

Verified and added test cases to the perf test framework.

> Improve performance for query on INFORMATION_SCHEMA when HIVE is plugged in
> ---
>
> Key: DRILL-4577
> URL: https://issues.apache.org/jira/browse/DRILL-4577
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Hive
>Reporter: Sean Hsuan-Yi Chu
>Assignee: Sean Hsuan-Yi Chu
> Fix For: 1.7.0
>
>
> A query such as 
> {code}
> select * from INFORMATION_SCHEMA.`TABLES` 
> {code}
> is converted into calls that fetch all tables from the storage plugins. 
> When users have Hive, the calls made to the Hive metastore are: 
> 1) get_table
> 2) get_partitions
> However, the partition information is not used in this type of query. 
> Besides, a more efficient way to fetch the tables is to use the 
> get_multi_table call.
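> A rough sketch of what the bulk fetch could look like with the Hive metastore 
> Java client (getTableObjectsByName); the client setup and the integration into 
> Drill's schema code are assumptions for illustration only:
> {code:java}
> import java.util.List;
> 
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
> import org.apache.hadoop.hive.metastore.api.Table;
> 
> public class BulkTableFetchSketch {
>     public static void main(String[] args) throws Exception {
>         HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
>         String db = "default";
>         // One call for the names, one bulk call for the table objects;
>         // no per-table get_table or get_partitions round trips.
>         List<String> names = client.getAllTables(db);
>         List<Table> tables = client.getTableObjectsByName(db, names);
>         for (Table t : tables) {
>             System.out.println(t.getDbName() + "." + t.getTableName());
>         }
>         client.close();
>     }
> }
> {code}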



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-4237) Skew in hash distribution

2016-05-26 Thread Dechang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dechang Gu closed DRILL-4237.
-

Verified with the TPC-H regression tests; no regression was observed.

> Skew in hash distribution
> -
>
> Key: DRILL-4237
> URL: https://issues.apache.org/jira/browse/DRILL-4237
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.4.0
>Reporter: Aman Sinha
>Assignee: Chunhui Shi
> Fix For: 1.7.0
>
>
> Apparently, the fix in DRILL-4119 did not fully resolve the data skew issue.  
> It worked fine on the smaller sample of the data set but on another sample of 
> the same data set, it still produces skewed values; see the hash values 
> below, which are all odd numbers. 
> {noformat}
> 0: jdbc:drill:zk=local> select columns[0], hash32(columns[0]) from `test.csv` 
> limit 10;
> +---+--+
> |  EXPR$0   |EXPR$1|
> +---+--+
> | f71aaddec3316ae18d43cb1467e88a41  | 1506011089   |
> | 3f3a13bb45618542b5ac9d9536704d3a  | 1105719049   |
> | 6935afd0c693c67bba482cedb7a2919b  | -18137557|
> | ca2a938d6d7e57bda40501578f98c2a8  | -1372666789  |
> | fab7f08402c8836563b0a5c94dbf0aec  | -1930778239  |
> | 9eb4620dcb68a84d17209da279236431  | -970026001   |
> | 16eed4a4e801b98550b4ff504242961e  | 356133757|
> | a46f7935fea578ce61d8dd45bfbc2b3d  | -94010449|
> | 7fdf5344536080c15deb2b5a2975a2b7  | -141361507   |
> | b82560a06e2e51b461c9fe134a8211bd  | -375376717   |
> +---+--+
> {noformat}
> This indicates an underlying issue with the XXHash64 Java implementation, 
> which is Drill's port of the C version.  One of the key differences, as 
> pointed out by [~jnadeau], is the use of unsigned int64 in the C version 
> compared to the (signed) long used in the Java version.  I created an XXHash 
> version using com.google.common.primitives.UnsignedLong.  However, 
> UnsignedLong does not have the bit-wise operations that XXHash needs, such as 
> rotateLeft(), XOR, etc.  One could write wrappers for these, but at this 
> point the question is: should we consider an alternative hash function? 
> The alternative could be the murmur hash that we were using earlier for 
> numeric data types and the Mahout version of the hash function for string 
> types 
> (https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/HashHelper.java#L28). 
> As a test, I reverted to this function and got good hash distribution for 
> the test data. 
> I could not find any performance comparisons from our perf tests (TPC-H or 
> DS) between the original and the newer (XXHash) hash functions.  If 
> performance is comparable, should we revert to the original function? 
> As an aside, I would like to remove the hash64 versions of these functions 
> since they are not used anywhere. 
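> For reference, the unsigned-vs-signed concern mostly affects comparisons and 
> division: multiply, add, XOR and rotate produce the same bit patterns on 
> Java's signed long as on an unsigned int64. A small sketch (the constants are 
> the published XXH64 primes; this is not Drill's implementation):
> {code:java}
> public class Unsigned64Sketch {
>     private static final long PRIME64_1 = 0x9E3779B185EBCA87L; // published XXH64 prime
>     private static final long PRIME64_2 = 0xC2B2AE3D27D4EB4FL; // published XXH64 prime
> 
>     // XXH64-style mixing step: bit-identical whether the longs are read as
>     // signed or unsigned, since two's-complement *, +, ^ and rotate agree.
>     static long round(long acc, long input) {
>         acc += input * PRIME64_2;
>         acc = Long.rotateLeft(acc, 31);
>         return acc * PRIME64_1;
>     }
> 
>     public static void main(String[] args) {
>         long h = round(0L, 0x0123456789ABCDEFL);
>         // Where unsignedness does matter, Java 8 has helpers:
>         System.out.println(Long.toUnsignedString(h));
>         System.out.println(Long.compareUnsigned(h, 0L) > 0);
>     }
> }
> {code}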



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4697) Editing of multi-line queries in sqlline

2016-05-26 Thread Matt Keranen (JIRA)
Matt Keranen created DRILL-4697:
---

 Summary: Editing of multi-line queries in sqlline
 Key: DRILL-4697
 URL: https://issues.apache.org/jira/browse/DRILL-4697
 Project: Apache Drill
  Issue Type: Wish
  Components: Client - CLI
Affects Versions: 1.6.0
Reporter: Matt Keranen
Priority: Minor


Coming from the PostgreSQL psql client, editing multi-line queries in sqlline 
seems difficult. Each line of a query not yet terminated with a semicolon 
appears to become a separate history entry, and therefore cannot be edited as a 
whole.

The ability to up-arrow to bring back the entire previous query would be very 
helpful, as would the option to send the query text to an external shell editor 
(vi).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-4616) Running out of /tmp results in "Memory was leaked by query"

2016-05-26 Thread Matt Keranen (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Keranen closed DRILL-4616.
---
Resolution: Duplicate

> Running out of /tmp results in "Memory was leaked by query"
> ---
>
> Key: DRILL-4616
> URL: https://issues.apache.org/jira/browse/DRILL-4616
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.6.0
> Environment: MapR community cluster v. 5.1.0.37549.GA, 4 nodes Ubuntu 
> 14.04, MapR-FS
> Also Vanilla Hadoop 2.7.0 HDFS
> Each node has Maximum Direct Memory of 25,769,803,776 bytes (24 GiB)
>Reporter: Matt Keranen
>
> Attempting to convert CSV file data to partitioned Parquet files via sqlline 
> with a statement such as
> {noformat}
> CREATE TABLE () PARTITION BY (date_tm) AS SELECT ...
> Error: SYSTEM ERROR: IllegalStateException: Memory was leaked by query. 
> Memory leaked: (523264)
> Allocator(op:1:16:5:ExternalSort) 2000/523264/1361178240/1431655765 
> (res/actual/peak/limit)
> Fragment 1:16
> [Error Id: cea4a79d-3e85-4e51-b6c2-1539f40dee10 on es05:31010]
>   (java.lang.IllegalStateException) Memory was leaked by query. Memory 
> leaked: (523264)
> Allocator(op:1:16:5:ExternalSort) 2000/523264/1361178240/1431655765 
> (res/actual/peak/limit)
> org.apache.drill.exec.memory.BaseAllocator.close():492
> org.apache.drill.exec.ops.OperatorContextImpl.close():124
> org.apache.drill.exec.ops.FragmentContext.suppressingClose():416
> org.apache.drill.exec.ops.FragmentContext.close():405
> 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():343
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():180
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():287
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745 (state=,code=0)
> {noformat}
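> The external sort spills to local disk (by default under /tmp), so a full /tmp 
> surfaces as the allocator failure above. A possible mitigation, sketched for 
> drill-override.conf with placeholder paths (key names as in 
> drill-override-example.conf; verify them against your Drill version):
> {noformat}
> drill.exec.sort.external.spill: {
>   fs: "file:///",
>   # placeholder: any filesystem path with more free space than /tmp
>   directories: [ "/data/drill/spill" ]
> }
> {noformat}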



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4616) Running out of /tmp results in "Memory was leaked by query"

2016-05-26 Thread Matt Keranen (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303055#comment-15303055
 ] 

Matt Keranen commented on DRILL-4616:
-

This is indeed a duplicate of DRILL-4542.

> Running out of /tmp results in "Memory was leaked by query"
> ---
>
> Key: DRILL-4616
> URL: https://issues.apache.org/jira/browse/DRILL-4616
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.6.0
> Environment: MapR community cluster v. 5.1.0.37549.GA, 4 nodes Ubuntu 
> 14.04, MapR-FS
> Also Vanilla Hadoop 2.7.0 HDFS
> Each node has Maximum Direct Memory of 25,769,803,776 bytes (24 GiB)
>Reporter: Matt Keranen
>
> Attempting to convert CSV file data to partitioned Parquet files via sqlline 
> with a statement such as
> {noformat}
> CREATE TABLE () PARTITION BY (date_tm) AS SELECT ...
> Error: SYSTEM ERROR: IllegalStateException: Memory was leaked by query. 
> Memory leaked: (523264)
> Allocator(op:1:16:5:ExternalSort) 2000/523264/1361178240/1431655765 
> (res/actual/peak/limit)
> Fragment 1:16
> [Error Id: cea4a79d-3e85-4e51-b6c2-1539f40dee10 on es05:31010]
>   (java.lang.IllegalStateException) Memory was leaked by query. Memory 
> leaked: (523264)
> Allocator(op:1:16:5:ExternalSort) 2000/523264/1361178240/1431655765 
> (res/actual/peak/limit)
> org.apache.drill.exec.memory.BaseAllocator.close():492
> org.apache.drill.exec.ops.OperatorContextImpl.close():124
> org.apache.drill.exec.ops.FragmentContext.suppressingClose():416
> org.apache.drill.exec.ops.FragmentContext.close():405
> 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():343
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():180
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():287
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745 (state=,code=0)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4573) Zero copy LIKE, REGEXP_MATCHES, SUBSTR

2016-05-26 Thread jean-claude (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303306#comment-15303306
 ] 

jean-claude commented on DRILL-4573:


Using the CharsetDecoder.decode method had the same performance as creating 
full String objects. My tests only ran for a few minutes, so it is possible they 
would need to run longer to saturate the VM, but it is still nowhere near the 
performance of just checking ASCII bytes.

I had thought of testing the buffer for ASCII first and then switching strategy, 
so after seeing the results just mentioned I tried that test-and-switch 
approach. Instead of 25% faster I got 17%, so the test itself seems to add about 
7%. This was measured over a text file containing a UUID on each line.

I think a test-and-switch strategy is potentially beneficial, but I feel it 
should have a configuration option to turn the additional ASCII check on or off, 
since it incurs a penalty when the buffer turns out to contain non-ASCII chars.

Just as a side note: I did consider performing the ASCII test as chars are 
requested, but the first thing the regex algorithm does is ask the CharSequence 
for its count of chars, so that is not a viable option.


Also, while I'm at it, I have another optimization idea for the LIKE function. 
The SQL LIKE function is supposed to support only a % prefix or suffix, plus _ 
matching any single character. The idea is to check whether the pattern contains 
any _, which is rarely used, and if it does not, use the same code the STRPOS 
function uses to match a substring. This way the LIKE function can work by 
comparing byte to byte without making any copy.
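A rough sketch of that byte-to-byte idea on a plain byte[] (standing in for the 
value bytes in the vector); it only covers the '%substring%' case and assumes 
the pattern contains no _ wildcard:
{code:java}
public class ByteLikeSketch {
    // Naive contains() over raw bytes, in the spirit of STRPOS: no String copy.
    static boolean containsBytes(byte[] value, byte[] pattern) {
        if (pattern.length == 0) return true;
        outer:
        for (int i = 0; i + pattern.length <= value.length; i++) {
            for (int j = 0; j < pattern.length; j++) {
                if (value[i + j] != pattern[j]) continue outer;
            }
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        byte[] row = "f71aaddec3316ae18d43cb1467e88a41".getBytes();
        System.out.println(containsBytes(row, "cb14".getBytes()));  // true
        System.out.println(containsBytes(row, "zzzz".getBytes()));  // false
    }
}
{code}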

 
 

> Zero copy LIKE, REGEXP_MATCHES, SUBSTR
> --
>
> Key: DRILL-4573
> URL: https://issues.apache.org/jira/browse/DRILL-4573
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: jean-claude
>Priority: Minor
> Fix For: 1.7.0
>
> Attachments: DRILL-4573-3.patch.txt, DRILL-4573.patch.txt
>
>
> All the functions that use java.util.regex.Matcher currently create Java 
> String objects to pass into matcher.reset().
> However, this makes an unnecessary copy of the bytes and allocates a Java 
> String object.
> The matcher only needs a CharSequence, so instead of making a copy we can 
> create an adapter from the DrillBuffer to the CharSequence interface.
> Gains of 25% in execution speed are possible when going over VARCHARs of 36 
> chars. The gain will be proportional to the size of the VARCHAR.
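> A minimal sketch of such an adapter over a plain byte[] (standing in for the 
> DrillBuffer), assuming single-byte (ASCII/Latin-1) data; real VARCHARs would 
> need UTF-8 handling:
> {code:java}
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
> 
> // Expose a byte range as a CharSequence so Matcher can scan it
> // without first copying the bytes into a String.
> public class ByteCharSequence implements CharSequence {
>     private final byte[] buf;
>     private final int start, end;
> 
>     ByteCharSequence(byte[] buf, int start, int end) {
>         this.buf = buf; this.start = start; this.end = end;
>     }
>     public int length() { return end - start; }
>     public char charAt(int index) { return (char) (buf[start + index] & 0xFF); }
>     public CharSequence subSequence(int s, int e) {
>         return new ByteCharSequence(buf, start + s, start + e);
>     }
>     public String toString() { return new String(buf, start, end - start); }
> 
>     public static void main(String[] args) {
>         byte[] data = "abc-123-xyz".getBytes();
>         Matcher m = Pattern.compile("[0-9]+")
>                 .matcher(new ByteCharSequence(data, 0, data.length));
>         System.out.println(m.find() ? m.group() : "no match");  // prints 123
>     }
> }
> {code}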



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4696) Four-table inner join: after waiting a long time, Drill reports java.lang.OutOfMemoryError: Java heap space

2016-05-26 Thread david_hudavy (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303307#comment-15303307
 ] 

david_hudavy commented on DRILL-4696:
-

Adding the definitions of the four tables. Each table has 30 million rows.
Table structures:

CREATE TABLE `Eps` (
  `DSG` tinyint(4) unsigned NOT NULL,
  `ENTRY_KEY` bigint(20) unsigned NOT NULL,
  `mscId` varchar(255) NOT NULL,
  `EpsMmeAddr` varchar(255) DEFAULT NULL,
  `EpsMmeRealm` varchar(255) DEFAULT NULL,
  `EpsLastInsertSent` varbinary(18) DEFAULT NULL,
  `EpsMobilityNotifInfo` varbinary(8) DEFAULT NULL,
  `EpsAaaAddr` varchar(255) DEFAULT NULL,
  `EpsAaaRealm` varchar(255) DEFAULT NULL,
  `EpsMmeRegServ` varchar(5) DEFAULT NULL,
  `EpsHomoImsVoip` int(11) DEFAULT NULL,
  `EpsUeSrVccCap` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`DSG`,`ENTRY_KEY`),
  KEY `mscId` (`mscId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 
 
CREATE TABLE `Eps_EpsImei` (
  `DSG` tinyint(4) unsigned NOT NULL,
  `ENTRY_KEY` bigint(20) unsigned NOT NULL,
  `mscId` varchar(32) NOT NULL,
  `EpsImeiSv` varchar(255) DEFAULT NULL,
  KEY `DSG` (`DSG`,`ENTRY_KEY`),
  KEY `mscId` (`mscId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 
CREATE TABLE `EpsStatic` (
  `DSG` tinyint(4) unsigned NOT NULL,
  `ENTRY_KEY` bigint(20) unsigned NOT NULL,
  `mscId` varchar(32) NOT NULL,
  `EpsAccessRestriction` int(11) DEFAULT NULL,
  `EpsIndDefContextId` bigint(20) DEFAULT NULL,
  `EpsOdb` int(11) DEFAULT NULL,
  `EpsProfileId` varchar(255) DEFAULT NULL,
  `EpsRoamAllow` varchar(5) DEFAULT NULL,
  PRIMARY KEY (`DSG`,`ENTRY_KEY`),
  KEY `mscId` (`mscId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 
CREATE TABLE `mscIden` (
  `DSG` tinyint(4) unsigned NOT NULL,
  `ENTRY_KEY` bigint(20) unsigned NOT NULL,
  `mscId` varchar(32) NOT NULL,
  `IMSI` varchar(15) DEFAULT NULL,
  `MSISDN` varchar(15) DEFAULT NULL,
  PRIMARY KEY (`DSG`,`ENTRY_KEY`),
  KEY `mscId` (`mscId`),
  KEY `MSISDN` (`MSISDN`),
  KEY `IMSI` (`IMSI`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
 

> Four-table inner join: after waiting a long time, Drill reports 
> java.lang.OutOfMemoryError: Java heap space
> 
>
> Key: DRILL-4696
> URL: https://issues.apache.org/jira/browse/DRILL-4696
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.6.0
> Environment: Test Environment:
> SUSE Linux Enterprise Server 11 SP3  (x86_64) cluster
> MySQL 5.7.11 Enterprise Server - Advanced Edition 
> Drill cluster
>Reporter: david_hudavy
>
> Test Environment:
> cluster 10-3
> MySQL 5.7.11 Enterprise Server - Advanced Edition 
> Drill cluster
> Test Scope:
> SELECT performance on huge tables (30M records each).
> MySQL tables: Eps, Eps_EpsImei, mscIden, EpsStatic inner join (each of the 
> four tables has 30M records)
> -- four-table inner join (takes a long time, then Drill crashes)
> 0: jdbc:drill:zk=SC-1:6181,SC-2:6181,PL-3:618> select
> . . . . . . . . . . . . . . . . . . . . . . .> EpsStatic.EpsProfileId,
> . . . . . . . . . . . . . . . . . . . . . . .> mscIden.mscId,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeAddr,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeRealm,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsLastInsertSent,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMobilityNotifInfo,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsAaaAddr ,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsAaaRealm ,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsMmeRegServ,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsHomoImsVoip ,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps.EpsUeSrVccCap,
> . . . . . . . . . . . . . . . . . . . . . . .> Eps_EpsImei.EpsImeiSv
> . . . . . . . . . . . . . . . . . . . . . . .> from 
> mysql.user_data.Eps,mysql.user_data.Eps_EpsImei,mysql.user_data.mscIden,mysql.user_data.EpsStatic
> . . . . . . . . . . . . . . . . . . . . . . .> where 
> mscIden.mscId=Eps.mscId and Eps.mscId =Eps_EpsImei.mscId and 
> Eps_EpsImei.mscId=EpsStatic.mscId
> . . . . . . . . . . . . . . . . . . . . . . .> and  mscIden.mscId='0';
> Drill crash:
> 2016-05-13 09:52:35,131 [28cacd19-0f04-cbb1-b418-73a76dcd6ebe:frag:0:0] ERROR 
> o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, 
> exiting. Information message: Unable to handle out of memory condition in 
> FragmentExecutor.
> java.lang.OutOfMemoryError: Java heap space
> at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2157) 
> ~[mysql-connector-java-5.1.38-bin.jar:5.1.38]
> at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1964) 
> ~[mysql-connector-java-5.1.38-bin.jar:5.1.38]

[jira] [Commented] (DRILL-4690) Header in RestApi CORS support

2016-05-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303650#comment-15303650
 ] 

ASF GitHub Bot commented on DRILL-4690:
---

Github user PythonicNinja commented on the pull request:

https://github.com/apache/drill/pull/507#issuecomment-222073117
  
I have updated the PR according to your and laurentgo's ideas. @hnfgns: Can you 
do a second round of review?


> Header in RestApi CORS support 
> ---
>
> Key: DRILL-4690
> URL: https://issues.apache.org/jira/browse/DRILL-4690
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Wojciech Nowak
>Priority: Minor
>
> Damien Cantreras raised a question on the mailing list about Drill REST API 
> support for the "Access-Control-Allow-Origin: *" header, 
> to allow it to be used from an HTML5 application.
> The place where the header should be added: 
> https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/WebServer.java
>  
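> A rough sketch of how the header could be added with Jetty's CrossOriginFilter 
> (the handler below is a stand-in; how this wires into Drill's WebServer is not 
> shown):
> {code:java}
> import java.util.EnumSet;
> import javax.servlet.DispatcherType;
> 
> import org.eclipse.jetty.servlet.FilterHolder;
> import org.eclipse.jetty.servlet.ServletContextHandler;
> import org.eclipse.jetty.servlets.CrossOriginFilter;
> 
> public class CorsSketch {
>     // Registers Jetty's CORS filter so responses carry Access-Control-Allow-Origin.
>     static void addCors(ServletContextHandler handler) {
>         FilterHolder cors = new FilterHolder(CrossOriginFilter.class);
>         cors.setInitParameter(CrossOriginFilter.ALLOWED_ORIGINS_PARAM, "*");
>         cors.setInitParameter(CrossOriginFilter.ALLOWED_METHODS_PARAM, "GET,POST,OPTIONS");
>         cors.setInitParameter(CrossOriginFilter.ALLOWED_HEADERS_PARAM, "Content-Type,Authorization");
>         handler.addFilter(cors, "/*", EnumSet.of(DispatcherType.REQUEST));
>     }
> }
> {code}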



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)