Please vote for proposed Drill talks for the Hadoop Summit

2016-02-05 Thread Jason Altekruse
Hello Drillers,

There are some great proposed talks for this year's Hadoop summit related
to Drill. Please help to promote Drill in the wider Big Data community by
taking a look through the list and voting for talks that sound good.

You don't need to register or anything to vote; it just asks for an e-mail
address.

http://hadoopsummit.uservoice.com/search?filter=ideas=drill

Thanks!
Jason


[GitHub] drill pull request: DRILL-4359: Adds equals/hashCode methods to En...

2016-02-05 Thread laurentgo
GitHub user laurentgo opened a pull request:

https://github.com/apache/drill/pull/363

DRILL-4359: Adds equals/hashCode methods to EndpointAffinity

Adds equals/hashCode methods to EndpointAffinity to allow for comparison in 
tests.
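
As a quick, hypothetical illustration of the intent (the endpoint and constructor
usage below are assumptions for illustration, not code from this patch), value
equality lets tests compare affinities directly:

import static org.junit.Assert.assertEquals;

import org.apache.drill.exec.physical.EndpointAffinity;
import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;

public class EndpointAffinityEqualityExample {
  public static void main(String[] args) {
    // hypothetical drillbit endpoint; any address/port would do for the comparison
    DrillbitEndpoint endpoint = DrillbitEndpoint.newBuilder()
        .setAddress("node1.example.com")
        .setControlPort(31010)
        .build();

    // two affinities built from the same endpoint and weight now compare equal by value
    EndpointAffinity expected = new EndpointAffinity(endpoint, 1.0d);
    EndpointAffinity actual = new EndpointAffinity(endpoint, 1.0d);
    assertEquals(expected, actual);
    assertEquals(expected.hashCode(), actual.hashCode());
  }
}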

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/laurentgo/drill 
laurent/adds-equals-to-endpoint-affinity

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #363


commit 80954145733c4b6f6ce5cce8cb40dfba29aa899c
Author: Laurent Goujon 
Date:   2016-02-05T22:06:00Z

DRILL-4359: Adds equals/hashCode methods to EndpointAffinity

Adds equals/hashCode methods to EndpointAffinity to allow for comparison in
tests.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-3623: Optimize limit 0 queries

2016-02-05 Thread StevenMPhillips
GitHub user StevenMPhillips opened a pull request:

https://github.com/apache/drill/pull/364

DRILL-3623: Optimize limit 0 queries

This pulls in some patches that Sudheesh had previously worked on but not 
committed. In addition, it fixes some problems when the type is not one of the 
json ExtendTypes.

There are still problems with float, decimal, and interval_day/year.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StevenMPhillips/drill limit_0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/364.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #364


commit 8c3f2e8911dabc93f7b3f381600ccfaccc008938
Author: Sudheesh Katkam 
Date:   2015-11-17T23:12:48Z

DRILL-3623: Use shorter exec path for LIMIT 0 queries when schema of the 
root logical node is known

+ DRILL-4043: Perform LIMIT 0 optimization before logical transformation

commit 3be63cd5b95891cbfee1bdb6cfa2baaaf18f31c8
Author: Sudheesh Katkam 
Date:   2015-12-22T19:07:39Z

WIP

commit b4061e46fe91b747bc2afe82e66cd6d0bc060d1b
Author: Steven Phillips 
Date:   2016-01-31T01:49:04Z

Make limit 0 return correct type for VARCHAR

Still does not work correctly for float4, decimal, or interval types




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (DRILL-4361) Allow for FileSystemPlugin subclasses to override FormatCreator

2016-02-05 Thread Laurent Goujon (JIRA)
Laurent Goujon created DRILL-4361:
-

 Summary: Allow for FileSystemPlugin subclasses to override 
FormatCreator
 Key: DRILL-4361
 URL: https://issues.apache.org/jira/browse/DRILL-4361
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Laurent Goujon
Assignee: Laurent Goujon
Priority: Minor


FileSystemPlugin subclasses are not able to customize format plugins, as FormatCreator 
is created in the FileSystemPlugin constructor and immediately used to create the 
SchemaFactory instance.

FormatCreator instantiation should be moved to a protected method so that 
subclasses can choose to implement it differently.
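
As a rough sketch of the proposed hook (the class layout and names below are
illustrative assumptions, not the actual change; the real patch is in pull
request #365 later in this digest), the idea is to route FormatCreator
construction through an overridable factory method:

// Stand-ins so the sketch is self-contained; the real types live in
// org.apache.drill.exec.store.dfs.
class FormatCreator { }
class CustomFormatCreator extends FormatCreator { }

class FileSystemPlugin {
  private final FormatCreator formatCreator;

  FileSystemPlugin() {
    // was effectively: this.formatCreator = new FormatCreator(...), created inline
    // and immediately handed to the SchemaFactory
    this.formatCreator = newFormatCreator();
  }

  // Subclasses override this to supply a customized FormatCreator.
  protected FormatCreator newFormatCreator() {
    return new FormatCreator();
  }
}

class CustomFileSystemPlugin extends FileSystemPlugin {
  @Override
  protected FormatCreator newFormatCreator() {
    return new CustomFormatCreator();  // e.g. registers additional format plugins
  }
}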



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: strange build problem with generated sources

2016-02-05 Thread Abdel Hakim Deneche
Hey Dave,

Which file did you add the "forceBits()" method to ?

On Fri, Feb 5, 2016 at 8:33 AM, Dave Oshinsky 
wrote:

> Hi Drill-ers,
> I am looking into fixing JIRA
> https://issues.apache.org/jira/browse/DRILL-4184.  I've encountered a
> number of strange build problems along the way with my drill 1.4 snapshot,
> including inability to rebuild after running "mvn clean", no matter what I
> try.  So, I'm building from scratch for the second time, at least.  The
> latest problem really has me stumped at the moment.  I added a
> "forceBits(int,int)" method that I see in generated source file
> NullableDecimal28SparseVector.java (and
> NullableDecimal38SparseVector.java), but somehow this doesn't get compiled
> properly into the *.class and my build keeps failing as if the new
> forceBits method isn't there:
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:3.2:compile
> (default-compile) on project drill-java-exec: Compilation failure:
> Compilation failure:
> [ERROR]
> C:\apache\apache-drill-1.4.0\rebuild2\drill-1.4.0\exec\java-exec\src\main\java\org\apache\drill\exec\store\parquet\columnreaders\VarLengthColumnReaders.java:[108,29]
> error: cannot find symbol
> [ERROR] symbol:   method forceBits(int,int)
> [ERROR] location: variable nullableDecimal28Vector of type
> NullableDecimal28SparseVector
> [ERROR]
> C:\apache\apache-drill-1.4.0\rebuild2\drill-1.4.0\exec\java-exec\src\main\java\org\apache\drill\exec\store\parquet\columnreaders\VarLengthColumnReaders.java:[179,29]
> error: cannot find symbol
>
> Can anyone suggest how to fix this without starting over from scratch in a
> new build node (again)?  Any advice would be greatly appreciated.
>
> I will send a separate email eventually regarding the design of my fix,
> which I know is only a short-term solution to the problem of handling
> variable width decimal fields in Parquet files.  To make a long story
> short, all the decimal vectors are fixed width vectors, which don't have
> the ability to "remember" varying sizes from one decimal field to the
> next.  I've hacked up something to "remember" the varying field sizes
> (BigDecimal array sizes) in NullableVarLengthValuesColumn and
> VarLengthValuesColumn, not in the decimal vectors.  This seems to work,
> though it's admittedly ugly.  However, I ran into a problem with nullable
> varying width decimal columns where the "isSet" always returns 0, as if the
> column is null, when it is not, and the sparse decimal data is present in
> the vector (but Drill won't send the decimal value, because it thinks it's
> null).  Hence the "forceBits" hack to try to work around this.  It seemed
> like I was close to running a successful Drill query on the varying width
> decimal Parquet data, but alas, I ran into (another) build problem.  I do
> have a LOT of questions as to why the decimal stuff was designed the way it
> is, but that's for another email
>
> Thanks,
> Dave Oshinsky
>
>
>
>
>
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for
> the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **




-- 

Abdelhakim Deneche

Software Engineer

  


Now Available - Free Hadoop On-Demand Training



[GitHub] drill pull request: Upgrade to guava 18.0

2016-02-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/157


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


RE: strange build problem with generated sources

2016-02-05 Thread Dave Oshinsky
Hi Abdel,
Thanks for reply.  I added it to 
exec/vector/target/codegen/templates/NullableValueVectors.java.  I then see it 
propagated by Freemarker to generated sources, as shown below signature.  In 
Eclipse, I'm able to navigate from where I'm calling forceBits() to e.g. 
NullableDecimal28SparseVector.java, and I see forceBits() implemented there.  
However, Eclipse has the dreaded "red X" in VarLengthColumnReaders.java 
indicating an error, and the build also fails.  Weird,  huh?

I do have to say, this whole "generated sources" business is a pain in the you 
know where  ;-)  In all seriousness, I would implement the whole decimal 
business MUCH more simply if writing it from scratch.  Why the gazillion 
special cases for different widths (all with separate generated sources), for 
nullable and not, sparse or dense?  Just treat them all the same, at least from 
where I'm sitting with a Parquet-centric perspective.  And the memory usage 
would be MUCH lower by treating decimals ALWAYS as variable width.  Most actual 
decimal numbers are way smaller than the precision would indicate, in many of 
my test cases.

Dave Oshinsky

$ find . -name "*.java" | xargs grep forceBits
./exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
  nullableDecimal28Vector.forceBits(index, 1);
./exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
  nullableDecimal38Vector.forceBits(index, 1);
./exec/java-exec/target/classes/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
  nullableDecimal28Vector.forceBits(index, 1);
./exec/java-exec/target/classes/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
  nullableDecimal38Vector.forceBits(index, 1);
./exec/vector/src/main/codegen/templates/NullableValueVectors.java:public 
void forceBits(int index, int value){  // DAO
./exec/vector/target/classes/codegen/templates/NullableValueVectors.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/codegen/templates/NullableValueVectors.java:public 
void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableBigIntVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableBitVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDateVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal18Vector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal28DenseVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal28SparseVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal38DenseVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal38SparseVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal9Vector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableFloat4Vector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableFloat8Vector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableIntervalDayVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableIntervalVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableIntervalYearVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableIntVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableSmallIntVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableTimeStampVector.java:
public void forceBits(int index, int value){  // DAO
./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableTimeVector.java:
public void forceBits(int index, int value){  // 

RE: strange build problem with generated sources

2016-02-05 Thread Dave Oshinsky
While the build was successful, Drill fails at runtime.  I built and copied 
these jars:
cp exec/java-exec/target/drill-java-exec-1.4.0.jar 
../../jars/drill-java-exec-1.4.0.jar
cp exec/vector/target/vector-1.4.0.jar ../../jars

The exception from Drill:
Exception in thread "drill-executor-2" java.lang.Error: Unresolved compilation 
problem:
The method forceBits(int, int) is undefined for the type 
NullableDecimal28SparseVector

at 
org.apache.drill.exec.store.parquet.columnreaders.VarLengthColumnReaders$NullableDecimal28Column.setSafe(VarLengthColumnReaders.java:108)
at 
org.apache.drill.exec.store.parquet.columnreaders.NullableVarLengthValuesColumn.readField(NullableVarLengthValuesColumn.java:160)
at 
org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readRecords(ColumnReader.java:163)
at 
org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readPage(ColumnReader.java:194)
at 
org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.determineSize(ColumnReader.java:141)

This is the NullableDecimal28SparseVector.java code showing forceBits() is 
implemented:
public void forceBits(int index, int value){  // DAO
  bits.getMutator().setSafe(index, value);
}

Do I need to copy any other jars?  I copied everything that implements or 
refers to NullableDecimal28SparseVector, or so I thought.

-Original Message-
From: Dave Oshinsky [mailto:doshin...@commvault.com] 
Sent: Friday, February 05, 2016 1:23 PM
To: dev@drill.apache.org
Subject: RE: strange build problem with generated sources

Thanks Abdel!  That fixed the build (adding it to FixedValueVectors.java, 
throwing UnsupportedOperationException).  Why was that necessary?  Is it some 
complicated aspect to how NullableDecimal38SparseVector.class gets compiled 
from NullableDecimal38SparseVector.java?  I actually saw the forceBits() method 
in the generated NullableDecimal38SparseVector.java code before, yet the build 
behaved as if it were not there.

The error I saw earlier was from these two calls to forceBits() in 
VarLengthColumnReaders.java:

@Override
public boolean setSafe(int index, BigDecimal intermediate) {  // DAO
  int width = Decimal28SparseHolder.WIDTH;
  if (index >= nullableDecimal28Vector.getValueCapacity()) {
return false;
  }
  DecimalUtility.getSparseFromBigDecimal(intermediate, 
nullableDecimal28Vector.getBuffer(), index * width, schemaElement.getScale(),
  schemaElement.getPrecision(), 
Decimal28SparseHolder.nDecimalDigits);
  nullableDecimal28Vector.forceBits(index, 1);
  return true;
}

@Override
public boolean setSafe(int index, BigDecimal intermediate) {  // DAO
  int width = Decimal38SparseHolder.WIDTH;
  if (index >= nullableDecimal38Vector.getValueCapacity()) {
return false;
  }
  DecimalUtility.getSparseFromBigDecimal(intermediate, 
nullableDecimal38Vector.getBuffer(), index * width, schemaElement.getScale(),
  schemaElement.getPrecision(), 
Decimal38SparseHolder.nDecimalDigits);
  nullableDecimal38Vector.forceBits(index, 1);
  return true;
}

-Original Message-
From: Abdel Hakim Deneche [mailto:adene...@maprtech.com]
Sent: Friday, February 05, 2016 12:41 PM
To: dev@drill.apache.org
Subject: Re: strange build problem with generated sources

Do you also need to add forceBits to the template FixedValueVectors.java ?

Can you copy the line from VarLengthColumnReaders .java where you hit a compile 
error ?


On Fri, Feb 5, 2016 at 9:26 AM, Dave Oshinsky 
wrote:

> Hi Abdel,
> Thanks for reply.  I added it to
> exec/vector/target/codegen/templates/NullableValueVectors.java.  I 
> then see it propagated by Freemarker to generated sources, as shown 
> below signature.  In Eclipse, I'm able to navigate from where I'm 
> calling
> forceBits() to e.g. NullableDecimal28SparseVector.java, and I see
> forceBits() implemented there.  However, Eclipse has the dreaded "red 
> X" in VarLengthColumnReaders.java indicating an error, and the build also 
> fails.
> Weird,  huh?
>
> I do have to say, this whole "generated sources" business is a pain in 
> the you know where  ;-)  In all seriousness, I would implement the 
> whole decimal business MUCH more simply if writing it from scratch.
> Why the gazillion special cases for different widths (all with 
> separate generated sources), for nullable and not, sparse or dense?
> Just treat them all the same, at least from where I'm sitting with a 
> Parquet-centric perspective.
> And the memory usage would be MUCH lower by treating decimals ALWAYS 
> as variable width.  Most actual decimal numbers are way smaller than 
> the precision would indicate, in many of my test cases.
>
> Dave Oshinsky
>
> $ find . -name "*.java" | xargs grep forceBits
> 

[jira] [Created] (DRILL-4358) NPE when closing UserServer and authenticator is not set.

2016-02-05 Thread Jacques Nadeau (JIRA)
Jacques Nadeau created DRILL-4358:
-

 Summary: NPE when closing UserServer and authenticator is not set.
 Key: DRILL-4358
 URL: https://issues.apache.org/jira/browse/DRILL-4358
 Project: Apache Drill
  Issue Type: Bug
 Environment: As part of DRILL-3581, I removed the use of 
Closeables.closeQuietly(). In the case of UserServer, the close method no 
longer checks for authenticator nullability. That means closes in certain 
situations cause NPE.
Reporter: Jacques Nadeau
Assignee: Jacques Nadeau
 Fix For: 1.6.0
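
The fix itself is in pull request #362 further down in this digest; as a
hedged sketch only, the kind of null check the description calls for looks
roughly like this (field and method names are assumptions, not the actual patch):

@Override
public void close() throws IOException {
  // skip authenticator cleanup when no authenticator was ever configured,
  // instead of dereferencing null
  if (authenticator != null) {
    authenticator.close();
  }
  super.close();
}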






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Drill 1.5.0 RC1

2016-02-05 Thread Jason Altekruse
This seems like a major issue: due to a resource leak, consistent usage of
the REST API can make a drillbit crash in less than an hour.

I have run tests on the patch rebased on the release branch, and Venki
reported that a stress test he completed showed his change fixes the file
handle leaks. I am going to call this vote closed and spin another release.
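
The DRILL-4353 patch itself is not reproduced in this digest; as a hypothetical
sketch of the servlet-API mechanism named in its pull request title ("Add
HttpSessionListener to release..."), with the class name and attribute handling
as assumptions rather than the actual change:

import java.util.Enumeration;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Hypothetical listener: when a web session is destroyed, close any closeable
// resources stashed in it so file handles are not leaked.
public class SessionResourceCleanupListener implements HttpSessionListener {
  @Override
  public void sessionCreated(HttpSessionEvent event) {
    // nothing to allocate here
  }

  @Override
  public void sessionDestroyed(HttpSessionEvent event) {
    HttpSession session = event.getSession();
    Enumeration<String> names = session.getAttributeNames();
    while (names.hasMoreElements()) {
      Object attribute = session.getAttribute(names.nextElement());
      if (attribute instanceof AutoCloseable) {
        try {
          ((AutoCloseable) attribute).close();
        } catch (Exception e) {
          // best-effort cleanup; a real implementation would log this
        }
      }
    }
  }
}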

On Thu, Feb 4, 2016 at 2:48 PM, Venki Korukanti 
wrote:

> -1. Found a regression DRILL-4353, I think we should include it in 1.5.0
>
> On Wed, Feb 3, 2016 at 1:38 AM, Jason Altekruse 
> wrote:
>
> > Hello all,
> >
> > I'd like to propose the second release candidate (rc1) of Apache Drill,
> > version
> > 1.5.0. It covers a total of 54 resolved JIRAs [1]. Thanks to everyone who
> > contributed to this release. This release candidate includes a small test
> > modification that was detailed on the vote thread for RC0.
> >
> > The tarball artifacts are hosted at [2] and the maven artifacts are
> hosted
> > at
> > [3]. This release candidate is based on commit
> > c3939c55cf3e274c9bcbc8ca860603e7197cfa16 located at [4].
> >
> > The vote will be open for the next ~72 hours ending at 7AM Pacific,
> > February 6, 2016.
> >
> > [ ] +1
> > [ ] +0
> > [ ] -1
> >
> > Here's my vote: +1
> >
> > Thanks,
> > Jason
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332948
> > [2] http://people.apache.org/~json/apache-drill-1.5.0.rc1/
> > [3]
> https://repository.apache.org/content/repositories/orgapachedrill-1024
> >  >
> > [4] https://github.com/jaltekruse/incubator-drill/tree/1.5-release-rc1
> >
>


[GitHub] drill pull request: DRILL-4327: Fix rawtypes warnings emitted by c...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on a diff in the pull request:

https://github.com/apache/drill/pull/347#discussion_r52048231
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/mergereceiver/MergingRecordBatch.java
 ---
@@ -84,6 +64,26 @@
 import com.sun.codemodel.JExpr;
 
 /**
+ * Licensed to the Apache Software Foundation (ASF) under one
--- End diff --

Can you move this to the top of the file? It should actually come before 
the package statement.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4331: Fix TestFlattenPlanning.testFlatte...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/351#issuecomment-180472482
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Apache Drill querying IGFS-accelerated (H)DFS?

2016-02-05 Thread Vladimir Ozerov
It looks like file system configuration is created inside
WorkspaceSchemaFactory
constructor:

this.fsConf = plugin.getFsConf();

Does it mean that we have to implement our own Ignite's FileSystemPlugin to
be able to work with Drill?

On Fri, Feb 5, 2016 at 8:55 PM, Vladimir Ozerov 
wrote:

> *Peter,*
>
> I created the ticket in Ignite JIRA. Hope someone from the community will be
> able to take a look at it soon -
> https://issues.apache.org/jira/browse/IGNITE-2568
> Please keep an eye on it.
>
> Cross-posting the issue to Drill dev list.
>
> *Dear Drill folks,*
>
> We have our own implementation of Hadoop FileSystem here in Ignite. It has
> unique URI prefix ("igfs://") and is normally registered in Hadoop's
> core-site.xml like this:
>
> <property>
>   <name>fs.igfs.impl</name>
>   <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
> </property>
>
> However, when we try to use this file system as a data source in Drill, an
> exception is thrown (see stack trace below). I suspect that the default Hadoop
> core-site.xml is not taken into consideration by Drill somehow. Could you
> please give us a hint on how to properly configure a custom Hadoop FileSystem
> implementation in your system?
>
> Thank you!.
>
> Vladimir.
>
> Stack trace:
>
> java.io.IOException: No FileSystem for scheme: igfs
> at
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> ~[hadoop-common-2.7.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
> ~[hadoop-common-2.7.1.jar:na]
> at
> org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92)
> ~[drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213)
> ~[drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210)
> ~[drill-java-exec-1.4.0.jar:1.4.0]
> at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.8.0_40-ea]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> ~[hadoop-common-2.7.1.jar:na]
> at
> org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244)
> [drill-java-exec-1.4.0.jar:1.4.0]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_40-ea]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_40-ea]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]
>
>
>
> On Fri, Feb 5, 2016 at 4:18 PM, pshomov  wrote:
>
>>
>> ​Hi Vladimir,
>>
>> My bad about that ifgs://, fixed it but it changed nothing.
>>
>> I don’t think Drill cares much about Hadoop settings. It never asked me

[GitHub] drill pull request: DRILL-4327: Fix rawtypes warnings emitted by c...

2016-02-05 Thread laurentgo
Github user laurentgo commented on a diff in the pull request:

https://github.com/apache/drill/pull/347#discussion_r52051852
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/mergereceiver/MergingRecordBatch.java
 ---
@@ -84,6 +64,26 @@
 import com.sun.codemodel.JExpr;
 
 /**
+ * Licensed to the Apache Software Foundation (ASF) under one
--- End diff --

sure


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


RE: strange build problem with generated sources

2016-02-05 Thread Dave Oshinsky
Thanks Abdel!  That fixed the build (adding it to FixedValueVectors.java, 
throwing UnsupportedOperationException).  Why was that necessary?  Is it some 
complicated aspect to how NullableDecimal38SparseVector.class gets compiled 
from NullableDecimal38SparseVector.java?  I actually saw the forceBits() method 
in the generated NullableDecimal38SparseVector.java code before, yet the build 
behaved as if it were not there.
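
Roughly — the exact wording and placement in the template aren't shown here —
the addition amounts to a stub like:

public void forceBits(int index, int value) {  // DAO
  // fixed-width, non-nullable vectors have no "bits" vector to force
  throw new UnsupportedOperationException();
}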

The error I saw earlier was from these two calls to forceBits() in 
VarLengthColumnReaders.java:

@Override
public boolean setSafe(int index, BigDecimal intermediate) {  // DAO
  int width = Decimal28SparseHolder.WIDTH;
  if (index >= nullableDecimal28Vector.getValueCapacity()) {
return false;
  }
  DecimalUtility.getSparseFromBigDecimal(intermediate, 
nullableDecimal28Vector.getBuffer(), index * width, schemaElement.getScale(),
  schemaElement.getPrecision(), 
Decimal28SparseHolder.nDecimalDigits);
  nullableDecimal28Vector.forceBits(index, 1);
  return true;
}

@Override
public boolean setSafe(int index, BigDecimal intermediate) {  // DAO
  int width = Decimal38SparseHolder.WIDTH;
  if (index >= nullableDecimal38Vector.getValueCapacity()) {
return false;
  }
  DecimalUtility.getSparseFromBigDecimal(intermediate, 
nullableDecimal38Vector.getBuffer(), index * width, schemaElement.getScale(),
  schemaElement.getPrecision(), 
Decimal38SparseHolder.nDecimalDigits);
  nullableDecimal38Vector.forceBits(index, 1);
  return true;
}

-Original Message-
From: Abdel Hakim Deneche [mailto:adene...@maprtech.com] 
Sent: Friday, February 05, 2016 12:41 PM
To: dev@drill.apache.org
Subject: Re: strange build problem with generated sources

Do you also need to add forceBits to the template FixedValueVectors.java ?

Can you copy the line from VarLengthColumnReaders .java where you hit a compile 
error ?


On Fri, Feb 5, 2016 at 9:26 AM, Dave Oshinsky 
wrote:

> Hi Abdel,
> Thanks for reply.  I added it to
> exec/vector/target/codegen/templates/NullableValueVectors.java.  I 
> then see it propagated by Freemarker to generated sources, as shown 
> below signature.  In Eclipse, I'm able to navigate from where I'm 
> calling
> forceBits() to e.g. NullableDecimal28SparseVector.java, and I see
> forceBits() implemented there.  However, Eclipse has the dreaded "red 
> X" in VarLengthColumnReaders.java indicating an error, and the build also 
> fails.
> Weird,  huh?
>
> I do have to say, this whole "generated sources" business is a pain in 
> the you know where  ;-)  In all seriousness, I would implement the 
> whole decimal business MUCH more simply if writing it from scratch.  
> Why the gazillion special cases for different widths (all with 
> separate generated sources), for nullable and not, sparse or dense?  
> Just treat them all the same, at least from where I'm sitting with a 
> Parquet-centric perspective.
> And the memory usage would be MUCH lower by treating decimals ALWAYS 
> as variable width.  Most actual decimal numbers are way smaller than 
> the precision would indicate, in many of my test cases.
>
> Dave Oshinsky
>
> $ find . -name "*.java" | xargs grep forceBits
> ./exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
> nullableDecimal28Vector.forceBits(index, 1);
> ./exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
> nullableDecimal38Vector.forceBits(index, 1);
> ./exec/java-exec/target/classes/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
> nullableDecimal28Vector.forceBits(index, 1);
> ./exec/java-exec/target/classes/org/apache/drill/exec/store/parquet/columnreaders/VarLengthColumnReaders.java:
> nullableDecimal38Vector.forceBits(index, 1);
> ./exec/vector/src/main/codegen/templates/NullableValueVectors.java:
> public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/classes/codegen/templates/NullableValueVectors.java:
>   public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/codegen/templates/NullableValueVectors.java:
> public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableBigIntVector.java:
>   public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableBitVector.java:
>   public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDateVector.java:
>   public void forceBits(int index, int value){  // DAO
> ./exec/vector/target/generated-sources/org/apache/drill/exec/vector/NullableDecimal18Vector.java:
>   public void forceBits(int index, int value){  // DAO
> 

[GitHub] drill pull request: DRILL-4358: Fix NPE in UserServer.close()

2016-02-05 Thread jacques-n
GitHub user jacques-n opened a pull request:

https://github.com/apache/drill/pull/362

DRILL-4358: Fix NPE in UserServer.close()

- Also remove untested CustomSerDe's from CustomTunnel.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jacques-n/drill DRILL-4358

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/362.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #362


commit 3bb8f84217b0dec5ec928a528deae94d0a8c5ec5
Author: Jacques Nadeau 
Date:   2016-02-05T18:45:32Z

DRILL-4358: Fix NPE in UserServer.close()

- Also remove untested CustomSerDe's from CustomTunnel.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Apache Drill querying IGFS-accelerated (H)DFS?

2016-02-05 Thread Vladimir Ozerov
*Peter,*

I created the ticket in Ignite JIRA. Hope someone from the community will be
able to take a look at it soon -
https://issues.apache.org/jira/browse/IGNITE-2568
Please keep an eye on it.

Cross-posting the issue to Drill dev list.

*Dear Drill folks,*

We have our own implementation of Hadoop FileSystem here in Ignite. It has
unique URI prefix ("igfs://") and is normally registered in Hadoop's
core-site.xml like this:


<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>


However, when we try to use this file system as a data source in Drill, an
exception is thrown (see stack trace below). I suspect that the default Hadoop
core-site.xml is not taken into consideration by Drill somehow. Could you
please give us a hint on how to properly configure a custom Hadoop FileSystem
implementation in your system?

Thank you!.

Vladimir.

Stack trace:

java.io.IOException: No FileSystem for scheme: igfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
~[hadoop-common-2.7.1.jar:na]
at
org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:92)
~[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213)
~[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210)
~[drill-java-exec-1.4.0.jar:1.4.0]
at java.security.AccessController.doPrivileged(Native Method)
~[na:1.8.0_40-ea]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea]
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
~[hadoop-common-2.7.1.jar:na]
at
org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:78)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129)
[drill-java-exec-1.4.0.jar:1.4.0]
at
org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93)
[drill-java-exec-1.4.0.jar:1.4.0]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907)
[drill-java-exec-1.4.0.jar:1.4.0]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244)
[drill-java-exec-1.4.0.jar:1.4.0]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_40-ea]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_40-ea]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]



On Fri, Feb 5, 2016 at 4:18 PM, pshomov  wrote:

>
> ​Hi Vladimir,
>
> My bad about that ifgs://, fixed it but it changed nothing.
>
> I don’t think Drill cares much about Hadoop settings. It never asked me to
> point it to an installation or configuration of Hadoop. I believe they have
> their own storage plugin mechanism and one of their built-in plugins
> happens to be the HDFS one.
>
> Here is (part of) the Drill log
>
> 2016-02-05 13:14:03,507 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman]
> ERROR o.a.d.exec.util.ImpersonationUtil - Failed to create DrillFileSystem
> for proxy user: No FileSystem for scheme: igfs
> java.io.IOException: No FileSystem for scheme: igfs
> at
> 

[VOTE] Release Apache Drill 1.5.0 RC2

2016-02-05 Thread Jason Altekruse
Hello all,

I'd like to propose the third release candidate (rc2) of Apache Drill,
version
1.5.0. It covers a total of 55 resolved JIRAs [1]. Thanks to everyone who
contributed to this release. This release candidate includes a fix for
DRILL-4353, a major stability problem with the Rest API that was identified
during the last vote.

The tarball artifacts are hosted at [2] and the maven artifacts are hosted
at
[3]. This release candidate is based on commit
0a64888ba8d374e94435e2518e81352e677255ad located at [4].

The vote will be open for the next 96 hours (including an extra day as the
vote is happening over a weekend) ending at 11AM Pacific, February 9th,
2016.

[ ] +1
[ ] +0
[ ] -1

Here's my vote: +1

Thanks,
Jason

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332948
[2] http://people.apache.org/~json/apache-drill-1.5.0.rc2/
[3] https://repository.apache.org/content/repositories/orgapachedrill-1026
[4] https://github.com/jaltekruse/incubator-drill/tree/1.5-release-rc2


[GitHub] drill pull request: DRILL-4358: Fix NPE in UserServer.close()

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/362#issuecomment-180506885
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4327: Fix rawtypes warnings emitted by c...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/347#issuecomment-180471371
  
Other than the two small comments this looks good to me +1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4327: Fix rawtypes warnings emitted by c...

2016-02-05 Thread laurentgo
Github user laurentgo commented on a diff in the pull request:

https://github.com/apache/drill/pull/347#discussion_r52051723
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java
 ---
@@ -26,13 +26,13 @@
 import org.apache.drill.exec.ops.FragmentContext;
 import org.apache.drill.exec.ops.OperatorStats;
 import org.apache.drill.exec.record.RecordBatch;
-
--- End diff --

I might have hit the organize imports on Eclipse (because of unused 
imports), which follows Sun conventions regarding import order. I can revert 
the change and do an import cleanup in a different PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4225: TestDateFunctions#testToChar fails...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/311#issuecomment-180479132
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Drill 1.5 and a few Avro issues/questions

2016-02-05 Thread Stefán Baxter
Hi,

I'm wondering why DRILL-4120 has been pushed back to 1.6.

I have no idea if we are the only ones using directory pruning with Avro
but we use Avro for streaming/fresh data before a Parquet conversion and
this would be a welcome fix.

Pet peeve - Avro Schema validation.

Some facts:

   - The Map structure supported by Avro can not be validated with a schema
   as it allows keys to vary and only ensures the data type of the value.

   - Evolving schema will fail with the current Avro validation when
   directory pruning is used unless all file headers, even in the pruned
   directories, are scanned

   - Schema validation in Avro and schema validation in Parquet are
   different

This, and in my opinion many other things, means that the strict schema
validation in Avro should be an opt-in arrangement for those wanting to stop
evolving their schema and put all their entries in a single file /
directory.

Additionally, Avro 1.8 is just out, and it, together with parquet-avro, now
supports timestamp fields. It would be a great benefit to have proper date
/ timestamp handling in Avro and the Avro->Parquet conversion.

Yours truly,
  - The Slightly Disgruntled Drill-Avro User


[GitHub] drill pull request: DRILL-4359: Adds equals/hashCode methods to En...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on a diff in the pull request:

https://github.com/apache/drill/pull/363#discussion_r52080108
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/EndpointAffinity.java
 ---
@@ -96,6 +96,42 @@ public boolean isAssignmentRequired() {
   }
 
   @Override
+  public int hashCode() {
+final int prime = 31;
+int result = 1;
+long temp;
+temp = Double.doubleToLongBits(affinity);
+result = prime * result + (int) (temp ^ (temp >>> 32));
+result = prime * result + ((endpoint == null) ? 0 : 
endpoint.hashCode());
+return result;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+if (this == obj) {
+  return true;
+}
+if (obj == null) {
+  return false;
+}
+if (!(obj instanceof EndpointAffinity)) {
+  return false;
+}
+EndpointAffinity other = (EndpointAffinity) obj;
+if (Double.doubleToLongBits(affinity) != 
Double.doubleToLongBits(other.affinity)) {
+  return false;
+}
+if (endpoint == null) {
+  if (other.endpoint != null) {
+return false;
+  }
+} else if (!endpoint.equals(other.endpoint)) {
--- End diff --

It looks like DrillbitEndpoint also lacks equals and hashcode methods, are 
these going to be necessary for your tests as well?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4361: Let FileSystemPlugin FormatCreator...

2016-02-05 Thread laurentgo
GitHub user laurentgo opened a pull request:

https://github.com/apache/drill/pull/365

DRILL-4361: Let FileSystemPlugin FormatCreator class be overridable

Allow for FileSystemPlugin subclasses to customize FormatPlugin by 
injecting their own version of FormatCreator.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/laurentgo/drill 
laurent/filesystem-custom-format-plugins

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/365.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #365






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


where can I find the source code of parquet 1.8.1-drill-r0 ?

2016-02-05 Thread Abdel Hakim Deneche
Hey all,

Does anyone know where the source code of the parquet library currently
used by Drill is?

-- 

Abdelhakim Deneche

Software Engineer

  


Now Available - Free Hadoop On-Demand Training



Re: where can I find the source code of parquet 1.8.1-drill-r0 ?

2016-02-05 Thread Abdel Hakim Deneche
Thanks Jason,

We should publish the source code in maven too. This would make it so much
easier.

On Fri, Feb 5, 2016 at 3:23 PM, Jason Altekruse 
wrote:

>
> https://github.com/dremio/parquet-mr/commit/c74a6b7a0ed7180c5759cea5d2157919c1e80c2b
>
> The current version is just the Parquet master branch where the bytebuffer
> patch was merged, with the one new commit to declare the version number so
> that we could deploy it and not be depending on a SNAPSHOT version.
>
> On Fri, Feb 5, 2016 at 3:19 PM, Abdel Hakim Deneche  >
> wrote:
>
> > Hey all,
> >
> > Does anyone knows where is the source code of the parquet library
> currently
> > used by Drill ?
> >
> > --
> >
> > Abdelhakim Deneche
> >
> > Software Engineer
> >
> >   
> >
> >
> > Now Available - Free Hadoop On-Demand Training
> > <
> >
> http://www.mapr.com/training?utm_source=Email_medium=Signature_campaign=Free%20available
> > >
> >
>



-- 

Abdelhakim Deneche

Software Engineer

  


Now Available - Free Hadoop On-Demand Training



Re: where can I find the source code of parquet 1.8.1-drill-r0 ?

2016-02-05 Thread Jason Altekruse
Agreed, I will put this on my list. Not sure when I'll get to it, but I
know it would be good to just have it set up.

On Fri, Feb 5, 2016 at 3:27 PM, Abdel Hakim Deneche 
wrote:

> Thanks Jason,
>
> We should publish the source code in maven too. This would make it so much
> easier.
>
> On Fri, Feb 5, 2016 at 3:23 PM, Jason Altekruse 
> wrote:
>
> >
> >
> https://github.com/dremio/parquet-mr/commit/c74a6b7a0ed7180c5759cea5d2157919c1e80c2b
> >
> > The current version is just the Parquet master branch where the
> bytebuffer
> > patch was merged, with the one new commit to declare the version number
> so
> > that we could deploy it and not be depending on a SNAPSHOT version.
> >
> > On Fri, Feb 5, 2016 at 3:19 PM, Abdel Hakim Deneche <
> adene...@maprtech.com
> > >
> > wrote:
> >
> > > Hey all,
> > >
> > > Does anyone knows where is the source code of the parquet library
> > currently
> > > used by Drill ?
> > >
> > > --
> > >
> > > Abdelhakim Deneche
> > >
> > > Software Engineer
> > >
> > >   
> > >
> > >
> > > Now Available - Free Hadoop On-Demand Training
> > > <
> > >
> >
> http://www.mapr.com/training?utm_source=Email_medium=Signature_campaign=Free%20available
> > > >
> > >
> >
>
>
>
> --
>
> Abdelhakim Deneche
>
> Software Engineer
>
>   
>
>
> Now Available - Free Hadoop On-Demand Training
> <
> http://www.mapr.com/training?utm_source=Email_medium=Signature_campaign=Free%20available
> >
>


[jira] [Created] (DRILL-4359) EndpointAffinity missing equals method

2016-02-05 Thread Laurent Goujon (JIRA)
Laurent Goujon created DRILL-4359:
-

 Summary: EndpointAffinity missing equals method
 Key: DRILL-4359
 URL: https://issues.apache.org/jira/browse/DRILL-4359
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Laurent Goujon
Assignee: Laurent Goujon
Priority: Trivial


EndpointAffinity is a placeholder class, but has no equals method to allow 
comparison.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] drill pull request: DRILL-4359: Adds equals/hashCode methods to En...

2016-02-05 Thread laurentgo
Github user laurentgo commented on a diff in the pull request:

https://github.com/apache/drill/pull/363#discussion_r52083113
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/EndpointAffinity.java
 ---
@@ -96,6 +96,42 @@ public boolean isAssignmentRequired() {
   }
 
   @Override
+  public int hashCode() {
+final int prime = 31;
+int result = 1;
+long temp;
+temp = Double.doubleToLongBits(affinity);
+result = prime * result + (int) (temp ^ (temp >>> 32));
+result = prime * result + ((endpoint == null) ? 0 : 
endpoint.hashCode());
+return result;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+if (this == obj) {
+  return true;
+}
+if (obj == null) {
+  return false;
+}
+if (!(obj instanceof EndpointAffinity)) {
+  return false;
+}
+EndpointAffinity other = (EndpointAffinity) obj;
+if (Double.doubleToLongBits(affinity) != 
Double.doubleToLongBits(other.affinity)) {
+  return false;
+}
+if (endpoint == null) {
+  if (other.endpoint != null) {
+return false;
+  }
+} else if (!endpoint.equals(other.endpoint)) {
--- End diff --

DrillbitEndpoint is a protobuf Message, so the equals/hashCode methods are 
provided by protobuf API (by AbstractMessage if I'm correct)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4359: Adds equals/hashCode methods to En...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on a diff in the pull request:

https://github.com/apache/drill/pull/363#discussion_r52084085
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/EndpointAffinity.java
 ---
@@ -96,6 +96,42 @@ public boolean isAssignmentRequired() {
   }
 
   @Override
+  public int hashCode() {
+final int prime = 31;
+int result = 1;
+long temp;
+temp = Double.doubleToLongBits(affinity);
+result = prime * result + (int) (temp ^ (temp >>> 32));
+result = prime * result + ((endpoint == null) ? 0 : 
endpoint.hashCode());
+return result;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+if (this == obj) {
+  return true;
+}
+if (obj == null) {
+  return false;
+}
+if (!(obj instanceof EndpointAffinity)) {
+  return false;
+}
+EndpointAffinity other = (EndpointAffinity) obj;
+if (Double.doubleToLongBits(affinity) != 
Double.doubleToLongBits(other.affinity)) {
+  return false;
+}
+if (endpoint == null) {
+  if (other.endpoint != null) {
+return false;
+  }
+} else if (!endpoint.equals(other.endpoint)) {
--- End diff --

I should have looked more carefully, I was looking at the one in the 
proto.beans package, but that isn't being used here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4353: Add HttpSessionListener to release...

2016-02-05 Thread jaltekruse
Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/359#issuecomment-180514935
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] Release Apache Drill 1.5.0 RC2

2016-02-05 Thread Stefán Baxter
+1 (non-binding / not a committer)

   - Built the project on ubuntu/linux
   - Ran our test suite
   - Verified that the jdbc driver works and is properly shaded (we had
   problems with *leakage*)

(I ran into a problem reading a snappy zipped parquet file that was created
with the latest parquet-mr/parquet-avro (1.8.1) but I think that is out of
scope here and I will create a Jira issue once I have tested it better)

Thank you

On Fri, Feb 5, 2016 at 6:56 PM, Jason Altekruse 
wrote:

> Hello all,
>
> I'd like to propose the third release candidate (rc2) of Apache Drill,
> version
> 1.5.0. It covers a total of 55 resolved JIRAs [1]. Thanks to everyone who
> contributed to this release. This release candidate includes a fix for
> DRILL-4353, a major stability problem with the Rest API that was identified
> during the last vote.
>
> The tarball artifacts are hosted at [2] and the maven artifacts are hosted
> at
> [3]. This release candidate is based on commit
> 0a64888ba8d374e94435e2518e81352e677255ad located at [4].
>
> The vote will be open for the next 96 hours (including an extra day as the
> vote is happening over a weekend) ending at 11AM Pacific, February 9th,
> 2016.
>
> [ ] +1
> [ ] +0
> [ ] -1
>
> Here's my vote: +1
>
> Thanks,
> Jason
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332948
> [2] http://people.apache.org/~json/apache-drill-1.5.0.rc2/
> [3] https://repository.apache.org/content/repositories/orgapachedrill-1026
> [4] https://github.com/jaltekruse/incubator-drill/tree/1.5-release-rc2
>


Re: [VOTE] Release Apache Drill 1.5.0 RC2

2016-02-05 Thread Julien Le Dem
+1 (non-binding)
Built and ran the tests on Linux (took 27 min)



On Fri, Feb 5, 2016 at 11:21 AM, Stefán Baxter 
wrote:

> +1 (non-binding / not a committer)
>
>- Built the project on ubuntu/linux
>- Ran our test suite
>- Verified that the jdbc driver works and is properly shaded (we had
>problems with *leakage*)
>
> (I ran into a problem reading a snappy zipped parquet file that was created
> with the latest parquet-mr/parquet-avro (1.8.1) but i think that is out of
> scope here and I will create a Jira issue once I have tested it better)
>
> Thank you
>
> On Fri, Feb 5, 2016 at 6:56 PM, Jason Altekruse 
> wrote:
>
> > Hello all,
> >
> > I'd like to propose the third release candidate (rc2) of Apache Drill,
> > version
> > 1.5.0. It covers a total of 55 resolved JIRAs [1]. Thanks to everyone who
> > contributed to this release. This release candidate includes a fix for
> > DRILL-4353, a major stability problem with the Rest API that was
> identified
> > during the last vote.
> >
> > The tarball artifacts are hosted at [2] and the maven artifacts are
> hosted
> > at
> > [3]. This release candidate is based on commit
> > 0a64888ba8d374e94435e2518e81352e677255ad located at [4].
> >
> > The vote will be open for the next 96 hours (including an extra day as
> the
> > vote is happening over a weekend) ending at 11AM Pacific, February 9th,
> > 2016.
> >
> > [ ] +1
> > [ ] +0
> > [ ] -1
> >
> > Here's my vote: +1
> >
> > Thanks,
> > Jason
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12332948
> > [2] http://people.apache.org/~json/apache-drill-1.5.0.rc2/
> > [3]
> https://repository.apache.org/content/repositories/orgapachedrill-1026
> > [4] https://github.com/jaltekruse/incubator-drill/tree/1.5-release-rc2
> >
>



-- 
Julien


[jira] [Created] (DRILL-4362) MapR profile - DRILL-3581 breaks build

2016-02-05 Thread Sudheesh Katkam (JIRA)
Sudheesh Katkam created DRILL-4362:
--

 Summary: MapR profile - DRILL-3581 breaks build
 Key: DRILL-4362
 URL: https://issues.apache.org/jira/browse/DRILL-4362
 Project: Apache Drill
  Issue Type: Bug
Reporter: Sudheesh Katkam
Assignee: Sudheesh Katkam


The new rule in [this 
commit|https://github.com/apache/drill/commit/422c5a83b8e69e4169d3ebc946401248073c8bf8#diff-8f76e2ed78ea4afabd0d911a33fec0fc]
 that adds log4j to enforcer exclusions breaks the mapr build (*mvn clean 
install -DskipTests -Pmapr*).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] drill pull request: DRILL-4362: Exclude log4j for hbase dependency...

2016-02-05 Thread sudheeshkatkam
GitHub user sudheeshkatkam opened a pull request:

https://github.com/apache/drill/pull/366

DRILL-4362: Exclude log4j for hbase dependency under mapr profile



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudheeshkatkam/drill DRILL-4362

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #366


commit 857d0420699a70ad44b152375e451f8aad4d7acd
Author: Sudheesh Katkam 
Date:   2016-02-05T23:36:52Z

DRILL-4362: Exclude log4j for hbase dependency under mapr profile




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request: DRILL-4362: Exclude log4j for hbase dependency...

2016-02-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/366


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (DRILL-4362) MapR profile - DRILL-3581 breaks build

2016-02-05 Thread Sudheesh Katkam (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudheesh Katkam resolved DRILL-4362.

Resolution: Fixed

> MapR profile - DRILL-3581 breaks build
> --
>
> Key: DRILL-4362
> URL: https://issues.apache.org/jira/browse/DRILL-4362
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Sudheesh Katkam
>Assignee: Sudheesh Katkam
>
> The new rule in [this 
> commit|https://github.com/apache/drill/commit/422c5a83b8e69e4169d3ebc946401248073c8bf8#diff-8f76e2ed78ea4afabd0d911a33fec0fc]
>  that adds log4j to enforcer exclusions breaks the mapr build (*mvn clean 
> install -DskipTests -Pmapr*).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Apache Drill 1.4 and TDE

2016-02-05 Thread William Witt
Kalyan,

Did you find a resolution to this problem?

> On Jan 21, 2016, at 8:27 AM, Ghosh, Kalyan  wrote:
> 
> Hi,
> Can Apache Drill 1.4 query Hive external tables where the HDFS is encrypted 
> by TDE?
> 
> Querying Hive from drill-embeded, I get this error:
> 
> 09:15:05.155 [ingestsvc:task-delegate-thread] ERROR 
> o.a.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> Error: SYSTEM ERROR: IOException: No KeyProvider is configured, cannot access 
> an encrypted file
> 
> Fragment 1:2
> 
> [Error Id: 583d0eda-6e6a-4259-8541-59b9f9a82635 on localhost:31010] 
> (state=,code=0)
> 
> 
> Any direction would be appreciated.
> 
> The information contained in this message is proprietary and/or confidential. 
> If you are not the intended recipient, please: (i) delete the message and all 
> copies; (ii) do not disclose, distribute or use the message in any manner; 
> and (iii) notify the sender immediately. In addition, please be aware that 
> any message addressed to our domain is subject to archiving and review by 
> persons other than the intended recipient. Thank you.