We have a Spark job generating Parquet files and uploading them to S3. Drill
won't query these files unless I delete the _metadata and _common_metadata files
from S3. I'd rather not modify the Spark job or have to delete these files. Is
there a way to get Drill to ignore these files or work around this?
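(For reference, if changing the job's configuration ever becomes acceptable, the
standard parquet-mr setting below stops Spark from writing the summary files in
the first place. This is only a fallback, since the poster would rather not touch
the Spark job; the property name is the stock parquet-mr key, passed through
Spark's `spark.hadoop.*` prefix.)

```
# Disables writing _metadata and _common_metadata when Spark writes Parquet
spark.hadoop.parquet.enable.summary-metadata=false
```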
(I should find where this is and put it in my calendar or something... )
hangout starting now:
https://hangouts.google.com/hangouts/_/event/ci4rdiju8bv04a64efj5fedd0lc
A case statement along with aggregation might do the trick -
http://stackoverflow.com/questions/13372276/simple-way-to-transpose-columns-and-rows-in-sql/13377114#13377114
On Mon, Jun 13, 2016 at 11:35 PM, Sanjiv Kumar wrote:
> Then, how to convert row data to column using Apache Drill ?
>
>
>
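A minimal sketch of the CASE-plus-aggregation idiom from the linked answer, using
a hypothetical `sales` table (the table and column names here are invented purely
for illustration):

```sql
-- Transpose rows to columns: one output column per month value,
-- produced by a CASE expression inside an aggregate.
SELECT product,
       SUM(CASE WHEN `month` = 'Jan' THEN amount ELSE 0 END) AS jan_total,
       SUM(CASE WHEN `month` = 'Feb' THEN amount ELSE 0 END) AS feb_total,
       SUM(CASE WHEN `month` = 'Mar' THEN amount ELSE 0 END) AS mar_total
FROM sales
GROUP BY product;
```

The limitation is that the set of output columns has to be known and written out
in advance; there is no dynamic pivot here.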
Attendees: Parth, Aman, Hanumath, John O, Jason, Lau Sennels, Vitalii,
Arina, Gautam, Jinfeng, Paul, Subbu Srinivasan, Zelaine
1) John indicated he's running into a problem where Drill hangs and becomes
unresponsive. Parth indicated he suspects Drill is hitting some type of
memory limit. In gene
Can you say a little bit more about what you mean by turning columns into
rows?
Can you provide an example of the data you have (or something shaped like
the data you have)?
On Mon, Jun 13, 2016 at 11:35 PM, Sanjiv Kumar wrote:
> Then, how to convert row data to column using Apache Drill ?
>
On Tue, Jun 14, 2016 at 11:23 AM, Zelaine Fong wrote:
> 3) Jason walked through his new operator unit test framework. The
> motivation for this is to be able to test operators in isolation without a
> SQL statement and large data sets. This came about as a result of trying
> to fix issues with
This is what I have thus far... I can provide more complete logs on a one
on one basis.
The cluster was completely mine, with fresh logs. I ran a CTAS query on a
large table that has over 100 fields. This query works well in other cases;
however, I was experimenting with the block size, both in MapR FS and Drill.
I can see how the GC errors would cause the world to stop spinning. The GC
itself is not able to allocate memory, which is not a great place to be.
Sudheesh saw something similar in his branch. @Sudheesh, is it possible we
have a memory leak in master?
On Tue, Jun 14, 2016 at 11:37 AM, John Ome
John, can you log a JIRA and attach all the logs you have to the JIRA?
On Tue, Jun 14, 2016 at 11:43 AM, Parth Chandra
wrote:
> I can see how the GC errors would cause the world to stop spinning. The GC
> itself is not able to allocate memory, which is not a great place to be.
>
> Sudheesh s
I logged https://issues.apache.org/jira/browse/DRILL-4723
Specific logs can be requested. I may have to sanitize some of the logs, so
if it's requested in the case itself, let me know, and I will sanitize and
post. (MapR can get the full dump if they want to play through things...
NDAs and such :)
Nevermind, this is because I changed the directory structure.
HBase 1.x support has been merged and is available in latest 1.7.0-SNAPSHOT
builds.
On Wed, Jun 1, 2016 at 1:23 PM, Aditya wrote:
> Thanks Jacques for promptly reviewing my long series of patches!
>
> I'm planning to merge the HBase 1.x support some time in next 48 hours.
>
> If anyone else is i
Hi,
I just followed the instructions in the Apache Drill documentation to connect to
S3.
When the S3 plugin is disabled, I could query the employee.json file.
But once I enabled the S3 plugin and added my bucket to the connection, whatever
query I sent, I got an error saying that my access and secret keys were invalid.
Did you add the AWS credentials in the file conf/core-site.xml in your Drill
install directory?
And then copy this core-site.xml to all running Drill nodes and restart the
cluster.
On 15 Jun 2016 06:34, "Edward Chen" wrote:
> Hi,
>
> I just followed the instruction on the Apache Drill documentat
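A sketch of what that core-site.xml entry looks like, with placeholders standing
in for the real keys. The property names below assume the s3a connector; older
setups used fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey instead, so check
which scheme your S3 storage plugin's connection string uses (s3a:// vs s3://):

```xml
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```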