Chun Chang created DRILL-4532:
-
Summary: remove incorrect log message
Key: DRILL-4532
URL: https://issues.apache.org/jira/browse/DRILL-4532
Project: Apache Drill
Issue Type: Bug
Jacques,
Would it be possible for you guys (or anyone in the community) to execute
the tests on Apache HDFS and report back the list of failed tests? I may
spend time to fix them or look for community contributions.
Thanks,
-Chun
On Wed, Mar 23, 2016 at 5:20 PM, Chun Chang
Happy to help. I will stay involved on the Yarn side too. My hope is that any
improvements made to benefit Drill-on-YARN can be abstracted rather than
being YARN-only features, instead creating hooks to do things (like draining
nodes we wish to shut down, or scaling memory and
Yes, to do authenticated requests, I create a session object with Python
requests, post creds to the login page, get the session cookies, and then
run the queries. I do not believe Drill supports just using basic auth.
This, as far as I can tell, is actually a good thing because it forces you to
manage
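A minimal sketch of the session-based flow described above. The endpoint paths (`/j_security_check`, `/query.json`), the form field names (`j_username`, `j_password`), and the JSON body shape are assumptions about Drill's web UI and REST API, not taken from this thread; adjust them for your deployment.

```python
def login_form(user, password):
    """Credential form posted to the login page (assumed field names)."""
    return {"j_username": user, "j_password": password}

def query_body(sql):
    """JSON body for a REST query (assumed shape)."""
    return {"queryType": "SQL", "query": sql}

def run_query(base, user, password, sql):
    """Log in once, then run an authenticated query on the same session.

    requests is imported lazily so the payload helpers above stay usable
    without the third-party dependency installed.
    """
    import requests  # pip install requests
    with requests.Session() as s:
        # Posting creds stores the session cookie on the Session object,
        # so the subsequent query request is authenticated automatically.
        s.post(base + "/j_security_check", data=login_form(user, password))
        r = s.post(base + "/query.json", json=query_body(sql))
        r.raise_for_status()
        return r.json()
```

Usage would look like `run_query("http://localhost:8047", "alice", "secret", "SELECT 1")`, where the host and port are hypothetical.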
Github user jacques-n commented on the pull request:
https://github.com/apache/drill/pull/443#issuecomment-200592038
I don't know the new semantics of ConnectionFactory. Drill now has plugin
lifecycle events so we should close out an hbase connection if a storage plugin
is changed or
Can you confirm that you've successfully executed the tests on Apache HDFS
2.7.1? I note that you have modified the plans to remove the maprfs prefix;
however, you have kept the individual file names. I believe the ordering of
these files is not the same in HDFS versus MapRFS, and thus tests will
Github user jacques-n commented on a diff in the pull request:
https://github.com/apache/drill/pull/443#discussion_r57258088
--- Diff:
contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/config/HBasePersistentStore.java
---
@@ -25,36 +25,40 @@
import
Github user adityakishore commented on the pull request:
https://github.com/apache/drill/pull/443#issuecomment-200580166
`jcl-over-slf4j` works as long as the application uses only the surface
logging APIs.
Unfortunately, HBase uses some [internal
Hi drillers,
MapR recently made changes to the automation framework* to make it easier
to run against an HDFS cluster. Please refer to the updated README file for
details. Let us know if you encounter any issues.
Thanks,
-Chun
*https://github.com/mapr/drill-test-framework
Github user jacques-n commented on the pull request:
https://github.com/apache/drill/pull/443#issuecomment-200576391
The jcl-over-slf4j bridge should be all that is necessary. Is HBase doing
something weird that means that doesn't work? Or is that just missing for some
reason?
I have submitted a pull request[1] to add support for HBase 1.x.
[1] https://github.com/apache/drill/pull/443
On Mon, Mar 21, 2016 at 3:13 PM, Jacques Nadeau wrote:
> +1
>
> --
> Jacques Nadeau
> CTO and Co-Founder, Dremio
>
> On Mon, Mar 21, 2016 at 1:18 PM, Aditya
Hi,
Is there anyone here that provides professional services for Drill?
We are trying to optimize our system in order to speed up smaller queries,
aiming for sub-second response times when dealing with < 100 million
records from Parquet.
We are, for example, looking at profiles where
If the main complexity comes from UDFs, i.e., the user has to implement each
permutation of nullability, I feel it might be possible to allow users
to provide only one implementation (nullable input), and have Drill change its
function resolution logic and run-time code-gen logic to make that
work. That is,
Github user adeneche closed the pull request at:
https://github.com/apache/drill/pull/432
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user hnfgns commented on the pull request:
https://github.com/apache/drill/pull/432#issuecomment-200498974
+1
---
GitHub user adeneche opened a pull request:
https://github.com/apache/drill/pull/442
DRILL-3714: Query runs out of memory and remains in CANCELLATION_REQUESTED
state until drillbit is restarted
RpcBus.ChannelClosedHandler calls CoordinationQueue.channelClosed() on
Hmm. I may not have expressed my thoughts clearly.
What I was suggesting was that 'non-null' data exists in all data sets. (I
have at least two data sets from users running Drill in production (sorry, I
cannot share the data) that have required fields in parquet files.) The
fields may not be marked as
Jacques,
Doesn't Drill detect the type of each column within each batch? If so,
does it (or could it) also detect that a particular column is not null
(again, within the batch)?
You may not generate not-null data, but a lot of data is not-null.
Let's not be too hasty to dismiss this as a
Github user yufeldman commented on the pull request:
https://github.com/apache/drill/pull/368#issuecomment-200495039
@amansinha100 and @hnfgns - could you please review this PR and provide
your feedback
---
Github user chunhui-shi commented on a diff in the pull request:
https://github.com/apache/drill/pull/430#discussion_r57206301
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/MurmurHash3.java
---
@@ -0,0 +1,264 @@
+/**
+ * Licensed to the Apache
Github user chunhui-shi commented on a diff in the pull request:
https://github.com/apache/drill/pull/430#discussion_r57203072
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/MurmurHash3.java
---
@@ -0,0 +1,264 @@
+/**
+ * Licensed to the Apache
Github user chunhui-shi commented on a diff in the pull request:
https://github.com/apache/drill/pull/430#discussion_r57202442
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/HashHelper.java
---
@@ -17,47 +17,77 @@
*/
package
Github user julianhyde commented on the pull request:
https://github.com/apache/drill/pull/436#issuecomment-200448075
Yes, I think this would be good in Calcite. Thanks for offering.
DESCRIBE is in the SQL standard (the latest draft, anyway; I didn't check
any others) with a
I agree that we should focus on real benefits versus theories.
Reduction in code complexity is a real benefit. Performance benefits from
having required types is theoretical. Dot drill files don't exist so they
should have little bearing on this conversation.
We rarely generate required data.
I have narrowed down the problem further. The ParquetReader class uses the
ParquetFileReader; that is the class that calls
FSDataInputStream.read(ByteBuffer) to read all the data.
However if you look at the ParquetFileReader found in the git repo
Hey Paul and Jacques, great discussion here. Paul, I believe we met a week
or two ago on a call.
I have been running Drill successfully and powerfully (multi-tenant, etc.)
using Apache Mesos and Marathon. While I didn't write a framework for Drill
in Mesos, Marathon does give some very nice
Thank you for your help, Aditya! We were able to work around this.
We replaced the included hbase client jar in the Drill distributable tarball to
work around the bug in https://issues.apache.org/jira/browse/HBASE-13262. The
fix on the client side is in >= 0.98.12. We initially tested with
Whenever drill encounters a corrupted parquet file, it will stop processing
the query.
To work around this issue, I'm trying to write a simple tool to detect
corrupted parquet files so that we can remove them from the pool of files
drill will query.
I'm basically doing a HEAD command like was
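One cheap way to sketch such a detection tool: Parquet files begin and end with the 4-byte magic `PAR1`, and the last 8 bytes are a 4-byte little-endian footer (metadata) length followed by that magic. The check below only validates this framing, so it catches truncated files but not files with corrupt data pages; the function name is hypothetical.

```python
import struct

MAGIC = b"PAR1"

def looks_like_parquet(path):
    """Sanity-check a file's parquet framing: header magic, footer magic,
    and a footer length that fits inside the file. This catches truncated
    or obviously mangled files, not corruption within data pages."""
    with open(path, "rb") as f:
        header = f.read(4)
        f.seek(0, 2)                  # seek to end to get the file size
        size = f.tell()
        if size < 12 or header != MAGIC:
            return False
        f.seek(-8, 2)                 # last 8 bytes: footer length + magic
        footer_len = struct.unpack("<I", f.read(4))[0]
        if f.read(4) != MAGIC:
            return False
        # Footer metadata plus the trailing 8 bytes must fit after the header.
        return footer_len + 8 <= size - 4
```

A tool like the one described could walk the file pool, run this check, and move failing files aside before Drill queries the directory.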