It sounds like we should not have written to the filesystem unless we were
connected to a single host or a distributed filesystem. The problem is that
the files we wrote will not be associated together the way they would be in
a single filesystem (even a distributed one that would have a common
namespace).
My suggestion would be that the local file system plugin be disabled in
distributed mode. With multiple Drillbits and a centralized plugin for the
local file system, consistent behavior cannot be expected.
It should either be disabled when distributed mode is detected, or we should
add support for
I have seen some discussions of the Parquet storage format suggesting
that sorting time-series data on the time key prior to converting to the
Parquet format will improve range-query efficiency via the min/max values on
column chunks - perhaps analogous to skip indexes?
Is this a recommended practice?
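The pruning effect described above can be sketched in plain Python (an illustration of the concept only, not Parquet's actual reader; `build_row_groups` and `range_scan` are invented helpers):

```python
# Illustration only: simulates Parquet-style column-chunk (row-group) min/max
# statistics. A range query can skip any group whose [min, max] interval does
# not overlap the predicate; sorting on the time key makes those intervals
# tight and disjoint, so far more groups can be skipped.
import random

def build_row_groups(values, group_size):
    """Chunk values and record per-chunk min/max, like column-chunk stats."""
    chunks = [values[i:i + group_size] for i in range(0, len(values), group_size)]
    return [(min(c), max(c), c) for c in chunks]

def range_scan(row_groups, lo, hi):
    """Return (groups actually read, matching values) for the range [lo, hi]."""
    scanned, hits = 0, []
    for gmin, gmax, rows in row_groups:
        if gmax < lo or gmin > hi:
            continue  # pruned via stats alone; no data read
        scanned += 1
        hits.extend(v for v in rows if lo <= v <= hi)
    return scanned, hits

timestamps = list(range(100))
shuffled = timestamps[:]
random.seed(0)
random.shuffle(shuffled)

scanned_sorted, hits_sorted = range_scan(build_row_groups(sorted(timestamps), 10), 20, 29)
scanned_shuffled, hits_shuffled = range_scan(build_row_groups(shuffled, 10), 20, 29)
# With sorted data the range [20, 29] falls in a single group; with shuffled
# data it is smeared across groups, so far more groups must be read.
```

Both layouts return the same matching rows; only the number of groups touched differs, which is where the range-query speedup comes from.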
We are using MapR Drill 1.0 and configured a MongoDB storage plugin as below.
{
  "type": "mongo",
  "connection": "mongodb://uid:pwd@host:port/dbname",
  "enabled": true
}
We are able to connect through the CLI and Drill Explorer, but it does not
display any databases, and queries against any collection fail.
Hi Jacques,
I saw your reply on the mailing list and am responding to your question.
Can you please let me know whether you want me to execute the failing operation
now and send the log files, or first set up some more debug options and then
send the log file?
Thanks,
Mano
It would be great if you could give some more context: the failing queries,
logs, environment, etc.
-Hanifi
On Mon, Jun 1, 2015 at 10:47 AM, Rangaswamy, Manoharan mran...@qualcomm.com
wrote:
Hi Jacques,
I saw your reply in the mail list and responding to your question.
Can you please let me know
Hi Nishith,
As far as I know, there is no documentation on that. Hopefully the function
names are relatively self-explanatory; if not, feel free to ask on this list
for clarification.
Norris
-Original Message-
From: Nishith Maheshwari [mailto:nsh...@gmail.com]
Have a look at QuerySubmitter
https://github.com/hnfgns/incubator-drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/client/QuerySubmitter.java.
It handles the boilerplate of submitting queries on top of DrillClient. All
that remains is to attach a result listener to perform your custom logic.
Adding to Hanifi’s comment: look at the QueryWrapper#run method and
QueryWrapper$Listener
https://github.com/hnfgns/incubator-drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java
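For orientation, the listener pattern those classes use can be sketched in plain Python (a hypothetical stand-in, not Drill's actual API; `ResultListener` and `submit_query` are invented names):

```python
# Hypothetical sketch of the async result-listener pattern: a client submits
# a query, and a listener callback collects result batches until a
# completion signal arrives, at which point the caller can consume them.
import threading

class ResultListener:
    """Collects result batches and blocks until the query completes."""
    def __init__(self):
        self._batches = []
        self._done = threading.Event()

    def on_batch(self, batch):
        # Custom per-batch logic (e.g. printing rows) would go here.
        self._batches.append(batch)

    def on_completed(self):
        self._done.set()

    def await_results(self, timeout=5.0):
        self._done.wait(timeout)
        return self._batches

def submit_query(sql, listener):
    # Stand-in for an async query submission: a worker thread produces
    # two fake batches, then signals completion.
    def run():
        listener.on_batch([("row1",)])
        listener.on_batch([("row2",)])
        listener.on_completed()
    threading.Thread(target=run).start()

listener = ResultListener()
submit_query("SELECT * FROM t", listener)
batches = listener.await_results()
```

The real Java classes differ in detail, but the shape is the same: the submitter owns the boilerplate, and your logic lives in the listener callbacks.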
On 6/1/15, 12:14 PM, Matt bsg...@gmail.com wrote:
Segmenting data into directories in HDFS would require clients to
structure queries accordingly, but would there be benefit in reduced
query time by limiting scan ranges?
Yes. I am just a newbie user, but I have already seen that work with
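The directory-pruning idea in the question can be sketched in plain Python (a made-up illustration of the planner behavior, not Drill's actual implementation; the paths and `scan` helper are invented):

```python
# Hypothetical sketch: if files live under paths like /data/<year>/<month>/,
# a query filtering on year and month only needs to list and scan the
# matching subdirectories, shrinking the scan range exactly as suggested.
files = {
    "/data/2015/05/a.parquet": [("2015-05-01", 1)],
    "/data/2015/06/b.parquet": [("2015-06-01", 2)],
    "/data/2015/06/c.parquet": [("2015-06-02", 3)],
}

def scan(files, year=None, month=None):
    """Scan only the files whose directory matches the partition predicate."""
    scanned, rows = 0, []
    for path, contents in files.items():
        parts = path.split("/")  # ['', 'data', year, month, filename]
        if year and parts[2] != year:
            continue  # whole directory pruned, never opened
        if month and parts[3] != month:
            continue
        scanned += 1
        rows.extend(contents)
    return scanned, rows

scanned, rows = scan(files, year="2015", month="06")
# Only the two files under /data/2015/06/ are opened.
```

The trade-off is exactly the one raised above: clients must phrase predicates against the directory columns for the pruning to kick in.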
Here is the error message from the log file for now. I will try to upload the
full log files during further troubleshooting if needed.
***
2015-06-01 14:52:15,173 [Client-1] INFO
o.a.d.j.i.DrillResultSetImpl$ResultsListener - Query failed:
Could you execute the failed query, grep the exception mentioning mongo, and send it to us?
On Tue, Jun 2, 2015 at 5:15 AM, Rangaswamy, Manoharan mran...@qualcomm.com
wrote:
Here is the error message from the log file for now. I will try to upload
if they are huge enough during further troubleshooting.
This is not spam. This is required to troubleshoot a product issue.
Thanks,
Mano
Sent from my Verizon 4G LTE Smartphone
-- Original message--
From: request.allow.sp...@qualcomm.com
Date: Mon, Jun 1, 2015 4:46 PM