Case-sensitive I think, try all caps.
On Monday, June 8, 2015, Kristine Hahn wrote:
Set the UID and PWD properties in the connection string.
http://tshiran.github.io/drill/docs/odbc-configuration-reference/ has a
first draft of the docs that includes a description of the properties. Some
of these docs, esp. the Linux and Windows installation pages, are incomplete.
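A minimal sketch of carrying the UID and PWD properties in a pyodbc connection string, as described above. The DSN name and the mapr/mapr credentials are placeholders (they match the sandbox DSN used later in this thread); substitute your own:

```python
# Minimal sketch: put the UID and PWD properties into an ODBC
# connection string. "Drill 1.0 sandbox" and mapr/mapr are
# placeholders, not authoritative values.
def drill_conn_str(dsn, user, password):
    """Build a pyodbc connection string with UID/PWD set."""
    return "DSN={};UID={};PWD={}".format(dsn, user, password)

conn_str = drill_conn_str("Drill 1.0 sandbox", "mapr", "mapr")
# Connecting requires the Drill ODBC driver and a running drillbit:
# import pyodbc
# conn = pyodbc.connect(conn_str, autocommit=True)
print(conn_str)  # DSN=Drill 1.0 sandbox;UID=mapr;PWD=mapr
```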
I use a DSN in a connection string like this with pyodbc in my
notebooks...
MY_DSN = "DSN=Drill 1.0 sandbox;UID=mapr;PWD=mapr"
conn = pyodbc.connect(MY_DSN, autocommit=True)
On Mon, Jun 8, 2015 at 9:29 PM, Christopher Matta wrote:
That's actually my notebook, which I'm trying to update to use with Drill
authentication. Yes, I'm using the DSN (the first argument in the connect
function).
Chris Matta
cma...@mapr.com
215-701-3146
On Mon, Jun 8, 2015 at 10:24 PM, Matt wrote:
Does using a DSN as per this notebook help?
http://nbviewer.ipython.org/github/cjmatta/drill_ipython_notebook/blob/master/Twitter%20Drill%20Pandas.ipynb
https://github.com/cjmatta/drill_ipython_notebook
On 8 Jun 2015, at 22:20, Christopher Matta wrote:
Does anyone know what the expected key names are for userid and password
for an ODBC connection? I was using pyodbc to connect to Drill pre-1.0 but
now with authentication enabled I haven’t figured out how to do it.
Relevant errors:
conn = pyodbc.connect('Driver=/opt/mapr/drillodbc/lib/universal
I've been working on an updated version of the 3.0 patch from the original
plugin guys. I'll try to get it uploaded/merged soon.
I'm still seeing connection issues on larger workloads, so I'm waiting to
post until I work through that. Adam, have you had any problems when doing
very large scale queries?
Sorry - on a side note, I forgot to mention that this only occurred when
connecting to a replica set in 3.0. Connecting to a single 3.0 instance
did not have the problem.
On Tue, Jun 9, 2015 at 10:06 AM, Adam Gilmore wrote:
Just my input here guys. We experienced the exact same issue due to the
fact that Drill is still using the 2.x Mongo Java driver. Mongo 3.0's
server does not play nicely with this driver (you cannot see any
collections).
If it does turn out that you're using Mongo 3.0, then you need to be using
a 3.0-compatible driver.
Hi Jacques,
We did create a role similar to the one below in our non-prod instance, and it
worked for connecting to Drill. I will deploy this to prod in a couple of days
and let you know if I run across any issues.
Thank you for all your help,
Mano
Currently, the only export path is using CTAS. The default format is Parquet,
and you can use the setting you described above to get CSV. Is there a
particular piece of functionality that you would expect export to provide
that CTAS does not?
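The ALTER SESSION + CTAS flow just described can be sketched as the statement text a client would send. The dfs.tmp target table name here is hypothetical; cp.`employee.json` is the sample data set bundled with Drill:

```python
# Sketch of exporting via CTAS: switch the session's output format
# to CSV, then CREATE TABLE AS SELECT. Table/path names are examples.
statements = [
    "ALTER SESSION SET `store.format` = 'csv'",
    "CREATE TABLE dfs.tmp.`employee_export` AS SELECT * FROM cp.`employee.json`",
]

# With a live connection (e.g. pyodbc, as elsewhere in this thread),
# each statement would be run in order:
# for stmt in statements:
#     cursor.execute(stmt)
for stmt in statements:
    print(stmt)
```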
On Mon, Jun 8, 2015 at 12:41 PM, James Jones wrote:
Does Apache Drill have an export function?
Or does one need to run ALTER SESSION SET `store.format` = 'csv' and then do a
CTAS query?
Apologies if I overlooked in the documentation.
James
Hey There,
You need to create a new storage plugin instance rather than trying to
embed two in the same plugin instance. Go back to the
http://<drillbit-host>:8047/storage
path and then type a new name for your second mongo database, maybe
mongodev, and then add the configuration for that one. Note that you'll
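A second plugin instance (registered under the new name, e.g. mongodev) would carry its own connection string, in the same shape as the config shown later in this thread; host2 is a placeholder:

```json
{
  "type": "mongo",
  "connection": "mongodb://host2:27017/",
  "enabled": true
}
```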
Hello Drillers,
I have been working on DRILL-3209, which aims to speed up reading from hive
tables by re-planning them as native Drill reads in the case where the
tables are backed by files that have available native readers. This will
begin with parquet and delimited text files.
To provide the s
I have a mongo plugin configured to connect to an instance through the
Drill UI as below
{
  "type": "mongo",
  "connection": "mongodb://host1:27017/",
  "enabled": true
}
Is it possible to setup mongo plugin to connect to multiple mongo
instances? If so, can you please point me to the documentat
Adding the user list back to the thread.
Satish, a couple of things.
First off, the text reader doesn't currently implement what we call
skip-all semantics. You can think of this as a way to avoid reading the
data if you're only asking for a count. As such, you'll actually get faster
performance if
Please follow the link to configure drill with hive storage plugin
https://drill.apache.org/docs/hive-storage-plugin/
On Mon, Jun 8, 2015 at 2:50 PM, weiw...@brandbigdata.com <
weiw...@brandbigdata.com> wrote:
The following is the Hive conf:
How do I register the Hive plugin?
weiw...@brandbigdata.com