One quick hint:
The file $DRILL_HOME/bin/hadoop-excludes.txt has a list of jars that ARE NOT
loaded during the bootstrap of Drill … and jets3t is one of them. Commenting
out the jets3t line in that file and restarting the drill-bits will at least
get you past the first java dependency problem.
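That edit can be scripted; here's a minimal sketch, demonstrated on a scratch copy (point EXCLUDES at the real $DRILL_HOME/bin/hadoop-excludes.txt on each node, and note the stand-in file contents are my assumption of the one-jar-per-line format):

```shell
# Work on a scratch copy; on a real node use:
#   EXCLUDES="$DRILL_HOME/bin/hadoop-excludes.txt"
EXCLUDES=$(mktemp)
printf 'jets3t\nsome-other-jar\n' > "$EXCLUDES"   # stand-in contents

cp "$EXCLUDES" "$EXCLUDES.bak"                    # keep a backup
sed -i 's/^jets3t/# jets3t/' "$EXCLUDES"          # comment out the entry

grep '# jets3t' "$EXCLUDES"
# Afterwards, restart the drill-bits, e.g. $DRILL_HOME/bin/drillbit.sh restart
```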
One more thing to remember ... S3 is an object store, not a file system in the
traditional sense. That means that when a drill-bit accesses a file from S3,
the whole thing is transferred ... whether it's 100 bytes or 100 megabytes.
The advantages of the parquet format are far more obvious in
Hao Zhu wrote:
>
> We just need to copy that profile in corresponding mfs location, then you
> can view the profile from the UI:
> http://:8047/profiles/
>
> It is working fine for me.
>
> Thanks,
> Hao
>
>
> On Tue, Jul 14, 2015 at 3:15 PM,
I decided to transition to shared profile locations for my drill cluster. I
updated the drill-override.conf with a blobroot setting
sys.store.provider.zk.blobroot: "maprfs:///tmp/drill/profiles"
and made sure the directory existed with permissions for everyone.
I restarted the drill-bits.
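For reference, the two pieces look roughly like this (a sketch: the drill.exec wrapper block is an assumption based on a stock drill-override.conf, and the wide-open mode is just the simplest way to give everyone access):

```
drill.exec: {
  sys.store.provider.zk.blobroot: "maprfs:///tmp/drill/profiles"
}
```

with something like `hadoop fs -mkdir -p /tmp/drill/profiles && hadoop fs -chmod 777 /tmp/drill/profiles` run beforehand, then a restart of the drill-bits.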
You'll need to make sure of two things:
WASB jars are included by default with HDFS 2.6 and later. If you're using an
earlier version (or simply a stand-alone installation of Drill), you'll need to
grab the jar files and put them in the class path.
Your Azure credentials must be in core-site.xml.
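The credential property follows this pattern (the account name and key are placeholders; the property-name shape is per the hadoop-azure WASB driver):

```xml
<property>
  <name>fs.azure.account.key.YOURACCOUNT.blob.core.windows.net</name>
  <value>YOUR_STORAGE_ACCOUNT_KEY</value>
</property>
```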
Check the Amazon sites, but I believe that the EU-Central region has some
constraints about data migration. It may well be that S3 buckets (even those
defined as “public”) will not expose their contents to instances OUTSIDE the
EU-Central region.
— David
On Jun 13, 2015, at 12:53 AM, Subrat
The downside of that isolation is that the storage plugin configuration of the
primary cluster is lost.
If you connect DIRECTLY to the drill-bit rather than via zookeeper, then that
drill-bit will be the foreman of your queries. For small data sets, the
foreman will not involve any other drill-bits.
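For the JDBC/sqlline side, the two connection modes look like this (hosts and cluster-id are placeholders; 31010 and 2181 are just the stock Drill and ZooKeeper defaults):

```
# Through ZooKeeper -- any drill-bit may be chosen as foreman:
sqlline -u "jdbc:drill:zk=zk1:2181,zk2:2181,zk3:2181/drill/drillbits1"

# Directly to one drill-bit -- that node is always the foreman:
sqlline -u "jdbc:drill:drillbit=drill-node1:31010"
```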
For many Linux services, this can be an unstable configuration. Better to use
ifconfig eth0
to identify the configured IP address and add that entry to /etc/hosts.
Some DHCP client packages will do this automatically, since the IP can change
with every reboot.
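A sketch of that lookup; the awk parsing is shown against a canned ifconfig line, since the field layout varies between net-tools versions (treat the pattern, and the sample, as assumptions):

```shell
# On a live box, replace the canned line with the output of: ifconfig eth0
SAMPLE='eth0  inet 10.0.0.12  netmask 255.255.255.0  broadcast 10.0.0.255'

# Grab the token that follows "inet"
IP=$(printf '%s\n' "$SAMPLE" | awk '{for (i=1; i<NF; i++) if ($i == "inet") print $(i+1)}')
echo "$IP"    # 10.0.0.12

# Then, as root, pin the name in /etc/hosts:
#   echo "$IP $(hostname -f) $(hostname -s)" >> /etc/hosts
```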
— David
On May 26, 201
The current version of Hadoop in EMR (both Apache and MapR) does not support
the IAM authentication to S3 without the credentials in core-site. I believe
the support has been integrated into Hadoop 2.6 … so when the EMR distributions
upgrade to that level, the access you request should be supported.
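Until then, the explicit credentials go in core-site.xml; for the older s3n driver the properties look like this (values are placeholders, of course):

```xml
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```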
And that works … thank you, Kristine!
On Apr 28, 2015, at 11:30 AM, Kristine Hahn wrote:
> The new docs http://tshiran.github.io/drill/docs/lexical-structure/ will
> say:
> First paragraph, last bullet: /* This is a comment. */
>
> Kristine Hahn
> Sr. Technical Writer
> 415-497-8107 @krishahn
Glad I’m not the only one fighting with this :)
— David
On Apr 28, 2015, at 11:28 AM, Ramana Inukonda wrote:
> Sorry,
>
> Scratch that. Somehow had a notion that worked. Does not seem to be the
> case.
>
> Regards
> Ramana
>
>
> On Tue, Apr 28, 2015 at 11:25 AM, Ramana Inukonda
> wrote:
>
Ganesh,
When you say the keys are “custom controlled”, does that mean that only special
logic within your Java application allows the data to be properly accessed?
There are several mechanisms within the S3 API such that encryption/decryption
occur transparently to the application. If your
I believe you’ll need to put the custom jar in Drill’s classpath (it does not
include $HIVE_HOME/lib by default, since there’s no guarantee it will be on all
cluster nodes).
I’ve been successful putting the extra libraries I need for object-store access
into $DRILL_HOME/jars/3rdparty.
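In script form, roughly (demonstrated in a scratch directory; on a real node DRILL_HOME is the actual install and "extra-lib.jar" is a stand-in name for whatever driver you need):

```shell
# Scratch layout for illustration only
DRILL_HOME=$(mktemp -d)
mkdir -p "$DRILL_HOME/jars/3rdparty"

touch "$DRILL_HOME/extra-lib.jar"                 # stand-in for the real jar
cp "$DRILL_HOME/extra-lib.jar" "$DRILL_HOME/jars/3rdparty/"

ls "$DRILL_HOME/jars/3rdparty"
# Restart each drill-bit afterwards so the jar lands on the classpath.
```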
— David
I’ll second Andries’ comment about measurable performance in AWS: you should
not expect consistency there (especially with instance types that are smaller
than a physical server, such as the c3.xlarge instances you’re using).
How does the memory utilization look during your queries? Memory p
ntType? It could be putting in the Content-Type header in
> addition to the one you specified.
> On Jan 14, 2015 12:00 AM, "David Tucker" wrote:
>
>> Has anyone seen the error
>>Error 415 Unsupported Media Type
>> when trying to use the REST interface
Has anyone seen the error
Error 415 Unsupported Media Type
when trying to use the REST interface to create a new plug-in?
— David
DETAILS
The json file describing the plug-in is :
{
  "name" : "mdb",
  "config" : {
    "type" : "hbase",
    "config" : {
      "hbase.table.namespace.ma
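Per the follow-up reply, a 415 usually means the request went out without (or with a conflicting) Content-Type header. A hedged sketch of the submission — the JSON is completed only enough to be valid (everything inside "config" is site-specific and trimmed here), and the /storage/{name}.json endpoint path is my reading of Drill's REST API:

```shell
# Write a minimal, parseable version of the plug-in definition
cat > /tmp/mdb-plugin.json <<'EOF'
{
  "name": "mdb",
  "config": {
    "type": "hbase",
    "enabled": true
  }
}
EOF

# Sanity-check that the payload parses before sending it
python3 -c 'import json; json.load(open("/tmp/mdb-plugin.json"))'

# POST with an explicit Content-Type header -- its absence is what
# produces the HTTP 415:
#   curl -X POST -H "Content-Type: application/json" \
#        -d @/tmp/mdb-plugin.json http://<drill-bit>:8047/storage/mdb.json
```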
te:
>
>> Yes, drill-env.sh would be the place to put this, regardless of how Drill
>> is deployed.
>>
>> On Mon, Dec 1, 2014 at 1:08 PM, David Tucker wrote:
>>
>>>
>>> I can see the mechanics for adding additional jars to the invocation of
>
I can see the mechanics for adding additional jars to the invocation of the
drillbit via the DRILL_CLASSPATH environment variable (used in
$DRILL_HOME/bin/drill-config.sh).
For drill deployments within a MapR cluster (where the mapr-drill package is
installed, so the drillbit is managed by the
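Per the quoted reply, drill-env.sh is the place for this regardless of how Drill is deployed. A sketch, demonstrated on a scratch copy (on a real node, edit $DRILL_HOME/conf/drill-env.sh on every drill-bit host; /opt/extra-jars is a placeholder path):

```shell
# Scratch copy for illustration
ENV_FILE=$(mktemp)
cat >> "$ENV_FILE" <<'EOF'
export DRILL_CLASSPATH="/opt/extra-jars/*:$DRILL_CLASSPATH"
EOF

# drill-config.sh sources this file at drill-bit start-up
. "$ENV_FILE"
echo "$DRILL_CLASSPATH"
```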
Is there a process for installing the sqlline tool independent of the complete
drillbit? The scenario I’m considering is a cluster of 10 nodes where only 5
would have drill-bits installed, but users will expect to connect to the drill
cluster arbitrarily from any node out of the 10.
— David