I solved the problem by using a fully qualified path for
hive.exec.scratchdir, and then the umask trick worked. It turns out
that Hive was creating a different directory (on HDFS) than the one
MapReduce was trying to write into, which is why the umask didn't
work. This remains a nasty workaround.
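For anyone hitting the same issue, a fully qualified scratch directory would look something like the fragment below in hive-site.xml. This is a sketch only; the namenode host, port, and path are placeholders, not values from this thread:

```xml
<!-- hive-site.xml: use a fully qualified HDFS URI so Hive and the
     MapReduce tasks agree on which filesystem the scratch dir lives on.
     "namenode-host:8020" and the path are placeholders. -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>hdfs://namenode-host:8020/tmp/hive-scratch</value>
</property>
```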
Thanks for the reply Tim. It is writable by all (permission 777). As a
side note, I have now discovered that the MapReduce task spawned by
the RCFileOutputDriver is setting mapred.output.dir to a folder under
file:// regardless of fs.default.name. This might be expected
behaviour, but
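For context, relative output paths normally resolve against the default filesystem set by fs.default.name in core-site.xml; if that property is unset or points at file://, job output lands on the local filesystem. A minimal fragment (host and port are placeholders) looks like:

```xml
<!-- core-site.xml: the default filesystem that unqualified paths
     such as mapred.output.dir resolve against. Placeholder host/port. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:8020</value>
</property>
```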
Make sure /home/yaboulnaga/tmp/hive-scratch/ is writable by your
processes.
On Mon, Nov 26, 2012 at 10:07 AM, wrote:
> Hello,
>
> I'm using Cloudera's CDH4 with Hive 0.9 and Hive Server 2. I am trying to
> load data into hive using the JDBC driver (the one distributed with
> Cloudera CDH4 "
You may have to go directly to Cloudera support for this one.
HiveServer2 is not officially part of Hive yet, so technically we
should not be supporting it (yet). However, someone on the list might
still answer you.
On Mon, Nov 26, 2012 at 11:07 AM, wrote:
> Hello,
>
> I'm using Cloudera's CDH4 with
Hello,
I'm using Cloudera's CDH4 with Hive 0.9 and Hive Server 2. I am trying
to load data into Hive using the JDBC driver (the one distributed with
Cloudera CDH4, "org.apache.hive.jdbc.HiveDriver"). I can create the
staging table and LOAD LOCAL into it. However, when I try to insert
data in
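The staging-table flow described above can be sketched in HiveQL roughly as follows. Table names, columns, and the local path are hypothetical illustrations, not values from this thread:

```sql
-- Hypothetical staging flow; all names and paths are placeholders.
CREATE TABLE staging (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Loading from the local filesystem works per the report above:
LOAD DATA LOCAL INPATH '/tmp/data.tsv' INTO TABLE staging;

-- This is the kind of step that fails for the poster:
INSERT OVERWRITE TABLE target SELECT id, name FROM staging;
```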