fixes that? (CDH 5.5 is the same Hive version as
CDH 5.4. Is it Spark related and not Hive?)
On Sun, Jan 10, 2016 at 9:26 AM, sandeep vura wrote:
> Upgrade to CDH 5.5 for Spark. It should work.
>
> On Sat, Jan 9, 2016 at 12:17 AM, Ophir Etzion
> wrote:
>
>> It didn't work.
Using CDH 5.4.3 (Hive 1.1) via HiveServer.
Does anyone have a suggestion about what to do / look for?
The error:
org.apache.hadoop.hive.ql.parse.SemanticException: Generate Map Join Task
Error: Unable to find class:
com.foursquare.hadoop.hive.udf.IsDefinedUDF$$anonfun$initialize$6
Serialization tr
On Fri, Jan 8, 2016 at 12:24 PM, Edward Capriolo
wrote:
> You cannot 'add jar' input formats and SerDes. They need to be part of
> your auxlib.
>
> On Fri, Jan 8, 2016 at 12:19 PM, Ophir Etzion
> wrote:
>
>> I tried now. Still getting:
>>
>> 16/01
Thanks!
In certain use cases you could, but I forgot about the aux thing; that's
probably it.
On Fri, Jan 8, 2016 at 12:24 PM, Edward Capriolo
wrote:
> You cannot 'add jar' input formats and SerDes. They need to be part of
> your auxlib.
>
> On Fri, Jan 8, 2016 at 12:1
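The auxlib approach Edward describes above can be sketched as a hive-site.xml fragment; the path and jar name below are hypothetical placeholders, not the actual setup from this thread:

```xml
<!-- hive-site.xml: put SerDe/UDF jars on Hive's aux classpath instead of
     using ADD JAR. /opt/hive/auxlib/my-udfs.jar is a placeholder path. -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/hive/auxlib/my-udfs.jar</value>
</property>
```

HiveServer2 typically needs a restart to pick up changes to the aux jars path.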
If your jar is of huge size,
> you can pre-load the jar on all executors in a commonly available directory
> to avoid network IO.
>
> On Thu, Jan 7, 2016 at 4:03 PM, Ophir Etzion wrote:
>
>> I'm trying to add jars before running a query using Hive on Spark on CDH
>> 5.
I'm trying to add jars before running a query using Hive on Spark on CDH
5.4.3.
I've tried applying the patch in
https://issues.apache.org/jira/browse/HIVE-12045 (manually, as the patch is
for a different Hive version) but still haven't succeeded.
Did anyone manage to do ADD JAR successfully with
I want to know, for each of my tables, the last time it was modified. Some of
my tables don't have last_modified_time in the table parameters, but all
have transient_lastDdlTime.
transient_lastDdlTime seems to be the same as last_modified_time in some of
the tables I randomly checked.
what is the time
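For reference, transient_lastDdlTime is stored in the table parameters as epoch seconds (as a string). A minimal sketch for turning it into a readable timestamp follows; note that the parameter is touched by DDL operations, so treating it as a data-modification time is an assumption to verify against your tables:

```python
from datetime import datetime, timezone

def ddl_time_to_utc(transient_last_ddl_time: str) -> str:
    """Convert a transient_lastDdlTime value (epoch seconds, kept as a
    string in the table parameters) to a readable UTC timestamp."""
    epoch = int(transient_last_ddl_time)
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# Value as it might appear in DESCRIBE FORMATTED output
print(ddl_time_to_utc("1452384000"))  # -> 2016-01-10 00:00:00
```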
During spark-submit, when running Hive on Spark, I get:
Exception in thread "main" java.util.ServiceConfigurationError:
org.apache.hadoop.fs.FileSystem: Provider
org.apache.hadoop.hdfs.HftpFileSystem could not be instantiated
Caused by: java.lang.IllegalAccessError: tried to access method
org.apac
Hi,
When trying to do Hive on Spark on CDH 5.4.3, I get the following error when
trying to run a simple query using Spark.
I've tried setting everything written here (
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started)
as well as what CDH recommends.
anyone enc
Hi,
I've been trying to figure out how to know the number of MR jobs that will
be run for a Hive query using the EXPLAIN output.
I haven't found a consistent method of knowing that.
For example (in one of my queries, a CTAS query):
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-7 depends o
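One rough heuristic (a sketch, not an official Hive API) is to parse the STAGE PLANS section of the EXPLAIN output and count the stages whose plan is "Map Reduce". Conditional stages mean the static count can differ from the jobs actually launched at runtime, so treat the result as an upper bound:

```python
def count_mapreduce_stages(explain_output: str) -> int:
    """Count stages in Hive EXPLAIN output whose plan section starts with
    'Map Reduce'. Each such stage corresponds to one potential MR job;
    conditional stages may prune some at runtime."""
    count = 0
    lines = explain_output.splitlines()
    for i, line in enumerate(lines):
        # A stage in STAGE PLANS looks like:
        #   Stage: Stage-1
        #     Map Reduce
        if line.strip().startswith("Stage: Stage-") and i + 1 < len(lines):
            if lines[i + 1].strip() == "Map Reduce":
                count += 1
    return count

# Hypothetical, abbreviated EXPLAIN output for illustration
sample = """\
STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree: ...
  Stage: Stage-7
    Conditional Operator
  Stage: Stage-4
    Map Reduce
      Map Operator Tree: ...
"""
print(count_mapreduce_stages(sample))  # -> 2
```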