Hi, I just moved from MR1 to YARN (CDH 4.x to CDH 5.2). After this, I see
that all the loading jobs which are mostly like the following are running
really slow.
insert overwrite table desttable partition (partname) select * from sourcetable
From what I can see, even if I set the number of
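For loads of this shape, the first knobs usually checked after an MR1-to-YARN move are the dynamic-partition and reducer settings; a minimal sketch (the property names are standard Hive/MR2 settings, the values are purely illustrative):

```sql
-- Illustrative session settings for INSERT OVERWRITE ... PARTITION loads;
-- values are examples, not recommendations.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET mapreduce.job.reduces = 32;   -- MR2/YARN name for the old mapred.reduce.tasks

INSERT OVERWRITE TABLE desttable PARTITION (partname)
SELECT * FROM sourcetable;
```

Note that several MR1-era property names were renamed under YARN/MR2, so settings carried over from a CDH 4.x configuration may silently no longer apply.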
hi, folks,
I am using the HBaseIntegration feature of Hive (
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration) to load
TPCH tables into HBase. Hive 0.13 and HBase 0.98.6.
The load works well. However, as documented here:
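The kind of HBase-backed table the wiki page describes can be sketched as follows; the table and column names here are hypothetical stand-ins for a TPC-H table:

```sql
-- Sketch of an HBase-backed Hive table, per the HBaseIntegration wiki page.
CREATE TABLE hbase_lineitem (key STRING, quantity INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:quantity")
TBLPROPERTIES ("hbase.table.name" = "lineitem");

-- Load from a regular Hive source table:
INSERT OVERWRITE TABLE hbase_lineitem
SELECT l_orderkey, l_quantity FROM lineitem;
```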
yes, have placed spark-assembly jar in hive lib folder.
hive.log---
bmit.2317151720491931059.properties --class
org.apache.hive.spark.client.RemoteDriver
/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin/lib/hive-exec-1.2.0-SNAPSHOT.jar
--remote-host M151 --remote-port 56996 --conf
Is there a simple way to migrate from PL/SQL to PL/HQL?
Regards,
Venkat
From: Dmitry Tolpeko [mailto:dmtolp...@gmail.com]
Sent: Friday, February 27, 2015 1:36 PM
To: user@hive.apache.org
Subject: PL/HQL - Procedural SQL-on-Hadoop
Let me introduce PL/HQL, an open source tool that implements
Hi,
I notice there's one folder example which contains sample data and sample
queries. But I didn't find any document about how to use these data and
queries. Could anyone point it to me ? Thanks
It seems that the remote Spark context failed to come up. I see you're
using a Spark standalone cluster. Please make sure the Spark cluster is up. You
may try spark.master=local first.
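The suggested sanity check can be done per session from the Hive CLI; a sketch, assuming Hive on Spark is otherwise configured (the master host/port below are placeholders):

```sql
-- Run the Spark driver locally first to rule out cluster problems.
SET hive.execution.engine = spark;
SET spark.master = local;

-- Once local execution works, point back at the standalone master:
-- SET spark.master = spark://<master-host>:7077;
```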
On Mon, Mar 2, 2015 at 5:15 PM, scwf wangf...@huawei.com wrote:
yes, have placed spark-assembly jar in hive lib
Venkat,
The goal of this project is to execute existing PL/SQL in Hive as much as
possible, not to migrate it. Where design restrictions are encountered,
that code has to be redesigned, but hopefully most of the code can remain
untouched; there is no need to convert everything to bash/Python etc.
Could you check your hive.log and spark.log for a more detailed error
message? A quick check, though: do you have spark-assembly.jar in your hive
lib folder?
Thanks,
Xuefu
On Mon, Mar 2, 2015 at 5:14 AM, scwf wangf...@huawei.com wrote:
Hi all,
anyone met this error: HiveException(Failed to
there is no sampling for order by in Hive. Hive uses a single reducer for
order by (if you're talking about the MR execution engine).
Hive on Spark is different in this regard, though.
Thanks,
Xuefu
On Mon, Mar 2, 2015 at 2:17 AM, Jeff Zhang zjf...@gmail.com wrote:
Order by usually involves 2 steps (a sampling job and a repartition job), but Hive
only runs one MR job for order by, so I'm wondering when and where Hive does the
sampling. Client side?
--
Best Regards
Jeff Zhang
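The single-reducer behavior described above is the reason total ordering over large data is often rewritten with DISTRIBUTE BY / SORT BY instead; a sketch with hypothetical table and column names:

```sql
-- ORDER BY: one reducer produces a single, totally ordered result.
SELECT * FROM sales ORDER BY amount;

-- DISTRIBUTE BY + SORT BY: many reducers; each reducer's output is sorted,
-- but there is no total order across output files.
SELECT * FROM sales DISTRIBUTE BY region SORT BY amount;
```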
Hi all,
anyone met this error: HiveException(Failed to create spark client.)
M151:/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin # bin/hive
Logging initialized using configuration in
jar:file:/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin/lib/hive-common-1.2.0-SNAPSHOT.jar!/hive-log4j.properties
Hi,
I got the attached error on a map-side join where a serialized table
contains an array column.
When I disable the optimized map-join hash table by setting
hive.mapjoin.optimized.hashtable=false, the exceptions do not occur.
It seems that a wrong ObjectInspector was set at
CommonJoinOperator#initializeOp.
I am
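The workaround mentioned above is a session-level setting; a minimal sketch:

```sql
-- Workaround for the ObjectInspector issue with array columns in map-side
-- joins: fall back from the optimized in-memory hash table implementation.
SET hive.mapjoin.optimized.hashtable = false;
```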
Hello Everyone,
I was able to look up a Hive query using hive.query.name from the job
history server. I wasn't able to find a similar parameter for Tez.
Is there a way to find out all the queries that ran in a Tez
session?
Thanks
yes, we even have a ticket for that
https://issues.apache.org/jira/browse/HIVE-9600
btw can anyone test jdbc driver with kerberos enabled?
https://issues.apache.org/jira/browse/HIVE-9599
On Mon, Mar 2, 2015 at 10:01 AM, Nick Dimiduk ndimi...@gmail.com wrote:
Heya,
I'd like to use JMeter against HS2/JDBC and I'm finding the standalone
jar isn't actually standalone. It appears to include a number of
dependencies, but not the Hadoop Common stuff. Is there a packaging of this jar
that is actually standalone? Are there instructions for using this
standalone
hive> create table test1 (c1 array<int>) row format delimited collection
items terminated by ',';
OK
hive> insert into test1 select array(1,2,3) from dual;
OK
hive> select * from test1;
OK
[1,2,3]
hive> select c1[0] from test1;
OK
1
$ hadoop fs -cat /apps/hive/warehouse/test1/00_0
1,2,3
On
Hello All,
I have a couple of Sequence files on HDFS. I now need to load these files
into an ORC table. One option is to create an external table of
SequenceFile format and then load it into the ORC table by using the INSERT
OVERWRITE command.
I am looking for an alternative without using an
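The external-table route described above (the one the poster hopes to avoid) can be sketched as follows; the paths, table names, and columns are hypothetical:

```sql
-- External table over the existing SequenceFile data on HDFS.
CREATE EXTERNAL TABLE seq_staging (id INT, payload STRING)
STORED AS SEQUENCEFILE
LOCATION '/data/seqfiles';

-- Target ORC table, populated by rewriting the data.
CREATE TABLE orc_table (id INT, payload STRING) STORED AS ORC;

INSERT OVERWRITE TABLE orc_table
SELECT id, payload FROM seq_staging;
```

Because ORC is a columnar format with its own encoding, any route into an ORC table ultimately has to rewrite the data; a plain file move cannot work, which is why the INSERT-based conversion is the usual answer.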
Thanks Alexander!
On Mon, Mar 2, 2015 at 10:31 AM, Alexander Pivovarov apivova...@gmail.com
wrote:
yes, we even have a ticket for that
https://issues.apache.org/jira/browse/HIVE-9600
btw can anyone test jdbc driver with kerberos enabled?
https://issues.apache.org/jira/browse/HIVE-9599
On