Hi, I set this up based on
http://blog.cloudera.com/blog/2015/02/download-the-hive-on-spark-beta/.
I set spark.executor.instances = 12 (I have four nodes), but when I
execute Hive SQL, there are always only 3 Spark containers: 1 driver and 2
executors.
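A minimal sketch of the settings involved, per-session (the property names are standard Spark / Hive-on-Spark ones; the dynamic-allocation interaction is an assumption worth checking, since dynamic allocation can override a fixed executor count):

```sql
-- Per-session override; the same property can also go in hive-site.xml:
set spark.executor.instances=12;
-- Printing a property with no value shows its current setting; if
-- dynamic allocation is enabled it may ignore a fixed instance count:
set spark.dynamicAllocation.enabled;
```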
Hi
Thanks for investigating. I'm trying to locate the patch that fixes this
between 1.1 and 2.0.0-SNAPSHOT. Any leads on which JIRA this fix was part
of? Or which part of the code the patch is likely to be in?
git bisect is usually the only way to identify these things.
But before you hunt into
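To make the bisect concrete, here is a runnable toy sketch of `git bisect run` finding the commit where a fix landed. The repo, commit layout, and grep-able "fixed" marker are all hypothetical stand-ins for the real Hive tree and a real regression test; note the inverted exit code, since we are hunting a fix rather than a breakage:

```shell
#!/bin/sh
# Toy git-bisect demo: find the first commit where a "fix" appears.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect-demo
for i in 1 2 3 4 5 6; do
  # The hypothetical fix lands at commit 4.
  if [ "$i" -ge 4 ]; then echo fixed > state; else echo broken > state; fi
  git add state
  git commit -q --allow-empty -m "commit $i"
done
root=$(git rev-list --max-parents=0 HEAD)
# HEAD has the fix ("bad" in inverted terms), the root does not ("good"):
git bisect start HEAD "$root"
# Exit 0 (good) while the fix is absent, 1 (bad) once it is present:
out=$(git bisect run sh -c '! grep -q fixed state')
printf '%s\n' "$out" | grep "is the first bad commit"
```

With a real regression, the `sh -c` test would build and run the failing query instead of grepping a marker file.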
Delta files that are no longer needed are deleted asynchronously.
For example, you may have some query using delta_002_002. A minor
compaction can run concurrently and create delta_001_003, but it will
leave delta_001_001, delta_002_002, and delta_003_003 in place until
readers no longer need them.
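For readers following along, that lifecycle can be mimicked with plain directories. This is a toy model only; real Hive ACID delta names are zero-padded (e.g. delta_0000001_0000001) and the cleaner, not rmdir, removes the obsolete ones:

```shell
#!/bin/sh
# Toy model of Hive ACID delta directories during a minor compaction.
set -e
tbl=$(mktemp -d)
# Three deltas from three committed transactions:
mkdir "$tbl/delta_001_001" "$tbl/delta_002_002" "$tbl/delta_003_003"
# A minor compaction merges them into one delta spanning txns 1..3:
mkdir "$tbl/delta_001_003"
# The originals are NOT removed synchronously -- a reader may still be
# scanning delta_002_002 -- so old and new deltas coexist for a while:
ls "$tbl"
# Later, the cleaner deletes the obsolete deltas asynchronously:
rmdir "$tbl/delta_001_001" "$tbl/delta_002_002" "$tbl/delta_003_003"
ls "$tbl"
```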
Hi Team,
Sharing an article that explains the Hive transaction feature in Hive
1.0:
Hive transaction feature in Hive 1.0
http://www.openkb.info/2015/06/hive-transaction-feature-in-hive-10.html
--
Thanks,
www.openkb.info
(Open KnowledgeBase for Hadoop/Database/OS/Network/Tool)
Hi,
I have a table partitioned on every hour; the partitioning column ds is of
timestamp type. However, I could not locate a partition with an equality
predicate on ds; only range predicates work. Here are the DDL and
queries:
create table test (c1 int, c2 string) partitioned by (ds timestamp)
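A sketch of the behaviour described above (the timestamp literals are illustrative placeholders, not the poster's actual values):

```sql
create table test (c1 int, c2 string) partitioned by (ds timestamp);

-- Equality predicate: fails to locate the partition (per the report):
select * from test where ds = '2015-06-12 13:00:00';

-- Range predicates: these do prune to the expected partition:
select * from test where ds >= '2015-06-12 13:00:00'
                     and ds <  '2015-06-12 14:00:00';
```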
Done. https://issues.apache.org/jira/browse/HIVE-10996
On Fri, Jun 12, 2015 at 1:47 PM, Gopal Vijayaraghavan gop...@apache.org
wrote:
Thanks, Nick, for the write-up. It was quite helpful for a newbie like me.
Is there a Hive config to provide the ZooKeeper quorum for the HBase
cluster, since I have Hive and HBase on separate clusters?
Thanks!
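For reference, a minimal sketch of pointing Hive's HBase storage handler at a remote cluster's ZooKeeper, assuming hypothetical hostnames zk1–zk3; the same properties can go in hive-site.xml instead of a session-level set:

```sql
-- Session-level override (hostnames are placeholders):
set hbase.zookeeper.quorum=zk1.example.com,zk2.example.com,zk3.example.com;
set hbase.zookeeper.property.clientPort=2181;
```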
On Tue, Jun 9, 2015 at 12:03 AM, Nick Dimiduk ndimi...@gmail.com wrote:
Hi there.