Both Thrift and protobuf are wire compatible but NOT classpath compatible,
so you need to make sure that you are using a single version (even down to
the minor version) across your whole codebase.
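One common way to enforce that single version (a sketch, assuming a Maven build; the coordinates below are the standard protobuf-java ones, and the version shown is only an example, not taken from this thread) is to pin it once in a parent `dependencyManagement` section:

```xml
<!-- Pin one protobuf-java version for every module of the build;
     2.5.0 is shown here only as an example version -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Child modules then declare the dependency without a version and all resolve to the pinned one.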
On Tue, Mar 22, 2016 at 12:05 PM, kalai selvi wrote:
> Hi,
>
> I am using Hive 0.13 in Amazon EMR. I am stuck
Hi,
I am using Hive 0.13 in Amazon EMR. I am stuck with the problem of the
hive-exec jar being bundled with an older protocol buffers Java library,
version 2.5. Please help me get unblocked from this problem.
We have developed a custom SerDe which in turn depends on
protobuf-java.
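A workaround often used for this kind of clash (sketched here under the assumption of a Maven build; shading/relocation is a general technique, not something confirmed in this thread, and the `shadedPattern` package name below is hypothetical) is to relocate your own protobuf copy inside the SerDe jar so it cannot collide with the 2.5 classes bundled in hive-exec:

```xml
<!-- maven-shade-plugin relocation: moves our protobuf classes to a
     private package so hive-exec's bundled copy wins at its own
     package while our SerDe uses the relocated one -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <!-- hypothetical target package for the relocated classes -->
            <shadedPattern>my.serde.shaded.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```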
Hi all,
I'm following this (pretty old) tutorial for accessing a local Oracle NoSQL
DB from Hive using a StorageHandler:
https://blogs.oracle.com/NoSQL/entry/bigdata_sql_with_oracle_nosql
I've successfully:
Extracted Hadoop 2.7.2 under /home/hadoop/hadoop
Extracted Hive 1.2.1 under /home/hadoop/
Two things:
In Eclipse, have you added the Hive jar files as external JARs to your
libs in the Build Path?
I would try to run the same Java code on the host where you have Hive
installed and your CLASSPATH defined first. If your Java program works OK
there, then you have an issue with your Eclipse libraries.
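A quick way to verify that suggestion (a minimal sketch using only the JDK; `org.apache.hive.jdbc.HiveDriver` is the HiveServer2 JDBC driver class shipped in the hive-jdbc jar) is to try loading the driver class explicitly and see which environment can resolve it:

```java
public class DriverCheck {
    // Returns true when the given class is loadable from the current classpath
    static boolean driverPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // HiveServer2 JDBC driver; needs hive-jdbc (plus its dependencies)
        // on the classpath to resolve
        String driver = "org.apache.hive.jdbc.HiveDriver";
        if (driverPresent(driver)) {
            System.out.println("driver found: " + driver);
        } else {
            System.out.println("missing from classpath: " + driver);
        }
    }
}
```

If this prints "missing" inside Eclipse but "found" on the Hive host, the Build Path is the problem, not the code.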
Dear All:
We just installed and configured Hive 1.2.1 and Hadoop on Ubuntu.
Everything worked fine and we can query tables from the hive> prompt.
But when we tested JDBC from Eclipse on a Windows client, there were errors
as follows; any advice would be appreciated.
Thanks
Joe
java.lang.ClassNotFoundException:
Hello everyone.
Thanks for your answers.
I'm gonna test this.
Best regards.
Tale
On Mon, Mar 21, 2016 at 10:06 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:
> Hi
>
> Simple select * query launches a job when the input size is >1Gb by
> default. Two configs that determine
Thanks Nitin, Mich,
"if it's just a plain-vanilla text file format, it needs to run a job to get
the count, so it is the longest of all"
--> Hive must be translating some operator like fetch (for count) into a
map-reduce job and getting the result?
Can a custom storage handler get information about the operat
An ORC file has the following stats levels for storage indexes:
1. The ORC file itself
2. Multiple stripes (chunks) within the ORC file
3. Multiple row groups (row batches) within each stripe
Assuming that the underlying table has its stats updated, the count will be
stored for each column.
So when we
If you have enabled performance optimization by enabling statistics, it will
come from there.
If the underlying file format supports in-file statistics (like ORC), it
will come from there.
If it's just a plain-vanilla text file format, it needs to run a job to get
the count, so that is the longest of all.
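The statistics path above can be exercised with standard Hive statements (shown as a sketch; `t` is a placeholder table name):

```sql
-- Gather table-level stats so count(*) can be answered from metadata
ANALYZE TABLE t COMPUTE STATISTICS;

-- Allow simple aggregates like count(*) to be answered from stats
-- instead of launching a job
SET hive.compute.query.using.stats=true;

SELECT count(*) FROM t;
```

With stats present and that setting enabled, the count comes straight from the metastore; otherwise Hive falls back to scanning the data.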
On Tue,
select count(*) from table;
How does Hive evaluate count(*) on a table?
Does it return the count by actually querying the table, or does it return
the count directly by consulting some statistics locally?
For Hive's text format it takes a few seconds, while Hive's ORC format
takes a fraction of a second.
Regards,
Amey
Hi, Xuefu
You are right.
Maybe I should launch spark-submit from HS2 or the Hive CLI?
Thanks a lot,
Stana
2016-03-22 1:16 GMT+08:00 Xuefu Zhang :
> Stana,
>
> I'm not sure if I fully understand the problem. spark-submit is launched in
> the same host as your application, which should be able to acc
Hi,
Does Hive support extended CBO cost options like cpu and io only for Tez
execution? When I look at the HiveDefaultCostModel class
(https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveDefaultCostModel.java)
all costs have been set to zero. Whe