Sorry, sent too fast ... either way you do it, you'll need to include all
of the dependencies of the Hive JDBC driver classes in a single JAR and then
apply the shading to the unified JAR file. This ensures that all classes that
depend on Thrift 0.9 get updated to use the shaded package. For example,
I
I'm not sure about keep, but if you're going to use Maven, go with the
shade plugin over jarjar ... it does the same thing but has better Maven
integration - http://maven.apache.org/plugins/maven-shade-plugin/
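For what it's worth, a minimal shade-plugin relocation for Thrift would look something like the following in the pom (the plugin version and the shaded package prefix here are just examples, not prescribed values):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- rewrite every reference to org.apache.thrift in the uber-jar -->
          <relocation>
            <pattern>org.apache.thrift</pattern>
            <shadedPattern>shaded.org.apache.thrift</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The relocation rewrites both the class files and every bytecode reference to them, which is exactly the "apply the shading to the unified JAR" step described above.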
On Thu, Oct 10, 2013 at 5:31 PM, Zhang Xiaoyu wrote:
> Hi, Timothy,
> Thanks for your
Hi, Timothy,
Thanks for your reply. It seems jarjar is a solution for me. I have a basic
question to follow:
I don't quite understand what the tag is for?
My understanding is:
1. use to grab the maven dependencies I want to re-package,
2. then use and in to define what class to
re-package and what's t
Here's some simple Pig that reads from one Hive table and writes to another
(same data, same schema):
sigs_in = load 'signals' using org.apache.hcatalog.pig.HCatLoader();
sigs = filter sigs_in by datetime_partition == '2013-10-07_';
STORE sigs INTO 'signals_orc' USING org.apache.hcatalog.pig.H
It looks really cool; I think I will try it out.
Cheers,
Zhuoluo (Clark) Yang
2013/10/5 Makoto YUI
> Hi Edward,
>
> Thank you for your interest.
>
> The Hivemall project does not plan to have a specific mailing list; I
> will answer questions/comments on Twitter or through GitHub
> i
Hi Zhang,
I have the same issue in that I use some Cassandra client APIs that depend
on Thrift 0.7, and HCatalog 0.11, which depends on Thrift 0.9. I opted for
using the jarjar utility to "shade" the Thrift 0.9 classes. Here's what I
added to the build.xml file for the hcatalog-pig-adapter project:
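A jarjar task of that kind would look roughly like this (the jar file names, the jarjar version, and the shaded package prefix below are assumptions for illustration, not the exact rules from the original mail):

```xml
<taskdef name="jarjar" classname="com.tonicsystems.jarjar.JarJarTask"
         classpath="lib/jarjar-1.4.jar"/>
<jarjar jarfile="build/hcatalog-pig-adapter-shaded.jar">
  <!-- bundle the adapter classes together with the Thrift 0.9 classes -->
  <zipfileset src="build/hcatalog-pig-adapter.jar"/>
  <zipfileset src="lib/libthrift-0.9.0.jar"/>
  <!-- rename the Thrift 0.9 packages so they cannot clash with 0.7 -->
  <rule pattern="org.apache.thrift.**" result="shaded.org.apache.thrift.@1"/>
</jarjar>
```

The rule element renames matching packages and rewrites all references inside the bundled classes, so the adapter keeps using Thrift 0.9 while the rest of the classpath sees only Thrift 0.7.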
Thanks, Edward. It is a big upgrade for the other component. I guess I have
to go with a separate class loader for each component for now.
Johnny
On Thu, Oct 10, 2013 at 12:19 PM, Edward Capriolo wrote:
> You are kinda screwed. Thrift is wire-compatible in many cases but not API
> compatible. You can not have tw
You are kinda screwed. Thrift is wire-compatible in many cases but not API
compatible. You cannot have two applications built off two versions of
Thrift in the same classpath without something like OSGi to insulate the
class loaders from each other.
Your best bet is upgrading "other component" t
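The class-identity point can be seen with a small self-contained sketch (no Thrift jars needed, and the class names here are just for the demo): the same class bytes defined by two sibling loaders yield two distinct classes, which is why two Thrift versions can coexist only if each lives behind its own loader.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class LoaderDemo {
    // A loader that defines the class itself instead of finding it
    // through its parent (parent is null, i.e. bootstrap only).
    static class IsolatingLoader extends ClassLoader {
        private final byte[] bytes;
        IsolatingLoader(byte[] bytes) { super(null); this.bytes = bytes; }
        @Override
        protected Class<?> findClass(String name) {
            return defineClass(name, bytes, 0, bytes.length);
        }
    }

    static boolean sameClass() throws Exception {
        // Read this class's own bytecode from the classpath.
        byte[] bytes;
        try (InputStream in = LoaderDemo.class.getResourceAsStream("LoaderDemo.class");
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
            bytes = out.toByteArray();
        }
        // Same bytes, two defining loaders: the JVM treats these as
        // two unrelated classes, so casts between them would fail.
        Class<?> a = new IsolatingLoader(bytes).loadClass("LoaderDemo");
        Class<?> b = new IsolatingLoader(bytes).loadClass("LoaderDemo");
        return a == b;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("same class? " + sameClass());
    }
}
```

That is the mechanism OSGi (or any per-component class loader scheme) relies on to keep Thrift 0.7 and 0.9 apart in one JVM.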
Hi, all,
I am writing a piece of code talking to Hive 0.11's HiveServer2. The JDBC
code depends on libthrift 0.9. However, another component depends on
libthrift 0.7 and is not binary compatible with libthrift 0.9.
When I downgrade to 0.7, I get the NoClassDefFoundError below:
org/apache/thrift/scheme/S
I'm having trouble getting HCatalog to write to a Hive table using Pig
(Hive 0.11 running on Hadoop 2.0.0-cdh4.1.2 and Pig 0.10.0-cdh4.1.2).
2013-10-09 20:20:36,411 [main] ERROR
org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to
recreate exception from backend error:
org.apache.h
Thank you, Selva, for the response. But the data is too huge to be handled
by Excel, so I need to do it with Hive.
On Thu, Oct 10, 2013 at 2:31 PM, Selvamohan Neethiraj <
selva.apa...@infotekies.com> wrote:
> If it is not so much data, you can use Excel's PivotTable to solve this
> specific req
If it is not so much data, you can use Excel's PivotTable to solve this specific requirement:
1. Select the date and plz columns (w/o header) and create a PivotTable on a new worksheet.
2. Drag the column name 'Date' from the 'Pivot Table' Builder to the 'Column Label' section.
3. Drag the column name:
Hello,
I have a data manipulation query.
I have my data in the following format:
*Date PLZ Count*
date1 plz1 count1
date1 plz1 count2
date1 plz1 count3
date1 plz2 count4
date1 plz2 count5
date1 plz3 count6
date1 plz3 count7
date2 plz1 count8
date2 plz1 co
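If the goal is the same pivot the Excel suggestion produces (one output column per PLZ value), conditional aggregation is the usual Hive approach. A sketch, assuming a hypothetical table named counts and that the distinct PLZ values are known up front; date and count are backquoted because they collide with Hive keywords:

```sql
-- one row per date, one summed column per PLZ value
SELECT `date`,
       SUM(CASE WHEN plz = 'plz1' THEN `count` ELSE 0 END) AS plz1,
       SUM(CASE WHEN plz = 'plz2' THEN `count` ELSE 0 END) AS plz2,
       SUM(CASE WHEN plz = 'plz3' THEN `count` ELSE 0 END) AS plz3
FROM counts
GROUP BY `date`;
```

The column list has to be written out by hand (or generated by a script), since HiveQL has no dynamic pivot.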
Baahu,
From Hive 0.11 on, HCatalog is part of Hive.
So you basically do not need to configure your ports etc. in hcat_server.sh;
what you need to set is hive.metastore.uris in hive-site.xml. The default
port is 9083, I suppose.
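Concretely, the hive-site.xml entry would look something like this (the host name is a placeholder for your metastore machine):

```xml
<property>
  <name>hive.metastore.uris</name>
  <!-- thrift URI of the metastore service; 9083 is the default port -->
  <value>thrift://metastore-host:9083</value>
</property>
```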
On Thu, Oct 10, 2013 at 12:36 PM, Baahu wrote:
> Thanks Nitin for your help.I
What is your understanding of "4 hive setups"?
How many mappers or reducers are needed for your query is determined by the
size of the data you have. You can alter the number of mappers needed for a
query by setting the max and min splits. But I am guessing that is not what
you want to achieve,
I g
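The split settings mentioned above can be set per query from the Hive CLI; a sketch (MRv1-era property names, and the byte values are just examples):

```sql
-- larger min split => fewer, bigger mappers; smaller max split => more mappers
SET mapred.min.split.size=134217728;   -- 128 MB
SET mapred.max.split.size=268435456;   -- 256 MB
-- the reducer count can also be forced directly if needed
SET mapred.reduce.tasks=8;
```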
Thanks Nitin for your help. I was able to run Hive and also start the
metastore finally!!
The tar should contain the required config files (instead of having
templates); otherwise it turns out to be difficult for newbies like me to
go ahead and learn/use Hive.
Also I could not find the port number for me