() is written such that it only expects jar
file.jar to get passed to it. That's how it appears to work when
add jar file.jar is run from a stand-alone Hive CLI and from beeline.
David
On Sat, Apr 26, 2014 at 12:14:53AM -0700, Brad Ruderman wrote:
An easy solution would be to add the jar
Hi, just want to give warning about export/import to different hive
versions see this bug:
https://issues.apache.org/jira/browse/HIVE-5318
Thanks,
Brad
On Fri, Feb 28, 2014 at 1:55 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
Hive also has export import utilities.
Hi Shouvanik-
Can you send the hive server 2 logs? Also might want to reach out to the
Accenture Tech lab's Data and Platforms group for client support as they
should have in-depth experience with Hive Configuration. Mike Wendt would
be a good contact.
Thanks,
Brad
On Tue, Feb 18, 2014 at 3:18
Hope all is well. I recently released a python wrapper around thrift for
connecting to Hive Server 2. One of the big functionalities I was looking
to implement was kerberos authentication support.
Charith-qubit was gracious enough to modify the code and add the support.
He has created a pull
desc extended table name
Thanks,
Brad
On Thu, Feb 6, 2014 at 10:23 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi,
How can I just find out the physical location of a partitioned table in
Hive.
Show partitions tab name
gives me just the partition column info.
I want the location of
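The `desc extended` answer above is the way to get at this: SHOW PARTITIONS only lists partition specs, while DESCRIBE EXTENDED / DESCRIBE FORMATTED with a PARTITION clause also prints a Location: field with the partition's HDFS path. A minimal sketch that builds the statement (table name and partition spec are placeholders):

```python
def partition_location_query(table, partition_spec):
    # DESCRIBE FORMATTED with a PARTITION clause prints, among other
    # metadata, a "Location:" line giving the partition's HDFS path.
    # partition_spec is e.g. "dt='2014-01-01'" (hypothetical column).
    return "DESCRIBE FORMATTED %s PARTITION (%s)" % (table, partition_spec)
```

Run the resulting statement via `hive -e` or a HiveServer2 client and look for the Location: line in the output.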
#HiveServer2Clients-PythonClient
Thanks for your contribution.
-- Lefty
On Tue, Oct 29, 2013 at 12:55 AM, Lefty Leverenz
leftylever...@gmail.com wrote:
When it's ready, I can add it to the wikidoc for you if you don't have
editing access.
-- Lefty
On Wed, Oct 23, 2013 at 7:24 PM, Brad Ruderman
I have had much better luck with the Cloudera driver, especially since you
are using the cloudera dist.
Can you send the logs from /var/log/hive/hive-server2.out and
hive-server2.log?
Thanks!
On Wed, Dec 4, 2013 at 8:26 AM, Joseph D Antoni jdant...@yahoo.com wrote:
To all,
I'm trying to
Check out
size
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
Thanks,
Brad
On Mon, Dec 2, 2013 at 5:05 PM, Raj Hadoop hadoop...@yahoo.com wrote:
hi,
how to find number of elements in an array in Hive table?
thanks,
Raj
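The UDF Brad points to is size(); a sketch of the query it implies (table and column names are placeholders):

```python
def array_length_query(table, array_col):
    # Hive's built-in size() UDF returns the element count of an
    # ARRAY column (or the entry count of a MAP column).
    return "SELECT size(%s) FROM %s" % (array_col, table)
```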
Wow that question won't be answerable. It all depends on the amount of data
per partition and the queries you are going to be executing on it, as well
as the structure of the data. In general in hive (depending on your cluster
size) you need to balance the number of files with the size, smaller
whether this is something
normal or not at all a normal thing.
Thanks,
Raj
On Thursday, October 31, 2013 6:39 PM, Brad Ruderman
bruder...@radiumone.com wrote:
Wow that question won't be answerable. It all depends on the amount of
data per partition and the queries you are going
3rd as well. I would like to add information about hs2 client libraries
(ruby,node,python).
bradruder...@gmail.com
Thanks,
Brad
On Mon, Oct 28, 2013 at 5:55 PM, Mikhail Antonov olorinb...@gmail.com wrote:
Could you please also add me? olorinb...@gmail.com
I wanted to add details about LDAP
Hi All-
I have struggled for a while with a simple and straightforward driver that I
can use to connect to Hive Server 2 in a very similar manner as a mysql
driver in python. I know there are a few ways like using thrift or ODBC but
all require significant amount of installation. I decided to
://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
and save some other poor saps from re-inventing the wheel.
On Wed, Oct 23, 2013 at 2:42 PM, Brad Ruderman bruder...@radiumone.com wrote:
Hi All-
I have struggled for a while with a simple and straightforward driver that
I can use
Typically it would be your application that opens the process off the main
thread. Hue (Beeswax specifically) does this and you can see the code here:
https://github.com/cloudera/hue/tree/master/apps/beeswax
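A minimal sketch of launching a query off the main flow with Python's subprocess module, in the spirit of the Beeswax code linked above (the `hive -e` invocation is an assumption; any command works the same way):

```python
import subprocess

def run_off_main_thread(cmd):
    # Popen returns immediately, so the caller is not blocked while the
    # child runs; communicate() then drains stdout/stderr without
    # deadlocking on full pipe buffers.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    out, err = proc.communicate()
    return proc.returncode, out, err
```

Usage would look like `run_off_main_thread(["hive", "-e", "SELECT 1"])`, with stderr inspected afterwards for failure text.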
Thx
On Sun, Sep 29, 2013 at 5:15 PM, kentkong_work kentkong_w...@163.com wrote:
hi all,
Hi All-
I have opened up a ticket for this issue:
https://issues.apache.org/jira/browse/HIVE-5318
Can anyone repro to confirm it's a bug with Hive and not with a configuration
within my instance?
Thanks,
Brad
On Tue, Sep 17, 2013 at 2:22 PM, Brad Ruderman bruder...@radiumone.com wrote:
Hi All
/18 13:14:02 INFO ql.Driver: PERFLOG method=releaseLocks
13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=releaseLocks
start=1379535242333 end=1379535242333 duration=0
Thanks,
Brad
On Tue, Sep 17, 2013 at 2:22 PM, Brad Ruderman bruder...@radiumone.com wrote:
Hi All-
I am trying to export
Hi All-
I am trying to export a table in Hive 0.9, then import it into Hive 0.10
staging. Essentially moving data from a production instance to staging.
I used the EXPORT table command, however when I try to import the table
back into staging I receive the following (pulled from the hive.log file).
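The EXPORT/IMPORT pair described above can be sketched as statement builders (the HDFS path and table names are placeholders; note the cross-version caveat tracked in HIVE-5318):

```python
def export_query(table, hdfs_dir):
    # EXPORT writes the table's data plus a metadata file to hdfs_dir.
    return "EXPORT TABLE %s TO '%s'" % (table, hdfs_dir)

def import_query(table, hdfs_dir):
    # IMPORT recreates the table on the target cluster from that dir.
    return "IMPORT TABLE %s FROM '%s'" % (table, hdfs_dir)
```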
Hi All-
I was hoping to gather some insight in how the hadoop (and or hive) job
scheduler distributes mappers per user. I am running into an issue where I
see that hadoop (and or hive) is evenly distributing mappers per user
instead of per job.
For example:
-We have 1000 mapper capacity
-10 Jobs
, Aug 30, 2013 at 10:07 PM, Brad Ruderman
bruder...@radiumone.com wrote:
Hi All-
I was hoping to gather some insight in how the hadoop (and or hive) job
scheduler distributes mappers per user. I am running into an issue where I
see that hadoop (and or hive) is evenly distributing mappers per
Have you simply tried
INSERT OVERWRITE TABLE destination
SELECT col1, col2, col3
FROM source
WHERE col4 = 'abc'
Thanks!
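For the multi-insert form asked about below, each INSERT branch can carry its own WHERE predicate. A sketch that builds such a statement (source, destination, and column names are placeholders):

```python
def multi_insert_query(source, branches):
    # Hive multi-insert: one FROM clause, several INSERT branches,
    # each with its own column list and WHERE predicate.
    # branches: list of (dest_table, cols, where) tuples.
    parts = ["FROM %s" % source]
    for dest, cols, where in branches:
        parts.append("INSERT OVERWRITE TABLE %s SELECT %s WHERE %s"
                     % (dest, ", ".join(cols), where))
    return "\n".join(parts)
```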
On Tue, Jul 30, 2013 at 8:25 PM, Sha Liu lius...@hotmail.com wrote:
Hi Hive Gurus,
When using the Hive extension of multiple inserts, can we add Where
clauses for each
Hive doesn't support inserting a few records into a table. You will need to
write a query to union your select and then insert. If you can partition,
then you can insert a whole partition at a time instead of the table.
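The union-then-insert workaround can be sketched as a statement builder. It assumes a one-row helper table named `dummy` (a common convention in older Hive, which required a FROM clause on every SELECT); all names are placeholders:

```python
def insert_rows_query(table, rows):
    # Emulate a multi-row insert: one single-row SELECT per record,
    # glued together with UNION ALL, fed to a single INSERT.
    selects = " UNION ALL ".join(
        "SELECT %s FROM dummy LIMIT 1"
        % ", ".join(str(v) for v in row)
        for row in rows)
    return "INSERT INTO TABLE %s %s" % (table, selects)
```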
Thanks,
Brad
On Tue, Jul 30, 2013 at 9:04 PM, Sha Liu lius...@hotmail.com
Hi All-
I have 2 tables:
CREATE TABLE users (
a bigint,
b int
)
CREATE TABLE products (
a bigint,
c int
)
Each table has about 8 billion records (roughly 2k files total mappers). I
want to know the most performant way to do the following query:
SELECT u.b,
p.c,
? are there any columns you have
on which you can create buckets?
I have done joins having 10 billion records in one table, but the other table
was significantly smaller, and I had a 1000-node cluster at my disposal.
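The bucketing suggestion above can be sketched as DDL: bucket (and sort) both tables on the join key with the same bucket count so Hive can use a sort-merge bucket join. A builder with placeholder names:

```python
def bucketed_table_ddl(table, col_defs, join_key, buckets):
    # CLUSTERED BY the join key, SORTED BY it, fixed bucket count;
    # both sides of the join must match for a sort-merge bucket join.
    return ("CREATE TABLE %s (%s) CLUSTERED BY (%s) "
            "SORTED BY (%s) INTO %d BUCKETS"
            % (table, ", ".join(col_defs), join_key, join_key, buckets))
```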
On Mon, Jul 29, 2013 at 11:08 PM, Brad Ruderman
bruder...@radiumone.com wrote:
Hi
You need to stream and read the stderr and stdout for text messages
alerting there is an error. In python this is what I use:
On Tue, Jul 16, 2013 at 7:42 PM, kentkong_work kentkong_w...@163.com wrote:
hi,
I use a shell script to run hive query in background, like this
hive -e
:
    hive_exception(stdout, stderr)
else:
    if stderr != '':
        stderr = stderr.lower()
        if stderr.find('error') != -1 or stderr.find('failed') != -1:
            if stderr.find('log4j:error could not find value for key log4j.appender.fa') == -1:
                raise Exception(stderr)
    return
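A self-contained version of that stderr check, as a sketch (the log4j filter string is taken verbatim from the snippet above):

```python
def check_hive_stderr(stderr):
    # Treat the run as failed when stderr mentions an error or failure,
    # but ignore the benign log4j appender complaint the snippet filters.
    s = stderr.lower()
    if 'error' in s or 'failed' in s:
        if 'log4j:error could not find value for key log4j.appender.fa' not in s:
            raise Exception(stderr)
```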
On Tue, Jul 16, 2013 at 8:03 PM, Brad Ruderman