From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Friday, March 22, 2013 3:13 PM
To: user@hive.apache.org
Subject: Not a hive question but HUE question
Hi fellow Hive users,
Any of you know if there is a HUE user group?
Thanks,
Chalcy
that a bug?
On Wed, Mar 6, 2013 at 12:33 PM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
You could try breaking up the hive query to return smaller datasets. I have
noticed this behavior when the hive query has ‘in’ in where clause.
Thanks,
Chalcy
From: Daning Wang [mailto:dan...@netseer.com]
Sent: Wednesday, March 06, 2013 3:08 PM
To: user@hive.apache.org
Subject: Hadoop cluster ha
these typo issues by doing that.
On Fri, Oct 12, 2012 at 1:33 PM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
We are using Hive 8 as part of CDH4.0.1. We noticed a bug in partition handling.
Say my hive table is partitioned by mydate, and to drop a partition I mistype
the name of the partition, like
Alter table mytable drop partition (yourdate='2012-09-10'); this will drop all
the mytable partitions (mydate
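For reference, a guarding sketch in HiveQL, using the table and partition names from the report above (the mistyped statement is left commented out):

```sql
-- List partitions first so a typo in the key name is easy to spot.
SHOW PARTITIONS mytable;                              -- e.g. mydate=2012-09-10

-- Correct partition key: drops only the one partition.
ALTER TABLE mytable DROP PARTITION (mydate='2012-09-10');

-- Mistyped key (yourdate is not the partition column). On the affected
-- Hive 0.8 build this reportedly dropped every partition instead of
-- failing, so double-check the key name before running any drop.
-- ALTER TABLE mytable DROP PARTITION (yourdate='2012-09-10');
```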
Hi Raihan,
You can set it in hive prompt like below,
set mapred.jobtracker.maxtasks.per.job=777;
To see whether it is set, just type set; at the hive prompt and you'll see this
parameter in the output.
Hope this helps,
Chalcy
From: Raihan Jamal [mailto:jamalrai...@gmail.com]
Sent: Wednesday, Octob
Using hive server with Tableau now, and realized that the user comes in as the "hive"
user. Also, after reading more in the emails and elsewhere, I found that hive
server is not thread safe and does not have a way to set up authentication.
How is the connection to hive handled in Tableau, Microstrategy
Snappy vs LZO -
To implement lzo there are several steps, starting from building the hadoop-lzo
library. Finally we got it built. Indexing had to be done as a separate step,
and the lzo indexing alters the way the files are stored, so it does not use
hadoop's built-in mapper. Snappy on the other
queries deal with some specific columns
and not the whole data in a row. For general purposes, Sequence File is a
better choice. It is also widely adopted, so more tools will have support for
Sequence Files.
Regards
Bejoy KS
From: Chalcy Raja
To: "user@hive.
sometime in the future, but RC file is not listed in sqoop import.
Any input on this is highly appreciated.
Thanks,
Chalcy
-----Original Message-----
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Tuesday, June 19, 2012 8:23 AM
To: user@hive.apache.org; 'bejoy...@yahoo.com
Hi Mayank,
Hive load can go only one directory deep. You cannot load a directory of
directories of files.
Thanks,
Chalcy
From: Jasper Knulst [mailto:jasper.knu...@incentro.com]
Sent: Thursday, June 21, 2012 6:30 AM
To: user@hive.apache.org
Subject: Re: Loading files from a directory
Hi Mayank,
C
when I do select * from
table limit 100; it gives garbage.
What am I not setting right?
Thanks,
Chalcy
-----Original Message-----
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Tuesday, June 19, 2012 8:23 AM
To: user@hive.apache.org; 'bejoy...@yahoo.com'
Subject: RE: sqoop
data. Metadata contains information like the compression codec used, etc.,
in the first few characters of a Sequence file. Try the linux head command on
the sequence file to get those details.
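To illustrate with a stand-in file (real sequence files are written by Hadoop itself; this only mimics the start of the header):

```shell
# Fake the first bytes of a sequence file: the magic "SEQ", a version
# byte, then class names / codec information follow in the header.
printf 'SEQ\006org.apache.hadoop.io.compress.SnappyCodec' > /tmp/seq_header_demo

# head on a real file shows the same kind of detail right at the start:
head -c 60 /tmp/seq_header_demo
```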
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-----Original Message-----
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Monday, June 18, 2012 3:28 PM
To: user@hive.apache.org; 'bejoy...@yahoo.com'
Subject: RE: sqoop, hive and lzo and cdh3u3 - not creating in index
automatically
Snappy with sequence file works well for us. We'll have to decide which one
suits our needs.
Is there a way to convert existing hdfs data in text format to sequence
files?
Thanks for all your input,
Chalcy
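One way to do the conversion asked about above, as a sketch only (table names are placeholders, and the properties shown are the old-API names used in that era):

```sql
-- Create a SequenceFile-backed copy of the text table.
CREATE TABLE mytable_seq LIKE mytable_text;
ALTER TABLE mytable_seq SET FILEFORMAT SEQUENCEFILE;

-- Compress the rewritten output with snappy, block-compressed.
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.type=BLOCK;

-- Rewrite the existing text data into the new table.
INSERT OVERWRITE TABLE mytable_seq
SELECT * FROM mytable_text;
```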
-----Original Message-----
From: Chalcy Raja [mailto:cha
Snappy is not splittable on its own. But sequence files are splittable, so when
used together snappy gains the advantage of splittability.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-----Original Message-----
From: Chalcy Raja
Date: Mon, 18 Jun 2012 14:31:36
To: user@hive.apache.or
Have you considered switching to sequence files using snappy compression (or
lzo)? IIRC the process of generating LZO files and then generating an index on
top of these is cumbersome, whereas sequence files are directly splittable.
On Mon, Jun 18, 2012 at 9:16 AM, Chalcy Raja
wrote:
I am posting it here first and then may be on sqoop user group as well.
I am trying to use lzo compression.
Tested on a standalone by installing cdh3u3 and did sqoop to hive import with
lzo compression and everything works great. The data is sqooped into hdfs and
lzo index file got created and
I went and modified the .properties file to reference all these jars from
/hivehome/build/ivy/default/ directory and that fixed the issue.
From: gemini alex [mailto:gemini5201...@gmail.com]
Sent: Thursday, May 03, 2012 6:00 AM
To: user@hive.apache.org; wyuk...@gmail.com
Subject: Re: unable to set
38 AM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
In my situation, the tables I was importing into hive are daily tables. A couple
of columns were added in one month, and they were not added to the end of the
table. Also, one field got dropped in between. Also, I have data for a year.
Anyway
older
data and sqoop imported, specifying columns in the query tag of the sqoop import
into the hive external table. I had to have three different sqoop imports for
the three periods. Anyway, I got them all together so nobody will notice the
change :)
--Chalcy
From: Chalcy Raja [mailto:chalcy.r
AM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
Hi all,
Adding a column by alter table mytable add columns (mynewcolumn string); works
well for me. This adds a field to the end of the schema.
Is there a way to add a column in between two columns?
Thanks,
Chalcy
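For what it's worth, Hive's CHANGE COLUMN clause can reposition a column, if your Hive version supports the FIRST/AFTER form. A sketch (column names are placeholders); note this only changes the table metadata, not the layout of data already on disk, so existing rows must match or be rewritten:

```sql
-- Add the column (it lands at the end of the schema, as noted above) ...
ALTER TABLE mytable ADD COLUMNS (mynewcolumn STRING);
-- ... then move it to sit after an existing column (metadata-only change).
ALTER TABLE mytable CHANGE COLUMN mynewcolumn mynewcolumn STRING AFTER myfirstcolumn;
```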
--
As
this also causes the entire output to be treated as a single record.
I am not much aware of sqoop, but I did face these issues while migrating data
from hive to mysql.
Thanks,
Nitin
On Thu, Mar 29, 2012 at 10:08 PM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
Hi Nitin,
I have
:16 PM, Chalcy Raja <chalcy.r...@careerbuilder.com> wrote:
I am trying to do a sqoop export (data from hdfs hadoop to a database). The table
I am trying to export has 2 million rows. The table has 20 fields. The sqoop
command is successful if I do 10 rows up to 95 rows. When I try anything more
than 95, the sqoop export fails with the following error.
configure it in Windows,
it's giving that error.
You can check http://gnuwin32.sourceforge.net/packages/make.htm
The easiest way is to install Ubuntu (or any other linux) in a virtual machine
and configure your development and testing environment there.
Regards,
Jagat
On Thu, Mar 15, 2012 at 12:24 AM, Chalcy
I have issues setting up a development environment for hive. So far I have just
modified the jar file and got it working, and now I am trying to get the changes
into svn, so I can contribute code back.
Any help is appreciated.
Here is what I am doing now,
Went to page,
https://cwiki.apache.org/Hive/howtocontribute.
arrays of keys and values respectively.
Let me know if you have any questions.
Thanks,
Chalcy
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Wednesday, November 23, 2011 1:20 PM
To: user@hive.apache.org; 'Bejoy Ks'
Subject: RE: hive map field question
Wow, this is great news
Hive source to
double check.
But, as you might have guessed, it'd be pretty straightforward to do that in
your own UDF:-)
Mark
----- Original Message -----
From: "Chalcy Raja" <chalcy.r...@careerbuilder.com>
To: "user@hive.apache.org"
could read it as a
serialized string.
Mark
----- Original Message -----
From: "Chalcy Raja"
To: user@hive.apache.org
Sent: Wednesday, November 23, 2011 10:48:07 AM
Subject: hive map field question
Hello HiveUsers,
I have a need to convert a map field to a string field and vice
versa in a hive table. I could not do a cast.
I created two external tables, one with the string and the other with the map. I can join
both to get what I want, but it takes a long time.
Any ideas of how it can be done efficiently ?
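A few HiveQL building blocks that can help with this (a sketch; column and table names are placeholders, and availability depends on the Hive version):

```sql
-- string -> map: parse "k1:v1,k2:v2"-style strings.
SELECT str_to_map(string_col, ',', ':') FROM string_table;

-- map -> arrays of keys and values.
SELECT map_keys(map_col), map_values(map_col) FROM map_table;

-- map -> string: explode the map, rebuild "k:v" pairs, and glue them together.
SELECT id, concat_ws(',', collect_set(concat(k, ':', v)))
FROM map_table LATERAL VIEW explode(map_col) kv AS k, v
GROUP BY id;
```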
There is a sqoop user group meetup on the 7th, which I am planning to attend. I
am really interested in a hive group meetup. I have the same question as Bennie,
and if yes, when?
Thank you,
Chalcy
From: Bennie Schut [mailto:bsc...@ebuddy.com]
Sent: Wednesday, October 12, 2011 9:58 AM
To: user@hive.
I second Loren's opinion.
I used hwi before and am now using HUE. If you are considering using HUE,
here is the installation guide by Cloudera,
http://archive.cloudera.com/cdh/3/hue/manual.html
Hope this helps,
Chalcy
-----Original Message-----
From: Loren Siebert [mailto:lo...@siebert.org]
Sent: Thursday, September 15, 2011 12:28 PM
To: user@hive.apache.org
Subject: Re: urldecode hive column
You need to write a UDF, like this person did:
http://search-hadoop.com/m/HFWE32CYs6x/v=plain
On Sep 15, 2011, at 9:03 AM, Chalcy Raja wrote:
Hi,
I have a situation where I need to do urldecode on one particular column. Is
there any hive built in function available?
Thank you,
Chalcy
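Short of writing a Java UDF, one workaround is a small streaming script used through Hive's TRANSFORM clause. A sketch (the script name, column layout, and the TRANSFORM invocation below are illustrative, not anything from this thread):

```python
# urldecode.py - helper for a Hive TRANSFORM streaming script.
# Hive streams rows tab-separated; here the first column is the URL-encoded one.
from urllib.parse import unquote_plus

def urldecode(value):
    # %XX escapes are decoded and '+' becomes a space
    return unquote_plus(value)

def decode_rows(lines):
    # Decode the first tab-separated column of each row, pass the rest through.
    for line in lines:
        cols = line.rstrip("\n").split("\t")
        cols[0] = urldecode(cols[0])
        yield "\t".join(cols)

# Example row, as Hive would stream it:
print(list(decode_rows(["hello%20world%2Fa\tother_col\n"])))
```

From Hive this could be wired up along the lines of `ADD FILE urldecode.py;` then `SELECT TRANSFORM(url_col) USING 'python urldecode.py' AS decoded FROM mytable;`, with a main block reading sys.stdin doing the actual streaming. On Hive versions that ship the built-in `reflect` UDF, `reflect('java.net.URLDecoder', 'decode', url_col, 'UTF-8')` can achieve the same without a script.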
Hi,
I am trying to import a sqlserver table into a hive table using sqoop. The import
failed on an nvarchar field. There is a patch released as per
https://issues.apache.org/jira/browse/SQOOP-323?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel#issue-tabs.
How do I apply this pat
Looks like a permission issue. Check the permissions on the table social. Run
the load script as sudo.
Hope this helps!
Chalcy
-----Original Message-----
From: Praveen Bathala [mailto:pbatha...@gmail.com]
Sent: Saturday, September 10, 2011 10:01 AM
To: user@hive.apache.org
Subject: Error while e