Partitions in Hive

2014-03-06 Thread nagarjuna kanamarlapudi
Hi, I have a table with 3 columns in Hive. I want that table to be partitioned based on the first letter of column 1. How do we define such a partition condition in Hive? Regards, Nagarjuna K
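Hive cannot partition a table directly by an expression over an existing column; the usual workaround is to add a partition column and populate it with dynamic partitioning, deriving the value with SUBSTR. A sketch, assuming a hypothetical source table `src(col1, col2, col3)` (table and column names here are illustrative, not from the thread):

```sql
-- Enable dynamic partitioning for the INSERT below.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Target table keyed by the first letter of col1.
CREATE TABLE src_by_letter (col2 STRING, col3 STRING)
PARTITIONED BY (first_letter STRING);

-- The partition column must come last in the SELECT list.
INSERT OVERWRITE TABLE src_by_letter PARTITION (first_letter)
SELECT col2, col3, SUBSTR(col1, 1, 1) AS first_letter
FROM src;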

Re: Unable to export hadoop trunk into eclipse

2014-03-05 Thread nagarjuna kanamarlapudi
Can anyone help me here? On Tue, Mar 4, 2014 at 3:23 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Yes, I installed.. mvn clean install -DskipTests was successful. Only the import into Eclipse is failing. On Tue, Mar 4, 2014 at 12:51 PM, Azuryy Yu azury...@gmail.com

Re: Unable to export hadoop trunk into eclipse

2014-03-04 Thread nagarjuna kanamarlapudi
at 3:08 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi Ted, I didn't do that earlier. Now I ran mvn eclipse:eclipse and tried importing the same projects into Eclipse. Now, this is throwing the following errors: 1. No marketplace entries found to handle

Unable to export hadoop trunk into eclipse

2014-03-03 Thread nagarjuna kanamarlapudi
Hi, I checked out the Hadoop trunk from http://svn.apache.org/repos/asf/hadoop/common/trunk. I set up protobuf-2.5.0 and then did a Maven build. mvn clean install -DskipTests .. worked well. The Maven build was successful. So, I tried importing the project into Eclipse. It is showing errors in
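For reference, the sequence that usually gets a trunk checkout into Eclipse (a sketch assuming Maven and protobuf 2.5.0 are already installed; the eclipse:eclipse goal generates the .project/.classpath files Eclipse imports) is roughly:

```
# Check out and build, skipping tests, then generate Eclipse project files.
svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk hadoop-trunk
cd hadoop-trunk
mvn clean install -DskipTests
mvn eclipse:eclipse
# Then in Eclipse: File > Import > Existing Projects into Workspace
```

Without the eclipse:eclipse step, Eclipse has no project metadata to import, which matches the symptoms later in this thread.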

Re: Unable to export hadoop trunk into eclipse

2014-03-03 Thread nagarjuna kanamarlapudi
under the root of your workspace ? mvn eclipse:eclipse On Mar 3, 2014, at 9:18 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi, I checked out the Hadoop trunk from http://svn.apache.org/repos/asf/hadoop/common/trunk. I set up protobuf-2.5.0 and then did mvn

Understanding MapReduce source code : Flush operations

2014-01-06 Thread nagarjuna kanamarlapudi
Hi, I have been using Hadoop / MapReduce for about 2.5 years. I want to understand the internals of the Hadoop source code. Let me put my requirement very clearly: I want to have a look at the code for the flush operations that happen after the reduce phase. The reducer writes the output to the OutputFormat
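In Hadoop the write path runs through OutputFormat/RecordWriter (e.g. TextOutputFormat's LineRecordWriter, whose close() the framework invokes after the reduce phase), with the actual buffering done by the HDFS output stream underneath. As a self-contained illustration of that buffer-then-flush-on-close pattern (illustrative class names, not Hadoop's actual API):

```java
// Minimal sketch of the buffer-then-flush-on-close pattern used by
// RecordWriter implementations. Names here are illustrative only.
public class FlushSketch {

    /** Buffers key/value records and only pushes them to the sink on close(). */
    static class BufferingRecordWriter {
        private final StringBuilder buffer = new StringBuilder();
        private final StringBuilder sink;   // stands in for the HDFS output stream

        BufferingRecordWriter(StringBuilder sink) {
            this.sink = sink;
        }

        void write(String key, String value) {
            buffer.append(key).append('\t').append(value).append('\n');
        }

        /** Called by the framework after the reduce phase: the "flush". */
        void close() {
            sink.append(buffer);
            buffer.setLength(0);
        }
    }

    public static void main(String[] args) {
        StringBuilder sink = new StringBuilder();
        BufferingRecordWriter w = new BufferingRecordWriter(sink);
        w.write("word", "3");
        // Nothing reaches the sink until close() runs.
        System.out.println("sink empty before close: " + (sink.length() == 0));
        w.close();
        System.out.print(sink);
    }
}
```

To find the real code, start from RecordWriter.close() in the mapreduce package and follow the stream it wraps down into HDFS.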

Fwd: Understanding MapReduce source code : Flush operations

2014-01-06 Thread nagarjuna kanamarlapudi
-- Forwarded message -- From: nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com Date: Mon, Jan 6, 2014 at 8:09 AM Subject: Understanding MapReduce source code : Flush operations To: mapreduce-u...@hadoop.apache.org Hi, I have been using Hadoop / MapReduce for about 2.5 years. I

Fwd: Understanding MapReduce source code : Flush operations

2014-01-06 Thread nagarjuna kanamarlapudi
-- Forwarded message -- From: nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com Date: Mon, Jan 6, 2014 at 6:39 PM Subject: Understanding MapReduce source code : Flush operations To: mapreduce-u...@hadoop.apache.org Hi, I have been using Hadoop / MapReduce for about 2.5 years. I

Re: Understanding MapReduce source code : Flush operations

2014-01-06 Thread nagarjuna kanamarlapudi
and sends them across to the remote data-nodes. Thanks +Vinod On Jan 6, 2014, at 11:07 AM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: I want to have a look at the code for the flush operations that happen after the reduce phase. The reducer writes the output

Re: unable to compile hadoop source code

2014-01-06 Thread nagarjuna kanamarlapudi
instructions for Hadoop: http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt For your problem, protoc is not set in PATH. After setting it, recheck that the protobuf version is 2.5 *From:* nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlap...@gmail.com] *Sent:* 07 January 2014 09:18

java.lang.IllegalStateException: Invalid shuffle port number -1

2013-11-15 Thread nagarjuna kanamarlapudi
Hi, I am trying to run MapReduce with YARN (MRv2). I kicked off with the word count example but failed with the exception below. Can someone help me out? Container launch failed for container_1384558555627_0001_01_02 : java.lang.IllegalStateException: Invalid shuffle port number -1 returned for
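A shuffle port of -1 usually means the NodeManager's auxiliary shuffle service is not configured, so the auxiliary-service handshake returns no port. A sketch of the relevant yarn-site.xml entries (property names as in Hadoop 2.x; verify against your release, since early 2.x alphas used the value mapreduce.shuffle instead):

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```

Restart the NodeManagers after changing this so the shuffle handler is actually loaded.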

Re: Child JVM memory allocation / Usage

2013-03-27 Thread nagarjuna kanamarlapudi
, Mar 26, 2013 at 10:23 AM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi Hemanth, This sounds interesting, I will try that out on the pseudo cluster. But the real problem for me is, the cluster is being maintained by a third party. I only have an edge node through

Re: Child JVM memory allocation / Usage

2013-03-27 Thread nagarjuna kanamarlapudi
='-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./myheapdump.hprof -XX:OnOutOfMemoryError=./dump.sh' com.hadoop.publicationMrPOC.Launcher Fudan\ Univ Thanks Hemanth On Wed, Mar 27, 2013 at 1:58 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi Hemanth/Koji
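JVM options like the ones quoted above are typically handed to child tasks through mapred.child.java.opts (MR1) or the per-task mapreduce.map.java.opts / mapreduce.reduce.java.opts (MR2). A mapred-site.xml sketch carrying the same flags (the dump paths are the ones from the thread):

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./myheapdump.hprof -XX:OnOutOfMemoryError=./dump.sh</value>
</property>
```

Relative paths like ./myheapdump.hprof resolve inside the task's working directory on the node, which is why the dump has to be fetched from there (or via task logs) after the job fails.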

Re: Child JVM memory allocation / Usage

2013-03-27 Thread nagarjuna kanamarlapudi
. The file names don't match - can you check your script / command line args? Thanks hemanth On Wed, Mar 27, 2013 at 3:21 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi Hemanth, Nice to see this. I did not know about this till now. But one more issue

Re: Child JVM memory allocation / Usage

2013-03-25 Thread nagarjuna kanamarlapudi
because GC hasn't reclaimed what it can. Can you just try reading in the data you want to read and see if that works? Thanks Hemanth On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: io.sort.mb = 256 MB On Monday, March 25, 2013, Harsh

Re: Child JVM memory allocation / Usage

2013-03-25 Thread Nagarjuna Kanamarlapudi
...@thoughtworks.com wrote: Hmm. How are you loading the file into memory ? Is it some sort of memory mapping etc ? Are they being read as records ? Some details of the app will help On Mon, Mar 25, 2013 at 2:14 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi Hemanth, I tried

Re: Child JVM memory allocation / Usage

2013-03-25 Thread nagarjuna kanamarlapudi
On Mon, Mar 25, 2013 at 6:43 PM, Nagarjuna Kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: I have a lookup file which I need in the mapper. So I am trying to read the whole file and load it into a list in the mapper. For each and every record I look in this file, which I got from the distributed

Re: Regarding NameNode Problem

2013-03-25 Thread Nagarjuna Kanamarlapudi
Just search for the logs that are created when you start your cluster. — Sent from iPhone On Tue, Mar 26, 2013 at 10:55 AM, Sagar Thacker sagar7...@gmail.com wrote: Hello Sir/Madam, I have configured HDFS on my Ubuntu cluster. Now when I try to implement a Map/Reduce

Child JVM memory allocation / Usage

2013-03-24 Thread nagarjuna kanamarlapudi
Hi, I configured my child JVM heap to 2 GB. So, I thought I could really read 1.5GB of data and store it in memory (mapper/reducer). I wanted to confirm the same and wrote the following piece of code in the configure method of the mapper. @Override public void configure(JobConf job) {
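One way to confirm what the child JVM actually received (a self-contained sketch, not the code from the thread) is to log Runtime heap figures; in a real mapper you would call this from configure() (old API) or setup() (new API) and compare against the configured -Xmx:

```java
// Self-contained sketch: report the heap actually granted to this JVM.
public class HeapCheck {

    static long mb(long bytes) {
        return bytes / (1024 * 1024);
    }

    /** Maximum heap this JVM will attempt to use, in MB (reflects -Xmx). */
    static long maxHeapMB() {
        return mb(Runtime.getRuntime().maxMemory());
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max   heap MB: " + mb(rt.maxMemory()));
        System.out.println("total heap MB: " + mb(rt.totalMemory()));
        System.out.println("free  heap MB: " + mb(rt.freeMemory()));
    }
}
```

Note that freeMemory() understates headroom until GC runs, and the MapTask's own buffers (e.g. io.sort.mb, discussed later in this thread) also come out of the same heap, so usable space is less than -Xmx.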

Re: Child JVM memory allocation / Usage

2013-03-24 Thread nagarjuna kanamarlapudi
, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com wrote: Hi, I configured my child JVM heap to 2 GB. So, I thought I could really read 1.5GB of data and store it in memory (mapper/reducer). I wanted to confirm the same and wrote the following piece of code in the configure

Re: Child JVM memory allocation / Usage

2013-03-24 Thread nagarjuna kanamarlapudi
io.sort.mb = 256 MB On Monday, March 25, 2013, Harsh J wrote: The MapTask may consume some memory of its own as well. What is your io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to? On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com

Re: In Compatible clusterIDs

2013-02-20 Thread nagarjuna kanamarlapudi
previous data in the dfs directory which is not in sync with the last installation. Maybe you can remove the content of /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 if it's not useful for you, reformat your node and restart it? JM 2013/2/20, nagarjuna kanamarlapudi
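The recovery suggested above, spelled out as commands (destructive: it erases existing HDFS state on the node, so only do this on a disposable dev cluster; the path is the one quoted in the thread):

```
# WARNING: wipes existing HDFS data on this node.
rm -rf /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
bin/hdfs namenode -format
sbin/start-dfs.sh
```

The mismatch arises because formatting generates a fresh clusterID in the NameNode while DataNode directories still carry the old one; clearing the stale directories lets them re-register with the new ID.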

Re: Execution of udf

2013-01-18 Thread nagarjuna kanamarlapudi
No, but the query execution shows a reducer running. And in fact I feel that a reduce phase can be there On Friday, January 18, 2013, Dean Wampler wrote: There is no reduce phase needed in this query. On Fri, Jan 18, 2013 at 6:59 AM, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com

Re: hadoop namenode recovery

2013-01-14 Thread nagarjuna kanamarlapudi
I am not sure if this is possible in the 0.2X or 1.0 releases of Hadoop. On Tuesday, January 15, 2013, Panshul Whisper wrote: Hello, I have another idea regarding solving the single point of failure of Hadoop... What if I have multiple Name Nodes set up and running behind a load balancer in

Re: What is the preferred way to pass a small number of configuration parameters to a mapper or reducer

2012-12-30 Thread nagarjuna kanamarlapudi
Only if you have few mappers and reducers On Monday, December 31, 2012, Jonathan Bishop wrote: E. Store them in hbase... On Sun, Dec 30, 2012 at 12:24 AM, Hemanth Yamijala yhema...@thoughtworks.com wrote: If it is a small number, A seems the best way to me. On Friday, December 28, 2012,

Re: Hive-Site XML changing any proprty.

2012-10-09 Thread nagarjuna kanamarlapudi
By restarting the Hive server your problem should be solved. I am not sure if we have any other ways of starting the Hive server other than: 1. bin/hive --service hiveserver 2. HIVE_PORT= ./hive --service hiveserver Regards, Nagarjuna On Tue, Oct 9, 2012 at 8:10 PM, Uddipan Mukherjee