Hi,
I have a table with 3 columns in Hive.
I want the table to be partitioned based on the first letter of column 1.
How do we define such a partition condition in Hive?
Regards,
Nagarjuna K
Can anyone help me here?
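As a sketch of one common approach (the table and column names below are made up for illustration): Hive cannot partition directly on an expression, so you declare a separate partition column and populate it with the derived first letter using a dynamic-partition insert.

```sql
-- Sketch only: my_table / my_table_part and the column names are hypothetical.
CREATE TABLE my_table_part (col1 STRING, col2 STRING, col3 STRING)
PARTITIONED BY (first_letter STRING);

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- The partition column is computed from col1 at load time.
INSERT OVERWRITE TABLE my_table_part PARTITION (first_letter)
SELECT col1, col2, col3, SUBSTR(col1, 1, 1) AS first_letter
FROM my_table;
```

Queries that filter on SUBSTR(col1, 1, 1) can then be written against first_letter instead, so Hive prunes partitions rather than scanning the whole table.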
On Tue, Mar 4, 2014 at 3:23 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Yes, I installed it.
mvn clean install -DskipTests was successful. Only the import into Eclipse is
failing.
On Tue, Mar 4, 2014 at 12:51 PM, Azuryy Yu azury...@gmail.com
at 3:08 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi Ted,
I didn't do that earlier.
Now, I ran
mvn eclipse:eclipse
and tried importing the same projects into Eclipse. Now, this is
throwing the following errors:
1. No marketplace entries found to handle
Hi,
I checked out the hadoop trunk from
http://svn.apache.org/repos/asf/hadoop/common/trunk.
I set up protobuf-2.5.0 and then did mvn build.
mvn clean install -DskipTests worked well. The Maven build was successful.
So, I tried importing the project into eclipse.
It is showing errors in
under the root of your workspace ?
mvn eclipse:eclipse
On Mar 3, 2014, at 9:18 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi,
I checked out the hadoop trunk from
http://svn.apache.org/repos/asf/hadoop/common/trunk.
I set up protobuf-2.5.0 and then did mvn
Hi,
I have been using hadoop/map reduce for about 2.5 years. I want to understand the
internals of the hadoop source code.
Let me put my requirement very clearly.
I want to have a look at the code for the flush operations that happen
after the reduce phase.
Reducer writes the output to OutputFormat
-- Forwarded message --
From: nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com
Date: Mon, Jan 6, 2014 at 8:09 AM
Subject: Understanding MapReduce source code : Flush operations
To: mapreduce-u...@hadoop.apache.org
Hi,
I have been using hadoop/map reduce for about 2.5 years. I
-- Forwarded message --
From: nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com
Date: Mon, Jan 6, 2014 at 6:39 PM
Subject: Understanding MapReduce source code : Flush operations
To: mapreduce-u...@hadoop.apache.org
Hi,
I have been using hadoop/map reduce for about 2.5 years. I
and sends them across to the remote
data-nodes.
Thanks
+Vinod
On Jan 6, 2014, at 11:07 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
I want to have a look at the code for the flush operations that happen
after the reduce phase.
Reducer writes the output
instructions for Hadoop.
http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt
For your problem, protobuf is not set in PATH. After setting it, recheck
that the protobuf version is 2.5.
*From:* nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlap...@gmail.com]
*Sent:* 07 January 2014 09:18
Hi,
I am trying to run mapreduce with yarn (mrv2). I kicked off with the word count
example but it failed with the below exception. Can someone help me out?
Container launch failed for container_1384558555627_0001_01_02 :
java.lang.IllegalStateException: Invalid shuffle port number -1 returned
for
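For what it's worth, this "Invalid shuffle port number -1" error is commonly caused by the NodeManager's auxiliary shuffle service not being configured, so the container has no shuffle port to report. A minimal yarn-site.xml sketch (note that the service name spelling changed across early 2.x releases: "mapreduce.shuffle" in the 2.0/2.1 alphas, "mapreduce_shuffle" from 2.2 onward):

```xml
<!-- yarn-site.xml: enable the MapReduce shuffle auxiliary service
     on every NodeManager, then restart the NodeManagers. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```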
, Mar 26, 2013 at 10:23 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi hemanth,
This sounds interesting; I will try that out on the pseudo cluster.
But the real problem for me is that the cluster is being maintained by a third
party. I only have an edge node through
='-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=./myheapdump.hprof -XX:OnOutOfMemoryError=./dump.sh'
com.hadoop.publicationMrPOC.Launcher Fudan\ Univ
Thanks
Hemanth
On Wed, Mar 27, 2013 at 1:58 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi Hemanth/Koji,
The file names don't match - can you check your script / command line args.
Thanks
hemanth
On Wed, Mar 27, 2013 at 3:21 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi Hemanth,
Nice to see this. I did not know about this till now.
But one more issue:
because GC hasn't reclaimed what it
can. Can you just try reading in the data you want to read and see if that
works ?
Thanks
Hemanth
On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
io.sort.mb = 256 MB
On Monday, March 25, 2013, Harsh
...@thoughtworks.com wrote:
Hmm. How are you loading the file into memory? Is it some sort of memory
mapping, etc.? Are they being read as records? Some details of the app will
help.
On Mon, Mar 25, 2013 at 2:14 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi Hemanth,
I tried
On Mon, Mar 25, 2013 at 6:43 PM, Nagarjuna Kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
I have a lookup file which I need in the mapper, so I am trying to read
the whole file and load it into a list in the mapper.
For each and every record I look in this file, which I got from distributed
Just search for the logs that are created when you start your cluster.
—
Sent from iPhone
On Tue, Mar 26, 2013 at 10:55 AM, Sagar Thacker sagar7...@gmail.com
wrote:
Hello Sir/Madam,
I have configured HDFS on my Ubuntu cluster. Now
when I try to implement a Map/Reduce
Hi,
I configured my child JVM heap to 2 GB. So, I thought I could really read
1.5 GB of data and store it in memory (mapper/reducer).
I wanted to confirm the same and wrote the following piece of code in the
configure method of the mapper.
@Override
public void configure(JobConf job) {
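The quoted snippet is cut off in the archive, but the same check can be sketched as plain Java, independent of Hadoop's JobConf (the class name HeapCheck is made up for illustration): print the heap ceiling the JVM actually grants and compare it against the configured 2 GB.

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the heap ceiling the JVM will honor;
        // it is typically somewhat below the configured -Xmx value, so a
        // 2 GB child heap does not leave a full 2 GB for application data.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max usable heap: " + maxMb + " MB");
    }
}
```

Running this inside a task (e.g. from the mapper's configure method) shows how much of the configured heap survives JVM overhead before the sort buffer and framework objects take their share.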
, nagarjuna kanamarlapudi nagarjuna.kanamarlap...@gmail.com
wrote:
Hi,
I configured my child JVM heap to 2 GB. So, I thought I could really
read
1.5 GB of data and store it in memory (mapper/reducer).
I wanted to confirm the same and wrote the following piece of code in the
configure
io.sort.mb = 256 MB
On Monday, March 25, 2013, Harsh J wrote:
The MapTask may consume some memory of its own as well. What is your
io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to?
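To make the arithmetic concrete: the map-side sort buffer is carved out of the same child heap, so with the values mentioned in this thread a mapred-site.xml sketch like the following leaves roughly 2048 minus 256 MB, minus JVM overhead, for application data:

```xml
<!-- mapred-site.xml sketch: the sort buffer (io.sort.mb in MR1,
     mapreduce.task.io.sort.mb in MR2) is allocated from the task's
     own heap, so it must be budgeted against -Xmx. -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```

That is why holding a 1.5 GB lookup structure in a 2 GB heap can still fail with OutOfMemoryError.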
On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com
previous data in the dfs directory which is not
in sync with the last installation.
Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not useful for you, reformat your node and restart it?
JM
2013/2/20, nagarjuna kanamarlapudi
No, but the query execution shows a reducer running... and in fact I feel
that a reduce phase can be there.
On Friday, January 18, 2013, Dean Wampler wrote:
There is no reduce phase needed in this query.
On Fri, Jan 18, 2013 at 6:59 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com
I am not sure if this is possible in the 0.2X or 1.0 releases of Hadoop.
On Tuesday, January 15, 2013, Panshul Whisper wrote:
Hello,
I have another idea for solving the single point of failure of
Hadoop...
What if I have multiple Name Nodes set up and running behind a load
balancer in
Only if you have few mappers and reducers.
On Monday, December 31, 2012, Jonathan Bishop wrote:
E. Store them in hbase...
On Sun, Dec 30, 2012 at 12:24 AM, Hemanth Yamijala
yhema...@thoughtworks.com wrote:
If it is a small number, A seems the best way to me.
On Friday, December 28, 2012,
By restarting the hive server, your problem should be solved.
I am not sure if we have any other ways of starting the hive server other than:
1. bin/hive --service hiveserver
2. HIVE_PORT= ./hive --service hiveserver
Regards,
Nagarjuna
On Tue, Oct 9, 2012 at 8:10 PM, Uddipan Mukherjee