This is my error (stack trace below). It cannot
find the org.apache.hadoop.security.KerberosName class. But the strange thing is that
I have hadoop-core-1.0.4-SNAPSHOT.jar in the classpath, and the path to
the jar is correct. I've no idea what the problem is. Any help?
java.io.IOException: failure to login
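As a sanity check, here is a minimal program I can run (illustrative only) to
see whether the class is really visible on my classpath, and from which jar it
is loaded:

// Quick classpath check: if this throws ClassNotFoundException, the jar on
// the classpath does not actually contain the class, whatever the path says.
public class KerberosNameCheck {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("org.apache.hadoop.security.KerberosName");
        System.out.println("Loaded from: "
            + c.getProtectionDomain().getCodeSource().getLocation());
    }
}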
Hello Bala,
I bumped into your email by chance.
I'm not sure what you are after, really, because if you want something like
image processing and feature detection then hadoop's mailing list is probably
not the place and you might want to join NVIDIA's or ATI's CUDA / OpenCL forums.
Hope
Hi,
The free memory might be low, just because GC hasn't reclaimed what it can.
Can you just try reading in the data you want to read and see if that works?
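To illustrate the point, freeMemory() is only meaningful right after a
collection; something like this (illustrative only - System.gc() is just a
hint to the JVM) shows the difference:

// Compare reported free memory before and after an explicit GC hint.
public class FreeMemoryCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("free before GC: " + rt.freeMemory() / (1024 * 1024) + " MB");
        System.gc(); // only a hint, not a guarantee
        System.out.println("free after GC:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}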
Thanks
Hemanth
On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
io.sort.mb = 256 MB
Can I suggest an answer of "yes, but you probably don't want to"?
As a typical user of Hadoop, you would not do this. Hadoop already chooses
the best server to do the work based on the location of the data (a server
that is available to do work and also has the data locally will generally be
Hi Hemanth,
I tried out your suggestion, loading a 420 MB file into memory. It threw a Java
heap space error.
I am not sure where the 1.6 GB of configured heap went.
On Mon, Mar 25, 2013 at 12:01 PM, Hemanth Yamijala
yhema...@thoughtworks.com wrote:
Hi,
The free memory might be low, just
From your description, "split the data into chunks, feed the chunks to the
application, and merge the processed chunks to get A back" is just suited to
the MapReduce paradigm. First you can feed the split chunks to the Mapper, then
merge the processed chunks at the Reducer. Why did you not use MapReduce?
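A minimal skeleton of the idea (the identity logic below is just a placeholder
for your real per-chunk processing and merging):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: process one chunk, emit it under a common key so the reducer
// can merge everything back into A.
public class ChunkMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text chunk, Context context)
            throws IOException, InterruptedException {
        context.write(new Text("A"), chunk); // replace with real processing
    }
}

// Reducer: merge the processed chunks back together.
class ChunkReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> chunks, Context context)
            throws IOException, InterruptedException {
        StringBuilder merged = new StringBuilder();
        for (Text c : chunks) {
            merged.append(c.toString());
        }
        context.write(key, new Text(merged.toString()));
    }
}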
I just submitted the following patch; reviews are welcome.
-- Forwarded message --
From: Fengdong Yu (JIRA) j...@apache.org
Date: Mar 25, 2013 6:07 PM
Subject: [jira] [Created] (HDFS-4631) Support customized call back method
during failover automatically.
To:
Thanks Michel! Looks like that’ll do the trick.
It wasn’t clear originally from the docs that context.getCounter(groupName,
counterName) will create the group and counter if they don’t exist.
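In other words, this is all that is needed inside the mapper (the group and
counter names below are arbitrary strings I made up):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) {
        // Group and counter are created on first use - no registration needed.
        context.getCounter("MyApp", "RECORDS_SEEN").increment(1);
    }
}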
From: Michel Segel [mailto:michael_se...@hotmail.com]
Sent: 22 March 2013 18:27
To:
Hi,
I have a YARN application (Client.java and ApplicationMaster.java) that I
had been compiling and running with hadoop-2.0.0-alpha. I recently
downloaded hadoop-2.0.3-alpha and have been trying to compile and run my
application against it. I am seeing the following exception; can somebody
Hello Yanbo Liang,
My issue is that I have neither data nor an application! I was
looking for an open source benchmark application to test this, preferably one
where the application (or simple test code) is related to the image processing domain
Regards
Bala
From: Yanbo Liang
Have you locally tested the script? What do you mean by "not work"? Do
you not see it loaded, not see it send back proper values, etc.? What,
exactly?
P.S. What version?
P.S. User questions are not to be sent to the developer/issue lists.
They should be sent just to user@hadoop.apache.org. Thanks!
On
Hmm. How are you loading the file into memory? Is it some sort of memory
mapping, etc.? Are they being read as records? Some details of the app will
help
On Mon, Mar 25, 2013 at 2:14 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi Hemanth,
I tried out your
I have a lookup file which I need in the mapper, so I am trying to read the
whole file and load it into a list in the mapper.
For each and every record, I look in this file, which I got from the distributed cache.
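Roughly, the mapper looks like this (the file name is a placeholder; the real
one comes via the distributed cache symlink):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final List<String> lookup = new ArrayList<String>();

    @Override
    protected void setup(Context context) throws IOException {
        // Slurp the whole lookup file into memory once per task.
        BufferedReader in = new BufferedReader(new FileReader("lookup.txt"));
        String line;
        while ((line = in.readLine()) != null) {
            lookup.add(line);
        }
        in.close();
    }
}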
—
Sent from iPhone
On Mon, Mar 25, 2013 at 6:39 PM, Hemanth Yamijala
On Mon, Mar 25, 2013 at 4:29 AM, Tapas Sarangi tapas.sara...@gmail.com wrote:
Hi,
Thanks for the explanation. Where can I find the Java code for the balancer
that utilizes the threshold value, so I can calculate it myself as you
mentioned? I think I understand your calculation, but would like to see
Hi Azuryy,
Do you have detailed steps for what you did to make MRv1 work with HDFSv2?
Thanks,
Mounir
On Mon, 2013-03-25 at 13:39 +0800, Azuryy Yu wrote:
Thanks Harsh!
I used -Pnative and got it.
I am compiling the source code. I made MRv1 work with HDFSv2 successfully.
On Mar 25, 2013 12:56 PM, Harsh J
What's your fsimage size? If it's too high you would want to control
the checkpoint transfer bandwidth to not affect the load at the NN.
This is available via the JIRA HDFS-1457.
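(If I remember right, the knob that JIRA adds is
dfs.image.transfer.bandwidthPerSec in hdfs-site.xml - bytes per second, with 0
meaning unthrottled.)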
On Mon, Mar 25, 2013 at 8:13 PM, Ivan Tretyakov
itretya...@griddynamics.com wrote:
Hi!
We see DataNode heartbeat
Thanks Harsh!
My image size is about 3.1 GB.
Yes, I think the feature from HDFS-1457 is what I need, but unfortunately it is
not available in the version of Hadoop we use.
What kind of risks are posed by these peaks?
On Mon, Mar 25, 2013 at 7:31 PM, Harsh J ha...@cloudera.com wrote:
What's your fsimage
Worst case if it puts too much strain: randomly failing clients and
missing DN heartbeats leading to unnecessary dead node appearances.
Worth using a release with HDFS-1457 in it, or patching your NN/SNN for
that - it helps shape those graphs.
On Mon, Mar 25, 2013 at 9:59 PM, Ivan Tretyakov
Hi
I am trying to create my own Application Master. I have followed this
tutorial
Hi,
I kind of cleared the logs at /var/logs with rm -rf. I know this was a
stupid move. Now the cluster does not start, throwing an error:
HTTP ERROR 403
Problem accessing /cmf/process/920/logs. Reason:
Server returned HTTP response code:500 for URL
Rohit,
Look at any of the other data nodes to obtain the directory structure of
/var/log.
You will need to recreate the directories like hadoop-hdfs, hive, hbase
and the like. Make sure you chown them with the respective user:group as
shown in the datanode log directory. This should allow directory
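For example (the user and group names here are guesses - copy whatever a
healthy node shows): mkdir -p /var/log/hadoop-hdfs followed by
chown hdfs:hdfs /var/log/hadoop-hdfs, and likewise for hive, hbase and the
rest.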
Hi Bala,
A standard benchmark program for mapreduce is terasort, which is included
in the hadoop examples jar. You can generate data for it using teragen,
which runs a map-only job:
hadoop jar <path-to-examples-jar> teragen <number of records> <directory to put them in>
and then sort the data using
hadoop jar <path-to-examples-jar> terasort <input directory> <output directory>
On Mar 25, 2013, at 10:27 AM, blah blah wrote:
Hi
I am trying to create my own Application Master. I have followed this
tutorial
http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
However, I have a problem with reading the AM jar as a resource
Hmm... an easy way to debug this would be to add a LOG statement in
NodeHealthScriptRunner.init to print out the args you are getting from the
config.
Is there a chance you can try this, recompile, and re-run?
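Something along these lines (how NodeHealthScriptRunner holds these internally
may differ; the opts property name is the one from your thread):

// Illustrative LOG statement for NodeHealthScriptRunner.init:
LOG.info("health script: " + conf.get("yarn.nodemanager.health-checker.script.path")
    + ", opts: " + conf.get("yarn.nodemanager.health-checker.script.opts"));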
thanks,
Arun
On Mar 25, 2013, at 11:14 AM, Tucker wrote:
Does anyone have a working
More importantly, second and subsequent accesses of the file in the DC are
guaranteed to be local disk I/O.
On Mar 24, 2013, at 3:00 AM, Alberto Cordioli wrote:
Thanks for your reply Harsh.
So if I want to read a simple text file, choosing whether to use
DistributedCache or HDFS becomes just a
Hi,
I tried to read a file from HDFS by using the code below. I read the bytes,
transform them into a string, and then transform back to a byte array again. At
this point, even though I did nothing besides transforming the string to an
array and back, my file gets corrupted. The special characters are changed with
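The round trip in question boils down to this; if the bytes are not text in
the charset being used (or new String(bytes) picks up the platform default
charset), the corruption is expected:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RoundTrip {
    public static void main(String[] args) {
        // "é" in UTF-8, plus one stray non-UTF-8 byte.
        byte[] original = {(byte) 0xC3, (byte) 0xA9, (byte) 0xFF};
        String asText = new String(original, StandardCharsets.UTF_8);
        byte[] back = asText.getBytes(StandardCharsets.UTF_8);
        // Prints false: the invalid byte was silently replaced during decoding.
        System.out.println(Arrays.equals(original, back));
    }
}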
Hi,
Each time my MR job is run, a directory is created on the TaskTracker
under mapred/local/taskTracker/hadoop/distcache (based on my
configuration).
I looked at the directory today, and it's hosting thousands of
directories and more than 8 GB of data.
Is there a way to automatically
YARN does not seem to be checking for a fully qualified path when you
pass it yours and ends up breaking. The problem is easily reproducible
with the two transforming calls from ConverterUtils.
Transform the jarPath to a fully qualified one like so, before using
it anywhere (conf and jarPathString here are whatever you already have in scope):
Path jarPath = FileSystem.get(conf).makeQualified(new Path(jarPathString));
This is encouraging! Thanks a lot,
From: Sandy Ryza [mailto:sandy.r...@cloudera.com]
Sent: 26 March 2013 01:22
To: user@hadoop.apache.org
Subject: Re: Any answer ? Candidate application for map reduce
Hi Bala,
A standard benchmark program for mapreduce is terasort, which is included in
the
It's too complicated. Because Hadoop v2 is wire compatible, MRv1 should
keep the old RPC protocol; you can first compile only Common and HDFS from
Hadoop v2, then add them to the MRv1 classpath and fix the incompatibilities.
On Mar 25, 2013 10:26 PM, Mounir E. Bsaibes m...@linux.vnet.ibm.com wrote:
Hi
Hi,
One option to find what could be taking the memory is to use jmap on the
running task. The steps I followed are:
- I ran a sleep job (which comes in the examples jar of the distribution -
effectively does nothing in the mapper / reducer).
- From the JobTracker UI looked at a map task attempt
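(The eventual jmap invocation, once you have the task JVM's pid from ps, is of
the form jmap -histo <pid>, which prints a histogram of what is occupying the
heap.)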
For your requirement, it's just a matter of writing a customized MR InputFormat
and OutputFormat based on FileInputFormat.
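A bare-bones sketch of the input side (class names made up; the output side
mirrors it with a RecordWriter):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class MyInputFormat extends FileInputFormat<LongWritable, Text> {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // LineRecordReader is a stand-in; return a reader that understands
        // your own record boundaries.
        return new LineRecordReader();
    }
}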
On Mar 25, 2013 1:48 PM, AMARNATH, Balachandar
balachandar.amarn...@airbus.com wrote:
Any answers from any one of you? :)
Regards
Bala
From:
Hi Hemanth,
This sounds interesting; I will try that out on the pseudo cluster. But
the real problem for me is that the cluster is maintained by a third party.
I only have an edge node through which I can submit the jobs.
Is there any other way of getting the dump instead of physically
Hi,
Try running 'hadoop dfsadmin -refreshNodes'! Your NN might have
cached previously set values!
Thanks,
On Tue, Mar 26, 2013 at 10:31 AM, preethi ganeshan
preethiganesha...@gmail.com wrote:
Hi,
I used this script. In core-site.xml I have set
net.topology.script.file.name to this file's
Unfortunately that may not be very feasible. I'll have to see if there's a
way. Are my assumptions about how the yarn.nodemanager.health-
checker.script.opts option should work correct? It took quite a bit of
trial and error to sort this out initially (who knew even an unhealthy node
script run
Hello Sir/Madam,
I have configured HDFS on my Ubuntu cluster. Now
when I try to implement a Map/Reduce program in it, it is not running
because the NameNode automatically becomes inactive in a few seconds. I
tried searching many things but could not get any useful
Are the Iterable values associated with a key sorted in any order? Are
there any configuration options controlling how the input values are
sorted?
I know that the secondary sort way can be used to achieve the same
effect. I am not asking for a workaround.
--
Jingguo
Just search for the logs that are created when you start your cluster.
—
Sent from iPhone
On Tue, Mar 26, 2013 at 10:55 AM, Sagar Thacker sagar7...@gmail.com
wrote:
Hello Sir/Madam,
I have configured HDFS on my Ubuntu cluster. Now
when I try to implement a Map/Reduce
Hello Sagar,
It would be helpful if you could share your logs with us.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Mar 26, 2013 at 10:47 AM, Sagar Thacker sagar7...@gmail.com wrote:
Hello Sir/Madam,
I have configured HDFS