Hi Tom
I'm curious: are you the author of the Hadoop Definitive Guide?
Sorry to ask an unrelated question.
Regards
Sent from my iPhone
On 2013-4-19, at 20:51, Tom White t...@cloudera.com wrote:
Hi Amit,
It is a bug, fixed by
https://issues.apache.org/jira/browse/HADOOP-6103, although the
Hi Everyone
I am testing my MR program with MRUnit; its version
is mrunit-0.9.0-incubating-hadoop2. My Hadoop version is 1.0.4.
The error trace is below:
java.lang.IncompatibleClassChangeError: Found class
org.apache.hadoop.mapreduce.TaskInputOutputContext, but interface was
expected
at
Are you using Fuse for mounting HDFS ?
On Fri, Apr 19, 2013 at 4:30 PM, lijinlong wakingdrea...@163.com wrote:
I mounted HDFS to a local directory for storage, that is /mnt/hdfs. I can do
basic file operations such as create, remove, copy, etc. just using Linux
commands and the GUI. But when I tried
As this is an HBase-specific question, it would be better to ask it
on the HBase user mailing list.
Thanks
Hemanth
On Fri, Apr 19, 2013 at 10:46 PM, Adrian Acosta Mitjans
amitj...@estudiantes.uci.cu wrote:
Hello:
I'm working on a project, and I'm using HBase to store the data,
Hi,
If your goal is to use the new API, I am able to get it to work with the
following maven configuration:
<dependency>
  <groupId>org.apache.mrunit</groupId>
  <artifactId>mrunit</artifactId>
  <version>0.9.0-incubating</version>
  <classifier>hadoop1</classifier>
</dependency>
If I
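For illustration, a minimal MRUnit test sketch using the new (mapreduce) API with the dependency above; WordCountMapper is a hypothetical mapper under test, not something from this thread:

  import java.io.IOException;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mrunit.mapreduce.MapDriver;
  import org.junit.Test;

  public class WordCountMapperTest {
    @Test
    public void emitsOneCountPerToken() throws IOException {
      // Note the mapreduce (not mapred) MapDriver: mixing the two API
      // generations is what triggers IncompatibleClassChangeError.
      MapDriver<LongWritable, Text, Text, IntWritable> driver =
          new MapDriver<LongWritable, Text, Text, IntWritable>()
              .withMapper(new WordCountMapper());
      driver.withInput(new LongWritable(0), new Text("hello"))
            .withOutput(new Text("hello"), new IntWritable(1))
            .runTest();
    }
  }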
Yes, I tried both FUSE and NFS, but both failed. Have you done this before? And do
you know why?
Date: Sat, 20 Apr 2013 15:48:36 +0530
Subject: Re: Create and write files on mounted HDFS via java api
From: yhema...@thoughtworks.com
To: user@hadoop.apache.org
Are you using Fuse for mounting HDFS ?
Sorry - no. I just wanted to know if you were using FUSE, because I knew of
no other way of mounting HDFS. Basically I was wondering if some libraries
needed to be on the system path for the Java programs to work.
From your response it looks like you aren't using FUSE. So what are you using
to mount?
Hi Hemanth
I did use contrib/fuse-dfs to mount HDFS, which is built on FUSE; I did it as
described here: http://wiki.apache.org/hadoop/MountableHDFS. There are various
ways of mounting HDFS, as that URL describes. Besides FUSE I tried hdfs-nfs-proxy, but
both failed. I just wonder if mounted HDFS supports the way I
Hi,
I am new to Hadoop. From the Hadoop downloads I can find 4 versions:
1.0.x / 1.1.x / 2.x.x / 0.23.x
May I know which one is the latest stable version that provides Namenode
high availability for production environment?
regards
+ user@
Please do continue the conversation on the mailing list, in case others
like you can benefit from / contribute to the discussion.
Thanks
Hemanth
On Sat, Apr 20, 2013 at 5:32 PM, Hemanth Yamijala yhema...@thoughtworks.com
wrote:
Hi,
My code is working with having
2.x.x provides NN high availability.
http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html
However, it is in alpha stage right now.
Thanks
hemanth
On Sat, Apr 20, 2013 at 5:30 PM, Ascot Moss ascot.m...@gmail.com wrote:
Hi,
I am new to
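For reference, a minimal hdfs-site.xml fragment sketching the QJM-based HA setup described on that page (the nameservice ID and journal node hosts are placeholders):

  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
  </property>

See the linked document for the remaining required properties (NN RPC addresses, failover proxy provider, fencing).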
2.0.4-alpha is being released.
To my knowledge it passed the votes yesterday.
FYI
On Apr 20, 2013, at 5:10 AM, Hemanth Yamijala yhema...@thoughtworks.com wrote:
2.x.x provides NN high availability.
Hello,
Can anyone help me with the following issue?
Writing intermediate (key, value) pairs to a file and reading them again:
let us say I have to write each intermediate pair received at the reducer to a
file, then read it back as (key, value) pairs and use it for further processing.
I found the IFile.java file, which has a reader
Writing a map-only job will do the trick for you.
On Sat, Apr 20, 2013 at 8:43 AM, Vikas Jadhav vikascjadha...@gmail.comwrote:
Hello,
Can anyone help me with the following issue?
Writing intermediate (key, value) pairs to a file and reading them again:
let us say I have to write each intermediate pair
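A minimal driver sketch of that map-only suggestion (new API; MyMapper and the paths are hypothetical): with zero reduce tasks, each mapper's (key, value) pairs go straight to the output files, which a later job can read back.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

  public class DumpPairsDriver {
    public static void main(String[] args) throws Exception {
      Job job = new Job(new Configuration(), "dump-pairs");
      job.setJarByClass(DumpPairsDriver.class);
      job.setMapperClass(MyMapper.class);   // hypothetical mapper
      job.setNumReduceTasks(0);             // map-only: no shuffle, no reduce
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(Text.class);
      // SequenceFile output keeps the pairs easy to read back as key/values
      job.setOutputFormatClass(SequenceFileOutputFormat.class);
      FileInputFormat.addInputPath(job, new Path("pairs-in"));
      FileOutputFormat.setOutputPath(job, new Path("pairs-out"));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }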
How many intermediate keys? If small enough, you can keep them in memory. If
large, you can just wait for the job to finish and siphon them into your job as
input with the MultipleInputs API.
On Apr 20, 2013, at 10:43 AM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hello,
Can anyone help
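A sketch of that MultipleInputs approach (old mapred API, matching Hadoop 1.x; the mapper classes and paths are hypothetical), as a fragment inside the next job's driver:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.SequenceFileInputFormat;
  import org.apache.hadoop.mapred.TextInputFormat;
  import org.apache.hadoop.mapred.lib.MultipleInputs;

  JobConf conf = new JobConf(NextJobDriver.class);
  // the original data, handled by one mapper...
  MultipleInputs.addInputPath(conf, new Path("original-input"),
      TextInputFormat.class, OriginalMapper.class);
  // ...plus the finished job's (key, value) output, handled by another
  MultipleInputs.addInputPath(conf, new Path("pairs-out"),
      SequenceFileInputFormat.class, PairsMapper.class);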
The program (the one in the SO question) seems to work alright for me
when run on a 2.x FUSE-mounted HDFS:
➜ ~ java Test
true
Not currently sure what the presented error on SO means, but
perhaps it was a permissions error? You can check the fuse_dfs output
or the NN log for more info on
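For context, a hypothetical reconstruction of such a test program (the actual code from the SO question is not reproduced here); it creates a file through the mount point and prints whether that succeeded:

  import java.io.File;
  import java.io.IOException;

  public class Test {
      public static void main(String[] args) throws IOException {
          // /mnt/hdfs is assumed to be the fuse_dfs mount point
          File f = new File("/mnt/hdfs/tmp/test.txt");
          System.out.println(f.createNewFile()); // prints true on success
      }
  }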
Do you also ensure setting your desired input format class via the
setInputFormat*(…) API?
On Sat, Apr 20, 2013 at 6:48 AM, yypvsxf19870706
yypvsxf19870...@gmail.com wrote:
Hi
I thought it would be different when adopting NLineInputFormat.
So here is my conclusion on the maps distribution
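For reference, the kind of call Harsh is asking about (new API; the job setup is a hypothetical fragment, and the exact lines-per-map key name varies across Hadoop versions) — without it the job silently falls back to the default TextInputFormat:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

  Configuration conf = new Configuration();
  // new-API key; each map task gets 10 lines of input
  conf.setInt("mapreduce.input.lineinputformat.linespermap", 10);
  Job job = new Job(conf, "nline-example");
  job.setInputFormatClass(NLineInputFormat.class); // the easily missed step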
Hello Raj,
Could you show me the lines where you have set the i/o paths?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Sun, Apr 21, 2013 at 3:34 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Hello All,
I am very new to Hadoop. Just installed and ran the Wordcount
All,
I have posted this question to the CDH ML, but I guess I can post it here
because it's a general Hadoop question.
When the NN or JT gets the rack info, I guess it stores the info in memory.
Can I ask where in the JVM memory it will store the results (perm gen?)?
I am getting cannot
Tariq,
Thanks for the quick reply.
I executed the sample example files that came with installation.
bin/hadoop jar hadoop-examples-1.0.4.jar wordcount input1 output1
I created 'input1' myself with a hadoop command. I don't remember the path I gave.
Thanks,
Raj
The problem is probably not related to the JVM memory so much as the Linux
memory manager. The exception is in
java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
which would imply this is happening when trying to create a new process.
The initial malloc for the new process space is being denied by
OK, do you remember the command?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Sun, Apr 21, 2013 at 4:45 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Tariq,
Thanks for the quick reply.
I executed the sample example files that came with installation.
bin/hadoop jar
bin/hadoop dfs -mkdir input1
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Saturday, April 20, 2013 7:22 PM
Subject: Re: Very basic question
OK, do you remember the
Try looking into the /user dir inside your HDFS.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Sun, Apr 21, 2013 at 4:52 AM, Mohammad Tariq donta...@gmail.com wrote:
OK, do you remember the command?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On
This is my folder structure. Can you help me?
/Users/hadoop/hadoop-1.0.4
hadoop$ ls -lrt
total 14960
drwxr-xr-x 3 hadoop staff 102 Oct 3 2012 share
drwxr-xr-x 9 hadoop staff 306 Oct 3 2012 webapps
drwxr-xr-x 52 hadoop staff 1768 Oct 3 2012 lib
-rw-r--r-- 1 hadoop
Do this:
/Users/hadoop/hadoop-1.0.4/bin/hadoop fs -lsr /user
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Sun, Apr 21, 2013 at 5:07 AM, Raj Hadoop hadoop...@yahoo.com wrote:
This is my folder structure. Can you help me?
/Users/hadoop/hadoop-1.0.4
hadoop$ ls
Thanks Tariq. Following is the list. Where are the actual directories? How
can I traverse to the directories? Can I?
2013-04-20 19:45:18.772 java[3742:1603] Unable to load realm info from
SCDynamicStore
drwxr-xr-x - hadoop supergroup 0 2013-04-20 18:05 /user/hadoop
drwxr-xr-x -
Like Aaron says, this problem is related to the Linux memory manager.
You can tune it using vm.overcommit_memory=1.
Before making any change, read all these resources first:
http://www.thegeekstuff.com/2012/02/linux-memory-swap-cache-shared-vm/
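For reference, the setting can be applied at runtime with (root required):

  sysctl -w vm.overcommit_memory=1

and persisted by adding vm.overcommit_memory = 1 to /etc/sysctl.conf. Do read the linked material first: mode 1 disables overcommit checks entirely.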
All dirs start with drw in your example
Sent from my iPhone
On Apr 20, 2013, at 4:46 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Thanks Tariq. Following is the list. Where are the actual directories? How
can I traverse to the directories? Can I?
2013-04-20 19:45:18.772 java[3742:1603]
And don't forget to look at ulimit settings as well.
Sent from my iPhone
On Apr 20, 2013, at 5:07 PM, Marcos Luis Ortiz Valmaseda
marcosluis2...@gmail.com wrote:
Like Aaron says, this problem is related to the Linux memory manager.
You can tune it using vm.overcommit_memory=1.
Before making
You could look at the ChainReducer Javadoc, which meets your requirement.
-- Sent from my Sony mobile.
On Apr 20, 2013 11:43 PM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hello,
Can anyone help me with the following issue?
Writing intermediate (key, value) pairs to a file and reading them again:
let us say I
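A sketch of that ChainReducer suggestion (old mapred API; the reducer and mapper classes are hypothetical): mappers chained after the reducer post-process each reduced pair inside the same job, so nothing has to be written out and re-read by hand.

  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.lib.ChainReducer;

  JobConf job = new JobConf(ChainDriver.class);
  // the single real reducer for the job
  ChainReducer.setReducer(job, MyReducer.class,
      Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));
  // a mapper applied to every (key, value) pair the reducer emits
  ChainReducer.addMapper(job, PostProcessMapper.class,
      Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));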
Hi Harsh
Thank you for the suggestion. I did miss the expression to set the input format.
Now it works.
Thanks
Regards
Sent from my iPhone
On 2013-4-21, at 1:04, Harsh J ha...@cloudera.com wrote:
Do you also ensure setting your desired input format class via the
setInputFormat*(…) API?
On Sat,