Thanks Amareshwari,
here is the posting:
The *nopipe* example needs more documentation. It assumes that it is
run with the InputFormat from src/test/org/apache/*hadoop*/mapred/*pipes*/
*WordCountInputFormat*.java, which has a very specific input split
format. By running with a TextInputForm
Sorry; as usual, please find the attachment here.
Thanks & best Regards,
Adarsh Sharma
--- Begin Message ---
Dear all,
Today I faced a problem while running a map-reduce job in C++. I am not
able to find the reason for the error below:
11/03/30 12:09:02 INFO mapred.JobClient:
Here is an answer to your question in an old mail archive:
http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html
On 3/31/11 10:15 AM, "Adarsh Sharma" wrote:
Any update on the error below?
Please advise.
Thanks & best Regards,
Adarsh Sharma
Adarsh Sharma wrote:
> Dear all,
>
Thanks a lot for such a deep explanation.
I have done it now, but it doesn't help with the original problem for
which I'm doing this.
Please comment if you have any ideas. I attached the problem.
Thanks & best Regards,
Adarsh Sharma
Matthew Foley wrote:
Hi Adarsh,
see if the inform
Adarsh,
Your command should be:
patch -p0 < fix-test-pipes.patch
See http://wiki.apache.org/hadoop/HowToContribute for details on how to
contribute.
Thanks
Amareshwari
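Amareshwari's point is that patch reads the diff from standard input and must be run from the directory the patch's paths are relative to. A toy sketch of the mechanism follows; the demo tree, file names, and contents are made up for illustration. For the real patch, run the same command from the top of the extracted hadoop-0.20.2 tree.

```shell
# Toy demonstration of `patch -p0`: with -p0, the file names in the
# patch are taken as-is, so you must run patch from the directory those
# names are relative to, feeding the patch file on stdin.
mkdir -p demo/src && cd demo
printf 'hello\n' > src/greeting.txt
cat > fix.patch <<'EOF'
--- src/greeting.txt
+++ src/greeting.txt
@@ -1 +1 @@
-hello
+world
EOF
patch -p0 < fix.patch
cat src/greeting.txt    # now reads "world"
```

The error in the thread came from passing the directory itself as an argument; patch only accepts a regular file as its target, so the diff must come in via stdin (or -i).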
On 3/31/11 9:54 AM, "Adarsh Sharma" wrote:
Thanks Harsh,
I am trying the patch command but the error below occurs:
[root@ws-t
Hi Adarsh,
see if the information at http://wiki.apache.org/hadoop/HowToContribute is
helpful to you. I'll walk you thru the typical process, but first a couple
questions:
Did you get a tar file for the whole source tree of Hadoop, or only the binary
distribution? To apply patches you must ge
This might help
http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/24351
See the last comment. It was done for mapred.local.dir, but I guess it
will work for hadoop.tmp.dir as well.
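For reference, both properties are set the same way, via a site XML file. A hypothetical fragment (the path is an assumption; adjust for your machine), placed inside the <configuration> element of conf/core-site.xml:

```xml
<!-- Inside <configuration> in conf/core-site.xml; the path is an example. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/youruser/hadoop-tmp</value>
</property>
```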
On Wed, Mar 30, 2011 at 6:23 PM, modemide wrote:
> I'm a little confused as to why you're putting
>
Dear all,
Today I faced a problem while running a map-reduce job in C++. I am not
able to find the reason for the error below:
11/03/30 12:09:02 INFO mapred.JobClient: Task Id :
attempt_201103301130_0011_m_00_0, Status : FAILED
java.io.IOException: pipe child exception
Any update on the error below?
Please advise.
Thanks & best Regards,
Adarsh Sharma
Adarsh Sharma wrote:
Dear all,
Today I faced a problem while running a map-reduce job in C++. I am
not able to find the reason for the error below:
11/03/30 12:09:02 INFO mapred.JobClient: T
Thanks Harsh,
I am trying the patch command but the error below occurs:
[root@ws-test project]# patch hadoop-0.20.2 fix-test-pipes.patch
patch: File hadoop-0.20.2 is not a regular file -- can't patch
[root@ws-test project]# patch -R hadoop-0.20.2 fix-test-pipes.patch
patch: File hadoop-0.
There is a utility available for Unix called 'patch'. You can use that
with a suitable -p(num) argument (man patch, for more info).
On Thu, Mar 31, 2011 at 9:41 AM, Adarsh Sharma wrote:
> Dear all,
>
> Can someone please tell me how to apply a patch to the hadoop-0.20.2 package?
>
> I attached the pa
Sorry, just check the attachment now.
Adarsh Sharma wrote:
Dear all,
Can someone please tell me how to apply a patch to the hadoop-0.20.2 package?
I attached the patch.
Please find the attachment. I just followed the steps below for Hadoop:
1. Download Hadoop-0.20.2.tar.gz
2. Extract the file.
3. Set
Dear all,
Can someone please tell me how to apply a patch to the hadoop-0.20.2 package?
I attached the patch.
Please find the attachment. I just followed the steps below for Hadoop:
1. Download Hadoop-0.20.2.tar.gz
2. Extract the file.
3. Set Configurations in site.xml files
Thanks & best Regards,
Adar
Hi Shrinivas,
Yes, this is the behavior of the task logs when JVM reuse is enabled. In the
log directories of the other tasks you should notice a "log index" file that
specifies the byte offsets into the log files where each task starts and
stops. When viewing logs through the web UI, it will use thes
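A rough illustration of what those byte offsets make possible: several tasks append to one shared log under JVM reuse, and a per-task (offset, length) pair recovers one task's slice. The file name, offsets, and contents below are made up; this is not the actual log.index format.

```shell
# Simplified illustration: three tasks share one log file, and the
# recorded (offset, length) for a task cuts out just its portion.
printf 'task_A lines...task_B lines!!task_C.' > shared_syslog
START=15   # hypothetical offset recorded for task_B
LEN=14     # hypothetical length recorded for task_B
dd if=shared_syslog bs=1 skip="$START" count="$LEN" 2>/dev/null
# prints "task_B lines!!"
```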
It seems that when JVM reuse is enabled, map task log data is not written
to the corresponding log files; log data from certain map tasks gets
appended to log files belonging to other map tasks.
For example, I have a case here where 8 map JVMs are running simultaneously
and all sy
On Thu, Mar 31, 2011 at 12:59 AM, Bill Brune wrote:
> Thanks for that tidbit; it appears to be the problem... Maybe that's a well-known
> issue? Or perhaps it should be added to the setup wiki?
It isn't really a Hadoop issue. See here for what defines a valid
hostname (The behavior of '_' is
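To spell out the gist: per RFC 952 as amended by RFC 1123, a hostname label may contain only letters, digits, and hyphens, and may not begin or end with a hyphen, so '_' is not legal. A small check with hypothetical names:

```shell
# Valid hostname label per RFC 952/1123: letters, digits, hyphens only;
# no underscores; must not start or end with a hyphen.
is_valid_label() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}
is_valid_label "node-01" && echo "node-01: ok"
is_valid_label "my_node" || echo "my_node: rejected (underscore)"
```

Many resolvers and applications are strict about this, which is why an underscore in a node name surfaces as puzzling failures rather than a clear error.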
Thanks for that tidbit; it appears to be the problem... Maybe that's a
well-known issue? Or perhaps it should be added to the setup wiki?
-Bill
On 03/29/2011 09:47 PM, Harsh J wrote:
On Wed, Mar 30, 2011 at 3:59 AM, Bill Brune wrote:
Hi,
I've been running hadoop 0.20.2 for a while now
It's not the sorting, since the sorted files are produced in the output; it's
the mapper not exiting well. So can anyone tell me if it's wrong to write the
mapper's close() function like this?
@Override
public void close() throws IOException {
    helper.CleanUp();
}
Hi,
When I clicked the "Browse the filesystem" link, I was redirected to
http://localhost.localdomain:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/,
which is a broken URL; I think it is related to the domain name of my
server.
I am setting up a pseudo cluster environment.
Regards,
I haven't used 0.21. You can compare the source code of the two versions.
I set these to 1 in the namenode's hdfs-site.xml. I'm not sure you'd want to
do that on a production cluster if it's a big one.
On 3/29/11 7:13 PM, "Rita" wrote:
what about for 0.21 ?
Also, where do you set this? in the data
Hello,
My map tasks are freezing after reaching 100%. I suspect my mapper.close()
function, which does some sorting. Any better suggestion of where I should put
my sorting method? I thought of mapper.close() so that each map task sorts its
own output (which is local) and is hence faster.
output
Harsh:
I found that jvmManager.getPid(...) returned the pid of
MapTaskRunner, but I want to get the task's pid. For example, when I ran the
example randomwrite, the pid of the task doing the writing was 8268, but
jvmManager.getPid(...) seemed to return its parent's pid. I can not figure out the
I'm a little confused as to why you're putting
/pseg/local /...
as the location.
Are you sure that you've been given a folder at the root of the
drive called /pseg/?
Maybe try to ssh to your server and navigate to your datastore folder,
then do "pwd".
That should give you the working direct
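The check modemide describes (navigate to the folder and run pwd) can also be scripted. Here resolve_dir is a hypothetical helper, and /pseg/local is the path used in this thread:

```shell
# Print a directory's absolute path, or a clear message if it is missing.
# Run this on the slave node (e.g. after ssh'ing in) against the
# datastore path you configured.
resolve_dir() { (cd "$1" 2>/dev/null && pwd) || echo "missing: $1"; }
resolve_dir /pseg/local   # absolute path, or "missing: /pseg/local"
```

The absolute path it prints is the value that should appear in the site XML files; a "missing" result means the configured location doesn't actually exist on that node.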
Thank you modemide for your quick response.
Sorry for not being clear... your understanding is right.
I have a machine called grande, and another called pseg. Now I'm using
grande as the master (by filling the masters file with "grande") and pseg as a slave.
The configuration of grande (core-site.xml) is
Ok, so if I understand correctly, you want to change the location of
the datastore on individual computers.
I've tested it on my cluster, and it seems to work. Just for the sake
of troubleshooting, you didn't mention the following:
1) Which computer were you editing the files on
2) which file wer
Hey guys, I'm new here, and recently I've been working on configuring a cluster
with 32 nodes.
However, there are some problems, which I describe below.
The cluster consists of nodes on which I don't have "root" access to configure
as I wish. We only have the /localhost_name/local space to use.
Thus, we only hav