Re: running mapreduce

2014-05-25 Thread Ted Yu
Can you provide a bit more information? Such as the release of Hadoop you're running. BTW did you use the 'ps' command to see the command line for 4383?

Cheers

On Sun, May 25, 2014 at 7:30 AM, dwld0...@gmail.com wrote:
> Hi
> *Once running mapreduce, it will appear an unavailable process.*
> *Ea

Re: Re: running mapreduce

2014-05-25 Thread dwld0...@gmail.com
Hi
It is CDH5.0.0, Hadoop 2.3.0. I found the unavailable process disappeared this morning, but it appears again on the Map and Reduce server after running mapreduce.

#jps
15371 Jps
2269 QuorumPeerMain
15306 -- process information unavailable
11295 DataNode
11455 NodeManager

#ps -ef|grep j

Re: Re: running mapreduce

2014-05-27 Thread dwld0...@gmail.com
: Re: Re: running mapreduce
Hi
It is CDH5.0.0, Hadoop 2.3.0. I found the unavailable process disappeared this morning, but it appears again on the Map and Reduce server after running mapreduce.

#jps
15371 Jps
2269 QuorumPeerMain
15306 -- process information unavailable
11295 DataNode
11455
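A note on the "-- process information unavailable" line above: jps reads per-process performance files under /tmp/hsperfdata_<user>/, and a file left behind by a dead JVM (or one owned by a different user) produces exactly that message. A minimal sketch of how to cross-check it against ps, using the PID from the jps output above (the hsperfdata paths are the JDK's convention, not something stated in this thread):

```shell
# jps lists JVMs from /tmp/hsperfdata_<user>/<pid> files. A stale file
# (dead JVM, or a JVM run as another user) shows as
# "-- process information unavailable". Verify against ps:
PID=15306
ls /tmp/hsperfdata_* 2>/dev/null      # leftover perf-data files, if any
if ps -fp "$PID" >/dev/null 2>&1; then
  echo "PID $PID is a live process"
else
  echo "PID $PID is not running; its hsperfdata entry is likely stale"
fi
```

If the file belongs to a JVM started as another user (e.g. a daemon run as hdfs or yarn), running jps as that user, or as root, shows the full command line instead.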

Re: Running MapReduce jobs in batch mode on different data sets

2015-02-21 Thread Artem Ervits
Take a look at Apache Oozie.

Artem Ervits

On Feb 21, 2015 6:35 AM, "tesm...@gmail.com" wrote:
> Hi,
>
> Is it possible to run jobs on Hadoop in batch mode?
>
> I have 5 different datasets in HDFS and need to run the same MapReduce
> application on these datasets one after the other.
>
> Righ
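Oozie is the right tool when you need dependencies, retries, or scheduling. If strictly sequential execution is the only requirement, a plain shell loop over `hadoop jar` also works. A minimal sketch; the jar name, main class, and HDFS paths are hypothetical placeholders, and the `echo` makes it a dry run (remove it to actually submit):

```shell
# Run the same MapReduce job over five HDFS datasets, one after the other.
# Dry run: each iteration only prints the command it would submit.
for ds in set1 set2 set3 set4 set5; do
  echo hadoop jar analytics.jar com.example.AnalyticsJob \
    "hdfs:///data/input/$ds" "hdfs:///data/output/$ds"
done
```

Because `hadoop jar` blocks until the job finishes, the loop naturally runs the datasets one after the other; add `|| break` after the command to stop on the first failure.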

Re: running mapreduce on different filesystems as input and output locations

2017-03-31 Thread sudhakara st
It is not possible to write to S3 if you use context.write(), but it is possible if you open an S3 file in the reducer and write to it yourself. Create an output stream to an S3 file in the reducer's *setup()* method, like:

FSDataOutputStream fsStream = FileSystem.create(to s3);
PrintWriter writer = new PrintWriter(fsStream);
wri
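A fuller sketch of the approach described above, assuming the s3a filesystem is configured; the bucket name, output path, and key/value types are hypothetical, not from the thread. The stream is opened once in setup(), written in reduce(), and closed in cleanup():

```java
import java.io.IOException;
import java.io.PrintWriter;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class S3WritingReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  private PrintWriter writer;

  @Override
  protected void setup(Context context)
      throws IOException, InterruptedException {
    // One S3 object per reduce task, named after the task id so
    // parallel reducers do not collide (hypothetical bucket/path).
    Path out = new Path("s3a://my-bucket/output/part-"
        + context.getTaskAttemptID().getTaskID().getId());
    FileSystem fs = out.getFileSystem(context.getConfiguration());
    FSDataOutputStream fsStream = fs.create(out);
    writer = new PrintWriter(fsStream);
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values,
      Context context) {
    long sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    // Write straight to the S3 stream instead of context.write().
    writer.println(key + "\t" + sum);
  }

  @Override
  protected void cleanup(Context context) {
    // Closing flushes the stream; for s3a this uploads the object.
    writer.close();
  }
}
```

Note the trade-off: bypassing context.write() means the MapReduce output committer no longer manages these files, so a failed and re-run task attempt can leave partial or duplicate objects behind.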