Hi,
I am running a single-node Hadoop cluster and the HDFS NFS gateway on the
same node. The HDFS processes run under the user "hadoop" and the NFS
gateway runs under the user "nfsserver". I mounted the NFS export on the
same machine as the root user. Now whenev
"completed"
> in the same folder. Other mappers can wait for the "completed" file
> to be created.
>
> >> Is there any way to have synchronization between two independent map
> reduce jobs?
> I think ZK can do some complex synchronization here, like a mutex, ma
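[The marker-file handshake described above can be sketched locally; in real
use the touch/test would go through `hadoop fs` against HDFS paths, and the
directory name below is made up:]

```shell
# Sketch of the "completed" marker pattern between two independent jobs.
OUT=$(mktemp -d)   # stand-in for the shared HDFS output folder

# Job 1 (producer): write its output, then drop the marker file last,
# so the marker's existence implies the output is fully written.
echo "results" > "$OUT/part-00000"
touch "$OUT/completed"

# Job 2 (consumer): poll until the marker exists before reading.
while [ ! -e "$OUT/completed" ]; do
  sleep 1
done
cat "$OUT/part-00000"
```

Against HDFS the equivalent calls would be `hadoop fs -touchz` for the
producer and `hadoop fs -test -e` in the polling loop; ZooKeeper becomes
worth the extra moving parts only when you need richer coordination (locks,
barriers, leader election) than a single marker file can express.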
Hi Folks,
I am writing a map-reduce application whose input file contains records,
with every field in a record separated by some delimiter.
In addition, the user will also provide a list of columns that he wants
to look up in a master properties file (stored in HDFS
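[The per-record work the mapper would do, splitting on the delimiter and
keeping only the user-requested columns, can be sketched with awk; the
delimiter, sample records, and column choices below are all made up:]

```shell
# Hypothetical records, '|'-delimited, three fields each.
printf 'alice|30|NYC\nbob|25|SFO\n' > /tmp/records.txt

# Suppose the user asked for columns 1 and 3: split on the delimiter
# and emit just those fields.
awk -F'|' '{ print $1 "," $3 }' /tmp/records.txt
# prints:
#   alice,NYC
#   bob,SFO
```

In the actual job, the extracted values would then be matched against the
master properties file, which a mapper typically loads once in setup (e.g.
via the distributed cache) rather than re-reading per record.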
ng...but I hope this
helps :)
Best,
B
On Mon, Apr 8, 2013 at 7:29 AM, Saurabh Jain
<saurabh_j...@symantec.com> wrote:
Hi All,
I have setup a single node cluster(release hadoop-1.0.4). Following is the
configuration used -
core-site.xml :-
fs.default.name
Hi All,
I have setup a single node cluster(release hadoop-1.0.4). Following is the
configuration used -
core-site.xml :-
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
masters:-
localhost
slaves:-
localhost
I am able to successfully format the NameNode and perform file system
operations