Hi,
I am running a single node hadoop cluster and also the HDFS NFS gateway on the
same node. HDFS processes are running under the context of user “hadoop” and
NFS gateway is running under the context of user “nfsserver”. I mounted the NFS
export on the same machine as the root user. Now for the completed file created.
Is there any way to have synchronization between two independent MapReduce
jobs?
I think ZooKeeper (ZK) can handle complex synchronization like this, e.g.
mutexes, master election, etc.
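Short of running ZK, a common lightweight pattern is to have the downstream job poll for the _SUCCESS marker that a completed MapReduce job writes into its output directory. Below is a minimal single-machine sketch of that idea (the paths, timeout, and the thread standing in for the upstream job are illustrative assumptions, not Hadoop API calls); ZK's lock and leader-election recipes are the stronger tool when you need more than "wait until job A is done".

```python
import os
import tempfile
import threading
import time

def upstream_job(output_dir):
    """Stand-in for a MapReduce job: write output, then the _SUCCESS marker."""
    with open(os.path.join(output_dir, "part-00000"), "w") as f:
        f.write("key\t1\n")
    # MapReduce writes _SUCCESS only after all tasks have finished,
    # so its presence means the output is complete.
    open(os.path.join(output_dir, "_SUCCESS"), "w").close()

def wait_for_success(output_dir, timeout=10.0, poll=0.1):
    """Block until the upstream job's _SUCCESS marker appears, or time out."""
    marker = os.path.join(output_dir, "_SUCCESS")
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(marker):
            return True
        time.sleep(poll)
    return False

out = tempfile.mkdtemp()
t = threading.Thread(target=upstream_job, args=(out,))
t.start()
ready = wait_for_success(out)  # the downstream job blocks here
t.join()
print(ready)  # True
```

The same polling works against HDFS via FileSystem.exists(); the marker-file approach only answers "is job A finished", whereas ZK also covers mutual exclusion between concurrently running jobs.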
Hope this helps,
Wangda Tan
On Tue, Aug 12, 2014 at 10:43 AM, saurabh jain wrote:
Hi Folks ,
I have been writing a map-reduce application where I have an input
file containing records, and every field in a record is separated by some
delimiter.
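The map-side parsing for this can be as simple as splitting each line on the delimiter and pulling out the requested columns. A minimal sketch, where the record layout, delimiter, and master table are illustrative assumptions and not from the original mail:

```python
def extract_columns(line, delimiter, columns):
    """Split one delimited record and return the values at the requested column indexes."""
    fields = line.rstrip("\n").split(delimiter)
    return [fields[i] for i in columns]

# Hypothetical record layout: id|name|dept|state, with '|' as the delimiter.
record = "101|alice|engineering|NY"
print(extract_columns(record, "|", [1, 3]))  # ['alice', 'NY']

# The extracted values can then be looked up against the master properties
# file, loaded here as a plain dict purely for illustration.
master = {"NY": "New York", "CA": "California"}
print([master.get(v, "<missing>") for v in extract_columns(record, "|", [3])])  # ['New York']
```

In a real job the master table would typically be shipped to each mapper (e.g. via the distributed cache) and loaded once in setup rather than per record.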
In addition to this, the user will also provide a list of columns that he wants
to look up in a master properties file (stored in
hope this helps :)
Best,
B
On Mon, Apr 8, 2013 at 7:29 AM, Saurabh Jain
saurabh_j...@symantec.com wrote:
Hi All,
I have set up a single-node cluster (release hadoop-1.0.4). Following is the
configuration used -
core-site.xml :-
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
masters:-
localhost
slaves:-
localhost
I am able to successfully format the