Hi,
I understand how file upload works on HDFS: the client asks the NameNode
for a write pipeline through the DataNodes, writing the file in 64 MB blocks.
I want to change the HDFS source code so that the client can have
multiple pipelines open in parallel, where I push the data to a pipeline based
Can you share the code?
sent from mobile
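For context, below is a minimal sketch of the normal client-side write path, assuming the Hadoop client libraries are on the classpath and a reachable cluster (the `hdfs://namenode:8020` URI is a placeholder). Note that in stock HDFS it is the client (via `DFSOutputStream` and its `DataStreamer` thread in the HDFS source), not the DataNode, that obtains block allocations from the NameNode and streams packets down a pipeline of DataNodes, one pipeline per block at a time; that is the code you would need to modify to open several pipelines in parallel.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; replace with your cluster's URI.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        FileSystem fs = FileSystem.get(conf);
        // fs.create() returns a stream backed by DFSOutputStream, which
        // builds the DataNode pipeline and streams packets block by block.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"))) {
            out.writeBytes("hello hdfs\n");
        }
    }
}
```

This requires a running HDFS cluster, so it is a sketch rather than something runnable standalone.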
On Nov 1, 2013 7:06 AM, "unmesha sreeveni" wrote:
>
> Thanks Steve Loughran and Amr Shahin.
> Amr Shahin, I referred to
> http://my.safaribooksonline.com/book/databases/hadoop/9780596521974/serialization/id3548156
> and it is the same thing. But my toString is
Is core-site.xml in your Eclipse classpath?
The directory that contains the site XMLs should be on the classpath, not
the XML files directly.
Also make sure fs.defaultFS points to the correct HDFS path.
Regards,
Vinayakumar B
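To illustrate the advice above: put the directory containing the site XMLs (e.g. an assumed `conf/` directory) on the Eclipse classpath, with a core-site.xml along these lines (the host and port are placeholders for your NameNode address):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: add the directory containing this file to the
     classpath, not the file itself -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder host:port; point this at your NameNode -->
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>
```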
On Nov 2, 2013 5:21 PM, "Harsh J" wrote:
Your job configuration isn't picking up or passing the right default
filesystem (fs.default.name or fs.defaultFS) before submitting the job. As
a result, the unconfigured default, the local filesystem, is being used
for paths you intended to resolve on HDFS.
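One way to address this (a sketch, assuming the Hadoop 2.x Job API; the URI and job name are placeholders) is to set the default filesystem explicitly on the job's Configuration before submission, so input/output Paths without a scheme resolve against HDFS rather than the local filesystem:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
// Current key and its deprecated alias; hdfs://namenode:8020 is a
// placeholder for your NameNode address.
conf.set("fs.defaultFS", "hdfs://namenode:8020");
conf.set("fs.default.name", "hdfs://namenode:8020");
Job job = Job.getInstance(conf, "example-job");
```

Alternatively, fully qualify each path (e.g. `hdfs://namenode:8020/user/foo/input`) so it does not depend on the default at all.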
On Friday, November 1, 2013, Oma