Using a different security mechanism instead of Kerberos/Simple

2014-10-13 Thread Shani Ranasinghe
Hi, While going through the code we learned that, except for simple and Kerberos, there seems to be no way to plug in our own authentication mechanism. Are there any plans to accommodate this change any time soon? Also, to abstract out the security component, is there any way other than rep
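For context, a minimal sketch of how the existing switch between the two built-in mechanisms is driven purely by configuration today (the property `hadoop.security.authentication` and the UserGroupInformation calls are the standard ones; the principal and keytab path are made-up placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class AuthSwitchSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The only accepted values here are "simple" and "kerberos";
        // there is no extension point for plugging in a third mechanism.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Placeholder principal/keytab, for illustration only.
        UserGroupInformation.loginUserFromKeytab(
                "client/host@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getCurrentUser());
    }
}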

Re: Hadoop 2.5.1 - distro not compatible with "quick start"?

2014-10-13 Thread Martin Wunderlich
Thanks a lot, Ted, Mark and Tomas. I have found the required files and managed to get my first run of a MapReduce job completed. The file "hadoop-site.xml" seems to have been renamed to "hdfs-site.xml". Cheers, Martin On 13.10.2014 at 04:42, Tomas Delvechio wrote: > 2014-10-12 12:14 GMT-
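For reference, a small sketch of how the renamed file gets picked up: the HDFS client's HdfsConfiguration registers hdfs-site.xml (and hdfs-default.xml) as default resources on the classpath, so settings that used to live in hadoop-site.xml should now land there. The property printed is just an example key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        // HdfsConfiguration adds hdfs-default.xml and hdfs-site.xml as
        // default resources, so values from hdfs-site.xml on the
        // classpath are visible here.
        Configuration conf = new HdfsConfiguration();
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
    }
}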

read from a hdfs file on the same host as client

2014-10-13 Thread Demai Ni
hi, folks, a very simple question, looking forward to a couple of pointers. Let's say I have an HDFS file: testfile, which only has one block (256MB), and the block has a replica on datanode: host1.hdfs.com (the whole HDFS cluster may have 100 nodes though, and the other 2 replicas are available on other datanod
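A quick way to confirm where the replicas of such a file actually live is the standard FileSystem#getFileBlockLocations call; a minimal sketch (the namenode URI and the path /user/demai/testfile are placeholders):

import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://host1.hdfs.com:8020"), conf);

        Path file = new Path("/user/demai/testfile");   // placeholder path
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            // Each block lists the datanodes holding a replica of it.
            System.out.println("offset " + block.getOffset()
                    + " hosts: " + Arrays.toString(block.getHosts()));
        }
    }
}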

Re: read from a hdfs file on the same host as client

2014-10-13 Thread Shivram Mani
Demai, you are right. HDFS's default BlockPlacementPolicyDefault makes sure one replica of your block is available on the writer's datanode. The replica selection for the read operation is also aimed at minimizing bandwidth/latency and will serve the block from the reader's local node. If you want
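To illustrate the read side: no locality hints are needed in client code, since the client prefers a replica on its local datanode when one exists. A minimal sketch, with placeholder URI and path:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://host1.hdfs.com:8020"), conf);

        // A plain open() is enough: when this runs on a datanode that
        // holds a replica, the read is served from that local replica.
        try (FSDataInputStream in = fs.open(new Path("/user/demai/testfile"))) {
            byte[] buffer = new byte[4096];
            int read = in.read(buffer);
            System.out.println("read " + read + " bytes");
        }
    }
}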

Re: HDFS openforwrite CORRUPT -> HEALTHY

2014-10-13 Thread Vinayak Borkar
Hi Ulul, Thanks for trying. I will try the dev list to see if they can help me with this. Thanks, Vinayak On 10/11/14, 5:33 AM, Ulul wrote: Hi Vinayak, Sorry this is beyond my understanding. I would need to test further to try and understand the problem. Hope you'll find help from someone e

Re: read from a hdfs file on the same host as client

2014-10-13 Thread Demai Ni
Shivram, many thanks for confirming the behavior. I will also turn on short-circuit reads as you suggested. Appreciate the help. Demai On Mon, Oct 13, 2014 at 3:42 PM, Shivram Mani wrote: > Demai, you are right. HDFS's default BlockPlacementPolicyDefault makes > sure one replica of your block is a
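For reference, short-circuit reads are usually enabled in hdfs-site.xml on both the client and the datanodes; a minimal sketch expressed as client-side Configuration settings (the socket path shown is a commonly used location, adjust to your deployment):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ShortCircuitConf {
    public static Configuration withShortCircuit() {
        Configuration conf = new HdfsConfiguration();
        // Normally set in hdfs-site.xml; shown programmatically for brevity.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        // Unix domain socket shared with the datanode; the datanode must be
        // configured with the same path and able to create it on disk.
        conf.set("dfs.domain.socket.path", "/var/lib/hadoop-hdfs/dn_socket");
        return conf;
    }
}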

Redirect Writes/Reads to a particular Node

2014-10-13 Thread Dhiraj Kamble
Hi, Is it possible to redirect writes to one particular node, i.e. always store the primary replica on the same node, and have reads served from this primary node? If the primary node goes down, then Hadoop replication works as per its policy; but when this node comes up it should again become

RE: Redirect Writes/Reads to a particular Node

2014-10-13 Thread Rakesh R
Hi Dhiraj, AFAIK there is a mechanism to pass a set of 'favoredNodes' datanodes that should be favored as targets while creating a file. But this is only considered a hint; sometimes, due to cluster state (for example: the given DN doesn't have sufficient space, isn't available, etc.), the namenode
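A sketch of the hint Rakesh describes, assuming the DistributedFileSystem#create overload that accepts a favoredNodes array (I believe it is available in Hadoop 2.x); the namenode URI, datanode host/port and file path are placeholders, and as noted above the namenode may ignore the hint depending on cluster state:

import java.net.InetSocketAddress;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FavoredNodesWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DistributedFileSystem dfs = (DistributedFileSystem)
                DistributedFileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // Hint: try to place replicas on these datanodes (placeholder host/port).
        InetSocketAddress[] favoredNodes = {
                new InetSocketAddress("primary-dn.example.com", 50010)
        };

        try (FSDataOutputStream out = dfs.create(
                new Path("/data/primary-copy.bin"),        // placeholder path
                FsPermission.getFileDefault(),
                true,                                      // overwrite
                conf.getInt("io.file.buffer.size", 4096),  // buffer size
                (short) 3,                                 // replication
                dfs.getDefaultBlockSize(),                 // block size
                null,                                      // no Progressable
                favoredNodes)) {
            out.writeUTF("hello");
        }
    }
}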