Re: deadNodes in DFSInputStream

2013-12-31 Thread Haosong Huang
Why is the status of HDFS-4273 still "Unresolved"? HDFS-5540 is
fixed now.


On Wed, Jan 1, 2014 at 6:12 AM, Colin McCabe  wrote:

> Take a look at HDFS-4273, which fixes some issues with the read retry
> logic.
>
> cheers,
> Colin
>
> On Tue, Dec 31, 2013 at 1:25 AM, lei liu  wrote:
> > I use HBase-0.94 and CDH-4.3.1.
> > When a RegionServer reads data from its local datanode and the local
> > datanode is dead, the local datanode is added to deadNodes, and the
> > RegionServer reads the data from a remote datanode instead. But when
> > the local datanode comes back to life, the RegionServer still reads
> > from the remote datanode, which reduces RegionServer performance.
> > We need a way to remove the local datanode from deadNodes once the
> > local datanode becomes live again.
> >
> > I can work on this; please give me your advice.
> >
> >
> > Thanks,
> >
> > LiuLei
>
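One way to sketch the fix discussed above is to give deadNodes entries a time-to-live, so a datanode that has recovered gets retried after the interval elapses. This is only an illustrative sketch; the class and method names below are hypothetical and do not match the real DFSInputStream internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: a deadNodes cache whose entries expire after a
 * configurable interval, so a recovered local datanode is retried
 * instead of being treated as dead forever.
 */
public class ExpiringDeadNodes {
    private final long ttlMillis;
    // datanode id -> timestamp (ms) until which the node is considered dead
    private final Map<String, Long> deadUntil = new ConcurrentHashMap<>();

    public ExpiringDeadNodes(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Mark a datanode dead for the next ttlMillis. */
    public void markDead(String datanodeId, long nowMillis) {
        deadUntil.put(datanodeId, nowMillis + ttlMillis);
    }

    /** True while the dead period has not elapsed; evicts stale entries. */
    public boolean isDead(String datanodeId, long nowMillis) {
        Long deadline = deadUntil.get(datanodeId);
        if (deadline == null) {
            return false;
        }
        if (nowMillis >= deadline) {
            deadUntil.remove(datanodeId); // give the node another chance
            return false;
        }
        return true;
    }
}
```

The clock is passed in explicitly so the expiry logic is easy to test; a real patch would use the client's retry path and configuration instead of a fixed TTL.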



-- 
Best Regards,
Haosdent Huang


Re: HDFS read/write data throttling

2013-11-11 Thread Haosong Huang
Hi, lohit. There is a class named
ThrottledInputStream
 in hadoop-distcp; you could check it out for more details.
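The idea behind that class can be sketched as a stream wrapper that sleeps whenever the observed average rate exceeds a bytes-per-second cap. This is a simplified illustration, not the actual hadoop-distcp implementation:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Simplified sketch of rate-limited reads: sleep until the average
 * read rate drops back under the configured bytes-per-second cap.
 */
public class SimpleThrottledInputStream extends FilterInputStream {
    private final long maxBytesPerSec;
    private final long startMillis = System.currentTimeMillis();
    private long bytesRead = 0;

    public SimpleThrottledInputStream(InputStream in, long maxBytesPerSec) {
        super(in);
        this.maxBytesPerSec = maxBytesPerSec;
    }

    private void throttle() throws IOException {
        long elapsed = Math.max(1, System.currentTimeMillis() - startMillis);
        // Block while the average rate since start exceeds the cap.
        while (bytesRead * 1000L / elapsed > maxBytesPerSec) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted while throttling", e);
            }
            elapsed = Math.max(1, System.currentTimeMillis() - startMillis);
        }
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        throttle();
        int n = super.read(b, off, len);
        if (n > 0) {
            bytesRead += n;
        }
        return n;
    }

    @Override
    public int read() throws IOException {
        throttle();
        int c = super.read();
        if (c >= 0) {
            bytesRead++;
        }
        return c;
    }
}
```

Averaging from stream start is the simplest policy; a token bucket would smooth out bursts better at the cost of a little more bookkeeping.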

In addition to this, I am working on achieving resource control
(including CPU, network, and disk I/O) in the JVM. My implementation
depends on cgroups, so it can only run on Linux. I will push my
library (java-cgroup) to GitHub in the next several months. If you are
interested in it, please give me your advice and help me improve it. :-)


On Tue, Nov 12, 2013 at 3:47 AM, lohit  wrote:

> Hi Adam,
>
> Thanks for the reply. The changes I was referring to were in the
> FileSystem.java layer, which should not affect HDFS Replication/NameNode
> operations. To give a better idea, this would affect clients something
> like this:
>
> Configuration conf = new Configuration();
> conf.setInt("read.bandwidth.mbpersec", 20); // 20MB/s
> FileSystem fs = FileSystem.get(conf);
>
> FSDataInputStream fis = fs.open(new Path("/path/to/file.txt"));
> fis.read(); // <-- This would be capped at 20MB/s
>
>
>
>
> 2013/11/11 Adam Muise 
>
> > See https://issues.apache.org/jira/browse/HDFS-3475
> >
> > Please note that this has met with many unexpected impacts on workload.
> Be
> > careful and be mindful of your Datanode memory and network capacity.
> >
> >
> >
> >
> > On Mon, Nov 11, 2013 at 1:59 PM, lohit 
> wrote:
> >
> > > Hello Devs,
> > >
> > > Wanted to reach out and see if anyone has thought about the ability
> > > to throttle data transfer within HDFS. One option we have been
> > > considering is to throttle on a per-FileSystem basis, similar to
> > > Statistics in FileSystem. This would mean anyone with a handle to
> > > HDFS/Hftp would be throttled globally within the JVM. The right value
> > > for this would be based on the type of hardware we use and how many
> > > tasks/clients we allow.
> > >
> > > On the other hand, doing something like this at the FileSystem layer
> > > would mean many other tasks, such as Job jar copies, DistributedCache
> > > copies, and any hidden data movement, would also be throttled. We
> > > wanted to know if anyone has had such a requirement on their clusters
> > > in the past and what the thinking around it was. Appreciate your
> > > inputs/comments
> > >
> > > --
> > > Have a Nice Day!
> > > Lohit
> > >
> >
> >
> >
> > --
> > Adam Muise | Solutions Engineer
> > Phone: 416-417-4037
> > Email: amu...@hortonworks.com
> > Website: http://www.hortonworks.com/
> >
> >
>
>
>
> --
> Have a Nice Day!
> Lohit
>



-- 
Best Regards,
Haosdent Huang