[ https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720914#comment-16720914 ]

Brahma Reddy Battula edited comment on HDFS-12943 at 12/14/18 4:33 AM:
-----------------------------------------------------------------------

Thanks all for the great work here.

I think write requests can be degraded as well, since they also contain some read 
requests such as getFileInfo(), getServerDefaults(), ... (getHAServiceState() is 
newly added).

I just checked mkdir performance; the results are below.
 * i) getHAServiceState() took 2+ seconds (3 getHAServiceState() + 2 getFileInfo() 
 + 1 mkdirs = 6 calls).
 * ii) Every second request times out [1] and the RPC call is skipped on the 
observer (7 getHAServiceState() + 4 getFileInfo() + 1 mkdirs = 12 calls). Here two 
getFileInfo() calls were skipped on the observer and therefore succeeded against 
the Active (a sketch for counting these calls from the debug log follows below).
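
For what it's worth, here is a rough sketch of how such per-command RPC counts could be reproduced from the client debug output (this assumes the console log goes to stderr and includes the RPC method names; /TestsORF3 and mkdir.log are placeholder names, not from the runs above):
{noformat}
# Capture a single mkdir with client debug logging, then count the RPC
# method names discussed above. Path and log file name are placeholders.
hdfs --loglevel debug dfs \
 -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider \
 -mkdir /TestsORF3 2> mkdir.log
grep -c "getHAServiceState" mkdir.log
grep -c "getFileInfo" mkdir.log
grep -c "mkdirs" mkdir.log
{noformat}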

 

 
*with ObserverReadProxyProvider:*
{noformat}
time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF1
real 0m4.314s
user 0m3.668s
sys 0m0.272s

time hdfs --loglevel debug dfs -Ddfs.client.failover.proxy.provider.hacluster=org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider -mkdir /TestsORF2
real 0m22.238s
user 0m3.800s
sys 0m0.248s
{noformat}
 

*without ObserverReadProxyProvider (2 getFileInfo() + 1 mkdirs() = 3 calls):*
{noformat}
time ./hdfs --loglevel debug dfs  -mkdir /TestsCFP
real 0m2.105s
user 0m3.768s
sys 0m0.592s
{noformat}
*Please correct me if I am missing anything.*

 

[1] timed out: for every second write request I am getting the following, and 
those calls are skipped on the observer. Did I miss something here?
{noformat}
2018-12-14 11:21:45,312 DEBUG ipc.Client: closing ipc connection to vm1/10.*.*.*:65110: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110]
java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.*.*.*:58409 remote=vm1/10.*.*.*:65110]
 at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 at java.io.FilterInputStream.read(FilterInputStream.java:133)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 at java.io.FilterInputStream.read(FilterInputStream.java:83)
 at java.io.FilterInputStream.read(FilterInputStream.java:83)
 at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:567)
 at java.io.DataInputStream.readInt(DataInputStream.java:387)
 at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1849)
 at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1183)
 at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
2018-12-14 11:21:45,313 DEBUG ipc.Client: IPC Client (1006094903) connection to vm1/10.*.*.*:65110 from brahma: closed
{noformat}
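
As a quick sanity check for these timeouts, it may also be worth confirming which HA state each NameNode reports at that moment; a minimal sketch, assuming nn1/nn2/nn3 are the configured service ids (replace with the real ones):
{noformat}
# nn1/nn2/nn3 are placeholder NameNode service ids for this cluster.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
hdfs haadmin -getServiceState nn3
{noformat}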
 

 

 


> Consistent Reads from Standby Node
> ----------------------------------
>
>                 Key: HDFS-12943
>                 URL: https://issues.apache.org/jira/browse/HDFS-12943
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs
>            Reporter: Konstantin Shvachko
>            Priority: Major
>         Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.


