SocketTimeoutException on regionservers
Hi,

I noticed that regionservers are intermittently raising the following exceptions, which manifest as request timeouts on the client side. HDFS is in a healthy state and "hdfs fsck" reports no corrupted blocks. The datanodes were in service when the errors occurred, and GC pauses on the datanodes are usually around 0.3 s. When these exceptions occur, the HDFS metric "Send Data Packet Blocked On Network Average Time" tends to go up.

Here are the configured values for some of the relevant parameters:

dfs.client.socket-timeout: 10s
dfs.datanode.socket.write.timeout: 10s
dfs.namenode.avoid.read.stale.datanode: true
dfs.namenode.avoid.write.stale.datanode: true
dfs.datanode.max.xcievers: 8192

Any pointers towards what could be causing these exceptions are appreciated. Thanks.

CDH 5.7.2
HBase 1.2.0

---> Regionserver logs

2017-01-11 19:19:04,940 WARN [PriorityRpcServer.handler=3,queue=1,port=60020] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.SocketTimeoutException: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/datanode3:27094 remote=/datanode2:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        ...
2017-01-11 19:19:04,995 WARN [PriorityRpcServer.handler=11,queue=1,port=60020] hdfs.DFSClient: Connection failure: Failed to connect to /datanode2:50010 for file /hbase/data/default//ec9ca
java.net.SocketTimeoutException: 1 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/datanode3:27107 remote=/datanode2:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:207)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:156)
        at org.apache.hadoop.hdfs.BlockReaderUtil.readAll(BlockReaderUtil.java:32)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.readAll(RemoteBlockReader2.java:386)
        at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1193)
        at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1112)
        at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1473)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1432)
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:89)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:752)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1448)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1648)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1532)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
        at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:261)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:622)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:314)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:226)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:437)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:340)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:261)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:794)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:617)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.popul
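For anyone debugging a similar case, the configured timeouts and the "Send Data Packet Blocked On Network" metric can be cross-checked from a shell. A rough sketch; the hostname datanode2, the default CDH 5 datanode HTTP port 50075, and the exact JMX metric name are assumptions to verify against your own cluster:

```shell
#!/bin/sh
# Print the effective value of each relevant HDFS key as the client sees it.
for key in dfs.client.socket-timeout dfs.datanode.socket.write.timeout \
           dfs.datanode.max.xcievers; do
  printf '%s = %s\n' "$key" "$(hdfs getconf -confKey "$key")"
done

# Pull the send-packet-blocked-on-network timing from the DataNode's JMX
# servlet; datanode2:50075 is a placeholder for your datanode's HTTP address.
curl -s 'http://datanode2:50075/jmx?qry=Hadoop:service=DataNode,name=DataNodeActivity*' \
  | grep -i SendDataPacketBlockedOnNetwork
```

If that JMX value climbs at the same time the client-side reads time out, it points at the datanode's network path rather than at HBase itself.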
Successful: HBase Generate Website
Build status: Successful

If successful, the website and docs have been generated. To update the live site, follow the instructions below. If failed, skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around permanently, you can skip the clone step.

git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
cd hbase-site
wget -O- https://builds.apache.org/job/hbase_generate_website/459/artifact/website.patch.zip | funzip > f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263.patch
git fetch
git checkout -b asf-site-f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263 origin/asf-site
git am --whitespace=fix f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263.patch

At this point, you can preview the changes by opening index.html or any of the other HTML pages in your local asf-site-f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263 branch.

There are lots of spurious changes, such as timestamps and CSS styles in tables, so a generic git diff is not very useful. To see a list of files that have been added, deleted, renamed, changed type, or are otherwise interesting, use the following command:

git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these commands:

git commit --allow-empty -m "Empty commit" # to work around a current ASF INFRA bug
git push origin asf-site-f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263:asf-site
git checkout asf-site
git branch -D asf-site-f7d0f15c99e7eacb487ba9e06cfa42ecc4d41263

Changes take a couple of minutes to be propagated. You can verify whether they have been propagated by looking at the Last Published date at the bottom of http://hbase.apache.org/. It should match the date in index.html on the asf-site branch in Git.
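The propagation check above can be scripted. A small sketch, assuming the site footer contains a line of the form "Last Published: <date>" (verify the exact wording on the page):

```shell
#!/bin/sh
# extract_last_published: pull the "Last Published" footer line out of a
# saved copy of the site's index.html. The "Last Published:" text is an
# assumption about the page format.
extract_last_published() {
  grep -o 'Last Published:[^<]*' "$1"
}

# Usage against the live site (requires network):
#   curl -s http://hbase.apache.org/ > /tmp/hbase-index.html
#   extract_last_published /tmp/hbase-index.html
```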
As a courtesy, reply-all to this email to let other committers know you pushed the site. If the build failed, see https://builds.apache.org/job/hbase_generate_website/459/console
FOSDEM 2017 Open Source Conference - Brussels
Hello everyone,

This email is to tell you about ASF participation at FOSDEM. The event will be held in Brussels on 4th & 5th February 2017, and we are hoping that many people from our ASF projects will be there. https://fosdem.org/2017/

Attending FOSDEM is completely free, and the ASF will again be running a booth there. Our main focus will be on talking to people about the ASF, our projects and communities.

Why Attend FOSDEM?

Some reasons for attending FOSDEM are:
1. Promoting your project: FOSDEM has up to 4-5,000 attendees, so it is a great place to spread the word about your project.
2. Learning, participating and meeting up: FOSDEM is a developers' conference, so it includes presentations covering a range of technologies as well as lots of topic-specific devrooms.

FOSDEM Wiki

A page on the Community Development wiki has been created with the main details about our involvement at the conference, so please take a look: https://cwiki.apache.org/confluence/display/COMDEV/FOSDEM+2017

If you would like to spend some time on the ASF booth promoting your project, please sign up on the FOSDEM wiki page. Initially we would like to split this into slots of 3-4 hours, but this will depend on the number of projects that are represented. We are also looking for volunteers to help out on the booth over the 2 days of the conference, so if you are going to be there and are willing to help, please add your name to the volunteer list.

Project Stickers

If you are going to be at FOSDEM and do not have any project stickers to give away, we may (budget permitting) be able to help you get some printed. Please contact me with your requirements.

Social Event

Some people have asked about organising an ASF social event / meetup during the conference. This is possible, but we will need to know how many people are interested and which date works best. The FOSDEM wiki page also contains an 'Arrival / Departure' section, so please add your details if you would like to participate.
I hope this helps people see some of the advantages of attending FOSDEM and we are looking forward to seeing lots of people there from our ASF communities. Thanks Sharan Apache Community Development http://community.apache.org/