I have a table "words" of about 200 GB on a not-so-fast cluster (7200 rpm disks), and I am trying to build a secondary index on it:
0: jdbc:phoenix:nn.lan> create index IX_WORDS on words(word);
15/09/29 10:04:48 WARN client.AsyncProcess: #1, table=IX_WORDS,
attempt=1/1 failed=80ops, last exception: null on
d3.lan,16020,1443514600886, tracking started Tue Sep 29 10:04:48 BST
2015; not retrying 80 - final failure
15/09/29 10:04:48 WARN client.AsyncProcess: #1, table=IX_WORDS,
attempt=1/1 failed=142ops, last exception: null on
d3.lan,16020,1443514600886, tracking started Tue Sep 29 10:04:48 BST
2015; not retrying 142 - final failure
15/09/29 10:04:48 WARN client.AsyncProcess: #1, table=IX_WORDS,
attempt=1/1 failed=141ops, last exception: null on
d3.lan,16020,1443514600886, tracking started Tue Sep 29 10:04:48 BST
2015; not retrying 141 - final failure
15/09/29 10:04:48 WARN client.AsyncProcess: #1, table=IX_WORDS,
attempt=1/1 failed=61ops, last exception: null on
d3.lan,16020,1443514600886, tracking started Tue Sep 29 10:04:48 BST
2015; not retrying 61 - final failure
The warnings keep repeating and the index creation eventually fails. Is
there a configuration setting I should change to fix this?
In the region server log there is no exception, only memstore flushes
and slow WAL syncs:
2015-09-29 10:07:47,193 INFO [MemStoreFlusher.1]
regionserver.DefaultStoreFlusher: Flushed, sequenceid=846905,
memsize=128.2 M, hasBloomFilter=true, into tmp file
hdfs://nn.lan:8020/hbase/data/default/IX_WORDS/e4bcc5bcda8227204db81eced37b5546/.tmp/3fee1d78302146ccac035138a498a966
2015-09-29 10:07:47,204 INFO [MemStoreFlusher.1] regionserver.HStore:
Added
hdfs://nn.lan:8020/hbase/data/default/IX_WORDS/e4bcc5bcda8227204db81eced37b5546/0/3fee1d78302146ccac035138a498a966,
entries=786646, sequenceid=846905, filesize=13.4 M
2015-09-29 10:07:47,208 INFO [MemStoreFlusher.1] regionserver.HRegion:
Finished memstore flush of ~128.19 MB/134418456, currentsize=11.45
MB/12001160 for region
IX_WORDS,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00,1443516250552.e4bcc5bcda8227204db81eced37b5546.
in 1972ms, sequenceid=846905, compaction requested=true
2015-09-29 10:07:53,630 INFO [sync.3] wal.FSHLog: Slow sync cost: 143
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:56,807 INFO [sync.3] wal.FSHLog: Slow sync cost: 113
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:56,808 INFO [sync.2] wal.FSHLog: Slow sync cost: 115
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:57,225 INFO [sync.4] wal.FSHLog: Slow sync cost: 103
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:58,693 INFO [sync.3] wal.FSHLog: Slow sync cost: 939
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:58,693 INFO [sync.4] wal.FSHLog: Slow sync cost: 939
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:58,693 INFO [sync.0] wal.FSHLog: Slow sync cost: 938
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:58,693 INFO [sync.2] wal.FSHLog: Slow sync cost: 940
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:58,694 INFO [sync.1] wal.FSHLog: Slow sync cost: 938
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:07:59,484 INFO [sync.4] wal.FSHLog: Slow sync cost: 147
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:00,116 INFO [sync.3] wal.FSHLog: Slow sync cost: 131
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:00,117 INFO [sync.2] wal.FSHLog: Slow sync cost: 133
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:00,654 INFO [sync.2] wal.FSHLog: Slow sync cost: 127
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:01,128 INFO [sync.0] wal.FSHLog: Slow sync cost: 117
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:01,128 INFO [sync.1] wal.FSHLog: Slow sync cost: 117
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:02,988 INFO
[regionserver/d3.lan/192.168.0.31:16020.logRoller] wal.FSHLog: Rolled
WAL
/hbase/WALs/d3.lan,16020,1443514600886/d3.lan%2C16020%2C1443514600886.default.1443517654445
with entries=23584, filesize=121.87 MB; new WAL
/hbase/WALs/d3.lan,16020,1443514600886/d3.lan%2C16020%2C1443514600886.default.1443517682939
2015-09-29 10:08:08,122 INFO [MemStoreFlusher.1] regionserver.HRegion:
Started memstore flush for
IX_WORDS,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00,1443516250552.e4bcc5bcda8227204db81eced37b5546.,
current region memstore size 128.00 MB, and 1/1 column families'
memstores are being flushed.
2015-09-29 10:08:10,819 INFO [MemStoreFlusher.1]
regionserver.DefaultStoreFlusher: Flushed, sequenceid=859636,
memsize=128.0 M, hasBloomFilter=true, into tmp file
hdfs://nn.lan:8020/hbase/data/default/IX_WORDS/e4bcc5bcda8227204db81eced37b5546/.tmp/197a91b7e345448a9905576f0e66e411
2015-09-29 10:08:10,839 INFO [MemStoreFlusher.1] regionserver.HStore:
Added
hdfs://nn.lan:8020/hbase/data/default/IX_WORDS/e4bcc5bcda8227204db81eced37b5546/0/197a91b7e345448a9905576f0e66e411,
entries=785829, sequenceid=859636, filesize=13.4 M
2015-09-29 10:08:10,850 INFO [MemStoreFlusher.1] regionserver.HRegion:
Finished memstore flush of ~128.02 MB/134242872, currentsize=15.92
MB/16692544 for region
IX_WORDS,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00,1443516250552.e4bcc5bcda8227204db81eced37b5546.
in 2728ms, sequenceid=859636, compaction requested=true
2015-09-29 10:08:11,087 INFO [sync.3] wal.FSHLog: Slow sync cost: 117
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,087 INFO [sync.4] wal.FSHLog: Slow sync cost: 117
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,087 INFO [sync.0] wal.FSHLog: Slow sync cost: 118
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,088 INFO [sync.1] wal.FSHLog: Slow sync cost: 119
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,089 INFO [sync.2] wal.FSHLog: Slow sync cost: 116
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,912 INFO [sync.1] wal.FSHLog: Slow sync cost: 125
ms, current pipeline: [192.168.0.31:50010]
2015-09-29 10:08:11,914 INFO [sync.0] wal.FSHLog: Slow sync cost: 127
ms, current pipeline: [192.168.0.31:50010]
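In case it helps: the settings I have been suspecting are the client-side
timeouts and retries below. This is purely a guess on my part; the property
names are the usual HBase/Phoenix client ones, but the values are only
illustrative and not something I have tried yet.

<!-- client-side hbase-site.xml; values are only examples, not a recommendation -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>3600000</value> <!-- let the CREATE INDEX statement run for up to an hour -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>3600000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>3600000</value>
</property>
<property>
  <name>hbase.client.retries.number</name>
  <value>10</value> <!-- the warnings above say attempt=1/1 and "not retrying", so maybe retries matter here -->
</property>

If the right knob is something else entirely, a pointer to it would be much
appreciated.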
Thanks