[
https://issues.apache.org/jira/browse/PHOENIX-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152771#comment-14152771
]
ramkrishna.s.vasudevan commented on PHOENIX-1263:
-------------------------------------------------
[~jeffreyz]
Thanks for the inputs. I have been running all my test cases on Windows from
the IDE, including HBase's. Everything was working fine in Phoenix too until
Friday; then, after I took an update of the code and started running the test
cases again, I began seeing this:
{code}
2014-09-30 09:10:33,946 DEBUG [IPC Server handler 0 on 56981] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(2918): *BLOCK* NameNode.processIncrementalBlockReport: from DatanodeRegistration(127.0.0.1, datanodeUuid=ff801c90-0493-4f86-b3bf-4ea33ed80d55, infoPort=56993, ipcPort=56996, storageInfo=lv=-55;cid=testClusterID;nsid=2022730518;c=0) receiving: 1, received: 0, deleted: 0
2014-09-30 09:10:33,986 WARN [PacketResponder: BP-2005792417-10.252.156.239-1412048430143:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[]] org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder(1175): IOException in BlockReceiver.run():
java.io.IOException: Failed to move meta file for ReplicaBeingWritten, blk_1073741825_1001, RBW
  getNumBytes()     = 7
  getBytesOnDisk()  = 7
  getVisibleLength()= 7
  getVolume()       = C:\HBase\Phoenix\Apache_phoenix_git\apache-mirror\github-mirror\phoenix-1\phoenix-core\target\test-data\8e2cf486-2af2-407f-bdcf-cf76d36de4b8\dfscluster_07a1bc36-1653-4f73-b927-34fdcc299a57\dfs\data\data1\current
  getBlockFile()    = C:\HBase\Phoenix\Apache_phoenix_git\apache-mirror\github-mirror\phoenix-1\phoenix-core\target\test-data\8e2cf486-2af2-407f-bdcf-cf76d36de4b8\dfscluster_07a1bc36-1653-4f73-b927-34fdcc299a57\dfs\data\data1\current\BP-2005792417-10.252.156.239-1412048430143\current\rbw\blk_1073741825
  bytesAcked=7
  bytesOnDisk=7 from C:\HBase\Phoenix\Apache_phoenix_git\apache-mirror\github-mirror\phoenix-1\phoenix-core\target\test-data\8e2cf486-2af2-407f-bdcf-cf76d36de4b8\dfscluster_07a1bc36-1653-4f73-b927-34fdcc299a57\dfs\data\data1\current\BP-2005792417-10.252.156.239-1412048430143\current\rbw\blk_1073741825_1001.meta to C:\HBase\Phoenix\Apache_phoenix_git\apache-mirror\github-mirror\phoenix-1\phoenix-core\target\test-data\8e2cf486-2af2-407f-bdcf-cf76d36de4b8\dfscluster_07a1bc36-1653-4f73-b927-34fdcc299a57\dfs\data\data1\current\BP-2005792417-10.252.156.239-1412048430143\current\finalized\blk_1073741825_1001.meta
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:462)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LDir.addBlock(LDir.java:78)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LDir.addBlock(LDir.java:71)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addBlock(BlockPoolSlice.java:248)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlock(FsVolumeImpl.java:199)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:958)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:939)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1208)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1164)
    at java.lang.Thread.run(Thread.java:722)
Caused by: 3: The system cannot find the path specified.
    at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:828)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:460)
    ... 9 more
2014-09-30 09:10:33,990 WARN [PacketResponder: BP-2005792417-10.252.156.239-1412048430143:blk_1073741825_1001, type=LAST_IN_PIPELINE, downstreams=0:[]] org.apache.hadoop.hdfs.server.datanode.DataNode(1334): checkDiskError: exception:
java.io.IOException: Failed to move meta file for ReplicaBeingWritten, blk_1073741825_1001, RBW
  getNumBytes()     = 7
{code}
I remember I used to get this error while running the HBase test cases on
trunk after the Hadoop 2.2.0 update. Once we installed the native libraries
for that version, it started working, and the same was true with Hadoop 2.4.0.
But now Phoenix does not work with these natives. The interesting thing is
that the HBase test cases still work with 2.4.0. I even tried upgrading
Phoenix to point to 2.4.0, and things still do not work. I am still
experimenting with this, but with no luck so far :(
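As a side note, a minimal sketch for checking whether the Hadoop natives are actually visible to the test JVM (the class name below is just illustrative; NativeCodeLoader is Hadoop's own utility):
{code}
import org.apache.hadoop.util.NativeCodeLoader;

// Throwaway check: run in the same JVM/classpath as the failing tests.
public class NativeLibCheck {
    public static void main(String[] args) {
        // True only if the Hadoop native library (hadoop.dll on Windows) was found and loaded.
        System.out.println("Hadoop natives loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        // On Windows the natives must be reachable via java.library.path (typically %HADOOP_HOME%\bin).
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    }
}
{code}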
> Only cache guideposts on physical PTable
> ----------------------------------------
>
> Key: PHOENIX-1263
> URL: https://issues.apache.org/jira/browse/PHOENIX-1263
> Project: Phoenix
> Issue Type: Sub-task
> Reporter: James Taylor
> Assignee: ramkrishna.s.vasudevan
> Attachments: Phoenix-1263_1.patch
>
>
> Rather than caching the guideposts on all tenant-specific tables, we should
> cache them only on the physical table. On the client side, we should also
> update the cache with the latest for the base multi-tenant table when we
> update the cache for a tenant-specific table. Then when we look up the
> guideposts, we should ensure that we're getting them from the physical table.
> Otherwise, it'll be difficult to keep the guideposts cached on the PTable in
> sync across all tenant-specific tables (not to mention using quite a bit of
> memory).
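For illustration, a toy sketch of the caching rule described above (made-up classes, not Phoenix's actual PTable/stats API): guideposts are stored once under the physical table name, and tenant-specific tables resolve through to that entry on both update and lookup.
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model only; the class and method names here are invented.
public class GuidepostCacheSketch {
    // Guideposts keyed solely by the physical table name.
    private final Map<String, List<byte[]>> guidePosts = new HashMap<>();
    // Logical (e.g. tenant-specific) table name -> physical table name.
    private final Map<String, String> physicalNameOf = new HashMap<>();

    public void registerTable(String logicalName, String physicalName) {
        physicalNameOf.put(logicalName, physicalName);
    }

    // A single update on the physical entry is visible to every
    // tenant-specific table, so there are no per-view copies to keep in sync.
    public void updateGuidePosts(String anyName, List<byte[]> posts) {
        guidePosts.put(resolve(anyName), posts);
    }

    public List<byte[]> getGuidePosts(String anyName) {
        return guidePosts.get(resolve(anyName));
    }

    private String resolve(String name) {
        String physical = physicalNameOf.get(name);
        return physical != null ? physical : name;
    }
}
{code}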
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)