[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2013-02-25 Thread Max Lapan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13585860#comment-13585860
 ] 

Max Lapan commented on HBASE-5071:
--

Adding my notes on this bug. They could be helpful to someone who, like us, is
still using HFileV1.

This bug was introduced by the HBASE-3040 performance optimisation and cannot be 
fixed by Harsh's patch, which truncates the index data (problems then arise 
later when the index is parsed).

I fixed this issue in our installation by replacing readAllIndex with 
BufferedInputStreams, which is transparent and has no index size limitations: 
https://github.com/Shmuma/hbase/commit/d0ef517482a0475588e229344558c31b47d5a269
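A minimal sketch of the idea behind that commit (not the actual patch; the class and method names here are hypothetical): instead of allocating one `byte[(int) indexSize]` array, which overflows for sizes above Integer.MAX_VALUE, the index entries are read incrementally through a buffered stream, so no single array ever has to hold the whole index.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical illustration: stream index records one at a time instead of
// slurping the whole index into a single int-sized byte array.
public class StreamedIndexRead {
    public static long[] readOffsets(DataInputStream in, int count) throws IOException {
        long[] offsets = new long[count];
        for (int i = 0; i < count; i++) {
            offsets[i] = in.readLong(); // incremental read; no huge allocation
        }
        return offsets;
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny fake "index" of three long offsets in memory.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        for (long off : new long[] {0L, 65536L, 131072L}) {
            out.writeLong(off);
        }
        DataInputStream in = new DataInputStream(
            new BufferedInputStream(new ByteArrayInputStream(buf.toByteArray())));
        long[] offsets = readOffsets(in, 3);
        System.out.println(offsets[2]); // prints 131072
    }
}
```

In the real reader the stream would wrap the HDFS input stream positioned at the index offset; the point is only that a streaming read has no int-cast on the total index size.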

 HFile has a possible cast issue.
 

 Key: HBASE-5071
 URL: https://issues.apache.org/jira/browse/HBASE-5071
 Project: HBase
  Issue Type: Bug
  Components: HFile, io
Affects Versions: 0.90.0
Reporter: Harsh J
  Labels: hfile
 Fix For: 0.96.0


 HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
 {code}
 int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
 FixedFileTrailer.trailerSize());
 {code}
 Which on trunk today, for HFile v1 is:
 {code}
 int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
 trailer.getTrailerSize());
 {code}
 This computed (and casted) integer is then used to build an array of the same 
 size. But if fileSize is very large (> Integer.MAX_VALUE), then there's an 
 easy chance this can go negative at some point and spew out exceptions such 
 as:
 {code}
 java.lang.NegativeArraySizeException 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:209) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
 {code}
 Did we accidentally limit single region sizes this way?
 (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
 issue.)
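The failure mode described above can be reproduced in isolation (the trailer values below are made up for illustration): once the long difference exceeds Integer.MAX_VALUE, the narrowing cast wraps to a negative int, and allocating an array of that size throws exactly the exception in the stack trace.

```java
// Illustration of the narrowing cast from the report: a file larger than
// Integer.MAX_VALUE makes the casted size negative, and new byte[size]
// then throws NegativeArraySizeException.
public class CastOverflow {
    public static void main(String[] args) {
        long fileSize = 3L * 1024 * 1024 * 1024; // 3 GB, > Integer.MAX_VALUE
        long loadOnOpenDataOffset = 1024;        // hypothetical trailer values
        long trailerSize = 60;
        int sizeToLoadOnOpen = (int) (fileSize - loadOnOpenDataOffset - trailerSize);
        System.out.println(sizeToLoadOnOpen < 0); // prints true: the cast wrapped
        try {
            byte[] buf = new byte[sizeToLoadOnOpen];
        } catch (NegativeArraySizeException e) {
            // Same exception as at HFile$Reader.readAllIndex in the trace above.
            System.out.println("NegativeArraySizeException");
        }
    }
}
```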

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2013-02-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13586395#comment-13586395
 ] 

Harsh J commented on HBASE-5071:


Hi Max,

Why go down that path instead of upgrading to HFileV2?



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2013-02-25 Thread Max Lapan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13586673#comment-13586673
 ] 

Max Lapan commented on HBASE-5071:
--

Harsh: at the moment, we aren't ready to upgrade from 0.90.6 to 0.92.



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-10-02 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13468235#comment-13468235
 ] 

Mikhail Bautin commented on HBASE-5071:
---

[~qwertymaniac]: Good catch! Looks good to me.



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-27 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465028#comment-13465028
 ] 

Chris Trezzo commented on HBASE-5071:
-

Closed and left a comment in the release notes. Thanks Harsh J!



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464296#comment-13464296
 ] 

Chris Trezzo commented on HBASE-5071:
-

[~mikhail] Do you want to have a look?

 HFile has a possible cast issue.
 

 Key: HBASE-5071
 URL: https://issues.apache.org/jira/browse/HBASE-5071
 Project: HBase
  Issue Type: Bug
  Components: HFile, io
Affects Versions: 0.90.0
Reporter: Harsh J
  Labels: hfile

 HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
 {code}
 int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
 FixedFileTrailer.trailerSize());
 {code}
 Which on trunk today, for HFile v1 is:
 {code}
 int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
 trailer.getTrailerSize());
 {code}
 This computed (and casted) integer is then used to build an array of the same 
 size. But if fileSize is very large ( Integer.MAX_VALUE), then there's an 
 easy chance this can go negative at some point and spew out exceptions such 
 as:
 {code}
 java.lang.NegativeArraySizeException 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:209) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
 {code}
 Did we accidentally limit single region sizes this way?
 (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
 issue.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464300#comment-13464300
 ] 

Chris Trezzo commented on HBASE-5071:
-

It seems like this might not be a problem in HFileV2. Should we just close this 
issue?



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464431#comment-13464431
 ] 

Harsh J commented on HBASE-5071:


We could close it with a note that it does not affect HFileV2.



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-01-04 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13179838#comment-13179838
 ] 

stack commented on HBASE-5071:
--

lgtm





[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-01-03 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13178709#comment-13178709
 ] 

Harsh J commented on HBASE-5071:


Hm, given the array building, I can't really figure out a way to bypass this 
one.

The following is ugly, but lemme know what you think of it:
{code}
int sizeToLoadOnOpen = (int) Math.min(fileSize - trailer.getLoadOnOpenDataOffset() -
    trailer.getTrailerSize(), Integer.MAX_VALUE);
// The subtraction is computed as a long; capping at Integer.MAX_VALUE before
// the cast keeps the result from going negative.
{code}
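A quick sanity check of that capped computation (with made-up trailer values, and the trailer accessors replaced by plain parameters): the clamp keeps the result non-negative for an oversized file while leaving small files untouched.

```java
// Hypothetical check of the Math.min cap: compute in long, clamp to
// Integer.MAX_VALUE, then cast. The result can never be negative.
public class CappedSize {
    static int sizeToLoadOnOpen(long fileSize, long loadOnOpenDataOffset, long trailerSize) {
        return (int) Math.min(fileSize - loadOnOpenDataOffset - trailerSize, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        // A 3 GB file: the raw difference exceeds Integer.MAX_VALUE, so it is capped.
        System.out.println(sizeToLoadOnOpen(3L << 30, 1024, 60)); // prints 2147483647
        // A small file is unaffected by the cap.
        System.out.println(sizeToLoadOnOpen(4096, 1024, 60));     // prints 3012
    }
}
```

Note the cap only avoids the negative array size; a file whose on-open section really is larger than 2 GB would still be read incompletely, which is presumably why the proposal is called ugly.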





[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2011-12-19 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13172943#comment-13172943
 ] 

stack commented on HBASE-5071:
--

Probably.

Thanks Harsh.
