Re: Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Toshihiro Suzuki
Thank you for your reply. Is the limit of row size "hbase.table.max.rowsize"? I think it applies only to the max size of a single row for Gets or Scans, and we can still put more data than "hbase.table.max.rowsize" into a row. Thanks, Toshihiro Suzuki.
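For context, a minimal hbase-site.xml sketch of the setting discussed above (per the HBase reference guide, hbase.table.max.rowsize defaults to 1 GB and caps the total cell size a single Get or non-paged Scan may return for one row; it does not limit what can be written):

```xml
<!-- hbase-site.xml: caps the bytes a Get/Scan may return for one row.
     Writes are NOT limited by this setting. -->
<property>
  <name>hbase.table.max.rowsize</name>
  <value>1073741824</value> <!-- default: 1 GB -->
</property>
```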

Re: Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Heng Chen
I am interested in which situation a row has more than Integer.MAX_VALUE columns. If so, how large is that row, and does it satisfy the limit of row size?

Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Toshihiro Suzuki
Hi, We have lost data in our development environment after compaction when a row has more than Integer.MAX_VALUE columns. I think the reason is that the type of StoreScanner's countPerRow is int.
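To illustrate the suspected failure mode (a sketch of the arithmetic, not the actual StoreScanner code): in Java, an int silently wraps to a negative value once it passes Integer.MAX_VALUE, so any per-row column counter kept in an int misbehaves beyond 2147483647 cells. Widening the counter to long avoids the wrap:

```java
public class CounterOverflow {
    public static void main(String[] args) {
        // An int column counter at its maximum value...
        int countPerRow = Integer.MAX_VALUE;     // 2147483647
        countPerRow++;                           // ...silently wraps around
        System.out.println(countPerRow);         // prints -2147483648

        // Widening to long counts past Integer.MAX_VALUE correctly.
        long safeCountPerRow = (long) Integer.MAX_VALUE + 1;
        System.out.println(safeCountPerRow);     // prints 2147483648
    }
}
```

Any comparison such as `countPerRow > limit` that assumes the counter only grows will misfire once the value has wrapped negative, which is consistent with cells being skipped or dropped past the overflow point.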

Re: Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Heng Chen
The incoming-follow-nodes and outgoing-follow-nodes of one node exceed Integer.MAX_VALUE, unbelievable! Is the performance OK if I request the number of incoming-follow-nodes?

Re: Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Toshihiro Suzuki
Thank you for your reply. We are using HBase to store social graph data on the SNS we provide. Our use case was presented at HBaseCon 2015: http://www.slideshare.net/HBaseCon/use-cases-session-6a The schema design is here: http://www.slideshare.net/HBaseCon/use-cases-session-6a/44 Thanks,

Re: Data loss after compaction when a row has more than Integer.MAX_VALUE columns

2016-01-17 Thread Ted Yu
Interesting. Can you share your use case where more than Integer.MAX_VALUE columns are needed? Consider filing a JIRA for the proposed change.

[jira] [Created] (HBASE-15126) HBaseFsck's checkRegionBoundaries function set the 'storesFirstKey' was incorrect.

2016-01-17 Thread chenrongwei (JIRA)
chenrongwei created HBASE-15126: --- Summary: HBaseFsck's checkRegionBoundaries function set the 'storesFirstKey' was incorrect. Key: HBASE-15126 URL: https://issues.apache.org/jira/browse/HBASE-15126

[jira] [Created] (HBASE-15125) HBaseFsck's adoptHdfsOrphan function create region with end key boundary.

2016-01-17 Thread chenrongwei (JIRA)
chenrongwei created HBASE-15125: --- Summary: HBaseFsck's adoptHdfsOrphan function create region with end key boundary. Key: HBASE-15125 URL: https://issues.apache.org/jira/browse/HBASE-15125 Project:

[jira] [Resolved] (HBASE-14457) Umbrella: Improve Multiple WAL for production usage

2016-01-17 Thread Yu Li (JIRA)
[ https://issues.apache.org/jira/browse/HBASE-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li resolved HBASE-14457. --- Resolution: Fixed Seems no more comments. Since all sub-tasks done and doc uploaded, close this umbrella

Successful: HBase Generate Website

2016-01-17 Thread Apache Jenkins Server
Build status: Successful If successful, the website and docs have been generated. If failed, skip to the bottom of this email. Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around