[jira] Commented: (HBASE-3308) SplitTransaction.splitStoreFiles slows splits a lot

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969190#action_12969190
 ] 

HBase Review Board commented on HBASE-3308:
---

Message from: st...@duboce.net


bq.  On 2010-12-07 17:02:49, stack wrote:
bq.  > 
/branches/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java,
 line 400
bq.  > 
bq.  >
bq.  > Why not have an upper bound?  If there are 100 files, that's 100 threads doing 
FS operations.  I bet if you had an upper bound of 10 on the executor service, it 
would complete faster than an unbounded executor service?
bq.  
bq.  Jean-Daniel Cryans wrote:
bq.  I think we are already bounded by hbase.hstore.blockingStoreFiles

That'll do.  +1 on commit.


- stack


---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1273/#review2043
---





> SplitTransaction.splitStoreFiles slows splits a lot
> ---
>
> Key: HBASE-3308
> URL: https://issues.apache.org/jira/browse/HBASE-3308
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Priority: Critical
> Fix For: 0.92.0
>
>
> Recently I've been seeing some slow splits in our production environment 
> triggering timeouts, so I decided to take a closer look into the issue.
> According to my debugging, we spend almost all the time it takes to split on 
> creating the reference files. Each file in my testing takes at least 300ms to 
> create, and averages around 600ms. Since we create two references per store 
> file, a region with 4 store files can easily take up to 5 
> seconds to split just to create those references.
> An intuitive improvement would be to create those files in parallel, so at 
> least it wouldn't be much slower when we're splitting a higher number of 
> files. Stack left the following comment in the code:
> {noformat}
> // TODO: If the below were multithreaded would we complete steps in less
> // elapsed time?  St.Ack 20100920
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969162#action_12969162
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Jonathan Gray" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/#review2049
---

Ship it!


looks good, thanks ted!  i will commit the final patch to trunk.


trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


line is > 80 chars but will fix on commit, don't worry


- Jonathan





> Allow round-robin distribution for table created with multiple regions
> --
>
> Key: HBASE-3305
> URL: https://issues.apache.org/jira/browse/HBASE-3305
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 0.20.6
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hbase-3305-array.patch, 
> hbase-3305-default-round-robin.patch, hbase-3305-round-robin-unit-test.patch, 
> hbase-3305.patch
>
>
> We can distribute the initial regions created for a new table in round-robin 
> fashion.
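To make the intent concrete, here is a rough, hypothetical sketch of round-robin placement in plain Java. The class and method names are made up for illustration; the actual change lives in AssignmentManager/LoadBalancer and works with HRegionInfo and ServerName rather than strings.

```java
import java.util.*;

// Hypothetical sketch of round-robin placement of a new table's initial
// regions across region servers; names are illustrative, not HBase's API.
class RoundRobinSketch {
  /** Assign each region (by name) to a server, cycling through the servers. */
  static Map<String, String> roundRobin(List<String> regions, List<String> servers) {
    Map<String, String> plan = new LinkedHashMap<>();
    for (int i = 0; i < regions.size(); i++) {
      // Modulo over the server list spreads consecutive regions evenly.
      plan.put(regions.get(i), servers.get(i % servers.size()));
    }
    return plan;
  }
}
```

With three regions and two servers, the first and third region land on the first server and the second region on the second, instead of all initial regions going to one server.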




[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969153#action_12969153
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Ted Yu" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/
---

(Updated 2010-12-07 18:28:46.368066)


Review request for hbase, stack and Jonathan Gray.


Changes
---

Reverted movement of imports


Summary
---

Adopted round-robin assignment as default for regions specified when table is 
created.


This addresses bug HBASE-3305.
http://issues.apache.org/jira/browse/HBASE-3305


Diffs (updated)
-

  trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java 
1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java 1043216 
  trunk/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java 1043216 

Diff: http://review.cloudera.org/r/1271/diff


Testing
---

Put unit tests for this change inside TestAdmin.testCreateTableWithRegions().
They passed.


Thanks,

Ted







[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969149#action_12969149
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Ted Yu" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/
---

(Updated 2010-12-07 18:25:05.129171)


Review request for hbase, stack and Jonathan Gray.


Changes
---

I used Organize Imports in Eclipse for AssignmentManager


Summary
---

Adopted round-robin assignment as default for regions specified when table is 
created.


This addresses bug HBASE-3305.
http://issues.apache.org/jira/browse/HBASE-3305


Diffs (updated)
-

  trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java 
1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java 1043216 
  trunk/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java 1043216 

Diff: http://review.cloudera.org/r/1271/diff


Testing
---

Put unit tests for this change inside TestAdmin.testCreateTableWithRegions().
They passed.


Thanks,

Ted







[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969148#action_12969148
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Ted Yu" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/#review2048
---



trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


I wrap InterruptedException in IOException.
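The usual Java idiom for this, sketched below with an illustrative method name (the actual patch may differ), is to restore the thread's interrupt status and rethrow as an InterruptedIOException, which is itself an IOException:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch of wrapping InterruptedException in an IOException rather than
// catching, logging, and swallowing it. Method name is illustrative.
class InterruptSketch {
  static void waitForAssignment(long millis) throws IOException {
    try {
      Thread.sleep(millis); // stands in for the real blocking wait
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
      throw (IOException) new InterruptedIOException(
          "interrupted waiting for assignment").initCause(e);
    }
  }
}
```

The caller's existing IOException handling then covers interruption too, and the restored interrupt flag lets higher layers still notice the interrupt.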


- Ted








[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969135#action_12969135
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Jonathan Gray" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/#review2047
---


almost :)


trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


why is this and above import of EventType moved in your diff?



trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


white space here and two lines below



trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


put back the previous comment about round-robin, and whitespace (tabs?)



trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


Generally it's not good or "right" to catch, log, and ignore an IE.  How is 
this handled elsewhere?


- Jonathan








[jira] Commented: (HBASE-1861) Multi-Family support for bulk upload tools (HFileOutputFormat / loadtable.rb)

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969123#action_12969123
 ] 

HBase Review Board commented on HBASE-1861:
---

Message from: "Nicolas" 


bq.  On 2010-12-07 17:13:55, stack wrote:
bq.  > src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java, 
line 93
bq.  > 
bq.  >
bq.  > Should this behavior be documented in method javadoc?

will do


- Nicolas


---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1272/#review2044
---





> Multi-Family support for bulk upload tools (HFileOutputFormat / loadtable.rb)
> -
>
> Key: HBASE-1861
> URL: https://issues.apache.org/jira/browse/HBASE-1861
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.20.0
>Reporter: Jonathan Gray
>Assignee: Nicolas Spiegelberg
> Fix For: 0.92.0
>
> Attachments: HBASE1861-incomplete.patch
>
>
> Add multi-family support to bulk upload tools from HBASE-48.




[jira] Commented: (HBASE-1861) Multi-Family support for bulk upload tools (HFileOutputFormat / loadtable.rb)

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969118#action_12969118
 ] 

HBase Review Board commented on HBASE-1861:
---

Message from: st...@duboce.net

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1272/#review2044
---

Ship it!


+1  Excellent.


src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java


Should this behavior be documented in method javadoc?


- stack








[jira] Commented: (HBASE-3318) Split rollback leaves parent with writesEnabled=false

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969117#action_12969117
 ] 

stack commented on HBASE-3318:
--

+1

That looks like an issue we've had for a long time.

> Split rollback leaves parent with writesEnabled=false
> -
>
> Key: HBASE-3318
> URL: https://issues.apache.org/jira/browse/HBASE-3318
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
>Priority: Critical
> Fix For: 0.90.1, 0.92.0
>
> Attachments: HBASE-3318.patch
>
>
> I saw a split rollback today, and it left the region in a state where it was 
> able to take writes, but wasn't able to flush or compact. It's printing this 
> message every few milliseconds:
> {noformat}
> NOT flushing memstore for region xxx., flushing=false, writesEnabled=false
> {noformat}
> I see why: writesEnabled is never set back in HRegion.initialize:
> {code}
> // See if region is meant to run read-only.
> if (this.regionInfo.getTableDesc().isReadOnly()) {
>   this.writestate.setReadOnly(true);
> }
> {code}
> Instead it needs to pass isReadOnly into the setReadOnly method to work 
> correctly.
> I think it should go in 0.90.0 if there's a new RC.
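A minimal model of the described fix, using toy classes rather than the actual HBase ones: initialize must always drive setReadOnly from the table descriptor's current flag, not only flip it to true.

```java
// Toy model of the bug and fix (not HBase's actual classes): after a split
// rollback the state can be left read-only, so initialize must reset it
// from the descriptor's flag every time.
class WriteStateSketch {
  static class WriteState {
    volatile boolean readOnly = false;
    void setReadOnly(boolean onOff) { this.readOnly = onOff; }
  }
  /** The corrected initialize step; returns whether writes are enabled. */
  static boolean initialize(WriteState state, boolean tableIsReadOnly) {
    state.setReadOnly(tableIsReadOnly); // was: only called when the flag was true
    return !state.readOnly;
  }
}
```

With the old conditional form, a state left read-only by a rolled-back split would never be reset when the table itself is writable.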




[jira] Updated: (HBASE-3318) Split rollback leaves parent with writesEnabled=false

2010-12-07 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-3318:
--

Attachment: HBASE-3318.patch

Patch that fixes the issue. Very minor change.




[jira] Commented: (HBASE-3308) SplitTransaction.splitStoreFiles slows splits a lot

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969113#action_12969113
 ] 

HBase Review Board commented on HBASE-3308:
---

Message from: "Jean-Daniel Cryans" 


bq.  On 2010-12-07 17:02:49, stack wrote:
bq.  > 
/branches/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java,
 line 400
bq.  > 
bq.  >
bq.  > Why not have an upper bound?  If there are 100 files, that's 100 threads doing 
FS operations.  I bet if you had an upper bound of 10 on the executor service, it 
would complete faster than an unbounded executor service?

I think we are already bounded by hbase.hstore.blockingStoreFiles


- Jean-Daniel


---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1273/#review2043
---








[jira] Commented: (HBASE-3308) SplitTransaction.splitStoreFiles slows splits a lot

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969110#action_12969110
 ] 

HBase Review Board commented on HBASE-3308:
---

Message from: st...@duboce.net

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1273/#review2043
---

Ship it!


+1  Minor comment below.


/branches/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java


Why not have an upper bound?  If there are 100 files, that's 100 threads doing FS 
operations.  I bet if you had an upper bound of 10 on the executor service, it 
would complete faster than an unbounded executor service?


- stack








[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969107#action_12969107
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Ted Yu" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/
---

(Updated 2010-12-07 16:56:46.150530)


Review request for hbase, stack and Jonathan Gray.


Changes
---

Added AssignmentManager.assignUserRegions() which is called from createTable() 
and assignAllUserRegions()


Summary
---

Adopted round-robin assignment as default for regions specified when table is 
created.


This addresses bug HBASE-3305.
http://issues.apache.org/jira/browse/HBASE-3305


Diffs (updated)
-

  trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java 
1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 1043216 
  trunk/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java 1043216 
  trunk/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java 1043216 

Diff: http://review.cloudera.org/r/1271/diff


Testing
---

Put unit tests for this change inside TestAdmin.testCreateTableWithRegions().
They passed.


Thanks,

Ted







[jira] Updated: (HBASE-2856) TestAcidGuarantee broken on trunk

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2856:
-

Attachment: acid.txt

Making a start.  Adding version to KV.

> TestAcidGuarantee broken on trunk 
> --
>
> Key: HBASE-2856
> URL: https://issues.apache.org/jira/browse/HBASE-2856
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.89.20100621
>Reporter: ryan rawson
>Assignee: stack
>Priority: Blocker
> Fix For: 0.92.0
>
> Attachments: acid.txt
>
>
> TestAcidGuarantee has a test whereby it attempts to read a number of columns 
> from a row, and every so often the first column of N is different, when it 
> should be the same.  This is a bug deep inside the scanner whereby the first 
> peek() of a row is done at time T then the rest of the read is done at T+1 
> after a flush, thus the memstoreTS data is lost, and previously 'uncommitted' 
> data becomes committed and flushed to disk.
> One possible solution is to introduce the memstoreTS (or similarly equivalent 
> value) to the HFile thus allowing us to preserve read consistency past 
> flushes.  Another solution involves fixing the scanners so that peek() is not 
> destructive (and thus might return different things at different times alas).




[jira] Commented: (HBASE-3308) SplitTransaction.splitStoreFiles slows splits a lot

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969098#action_12969098
 ] 

HBase Review Board commented on HBASE-3308:
---

Message from: "Jean-Daniel Cryans" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1273/
---

Review request for hbase.


Summary
---

Patch that parallelizes the splitting of the files using ThreadPoolExecutor and 
Futures. The code is a bit ugly, but does the job really well as shown during 
cluster testing (which also uncovered HBASE-3318).

One new behavior this patch adds is that it's now possible to roll back a split 
because it took too long to split the files. I did some testing with a timeout 
of 5 secs on my cluster; even though each machine did a few rollbacks, the import 
went fine. The default is 30 seconds and isn't in hbase-default.xml, as I don't 
think anyone would really want to change that.
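As a rough illustration of the approach described above (bounded pool, Futures, timeout-driven rollback): all names in this sketch are made up, and the real work happens inside SplitTransaction.splitStoreFiles against the FileSystem API.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch: run the per-store-file reference-creation tasks on a
// bounded pool and fail (so the split can be rolled back) on timeout.
class ParallelSplitSketch {
  static int splitAll(List<Runnable> referenceFileTasks,
                      int maxThreads, long timeoutMs) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(
        Math.min(maxThreads, Math.max(1, referenceFileTasks.size())));
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (Runnable task : referenceFileTasks) {
        futures.add(pool.submit(task));
      }
      pool.shutdown();
      if (!pool.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS)) {
        // Caller treats this as "split took too long" and rolls back.
        throw new TimeoutException("split took too long; roll back");
      }
      for (Future<?> f : futures) {
        f.get(); // rethrow any task failure
      }
      return futures.size();
    } finally {
      pool.shutdownNow();
    }
  }
}
```

Bounding the pool addresses stack's review concern: with 100 store files you still get at most maxThreads concurrent FS operations instead of 100 threads.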


This addresses bug HBASE-3308.
http://issues.apache.org/jira/browse/HBASE-3308


Diffs
-

  
/branches/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
 1043188 

Diff: http://review.cloudera.org/r/1273/diff


Testing
---


Thanks,

Jean-Daniel







[jira] Created: (HBASE-3319) Add File Count Threshold to HBCK

2010-12-07 Thread Nicolas Spiegelberg (JIRA)
Add File Count Threshold to HBCK


 Key: HBASE-3319
 URL: https://issues.apache.org/jira/browse/HBASE-3319
 Project: HBase
  Issue Type: Improvement
  Components: util
Reporter: Nicolas Spiegelberg
Priority: Minor


A useful check to add to HBCK is a way to estimate the max # of files that a 
cluster should have and raise a warning/error if the file count goes above that 
threshold.  We ran into an issue this week where our ".oldlogs" folder filled 
up to 100k files because 'hbase.master.logcleaner.maxdeletedlogs' was set too 
low.  We found this because of a faulty region count metric in the HTTP server 
that actually showed the file count.  Adding an HBCK check would provide an 
extra layer of detection to find leaks from new features or conservatively 
configured cleanup thresholds.
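The proposed check could look something like this sketch. It uses plain java.nio for illustration; real HBCK code would go through the Hadoop FileSystem API and would estimate the threshold from region and column-family counts rather than taking it as a parameter.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Hedged sketch of the proposed HBCK check: walk a directory tree, count
// regular files, and flag when the count exceeds a threshold.
class FileCountCheckSketch {
  static boolean exceedsThreshold(Path root, long threshold) throws IOException {
    try (Stream<Path> paths = Files.walk(root)) {
      long count = paths.filter(Files::isRegularFile).count();
      return count > threshold;
    }
  }
}
```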




[jira] Commented: (HBASE-1861) Multi-Family support for bulk upload tools (HFileOutputFormat / loadtable.rb)

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969086#action_12969086
 ] 

HBase Review Board commented on HBASE-1861:
---

Message from: "Nicolas" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1272/
---

Review request for hbase.


Summary
---

support writing to multiple column families for HFileOutputFormat.  also, added 
a max threshold for PutSortReducer because we had some pathological row cases.
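The threshold idea can be sketched generically like this. It is illustrative only: the actual PutSortReducer buffers Put objects inside a Hadoop Reducer, tracks their heap size, and writes KeyValues to the context, none of which is modeled here.

```java
import java.util.*;

// Loose sketch of a "max threshold" for a sort buffer: sort values for a
// row, but flush once the accumulated size passes a limit so one
// pathological row cannot exhaust memory. Sizes and names are illustrative.
class BoundedSortBufferSketch {
  /** Returns the sorted batches emitted while staying under maxBytes. */
  static List<List<String>> sortInBatches(Iterable<String> values, long maxBytes) {
    List<List<String>> batches = new ArrayList<>();
    TreeSet<String> buffer = new TreeSet<>(); // sorted, de-duplicated
    long size = 0;
    for (String v : values) {
      buffer.add(v);
      size += v.length();
      if (size >= maxBytes) {           // threshold hit: flush early
        batches.add(new ArrayList<>(buffer));
        buffer.clear();
        size = 0;
      }
    }
    if (!buffer.isEmpty()) {
      batches.add(new ArrayList<>(buffer));
    }
    return batches;
  }
}
```

Each flushed batch is sorted internally; the cost of the bound is that a pathological row is emitted in several sorted runs instead of one.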


This addresses bug HBASE-1861.
http://issues.apache.org/jira/browse/HBASE-1861


Diffs
-

  src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java 
8ccdf4d 
  src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java 5fb3e83 
  src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java 
c5d56cc 

Diff: http://review.cloudera.org/r/1272/diff


Testing
---

mvn test -Dtest=TestHFileOutputFormat
internal MR testing


Thanks,

Nicolas







[jira] Commented: (HBASE-1502) Remove need for heartbeats in HBase

2010-12-07 Thread Jonathan Gray (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969083#action_12969083
 ] 

Jonathan Gray commented on HBASE-1502:
--

We're also using the regionServerStartup or regionServerReport to determine that an 
RS is online (we use the ZK node to determine when it dies).  This stuff would 
also need some cleanup/rework to completely drop heartbeats.

> Remove need for heartbeats in HBase
> ---
>
> Key: HBASE-1502
> URL: https://issues.apache.org/jira/browse/HBASE-1502
> Project: HBase
>  Issue Type: Wish
>Reporter: Nitay Joffe
> Fix For: 0.92.0
>
>
> HBase currently uses heartbeats between region servers and the master, 
> piggybacking information on them when it can. This issue is to investigate if 
> we can get rid of the need for those using ZooKeeper events.




[jira] Commented: (HBASE-695) Add passing of filter state across regions

2010-12-07 Thread Jonathan Gray (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969081#action_12969081
 ] 

Jonathan Gray commented on HBASE-695:
-

Why would we never allow filter state across regions?  It makes sense in an MR 
context where all regions run in parallel, but couldn't there be use cases of a 
single-threaded client wanting a stateful filter?

I don't think this is a high priority, but it does seem legitimate (simple row 
paging, for example, cannot be done correctly without it).

> Add passing of filter state across regions
> --
>
> Key: HBASE-695
> URL: https://issues.apache.org/jira/browse/HBASE-695
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>
> Discussion on list arrived at need for filters to carry cross-region state.  
> For example, if you are looking for sufficient rows to fill the fifth page of 
> a set of results and a particular region only has the first half of page 5, 
> there needs to be a mechanism to tell the next region in line, how far the 
> scan has gotten.  Clint Morgan suggested some kind of RPC or callback that 
> the serverside region could tug on to pass back to the client the state-laden 
> filter for passing the next region.




[jira] Commented: (HBASE-3305) Allow round-robin distribution for table created with multiple regions

2010-12-07 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969080#action_12969080
 ] 

HBase Review Board commented on HBASE-3305:
---

Message from: "Jonathan Gray" 

---
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1271/#review2042
---


Almost there.  Some spacing-only changes are still in here, and the logic needs 
to move out into an AM method.


trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


There are still tabbing changes here and in the next method signature as well.



trunk/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


Same as stack's original comment: this logic should be in 
AssignmentManager.  I wouldn't reuse the method 'assignAllUserRegions' because 
it says "all" in it.  A method 'assignUserRegions' which takes a list and does 
a bulk assign w/ round-robin would make sense; 'assignAllUserRegions' could 
then call it once it has made a list of regions.
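A rough sketch of that shape (hypothetical names and String stand-ins only; the real AssignmentManager deals in HRegionInfo and server objects and does an actual bulk assign):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: 'assignUserRegions' here just builds a round-robin
// placement plan; it is not the actual AssignmentManager API.
class RoundRobinAssign {
  /** Distribute regions across servers in round-robin order. */
  static Map<String, List<String>> assignUserRegions(
      List<String> regions, List<String> servers) {
    Map<String, List<String>> plan = new HashMap<String, List<String>>();
    for (String server : servers) {
      plan.put(server, new ArrayList<String>());
    }
    for (int i = 0; i < regions.size(); i++) {
      // Each region goes to the next server in the cycle.
      plan.get(servers.get(i % servers.size())).add(regions.get(i));
    }
    return plan;
  }
}
```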


- Jonathan





> Allow round-robin distribution for table created with multiple regions
> --
>
> Key: HBASE-3305
> URL: https://issues.apache.org/jira/browse/HBASE-3305
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 0.20.6
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hbase-3305-array.patch, 
> hbase-3305-default-round-robin.patch, hbase-3305-round-robin-unit-test.patch, 
> hbase-3305.patch
>
>
> We can distribute the initial regions created for a new table in round-robin 
> fashion.




[jira] Created: (HBASE-3318) Split rollback leaves parent with writesEnabled=false

2010-12-07 Thread Jean-Daniel Cryans (JIRA)
Split rollback leaves parent with writesEnabled=false
-

 Key: HBASE-3318
 URL: https://issues.apache.org/jira/browse/HBASE-3318
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
Priority: Critical
 Fix For: 0.90.1, 0.92.0


I saw a split rollback today, and it left the region in a state where it was 
able to take writes, but wasn't able to flush or compact. It's printing this 
message every few milliseconds:

{noformat}
NOT flushing memstore for region xxx., flushing=false, writesEnabled=false
{noformat}

I see why: writesEnabled is never set back to true in HRegion.initialize:

{code}
// See if region is meant to run read-only.
if (this.regionInfo.getTableDesc().isReadOnly()) {
  this.writestate.setReadOnly(true);
}
{code}

Instead, it needs to pass isReadOnly into the setReadOnly method 
unconditionally to work correctly.
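To illustrate, a minimal model of the bug and the fix (WriteState below mimics HRegion.WriteState.setReadOnly, which flips writesEnabled alongside readOnly; everything else is simplified):

```java
// Simplified model: the conditional only ever sets the flag to true, so a
// region re-initialized after a split rollback keeps writesEnabled=false.
// Passing isReadOnly through unconditionally resets it.
class WriteStateFix {
  static class WriteState {
    volatile boolean readOnly = false;
    volatile boolean writesEnabled = true;
    // Mirrors HRegion.WriteState.setReadOnly: flips writesEnabled too.
    void setReadOnly(boolean onOff) {
      this.readOnly = onOff;
      this.writesEnabled = !onOff;
    }
  }

  /** Current form: only ever disables writes, never re-enables them. */
  static void initializeBuggy(WriteState state, boolean tableIsReadOnly) {
    if (tableIsReadOnly) {
      state.setReadOnly(true);
    }
  }

  /** Proposed form: always pass the table's read-only flag through. */
  static void initializeFixed(WriteState state, boolean tableIsReadOnly) {
    state.setReadOnly(tableIsReadOnly);
  }
}
```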

I think it should go in 0.90.0 if there's a new RC.




[jira] Updated: (HBASE-1888) KeyValue methods throw NullPointerException instead of IllegalArgumentException during parameter sanity check

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-1888:
-

Attachment: 1888.txt

Small patch to do as Michal suggests (head of patch has some fix up of class 
javadoc).
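The suggested check, sketched as a simplified, self-contained version of the method (not the actual patch; the real method lives in org.apache.hadoop.hbase.KeyValue): validate the argument explicitly so callers get IllegalArgumentException instead of an incidental NullPointerException.

```java
// Simplified stand-in for KeyValue.getDelimiter with an explicit
// argument sanity check.
class Delimiters {
  /** Returns the index of the first occurrence of delimiter, or -1. */
  static int getDelimiter(final byte[] b, int offset, final int length,
      final int delimiter) {
    if (b == null) {
      // Explicit check: IllegalArgumentException, not an accidental NPE.
      throw new IllegalArgumentException("Passed buffer is null");
    }
    int result = -1;
    for (int i = offset; i < length + offset; i++) {
      if (b[i] == delimiter) {
        result = i;
        break;
      }
    }
    return result;
  }
}
```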

> KeyValue methods throw NullPointerException instead of 
> IllegalArgumentException during parameter sanity check
> -
>
> Key: HBASE-1888
> URL: https://issues.apache.org/jira/browse/HBASE-1888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.0
>Reporter: Michal Podsiadlowski
>Priority: Minor
> Fix For: 0.92.0
>
> Attachments: 1888.txt
>
>
> Methods of org.apache.hadoop.hbase.KeyValue
> public static int getDelimiter(final byte [] b, int offset, final int length, 
> final int delimiter)
> public static int getDelimiterInReverse(final byte [] b, final int offset, 
> final int length, final int delimiter)
> throw NullPointerException instead of IllegalArgumentException when byte 
> array b is checked for null - which is very bad practice!
> Please refactor this because this can be very misleading.  




[jira] Resolved: (HBASE-1888) KeyValue methods throw NullPointerException instead of IllegalArgumentException during parameter sanity check

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1888.
--

   Resolution: Fixed
Fix Version/s: 0.92.0
 Assignee: stack

Committed to TRUNK.

> KeyValue methods throw NullPointerException instead of 
> IllegalArgumentException during parameter sanity check
> -
>
> Key: HBASE-1888
> URL: https://issues.apache.org/jira/browse/HBASE-1888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.0
>Reporter: Michal Podsiadlowski
>Assignee: stack
>Priority: Minor
> Fix For: 0.92.0
>
> Attachments: 1888.txt
>
>
> Methods of org.apache.hadoop.hbase.KeyValue
> public static int getDelimiter(final byte [] b, int offset, final int length, 
> final int delimiter)
> public static int getDelimiterInReverse(final byte [] b, final int offset, 
> final int length, final int delimiter)
> throw NullPointerException instead of IllegalArgumentException when byte 
> array b is checked for null - which is very bad practice!
> Please refactor this because this can be very misleading.  




[jira] Resolved: (HBASE-3317) Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-3317.
--

   Resolution: Fixed
Fix Version/s: 0.92.0
 Hadoop Flags: [Reviewed]

Committed to TRUNK.  Thank you for the patch Ed.

> Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong
> --
>
> Key: HBASE-3317
> URL: https://issues.apache.org/jira/browse/HBASE-3317
> Project: HBase
>  Issue Type: Bug
>Reporter: Ed Kohlwey
>Priority: Trivial
> Fix For: 0.92.0
>
> Attachments: HBASE-3317.patch
>
>
> The throws declaration for Bytes.incrementBytes() states that an IOException 
> is thrown by the method, and javadocs suggest that this is expected if the 
> byte array's size is larger than SIZEOF_LONG.
> The code actually uses an IllegalArgumentException, which is probably more 
> appropriate anyways. This should be changed to simplify the code that uses 
> this method.
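For illustration, a simplified stand-in for the documented contract (it ignores the real method's handling of negative amounts; only the exception behavior matters here): arrays wider than SIZEOF_LONG are rejected with IllegalArgumentException, and no IOException is involved.

```java
import java.nio.ByteBuffer;

// Simplified sketch, not the real Bytes.incrementBytes: treats the array
// as a big-endian long and rejects anything wider than 8 bytes with
// IllegalArgumentException (the javadoc wrongly promised IOException).
class IncrementBytesSketch {
  static final int SIZEOF_LONG = 8;

  static byte[] incrementBytes(byte[] value, long amount) {
    if (value.length > SIZEOF_LONG) {
      throw new IllegalArgumentException("value too big: " + value.length);
    }
    // Widen to 8 bytes, add, and return the result big-endian.
    ByteBuffer in = ByteBuffer.allocate(SIZEOF_LONG);
    in.position(SIZEOF_LONG - value.length);
    in.put(value);
    long incremented = in.getLong(0) + amount;
    return ByteBuffer.allocate(SIZEOF_LONG).putLong(incremented).array();
  }
}
```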




[jira] Assigned: (HBASE-3317) Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-3317:


Assignee: Ed Kohlwey

> Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong
> --
>
> Key: HBASE-3317
> URL: https://issues.apache.org/jira/browse/HBASE-3317
> Project: HBase
>  Issue Type: Bug
>Reporter: Ed Kohlwey
>Assignee: Ed Kohlwey
>Priority: Trivial
> Fix For: 0.92.0
>
> Attachments: HBASE-3317.patch
>
>
> The throws declaration for Bytes.incrementBytes() states that an IOException 
> is thrown by the method, and javadocs suggest that this is expected if the 
> byte array's size is larger than SIZEOF_LONG.
> The code actually uses an IllegalArgumentException, which is probably more 
> appropriate anyways. This should be changed to simplify the code that uses 
> this method.




[jira] Updated: (HBASE-3317) Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3317:
--

Attachment: HBASE-3317.patch

> Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong
> --
>
> Key: HBASE-3317
> URL: https://issues.apache.org/jira/browse/HBASE-3317
> Project: HBase
>  Issue Type: Bug
>Reporter: Ed Kohlwey
>Priority: Trivial
> Attachments: HBASE-3317.patch
>
>
> The throws declaration for Bytes.incrementBytes() states that an IOException 
> is thrown by the method, and javadocs suggest that this is expected if the 
> byte array's size is larger than SIZEOF_LONG.
> The code actually uses an IllegalArgumentException, which is probably more 
> appropriate anyways. This should be changed to simplify the code that uses 
> this method.




[jira] Updated: (HBASE-3317) Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3317:
--

Priority: Trivial  (was: Major)

> Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong
> --
>
> Key: HBASE-3317
> URL: https://issues.apache.org/jira/browse/HBASE-3317
> Project: HBase
>  Issue Type: Bug
>Reporter: Ed Kohlwey
>Priority: Trivial
>
> The throws declaration for Bytes.incrementBytes() states that an IOException 
> is thrown by the method, and javadocs suggest that this is expected if the 
> byte array's size is larger than SIZEOF_LONG.
> The code actually uses an IllegalArgumentException, which is probably more 
> appropriate anyways. This should be changed to simplify the code that uses 
> this method.




[jira] Created: (HBASE-3317) Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong

2010-12-07 Thread Ed Kohlwey (JIRA)
Javadoc and Throws Declaration for Bytes.incrementBytes() is Wrong
--

 Key: HBASE-3317
 URL: https://issues.apache.org/jira/browse/HBASE-3317
 Project: HBase
  Issue Type: Bug
Reporter: Ed Kohlwey


The throws declaration for Bytes.incrementBytes() states that an IOException is 
thrown by the method, and javadocs suggest that this is expected if the byte 
array's size is larger than SIZEOF_LONG.

The code actually uses an IllegalArgumentException, which is probably more 
appropriate anyways. This should be changed to simplify the code that uses this 
method.




[jira] Resolved: (HBASE-1998) Check that session timeout is actually being set; it doesn't seem to be

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1998.
--

Resolution: Fixed

Fixed by zk 3.3.2:

{code}
2010-12-07 18:57:32,025 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, connectString=sv2borg180:20001 sessionTimeout=18 
watcher=master:6
2010-12-07 18:57:32,046 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server sv2borg180/10.20.20.180:20001
2010-12-07 18:57:32,051 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to sv2borg180/10.20.20.180:20001, initiating session
2010-12-07 18:57:32,121 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server sv2borg180/10.20.20.180:20001, sessionid = 
0x12cc2321857, negotiated timeout = 18
{code}

> Check that session timeout is actually being set; it doesn't seem to be
> ---
>
> Key: HBASE-1998
> URL: https://issues.apache.org/jira/browse/HBASE-1998
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> The just-previous issue is about relation of tick time to session timeout.  
> We need to fix that.  Independently, it would seem that session timeouts are 
> after 30 seconds, not the 40 seconds we'd expect when passing in a tick time 
> of 2 seconds and a default session timeout of 60 seconds.  Check.




[jira] Resolved: (HBASE-1988) OutOfMemoryError in RegionServer

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1988.
--

Resolution: Invalid

The moment has passed.

> OutOfMemoryError in RegionServer
> 
>
> Key: HBASE-1988
> URL: https://issues.apache.org/jira/browse/HBASE-1988
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.20.2
> Environment: Java 1.6.0_16 64bit on CentOS with 3000M max heap
>Reporter: Stefan Will
> Attachments: regionserver.log.gz
>
>
> RegionServers tend to die with an OutOfMemoryError under load. I expected 
> this problem to go away with the fix for HBASE-1927 in 0.20.2 RC1, but it's 
> still happening. Also, when this happens the cluster becomes unresponsive, 
> even once the load on the machines has gone back down. Interestingly there 
> are lots and lots of scanner lease expired messages right before the OOM.




[jira] Resolved: (HBASE-1986) Batch Gets

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1986.
--

Resolution: Duplicate

Fixed by hbase-1845

> Batch Gets
> --
>
> Key: HBASE-1986
> URL: https://issues.apache.org/jira/browse/HBASE-1986
> Project: HBase
>  Issue Type: New Feature
>  Components: client, regionserver
>Affects Versions: 0.20.1
>Reporter: Peter Rietzler
>
> Put as well as Delete allow batch operations using HTable.put(List<Put>) and 
> HTable.delete(ArrayList<Delete>).
> We often need to fetch a few thousand rows per id and currently have to issue 
> an RPC call (using HTable.get(Get)) for each of the rows. Support for batch 
> gets, a la HTable.get(List<Get>), could easily improve performance since only 
> one RPC call per region server must be issued.
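To sketch why this helps, here is a toy model of the client-side grouping (locate() is a hypothetical stand-in for region lookup, and Strings stand in for rows and servers; the real client would then send one multi-get RPC per server):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;

// Illustrative model: group row keys by the server hosting them, then
// issue one RPC per group instead of one per row.
class BatchGets {
  static Map<String, List<String>> groupByServer(
      List<String> rows, Function<String, String> locate) {
    Map<String, List<String>> perServer =
        new TreeMap<String, List<String>>();
    for (String row : rows) {
      String server = locate.apply(row); // stand-in for region lookup
      List<String> batch = perServer.get(server);
      if (batch == null) {
        batch = new ArrayList<String>();
        perServer.put(server, batch);
      }
      batch.add(row);
    }
    return perServer; // one multi-get RPC per map entry
  }
}
```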




[jira] Resolved: (HBASE-1983) Fix up the hbase-default.xml descriptions; in particular, note interaction between flush and write buffer

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1983.
--

Resolution: Invalid

I did a pass over hbase-default.xml in the lead-up to 0.90.0RC1.  Closing as no 
longer valid.

> Fix up the hbase-default.xml descriptions; in particular, note interaction 
> between flush and write buffer
> -
>
> Key: HBASE-1983
> URL: https://issues.apache.org/jira/browse/HBASE-1983
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
> Fix For: 0.92.0
>
>
> I was wondering why I was only flushing every 1k edits though I'd set 
> hbase.regionserver.flushlogentries to 100.  Couldn't figure why.  J-D set me 
> straight.  Flush is done up in HRS now at end of a put.  If the put is a big 
> batch put, then 1k edits will go in before I sync.  In our descriptions in 
> hbase-default.xml, need to bring this out.  I'm sure this file could do with 
> a good clean up by now too... Let this issue cover that too.  For 0.21.




[jira] Resolved: (HBASE-1967) [Transactional] client.TestTransactions.testPutPutScan fails sometimes

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1967.
--

Resolution: Invalid

Resolving as not valid.   We don't carry transactional anymore in hbase.

> [Transactional] client.TestTransactions.testPutPutScan fails sometimes
> --
>
> Key: HBASE-1967
> URL: https://issues.apache.org/jira/browse/HBASE-1967
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.1
>Reporter: Jean-Daniel Cryans
>Assignee: Clint Morgan
>Priority: Minor
>
> Testcase: testPutPutScan took 15.822 sec FAILED
> expected:<299> but was:<199>
> Not sure exactly how the test is supposed to work but it seems that sometimes 
> the two Puts are on the same timestamp, so the value returned is 199. I will 
> commit a temporary fix to branch in order to release 0.20.2




[jira] Resolved: (HBASE-1955) When starting Hbase cluster,

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1955.
--

Resolution: Invalid

We don't have a safe-mode anymore.  Resolving as invalid.

> When starting Hbase cluster, 
> -
>
> Key: HBASE-1955
> URL: https://issues.apache.org/jira/browse/HBASE-1955
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.20.0, 0.19.3, 0.20.1
> Environment: ulimit -n 1024
> 2009-11-02 12:42:47,653 INFO org.apache.hadoop.hbase.master.HMaster: 
> vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., 
> vmVersion=14.2
> -b01
> 2009-11-02 12:42:47,653 INFO org.apache.hadoop.hbase.master.HMaster: 
> vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, 
> -XX:+UseConcMarkSweepGC, -
> XX:+CMSIncrementalMode, -Dhbase.log.dir=/home/hadoop/hbase/bin/../logs, 
> -Dhbase.log.file=hbase-hadoop-master-px1011.log, 
> -Dhbase.home.dir=/home/hadoop/hbase/
> bin/.., -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,DRFA, 
> -Djava.library.path=/home/hadoop/hbase/bin/../lib/native/Linux-amd64-64]
> 2009-11-02 12:42:47,701 INFO org.apache.hadoop.hbase.master.HMaster: My 
> address is px1010.myserver.int:6
> 2009-11-02 12:42:48,015 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: 
> Initializing RPC Metrics with hostName=HMaster, port=6
> 2009-11-02 12:42:48,096 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.2.1-808558, built on 08/27/2009 18:48 GMT
> 2009-11-02 12:42:48,096 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:host.name=px1010.myserver.int
> 2009-11-02 12:42:48,096 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.version=1.6.0_16
> 2009-11-02 12:42:48,096 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.vendor=Sun Microsystems Inc.
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.home=/home/hadoop/jdk1.6.0_16/jre
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.class.path=/home/hadoop/hbase/bin/../conf:/home/hadoop/java/lib/tools.ja
> r:/home/hadoop/hbase/bin/..:/home/hadoop/hbase/bin/../hbase-0.20.1.jar:/home/hadoop/hbase/bin/../lib/AgileJSON-2009-03-30.jar:/home/hadoop/hbase/bin/../lib/c
> ommons-cli-2.0-SNAPSHOT.jar:/home/hadoop/hbase/bin/../lib/commons-el-from-jetty-5.1.4.jar:/home/hadoop/hbase/bin/../lib/commons-httpclient-3.0.1.jar:/home/ha
> doop/hbase/bin/../lib/commons-logging-1.0.4.jar:/home/hadoop/hbase/bin/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hbase/bin/../lib/commons-math-1.1.ja
> r:/home/hadoop/hbase/bin/../lib/hadoop-0.20.1-hdfs127-core.jar:/home/hadoop/hbase/bin/../lib/hadoop-0.20.1-test.jar:/home/hadoop/hbase/bin/../lib/jasper-comp
> iler-5.5.12.jar:/home/hadoop/hbase/bin/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hbase/bin/../lib/jetty-6.1.14.jar:/home/hadoop/hbase/bin/../lib/jetty-ut
> il-6.1.14.jar:/home/hadoop/hbase/bin/../lib/jruby-complete-1.2.0.jar:/home/hadoop/hbase/bin/../lib/json.jar:/home/hadoop/hbase/bin/../lib/junit-3.8.1.jar:/ho
> me/hadoop/hbase/bin/../lib/libthrift-r771587.jar:/home/hadoop/hbase/bin/../lib/log4j-1.2.15.jar:/home/hadoop/hbase/bin/../lib/lucene-core-2.2.0.jar:/home/had
> oop/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/bin/../lib/xmlenc-0.52.jar:/home/hadoop/hbase/bin/../lib/zookeeper-3.2.1.jar:/home/hadoop/
> hbase/bin/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hbase/bin/../lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/hbase/hbase-0.20.1.jar:/home/hadoop/hbase/conf
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.library.path=/home/hadoop/hbase/bin/../lib/native/Linux-amd64-64
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.io.tmpdir=/tmp
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.compiler=
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.name=Linux
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.arch=amd64
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.version=2.6.18-128.el5
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.name=hadoop
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.home=/home/hadoop
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.dir=/home/hadoop/hbase-0.20.1
> 2009-11-02 12:42:48,097 INFO org.apache.zookeeper.ZooKeeper: Initiating 
> client connection, 
> connectString=c1-zk4:2181,c1-zk3:2181,c1-zk2:2181,c1-zk1:2181,c1-z
> k5:2181 sessionTimeout=6 watcher=Thread[Thread-1,5,main]
>Reporter: Ryan Smith
>
> When 

[jira] Resolved: (HBASE-1950) Migration from 0.20 to 0.21

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1950.
--

Resolution: Invalid

There is no migration going from 0.20 to 0.90.

> Migration from 0.20 to 0.21
> ---
>
> Key: HBASE-1950
> URL: https://issues.apache.org/jira/browse/HBASE-1950
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>
> + Remove of existing Historians from .META.
> + Rewrite .META. table so schema and state is done out in zk instead (HCD and 
> HTD are versioned so shouldn't be hard migrating these).




[jira] Resolved: (HBASE-1297) Add a wiki page on hardware sizing

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1297.
--

Resolution: Duplicate

Duplicate of HBASE-1931

> Add a wiki page on hardware sizing
> --
>
> Key: HBASE-1297
> URL: https://issues.apache.org/jira/browse/HBASE-1297
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>
> We need a page on recommended hardware sizings.  A thread up on hbase-user 
> with contrib. by Andrew, Ryan, Billy and Yabo-Arber has the meat of an 
> article (Could start with a page that had a link to this mail thread).




[jira] Resolved: (HBASE-1910) IndexedRegion RPC deadlock

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1910.
--

Resolution: Fixed

indexed has been removed from hbase. 

> IndexedRegion RPC deadlock
> --
>
> Key: HBASE-1910
> URL: https://issues.apache.org/jira/browse/HBASE-1910
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.1, 0.90.0
>Reporter: Andrew Purtell
> Fix For: 0.92.0
>
> Attachments: thread.dump.gz
>
>
> From Tatsuya Kawano up on hbase-user@
> {quote}
> 50 client threads who try to put millions of records, autoFlush(false), 
> flushCommits() on every 5,000 put. After inserting about 3 million records, a 
> deadlock occurred on a region server who has both the table and index regions 
> loaded.
> I have attached a full thread dump of the deadlocked region server, and you 
> can see IPC Server handlers are blocked in 
> org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegion.updateIndex().
> I found the following FIXME comment on the updateIndex() method, and it seems 
> is the deadlock I'm having.
> {code}
>   // FIXME: This call takes place in an RPC, and requires an RPC. This makes 
> for
>   // a likely deadlock if the number of RPCs we are trying to serve is >= the
>   // number of handler threads.
>   private void updateIndex(IndexSpecification indexSpec, byte[] row,
>   SortedMap<byte[], byte[]> columnValues) throws IOException {
> {code}
> I use HBase 0.20.1 and my region servers were running with 10 RPC handler 
> threads on each (default).
> Maybe you can workaround this by adding more RPC handlers (increase the value 
> of "hbase.regionserver.handler.count" in hbase-site.xml)
> {quote}
> Opening this issue to track the FIXME.
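The FIXME's condition can be stated as a toy model: every handler serving a client put blocks on an index-update RPC that itself needs a free handler, so with a bounded handler pool the system wedges as soon as concurrent puts reach the pool size.

```java
// Toy model of the FIXME's deadlock condition: with h handler threads,
// nested index-update RPCs can only be served while some handler is
// free, i.e. while concurrent puts stay below h.
class IndexDeadlockModel {
  /** True if every handler is occupied by a put that is waiting on an
   *  index-update RPC no free handler can serve. */
  static boolean deadlocks(int handlerThreads, int concurrentPuts) {
    return concurrentPuts >= handlerThreads;
  }
}
```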




[jira] Resolved: (HBASE-1898) Each Store replays WAL split. Should be replayed at Region level (it used to be done here)

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1898.
--

Resolution: Duplicate

This has been done.  Resolving as duplicate.  See replayRecoveredEditsIfAny.

> Each Store replays WAL split.  Should be replayed at Region level (it used to 
> be done here)
> ---
>
> Key: HBASE-1898
> URL: https://issues.apache.org/jira/browse/HBASE-1898
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> Looking at Store constructor, each Store in a Region picks up the split 
> output log and replays it in turn.  Seems like we should be reading the file 
> of edits once up at the Region level and per edit figuring which store to 
> insert into?




[jira] Resolved: (HBASE-1802) master log recovery hole, master can crash and trash the logs

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1802.
--

Resolution: Invalid

The moment has passed.  Resolving as invalid.

> master log recovery hole, master can crash and trash the logs
> -
>
> Key: HBASE-1802
> URL: https://issues.apache.org/jira/browse/HBASE-1802
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.0
>Reporter: ryan rawson
>
> During recovery, the master had opened all the logfiles, but when it went to 
> open the destination files, it crashed.  The logfile is missing, the edits 
> did not get applied.
> looks like there is a hole whereby we delete the original logfiles before we 
> confirm the new output logs were written. oops!




[jira] Updated: (HBASE-1774) HTable$ClientScanner modifies its input parameters

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-1774:
-

Tags: noob

Should make a copy of the Scan object passed to HTable and remove the HEADSUP.
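A sketch of that defensive copy (Scan and ClientScanner below are minimal stand-ins for the real org.apache.hadoop.hbase.client classes): the scanner copies the caller's Scan on construction, so it can advance its own start row without mutating the input.

```java
// Minimal stand-ins illustrating a copy-on-construction fix.
class ScanCopy {
  static class Scan {
    byte[] startRow;
    Scan(byte[] startRow) { this.startRow = startRow; }
    Scan(Scan other) { this.startRow = other.startRow.clone(); } // copy ctor
  }

  static class ClientScanner {
    private final Scan scan;
    ClientScanner(Scan input) {
      this.scan = new Scan(input); // defensive copy; input stays untouched
    }
    void advanceTo(byte[] row) { this.scan.startRow = row; }
    byte[] currentStart() { return this.scan.startRow; }
  }
}
```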

> HTable$ClientScanner modifies its input parameters
> --
>
> Key: HBASE-1774
> URL: https://issues.apache.org/jira/browse/HBASE-1774
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.20.0
>Reporter: Jim Kellerman
>Assignee: Jim Kellerman
>Priority: Critical
>
> HTable$ClientScanner modifies the Scan that is passed to it on construction.
> I would consider this to be bad programming practice because if I wanted to 
> use the same Scan object to scan multiple tables, I would not expect one 
> table scan to affect the other, but it does.
> If input parameters are going to be modified either now or later it should be 
> called out *loudly* in the javadoc. The only way I found this behavior was by 
> creating an application that did scan multiple tables using the same Scan 
> object and having 'weird stuff' happen.
> In my opinion, if you want to modify a field in an input parameter, you 
> should:
> - make a copy of the original object
> - optionally return a reference to the copy.
> There is no javadoc about this behavior. The only thing I found was a comment 
> in HTable$ClientScanner:
> {code}
> // HEADSUP: The scan internal start row can change as we move through 
> table.
> {code}
> Is there a use case that requires this behavior? If so, I would recommend 
> that ResultScanner  (and the classes that implement it) provide an accessor 
> to the mutable copy of the input Scan and leave the input argument alone.




[jira] Commented: (HBASE-1762) Remove concept of ZooKeeper from HConnection interface

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968996#action_12968996
 ] 

stack commented on HBASE-1762:
--

Hmmm... we've become more dependent on zk and on the getZooKeeperWatcher in 
HConnection rather than less.  Ugh.

> Remove concept of ZooKeeper from HConnection interface
> --
>
> Key: HBASE-1762
> URL: https://issues.apache.org/jira/browse/HBASE-1762
> Project: HBase
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 0.20.0
>Reporter: Ken Weiner
>Assignee: stack
> Fix For: 0.92.0
>
> Attachments: HBASE-1762.patch
>
>
> The concept of ZooKeeper is really an implementation detail and should not be 
> exposed in the {{HConnection}} interface.   Therefore, I suggest removing the 
> {{HConnection.getZooKeeperWrapper()}} method from the interface. 
> I couldn't find any uses of this method within the HBase code base except for 
> in one of the unit tests: {{org.apache.hadoop.hbase.TestZooKeeper}}.  This 
> unit test should be changed to instantiate the implementation of 
> {{HConnection}} directly, allowing it to use the {{getZooKeeperWrapper()}} 
> method.  This requires making 
> {{org.apache.hadoop.hbase.client.HConnectionManager.TableServers}} public.  
> (I actually think TableServers should be moved out into an outer class, but 
> in the spirit of small patches, I'll refrain from suggesting that in this 
> issue).
> I'll attach a patch for:
> # The removal of {{HConnection.getZooKeeperWrapper()}}
> # Change of {{TableServers}} class from private to public
> # Direct instantiation of {{TableServers}} within {{TestZooKeeper}}.




[jira] Commented: (HBASE-1744) Thrift server to match the new java api.

2010-12-07 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968992#action_12968992
 ] 

Lars Francke commented on HBASE-1744:
-

I'll get on it next week.

> Thrift server to match the new java api.
> 
>
> Key: HBASE-1744
> URL: https://issues.apache.org/jira/browse/HBASE-1744
> Project: HBase
>  Issue Type: Improvement
>  Components: thrift
>Reporter: Tim Sell
>Assignee: Lars Francke
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-1744.preview.1.patch, thriftexperiment.patch
>
>
> The mutateRows, etc. API is a little confusing compared to the new, cleaner 
> Java client.
> Thinking of ways to make a Thrift client that is just as elegant. Something 
> like:
> void put(1:Bytes table, 2:TPut put) throws (1:IOError io)
> with:
> struct TColumn {
>   1:Bytes family,
>   2:Bytes qualifier,
>   3:i64 timestamp
> }
> struct TPut {
>   1:Bytes row,
>   2:map values
> }
> This creates a more verbose rpc than if the columns in TPut were just 
> map>, but that is harder to fit timestamps into and 
> still be intuitive from, say, Python.
> Presumably the goal of a Thrift gateway is to be easy first.
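A rough sketch of why the explicit TColumn key makes timestamps easy to carry, using hypothetical Java stand-ins for the generated Thrift structs (names and fields are illustrative only, not the actual generated code):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Hypothetical stand-in for the TColumn struct in the proposal.
    static class TColumn {
        final byte[] family, qualifier;
        final long timestamp;  // the timestamp travels with the column key
        TColumn(byte[] f, byte[] q, long ts) { family = f; qualifier = q; timestamp = ts; }
    }

    // Hypothetical stand-in for the TPut struct.
    static class TPut {
        final byte[] row;
        final Map<TColumn, byte[]> values = new HashMap<>();
        TPut(byte[] row) { this.row = row; }
    }

    public static void main(String[] args) {
        TPut put = new TPut("row1".getBytes());
        // Each cell carries an explicit timestamp; with a nested map of raw
        // bytes there would be nowhere obvious to put it.
        put.values.put(new TColumn("f".getBytes(), "q".getBytes(), 1234L), "v".getBytes());
        for (TColumn c : put.values.keySet()) {
            System.out.println(c.timestamp);
        }
    }
}
```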

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-1744) Thrift server to match the new java api.

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968988#action_12968988
 ] 

stack commented on HBASE-1744:
--

@Lars Any chance of your updating this patch?  I'd like to get it into TRUNK 
for 0.92.  Good on you.

> Thrift server to match the new java api.
> 
>
> Key: HBASE-1744
> URL: https://issues.apache.org/jira/browse/HBASE-1744
> Project: HBase
>  Issue Type: Improvement
>  Components: thrift
>Reporter: Tim Sell
>Assignee: Lars Francke
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-1744.preview.1.patch, thriftexperiment.patch
>
>
> The mutateRows, etc. API is a little confusing compared to the new, cleaner 
> Java client.
> Thinking of ways to make a Thrift client that is just as elegant. Something 
> like:
> void put(1:Bytes table, 2:TPut put) throws (1:IOError io)
> with:
> struct TColumn {
>   1:Bytes family,
>   2:Bytes qualifier,
>   3:i64 timestamp
> }
> struct TPut {
>   1:Bytes row,
>   2:map values
> }
> This creates a more verbose rpc than if the columns in TPut were just 
> map>, but that is harder to fit timestamps into and 
> still be intuitive from, say, Python.
> Presumably the goal of a Thrift gateway is to be easy first.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1711) META not cleaned up after table deletion/truncation

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1711.
--

Resolution: Invalid

Resolving as no longer valid.  I just did a truncate of a 15k-region table and 
when all was done, all that was in meta was a single, squeaky-clean region.

> META not cleaned up after table deletion/truncation
> ---
>
> Key: HBASE-1711
> URL: https://issues.apache.org/jira/browse/HBASE-1711
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.1
> Environment: 8 RS, 1 Master, ZK on 5 nodes in the same cluster
> All machines are Quad Core + 8GB RAM + 500GB HD
>Reporter: Amandeep Khurana
>
> On deletion or truncation of a table (including major compacting the META), 
> the entries for that table should get deleted from the META table. That doesn't 
> happen and the entries remain. This causes Region Not Hosting exceptions when 
> doing insertions into the table later on. The files for the deleted table do 
> get deleted from the FS, though.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1721) IOException: Cannot append; log is closed

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1721.
--

Resolution: Invalid

Resolving.  The moment passed long ago.  Don't see this any more.

> IOException: Cannot append; log is closed
> -
>
> Key: HBASE-1721
> URL: https://issues.apache.org/jira/browse/HBASE-1721
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> JGray RS was stuck doing the below:
> {code}
> IOException: Cannot append; log is closed
> {code}
> Just kept going on and on.
> This was after a zk session timeout.  The regionserver had restarted itself and had 
> been taking on new regions just fine.  I saw this entry from HLog:
> {code}
> 2009-07-29 08:13:13,493 INFO org.apache.hadoop.hbase.regionserver.HLog: HLog 
> configuration: blocksize=67108864, rollsize=63753420, enabled=true, 
> flushlogentries=100, optionallogflushinternal=1ms
> 2009-07-29 08:13:13,495 INFO org.apache.hadoop.hbase.regionserver.HLog: New 
> hlog /hbase/.logs/hb2,60020,1248880393481/hlog.dat.1248880393493
> {code}
> Then two minutes later I saw the 'Cannot append'.
> I do not see any close, nor, on a cursory glance, how this situation might 
> arise -- something to do with the restart?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1681) NSRE due to duplicate assignment to the same region server

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1681.
--

Resolution: Invalid

Closing as no longer valid given that the master has been redone to make duplicate 
assignment (near) impossible -- at least if there is a duplicate assignment, it 
should be a fixable bug now rather than an unpluggable race.

> NSRE due to duplicate assignment to the same region server
> --
>
> Key: HBASE-1681
> URL: https://issues.apache.org/jira/browse/HBASE-1681
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.20.0
> Environment: Software
> * hbase trunk (0.20.0-dev, r795916)
> * hadoop-0.20.0
> * zookeeper-3.2.0 
> Hardware
> * 3 dev servers: 8 core, 16G ram, 4x750G 7200 rpm SATA disk, RAID 0, each 
> disk individually mounted
> * snv-it-lin-010: 
>   o hadoop namenode (1G)
>   o hadoop secondary namenode (1G)
>   o hadoop datanode (1G, max_xreciver=4096, handler=50)
>   o hadoop job tracker (1G)
>   o hadoop task tracker (1G, max_map=1, max_red=1)
>   o zookeeper (1G)
>   o hbase master (2G)
>   o hbase region server (2G) 
> * snv-it-lin-011: 
>   o hadoop datanode (1G, max_xreciver=4096, handler=50)
>   o hadoop task tracker (1G, max_map=1, max_red=1)
>   o zookeeper (1G)
>   o hbase region server (2G, handler=50) 
> * snv-it-lin-012: 
>   o hadoop datanode (1G, max_xreciver=4096, handler=50)
>   o hadoop task tracker (1G, max_map=1, max_red=1)
>   o zookeeper (1G)
>   o hbase region server (2G, handler=50) 
> * jvm: 32bit 
>  
>Reporter: Haijun Cao
>
> Reproduce: 
> 1. populate hbase with 100 m records: bin/hadoop jar hbase-dev-test.jar 
> --rows=100 sequentialWrite 100
> 2. populate hbase with 10 m records (random writes): bin/hadoop jar 
> hbase-dev-test.jar --rows=100 randomWrite 10
> 3. scan 10 m records: bin/hadoop jar hbase-dev-test.jar --rows=100 scan 10
> Two scan mapper tasks failed with an NSRE exception for one region:
> org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: 
> TestTable,0001724032,1248204794507
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2251)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1862)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:650)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:913)
> Grep master log for TestTable,0001724032,1248204794507:
> 2009-07-21 12:33:18,275 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_SPLIT: TestTable,0001724032,1248141721258: Daughters; TestTable,0001724032,1248204794507, TestTable,000178,1248204794507 from snv-it-lin-010.projectrialto.com,60020,1248115451722; 1 of 3
> 2009-07-21 12:33:19,169 INFO org.apache.hadoop.hbase.master.RegionManager: Assigning region TestTable,0001724032,1248204794507 to snv-it-lin-011.projectrialto.com,60020,1248115452051
> 2009-07-21 12:33:21,464 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of TestTable,0001724032,1248204794507 is not valid;  Server '' startCode: 0 unknown.
> 2009-07-21 12:33:22,207 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_PROCESS_OPEN: TestTable,0001724032,1248204794507 from snv-it-lin-011.projectrialto.com,60020,1248115452051; 1 of 1
> 2009-07-21 12:33:22,208 INFO org.apache.hadoop.hbase.master.RegionManager: Assigning region TestTable,0001724032,1248204794507 to snv-it-lin-011.projectrialto.com,60020,1248115452051
> 2009-07-21 12:33:25,245 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_PROCESS_OPEN: TestTable,0001724032,1248204794507 from snv-it-lin-011.projectrialto.com,60020,1248115452051; 1 of 3
> 2009-07-21 12:33:25,245 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_PROCESS_OPEN: TestTable,0001724032,1248204794507 from snv-it-lin-011.projectrialto.com,60020,1248115452051; 3 of 3
> 2009-07-21 12:33:28,283 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_PROCESS_OPEN: TestTable,0001724032,1248204794507 from snv-it-lin-011.projectrialto.com,60020,1248115452051; 1 of 7
> 2009-07-21 12:33:28,283 INFO org.apache.hadoop.hbase.master.ServerManager: Received MSG_REPORT_PROCESS_OPEN: TestTable,0

[jira] Commented: (HBASE-1667) hbase-daemon.sh stop master should only stop the master, not the cluster

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968982#action_12968982
 ] 

stack commented on HBASE-1667:
--

HBaseAdmin now has a stopMaster which stops the master only.  Need to get this 
out to shell or into bash scripts.

> hbase-daemon.sh stop master should only stop the master, not the cluster
> 
>
> Key: HBASE-1667
> URL: https://issues.apache.org/jira/browse/HBASE-1667
> Project: HBase
>  Issue Type: Improvement
>  Components: master, scripts
>Affects Versions: 0.20.0
>Reporter: Rong-En Fan
> Fix For: 0.92.0
>
>
> 0.20 supports multiple masters. However,
> bin/hbase-daemon.sh stop master
> on a backup master will bring the whole cluster down.
> Per the rolling-upgrade wiki that stack pointed out, kill -9 is currently the 
> only way to stop a backup master.
> I think it's better to add some sort of magic so that we can use something like
> bin/hbase-daemon.sh stop master
> to properly stop either the backup master or the whole cluster.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1673) .META. regionserver died, cluster recovered but not UI

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1673.
--

Resolution: Invalid

Resolving as invalid.  This HCM stuff has all changed since.

> .META. regionserver died, cluster recovered but not UI
> --
>
> Key: HBASE-1673
> URL: https://issues.apache.org/jira/browse/HBASE-1673
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> Getting 500 in UI:
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact 
> region server null for region , row '', but failed after 3 attempts.
> Exceptions:
> java.net.ConnectException: Call to /208.76.44.141:60020 failed on connection 
> exception: java.net.ConnectException: Connection refused
> ...
> {code}
> Doesn't recover.
> I think the issue is here in HCM:
> {code}
> } catch (IOException e) {
>   if (e instanceof RemoteException) {
> e = RemoteExceptionHandler.decodeRemoteException(
> (RemoteException) e);
>   }
>   if (tries < numRetries - 1) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("locateRegionInMeta attempt " + tries + " of " +
> this.numRetries + " failed; retrying after sleep of " +
> getPauseTime(tries), e);
> }
> relocateRegion(parentTable, metaKey);
>   } else {
> ...
> {code}
> The call to relocateRegion is going to result in an attempt at finding the 
> .META.,,1 region in .META., which will get a ConnectionException again.
> On a ConnectionException, we should be backing up and going to -ROOT- to find 
> the new location of .META.
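The suggested behaviour -- on a connection failure, back up a level and re-resolve from -ROOT- instead of retrying the stale cached location -- can be sketched generically. The names below are purely illustrative, not the real HConnectionManager API:

```java
// Illustrative sketch only: on a connection failure, invalidate the cached
// .META. location and re-resolve it from the parent catalog (-ROOT-) rather
// than retrying the same dead address.
public class Main {
    static String cachedMetaLocation = "dead-server:60020";

    // Stand-in for asking -ROOT- where .META. lives now.
    static String lookupInRoot() { return "live-server:60020"; }

    static String connect(String addr) {
        if (addr.startsWith("dead")) throw new RuntimeException("Connection refused");
        return addr;
    }

    static String locateRegion() {
        try {
            return connect(cachedMetaLocation);
        } catch (RuntimeException connectionRefused) {
            // Back up a level: drop the stale cached location and re-resolve.
            cachedMetaLocation = lookupInRoot();
            return connect(cachedMetaLocation);
        }
    }

    public static void main(String[] args) {
        System.out.println(locateRegion());
    }
}
```

Without the fallback, the retry loop keeps asking the dead server and exhausts its retries, which matches the stack trace above.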

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-1502) Remove need for heartbeats in HBase

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968980#action_12968980
 ] 

stack commented on HBASE-1502:
--

We are almost there.  If we could get rid of the split message and shutdown 
being passed by HMsg, we'd be able to drop HMsg and just rely completely on zk 
client ping.

> Remove need for heartbeats in HBase
> ---
>
> Key: HBASE-1502
> URL: https://issues.apache.org/jira/browse/HBASE-1502
> Project: HBase
>  Issue Type: Wish
>Reporter: Nitay Joffe
> Fix For: 0.92.0
>
>
> HBase currently uses heartbeats between region servers and the master, 
> piggybacking information on them when it can. This issue is to investigate if 
> we can get rid of the need for those using ZooKeeper events.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1479) signal handler not working on regionserver: hbase-daemon.sh stop regionserver

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1479.
--

Resolution: Fixed

This works in TRUNK and 0.90.  I tested it recently.

> signal handler not working on regionserver: hbase-daemon.sh stop regionserver
> -
>
> Key: HBASE-1479
> URL: https://issues.apache.org/jira/browse/HBASE-1479
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> I'm not clear on what's going on here -- will improve logging in the meantime so 
> we can get a better picture -- but the short story is that a stop regionserver was 
> done while the regionserver was in reportForDuty, and we never exited this state.
> Here is the longer story:
> The regionserver is stuck cycling in reportForDuty.  The shutdown handler should be 
> setting the stop flag, so we should be breaking out of the loop, but we don't 
> seem to be doing that.  I can see the start of the shutdown thread message... but 
> not the ending message as it joins on the main thread, waiting on it to go down.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1313) IllegalStateException when creating new table

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1313.
--

Resolution: Invalid

Resolving as invalid/stale.  Stuff works differently now in the new master.

> IllegalStateException when creating new table
> -
>
> Key: HBASE-1313
> URL: https://issues.apache.org/jira/browse/HBASE-1313
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.20.0
>Reporter: Andrew Purtell
>Priority: Minor
>
> 2009-04-06 17:34:13,357 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: 
> urls,,1239039250933
> 2009-04-06 17:34:13,357 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: 
> urls,,1239039250933
> 2009-04-06 17:34:13,358 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Opening region urls,,1239039250933/39686773
> 2009-04-06 17:34:13,360 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Next sequence id for region urls,,1239039250933 is 0
> 2009-04-06 17:34:13,360 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> region urls,,1239039250933/39686773 available
> 2009-04-06 17:34:13,360 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction requested 
> for region urls,,1239039250933/39686773 because: Region open check
> 2009-04-06 17:34:13,360 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> starting  compaction on region urls,,1239039250933
> 2009-04-06 17:34:13,360 DEBUG org.apache.hadoop.hbase.regionserver.Store: 
> 39686773/info: no store files to compact
> 2009-04-06 17:34:13,361 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> compaction completed on region urls,,1239039250933 in 0sec
> 2009-04-06 17:34:16,368 WARN 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Processing message 
> (Retry: 0)
> java.io.IOException: java.io.IOException: java.lang.IllegalStateException: 
> Cannot set a region as open if it has not been pending. State: 
> name=urls,,1239039250933, unassigned=true, pendingOpen=false, open=false, 
> closing=false, pendingClose=false, closed=false, offlined=false
>   at 
> org.apache.hadoop.hbase.master.RegionManager$RegionState.setOpen(RegionManager.java:1236)
>   at 
> org.apache.hadoop.hbase.master.RegionManager.setOpen(RegionManager.java:805)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.processRegionOpen(ServerManager.java:524)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.processMsgs(ServerManager.java:390)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.processRegionServerAllsWell(ServerManager.java:361)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:269)
>   at 
> org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:601)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:909)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
>   at 
> org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
>   at 
> org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:493)
>   at java.lang.Thread.run(Thread.java:619)
> 2009-04-06 17:34:16,376 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: 
> urls,,1239039250933
> 2009-04-06 17:34:16,376 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: 
> urls,,1239039250933
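The IllegalStateException above is a state-machine guard firing: a region may only be marked open if it was pending open. A minimal illustrative sketch of that kind of guard (not the actual RegionManager code):

```java
// Illustrative sketch of a region-assignment state guard; NOT the real
// RegionManager implementation.
public class Main {
    enum State { UNASSIGNED, PENDING_OPEN, OPEN }

    static class RegionState {
        State state = State.UNASSIGNED;

        void setPendingOpen() { state = State.PENDING_OPEN; }

        void setOpen() {
            // Reject an open report the master never asked for:
            if (state != State.PENDING_OPEN) {
                throw new IllegalStateException(
                    "Cannot set a region as open if it has not been pending. State: " + state);
            }
            state = State.OPEN;
        }
    }

    public static void main(String[] args) {
        RegionState rs = new RegionState();
        try {
            rs.setOpen();              // no pending-open recorded: rejected
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
        rs.setPendingOpen();
        rs.setOpen();                  // legal transition
        System.out.println(rs.state);
    }
}
```

The bug report amounts to the regionserver sending MSG_REGION_OPEN reports that the master's state machine no longer expects.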

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1237) Make How-to run hbase for testing and debuging tutorial

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1237.
--

Resolution: Duplicate

Marking as duplicate.  Our 'book' in 0.90.0 has such tutorials.

> Make How-to run hbase for testing and debuging tutorial
> ---
>
> Key: HBASE-1237
> URL: https://issues.apache.org/jira/browse/HBASE-1237
> Project: HBase
>  Issue Type: Task
>  Components: documentation
> Environment: Eclipse
>Reporter: Evgeny Ryabitskiy
>Assignee: Evgeny Ryabitskiy
>Priority: Trivial
>
> It took me some time to run HBase for debugging, and it would be cool to make 
> a tutorial for such a thing.
>  Also about how to deploy a mini cluster for the same purpose.
> Maybe some Ant task that would simplify running.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1228) Hang on DFSOS#flushInternal for minutes after regionserver crash

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1228.
--

Resolution: Invalid

Resolving as stale/invalid.  A lot has changed since.  Let's open a new issue if 
we see this again.

> Hang on DFSOS#flushInternal for minutes after regionserver crash
> 
>
> Key: HBASE-1228
> URL: https://issues.apache.org/jira/browse/HBASE-1228
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: Ben Maurer
>
> After an exception that forced an HRegionServer to shut down, I'm seeing it 
> hang in the following method for at least a few minutes:
> "regionserver/0:0:0:0:0:0:0:0:60020" prio=10 tid=0x2aaaf41a9000 
> nid=0x10f6 in Object.wait() [0x422dd000..0x422ddb10]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:485)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.flushInternal(DFSClient.java:3025)
>   - locked <0x2aaad8fa2410> (a java.util.LinkedList)
>   - locked <0x2aaad8fa2078> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3105)
>   - locked <0x2aaad8fa2078> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3054)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
>   at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:959)
>   - locked <0x2aaad8fa1f10> (a 
> org.apache.hadoop.io.SequenceFile$Writer)
>   at org.apache.hadoop.hbase.regionserver.HLog.close(HLog.java:431)
>   - locked <0x2aaab378b290> (a java.lang.Integer)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:498)
>   at java.lang.Thread.run(Thread.java:619)
> I believe the file system may have been closed and thus there is trouble 
> flushing the HLog. The HLog should be proactively closed before shutdown 
> begins, to maximize the chances of it surviving the crash.
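The ordering argued for above (close the log before the rest of shutdown can take the filesystem away) can be sketched as a trivial shutdown sequence; all names are illustrative, not the HRegionServer shutdown path:

```java
// Illustrative sketch of the proposed shutdown ordering: flush and close the
// write-ahead log first, so its close cannot block on (or race with) a
// filesystem that has already been torn down.
public class Main {
    static final StringBuilder trace = new StringBuilder();

    static void closeLog()        { trace.append("log-closed;"); }
    static void closeFileSystem() { trace.append("fs-closed;"); }

    static void shutdown() {
        closeLog();        // proactive: log edits are safe on disk first
        closeFileSystem(); // only then release the underlying filesystem
    }

    public static void main(String[] args) {
        shutdown();
        System.out.println(trace);
    }
}
```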

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1163) Master root scanner hung, clients blocked indefinitely waiting for getStartKeys()

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1163.
--

Resolution: Invalid

We don't have a root scanner any more.  Resolving invalid.

> Master root scanner hung, clients blocked indefinitely waiting for 
> getStartKeys()
> -
>
> Key: HBASE-1163
> URL: https://issues.apache.org/jira/browse/HBASE-1163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: Andrew Purtell
>Priority: Critical
> Attachments: stacks-1163.1.zip
>
>
> Mapreduce tasks based on TIF won't start. Clients trying to find regions by 
> start key block indefinitely (Heritrix hbase writer eventually times out 
> archiver). 
> Master seems hung in root scan. I've dumped thread stacks 10 times in 10 
> minutes and the same HBaseClient$Call  object appears in the trace. See below:
> Thread 21 (RegionManager.rootScanner):
>   State: WAITING
>   Blocked count: 500
>   Waited count: 621
>   Waiting on org.apache.hadoop.hbase.ipc.hbaseclient$c...@55a2896d
>   Stack:
> java.lang.Object.wait(Native Method)
> java.lang.Object.wait(Object.java:485)
> org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:695)
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:321)
> $Proxy2.next(Unknown Source)
> 
> org.apache.hadoop.hbase.master.BaseScanner.scanRegion(BaseScanner.java:161)
> org.apache.hadoop.hbase.master.RootScanner.scanRoot(RootScanner.java:55)
> 
> org.apache.hadoop.hbase.master.RootScanner.maintenanceScan(RootScanner.java:80)
> org.apache.hadoop.hbase.master.BaseScanner.chore(BaseScanner.java:137)
> org.apache.hadoop.hbase.Chore.run(Chore.java:65)
> I only see messages from the MetaScanner in the master log, nothing 
> from RootScanner.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1168) Master ignoring server restart

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1168.
--

Resolution: Invalid

Resolving as invalid.  Stuff works differently now in 0.90.

> Master ignoring server restart
> --
>
> Key: HBASE-1168
> URL: https://issues.apache.org/jira/browse/HBASE-1168
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.19.0
>Reporter: Andrew Purtell
>
> After an HRS goes down on an OOME and is restarted, the master acknowledges it 
> but does not assign any regions to it. A stack dump on the HRS shows it is up 
> and waiting for work. Relevant lines from the tail of the master log are:
> 2009-01-31 03:30:54,377 DEBUG 
> org.apache.hadoop.hbase.master.RegionServerOperation: Removed 
> 10.30.94.38:60020 from deadservers Map
> 2009-01-31 03:32:49,955 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> received server report from unknown server: 10.30.94.38:60020
> 2009-01-31 03:50:37,025 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Received start message from: 10.30.94.38:60020
> 2009-01-31 04:03:59,822 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Cancelling lease for 10.30.94.38:60020
> 2009-01-31 04:03:59,823 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Region server 10.30.94.38:60020: MSG_REPORT_EXITING -- lease cancelled
> 2009-01-31 04:05:31,061 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Received start message from: 10.30.94.38:60020

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1149) Dataloss when master and region server die at same time

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1149.
--

Resolution: Invalid

In 0.90 we have a working flush, and the new master is more tolerant of failure 
of both master and regionserver (there are unit tests that have the master and a 
regionserver die and then run recovery).  Let's open specific issues for 
problems in the new stuff.  Resolving as stale/invalid.

> Dataloss when master and region server die at same time
> ---
>
> Key: HBASE-1149
> URL: https://issues.apache.org/jira/browse/HBASE-1149
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: Ben Maurer
>
> To reproduce:
> 1) Run HBase in standalone mode
> 2)
> create 'foo', 'bar'
> 3) kill -9 the HBase server
> 4) Restart hbase
> The table 'foo' will not exist.
> Apparently this problem happens because the master and region servers die at 
> the same time. To me that suggests a fairly large flaw -- if your cluster has 
> a systematic failure (say, a power outage) it would cause data loss.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1115) Rollback after regions failed compaction

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1115.
--

Resolution: Invalid

Resolving as no longer valid.  Nicolas made it so compactions can be 
interrupted.  Implicit in his change is the fact that a compaction can be 
redone after a failure or interrupt.

> Rollback after regions failed compaction
> 
>
> Key: HBASE-1115
> URL: https://issues.apache.org/jira/browse/HBASE-1115
> Project: HBase
>  Issue Type: Bug
> Environment: apurtell cluster, HBase TRUNK on hadoop 0.18 branch
>Reporter: Andrew Purtell
>
> When compaction fails the affected region is left in an open and writable 
> state, but scanners fail construction. Later a manual reassignment via 
> close_region brings the region all the way back up.
> Should there be rollback after a failed compaction somehow?
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact 
> region server 10.30.94.50:60020 for region 
> content,c84bbfc94b2143e41ba119d159be2958,1231518442461, row 
> 'c84bbfc94b2143e41ba119d159be2958', but failed after 10 attempts.
> Exceptions:
> java.io.IOException: java.io.IOException: HStoreScanner failed construction
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.(StoreFileScanner.java:70)
>   at 
> org.apache.hadoop.hbase.regionserver.HStoreScanner.(HStoreScanner.java:84)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2119)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$HScanner.(HRegion.java:1878)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1162)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1673)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:894)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://sjdc-atr-dc-1.atr.trendmicro.com:5/data/hbase/content/1707725801/url/mapfiles/7039742044868774100/data
>   at 
> org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:394)
>   at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:695)
>   at 
> org.apache.hadoop.hbase.io.SequenceFile$Reader.(SequenceFile.java:1431)
>   at 
> org.apache.hadoop.hbase.io.SequenceFile$Reader.(SequenceFile.java:1426)
>   at 
> org.apache.hadoop.hbase.io.MapFile$Reader.createDataFileReader(MapFile.java:310)
>   at 
> org.apache.hadoop.hbase.io.HBaseMapFile$HBaseReader.createDataFileReader(HBaseMapFile.java:96)
>   at org.apache.hadoop.hbase.io.MapFile$Reader.open(MapFile.java:292)
>   at 
> org.apache.hadoop.hbase.io.HBaseMapFile$HBaseReader.(HBaseMapFile.java:79)
>   at 
> org.apache.hadoop.hbase.io.BloomFilterMapFile$Reader.(BloomFilterMapFile.java:65)
>   at 
> org.apache.hadoop.hbase.io.HalfMapFileReader.(HalfMapFileReader.java:86)
>   at 
> org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:438)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.openReaders(StoreFileScanner.java:96)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.(StoreFileScanner.java:67)
>   ... 10 more

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-1057) Example MR jobs to simulate bulk importing

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-1057.
--

Resolution: Invalid

Resolving as invalid.  We should use YCSB for this kind of thing.  Reopen if 
I'm wrong, Jon.

> Example MR jobs to simulate bulk importing
> --
>
> Key: HBASE-1057
> URL: https://issues.apache.org/jira/browse/HBASE-1057
> Project: HBase
>  Issue Type: New Feature
>Reporter: Jonathan Gray
>Assignee: Jonathan Gray
>Priority: Trivial
> Attachments: 1057.patch, ImportTestMR_v1.java
>
>
> It's very useful to have standalone MR jobs that simulate production system 
> load characteristics.  Specifically bulk importing as this has been 
> uncovering OOME and long-running compaction issues.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-916) Webpages should print current server time; e.g. regionhistorian logs events by timestamp -- "was last log just-now or hours ago?"

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-916.
-

Resolution: Invalid

Refers to the region historian, which has since been removed.  Resolving as 
invalid.

> Webpages should print current server time; e.g. regionhistorian logs events 
> by timestamp -- "was last log just-now or hours ago?"
> -
>
> Key: HBASE-916
> URL: https://issues.apache.org/jira/browse/HBASE-916
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-792) Rewrite getClosestAtOrJustBefore; doesn't scale as currently written

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-792.
-

Resolution: Fixed

Resolving as done for now.

> Rewrite getClosestAtOrJustBefore; doesn't scale as currently written
> 
>
> Key: HBASE-792
> URL: https://issues.apache.org/jira/browse/HBASE-792
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Attachments: 792.patch
>
>
> As currently written, as a table gets bigger, the number of rows .META. needs 
> to keep track of grows.
> As written, our getClosestAtOrJustBefore, goes through every storefile and in 
> each picks up any row that could be a possible candidate for closest before.  
> It doesn't just get the closest from the storefile, but all keys that are 
> closest before.  Its not selective because how can it tell at the store file 
> level which of the candidates will survive deletes that are sitting in later 
> store files or up in memcache.
> So, if a store file has keys 0-10 and we ask to get the row that is closest 
> or just before 7, it returns rows 0-7.. and so on per store file.
> The candidate set can get big, and weeding out the wanted key can be slow.
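A rough sketch of the non-selective behavior the description complains about, with plain Java collections standing in for store files (names here are illustrative, not HBase's actual API):

```java
import java.util.Arrays;
import java.util.SortedSet;
import java.util.TreeSet;

// Sketch of the non-selective lookup this issue describes: each store file
// hands back EVERY key at or before the target, because a delete sitting in
// a later store file or up in memcache could invalidate any one candidate.
public class ClosestBefore {

    // One sorted set of row keys stands in for one store file.
    static SortedSet<Integer> candidatesAtOrBefore(SortedSet<Integer> storeFile, int target) {
        // headSet(target + 1) is every key <= target, not just the closest one.
        return new TreeSet<>(storeFile.headSet(target + 1));
    }

    public static void main(String[] args) {
        SortedSet<Integer> storeFile =
            new TreeSet<>(Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
        // Asking for the row closest at-or-before 7 yields candidates 0..7
        // from this one store file alone -- and every store file does the same.
        System.out.println(candidatesAtOrBefore(storeFile, 7));
    }
}
```

The cost grows with the distance between the target and the start of each store file, which is why the candidate set "can get big".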

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-767) Add option so stop-processing if 'lost' data files

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-767.
-

Resolution: Invalid

No longer valid; besides, we have a flag in the WAL that can be set to either 
abort on data loss or to log it and carry on.

> Add option so stop-processing if 'lost' data files
> --
>
> Key: HBASE-767
> URL: https://issues.apache.org/jira/browse/HBASE-767
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Minor
>
> HBASE-646 and HBASE-766 are about handlers for the case where store file 
> 'data' goes missing.  This issue is about figuring out why/how they disappear.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-759) TestMetaUtils failing on hudson

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-759.
-

Resolution: Won't Fix

I just removed this test.  It's stale by now and would need to be rewritten 
anyway.

> TestMetaUtils failing on hudson
> ---
>
> Key: HBASE-759
> URL: https://issues.apache.org/jira/browse/HBASE-759
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
> Attachments: patch.txt
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-711) Complain if clock skew across the cluster is badly out of sync

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-711.
-

Resolution: Fixed

Fixed by HBASE-3168

> Complain if clock skew across the cluster is badly out of sync
> --
>
> Key: HBASE-711
> URL: https://issues.apache.org/jira/browse/HBASE-711
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jim Kellerman
>Priority: Minor
>
> hbase-710 and hbase-609 are issues where the system has broken in presence of 
> clock skew over the cluster.  Would be a nice service if master could flag 
> very bad clock skew.  Regionservers could report their local time when they 
> ping the master.  It could do a compare.
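The check proposed above can be sketched in a few lines; the 30-second threshold and the method names below are illustrative assumptions, not values taken from HBase or from HBASE-3168:

```java
// Sketch of the proposed skew check: each region server reports its local
// clock when it pings the master, and the master compares against its own
// clock, flagging servers that are badly out of sync.
public class ClockSkewCheck {
    // Illustrative threshold; not a value taken from HBase configuration.
    static final long MAX_SKEW_MS = 30_000;

    static boolean badlySkewed(long serverReportedMs, long masterNowMs) {
        return Math.abs(masterNowMs - serverReportedMs) > MAX_SKEW_MS;
    }

    public static void main(String[] args) {
        long masterNow = System.currentTimeMillis();
        // A server 40 seconds behind would be flagged; 10 seconds would not.
        System.out.println(badlySkewed(masterNow - 40_000, masterNow)); // true
        System.out.println(badlySkewed(masterNow - 10_000, masterNow)); // false
    }
}
```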

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-695) Add passing of filter state across regions

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-695.
-

Resolution: Invalid

We'll never allow this to happen, not if we want to be scalable.

> Add passing of filter state across regions
> --
>
> Key: HBASE-695
> URL: https://issues.apache.org/jira/browse/HBASE-695
> Project: HBase
>  Issue Type: New Feature
>Reporter: stack
>
> Discussion on list arrived at need for filters to carry cross-region state.  
> For example, if you are looking for sufficient rows to fill the fifth page of 
> a set of results and a particular region only has the first half of page 5, 
> there needs to be a mechanism to tell the next region in line, how far the 
> scan has gotten.  Clint Morgan suggested some kind of RPC or callback that 
> the serverside region could tug on to pass back to the client the state-laden 
> filter for passing the next region.
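The mechanism asked for above can be sketched as a filter whose mutable state travels from one region scan to the next; the class and method names are illustrative, not HBase's real Filter API:

```java
import java.io.Serializable;

// Sketch of a state-laden filter: it tracks how many rows the current page
// still needs, and that state is handed back to the client and on to the
// next region in line -- the cross-region callback this issue discusses.
class PageFilterState implements Serializable {
    int rowsStillNeeded;

    PageFilterState(int rowsStillNeeded) {
        this.rowsStillNeeded = rowsStillNeeded;
    }
}

public class CrossRegionPaging {

    // A region scan consumes up to rowsInRegion rows and returns the mutated
    // state for the client to pass along to the next region.
    static PageFilterState scanRegion(PageFilterState state, int rowsInRegion) {
        int taken = Math.min(state.rowsStillNeeded, rowsInRegion);
        state.rowsStillNeeded -= taken;
        return state;
    }

    public static void main(String[] args) {
        PageFilterState state = new PageFilterState(10); // page 5 needs 10 rows
        state = scanRegion(state, 6); // first region supplies only 6
        state = scanRegion(state, 6); // next region supplies the 4 still needed
        System.out.println(state.rowsStillNeeded); // 0
    }
}
```

Without some such hand-off, the second region has no way of knowing the scan is already halfway through the page.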

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-549) Don't CLOSE region if message is not from server that opened it or is opening it

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-549.
-

   Resolution: Invalid
Fix Version/s: (was: 0.92.0)

Master doesn't work like this any more.  Resolving as invalid.

> Don't CLOSE region if message is not from server that opened it or is opening 
> it
> 
>
> Key: HBASE-549
> URL: https://issues.apache.org/jira/browse/HBASE-549
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.16.0, 0.1.0, 0.1.1, 0.2.0
>Reporter: stack
>Assignee: stack
>
> We assign a region to a server.  It takes too long to open (HBASE-505).  
> Region gets assigned to another server.  Meantime original host returns a 
> MSG_REPORT_CLOSE (because other regions opening messes it up moving files on 
> disk out from under it).  We queue a shutdown which marks the region as 
> needing reassignment.  Second server reports in that it successfully opened 
> the region.  Master tells it it should not have opened it.  Churn ensues.
> Fix is to ignore the CLOSE if its reported server/startcode does not match 
> that of the server currently trying to open the region.  The fix is not easy 
> because currently we don't keep a list of server info in unassigned regions.
> Here's master log snippet showing problem:
> {code}
> ...
> 2008-03-25 19:16:43,711 INFO org.apache.hadoop.hbase.HMaster: assigning 
> region enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 to server 
> XX.XX.XX.220:60020
> 2008-03-25 19:16:46,725 DEBUG org.apache.hadoop.hbase.HMaster: Received 
> MSG_REPORT_PROCESS_OPEN : 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 from XX.XX.XX.220:60020
> 2008-03-25 19:18:06,411 DEBUG org.apache.hadoop.hbase.HMaster: shutdown 
> scanner looking at enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482
> 2008-03-25 19:18:06,811 DEBUG org.apache.hadoop.hbase.HMaster: shutdown 
> scanner looking at enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482
> 2008-03-25 19:19:46,841 INFO org.apache.hadoop.hbase.HMaster: assigning 
> region enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 to server 
> XX.XX.XX.221:60020
> 2008-03-25 19:19:49,849 DEBUG org.apache.hadoop.hbase.HMaster: Received 
> MSG_REPORT_PROCESS_OPEN : 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 from XX.XX.XX.221:60020
> 2008-03-25 19:19:56,883 DEBUG org.apache.hadoop.hbase.HMaster: Received 
> MSG_REPORT_CLOSE : enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 from 
> XX.XX.XX.220:60020
> 2008-03-25 19:19:56,883 INFO org.apache.hadoop.hbase.HMaster: 
> XX.XX.XX.220:60020 no longer serving regionname: 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482, startKey: 
> , endKey:  >, encodedName: 1857033608, tableDesc: {name: enwiki_080103, families: 
> >{alternate_title:={name: alternate_title, max versions: 3, compression: 
> >NONE, in memory: false, max length: 2147483647, bloom filter: none}, 
> >alternate_url:={name: al
> ternate_url, max versions: 3, compression: NONE, in memory: false, max 
> length: 2147483647, bloom filter: none}, anchor:={name: anchor, max versions: 
> 3, compression: NONE, in memory: false, max length: 2147483647, bloom filter: 
> none}, mi
> sc:={name: misc, max versions: 3, compression: NONE, in memory: false, max 
> length: 2147483647, bloom filter: none}, page:={name: page, max versions: 3, 
> compression: NONE, in memory: false, max length: 2147483647, bloom filter: 
> none}, re
> direct:={name: redirect, max versions: 3, compression: NONE, in memory: 
> false, max length: 2147483647, bloom filter: none}}}
> 2008-03-25 19:19:56,885 DEBUG org.apache.hadoop.hbase.HMaster: Main 
> processing loop: ProcessRegionClose of 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482, true, false
> 2008-03-25 19:19:56,885 INFO org.apache.hadoop.hbase.HMaster: region closed: 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482
> 2008-03-25 19:19:56,887 INFO org.apache.hadoop.hbase.HMaster: reassign 
> region: enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482
> 2008-03-25 19:19:57,288 INFO org.apache.hadoop.hbase.HMaster: assigning 
> region enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 to server 
> XX.XX.XX.189:60020
> 2008-03-25 19:20:00,296 DEBUG org.apache.hadoop.hbase.HMaster: Received 
> MSG_REPORT_PROCESS_OPEN : 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 from XX.XX.XX.189:60020
> 2008-03-25 19:20:16,885 DEBUG org.apache.hadoop.hbase.HMaster: Received 
> MSG_REPORT_OPEN : enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482 from 
> XX.XX.XX.221:60020
> 2008-03-25 19:20:16,885 DEBUG org.apache.hadoop.hbase.HMaster: region server 
> XX.XX.XX.221:60020 should not have opened region 
> enwiki_080103,iLStZ0yTnfVUziYcNVVxWV==,1205393076482
> 2008-03-25 19:23:51,707 DEBUG org.apache.hadoop.hbase.HMaster: shutdown 
> scan

[jira] Resolved: (HBASE-653) Given an HTable, I should be able to enable/disable the table without having to get the HBaseConfiguration object

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-653.
-

Resolution: Invalid

Resolving as invalid.

> Given an HTable, I should be able to enable/disable the table without having 
> to get the HBaseConfiguration object
> -
>
> Key: HBASE-653
> URL: https://issues.apache.org/jira/browse/HBASE-653
> Project: HBase
>  Issue Type: Wish
>Reporter: Michael Bieniosek
>
> It would be nice if there were a way to do a HBaseAdmin.disableTable(HTable) 
> without needing a HBaseConfiguration (I already gave you the 
> HBaseConfiguration when I created the HTable).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-661) Allow to specify a user supplied row key comparator for a table

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-661.
-

Resolution: Invalid

We'll never support this.  Closing as invalid.

> Allow to specify a user supplied row key comparator for a table 
> 
>
> Key: HBASE-661
> URL: https://issues.apache.org/jira/browse/HBASE-661
> Project: HBase
>  Issue Type: New Feature
>  Components: client, master, regionserver
>Affects Versions: 0.2.0
>Reporter: Clint Morgan
>Assignee: Clint Morgan
>
> Now that row keys are byte arrays, users should be able to specify a 
> comparator at table creation time.
> My use case for this is to implement secondary indexes. In this case, row 
> keys for the index tables will be constructed from an optional prefix of the 
> original row key as well as the content of column that is being indexed. Then 
> the comparator will first compare based on the key prefix, and break ties by 
> deserializing the column values and using the deserialized type's compareTo 
> method.
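The comparator described above might look roughly like this; the 4-byte prefix width, the long-valued column, and the signed byte comparison are simplifications for illustration (HBase's `Bytes.compareTo` compares bytes unsigned):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the proposed user-supplied row-key comparator: compare a
// fixed-width key prefix as raw bytes, then break ties by deserializing
// the indexed column value (a big-endian long here).
public class PrefixThenValueComparator implements Comparator<byte[]> {
    static final int PREFIX_LEN = 4; // illustrative fixed prefix width

    @Override
    public int compare(byte[] a, byte[] b) {
        int c = Arrays.compare(a, 0, PREFIX_LEN, b, 0, PREFIX_LEN);
        if (c != 0) return c; // prefixes differ: byte order decides
        long va = ByteBuffer.wrap(a, PREFIX_LEN, 8).getLong();
        long vb = ByteBuffer.wrap(b, PREFIX_LEN, 8).getLong();
        return Long.compare(va, vb); // tie-break on the deserialized value
    }

    // Build an index row key: 4-byte prefix followed by a long column value.
    static byte[] key(String prefix, long value) {
        return ByteBuffer.allocate(PREFIX_LEN + 8)
            .put(prefix.getBytes()).putLong(value).array();
    }

    public static void main(String[] args) {
        PrefixThenValueComparator cmp = new PrefixThenValueComparator();
        // Same prefix: ordered by the deserialized long, not its byte layout.
        System.out.println(cmp.compare(key("idx1", 2L), key("idx1", 10L)) < 0);
        // Different prefixes: the prefix bytes alone decide.
        System.out.println(cmp.compare(key("idx2", 0L), key("idx1", 99L)) > 0);
    }
}
```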

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-345) [hbase] Change configuration on running cluster

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-345.
-

Resolution: Duplicate

Marking as duplicate of hbase-1730

> [hbase] Change configuration on running cluster
> ---
>
> Key: HBASE-345
> URL: https://issues.apache.org/jira/browse/HBASE-345
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Minor
>
> Most options currently require restart for them to be noticed or taking table 
> offline.  It should be possible to change certain configuration attributes 
> even though the cluster is online and under load; examples would include 
> setting flush and compaction size/frequency/limits or more radically, 
> changing region size.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-70) Improve region server memory management

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-70.


Resolution: Invalid

The discussion in this issue is way stale; none of what's discussed pertains 
any more.  Let's open a new issue to look at memory use.  Closing as invalid.

> Improve region server memory management
> ---
>
> Key: HBASE-70
> URL: https://issues.apache.org/jira/browse/HBASE-70
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: stack
>
> Each Store has a Memcache of edits that is flushed on a fixed period (It used 
> to be flushed when it grew beyond a limit). A Region can be made up of N 
> Stores.  A regionserver has no upper bound on the number of regions that can 
> be deployed to it currently.  Add to this that per mapfile, we have read the 
> index into memory.  We're also talking about adding caching of blocks and 
> cells.
> We need a means of keeping an account of memory usage, adjusting cache sizes 
> and flush rates (or sizes) dynamically -- using References where possible -- 
> to accommodate deployment of added regions.  If memory is strained, we should 
> reject regions proffered by the master with a resource-constrained, or some 
> such, message.
> The manual sizing we currently do ain't going to cut it for clusters of any 
> decent size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HBASE-83) Add JMeter performance test for HBase

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-83?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-83.


Resolution: Invalid

Resolving as no longer valid (sorry Tom).  We have other load-generation tools 
that can put up loads of a more interesting character.

> Add JMeter performance test for HBase
> -
>
> Key: HBASE-83
> URL: https://issues.apache.org/jira/browse/HBASE-83
> Project: HBase
>  Issue Type: Test
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-2625.patch, hbase-jmeter-test.jar, 
> hbase-jmeter-test.jmx, hbase.jmx, hbench.jar, plot.r
>
>
> The PerformanceEvaluation test is good for running benchmarks, but is not 
> really designed for learning about the performance of HBase for real world 
> datasets. By using JMeter we can test HBase to discover its average response 
> time under different loads.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2960) Allow Incremental Table Alterations

2010-12-07 Thread Karthick Sankarachary (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968932#action_12968932
 ] 

Karthick Sankarachary commented on HBASE-2960:
--

No, this issue does not exist in trunk - it has been addressed by HBASE-2984 
already. Thanks.

> Allow Incremental Table Alterations
> ---
>
> Key: HBASE-2960
> URL: https://issues.apache.org/jira/browse/HBASE-2960
> Project: HBase
>  Issue Type: Wish
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: Karthick Sankarachary
> Attachments: HBASE-2960.patch
>
>
> As per the HBase shell help, the alter command will "Alter column family 
> schema;  pass table name and a dictionary  specifying new column family 
> schema." The assumption here seems to be that the new column family schema 
> must be completely specified. In other words, if a certain attribute is not 
> specified in the column family schema, then it is effectively defaulted. Is 
> this side-effect by design? 
> I for one assumed (wrongly apparently) that I can alter a table in 
> "increments". Case in point, the following commands should've resulted in the 
> final value of the VERSIONS attribute of my table to stay put at 1, but 
> instead it got defaulted to 3. I guess there's no right or wrong answer here, 
> but what should alter do by default? My expectation is that it only changes 
> those attributes that were specified in the "alter" command, leaving the 
> unspecified attributes untouched.
> hbase(main):003:0> create 't1', {NAME => 'f1', VERSIONS => 1}
> 0 row(s) in 1.7230 seconds
> hbase(main):004:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', COMPRESSION => 'NONE', VERSIONS 
> => '1', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => ' false', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.2030 seconds
> hbase(main):006:0> disable 't1'
> 0 row(s) in 0.1140 seconds
> hbase(main):007:0> alter 't1', {NAME => 'f1', IN_MEMORY => 'true'}
> 0 row(s) in 0.0160 seconds
> hbase(main):009:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', VERSIONS => '3', COMPRESSION => 
> 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => ' true', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.1280 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HBASE-2937) Facilitate Timeouts In HBase Client

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-2937:


Assignee: Karthick Sankarachary

Thanks.  Assigning the issue to you.

> Facilitate Timeouts In HBase Client
> ---
>
> Key: HBASE-2937
> URL: https://issues.apache.org/jira/browse/HBASE-2937
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: Karthick Sankarachary
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-2937.patch
>
>
> Currently, there is no way to force an operation on the HBase client (viz. 
> HTable) to time out if a certain amount of time has elapsed.  In other words, 
> all invocations on the HTable class are veritable blocking calls, which will 
> not return until a response (successful or otherwise) is received. 
> In general, there are two ways to handle timeouts:  (a) call the operation in 
> a separate thread, until it returns a response or the wait on the thread 
> times out and (b) have the underlying socket unblock the operation if the 
> read times out.  The downside of the former approach is that it consumes more 
> resources in terms of threads and callables. 
> Here, we describe a way to specify and handle timeouts on the HTable client, 
> which relies on the latter approach (i.e., socket timeouts). Right now, the 
> HBaseClient sets the socket timeout to the value of the "ipc.ping.interval" 
> parameter, which is also how long it waits before pinging the server in case 
> of a failure. The goal is to allow clients to set that timeout on the fly 
> through HTable. Rather than adding an optional timeout argument to every 
> HTable operation, we chose to make it a property of HTable which effectively 
> applies to every method that involves a remote operation.
> In order to propagate the timeout  from HTable to HBaseClient, we replaced 
> all occurrences of ServerCallable in HTable with an extension called 
> ClientCallable, which sets the timeout on the region server interface, once 
> it has been instantiated, through the HConnection object. The latter, in 
> turn, asks HBaseRPC to pass that timeout to the corresponding Invoker, so 
> that it may inject the timeout at the time the invocation is made on the 
> region server proxy. Right before the request is sent to the server, we set 
> the timeout specified by the client on the underlying socket.
> In conclusion, this patch will afford clients the option of performing an 
> HBase operation until it completes or a specified timeout elapses. Note that 
> a timeout of zero is interpreted as an infinite timeout.
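A self-contained sketch of approach (b), letting the socket itself unblock the read. The never-responding server below stands in for a stuck region server; this is plain `java.net` usage, not the HBaseClient code the patch modifies. Per `java.net.Socket` semantics, a timeout of zero means wait indefinitely, matching the issue's convention.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Demonstrates SO_TIMEOUT: a blocking read on the socket throws
// SocketTimeoutException once the deadline elapses instead of waiting forever.
public class SocketTimeoutDemo {

    static boolean readTimedOut(int timeoutMillis) throws IOException {
        try (ServerSocket server = new ServerSocket(0); // never responds
             Socket client = new Socket("localhost", server.getLocalPort())) {
            client.setSoTimeout(timeoutMillis); // 0 would mean wait forever
            try {
                client.getInputStream().read(); // blocks at most timeoutMillis
                return false;
            } catch (SocketTimeoutException expected) {
                return true; // the read unblocked itself
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readTimedOut(200)); // true: nothing ever arrives
    }
}
```

Because the timeout lives on the socket, every remote call over that connection inherits it, which is why the patch can make it a property of HTable rather than an argument to each operation.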

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2960) Allow Incremental Table Alterations

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968925#action_12968925
 ] 

stack commented on HBASE-2960:
--

bq. Thanks for making me a contributor!

No. Thank you for the sweet patches.  Sorry we've been slow to get to them.

So, do we still have this issue in trunk?  If so, I'll take the time to mess w/ 
your patch.  Thanks.

> Allow Incremental Table Alterations
> ---
>
> Key: HBASE-2960
> URL: https://issues.apache.org/jira/browse/HBASE-2960
> Project: HBase
>  Issue Type: Wish
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: Karthick Sankarachary
> Attachments: HBASE-2960.patch
>
>
> As per the HBase shell help, the alter command will "Alter column family 
> schema;  pass table name and a dictionary  specifying new column family 
> schema." The assumption here seems to be that the new column family schema 
> must be completely specified. In other words, if a certain attribute is not 
> specified in the column family schema, then it is effectively defaulted. Is 
> this side-effect by design? 
> I for one assumed (wrongly apparently) that I can alter a table in 
> "increments". Case in point, the following commands should've resulted in the 
> final value of the VERSIONS attribute of my table to stay put at 1, but 
> instead it got defaulted to 3. I guess there's no right or wrong answer here, 
> but what should alter do by default? My expectation is that it only changes 
> those attributes that were specified in the "alter" command, leaving the 
> unspecified attributes untouched.
> hbase(main):003:0> create 't1', {NAME => 'f1', VERSIONS => 1}
> 0 row(s) in 1.7230 seconds
> hbase(main):004:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', COMPRESSION => 'NONE', VERSIONS 
> => '1', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => ' false', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.2030 seconds
> hbase(main):006:0> disable 't1'
> 0 row(s) in 0.1140 seconds
> hbase(main):007:0> alter 't1', {NAME => 'f1', IN_MEMORY => 'true'}
> 0 row(s) in 0.0160 seconds
> hbase(main):009:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', VERSIONS => '3', COMPRESSION => 
> 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => ' true', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.1280 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3316:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

Committed.  Thanks for the patch Ed.

> Add support for Java Serialization to HbaseObjectWritable
> -
>
> Key: HBASE-3316
> URL: https://issues.apache.org/jira/browse/HBASE-3316
> Project: HBase
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.92.0
>Reporter: Ed Kohlwey
>Priority: Minor
> Fix For: 0.92.0
>
> Attachments: HBASE-3316.patch
>
>
> It is convenient in some situations to have HbaseObjectWritable write 
> serializable Java objects, for instance when prototyping new code where you 
> don't want to take the time to implement a writable.
> Adding this support requires no overhead compared to the current 
> implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-3173) HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3173:
-

   Resolution: Fixed
Fix Version/s: 0.90.1
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

Committed to branch and trunk (after verifying it works).  Modelled fix on 
Igor's hbase-3310.

> HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell
> 
>
> Key: HBASE-3173
> URL: https://issues.apache.org/jira/browse/HBASE-3173
> Project: HBase
>  Issue Type: Bug
>Reporter: Kannan Muthukkaruppan
>Assignee: Kannan Muthukkaruppan
>Priority: Minor
> Fix For: 0.90.1
>
> Attachments: 3173-v2.txt, HBASE-3173.txt
>
>
> HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-3310) Failing creating/altering table with compression argument from the HBase shell

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968917#action_12968917
 ] 

stack commented on HBASE-3310:
--

Looks like I only applied to branch.  Just applied to TRUNK too.

> Failing creating/altering table with compression argument from the HBase shell
> --
>
> Key: HBASE-3310
> URL: https://issues.apache.org/jira/browse/HBASE-3310
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Igor Ranitovic
>Assignee: Igor Ranitovic
> Fix For: 0.90.0
>
> Attachments: HBASE-3310.patch
>
>
> HColumnDescriptor setCompressionType takes Compression.Algorithm and not 
> String
> hbase(main):007:0> create 't1', { NAME => 'f', COMPRESSION => 'lzo'}
> ERROR: cannot convert instance of class org.jruby.RubyString to class 
> org.apache.hadoop.hbase.io.hfile.Compression$Algorithm

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-3173) HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3173:
-

Attachment: 3173-v2.txt

> HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell
> 
>
> Key: HBASE-3173
> URL: https://issues.apache.org/jira/browse/HBASE-3173
> Project: HBase
>  Issue Type: Bug
>Reporter: Kannan Muthukkaruppan
>Assignee: Kannan Muthukkaruppan
>Priority: Minor
> Attachments: 3173-v2.txt, HBASE-3173.txt
>
>
> HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION via shell

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2960) Allow Incremental Table Alterations

2010-12-07 Thread Karthick Sankarachary (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968906#action_12968906
 ] 

Karthick Sankarachary commented on HBASE-2960:
--

Thanks for making me a contributor! 

Just to clarify, this issue is not quite related to HBASE-2944. While the 
latter also touches the ALTER statement, it doesn't address the problem so 
eloquently described in 
http://mail-archives.apache.org/mod_mbox/hbase-user/201012.mbox/browser. 
Basically, we want to superimpose (as opposed to overwrite) the schema 
specified in the ALTER statement on top of the underlying schema. In short, the 
patch gets the column descriptor from the admin object and changes only those 
properties of the column that were specified in the ALTER statement. In other 
words, we don't try to default those properties *not* specified in the ALTER 
statement.
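The superimpose-vs-overwrite distinction reduces to a simple map merge; this sketch uses plain string maps standing in for HColumnDescriptor attributes, so the names are illustrative, not the actual patch:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "superimpose" semantics: start from the existing column family
// schema and overwrite only the attributes actually named in the ALTER
// statement, so nothing silently falls back to its default.
public class AlterMerge {

    static Map<String, String> superimpose(Map<String, String> existing,
                                           Map<String, String> alterSpec) {
        Map<String, String> merged = new HashMap<>(existing);
        merged.putAll(alterSpec); // unspecified attributes keep current values
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("VERSIONS", "1");
        existing.put("IN_MEMORY", "false");

        // alter 't1', {NAME => 'f1', IN_MEMORY => 'true'} names only IN_MEMORY...
        Map<String, String> merged =
            superimpose(existing, Map.of("IN_MEMORY", "true"));

        // ...so VERSIONS stays at 1 instead of re-defaulting to 3.
        System.out.println(merged.get("VERSIONS") + " " + merged.get("IN_MEMORY"));
    }
}
```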



> Allow Incremental Table Alterations
> ---
>
> Key: HBASE-2960
> URL: https://issues.apache.org/jira/browse/HBASE-2960
> Project: HBase
>  Issue Type: Wish
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: Karthick Sankarachary
> Attachments: HBASE-2960.patch
>
>
> As per the HBase shell help, the alter command will "Alter column family 
> schema;  pass table name and a dictionary  specifying new column family 
> schema." The assumption here seems to be that the new column family schema 
> must be completely specified. In other words, if a certain attribute is not 
> specified in the column family schema, then it is effectively defaulted. Is 
> this side-effect by design? 
> I for one assumed (wrongly apparently) that I can alter a table in 
> "increments". Case in point, the following commands should've resulted in the 
> final value of the VERSIONS attribute of my table to stay put at 1, but 
> instead it got defaulted to 3. I guess there's no right or wrong answer here, 
> but what should alter do by default? My expectation is that it only changes 
> those attributes that were specified in the "alter" command, leaving the 
> unspecified attributes untouched.
> hbase(main):003:0> create 't1', {NAME => 'f1', VERSIONS => 1}
> 0 row(s) in 1.7230 seconds
> hbase(main):004:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', COMPRESSION => 'NONE', VERSIONS 
> => '1', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.2030 seconds
> hbase(main):006:0> disable 't1'
> 0 row(s) in 0.1140 seconds
> hbase(main):007:0> alter 't1', {NAME => 'f1', IN_MEMORY => 'true'}
> 0 row(s) in 0.0160 seconds
> hbase(main):009:0> describe 't1'
> DESCRIPTION
>  {NAME => 't1', FAMILIES => [{NAME => 'f1', VERSIONS => '3', COMPRESSION => 
> 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'true', 
> BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.1280 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-2960) Allow Incremental Table Alterations

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2960:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Fixed already in TRUNK by HBASE-2944. Sorry for the inconvenience, Karthick. 
Thanks for the patch.

> Allow Incremental Table Alterations
> ---
>
> Key: HBASE-2960
> URL: https://issues.apache.org/jira/browse/HBASE-2960
> Project: HBase
>  Issue Type: Wish
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
> Attachments: HBASE-2960.patch
>
>




[jira] Assigned: (HBASE-2960) Allow Incremental Table Alterations

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-2960:


Assignee: Karthick Sankarachary

I made you a contributor, Karthick, and assigned this issue to you.

> Allow Incremental Table Alterations
> ---
>
> Key: HBASE-2960
> URL: https://issues.apache.org/jira/browse/HBASE-2960
> Project: HBase
>  Issue Type: Wish
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: Karthick Sankarachary
> Attachments: HBASE-2960.patch
>
>




[jira] Commented: (HBASE-2939) Allow Client-Side Connection Pooling

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968892#action_12968892
 ] 

stack commented on HBASE-2939:
--

Oh, I brought it into 0.92 too.

> Allow Client-Side Connection Pooling
> 
>
> Key: HBASE-2939
> URL: https://issues.apache.org/jira/browse/HBASE-2939
> Project: HBase
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: ryan rawson
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-2939-0.20.6.patch, HBASE-2939.patch, 
> HBASE-2939.patch
>
>
> By design, the HBase RPC client multiplexes calls to a given region server 
> (or the master for that matter) over a single socket, access to which is 
> managed by a connection thread defined in the HBaseClient class. While this 
> approach may suffice for most cases, it tends to break down in the context of 
> a real-time, multi-threaded server, where latencies need to be lower and 
> throughputs higher. 
> In brief, the problem is that we dedicate one thread to handle all 
> client-side reads and writes for a given server, which in turn forces them to 
> share the same socket. As load increases, this is bound to serialize calls on 
> the client-side. In particular, when the rate at which calls are submitted to 
> the connection thread is greater than that at which the server responds, then 
> some of those calls will inevitably end up sitting idle, just waiting their 
> turn to go over the wire.
> In general, sharing sockets across multiple client threads is a good idea, 
> but limiting the number of such sockets to one may be overly restrictive for 
> certain cases. Here, we propose a way of defining multiple sockets per server 
> endpoint, access to which may be managed through either a load-balancing or 
> thread-local pool. To that end, we define the notion of a SharedMap, which 
> maps a key to a resource pool, and supports both of those pool types. 
> Specifically, we will apply that map in the HBaseClient, to associate 
> multiple connection threads with each server endpoint (denoted by a 
> connection id). 
>  Currently, the SharedMap supports the following types of pools:
> * A ThreadLocalPool, which represents a pool that builds on the 
> ThreadLocal class. It essentially binds the resource to the thread from which 
> it is accessed.
> * A ReusablePool, which represents a pool that builds on the LinkedList 
> class. It essentially allows resources to be checked out, at which point it 
> is (temporarily) removed from the pool. When the resource is no longer 
> required, it should be returned to the pool in order to be reused.
> * A RoundRobinPool, which represents a pool that stores its resources in 
> an ArrayList. It load-balances access to its resources by returning a 
> different resource every time a given key is looked up.
> To control the type and size of the connection pools, we give the user a 
> couple of parameters (viz. "hbase.client.ipc.pool.type" and 
> "hbase.client.ipc.pool.size"). In case the size of the pool is set to a 
> non-zero positive number, that is used to cap the number of resources that a 
> pool may contain for any given key. A size of Integer#MAX_VALUE is 
> interpreted to mean an unbounded pool.
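The reusable and round-robin pool semantics described above can be sketched in a few lines of plain Java (illustrative semantics only; the names follow the description, not necessarily the patch's actual classes):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

// Load-balancing pool: rotates through its resources on every lookup.
class RoundRobinPool<R> {
    private final List<R> resources = new ArrayList<>();
    private int next = 0;

    void put(R resource) { resources.add(resource); }

    R get() {
        R resource = resources.get(next);
        next = (next + 1) % resources.size();  // rotate for the next caller
        return resource;
    }
}

// Check-out/check-in pool: a resource is removed while in use and must be
// returned to the pool before anyone else can reuse it.
class ReusablePool<R> {
    private final Queue<R> resources = new LinkedList<>();

    void put(R resource) { resources.add(resource); }  // return for reuse

    R get() { return resources.poll(); }  // null if nothing is available
}
```

A SharedMap would then associate each connection id with one such pool, capped at "hbase.client.ipc.pool.size" entries per key.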




[jira] Commented: (HBASE-2937) Facilitate Timeouts In HBase Client

2010-12-07 Thread Karthick Sankarachary (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968893#action_12968893
 ] 

Karthick Sankarachary commented on HBASE-2937:
--

Hi Stack, Just FYI, we're probably better off interrupting the client in the 
thread waiting for the response as opposed to the connection thread receiving 
the response, as the latter is shared by multiple clients. Please stay tuned 
for a revised patch, which should be forthcoming shortly. Thanks for your 
patience.

> Facilitate Timeouts In HBase Client
> ---
>
> Key: HBASE-2937
> URL: https://issues.apache.org/jira/browse/HBASE-2937
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-2937.patch
>
>
> Currently, there is no way to force an operation on the HBase client (viz. 
> HTable) to time out if a certain amount of time has elapsed.  In other words, 
> all invocations on the HTable class are veritable blocking calls, which will 
> not return until a response (successful or otherwise) is received. 
> In general, there are two ways to handle timeouts:  (a) call the operation in 
> a separate thread, until it returns a response or the wait on the thread 
> times out and (b) have the underlying socket unblock the operation if the 
> read times out.  The downside of the former approach is that it consumes more 
> resources in terms of threads and callables. 
> Here, we describe a way to specify and handle timeouts on the HTable client, 
> which relies on the latter approach (i.e., socket timeouts). Right now, the 
> HBaseClient sets the socket timeout to the value of the "ipc.ping.interval" 
> parameter, which is also how long it waits before pinging the server in case 
> of a failure. The goal is to allow clients to set that timeout on the fly 
> through HTable. Rather than adding an optional timeout argument to every 
> HTable operation, we chose to make it a property of HTable which effectively 
> applies to every method that involves a remote operation.
> In order to propagate the timeout  from HTable to HBaseClient, we replaced 
> all occurrences of ServerCallable in HTable with an extension called 
> ClientCallable, which sets the timeout on the region server interface, once 
> it has been instantiated, through the HConnection object. The latter, in 
> turn, asks HBaseRPC to pass that timeout to the corresponding Invoker, so 
> that it may inject the timeout at the time the invocation is made on the 
> region server proxy. Right before the request is sent to the server, we set 
> the timeout specified by the client on the underlying socket.
> In conclusion, this patch will afford clients the option of performing an 
> HBase operation until it completes or a specified timeout elapses. Note that 
> a timeout of zero is interpreted as an infinite timeout.
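The socket-level mechanism the approach relies on is the standard read timeout (SO_TIMEOUT); a minimal illustration of the option being toggled (a sketch only, not the patch's code):

```java
import java.io.IOException;
import java.net.Socket;

public class SocketTimeoutSketch {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();  // not connected; options can still be set
        // A value of 0 means "block forever", which is how an infinite
        // timeout is represented; a positive value makes a blocked read()
        // throw SocketTimeoutException once that many milliseconds elapse.
        socket.setSoTimeout(5000);
        System.out.println(socket.getSoTimeout());  // prints 5000
        socket.close();
    }
}
```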




[jira] Updated: (HBASE-2939) Allow Client-Side Connection Pooling

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2939:
-

 Priority: Critical  (was: Major)
Fix Version/s: 0.92.0
 Assignee: ryan rawson

Making this issue critical -- we owe Karthick feedback -- and assigning Ryan 
since he was looking into this. (Can you take a look, RR? Karthick updated his 
patch... thanks.)

> Allow Client-Side Connection Pooling
> 
>
> Key: HBASE-2939
> URL: https://issues.apache.org/jira/browse/HBASE-2939
> Project: HBase
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Assignee: ryan rawson
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-2939-0.20.6.patch, HBASE-2939.patch, 
> HBASE-2939.patch
>
>




[jira] Commented: (HBASE-2938) Add Thread-Local Behavior To HTable Pool

2010-12-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1296#action_1296
 ] 

stack commented on HBASE-2938:
--

@Karthick Sorry for taking so long getting to this issue.  It looks very 
interesting.  What you say above makes a lot of sense.  May I have an 
illustration, a use case, where you've found this facility to be of 
use?  FYI, Apache doesn't allow '@author' tags.  Otherwise, the patch looks 
great.

> Add Thread-Local Behavior To HTable Pool
> 
>
> Key: HBASE-2938
> URL: https://issues.apache.org/jira/browse/HBASE-2938
> Project: HBase
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
> Attachments: HBASE-2938.patch
>
>
>   It is a well-documented fact that the HBase table client (viz., HTable) is 
> not thread-safe. Hence, the recommendation has been to use an HTablePool or a 
> ThreadLocal to manage access to tables. The downside of the latter is that it 
> (a) requires the user to reinvent the wheel in terms of mapping table names 
> to tables and (b) forces the user to maintain the thread-local objects. 
> Ideally, it would be nice if we could make the HTablePool handle thread-local 
> objects as well. That way, it not only becomes the "one stop shop" for all 
> client-side tables, but also insulates the user from the ThreadLocal object.
>   
>   Here, we propose a way to generalize the HTablePool so that the underlying 
> pool type is either "reusable" or "thread-local". To make this possible, we 
> introduce the concept of a SharedMap, which essentially maps a key to a 
> collection of values, the elements of which are managed by a pool. In effect, 
> that collection acts as a shared pool of resources, access to which is 
> closely controlled as dictated by the particular semantics of the pool.
>  Furthermore, to simplify the construction of HTablePools, we added a couple 
> of parameters (viz. "hbase.client.htable.pool.type" and 
> "hbase.client.hbase.pool.size") to control the default behavior of a 
> HTablePool.
>   
>   In case the size of the pool is set to a non-zero positive number, that is 
> used to cap the number of resources that a pool may contain for any given 
> key. A size of Integer#MAX_VALUE is interpreted to mean an unbounded pool.
>
>Currently, the SharedMap supports the following types of pools:
>* A ThreadLocalPool, which represents a pool that builds on the 
> ThreadLocal class. It essentially binds the resource to the thread from which 
> it is accessed.
>* A ReusablePool, which represents a pool that builds on the LinkedList 
> class. It essentially allows resources to be checked out, at which point it 
> is (temporarily) removed from the pool. When the resource is no longer 
> required, it should be returned to the pool in order to be reused.
>* A RoundRobinPool, which represents a pool that stores its resources in 
> an ArrayList. It load-balances access to its resources by returning a 
> different resource every time a given key is looked up.
>   
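The thread-local flavor, which keeps a non-thread-safe handle such as HTable private to each (table, thread) pair, can be sketched as follows (illustrative only; the patch's actual class shapes may differ):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Each (key, thread) combination gets its own resource, so handles that are
// not thread-safe are never shared across threads.
class ThreadLocalPool<R> {
    private final Map<String, ThreadLocal<R>> pool = new ConcurrentHashMap<>();
    private final Supplier<R> factory;

    ThreadLocalPool(Supplier<R> factory) { this.factory = factory; }

    R get(String key) {
        return pool.computeIfAbsent(key, k -> ThreadLocal.withInitial(factory))
                   .get();  // same thread + same key -> same instance
    }
}
```

Keyed by table name, such a pool hands every thread its own table handle while insulating the user from the ThreadLocal bookkeeping.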




[jira] Updated: (HBASE-2486) Add simple "anti-entropy" for region assignment

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2486:
-

Status: Open  (was: Patch Available)

Cancelling stale patch.

> Add simple "anti-entropy" for region assignment
> ---
>
> Key: HBASE-2486
> URL: https://issues.apache.org/jira/browse/HBASE-2486
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver
>Affects Versions: 0.20.5
>Reporter: Todd Lipcon
>Assignee: Eugene Koontz
> Fix For: 0.92.0
>
> Attachments: hbase2486.diff, hbase2486.diff
>
>
> We've seen a number of bugs where a region server thinks it should not be 
> serving a region, but the master and META think it should be. I'd like to 
> propose a very simple way of fixing this issue:
> 1) whenever a regionserver throws a NotServingRegionException, it also marks 
> that region id in an RS-wide Set
> 2) when a region sends a heartbeat, include a message for each of these 
> regions, MSG_REPORT_NSRE or somesuch, and then clear the set
> 3) when the master receives MSG_REPORT_NSRE, it does the following checks:
> a) if the region is assigned elsewhere according to META, the NSRE was due to 
> a stale client, ignore
> b) if the region is in transition, ignore
> c) otherwise, we have an inconsistency, and we should take some steps to 
> resolve (eg mark the region unassigned, or exit the master if we are in 
> "paranoid mode")
> Whatever we do, we need to make sure that this is loudly logged, and causes 
> unit tests to fail, when it's detected. This should *not* happen, but when it 
> does, it would be good to recover without addtable.rb, etc.
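The master-side checks in step 3 amount to a small decision function (hypothetical names; this is a sketch of the proposal, not actual master code):

```java
public class NsreCheck {
    enum Action { IGNORE_STALE_CLIENT, IGNORE_IN_TRANSITION, RESOLVE_INCONSISTENCY }

    // Decide what to do when a regionserver reports an NSRE for a region
    // that META claims it should be serving.
    static Action onNsreReport(boolean assignedElsewhereInMeta, boolean inTransition) {
        if (assignedElsewhereInMeta) return Action.IGNORE_STALE_CLIENT;   // case (a)
        if (inTransition)            return Action.IGNORE_IN_TRANSITION;  // case (b)
        return Action.RESOLVE_INCONSISTENCY;  // case (c): mark unassigned, or
                                              // abort in "paranoid mode", logged loudly
    }
}
```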




[jira] Updated: (HBASE-2937) Facilitate Timeouts In HBase Client

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2937:
-

Priority: Critical  (was: Major)

Marking critical.  We owe Karthick a review and besides, we need an interruptible 
client whether we go with Karthick's patch or not.

> Facilitate Timeouts In HBase Client
> ---
>
> Key: HBASE-2937
> URL: https://issues.apache.org/jira/browse/HBASE-2937
> Project: HBase
>  Issue Type: New Feature
>  Components: client
>Affects Versions: 0.89.20100621
>Reporter: Karthick Sankarachary
>Priority: Critical
> Fix For: 0.92.0
>
> Attachments: HBASE-2937.patch
>
>







[jira] Updated: (HBASE-2936) Differentiate between daemon & restart sleep periods

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2936:
-

   Resolution: Fixed
Fix Version/s: 0.92.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

Committed to TRUNK.

> Differentiate between daemon & restart sleep periods
> 
>
> Key: HBASE-2936
> URL: https://issues.apache.org/jira/browse/HBASE-2936
> Project: HBase
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.89.20100621
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>Priority: Trivial
> Fix For: 0.92.0
>
> Attachments: HBASE-2936.patch
>
>
> Trivial change for rolling restart scripts.  Right now, both the stop->start 
> time and the per-daemon sleep time are controlled via HBASE_SLAVE_SLEEP.  This 
> param will normally be set relatively high (1-2 min), but we should be able 
> to start up an RS very soon after the stop has successfully completed.  Add 
> new variable HBASE_RESTART_SLEEP to allow for lower downtime on a per-daemon 
> basis.




[jira] Updated: (HBASE-2584) Add support for tryLock - non-blocking row lock acquisition

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2584:
-

Status: Open  (was: Patch Available)

Cancelling stale patch.

> Add support for tryLock - non-blocking row lock acquisition
> ---
>
> Key: HBASE-2584
> URL: https://issues.apache.org/jira/browse/HBASE-2584
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Michael Dalton
> Attachments: 2584.patch
>
>
> Currently HBase clients can only acquire row locks via the blocking lockRow() 
> method in HTable. As ryan described on the mailing list, relying on this 
> method in rare highly contended situations can lead to (temporary) deadlock. 
> This deadlock occurs if a client acquires the lock, and a large number of 
> other clients attempt to acquire the lock and block. Each blocked client 
> awaiting lock acquisition consumes one of the limited I/O handler threads on 
> the regionserver. When the lock holder wishes to release the lock, it will be 
> unable to, as all I/O threads are currently serving clients that are blocking 
> on lock acquisition -- and thus no I/O threads are open to process the unlock 
> request.
> To avoid deadlock situations such as the one described above, I have added 
> support for 'tryLock' in HTable (and on the regionservers). The 'tryLock' 
> method will attempt to acquire a row lock. In the event that a lock is 
> already held, tryLock immediately returns null rather than blocking and 
> waiting for the lock to be acquired. Clients can then implement their own 
> backoff/retry policy to re-acquire the lock, determine their own timeout 
> values, etc based on their application performance characteristics rather 
> than block on the regionserver and tie up precious I/O regionserver handler 
> threads for an indefinite amount of time. 
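The non-blocking semantics can be illustrated with java.util.concurrent's ReentrantLock standing in for the regionserver row lock (a sketch of the proposed behavior, not the patch itself):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    // Attempt a non-blocking acquisition from a *different* thread, the way
    // a second client would contend for a held row lock.
    static boolean tryFromOtherThread(ReentrantLock lock) throws InterruptedException {
        final boolean[] acquired = new boolean[1];
        Thread contender = new Thread(() -> {
            acquired[0] = lock.tryLock();  // returns immediately, never blocks
            if (acquired[0]) lock.unlock();
        });
        contender.start();
        contender.join();
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock rowLock = new ReentrantLock();
        rowLock.lock();  // this thread holds the row lock
        // The contender gets an immediate "no" instead of tying up an I/O
        // handler thread; it can back off and retry on its own schedule.
        System.out.println(tryFromOtherThread(rowLock));  // prints false
        rowLock.unlock();
        System.out.println(tryFromOtherThread(rowLock));  // prints true
    }
}
```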




[jira] Updated: (HBASE-2507) HTable#flushCommits() - event notification handler

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2507:
-

Status: Open  (was: Patch Available)

Cancelling stale patch.  This should be done as a coprocessor now anyway?

> HTable#flushCommits() - event notification handler 
> ---
>
> Key: HBASE-2507
> URL: https://issues.apache.org/jira/browse/HBASE-2507
> Project: HBase
>  Issue Type: Improvement
>Reporter: Kay Kay
> Fix For: 0.92.0
>
> Attachments: HBASE-2507.patch
>
>
> Event notification handler code when flushing commits on the client side. By 
> default, it is null. 
> New classes: CommitEventHandler, HTableCommitEvent. 
> Notification data: preSize, postSize. 
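The shapes implied by the description might look like this (inferred from the ticket text; the attached patch is authoritative):

```java
// Data carried to the handler when HTable#flushCommits() runs: the write
// buffer size before the flush and after it. Field names are inferred
// from the ticket, not copied from the patch.
class HTableCommitEvent {
    final long preSize;
    final long postSize;

    HTableCommitEvent(long preSize, long postSize) {
        this.preSize = preSize;
        this.postSize = postSize;
    }
}

// Client-supplied callback; when left null (the default), no notification
// is delivered.
interface CommitEventHandler {
    void onCommit(HTableCommitEvent event);
}
```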




[jira] Updated: (HBASE-2368) BulkPut - Writable class compatible with TableRecordWriter for bulk puts agnostic of region server mapping at Mapper/Combiner level

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2368:
-

Status: Open  (was: Patch Available)

Canceling stale patch.

> BulkPut - Writable class  compatible with TableRecordWriter for bulk puts 
> agnostic of region server mapping at Mapper/Combiner level
> 
>
> Key: HBASE-2368
> URL: https://issues.apache.org/jira/browse/HBASE-2368
> Project: HBase
>  Issue Type: Improvement
>  Components: client
>Reporter: Kay Kay
> Fix For: 0.92.0
>
> Attachments: HBASE-2368.patch
>
>
> TableRecordWriter currently accepts only a put/delete as writables. Some 
> mapper processes might want to consolidate the puts and insert them in 
> bulk. Useful in combiners/mappers to send a batch of puts from one stage 
> to another while maintaining a very similar region-server-mapping-agnostic 
> API at the respective levels. 
> New type: BulkPut (a Writable) that is just a consolidation of Puts. 
> Eventually, the TableRecordWriter bulk-inserts the puts together into the 
> HBase ecosystem. 
> The patch is made against trunk only, but since it does not break backward 
> compatibility, it could be a useful addition to the branch as well. 
> Let me know your comments on the same. 
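The consolidation idea can be sketched without Hadoop on the classpath: one object holds many puts and serializes them with count-prefixed framing, mirroring a Writable's write()/readFields() pair. Each "put" is reduced to a row-key string purely for illustration; the class and method names here are assumptions, not the patch's actual code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the BulkPut idea: consolidate many puts into one serializable unit.
public class BulkPutSketch {
    final List<String> rows = new ArrayList<>();

    void add(String rowKey) { rows.add(rowKey); }

    // Writable-style serialization: entry count first, then each entry.
    void write(DataOutput out) throws IOException {
        out.writeInt(rows.size());
        for (String r : rows) out.writeUTF(r);
    }

    void readFields(DataInput in) throws IOException {
        rows.clear();
        int n = in.readInt();
        for (int i = 0; i < n; i++) rows.add(in.readUTF());
    }

    // Convenience wrappers so callers need not handle IOException directly.
    byte[] toBytes() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            write(new DataOutputStream(buf));
            return buf.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    static BulkPutSketch fromBytes(byte[] bytes) {
        try {
            BulkPutSketch bulk = new BulkPutSketch();
            bulk.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
            return bulk;
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    public static void main(String[] args) {
        BulkPutSketch bulk = new BulkPutSketch();
        bulk.add("row1");
        bulk.add("row2");
        System.out.println(fromBytes(bulk.toBytes()).rows); // prints: [row1, row2]
    }
}
```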




[jira] Updated: (HBASE-493) Write-If-Not-Modified-Since support

2010-12-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-493:


Status: Open  (was: Patch Available)

Cancelling stale patch.

> Write-If-Not-Modified-Since support
> ---
>
> Key: HBASE-493
> URL: https://issues.apache.org/jira/browse/HBASE-493
> Project: HBase
>  Issue Type: New Feature
>  Components: client, io, regionserver
>Affects Versions: 0.90.0
>Reporter: Chris Richard
>Priority: Minor
> Attachments: HBASE-493.patch, HBASE-493.v2.patch
>
>
> Write-If-Not-Modified-Since for optimistic concurrency control:
> Client retrieves cell (or row) and stores timestamp.
> Client writes to same cell (or row) and passes timestamp.
> If the cell's (or row's) latest timestamp matches the passed timestamp, the 
> write succeeds. If the timestamps do not match, the write fails and the 
> client is notified. The client must re-retrieve the cell/row to get the 
> latest timestamp before attempting to write back.
> This behavior would be optional: if the client doesn't pass a timestamp to 
> the write method, no modified-since check would be enforced.
> Note: blocked behind HBASE-489 due to requirement that client be able to 
> access timestamp values.
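The check described above amounts to a timestamp-guarded compare-and-set. The sketch below illustrates that logic in isolation; all names are invented for illustration and none of this is HBase API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of write-if-not-modified-since: a write succeeds only when
// the caller's timestamp matches the cell's latest timestamp.
public class TimestampGuard {

    static class Cell {
        String value;
        long timestamp;   // 0 until the first successful write
    }

    private final Map<String, Cell> cells = new HashMap<>();

    // Returns true and advances the timestamp if expectedTs matches the
    // cell's current timestamp; returns false (write rejected) otherwise.
    synchronized boolean putIfUnmodified(String row, String value, long expectedTs) {
        Cell c = cells.computeIfAbsent(row, k -> new Cell());
        if (c.timestamp != expectedTs) {
            return false;   // cell changed since the client last read it
        }
        c.value = value;
        // Guarantee the timestamp moves forward even within one millisecond.
        c.timestamp = Math.max(c.timestamp + 1, System.currentTimeMillis());
        return true;
    }

    synchronized long readTimestamp(String row) {
        Cell c = cells.get(row);
        return c == null ? 0 : c.timestamp;
    }

    public static void main(String[] args) {
        TimestampGuard table = new TimestampGuard();
        boolean first = table.putIfUnmodified("r", "v1", 0);  // fresh cell: ts is 0
        boolean stale = table.putIfUnmodified("r", "v2", 0);  // stale ts: rejected
        long ts = table.readTimestamp("r");                   // re-read, then retry
        boolean retry = table.putIfUnmodified("r", "v2", ts);
        System.out.println(first + " " + stale + " " + retry); // prints: true false true
    }
}
```

The failed write in the middle is exactly the notification path the issue describes: the client learns its copy is stale and must re-read before writing again.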




[jira] Updated: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3316:
--

Attachment: HBASE-3316.patch

> Add support for Java Serialization to HbaseObjectWritable
> -
>
> Key: HBASE-3316
> URL: https://issues.apache.org/jira/browse/HBASE-3316
> Project: HBase
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.92.0
>Reporter: Ed Kohlwey
>Priority: Minor
> Fix For: 0.92.0
>
> Attachments: HBASE-3316.patch
>
>
> It is convenient in some situations to have HbaseObjectWritable write 
> serializable Java objects, for instance when prototyping new code where you 
> don't want to take the time to implement a writable.
> Adding this support requires no overhead compared to the current implementation.




[jira] Updated: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3316:
--

Status: Patch Available  (was: Open)

> Add support for Java Serialization to HbaseObjectWritable
> -
>
> Key: HBASE-3316
> URL: https://issues.apache.org/jira/browse/HBASE-3316
> Project: HBase
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.92.0
>Reporter: Ed Kohlwey
>Priority: Minor
> Fix For: 0.92.0
>
> Attachments: HBASE-3316.patch
>
>
> It is convenient in some situations to have HbaseObjectWritable write 
> serializable Java objects, for instance when prototyping new code where you 
> don't want to take the time to implement a writable.
> Adding this support requires no overhead compared to the current implementation.




[jira] Updated: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3316:
--

Status: Open  (was: Patch Available)

> Add support for Java Serialization to HbaseObjectWritable
> -
>
> Key: HBASE-3316
> URL: https://issues.apache.org/jira/browse/HBASE-3316
> Project: HBase
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.92.0
>Reporter: Ed Kohlwey
>Priority: Minor
> Fix For: 0.92.0
>
>
> It is convenient in some situations to have HbaseObjectWritable write 
> serializable Java objects, for instance when prototyping new code where you 
> don't want to take the time to implement a writable.
> Adding this support requires no overhead compared to the current implementation.




[jira] Updated: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread Ed Kohlwey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Kohlwey updated HBASE-3316:
--

Status: Patch Available  (was: Open)

> Add support for Java Serialization to HbaseObjectWritable
> -
>
> Key: HBASE-3316
> URL: https://issues.apache.org/jira/browse/HBASE-3316
> Project: HBase
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.92.0
>Reporter: Ed Kohlwey
>Priority: Minor
> Fix For: 0.92.0
>
>
> It is convenient in some situations to have HbaseObjectWritable write 
> serializable Java objects, for instance when prototyping new code where you 
> don't want to take the time to implement a writable.
> Adding this support requires no overhead compared to the current implementation.




[jira] Created: (HBASE-3316) Add support for Java Serialization to HbaseObjectWritable

2010-12-07 Thread Ed Kohlwey (JIRA)
Add support for Java Serialization to HbaseObjectWritable
-

 Key: HBASE-3316
 URL: https://issues.apache.org/jira/browse/HBASE-3316
 Project: HBase
  Issue Type: New Feature
  Components: io
Affects Versions: 0.92.0
Reporter: Ed Kohlwey
Priority: Minor
 Fix For: 0.92.0


It is convenient in some situations to have HbaseObjectWritable write 
serializable Java objects, for instance when prototyping new code where you 
don't want to take the time to implement a writable.

Adding this support requires no overhead compared to the current implementation.
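The fallback this issue proposes boils down to: when an object implements Serializable rather than Writable, carry it with standard Java serialization. The standalone round-trip below uses only java.io; it is a sketch of the mechanism, not the actual HbaseObjectWritable code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Sketch of a Java-serialization fallback for objects that are not Writables.
public class SerializationFallback {

    static byte[] serialize(Serializable obj) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(obj);   // no Writable implementation needed
            }
            return buf.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    static Object deserialize(byte[] bytes) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] bytes = serialize("prototype payload");
        System.out.println(deserialize(bytes)); // prints: prototype payload
    }
}
```

This is why the feature suits prototyping: any Serializable class round-trips with no extra code, at the cost of Java serialization's larger on-wire format compared to a hand-written Writable.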
