Help needed connecting to HBase

2014-01-30 Thread jeevi tesh
Hi all,

I'm writing a simple Java program to connect to HBase.

Versions of software used:

HBase 0.96.0-hadoop2

Hadoop 2.2.0

I'm trying to connect from my Windows 7 (32-bit) machine to an Oracle Linux
machine (details: VM, 64-bit). Note that I have not installed ZooKeeper.


Any suggestions or comments would be of great help.

Thanks

Here is the code:

package pack1;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class testDB {

  /**
   * @param args
   */
  public static void main(String[] args) {
    try {
      System.out.println("1 Before HBASE COnfiguration");
      Configuration config = HBaseConfiguration.create();
      config.clear();
      System.out.println("2 Before HBASE COnfiguration");
      config.set("hbase.master", "192.168.1.42:60010");

      System.out.println("HBase is running!");
      HTable table = new HTable(config, "mytable");
      System.out.println("Table mytable obtained ");

      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("colfam1"), Bytes.toBytes("qual1"), Bytes.toBytes("val1"));
      put.add(Bytes.toBytes("colfam1"), Bytes.toBytes("qual2"), Bytes.toBytes("val2"));
      table.put(put);
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}

Error message

1 Before HBASE COnfiguration

2 Before HBASE COnfiguration

HBase is running!

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client environment:host.name
=DELL-75.unilog

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.6.0_16

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=Sun Microsystems Inc.

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.home=C:\Program Files\Java\jdk1.6.0_16\jre

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=E:\jeevi\XRFWorkSpace200\testdbcon\bin;E:\jeevi\XRF100\WebContent\WEB-INF\lib\apache-logging-log4j.jar;E:\jeevi\XRF100\WebContent\WEB-INF\lib\asm-3.1.jar;E:\jeevi\XRF100\WebContent\WEB-INF\lib\commons-logging-1.1.3.jar;E:\jeevi\XRF100\WebContent\WEB-INF\lib\hadoop-core-0.19.0.jar;E:\jeevi\XRF100\WebContent\WEB-INF\lib\hbase-0.90.2.jar;E:\jeevi\XRF100\WebContent\WEB-INF\lib\zookeeper-3.3.2.jar

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=C:\Program
Files\Java\jdk1.6.0_16\bin;.;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program
Files\Common Files\Acronis\SnapAPI\;C:\Program
Files\Java\jdk1.6.0_16\bin;C:\apache-maven-3.1.1\bin

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=C:\Users\jems\AppData\Local\Temp\

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows
7

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:os.version=6.1

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client environment:user.name
=jems

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:user.home=C:\Users\jems

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Client
environment:user.dir=E:\jeevi\XRFWorkSpace200\testdbcon

14/01/31 13:14:55 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=18 watcher=hconnection

14/01/31 13:14:55 INFO zookeeper.ClientCnxn: Opening socket connection to
server localhost/127.0.0.1:2181

14/01/31 13:14:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
unexpected error, closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused: no further information

  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)

  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)

14/01/31 13:14:56 INFO zookeeper.ClientCnxn: Opening socket connection to
server localhost/0:0:0:0:0:0:0:1:2181

14/01/31 13:14:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
unexpected error, closing socket connection and attempting reconnect

java.net.SocketException: Address family not supported by protocol
family: connect

  at sun.nio.ch.Net.connect(Native Method)
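(The log above shows the client falling back to connectString=localhost:2181: hbase.zookeeper.quorum is never set, hbase.master:60010 points at the master's web UI port rather than a client endpoint, and the classpath still carries old hbase-0.90.2/hadoop-core-0.19.0 jars against a 0.96.0/hadoop-2.2.0 cluster. Below is a minimal client-configuration sketch, not the poster's verified setup; it assumes the VM at 192.168.1.42 runs HBase's bundled ZooKeeper on the default port 2181.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Configuration-fragment sketch: HBase 0.96 clients discover the cluster
// via ZooKeeper, so point the client at the ZooKeeper quorum instead of
// setting "hbase.master". Host and port are assumptions about this setup.
public class ClientConfigSketch {
    public static Configuration clientConfig() {
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "192.168.1.42");
        config.set("hbase.zookeeper.property.clientPort", "2181");
        return config;
    }
}
```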

 

Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread ramkrishna vasudevan
Yes, there was a concurrent compaction happening.  This was the cause of the
scanner reset, and so we finally ended up seeking/calling next in the encoded
blocks of those files under the StoreFileScanner.

Adding the trace to show how a memstore flusher was trying to read an HFile:

org.apache.hadoop.hbase.DroppedSnapshotException: region:
usertable,user5152654437639860133,1391056599393.654e89edf63813d2120e9d287afff889.
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1694)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1556)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1471)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:456)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:430)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:66)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:248)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: index (16161) must be
less than size (7)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:305)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:284)
at 
org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.get(LRUDictionary.java:139)
at 
org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.access$000(LRUDictionary.java:76)
at 
org.apache.hadoop.hbase.io.util.LRUDictionary.getEntry(LRUDictionary.java:43)
at 
org.apache.hadoop.hbase.io.TagCompressionContext.uncompressTags(TagCompressionContext.java:159)
at 
org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.decodeTags(BufferedDataBlockEncoder.java:273)
at 
org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:522)
at 
org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:540)
at 
org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:262)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1063)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:137)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:509)
at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:128)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:73)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:786)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1943)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1669)



On Fri, Jan 31, 2014 at 11:17 AM, lars hofhansl  wrote:

> Interesting. Did you see the cause for the scanner reset? Was it a
> concurrent compaction?
>
>
>
> - Original Message -
> From: ramkrishna vasudevan 
> To: "dev@hbase.apache.org" ; lars hofhansl <
> la...@apache.org>
> Cc:
> Sent: Thursday, January 30, 2014 9:41 PM
> Subject: Re: StoreScanner created for memstore flush should be bothered
> about updated readers?
>
> >> The scanner stack is only reset if the set of HFiles for this store
> changes, i.e. a compaction or a concurrent flush (when using multithreaded
> flushing). It seems that would relatively rare.
> In our test scenario this happens.  While trying to find out the root cause
> for HBASE-10443, hit this issue. It is not directly related to the flush
> scenario but found this issue while debugging it.
> I was not trying to improve the performance here, but the fact that we
> updating the kv heap does make the flush to read those Hfiles on a
> StoreScanner.next() call and it is expensive.
>
> Regards
> Ram
>
>
>
>
>
>
> On Fri, Jan 31, 2014 at 11:02 AM, lars hofhansl  wrote:
>
> > From what I found is that the main performance detriment comes from the
> > fact that we need to take a lock for each next/peek call of the
> > StoreScanner. Even when those are uncontended (which they are in 99.9% of
> > the cases) the memory read/writes barriers are expensive.
> >
> > I doubt you'll see much improvement from this. The scanner stack is only
> > reset if the set of HFiles for this store changes, i.e. a compaction or a
> > concurrent flush (when using multithreaded flushing). It seems that would
> > relatively rare.
> >
> > If anything we could a class like StoreScanner that does not need to
> > synchronize any of its calls, but even th

Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread lars hofhansl
Interesting. Did you see the cause for the scanner reset? Was it a concurrent 
compaction?



- Original Message -
From: ramkrishna vasudevan 
To: "dev@hbase.apache.org" ; lars hofhansl 

Cc: 
Sent: Thursday, January 30, 2014 9:41 PM
Subject: Re: StoreScanner created for memstore flush should be bothered about 
updated readers?

>> The scanner stack is only reset if the set of HFiles for this store
changes, i.e. a compaction or a concurrent flush (when using multithreaded
flushing). It seems that would relatively rare.
In our test scenario this happens.  While trying to find out the root cause
for HBASE-10443, hit this issue. It is not directly related to the flush
scenario but found this issue while debugging it.
I was not trying to improve the performance here, but the fact that we
updating the kv heap does make the flush to read those Hfiles on a
StoreScanner.next() call and it is expensive.

Regards
Ram






On Fri, Jan 31, 2014 at 11:02 AM, lars hofhansl  wrote:

> From what I found is that the main performance detriment comes from the
> fact that we need to take a lock for each next/peek call of the
> StoreScanner. Even when those are uncontended (which they are in 99.9% of
> the cases) the memory read/writes barriers are expensive.
>
> I doubt you'll see much improvement from this. The scanner stack is only
> reset if the set of HFiles for this store changes, i.e. a compaction or a
> concurrent flush (when using multithreaded flushing). It seems that would
> relatively rare.
>
> If anything we could a class like StoreScanner that does not need to
> synchronize any of its calls, but even there, the flush is asynchronous to
> any user action (unless we're blocked on the number of store files, in
> which case there bigger problem anyway).
>
>
> Did you see a specific issue?
>
> -- Lars
>
>
>
> - Original Message -
> From: ramkrishna vasudevan 
> To: "dev@hbase.apache.org" 
> Cc:
> Sent: Thursday, January 30, 2014 11:48 AM
> Subject: StoreScanner created for memstore flush should be bothered about
> updated readers?
>
> Hi All
>
> In case of flush we create a memstore flusher which in turn creates a
> StoreScanner backed by a Single ton MemstoreScanner.
>
> But this scanner also registers for any updates in the reader in the
> HStore.  Is this needed?
> If this happens then any update on the reader may nullify the current heap
> and the entire Scanner Stack is reset, but this time with the other
> scanners for all the files that satisfies the last top key.  So the flush
> that happens on the memstore holds the storefile scanners also in the heap
> that was recreated but originally the intention was to create a scanner on
> the memstore alone.
>
> Am i missing something here?  Or what i observed is right?  If so, then I
> feel that this step can be avoided.
>
> Regards
> Ram
>
>



Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread ramkrishna vasudevan
>> The scanner stack is only reset if the set of HFiles for this store
changes, i.e. a compaction or a concurrent flush (when using multithreaded
flushing). It seems that would relatively rare.
In our test scenario this happens.  While trying to find out the root cause
for HBASE-10443, we hit this issue.  It is not directly related to the flush
scenario, but we found it while debugging.
I was not trying to improve the performance here, but the fact that we
update the KV heap makes the flush read those HFiles on a
StoreScanner.next() call, and that is expensive.

Regards
Ram





On Fri, Jan 31, 2014 at 11:02 AM, lars hofhansl  wrote:

> From what I found is that the main performance detriment comes from the
> fact that we need to take a lock for each next/peek call of the
> StoreScanner. Even when those are uncontended (which they are in 99.9% of
> the cases) the memory read/writes barriers are expensive.
>
> I doubt you'll see much improvement from this. The scanner stack is only
> reset if the set of HFiles for this store changes, i.e. a compaction or a
> concurrent flush (when using multithreaded flushing). It seems that would
> relatively rare.
>
> If anything we could a class like StoreScanner that does not need to
> synchronize any of its calls, but even there, the flush is asynchronous to
> any user action (unless we're blocked on the number of store files, in
> which case there bigger problem anyway).
>
>
> Did you see a specific issue?
>
> -- Lars
>
>
>
> - Original Message -
> From: ramkrishna vasudevan 
> To: "dev@hbase.apache.org" 
> Cc:
> Sent: Thursday, January 30, 2014 11:48 AM
> Subject: StoreScanner created for memstore flush should be bothered about
> updated readers?
>
> Hi All
>
> In case of flush we create a memstore flusher which in turn creates a
> StoreScanner backed by a Single ton MemstoreScanner.
>
> But this scanner also registers for any updates in the reader in the
> HStore.  Is this needed?
> If this happens then any update on the reader may nullify the current heap
> and the entire Scanner Stack is reset, but this time with the other
> scanners for all the files that satisfies the last top key.  So the flush
> that happens on the memstore holds the storefile scanners also in the heap
> that was recreated but originally the intention was to create a scanner on
> the memstore alone.
>
> Am i missing something here?  Or what i observed is right?  If so, then I
> feel that this step can be avoided.
>
> Regards
> Ram
>
>


Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread lars hofhansl
From what I found, the main performance detriment comes from the fact
that we need to take a lock for each next/peek call of the StoreScanner. Even
when those are uncontended (which they are in 99.9% of the cases), the memory
read/write barriers are expensive.

I doubt you'll see much improvement from this. The scanner stack is only reset
if the set of HFiles for this store changes, i.e. a compaction or a concurrent
flush (when using multithreaded flushing). It seems that would be relatively rare.

If anything we could add a class like StoreScanner that does not need to
synchronize any of its calls, but even there, the flush is asynchronous to any
user action (unless we're blocked on the number of store files, in which case
there's a bigger problem anyway).


Did you see a specific issue?

-- Lars



- Original Message -
From: ramkrishna vasudevan 
To: "dev@hbase.apache.org" 
Cc: 
Sent: Thursday, January 30, 2014 11:48 AM
Subject: StoreScanner created for memstore flush should be bothered about 
updated readers?

Hi All

In case of flush we create a memstore flusher which in turn creates a
StoreScanner backed by a Single ton MemstoreScanner.

But this scanner also registers for any updates in the reader in the
HStore.  Is this needed?
If this happens then any update on the reader may nullify the current heap
and the entire Scanner Stack is reset, but this time with the other
scanners for all the files that satisfies the last top key.  So the flush
that happens on the memstore holds the storefile scanners also in the heap
that was recreated but originally the intention was to create a scanner on
the memstore alone.

Am i missing something here?  Or what i observed is right?  If so, then I
feel that this step can be avoided.

Regards
Ram



Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread ramkrishna vasudevan
Thanks Enis for your reply.  The effect of resetting the heap is that we
would start reading those files under the StoreScanner and start comparing
with the KVs in the memstore scanner.
Ideally this comparison is not needed, and it can be avoided by not
allowing the store scanners to get added to the heap.
(Will this flush the KVs from the file once again?  Need to verify that;
ideally it should not happen.)
I can file a JIRA for this and we can discuss it there.

Regards
Ram


On Fri, Jan 31, 2014 at 7:54 AM, Enis Söztutar  wrote:

> It seems you are right. I think only if a concurrent compaction finishes
> the memstore scanner would be affected right?
>
> How big is the affect for resetting the KVHeap ?
>
> Enis
>
>
> On Thu, Jan 30, 2014 at 11:48 AM, ramkrishna vasudevan <
> ramkrishna.s.vasude...@gmail.com> wrote:
>
> > Hi All
> >
> > In case of flush we create a memstore flusher which in turn creates a
> >  StoreScanner backed by a Single ton MemstoreScanner.
> >
> > But this scanner also registers for any updates in the reader in the
> > HStore.  Is this needed?
> > If this happens then any update on the reader may nullify the current
> heap
> > and the entire Scanner Stack is reset, but this time with the other
> > scanners for all the files that satisfies the last top key.  So the flush
> > that happens on the memstore holds the storefile scanners also in the
> heap
> > that was recreated but originally the intention was to create a scanner
> on
> > the memstore alone.
> >
> > Am i missing something here?  Or what i observed is right?  If so, then I
> > feel that this step can be avoided.
> >
> > Regards
> > Ram
> >
>


Re: StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread Enis Söztutar
It seems you are right.  I think the memstore scanner would be affected only
if a concurrent compaction finishes, right?

How big is the effect of resetting the KVHeap?

Enis


On Thu, Jan 30, 2014 at 11:48 AM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> Hi All
>
> In case of flush we create a memstore flusher which in turn creates a
>  StoreScanner backed by a Single ton MemstoreScanner.
>
> But this scanner also registers for any updates in the reader in the
> HStore.  Is this needed?
> If this happens then any update on the reader may nullify the current heap
> and the entire Scanner Stack is reset, but this time with the other
> scanners for all the files that satisfies the last top key.  So the flush
> that happens on the memstore holds the storefile scanners also in the heap
> that was recreated but originally the intention was to create a scanner on
> the memstore alone.
>
> Am i missing something here?  Or what i observed is right?  If so, then I
> feel that this step can be avoided.
>
> Regards
> Ram
>


[jira] [Reopened] (HBASE-10445) TestRegionObserverInterface occasionally times out

2014-01-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-10445:



> TestRegionObserverInterface occasionally times out
> --
>
> Key: HBASE-10445
> URL: https://issues.apache.org/jira/browse/HBASE-10445
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10445-v1.txt, TestRegionObserverInterface-output-2.txt, 
> TestRegionObserverInterface-output-3.txt
>
>
> TestRegionObserverInterface occasionally times out
> Running in a loop, it timed out at 9th iteration twice.
> The test starts a cluster with 1 region server. If this server goes down, the 
> following message would be repeatedly printed:
> {code}
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(2140): Can't move 1588230740, there is no 
> destination server available.
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(1863): Unable to determine a plan to assign {ENCODED 
> => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10445) TestRegionObserverInterface occasionally times out

2014-01-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-10445.


Resolution: Invalid

bq. I reproduced the test failure just now - with distributed log replay on.

To do that, you have to modify the code to enable it by default as a local 
change. Therefore this is an invalid issue, unless and until we commit that 
change.

> TestRegionObserverInterface occasionally times out
> --
>
> Key: HBASE-10445
> URL: https://issues.apache.org/jira/browse/HBASE-10445
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10445-v1.txt, TestRegionObserverInterface-output-2.txt, 
> TestRegionObserverInterface-output-3.txt
>
>
> TestRegionObserverInterface occasionally times out
> Running in a loop, it timed out at 9th iteration twice.
> The test starts a cluster with 1 region server. If this server goes down, the 
> following message would be repeatedly printed:
> {code}
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(2140): Can't move 1588230740, there is no 
> destination server available.
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(1863): Unable to determine a plan to assign {ENCODED 
> => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Reopened] (HBASE-10445) TestRegionObserverInterface occasionally times out

2014-01-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HBASE-10445:



> TestRegionObserverInterface occasionally times out
> --
>
> Key: HBASE-10445
> URL: https://issues.apache.org/jira/browse/HBASE-10445
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10445-v1.txt, TestRegionObserverInterface-output-2.txt
>
>
> TestRegionObserverInterface occasionally times out
> Running in a loop, it timed out at 9th iteration twice.
> The test starts a cluster with 1 region server. If this server goes down, the 
> following message would be repeatedly printed:
> {code}
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(2140): Can't move 1588230740, there is no 
> destination server available.
> 2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
> master.AssignmentManager(1863): Unable to determine a plan to assign {ENCODED 
> => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


StoreScanner created for memstore flush should be bothered about updated readers?

2014-01-30 Thread ramkrishna vasudevan
Hi All

In the case of a flush, we create a memstore flusher which in turn creates a
StoreScanner backed by a singleton MemstoreScanner.

But this scanner also registers for any updates to the readers in the
HStore.  Is this needed?
If this happens, then any update on the readers may nullify the current heap,
and the entire scanner stack is reset, but this time with the other
scanners for all the files that satisfy the last top key.  So the flush
that happens on the memstore holds the storefile scanners in the heap
that was recreated, although the original intention was to create a scanner
on the memstore alone.

Am I missing something here?  Or is what I observed right?  If so, then I
feel that this step can be avoided.

Regards
Ram
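For readers following the thread: the "heap" discussed here merges the memstore scanner with any store-file scanners and always surfaces the smallest current key. Below is a toy model of such a merge heap in plain Java; it is not HBase's actual KeyValueHeap implementation, and the class and method names are invented for illustration.

```java
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Toy model (not HBase code) of a key-value heap: each underlying sorted
// scanner contributes its current "top" key, and next() always returns the
// smallest top across all scanners, advancing the scanner that owned it.
class ToyKeyValueHeap {
    private static final class Entry implements Comparable<Entry> {
        final Iterator<String> it;
        String top;
        Entry(Iterator<String> it) { this.it = it; this.top = it.next(); }
        public int compareTo(Entry o) { return top.compareTo(o.top); }
    }

    private final PriorityQueue<Entry> heap = new PriorityQueue<>();

    ToyKeyValueHeap(List<List<String>> sortedScanners) {
        for (List<String> scanner : sortedScanners) {
            if (!scanner.isEmpty()) {
                heap.add(new Entry(scanner.iterator()));
            }
        }
    }

    // Return the smallest current key, or null when all scanners are drained.
    String next() {
        Entry e = heap.poll();
        if (e == null) {
            return null;
        }
        String result = e.top;
        if (e.it.hasNext()) {
            e.top = e.it.next();
            heap.add(e);
        }
        return result;
    }
}
```

In this toy model, rebuilding the heap with extra store-file scanners makes every next() call compare against those scanners' tops as well, which is the extra per-call work the thread describes for the flush path.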


[jira] [Created] (HBASE-10446) Backup master gives Error 500 for debug dump

2014-01-30 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-10446:
---

 Summary: Backup master gives Error 500 for debug dump
 Key: HBASE-10446
 URL: https://issues.apache.org/jira/browse/HBASE-10446
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor


Clicking Debug dump on the backup master web UI, we get:

{noformat}
HTTP ERROR 500

Problem accessing /dump. Reason:

INTERNAL_SERVER_ERROR

Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.hbase.master.MasterDumpServlet.dumpServers(MasterDumpServlet.java:113)
at 
org.apache.hadoop.hbase.master.MasterDumpServlet.doGet(MasterDumpServlet.java:68)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10445) TestRegionObserverInterface occasionally times out

2014-01-30 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10445:
--

 Summary: TestRegionObserverInterface occasionally times out
 Key: HBASE-10445
 URL: https://issues.apache.org/jira/browse/HBASE-10445
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu


TestRegionObserverInterface occasionally times out
Running in a loop, it timed out at the 9th iteration twice.

The test starts a cluster with 1 region server. If this server goes down, the 
following message would be repeatedly printed:
{code}
2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
master.AssignmentManager(2140): Can't move 1588230740, there is no destination 
server available.
2014-01-30 00:35:16,144 WARN  [MASTER_META_SERVER_OPERATIONS-kiyo:42930-0] 
master.AssignmentManager(1863): Unable to determine a plan to assign {ENCODED 
=> 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Tablesplit.getLength returns 0

2014-01-30 Thread Nick Dimiduk
Sounds good, I'll watch for your patch!

On Thursday, January 30, 2014, Lukas Nalezenec <
lukas.naleze...@firma.seznam.cz> wrote:

> I talked with guy who worked on this and he said our production issue was
> probably not directly caused by getLength() returning 0.
> Anyway, we are interested in fixing that, estimating length from files is
> good idea.
>
> Lukas
>
>  InputSplit.getLength() and RecordReader.getProgress() is important for the
>> MR framework to be able to show progress etc. It would be good to return
>> raw data sizes in getLength() computed from region's total size of store
>> files, and progress being calculated from scanner's amount of raw data
>> seen.
>>
>
> Enis
>
>
>


[jira] [Created] (HBASE-10444) NPE seen in logs at tail of fatal shutdown

2014-01-30 Thread Steve Loughran (JIRA)
Steve Loughran created HBASE-10444:
--

 Summary: NPE seen in logs at tail of fatal shutdown
 Key: HBASE-10444
 URL: https://issues.apache.org/jira/browse/HBASE-10444
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
 Environment: in 0.98.0 RC1
Reporter: Steve Loughran
Priority: Minor


hbase RS logs show an NPE in shutdown; no other info

{code}
14/01/30 14:18:25 INFO ipc.RpcServer: Stopping server on 57186
Exception in thread "regionserver57186" java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:897)
at java.lang.Thread.run(Thread.java:744)
14/01/30 14:18:25 ERROR regionserver.HRegionServerCommand
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Tablesplit.getLength returns 0

2014-01-30 Thread Lukas Nalezenec
I talked with the guy who worked on this, and he said our production issue
was probably not directly caused by getLength() returning 0.
Anyway, we are interested in fixing that; estimating the length from the
files is a good idea.


Lukas


InputSplit.getLength() and RecordReader.getProgress() are important for the
MR framework to be able to show progress etc.  It would be good to return
raw data sizes in getLength(), computed from the region's total size of store
files, with progress calculated from the scanner's amount of raw data seen.


Enis
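Enis's suggestion can be sketched in plain Java as follows. This is a toy illustration, not the actual TableSplit patch; the class, method names, and inputs are invented. The idea: estimate a split's length from the total size of the region's store files, and derive progress from the raw bytes the scanner has seen so far.

```java
import java.util.List;

// Hypothetical sketch: estimate split length from store-file sizes rather
// than returning 0, so MR progress reporting has a meaningful denominator.
class RegionSizeEstimate {
    // fileSizes: byte sizes of the region's store files (assumed known).
    static long estimatedLength(List<Long> fileSizes) {
        long total = 0;
        for (long size : fileSizes) {
            total += size;
        }
        return total;
    }

    // Progress = raw bytes the scanner has seen so far / estimated total,
    // clamped to 1.0 since the estimate may undershoot the true size.
    static float progress(long bytesSeen, long totalBytes) {
        if (totalBytes <= 0) {
            return 0.0f;
        }
        return Math.min(1.0f, (float) bytesSeen / totalBytes);
    }
}
```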