RE: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Kang Minwoo
Thanks for your help.

I have to upgrade my client dependencies.


I will try!


From: Dima Spivak
Sent: Wednesday, August 31, 2016 9:43:42 AM
To: user@hbase.apache.org
Subject: Re: How to deal OutOfOrderScannerNextException

Hey Minwoo,

Gotcha. Unfortunately, we don't guarantee compatibility between that client
and the HBase server you're running, so the only way to solve the issue
will probably be to upgrade your client dependencies.
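For reference, a minimal sketch of what upgrading the client dependencies could look like in a Maven build — assuming the project uses Maven, which the thread does not state:

```xml
<!-- Hypothetical pom.xml excerpt: align the client artifacts with the 1.2.2 server. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>1.2.2</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-hadoop-compat</artifactId>
  <version>1.2.2</version>
</dependency>
```

For the legacy 0.96 clusters, a separate module would still pin 0.96.2-hadoop2, since wire compatibility across that gap is not guaranteed.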


Re: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Dima Spivak
Hey Minwoo,

Gotcha. Unfortunately, we don't guarantee compatibility between that client
and the HBase server you're running, so the only way to solve the issue
will probably be to upgrade your client dependencies.

On Tuesday, August 30, 2016, Kang Minwoo  wrote:

> Because I have a number of HBase clusters, and they run different versions.
>
>
> The legacy HBase cluster version is 0.96.2-hadoop2, so I have to keep
> supporting 0.96.2-hadoop2.

RE: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Kang Minwoo
Because I have a number of HBase clusters, and they run different versions.


The legacy HBase cluster version is 0.96.2-hadoop2, so I have to keep
supporting 0.96.2-hadoop2.


From: Dima Spivak
Sent: Wednesday, August 31, 2016 9:32:59 AM
To: user@hbase.apache.org
Subject: Re: How to deal OutOfOrderScannerNextException

Any reason to not use the 1.2.2 client library? You're likely hitting a
compatibility issue.



Re: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Dima Spivak
Any reason to not use the 1.2.2 client library? You're likely hitting a
compatibility issue.



-- 
-Dima


RE: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Kang Minwoo
Hi Dima Spivak,


Thanks for your interest in my problem.


The HBase server version is 1.2.2.

The Java HBase library version is 0.96.2-hadoop2 for hbase-client and
0.96.2-hadoop2 for hbase-hadoop-compat.


Here is an excerpt of the code.

ResultScanner rs = keyTable.getScanner(scan); // ==> The exception is thrown here.
List<Result> list = new ArrayList<Result>();
try {
    for (Result r : rs) {
        list.add(r);
    }
} finally {
    rs.close();
}
return list;

Here is the stack trace.

org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:384)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:177)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1107)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1167)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1059)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1016)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:857)
    at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
    at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:118)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:119)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:96)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:264)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:169)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:164)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:107)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:720)
    at com.my.app.reader.hbase.HBaseReader.getResult(HBaseReader.java:1)


Yours sincerely,
Minwoo



From: Dima Spivak
Sent: Tuesday, August 30, 2016 11:58:12 PM
To: user@hbase.apache.org
Subject: Re: How to deal OutOfOrderScannerNextException

Hey Minwoo,

What version of HBase are you running? Also, can you post an excerpt of the
code you're trying to run when you get this Exception?



Re: ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Dima Spivak
If you're trying to unsubscribe from HBase's mailing lists, go to
https://hbase.apache.org/mail-lists.html and follow the instructions.

On Tuesday, August 30, 2016, Mark Prakash  wrote:

> I am no longer involved in Hadoop; can someone please tell me how to
> unsubscribe from these email threads?
>


-- 
-Dima


Re: ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Mark Prakash
I am no longer involved in Hadoop; can someone please tell me how to
unsubscribe from these email threads?



Re: ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Jason Barber
Unsubscribe 



Re: Options to Import Data from MySql DB to HBase

2016-08-30 Thread Jerry He
1. You can run multiple Sqoop import jobs, selecting which columns go into
each HBase column family.
2. You can specify the delimiter/separator for ImportTSV.
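A sketch of what those two options might look like on the command line — the flags are standard Sqoop and ImportTsv options, but the database, table, column-family, and path names below are made up for illustration:

```sh
# 1) One Sqoop import per column family: each job selects the columns
#    destined for that family (here 'cf1' and 'cf2' are hypothetical).
sqoop import --connect jdbc:mysql://dbhost/mydb --table mytable \
  --columns "id,col_a,col_b" \
  --hbase-table mytable --column-family cf1 --hbase-row-key id

sqoop import --connect jdbc:mysql://dbhost/mydb --table mytable \
  --columns "id,col_c" \
  --hbase-table mytable --column-family cf2 --hbase-row-key id

# 2) ImportTsv with a custom separator, so the CSV files Sqoop already
#    wrote to HDFS can be loaded without reformatting.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  '-Dimporttsv.separator=,' \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf1:col_a,cf1:col_b \
  mytable /user/example/mytable-csv
```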

Jerry

On Tue, Aug 30, 2016 at 6:13 AM, John Leach 
wrote:

> I would suggest using an open source tool on top of HBase (Splice Machine,
> Trafodion, or Phoenix) if you want to map from an RDBMS.
>
> Regards,
> John Leach
>
> > On Aug 30, 2016, at 1:09 AM, Soumya Nayan Kar 
> wrote:
> >
> > I have a single table in MySql which contains around 2400 records. I
> > need a way to import this data into a table in HBase with multiple column
> > families. I initially chose Sqoop as the tool to import the data but
> later
> > found that I cannot use Sqoop to directly import the data as Sqoop does
> not
> > support multiple column family import as yet. I have populated the data
> in
> > HDFS using Sqoop from the MySql database. What are my choices to import
> > this data from HDFSFS to HBase table with 3 column families? It seems for
> > bulk import, I have two choices:
> >
> >   - ImportTSV tool: this probably requires the source data to be in TSV
> >   format. But the data that I have imported in HDFS from MySql using
> Sqoop
> >   seems to be in the CSV format. What is the standard solution for this
> >   approach?
> >   - Write a custom Map Reduce program to translate the data in HDFS to
> >   HFile and load it into HBase.
> >
> > I just wanted to confirm that these are the only two choices available to
> > load the data. This seems to be a bit restrictive given the fact that
> such
> > a requirement is a very basic one in any system. If custom Map Reduce is
> > the way to go, an example or working sample would be really helpful.
>
>


Re: Avro schema getting changed dynamically

2016-08-30 Thread Stack
On Mon, Aug 29, 2016 at 11:17 PM, Manjeet Singh 
wrote:

> Hi All,
>
> I have a use case that puts data in Avro format in HBase. I have frequent
> read/write operations, but that is not a problem.
>
> The problem is: what if my Avro schema gets changed? How should I deal with
> it? Keep in mind the older data that was already inserted into HBase while
> we now have a new schema.
>
> Can anyone suggest a solution for this?
>
>
Do a bit of research on Avro.

Usually the schema is stored apart from the data. To read the old format, you
must supply the old schema.

But you can also set it up to embed the schema, IIRC.

Also, I'm no expert, but adding fields is probably fine. Have you tried
reading the old format w/ new schema?

For more, poke around https://github.com/kijiproject, a mature project that
uses Avro to serialize data, with built-in schema migration, etc.

St.Ack





> Thanks
> Manjeet
>
> --
> luv all
>


Re: Multirange Scan performance

2016-08-30 Thread Ted Yu
Can you take a look at TestMultiRowRangeFilter to see if your usage is
different?

It would be easier if you pastebin a snippet of your code w.r.t.
MultiRowRangeFilter.

Thanks



Multirange Scan performance

2016-08-30 Thread daunnc
Hi HBase users. I'm using HBase with Spark.
What I am trying to do is perform a multirange scan. First of all, I tried
to use MultiRowRangeFilter to set up the scanner ranges with
TableInputFormat, but it looks like this solution scans all rows in the
table? What is the use case for such a filter, then?

The next step was to use separate scanners and MultiTableInputFormat to
read the data; it helped and indeed works faster, and I guess this solution
does not walk through the whole table.

Is MultiTableInputFormat an optimal solution in this case? How are such
situations (multirange scans) usually handled (e.g., the Accumulo API has a
setRanges function to set ranges for the InputFormat)?



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/Multirange-Scan-performance-tp4082215.html
Sent from the HBase User mailing list archive at Nabble.com.


ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Rich Bowen
It's traditional. We wait for the last minute to get our talk proposals
in for conferences.

Well, the last minute has arrived. The CFP for ApacheCon Seville closes
on September 9th, which is less than 2 weeks away. It's time to get your
talks in, so that we can make this the best ApacheCon yet.

It's also time to discuss with your developer and user community whether
there's a track of talks that you might want to propose, so that you
have more complete coverage of your project than a talk or two.

For Apache Big Data, the relevant URLs are:
Event details:
http://events.linuxfoundation.org/events/apache-big-data-europe
CFP:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp

For ApacheCon Europe, the relevant URLs are:
Event details: http://events.linuxfoundation.org/events/apachecon-europe
CFP: http://events.linuxfoundation.org/events/apachecon-europe/program/cfp

This year, we'll be reviewing papers "blind" - that is, looking at the
abstracts without knowing who the speaker is. This has been shown to
eliminate the "me and my buddies" nature of many tech conferences,
producing more diversity, and more new speakers. So make sure your
abstracts clearly explain what you'll be talking about.

For further updates about ApacheCon, follow us on Twitter, @ApacheCon,
or drop by our IRC channel, #apachecon on the Freenode IRC network.

-- 
Rich Bowen
WWW: http://apachecon.com/
Twitter: @ApacheCon


Re: How to deal OutOfOrderScannerNextException

2016-08-30 Thread Dima Spivak
Hey Minwoo,

What version of HBase are you running? Also, can you post an excerpt of the
code you're trying to run when you get this Exception?

On Tuesday, August 30, 2016, Kang Minwoo  wrote:

> Hello Hbase users.
>
>
> While I was using the HBase client library in Java, I got
> OutOfOrderScannerNextException.
>
> Here is the stacktrace.
>
>
> --
>
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException:
> Failed after retry of OutOfOrderScannerNextException: was there a rpc
> timeout?
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(
> AbstractClientScanner.java:94)
> --
>
> This error occurred when using the scan method.
> When I used the HBase shell, there was no exception.
> But in the Java client, OutOfOrderScannerNextException occurs when using
> scan. (The get method is fine.)
>
> If someone knows how to deal with OutOfOrderScannerNextException, please
> share your knowledge.
> It would be very helpful.
>
> Yours sincerely,
> Minwoo
>
>

-- 
-Dima


Re: Options to Import Data from MySql DB to HBase

2016-08-30 Thread John Leach
I would suggest using an open source tool on top of HBase (Splice Machine,
Trafodion, or Phoenix) if you want to map from an RDBMS.

Regards,
John Leach

> On Aug 30, 2016, at 1:09 AM, Soumya Nayan Kar  
> wrote:
> 
> I have a single table in MySql which contains around 2400 records. I
> need a way to import this data into a table in HBase with multiple column
> families. I initially chose Sqoop as the tool to import the data but later
> found that I cannot use Sqoop to directly import the data as Sqoop does not
> support multiple column family import as yet. I have populated the data in
> HDFS using Sqoop from the MySql database. What are my choices to import
> this data from HDFS to an HBase table with 3 column families? It seems for
> bulk import, I have two choices:
> 
>   - ImportTSV tool: this probably requires the source data to be in TSV
>   format. But the data that I have imported in HDFS from MySql using Sqoop
>   seems to be in the CSV format. What is the standard solution for this
>   approach?
>   - Write a custom Map Reduce program to translate the data in HDFS to
>   HFile and load it into HBase.
> 
> I just wanted to confirm whether these are the only two choices available to
> load the data. This seems to be a bit restrictive given the fact that such
> a requirement is a very basic one in any system. If custom Map Reduce is
> the way to go, an example or working sample would be really helpful.
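
On the first option: ImportTsv is not limited to tab separators, so the comma-separated files Sqoop wrote can be loaded by overriding the separator. A sketch of an invocation; the table name, column mapping, and HDFS paths below are illustrative:

```shell
# Generate HFiles from Sqoop's CSV output, mapping columns into three
# column families (cf1, cf2, cf3):
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf1:a,cf2:b,cf3:c \
  -Dimporttsv.bulk.output=/tmp/hfiles \
  mytable /user/sqoop/mysql_table

# Then complete the bulk load of the generated HFiles into the table:
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mytable
```

Without -Dimporttsv.bulk.output, ImportTsv writes through the normal Put path instead of producing HFiles, which is simpler but slower for large loads.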



Re: Avro schema getting changed dynamically

2016-08-30 Thread Ted Yu
Probably you can poll user@avro for how the new field is handled given old
data.

FYI

On Mon, Aug 29, 2016 at 11:28 PM, Manjeet Singh 
wrote:

> I want to add a few more points.
>
> I am using the Java native API for HBase get/put,
>
> and below is the example.
>
> Assume I have the schema below and I am inserting data using this schema
> into HBase, but later new values come in and I need to fit them into my
> schema. For this I need to create a new schema, as shown in the example
> below; in example 2 I have a new field, timestamp.
>
> example 1
> {
>   "type" : "record",
>   "name" : "twitter_schema",
>   "namespace" : "com.miguno.avro",
>   "fields" : [
>     { "name" : "username", "type" : "string",
>       "doc" : "Name of the user account on Twitter.com" },
>     { "name" : "tweet", "type" : "string",
>       "doc" : "The content of the user's Twitter message" }
>   ]
> }
>
>
> example 2
>
>
> {
>   "type" : "record",
>   "name" : "twitter_schema",
>   "namespace" : "com.miguno.avro",
>   "fields" : [
>     { "name" : "username", "type" : "string",
>       "doc" : "Name of the user account on Twitter.com" },
>     { "name" : "tweet", "type" : "string",
>       "doc" : "The content of the user's Twitter message" },
>     { "name" : "timestamp", "type" : "long",
>       "doc" : "Unix epoch time in seconds" }
>   ]
> }
>
>
>
>
> Thanks
> Manjeet
>
>
>
>
>
> On Tue, Aug 30, 2016 at 11:47 AM, Manjeet Singh <
> manjeet.chand...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I have a use case to put data in Avro format in HBase. I have frequent
> > read/write operations, but that is not a problem.
> >
> > The problem is: what if my Avro schema gets changed? How should I deal
> > with it? Keep in mind the older data that was already inserted into HBase
> > when we now have a new schema.
> >
> > Can anyone suggest a solution for this?
> >
> > Thanks
> > Manjeet
> >
> > --
> > luv all
> >
>
>
>
> --
> luv all
>
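
For reference, Avro's standard answer to this is schema resolution: if the added field carries a "default" in the new schema, a reader using the new schema can still decode records that were written with the old one. A sketch, assuming Avro is on the classpath; OLD_SCHEMA_JSON and NEW_SCHEMA_JSON stand in for the two schema strings from the examples above:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;

// writerSchema: the old twitter_schema the bytes in HBase were written with.
// newSchema: the new schema, where the added field must declare a default, e.g.
//   { "name" : "timestamp", "type" : "long", "default" : 0,
//     "doc" : "Unix epoch time in seconds" }
Schema writerSchema = new Schema.Parser().parse(OLD_SCHEMA_JSON);
Schema newSchema = new Schema.Parser().parse(NEW_SCHEMA_JSON);

// A reader constructed with both schemas resolves old records against the
// new schema, filling any missing field from its declared default:
DatumReader<GenericRecord> reader =
    new GenericDatumReader<>(writerSchema, newSchema);
```

The practical consequence is that you need to know (or store alongside each row, e.g. a schema version in a qualifier) which schema a cell was written with, so you can pass the correct writer schema when decoding.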


Re: HBase for Small Key Value Tables

2016-08-30 Thread Ted Yu
You don't need to rebuild HBase.

Just add an entry in hbase-site.xml for the following config:
> hbase.master.balancer.stochastic.tableSkewCost
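
For illustration, the full hbase-site.xml entry would look like this (the value 500 follows the suggestion quoted below; tune it for your cluster):

```xml
<!-- Raise the table-skew cost so the stochastic balancer weights
     per-table region evenness more heavily (the default is 35). -->
<property>
  <name>hbase.master.balancer.stochastic.tableSkewCost</name>
  <value>500</value>
</property>
```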

Restart master after the addition. 

Cheers

> On Aug 30, 2016, at 12:10 AM, Manish Maheshwari  wrote:
> 
> Hi Ted,
> 
> Where do we set this value, DEFAULT_TABLE_SKEW_COST = 35? I see it only
> in StochasticLoadBalancer.java.
> We don't find this in any of the HBase Config files. Do we need to re-build
> HBase from code for this?
> 
> Thanks,
> Manish
> 
>> On Tue, Aug 30, 2016 at 6:44 AM, Ted Yu  wrote:
>> 
>> StochasticLoadBalancer by default would balance regions evenly across the
>> cluster.
>> 
>> Regions of particular table may not be evenly distributed.
>> 
>> Increase the value for the following config:
>> 
>>private static final String TABLE_SKEW_COST_KEY =
>> 
>>"hbase.master.balancer.stochastic.tableSkewCost";
>> 
>>private static final float DEFAULT_TABLE_SKEW_COST = 35;
>> 
>> You can set 500 or higher.
>> 
>> FYI
>> 
>> On Mon, Aug 29, 2016 at 3:22 PM, Manish Maheshwari 
>> wrote:
>> 
>>> Thanks Ted for the maxregionsize per table idea. We will try to keep it
>>> around 1-2 Gigs and see how it goes. Will this also make sure that the
>>> region migrates to another region server? Or do we still need to do that
>>> manually?
>>> 
>>> On JMX, Since the environment is production, we are yet unable to use jmx
>>> for stats collection. But in dev we are trying it out.
>>> 
 On Aug 30, 2016 1:01 AM, "Ted Yu"  wrote:
 
 bq. We cannot change the maxregionsize parameter
 
 The region size can be changed on per table basis:
 
  hbase> alter 't1', MAX_FILESIZE => '134217728'
 
 See the beginning of hbase-shell/src/main/ruby/shell/commands/alter.rb
>>> for
 more details.
 
 FYI
 
 On Sun, Aug 28, 2016 at 10:44 PM, Manish Maheshwari <
>> mylogi...@gmail.com
 
 wrote:
 
> Hi,
> 
> We have a scenario where HBase is used like a Key Value Database to
>> map
> Keys to Regions. We have over 5 Million Keys, but the table size is
>>> less
> than 7 GB. The read volume is pretty high - About 50x of the
>> put/delete
> volume. This causes hot spotting on the Data Node and the region is
>> not
> split. We cannot change the maxregionsize parameter as that will
>> impact
> other tables too.
> 
> Our idea is to manually inspect the row key ranges and then split the
> region manually and assign them to different region servers. We will
> continue to then monitor the rows in one region to see if needs to be
> split.
> 
> Any experience of doing this on HBase. Is this a recommended
>> approach?
> 
> Thanks,
> Manish
>> 


How to deal OutOfOrderScannerNextException

2016-08-30 Thread Kang Minwoo
Hello Hbase users.


While I was using the HBase client library in Java, I got
OutOfOrderScannerNextException.

Here is the stacktrace.


--

java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
--

This error occurred when using the scan method.
When I used the HBase shell, there was no exception.
But in the Java client, OutOfOrderScannerNextException occurs when using
scan. (The get method is fine.)

If someone knows how to deal with OutOfOrderScannerNextException, please share
your knowledge.
It would be very helpful.

Yours sincerely,
Minwoo
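
As the exception message itself hints ("was there a rpc timeout?"), this often surfaces when a scanner next() call times out and is retried mid-scan. A common client-side mitigation is to fetch fewer rows per RPC and/or raise the RPC timeout; a sketch for a 1.x-era Java client, with illustrative values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

Configuration conf = HBaseConfiguration.create();
// Raise the RPC timeout so a slow next() is less likely to be retried
// mid-scan (the retry is what trips OutOfOrderScannerNextException):
conf.setInt("hbase.rpc.timeout", 120000);

Scan scan = new Scan();
// Fetch fewer rows per RPC so each round trip stays well under the timeout:
scan.setCaching(100);
```

This only papers over the symptom, though; as discussed later in the thread, a client/server version mismatch needs the client dependencies upgraded.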



Re: HBase for Small Key Value Tables

2016-08-30 Thread Manish Maheshwari
Hi Ted,

Where do we set this value, DEFAULT_TABLE_SKEW_COST = 35? I see it only
in StochasticLoadBalancer.java.
We don't find this in any of the HBase Config files. Do we need to re-build
HBase from code for this?

Thanks,
Manish

On Tue, Aug 30, 2016 at 6:44 AM, Ted Yu  wrote:

> StochasticLoadBalancer by default would balance regions evenly across the
> cluster.
>
> Regions of particular table may not be evenly distributed.
>
> Increase the value for the following config:
>
> private static final String TABLE_SKEW_COST_KEY =
>
> "hbase.master.balancer.stochastic.tableSkewCost";
>
> private static final float DEFAULT_TABLE_SKEW_COST = 35;
>
> You can set 500 or higher.
>
> FYI
>
> On Mon, Aug 29, 2016 at 3:22 PM, Manish Maheshwari 
> wrote:
>
> > Thanks Ted for the maxregionsize per table idea. We will try to keep it
> > around 1-2 Gigs and see how it goes. Will this also make sure that the
> > region migrates to another region server? Or do we still need to do that
> > manually?
> >
> > On JMX, Since the environment is production, we are yet unable to use jmx
> > for stats collection. But in dev we are trying it out.
> >
> > On Aug 30, 2016 1:01 AM, "Ted Yu"  wrote:
> >
> > > bq. We cannot change the maxregionsize parameter
> > >
> > > The region size can be changed on per table basis:
> > >
> > >   hbase> alter 't1', MAX_FILESIZE => '134217728'
> > >
> > > See the beginning of hbase-shell/src/main/ruby/shell/commands/alter.rb
> > for
> > > more details.
> > >
> > > FYI
> > >
> > > On Sun, Aug 28, 2016 at 10:44 PM, Manish Maheshwari <
> mylogi...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We have a scenario where HBase is used like a Key Value Database to
> map
> > > > Keys to Regions. We have over 5 Million Keys, but the table size is
> > less
> > > > than 7 GB. The read volume is pretty high - About 50x of the
> put/delete
> > > > volume. This causes hot spotting on the Data Node and the region is
> not
> > > > split. We cannot change the maxregionsize parameter as that will
> impact
> > > > other tables too.
> > > >
> > > > Our idea is to manually inspect the row key ranges and then split the
> > > > region manually and assign them to different region servers. We will
> > > > continue to then monitor the rows in one region to see if needs to be
> > > > split.
> > > >
> > > > Any experience of doing this on HBase. Is this a recommended
> approach?
> > > >
> > > > Thanks,
> > > > Manish
> > > >
> > >
> >
>