HBase REST service did not respond in a secure cluster

2016-10-10 Thread kumar r
Hi,


HBase Version 1.1.5
OS - Windows

I have enabled the HBase REST service with SSL and proxy user support.

When accessing the HBase REST URL,

https://machine1:8082/ 

it returns nothing and keeps loading.


<property>
  <name>hbase.rest.keytab.file</name>
  <value>thrift.keytab</value>
</property>
<property>
  <name>hbase.rest.kerberos.principal</name>
  <value>Principal</value>
</property>
<property>
  <name>hbase.rest.support.proxyuser</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rest.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.rest.authentication.kerberos.principal</name>
  <value>HTTP/_h...@example.com</value>
</property>
<property>
  <name>hbase.rest.authentication.kerberos.keytab</name>
  <value>http.keytab</value>
</property>
<property>
  <name>hbase.rest.ssl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rest.ssl.keystore.store</name>
  <value>keystore</value>
</property>
<property>
  <name>hbase.rest.ssl.keystore.password</name>
  <value>pass</value>
</property>
<property>
  <name>hbase.rest.ssl.keystore.keypassword</name>
  <value>keypass</value>
</property>
<property>
  <name>hbase.rest.port</name>
  <value>8082</value>
</property>

After some time, the logs show:

Unavailable
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=14, exceptions:
Mon Oct 10 12:55:21 IST 2016,
RpcRetryingCaller{globalStartTime=1476084308042, pause=1000,
retries=14}, org.apache.hadoop.hbase.MasterNotRunningException:
com.google.protobuf.ServiceException:
org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to
machine2/192.168.60.4:6 failed on local exception:
org.apache.hadoop.hbase.exceptions.ConnectionClosingException:
Connection to machine2/192.168.60.4:6 is closing. Call id=66,
waitTime=13088

Did I miss anything?

Any help would be appreciated.

Stackoverflow:
http://stackoverflow.com/questions/39952755/hbase-rest-service-did-not-respond-in-a-secure-cluster


Thanks,

Kumar
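One way to narrow this down is to separate "REST server hung" from "REST server up but rejecting unauthenticated requests". A minimal probe in plain Java is sketched below; machine1 and port 8082 come from the configuration above, /version/cluster is a lightweight resource the REST server normally exposes, and the JVM has to trust the server's SSL certificate for the handshake to get that far.

import java.net.HttpURLConnection;
import java.net.URL;

public class RestProbe {
    public static void main(String[] args) throws Exception {
        // Host and port are the ones configured above (adjust as needed).
        URL url = new URL("https://machine1:8082/version/cluster");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(10000);
        conn.setReadTimeout(10000);
        // With Kerberos (SPNEGO) authentication enabled, an unauthenticated probe
        // should come back quickly with a 401 rather than hang. A request that
        // never returns points at the REST server itself being stuck, which would
        // match the RetriesExhaustedException against the master in the log above.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}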


Re: Coprocessor exception in RegionServer log

2016-10-10 Thread Ted Yu
Can you outline the steps taken in hbase shell ?

If you can show the skeleton of your endpoint, that may help as well. 

Lastly, consider upgrading :-)

Cheers
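For reference, a minimal Endpoint skeleton against the HBase 1.x coprocessor API usually looks like the sketch below. AndProtos and its request/response/service types are hypothetical stand-ins for the poster's generated protobuf classes; the essential parts are implementing both Coprocessor and CoprocessorService and returning the service instance from getService(), the method named in the LinkageError quoted later in the thread.

import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// AndProtos.* are hypothetical generated protobuf classes for this sketch.
public class AndEndPoint extends AndProtos.AndService
        implements Coprocessor, CoprocessorService {

    private RegionCoprocessorEnvironment env;

    @Override
    public Service getService() {
        return this;  // the method named in the LinkageError
    }

    @Override
    public void start(CoprocessorEnvironment e) {
        this.env = (RegionCoprocessorEnvironment) e;
    }

    @Override
    public void stop(CoprocessorEnvironment e) {
        // nothing to clean up in this sketch
    }

    @Override
    public void and(RpcController controller,
                    AndProtos.AndRequest request,
                    RpcCallback<AndProtos.AndResponse> done) {
        // a real implementation would scan the region via env and compute a result
        done.run(AndProtos.AndResponse.getDefaultInstance());
    }
}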

> On Oct 10, 2016, at 8:54 PM, big data  wrote:
> 
> I used the static load method and it seems OK...
> 
> Why can it not run with the dynamic load method?
> 
> 
> 
>> On 16/10/11 at 10:18 AM, big data wrote:
>> No, it is the original log from HBase; nothing is missing.
>> 
>> I'll try it through hbase-site.xml.
>> 
>> thanks
>> 
>> 
>> 
>>> On 16/10/11 at 10:05 AM, Ted Yu wrote:
>>> bq. have different Class objects for the type otobuf/Service
>>> 
>>> Was the above copied verbatim ?
>>> 
>>> I wonder why protobuf was cut off.
>>> 
>>> If you deploy through hbase.coprocessor.region.classes in hbase-site.xml,
>>> do you still have the same error ?
>>> 
>>> Cheers
>>> 
 On Mon, Oct 10, 2016 at 6:43 PM, big data  wrote:
 
 [INFO] Scanning for projects...
 [INFO]
 [INFO]
 
 [INFO] Building bitmapdemo 1.0-SNAPSHOT
 [INFO]
 
 Downloading:
 https://repo.maven.apache.org/maven2/org/roaringbitmap/
 RoaringBitmap/maven-metadata.xml
 Downloaded:
 https://repo.maven.apache.org/maven2/org/roaringbitmap/
 RoaringBitmap/maven-metadata.xml
 (3 KB at 1.0 KB/sec)
 [INFO]
 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ demo ---
 [INFO] com.demo:jar:1.0-SNAPSHOT
 [INFO] +- org.apache.hadoop:hadoop-client:jar:2.6.0:compile
 [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.6.0:compile
 [INFO] |  |  +- io.netty:netty:jar:3.6.2.Final:compile
 [INFO] |  |  \- xerces:xercesImpl:jar:2.9.1:compile
 [INFO] |  | \- xml-apis:xml-apis:jar:1.3.04:compile
 [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.6.
 0:compile
 [INFO] |  |  +-
 org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.6.0:compile
 [INFO] |  |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:2.6.0:compile
 [INFO] |  |  |  \-
 org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.0:compile
 [INFO] |  |  \-
 org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.6.0:compile
 [INFO] |  | \- org.fusesource.leveldbjni:
 leveldbjni-all:jar:1.8:compile
 [INFO] |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.0:compile
 [INFO] |  +-
 org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.6.0:compile
 [INFO] |  |  \- org.apache.hadoop:hadoop-yarn-common:jar:2.6.0:compile
 [INFO] |  | +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
 [INFO] |  | |  +- javax.xml.stream:stax-api:jar:1.0-2:compile
 [INFO] |  | |  \- javax.activation:activation:jar:1.1:compile
 [INFO] |  | \- com.sun.jersey:jersey-client:jar:1.9:compile
 [INFO] |  +-
 org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.6.0:compile
 [INFO] |  \- org.apache.hadoop:hadoop-annotations:jar:2.6.0:compile
 [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
 [INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
 [INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
 [INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
 [INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
 [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile
 [INFO] |  +- commons-codec:commons-codec:jar:1.4:compile
 [INFO] |  +- commons-io:commons-io:jar:2.4:compile
 [INFO] |  +- commons-net:commons-net:jar:3.1:compile
 [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
 [INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
 [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
 [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
 [INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile
 [INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:compile
 [INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
 [INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
 [INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
 [INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile
 [INFO] |  |  \- asm:asm:jar:3.1:compile
 [INFO] |  +- tomcat:jasper-compiler:jar:5.5.23:compile
 [INFO] |  +- tomcat:jasper-runtime:jar:5.5.23:runtime
 [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:runtime
 [INFO] |  +- commons-el:commons-el:jar:1.0:runtime
 [INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
 [INFO] |  +- log4j:log4j:jar:1.2.17:compile
 [INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:compile
 [INFO] |  |  +- org.apache.httpcomponents:httpclient:jar:4.1.2:compile
 [INFO] |  |  +- org.apache.httpcomponents:httpcore:jar:4.1.2:compile
 [INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
 [INFO] |  +- 

Re: Coprocessor exception in RegionServer log

2016-10-10 Thread big data
I used the static load method and it seems OK...

Why can it not run with the dynamic load method?



On 16/10/11 at 10:18 AM, big data wrote:
> No, it is the original log from HBase; nothing is missing.
>
> I'll try it through hbase-site.xml.
>
> thanks
>
>
>
> On 16/10/11 at 10:05 AM, Ted Yu wrote:
>> bq. have different Class objects for the type otobuf/Service
>>
>> Was the above copied verbatim ?
>>
>> I wonder why protobuf was cut off.
>>
>> If you deploy through hbase.coprocessor.region.classes in hbase-site.xml,
>> do you still have the same error ?
>>
>> Cheers
>>
>> On Mon, Oct 10, 2016 at 6:43 PM, big data  wrote:
>>
>>> [INFO] Scanning for projects...
>>> [INFO]
>>> [INFO]
>>> 
>>> [INFO] Building bitmapdemo 1.0-SNAPSHOT
>>> [INFO]
>>> 
>>> Downloading:
>>> https://repo.maven.apache.org/maven2/org/roaringbitmap/
>>> RoaringBitmap/maven-metadata.xml
>>> Downloaded:
>>> https://repo.maven.apache.org/maven2/org/roaringbitmap/
>>> RoaringBitmap/maven-metadata.xml
>>> (3 KB at 1.0 KB/sec)
>>> [INFO]
>>> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ demo ---
>>> [INFO] com.demo:jar:1.0-SNAPSHOT
>>> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.6.0:compile
>>> [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.6.0:compile
>>> [INFO] |  |  +- io.netty:netty:jar:3.6.2.Final:compile
>>> [INFO] |  |  \- xerces:xercesImpl:jar:2.9.1:compile
>>> [INFO] |  | \- xml-apis:xml-apis:jar:1.3.04:compile
>>> [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.6.
>>> 0:compile
>>> [INFO] |  |  +-
>>> org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.6.0:compile
>>> [INFO] |  |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:2.6.0:compile
>>> [INFO] |  |  |  \-
>>> org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.0:compile
>>> [INFO] |  |  \-
>>> org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.6.0:compile
>>> [INFO] |  | \- org.fusesource.leveldbjni:
>>> leveldbjni-all:jar:1.8:compile
>>> [INFO] |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.0:compile
>>> [INFO] |  +-
>>> org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.6.0:compile
>>> [INFO] |  |  \- org.apache.hadoop:hadoop-yarn-common:jar:2.6.0:compile
>>> [INFO] |  | +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
>>> [INFO] |  | |  +- javax.xml.stream:stax-api:jar:1.0-2:compile
>>> [INFO] |  | |  \- javax.activation:activation:jar:1.1:compile
>>> [INFO] |  | \- com.sun.jersey:jersey-client:jar:1.9:compile
>>> [INFO] |  +-
>>> org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.6.0:compile
>>> [INFO] |  \- org.apache.hadoop:hadoop-annotations:jar:2.6.0:compile
>>> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
>>> [INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
>>> [INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
>>> [INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
>>> [INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
>>> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile
>>> [INFO] |  +- commons-codec:commons-codec:jar:1.4:compile
>>> [INFO] |  +- commons-io:commons-io:jar:2.4:compile
>>> [INFO] |  +- commons-net:commons-net:jar:3.1:compile
>>> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
>>> [INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
>>> [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
>>> [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
>>> [INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile
>>> [INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:compile
>>> [INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
>>> [INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
>>> [INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
>>> [INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile
>>> [INFO] |  |  \- asm:asm:jar:3.1:compile
>>> [INFO] |  +- tomcat:jasper-compiler:jar:5.5.23:compile
>>> [INFO] |  +- tomcat:jasper-runtime:jar:5.5.23:runtime
>>> [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:runtime
>>> [INFO] |  +- commons-el:commons-el:jar:1.0:runtime
>>> [INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
>>> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
>>> [INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:compile
>>> [INFO] |  |  +- org.apache.httpcomponents:httpclient:jar:4.1.2:compile
>>> [INFO] |  |  +- org.apache.httpcomponents:httpcore:jar:4.1.2:compile
>>> [INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
>>> [INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
>>> [INFO] |  +- commons-configuration:commons-configuration:jar:1.6:compile
>>> [INFO] |  |  +- commons-digester:commons-digester:jar:1.8:compile
>>> [INFO] |  |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
>>> [INFO] |  |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
>>> [INFO] |  +- 

Re: Coprocessor exception in RegionServer log

2016-10-10 Thread big data
No, it is the original log from HBase; nothing is missing.

I'll try it through hbase-site.xml.

thanks



On 16/10/11 at 10:05 AM, Ted Yu wrote:
> bq. have different Class objects for the type otobuf/Service
>
> Was the above copied verbatim ?
>
> I wonder why protobuf was cut off.
>
> If you deploy through hbase.coprocessor.region.classes in hbase-site.xml,
> do you still have the same error ?
>
> Cheers
>
> On Mon, Oct 10, 2016 at 6:43 PM, big data  wrote:
>
>> [INFO] Scanning for projects...
>> [INFO]
>> [INFO]
>> 
>> [INFO] Building bitmapdemo 1.0-SNAPSHOT
>> [INFO]
>> 
>> Downloading:
>> https://repo.maven.apache.org/maven2/org/roaringbitmap/
>> RoaringBitmap/maven-metadata.xml
>> Downloaded:
>> https://repo.maven.apache.org/maven2/org/roaringbitmap/
>> RoaringBitmap/maven-metadata.xml
>> (3 KB at 1.0 KB/sec)
>> [INFO]
>> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ demo ---
>> [INFO] com.demo:jar:1.0-SNAPSHOT
>> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.6.0:compile
>> [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.6.0:compile
>> [INFO] |  |  +- io.netty:netty:jar:3.6.2.Final:compile
>> [INFO] |  |  \- xerces:xercesImpl:jar:2.9.1:compile
>> [INFO] |  | \- xml-apis:xml-apis:jar:1.3.04:compile
>> [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.6.
>> 0:compile
>> [INFO] |  |  +-
>> org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.6.0:compile
>> [INFO] |  |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:2.6.0:compile
>> [INFO] |  |  |  \-
>> org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.0:compile
>> [INFO] |  |  \-
>> org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.6.0:compile
>> [INFO] |  | \- org.fusesource.leveldbjni:
>> leveldbjni-all:jar:1.8:compile
>> [INFO] |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.0:compile
>> [INFO] |  +-
>> org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.6.0:compile
>> [INFO] |  |  \- org.apache.hadoop:hadoop-yarn-common:jar:2.6.0:compile
>> [INFO] |  | +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
>> [INFO] |  | |  +- javax.xml.stream:stax-api:jar:1.0-2:compile
>> [INFO] |  | |  \- javax.activation:activation:jar:1.1:compile
>> [INFO] |  | \- com.sun.jersey:jersey-client:jar:1.9:compile
>> [INFO] |  +-
>> org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.6.0:compile
>> [INFO] |  \- org.apache.hadoop:hadoop-annotations:jar:2.6.0:compile
>> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
>> [INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
>> [INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
>> [INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
>> [INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
>> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile
>> [INFO] |  +- commons-codec:commons-codec:jar:1.4:compile
>> [INFO] |  +- commons-io:commons-io:jar:2.4:compile
>> [INFO] |  +- commons-net:commons-net:jar:3.1:compile
>> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
>> [INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
>> [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
>> [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
>> [INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile
>> [INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:compile
>> [INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
>> [INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
>> [INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
>> [INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile
>> [INFO] |  |  \- asm:asm:jar:3.1:compile
>> [INFO] |  +- tomcat:jasper-compiler:jar:5.5.23:compile
>> [INFO] |  +- tomcat:jasper-runtime:jar:5.5.23:runtime
>> [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:runtime
>> [INFO] |  +- commons-el:commons-el:jar:1.0:runtime
>> [INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
>> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
>> [INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:compile
>> [INFO] |  |  +- org.apache.httpcomponents:httpclient:jar:4.1.2:compile
>> [INFO] |  |  +- org.apache.httpcomponents:httpcore:jar:4.1.2:compile
>> [INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
>> [INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
>> [INFO] |  +- commons-configuration:commons-configuration:jar:1.6:compile
>> [INFO] |  |  +- commons-digester:commons-digester:jar:1.8:compile
>> [INFO] |  |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
>> [INFO] |  |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
>> [INFO] |  +- org.slf4j:slf4j-api:jar:1.7.5:compile
>> [INFO] |  +- org.slf4j:slf4j-log4j12:jar:1.7.5:compile
>> [INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
>> [INFO] |  +- 

Re: Coprocessor exception in RegionServer log

2016-10-10 Thread Ted Yu
bq. have different Class objects for the type otobuf/Service

Was the above copied verbatim ?

I wonder why protobuf was cut off.

If you deploy through hbase.coprocessor.region.classes in hbase-site.xml,
do you still have the same error ?

Cheers

On Mon, Oct 10, 2016 at 6:43 PM, big data  wrote:

> [INFO] Scanning for projects...
> [INFO]
> [INFO]
> 
> [INFO] Building bitmapdemo 1.0-SNAPSHOT
> [INFO]
> 
> Downloading:
> https://repo.maven.apache.org/maven2/org/roaringbitmap/
> RoaringBitmap/maven-metadata.xml
> Downloaded:
> https://repo.maven.apache.org/maven2/org/roaringbitmap/
> RoaringBitmap/maven-metadata.xml
> (3 KB at 1.0 KB/sec)
> [INFO]
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ demo ---
> [INFO] com.demo:jar:1.0-SNAPSHOT
> [INFO] +- org.apache.hadoop:hadoop-client:jar:2.6.0:compile
> [INFO] |  +- org.apache.hadoop:hadoop-hdfs:jar:2.6.0:compile
> [INFO] |  |  +- io.netty:netty:jar:3.6.2.Final:compile
> [INFO] |  |  \- xerces:xercesImpl:jar:2.9.1:compile
> [INFO] |  | \- xml-apis:xml-apis:jar:1.3.04:compile
> [INFO] |  +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.6.
> 0:compile
> [INFO] |  |  +-
> org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.6.0:compile
> [INFO] |  |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:2.6.0:compile
> [INFO] |  |  |  \-
> org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.0:compile
> [INFO] |  |  \-
> org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.6.0:compile
> [INFO] |  | \- org.fusesource.leveldbjni:
> leveldbjni-all:jar:1.8:compile
> [INFO] |  +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.0:compile
> [INFO] |  +-
> org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.6.0:compile
> [INFO] |  |  \- org.apache.hadoop:hadoop-yarn-common:jar:2.6.0:compile
> [INFO] |  | +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
> [INFO] |  | |  +- javax.xml.stream:stax-api:jar:1.0-2:compile
> [INFO] |  | |  \- javax.activation:activation:jar:1.1:compile
> [INFO] |  | \- com.sun.jersey:jersey-client:jar:1.9:compile
> [INFO] |  +-
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.6.0:compile
> [INFO] |  \- org.apache.hadoop:hadoop-annotations:jar:2.6.0:compile
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
> [INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
> [INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
> [INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile
> [INFO] |  +- commons-codec:commons-codec:jar:1.4:compile
> [INFO] |  +- commons-io:commons-io:jar:2.4:compile
> [INFO] |  +- commons-net:commons-net:jar:3.1:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> [INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
> [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
> [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
> [INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile
> [INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
> [INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
> [INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile
> [INFO] |  |  \- asm:asm:jar:3.1:compile
> [INFO] |  +- tomcat:jasper-compiler:jar:5.5.23:compile
> [INFO] |  +- tomcat:jasper-runtime:jar:5.5.23:runtime
> [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:runtime
> [INFO] |  +- commons-el:commons-el:jar:1.0:runtime
> [INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
> [INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:compile
> [INFO] |  |  +- org.apache.httpcomponents:httpclient:jar:4.1.2:compile
> [INFO] |  |  +- org.apache.httpcomponents:httpcore:jar:4.1.2:compile
> [INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
> [INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
> [INFO] |  +- commons-configuration:commons-configuration:jar:1.6:compile
> [INFO] |  |  +- commons-digester:commons-digester:jar:1.8:compile
> [INFO] |  |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
> [INFO] |  |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> [INFO] |  +- org.slf4j:slf4j-api:jar:1.7.5:compile
> [INFO] |  +- org.slf4j:slf4j-log4j12:jar:1.7.5:compile
> [INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] |  +- org.apache.avro:avro:jar:1.7.4:compile
> [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
> [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.4.1:compile
> [INFO] |  +- 

Re: Coprocessor exception in RegionServer log

2016-10-10 Thread Ted Yu
Could be related to incompatible protobuf versions.

What's the output of:
mvn dependency:tree

Please pastebin it - it should be fairly long.

On Mon, Oct 10, 2016 at 6:29 PM, big data  wrote:

> Dear all,
>
> I've created an Endpoint coprocessor, and deployed it through hbase shell.
>
> Then when I run the client code, at the line
>
> response = rpcCallback.get();
>
> rpcCallback.get() returns null.
>
> In the region server log, I found this exception:
>
> 2016-10-10 15:50:07,636 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost:
> Failed to load coprocessor AndEndPoint
>
> java.lang.LinkageError: loader constraint violation in interface itable
> initialization: when resolving method
> "AndEndPoint.getService()Lcom/google/protobuf/Service;" the class loader
> (instance of
> org/apache/hadoop/hbase/util/CoprocessorClassLoader) of the current
> class, AndEndPoint, and the class loader (instance of
> sun/misc/Launcher$AppClassLoader) for interface 
> org/apache/hadoop/hbase/coprocessor/CoprocessorService
> have different Class objects for
> the type otobuf/Service; used in the signature
>
> Any suggestions on how to solve this problem?
>
> thanks.
>
>
> My pom is like this:
>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-client</artifactId>
>   <version>2.6.0</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-common</artifactId>
>   <version>2.6.0</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.hbase</groupId>
>   <artifactId>hbase-server</artifactId>
>   <version>1.0.1</version>
> </dependency>
>
>
>


Coprocessor exception in RegionServer log

2016-10-10 Thread big data
Dear all,

I've created an Endpoint coprocessor, and deployed it through hbase shell.

Then when I run the client code, at the line

response = rpcCallback.get();

rpcCallback.get() returns null.

In the region server log, I found this exception:

2016-10-10 15:50:07,636 ERROR 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: Failed to load 
coprocessor AndEndPoint

java.lang.LinkageError: loader constraint violation in interface itable 
initialization: when resolving method
"AndEndPoint.getService()Lcom/google/protobuf/Service;" the class loader 
(instance of
org/apache/hadoop/hbase/util/CoprocessorClassLoader) of the current class, 
AndEndPoint, and the class loader (instance of
sun/misc/Launcher$AppClassLoader) for interface 
org/apache/hadoop/hbase/coprocessor/CoprocessorService have different Class 
objects for
the type otobuf/Service; used in the signature

Any suggestions on how to solve this problem?

thanks.


My pom is like this:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>1.0.1</version>
</dependency>
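
For context on rpcCallback.get() above: the usual HBase 1.x client-side invocation of such an endpoint looks roughly like the sketch below (the table name, row key and AndProtos classes are hypothetical placeholders). When the endpoint fails to load on the RegionServer, as in the log above, get() can come back null and the underlying cause usually only shows up on the ServerRpcController.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
import org.apache.hadoop.hbase.ipc.ServerRpcController;
import org.apache.hadoop.hbase.util.Bytes;

public class AndEndPointClient {
    public static void main(String[] args) throws Exception {
        // "demo" and "row1" are hypothetical; AndProtos.* are the generated
        // protobuf classes assumed in this sketch.
        try (Connection conn = ConnectionFactory.createConnection();
             Table table = conn.getTable(TableName.valueOf("demo"))) {
            CoprocessorRpcChannel channel = table.coprocessorService(Bytes.toBytes("row1"));
            AndProtos.AndService service = AndProtos.AndService.newStub(channel);
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<AndProtos.AndResponse> rpcCallback = new BlockingRpcCallback<>();
            service.and(controller, AndProtos.AndRequest.getDefaultInstance(), rpcCallback);
            AndProtos.AndResponse response = rpcCallback.get();
            // If the endpoint failed to load on the server, the real cause is on the controller.
            if (controller.failedOnException()) {
                throw controller.getFailedOn();
            }
            System.out.println("response = " + response);
        }
    }
}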





Re: reading Hbase table in Spark

2016-10-10 Thread Mich Talebzadeh
I have already done it with Hive and Phoenix, thanks.

Dr Mich Talebzadeh



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 10 October 2016 at 22:58, Ted Yu  wrote:

> In that case I suggest polling user@hive to see if someone has done this.
>
> Thanks
>
> On Mon, Oct 10, 2016 at 2:56 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com>
> wrote:
>
> > Thanks. I am on Spark 2, so that may not be feasible.
> >
> > As a matter of interest, how about using Hive on top of the HBase table?
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn * https://www.linkedin.com/profile/view?id=
> > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >  > OABUrV8Pw>*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> > On 10 October 2016 at 22:49, Ted Yu  wrote:
> >
> > > In hbase master branch, there is hbase-spark module which would allow
> you
> > > to integrate with Spark seamlessly.
> > >
> > > Note: support for Spark 2.0 is pending. For details, see HBASE-16179
> > >
> > > Cheers
> > >
> > > On Mon, Oct 10, 2016 at 2:46 PM, Mich Talebzadeh <
> > > mich.talebza...@gmail.com>
> > > wrote:
> > >
> > > > Thanks Ted,
> > > >
> > > > So basically involves Java programming much like JDBC connection
> > > retrieval
> > > > etc.
> > > >
> > > > Writing to Hbase is pretty fast. Now I have both views in Phoenix and
> > > Hive
> > > > on the underlying Hbase tables.
> > > >
> > > > I am looking for flexibility here so I get I should use Spark on Hive
> > > > tables with a view on Hbase table.
> > > >
> > > > Also I like tools like Zeppelin that work with both SQL and Spark
> > > > Functional programming.
> > > >
> > > > Sounds like reading data from Hbase table is best done through some
> > form
> > > of
> > > > SQL.
> > > >
> > > > What are view on this approach?
> > > >
> > > >
> > > >
> > > > Dr Mich Talebzadeh
> > > >
> > > >
> > > >
> > > > LinkedIn * https://www.linkedin.com/profile/view?id=
> > > > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > > >  > AAEWh2gBxianrbJd6zP6AcPCCd
> > > > OABUrV8Pw>*
> > > >
> > > >
> > > >
> > > > http://talebzadehmich.wordpress.com
> > > >
> > > >
> > > > *Disclaimer:* Use it at your own risk. Any and all responsibility for
> > any
> > > > loss, damage or destruction of data or any other property which may
> > arise
> > > > from relying on this email's technical content is explicitly
> > disclaimed.
> > > > The author will in no case be liable for any monetary damages arising
> > > from
> > > > such loss, damage or destruction.
> > > >
> > > >
> > > >
> > > > On 10 October 2016 at 22:13, Ted Yu  wrote:
> > > >
> > > > > For org.apache.hadoop.hbase.client.Result, there is this method:
> > > > >
> > > > >   public byte[] getValue(byte [] family, byte [] qualifier) {
> > > > >
> > > > > which allows you to retrieve value for designated column.
> > > > >
> > > > >
> > > > > FYI
> > > > >
> > > > > On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh <
> > > > > mich.talebza...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I am trying to do some operation on an Hbase table that is being
> > > > > populated
> > > > > > by Spark Streaming.
> > > > > >
> > > > > > Now this is just Spark on Hbase as opposed to Spark on Hive ->
> view
> > > on
> > > > > > Hbase etc. I also have Phoenix view on this Hbase table.
> > > > > >
> > > > > > This is sample code
> > > > > >
> > > > > > scala> val tableName = "marketDataHbase"
> > > > > > > val conf = HBaseConfiguration.create()
> > > > > > conf: org.apache.hadoop.conf.Configuration = Configuration:
> > > > > > core-default.xml, core-site.xml, mapred-default.xml,
> > mapred-site.xml,
> > > > > > yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> > > > > > hbase-default.xml, hbase-site.xml
> > > > > > scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> > > > > > scala> //create rdd
> > > > > > scala>
> > > > > > *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> > > > > > 

Re: reading Hbase table in Spark

2016-10-10 Thread Ted Yu
In that case I suggest polling user@hive to see if someone has done this.

Thanks

On Mon, Oct 10, 2016 at 2:56 PM, Mich Talebzadeh 
wrote:

> Thanks. I am on Spark 2, so that may not be feasible.
>
> As a matter of interest, how about using Hive on top of the HBase table?
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=
> AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  OABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 10 October 2016 at 22:49, Ted Yu  wrote:
>
> > In hbase master branch, there is hbase-spark module which would allow you
> > to integrate with Spark seamlessly.
> >
> > Note: support for Spark 2.0 is pending. For details, see HBASE-16179
> >
> > Cheers
> >
> > On Mon, Oct 10, 2016 at 2:46 PM, Mich Talebzadeh <
> > mich.talebza...@gmail.com>
> > wrote:
> >
> > > Thanks Ted,
> > >
> > > So basically involves Java programming much like JDBC connection
> > retrieval
> > > etc.
> > >
> > > Writing to Hbase is pretty fast. Now I have both views in Phoenix and
> > Hive
> > > on the underlying Hbase tables.
> > >
> > > I am looking for flexibility here so I get I should use Spark on Hive
> > > tables with a view on Hbase table.
> > >
> > > Also I like tools like Zeppelin that work with both SQL and Spark
> > > Functional programming.
> > >
> > > Sounds like reading data from Hbase table is best done through some
> form
> > of
> > > SQL.
> > >
> > > What are view on this approach?
> > >
> > >
> > >
> > > Dr Mich Talebzadeh
> > >
> > >
> > >
> > > LinkedIn * https://www.linkedin.com/profile/view?id=
> > > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > >  AAEWh2gBxianrbJd6zP6AcPCCd
> > > OABUrV8Pw>*
> > >
> > >
> > >
> > > http://talebzadehmich.wordpress.com
> > >
> > >
> > > *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any
> > > loss, damage or destruction of data or any other property which may
> arise
> > > from relying on this email's technical content is explicitly
> disclaimed.
> > > The author will in no case be liable for any monetary damages arising
> > from
> > > such loss, damage or destruction.
> > >
> > >
> > >
> > > On 10 October 2016 at 22:13, Ted Yu  wrote:
> > >
> > > > For org.apache.hadoop.hbase.client.Result, there is this method:
> > > >
> > > >   public byte[] getValue(byte [] family, byte [] qualifier) {
> > > >
> > > > which allows you to retrieve value for designated column.
> > > >
> > > >
> > > > FYI
> > > >
> > > > On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh <
> > > > mich.talebza...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I am trying to do some operation on an Hbase table that is being
> > > > populated
> > > > > by Spark Streaming.
> > > > >
> > > > > Now this is just Spark on Hbase as opposed to Spark on Hive -> view
> > on
> > > > > Hbase etc. I also have Phoenix view on this Hbase table.
> > > > >
> > > > > This is sample code
> > > > >
> > > > > scala> val tableName = "marketDataHbase"
> > > > > > val conf = HBaseConfiguration.create()
> > > > > conf: org.apache.hadoop.conf.Configuration = Configuration:
> > > > > core-default.xml, core-site.xml, mapred-default.xml,
> mapred-site.xml,
> > > > > yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> > > > > hbase-default.xml, hbase-site.xml
> > > > > scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> > > > > scala> //create rdd
> > > > > scala>
> > > > > *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> > > > > classOf[TableInputFormat],classOf[org.apache.hadoop.hbase.io
> > > > > .ImmutableBytesWritable],
> classOf[org.apache.hadoop.
> > > > > hbase.client.Result])*hBaseRDD:
> > > > > org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.
> > > > > ImmutableBytesWritable,
> > > > > org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
> > > > > newAPIHadoopRDD at :64
> > > > > scala> hBaseRDD.count
> > > > > res11: Long = 22272
> > > > >
> > > > > scala> // transform (ImmutableBytesWritable, Result) tuples
> into
> > an
> > > > RDD
> > > > > of Result's
> > > > > scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
> > > > > resultRDD: org.apache.spark.rdd.RDD[org.
> apache.hadoop.hbase.client.
> > > > Result]
> > > > > = MapPartitionsRDD[8] at map at :41
> > > > >
> > > > > scala>  // transform into an RDD of (RowKey, ColumnValue)s  the
> > RowKey
> > > > has
> > > > > the time removed
> > > > >
> > > > > scala> val keyValueRDD = 

Re: reading Hbase table in Spark

2016-10-10 Thread Mich Talebzadeh
Thanks. I am on Spark 2, so that may not be feasible.

As a matter of interest, how about using Hive on top of the HBase table?

Dr Mich Talebzadeh



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 10 October 2016 at 22:49, Ted Yu  wrote:

> In hbase master branch, there is hbase-spark module which would allow you
> to integrate with Spark seamlessly.
>
> Note: support for Spark 2.0 is pending. For details, see HBASE-16179
>
> Cheers
>
> On Mon, Oct 10, 2016 at 2:46 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com>
> wrote:
>
> > Thanks Ted,
> >
> > So basically involves Java programming much like JDBC connection
> retrieval
> > etc.
> >
> > Writing to Hbase is pretty fast. Now I have both views in Phoenix and
> Hive
> > on the underlying Hbase tables.
> >
> > I am looking for flexibility here so I get I should use Spark on Hive
> > tables with a view on Hbase table.
> >
> > Also I like tools like Zeppelin that work with both SQL and Spark
> > Functional programming.
> >
> > Sounds like reading data from Hbase table is best done through some form
> of
> > SQL.
> >
> > What are view on this approach?
> >
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn * https://www.linkedin.com/profile/view?id=
> > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >  > OABUrV8Pw>*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> > On 10 October 2016 at 22:13, Ted Yu  wrote:
> >
> > > For org.apache.hadoop.hbase.client.Result, there is this method:
> > >
> > >   public byte[] getValue(byte [] family, byte [] qualifier) {
> > >
> > > which allows you to retrieve value for designated column.
> > >
> > >
> > > FYI
> > >
> > > On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh <
> > > mich.talebza...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I am trying to do some operation on an Hbase table that is being
> > > populated
> > > > by Spark Streaming.
> > > >
> > > > Now this is just Spark on Hbase as opposed to Spark on Hive -> view
> on
> > > > Hbase etc. I also have Phoenix view on this Hbase table.
> > > >
> > > > This is sample code
> > > >
> > > > scala> val tableName = "marketDataHbase"
> > > > > val conf = HBaseConfiguration.create()
> > > > conf: org.apache.hadoop.conf.Configuration = Configuration:
> > > > core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
> > > > yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> > > > hbase-default.xml, hbase-site.xml
> > > > scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> > > > scala> //create rdd
> > > > scala>
> > > > *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> > > > classOf[TableInputFormat],classOf[org.apache.hadoop.hbase.io
> > > > .ImmutableBytesWritable],classOf[org.apache.hadoop.
> > > > hbase.client.Result])*hBaseRDD:
> > > > org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.
> > > > ImmutableBytesWritable,
> > > > org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
> > > > newAPIHadoopRDD at :64
> > > > scala> hBaseRDD.count
> > > > res11: Long = 22272
> > > >
> > > > scala> // transform (ImmutableBytesWritable, Result) tuples into
> an
> > > RDD
> > > > of Result's
> > > > scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
> > > > resultRDD: org.apache.spark.rdd.RDD[org.apache.hadoop.hbase.client.
> > > Result]
> > > > = MapPartitionsRDD[8] at map at :41
> > > >
> > > > scala>  // transform into an RDD of (RowKey, ColumnValue)s  the
> RowKey
> > > has
> > > > the time removed
> > > >
> > > > scala> val keyValueRDD = resultRDD.map(result =>
> > > > (Bytes.toString(result.getRow()).split(" ")(0),
> > > > Bytes.toString(result.value)))
> > > > keyValueRDD: org.apache.spark.rdd.RDD[(String, String)] =
> > > > MapPartitionsRDD[9] at map at :43
> > > >
> > > > scala> keyValueRDD.take(2).foreach(kv => println(kv))
> > > > (55e2-63f1-4def-b625-e73f0ac36271,43.89760813529593664528)
> > > > (000151e9-ff27-493d-a5ca-288507d92f95,57.68882040742382868990)
> > > >
> > > > OK above I am only getting 

Re: reading Hbase table in Spark

2016-10-10 Thread Ted Yu
In hbase master branch, there is hbase-spark module which would allow you
to integrate with Spark seamlessly.

Note: support for Spark 2.0 is pending. For details, see HBASE-16179

Cheers

On Mon, Oct 10, 2016 at 2:46 PM, Mich Talebzadeh 
wrote:

> Thanks Ted,
>
> So basically involves Java programming much like JDBC connection retrieval
> etc.
>
> Writing to Hbase is pretty fast. Now I have both views in Phoenix and Hive
> on the underlying Hbase tables.
>
> I am looking for flexibility here so I get I should use Spark on Hive
> tables with a view on Hbase table.
>
> Also I like tools like Zeppelin that work with both SQL and Spark
> Functional programming.
>
> Sounds like reading data from Hbase table is best done through some form of
> SQL.
>
> What are view on this approach?
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=
> AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  OABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 10 October 2016 at 22:13, Ted Yu  wrote:
>
> > For org.apache.hadoop.hbase.client.Result, there is this method:
> >
> >   public byte[] getValue(byte [] family, byte [] qualifier) {
> >
> > which allows you to retrieve value for designated column.
> >
> >
> > FYI
> >
> > On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh <
> > mich.talebza...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am trying to do some operation on an Hbase table that is being
> > populated
> > > by Spark Streaming.
> > >
> > > Now this is just Spark on Hbase as opposed to Spark on Hive -> view on
> > > Hbase etc. I also have Phoenix view on this Hbase table.
> > >
> > > This is sample code
> > >
> > > scala> val tableName = "marketDataHbase"
> > > > val conf = HBaseConfiguration.create()
> > > conf: org.apache.hadoop.conf.Configuration = Configuration:
> > > core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
> > > yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> > > hbase-default.xml, hbase-site.xml
> > > scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> > > scala> //create rdd
> > > scala>
> > > *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> > > classOf[TableInputFormat],classOf[org.apache.hadoop.hbase.io
> > > .ImmutableBytesWritable],classOf[org.apache.hadoop.
> > > hbase.client.Result])*hBaseRDD:
> > > org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.
> > > ImmutableBytesWritable,
> > > org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
> > > newAPIHadoopRDD at :64
> > > scala> hBaseRDD.count
> > > res11: Long = 22272
> > >
> > > scala> // transform (ImmutableBytesWritable, Result) tuples into an
> > RDD
> > > of Result's
> > > scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
> > > resultRDD: org.apache.spark.rdd.RDD[org.apache.hadoop.hbase.client.
> > Result]
> > > = MapPartitionsRDD[8] at map at :41
> > >
> > > scala>  // transform into an RDD of (RowKey, ColumnValue)s  the RowKey
> > has
> > > the time removed
> > >
> > > scala> val keyValueRDD = resultRDD.map(result =>
> > > (Bytes.toString(result.getRow()).split(" ")(0),
> > > Bytes.toString(result.value)))
> > > keyValueRDD: org.apache.spark.rdd.RDD[(String, String)] =
> > > MapPartitionsRDD[9] at map at :43
> > >
> > > scala> keyValueRDD.take(2).foreach(kv => println(kv))
> > > (55e2-63f1-4def-b625-e73f0ac36271,43.89760813529593664528)
> > > (000151e9-ff27-493d-a5ca-288507d92f95,57.68882040742382868990)
> > >
> > > OK above I am only getting the rowkey (UUID above) and the last
> > > attribute (price).
> > > However, I have the rowkey and 3 more columns there in Hbase table!
> > >
> > > scan 'marketDataHbase', "LIMIT" => 1
> > > ROW   COLUMN+CELL
> > >  55e2-63f1-4def-b625-e73f0ac36271
> > > column=price_info:price, timestamp=1476133232864,
> > > value=43.89760813529593664528
> > >  55e2-63f1-4def-b625-e73f0ac36271
> > > column=price_info:ticker, timestamp=1476133232864, value=S08
> > >  55e2-63f1-4def-b625-e73f0ac36271
> > > column=price_info:timecreated, timestamp=1476133232864,
> > > value=2016-10-10T17:12:22
> > > 1 row(s) in 0.0100 seconds
> > > So how can I get the other columns?
> > >
> > > Thanks
> > >
> > >
> > > Dr Mich Talebzadeh
> > >
> > >
> > >
> > > LinkedIn * https://www.linkedin.com/profile/view?id=
> > > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > >  AAEWh2gBxianrbJd6zP6AcPCCd
> > > OABUrV8Pw>*
> > >
> > >
> > >

Re: reading Hbase table in Spark

2016-10-10 Thread Mich Talebzadeh
Thanks Ted,

So basically it involves Java programming, much like JDBC connection retrieval
etc.

Writing to HBase is pretty fast. Now I have views in both Phoenix and Hive
on the underlying HBase tables.

I am looking for flexibility here, so I gather I should use Spark on Hive
tables with a view on the HBase table.

Also, I like tools like Zeppelin that work with both SQL and Spark
functional programming.

It sounds like reading data from an HBase table is best done through some form of
SQL.

What are your views on this approach?



Dr Mich Talebzadeh



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 10 October 2016 at 22:13, Ted Yu  wrote:

> For org.apache.hadoop.hbase.client.Result, there is this method:
>
>   public byte[] getValue(byte [] family, byte [] qualifier) {
>
> which allows you to retrieve value for designated column.
>
>
> FYI
>
> On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am trying to do some operation on an Hbase table that is being
> populated
> > by Spark Streaming.
> >
> > Now this is just Spark on Hbase as opposed to Spark on Hive -> view on
> > Hbase etc. I also have Phoenix view on this Hbase table.
> >
> > This is sample code
> >
> > scala> val tableName = "marketDataHbase"
> > > val conf = HBaseConfiguration.create()
> > conf: org.apache.hadoop.conf.Configuration = Configuration:
> > core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
> > yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> > hbase-default.xml, hbase-site.xml
> > scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> > scala> //create rdd
> > scala>
> > *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> > classOf[TableInputFormat],classOf[org.apache.hadoop.hbase.io
> > .ImmutableBytesWritable],classOf[org.apache.hadoop.
> > hbase.client.Result])*hBaseRDD:
> > org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.
> > ImmutableBytesWritable,
> > org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
> > newAPIHadoopRDD at :64
> > scala> hBaseRDD.count
> > res11: Long = 22272
> >
> > scala> // transform (ImmutableBytesWritable, Result) tuples into an
> RDD
> > of Result's
> > scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
> > resultRDD: org.apache.spark.rdd.RDD[org.apache.hadoop.hbase.client.
> Result]
> > = MapPartitionsRDD[8] at map at :41
> >
> > scala>  // transform into an RDD of (RowKey, ColumnValue)s  the RowKey
> has
> > the time removed
> >
> > scala> val keyValueRDD = resultRDD.map(result =>
> > (Bytes.toString(result.getRow()).split(" ")(0),
> > Bytes.toString(result.value)))
> > keyValueRDD: org.apache.spark.rdd.RDD[(String, String)] =
> > MapPartitionsRDD[9] at map at :43
> >
> > scala> keyValueRDD.take(2).foreach(kv => println(kv))
> > (55e2-63f1-4def-b625-e73f0ac36271,43.89760813529593664528)
> > (000151e9-ff27-493d-a5ca-288507d92f95,57.68882040742382868990)
> >
> > OK above I am only getting the rowkey (UUID above) and the last
> > attribute (price).
> > However, I have the rowkey and 3 more columns there in Hbase table!
> >
> > scan 'marketDataHbase', "LIMIT" => 1
> > ROW   COLUMN+CELL
> >  55e2-63f1-4def-b625-e73f0ac36271
> > column=price_info:price, timestamp=1476133232864,
> > value=43.89760813529593664528
> >  55e2-63f1-4def-b625-e73f0ac36271
> > column=price_info:ticker, timestamp=1476133232864, value=S08
> >  55e2-63f1-4def-b625-e73f0ac36271
> > column=price_info:timecreated, timestamp=1476133232864,
> > value=2016-10-10T17:12:22
> > 1 row(s) in 0.0100 seconds
> > So how can I get the other columns?
> >
> > Thanks
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn * https://www.linkedin.com/profile/view?id=
> > AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >  > OABUrV8Pw>*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
>


Re: reading Hbase table in Spark

2016-10-10 Thread Ted Yu
For org.apache.hadoop.hbase.client.Result, there is this method:

  public byte[] getValue(byte [] family, byte [] qualifier) {

which allows you to retrieve value for designated column.


FYI
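
Concretely, with the column family and qualifiers that appear in the scan output quoted below (price_info:price, price_info:ticker, price_info:timecreated), a small sketch of pulling all three columns plus the rowkey out of each Result looks like this (written in Java for illustration; the same getValue calls can be used unchanged from the Scala shell code in the thread):

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class PriceInfoColumns {
    // family and qualifiers taken from the scan output quoted below
    private static final byte[] CF = Bytes.toBytes("price_info");
    private static final byte[] PRICE = Bytes.toBytes("price");
    private static final byte[] TICKER = Bytes.toBytes("ticker");
    private static final byte[] TIMECREATED = Bytes.toBytes("timecreated");

    /** Extracts (rowkey, ticker, timecreated, price) from one scanned row. */
    public static String[] extract(Result result) {
        // getValue returns null if the cell is absent; Bytes.toString passes null through
        return new String[] {
            Bytes.toString(result.getRow()),
            Bytes.toString(result.getValue(CF, TICKER)),
            Bytes.toString(result.getValue(CF, TIMECREATED)),
            Bytes.toString(result.getValue(CF, PRICE))
        };
    }
}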

On Mon, Oct 10, 2016 at 2:08 PM, Mich Talebzadeh 
wrote:

> Hi,
>
> I am trying to do some operation on an Hbase table that is being populated
> by Spark Streaming.
>
> Now this is just Spark on Hbase as opposed to Spark on Hive -> view on
> Hbase etc. I also have Phoenix view on this Hbase table.
>
> This is sample code
>
> scala> val tableName = "marketDataHbase"
> > val conf = HBaseConfiguration.create()
> conf: org.apache.hadoop.conf.Configuration = Configuration:
> core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
> yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
> hbase-default.xml, hbase-site.xml
> scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
> scala> //create rdd
> scala>
> *val hBaseRDD = sc.newAPIHadoopRDD(conf,
> classOf[TableInputFormat],classOf[org.apache.hadoop.hbase.io
> .ImmutableBytesWritable],classOf[org.apache.hadoop.
> hbase.client.Result])*hBaseRDD:
> org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.
> ImmutableBytesWritable,
> org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
> newAPIHadoopRDD at :64
> scala> hBaseRDD.count
> res11: Long = 22272
>
> scala> // transform (ImmutableBytesWritable, Result) tuples into an RDD
> of Result's
> scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
> resultRDD: org.apache.spark.rdd.RDD[org.apache.hadoop.hbase.client.Result]
> = MapPartitionsRDD[8] at map at :41
>
> scala>  // transform into an RDD of (RowKey, ColumnValue)s  the RowKey has
> the time removed
>
> scala> val keyValueRDD = resultRDD.map(result =>
> (Bytes.toString(result.getRow()).split(" ")(0),
> Bytes.toString(result.value)))
> keyValueRDD: org.apache.spark.rdd.RDD[(String, String)] =
> MapPartitionsRDD[9] at map at :43
>
> scala> keyValueRDD.take(2).foreach(kv => println(kv))
> (55e2-63f1-4def-b625-e73f0ac36271,43.89760813529593664528)
> (000151e9-ff27-493d-a5ca-288507d92f95,57.68882040742382868990)
>
> OK above I am only getting the rowkey (UUID above) and the last
> attribute (price).
> However, I have the rowkey and 3 more columns there in Hbase table!
>
> scan 'marketDataHbase', "LIMIT" => 1
> ROW   COLUMN+CELL
>  55e2-63f1-4def-b625-e73f0ac36271
> column=price_info:price, timestamp=1476133232864,
> value=43.89760813529593664528
>  55e2-63f1-4def-b625-e73f0ac36271
> column=price_info:ticker, timestamp=1476133232864, value=S08
>  55e2-63f1-4def-b625-e73f0ac36271
> column=price_info:timecreated, timestamp=1476133232864,
> value=2016-10-10T17:12:22
> 1 row(s) in 0.0100 seconds
> So how can I get the other columns?
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=
> AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  OABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>


reading Hbase table in Spark

2016-10-10 Thread Mich Talebzadeh
Hi,

I am trying to do some operations on an HBase table that is being populated
by Spark Streaming.

Now this is just Spark on HBase, as opposed to Spark on Hive -> view on
HBase etc. I also have a Phoenix view on this HBase table.

This is the sample code:

scala> val tableName = "marketDataHbase"
> val conf = HBaseConfiguration.create()
conf: org.apache.hadoop.conf.Configuration = Configuration:
core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml,
hbase-default.xml, hbase-site.xml
scala> conf.set(TableInputFormat.INPUT_TABLE, tableName)
scala> //create rdd
scala> val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
classOf[org.apache.hadoop.hbase.client.Result])
hBaseRDD:
org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.ImmutableBytesWritable,
org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[4] at
newAPIHadoopRDD at :64
scala> hBaseRDD.count
res11: Long = 22272

scala> // transform (ImmutableBytesWritable, Result) tuples into an RDD
of Result's
scala> val resultRDD = hBaseRDD.map(tuple => tuple._2)
resultRDD: org.apache.spark.rdd.RDD[org.apache.hadoop.hbase.client.Result]
= MapPartitionsRDD[8] at map at :41

scala>  // transform into an RDD of (RowKey, ColumnValue)s  the RowKey has
the time removed

scala> val keyValueRDD = resultRDD.map(result =>
(Bytes.toString(result.getRow()).split(" ")(0),
Bytes.toString(result.value)))
keyValueRDD: org.apache.spark.rdd.RDD[(String, String)] =
MapPartitionsRDD[9] at map at :43

scala> keyValueRDD.take(2).foreach(kv => println(kv))
(55e2-63f1-4def-b625-e73f0ac36271,43.89760813529593664528)
(000151e9-ff27-493d-a5ca-288507d92f95,57.68882040742382868990)

OK, above I am only getting the rowkey (the UUID) and the last
attribute (price).
However, I have the rowkey and 3 more columns in the HBase table!

scan 'marketDataHbase', "LIMIT" => 1
ROW   COLUMN+CELL
 55e2-63f1-4def-b625-e73f0ac36271
column=price_info:price, timestamp=1476133232864,
value=43.89760813529593664528
 55e2-63f1-4def-b625-e73f0ac36271
column=price_info:ticker, timestamp=1476133232864, value=S08
 55e2-63f1-4def-b625-e73f0ac36271
column=price_info:timecreated, timestamp=1476133232864,
value=2016-10-10T17:12:22
1 row(s) in 0.0100 seconds
So how can I get the other columns?

Thanks


Dr Mich Talebzadeh



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.


Re: Scan Performance Decreases Over Time

2016-10-10 Thread Ted Yu
Have you taken jstack for the slow scans ?

If so, can you pastebin the stack trace ?

1.0.0 is quite old. 

Any chance of upgrading to 1.2 release ?

Cheers

> On Oct 10, 2016, at 2:04 AM, 陆巍  wrote:
> 
> Hi All,
> 
> I met with a problem where the scan performance decreases over time.
> HBase connections are kept in a data access service (in Tomcat), and there
> are table scan operations. The average cost per scan batch (~10 parallel
> scans) increases as below:
> 
> day   avg. cost (ms)
> 1     56.213115
> 2     43.697054
> 3     36.925063
> 4     50.683257
> 5     62.749022
> 6     84.943314
> 7     92.237783
> 8     94.452549
> 9     103.853937
> 10    114.725657
> 11    128.601287
> 
> The time cost for each remote scan batch is now over 100ms.
> To make sure the HBase cluster was fine, I started another identical data
> access service and found the time cost there is around 30 ms. So I think the
> HBase cluster is fine and the problem is in the data access service, which is
> the HBase client.
> 
> I do believe some resource is not being released, but I really have no
> idea where it is.
> I am using HBase 1.0.0-cdh5.5.1.
> 
> Here is the code:
> // the connection is created as a static instance:
> connection=ConnectionFactory.createConnection(getConfiguration())
> // scan logic for each remote call
> try {
>  Table table = connection.getTable(tableName);
>  rs = table.getScanner(scan);
> } finally {
>  rs.close();
>  table.close();
> }
> 
> 
> 
> Thanks,
> Wei


Scan Performance Decreases Over Time

2016-10-10 Thread 陆巍
Hi All,

I met with a problem where the scan performance decreases over time.
HBase connections are kept in a data access service (in Tomcat), and there are 
table scan operations. The average cost per scan batch (~10 parallel scans) 
increases as below:

day   avg. cost (ms)
1     56.213115
2     43.697054
3     36.925063
4     50.683257
5     62.749022
6     84.943314
7     92.237783
8     94.452549
9     103.853937
10    114.725657
11    128.601287

The time cost for each remote scan batch is now over 100ms.
To make sure the HBase cluster was fine, I started another identical data access 
service and found the time cost there is around 30 ms. So I think the HBase cluster 
is fine and the problem is in the data access service, which is the HBase client.

I do believe some resource is not being released, but I really have no idea 
where it is.
I am using HBase 1.0.0-cdh5.5.1.

Here is the code:
// the connection is created as a static instance:
connection=ConnectionFactory.createConnection(getConfiguration())
// scan logic for each remote call
try {
  Table table = connection.getTable(tableName);
  rs = table.getScanner(scan);
} finally {
  rs.close();
  table.close();
}



Thanks,
Wei
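
A closing note on the snippet above: as pasted, table and rs are used in the finally block but declared inside try, and the scanner is never iterated, so the real code presumably differs a little. One pattern that guarantees both the ResultScanner and the Table are released on every remote call, which is exactly the kind of leak suspected above, is try-with-resources. A sketch against the 1.0 client API, where Table and ResultScanner are Closeable, assuming the same shared Connection as in the original code:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public final class ScanHelper {
    /** Runs one scan, guaranteeing the Table and ResultScanner are closed. */
    public static void scanOnce(Connection connection, TableName tableName, Scan scan)
            throws IOException {
        // try-with-resources closes both handles even when an exception is thrown;
        // the shared, static Connection stays open, as in the original code.
        try (Table table = connection.getTable(tableName);
             ResultScanner rs = table.getScanner(scan)) {
            for (Result result : rs) {
                // process each row here
            }
        }
    }
}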