Re: Cloudera parcel update

2017-10-25 Thread Flavio Pompermaier
I'll take care of it ;)

On 25 Oct 2017 23:45, "Sergey Soldatov"  wrote:

> Hi Flavio,
>
> It looks like you need to ask the vendor, not the community, about their
> plans for further releases.
>
> Thanks,
> Sergey
>
> On Wed, Oct 25, 2017 at 2:21 PM, Flavio Pompermaier 
> wrote:
>
>> Hi to all,
>> the latest Phoenix Cloudera parcel I can see is 4.7... any plan to
>> release a newer version?
>>
>> I'd need at least Phoenix 4.9... anyone using it?
>>
>> Best,
>> Flavio
>>
>
>


Re: Cloudera parcel update

2017-10-25 Thread Sergey Soldatov
Hi Flavio,

It looks like you need to ask the vendor, not the community, about their
plans for further releases.

Thanks,
Sergey

On Wed, Oct 25, 2017 at 2:21 PM, Flavio Pompermaier 
wrote:

> Hi to all,
> the latest Phoenix Cloudera parcel I can see is 4.7... any plan to release
> a newer version?
>
> I'd need at least Phoenix 4.9... anyone using it?
>
> Best,
> Flavio
>


Re: Not able to connect to Phoenix Queryserver from Spark

2017-10-25 Thread Josh Elser

I don't know why running it inside of Spark would cause issues.

I would double-check the classpath of your application when running in 
Spark, as well as look at the PQS log (HTTP/500 is a server-side error).
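
A minimal diagnostic sketch (not from the original mail; both class names
do appear elsewhere in this thread) that could be pasted into spark-shell
to see which jars the thin-client driver and its Avatica dependency are
loaded from, since duplicate or mismatched jars on the Spark classpath are
one plausible way to end up with a server-side 500:

// Print the jar each driver class resolves from on the Spark classpath.
// (getCodeSource is null only for JDK bootstrap classes, not these.)
println(Class.forName("org.apache.phoenix.queryserver.client.Driver")
  .getProtectionDomain.getCodeSource.getLocation)
println(Class.forName("org.apache.calcite.avatica.remote.Driver")
  .getProtectionDomain.getCodeSource.getLocation)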


On 10/25/17 6:39 AM, cmbendre wrote:

I am trying to connect to the Phoenix Query Server from Spark. The following
Scala code works perfectly fine when I run it without Spark.

import java.sql.{Connection, DriverManager, PreparedStatement, ResultSet, Statement}

Class.forName("org.apache.phoenix.queryserver.client.Driver")
val connection =
  DriverManager.getConnection("jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF")
val statement = connection.createStatement()

But the same code fails with the following exception in spark-shell /
spark-submit:

java.lang.RuntimeException: response code 500
   at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
   at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:235)
   at org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:97)
   at org.apache.calcite.avatica.remote.RemoteMeta.createStatement(RemoteMeta.java:65)
   at org.apache.calcite.avatica.AvaticaStatement.<init>(AvaticaStatement.java:83)
   at org.apache.calcite.avatica.AvaticaJdbc41Factory$AvaticaJdbc41Statement.<init>(AvaticaJdbc41Factory.java:114)
   at org.apache.calcite.avatica.AvaticaJdbc41Factory.newStatement(AvaticaJdbc41Factory.java:73)
   at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:263)
   at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:110)
   at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:51)
   ... 48 elided

I am using Spark 2.1.0 along with Phoenix 4.11 and HBase 1.3.

I could not find any similar error on the internet. Please help.





--
Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/



Re: Phoenix 4.12 error on HDP 2.6

2017-10-25 Thread Ted Yu
Since you're deploying onto a vendor's platform, I suggest asking this
question on the vendor's forum.

Cheers

On Wed, Oct 25, 2017 at 3:59 AM, Sumanta Gh  wrote:

> Hi,
> I am trying to install phoenix-4.12.0 (HBase-1.1) on HDP 2.6.2.0. As per the
> installation guide, I have copied the phoenix-4.12.0-HBase-1.1-server.jar
> into the HBase lib directory. After restarting HBase using Ambari and
> connecting through sqlline, I can see that the Phoenix system tables are
> getting created. I used the HBase shell to check them.
>
> When I try to create a table, the region servers stop with the following
> error. Could anyone please advise on what is wrong here?
>
> thanks
> sumanta
>
>
>
> DDL:
>
> CREATE TABLE V5.USER (
> ADMIN BOOLEAN,
> KEYA VARCHAR,
> KEYB VARCHAR,
> ID INTEGER,
> USERNAME VARCHAR,
> CONSTRAINT PK PRIMARY KEY (KEYA)) COLUMN_ENCODED_BYTES=0;
>
>
> Region Server Error:
>
> 2017-10-25 10:47:12,499 ERROR [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> coprocessor.CoprocessorHost: The coprocessor
> org.apache.phoenix.hbase.index.Indexer threw
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:74)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:49)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:44)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.create(MetricsIndexerSourceFactory.java:34)
> at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
>
> 
> 2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: ABORTING region server
> ip-172-30-3-197,16020,1508926506368: The coprocessor
> org.apache.phoenix.hbase.index.Indexer threw
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
> at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:74)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:49)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:44)
> at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.create(MetricsIndexerSourceFactory.java:34)
> at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost
>
> 
> 2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: RegionServer abort: loaded coprocessors are:
> [org.apache.phoenix.coprocessor.MetaDataEndpointImpl,
> org.apache.phoenix.coprocessor.ScanRegionObserver,
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver,
> org.apache.phoenix.hbase.index.Indexer,
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver,
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl,
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,
> org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2017-
>
> .
> 2017-10-25 10:47:12,511 INFO  [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
> regionserver.HRegionServer: STOPPED: The coprocessor
> org.apache.phoenix.hbase.index.Indexer threw
> org.apache.hadoop.metrics2.MetricsException: Metrics source
> RegionServer,sub=PhoenixIndexer already exists!
> 2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/172.30.3.197:16020]
> regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
> 2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/172.30.3.197:16020]
> regionserver.HRegionServer: Stopping infoServer
> 2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020]
> regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
> 2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020]
> regionserver.SplitLogWorker: 

Phoenix 4.12 error on HDP 2.6

2017-10-25 Thread Sumanta Gh
Hi,
I am trying to install phoenix-4.12.0 (HBase-1.1) on HDP 2.6.2.0. As per the
installation guide, I have copied the phoenix-4.12.0-HBase-1.1-server.jar
into the HBase lib directory. After restarting HBase using Ambari and
connecting through sqlline, I can see that the Phoenix system tables are
getting created. I used the HBase shell to check them.

When I try to create a table, the region servers stop with the following
error. Could anyone please advise on what is wrong here?

thanks
sumanta 



DDL:

CREATE TABLE V5.USER (
ADMIN BOOLEAN, 
KEYA VARCHAR, 
KEYB VARCHAR, 
ID INTEGER, 
USERNAME VARCHAR, 
CONSTRAINT PK PRIMARY KEY (KEYA)) COLUMN_ENCODED_BYTES=0;


Region Server Error:

2017-10-25 10:47:12,499 ERROR [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
coprocessor.CoprocessorHost: The coprocessor
org.apache.phoenix.hbase.index.Indexer threw
org.apache.hadoop.metrics2.MetricsException: Metrics source
RegionServer,sub=PhoenixIndexer already exists!
org.apache.hadoop.metrics2.MetricsException: Metrics source
RegionServer,sub=PhoenixIndexer already exists!
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:74)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:49)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:44)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.create(MetricsIndexerSourceFactory.java:34)
at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)


2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1]
regionserver.HRegionServer: ABORTING region server
ip-172-30-3-197,16020,1508926506368: The coprocessor
org.apache.phoenix.hbase.index.Indexer threw
org.apache.hadoop.metrics2.MetricsException: Metrics source
RegionServer,sub=PhoenixIndexer already exists!
org.apache.hadoop.metrics2.MetricsException: Metrics source
RegionServer,sub=PhoenixIndexer already exists!
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:74)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:49)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceImpl.<init>(MetricsIndexerSourceImpl.java:44)
at org.apache.phoenix.hbase.index.metrics.MetricsIndexerSourceFactory.create(MetricsIndexerSourceFactory.java:34)
at org.apache.phoenix.hbase.index.Indexer.start(Indexer.java:251)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost


2017-10-25 10:47:12,499 FATAL [RS_OPEN_REGION-ip-172-30-3-197:16020-1] 
regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: 
[org.apache.phoenix.coprocessor.MetaDataEndpointImpl, 
org.apache.phoenix.coprocessor.ScanRegionObserver, 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
org.apache.phoenix.hbase.index.Indexer, 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl, 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint, 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2017-

.
2017-10-25 10:47:12,511 INFO  [RS_OPEN_REGION-ip-172-30-3-197:16020-1] 
regionserver.HRegionServer: STOPPED: The coprocessor 
org.apache.phoenix.hbase.index.Indexer threw 
org.apache.hadoop.metrics2.MetricsException: Metrics source 
RegionServer,sub=PhoenixIndexer already exists!
2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/172.30.3.197:16020] 
regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2017-10-25 10:47:12,511 INFO  [regionserver/ip-172-30-3-197/172.30.3.197:16020] 
regionserver.HRegionServer: Stopping infoServer
2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020] 
regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting. 
2017-10-25 10:47:12,511 INFO  [SplitLogWorker-ip-172-30-3-197:16020] 
regionserver.SplitLogWorker: SplitLogWorker ip-172-30-3-197,16020,1508926506368 
exiting
2017-10-25 10:47:12,512 INFO  [RS_OPEN_REGION-ip-172-30-3-197:16020-1] 
regionserver.RegionCoprocessorHost: Loaded coprocessor 

Not able to connect to Phoenix Queryserver from Spark

2017-10-25 Thread cmbendre
I am trying to connect to the Phoenix Query Server from Spark. The following
Scala code works perfectly fine when I run it without Spark.

import java.sql.{Connection, DriverManager, PreparedStatement, ResultSet, Statement}

Class.forName("org.apache.phoenix.queryserver.client.Driver")
val connection =
  DriverManager.getConnection("jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF")
val statement = connection.createStatement()

But the same code fails with the following exception in spark-shell /
spark-submit:

java.lang.RuntimeException: response code 500
  at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
  at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:235)
  at org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:97)
  at org.apache.calcite.avatica.remote.RemoteMeta.createStatement(RemoteMeta.java:65)
  at org.apache.calcite.avatica.AvaticaStatement.<init>(AvaticaStatement.java:83)
  at org.apache.calcite.avatica.AvaticaJdbc41Factory$AvaticaJdbc41Statement.<init>(AvaticaJdbc41Factory.java:114)
  at org.apache.calcite.avatica.AvaticaJdbc41Factory.newStatement(AvaticaJdbc41Factory.java:73)
  at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:263)
  at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:110)
  at org.apache.calcite.avatica.AvaticaConnection.createStatement(AvaticaConnection.java:51)
  ... 48 elided

I am using Spark 2.1.0 along with Phoenix 4.11 and HBase 1.3.

I could not find any similar error on the internet. Please help.





--
Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/


Re: Phoenix client failed when used HACluster name on hbase.rootdir property

2017-10-25 Thread Mallieswari Dineshbabu
Hi Rafa,

The DFS nameservice issue for Phoenix was resolved after putting the Hadoop
and HBase configuration directories on the classpath. This can be done by
setting the environment variables HADOOP_HOME and HBASE_HOME on the
respective machines.
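
For anyone hitting the same issue, here is a minimal Scala sketch to verify
the fix (an illustration, not from the original mail; it assumes hadoop-common
and the cluster's hdfs-site.xml are now reachable on the client classpath):

// Check that the Hadoop configuration on the classpath defines the HDFS
// nameservice ("hacluster" in this thread) that the client must resolve.
import org.apache.hadoop.conf.Configuration

val conf = new Configuration()
conf.addResource("hdfs-site.xml") // loaded from the classpath if present
// Expect "hacluster"; null means the configuration is still not on the classpath.
println(conf.get("dfs.nameservices"))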

Thanks for your support.

Regards,
Mallieswari

On Thu, Oct 12, 2017 at 5:26 PM, rafa  wrote:

> You cannot use "hacluster" if that hostname does not resolve to an IP.
> That is what I tried to explain in my last mail.
>
> Use the IP of the machine that is running the query server, or its
> hostname.
>
> Regards,
> Rafa
>
> On 12 Oct 2017 6:19, "Mallieswari Dineshbabu" 
> wrote:
>
>> Hi Rafa,
>>
>> I still get "UnknownHostException: hacluster" when I start the query
>> server with the cluster name 'hacluster' and try to connect the Phoenix
>> client as below.
>>
>>
>>
>> bin>python sqlline-thin.py http://hacluster:8765
>>
>>
>>
>> Setting property: [incremental, false]
>>
>> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>>
>> issuing: !connect jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
>>
>> Connecting to jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF
>>
>> java.lang.RuntimeException: java.net.UnknownHostException: hacluster
>> at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl.send(AvaticaCommonsHttpClientImpl.java:169)
>> at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:45)
>> at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
>> at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:176)
>> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>> at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>> at sqlline.Commands.connect(Commands.java:1064)
>> at sqlline.Commands.connect(Commands.java:996)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>> at sqlline.SqlLine.dispatch(SqlLine.java:809)
>> at sqlline.SqlLine.initArgs(SqlLine.java:588)
>> at sqlline.SqlLine.begin(SqlLine.java:661)
>> at sqlline.SqlLine.start(SqlLine.java:398)
>> at sqlline.SqlLine.main(SqlLine.java:291)
>>
>>
>>
>> Regards,
>>
>> Mallieswari D
>>
>>
>>
>> On Wed, Oct 11, 2017 at 5:53 PM, rafa  wrote:
>>
>>> Hi Mallieswari,
>>>
>>> The hbase.rootdir is a filesystem resource. If you have an HA NameNode
>>> and a configured nameservice, it can point to the active NameNode
>>> automatically. As far as I know, it is not related to HBase Master HA.
>>>
>>> The "hacluster" used in this command : python sqlline-thin.py 
>>> http://hacluster:8765
>>>   is a hostname resource. What do you obtain from nslookup hacluster ?
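
To illustrate rafa's point, a minimal Scala sketch (an illustration, not from
the original mail; it assumes "hacluster" exists only as an HDFS nameservice
ID and not in DNS):

// A nameservice ID is resolved by the HDFS client from hdfs-site.xml, not
// by DNS, so plain hostname resolution fails for it while a real host works.
import java.net.InetAddress
import scala.util.Try

println(Try(InetAddress.getByName("hacluster"))) // likely Failure(java.net.UnknownHostException)
println(Try(InetAddress.getByName("localhost"))) // Success(localhost/127.0.0.1)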
>>>
>>> To have several Phoenix query servers and achieve HA in that layer, you
>>> would need a load balancer (software or hardware) defined to balance
>>> across all your available query servers.
>>> Regards,
>>>
>>> rafa
>>>
>>> On Wed, Oct 11, 2017 at 1:30 PM, Mallieswari Dineshbabu <
>>> dmalliesw...@gmail.com> wrote:
>>>
 Hi All,

 I am trying to integrate Phoenix in a High Availability enabled
 Hadoop-HBase cluster. I have used the nameservice ID instead of the
 HMaster's hostname in the following property, so that any active
 HMaster will be identified automatically in case of failover:

 <property>
   <name>hbase.rootdir</name>
   <value>hdfs://hacluster:9000/HBase</value>
 </property>



 Similarly, I tried connecting to the QueryServer that is running in one
 of the HMaster nodes, from my thin client, with the following URL. But
 I get the error, "No suitable driver found for http://hacluster:8765"



 python sqlline-thin.py http://hacluster:8765





 Please tell me what configuration needs to be done to connect the
 QueryServer with the nameservice ID.



 Note: The same works fine when I specify the HMaster's IP address in both
 my HBase configuration and the sqlline connection string.


 --
 Thanks and regards
 D.Mallieswari

>>>
>>>
>>
>>
>> --
>> Thanks and regards
>> D.Mallieswari
>>
>


-- 
Thanks and regards
D.Mallieswari