Re: JOIN issue, getting errors

2015-09-09 Thread Aaron Bossert
I'm not sure whether that was the problem, but last night I decided to retrace my
steps from installation onwards... it turns out that I had failed to delete the old
Phoenix server jar files from my 48 region servers. Once I deleted the old jar and
placed a fresh copy on each server, the joins worked.

Sent from my iPhone

> On Sep 9, 2015, at 8:59 AM, Josh Mahonin  wrote:
> 
> This looks suspiciously like 
> https://issues.apache.org/jira/browse/PHOENIX-2169
> 
> Are you able to perform the same queries when the tables aren't salted? Also, 
> what versions of HBase / Phoenix are you using?
> 
>> On Tue, Sep 8, 2015 at 12:33 PM, M. Aaron Bossert  
>> wrote:
>> All,
>> 
>> Any help would be greatly appreciated...
>> 
>> I have two tables with the following structure:
>> 
>> CREATE TABLE NG.BARS_Cnc_Details_Hist (
>> ip  varchar(30) not null,
>> last_active date not null,
>> cnc_type varchar(5),
>> cnc_value varchar(50),
>> pull_date date CONSTRAINT cnc_pk PRIMARY KEY(ip,last_active,cnc_value)) 
>> SALT_BUCKETS=48;
>> 
>> create table NG.IID_IP_Threat_Hist (
>> IPaddr varchar(50) not null,
>> Original_Date varchar(16),
>> Classname varchar(50),
>> Category varchar(30),
>> Threat_Date date,
>> Time varchar(8),
>> Created date,
>> VENDOR_ID integer
>> CONSTRAINT pk PRIMARY KEY(IPaddr)) SALT_BUCKETS=48;
>> 
>> When I run a query against each individual table, I get the expected 
>> results...but when I run a query that is a JOIN of the two (query below), I 
>> get errors related to IO/corruption:
>> 
>> SELECT
>>   NG.BARS_CNC_DETAILS_HIST.IP,
>>   NG.BARS_CNC_DETAILS_HIST.LAST_ACTIVE,
>>   NG.BARS_CNC_DETAILS_HIST.CNC_TYPE,
>>   NG.BARS_CNC_DETAILS_HIST.CNC_VALUE,
>>   NG.BARS_CNC_DETAILS_HIST.PULL_DATE,
>>   NG.IID_IP_THREAT_HIST.IPADDR,
>>   NG.IID_IP_THREAT_HIST.CLASSNAME,
>>   NG.IID_IP_THREAT_HIST.CATEGORY,
>>   NG.IID_IP_THREAT_HIST.THREAT_DATE,
>>   NG.IID_IP_THREAT_HIST.TIME
>> FROM
>>   NG.BARS_CNC_DETAILS_HIST
>> INNER JOIN
>>   NG.IID_IP_THREAT_HIST
>> ON
>>   NG.BARS_CNC_DETAILS_HIST.IP=NG.IID_IP_THREAT_HIST.IPADDR;
>> 
>> 15/09/08 11:17:56 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
>> attempts=35, exceptions:
>> Tue Sep 08 11:08:36 CDT 2015, 
>> org.apache.hadoop.hbase.client.RpcRetryingCaller@768f318b, 
>> java.io.IOException: java.io.IOException: java.lang.NoClassDefFoundError: 
>> org/iq80/snappy/CorruptionException
>>  at 
>> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
>>  at 
>> org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
>>  at 
>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6864)
>>  at 
>> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3415)
>>  at 
>> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3397)
>>  at 
>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
>>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
>>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>>  at 
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>>  at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.NoClassDefFoundError: 
>> org/iq80/snappy/CorruptionException
>>  at java.lang.Class.forName0(Native Method)
>>  at java.lang.Class.forName(Class.java:190)
>>  at 
>> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:72)
>>  ... 10 more
>> Caused by: java.lang.ClassNotFoundException: 
>> org.iq80.snappy.CorruptionException
>>  at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>  at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>  at java.security.AccessController.doPrivileged(Native Method)
>>  at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>  at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>  at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>  ... 13 more
> 


Re: JOIN issue, getting errors

2015-09-09 Thread Josh Mahonin
This looks suspiciously like
https://issues.apache.org/jira/browse/PHOENIX-2169

Are you able to perform the same queries when the tables aren't salted?
Also, what versions of HBase / Phoenix are you using?
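
One quick way to test this is to recreate both tables without SALT_BUCKETS and re-run
the same join against the copies. A minimal sketch, assuming the schemas from the
original post (the _UNSALTED table names are only illustrative):

-- Same schemas as the original post, with SALT_BUCKETS omitted.
CREATE TABLE NG.BARS_CNC_DETAILS_HIST_UNSALTED (
    ip          varchar(30) not null,
    last_active date not null,
    cnc_type    varchar(5),
    cnc_value   varchar(50),
    pull_date   date
    CONSTRAINT cnc_pk PRIMARY KEY (ip, last_active, cnc_value));

CREATE TABLE NG.IID_IP_THREAT_HIST_UNSALTED (
    IPaddr        varchar(50) not null,
    Original_Date varchar(16),
    Classname     varchar(50),
    Category      varchar(30),
    Threat_Date   date,
    Time          varchar(8),
    Created       date,
    VENDOR_ID     integer
    CONSTRAINT pk PRIMARY KEY (IPaddr));

-- Copy the existing rows across, then run the original join against the
-- unsalted copies to see whether the error still occurs.
UPSERT INTO NG.BARS_CNC_DETAILS_HIST_UNSALTED SELECT * FROM NG.BARS_CNC_DETAILS_HIST;
UPSERT INTO NG.IID_IP_THREAT_HIST_UNSALTED SELECT * FROM NG.IID_IP_THREAT_HIST;

If the join succeeds against the unsalted copies, that would point toward the
salted-table issue referenced above.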



Re: How to add lots of dummy data to a table efficiently

2015-09-09 Thread Cody Marcel
Have you looked at the pherf module in Phoenix? You can predefine
rules and have it generate data based on them.

Sent from my iPhone

> On Sep 9, 2015, at 3:55 AM, James Heather  wrote:
>
> I've been trying to create a table and then bung lots of random data into it, 
> but without generating the data client-side and then upserting it. I'd rather 
> find a way to have it generated on the server.
>
> My table has three columns, all BIGINT, and the first is the primary key.
>
> I'd ideally like to write something like
>
>UPSERT into linktab (id, first_val, second_val) SELECT next value for 
> linktab_next_id, random_func, random_func ...
>
> and complete the query so that the SELECT generates its values without 
> needing to read from a table. But there doesn't seem to be a way of writing a 
> SELECT that doesn't need a table parameter.
>
> So I've ended up doing this. First I upsert a single row into the table; 
> then, I double the number of rows in it by a self-referential UPSERT SELECT:
>
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 8 rows affected (0.193 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 16 rows affected (0.202 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 32 rows affected (0.205 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 64 rows affected (0.235 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 128 rows affected (0.28 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 256 rows affected (0.378 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 512 rows affected (0.53 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 1,024 rows affected (0.848 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 2,048 rows affected (1.513 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 4,096 rows affected (2.817 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 8,192 rows affected (5.314 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 16,384 rows affected (10.807 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 32,768 rows affected (21.306 seconds)
> 0: jdbc:phoenix:172.31.31.143> upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> 65,536 rows affected (45.713 seconds)
> 0: jdbc:phoenix:172.31.31.143> explain upsert into linktab (id, first_val, 
> second_val) select next value for linktab_next_id, 
> round(rand()*1), round(rand()*1) from linktab;
> +------------------------------------------------------+
> |                          PLAN                         |
> +------------------------------------------------------+
> | UPSERT SELECT                                        |
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER LINKTAB |
> | SERVER FILTER BY FIRST KEY ONLY                      |
> | CLIENT RESERVE VALUES FROM 1 SEQUENCE                |
> +------------------------------------------------------+
> 4 rows selected (0.192 seconds)
> 0: jdbc:phoenix:172.31.31.143>
>
>
> This works, but 
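
For reference, a minimal end-to-end sketch of the doubling approach described above.
The sequence, table, and column names come from the message; the 100000 multiplier is
only a placeholder, since the actual value is truncated in the archive:

-- Sequence that supplies the primary key values.
CREATE SEQUENCE linktab_next_id;

-- Three BIGINT columns; the first is the primary key.
CREATE TABLE linktab (
    id         BIGINT NOT NULL PRIMARY KEY,
    first_val  BIGINT,
    second_val BIGINT);

-- Seed a single row so the self-referential UPSERT SELECT has something to read.
UPSERT INTO linktab (id, first_val, second_val)
VALUES (NEXT VALUE FOR linktab_next_id, 0, 0);

-- Each execution doubles the row count: every existing row yields one new row with a
-- fresh sequence value and random payload columns (placeholder multiplier).
UPSERT INTO linktab (id, first_val, second_val)
SELECT NEXT VALUE FOR linktab_next_id, ROUND(RAND() * 100000), ROUND(RAND() * 100000)
FROM linktab;

Repeating the final statement doubles the table each time, which matches the growth
and timings in the session above.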

Re: Phoenix 4.4+ on CDH

2015-09-09 Thread ALEX K
These changes worked for me on cdh5.4.4 and phoenix 4.5.1-hbase-1.0:
http://stackoverflow.com/a/31934434/165130


On Sat, Aug 22, 2015 at 10:17 AM, Ns G  wrote:

> Hi Lukas,
>
> I have shared the changes I made in one of the previous emails. Did you try
> them?
>
> Thanks,
> Satya
> On 22-Aug-2015 6:51 pm, "Lukáš Lalinský"  wrote:
>
>> On Sat, Aug 22, 2015 at 1:26 AM, Ns G  wrote:
>>
>>> But what I can suggest is that, with minimal changes to the Phoenix code,
>>> you will be able to build and deploy Phoenix on CDH 5.4.x.
>>>
>>> I didn't try the Phoenix Query Server, but I was able to generate all the
>>> server and client jars, deploy them on CDH 5.4, and use them successfully.
>>>
>> I'm curious about this, because when I tried to run Phoenix on CDH I ran
>> into some compatibility issues. I don't remember the details, but it was
>> about HBase classes not having the right methods. That's when I realized
>> CDH is really a fork of Hadoop and I switched to deploying custom packages
>> based on the original versions.
>>
>> Did you have to do anything special, or just use the Phoenix binary
>> tarball?
>>
>> Lukas
>>
>>
>>


Re: Phoenix 4.4+ on CDH

2015-09-09 Thread Justin Workman
Same here with CDH5.4.5

Sent from my iPhone

> On Sep 9, 2015, at 8:28 PM, ALEX K  wrote:
> 
> These changes worked for me on cdh5.4.4 and phoenix 4.5.1-hbase-1.0:
> http://stackoverflow.com/a/31934434/165130