HBase Phoenix Integration

2016-02-26 Thread Amit Shah
Hello, I have been trying to install phoenix on my cloudera hbase cluster. Cloudera version is CDH5.5.2 while HBase version is 1.0. I copied the server & core jar (version 4.6-HBase-1.0) on the master and region servers and restarted the hbase cluster. I copied the corresponding client jar on my
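The jar copy described above can be sketched roughly as below (host names and the CDH parcel path are placeholders; your cluster layout may differ):

```shell
# Sketch only: copy the Phoenix server jar to every HBase node, then
# restart the HBase service so the region servers pick up the jar.
PHOENIX_JAR=phoenix-4.6.0-HBase-1.0-server.jar
for host in hbase-master region-1 region-2; do   # placeholder host names
  scp "$PHOENIX_JAR" "$host:/opt/cloudera/parcels/CDH/lib/hbase/lib/"
done
```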

Re: HBase Phoenix Integration

2016-02-26 Thread Amit Shah
On Fri, Feb 26, 2016 at 11:26 PM, Murugesan, Rani wrote: > Did you test and confirm your phoenix shell from the zookeeper server? > > cd /etc/hbase/conf > > > phoenix-sqlline.py :2181 > > > > > > *From:* Amit Shah [mailto:amits...@gmail.com] > *Sent
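Rani's suggestion amounts to running sqlline from the ZooKeeper host, roughly as follows (the host name below is a placeholder for your quorum):

```shell
cd /etc/hbase/conf
# Point sqlline at the ZooKeeper quorum; 2181 is the default client port.
phoenix-sqlline.py zk-host.example.com:2181
```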

Re: HBase Phoenix Integration

2016-02-28 Thread Amit Shah
Help would be appreciated. On Sat, Feb 27, 2016 at 8:03 AM, Amit Shah wrote: > Hi Murugesan, > > What preconditions would I need on the server to execute the python > script? I have Python 2.7.5 installed on the zookeeper server. If I just > copy the sqlline script to the /etc/hbase/conf

Re: ***UNCHECKED*** Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
Please remove the 'x' from the jar extension > Hope this helps. > > > Thanks, > Divya > > On 26 February 2016 at 20:44, Amit Shah wrote: > >> Hello, >> >> I have been trying to install phoenix on my cloudera hbase cluster. >> Cloudera version is

Re: ***UNCHECKED*** Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
.4 . > > Thanks, > Divya > > On 1 March 2016 at 13:00, Amit Shah wrote: > >> Hi Divya, >> >> Thanks for the patch. Is this for phoenix version 4.6 ? Are the changes >> made to make phoenix work with CDH 5.5.2? >> >> Thanks, >> Amit. >

Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
Phoenix 4.3 means you need HBase 0.98. What kind of problem did you > experience after building 4.6 from sources with the changes suggested on > StackOverflow? > > Thanks, > Sergey > > On Sun, Feb 28, 2016 at 10:49 PM, Amit Shah wrote: > > An update - > > > > I was able t

Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
> > > > Thanks, > > James > > > > > > > > On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah wrote: > > Hi Sergey, > > > > I get lot of compilation errors when I compile the source code > for 4.6-HBase-1.0 branch or v4.7.0-HBase-1.0-rc3 tag. N

Re: Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
If there is no way to resolve this, I would still be using the Cloudera-Labs > phoenix version from this : > > https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/ > > > Thanks, > Sun. > > ------ >

Re: RE: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
Cloudera-Labs > phoenix version from this : > > > https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/ > > > > > Thanks, > > Sun. > > > -- > -- > > >

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
> Best, > Sun. > > -- > ------ > > > *From:* Amit Shah > *Date:* 2016-03-01 17:22 > *To:* user > *Subject:* Re: RE: HBase Phoenix Integration > Hi All, > > I got some success in deploying phoenix 4.6-HBase-1.0 on CDH 5.5.2. I > resolved the compilation errors by commenting

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
00, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' > on table 'hbase:meta' at region=hbase:meta,,1.1588230740, > hostname=dev-2,60020,1456826584858, seqNum=0 > > -- > > *From:* Amit Shah > *Date:* 2016-03-01 18:00 >

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
error message from the regionserver log. That is > super weird. > > -- > ------ > > > *From:* Amit Shah > *Date:* 2016-03-01 19:02 > *To:* user > *Subject:* Re: Re: HBase Phoenix Integration > Hi Sun, > > In my de

Disabling HBase Block Cache

2016-03-25 Thread Amit Shah
Hi, I am using apache hbase (version 1.0.0) and phoenix (version 4.6) deployed through cloudera. Since my group by aggregation queries are slow, I want to try disabling the block cache for a particular hbase table. I tried a couple of approaches but couldn't succeed. I am verifying if the b
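One way to attempt this from the HBase shell is sketched below ('MY_TABLE' is a placeholder, and '0' assumes Phoenix's default column family name; adjust both for your schema):

```shell
# Disable the block cache on the table's column family via the HBase shell.
echo "disable 'MY_TABLE'
alter 'MY_TABLE', {NAME => '0', BLOCKCACHE => 'false'}
enable 'MY_TABLE'" | hbase shell
```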

Re: Disabling HBase Block Cache

2016-03-25 Thread Amit Shah
It would be great if someone could throw some light on this. P.S - Though disabling the block cache didn't speed up the group by query, that seems like a separate topic of discussion. Thanks! On Fri, Mar 25, 2016 at 1:34 PM, Amit Shah wrote: > Hi, > > I am using apache hbase (ve

Speeding Up Group By Queries

2016-03-25 Thread Amit Shah
Hi, I am trying to evaluate apache hbase (version 1.0.0) and phoenix (version 4.6) deployed through cloudera for our OLAP workload. I have a table that has 10 mil rows. I try to execute the roll-up query below and it takes around 2 mins to return 1,850 rows. SELECT SUM(UNIT_CNT_SOLD), SUM(TOTAL_
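The query shape being benchmarked is a roll-up aggregation over a fact table, along the lines of the sketch below (table and column names other than UNIT_CNT_SOLD are illustrative placeholders, not the actual schema from the thread):

```sql
-- Illustrative roll-up: aggregate a measure grouped by a dimension column.
SELECT REGION_ID,
       SUM(UNIT_CNT_SOLD) AS TOTAL_UNITS
FROM SALES_FACT
GROUP BY REGION_ID;
```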

Re: Disabling HBase Block Cache

2016-03-28 Thread Amit Shah
/*+ NO_CACHE */ ... > > Thanks, > James > > [1] https://phoenix.apache.org/language/index.html#alter > [2] https://phoenix.apache.org/language/index.html#hint > > On Fri, Mar 25, 2016 at 6:39 AM, Amit Shah wrote: > >> I noticed that the charts <http://i.imgur.com/ZEJTHWt.png> on clou
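Per the ALTER [1] and hint [2] docs linked in the reply, the two options look roughly like this ('MY_TABLE' is a placeholder; BLOCKCACHE is an HBase column-family property that Phoenix passes through on ALTER):

```sql
-- Turn the block cache off for the table's column family...
ALTER TABLE MY_TABLE SET BLOCKCACHE = false;
-- ...or bypass the cache for a single query with the NO_CACHE hint.
SELECT /*+ NO_CACHE */ COUNT(*) FROM MY_TABLE;
```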

Re: Speeding Up Group By Queries

2016-03-29 Thread Amit Shah
the hbase web UI >> > > You need to do *major_compact* from HBase shell. From UI it's minor. > > - mujtaba > > On Mon, Mar 28, 2016 at 12:32 AM, Amit Shah wrote: > >> Thanks Mujtaba and James for replying back. >> >> Mujtaba, Below are details to

Re: Speeding Up Group By Queries

2016-03-29 Thread Amit Shah
"phoenix.stats.guidepost.width"=50000000; > > > > > On Tue, Mar 29, 2016 at 6:45 AM, Amit Shah wrote: > >> Hi Mujtaba, >> >> I did try the two optimization techniques by recreating the table and >> then loading it again with 10 mil records.

Phoenix Upsert Query Failure - Could Not Get Page

2016-03-31 Thread Amit Shah
Hi, I have been trying to execute an upsert query that selects data from a 10 mil record table. The query fails on the sqlline client at times with Caused by: java.lang.RuntimeException: Could not get page at index: 16. The detailed exception is pasted here - http://pastebin.com/1wTCHyJM. I tried
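The failing statement follows the usual UPSERT ... SELECT pattern, roughly as below (table and column names are placeholders, not from the thread):

```sql
-- Aggregate the 10M source rows into a summary table in one statement.
UPSERT INTO AGG_SALES (PRODUCT_ID, TOTAL_UNITS)
SELECT PRODUCT_ID, SUM(UNIT_CNT)
FROM SALES_FACT
GROUP BY PRODUCT_ID;
```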

Re: Region Server Crash On Upsert Query Execution

2016-03-31 Thread Amit Shah
? Also what's the RS heap size? > > > On Thu, Mar 31, 2016 at 1:48 AM, Amit Shah wrote: > >> Hi, >> >> We have been experimenting hbase (version 1.0) and phoenix (version 4.6) >> for our OLAP workload. In order to precalculate aggregates we have been >>

Re: Phoenix Upsert Query Failure - Could Not Get Page

2016-04-01 Thread Amit Shah
Any inputs here? On Thu, Mar 31, 2016 at 4:45 PM, Amit Shah wrote: > Hi, > > I have been trying to execute an upsert query that selects data from a 10 > mil record table. The query fails on the sqlline client at times with Caused > by: java.lang.RuntimeException: Could not get pa

Re: Region Server Crash On Upsert Query Execution

2016-04-01 Thread Amit Shah
GC logs? Also 2GB > heap is on the low side, can you rerun your test with the heap set to 5 and > 10GB? > > On Thu, Mar 31, 2016 at 7:01 AM, Amit Shah wrote: > >> Another such instance of the crash is described below. >> >> >> When the regions are evenly dis

Re: Region Server Crash On Upsert Query Execution

2016-04-02 Thread Amit Shah
; doing things optimally given the fact you know your algorithm or workload/ > goal). > > P.S. I think we know each other, right? > > Regards, > Constantin > On 1 Apr 2016, 4:16 p.m., "Amit Shah" wrote: > >> I tried raising the region server heap memory to 3

Missing Rows In Table After Bulk Load

2016-04-08 Thread Amit Shah
Hi, I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records into a table using the psql.py utility, I tried querying the table using the sqlline.py utility through a select count(*) query. I see only 0.1 million records. What could be missing? The psql.py logs are python psql.py
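For reference, a psql.py bulk-load invocation looks roughly like this (quorum, table name, and file path are placeholders):

```shell
# Load a CSV into an existing Phoenix table; -t names the target table.
psql.py -t MY_TABLE zk-host.example.com:2181 /path/to/data.csv
```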

Re: Missing Rows In Table After Bulk Load

2016-04-11 Thread Amit Shah
Yes, it looks like they were not unique, hence the reduction in count. Thanks! On Fri, Apr 8, 2016 at 9:46 PM, Steve Terrell wrote: > Are the primary keys in the .csv file all unique? (no rows overwriting > other rows) > > On Fri, Apr 8, 2016 at 10:21 AM, Amit Shah wr

Understanding Phoenix Query Plans

2016-04-11 Thread Amit Shah
Hi, I am using hbase version 1.0 and phoenix version 4.6. For the different queries that we are benchmarking, I am trying to understand the query plans. 1. If we execute a query with a where clause and group by on the primary key columns of the table, the plan looks like +
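The plans in question come from the EXPLAIN statement, e.g. (a sketch with placeholder table and column names):

```sql
-- EXPLAIN prints the plan rows without executing the query.
EXPLAIN SELECT PK2, COUNT(*)
FROM MY_TABLE
WHERE PK1 = 'some-value'
GROUP BY PK2;
```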

Re: Speeding Up Group By Queries

2016-04-11 Thread Amit Shah
at 10:55 PM, Amit Shah wrote: > Hi Mujtaba, > > Could these improvements be because of region distribution across region > servers? Along with the optimizations you had suggested I had also used > hbase-region-inspector to move regions evenly across the region server. > > Bel

Re: Speeding Up Group By Queries

2016-04-12 Thread Amit Shah
that will help > you benchmark your queries under representative data sizes? > > Thanks, > James > > [1] https://phoenix.apache.org/secondary_indexing.html > [2] https://www.youtube.com/watch?v=f4Nmh5KM6gI&feature=youtu.be > [3] https://phoenix.apache.org/pherf.html >

Phoenix Bulk Load With Column Overrides

2016-04-20 Thread Amit Shah
Hello, I am using phoenix 4.6 and trying to bulk load data into a table from a csv file using the psql.py utility. How do I map the table columns to the header values in the csv file through the "-h" argument? For example, assume my phoenix table does not match the columns in the csv. The phoenix tab
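The -h flag takes a comma-separated list of column names in the order the fields appear in the CSV, e.g. (a sketch; table and column names are placeholders):

```shell
# Map CSV fields, in file order, onto the named table columns.
psql.py -t MY_TABLE -h COL_B,COL_A zk-host.example.com:2181 /path/to/data.csv
```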