Re: question about a config property

2016-02-20 Thread ashish tapdiya
Thanks for the response, James. Is it a percentage of the Java heap allocated to the region server? Also, does it affect join processing?

Ashish

On Sat, Feb 20, 2016 at 8:12 PM, James Taylor wrote:
> Yes
>
> On Fri, Feb 19, 2016 at 11:33 AM, ashish tapdiya
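For reference, a server-side Phoenix property such as phoenix.query.maxGlobalMemoryPercentage would normally be set in hbase-site.xml on the region servers. A minimal sketch, with a non-default value of 20 chosen purely for illustration:

    <!-- hbase-site.xml on each region server -->
    <property>
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <!-- max percentage of the region server heap the global memory manager may track -->
      <value>20</value>
    </property>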

Re: Delete records

2016-02-20 Thread James Taylor
Hi Nanda,

This error occurs if your table is immutable, you have an index on the table, and your WHERE clause is filtering on a column not contained in all of the indexes. If that's not the case, would you mind posting a complete end-to-end test, as it's possible you're hitting a bug?

Thanks,
James
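To illustrate the condition James describes, here is a minimal sketch; the table and column names are invented, not taken from the thread:

    -- Immutable table with one global index on column a
    CREATE TABLE t (pk BIGINT NOT NULL PRIMARY KEY, a VARCHAR, b VARCHAR) IMMUTABLE_ROWS=true;
    CREATE INDEX idx_a ON t (a);

    -- Fails with ERROR 1027 (42Y86): b is not present in every index on the immutable table
    DELETE FROM t WHERE b = 'x';

    -- Works: the filter only references a column that every index contains
    DELETE FROM t WHERE a = 'x';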

Re: question about a config property

2016-02-20 Thread James Taylor
Yes

On Fri, Feb 19, 2016 at 11:33 AM, ashish tapdiya wrote:
> Hi,
>
> Is phoenix.query.maxGlobalMemoryPercentage a server-side property?
>
> Thanks,
> ~Ashish
>

Delete records

2016-02-20 Thread Nanda
Hi,

I am using Phoenix 4.4, and when executing the query below I get the exception shown beneath it. The column report_time is part of all the indexes created on "some_table".

Delete from some_table where TO_NUMBER(report_time) <= 145593750

java.sql.SQLException: ERROR 1027 (42Y86):
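A self-contained test of the kind James asks for in his reply might look like the sketch below; the schema, data types, and sample row are guesses made only for illustration:

    CREATE TABLE some_table (
        id BIGINT NOT NULL PRIMARY KEY,
        report_time VARCHAR,
        val VARCHAR
    ) IMMUTABLE_ROWS=true;

    CREATE INDEX idx_report_time ON some_table (report_time);

    UPSERT INTO some_table VALUES (1, '145593749', 'a');

    -- Statement reported to fail with ERROR 1027 (42Y86)
    DELETE FROM some_table WHERE TO_NUMBER(report_time) <= 145593750;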

Re: Spark Phoenix Plugin

2016-02-20 Thread Benjamin Kim
Josh,

My production environment at our company is:

CDH 5.4.8
Hadoop 2.6.0-cdh5.4.8
YARN 2.6.0-cdh5.4.8
HBase 1.0.0-cdh5.4.8
Apache HBase 1.1.3
Spark 1.6.0
Phoenix 4.7.0

I tried to use the Phoenix Spark Plugin against both versions of HBase. I hope this helps.

Thanks,
Ben

> On Feb 20, 2016,

Re: Spark Phoenix Plugin

2016-02-20 Thread Josh Mahonin
Hi Ben,

Can you describe in more detail what your environment is? Are you using stock installs of HBase, Spark and Phoenix? Are you using the hadoop2.4 pre-built Spark distribution as per the documentation [1]? The unread block data error is commonly traced back to this issue [2], which indicates
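The classpath setup the phoenix-spark documentation describes usually amounts to pointing both the driver and the executors at the Phoenix client jar in spark-defaults.conf. A sketch, with an illustrative path rather than one taken from the thread:

    spark.driver.extraClassPath    /path/to/phoenix-4.7.0-client.jar
    spark.executor.extraClassPath  /path/to/phoenix-4.7.0-client.jar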