Re: phoenix-spark error with index on target table

2016-08-12 Thread James Taylor
Thanks for letting us know, Nathan. On Friday, August 12, 2016, Nathan Davis wrote: > I was able to find a solution to this issue, so for posterity's sake here is the solution (perhaps more of a workaround): When executing the spark driver (in my simple case spark-shell, but should be the …

Re: phoenix-spark error with index on target table

2016-08-12 Thread Nathan Davis
I was able to find a solution to this issue, so for posterity's sake here is the solution (perhaps more of a workaround): When executing the spark driver (in my simple case spark-shell, but should be the same with spark-submit), you need to provide the hbase-protocol.jar both via `--jars` and in the …

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
Yeah, hadoop-common is the issue. I will file a JIRA issue and we can try to add some exclusions to remove the cruft for a 4.8.1 release. Thanks for letting us know! I would still be interested to hear about dependency conflicts that you're getting. The shading should have prevented issues at …

Re: Issues while Running Apache Phoenix against TPC-H data

2016-08-12 Thread Mujtaba Chohan
Hi Amit,
* What's the heap size of each of your region servers?
* Do you see a huge amount of disk reads when you do a select count(*) from tpch.lineitem? If yes, try setting Snappy compression on your table, followed by a major compaction.
* Were there any deleted rows in this table? What's the row …

phoenix-spark error with index on target table

2016-08-12 Thread Nathan Davis
Hi All, I am using phoenix-spark to write a DataFrame to a Phoenix table. All works fine when just writing to a table alone. However, when I do the same thing but with a global index on that table, I get the following error. The index is on two columns (varchar, date) with no includes. …

Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread vikashtalanki
I don't have a code snippet for a composite key, but you can encode each field in the composite key and then do an array concatenation: http://stackoverflow.com/questions/80476/how-can-i-concatenate-two-arrays-in-java

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
Hi Youngwoo, The inclusion of hadoop-common is probably the source of most of the bloat. We really only needed the UserGroupInformation code, but Hadoop doesn't provide a proper artifact with just that dependency for us to use downstream. What dependency issues are you running into? …

[ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-12 Thread Ankit Singhal
Apache Phoenix enables OLTP and operational analytics for Hadoop through SQL support and integration with other projects in the ecosystem such as Spark, HBase, Pig, Flume, MapReduce and Hive. We're pleased to announce our 4.8.0 release, which includes:
- Local Index improvements [1]
- Integration with …

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread John Leach
Michael, The object browser in DBVisualizer is driven by the JDBC driver. If you get any weird interaction, it usually means the JDBC implementation has an issue. We had issues at Splice Machine with our foreign keys returning incorrectly, and then realized any deviation from the spec causes …

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread Michael McAllister
James, Thanks – looks like I was misled by DBVisualizer. The underlying HBase index tables automatically have the parent table’s schema name prepended, which is perfect. For some reason, in the DBVisualizer object browser the indexes don’t show up in the correct schema; they’re showing up in a …

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread James Taylor
Hi Michael, SQL dictates that an index must be in the same schema as the table it's indexing. Thanks, James On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister <mmcallis...@homeaway.com> wrote: > Hi, is there any reason we can specify the schema name for a table, but not an index? I note …

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks James, all CAPS did the trick! Yes, the event table is already IMMUTABLE_ROWS=true. Thanks again, Nathan On Fri, Aug 12, 2016 at 10:59 AM, James Taylor wrote: > In your IndexTool invocation, try using all caps for your table and index name. Phoenix normalizes names by upper-casing them …

Tables can have schema name but indexes cannot

2016-08-12 Thread Michael McAllister
Hi, Is there any reason we can specify the schema name for a table, but not an index? I note that the grammar online makes it clear this isn’t part of the syntax, but it would be nice if we could do it. To illustrate what I’d like:
-- Create the table
CREATE TABLE IF NOT EXISTS MMCALLISTER.TES…

Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-12 Thread Ryan Templeton
FYI… The sample data that I loaded into the table was based on the current timestamp, with each additional row increasing that value by 1 minute, from the current time up to 999,999 minutes into the future. Turns out this was a bug that prevents the scanner from reading timestamp values greater than …

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread James Taylor
In your IndexTool invocation, try using all caps for your table and index name. Phoenix normalizes names by upper-casing them (unless they're in double quotes). One other unrelated question: did you declare your event table with IMMUTABLE_ROWS=true (assuming it's a write-once table)? If not, you can …
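The normalization rule James describes (unquoted identifiers are upper-cased, double-quoted identifiers keep their exact case) can be sketched as follows. This is an illustrative helper, not Phoenix's actual implementation; the class and method names are made up for this example:

```java
public class IdentifierNormalizer {
    // Mimics Phoenix's identifier rule: unquoted names are upper-cased,
    // double-quoted names keep their exact case (with the quotes stripped).
    // Illustration only -- not Phoenix's real code.
    static String normalize(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("event"));     // EVENT
        System.out.println(normalize("\"event\"")); // event
    }
}
```

This is why an IndexTool invocation with lowercase `--data-table event` fails to find a table created as unquoted `event`: the table was stored as `EVENT`, while the command-line argument is matched literally.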

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks for the detailed info. I took the advice of using the ASYNC method. The CREATE statement executes fine and I end up with an index table showing in state BUILDING. When I kick off the MR job with `hbase org.apache.phoenix.mapreduce.index.IndexTool --schema trans --data-table event --index-tab…

Re: Phoenix Ifnull

2016-08-12 Thread Michael McAllister
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is running, this could also be the issue in PHOENIX-2994: https://issues.apache.org/jira/browse/PHOENIX-2994 Michael McAllister, Staff Data Warehouse Engineer | Decision Systems, mmcallis...@homeaway.com

Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread Dong-iL, Kim
Oh, thanks a lot. Do you have a snippet for generating a composite key? I’m sorry for my laziness. > On Aug 12, 2016, at 3:24 PM, vikashtalanki wrote: > Hi Dong, if you still want to insert through HBase, you can use the below snippets for encoding values as per Phoenix: import …

Re: Phoenix Ifnull

2016-08-12 Thread Lukáš Lalinský
I think this is a problem with the WHERE clause. NULL values are neither equal nor not-equal to any other values. You might need to add "OR API_KEY IS NULL" to the WHERE clause. Lukas On Fri, Aug 12, 2016 at 9:51 AM, ankit beohar wrote: > Hi Lukáš/James, I have one table in which only one …
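The three-valued logic Lukas describes can be sketched with a nullable Boolean. This is a toy model of SQL semantics, not Phoenix code: when either operand is NULL, both `=` and `<>` evaluate to unknown, and a WHERE clause drops the row unless the predicate is strictly true — which is why the explicit `OR API_KEY IS NULL` branch is needed:

```java
public class ThreeValuedLogicSketch {
    // SQL-style equality: "unknown" (null) when either operand is NULL.
    static Boolean sqlEquals(String a, String b) {
        if (a == null || b == null) return null;
        return a.equals(b);
    }

    // SQL-style inequality: also unknown when either operand is NULL.
    static Boolean sqlNotEquals(String a, String b) {
        Boolean eq = sqlEquals(a, b);
        return eq == null ? null : !eq;
    }

    // A WHERE clause keeps a row only when the predicate is TRUE;
    // unknown filters the row out just like FALSE does.
    static boolean wherePasses(Boolean predicate) {
        return Boolean.TRUE.equals(predicate);
    }

    public static void main(String[] args) {
        String apiKey = null; // a row whose API_KEY is NULL
        System.out.println(wherePasses(sqlEquals(apiKey, "abc")));    // false
        System.out.println(wherePasses(sqlNotEquals(apiKey, "abc"))); // false
        System.out.println(apiKey == null);                           // true: only IS NULL matches
    }
}
```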

Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread vikashtalanki
Hi Dong, If you still want to insert through HBase, you can use the below snippets for encoding values as per Phoenix:

import org.apache.phoenix.schema.types.*;

public static byte[] encodeDecimal(String value) {
    BigDecimal bigDecValue = new BigDecimal(value);
    …
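The snippet above relies on Phoenix's `org.apache.phoenix.schema.types` serializers, which the digest truncates. The concatenation half of the recipe can be sketched self-contained. Note this is an assumption-laden illustration: the `compositeKey` helper is made up, and plain UTF-8 bytes stand in for the real `PVarchar`/`PDecimal` encodings; the 0x00 separator mirrors Phoenix's convention of splitting variable-length row-key fields with a zero byte:

```java
import java.nio.charset.StandardCharsets;

public class CompositeKeySketch {
    // Phoenix separates variable-length row key fields with a 0x00 byte;
    // this sketch mimics that layout, with UTF-8 bytes standing in for
    // the real PVarchar/PDecimal serializers.
    static final byte SEPARATOR = 0x00;

    static byte[] compositeKey(byte[]... fields) {
        int len = 0;
        for (byte[] f : fields) len += f.length + 1; // +1 per separator
        byte[] key = new byte[len - 1];              // no trailing separator
        int pos = 0;
        for (int i = 0; i < fields.length; i++) {
            System.arraycopy(fields[i], 0, key, pos, fields[i].length);
            pos += fields[i].length;
            if (i < fields.length - 1) key[pos++] = SEPARATOR;
        }
        return key;
    }

    public static void main(String[] args) {
        byte[] key = compositeKey(
                "host1".getBytes(StandardCharsets.UTF_8),
                "2016-08-12".getBytes(StandardCharsets.UTF_8));
        System.out.println(key.length); // 5 + 1 + 10 = 16
    }
}
```

For real inserts, each field would first be encoded with the matching Phoenix type (as in the `encodeDecimal` snippet above) before concatenation, so the bytes sort the same way Phoenix expects.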

Re: Phoenix Ifnull

2016-08-12 Thread ankit beohar
Hi Lukáš/James, I have one table in which only one rowkey is available, and for my null-check case I am firing the below queries; the null check is not working: [image: Inline image 1] Please see and let me know if I am doing the right thing or missed something. Best Regards, ANKIT BEOHAR …