ORDER BY Error on Windows

2016-02-24 Thread Yiannis Gkoufas
Hi there, we have been using the Phoenix client without a problem on Linux systems, but we have encountered some problems on Windows. We run the queries through SQuirreL SQL using the 4.5.2 client jar. A query that looks like this: SELECT * FROM TABLE WHERE ID='TEST' works without a problem. But when w
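
A minimal JDBC sketch of the two query shapes described above, for reproducing the difference; the ZooKeeper quorum ("zk-host") is a placeholder, TABLE1 stands in for the table name shown in the post, and the ORDER BY column is hypothetical since the preview is cut off:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OrderByRepro {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // Works on both Linux and Windows clients, per the report.
                ResultSet rs1 = stmt.executeQuery("SELECT * FROM TABLE1 WHERE ID = 'TEST'");
                while (rs1.next()) { System.out.println(rs1.getString("ID")); }
                // The reported failure appears once ORDER BY is added (column name is hypothetical).
                ResultSet rs2 = stmt.executeQuery("SELECT * FROM TABLE1 WHERE ID = 'TEST' ORDER BY ID");
                while (rs2.next()) { System.out.println(rs2.getString("ID")); }
            }
        }
    }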

Re: Question about IndexTool

2015-09-16 Thread Yiannis Gkoufas
cted KeyValues are > then written to the HFile. > > - Gabriel > > On Tue, Sep 15, 2015 at 2:12 PM Yiannis Gkoufas > wrote: > >> Hi there, >> >> I was going through the code related to index creation via MapReduce job >> (IndexTool) and I have some q

Question about IndexTool

2015-09-15 Thread Yiannis Gkoufas
Hi there, I was going through the code related to index creation via the MapReduce job (IndexTool), and I have some questions. If I am not mistaken, for a global secondary index Phoenix creates a new HBase table which has the appropriate key (the column value of the original table you want to index) an
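
As a rough sketch of the flow this thread is about (not taken from the thread itself): the index is declared ASYNC so Phoenix only creates the empty index table, and the IndexTool MapReduce job then builds and loads the index HFiles. Table, column and index names are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AsyncIndexExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // ASYNC creates the index table but leaves it empty;
                // the IndexTool MapReduce job is then run to build the HFiles.
                stmt.execute("CREATE INDEX MY_IDX ON MY_TABLE (V1) ASYNC");
            }
            // The index is then populated out of band, roughly along the lines of:
            //   hbase org.apache.phoenix.mapreduce.index.IndexTool \
            //       --data-table MY_TABLE --index-table MY_IDX --output-path /tmp/my_idx
            // (exact flags depend on the Phoenix version; check the IndexTool usage output)
        }
    }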

Re: Get values that caused the exception

2015-09-03 Thread Yiannis Gkoufas
>) and > check if an exception has been thrown, then log it somewhere. > > As a wild guess, if you're dealing with a Double datatype and getting > NumberFormatException, is it possible one of your values is a NaN? > > Josh > > On Thu, Sep 3, 2015 at 6:11 AM, Yiannis Gkouf

Get values that caused the exception

2015-09-03 Thread Yiannis Gkoufas
Hi there, I am using phoenix-spark to insert multiple entries into a Phoenix table. I get the following errors: ..Exception while committing to database.. ..Caused by: java.lang.NumberFormatException.. I couldn't find in the logs which row was causing the issue. Is it possible to extra
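
A small pre-validation helper along the lines of the reply quoted above: run the same Double conversion yourself before upserting and log anything that fails (including NaN), so the offending row can be identified. The record layout (an id plus a raw value string) is hypothetical:

    public class ValueValidator {
        // Returns true if the raw string is a finite double; logs the row id otherwise.
        public static boolean isValidDouble(String rowId, String raw) {
            try {
                double d = Double.parseDouble(raw);
                if (Double.isNaN(d) || Double.isInfinite(d)) {
                    System.err.println("Row " + rowId + " has non-finite value: " + raw);
                    return false;
                }
                return true;
            } catch (NumberFormatException e) {
                System.err.println("Row " + rowId + " has unparseable value: " + raw);
                return false;
            }
        }
    }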

Extract salted compound index values from Row

2015-08-27 Thread Yiannis Gkoufas
Hi there, I was trying to experiment a bit with accessing Phoenix-enabled tables using the HBase API directly. My primary key is compound, consisting of a String and an Unsigned Long. By printing the bytes of the row key I realized that the byte that separates the values is 0. Moreover I realized that the
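
A sketch of decoding a row key with the layout observed in the post: one leading salt byte, the VARCHAR component terminated by the 0x00 separator, then the UNSIGNED_LONG stored as 8 big-endian bytes. This is an illustration, not Phoenix's own decoding API; verify the layout against the actual table definition:

    import java.util.Arrays;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SaltedKeyDecoder {
        // Expected shape: [1 salt byte][VARCHAR bytes][0x00][8-byte UNSIGNED_LONG]
        public static void decode(byte[] rowKey) {
            int pos = 1;                       // skip the salt byte
            int sep = pos;
            while (rowKey[sep] != 0) sep++;    // find the 0x00 separator
            String stringPart = Bytes.toString(rowKey, pos, sep - pos);
            byte[] longBytes = Arrays.copyOfRange(rowKey, sep + 1, sep + 9);
            long longPart = Bytes.toLong(longBytes);
            System.out.println(stringPart + " / " + longPart);
        }
    }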

Re: Getting ArrayIndexOutOfBoundsException in UPSERT SELECT

2015-08-26 Thread Yiannis Gkoufas
Hi Jaime, is this https://issues.apache.org/jira/browse/PHOENIX-2169 the error you are getting? Thanks On 26 August 2015 at 19:52, Jaime Solano wrote: > Hi guys, > > I'm getting *Error: java.lang.ArrayIndexOutOfBoundsException > (state=08000, code=101)*, while doing a query like the following:

"ERROR 201 (22000): Illegal data" on Upsert Select

2015-08-20 Thread Yiannis Gkoufas
Hi there, I am getting an error while executing: UPSERT INTO READINGS SELECT R.SMID, R.DT, R.US, R.GEN, R.USEST, R.GENEST, RM.LAT, RM.LON, RM.ZIP, RM.FEEDER FROM READINGS AS R JOIN (SELECT SMID,LAT,LON,ZIP,FEEDER FROM READINGS_META) AS RM ON R.SMID = RM.SMID the full stacktrace is: Err

Re: Maven issue with version 4.5.0

2015-08-14 Thread Yiannis Gkoufas
> >> Thanks, >> Mujtaba >> >> On Thu, Aug 13, 2015 at 8:33 AM, Yiannis Gkoufas >> wrote: >> >>> Hi there, >>> >>> When I try to include the following in my pom.xml: >>> >>> >>> org.apache.p

Maven issue with version 4.5.0

2015-08-13 Thread Yiannis Gkoufas
Hi there, When I try to include the following in my pom.xml: org.apache.phoenix phoenix-core 4.5.0-HBase-0.98 provided I get this error: Failed to collect dependencies at org.apache.phoenix:phoenix-core:jar:4.5.0-HBase-0.98: Failed to re

Re: Perform Scan-like range query on VARBINARY key column

2015-07-08 Thread Yiannis Gkoufas
a lot! On 6 July 2015 at 23:55, Yiannis Gkoufas wrote: > Thanks James for your reply! > I will give it a shot! > > > On 6 July 2015 at 19:04, James Taylor wrote: > >> You can use a regular SQL query with comparison operators (=, <, <=, >, >> >=, !=)

Re: Perform Scan-like range query on VARBINARY key column

2015-07-06 Thread Yiannis Gkoufas
INARY and you can use PreparedStatement.setBytes() to bind variables that are arbitrary bytes for your key. The salting will > happen transparently, so you don't have to do anything special. > > Thanks, > James > > On Mon, Jul 6, 2015 at 9:06 AM, Yiannis Gkoufas > wrote: > >> Hi all, >

Perform Scan-like range query on VARBINARY key column

2015-07-06 Thread Yiannis Gkoufas
Hi all, I have created a table in the following way: CREATE TABLE TWEETS (my_key VARBINARY, text varchar, tweetid varchar, user varchar, date varchar, CONSTRAINT my_pk PRIMARY KEY(my_key)) SALT_BUCKETS=120. my_key is a custom byte array key I have constructed. What I want to do is to actually perform a Sc
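
Based on the reply quoted above (comparison operators work on the VARBINARY key, PreparedStatement.setBytes binds arbitrary byte arrays, and salting is handled transparently), a minimal range-scan sketch; the start/stop byte arrays are placeholders for the custom key encoding:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class VarbinaryRangeQuery {
        public static void main(String[] args) throws Exception {
            byte[] startKey = new byte[] { 0x01 };   // placeholder lower bound
            byte[] stopKey  = new byte[] { 0x02 };   // placeholder upper bound
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT text, tweetid FROM TWEETS WHERE my_key >= ? AND my_key < ?")) {
                ps.setBytes(1, startKey);
                ps.setBytes(2, stopKey);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("tweetid") + ": " + rs.getString("text"));
                    }
                }
            }
        }
    }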

Strategy on joining on partial keys

2015-06-29 Thread Yiannis Gkoufas
Hi there, I have two tables I want to join. TABLE_A: ( (A,B), C, D, E), where (A,B) is the composite key. TABLE_B: ( (A), C, D, E), where A is the key. I basically want to join TABLE_A and TABLE_B on A and update TABLE_A with the values C, D, E coming from TABLE_B. When I try to use UPSERT SELECT JO
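
One way to phrase the described update as an UPSERT SELECT, sketched under the assumption that TABLE_A's columns are exactly (A, B, C, D, E) as listed in the post:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PartialKeyJoinUpsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // Keep TABLE_A's full key (A, B); pull C, D, E from TABLE_B matched on A.
                int updated = stmt.executeUpdate(
                    "UPSERT INTO TABLE_A (A, B, C, D, E) " +
                    "SELECT a.A, a.B, b.C, b.D, b.E " +
                    "FROM TABLE_A a JOIN TABLE_B b ON a.A = b.A");
                conn.commit();   // Phoenix connections do not auto-commit by default
                System.out.println("Rows upserted: " + updated);
            }
        }
    }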

Advice for UDF used in GROUP BY

2015-06-22 Thread Yiannis Gkoufas
Hi there, I was just looking for some tips on implementing a UDF to be used in a GROUP BY statement. For instance, let's say I have the table: ( (A, B), C, D), with (A,B) being the composite key. My UDF targets the field C and I want to optimize the query: SELECT A,MYFUNCTION(C),SUM(D) FROM MYTABLE
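
For reference, the query shape from the post run through JDBC, with the UDF expression repeated in the GROUP BY clause so rows are grouped on the function's output; MYFUNCTION is the custom UDF being asked about and is not defined here:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class UdfGroupByQuery {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT A, MYFUNCTION(C), SUM(D) FROM MYTABLE GROUP BY A, MYFUNCTION(C)")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getString(2) + " " + rs.getDouble(3));
                }
            }
        }
    }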

Re: Phoenix Client Configuration

2015-06-18 Thread Yiannis Gkoufas
Hi Thomas, please ignore my last email, I wasn't running the sqlline.py command from within the bin directory. Now it works just fine! Thanks! On 18 June 2015 at 10:39, Yiannis Gkoufas wrote: > Hi Thomas, > > unfortunately just modifying the hbase-site in the current director

Re: Phoenix Client Configuration

2015-06-18 Thread Yiannis Gkoufas
sqlline.SqlLine.dispatch(SqlLine.java:808) at sqlline.SqlLine.begin(SqlLine.java:681) at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:292) Thanks a lot for spending time on this On 17 June 2015 at 22:18, Yiannis Gkoufas wrote: > Thanks a lot Tho

Re: Phoenix Client Configuration

2015-06-17 Thread Yiannis Gkoufas
the -cp flag that is used while starting > SQuirreL. > > On Wed, Jun 17, 2015 at 11:35 AM, Yiannis Gkoufas > wrote: > > Hi Thomas, > > > > thanks for the reply! So what's the case for SQuirreL? > > Or running a main class (which connects to Phoenix) from a jar file

Re: Phoenix Client Configuration

2015-06-17 Thread Yiannis Gkoufas
get > picked up or else it will use the default timeout. > When using sqlline it sets the CLASSPATH to the HBASE_CONF_PATH > environment variable which default to the current directory. > Try running sqlline directly from the bin directory. > > -Thomas > > On Wed, Jun 17

Phoenix Client Configuration

2015-06-17 Thread Yiannis Gkoufas
Hi there, I have failed to understand from the documentation where exactly to set the client configuration. For the server, I think it is clear that I have to modify the hbase-site.xml of my HBase cluster. But what is the case for the client? Does it require having hbase-site.xml somewhere on the classpath?
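
Summarizing the replies above: the client reads hbase-site.xml from the classpath (sqlline uses HBASE_CONF_PATH, which defaults to the current directory; for SQuirreL or a standalone jar the conf directory has to be added to -cp). As a sketch, client properties can also be supplied per connection; the property value below is illustrative only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PhoenixClientConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Example client-side property; the value here is only an illustration.
            props.setProperty("phoenix.query.timeoutMs", "120000");
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host", props)) {
                System.out.println("Connected with custom client properties");
            }
        }
    }

The same effect is usually achieved by launching the application with the HBase conf directory on the classpath, e.g. java -cp /path/to/hbase/conf:phoenix-client.jar MyApp (paths are placeholders).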

Re: Bulk loading through HFiles

2015-06-17 Thread Yiannis Gkoufas
bigger set of data. I > will try to post the code when I polish it a bit. The partitions should be > sorted with KeyValue sorter before bulkSaving them. > > 2015-06-16 15:10 GMT+02:00 Yiannis Gkoufas : > >> Hi, >> >> didn't realize that I only sent to Dawi
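 
A minimal illustration of the point above: sort each partition's KeyValues with HBase's own comparator before handing them to the HFile writer; otherwise the writer rejects out-of-order keys with the "Added a key not lexically larger than previous key" error reported further down in this thread. KeyValue.COMPARATOR is the HBase 0.98/1.x API (newer versions use CellComparator):

    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.hbase.KeyValue;

    public class KeyValueSorting {
        // Sort one partition's KeyValues into HFile order before bulk-saving.
        public static void sortPartition(List<KeyValue> partition) {
            Collections.sort(partition, KeyValue.COMPARATOR);
        }
    }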

Re: Bulk loading through HFiles

2015-06-16 Thread Yiannis Gkoufas
u'd like to > use this code I could try to investigate your problem, but I need the full > stack trace. > > > > On 11.06.2015 00:53, Yiannis Gkoufas wrote: > > Hi Dawid, > > yes I have been using your code. Probably I am invoking the classes in a > wrong way. &g

Re: Schema and indexes for efficient time range queries

2015-06-11 Thread Yiannis Gkoufas
ious about the difference in the performance. Thanks a lot! On 9 June 2015 at 10:26, Yiannis Gkoufas wrote: > Thanks a lot for your replies! > Will try the DATE field and change the order of the Composite Key. > > On 8 June 2015 at 17:37, James Taylor wrote: > >> Both

Re: Bulk loading through HFiles

2015-06-10 Thread Yiannis Gkoufas
e in HBase. I can't make them 'visible' to phoenix. >>> >>> What I noticed today is I have rows loaded from the generated HFiles and >>> upserted through sqlline when I run 'DELETE FROM TABLE' only the upserted >>> one disappears. The loade

Re: Bulk loading through HFiles

2015-06-10 Thread Yiannis Gkoufas
Hi Dawid, I am trying to do the same thing but I hit a wall while writing the HFiles, getting the following error: java.io.IOException: Added a key not lexically larger than previous key=\x00\x168675230967GMP\x00\x00\x00\x01=\xF4h)\xE0\x010GEN\x00\x00\x01M\xDE.\xB4T\x04, lastkey=\x00\x168675230967

Re: Schema and indexes for efficient time range queries

2015-06-09 Thread Yiannis Gkoufas
n, Jun 8, 2015 at 9:17 AM, Vladimir Rodionov > wrote: > > There are several time data types, natively supported by Phoenix: TIME is > > probably most suitable for your case (it should have millisecond > accuracy, > > but you better check it yourself.) > > >

Re: Schema and indexes for efficient time range queries

2015-06-08 Thread Yiannis Gkoufas
e bigint for timestamp, try to fit it into long or use > stringified version in a format suitable for byte-by-byte comparison. > > "2015 06/08 05:23:25.345" > > -Vlad > > On Mon, Jun 8, 2015 at 2:48 AM, Yiannis Gkoufas > wrote: > >> Hi there, >> >

Schema and indexes for efficient time range queries

2015-06-08 Thread Yiannis Gkoufas
Hi there, I am investigating Phoenix as a potential data store for time-series data from sensors. What I am really interested in, as a first milestone, is to have efficient time-range queries for a particular sensor. The results of those queries would consist of 1 or 2 columns (so small rows). I w
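
A schema sketch following the advice in the replies above: lead the composite primary key with the sensor id and follow with a native time type, so a per-sensor time range becomes a range scan over contiguous keys. Table and column names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class SensorTimeSeries {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host")) {
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("CREATE TABLE IF NOT EXISTS SENSOR_READINGS ("
                        + " SENSOR_ID VARCHAR NOT NULL,"
                        + " TS DATE NOT NULL,"        // Phoenix DATE/TIME keep millisecond precision
                        + " VALUE DOUBLE,"
                        + " CONSTRAINT PK PRIMARY KEY (SENSOR_ID, TS))");
                }
                // Per-sensor time-range query: the leading SENSOR_ID plus the TS bounds
                // turn this into a range scan on the row key.
                try (PreparedStatement ps = conn.prepareStatement(
                         "SELECT TS, VALUE FROM SENSOR_READINGS"
                         + " WHERE SENSOR_ID = ? AND TS >= ? AND TS < ?")) {
                    ps.setString(1, "sensor-42");
                    ps.setTimestamp(2, Timestamp.valueOf("2015-06-01 00:00:00"));
                    ps.setTimestamp(3, Timestamp.valueOf("2015-06-08 00:00:00"));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getTimestamp(1) + " " + rs.getDouble(2));
                        }
                    }
                }
            }
        }
    }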