Hi there,
we have been using the Phoenix client without a problem on Linux systems, but we
have encountered some problems on Windows.
We run the queries through SQuirreL SQL using the 4.5.2 client jar.
A query like SELECT * FROM TABLE WHERE ID='TEST' works
without a problem. But when w
cted KeyValues are
> then written to the HFile.
>
> - Gabriel
>
> On Tue, Sep 15, 2015 at 2:12 PM Yiannis Gkoufas
> wrote:
>
>> Hi there,
>>
>> I was going through the code related to index creation via MapReduce job
>> (IndexTool) and I have some q
Hi there,
I was going through the code related to index creation via MapReduce job
(IndexTool) and I have some questions.
If I am not mistaken, for a global secondary index Phoenix creates a new
HBase table which has the appropriate key (the column value of the original
table you want to index) an
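For illustration, the index-row layout described above can be sketched roughly as follows (a Python sketch with hypothetical names; Phoenix's real encoding lives in IndexMaintainer and is more involved):

```python
def index_row_key(indexed_value: bytes, data_row_key: bytes) -> bytes:
    """Approximate layout of a global-index row key: the indexed
    column value, a 0 separator byte, then the data table's row key.
    Appending the data row key keeps index keys unique even when
    indexed values repeat. (Simplified sketch, not Phoenix's exact format.)"""
    return indexed_value + b"\x00" + data_row_key

# A point lookup on the indexed value becomes a prefix scan on the index table.
key = index_row_key(b"alice", b"row-0042")
assert key.startswith(b"alice\x00")
```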
>) and
> check if an exception has been thrown, then log it somewhere.
>
> As a wild guess, if you're dealing with a Double datatype and getting
> NumberFormatException, is it possible one of your values is a NaN?
>
> Josh
>
> On Thu, Sep 3, 2015 at 6:11 AM, Yiannis Gkouf
Hi there,
I am using phoenix-spark to insert multiple entries into a Phoenix table.
I get the following errors:
..Exception while committing to database..
..Caused by: java.lang.NumberFormatException..
I couldn't find in the logs which row was causing the issue.
Is it possible to extra
Hi there,
I was trying to experiment a bit with accessing Phoenix-enabled tables
using the HBase API directly. My primary key is compound consisting of a
String and an Unsigned Long.
By printing the bytes of the row key I realized that the byte separating the
values is 0.
Moreover I realized that the
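The 0 byte observed above matches how Phoenix encodes composite keys: a variable-length type like VARCHAR is followed by a 0 separator, and an unsigned long is stored as 8 big-endian bytes. A minimal sketch in Python (the function name and column choices are mine; Phoenix's PDataType classes are the authoritative rules):

```python
import struct

def encode_row_key(s: str, n: int) -> bytes:
    """Sketch of a (VARCHAR, UNSIGNED_LONG) composite row key:
    UTF-8 string bytes, a 0 separator byte, then the value as
    8 big-endian bytes."""
    return s.encode("utf-8") + b"\x00" + struct.pack(">Q", n)

key = encode_row_key("sensor-1", 42)
assert key[len(b"sensor-1")] == 0          # the separator byte observed above
assert key[-8:] == (42).to_bytes(8, "big")  # big-endian unsigned long
```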
Hi Jaime,
is this https://issues.apache.org/jira/browse/PHOENIX-2169 the error you
are getting?
Thanks
On 26 August 2015 at 19:52, Jaime Solano wrote:
> Hi guys,
>
> I'm getting *Error: java.lang.ArrayIndexOutOfBoundsException
> (state=08000, code=101)*, while doing a query like the following:
Hi there,
I am getting an error while executing:
UPSERT INTO READINGS
SELECT R.SMID, R.DT, R.US, R.GEN, R.USEST, R.GENEST, RM.LAT, RM.LON,
RM.ZIP, RM.FEEDER
FROM READINGS AS R
JOIN
(SELECT SMID,LAT,LON,ZIP,FEEDER
FROM READINGS_META) AS RM
ON R.SMID = RM.SMID
the full stacktrace is:
Err
>>
>> Thanks,
>> Mujtaba
>>
>> On Thu, Aug 13, 2015 at 8:33 AM, Yiannis Gkoufas
>> wrote:
>>
>>> Hi there,
>>>
>>> When I try to include the following in my pom.xml:
>>>
>>>
>>> org.apache.p
Hi there,
When I try to include the following in my pom.xml:
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-core</artifactId>
  <version>4.5.0-HBase-0.98</version>
  <scope>provided</scope>
</dependency>
I get this error:
Failed to collect dependencies at
org.apache.phoenix:phoenix-core:jar:4.5.0-HBase-0.98: Failed to re
a lot!
On 6 July 2015 at 23:55, Yiannis Gkoufas wrote:
> Thanks James for your reply!
> I will give it a shot!
>
>
> On 6 July 2015 at 19:04, James Taylor wrote:
>
>> You can use a regular SQL query with comparison operators (=, <, <=, >,
>> >=, !=)
INARY, and you can use PreparedStatement.setBytes() to bind variables that are arbitrary bytes for your key. The salting will
> happen transparently, so you don't have to do anything special.
>
> Thanks,
> James
>
> On Mon, Jul 6, 2015 at 9:06 AM, Yiannis Gkoufas
> wrote:
>
>> Hi all,
>
Hi all,
I have created a table this way:
CREATE TABLE TWEETS (my_key VARBINARY, text varchar, tweetid varchar, user
varchar, date varchar, CONSTRAINT my_pk PRIMARY
KEY(my_key)) SALT_BUCKETS=120
my_key is a custom byte array key I have constructed
What I want to do is to actually perform a Sc
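The salted key layout matters here: with SALT_BUCKETS set, Phoenix prepends a single salt byte derived from a hash of the row key modulo the bucket count, so a raw HBase Scan has to account for that extra leading byte. A Python sketch of the layout (the modulo stand-in below is NOT Phoenix's actual hash, which lives in SaltingUtil; it only illustrates the structure):

```python
def salted_key(row_key: bytes, buckets: int) -> bytes:
    """Sketch: one salt byte in [0, buckets) is prepended to the
    logical row key. The hash below is an illustrative stand-in."""
    salt = sum(row_key) % buckets       # stand-in hash, not Phoenix's
    return bytes([salt]) + row_key

k = salted_key(b"my-custom-key", 120)
assert len(k) == len(b"my-custom-key") + 1
assert 0 <= k[0] < 120
```

A consequence is that a direct HBase range scan over a salted table needs one scan per bucket (120 here), which is why letting Phoenix handle the salting transparently is usually easier.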
Hi there,
I have two tables I want to join.
TABLE_A: ( (A,B), C, D, E) where (A,B) is the composite key
TABLE_B: ( (A), C, D, E) where A is the key
I basically want to join TABLE_A and TABLE_B on A and update TABLE_A with
the values C, D, E coming from TABLE_B
When I try to use UPSERT SELECT JO
Hi there,
I was just looking for some tips on implementing a UDF to be used in a
GROUP BY statement.
For instance, let's say I have the table:
( (A, B), C, D) with (A,B) being the composite key
My UDF targets the field C and I want to optimize the query:
SELECT A,MYFUNCTION(C),SUM(D) FROM MYTABLE
Hi Thomas,
please ignore my last email, I wasn't running the sqlline.py command from
within the bin directory.
Now it works just fine!
Thanks!
On 18 June 2015 at 10:39, Yiannis Gkoufas wrote:
> Hi Thomas,
>
> unfortunately just modifying the hbase-site in the current director
sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Thanks a lot for spending time on this
On 17 June 2015 at 22:18, Yiannis Gkoufas wrote:
> Thanks a lot Tho
the -cp flag that is used while starting
> squirrel.
>
> On Wed, Jun 17, 2015 at 11:35 AM, Yiannis Gkoufas
> wrote:
> > Hi Thomas,
> >
> > thanks for the reply! So what's the case for SQuirreL?
> > Or running a main class (which connects to phoenix) from a jar file
get
> picked up or else it will use the default timeout.
> When using sqlline it sets the CLASSPATH to the HBASE_CONF_PATH
> environment variable which default to the current directory.
> Try running sqlline directly from the bin directory.
>
> -Thomas
>
> On Wed, Jun 17
Hi there,
I have failed to understand from the documentation where exactly to set the
client configuration.
For the server, I think it is clear that I have to modify hbase-site.xml on my
HBase cluster.
But what about the client? Does it need to have hbase-site.xml
somewhere on the classpath?
bigger set of data. I
> will try to post the code when I polish it a bit. The partitions should be
> sorted with KeyValue sorter before bulkSaving them.
>
> 2015-06-16 15:10 GMT+02:00 Yiannis Gkoufas :
>
>> Hi,
>>
>> didn't realize that I only sent to Dawi
u'd like to
> use this code I could try to investigate your problem, but I need the full
> stack trace.
>
>
>
> On 11.06.2015 00:53, Yiannis Gkoufas wrote:
>
> Hi Dawid,
>
> yes I have been using your code. Probably I am invoking the classes in a
> wrong way.
>
ious about the difference in the performance.
Thanks a lot!
On 9 June 2015 at 10:26, Yiannis Gkoufas wrote:
> Thanks a lot for your replies!
> Will try the DATE field and change the order of the Composite Key.
>
> On 8 June 2015 at 17:37, James Taylor wrote:
>
>> Both
e in HBase. I can't make them 'visible' to phoenix.
>>>
>>> What I noticed today is I have rows loaded from the generated HFiles and
>>> upserted through sqlline when I run 'DELETE FROM TABLE' only the upserted
>>> one disappears. The loade
Hi Dawid,
I am trying to do the same thing but I hit a wall while writing the HFiles,
getting the following error:
java.io.IOException: Added a key not lexically larger than previous
key=\x00\x168675230967GMP\x00\x00\x00\x01=\xF4h)\xE0\x010GEN\x00\x00\x01M\xDE.\xB4T\x04,
lastkey=\x00\x168675230967
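The "Added a key not lexically larger than previous" error means the KeyValues reached the HFile writer out of sorted order: HFiles require strictly increasing byte-wise key order, which is why the earlier suggestion to run a KeyValue sorter over each partition before bulk-saving matters. A tiny Python illustration of the ordering rule (Python byte strings compare lexicographically, like HBase row keys; the sample keys are made up):

```python
# Keys as they might arrive from an unsorted partition.
keys = [b"\x00\x02gen", b"\x00\x01gmp", b"\x00\x01gen"]

# An HFile writer would reject this stream: the second key is not
# lexically larger than the first.
assert not all(a < b for a, b in zip(keys, keys[1:]))

# After sorting, every key is strictly larger than its predecessor,
# which is what the HFile writer checks.
ordered = sorted(keys)
assert all(a < b for a, b in zip(ordered, ordered[1:]))
```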
n, Jun 8, 2015 at 9:17 AM, Vladimir Rodionov
> wrote:
> > There are several time data types, natively supported by Phoenix: TIME is
> > probably most suitable for your case (it should have millisecond
> accuracy,
> > but you better check it yourself.)
> >
>
e bigint for timestamp, try to fit it into long or use
> stringified version in a format suitable for byte-by-byte comparison.
>
> "2015 06/08 05:23:25.345"
>
> -Vlad
>
> On Mon, Jun 8, 2015 at 2:48 AM, Yiannis Gkoufas
> wrote:
>
>> Hi there,
>>
>
Hi there,
I am investigating Phoenix as a potential data-store for time-series data
on sensors.
What I am really interested in, as a first milestone, is to have efficient
time range queries for a particular sensor. From those queries the results
would consist of 1 or 2 columns (so small rows).
I w