Hello,
Check the Java version.
Phoenix was compiled with JDK 7 and you are probably running it on a JDK 6 runtime.
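To make the version mismatch concrete: a JVM rejects classes whose class-file major version is newer than it understands (JDK 6 accepts up to 50; JDK 7 compiles to 51 and triggers java.lang.UnsupportedClassVersionError on JDK 6). A minimal, self-contained sketch of that check — the header bytes are made up for illustration, this is not Phoenix code:

```java
// Minimal sketch: reading the class-file major version, which is how the
// JVM decides a class is "too new". JDK 6 accepts majors up to 50;
// JDK 7 emits 51.
import java.nio.ByteBuffer;

public class ClassVersionCheck {
    static int majorVersion(byte[] classFile) {
        ByteBuffer buf = ByteBuffer.wrap(classFile);
        if (buf.getInt() != 0xCAFEBABE) {
            throw new IllegalArgumentException("not a class file");
        }
        buf.getShort();                  // skip minor version
        return buf.getShort() & 0xFFFF;  // major version
    }

    public static void main(String[] args) {
        // Fake 8-byte header for illustration: magic, minor 0, major 51 (JDK 7)
        byte[] header = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 51};
        int major = majorVersion(header);
        System.out.println(major + (major > 50 ? " -> too new for a JDK 6 runtime" : " -> OK"));
    }
}
```

Running `java -version` on the machine that launches sqlline.py will confirm which runtime is actually picked up.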
From: 聪聪 [mailto:175998...@qq.com]
Sent: Friday, December 19, 2014 9:39 AM
To: user
Subject: sqlline.py operation error
I use HBase version hbase-0.98.6-cdh5.2.0, so I downloaded Phoenix
Hello all,
(Due to the slow speed of Phoenix JDBC – a single machine does ~1000-1500 rows/sec)
I am also documenting myself about loading data into Phoenix via MapReduce.
So far I understood that the Key + List<[Key,Value]> to be inserted into the
HBase table is obtained via a “dummy” Phoenix connection and then written
directly to the HBase table.
Then you should hit the bottleneck of HBase itself. It should be 10 to 30+
times faster than your current solution, depending on hardware of course.
I'd prefer this solution for stream writes.
Vaclav
On 01/13/2015 10:12 AM, Ciureanu, Constantin (GfK) wrote:
> Hello all,
>
>
> use multiple results of
> PDataType.TYPE.toBytes() as the rowkey. For values use the same logic. Data
> types are defined as enums in this class:
> org.apache.phoenix.schema.PDataType.
>
> Good luck,
> Vaclav;
>
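To illustrate Vaclav's suggestion: in a real MR job you would call org.apache.phoenix.schema.PDataType.LONG.toBytes(...) directly; the self-contained sketch below only mirrors its sortable encoding (big-endian with the sign bit flipped) to build a composite (LONG, VARCHAR) rowkey. The key layout and values are made up for illustration.

```java
// Self-contained sketch of a memcmp-sortable composite rowkey (LONG + VARCHAR).
// A real job should use org.apache.phoenix.schema.PDataType as suggested above;
// this mirrors that encoding so negative longs sort before positive ones byte-wise.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeySketch {
    static byte[] encodeLong(long v) {
        // Flipping the sign bit makes signed longs compare correctly
        // under unsigned lexicographic byte order.
        return ByteBuffer.allocate(8).putLong(v ^ Long.MIN_VALUE).array();
    }

    static byte[] compositeKey(long datum, String id) {
        byte[] ts = encodeLong(datum);
        byte[] s = id.getBytes(StandardCharsets.UTF_8); // trailing VARCHAR: no separator needed
        byte[] key = new byte[ts.length + s.length];
        System.arraycopy(ts, 0, key, 0, ts.length);
        System.arraycopy(s, 0, key, ts.length, s.length);
        return key;
    }

    // Unsigned lexicographic compare, i.e. how HBase orders rowkeys.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] k1 = compositeKey(-5L, "a");
        byte[] k2 = compositeKey(3L, "a");
        System.out.println(compare(k1, k2) < 0 ? "keys sort numerically" : "broken");
    }
}
```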
> On 01/13/2015 10:58 AM, Ciureanu, Constantin (GfK) wrote:
>> Thank you Vaclav,
machines, 24 tasks can run at the
same time).
Can this be because of some limitation on the number of connections to Phoenix?
Regards,
Constantin
-Original Message-
From: Ciureanu, Constantin (GfK) [mailto:constantin.ciure...@gfk.com]
Sent: Wednesday, January 14, 2015 9:44 AM
To: user
In order to first determine what the real issue is,
could you give a general overview of how your MR job is implemented (or even
better, give me a pointer to it on GitHub or something similar)?
- Gabriel
On Thu, Jan 15, 2015 at 2:19 PM, Ciureanu, Constantin (GfK)
wrote:
> Hello all,
>
> I finished the MR Job - for now it just failed a few times since the Mappers
> gave some weird timeout (600 s
Hello Ralph,
Try to check whether the Pig script produces keys that overlap (that would
explain the reduction in the number of rows).
Good luck,
Constantin
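A pure-Java illustration of Constantin's point (the records below are made up): an upsert is last-write-wins per rowkey, so N input records with K distinct keys end up as K rows.

```java
// Why overlapping keys shrink the loaded row count: same key overwrites,
// just like an HBase/Phoenix upsert.
import java.util.LinkedHashMap;
import java.util.Map;

public class OverlappingKeys {
    public static void main(String[] args) {
        String[][] records = {{"k1", "a"}, {"k2", "b"}, {"k1", "c"}};
        Map<String, String> table = new LinkedHashMap<>();
        for (String[] r : records) {
            table.put(r[0], r[1]); // same key overwrites the earlier value
        }
        System.out.println(records.length + " records -> " + table.size() + " rows");
        // prints "3 records -> 2 rows"
    }
}
```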
From: Ravi Kiran [mailto:maghamraviki...@gmail.com]
Sent: Tuesday, February 03, 2015 2:42 AM
To: user@phoenix.apache.org
Subject: Re: Pig vs
Hello all,
Is there any Cascading / Scalding Tap to read / write data from and to Phoenix?
I couldn’t find anything on the internet so far.
I know that there is a Cascading Tap to read from HBase and Cascading
integration with JDBC.
Thank you,
Constantin
Hello all,
1. Is there a good explanation why updating the statistics:
update statistics tableX;
made this query 2x slower? (It was 27 seconds before; now it's
somewhere between 60 and 90 seconds.)
select count(*) from tableX;
nks of 1 rows
within that region.
Have you modified any of the parameters related to statistics, like
‘phoenix.stats.guidepost.width’?
Regards
Ram
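For reference, the property Ram mentions is set in hbase-site.xml on the region servers. The value below is purely illustrative, not a recommendation; it is a size in bytes, and wider guideposts mean fewer, larger parallel scan chunks:

```xml
<!-- hbase-site.xml (region servers). Hypothetical tuning value only. -->
<property>
  <name>phoenix.stats.guidepost.width</name>
  <value>314572800</value> <!-- 300 MB, illustrative -->
</property>
```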
From: Ciureanu, Constantin (GfK)
[mailto:constantin.ciure...@gfk.com<mailto:constantin.ciure...@gfk.com>]
Sent: Wednesday, Februar
mns
width, number of region servers in your cluster plus their heap size,
HBase/Phoenix version and any default property overrides so we can identify why
stats are slowing things down in your case.
Thanks,
Mujtaba
On Thu, Feb 12, 2015 at 12:56 AM, Ciureanu, Constantin (GfK)
mailto:constantin.ci
are your rows and how much memory is
available on your RS/HBase heap?
3. Can you also send output of explain select count(*) from tablex for this
case?
Thanks,
Mujtaba
On Fri, Feb 13, 2015 at 12:34 AM, Ciureanu, Constantin (GfK)
mailto:constantin.ciure...@gfk.com>> wrote:
Hello Mujtaba,
statistics tableX;
Error: ERROR 6000 (TIM01): Operation timed out. Query couldn't be completed in
the alloted time: 60 ms (state=TIM01,code=6000)
Thank you,
Constantin
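One possible workaround for the timeout above, assuming the statement simply exceeds the default client-side statement timeout, is to raise phoenix.query.timeoutMs in the hbase-site.xml of the client running sqlline; the 10-minute value is illustrative:

```xml
<!-- hbase-site.xml on the client. Illustrative value; pick what fits
     your longest UPDATE STATISTICS run. -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```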
From: Ciureanu, Constantin (GfK) [mailto:constantin.ciure...@gfk.com]
Sent: Monday, February 16, 2015 10:31 AM
To: us
Hi Matthew,
Is it working without the quotes (“ / ")? I see you are using two types of
quotes, which is odd. I guess they are not needed and are probably causing
trouble; I never need to use quotes myself.
Alternatively check the types of data in those 2 tables (if the field types are
not the same in
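To illustrate the quoting point: Phoenix upper-cases unquoted identifiers, while double-quoted identifiers stay case-sensitive, so mixing the two styles can make a join condition resolve to different columns than intended. The tables below are hypothetical:

```sql
-- Hypothetical tables t1 and t2, each created with a lower-case "id" column.
SELECT t1.id   FROM t1 JOIN t2 ON t1.id   = t2.id;    -- both sides resolve to ID
SELECT t1."id" FROM t1 JOIN t2 ON t1."id" = t2."id";  -- both sides stay id
```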
Matt
From: Ciureanu, Constantin (GfK)
[mailto:constantin.ciure...@gfk.com<mailto:constantin.ciure...@gfk.com>]
Sent: 20 February 2015 14:40
To: user@phoenix.apache.org<mailto:user@phoenix.apache.org>
Subject: RE: Inner Join not returning any results in Phoenix
Hi Matthew,
Is it wor
Hello James,
Sorry, no – that's not my case.
I haven't run any (minor/major) compaction on my table.
Regards,
Constantin
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, March 03, 2015 2:20 AM
To: user; Ciureanu, Constantin (GfK)
Subject: Re: Update statistics made query 2-3x
ll?
Thanks,
James
On Mon, Feb 16, 2015 at 8:44 AM, Vasudevan, Ramkrishna S
mailto:ramkrishna.s.vasude...@intel.com>>
wrote:
Without update statistics – if we run select count(*), what is the PLAN that it
executes? One of the region servers has more data, I believe.
Regards
Ram
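The plan Ram asks for can be captured with EXPLAIN; the statement below uses the table name from this thread. The number of chunks/ways reported in the plan reflects the table's regions and statistics guideposts:

```sql
EXPLAIN SELECT COUNT(*) FROM tableX;
```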
From: Ciureanu,
on your other questions.
Thanks,
James
On Tue, Mar 3, 2015 at 4:23 AM, Ciureanu, Constantin (GfK)
mailto:constantin.ciure...@gfk.com>> wrote:
Hello James,
Btw, I noticed some other issues:
- My table key is (DATUM, … ) ordered ascending by key (LONG, in
milliseconds) – I have changed th