I'm also interested to hear about this! Please do give more info!
On 7 Apr 2016 7:00 p.m., "James Taylor" wrote:
> I'm interested in the "how". Thanks for sharing this info, Steve.
>
> James
>
> On Thu, Apr 7, 2016 at 8:30 AM, Steve Terrell
>
Cool. That's big news for us.
On 8 Mar 2016 2:15 p.m., "Josh Mahonin" wrote:
> Hi all,
>
> Just thought I'd let you know that Flyway 4.0 was recently released, which
> includes support for DB migrations with Phoenix.
>
> https://flywaydb.org/blog/flyway-4.0
>
> Josh
>
I asked about Phoenix on EMR, and also received no response.
It would be nice to have a comment on this from someone in the know, if
only to confirm that Phoenix and EMR are no longer friends.
James
On 26/01/16 16:37, j pimmel wrote:
Hi all
I'm just evaluating using HBase with Phoenix on
I don't know what the answer is to your question, but I have hit this
before.
It seems that adding a column is a lazy operation, and results in
changing just the metadata, so it returns almost immediately; but
dropping a column is not. In fact, if you add a column and then
immediately drop
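A sketch of that asymmetry in Phoenix DDL terms (the table and column names are hypothetical, and how expensive a given drop is may depend on the version):

```sql
-- Adding a column is a lazy, metadata-only change, so it returns almost immediately.
ALTER TABLE my_table ADD extra_col VARCHAR;

-- Dropping a column also has to deal with the stored data, so it can take far longer.
ALTER TABLE my_table DROP COLUMN extra_col;
```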
Bravo, Sir!
Thanks for doing that.
On 03/11/15 17:10, Andrew Purtell wrote:
Today I pushed a new branch '4.6-HBase-1.0-cdh5' and the tag
'v4.6.0-cdh5.4.5' (58fcfa6) to
https://github.com/chiastic-security/phoenix-for-cloudera. This is the
Phoenix 4.6.0 release, modified to build against CDH
This would be of huge benefit to us.
One of the problems we have with MySQL is the locking that's done for many
schema changes. Any support in Phoenix for online schema changes will be a
major plus point.
James
On 20 Oct 2015 5:38 p.m., "James Taylor" wrote:
> We don't
@JT/Maryann:
I've submitted a pull request for fixing a test
(DerivedTableIT.testDerivedTableWithGroupBy()), which passed on Java 7
but not on Java 8.
The test was retrieving two rows from a query with no ORDER BY clause,
and assuming that they would come back in a specific order. This
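Since the fix amounts to comparing result sets without relying on order, here is a minimal illustration of the idea (not the actual test code; the helper name is made up):

```python
def assert_rows_equal(actual, expected):
    """Compare result rows ignoring order: SQL promises no order without ORDER BY."""
    assert sorted(actual) == sorted(expected), f"{actual!r} != {expected!r}"

# Rows may legitimately arrive in either order; both orderings pass.
assert_rows_equal([("b", 2), ("a", 1)], [("a", 1), ("b", 2)])
assert_rows_equal([("a", 1), ("b", 2)], [("a", 1), ("b", 2)])
```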
Commit message updated.
On 02/10/15 17:02, James Taylor wrote:
Patch looks great - thanks so much, James. Would you mind prefixing
the commit message with "PHOENIX-2256" as that's what ties the pull to
the JIRA? I'll get this committed today.
James
On Fri, Oct 2, 2015 at 7:34
The JDBC methods work just fine.
You're really better off using them, rather than querying the internal
tables, because if implementation details change, your code will break.
On 2 Oct 2015 21:29, "Konstantinos Kougios"
wrote:
> I didn't try the jdbc getMetaData
a Java 7 implementation detail that isn't contractual...
James
On 14/09/15 18:38, Maryann Xue wrote:
Thank you, James! I have assigned the issue to myself.
On Mon, Sep 14, 2015 at 7:39 AM James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendel
You're asking for every single row of the table, so nothing's going to
avoid a full scan. The index wouldn't help.
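For contrast, a query with a predicate on the primary key (hypothetical example) lets Phoenix narrow the scan rather than touch every row:

```sql
-- A range filter on the leading PK column becomes a range scan, not a full scan.
SELECT ID FROM EXP WHERE ID BETWEEN 1 AND 100;
```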
On 30/09/15 15:18, Sumit Nigam wrote:
Hi,
I have a table as:
CREATE TABLE EXP (ID BIGINT NOT NULL PRIMARY KEY, TEXT VARCHAR);
If I explain the select:
EXPLAIN SELECT ID FROM
*From:* James Heather <james.heat...@mendeley.com>
*To:* user@phoenix.apache.org
*Sent:* Wednesday, September 30, 2015 7:49 PM
*Subject:* Re: Explain plan over primary key column
You're asking for every single row of the table, so nothing's going to
avoid
uery are a
part of it.
Sorry again.
Sumit
----
*From:* James Heather <james.heat...@mendeley.com>
*To:* user@phoenix.apache.org
*Sent:* Wednesday, September 30, 2015 7:58 PM
*Subject:* Re: Explain plan over primary key column
If no one else will be hitting the table while you complete the
operation, and if you don't mind about missing a few sequence values
(i.e., having a gap), you should just need the following.
SELECT NEXT VALUE FOR sequencename FROM sometable;
That will tell you the next value the sequence
On Sep 22, 2015, at 2:47 PM, James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendeley.com>> wrote:
If no one else will be hitting the table while you complete the
operation, and if you don't mind about missing a few sequenc
I don't know for certain what that parameter does but it sounds a bit
scary to me...
On 21/09/15 09:41, rajeshb...@apache.org wrote:
You can try adding the property below to hbase-site.xml and restarting HBase:
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
Thanks,
Rajeshbabu.
On Mon, Sep 21, 2015 at 12:51 PM,
We're using CDH5, which runs HBase 1.0.
I think it would kill the whole Phoenix project for us if there were no
1.0 support.
James
On 19/09/15 01:53, James Taylor wrote:
+user list
Please let us know if you're counting on HBase 1.0 support given that
we have HBase 1.1 support.
Thanks,
Is it still possible/advisable to run Phoenix on EMR?
There's some documentation on it
https://phoenix.apache.org/phoenix_on_emr.html
but it's from the time of Henry VIII. (Presumably this page wants either
updating or deleting, depending on whether the idea is still viable.)
It's certainly
tarballs I built from this, get them here:
>>>>> Binary:
>>>>> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.5.2-cdh5.4.5-bin.tar.gz
>>>>> Source:
>>>>> http://apurtell.s3.amazonaws.com/phoenix/phoenix-4.5.2-cdh5.4.5-src.tar.gz
com
<mailto:maghamraviki...@gmail.com>> wrote:
Hi James,
You need to increase the value of hbase.rpc.timeout in
hbase-site.xml on your client end.
http://hbase.apache.org/book.html#trouble.client.lease.exception
Ravi
On Tue, Sep 15, 2015 at 12:56 PM, James Heather
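For reference, Ravi's suggestion as a client-side hbase-site.xml fragment (the timeout value here is illustrative, not a recommendation):

```xml
<property>
  <name>hbase.rpc.timeout</name>
  <!-- illustrative: 10 minutes, in milliseconds -->
  <value>600000</value>
</property>
```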
I found today that I can't execute this:
UPSERT INTO loadtest.testing (id, firstname, lastname) SELECT NEXT VALUE
FOR loadtest.testing_id_seq, firstname, lastname FROM loadtest.testing
when the table has more than 500,000 rows in it ("MutationState size of
512000 is bigger than max allowed
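As far as I know, the cap behind that error message is the client-side phoenix.mutate.maxSize setting (the default of 500,000 rows matches the error above); a sketch of raising it, with an illustrative value:

```xml
<property>
  <name>phoenix.mutate.maxSize</name>
  <!-- default is 500000 rows; raise with care, since mutations buffer client-side -->
  <value>1000000</value>
</property>
```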
Reported as
https://issues.apache.org/jira/browse/PHOENIX-2256
James
On 14/09/15 11:56, James Heather wrote:
Table "b" should get evicted first, which creates enough space for
"d". But in fact "c" gets evicted first, and then "b" needs to be
evicted
Reported as
https://issues.apache.org/jira/browse/PHOENIX-2257
On 14/09/15 12:24, James Heather wrote:
I also have two failing integration tests in DerivedTableIT:
Failed tests:
DerivedTableIT.testDerivedTableWithGroupBy:320 expected:<['e']> but
was:<['b'
know if there's a race condition in here somewhere. It's odd
that no one has picked up on a failing test before, so I'm wondering
whether it succeeds in some environments. But it fails for me on both
Ubuntu and Fedora (both with 64-bit Java 8).
James
On 14/09/15 10:16, James Heather wrote:
the POM so
they can be overridden on the maven command line with -D.
That would be easy and something I think we could get
committed without any controversy.
On Sep 11, 2015, at 6:53 AM, James Heather
<james.heat...@mendeley.com> wrote:
Yes
Thanks for filing these issues. I believe these failures occur on Java
8, but not on 7. Not sure why, though.
James
On Monday, September 14, 2015, James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendeley.com>> wrote:
Reported as
https://issues.apa
Thank you, James! I have assigned the issue to myself.
On Mon, Sep 14, 2015 at 7:39 AM James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendeley.com>> wrote:
Reported as
https://issues.apache.org/jira/browse/PHOENIX-2257
On 14/09/15 12:24, James H
want help?
On Sep 14, 2015, at 6:21 AM, James Heather <james.heat...@mendeley.com
<mailto:james.heat...@mendeley.com>> wrote:
I've set up a repo at
https://github.com/chiastic-security/phoenix-for-cloudera
It is a fork of the vanilla Phoenix github mirror. I've created a
branch called
Does anyone else get a test failure when they build Phoenix?
If I make a fresh clone of the repo, and then run mvn package, I get a
test failure:
---
Test set: org.apache.phoenix.schema.PMetaDataImplTest
Sorry, yes, it does make a couple of very minor source changes.
I do wonder whether ultimately we'll be able to get those into the main
repo as conditionals somehow, but let's get the repo up and running first.
James
On 13 Sep 2015 8:47 am, "James Heather" <james.heat...@mendel
to address:
>>> 1) How to maintain CDH compatible Phoenix code base?
>>> 2) Is having a CDH compatible branch even an option?
>>>
>>> Krishna
>>>
>>>
>>>
>>> On Friday, August 28, 2015, Andrew Purtell <andrew.pur
With your query as it stands, you're trying to construct 250K*270M pairs
before filtering them. That's 67.5 trillion. You will need a quantum
computer.
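A quick check of the arithmetic above:

```python
# An unfiltered join pairs every row on one side with every row on the other.
pairs = 250_000 * 270_000_000
print(f"{pairs:,}")                    # 67,500,000,000,000
print(pairs / 1_000_000_000_000)       # 67.5 (trillion)
```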
I think you will be better off restructuring...
James
On 11 Sep 2015 5:34 pm, "M. Aaron Bossert" wrote:
> AH! Now I get
I just tried to create an index on a column for a table with 200M rows.
Creating the index timed out:
0: jdbc:phoenix:172.31.31.143> CREATE INDEX idx_lastname ON loadtest.testing
(lastname);
Error: Operation timed out (state=TIM01,code=6000)
java.sql.SQLTimeoutException: Operation
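If the timeout needs raising rather than the operation restructuring, the client-side query timeout is, I believe, governed by phoenix.query.timeoutMs (the value below is illustrative):

```xml
<property>
  <name>phoenix.query.timeoutMs</name>
  <!-- illustrative: 10 minutes, in milliseconds -->
  <value>600000</value>
</property>
```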
Ah, too late, I'm afraid. I dropped it.
James
On 11/09/15 11:41, rajeshb...@apache.org wrote:
James,
It should be in building state. Can you check what's the state of it?
Thanks,
Rajeshbabu.
On Fri, Sep 11, 2015 at 4:04 PM, James Heather
<james.heat...@mendeley.com <mailto:jame
present.
3) Then start the sqlline.py command prompt.
4) Then run the CREATE INDEX query.
Thanks,
Rajeshbabu.
On Fri, Sep 11, 2015 at 3:26 PM, James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendeley.com>> wrote:
I just tried to create an index on a column for a table with 2
k? I think
>> little must be done besides compile against the CDH artifacts for binary
>> compatibility.
>>
>>
>> > On Aug 28, 2015, at 11:19 AM, James Heather <james.heat...@mendeley.com>
>> wrote:
>> >
>> > Is anyone interested in helping with
if bad data
is found, but it doesn't look like this is currently checked (when bad
data is encountered). I've filed PHOENIX-2239 for this.
Thanks,
James
On Tue, Sep 8, 2015 at 11:26 AM, James Heather
<james.heat...@mendeley.com <mailto:james.heat...@mendeley.com>> wrote:
I've had another go running the performance.py script to upsert
100,000,000 rows into a Phoenix table, and again I've ended up with
around 500 rows missing.
Can anyone explain this, or reproduce it?
It is rather concerning: I'm reluctant to use Phoenix if I'm not sure
whether rows will be
Are you using sqlline via PuTTY?
If so, consider stretching its window wider so the whole description
will fit (and stretch it before running sqlline).
Also, consider using the SQuirreL client for a better view of your data...
Regards,
David
On 3 Sep 2015 16:21, "James Heather"
Eek. Any idea what's wrong here, or how to fix it?
Notice that the number of rows returned does increase when I add a
table, so I think the information is there, just not being
returned/displayed for some reason.
0: jdbc:phoenix:172.31.30.216> !describe performance_1
https://issues.apache.org/jira/browse/PHOENIX-2223
James
On 02/09/15 15:09, Jean-Marc Spaggiari wrote:
Yep, now I can only totally agree with you.
I think you should open a JIRA.
2015-09-02 10:05 GMT-04:00 James Heather <james.heat...@mendeley.com
<mailto:james.heat...@mendel
command, which is
one line (the command itself) and not the number of deleted lines?
Can you try to put some rows into the table and do the delete again?
Or try without the WHERE clause too?
2015-09-02 9:54 GMT-04:00 James Heather <james.heat...@mendeley.com
<mailto:james.heat...@mendeley.com
You could look at SchemaCrawler (http://sualeh.github.io/SchemaCrawler/).
I've used it for MySQL databases; it is JDBC-based, so it could certainly
be extended to work for Phoenix.
I have in mind to do this myself one day, but not for a while.
James
On 22 August 2015 at 00:50, Saurabh Malviya
Is there a nice way of taking a backup of a Phoenix database?
Either just the schema, or schema plus data?
James
I'm a bit unclear as to what changes need to be made to hbase-site.xml
when I'm running Phoenix on CDH5.
The Phoenix site http://phoenix.apache.org/secondary_indexing.html
tells me that I need to add this:
<property>
  <name>hbase.regionserver.wal.codec</name>
Can anyone explain why I can't always see the 'modified' column in the
table I've just created?
0: jdbc:phoenix:172.17.0.19> create table something.blobby_tab (id bigint not
null primary key, email varchar(200), modified date);
No rows affected (1.118 seconds)
0: jdbc:phoenix:172.17.0.19> upsert