Hello,
I am planning on supplying all guidepost properties through my
DriverManager.getConnection method by passing a java.util.Properties of
key=value pairs.
In the documentation at https://phoenix.apache.org/tuning.html, it mentions
that the guidepost parameters are server-side parameters. Does that
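A minimal sketch of that approach, assuming the phoenix.stats.guidepost.width property name from the tuning page; the ZooKeeper quorum in the URL and the 100 MB value are placeholders, and the actual connection call is left commented out since it needs a live cluster and the Phoenix driver:

```java
import java.util.Properties;

public class GuidepostProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Guidepost width in bytes -- 100 MB here is only an example value.
        props.setProperty("phoenix.stats.guidepost.width", "104857600");

        // Illustrative only: requires a running cluster and the Phoenix
        // driver on the classpath, so the call stays commented out.
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(
        //         "jdbc:phoenix:zk-host:2181", props);

        System.out.println(props.getProperty("phoenix.stats.guidepost.width"));
    }
}
```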
Hi Ravi,
Raised PHOENIX-2266 JIRA for the same.
Thanks,
Durga Prasad
On Tue, Sep 15, 2015 at 7:30 PM, Ravi Kiran
wrote:
> Hi Durga,
> Please file a JIRA. With the current API in hand, it is difficult to
> read from multiple tables.
> Regards
> Ravi
>
> On Tue, Sep 15, 2015 at 1:47 AM, Ns G
You should change the hbase.regionserver.lease.period,
hbase.client.scanner.timeout.period in hbase-site.xml
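For example, in hbase-site.xml on the region servers; the 20-minute value here is only an illustration, not a recommendation:

```xml
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>1200000</value>
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>1200000</value>
</property>
```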
From: user-return-3791-tanzqgz=163@phoenix.apache.org
[mailto:user-return-3791-tanzqgz=163@phoenix.apache.org] on behalf of Gaurav Kanade
Sent: September 16, 2015 10:10
To: user@phoenix.apache.o
I am facing the same problem - it seems that my newly applied settings are
not being picked up correctly. I have set the rpc.timeout as well as
phoenix.query.timeout to appropriate values; in addition I changed the
client retries to 50 instead of 36 (I see the number 36 in your message
too) and yet
The other important timeout is Phoenix specific: phoenix.query.timeoutMs.
Set this in your hbase-site.xml on the client side to the value in
milliseconds for the amount of time you're willing to wait before the query
finishes. I might be wrong, but I believe the hbase.rpc.timeout config
parameter n
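As a sketch, the client-side hbase-site.xml entries being discussed might look like this; the 10-minute values are placeholders for however long you're willing to wait:

```xml
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
```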
Hi James,
You need to increase the value of hbase.rpc.timeout in hbase-site.xml on
your client end.
http://hbase.apache.org/book.html#trouble.client.lease.exception
Ravi
On Tue, Sep 15, 2015 at 12:56 PM, James Heather
wrote:
> I'm a bit lost as to what I need to change, and where I need to c
I used dev/make_rc.sh, built with Maven 3.2.2, Java 7u79. Ubuntu build host.
On Tue, Sep 15, 2015 at 4:58 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> No, I don't know why. I will ask and see if I can get a response on that. I
> have also started the thread for the Parcel. I will s
No, I don't know why. I will ask and see if I can get a response on that. I
have also started the thread for the Parcel. I will see if I find enough
help to work on that.
Regarding the branch you made, I tried to build it but got the error below.
What's the command to build it?
Thanks,
JM
[INFO]
Cool, thanks J-M.
Do you know why support for query tracing was removed? If it's just a
matter of porting it to the HTrace that ships with CDH, I can look at that.
On Tue, Sep 15, 2015 at 4:49 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Nice! I will see if there is a way to buil
Nice! I will see if there is a way to build a parcel from that the same way
there is a parcel for Apache Phoenix 4.3 in Cloudera Labs... Will clone
what you did and try to build it locally...
2015-09-15 19:45 GMT-04:00 Andrew Purtell:
> I pushed updates to branch 4.5-HBase-1.0-cdh5 and the tag v
I pushed updates to branch 4.5-HBase-1.0-cdh5 and the tag v4.5.2-cdh5.4.5
(1fcb5cf). This is the pending Phoenix 4.5.2 release, currently at RC1,
likely to pass, that will build against CDH 5.4.5. If you want release
tarballs I built from this, get them here:
Binary:
http://apurtell.s3.amazonaw
Correct, you must use the CREATE VIEW <name> ( <column definitions> ) syntax
when you're creating a view directly over HBase tables. Even with this
syntax, you must also specify the primary key (which is not currently
required - I filed PHOENIX-2265 for this). The reason is that you're
telling Phoenix the structure of you
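A sketch of that syntax, assuming a hypothetical existing HBase table t1 with a column family f1; the names and types are made up for illustration:

```sql
-- The PRIMARY KEY column maps onto the HBase row key;
-- "f1"."val" maps onto column family f1, qualifier val.
CREATE VIEW "t1" (
    pk VARCHAR PRIMARY KEY,
    "f1"."val" VARCHAR
);
```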
I'm a bit lost as to what I need to change, and where I need to change
it, to bump up timeouts for this kind of error:
Caused by: org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36,
exceptions:
Tue Sep 15 18:48:13 U
Yes, I don't think creating views that way on existing HBase tables is
supported. Maybe James can confirm?
FWIW, I tried the following on 4.4.1 version of Phoenix:
hbase shell: create 'T2', 'f1', 'f2', 'f3'
create 'T1', 'f1', 'f2'
sqlline: create view T2_VIEW AS SELECT * FROM T2;
This fails:
at
org.
The upsert statements in the MR jobs are used to convert data into the
appropriate encoding for writing to an HFile -- the data doesn't actually
get pushed to Phoenix from within the MR job. Instead, the created
KeyValues are extracted from the "output" of the upsert statement, and the
statement is
In order to connect to a kerberos-secured cluster from the query server,
the query server must be configured to connect using the security
credentials. This means all operations performed by that query server are
performed using these credentials. See the security-related configs
mentioned in the C
So, if I set this property to a very small value, then it will not be honored
if auto-commit is set to true? Is it that, in the case of auto-commit, the
engine figures out its own buffer size, or can we influence it through some
property again?
From: James Taylor
To: "user@phoenix.apache.org"
Yes, when using auto commit, the phoenix.mutate.maxSize config property has
no effect.
On Tuesday, September 15, 2015, James Heather
wrote:
> I can confirm that setting auto-commit to true avoided the problem.
>
> That's for this particular query, though. I don't know whether it would do
> if th
I can confirm that setting auto-commit to true avoided the problem.
That's for this particular query, though. I don't know whether it would do
if the select query hit a different table. Probably that would mean it
would execute client side, and the error would come back.
On 15 Sep 2015 16:12, "Sum
Hi James,
Is it right to assume that with auto-commit set to true, the mutate maxSize
being exceeded error would not occur? This should be because the server side
now commits automatically when the batch size is reached/buffered.
Thanks,
Sumit
From: James Taylor
To: "user@phoenix.apac
That config setting (phoenix.mutate.maxSize) is just a safety valve to
prevent out of memory errors and may be set to whatever you like. However,
if you're going to just turn around and do a commit after running your
upsert statement, performance will improve if you turn on auto commit
instead (co
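A sketch of the auto-commit pattern being recommended; the Phoenix connection lines are left commented out since they need a live cluster, and the helper below only illustrates the batching arithmetic with a hypothetical batch size:

```java
public class AutoCommitSketch {
    // With auto commit on, the client flushes mutations in batches as the
    // UPSERT SELECT runs instead of accumulating them all, which is why the
    // phoenix.mutate.maxSize safety valve never trips. This helper shows
    // how many flushes a row count implies for a given batch size.
    static long commitsNeeded(long totalRows, long batchSize) {
        return (totalRows + batchSize - 1) / batchSize;  // ceiling division
    }

    public static void main(String[] args) {
        // Illustrative only -- requires the Phoenix driver and a cluster:
        // java.sql.Connection conn = java.sql.DriverManager
        //         .getConnection("jdbc:phoenix:zk-host:2181");
        // conn.setAutoCommit(true);  // flush per batch, no explicit commit
        // conn.createStatement().execute("UPSERT INTO t SELECT ... FROM t");

        System.out.println(commitsNeeded(512_000, 1_000));
    }
}
```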
Hi Durga,
Please file a JIRA. With the current API in hand, it is difficult to
read from multiple tables.
Regards
Ravi
On Tue, Sep 15, 2015 at 1:47 AM, Ns G wrote:
> Hi There,
>
> Can any one provide an answer?
> Can I use PhoenixMapReduceUtil to read two different tables in a single
> mapre
For most cases, you're able to delete from a table with immutable rows (I
believe as of 4.2 release), so that kind of switching shouldn't be
necessary. In theory, that switching should be ok, but I'm not sure we've
tested that code path when the table has an index.
Thanks,
James
On Tuesday, Septe
Hey Samarth,
Thanks for looking into that for me! I can give you a bit more information
about what I’m doing.
We have view creation scripts that we use to create our schema when we spin up
development environments. These scripts also get run on upgrades to our main
cluster if we need to add ne
Hi there,
I was going through the code related to index creation via MapReduce job
(IndexTool) and I have some questions.
If I am not mistaken, for a global secondary index Phoenix creates a new
HBase table which has the appropriate key (the column value of the original
table you want to index) an
For example:
CREATE TABLE test(
id BIGINT NOT NULL PRIMARY KEY,
value BIGINT
) IMMUTABLE_ROWS=true;
CREATE INDEX idx ON test(id, value);
As I know:
> Creating indexes on tables with immutable rows should only be used for
use cases where the data is written once and
I found today that I can't execute this:
UPSERT INTO loadtest.testing (id, firstname, lastname) SELECT NEXT VALUE
FOR loadtest.testing_id_seq, firstname, lastname FROM loadtest.testing
when the table has more than 500,000 rows in it ("MutationState size of
512000 is bigger than max allowed si
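If raising the safety valve is the route taken (rather than turning on auto-commit), a client-side hbase-site.xml entry like the following is one option; the value is arbitrary and just needs to exceed the mutation count:

```xml
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>1000000</value>
</property>
```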
Hi There,
Can any one provide an answer?
Can I use PhoenixMapReduceUtil to read two different tables in a single
mapreduce program (in mapper)
Thanks,
On Thu, Sep 10, 2015 at 8:41 PM, Ns G wrote:
> Hi Ravi,
>
> Can we read mutiple tables through map reduce? Do we have any JIRA for
> implementi