Phoenix Properties

2015-09-15 Thread Sumit Nigam
Hello, I am planning on supplying all guidepost properties through the DriverManager.getConnection method by passing a java.util.Properties of key=value pairs. In the documentation at https://phoenix.apache.org/tuning.html, it describes the guidepost parameters as server-side parameters. Does that
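A sketch of what the question describes, assuming the guidepost-width key from the tuning page (phoenix.stats.guidepost.width); the value and the JDBC URL are illustrative, and whether a server-side parameter passed this way actually takes effect is exactly the open question:

```java
import java.util.Properties;

public class GuidepostProps {
    // Build the key=value pairs the poster describes passing to
    // DriverManager.getConnection. The guidepost width key comes from
    // the tuning page; the value (10 MB) is purely illustrative.
    public static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.stats.guidepost.width", "10485760");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        System.out.println(props.getProperty("phoenix.stats.guidepost.width"));
        // With a live cluster this would be passed as (URL is a placeholder):
        // Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost", props);
    }
}
```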

Re: PhoenixMapReduceUtil multiple tables

2015-09-15 Thread Ns G
Hi Ravi, Raised PHOENIX-2266 JIRA for the same. Thanks, Durga Prasad On Tue, Sep 15, 2015 at 7:30 PM, Ravi Kiran wrote: > Hi Durga, >Please file a JIRA. With the current api in hand, it is difficult to > read from multiple tables. > Regards > Ravi > > On Tue, Sep 15, 2015 at 1:47 AM, Ns G

Re: timeouts for long queries

2015-09-15 Thread 谭自强
You should change hbase.regionserver.lease.period and hbase.client.scanner.timeout.period in hbase-site.xml. From: user-return-3791-tanzqgz=163@phoenix.apache.org [mailto:user-return-3791-tanzqgz=163@phoenix.apache.org] on behalf of Gaurav Kanade Sent: September 16, 2015 10:10 To: user@phoenix.apache.o

Re: timeouts for long queries

2015-09-15 Thread Gaurav Kanade
I am facing the same problem - it seems that my newly applied settings are not being picked up correctly. I have set the rpc.timeout as well as phoenix.query.timeout to appropriate values; in addition I changed the client retries to 50 instead of 36 (I see the number 36 in your message too) and yet

Re: timeouts for long queries

2015-09-15 Thread James Taylor
The other important timeout is Phoenix-specific: phoenix.query.timeoutMs. Set this in your hbase-site.xml on the client side to the number of milliseconds you're willing to wait for the query to finish. I might be wrong, but I believe the hbase.rpc.timeout config parameter n
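Pulling the suggestions in this thread together, a client-side hbase-site.xml might look like the following sketch; the 600000 ms values are purely illustrative, not recommendations:

```xml
<configuration>
  <!-- Phoenix-specific: how long a query may run before timing out -->
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>600000</value>
  </property>
  <!-- HBase RPC timeout (see the HBase book link in this thread) -->
  <property>
    <name>hbase.rpc.timeout</name>
    <value>600000</value>
  </property>
  <!-- Scanner timeout setting mentioned earlier in the thread -->
  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>600000</value>
  </property>
</configuration>
```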

Re: timeouts for long queries

2015-09-15 Thread Ravi Kiran
Hi James, You need to increase the value of hbase.rpc.timeout in hbase-site.xml on your client end. http://hbase.apache.org/book.html#trouble.client.lease.exception Ravi On Tue, Sep 15, 2015 at 12:56 PM, James Heather wrote: > I'm a bit lost as to what I need to change, and where I need to c

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Andrew Purtell
I used dev/make_rc.sh, built with Maven 3.2.2, Java 7u79. Ubuntu build host. On Tue, Sep 15, 2015 at 4:58 PM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > No, I don't know why. I will ask and see if I can get a response on that. I > have also started the thread for the Parcel. I will s

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Jean-Marc Spaggiari
No, I don't know why. I will ask and see if I can get a response on that. I have also started the thread for the Parcel. I will see if I find enough help to work on that. Regarding the branch you made, I tried to build it but got the error below. What's the command to build it? Thanks, JM [INFO]

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Andrew Purtell
Cool, thanks J-M. Do you know why support for query tracing was removed? If it's just a matter of porting it to the HTrace that ships with CDH, I can look at that. On Tue, Sep 15, 2015 at 4:49 PM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > Nice! I will see if there is a way to buil

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Jean-Marc Spaggiari
Nice! I will see if there is a way to build a parcel from that the same way there is a parcel for Apache Phoenix 4.3 in Cloudera Labs... Will clone what you did and try to build it locally... 2015-09-15 19:45 GMT-04:00 Andrew Purtell : > I pushed updates to branch 4.5-HBase-1.0-cdh5 and the tag v

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Andrew Purtell
I pushed updates to branch 4.5-HBase-1.0-cdh5 and the tag v4.5.2-cdh5.4.5 (1fcb5cf). This is the pending Phoenix 4.5.2 release, currently at RC1, likely to pass, that will build against CDH 5.4.5. If you want release tarballs I built from this, get them here: Binary: http://apurtell.s3.amazonaw

Re: Can't add views on HBase tables after upgrade

2015-09-15 Thread James Taylor
Correct, you must use the CREATE VIEW ( ) syntax when you're creating a view directly over HBase tables. Even with this syntax, you must also specify the primary key (which is not currently required - I filed PHOENIX-2265 for this). The reason is that you're telling Phoenix the structure of you
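As a sketch of the syntax James describes (the table, column family, and column names are hypothetical; the quoting matters because HBase names are case-sensitive):

```sql
-- View over a pre-existing HBase table "t1" with column family "f1".
-- The row key must be mapped to an explicit PRIMARY KEY column.
CREATE VIEW "t1" (
    pk VARCHAR PRIMARY KEY,
    "f1"."firstname" VARCHAR,
    "f1"."lastname" VARCHAR
);
```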

timeouts for long queries

2015-09-15 Thread James Heather
I'm a bit lost as to what I need to change, and where I need to change it, to bump up timeouts for this kind of error: Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Tue Sep 15 18:48:13 U

Re: Can't add views on HBase tables after upgrade

2015-09-15 Thread Samarth Jain
Yes, I don't think creating views that way on existing HBase tables is supported. Maybe James can confirm? FWIW, I tried the following on the 4.4.1 version of Phoenix: hbase shell: create 'T2', 'f1', 'f2', 'f3' create 'T1', 'f1', 'f2' sqlline: create view T2_VIEW AS SELECT * FROM T2; This fails: at org.

Re: Question about IndexTool

2015-09-15 Thread Gabriel Reid
The upsert statements in the MR jobs are used to convert data into the appropriate encoding for writing to an HFile -- the data doesn't actually get pushed to Phoenix from within the MR job. Instead, the created KeyValues are extracted from the "output" of the upsert statement, and the statement is

Re: Thin driver and kerberos support

2015-09-15 Thread Nick Dimiduk
In order to connect to a kerberos-secured cluster from the query server, the query server must be configured to connect using the security credentials. This means all operations performed by that query server are performed using these credentials. See the security-related configs mentioned in the C
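For reference, the query-server security settings referred to here live in hbase-site.xml on the query server host; a minimal sketch, where the principal and keytab path are placeholders:

```xml
<configuration>
  <property>
    <name>phoenix.queryserver.kerberos.principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>phoenix.queryserver.keytab.file</name>
    <value>/etc/security/keytabs/spnego.service.keytab</value>
  </property>
</configuration>
```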

Re: simple commands that mutate a large number of rows

2015-09-15 Thread Sumit Nigam
So, if I set this property to a very small value, will it not be honored if auto-commit is set to true? In the case of auto-commit, does the engine figure out its own buffer size, or can we influence it through some property? From: James Taylor To: "user@phoenix.apache.org"

Re: simple commands that mutate a large number of rows

2015-09-15 Thread James Taylor
Yes, when using auto commit, the phoenix.mutate.maxSize config property has no effect. On Tuesday, September 15, 2015, James Heather wrote: > I can confirm that setting auto-commit to true avoided the problem. > > That's for this particular query, though. I don't know whether it would do > if th

Re: simple commands that mutate a large number of rows

2015-09-15 Thread James Heather
I can confirm that setting auto-commit to true avoided the problem. That's for this particular query, though. I don't know whether it would work if the select query hit a different table. Probably that would mean it would execute client side, and the error would come back. On 15 Sep 2015 16:12, "Sum

Re: simple commands that mutate a large number of rows

2015-09-15 Thread Sumit Nigam
Hi James, Is it right to assume that with auto-commit set to true, the mutate maxSize being exceeded error would not occur? This should be because the server side now does the commit automatically when the batch size is reached/buffered. Thanks, Sumit From: James Taylor To: "user@phoenix.apac

Re: simple commands that mutate a large number of rows

2015-09-15 Thread James Taylor
That config setting (phoenix.mutate.maxSize) is just a safety valve to prevent out-of-memory errors and may be set to whatever you like. However, if you're going to just turn around and do a commit after running your upsert statement, performance will improve if you turn on auto commit instead (co
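A minimal Java sketch combining the two knobs discussed in this thread; the property values are illustrative, the JDBC URL is a placeholder, and the UPSERT is shown commented out since it needs a live cluster:

```java
import java.util.Properties;

public class AutoCommitUpsert {
    // Connection properties combining the two knobs from this thread.
    // phoenix.mutate.maxSize is the safety valve James describes;
    // phoenix.connection.autoCommit opens connections with auto-commit on
    // (calling conn.setAutoCommit(true) after connecting is equivalent).
    public static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.mutate.maxSize", "1000000"); // rows, illustrative
        props.setProperty("phoenix.connection.autoCommit", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps());
        // Against a live cluster (URL is a placeholder):
        // try (java.sql.Connection conn =
        //         java.sql.DriverManager.getConnection("jdbc:phoenix:zkhost", buildProps())) {
        //     conn.createStatement().executeUpdate(
        //         "UPSERT INTO loadtest.testing (id, firstname, lastname) " +
        //         "SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname " +
        //         "FROM loadtest.testing");
        // }
    }
}
```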

Re: PhoenixMapReduceUtil multiple tables

2015-09-15 Thread Ravi Kiran
Hi Durga, Please file a JIRA. With the current api in hand, it is difficult to read from multiple tables. Regards Ravi On Tue, Sep 15, 2015 at 1:47 AM, Ns G wrote: > Hi There, > > Can anyone provide an answer? > Can I use PhoenixMapReduceUtil to read two different tables in a single > mapre

Re: Is it appropriate to switch between immutable and mutable ?

2015-09-15 Thread James Taylor
For most cases, you're able to delete from a table with immutable rows (I believe as of the 4.2 release), so that kind of switching shouldn't be necessary. In theory, that switching should be ok, but I'm not sure we've tested that code path when the table has an index. Thanks, James On Tuesday, Septe

RE: Can't add views on HBase tables after upgrade

2015-09-15 Thread Jeffrey Lyons
Hey Samarth, Thanks for looking into that for me! I can give you a bit more information about what I’m doing. We have view creation scripts that we use to create our schema when we spin up development environments. These scripts also get run on upgrades to our main cluster if we need to add ne

Question about IndexTool

2015-09-15 Thread Yiannis Gkoufas
Hi there, I was going through the code related to index creation via MapReduce job (IndexTool) and I have some questions. If I am not mistaken, for a global secondary index Phoenix creates a new HBase table which has the appropriate key (the column value of the original table you want to index) an

Is it appropriate to switch between immutable and mutable ?

2015-09-15 Thread zz d
For example: CREATE TABLE test( id BIGINT NOT NULL PRIMARY KEY, value BIGINT ) IMMUTABLE_ROWS=true; CREATE INDEX idx ON test(id, value); As I understand it: > Creating indexes on tables with immutable rows should only be used for use cases where the data is written once and
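For the record, the switch itself is done with ALTER TABLE; a sketch using the table from the question (whether indexes survive the round-trip is exactly what the thread is unsure about):

```sql
ALTER TABLE test SET IMMUTABLE_ROWS = false;  -- treat rows as mutable
ALTER TABLE test SET IMMUTABLE_ROWS = true;   -- switch back to immutable
```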

simple commands that mutate a large number of rows

2015-09-15 Thread James Heather
I found today that I can't execute this: UPSERT INTO loadtest.testing (id, firstname, lastname) SELECT NEXT VALUE FOR loadtest.testing_id_seq, firstname, lastname FROM loadtest.testing when the table has more than 500,000 rows in it ("MutationState size of 512000 is bigger than max allowed si

Re: PhoenixMapReduceUtil multiple tables

2015-09-15 Thread Ns G
Hi There, Can anyone provide an answer? Can I use PhoenixMapReduceUtil to read two different tables in a single mapreduce program (in the mapper)? Thanks, On Thu, Sep 10, 2015 at 8:41 PM, Ns G wrote: > Hi Ravi, > > Can we read multiple tables through map reduce? Do we have any JIRA for > implementi