Re: Pagination with Phoenix

2016-02-17 Thread Subramanyam Satyanarayana
Thanks all for the responses. We will incorporate the suggestions in our design. Much appreciated. ~Subbu

Re: Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Done! https://issues.apache.org/jira/browse/PHOENIX-2689 Thanks, Steve On Wed, Feb 17, 2016 at 5:58 PM, Thomas D'Silva wrote: > Steve, > > That is a bug, can you please file a JIRA. > > Thanks, > Thomas > > On Wed, Feb 17, 2016 at 3:34 PM, Steve Terrell > wrote: > > Can someone please tel

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-17 Thread Parth Sawant
Thanks a lot Ravi. On Wed, Feb 17, 2016 at 12:12 PM, Ravi Kiran wrote: > Hi Parth, > > Definitely it looks like a bug to me. I wrote a small test and it fails > too. Will try to provide a patch for this. > > Ravi > > On Tue, Feb 16, 2016 at 5:37 PM, Parth Sawant > wrote: > >> Update: The sam

Re: Problem with String Concatenation with Fields

2016-02-17 Thread Thomas D'Silva
Steve, That is a bug; can you please file a JIRA? Thanks, Thomas On Wed, Feb 17, 2016 at 3:34 PM, Steve Terrell wrote: > Can someone please tell me if this is a bug in Phoenix 4.6.0? > > This works as expected: > 0: jdbc:phoenix:localhost> select * from BUGGY where > ('tortilla'||F2)='tortilla

Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Can someone please tell me if this is a bug in Phoenix 4.6.0? This works as expected: 0: jdbc:phoenix:localhost> select * from BUGGY where ('tortilla'||F2)='tortillachip'; PK1 0 F1 tortilla F2 chip But this does not: 0: jdbc:phoenix:localhost> select * from BUGGY where (F1||F2)='tor

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
phoenix.upsert.batch.size is a client-side property. We lowered it to 20-50; your mileage may vary per your use case. phoenix.coprocessor.maxServerCacheTimeToLiveMs is a server-side property. You will need to restart your HBase cluster for this. On Wed, Feb 17, 2016 at 3:01 PM, Neelesh wrote: > Als
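The two knobs Anil mentions live in different places. A minimal sketch of the server-side change, assuming a typical hbase-site.xml deployment (the value shown is illustrative, not a recommendation):

```xml
<!-- hbase-site.xml on each region server; requires a cluster restart -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>60000</value>
</property>
```

phoenix.upsert.batch.size, by contrast, would be set on the client side, e.g. in the Properties passed when opening the Phoenix JDBC connection.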

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
Also, was your change to phoenix.upsert.batch.size on the client, on the region server, or both? On Wed, Feb 17, 2016 at 2:57 PM, Neelesh wrote: > Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs, > but haven't tried playing with phoenix.upsert.batch.size. It's at the > de

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs, but haven't tried playing with phoenix.upsert.batch.size. It's at the default 1000. On Wed, Feb 17, 2016 at 12:48 PM, anil gupta wrote: > I think this has been answered before: > http://search-hadoop.com/m/9UY0h2FKuo8RfAPN

Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
I think this has been answered before: http://search-hadoop.com/m/9UY0h2FKuo8RfAPN Please let us know if the problem still persists. On Wed, Feb 17, 2016 at 12:02 PM, Neelesh wrote: > We've been running the Phoenix 4.4 client for a while now with HBase 1.1.2. > Once in a while, while UPSERTing reco

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Steve Terrell
And meanwhile, try the streaming function. Seems inefficient, but I routinely load tens of thousands of records with dynamic fields this way. Hasn't ever crashed on a single upsert, and it's actually pretty fast, at least if your Pig job is running on the same cluster as Phoenix. On Wed, Feb 17,
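Steve's streaming trick boils down to emitting one UPSERT statement per row, declaring the dynamic column inline. A hedged sketch of the statement builder such a streaming script might use (the table, column names, and the helper itself are hypothetical; only the inline `COL TYPE` dynamic-column syntax is Phoenix's):

```python
def upsert_with_dynamic_column(table, static_cols, dyn_col, dyn_type, values):
    """Build a Phoenix UPSERT that declares one dynamic column inline.
    A Pig streaming script could emit this per row and pipe it to sqlline.
    Note: values are naively quoted here; real code should bind parameters."""
    cols = ", ".join(static_cols + ["%s %s" % (dyn_col, dyn_type)])
    placeholders = ", ".join("'%s'" % v for v in values)
    return "UPSERT INTO %s (%s) VALUES (%s)" % (table, cols, placeholders)
```

For example, `upsert_with_dynamic_column("EVENTS", ["PK"], "TAG", "VARCHAR", ["k1", "hot"])` yields an UPSERT whose column list reads `(PK, TAG VARCHAR)`.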

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-17 Thread Ravi Kiran
Hi Parth, It definitely looks like a bug to me. I wrote a small test and it fails too. Will try to provide a patch for this. Ravi On Tue, Feb 16, 2016 at 5:37 PM, Parth Sawant wrote: > Update: The same method doesn't work for writing into SMALLINT columns in > a Phoenix table, i.e. a 'bytearr

ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
We've been running the Phoenix 4.4 client for a while now with HBase 1.1.2. Once in a while, while UPSERTing records (on a table with 2 global indexes), we see the following error. I found https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both values in that JIRA to 360. This still does

Re: Pagination with Phoenix

2016-02-17 Thread Sachin Katakdound
We used a similar approach: if your data set never changes (no inserts or deletes), then you can cache the keys beforehand and use them for pagination. However, for our use case the data changes, so we found a workaround with a subquery and the nth value function. Roughly like this: Select * from t

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Ravi Kiran
Hi, Unfortunately, we don't support dynamic columns within the phoenix-pig module. Currently, the only two options to PhoenixHBaseStorage are specifying the table or a set of table columns. We can definitely support dynamic columns. Please feel free to create a ticket. Regards Ravi On Wed

Re: production

2016-02-17 Thread James Taylor
Hi Dor, We're in the process of releasing Phoenix 4.7.0 which includes transaction support through Tephra. We're calling our transaction support beta because it's the first release with this support. Phoenix is used in production at many companies[1], including mine (Salesforce). Tephra (as part o

Re: Pagination with Phoenix

2016-02-17 Thread James Taylor
See https://phoenix.apache.org/paged.html and the unit test for QueryMoreIT. The row value constructor (RVC) was implemented specifically to provide an efficient means of pagination over HBase data. Thanks, James On Wed, Feb 17, 2016 at 10:54 AM, Steve Terrell wrote: > I was just thinking about
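The RVC pattern James points at can be sketched as a small query builder (table and column names are illustrative; the `(PK1, PK2) > (?, ?)` comparison is what lets the scan seek directly past the previous page):

```python
def next_page_query(table, pk_cols, limit):
    """Build a Phoenix keyset-pagination query using a row value
    constructor (RVC). Bind the primary-key values of the last row of
    the previous page to the placeholders."""
    rvc = "(%s)" % ", ".join(pk_cols)
    binds = "(%s)" % ", ".join("?" for _ in pk_cols)
    return "SELECT * FROM %s WHERE %s > %s ORDER BY %s LIMIT %d" % (
        table, rvc, binds, ", ".join(pk_cols), limit)
```

Each page re-runs the same prepared statement with fresh bind values, so no server-side cursor state is needed.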

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Steve Terrell
I would be interested in knowing, too. My solution was to write a Pig streaming function that executed the Phoenix upsert command for every row. On Wed, Feb 17, 2016 at 7:21 AM, Sumanta Gh wrote: > Hi, > I was going through the Phoenix Pig integration [1]. > I need to store value in a dynamic c

Re: Pagination with Phoenix

2016-02-17 Thread Steve Terrell
I was just thinking about this today. I was going to try to implement it by using a LIMIT on every query, with an addition of WHERE (rowkey_field_1 > last_rowkey_field_1_value_from_previous_query) OR (rowkey_field_2 > last_rowkey_field_2_value_from_previous_query) OR … But I haven't tried it
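One caveat worth checking before trying this: a plain OR across key columns is not the same as "row key greater than the last row", because a later column can exceed its previous value on rows that were already returned. A small self-contained sketch (toy tuples, not Phoenix) contrasting the two predicates:

```python
def or_predicate(row, last):
    # The OR form sketched above: true if ANY key column exceeds its
    # previous value -- this re-admits rows from earlier pages.
    return row[0] > last[0] or row[1] > last[1]

def keyset_predicate(row, last):
    # Lexicographic "row > last", which is what a Phoenix row value
    # constructor comparison like (PK1, PK2) > (?, ?) evaluates.
    return row > last

rows = [(1, 9), (2, 1), (2, 5), (3, 0)]  # already in row-key order
last = (2, 5)                            # last row of the previous page
page_or = [r for r in rows if or_predicate(r, last)]
page_keyset = [r for r in rows if keyset_predicate(r, last)]
# page_or wrongly re-includes (1, 9); page_keyset is just [(3, 0)]
```

The correct hand-written form would be `pk1 > v1 OR (pk1 = v1 AND pk2 > v2)`, which is exactly what the RVC comparison expresses in one predicate.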

Re: Save dataframe to Phoenix

2016-02-17 Thread Josh Mahonin
Hi Krishna, There was some talk a few weeks ago about a new feature to allow creating / saving to tables dynamically, with schema inferred from the DataFrame. However, I don't believe a JIRA has been filed for it yet. As always, pull requests are appreciated. Josh On Tue, Feb 16, 2016 at 6:16 P

Pagination with Phoenix

2016-02-17 Thread Subramanyam Satyanarayana
We have micro services built within Play that generate Phoenix queries to serve RESTful requests. We are trying to figure out a good way to implement pagination in the services. We were curious to know if there is any prescribed way of implementing it, either Row Keys (to determine start & stop condi

production

2016-02-17 Thread Dor Ben Dov
Hi, Does anyone here know of or use the project in production? For how long? Do the Tephra integration and transactions over HBase work well? Can I count on it under production stress? Regards, Dor Ben Dov

Re: Write path blocked by MetaDataEndpoint acquiring region lock

2016-02-17 Thread Andrew Purtell
Is 1000 a good default? On Wed, Feb 17, 2016 at 9:42 AM, Nick Dimiduk wrote: > Thanks for the context Arun. > > For what it's worth, I greatly increased the batch size (from default > 1,000 to 500,000), which i believe reduced contention on the lock and > allowed ingest to catch up. > > On Tue,

Thin Client Commits?

2016-02-17 Thread Steve Terrell
It seems that when I use phoenix-4.6.0-HBase-0.98-thin-client.jar, deletes and upserts do not take effect. Is this expected behavior? Thanks, Steve

Dynamic column using Pig STORE function

2016-02-17 Thread Sumanta Gh
Hi, I was going through the Phoenix Pig integration [1]. I need to store a value in a dynamic column using org.apache.phoenix.pig.PhoenixHBaseStorage. Are dynamic columns allowed in the STORE function? Please send me an example. [1] - https://phoenix.apache.org/pig_integration.html Regards Sumanta Gho

Re: Write path blocked by MetaDataEndpoint acquiring region lock

2016-02-17 Thread Nick Dimiduk
Thanks for the context, Arun. For what it's worth, I greatly increased the batch size (from the default 1,000 to 500,000), which I believe reduced contention on the lock and allowed ingest to catch up. On Tue, Feb 16, 2016 at 9:14 PM, Thangamani, Arun wrote: > Sorry I had pressed Control + Enter a l
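The batch size in question governs how many mutations accumulate before a commit. A generic client-side sketch of that loop (the execute/commit callables stand in for a JDBC statement and connection; this illustrates the batching idea, not Phoenix's internal code):

```python
def commit_in_batches(rows, batch_size, execute, commit):
    """Upsert rows, committing every batch_size mutations and once more
    for any tail. A larger batch_size means fewer commits (and, per this
    thread, less contention on the MetaDataEndpoint region lock)."""
    pending = 0
    for row in rows:
        execute(row)
        pending += 1
        if pending == batch_size:
            commit()
            pending = 0
    if pending:
        commit()  # flush the final partial batch
```

With 10 rows and a batch size of 4, this commits three times (4, 4, then the tail of 2).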