Re: Phoenix view over existing HBase table - timestamps

2015-11-02 Thread Thomas D'Silva
Camelia, You can specify the "CurrentSCN" attribute to get values < timestamp. See http://phoenix.apache.org/faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API -Thomas On Sun, Nov 1, 2015 at 5:00 AM, Camelia Elena Ciolac wrote: > Hello, > > I created successfu
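As a sketch, the "CurrentSCN" property is set on the JDBC connection. The quorum address and timestamp below are placeholders, and this requires a running Phoenix/HBase cluster:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CurrentScnExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Only cells with an HBase timestamp below this value will be visible.
        props.setProperty("CurrentSCN", Long.toString(1446300000000L));
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // Queries issued on this connection see the table as of CurrentSCN.
        }
    }
}
```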

Re: How do How do the operation Tephra

2015-12-03 Thread Thomas D'Silva
Phoenix is working on supporting transactions using tephra (see PHOENIX-1674). If you want to use HBase directly with tephra, the tephra website has a getting started guide with an example (see https://github.com/caskdata/tephra). -Thomas On Thu, Dec 3, 2015 at 1:02 AM, Hardika Catur Sapta wrote

Re: How do How do the operation Tephra

2015-12-03 Thread Thomas D'Silva
, Thomas D'Silva wrote: > Phoenix is working on supporting transactions using tephra (see > PHOENIX-1674). If you want to use HBase directly with tephra, the > tephra website has a getting started guide with an example (see > https://github.com/caskdata/tephra). > > -Thomas

Re: Get a count of open connections?

2015-12-03 Thread Thomas D'Silva
The number of open Phoenix connections isn't currently exposed to users. Phoenix connections are lightweight; all Phoenix connections to a cluster from the Phoenix JDBC driver use the same underlying HConnection. On Thu, Dec 3, 2015 at 7:07 AM, Riesland, Zack wrote: > Is there some way to find

Re: Questions: history of deleted records, controlling timestamps

2015-12-17 Thread Thomas D'Silva
John, If you enable KEEP_DELETED_CELLS on the underlying HBase table you will be able to see deleted data (See http://hbase.apache.org/0.94/book/cf.keep.deleted.html ) Could you describe what you mean by making a set of changes at the same timestamp? Thanks, Thomas On Thu, Dec 17, 2015 at 4:50 P
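In the HBase shell, that property can be set on the relevant column family and the retained deletes inspected with a raw scan. The table name and the '0' column family (Phoenix's default) are assumptions here:

```
hbase> alter 'MY_TABLE', {NAME => '0', KEEP_DELETED_CELLS => 'true'}
hbase> scan 'MY_TABLE', {RAW => true, VERSIONS => 5}   # raw scan shows delete markers
```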

Re: Questions: history of deleted records, controlling timestamps

2015-12-18 Thread Thomas D'Silva
hich we can hang additional > meta-data > -- Make the set of changes all appear to have happened "at the same time" > -- Hopefully, be able to undo all of the changes of a changeset. > > Thanks > John > > -Original Message- > From: Thomas D'Sil

Re: to_date not working as expected

2016-01-30 Thread Thomas D'Silva
Binu, I am able to repro the issue by manually running the test from the patch from https://issues.apache.org/jira/browse/PHOENIX-1769 . I will investigate further. Thanks, Thomas On Fri, Jan 29, 2016 at 4:26 PM, Binu Mathew wrote: > That doesn't seem to work. > > Phoenix is not recognizing th

Re: to_date not working as expected

2016-02-02 Thread Thomas D'Silva
;t seem to upgrade a > single package, 4.4, to 4.6 on the HDP 2.3 distribution. > > Can you provide us with a patch to resolve this issue? > > Thanks, > > On Sat, Jan 30, 2016 at 11:45 AM, Thomas D'Silva > wrote: > >> Binu, >> >> I am able to rep

Re: Problem with String Concatenation with Fields

2016-02-17 Thread Thomas D'Silva
Steve, That is a bug; can you please file a JIRA? Thanks, Thomas On Wed, Feb 17, 2016 at 3:34 PM, Steve Terrell wrote: > Can someone please tell me if this is a bug in Phoenix 4.6.0 ? > > This works as expected: > 0: jdbc:phoenix:localhost> select * from BUGGY where > ('tortilla'||F2)='tortilla

Re: Multiple versions for single row key

2016-02-22 Thread Thomas D'Silva
You need to set the versions attribute of the scan: scan 't1', {RAW => true, VERSIONS => 10} As James said in a previous post, getting all versions of a row using a phoenix query is not implemented yet (https://issues.apache.org/jira/browse/PHOENIX-590) Thanks, Thomas On Mon, Feb 22, 2016 at 9

Re: Multiple versions for single row key

2016-02-22 Thread Thomas D'Silva
PHOENIX-590 is currently unassigned, so I'm not sure when it will be implemented. We are always looking for contributions. On Mon, Feb 22, 2016 at 1:23 PM, wrote: > Thanks Thomas. Do we have rough estimate of when PHOENIX-590 will be done? > > -Original Message- > Fro

Re: Failed to dynamically load library 'libMJCloudConnector.so'.

2016-02-26 Thread Thomas D'Silva
What phoenix server jar version are you using? The 1.2.0 client jar is probably too old to use with your server jar. It's best if your client jar version is the same as the server jar. On Thu, Feb 25, 2016 at 3:55 PM, Murugesan, Rani wrote: > Hi, > > > > I used the open source client - phoenix-4.5

Re: Question about in-flight new rows while index creation in progress

2016-03-18 Thread Thomas D'Silva
There will be 10 million rows. After the index metadata is created we run an UPSERT SELECT (with an SCN at which the data table was resolved) to create index rows for all existing data rows. Any new rows that are written to the data table after the index metadata is created will be written to the i

Re: Using transaction in custom coprocessor

2016-04-20 Thread Thomas D'Silva
It is possible to use a transaction started by a client in a coprocessor. The transaction is serialized as the TxConstants.TX_OPERATION_ATTRIBUTE_KEY attribute on the operation. On Wed, Apr 13, 2016 at 7:42 AM, Mohammad Adnan Raza wrote: > Hello everyone, > > I have requirement to use transactio
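A hedged sketch of reading that attribute inside a RegionObserver hook; this assumes Tephra's client classes (TxConstants, TransactionCodec) are on the coprocessor classpath:

```java
// Inside e.g. prePut(ObserverContext, Put put, WALEdit edit, Durability d):
byte[] txBytes = put.getAttribute(TxConstants.TX_OPERATION_ATTRIBUTE_KEY);
if (txBytes != null) {
    // Deserialize the transaction the client started.
    Transaction tx = new TransactionCodec().decode(txBytes);
    long readPointer = tx.getReadPointer();
    // ... use the transaction's read/write pointers as needed
}
```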

Re: Emulating a true INSERT or UPDATE

2016-07-28 Thread Thomas D'Silva
If the table is transactional, you are guaranteed that if there are overlapping transactions that try to commit the same row one will succeed and the others will fail with an exception. There is also an additional cost to doing conflict detection at commit time. On Thu, Jul 28, 2016 at 8:18 AM, H

Re: Emulating a true INSERT or UPDATE

2016-07-29 Thread Thomas D'Silva
umber of rows you have. You'll also have the >> added benefit that another client attempting to INSERT or UPDATE the same >> rows at the same time would fail (that's the conflict detection piece that >> Thomas mentioned). >> Thanks, >> James >> >> O

Re: Tephra errors when trying to create a transactional table in Phoenix 4.8.0

2016-08-31 Thread Thomas D'Silva
Can you check the Transaction Manager logs and see if there are any errors? Also, can you run jps and confirm the Transaction Manager is running? On Wed, Aug 31, 2016 at 2:12 AM, F21 wrote: > Just another update. Even though the logs says that the transaction > manager is not running, it is

Re: Which statements are supported when using transactions?

2016-10-06 Thread Thomas D'Silva
Francis, Can you please file a JIRA for this? Thanks, Thomas On Thu, Oct 6, 2016 at 12:58 AM, F21 wrote: > I just ran into the following scenario with Phoenix 4.8.1 and HBase 1.2.3. > > 1. Create a transactional table: CREATE TABLE schemas(version varchar not > null primary key) TRANSACTIONAL=

Re: SALT_BUCKETS and PRIMARY KEY DESC

2017-02-27 Thread Thomas D'Silva
I was unable to repro this behavior with phoenix 4.9, maybe you can try using a later version of phoenix? On Mon, Feb 20, 2017 at 9:17 AM, Afshin Moazami wrote: > Hi folks, > > I am not sure if I it is by design, or it is a [known] bug > in phoenix-4.7.0-HBase-1.1. > It looks like when I create

Re: Disable NO_CACHE hint on query for LIMIT OFFSET paging queries

2018-08-17 Thread Thomas D'Silva
Shouldn't you pass the NO_CACHE hint for the LIMIT-OFFSET queries, since you will be reading and filtering out lots of rows on the server? I guess using the block cache for RVC queries might help depending on how many rows you read per query, you should be able to easily test this out. On Fri, Aug

Re: Disable NO_CACHE hint on query for LIMIT OFFSET paging queries

2018-08-21 Thread Thomas D'Silva
gt; Thanks, > Abhishek > > On Sat, Aug 18, 2018 at 4:16 AM Thomas D'Silva > wrote: > >> Shouldn't you pass the NO_CACHE hint for the LIMIT-OFFSET queries, since >> you will be reading and filtering out lots of rows on the server? >> I guess using the block

Re: Is read-only user of Phoeix table possible?

2018-08-23 Thread Thomas D'Silva
On a new cluster, the first time a client connects is when the SYSTEM tables are created. You need to connect with a user that has RWX on the SYSTEM schema the very first time. After that user1 should be able to connect. Also from the doc: Every user requires 'RX' permissions on all Phoenix SYSTEM
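With HBase namespace ACLs enabled, the grants described above might look like this in the HBase shell (usernames are placeholders, not from the original thread):

```
hbase> grant 'bootstrap_admin', 'RWX', '@SYSTEM'   # first connection creates the SYSTEM tables
hbase> grant 'user1', 'RX', '@SYSTEM'              # read-only users still need RX on SYSTEM
```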

Re: Disable NO_CACHE hint on query for LIMIT OFFSET paging queries

2018-08-23 Thread Thomas D'Silva
to rows and rejected till offset. So in such a > scenario wouldn't those rows in block cache help during the pagination > duration. > > Thanks, > Abhishek > > On Wed, Aug 22, 2018 at 12:07 AM Thomas D'Silva > wrote: > >> When you do an OFFSET Phoenix wil

Re: Empty row when using OFFSET + LIMIT

2018-08-25 Thread Thomas D'Silva
Juan, Can you please create a JIRA that allows us to repro this in a test? Thanks, Thomas On Fri, Aug 24, 2018 at 9:12 PM, Juan Pablo Gardella < gardellajuanpa...@gmail.com> wrote: > Hi all, > > Today I faced a bug -I think-. I'm using Phoenix shipped >

Re: Is read-only user of Phoeix table possible?

2018-08-27 Thread Thomas D'Silva
nix to work, and may not be how the 4.9 release of Phoenix works (due > to changes that have been made). > > On 8/23/18 12:42 PM, Thomas D'Silva wrote: > >> On a new cluster, the first time a client connects is when the SYSTEM >> tables are created. You need to conn

Re: Unable to find cached index metadata

2018-09-02 Thread Thomas D'Silva
Is your cluster under heavy write load when you see these exceptions? How long does it take to write a batch of mutations? If it's longer than the config value of maxServerCacheTimeToLiveMs you will see the exception because the index metadata expired from the cache. On Sun, Sep 2, 2018 at 4:02 P

Re: TTL on a single column family in table

2018-09-04 Thread Thomas D'Silva
If you set different TTLs for column families you can run into issues with SELECT count(*) queries not working correctly (depending on which column family is used to store the EMPTY_COLUMN_VALUE). On Tue, Sep 4, 2018 at 10:56 AM, Sergey Soldatov wrote: > What is the use case to set TTL only for

Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-09-11 Thread Thomas D'Silva
Since you dropped all the system tables, all the phoenix metadata was lost. If you have the ddl statements used to create your tables, you can try rerunning them. On Tue, Sep 11, 2018 at 9:32 AM, Tanvi Bhandari wrote: > Hi, > > > > I am trying to upgrade the phoenix binaries in my setup from pho

Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-09-12 Thread Thomas D'Silva
> But when I performed the *select * from "myTable";* it is not returning > any result. > > On Wed, Sep 12, 2018 at 1:55 AM Thomas D'Silva > wrote: > >> Since you dropped all the system tables, all the phoenix metadata was >> lost. If you have the ddl statements

Re: Missing content in phoenix after writing from Spark

2018-09-12 Thread Thomas D'Silva
Is there a reason you didn't use the spark-connector to serialize your data? On Wed, Sep 12, 2018 at 2:28 PM, Saif Addin wrote: > Thank you Josh! That was helpful. Indeed, there was a salt bucket on the > table, and the key-column now shows correctly. > > However, the problem still persists in t

Re: Salting based on partial rowkeys

2018-09-13 Thread Thomas D'Silva
For the usage example that you provided, when you write data how do the values of id_1, id_2 and other_key vary? I assume id_1 and id_2 remain the same while other_key is monotonically increasing, and that's why the table is salted. If you create the salt bucket only on id_2 then wouldn't you run i

Re: How are Phoenix Arrays Stored in HBase?

2018-10-19 Thread Thomas D'Silva
Take a look at PArrayDataTypeEncoder appendValue() and encode(). For variable length data types we store the individual element's serialized bytes with a separator, and the last part of the array contains the offsets. For fixed length data types we just store the individual elements. On Fri, O
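As a schematic illustration only (this is NOT Phoenix's exact byte layout; see the classes above for that), a separator-plus-trailing-offsets encoding for variable-length elements can look like this:

```python
# Illustrative sketch: element bytes + separator in the body, then a
# trailing section holding each element's offset and the element count.
SEP = b"\x00"

def encode_varlen(elements):
    body = bytearray()
    offsets = []
    for e in elements:
        offsets.append(len(body))   # offset of this element within the body
        body += e + SEP             # element bytes followed by the separator
    tail = b"".join(o.to_bytes(4, "big") for o in offsets)
    tail += len(elements).to_bytes(4, "big")
    return bytes(body) + tail

def decode_varlen(data):
    n = int.from_bytes(data[-4:], "big")
    off_start = len(data) - 4 - 4 * n
    offsets = [int.from_bytes(data[off_start + 4 * i: off_start + 4 * i + 4], "big")
               for i in range(n)]
    body = data[:off_start]
    out = []
    for i, o in enumerate(offsets):
        # element ends just before the next offset (or end of body), minus separator
        end = offsets[i + 1] - 1 if i + 1 < n else len(body) - 1
        out.append(body[o:end])
    return out
```

Fixed-length types skip the separator and offset section entirely, since element boundaries are implied by the element width.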

Re: Phoenix 4.14 - VIEW creation

2018-11-14 Thread Thomas D'Silva
You cannot create a view over multiple tables. On Wed, Nov 14, 2018 at 3:49 AM, lkyaes wrote: > Hello, > > I wonder, if there already some way how to CREATE VIEW over multiply > tables (with aggregation)? > > Br, > Liubov > > > >

Re: Phoenix Query Taking Long Time to Execute

2018-11-14 Thread Thomas D'Silva
Can you describe your cluster setup and table definitions, types of queries you are running etc.? On Wed, Nov 14, 2018 at 12:40 AM, Azharuddin Shaikh < azharuddins...@gmail.com> wrote: > Hi All, > > We have hbase tables which consist of 4.4 Million records on which we are > performing query usin

Re: Rolling hourly data

2018-11-20 Thread Thomas D'Silva
Since your PK already leads with (sid, day) I don't think adding a secondary index will help. Do you generally always run the aggregation query for the recently inserted data? The row timestamp feature might help in this case https://phoenix.apache.org/rowtimestamp.html If you run the same aggregat
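A sketch of the row timestamp feature under a (sid, time) key like the one described above; table and column names are illustrative, not from the original thread:

```sql
CREATE TABLE metrics (
    sid BIGINT NOT NULL,
    created DATE NOT NULL,
    val DOUBLE
    CONSTRAINT pk PRIMARY KEY (sid, created ROW_TIMESTAMP)
);
-- 'created' maps to the HBase cell timestamp, so queries filtering on it
-- can take advantage of HBase scan time-range optimizations.
```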

Re: Phoenix 5.0 documentation

2018-11-20 Thread Thomas D'Silva
Phoenix 5.0 only works with HBase 2.0 see https://www.mail-archive.com/dev@phoenix.apache.org/msg49675.html On Tue, Nov 20, 2018 at 5:20 AM Alexandre Berthaud < alexandre.berth...@clever-cloud.com> wrote: > Hello everyone, > > Is there a version of the documentation for Phoenix 5.0 somewhere I >

Re: Rolling hourly data

2018-11-26 Thread Thomas D'Silva
he SELECT query to include a particular sid, the upsert > select worked. > Hence I think the only way would be for me to run UPSERt for generating > daily data for range of sids or segment_id. > > Did I miss something? > > On Tue, Nov 20, 2018 at 9:59 AM Thomas D'Silva >

Re: Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2018-12-17 Thread Thomas D'Silva
Jack, Can you please file a JIRA that includes your repro steps? On Fri, Dec 14, 2018 at 2:33 AM Jack Steenkamp wrote: > Hi All, > > I have come across a curious case with Phoenix (4.14.1) cursors where a > particular query would carry on looping forever if executed when a local > index is prese

Re: Query All Dynamic Columns

2018-12-26 Thread Thomas D'Silva
With splittable system catalog you should be able to create views without seeing performance issues. Chinmay is working on enabling running a select query to return the dynamic column values without specifying the dynamic column names and types ahead of time. (see https://issues.apache.org/jira/b

Re: Inner Join Cursor Query fails with NullPointerException - JoinCompiler.java:187

2019-01-02 Thread Thomas D'Silva
This looks like a bug, please file a JIRA with your test case. On Sat, Dec 29, 2018 at 7:42 AM Jack Steenkamp wrote: > Hi All, > > Using Phoenix 4.14.1, I have come across an inner join query in my > application that fails with the NullPointerException if executed as part of > a Cursor, but exec

Re: column mapping schema decoding

2019-01-02 Thread Thomas D'Silva
The encoded column qualifiers do not start at one (see QueryConstants.ENCODED_CQ_COUNTER_INITIAL_VALUE). It's best to use QualifierEncodingScheme as was suggested. On Wed, Jan 2, 2019 at 3:53 PM Shawn Li wrote: > Hi Jaanai and Pedro, > > Any input for my example? > > Thanks, > Shawn > > On Thu, D

Re: Hbase vs Phienix column names

2019-01-07 Thread Thomas D'Silva
There isn't an existing utility that does that. You would have to look up the COLUMN_QUALIFIER for the columns you are interested in from SYSTEM.CATALOG and then use it to create a Scan. On Mon, Jan 7, 2019 at 9:22 PM Anil wrote: > Hi Team, > > Is there any utility to read hbase data using hbase apis

Re: Phoenix Performance Improvement

2019-01-17 Thread Thomas D'Silva
Can you provide the schema of the table and the queries you are running? On Tue, Jan 15, 2019 at 12:35 PM Azharuddin Shaikh wrote: > Thanks Pedro for your response. > > Actually we have an status column which consists of various status and we > are executing queries which select queries which ar

Re: unexpected behavior...MIN vs ORDER BY and LIMIT 1

2019-01-17 Thread Thomas D'Silva
The first query scans over all the rows in the index, while the second query reads one row (SERVER 1 ROW LIMIT ). On Tue, Jan 15, 2019 at 6:55 PM M. Aaron Bossert wrote: > I have a table (~ 724M rows) with a secondary index on the "TIME" column. > When I run a MIN function on the table, the quer

Re: FromCompiler - Re-resolved stale table logging

2019-01-28 Thread Thomas D'Silva
There is an open bug that causes a table to be resolved multiple times during a single query compilation ( https://issues.apache.org/jira/browse/PHOENIX-4962). This only affects query performance and is not a correctness issue. On Fri, Jan 25, 2019 at 3:24 PM William Shen wrote: > Hi all, >

Re: split count for mapreduce jobs with PhoenixInputFormat

2019-01-30 Thread Thomas D'Silva
If stats are enabled PhoenixInputFormat will generate a split per guidepost. On Wed, Jan 30, 2019 at 7:31 AM Josh Elser wrote: > You can extend/customize the PhoenixInputFormat with your own code to > increase the number of InputSplits and Mappers. > > On 1/30/19 6:43 AM, Edwin Litterst wrote: >

Re: Phoenix JDBC Connection Warmup

2019-02-04 Thread Thomas D'Silva
As James suggested if you set the UPDATE_CACHE_FREQUENCY table property, the server will not be pinged for the latest metadata until the update frequency. Check out the altering section (https://phoenix.apache.org/) On Sun, Feb 3, 2019 at 5:57 PM William Shen wrote: > Thanks for the suggestions!
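For instance (the table name and interval are placeholders):

```sql
-- Cache table metadata client-side; re-check the server at most every 15 min.
ALTER TABLE my_table SET UPDATE_CACHE_FREQUENCY = 900000;
```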

Re: Strange query results

2019-02-12 Thread Thomas D'Silva
This looks like a bug, can you please file a JIRA with the repro steps ? On Tue, Feb 12, 2019 at 4:56 PM Victor Brakauskas wrote: > Hello all, > Running into some strange query results on a cluster I have that's running > phoenix 4.14.0 and hbase 1.3.1. > I've got a table that's defined as such.

Re: Using Hint with PhoenixRDD

2019-04-10 Thread Thomas D'Silva
I don't think there is a way to pass in a hint while using PhoenixRDD. Which hint are you trying to pass in? On Wed, Apr 10, 2019 at 10:42 AM William Shen wrote: > Anyone still using PhoenixRDD with Spark, or anyone had used it in the > past that might be able to answer this? > > Thanks! > > On Thu, A

Re: Using Hint with PhoenixRDD

2019-04-10 Thread Thomas D'Silva
Can you please file a JIRA for this? On Wed, Apr 10, 2019 at 5:53 PM William Shen wrote: > Thanks for chiming in Thomas. We were trying to pass in NO_CACHE to > prevent large one-time scans from affecting the block cache. > > On Wed, Apr 10, 2019 at 5:14 PM Thomas D'Silva >

Re: query cell timestamp and tag

2019-04-11 Thread Thomas D'Silva
We have an open JIRA, PHOENIX-4552, which has some details on the work involved. Currently, cell tags cannot be queried using Phoenix. On Thu, Apr 11, 2019 at 11:41 AM Jimmy Xiang wrote: > I have an existing HBase table. The cell timestamp

Re: Date for next release ?

2019-04-17 Thread Thomas D'Silva
Josh had started a discussion thread on the dev list about having a 5.0.1 release. https://lists.apache.org/thread.html/99fcc737d7a8f82ddffb1b34a64f7099f7909900b8bea36dd6afca16@%3Cdev.phoenix.apache.org%3E We would appreciate any help in making this release happen. On Mon, Apr 15, 2019 at 4:11 AM

Re: Local Index data not replicating for older HBase versions

2019-05-21 Thread Thomas D'Silva
Your approach seems like the correct thing to do. HBase has stopped supporting the 1.2 branch, so we also EOL'ed it; there will not be any more releases targeting HBase 1.2. I would suggest that you upgrade to a later version. On Tue, Apr 30, 2019 at 8:55 PM Hieu Nguyen wrote: > Hi, > > We are o

Re: Local Index data not replicating for older HBase versions

2019-05-23 Thread Thomas D'Silva
I was under the impression that the CDH branches will live on). > > > On Tue, May 21, 2019 at 9:24 PM Thomas D'Silva > wrote: > >> Your approach seems like the correct thing to do. HBase has stopped >> supporting the 1.2 branch, so we also EOL'ed it, there will not b

[ANNOUNCE] Apache Phoenix 4.14.2 released

2019-05-28 Thread Thomas D'Silva
The Apache Phoenix team is pleased to announce the immediate availability of the 4.14.2 patch release. Apache Phoenix enables SQL-based OLTP and operational analytics for Apache Hadoop using Apache HBase as its backing store and providing integration with other projects in the Apache ecosystem such

Re: Sequence number

2019-10-22 Thread Thomas D'Silva
Are you sure SYSTEM.SEQUENCE was restored properly? What is the current value of the sequence in the restored table? On Fri, Oct 4, 2019 at 1:52 PM jesse wrote: > Let's say there is a running cluster A, with table:books and > system.sequence current value 5000, cache size 100, incremental is 1,

Re: Apache phoenix problem with order by and offset giving duplicate results in paging

2019-11-12 Thread Thomas D'Silva
Try including the title in the order by clause as well {order by TITLE, TO_NUMBER(COUNT) desc}. Using offset and limit for paging is not efficient when the table has a lot of rows. You could try using row value constructors and ordering by the primary key column of the table (see https://phoenix.apache.o
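A sketch of the row-value-constructor paging pattern referred to above (table and column names are placeholders; the bind values come from the last row of the previous page):

```sql
SELECT * FROM my_table
WHERE (pk1, pk2) > (?, ?)   -- last (pk1, pk2) seen on the previous page
ORDER BY pk1, pk2
LIMIT 50;
```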

Re: Bulk Load data into Phoenix Table with Dynamic columns

2019-11-17 Thread Thomas D'Silva
Bulk loading with dynamic columns is not supported. You could try modeling the data using multiple views on the same physical table. On Tue, Nov 12, 2019 at 3:26 PM Mohammed Shoaib Quraishi < shoaib.qurai...@outlook.com> wrote: > Hi, > > I have a phoenix table created with a fixed set of columns.
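Modeling the data with views might look roughly like this (table/view names and the discriminating column are assumptions, not from the original thread):

```sql
CREATE TABLE base (id VARCHAR PRIMARY KEY, kind VARCHAR);
-- Each view declares its own columns on top of the shared physical table:
CREATE VIEW events_a (col_a VARCHAR) AS SELECT * FROM base WHERE kind = 'A';
CREATE VIEW events_b (col_b BIGINT)  AS SELECT * FROM base WHERE kind = 'B';
```

Data for each shape can then be upserted through the corresponding view instead of via dynamic columns.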

Re: How to verify query timeout is working

2014-12-18 Thread Thomas D'Silva
Jerry, I was able to trigger a query time out while testing with sqlline. I set the phoenix.query.timeoutMs property on the phoenix client hbase-site.xml to 1 ms and was able to trigger a timeout while querying SYSTEM.CATALOG. hbase-site.xml needs to be on the client's CLASSPATH in order to get p
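The client-side setting described above, as an hbase-site.xml fragment (the 1 ms value is only meant to force a timeout for testing):

```xml
<configuration>
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>1</value>
  </property>
</configuration>
```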

Re: Exception when starting sqlline.py

2015-02-11 Thread Thomas D'Silva
Anirudha, Did you add the location of phoenix-4.2.2-server.jar to the HBASE_CLASSPATH (in conf/hbase-env.sh)? Thanks, Thomas On Wed, Feb 11, 2015 at 6:35 AM, Anirudha Khanna wrote: > Hi All, > > On our dev HBase cluster after installing the Phoenix server jars on > "ALL"(master and region se

Re: Persisting Objects thru Phoenix

2015-03-18 Thread Thomas D'Silva
Anirudha, At Salesforce, one of the use cases for Phoenix and HBase is storing immutable event data such as login information. We periodically run aggregate queries to generate metrics, e.g. the number of logins per user. We select the columns of the primary key based on the filters used while q

Re: Extra Column when creating views

2015-03-31 Thread Thomas D'Silva
I think it's to support the case when all columns of the table are part of the primary key (and are used to construct the rowkey). On Tue, Mar 31, 2015 at 6:23 AM, Anirudha Khanna wrote: > Hi All, > > I am creating updatable views on a Phoenix table and inserting data into the > table through the view

Re: Timestamp for mutations

2015-04-07 Thread Thomas D'Silva
Abhilash, You can set the timestamp by setting the PhoenixRuntime.CURRENT_SCN_ATTRIB property on the phoenix connection. See "Can phoenix work on tables with arbitrary timestamp" on http://phoenix.apache.org/faq.html -Thomas On Mon, Apr 6, 2015 at 11:20 PM, Abhilash L L wrote: > Hello, > >

Re: Timestamp for mutations

2015-04-07 Thread Thomas D'Silva
s scn stand for ? > > > Regards, > Abhilash L L > Capillary Technologies > M:919886208262 > abhil...@capillarytech.com | www.capillarytech.com > > On Tue, Apr 7, 2015 at 11:27 PM, Thomas D'Silva > wrote: >> >> Abhilash, >> >> You can set the timest

Re: understanding phoenix code flow

2015-04-07 Thread Thomas D'Silva
Ashish, If you want to step through server side code you can enable remote debugging in hbase-env.sh. I have used this with standalone mode. # Enable remote JDWP debugging of major HBase processes. Meant for Core Developers # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transpor
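A typical export for that comment block, here for the region server process (the port is an assumption; `suspend=y` would instead pause the process until a debugger attaches):

```shell
# in conf/hbase-env.sh
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug \
  -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
```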

Re: Socket timeout while counting number of rows of a table

2015-04-09 Thread Thomas D'Silva
The phoenix.query.timeoutMs property should be set in the hbase-site.xml of the client (in the phoenix/bin directory), not the server hbase-site.xml. See https://github.com/forcedotcom/phoenix/wiki/Tuning . Did you try just setting it on the client side config before starting sqlline and running t

Re: PhoenixIOException :Setting the query timeout

2015-06-16 Thread Thomas D'Silva
Bahubali, hbase-site.xml needs to be on the client's CLASSPATH in order to get picked up, or else it will use the default timeout. When using sqlline, it sets the CLASSPATH to the HBASE_CONF_PATH environment variable, which defaults to the current directory. Try running sqlline directly from the bin d

Re: Phoenix Client Configuration

2015-06-17 Thread Thomas D'Silva
Yiannis, hbase-site.xml needs to be on the client's CLASSPATH in order to get picked up, or else it will use the default timeout. When using sqlline, it sets the CLASSPATH to the HBASE_CONF_PATH environment variable, which defaults to the current directory. Try running sqlline directly from the bin dir

Re: Phoenix Client Configuration

2015-06-17 Thread Thomas D'Silva
mas, > > thanks for the reply! So whats the case for squirel? > Or running a main class (which connects to phoenix) from a jar file? > > Thanks a lot! > > On 17 June 2015 at 19:31, Thomas D'Silva wrote: >> >> Yiannis >> >> hbase-site.xml needs to b

Re: How to upsert data into dynamic columns in phoniex.

2015-06-22 Thread Thomas D'Silva
You can upsert rows by specifying the dynamic column name and data type along with the table in the UPSERT statement. For the example in http://phoenix.apache.org/dynamic_columns.html UPSERT INTO TABLE (eventId, eventTime, lastGCTime INTEGER) VALUES(1, CURRENT_TIME(), 1234); On Sun, Jun 21, 2015 at 6:51 PM, guxia
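Reading a dynamic column back requires redeclaring it in the query; a sketch using the EventLog example from the same docs page:

```sql
SELECT eventId, lastGCTime FROM EventLog(lastGCTime INTEGER)
WHERE eventId = 1;
```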

Re: How to upsert data into dynamic columns in phoniex.

2015-06-23 Thread Thomas D'Silva
ct lastGCTime from eventlog fails with undefined column error. > > is there a bug? > > > > ---------- Original -- > From: "Thomas D'Silva";; > Send time: Tuesday, Jun 23, 2015 5:44 AM > To: "user"; > Subject: Re: How to u

Re: Avoid deleting Hbase table when droping table with Phoenix

2015-06-29 Thread Thomas D'Silva
Jose, hbase-site.xml needs to be on the classpath in order for the config to get picked up. Regarding the empty key value see : https://groups.google.com/forum/#!msg/phoenix-hbase-user/UWdBghSfePo/BmCxOUOPHn8J -Thomas On Mon, Jun 29, 2015 at 4:38 PM, Jose M wrote: > Hi, > I'm new to Phoenix and

Re: REG: REGEXP in Phoenix Queries

2015-07-21 Thread Thomas D'Silva
The default regex functions just use Java Pattern On Tue, Jul 21, 2015 at 2:28 AM, Ns G wrote: > Hi All, > > I have a requirement for using Regular expression. I am trying below query > but it doesnt seem to work. > > SELECT el_id from element where el_name like > REGEXP_SUBSTR(el_name,'^[Z][A-

Re: How fast is upsert select?

2015-07-22 Thread Thomas D'Silva
Zack, It depends on how wide the rows are in your table. On an 8-node cluster, creating an index with 3 columns (char(15), varchar and date) on a 1 billion row table takes about 1 hour 15 minutes. How many rows does your table have and how wide are they? On Wed, Jul 22, 2015 at 8:29 AM, Riesland

Re: More help with secondary indexes

2015-07-22 Thread Thomas D'Silva
Zack, Can you try increasing the value of hbase.regionserver.lease.period ? Also set the following to a high value phoenix.query.timeoutMs phoenix.query.keepAliveMs On Wed, Jul 22, 2015 at 5:38 AM, Riesland, Zack wrote: > I have a table like this: > > > > CREATE TABLE fma. er_keyed_gz_meterkey_

Re: Issue while joining data using pheonix

2015-08-11 Thread Thomas D'Silva
Nipur, Are you sure the config change is getting picked up? The exception says the maximum allowed size is (104857664 bytes ~ 0.1GB) not 1GB. Thanks, Thomas On Tue, Aug 11, 2015 at 12:43 AM, Nipur Patodi wrote: > Hi All, > > I am trying to join data in hbase phoenix tables. How ever I am gettin

Re: Issue while joining data using pheonix

2015-08-12 Thread Thomas D'Silva
server. I have > crosschecked and looks like this file is in class path of sqlline.py. But > still looks like updated config are not picked. Is there any way to apply > these config ( by cmdline if possible) in phoenix sqlline? > > Thanks, > > _Nipur > > -Original Message

Re: AbstractMethodError

2015-09-21 Thread Thomas D'Silva
Sumit, I tested out HBase 0.98.6 and the Phoenix 4.5.1 server jar and it worked for me. The PhoenixRpcSchedulerFactory.create(Configuration conf, PriorityFunction priorityFunction, Abortable abortable) signature is used for HBase 1.0 and 1.1 versions. The create(Configuration conf, RegionServerSer

Re: [ANNOUNCE] New Apache Phoenix committer - Jan Fernando

2015-09-29 Thread Thomas D'Silva
Congrats Jan! On Tue, Sep 29, 2015 at 11:23 AM, Eli Levine wrote: > On behalf of the Apache Phoenix project I am happy to welcome Jan Fernando > as a committer. Jan has been an active user and contributor to Phoenix in > the last couple of years. Some of his major contributions are: > 1) Worked d

Re: Drop in throughput

2015-10-13 Thread Thomas D'Silva
Sumanta, Phoenix resolves the table for every SELECT. For UPSERT it resolves the table once at commit time. We have a JIRA in the txn branch where if you specify an SCN it will cache the table and look it up from the cache https://issues.apache.org/jira/browse/PHOENIX-1812 This will be available o