Re: Slow query help

2018-03-16 Thread Samarth Jain
A less resource-intensive approach would be to use approx count distinct - https://phoenix.apache.org/language/functions.html#approx_count_distinct You would still need the secondary index though, as James suggested, if you want it to run fast. On Fri, Mar 16, 2018 at 10:26 AM Flavio Pompermaier
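A minimal sketch of the suggested approach (the table and column names are placeholders, not from the thread; `APPROX_COUNT_DISTINCT` is documented at the link above):

```sql
-- Approximate distinct count: cheaper than COUNT(DISTINCT ...) on large tables.
-- MY_TABLE and USER_ID are hypothetical names.
SELECT APPROX_COUNT_DISTINCT(USER_ID) FROM MY_TABLE;

-- A secondary index is still needed if the query should run fast:
CREATE INDEX IDX_USER_ID ON MY_TABLE (USER_ID);
```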

Re: error when using hint on global index where table is using row timestamp mapping

2017-10-01 Thread Samarth Jain
Hi Noam, Can you pass on the DDL statements for the table and index and the query you are executing, please? Thanks! On Sun, Oct 1, 2017 at 2:01 AM, Bulvik, Noam wrote: > Hi > > > > I have create a table and used the row timestamp mapping functionality. > The key of the table is + column. I al

Re: Apache Phoenix Column Mapping Feature

2017-06-12 Thread Samarth Jain
Column mapping is enabled by default. See details on various configs and table properties here - http://phoenix.apache.org/columnencoding.html On Sun, Jun 11, 2017 at 11:55 PM, Udbhav Agarwal wrote: > Hi, > > I am using apache Phoenix 4.10 on Hbase. I want to use column mapping > feature. Do I

Re: Scan phoenix created columns, hbase

2017-06-05 Thread Samarth Jain
Cheyene, with Phoenix 4.10, the column mapping feature is enabled by default, which means the column names declared in the Phoenix schema are going to be different from the column qualifiers in hbase. If you would like to disable column mapping, set the COLUMN_ENCODED_BYTES=NONE property in your ddl. On M
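A sketch of the DDL this refers to, with hypothetical table and column names:

```sql
-- Disable column mapping so the HBase column qualifiers match the
-- declared Phoenix column names. MY_TABLE/ID/COL1 are placeholders.
CREATE TABLE MY_TABLE (
    ID BIGINT PRIMARY KEY,
    COL1 VARCHAR
) COLUMN_ENCODED_BYTES = NONE;
```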

Re: Short Tables names and column names

2017-05-30 Thread Samarth Jain
Yes, Phoenix will take care of mapping the column name to the hbase column qualifier. Before using the column mapping feature (which is on by default), make sure that the limits on the number of columns, as highlighted on the website, work for you. On Tue, May 30, 2017 at 7:21 PM Ash N <742...@gmail.com

Re: Unexpected dynamic column issues

2017-04-07 Thread Samarth Jain
st case - upsert into TMP_SNACKS(k, c1, "page_title" varchar) values(2,'a','c'); If you would like the original behavior, then you would need to turn off column encoding for your table like I mentioned in the previous email. For more details on this feature - go to ht

Re: Unexpected dynamic column issues

2017-04-06 Thread Samarth Jain
Thanks for reporting the issue, Dave. This has to do with the new column mapping feature that we rolled out in 4.10. To disable it for your table, please create your table like this: create table TMP_SNACKS(k bigint primary key, c1 varchar) COLUMN_ENCODED_BYTES=0; I will file a JIRA and get a fix

Re: Row timestamp

2017-03-10 Thread Samarth Jain
This is because you are using now() for created. If you used a different date, then with TEST_ROW_TIMESTAMP1 the cell timestamp would be that date, whereas with TEST_ROW_TIMESTAMP2 it would be the server side time. Also, which examples are broken on the page? On Thu, Mar 9, 2017 at 11:28 AM, Baty

Re: Memory leak

2016-12-05 Thread Samarth Jain
Thanks for reporting this, Jonathan. Would you mind filing a JIRA, preferably with the object tree that you are seeing in the leak? Also, what version of hbase and phoenix are you using? On Mon, Dec 5, 2016 at 9:53 AM Jonathan Leech wrote: > Looks like PHOENIX-2357 introduced a memory leak, at le

Re: Recover from "Cluster is being concurrently upgraded from 4.7.x to 4.8.x"

2016-10-06 Thread Samarth Jain
Patrick, Do you have multiple 4.8.1 clients connecting to the cluster at the same time? On Thu, Oct 6, 2016 at 8:11 AM, Patrick FICHE wrote: > Hi, > > I upgraded Phoenix server from 4.7.0 to 4.8.1 on HDP cluster. > > Now, when I try to connect to my server using sqlline.py from 4.8.1, I get > t

Re: Combining an RVC query and a filter on a datatype smaller than 8 bytes causes an Illegal Data Exception

2016-09-19 Thread Samarth Jain
Kumar, Can you try with the 4.8 release? On Mon, Sep 19, 2016 at 2:54 PM, Kumar Palaniappan < kpalaniap...@marinsoftware.com> wrote: > > Any one had faced this issue? > > https://issues.apache.org/jira/browse/PHOENIX-3297 > > And this one gives no rows > > SELECT * FROM TEST.RVC_TEST WHERE (CO

Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-09 Thread Samarth Jain
Ryan, Can you tell us what the explain plan says for the select count(*) query? - Samarth On Tue, Aug 9, 2016 at 12:58 PM, Ryan Templeton wrote: > I am working on a project that will be consuming sensor data. The “fact” > table is defined as: > > CREATE TABLE historian.data ( > assetid unsign

Re: Phoenix upsert query time

2016-08-02 Thread Samarth Jain
Best bet is to upgrade your cloudera version to cdh5.7. It supports phoenix 4.7. See - http://community.cloudera.com/t5/Cloudera-Labs/ANNOUNCE-Third-installment-of-Cloudera-Labs-packaging-of-Apache/m-p/42351#U42351 On Tuesday, August 2, 2016, anupama agarwal wrote: > Hi, > > We need to insert

Re: question on calltimeout

2016-06-28 Thread Samarth Jain
+user@phoenix Larry, which version of HBase and Phoenix are you using? Starting from 4.7, Phoenix takes care of automatically renewing scanner leases, which should prevent such timeouts. To take advantage of that feature, you would need to use an HBase version at least as recent as 0.98.17 if you are usin

Re: phoenix task rejected

2016-06-22 Thread Samarth Jain
Please look at this tuning guide: https://phoenix.apache.org/tuning.html You probably would want to adjust these client side properties to deal with your workload: phoenix.query.threadPoolSize and phoenix.query.queueSize. On Wed, Jun 22, 2016 at 9:34 AM, 金砖 wrote: > 16 regionservers, 1500+ reg

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-21 Thread Samarth Jain
IL" > + " FROM user.SESSION_EXPIRATION " > + " WHERE NEXT_CHECK <= CURRENT_TIME()" > + " LIMIT " + batchSize > + " ) AS TSE" > + " LEFT OUTER JOIN user.SESSION TS1" > + " ON TS1.

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-19 Thread Samarth Jain
;= CURRENT_TIME()" > + " LIMIT " + batchSize > + " ) AS TSE" > + " LEFT OUTER JOIN user.SESSION TS1" > + " ON TS1.CLIENT_ID = TSE.CLIENT_ID" > + " AND TS1.BRAND_ID = TSE.BRAND_ID" >

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-18 Thread Samarth Jain
gt; never seem to be cleaned up after each query. Is there any work-around? > > > ------ > *From:* Samarth Jain > *To:* "user@phoenix.apache.org" > *Sent:* Friday, 15 April 2016, 17:00 > *Subject:* Re: Getting swamped with Phoenix *.tmp files o

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-18 Thread Samarth Jain
Arun, Older phoenix views, created pre-4.6, shouldn't have the ROW_TIMESTAMP column. Was the upgrade done correctly, i.e. the server jar upgraded before the client jar? Is it possible to get the complete stack trace? Would be great if you could come up with a test case here to understand better whe

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-15 Thread Samarth Jain
server/jboss-datasource/using-try-with-resources-to-close-database-connections >> , >> https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html). >> >> >> -- >> *From:* Samarth Jain >> *Sent:* Friday,

Re: Getting swamped with Phoenix *.tmp files on SELECT.

2016-04-15 Thread Samarth Jain
What version of phoenix are you using? Is the application properly closing statements and result sets? On Friday, April 15, 2016, wrote: > I am running into an issue where a huge number temporary files are being > created in my C:\Users\myuser\AppData\Local\Temp folder, they are around > 20MB bi

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-04-13 Thread Samarth Jain
Srinivas, Are you trying to create a phoenix view over an existing HBase table? On Wed, Apr 13, 2016 at 11:47 AM, Pindi, Srinivas < srinivas.pi...@epsilon.com> wrote: > *Problem* *Statement*: > > While we are trying to create a phoenix view and we are getting the > following exception. > > > >

Re: Phoenix table is unaccessable...

2016-03-11 Thread Samarth Jain
Saurabh, another option for you would be to upgrade your phoenix to our just released 4.7 version. It is possible that you might be hitting a bug that has been fixed now. Worth a try. On Fri, Mar 11, 2016 at 4:07 PM, Sergey Soldatov wrote: > Hi Saurabh, > It seems that your SYSTEM.CATALOG got co

Re: How to set up different phoenix timeout values for different client applications?

2016-03-09 Thread Samarth Jain
Hi Simon, phoenix.query.timeoutMs is a client side phoenix property. You can set it in the client side hbase-site.xml for a global setting, or set it programmatically per JDBC statement via stmt.setQueryTimeout(int seconds). There are a couple of other hbase level timeouts that are in play: hbase

Re: Can't change TTL using alter table

2016-03-04 Thread Samarth Jain
Also, are you using the open source version or a vendor supplied distro? On Fri, Mar 4, 2016 at 10:44 AM, Samarth Jain wrote: > Rafit, > > Changing TTL the way you are doing it should work. Do you have any > concurrent requests going on that are issuing some kind of ALTER TABLE

Re: Can't change TTL using alter table

2016-03-04 Thread Samarth Jain
Rafit, Changing TTL the way you are doing it should work. Do you have any concurrent requests going on that are issuing some kind of ALTER TABLE statements? Also, would you mind posting the DDL statement for your table? - Samarth On Fri, Mar 4, 2016 at 9:20 AM, Rafit Izhak-Ratzin wrote: > Hi a

Re: WARN client.ScannerCallable: Ignore, probably already closed

2016-01-19 Thread Samarth Jain
This likely has to do with hbase scanners running into lease expiration. Try overriding the value of hbase.client.scanner.timeout.period in the server side hbase-site.xml to a large value. We have a feature coming out in Phoenix 4.7 (soon to be released) that will take care of automatically renewi

Re: Phoenix JDBC connection pool

2015-12-15 Thread Samarth Jain
Kannan, See my response here: https://mail-archives.apache.org/mod_mbox/phoenix-user/201509.mbox/%3CCAMfSBK+WKzd5EscXLJcn9nVpDYd66dH=nL=devdc9n_skww...@mail.gmail.com%3E There is a JIRA in place https://issues.apache.org/jira/browse/PHOENIX-2388 to help pooling of phoenix connections. Would be a

Re: weird result I got when I try row_timestamp feature

2015-12-11 Thread Samarth Jain
Hi Roc, FWIW, looking at your schema, it doesn't look like you are using the ROW_TIMESTAMP feature. The constraint part of your DDL needs to be changed like this: CONSTRAINT my_pk PRIMARY KEY ( server_timestamp ROW_TIMESTAMP, app_id, client_ip, cluster_id,host_id,api ) For the issue of getting
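A sketch of a full DDL using the corrected constraint from this reply; the column names come from the thread, but the data types and table name are assumptions:

```sql
-- ROW_TIMESTAMP maps the leading PK column to the HBase cell timestamp.
-- MY_TABLE and all column types are hypothetical.
CREATE TABLE MY_TABLE (
    SERVER_TIMESTAMP DATE NOT NULL,
    APP_ID VARCHAR,
    CLIENT_IP VARCHAR,
    CLUSTER_ID VARCHAR,
    HOST_ID VARCHAR,
    API VARCHAR
    CONSTRAINT my_pk PRIMARY KEY (
        SERVER_TIMESTAMP ROW_TIMESTAMP,
        APP_ID, CLIENT_IP, CLUSTER_ID, HOST_ID, API
    )
);
```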

Re: Help tuning for bursts of high traffic?

2015-12-10 Thread Samarth Jain
io to make it faster? > > > > Right now, I’m only averaging about 5 queries/second, even though I’m > querying by the primary key. > > > > Before I upgraded, I was getting a lot closer to 100. > > > > Thanks! > > > > > > > > *From:* Samarth Jain [

Re: Help tuning for bursts of high traffic?

2015-12-09 Thread Samarth Jain
Zack, These stats are collected continuously and at the global client level. So collecting them only when the query takes more than 1 second won't work. A better alternative for you would be to report stats at a request level. You could then conditionally report the metrics for queries that exceed

Re: CsvBulkUpload not working after upgrade to 4.6

2015-12-09 Thread Samarth Jain
Zack, What version of HBase are you running? And which version of Phoenix (specifically 4.6-0.98 version or 4.6-1.x version)? FWIW, I don't see the MetaRegionTracker.java file in HBase branches 1.x and master. Maybe you don't have the right hbase-client jar in place? - Samarth On Wed, Dec 9, 201

Re: Row timestamp support in 4.6

2015-12-04 Thread Samarth Jain
Pierre, Thanks for reporting this. Do you mind filing a JIRA? Also, as a workaround, can you check if changing the data type from UNSIGNED_LONG to BIGINT resolves the issue? -Samarth On Friday, December 4, 2015, pierre lacave wrote: > > Hi, > > I am trying to use the ROW_TIMESTAMP mapping feat

Re: Get a count of open connections?

2015-12-03 Thread Samarth Jain
Hi Zack, One simple way to expose the number of open phoenix connections would be via global client metrics that Phoenix exposes at the client JVM level. I have filed https://issues.apache.org/jira/browse/PHOENIX-2485. The client side metrics capability of Phoenix needs to be documented. I have f

Re: JRuby on rails -> Phoenix connection error - cannot load java class

2015-12-02 Thread Samarth Jain
Josh, One step worth trying would be to register the PhoenixDriver instance and see if that helps. Something like this: DriverManager.registerDriver(PhoenixDriver.INSTANCE) Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181") - Samarth On Wed, Dec 2, 2015 at 3:41 PM,

Re: blog describing new time-series data optimization

2015-11-08 Thread Samarth Jain
t; > > > On Sat, Nov 7, 2015 at 6:53 PM, James Taylor > wrote: > >> If you have time-series data for which you'd like to improve query >> performance, take a look at this[1] blog written by Samarth Jain on a new >> feature in our 4.6 release: >> >> https://blogs.apache.org/phoenix/entry/new_optimization_for_time_series >> >> Enjoy! >> >> James >> > >

[ANNOUNCE] Apache Phoenix 4.6 released

2015-10-26 Thread Samarth Jain
The Apache Phoenix team is pleased to announce the immediate availability of the 4.6 release with support for HBase 0.98/1.0/1.1. Some of the highlights of this release include: Support for surfacing HBase native timestamp [1] Support for correlate variable [2] Alpha version of a web-app for visu

Re: Phoenix JDBC driver hangs/timeouts

2015-10-18 Thread Samarth Jain
Alok, Please answer the below questions to help us figure out what might be going on: 1) How many region servers are on the cluster? 2) What is the value configured for hbase.regionserver.handler.count? 3) What kind of queries is your test executing - point look up / range / aggregate/ full tab

Re: Phoenix 4.4 to 4.6 Client Errors

2015-10-16 Thread Samarth Jain
Thanks for reporting back, Mark. I have checked in the updated patch that should fix the error you are running into. After the fix ( https://github.com/apache/phoenix/commit/416c860f7d9f3490d46169fa74656994b4fe27a8), I was able to successfully upgrade from 4.4 to 4.6 version of Phoenix. On Fri, Oc

Re: Salting and pre-splitting

2015-10-08 Thread Samarth Jain
er has to be able to hold multiple salt > buckets. Is that correct? > 3. Where does Phoenix maintain the mapping of salt buckets to region > server given that the two are orthogonal to each other? > > Best regards, > Sumit > > -- > *From:* Sa

Re: Salting and pre-splitting

2015-10-07 Thread Samarth Jain
- Default value of phoenix.query.rowKeyOrderSaltedTable is true and that ensures that the LIMIT clause returns data in rowkey order. This is no longer the case starting with Phoenix 4.4. You need to provide an explicit ORDER BY on row key columns if you need the rows to be returned in row key order. On Wed,
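A sketch of the explicit ORDER BY this reply calls for, with placeholder names (`MY_SALTED_TABLE`, `PK1`, `PK2` stand in for a salted table and its row key columns):

```sql
-- Since 4.4, row key order from a salted table must be requested explicitly.
SELECT * FROM MY_SALTED_TABLE
ORDER BY PK1, PK2
LIMIT 100;
```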

Re: Can't understand reason for rejected from org.apache.phoenix.job.JobManager: Running, pool size = 128, active threads = 128, queued tasks = 5000, completed tasks = 204

2015-10-06 Thread Samarth Jain
Serega, any chance you have other queries concurrently executing on the client? What version of Phoenix are you on? On Tuesday, October 6, 2015, Serega Sheypak wrote: > Hi, found smth similar here: > > http://mail-archives.apache.org/mod_mbox/phoenix-user/201501.mbox/%3CCAAF1Jdg-E4=54e5dC3WazL=m

Re: ResultSet size

2015-10-06 Thread Samarth Jain
To add to what Jesse said, you can override the default scanner fetch size programmatically via Phoenix by calling statement.setFetchSize(int). On Tuesday, October 6, 2015, Jesse Yates wrote: > So HBase (and by extension, Phoenix) does not do true "streaming" of rows > - rows are copied into memo

Re: unexpected throwable? probably due to query

2015-09-30 Thread Samarth Jain
Hi Konstantinos, Can you tell us what versions of Phoenix and HBase you are using? - Samarth On Wed, Sep 30, 2015 at 1:46 PM, anil gupta wrote: > As per the stack trace, that looks like a bug to me. > > On Wed, Sep 30, 2015 at 7:27 AM, Konstantinos Kougios < > kostas.koug...@googlemail.com>

Re: Can't add views on HBase tables after upgrade

2015-09-15 Thread Samarth Jain
ported, or point me to some docs that reiterate > that, it would help my case to refactor all the scripts to fit the > supported format. > > > > Thanks, > > Jeff > > > > *From:* Samarth Jain [mailto:sama...@apache.org] > *Sent:* September-14-15 7:32 PM > *To:* user@ph

Re: Phoenix with PreparedStatement

2015-09-14 Thread Samarth Jain
Hi Sumit, Phoenix doesn't cache query plans as of yet. Once we move over to Calcite parser and optimizer (which is a work in progress), we will hopefully start doing that which is when your suggested approach of using PreparedStatement with bind params would be beneficial. - Samarth On Mon, Sep

Re: Can't add views on HBase tables after upgrade

2015-09-14 Thread Samarth Jain
MO. Thanks in advance for offering to look into it, Samarth. > > On Sat, Sep 12, 2015 at 11:22 AM, Samarth Jain wrote: > >> Jeffrey, >> >> I will look into this and get back to you. >> >> - Samarth >> >> On Thu, Sep 10, 2015 at 8:44 AM, Jeffr

Re: Can't add views on HBase tables after upgrade

2015-09-12 Thread Samarth Jain
Jeffrey, I will look into this and get back to you. - Samarth On Thu, Sep 10, 2015 at 8:44 AM, Jeffrey Lyons wrote: > Hey all, > > > > I have recently tried upgrading my Phoenix version from 4.4-HBase-0.98 to > build 835 on 4.x-HBase-0.98 to get some of the new changes. After the > upgrade it

Re: Phoenix JDBC in web-app, what is the right pattern?

2015-09-03 Thread Samarth Jain
me (therefore, a lot of connections) ?? > Also, can you expand a little bit more on the implications of having a > pooling mechanism for Phoenix connections? > Thanks in advance! > -Jaime > > On Thu, Sep 3, 2015 at 3:35 PM, Samarth Jain > wrote: > >> Yes. PhoenixConnectio

Re: How to force timeout when connection fails

2015-09-03 Thread Samarth Jain
Zack, The configs that you overrode do not apply when establishing connection to HBase via phoenix. You might want to muck around with hbase.client.retries.number and zookeeper.recovery.retry to see if you can get a faster response if HBase is down. I am not an expert in that area though. Someone

Re: Phoenix JDBC in web-app, what is the right pattern?

2015-09-03 Thread Samarth Jain
n, right? > > 2015-09-03 21:26 GMT+02:00 Samarth Jain : > >> Your pattern is correct. >> >> Phoenix doesn't cache connections. You shouldn't pool them and you >> shouldn't share them with multiple threads. >> >> For batching upserts, you

Re: Phoenix JDBC in web-app, what is the right pattern?

2015-09-03 Thread Samarth Jain
Your pattern is correct. Phoenix doesn't cache connections. You shouldn't pool them and you shouldn't share them with multiple threads. For batching upserts, you could do something like this: try (Connection conn = DriverManager.getConne

help diagnosing issue

2015-09-01 Thread Samarth Jain
Ralph, Couple of questions. Do you have phoenix stats enabled? Can you send us a stacktrace of the RegionTooBusy exception? Looking at HBase code, it is thrown in a few places. Would be good to check where the resource crunch is occurring. On Tue, Sep 1, 2015 at 2:26 PM, Perko, Ralph J wrote:

Re: select * from table throws scanner timeout

2015-08-26 Thread Samarth Jain
, 2015 at 6:02 AM, Sunil B wrote: > Hi Samarth, > > The patch definitely solves the issue. The query "select * from > table" retrieves all the records. Thanks for the patch. > > Thanks, > Sunil > > On Tue, Aug 25, 2015 at 1:21 PM, Samarth Jain > wrote:

Re: select * from table throws scanner timeout

2015-08-25 Thread Samarth Jain
ng for the > past 5 hours now. Will update the thread with success or failure. > Code Analysis: ScanPlan.newIterator function uses SerialIterators > instead of ParallelIterators if there is an "order by" in the query. > > Thanks, > Sunil > > On Mon, Aug 24, 2015 at

Re: select * from table throws scanner timeout

2015-08-24 Thread Samarth Jain
Sunil, Can you tell us a little bit more about the table - 1) How many regions are there? 2) Do you have phoenix stats enabled? http://phoenix.apache.org/update_statistics.html 3) Is the table salted? 4) Do you have any overrides for scanner caching ( hbase.client.scanner.caching) or result s

Re: HBase rowkey filter impl in Phoenix for scanning specific time range rows

2015-08-24 Thread Samarth Jain
Hi Sun, There is no custom HBase filter that phoenix uses to scan specific time range rows. Having said that, I am currently working on https://issues.apache.org/jira/browse/PHOENIX-914 that is going to provide the capability of having a column directly map to HBase cell level timestamp. By speci

Re: How to do true batch updates in Phoenix

2015-08-21 Thread Samarth Jain
ter >> batching) >> >> On Wed, Aug 19, 2015 at 7:11 PM, Samarth Jain >> wrote: >> >>> You can do this via phoenix by doing something like this: >>> >>> try (Connection conn = DriverManager.getConnection(url)) { >>> conn.setAutoCommit(f

Re: "ERROR 201 (22000): Illegal data" on Upsert Select

2015-08-20 Thread Samarth Jain
Yiannis, Can you please provide a reproducible test case (schema, minimum data to reproduce the error) along with the phoenix and hbase versions so we can take a look at it further. Thanks, Samarth On Thu, Aug 20, 2015 at 2:09 PM, Yiannis Gkoufas wrote: > Hi there, > > I am getting an error whi

Re: How to do true batch updates in Phoenix

2015-08-19 Thread Samarth Jain
You can do this via phoenix by doing something like this: try (Connection conn = DriverManager.getConnection(url)) { conn.setAutoCommit(false); int batchSize = 0; int commitSize = 1000; // number of rows you want to commit per batch. Change this value according to your needs. while (there are reco

Re: PHOENIX-2000

2015-07-17 Thread Samarth Jain
eta.prepareAndExecute(RemoteMeta.java:157) > at > org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:474) > at > org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:108) > ... 7 more > > > The versi

Re: PHOENIX-2000

2015-07-15 Thread Samarth Jain
Changing the email group to user@phoenix.apache.org. Please don't use phoenix-hbase-u...@googlegroups.com as that group is deprecated. Can you try upgrading your HBase version? I see that the way HBase configuration is being loaded has been changed in the later releases. I am not seeing the issue

Re: Can't understand why phoenix saves but not selects

2015-06-23 Thread Samarth Jain
Hi Serega, Do you know if auto-commit is on for the connection returned by getJdbcFacade().createConnection()? If not, you need to call connection.commit() after executeUpdate() -Samarth On Tuesday, June 23, 2015, Serega Sheypak wrote: > Hi, I'm testing dummy code: > > int result = getJdbcFaca

Re: Phoenix's behavior when applying limit to the query

2015-05-15 Thread Samarth Jain
Prasanth, To help us answer you better please let us know your table schema. Also what does EXPLAIN select * from limit 1000; tell you? -Samarth On Friday, May 15, 2015, Chagarlamudi, Prasanth < prasanth.chagarlam...@epsilon.com> wrote: > Hello, > I would appreciate if someone could help with

Re: Socket timeout while counting number of rows of a table

2015-04-09 Thread Samarth Jain
Looking at the exception java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Thu Apr 09 16:49:33 CEST 2015, null, java.net.SocketTimeoutException: callTimeout=6, callDuration=62366

[ANNOUNCE] Apache Phoenix 4.3.1 released

2015-04-08 Thread Samarth Jain
The Apache Phoenix team is pleased to announce the immediate availability of the 4.3.1 release. Highlights of the release being: - Global client side resource metrics - SQL command to turn Phoenix tracing ON/OFF - SQL command to allow setting tracing sampling frequency - Capability to pass guide p

Re: hbase / phoenix errors

2015-04-07 Thread Samarth Jain
I think that page needs to be updated. Sorry about that, Ralph. We ran into problems with HBase 0.98.4 and local indexes where a similar (but not the same) error was thrown: Coprocessor.CoprocessorHost: the coprocessor …LocalIndexSplitter threw an exception NoSuchMethodError hbase.regionserver.Reg

Re: UPSERT SELECT slow and crashes

2015-03-30 Thread Samarth Jain
If you are using sqlline.py, then by default autocommit should be on. To confirm, can you run !autocommit and see what the output is? On Mon, Mar 30, 2015 at 9:05 PM, 梁鹏程 wrote: > hi, > phoenix.connection.autoCommit in the hbase-site.xml is server side or > client side? > thanks. > > Regards,

Re: Using Hints in Phoenix

2015-03-23 Thread Samarth Jain
Hi Matt, Any chance your hint is going to the next line when using squirrel? Just to be sure, make sure you do this: Navigate to File --> New Session Properties --> Tab SQL and uncheck the "Remove multi line comment (/*...*/) from SQL before sending it to database" so that hints you include in quer
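A sketch of the hint syntax in question; the table name, index name, and columns are placeholders:

```sql
-- The /*+ ... */ hint must reach the server intact (i.e., the client tool
-- must not strip comments). MY_TABLE and MY_IDX are hypothetical names.
SELECT /*+ INDEX(MY_TABLE MY_IDX) */ COL1
FROM MY_TABLE
WHERE COL2 = 'some_value';
```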

Re: Fwd: java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString

2015-03-18 Thread Samarth Jain
You also want to make sure that you are using compatible versions of client and server jars: a phoenix-core jar at version 4.3.0 and a phoenix-server.jar at version 4.2.3 are *NOT* compatible. The server side jar version should always be the same as or newer than the client side jar. In general

Re: - Multitenancy ...

2015-03-11 Thread Samarth Jain
Hi Naga, Can you try create table mt3 ( tenant_id varchar NOT NULL, tenant_name varchar constraint mt_pk primary key (tenant_id, tenant_name) ) multi_tenant=true; - Samarth On Wednesday, March 11, 2015, Naga Vijayapuram wrote: > I am on HDP 2.2 ; it uses Phoenix 4.2 ; Is this fixed

Re: peoblem with 4.3 client and java 1.6

2015-03-01 Thread Samarth Jain
You would likely need to modify the source code if you want 1.6. We use try-with-resource construct at a few places now and that is 1.7+ only. On Sun, Mar 1, 2015 at 4:43 PM, James Taylor wrote: > Hi Noam, > Java 1.6 was end of life more than a year ago, so Phoenix binaries no > longer support i

Re: TTL

2015-02-11 Thread Samarth Jain
+1 to what Ralph said. FWIW, starting with 4.3 (soon to be out) we allow setting HBase properties like TTL through ALTER TABLE. However, you can't have different TTL for different column families. On Wednesday, February 11, 2015, Perko, Ralph J wrote: > That is a great point. Setting the TTL
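A sketch of the ALTER TABLE form described here (available starting with 4.3); the table name and TTL value are placeholders:

```sql
-- HBase TTL is specified in seconds; this sets it to one day.
ALTER TABLE MY_TABLE SET TTL = 86400;
```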

Re: Weird memory leak for phoenix tracing

2015-01-28 Thread Samarth Jain
Hey Sun, I believe you are running into an HTrace bug. See this: https://issues.apache.org/jira/browse/HTRACE-92 With tracing on, we end up publishing many more metrics records than we should. These records find their way to the tracing table which ends up causing an infinite loop which ends

Re: Scan performance using JDBC result impacted by limit

2015-01-21 Thread Samarth Jain
Vijay, Is there a reason why you are doing PhoenixResultSet.string()? Is it for logging purposes? Regarding your question about the increase in object creation time, that doesn't seem like it is phoenix related. Are you seeing an increase in time for resultset.next() or are you seeing an increase

Re: Whether or not multi-thread share the single PhoenixConnection object?

2015-01-13 Thread Samarth Jain
There is only one HConnection per cluster. Every phoenix connection to a cluster shares the same underlying HConnection. See ConnectionQueryServicesImpl#init() where we make sure that there is only one HConnection to the cluster. The HConnection is established when it's the first time that the phoe

Re: Phoenix4.2.1 against HBase0.98.6 encountered a strange problem when using connection with props

2014-12-04 Thread Samarth Jain
The value of timestamp provided by CURRENT_SCN_ATTRIBUTE has to be greater than the table timestamp. So it really is any arbitrary value >= table create timestamp. Providing timestamps on connections helps us with executing point in time or snapshot queries. In other words, it's a way of surfacing

Re: Phoenix4.2.1 against HBase0.98.6 encountered a strange problem when using connection with props

2014-12-03 Thread Samarth Jain
Is there a reason why you are using CURRENT_SCN_ATTRIBUTE while you are getting a phoenix connection? Is it because you want to query data at a point of time? If yes, you probably want to check that the create time stamp of the table MYTEST1 <= 141759720L. If you don't want any snapshot like qu

Re: Parsing rowkey of existing Hbase table while creating a View

2014-12-03 Thread Samarth Jain
Hi Vijay, One of the closing parentheses is wrongly placed. Try: create *view* "events" ( "cid" UNSIGNED_LONG, "timestamp" UNSIGNED_LONG, "category" CHAR(128), D."pc" VARCHAR, D."un" VARCHAR, D."ug" VARCHAR CONSTRAINT pk PR

Re: Is there any support for paging or offset like mysql limit m,n grammer?

2014-11-23 Thread Samarth Jain
Supporting limit/offset style queries is inherently inefficient for HBase. Alternatively, you could use Phoenix's support for row value constructors to page through query results. See http://phoenix.apache.org/paged.html for details. On Sunday, November 23, 2014, su...@certusnet.com.cn wrote: >
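A sketch of the row value constructor paging pattern linked above; `MY_TABLE`, `PK1`, and `PK2` are placeholders for a table and its composite primary key, and the bind values come from the last row of the previous page:

```sql
-- Fetch the next page directly by row key instead of scanning past an OFFSET.
SELECT * FROM MY_TABLE
WHERE (PK1, PK2) > (?, ?)
ORDER BY PK1, PK2
LIMIT 100;
```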

Re: Regionserver Crashing whenever join query run_tables with 10lac rows

2014-11-12 Thread Samarth Jain
10 lac is 1 million. Siddharth, please let us know the schema and the query you are executing too. Thanks! On Wednesday, November 12, 2014, Vladimir Rodionov wrote: > What does RS log file say, Siddharth? > > Btw, what does 'lac' stand for? In '10 lac'? > > -Vladimir > > On Wed, Nov 12, 2014 at

Re: ManagedTests and 4.1.0-RC1

2014-08-29 Thread Samarth Jain
es bytes and makes an int from >> the >> > bytes... >> > >> > >> https://github.com/apache/phoenix/blob/f99e5d8d609d326fb3571255cd8f47961b1c6860/phoenix-hadoop-compat/src/main/java/org/apache/phoenix/trace/TracingCompat.java#L56 >> > >> > >>

Re: Can't connect to Phoenix via JDBC in Scala

2014-08-29 Thread Samarth Jain
Hi Russell, I am not a Scala guy, but do you know if calling classOf[com.salesforce.phoenix.jdbc.PhoenixDriver] ends up loading the java class and hence executing the static block? If it doesn't you might want to try DriverManager.registerDriver( com.salesforce.phoenix.jdbc.PhoenixDriver) and then

Re: ManagedTests and 4.1.0-RC1

2014-08-28 Thread Samarth Jain
Dan, Can you tell me how you are running your tests? Do you have the test class annotated with the right category annotation - @Category( HBaseManagedTimeTest.class). Also, can you send over your test class to see what might be causing problems? Thanks, Samarth On Thu, Aug 28, 2014 at 10:34 AM,