ArrayIndexOutOfBoundsException in PQS

2017-01-04 Thread Tulasi Paradarami
We noticed that PQS started raising ArrayIndexOutOfBoundsException in our production cluster. This exception is raised sporadically and goes away when PQS is restarted. Does anyone know what might be causing this exception? Are there any configuration (PQS and/or Avatica) parameters that we can modify

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Tulasi Paradarami
> Interesting! I haven't come across this one myself. > By Phoenix 4.7, am I to assume you mean 4.7.0? Phoenix version strings are 3 "digits", not 2. > My first guess would be that it might be a race condition around the closeStatement call
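
For reference, a minimal sketch of the kind of race speculated about above: one client thread is still iterating a ResultSet while another closes its Statement through the PQS thin client. The URL, table name, and query are assumptions for illustration, not taken from this thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CloseRaceSketch {
    public static void main(String[] args) throws Exception {
        // Thin-client URL for a PQS instance on the default port 8765 (hypothetical host).
        String url = "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF";
        try (Connection conn = DriverManager.getConnection(url)) {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id FROM my_table"); // hypothetical table

            // One thread closes the Statement (and with it the server-side ResultSet)...
            Thread closer = new Thread(() -> {
                try {
                    stmt.close();
                } catch (SQLException ignored) {
                }
            });
            closer.start();

            // ...while this thread keeps fetching frames from the server; the two can interleave.
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
            closer.join();
        }
    }
}
```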

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Tulasi Paradarami
avatica.connectioncache.expiryunit, avatica.statementcache.concurrency, avatica.statementcache.initialcapacity, avatica.statementcache.maxcapacity, avatica.statementcache.expiryduration, avatica.statementcache.expiryunit. On Thu, Jan 5, 2017 at 10:04 AM, Tulasi Paradarami <tulasi.krishn...@gmail.com>
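
These keys correspond to Avatica's connection and statement cache settings, which JdbcMeta reads from the Properties it is constructed with. PQS builds its JdbcMeta internally, but here is a minimal sketch of where the keys plug in; the values, the JDBC URL, and the assumption that the expiry unit is a java.util.concurrent.TimeUnit name are illustrative only, not recommendations.

```java
import java.sql.SQLException;
import java.util.Properties;

import org.apache.calcite.avatica.jdbc.JdbcMeta;

public class AvaticaCacheTuningSketch {
    public static void main(String[] args) throws SQLException {
        Properties info = new Properties();
        // Example values only; tune to your workload. Expiry units are assumed to be TimeUnit names.
        info.setProperty("avatica.statementcache.maxcapacity", "10000");
        info.setProperty("avatica.statementcache.expiryduration", "30");
        info.setProperty("avatica.statementcache.expiryunit", "MINUTES");
        info.setProperty("avatica.connectioncache.expiryduration", "60");
        info.setProperty("avatica.connectioncache.expiryunit", "MINUTES");

        // JdbcMeta sizes its connection and statement caches from these Properties.
        JdbcMeta meta = new JdbcMeta("jdbc:phoenix:localhost:2181", info);
        System.out.println("Caches configured: " + meta);
    }
}
```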

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Tulasi Paradarami
nt to do). > You can try setting org.apache.calcite.avatica.jdbc.JdbcMeta=DEBUG in the $PHOENIX_HOME/bin/log4j.properties file. That will print some messages when a statement or connection is automatically evicted from the respective cache. > Finally, no stack trace on

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Tulasi Paradarami
ement, both trying to close the ResultSet. > This is sounding more and more like an Avatica bug to me. Any chance you can share more of the TRACE logging that you've turned on and maybe open up a JIRA issue under the CALCITE project (and ping me),

Timeline consistency using PQS

2017-01-19 Thread Tulasi Paradarami
Hi, Does PQS support HBase's timeline consistency (HBASE-10070)? Looking at the connection properties implementation within Avatica, I see that the following are defined: ["transactionIsolation", "schema", "readOnly", "dirty", "autoCommit", "catalog"], but there isn't a property defined for setting the consistency level.
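
For context, HBASE-10070 exposes timeline consistency through the native HBase client API, per Get/Scan. A minimal sketch of that call (the table name and row key are hypothetical), which has no counterpart among the Avatica connection properties listed above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) { // hypothetical table
            Get get = new Get(Bytes.toBytes("row-1"));                     // hypothetical row key
            get.setConsistency(Consistency.TIMELINE); // HBASE-10070: allow the read to hit region replicas
            Result result = table.get(get);
            System.out.println("stale=" + result.isStale()); // true when served from a replica
        }
    }
}
```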

Re: Timeline consistency using PQS

2017-01-19 Thread Tulasi Paradarami
ions.DEFAULT_CONSISTENCY_LEVEL)); On Thu, Jan 19, 2017 at 2:35 PM, Tulasi Paradarami <tulasi.krishn...@gmail.com> wrote: > Hi, Does PQS support HBase's timeline consistency (HBASE-10070)? > Looking at the connection properties implementation within Avatica, I see t

Null array elements with joins

2018-06-19 Thread Tulasi Paradarami
Hi, I'm running a few tests against Phoenix arrays and running into this bug where array elements return null values when a join is involved. Is this a known issue/limitation of arrays? create table array_test_1 (id integer not null primary key, arr tinyint[5]); upsert into array_test_1 values (1001
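
A hypothetical end-to-end sketch of the scenario being described, via the thick JDBC driver; the second table, the upserted values, and the join query are assumptions added for illustration, not the original test case:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ArrayJoinSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS array_test_1 "
                       + "(id INTEGER NOT NULL PRIMARY KEY, arr TINYINT[5])");
            stmt.execute("CREATE TABLE IF NOT EXISTS array_test_2 "
                       + "(id INTEGER NOT NULL PRIMARY KEY, name VARCHAR)");
            stmt.executeUpdate("UPSERT INTO array_test_1 VALUES (1001, ARRAY[1, 2, 3, 4, 5])");
            stmt.executeUpdate("UPSERT INTO array_test_2 VALUES (1001, 'row-1001')");
            conn.commit();

            // Projecting array elements through a join is where the nulls were observed.
            ResultSet rs = stmt.executeQuery(
                "SELECT t2.name, t1.arr[1], t1.arr[2] "
              + "FROM array_test_1 t1 JOIN array_test_2 t2 ON t1.id = t2.id");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getObject(2) + " " + rs.getObject(3));
            }
        }
    }
}
```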

Re: Null array elements with joins

2018-06-20 Thread Tulasi Paradarami
Tested on 4.7, 4.11 & 4.14. https://issues.apache.org/jira/browse/PHOENIX-4791 On Tue, Jun 19, 2018 at 8:10 PM Jaanai Zhang wrote: > What's your Phoenix version? > Yun Zhang > Best regards!

Re: Null array elements with joins

2018-06-27 Thread Tulasi Paradarami
es are initialized in KeyValueColumnExpression for ARRAY elements during tuple projection. That is, when projecting array elements, it should perhaps be initialized with "_v, ", instead of "default cf, actual column qualifier"? It'll be helpful to hear from the Phoenix experts

Re: Phoenix on Amazon EMR

2014-09-08 Thread Tulasi Paradarami
Yes, a blog post will be of great help, especially considering it's not clear when Amazon will upgrade their default version to 3.x or 4.x. > On Sep 8, 2014, at 9:07 PM, James Taylor wrote: > Thanks, Puneet. That's super helpful. Was (2) difficult to do? That might make an interesting blog i

Bulk-loader performance

2015-03-04 Thread Tulasi Paradarami
Hi, Here are the details of our environment: Phoenix 4.3, HBase 0.98.6. I'm loading data to a Phoenix table using the CSV bulk-loader and it is processing about 16,000 - 20,000 rows/sec. I noticed that the bulk-loader spends up to 40% of the execution time in the following steps. //... csvRecord
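
For context, a minimal sketch of driving the CSV bulk loader programmatically via ToolRunner; the table name, input path, and ZooKeeper quorum are placeholders, and the long option names are assumed from the 4.x tool rather than taken from this thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), new String[] {
            "--table", "MY_TABLE",          // hypothetical target table
            "--input", "/tmp/data.csv",     // hypothetical HDFS input path
            "--zookeeper", "localhost:2181" // hypothetical quorum
        });
        System.exit(exitCode);
    }
}
```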

Re: add a snapshotted phoenix table

2015-03-11 Thread Tulasi Paradarami
How big is the table? I think it's taking so long to create the table because Phoenix creates a new column "_0" for each row with null values. So, when it failed, the upserts were only partially complete but the table is available for querying. Since a view doesn't perform this upsert, it's faster to create
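
A minimal sketch of mapping an existing HBase table as a Phoenix view instead of a table (the HBase table and column names are hypothetical); the view avoids the per-row upsert of the empty "_0" cell described above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MapExistingTableSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Read-only mapping over an existing HBase table: no per-row rewrite of existing data.
            stmt.execute("CREATE VIEW \"events\" "
                       + "(pk VARCHAR PRIMARY KEY, \"cf\".\"val\" VARCHAR)");

            // CREATE TABLE over the same HBase table would also work, but Phoenix then
            // upserts an empty "_0" cell into every existing row, which is the slow part.
        }
    }
}
```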