Re: Phoenix MR integration API only accepts index of column for setting column value

2016-01-06 Thread James Taylor
My mistake - I was thinking of ResultSet.getString(String, String) and ResultSet.getString(int, String). PreparedStatement sets parameters based on the order in which you list the ? placeholders in your statement - they don't necessarily map to columns at all. These are all standard JDBC interfaces, not defined by Phoenix.
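For readers following along, a minimal sketch of positional parameter binding with a Phoenix UPSERT (the STOCKS table, its columns, and the connection URL are hypothetical, not from this thread):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class UpsertExample {
        public static void main(String[] args) throws SQLException {
            // Hypothetical table: STOCKS(STOCK_NAME VARCHAR PRIMARY KEY, PRICE DOUBLE).
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                PreparedStatement pstmt = conn.prepareStatement(
                    "UPSERT INTO STOCKS (STOCK_NAME, PRICE) VALUES (?, ?)");
                // Parameters bind to the 1-based position of each ?, not to column names.
                pstmt.setString(1, "ACME");  // first ?
                pstmt.setDouble(2, 105.35);  // second ?
                pstmt.executeUpdate();
                conn.commit();
            }
        }
    }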

Re: array of BIGINT index

2016-01-06 Thread Kumar Palaniappan
Thanks, James. We will look into it. We need to find a way to overcome the full scan.

On Wed, Jan 6, 2016 at 9:26 AM, James Taylor wrote:
> In that case, you'd need PHOENIX-1544 to be implemented.
>
> On Wed, Jan 6, 2016 at 8:52 AM, Kumar Palaniappan <
> kpalaniap...@marinsoftware.com> wrote:
> >

Re: Phoenix MR integration API only accepts index of column for setting column value

2016-01-06 Thread anil gupta
Hi James, Maybe I am missing your point. I don't see the following method in the PreparedStatement interface: pstmt.setString("STOCK_NAME", stockName); Do I need to use something other than the Phoenix MR integration to get that method? Thanks, Anil Gupta

On Tue, Jan 5, 2016 at 8:48 PM, James Taylor

Getting null pointer exception while invoking sqlline

2016-01-06 Thread anil gupta
Hi, We are running Phoenix 4.4, and while invoking sqlline we are getting an NPE:

16/01/06 13:33:05 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/01/06 13:33:05 WARN util.HeapMemorySizeUtil: hbase.regionserver.glo

Re: array of BIGINT index

2016-01-06 Thread James Taylor
In that case, you'd need PHOENIX-1544 to be implemented.

On Wed, Jan 6, 2016 at 8:52 AM, Kumar Palaniappan <
kpalaniap...@marinsoftware.com> wrote:
> Unfortunately, changing the table is not an option for us at this time.
>
> On Tue, Jan 5, 2016 at 6:27 PM, James Taylor
> wrote:
>
>> If the "find

Re: Can phoenix local indexes create a deadlock after an HBase full restart?

2016-01-06 Thread Artem Ervits
This was answered in this thread: https://community.hortonworks.com/questions/8757/phoenix-local-indexes.html

On Wed, Jan 6, 2016 at 10:16 AM, Pedro Gandola wrote:
> Hi Guys,
>
> The issue is a deadlock, but it's not related to Phoenix, and it can be
> resolved by increasing the number of threads re

Re: array of BIGINT index

2016-01-06 Thread Kumar Palaniappan
Unfortunately, changing the table is not an option for us at this time.

On Tue, Jan 5, 2016 at 6:27 PM, James Taylor wrote:
> If the "finding customers that have a particular account" is a common
> query, you might consider modifying your schema by pulling the account into
> an optional/nullable
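The thread is cut off before the full suggestion, but as a rough, hypothetical sketch of that kind of schema change (all table, column, and index names here are invented, as is the connection URL):

    // Pull the commonly queried account out of the BIGINT array into its
    // own nullable column, then index it so the query can avoid a full scan.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaChangeSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("ALTER TABLE CUSTOMERS ADD ACCOUNT_ID BIGINT");
                stmt.execute("CREATE INDEX ACCOUNT_IDX ON CUSTOMERS (ACCOUNT_ID)");
            }
        }
    }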

Re: Can phoenix local indexes create a deadlock after an HBase full restart?

2016-01-06 Thread Pedro Gandola
Hi Guys, The issue is a deadlock, but it's not related to Phoenix, and it can be resolved by increasing the number of threads responsible for opening the regions:

> hbase.regionserver.executor.openregion.threads
> 100

Got help from here
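For anyone applying the same fix, that setting belongs in hbase-site.xml on the region servers; a minimal sketch using the value quoted above:

    <!-- hbase-site.xml: raise the thread pool that opens regions on startup. -->
    <property>
      <name>hbase.regionserver.executor.openregion.threads</name>
      <value>100</value>
    </property>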

Re: Re: error when getting data from Phoenix 4.5.2 on CDH 5.5.x via Spark 1.5

2016-01-06 Thread Josh Mahonin
Hi, Is it still the same 'No suitable driver' exception, or is it something else? Have you tried using the 'yarn-cluster' mode? I've had success with that personally, although I don't have any experience on the CDH stack.

Josh

On Wed, Jan 6, 2016 at 2:59 AM, sac...@outlook.com wrote:
> hi j
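For reference, a minimal sketch of reading a Phoenix table from Spark 1.5 with the phoenix-spark integration (the table name and ZooKeeper quorum are placeholders; on YARN the Phoenix client jar also needs to be on the driver and executor classpaths):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class PhoenixSparkRead {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("phoenix-read");
            JavaSparkContext sc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(sc);

            // Placeholder table and quorum; substitute your own.
            DataFrame df = sqlContext.read()
                .format("org.apache.phoenix.spark")
                .option("table", "MY_TABLE")
                .option("zkUrl", "zk1,zk2,zk3:2181")
                .load();
            df.show();
        }
    }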

Re: Thin Client: Connection Refused

2016-01-06 Thread Thomas Decaux
Could you use the JDBC protocol instead of the thin client? Otherwise I don't see a solution, maybe because of the nature of HTTP (which is what the thin client uses).

On Jan 5, 2016 11:43 PM, wrote:
> Hello Thomas,
>
> Thank you! This is what was missing! :)
>
> I noticed that the dbConnection.commit(); is no
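As an illustrative sketch of the two connection styles being discussed (host names and ports are placeholders): the thin client speaks HTTP to the Phoenix Query Server, while the regular JDBC driver talks to ZooKeeper/HBase directly.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ConnectionStyles {
        public static void main(String[] args) throws SQLException {
            // "Thick" JDBC driver: goes through ZooKeeper to HBase directly.
            Connection thick =
                DriverManager.getConnection("jdbc:phoenix:zk-host:2181");

            // Thin client: HTTP to the Phoenix Query Server (default port 8765).
            Connection thin = DriverManager.getConnection(
                "jdbc:phoenix:thin:url=http://pqs-host:8765");

            thick.close();
            thin.close();
        }
    }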

Re: Re: error when getting data from Phoenix 4.5.2 on CDH 5.5.x via Spark 1.5

2016-01-06 Thread sac...@outlook.com
Hi Josh: I did what you said, and now I can run my code in spark-shell --master local without any other confs, but when it comes to 'yarn-client' the error is the same. To give you more information: we have 11 nodes, 3 ZKs, and 2 masters. I am sure all the 11