Re: Phoenix and NodeJS

2015-05-18 Thread Eli Levine
Yeah, so you can see that code creates a String array containing the whole
result set. Usually a very bad idea for 400K-row result sets. You want to
process results incrementally, probably via paging using row-value
constructors and LIMIT.
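
A minimal sketch of that paging pattern, assuming a hypothetical table
EVENTS with composite primary key (HOST, TS) and a local cluster URL
(none of these names are from this thread): each pass fetches one page,
remembers the last key seen, and restarts the scan just past it with a
row-value constructor.

import java.sql.*;

public class PagedScan {
  static final int PAGE_SIZE = 1000;

  public static void main(String[] args) throws SQLException {
    String first = "SELECT host, ts FROM EVENTS"
                 + " ORDER BY host, ts LIMIT " + PAGE_SIZE;
    String next = "SELECT host, ts FROM EVENTS"
                + " WHERE (host, ts) > (?, ?)" // row-value constructor
                + " ORDER BY host, ts LIMIT " + PAGE_SIZE;
    try (Connection conn =
             DriverManager.getConnection("jdbc:phoenix:localhost")) {
      String lastHost = null;
      long lastTs = 0L;
      int fetched;
      do {
        fetched = 0;
        try (PreparedStatement ps =
                 conn.prepareStatement(lastHost == null ? first : next)) {
          if (lastHost != null) {
            ps.setString(1, lastHost);
            ps.setLong(2, lastTs);
          }
          try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
              lastHost = rs.getString(1);
              lastTs = rs.getLong(2);
              fetched++;
              // process the row here instead of buffering it
            }
          }
        }
      } while (fetched == PAGE_SIZE); // a short page means the scan is done
    }
  }
}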



Re: Phoenix and NodeJS

2015-05-18 Thread Isart Montane
Thanks James.

That code is from the node driver; I will try to get some advice from its
developer.

Thanks,


Re: Phoenix and NodeJS

2015-05-18 Thread James Taylor
Hi Isart,
That code isn't Phoenix code. This sounds like a Node JS issue. Vaclav
has done a lot with Node JS, so he may be able to give you some tips.
Thanks,
James


Re: Phoenix and NodeJS

2015-05-18 Thread Isart Montane
Hi Eli,

thanks a lot for your comments. I think you are right. I found the client
code that's causing the issue. Do you have an example I can use to patch
it? Is that the recommended way to access Phoenix? I've seen on the web
that there's also a query server available; is it worth a try?


public String[] query(String sql)
  {
    List<String> lsResults = new ArrayList<>();
    // try-with-resources guarantees the statement and connection are
    // closed even when the query throws
    try (Connection conn = this.dataSource.getConnection();
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql))
    {
      ResultSetMetaData data = rs.getMetaData();
      int numberOfColumns = data.getColumnCount();
      List<String> lsRows = new ArrayList<>();
      // first line: tab-separated column names
      for (int i = 1; i <= numberOfColumns; i++) {
        lsRows.add(data.getColumnName(i));
      }
      lsResults.add(String.join("\t", lsRows));
      lsRows.clear();
      // one tab-separated line per row: the entire result set is
      // accumulated in memory before anything is returned
      while (rs.next())
      {
        for (int i = 1; i <= numberOfColumns; i++) {
          lsRows.add(rs.getString(i));
        }
        lsResults.add(String.join("\t", lsRows));
        lsRows.clear();
      }
    }
    catch (Exception e)
    {
      e.printStackTrace();
      return null;
    }
    return lsResults.toArray(new String[0]);
  }
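
For contrast, a minimal sketch of the incremental alternative Eli
describes. This is not driver code and the callback shape is an
assumption: each row is handed to a consumer as it is read, so only one
row is held in memory at a time.

  public void streamQuery(String sql,
                          java.util.function.Consumer<String> onRow)
  {
    try (Connection conn = this.dataSource.getConnection();
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql))
    {
      int numberOfColumns = rs.getMetaData().getColumnCount();
      StringBuilder row = new StringBuilder();
      while (rs.next())
      {
        row.setLength(0);
        for (int i = 1; i <= numberOfColumns; i++) {
          if (i > 1) row.append('\t');
          row.append(rs.getString(i));
        }
        onRow.accept(row.toString()); // hand off one row at a time
      }
    }
    catch (SQLException e)
    {
      throw new RuntimeException(e);
    }
  }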

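On the query server question: the standalone Phoenix Query Server ships
with Phoenix 4.4 and later, so it is not an option on 4.3.1, though it may
be worth a try after upgrading. A rough sketch of the thin JDBC client
that talks to it; the host, port, and URL format below are the documented
defaults, assumed rather than taken from this thread.

import java.sql.*;

public class ThinClientSketch {
  public static void main(String[] args) throws SQLException {
    // the query server listens on port 8765 by default
    String url = "jdbc:phoenix:thin:url=http://localhost:8765";
    try (Connection conn = DriverManager.getConnection(url);
         ResultSet rs = conn.createStatement().executeQuery(
             "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    }
  }
}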

Problem while upgrading from Phoenix 4.0.0 to 4.3.1

2015-05-18 Thread Arun Kumaran Sabtharishi
Hello,

1. Currently using Phoenix 4.0.0-incubating for both client and server.
2. Upgraded to 4.3.1 (the most recent release).
3. While trying to connect with the command-line client (./sqlline.py),
the connection could not be established and the following error was thrown:

Error: ERROR 1013 (42M04): Table already exists. tableName=SYSTEM.CATALOG
(state=42M04,code=1013)
org.apache.phoenix.schema.NewerTableAlreadyExistsException: ERROR 1013
(42M04): Table already exists. tableName=SYSTEM.CATALOG

Deleting the SYSTEM.CATALOG table works, but that is not the intended
solution.

What is the solution/workaround for this problem?

Thanks,
Arun Sabtharishi


Re: Phoenix and NodeJS

2015-05-18 Thread Eli Levine
I don't have info on what your app does with results from Phoenix. If the
app is constructing some sort of object representations from Phoenix
results and holding on to them, I would look at what the memory footprint
of that is. I know this isn't very helpful but at this point I would try to
dig deeper into your app and the NodeJS driver rather than Phoenix, since
you mentioned the same queries run fine in sqlline.


Re: Problems with Phoenix and HBase

2015-05-18 Thread Ted Yu
Sending to Phoenix user mailing list.

Here is the thread:
http://search-hadoop.com/m/YGbbu2WzHtZBkq1

On Mon, May 18, 2015 at 7:20 AM, Asfare  wrote:

> Can someone give some tips?


Re: Phoenix and NodeJS

2015-05-18 Thread Isart Montane
Hi Eli,

thanks a lot for your answer. That might be a workaround, but I was hoping
for a more generic fix I can apply to the driver/Phoenix, since that would
require lots of changes to our code.

Any clue why it works with sqlline but not through the node driver?


Re: Phoenix and NodeJS

2015-05-18 Thread Eli Levine
Have you looked at paging [1] using Phoenix's row-value constructors
together with the LIMIT clause? That might be what you are looking for.

[1] http://phoenix.apache.org/paged.html

Eli


Phoenix and NodeJS

2015-05-18 Thread Isart Montane
Hi,

The company I work for is performing some tests on Phoenix with NodeJS. For
simple queries I didn't have any problems, but as soon as I start to use our
app I'm getting "process out of memory" errors on the client when I run
queries that return a big number of rows (i.e. 400K). I think the problem is
that the client tries to buffer all the results in RAM and that kills it.
The same query runs fine when I run it with sqlline.

So, is there a way to tell the client to stream the results (or batch them)
instead of buffering them all? Is raising the client memory limit the only
solution?

I'm using phoenix-4.3.1 and https://github.com/gaodazhu/phoenix-client as
the NodeJS driver.

Thanks,

Isart Montane


Re: Trying to setup unittests to query phoenix test db but getting UnsupportedOperationException

2015-05-18 Thread Ron van der Vegt

Thanks! I will look into it.

On 05/15/2015 06:24 PM, James Taylor wrote:

You'll want to derive from BaseHBaseManagedTimeIT. The
BaseConnectionlessQueryTest class is for compile-time-only or negative
tests, as it doesn't spin up any mini cluster.

Thanks,
James

On Fri, May 15, 2015 at 5:41 AM, Ron van der Vegt
 wrote:

Hello everyone,

I'm currently developing a REST API which queries a Phoenix table and
returns the results as JSON. I have no issues building the API itself, but
it would be really nice if I could write unit tests with dummy data to test
the API calls we created.

I was heading in the right direction, I hope, by extending the
BaseConnectionlessQueryTest class and setting up a test database:

String ddl = "CREATE TABLE test (id VARCHAR NOT NULL PRIMARY KEY, test_value CHAR(16))";
createTestTable(getUrl(), ddl, (byte[][]) null, (Long) null);

And it looks like I could also upsert some data:

Properties props = new Properties();
PhoenixConnection conn = (PhoenixConnection) DriverManager.getConnection(
    "jdbc:phoenix:none;test=true", props);
PreparedStatement statement = conn.prepareStatement(
    "UPSERT INTO test (id) VALUES ('meh')");
statement.execute();

But when I want to select data:

PreparedStatement statement = conn.prepareStatement("SELECT * FROM test");
ResultSet rs = statement.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString("ID"));
}

I am getting an UnsupportedOperationException. Could someone please explain
what I'm doing wrong, or whether my use case is even possible?

Thanks in advance!

Ron
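
A rough sketch of the switch James suggests, under the assumption that the
phoenix-core test jar (which provides BaseHBaseManagedTimeIT and getUrl()
in the Phoenix 4.x test sources) is on the test classpath. The base class
boots an HBase mini cluster, so the full query path actually runs:

import java.sql.*;
import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
import org.junit.Test;
import static org.junit.Assert.*;

public class RestApiIT extends BaseHBaseManagedTimeIT {
  @Test
  public void selectReturnsUpsertedRow() throws Exception {
    try (Connection conn = DriverManager.getConnection(getUrl())) {
      conn.createStatement().execute(
          "CREATE TABLE test (id VARCHAR NOT NULL PRIMARY KEY,"
          + " test_value CHAR(16))");
      conn.createStatement().execute(
          "UPSERT INTO test (id) VALUES ('meh')");
      conn.commit(); // Phoenix connections do not auto-commit by default
      ResultSet rs = conn.createStatement()
                         .executeQuery("SELECT * FROM test");
      assertTrue(rs.next());
      assertEquals("meh", rs.getString("ID"));
    }
  }
}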