Re: Phoenix Performance issue

2016-05-11 Thread Naveen Nahata
Thanks Mujtaba.

Could you tell me which version of Phoenix you are using?

-Naveen Nahata

On 11 May 2016 at 04:12, Mujtaba Chohan  wrote:

> Tried the following in Sqlline/Phoenix and HBase shell. Both take ~20ms for
> point lookups with local HBase.
>
> hbase(main):015:0> get 'MYTABLE','a'
> COLUMN                               CELL
>  0:MYCOL                            timestamp=1462515518048, value=b
>  0:_0                               timestamp=1462515518048, value=
>
> 2 row(s) in 0.0190 seconds
>
> 0: jdbc:phoenix:localhost> select * from mytable where pk1='a';
> +------+--------+
> | PK1  | MYCOL  |
> +------+--------+
> | a    | b      |
> +------+--------+
> 1 row selected (0.028 seconds)
>
> In your test, are you factoring out the initial cost of setting up the Phoenix
> connection? If not, measure the performance of subsequent runs by timing
> executeStatement and the full ResultSet iteration in a loop.
>
> -mujtaba
>
>
> On Tue, May 10, 2016 at 12:55 PM, Naveen Nahata ( SC ) <
> naveen.nah...@flipkart.com> wrote:
>
> > Hi,
> >
> > I am using Phoenix 4.5.2-HBase-0.98 to connect to HBase. To benchmark
> > Phoenix performance, I executed a select statement on the primary key using
> > both the Phoenix driver and the HBase client.
> >
> > Surprisingly, I found that the Phoenix driver is approx. 10-15 times slower
> > than the HBase client.
> >
> >
> > [inline screenshot with the benchmark timings omitted]
> >
> > In addition to this, I looked at the explain statement from Phoenix, which
> > states that the query is a lookup on one key.
> >
> > [inline screenshot of the explain output omitted]
> >
> > If the query is a lookup on one key, why is it taking so long?
> >
> > Code Ref.
> >
> > // Connecting via Phoenix
> >
> > String sql = "select * from fklogistics.shipment where shipmentId = 'WSRR4271782117'";
> > long startTime = System.nanoTime();
> > ResultSet rs1 = st.executeQuery(sql);
> > long endTime = System.nanoTime();
> > long duration = endTime - startTime;
> > System.out.println("Time taken by Phoenix: " + duration);
> >
> > // Connecting via the HBase client
> >
> > Get get = new Get(row);
> > startTime = System.nanoTime();
> > Result rs = table1.get(get);
> > endTime = System.nanoTime();
> > duration = endTime - startTime;
> > System.out.println("Time taken by HBase: " + duration);
> >
> > Please suggest why the query is so slow. Also, will upgrading the Phoenix
> > driver help with this?
> >
> > Thanks & Regards,
> >
> > Naveen Nahata
> >
> >
> >
>
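Mujtaba's advice above, factoring out one-time connection setup by timing repeated runs in a loop, can be sketched as follows. This is a minimal, self-contained illustration of the measurement pattern only; the class and method names are mine, and the stand-in workload would be replaced by the actual executeQuery plus full ResultSet iteration against a Phoenix JDBC connection.

```java
import java.util.Arrays;

public class QueryBenchmark {
    /** Times 'op' over several runs and returns the median in nanoseconds,
     *  discarding the first (warm-up) run, which absorbs one-time setup cost
     *  such as opening the Phoenix connection. */
    static long medianNanos(Runnable op, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            op.run();                       // e.g. executeQuery + iterating the full ResultSet
            samples[i] = System.nanoTime() - start;
        }
        long[] warm = Arrays.copyOfRange(samples, 1, runs); // drop the warm-up run
        Arrays.sort(warm);
        return warm[warm.length / 2];       // median is robust to GC pauses and outliers
    }

    public static void main(String[] args) {
        // Stand-in workload; replace with the Phoenix point lookup being measured.
        long nanos = medianNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000; i++) sum += i;
        }, 11);
        System.out.println("median ns: " + nanos);
    }
}
```

With this shape, the first run carries the connection and metadata-cache setup, and the reported median reflects steady-state point-lookup latency.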


Fwd: error Loading via MapReduce

2016-05-11 Thread kevin
-- Forwarded message --
From: kevin 
Date: 2016-05-11 16:19 GMT+08:00
Subject: Re: error Loading via MapReduce
To: "user.phoenix" 


Thanks. I didn't find the fs.defaultFS property being overwritten anywhere, and
I have changed to using Pig to load the table data into Phoenix.

2016-05-11 14:23 GMT+08:00 Gabriel Reid :

> Another idea: could you check in
> /home/dcos/hbase-0.98.16.1-hadoop2/conf (or elsewhere) to see if there
> is somewhere where the fs.defaultFS property is being overwritten. For
> example, in hbase-site.xml?
>
> On Wed, May 11, 2016 at 3:59 AM, kevin  wrote:
> > I have tried to modify core-site.xml:
> > 
> > hadoop.tmp.dir
> > file:/home/dcos/hdfs/tmp ->>
> > hdfs://master1:9000/tmp
> > 
> >
> > then the "Loading via MapReduce" command successfully built the MR job, but
> > this change is wrong for Hadoop: the result is that my Hadoop cluster can't work.
> >
> > 2016-05-11 9:49 GMT+08:00 kevin :
> >>
> >>
> >> my command is:
> >> $
> >>
> HADOOP_CLASSPATH=/home/dcos/hbase-0.98.16.1-hadoop2/lib/hbase-protocol-0.98.16.1-hadoop2.jar:/home/dcos/hbase-0.98.16.1-hadoop2/conf
> >> hadoop jar phoenix-4.6.0-HBase-0.98-client.jar
> >> org.apache.phoenix.mapreduce.CsvBulkLoadTool -t income_band -i
> >> /user/hive/warehouse/income_band/income_band.dat
> >>
> >> here is the log on the console:
> >>
> >> 16/05/11 09:25:04 INFO zookeeper.ZooKeeper: Client
> >> environment:java.class.path=
> >> /home/dcos/hadoop-2.7.1/etc/hadoop:   <-- this is the dir where my
> >> hadoop config files are.
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-configuration-1.6.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/gson-2.2.4.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/activation-1.1.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jsp-api-2.1.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-io-2.4.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/paranamer-2.3.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/log4j-1.2.17.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jets3t-0.9.0.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jettison-1.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jersey-server-1.9.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/avro-1.7.4.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-codec-1.4.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-cli-1.2.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-net-3.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/protobuf-java-2.5.0.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/hadoop-annotations-2.7.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-digester-1.8.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/guava-11.0.2.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-compress-1.4.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jsch-0.1.42.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jersey-core-1.9.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/api-util-1.0.0-M20.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/xz-1.0.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-httpclient-3.1.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/asm-3.2.jar:
> >>
> >>
> /home/dcos/hadoop-2.7.1/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/dcos/hadoop-2.7.1/share/hadoop/common/lib/commons-logging-1.1.3.jar:
> >>
> >>
> /

[jira] [Commented] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279814#comment-15279814
 ] 

Ankit Singhal commented on PHOENIX-2862:


[~giacomotaylor], any update on this?

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>
> currently, we allow upgrade of system tables to map to the system namespace by 
> enabling the "phoenix.schema.mapSystemTablesToNamespace" config (in conjunction 
> with "phoenix.connection.isNamespaceMappingEnabled"), 
> but we need to ensure the following whenever a client connects with the above 
> config:
> 1. The server should be upgraded, and the consistency of these properties 
> between client and server should be checked.
> 2. If the above property does not exist but system:catalog exists, we should 
> not start creating system.catalog.
> 3. If an old client connects, it should not create system.catalog again 
> (ignoring the upgrade) and start using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Create view using select *

2016-05-11 Thread Sergey Soldatov
Actually, it works exactly the way you want:
https://phoenix.apache.org/views.html
Please also read the Limitations chapter; it may be useful.

On Tue, May 10, 2016 at 8:46 PM, Swapna Swapna  wrote:
> I have an existing hbase table with 50k columns.
>
> Can we create a view in phoenix by mapping to an existing hbase table
> without specifying the schema of all 50k columns.
>
> Something like:
>
> Create view myview (pk varchar primary key) as select * from hbasetable
>
> For all columns to be available in my view without specifying schema of all
> 50k columns.


Re: Create view using select *

2016-05-11 Thread Ankit Singhal
And also take a look at the dynamic column support:
https://phoenix.apache.org/dynamic_columns.html

You don't need to specify all the columns at CREATE VIEW time; you can still
query other columns by specifying them (with their types) in the query itself.

On Wed, May 11, 2016 at 2:32 PM, Sergey Soldatov 
wrote:

> Actually it works exactly as you want.
> https://phoenix.apache.org/views.html
> Please also read Limitations chapter. It may be useful.
>
> On Tue, May 10, 2016 at 8:46 PM, Swapna Swapna 
> wrote:
> > I have an existing hbase table with 50k columns.
> >
> > Can we create a view in phoenix by mapping to an existing hbase table
> > without specifying the schema of all 50k columns.
> >
> > Something like:
> >
> > Create view myview (pk varchar primary key) as select * from hbasetable
> >
> > For all columns to be available in my view without specifying schema of
> all
> > 50k columns.
>
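Ankit's dynamic-column suggestion above can be sketched as follows: the view declares only the primary key, and each query names the extra columns it needs, together with their types, in parentheses after the table or view name. The view and column names below are hypothetical, not taken from this thread, and the SQL is only built as a string here; it would be executed through a Phoenix JDBC connection.

```java
public class DynamicColumnQuery {
    /** Builds a Phoenix SELECT that declares one extra column at query time
     *  (a dynamic column), so the view itself only needs the primary key. */
    static String select(String view, String pk, String dynCol, String dynType) {
        return "SELECT " + pk + ", " + dynCol
             + " FROM " + view + " (" + dynCol + " " + dynType + ")"
             + " WHERE " + pk + " = ?";
    }

    public static void main(String[] args) {
        // Hypothetical view/column names for illustration only.
        String sql = select("MYVIEW", "PK", "COL50000", "VARCHAR");
        System.out.println(sql);
        // Would be run via conn.prepareStatement(sql) on a Phoenix connection.
    }
}
```

This avoids ever enumerating all 50k columns up front; each consumer declares only the handful it reads.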


[jira] [Created] (PHOENIX-2889) No exception in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)
Ian Hellstrom created PHOENIX-2889:
--

 Summary: No exception in Phoenix-Spark when column does not exist
 Key: PHOENIX-2889
 URL: https://issues.apache.org/jira/browse/PHOENIX-2889
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.4.0
 Environment: Phoenix: 4.4
HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
Spark: 1.4.1 (compiled with Scala 2.10)
Reporter: Ian Hellstrom


When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
qualifiers where the user has erroneously specified these with lower caps, no 
exception is returned. Ideally, a 
{{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
lines like the following show up in the log

{code}
INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms 
ago, cancelled=false, msg=
{code}

A minimal example:

{code}
CREATE TABLE test (foo INT, bar VARCHAR);
UPSERT INTO test VALUES (1, 'hello');
UPSERT INTO test VALUES (2, 'bye');
{code}

In Spark (shell):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._
@transient lazy val hadoopConf: Configuration = new Configuration()
@transient lazy val hbConf: Configuration = 
HBaseConfiguration.create(hadoopConf)
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = 
hbConf)
{code}





[jira] [Updated] (PHOENIX-2889) No exception in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Hellstrom updated PHOENIX-2889:
---
Description: 
When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
qualifiers where the user has erroneously specified these with lower caps, no 
exception is returned. Ideally, an 
{{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
lines like the following show up in the log

{code}
INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms 
ago, cancelled=false, msg=
{code}

A minimal example:

{code}
CREATE TABLE test (foo INT, bar VARCHAR);
UPSERT INTO test VALUES (1, 'hello');
UPSERT INTO test VALUES (2, 'bye');
{code}

In Spark (shell):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._
@transient lazy val hadoopConf: Configuration = new Configuration()
@transient lazy val hbConf: Configuration = 
HBaseConfiguration.create(hadoopConf)
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = 
hbConf)
{code}

  was:
When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
qualifiers where the user has erroneously specified these with lower caps, no 
exception is returned. Ideally, a 
{{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
lines like the following show up in the log

{code}
INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms 
ago, cancelled=false, msg=
{code}

A minimal example:

{code}
CREATE TABLE test (foo INT, bar VARCHAR);
UPSERT INTO test VALUES (1, 'hello');
UPSERT INTO test VALUES (2, 'bye');
{code}

In Spark (shell):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._
@transient lazy val hadoopConf: Configuration = new Configuration()
@transient lazy val hbConf: Configuration = 
HBaseConfiguration.create(hadoopConf)
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = 
hbConf)
{code}


> No exception in Phoenix-Spark when column does not exist
> 
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Commented] (PHOENIX-2889) No exception in Phoenix-Spark when column does not exist

2016-05-11 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279886#comment-15279886
 ] 

Ankit Singhal commented on PHOENIX-2889:


[~hellstorm],
It is already fixed in 4.6/4.5.3 (PHOENIX-2196) by automatically capitalizing
the column names.
Please ask your vendor to provide you with a hotfix for the same.

> No exception in Phoenix-Spark when column does not exist
> 
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Resolved] (PHOENIX-2889) No exception in Phoenix-Spark when column does not exist

2016-05-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-2889.

Resolution: Duplicate

> No exception in Phoenix-Spark when column does not exist
> 
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Updated] (PHOENIX-2889) No exception thrown in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Hellstrom updated PHOENIX-2889:
---
Summary: No exception thrown in Phoenix-Spark when column does not exist  
(was: No exception in Phoenix-Spark when column does not exist)

> No exception thrown in Phoenix-Spark when column does not exist
> ---
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Commented] (PHOENIX-2889) No exception thrown in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279894#comment-15279894
 ] 

Ian Hellstrom commented on PHOENIX-2889:


Cool! I think the Phoenix version must match the Phoenix-Spark version, right?

> No exception thrown in Phoenix-Spark when column does not exist
> ---
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Issue Comment Deleted] (PHOENIX-2889) No exception thrown in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Hellstrom updated PHOENIX-2889:
---
Comment: was deleted

(was: Cool! I think the Phoenix version must match the Phoenix-Spark version, 
right?)

> No exception thrown in Phoenix-Spark when column does not exist
> ---
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





[jira] [Closed] (PHOENIX-2889) No exception thrown in Phoenix-Spark when column does not exist

2016-05-11 Thread Ian Hellstrom (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Hellstrom closed PHOENIX-2889.
--

> No exception thrown in Phoenix-Spark when column does not exist
> ---
>
> Key: PHOENIX-2889
> URL: https://issues.apache.org/jira/browse/PHOENIX-2889
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>Reporter: Ian Hellstrom
>  Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table with auto-capitalized 
> qualifiers where the user has erroneously specified these with lower caps, no 
> exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} is thrown but instead 
> lines like the following show up in the log
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 
> ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> CREATE TABLE test (foo INT, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = 
> HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf 
> = hbConf)
> {code}





Phoenix Tracing Web App Problem

2016-05-11 Thread Pranavan Theivendiram
Hi Devs,

 I have started the tracing web app. It starts with:

starting Trace Server, logging to
/tmp/phoenix/phoenix-pranavan-traceserver.log

But from the web side I am not able to access anything. When I try to
stop the trace server, I get the following log:

no Trace Server to stop because PID file not found,
/tmp/phoenix/phoenix-pranavan-traceserver.pid

Can you please help me find the actual problem?
*T. Pranavan*
*Junior Consultant | Department of Computer Science & Engineering
,University of Moratuwa*
*Mobile| *0775136836


[jira] [Assigned] (PHOENIX-2887) Uberjar application fail with "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteStri

2016-05-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-2887:
--

Assignee: Ankit Singhal

> Uberjar application fail with "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: PHOENIX-2887
> URL: https://issues.apache.org/jira/browse/PHOENIX-2887
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 4.8.0
>
>
> Uberjar applications operating over Phoenix 4.7.0 are expressing the same 
> symptoms as MR jobs from HBASE-8. Looks like PHOENIX-2417 introduced 
> direct use of {{ZeroCopyLiteralByteString}}, which causes those problems. We 
> need to replace use of that class with 
> {{org.apache.hadoop.hbase.util.ByteStringer}}. We should probably consider 
> also doing as HBase did, adding a pre-commit check for this "toxic" class so 
> that it doesn't sneak in again.
> {noformat}
> Caused by: java.sql.SQLException: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1215)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1176)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1434)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:491)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:414)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:406)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:402)
> at 
> org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:381)
> at 
> org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:403)
> ...
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:497)
> ... 6 more
> Caused by: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1036)
> at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:192)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1207)
> ... 22 more
> {noformat}





[jira] [Updated] (PHOENIX-2887) Uberjar application fail with "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteStrin

2016-05-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2887:
---
Attachment: PHOENIX-2887.patch

> Uberjar application fail with "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: PHOENIX-2887
> URL: https://issues.apache.org/jira/browse/PHOENIX-2887
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2887.patch
>
>
> Uberjar applications operating over Phoenix 4.7.0 are expressing the same 
> symptoms as MR jobs from HBASE-8. Looks like PHOENIX-2417 introduced 
> direct use of {{ZeroCopyLiteralByteString}}, which causes those problems. We 
> need to replace use of that class with 
> {{org.apache.hadoop.hbase.util.ByteStringer}}. We should probably also 
> consider doing what HBase did and adding a pre-commit check for this "toxic" 
> class so that it doesn't sneak in again.
> {noformat}
> Caused by: java.sql.SQLException: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1215)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1176)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1434)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:491)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:414)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:406)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:402)
> at 
> org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:381)
> at 
> org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:403)
> ...
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:497)
> ... 6 more
> Caused by: java.lang.IllegalAccessError: class 
> com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1036)
> at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:192)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1207)
> ... 22 more
> {noformat}





[jira] [Issue Comment Deleted] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2016-05-11 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2208:
--
Comment: was deleted

(was: For the regex query search can we have sql query %like ? Yes it is 
possible to have top headers for each feature. )

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string. Something like a drop down list that shows the query 
> string of the last N queries which can be selected from, with a search box 
> for a regex query string and perhaps time range that would search for the 
> trace ID under the covers. 





[jira] [Issue Comment Deleted] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2016-05-11 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2208:
--
Comment: was deleted

(was: Hi,

We can achieve this by having a module for mapping queries to tracing IDs. And 
this module includes user interface elements and it can be plugged to all pages 
where it needs as mentioned by [~mujtabachohan]
)

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string. Something like a drop down list that shows the query 
> string of the last N queries which can be selected from, with a search box 
> for a regex query string and perhaps time range that would search for the 
> trace ID under the covers. 





[jira] [Created] (PHOENIX-2890) Extend IndexTool to allow incremental index rebuilds

2016-05-11 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2890:
--

 Summary: Extend IndexTool to allow incremental index rebuilds
 Key: PHOENIX-2890
 URL: https://issues.apache.org/jira/browse/PHOENIX-2890
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ankit Singhal
Priority: Minor


Currently, IndexTool is used for the initial index build, but I think we should 
extend it so it can also be used to recover an index from its last disabled 
timestamp. 

In general terms, if we run IndexTool on an already existing/new index, then it 
should follow the same semantics as the background index rebuilding 
thread.





[jira] [Commented] (PHOENIX-2890) Extend IndexTool to allow incremental index rebuilds

2016-05-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280249#comment-15280249
 ] 

James Taylor commented on PHOENIX-2890:
---

Partial index rebuild is different because it needs to "replay" all data table 
mutations by doing a raw scan (see the code comments). This is necessary 
because the incremental index maintenance failed. The IndexTool only needs to 
generate the index rows for existing data rows, as incremental index maintenance 
is being done on new data coming in.

> Extend IndexTool to allow incremental index rebuilds
> 
>
> Key: PHOENIX-2890
> URL: https://issues.apache.org/jira/browse/PHOENIX-2890
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Priority: Minor
>
> Currently, IndexTool is used for the initial index build, but I think we 
> should extend it so it can also be used to recover an index from its last 
> disabled timestamp. 
> In general terms, if we run IndexTool on an already existing/new index, then 
> it should follow the same semantics as the background index rebuilding 
> thread.





[jira] [Commented] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280291#comment-15280291
 ] 

James Taylor commented on PHOENIX-2862:
---

I see now, [~ankit.singhal]. Thanks for the explanation. Seems like client and 
server should agree on both system table namespaces *and* regular table 
namespaces too. In MetaDataRegionObserver, for partial index rebuild, we look 
up the HBase table to replay the updates.

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>
> Currently, we allow upgrading system tables to map to the system namespace by 
> enabling the "phoenix.schema.mapSystemTablesToNamespace" config (in 
> conjunction with "phoenix.connection.isNamespaceMappingEnabled"), 
> but we need to ensure the following whenever a client connects with the above 
> config:
> 1. The server should be upgraded, and the consistency of these properties 
> between client and server should be checked.
> 2. If the above property does not exist but system:catalog exists, we should 
> not start creating system.catalog.
> 3. If an old client connects, it should not ignore the upgrade and create 
> system.catalog again; it should start using the upgraded table.





[jira] [Commented] (PHOENIX-2883) Region close during automatic disabling of index for rebuilding can lead to RS abort

2016-05-11 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280299#comment-15280299
 ] 

Josh Elser commented on PHOENIX-2883:
-

Some more updates. I spent some time with [~devaraj] yesterday talking over 
this one. I believe we are both in agreement that we get into the following 
(condensed) scenario (we've seen this across a few regions in a cluster):

* The {{Indexer}}'s {{preBatchMutate}} method ends up throwing an {{Index 
update failed}} error because the server cache is missing (likely evicted 
due to time, with lots of back-up on the system).
* The next flush for that region reports that the final memstoreSize is negative.
* All subsequent attempts to flush the region never run, because a sanity check 
is performed to see if the region has data to flush (by checking that {{memstoreSize 
> 0}}).
* A region move request is eventually received, and an attempt is made to 
close the region.
* The final attempt to flush is called but not run (just like the previous 
cases).
* The sanity check that each store's memstore is empty (verifying that the 
flushes ran) fails.

At this point, we haven't been able to figure out how the Region's memstore 
gets screwed up, but I have a patch I can put into HBase to more gracefully 
handle this scenario (not to mention catch any culprits that obviously screw up 
the memstore size).
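The accounting failure above can be sketched with a toy model (plain Java, not actual HBase code; the class and method names are illustrative): once the accounted size goes negative, the {{memstoreSize > 0}} check suppresses every later flush, including the final one required at region close.

```java
// Toy model of the region-close failure described above -- NOT HBase code.
public class MemstoreAccounting {
    private long memstoreSize;

    // A successful mutation adds its payload size to the accounted total.
    void add(long bytes) { memstoreSize += bytes; }

    // After a failed preBatchMutate, the provisional size is rolled back.
    // If a rollback happens without a matching add (or happens twice),
    // the accounted size goes negative.
    void rollback(long bytes) { memstoreSize -= bytes; }

    // The sanity check: flushes are skipped unless there is data to flush.
    boolean hasDataToFlush() { return memstoreSize > 0; }

    long size() { return memstoreSize; }

    public static void main(String[] args) {
        MemstoreAccounting region = new MemstoreAccounting();
        region.add(100);
        region.rollback(100); // normal rollback for the failed batch
        region.rollback(100); // unmatched rollback -> size goes negative
        // From here on, every flush (including the final one at close) is
        // skipped, so the close-time "memstore is empty" assertion fails.
        System.out.println("size=" + region.size()
                + " willFlush=" + region.hasDataToFlush());
    }
}
```

The open question in the comment above is which code path performs the unmatched rollback; the sketch only shows why the region then becomes unflushable.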

> Region close during automatic disabling of index for rebuilding can lead to 
> RS abort
> 
>
> Key: PHOENIX-2883
> URL: https://issues.apache.org/jira/browse/PHOENIX-2883
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>
> (disclaimer: still performing due-diligence on this one)
> I've been helping a user this week with what is thought to be a race 
> condition in secondary index updates. This user has a relatively heavy 
> write-based workload with a few tables that each have at least one index.
> What we have seen is that when the region distribution is changing 
> (concretely, we were doing a rolling restart of the cluster without the load 
> balancer disabled in the hopes of retaining as much availability as 
> possible), I've seen the following general outline in the logs:
> * An index update fails (due to {{ERROR 2008 (INT10)}}: the index metadata 
> cache expired or is just missing)
> * The index is taken offline to be asynchronously rebuilt
> * A flush on the data table's region is queued for quite some time
> * RS is asked to close a region (due to a move, commonly)
> * RS aborts because the memstore for the data table's region is in an 
> inconsistent state (e.g. {{Assertion failed while closing store  
>  flushableSize expected=0, actual= 193392. Current 
> memstoreSize=-552208. Maybe a coprocessor operation failed and left the 
> memstore in a partially updated state.}}
> Some relevant HBase issues include HBASE-10514 and HBASE-10844.
> Have been talking to [~ayingshu] and [~devaraj] about it, but haven't found 
> anything definitively conclusive yet. Will dump findings here.





[jira] [Commented] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280440#comment-15280440
 ] 

Ankit Singhal commented on PHOENIX-2862:


Thanks [~jamestaylor] for looking into this. Could you please review the same? 
And how about making "phoenix.schema.mapSystemTablesToNamespace" TRUE by 
default? (Though it would only take effect when 
"phoenix.connection.isNamespaceMappingEnabled" is made TRUE in the config, and 
SYSTEM tables would then be migrated automatically during the first connection.)
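For reference, the pair of settings under discussion would be enabled together in hbase-site.xml on both client and server, roughly like this (property names as used in this thread; released versions may name them differently):

```xml
<!-- Assumed hbase-site.xml fragment; property names taken from this thread. -->
<property>
  <name>phoenix.connection.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```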

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>
> Currently, we allow upgrading system tables to map to the system namespace by 
> enabling the "phoenix.schema.mapSystemTablesToNamespace" config (in 
> conjunction with "phoenix.connection.isNamespaceMappingEnabled"), 
> but we need to ensure the following whenever a client connects with the above 
> config:
> 1. The server should be upgraded, and the consistency of these properties 
> between client and server should be checked.
> 2. If the above property does not exist but system:catalog exists, we should 
> not start creating system.catalog.
> 3. If an old client connects, it should not ignore the upgrade and create 
> system.catalog again; it should start using the upgraded table.





[jira] [Created] (PHOENIX-2891) Support system-wide UDFs

2016-05-11 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created PHOENIX-2891:
-

 Summary: Support system-wide UDFs
 Key: PHOENIX-2891
 URL: https://issues.apache.org/jira/browse/PHOENIX-2891
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.4.0
Reporter: Nick Dimiduk


The current UDF implementation requires the use of DynamicClassLoader to locate 
function classes. This imposes all sorts of filesystem permission 
complications when running clients as non-hbase users. Some of these problems 
can be avoided by supporting class loading from the "standard" classloader, 
i.e., whatever is on hbase/sqlline's classpath. Not requiring the 
DynamicClassLoader means those permission issues can be avoided, and it better 
supports top-down installation of UDFs (e.g., via configuration management systems 
like Chef/Puppet). This will enable a simplified deployment model for system 
or "site" UDFs.





[jira] [Updated] (PHOENIX-2791) Support append only schema declaration

2016-05-11 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2791:

Attachment: PHOENIX-2791-addendum.patch

Attaching an addendum patch that enhances the test.

> Support append only schema declaration
> --
>
> Key: PHOENIX-2791
> URL: https://issues.apache.org/jira/browse/PHOENIX-2791
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2791-addendum.patch, PHOENIX-2791-v2.patch, 
> PHOENIX-2791.patch
>
>
> If we know in advance that columns will only be added to but never removed 
> from a schema, we can prevent the RPC from the client to the server when the 
> client already has all columns declared in the CREATE TABLE/VIEW IF NOT 
> EXISTS. To enable this, we can add an APPEND_ONLY_SCHEMA boolean flag to 
> SYSTEM.CATALOG. Or another potential name would be IMMUTABLE_SCHEMA to match 
> IMMUTABLE_ROWS?





[jira] [Resolved] (PHOENIX-2791) Support append only schema declaration

2016-05-11 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2791.
-
Resolution: Fixed

> Support append only schema declaration
> --
>
> Key: PHOENIX-2791
> URL: https://issues.apache.org/jira/browse/PHOENIX-2791
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2791-addendum.patch, PHOENIX-2791-v2.patch, 
> PHOENIX-2791.patch
>
>
> If we know in advance that columns will only be added to but never removed 
> from a schema, we can prevent the RPC from the client to the server when the 
> client already has all columns declared in the CREATE TABLE/VIEW IF NOT 
> EXISTS. To enable this, we can add an APPEND_ONLY_SCHEMA boolean flag to 
> SYSTEM.CATALOG. Or another potential name would be IMMUTABLE_SCHEMA to match 
> IMMUTABLE_ROWS?





[jira] [Commented] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280856#comment-15280856
 ] 

James Taylor commented on PHOENIX-2862:
---

bq. could you please review the same?
Not sure what you're asking me to review. I think it's fine to default 
phoenix.schema.mapSystemTablesToNamespace to true, assuming that it has no 
impact if phoenix.connection.isNamespaceMappingEnabled is false.

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>
> Currently, we allow upgrading system tables to map to the system namespace by 
> enabling the "phoenix.schema.mapSystemTablesToNamespace" config (in 
> conjunction with "phoenix.connection.isNamespaceMappingEnabled"), 
> but we need to ensure the following whenever a client connects with the above 
> config:
> 1. The server should be upgraded, and the consistency of these properties 
> between client and server should be checked.
> 2. If the above property does not exist but system:catalog exists, we should 
> not start creating system.catalog.
> 3. If an old client connects, it should not ignore the upgrade and create 
> system.catalog again; it should start using the upgraded table.





Re: Phoenix Tracing Web App Problem

2016-05-11 Thread Nick Dimiduk
Is there anything in /tmp/phoenix/phoenix-pranavan-traceserver.log ?

On Wed, May 11, 2016 at 5:00 AM, Pranavan Theivendiram <
pranavan...@cse.mrt.ac.lk> wrote:

> Hi Devs,
>
>  I have started the tracing web app. It is starting like
>
> starting Trace Server, logging to
> /tmp/phoenix/phoenix-pranavan-traceserver.log
>
> But from the web side, I am not able to access anything. When I try to
> stop the trace server, I am getting the following log.
>
> no Trace Server to stop because PID file not found,
> /tmp/phoenix/phoenix-pranavan-traceserver.pid
>
> Can you please help me on finding the actual problem?
> *T. Pranavan*
> *Junior Consultant | Department of Computer Science & Engineering
> ,University of Moratuwa*
> *Mobile| *0775136836
>


[jira] [Commented] (PHOENIX-2886) Union ALL with Char column not present in the table in Query 1 but in Query 2 throw exception

2016-05-11 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280911#comment-15280911
 ] 

Alicia Ying Shu commented on PHOENIX-2886:
--

[~maryannxue] [~jamestaylor]  The culprit is that Union All currently uses the 
"schema" of the first query to construct the result template, while the data 
type and length of the results returned by the second (or subsequent) queries 
can differ. When I implemented Union All, I had proposed passing the "schema" 
down with the result sets and evaluating it per individual query at runtime, 
but we did not choose that proposal. Would like to see your input on fixing 
this. Thanks. 

> Union ALL with Char column  not present in the table in Query 1 but in Query 
> 2 throw exception
> --
>
> Key: PHOENIX-2886
> URL: https://issues.apache.org/jira/browse/PHOENIX-2886
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>
> To reproduce:
> create table person ( id bigint not null primary key, firstname char(10), 
> lastname varchar(10) );
> upsert into person values( 1, 'john', 'doe');
> upsert into person values( 2, 'jane', 'doe');
> -- fixed value for char(10)
> select id, 'foo' firstname, lastname from person union all select * from 
> person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13
> -- fixed value for bigint
> select cast( 10 AS bigint) id, 'foo' firstname, lastname from person union 
> all select * from person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13





[jira] [Created] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-11 Thread Enis Soztutar (JIRA)
Enis Soztutar created PHOENIX-2892:
--

 Summary: Scan for pre-warming the block cache for 2ndary index 
should be removed
 Key: PHOENIX-2892
 URL: https://issues.apache.org/jira/browse/PHOENIX-2892
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 4.8.0


We have run into an issue in a mid-sized cluster with secondary indexes. The 
problem is that all handlers doing writes were blocked for > 5 mins waiting on 
a single scan from the secondary index to complete, causing all incoming RPCs 
to time out and resulting in write unavailability and further problems 
(disabling the index, etc.). We took jstack outputs continuously from the 
servers to understand what was going on. 

In the jstack outputs from that particular server, we can see three types of 
stacks (this is raw jstack so the thread names are not there unfortunately). 
  - First, there are a lot of threads waiting for the MVCC transactions started 
previously: 
{code}
Thread 15292: (state = BLOCKED)
 - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
imprecise)
 - 
org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.waitForPreviousTransactionsComplete(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry)
 @bci=86, line=253 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.completeMemstoreInsertWithSeqNum(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry,
 org.apache.hadoop.hbase.regionserver.SequenceId) @bci=29, line=135 (Compiled 
frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
 @bci=1906, line=3187 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
 @bci=79, line=2819 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
 long, long) @bci=12, line=2761 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
 org.apache.hadoop.hbase.regionserver.Region, 
org.apache.hadoop.hbase.quotas.OperationQuota, java.util.List, 
org.apache.hadoop.hbase.CellScanner) @bci=150, line=692 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
 org.apache.hadoop.hbase.quotas.OperationQuota, 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionAction, 
org.apache.hadoop.hbase.CellScanner, 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
 java.util.List, long) @bci=547, line=654 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
@bci=407, line=2032 (Compiled frame)
 - 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
 com.google.protobuf.RpcController, com.google.protobuf.Message) @bci=167, 
line=32213 (Compiled frame)
 - 
org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService, 
com.google.protobuf.Descriptors$MethodDescriptor, com.google.protobuf.Message, 
org.apache.hadoop.hbase.CellScanner, long, 
org.apache.hadoop.hbase.monitoring.MonitoredRPCHandler) @bci=59, line=2114 
(Compiled frame)
 - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
frame)
 - 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
 @bci=54, line=130 (Compiled frame)
 - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
(Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
{code}
The way MVCC works is that it assumes transactions are short-lived, and it 
guarantees that transactions are committed in strict serial order. 
Transactions in this case are write requests coming in and being executed from 
handlers. Each handler will start a transaction, get an mvcc write index (which 
is the mvcc trx number) and do the WAL append + memstore append. Then it 
marks the mvcc trx complete, and before returning to the user, we have to 
guarantee that the transaction is visible. So we wait for the mvcc read point 
to be advanced beyond our own write trx number. This is done at the above stack 
trace (waitForPreviousTransactionsComplete). A lot of threads with this stack 
means that one or more handlers have started an mvcc transaction but did not 
finish the work, and thus did not complete their transactions. MVCC read point 
can o

[jira] [Updated] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated PHOENIX-2892:
---
Attachment: phoenix-2892_v1.patch

[~jamestaylor] do you have background on why we are doing a scan for 
pre-warming the block cache? Is it worth doing the double work? 

Attaching a simple patch. Running the index tests with it. 

> Scan for pre-warming the block cache for 2ndary index should be removed
> ---
>
> Key: PHOENIX-2892
> URL: https://issues.apache.org/jira/browse/PHOENIX-2892
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.8.0
>
> Attachments: phoenix-2892_v1.patch
>
>
> We have run into an issue in a mid-sized cluster with secondary indexes. The 
> problem is that all handlers doing writes were blocked for > 5 mins waiting 
> on a single scan from the secondary index to complete, causing 
> all incoming RPCs to time out and resulting in write unavailability and further 
> problems (disabling the index, etc.). We took jstack outputs continuously 
> from the servers to understand what was going on. 
> In the jstack outputs from that particular server, we can see three types of 
> stacks (this is raw jstack so the thread names are not there unfortunately). 
>   - First, there are a lot of threads waiting for the MVCC transactions 
> started previously: 
> {code}
> Thread 15292: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.waitForPreviousTransactionsComplete(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry)
>  @bci=86, line=253 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.completeMemstoreInsertWithSeqNum(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry,
>  org.apache.hadoop.hbase.regionserver.SequenceId) @bci=29, line=135 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=1906, line=3187 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=79, line=2819 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2761 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, 
> org.apache.hadoop.hbase.quotas.OperationQuota, java.util.List, 
> org.apache.hadoop.hbase.CellScanner) @bci=150, line=692 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionAction, 
> org.apache.hadoop.hbase.CellScanner, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  java.util.List, long) @bci=547, line=654 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Message) @bci=167, 
> line=32213 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long, 
> org.apache.hadoop.hbase.monitoring.MonitoredRPCHandler) @bci=59, line=2114 
> (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
>  @bci=54, line=130 (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> The way MVCC works is that it assumes that transactions are short-lived, and 
> it guarantees that transactions are committed in strict serial order. 
> Transactions in this case are write requests coming in and being executed 
> from handlers. Each handler will star

[jira] [Commented] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281034#comment-15281034
 ] 

James Taylor commented on PHOENIX-2892:
---

Thanks for the detective work and explanation, [~enis]. We do the skip scan to 
load the batch of rows being mutated into the block cache because it improved 
overall performance of secondary index maintenance.

bq. I think that for some cases, this skip scan turns into an expensive and 
long scan
The skip scan would always do point lookups.

Functionally, removing the pre-warming won't have an impact, so if it prevents 
this situation, then feel free to remove it. However, if the skip scan is 
taking long, the cumulative time of doing the individual scans on each row will 
take even longer, so you may just be kicking the can down the road.

Have you tried lowering {{phoenix.mutate.batchSize}} or decreasing the number of 
rows being upserted before a commit is done by the client?
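For reference, a sketch of how this might be lowered on the client side (the property name is the one discussed in this thread; the value 100 is purely illustrative, not a recommendation):

```xml
<!-- client-side hbase-site.xml -->
<property>
  <name>phoenix.mutate.batchSize</name>
  <value>100</value> <!-- illustrative; the default batch size discussed in this thread is larger -->
</property>
```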

> Scan for pre-warming the block cache for 2ndary index should be removed
> ---
>
> Key: PHOENIX-2892
> URL: https://issues.apache.org/jira/browse/PHOENIX-2892
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.8.0
>
> Attachments: phoenix-2892_v1.patch
>
>
> We have run into an issue in a mid-sized cluster with secondary indexes. The 
> problem is that all handlers for doing writes were blocked waiting on a 
> single scan from the secondary index to complete for > 5mins, thus causing 
> all incoming RPCs to timeout and causing write un-availability and further 
> problems (disabling the index, etc). We've taken jstack outputs continuously 
> from the servers to understand what is going on. 
> In the jstack outputs from that particular server, we can see three types of 
> stacks (this is raw jstack so the thread names are not there unfortunately). 
>   - First, there are a lot of threads waiting for the MVCC transactions 
> started previously: 
> {code}
> Thread 15292: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.waitForPreviousTransactionsComplete(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry)
>  @bci=86, line=253 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.completeMemstoreInsertWithSeqNum(org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl$WriteEntry,
>  org.apache.hadoop.hbase.regionserver.SequenceId) @bci=29, line=135 (Compiled 
> frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=1906, line=3187 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
>  @bci=79, line=2819 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
>  long, long) @bci=12, line=2761 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  org.apache.hadoop.hbase.regionserver.Region, 
> org.apache.hadoop.hbase.quotas.OperationQuota, java.util.List, 
> org.apache.hadoop.hbase.CellScanner) @bci=150, line=692 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
>  org.apache.hadoop.hbase.quotas.OperationQuota, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionAction, 
> org.apache.hadoop.hbase.CellScanner, 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
>  java.util.List, long) @bci=547, line=654 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
>  org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
> @bci=407, line=2032 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
>  com.google.protobuf.RpcController, com.google.protobuf.Message) @bci=167, 
> line=32213 (Compiled frame)
>  - 
> org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService,
>  com.google.protobuf.Descriptors$MethodDescriptor, 
> com.google.protobuf.Message, org.apache.hadoop.hbase.CellScanner, long, 
> org.apache.hadoop.hbase.monitoring.MonitoredRPCHandler) @bci=59, line=2114 
> (Compiled frame)
>  - org.apache.hadoop.hbase.ipc.CallRunner.run() 

[jira] [Commented] (PHOENIX-2405) Improve performance and stability of server side sort for ORDER BY

2016-05-11 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281072#comment-15281072
 ] 

Maryann Xue commented on PHOENIX-2405:
--

Yes, that's correct, [~RCheungIT]. And don't forget to do 
"MemoryManager.allocate()" to make sure that the heap memory usage is tracked 
by Phoenix's memory manager, so that most likely a Phoenix 
InsufficientMemoryException will be thrown rather than an OOM. Sorry for the 
late reply, [~RCheungIT].

> Improve performance and stability of server side sort for ORDER BY
> --
>
> Key: PHOENIX-2405
> URL: https://issues.apache.org/jira/browse/PHOENIX-2405
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Haoran Zhang
>  Labels: gsoc2016
> Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up after very 
> well in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.
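As a minimal sketch of the heap-memory alternative mentioned above: spill a sorted run through a `FileChannel` using plain heap `ByteBuffer`s instead of `MappedByteBuffer`s, so no long-lived virtual-address-space mapping is created. The class name and the tiny data set are hypothetical, purely for illustration:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class HeapSpillSketch {
    public static void main(String[] args) throws IOException {
        int[] data = {5, 3, 9, 1, 7};
        Arrays.sort(data); // in-memory sort step

        // Spill the sorted run to disk through a heap buffer (no mmap).
        Path spill = Files.createTempFile("sort-spill", ".bin");
        ByteBuffer buf = ByteBuffer.allocate(data.length * Integer.BYTES);
        for (int v : data) buf.putInt(v);
        buf.flip();
        try (FileChannel ch = FileChannel.open(spill, StandardOpenOption.WRITE)) {
            ch.write(buf);
        }

        // Read the run back the same way.
        ByteBuffer readBack = ByteBuffer.allocate(data.length * Integer.BYTES);
        try (FileChannel ch = FileChannel.open(spill, StandardOpenOption.READ)) {
            ch.read(readBack);
        }
        readBack.flip();
        StringBuilder sb = new StringBuilder();
        while (readBack.hasRemaining()) sb.append(readBack.getInt()).append(' ');
        System.out.println(sb.toString().trim()); // prints "1 3 5 7 9"
        Files.delete(spill);
    }
}
```

Unlike a mapped buffer, the heap buffer here is reclaimed by ordinary GC as soon as it is unreachable.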



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281077#comment-15281077
 ] 

Enis Soztutar commented on PHOENIX-2892:


Thanks James. 
bq. Have you tried lower phoenix.mutate.batchSize or decreasing the number of 
rows being upserted before a commit is done by the client?
I believe the cluster was running with {{phoenix.mutate.batchSize=1000}} when 
this happened initially. With HBase's AsyncProcess grouping the edits by 
region server, there can be even more (or fewer) in the incoming {{multi()}} call. 

bq. Functionally, removing the pre-warming won't have an impact, so if it 
prevents this situation, then feel free to remove it. However, if the skip scan 
is taking long, the cumulative time of doing the individual scans on each row 
will take even longer, so you may just be kicking the can down the road.
We are doing the gets in parallel (I believe 10 threads by default), while the 
skip scan is still a serial scanner, no? Plus, we are doing double work. Even 
if the results are coming from the block cache, the scan overhead is not 
negligible. I am surprised that the secondary index performance was improved. 

I'm running the tests with {{mvn verify -Dtest=*Index*}}, but some tests are 
failing out of the box without the patch for me. Maybe I have a setup issue. 
[~chrajeshbab...@gmail.com] any idea? 

 

> Scan for pre-warming the block cache for 2ndary index should be removed
> ---
>
> Key: PHOENIX-2892
> URL: https://issues.apache.org/jira/browse/PHOENIX-2892
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.8.0
>
> Attachments: phoenix-2892_v1.patch
>
>

[jira] [Commented] (PHOENIX-2886) Union ALL with Char column not present in the table in Query 1 but in Query 2 throw exception

2016-05-11 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281103#comment-15281103
 ] 

Maryann Xue commented on PHOENIX-2886:
--

I do think the types should be coerced for both sides at compile time. Is 
there no such logic right now?

> Union ALL with Char column  not present in the table in Query 1 but in Query 
> 2 throw exception
> --
>
> Key: PHOENIX-2886
> URL: https://issues.apache.org/jira/browse/PHOENIX-2886
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>
> To reproduce:
> create table person ( id bigint not null primary key, firstname char(10), 
> lastname varchar(10) );
> upsert into person values( 1, 'john', 'doe');
> upsert into person values( 2, 'jane', 'doe');
> -- fixed value for char(10)
> select id, 'foo' firstname, lastname from person union all select * from 
> person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13
> -- fixed value for bigint
> select cast( 10 AS bigint) id, 'foo' firstname, lastname from person union 
> all select * from person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13





[jira] [Commented] (PHOENIX-2892) Scan for pre-warming the block cache for 2ndary index should be removed

2016-05-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281117#comment-15281117
 ] 

James Taylor commented on PHOENIX-2892:
---

bq. I believe the cluster was running with phoenix.mutate.batchSize=1000 when 
this happened initially. 
One thing to try would be to lower this and confirm that the batch size the 
client is using for multiple UPSERT VALUES is small as well. 

bq. We are doing the gets in parallel (I believe 10 threads by default), while 
the skip scan is still a serial scanner, no?
A scan (not a get, but I don't think that matters) will be issued for each row 
in the batch. These scans will be done serially due to PHOENIX-2671 (the change 
made to ensure that the scans are issued in the same cp thread). The advantage 
of a skip scan over lots of separate get/scans is outlined here[1], but perhaps 
HBase is better at this now as that was a long time ago.

Either way, you're going to hold the locks, due to the processing of the batch 
mutation, for the same amount of time, no?

FYI, some tests are flaky on a Mac (a couple of the join ones), but better on 
Linux.

[1] 
http://phoenix-hbase.blogspot.com/2013/05/demystifying-skip-scan-in-phoenix.html

> Scan for pre-warming the block cache for 2ndary index should be removed
> ---
>
> Key: PHOENIX-2892
> URL: https://issues.apache.org/jira/browse/PHOENIX-2892
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 4.8.0
>
> Attachments: phoenix-2892_v1.patch
>
>

[jira] [Commented] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281250#comment-15281250
 ] 

Ankit Singhal commented on PHOENIX-2862:


Thanks [~jamestaylor], my bad if I was not clear in my first comment; I was 
actually asking you to review the pull request if [~samarthjain] is not 
around. Anyway, can you or [~samarthjain] please review the pull request:
https://github.com/apache/phoenix/pull/167

The pull request also includes other changes and fixes related to schema 
support and other tools which are important for the 4.8 release.

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>
> Currently, we allow upgrade of system tables to map to the system namespace by 
> enabling the "phoenix.schema.mapSystemTablesToNamespace" config (in conjunction 
> with "phoenix.connection.isNamespaceMappingEnabled"), 
> but we need to ensure the following things whenever a client connects with the 
> above config:
> 1. The server should be upgraded, and the consistency of these properties 
> between client and server should be checked.
> 2. If the above property does not exist but system:catalog exists, we should not 
> start creating system.catalog.
> 3. If an old client connects, it should not create system.catalog again, 
> ignoring the upgrade, and start using it.
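A sketch of the client-side configuration in question, with both properties set together as the description requires (both property names come from this issue; the values are illustrative):

```xml
<!-- client-side hbase-site.xml; the checks proposed here would verify
     these flags agree between client and server before any upgrade -->
<property>
  <name>phoenix.connection.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```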





[jira] [Updated] (PHOENIX-2862) Do client server compatibility checks before upgrading system tables

2016-05-11 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2862:
---
Affects Version/s: 4.8.0

> Do client server compatibility checks before upgrading system tables
> 
>
> Key: PHOENIX-2862
> URL: https://issues.apache.org/jira/browse/PHOENIX-2862
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2862.patch
>
>





[jira] [Updated] (PHOENIX-2876) Using aggregation function in ORDER BY

2016-05-11 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2876:
-
Attachment: PHOENIX-2876-2.patch

updated version

> Using aggregation function in ORDER BY
> --
>
> Key: PHOENIX-2876
> URL: https://issues.apache.org/jira/browse/PHOENIX-2876
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2876-1.patch, PHOENIX-2876-2.patch
>
>
> {noformat}
> create table x (id integer primary key, i1 integer, i2 integer);
> upsert into x values (1, 1, 1);
> upsert into x values (2, 2, 2);
> upsert into x values (3, 2, 3);
> upsert into x values (4, 3, 3);
> upsert into x values (5, 3, 2);
> upsert into x values (6, 3, 1);
> {noformat}
> Test query:
> {noformat}
> select i1 from X group by i1 order by avg(i2) desc;
> {noformat}
> Expected result: 2, 3, 1
> Real result : 1, 3, 2
> On the other hand, 
> {noformat}
> select i1, avg(i2) from X group by i1 order by avg(i2) desc;
> {noformat}
> works correctly. 
> That happens because in ORDER BY we add nothing to the RowProjector if we deal 
> with aggregate functions. So there is a question: do we have any 
> restrictions on why we can't add the expression from ORDER BY to the RowProjector?
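As a plain-Java cross-check of the expected ordering (a sketch only; the rows mirror the upserts above, and the class and variable names are hypothetical):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class OrderByAggSketch {
    public static void main(String[] args) {
        // Rows mirror the upserts in the report: {id, i1, i2}
        int[][] rows = {{1, 1, 1}, {2, 2, 2}, {3, 2, 3}, {4, 3, 3}, {5, 3, 2}, {6, 3, 1}};
        // GROUP BY i1, computing avg(i2) per group
        Map<Integer, Double> avgByI1 = Arrays.stream(rows)
            .collect(Collectors.groupingBy(r -> r[1], Collectors.averagingInt(r -> r[2])));
        // ORDER BY avg(i2) DESC, emitting the i1 values
        String order = avgByI1.entrySet().stream()
            .sorted(Map.Entry.<Integer, Double>comparingByValue().reversed())
            .map(e -> String.valueOf(e.getKey()))
            .collect(Collectors.joining(", "));
        System.out.println(order); // prints "2, 3, 1", the expected result
    }
}
```

The averages are 1.0 (i1=1), 2.5 (i1=2), and 2.0 (i1=3), so descending order gives 2, 3, 1, matching the expected result in the report.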


