Tool for visualizing tracing info with Phoenix Tracing Web App [GSoC]

2015-08-20 Thread Ayola Jayamaha
Hi All,

You can find all the tasks and milestones for GSoC 2015 - Phoenix in the two
PRs[1,2]. They contain a launch script that starts the Jetty server and brings up
a web application on a user-preferred port. The back-end services query the
SYSTEM.TRACING_STATS table and project its contents as JSON output.
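
As an illustration, here is a minimal sketch of what the back-end service does
(hypothetical class name, assuming the standard Phoenix JDBC driver and the
default tracing table columns; the actual servlet code is in the PRs):

// Minimal sketch (not the actual servlet): query SYSTEM.TRACING_STATS over
// Phoenix JDBC and print each trace row as a small JSON object.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TraceQuerySketch {
    public static void main(String[] args) throws Exception {
        // assumes a local HBase/ZooKeeper quorum on the default port
        try (Connection con = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT trace_id, description, start_time, end_time "
                     + "FROM SYSTEM.TRACING_STATS LIMIT 25")) {
            while (rs.next()) {
                System.out.printf("{\"trace_id\": %d, \"description\": \"%s\", \"duration\": %d}%n",
                    rs.getLong("trace_id"),
                    rs.getString("description"),
                    rs.getLong("end_time") - rs.getLong("start_time"));
            }
        }
    }
}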

You can try them out as described in the blog post[3].

The respective JIRAs are given in [4,5]. The Phoenix tracing page
https://phoenix.apache.org/tracing.html will be updated; the patch with the
screenshots is attached to [6].

All the blog posts I have written regarding GSoC 2015 can be found here[7].

Thanks.

On Fri, Aug 21, 2015 at 12:11 AM, Ayola Jayamaha 
wrote:

> Hi All,
>
> The Phoenix Tracing Web App main features and setup information can be
> found on the following blog posts[1,2].
>
> [1]
> http://ayolajayamaha.blogspot.com/2015/08/apache-phoenix-web-application.html
>
> [2]
> http://ayolajayamaha.blogspot.com/2015/08/apache-phoenix-web-application-part-02.html
>
> --
> Best Regards,
> Nishani Jayamaha
> http://ayolajayamaha.blogspot.com/
>
>
>
[1] https://github.com/apache/phoenix/pull/111
[2] https://github.com/apache/phoenix/pull/112
[3]
http://ayolajayamaha.blogspot.com/2015/08/apache-phoenix-web-application.html
[4] https://issues.apache.org/jira/browse/PHOENIX-2186
[5] https://issues.apache.org/jira/browse/PHOENIX-2187
[6] https://issues.apache.org/jira/browse/PHOENIX-2190
[7] http://ayolajayamaha.blogspot.com/search?q=phoenix

-- 
Best Regards,
Nishani Jayamaha
http://ayolajayamaha.blogspot.com/


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706241#comment-14706241
 ] 

Hudson commented on PHOENIX-2031:
-

FAILURE: Integrated in Phoenix-master #877 (See 
[https://builds.apache.org/job/Phoenix-master/877/])
PHOENIX-2031 - Unable to process timestamp/Date data loaded via Phoenix 
org.apache.phoenix.pig.PhoenixHBaseLoader(ayingshu) (ravimagham: rev 
0f84104e2d8914436bb1c1e5f8fa1d40118d1343)
* phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java
* phoenix-pig/src/it/java/org/apache/phoenix/pig/PhoenixHBaseLoaderIT.java


> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: PhoenixHbaseStorage on secure cluster

2015-08-20 Thread Ravi Kiran
Hi Siddhi,
   I remember the fix was done and tested as part of
https://issues.apache.org/jira/browse/PHOENIX-1078. If possible, could you
go a bit deeper in explaining how you are calling PhoenixHBaseStorage from
a map task?

Regards
Ravi

On Thu, Aug 20, 2015 at 6:54 PM, Siddhi Mehta  wrote:

> Hey Guys,
>
> Just wanted to ping once again and see if anyone has tried a Phoenix-Pig
> integration job against a secure HBase cluster.
> The Pig job is started from within a map task.
>
>
> I see the following exception in the HMaster logs
> ipc.RpcServer - RpcServer.listener,port=6: count of bytes read: 0
> org.apache.hadoop.hbase.security.AccessDeniedException: Authentication is
> required
> at
>
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1516)
> at
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:856)
> at
>
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:647)
> at
>
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:622)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
> Somewhere in the flow my HBASE_AUTH_TOKEN is being messed up.
>
> --Siddhi
>
> On Wed, Aug 19, 2015 at 7:39 PM, Siddhi Mehta  wrote:
>
> > Hey Guys
> >
> >
> > I am trying to make use of PhoenixHBaseStorage to write to an HBase
> > table.
> >
> >
> > The way we start this Pig job is from within a map task (similar to Oozie).
> >
> >
> > I run TableMapReduceUtil.initCredentials(job) on the client to get the
> > correct AuthTokens for my map task
> >
> >
> > I have ensured that hbase-site.xml is on the classpath for the Pig job, and
> > also the hbase-client and hbase-server jars.
> >
> >
> > Any ideas on what I could be missing?
> >
> >
> > I am using Phoenix 4.5 and HBase 0.98.13.
> >
> >
> > I see the following exception in the logs of the Pig job that tries
> > writing to HBase:
> >
> >
> >
> > Aug 20, 2015 12:04:31 AM
> > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper 
> > INFO: Process identifier=hconnection-0x3c1e23ff connecting to ZooKeeper
> > ensemble=hmaster1:2181,hmaster2:2181,hmaster3:2181
> > Aug 20, 2015 12:04:31 AM
> >
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > makeStub
> > INFO: getMaster attempt 1 of 35 failed; retrying after sleep of 100,
> > exception=com.google.protobuf.ServiceException:
> > java.lang.NullPointerException
> > Aug 20, 2015 12:04:31 AM
> >
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > makeStub
> > INFO: getMaster attempt 2 of 35 failed; retrying after sleep of 200,
> > exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> > to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6
>  failed on
> local
> > exception: java.io.EOFException
> > Aug 20, 2015 12:04:31 AM
> >
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > makeStub
> > INFO: getMaster attempt 3 of 35 failed; retrying after sleep of 300,
> > exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> > to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6
>  failed on
> local
> > exception: java.io.EOFException
> > Aug 20, 2015 12:04:31 AM
> >
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > makeStub
> > INFO: getMaster attempt 4 of 35 failed; retrying after sleep of 500,
> > exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> > to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6
>  failed on
> local
> > exception: java.io.EOFException
> > Aug 20, 2015 12:04:32 AM:
> >
>
>
>
> --
> Regards,
> Siddhi
>


[jira] [Comment Edited] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705274#comment-14705274
 ] 

Nishani  edited comment on PHOENIX-2187 at 8/21/15 4:18 AM:


Dependencies and Licensing

Here are the dependencies and licensing of the JavaScript libraries used in the
front-end application of the Phoenix Tracing Web App.

angular-mocks.js   @license AngularJS v1.3.15  http://angularjs.org  License: 
MIT
angular-route.js  @license AngularJS v1.3.8  http://angularjs.org License: MIT
angular.js  @license AngularJS v1.3.15  http://angularjs.org License: MIT
bootstrap.js Bootstrap v3.3.4 (http://getbootstrap.com) Licensed under MIT 
(https://github.com/twbs/bootstrap/blob/master/LICENSE)
jquery-2.1.4.js  MIT license http://jquery.org/license
ng-google-chart.js @license 
(https://github.com/angular-google-chart/angular-google-chart) License: MIT
ui-bootstrap-tpls.js  Version: 0.13.0 (https://github.com/angular-ui/bootstrap) 
License: MIT (https://github.com/angular-ui/bootstrap/blob/master/LICENSE)
Font Awesome 4.3.0 - License - http://fontawesome.io/license (CSS: MIT License) 
(Font Awesome's licensing is a combination of the SIL open font license, MIT 
license for code)
Bootstrap v3.3.4 and bootstrap-theme v3.3.4 - Licensed under MIT - 
(https://github.com/twbs/bootstrap/blob/master/LICENSE)
Icons from Glyphicons Free, licensed under CC BY 3.0. (Coming with bootstrap) 
jasmine-maven-plugin- 1.3.1.6v -License - Apache License - 
(https://github.com/searls/jasmine-maven-plugin/blob/master/LICENSE.txt)



was (Author: nishani):
Dependencies and Licensing

Here are the dependencies and Licensing of the javascript libraries used in the 
front-end  application of the Phoenix Tracing Web App.

angular-mocks.js   @license AngularJS v1.3.15  http://angularjs.org  License: 
MIT
angular-route.js  @license AngularJS v1.3.8  http://angularjs.org License: MIT
angular.js  @license AngularJS v1.3.15  http://angularjs.org License: MIT
bootstrap.js Bootstrap v3.3.4 (http://getbootstrap.com) Licensed under MIT 
(https://github.com/twbs/bootstrap/blob/master/LICENSE)
jquery-2.1.4.js  MIT license http://jquery.org/license
ng-google-chart.js @license 
(https://github.com/angular-google-chart/angular-google-chart) License: MIT
ui-bootstrap-tpls.js  Version: 0.13.0 (https://github.com/angular-ui/bootstrap) 
License: MIT (https://github.com/angular-ui/bootstrap/blob/master/LICENSE)


> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706193#comment-14706193
 ] 

maghamravikiran commented on PHOENIX-2031:
--

Thanks [~ayingshu] I am applying the patch now. 

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: PhoenixHbaseStorage on secure cluster

2015-08-20 Thread Siddhi Mehta
Hey Guys,

Just wanted to ping once again and see if anyone has tried a Phoenix-Pig
integration job against a secure HBase cluster.
The Pig job is started from within a map task.


I see the following exception in the HMaster logs
ipc.RpcServer - RpcServer.listener,port=6: count of bytes read: 0
org.apache.hadoop.hbase.security.AccessDeniedException: Authentication is
required
at
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1516)
at
org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:856)
at
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:647)
at
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:622)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Somewhere in the flow my HBASE_AUTH_TOKEN is being messed up.
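
For reference, the client-side credential setup looks roughly like this (a
simplified sketch with illustrative names, using the standard
TableMapReduceUtil API; our actual launcher differs):

// Simplified sketch of the client-side credential setup (illustrative only).
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class SecureJobSetupSketch {
    public static Job createJob() throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "pig-launcher");
        // Fetch an HBase delegation token (HBASE_AUTH_TOKEN) for the submitting
        // user and add it to the job's credentials so tasks can authenticate.
        TableMapReduceUtil.initCredentials(job);
        return job;
    }
}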

--Siddhi

On Wed, Aug 19, 2015 at 7:39 PM, Siddhi Mehta  wrote:

> Hey Guys
>
>
> I am trying to make use of PhoenixHBaseStorage to write to an HBase table.
>
>
> The way we start this Pig job is from within a map task (similar to Oozie).
>
>
> I run TableMapReduceUtil.initCredentials(job) on the client to get the
> correct AuthTokens for my map task
>
>
> I have ensured that hbase-site.xml is on the classpath for the Pig job, and
> also the hbase-client and hbase-server jars.
>
>
> Any ideas on what I could be missing?
>
>
> I am using Phoenix 4.5 and HBase 0.98.13.
>
>
> I see the following exception in the logs of the Pig job that tries
> writing to HBase:
>
>
>
> Aug 20, 2015 12:04:31 AM
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper 
> INFO: Process identifier=hconnection-0x3c1e23ff connecting to ZooKeeper
> ensemble=hmaster1:2181,hmaster2:2181,hmaster3:2181
> Aug 20, 2015 12:04:31 AM
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> makeStub
> INFO: getMaster attempt 1 of 35 failed; retrying after sleep of 100,
> exception=com.google.protobuf.ServiceException:
> java.lang.NullPointerException
> Aug 20, 2015 12:04:31 AM
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> makeStub
> INFO: getMaster attempt 2 of 35 failed; retrying after sleep of 200,
> exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6 failed on local
> exception: java.io.EOFException
> Aug 20, 2015 12:04:31 AM
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> makeStub
> INFO: getMaster attempt 3 of 35 failed; retrying after sleep of 300,
> exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6 failed on local
> exception: java.io.EOFException
> Aug 20, 2015 12:04:31 AM
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> makeStub
> INFO: getMaster attempt 4 of 35 failed; retrying after sleep of 500,
> exception=com.google.protobuf.ServiceException: java.io.IOException: Call
> to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:6 failed on local
> exception: java.io.EOFException
> Aug 20, 2015 12:04:32 AM:
>



-- 
Regards,
Siddhi


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706147#comment-14706147
 ] 

Nishani  commented on PHOENIX-2187:
---

Hi Nick,

Font Awesome 4.3.0 - License - http://fontawesome.io/license (CSS: MIT License)
(Font Awesome's licensing is a combination of the SIL Open Font License and the
MIT license for code)
Bootstrap v3.3.4 and bootstrap-theme v3.3.4 - Licensed under MIT -
(https://github.com/twbs/bootstrap/blob/master/LICENSE)
Icons from Glyphicons Free, licensed under CC BY 3.0 (bundled with Bootstrap) -
I think we can remove Glyphicons here if the license is an issue.
jasmine-maven-plugin 1.3.1.6 - License - Apache License -
https://github.com/searls/jasmine-maven-plugin/blob/master/LICENSE.txt

Thanks


> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-953) Support UNNEST for ARRAY

2015-08-20 Thread Dumindu Buddhika (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dumindu Buddhika updated PHOENIX-953:
-
Attachment: PHOENIX-953-v3.patch

> Support UNNEST for ARRAY
> 
>
> Key: PHOENIX-953
> URL: https://issues.apache.org/jira/browse/PHOENIX-953
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-953-v1.patch, PHOENIX-953-v2.patch, 
> PHOENIX-953-v3.patch
>
>
> The UNNEST built-in function converts an array into a set of rows. This is 
> more than a built-in function, so should be considered an advanced project.
> For an example, see the following Postgres documentation: 
> http://www.postgresql.org/docs/8.4/static/functions-array.html
> http://www.anicehumble.com/2011/07/postgresql-unnest-function-do-many.html
> http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html
> So the UNNEST is a way of converting an array to a flattened "table" which 
> can then be filtered on, ordered, grouped, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-953) Support UNNEST for ARRAY

2015-08-20 Thread Dumindu Buddhika (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dumindu Buddhika updated PHOENIX-953:
-
Attachment: (was: PHOENIX-953-v3.patch)

> Support UNNEST for ARRAY
> 
>
> Key: PHOENIX-953
> URL: https://issues.apache.org/jira/browse/PHOENIX-953
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-953-v1.patch, PHOENIX-953-v2.patch
>
>
> The UNNEST built-in function converts an array into a set of rows. This is 
> more than a built-in function, so should be considered an advanced project.
> For an example, see the following Postgres documentation: 
> http://www.postgresql.org/docs/8.4/static/functions-array.html
> http://www.anicehumble.com/2011/07/postgresql-unnest-function-do-many.html
> http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html
> So the UNNEST is a way of converting an array to a flattened "table" which 
> can then be filtered on, ordered, grouped, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2169) Illegal data error on UPSERT SELECT and JOIN with salted tables

2015-08-20 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706068#comment-14706068
 ] 

Josh Mahonin commented on PHOENIX-2169:
---

Including comments from [~johngouf] posted to phoenix-users:

Query:
{code}
UPSERT INTO READINGS
SELECT R.SMID, R.DT, R.US, R.GEN, R.USEST, R.GENEST, RM.LAT, RM.LON, RM.ZIP, 
RM.FEEDER
FROM READINGS AS R
JOIN
(SELECT SMID,LAT,LON,ZIP,FEEDER
 FROM READINGS_META) AS RM
ON R.SMID = RM.SMID
{code}

Stacktrace:
{noformat}
Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
Expected length of at least 70 bytes, but had 25 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. ERROR 201 (22000): 
Illegal data. Expected length of at least 70 bytes, but had 25
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:388)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:131)
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:553)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at 
org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:685)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:306)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:304)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1374)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

> Illegal data error on UPSERT SELECT and JOIN with salted tables
> ---
>
> Key: PHOENIX-2169
> URL: https://issues.apache.org/jira/browse/PHOENIX-2169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Josh Mahonin
>
> I have an issue where I get periodic failures (~50%) for an UPSERT SELECT 
> query involving a JOIN on salted tables. Unfortunately I haven't been able to 
> create a reproducible test case yet, though I'll keep trying. I believe this 
> same behaviour existed in 4.3.1 as well, so I don't think it's a regression.
> The upsert query itself looks something like this:
> {code}
> UPSERT INTO a(tid, ds, etp, eid, ts, atp, rel, tp, tpid, dt, pro) 
> SELECT c.tid, 
>c.ds, 
>c.etp, 
>c.eid, 
>c.dh, 
>0, 
>c.rel, 
>c.tp, 
>c.tpid, 
>current_time(), 
>1.0 / s.th 
> FROM   e_c c 
> join   e_s s 
> ON s.tid = c.tid 
> ANDs.ds = c.ds 
> ANDs.etp = c.etp 
> ANDs.eid = c.eid 
> WHERE  c.tid = 'FOO';
> {code}
> Without the upsert, the query always returns the right data, but with the 
> upsert, it ends up with failures like:
> Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
> Expected length of at least 109 bytes, but had 19 (state=22000,code=201)
> The explain plan looks like:
> {code}
> UPSERT SELECT
> CLIENT 16-CHUNK PARALLEL 16-WAY RANGE SCAN OVER E_C [0,'FOO']
>   SERVER FILTER BY FIRST KEY ONLY
>   PARALLEL INNER-JOIN TABLE 0
>   CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER E_S
>   DYNAMIC SERVER FILTER BY (C.TID, C.DS, C.ETP, C.EID) IN ((S.TID, S.DS, 
> S.ETP, S.EID))
> {code}
> I'm using SALT_BUCKETS=16 for both tables in the join, and this is a dev 
> environment, so only 1 region server. Note that without salted tables, I have 
> no issue with this query.
> The number of rows in E_C is around 23K, and the number of rows in E_S is 62.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2194) order by should not require all PK fields with = constraint

2015-08-20 Thread Gary Horen (JIRA)
Gary Horen created PHOENIX-2194:
---

 Summary: order by should not require all PK fields with = 
constraint
 Key: PHOENIX-2194
 URL: https://issues.apache.org/jira/browse/PHOENIX-2194
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.0
 Environment: linux
Reporter: Gary Horen


Here is a table:
CREATE TABLE IF NOT EXISTS FEEDS.STUFF
(
STUFF CHAR(15) NOT NULL,
NONSENSE CHAR(15) NOT NULL
CONSTRAINT PK PRIMARY KEY
(
STUFF,
NONSENSE

)
) VERSIONS=1,MULTI_TENANT=TRUE,REPLICATION_SCOPE=1

Here is a query:
explain SELECT * FROM feeds.stuff
where stuff = ' '
and nonsense > ' '
order by nonsense

Here is the plan:
CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN  
SERVER FILTER BY FIRST KEY ONLY   
SERVER TOP 100 ROWS SORTED BY [NONSE  
CLIENT MERGE SORT   

If I change to ORDER BY STUFF, NONSENSE I get:
CLIENT 1-CHUNK SERIAL 1-WAY RANGE SCAN O  
SERVER FILTER BY FIRST KEY ONLY AND   
SERVER 100 ROW LIMIT  
CLIENT 100 ROW LIMIT  

Since the leading constraint is =, ORDER BY is unaffected by it, so ORDER BY
should not need the leading column; it should only require the columns whose
values actually vary (which, since they are ordered by the key, should - and do -
result in the client-side sort being optimized out). Having to include the
leading = constraints in the ORDER BY clause is very counter-intuitive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2169) Illegal data error on UPSERT SELECT and JOIN with salted tables

2015-08-20 Thread Yiannis Gkoufas (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705991#comment-14705991
 ] 

Yiannis Gkoufas commented on PHOENIX-2169:
--

I observe the exact same behaviour

> Illegal data error on UPSERT SELECT and JOIN with salted tables
> ---
>
> Key: PHOENIX-2169
> URL: https://issues.apache.org/jira/browse/PHOENIX-2169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
>Reporter: Josh Mahonin
>
> I have an issue where I get periodic failures (~50%) for an UPSERT SELECT 
> query involving a JOIN on salted tables. Unfortunately I haven't been able to 
> create a reproducible test case yet, though I'll keep trying. I believe this 
> same behaviour existed in 4.3.1 as well, so I don't think it's a regression.
> The upsert query itself looks something like this:
> {code}
> UPSERT INTO a(tid, ds, etp, eid, ts, atp, rel, tp, tpid, dt, pro) 
> SELECT c.tid, 
>c.ds, 
>c.etp, 
>c.eid, 
>c.dh, 
>0, 
>c.rel, 
>c.tp, 
>c.tpid, 
>current_time(), 
>1.0 / s.th 
> FROM   e_c c 
> join   e_s s 
> ON s.tid = c.tid 
> ANDs.ds = c.ds 
> ANDs.etp = c.etp 
> ANDs.eid = c.eid 
> WHERE  c.tid = 'FOO';
> {code}
> Without the upsert, the query always returns the right data, but with the 
> upsert, it ends up with failures like:
> Error: ERROR 201 (22000): Illegal data. ERROR 201 (22000): Illegal data. 
> Expected length of at least 109 bytes, but had 19 (state=22000,code=201)
> The explain plan looks like:
> {code}
> UPSERT SELECT
> CLIENT 16-CHUNK PARALLEL 16-WAY RANGE SCAN OVER E_C [0,'FOO']
>   SERVER FILTER BY FIRST KEY ONLY
>   PARALLEL INNER-JOIN TABLE 0
>   CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER E_S
>   DYNAMIC SERVER FILTER BY (C.TID, C.DS, C.ETP, C.EID) IN ((S.TID, S.DS, 
> S.ETP, S.EID))
> {code}
> I'm using SALT_BUCKETS=16 for both tables in the join, and this is a dev 
> environment, so only 1 region server. Note that without salted tables, I have 
> no issue with this query.
> The number of rows in E_C is around 23K, and the number of rows in E_S is 62.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705975#comment-14705975
 ] 

Alicia Ying Shu commented on PHOENIX-2031:
--

[~maghamraviki...@gmail.com] Ok. I revised the test cases to use different 
table names for each test case and the tests all passed on my local machine. I 
uploaded the revised patch. Thanks.

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-2031:
-
Attachment: PHOENIX-2031-v2.patch

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-2031:
-
Attachment: (was: PHOENIX-2031-v2.patch)

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705939#comment-14705939
 ] 

Alicia Ying Shu commented on PHOENIX-2031:
--

I tested it on my local machine and the tests passed. In my test cases, I
always delete the tables in a finally block.
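
A simplified sketch of the cleanup pattern I mean (illustrative table name, not
taken verbatim from the patch):

{code}
import java.sql.Connection;
import java.sql.DriverManager;

public class CleanupSketch {
    public void runTest() throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        try {
            conn.createStatement().execute(
                "CREATE TABLE TS_TABLE (ID BIGINT NOT NULL PRIMARY KEY, TS TIMESTAMP)");
            // ... run the Pig load and assert on the loaded tuples ...
        } finally {
            // drop the table so the next test (and test run) starts clean
            conn.createStatement().execute("DROP TABLE IF EXISTS TS_TABLE");
            conn.close();
        }
    }
}
{code}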

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705888#comment-14705888
 ] 

Lars Hofhansl commented on PHOENIX-2154:


Sure... With that we'll need a reducer class.
I'm not an expert in this; can we configure a reducer that does nothing and also
avoid the shuffle phase?
If M/R does the shuffle just so that the reducer can ignore all the data, we
haven't gained anything.
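
(For context, a map-only job - zero reduce tasks - is the standard way to skip
the shuffle entirely; a minimal sketch, with illustrative names, assuming the
plain Hadoop Job API and not the current WIP patch:)

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyIndexJobSketch {
    public static Job configure(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "phoenix-index-build-map-only");
        // With zero reduce tasks the job is map-only: there is no shuffle/sort
        // phase at all, and mapper output goes straight to the OutputFormat.
        job.setNumReduceTasks(0);
        return job;
    }
}
{code}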

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting
> written *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705882#comment-14705882
 ] 

Lars Hofhansl commented on PHOENIX-2154:


Sorry... A bit late. The specific problem for index builds is that (a) the output
goes into an initially empty table, and (b) we can't know ahead of time how to
presplit the table (unless we do a first pass or sample).
With that, the index build will go through a single reducer producing a single -
possibly humongous - region, because we'll have a table with a single region,
which HBase then needs to split potentially multiple times.

I agree using the HBase front door is not ideal either, but at least we're 
writing in memstore-size chunks and HBase will split as we go.
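
(For context, a minimal sketch of the bulk-load path I'm referring to, assuming
the 0.98-era HFileOutputFormat2 API and an illustrative table name; it derives
the reducer partitions from the target table's current region boundaries, so a
fresh, unsplit index table means exactly one reducer:)

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;

public class IndexBulkLoadSketch {
    public static Job configure(String indexTable) throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "index-bulk-load-sketch");
        HTable table = new HTable(job.getConfiguration(), indexTable);
        // Sets the TotalOrderPartitioner ranges and the number of reducers from
        // the table's region boundaries - one reducer per region of the table.
        HFileOutputFormat2.configureIncrementalLoad(job, table);
        return job;
    }
}
{code}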


> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting
> written *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705782#comment-14705782
 ] 

maghamravikiran commented on PHOENIX-2031:
--

[~ayingshu] I didn't push your changes earlier because of PHOENIX-2103. That issue
is fixed now, and the decision is to go with new table names for each test method.
I have a hunch that the tests will fail if the patch is applied as is. Please give
each test a different table name and run the entire suite.
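
A minimal sketch of what I mean (hypothetical names, not taken from the patch):

{code}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class PhoenixHBaseLoaderTableNameSketch {
    @Rule
    public TestName testName = new TestName();

    // Derive a unique Phoenix table name per test method so tests in the suite
    // never reuse (and never collide on) the same table.
    private String tableFor(String base) {
        return base + "_" + testName.getMethodName().toUpperCase();
    }

    @Test
    public void testLoadTimestampColumn() throws Exception {
        String table = tableFor("TS_TABLE");
        // CREATE TABLE ... run the Pig load ... DROP TABLE in a finally block
    }
}
{code}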

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705770#comment-14705770
 ] 

Alicia Ying Shu commented on PHOENIX-2031:
--

[~rajeshbabu] FYI.

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705669#comment-14705669
 ] 

Nick Dimiduk commented on PHOENIX-2187:
---

bq. Dependencies and Licensing

What about the "awesome fonts" and "glyphicons"?

> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705662#comment-14705662
 ] 

Nick Dimiduk commented on PHOENIX-2186:
---

Thanks for the licensing review. This looks fine.

> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705661#comment-14705661
 ] 

ASF GitHub Bot commented on PHOENIX-2187:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/112#discussion_r37574069
  
--- Diff: phoenix-tracing-webapp/src/build/build.xml ---
@@ -0,0 +1,40 @@
+
+  
+
+
+
+
+
+
+
--- End diff --

this is provided in the other PR, right? no need to include it here.


> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2187: Adding front-end of Tracing We...

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/112#discussion_r37574069
  
--- Diff: phoenix-tracing-webapp/src/build/build.xml ---
@@ -0,0 +1,40 @@
+
+  
+
+
+
+
+
+
+
--- End diff --

this is provided in the other PR, right? no need to include it here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705656#comment-14705656
 ] 

ASF GitHub Bot commented on PHOENIX-2187:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/112#discussion_r37573877
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,171 @@
+
+  
+
+  http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+4.0.0
+
+
+  org.apache.phoenix
+  phoenix
+  4.4.1-HBase-0.98-SNAPSHOT
+
+
+phoenix-tracing-webapp
+Phoenix - Tracing Web Application
+Tracing web application will visualize the phoenix 
traces
+
+
+   ${project.basedir}/..
+
+
+
+  
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2187: Adding front-end of Tracing We...

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/112#discussion_r37573877
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,171 @@
+
+  
+
+  http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+4.0.0
+
+
+  org.apache.phoenix
+  phoenix
+  4.4.1-HBase-0.98-SNAPSHOT
+
+
+phoenix-tracing-webapp
+Phoenix - Tracing Web Application
+Tracing web application will visualize the phoenix 
traces
+
+
+   ${project.basedir}/..
+
+
+
+  

[jira] [Commented] (PHOENIX-1706) Create skeleton for parsing DDL

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705652#comment-14705652
 ] 

ASF GitHub Bot commented on PHOENIX-1706:
-

GitHub user julianhyde opened a pull request:

https://github.com/apache/phoenix/pull/113

PHOENIX-1706 Create skeleton for parsing DDL



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/julianhyde/phoenix calcite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/113.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #113


commit 402cdd273f0926fa2a500645ebac297036658579
Author: Julian Hyde 
Date:   2015-04-22T04:43:12Z

PHOENIX-1706 Create skeleton for parsing DDL




> Create skeleton for parsing DDL
> ---
>
> Key: PHOENIX-1706
> URL: https://issues.apache.org/jira/browse/PHOENIX-1706
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Julian Hyde
>
> Phoenix would like to leverage the Calcite parser, so would like to have the 
> ability to parse the following DDL statements. The current work for this is 
> occurring in the calcite branch.
> CREATE TABLE: http://phoenix.apache.org/language/index.html#create_table
> CREATE VIEW: http://phoenix.apache.org/language/index.html#create_view
> CREATE INDEX: http://phoenix.apache.org/language/index.html#create_index
> CREATE SEQUENCE: http://phoenix.apache.org/language/index.html#create_sequence
> ALTER TABLE/VIEW: http://phoenix.apache.org/language/index.html#alter
> ALTER INDEX: http://phoenix.apache.org/language/index.html#alter_index
> DROP TABLE: http://phoenix.apache.org/language/index.html#drop_table
> DROP VIEW: http://phoenix.apache.org/language/index.html#drop_view
> DROP INDEX: http://phoenix.apache.org/language/index.html#drop_index
> DROP SEQUENCE: http://phoenix.apache.org/language/index.html#drop_sequence
> UPDATE STATISTICS: 
> http://phoenix.apache.org/language/index.html#update_statistics
> TRACE ON/OFF [WITH SAMPLING ] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1706 Create skeleton for parsing DDL

2015-08-20 Thread julianhyde
GitHub user julianhyde opened a pull request:

https://github.com/apache/phoenix/pull/113

PHOENIX-1706 Create skeleton for parsing DDL



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/julianhyde/phoenix calcite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/113.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #113


commit 402cdd273f0926fa2a500645ebac297036658579
Author: Julian Hyde 
Date:   2015-04-22T04:43:12Z

PHOENIX-1706 Create skeleton for parsing DDL




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705650#comment-14705650
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573572
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/TraceServlet.java
 ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Map;
+
+/**
+ *
+ * Server to show trace information
+ *
+ *
+ * @since 4.4.1
+ */
+public class TraceServlet extends HttpServlet {
+
+  protected Connection con;
+  protected String DEFAULT_LIMIT = "25";
+  protected String DEFAULT_COUNTBY = "hostname";
+  protected String LOGIC_AND = "AND";
+  protected String LOGIC_OR = "OR";
+  protected String PHOENIX_HOST = "localhost";
+  protected String TRACING_TABLE = "SYSTEM.TRACING_STATS";
+  protected int PHOENIX_PORT = 2181;
+  
+  
+  protected void doGet(HttpServletRequest request, HttpServletResponse 
response)
+  throws ServletException, IOException {
+
+//reading url params
+String action = request.getParameter("action");
+String limit = request.getParameter("limit");
+String traceid = request.getParameter("traceid");
+String parentid = request.getParameter("parentid");
+String jsonObject = "{}";
+if ("getall".equals(action)) {
+  jsonObject = getAll(limit);
+} else if ("getCount".equals(action)) {
+  jsonObject = getCount("description");
+} else if ("getDistribution".equals(action)) {
+  jsonObject = getCount(DEFAULT_COUNTBY);
+} else if ("searchTrace".equals(action)) {
+  jsonObject = searchTrace(parentid, traceid, LOGIC_OR);
+} else {
+  jsonObject = "{ \"Server\": \"Phoenix Tracing Web App\", \"API 
version\": 0.1 }";
+}
+//response send as json
+response.setContentType("application/json");
+String output = jsonObject;
+PrintWriter out = response.getWriter();
+out.print(output);
+out.flush();
+
+  }
+
+  //get all trace results with limit count
+  protected String getAll(String limit) {
+String json = null;
+if(limit == null) {
+  limit = DEFAULT_LIMIT;
+}
+String sqlQuery = "SELECT * FROM " + TRACING_TABLE + " LIMIT "+limit;
+json = getResults(sqlQuery);
+return getJson(json);
+  }
+
+  //get count on traces can pick on param to count
+  protected String getCount(String countby) {
+String json = null;
+if(countby == null) {
+  countby = DEFAULT_COUNTBY;
+}
+String sqlQuery = "SELECT "+countby+", COUNT(*) AS count FROM " + 
TRACING_TABLE + " GROUP BY "+countby+" HAVING COUNT(*) > 1 ";
+json = getResults(sqlQuery);
+return json;
+  }
+
+  //search the trace over parent id or trace id
+  protected String searchTrace(String parentId, String traceId,String 
logic) {
+String json = null;
+String query = null;
+if(parentId != null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId+" "+logic+" trace_id="+traceId;
+   

[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573572
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/TraceServlet.java
 ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Map;
+
+/**
+ *
+ * Server to show trace information
+ *
+ *
+ * @since 4.4.1
+ */
+public class TraceServlet extends HttpServlet {
+
+  protected Connection con;
+  protected String DEFAULT_LIMIT = "25";
+  protected String DEFAULT_COUNTBY = "hostname";
+  protected String LOGIC_AND = "AND";
+  protected String LOGIC_OR = "OR";
+  protected String PHOENIX_HOST = "localhost";
+  protected String TRACING_TABLE = "SYSTEM.TRACING_STATS";
+  protected int PHOENIX_PORT = 2181;
+  
+  
+  protected void doGet(HttpServletRequest request, HttpServletResponse 
response)
+  throws ServletException, IOException {
+
+//reading url params
+String action = request.getParameter("action");
+String limit = request.getParameter("limit");
+String traceid = request.getParameter("traceid");
+String parentid = request.getParameter("parentid");
+String jsonObject = "{}";
+if ("getall".equals(action)) {
+  jsonObject = getAll(limit);
+} else if ("getCount".equals(action)) {
+  jsonObject = getCount("description");
+} else if ("getDistribution".equals(action)) {
+  jsonObject = getCount(DEFAULT_COUNTBY);
+} else if ("searchTrace".equals(action)) {
+  jsonObject = searchTrace(parentid, traceid, LOGIC_OR);
+} else {
+  jsonObject = "{ \"Server\": \"Phoenix Tracing Web App\", \"API 
version\": 0.1 }";
+}
+//response send as json
+response.setContentType("application/json");
+String output = jsonObject;
+PrintWriter out = response.getWriter();
+out.print(output);
+out.flush();
+
+  }
+
+  //get all trace results with limit count
+  protected String getAll(String limit) {
+String json = null;
+if(limit == null) {
+  limit = DEFAULT_LIMIT;
+}
+String sqlQuery = "SELECT * FROM " + TRACING_TABLE + " LIMIT "+limit;
+json = getResults(sqlQuery);
+return getJson(json);
+  }
+
+  //get count on traces can pick on param to count
+  protected String getCount(String countby) {
+String json = null;
+if(countby == null) {
+  countby = DEFAULT_COUNTBY;
+}
+String sqlQuery = "SELECT "+countby+", COUNT(*) AS count FROM " + 
TRACING_TABLE + " GROUP BY "+countby+" HAVING COUNT(*) > 1 ";
+json = getResults(sqlQuery);
+return json;
+  }
+
+  //search the trace over parent id or trace id
+  protected String searchTrace(String parentId, String traceId,String 
logic) {
+String json = null;
+String query = null;
+if(parentId != null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId+" "+logic+" trace_id="+traceId;
+}else if (parentId != null && traceId == null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId;
+}else if(parentId == null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
trace_i

[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705645#comment-14705645
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573414
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/TraceServlet.java
 ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Map;
+
+/**
+ *
+ * Server to show trace information
+ *
+ *
+ * @since 4.4.1
+ */
+public class TraceServlet extends HttpServlet {
+
+  protected Connection con;
+  protected String DEFAULT_LIMIT = "25";
+  protected String DEFAULT_COUNTBY = "hostname";
+  protected String LOGIC_AND = "AND";
+  protected String LOGIC_OR = "OR";
+  protected String PHOENIX_HOST = "localhost";
+  protected String TRACING_TABLE = "SYSTEM.TRACING_STATS";
+  protected int PHOENIX_PORT = 2181;
+  
+  
+  protected void doGet(HttpServletRequest request, HttpServletResponse 
response)
+  throws ServletException, IOException {
+
+//reading url params
+String action = request.getParameter("action");
+String limit = request.getParameter("limit");
+String traceid = request.getParameter("traceid");
+String parentid = request.getParameter("parentid");
+String jsonObject = "{}";
+if ("getall".equals(action)) {
+  jsonObject = getAll(limit);
+} else if ("getCount".equals(action)) {
+  jsonObject = getCount("description");
+} else if ("getDistribution".equals(action)) {
+  jsonObject = getCount(DEFAULT_COUNTBY);
+} else if ("searchTrace".equals(action)) {
+  jsonObject = searchTrace(parentid, traceid, LOGIC_OR);
+} else {
+  jsonObject = "{ \"Server\": \"Phoenix Tracing Web App\", \"API 
version\": 0.1 }";
+}
+//response send as json
+response.setContentType("application/json");
+String output = jsonObject;
+PrintWriter out = response.getWriter();
+out.print(output);
+out.flush();
+
+  }
+
+  //get all trace results with limit count
+  protected String getAll(String limit) {
+String json = null;
+if(limit == null) {
+  limit = DEFAULT_LIMIT;
+}
+String sqlQuery = "SELECT * FROM " + TRACING_TABLE + " LIMIT "+limit;
+json = getResults(sqlQuery);
+return getJson(json);
+  }
+
+  //get count on traces can pick on param to count
+  protected String getCount(String countby) {
+String json = null;
+if(countby == null) {
+  countby = DEFAULT_COUNTBY;
+}
+String sqlQuery = "SELECT "+countby+", COUNT(*) AS count FROM " + 
TRACING_TABLE + " GROUP BY "+countby+" HAVING COUNT(*) > 1 ";
+json = getResults(sqlQuery);
+return json;
+  }
+
+  //search the trace over parent id or trace id
+  protected String searchTrace(String parentId, String traceId,String 
logic) {
+String json = null;
+String query = null;
+if(parentId != null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId+" "+logic+" trace_id="+traceId;
+   

[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573414
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/TraceServlet.java
 ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Map;
+
+/**
+ *
+ * Server to show trace information
+ *
+ *
+ * @since 4.4.1
+ */
+public class TraceServlet extends HttpServlet {
+
+  protected Connection con;
+  protected String DEFAULT_LIMIT = "25";
+  protected String DEFAULT_COUNTBY = "hostname";
+  protected String LOGIC_AND = "AND";
+  protected String LOGIC_OR = "OR";
+  protected String PHOENIX_HOST = "localhost";
+  protected String TRACING_TABLE = "SYSTEM.TRACING_STATS";
+  protected int PHOENIX_PORT = 2181;
+  
+  
+  protected void doGet(HttpServletRequest request, HttpServletResponse 
response)
+  throws ServletException, IOException {
+
+//reading url params
+String action = request.getParameter("action");
+String limit = request.getParameter("limit");
+String traceid = request.getParameter("traceid");
+String parentid = request.getParameter("parentid");
+String jsonObject = "{}";
+if ("getall".equals(action)) {
+  jsonObject = getAll(limit);
+} else if ("getCount".equals(action)) {
+  jsonObject = getCount("description");
+} else if ("getDistribution".equals(action)) {
+  jsonObject = getCount(DEFAULT_COUNTBY);
+} else if ("searchTrace".equals(action)) {
+  jsonObject = searchTrace(parentid, traceid, LOGIC_OR);
+} else {
+  jsonObject = "{ \"Server\": \"Phoenix Tracing Web App\", \"API 
version\": 0.1 }";
+}
+//response send as json
+response.setContentType("application/json");
+String output = jsonObject;
+PrintWriter out = response.getWriter();
+out.print(output);
+out.flush();
+
+  }
+
+  //get all trace results with limit count
+  protected String getAll(String limit) {
+String json = null;
+if(limit == null) {
+  limit = DEFAULT_LIMIT;
+}
+String sqlQuery = "SELECT * FROM " + TRACING_TABLE + " LIMIT "+limit;
+json = getResults(sqlQuery);
+return getJson(json);
+  }
+
+  //get count on traces can pick on param to count
+  protected String getCount(String countby) {
+String json = null;
+if(countby == null) {
+  countby = DEFAULT_COUNTBY;
+}
+String sqlQuery = "SELECT "+countby+", COUNT(*) AS count FROM " + 
TRACING_TABLE + " GROUP BY "+countby+" HAVING COUNT(*) > 1 ";
+json = getResults(sqlQuery);
+return json;
+  }
+
+  //search the trace over parent id or trace id
+  protected String searchTrace(String parentId, String traceId,String 
logic) {
+String json = null;
+String query = null;
+if(parentId != null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId+" "+logic+" trace_id="+traceId;
+}else if (parentId != null && traceId == null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
parent_id="+parentId;
+}else if(parentId == null && traceId != null) {
+  query = "SELECT * FROM " + TRACING_TABLE + " WHERE 
trace_i

[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573194
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
+public static final String TRACE_SERVER_HTTP_JETTY_HOME_KEY =
+"phoenix.traceserver.http.home";
+public static final String DEFAULT_HTTP_HOME = "";
+
+public static void main(String[] args) throws Exception {
+int ret = ToolRunner.run(HBaseConfiguration.create(), new Main(), 
args);
+System.exit(ret);
+
+}
+
+@Override
+public int run(String[] arg0) throws Exception {
+// logProcessInfo(getConf());
+final int port = getConf().getInt(TRACE_SERVER_HTTP_PORT_KEY,
+DEFAULT_HTTP_PORT);
+BasicConfigurator.configure();
+final String home = getConf().get(TRACE_SERVER_HTTP_JETTY_HOME_KEY,
+DEFAULT_HTTP_HOME);
+Server server = new Server(port);
+ProtectionDomain domain = Main.class.getProtectionDomain();
+URL location = domain.getCodeSource().getLocation();
+WebAppContext webapp = new WebAppContext();
+webapp.setContextPath("/");
+if (home.length() != 0) {
+webapp.setTempDirectory(new File(home));
+}
+
+   String warPath = location.toString().split("target")[0] + 
"build/trace-webapp-demo.war";
--- End diff --

oh, you are still deploying from a war. we'll need to remove this and move to a 
standard resources-based embedded web app. please open a new JIRA for this as 
well.
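
A minimal sketch of the resources-based approach being suggested here, assuming the UI's static files end up packaged under a webapp/ directory inside the jar; the directory name and the class name are assumptions, not part of this pull request:

{code}
import java.net.URL;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class EmbeddedTraceApp {  // hypothetical class name
    public static void main(String[] args) throws Exception {
        Server server = new Server(8864);
        WebAppContext webapp = new WebAppContext();
        webapp.setContextPath("/");
        // Serve the UI from classpath resources instead of an external war file.
        URL webRoot = EmbeddedTraceApp.class.getClassLoader().getResource("webapp");
        webapp.setResourceBase(webRoot.toExternalForm());
        server.setHandler(webapp);
        server.start();
        server.join();
    }
}
{code}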


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705640#comment-14705640
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573194
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
+public static final String TRACE_SERVER_HTTP_JETTY_HOME_KEY =
+"phoenix.traceserver.http.home";
+public static final String DEFAULT_HTTP_HOME = "";
+
+public static void main(String[] args) throws Exception {
+int ret = ToolRunner.run(HBaseConfiguration.create(), new Main(), 
args);
+System.exit(ret);
+
+}
+
+@Override
+public int run(String[] arg0) throws Exception {
+// logProcessInfo(getConf());
+final int port = getConf().getInt(TRACE_SERVER_HTTP_PORT_KEY,
+DEFAULT_HTTP_PORT);
+BasicConfigurator.configure();
+final String home = getConf().get(TRACE_SERVER_HTTP_JETTY_HOME_KEY,
+DEFAULT_HTTP_HOME);
+Server server = new Server(port);
+ProtectionDomain domain = Main.class.getProtectionDomain();
+URL location = domain.getCodeSource().getLocation();
+WebAppContext webapp = new WebAppContext();
+webapp.setContextPath("/");
+if (home.length() != 0) {
+webapp.setTempDirectory(new File(home));
+}
+
+   String warPath = location.toString().split("target")[0] + 
"build/trace-webapp-demo.war";
--- End diff --

oh, you are still deploying from a war. we'll need to remove this and move to a 
standard resources-based embedded web app. please open a new JIRA for this as 
well.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705637#comment-14705637
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573002
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
+public static final String TRACE_SERVER_HTTP_JETTY_HOME_KEY =
+"phoenix.traceserver.http.home";
+public static final String DEFAULT_HTTP_HOME = "";
+
+public static void main(String[] args) throws Exception {
+int ret = ToolRunner.run(HBaseConfiguration.create(), new Main(), 
args);
+System.exit(ret);
+
--- End diff --

nit: extraneous newline.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705638#comment-14705638
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573048
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
--- End diff --

please mention the default port number in your readme and in documentation. 
make sure it's all consistent.
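
As a reference point while documenting it, a hedged sketch of how a caller could override the default port (8864) through this key; the launcher class below is hypothetical and only illustrates the configuration path:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.tracingwebapp.http.Main;

public class TraceServerLauncher {  // hypothetical launcher, for illustration only
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Override phoenix.traceserver.http.port; otherwise DEFAULT_HTTP_PORT (8864) applies.
        conf.setInt(Main.TRACE_SERVER_HTTP_PORT_KEY, 9000);
        System.exit(ToolRunner.run(conf, new Main(), args));
    }
}
{code}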


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573048
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
--- End diff --

please mention the default port number in your readme and in documentation. 
make sure it's all consistent.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705636#comment-14705636
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572962
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
--- End diff --

remove the since annotation.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37573002
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
+ */
+public final class Main extends Configured implements Tool {
+
+protected static final Log LOG = LogFactory.getLog(Main.class);
+public static final String TRACE_SERVER_HTTP_PORT_KEY =
+"phoenix.traceserver.http.port";
+public static final int DEFAULT_HTTP_PORT = 8864;
+public static final String TRACE_SERVER_HTTP_JETTY_HOME_KEY =
+"phoenix.traceserver.http.home";
+public static final String DEFAULT_HTTP_HOME = "";
+
+public static void main(String[] args) throws Exception {
+int ret = ToolRunner.run(HBaseConfiguration.create(), new Main(), 
args);
+System.exit(ret);
+
--- End diff --

nit: extraneous newline.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572962
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/Main.java
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.io.File;
+import java.net.URL;
+import java.security.ProtectionDomain;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.BasicConfigurator;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.webapp.WebAppContext;
+
+/**
+ * tracing web app runner
+ * 
+ * @since 4.5.5
--- End diff --

remove the since annotation.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2151) Two different UDFs called on same column return values from first UDF only

2015-08-20 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705634#comment-14705634
 ] 

Samarth Jain commented on PHOENIX-2151:
---

[~rajeshbabu] - I am a little confused with the change in this patch. 

Shouldn't the first check instead be:
{code}
+   if (!this.udfFunction.getName().equals(that.udfFunction.getName())) {
+   return false;
+   }
{code}
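
For context, a hypothetical illustration of where such a check would sit in equals(); everything except the name comparison is assumed for illustration and is not taken from the patch:

{code}
@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null || getClass() != obj.getClass()) return false;
    UDFExpression that = (UDFExpression) obj;
    // Without comparing names, FOO(NAME) and BAR(NAME) would be treated as equal,
    // since both wrap the same child expression (the NAME column).
    if (!this.udfFunction.getName().equals(that.udfFunction.getName())) {
        return false;
    }
    return this.getChildren().equals(that.getChildren());
}
{code}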

> Two different UDFs called on same column return values from first UDF only
> --
>
> Key: PHOENIX-2151
> URL: https://issues.apache.org/jira/browse/PHOENIX-2151
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4.0
> HBase 0.98_13
> Java 7
> Ubuntu 14.04.1 X64
>Reporter: Nicholas Whitehead
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 4.6.0
>
> Attachments: PHOENIX-2151.patch
>
>
> I have defined two different UDFs, say FOO(varchar) and BAR(varchar).
> If I execute a query such as:
> SELECT PK, FOO(NAME), BAR(NAME) FROM USERS, I get:
> ===
>    PK  |  FOO  |  BAR
> ===
> 37546   ||  
> If I reverse the order, I only get the Barred value (i.e. it ignores the 2nd 
> and subsequent UDF operators)
> SELECT PK, BAR(NAME), FOO(NAME) FROM USERS, I get:
> ===
>    PK  |  BAR  |  FOO
> ===
> 37546   ||  
> Reproduced in plain command JDBC and Squirrel SQL.
> Packaged reproduction pending.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705624#comment-14705624
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572381
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
+
+  private final String queryString;
+  protected Connection connection;
+
+  public EntityFactory(Connection connection, String queryString) {
+this.queryString = queryString;
+this.connection = connection;
+  }
+
+  public Map findSingle(Object[] params) throws 
SQLException {
+List> objects = this.findMultiple(params);
+
+if (objects.size() != 1) {
+  throw new SQLException("Query did not produce one object it 
produced: "
+  + objects.size() + " objects.");
+}
+
+Map object = objects.get(0); // get first record;
+
+return object;
+  }
+
+  public List> findMultiple(Object[] params)
+  throws SQLException {
+ResultSet rs = null;
+PreparedStatement ps = null;
+try {
+  ps = this.connection.prepareStatement(this.queryString);
+  for (int i = 0; i < params.length; ++i) {
+ps.setObject(1, params[i]);
+  }
+
+  rs = ps.executeQuery();
+  return getEntitiesFromResultSet(rs);
+} catch (SQLException e) {
+  throw (e);
+} finally {
+  if (rs != null) {
+rs.close();
+  }
+  if (ps != null) {
+ps.close();
+  }
+}
+  }
+
+  protected List> getEntitiesFromResultSet(
--- End diff --

this can also be made a static method.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705621#comment-14705621
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572345
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
+
+  private final String queryString;
+  protected Connection connection;
+
+  public EntityFactory(Connection connection, String queryString) {
+this.queryString = queryString;
+this.connection = connection;
+  }
+
+  public Map findSingle(Object[] params) throws 
SQLException {
+List> objects = this.findMultiple(params);
+
+if (objects.size() != 1) {
+  throw new SQLException("Query did not produce one object it 
produced: "
+  + objects.size() + " objects.");
+}
+
+Map object = objects.get(0); // get first record;
+
+return object;
+  }
+
+  public List> findMultiple(Object[] params)
+  throws SQLException {
+ResultSet rs = null;
+PreparedStatement ps = null;
+try {
+  ps = this.connection.prepareStatement(this.queryString);
+  for (int i = 0; i < params.length; ++i) {
+ps.setObject(1, params[i]);
+  }
+
+  rs = ps.executeQuery();
+  return getEntitiesFromResultSet(rs);
+} catch (SQLException e) {
+  throw (e);
+} finally {
+  if (rs != null) {
+rs.close();
+  }
+  if (ps != null) {
+ps.close();
+  }
+}
+  }
+
+  protected List> getEntitiesFromResultSet(
+  ResultSet resultSet) throws SQLException {
+ArrayList> entities = new ArrayList<>();
+while (resultSet.next()) {
+  entities.add(getEntityFromResultSet(resultSet));
+}
+return entities;
+  }
+
+  protected Map getEntityFromResultSet(ResultSet resultSet)
--- End diff --

this method does not use member variables and thus can be made a static 
method.
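
A short sketch of the suggested refactoring with both helpers made static (imports as in the file above); the body of getEntityFromResultSet is not visible in the diff, so the column-label mapping below is an assumption about what it does:

{code}
protected static List<Map<String, Object>> getEntitiesFromResultSet(ResultSet resultSet)
    throws SQLException {
  List<Map<String, Object>> entities = new ArrayList<>();
  while (resultSet.next()) {
    entities.add(getEntityFromResultSet(resultSet));
  }
  return entities;
}

// Assumed implementation: map each column label to its value for the current row.
protected static Map<String, Object> getEntityFromResultSet(ResultSet resultSet)
    throws SQLException {
  ResultSetMetaData metaData = resultSet.getMetaData();
  Map<String, Object> entity = new HashMap<>();
  for (int column = 1; column <= metaData.getColumnCount(); column++) {
    entity.put(metaData.getColumnLabel(column), resultSet.getObject(column));
  }
  return entity;
}
{code}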


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572381
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
+
+  private final String queryString;
+  protected Connection connection;
+
+  public EntityFactory(Connection connection, String queryString) {
+this.queryString = queryString;
+this.connection = connection;
+  }
+
+  public Map findSingle(Object[] params) throws 
SQLException {
+List> objects = this.findMultiple(params);
+
+if (objects.size() != 1) {
+  throw new SQLException("Query did not produce one object it 
produced: "
+  + objects.size() + " objects.");
+}
+
+Map object = objects.get(0); // get first record;
+
+return object;
+  }
+
+  public List> findMultiple(Object[] params)
+  throws SQLException {
+ResultSet rs = null;
+PreparedStatement ps = null;
+try {
+  ps = this.connection.prepareStatement(this.queryString);
+  for (int i = 0; i < params.length; ++i) {
+ps.setObject(1, params[i]);
+  }
+
+  rs = ps.executeQuery();
+  return getEntitiesFromResultSet(rs);
+} catch (SQLException e) {
+  throw (e);
+} finally {
+  if (rs != null) {
+rs.close();
+  }
+  if (ps != null) {
+ps.close();
+  }
+}
+  }
+
+  protected List> getEntitiesFromResultSet(
--- End diff --

this can also be made a static method.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572345
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
+
+  private final String queryString;
+  protected Connection connection;
+
+  public EntityFactory(Connection connection, String queryString) {
+this.queryString = queryString;
+this.connection = connection;
+  }
+
+  public Map findSingle(Object[] params) throws 
SQLException {
+List> objects = this.findMultiple(params);
+
+if (objects.size() != 1) {
+  throw new SQLException("Query did not produce one object it 
produced: "
+  + objects.size() + " objects.");
+}
+
+Map object = objects.get(0); // get first record;
+
+return object;
+  }
+
+  public List> findMultiple(Object[] params)
+  throws SQLException {
+ResultSet rs = null;
+PreparedStatement ps = null;
+try {
+  ps = this.connection.prepareStatement(this.queryString);
+  for (int i = 0; i < params.length; ++i) {
+ps.setObject(1, params[i]);
+  }
+
+  rs = ps.executeQuery();
+  return getEntitiesFromResultSet(rs);
+} catch (SQLException e) {
+  throw (e);
+} finally {
+  if (rs != null) {
+rs.close();
+  }
+  if (ps != null) {
+ps.close();
+  }
+}
+  }
+
+  protected List> getEntitiesFromResultSet(
+  ResultSet resultSet) throws SQLException {
+ArrayList> entities = new ArrayList<>();
+while (resultSet.next()) {
+  entities.add(getEntityFromResultSet(resultSet));
+}
+return entities;
+  }
+
+  protected Map getEntityFromResultSet(ResultSet resultSet)
--- End diff --

this method does not use member variables and thus can be made a static 
method.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705615#comment-14705615
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572197
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
--- End diff --

What does this class do? A class-level javadoc will help make this clear.
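
A hypothetical class-level javadoc of the kind being asked for, placed immediately above the public class EntityFactory declaration; the wording is an assumption inferred from how the class is used by TraceServlet above:

{code}
/**
 * Runs a parameterized SQL query over a supplied JDBC Connection and maps each
 * row of the ResultSet to a Map keyed by column label, so that callers such as
 * TraceServlet can serialize query results to JSON without knowing the schema
 * of the tracing table.
 */
{code}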


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705611#comment-14705611
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572065
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The new POM declares: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; provided-scope dependencies on jetty-server and jetty-webapp (${jettyVersion}), hadoop-common, phoenix-core, hbase-common, commons-logging, jackson-core-asl, and jackson-mapper-asl; and a build section with build-helper-maven-plugin and maven-antrun-plugin.]
--- End diff --

this can be removed along with the build.xml, right?


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572197
  
--- Diff: 
phoenix-tracing-webapp/src/main/java/org/apache/phoenix/tracingwebapp/http/EntityFactory.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.tracingwebapp.http;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class EntityFactory {
--- End diff --

What does this class do? A class-level javadoc will help make this clear.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37572065
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The new POM declares: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; provided-scope dependencies on jetty-server and jetty-webapp (${jettyVersion}), hadoop-common, phoenix-core, hbase-common, commons-logging, jackson-core-asl, and jackson-mapper-asl; and a build section with build-helper-maven-plugin and maven-antrun-plugin.]
--- End diff --

this can be removed along with the build.xml, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705610#comment-14705610
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571986
  
--- Diff: phoenix-tracing-webapp/src/build/build.xml ---
@@ -0,0 +1,40 @@
+
--- End diff --

Do we still need this ant file? We're not building a war anymore, it's an 
embedded service. I think the whole ant step can be removed.
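
For context, a hedged sketch of what "embedded service" means here: a main class can start Jetty directly, so no war or ant packaging step is required. This is illustrative only and is not the project's actual Main class; the port and the servlet name in the comment are assumptions.

{code}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class EmbeddedTraceServerSketch {
  public static void main(String[] args) throws Exception {
    int port = args.length > 0 ? Integer.parseInt(args[0]) : 8864; // assumed default port
    Server server = new Server(port);

    ServletContextHandler context = new ServletContextHandler();
    context.setContextPath("/");
    // A servlet serving trace data would be registered here, e.g.:
    // context.addServlet(TraceServlet.class, "/trace");
    server.setHandler(context);

    server.start();   // serve until stopped; no war deployment involved
    server.join();
  }
}
{code}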


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571986
  
--- Diff: phoenix-tracing-webapp/src/build/build.xml ---
@@ -0,0 +1,40 @@
+
--- End diff --

Do we still need this ant file? We're not building a war anymore, it's an 
embedded service. I think the whole ant step can be removed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705606#comment-14705606
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571771
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The new POM declares: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; provided-scope dependencies on jetty-server and jetty-webapp (${jettyVersion}), hadoop-common, phoenix-core, hbase-common, commons-logging, jackson-core-asl, and jackson-mapper-asl; and a build section with build-helper-maven-plugin, maven-antrun-plugin 1.7 (a generate-sources execution bound to the run goal), and maven-assembly-plugin (a "runnable" execution at the package phase, single goal, main class org.apache.phoenix.tracingwebapp.http.Main, final name ${project.artifactId}-${project.version}, descriptor src/build/trace-server-runnable.xml). The review comment on this hunk did not survive in the archive.]

> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705608#comment-14705608
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571839
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The quoted hunk covers the POM header: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; and the start of the dependencies section, ending at the jetty-server dependency the comment below refers to.]
--- End diff --

You can file a new ticket for the work mentioned by this PR as well.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2151) Two different UDFs called on same column return values from first UDF only

2015-08-20 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705607#comment-14705607
 ] 

James Taylor commented on PHOENIX-2151:
---

Thanks for the patch, [~rajeshbabu]. How about we add the test that [~anchal] 
has as a unit test so that we prevent any regressions?

> Two different UDFs called on same column return values from first UDF only
> --
>
> Key: PHOENIX-2151
> URL: https://issues.apache.org/jira/browse/PHOENIX-2151
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4.0
> HBase 0.98_13
> Java 7
> Ubuntu 14.04.1 X64
>Reporter: Nicholas Whitehead
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 4.6.0
>
> Attachments: PHOENIX-2151.patch
>
>
> I have defined two different UDFs, say FOO(varchar) and BAR(varchar).
> If I execute a query such as:
> SELECT PK, FOO(NAME), BAR(NAME) FROM USERS, I get:
> ===
>PK  |  FOO  |  BAR
> ===
> 37546   ||  
> If I reverse the order, I only get the Barred value (i.e. it ignores the 2nd 
> and subsequent UDF operators)
> SELECT PK, BAR(NAME), FOO(NAME) FROM USERS, I get:
> ===
>PK  |  BAR  |  FOO
> ===
> 37546   ||  
> Reproduced in plain command JDBC and Squirrel SQL.
> Packaged reproduction pending.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571771
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The new POM declares: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; provided-scope dependencies on jetty-server and jetty-webapp (${jettyVersion}), hadoop-common, phoenix-core, hbase-common, commons-logging, jackson-core-asl, and jackson-mapper-asl; and a build section with build-helper-maven-plugin, maven-antrun-plugin 1.7 (a generate-sources execution bound to the run goal), and maven-assembly-plugin (a "runnable" execution at the package phase, single goal, main class org.apache.phoenix.tracingwebapp.http.Main, final name ${project.artifactId}-${project.version}, descriptor src/build/trace-server-runnable.xml). The review comment on this hunk did not survive in the archive.]

[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread ndimiduk
Github user ndimiduk commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/111#discussion_r37571839
  
--- Diff: phoenix-tracing-webapp/pom.xml ---
@@ -0,0 +1,173 @@
[pom.xml diff context; the XML markup was lost in the archive. The quoted hunk covers the POM header: parent org.apache.phoenix:phoenix 4.4.1-HBase-0.98-SNAPSHOT; module phoenix-tracing-webapp, named "Phoenix - Tracing Web Application", described as "Tracing web application will visualize the phoenix traces"; a property set to ${project.basedir}/..; and the start of the dependencies section, ending at the jetty-server dependency the comment below refers to.]
--- End diff --

You can file a new ticket for the work mentioned by this PR as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705534#comment-14705534
 ] 

Alicia Ying Shu commented on PHOENIX-2031:
--

[~maghamraviki...@gmail.com] I think the issue was that the code base had 
changed since I submitted my patch. I uploaded another patch after rebase. 
Please apply it. We need the fix now. Thanks.

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


PHOENIX-1118 (Phoenix Tracing Web App)

2015-08-20 Thread Ayola Jayamaha
Hi All,

The Phoenix Tracing Web App main features and setup information can be
found on the following blog posts[1,2].

[1]
http://ayolajayamaha.blogspot.com/2015/08/apache-phoenix-web-application.html

[2]
http://ayolajayamaha.blogspot.com/2015/08/apache-phoenix-web-application-part-02.html

-- 
Best Regards,
Nishani Jayamaha
http://ayolajayamaha.blogspot.com/


[jira] [Updated] (PHOENIX-2031) Unable to process timestamp/Date data loaded via Phoenix org.apache.phoenix.pig.PhoenixHBaseLoader

2015-08-20 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-2031:
-
Attachment: PHOENIX-2031-v2.patch

> Unable to process timestamp/Date data loaded via Phoenix 
> org.apache.phoenix.pig.PhoenixHBaseLoader
> --
>
> Key: PHOENIX-2031
> URL: https://issues.apache.org/jira/browse/PHOENIX-2031
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-2031-v1.patch, PHOENIX-2031-v2.patch, 
> PHOENIX-2031.patch
>
>
> 2015-05-11 15:41:44,419 WARN main org.apache.hadoop.mapred.YarnChild: 
> Exception running child : org.apache.pig.PigException: ERROR 0: Error 
> transforming PhoenixRecord to Tuple Cannot convert a Unknown to a 
> java.sql.Timestamp at 
> org.apache.phoenix.pig.util.TypeUtil.transformToTuple(TypeUtil.java:293)
> at 
> org.apache.phoenix.pig.PhoenixHBaseLoader.getNext(PhoenixHBaseLoader.java:197)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2151) Two different UDFs called on same column return values from first UDF only

2015-08-20 Thread Anchal Agrawal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705466#comment-14705466
 ] 

Anchal Agrawal commented on PHOENIX-2151:
-

Thank you [~rajeshbabu], I applied the patch and it resolves the issue for all 
the test cases I mentioned above.

> Two different UDFs called on same column return values from first UDF only
> --
>
> Key: PHOENIX-2151
> URL: https://issues.apache.org/jira/browse/PHOENIX-2151
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4.0
> HBase 0.98_13
> Java 7
> Ubuntu 14.04.1 X64
>Reporter: Nicholas Whitehead
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 4.6.0
>
> Attachments: PHOENIX-2151.patch
>
>
> I have defined two different UDFs, say FOO(varchar) and BAR(varchar).
> If I execute a query such as:
> SELECT PK, FOO(NAME), BAR(NAME) FROM USERS, I get:
> ===
>PK  |  FOO  |  BAR
> ===
> 37546   ||  
> If I reverse the order, I only get the Barred value (i.e. it ignores the 2nd 
> and subsequent UDF operators)
> SELECT PK, BAR(NAME), FOO(NAME) FROM USERS, I get:
> ===
>PK  |  BAR  |  FOO
> ===
> 37546   ||  
> Reproduced in plain command JDBC and Squirrel SQL.
> Packaged reproduction pending.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705398#comment-14705398
 ] 

Nishani  edited comment on PHOENIX-2186 at 8/20/15 5:55 PM:


Dependencies and Licensing

jetty-server License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
jetty-webapp  License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
hadoop-common License:  Apache License 2.0 (http://www.apache.org/licenses/)
hbase-common  License:  Apache License 2.0 (http://www.apache.org/licenses/)
commons-logging  License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-core-asl License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-mapper-asl License:  Apache License 2.0 
(http://www.apache.org/licenses/)

I have introduced only Jetty as a new dependency to Phoenix. Other dependencies 
were existing  ones.



was (Author: nishani):
Dependencies and Licensing

jetty-server License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
jetty-webapp  License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
hadoop-common License:  Apache License 2.0 (http://www.apache.org/licenses/)
hbase-common  License:  Apache License 2.0 (http://www.apache.org/licenses/)
commons-logging  License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-core-asl License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-mapper-asl License:  Apache License 2.0 
(http://www.apache.org/licenses/)

I have introduced only Jetty as a new dependency to Phoenix. Other dependencies 
were existing  ones.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread Ravi Kishore Valeti (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705409#comment-14705409
 ] 

Ravi Kishore Valeti commented on PHOENIX-2154:
--

My bad. I overlooked the null writable reducer in this mode. Nm, thanks

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705398#comment-14705398
 ] 

Nishani  commented on PHOENIX-2186:
---

Dependencies and Licensing

jetty-server License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
jetty-webapp  License:  Apache License 2.0 and Eclipse Public License 1.0. 
(http://www.apache.org/licenses/LICENSE-2.0.html,http://www.eclipse.org/legal/epl-v10.html)
 
hadoop-common License:  Apache License 2.0 (http://www.apache.org/licenses/)
hbase-common  License:  Apache License 2.0 (http://www.apache.org/licenses/)
commons-logging  License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-core-asl License:  Apache License 2.0 (http://www.apache.org/licenses/)
jackson-mapper-asl License:  Apache License 2.0 
(http://www.apache.org/licenses/)

I have introduced only Jetty as a new dependency to Phoenix. Other dependencies 
were existing  ones.


> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2186) Creating backend services for the Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2186:
--
Summary: Creating backend services for the Phoenix Tracing Web App  (was: 
Creating backend services for the Tracing Web App)

> Creating backend services for the Phoenix Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705324#comment-14705324
 ] 

maghamravikiran commented on PHOENIX-2154:
--

In this mode, we write to HBase alone. 

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705322#comment-14705322
 ] 

maghamravikiran commented on PHOENIX-2154:
--

Out of turn, but I agree. I believe we need to break this task up into two broad 
parts: 1) resiliency, 2) quick execution.

Right now, with the direct API approach, we achieve quicker execution of the job, 
as we have seen the direct API perform far better than the HFiles route. This can 
be partially attributed to the fact that there was only one reducer shuffling the 
mapper output. Here, if one mapper fails, there is a possibility that a few 
successful mappers have already committed data onto the index table. Is this ok?

For resiliency, I believe the bulk load approach is good as it's an all-or-nothing 
job. We don't copy HFiles onto the index table until all the mappers and the 
reducer have completed.

In both approaches, the important task we need to address is finding options to 
keep successful mappers from being rerun. Is this possible, and what are the best 
means to address it?

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705319#comment-14705319
 ] 

maghamravikiran commented on PHOENIX-2154:
--

My bad. Will move it to the reduce method to be absolutely sure.

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705316#comment-14705316
 ] 

maghamravikiran commented on PHOENIX-2154:
--

True [~giacomotaylor] I will correct that code . 

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2151) Two different UDFs called on same column return values from first UDF only

2015-08-20 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2151:
-
Attachment: PHOENIX-2151.patch

Here is the patch that fixes the issue.
The problem occurs when an expression is ignored because an equal expression 
already exists. For scalar functions the equality check compares only the 
children; in this case the children are the same for both UDFs, but the 
implementations may differ. Implementing equals for UDFExpression solves the problem.
{code}
public Expression addIfAbsent(Expression expression) {
    Expression existingExpression = expressionMap.get(expression);
    if (existingExpression == null) {
        expressionMap.put(expression, expression);
        return expression;
    }
    return existingExpression;
}
{code}
[~anchal] [~nickman] Can you please try the patch and check that things are fine 
on your side?
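
To make the reasoning concrete, a hedged, standalone sketch of the technique (not the Phoenix patch itself): if equality keys on only the children, FOO(NAME) and BAR(NAME) compare equal and addIfAbsent keeps just the first; including the function's identity in equals/hashCode fixes that. All names here are illustrative.

{code}
import java.util.List;
import java.util.Objects;

// Simplified stand-in for a UDF expression: equality covers the function name
// AND the children, so FOO(NAME) and BAR(NAME) no longer collapse to one entry.
final class UdfCall {
  private final String functionName;
  private final List<String> children;   // e.g. the referenced column names

  UdfCall(String functionName, List<String> children) {
    this.functionName = functionName;
    this.children = children;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof UdfCall)) return false;
    UdfCall other = (UdfCall) obj;
    return functionName.equals(other.functionName)
        && children.equals(other.children);
  }

  @Override
  public int hashCode() {
    return Objects.hash(functionName, children);
  }
}
{code}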

> Two different UDFs called on same column return values from first UDF only
> --
>
> Key: PHOENIX-2151
> URL: https://issues.apache.org/jira/browse/PHOENIX-2151
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Phoenix 4.4.0
> HBase 0.98_13
> Java 7
> Ubuntu 14.04.1 X64
>Reporter: Nicholas Whitehead
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Attachments: PHOENIX-2151.patch
>
>
> I have defined two different UDFs, say FOO(varchar) and BAR(varchar).
> If I execute a query such as:
> SELECT PK, FOO(NAME), BAR(NAME) FROM USERS, I get:
> ===
>PK  |  FOO  |  BAR
> ===
> 37546   ||  
> If I reverse the order, I only get the Barred value (i.e. it ignores the 2nd 
> and subsequent UDF operators)
> SELECT PK, BAR(NAME), FOO(NAME) FROM USERS, I get:
> ===
>PK  |  BAR  |  FOO
> ===
> 37546   ||  
> Reproduced in plain command JDBC and Squirrel SQL.
> Packaged reproduction pending.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705274#comment-14705274
 ] 

Nishani  commented on PHOENIX-2187:
---

Dependencies and Licensing

Here are the dependencies and licensing of the JavaScript libraries used in the 
front-end application of the Phoenix Tracing Web App.

angular-mocks.js   @license AngularJS v1.3.15  http://angularjs.org  License: 
MIT
angular-route.js  @license AngularJS v1.3.8  http://angularjs.org License: MIT
angular.js  @license AngularJS v1.3.15  http://angularjs.org License: MIT
bootstrap.js Bootstrap v3.3.4 (http://getbootstrap.com) Licensed under MIT 
(https://github.com/twbs/bootstrap/blob/master/LICENSE)
jquery-2.1.4.js  MIT license http://jquery.org/license
ng-google-chart.js @license 
(https://github.com/angular-google-chart/angular-google-chart) License: MIT
ui-bootstrap-tpls.js  Version: 0.13.0 (https://github.com/angular-ui/bootstrap) 
License: MIT (https://github.com/angular-ui/bootstrap/blob/master/LICENSE)


> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread Ravi Kishore Valeti (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705266#comment-14705266
 ] 

Ravi Kishore Valeti commented on PHOENIX-2154:
--

[~maghamraviki...@gmail.com]

Just to understand: in "configureSubmittableJobUsingDirectApi" mode, does it mean 
we are writing to both the Phoenix table (via autocommit true) and 
HDFS (as the mapper output KeyValues)?

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2154) Failure of one mapper should not affect other mappers in MR index build

2015-08-20 Thread Ravi Kishore Valeti (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705258#comment-14705258
 ] 

Ravi Kishore Valeti commented on PHOENIX-2154:
--

[~maghamraviki...@gmail.com]

I think the major task from the subject line, "Failure of one mapper should not 
affect other mappers in MR index build", is missing from the task list :).

Even a single map task's eventual failure (after retries) will mark the job as 
"failed", and I think there is no way to resume ONLY a specific task in the 
next run (unless we write custom logic?). The next run will be a fresh job 
execution.

[~jamestaylor] [~lhofhansl] please correct me if I'm wrong.

> Failure of one mapper should not affect other mappers in MR index build
> ---
>
> Key: PHOENIX-2154
> URL: https://issues.apache.org/jira/browse/PHOENIX-2154
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: maghamravikiran
> Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows getting 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705138#comment-14705138
 ] 

ASF GitHub Bot commented on PHOENIX-2187:
-

Github user AyolaJayamaha commented on the pull request:

https://github.com/apache/phoenix/pull/112#issuecomment-133054413
  
Build phoenix-tracing-webapp with
`mvn package`
It will run the Jasmine JS test specs against the JS files in the Tracing Web App.

![screenshot from 2015-08-20 21 05 
31](https://cloud.githubusercontent.com/assets/1260234/9387657/3bf6401c-477f-11e5-9d8e-78af949f6723.png)



> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2187: Adding front-end of Tracing We...

2015-08-20 Thread AyolaJayamaha
Github user AyolaJayamaha commented on the pull request:

https://github.com/apache/phoenix/pull/112#issuecomment-133054413
  
Build phoenix-tracing-webapp with
`mvn package`
It will run the Jasmine JS test specs against the JS files in the Tracing Web App.

![screenshot from 2015-08-20 21 05 
31](https://cloud.githubusercontent.com/assets/1260234/9387657/3bf6401c-477f-11e5-9d8e-78af949f6723.png)



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705133#comment-14705133
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

Github user AyolaJayamaha commented on the pull request:

https://github.com/apache/phoenix/pull/111#issuecomment-133053181
  
Build Apache Phoenix from root
mvn clean install

Start the backend services
`./bin/traceserver.py start`

Stop the backend services
`./bin/traceserver.py stop`

![screenshot from 2015-08-20 20 13 
55](https://cloud.githubusercontent.com/assets/1260234/9387559/c5783b5c-477e-11e5-917a-82b980babf6d.png)


The services can be viewed by accessing the following location

http://localhost:8864/trace?action=get
![screenshot from 2015-08-20 20 13 
59](https://cloud.githubusercontent.com/assets/1260234/9387582/ec41f80e-477e-11e5-848b-ca1e05b7cd86.png)
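
For a quick check without a browser, a minimal sketch that fetches the same endpoint, assuming the trace server is running on the default port shown above:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TraceEndpointCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8864/trace?action=get");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON returned by the trace service
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}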





> Creating backend services for the Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread AyolaJayamaha
Github user AyolaJayamaha commented on the pull request:

https://github.com/apache/phoenix/pull/111#issuecomment-133053181
  
Build Apache Phoenix from root
mvn clean install

Start the backend services
`./bin/traceserver.py start`

Stop the backend services
`./bin/traceserver.py stop`

![screenshot from 2015-08-20 20 13 
55](https://cloud.githubusercontent.com/assets/1260234/9387559/c5783b5c-477e-11e5-917a-82b980babf6d.png)


The services can be viewed by accessing the following location

http://localhost:8864/trace?action=get
![screenshot from 2015-08-20 20 13 
59](https://cloud.githubusercontent.com/assets/1260234/9387582/ec41f80e-477e-11e5-848b-ca1e05b7cd86.png)





---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2187) Creating the front-end application for Phoenix Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705111#comment-14705111
 ] 

ASF GitHub Bot commented on PHOENIX-2187:
-

GitHub user AyolaJayamaha opened a pull request:

https://github.com/apache/phoenix/pull/112

PHOENIX-2187: Adding front-end of Tracing Web App

This will include the following tracing visualization features.
- [x] List - lists the traces with their attributes
- [x] Trace Count - Chart view over the trace description
- [x] Dependency Tree - tree view of trace ids
- [x] Timeline - timeline of trace ids
- [x] Trace Distribution - Distribution chart of hosts of traces

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/AyolaJayamaha/phoenix front-end-webapp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #112


commit 7f78b5508f782196a97c4f876987e6cdaf573477
Author: ayolajayamaha 
Date:   2015-08-20T15:17:34Z

adding front-end of trace web app




> Creating the front-end application for Phoenix Tracing Web App
> --
>
> Key: PHOENIX-2187
> URL: https://issues.apache.org/jira/browse/PHOENIX-2187
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following tracing visualization features.
> List - lists the traces with their attributes
> Trace Count - Chart view over the trace description
> Dependency Tree - tree view of  trace ids
> Timeline - timeline of trace ids
> Trace Distribution - Distribution chart of hosts of traces



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2187: Adding front-end of Tracing We...

2015-08-20 Thread AyolaJayamaha
GitHub user AyolaJayamaha opened a pull request:

https://github.com/apache/phoenix/pull/112

PHOENIX-2187: Adding front-end of Tracing Web App

This will include the following tracing visualization features.
- [x] List - lists the traces with their attributes
- [x] Trace Count - Chart view over the trace description
- [x] Dependency Tree - tree view of trace ids
- [x] Timeline - timeline of trace ids
- [x] Trace Distribution - Distribution chart of hosts of traces

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/AyolaJayamaha/phoenix front-end-webapp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #112


commit 7f78b5508f782196a97c4f876987e6cdaf573477
Author: ayolajayamaha 
Date:   2015-08-20T15:17:34Z

adding front-end of trace web app




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2186 : Back end services

2015-08-20 Thread AyolaJayamaha
GitHub user AyolaJayamaha opened a pull request:

https://github.com/apache/phoenix/pull/111

PHOENIX-2186 : Back end services

This will include the following components.
- [x] Main class
- [x] Pom file
 - [x] Adding as module for root pom
- [x] Launch script
- [x] Backend trace service API

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/AyolaJayamaha/phoenix back-end-services

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/111.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #111


commit a353dab6d2863a23637b8199f050785a58545455
Author: ayolajayamaha 
Date:   2015-07-27T16:57:27Z

adding readme

commit a8db9b5b0a555893693792ba28f1d042c216dc65
Author: ayolajayamaha 
Date:   2015-08-20T14:39:58Z

adding back end services for tracing app




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2186) Creating backend services for the Tracing Web App

2015-08-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705039#comment-14705039
 ] 

ASF GitHub Bot commented on PHOENIX-2186:
-

GitHub user AyolaJayamaha opened a pull request:

https://github.com/apache/phoenix/pull/111

PHOENIX-2186 : Back end services

This will include the following components.
- [x] Main class
- [x] Pom file
 - [x] Adding as module for root pom
- [x] Launch script
- [x] Backend trace service API

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/AyolaJayamaha/phoenix back-end-services

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/111.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #111


commit a353dab6d2863a23637b8199f050785a58545455
Author: ayolajayamaha 
Date:   2015-07-27T16:57:27Z

adding readme

commit a8db9b5b0a555893693792ba28f1d042c216dc65
Author: ayolajayamaha 
Date:   2015-08-20T14:39:58Z

adding back end services for tracing app




> Creating backend services for the Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-953) Support UNNEST for ARRAY

2015-08-20 Thread Dumindu Buddhika (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dumindu Buddhika updated PHOENIX-953:
-
Attachment: PHOENIX-953-v3.patch

[~maryannxue] Thanks a lot for the feedback and the code! Earlier I thought 
UnnestExpression would not be useful when Unnest is used in a FROM. I have 
created a patch with the code changes you suggested here.

[~jamestaylor] I have added end-to-end tests for Unnest here. I have run them 
against hsqldb and verified the results. There are tests with joins on the ordinal 
for the parallel-array use case.

> Support UNNEST for ARRAY
> 
>
> Key: PHOENIX-953
> URL: https://issues.apache.org/jira/browse/PHOENIX-953
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-953-v1.patch, PHOENIX-953-v2.patch, 
> PHOENIX-953-v3.patch
>
>
> The UNNEST built-in function converts an array into a set of rows. This is 
> more than a built-in function, so should be considered an advanced project.
> For an example, see the following Postgres documentation: 
> http://www.postgresql.org/docs/8.4/static/functions-array.html
> http://www.anicehumble.com/2011/07/postgresql-unnest-function-do-many.html
> http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html
> So the UNNEST is a way of converting an array to a flattened "table" which 
> can then be filtered on, ordered, grouped, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2186) Creating backend services for the Tracing Web App

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2186:
--
Summary: Creating backend services for the Tracing Web App  (was: Creating 
backend services for the tracing web app)

> Creating backend services for the Tracing Web App
> -
>
> Key: PHOENIX-2186
> URL: https://issues.apache.org/jira/browse/PHOENIX-2186
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> This will include the following components.
> Main class 
> Pom file
> Launch script 
> Backend trace service API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2190) Updating the Documentation

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2190:
--
Attachment: Screenshot-trace-web-doc3.png
Screenshot-trace-web-doc2.png
Screenshot-trace-web-doc1.png

Attaching screenshots of the trace web app documentation. It will update the 
tracing.html page.

URL:
http://phoenix.apache.org/tracing.html

> Updating the Documentation
> --
>
> Key: PHOENIX-2190
> URL: https://issues.apache.org/jira/browse/PHOENIX-2190
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
> Attachments: Screenshot-trace-web-doc1.png, 
> Screenshot-trace-web-doc2.png, Screenshot-trace-web-doc3.png, 
> phoenix-trace-web-app-doc.patch, trace-images.zip
>
>
> The Documentation would contain the following.
> How to start the Tracing Web Application
> How to change the port number on which it runs
> Five visualization features in the Tracing Web Application



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-1118:
--
Description: 
Currently there's no means of visualizing the trace information provided by 
Phoenix. We should provide some simple charting over our metrics tables. Take a 
look at the following JIRA for sample queries: 
https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151
The tool needs to provide visualization over the SYSTEM.TRACING_STATS table.

  was:Currently there's no means of visualizing the trace information provided 
by Phoenix. We should provide some simple charting over our metrics tables. 
Take a look at the following JIRA for sample queries: 
https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151
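
For reference, a hedged sketch of the kind of query such a tool issues against the tracing table, via the Phoenix JDBC driver; the column list is an assumption and may need adjusting to the actual SYSTEM.TRACING_STATS schema:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TracingStatsSample {
  public static void main(String[] args) throws Exception {
    // Adjust the JDBC URL to point at your ZooKeeper quorum.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT TRACE_ID, DESCRIPTION, START_TIME, END_TIME, HOSTNAME "
                 + "FROM SYSTEM.TRACING_STATS LIMIT 20")) {
      while (rs.next()) {
        // Print a trace id and its description; a real tool would chart these.
        System.out.println(rs.getLong("TRACE_ID") + "\t" + rs.getString("DESCRIPTION"));
      }
    }
  }
}
{code}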


> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
> Attachments: MockUp1-TimeSlider.png, MockUp2-AdvanceSearch.png, 
> MockUp3-PatternDetector.png, MockUp4-FlameGraph.png, Screenshot of dependency 
> tree.png, Screenshot-loading-trace-list.png, m1-mockUI-tracedistribution.png, 
> m1-mockUI-tracetimeline.png, screenshot of tracing timeline.png, screenshot 
> of tracing web app.png, timeline.png
>
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151
> The tool needs to provide visualization over the SYSTEM.TRACING_STATS table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2190) Updating the Documentation

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2190:
--
Attachment: trace-images.zip
phoenix-trace-web-app-doc.patch

Adding patch for Trace-Web App Documentation. 

Attaching trace-images.zip, which contains the resources/images for the patch.
 - Add those images to "phoenix/site/source/src/site/resources/images"

> Updating the Documentation
> --
>
> Key: PHOENIX-2190
> URL: https://issues.apache.org/jira/browse/PHOENIX-2190
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
> Attachments: phoenix-trace-web-app-doc.patch, trace-images.zip
>
>
> The Documentation would contain the following.
> How to start the Tracing Web Application
> How to change the port number on which it runs
> Five visualization features in the Tracing Web Application



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2190) Updating the Documentation

2015-08-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2190:
--
Summary: Updating the Documentation  (was: Updating the Documentaation)

> Updating the Documentation
> --
>
> Key: PHOENIX-2190
> URL: https://issues.apache.org/jira/browse/PHOENIX-2190
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>Assignee: Nishani 
>
> The Documentation would contain the following.
> How to start the Tracing Web Application
> How to change the port number on which it runs
> Five visualization features in the Tracing Web Application



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)