[GSOC 2016] [Status Update]

2016-05-24 Thread Ayola Jayamaha
Hi All,

I went through the HTrace and Zipkin documentation and ran code-base
experiments with the Zipkin Collector, Storage, Zipkin Query Service and
Zipkin Web UI. I have shared a summary of my findings in these blog posts
[1-3].

This is my proposal for Improving Phoenix UI [4]. Currently I am going
through the following items:

   - Complete the industry-standard embedding of the tracing web application
     as a service
   - Improve the web application with a better package management system
   - Set a common Jetty version for Phoenix
   - Create a page which, under the covers, turns on tracing

I have started on setting a common Jetty version [5]; a pom sketch of that
idea follows.
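For illustration only: the usual pattern is to hoist a common version into a
single property in the root pom and reference it from every module. The
property name, artifact, and version below are placeholder assumptions, not
Phoenix's actual values:

{code}
<!-- root pom.xml: declare the Jetty version once -->
<properties>
  <jetty.version>8.1.7.v20120910</jetty.version>
</properties>

<!-- any module that needs Jetty references the shared property -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>${jetty.version}</version>
</dependency>
{code}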

Thank you.

[1] http://ayolajayamaha.blogspot.com/2016/05/zipkin-architecture.html
[2]
http://ayolajayamaha.blogspot.com/2016/05/zipkin-distributed-tracing-system.html
[3]
http://ayolajayamaha.blogspot.com/2016/05/zipkin-integration-with-htrace.html
[4]
https://docs.google.com/document/d/1Mvcae5JLws_ivpiWP8PuAqUhA27k1H_I9_OVEhJTzOY/
[5] https://issues.apache.org/jira/browse/PHOENIX-2211
-- 
Best Regards,
Ayola Jayamaha
http://ayolajayamaha.blogspot.com/


[jira] [Issue Comment Deleted] (PHOENIX-2211) Set a common Jetty version for Phoenix

2016-05-24 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2211:
--
Comment: was deleted

(was: A common Jetty version can be included for the whole project, following 
communication with Nick Dimiduk.)

> Set a common Jetty version for Phoenix
> --
>
> Key: PHOENIX-2211
> URL: https://issues.apache.org/jira/browse/PHOENIX-2211
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>  Labels: tracing
>
> Jetty is used in the Tracing Web Application. The Jetty version is defined 
> in the root pom file and needs to be common across the whole Phoenix 
> project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: looking for help on HBASE

2016-05-24 Thread Sergey Soldatov
Hi Sean,
Not sure about the unit test, but the fix that caused our issue is
HBASE-15198. Prior to it we had connections with cellBlock false, so protobuf
serialized everything, including timeRange. Now cellBlock is true and
buildNoDataRegionAction is used for serializing Increment
mutations, and it doesn't even consider timeRange.
If you want to reproduce it with Phoenix, just get the workspace from
git at the commit provided in James's comments, build it with -DskipTests,
import it into an IDE (I would recommend IDEA since it handles generated
files better), and place a breakpoint in RequestConverter.buildNoDataRegionAction at
 Increment i = (Increment) row;
Now run PhoenixTimeQueryIT in the debugger. After a few steps you will see
that the builder starts with an empty timeRangeBuilder_.
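For reference, a minimal client-side sketch of the kind of call that loses its
time range (the row, family, and values are illustrative, not taken from the
Phoenix IT):

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: an Increment carrying a custom TimeRange. With cellBlock-based
// serialization, buildNoDataRegionAction drops the range, so the server
// sees an unconstrained time range for the increment.
public class IncrementTimeRangeSketch {
    static void incrementWithRange(Table table) throws IOException {
        Increment inc = new Increment(Bytes.toBytes("row1"));
        inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("counter"), 1L);
        inc.setTimeRange(0L, 100L); // custom range the server never receives
        table.increment(inc);
    }
}
{code}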

Thanks,
Sergey


On Tue, May 24, 2016 at 4:49 PM, Sean Busbey  wrote:

> Hi Phoenix!
>
> I've been trying to chase down the root cause of an issue that y'all
> reported with HBase increments that have custom time ranges in
> 1.2+[1]. Right now this issue is marked as a Blocker and HBase is
> waiting on it to continue our 1.2.z releases and start our 1.3.z
> release line.
>
> Long story short, thus far my attempts to come up with a unit test
> that shows the issue all pass on the HBase side, though I can clearly
> see the problem in one of Phoenix's ITs. I've been trying to track
> what happens across HBase + Phoenix along the write path but so far I
> haven't found a smoking gun.
>
> Could someone familiar with the Phoenix customizations along the HBase
> write path spare a bit of time to walk me through things? I want to
> make sure I'm looking at the correct places on the Phoenix side.
>
> [1]: https://issues.apache.org/jira/browse/HBASE-15698
>
> --
> busbey
>


[jira] [Assigned] (PHOENIX-1119) Use Zipkin to visualize Phoenix metrics data

2016-05-24 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  reassigned PHOENIX-1119:
-

Assignee: Nishani 

> Use Zipkin to visualize Phoenix metrics data
> 
>
> Key: PHOENIX-1119
> URL: https://issues.apache.org/jira/browse/PHOENIX-1119
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: gsoc2016, tracing
>
> Zipkin provides a nice tool for visualizing trace information: 
> http://twitter.github.io/zipkin/
> It's likely not difficult to visualize the Phoenix tracing data through this 
> tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1119) Use Zipkin to visualize Phoenix metrics data

2016-05-24 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299280#comment-15299280
 ] 

Nishani  commented on PHOENIX-1119:
---

Hi All,

htrace-zipkin provides a span receiver which sends spans to the Zipkin 
collector. htrace-hbase provides a span receiver which sends tracing spans to 
HBase, and a viewer which retrieves spans from HBase and displays them 
graphically.

[1] https://github.com/apache/incubator-htrace
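For illustration, a minimal sketch of wiring that receiver into an HTrace 3.x
client. The zipkin.* configuration keys, the port, and the ZipkinSpanReceiver
constructor shape are assumptions based on the htrace-zipkin module, not
verified against Phoenix's own configuration:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.htrace.HTraceConfiguration;
import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;
import org.apache.htrace.impl.ZipkinSpanReceiver;

public class ZipkinReceiverSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> conf = new HashMap<String, String>();
        conf.put("zipkin.collector-hostname", "localhost"); // assumed key
        conf.put("zipkin.collector-port", "9410");          // assumed key
        Trace.addReceiver(new ZipkinSpanReceiver(HTraceConfiguration.fromMap(conf)));

        // Every span opened while sampling is on is shipped to the collector.
        TraceScope scope = Trace.startSpan("demo-span", Sampler.ALWAYS);
        try {
            // traced work goes here
        } finally {
            scope.close();
        }
    }
}
{code}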

> Use Zipkin to visualize Phoenix metrics data
> 
>
> Key: PHOENIX-1119
> URL: https://issues.apache.org/jira/browse/PHOENIX-1119
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: gsoc2016, tracing
>
> Zipkin provides a nice tool for visualizing trace information: 
> http://twitter.github.io/zipkin/
> It's likely not difficult to visualize the Phoenix tracing data through this 
> tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: looking for help on HBASE

2016-05-24 Thread Sergey Soldatov
Hey Sean,
I'll take a look while the other guys are at HBaseCon.

Thanks,
Sergey

On Tue, May 24, 2016 at 4:49 PM, Sean Busbey  wrote:

> Hi Phoenix!
>
> I've been trying to chase down the root cause of an issue that y'all
> reported with HBase increments that have custom time ranges in
> 1.2+[1]. Right now this issue is marked as a Blocker and HBase is
> waiting on it to continue our 1.2.z releases and start our 1.3.z
> release line.
>
> Long story short, thus far my attempts to come up with a unit test
> that shows the issue all pass on the HBase side, though I can clearly
> see the problem in one of Phoenix's ITs. I've been trying to track
> what happens across HBase + Phoenix along the write path but so far I
> haven't found a smoking gun.
>
> Could someone familiar with the Phoenix customizations along the HBase
> write path spare a bit of time to walk me through things? I want to
> make sure I'm looking at the correct places on the Phoenix side.
>
> [1]: https://issues.apache.org/jira/browse/HBASE-15698
>
> --
> busbey
>


[jira] [Commented] (PHOENIX-2936) Missing antlr runtime on server side after PHOENIX-2908

2016-05-24 Thread William Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299268#comment-15299268
 ] 

William Yang commented on PHOENIX-2936:
---

I am really sorry for this. I didn't mean to remove all antlr dependencies on 
the server side; all I wanted was to remove antlr but keep antlr-runtime. 
I mistook 'org.antlr:antlr*' to match antlr only, but it actually matches both 
antlr and antlr-runtime. 
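For illustration, the distinction expressed in a maven-shade-plugin
artifactSet; this is a sketch only, not the actual PHOENIX-2936 patch:

{code}
<artifactSet>
  <excludes>
    <!-- exclude only the antlr tool from the server jar... -->
    <exclude>org.antlr:antlr</exclude>
    <!-- ...whereas org.antlr:antlr* would also match org.antlr:antlr-runtime,
         which the coprocessors still need at runtime -->
  </excludes>
</artifactSet>
{code}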

> Missing antlr runtime on server side after PHOENIX-2908
> ---
>
> Key: PHOENIX-2936
> URL: https://issues.apache.org/jira/browse/PHOENIX-2936
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2936.patch
>
>
> During PHOENIX-2908 antlr was completely removed from the server jar. That 
> was a bad idea, since the runtime is still required for indexes:
> {noformat}
> 2016-05-24 11:40:53,596 ERROR [StoreFileOpenerThread-L#0-1] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1359)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1413)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
> at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:208)
> at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:326)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:313)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:303)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1580)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreFileReaderOpen(RegionCoprocessorHost.java:1575)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:246)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:399)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:504)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:494)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:653)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:118)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:520)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:517)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


looking for help on HBASE

2016-05-24 Thread Sean Busbey
Hi Phoenix!

I've been trying to chase down the root cause of an issue that y'all
reported with HBase increments that have custom time ranges in
1.2+[1]. Right now this issue is marked as a Blocker and HBase is
waiting on it to continue our 1.2.z releases and start our 1.3.z
release line.

Long story short, thus far my attempts to come up with a unit test
that shows the issue all pass on the HBase side, though I can clearly
see the problem in one of Phoenix's ITs. I've been trying to track
what happens across HBase + Phoenix along the write path but so far I
haven't found a smoking gun.

Could someone familiar with the Phoenix customizations along the HBase
write path spare a bit of time to walk me through things? I want to
make sure I'm looking at the correct places on the Phoenix side.

[1]: https://issues.apache.org/jira/browse/HBASE-15698

-- 
busbey


Re: Jenkins build failures?

2016-05-24 Thread Sergey Soldatov
James,
Sure. I will file a JIRA and check about a non-zero thread pool size (not
sure that would help, since the pool is initialized in getDefaultExecutor and
always used if no other pool is provided in the HTable constructor).
Thanks,
Sergey
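For context on the queue behavior discussed in the quoted thread below, here
is a minimal sketch contrasting the two ThreadPoolExecutor configurations (the
pool sizes and queue bound are illustrative, not HBase's actual values):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueChoiceSketch {
    // Direct handoff: any submit with no idle worker spawns a new thread
    // (up to maximumPoolSize), so a burst of scans becomes a burst of threads.
    static ThreadPoolExecutor directHandoff() {
        return new ThreadPoolExecutor(1, Integer.MAX_VALUE,
                60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    // Bounded queue: submits beyond corePoolSize wait in the queue instead of
    // forcing new threads, trading some latency for a predictable thread count.
    static ThreadPoolExecutor bounded() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(8, 8,
                60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1024));
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}
{code}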

On Mon, May 23, 2016 at 8:11 PM, James Taylor 
wrote:

> Thanks, Sergey. Sounds like you're on to it. We could try configuring
> those tests with a non zero thread pool size so they don't
> use SynchronousQueue. Want to file a JIRA with this info so we don't lose
> track of it?
>
> James
>
> On Tue, May 17, 2016 at 11:21 PM, Sergey Soldatov <
> sergeysolda...@gmail.com> wrote:
>
>> Getting back to the failures with OOM/unable to create a native thread.
>> Those files each have around 100 tests that run on top of
>> Phoenix. In total they generate over 2500 scans (system.catalog,
>> sequences and regular scans over the table). The problem is that on the
>> HBase side all scans go through the ThreadPoolExecutor created in HTable,
>> which uses a SynchronousQueue as its work queue. From the javadoc for
>> ThreadPoolExecutor:
>>
>> *Direct handoffs. A good default choice for a work queue is a
>> SynchronousQueue that hands off tasks to threads without otherwise holding
>> them. Here, an attempt to queue a task will fail if no threads are
>> immediately available to run it, so a new thread will be constructed. This
>> policy avoids lockups when handling sets of requests that might have
>> internal dependencies. Direct handoffs generally require unbounded
>> maximumPoolSizes to avoid rejection of new submitted tasks. This in turn
>> admits the possibility of unbounded thread growth when commands continue
>> to
>> arrive on average faster than they can be processed.*
>>
>>
>> And we hit exactly that last case. But there is still a question.
>> Since all those tests pass correctly and the scans are completed
>> during execution (I checked that), it's not clear why all those threads are
>> still alive. If someone has a suggestion why that could happen, it would be
>> interesting to hear. Otherwise I will dig deeper a bit later. It may also
>> be worth changing the queue in HBase to something less aggressive
>> in terms of thread creation.
>>
>> Thanks,
>> Sergey
>>
>>
>> On Thu, May 5, 2016 at 8:24 AM, James Taylor 
>> wrote:
>>
>> > Looks like all Jenkins builds are failing, but it seems environmental?
>> Do
>> > we need to exclude some particular kind of host(s)?
>> >
>> > On Wed, May 4, 2016 at 5:25 PM, James Taylor 
>> > wrote:
>> >
>> > > Thanks, Sergey!
>> > >
>> > > On Wed, May 4, 2016 at 5:22 PM, Sergey Soldatov <
>> > sergeysolda...@gmail.com>
>> > > wrote:
>> > >
>> > >> James,
>> > >> Ah, didn't notice that timeouts are not shown in the final report as
>> > >> failures. It seems that the build is using JDK 1.7 and test run OOM
>> > >> with PermGen space. Fixed in PHOENIX-2879
>> > >>
>> > >> Thanks,
>> > >> Sergey
>> > >>
>> > >> On Wed, May 4, 2016 at 1:48 PM, James Taylor > >
>> > >> wrote:
>> > >> > Sergey, on master branch (which is HBase 1.2):
>> > >> > https://builds.apache.org/job/Phoenix-master/1214/console
>> > >> >
>> > >> > On Wed, May 4, 2016 at 1:31 PM, Sergey Soldatov <
>> > >> sergeysolda...@gmail.com>
>> > >> > wrote:
>> > >> >>
>> > >> >> James,
>> > >> >> Regarding HivePhoenixStoreIT. Are you talking about
>> > >> >> Phoenix-4.x-HBase-1.0  job? Last build passed it successfully.
>> > >> >>
>> > >> >>
>> > >> >> On Wed, May 4, 2016 at 10:15 AM, James Taylor <
>> > jamestay...@apache.org>
>> > >> >> wrote:
>> > >> >> > Our Jenkins builds have improved, but we're seeing some issues:
>> > >> >> > - timeouts with the new
>> org.apache.phoenix.hive.HivePhoenixStoreIT
>> > >> test.
>> > >> >> > - consistent failure with 4.x-HBase-1.1 build. I suspect that
>> > Jenkins
>> > >> >> > build
>> > >> >> > is out-of-date, as we haven't had a 4.x-HBase-1.1 branch for
>> quite
>> > a
>> > >> >> > while.
>> > >> >> > There's likely some changes that were made to the other Jenkins
>> > build
>> > >> >> > scripts that weren't made to this one
>> > >> >> > - flapping of
>> > >> >> > the
>> > >> >> >
>> > >>
>> >
>> org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT.testWriteFailureReadOnlyIndex
>> > >> >> > test in 0.98 and 1.0
>> > >> >> > - no email sent for 0.98 build (as far as I can tell)
>> > >> >> >
>> > >> >> > If folks have time to look into these, that'd be much
>> appreciated.
>> > >> >> >
>> > >> >> > James
>> > >> >> >
>> > >> >> >
>> > >> >> >
>> > >> >> > On Sat, Apr 30, 2016 at 11:55 AM, James Taylor <
>> > >> jamestay...@apache.org>
>> > >> >> > wrote:
>> > >> >> >
>> > >> >> >> The defaults when tests are running are much lower than the
>> > standard
>> > >> >> >> Phoenix defaults (see QueryServicesTestImpl and
>> > >> >> >> BaseTest.setUpConfigForMiniCluster()). It's unclear to me why
>> the
>> > >> >> >> HashJoinIT and SortMergeJoinIT tests (I think these are the
>> > >> culprits)
>> > >> >> >> do
>> > >> >> >> no

[jira] [Commented] (PHOENIX-2936) Missing antlr runtime on server side after PHOENIX-2908

2016-05-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298933#comment-15298933
 ] 

Hadoop QA commented on PHOENIX-2936:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12805968/PHOENIX-2936.patch
  against master branch at commit 10909ae502095bac775d98e6d92288c5cad9b9a6.
  ATTACHMENT ID: 12805968

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
31 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.IndexIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/361//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/361//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/361//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/361//console

This message is automatically generated.

> Missing antlr runtime on server side after PHOENIX-2908
> ---
>
> Key: PHOENIX-2936
> URL: https://issues.apache.org/jira/browse/PHOENIX-2936
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2936.patch
>
>
> During PHOENIX-2908 antlr was completely removed from the server jar. That 
> was a bad idea, since the runtime is still required for indexes:
> {noformat}
> 2016-05-24 11:40:53,596 ERROR [StoreFileOpenerThread-L#0-1] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1359)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1413)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
> at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:208)
> at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:326)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:313)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:303)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1580)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreFileReaderOpen(RegionCoprocessorHost.java:1575)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:

[jira] [Commented] (PHOENIX-1734) Local index improvements

2016-05-24 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298925#comment-15298925
 ] 

Sergey Soldatov commented on PHOENIX-1734:
--

[~rajeshbabu], [~jamestaylor]
Just FYI, I checked that the recent changes in CSVBulkLoad are compatible with 
the new local indexes. It works, even better than before. I loaded 5 million 
records into a table with 1 global index and 2 local indexes. On a single-node 
cluster that took less than 10 min (table with over 20 columns, CSV file 
1.5 GB). Some performance observations for a simple query 
{{select * from table where indexed_col = something}}:
0.2 sec with the local index 
1 min without an index (almost 2 min after the split)
~1.5 sec with the old implementation 

Now about a problem I found. I tried to split/compact this table from the 
HBase shell, and the compaction fails:
{noformat}
2016-05-24 12:26:30,362 ERROR 
[regionserver//10.22.8.101:16201-longCompactions-1464116687568] 
regionserver.CompactSplitThread: Compaction failed Request = 
regionName=GIGANTIC_TABLE,\x80\x03\xD0\xA3,1464117986481.3a4eef7f676dd670ce4fc1ef5130c293.,
 storeName=L#0, fileCount=1, fileSize=32.0 M (32.0 M), priority=9, 
time=154281628674638 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.isSatisfiedMidKeyCondition(LocalIndexStoreFileScanner.java:158)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.next(LocalIndexStoreFileScanner.java:55)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:581)
at 
org.apache.phoenix.schema.stats.StatisticsScanner.next(StatisticsScanner.java:73)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:318)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:111)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:119)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1223)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1845)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:529)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:566)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}


> Local index improvements
> 
>
> Key: PHOENIX-1734
> URL: https://issues.apache.org/jira/browse/PHOENIX-1734
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENI-1734-WIP.patch, PHOENIX-1734_v1.patch, 
> PHOENIX-1734_v4.patch, PHOENIX-1734_v5.patch, TestAtomicLocalIndex.java
>
>
> Local index design considerations: 
>  1. Colocation: We need to co-locate local index regions and data 
> regions. The co-location can be a hard guarantee or a soft (best effort) 
> guarantee. The co-location is a performance requirement, and may also be 
> needed for consistency (2). Hard co-location means that either both the data 
> region and index region are opened atomically, or neither of them opens for 
> serving. 
>  2. Index consistency: Ideally we want the index region and data region to 
> have atomic updates. This means that they should either (a) use transactions, 
> or (b) share the same WALEdit and also MVCC for visibility. (b) is only 
> applicable if there is a hard colocation guarantee. 
>  3. Local index clients: How the local index will be accessed from clients. 
> In the case of the local index being managed in a table, the HBase client can 
> be used for doing scans, etc. If the local index is hidden inside the data 
> regions, there has to be a different mechanism to access the data through the 
> data region. 
> With the above considerations, we imagine three possible implementations for 
> the local index solution, each detailed below. 
> APPROACH 1: Current approach
> (1) The current approach uses the balancer as a soft guarantee. Because of 
> this, in some rare cases, colocation might not happen. 
> (2) The index and data regions do not share the same WALEdits, meaning 
> consistency cannot be achieved. There are also two WAL writes per write from 
> the client. 
> (3) A regular HBase client can be used to access index data since the index 
> is just another table. 
> APPROACH 2: Shadow reg

[jira] [Created] (PHOENIX-2937) PHOENIX_QUERYSERVER_OPTS not checked for some options

2016-05-24 Thread Kevin Liew (JIRA)
Kevin Liew created PHOENIX-2937:
---

 Summary: PHOENIX_QUERYSERVER_OPTS not checked for some options
 Key: PHOENIX-2937
 URL: https://issues.apache.org/jira/browse/PHOENIX-2937
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
 Environment: Phoenix 4.8.0 snapshot
Reporter: Kevin Liew
Priority: Minor


Set 
{code}
PHOENIX_QUERYSERVER_OPTS="-Dphoenix.query.isNamespaceMappingEnabled=true -Dphoenix.queryserver.serialization=JSON"
{code}

Run {code}queryserver.py start{code}

Verify that the options were set in `vmInputArguments` in the log file. 

The psql logging options took effect, since the log file was created, but the 
options set above do not take effect: the server still uses PROTOBUF, and 
namespace mapping remains disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2936) Missing antlr runtime on server side after PHOENIX-2908

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2936:
-
Attachment: PHOENIX-2936.patch

> Missing antlr runtime on server side after PHOENIX-2908
> ---
>
> Key: PHOENIX-2936
> URL: https://issues.apache.org/jira/browse/PHOENIX-2936
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2936.patch
>
>
> During PHOENIX-2908 antlr was completely removed from the server jar. That 
> was a bad idea, since the runtime is still required for indexes:
> {noformat}
> 2016-05-24 11:40:53,596 ERROR [StoreFileOpenerThread-L#0-1] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1359)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1413)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2276)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
> at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:208)
> at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:326)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:313)
> at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:303)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:145)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1580)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreFileReaderOpen(RegionCoprocessorHost.java:1575)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:246)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:399)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:504)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:494)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:653)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:118)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:520)
> at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:517)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2936) Missing antlr runtime on server side after PHOENIX-2908

2016-05-24 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-2936:


 Summary: Missing antlr runtime on server side after PHOENIX-2908
 Key: PHOENIX-2936
 URL: https://issues.apache.org/jira/browse/PHOENIX-2936
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


During PHOENIX-2908 antlr was completely removed from the server jar. That was 
a bad idea, since the runtime is still required for indexes:
{noformat}
2016-05-24 11:40:53,596 ERROR [StoreFileOpenerThread-L#0-1] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
java.lang.NoClassDefFoundError: org/antlr/runtime/CharStream
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1359)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1413)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2276)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2276)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:326)
at 
org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:313)
at 
org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:303)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreFileReaderOpen(IndexHalfStoreFileReaderGenerator.java:145)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1580)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreFileReaderOpen(RegionCoprocessorHost.java:1575)
at 
org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:246)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:399)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:504)
at 
org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:494)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:653)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:118)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:520)
at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:517)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298535#comment-15298535
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2903:
--

After PHOENIX-2628 the actual region boundaries are the scan boundaries, and 
whatever the context scan boundaries are become the values of 
SCAN_START_ROW_SUFFIX and SCAN_STOP_ROW_SUFFIX, so at the server we prefix the 
region start key to these attribute values. I wonder whether we set these 
values properly after a split in all cases, e.g. in code like the following. 
I will look deeper once again and come back to you.
{noformat}
+public static void setAggregateStartRow(QueryPlan queryPlan, Tuple tuple, Scan scan) throws SQLException {
+    if (tuple != null) {
+        byte[] newStartRow = getAggregationStartRow(queryPlan, tuple);
+        byte[] originalStartRow = queryPlan.getContext().getScan().getStartRow();
+        if (Bytes.compareTo(originalStartRow, newStartRow) > 0) {
+            logger.warn("Expected start row based on partial scan (" +
+                    Bytes.toStringBinary(newStartRow) + ") to be after original start row (" +
+                    Bytes.toStringBinary(originalStartRow) + ") when split occurs");
+        } else {
+            scan.setStartRow(newStartRow);
+        }
+    }
+}
+
+@Override
+public void splitOccurred(QueryPlan queryPlan, Scan scan, Tuple tuple) throws SQLException {
+    if (tuple != null) {
+        ImmutableBytesWritable ptr = queryPlan.getContext().getTempPtr();
+        tuple.getKey(ptr);
+        byte[] startAfterRow = ByteUtil.copyKeyBytesIfNecessary(ptr);
+        // This will force our coprocessor to skip until the row *after* this one.
+        // The server will properly take care of reverse or forward scans, while us
+        // forming the row key *before* the current key is not 100% reliable.
+        scan.setAttribute(BaseScannerRegionObserver.SCAN_START_AFTER_ROW, startAfterRow);
+        scan.setStartRow(startAfterRow);
+        // The scan stop row will be reset if necessary by other trackers.
+    }
+}
{noformat}

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1734) Local index improvements

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298521#comment-15298521
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1734:
--

[~jamestaylor] Thanks for the reviews. Committed the patch to master. The 
failed test passed locally. I am looking at the test failures and will upload 
an addendum to fix them now.
bq. +1 with the adding of ASYNC to CREATE INDEX statement during 
ConnectionQueryServices.init()
I have made the CREATE INDEX async for all cases: offline upgrade from psql 
and ConnectionQueryServices.init(). Do you think ASYNC is fine for the offline 
upgrade as well, or is normal creation fine? An illustration of the ASYNC 
variant follows.
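For reference, a sketch of the ASYNC variant under discussion (the table,
column, and index names are placeholders):

{code}
-- The index is created in a building state; population is deferred to the
-- IndexTool MapReduce job instead of running synchronously in the client.
CREATE LOCAL INDEX my_local_idx ON my_table (v1) ASYNC;
{code}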

> Local index improvements
> 
>
> Key: PHOENIX-1734
> URL: https://issues.apache.org/jira/browse/PHOENIX-1734
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENI-1734-WIP.patch, PHOENIX-1734_v1.patch, 
> PHOENIX-1734_v4.patch, PHOENIX-1734_v5.patch, TestAtomicLocalIndex.java
>
>
> Local index design considerations: 
>  1. Colocation: We need to co-locate local index regions and data 
> regions. The co-location can be a hard guarantee or a soft (best effort) 
> guarantee. The co-location is a performance requirement, and may also be 
> needed for consistency (2). Hard co-location means that either both the data 
> region and index region are opened atomically, or neither of them opens for 
> serving. 
>  2. Index consistency: Ideally we want the index region and data region to 
> have atomic updates. This means that they should either (a) use transactions, 
> or (b) share the same WALEdit and also MVCC for visibility. (b) is only 
> applicable if there is a hard colocation guarantee. 
>  3. Local index clients: How the local index will be accessed from clients. 
> In the case of the local index being managed in a table, the HBase client can 
> be used for doing scans, etc. If the local index is hidden inside the data 
> regions, there has to be a different mechanism to access the data through the 
> data region. 
> With the above considerations, we imagine three possible implementations for 
> the local index solution, each detailed below. 
> APPROACH 1: Current approach
> (1) The current approach uses the balancer as a soft guarantee. Because of 
> this, in some rare cases, colocation might not happen. 
> (2) The index and data regions do not share the same WALEdits, meaning 
> consistency cannot be achieved. There are also two WAL writes per write from 
> the client. 
> (3) A regular HBase client can be used to access index data since the index 
> is just another table. 
> APPROACH 2: Shadow regions + shared WAL & MVCC 
> (1) Introduce a shadow regions concept in HBase. Shadow regions are not 
> assigned by the AM. Phoenix implements atomic open (and split/merge) of 
> data regions and index regions so that hard co-location is guaranteed. 
> (2) For consistency requirements, the index regions and data regions will 
> share the same WALEdit (and thus recovery) and they will also share the same 
> MVCC mechanics so that index updates and data updates become visible 
> atomically. 
> (3) A regular HBase client can be used to access index data since the index 
> is just another table.  
> APPROACH 3: Storing index data in separate column families in the table.
>  (1) Regions will have store files for cfs, sorted using the primary 
> sort order. Regions may also maintain stores sorted in secondary sort 
> orders. This approach is similar in vein to how an RDBMS keeps data (a B-TREE 
> in primary sort order and multiple B-TREEs in secondary sort orders with 
> pointers to the primary key). That means storing the index data in separate 
> column families in the data region. This way a region is extended to be more 
> similar to an RDBMS (but LSM instead of BTree). This is sometimes called 
> shadow cf’s as well. This approach guarantees hard co-location.
>  (2) Since everything is in a single region, they automatically share the 
> same WALEdit and MVCC numbers. Atomicity is easily achieved. 
>  (3) The current Phoenix implementation needs to change in such a way that 
> column family selection in the read/write path is based on the data 
> table/index table (the logical table in Phoenix). 
> I think that APPROACH 3 is the best one for the long term, since it does not 
> require changing anything in HBase; mainly, we don't need to muck around with 
> the split/merge stuff in HBase. It will be a win-win.
> However, APPROACH 2 still needs a “shadow regions” concept to be implemented 
> in HBase itself, and also a way to share WALEdits and MVCCs from multiple 
> regions.
> APPROACH 1 is a good start for local indexes, but I think we are not getting 
> the full benefits of the feature. We

[jira] [Commented] (PHOENIX-1734) Local index improvements

2016-05-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298493#comment-15298493
 ] 

Hudson commented on PHOENIX-1734:
-

FAILURE: Integrated in Phoenix-master #1234 (See 
[https://builds.apache.org/job/Phoenix-master/1234/])
PHOENIX-1734 Local index improvements(Rajeshbabu) (rajeshbabu: rev 
10909ae502095bac775d98e6d92288c5cad9b9a6)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/TxWriteFailureIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexExpressionIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java
* phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java
* phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/SubqueryIT.java
* phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleWriterIndexCommitter.java
* 
phoenix-core/src/it/java/org/apache/phoenix/hbase/index/balancer/IndexLoadBalancerIT.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexSplitTransaction.java
* phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/RollbackIT.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/TableRef.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/balancer/IndexLoadBalancer.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinLocalIndexIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/master/IndexMasterObserver.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseViewIT.java
* phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
* phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ProjectionCompiler.java
* 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java
* 
phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinIT.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/MutableRollbackIT.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/ScanRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/TrackingParallelWriterIndexCommitter.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
* phoenix-core/src/main/

[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298461#comment-15298461
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2903:
--

bq. In ScanUtil#setLocalIndexAttributes we need to replace 
SCAN_ACTUAL_START_ROW with SCAN_START_ROW to set the region start key as the 
actual start key. 
{noformat}
public static void setLocalIndexAttributes(Scan newScan, int keyOffset,
        byte[] regionStartKey, byte[] regionEndKey,
        byte[] startRowSuffix, byte[] stopRowSuffix) {
    if (ScanUtil.isLocalIndex(newScan)) {
-       newScan.setAttribute(SCAN_ACTUAL_START_ROW, regionStartKey);
{noformat}

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298461#comment-15298461
 ] 

Rajeshbabu Chintaguntla edited comment on PHOENIX-2903 at 5/24/16 4:30 PM:
---

In ScanUtil#setLocalIndexAttributes we need to replace SCAN_ACTUAL_START_ROW 
with SCAN_START_ROW to set the region start key as the actual start key. 
{noformat}
public static void setLocalIndexAttributes(Scan newScan, int keyOffset,
        byte[] regionStartKey, byte[] regionEndKey,
        byte[] startRowSuffix, byte[] stopRowSuffix) {
    if (ScanUtil.isLocalIndex(newScan)) {
-       newScan.setAttribute(SCAN_ACTUAL_START_ROW, regionStartKey);
{noformat}


was (Author: rajeshbabu):
bq. In ScanUtil#setLocalIndexAttributes we need to replace 
SCAN_ACTUAL_START_ROW with SCAN_START_ROW to set the region start key as the 
actual start key. 
{noformat}
public static void setLocalIndexAttributes(Scan newScan, int keyOffset,
        byte[] regionStartKey, byte[] regionEndKey,
        byte[] startRowSuffix, byte[] stopRowSuffix) {
    if (ScanUtil.isLocalIndex(newScan)) {
-       newScan.setAttribute(SCAN_ACTUAL_START_ROW, regionStartKey);
{noformat}

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298447#comment-15298447
 ] 

Samarth Jain commented on PHOENIX-2903:
---

Patch looks good to me, [~jamestaylor]. 

One minor nit: instead of passing null for the RegionTracker in 
TableResultIterator, maybe just pass the DEFAULT_TRACKER? 

One such example is in PhoenixRecordReader:
{code}
+final TableResultIterator tableResultIterator = new TableResultIterator(
+        queryPlan.getContext().getConnection().getMutationState(),
+        scan, readMetrics.allotMetric(SCAN_BYTES, tableName),
+        renewScannerLeaseThreshold, queryPlan,
+        MapReduceParallelScanGrouper.getInstance(), null);
{code}

You can then get rid of this check in TableResultIterator constructor:
{code}
this.tracker = tracker == null ? DEFAULT_TRACKER : tracker;
{code}

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298428#comment-15298428
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2903:
--

I think when we set up the local index scan at the server we need to make use 
of SCAN_START_AFTER_ROW instead of SCAN_START_ROW_SUFFIX when it's not null; 
otherwise we go through already-scanned data.
{noformat}
private static void setupLocalIndexScan(Scan scan, HRegionInfo regionInfo) {
    final byte[] lowerInclusiveRegionKey = regionInfo.getStartKey();
    final byte[] upperExclusiveRegionKey = regionInfo.getEndKey();
    byte[] prefix = lowerInclusiveRegionKey.length == 0 ?
            new byte[upperExclusiveRegionKey.length] : lowerInclusiveRegionKey;
    int prefixLength = lowerInclusiveRegionKey.length == 0 ?
            upperExclusiveRegionKey.length : lowerInclusiveRegionKey.length;
    if (scan.getAttribute(SCAN_START_ROW_SUFFIX) != null) {
        scan.setStartRow(ScanRanges.prefixKey(
                scan.getAttribute(SCAN_START_ROW_SUFFIX), 0, prefix, prefixLength));
    }
    if (scan.getAttribute(SCAN_STOP_ROW_SUFFIX) != null) {
        scan.setStopRow(ScanRanges.prefixKey(
                scan.getAttribute(SCAN_STOP_ROW_SUFFIX), 0, prefix, prefixLength));
    }
}
{noformat}

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298412#comment-15298412
 ] 

James Taylor commented on PHOENIX-2903:
---

Yes, for non aggregate queries we always set SCAN_START_AFTER_ROW on the scans 
that occur after a split happens.

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298401#comment-15298401
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2903:
--

[~jamestaylor] For non-aggregate queries we can use the SCAN_START_AFTER_ROW 
attribute value as the scan boundary directly, right? That way the scan itself 
starts from it; otherwise we need to scan until we reach SCAN_START_AFTER_ROW, 
which might give scanner timeout exceptions when there is a lot of data in a 
region.

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298348#comment-15298348
 ] 

James Taylor commented on PHOENIX-2903:
---

Tests pass locally. [~samarthjain] - would you mind giving this a quick look?

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently there is a hole in our split detection code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2903:
--
Fix Version/s: 4.8.0

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently there is a hole in our split detection code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2935) IndexMetaData cache can expire while a delete or upsert query is running on server

2016-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298158#comment-15298158
 ] 

Ankit Singhal commented on PHOENIX-2935:


I think in such cases (a delete or upsert running on the server), the server 
cache should either expire only after the query timeout period or be 
invalidated by a separate RPC call once the query is completed.
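A minimal sketch of both options, using a Guava cache as a stand-in for 
Phoenix's server-side cache (key/value types, the timeout value, and method 
names are assumptions):

{code}
// Hedged sketch of the two expiry strategies, not Phoenix's actual cache.
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class IndexMetaDataCacheSketch {
    // Option 1: tie entry lifetime to the query timeout rather than a
    // shorter idle timeout, so entries cannot expire mid-query.
    private final Cache<Long, byte[]> cache = CacheBuilder.newBuilder()
            .expireAfterAccess(600_000, TimeUnit.MILLISECONDS) // ~query timeout
            .build();

    public void put(long cacheId, byte[] indexMetaData) {
        cache.put(cacheId, indexMetaData);
    }

    // Option 2: an explicit invalidation hook the client could trigger via
    // a second RPC once the query completes.
    public void removeOnQueryCompletion(long cacheId) {
        cache.invalidate(cacheId);
    }
}
{code}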

WDYT, [~jamestaylor]?

> IndexMetaData cache can expire while a delete or upsert query is running on server
> ---
>
> Key: PHOENIX-2935
> URL: https://issues.apache.org/jira/browse/PHOENIX-2935
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>
> IndexMetaData cache can expire when a delete or upsert query is running on 
> server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2935) IndexMetaData cache can expire while a delete or upsert query is running on server

2016-05-24 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2935:
--

 Summary: IndexMetaData cache can expire while a delete or upsert query is 
running on server
 Key: PHOENIX-2935
 URL: https://issues.apache.org/jira/browse/PHOENIX-2935
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


IndexMetaData cache can expire when a delete or upsert query is running on 
server.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2901) If namespaces are enabled, check for existence of schema when sequence created

2016-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2901:
---
Attachment: PHOENIX-2901.patch

[~giacomotaylor], can you please review the attached patch?

> If namespaces are enabled, check for existence of schema when sequence created
> --
>
> Key: PHOENIX-2901
> URL: https://issues.apache.org/jira/browse/PHOENIX-2901
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Ankit Singhal
> Attachments: PHOENIX-2901.patch
>
>
> If namespaces are enabled, we should check for the existence of the sequence 
> schema before creating the sequence. There are some sequences generated by 
> Phoenix to manage indexes over views, which auto-generate a schema name. 
> Perhaps it'd be better if those used the SYSTEM schema instead and prepended 
> the sequence name with the previous schema name to ensure uniqueness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2914) Make sqlline refer to bin/hbase-site.xml by default

2016-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298125#comment-15298125
 ] 

Ankit Singhal commented on PHOENIX-2914:


Sorry to pitch in late, but after the above change I'm facing the problem of 
having to update the config in multiple places.

IMHO, we can delete bin/hbase-site.xml if it is confusing users, and update 
hbase_config_path to use phoenix_utils.hbase_conf_dir only.
If we ever require some config to be overridden for the Phoenix client, we may 
introduce PHOENIX_CONF_DIR or something similar to override cluster values, but 
we should keep only one HBase config path in the Java classpath.

{code}
-hbase_config_path = os.getenv('HBASE_CONF_DIR', phoenix_utils.current_dir)
+hbase_config_path = phoenix_utils.hbase_conf_dir

-' -cp "' + hbase_config_path + os.pathsep + phoenix_utils.hbase_conf_dir + os.pathsep + phoenix_utils.phoenix_client_jar + os.pathsep + phoenix_utils.hadoop_common_jar + os.pathsep + phoenix_utils.hadoop_hdfs_jar + \
+' -cp "' + hbase_config_path + os.pathsep + phoenix_utils.phoenix_client_jar + os.pathsep + phoenix_utils.hadoop_common_jar + os.pathsep + phoenix_utils.hadoop_hdfs_jar + \
{code}

And in any case, we should keep psql.py consistent as well.



> Make sqlline refer to bin/hbase-site.xml by default
> ---
>
> Key: PHOENIX-2914
> URL: https://issues.apache.org/jira/browse/PHOENIX-2914
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Junegunn Choi
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2914.patch
>
>
> I expected sqlline to pick up the settings in {{bin/hbase-site.xml}} by 
> default, but it didn't unless I set up {{HBASE_CONF_DIR}} to point to the 
> {{bin}} directory.
> An easy solution would be to simply prepend {{hbase_config_path}} to the 
> classpath. {{hbase_config_path}} and {{phoenix_utils.hbase_conf_dir}} will 
> point to the same directory when {{HBASE_CONF_DIR}} is set, but having it 
> twice in classpath will not cause any problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2886) Union ALL with Char column not present in the table in Query 1 but in Query 2 throws exception

2016-05-24 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297858#comment-15297858
 ] 

Sergey Soldatov commented on PHOENIX-2886:
--

[~aliciashu] as [~jamestaylor] mentioned, we need to track the max scale and 
sort order and select the best of them. MaxLength is better kept as an Integer, 
not an int.

I also checked the case with {{select id, cast('foo' as char(10)) firstname, 
lastname from person;}}
There are a couple of surprises. At first glance we don't need any coerce 
expressions here, since firstname is char(10) and so is 'foo' with the cast 
expression. But cast('foo' as char(10)) actually becomes the coerce expression 
{{TO_CHAR('foo')}} with type PChar and max length 3 (!). So when we try to get 
the value using the outer schema, which has PChar with max length 10, we run 
into trouble. The first thing that came to my mind was to add an additional 
coerce when the lengths differ. But here is another surprise: 
{{PChar.coerceBytes}} calls {{PDataType.coerceBytes}}, which does not take 
desiredMaxLength into consideration when the data types are the same.
As a possible solution, we could change {{PChar.coerceBytes}} to extend the ptr 
to the desired size.
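A hedged sketch of that padding idea (the helper name and class are made up; 
the real change would live inside {{PChar.coerceBytes}}):

{code}
// Hedged sketch: right-pad a CHAR value with spaces up to desiredMaxLength,
// mirroring Phoenix's ptr convention via ImmutableBytesWritable.
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

public class CharPadSketch {
    static void padTo(ImmutableBytesWritable ptr, int desiredMaxLength) {
        int current = ptr.getLength();
        if (current >= desiredMaxLength) {
            return; // already at least the desired size; nothing to do
        }
        byte[] padded = new byte[desiredMaxLength];
        System.arraycopy(ptr.get(), ptr.getOffset(), padded, 0, current);
        for (int i = current; i < desiredMaxLength; i++) {
            padded[i] = (byte) ' '; // CHAR pads with spaces
        }
        ptr.set(padded);
    }
}
{code}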
With that, there is no need to remove existing expressions, which simplifies 
the code. I would also simplify how the TargetDataExpression list is collected. 
I used this simple code:
{noformat}
private static List<TargetDataExpression> checkit(List<QueryPlan> selectPlans)
        throws SQLException {
    int columnCount = selectPlans.get(0).getProjector().getColumnCount();
    List<TargetDataExpression> result = new ArrayList<>(columnCount);
    for (int i = 0; i < columnCount; i++) {
        for (QueryPlan plan : selectPlans) {
            ColumnProjector cp = plan.getProjector().getColumnProjector(i);
            if (result.size() < i + 1) {
                result.add(new TargetDataExpression(cp));
            } else {
                result.get(i).update(cp);
            }
        }
    }
    return result;
}
{noformat}
You don't need an additional loop to check values; all the logic for choosing 
the best expression/length/order/scale can be hidden in {{update()}}, and there 
is no need to modify constructor calls if something else needs to be added 
later (like scale or sort order).
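As a rough, hedged sketch of what {{update()}} could track (a simplified 
stand-in, not Phoenix's actual class; the real version would pull these values 
from the ColumnProjector):

{code}
// Hedged sketch of the update() idea: keep the widest metadata seen across
// the union branches.
class TargetDataExpressionSketch {
    Integer maxLength; // Integer rather than int, so "unknown" can stay null
    Integer scale;

    void update(Integer otherMaxLength, Integer otherScale) {
        if (otherMaxLength != null
                && (maxLength == null || otherMaxLength > maxLength)) {
            maxLength = otherMaxLength;
        }
        if (otherScale != null && (scale == null || otherScale > scale)) {
            scale = otherScale;
        }
    }
}
{code}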



> Union ALL with Char column not present in the table in Query 1 but in Query 
> 2 throws exception
> --
>
> Key: PHOENIX-2886
> URL: https://issues.apache.org/jira/browse/PHOENIX-2886
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2886-v1.patch, PHOENIX-2886-v2.patch, 
> PHOENIX-2886-v3.patch, PHOENIX-2886.patch, UnionAllIT.java.diff
>
>
> To reproduce:
> create table person ( id bigint not null primary key, firstname char(10), 
> lastname varchar(10) );
> upsert into person values( 1, 'john', 'doe');
> upsert into person values( 2, 'jane', 'doe');
> -- fixed value for char(10)
> select id, 'foo' firstname, lastname from person union all select * from 
> person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13
> -- fixed value for bigint
> select cast( 10 AS bigint) id, 'foo' firstname, lastname from person union 
> all select * from person;
> java.lang.RuntimeException: java.sql.SQLException: ERROR 201 (22000): Illegal 
> data. Expected length of at least 106 bytes, but had 13



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297843#comment-15297843
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2903:
--

Reviewing the latest patch, [~jamestaylor].

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently there is a hole in our split detection code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2016-05-24 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2903:
--
Attachment: PHOENIX-2903_v3.patch

Fixed a few issues and removed some unnecessary (and incorrect) code in 
BaseResultIterators - we can use the getParallelScans() that's already there as 
it does the correct thing for both local indexes and salted tables. Also, 
collapsed one of the new scan attributes I added to use an existing one (whose 
semantics were identical).

Running tests locally now. Please review, [~rajeshbabu].

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_wip.patch
>
>
> Currently there is a hole in our split detection code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)