4 Apache Events in 2019: DC Roadshow soon; next up Chicago, Las Vegas, and Berlin!

2019-03-06 Thread Rich Bowen
Dear Apache Enthusiast,

(You’re receiving this because you are subscribed to one or more user
mailing lists for an Apache Software Foundation project.)

TL;DR:
 * Apache Roadshow DC is in 3 weeks. Register now at
https://apachecon.com/usroadshowdc19/
 * Registration for Apache Roadshow Chicago is open.
http://apachecon.com/chiroadshow19
 * The CFP for ApacheCon North America is now open.
https://apachecon.com/acna19
 * Save the date: ApacheCon Europe will be held in Berlin, October 22nd
through 24th.  https://apachecon.com/aceu19


Registration is open for two Apache Roadshows; these are smaller events
with a more focused program and regional community engagement:

Our Roadshow event in Washington DC takes place in under three weeks, on
March 25th. We’ll be hosting a day-long event at the Fairfax campus of
George Mason University. The roadshow is a full day of technical talks
(two tracks) and an open source job fair featuring AWS, Bloomberg, dito,
GridGain, Linode, and Security University. For more details about the
program and the job fair, and to register, visit
https://apachecon.com/usroadshowdc19/

Apache Roadshow Chicago will be held May 13-14th at a number of venues
in Chicago’s Logan Square neighborhood. This event will feature sessions
in AdTech, FinTech and Insurance, startups, “Made in Chicago”, Project
Shark Tank (innovations from the Apache Incubator), community diversity,
and more. It’s a great way to learn about various Apache projects “at
work” while playing at a brewery, a beercade, and a neighborhood bar.
Sign up today at https://www.apachecon.com/chiroadshow19/

We’re delighted to announce that the Call for Presentations (CFP) is now
open for ApacheCon North America in Las Vegas, September 9-13th! As the
official conference series of the ASF, ApacheCon North America will
feature over a dozen Apache project summits, including Cassandra,
Cloudstack, Tomcat, Traffic Control, and more. We’re looking for talks
in a wide variety of categories -- anything related to ASF projects and
the Apache development process. The CFP closes at midnight on May 26th.
In addition, the ASF will be celebrating its 20th Anniversary during the
event. For more details and to submit a proposal for the CFP, visit
https://apachecon.com/acna19/ . Registration will be opening soon.

Be sure to mark your calendars for ApacheCon Europe, which will be held
in Berlin, October 22-24th at the KulturBrauerei, a landmark of Berlin's
industrial history. In addition to innovative content from our projects,
we are collaborating with the Open Source Design community
(https://opensourcedesign.net/) to offer a track on design this year.
The CFP and registration will open soon at https://apachecon.com/aceu19/ .

Sponsorship opportunities are available for all events, with details
listed on each event’s site at http://apachecon.com/.

We look forward to seeing you!

Rich, for the ApacheCon Planners
@apachecon



[jira] [Created] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-06 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-5178:
--

 Summary: SYSTEM schema is not getting cached at MetaData server
 Key: PHOENIX-5178
 URL: https://issues.apache.org/jira/browse/PHOENIX-5178
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Ankit Singhal
Assignee: Ankit Singhal


During initialization, the meta connection will not be able to see the SYSTEM 
schema because the scanner at the MetaData server runs with a max_timestamp of 
MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
re-creating the SYSTEM schema metadata.
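
For illustration only (this is not the actual Phoenix code), a minimal sketch of why an exclusive upper time bound hides the SYSTEM schema: HBase's Scan.setTimeRange(min, max) treats max as exclusive, so a cell written at exactly MIN_SYSTEM_TABLE_TIMESTAMP falls outside the scan and is never cached. The constant value below is a placeholder, not Phoenix's real timestamp.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;

public class ExclusiveTimeRangeSketch {
    // Placeholder standing in for Phoenix's MIN_SYSTEM_TABLE_TIMESTAMP.
    private static final long MIN_SYSTEM_TABLE_TIMESTAMP = 1_000_000L;

    public static Scan buildMetaScan() throws IOException {
        Scan scan = new Scan();
        // The upper bound of setTimeRange is exclusive: a SYSTEM schema cell written
        // at exactly MIN_SYSTEM_TABLE_TIMESTAMP is never returned by this scan, so
        // the MetaData server never populates its cache with it.
        scan.setTimeRange(0L, MIN_SYSTEM_TABLE_TIMESTAMP);
        return scan;
    }
}
{code}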

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-06 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178.patch

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5178.patch
>
>
> During initialization, the meta connection will not be able to see the SYSTEM 
> schema because the scanner at the MetaData server runs with a max_timestamp of 
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
> re-creating the SYSTEM schema metadata.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-03-06 Thread Chinmay Kulkarni (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5122:
--
Fix Version/s: 4.13.0
   4.13.1
   5.1.0
   4.15.0

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.13.0, 4.13.1, 4.15.0, 4.14.1, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> +-------+--------+
> No rows selected (0.033 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}
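
A hedged regression-check sketch for the scenario above: the row value constructor IN clause over the DESC primary key should return both upserted rows regardless of client version. The connection URL is illustrative; the table name follows the transcript.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowValueConstructorInCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM P_T02 WHERE (OID, CODE) IN (('0001', 'v0001'), ('0002', 'v0002'))")) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            // Both upserted rows should come back; in the 4.13-client-vs-4.14.1-server
            // transcript above, this query returns 0 rows.
            System.out.println(rows == 2 ? "OK" : "REGRESSION: expected 2 rows, got " + rows);
        }
    }
}
{code}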



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5131) Make spilling to disk for order/group by configurable

2019-03-06 Thread Abhishek Singh Chouhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5131:

Attachment: PHOENIX-5131-master.patch

> Make spilling to disk for order/group by configurable
> -
>
> Key: PHOENIX-5131
> URL: https://issues.apache.org/jira/browse/PHOENIX-5131
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5131-master.patch
>
>
> We've observed that large queries doing order/group by lead to issues on the 
> regionserver (crashes, long GC pauses, file handle exhaustion, etc.). We 
> should make spilling to disk configurable and, in case it's disabled, fail the 
> query once it hits the spilling limit on any of the region servers. Also make 
> the spooling threshold a server-side-only property to prevent clients from 
> controlling memory allocation on the regionserver side.
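
A hedged sketch of one way such a server-side guard could look; the property names, defaults, and exception type below are illustrative placeholders, not what the attached patch actually uses.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SpillGuardSketch {
    // Illustrative property names only.
    static final String SPILL_ENABLED_ATTRIB = "phoenix.query.server.spillToDiskEnabled";
    static final String SPOOL_THRESHOLD_ATTRIB = "phoenix.query.spoolThresholdBytes";

    private final boolean spillEnabled;
    private final long spoolThresholdBytes;

    SpillGuardSketch(Configuration conf) {
        // Read from the server-side configuration only, so clients cannot
        // influence memory allocation on the region server.
        this.spillEnabled = conf.getBoolean(SPILL_ENABLED_ATTRIB, true);
        this.spoolThresholdBytes = conf.getLong(SPOOL_THRESHOLD_ATTRIB, 20L * 1024 * 1024);
    }

    void onBytesBuffered(long bytesBuffered) {
        if (!spillEnabled && bytesBuffered > spoolThresholdBytes) {
            // Fail the query instead of spilling to disk on the region server.
            throw new IllegalStateException("Query exceeded the spool threshold of "
                + spoolThresholdBytes + " bytes and spilling to disk is disabled");
        }
    }
}
{code}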



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit on for deletes

2019-03-06 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4900:

Summary: Modify MAX_MUTATION_SIZE_EXCEEDED and 
MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning 
autocommit on for deletes  (was: Modify MAX_MUTATION_SIZE_EXCEEDED and 
MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning 
autocommit off for deletes)
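
For context, a hedged JDBC sketch of the recommendation the reworded message would point to: with auto-commit on, Phoenix can execute a large DELETE on the server side instead of buffering every mutation on the client, which is what trips the mutation-size limits. The table and column names are illustrative.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AutoCommitDeleteSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // With auto-commit enabled, the DELETE is not accumulated in the client-side
            // mutation buffer, avoiding MAX_MUTATION_SIZE(_BYTES)_EXCEEDED.
            conn.setAutoCommit(true);
            try (Statement stmt = conn.createStatement()) {
                // MY_TABLE and CREATED_DATE are illustrative names.
                stmt.executeUpdate("DELETE FROM MY_TABLE WHERE CREATED_DATE < CURRENT_DATE() - 90");
            }
        }
    }
}
{code}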

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit on for deletes
> ---
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4900-4.x-HBase-1.4.patch, PHOENIX-4900.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-06 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172-v2.patch)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5172.patch, PHOENIX-5172.patch-v1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> # Add retry logic for getting the connection URL (a hedged sketch follows below)
>  # Remove assigning schema_name to null
>  # Add more logging
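
A hedged sketch of the retry item above; the method name, attempt count, and backoff are illustrative and not taken from the attached patch.

{code:java}
public class ConnectionUrlRetrySketch {
    static String getConnectionUrlWithRetries(int maxAttempts) throws InterruptedException {
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return lookupConnectionUrl(); // hypothetical helper resolving the PQS URL
            } catch (Exception e) {
                lastFailure = e;
                System.err.println("Attempt " + attempt + " of " + maxAttempts
                    + " to get the connection URL failed: " + e.getMessage());
                Thread.sleep(1000L * attempt); // simple linear backoff
            }
        }
        throw new RuntimeException("Could not resolve the connection URL after "
            + maxAttempts + " attempts", lastFailure);
    }

    private static String lookupConnectionUrl() {
        // Placeholder for the canary tool's actual URL resolution logic.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}
{code}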



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5174) Spin up mini cluster for queryserver canary tool tests

2019-03-06 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5174.
-
Resolution: Not A Problem

> Spin up mini cluster for queryserver canary tool tests
> --
>
> Key: PHOENIX-5174
> URL: https://issues.apache.org/jira/browse/PHOENIX-5174
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4315) function Greatest/Least

2019-03-06 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4315:
---
Attachment: PHOENIX-4315.patch

> function Greatest/Least
> ---
>
> Key: PHOENIX-4315
> URL: https://issues.apache.org/jira/browse/PHOENIX-4315
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Priority: Major
> Attachments: PHOENIX-4315.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Resolve as the greatest value among a collection of projections.
> e.g.,
> Select greatest(A, B) from table  
> Select greatest(1,2) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4315) function Greatest/Least

2019-03-06 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan reassigned PHOENIX-4315:
--

Assignee: Xinyi Yan

Currently, COALESCE supports only 2 arguments.

Following that precedent, GREATEST(expr1, expr2) is supported but
GREATEST(expr1, expr2, expr3, ...) is not. I have only implemented GREATEST for
now; if variable-length argument lists are needed by the open source community,
we can do that optimization later.

If everyone is OK with this approach, I will create another patch for the LEAST
feature.

:)
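
A hedged JDBC usage sketch, assuming the two-argument GREATEST from the attached patch is installed; the connection URL is illustrative.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GreatestUsageSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // Two-argument form only, matching the scope described above.
             ResultSet rs = stmt.executeQuery("SELECT GREATEST(1, 2)")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // expected: 2 under the proposed semantics
            }
        }
    }
}
{code}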

> function Greatest/Least
> ---
>
> Key: PHOENIX-4315
> URL: https://issues.apache.org/jira/browse/PHOENIX-4315
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4315.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Resolve as the greatest value among a collection of projections.
> e.g.,
> Select greatest(A, B) from table  
> Select greatest(1,2) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5179) empower/add more DateType related functions

2019-03-06 Thread Xinyi Yan (JIRA)
Xinyi Yan created PHOENIX-5179:
--

 Summary: empower/add more DateType related functions
 Key: PHOENIX-5179
 URL: https://issues.apache.org/jira/browse/PHOENIX-5179
 Project: Phoenix
  Issue Type: Improvement
Reporter: Xinyi Yan


In order to have a better user experience, we could offer a few useful 
syntactic sugar functions for customers, especially e-commerce users. They 
frequently use date ranges to fetch metrics such as GMV and unique buyers over 
the last few days or months. Many internal tools already provide convenient 
date wrappers on top of it, such as:

1. DATE_RANGE

{code:java}
select * from table where DATE_RANGE(date, '2019-03-01')
-- date is equal to or greater than the given date
select * from table where DATE_RANGE(date, '', '2019-03-01')
-- date is equal to or less than the given date
select * from table where DATE_RANGE(date, '2018-05-12', '2019-03-01')
-- in-range select
select * from table where DATE_RANGE(date, '2018-05-12', '2019-03-01', 'PST')
-- timezone option{code}
2. DATE_INTERVAL

{code:java}
SELECT * from table where DATE_INTERVAL(time, '-7d')
-- last 7 days
SELECT * from table where DATE_INTERVAL(time, '-1w')
-- last week
SELECT * from table where DATE_INTERVAL(time, '-1m')
-- last month{code}
3. DATE_ADD

{code:java}
select * from table where DATE_RANGE(date, '2019-03-01',
 DATE_ADD('2019-03-01', '7d')){code}
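
Until such sugar exists, a hedged JDBC sketch of how the DATE_INTERVAL '-7d' example could be expressed today with plain Phoenix date arithmetic; the table and column names and the connection URL are illustrative.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LastSevenDaysSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // Rough equivalent of the proposed DATE_INTERVAL(time, '-7d'):
             // rows from the last 7 days, using date-minus-days arithmetic.
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM ORDERS WHERE ORDER_TIME >= CURRENT_DATE() - 7")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}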


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)