[GitHub] incubator-hawq issue #1227: HAWQ-1448. Fixed postmaster process hung at recv...

2017-06-28 Thread liming01
Github user liming01 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1227
  
@edespino, if we could find the root cause of every hang during hawq stop and 
fix them all, then we would not need this PR. 

However, after a long investigation, these problems still have not been 
fixed, so I suggest applying this PR. It fixes all known hang problems during 
hawq stop; if a hang still occurs in some case, we should verify that 
interrupt processing is not disabled while the hanging function is called.

Thanks.




[jira] [Closed] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen closed HAWQ-1491.
---

> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.





[jira] [Resolved] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen resolved HAWQ-1491.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

PR merged; resolving and closing.

> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.





[jira] [Resolved] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen resolved HAWQ-1435.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

PR merged; resolving and closing.

> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 





[jira] [Closed] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen closed HAWQ-1435.
---

> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.3.0.0-incubating
>
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 





[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067457#comment-16067457
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/126


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.





[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066931#comment-16066931
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124606922
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -565,6 +577,44 @@ In the following example, you will create a Hive table 
stored in ORC format and
 Time: 425.416 ms
 ```
 
+### Example: Using the HiveVectorizedORC 
Profile
+
+In the following example, you will use the `HiveVectorizedORC` profile to 
query the `sales_info_ORC` Hive table you created in the previous example.
+
+**Note**: The `HiveVectorizedORC` profile does not support the timestamp 
data type and complex types.
--- End diff --

Just to avoid any potential confusion, let's change this to "**or** 
complex types."
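
For readers following along, the usage being documented looks roughly like 
the sketch below. It queries the `sales_info_ORC` table named in the diff; 
the column list, host, and port are assumptions for illustration, not the 
final text of the docs page.

``` sql
-- Hypothetical sketch: reading the Hive table sales_info_ORC through the
-- HiveVectorizedORC profile. Host, port, and columns are assumptions.
CREATE EXTERNAL TABLE salesinfo_hivevectorc (location text, month text,
                                             num_orders int, total_sales float8)
LOCATION ('pxf://namenode:51200/default.sales_info_ORC?PROFILE=HiveVectorizedORC')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');

SELECT location, total_sales FROM salesinfo_hivevectorc;
```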


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.





[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066788#comment-16066788
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124581749
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF 
JDBC plug-in reads data stored in SQL databases including MySQL, ORACLE, 
PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added 
to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the 
JARS via the Ambari UI. If you manage your cluster from the command line, edit 
the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports the single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing 
external SQL database tables you access via JDBC: 
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<jdbc-schema-name>.<jdbc-database-name>.<jdbc-table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|----------|-------|
+| \<column_name\> | The name of the PXF external table column. The PXF \<column_name\> must exactly match the \<column_name\> used in the external SQL table. |
+| \<data_type\> | The data type of the PXF external table column. The PXF \<data_type\> must be equivalent to the data type used for \<column_name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<jdbc-schema-name\> | The schema name. The default schema name is `default`. |
+| \<jdbc-database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<jdbc-table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
--- End diff --

Extremely minor, but "Note:" here should be bolded:  **Note:**
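
For context, a concrete instance of the syntax above might look like the 
following sketch. Every identifier, host, and credential here is invented 
for illustration; only the overall shape comes from the diff.

``` sql
-- Hypothetical sketch: a readable PXF external table over a MySQL table
-- default.mysqltestdb.orders. All names and credentials are invented.
CREATE EXTERNAL TABLE pxf_jdbc_orders (id int, amount float8)
LOCATION ('pxf://namenode:51200/default.mysqltestdb.orders?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/mysqltestdb&USER=hawquser&PASS=hawqpass')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```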


> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 





[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066790#comment-16066790
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582363
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF 
JDBC plug-in reads data stored in SQL databases including MySQL, ORACLE, 
PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added 
to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the 
JARS via the Ambari UI. If you manage your cluster from the command line, edit 
the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports the single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing 
external SQL database tables you access via JDBC: 
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<jdbc-schema-name>.<jdbc-database-name>.<jdbc-table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|----------|-------|
+| \<column_name\> | The name of the PXF external table column. The PXF \<column_name\> must exactly match the \<column_name\> used in the external SQL table. |
+| \<data_type\> | The data type of the PXF external table column. The PXF \<data_type\> must be equivalent to the data type used for \<column_name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<jdbc-schema-name\> | The schema name. The default schema name is `default`. |
+| \<jdbc-database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<jdbc-table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
--- End diff --

Aren't most of these custom options required in order to set up a JDBC 
connection?  If so, the docs should just indicate that up front, as otherwise 
it seems like these are all optional.
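
To make the concern concrete, here is the same hypothetical table as in the 
earlier sketch, annotated along the lines the review suggests. All values 
remain invented; whether the connection options are strictly required is 
exactly the open question, so the comments reflect the reviewer's reading, 
not confirmed behavior.

``` sql
-- Hypothetical sketch: JDBC_DRIVER, DB_URL, USER, and PASS establish the
-- connection (the options the review reads as effectively required);
-- PARTITION_BY, RANGE, and INTERVAL stay truly optional.
CREATE EXTERNAL TABLE pxf_jdbc_orders_required_opts (id int, amount float8)
LOCATION ('pxf://namenode:51200/default.mysqltestdb.orders?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/mysqltestdb&USER=hawquser&PASS=hawqpass')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```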


> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 





[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066789#comment-16066789
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582946
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF 
JDBC plug-in reads data stored in SQL databases including MySQL, ORACLE, 
PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added 
to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the 
JARS via the Ambari UI. If you manage your cluster from the command line, edit 
the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports the single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing 
external SQL database tables you access via JDBC: 
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<jdbc-schema-name>.<jdbc-database-name>.<jdbc-table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|----------|-------|
+| \<column_name\> | The name of the PXF external table column. The PXF \<column_name\> must exactly match the \<column_name\> used in the external SQL table. |
+| \<data_type\> | The data type of the PXF external table column. The PXF \<data_type\> must be equivalent to the data type used for \<column_name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<jdbc-schema-name\> | The schema name. The default schema name is `default`. |
+| \<jdbc-database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<jdbc-table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
+
+| Option Name | Description |
+|-------------|-------------|
+| JDBC_DRIVER | The JDBC driver class name. |
+| DB_URL | The URL to the database; includes the hostname, port, and database name. |
+| USER | The database user name. |
+| PASS | The database password for USER. |
+| PARTITION_BY | The partition column, \<column-name\>:\<column-type\>. The JDBC plug-in supports `date`, `int`, and `enum` \<column-type\>s. Use the `yyyy-MM-dd` format for the `date` \<column-type\>. A null `PARTITION_BY` defaults to a single fragment. |
+| RANGE | The query range, \<start-value\>[:\<end-value\>]. \<end-value\> may be empty for an `int` \<column-type\>. |
--- End diff --

Should probably clarify that RANGE and INTERVAL are only used with 
PARTITION_BY?
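
A sketch of the pairing the comment asks to clarify: PARTITION_BY names the 
column, while RANGE and INTERVAL bound it and slice it into fragments. The 
partitioning values follow the pxf-jdbc README's `cdate:date` example; all 
table names, hosts, and credentials below are invented for illustration.

``` sql
-- Hypothetical sketch: RANGE and INTERVAL used together with PARTITION_BY
-- to split reads of a date column into one-month fragments.
CREATE EXTERNAL TABLE pxf_jdbc_sales_part (id int, cdate date, amt float8)
LOCATION ('pxf://namenode:51200/default.mysqltestdb.sales?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/mysqltestdb&USER=hawquser&PASS=hawqpass&PARTITION_BY=cdate:date&RANGE=2008-01-01:2010-01-01&INTERVAL=1:month')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```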


> docs - add usage info for pxf jdbc plug-in
> 

[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066791#comment-16066791
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124583252
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF 
JDBC plug-in reads data stored in SQL databases including MySQL, ORACLE, 
PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added 
to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the 
JARS via the Ambari UI. If you manage your cluster from the command line, edit 
the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports the single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing 
external SQL database tables you access via JDBC: 
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<jdbc-schema-name>.<jdbc-database-name>.<jdbc-table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|----------|-------|
+| \<column_name\> | The name of the PXF external table column. The PXF \<column_name\> must exactly match the \<column_name\> used in the external SQL table. |
+| \<data_type\> | The data type of the PXF external table column. The PXF \<data_type\> must be equivalent to the data type used for \<column_name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<jdbc-schema-name\> | The schema name. The default schema name is `default`. |
+| \<jdbc-database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<jdbc-table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
+
+| Option Name | Description |
+|-------------|-------------|
+| JDBC_DRIVER | The JDBC driver class name. |
+| DB_URL | The URL to the database; includes the hostname, port, and database name. |
+| USER | The database user name. |
+| PASS | The database password for USER. |
+| PARTITION_BY | The partition column, \<column-name\>:\<column-type\>. The JDBC plug-in supports `date`, `int`, and `enum` \<column-type\>s. Use the `yyyy-MM-dd` format for the `date` \<column-type\>. A null `PARTITION_BY` defaults to a single fragment. |
+| RANGE | The query range, \<start-value\>[:\<end-value\>]. \<end-value\> may be empty for an `int` \<column-type\>. |
+| INTERVAL | The interval, \<interval-num\>[:\<interval-unit\>], of one fragment. `INTERVAL` may be empty for an `enum` \<column-type\>. \<interval-unit\> may be