Hi Niranda,
The INSERT INTO syntax is available, but I can't insert arbitrary values
without using a SELECT. This is for testing purposes.
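To illustrate what I mean, here's a rough sketch (the table and values are just
placeholders based on the StateUsage sample from earlier in this thread):

-- What I'd like to write for quick test data, but which isn't accepted:
-- INSERT INTO TABLE StateUsage VALUES ('CA', 1, 0.42)

-- The only route I've found is to go through a SELECT, e.g. selecting literals
-- (if a bare SELECT of literals is rejected, select them FROM any existing table):
INSERT INTO TABLE StateUsage SELECT 'CA', 1, 0.42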
Cheers~
On Fri, Jun 17, 2016 at 2:03 AM, Niranda Perera wrote:
> Hi Dulitha,
>
> This is a new connector only. It does not affect Spark SQL queries (apart
> from the options you specify in the CREATE TEMPORARY TABLE queries).
Hi Dulitha,
This is a new connector only. It does not affect Spark SQL queries (apart
from the options you specify in the CREATE TEMPORARY TABLE queries). INSERT
INTO was available in the previous CarbonJDBC connector as well.
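For completeness, a rough sketch of the supported pattern (StateUsageStaging is
just a placeholder name for another registered table):

-- INSERT INTO goes through a SELECT; rows are read from the source table and
-- written out through the CarbonJDBC relation.
INSERT INTO TABLE StateUsage SELECT us_state, polarity, usage_avg FROM StateUsageStaging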
Best
On Fri, Jun 17, 2016 at 12:47 AM, Dulitha Wijewantha
wrote:
Hi Gokul,
Will this allow us to perform INSERT INTO queries with sample data (not
from a table)? This is useful in the DEV phase.
Cheers~
On Tue, Jun 14, 2016 at 7:38 AM, Gokul Balakrishnan wrote:
> Hi product analytics leads,
>
> Please make sure that the configuration file spark-jdbc-config.xml is added
> to the product-analytics packs, especially if you're using the CarbonJDBC
> provider.
Hi product analytics leads,
Please make sure that the configuration file spark-jdbc-config.xml is added
to the product-analytics packs, especially if you're using the CarbonJDBC
provider. An example commit can be found at [1].
[1]
https://github.com/wso2/product-das/commit/4bdbf68833bd2bc8a20549eaf7
Hi Anjana, Nirmal,
The schema being mandatory is an architectural decision we've had to take.
To go into a bit more detail about the reasons: Spark requires its own
catalyst schema to be constructed when a relation is being created. In the
previous implementation, this was achieved through dropp
Hi,
On Mon, Jun 13, 2016 at 12:23 PM, Nirmal Fernando wrote:
>
> The "schema" option is required, and is used to specify the schema to be
>> utilised throughout the temporary table's lifetime. Here, the field types
>> used for the schema match what we have for the CarbonAnalytics provider
>> (i.
Hi Gokul,
On Fri, Jun 10, 2016 at 4:11 PM, Gokul Balakrishnan wrote:
> Hi Gihan/Inosh,
>
> A sample statement for creating a temporary table using this provider
> would look like the following:
>
> CREATE TEMPORARY TABLE StateUsage using CarbonJDBC options (dataSource
> "MY_DATASOURCE", tableNam
Hi Gihan/Inosh,
A sample statement for creating a temporary table using this provider would
look like the following:
CREATE TEMPORARY TABLE StateUsage USING CarbonJDBC OPTIONS (dataSource
"MY_DATASOURCE", tableName "state_usage", schema "us_state STRING -i,
polarity INTEGER, usage_avg FLOAT", prim
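Once the table is registered, standard Spark SQL can be run against it. A couple
of illustrative queries (the filters and aliases here are made up for the example):

-- Project and filter through the CarbonJDBC relation.
SELECT us_state, usage_avg FROM StateUsage WHERE polarity > 0

-- Aggregate in Spark over data read through the JDBC datasource.
SELECT us_state, AVG(usage_avg) AS avg_usage FROM StateUsage GROUP BY us_state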
Hi Gokul,
On Fri, Jun 10, 2016 at 2:08 PM, Gokul Balakrishnan wrote:
> Hi all,
>
> In DAS 3.0.x, for interacting with relational databases directly from
> Spark (i.e. bypassing the data access layer), we have hitherto been using
> the JDBC connector that comes directly with Apache Spark (with a
Hi Gokul,
Can you please share a couple of sample Spark SQL queries that use
this updated CarbonJDBC connector?
Regards,
Gihan
On Fri, Jun 10, 2016 at 2:08 PM, Gokul Balakrishnan wrote:
> Hi all,
>
> In DAS 3.0.x, for interacting with relational databases directly from
> Spark (i.e. bypassing
Hi all,
In DAS 3.0.x, for interacting with relational databases directly from Spark
(i.e. bypassing the data access layer), we have hitherto been using the
JDBC connector that comes directly with Apache Spark (with added support
for Carbon datasources).
This connector has contained many issues th