This definitely looks like a bug. Could you open a JIRA and share as much
information as possible about the structure of the CSV file and the number
of records?
On Tue, Jan 26, 2016 at 7:38 PM, Matt wrote:
> The CTAS fails with:
>
> ~~~
> Error: SYSTEM ERROR: IllegalArgumentException: lengt
The CTAS fails with:
~~~
Error: SYSTEM ERROR: IllegalArgumentException: length: -260 (expected: >= 0)
Fragment 1:2
[Error Id: 1807615e-4385-4f85-8402-5900aaa568e9 on es07:31010]
(java.lang.IllegalArgumentException) length: -260 (expected: >= 0)
io.netty.buffer.AbstractByteBuf.chec
~~~
Seems to be an issue with mysql.performance_schema. Filed JIRA-4312.
--Andries
> On Jan 26, 2016, at 10:01 AM, Magnus Pierre wrote:
>
> Ok, then I was wrong. :)
> I leave it to the experts to figure out this one.
>
> Regards,
> Magnus
>> On 26 Jan 2016, at 18:55, Andries Engelbrecht wrote:
>
Hi,
I got an email from the Tachyon team a while back where they informed me of
this change.
I think you should visit their Google group and check the status of this
change.
Regards,
-Stefán
On Tue, Jan 26, 2016 at 9:28 PM, Stephan Kölle wrote:
> I'm working with tachyon 0.8.2 (November 11,
I'm working with tachyon 0.8.2 (November 11, 2015) - current
- stephan
> Hi,
>
> I think the latest version of Tachyon uses a transparent storage structure.
>
> Regards,
> -Stefán
>
>
> On Tue, Jan 26, 2016 at 10:05 AM, Stephan Kölle wrote:
>> Querying JSON data stored on aws s3 with apache drill
It's an internal buffer index. Can you try enabling verbose errors and running
the query again? That should give us more details about the error.
You can enable verbose errors by running the following before the select *:
alter session set `exec.errors.verbose`=true;
thanks
On Tue, Jan 26, 20
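The suggestion above, as a minimal sketch of the session (the CSV path is the one mentioned elsewhere in this thread; adjust for your environment):

```sql
-- Enable verbose errors for this session only, then re-run the failing query
ALTER SESSION SET `exec.errors.verbose` = true;

-- Re-running the query should now include a fuller stack trace in the error
SELECT * FROM dfs.`/csv/customer/hourly/customer_20151017.csv`;
```

Note this is a session-level option, so it only affects the current sqlline connection.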
Putting the "select * from
`/csv/customer/hourly/customer_20151017.csv`;" in a local .sql file,
and executing it with sqlline > /dev/null (to avoid a ton of scrolling)
results in:
~~~
index: 418719, length: 2 (expected: range(0, 418719))
~~~
sqlline -u ... -q 'SELECT * FROM dfs.`/path/to/files/file.csv` LIMIT 10'
seems to emit a list of files in the local path (pwd), along with a
parsing error.
Putting the query in a file and passing that file name to sqlline or
using an explicit column list runs the query as expected.
Is this a
Ok, then I was wrong. :)
I leave it to the experts to figure out this one.
Regards,
Magnus
> On 26 Jan 2016, at 18:55, Andries Engelbrecht wrote:
>
> MySQL shows the following tables in INFORMATION_SCHEMA
>
> - SCHEMATA
> - TABLES
>
> But when I query COLUMNS it works for all mysql tables
https://plus.google.com/hangouts/_/dremio.com/drillhangout?authuser=0
--
Jacques Nadeau
CTO and Co-Founder, Dremio
MySQL shows the following tables in INFORMATION_SCHEMA
- SCHEMATA
- TABLES
But when I query COLUMNS it works for all mysql tables, apart from when I try
to query mysql.performance_schema.
0: jdbc:drill:> select * from INFORMATION_SCHEMA.`COLUMNS` where TABLE_SCHEMA =
'mysql.information_
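One way to narrow down which plugin schema trips the failure is to query the metadata one schema at a time. A minimal sketch (the LIKE pattern is illustrative, not from the thread):

```sql
-- Probe Drill's INFORMATION_SCHEMA per plugin schema to isolate which one
-- triggers the error; JDBC-backed schemas surface as plugin.schema names
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.`TABLES`
WHERE TABLE_SCHEMA LIKE 'mysql.%';
```

If the failure only appears when the pattern matches mysql.performance_schema, that would support the bug-vs-configuration diagnosis above.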
Does a select * on the same data also fail ?
On Tue, Jan 26, 2016 at 9:44 AM, Matt wrote:
> Getting some errors when attempting to create Parquet files from CSV data,
> and trying to determine if it is due to the format of the source data.
>
> It's a fairly simple format of
> "datetime,key,key,ke
Do you have it in the drill info_schema? If so it would prove my guess.
Regards,
Magnus
> On 26 Jan 2016, at 18:43, Andries Engelbrecht wrote:
>
> Thx for the input. I'll file a JIRA as it seems to be a bug vs a
> configuration issue.
>
> In my case it seems to complain about PERFORMANCE_SCHEMA in
Getting some errors when attempting to create Parquet files from CSV
data, and trying to determine if it is due to the format of the source
data.
It's a fairly simple format of
"datetime,key,key,key,numeric,numeric,numeric, ..." with 32 of those
numeric columns in total.
The source data does
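For context, a CSV-to-Parquet CTAS along these lines typically looks like the following. This is a hypothetical sketch: the table name, workspace, timestamp format, and column positions are invented for illustration, and headerless CSV surfaces its fields through the `columns` array, which must be cast explicitly:

```sql
-- Sketch only: names, paths, and column positions are placeholders
ALTER SESSION SET `store.format` = 'parquet';

CREATE TABLE dfs.tmp.`customer_parquet` AS
SELECT
  TO_TIMESTAMP(columns[0], 'yyyy-MM-dd HH:mm:ss') AS event_time,
  columns[1]                                      AS key1,
  CAST(columns[4] AS DOUBLE)                      AS metric1
FROM dfs.`/csv/customer/hourly/customer_20151017.csv`;
```

With explicit casts like these, a failure on one column can help pinpoint whether the error comes from the source data or from the Parquet write path.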
Thx for the input. I'll file a JIRA as it seems to be a bug vs a configuration
issue.
In my case it seems to complain about PERFORMANCE_SCHEMA in MySQL not actually
the info_schema.
--Andries
> On Jan 26, 2016, at 9:35 AM, Magnus Pierre wrote:
>
> I’ve seen it as well. My unqualified guess i
I’ve seen it as well. My unqualified guess is that the engine gets confused
by multiple databases named INFORMATION_SCHEMA, which makes it combine
metadata from two different engines: it gets the metadata of a table from one and
tries to use it on the other…
Regards,
Magnus
> On 26 Jan 2016, at
Hi Andries, I also cannot use INFORMATION_SCHEMA.SCHEMA when my MSSQL JDBC
driver is enabled; as a consequence I need to disable this storage plugin to
connect Tableau with Drill through the ODBC driver.
Boris
On Tuesday, 26 January 2016 at 17:26, Andries Engelbrecht
wrote:
Anyone run into
Anyone run into issues with Drill INFORMATION_SCHEMA queries when using the
JDBC plugin with MySQL?
In my case some tools interrogate Drill's metadata, which then fails when
the JDBC plugin with MySQL is enabled.
Using Drill 1.4 and MySQL 5.1.73
{query}
SELECT DISTINCT TABLE_SCHEMA as NAME_
Hi,
I think the latest version of Tachyon uses a transparent storage structure.
Regards,
-Stefán
On Tue, Jan 26, 2016 at 10:05 AM, Stephan Kölle wrote:
> Querying JSON data stored on aws s3 with apache drill works awesome, but
> drill fetches the data fresh from s3 for every query.
>
> How t
Querying JSON data stored on aws s3 with apache drill works awesome, but
drill fetches the data fresh from s3 for every query.
How to tell drill to keep the data in memory for the next query?
I got tachyon to work with drill (with the information available on this
list) about 90%, but "SHOW FILE