Hello,
I am currently running Apache Drill on a 20-node cluster and have run into an 
error that I was hoping you could help me with.

I am attempting to run the following query to create a Parquet table in a new 
S3 bucket from another table that is in TSV format:
create table s3_output.tmp.`<output file>` as select
columns[0], columns[1], columns[2], columns[3], columns[4], columns[5], 
columns[6], columns[7], columns[8], columns[9],
columns[10], columns[11], columns[12], columns[13], columns[14], columns[15], 
columns[16], columns[17], columns[18], columns[19],
columns[20], columns[21], columns[22], columns[23], columns[24], columns[25], 
columns[26], columns[27], columns[28], columns[29],
columns[30], columns[31], columns[32], columns[33], columns[34], columns[35], 
columns[36], columns[37], columns[38], columns[39],
columns[40], columns[41], columns[42], columns[43], columns[44], columns[45], 
columns[46], columns[47], columns[48], columns[49],
columns[50], columns[51], columns[52], columns[53], columns[54], columns[55], 
columns[56], columns[57], columns[58], columns[59],
columns[60], columns[61], columns[62], columns[63], columns[64], columns[65], 
columns[66], columns[67], columns[68], columns[69],
columns[70], columns[71], columns[72], columns[73], columns[74], columns[75], 
columns[76], columns[77], columns[78], columns[79],
columns[80], columns[81], columns[82], columns[83], columns[84], columns[85], 
columns[86], columns[87], columns[88], columns[89],
columns[90], columns[91], columns[92], columns[93], columns[94], columns[95], 
columns[96], columns[97], columns[98], columns[99],
columns[100], columns[101], columns[102], columns[103], columns[104], 
columns[105], columns[106], columns[107], columns[108], columns[109],
columns[110], columns[111], columns[112], columns[113], columns[114], 
columns[115], columns[116], columns[117], columns[118], columns[119],
columns[120], columns[121], columns[122], columns[123], columns[124], 
columns[125], columns[126], columns[127], columns[128], columns[129],
columns[130], columns[131], columns[132], columns[133], columns[134], 
columns[135], columns[136], columns[137], columns[138], columns[139],
columns[140], columns[141], columns[142], columns[143], columns[144], 
columns[145], columns[146], columns[147], columns[148], columns[149],
columns[150], columns[151], columns[152], columns[153], columns[154], 
columns[155], columns[156], columns[157], columns[158], columns[159],
columns[160], columns[161], columns[162], columns[163], columns[164], 
columns[165], columns[166], columns[167], columns[168], columns[169],
columns[170], columns[171], columns[172], columns[173]
from s3input.`<input path>*.gz`;
This is the error output I get when running the full CTAS query:
Error: DATA_READ ERROR: Error processing input: , line=2026, char=2449781. 
Content parsed: [ ]

Failure while reading file s3a://<input bucket/file>.gz. Happened at or shortly 
before byte position 329719.
Fragment 1:19

[Error Id: fe289e19-c7b7-4739-9960-c15b8a62af3b on <node 6>:31010] 
(state=,code=0)
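To try to reproduce this against just the failing file, I was planning to run 
something along these lines (placeholder path as in the error message above):

select columns[0]
from s3input.`<input bucket/file>.gz`
limit 2026;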
Do you have any idea how I can go about solving this issue?
Thanks for any help!

Tanmay Solanki
