You can use LOAD DATA to move already existing Parquet files into the
destination table from another location in HDFS.
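
For example, a minimal sketch assuming the Parquet files already sit
in an HDFS staging directory (the path and table name here are
hypothetical):

  -- Moves (not copies) the files into the table's data directory;
  -- the files must already match the table's schema and format.
  LOAD DATA INPATH '/user/etl/staging' INTO TABLE my_parquet_table;

Note that LOAD DATA moves files without converting them, so this only
works when the source files are already valid Parquet.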

On 12 October 2017 at 18:44, sky <x_h...@163.com> wrote:
> From the Impala documentation, Parquet supports the LOAD DATA
> operation. How is it supported?
>
> At 2017-10-13 00:30:12, "Jeszy" <jes...@gmail.com> wrote:
>>See the docs on LOAD DATA:
>>http://impala.apache.org/docs/build/html/topics/impala_load_data.html
>>
>>"In the interest of speed, only limited error checking is done. If the
>>loaded files have the wrong file format, different columns than the
>>destination table, or other kind of mismatch, Impala does not raise
>>any error for the LOAD DATA statement. Querying the table afterward
>>could produce a runtime error or unexpected results. Currently, the
>>only checking the LOAD DATA statement does is to avoid mixing together
>>uncompressed and LZO-compressed text files in the same table."
>>
>>To reload CSV data as Parquet using Impala, you'd have to create a
>>table for the CSV data, then do an 'insert into [parquet table] select
>>[...] from [csv_table]'.
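>>
>>A minimal sketch of that, with all table names, columns, and paths
>>made up for illustration:
>>
>>  -- Text table pointing at the directory that holds the CSV files.
>>  CREATE EXTERNAL TABLE csv_table (id INT, name STRING)
>>  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
>>  LOCATION '/user/etl/csv_input';
>>
>>  -- Parquet table with the same schema; the INSERT ... SELECT
>>  -- rewrites the text rows as Parquet files.
>>  CREATE TABLE parquet_table (id INT, name STRING) STORED AS PARQUET;
>>  INSERT INTO parquet_table SELECT id, name FROM csv_table;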
>>
>>HTH
>>
>>On 12 October 2017 at 07:58, sky <x_h...@163.com> wrote:
>>> Hi all,
>>>     How does a Parquet table handle LOAD DATA operations? How can a CSV
>>> file be imported into a Parquet table?
