Hi,

I want to load a CSV file that contains more than 10,000,000 rows of data into Apache
Kylin. My Kylin (version 3.0.2) is installed on an AWS EMR cluster, and the CSV file
is stored in an S3 bucket. I created the Hive table in the EMR cluster and was able
to count all the data by querying 'select count(*) from my_table;'. But when I tried
to run other, more specific queries, no results were shown, only "OK" and the
running time.

Then, I tried to load the Hive table into the Kylin UI. It loaded successfully and
I was able to create a cube for it. However, the cube is 0 GB, even though I can
see all the column names, data types, and the file size in the data source.

How can I load the data into the Kylin UI?

Thank you
