What steps are involved when reading the ORC format in Spark-SQL?
I mean, reading a CSV file usually just loads the dataset directly into
memory.

But I feel like Spark-SQL involves some extra steps when reading the ORC format.
For example, does it have to create a table and then insert the dataset into
that table? Are those steps part of the reading process in Spark-SQL? See the sketch below for what I mean.
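
To make the question concrete, here is a rough sketch of the two routes I have in mind. The paths, table name, and app name are just placeholders I made up for illustration, not anything from a real setup:

import org.apache.spark.sql.SparkSession

object OrcReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orc-read-sketch")
      .master("local[*]")            // placeholder master, just for a local try-out
      .getOrCreate()

    // Route 1: read the ORC files directly, the same way a CSV file is read.
    val direct = spark.read.orc("/data/events_orc")   // hypothetical path
    direct.show()

    // Route 2: register a table over the same ORC data and query it with SQL.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS events
        |USING ORC
        |LOCATION '/data/events_orc'""".stripMargin)  // hypothetical table name and path
    spark.sql("SELECT * FROM events LIMIT 10").show()

    spark.stop()
  }
}

Is route 1 all there is to "reading" ORC, or does Spark-SQL internally do something like route 2 (create a table and insert/register the data) as part of the read?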

