Hello Spark Users,

We are receiving too many small files, about 3 million in total. Just reading
them with spark.read is taking a long time, and the job does not proceed further.

Is there any way to speed this up so the job can proceed?

Regards
Sachit Murarka
