Yes. The JSON files compressed by Flume or Spark work well with Spark, but
the JSON files I compressed myself cannot be read by Spark due to a codec
problem. It seems Spark can only read files compressed with Hadoop's snappy
codec (https://code.google.com/archive/p/hadoop-snappy/).
Regards,
Junfeng Chen
I made some snappy-compressed JSON files with the standard snappy codec (
https://github.com/xerial/snappy-java ), but Spark does not seem to read
them correctly.
So how can I make existing snappy files recognized by Spark? Are there any
tools to convert them?
Thanks!
Regards,
Junfeng Chen