I think you bumped the wrong thread.

As I mentioned in the other thread:

saveAsHadoopFile only applies compression when a codec is supplied
explicitly, and it does not seem to respect the global Hadoop compression
properties.
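To illustrate, passing the codec class as an argument does compress the
output. A rough sketch (untested; assuming "pairs" is an RDD[(Text, Text)]
and the path is a placeholder):

    import org.apache.hadoop.io.Text
    import org.apache.hadoop.io.compress.GzipCodec
    import org.apache.hadoop.mapred.TextOutputFormat

    // Compression is applied when the codec is passed explicitly.
    pairs.saveAsHadoopFile("/tmp/out-gz",
      classOf[Text], classOf[Text],
      classOf[TextOutputFormat[Text, Text]],
      classOf[GzipCodec])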

I'm not sure whether this is a feature or a bug in Spark.

If this is a feature, the docs should make it clear that the
mapred.output.compression.* properties are ignored by these save methods.
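This is the variant I would have expected to work but which appears not to
(again just a sketch, same placeholder names as above):

    // Setting the global Hadoop properties on the context seems to
    // have no effect on the output of saveAsHadoopFile.
    sc.hadoopConfiguration.set("mapred.output.compress", "true")
    sc.hadoopConfiguration.set("mapred.output.compression.codec",
      "org.apache.hadoop.io.compress.GzipCodec")

    // Output ends up uncompressed despite the settings above.
    pairs.saveAsHadoopFile[TextOutputFormat[Text, Text]]("/tmp/out-plain")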


On Sat, Mar 22, 2014 at 12:20 AM, deenar.toraskar <deenar.toras...@db.com> wrote:

> Matei
>
> It turns out that saveAsObjectFile(), saveAsSequenceFile() and
> saveAsHadoopFile() currently do not pick up the Hadoop settings, as Aureliano
> found out in this post
>
>
> http://apache-spark-user-list.1001560.n3.nabble.com/Turning-kryo-on-does-not-decrease-binary-output-tp212p249.html
>
> Deenar
