Hi:
        After updating Spark to version 1.1.0, I ran into a Snappy error, which I
posted about here:
http://apache-spark-user-list.1001560.n3.nabble.com/Update-gcc-version-Still-snappy-error-tt15137.html
I worked around it with:

    conf.set("spark.io.compression.codec", "org.apache.spark.io.LZ4CompressionCodec")
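For reference, this is roughly how I apply that setting (the app name below is
just a placeholder, not my real configuration):

    import org.apache.spark.{SparkConf, SparkContext}

    // Switch Spark's internal compression from the default Snappy to LZ4.
    val conf = new SparkConf()
      .setAppName("als-svd-job")  // placeholder app name
      .set("spark.io.compression.codec", "org.apache.spark.io.LZ4CompressionCodec")
    val sc = new SparkContext(conf)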
I run the ALS and SVD algorithms on a huge 5,500,000 x 5,000 sparse matrix, and
I want to save some of the results in a binary format to save space. But I found
that 'saveAsSequenceFile' hits the same Snappy problem, and I don't know how to
avoid it this time. There is something wrong in my environment that I just
cannot fix. Can anyone give me some idea about this problem, or a way to avoid
using Snappy when calling saveAsSequenceFile?
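For example, would something like the sketch below be a reasonable way to force
a non-Snappy codec on the output? I have not verified it on 1.1.0; the optional
codec argument to saveAsSequenceFile, the sample RDD, and the output path are
just my assumptions/placeholders.

    import org.apache.hadoop.io.compress.BZip2Codec
    import org.apache.spark.SparkContext._  // implicits for saveAsSequenceFile on (K, V) RDDs

    // Placeholder standing in for the real ALS/SVD result RDD.
    val result = sc.parallelize(Seq((1L, 0.5), (2L, 1.5)))

    // Pass an explicit Hadoop codec so the sequence file is not written with Snappy.
    // BZip2Codec is pure Java, so it should not depend on the native Snappy library.
    result.saveAsSequenceFile("hdfs:///tmp/als-result", Some(classOf[BZip2Codec]))

Or is there a Hadoop-side setting I should be changing instead?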
    Thanks!


