Github user xerial commented on the issue:
https://github.com/apache/spark/pull/22967
Thank you for all the effort to make this happen!
Spark has been the last holdout before deprecating Scala 2.11.
After Spark 3.0, as an OSS contributor, we can stop maintaining
---
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/12074#issuecomment-203748806
Released snappy-java-1.1.2.4 with this fix. Thanks for letting me know.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/12074#issuecomment-203734165
@sitalkedia Sure. I'll do that.
---
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/12074#issuecomment-203702958
I have just deployed snappy-java-1.1.2.3 with this fix, which will be
synchronized to Maven Central soon.
---
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/12074#issuecomment-203687378
A reason snappy-java's SnappyInputStream uses Snappy.arrayCopy (a JNI method)
is to load the uncompressed data into primitive-type arrays (e.g., float[],
int[]), since
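To illustrate the point about primitive-type arrays, here is a minimal sketch of how snappy-java can compress and decompress a float[] directly, avoiding per-element boxing on the Java side. This assumes the org.xerial.snappy.Snappy API with a snappy-java jar on the classpath; it is an illustration, not code from this pull request.

```java
import org.xerial.snappy.Snappy;

public class SnappyPrimitiveArrayDemo {
    public static void main(String[] args) throws Exception {
        // Compress a float[] directly; the bytes are read out of the
        // primitive array natively rather than element by element in Java.
        float[] original = {1.0f, 2.5f, 3.25f, -4.75f};
        byte[] compressed = Snappy.compress(original);

        // Decompress straight back into a float[], again without an
        // intermediate byte-by-byte conversion on the Java side.
        float[] restored = Snappy.uncompressFloatArray(compressed);

        System.out.println(java.util.Arrays.equals(original, restored)); // true
    }
}
```
---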
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60331324
Please use snappy-java-1.1.1.5, which fixes the broken build.
---
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60343444
@JoshRosen If you have the stack trace for this error, please let me know; I
would like to look into it.
---