You might have better luck downloading the 2.4.X branch
On Tue, Mar 12, 2019 at 4:39 PM swastik mittal wrote:
> Then is MLlib in Spark compatible with Scala 2.12? Or can I change the
> Spark version from 3.0 to 2.3 or 2.4 in my local spark/master?
I think Scala 2.11 support was removed in spark3.0/master.
On Tue, Mar 12, 2019 at 4:26 PM swastik mittal wrote:
I am trying to build Spark using build/sbt package, after changing the
Scala version to 2.11 in pom.xml, because my application's jar files use
Scala 2.11. But building the Spark code gives an error in sql saying "A
method with a varargs annotation produces a forwarder method with the same
Hi all,
If you are interested in using kubernetes with Spark 2.2.3, the branch is
here:
https://github.com/puneetloya/spark/tree/spark-2.2.3-k8s-0.5.0
I just rebased Spark 2.2.3 onto
https://github.com/apache-spark-on-k8s/spark/tree/v2.2.0-kubernetes-0.5.0
and have tested it.
Thanks,
Puneet
--
We haven't seen many of these, but we have seen it a couple of times.
There is ongoing work under SPARK-26089 to address the issue we know about,
namely that we don't detect corruption in large shuffle blocks.
Do you believe the cases you have match that: does it appear to be
corruption in
We have seen this error before on 1.6, but since we upgraded to 2.1
two years ago we haven't seen it.
On Tue, Mar 12, 2019 at 2:19 AM wangfei wrote:
> Hi all,
> Non-deterministic FAILED_TO_UNCOMPRESS(5) or 'Stream is corrupted'
> errors
> may occur during shuffle read, described as
Hi all,
I need to overwrite data in a Hive table, and I use the following code to
do so:
df = sqlContext.sql(my-spark-sql-statement);
df.count
df.write.format("orc").mode("overwrite").saveAsTable("foo") // I also tried insertInto("foo")
The "df.count" shows that there are only 452 records in
Hello,
I have a quite specific use case: I want to use an MXNet neural-net model in a
distributed fashion to get predictions on a very large dataset. It is not
possible to broadcast the model directly because the underlying implementation
is not serializable. Instead the model has to be loaded
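A common pattern for this situation is to ship only the (serializable) loading instructions to the executors and load the model once per partition inside mapPartitions. A minimal sketch, where load_model and the model path are hypothetical stand-ins for the real MXNet loader:

```python
# Sketch only: use a non-serializable model in a distributed job by loading
# it per partition rather than broadcasting the model object itself.

def load_model(path):
    # Stand-in for the real MXNet loader (e.g. something like
    # mx.mod.Module.load(...)); here it just returns a dummy predictor.
    return lambda x: x * 2

def predict_partition(rows):
    # This function is what gets serialized and shipped; the model itself
    # is constructed on the executor, once per partition.
    model = load_model("/models/mymodel")
    for row in rows:
        yield model(row)

# With Spark this would be: predictions = rdd.mapPartitions(predict_partition)
# Simulated here on a plain Python iterator:
print(list(predict_partition([1, 2, 3])))  # → [2, 4, 6]
```

Loading per partition (rather than per record) amortizes the model-load cost across all rows in the partition.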
Hi all,
Non-deterministic FAILED_TO_UNCOMPRESS(5) or 'Stream is corrupted' errors
may occur during shuffle read, described as this
JIRA(https://issues.apache.org/jira/browse/SPARK-4105).
There has been no new comment in this JIRA for a long time. So, has
anyone seen these errors in