Github user bzz commented on the issue:
https://github.com/apache/zeppelin/pull/1689
Interesting, that is the first time I've seen it! AFAIK `spark-csv` has been
included in Spark since 2.0.
The failing profile is Spark 1.6 with **Scala 2.11**, but it fails on
fetching `spark-csv_2.10`, which seems to me like almost the right thing to happen
(to avoid runtime errors)...
That artifact itself seems fine; it is published on Central and Sonatype:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.3.0
Maybe we need to update `./dev/change_scala_version.sh` to take care of
this version literal as well and see if that helps?
Another way is to let the Maven `resource-plugin` replace it with the actual
version at build time, using something like `${scala.version}` in the Java code.
The default property value could be set to 2.10 and overridden through a
`SCALA_VERSION` env var, but this is not very IDE-friendly: IDEs tend to do
custom builds for you, skipping the lifecycle definitions from time to time, in
which case a raw `${scala.version}` placeholder left in the code would not be
very useful. So the resource-plugin approach actually seems too complicated to
me, and we usually try to avoid it.
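Just to make the default-plus-override idea concrete, here is a minimal sketch of
how it could look as a plain runtime lookup in the test code instead of build-time
resource filtering; the `SCALA_VERSION` variable name and the coordinate below are
assumptions for illustration, not the actual Zeppelin code:

```java
// Minimal sketch (assumptions only, not Zeppelin's actual code): resolve the
// Scala binary version at runtime, defaulting to 2.10 when SCALA_VERSION is unset,
// and build the spark-csv coordinate from it.
public class SparkCsvCoordinate {

  /** Resolve the Scala binary version, falling back to 2.10 when unset. */
  static String scalaBinaryVersion() {
    String fromEnv = System.getenv("SCALA_VERSION");
    return (fromEnv == null || fromEnv.isEmpty()) ? "2.10" : fromEnv;
  }

  /** Build the spark-csv artifact coordinate for the resolved Scala version. */
  static String sparkCsvCoordinate() {
    return "com.databricks:spark-csv_" + scalaBinaryVersion() + ":1.3.0";
  }

  public static void main(String[] args) {
    // e.g. prints com.databricks:spark-csv_2.11:1.3.0 when SCALA_VERSION=2.11
    System.out.println(sparkCsvCoordinate());
  }
}
```

This would sidestep the IDE/lifecycle issues above, at the cost of deciding the
version at runtime instead of build time.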
What do you think, @1ambda ?
Or the simplest way could be to create a separate JIRA issue for the Scala
version mismatch in tests (if that is the case here), move the
feedback/discussion there, and handle it in subsequent PRs, but just re-trigger
CI here to see if the failure is reproducible.