Github user rawkintrevo commented on the issue: https://github.com/apache/zeppelin/pull/928

@bzz and @Leemoonsoo - I was talking to some others about this. I think we should take a new tack here: Mahout currently only supports Spark 1.5 officially (and 1.6 in practice), while Zeppelin ships with Spark 2.0 by default, and you can't have multiple SparkContexts (e.g. a Spark 1.6 and a Spark 2.0 side by side). This 'terp would likely be dead on arrival for many users.

New thought, based on a lot of work I've been doing recently: create a Python script that, in essence, updates `conf/interpreter.json` with a new interpreter based on the current Spark interpreter, adding the required libraries and configurations (such as the Kryo serializers), resulting in an interpreter called `%sparkMahout`. A much lighter-weight version of what you can see here in principle:

[create a new interpreter](https://github.com/rawkintrevo/bluemix-extra-services/blob/master/data/services/zeppelin.py#L166)

[update a spark interpreter with Mahout configs](https://github.com/rawkintrevo/bluemix-extra-services/blob/master/data/services/zeppelin.py#L68)

In addition, a page or two of documentation in Zeppelin saying: "If you want to use Mahout with Apache Spark, you need Spark 1.5 or 1.6 (until Mahout is Spark 2 compliant). Go here, run this Python script, and the interpreter will be created. Here is some basic usage of Mahout and links to some other good getting-started material."

So in short, we would scrap all of this code and replace it with some docs and a Python script that creates a more resilient interpreter (based on my experience), one far less likely to give users a negative experience with Zeppelin or Mahout. Thoughts?
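For concreteness, the script could look roughly like this. This is only a sketch of the cloning step, not the linked implementation: the function name `make_spark_mahout`, the sample property values, and the assumption that `interpreterSettings` entries carry a flat `properties` dict are all illustrative, and the exact Kryo/registrator settings should be checked against the Mahout docs for the Spark version in use.

```python
import copy

# Mahout-on-Spark configuration to layer on top of the cloned interpreter.
# These are the commonly documented Kryo settings; verify against the
# Mahout release you target before relying on them.
MAHOUT_PROPERTIES = {
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
    "spark.kryo.registrator":
        "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator",
    "spark.kryo.referenceTracking": "false",
}

def make_spark_mahout(interpreter_json):
    """Clone the 'spark' interpreter setting from a parsed
    conf/interpreter.json and return a 'sparkMahout' variant with the
    Mahout properties added. The input dict is left untouched."""
    settings = interpreter_json["interpreterSettings"]
    spark = next(s for s in settings.values() if s["name"] == "spark")
    mahout = copy.deepcopy(spark)
    mahout["name"] = "sparkMahout"
    mahout["properties"].update(MAHOUT_PROPERTIES)
    return mahout
```

The real script would read `conf/interpreter.json`, insert the returned setting under a fresh id, and write the file back before restarting Zeppelin; keeping the clone a deep copy means the user's existing `%spark` interpreter is never modified.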