Github user steveloughran commented on the issue:
https://github.com/apache/zeppelin/pull/2618
bq. Can this be just a list of steps people can follow?
It's a very brittle list of steps. It's easier at build time: you already
have your Hadoop version fixed, and the hadoop-aws and hadoop-azure POMs give you
the library versions they need. All you need to do is add them *and exclude everything
that conflicts with the versions Spark has chosen*. It's really hard to get this
right by hand.
The Spark work adds a new optional module and profile to set this up. I'd
recommend doing the same thing here for now, using the code in Spark's POMs to tell
you what to exclude.
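
As a rough sketch of what that looks like in a module POM (the version property
and the exact exclusion list below are illustrative only; the real list has to be
derived from your Hadoop version and from what Spark's own POMs pull in):

```xml
<!-- Illustrative sketch: versions and exclusions must match the Hadoop
     release you build against and the dependencies Spark already ships. -->
<properties>
  <hadoop.version>2.8.2</hadoop.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-aws</artifactId>
    <version>${hadoop.version}</version>
    <exclusions>
      <!-- hadoop-common and the jackson artifacts already come in via Spark;
           pulling them in again clashes with the versions Spark has chosen. -->
      <exclusion>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
      </exclusion>
      <exclusion>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
      </exclusion>
      <exclusion>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-azure</artifactId>
    <version>${hadoop.version}</version>
    <exclusions>
      <exclusion>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```

Wrapping that in an optional profile keeps the cloud connectors out of the default
build, which is what the Spark module does.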
---