Attempting to answer Qing's question
--
If you can digest the legal terms:
https://docs.nvidia.com/cuda/eula/index.html#distribution-requirements.
It sounds its OK("

   1. Your application must have material additional functionality, beyond
   the included portions of the SDK.")

but I don't fully understand the legal lingo.

@Hen <bay...@apache.org> : Could you provide input to this?

Thanks, Naveen

On Mon, Dec 17, 2018 at 3:29 PM Davydenko, Denis <
dzianis.davydze...@gmail.com> wrote:

> Kellen, please see the conversation [1] on the previously published proposal
> re: the Maven publishing pipeline. I think your concerns are valid and we
> should look into the security aspects of running our CI more broadly, not
> bound to just artifact publishing.
>
> I believe Qing's question right now is whether it is OK from a legal
> perspective to download CUDA by literally running wget during one of the
> jobs in the publishing pipeline. The fact that it is not available via a
> simple URL download raises a concern: is that a protective measure against
> downloads by unauthenticated users, or just an inconvenience that NVIDIA
> has not addressed yet?
>
> [1]:
> https://lists.apache.org/thread.html/464712f0136fb51916ca9f1b702b99847e108dbdbd0b6a2b73fc91f1@%3Cdev.mxnet.apache.org%3E
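
(As an aside, a minimal sketch of what that download step could look like if
we pin a checksum in source control, so the publish job only installs bits we
have reviewed. The URL and digest below are placeholders, not the real NVIDIA
values.)

    # Hypothetical download-and-verify step for a publish job; the URL and
    # digest are placeholders, not actual NVIDIA locations.
    import hashlib
    import urllib.request

    CUDA_DEB_URL = "https://developer.download.nvidia.com/<placeholder>.deb"
    PINNED_SHA256 = "<sha256 pinned in source control>"

    def fetch_and_verify(url: str, expected_sha256: str, dest: str) -> str:
        # Download the file, then refuse to proceed if it does not match the
        # checksum committed alongside the pipeline code.
        urllib.request.urlretrieve(url, dest)
        with open(dest, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected_sha256:
            raise RuntimeError("checksum mismatch for %s: %s" % (dest, digest))
        return dest

    fetch_and_verify(CUDA_DEB_URL, PINNED_SHA256, "cuda-repo.deb")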
>
>
> On 12/17/18, 2:48 PM, "kellen sunderland" <kellen.sunderl...@gmail.com>
> wrote:
>
>     Restricted nodes may provide enough security for some use cases, but in
>     my opinion they don't provide enough for artifact publishing. An example
>     would be if there were an exploit available that worked against a
>     Jenkins master. In that case I think an attacker could still pivot to a
>     secure node (correct me if I'm wrong).
>
>     To your second point, it shouldn't be too hard for us to maintain all
>     the deps for our packages in Dockerfiles which are checked into source
>     and built on a regular basis.  To publish these artifacts I'd recommend
>     doing this from a separate, secure environment.  The flow I'd recommend
>     would be something like: (1) Developers commit PRs, with verification
>     from the CI that the artifacts build properly on a continual basis.
>     (2) In a separate, secure environment we do the same artifact build
>     generation again, but this time we publish to various repos as a
>     convenience to our MXNet users.
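
(A rough sketch of that two-step flow, driving Docker from a small script.
The image tag and the publish script paths are made-up names for
illustration, not files that exist in the repo today.)

    # Hypothetical outline of the two-phase flow: the public CI only checks
    # that the artifacts build; a separate, secure account rebuilds them and
    # actually publishes.
    import subprocess

    IMAGE = "mxnet-publish"                # placeholder image tag
    DOCKERFILE = "ci/publish/Dockerfile"   # placeholder path in source control

    def build_image() -> None:
        # Both phases start from the same Dockerfile checked into source.
        subprocess.run(["docker", "build", "-t", IMAGE, "-f", DOCKERFILE, "."],
                       check=True)

    def verify_artifacts() -> None:
        # Phase 1 (public CI, per PR): build the artifacts, never publish here.
        subprocess.run(["docker", "run", "--rm", IMAGE, "./publish/build.sh"],
                       check=True)

    def publish_artifacts() -> None:
        # Phase 2 (separate, secure environment): same build, then push to
        # Maven / PyPI as a convenience for users.
        subprocess.run(["docker", "run", "--rm", "-e", "DO_PUBLISH=1", IMAGE,
                        "./publish/publish.sh"], check=True)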
>
>     On Mon, Dec 17, 2018 at 2:34 PM Qing Lan <lanking...@live.com> wrote:
>
>     > Hi Kellen,
>     >
>     > Firstly, the restricted nodes are physically isolated from the
>     > PR-checking CI system, which is explained here:
>     > https://cwiki.apache.org/confluence/display/MXNET/Restricted+jobs+and+nodes
>     > What you are mentioning is that the public CIs all run into trouble
>     > when they are publicly accessible. I am not sure how secure the
>     > restricted node is. However, the only way I can think of from your end
>     > is to download all deps onto a single machine and run everything there
>     > (disconnected from the internet). That would give us the best security
>     > we can have.
>     >
>     > Thanks,
>     > Qing
>     >
>     > On 12/17/18, 2:06 PM, "kellen sunderland" <kellen.sunderl...@gmail.com>
>     > wrote:
>     >
>     >     I'm not in favour of publishing artifacts from any Jenkins-based
>     >     systems.  There are many ways to bundle artifacts and publish them
>     >     from an automated system.  Why would we use a CI system like
>     >     Jenkins for this task?  Jenkins frequently has security
>     >     vulnerabilities and is designed to run arbitrary code from the
>     >     internet.  It is a real possibility that an attacker could pivot
>     >     from any Jenkins-based CI system to infect artifacts which would
>     >     then potentially be pushed to repositories our users consume.  I
>     >     would consider any system using Jenkins as insecure-by-design, and
>     >     encourage us to air-gap any artifact generation (websites, jars,
>     >     PyPI packages) completely from a system like that.
>     >
>     >     An alternative I could see is a simple Dockerfile (no Jenkins)
>     >     that builds all artifacts end-to-end and can be run in an
>     >     automated account well outside our CI account.
>     >
>     >     On Mon, Dec 17, 2018 at 1:53 PM Qing Lan <lanking...@live.com> wrote:
>     >
>     >     > Dear community,
>     >     >
>     >     > Currently Zach and I are working on the automated publish
>     >     > pipeline on Jenkins, which is used to publish nightly builds of
>     >     > the Maven and pip packages. We are trying to use the NVIDIA deb
>     >     > packages, which would help us build different CUDA/cuDNN
>     >     > versions in the publish system. Sheng has provided a script
>     >     > here: https://github.com/apache/incubator-mxnet/pull/13646.
>     >     > It gives a very concrete and automated solution, from
>     >     > downloading to installing on the system. The only issue we are
>     >     > facing is that NVIDIA seems to have restrictions on
>     >     > distributing CUDA. We are not sure if it is legally safe for us
>     >     > to use this in public.
>     >     >
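(For illustration, a hypothetical sketch of how the publish job might pick a
CUDA/cuDNN combination and install the matching local repo deb, after it has
been downloaded and verified as sketched earlier in this thread. The version
table and package names below are placeholders, not the actual script from
that PR.)

    # Hypothetical install step, parameterized by CUDA version; deb file
    # names and versions are placeholders.
    import subprocess

    CUDA_REPO_DEBS = {
        "9.2":  "cuda-repo-ubuntu1604-9-2-local_<placeholder>_amd64.deb",
        "10.0": "cuda-repo-ubuntu1604-10-0-local_<placeholder>_amd64.deb",
    }

    def install_cuda(version: str) -> None:
        deb = CUDA_REPO_DEBS[version]
        # Register the local repo, then install the matching toolkit package.
        subprocess.run(["dpkg", "-i", deb], check=True)
        subprocess.run(["apt-get", "update"], check=True)
        subprocess.run(["apt-get", "install", "-y",
                        "cuda-toolkit-" + version.replace(".", "-")],
                       check=True)

    install_cuda("10.0")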
>     >     > We would be grateful if somebody with better context on this
>     >     > could help us out!
>     >     >
>     >     > Thanks,
>     >     > Qing
>     >     >
>     >
>     >
>     >
>
>
>
>
