We actually zip the full conda environments during our build and ship
those.
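
In case it helps, a minimal sketch of that kind of flow (the environment
name, package list, file names and YARN settings below are illustrative,
not our exact build -- it assumes conda-pack and a YARN cluster):

    # build the environment once and pack it into a single archive
    conda create -y -n pyspark_env -c conda-forge python=3.7 pyarrow pandas
    conda pack -n pyspark_env -o pyspark_env.tar.gz   # needs the conda-pack package

    # ship pyspark_env.tar.gz with the job; no repo access needed at runtime
    spark-submit \
      --master yarn --deploy-mode cluster \
      --archives pyspark_env.tar.gz#environment \
      --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./environment/bin/python \
      your_job.py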

On Wed, 21 Oct 2020 at 20:25, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> How about PySpark? What process can that go through to not depend on
> external repo access in production?
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 21 Oct 2020 at 19:19, Sean Owen <sro...@gmail.com> wrote:
>
>> Yes, it's reasonable to build an uber-jar in development, using Maven/Ivy
>> to resolve dependencies (and of course excluding 'provided' dependencies
>> like Spark), and push that to production. That gives you a static artifact
>> to run that does not depend on external repo access in production.
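
(For reference, a rough sketch of that flow; the main class, artifact name
and build tool specifics are placeholders, assuming Maven with the shade
plugin and Spark dependencies marked 'provided':)

    # build a single fat jar once, in an environment with repo access
    mvn -DskipTests package

    # ship the shaded jar to production and run it; no external repo needed
    spark-submit --class com.example.Main \
      --master yarn --deploy-mode cluster \
      my-app-1.0-shaded.jar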
>>
>> On Wed, Oct 21, 2020 at 1:15 PM Wim Van Leuven <
>> wim.vanleu...@highestpoint.biz> wrote:
>>
>>> I like an artefact repo as the proper solution. The problem with
>>> environments that haven't yet fully embraced DevOps is that artefact repos
>>> are considered development tools and often aren't yet used to promote
>>> packages to production, air-gapped if necessary.
>>> -wim
>>>
>>
