If the purpose is temporary work and writes, put them in a temporary
sub-directory under a given bucket:

spark.conf.set("temporaryGcsBucket", config['GCPVariables']['tmp_bucket'])

That dict reference points to this entry in the yml file:

GCPVariables:
   tmp_bucket: "tmp_storage_bucket/tmp"
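
For reference, a minimal sketch of loading that yml file and applying the
setting, assuming PyYAML is available and the file name config.yml is
illustrative:

import yaml
from pyspark.sql import SparkSession

# Load the yml config into a dict (file name is illustrative)
with open("config.yml") as f:
    config = yaml.safe_load(f)

spark = SparkSession.builder.appName("gcs_tmp_example").getOrCreate()

# Point the GCS connector at the temporary bucket for intermediate writes
spark.conf.set("temporaryGcsBucket", config['GCPVariables']['tmp_bucket'])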


Just create a temporary bucket with a sub-directory tmp underneath:

tmp_storage_bucket/tmp
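
If the bucket does not exist yet, it can also be created programmatically.
A minimal sketch with the google-cloud-storage client (the location "US" is
an assumption, pick your own region). Note that GCS has no real directories,
so the tmp sub-directory is just an object-name prefix:

from google.cloud import storage

client = storage.Client()

# Create the temporary bucket (location "US" is an assumption)
bucket = client.create_bucket("tmp_storage_bucket", location="US")

# The tmp/ prefix comes into existence as soon as an object is
# written under it, e.g. tmp/part-00000
blob = bucket.blob("tmp/.keep")
blob.upload_from_string("")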


HTH



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 7 Mar 2021 at 16:23, Ranju Jain <ranju.j...@ericsson.com.invalid>
wrote:

> Hi,
>
>
>
> I need to save the executors' processed data in the form of part files,
> but I think a persistent volume is not an option for this, as executors
> terminate after their work completes.
>
> So I am thinking of using a shared volume across the executor pods.
>
>
>
> Should I go with NFS, or is there any other volume option to explore?
>
>
>
> Regards
>
> Ranju
>
