Hello,

we are planning to publish some data that is processed on our HPC
system. The datasets are up to 100 TB (already packed) and need to be
stored and published with DSpace.

I know it is possible to upload the files to the filesystem and assign
them to an item, but this does not scale well for huge datasets.

Is there any way to upload the data directly to S3 storage and assign
the object to an item later, without uploading it through DSpace first?
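For context, one piece of such a workflow would be verifying that an out-of-band transfer arrived intact. This is not a DSpace API, just a sketch of the standard S3 multipart ETag computation (MD5 of the concatenated per-part MD5 digests, suffixed with the part count), which could be compared against the ETag S3 reports after the upload; the part size is an assumption and must match the one used for the actual upload:

```python
import hashlib


def multipart_etag(path, part_size=8 * 1024 * 1024):
    """Compute the ETag S3 assigns to a multipart upload of `path`.

    For multipart uploads S3 sets the ETag to the MD5 of the
    concatenated per-part MD5 digests, plus "-<number of parts>".
    `part_size` must match the part size used for the upload.
    """
    part_md5s = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            part_md5s.append(hashlib.md5(chunk).digest())
    if len(part_md5s) == 1:
        # Objects uploaded in a single part get a plain MD5 ETag.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()
    return hashlib.md5(b"".join(part_md5s)).hexdigest() + f"-{len(part_md5s)}"
```

Comparing this local value with the ETag returned by a HEAD request on the uploaded object gives a quick integrity check without re-transferring the data.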

Kind regards

 Philipp Rehs

---------------------------

Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Heinrich-Heine-Universität Düsseldorf
Universitätsstr. 1
Raum 25.41.00.51
40225 Düsseldorf / Germany
Tel: +49-211-81-15557

You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/98c61d57-8bed-4eb6-a143-87f892c55a9an%40googlegroups.com.
