On Fri, Feb 28, 2014 at 8:55 AM, Phelps, Matt <mphe...@cfa.harvard.edu> wrote:
> I'd highly recommend getting a NetApp storage device for something that big.
>
> It's more expensive up front, but the amount of heartache/time saved in the
> long run is WELL worth it.
>
      My vote would be for a ZFS-based storage solution, be it
homegrown or an appliance (like Nexenta). Remember, as far as ZFS (and
similar filesystems whose acronyms are more than 3 letters) is
concerned, a petabyte is still small fry.
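
To make that concrete, here is a minimal sketch (disk names, pool layout
and mountpoint are only placeholders, and on CentOS this assumes the ZFS
on Linux packages or an appliance doing the same thing under the hood):

  # build a pool out of the first shelf of disks as one raidz2 vdev
  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj
  zfs create -o mountpoint=/export/bigshare tank/bigshare

  # later: grow the same filesystem by adding a second shelf as another vdev
  zpool add tank raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt

The extra space shows up in the existing filesystem immediately; there is
no separate LVM or growfs step, and a failed disk only means resilvering
one vdev. (In real life you would use /dev/disk/by-id names rather than
sdX.)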

>
> On Fri, Feb 28, 2014 at 8:15 AM, Götz Reinicke - IT Koordinator <
> goetz.reini...@filmakademie.de> wrote:
>
>> Hi,
>>
>> Over time, the requirements and possibilities regarding filesystems
>> have changed for our users.
>>
>> Currently I'm faced with this question:
>>
>> What might be a good way to provide one big filesystem for a few users,
>> one that can also be enlarged later? Backing up the data is not the question here.
>>
>> Big in this context means up to a couple of hundred TB, maybe.
>>
>> O.K., I could install one hardware RAID with e.g. N big drives, format it
>> with XFS, and export one big share. Done.
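
(For the record, that single-box variant is only a handful of commands;
the device, mountpoint and client network below are just placeholders:

  mkfs.xfs /dev/sdb                    # one big LUN from the RAID controller
  mkdir -p /export/bigshare
  mount /dev/sdb /export/bigshare

  # /etc/exports
  /export/bigshare  192.168.1.0/24(rw,sync,no_subtree_check)

  exportfs -ra

The pain only starts when it has to grow past that one box.)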
>>
>> On the other hand, using e.g. 60 4 TB disks in one storage box would give
>> a lot of space, but rebuilding after a disk crash would be a nightmare.
>>
>> Now when a share fills up, my users "complain" that all they usually get
>> is a new share (i.e. a new RAID box).
>>
>> From my POV I could e.g. use hardware RAID boxes, combine them with LVM,
>> and use the filesystem's grow options to extend the final share. But what
>> if one of the boxes crashes completely? The whole filesystem would be gone.
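
(That LVM route would look roughly like the sketch below; device names and
mountpoint are placeholders. And yes, a plain linear LV has no redundancy
across the boxes, so losing one PV really does take the whole filesystem
with it:

  # first RAID box becomes the initial volume
  pvcreate /dev/sdb
  vgcreate bigvg /dev/sdb
  lvcreate -n bigshare -l 100%FREE bigvg
  mkfs.xfs /dev/bigvg/bigshare
  mount /dev/bigvg/bigshare /export/bigshare

  # later: concatenate a second RAID box and grow XFS online
  pvcreate /dev/sdc
  vgextend bigvg /dev/sdc
  lvextend -l +100%FREE /dev/bigvg/bigshare
  xfs_growfs /export/bigshare

That single-box failure mode is exactly what pooled, redundant designs
like ZFS are meant to avoid.)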
>>
>> hm.
>>
>> So how do you handle big filesystems/storages/shares?
>>
>>         Regards . Götz
>>
>> --
>> Götz Reinicke
>> IT-Koordinator
>>
>> Tel. +49 7141 969 82 420
>> E-Mail goetz.reini...@filmakademie.de
>>
>> Filmakademie Baden-Württemberg GmbH
>> Akademiehof 10
>> 71638 Ludwigsburg
>> www.filmakademie.de
>>
>> Registered at Amtsgericht Stuttgart, HRB 205016
>>
>> Chairman of the supervisory board: Jürgen Walter MdL,
>> State Secretary in the Ministry of Science,
>> Research and the Arts Baden-Württemberg
>>
>> Managing director: Prof. Thomas Schadt
>>
>>
>
>
> --
> Matt Phelps
> System Administrator, Computation Facility
> Harvard - Smithsonian Center for Astrophysics
> mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
