We use Storage Archive and are happy with it. Policies are very customizable 
using SQL-like queries.
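For a flavor of what the policies look like, here's a minimal sketch of a 
migration rule (the pool names, EXEC path, and OPTS string are illustrative 
placeholders, not our actual config):

  /* external pool handled by Storage Archive EE */
  RULE EXTERNAL POOL 'ltfs'
    EXEC '/opt/ibm/ltfsee/bin/eeadm' OPTS '-p pool1@lib1'

  /* once the disk pool hits 80% full, migrate files untouched for
     90+ days until usage drops back to 60% */
  RULE 'migrate_cold' MIGRATE
    FROM POOL 'system'
    THRESHOLD(80, 60)
    TO POOL 'ltfs'
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90

You feed that to mmapplypolicy on whatever schedule suits you.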
We have 4.5 PB archived onto LTO-8 tape. I haven't had any issues with tape 
spanning.
We create two copies and send full tapes offsite.
Stub files create a seamless experience for users. We wrote a simple 
self-service tool to recall files.  No complaints.
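Roughly speaking, the tool is just a thin wrapper around the EE CLI. A 
simplified sketch of the idea (the real version adds auth and error 
handling):

  #!/bin/sh
  # Recall the requested files in one batch so EE can order the
  # reads by tape position instead of thrashing the drives.
  LIST=$(mktemp)
  printf '%s\n' "$@" > "$LIST"   # one path per line
  eeadm recall "$LIST"
  rm -f "$LIST"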
If you happen to use Veritas NetBackup, it can selectively skip premigrated 
and migrated files, so you can back up a directory containing a mix of file 
states without rehydrating the migrated files.
(Migrated = the file is on tape only; premigrated = on both tape and disk; 
resident = on disk only.)
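If you ever want to check a file's state by hand, the EE CLI will tell you 
(the path is just an example):

  # reports resident / premigrated / migrated, and which tape(s)
  # hold the file
  eeadm file state /gpfs/archive/some/file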

I’m happy to answer any questions you might have.

Cheers,

Neil Conner
Storage, Backup & Database Administrator
P (831) 775-1989   F (831) 775-1620
Monterey Bay Aquarium Research Institute
7700 Sandholdt Road, Moss Landing CA 95039
www.mbari.org
Advancing marine science and engineering to understand our changing ocean.


> On Jan 15, 2024, at 3:33 PM, ANDREW BEATTIE <abeat...@au1.ibm.com> wrote:
> 
> So, full disclosure: I work for IBM 🙂
> 
> IBM Storage Scale supports multiple DMAPI-aware Hierarchical Storage 
> Management (HSM) offerings. Three of them carry an IBM logo:
> 
> IBM Storage Archive (LTFS)
>   - Simple and lightweight.
>   - Does not support tape spanning (can be an issue for LTO; not an issue 
>     for Enterprise tape with TS1170 50 TB media).
>   - Tape reclamation can be slower depending on how the environment is 
>     configured.
>   - Specific tape library support - not all libraries have been validated.
>   - Robust architecture; easy to scale out by adding additional Archive 
>     nodes as required; plenty of client references.
>   - Licensed per deployed node.
> 
> IBM Storage Protect Extended Edition + IBM Storage Protect for Space 
> Management
>   - Solid, robust architecture; has been successfully deployed delivering 
>     HSM capability for 15+ years.
>   - Potentially the best integration with IBM Storage Scale, via the 
>     mmbackup / SOBAR integration covering both backup and archive of the 
>     same data sets (see the sketch below).
>   - The single Db2 database server can be seen as a performance bottleneck 
>     at very large scale.
>   - Two licensing options: capacity-based (per TB) or by Processor Value 
>     Unit (PVU).
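> 
> A rough sketch of what that integration looks like day to day (the file 
> system name fs1 is a placeholder):
> 
>   # policy-driven incremental backup of the Scale file system to
>   # Storage Protect, no directory tree walk required
>   mmbackup fs1 -t incremental
> 
>   # SOBAR: capture the file system metadata as an image for
>   # disaster recovery of the migrated data
>   mmimgbackup fs1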
> 
> IBM High Performance Storage System (HPSS) - offered as a managed service 
> (options for partners to provide level 1 support).
>   - Standalone, fully featured archive platform with a DMAPI integration 
>     connector for Storage Scale.
>   - Very useful for very large scale-out environments, as the managed 
>     service cost is fixed per annum regardless of capacity: 20 PB or 
>     200 PB, the "license" is the same.
> 
> HPE logo
> 
> HPE DMF 7 - HPE have done extensive work to integrate DMF 7 natively with 
> IBM Storage Scale; at least 2 referenceable clients in the APAC region.
> 
> Kalray logo
> PixitMedia Ngenea - I haven't done anything with this platform, but I'm 
> aware that it exists and has an extensive following in the film & TV 
> verticals.
> 
> In many ways your decision will come down to how the clients want their 
> users to experience data management. Do they want the users to be 
> responsible for managing their own data, or do they want an automated 
> experience where data management simply happens and users don't have to 
> worry about archival policy, process, or requirements?
> 
> Regards,
> 
> Andrew Beattie
> Technical Sales Specialist - Storage for Big Data & AI
> IBM Australia and New Zealand
> P. +61 421 337 927
> E. abeat...@au1.ibm.com
> Twitter: AndrewJBeattie
> LinkedIn: https://www.linkedin.com/in/ajbeattie/
> 
> From: gpfsug-discuss <gpfsug-discuss-boun...@gpfsug.org> on behalf of 
> Nicolas Perez de Arenaza <npe...@giux.com>
> Sent: Tuesday, 16 January 2024 8:08 AM
> To: gpfsug-discuss@gpfsug.org
> Subject: [EXTERNAL] [gpfsug-discuss] ILM tiering to Tape - What to use 
> Protect Space Management or Spectrum Archive?
> 
> Hello,
> 
> I am working on offering an IBM Storage Scale solution that leverages ILM 
> to tier to tape.
> 
> My concerns are:
> - reliability
> - complexity (I want to keep it simple)
> - future roadmap and lifecycle (this will be used for many years)
> 
> What should we choose for tape management?
> IBM Storage Protect for Space Management (for Linux) + IBM Storage Protect,
> or
> IBM Storage Archive?
> 
> Any opinions are welcome - looking forward to receiving some.
> 
> Thanks
> 
> NicolΓ‘s.
> 
> NicolΓ‘s PΓ©rez de Arenaza
> Gerente de ConsultorΓ­a | GIUX S.A.
> Tel Dir: (5411) 5218-0099 | Ofi: (5411) 5218-0037 x 201 | Cel: (54911) 
> 4428-1795
> npe...@giux.com | Skype ID: nperezdearenaza | http://www.giux.com
> 

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
