> Andrew Rowley wrote:
> It's not cents per GB cheap

While I agree with everything you're saying, at the end of the day it's the 
storage sysprog's decision. As with any z/OS sysprog, they make decisions that 
programmers feel are abusive. 
People are arguing out of passion. If this were a manager's or a CEO's decision, 
would you argue the point with them? Right or wrong, we live with their 
decisions. Just because sysprogs aren't managers doesn't mean they shouldn't 
be treated as the authority, without having to argue every detail.

If you came into a company as the storage sysprog, would you satisfy every 
storage request sitting on your new desk on day 1? My point is that you can't 
assume that everything from your last employer is the same at your new 
employer. If, after 10 days, you must take the disk space back, those people 
will be angrier than if you had simply denied their requests.


    On Tuesday, August 15, 2023 at 05:20:47 PM PDT, Andrew Rowley 
<and...@blackhillsoftware.com> wrote:

On 16/08/2023 6:17 am, Jon Perryman wrote:

> This is absurd. Not all disk is cheap (e.g. GDPS). Not all data is valuable. 
> While a person may be expensive, not everything they do is of value to the 
> business and worth the hidden expenses.

It's not cents-per-GB PC cheap, but it's not 1990s expensive either.

If you have a meeting with a couple of people, schedule a change, get it 
approved, implement it... you could easily spend $1000 of people's time, 
not to mention add weeks to a project. How much disk space would that 
buy you, if you can avoid that cost and delay?

The IBM RDP systems charge $10/month for 5GB of disk space, which seems 
expensive. But still, $1000 of meetings avoided will pay for 40GB of 
space for a year.
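
To sanity-check that figure, here is a rough back-of-the-envelope in Python. It 
uses only the prices mentioned above; the $1000 meeting cost is the same 
illustrative figure as before, not real data:

# Rough comparison: avoided meeting cost vs. the RDP disk pricing quoted above.
price_per_month = 10.0     # USD for 5 GB per month on the IBM RDP systems
gb_per_unit = 5.0
meeting_cost = 1000.0      # USD of people's time for one scheduled change

cost_per_gb_year = price_per_month / gb_per_unit * 12     # = $24 per GB per year
gb_per_avoided_meeting = meeting_cost / cost_per_gb_year  # ~41.7, i.e. about 40 GB

print(round(cost_per_gb_year, 2), round(gb_per_avoided_meeting, 1))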

What is the (ballpark) cost per GB on a real customer system? How much 
space is it worth to reduce the time spent managing it?

> You can't be serious about being a storage admin. Every situation and company 
> is different. Questions must be asked. How do you not understand that adding 
> 100GB to a filesystem has an impact on GDPS, HSM, backups, recovery and much 
> more? If you believe everything is created equal, then 100GB has the same 
> impact on a 10GB filesystem as on a 10TB filesystem. A file system may 
> contain millions of Unix files, but it's 1 MVS dataset. Recovery of a 
> filesystem is risky at the best of times, but adding 100GB increases the 
> risks and may impact the nightly archival time (1 Unix file change causes 
> HSM to back up the entire filesystem). If, as you say, data is valuable, 
> then the UNIX backup would be used. I could go on, but I expect this should 
> be obvious.

Whole filesystem backups are not very useful - really just a DR tool. 
What do you do if a user wants file(s) restored? Restore all their files 
from a point in time? Or restore the filesystem, mount it somewhere and 
manually copy files?

The whole filesystem backup takes us back to the problems with 
individual filesystems. You are going to back up a user's whole 
filesystem, including all the freespace from the largest file they ever 
deleted, because they logged on and updated .sh_history? Or worse - can 
the filesystem change indicator tell the difference between a data 
update and a metadata (e.g. last accessed date) change?
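
For comparison, a per-file incremental check has no trouble telling those two 
apart. A minimal sketch, assuming ordinary POSIX stat() semantics (st_mtime 
moves on content writes, st_atime on reads) rather than the behaviour of any 
particular backup product:

# Illustrative only: pick out files whose *content* changed since the last
# backup. A read that only bumps the access time does not qualify.
import os

def files_needing_backup(root, last_backup_mtimes):
    """Return paths under root whose modification time is newer than recorded."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            previous = last_backup_mtimes.get(path)
            if previous is None or mtime > previous:
                changed.append(path)
    return changed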

I'm not saying everything is equal; I'm just saying that freespace is a 
lot cheaper than managing a lack of freespace.

-- 
Andrew Rowley
Black Hill Software

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
