>>> On 11/5/2013 at 02:49 PM, Jon Perryman <jperr...@pacbell.net> wrote:
-snip-
> As for showing that z/OS is not as bad as some would make it out, here
> are some of the issues the cloud has addressed but not truly resolved:
>
> 1. Disk full:
> * Cloud: Some disk manufacturers have implementations that work well,
> but they exist more as NFS than as a local filesystem, and they are
> still not in heavy use. Other implementations simply use Unix
> filesystems, but they require more disk space than is needed.
I have no idea what you're talking about here, and my company has a
product that lets you create your own private cloud infrastructure.

> * UNIX: Applications often do bizarre things with disk full. Admins
> usually find out there is a problem from a user reporting the problem.
> Adding a disk immediately won't solve the problem because the file
> system must be copied.

Wrong. Logical Volume Manager solved that well over 10 years ago.

> Adding a mountpoint doesn't help. Admin must search / delete files to
> free space. Increasing the file system size requires the scheduling of
> down time (it's not just adding space).

Wrong. File systems have been resizable online for well over 10 years
now. Some will even let you shrink them while they're online.

> * z/OS: Applications will die, but we can have automation to add disks
> to a storage group in a matter of seconds. We can easily steal disks
> from our test systems in an emergency where we don't have sufficient
> extras.

This is no different in the distributed world. It all depends on how
friendly you are with the storage administrators.

> We have HSM to migrate seldom-used files to tape. Admins can change
> the migration interval.

There are products that will do this, but I wish they had more
capabilities. HSM is one of the (very) few things I miss about z/OS.

> 2. CPU busy.
> * Cloud: There are a few implementations to spread workload (e.g.
> SOA). They are all the same basic principle but with different
> standards. They all basically send a request and wait for a response.
> It still exists within the restrictions of UNIX.

Again, I have no idea what you're talking about here.

> * UNIX: Can't dynamically add a CPU without a reboot. Loosely coupled.
> Can't offload workload without purpose-built applications (e.g.
> PeopleSoft and SAP).

Wrong. CPU hotplug and removal have been around for quite a long time.

> * z/OS: We have WLM to prioritize workload.
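To make the Linux side concrete, both mechanisms fit in a few commands.
This is a sketch only: vg00, lvdata, /dev/sdc and cpu1 are hypothetical
names, and the privileged commands are printed rather than executed,
since they need root and real devices.

```shell
#!/bin/sh
# Growing storage online with LVM (names are hypothetical). The
# commands are echoed because they need root; "lvextend -r" resizes the
# filesystem in the same step, online -- no unmount, no reboot.
echo 'vgextend vg00 /dev/sdc'                     # add a new disk to the VG
echo 'lvextend -r -l +100%FREE /dev/vg00/lvdata'  # grow LV + filesystem

# CPU hotplug: Linux lists online CPUs in sysfs; writing 0 or 1 to a
# per-CPU "online" file removes or adds that CPU on the fly (root, again).
cat /sys/devices/system/cpu/online 2>/dev/null || getconf _NPROCESSORS_ONLN
echo 'echo 0 > /sys/devices/system/cpu/cpu1/online'  # take cpu1 offline
```

No scheduled downtime in any of that, which is the point being disputed
above.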
> CPUs can be dynamically added (IBM often has spares in the box that
> can be quickly purchased). Our systems are tightly coupled through
> sysplex. IMS, CICS, TSO and batch can easily be spread on any / all
> systems within the sysplex.

Sorry, but I don't see sysplex as being "tightly coupled." At least
that's not the definition I learned 30 years ago. In many ways, sysplex
is the z/OS implementation of high availability. Being a specialized
implementation, it could do some nice magic with coupling facilities
and so on. It's also extremely expensive, as we've all come to expect
from the industry's pricing models for the mainframe. Linux has similar
prioritizing capabilities, but without a lot of the undue complications
that WLM introduces (even if you're just talking about the blasted
terminology).

> 3. Networking:
> * Cloud: Same as UNIX.

Again, what?

> * UNIX: TCP/IP was not publicly available until the '70s. Prior to
> that, simple communications were available.

We're not living in the late '60s any more, so I don't see how this is
relevant today.

> * z/OS: SNA existed long before TCP/IP was available. SNA was a
> robust, reliable and secure communications methodology. Once TCP/IP
> became available, we had the same situation as Betamax versus VHS.
> TCP won.

And for good reason. The industry has always swung back and forth
between proprietary standards and open standards. As time has gone on,
the realization has been that using technologies that implement open
standards reduces costs and vendor lock-in while increasing
flexibility. SNA had its advantages, but it was a real pain to manage
and modify network designs. I don't miss it in the least.

> 4. Data recovery:
> * Cloud: Cross your fingers and hope that your cloud provider is
> taking sufficient precautions.

Or run your own private cloud. It's not that hard, and it's certainly
cheaper than using a public cloud.

> * UNIX: Backup and recovery utilities exist, but an admin is often
> required to perform recovery.
> In addition, recovery is often an interactive process (start request /
> mount tape, then repeat).

Can't say I've seen these limitations in the systems I've worked with.
You may not get what you pay for, but you almost certainly don't get
what you won't pay for. Distributed systems can use automated tape
libraries, real or virtual.

Now, if you want to point a finger at some things in Linux that really,
really could use improvement, let's talk about diagnostic
instrumentation in the operating system, as well as much better data
for performance management. The latter in particular is pretty much a
joke. If I were competent to hack on the kernel, that's what I would
like to see improved. Because of the huge variety of hardware out there
that gets used, things like printing are a colossal pain. Automation of
bulk data transfer is very difficult. Some days I wish there were
something like ACF2 for Linux (something actually designed with Linux
in mind, and not just a transplant from z/OS), and then there are days
when I'm very happy it doesn't exist. Either way, the security model of
UNIX and Linux is way too coarse, and Access Control Lists (ACLs) for
files and directories were added later and pretty much just grafted on.
There's lots more about Linux that could be improved, but you get the
idea. My main point is: don't criticize it for imagined ills when there
are plenty of real ones.

Mark Post

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN