On Fri, Feb 21, 2014 at 11:51:19AM +0000, Edward Ned Harvey (lopser) wrote:
> > From: Charles Polisher [mailto:[email protected]]
> >
> > There's a tradeoff between how fast the designer wants the head
> > to seek and how much heat they're willing to dissipate in the
> > positioning servo. With infinite power you can position the
> > head pretty damn fast.
>
> A good point - I have neither data to confirm nor deny any difference
> in average seek time of 2.5" vs 3.5" disks, but my presumption is
> that they're approximately equal. If this is correct, it could be
> explained as you said - the larger disks have more distance to
> travel, but also have more power and cooling available.
>
> > If you consider groups of disks, rotational latency isn't
> > completely governed by RPMs. Two mirrored disks can be written
> > 180 degrees out of phase with 50% of the rotational latency of a
> > single disk, 4 disks can be stepped at 90 degrees delivering a
> > quarter the latency, etc.
> This is true, but not practical. Besides the fact that rotational
> latency is a slim, slim minority of where time is lost - head seek
> time is where the vast majority of the ground is to be gained -
> there are techniques such as short-stroking to minimize head seek.
> But just like short-stroking, if you have tuned drivers and just a
> teeny little bit of proprietary hardware support, you can get
> knowledge about the rotational position of the head over the
> platters, and the variations of sector density according to track
> position, and you can keep your volatile data in a tightly grouped
> small number of tracks (or a rotating buffer or similar), so you
> can get IOPS out of a HDD that are comparable to SSD.

Good points. The use-case for me is an Oracle DB with about 100GB of
tablespace utilized in a college ERP system. Exactly which data is
volatile is a complex function of time of day, where we are in the
instructional calendar, and the hard-to-predict actions of students
and faculty. Because automatic tiering algorithms aren't tractable
for all workloads (they've all got pathological cases IIRC), we have
to live with making storage performance guarantees to ensure system
performance. In my case, the DB writing processes are relatively
insensitive to latency, thanks to intelligent caching at the
application (DB) layer, but still highly sensitive to latency in the
journalling piece. We keep the journals (a sequential write workload)
on a few GB of SSD, and the DB (60% read / 40% write) on relatively
slow rotating disks. Works for us.

> But because it's a specialized function, these types of performance
> enhancements are pretty well limited, practically, to the academic
> world. The cost differential (as well as other characteristics)
> between SSD vs. hybrid vs. HDD does not provide significant enough
> motivation for manufacturers to productize commercial offerings of
> this type...

Not academic... Linux mdraid (software RAID) implements "far" layouts
which place data at different track offsets, so it is practical --
I'm using it now. With multiple spindles, unless you synchronize the
spindles, you'll tend to have rotational offsets between mirrored
pairs, and even more with tripled mirrors. Some say triples waste too
much storage, but at $88/TB the cost of raw storage is not the
principal cost driver for storage solutions IMO. As for zoning (ZCAV;
using proximity to the outer edge to improve transfer rates), I'm
doing it now and see a 50% difference between outer and inner zones.
Again, Linux mdraid, but I believe most SANs will let you do this.
Back-of-the-envelope sketches of the stagger arithmetic, the far
layout, and a quick zone measurement are in the P.S. below.
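P.S. Three sketches, in Python, for the points above. First, the
rotational latency arithmetic for phase-staggered mirrors. This is
just my own illustration of the geometry, nobody's shipping firmware:
it assumes reads are serviced by whichever of N evenly staggered
copies reaches the target sector first.

    # Expected rotational latency when the nearest of N copies,
    # staggered by 360/N degrees, services the read.  Assumes the
    # target sector's angle is uniformly random; seek time is ignored.
    def avg_rotational_latency_ms(rpm: float, n_copies: int = 1) -> float:
        ms_per_rev = 60_000.0 / rpm       # one revolution in milliseconds
        # The nearest copy is at most 1/N revolution away, so the
        # average wait is half of that: 1/(2N) of a revolution.
        return ms_per_rev / (2 * n_copies)

    for n in (1, 2, 4):
        print(f"7200 RPM, n={n}: {avg_rotational_latency_ms(7200, n):.2f} ms")
    # -> 4.17, 2.08, 1.04 ms: the 50% and 25% figures quoted above.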
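Second, a schematic of where a RAID10 "far 2" layout puts each chunk.
This is my own simplified model, not the md driver's exact arithmetic
(md also has near/offset variants and more edge cases); it just shows
why reads can stripe RAID0-style across the fast outer section of
every spindle.

    # Schematic far-2 placement: copy 1 striped across the outer
    # section of all disks, copy 2 in the inner section, rotated by
    # one device.  Offsets are in chunks; low offset = outer tracks.
    def far2_placement(chunk: int, n_disks: int, section_chunks: int):
        stripe = chunk // n_disks
        copy1 = (chunk % n_disks, stripe)
        copy2 = ((chunk + 1) % n_disks, section_chunks + stripe)
        return [copy1, copy2]

    for c in range(6):
        print(c, far2_placement(c, n_disks=2, section_chunks=1000))
    # Consecutive chunks' first copies alternate across both disks near
    # the outer edge; the mirror copies land in the slower inner half.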
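Third, a rough harness for the outer-vs-inner (ZCAV) difference. The
device node is a placeholder - point it at a disk you can afford to
read, as root. The page cache will inflate repeat runs; a proper tool
like zcav from bonnie++ handles this more carefully.

    # Rough sequential-read throughput at the outer edge (offset 0)
    # vs. the inner edge (end of device) of a raw disk.
    import os, time

    DEV = "/dev/sdX"          # placeholder device node
    CHUNK = 1 << 20           # read 1 MiB at a time
    TOTAL = 256 << 20         # sample 256 MiB per zone

    def read_rate_mib_s(offset: int) -> float:
        fd = os.open(DEV, os.O_RDONLY)
        try:
            t0, done = time.monotonic(), 0
            while done < TOTAL:
                if not os.pread(fd, CHUNK, offset + done):
                    break                 # hit end of device
                done += CHUNK
            return (done / (1 << 20)) / (time.monotonic() - t0)
        finally:
            os.close(fd)

    fd = os.open(DEV, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    os.close(fd)
    print("outer zone: %.0f MiB/s" % read_rate_mib_s(0))
    print("inner zone: %.0f MiB/s" % read_rate_mib_s(size - TOTAL))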
-- 
Charles Polisher