Re: [zfs-discuss] Nice chassis for ZFS server
Are there benchmarks somewhere showing a RAID10 implemented on an LSI card with, say, 128MB of cache being beaten in terms of performance by a similar raidz configuration with no cache on the drive controller? Somehow I don't think they exist. I'm all for data scrubbing, but this anti-raid-card movement is puzzling. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
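I'm not aware of published head-to-head numbers either. A proper comparison needs a dedicated tool such as fio with O_DIRECT and long runs, but as a minimal sketch of the measurement itself, here is one way to time random small writes against a file on each array and compare IOPS (the mount points mentioned in the docstring are hypothetical):

```python
import os
import random
import time

def random_write_iops(path, file_size=8 * 1024 * 1024, block=4096, seed=42):
    """Time random 4 KiB writes into one file and report IOPS.
    Run against a file on each array (e.g. hypothetical /raid10/t
    and /raidz/t mount points) and compare the two numbers."""
    rng = random.Random(seed)
    nblocks = file_size // block
    data = os.urandom(block)
    start = time.monotonic()
    with open(path, "wb") as f:
        f.truncate(file_size)
        for _ in range(nblocks):
            f.seek(rng.randrange(nblocks) * block)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # ensure the writes actually reach stable storage
    return nblocks / (time.monotonic() - start)
```

This measures buffered writes plus one final fsync, so the controller cache in question would show up mostly in the fsync cost; fio's sync/direct modes isolate that far better.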
Re: [zfs-discuss] OT: NTFS Single Instance Storage (Re: Yager on ZFS)
[EMAIL PROTECTED] wrote: Darren, Do you happen to have any links for this? I have not seen anything about NTFS and CAS/dedupe besides some of the third party apps/services that just use NTFS as their backing store.

Single Instance Storage is what Microsoft uses to refer to this: http://research.microsoft.com/sn/Farsite/WSS2000.pdf

While SIS is likely useful in certain environments, it is actually layered on top of NTFS rather than part of it - and in fact could in principle be layered on top of just about any underlying file system in any OS that supported layered 'filter' drivers. File access to a shared file via SIS runs through an additional phase of directory look-up, similar to that involved in following a symbolic link, and its described copy-on-close semantics require divided data access within the updater's version of the file (fetching unchanged data from the shared copy and changed data from the to-be-fleshed-out-after-close copy), with apparently no mechanism to avoid copying the entire file after close even if only a single byte within it has been changed - which could compromise its applicability in some environments.

Nonetheless, unlike most dedupe products it applies to on-line rather than backup storage, and Microsoft deserves credit for fielding it well in advance of the dedupe startups: once in a while they actually do produce something that qualifies as at least moderately innovative. NTFS was at least respectable if not ground-breaking as well when it first appeared, and it's too bad that it has largely stagnated since while MS pursued its 'structured storage' and similar dreams (one might suspect in part to try to create a de facto storage standard that competitors couldn't easily duplicate, limiting the portability of applications built to take advantage of its features without attracting undue attention from trust-busters, such as they are these days - but perhaps I'm just too cynical).
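For readers unfamiliar with the single-instance idea itself: SIS proper uses filter-driver reparse machinery and the copy-on-close semantics described above, but the core notion - one shared copy per unique content, found by hashing - can be sketched in a few lines. This whole-file hard-link version is my illustration only, not how SIS works (hard links cannot give you SIS's per-file copy-on-close behaviour):

```python
import hashlib
import os

def single_instance(paths, store_dir):
    """Whole-file dedupe sketch: files with identical content are replaced
    by hard links to a single shared copy, keyed by SHA-256 of the bytes."""
    os.makedirs(store_dir, exist_ok=True)
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        shared = os.path.join(store_dir, digest)
        if not os.path.exists(shared):
            os.link(path, shared)      # first copy becomes the shared instance
        elif not os.path.samefile(path, shared):
            os.unlink(path)            # duplicate: relink to the shared copy
            os.link(shared, path)
```

A writer modifying one of the linked files would of course modify all of them - which is precisely the problem SIS's copy-on-close semantics exist to solve.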
- bill
Re: [zfs-discuss] Yager on ZFS
from the description here http://www.djesys.com/vms/freevms/mentor/rms.html so who cares here ? RMS is not a filesystem, but more a CAS type of data repository

Since David begins his description with the statement RMS stands for Record Management Services. It is the underlying file system of OpenVMS, I'll suggest that your citation fails a priori to support your allegation above. Perhaps you're confused by the fact that RMS/Files-11 is a great deal *more* of a file system than most Unix examples (though ReiserFS was at least heading in somewhat similar directions). You might also be confused by the fact that VMS separates its file system facilities into an underlying block storage and directory layer specific to disk storage and the upper RMS deblocking/interpretation/pan-device layer, whereas Unix combines the two. Better acquainting yourself with what CAS means in the context of contemporary disk storage solutions might be a good idea as well, since it bears no relation to RMS (nor to virtually any Unix file system).

- bill
Re: [zfs-discuss] Odd prioritisation issues.
On Fri, Dec 07, 2007 at 05:27:25 -0800, Anton B. Rang wrote:

: I was under the impression that real-time processes essentially trump all others, and I'm surprised by this behaviour; I had a dozen or so RT-processes sat waiting for disc for about 20s.
: Process priorities on Solaris affect CPU scheduling, but not (currently) I/O scheduling nor memory usage.

Ah, hmm. I hadn't appreciated that. I'm surprised.

: * Is this a ZFS issue? Would we be better using another filesystem?
: It is a ZFS issue, though depending on your I/O patterns, you might be able to see similar starvation on other file systems. In general, other file systems issue I/O independently, so on average each process will make roughly equal forward progress on a continuous basis. You still don't have guaranteed I/O rates (in the sense that XFS on SGI, for instance, provides).

That would make sense. I've not seen this before on any other filesystem.

: * Is there any way to mitigate against it? Reduce the number of iops available for reading, say?
: Is there any way to disable or invert this behaviour?
: I'll let the ZFS developers tackle this one
: ---
: Have you considered using two systems (or two virtual systems) to ensure that the writer isn't affected by reads? Some QFS customers use this configuration, with one system writing to disk and another system reading from the same disk. This requires the use of a SAN file system but it provides the potential for much greater (and controllable) throughput. If your I/O needs are modest (less than a few GB/second), this is overkill.

We're writing (currently) about 10MB/s; this may rise to about double that if we add the other multiplexes. We're taking the BBC's DVB content off-air, splitting it into programme chunks, and moving it from the machine that's doing the recording to a filestore. As it's off-air streams, we have no control over the inbound data -- it just arrives whether we like it or not.
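On the "reduce the number of iops available for reading" question: with no kernel-level I/O prioritisation to lean on, one user-space workaround is for the readers to throttle themselves. A token-bucket limiter is one hypothetical way to cap a reader's I/O rate (purely an illustrative sketch, not a ZFS facility):

```python
import time

class IopsThrottle:
    """Token bucket: call acquire() before each read I/O to cap the
    reader at roughly `iops_limit` operations per second, leaving
    bandwidth for the writer even when the kernel won't arbitrate."""

    def __init__(self, iops_limit):
        self.capacity = iops_limit        # burst size: one second's worth of I/Os
        self.rate = float(iops_limit)     # tokens replenished per second
        self.tokens = float(iops_limit)
        self.last = time.monotonic()

    def acquire(self):
        """Block until one I/O token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

Crude, but it directly addresses the starvation symptom: the writer only falls behind when readers are allowed to saturate the vdevs.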
We do control the movement from the recorder to the filestore, but as this is largely achieved via a Perl module calling sendfile(), even that's mostly out of our hands. Definitely a headscratcher. -- Dickon Hood Due to digital rights management, my .sig is temporarily unavailable. Normal service will be resumed as soon as possible. We apologise for the inconvenience in the meantime. No virus was found in this outgoing message as I didn't bother looking.
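For anyone wanting to reproduce the transfer path outside Perl, the same sendfile(2) zero-copy loop is easy to drive from Python. A minimal sketch (Linux semantics assumed - on some platforms os.sendfile requires the destination to be a socket rather than a regular file):

```python
import os

def send_file(out_fd, path, chunk=1 << 20):
    """Copy `path` to an already-open destination descriptor via
    sendfile(2), 1 MiB at a time, without staging data in user space.
    Returns the number of bytes transferred."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset < size:
            sent = os.sendfile(out_fd, f.fileno(), offset,
                               min(chunk, size - offset))
            if sent == 0:      # unexpected EOF on the source
                break
            offset += sent
    return offset
```

Note this does nothing to change the scheduling behaviour discussed above - the I/Os it issues compete for the vdevs just like any others.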
Re: [zfs-discuss] Yager on ZFS
from the description here http://www.djesys.com/vms/freevms/mentor/rms.html so who cares here ? RMS is not a filesystem, but more a CAS type of data repository

On Dec 8, 2007 7:04 AM, Anton B. Rang [EMAIL PROTECTED] wrote: NOTHING anton listed takes the place of ZFS

That's not surprising, since I didn't list any file systems. Here are a few file systems and some of their distinguishing features. None of them do exactly what ZFS does. ZFS doesn't do what they do, either.

QFS: Very, very fast. Supports segregation of data from metadata, and classes of data. Supports SAN access to data.

XFS: Also fast; works efficiently on multiprocessors (in part because allocation can proceed in parallel). Supports SAN access to data (CXFS). Delayed allocation allows temporary files to stay in memory and never even be written to disk (and improves contiguity of data on disk).

JFS: Another very solid journaled file system.

GPFS: Yet another SAN file system, with tighter semantics than QFS or XFS; highly reliable.

StorNext: Hey, it's another SAN file system! Guaranteed I/O rates (hmmm, which XFS has too, at least on Irix) -- a key for video use.

SAMFS: Integrated archiving -- got petabytes of data that you need virtually online? SAM's your man! (well, at least your file system)

AdvFS: A journaled file system with snapshots, integrated volume management, online defragmentation, etc.

VxFS: Everybody knows, right? Journaling, snapshots (including writable snapshots), highly tuned features for databases, block-level change tracking for more efficient backups, etc.

There are many, many different needs. There's a reason why there is no one true file system. -- Anton

Better yet, you get back to writing that file system that's going to fix all these horrible deficiencies in zfs. Ever heard of RMS? A file system which supports not only sequential access to files, or random access, but keyed access (e.g. update the record whose key is 123)?
A file system which allowed any program to read any file, without needing to know about its internal format? (so such an indexed file could just be read as a sequence of ordered records by applications which processed ordinary text files.) A file system which could be shared between two, or even more, running operating systems, with direct access from each system to the disks. A file system with features like access control with alarms, MAC security on a per-file basis, multiple file versions, automatic deletion of temporary files, verify-after-write.

You probably wouldn't be interested; but others would. It solves a particular set of needs (primarily in the enterprise market). It did it very well. It did it some 30 years before ZFS. It's very much worthwhile listening to those who built such a system, and their experiences, if your goal is to learn about file systems. Even if they don't suffer fools gladly.

If you've got a problem for which ZFS is the best solution, great. Use it. But don't think that it solves every problem, nor that it's perfect for everyone -- even you. (One particular area to think about -- how do you back up your multi-terabyte pool? And how do you restore an individual file from your backups?)

-- Blog: http://fakoli.blogspot.com/
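For those who never used RMS, the keyed-access point is easy to demonstrate in miniature. RMS indexed files offer both views natively, as a file-system service; the sketch below merely fakes the same dual view in user space on top of SQLite (my substitution - RMS needs no database underneath it):

```python
import sqlite3

class IndexedFile:
    """Sketch of RMS-style indexed organisation: records addressable
    by key ('update the record whose key is 123') yet also readable
    as an ordered sequence, as by a plain sequential reader."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS rec (key TEXT PRIMARY KEY, data TEXT)")

    def put(self, key, data):
        """Insert a record, or update in place if the key exists."""
        self.db.execute("INSERT OR REPLACE INTO rec VALUES (?, ?)", (key, data))
        self.db.commit()

    def get(self, key):
        """Random access by key; None if absent."""
        row = self.db.execute(
            "SELECT data FROM rec WHERE key = ?", (key,)).fetchone()
        return row[0] if row else None

    def records(self):
        """Sequential view: all records in key order."""
        return [d for (d,) in self.db.execute(
            "SELECT data FROM rec ORDER BY key")]
```

The point of the RMS design was exactly that applications got this without embedding a database: the record structure lived in the file system layer itself.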
Re: [zfs-discuss] Mail system errors (On Topic).
Yet another prime example. can you guess? wrote: Please see below for an example. Ah - I see that you'd rather be part of the problem than part of the solution. Perhaps you're also one of those knuckle-draggers who believes that a woman with the temerity to leave her home after dark shouldn't be allowed to use force against attackers, Becuz she wuz askin' for it. But what can one expect from someone whose mother hasn't yet taught him not to top-post? Wake me up if you ever have an actual technical contribution to make to the discussion. - bill
Re: [zfs-discuss] Yager on ZFS
can you run a database on RMS? I guess it's not suited. We are already trying to get rid of a 15-year-old filesystem called WAFL, and a 10-year-old file system called Centera, so do you think we are going to consider a 35-year-old filesystem now... computer science has made a lot of improvements since.

On Dec 8, 2007 1:38 PM, can you guess? [EMAIL PROTECTED] wrote: from the description here http://www.djesys.com/vms/freevms/mentor/rms.html so who cares here ? RMS is not a filesystem, but more a CAS type of data repository Since David begins his description with the statement RMS stands for Record Management Services. It is the underlying file system of OpenVMS, I'll suggest that your citation fails a priori to support your allegation above. Perhaps you're confused by the fact that RMS/Files-11 is a great deal *more* of a file system than most Unix examples (though ReiserFS was at least heading in somewhat similar directions). You might also be confused by the fact that VMS separates its file system facilities into an underlying block storage and directory layer specific to disk storage and the upper RMS deblocking/interpretation/pan-device layer, whereas Unix combines the two. Better acquainting yourself with what CAS means in the context of contemporary disk storage solutions might be a good idea as well, since it bears no relation to RMS (nor to virtually any Unix file system). - bill

-- Blog: http://fakoli.blogspot.com/
Re: [zfs-discuss] Separate ZIL
On Thu, Dec 06, 2007 at 03:27:33PM -0800, Scott Laird wrote: MAX3xxxRC (where xxx represents the size) and you'll be wearing a big smile every time you work on a system so equipped. Hmmm, on second glance, 36G versions of that seem to be going for $40. Do you mean $140, or am I missing a really good deal somewhere? The first hit on Google Products is for $40. No idea if they are real or reputable or what. http://www.itovernight.com/store/comersus_viewItem.asp?idProduct=866720 -brian -- Perl can be fast and elegant as much as J2EE can be fast and elegant. In the hands of a skilled artisan, it can and does happen; it's just that most of the shit out there is built by people who'd be better suited to making sure that my burger is cooked thoroughly. -- Jonathan Patschke
Re: [zfs-discuss] Separate ZIL
http://www.itovernight.com/store/comersus_viewItem.asp?idProduct=866720 Fly by night, from the looks of it. http://www.resellerratings.com/store/IToverNight $140 looks like bottom dollar from anywhere reputable (which is more in line with what I would expect). http://castle.pricewatch.com/s/search.asp?s=FUJ-MAX3036RC+group1=1sci=26c=Hard+%2F+Removable+Drives
Re: [zfs-discuss] Mail system errors (On Topic).
Yet another prime example. Ah - yet another brave denizen (and top-poster) who's more than happy to dish it out but squeals for administrative protection when receiving a response in kind. The fact that your pleas seem to be going unanswered actually reflects rather well on whoever is managing this forum: even if they don't particularly care for my attitude, they appear to recognize that there's a good reason why I deal with some of you as I have. Do have a nice day. - bill
Re: [zfs-discuss] Yager on ZFS
can you run a database on RMS?

As well as you could on most Unix file systems. And you've been able to do so for almost three decades now (whereas features like asynchronous and direct I/O are relative newcomers in the Unix environment).

I guess it's not suited

And you guess wrong: that's what happens when you speak from ignorance rather than from something more substantial.

we are already trying to get rid of a 15-year-old filesystem called WAFL,

Whatever for? Please be specific about exactly what you expect will work better with whatever you're planning to replace it with - and why you expect it to be anywhere nearly as solid.

and a 10-year-old file system called Centera,

My, you must have been one of the *very* early adopters, since EMC launched it only 5 1/2 years ago.

so do you think we are going to consider a 35-year-old filesystem now... computer science has made a lot of improvements since

Well yes, and no. For example, most Unix platforms are still struggling to match the features which VMS clusters had over two decades ago: when you start as far behind as Unix did, even continual advances may still not be enough to match such 'old' technology. Not that anyone was suggesting that you replace your current environment with RMS: if it's your data, knock yourself out using whatever you feel like using. On the other hand, if someone else is entrusting you with *their* data, they might be better off looking for someone with more experience and sense.

- bill
Re: [zfs-discuss] Yager on ZFS
can you guess? wrote: can you run a database on RMS? As well as you could on most Unix file systems. And you've been able to do so for almost three decades now (whereas features like asynchronous and direct I/O are relative newcomers in the Unix environment). Funny, I remember trying to help customers move their applications from TOPS-20 to VMS, back in the early 1980s, and finding that the VMS I/O capabilities were really badly lacking. RMS was an abomination -- nothing but trouble, and another layer to keep you away from your data. Of course, TOPS-20 isn't Unix; it's one of the things the original Unix developers couldn't afford, so they had to try to write something that would work for them and would run on hardware they *could* afford (the other one was Multics of course). -- David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/ Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/ Photos: http://dd-b.net/photography/gallery/ Dragaera: http://dragaera.info
Re: [zfs-discuss] Yager on ZFS
can you guess? wrote: can you run a database on RMS? As well as you could on most Unix file systems. And you've been able to do so for almost three decades now (whereas features like asynchronous and direct I/O are relative newcomers in the Unix environment).

Funny, I remember trying to help customers move their applications from TOPS-20 to VMS, back in the early 1980s, and finding that the VMS I/O capabilities were really badly lacking.

Funny how that works: when you're not familiar with something, you often mistake your own ignorance for actual deficiencies. Of course, the TOPS-20 crowd was extremely unhappy at being forced to migrate at all, and this hardly improved their perception of the situation. If you'd like to provide specifics about exactly what was supposedly lacking, it would be possible to evaluate the accuracy of your recollection.

RMS was an abomination -- nothing but trouble,

Again, specifics would allow an assessment of that opinion.

and another layer to keep you away from your data.

Real men use raw disks, of course. And with RMS (unlike Unix systems of that era) you could get very close to that point if you wanted to without abandoning the file level of abstraction - or work at a considerably more civilized level if you wanted that with minimal sacrifice in performance (again, unlike the Unix systems of that era, where storage performance was a joke until FFS began to improve things - slowly).

VMS and RMS represented a very different philosophy than Unix: you could do anything, and therefore were exposed to the complexity that this flexibility entailed. Unix let you do things one simple way - whether it actually met your needs or not. Back then, efficient use of processing cycles (even in storage applications) could be important - and VMS and RMS gave you that option. Nowadays, trading off cycles to obtain simplicity is a lot more feasible, and the reasons for the complex interfaces of yesteryear can be difficult to remember.
- bill