Re: [zfs-discuss] Unable to add cache device
As for source, here you go :) http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650
Re: [zfs-discuss] Noob: Best way to replace a disk when you're out of internal connectors?
Umm, why do you need to do it the complicated way? Here it is from the zpool man page:

     zpool replace [-f] pool old_device [new_device]

         Replaces old_device with new_device. This is equivalent to
         attaching new_device, waiting for it to resilver, and then
         detaching old_device. The size of new_device must be greater
         than or equal to the minimum size of all the devices in a
         mirror or raidz configuration. new_device is required if the
         pool is not redundant. If new_device is not specified, it
         defaults to old_device. This form of replacement is useful
         after an existing disk has failed and has been physically
         replaced. In this case, the new disk may have the same
         /dev/dsk path as the old device, even though it is actually a
         different disk. ZFS recognizes this.

You just need to hot-swap the failed disk (say c0d0), utter "zpool replace <pool> c0d0" and go have a coffee or something.
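For instance, a minimal sketch assuming a pool named "tank" and a failed disk c0d0 (the names are hypothetical):

    zpool replace tank c0d0    # new disk sits on the same /dev/dsk path
    zpool status tank          # watch the resilver progress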
Re: [zfs-discuss] ZFS Import Problem
I'm not an expert, but for what it's worth:

1. Try the original system. It might be a fluke, a bad cable, or anything else intermittent. I've seen it happen here. If so, your pool may be all right.

2. For the (defunct) originals, I'd say we'd need to take a look into the sources to find out if something needs to be done. AFAIK, device paths aren't hard-coded. ZFS doesn't care where the disks are, as long as it finds them and they contain the right label.
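A quick way to test that "paths don't matter" claim (the pool name and directory below are hypothetical): "zpool import" with no arguments scans /dev/dsk and lists any pools it can assemble, and -d points it at a different device directory:

    zpool import                  # scan /dev/dsk, list importable pools
    zpool import tank             # import the pool named "tank"
    zpool import -d /mydir tank   # search a non-default device directory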
Re: [zfs-discuss] Upgrade from UFS - ZFS on a single disk?
Seriously, if I had that many machines in the _field_, I'd ring my support rep directly. Getting one step wrong in instructions provided on a forum might mean spending quite a long time fixing every machine (or worse, re-installing) one by one from scratch! Get a support engineer to walk you through this... every step documented and tested (twice! with power failures thrown into the mix)... this isn't a question for the forum.
Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...
Hi Gray,

You've got a nice setup going there. A few comments:

1. Do not tune ZFS without a proven test case showing that tuning helps, except...
2. For databases. Tune the recordsize of that particular FS to match the DB record size (a one-liner; see below).

A few questions:

* How are you divvying up the space?
* How are you taking care of redundancy?
* Are you aware that each layer of ZFS needs its own redundancy?

Since you have a mixed use case here, I would be surprised if one general config covered it all, though it might do with some luck.
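The recordsize tuning in point 2 is a single property on a single filesystem; e.g., for an 8K database page size (dataset name and page size hypothetical):

    zfs set recordsize=8k tank/db   # match the DB's I/O size before loading data
    zfs get recordsize tank/db      # confirm it took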
Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...
Just a random spectator here, but I think the artifacts you're seeing are not due to file size, but rather to record size. What is the ZFS recordsize? On a personal note, I wouldn't do non-concurrent (?) benchmarks. They are at best useless and at worst misleading for ZFS.

- Akhilesh.
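Checking it takes one command (the dataset name is hypothetical):

    zfs get recordsize tank/test   # 128K is the default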
Re: [zfs-discuss] Error: value too large for defined data type
You need to run /usr/bin/amd64/ls. The 32-bit ls fails with that error (EOVERFLOW) when a file's metadata doesn't fit in its 32-bit fields; some utils, e.g. VirtualBox shared folders in an old build, munge file dates that way.
Re: [zfs-discuss] ZFS: First use recommendations
Hi,

My setup is arguably smaller than yours, so YMMV.

Key point: I have found that using the infrastructure provided natively by Solaris/ZFS is the best choice. I had been using CIFS... it's unpredictable when some random Windows machine will stop seeing the shares. XP/Server 2003/Vista - too many things go wrong. So here is what I do:

1. I use xVM VirtualBox for Windows.

2. Snapshots/clones are managed by ZFS... just put the vmdk on its own FS and let ZFS handle all sorts of shiny stuff. In your case the same thing can be done with ZFS volumes, if you choose to go with iSCSI. (A sketch of this follows at the end of this post.)

3. There is a reason the whole industry relies on NFS (learned the hard way). In short - it works! I have just installed SFU on all Windows clients and couldn't be happier. No matter what happens to the server or network (unplugging the wrong cable, or a router rebooting), the clients *always* behave predictably. When the server/network comes back up, everything just starts working again. I do *all* shares through NFS now - Windows, Linux and Solaris. Easy to set up *and* predictable!

In short, for minimum fuss, let the bottom-most layer where it makes sense manage things... and in any case, don't let the virtualization products manage the storage in _any_ way. ZFS does it best, so let it... and stick with NFS for sharing. The SFU NFS client/server are small and work really well, and they come bundled with Windows Server 200x.

For 8 drives, go with raidz2. Two drives going bad is easy in an 8-drive setup.
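A minimal sketch of the per-VM filesystem idea from point 2 (pool and dataset names hypothetical):

    zfs create tank/vmdk                 # one filesystem per VM image
    zfs set sharenfs=on tank/vmdk        # share it natively over NFS
    zfs snapshot tank/vmdk@pre-upgrade   # instant rollback point before risky changes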
Re: [zfs-discuss] OT: Formatting Problem of ZFS Adm Guide (pdf)
Waynel,

It takes a significant amount of work to typeset any large document, especially a technical document in which you have to adhere to a set of strict typesetting guidelines. In these cases the separation of content and style is essential and can't be stressed enough. Word processors have no mechanism to enforce this separation, so you cannot guarantee that a given document strictly follows the set standard of styling rules - these include presentation AND language rules, e.g. how to hyphenate certain words and how to decide how long a given dash should be. In a word processor this is manual, labor-intensive work; current advances have made word processors good enough for one-off documents, but they are still grossly inadequate for large documents and manuals which have to be written by one group of people, styled by another, proof-read, cross-referenced, and updated from time to time. SGML-based tools (e.g. DocBook), LaTeX/TeX and Adobe FrameMaker are the only tools that can do this at present.
Re: [zfs-discuss] OT: Formatting Problem of ZFS Adm Guide (pdf)
> I don't doubt the superiority of LaTeX/FrameMaker in conjunction with Distiller in producing (the PDF versions of) nicely typeset books and brochures. But how good is a tool if it produces a product that its intended users can NOT read? This is what prompted

You seem to have missed the following reply by Richard:

= Quote ===
This is not a PDF problem, it is a freetype font problem which was introduced with freetype 2.3.6 in b93 and should be fixed in b97.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6723656
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6712820
-- richard
===

This is a freetype problem, and such problems can impact anything. Nevada builds are nothing but lightly tested developer snapshots - and they didn't even introduce the problem, the freetype folks did! On my system (nv84) OpenOffice looked so bad (thick, ugly fonts) that I avoided using it until I compiled and replaced freetype.

Secondly, LaTeX/DocBook/SGML are as free and open source as you can get, and PDF is an open spec - anyone can create a viewer. It's like using Gimp instead of GNOME Paint to produce art: just as Gimp shouldn't be blamed if someone's image viewer is broken and they can't see the image, we shouldn't blame the tools that generated the PDF. The PDFs render fine in the following viewers I have on this nv84 system:

* xpdf (3.02)
* Acrobat Reader 8.0 (running via wine)
* Foxit Reader 2.3 (running on wine)
* GhostScript

Thirdly, most (all?) of the docs can be created independently by the community. In fact, you can start creating one yourself rather than wait for, or depend on, Sun or anybody else. Isn't that what open source is all about?

- Akhilesh
Re: [zfs-discuss] [RFC] Improved versioned pointer algorithms
> Btrfs does not suffer from this problem as far as I can see because it uses reference counting rather than a ZFS-style dead list. I was just wondering if ZFS devs recognize the problem and are working on a solution.

Daniel,

Correct me if I'm wrong, but how does reference counting solve this problem? The ZFS terminology is as follows:

1. Filesystem: a writable filesystem with no references or parent.
2. Snapshot: an immutable point-in-time view of a filesystem.
3. Clone: a writable filesystem whose parent is a given snapshot.

Under this terminology, it is easy to see that the dead list is equivalent to reference counting. The problem is rather that to have a clone, you need to keep its snapshot around, since by definition a clone is the child of a snapshot (with the exception that zfs promote can make a clone a direct child of the filesystem - it's like turning a grandchild into a child; see the commands below). So what is the terminology of btrfs?
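A minimal sketch of that parent/child relationship (pool and dataset names hypothetical):

    zfs snapshot tank/fs@snap1           # immutable child of tank/fs
    zfs clone tank/fs@snap1 tank/clone1  # writable child of the snapshot
    zfs promote tank/clone1              # clone and origin swap places in the tree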
Re: [zfs-discuss] OT: Formatting Problem of ZFS Adm Guide (pdf)
I doubt it. Star/OpenOffice are word processors... and like Word they are not suitable for typesetting large documents. SGML tools, FrameMaker and TeX/LaTeX are the only ones capable of doing that.
Re: [zfs-discuss] Formatting Problem of ZFS Adm Guide (pdf)
> Welcome to font hell :-(. For many years, Sun documentation was written in the Palatino font, which is (or was?) not freely available. I believe

Umm, no. PDF supports font embedding. That is how so many PDFs are out there (company brochures, fliers etc.) with commercial fonts, and they look just right. I checked one of the PDFs and it does have all the fonts (including Palatino and Helvetica) embedded as Type 1 fonts. It may be something about the Type 1 fonts, or something else FrameMaker is generating, that evince is choking on, because the PDFs look fine with Adobe Reader 8 (under wine-1.1), Foxit Reader (wine 1.1) and xpdf (3.02 from Blastwave). Even ghostscript renders them OK.
Re: [zfs-discuss] [RFC] Improved versioned pointer algorithms
> On Monday 14 July 2008 08:29, Akhilesh Mritunjai wrote:
>> Writable snapshots are called clones in zfs. So in fact, you have trees of snapshots and clones. Snapshots are read-only, and you can create any number of writable clones from a snapshot; they behave like a normal filesystem, and you can again take snapshots of the clones.
>
> So if I snapshot a filesystem, then clone it, then delete a file from both the clone and the original filesystem, the presence of the snapshot will prevent the file blocks from being recovered, and there is no way I can get rid of those blocks short of deleting both the clone and the snapshot. Did I get that right?

Right. Snapshots are immutable. Isn't this the whole point of a snapshot?

    FS1(file1) -> Snapshot1(file1)
    delete FS1/file1      : Snapshot1/file1 is still intact
    Snapshot1(file1) -> CloneFS1(file1)
    delete CloneFS1/file1 : Snapshot1/file1 is still intact (the snapshot is immutable)

There is a lot of information in the ZFS docs on the ZFS community site. For low-level info, you may refer to the ZFS On-Disk Format document.

Regards
- Akhilesh
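The same scenario as commands (names hypothetical):

    zfs snapshot tank/fs1@snap1
    zfs clone tank/fs1@snap1 tank/clone1
    rm /tank/fs1/file1 /tank/clone1/file1
    ls /tank/fs1/.zfs/snapshot/snap1/   # file1 is still there, pinning its blocks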
Re: [zfs-discuss] Formatting Problem of ZFS Adm Guide (pdf)
Evince likes to fuzz a number of PDFs. I too can't seem to nail down the problem, but a number of PDFs from Sun show it (very wrong character spacing), and they have all been generated using FrameMaker. PDFs generated using TeX/LaTeX are *usually* OK.
Re: [zfs-discuss] Announcement: The Unofficial Unsupported Python ZFS API
Hi,

I had a quick look. Looks great! A suggestion: from the given example, I think the API could be made more pythonic. Python is dynamically typed, and properties can be dynamically looked up too. Thus, instead of prop_get_* we could have:

1. prop(): a generic function returning typed values. The built-in ZFS properties would be returned with the correct type, and user properties would be returned as generic strings.
2. The ability to just say z.property_name (e.g. z.compression). This would be trivial syntactic sugar to implement. (A rough sketch follows below.)

Also, some work would be needed to provide pythonic iterators and other idioms, so that the API does not feel like a Python interface to C.

Thanks
- Akhilesh
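A hypothetical sketch of that sugar - the Dataset class and the low-level prop_get() call are invented for illustration, not the actual API:

    class Dataset(object):
        # Built-in properties and their native Python types (illustrative subset).
        _TYPED = {"compression": str, "quota": int, "used": int}

        def __init__(self, handle):
            self._handle = handle  # assumed low-level C-binding object

        def prop(self, name):
            # Built-ins come back with their native type;
            # user properties fall through as plain strings.
            raw = self._handle.prop_get(name)  # assumed low-level call
            return self._TYPED.get(name, str)(raw)

        def __getattr__(self, name):
            # z.compression instead of z.prop_get_string("compression")
            return self.prop(name)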
Re: [zfs-discuss] [RFC] Improved versioned pointer algorithms
Still reading, but I would like to correct one point.

> * It would seem that ZFS is deeply wedded to the concept of a single, linear chain of snapshots. No snapshots of snapshots, apparently.

http://blogs.sun.com/ahrens/entry/is_it_magic

Writable snapshots are called clones in zfs. So in fact, you have trees of snapshots and clones. Snapshots are read-only, and you can create any number of writable clones from a snapshot; they behave like a normal filesystem, and you can again take snapshots of the clones.
Re: [zfs-discuss] X4540
> Well, I'm not holding out much hope of Sun working with these suppliers any time soon. I asked Vmetro why they don't work with Sun, considering how well ZFS seems to fit with their products, and this was the reply I got:
>
>> Micro Memory has a long history of working with Sun, and I worked at Sun for almost 10 years developing Solaris x86. We have tried to get various Sun Product Managers responsible for these servers (Thumper) to work with us on this and they have said no. We have tried to get Sun's integration group to work with us (where they would integrate upon customer request, charging the customer for integration and support), and they have also said no. They don't feel there is an adequate business case to justify it, as all of the opportunities are so small.
>
> This is an incredibly frustrating response for all the Sun customers who could have really benefited from these cards. Why develop the ability to move the ZIL to NVRAM devices, benchmark the Thumper on one of them, and then refuse to work with the manufacturer to offer the card to customers?

Maybe post this on Jonathan's blog. With the stock down so much, it's bad that some guy somewhere is not doing his/her job of providing something the customers want.
Re: [zfs-discuss] please help with raid / failure / rebuild calculations
> Thanks for your comments. FWIW, I am building an actual hardware array, so even though I _may_ put ZFS on top of the hardware array's 22TB drive that the OS sees (I may not), I am focusing purely on the controller rebuild.

Not letting ZFS handle (at least one level of) redundancy is a bad idea. Don't do that!
Re: [zfs-discuss] ZFS problem mirror
Hi,

I too strongly suspect that some HW component is failing. It is rare to see all drives (in your case both drives in the mirror, plus the boot drive) reporting errors at the same time. zpool clear just resets the error counters; you still have errors in there. Start with the following components (in this order):

1. Memory: use memtest86+ (it is on most live CDs... it is very common).
2. Power supply: search the forums, it is a very common culprit.
3. Your mobo/disk controller (??? try another one, maybe).

Have you also experienced any kernel panics or strange random software crashes on this box?
Re: [zfs-discuss] Recovering an array on Mac
This shouldn't have happened. Do you have zdb on the Mac? If yes, you can try it. It is (intentionally?) undocumented, so you'll need to search for various scripts on blogs.sun.com and here; something might just work. But do check what Apple is actually shipping. You may want to use dtrace to find out why it can't find any pools. I doubt it is a labelling mistake, as the labels would have been flushed long before if you were copying data when you lost power - the ZFS transactional property guarantees that.
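One zdb invocation worth trying first (the device name here is hypothetical): it dumps the vdev labels, which tells you whether the pool metadata survived at all:

    zdb -l /dev/disk1s2   # print the four ZFS labels on that device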
Re: [zfs-discuss] /var/log as a single zfs filesystem -- problems at boot
Can't say about /var/log, but I have a system here with /var on zfs. My assumption was that not just /var/log but essentially all of /var is supposed to be runtime cruft, and so can be treated equally.
Re: [zfs-discuss] Some basic questions about getting the best performance for database usage
I feel I'm being misunderstood. RAID = Redundant Array of Inexpensive Disks. I meant to state: let ZFS deal with the redundancy. If you want an "AID", by all means have your RAID controller do all the striping/mirroring it can to help with throughput or ease of managing drives - but let ZFS handle the redundancy part. I'm not counting on the redundancy offered by traditional RAID because, as you can see from just the posts in this forum:

1. It doesn't work.
2. It bites when you least expect it to.
3. You can do nothing but resort to tapes and a LOT of aspirin when you get bitten.

- Akhilesh
Re: [zfs-discuss] Some basic questions about getting the best performance for database usage
> I'll probably be having 16 Seagate 15K5 SAS disks, 150 GB each. Two in HW raid1 for the OS, two in HW raid 1 or 10 for the transaction log. The OS does not need to be on ZFS, but could be.

Whatever you do, DO NOT mix ZFS and HW RAID. ZFS likes to handle redundancy all by itself. It's much smarter than any HW RAID, and it does NOT like it when it detects a data corruption it can't fix (i.e. no replicas). HW RAIDs can't fix data corruption, and that leads to a very unhappy ZFS. Let ZFS handle all redundancy.
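Concretely, that means exporting the disks individually (JBOD) and mirroring in ZFS rather than in the controller; the pool and device names below are hypothetical:

    zpool create txlog mirror c1t0d0 c1t1d0   # ZFS-level mirror for the transaction log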
Re: [zfs-discuss] zfs corruption...
If there was no redundancy configured in ZFS, then you're mostly toast. RAID is no protection against data errors, as the ZFS guys have said and you have just discovered. I think your only option is to somehow set up a recent build of OpenSolaris (05/08 or SXCE), configure it not to panic on checksum failure (just return an I/O error), and import the pool. Your data is mostly toast, though. Please don't use ZFS without configuring redundancy - and if you do, please make sure you have backups!
Re: [zfs-discuss] The ZFS inventor and Linus sitting in a tree?
On May 18, 2008, at 14:01, Mario Goebbels wrote:

> ZFS on Linux on Thumper would actually be very interesting to many of them. I think that's good for Sun. Of course, ZFS on Linux on

Umm, how many Linux shops buy support and/or HW from Sun? If it's a Linux shop, the money is going (in order) to these people: IBM, Red Hat, Novell, Dell. All of those are - technically - Sun competitors in some sphere. If you consider software stacks, there are only three companies in the world with a complete SW stack: Sun, IBM and Microsoft. If you throw HW into the mix, there are only two: Sun and IBM. Figuring out who's printing money and who's contributing the most code is left as an exercise to the reader.

- mritun
Re: [zfs-discuss] Any fix for the ZFS pool corruption Bug 6393634 ?
From the bug description, it's actually not pool corruption; rather, the error handling is not comprehensive. Your data is fine; you need to upgrade to snv77+ or S10u5 for the fix.

- mritun
[zfs-discuss] zfs diff @snap1 @snap2
Hi,

Is it possible to see what changed between two snapshots (efficiently)? I tried to take a look at what zfs send -i does, and I found that it operates at a very low (DMU) level and basically dumps the blocks. Any pointers on extracting inode info from this stream, or otherwise?

- mritun
Re: [zfs-discuss] sharenfs with over 10000 file systems
New, yes. Aware - probably not. That users would create many filesystems, given how cheap they are, was an easy guess, but I somehow don't think anybody envisioned users creating tens of thousands of them. ZFS - too good for its own good :-p
Re: [zfs-discuss] sharenfs with over 10000 file systems
I remember reading a discussion where these kinds of problems were discussed. Basically it boils down to everything else not being aware of the radical change in the filesystem concept. All these things are being worked on, but it might take some time before everything is made aware that, yes, it's no longer unusual to have 10,000+ filesystems on one machine.
Re: [zfs-discuss] Panic on Zpool Import (Urgent)
Hi Ben,

Not that I know much, but while monitoring the posts I read some time ago that there was a bug/race condition in the slab allocator which results in a panic on double free (ss != NULL). I think the zpool is fine, but your system is tripping on this bug. Since it is snv43, I'd suggest upgrading - is LU or a fresh install possible? Can you quickly try importing it on a BeleniX live CD/USB?

- Akhilesh

PS: I'll post the bug# if I find it.
Re: [zfs-discuss] Panic on Zpool Import (Urgent)
Most probable culprit (close, but not identical stacktrace): http://bugs.opensolaris.org/view_bug.do?bug_id=6458218 - fixed since snv60.
Re: [zfs-discuss] Performance writing to USB drive, performance reporting
USB2 giving you ~30MB/s is normal... a little better than mine (on Windows, ~25MB/s), actually. For better performance, switch to eSATA or FireWire; even FW400 will give you better results than USB, as there is less overhead. However, I'm sure I saw some FW+ZFS related bug in the bug database some time ago - please check.
Re: [zfs-discuss] R/W ZFS on Leopard Question -or- Where's my 40MB?
> SUMMARY: 1) Why the difference between pool size and fs capacity?

With ZFS, take df output with a grain of salt - add more if compression is turned on. ZFS being quite complicated, it seems only an approximate free space is reported, which won't be too far wrong and suffices for the purpose. But if you're expecting it to be correct to the last block, it won't be.

> 2) If this is normal overhead, then how do you examine these aspects of the fs (commands to use, background links to read, etc.)? (If you say RTFM then please supply a page number for 817-2271.pdf)

No public mechanism currently exists, AFAIK. Some black magic with dtrace might make it possible to look at the FS data structures, or by reading the code and the ZFS On-Disk Format document one /could/ possibly figure it out.

> 3) What's the relationship between pools (zpool) and filesystems (zfs command)? Is there a default fs created when the pool is created?

Yes. As soon as you create a pool, it can be used as a FS; nothing else is needed. You can of course create additional filesystems in the pool, but one is always available to you (you may or may not like it... I keep it unmounted). (A two-command sketch appears at the end of this post.)

> 4) BONUS QUESTION: Is Sun currently using / promoting / shipping hardware that *boots* ZFS? (e.g. last I checked, even stuff like Thumper did not use ZFS for the 2 mirrored boot drives (UFS?) but used ZFS for the 10,000 other drives (OK, maybe there aren't 10,000 drives, but there sure are a lot)).

ZFS boot didn't get integrated into even Nevada until very recently, let alone backported to Solaris 10. I doubt it is ready for production use yet. The new OpenSolaris developer preview (aka Project Indiana) installs ZFS boot by default (no UFS needed). So things are moving, but we still have a long way to go before everything is stabilized, documented, corner cases identified, recovery tools and OS install/update applications updated, etc. etc.

> 5) BONUS QUESTION #2: How does a frustrated yet extremely seasoned Mac/OS X technician with a terrific Solaris background find happiness by landing a job at his other favorite company, Sun? (My friend wants to know.)

WARNING: Zen mode ON! One has to find happiness within. A more correct question might be: would it be better for you to switch to working for Sun? Well, I personally admire Sun's engineering. It's one of the *few* places left where you are allowed to dream, and of course, build! If that is what you want to do, you might like working for them very much!

> 6) FINAL QUESTION (2 parts): (a) When will we see default booting to ZFS?

You can see it now... download the OpenSolaris Developer Preview live CD and install it to HDD. It's there!

> (b) [When] will we see ZFS as the default fs on OS X?

Only when uncle Stevie says so! (Don't hold your breath.)
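The pool-equals-filesystem point in (3), as two commands (pool and device names hypothetical):

    zpool create tank mirror c1t0d0 c1t1d0   # pool plus a default fs, mounted at /tank
    zfs create tank/home                     # an extra filesystem inside the same pool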
Re: [zfs-discuss] Questions from a windows admin - Samba, shares quotas
Yes, it will work, and quite nicely indeed. But you need to be careful: currently ZFS mounting is not instantaneous, and if you have, say, a few thousand users, you might be in for a rude surprise as the system takes its own merry time (~ a few hours) mounting them all at the next reboot. Even with the automounter, things won't be that fast. The ZFS philosophy of a helluva ton of filesystems breaks a lot of tools written under the assumption of "who would ever need more than 4 filesystems?". To test it, create $NUM_USERS filesystems, reboot the server, and see if everything comes up OK and in acceptable time.
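The suggested test is a one-liner; the pool name and user count are hypothetical:

    zfs create tank/users
    for i in $(seq 1 5000); do zfs create tank/users/u$i; done
    # now reboot and time how long the mount storm takes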
Re: [zfs-discuss] O.T. patches for OpenSolaris
OpenSolaris builds are development snapshots... they're not a release, and thus there are no patches. SXCE is just a binary build of these snapshots... it's there as a convenience only, and "patches" are applied as in every other development project: by updating from the source repository, compiling and installing (aka BFU).
[zfs-discuss] Google paper on disk reliability
Hi folks,

I believe the word will have gone around already: Google engineers have published a paper on disk reliability. It might supplement the ZFS FMA integration and, well, all the numerous debates on spares etc. over here. To quote /.:

> The Google engineers just published a paper on Failure Trends in a Large Disk Drive Population. Based on a study of 100,000 disk drives over 5 years they find some interesting stuff. To quote from the abstract: 'Our analysis identifies several parameters from the drive's self monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, we found that temperature and activity levels were much less correlated with drive failures than previously reported.'

Link to the paper: http://labs.google.com/papers/disk_failures.pdf
[zfs-discuss] Re: ZFS or UFS - what to do?
Oh yep, I know that churning feeling in the stomach that there's got to be a GOTCHA somewhere... it can't be *that* simple!
[zfs-discuss] Re: ZFS on my iPhone?
> So, does anyone know if I can run ZFS on my iPhone? ;-) -- richard

Hi Richard,

Thanks for your interest in running ZFS, the final word in filesystems, on your iPhone. I'd be happy to help you. Please send the iPhone to me at the address provided below and I shall get you going as fast as possible.

Thanks for your inquiry. Looking forward to the iPhone.

Yours truly
- mritun
[zfs-discuss] Re: ZFS problems
Hi,

I recommend going over the ZFS presentation. One of the points it makes is that even in the case of silent errors (like you noticed), other systems just go on: your data gets silently corrupted and you never notice it. A few bit flips in JPEGs and movie files will almost never be noticeable. There are places, however, where it will cause a catastrophe; in day-to-day use we either don't come across them, or if we do, we attribute them to $CAUSE, forget, and go on. ZFS tries to fix this problem as one of its core goals (that is why block checksums are there). Rest assured, ZFS + Solaris has only uncovered, and made uncomfortably evident, a problem that was latent all along. Whether the uncovering itself causes you pain is a different issue. Ignorance is bliss for most humans :-)
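Those block checksums are also what makes an end-to-end integrity check possible on demand (the pool name is hypothetical):

    zpool scrub tank      # re-read and verify every block's checksum
    zpool status -v tank  # lists any files with unrecoverable errors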
[zfs-discuss] Re: bare metal ZFS ? How To ?
Excuse me if I'm mistaken, but I think the question is along the lines of: how do you access and, more importantly, *back up* the ZFS pools/filesystems present on a system by just booting from a CD/DVD? I think the answer would be along the lines of a (forced?) import of the pools present on the system, and then something like "zfs send pool/fs@snap | star". The OP might be looking for something convenient along the lines of ufsdump. I think there is a need for a zfsdump tool (script?), or even better, ZFS integration in star. Maybe Jörg should chip in :-)
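A rough sketch of that live-CD flow (pool and snapshot names hypothetical):

    zpool import -f tank                      # forced import from the rescue environment
    zfs snapshot -r tank@backup               # consistent recursive snapshot
    zfs send tank@backup > /backup/tank.zfs   # one dataset's stream; repeat per fs, or pipe to star/tape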
[zfs-discuss] Re: Re: ZFS for Linux 2.6
Yuen L. Lee wrote:

> opensolaris could be a nice NAS filer. I posted my question on "How to build a NAS box" asking for instructions on how to build a Solaris NAS box. It looks like everyone is busy. I haven't got any response yet. By any chance, do you have any

Hi Yuen,

May I suggest that a better question would have been "How to build a minimal Nevada distribution?" I'm sure it would have gotten more responses, as it is both a more general and a more relevant question. Apart from that unasked-for advice: if my memory serves me right, the BeleniX folks (Moinak and gang) were discussing a similar thing in a thread some time back... chasing them down might be a good idea ;-)

I found some articles on the net on how to build a minimal image of Solaris with networking. Packages relating to storage (ZFS, iSCSI etc.) can be added to it later. The minimal system with the required components is heavy, sure - about 200MB... but that shouldn't be an issue for a *NAS* box. I googled "minimal Solaris configuration" and found several articles.

Hope that helps
- Akhilesh
[zfs-discuss] Re: reproducible zfs panic on Solaris 10 06/06
> zpool status
> # uncomment the following lines if you want to see the system think
> # it can still read and write to the filesystem after the backing store has gone.

Hi,

The UNIX unlink() syscall doesn't remove the inode if it's in use; the inode is marked to be unlinked when its use count falls to zero. So deleting any file has no effect on applications that already have it open. I'm not surprised by the yanking-out-the-USB-drive test (we already know that bug exists)... but this unlinking test is puzzling me. Does the ZFS subsystem close and reopen files in the course of usage?
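The unlink semantics in question, demonstrated from a shell (the path is hypothetical):

    exec 3</tank/fs/file   # hold the file open on descriptor 3
    rm /tank/fs/file       # the name goes away; the inode stays while fd 3 is open
    cat <&3                # contents are still readable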
[zfs-discuss] Re: Re: Unbootable system recovery
Hi,

Like Matt said, unless there is a bug in the code, ZFS should automatically figure out the drive mappings. The real problem, as I see it, is using 16 drives in a single raidz... which means that if two drives malfunction, you're out of luck (raidz2 would survive two failed drives... but I still believe 16 drives in one vdev is too many). May I suggest you re-check the cabling, as a drive going bad might be related to that... or even change the power supply (I got burnt that way). It might just be an intermittent drive malfunction. You might also surface-scan the drives and rule out bad sectors. Good luck :)

PS: When you get your data back, do switch to a raidz2 or mirrored config that can survive the loss of more than one disk; a sketch follows below. My experience (which is not much) shows it doesn't take much to render more than one disk out of 20 or so useless... especially when moving them.

- Akhilesh
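For instance, splitting 16 drives into two raidz2 vdevs instead of one wide raidz (device names hypothetical):

    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
    # each vdev survives any two of its eight disks failing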