Re: [zfs-discuss] BOOT, ZIL, L2ARC on one SSD?
Understood, Edward, and if this were a production data center, I wouldn't be doing it this way. This is for my home lab, so spending hundreds of dollars on SSD devices isn't practical. Can several datasets share a single ZIL and a single L2ARC, or must each dataset have its own?

-- This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
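For what it's worth, log (ZIL) and cache (L2ARC) devices attach at the pool level, so every dataset in that pool shares them automatically. A minimal sketch, assuming a pool named "tank" and hypothetical device names:

```shell
# Log and cache devices are per-pool, not per-dataset; all datasets in
# the pool share them. Pool and device names here are hypothetical.
zpool add tank log c8t1d0s0     # dedicated slog device/slice
zpool add tank cache c8t1d0s1   # L2ARC device/slice
zpool status tank               # log and cache sections appear under the pool
```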
[zfs-discuss] BOOT, ZIL, L2ARC on one SSD?
60GB SSD drives using the SF 1222 controller can be had now for around $100. I know ZFS likes to use the entire disk to do its magic, but under x86, is "the entire disk" the whole physical disk, or just one x86 partition? In the past I have created two partitions with fdisk, but format would only show one of them. Did I do something wrong, or is that just the way it works?

So maybe what I want to do won't work, but this is my thought for a single 60GB SSD: use fdisk to create three partitions, 20GB for boot, 30GB for L2ARC, and 10GB for ZIL. Or are three Solaris partitions on a disk not considered "the entire disk" as far as ZFS is concerned?

Can a ZIL and/or L2ARC be shared among multiple zpools, or must each pool have its own? If each pool must have its own, can a disk be partitioned so a single fast SSD can be shared among multiple pools?

Thanks
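One way this is commonly done is a single Solaris2 fdisk partition spanning the SSD, with slices carved inside it using format(1M), and individual slices handed to different pools. A sketch under those assumptions; the slice layout and pool names are hypothetical:

```shell
# Hypothetical layout inside one Solaris2 partition (s2 is left as the
# conventional whole-disk slice): s0 = 20GB boot, s3 = 10GB slog, s4 = 30GB L2ARC.
zpool add tank log c7t1d0s3     # slog slice for pool "tank"
zpool add tank cache c7t1d0s4   # L2ARC slice for pool "tank"
# (The boot slice s0 would normally be set up by the installer as rpool.)
# Caveat: ZFS only enables the disk's write cache automatically when it is
# given a whole disk; with slices you give that up, which matters for a slog.
```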
Re: [zfs-discuss] Looking for 3.5" SSD for ZIL
> > got it attached to a UPS with very conservative shut-down timing. Or
> > are there other host failures aside from power a ZIL would be
> > vulnerable to (system hard-locks?)?
>
> Correct, a system hard-lock is another example...

How about comparing a non-battery-backed ZIL to running a ZFS dataset with sync=disabled? Which is more risky? This has been an educational thread for me... I was not aware that some SSD drives have DRAM in front of the flash.
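For reference, the sync property mentioned above is set per dataset; a sketch, assuming a dataset named "tank/scratch":

```shell
# Disable synchronous write semantics on one dataset. Risky: writes the
# application believes are committed can be lost on power failure or a
# hard lock (roughly up to one txg interval's worth of data).
zfs set sync=disabled tank/scratch
zfs get sync tank/scratch

# Revert to the default behavior:
zfs set sync=standard tank/scratch
```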
[zfs-discuss] How do you use >1 partition on x86?
So when I built my new workstation last year, I partitioned the one and only disk in half: 50% for Windows, 50% for 2009.06. Now I'm not using Windows, so I'd like to use the other half for another ZFS pool, but I can't figure out how to access it. I have used fdisk to create a second Solaris2 partition and did a reconfiguration reboot, but format still only shows the one available partition. How do I use the second partition?

selecting c7t0d0
Total disk size is 30401 cylinders
Cylinder size is 16065 (512 byte) blocks

                                  Cylinders
  Partition  Status   Type        Start   End     Length  %
  =========  ======   =========   =====   =====   ======  ===
      1               Other OS        0       4        5    0
      2               IFS: NTFS       5    1917     1913    6
      3      Active   Solaris2     1917   14971    13055   43
      4               Solaris2    14971   30170    15200   50

format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c7t0d0 /p...@0,0/pci1028,2...@1f,2/d...@0,0

Thanks for any idea.
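For anyone retracing the steps, these are the stock Solaris commands involved (device name c7t0d0 taken from the listing above); a sketch, not a guaranteed fix:

```shell
# View/edit the x86 partition table on the raw whole-disk device:
fdisk /dev/rdsk/c7t0d0p0

# Rebuild /dev links without a full reconfiguration reboot:
devfsadm -Cv

# Inspect the VTOC of the active Solaris partition (s2 = whole Solaris
# partition by convention):
prtvtoc /dev/rdsk/c7t0d0s2
```

Note that format(1M) lists whole disks and works within the active Solaris2 partition, which is why the second partition does not show up as a separate entry there.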
Re: [zfs-discuss] ZFS write bursts cause short app stalls
Thanks for this thread! I was just coming here to discuss this very same problem. I'm running 2009.06 on a Q6600 with 8GB of RAM. I have a Windows system writing multiple OTA HD video streams via CIFS to the 2009.06 system running Samba. I then have multiple clients reading back other HD video streams. The write client never skips a beat, but the read clients have constant problems getting data when the "burst" writes occur. I am now going to try tuning txg_timeout and see if that helps. It would be nice if these tunables were settable on a per-pool basis, though.
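Two common ways to change that tunable on OpenSolaris, sketched below. The tunable name zfs_txg_timeout (value in seconds) is an assumption worth verifying against your build; some older builds used a different name (txg_time):

```shell
# Set the txg sync interval to 5 seconds live (lost at reboot):
echo "zfs_txg_timeout/W0t5" | mdb -kw

# Or persist it across reboots via /etc/system:
echo "set zfs:zfs_txg_timeout = 5" >> /etc/system
```

As noted, this is system-wide; there is no per-pool knob for it.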
Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
> If you want a small system that is pre-built, look at every possible
> permutation/combination of the Dell Vostro 200 box.

I agree, the Vostro 200 systems are an excellent deal. Update to the latest BIOS and they will recognize 8GB of RAM. The ONE problem with them is that Dell does not enable AHCI, so SATA access is slower than it needs to be.