[zfs-discuss] MySQL benchmark
Hello zfs-discuss,

http://dev.mysql.com/tech-resources/articles/mysql-zfs.html

I've just quickly glanced through it. However, the argument about the double-buffering problem is not valid.

--
Best regards,
Robert Milkowski
mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Re: [zfs-discuss] [osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu [EMAIL PROTECTED] wrote:
> I created a ZFS file system like the following, with /mypool/cache being the partition for the Squid cache:
>
> 18:51:27 [EMAIL PROTECTED]:~$ zfs list
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> mypool         478M  31.0G  10.0M  /mypool
> mypool/cache   230M  9.78G   230M  /mypool/cache
> mypool/home    226M  31.0G   226M  /export/home
>
> Note: I only have a few days of experience on Solaris, and I might have made some mistakes with the above ZFS partitions!

No, that looks OK. You can just 'zfs set quota=something-else mypool/cache' to make it bigger in the future if need be.

> Basically, I want to know if somebody here on this list is using a ZFS file system for a proxy cache, and what its performance will be. Will it improve or degrade Squid's performance? Or better still, is there any kind of benchmark tool for ZFS performance?

filebench sounds like it'd be useful for you. It's coming in the next Nevada release, but since it looks like you're on Solaris 10, take a look at:
http://blogs.sun.com/erickustarz/entry/filebench

Remember to 'zfs set atime=off mypool/cache' - there's no need for it for Squid caches.

--
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
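For reference, a minimal sketch of the two tuning commands suggested above. The 20G quota is purely an illustrative assumption (size it to whatever cache_dir Squid is configured with), and the final line just verifies the result:

  # zfs set quota=20G mypool/cache
  # zfs set atime=off mypool/cache
  # zfs get quota,atime mypool/cache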
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
First off, can we just confirm the exact version of the Silicon Image card and which driver Solaris is using?

Use 'prtconf -pv' and '/usr/X11/bin/scanpci' to get the PCI vendor/device ID information. Use 'prtconf -D' to confirm which drivers are being used by which devices. And 'modinfo' will tell you the version of the drivers.

The above commands will give details for all the devices in the PC. You may want to edit down the output before posting it back here, or alternatively put the output into an attached file.

See this link for an example of this sort of information for a different hard disk controller card:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html

Regards
Nigel Smith
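In case it helps, here is one way to narrow the output of those commands down to the relevant card. The 'si3124' module name and the grep patterns are assumptions for illustration - adjust them to match what your system actually reports:

  # prtconf -pv | egrep -i 'vendor-id|device-id'   # raw PCI IDs
  # /usr/X11/bin/scanpci | grep -i silicon         # readable card listing
  # prtconf -D | grep -i ata                       # driver bound to the controller
  # modinfo | grep -i si3124                       # driver version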
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
And are you seeing any error messages in '/var/adm/messages' indicating any failure on the disk controller card? If so, please post a sample back here to the forum.
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
On 10/30/07, Neal Pollack [EMAIL PROTECTED] wrote:
> > I'm experiencing major checksum errors when using a Syba Silicon Image 3114-based PCI SATA controller w/ non-RAID firmware. I've tested by copying data via sftp and smb. With everything I've swapped out, I can't fathom this being a hardware problem.
>
> Even before ZFS, I've had numerous situations where various si3112 and 3114 chips would corrupt data on UFS and PCFS, with very simple copy-and-checksum test scripts, doing large bulk transfers.

Those SIL chips are really broken when used with certain Seagate drives, but I have had data corrupted by them with a WD drive also. Linux can work around this bug by reducing transfer sizes (and thus dramatically impacting speed); Solaris probably doesn't have a workaround. With this quirk enabled (on Linux), I get at most 20 MB/s from the drives, but ZFS does not report any corruption. Before that, I had corruption hourly.

More info about the SIL issue: http://home-tj.org/wiki/index.php/Sil_m15w

I have a Si 3112, but despite SIL's claims, other chips seem to be affected also.

--
Tomasz Torcz [EMAIL PROTECTED]
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
One thing to check before you blame your controller: are the SATA cables close together for an extended length? Basically, most SATA cables will generate massive levels of cross-talk between them if they're tied together or run parallel in close proximity for part of their run length. A friend found this sort of problem a couple of months ago, and it was cured by separating the cables.

Steve

--
Computer Systems Administrator,          E-Mail: [EMAIL PROTECTED]
Department of Earth Sciences,            Tel: +44 (0)1865 282110
University of Oxford, Parks Road,        Fax: +44 (0)1865 272072
Oxford, UK.
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
On Tue, 30 Oct 2007, Tomasz Torcz wrote:
> On 10/30/07, Neal Pollack [EMAIL PROTECTED] wrote:
> > Even before ZFS, I've had numerous situations where various si3112 and 3114 chips would corrupt data on UFS and PCFS, with very simple copy-and-checksum test scripts, doing large bulk transfers.
>
> Those SIL chips are really broken when used with certain Seagate drives, but I have had data corrupted by them with a WD drive also. Linux can work around this bug by reducing transfer sizes (and thus dramatically impacting speed); Solaris probably doesn't have a workaround.

Might be slightly off-topic for the thread as a whole, but _this_ specific thing (reducing transfer sizes) is possible on Solaris as well, as documented here:
http://docs.sun.com/app/docs/doc/819-2724/chapter2-29?a=view

You can also read a bit more in the following thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=6866

It's possible to limit this system-wide or per-LUN.

Best regards,
FrankH.

--
No good can come from selling your freedom, not for all the gold in the world, for the value of this heavenly gift far exceeds that of any fortune on earth.
--
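For the system-wide case, the tunable is likely maxphys, which is covered in the Solaris Tunable Parameters Reference Manual linked above - treat that as an assumption and confirm against the doc. A minimal sketch, with 0x20000 (128 KB) as a purely illustrative cap; the change takes effect after a reboot:

  # echo 'set maxphys=0x20000' >> /etc/system
  # init 6

The per-LUN variant is driver-specific; see the linked thread for details.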
Re: [zfs-discuss] Mac OS X 10.5.0 Leopard ships with a readonly ZFS
I'm also very interested in getting this going; it's frustrating having Apple ignore what a big selling point for Leopard this is! Kugutsum, could you drop me a line so we can discuss? joe (at) penski {dot} net
Re: [zfs-discuss] zpool question
On Mon, 29 Oct 2007, Krzys wrote:
> everything is great but I've made a mistake and I would like to remove emcpower2a from my pool and I cannot do that... Well, the mistake that I made is that I did not format my device correctly, so instead of adding 125 GB I added 128 MB

You can't remove it directly, but you certainly can *replace* it with a larger device. If this is critical data, then obviously back up first, and test these steps on alternate storage.

> Part      Tag    Flag     Cylinders        Size            Blocks
>    0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
>    1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
>    2     backup    wu       0 - 63997      125.00GB    (63998/0/0) 262135808
>    3 unassigned    wm       0                0         (0/0/0)             0
>    4 unassigned    wm       0                0         (0/0/0)             0
>    5 unassigned    wm       0                0         (0/0/0)             0
>    6        usr    wm     128 - 63997      124.75GB    (63870/0/0) 261611520
>    7 unassigned    wm       0                0         (0/0/0)             0

The easiest thing would be to replace s0 with s6. You'll be 128 MB shy of the full disk, but that's a drop in the bucket. The command would be:

  zpool replace mypool emcpower2a emcpowerXX

where XX is the name of slice 6. You should see the new size right away.

Another option would be to use a different drive, formatted to give you the entire disk, and then do a replace of emcpower2a with emcpower3a. Then you could repartition 2 properly, and replace 3 with 2.

Regards,
markm
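To make the first option concrete: under the usual letter-per-slice convention (a = slice 0 ... g = slice 6), the target would be something like emcpower2g, but that name is an assumption - check what your system actually calls slice 6 before running this:

  # zpool replace mypool emcpower2a emcpower2g
  # zpool status mypool     # wait for the replace to finish
  # zpool list mypool       # capacity should now reflect the 124.75 GB slice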
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
On Mon, 29 Oct 2007, MC wrote:
> Here's what I've done so far:

The obvious thing to test is the drive controller, so maybe you should do that. :)

Also - while you're doing swapTronics - don't forget the power supply (PSU). Ensure that your PSU has sufficient capacity on its 12-volt rails (older PSUs didn't even tell you how much current they could push out on the 12V outputs).

See also: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta

Regards,
Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from sugar-coating school? Sorry - I never attended! :)
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
Tried that... completely different cases with different power supplies.

On Oct 30, 2007, at 10:28 AM, Al Hopper wrote:
> Also - while you're doing swapTronics - don't forget the power supply (PSU). Ensure that your PSU has sufficient capacity on its 12-volt rails (older PSUs didn't even tell you how much current they could push out on the 12V outputs).
[zfs-discuss] zfs mounting
It would be nice to be able to mount a ZFS file system by its mountpoint also, and not just by the dataset name... For example, I have the following:

mypool5        257G   199G  24.5K  /mypool5
mypool5/d5     257G   199G   257G  /d/d5

The only way to mount it is with 'zfs mount mypool5' and 'zfs mount mypool5/d5', but it would be nice to be able to mount mypool5/d5 by issuing:

zfs mount /d/d5

Just a suggestion to make ZFS even easier to use... but then why stop there - why not be able to mount using just the mount command?

mount /d/d5

Just my thought, as I was in need of mounting this USB drive after it was disconnected, and it took me a few minutes to figure it out... Sorry if this was covered in the past; I did not take the time to search the archives...

Regards,
Chris
Re: [zfs-discuss] zpool question
Chris,

I agree that your best bet is to replace the 128-MB device with another device, fix emcpower2a manually, and then replace it back. I don't know these drives at all, so I'm unclear about the "fix it manually" step. Because your pool isn't redundant, you can't use zpool offline or detach.

I'm curious if the capacity of this pool is 128 MB x 3. If so, then I think you could replace emcpower2a with a 128-MB file, and then replace it back. Like this:

0. Back up your data.

1. Create the file:

   # mkdir /files
   # mkfile 128m /files/file1

2. Replace the device with the file:

   # zpool replace mypool emcpower2a /files/file1

3. Fix the emcpower2a drive.

4. Replace the file with the device:

   # zpool replace mypool /files/file1 emcpower2a

I have no experience with these drives, but in theory, this should work. I'm also wondering if you should make the 128-MB file slightly larger, to account for any differences in sizing between a UFS file and the emcpower drive.

Cindy

Krzys wrote:
> yes, I was thinking about this, but I wanted to just remove the whole 128 MB disk and then use format to repartition the complete disk to give it its full capacity... I have all the disks set up this way, so I wanted to be consistent, but it's not letting me remove that disk from the pool at all... 128 MB is not much to waste and I am not concerned about it, but as I said, I wanted to be consistent, and that's the reason why I wanted to remove the other disk... Maybe what I can do is replace it with a different device if I can find one, partition the disk to my needs, and then replace the temporary disk with the newly repartitioned one... I thought there might be an easier way to do it... Thanks for the help.
>
> Chris
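Between steps 2 and 4 it is worth confirming that the pool picked up the file vdev before touching the real device - a quick sanity check using nothing beyond standard zpool commands:

  # zpool status mypool    # /files/file1 should now be listed in place of emcpower2a
  # zpool list mypool      # capacity should be unchanged before swapping back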
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
Hi,

I have the same sil3114-based controller, installed in a dual Opteron box. I installed Solaris x86 on it and had no problem, but I hardly used that box with Solaris, as my installation was only to try out Solaris on my Opteron workstation.

Instead, on that workstation I constantly run Linux, and twice in a few months I came across (while running Fedora) several I/O errors on the SATA disk attached to that controller. I thought at first that the hard drive was gone, but then I swapped that controller for a sil3112 and the I/O errors stopped. I swapped the sil3114 back in and have had no errors since.

I reckon it might have been due to one of the SATA cables (power or data?) not making perfect contact. SATA connectors are of extremely poor quality, and they fail to hold in place as well as the older IDE, SCSI, or Molex power connectors. I noticed as well that they crack easily if inadvertently pulled or pushed while working inside the computer case.
Re: [zfs-discuss] zpool question
Cindy Swearingen wrote:
> I'm curious if the capacity of this pool is 128 MB x 3. If so, then I think you could replace emcpower2a with a 128-MB file.

It should be 125G + 125G + 128M. I think this is a good idea; just create the file somewhere outside of your pool.

Hth,
Victor
Re: [zfs-discuss] Mac OS X 10.5.0 Leopard ships with a readonly ZFS
On 10/30/07, Joe Richards [EMAIL PROTECTED] wrote:
> I'm also very interested in getting this going; it's frustrating having Apple ignore what a big selling point for Leopard this is!

Check out the Mac blogosphere. I think Apple is waiting until ZFS has a few more things ironed out. The big concerns seem to be massive bulk I/O for video, and maybe more reliability testing on their part.

--
H. Lally Singh
Ph.D. Candidate, Computer Science
Virginia Tech
Re: [zfs-discuss] zpool question
Krzys wrote:
> hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives:
>
> bash-3.00# zpool status
>   pool: mypool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         mypool        ONLINE       0     0     0
>           emcpower0a  ONLINE       0     0     0
>           emcpower1a  ONLINE       0     0     0
>
> errors: No known data errors
>
> I added another drive, so now I have a pool of 3 drives:
>
> bash-3.00# zpool status
>   pool: mypool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         mypool        ONLINE       0     0     0
>           emcpower0a  ONLINE       0     0     0
>           emcpower1a  ONLINE       0     0     0
>           emcpower2a  ONLINE       0     0     0
>
> errors: No known data errors
>
> everything is great, but I've made a mistake and I would like to remove emcpower2a from my pool, and I cannot do that... Well, the mistake that I made is that I did not format my device correctly, so instead of adding 125 GB I added 128 MB. Here is my partition table on that disk:
>
> partition> print
> Current partition table (original):
> Total disk cylinders available: 63998 + 2 (reserved cylinders)
>
> Part      Tag    Flag     Cylinders        Size            Blocks
>    0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
>    1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
>    2     backup    wu       0 - 63997      125.00GB    (63998/0/0) 262135808
>    3 unassigned    wm       0                0         (0/0/0)             0
>    4 unassigned    wm       0                0         (0/0/0)             0
>    5 unassigned    wm       0                0         (0/0/0)             0
>    6        usr    wm     128 - 63997      124.75GB    (63870/0/0) 261611520
>    7 unassigned    wm       0                0         (0/0/0)             0
> partition>
>
> What I would like to do is remove my emcpower2a device, format it, and then add the 125 GB slice instead of the 128 MB one. Is it possible to do this in Solaris 10 U3? If not, what are my options?
>
> Regards,
> Chris

One other (riskier) option would be to export the pool and grow slice 0 on emcpower2a so that it consumes the entire disk. Then re-import the pool, and we should detect the new size and grow the pool accordingly. You want to make sure you don't change the starting cylinder, so that we can still see the front half of the labels. I've been able to do this successfully with EFI labels but have not tried it with VTOCs. If you do decide to go this route, a full backup is highly recommended.

- George
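In outline, George's riskier route might look like the sequence below. This is a sketch, not a recipe: the format step is interactive, the cylinder range comes from the table above, and the whole point of exporting first and keeping the starting cylinder at 0 is to leave the ZFS labels findable on re-import. Take a full backup before trying it:

  # zpool export mypool
  # format
      (select the emcpower2 disk, enter the "partition" menu, grow
       slice 0 to cylinders 0 - 63997 without changing the starting
       cylinder, then write the label)
  # zpool import mypool
  # zpool list mypool      # the pool should reflect the larger slice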
Re: [zfs-discuss] zfs mounting
The regular mount/umount commands can only be used if you have the filesystems present in /etc/vfstab. To create a ZFS filesystem with the idea of using mount/umount, you must specify 'mountpoint=legacy'. Then you can 'mount /d/d5'... just as with regular UFS.

Zpools don't need mountpoints; i.e., with 'mountpoint=none' the pool won't be mounted. Which means you can mount only the ZFS filesystem you want, AND mount it where you want, by using 'set mountpoint=/d/d6'.

Cheers

On 10/30/07, Krzys [EMAIL PROTECTED] wrote:
> It would be nice to be able to mount a ZFS file system by its mountpoint also, and not just by the dataset name... For example, I have the following:
>
> mypool5        257G   199G  24.5K  /mypool5
> mypool5/d5     257G   199G   257G  /d/d5
>
> The only way to mount it is with 'zfs mount mypool5' and 'zfs mount mypool5/d5', but it would be nice to be able to mount mypool5/d5 by issuing 'zfs mount /d/d5'.
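To make the legacy approach concrete, a minimal sketch using the mypool5/d5 dataset from the message above. The vfstab line follows the standard field order (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options); edit /etc/vfstab by hand rather than echoing into it if you prefer:

  # zfs set mountpoint=legacy mypool5/d5
  # echo "mypool5/d5 - /d/d5 zfs - yes -" >> /etc/vfstab
  # mount /d/d5
  # umount /d/d5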