Re: rm -rf / fanclub
On 02/24/2012 01:16 AM, deeptec...@gmail.com wrote:
> On Thu, Feb 23, 2012 at 11:41 PM, deeptec...@gmail.com wrote:
>> X11R6 is actually a symlink to /usr/local, and not usr/local or ./usr/local! Also, /home is a symlink to /usr/home, and not usr/home or ./usr/home.
> I meant to say that X11R6 should be a symlink to local or ./local.

I fully agree (if tested)... it makes no sense to use absolute symlinks unless you really want them to be absolute. E.g. in my home dir I added dhcpd.leases -> /var/lib/... Did you test this, by removing the link and recreating it relative, to see whether there are any stupid side effects?

I've never really thought symlinks were so dangerous (unless you use software that auto-dereferences, e.g. platform-independent stuff), but with mount --bind on Linux (same as nullfs in FreeBSD?) you can do similar things if you don't use --one-file-system with your rm -rf calls. So in addition to the symlink change, why not improve rm as well, by adding the --one-file-system option (which already exists in grm from the sysutils/coreutils port)?

About /home: I've just noticed that /home points to usr/home in the newest release. The newest basic installation (base + kernel) doesn't even come with an X11R6 symlink, yet I did have it after a full install (-CURRENTization + ports), so that symlink must come from mergemaster or some port.

--
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.malo...@brockmann-consult.de
Internet: http://www.brockmann-consult.de

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org
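The relative-vs-absolute distinction, and GNU rm's --one-file-system flag, can be sketched like this (scratch paths, GNU coreutils rm assumed; FreeBSD's base rm lacks the flag, hence the grm suggestion above):

```shell
# Scratch tree standing in for the real filesystem layout.
mkdir -p /tmp/symdemo/usr/local

# Absolute symlink: breaks if the tree is relocated (chroot, altroot, copy).
ln -s /tmp/symdemo/usr/local /tmp/symdemo/X11R6.abs

# Relative symlink: resolves correctly wherever the tree is mounted.
ln -s usr/local /tmp/symdemo/X11R6.rel

readlink /tmp/symdemo/X11R6.abs   # /tmp/symdemo/usr/local
readlink /tmp/symdemo/X11R6.rel   # usr/local

# GNU rm's --one-file-system refuses to cross into other mounted
# filesystems while recursing, limiting the damage of a stray rm -rf.
rm -rf --one-file-system /tmp/symdemo
```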
Re: Uneven load on drives in ZFS RAIDZ1
On 12/19/2011 03:22 PM, Stefan Esser wrote:
> Hi ZFS users,
> for quite some time I have observed an uneven distribution of load between drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of a longer log of 10 second averages logged with gstat:
>
> dT: 10.001s  w: 10.000s  filter: ^a?da?.$
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    130    106   4134    4.5     23   1033    5.2    48.8| ada0
>     0    131    111   3784    4.2     19   1007    4.0    47.6| ada1
>     0     90     66   2219    4.5     24   1031    5.1    31.7| ada2
>     1     81     58   2007    4.6     22   1023    2.3    28.1| ada3
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     1    132    104   4036    4.2     27   1129    5.3    45.2| ada0
>     0    129    103   3679    4.5     26   1115    6.8    47.6| ada1
>     1     91     61   2133    4.6     30   1129    1.9    29.6| ada2
>     0     81     56   1985    4.8     24   1102    6.0    29.4| ada3
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     1    148    108   4084    5.3     39   2511    7.2    55.5| ada0
>     1    141    104   3693    5.1     36   2505   10.4    54.4| ada1
>     1    102     62   2112    5.6     39   2508    5.5    35.4| ada2
>     0     99     60   2064    6.0     39   2483    3.7    36.1| ada3
> ...
> So: Can anybody reproduce this distribution of requests?

I don't have a raidz1 machine, and no time to make you a special raidz1 pool out of spare disks, but on my raidz2 I only ever see unevenness when a disk is bad, or between different vdevs. But you only have one vdev.

Check that your disks are identical (are they? We can only assume so, since you didn't say). Show us the output from:

smartctl -i /dev/ada0
smartctl -i /dev/ada1
smartctl -i /dev/ada2
smartctl -i /dev/ada3

Since your tests show ms/r to be pretty even, I guess your disks are not broken. But the ms/w values differ slightly, so it seems the first two disks are slower at writing (someone once said refurbished disks can be like this, even if otherwise identical), or the controller ports they are attached to are slower. For example, maybe your motherboard has 6 ports, and you plugged disks 1, 2 and 3 into ports 1, 2 and 3, and disk 4 into port 5. Disks 3 and 4 would then have their own channels, while disks 1 and 2 share one. So if the disks are identical, I would guess your disk controller is to blame.
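To see whether the skew holds over a whole gstat log rather than a few samples, the ms/w column could be averaged per device with awk; a sketch (the sample lines are the first block above, and the field layout is assumed to match gstat's columns):

```shell
# Average the ms/w column (field 8) per device from a gstat-style log.
# Assumed field layout: L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy name
avg=$(awk '{ sum[$NF] += $8; n[$NF]++ }
           END { for (d in sum) printf "%s %.1f\n", d, sum[d] / n[d] }' <<'EOF'
0 130 106 4134 4.5 23 1033 5.2 48.8 ada0
0 131 111 3784 4.2 19 1007 4.0 47.6 ada1
0  90  66 2219 4.5 24 1031 5.1 31.7 ada2
1  81  58 2007 4.6 22 1023 2.3 28.1 ada3
EOF
)
echo "$avg"
```

With a longer log piped in instead of the here-document, a consistently higher average for one drive would point at that drive or its port.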
To test this, first back it up. Then *fix your setup by using labels*, i.e. use gpt/somelabel0 or gptid/... rather than ada0p2. Check the output of ls /dev/gpt* for the labels you already have. Then try swapping disks around to see whether the load pattern follows them.

Make sure to back up... swapping disks (or even removing one when it fails, depending on the controller, etc.) without labels can be bad. E.g. you have ada1, ada2, ada3, ada4. Someone spills coffee on ada2; it fries and cannot be detected any more, and you reboot. Now you have ada1, ada2, ada3. Things are usually still fine (even though ada3 is now ada2 and ada4 is now ada3, because there is some ZFS superblock stuff to keep track of things), but if you also had an ada5 that was not part of the pool, or was a spare or a log or anything other than another disk in the same vdev as ada1, bad things can happen when it becomes ada4. Unfortunately, I don't know exactly what people do to trigger the bad things. When it happened to me, it just said my pool was faulted or degraded, and set a disk or two to UNAVAIL or FAULTED. I don't remember it automatically resilvering them, but from what I have read about these problems, it seems some disks were resilvered afterwards.

The last thing I can think of is to make sure your partitions are aligned, and identical. Show us the output from:

gpart show

> Any idea why this is happening, and whether something should be changed in ZFS to better distribute the load (leading to higher file system performance)?
>
> Best regards, Stefan
Re: Uneven load on drives in ZFS RAIDZ1
Am 19.12.2011 17:48, schrieb Michael Reifenberger:
> On Mon, 19 Dec 2011, Peter Maloney wrote:
>> Swapping disks (or even removing one depending on controller, etc. when it fails) without labels can be bad.
> Since ZFS uses (and searches for) its own UUID partition signatures, disk swapping shouldn't matter as long as enough disks are found.
> Set vfs.zfs.debug=1 during boot to watch what is searched for.
>
> Bye/2
> ---
> Michael Reifenberger mich...@reifenberger.com http://www.Reifenberger.com

Thanks for the info. But I am confused by it, because when my disks moved around randomly on reboot, it really did mess things up. The first few times it happened there was no issue, but when a spare took the place of a pool disk, it messed things up. I can see the UUIDs when I look at zdb output, so I really have no idea why it messed things up... but it did, so I will always caution people anyway. I can't point you to any relevant lines of code that cause the problem, but I know it can happen... and it will when you least expect it. ;)

I also see the opposite: people talking about their very old pools, with many disks exchanged, and I wonder why mine was so easily messed up while theirs survived so long without labels. I just assumed it was the way the controller arranged the disks. (By the way, mine now orders the disks perfectly consistently now that it is in IT mode, not mostly randomly like before... that could be a factor.)

I am always very busy, but when I get the chance (it shouldn't take too long) I will try to recreate the problem on a virtual machine and try vfs.zfs.debug=1. Thanks for the suggestion.
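For anyone wanting to try Michael's suggestion, the tunable would go in /boot/loader.conf (a config sketch; it takes effect on the next boot):

```
# /boot/loader.conf
# Verbose ZFS device discovery during boot, to watch which
# vdevs/GUIDs are searched for and found.
vfs.zfs.debug="1"
```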
Re: Dog Food tm
On 12/08/2011 09:35 AM, Daniel Gerzo wrote:
> On Tue, 06 Dec 2011 13:51:05 -0800, Sean Bruno wrote:
>> Was trying to use gmirror(4) or zfs(4) today to get a machine in the cluster setup with s/w raid and was completely flummoxed by the intricacies of manual setup. Chances are, I just am not smart enough to wind my way through the various howtos and wiki pages that I've been browsing to get the job done.
>>
>> If someone wants to work on modifying bsdinstaller to do s/w raid via one of these mechanisms, clusteradm@ can provide you a two disk SATA machine that can be used for this purpose.
>>
>> Sean
> Why use gmirror under ZFS, when ZFS itself supports software raid?

And what problems did you run into? This guide worked for me:
http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror
(but the zfs create ... part was too much typing, so I did it with a script that I added to the CD)
Re: ZFS: i/o error all block copies unavailable Invalid format
On 12/06/2011 12:06 PM, KOT MATPOCKuH wrote:
> Hello!
> 2011/12/6 Peter Maloney peter.malo...@brockmann-consult.de:
>> Invalid format sounds like the software doesn't understand the disks. Check your pool (software) version with:
>> # zpool upgrade -v
> zpool upgrade -v does not show pools available for upgrade :)
> # zpool upgrade
> This system is currently running ZFS pool version 28.
> All pools are formatted using this version.
>> Check your pool (on disk) version with (I forget the exact command):
>> # zpool get version sunway
> NAME    PROPERTY  VALUE  SOURCE
> sunway  version   28     default
> It's the latest pool version for RELENG_9.
>> My guess is that you installed the latest zfs on the pool, but left the old version of the bootloader.
> You mean gptzfsboot?

Yes. An old gptzfsboot must fail with a message like this:
ZFS: unsupported ZFS version %u (should be %u)

> And why was the problem solved by copying the previous zfsloader? Without any other changes...

Previous zfsloader? Oh, how interesting. I missed that in your last message.

When you updated the other 4 systems with the same sources, did you mean the same cvsup file, or an exact copy of the source tree? I often see people posting about some mirrors updating later than others [I forget if this applies to current or stable or both], so I wouldn't trust them to produce the same download each time, or a consistent download for a given date unless it is a distant date.

And just out of curiosity, how did you find the old bootloader? I probably wouldn't think to back it up if the new one compiled without error. Did you also try copying the bootloader (with dd, maybe) from one of the working updated systems? Or comparing checksums of the bootloaders?
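Comparing the bootloaders could be as simple as a byte-for-byte cmp; a sketch with stand-in files (on real systems the paths would be e.g. /boot/zfsloader on each machine):

```shell
# Stand-ins for the two bootloader binaries being compared.
printf 'loader A' > /tmp/zfsloader.mine
printf 'loader B' > /tmp/zfsloader.theirs

# cmp -s is silent and only sets the exit status.
if cmp -s /tmp/zfsloader.mine /tmp/zfsloader.theirs; then
    verdict=identical
else
    verdict=different
fi
echo "$verdict"
```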
Re: ZFS: i/o error all block copies unavailable Invalid format
Am 06.12.2011 07:14, schrieb KOT MATPOCKuH:
> Hello all!
> On 24 nov I updated sources via csup to RELENG_9 (9.0-PRERELEASE). After make installboot I successfully booted to single user. But after make installworld the system failed to boot with the message:
> ZFS: i/o error all block copies unavailable Invalid format
> The status command shows the status of all pools properly. The root filesystem is not compressed.
> # zfsboottest /dev/gpt/rootdisk /dev/gpt/rootmirr
> pool: sunway
> config:
> NAME             STATE
> sunway           ONLINE
>   mirror         ONLINE
>     gpt/rootdisk ONLINE
>     gpt/rootmirr ONLINE
> Restoring the old /boot/zfsloader solved the issue. Before this, I successfully updated 4 other systems at the same source level without any problems. My sys/boot/zfs/zfsimpl.c version: 1.17.2.2 2011/11/19 10:49:03
> Where may the root cause of the problem be? And how can I debug this problem?

Invalid format sounds like the software doesn't understand the disks.

Check your pool (software) version with:
# zpool upgrade -v

Check your pool (on disk) version with (I forget the exact command):
# zpool get version sunway

My guess is that you installed the latest zfs on the pool, but left the old version of the bootloader.

To fix an unbootable zfs root where the disks are working fine or degraded, this is the general procedure. I don't know if it applies to your particular problem, but I am optimistic. In this example, I copied a usb disk called zrootusb to one called zrootusbcopy.

Import the pool using altroot and cachefile:
# zpool import -o altroot=/z -o cachefile=/tmp/zpool.cache zrootusbcopy

Set mount points (/ is fine; you don't need legacy... legacy is a hassle, needing to be set to / and back after umount every time you repair things). Since altroot is /z, the root will be at /z/; do not prepend /z in the mountpoint:
# zfs list | grep zrootusbcopy
# zfs set mountpoint=/ zrootusbcopy

(If you were copying a disk and wanted it to be bootable, this is the point where you would snapshot and zfs send, where the above is the newly created bootable copy.)

Make sure bootfs is set (bootfs is a pool property, so use zpool, not zfs):
# zpool get bootfs zrootusbcopy
# zpool set bootfs=zrootusbcopy zrootusbcopy

Copy the cache file to the new pool's /boot/zfs:
# cp /tmp/zpool.cache /z/boot/zfs/zpool.cache

Verify that /boot/loader.conf is correct (pool name), and that zfs_load is there:
vfs.root.mountfrom="zfs:zrootusbcopy"
zfs_load="YES"

If this is your only zfs:
# zfs umount -a
otherwise, one at a time:
# zfs umount zrootusbcopy/var/empty
# zfs umount zrootusbcopy/usr/
...
or a script (bash, untested):

#begin script
# unmount children before their parents, hence the reversed sort
for name in $(zfs list -H -o name | grep -E '^zrootusbcopy/' | sort -r); do
    zfs umount "$name"
done
zfs umount zrootusbcopy
#end script

Install the bootloader (possibly the only step you actually needed):
1. Figure out which disks and partition number to put it on... I use:
   gpart show
2. Install. If it is a mirror, run two of these commands with different devices:
   gpart bootcode -b /z/boot/pmbr -p /z/boot/gptzfsboot -i partitionnumber diskdevice

Then do not export. Then reboot and try to boot your previously unbootable zfs root system.

Here is a thread where I suggested this method to someone and it worked for him, although his error message was different:
http://forums.freebsd.org/showthread.php?t=26789
Re: samba+zfs
>> ... with NFS/SMB, but I suspected it was because I have an re(4) based NIC. ZFS has also wired down a lot of my system memory for the L2ARC…
>
> re isn't great but I wouldn't expect it to slow down over time... unless bounce buffers got used more and more or something. I have an em0 card in this system - but in any case it is slow locally (i.e. dd a large file with 64k block size).
>
> --
> Daniel O'Connor software and network engineer for Genesis Software - http://www.gsoft.com.au
> "The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum
> GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C

Right now (while experiencing slow writes via samba+zfs) this is the general read speed off a 4 x 1.5TB sata2 raidz1:

# dd if=test.file of=/dev/null
13753502+1 records in
13753502+1 records out
7041793036 bytes transferred in 100.020897 secs (70403218 bytes/sec)

That's not in the same ballpark as the slow writes, but it is below what I expect for reads. My setup is a little odd: 4 x 1.5TB raidz sata2 on the mobo + a 2 x 2TB mirror on a sata1 pci controller, zfs v28, stable/9 r227357, amd x4 810 2.6ghz, 4gb ram, no dedup, no compression, daily snapshots kept for 7 days.

The file read above was stored before the 2 x 2TB mirror was added, so it was read solely off the sata2 mobo ports. Reading something more recent (split amongst both the raidz1 and mirror vdevs):

# dd if=test2.file of=/dev/null
9154715+1 records in
9154715+1 records out
4687214153 bytes transferred in 82.963181 secs (56497522 bytes/sec)

This, again, seems slower than usual, but not as terrible as the write speeds I've been seeing via samba.
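One caveat about those dd numbers: with no bs= argument, dd uses 512-byte I/O (hence the 13753502 records for a 7 GB file), which can itself limit throughput. A sketch of a read test with an explicit block size (file name and size are examples; a real benchmark would use a file much larger than RAM so ARC caching doesn't inflate the result):

```shell
# Create a 16 MiB test file, then read it back with 64k blocks.
dd if=/dev/zero of=/tmp/ddtest.bin bs=64k count=256 2>/dev/null

# The throughput summary is printed on stderr.
dd if=/tmp/ddtest.bin of=/dev/null bs=64k 2>&1 | tail -n 1

size=$(wc -c < /tmp/ddtest.bin)
rm -f /tmp/ddtest.bin
```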
Re: samba+zfs
On 11/09/2011 08:07 AM, Daniel O'Connor wrote:
> On 09/11/2011, at 17:32, Garrett Cooper wrote:
>> dd's of large files (spooled backups going to tape) to /dev/null are as slow as Samba.
>> - Dedupe?
> Nope.

You are probably right, but just to be sure, let's verify that with:
zpool get dedupratio poolname
If it is not 1.00x, then even though dedup may now be disabled, your data is deduped.

>> - Compression?
> On the mail spool ports, but not on the tape spool.
>> - How much RAM?
> 8GB.

Why are your memory settings set so low if you have 8 GB of memory? Do you need 6.2 GB left over for other apps?

On 11/08/2011 10:26 AM, Dan The Man wrote:
> vm.kmem_size=1844M
> vfs.zfs.arc_min=1024M
> vfs.zfs.arc_max=1536M

On my machine with 48 GB of memory, I set:

vm.kmem_size=44g
vm.kmem_size_max=44g
vfs.zfs.arc_min=80m
vfs.zfs.arc_max=42g

And now it is very fast. My dataset is only about 10 TiB, and the total space is around 32 TiB. In practice it uses about 36g of ARC and 151g of L2ARC, rather than the full 42g. Before, writes were quite slow while reading at the same time (such as simply using cp).

You shouldn't set these the way I did, giving everything to ZFS, because you are also using UFS, which needs some memory too. Also, my machine runs no services other than ssh, nfs and samba. I guess leave some for whatever services you run (or does the kernel give up the memory it used when userspace apps want more, like in Linux?).

Also, instead of vfs.zfs.zil_disable=1 you should try setting sync=disabled:
# zfs set sync=disabled somepool/someasyncdataset

And to find out whether it is a bad disk or some IO bottleneck, use gstat to check the load % while it is doing the slow writing, e.g.:
# gstat -I 5s -f 'gpt/root|label/.ank|gpt/log|gpt/cache'

>> - What debug options do you have enabled in the kernel?
> It is 8.2-GENERIC so.. no WITNESS (for example)
>
>> I've been noticing a slowdown in some respects with NFS/SMB, but I suspected it was because I have an re(4) based NIC. ZFS has also wired down a lot of my system memory for the L2ARC…
>
> re isn't great but I wouldn't expect it to slow down over time... unless bounce buffers got used more and more or something. I have an em0 card in this system - but in any case it is slow locally (i.e. dd a large file with 64k block size).
>
> --
> Daniel O'Connor software and network engineer for Genesis Software - http://www.gsoft.com.au
> "The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum
> GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
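For the 8 GB machine in question, the idea above might translate into /boot/loader.conf lines like these; the sizes are illustrative guesses, not recommendations, and you would scale them down further if other services need the memory:

```
# /boot/loader.conf -- illustrative sizing for 8 GB RAM (adjust to taste)
vm.kmem_size="6g"
vm.kmem_size_max="6g"
vfs.zfs.arc_min="512M"
vfs.zfs.arc_max="5g"
```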