[zfs-discuss] Retrieve per-block checksum algorithm
Greetings to everyone,

I'm trying to retrieve the checksum algorithm on a per-block basis with zdb(1M). I know it's supposed to be run by Sun's support engineers only; I take full responsibility for whatever damage I cause to my machine by using it.

Now. I created a tank/test filesystem, dd'ed some files, then changed the checksum to sha256 and dd'ed some more files. I retrieved the DVAs of all the files and wanted to verify that some of them use the default checksum and the rest sha256. The problem is that zdb -R either returns (null), meaning that printf() has been given a NULL pointer, or it returns corrupt data; in some cases it works OK. This is a case where it fails:

$ zdb -R tank:0:f076d8600:7a00:b
Found vdev type: mirror
DVA[0]: vdev_id 1199448952 / 4315c6bdf768bc00
DVA[0]:        GANG: TRUE   GRID: 00bd   ASIZE: eb45ac00
DVA[0]: :1199448952:4315c6bdf768bc00:a75a00:egd
DVA[1]: vdev_id 1938508981 / e19c60208f39cc00
DVA[1]:        GANG: TRUE   GRID: 005d   ASIZE: 11fe4ac00
DVA[1]: :1938508981:e19c60208f39cc00:a75a00:egd
DVA[2]: vdev_id 1231513852 / 646586e9b6609400
DVA[2]:        GANG: FALSE  GRID: 00e6   ASIZE: 15e953200
DVA[2]: :1231513852:646586e9b6609400:a75a00:edd
LSIZE: 6efc00   PSIZE: a75a00
ENDIAN: BIG     TYPE: (null)
BIRTH: 2a9513965f18afd  LEVEL: 24  FILL: 85adfa322e48a796
CKFUNC: (null)  COMP: (null)
CKSUM: 7408c0516468b934:a0f29a7c28b6c319:28280aab19d1ad3c:64607350c7ea256c
$

Is it a zdb deficiency, or is my input to blame?

Thank you for considering.

Best regards,
Stathis Kamperis
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
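For reference, the block-pointer dump above is just `KEY: VALUE` pairs and can be picked apart mechanically; `(null)` marks fields zdb could not resolve, and sizes are printed in hex. A minimal sketch (the sample text mirrors the output above; the parsing itself is my own illustration, not a zdb feature):

```shell
# Extract two fields from zdb -R block-pointer output. '(null)' means
# zdb could not resolve the field (here, the checksum function).
sample='LSIZE: 6efc00  PSIZE: a75a00
CKFUNC: (null)  COMP: (null)'

ckfunc=$(printf '%s\n' "$sample" | grep -o 'CKFUNC: [^ ]*' | cut -d' ' -f2)
lsize_hex=$(printf '%s\n' "$sample" | grep -o 'LSIZE: [^ ]*' | cut -d' ' -f2)
lsize=$((0x$lsize_hex))    # hex -> decimal: 0x6efc00 = 7273472 bytes

echo "$ckfunc"    # prints (null): the checksum function was not resolved
echo "$lsize"
```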
Re: [zfs-discuss] Retrieve per-block checksum algorithm
2009/10/26 Victor Latushkin victor.latush...@sun.com:
> On 26.10.09 14:25, Stathis Kamperis wrote:
>> Greetings to everyone. I'm trying to retrieve the checksum algorithm
>> on a per-block basis with zdb(1M). I know it's supposed to be run by
>> Sun's support engineers only; I take full responsibility for whatever
>> damage I cause to my machine by using it.
>
> I guess the -S option can help you to get what you are looking for.
>
> Victor

Hi Victor; thank you for your answer. I tried -S with no luck. I also tried different levels of verbosity (-vv, -SS, etc.). E.g.,

$ zdb -S all:all tank/test
Dataset tank/test [ZPL], ID 197, cr_txg 94447, 63.5K, 7 objects
$

From what I've read there's also zdb - pool, which outputs the checksum algorithm as a side effect while doing some validation checks. E.g.,

objset 0 object 26 offset 0x76000 [L0 SPA space map] 1000L/c00P
DVA[0]=0:232c680400:c00 DVA[1]=0:108f12a00:c00 DVA[2]=0:430237ce00:c00
fletcher4 lzjb LE contiguous birth=34121 fill=1
cksum=911bd91bf9:fdcdafd76e06:1056870d1b78c0a:c623a15a8f99054a

I'm just wondering whether this is doable for a user-specified block.

Best regards,
Stathis Kamperis
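Given that the pool traversal prints the checksum algorithm as a bare token on each block line, one could filter that output per algorithm. A small sketch under stated assumptions: the first sample line is taken from the excerpt above, while the second (a sha256 block) is hypothetical, invented here to show the filter doing something:

```shell
# Count blocks that use a given checksum algorithm in zdb's
# pool-traversal output. The second input line is a made-up example
# of what an sha256 block might look like in the same format.
zdb_output='objset 0 object 26 offset 0x76000 [L0 SPA space map] fletcher4 lzjb LE contiguous birth=34121 fill=1
objset 0 object 41 offset 0x0 [L0 ZFS plain file] sha256 lzjb LE contiguous birth=94500 fill=1'

n_sha256=$(printf '%s\n' "$zdb_output" | grep -c ' sha256 ')
echo "blocks using sha256: $n_sha256"
```

In real use the input would of course come from zdb itself rather than a shell variable.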
[zfs-discuss] compressed fs taking up more space than uncompressed equivalent
Salute.

I have a filesystem where I store various source repositories (cvs + git). I have compression enabled on it and zfs get compressratio reports 1.46x. When I copy all the stuff to another filesystem without compression, the data take up _less_ space (3.5GB vs 2.5GB). How's that possible?

Best regards,
Stathis
Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent
2009/10/23 michael schuster michael.schus...@sun.com:
> Stathis Kamperis wrote:
>> Salute. I have a filesystem where I store various source repositories
>> (cvs + git). I have compression enabled on it and zfs get compressratio
>> reports 1.46x. When I copy all the stuff to another filesystem without
>> compression, the data take up _less_ space (3.5GB vs 2.5GB). How's that
>> possible?
>
> just a few thoughts:
> - how do you measure how much space your data consumes?

With zfs list, under the 'USED' column. du(1) gives the same results as well (the individual fs sizes aren't entirely identical to those that zfs list reports, but the difference still exists).

tank/sources  3.73G  620G  3.73G  /export/sources  <-- compressed
tank/test     2.32G  620G  2.32G  /tank/test       <-- uncompressed

> - how do you copy?

With cp(1). Should I be using zfs send | zfs receive?

> - is the other FS also ZFS?

Yes, and they both live under the same pool. If it matters, I don't have any snapshots on either of the filesystems.

Thank you for your time.

Best regards,
Stathis Kamperis
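The numbers quoted above already hint that something was lost in the copy: compressratio is the ratio of logical (uncompressed) bytes to physical bytes stored, so a faithful uncompressed copy should come out *larger* than the compressed source, never smaller. A quick sanity calculation with the figures from the zfs list output (integer MB to keep the shell arithmetic exact):

```shell
# Expected logical size of the compressed dataset: USED * compressratio.
compressed_used_mb=3730          # 3.73G from zfs list above, in MB
ratio_x100=146                   # compressratio 1.46x, scaled by 100

expected_uncompressed_mb=$((compressed_used_mb * ratio_x100 / 100))
echo "$expected_uncompressed_mb"  # ~5445 MB expected, yet the copy held ~2320 MB
```

The copy occupying far less than the expected logical size is a strong hint that cp(1) missed data rather than ZFS misreporting space.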
Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent
2009/10/23 Gaëtan Lehmann gaetan.lehm...@jouy.inra.fr:
> Le 23 oct. 09 à 08:46, Stathis Kamperis a écrit :
>> With zfs list, under the 'USED' column. du(1) gives the same results as
>> well (the individual fs sizes aren't entirely identical to those that
>> zfs list reports, but the difference still exists).
>>
>> tank/sources  3.73G  620G  3.73G  /export/sources  <-- compressed
>> tank/test     2.32G  620G  2.32G  /tank/test       <-- uncompressed
>
> USED includes the size of the children and the size of the snapshots. I
> see below that you don't have snapshots on that pool, but in general I
> find it more useful to use
>
>   zfs list -o space,compress,ratio
>
> to look at how the space is used.
>
>> With cp(1). Should I be using zfs send | zfs receive?
>
> zfs send/receive or rsync -aH may do a better job by preserving hard
> links.

I destroyed the test fs, recreated it and did an rsync. The size of the uncompressed filesystem is now larger than the compressed one. I guess cp(1) missed a great deal of stuff, which is weird because I didn't get any error/warning on the console output. All good now.

Thanks Gaëtan and Michael for your time, and sorry to the rest of the list readers for the noise.

Best regards,
Stathis Kamperis
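The hard-link effect Gaëtan mentions is easy to demonstrate without touching ZFS: a file reachable through two hard links is stored once, but a copy that treats each path as a separate file (which is what loses the link relationship) stores the data twice. A self-contained sketch using ordinary temp directories:

```shell
# One inode, two names in src; the naive per-path copy in dst
# duplicates the data and drops the link relationship.
src=$(mktemp -d); dst=$(mktemp -d)
head -c 1000 /dev/zero > "$src/a"
ln "$src/a" "$src/b"              # second name, same inode

cp "$src/a" "$dst/a"              # copy each path independently...
cp "$src/b" "$dst/b"              # ...so the link is not preserved

# Link counts: GNU stat first, BSD stat as a fallback.
links_src=$(stat -c %h "$src/a" 2>/dev/null || stat -f %l "$src/a")
links_dst=$(stat -c %h "$dst/a" 2>/dev/null || stat -f %l "$dst/a")
echo "src link count: $links_src"  # 2 -> one inode shared by a and b
echo "dst link count: $links_dst"  # 1 -> the data now exists twice
```

rsync -aH (or zfs send/receive) recreates the hard links on the destination, which is why the sizes came out sane after the rsync.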
[zfs-discuss] Move home filesystem to new pool
Greetings to everyone!

I'm trying to move the home filesystem from my root pool to another pool, and I'm really lost. Specifically, I want to move rpool/export/home to tank/home. I did the following:

1. Created a snapshot of rpool/export/home (with the -r option set)
2. Did a zfs send -R ... | zfs receive -d ...

The home filesystem is created in the new pool, but when I enter it I see no files at all. Mind that zfs list shows that the new filesystem occupies the correct space. I tried many variations (working from single-user mode, for instance), but either the filesystem is empty or the mount points don't correspond to real directories that I can cd into. Would anyone be so kind as to give me a couple of directions or point me to a document on how to accomplish my task, please? Following random Google blog posts didn't pay off.

Thank you for considering.

Best regards,
Stathis Kamperis
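One likely source of the confusion above is how zfs receive -d names the received dataset: it drops the pool component of the sent dataset's path and grafts the remainder onto the target, so the data lands at tank/export/home rather than tank/home, with the sent mountpoint property possibly shadowing the expected location. A plain-string sketch of that name mapping (an illustration of the documented -d behaviour, not zfs itself):

```shell
# zfs receive -d <target>: strip the pool name from the sent dataset's
# path and append the rest to the target pool.
sent=rpool/export/home
target=tank

received="$target/${sent#*/}"     # drop 'rpool/', graft onto 'tank'
echo "$received"                  # prints tank/export/home, not tank/home
```

Checking zfs list -o name,mountpoint,mounted on the target pool after the receive would show where the datasets actually ended up and whether they are mounted.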