Max -

Sorry for the late reply.

On Tue, Sep 18, 2007 at 08:48:58AM +0200, max at bruningsystems.com wrote:
>
> Well... Currently, I changed mdb for all raw targets to load CTF for
> the entire kernel.  I had wanted to do "::loadctf module", but found
> that because a lot of the basic data types are defined in other
> modules (notably unix/genunix), it was easier to just do what mdb does
> for the "-k" option, and load everything.  So, right now, I hacked in
> several pieces of code from kt_activate() and various other mdb_kvm.c
> code related to ctf into mdb_rawfile.c.  To be honest, I expected to
> have to do more work.  Once I had a "shadow" kt_data_t under the
> rf_data_t, the ::print stuff magically worked.  The nice thing about
> this is that the zfs ctf stuff comes free.

CTF files are optionally uniquified against a common target (typically
genunix) using 'ctfmerge'.  I would imagine this information is encoded
in the CTF file somewhere, so that when you load 'ufs' you can notice
that it depends on CTF data in 'genunix' and load that as well.
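
As a rough illustration (libctf is a private interface, and the module
paths and the struct name below are just examples I picked), you can
follow that dependency by hand with ctf_parent_name() and ctf_import():

/*
 * Rough userland sketch: open a module's CTF container, ask for its
 * recorded parent, and graft the parent in so the uniquified types
 * resolve.  Paths and the example struct name are assumptions.
 */
#include <stdio.h>
#include <libctf.h>

int
main(void)
{
    int err;
    ctf_file_t *zfs_fp, *gen_fp;
    const char *pname;
    ctf_id_t id;

    /* CTF data is embedded in the module's ELF object. */
    if ((zfs_fp = ctf_open("/kernel/fs/amd64/zfs", &err)) == NULL) {
        (void) fprintf(stderr, "zfs: %s\n", ctf_errmsg(err));
        return (1);
    }

    /* ctfmerge records the parent container's name in the child. */
    pname = ctf_parent_name(zfs_fp);
    (void) printf("parent = %s\n", pname != NULL ? pname : "(none)");

    if ((gen_fp = ctf_open("/kernel/amd64/genunix", &err)) == NULL) {
        (void) fprintf(stderr, "genunix: %s\n", ctf_errmsg(err));
        return (1);
    }

    /* Import the parent so lookups in zfs see the shared types. */
    if (ctf_import(zfs_fp, gen_fp) != 0) {
        (void) fprintf(stderr, "import: %s\n",
            ctf_errmsg(ctf_errno(zfs_fp)));
        return (1);
    }

    if ((id = ctf_lookup_by_name(zfs_fp, "struct blkptr")) == CTF_ERR)
        (void) printf("struct blkptr not found\n");
    else
        (void) printf("struct blkptr id = %ld\n", (long)id);

    ctf_close(gen_fp);
    ctf_close(zfs_fp);
    return (0);
}

That is essentially what mdb's -k path does for you behind the scenes,
which is why grabbing the kt_activate() logic worked.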

> Right now, it seems to work for single disks, but I am not getting
> data that looks reasonable.  I posted a question on the zfs-discuss
> list about the uberblock_t dva (blkptr) and what it refers to, as what
> I see using ::print objset_phys_t does not look right.  The problem of
> multiple disks is currently beyond my scope as I do not have enough
> hardware (or money) to get into that.  Having said that, I would think
> I should be able to use the nvpair stuff at the beginning of any raw
> disk in the pool to get the configuration info that is needed to 
> handle this.
> 
> The main reason I wanted this change in mdb in the first place was to
> be able to actually figure out what IS the on-disk format.  The white
> paper at the zfs community web site basically shows (label 0)
> consisting of 8k of blank space, 8k of boot header, 112k of nvpairs,
> and a 128k uberblock_t array.  This is followed by a repeat of the
> same info (label 1), and then a cloud for the remaining xxxGB/TB until
> the end where label 0 is again repeated twice.  What I want to do is,
> given the uberblock (or an inumber, or a znode), find the data
> corresponding to this on the disk in zfs, similarly to what I can do
> with ufs.  So far, I'm not there...  I think an ability to 
> do this will greatly enhance understanding (at least, my
> understanding), of how zfs works.

Definitely.  It certainly seems useful for examining a single-disk,
uncompressed (including metadata) pool.  To make this truly useful for
ZFS in general, we would have to develop a ZFS-specific backend that
understood things like multiple devices, compression, RAID-Z, etc.
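
For the single-disk case, something along these lines would give you a
starting point.  It is only a rough userland sketch: the offsets come
from the label layout you describe above, and the 1K uberblock spacing
and the "name"/"txg" nvlist keys are my assumptions, so double-check
them against the real headers.

/*
 * Rough sketch: pull the config nvlist and the highest-txg uberblock
 * out of label 0 of a single raw vdev.  Layout assumed: 8K pad + 8K
 * boot header + 112K packed nvlist + 128K uberblock ring.
 */
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <libnvpair.h>

#define LABEL_SIZE   (256 * 1024)
#define NVLIST_OFF   (16 * 1024)    /* 8K blank + 8K boot header */
#define NVLIST_SIZE  (112 * 1024)
#define UB_RING_OFF  (128 * 1024)
#define UB_RING_SIZE (128 * 1024)
#define UB_SIZE      1024           /* assumed 1K uberblock spacing */
#define UB_MAGIC     0x00bab10cULL

int
main(int argc, char **argv)
{
    char *label, *name;
    nvlist_t *config;
    uint64_t txg, best_txg = 0;
    int fd, i, best = -1;

    if (argc != 2 || (fd = open(argv[1], O_RDONLY)) == -1) {
        (void) fprintf(stderr, "usage: %s /dev/rdsk/...\n", argv[0]);
        return (1);
    }
    if ((label = malloc(LABEL_SIZE)) == NULL ||
        pread(fd, label, LABEL_SIZE, 0) != LABEL_SIZE) {
        perror("read label 0");
        return (1);
    }

    /* The config is a packed (XDR) nvlist; libnvpair decodes it. */
    if (nvlist_unpack(label + NVLIST_OFF, NVLIST_SIZE, &config, 0) != 0) {
        (void) fprintf(stderr, "bad config nvlist in label 0\n");
        return (1);
    }
    if (nvlist_lookup_string(config, "name", &name) == 0)
        (void) printf("pool name: %s\n", name);
    if (nvlist_lookup_uint64(config, "txg", &txg) == 0)
        (void) printf("label txg: %llu\n", (u_longlong_t)txg);

    /*
     * Walk the uberblock ring and remember the highest txg; that
     * (modulo checksums) is the active uberblock whose rootbp you
     * would then chase with ::print.
     */
    for (i = 0; i < UB_RING_SIZE / UB_SIZE; i++) {
        uint64_t *ub = (uint64_t *)(void *)
            (label + UB_RING_OFF + i * UB_SIZE);

        if (ub[0] != UB_MAGIC)      /* ub_magic (ignoring byteswap) */
            continue;
        if (ub[2] >= best_txg) {    /* ub_txg */
            best_txg = ub[2];
            best = i;
        }
    }
    if (best != -1)
        (void) printf("active uberblock: slot %d, txg %llu, offset 0x%x\n",
            best, (u_longlong_t)best_txg, UB_RING_OFF + best * UB_SIZE);

    nvlist_free(config);
    (void) close(fd);
    return (0);
}

Compile with -lnvpair and point it at the raw device; the rootbp in the
winning uberblock is where your ::print objset_phys_t experiment should
pick up.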

> A "webrev"?  How do I do that?

You can find this tool in the SUNWonbld tool package, which you should
have if you are building ON sources.  If you're building from the
Mercurial sources, I'm not sure how it works, since I'm still using
TeamWare.  You may want to ask tools-discuss if it isn't obvious.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
