Re: [zfs-discuss] What to do with a disk partition

2009-03-15 Thread Harry Putnam
Blake blake.ir...@gmail.com writes:

 I think you will be helped by looking at this document:

 http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recommendations_and_Requirements

 It addresses many of your questions.

 I think the easiest way to back up your OS might be to attach a disk
 to the rpool as a mirror, use 'installgrub' to get the grub boot
 blocks onto the new mirror disk, then detach this disk and put it in
 storage.
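
If I follow, the sequence Blake describes would look roughly like this,
assuming the current root disk is c3d0s0 and the spare shows up as
c5t0d0s0 (both device names are just placeholders):

  zpool attach rpool c3d0s0 c5t0d0s0   # add the spare as a second side of the root mirror
  zpool status rpool                   # wait here until the resilver completes
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
  zpool detach rpool c5t0d0s0          # split the copy off and put the disk in storage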

Thanks for the pointer... looks like I can confuse myself with this
guide for quite a while ... hehe.

One question springs to mind immediately about attaching a mirror for
backup of the os.

I'm doing this on PC hardware.

I don't know enough yet to understand how zpools repair themselves or
how the parity data really works to recreate missing data.  Or what
might happen if a zpool were mounted with a disk missing.

Imagine I have all the hardware controller ports used up with disks in
various pools. So to install a disk to use for the mirror, something
else will have to be unhooked (physically, I mean).

When I boot up to transfer zpool data to the newly added mirror disk,
one or another zpool will be missing a disk.  Is that something that
will cause some kind of big problem?  Or would I be able to do
something so the affected pool didn't get mounted... or maybe other
choices I have no idea about yet?
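
(I gather one way to avoid surprises would be to export the affected pool
cleanly before pulling its disk and import it again afterwards -- something
like this, with a hypothetical pool name:

  zpool export tank    # before shutting down and unhooking the disk
  # ... swap cables, boot, do the rpool mirror/backup work ...
  zpool import tank    # once the disk is physically reconnected
  zpool status -x      # confirm everything came back healthy

but corrections welcome.)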

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ACL interpretation

2009-03-15 Thread David Dyer-Bennet
On page 202 of the December 2008 Solaris ZFS Administration Guide, it says
the ACLs are processed in order.  Then it says that an explicit allow ends
processing (or at least it says that a later deny can't override an
earlier allow).

But that's all it says; it doesn't really describe the interpretation
process completely.  I certainly couldn't implement it from this!  And I
can't figure out what my ACLs should mean from this.

In particular, does a matching deny entry also halt processing?  Or does
processing continue, meaning that a later allow can override an earlier
deny?


-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] After creating zpool of combined 750gb only 229 shows

2009-03-15 Thread Harry Putnam

Summary:
I'm doing something wrong here but not sure what.  I put a 250GB and a
500GB disk into a zpool but only 229GB is available.

I have a 250GB disk and a 500GB disk installed and configured as
raidz1.  Both have EFI labels when viewed with format/fdisk.

fdisk c3d1:
Total disk size is 30401 cylinders
Cylinder size is 16065 (512 byte) blocks

  Cylinders
 Partition   Status    Type          Start    End     Length     %
 =========   ======    ============  =====   ======   ======    ===
     1                 EFI               0    30401    30402    100

fdisk c4d0:
Total disk size is 60800 cylinders
Cylinder size is 16065 (512 byte) blocks

  Cylinders
 Partition   Status    Type          Start    End     Length     %
 =========   ======    ============  =====   ======   ======    ===
     1                 EFI               0    60800    60801    100


After creating the zpool with:
zpool create zbk raidz1 c3d1(250gb) c4d0(500gb) (with no errors)

  zpool status zbk
   pool: zbk
  state: ONLINE
  scrub: none requested
 config:

NAMESTATE READ WRITE CKSUM
zbk ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c3d1ONLINE   0 0 0
c4d0ONLINE   0 0 0

 errors: No known data errors


But then df -h shows only 229GB available

 df -h /zbk
 FilesystemSize  Used Avail Use% Mounted on
 zbk   229G   22K  229G   1% /zbk


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can VirtualBox run 64-bit guests on a 32-bit host

2009-03-15 Thread Brian Hechinger
On Sat, Feb 28, 2009 at 01:20:54AM -0600, Harry Putnam wrote:
 So cutting to the chase here... would you happen to have a
 recommendation from your own experience, or something you've heard
 will work and that can stand more ram... my current setup tops out at
 3gb.

The link to the HCL that was posted is probably your best bet.  I know very
little about PC hardware as I've always worked on SPARC (or POWER/MIPS/etc).

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Freezing OpenSolaris with ZFS

2009-03-15 Thread Markus Denhoff

Hi there,

we set up an OpenSolaris/ZFS based storage server with two zpools:  
rpool is a mirror for the operating system. tank is a raidz for data  
storage.


The system is used to store large video files and has 12x1TB SATA
drives attached (2 mirrored for the system).  Every time large files are
copied around, the system hangs without apparent reason, with 50% kernel
CPU usage (so one core is fully occupied) and about 2GB of free RAM (8GB
installed).  When idle, nothing crashes.  Furthermore, every scrub on tank
hangs the system before it reaches 1% done.  Neither /var/adm/messages
nor /var/log/syslog contains any errors or warnings.  We limited the ZFS
ARC cache to 4GB with an entry in /etc/system.
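
The entry is of the standard form, with 4GB expressed in bytes:

  * /etc/system -- cap the ZFS ARC at 4GB
  set zfs:zfs_arc_max = 0x100000000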


Does anyone have an idea what's happening there and how to solve the
problem?


Below are some outputs which may help.

Thanks and greetings from Germany,

Markus Denhoff,
Sebastian Friederichs

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c6t2d0   ONLINE   0 0 0
c6t3d0   ONLINE   0 0 0
c6t4d0   ONLINE   0 0 0
c6t5d0   ONLINE   0 0 0
c6t6d0   ONLINE   0 0 0
c6t7d0   ONLINE   0 0 0
c6t8d0   ONLINE   0 0 0
c6t9d0   ONLINE   0 0 0
c6t10d0  ONLINE   0 0 0
c6t11d0  ONLINE   0 0 0

errors: No known data errors

# zpool iostat
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       37.8G   890G      3      2  94.7K  17.4K
tank        2.03T  7.03T    112      0  4.62M    906
----------  -----  -----  -----  -----  -----  -----

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     39.8G   874G    72K  /rpool
rpool/ROOT                35.7G   874G    18K  legacy
rpool/ROOT/opensolaris    35.6G   874G  35.3G  /
rpool/ROOT/opensolaris-1  89.9M   874G  2.47G  /tmp/tmp8CN5TR
rpool/dump                2.00G   874G  2.00G  -
rpool/export               172M   874G    19K  /export
rpool/export/home          172M   874G    21K  /export/home
rpool/swap                2.00G   876G    24K  -
tank                      1.81T  6.17T  32.2K  /tank
tank/data                 1.81T  6.17T  1.77T  /data
tank/public-share         34.9K  6.17T  34.9K  /public-share
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] After creating zpool of combined 750gb only 229 shows

2009-03-15 Thread Tomas Ögren
On 15 March, 2009 - Harry Putnam sent me these 1,7K bytes:

 
 Summary:
 I'm doing something wrong here but not sure what.  Put a 250gb and
 500gb disk into zpool but only 229gb is available
 
 I have a 250 gb disk and a 500gb disk installed and configured as
 raidz1. both have efi labels when viewed with format/fdisk.
 
 fdisk c3d1:
 Total disk size is 30401 cylinders
 Cylinder size is 16065 (512 byte) blocks
 
   Cylinders
  Partition   Status    Type          Start    End     Length     %
  =========   ======    ============  =====   ======   ======    ===
      1                 EFI               0    30401    30402    100
 
 fdisk c4d0:
 Total disk size is 60800 cylinders
 Cylinder size is 16065 (512 byte) blocks
 
   Cylinders
  Partition   Status    Type          Start    End     Length     %
  =========   ======    ============  =====   ======   ======    ===
      1                 EFI               0    60800    60801    100
 
 
 After creating the zpool with:
 zpool create zbk raidz1 c3d1(250gb) c4d0(500gb) (with no errors)
 
   zpool status zbk
pool: zbk
   state: ONLINE
   scrub: none requested
  config:
 
 NAMESTATE READ WRITE CKSUM
 zbk ONLINE   0 0 0
   raidz1ONLINE   0 0 0
 c3d1ONLINE   0 0 0
 c4d0ONLINE   0 0 0
 
  errors: No known data errors
 
 
 But then df -h shows only 229gb available
 
  df -h /zbk
  FilesystemSize  Used Avail Use% Mounted on
  zbk   229G   22K  229G   1% /zbk

You are using raidz1 (which is less useful when you have just 2 disks;
use a mirror instead for the same safety but better performance), and
raidz1 can't use more space per disk than the smallest disk provides, so
half of the larger disk is unused in this case.  The 250 vs 229GB
difference is just 1000- vs 1024-based counting of kB/MB/GB.
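
Worked out, that conversion is roughly

  250 x 10^9 bytes / 2^30 bytes per binary GB  ~=  233GB

and the remaining few GB go to ZFS labels, metadata and internal overhead,
which lands at about the 229G that df reports (rough figures, the exact
accounting varies).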

For redundant storage, use disks of the same size.  If you just want to
add up both disks together (and lose all data if one disk goes belly
up), create the pool with: zpool create zbk c3d1 c4d0
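
A quick sketch of the two layouts, reusing your device names (these are
alternatives, not a sequence, and the existing pool must be destroyed first):

  zpool destroy zbk                 # throws away the current raidz1 pool and its data

  # option 1: mirror -- ~229G usable, survives the loss of either disk
  zpool create zbk mirror c3d1 c4d0

  # option 2: plain concat/stripe -- all ~750GB (decimal) raw, no redundancy
  zpool create zbk c3d1 c4d0

  zpool list zbk                    # shows the resulting pool size either way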

I seem to recall that ZFS should complain and point out that they are
of (very) different sizes.. But apparently not..

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] After creating zpool of combined 750gb only 229 shows

2009-03-15 Thread Harry Putnam
Tomas Ögren st...@acc.umu.se writes:

 I seem to recall that ZFS should complain and point out that they are
 of (major) different size.. But apparently not..

Thanks for the tips.

It actually did complain about the size difference at one point, so I
used the -f option.  But I later destroyed the zpool I'd created that way.

I thought the problem had something to do with fdisk partitions, so I
ran fdisk and deleted all partitions (there was one on each drive).

After that, when I created the raidz1 with those two disks, there was no
complaint, so I thought I was walking in tall cotton.  Only later, when
I ran df -h and saw that ZFS had reduced the zpool to the smallest disk's
size, did I realize I didn't really understand what I was doing.

Your tips summarize what I'd already found online before seeing your
post.  Very useful to hear from an experienced user on this.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CLI grinds to a halt during backups

2009-03-15 Thread Boom
Here is the info required.



   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  6179 nobody    312M  225M sleep   51    0  12:42:09 0.8% BackupPC_dump/1
  7783 root     3812K 2984K cpu7    50    0   0:00:03 0.4% prstat/1
  7803 root     2948K 1736K sleep   54    0   0:00:00 0.0% top/1
   900 nobody     88M 4140K cpu3    59    0   0:00:00 0.0% httpd/1
   832 nobody     88M 3800K sleep   59    0   0:00:00 0.0% httpd/1
   898 nobody     88M 3700K sleep   59    0   0:00:00 0.0% httpd/1
  7782 root     6172K 3448K sleep   59    0   0:00:00 0.0% sshd/1
  7772 root     2748K 1644K sleep   59    0   0:00:00 0.0% iostat/1
   746 root     3164K 1616K sleep   59    0   0:00:00 0.0% dmispd/1
   516 root     2800K 1532K sleep   59    0   0:00:00 0.0% automountd/2
   513 root     2516K  948K sleep   59    0   0:00:00 0.0% automountd/2
   532 root     4120K 1876K sleep   59    0   0:00:00 0.0% syslogd/13
   829 nobody     88M 3568K sleep   59    0   0:00:00 0.0% httpd/1
   831 nobody     88M 4124K sleep   59    0   0:00:00 0.0% httpd/1
   352 daemon   2436K 1292K sleep   60  -20   0:00:00 0.0% nfs4cbd/2
   430 root     2060K  676K sleep   59    0   0:00:00 0.0% smcboot/1
   300 root     2752K  940K sleep   59    0   0:00:00 0.0% cron/1
   359 daemon   4704K 1752K sleep   59    0   0:00:00 0.0% nfsmapid/3
   173 daemon   4216K 2068K sleep   59    0   0:00:00 0.0% kcfd/3
   517 root     3020K 2020K sleep   59    0   0:00:00 0.0% vold/5
   152 root     1820K 1028K sleep   59    0   0:00:00 0.0% powerd/3
   425 root     4884K 3260K sleep   59    0   0:00:00 0.0% inetd/3
   138 root     4964K 1908K sleep   59    0   0:00:00 0.0% syseventd/15
   428 root     2060K  964K sleep   59    0   0:00:00 0.0% smcboot/1
   393 root     2068K  912K sleep   59    0   0:00:00 0.0% sac/1
   163 root     3684K 2000K sleep   59    0   0:00:00 0.0% devfsadm/6
   167 root     3880K 2620K sleep   59    0   0:00:00 0.0% picld/5
   899 nobody     88M 4100K sleep   59    0   0:00:00 0.0% httpd/1
   398 root     1428K  648K sleep   59    0   0:00:00 0.0% utmpd/1
   350 daemon   2768K 1592K sleep   59    0   0:00:00 0.0% statd/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    12 nobody    901M  512M   6.2%  12:46:35 0.8%
    47 root      329M  209M   2.5%   0:14:01 0.4%
     1 noaccess  171M  204M   2.5%   0:00:59 0.0%
     1 smmsp    1200K 3272K   0.0%   0:00:00 0.0%
     6 daemon   6352K 6216K   0.1%   0:00:00 0.0%






Total: 67 processes, 243 lwps, load averages: 18.49, 15.84, 13.77

 iostat -x 5

                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd2          0.0   18.9    0.0  195.9  0.0  0.0    1.2   0   1
sd3          0.0   19.4    0.0  196.4  0.0  0.0    1.4   0   1
sd4          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd5          0.0   18.9    0.0  176.4  0.0  0.0    1.3   0   1
sd6          0.0   18.4    0.0  166.2  0.0  0.0    1.4   0   1
sd7          0.0   19.4    0.0  175.7  0.0  0.0    1.3   0   1
sd8          0.0   20.2    0.0  178.3  0.0  0.0    1.3   0   1
sd9          0.0   19.9    0.0  213.8  0.0  0.0    1.1   0   1
sd10         0.0   19.4    0.0  196.5  0.0  0.0    1.2   0   1
sd11         0.0   19.7    0.0  200.6  0.0  0.0    1.2   0   1
sd12         0.0   19.4    0.0  175.9  0.0  0.0    1.4   0   1
sd13         0.0   19.4    0.0  188.0  0.0  0.0    1.3   0   1
nfs1         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0


 zpool iostat 5 (if you are using ZFS)

-bash-3.00# zpool iostat 5
              capacity     operations    bandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
pool1   1.68T  8.32T      3    168   371K  9.81M
pool1   1.68T  8.32T  0 68  0  1.58M
pool1   1.68T  8.32T  0 98  0  2.29M
pool1   1.68T  8.32T  0 36  0  1.23M
pool1   1.68T  8.32T  0103  0  2.67M
pool1   1.68T  8.32T  0 16  0  90.8K
pool1   1.68T  8.32T  0104  0  2.88M
pool1   1.68T  8.32T  0 86  0  1.65M
pool1   1.68T  8.32T  0 35  0  1.03M
pool1   1.68T  8.32T  0162  0  4.03M
pool1   1.68T  8.32T  0 46  0  1.35M
pool1   1.68T  8.32T  0 53  0  1.11M
pool1   1.68T  8.32T  0 75  0  2.15M

Also top:

last pid:  7803;  load avg:  18.5,  15.8,  13.8;  up 1+21:19:03    10:06:00
67 processes: 63 sleeping, 2 running, 2 on cpu
CPU states:  7.1% idle,  0.6% user, 92.3% kernel,  0.0% iowait,  0.0% swap
Kernel: 194 ctxsw, 13 trap, 18419 intr, 2955 syscall, 9 flt
Memory: 8191M phys mem, 615M free mem, 20G total swap, 20G free swap

  PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 7783 root       1  50    0 3812K 2984K run      0:03  0.70% prstat
 6179 nobody     1  51    0  312M  225M run    762:09  0.48% BackupPC_dump
  898 nobody     1  59    0   88M 3700K 

Re: [zfs-discuss] ACL interpretation

2009-03-15 Thread Mark Shellenbaum

David Dyer-Bennet wrote:

On page 202 of the December 2008 Solaris ZFS Administration Guide, it says
the ACLs are processed in order.  Then it says that an explicit allow ends
processing (or at least it says that a later deny can't override an
earlier allow).

But that's all it says; it doesn't really describe the interpretation
process completely.  I certainly couldn't implement it from this!  And I
can't figure out what my ACLs should mean from this.

In particular, does a matching deny entry also halt processing?  Or does
processing continue, meaning that a later allow can override an earlier
deny?




An ACL is processed from top to bottom.  A deny entry can't take away
an already granted allow, nor can an allow take back a permission that
an earlier entry already denied.


For example:

user:joe:read_data/write_data:allow
user:joe:write_data:deny

In this case joe would be allowed read_data and write_data.

whereas

user:joe:write_data/execute:deny
user:joe:read_data/write_data:allow

would deny joe the ability to execute or write_data, but joe could 
still read the file's data.


Once a bit has been denied, only a privilege subsystem override can give 
you that ability.
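
A quick way to see the ordering in practice on a ZFS file (file name and
user are hypothetical; chmod A+ prepends the new entry at the top of the
list, as in the ZFS Administration Guide examples):

  touch testfile
  chmod A+user:joe:read_data/write_data:allow testfile
  chmod A+user:joe:write_data:deny testfile   # now sits above the allow, so write_data is denied
  ls -v testfile                              # lists the ACEs in the order they are evaluated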


  -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ACL interpretation

2009-03-15 Thread David Dyer-Bennet

Mark Shellenbaum wrote:

David Dyer-Bennet wrote:
On page 202 of the December 2008 Solaris ZFS Administration Guide, it says
the ACLs are processed in order.  Then it says that an explicit allow ends
processing (or at least it says that a later deny can't override an
earlier allow).

But that's all it says; it doesn't really describe the interpretation
process completely.  I certainly couldn't implement it from this!  And I
can't figure out what my ACLs should mean from this.

In particular, does a matching deny entry also halt processing?  Or does
processing continue, meaning that a later allow can override an earlier
deny?




An ACL is processed from top to bottom.  A deny entry can't take 
away an already granted allow, nor can an allow take back a permission 
that an earlier entry already denied.


For example:

[snip]

Once a bit has been denied only a privilege subsystem override can 
give you that ability.


Thanks, that's what I guessed and what simple experiments seemed to 
show, but I'm happy to have it confirmed.  So the list is processed top 
to bottom and the first definite answer is THE answer.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Freezing OpenSolaris with ZFS

2009-03-15 Thread Blake
This sounds quite like the problems I've been having with a spotty
SATA controller and/or motherboard.  See my thread from last week
about copying large amounts of data that forced a reboot.  Lots of
good info from engineers and users in that thread.



On Sun, Mar 15, 2009 at 1:17 PM, Markus Denhoff denh...@net-bite.net wrote:
 Hi there,

 we set up an OpenSolaris/ZFS based storage server with two zpools: rpool is
 a mirror for the operating system. tank is a raidz for data storage.

 The system is used to store large video files and has attached 12x1TB
 SATA drives (2 mirrored for the system). Every time large files are copied
 around the system hangs without apparent reason, 50% kernel CPU usage (so
 one core is occupied totally) and about 2GB of free RAM (8GB installed). On
 idle nothing crashes. Furthermore every scrub on tank hangs the system up
 below 1% finished. Neither the /var/adm/messages nor the /var/log/syslog
 file contains any errors or warnings. We limited the ZFS ARC cache to 4GB
 with an entry in /etc/system.

 Does anyone have an idea what's happening there and how to solve the problem?

 Below some outputs which may help.

 Thanks and greetings from germany,

 Markus Denhoff,
 Sebastian Friederichs

 [snip]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Freezing OpenSolaris with ZFS

2009-03-15 Thread Tim
On Sun, Mar 15, 2009 at 6:42 PM, Blake blake.ir...@gmail.com wrote:




 On Sun, Mar 15, 2009 at 1:17 PM, Markus Denhoff denh...@net-bite.net
 wrote:
  Hi there,
 
  we set up an OpenSolaris/ZFS based storage server with two zpools: rpool
 is
  a mirror for the operating system. tank is a raidz for data storage.
 
  The system is used to store large video files and has attached 12x1TB
  SATA drives (2 mirrored for the system). Every time large files are copied
  around the system hangs without apparent reason, 50% kernel CPU usage (so
  one core is occupied totally) and about 2GB of free RAM (8GB installed).
 On
  idle nothing crashes. Furthermore every scrub on tank hangs the system up
  below 1% finished. Neither the /var/adm/messages nor the /var/log/syslog
  file contains any errors or warnings. We limited the ZFS ARC cache to 4GB
  with an entry in /etc/system.
 
  Does anyone have an idea what's happening there and how to solve the
 problem?
 
  Below some outputs which may help.
 
  Thanks and greetings from germany,
 
  Markus Denhoff,
  Sebastian Friederichs
 
  [snip]



Might also be helpful to provide the version of OpenSolaris you're on, as
well as the ZFS version.
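
For instance, something along these lines (pool name is just an example):

  cat /etc/release        # OpenSolaris build
  zpool upgrade           # reports the ZFS pool version the system is running
  zpool get version tank  # version of the pool itself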

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Forensics related ZFS questions

2009-03-15 Thread Nicole Beebe
1. Does variable FSB (file system block) sizing extend to files larger
than the recordsize, concerning the last FSB allocated?
 
In other words, for files larger than 128KB, that utilize more than one
full recordsize FSB, will the LAST FSB allocated be 'right-sized' to fit
the remaining data, or will ZFS allocate a full recordsize FSB for the
last 'chunk' of the file?  (This is a file slack issue re: how much will
exist.)
 
2. Can a developer confirm that COW occurs at the FSB level (vs. sector
level, for example)? 
 
In other words, when a single FSB (say 64KB file w/ recordsize=128KB)
file is modified, and it's only one sector within that file that's
modified, is it correct that what's copied-on-write is the entire 64KB
FSB allocated to that file?  (This is a data recovery issue.)
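
For what it's worth, one way to check both points empirically might be to
dump a test file's block pointers with zdb -- a rough sketch, with a
hypothetical pool/dataset and file (the inode number from ls -i is the ZFS
object number):

  ls -i /tank/data/testfile     # note the object (inode) number, e.g. 12345
  zdb -ddddd tank/data 12345    # dumps the object's block pointers and their sizes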
 
 
 
NICOLE L. BEEBE, Ph.D., CISSP
Assistant Professor
The University of Texas at San Antonio
Department of Information Systems & Technology Management
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss