Re: [zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread Ian Collins

dick hoogendijk wrote:
Are there any known issues involving VirtualBox using shared folders 
from a ZFS filesystem?



Why should there be?  A shared folder is just a directory.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Ross Walker

On Sep 27, 2009, at 10:05 PM, Ron Watkins  wrote:

My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another
mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.


There is no need for the two mirrors both on c1t0 and c2t0; one mirrored
rpool with multiple ZFS datasets will provide the same performance
with easier admin.
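For example, a minimal sketch of that approach (dataset name and mountpoint
are made up for illustration):

  # carve an application dataset out of the mirrored root pool instead of a
  # separate slice-based mirror
  zfs create -o mountpoint=/app rpool/app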


As for overlapping zpools, well it can be done, but I'm not sure how  
well it'll perform, or if there are any issues with zpools sharing  
disks.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Ron Watkins
Yes, you are correct about the layout.
However, I don't appear to be able to control how the root pool is configured 
when I install from the live-CD. It either takes:
a) The entire physical disk
b) A slice the same size as the physical disk
c) A smaller slice, but no way to get at the remaining space.
Thus, I'm at a loss as to how to get the root pool set up as a 20 GB slice on
the first disk. If I could somehow partition the disk first and get the root
pool on s0, I could then use prtvtoc/fmthard to make c1t0 look just like
c0t0.
I read some best-practices guides, and they recommend keeping the root pool and
the application pool separate, but I can't figure out how to get to the point
where I can split c0t0 into 2 parts BEFORE creating the root pool.
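The layout-copy step mentioned above is a one-liner; a sketch, using the device
names from the post (adjust to the real controller numbering):

  # copy the slice table from the installed boot disk to the second disk
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2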
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Trevor Pretty




Ron

That should work; it's not really different from SVM.

BTW: did you mean?

mirrored root on c1t0d0s0/c2t0d0s0
mirrored app  on c1t1d0s0/c2t1d0s0
RaidZ accross c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7

I would then make slices 0 and 7 the same on all disks using fmthard
(BTW: I would not use 7, I would use 1 - but that's just preference).

Remember, with ZFS root you don't need spare
slices for Live Upgrade like you did with SVM.
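A sketch of the pools that layout would give you, assuming the installer put
rpool on c1t0d0s0 and the slices already exist (pool names "apps" and "tank"
are just examples):

  # second half of the root mirror
  zpool attach rpool c1t0d0s0 c2t0d0s0
  # mirrored application pool on the other pair of disks
  zpool create apps mirror c1t1d0s0 c2t1d0s0
  # raidz across slice 7 (or slice 1) of all four disks
  zpool create tank raidz c1t0d0s7 c1t1d0s7 c2t0d0s7 c2t1d0s7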


Ron Watkins wrote:

  My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
I want to play with creating iSCSI target LUNs on the RAID-5 partition, so I am trying out OpenSolaris for the first time. In the past, I would use Solaris 10 with SVM to create what I need, but without iSCSI target support.
  












___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Ron Watkins
My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another mirrored app
fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
I want to play with creating iSCSI target LUNs on the RAID-5 partition, so I am
trying out OpenSolaris for the first time. In the past, I would use Solaris 10
with SVM to create what I need, but without iSCSI target support.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Ross Walker

On Sep 27, 2009, at 8:41 PM, Ron Watkins  wrote:

I have a box with 4 disks. It was my intent to place a mirrored root  
partition on 2 disks on different controllers, then use the  
remaining space and the other 2 disks to create a raid-5  
configuration from which to export iscsi luns for use by other hosts.


You can't have a raidz (RAID-5) with 2 disks; it's a 3-disk minimum.
You can have a mirror of the two. Get 2 good large disks and make a
mirror.


The problem I'm having is that when I try to install the OS, it either
takes the entire disk or a partition the same size as the entire
disk. I tried creating 2 slices, but the install won't allow it, and
if I make the Solaris partition smaller, the OS no longer sees
the rest of the disk, only the small piece.


On the install it will ask you for the size you want to use; pick a
smaller size. It will still create a partition filling the whole disk,
but you can use format to create additional slices. I usually make
mine slightly smaller than the disk, which allows you to replace it with a
smaller disk in the future if need be and leaves a little room for
utility slices, like a metadata slice for SVM.


I found references on how to mirror the root disk pool, but the grub
piece doesn't seem to work: when I disconnect the first disk, all I
get at reboot is a grub prompt.


Try zeroing out the first sector (or the first 63 sectors, to get rid of any
junk in the first cylinder), then use format to duplicate the layout of the
first disk, add slice zero, and run installgrub.
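Roughly, and with example device names only (c1t0d0 as the installed disk,
c2t0d0 as the new mirror half):

  # wipe the first cylinder of the new disk
  dd if=/dev/zero of=/dev/rdsk/c2t0d0p0 bs=512 count=63
  # duplicate the slice layout (format can do this interactively, or:)
  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c2t0d0s2
  # after attaching s0 to the root pool, put grub on the new disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0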


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OS install question

2009-09-27 Thread Ron Watkins
I have a box with 4 disks. It was my intent to place a mirrored root partition
on 2 disks on different controllers, then use the remaining space and the other
2 disks to create a RAID-5 configuration from which to export iSCSI LUNs for
use by other hosts.
The problem I'm having is that when I try to install the OS, it either takes the
entire disk or a partition the same size as the entire disk. I tried creating 2
slices, but the install won't allow it, and if I make the Solaris partition
smaller, the OS no longer sees the rest of the disk, only the small piece.
I found references on how to mirror the root disk pool, but the grub piece
doesn't seem to work: when I disconnect the first disk, all I get at reboot is
a grub prompt.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Lori Alt

Bill Sommerfeld wrote:

On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
  

The list of datasets in a root pool should look something like this:


...
  
rpool/swap  



I've had success with putting swap into other pools.  I believe others
have, as well.

  
Yes, that's true.  Swap can be in a different pool.  
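For anyone following along, a minimal sketch of swap on a non-root pool (the
pool, volume name and size are examples):

  # create a swap volume in another pool and activate it
  zfs create -V 4G tank/swap
  swap -a /dev/zvol/dsk/tank/swap
  swap -l    # verify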



  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool replace single disk with raidz

2009-09-27 Thread Richard Elling

On Sep 27, 2009, at 2:28 PM, Trevor Pretty wrote:



To: ZFS Developers.

I know we hate them but an "Are you sure?" may have helped here, and  
may be a quicker fix than waiting for 4852783  (just thinking out  
loud here). Could the zfs command have worked out c5d0 was a single  
disk and attaching it to the pool would have been dumb?


It already does this, and has for some time.
# zpool create zwimming raidz /dev/ramdisk/rd1 /dev/ramdisk/rd2 /dev/ramdisk/rd3

# zpool add zwimming /dev/ramdisk/rd4
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

Which basically prompts someone to add the "-f"
See also the first great debate on zfs-discuss.
 -- richard





Ryan Hirsch wrote:


I have a zpool named rtank.  I accidently attached a single drive  
to the pool.  I am an idiot I know :D Now I want to replace this  
single drive with a raidz group.  Below is the pool setup and what  
I tried:



NAME        STATE     READ WRITE CKSUM
rtank       ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c4t0d0  ONLINE       0     0     0
    c4t1d0  ONLINE       0     0     0
    c4t2d0  ONLINE       0     0     0
    c4t3d0  ONLINE       0     0     0
    c4t4d0  ONLINE       0     0     0
    c4t5d0  ONLINE       0     0     0
    c4t6d0  ONLINE       0     0     0
    c4t7d0  ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c3t0d0  ONLINE       0     0     0
    c3t1d0  ONLINE       0     0     0
    c3t2d0  ONLINE       0     0     0
    c3t3d0  ONLINE       0     0     0
    c3t4d0  ONLINE       0     0     0
    c3t5d0  ONLINE       0     0     0
  c5d0      ONLINE       0     0     0  <--- single drive in the pool, not in any raidz



$ pfexec zpool replace rtank c5d0 raidz c3t6d0 c3t7d0 c3t8d0 c3t9d0  
c3t10d0 c3t11d0

too many arguments

$ zpool upgrade -v
This system is currently running ZFS pool version 18.


Is what I am trying to do possible?  If so what am I doing wrong?   
Thanks.




--
Trevor Pretty | Technical Account Manager | +64 9 639 0652 | +64 21  
666 161

Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread Trevor Pretty




Dick

I'm 99% sure I used to do this when I had OpenSolaris as my base OS, to
an XP guest (no NFS client - Bob) for my $HOME.

Now I use Vista as my base OS because I now work in an MS environment,
so sorry, I can't check. Are you having problems?

BTW: thank goodness for VirtualBox when I want to do real file
manipulation, rather than Windows Explorer!

Trevor

dick hoogendijk wrote:

  Are there any known issues involving VirtualBox using shared folders 
from a ZFS filesystem?

  











___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool replace single disk with raidz

2009-09-27 Thread Trevor Pretty





To: ZFS Developers. 

I know we hate them but an "Are you sure?" may have helped here, and
may be a quicker fix than waiting for 4852783 
(just thinking out loud here). Could the zfs command have worked out
c5d0 was a single disk and attaching it to the pool would have been
dumb?


Ryan Hirsch wrote:

  I have a zpool named rtank.  I accidently attached a single drive to the pool.  I am an idiot I know :D Now I want to replace this single drive with a raidz group.  Below is the pool setup and what I tried:
 

NAME        STATE     READ WRITE CKSUM
rtank       ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c4t0d0  ONLINE       0     0     0
    c4t1d0  ONLINE       0     0     0
    c4t2d0  ONLINE       0     0     0
    c4t3d0  ONLINE       0     0     0
    c4t4d0  ONLINE       0     0     0
    c4t5d0  ONLINE       0     0     0
    c4t6d0  ONLINE       0     0     0
    c4t7d0  ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c3t0d0  ONLINE       0     0     0
    c3t1d0  ONLINE       0     0     0
    c3t2d0  ONLINE       0     0     0
    c3t3d0  ONLINE       0     0     0
    c3t4d0  ONLINE       0     0     0
    c3t5d0  ONLINE       0     0     0
  c5d0      ONLINE       0     0     0  <--- single drive in the pool, not in any raidz


$ pfexec zpool replace rtank c5d0 raidz c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0
too many arguments

$ zpool upgrade -v
This system is currently running ZFS pool version 18.


Is what I am trying to do possible?  If so what am I doing wrong?  Thanks.
  


-- 
Trevor Pretty | Technical Account Manager | +64 9 639 0652 | +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Albert Chin
On Sun, Sep 27, 2009 at 10:06:16AM -0700, Andrew wrote:
> This is what my /var/adm/messages looks like:
> 
> Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed: ss 
> == NULL, file: ../../common/fs/zfs/space_map.c, line: 109
> Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a97a0 
> genunix:assfail+7e ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9830 
> zfs:space_map_add+292 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a98e0 
> zfs:space_map_load+3a7 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9920 
> zfs:metaslab_activate+64 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a99e0 
> zfs:metaslab_group_alloc+2b7 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9ac0 
> zfs:metaslab_alloc_dva+295 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b60 
> zfs:metaslab_alloc+9b ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b90 
> zfs:zio_dva_allocate+3e ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9bc0 
> zfs:zio_execute+a0 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c40 
> genunix:taskq_thread+193 ()
> Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c50 
> unix:thread_start+8 ()

I'm not sure that aok=1/zfs:zfs_recover=1 would help you because
zfs_panic_recover isn't in the backtrace (see
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6638754).
Sometimes a Sun zfs engineer shows up on the freenode #zfs channel. I'd
pop up there and ask. There are somewhat similar bug reports at
bugs.opensolaris.org. I'd post a bug report just in case.
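For reference, those knobs are normally set in /etc/system and take effect
after a reboot (a sketch; as noted above they only matter when
zfs_panic_recover is actually in the code path):

  * /etc/system entries; these relax assertions globally, so use with care
  set aok=1
  set zfs:zfs_recover=1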

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread Bob Friesenhahn

On Sun, 27 Sep 2009, dick hoogendijk wrote:

Are there any known issues involving VirtualBox using shared folders from a 
ZFS filesystem?


I am not sure what you mean by 'shared folders', but I am using an NFS 
mount to access the host ZFS filesystem.  It works great.  I have less 
faith in VirtualBox's "local" filesystem access to the host's files.
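A minimal sketch of that setup (dataset and host names are examples; a
non-Solaris guest would use its own mount syntax, e.g. mount -t nfs on Linux):

  # on the OpenSolaris host: export the filesystem over NFS
  zfs set sharenfs=on tank/export/shared
  # in the guest, or any other NFS client (Solaris syntax shown)
  mount -F nfs host:/tank/export/shared /mnt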


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer

1:19pm, Richard Elling wrote:

The other thing that's weird is the writes. I am seeing writes in that 
3.5MB/sec range during the resilver, *and* I was seeing the same thing 
during the dd.
This is from the resilver, but again, the dd was similar. c7d0 is the 
device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0


This is the bottleneck. 29.2 ms average service time is slow.
As you can see, this causes a backup in the queue, which is
seeing an average service time of 206 ms.

The problem could be the disk itself or anything in the path
to that disk, including software.  But first, look for hardware
issues via
iostat -E
fmadm faulty
fmdump -eV



I don't see anything in the output of these commands except for the ZFS 
errors from when I was trying to get the disk online and resilvered.
I estimate another 10-15 hours before this disk is finished resilvering 
and the zpool is OK again. At that time, I'm going to switch some hardware 
out (I've got a newer and higher-end LSI card that I hadn't used before 
because it's PCI-X, and won't fit on my current motherboard.)
I'll report back what I get with it tomorrow or the next day, depending on 
the timing on the resilver.


Paul Archer
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Richard Elling


On Sep 27, 2009, at 8:49 AM, Paul Archer wrote:

Problem is that while it's back, the performance is horrible. It's  
resilvering at about (according to iostat) 3.5MB/sec. And at some  
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/ 
dsk/c7d0'), and iostat showed me that the drive was only writing  
at around 3.5MB/sec. *And* it showed reads of about the same 3.5MB/ 
sec even during the dd.
This same hardware and even the same zpool have been run under  
linux with zfs-fuse and BSD, and with BSD at least, performance  
was much better. A complete resilver under BSD took 6 hours. Right  
now zpool is estimating this resilver to take 36.
Could this be a driver problem? Something to do with the fact that  
this is a very old SATA card (LSI 150-6)?
This is driving me crazy. I finally got my zpool working under  
Solaris so I'd have some stability, and I've got no performance.





It appears your controller is preventing ZFS from enabling write  
cache.


I'm not familiar with that model. You will need to find a way to  
enable the drives write cache manually.




My controller, while normally a full RAID controller, has had its  
BIOS turned off, so it's acting as a simple SATA controller. Plus,  
I'm seeing this same slow performance with dd, not just with ZFS.  
And I wouldn't think that write caching would make a difference with  
using dd (especially writing in from /dev/zero).


The other thing that's weird is the writes. I am seeing writes in  
that 3.5MB/sec range during the resilver, *and* I was seeing the  
same thing during the dd.
This is from the resilver, but again, the dd was similar. c7d0 is  
the device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0


This is the bottleneck. 29.2 ms average service time is slow.
As you can see, this causes a backup in the queue, which is
seeing an average service time of 206 ms.

The problem could be the disk itself or anything in the path
to that disk, including software.  But first, look for hardware
issues via
iostat -E
fmadm faulty
fmdump -eV

 -- richard



   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0


Paul Archer


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker

On Sep 27, 2009, at 1:44 PM, Paul Archer  wrote:

My controller, while normally a full RAID controller, has had its  
BIOS turned off, so it's acting as a simple SATA controller. Plus,  
I'm seeing this same slow performance with dd, not just with ZFS.  
And I wouldn't think that write caching would make a difference  
with using dd (especially writing in from /dev/zero).


I don't think you got what I said. Because the controller normally
runs as a RAID controller, it controls the SATA drives'
on-board write cache, and it may not allow the OS to enable/disable the
drives' on-board write cache.


I see what you're saying. I just think that with the BIOS turned  
off, this card is essentially acting like a dumb SATA controller,  
and therefore not doing anything with the drives' cache.


You are probably right that the controller doesn't do anything,
neither enables nor disables the drives' cache, so whatever they were
set to before it was switched to JBOD mode is what they are now.



Using 'dd' to the raw disk will also show the same poor performance  
if the HD on-board write-cache is disabled.


The other thing that's weird is the writes. I am seeing writes in  
that 3.5MB/sec range during the resilver, *and* I was seeing the  
same thing during the dd.


Was the 'dd' to the raw disk? Either way, it shows the HDs aren't
set up properly.


This is from the resilver, but again, the dd was similar. c7d0 is  
the device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0


Try using 'format -e' on the drives, go into 'cache' then 'write- 
cache' and display the current state. You can try to manually  
enable it from there.




I tried this, but the 'cache' menu item didn't show up. The man page  
says it only works for SCSI disks. Do you know of any other way to  
get/set those parameters?


Hmm, I thought SATA under Solaris behaved like SCSI? I use RAID
controllers that export each disk as a RAID-0, which uses the SCSI command
set and abstracts the SATA disks, so I have no way to verify myself.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer


My controller, while normally a full RAID controller, has had its BIOS 
turned off, so it's acting as a simple SATA controller. Plus, I'm seeing 
this same slow performance with dd, not just with ZFS. And I wouldn't think 
that write caching would make a difference with using dd (especially 
writing in from /dev/zero).


I don't think you got what I said. Because the controller normally runs as a
RAID controller, it controls the SATA drives' on-board write
cache, and it may not allow the OS to enable/disable the drives' on-board write
cache.


I see what you're saying. I just think that with the BIOS turned off, this 
card is essentially acting like a dumb SATA controller, and therefore not 
doing anything with the drives' cache.




Using 'dd' to the raw disk will also show the same poor performance if the HD 
on-board write-cache is disabled.


The other thing that's weird is the writes. I am seeing writes in that 
3.5MB/sec range during the resilver, *and* I was seeing the same thing 
during the dd.


Was the 'dd' to the raw disk? Either way, it shows the HDs aren't set up
properly.


This is from the resilver, but again, the dd was similar. c7d0 is the 
device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0


Try using 'format -e' on the drives, go into 'cache' then 'write-cache' and 
display the current state. You can try to manually enable it from there.




I tried this, but the 'cache' menu item didn't show up. The man page says 
it only works for SCSI disks. Do you know of any other way to get/set 
those parameters?


Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Andrew
This is what my /var/adm/messages looks like:

Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed: ss 
== NULL, file: ../../common/fs/zfs/space_map.c, line: 109
Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a97a0 
genunix:assfail+7e ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9830 
zfs:space_map_add+292 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a98e0 
zfs:space_map_load+3a7 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9920 
zfs:metaslab_activate+64 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a99e0 
zfs:metaslab_group_alloc+2b7 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9ac0 
zfs:metaslab_alloc_dva+295 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b60 
zfs:metaslab_alloc+9b ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9b90 
zfs:zio_dva_allocate+3e ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9bc0 
zfs:zio_execute+a0 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c40 
genunix:taskq_thread+193 ()
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice] ff00089a9c50 
unix:thread_start+8 ()
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker

On Sep 27, 2009, at 11:49 AM, Paul Archer  wrote:

Problem is that while it's back, the performance is horrible. It's  
resilvering at about (according to iostat) 3.5MB/sec. And at some  
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/ 
dsk/c7d0'), and iostat showed me that the drive was only writing  
at around 3.5MB/sec. *And* it showed reads of about the same 3.5MB/ 
sec even during the dd.
This same hardware and even the same zpool have been run under  
linux with zfs-fuse and BSD, and with BSD at least, performance  
was much better. A complete resilver under BSD took 6 hours. Right  
now zpool is estimating this resilver to take 36.
Could this be a driver problem? Something to do with the fact that  
this is a very old SATA card (LSI 150-6)?
This is driving me crazy. I finally got my zpool working under  
Solaris so I'd have some stability, and I've got no performance.





It appears your controller is preventing ZFS from enabling write  
cache.


I'm not familiar with that model. You will need to find a way to  
enable the drives write cache manually.




My controller, while normally a full RAID controller, has had its  
BIOS turned off, so it's acting as a simple SATA controller. Plus,  
I'm seeing this same slow performance with dd, not just with ZFS.  
And I wouldn't think that write caching would make a difference with  
using dd (especially writing in from /dev/zero).


I don't think you got what I said. Because the controller normally
runs as a RAID controller, it controls the SATA drives' on-board
write cache, and it may not allow the OS to enable/disable the
drives' on-board write cache.


Using 'dd' to the raw disk will also show the same poor performance if  
the HD on-board write-cache is disabled.


The other thing that's weird is the writes. I am seeing writes in  
that 3.5MB/sec range during the resilver, *and* I was seeing the  
same thing during the dd.


Was the 'dd' to the raw disk? Either way, it shows the HDs aren't set up
properly.


This is from the resilver, but again, the dd was similar. c7d0 is  
the device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0


Try using 'format -e' on the drives, go into 'cache' then 'write- 
cache' and display the current state. You can try to manually enable  
it from there.
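The menu sequence looks roughly like this (prompts and item names from memory;
the cache menu only appears for disks the system drives through the SCSI
framework):

  format -e          # then select the disk, e.g. c7d0
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable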


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fixing Wikipedia tmpfs article (was Re: Which directories must be part of rpool?)

2009-09-27 Thread Frank Middleton

On 09/27/09 11:25 AM, Joerg Schilling wrote:

Frank Middleton  wrote:



Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS

"it first appeared in SunOS 4.1, released in March 1990"


It appeared with SunOS-4.0. The official release was probably February 1987,
but there had been betas before, IIRC.


Do you have any references one could quote so that the Wikipedia
article can be corrected? The section on Solaris is rather skimpy
and could do with some work...

AFAIK this has nothing to do with ZFS. I wonder if we should
move it to another discussion. Apologies to the OP for hijacking
your thread, although I think the original question has been
answered only too thoroughly :-)

Cheers -- Frank



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Expanding RAIDZ with larger disks - can't see all space.

2009-09-27 Thread Chris Murray
I knew it would be something simple!!  :-)

Now 3.63TB, as expected, and no need to export and import either! Thanks
Richard, that's done the trick.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
Problem is that while it's back, the performance is horrible. It's 
resilvering at about (according to iostat) 3.5MB/sec. And at some point, I 
was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and 
iostat showed me that the drive was only writing at around 3.5MB/sec. *And* 
it showed reads of about the same 3.5MB/sec even during the dd.


This same hardware and even the same zpool have been run under linux with 
zfs-fuse and BSD, and with BSD at least, performance was much better. A 
complete resilver under BSD took 6 hours. Right now zpool is estimating 
this resilver to take 36.


Could this be a driver problem? Something to do with the fact that this is 
a very old SATA card (LSI 150-6)?


This is driving me crazy. I finally got my zpool working under Solaris so 
I'd have some stability, and I've got no performance.






It appears your controller is preventing ZFS from enabling write cache.

I'm not familiar with that model. You will need to find a way to enable the 
drives write cache manually.




My controller, while normally a full RAID controller, has had its BIOS 
turned off, so it's acting as a simple SATA controller. Plus, I'm seeing 
this same slow performance with dd, not just with ZFS. And I wouldn't 
think that write caching would make a difference with using dd (especially 
writing in from /dev/zero).


The other thing that's weird is the writes. I am seeing writes in that 
3.5MB/sec range during the resilver, *and* I was seeing the same thing 
during the dd.
This is from the resilver, but again, the dd was similar. c7d0 is the 
device in question:


    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0


Paul Archer
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Borked zpool, missing slog/zil

2009-09-27 Thread Erik Ableson
Good link - thanks. I'm looking at the details for that one and learning a 
little zdb at the same time. I've got a situation perhaps a little different in 
that I _do_ have a current copy of the slog in a file with what appears to be 
current data.

However, I don't see how to attach the slog file to an offline zpool - I have 
both a dd backup of the ramdisk slog from midnight as well as the current file 
based slog :

zdb -l /root/slog.tmp

version=14
name='siovale'
state=1
txg=4499446
pool_guid=13808783103733022257
hostid=4834000
hostname='shemhazai'
top_guid=6374488381605474740
guid=6374488381605474740
is_log=1
vdev_tree
type='file'
id=1
guid=6374488381605474740
path='/root/slog.tmp'
metaslab_array=230
metaslab_shift=21
ashift=9
asize=938999808
is_log=1
DTL=51

Is there any way that I can attach this slog to the zpool while it's offline?

Erik

On 27 sept. 2009, at 02:23, David Turnbull  wrote:

> I believe this is relevant: http://github.com/pjjw/logfix
> Saved my array last year, looks maintained.
>
> On 27/09/2009, at 4:49 AM, Erik Ableson wrote:
>
>> Hmmm - this is an annoying one.
>>
>> I'm currently running an OpenSolaris install (2008.11 upgraded to  
>> 2009.06) :
>> SunOS shemhazai 5.11 snv_111b i86pc i386 i86pc Solaris
>>
>> with a zpool made up of one radiz vdev and a small ramdisk based  
>> zil.  I usually swap out the zil for a file-based copy when I need  
>> to reboot (zpool replace /dev/ramdisk/slog /root/slog.tmp) but this  
>> time I had a brain fart and forgot to.
>>
>> The server came back up and I could sort of work on the zpool but  
>> it was complaining so I did my replace command and it happily  
>> resilvered.  Then I restarted one more time in order to test  
>> bringing everything up cleanly and this time it can't find the file  
>> based zil.
>>
>> I try importing and it comes back with:
>> zpool import
>> pool: siovale
>>   id: 13808783103733022257
>> state: UNAVAIL
>> status: One or more devices are missing from the system.
>> action: The pool cannot be imported. Attach the missing
>>   devices and try again.
>>  see: http://www.sun.com/msg/ZFS-8000-6X
>> config:
>>
>>   siovale       UNAVAIL  missing device
>>     raidz1      ONLINE
>>       c8d0      ONLINE
>>       c9d0      ONLINE
>>       c10d0     ONLINE
>>       c11d0     ONLINE
>>
>>   Additional devices are known to be part of this pool, though  
>> their
>>   exact configuration cannot be determined.
>>
>> Now the file still exists so I don't know why it can't seem to find  
>> it and I thought the missing zil issue was corrected in this  
>> version (or did I miss something?).
>>
>> I've looked around for solutions to bring it back online and ran  
>> across this method: 
>> but before I jump in on this one I was hoping there was a newer,
>> cleaner approach that I missed somehow.
>>
>> Ideas appreciated...
>>
>> Erik
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Joerg Schilling
Frank Middleton  wrote:

> On 09/27/09 03:05 AM, Joerg Schilling wrote:
>
> > BTW: Solaris has tmpfs since late 1987.
>
> Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
>
> "it first appeared in SunOS 4.1, released in March 1990"

It appeared with SunOS-4.0. The official release was probably February 1987,
but there had been betas before, IIRC.
   
> > It is a de-facto standard since then as it e.g. helps to reduce compile 
> > times.
>
> You bet! Provided the compiler doesn't use /var/tmp as IIRC early
> versions of gcc once did...

I know that gcc ignored facts for a long time.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Joerg Schilling
Richard Elling  wrote:

> > BTW: Solaris has tmpfs since late 1987.
> >
> > It is a de-facto standard since then as it e.g. helps to reduce  
> > compile times.
>
> Yep, and before that, there was just an rc script to rm everything in /tmp.

IIRC, SunOS-3.x did call (cd /tmp; rm -rf *)

Most Linux distros do AFAIR either not remove the content in /tmp or just call
(cd /tmp; rm *) which may leave all files or all files in sub-directories.

If people depend on this behavior, they make a mistake ;-)


Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread David Magda

On Sep 27, 2009, at 10:41, Frank Middleton wrote:


You bet! Provided the compiler doesn't use /var/tmp as IIRC early
versions of gcc once did...


I find using "-pipe" better:

    -pipe
        Use pipes rather than temporary files for communication between the
        various stages of compilation.  This fails to work on some systems
        where the assembler is unable to read from a pipe; but the GNU
        assembler has no trouble.

That's with GCC. Not sure if other compilers have anything similar.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Richard Elling

On Sep 27, 2009, at 12:05 AM, Joerg Schilling wrote:


Toby Thain  wrote:


at least as of RHFC10. I have files in /tmp
going back to Feb 2008 :-). Evidently, quoting Wikipedia,
"tmpfs is supported by the Linux kernel from version 2.4 and up."
http://en.wikipedia.org/wiki/TMPFS, FC1 6 years ago. Solaris /tmp
has been a tmpfs since 1990...


The question wasn't "who was first".


BTW: Solaris has tmpfs since late 1987.

It is a de-facto standard since then as it e.g. helps to reduce  
compile times.


Yep, and before that, there was just an rc script to rm everything in /tmp.

No rocket science needed :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Expanding RAIDZ with larger disks - can't see all space.

2009-09-27 Thread Richard Elling

On Sep 27, 2009, at 2:34 AM, Chris Murray wrote:

Posting this question again as I originally tagged it onto the end  
of a series of longwinded posts of mine where I was having problems  
replacing a drive. After dodgy cabling and a few power cuts, I  
finally got the new drive resilvered.


Before this final replace, I had 3 x 1TB & 1 x 750GB drives in  
RAIDZ1 zpool.


After the replace, all four are 1TB, but I can still only see a  
total of 2.73TB in zpool list.


I have tried:
1. Reboot.
2. zpool export, then a zpool import.
3. zpool export, reboot, zpool import.

However, I can still only see 2.73TB total. Any ideas what it could  
be?


zpool set autoexpand=on poolname
 -- richard
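A sketch of the whole sequence, with "poolname" as a placeholder (the
autoexpand property needs a build recent enough to have it, which SXCE 119
should be):

  zpool set autoexpand=on poolname
  zpool list poolname    # capacity should now reflect the 4 x 1TB disks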

All four disks show as 931.51GB in format. Where should I start  
troubleshooting to see why ZFS isn't using all of this space?
I'm currently on SXCE119. I tried a mock-scenario in VMware using  
the 2009.06 live cd, which worked correctly after an export and  
import. Can't do this on my setup, however, as I have upgraded my  
zpool to the latest version, and it can't be read using the CD now.


Thanks,
Chris
--
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Frank Middleton

On 09/27/09 03:05 AM, Joerg Schilling wrote:


BTW: Solaris has tmpfs since late 1987.


Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS

"it first appeared in SunOS 4.1, released in March 1990"
 

It is a de-facto standard since then as it e.g. helps to reduce compile times.


You bet! Provided the compiler doesn't use /var/tmp as IIRC early
versions of gcc once did...

-- Frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Ross Walker

On Sep 27, 2009, at 3:19 AM, Paul Archer  wrote:

So, after *much* wrangling, I managed to take one of my drives
offline, relabel/repartition it (because I saw that the first sector
was 34, not 256, and realized there could be an alignment issue),
and get it back into the pool.


Problem is that while it's back, the performance is horrible. It's  
resilvering at about (according to iostat) 3.5MB/sec. And at some  
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/ 
dsk/c7d0'), and iostat showed me that the drive was only writing at  
around 3.5MB/sec. *And* it showed reads of about the same 3.5MB/sec  
even during the dd.


This same hardware and even the same zpool have been run under linux  
with zfs-fuse and BSD, and with BSD at least, performance was much  
better. A complete resilver under BSD took 6 hours. Right now zpool  
is estimating this resilver to take 36.


Could this be a driver problem? Something to do with the fact that  
this is a very old SATA card (LSI 150-6)?


This is driving me crazy. I finally got my zpool working under  
Solaris so I'd have some stability, and I've got no performance.


It appears your controller is preventing ZFS from enabling the write cache.

I'm not familiar with that model. You will need to find a way to
enable the drives' write cache manually.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Ancestor filesystems writable by zone admin - by design?

2009-09-27 Thread Miles Benson
Hi All,

I'm not sure whether what I'm seeing is by design or by misconfiguration.  I created a 
filesystem "tank/zones" to hold some zones, then created a specific zone 
filesystem "tank/zones/basezone".  Then I built a zone, setting 
zonepath=/tank/zones/basezone.
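For reference, a sketch of that setup as described (commands reconstructed
from the description above; install options omitted):

  zfs create tank/zones
  zfs create tank/zones/basezone
  zonecfg -z basezone 'create; set zonepath=/tank/zones/basezone'
  zoneadm -z basezone install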

If I zlogin to basezone, and do zfs list, it shows the ancestors to basezone

tank
tank/zones
tank/zones/basezone
tank/zones/basezone/ROOT
tank/zones/basezone/ROOT/zbe

This in itself is not ideal - if a zone becomes compromised then it reveals 
something about the underlying pool and filesystems.  I can live with it.

However, if I become root in the zone then the ancestor filesystem is 
*writable*. I can write a file in /tank/zones!  So if I delegate root access to 
a zone to someone, all of a sudden they can write to the entire pool?

Am I doing something wrong?  Any and all suggestions welcome!

Thanks
Miles
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread dick hoogendijk
Are there any known issues involving VirtualBox using shared folders 
from a ZFS filesystem?


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carrol)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Borked zpool, missing slog/zil

2009-09-27 Thread Erik Ableson
Hmmm - I've got a fairly old copy of the zpool cache file (circa July), but 
nothing structural has changed in pool since that date. What other data is held 
in that file? There have been some filesystem changes, but nothing critical is 
in the newer filesystems.

Any particular procedure required for swapping out the zpool.cache file?

Erik

On Sunday, 27 September, 2009, at 12:28AM, "Ross"  
wrote:
>Do you have a backup copy of your zpool.cache file?
>
>If you have that file, ZFS will happily mount a pool on boot without its slog 
>device - it'll just flag the slog as faulted and you can do your normal 
>replace.  I used that for a long while on a test server with a ramdisk slog - 
>and I never needed to swap it to a file based slog.
>
>However without a backup of that file to make zfs load the pool on boot I 
>don't believe there is any way to import that pool.
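A sketch of the backup step Ross describes (restoring a stale copy is only
safe if the pool configuration hasn't changed since the copy was taken):

  # while the pool (and its slog) is healthy, keep a copy of the cache file
  cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
  # put it back before a reboot that will lose the ramdisk slog
  cp /etc/zfs/zpool.cache.bak /etc/zfs/zpool.cache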
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Expanding RAIDZ with larger disks - can't see all space.

2009-09-27 Thread Chris Murray
Posting this question again as I originally tagged it onto the end of a series 
of longwinded posts of mine where I was having problems replacing a drive. 
After dodgy cabling and a few power cuts, I finally got the new drive 
resilvered.

Before this final replace, I had 3 x 1TB & 1 x 750GB drives in RAIDZ1 zpool.

After the replace, all four are 1TB, but I can still only see a total of 2.73TB 
in zpool list.

I have tried:
1. Reboot.
2. zpool export, then a zpool import.
3. zpool export, reboot, zpool import.

However, I can still only see 2.73TB total. Any ideas what it could be?
All four disks show as 931.51GB in format. Where should I start troubleshooting 
to see why ZFS isn't using all of this space?
I'm currently on SXCE119. I tried a mock-scenario in VMware using the 2009.06 
live cd, which worked correctly after an export and import. Can't do this on my 
setup, however, as I have upgraded my zpool to the latest version, and it can't 
be read using the CD now.

Thanks,
Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Albert Chin
On Sun, Sep 27, 2009 at 12:25:28AM -0700, Andrew wrote:
> I'm getting the same thing now.
> 
> I tried moving my 5-disk raidZ and 2disk Mirror over to another
> machine, but that machine would keep panic'ing (not ZFS related
> panics). When I brought the array back over, I started getting this as
> well.. My Mirror array is unaffected.
> 
> snv111b (2009.06 release)

What does the panic dump look like?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Andrew
I'm getting the same thing now.

I tried moving my 5-disk raidz and 2-disk mirror over to another machine, but 
that machine would keep panicking (not ZFS-related panics). When I brought the 
array back over, I started getting this as well. My mirror array is unaffected.

snv111b (2009.06 release)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
So, after *much* wrangling, I managed to take one of my drives offline, 
relabel/repartition it (because I saw that the first sector was 34, not 
256, and realized there could be an alignment issue), and get it back into 
the pool.


Problem is that while it's back, the performance is horrible. It's 
resilvering at about (according to iostat) 3.5MB/sec. And at some point, I 
was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and 
iostat showed me that the drive was only writing at around 3.5MB/sec. 
*And* it showed reads of about the same 3.5MB/sec even during the dd.


This same hardware and even the same zpool have been run under linux with 
zfs-fuse and BSD, and with BSD at least, performance was much better. A 
complete resilver under BSD took 6 hours. Right now zpool is estimating 
this resilver to take 36.


Could this be a driver problem? Something to do with the fact that this is 
a very old SATA card (LSI 150-6)?


This is driving me crazy. I finally got my zpool working under Solaris so 
I'd have some stability, and I've got no performance.


Paul Archer



Friday, Paul Archer wrote:

Since I got my zfs pool working under solaris (I talked on this list last 
week about moving it from linux & bsd to solaris, and the pain that was), I'm 
seeing very good reads, but nada for writes.


Reads:

r...@shebop:/data/dvds# rsync -aP young_frankenstein.iso /tmp
sending incremental file list
young_frankenstein.iso
^C1032421376  20%   86.23MB/s0:00:44

Writes:

r...@shebop:/data/dvds# rsync -aP /tmp/young_frankenstein.iso yf.iso
sending incremental file list
young_frankenstein.iso
^C  68976640   6%2.50MB/s0:06:42


This is pretty typical of what I'm seeing.


r...@shebop:/data/dvds# zpool status -v
 pool: datapool
state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
   still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
   pool will no longer be accessible on older software versions.
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2d0s0  ONLINE       0     0     0
            c3d0s0  ONLINE       0     0     0
            c4d0s0  ONLINE       0     0     0
            c6d0s0  ONLINE       0     0     0
            c5d0s0  ONLINE       0     0     0

errors: No known data errors

 pool: syspool
state: ONLINE
scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        syspool   ONLINE       0     0     0
          c0d1s0  ONLINE       0     0     0

errors: No known data errors

(This is while running an rsync from a remote machine to a ZFS filesystem)
r...@shebop:/data/dvds# iostat -xn 5
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   11.1    4.8  395.8  275.9  5.8  0.1  364.7    4.3   2   5 c0d1
    9.8   10.9  514.3  346.4  6.8  1.4  329.7   66.7  68  70 c5d0
    9.8   10.9  516.6  346.4  6.7  1.4  323.1   66.2  67  70 c6d0
    9.7   10.9  491.3  346.3  6.7  1.4  324.7   67.2  67  70 c3d0
    9.8   10.9  519.9  346.3  6.8  1.4  326.7   67.2  68  71 c4d0
    9.8   11.0  493.5  346.6  3.6  0.8  175.3   37.9  38  41 c2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0d1
   64.6   12.6 8207.4  382.1 32.8  2.0  424.7   25.9 100 100 c5d0
   62.2   12.2 7203.2  370.1 27.9  2.0  375.1   26.7  99 100 c6d0
   53.2   11.8 5973.9  390.2 25.9  2.0  398.8   30.5  98  99 c3d0
   49.4   10.6 5398.2  389.8 30.2  2.0  503.7   33.3  99 100 c4d0
   45.2   12.8 5431.4  337.0 14.3  1.0  247.3   17.9  52  52 c2d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0


Any ideas?

Paul


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-27 Thread Joerg Schilling
Toby Thain  wrote:

> > at least as of RHFC10. I have files in /tmp
> > going back to Feb 2008 :-). Evidently, quoting Wikipedia,
> > "tmpfs is supported by the Linux kernel from version 2.4 and up."
> > http://en.wikipedia.org/wiki/TMPFS, FC1 6 years ago. Solaris /tmp
> > has been a tmpfs since 1990...
>
> The question wasn't "who was first".

BTW: Solaris has tmpfs since late 1987.

It is a de-facto standard since then as it e.g. helps to reduce compile times.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss