Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-08-05 Thread sanjay nadkarni (Laptop)
Luca Morettoni reported a similar behavior (i.e. a perfectly running 
system that drops into grub on reboot) on indiana-discuss.  I wonder if 
the issue is that installgrub is updating the MBR on only one disk.  If the 
second disk does not have an updated grub menu, that would explain what 
you are seeing.  If this indeed is the issue, then what is puzzling is 
why the BIOS changed the boot order.  Was the BIOS updated, resetting the 
values to some defaults?

Lori Alt wrote:
 Seymour Krebs wrote:
   
 Machine is running x86 snv_94 after a recent upgrade from OpenSolaris 2008.05. 
  ZFS and zpool reported no troubles except suggesting an upgrade from 
 ver.10 to ver.11; seemed like a good idea at the time.  The system was up for 
 several days after that point, then was taken down for some unrelated maintenance.

 Now it will not boot OpenSolaris; it drops to a grub prompt, no menus.

 zfs was mirrored on two disks, c6d0s0 and c7d0.  I never noted the GRUB 
 commands for booting and I'm not really familiar with the nomenclature.  At 
 this point I am hoping that a burn of SXCE snv_94 will give me access to the 
 zfs pools so I can try update-grub, but at this point it will be about 9 
 hours to download the .iso and I kinda need to work on data residing in that 
 system

   
 
 I'll try to help, but I'm confused by a few things.  First, when
 you say that you upgraded from OpenSolaris 2008.05 to snv_94,
 what do you mean?  Because I'm not sure how one upgrades
 an IPS-based release to the older SVR4 packages-based
 release type. 
In the IPS world, one upgrades using the command pkg image-update.  The pkg 
commands link with the beadm libraries: the current boot environment in rpool 
is snapshotted, then cloned.  The clone is mounted on a temporary mountpoint 
and its contents are upgraded.  Very similar to Live Upgrade on zfs.
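For example, the whole cycle looks roughly like this (a sketch; the BE name
is made up):

# pkg image-update               (snapshots/clones the active BE, upgrades the clone)
# beadm list                     (the new boot environment shows up here)
# beadm activate opensolaris-1   (make it the default for the next boot)
# init 6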


-Sanjay

  Do you mean that you did an initial install
 using snv_94?  If so, did you select a zfs root or a ufs root?
 At what point were you presented with the suggestion
 to upgrade the pool from ver.10 to ver.11? 

 Also, you write that  you are doing a burn of SXCE snv_94,
 but how did you do the upgrade (or whatever) to snv_94
 in the first place without a snv_94 install medium? 

 Lori
   
 any suggestions 

 thanks, 
 sgk
  
  




Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith

Just a thought, before I go and wipe this zpool, is there any way to manually 
recreate the /etc/zfs/zpool.cache file?
 
Ross

  Date: Mon, 4 Aug 2008 10:42:43 -0600
  From: [EMAIL PROTECTED]
  Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
  To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
  CC: zfs-discuss@opensolaris.org

  Richard Elling wrote:
   Ross wrote:
    I'm trying to import a pool I just exported but I can't, even -f
    doesn't help. Every time I try I'm getting an error:
    cannot import 'rc-pool': one or more devices is currently unavailable

    Now I suspect the reason it's not happy is that the pool used to
    have a ZIL :)

   Correct. What you want is CR 6707530, "log device failure needs some work"
   http://bugs.opensolaris.org/view_bug.do?bug_id=6707530
   which Neil has been working on, scheduled for b96.

  Actually no. That CR mentioned the problem and talks about splitting out
  the bug, as it's really a separate problem. I've just done that and here's
  the new CR which probably won't be visible immediately to you:

  6733267 Allow a pool to be imported with a missing slog

  Here's the Description:

  ---
  This CR is being broken out from 6707530 "log device failure needs some work"

  When Separate Intent logs (slogs) were designed they were given equal
  status in the pool device tree. This was because they can contain committed
  changes to the pool. So if one is missing it is assumed to be important to
  the integrity of the application(s) that wanted the data committed
  synchronously, and thus a pool cannot be imported with a missing slog.
  However, we do allow a pool to be missing a slog on boot up if it's in the
  /etc/zfs/zpool.cache file. So this sends a mixed message.

  We should allow a pool to be imported without a slog if -f is used,
  and to not import without -f but perhaps with a better error message.

  It's the guidsum check that actually rejects imports with missing devices.
  We could have a separate guidsum for the main pool devices (non slog/cache).
  ---


[zfs-discuss] ZFS and disk partitioning

2008-08-05 Thread Johan Hartzenberg
I am trying to upgrade my laptop hard drive, and want to use Live-upgrade.

What I have done so far is:
1. Moved the old drive to an external enclosure

2. Made it bootable (at this point I had to overcome the first obstacle -
because ZFS stores the disk device path in the pool structure, it refused to
automatically mount the root file system.  The workaround involved booting
to safe mode and mounting the zfs file systems, then rebooting.  Note that
previously I had to re-do this even when moving the disk from one USB port
to another.  The disk is now portable at least between USB ports, seemingly
after the zpool upgrade to v11.)

3. Installed the new drive into the laptop.

4. Partitioned it using Solaris/fdisk.  Oops.

At this point I had to overcome the second obstacle - the system failed to
find the root pool.  The eventual solution (workaround) was to boot from a
live CD and wipe the partition table from the internal disk.

5. Trying to create a partition table on the disk again resulted in format
telling me the disk type is unknown.  A partial workaround was to
temporarily put the whole disk under zfs control and then destroy the
pool.  This resulted in an EFI label being created on the disk.  From there
it was possible to delete the EFI partition and create new partitions, but
Solaris does not properly recognize the primary partitions created.
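The wipe itself amounts to something like this (a sketch; the device name is
only an example - triple-check the target first, as this is destructive):

# dd if=/dev/zero of=/dev/rdsk/c0d0p0 bs=512 count=1    (zero the fdisk table in sector 0)
# format -e                                             (then relabel the disk)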

The desired outcome of the partitioning is:

fdisk P1 = Solaris2 (0xbf) to be used as ZFS root
fdisk P2 =  (to be used as ZFS data pool)
fdisk P3 = NTFS... Still debating whether I want to have a copy of Windows
consuming disk space... I still have to finish Far Cry some time.
fdisk P4 = Extended partition, will be sub-partitioned for Linux.

The best I've been able to do so far is to use Linux to create P1 and P2
above with neither made active.  If either is made active, I can no longer
boot from the external disk (grub fails to find the root).

But Linux did not properly create the partition table.

AVAILABLE DISK SELECTIONS:
   0. c0d0 WDC WD25-  WD-WXE508NW759-0001-232.89GB
  /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0)
   1. c2t0d0 DEFAULT cyl 2688 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],7/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number)[0]:
selecting c0d0
NO Alt slice
No defect list found
[disk formatted, no defect list found]

Entering the FDISK menu, I see
 Total disk size is 30401 cylinders
 Cylinder size is 16065 (512 byte) blocks

                                  Cylinders
      Partition   Status   Type         Start     End   Length    %
      =========   ======   ==========   =====   =====   ======   ===
          1                Solaris2         0    4256     4257    14
          2                EFI           4256   26140    21885    72




SELECT ONE OF THE FOLLOWING:
   1. Create a partition
   2. Specify the active partition
   3. Delete a partition
   4. Change between Solaris and Solaris2 Partition IDs
   5. Exit (update disk configuration and exit)
   6. Cancel (exit without updating disk configuration)
Enter Selection:


Going to the partition menu, I try to create a Slice 0 of the entire disk:
partition mod
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1

Part         Tag    Flag    First Sector    Size    Last Sector
  0   unassigned    wm                 0       0              0
  1   unassigned    wm                 0       0              0
  2   unassigned    wm                 0       0              0
  3   unassigned    wm                 0       0              0
  4   unassigned    wm                 0       0              0
  5   unassigned    wm                 0       0              0
  6   unassigned    wm                 0       0              0
  8     reserved    wm                 0       0              0

Do you wish to continue creating a new partition
table based on above table[yes]? 0
`0' is not expected.
Do you wish to continue creating a new partition
table based on above table[yes]? yes
Free Hog partition[6]? 0
Enter size of partition 1 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 2 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 3 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 4 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 5 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 6 [0b, 33e, 0mb, 0gb, 0tb]: 0
Enter size of partition 7 [0b, 33e, 0mb, 0gb, 0tb]: 0
Part         Tag    Flag    First Sector        Size    Last Sector
  0          usr    wm                34    232.88GB      488379741
  1   unassigned    wm                 0           0              0
  2   unassigned    wm                 0           0              0
  3   unassigned    wm                 0           0              0
  4   unassigned    wm                 0

Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-08-05 Thread Andre Wenas
You can try to boot from the OpenSolaris CD, import rpool, mount the root 
filesystem and reinstall grub.
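Roughly (a sketch; the BE dataset and disk names are examples - adjust to
your layout):

# zpool import -f rpool
# mount -F zfs rpool/ROOT/opensolaris /mnt   (the BE dataset is usually a legacy mount)
# installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c6d0s0
# installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c7d0s0

Running installgrub on both sides of the mirror avoids the boot-order
problem discussed earlier in this thread.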

Regards,
Andre W.


Seymour Krebs wrote:
 Machine is running x86 snv_94 after a recent upgrade from OpenSolaris 2008.05.  
 ZFS and zpool reported no troubles except suggesting an upgrade from ver.10 
 to ver.11; seemed like a good idea at the time.  The system was up for several 
 days after that point, then was taken down for some unrelated maintenance.

 Now it will not boot OpenSolaris; it drops to a grub prompt, no menus.

 zfs was mirrored on two disks, c6d0s0 and c7d0.  I never noted the GRUB 
 commands for booting and I'm not really familiar with the nomenclature.  At this 
 point I am hoping that a burn of SXCE snv_94 will give me access to the zfs 
 pools so I can try update-grub, but at this point it will be about 9 hours 
 to download the .iso and I kinda need to work on data residing in that system

 any suggestions 

 thanks, 
 sgk
  
  



Re: [zfs-discuss] S10u6, zfs and zones

2008-08-05 Thread Enda O'Connor ( Sun Micro Systems Ireland)
dick hoogendijk wrote:
 My server runs S10u5. All slices are UFS. I run a couple of sparse
 zones on a seperate slice mounted on /zones.
 
 When S10u6 comes out booting of ZFS will become possible. That is great
 news. However, will it be possible to have those zones I run now too?
you can migrate pre u5 ufs to u6 zfs via lucreate, zones included.

There are no support issues for zones on a system with zfs root that I'm aware 
of, and LU (Live Upgrade) in u6 will support upgrading zones on zfs.
 I always understood ZFS and root zones are difficult. I hope to be able
 to change all FS to ZFS, including the space for the sparse zones.
zones can be on zfs or any other supported config in combination with zfs root.

Is there a specific question you had in mind with regard to sparse zones and 
zfs root?  I'm not too clear whether I answered your actual query.
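For the archives, the migration itself boils down to something like this
(a sketch; pool and BE names are made up):

# zpool create rpool c1t0d0s0      (the slice needs an SMI label)
# lucreate -n zfsBE -p rpool       (copies the running UFS BE, zones and all)
# luactivate zfsBE
# init 6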

Enda
 
 Does somebody have more information on this?
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] evaluate ZFS ACL

2008-08-05 Thread Joerg Schilling
Paul B. Henson [EMAIL PROTECTED] wrote:


 I was curious if there was any utility or library function available to
 evaluate a ZFS ACL. The standard POSIX access(2) call is available to
 evaluate access by the current process, but I would like to evaluate an ACL
 in one process that would be able to determine whether or not some other
 user had a particular permission. Obviously, the running process would need
 to have read privileges on the ACL itself, but I'd rather not reimplement
 the complexity of actually interpreting the ACL. Something like:

   access("/path/to/file", R_OK, 400)

 Where 400 is the UID of the user whose access should be tested. Clearly

This is not the POSIX access() call, which has only two parameters.

Depending on the platform where you are, there is either 

access("/path/to/file", R_OK | E_OK)
or 
eaccess("/path/to/file", R_OK)
euidaccess("/path/to/file", R_OK)
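A crude shell-level approximation, for what it's worth (a sketch; it spawns
a process as the user in question, so it needs root):

# su someuser -c 'test -r /path/to/file && echo readable'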

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] implications of using whole disk and other OS

2008-08-05 Thread Johann Gile
I read that for performance reasons (using the disk's write cache) it is advised
to use whole disks rather than slices for zfs pools. In a dual-boot scenario
with FreeBSD, Linux, Windows XP etc. (of course on another disk with
partitions), are there any risks involved in having such a disk without a
partition table? It reminds me of FreeBSD's dangerously dedicated mode. From
http://www.freebsd.org/doc/en/books/faq/disks.html#DANGEROUSLY-DEDICATED 

 So why it is called “dangerous”? A disk in this mode does not contain what
 normal PC utilities would consider a valid fdisk(8) table. Depending on how
 well they have been designed, they might complain at you once they are
 getting in contact with such a disk, or even worse, they might damage the BSD
 bootstrap without even asking or notifying you.

Is there any experience of dedicated zfs disks being harmed by a non-zfs-aware
OS or software?
 
 


Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-05 Thread Andre Wenas
Hi Evert,

Sun positions VirtualBox as desktop virtualization software. It supports 
only 32-bit guests with a single CPU. If that meets your requirements, it 
should run OK.

Regards,
Andre W.


Evert Meulie wrote:
 Hi all,

 I have been looking at various alternatives for a system that runs several 
 Linux & Windows guests. So far my favorite choice would be 
 OpenSolaris+ZFS+RAIDZ+VirtualBox. Is this combo ready to be a host for Linux 
 & Windows guests? Or is it not 100% stable (yet)?

  
 Greetings,
   Evert
  
  



Re: [zfs-discuss] Pool setup suggestions

2008-08-05 Thread Stefan Palm
OK, as nobody seemed to have a better solution I decided to stay with my 
initial idea (two mirror sets) with a slight change: instead of using UFS 
slices for the root & swap filesystems I installed the whole system on a zfs 
pool.
 
 


Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-05 Thread Bob Friesenhahn
On Tue, 5 Aug 2008, Evert Meulie wrote:

 I have been looking at various alternatives for a system that runs 
 several Linux & Windows guests. So far my favorite choice would be 
 OpenSolaris+ZFS+RAIDZ+VirtualBox. Is this combo ready to be a host 
 for Linux & Windows guests? Or is it not 100% stable (yet)?

The future looks quite good, but my impression is that the current 
VirtualBox release is not well ported to Solaris yet.  It is useful 
for mouse and keyboard access, and is able to use the network as a 
client.  Some other features (e.g. USB, local filesystem access) don't 
work right yet.  Time synchronization between host and guest is not as 
tight as it should be, so if you are using VirtualBox for software 
development you may see complaints about 'time skew' and possibly bad 
builds from GNU make.

As someone else mentioned, VirtualBox only runs 32-bit OSs with up to 
2GB of RAM for the guest OS.  My testing here shows that performance 
is pretty good as long as your host has plenty of RAM.

Since this seems to be the ZFS list, it is worth mentioning that since 
the VirtualBox guest extensions are not working so well on Solaris yet, 
a local NFS mount of an exported ZFS filesystem works great for 
accessing local files, with good performance.
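Something like this is all it takes (a sketch; the dataset name is made up):

# zfs create rpool/export/vmfiles
# zfs set sharenfs=on rpool/export/vmfiles

and then mount host:/export/vmfiles from inside the guest as usual.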

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] zfs, raidz, spare and jbod

2008-08-05 Thread Claus Guttesen
 I installed solaris express developer edition (b79) on a supermicro
 quad-core harpertown E5405 with 8 GB ram and two internal sata-drives.
 I installed solaris onto one of the internal drives. I added an areca
 arc-1680 sas-controller and configured it in jbod-mode. I attached an
 external sas-cabinet with 16 sas-drives 1 TB (931 binary GB). I
 created a raidz2-pool with ten disks and one spare. I then copied some
 400 GB of small files each approx. 1 MB. To simulate a disk-crash I
 pulled one disk out of the cabinet and zfs faulted the drive and used
 the spare and started a resilver.

 During the resilver-process one of the remaining disks had a
 checksum-error and was marked as degraded. The zpool is now
 unavailable. I first tried to add another spare but got I/O-error. I
 then tried to replace the degraded disk by adding a new one:

 # zpool add ef1 c3t1d3p0
 cannot open '/dev/dsk/c3t1d3p0': I/O error

 Partial dmesg:

 Jul 25 13:14:00 malene arcmsr: [ID 419778 kern.notice] arcmsr0: scsi
 id=1 lun=3 ccb='0xff02e0ca0800' outstanding command timeout
 Jul 25 13:14:00 malene arcmsr: [ID 610198 kern.notice] arcmsr0: scsi
 id=1 lun=3 fatal error on target, device was gone
 Jul 25 13:14:00 malene arcmsr: [ID 658202 kern.warning] WARNING:
 arcmsr0: tran reset level=1

 Is this a deficiency in the arcmsr-driver?

I believe I have found the problem. I tried to define a raid-5 volume
on the arc-1680 card and still saw errors as mentioned above.
Areca support suggested that I upgrade to the latest Solaris drivers
(located in the beta folder) and upgrade the firmware as well. I did both
and it somewhat solved my problems, but I had very poor
write performance, 2-6 MB/s.

So I deleted my zpool and changed the arc-1680-configuration and put
all disks in passthrough-mode. I created a new zpool and performed
similar tests and have not experienced any abnormal behaviour.

I'm re-installing the server with FreeBSD and will do similar tests
and report back.
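(As an aside, for swapping out a degraded disk the usual incantation is
zpool replace rather than zpool add - a sketch, with made-up device names:

# zpool replace ef1 c3t1d2 c3t1d3

zpool add would instead try to grow the pool with a new top-level vdev.)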

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Richard Elling
Ross Smith wrote:
 Just a thought, before I go and wipe this zpool, is there any way to 
 manually recreate the /etc/zfs/zpool.cache file?

Do you have a copy in a snapshot?  ZFS for root is awesome!
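If so, pulling it back is a one-liner, something like this (a sketch; the
snapshot name and dataset layout are made up):

# cp /.zfs/snapshot/before-export/etc/zfs/zpool.cache /etc/zfs/zpool.cache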
 -- richard

  
 Ross

  Date: Mon, 4 Aug 2008 10:42:43 -0600
  From: [EMAIL PROTECTED]
  Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
  To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
  CC: zfs-discuss@opensolaris.org
 
 
 
  Richard Elling wrote:
   Ross wrote:
   I'm trying to import a pool I just exported but I can't, even -f 
 doesn't help. Every time I try I'm getting an error:
   cannot import 'rc-pool': one or more devices is currently 
 unavailable
  
   Now I suspect the reason it's not happy is that the pool used to 
 have a ZIL :)
  
  
   Correct. What you want is CR 6707530, log device failure needs 
 some work
   http://bugs.opensolaris.org/view_bug.do?bug_id=6707530
   which Neil has been working on, scheduled for b96.
 
  Actually no. That CR mentioned the problem and talks about splitting out
  the bug, as it's really a separate problem. I've just done that and 
 here's
  the new CR which probably won't be visible immediately to you:
 
  6733267 Allow a pool to be imported with a missing slog
 
  Here's the Description:
 
  ---
  This CR is being broken out from 6707530 log device failure needs 
 some work
 
  When Separate Intent logs (slogs) were designed they were given 
 equal status in the pool device tree.
  This was because they can contain committed changes to the pool.
  So if one is missing it is assumed to be important to the integrity 
 of the
  application(s) that wanted the data committed synchronously, and thus
  a pool cannot be imported with a missing slog.
  However, we do allow a pool to be missing a slog on boot up if
  it's in the /etc/zfs/zpool.cache file. So this sends a mixed message.
 
  We should allow a pool to be imported without a slog if -f is used
  and to not import without -f but perhaps with a better error message.
 
  It's the guidsum check that actually rejects imports with missing 
 devices.
  We could have a separate guidsum for the main pool devices (non 
 slog/cache).
  ---
 


 


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith

No, but that's a great idea!  I'm on a UFS root at the moment, will have a look 
at using ZFS next time I re-install.
  Date: Tue, 5 Aug 2008 07:59:35 -0700
  From: [EMAIL PROTECTED]
  Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
  To: [EMAIL PROTECTED]
  CC: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org

  Ross Smith wrote:
   Just a thought, before I go and wipe this zpool, is there any way to
   manually recreate the /etc/zfs/zpool.cache file?

  Do you have a copy in a snapshot?  ZFS for root is awesome!
   -- richard


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-08-05 Thread Miles Nordin
 re == Richard Elling [EMAIL PROTECTED] writes:

re This was fixed some months ago, and it should be hard to find
re the old B2 chips anymore (not many were made or sold).  --

well, they all ended up on newegg. :)




Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-05 Thread Miles Nordin
 em == Evert Meulie [EMAIL PROTECTED] writes:

em OpenSolaris+ZFS+RAIDZ+VirtualBox.

I'm using snv b83 + ZFS-unredundant + 32bit CPU + VirtualBox.

It's stable, but not all the features like USB and RDP are working for
me.  Also it is being actively developed, so that's good.

I'm planning to build a bigger one.

I cannot vouch for its memory- or cpu-efficiency.  It is probably
fine, but mine is not a situation where I swapped it into the place of
another virtualization stack so I could compare the performance to a
system widely known to perform reasonably---you'll have to do that.

Also VirtualBox does not make easy certain things I'd like to be
doing, like bridged networking and importing virtual disks from ZVol's
instead of big files on ZFS filesystems.  I think it's possible to do
these things, though.

Stability is really perfect.  I've had some problems running out of
host memory, and that's it.

While VirtualBox has ``flat'' and ``sparse'' image formats like
VMWare, the VMWare ``flat'' format is a pair of files, a small one
that points to the big one, and the bigger of the two files is a
headerless image you could mount on the host with lofiadm.  The
VirtualBox ``flat'' images are single files and have headers on them.
The headers are a round number of sectors.  It's possible to mount
the images with Mac OS X hdiutil, but AFAIK not with lofiadm.

  http://web.ivy.net/~carton/rant/virtualbox-macos-hdiutil.html

The ZFS snapshots are, for me, a lot faster (to merge/destroy), safer,
and more featureful (can make a tree with branches, not only a
straight line (VirtualBox) or a single snapshot (VMWare)) than the
ones built into VMWare Server, VMWare Fusion, or VirtualBox.  not sure
how they compare to the serious $0 VMWare stuff.




Re: [zfs-discuss] help me....

2008-08-05 Thread Al Hopper
On Sun, Aug 3, 2008 at 10:46 PM, Rahul [EMAIL PROTECTED] wrote:
 hi
 can you give some disadvantages of the ZFS file system??

Stay away from the yellow one.

 plzz its urgent...

I understand.

 help me.

Next!

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread Alan
I took the brute force approach, but it was simple and it passed the 
boot-from-either-disk test: install on both, then mirror s0. And I'm reasonably 
confident identical disks will look the same ;-)
 
 


Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread Sanjay Nadkarni



Richard Elling wrote:
 Malachi de Ælfweald wrote:
   
 I have to say, looking at that confuses me a little. How can the two 
 disks be mirrored when the partition tables don't match?
 

 Welcome to ZFS!  In traditional disk mirrors,
 disk A block 0 == disk B block 0
 disk A block 1 == disk B block 1
 ...
 disk A block N == disk B block N

 In a ZFS world, block 1 might be defective. So ZFS will reallocate
 the block somewhere else.  This is great for reliable storage on
 unreliable devices (disks).  It also relieves you from the expectation
 that the partition tables must match.  And it also means that I can
 grow the pool size by replacing the mirror sides with larger devices.
 Life is good!  Enjoy!
   
This functionality is not specific to ZFS. SVM mirroring also does not 
require partition tables to match and, as in ZFS, one can grow the size 
of a mirror by replacing its disks with larger ones.
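The SVM equivalent, for comparison (a sketch; metadevice and slice names are
made up):

# metadb -a -f c0t0d0s7 c0t1d0s7    (state database replicas)
# metainit d11 1 1 c0t0d0s0
# metainit d12 1 1 c0t1d0s0
# metainit d10 -m d11               (one-way mirror on the first submirror)
# metattach d10 d12                 (attach the second side)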

-Sanjay
  -- richard




[zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread soren
ZFS has detected that my root filesystem has a small number of errors.  Is 
there a way to tell which specific files have been corrupted?


sbox:~$ zpool status -x
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h10m with 2 errors on Sun Aug  3 00:16:33 2008
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 4
  c4t0d0s0  ONLINE   0 0 4
 
 


Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Bob Netherton
soren wrote:
 ZFS has detected that my root filesystem has a small number of errors.  Is 
 there a way to tell which specific files have been corrupted?
   
After a scrub, "zpool status -v" should give you a list of files with 
unrecoverable errors.


Bob


Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread soren
 soren wrote:
  ZFS has detected that my root filesystem has a
 small number of errors.  Is there a way to tell which
 specific files have been corrupted?

 After a scrub a zpool status -v should give you a
 list of files with 
 unrecoverable errors.

Hmm, I just tried that.  Perhaps "No known data errors" means that my files are 
OK.  In that case I wonder what the checksum failure was from.


sbox:~$ zpool status -xv
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h10m with 2 errors on Sun Aug  3 00:16:33 2008
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 4
  c4t0d0s0  ONLINE   0 0 4

errors: No known data errors
 
 


Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Mario Goebbels (iPhone)
Possibly metadata. Since that is redundant thanks to ditto blocks  
(2 or 3 copies depending on importance), it was repaired during the  
scrub.

--
Via iPhone 3G

On 05-août-08, at 21:11, soren [EMAIL PROTECTED] wrote:

 soren wrote:
 ZFS has detected that my root filesystem has a
 small number of errors.  Is there a way to tell which
 specific files have been corrupted?

 After a scrub a zpool status -v should give you a
 list of files with
 unrecoverable errors.

 Hmm, I just tried that.  Perhaps No known data errors means that  
 my files are OK.  In that case I wonder what the checksum failure  
 was from.


 sbox:~$ zpool status -xv
  pool: rpool
 state: ONLINE
 status: One or more devices has experienced an unrecoverable error.   
 An
attempt was made to correct the error.  Applications are  
 unaffected.
 action: Determine if the device needs to be replaced, and clear the  
 errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h10m with 2 errors on Sun Aug  3  
 00:16:33 2008
 config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 4
  c4t0d0s0  ONLINE   0 0 4

 errors: No known data errors




Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Cindy . Swearingen
Soren,

At this point, I'd like to know what fmdump -eV says about your disk so
you can determine whether it should be replaced or not.
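For example:

# fmdump -eV | less

dumps the error reports verbosely, including the device path and details of
each checksum event.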

Cindy

soren wrote:
  soren wrote:
   ZFS has detected that my root filesystem has a
   small number of errors.  Is there a way to tell which
   specific files have been corrupted?

  After a scrub, "zpool status -v" should give you a
  list of files with unrecoverable errors.

 Hmm, I just tried that.  Perhaps "No known data errors" means that my files 
 are OK.  In that case I wonder what the checksum failure was from.
 
 
 sbox:~$ zpool status -xv
   pool: rpool
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
 attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: scrub completed after 0h10m with 2 errors on Sun Aug  3 00:16:33 2008
 config:
 
 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 4
   c4t0d0s0  ONLINE   0 0 4
 
 errors: No known data errors
  
  


[zfs-discuss] Block unification in ZFS

2008-08-05 Thread Nathaniel Filardo
Hello list.

I have a storage server running ZFS which primarily is used for storing
on-site mirrors of source trees and interesting sites (textfiles.com and
bitsavers.org, for example) and for backups of local machines.  There are
several (small) problems with the otherwise ideal picture:
  - Some mirrors include sparse or slightly stale copies of others.
  - Not all of the local machines are always networked (laptops), and
their backups tend to have duplicated data wrt the rest of the system.
  - My pre-ZFS backup tarballs are in a similar state.

Therefore, I wonder if something like block unification (which seems to be
an old idea, though I know of it primarily through Venti[1]) would be useful
to ZFS.  Since ZFS checksums all of the data passing through it, it seems
natural to hook those checksums and keep a hash table from checksum to block
pointer. It would seem that one could write a shim vdev which used the ZAP
and a host vdev to store this hash table, and which could inform the higher
layers, when writing a block, that they should simply alias an earlier
block (and increment its reference count -- already there for snapshots --
appropriately; naturally, if the block's reference count becomes zero, its
checksum should be deleted from the hash).

The only (slight) complications that leap to mind are:
 1. Strictly accounting for used space becomes a little more funny.

 2. ZFS wide block pointers (ditto blocks) would have to somehow bypass
block unification or risk missing the point.  As far as I understand ZFS's
on disk structures[2], though, this is not a problem: one copy of the
wide block could be stored in the unified vdev and the other two could
simply be stored directly in the host vdev.

 3. It's possible such an algorithm would miss identical blocks checksummed
under different schemes.  I think I'm OK with that.

 4. Relatedly, one may want to expose a check before unifying option for
those who are sufficiently paranoid to fear hash collisions deleting data.

Thoughts?  Is something like this already possible and I just don't know
about it? :)
--nwf;

[1] http://plan9.bell-labs.com/sys/doc/venti.html
[2] I'm aware of
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf but if
there's a more recent version available or if I've grossly mistook
something therein, please let me know.

P.S. This message is sent via opensolaris.org; I originally sent a slightly 
earlier version via SMTP and received a notice that a moderator would look at 
it, but that copy seems to have gotten lost.
 
 


Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Bill Sommerfeld
On Tue, 2008-08-05 at 12:11 -0700, soren wrote:
  soren wrote:
   ZFS has detected that my root filesystem has a
  small number of errors.  Is there a way to tell which
  specific files have been corrupted?
 
  After a scrub a zpool status -v should give you a
  list of files with 
  unrecoverable errors.
 
 Hmm, I just tried that.  Perhaps No known data errors means that my files 
 are OK.  In that case I wonder what the checksum failure was from.

If this is build 94 and you have one or more unmounted filesystems, 
(such as alternate boot environments), these errors are false positives.
There is no actual error; the scrubber misinterpreted the end of an
intent log block chain as a checksum error.

the bug id is:

6727872 zpool scrub: reports checksum errors for pool with zfs and
unplayed ZIL

This bug is fixed in build 95.  One workaround is to mount the
filesystems and then unmount them to apply the intent log changes.
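Something like this (a sketch; an alternate BE's root dataset may need to be
mounted explicitly rather than via mount -a):

# zfs mount -a          (mounting replays any unplayed intent log)
# zfs unmount -a
# zpool clear rpool     (reset the spurious error counters)
# zpool scrub rpool     (re-run to confirm the errors are gone)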

- Bill






Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread soren
Aha, that's the problem.  I just upgraded to build 94, and I have alternate 
boot environments.

===

 Hmm, I just tried that.  Perhaps No known data errors means that my files 
 are OK.  In  that case I wonder what the checksum failure was from.

If this is build 94 and you have one or more unmounted filesystems, 
(such as alternate boot environments), these errors are false positives.
There is no actual error; the scrubber misinterpreted the end of an
intent log block chain as a checksum error.

the bug id is:

6727872 zpool scrub: reports checksum errors for pool with zfs and
unplayed ZIL

This bug is fixed in build 95.  One workaround is to mount the
filesystems and then unmount them to apply the intent log changes.

- Bill
 
 


[zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
Hello All!

Is there a command to force a re-inheritance/reset of ACLs? E.g., if I have a 
directory full of folders that have been created with inherited ACLs, and I 
want to change the ACLs on the parent folder, how can I force a reapply of all 
ACLs?
 
 


Re: [zfs-discuss] evaluate ZFS ACL

2008-08-05 Thread Paul B. Henson
On Tue, 5 Aug 2008, Joerg Schilling wrote:

 This is not the POSIX access() call which only has 2 parameters.

Yes, I'm aware of that; it was meant to be an example of something I wished
existed :).


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Mark Shellenbaum
Rob wrote:
 Hello All!
 
 Is there a command to force a re-inheritance/reset of ACLs? e.g., if i have a 
 directory full of folders that have been created with inherited ACLs, and i 
 want to change the ACLs on the parent folder, how can i force a reapply of 
 all ACLs?
  
  


There isn't an easy way to do exactly what you want.

You could use chmod on the directory and reapply the ACL to each child 
of the directory:

# chmod A<whatever> <path> ...

or

# chmod -R A<whatever> <path> ...

If you want to remove all of the ACLs, then:

# chmod -R A- <path>
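For example, to stamp one ACL onto an entire tree in compact form (a sketch;
the ACL spec and path are made up):

# chmod -R A=owner@:rwxpdDaARWcCos:fd:allow,everyone@:raRcs:fd:allow /tank/dir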


P.S.

If you're using a build older than snv_95 then you will get errors if you 
attempt to set inheritance flags on files.  That problem has been fixed 
in snv_95.

   -Mark


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-08-05 Thread Bryan Wagoner
Well, just an update I suppose, since there are people waiting for a review of 
this card.  I bought the card shortly after my last post and it has been 
sitting in its box on my desk for the last 3 weeks or so.  The reason is that 
the LSI IPASS-to-4-SATA cables I ordered along with the card are backordered; 
they are expected to ship to me on the 11th to finish the order.  So basically, 
I haven't bothered doing anything with the card since I don't have any SAS 
drives to test with.  As soon as the cables come in I'll let you guys know if 
anything comes up, good or bad.

Did anyone else give the card a shot yet?  I'm running 2008.05 as my home NAS 
box and have been quite happy. I'm going to use some t5700 thin clients with it 
too, since the processor usage on the storage server is so low.
 
 


Re: [zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Rob
 Rob wrote:
  Hello All!
  
  Is there a command to force a re-inheritance/reset
 of ACLs? e.g., if i have a directory full of folders
 that have been created with inherited ACLs, and i
 want to change the ACLs on the parent folder, how can
 i force a reapply of all ACLs?
   
   
 
 
 There isn't an easy way to do exactly what you want.

That's unfortunate :(
How do I go about requesting a feature like this?
 
 


Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Alan
I was just thinking of a similar feature request: one of the things I'm doing 
is hosting vm's.  I build a base vm with a standard setup in a dedicated 
filesystem; then, when I need a new instance, zfs clone and voila!  Ready to 
start tweaking for the needs of the new instance, using a fraction of the 
space.  Until update time.  It still saves space, but it would be nice if there 
were a way to identify the common blocks.  I realize it's a double whammy 
because vms just look like big monolithic files to the base filesystem, whereas 
normally you might simply look for identical files to map together (though the 
regular clone mechanism seems to be block based), but something to think about 
in the nice-to-haves...
 
 


Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Mattias Pantzare
 Therefore, I wonder if something like block unification (which seems to be
 an old idea, though I know of it primarily through Venti[1]) would be useful
 to ZFS.  Since ZFS checksums all of the data passing through it, it seems
 natural to hook those checksums and have a hash table from checksum to block
 pointer. It would seem that one could write a shim vdev which used the ZAP
 and a host vdev to store this hash table and could inform the higher
 layers that, when writing a block, that they should simply alias an earlier
 block (and increment its reference count -- already there for snapshots --
 appropriately; naturally if the block's reference count becomes zero, its
 checksum should be deleted from the hash).


De-duplication has been discussed many times, but it is not trivial to
implement.

There are no reference counts for blocks.  Blocks have a time stamp that
is compared to the creation time of snapshots to work out whether a block can
be freed when you destroy a snapshot.


Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Bill Sommerfeld
See the long thread titled "ZFS deduplication", last active
approximately 2 weeks ago.




Re: [zfs-discuss] force a reset/reinheit zfs acls?

2008-08-05 Thread Mark Shellenbaum
Rob wrote:
 Rob wrote:
 Hello All!

 Is there a command to force a re-inheritance/reset
 of ACLs? e.g., if i have a directory full of folders
 that have been created with inherited ACLs, and i
 want to change the ACLs on the parent folder, how can
 i force a reapply of all ACLs?
  
  

 There isn't an easy way to do exactly what you want.
 
 That's unfortunate :(
 How do I go about requesting a feature like this?
  

You can open an RFE via:

http://www.opensolaris.org/bug/report.jspa

   -Mark


[zfs-discuss] ZFS cache flushes and 6540 disk array - fixed

2008-08-05 Thread Robert Milkowski
Hello zfs-discuss,

I've just done a quick test of the performance impact of scsi cache flushes
on a 6540 disk array when using ZFS.

The configuration is: v490, S10U5 (137111-03), 2x 6540 disk
arrays with 7.10.25.10 firmware, host is dual ported. ZFS does
mirroring between the 6540s. There is no other load on the 6540s except for
this testing. Each LUN is a RAID-10 made of many disks on each 6540
(it doesn't really matter).


 zpool status dslrp
  pool: dslrp
 state: ONLINE
 scrub: scrub completed with 0 errors on Tue Aug  5 10:49:28 2008
config:

NAME   STATE READ WRITE CKSUM
dslrp  ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c6t600A0B800029B7464245486B68EBd0  ONLINE   0 0 0
c6t600A0B800029AF006CCC486B36ABd0  ONLINE   0 0 0

errors: No known data errors


I wrote a simple C program that, in a loop (N times), creates a file with
the O_DSYNC flag (synchronous writes), writes 255 bytes of data, closes the
file and removes it. Then I measured total execution time while disabling or
enabling scsi cache flushes in zfs. The source code for the program is
attached at the end of this post.




# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush:   1


# ptime ./filesync-1 /dslrp/test/ 10
Time in seconds to create and unlink 10 files with O_DSYNC: 44.947136


real   44.950
user0.349
sys18.188


# dtrace -n 'fbt::*SYNCHRONIZE_CACHE*:entry{@ = count();}' -n 'tick-1s{printa(@);clear(@);}'
[not a single synchronize which is expected]


# iostat -xnzC 1
[...]
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 4297.0    0.0 17236.1  0.0  1.0    0.0    0.2   0  98 c6
    0.0 2149.5    0.0  8622.0  0.0  0.5    0.0    0.2   0  50 c6t600A0B800029AF006CCC486B36ABd0
    0.0 2147.5    0.0  8614.0  0.0  0.5    0.0    0.2   1  49 c6t600A0B800029B7464245486B68EBd0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 3892.9    0.0 15571.7  0.0  1.0    0.0    0.3   0  98 c6
    0.0 1946.0    0.0  7783.9  0.0  0.6    0.0    0.3   0  57 c6t600A0B800029AF006CCC486B36ABd0
    0.0 1947.0    0.0  7787.9  0.0  0.4    0.0    0.2   0  41 c6t600A0B800029B7464245486B68EBd0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 4548.1    0.0 18192.3  0.0  1.0    0.0    0.2   0  97 c6
    0.0 2274.0    0.0  9096.2  0.0  0.5    0.0    0.2   0  51 c6t600A0B800029AF006CCC486B36ABd0
    0.0 2274.0    0.0  9096.2  0.0  0.5    0.0    0.2   1  46 c6t600A0B800029B7464245486B68EBd0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 4632.8    0.0 18620.4  0.0  1.1    0.0    0.2   0  99 c6
    0.0 2316.9    0.0  9310.2  0.0  0.5    0.0    0.2   1  47 c6t600A0B800029AF006CCC486B36ABd0
    0.0 2315.9    0.0  9310.2  0.0  0.6    0.0    0.2   1  52 c6t600A0B800029B7464245486B68EBd0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 4610.2    0.0 18150.6  0.0  1.0    0.0    0.2   0  97 c6
    0.0 2304.1    0.0  9075.3  0.0  0.5    0.0    0.2   0  52 c6t600A0B800029AF006CCC486B36ABd0
    0.0 2306.1    0.0  9075.3  0.0  0.5    0.0    0.2   1  45 c6t600A0B800029B7464245486B68EBd0
[...]



Now let's repeat the same test, but with ZFS sending scsi flushes.


# echo zfs_nocacheflush/W0 | mdb -kw
zfs_nocacheflush:   0x1 =   0x0

# ptime ./filesync-1 /dslrp/test/ 10
Time in seconds to create and unlink 10 files with O_DSYNC: 53.809971


real   53.813
user0.351
sys22.107


# dtrace -n 'fbt::*SYNCHRONIZE_CACHE*:entry{@ = count();}' -n 'tick-1s{printa(@);clear(@);}'
[...]
CPU IDFUNCTION:NAME
  3  93193 :tick-1s
 6000

  3  93193 :tick-1s
 7244

  3  93193 :tick-1s
 6172

  3  93193 :tick-1s
 7172
[...]

So now we are sending thousands of cache flushes per second as
expected.

# iostat -xnzC 1
[...]
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 8386.6    0.0 17050.2  0.0  1.0    0.0    0.1   0  84 c6
    0.0 4191.3    0.0  8525.1  0.0  0.5    0.0    0.1   0  43 c6t600A0B800029AF006CCC486B36ABd0
    0.0 4195.3    0.0  8525.1  0.0  0.5    0.0    0.1   0  41 c6t600A0B800029B7464245486B68EBd0
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 7476.5    0.0

Re: [zfs-discuss] zpool i/o error

2008-08-05 Thread Victor Pajor
I found out what was my problem.
It's hardware related. My two disks where on a SCSI channel that didn't work 
properly.
It wasn't a ZFS problem.
Thank you everybody who replied.

My Bad.
 
 


Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Marc Bevand
Alan alan at peak.org writes:
 
 I was just thinking of a similar feature request: one of the things
 I'm doing is hosting vm's.  I build a base vm with standard setup in a
 dedicated filesystem, then when I need a new instance zfs clone and voila!
 ready to start tweaking for the needs of the new instance, using a fraction
 of the space.

This is OT but FYI some virtualization apps have built-in support for exactly 
what you want, you can create disk images that share identical blocks between 
themselves.

In Qemu/KVM this feature is called copy-on-write disk images:
$ qemu-img create -b base_image -f qcow2 new_image

In Microsoft Virtual Server there is an equivalent feature as well, but I 
can't recall what it is called.

-marc



Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread Malachi de Ælfweald
So I spent some time trying to get the 2nd slice up on the 2nd disk... I did
manage to finally get it on there by saving the partition table to
format.dat and reformatting the 2nd disk using it, but as soon as I did the
zpool attach, it wiped out the slice 2 again.  I also tried the prtvtoc and
fmthard after attaching, but that didn't work either.

Is there some specific steps I can follow to get the 2nd slice to stay on
post-attach?

Thanks,
Malachi

On Sun, Aug 3, 2008 at 6:00 AM, andrew [EMAIL PROTECTED] wrote:

 OK, I've put up some screenshots and a copy of my menu.lst to clarify my
 setup:

 http://sites.google.com/site/solarium/zfs-screenshots

 Cheers

 Andrew.




Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread Malachi de Ælfweald
Ok, here's the end results...

zpool attach rpool c5t0d0s0 c5t1d0s0: removes c5t1d0s2
zpool attach rpool c5t0d0s2 c5t1d0s2: says c5t0d0s2 is not in the pool
zpool attach rpool c5t0d0s0 c5t1d0s2: says I have to force it because s0 and
s2 overlap
zpool attach -f rpool c5t0d0s0 c5t1d0s2: partition table now matches - on to
the next step

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s2
raw device must be a root slice (not s2)

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 260 sectors starting at 50 (abs 16115)

I *think* this means it is good to go? What is the easiest way to test it?
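One way to test it without pulling cables (a sketch):

# zpool offline rpool c5t0d0s0     (pretend the primary disk died)
  ... reboot, telling the BIOS to boot from the second disk ...
# zpool online rpool c5t0d0s0      (bring it back and let it resilver)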

Malachi


On Tue, Aug 5, 2008 at 9:12 PM, Ellis, Mike [EMAIL PROTECTED] wrote:

  yep...

  when your source pool is a slice, you had better add a slice as a target
  pool if you want that slice on the target side to remain.

  should work fine that way,

   -- MikeE

  --
  From: Malachi de Ælfweald [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, August 06, 2008 12:11 AM
  To: Ellis, Mike
  Subject: Re: [zfs-discuss] ZFS boot mirror

  Hmmm. I tried c5t1d0 which gave an error and c5t1d0s0 which is what
 overwrote it. Maybe I should try mirroring c5t0d0s0 to c5t1d0s2?

 On Tue, Aug 5, 2008 at 8:53 PM, Ellis, Mike [EMAIL PROTECTED] wrote:

  did you zpool attach the whole disk or the specific slice you
 prepared?

  -- MikeE

  --
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Malachi de Ælfweald
  Sent: Tuesday, August 05, 2008 11:42 PM
  To: andrew
  Cc: zfs-discuss@opensolaris.org
  Subject: Re: [zfs-discuss] ZFS boot mirror

   So I spent some time trying to get the 2nd slice up on the 2nd disk...
 I did manage to finally get it on there by saving the partition table to
 format.dat and reformatting the 2nd disk using it, but as soon as I did the
 zpool attach, it wiped out the slice 2 again.  I also tried the prtvtoc and
 fmthard after attaching, but that didn't work either.

 Is there some specific steps I can follow to get the 2nd slice to stay on
 post-attach?

 Thanks,
 Malachi

 On Sun, Aug 3, 2008 at 6:00 AM, andrew [EMAIL PROTECTED] wrote:

 OK, I've put up some screenshots and a copy of my menu.lst to clarify my
 setup:

 http://sites.google.com/site/solarium/zfs-screenshots

 Cheers

 Andrew.








Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-08-05 Thread Brandon High
On Mon, Aug 4, 2008 at 6:03 PM, Miles Nordin [EMAIL PROTECTED] wrote:
bh It should support any AM2/AM2+ dual-core Opteron like the
bh 1220, etc.  as well as the quad-core stuff.

 Are you inferring that based on the name/shape of the socket?  I don't
 think that's a fair assumption.

I'm basing it on experience, actually. The Opteron 165 was popular for
Socket 939 systems since it cost less than the Athlon at the same
clock. AMD realized their mistake and now charges more for the Opteron
at equal clocks across the board.

 The boards I looked at, if you go to the taiwanese manufacturer's web
 site, explicitly list the CPU's they support, and for all the boards I
 looked at, it's either phenom or opteron, not both---a strict divide
 between desktop and server.  Also the server boards all need
 registered memory, and the desktops all need unregistered.

That's based more on the target market for the board. It's mainly just
marketing, though some manufacturers may not add the server CPUIDs to
the desktop BIOS.

Remember that the memory controller is in the CPU, so it really
doesn't matter what the board says. (In fact, the very first desktop
Athlon 64 chips were socket 940 and required registered memory.) The
current 1-way Opterons are just binned Athlons.

If you look at the actual CPU specs
(http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/23932.pdf)
for the AM2 Opterons it reads:
- 144-bit DDR2 SDRAM controller operating at up to 333 MHz
- Supports up to four unbuffered DIMMs
- ECC checking with double-bit detect and single-bit correct

The Socket F chips (2xxx and 8xxx series) require registered memory.

 The other is requirement for workaround of the BA/B2 stepping TLB bug.

Any BIOS that can recognize an Phenom / 3rd-gen Opteron will implement
this for the B2 stepping.

 or something different, but many motherboards needed a BIOS update to
 boot with a Barcelona chip.  Customers were told to install an older
 AMD chip, upgrade the BIOS, then install the new chip.  I would not

The BIOS needs to know about the chip. The same thing happened on the
Intel side when the 65nm Core 2 came out (E6xxx and Q6xxx), and again
with the 45nm Core 2 (E8xxx and Q8xxx).

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread andrew
Sounds like you've got an EFI label on the second disk. Can you run format, 
select the second disk, then enter "fdisk" and then "print", and post the 
output here?

Thanks

Andrew.
 
 


Re: [zfs-discuss] ZFS boot mirror

2008-08-05 Thread Malachi de Ælfweald
It looks like we finally got it working. The log of what Mike had me do to
fix it is here:
http://malsserver.blogspot.com/2008/08/mirroring-resolved-correct-way.html
in case anyone else runs into this. Thanks to everyone who helped with
this.

Thanks again!
Mal


On Tue, Aug 5, 2008 at 9:40 PM, andrew [EMAIL PROTECTED] wrote:

 Sounds like you've got an EFI label on the second disk. Can you run
 format, select the second disk, then enter fdisk then print and post
 the output here?

 Thanks

 Andrew.


