[zfs-discuss] Pogo Linux ships NexentaStor pre-installed boxes

2008-08-02 Thread Erast Benson
Hi folks,

wanted to share some exciting news with you. Pogo Linux is shipping
NexentaStor pre-installed boxes, like this 16TB-24TB one:

http://www.pogolinux.com/quotes/editsys?sys_id=3989

And here is the announcement:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=129&Itemid=56

Pogo says: Managed Storage – NetApp features without the price...

Go OpenSolaris, Go!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-02 Thread W. Wayne Liauh
 How about we complain enough to shame somebody into
 adding power
 management to the K8 chips?  We can start by
 reminding SUN on how much
 it was trumpeting the early Opterons as 'green
 computing'.
 
 Cheers,
 florin
 

Casper's frkit power management script works very well with AMD's single-core 
K8's.  Sun did a very admirable job of pioneering green computing at a time 
when no one was paying any attention.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread Malachi de Ælfweald
I just tried that, but the installgrub keeps failing:

[EMAIL PROTECTED]:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h1m with 0 errors on Sat Aug  2 01:44:55
2008
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c5t0d0s0  ONLINE   0 0 0
c5t1d0s0  ONLINE   0 0 0

errors: No known data errors
[EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2
/dev/dsk/c5t1d0s0
cannot open/stat device /dev/dsk/c5t1d0s2


On Wed, May 21, 2008 at 3:19 PM, Lori Alt [EMAIL PROTECTED] wrote:


 It is also necessary to use either installboot (sparc) or installgrub (x86)
 to install the boot loader on the attached disk.  It is a bug that this
 is not done automatically (6668666 - zpool command should put a
 bootblock on a disk added as a mirror of a root pool vdev)

 Lori
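 For reference, the two steps together look roughly like this (a minimal
 sketch using the pool and device names from the zpool status output above;
 installgrub wants the raw /dev/rdsk path):

 # zpool attach rpool c5t0d0s0 c5t1d0s0
 # zpool status rpool        (wait for the resilver to complete)
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

 On sparc the last step would use installboot instead of installgrub.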

 [EMAIL PROTECTED] wrote:
  Hi Tom,
 
  You need to use the zpool attach command, like this:
 
  # zpool attach pool-name disk1 disk2
 
  Cindy
 
  Tom Buskey wrote:
 
  I've always done a disksuite mirror of the boot disk.  It's been easy
 to do after the install in Solaris.  With Linux I had to do it during the
 install.
 
  OpenSolaris 2008.05 didn't give me an option.
 
  How do I add my 2nd drive to the boot zpool to make it a mirror?
 
 
  This message posted from opensolaris.org
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread Enda O'Connor
Malachi de Ælfweald wrote:
 I just tried that, but the installgrub keeps failing:
 
 [EMAIL PROTECTED]:~# zpool status
   pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h1m with 0 errors on Sat Aug  2 
 01:44:55 2008
 config:
 
 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c5t0d0s0  ONLINE   0 0 0
 c5t1d0s0  ONLINE   0 0 0
 
 errors: No known data errors
 [EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2 
 /dev/dsk/c5t1d0s0
 cannot open/stat device /dev/dsk/c5t1d0s2
  
that should be /dev/rdsk/c5t1d0s2

Enda
 
 On Wed, May 21, 2008 at 3:19 PM, Lori Alt [EMAIL PROTECTED] wrote:
 
 
 It is also necessary to use either installboot (sparc) or
 installgrub (x86)
 to install the boot loader on the attached disk.  It is a bug that this
 is not done automatically (6668666 - zpool command should put a
 bootblock on a disk added as a mirror of a root pool vdev)
 
 Lori
 
 [EMAIL PROTECTED] wrote:
   Hi Tom,
  
   You need to use the zpool attach command, like this:
  
   # zpool attach pool-name disk1 disk2
  
   Cindy
  
   Tom Buskey wrote:
  
   I've always done a disksuite mirror of the boot disk.  It's been
 easy to do after the install in Solaris.  With Linux I had to do it
 during the install.
  
   OpenSolaris 2008.05 didn't give me an option.
  
   How do I add my 2nd drive to the boot zpool to make it a mirror?
  
  
   This message posted from opensolaris.org
  
  
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread Malachi de Ælfweald
[EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2
/dev/rdsk/c5t1d0s2
raw device must be a root slice (not s2)

and trying rdsk with s0 gave same error as before

On Sat, Aug 2, 2008 at 2:02 AM, Enda O'Connor [EMAIL PROTECTED] wrote:

 Malachi de Ælfweald wrote:

 I just tried that, but the installgrub keeps failing:

 [EMAIL PROTECTED]:~# zpool status
  pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h1m with 0 errors on Sat Aug  2 01:44:55
 2008
 config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c5t0d0s0  ONLINE   0 0 0
c5t1d0s0  ONLINE   0 0 0

 errors: No known data errors
 [EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2
 /dev/dsk/c5t1d0s0
 cannot open/stat device /dev/dsk/c5t1d0s2


 that should be /dev/rdsk/c5t1d0s2

 Enda


  On Wed, May 21, 2008 at 3:19 PM, Lori Alt [EMAIL PROTECTED] wrote:


It is also necessary to use either installboot (sparc) or
installgrub (x86)
to install the boot loader on the attached disk.  It is a bug that this
is not done automatically (6668666 - zpool command should put a
bootblock on a disk added as a mirror of a root pool vdev)

Lori

[EMAIL PROTECTED] wrote:
  Hi Tom,
 
  You need to use the zpool attach command, like this:
 
  # zpool attach pool-name disk1 disk2
 
  Cindy
 
  Tom Buskey wrote:
 
  I've always done a disksuite mirror of the boot disk.  It's been
easy to do after the install in Solaris.  With Linux I had to do it
during the install.
 
  OpenSolaris 2008.05 didn't give me an option.
 
  How do I add my 2nd drive to the boot zpool to make it a mirror?
 
 
  This message posted from opensolaris.org
 
 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-08-02 Thread Casper . Dik

 How about we complain enough to shame somebody into
 adding power
 management to the K8 chips?  We can start by
 reminding SUN on how much
 it was trumpeting the early Opterons as 'green
 computing'.
 
 Cheers,
 florin
 

Casper's frkit power management script works very well with AMD's single-core 
K8's.  Sun did a very admirable job of pioneering green computing at a time 
when no one was paying any attention.


There's a reason why Intel and AMD changed the TSC so that it no longer
tracks the CPU frequency: you couldn't use the TSC for anything interesting if
you also wanted to change the frequency of the CPU.

Solaris uses the TSC everywhere, so using K8 AMDs with powernow is impossible.

(My powernow driver makes dtrace's timestamps return wrong values)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Hi everyone,

I've been running a zfs fileserver for about a month now (on snv_91) and 
it's all working really well. I'm scrubbing once a week and nothing has 
come up as a problem yet.

I'm a little worried as I've just noticed these messages in 
/var/adm/messages and I don't know if they're bad or just informational:

Aug  2 14:46:06 exodus  Error for Command: read_defect_dataError 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x20 
(invalid command operation code), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: log_sense   Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: mode_sense  Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: mode_sense  Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0

Any insights would be greatly appreciated.

Thanks

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pogo Linux ships NexentaStor pre-installed boxes

2008-08-02 Thread Will Murnane
On Sat, Aug 2, 2008 at 06:02, Erast Benson [EMAIL PROTECTED] wrote:
 wanted to share some exciting news with you. Pogo Linux shipping
 NexentaStor pre-installed boxes
at fairly astounding prices!  The 16-disk one, fully loaded with 1TB
SAS disks, CPUs, memory, and warranty, comes in at $11,620; the 24-disk
one is $15,450.  That's 72 and 64 cents per gigabyte respectively!
It's pretty clear most of this machine is off-the-shelf Supermicro
hardware (the case is some sort of SC846, probably an E1; the board is
probably from the X7DB* series) but off-the-shelf Supermicro hardware
is pretty nice these days ;-)

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Compress a Root Pool?

2008-08-02 Thread W. Wayne Liauh
Is it possible to compress a root pool?  If yes, how?  Thanks.


(I installed os 08.05 into a 4 GB USB stick, and want to know whether I could 
squeeze more stuff in there.)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling disks' write-cache in J4200 with ZFS?

2008-08-02 Thread Carson Gaspar
Todd E. Moore wrote:
 I'm working with a group that wants to commit all the way to disk every
 single write - flushing or bypassing all the caches each time. The
 fsync() call will flush the ZIL. As for the disk's cache, if given the
 entire disk, ZFS enables its cache by default. Rather than ZFS having to
 issue the flush command to the disk, we want to disable this cache and
 avoid the step altogether.

Then tell your (in my opinion insane) group to pass O_SYNC to open(). 
That will guarantee writes go to disk without being cached (except by 
non-volatile cache in a raid controller). If you don't want the ZIL 
involved, don't configure one for your storage.

-- 
Carson
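
If the disks' own volatile write caches really must be switched off as well,
that is normally done per drive from format(1M) in expert mode. A rough
sketch, assuming the drive exposes the cache menu (not every SATA/SAS device
does, and some revert the setting on a power cycle):

# format -e
(select the disk from the list)
format> cache
cache> write_cache
write_cache> disable
write_cache> display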
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] are these errors dangerous

2008-08-02 Thread Ross
What does zpool status say?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting from a USB HD

2008-08-02 Thread W. Wayne Liauh
Looks like my naive conclusion was wrong.  This morning I installed os08.05 
into a 4GB flash stick plugged into an HP 6700 notebook (Intel C2D, bought this 
week).  This machine boots and runs very nicely from this os08.05 flash stick.

However, I was unable to use this flash stick to boot an Athlon X2 machine.  
Its MBR was read--and the GRUB options were shown on the screen.  But when I 
selected an option (e.g., the rc3), the machine would go into restart mode, 
and the GRUB screen would be shown again.  This process can be repeated again 
and again.  It seems that the bootloader was not able to read the kernel from 
the flash stick.

Also I noticed that with the Athlon X2 machine (which is about two years old), 
the os08.05-installed flash stick was NOT treated as a removable disc.  
Instead, I had to move its boot priority up in the __hard drive__ category in 
order for it to be recognized by the POST process.  This contrasts with the C2D 
notebook, which recognizes the flash stick as a removable medium.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Ross wrote:
 What does zpool status say?

zpool status says everything's fine. I've run another scrub and it hasn't 
found any errors, so can I just consider this harmless? It's filling up 
my log quickly, though.

thanks

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Matt Harrison wrote:
 Ross wrote:
 What does zpool status say?
 
  zpool status says everything's fine. I've run another scrub and it hasn't 
  found any errors, so can I just consider this harmless? It's filling up 
  my log quickly, though.
 

I've just checked past logs and I'm getting up to about 250 MB of these 
messages each week. If this is not a harmful error, is there any way to 
mute this particular message? I'd rather not be accumulating such large 
logs without good reason.

thanks

Matt




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting from a USB HD

2008-08-02 Thread Carson Gaspar
W. Wayne Liauh wrote:
...
 However, I was unable to use this flash stick to boot an Athlon X2
 machine.  Its MBR was read--and the GRUB options were shown on the
 screen.  But when I selected an option (e.g., the rc3), the machine
 would go into the restart mode, and the GRUB screen would be shown
 again.  This process can be repeated again and again.  It seems that
 the bootloader was not able to read the kernel from the flash stick.

 Also I noticed that with the Athlon X2 machine (which is about two
 years old), the os08.05-installed flash stick was NOT treated as a
 removable disc.  Instead, I had to move its boot priority up in the
 __hard drive__ category in order for it to be recognized by the POST
 process.  This contrasts with the C2D notebook, which recognizes the
 flash stick as a removable medium.

First off, please get a real mail client that doesn't send huge lines. 
Thankfully Thunderbird has re-wrap that handles quotations.

Your problem is almost certainly that your boot device order differs, 
probably due to the BIOS differences you mention. Go to the grub command 
line, and do a 'find /platform/i86pc/multiboot'. Pay attention to the 
hd(n,m) it prints (I hope!) and edit your boot entry to match. Once 
you're up in multi-user, add a new boot entry to /boot/grub/menu.lst for 
your alternate device numbering.

FYI, this is a generic x86 grub problem - Linux behaves the same way.

-- 
Carson
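
Following Carson's procedure above, a menu.lst entry adjusted for the
alternate numbering might look something like this on OpenSolaris 2008.05
(the hd(1,0,a) value is only a guess; substitute whatever the grub 'find'
command actually reports on the Athlon box):

title OpenSolaris 2008.05 (USB stick as second BIOS disk)
root (hd1,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive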
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
Dear all

As we wanted to patch one of our iSCSI Solaris servers we had to offline 
the ZFS submirrors on the clients connected to that server. The devices 
connected to the second server stayed online so the pools on the clients 
were still available but in degraded mode. When the server came back 
up we onlined the devices on the clients and the resilver completed pretty 
quickly, as the filesystem was read-mostly (ftp, http server).

Nevertheless during the first hour of operation after onlining we 
recognized numerous checksum errors on the formerly offlined device. We 
decided to scrub the pool and after several hours we got about 3500 errors 
in 600GB of data.

I always thought that ZFS would sync the mirror immediately after bringing 
the device online not requiring a scrub. Am I wrong?

Both servers and clients run s10u5 with the latest patches, but we 
saw the same behaviour with OpenSolaris clients.

Any hints?
Thomas

-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Miles Nordin
 tn == Thomas Nau [EMAIL PROTECTED] writes:

tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn several hours we got about 3500 errors in 600GB of data.

Did you use 'zpool offline' when you took them down, or did you
offline them some other way, like by breaking the network connection,
stopping the iSCSI target daemon, or 'iscsiadm remove
discovery-address ..' on the initiator?

This is my experience, too (but with old b71).  I'm also using iSCSI.
It might be a variant of this:

 http://bugs.opensolaris.org/view_bug.do?bug_id=6675685
 checksum errors after 'zfs offline ; reboot'

Aside from the fact that the checksum-errored blocks are silently not
redundant, it's also interesting because I think, in general, there
are a variety of things which can cause checksum errors besides
disk/cable/controller problems.  I wonder if they're useful for
diagnosing disk problems only in very gently-used setups, or not at
all?

Another iSCSI problem: for me, the targets I've 'zpool offline'd will
automatically ONLINE themselves when iSCSI rediscovers them.  But only
sometimes.  I haven't figured out how to predict when they will and
when they won't.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
Miles

On Sat, 2 Aug 2008, Miles Nordin wrote:
 tn == Thomas Nau [EMAIL PROTECTED] writes:

tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn several hours we got about 3500 errors in 600GB of data.

 Did you use 'zpool offline' when you took them down, or did you
 offline them some other way, like by breaking the network connection,
 stopping the iSCSI target daemon, or 'iscsiadm remove
 discovery-address ..' on the initiator?

We did a zpool offline, nothing else, before we took the iSCSI server 
down


 Another iSCSI problem: for me, the targets I've 'zpool offline'd will
 automatically ONLINE themselves when iSCSI rediscovers them.  but only
 sometimes.  I haven't figured out how to predict when they will and
 when they won't.

I never experienced that one, but we usually don't touch any of the iSCSI 
settings as long as a device is offline. At least as long as we don't 
have to for any reason.

Thomas

-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Miles Nordin
 tn == Thomas Nau [EMAIL PROTECTED] writes:

tn I never experienced that one but we usually don't touch any of
tn the iSCSI settings as long as a device is offline. At least
tn as long as we don't have to for any reason

Usually I do 'zpool offline' followed by 'iscsiadm remove
discovery-address ...'

This is for two reasons:

 1. At least with my old crappy Linux IET, it doesn't restore the
sessions unless I remove and add the discovery-address

 2. the auto-ONLINEing-on-discovery problem.  Removing the discovery
address makes absolutely sure ZFS doesn't ONLINE something before
I want it to.

If you have to do this maintenance again, you might want to try
removing the discovery address for reason #2.  Maybe when your iSCSI
target was coming back up, it bounced a bit.  So, when the target was
coming back up, you might have done the equivalent of removing the
target without 'zpool offline'ing first (and then immediately plugging
it back in).

That's the ritual I've been using anyway.  If anything unexpected
happens, I still have to manually scrub the whole pool to seek out all
these hidden ``checksum'' errors.

Hopefully some day you will be able to just look in fmdump and see
``yup, the target bounced once as it was coming back up.''  and
targets will be able to bounce as much as they like with
failmode=wait, or for short reasonable timeouts with other failmodes,
and automatically do fully-adequate but efficient resilvers with
proper dirty-region-logging without causing any latent checksum
errors.  and zpool offline'd devices will stay offline until reboot as
promised, and will never online themselves.  and iSCSI sessions will
always come up on their own without having to kick the initiator.
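
Spelled out as commands, the ritual described above is roughly the following
(pool name, device and portal address are placeholders, assuming a single
discovery address for the target):

# zpool offline tank c2t1d0
# iscsiadm remove discovery-address 10.0.0.5:3260
  (do the maintenance on the target)
# iscsiadm add discovery-address 10.0.0.5:3260
# zpool online tank c2t1d0
# zpool scrub tank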


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread andrew
I ran into this as well. For some reason installgrub needs slice 2 to be the 
special backup slice that covers the whole disk, as in Solaris. You actually 
specify s0 on the command line since this is the location of the ZFS root, but 
installgrub will go away and try to access the whole disk using slice 2 for 
some reason. What I did to solve it was to use format to select the disk, then 
the partition option to create a slice 2 that started on cylinder 0 and ended 
on the final cylinder of the disk. Once I did that installgrub worked OK. You 
might also need to issue the command disks to get Solaris to update the disk 
links under /dev before you use installgrub.

Andrew.
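
In command form, the workaround amounts to roughly this (a sketch only, using
the disk name from earlier in the thread; the interactive prompts for slice
tag, starting cylinder and size are abbreviated here):

# format c5t1d0
format> partition
partition> 2        (tag it backup, start at cylinder 0, size it to cover the disk)
partition> label
partition> quit
format> quit
# devfsadm          (or the older 'disks' command mentioned above)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0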
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread Malachi de Ælfweald
Have you verified that it will auto failover correctly if one is s0 and one
is s2?

On Sat, Aug 2, 2008 at 3:53 PM, andrew [EMAIL PROTECTED] wrote:

 I ran into this as well. For some reason installgrub needs slice 2 to be
 the special backup slice that covers the whole disk, as in Solaris. You
 actually specify s0 on the command line since this is the location of the
 ZFS root, but installgrub will go away and try to access the whole disk
 using slice 2 for some reason. What I did to solve it was to use format to
 select the disk, then the partition option to create a slice 2 that
 started on cylinder 0 and ended on the final cylinder of the disk. Once I
 did that installgrub worked OK. You might also need to issue the command
 disks to get Solaris to update the disk links under /dev before you use
 installgrub.

 Andrew.


 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 'zpool status' intrusiveness

2008-08-02 Thread Miles Nordin
 c == Miles Nordin [EMAIL PROTECTED] writes:
 tn == Thomas Nau [EMAIL PROTECTED] writes:

 c 'zpool status' should not be touching the disk at all.

I found this on some old worklog:

http://web.Ivy.NET/~carton/oneNightOfWork/20061119-carton.html

-8-
Also, zpool status takes forEVer. I found out why:

ezln:~$ sudo tcpdump -n -p -i tlp2 host fishstick
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tlp2, link-type EN10MB (Ethernet), capture size 96 bytes
17:44:43.916373 IP 10.100.100.140.42569 > 10.100.100.135.3260: S 582435419:582435419(0) win 49640
17:44:43.916679 IP 10.100.100.135.3260 > 10.100.100.140.42569: R 0:0(0) ack 582435420 win 0
17:44:52.611549 IP 10.100.100.140.48474 > 10.100.100.135.3260: S 584903933:584903933(0) win 49640
17:44:52.611858 IP 10.100.100.135.3260 > 10.100.100.140.48474: R 0:0(0) ack 584903934 win 0
17:44:58.766525 IP 10.100.100.140.58767 > 10.100.100.135.3260: S 586435093:586435093(0) win 49640
17:44:58.766831 IP 10.100.100.135.3260 > 10.100.100.140.58767: R 0:0(0) ack 586435094 win 0

10.100.100.135 is the iSCSI target. When it's down, connect() from the
Solaris initiator will take a while to time out. I added [the
target's] address as an alias on some other box's interface, so
Solaris would get a TCP reset immediately. Now zpool status is fast
again, and every time I type zpool status, I get one of those SYN, RST
pairs. (one, not three. I typed zpool status three times.) They also
appear on their own over time.
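
For anyone wanting to reproduce the alias trick above: any box that owns the
target's address but has nothing listening on port 3260 will answer the SYN
with an immediate RST. On another Solaris host that is something like the
following (the interface name is of course machine-specific):

# ifconfig bge0 addif 10.100.100.135 netmask 255.255.255.0 up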

How would I fix this? I'd have iSCSI keep track of whether targets are
``up'' or ``down''. If an upper layer tries to access a target that's
``down'', iSCSI will immediately return an error, then try to open the
target in the background. There will be no automatic attempts to open
targets in the background. so, if an iSCSI target goes away, and then
it comes back, your software may need to touch the device inode twice
before you see the target available again.

If targets close their TCP circuits on inactivity or go into
power-save or some such flakey nonsense, we're still ok, because after
that happens iSCSI will still have the target marked ``up.'' It will
thus keep the upper layers waiting for one connection attempt,
returning no error if the first connection attempt succeeds. If it
doesn't, the iSCSI initiator will then mark the target ``down'' and
start returning errors immediately.

As I said before, error handling is the most important part of any
RAID implementation. In this case, among the more obvious and
immediately inconvenient problems we have a fundamentally serious one:
iSCSI's not returning errors fast enough is pushing us up against a
timeout in the svc subsystem, so one broken disk can potentially
cascade into breaking a huge swath of the SVM subsystem.
-8-

I would add, I'd fix 'zpool status' first, and start being careful
throughout ZFS to do certain things in parallel rather than serially.
But the iSCSI initiator could be smarter, too.

tn we usually don't touch any of the iSCSI settings as long as a
tn devices is offline.

so the above is another reason you may want to remove a
discovery-address before taking that IP off the network.  If the
discovery-address returns an immediate TCP RST, then 'zpool status'
will work okay, but if the address is completely gone so connect()
times out, 'zpool status' will make you wait quite a while,
potentially multiplied by the number of devices or pools you have,
which could make it equivalent to broken in a practical
sense---scalability applies to failure scenarios, too, not just to
normal operation.

Don't worry---iSCSI won't move your /dev/dsk/... links around or
forget your CHAP passwords when you remove the discovery-address.
It's super-convenient.  But in fact, even if you WANT the iSCSI
initiator to forget this stuff, it seems there's no documented way to
do it!  It's sort of like the Windows Registry keeping track of your
30-day shareware trial. :(


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Compress a Root Pool?

2008-08-02 Thread Mark Danico
Sure, 
zfs set compression=on rpool

Of course this only compresses data written after compression is turned on.
On my system, when I started to perform the install I opened up a
terminal, and after rpool was created I set compression to on so that
it was in effect before the packages were installed.

If you have already performed the install I'm not sure if there is an
easy way to have the compression affect the current files so that you can
regain some of the currently used space. Maybe someone else has an idea
on that.

-Mark D.
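
A couple of commands that may help gauge the benefit afterwards (dataset
names are the 2008.05 defaults and may differ on your system; this is only
a sketch):

# zfs get compression,compressratio rpool
# zfs get compressratio rpool/ROOT/opensolaris

Existing files only shrink once their blocks are rewritten, e.g. by copying
them, or by zfs send | zfs receive into a new dataset, after compression is
enabled.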


W. Wayne Liauh wrote:
 Is it possible to compress a root pool?  If yes, how?  Thanks.


 (I installed os 08.05 into a 4 GB USB stick, and want to know whether I could 
 squeeze more stuff in there.)
  
  
 This message posted from opensolaris.org
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling disks' write-cache in J4200 with ZFS?

2008-08-02 Thread Neil Perrin


Carson Gaspar wrote:
 Todd E. Moore wrote:
 I'm working with a group that wants to commit all the way to disk every
 single write - flushing or bypassing all the caches each time. The
 fsync() call will flush the ZIL. As for the disk's cache, if given the
 entire disk, ZFS enables its cache by default. Rather than ZFS having to
 issue the flush command to the disk, we want to disable this cache and
 avoid the step altogether.
 
 Then tell your (in my opinion insane) group to pass O_SYNC to open().
 That will guarantee writes go to disk without being cached (except by 
 non-volatile cache in a raid controller). If you don't want the ZIL 
 involved, don't configure one for your storage.

I think there's some confusion here. For a description of the ZIL
and separate log devices see:

http://blogs.sun.com/perrin/entry/the_lumberjack
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

If you don't configure a separate log device (I assume this
is what you meant by ZIL above), then the intent log is
embedded in the main pool. It's not advisable to disable the
ZIL code, as this is what actually handles fsync/O_[D]SYNC.
Without it those synchronous requests are ignored,
which is the opposite of what Todd wants.

Neil.
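
For readers unfamiliar with separate log devices: on builds that support
them, one is attached with the 'log' vdev type, for example (pool and device
names are made up for illustration):

# zpool add tank log c4t0d0

Without such a device the intent log blocks simply come out of the main
pool, which is the situation Neil describes.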


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Compress a Root Pool?

2008-08-02 Thread Ian Collins
Mark Danico wrote:
 Sure, 
 zfs set compression=on rpool

 Of course this only compresses items written after compression is turned on.
 On my system when I started to perform the install I opened up a
 terminal and after rpool was created I set the compression to on so that
 it got set before the packages were installed.

 If you have already performed the install I'm not sure if there is an 
 easy way
 to have the compression affect the current files so that you can regain some
 of the currently used space. Maybe someone else has an idea on that.

   
See the thread "ZFS root compressed?" from June 6th for some suggestions.

Ian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss