Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Chris Ridd

On 11 Mar 2010, at 04:17, Erik Trimble wrote:

> Matt Cowger wrote:
>> On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:
>> 
>>  
>>> Yes, noting the warning.  
>> 
>> Is it safe to execute on a live, active pool?
>> 
>> --m
>>  
> Yes.  No reboot necessary.
> 
> The Warning only applies to this circumstance:  if you've upgraded from an 
> older build, then upgrading the zpool /may/ mean that you will NOT be able to 
> reboot to the OLDER build and still read the now-upgraded zpool.
> 
> 
> So, say you're currently on 111b (fresh 2009.06 build).   It has zpool 
> version X (I'm too lazy to look up the actual version numbers now).  You now 
> decide to live on the bleeding edge, and upgrade to build 133.  That has 
zpool version X+N.   Without doing anything, all pools are still at version X, 
> and everything can be read by either BootEnvironment (BE).  However, you want 
> the neat features in zpool X+N.  You boot to the 133 BE, and run 'zpool 
> upgrade' on all pools.  You now get all those fancy features, instantly.  
> Naturally, these new features don't change any data that is already on the 
> disk (it doesn't somehow magically dedup previously written data).  HOWEVER, 
> you are now in the situation where you CAN'T boot to the 111b BE, as that 
> version doesn't understand the new pool format.
> 
> Basically, it boils down to this:  upgrade your pools ONLY when you are sure 
> the new BE is stable and working for you, and you have no desire to revert to 
> the old pool.   I run a 'zpool upgrade' right after I do a 'beadm destroy 
> '

I'd also add that for disaster recovery purposes you should also have a live CD 
handy which supports your new zpool version.
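
For reference, a quick way to see where a pool stands before upgrading (pool name here is just an example):

  # zpool upgrade -v          # lists the pool versions this build supports
  # zpool get version rpool   # shows the version a given pool is currently at
  # zpool upgrade             # with no arguments, lists pools below the current version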

Cheers,

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Mattias Pantzare
> These days I am a fan of forward-check access lists, because anyone who
> owns a DNS server can say that IPAddressX returns aserver.google.com.
> They cannot set the forward lookup outside of their domain, but they can
> set up a reverse lookup. The other advantage of forward-looking access lists
> is that you can use DNS aliases in access lists as well.

That is not true; you have to have a valid A record in the correct domain.

This is how it works (and how you should check your reverse lookups in
your applications):

1. Do a reverse lookup.
2. Do a lookup with the name from 1.
3. Check that the IP address is one of the addresses you got in 2.

Ignore the reverse lookup if the check in 3 fails.
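
From the command line the same check looks roughly like this (host name and address are made up):

  $ dig +short -x 192.0.2.25          # step 1: reverse lookup -> client1.example.com.
  $ dig +short client1.example.com    # step 2: forward lookup of that name
  192.0.2.25                          # step 3: the original address must appear in this output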
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Erik Trimble

Matt Cowger wrote:

On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:

  
Yes, noting the warning.  



Is it safe to execute on a live, active pool?

--m
  

Yes.  No reboot necessary.

The Warning only applies to this circumstance:  if you've upgraded from 
an older build, then upgrading the zpool /may/ mean that you will NOT be 
able to reboot to the OLDER build and still read the now-upgraded zpool.



So, say you're currently on 111b (fresh 2009.06 build).   It has zpool 
version X (I'm too lazy to look up the actual version numbers now).  You 
now decide to live on the bleeding edge, and upgrade to build 133.  That 
has zpool version X+N.   Without doing anything, all pools are still at 
version X, and everything can be read by either BootEnvironment (BE).  
However, you want the neat features in zpool X+N.  You boot to the 133 
BE, and run 'zpool upgrade' on all pools.  You now get all those fancy 
features, instantly.  Naturally, these new features don't change any 
data that is already on the disk (it doesn't somehow magically dedup 
previously written data).  HOWEVER, you are now in the situation where 
you CAN'T boot to the 111b BE, as that version doesn't understand the 
new pool format.


Basically, it boils down to this:  upgrade your pools ONLY when you are 
sure the new BE is stable and working for you, and you have no desire to 
revert to the old pool.   I run a 'zpool upgrade' right after I do a 
'beadm destroy '




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Matt Cowger

On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:

> Yes, noting the warning.  

Is it safe to execute on a live, active pool?

--m


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Damon Atkins
In /etc/hosts the format is
IP FQDN Alias...
which would mean "1.1.1.1 aserver.google.com aserver aserver-le0"
I have seen a lot of sysadmins do the following:
"1.1.1.1 aserver aserver.google.com"
which means the hosts file (or NIS) does not match DNS.

As the first entry is the FQDN, it is the name returned when an application
looks up an IP address.  In the first example 1.1.1.1 belongs to
aserver.google.com (FQDN) and access lists need to match this
(e.g. .rhosts/NFS shares).

e.g. dig -x 1.1.1.1 | egrep PTR
will return the FQDN, for example aserver.google.com (assuming a standard DNS
setup).

These days I am a fan of forward-check access lists, because anyone who owns
a DNS server can say that IPAddressX returns aserver.google.com. They cannot
set the forward lookup outside of their domain, but they can set up a
reverse lookup. The other advantage of forward-looking access lists is that
you can use DNS aliases in access lists as well.

e.g. an NFS share should do a DNS lookup on aserver.google.com, get one or
more IP addresses, and then check whether the client has the same IP address,
rather than doing a string match.

PS I read in the doco that as of Solaris 10 the hostname should be set to the
FQDN if you wish to use Kerb5.
e.g. the hostname command should return
"aserver.google.com.au" not "aserver" if you wish to use Kerb5 on Sol10.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Ian Collins

On 03/11/10 03:21 PM, Harry Putnam wrote:

Running b133

When you see this line in a `zpool status' report:

   status: The pool is formatted using an older on-disk format.  The
 pool can still be used, but some features are unavailable.

Is it safe and effective to heed the advice given in next line:

   action: Upgrade the pool using 'zpool upgrade'.  Once this is done,
 the pool will no longer be accessible on older software
 versions.

   
Yes, noting the warning.  So if you upgrade your root pool, you might 
not be able to boot a previous BE.



I don't recall now what all I might have done when the disks were
installed. I do remember that they were new WD 750GB SATA drives and
were set up as a mirror, maybe 6 to 9 months ago.

I was then, and still am, bumbling around with only rudimentary
knowledge of what I'm doing or what needs doing.
(There has been some knowledge improvement in those 6-9 months [I hope])

I don't think I really did any formatting at all.

   
It's the pool format that changes, not the disk format.  Another 
overloaded and potentially confusing term!


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Harry Putnam
Running b133

When you see this line in a `zpool status' report:

  status: The pool is formatted using an older on-disk format.  The
pool can still be used, but some features are unavailable.

Is it safe and effective to heed the advice given in next line:

  action: Upgrade the pool using 'zpool upgrade'.  Once this is done,
the pool will no longer be accessible on older software
versions.

I don't recall now what all I might have done when the disks were
installed. I do remember that they were new WD 750GB SATA drives and
were set up as a mirror, maybe 6 to 9 months ago.

I was then, and still am, bumbling around with only rudimentary
knowledge of what I'm doing or what needs doing.
(There has been some knowledge improvement in those 6-9 months [I hope])

I don't think I really did any formatting at all.

This host is a home NAS and general zfs server.  It does not see
industrial strength use.  The hardware is older athlon64 (+3400)
with 3 GB ram.

What I'd like to learn from posting this is whether I should follow the
advice, or whether I'm likely to cause a pile of new problems.  As far as I
know this hasn't caused problems so far, although there was a problem with
data corruption in 2 files on this pool.

In fact I just finished cleaning up two cases of data corruption in those 2
files.  (now deleted, followed by a new scrub)

----   ---=---   -   
Full status output:

  pool: z3
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: scrub completed after 0h49m with 0 errors on Wed Mar 10 12:55:48 2010
config:

NAMESTATE READ WRITE CKSUM
z3  ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c5d0ONLINE   0 0 0  512K repaired
c6d0ONLINE   0 0 0  640K repaired

errors: No known data errors

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Kyle McDonald

On 3/10/2010 3:27 PM, Robert Thurlow wrote:

As said earlier, it's the string returned from the reverse DNS lookup 
that needs to be matched.





So, to make a long story short, if you log into the server
from the client and do "who am i", you will get the host
name you need for the share.
Another test (for a server configured as a DNS client, LDAP would be 
different) is to run 'nslookup ' (or the dig equivalent.) The 
name returned is the one that needs to be in the share config.


  -Kyle





Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool iostat / how to tell if your iop bound

2010-03-10 Thread Chris Banal
What is the best way to tell if you're bound by the number of individual
operations per second / random io? "zpool iostat" has an "operations" column
but this doesn't really tell me if my disks are saturated. Traditional
"iostat" doesn't seem to be the greatest place to look when utilizing zfs.

Thanks,
Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-10 Thread R.G. Keen
I did some reading on DDRn RAM and controller chips and how they do ECC.
Sorry, but I was moderately incorrect. Here's closer to what happens.

DDRn memory has no ECC logic on the DIMMs. What it has is an additional eight 
bits of memory for each 64 bit read/write operation. That is, for ECC DIMMs, 
the reads and writes are 72 bits wide, not 64. The extra 8 bits are 
read/written just like any other bits. 

The actual operation of error checking and correction happens in the memory
controllers (for the ones I looked at, at least). These memory controller
chipsets do the actual interaction with the DIMMs and (a) determine what, if
any, bits get written to all 64 or 72 bits, as well as (b) look at the data
coming back from a read to see if what they get back is acceptable.

- if the memory controller chipset tolerates only 64 bit wide DIMMs but not 72
bit wide ones, it cannot do ECC.
- if the memory controller tolerates both 64 bit and 72 bit wide DIMMs, perhaps
by ignoring the "extra" bits in a 64 wide read/write, then either style of DIMM
can be used, but if the memory controller doesn't compute, write, and then
check the extra eight bits for errors, ECC never happens.
- if the controller computes the extra checking bits and sends them with the
write, and also checks them on a read, it has the potential to do effective ECC
in the controller itself, in hardware.
- for the couple of chipsets I looked at, if I read correctly, the controller
is set up by the BIOS for doing or not doing ECC, and it may signal back to the
software that an ECC event has happened.

I was incorrect - for DDRn, it's not a signalling line that something is wrong. 
Motherboards can force ECC not to happen by either not carrying the extra bits 
to/from the DIMM sockets, in which case even if the memory controller supports 
ECC internally, it will not work. This is one method for tolerating either kind 
of DIMM, I guess. Another is to program the chipset in BIOS to not do ECC. 

What I'm not clear on is what the OS does with this. I'm not competent to delve
through the OS and find where the connection to the memory controller ECC
enable/setup happens and what the ramifications are. And I don't know what the
link is between hardware ECC reads/writes in memory and a software scrub.

Is the nature of the scrub that it walks through memory doing read/write/read
and looking at the ECC reply in hardware? I came up with an all-software
scrubbing technique, doing a software block check much like ZFS, but that
seems very impractical.
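
For what it's worth, the FMA tools look like one place to watch this from on Solaris (this is an assumption on my part, not something I've traced through the code):

  # fmadm config | grep -i mem   # shows memory-related diagnosis/retire modules, if loaded
  # fmstat                       # per-module statistics, including events seen
  # fmdump -e                    # the error-report log, where correctable memory events should land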
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hey list,

Grant says his system is hanging after the zpool replace on a v240, 
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.


No errors from zpool replace so it sounds like the disk was physically
replaced successfully.

If anyone else can comment or help Grant diagnose this issue, please
feel free...

Thanks,

Cindy

On 03/10/10 16:19, Grant Lowe wrote:

Well, this system is Solaris 05/09, with patches from November. No snapshots
running and no internal controllers. It's a file server attached to an HDS
disk array. Please help and respond ASAP as this is production! Even an IM
would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen  wrote:


From: Cindy Swearingen 
Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
To: "Grant Lowe" 
Cc: zfs-discuss@opensolaris.org
Date: Wednesday, March 10, 2010, 1:09 PM
Hi Grant,

I don't have a v240 to test but I think you might need to
unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the
slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering
is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
#cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver is complete
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root

drive, c2t0d0 mirrored to c2t1d0. The mirror is having
problems, and I'm unsure of the exact procedure to pull the
mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpooll c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice

as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Well, this system is Solaris 05/09, with patches from November. No snapshots
running and no internal controllers. It's a file server attached to an HDS
disk array. Please help and respond ASAP as this is production! Even an IM
would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen  wrote:

> From: Cindy Swearingen 
> Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
> To: "Grant Lowe" 
> Cc: zfs-discuss@opensolaris.org
> Date: Wednesday, March 10, 2010, 1:09 PM
> Hi Grant,
> 
> I don't have a v240 to test but I think you might need to
> unconfigure
> the disk first on this system.
> 
> So I would follow the more complex steps.
> 
> If this is a root pool, then yes, you would need to use the
> slice
> identifier, and make sure it has an SMI disk label.
> 
> After the zpool replace operation and the disk resilvering
> is
> complete, apply the boot blocks.
> 
> The steps would look like this:
> 
> # zpool offline rpool c2t1d0
> #cfgadm -c unconfigure c1::dsk/c2t1d0
> (physically replace the drive)
> (confirm an SMI label and a s0 exists)
> # cfgadm -c configure c1::dsk/c2t1d0
> # zpool replace rpool c2t1d0s0
> # zpool online rpool c2t1d0s0
> # zpool status rpool /* to confirm the replacement/resilver is complete
> # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0
> 
> Thanks,
> 
> Cindy
> 
> 
> On 03/10/10 13:28, Grant Lowe wrote:
> > Please help me out here. I've got a V240 with the root
> drive, c2t0d0 mirrored to c2t1d0. The mirror is having
> problems, and I'm unsure of the exact procedure to pull the
> mirrored drive. I see in various googling:
> > 
> > zpool replace rpool c2t1d0 c2t1d0
> > 
> > or I've seen simply:
> > 
> > zpool replace rpool c2t1d0
> > 
> > or I've seen the much more complex:
> > 
> > zpool offline rpooll c2t1d0
> > cfgadm -c unconfigure c1::dsk/c2t1d0
> > (replace the drive)
> > cfgadm -c configure c1::dsk/c2t1d0
> > zpool replace rpool c2t1d0s0
> > zpool online rpool c2t1d0s0
> > 
> > So which is it? Also, do I need to include the slice
> as in the last example?
> > 
> > Thanks.
> > 
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and FM(A)

2010-03-10 Thread Matthew R. Wilson
Not sure about lighting up the drive tray light, but for automated email
notification of faults I use a script that I found here:
http://www.prefetch.net/code/fmadmnotifier
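
The idea is simple enough that a crude cron-able version fits in a few lines of shell; this is just a sketch (not the script linked above) and assumes mailx delivery works on the box:

  #!/bin/sh
  # mail the output of 'fmadm faulty' to an admin whenever it is non-empty
  OUT=`/usr/sbin/fmadm faulty 2>&1`
  if [ -n "$OUT" ]; then
      echo "$OUT" | mailx -s "FMA fault on `hostname`" admin@example.com
  fi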

-Matthew


On Wed, Mar 10, 2010 at 2:04 PM, Matt  wrote:

> Working on my ZFS Build, using a SuperMicro 846E1 chassis and an LSI 1068e
> SAS controller, I'm wondering how well FM works in OpenSolaris 2009.06.
>
> I'm hoping that if ZFS detects an error with a drive, that it'll light up
> the fault light on the corresponding hot-swap drive in my enclosure and any
> attached JBOD enclosures.
>
> How would I go about testing this?  All drives in my array are fine, but
> I'd like to "force" one into a degraded state to test the drive fault
> lights.  Remembering that c8t8d0 is the 2nd from the bottom in the 3rd row
> of drives isn't exactly what I want to be telling someone in a DC that's
> replacing a drive.  Having the fault indicator light up would be much
> better.
>
> Is this something that will work out of the box, or am I going to have to
> dig deep to get this working?
>
> Also, on the subject - auto email notification if a drive goes bad would be
> great.  That way I get paged to let me know a drive is bad, and I can either
> head to the DC and replace it, or have someone on-site do it for me quickly.
>
> Any chance this is easy to set up?  I didn't see any mention of it in the
> ZFS Administration guide.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Matthew R. Wilson
http://www.mattwilson.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-03-10 Thread norm.tallant
So I did manage to get everything to work after switching to the Dev repository 
and doing a pkg image-update, but what happens when 2010.$spring comes out?  
Should I wait a week or so after release and then change my repository back to 
standard and then image-update again?

I'm new to Osol; sorry for the newbie-ish question and the slight derailment.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and FM(A)

2010-03-10 Thread Matt
Working on my ZFS Build, using a SuperMicro 846E1 chassis and an LSI 1068e SAS 
controller, I'm wondering how well FM works in OpenSolaris 2009.06.

I'm hoping that if ZFS detects an error with a drive, that it'll light up the 
fault light on the corresponding hot-swap drive in my enclosure and any 
attached JBOD enclosures.

How would I go about testing this?  All drives in my array are fine, but I'd 
like to "force" one into a degraded state to test the drive fault lights.  
Remembering that c8t8d0 is the 2nd from the bottom in the 3rd row of drives 
isn't exactly what I want to be telling someone in a DC that's replacing a 
drive.  Having the fault indicator light up would be much better.

Is this something that will work out of the box, or am I going to have to dig 
deep to get this working?

Also, on the subject - auto email notification if a drive goes bad would be 
great.  That way I get paged to let me know a drive is bad, and I can either 
head to the DC and replace it, or have someone on-site do it for me quickly.

Any chance this is easy to set up?  I didn't see any mention of it in the ZFS 
Administration guide.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Miles Nordin
> "dc" == Dennis Clarke  writes:

dc> zfs set
dc> sharenfs=nosub\,nosuid\,rw\=hostname1\:hostname2\,root\=hostname2
dc> zpoolname/zfsname/pathname

 >> wth?  Commas and colons are not special characters.  This is
 >> silly.

dc> Works real well.

I said it was silly, not broken.  It's cargo-cult.  Try this:

\z\f\s \s\e\t 
\s\h\a\r\e\n\f\s\=\n\o\s\u\b\,\n\o\s\u\i\d\,\r\w\=\h\o\s\t\n\a\m\e\1\:\h\o\s\t\n\a\m\e\2\,\r\o\o\t\=\h\o\s\t\n\a\m\e\2
 \z\p\o\o\l\n\a\m\e\/\z\f\s\n\a\m\e\/\p\a\t\h\n\a\m\e

works real well, too.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins

On 03/11/10 09:27 AM, Robert Thurlow wrote:

Ian Collins wrote:

On 03/11/10 05:42 AM, Andrew Daugherity wrote:



I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domain and that domain is in the search
path, and nsswitch uses DNS for hosts (read: 'ping client1' works fine,
as does 'mount server:/export/fs /mnt' from client1).


I have found the same, whether sharing to Linux or Solaris hosts, the 
FQDN appears to be required.


It's not quite true that you need the FQDN, as it still
does depend on the name service setup.  However, what is
true is this: to authenticate a client, the server does
a IP-to-hostname mapping and compares the string with the
string on the share entry.  If the strings match (ignoring
case), the client gets access.  If not, the client does not
get access.  This has confused many, and it's not clear
how or where to document this so that it does not cause
more confusion.  RFEs with example language would be
welcome.

So, to make a long story short, if you log into the server
from the client and do "who am i", you will get the host
name you need for the share.


Thanks for the clarification Rob.

Digging a little deeper, this is documented in the share_nfs man page:

   access_list
 The access_list argument is  a  colon-separated  list  whose
 components may be any number of the following:

 hostname

 The name of a host. With a server configured for DNS  or
 LDAP  naming in the nsswitch "hosts" entry, any hostname
 must be represented as a fully  qualified  DNS  or  LDAP
 name.

Maybe your last paragraph could be added to the NOTES section on that page?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hi Grant,

I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
#cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver is complete
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root drive, c2t0d0 mirrored 
to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure 
to pull the mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpooll c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recover rpool

2010-03-10 Thread D. Pinnock
So I was back on it again today and I was following this thread
http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15

and got the following error when I ran this command

zdb -e -bb rpool

Traversing all blocks to verify nothing leaked ...
Assertion failed: c < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT, file 
../../../uts/common/fs/zfs/zio.c, line 203, function zio_buf_alloc
Abort (core dumped)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread David Dyer-Bennet

On Wed, March 10, 2010 13:32, Matt wrote:
> That is exactly what I meant.  Sorry for my newbie terminology.  I'm so
> used to traditional RAID that it's hard to shake.

No apology required; it's natural that your questions will occur in the
terminology you're familiar with.  It's certainly hard to shake.  I was
trying to clean up the terminology (by one of the laws of the Internet, I
presume *I* got something wrong in that post!) to verify I'd understood
correctly, and to offer the right terms back for learning.  Being rather a
pedant at  heart I'm sometimes a bit ham-tongued doing that kind of thing,
too.

> That's great to know.  Time to soldier on with the build!

Sounds like.  I'm very happy with my rather smaller setup.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Dennis Clarke

>> "ea" == erik ableson  writes:
>> "dc" == Dennis Clarke  writes:
>
>  >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client
>  >> can do the write without error.
>
> ea> I' ve found that the NFS host based settings required the
> ea> FQDN, and that the reverse lookup must be available in your
> ea> DNS.
>
> I found, oddly, the @a.b.c.d/y syntax works only if the client's IP
> has reverse lookup.  I had to add bogus hostnames to /etc/hosts for
> the whole /24 because if I didn't, for v3 it would reject mounts
> immediately, and for v4 mountd would core dump (and get restarted)
> which you see from the client as a mount that appears to hang.  This
> is all using the @ip/mask syntax.

I have LDAP and DNS in place for name resolution and NFS v4 works fine
with either format in the sharenfs parameter. Never seen a problem. The
Solaris 8 an 9 NFS clients work fine also.

>
>  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6901832
>
> If you use hostnames instead, it makes sense that you would have to
> use FQDN's.  If you want to rewrite mountd to allow using short
> hostnames, the access checking has to be done like this:
>
>   at export time:
> given hostname-> forward nss lookup -> list of IP's -> remember IP's
>
>   at mount time:
> client IP -> check against list of remembered IP's
>
> but with fqdn's it can be:
>
>   at export time:
> given hostname -> remember it
>
>   at mount time:
>  client IP -> reverse nss lookup -> check against remembered list
>\-->forward lookup->verify client IP among results
>
> The second way, all the lookups happen at mount time rather than
> export time.  This way the data in the nameservice can change without
> forcing you to learn and then invoke some kind of ``rescan the
> exported filesystems'' command or making mountd remember TTL's for its
> cached nss data, or any such complexity.  Keep all the nameservice
> caching inside nscd so there is only one place to flush it!  However
> the forward lookup is mandatory for security, not optional OCDism.
> Without it, anyone from any IP can access your NFS server so long as
> he has control of his reverse lookup, which he probably does.  I hope
> mountd is doing that forward lookup!
>
> dc> Try to use a backslash to escape those special chars like so :
>
> dc> zfs set
> dc> sharenfs=nosub\,nosuid\,rw\=hostname1\:hostname2\,root\=hostname2
> dc> zpoolname/zfsname/pathname
>
> wth?  Commas and colons are not special characters.  This is silly.

Works real well.

-- 
Dennis

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Please help me out here. I've got a V240 with the root drive, c2t0d0 mirrored 
to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure 
to pull the mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpooll c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Robert Thurlow

Ian Collins wrote:

On 03/11/10 05:42 AM, Andrew Daugherity wrote:



I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domain and that domain is in the search
path, and nsswitch uses DNS for hosts (read: 'ping client1' works fine,
as does 'mount server:/export/fs /mnt' from client1).


I have found the same, whether sharing to Linux or Solaris hosts, the 
FQDN appears to be required.


It's not quite true that you need the FQDN, as it still
does depend on the name service setup.  However, what is
true is this: to authenticate a client, the server does
a IP-to-hostname mapping and compares the string with the
string on the share entry.  If the strings match (ignoring
case), the client gets access.  If not, the client does not
get access.  This has confused many, and it's not clear
how or where to document this so that it does not cause
more confusion.  RFEs with example language would be
welcome.

So, to make a long story short, if you log into the server
from the client and do "who am i", you will get the host
name you need for the share.

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Miles Nordin
> "ea" == erik ableson  writes:
> "dc" == Dennis Clarke  writes:

 >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client
 >> can do the write without error.

ea> I' ve found that the NFS host based settings required the
ea> FQDN, and that the reverse lookup must be available in your
ea> DNS.

I found, oddly, the @a.b.c.d/y syntax works only if the client's IP
has reverse lookup.  I had to add bogus hostnames to /etc/hosts for
the whole /24 because if I didn't, for v3 it would reject mounts
immediately, and for v4 mountd would core dump (and get restarted)
which you see from the client as a mount that appears to hang.  This
is all using the @ip/mask syntax.

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6901832

If you use hostnames instead, it makes sense that you would have to
use FQDN's.  If you want to rewrite mountd to allow using short
hostnames, the access checking has to be done like this:

  at export time:
given hostname-> forward nss lookup -> list of IP's -> remember IP's

  at mount time:
client IP -> check against list of remembered IP's

but with fqdn's it can be:

  at export time:
given hostname -> remember it

  at mount time:
 client IP -> reverse nss lookup -> check against remembered list
   \-->forward lookup->verify client IP among results

The second way, all the lookups happen at mount time rather than
export time.  This way the data in the nameservice can change without
forcing you to learn and then invoke some kind of ``rescan the
exported filesystems'' command or making mountd remember TTL's for its
cached nss data, or any such complexity.  Keep all the nameservice
caching inside nscd so there is only one place to flush it!  However
the forward lookup is mandatory for security, not optional OCDism.
Without it, anyone from any IP can access your NFS server so long as
he has control of his reverse lookup, which he probably does.  I hope
mountd is doing that forward lookup!

dc> Try to use a backslash to escape those special chars like so :

dc> zfs set
dc> sharenfs=nosub\,nosuid\,rw\=hostname1\:hostname2\,root\=hostname2
dc> zpoolname/zfsname/pathname

wth?  Commas and colons are not special characters.  This is silly.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread David Magda
On Wed, March 10, 2010 14:47, Svein Skogen wrote:

> On 10.03.2010 18:18, Edward Ned Harvey wrote:
>> The advantage of the tapes is an official support channel, and much
>> greater
>> archive life.  The advantage of the removable disks is that you need no
>> special software to do a restore, and you could just as easily restore a
>> single file or the whole filesystem.
>
> There is another advantage as well, but I'll let you try that one for
> yourself.
>
> - -Make two backups. One to a HDD, one to a modern LTO or similar tape.
> - -Walk up the stairs to the first floor.
> - -Open the window.
> - -Drop both backups onto the ground.
> - -Try to restore both backups...
>
> See any differences in reliability for disasters here?
>
> ;)

Slightly OT, but it should also be noted that you generally need
to put disks in front of most modern tape systems (LTO-3, -4, upcoming
-5). It's a matter of tape being "too fast": LTO-4 = 120 MB/s; LTO-5 = 140
MB/s; LTO-6 = planned 270 MB/s. (Speeds are native, not compressed.)
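
To put a number on "too fast": 120 MB/s native is roughly 960 Mbit/s, so a
single LTO-4 drive can already consume more than a gigabit link delivers in
practice, and compressible data pushes the required feed rate higher still.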

It's very challenging to stream directly from a client to a tape drive
over the network. Most modern backup architectures go to disk first (e.g.
VTL), and then clone to tape for longer term storage (or off site) needs.

The UER are also better for tapes than disks (though this is mitigated
with ZFS' checksums).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Gregory Durham
Hey Ed,
Thanks for the comment. I have been thinking along the same lines; I am
going to continue trying to use Bacula, but we will see. Out of
curiosity, what version of NetBackup are you using? I would love to
feel pretty well covered, haha.

Thanks a lot!
Greg


On Wed, Mar 10, 2010 at 9:18 AM, Edward Ned Harvey
 wrote:
>> In my case where I reboot the server I cannot get the pool to come
>> back up. It shows UNAVAIL, I have tried to export before reboot and
>> reimport it and have not been successful and I dont like this in the
>> case a power issue of some sort happens. My other option was to mount
>> using lofiadm however I cannot get it to mount on boot, so the same
>> thing happens. Does anyone have any experience with backing up zpools
>> to tape? Please any ideas would be greatly beneficial.
>
> I have a similar setup.  "zfs send | ssh somehost 'zfs receive'" works
> perfectly, and the 2nd host is attached to a tape library.  I'm running
> Netbackup on the 2nd host because we could afford it, and then I have an
> honest-to-goodness support channel.
>
> But if you don't want to spend the money for netbackup, I've heard good
> things about using Amanda or Bacula to get this stuff onto tape.
>
> FWIW, since I hate tapes so much, there's one more thing I'm doing.  I use
> external hard drives, attached to the 2nd server, and periodically "zfs send
> | zfs receive" from the 2nd server main disks to the 2nd server removable
> disks.  Then export the removable disks, and take 'em offsite in a backup
> rotation with the tapes.
>
> The advantage of the tapes is an official support channel, and much greater
> archive life.  The advantage of the removable disks is that you need no
> special software to do a restore, and you could just as easily restore a
> single file or the whole filesystem.
>
> So let's see...  I have two different types of offline backup for the backup
> server, which itself is just a backup of the main server.  So I'm feeling
> pretty well covered.  ;-)
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread Wes Felter

Svein Skogen wrote:

Are there any good options for encapsulating/decapsulating a zfs send
stream inside FEC (Forward Error Correction)? This could prove very
useful both for backup purposes, and for long-haul transmissions.


http://www.s.netic.de/gfiala/dvbackup.html
http://planete-bcast.inrialpes.fr/rubrique.php3?id_rubrique=5

I'm skeptical about the benefit, but there you are.

Wes Felter

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins

On 03/11/10 05:42 AM, Andrew Daugherity wrote:

On Tue, 2010-03-09 at 20:47 -0800, mingli wrote:
   

And I update the sharenfs option with "rw,ro...@100.198.100.0/24", it works 
fine, and the NFS client can do the write without error.

Thanks.
 

I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domain and that domain is in the search
path, and nsswitch uses DNS for hosts (read: 'ping client1' works fine,
as does 'mount server:/export/fs /mnt' from client1).

Perhaps it's because I left the NFSv4 domain setting at the default.
(I'm just using NFSv3, but trying to come up with an explanation.  In
any case, using the FQDN works.)

   
I have found the same, whether sharing to Linux or Solaris hosts, the 
FQDN appears to be required.


This doesn't appear to be documented anywhere.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Svein Skogen
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10.03.2010 18:18, Edward Ned Harvey wrote:
> The advantage of the tapes is an official support channel, and much greater
> archive life.  The advantage of the removable disks is that you need no
> special software to do a restore, and you could just as easily restore a
> single file or the whole filesystem.

There is another advantage as well, but I'll let you try that one for
yourself.

- -Make two backups. One to a HDD, one to a modern LTO or similar tape.
- -Walk up the stairs to the first floor.
- -Open the window.
- -Drop both backups onto the ground.
- -Try to restore both backups...

See any differences in reliability for disasters here?

;)

//Svein

- -- 
- +---+---
  /"\   |Svein Skogen   | sv...@d80.iso100.no
  \ /   |Solberg Østli 9| PGP Key:  0xE5E76831
   X|2020 Skedsmokorset | sv...@jernhuset.no
  / \   |Norway | PGP Key:  0xCE96CE13
|   | sv...@stillbilde.net
 ascii  |   | PGP Key:  0x58CD33B6
 ribbon |System Admin   | svein-listm...@stillbilde.net
Campaign|stillbilde.net | PGP Key:  0x22D494A4
+---+---
|msn messenger: | Mobile Phone: +47 907 03 575
|sv...@jernhuset.no | RIPE handle:SS16503-RIPE
- +---+---
 If you really are in a hurry, mail me at
   svein-mob...@stillbilde.net
 This mailbox goes directly to my cellphone and is checked
even when I'm not in front of my computer.
- 
 Picture Gallery:
  https://gallery.stillbilde.net/v/svein/
- 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.12 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkuX92kACgkQSBMQn1jNM7Y+7QCfZx1Nt9qOsnCvOkwnmbXq5Ql5
AS4AoIS+m9F4r9Eowh7tXQK8IYS/N1lr
=/kwT
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread Matt
That is exactly what I meant.  Sorry for my newbie terminology.  I'm so used to 
traditional RAID that it's hard to shake.

That's great to know.  Time to soldier on with the build!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread David Dyer-Bennet

On Wed, March 10, 2010 12:49, Matt wrote:
> So I'm working up my SAN build, and I want to make sure it's going to
> behave the way I expect when I go to expand it.
>
> Currently I'm running 10 - 500GB Seagate Barracuda ES.2 drives as two
> drive mirrors added to my tank pool.
>
> I'm going to be using this for virtual machine storage, and have created
> fixed size disks (around 200GB per file).
>
> If I'm reading the documentation correctly, if I add spindles to my drive
> array (say 10 more drives) even though the data files on the disks won't
> change in size, the data will eventually migrate to all spindles as it is
> changed and written out, thus improving performance.  Is this correct?

I think so, though you're using non-ZFS terminology, which isn't
absolutely precise.

Let me rephrase it:  If you add more mirror vdevs to your pool named
"tank", that space will become instantly available to things that draw on
that pool (filesystems, zvols, and I forget what else).  When new data has
to be written, there will be a preference for it going to less-full vdevs
in the pool; so as data is rewritten, usage will gradually even out across
the vdevs in the pool.  Having the data across more spindles can,
obviously, potentially increase performance, depending on what the
existing limiting factors are.

So, if that says about the same thing you said, then the answer to your
question is "yes".

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread Matt
So I'm working up my SAN build, and I want to make sure it's going to behave 
the way I expect when I go to expand it.

Currently I'm running 10 - 500GB Seagate Barracuda ES.2 drives as two drive 
mirrors added to my tank pool.

I'm going to be using this for virtual machine storage, and have created fixed 
size disks (around 200GB per file).

If I'm reading the documentation correctly, if I add spindles to my drive array 
(say 10 more drives) even though the data files on the disks won't change in 
size, the data will eventually migrate to all spindles as it is changed and 
written out, thus improving performance.  Is this correct?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Harry Putnam
"Andrew Daugherity"  writes:

>> And I update the sharenfs option with "rw,ro...@100.198.100.0/24",
>> it works fine, and the NFS client can do the write without error.
>> 
>> Thanks.
>
> I've found that when using hostnames in the sharenfs line, I had to use
> the FQDN; the short hostname did not work, even though both client and
> server were in the same DNS domain and that domain is in the search
> path, and nsswitch uses DNS for hosts (read: 'ping client1' works fine,
> as does 'mount server:/export/fs /mnt' from client1).  
>
> Perhaps it's because I left the NFSv4 domain setting at the default.
> (I'm just using NFSv3, but trying to come up with an explanation.  In
> any case, using the FQDN works.)

If you wanted to add more, would it just be separated with a comma:
sharenfs=on,rw,ro...@198.xxx.xxx.xxx,userna...@192.xxx.xxx.101

or do you have to add the rw, for each one too:
sharenfs=on,rw,ro...@198.xxx.xxx.xxx,rw,userna...@192.xxx.xxx.102

Is any of that syntax right?
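
For comparison, going by the share_nfs man page excerpt quoted elsewhere in the
thread, I'd have guessed a single rw= list with colon-separated hosts,
something like (host names made up):

  zfs set sharenfs=rw=host1.example.com:host2.example.com,root=host1.example.com tank/fs

but I'd welcome a correction.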

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-10 Thread Harry Putnam
David Dyer-Bennet  writes:

> On 3/9/2010 4:57 PM, Harry Putnam wrote:
>> Also - it appears `zpool scrub -s z3' doesn't really do anything.
>> The status report above is taken immediately after a scrub command.
>>
>> The `scub -s' command just returns the prompt... no output and
>> apparently no scrub either.
>>
>
> The "-s" switch is documented to STOP a scrub, though I've never used it.

egad... and so it is...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Edward Ned Harvey
> In my case where I reboot the server I cannot get the pool to come
> back up. It shows UNAVAIL, I have tried to export before reboot and
> reimport it and have not been successful and I dont like this in the
> case a power issue of some sort happens. My other option was to mount
> using lofiadm however I cannot get it to mount on boot, so the same
> thing happens. Does anyone have any experience with backing up zpools
> to tape? Please any ideas would be greatly beneficial.

I have a similar setup.  "zfs send | ssh somehost 'zfs receive'" works
perfectly, and the 2nd host is attached to a tape library.  I'm running
Netbackup on the 2nd host because we could afford it, and then I have an
honest-to-goodness support channel.
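
The pipeline itself is nothing exotic; roughly (host and dataset names are placeholders):

  # zfs snapshot tank/data@today
  # zfs send -i tank/data@yesterday tank/data@today | ssh backuphost zfs receive -F backup/data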

But if you don't want to spend the money for netbackup, I've heard good
things about using Amanda or Bacula to get this stuff onto tape.

FWIW, since I hate tapes so much, there's one more thing I'm doing.  I use
external hard drives, attached to the 2nd server, and periodically "zfs send
| zfs receive" from the 2nd server main disks to the 2nd server removable
disks.  Then export the removable disks, and take 'em offsite in a backup
rotation with the tapes.

The advantage of the tapes is an official support channel, and much greater
archive life.  The advantage of the removable disks is that you need no
special software to do a restore, and you could just as easily restore a
single file or the whole filesystem.

So let's see...  I have two different types of offline backup for the backup
server, which itself is just a backup of the main server.  So I'm feeling
pretty well covered.  ;-)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Andrew Daugherity
On Tue, 2010-03-09 at 20:47 -0800, mingli wrote:
> And I update the sharenfs option with "rw,ro...@100.198.100.0/24", it works 
> fine, and the NFS client can do the write without error.
> 
> Thanks.

I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domain and that domain is in the search
path, and nsswitch uses DNS for hosts (read: 'ping client1' works fine,
as does 'mount server:/export/fs /mnt' from client1).  

Perhaps it's because I left the NFSv4 domain setting at the default.
(I'm just using NFSv3, but trying to come up with an explanation.  In
any case, using the FQDN works.)


-Andrew

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Should ZFS write data out when disk are idle

2010-03-10 Thread Damon Atkins
> 
> > For a RaidZ, when data is written to a disk, are individual 32k writes
> > joined together for the same disk and written out as a single I/O to
> > the disk?
>
> I/Os can be coalesced, but there is no restriction as to what can be
> coalesced. In other words, subsequent writes can also be coalesced if
> they are contiguous.
>
> > e.g. 128k for file a, 128k for file b, 128k for file c.  When written
> > out does zfs do 32k+32k+32k i/o to each disk, or will it do one 96k
> > i/o if the space is available sequentially?
I should have written this: for a 5-disk RaidZ, is it
5x(32k(a)+32k(b)+32k(c)) i/o to each disk, or will it attempt to do
5x(96k(a+b+c)) combined larger I/O to each disk if all allocated blocks for
a, b and c are sequential on some or every physical disk?
> 
> I'm not sure how one could write one 96KB physical
> I/O to three different disks?
I meant to a single disk: three sequential 32k i/o's targeted to the same disk
become a single 96k i/o (raidz or even if it was mirrored).
>  -- richard
Given you have said ZFS will coalesce contiguous writes together (targeted
to an individual disk?):
What is the largest physical write ZFS will do to an individual disk?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread David Dyer-Bennet

On Wed, March 10, 2010 07:54, Svein Skogen wrote:

> Are there any good options for encapsulating/decapsulating a zfs send
> stream inside FEC (Forward Error Correction)? This could prove very
> useful both for backup purposes, and for long-haul transmissions.

I don't know of anything that would actually function as a filter.

In the absence of something better turning up, I'd write the stream to
disk, and then use PAR2 to create some redundant data for recover from
errors, up to whatever level you choose.  Obviously this approach doesn't
work if you don't have the space; since I think in terms of disks rather
than tapes for backups, that's not my issue.
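
A sketch of what that would look like with par2cmdline (file names and the 10% redundancy figure are just examples):

  # zfs send tank/data@backup > /backup/data.zfs
  # par2 create -r10 /backup/data.zfs.par2 /backup/data.zfs   # create roughly 10% recovery data
  # par2 verify /backup/data.zfs.par2                         # later: check the stored stream
  # par2 repair /backup/data.zfs.par2                         # and repair it if blocks were damaged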

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread Svein Skogen
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Are there any good options for encapsulating/decapsulating a zfs send
stream inside FEC (Forward Error Correction)? This could prove very
useful both for backup purposes, and for long-haul transmissions.

If there are any good options for simply piping the data trough, feel
free to point me in the right direction, because my google-searches
didn't give me any real clues. Maybe my google-fu isn't up to scratch.

//Svein

- -- 
- +---+---
  /"\   |Svein Skogen   | sv...@d80.iso100.no
  \ /   |Solberg Østli 9| PGP Key:  0xE5E76831
   X|2020 Skedsmokorset | sv...@jernhuset.no
  / \   |Norway | PGP Key:  0xCE96CE13
|   | sv...@stillbilde.net
 ascii  |   | PGP Key:  0x58CD33B6
 ribbon |System Admin   | svein-listm...@stillbilde.net
Campaign|stillbilde.net | PGP Key:  0x22D494A4
+---+---
|msn messenger: | Mobile Phone: +47 907 03 575
|sv...@jernhuset.no | RIPE handle:SS16503-RIPE
- +---+---
 If you really are in a hurry, mail me at
   svein-mob...@stillbilde.net
 This mailbox goes directly to my cellphone and is checked
even when I'm not in front of my computer.
- 
 Picture Gallery:
  https://gallery.stillbilde.net/v/svein/
- 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.12 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkuXpIoACgkQSBMQn1jNM7YPPgCgk40w47g/L2djCvMVEYGiU4zt
wOcAoP5CzX/9W7kAJIgLj8H+zZqhSfi+
=SmEf
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-10 Thread Günther
hello

what i'm thinking about is:
keep it simple

1.
i'm really happy to throw away all sorts of tapes.
when you need them, they are not working, are too slow or their
capacity is too small.

use HDDs instead. they are much faster, bigger, cheaper and data is much safer
on them. for example an external 2 TB HDD (USB or, better, e-SATA) is about
100 euro. buy three of them, copy/sync your files to them, export the drive
and keep it at another location. use zfs send, rsync or do it within windows
with robocopy to keep files in sync.

2.
move your ESXi storage from local disk to your ZFS storage to get the zfs
snapshot and dedup features. i would prefer NFS over iSCSI to have parallel
access via CIFS. use a second storage box (or, not recommended / slow, local
ESXi space) for redundancy of your vhd files.


gea

napp-it.org / zfs server
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread erik.ableson
I' ve found that the NFS host based settings required the FQDN, and that the 
reverse lookup must be available in your DNS.

Try "rw,root=host1.mydomain.net"

Cheers,

Erik
On 10 mars 2010, at 05:47, mingli wrote:

> And I update the sharenfs option with "rw,ro...@100.198.100.0/24", it works 
> fine, and the NFS client can do the write without error.
> 
> Thanks.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss