Re: [zfs-discuss] Need 1.5 TB drive size to use for array for testing

2009-08-21 Thread Eric D. Mudama

On Fri, Aug 21 at 12:22, Jason Pfingstmann wrote:

This is an odd question, to be certain, but I need to find out what
size a 1.5 TB drive is to help me create a sparse/fake array.


(Personally, I think you're making your job a lot harder than it
should be.  Just wait until you have the real disks, and do your array
creation then with no gimmicks.)

That being said, while all drives can have slightly different numbers
of LBAs, vendors seem to be standardizing on the IDEMA formula in
their "Document LBA1-02: LBA Count for IDE Hard Disk Drives Standard":

LBA count = (97696368) + (1953504 * (Desired Capacity in Gbytes - 50.0))

Just plug in 1500 for "desired capacity in Gbytes" and it should tell
you what most vendors are configuring their 1.5TB drives to.

I just checked, and the 1.5TB Caviar Green spec sheet on wdc.com
indicates exactly the resulting number of LBAs when you plug in 1500
to the above formula.
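
For reference, plugging 1500 in (my arithmetic, assuming the usual 512-byte
sectors) gives:

  $ echo '97696368 + 1953504 * (1500 - 50)' | bc
  2930277168
  $ echo '2930277168 * 512' | bc
  1500301910016

i.e. about 1.5 TB in decimal units.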

--eric


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Move home filesystem to new pool

2009-08-21 Thread Stathis Kamperis
Greetings to everyone!

I'm trying to move the home filesystem from my root pool to another
pool, and I'm really lost. Specifically, I want to move rpool/export/home
to tank/home. I did the following:

1. Created a snapshot of rpool/export/home (with -r option set)
2. Did a zfs send -R ... | zfs receive -d ...
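
In other words, something along these lines (the snapshot name is just a
placeholder):

  # zfs snapshot -r rpool/export/home@migrate
  # zfs send -R rpool/export/home@migrate | zfs receive -d tank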

The home filesystem is created in the new pool, but when I enter it I
see no files at all. Mind that zfs list shows that the new filesystem
occupies the correct space. I tried many variations (working from
single-user mode, for instance), but either the filesystem is empty or
the mount points don't correspond to real directories that I can cd
into.

Would anyone be so kind as to give me a couple of directions or point
me to a document on how to accomplish my task, please? Following random
Google blog posts didn't pay off.

Thank you for your consideration.

Best regards,
Stathis Kamperis
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Tutorial at LISA09

2009-08-21 Thread Tristan Ball



Marcus wrote:

- how to best handle broken disks/controllers without ZFS hanging or
being unable to replace the disk
  
A definite +1 here. I realise it's something that Sun probably considers 
"fixed by the disk/controller drivers"; however, many of us are using 
OpenSolaris on non-Sun hardware and can't necessarily test in advance 
how the system is going to behave when things fail! So any techniques 
available to mitigate an entire pool hanging because one device 
is doing something dumb would be very useful.


I guess that might move away from "pure ZFS" in terms of tutorial content, 
but as a sysadmin, it's the whole system we care about!

That would be my personal wishlist. I hope the presentation will be
made public after the event. ;-)

  

+1 :-)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Bob Friesenhahn

On Fri, 21 Aug 2009, Richard Elling wrote:

magnitude for HDDs). Depending on the repair policy, the probability 
of losing a SAS controller is expected to be less than the 
probability of losing 3 disks in a raidz2. Since SAS is relatively 
easy to make redundant, a really paranoid person would have two SAS 
controllers and the probability of losing two highly-reliable SAS 
controllers at the same time is way small :-)


This is a reason to prefer mirroring, with devices in the mirror 
carefully split across controllers.  This approach makes failures 
easier to understand and helps avoid propagation of errors.  Complex 
system designs lead to complex problems.  Some of the world's largest 
and most successful 5-9s class systems are built using simple duplex 
redundancy.


It is possible to build raidz and raidz2 systems so that their devices 
are accessed via unique paths, but such systems rapidly become quite 
large and expensive.



As the Kinks sing, "paranoia will destroy ya!" :-)


There's a time device inside of me, I'm a self-destructin disk!

When anything goes wrong in a system, the human factor becomes quite 
large.  It dramatically increases the probability that human error 
(the primary cause of data loss) will occur.  The system should be 
designed to accommodate the attendant humans.


Solaris is still much too complicated for people to understand in 
times of crisis.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Bob Friesenhahn

On Fri, 21 Aug 2009, Ron Mexico wrote:


Since I can't make a mirrored raidz2, I'd like the next best thing. 
If that means doing a zfs send from one raidz2 to the other, that's 
fine.


Without using hierarchical servers (e.g. volumes from a ZFS pool 
exported via iSCSI to be part of another ZFS storage pool) you can't 
do mirrored raidz2, but you can easily do triple mirroring.  If disk 
space is not a concern, then it is difficult to beat the reliability 
of a triple mirror.
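
For example, a pool of three-way mirrors looks roughly like this (device
names made up):

  # zpool create tank mirror c1t0d0 c2t0d0 c3t0d0 mirror c1t1d0 c2t1d0 c3t1d0

Each top-level mirror then survives the loss of any two of its three disks.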


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Bob Friesenhahn

On Fri, 21 Aug 2009, Tim Cook wrote:


Raid10 won't provide as much protection.  Raidz2+1, you can lose any 4
drives, and up to 14 if it's the right 14.  Raid10, if you lose the wrong
two drives, you're done.


On the flip side, the chance of losing a second drive during the 
recovery interval is much less with mirroring, since only one drive 
needs to be read in order to support the resilver and there is far 
less mechanical action and far fewer I/Os involved.


If you make sure that you have a spare drive available to the pool, 
then the spare drive can be resilvered and take over while you sleep, 
minimizing the risk.
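
For example (pool and device names made up):

  # zpool add tank spare c1t8d0

The spare is pulled in automatically when a device in the pool faults.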


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Richard Elling

comment far below...

On Aug 21, 2009, at 6:17 PM, Tim Cook wrote:
On Fri, Aug 21, 2009 at 8:04 PM, Richard Elling wrote:

On Aug 21, 2009, at 5:55 PM, Tim Cook wrote:
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling wrote:


My vote is with Ross. KISS wins :-)
Disclaimer: I'm also a member of BAARF.


My point is, RAIDZx+1 SHOULD be simple.  I don't entirely understand  
why it hasn't been implemented.  I can only imagine like so many  
other things it's because there hasn't been significant customer  
demand.  Unfortunate if it's as simple as I believe it is to  
implement.  (No, don't ask me to do it, I put in my time programming  
in college and have no desire to do it again :))


You can get in the same ballpark with at least two top-level raidz2 vdevs
and copies=2.  If you have three or more top-level raidz2 vdevs, then you
can even do better with copies=3 ;-)

Note that I do not have a model for that because it would require separate
failure rate data for whole disk failures and all other non-whole disk
failures. The latter is not available in data sheets. The closest I can get
with published data is using the MTTDL[2] model which considers the
published unrecoverable read error rate. In other words, the model would be
easy, but data to feed the model is not available :-(  Suffice to say, 2
top-level raidz2 vdevs of similar size with copies=2 should offer very
nearly the same protection as raidz2+1.

 -- richard


You sure about that?  Say I have a SAS controller shit the bed
(pardon the French) and take one of the JBODs out entirely.  Even
with copies=2, isn't the entire pool going tits up and offline when
it loses an entire vdev?


Yes. But you need to understand that the probability of a SAS controller
failing is much, much smaller than that of a disk. So in order to properly
model the system, you can't treat them as having the same failure rate (the
difference is an order of magnitude for HDDs). Depending on the repair
policy, the probability of losing a SAS controller is expected to be less
than the probability of losing 3 disks in a raidz2. Since SAS is relatively
easy to make redundant, a really paranoid person would have two SAS
controllers, and the probability of losing two highly-reliable SAS
controllers at the same time is way small :-)

It would seem to me copies=2 is only applicable when you have both  
an entire disk loss, and corrupt data on the "good disks".  But feel  
free to enlighten :)  That scenario seems far less likely than  
having a controller go bad, but that's with my anecdotal personal  
experiences.


As the Kinks sing, "paranoia will destroy ya!" :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Richard Elling


On Aug 21, 2009, at 6:09 PM, Adam Sherman wrote:


On 21-Aug-09, at 21:04 , Richard Elling wrote:
My point is, RAIDZx+1 SHOULD be simple.  I don't entirely  
understand why it hasn't been implemented.  I can only imagine  
like so many other things it's because there hasn't been  
significant customer demand.  Unfortunate if it's as simple as I  
believe it is to implement.  (No, don't ask me to do it, I put in  
my time programming in college and have no desire to do it again :))


You can get in the same ballpark with at least two top-level raidz2 vdevs
and copies=2.  If you have three or more top-level raidz2 vdevs, then you
can even do better with copies=3 ;-)



Maybe this is noted somewhere, but I did not realize that "copies"  
invoked logic that distributed the copies among vdevs? Can you  
please provide some pointers about this?


It is hard to describe in words, so I made some pictures :-)
http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Tim Cook
On Fri, Aug 21, 2009 at 8:04 PM, Richard Elling wrote:

> On Aug 21, 2009, at 5:55 PM, Tim Cook wrote:
>
>> On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling 
>> wrote:
>>
>> My vote is with Ross. KISS wins :-)
>> Disclaimer: I'm also a member of BAARF.
>>
>>
>> My point is, RAIDZx+1 SHOULD be simple.  I don't entirely understand why
>> it hasn't been implemented.  I can only imagine like so many other things
>> it's because there hasn't been significant customer demand.  Unfortunate if
>> it's as simple as I believe it is to implement.  (No, don't ask me to do it,
>> I put in my time programming in college and have no desire to do it again
>> :))
>>
>
> You can get in the same ballpark with at least two top-level raidz2 vdevs
> and
> copies=2.  If you have three or more top-level raidz2 vdevs, then you can
> even
> do better with copies=3 ;-)
>
> Note that I do not have a model for that because it would require separate
> failure rate data for whole disk failures and all other non-whole disk
> failures.
> The latter is not available in data sheets. The closest I can get with
> published
> data is using the MTTDL[2] model which considers the published
> unrecoverable
> read error rate. In other words, the model would be easy, but data to feed
> the
> model is not available :-(  Suffice to say, 2 top-level raidz2 vdevs of
> similar size
> with copies=2 should offer very nearly the same protection as raidz2+1.
>  -- richard
>


You sure about that?  Say I have a SAS controller shit the bed (pardon the
French) and take one of the JBODs out entirely.  Even with copies=2, isn't
the entire pool going tits up and offline when it loses an entire vdev?

It would seem to me copies=2 is only applicable when you have both an entire
disk loss, and corrupt data on the "good disks".  But feel free to enlighten
:)  That scenario seems far less likely than having a controller go bad, but
that's with my anecdotal personal experiences.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Adam Sherman

On 21-Aug-09, at 21:04 , Richard Elling wrote:
My point is, RAIDZx+1 SHOULD be simple.  I don't entirely  
understand why it hasn't been implemented.  I can only imagine like  
so many other things it's because there hasn't been significant  
customer demand.  Unfortunate if it's as simple as I believe it is  
to implement.  (No, don't ask me to do it, I put in my time  
programming in college and have no desire to do it again :))


You can get in the same ballpark with at least two top-level raidz2 vdevs
and copies=2.  If you have three or more top-level raidz2 vdevs, then you
can even do better with copies=3 ;-)



Maybe this is noted somewhere, but I did not realize that "copies"  
invoked logic that distributed the copies among vdevs? Can you please  
provide some pointers about this?


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Richard Elling

On Aug 21, 2009, at 5:55 PM, Tim Cook wrote:
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling wrote:


My vote is with Ross. KISS wins :-)
Disclaimer: I'm also a member of BAARF.


My point is, RAIDZx+1 SHOULD be simple.  I don't entirely understand  
why it hasn't been implemented.  I can only imagine like so many  
other things it's because there hasn't been significant customer  
demand.  Unfortunate if it's as simple as I believe it is to  
implement.  (No, don't ask me to do it, I put in my time programming  
in college and have no desire to do it again :))


You can get in the same ballpark with at least two top-level raidz2 vdevs
and copies=2.  If you have three or more top-level raidz2 vdevs, then you
can even do better with copies=3 ;-)

Note that I do not have a model for that because it would require separate
failure rate data for whole disk failures and all other non-whole disk
failures. The latter is not available in data sheets. The closest I can get
with published data is using the MTTDL[2] model which considers the
published unrecoverable read error rate. In other words, the model would be
easy, but data to feed the model is not available :-(  Suffice to say, 2
top-level raidz2 vdevs of similar size with copies=2 should offer very
nearly the same protection as raidz2+1.
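
A minimal sketch of that layout, with made-up device names:

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # zfs set copies=2 tank

Keep in mind that copies only applies to data written after it is set, and
descendant filesystems inherit it from tank.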
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Tim Cook
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling wrote:

> On Aug 21, 2009, at 3:34 PM, Tim Cook wrote:
>
>  On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker  wrote:
>> On Aug 21, 2009, at 5:46 PM, Ron Mexico  wrote:
>>
>> I'm in the process of setting up a NAS for my company. It's going to be
>> based on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs.
>> Each HBA will be connected to a 24 bay Supermicro JBOD chassis. Each chassis
>> will have 12 drives to start out with, giving us room for expansion as
>> needed.
>>
>> Ideally, I'd like to have a mirror of a raidz2 setup, but from the
>> documentation I've read, it looks like I can't do that, and that a stripe of
>> mirrors is the only way to accomplish this.
>>
>> Why?
>>
>> Because some people are paranoid.
>>
>
> cue the Kinks Destroyer :-)
>
>  It uses as many drives as a RAID10, but you lose one more drive of usable
>> space than RAID10 and you get less than half the performance.
>>
>> And far more protection.
>>
>
> Yes. With raidz3 even more :-)
> I put together a spreadsheet a while back to help folks make this sort
> of decision.
> http://blogs.sun.com/relling/entry/sample_raidoptimizer_output
>
> I didn't put the outputs for RAID-5+1, but RAIDoptimizer can calculate it.
> It won't calculate raidz+1 because there is no such option.  If there is
> some
> demand, I can put together a normal RAID (LVM or array) output of similar
> construction.


Good point as well.  Completely spaced on the fact raidz3 was added not so
long ago.  I don't think it's made it to any officially supported build yet
though, has it?



>
>
>  You might be thinking of a RAID50 which would be multiple raidz vdevs in a
>> zpool, or striped RAID5s.
>>
>> If not then stick with multiple mirror vdevs in a zpool (RAID10).
>>
>> -Ross
>>
>
> My vote is with Ross. KISS wins :-)
> Disclaimer: I'm also a member of BAARF.



My point is, RAIDZx+1 SHOULD be simple.  I don't entirely understand why it
hasn't been implemented.  I can only imagine like so many other things it's
because there hasn't been significant customer demand.  Unfortunate if it's
as simple as I believe it is to implement.  (No, don't ask me to do it, I
put in my time programming in college and have no desire to do it again :))




>
>
>  Raid10 won't provide as much protection.  Raidz2+1, you can lose any 4
>> drives, and up to 14 if it's the right 14.  Raid10, if you lose the wrong
>> two drives, you're done.
>>
>
> One of the reasons I wrote RAIDoptimizer is to help people get a
> handle on the math behind this.  You can see some of that orientation
> in my other blogs on MTTDL. But at the end of the day, you can get a
> pretty good ballpark by saying every level of parity adds about 3 orders
> of magnitude to the MTTDL. No parity is always a loss.  Single parity
> is better. Double parity even better. Eventually, common-cause problems
> dominate.
>  -- richard
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Richard Elling

On Aug 21, 2009, at 3:34 PM, Tim Cook wrote:

On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker   
wrote:
On Aug 21, 2009, at 5:46 PM, Ron Mexico   
wrote:


I'm in the process of setting up a NAS for my company. It's going to  
be based on Open Solaris and ZFS, running on a Dell R710 with two  
SAS 5/E HBAs. Each HBA will be connected to a 24 bay Supermicro JBOD  
chassis. Each chassis will have 12 drives to start out with, giving  
us room for expansion as needed.


Ideally, I'd like to have a mirror of a raidz2 setup, but from the  
documentation I've read, it looks like I can't do that, and that a  
stripe of mirrors is the only way to accomplish this.


Why?

Because some people are paranoid.


cue the Kinks Destroyer :-)

It uses as many drives as a RAID10, but you lose one more drive of
usable space than RAID10 and you get less than half the performance.


And far more protection.


Yes. With raidz3 even more :-)
I put together a spreadsheet a while back to help folks make this sort
of decision.
http://blogs.sun.com/relling/entry/sample_raidoptimizer_output

I didn't put the outputs for RAID-5+1, but RAIDoptimizer can calculate it.
It won't calculate raidz+1 because there is no such option.  If there is
some demand, I can put together a normal RAID (LVM or array) output of
similar construction.

You might be thinking of a RAID50 which would be multiple raidz  
vdevs in a zpool, or striped RAID5s.


If not then stick with multiple mirror vdevs in a zpool (RAID10).

-Ross


My vote is with Ross. KISS wins :-)
Disclaimer: I'm also a member of BAARF.

Raid10 won't provide as much protection.  Raidz2+1, you can lose any
4 drives, and up to 14 if it's the right 14.  Raid10, if you lose  
the wrong two drives, you're done.


One of the reasons I wrote RAIDoptimizer is to help people get a
handle on the math behind this.  You can see some of that orientation
in my other blogs on MTTDL. But at the end of the day, you can get a
pretty good ballpark by saying every level of parity adds about 3 orders
of magnitude to the MTTDL. No parity is always a loss.  Single parity
is better. Double parity even better. Eventually, common-cause problems
dominate.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ian Collins

Ron Mexico wrote:

You'll have to add a bit of meat to "this"!

What are your resiliency, space, and performance
requirements?



Resiliency is most important, followed by space and then speed. Its primary 
function is to host digital assets for ad agencies and backups of other servers 
and workstations in the office.

Since I can't make a mirrored raidz2, I'd like the next best thing. If that 
means doing a zfs send from one raidz2 to the other, that's fine.
  
I normally use a stripe of mirrors for "live" data and a stripe of raidz2 
(4+2) for "backup" data.  I always assign a couple of hot spares to the 
pools.  I also replicate important data between hosts or pools.


The replication provides resiliency during a resilver.
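
As a rough sketch (controller and disk names invented):

  # zpool create live mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0 spare c1t7d0
  # zpool create backup raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c4t0d0 c4t1d0 spare c4t7d0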

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ron Mexico
> You'll have to add a bit of meat to "this"!
> 
> What are your resiliency, space, and performance
> requirements?

Resiliency is most important, followed by space and then speed. Its primary 
function is to host digital assets for ad agencies and backups of other servers 
and workstations in the office.

Since I can't make a mirrored raidz2, I'd like the next best thing. If that 
means doing a zfs send from one raidz2 to the other, that's fine.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Tim Cook
On Fri, Aug 21, 2009 at 5:52 PM, Ross Walker  wrote:

> On Aug 21, 2009, at 6:34 PM, Tim Cook  wrote:
>
>
>
> On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker < 
> rswwal...@gmail.com> wrote:
>
>> On Aug 21, 2009, at 5:46 PM, Ron Mexico < 
>> no-re...@opensolaris.org> wrote:
>>
>>  I'm in the process of setting up a NAS for my company. It's going to be
>>> based on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs.
>>> Each HBA will be connected to a 24 bay Supermicro JBOD chassis. Each chassis
>>> will have 12 drives to start out with, giving us room for expansion as
>>> needed.
>>>
>>> Ideally, I'd like to have a mirror of a raidz2 setup, but from the
>>> documentation I've read, it looks like I can't do that, and that a stripe of
>>> mirrors is the only way to accomplish this.
>>>
>>
>> Why?
>>
>
> Because some people are paranoid.
>
>
> If that is the case how about a separate zpool of large SATA disks and
> either snapshot and send/recv to it, or use AVT to replicate to it.
>

That adds a window of opportunity for failure.  Potentially quite a large
window.



>
>
>
>
>>
>> It uses as many drives as a RAID10, but you lose one more drive of usable
>> space than RAID10 and you get less than half the performance.
>>
>
> And far more protection.
>
>
>
> It's not worth the cost; the complexity is so high that it itself will be a
> point of failure, and performance is too low for it to be any use.
>
>
The complexity?  There should be no complexity involved in a mirrored
raid-z/z2 pool.




>
>
>
>
>> You might be thinking of a RAID50 which would be multiple raidz vdevs in a
>> zpool, or striped RAID5s.
>>
>> If not then stick with multiple mirror vdevs in a zpool (RAID10).
>>
>> -Ross
>
>
> Raid10 won't provide as much protection.  Raidz2+1, you can lose any 4
> drives, and up to 14 if it's the right 14.  Raid10, if you lose the wrong
> two drives, you're done.
>
>
>
> Set up a side raidz2 zpool of SATA disks, snap the RAID10 and zfs send it to
> the other pool. In the event of catastrophe you can run off the raidz2 pool
> temporarily until the mirror pool is fixed (and it would still perform
> better than the mirrored raidz2 setup!).
>
>
Snapshots are not a substitute for raid. That's a completely different
protection mechanism.  If he wants another copy of the data, I'm sure he'll
setup a second server and do zfs send/receives.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Enrico Maria Crisostomo
Check with Vista whether you have permission to read the file. I
experienced the same problem (that's why I posted another question to
the CIFS mailing list about mapping users with idmap). It always
happens when I copy these files from the iPhone. The files end up
with permissions like these:

$ ls -dV IMG_0004.MOV
-rw-------   1 enrico   staff   12949182 Jul 17 19:39 IMG_0004.MOV
     owner@:--x-----------:------:deny
     owner@:rw-p---A-W-Co-:------:allow
     group@:rwxp----------:------:deny
     group@:--------------:------:allow
  everyone@:rwxp---A-W-Co-:------:deny
  everyone@:------a-R-c--s:------:allow

On the Vista side, having used idmap to map even the staff group to
the corresponding Windows group of the enrico user, the file shows
the following special permissions for the user enrico:
List folder/read data
Create files/write data
Create folders/append data
Write attributes
Write extended attributes

Something's missing here, and the documentation hasn't helped me
(yet)... There's no "read" permission. Just set it (on the Vista side it's
just one click) and QuickTime will work. I've got a script which
resets my files' permissions with something like:

find /yourdir -type f -exec chmod A- "{}" +
find /yourdir -type f -exec chmod 644 "{}" +

Hope this helps,
Enrico


On Fri, Aug 21, 2009 at 11:35 PM, Scott Laird wrote:
> Checksum all of the files using something like md5sum and see if
> they're actually identical.  Then test each step of the copy and see
> which one is corrupting your files.
>
> On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam wrote:
>> During the course of backup I had occasion to copy a number of
>> QuickTime video (*.mov) files to a ZFS server disk.
>>
>> Once there... navigating to them with QuickTime Player and opening them
>> results in a failure that (from a Windows Vista laptop) says:
>>    error -43: A file could not be found (Welcome.mov)
>>
>> I would have attributed it to some problem with scp'ing it to the ZFS
>> server had I not found that if I scp it to a Linux server
>> the problem does not occur.
>>
>> Both the ZFS and Linux (Gentoo) servers are on a home LAN, but using
>> the same router/switch[s] over gigabit network adapters.
>>
>> On both occasions the files were copied using cygwin/ssh on a Vista
>> laptop.
>>
>> Anyone have an idea what might cause this?
>>
>> Any more details I can add that would make diagnostics easier?



-- 
Ελευθερία ή θάνατος
"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning."
GPG key: 1024D/FD2229AF
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ian Collins

Ron Mexico wrote:

I'm in the process of setting up a NAS for my company. It's going to be based 
on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs. Each HBA 
will be connected to a 24 bay Supermicro JBOD chassis. Each chassis will have 
12 drives to start out with, giving us room for expansion as needed.

Ideally, I'd like to have a mirror of a raidz2 setup, but from the 
documentation I've read, it looks like I can't do that, and that a stripe of 
mirrors is the only way to accomplish this.

I'm interested in hearing the opinions of others about the best way to set this 
up.

  

You'll have to add a bit of meat to "this"!

What are your resiliency, space, and performance requirements?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ross Walker

On Aug 21, 2009, at 6:34 PM, Tim Cook  wrote:




On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker   
wrote:
On Aug 21, 2009, at 5:46 PM, Ron Mexico   
wrote:


I'm in the process of setting up a NAS for my company. It's going to  
be based on Open Solaris and ZFS, running on a Dell R710 with two  
SAS 5/E HBAs. Each HBA will be connected to a 24 bay Supermicro JBOD  
chassis. Each chassis will have 12 drives to start out with, giving  
us room for expansion as needed.


Ideally, I'd like to have a mirror of a raidz2 setup, but from the  
documentation I've read, it looks like I can't do that, and that a  
stripe of mirrors is the only way to accomplish this.


Why?

Because some people are paranoid.


If that is the case how about a separate zpool of large SATA disks and  
either snapshot and send/recv to it, or use AVT to replicate to it.






It uses as many drives as a RAID10, but you lose one more drive of
usable space than RAID10 and you get less than half the performance.


And far more protection.



It's not worth the cost; the complexity is so high that it itself will
be a point of failure, and performance is too low for it to be any use.






You might be thinking of a RAID50 which would be multiple raidz  
vdevs in a zpool, or striped RAID5s.


If not then stick with multiple mirror vdevs in a zpool (RAID10).

-Ross

Raid10 won't provide as much protection.  Raidz2+1, you can lose any
4 drives, and up to 14 if it's the right 14.  Raid10, if you lose  
the wrong two drives, you're done.



Set up a side raidz2 zpool of SATA disks, snap the RAID10 and zfs send it
to the other pool. In the event of catastrophe you can run off the
raidz2 pool temporarily until the mirror pool is fixed (and it would
still perform better than the mirrored raidz2 setup!).


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Jason Pfingstmann
As you can add multiple vdevs to a pool, my suggestion would be to do several 
smaller raidz1 or raidz2 vdevs in the pool.

With your setup (assuming 2 HBAs at 24 drives each), you would have ended up 
with about 20 drives of usable storage (assuming raidz2 with 2 spares on each 
HBA, then mirrored).

Maximum number of drives before failure (ideal scenario): 5 (assuming the spare 
hasn't caught up yet), 9 (assuming the spare had caught up and more drives 
failed)  

Suggested setup (at least as far as I'm concerned - and I am kinda new at ZFS, 
but not new to storage systems):
5 x raidz2 w/ 9 disks = 35 drives usable (9 disks ea x 5 raidz2 = 45 total 
drives - (5 raidz2 x 2 parity drives ea))
This leaves you with 3 drives that you can assign as spares (assuming 48 drives 
total)

Maximum number of drives before failure (ideal scenario): 11 (assuming the 
spare hasn't caught up yet), 14 (assuming the spare had caught up and more 
drives failed)

Keep in mind, the parity information will take up additional space as well, but 
it seems you were looking for maximum redundancy (and this setup would give you 
that).

Sorry, I just saw you were talking about 12 drives in each chassis.  A similar 
approach applies: I would do one 9-drive raidz2 in each chassis, add 2 spares 
in total, and then add drives 9 at a time (and 1 more spare at some point).
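
A sketch of that layout (device names invented):

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
      spare c1t9d0 c2t9d0

Later growth would be another nine-disk raidz2 vdev added with zpool add.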

Note: Keep in mind, I'm still kinda new to ZFS, so I may be completely wrong... 
 (if I am, somebody, please correct me)

P-Chan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Tim Cook
On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker  wrote:

> On Aug 21, 2009, at 5:46 PM, Ron Mexico  wrote:
>
>  I'm in the process of setting up a NAS for my company. It's going to be
>> based on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs.
>> Each HBA will be connected to a 24 bay Supermicro JBOD chassis. Each chassis
>> will have 12 drives to start out with, giving us room for expansion as
>> needed.
>>
>> Ideally, I'd like to have a mirror of a raidz2 setup, but from the
>> documentation I've read, it looks like I can't do that, and that a stripe of
>> mirrors is the only way to accomplish this.
>>
>
> Why?
>

Because some people are paranoid.


>
> It uses as many drives as a RAID10, but you lose one more drive of usable
> space than RAID10 and you get less than half the performance.
>

And far more protection.



>
> You might be thinking of a RAID50 which would be multiple raidz vdevs in a
> zpool, or striped RAID5s.
>
> If not then stick with multiple mirror vdevs in a zpool (RAID10).
>
> -Ross


Raid10 won't provide as much protection.  Raidz2+1, you can lose any 4
drives, and up to 14 if it's the right 14.  Raid10, if you lose the wrong
two drives, you're done.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ross Walker
On Aug 21, 2009, at 5:46 PM, Ron Mexico   
wrote:


I'm in the process of setting up a NAS for my company. It's going to  
be based on Open Solaris and ZFS, running on a Dell R710 with two  
SAS 5/E HBAs. Each HBA will be connected to a 24 bay Supermicro JBOD  
chassis. Each chassis will have 12 drives to start out with, giving  
us room for expansion as needed.


Ideally, I'd like to have a mirror of a raidz2 setup, but from the  
documentation I've read, it looks like I can't do that, and that a  
stripe of mirrors is the only way to accomplish this.


Why?

It uses as many drives as a RAID10, but you lose one more drive of
usable space than RAID10 and you get less than half the performance.


You might be thinking of a RAID50 which would be multiple raidz vdevs  
in a zpool, or striped RAID5s.


If not then stick with multiple mirror vdevs in a zpool (RAID10).

-Ross
 
 
___

zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ron Mexico
I'm in the process of setting up a NAS for my company. It's going to be based 
on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs. Each HBA 
will be connected to a 24 bay Supermicro JBOD chassis. Each chassis will have 
12 drives to start out with, giving us room for expansion as needed.

Ideally, I'd like to have a mirror of a raidz2 setup, but from the 
documentation I've read, it looks like I can't do that, and that a stripe of 
mirrors is the only way to accomplish this.

I'm interested in hearing the opinions of others about the best way to set this 
up.

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Scott Laird
Checksum all of the files using something like md5sum and see if
they're actually identical.  Then test each step of the copy and see
which one is corrupting your files.
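
For example (paths are placeholders; Solaris has digest(1) if md5sum isn't
installed):

  $ md5sum /path/on/the/laptop/Welcome.mov
  $ digest -a md5 /tank/share/Welcome.mov

If the two values differ, bisect the copy chain until you find the step
that changes them.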

On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam wrote:
> During the course of backup I had occasion to copy a number of
> QuickTime video (*.mov) files to a ZFS server disk.
>
> Once there... navigating to them with QuickTime Player and opening them
> results in a failure that (from a Windows Vista laptop) says:
>    error -43: A file could not be found (Welcome.mov)
>
> I would have attributed it to some problem with scp'ing it to the ZFS
> server had I not found that if I scp it to a Linux server
> the problem does not occur.
>
> Both the ZFS and Linux (Gentoo) servers are on a home LAN, but using
> the same router/switch[s] over gigabit network adapters.
>
> On both occasions the files were copied using cygwin/ssh on a Vista
> laptop.
>
> Anyone have an idea what might cause this?
>
> Any more details I can add that would make diagnostics easier?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Harry Putnam
During the course of backup I had occasion to copy a number of
QuickTime video (*.mov) files to a ZFS server disk.

Once there... navigating to them with QuickTime Player and opening them
results in a failure that (from a Windows Vista laptop) says:
error -43: A file could not be found (Welcome.mov)

I would have attributed it to some problem with scp'ing it to the ZFS
server had I not found that if I scp it to a Linux server
the problem does not occur.

Both the ZFS and Linux (Gentoo) servers are on a home LAN, but using
the same router/switch[s] over gigabit network adapters.

On both occasions the files were copied using cygwin/ssh on a Vista
laptop.

Anyone have an idea what might cause this?

Any more details I can add that would make diagnostics easier?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-21 Thread Ian Collins

Jorgen Lundman wrote:

Ian Collins wrote:

Jorgen Lundman wrote:


Finally got to the maintenance window to reboot the x4540 and make
it see the newly replaced HDD.


I tried reboot, then a power-cycle, and reboot -- -r,

but I can not make the x4540 accept any HDD in that bay. I'm 
starting to think that perhaps we did not lose the original HDD, but 
rather the slot, and there is a hardware problem.


This is what I see after a reboot, the disk is c1t5d0, sd37, s...@5,0 
or slot 13.


c1::dsk/c1t4d0   disk   connected   configured   unknown
c1::dsk/c1t5d0   disk   connected   configured   unknown
c1::dsk/c1t6d0   disk   connected   configured   unknown



Does format show it?


Nope, that it does not.


Time to call the repair man!

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What's eating my disk space? Missing snapshots?

2009-08-21 Thread Matthew Stevenson
Thank you very much, this explains it perfectly.

I had been coming to the conclusion that the shared data must be what accounts 
for the "missing" space, but previously my thinking/expectation was that it 
would be "charged" against the snapshot in which the shared data first 
appeared. Doing that would probably bring its own complications too though I 
suppose, so I can see why the decision was made to lump it all together in the 
"usedbysnapshots" figure rather than complicate the zfs command output further.

It does lead to another question though: is there a way to see how much data is 
shared between any two given snapshots?
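
(For reference, the per-dataset breakdown I mean is what 'zfs list -o space'
prints; the dataset name below is just an example:

  $ zfs list -o space tank/home

It splits USED into USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD, but it
doesn't break USEDSNAP down per pair of snapshots.)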

Thanks again,
Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nautilus Access List Tab

2009-08-21 Thread Cindy Swearingen

Hi Chris,

You might repost this query on desktop-discuss to find out
the status of the Access List tab.

Last I heard, it was being reworked.

Cindy
On 08/21/09 10:14, Chris wrote:

How do I get this in OpenSolaris 2009.06?

http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg

thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Tutorial at LISA09

2009-08-21 Thread Marcus
I won't be able to attend, but being subscribed to this list and the
storage list for a short while, the following topics would come to
mind:

- ZFS pool layout tips (not possible to grow RaidZ, proper planning,
when to not use RaidZ for high IOPS volumes etc.)
- how to best handle broken disks/controllers without ZFS hanging or
being unable to replace the disk
- how to do proper troubleshooting and performance testing (especially
trying to isolate if the issue is in ZFS, iSCSI, NFS, CIFS, etc).
- how to determine if an SSD for my ZIL or cache is going to improve
my throughput

That would be my personal wishlist. I hope the presentation will be
made public after the event. ;-)

Thanks,

Marcus

On Fri, Aug 21, 2009 at 4:10 PM, Eric Sproul wrote:
> Bob Friesenhahn wrote:
>> An obvious item to cover is the new Flash Archive support as well as the
>> changes to Live Upgrade due to zfs boot.  Also, the benefits and issues
>> associated with adding SSDs to the mix.
>>
>> Based on the last zfs talk I attended at a LISA, system administrators
>> are largely interested in system administration issues.
>
> +1 to those items.  I'd also like to hear about how people are maintaining
> offsite DR copies of critical data with ZFS.  Just send/recv, or something a
> little more "live"?
>
> Eric
>
> --
> Eric Sproul
> Lead Site Reliability Engineer
> OmniTI Computer Consulting, Inc.
> Web Applications & Internet Architectures
> http://omniti.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Not sure how to do this in zfs

2009-08-21 Thread Gregory Skelton

Thanks for your reply Andrey,

Unfortunately, I wasn't able to see the files in that directory in any 
of the scenarios. It seems a ZFS filesystem inside another ZFS filesystem 
is causing the problem. Of course, when I put just a plain directory inside 
the ZFS filesystem, everything can be seen.


Cheers,
Greg


On Fri, 21 Aug 2009, Andrey V. Elsukov wrote:


Gregory Skelton wrote:

 I've tried changing all kinds of attributes for the zfs's, but I can't
 seem to find the right configuration.

 So I'm trying to move some zfs's under another, it looks like this:

 /pool/joe_user move to /pool/homes/joe_user


You can do it in several ways:
1. Create a new FS and copy data from old FS :)
2. Change mountpoint:
# zfs set mountpoint=/pool/homes/joe_user pool/joe_user
3. Use clone and promote:
#  zfs snapshot pool/joe_u...@copy
#  zfs clone pool/joe_u...@copy pool/homes/joe_user
#  zfs promote pool/homes/joe_user
verify that all is ok, then destroy old FS

It's IMHO...
--
WBR, Andrey V. Elsukov



--
Gregory R. Skelton Phone: (414)229-2678 (Office)
System Administrator: (920) 246-4415 (Cell)
1900 E. Kenwood Blvd: gskel...@gravity.phys.uwm.edu
University of Wisconsin : AIM/ICQ gregor159
Milwaukee, WI 53201 http://www.lsc-group.phys.uwm.edu/~gskelton
Emergency Email: grego...@vzw.blackberry.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Need 1.5 TB drive size to use for array for testing

2009-08-21 Thread Jason Pfingstmann
This is an odd question, to be certain, but I need to find out what size a 1.5 
TB drive is to help me create a sparse/fake array.

Basically, if I could have someone do a dd if=<1.5 TB disk> of=  and 
then post the ls -l size of that file, it would greatly assist me.

Here's what I'm doing:

I have a 1 TB drive with my data on it (NTFS) and a second 1 TB drive that I 
want to move my data on to.  However, I eventually want to have 6 x 1.5 TB 
drives for this array (with raidz2 for 6 TB of usable storage - I have a ton of 
additional drives with data).  I can't afford the drives now, but want to get 
it ready for when I can, so here's my plan:

1) Create a ZFS volume with the 1 TB drive that's empty
2) Move data onto it
3) Wipe out the original NTFS drive
4) Create 1.5 TB sparse files (5 of them) on the old NTFS drive
5) Create raidz2 on the 1.5 TB files (mount as loopback if necessary)
Note: I realize this defeats the benefits of the raidz; if the drive dies, I 
lose everything on it.
6) Copy data from the 1 TB ZFS volume to the pseudo/fake raidz array (I'd set 
up an rsync or something)

This way I still have -some- redundancy should 1 of the 2 drives fail.  As I 
get new drives, I'll replace the loopback device with the physical device.  
This way I'll slowly gain the redundancy I desire (raidz2), while still having 
some protection in the meantime, while the amount of data I have is low.
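
For steps 4 and 5, something along these lines should work (paths, pool name,
and the exact byte count are mine; the size is just the IDEMA-formula estimate,
and ZFS can use plain files as vdevs directly, so no loopback device is needed):

  # mkfile -n 1500301910016 /old/vd1 /old/vd2 /old/vd3 /old/vd4 /old/vd5
  # zpool create bigtank raidz2 /old/vd1 /old/vd2 /old/vd3 /old/vd4 /old/vd5

and later, as real 1.5 TB disks arrive, swap them in one at a time:

  # zpool replace bigtank /old/vd1 c2t0d0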

Any thoughts on this?  I don't see why it shouldn't work, but I've only been 
tinkering with ZFS for 2 days now and this is all unexplored territory.

P-Chan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Not sure how to do this in zfs

2009-08-21 Thread Andrey V. Elsukov

Gregory Skelton wrote:
I've tried changing all kinds of attributes for the zfs's, but I can't 
seem to find the right configuration.


So I'm trying to move some zfs's under another, it looks like this:

/pool/joe_user move to /pool/homes/joe_user


You can do it in several ways:
1. Create a new FS and copy data from old FS :)
2. Change mountpoint:
# zfs set mountpoint=/pool/homes/joe_user pool/joe_user
3. Use clone and promote:
# zfs snapshot pool/joe_u...@copy
# zfs clone pool/joe_u...@copy pool/homes/joe_user
# zfs promote pool/homes/joe_user
verify that all is ok, then destroy old FS

It's IMHO...
--
WBR, Andrey V. Elsukov
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Chris Murray
That looks like it indeed. Output of zdb -

Object  lvl   iblk   dblk  lsize  asize  type
     9    5    16K     8K   150G  14.0G  ZFS plain file
                              264  bonus  ZFS znode
        path    ???

Thanks for the help in clearing this up - satisfies my curiosity.  Nico, I'll 
add those commands to the little list in my mind and they'll push the Windows 
ones out in no time  :)

I'll go to b120 when it is available.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Not sure how to do this in zfs

2009-08-21 Thread Gregory Skelton

Hello all,

I've tried changing all kinds of attributes for the zfs's, but I can't 
seem to find the right configuration.


So I'm trying to move some zfs's under another, it looks like this:

/pool/joe_user move to /pool/homes/joe_user

I know I can do this with zfs rename, and everything is fine. The problem 
I'm having is that when I mount /pool/homes I see the joe_user directory, but 
it's empty. I know for a fact there are files in the directory. Am I 
on the right path in thinking it's a permissions issue?
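
The rename itself was just along these lines, with pool/homes already existing
as a filesystem in the same pool:

  # zfs rename pool/joe_user pool/homes/joe_user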


If it matters, I'm using ZFS version 4, but I have been trying to twist 
other knobs in version 14.


Any help would be appreciated.

Thanks,
Gregory
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Nicolas Williams
On Fri, Aug 21, 2009 at 06:46:32AM -0700, Chris Murray wrote:
> Nico, what is a zero-link file, and how would I go about finding
> whether I have one? You'll have to bear with me, I'm afraid, as I'm
> still building my Solaris knowledge at the minute - I was brought up
> on Windows. I use Solaris for my storage needs now though, and slowly
> improving on my knowledge so I can move away from Windows one day  :)

I see that Mark S. thinks this may be a specific ZFS bug, and there's a
followup with instructions on how to detect if that's the case.

However, it can also be a zero-link file.  I've certainly run into that
problem before myself, on UFS and other filesystems.

A zero-link file is a file that has been removed (unlink(2)ed), but
which remains open in some process(es).  Such a file continues to
consume space until the processes that have it open are killed.

Typically you'd use pfiles(1) or lsof to find such files.
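
For example (the mount point is a placeholder, and lsof is not bundled with
Solaris, so you may have to install it):

  # lsof +L1 /tank/myfs         # open files on that fs with a link count of 0
  # pfiles 12345 | grep myfs    # or inspect a suspect process by PID (PID made up)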

> If it makes any difference, the problem persists after a full reboot,

Yeah, if you rebooted and there's no 14GB .nfs* files, then this is not
a zero-link file.  See the followups.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Confusion

2009-08-21 Thread Volker A. Brandt
> > Can you actually see the literal commands?  A bit like MySQL's 'show
> > create table'?  Or are you just intrepreting the output?
> 
> Just interpreting the output.

Actually you could see the commands on the "old" server by using

  zpool history oradata


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filesystem notification / query

2009-08-21 Thread Richard Elling


On Aug 21, 2009, at 4:13 AM, Felix Nielsen wrote:

Thanks, I will look into that, but I would really like to be able to "query"
something and get a result back with all "changes".


See the Solaris Administration Guide: Security Service for the section
on auditing.
http://docs.sun.com/app/docs/doc/816-4557/audittm-1?l=en&a=browse
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 80% full .. automatically deleting backups....

2009-08-21 Thread Richard Elling


On Aug 20, 2009, at 11:50 PM, Johan Eliasson wrote:

Got a curious message the other day... that my tank is over 80% full
and that ZFS has deleted old backups to free up space. That's
curious since I'm not using the Time Slider for tank, only for
rpool...


So what exactly did it delete??


Check the pool history:
zpool history tank

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Nautilus Access List Tab

2009-08-21 Thread Chris
How do I get this in OpenSolaris 2009.06?

http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg

thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Virtual harddrive and ZFS performance

2009-08-21 Thread David Magda
On Fri, August 21, 2009 11:18, David Magda wrote:

> The current default value ignores flush requests from guest OSes, but this
> can't tweaked via a parameter (11.1.3 Responding to guest IDE flush
> requests):

s/can't/can be/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Virtual harddrive and ZFS performance

2009-08-21 Thread David Magda
On Fri, August 21, 2009 10:32, Bob Friesenhahn wrote:

> Various people have stated here that VirtualBox intentionally does not
> pass through cache flush requests.

The current default value ignores flush requests from guest OSes, but this
can't tweaked via a parameter (11.1.3 Responding to guest IDE flush
requests):

> If desired, the virtual disk images (VDI) can be flushed when the guest
> issues the IDE FLUSH CACHE command. Normally these requests are ignored
> for improved performance. To enable flushing, issue the following command:
>
>  VBoxManage setextradata VMNAME
>   "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

http://www.virtualbox.org/manual/UserManual.html#id2531504

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Confusion

2009-08-21 Thread Stephen Nelson-Smith
Sorry - didn't realise I'd replied only to you.
> You can either set the mountpoint property when you create the dataset or do 
> it
> in a second operation after the create.
>
> Either:
> # zfs create -o mountpoint=/u01 rpool/u01
>
> or:
> # zfs create rpool/u01
> # zfs set mountpoint=/u01 rpool/u01

Got you.

> I'm not sure about the remote mount.  It appears to be a local SMB resource
> mounted as NFS?  I've never seen that before.

Ah that's just a Sharity mount - it's a red herring.  u0[1-4] will be the same.

Thanks very much,

S.
-- 
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Confusion

2009-08-21 Thread Eric Sproul
Stephen Nelson-Smith wrote:
> Hi,
> 
>> I presume you've already installed your new server with the same rpool
>> configuration as your original, so you're asking how to recreate your two 
>> other
>> pools.
> 
> Correct - and also the mountpoints, which seem particulary confusing:
> 
> -bash-3.00# grep u0 /etc/mnttab
> rpool/u01     /u01   zfs   rw,devices,setuid,nonbmand,exec,xattr,atime,dev=4010008   1244978155
> oradata/u02   /u02   zfs   rw,devices,setuid,nonbmand,exec,xattr,atime,dev=4010009   1244978156
> oradata/u03   /u03   zfs   rw,devices,setuid,nonbmand,exec,xattr,atime,dev=401000a   1244978156
> redo/u04      /u04   zfs   rw,devices,setuid,nonbmand,exec,xattr,atime,dev=401000b   1244978156
> localhost:smb://192.168.168.253/ics   /u05/oradata   nfs   dev=5480003   1244981284
> 

You can either set the mountpoint property when you create the dataset or do it
in a second operation after the create.

Either:
# zfs create -o mountpoint=/u01 rpool/u01

or:
# zfs create rpool/u01
# zfs set mountpoint=/u01 rpool/u01

I'm not sure about the remote mount.  It appears to be a local SMB resource
mounted as NFS?  I've never seen that before.

>> Also assuming your new devices have the same names as the old, you can 
>> basically
>> just pluck the 'zpool create' arguments from your zpool status output on the
>> other machine:
>>
>> # zpool create oradata mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
>> # zpool create redo mirror c1t6d0 c1t7d0
> 
> Can you actually see the literal commands?  A bit like MySQL's 'show
> create table'?  Or are you just intrepreting the output?

Just interpreting the output.  The first one creates the 'oradata' pool with two
mirrors of two drives each.  Data will be dynamically balanced across both
mirrors, effectively the same as RAID1+0.  The second one creates a simple
mirror of two disks (RAID1).

Regards,
Eric

-- 
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
http://omniti.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Francois Napoleoni

Yup, try to look at the output of

# zdb - /

If you find big file(s) without a pathname, you are in ...

It should look like this:

...
Object  lvl   iblk   dblk  lsize  asize  type
     6    5    16K   128K   300G  70.0G  ZFS plain file
                              264  bonus  ZFS znode
        path      ???          <--- this
        uid       0
        gid       0
        atime     Thu Mar 26 18:08:51 2009
        mtime     Thu Mar 26 18:12:42 2009
        ctime     Thu Mar 26 18:12:42 2009
        crtime    Thu Mar 26 18:08:51 2009
        gen       6075
        mode      100600
        size      322122547200
        parent    3
        links     0
        xattr     0
        rdev      0x
...
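
It can also be worth cross-checking the accounting from userland, i.e. what ZFS says is used, whether snapshots are still holding blocks, and what the namespace actually contains (a sketch; substitute your own dataset and mountpoint):

# zfs list -o name,used,refer tank/myfs
# zfs list -t snapshot -r tank/myfs
# du -sh /tank/myfs

If du reports almost nothing, "used" stays high and there are no snapshots, a leaked object like the one above is a good suspect.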

but 14GB ...  we should rename
6792701 "Removing large holey file does not free space"
into
6792701 "Removing NOT THAT large holey file does not free space"

:)

F.


On 08/21/09 15:59, Mark Shellenbaum wrote:

Chris Murray wrote:
Nico, what is a zero-link file, and how would I go about finding 
whether I have one? You'll have to bear with me, I'm afraid, as I'm 
still building my Solaris knowledge at the minute - I was brought up 
on Windows. I use Solaris for my storage needs now though, and slowly 
improving on my knowledge so I can move away from Windows one day  :)


If it makes any difference, the problem persists after a full reboot, 
and I've deleted the three folders, so now there is literally nothing 
in that filesystem .. yet it reports 14GB.


It's not too much of an inconvenience, but it does make me wonder 
whether the 'used' figures on my other filesystems and zvols are correct.



You could be running into an instance of

6792701 Removing large holey file does not free space

A fix for this was integrated into build 118


  -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Francois Napoleoni / Sun Support Engineer
mail  : francois.napole...@sun.com
phone : +33 (0)1 3403 1707
fax   : +33 (0)1 3403 1114
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Virtual harddrive and ZFS performance

2009-08-21 Thread Bob Friesenhahn

On Thu, 20 Aug 2009, Johan Eliasson wrote:


So a full NTFS defrag should result in just a long sequential ZFS write?


That would depend on how the defrag algorithm works, how often NTFS 
issues a synchronous write request (cache flush), how much memory is 
installed on your Solaris system, and how fast the defrag works.


Various people have stated here that VirtualBox intentionally does not 
pass through cache flush requests.  Solaris will buffer writes up to 
30 seconds so if you have enough RAM, then quite a lot of writes may 
be coalesced into a nice order.  If defrag works fast enough, then it 
may give Solaris more to work with so that the volume is less 
fragmented.
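
If you want to watch that coalescing happen, polling pool I/O at a short interval usually shows the writes arriving in bursts as each transaction group is committed (a sketch; the pool name is a placeholder):

# zpool iostat tank 5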


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to remove [alternate] cylinders from slice 9?

2009-08-21 Thread Andrew Gabriel
You can remove them by using fmthard(1M) instead of format(1M). This 
allows full access to all 16 slices on x86.


Actually, if you want an exact copy of the VToC, grab the output of 
prtvtoc(1M) from one disk, and send it into fmthard -s on the other 
disk. (I've not tried this where EFI labels are involved though.)
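
For example, something along these lines copies the VToC from the first disk onto the second (the device names are only illustrative):

# prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2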


--
Andrew Gabriel

Jeff Victor wrote:
I am trying to mirror an existing zpool on OpenSolaris 2009.06. I 
think I need to delete two alternate cylinders...



The existing disk in the pool (c7d0s0):
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 19453      149.02GB    (19453/0/0) 312512445
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0



The new disk, which was a zpool before I destroyed that pool:
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       1 -     2       15.69MB    (2/0/0)         32130


Format won't let me remove the two cylinders from slice 9:
partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0


Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[3]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 19453c

Warning: Partition overlaps alternates partition. Specify different 
start cyl.

partition> 9
`9' is not expected.

How can I delete the alternate cylinders, or otherwise mirror c7d1 to 
c7d0?  Or can I safely use c7d0s2?


Thanks,
--JeffV



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Confusion

2009-08-21 Thread Eric Sproul
Stephen Nelson-Smith wrote:
> On my new server I've inserted 6 new disks, run devfsadm and labelled
> them.  I want to get the same set up as the previous, but can't for
> the life of me work out what I did.  I can't find my notes, and the
> documentation is just confusing me.
> 
> Can someone point me in the right direction?

Stephen,
I presume you've already installed your new server with the same rpool
configuration as your original, so you're asking how to recreate your two other
pools.

Also assuming your new devices have the same names as the old, you can basically
just pluck the 'zpool create' arguments from your zpool status output on the
other machine:

# zpool create oradata mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
# zpool create redo mirror c1t6d0 c1t7d0

The labelling step was probably unnecessary in this case, since the pools are
not boot pools.  ZFS will automatically label the disks with EFI labels when you
give a whole disk (no 's#') as an argument.

Hope this helps,
Eric

-- 
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
http://omniti.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Tutorial at LISA09

2009-08-21 Thread Eric Sproul
Bob Friesenhahn wrote:
> An obvious item to cover is the new Flash Archive support as well as the
> changes to Live Upgrade due to zfs boot.  Also, the benefits and issues
> associated with adding SSDs to the mix.
> 
> Based on the last zfs talk I attended at a LISA, system administrators
> are largely interested in system administration issues.

+1 to those items.  I'd also like to hear about how people are maintaining
offsite DR copies of critical data with ZFS.  Just send/recv, or something a
little more "live"?
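
For reference, the plain send/recv approach I have in mind is roughly this (hostnames, pool and snapshot names are just placeholders):

# zfs snapshot tank/critical@dr-20090821
# zfs send -I @dr-20090814 tank/critical@dr-20090821 | \
      ssh dr-host zfs receive -F backup/critical

which only ships the blocks changed since the previous snapshot.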

Eric

-- 
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
http://omniti.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to remove [alternate] cylinders from slice 9?

2009-08-21 Thread Jeff Victor

Thanks Trevor. Good to hear from you.

I needed to use '-e' to move forward. I think I was confused because 
these sources seem to disagree on SMI vs. EFI labels for ZFS:


ZFS always uses EFI:
* "ZFS formats the disk using an EFI label to contain a single, large 
slice" - no mention of any exceptions - 
http://docs.sun.com/app/docs/doc/817-2271/gazdp?l=en&a=view


Cannot boot from EFI labeled disk:
* http://docs.sun.com/app/docs/doc/819-2723/disksconcepts-17?l=en&a=view

ZFS disk "*cannot* have an EFI label" (perhaps only for rpools?)
* http://mail.opensolaris.org/pipermail/zfs-code/2009-May/000842.html

I think I understand:
* ZFS uses EFI labels except for boot disks, which must use an SMI label 
("Solaris2" in fdisk output).
* The docs need to point out the exception at 
http://docs.sun.com/app/docs/doc/817-2271/gazdp?l=en&a=view .
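
A quick way to check which kind of label a disk currently carries (just a sketch; the device name is an example):

# prtvtoc /dev/rdsk/c7d0s2

An SMI label reports cylinder-based geometry in the header comments, while an EFI label reports sector counts only and includes the small reserved partition 8 that ZFS places at the end of the disk.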



Trevor Pretty wrote:

Jeff old mate I assume you used format -e?

Have you tried swapping the label back to SMI and then back to EFI?

Trevor

Jeff Victor wrote:
I am trying to mirror an existing zpool on OpenSolaris 2009.06. I think 
I need to delete two alternate cylinders...



The existing disk in the pool (c7d0s0):
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 19453      149.02GB    (19453/0/0) 312512445
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0


The new disk, which was a zpool before I destroyed that pool:
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       1 -     2       15.69MB    (2/0/0)         32130

Format won't let me remove the two cylinders from slice 9:
partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)             0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[3]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 19453c

Warning: Partition overlaps alternates partition. Specify different 
start cyl.

partition> 9
`9' is not expected.

How can I delete the alternate cylinders, or otherwise mirror c7d1 to 
c7d0?  Or can I safely use c7d0s2?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Confusion

2009-08-21 Thread Stephen Nelson-Smith
Hello,

It could be lack of sleep, but I can't work this out.

On one server (which I built about 6 months ago) I have this:

-bash-3.00# mount
/ on rpool/ROOT/s10s_u6wos_07b read/write/setuid/devices/dev=4010002
on Thu Jan  1 01:00:00 1970
/devices on /devices read/write/setuid/devices/dev=510 on Sun Jun
14 12:15:32 2009
/system/contract on ctfs read/write/setuid/devices/dev=5140001 on Sun
Jun 14 12:15:32 2009
/proc on proc read/write/setuid/devices/dev=518 on Sun Jun 14 12:15:32 2009
/etc/mnttab on mnttab read/write/setuid/devices/dev=51c0001 on Sun Jun
14 12:15:32 2009
/etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=521
on Sun Jun 14 12:15:32 2009
/system/object on objfs read/write/setuid/devices/dev=5240001 on Sun
Jun 14 12:15:32 2009
/etc/dfs/sharetab on sharefs read/write/setuid/devices/dev=5280001 on
Sun Jun 14 12:15:32 2009
/platform/sun4v/lib/libc_psr.so.1 on
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
read/write/setuid/devices/dev=4010002 on Sun Jun 14 12:15:43 2009
/platform/sun4v/lib/sparcv9/libc_psr.so.1 on
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
read/write/setuid/devices/dev=4010002 on Sun Jun 14 12:15:43 2009
/dev/fd on fd read/write/setuid/devices/dev=541 on Sun Jun 14 12:15:52 2009
/tmp on swap read/write/setuid/devices/xattr/dev=522 on Sun Jun 14
12:15:53 2009
/var/run on swap read/write/setuid/devices/xattr/dev=523 on Sun
Jun 14 12:15:53 2009
/export on rpool/export
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010003 on Sun
Jun 14 12:15:55 2009
/export/home on rpool/export/home
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010004 on Sun
Jun 14 12:15:55 2009
/oradata on oradata
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010005 on Sun
Jun 14 12:15:55 2009
/redo on redo read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010006
on Sun Jun 14 12:15:55 2009
/rpool on rpool
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010007 on Sun
Jun 14 12:15:55 2009
/u01 on rpool/u01
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010008 on Sun
Jun 14 12:15:55 2009
/u02 on oradata/u02
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=4010009 on Sun
Jun 14 12:15:56 2009
/u03 on oradata/u03
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=401000a on Sun
Jun 14 12:15:56 2009
/u04 on redo/u04
read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=401000b on Sun
Jun 14 12:15:56 2009
/CIFS on localhost:x-browser:
remote/read/write/setuid/devices/dev=5480001 on Sun Jun 14 12:16:03
2009
/u05/oradata on localhost:smb://192.168.168.253/ics
remote/read/write/setuid/devices/dev=5480003 on Sun Jun 14 13:08:04
2009

-bash-3.00# zpool status
 pool: oradata
 state: ONLINE
 scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   oradata ONLINE   0 0 0
 mirrorONLINE   0 0 0
   c1t2d0  ONLINE   0 0 0
   c1t3d0  ONLINE   0 0 0
 mirrorONLINE   0 0 0
   c1t4d0  ONLINE   0 0 0
   c1t5d0  ONLINE   0 0 0

errors: No known data errors

 pool: redo
 state: ONLINE
 scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   redoONLINE   0 0 0
 mirrorONLINE   0 0 0
   c1t6d0  ONLINE   0 0 0
   c1t7d0  ONLINE   0 0 0

errors: No known data errors

 pool: rpool
 state: ONLINE
 scrub: none requested
config:

   NAME  STATE READ WRITE CKSUM
   rpool ONLINE   0 0 0
 mirror  ONLINE   0 0 0
   c1t0d0s0  ONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

errors: No known data errors


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Mark Shellenbaum

Chris Murray wrote:

Nico, what is a zero-link file, and how would I go about finding whether I have 
one? You'll have to bear with me, I'm afraid, as I'm still building my Solaris 
knowledge at the minute - I was brought up on Windows. I use Solaris for my 
storage needs now though, and slowly improving on my knowledge so I can move 
away from Windows one day  :)

If it makes any difference, the problem persists after a full reboot, and I've 
deleted the three folders, so now there is literally nothing in that filesystem 
.. yet it reports 14GB.

It's not too much of an inconvenience, but it does make me wonder whether the 
'used' figures on my other filesystems and zvols are correct.



You could be running into an instance of

6792701 Removing large holey file does not free space

A fix for this was integrated into build 118


  -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Chris Murray
Nico, what is a zero-link file, and how would I go about finding whether I have 
one? You'll have to bear with me, I'm afraid, as I'm still building my Solaris 
knowledge at the minute - I was brought up on Windows. I use Solaris for my 
storage needs now though, and slowly improving on my knowledge so I can move 
away from Windows one day  :)

If it makes any difference, the problem persists after a full reboot, and I've 
deleted the three folders, so now there is literally nothing in that filesystem 
.. yet it reports 14GB.

It's not too much of an inconvenience, but it does make me wonder whether the 
'used' figures on my other filesystems and zvols are correct.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] possible resilver bugs

2009-08-21 Thread Markus Kovero
Hi, I don't have the means to replicate this issue or to file a bug about it, so I'd like your opinion on these issues, or perhaps a bug report can be made if necessary.

In a scenario with, say, three raidz2 groups each consisting of several disks, two disks fail in different raidz2 groups. You then have a degraded pool and two degraded raidz2 groups.

Now you replace the first disk and resilvering starts. It takes a day, two days, three days; the counter says 100% resilvered, yet new data is still being written to the disk being replaced. The counter SHOULD update if the amount of data in the group increases.
Before that first disk has finished resilvering, the second failed disk in the second group is replaced, which results in BOTH resilver processes starting over from the beginning, making the pool rather unusable with two resilvers running and leaving it compromised for several days to come.
Replacing a disk in one raidz2 group should not interfere with an ongoing resilver on another disk set.
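
For reference, the sequence described above is roughly this: replace the first failed disk, watch the resilver in zpool status, then replace the disk in the other raidz2 group and both resilvers start over (pool and device names are placeholders):

# zpool replace tank c5t3d0
# zpool status tank
# zpool replace tank c6t7d0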

Yours
Markus Kovero


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-21 Thread Jorgen Lundman

Nope, that it does not.



Ian Collins wrote:

Jorgen Lundman wrote:


Finally came to the reboot maintenance to reboot the x4540 to make it 
see the newly replaced HDD.


I tried, reboot, then power-cycle, and reboot -- -r,

but I can not make the x4540 accept any HDD in that bay. I'm starting 
to think that perhaps we did not lose the original HDD, but rather the 
slot, and there is a hardware problem.


This is what I see after a reboot, the disk is c1t5d0, sd37, s...@5,0 or 
slot 13.


c1::dsk/c1t4d0 disk connectedconfigured 
unknown
c1::dsk/c1t5d0 disk connectedconfigured 
unknown
c1::dsk/c1t6d0 disk connectedconfigured 
unknown



Does format show it?



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filesystem notification / query

2009-08-21 Thread Felix Nielsen
Thanks, I will look into that, but I would really like to be able to "query" something and 
get a result back with all the "changes".
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] bug: zpool create allows a member drive that is the raw device of a full partition

2009-08-21 Thread ??
If you run Solaris or OpenSolaris you typically use c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk.
By default, Solaris x86 and OpenSolaris use the raw device c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool.

In fact there can be more than one solaris2 partition on each hard disk, so we can also use raw devices such as c0t0d0p1 (/dev/rdsk/c0t0d0p1) or c0t0d0p2 (/dev/rdsk/c0t0d0p2) as member devices when creating a new zpool:

mor...@egoodbrac1:~# zpool create dpool raidz c0t0d0p1 c0t1d0 c0t2d0

This command successfully creates a new raidz pool named dpool, where c0t0d0p1 is the raw device of the first solaris2 partition of the system disk (c0t0d0) and c0t1d0 and c0t2d0 are two other raw disks.

But logically the rpool member c0t0d0s0 is the same storage as slice 0 inside c0t0d0p1 (p0 means the full disk and p1 means the first partition), so c0t0d0s0 is a child of c0t0d0p1. If c0t0d0s0 is already a member device of one zpool, how can we still be allowed to use "the parent", c0t0d0p1, to create a new zpool?

I tried this twice, on my PC and in a VM on VirtualBox:
If you create two solaris2 fdisk partitions on a disk, you can use the second partition (as a raw partition), e.g. c0t0d0p2, as a member device of a new pool.
But if you use the first partition of your system disk as a member device of another zpool, GRUB will fail to load its boot stage when you try to reboot the system.

If using c0t0d0p1 as a member device of a zpool does not break GRUB right away, try destroying the new zpool you just created; after that the problem is certain to appear!

The full test procedure is posted on ixpub.net:
http://home.ixpub.net/space.php?uid=10821989&do=blog&id=407468 
Q.Ho
21/08/2009 11:52 GMT+1
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import hangs with indefinite writes

2009-08-21 Thread No Guarantees

Every time I attempt to import a particular RAID-Z pool, my system hangs.  
Specifically, if I open up a gnome terminal and input '$ pfexec zpool import mypool', 
the process never completes and I never return to the prompt.  If I 
open up another terminal, I can input a 'zpool status' and see that the pool 
has been imported, but the directory has not been mounted.  In other words, 
there is no /mypool in the tree.  If I issue a 'zpool iostat 1' I can see that 
there are constant writes to the pool, but NO reads.  If I halt the zpool 
import, and then do a 'zpool scrub', it will complete with no errors after 
about 12-17 hours (it's a 5TB pool).  I have looked through this forum and 
found many examples where people can't import due to hardware failure and lack 
of redundancy, but none where they had a redundant setup, everything appears 
okay, and they STILL can't import.  I can export the pool without any problems. 
 I need to do this before rebooting, otherwise it hangs on reboot,
  probably while trying to import the pool.  I've looked around for 
troubleshooting info, but the only thing that gives me any hint of what is 
wrong is a core dump after issuing a 'zdb -v mypool'.  It fails with "Assertion 
failed: object_count == usedobjs (0x7 == 0x6), file ../zdb.c, line 1215".  Any 
suggestions?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filesystem notification / query

2009-08-21 Thread Chris Ridd


On 20 Aug 2009, at 21:22, Felix Nielsen wrote:


Hi

Is it possible to get filesystem notification like when files are  
created, modified, deleted? or can the "activity" be exported?


If you have a vscan service then that will get notified when files are  
accessed or modified; would that be sufficient for your purposes?
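
Roughly, and only as a sketch (the dataset name is a placeholder and the exact service FMRI may differ between builds):

# zfs set vscan=on tank/home
# svcadm enable vscan
# svcs vscan

The scan engine itself still has to be configured separately with vscanadm.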




Cheers,

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] creating zones in open solaris on x86

2009-08-21 Thread Ian Collins

sai prasath wrote:

Hi

I have installed OpenSolaris on an HP ProLiant ML370 G6. While creating zones
I am getting an error message for the following command.

#zfs create -o canmount=noauto rpool/ROOT/S10be/zones

cannot create 'rpool/ROOT/S10be/zones': parent does not exist.


what does "zfs list -r rpool" report?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot restore to [snapshot]: destination already exists

2009-08-21 Thread Ian Collins

Tony Pyro wrote:

I'm running into a problem trying to do "zfs receive", with data being 
replicated from a Solaris 10 (11/06 release) to a storage server running OS 118. Here is 
the error:

r...@lznas2:/backup# cat backup+mcc+use...@zn2---pre_messages_delete_20090430 | 
 zfs receive backup/mcc/users
cannot restore to backup/mcc/us...@pre_messages_delete_20090430: destination 
already exists
r...@lznas2:/backup# 

  

How was the file created?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-21 Thread Ian Collins

Jorgen Lundman wrote:


Finally came to the reboot maintenance to reboot the x4540 to make it 
see the newly replaced HDD.


I tried, reboot, then power-cycle, and reboot -- -r,

but I can not make the x4540 accept any HDD in that bay. I'm starting 
to think that perhaps we did not lose the original HDD, but rather the 
slot, and there is a hardware problem.


This is what I see after a reboot, the disk is c1t5d0, sd37, s...@5,0 or 
slot 13.


c1::dsk/c1t4d0 disk connectedconfigured 
unknown
c1::dsk/c1t5d0 disk connectedconfigured 
unknown
c1::dsk/c1t6d0 disk connectedconfigured 
unknown



Does format show it?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-21 Thread Ian Collins

Joseph L. Casale wrote:

I have my own application that uses large circular buffers and a socket
connection between hosts.  The buffers keep data flowing during ZFS
writes and the direct connection cuts out ssh.



Application, as in not script (something you can share)?
  


Not yet!
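
Until then, a roughly equivalent pipeline can be put together with mbuffer (a sketch; host, port and dataset names are placeholders).

On the receiving host:

# mbuffer -I 9090 -s 128k -m 1G | zfs receive -F backup/data

On the sending host:

# zfs send tank/data@snap | mbuffer -s 128k -m 1G -O drhost:9090

The large memory buffer keeps data flowing across ZFS write bursts and the raw TCP connection avoids the ssh overhead.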

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS tale of woe and fail

2009-08-21 Thread Ross
It was blogged about by Joyent, Tim:
http://www.joyent.com/joyeurblog/2008/01/16/strongspace-and-bingodisk-update/
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss