Re: [zfs-discuss] ZFS Scalability/performance

2007-06-22 Thread Brian Hechinger
On Wed, Jun 20, 2007 at 12:03:02PM -0400, Will Murnane wrote:
> Yes.  2 disks means when one fails, you've still got an extra.  In
> raid 5 boxes, it's not uncommon with large arrays for one disk to die,
> and when it's replaced, the stress on the other disks causes another
> failure.  Then the array is toast.  I don't know if this is a problem
> on ZFS... but they took the time to implement raidz2, so I'd suggest
> it.

If you buy all the disks at once and add them to a pool all at once,
they should all theoretically have approximately the same lifespan.
When one dies, you can almost count on others following soon after.
Nothing sucks more than your "redundant" disk array losing more disks
than it can tolerate, so you lose all your data anyway.  You'd be better
off doing a giant non-parity stripe and dumping to tape on a regular
basis. ;)

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-22 Thread Brian Hechinger
On Thu, Jun 21, 2007 at 11:36:53AM +0200, Roch - PAE wrote:
> 
> code) or Samba might be better by being careless with data.

Well, it *is* trying to be a Microsoft replacement.  Gotta get it
right, you know?  ;)

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-22 Thread Darren . Reed

mike wrote:


it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...



Well, there's a dark horse here called "patents".
Nobody really knows the full extent of who's got what or
what covers what between the likes of Sun (ZFS/StorageTek),
NetApp (WAFL) and IBM (ARC?).

Maybe this patent game will just end up being like the
cold war with nuclear missiles... and the only real winners
will be the lawyers and the patent office collecting $$.

Darren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-22 Thread Matthew Ahrens

Dan Saul wrote:

I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.


Any config I could imagine would be able to stream several videos at once 
(even 10Mbit/sec 1080p HD).  For very high data integrity and plenty of 
performance for streaming a few videos, I'd try:


3 x 9-wide raidz-2
3 x hot spare

For a total of 21 disks' worth of usable space.  If you need a bit more space, 
do:

2 x 14-wide raidz-2
2 x hot spare

for a total of 24 disks' worth of usable space.
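
Purely as an illustration (the device names below are invented, not from
this thread; substitute your own controller/target layout), the first
layout could be created in a single command:

zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 \
    spare c4t0d0 c4t1d0 c4t2d0

The three hot spares are available to all three raidz2 vdevs.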

--matt



On 6/22/07, Richard Elling <[EMAIL PROTECTED]> wrote:

Dan Saul wrote:
> Good day ZFS-Discuss,
>
> I am planning to build an array of 30 drives in a RaidZ2 configuration
> with two hot spares. However I read on the internet that this was not
> ideal.
>
> So I ask those who are more experienced than me: what configuration
> would you recommend with ZFS? I would like to have some redundancy
> while still keeping as much disk space open for my uses as possible.
>
> I don't want to mirror 15 drives to 15 drives as that would
> drastically affect my storage capacity.

There are hundreds of possible combinations of 30 drives.
It really comes down to a trade-off of space vs performance vs RAS.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance 


  -- richard



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-22 Thread Richard Elling

Dan Saul wrote:

I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.


The model I used in this blog deals with small, random reads, not
streaming workloads.  In part this is because the data needed to
calculate small, random read performance is readily available on disk
data sheets.  For streaming workloads we can calculate the media
bandwidth, which will be good when you have multiple drives.  But
there are other limitations in the system which will ultimately cap
the bandwidth, and those limitations are not expressed in data sheets.
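
To put rough numbers on the streaming case (assuming roughly 50 MB/s of
sustained media bandwidth per drive, a ballpark figure for 2007-era SATA
disks rather than a measurement of this system):

   30 drives x ~50 MB/s      =~ 1,500 MB/s aggregate media bandwidth
   one 10 Mbit/s HD stream   =~ 1.25 MB/s

So the platters themselves are unlikely to be the limit for a handful of
streams; as noted above, the controllers, HBAs, memory, and network will
cap throughput well before the media does.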

The best way is to try it.  Let us know how it works.
 -- richard


On 6/22/07, Richard Elling <[EMAIL PROTECTED]> wrote:

Dan Saul wrote:
> Good day ZFS-Discuss,
>
> I am planning to build an array of 30 drives in a RaidZ2 configuration
> with two hot spares. However I read on the internet that this was not
> ideal.
>
> So I ask those who are more experienced than me: what configuration
> would you recommend with ZFS? I would like to have some redundancy
> while still keeping as much disk space open for my uses as possible.
>
> I don't want to mirror 15 drives to 15 drives as that would
> drastically affect my storage capacity.

There are hundreds of possible combinations of 30 drives.
It really comes down to a trade-off of space vs performance vs RAS.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance 


  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-22 Thread Dan Saul

I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.

On 6/22/07, Richard Elling <[EMAIL PROTECTED]> wrote:

Dan Saul wrote:
> Good day ZFS-Discuss,
>
> I am planning to build an array of 30 drives in a RaidZ2 configuration
> with two hot spares. However I read on the internet that this was not
> ideal.
>
> So I ask those who are more experienced than me: what configuration
> would you recommend with ZFS? I would like to have some redundancy
> while still keeping as much disk space open for my uses as possible.
>
> I don't want to mirror 15 drives to 15 drives as that would
> drastically affect my storage capacity.

There are hundreds of possible combinations of 30 drives.
It really comes down to a trade-off of space vs performance vs RAS.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Indiana Wish List

2007-06-22 Thread Lori Alt



andrewk9 wrote:


Apologies: I've just realised all this talk of "I've booted off of ZFS" is totally bogus. 
What they've actually done is booted off Ext3FS, for example, then jumped into loading the 
"real" root from the zpool. That'll teach me to read things first. This is indeed a 
pretty ugly hack.

The only obstacle, as I see it, to getting *real* RAIDZ /  RAIDZ2 boot support is adding the requisite code for reading the zpool into Grub. There was also some discussion that on some systems, the ZFS grub plugin would be unable to reliably access large numbers of disks due to BIOS bugs / limitations on x86. 



Correct, only one disk at a time can be accessed.  This is the
problem, since on RAID-Z, files can be spread all over
the disks in the pool.  With simple mirroring, you are
guaranteed that each file can be read, in its entirety, from
a single disk.

Booting from RAID-Z is planned for the second release
of zfs boot.  The design is unclear, but it will probably
involve a new dataset option (the "replicate on all disks
in the pool" option) which will enable us to make sure
that the files needed for booting can be read from a single
disk.




This begs the question: does Sun have a version of the ZFS Grub plugin that 
*can* read (however unreliably) from a RAIDZ pool?



No.


   Lori



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaCP Beta1-test2 (ZFS/Boot - manual partitioning support)

2007-06-22 Thread Al Hopper

On Fri, 22 Jun 2007, Erast Benson wrote:


New unstable ISO of NexentaCP (Core Platform) available.

http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso


Also available at:

http://www.genunix.org/distributions/gnusolaris/index.html


Changes:

* ON B67 based
* ZFS/Boot manual partitioning support implemented (in addition to
auto-partitioning). Both Wizard and FDisk types are fully supported.
* gcc/g++ now officially included on installation media
* APT repository fixed
* first official meta-package: nexenta-gnome

After installation, those who need the GNOME environment can just type:

$ sudo apt-get install nexenta-gnome

Known bugs:

* after a fresh install, the APT caches need to be re-created:

$ sudo rm /var/lib/apt/*
$ sudo apt-get update
--
Erast


Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Indiana Wish List

2007-06-22 Thread andrewk9
Apologies: I've just realised all this talk of "I've booted off of ZFS" is 
totally bogus. What they've actually done is booted off Ext3FS, for example, 
then jumped into loading the "real" root from the zpool. That'll teach me to 
read things first. This is indeed a pretty ugly hack.

The only obstacle, as I see it, to getting *real* RAIDZ /  RAIDZ2 boot support 
is adding the requisite code for reading the zpool into Grub. There was also 
some discussion that on some systems, the ZFS grub plugin would be unable to 
reliably access large numbers of disks due to BIOS bugs / limitations on x86. 

This begs the question: does Sun have a version of the ZFS Grub plugin that 
*can* read (however unreliably) from a RAIDZ pool?

Cheers

Andrew.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


RE: [zfs-discuss] zfs and snmp disk space stats

2007-06-22 Thread Bruce Shaw
Gimme specific examples and I'll have a look at it.

We (net-snmp) are just about to release a new version (5.4.1) so I'd
like to fix it before it goes to production. 

It may be a known bug in 5.0.9 that has since been fixed.

> Not specifically a ZFS question, but is anyone monitoring disk space of
> their ZFS filesystems via the Solaris 10 snmpd?  I can't find any
> 64-bit counters in the MIB for disk space, so the normal tools I use
> get completely wrong numbers for my 1-terabyte pool.
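
One way to see what the agent actually exposes (this assumes the
net-snmp command-line tools and a read community of "public"; adjust
for your configuration) is to walk the HOST-RESOURCES storage table:

$ snmpwalk -v 2c -c public localhost HOST-RESOURCES-MIB::hrStorageTable

hrStorageSize and hrStorageUsed are 32-bit integers, but they are
expressed in units of hrStorageAllocationUnits, so whether a 1-terabyte
pool overflows depends on the allocation unit the agent reports.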





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Implicit storage tiering w/ ZFS

2007-06-22 Thread Richard Elling

Blue Thunder Somogyi wrote:

I'm curious if there has been any discussion of or work done toward 
implementing storage classing within zpools (this would be similar to the 
storage foundation QoSS feature).


There has been some discussion.  AFAIK, there is no significant work
in progress.  This problem is far more complex to solve than it may
first appear.


I've searched the forum and inspected the documentation looking for a means to 
do this, and haven't found anything, so pardon the post if this is 
redundant/superfluous.

I would imagine this would require something along the lines of:
a) the ability to categorize devices in a zpool with their "class of storage", perhaps a numeric 
rating or otherwise, with the idea that the fastest disks get a "1" and the slowest get a 
"9" (or whatever the largest number of supported tiers would be)


This gets more complicated when devices are very asymmetric in performance.
For a current example, consider an NVRAM-backed RAID array.  Writes tend to
complete very quickly, regardless of the offset.  But reads can vary widely,
and may be an order of magnitude slower.  However, this will not be consistent
as many of these arrays also cache reads (like JBOD track buffer caches).
Today, there are some devices which may demonstrate 2 or more orders of
magnitude difference between read and write latency.


b) leveraging the copy-on-write nature of ZFS, when data is modified, the new 
copy would be sent to the devices that were appropriate given statistical 
information regarding that data's access/modification frequency.  Not being 
familiar with ZFS internals, I don't know if there would be a way of taking 
advantage of the ARC knowledge of access frequency.


I think the data is there.  This gets further complicated when a vdev shares
a resource with another vdev.  A shared resource may not be visible to Solaris
at all, so it would be difficult (or wrong) for Solaris to make a policy with
incorrect assumptions about resource constraints.


c) It seems to me there would need to be some trawling of the storage tiers (probably 
only the fastest, as the COW migration of frequently accessed data to fast disk would not 
have an analogously inherent mechanism to move idle data down a tier) to locate data that 
is gathering cobwebs and stage it down to an appropriate tier.  Obviously it would be 
nice to have as much data as possible on the fastest disks, while leaving all the free 
space on the dog disks, but would also want to avoid any "write twice" behavior 
(not enough space on appropriate tier so staged to slower tier and migrated up to faster 
disk) due to the fastest tier being overfull.


When I follow this logical progression, I arrive at SAM-FS.  Perhaps it is
better to hook ZFS into SAM-FS?

While zpools are great for dealing with large volumes of data with integrity and minimal management overhead, I've remained concerned about the inability to control where data lives when using different types of storage, e.g. a mix of FC and SATA disk in the extreme, mirror vs RAID-Z2, or as subtle as high RPM small spindles vs low RPM large spindles.  


There is no real difference in performance based on the interface: FC vs. SATA.
So it would be a bad idea to base a policy on the interface type.


For instance, if you had a database that you know has 100GB of dynamic data and 900GB of 
more stable data, with the above capabilities you could allocate the appropriate ratio of 
FC and SATA disk and be confident that the data would naturally migrate to its 
appropriate underlying storage.  Of course there are ways of using multiple zpools with 
the different storage types and table spaces to locate the data onto the appropriate 
zpool, but this is undermining the "minimal management" appeal of ZFS.


The people who tend to really care about performance will do what is needed
to get performance, and that doesn't include intentionally using slow devices.
Perhaps you are thinking of a different market demographic?


Anyhow, just curious if this concept has come up before and if there are any 
plans around it (or something similar).


Statistically, it is hard to beat stochastically spreading wide and far.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] data structures in ZFS

2007-06-22 Thread eric kustarz

A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/

We've only got one picture up right now (though it's a juicy one!),  
but let us know what you're interested in seeing, and we'll try to  
make that happen.


I see this as a nice supplement to the actual source:
http://www.opensolaris.org/os/community/zfs/source/

and the on-disk format guide:
http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf

And hopefully some of this will find its way into the ZFS chapter of  
the Solaris Internals book:
http://www.amazon.com/dp/0131482092?tag=solarisintern-20&camp=14573&creative=327641&linkCode=as1&creativeASIN=0131482092&adid=0VTFCDYF5NTGMS4F14XP&
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ


happy friday,
eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] New article on ZFS in Russian

2007-06-22 Thread Victor Latushkin

Hi,

Recently the Russian edition of PC Magazine published an article about
ZFS, titled

ZFS - Новый взгляд на файловые системы
or, in English,
ZFS - A New Look at File Systems

  http://www.pcmag.ru/solutions/detail.php?ID=9141

There's already so much collateral on ZFS in English and other 
languages, so it is good to have an article on ZFS in one more language.


It is good to have the slides of the famous "ZFS - The Last Word in File 
Systems" presentation handy, for example the ones used by Neil Perrin 
during his talk at Sun Tech Days in Saint Petersburg, since the article 
rather closely follows the slides.


Hope this helps,
Victor

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread michael schuster

Joubert Nel wrote:



What I meant is that when I do "zpool create" on a disk, the entire
contents of the disk doesn't seem to be overwritten/destroyed. I.e. I
suspect that if I didn't copy any data to this disk, a large portion of
what was on it is potentially recoverable.

If so, is there a tool that can help with such recovery?


I can't answer this in detail, but, to borrow from Tim O'Reilly, think 
of it as the text of a book where you've lost the table of contents and 
the first few chapters, and thrown all the remaining pages on the floor...



--
Michael Schuster              Sun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Implicit storage tiering w/ ZFS

2007-06-22 Thread Blue Thunder Somogyi
I'm curious if there has been any discussion of or work done toward 
implementing storage classing within zpools (this would be similar to the 
storage foundation QoSS feature).

I've searched the forum and inspected the documentation looking for a means to 
do this, and haven't found anything, so pardon the post if this is 
redundant/superfluous.

I would imagine this would require something along the lines of:
a) the ability to categorize devices in a zpool with their "class of storage", 
perhaps a numeric rating or otherwise, with the idea that the fastest disks get 
a "1" and the slowest get a "9" (or whatever the largest number of supported 
tiers would be)
b) leveraging the copy-on-write nature of ZFS, when data is modified, the new 
copy would be sent to the devices that were appropriate given statistical 
information regarding that data's access/modification frequency.  Not being 
familiar with ZFS internals, I don't know if there would be a way of taking 
advantage of the ARC knowledge of access frequency.
c) It seems to me there would need to be some trawling of the storage tiers 
(probably only the fastest, as the COW migration of frequently accessed data to 
fast disk would not have an analogously inherent mechanism to move idle data 
down a tier) to locate data that is gathering cobwebs and stage it down to an 
appropriate tier.  Obviously it would be nice to have as much data as possible 
on the fastest disks, while leaving all the free space on the dog disks, but 
would also want to avoid any "write twice" behavior (not enough space on 
appropriate tier so staged to slower tier and migrated up to faster disk) due 
to the fastest tier being overfull.

While zpools are great for dealing with large volumes of data with integrity 
and minimal management overhead, I've remained concerned about the inability to 
control where data lives when using different types of storage, e.g. a mix of FC 
and SATA disk in the extreme, mirror vs RAID-Z2, or as subtle as high RPM small 
spindles vs low RPM large spindles.  

For instance, if you had a database that you know has 100GB of dynamic data and 
900GB of more stable data, with the above capabilities you could allocate the 
appropriate ratio of FC and SATA disk and be confident that the data would 
naturally migrate to its appropriate underlying storage.  Of course there are 
ways of using multiple zpools with the different storage types and table spaces 
to locate the data onto the appropriate zpool, but this is undermining the 
"minimal management" appeal of ZFS.

Anyhow, just curious if this concept has come up before and if there are any 
plans around it (or something similar).

Thanks,
BTS
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Darren Dunham
> What I meant is that when I do "zpool create" on a disk, the entire
> contents of the disk doesn't seem to be overwritten/destroyed. I.e. I
> suspect that if I didn't copy any data to this disk, a large portion
> of what was on it is potentially recoverable.

Presumably a scavenger program could try to find the top of the oldest
tree and construct an uberblock that points to it.

> If so, is there a tool that can help with such recovery?

Not that I'm aware of.  

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Eric Schrock
On Thu, Jun 21, 2007 at 07:34:13PM -0700, Joubert Nel wrote:
> 
> OK, so if I didn't copy any data to this disk, presumably a large
> portion of what was on the disk previously is theoretically
> recoverable. There is really one file in particular that I'd like to
> recover (it is a cpio backup).
> 
> Is there a tool that can accomplish this?

For ZFS, no.  In addition to the fact that ZFS uses variable blocksizes
and compression, there is no distinction between metadata and data.
Without knowing the pool configuration and the 'root' of the tree (the
uberblock), there is no way for ZFS to recover data.

An interesting tool could be written to try to recover data from trivial
(single vdev, non-RAID-Z, uncompressed) pools by trying to interpret
each block as metadata and verifying the checksums, but it would still
be quite difficult (and painfully slow).
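
As a first sanity check (not a recovery tool), you can at least dump
whatever label information is currently on the device with zdb; the
device name here is only an example:

# zdb -l /dev/dsk/c1t0d0s0

After the new "zpool create" this will show the new pool's labels,
which is exactly the problem described above.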

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] NexentaCP Beta1-test2 (ZFS/Boot - manual partitioning support)

2007-06-22 Thread Erast Benson
New unstable ISO of NexentaCP (Core Platform) available.

http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso

Changes:

* ON B67 based
* ZFS/Boot manual partitioning support implemented (in addition to
auto-partitioning). Both Wizard and FDisk types are fully supported.
* gcc/g++ now officially included on installation media
* APT repository fixed
* first official meta-package: nexenta-gnome

After installation, those who need the GNOME environment can just type:

$ sudo apt-get install nexenta-gnome

Known bugs:

* after a fresh install, the APT caches need to be re-created:

$ sudo rm /var/lib/apt/*
$ sudo apt-get update

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-22 Thread Richard Elling

Dan Saul wrote:

Good day ZFS-Discuss,

I am planning to build an array of 30 drives in a RaidZ2 configuration
with two hot spares. However I read on the internet that this was not
ideal.

So I ask those who are more experienced than me: what configuration
would you recommend with ZFS? I would like to have some redundancy while
still keeping as much disk space open for my uses as possible.

I don't want to mirror 15 drives to 15 drives as that would
drastically affect my storage capacity.


There are hundreds of possible combinations of 30 drives.
It really comes down to a trade-off of space vs performance vs RAS.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot boot zone on zfs inside a logical domain 65543017

2007-06-22 Thread Claire . Grandalski

Customer has this issue:
Sun Fire T2000, Solaris 10 11/06

This is a new install and ZFS has not worked at all inside of a Logical
Domain.  Unfortunately, nothing shows up in the messages file and I
receive no errors when trying to boot the zone.  It appears to just hang
when trying to import the service manifests, and it is never the same
service manifest.

I appreciate any help that can be provided.

--

Thanks!
Have a good day!

Claire Grandalski - OS Technical Support Engineer
[EMAIL PROTECTED]
(800)USA-4SUN (Reference your Case Id #) 
Hours 8:00 - 3:00 EST

Sun Support Services
4 Network Drive,  UBUR04-105
Burlington MA 01803-0902


  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs and snmp disk space stats

2007-06-22 Thread Ed Ravin
Not specifically a ZFS question, but is anyone monitoring disk space of
their ZFS filesystems via the Solaris 10 snmpd?  I can't find any
64-bit counters in the MIB for disk space, so the normal tools I use
get completely wrong numbers for my 1-terabyte pool.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS + ISCSI + LINUX QUESTIONS

2007-06-22 Thread Gary Gendel
Al,

Has there been any resolution to this problem? I get it repeatedly on my
5 x 500 GB raidz configuration. I sometimes get port drop/reconnect errors when
this occurs.

Gary
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Joubert Nel
> On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote:
> >
> > When I ran "zpool create", the pool got created without a warning.
>
> zpool(1M) will disallow creation if the disk contains data in active
> use (mounted fs, zfs pool, dump device, swap, etc.).  It will warn if
> it contains a recognized filesystem (zfs, ufs, etc.) that is not
> currently mounted, but allow you to override it with '-f'.  What was
> previously on the disk?

It was ZFS with a few GB of data.

>
> > What is strange, and maybe I'm naive here, is that there was no
> > "formatting" of this physical disk so I'm optimistic that the data
> > is still recoverable from it, even though the new pool shadows it.
> >
> > Or is this way off mark?
>
> You are guaranteed to have lost all data within the vdev label portions
> of the disk (see on-disk specification from opensolaris.org).  How much
> else you lost depends on how long the device was active in the pool and
> how much data was written to it.

OK, so if I didn't copy any data to this disk, presumably a large portion of 
what was on the disk previously is theoretically recoverable. There is really 
one file in particular that I'd like to recover (it is a cpio backup).

Is there a tool that can accomplish this?

Joubert
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Joubert Nel
Richard,

> Joubert Nel wrote:
> >> If the device was actually in use on another system, I would expect
> >> that libdiskmgmt would have warned you about this when you ran
> >> "zpool create".
>
> AFAIK, libdiskmgmt is not multi-node aware.  It does know about local
> uses of the disk.  Remote uses of the disk, especially those shared
> with other OSes, is a difficult problem to solve where there are no
> standards.  Reason #84612 why I hate SANs.
>
> > When I ran "zpool create", the pool got created without a warning.
>
> If the device was not currently in use, why wouldn't it proceed?
>
> > What is strange, and maybe I'm naive here, is that there was no
> > "formatting" of this physical disk so I'm optimistic that the data
> > is still recoverable from it, even though the new pool shadows it.
> >
> > Or is this way off mark?
>
> If you define formatting as writing pertinent information to the disk
> such that ZFS works, then it was formatted.  The uberblock and its
> replicas only take a few iops.

What I meant is that when I do "zpool create" on a disk, the entire contents of 
the disk doesn't seem to be overwritten/destroyed. I.e. I suspect that if I 
didn't copy any data to this disk, a large portion of what was on it is 
potentially recoverable.

If so, is there a tool that can help with such recovery?

Joubert
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL on user specified devices?

2007-06-22 Thread eric kustarz


On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:


Quick question,

Are there any tunables, or is there any way to specify devices in a  
pool to use for the ZIL specifically? I've been thinking through  
architectures to mitigate performance problems on SAN and various  
other storage technologies where disabling ZIL or cache flushes has  
been necessary to make up for performance and was  wondering if  
there would be a way to specify a specific device or set of devices  
for the ZIL to use separate of the data devices so I wouldn't have  
to disable it in those circumstances.




See:
6339640 Make ZIL use NVRAM when available.

Neil has done some really nice work and is very close to putting  
back... wait a couple of days...


eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bug in "zpool history"

2007-06-22 Thread Niclas Sodergard

On 6/21/07, eric kustarz <[EMAIL PROTECTED]> wrote:


> # zpool history
> 2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
> 2007-06-20.10:20:03 zfs clone syspool/[EMAIL PROTECTED] syspool/
> myrootfs
> 2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool
>
> As you can see it says I did a "zfs set bootfs=..." even though the
> correct command should have been "zpool set bootfs=...". Of course
> this is purely cosmetic. I currently don't have access to a recent
> nevada build so I just wonder if this is present there as well.

nice catch... i filed:
6572465 'zpool set bootfs=...' records history as 'zfs set bootfs=...'

expect a fix today; it's simply a matter of passing 'FALSE' instead of
'TRUE' as the 'pool' parameter in zpool_log_history().


Great. Thanks.

cheers,
Nickus

--
Have a look at my blog for sysadmins!
http://aspiringsysadmin.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Suggestions on 30 drive configuration?

2007-06-22 Thread Dan Saul

Good day ZFS-Discuss,

I am planning to build an array of 30 drives in a RaidZ2 configuration
with two hot spares. However I read on the internet that this was not
ideal.

So I ask those who are more experienced than me: what configuration
would you recommend with ZFS? I would like to have some redundancy while
still keeping as much disk space open for my uses as possible.

I don't want to mirror 15 drives to 15 drives as that would
drastically affect my storage capacity.
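
For concreteness, and with made-up device names, the layout I describe
above would be a single wide raidz2 vdev plus two spares, e.g. (using
bash brace expansion to keep the line short):

zpool create tank raidz2 c1t{0..13}d0 c2t{0..13}d0 spare c3t0d0 c3t1d0

That is 28 drives in one raidz2 plus 2 hot spares.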

Thank you for your time,
Dan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL on user specified devices?

2007-06-22 Thread Eric Schrock
This feature is implemented as part of PSARC 2007/171 and will be
putback shortly.
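
For reference, the interface in that case adds a separate "log" vdev
class; the exact syntax is subject to the putback, but it is along
these lines (device names are hypothetical):

# create a pool with a dedicated intent-log device...
zpool create tank mirror c0t0d0 c0t1d0 log c0t2d0
# ...or add a log device to an existing pool later
zpool add tank log c0t3d0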

- Eric

On Thu, Jun 21, 2007 at 03:25:30PM -0700, Bryan Wagoner wrote:
> Quick question,
> 
> Are there any tunables, or is there any way to specify devices in a
> pool to use for the ZIL specifically? I've been thinking through
> architectures to mitigate performance problems on SAN and various
> other storage technologies where disabling ZIL or cache flushes has
> been necessary to make up for performance and was  wondering if there
> would be a way to specify a specific device or set of devices for the
> ZIL to use separate of the data devices so I wouldn't have to disable
> it in those circumstances. 
> 
> Thanks in advance!
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL on user specified devices?

2007-06-22 Thread Neil Perrin

Bryan,

Your timing is excellent! We've been working on this for a while now and
hopefully within the next day I'll be adding support for separate log
devices into Nevada.

I'll send out more details soon...

Neil.

Bryan Wagoner wrote:

Quick question,

Are there any tunables, or is there any way to specify devices in a pool to use for the ZIL specifically? I've been thinking through architectures to mitigate performance problems on SAN and various other storage technologies where disabling ZIL or cache flushes has been necessary to make up for performance and was  wondering if there would be a way to specify a specific device or set of devices for the ZIL to use separate of the data devices so I wouldn't have to disable it in those circumstances. 


Thanks in advance!
 
 
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss