[zfs-discuss] Raid Edition drive with RAIDZ

2007-01-18 Thread Albert Ye
Since ZFS already has error correction, would drives that limit the time a hard 
drive spends attempting to recover from errors, such as the WD RE or Seagate ES 
drives, be necessary?  Would it be safe to use standard hard drives without the 
Time Limited Error Recovery feature in a RAIDZ array?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread Al Hopper
On Thu, 18 Jan 2007,  . wrote:

> Looking around there still is not a good "these cards/motherboards" work
> list. The HCL is hardly ever updated, and it's far more geared towards
> business use than hobbyist/home use. So bearing all of that in mind I
> will need the following things: 
> 1. At least 2 gigE nics.   
> 
> 2. At least 6 SATAII ports (and at least 6 drive slots)   
> 3. Reasonable price (probably going to build it myself.)  

I can give you another working recipe - one that may not meet your needs
exactly, but one that will work nicely.  This particular recipe was not a
design based on a clean-sheet-of-paper - but rather, my Sun Ultra 20
motherboard died and I wanted a replacement system that would provide a
classic *workstation* function while also providing (extra SATA disk drive
bays for) ZFS-based storage and allow re-use of the (heavily modified)
Ultra-20 parts. This is the resulting system which is *highly*
satisfactory/stable as a desktop driving two LCD panels[0], while
providing ZFS filespace and software development facilities and currently
running 6 zones under Solaris Update 3:

Motherboard:  Asus A8N32-SLI Deluxe NB: AMD 939-pin socket
CPU:  AMD x4200 X2 dual-core CPU with 1Mb L2 cache per core [1]
RAM:  2 * Corsair TWINX2048-3200C2PT 2Gb kit (4Gb total)
Case: Antec P180 mid-tower (comes with no PSU)
ZFS disks:2 * Seagate 500GB SATA ST3500641AS (on sale @ Frys)
Boot disk:Western Digital 74Gb SATA WDC WD740GD-00FLA1 [1]
PSU:  SS-500HT (500W) [1]
Graphics: 7800GT [1][3]
CDROM/DVD:Plextor 716A slot-loader [1] (the only one to work with the
  Ultra-20 front panel)

Add ins:
- Gigabyte I-ram (GC-RAMDISK) with 4*Kingston KVR400X74C3A/1G DIMMs [1][2]
- extra Antec TriCool 120mm 3-speed fan (front panel)

The motherboard was set up using the Asus automatic overclock BIOS function
set to +10%, and the x4200 appears as an x4400 [4]

Notes:

a) Upside: The Antec P180 provides a compartment (at the bottom of the
case) which includes a 4-bay SATA disk drive cage in front of a 1" * 120mm
fan, holding 4 SATA drives mounted on silicone vibration dampers, plus the
Power Supply Unit (PSU).  This compartment has airflow which is separated
from the main motherboard area of the case.  There is extra space between
the disk drives and cooling is excellent.

b) Downside: The PSU wiring harness, coming upwards from the PSU/disk
drive compartment, blocks off access to some of the neighbouring
motherboard expansion slots.

c) Upside: There are lots of disk drive bays - aside from the ones
mentioned in the bottom PSU compartment.

d) I found that 4 of the built-in (motherboard) SATA ports worked with no
effort.  The other two SATA ports did not work on first try - but no
effort was expended trying to make them work.  Currently the 4 working
SATA ports are assigned:

1) WD740 boot drive
2) Seagate 500Gb drive (ZFS mirror)
3) Seagate 500Gb drive (ZFS mirror)
4) Gigabyte I-ram

e) Downside: The build time for this box was longer than usual.  Perhaps
because of the extra "head scratching" time required to figure out the
unusual case layout (power supply at the bottom and all wiring going
vertically upwards) and the time it took to route the wiring and cable-tie
everything neatly.
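
For reference, the two-disk ZFS mirror listed in note d) would be created
with something along these lines (a minimal sketch -- the controller/target
numbers are assumptions, check format(1M) for the real device names):

# zpool create tank mirror c2t2d0 c2t3d0
# zpool status tank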

If you decide to try this recipe, snag a 939-pin Model 165 Opteron
dual-core processor (before the supply dries up completely).  These are
known to overclock easily and reliably to 2.6GHz+.  There are many (now
old) articles describing how to do this, and the Asus A8N32-SLI Deluxe is
known as a motherboard that is easy/reliable to overclock.  Check out
newegg.com and zipzoomfly.com for the CPU.  Hurry before they are gone!!

NB: I know that the 939-pin AMD family is already EOL - but these
components are rock solid and high performance.  But buy the long-term
config you want *now*, since it will be impossible to upgrade this system
later.

Email me offlist if you have any questions.

[0] Dell 3007WFP 30" panel running at 2560x1600 and iiyama AU5131DT
running at 1600x1200, both under TwinView using the Sun/Nvidia driver
version *9746 (TwinView config provided by nvidia-config).

[1] moved from the (modified) Ultra-20

[2] used to provide test zones that can be deployed pretty quickly and can
provide fast swap space, if necessary.

[3] BFG GeForce 7800GT OC  (the OC indicates that it's overclocked)

[4] # psrinfo -v
Status of virtual processor 0 as of: 01/18/2007 23:16:43
  on-line since 01/01/2007 21:15:39.
  The i386 processor operates at 2420 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 01/18/2007 23:16:43
  on-line since 01/01/2007 21:15:45.
  The i386 processor operates at 2420 MHz,
and has an i387 compatible floating point processor.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134 

[zfs-discuss] Re: Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Frank Cusack
Toby Thain:
> 
> On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:
> 
> > Hi Frank,
> >
> > What do they [not] support?
> 
> Hotplug.

and NCQ.  and SMART.

-frank
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS patches for Solaris 10U2 ?

2007-01-18 Thread Mike Gerdts

On 1/18/07, Christophe Dupré <[EMAIL PROTECTED]> wrote:


I've been looking for the patches to get the latest ZFS bits for S10U2,
like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
The latest seems to be 108833-24.

Is there any other location I should look for the patches ?


If you have (or download) the latest installation DVD, look in the
/UpgradePatches (or similarly named) directory.

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread David

On 1/18/07, . <[EMAIL PROTECTED]> wrote:

Looking around there still is not a good "these cards/motherboards" work list. 
The HCL is hardly ever updated, and it's far more geared towards business use than 
hobbyist/home use.


Yes, this is true. This list is the best resource I have found so far,
and I have been half-heartedly looking for the last three months or
so. So to help you, and to see if I can get some answers myself, I
will post the system I am currently looking at. I have picked these
parts up from mentions on the list:

ASUS M2NPV-VM ( http://www.newegg.com/Product/Product.asp?item=N82E16813131014 )

AMD Sempron 64 2800+ (
http://www.newegg.com/Product/Product.asp?item=N82E16819104245 )

SYBA SD-SATA-4P PCI SATA Controller Card (
http://www.newegg.com/product/Product.asp?item=N82E16815124020 )

The SATA card is only a SATA I card, but do I care? Do the ports on
the motherboard work? (I think I saw somewhere they do, in PATA
compatibility mode.)

As you can see I am going for bottom of the line, but it is just a box
to play around with, and a gigE nas box if things work out well.

So list, what do you think?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Martin
> Jeremy Teo wrote:
> > On the issue of the ability to remove a device from a zpool, how
> > useful/pressing is this feature? Or is this more along the line of
> > "nice to have"?
>
> This is a pretty high priority.  We are working on it.

Good news!  Where is the discussion on the best approach to take?

> On 18/01/2007, at 9:55 PM, Jeremy Teo wrote:
> The most common reason is migration of data to new storage
> infrastructure. The experience is often that the growth in disk size
> allows the new storage to consist of fewer disks/LUNs than the old.

I agree completely.  No matter how wonderful your current FC/SAS/whatever 
cabinet is, at some point in the future you will want to migrate to another 
newer/faster array with a better/faster interface, probably on fewer disks.  
The "just add another top level vdev" approach to growing a RAIDZ pool seems a 
bit myopic.

> On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> I'd consider it a lower priority than say, adding a drive to a RAIDZ
> vdev, but yes, being able to reduce a zpool's size by removing devices
> is quite useful, as it adds a considerable degree of flexibility that
> (we) admins crave.

These two items (removing a vdev and restriping an array) are probably closely 
related.  Either operation will likely center around some metaslab_evacuate() 
routine which empties a metaslab and puts the data onto another one.

Evacuating a vdev could be no more than evacuating all of the metaslabs in the 
vdev.

Restriping (adding/removing a data/parity disk) could be no more than 
progressively evacuating metaslabs with the old stripe geometry and writing the 
data to metaslabs with the new stripe geometry.  The biggest challenge while 
restriping might be getting the read routine to figure out on-the-fly which 
geometry is in use for any particular stripe.  Even so, this shouldn't be too 
big of a challenge: one geometry will checksum correctly and the other will not.

Marty
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread mike

I get that part. I think I asked that question before (although not as
direct) - basically you're talking about the ability to shrink volumes
and/or disable/change the mirroring/redundancy options if there is
space available to account for it.

If this was allowed, this would also allow for a conversion from RAIDZ
to RAIDZ2, or vice-versa then, correct?

On 1/18/07, Erik Trimble <[EMAIL PROTECTED]> wrote:

Mike,

I think you are missing the point.  What we are talking about is
removing a drive from a zpool, that is, reducing the zpool's total
capacity by a drive.   Say you have 4 drives of 100GB in size,
configured in a striped mirror, capacity of 200GB usable.  We're
discussing the case where if the zpool's total used space is under
100GB, we could remove the second vdev (consisting of a mirror) from the
zpool, and have ZFS evacuate all the data from the to-be-removed vdev
before we actually remove the disks (or, maybe we simply want to
reconfigure them into another zpool).  In this case, after the drive
removals, the zpool would be left with a 100GB capacity, and be a
simple 2-drive mirror.


What you are talking about is replacement of a drive, whether or not it
is actually bad. In your instance, the zpool capacity size remains the
same, and it will return to optimal performance when a new drive is
inserted (and, no, there is no difference between a manual and automatic
"removal" in the case of marking a drive bad for replacement).

-Erik

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread Chris Csanady

2007/1/18, . <[EMAIL PROTECTED]>:

2. What consumer level SATAII chipsets work. 4-ports onboard is fine for now 
since I can always add a card later. I will need at least four ports to start. 
pci-e cards are highly preferred since pci-x is expensive and going to become 
rarer. (mark my words)


Something worth mentioning is that AHCI SATA controllers will be
supported in the next Nevada build.  As such, I would probably look at
Intel boards instead.

Nvidia SATA support has been long awaited, but there is no clear
indication of when, if ever, it will arrive.  It will still work, but
with no NCQ or hot swap.

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread mike

Couldn't this be considered a compatibility list that we can trust for
OpenSolaris and ZFS?
http://www.sun.com/io_technologies/

I've been looking at it for the past few days. I am looking for eSATA
support options - more details below.

Only 2 devices on the list show support for eSATA, both are PCI-X. One
uses Infiniband, one uses an eSATA multiplier cable. I wish PCI
Express would get in there. That would open up my options more... I
really don't want to be limited only to internal SATA; I want to use
these 5 drive eSATA-driven enclosures, like one of the below:

http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-500
http://fwdepot.com/thestore/product_info.php/products_id/1586
http://fwdepot.com/thestore/product_info.php/products_id/1325
http://fwdepot.com/thestore/product_info.php/products_id/1578 (a 10
drive one [2 eSATA ports used])

This would effectively, if I understand it right, allow for 20 drives
per controller card (4 ports x 5 drives apiece)

However, I don't think OpenSolaris/Solaris supports these unless the
Addonics eSATA PCI-X adapter supports them. I have not figured that
one out yet. All I know is I want ZFS.

This would be for home usage, I want something as small and quiet as
possible. Otherwise I would look into getting 3u type drive shelves
(which would be noisy, etc.)

I have a couple other friends as well all interested in this same
idea. Just waiting around for confirmation that these combinations
work, or for the hardware support to grow...

Anyone have any additional thoughts? I looked at the SATA thread
already, didn't help me much though.

On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:


You must have just missed the "What SATA controllers are people using
for ZFS?" thread.  Not a list, but you can probably find similar components
to what was recommended.  I would be hesitant myself to use any other
SATA card than what was recommended, however motherboard choice is probably
fairly wide open.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread Frank Cusack

On January 18, 2007 6:27:14 PM -0800 "." <[EMAIL PROTECTED]> wrote:

Looking around there still is not a good "these cards/motherboards" work
list.


You must have just missed the "What SATA controllers are people using
for ZFS?" thread.  Not a list, but you can probably find similar components
to what was recommended.  I would be hesitant myself to use any other
SATA card than what was recommended, however motherboard choice is probably
fairly wide open.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Cheap ZFS homeserver.

2007-01-18 Thread .
So after toying around with some stuff a few months back I got bogged down and 
set this project aside for a while. Time to revisit.  

Looking around there still is not a good "these cards/motherboards" work list. 
The HCL is hardly ever updated, and it's far more geared towards business use 
than hobbyist/home use. So bearing all of that in mind I will need the 
following things:


1. At least 2 gigE nics.   
2. At least 6 SATAII ports (and at least 6 drive slots)   
3. Reasonable price (probably going to build it myself.)  

I'm not worried about hotswapping. I want to make sure the box is going to be 
upgradeable with decent priced parts for a while (so PCI is out). My current 
fileserver is 6 years old. Out of space (or close enough) and starting to 
become a little less stable than I would like. So without getting into all of 
the gory details, the things I am stuck on are the following:

1. What consumer level motherboards (not a $400 server board, I don't need that) 
and/or chipsets does OpenSolaris support at this point? I don't want "if you 
compile this 3 week old version with these four changes it might work". I want 
"this works". I use Solaris at work on all sorts of Sun hardware, but of course 
I can't afford Sun hardware for my house.

2. What consumer level SATAII chipsets work. 4-ports onboard is fine for now 
since I can always add a card later. I will need at least four ports to start. 
pci-e cards are highly preferred since pci-x is expensive and going to become 
rarer. (mark my words)

So I was hoping that this board would work:
http://www.gigabyte.com.tw/Products/Motherboard/Products_Overview.aspx?ClassValue=Motherboard&ProductID=2287&ProductName=GA-M57SLI-S4

I'm open to suggestions. I'd prefer to use Solaris and ZFS, but if it cannot be 
easily done I will stick with Linux.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Erik Trimble
Mike,

I think you are missing the point.  What we are talking about is
removing a drive from a zpool, that is, reducing the zpool's total
capacity by a drive.   Say you have 4 drives of 100GB in size,
configured in a striped mirror, capacity of 200GB usable.  We're
discussing the case where if the zpool's total used space is under
100GB, we could remove the second vdev (consisting of a mirror) from the
zpool, and have ZFS evacuate all the data from the to-be-removed vdev
before we actually remove the disks (or, maybe we simply want to
reconfigure them into another zpool).  In this case, after the drive
removals, the zpool would be left with a 100GB capacity, and be a
simple 2-drive mirror. 
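
(To make that concrete: the configuration above and the removal under
discussion would look roughly like this -- a sketch with made-up device
names, where the remove step is the hypothetical operation this thread is
asking for, not something ZFS supports today:

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
# zpool remove tank mirror-1      <- evacuate, then drop the second mirror
)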


What you are talking about is replacement of a drive, whether or not it
is actually bad. In your instance, the zpool capacity size remains the
same, and it will return to optimal performance when a new drive is
inserted (and, no, there is no difference between a manual and automatic
"removal" in the case of marking a drive bad for replacement).

-Erik


On Thu, 2007-01-18 at 18:01 -0800, mike wrote:
> what is the technical difference between forcing a removal and an
> actual failure?
> 
> isn't it the same process? except one is manually triggered? i would
> assume the same resilvering process happens when a usable drive is put
> back in...
> 
> On 1/18/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
> > Not quite.  I suspect you are thinking about drive replacement rather
> > than removal.
> >
> > Drive replacement is already supported in ZFS and the task involves
> > rebuilding data on the disk from data available elsewhere.  Drive
> > removal involves rebalancing data from the target drive to other
> > disks.  The latter is non-trivial.
> >
> >
> > --
> > Just me,
> > Wire ...
> >
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread mike

what is the technical difference between forcing a removal and an
actual failure?

isn't it the same process? except one is manually triggered? i would
assume the same resilvering process happens when a usable drive is put
back in...

On 1/18/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:

Not quite.  I suspect you are thinking about drive replacement rather
than removal.

Drive replacement is already supported in ZFS and the task involves
rebuilding data on the disk from data available elsewhere.  Drive
removal involves rebalancing data from the target drive to other
disks.  The latter is non-trivial.


--
Just me,
Wire ...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Wee Yeh Tan

On 1/19/07, mike <[EMAIL PROTECTED]> wrote:

Would this be the same as failing a drive on purpose to remove it?

I was under the impression that was supported, but I wasn't sure if
shrinking a ZFS pool would work though.


Not quite.  I suspect you are thinking about drive replacement rather
than removal.

Drive replacement is already supported in ZFS and the task involves
rebuilding data on the disk from data available elsewhere.  Drive
removal involves rebalancing data from the target drive to other
disks.  The latter is non-trivial.


--
Just me,
Wire ...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Jason J. W. Williams

Hi Toby,

Thanks for the links. That's interesting. I assume this goes forward
to the M2s. Glad hot-swap isn't a requirement where we use them.

Best Regards,
Jason

On 1/18/07, Toby Thain <[EMAIL PROTECTED]> wrote:


On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:

> Hi Frank,
>
> What do they [not] support?

Hotplug.
See, inter alia,
http://groups.google.com/group/comp.unix.solaris/msg/56e9e341607aa984
http://groups.google.com/group/comp.unix.solaris/msg/9c0afc2668207d36

--Toby

> We've had some various service issues on the
> NICs on the original X2100...which they gave us some flack on because
> we were running Gentoo. Once we proved it on Solaris 10 Update 2 (at
> the time) they got on board with the problem.
>
> Best Regards,
> Jason
>
> On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
>> On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
>> <[EMAIL PROTECTED]> wrote:
>> > Sun doesn't support the X2100 SATA controller on Solaris 10? That's
>> > just bizarre.
>>
>> Not only that, their marketing is misleading (at best) on the issue.
>>
>> -frank
>>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Toby Thain


On 18-Jan-07, at 9:55 PM, Jason J. W. Williams wrote:


Hi Frank,

What do they [not] support?


Hotplug.
See, inter alia,
http://groups.google.com/group/comp.unix.solaris/msg/56e9e341607aa984
http://groups.google.com/group/comp.unix.solaris/msg/9c0afc2668207d36

--Toby


We've had some various service issues on the
NICs on the original X2100...which they gave us some flack on because
we were running Gentoo. Once we proved it on Solaris 10 Update 2 (at
the time) they got on board with the problem.

Best Regards,
Jason

On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:

On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
> Sun doesn't support the X2100 SATA controller on Solaris 10? That's
> just bizarre.

Not only that, their marketing is misleading (at best) on the issue.

-frank


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Frank Cusack

Please don't top-post.  It's annoying.

On January 18, 2007 4:55:35 PM -0700 "Jason J. W. Williams" 
<[EMAIL PROTECTED]> wrote:

On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:

On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
> Sun doesn't support the X2100 SATA controller on Solaris 10? That's
> just bizarre.

Not only that, their marketing is misleading (at best) on the issue.


What do they support?


Marvell 88sx family.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2007-01-18 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 01/01 - 01/15
=

Size of all threads during period:

Thread size Topic
--- -
 74   RAIDZ2 vs. ZFS RAID-10
 42   ZFS related (probably) hangs due to memory exhaustion(?) with 
snv53
 37   Limit ZFS Memory Utilization
 31   Solid State Drives?
 25   Adding disk to a RAID-Z?
 24   NFS and ZFS, a fine combination
 16   zfs list and snapshots..
 14   Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?
 13   Thoughts on ZFS Secure Delete - without using Crypto
 10   hard-hang on snapshot rename
 10   ZFS over NFS extra slow?
 10   ZFS direct IO
 10   ZFS and HDLM 5.8 ... does that coexist well ?
 10   Puzzling ZFS behavior with COMPRESS option
 10   Eliminating double path with ZFS's volume manager
  8   question about self healing
  8   Multiple Read one Writer Filesystem
  8   HOWTO make a mirror after the fact
  7   optimal zpool layout?
  7   What SATA controllers are people using for ZFS?
  6   zfs recv
  6   zfs pool in degraded state,
  6   use the same zfs filesystem with differnet mountpoint
  6   odd versus even
  6   Solaris crashes when ZFS device disappears
  6   Can someone explain this acl behavior?
  6   Can ZFS solve my problem?
  5   zfs clones
  5   ZFS and storage array]
  5   Replacing a drive in a raidz2 group
  5   Help understanding some benchmark results
  5   Distributed FS
  4   zfs pool in degraded state, zpool offline fails with no valid 
replicas
  4   ZFS entry in /etc/vfstab
  4   ZFS --> Grub shell
  4   On the SATA framework
  3   using veritas dmp with ZFS (but not vxvm)
  3   ZFS remote mirroring
  3   ZFS reference
  3   ZFS on my iPhone?
  3   ZFS Hot Spare Behavior
  3   Samba and ZFS ACL Question
  2   zfs umount -a in a global zone
  2   iSCSI on a single interface?
  2   Why is "+" not allowed in a ZFS file system name ?
  2   Thoughts on ZFS SecureDelete - without usingCrypto
  2   Snapshots impact on performance
  2   Seven questions for a newbie
  2   Scrubbing on active zfs systems (many snaps per day)
  2   SXCR 55
  2   Resizing lun.
  2   Remote Replication
  2   Raidz and self-healing...
  2   Mounting a ZFS clone
  2   Gerard wrote:
  2   Differences between ZFS and UFS.
  2   Checksum errors...
  1   iPod WAS::ZFS on my iPhone?
  1   blog: space vs MTTDL
  1   another blog: space vs U_MTBSI
  1   Rebel: 'We aided bin Laden escape'
  1   OT: How does them coll ZFS demos are made
  1   Noemi
  1   N.J. suspected as source of stench MORE ...
  1   Kathy wrote:
  1   I have a disk wedged in a zpool.
  1   Fwd: Rebel: 'We aided bin Laden escape'
  1   Extremely poor ZFS perf and other observations
  1   Eliminating double path with ZFS's volumemanager
  1   Difference between ZFS checksum algorithms


Posting activity by person for period:

# of posts  By
--   --
 42   rmilkowski at task.gda.pl (robert milkowski)
 40   jasonjwwilliams at gmail.com (jason j. w. williams)
 29   stric at acc.umu.se (tomas =?iso-8859-1?q?=d6gren?=)
 27   richard.elling at sun.com (richard elling)
 19   wade.stuart at fallon.com (wade stuart)
 14   matthew.ahrens at sun.com (matthew ahrens)
 14   eric at ijack.net (eric hill)
 14   darren.moffat at sun.com (darren j moffat)
 13   anton.rang at sun.com (anton b. rang)
 12   mark.maybee at sun.com (mark maybee)
 11   ddunham at taos.com (darren dunham)
 11   dclarke at blastwave.org (dennis clarke)
  9   tmcmahon2 at yahoo.com (torrey mcmahon)
  9   roch.bourbonnais at sun.com (roch - pae)
  9   peter.schuller at infidyne.com (peter schuller)
  8   sanjeev.bagewadi at sun.com (sanjeev bagewadi)
  8   rang at acm.org (anton b. rang)
  8   neil.perrin at sun.com (neil perrin)
  8   bart.smaalders at sun.com (bart smaalders)
  8   anantha.srirama at cdc.hhs.gov (anantha n. srirama)
  7   eric.kustarz at sun.com (eric kustarz)
  7   bill.moore at sun.com (bill moore)
  6   toby at smartgames.ca (toby thain)
 

Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Jason J. W. Williams

Hi Frank,

What do they support? We've had some various service issues on the
NICs on the original X2100...which they gave us some flack on because
we were running Gentoo. Once we proved it on Solaris 10 Update 2 (at
the time) they got on board with the problem.

Best Regards,
Jason

On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:

On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams"
<[EMAIL PROTECTED]> wrote:
> Sun doesn't support the X2100 SATA controller on Solaris 10? That's
> just bizarre.

Not only that, their marketing is misleading (at best) on the issue.

-frank


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Frank Cusack
On January 18, 2007 4:45:49 PM -0700 "Jason J. W. Williams" 
<[EMAIL PROTECTED]> wrote:

Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.


Not only that, their marketing is misleading (at best) on the issue.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Jason J. W. Williams

Hi Frank,

Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.

-J

On 1/18/07, Frank Cusack <[EMAIL PROTECTED]> wrote:

THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well qualified white box solutions for S10.

I strongly prefer to buy Sun kit, but I am done waiting for Sun to support
the SATA controller on the x2100.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-01-18 Thread Frank Cusack

THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well qualified white box solutions for S10.

I strongly prefer to buy Sun kit, but I am done waiting for Sun to support
the SATA controller on the x2100.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS patches for Solaris 10U2 ?

2007-01-18 Thread Christophe Dupré
Of course, I meant 118833, not 108833... :-(

Christophe Dupré wrote:
> I've been looking for the patches to get the latest ZFS bits for S10U2,
> like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
> The latest seems to be 108833-24.
> 
> Is there any other location I should look for the patches ?
> 
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS patches for Solaris 10U2 ?

2007-01-18 Thread Christophe Dupré

I've been looking for the patches to get the latest ZFS bits for S10U2,
like kernel patch 108833-30, but I can't find them on sunsolve.sun.com.
The latest seems to be 108833-24.

Is there any other location I should look for the patches ?


-- 
Christophe Dupré
Administrateur Unix et Réseau Sénior
(514) 931-4433 x3078
www.accovia.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question: zfs code size statistics

2007-01-18 Thread Eric Schrock
On Thu, Jan 18, 2007 at 11:37:26PM +0100, Henk Langeveld wrote:
> When ZFS was first announced, one argument was how ZFS complexity and
> code size were actually significantly less than, for instance, UFS+SVM.
> 
> Over a year has passed, and I wonder how code size has grown since, with
> all of the features that have been added.
> 
> Has anyone kept track of this?  Would it be easy to generate such statistics
> from the code repository?

The attached script yields the following result on the current gate:

-
  UFS: kernel= 47188   user= 40045   total= 87233
  SVM: kernel= 77711   user=162522   total=240233
TOTAL: kernel=124899   user=202567   total=327466
-
  ZFS: kernel= 59813   user= 27932   total= 87745
-

Note that this doesn't include ZFS-related fmd plugins or java APIs,
since UFS has no equivalent.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
#!/bin/ksh

if [ "z$GATE" == "z" ]; then
GATE=/ws/onnv-gate
fi
NT=Codemgr_wsdata/nametable

cd $GATE
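
# File lists come from the workspace nametable ($NT); the egrep patterns
# below select kernel and userland sources for UFS, SVM, and ZFS, and
# wc -l totals their line counts.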

kfufs=`egrep ^usr/src/uts/common/.*/ufs $NT | nawk '{print $1}'`
kfsvm=`egrep '^usr/src/uts/common/(io|sys)/lvm/' $NT | nawk '{print $1}'`
kfufs="$kfufs usr/src/uts/common/os/bio.c usr/src/uts/common/os/fbio.c"

ufufs=`egrep ^usr/src/cmd/fs.d/ufs/ $NT | nawk '{print $1}'`
ufsvm=`egrep '^usr/src/(cmd|lib)/lvm/' $NT | nawk '{print $1}'`

klufs=`cat $kfufs | wc -l`
klsvm=`cat $kfsvm | wc -l`

ulufs=`cat $ufufs | wc -l`
ulsvm=`cat $ufsvm | wc -l`

((tlufs=klufs+ulufs))
((tlsvm=klsvm+ulsvm))

((tk=klufs+klsvm))
((tu=ulufs+ulsvm))
((tt=tk+tu))

kfzfs=`egrep '^usr/src/uts/common/fs/zfs/' $NT | nawk '{print $1}'`
ufzfs=`egrep '^usr/src/(common/|cmd/|lib/lib)(zfs|zpool|zdb|zinject)/' $NT |
nawk '{print $1}'`
ufzfs_fm=`egrep ^usr/src/cmd/fm/modules/common/zfs.* $NT | nawk '{print $1}'`

klzfs=`cat $kfzfs | wc -l`
ulzfs=`cat $ufzfs $ufzfs_fm | wc -l`

((tlzfs=klzfs+ulzfs))

printf "-\n"

printf "  UFS: kernel=%6d   user=%6d   total=%6d\n" $klufs $ulufs $tlufs
printf "  SVM: kernel=%6d   user=%6d   total=%6d\n" $klsvm $ulsvm $tlsvm
printf "TOTAL: kernel=%6d   user=%6d   total=%6d\n" $tk $tu $tt

printf "-\n"

printf "  ZFS: kernel=%6d   user=%6d   total=%6d\n" $klzfs $ulzfs $tlzfs

printf "-\n"
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question: zfs code size statistics

2007-01-18 Thread Henk Langeveld

When ZFS was first announced, one argument was how ZFS complexity and
code size were actually significantly less than, for instance, UFS+SVM.

Over a year has passed, and I wonder how code size has grown since, with
all of the features that have been added.

Has anyone kept track of this?  Would it be easy to generate such statistics
from the code repository?

Curious,

Henk
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Shannon Roddy
Celso wrote:
> Both removing disks from a zpool and modifying raidz arrays would be very 
> useful. 

Add my vote for this.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Celso
Both removing disks from a zpool and modifying raidz arrays would be very 
useful. 

I would also still love to have ditto data blocks. Is there any progress on 
this?

Celso.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread mike

Would this be the same as failing a drive on purpose to remove it?

I was under the impression that was supported, but I wasn't sure if
shrinking a ZFS pool would work though.

On 1/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > This is a pretty high priority.  We are working on it.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Wade . Stuart






[EMAIL PROTECTED] wrote on 01/18/2007 01:29:23 PM:

> On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> > Jeremy Teo wrote:
> > > On the issue of the ability to remove a device from a zpool, how
> > > useful/pressing is this feature? Or is this more along the line of
> > > "nice to have"?
> >
> > This is a pretty high priority.  We are working on it.
> >
> > --matt
>
> I'd consider it a lower priority than say, adding a drive to a RAIDZ
> vdev, but yes, being able to reduce a zpool's size by removing devices
> is quite useful, as it adds a considerable degree of flexibility that
> (we) admins crave.
>

I would be surprised if much of the code to allow removal does not bring
device adds closer to reality -- assuming device removal migrates data and
resilvers to optimal stripe online.

-Wade

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Erik Trimble
On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> Jeremy Teo wrote:
> > On the issue of the ability to remove a device from a zpool, how
> > useful/pressing is this feature? Or is this more along the line of
> > "nice to have"?
> 
> This is a pretty high priority.  We are working on it.
> 
> --matt

I'd consider it a lower priority than say, adding a drive to a RAIDZ
vdev, but yes, being able to reduce a zpool's size by removing devices
is quite useful, as it adds a considerable degree of flexibility that
(we) admins crave.


-- 
Erik Trimble
Java System Support
Mailstop:  usca14-102
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
Rats, didn't proof accurately. For "UFS", I meant NFS.

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
Sorry, I should have qualified that "effective" better. I was specifically 
speaking in terms of Solaris and price. For companies without a SAN (especially 
using Linux), something like a NetApp Filer using UFS is the way to go, I 
realize. If you're running Solaris, the cost of QFS becomes a major factor. If 
you have a SAN, then getting a NetApp Filer seems silly. And so on.

Oracle has suggested RAW disk for some time, I think. (Some?) DBA's don't seem 
to like it largely because they cannot see the files, and so on. ASM still has 
some of these limitations, but it's getting better, and DBA's are starting to 
get used to the new paradigms. If I remember a conversation last year 
correctly, OEM will become the window into some of these ideas. Once ASM has 
industry acceptance on a large scale, then yes, making file systems perform 
well especially for Oracle databases will be chasing the wind. But, that may be 
a while down the road. I don't know, my crystal ball got cracked during the 
last comet transition.  ;-)

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Matthew Ahrens

Jeremy Teo wrote:

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?


This is a pretty high priority.  We are working on it.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-18 Thread Richard Elling

Rainer Heilke wrote:

If you plan on RAC, then ASM makes good sense.  It is
unclear (to me anyway)
if ASM over a zvol is better than ASM over a raw LUN.


Hmm. I thought ASM was really the _only_ effective way to do RAC, 
but then, I'm not a DBA (and don't want to be ;-)  We'll be just 
using raw LUN's. While the zvol idea is interesting, the DBA's 
are very particular about making sure the environment is set up 
in a way Oracle will support (and not hang up when we have a problem).


ASM is relatively new technology. Traditionally, OPS and RAC were
built over raw devices, directly or as represented by cluster-aware
logical volume managers.  DBAs tend to not like raw, so Sun Cluster
(Solaris Cluster) supports RAC over QFS which is a very good solution.
Some Sun Cluster customers run RAC over NFS, which also works
surprisingly well.

Meanwhile, Oracle continues to develop ASM to appease the DBAs who
want filesystem-like solutions.  IMHO, in the long run, Oracle will
transition many customers to ASM and this means that it probably
isn't worth the effort to make a file system be the best for Oracle,
at the expense of other features and workloads.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to reconfigure ZFS?

2007-01-18 Thread Dana H. Myers
Karen Chau wrote:
> How do you reconfigure ZFS on the server after an OS upgrade?   I have a
> ZFS pool on a 6130 storage array.
> After the upgrade the data on the storage array is still intact, but the ZFS
> configuration is gone due to the new OS.
> 
> Do I use the same commands/procedure to recreate the zpool, ie.
> # zpool create canary raidz c2t1d0 c2t2d0 c2t3d0
> 
> Does the create command destroy data on the disks?
> 
> --OR--
> 
> Should I restore /etc/*zfs*/zpool.cache on the new OS (assuming we have
> a good backup)??

Have you first tried 'zpool import'?  You'll have to use the '-f'
option if you didn't export the pools before the upgraded OS installation.
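
A minimal sketch of that sequence (the pool name is taken from the create
command quoted above; everything else is discovered from the disk labels):

# zpool import            <- lists pools found on the attached devices
# zpool import -f canary  <- import it even though it was never exported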

Dana
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to reconfigure ZFS?

2007-01-18 Thread Karen Chau
How do you reconfigure ZFS on the server after an OS upgrade?   I have a 
ZFS pool on a 6130 storage array.
After the upgrade the data on the storage array is still intact, but the ZFS 
configuration is gone due to the new OS.


Do I use the same commands/procedure to recreate the zpool, ie.
# zpool create canary raidz c2t1d0 c2t2d0 c2t3d0

Does the create command destroy data on the disks?

--OR--

Should I restore /etc/*zfs*/zpool.cache on the new OS (assuming we have 
a good backup)??


Thanks,
-Karen

--

NOTICE:  This email message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information.  Any unauthorized
review, use, disclosure or distribution is prohibited.  If you are not the
intended recipient, please contact the sender by reply email and destroy all
copies of the original message.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: Heavy writes freezing system

2007-01-18 Thread Bev Crair

Rainer,
Have you considered looking for a patch?  If you have the supported 
version(s) of Solaris (which it sounds like you do), this may already be 
available in a patch.

Bev.

Rainer Heilke wrote:

Thanks for the detailed explanation of the bug. This makes it clearer to us as 
to what's happening, and why (which is something I _always_ appreciate!). 
Unfortunately, U4 doesn't buy us anything for our current problem.

Rainer
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
> If you plan on RAC, then ASM makes good sense.  It is
> unclear (to me anyway)
> if ASM over a zvol is better than ASM over a raw LUN.

Hmm. I thought ASM was really the _only_ effective way to do RAC, but then, I'm 
not a DBA (and don't want to be ;-)  We'll be just using raw LUN's. While the 
zvol idea is interesting, the DBA's are very particular about making sure the 
environment is set up in a way Oracle will support (and not hang up when we 
have a problem).

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
Thanks for the detailed explanation of the bug. This makes it clearer to us as 
to what's happening, and why (which is something I _always_ appreciate!). 
Unfortunately, U4 doesn't buy us anything for our current problem.

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
> > This problem was fixed in snv_48 last September
>  and will be
> > in S10_U4.

U4 doesn't help us any. We need the fix now. :-(  By the time U4 is out, we may 
even be finished with (or certainly well along on) our RAC/ASM migration, and this 
whole issue will be moot.

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Heavy writes freezing system

2007-01-18 Thread Rainer Heilke
> Bag-o-tricks-r-us, I suggest the following in such a case:
> 
> - Two ZFS pools
>   - One for production
> - One for Education

The DBA's are very resistant to splitting our whole environments. There are 
nine on the test/devl server! So, we're going to put the DB files and redo logs 
on separate (UFS with directio) LUN's. Binaries and backups will go onto two 
separate ZFS LUN's. With production, they can do their cloning at night to 
minimize impact. Not sure what they'll do on test/devl. The two ZFS file 
systems will probably also be separate zpools (political as well as juggling 
Hitachi disk space reasons).

BTW, it wasn't the storage guys who decided the "one filesystem to rule them 
all" strategy, but my predecessors. It was part of the move from Clarion arrays 
to Hitachi. The storage folks know about, understand, and agree with us when we 
talk about these kinds of issues (at least, they do now). We've pushed the 
caching and other subsystems often enough to make this painfully clear.

> Another thought is while ZFS works out its kinks why
> not use the BCV or ShadowCopy or whatever IBM calls
> it to create Education instance. This will reduce a
> tremendous amount of I/O.

This means buying more software to alleviate a short-term problem (with RAC, 
the whole design will be different, including moving to ASM). We have RMAN and 
OEM already, so this argument won't fly.

> BTW, I'm curious what application using Oracle is
> creating more than a million files?

Oracle Financials. The application includes everything but the kitchen sink 
(but the bathroom sink is there!).

Thanks for all of your feedback and suggestions. They all sound bang on. If we 
could just get all the pieces in place to move forward now, I think we'll be 
OK. One big issue for us will be finding the Hitachi disk space--we're pretty 
full-up right now. :-(

Rainer
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Anantha N. Srirama
I can vouch for this situation. I had to go through a long maintenance to 
accomplish the following:

- 50 x 64GB drives in a zpool; needed to separate out 15 of them due to 
performance issues. There was no need to increase storage capacity.

Because I couldn't yank 15 drives from the existing pool to create a UFS 
filesystem, I had to evacuate the entire 50-disk pool, recreate a new pool 
and the UFS filesystem, and then repopulate the filesystems.

I think this feature will add to the adoption rate of ZFS. However, I feel that 
this shouldn't be at the top of the 'to-do' list. I'll trade this feature for 
some of the performance enhancements that've been discussed on this group.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Boyd Adamson

On 18/01/2007, at 9:55 PM, Jeremy Teo wrote:

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?


Assuming we're talking about removing a top-level vdev..

I introduce new sysadmins to ZFS on a weekly basis. After 2 hours of  
introduction this is the single feature that they most often realise  
is "missing".


The most common reason is migration of data to new storage  
infrastructure. The experience is often that the growth in disk size  
allows the new storage to consist of fewer disks/LUNs than the old.


I can see that it will become increasingly needed as more and more  
storage goes under ZFS. Sure, we can put 256 quadrillion zettabytes  
in the pool, but if you accidentally add a disk to the wrong pool or  
with the wrong redundancy you have a long long wait for your tape  
drive :)


Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI on a single interface?

2007-01-18 Thread Dick Davies

On 15/01/07, Rick McNeal <[EMAIL PROTECTED]> wrote:


On Jan 15, 2007, at 8:34 AM, Dick Davies wrote:



> Hi, are there currently any plans to make an iSCSI target created by
> setting shareiscsi=on on a zvol
> bindable to a single interface (setting tpgt or acls)?



We're working on some more interface stuff for setting up various
properties like TPGT's and ACL for the ZVOLs which are shared through
ZFS.



Now that I've knocked off a couple of things that have been on my
plate I've got room to add some more. These definitely rank right up
towards the top.


Great news.

For the record, the reason I asked is that we have an iSCSI target host with
2 NICs, and for some reason clients were attempting to connect to the targets
on the private interface instead of the one they were doing discovery on
(which I thought was a bit odd).
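
(For reference, a shareiscsi target like the one named below is set up
along these lines -- a sketch, with the 4 GB size inferred from the name:

# zfs create -V 4g tank/iscsi/second4gb
# zfs set shareiscsi=on tank/iscsi/second4gb
)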

I tried creating a TPGT with iscsitadm, which seemed to work:

vera ~ # iscsitadm list tpgt -v
TPGT: 1
   IP Address: 131.251.5.8

but adding a ZFS iscsi target into it gives me:

 vera ~ # iscsitadm modify target -p 1 tank/iscsi/second4gb
 iscsitadm: Error Can't call daemon


which is a pity (I'm assuming it can't find the targets to modify).
I've had to go back to just using iscsitadm due to time pressures, but
will be watching any progress closely.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Dick Davies

On 18/01/07, Jeremy Teo <[EMAIL PROTECTED]> wrote:

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?


It's very useful if you accidentally create a concat rather than mirror
of an existing zpool. Otherwise you have to buy another drive :)
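
Concretely, the slip in question (device names are placeholders):

# zpool add tank c0t1d0              <- what got typed: a new top-level
                                        vdev, with no way to undo it today
# zpool attach tank c0t0d0 c0t1d0    <- what was meant: mirror the
                                        existing disk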


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread przemolicc
On Thu, Jan 18, 2007 at 06:55:39PM +0800, Jeremy Teo wrote:
> On the issue of the ability to remove a device from a zpool, how
> useful/pressing is this feature? Or is this more along the line of
> "nice to have"?

If you think "remove a device from a zpool" = "to shrink a pool" then
it is really usefull. Definitely really usefull.
Do you need any example ?


przemol

--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Jeremy Teo

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Heavy writes freezing system

2007-01-18 Thread Roch - PAE

Jason J. W. Williams writes:
 > Hi Anantha,
 > 
 > I was curious why segregating at the FS level would provide adequate
 > I/O isolation? Since all FS are on the same pool, I assumed flogging a
 > FS would flog the pool and negatively affect all the other FS on that
 > pool?
 > 
 > Best Regards,
 > Jason
 > 

Good point.  If the problem is

6413510 zfs: writing to ZFS filesystem slows down fsync() on other files

Then the segregation into 2 filesystems on the same pool will
help.

But if the problem is more like

6429205 each zpool needs to monitor its throughput and throttle heavy 
writers

then 2 FSes won't help.  2 pools probably would, though.

-r


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-18 Thread Roch - PAE

If some aspect of the load is writing a large amount of data
into the pool (through the memory cache, as opposed to the
ZIL) and that leads to a frozen system, I think that a
possible contributor is:

6429205  each zpool needs to monitor its throughput and throttle heavy 
writers

-r

Anantha N. Srirama writes:
 > Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting 
 > an incorrect bug.
 >  
 >  
 > This message posted from opensolaris.org
 > ___
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] VxVM volumes in a zpool.

2007-01-18 Thread Tan Shao Yi

Hi,

Was wondering if anyone had experience working with VxVM volumes in a 
zpool. We are using VxVM 5.0 on a Solaris 10 11/06 box. The volume is on a 
SAN, with two FC HBAs connected to a fabric.
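
(For reference, a pool on top of a VxVM volume like this is created by
handing zpool the volume's block device path -- a sketch, using the names
from the zpool status output below:

# zpool create tank /dev/vx/dsk/mailstore/storage
)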


The setup works, but we observe a very strange message on bootup. The 
bootup screen is attached at the bottom of this e-mail.


Strangely, with all the bootup errors, zpool continues to work:


zpool status

  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  /dev/vx/dsk/mailstore/storage  ONLINE   0 0 0

errors: No known data errors


Could it be the sequence in which /dev/vx/dsk is detected by zpool?

Thanks in advance.

Cheers,
Tan Shao Yi



NOTICE: VxVM vxdmp V-5-0-34 added disk array 110352722, datype = FAS920

NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk

NOTICE: VxVM vxdmp V-5-3-1700 dmpnode 130/0x0 has migrated from enclosure 
FAKE_ENCLR_SNO to enclosure DISKS

WARNING: VxVM vxio V-5-0-181 Illegal vminor encountered
WARNING: VxVM vxio V-5-0-181 Illegal vminor encountered
checking ufs filesystems
/dev/md/rdsk/d5: is logging.

servername console login:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Thu Jan 18 15:19:47 SGT 2007
PLATFORM: SUNW,Sun-Fire-V240, CSN: -, HOSTNAME: recess1
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 3c5a5896-df1d-6bf7-85c0-8337f788e925
DESC: A ZFS pool failed to open.  Refer to http://sun.com/msg/ZFS-8000-CS 
for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
restore from backup
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss