Regarding the bold statement
There is no NFS over ZFS issue
What I mean here is that, if you _do_ encounter a
performance pathology not linked to the NVRAM storage/cache-flush
issue, then you _should_ complain, or better, get someone
to do an analysis of the situation.
One
Hi all,
I am after some help/feedback to the subject issue explained below.
We are in the process of migrating a big DB2 database from a
6900 with 24 x 200 MHz CPUs, Veritas FS, and 8 TB of storage on Solaris 8 to a
25K with 12 dual-core 1800 MHz CPUs and 8 TB of ZFS SAN storage (compressed &
RaidZ), Solaris 10.
The ZFS test suite is being released today on OpenSolaris.org along with
the Solaris Test Framework (STF), Checkenv and Runwattr test tools.
The source tarball, binary package and baseline can be downloaded from the test
consolidation download center at http://dlc.sun.com/osol/test/downloads/curre
On 6/26/07, Roshan Perera <[EMAIL PROTECTED]> wrote:
25K with 12 dual-core 1800 MHz CPUs and 8 TB of ZFS SAN storage (compressed &
RaidZ), Solaris 10.
RaidZ is a poor choice for database apps in my opinion; due to the way
it handles checksums on raidz stripes, it must read every disk in
order
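To put the usual recommendation in concrete terms, here is a hedged sketch (device names are hypothetical, not from the original post): mirrored vdevs avoid the raidz behaviour described above, where a small read has to touch every disk in the group, and matching recordsize to the database page size is the other common tuning.

  # Mirrored pairs: a small random read is served by a single disk,
  # whereas a raidz group has to read from every member of the stripe.
  zpool create dbpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

  # Align the filesystem recordsize with the database page size
  # (8K assumed here purely for illustration).
  zfs create dbpool/db2
  zfs set recordsize=8k dbpool/db2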
I used a zpool on a USB key today to get some core files off a non-networked
Thumper running S10U4 beta.
Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
'zpool import sticky' and it worked ok.
But when we attach the drive to a blade 100 (running s10u3), it sees the
poo
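For reference, the usual sequence for moving a pool between machines is an explicit export followed by an import, and 'zpool upgrade -v' lists which on-disk versions a given release can read; a minimal sketch using the pool name from the post above:

  # On the machine the key is leaving (skipping this forces -f on import)
  zpool export sticky

  # On the receiving machine
  zpool import sticky

  # Show which ZFS on-disk versions this release understands
  zpool upgrade -v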
Hi Will,
Thanks for your reply.
Customer has an EMC SAN solution and will not change their current layout.
Therefore, asking the customer to give raw disks to ZFS is a no-no. Hence the
RaidZ configuration as opposed to RAID-5.
I have given some stats below. I know it's a bit difficult to troubleshoot
> an array of 30 drives in a RaidZ2 configuration with two hot spares
> I don't want to mirror 15 drives to 15 drives
OK, so space over speed... and you are willing to toss somewhere between 4
and 15 drives for protection.
raidz splits the (up to 128k) write/read recordsize into each element of
the
On Jun 26, 2007, at 4:26 AM, Roshan Perera wrote:
Hi all,
I am after some help/feedback to the subject issue explained below.
We are in the process of migrating a big DB2 database from a
6900 with 24 x 200 MHz CPUs, Veritas FS, and 8 TB of storage on Solaris 8 to a
25K with 12 dual-core 1800 MHz CPUs and
> I used a zpool on a USB key today to get some core files off a non-networked
> Thumper running S10U4 beta.
>
> Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
> 'zpool import sticky' and it worked ok.
>
> But when we attach the drive to a blade 100 (running s10u3), it
Possibly the storage is flushing the write caches when it
should not. Until we get a fix, cache flushing could be
disabled in the storage (ask the vendor for the magic
incantation). If that's not forthcoming, and if all pools are
attached to NVRAM-protected devices, then these /etc/
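The truncated reference is presumably to /etc/system settings. A hedged sketch of the commonly cited tunable (only present in later Solaris 10 updates and Nevada builds; check your release before relying on it):

  * /etc/system -- tell ZFS not to send SCSI cache-flush requests.
  * Only appropriate when every pool sits on NVRAM (battery-backed) storage.
  set zfs:zfs_nocacheflush = 1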
At what Solaris 10 level (patch/update) was the "single-threaded
compression" situation resolved?
Could you be hitting that one?
-- MikeE
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Roch - PAE
Sent: Tuesday, June 26, 2007 12:26 PM
To: Roshan Perera
Rob Logan wrote:
> an array of 30 drives in a RaidZ2 configuration with two hot spares
> I don't want to mirror 15 drives to 15 drives
OK, so space over speed... and you are willing to toss somewhere between 4
and 15 drives for protection.
raidz splits the (up to 128k) write/read recordsize into
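To make that concrete, a hedged sketch (device names hypothetical) of how 30 drives plus two hot spares might be laid out as several narrower raidz2 groups rather than one wide one; each top-level vdev delivers roughly the small-random-read IOPS of a single disk, so more, smaller groups help that kind of workload:

  # Three 10-disk raidz2 vdevs (24 disks of usable space) plus two shared spares
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0 \
      spare c4t0d0 c4t1d0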
We ran into an interesting situation recently w.r.t. Oracle 9i and
the zpool it lives on.
Oracle is installed on (and has its tablespace on as well) a zpool
which comprises two mirrored LUNs, and each LUN is from a
separate 6140. We needed to do work on one of these 6140s and so I
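For reference, a hedged sketch (pool and LUN names hypothetical) of the usual way to take one side of such a mirror out of service for array maintenance and bring it back afterwards:

  # Temporarily take the LUN on the 6140 being serviced out of the mirror
  zpool offline oradata c6t600A0B800012345Ad0

  # After the maintenance window, bring it back and let ZFS resilver
  zpool online oradata c6t600A0B800012345Ad0
  zpool status oradata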
Hi. I'm looking for the best solution to create an expandable heterogeneous
pool of drives. I think in an ideal world, there'd be a raid version which
could cleverly handle both multiple drive sizes and the addition of new drives
into a group (so one could drop in a new drive of arbitrary size,
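ZFS today cannot reshape an existing raidz group, but a pool can be grown by adding whole vdevs of arbitrary size; a minimal sketch with hypothetical devices:

  # Start with one mirrored pair...
  zpool create datapool mirror c1t0d0 c1t1d0

  # ...and later grow the pool by adding another pair of (possibly
  # different-sized) drives as a new top-level vdev
  zpool add datapool mirror c2t0d0 c2t1d0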
Roshan Perera writes:
> Hi all,
>
> I am after some help/feedback to the subject issue explained below.
>
> We are in the process of migrating a big DB2 database from a
>
> 6900 with 24 x 200 MHz CPUs, Veritas FS, and 8 TB of storage on Solaris 8 to
> 25K with 12 dual-core 1800 MHz CPUs and 8 TB of ZFS s
Jef Pearlman wrote:
Hi. I'm looking for the best solution to create an expandable heterogeneous
pool of drives. I think in an ideal world, there'd be a raid version which
could cleverly handle both multiple drive sizes and the addition of new drives
into a group (so one could drop in a new dri
Hi folks,
So the expansion unit for the 2500 series is the 2501.
The back-end drive channels are SAS.
Currently it is not "supported" to connect a 2501 directly to a SAS HBA.
SATA drives are in the pipe, but will not be released until the RAID firmware
for the 2500 series officially supports th
Hello,
I'm sure there is a simple solution, but I am unable to figure this one out.
Assuming I have tank/fs, tank/fs/fs1, tank/fs/fs2, and I set sharenfs=on for
tank/fs (child filesystems are inheriting it as well), and I chown
user:group /tank/fs, /tank/fs/fs1 and /tank/fs/fs2, I see:
ls -la /t
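For context, a hedged reconstruction of the setup being described (names taken from the post; nothing beyond it is implied): sharenfs set on the parent is inherited by the children, and each child filesystem is shared as its own NFS export:

  zfs create tank/fs
  zfs create tank/fs/fs1
  zfs create tank/fs/fs2
  zfs set sharenfs=on tank/fs           # fs1 and fs2 inherit the property
  chown user:group /tank/fs /tank/fs/fs1 /tank/fs/fs2
  zfs get -r sharenfs tank/fs           # shows 'on' (inherited) for the children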
I am pretty sure the T3/6120/6320 firmware does not support the
SYNCHRONIZE_CACHE commands...
Off the top of my head, I do not know if that triggers any change in behavior
on the Solaris side...
The firmware does support the use of the FUA bit...which would potentially lead
to similar flushing
Nicolas Williams wrote:
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
Nico
The *real* ZFS delegation code was integrated into Nevada this morning.
I've placed a little overview in my blog.
http://blogs.sun.com/marks
-Mark
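For readers who have not tried it yet, a hedged sketch of what delegated administration looks like (user and dataset names hypothetical; see the blog entry above for the real overview):

  # Let user 'joe' manage snapshots and child filesystems under his home
  zfs allow joe create,destroy,mount,snapshot tank/home/joe

  # Display the delegations currently in effect on that dataset
  zfs allow tank/home/joe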
On Tue, Jun 26, 2007 at 04:19:03PM -0600, Mark Shellenbaum wrote:
> Nicolas Williams wrote:
> >Couldn't wait for ZFS delegation, so I cobbled something together; see
> >attachment.
>
> The *real* ZFS delegation code was integrated into Nevada this morning.
> I've placed a little overview in my b
Oh, and thanks! ZFS delegation rocks.
I figured out how to get it to work, but I still don't quite understand it.
The way I got it to work is to zfs unmount tank/fs/fs1 and tank/fs/fs2, and
then it looked like this:
ls -la /tank/fs
user:group .
root:root fs1
root:root fs2
That is, those mountpoints changed to root:root from user:gro
Nicolas Williams wrote:
> On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
> > > Couldn't wait for ZFS delegation, so I cobbled something together; see
> > > attachment.
> >
> > I forgot to slap on the CDDL heade
Shouldn't S10u3 just see the newer on-disk format and report that fact, rather
than complain it is corrupt?
Andrew.
On Wed, Jun 27, 2007 at 12:55:15AM +0200, Roland Mainz wrote:
> Nicolas Williams wrote:
> > On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
> > > > Couldn't wait for ZFS delegation, so I cobbled something toge
Well, I didn't realize this at first because I was testing with newly empty
directories, and sorry about wasting the bandwidth here, but it appears NFS
is not showing nested ZFS filesystems *at all*; all I was seeing were the
mountpoints of the parent filesystem, and their changing ownership as the server
wa
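That matches the NFS behaviour of the time: each ZFS filesystem is a separate export, and a client that mounts only the parent sees just the empty mountpoint directories of the children. A hedged sketch from a Solaris client (server name hypothetical):

  # Mounting only the parent shows empty fs1/fs2 directories (the local
  # mountpoints), not the contents of the child filesystems
  mount -F nfs server:/tank/fs /mnt/fs

  # The child filesystems have to be mounted explicitly as well
  mount -F nfs server:/tank/fs/fs1 /mnt/fs/fs1
  mount -F nfs server:/tank/fs/fs2 /mnt/fs/fs2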
Nicolas Williams wrote:
> On Wed, Jun 27, 2007 at 12:55:15AM +0200, Roland Mainz wrote:
> > Nicolas Williams wrote:
> > > On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > > > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
> > > > > Couldn't wait for ZFS deleg
On Wed, Jun 27, 2007 at 01:45:07AM +0200, Roland Mainz wrote:
> Nicolas Williams wrote:
> > But will ksh or ksh93 know that this script must not source $ENV?
>
> Erm, I don't know what the correct behaviour is for Solaris ksh88... but
> for ksh93 it's clearly defined that ${ENV} and /etc/ksh.kshrc
On 26/06/2007, at 12:08 PM, [EMAIL PROTECTED] wrote:
I've been saving up a few wishlist items for ZFS. Time to share.
1. A verbose (-v) option to the zfs command line.
In particular, zfs sometimes takes a while to return from zfs
snapshot -r tank/[EMAIL PROTECTED] in the case where there are a
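For reference, recursive snapshots are taken atomically across the whole subtree, which is part of why the command can pause on trees with many descendants; a minimal sketch with hypothetical names:

  # Snapshot 'tank' and every descendant filesystem in one atomic operation
  zfs snapshot -r tank@nightly-2007-06-26

  # List what was created
  zfs list -t snapshot -r tank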