Hi,
we're having a bad situation with a SAN iSCSI solution in a customer's
production environment: the storage hardware can panic its kernel because of
a software fault, with the risk of losing data.
We want to give the SAN manufacturer a last chance to correct their
solution: we're
Louwtjie Burger writes:
Hi
After a clean database load, the data would (should?) look like this,
if a random stab at it is taken...
[8KB-m][8KB-n][8KB-o][8KB-p]...
The data should be fairly (100%) sequential in layout ... after some
days though that same spot (using
Sun did something like this with the v60 and v65 servers, and they should do it
again with the SSR212MC2.
The heart of the SAS subsystem of the SSR212MC2 is the SRCSAS144E.
This card interfaces with a Vitesse VSC410 SAS expander and is plugged into
an S5000PSL motherboard.
This card is
Internal drives suck. If you go through the trouble of putting in a
drive, at least make it hot pluggable.
They are all hot-swappable/pluggable on the SSR212MC2. There are two
additional internal 2.5" SAS bonus drives that aren't, but the front 12 are.
I for one think external enclosures are
Hi everyone,
We're building a storage system that should have about 2TB of storage
and good sequential write speed. The server side is a Sun X4200 running
Solaris 10u4 (plus yesterday's recommended patch cluster), the array we
bought is a Transtec Provigo 510 12-disk array. The disks are SATA,
I've just discovered patch 125205-07, which wasn't installed on our system
because we don't have SUNWhea.
Has anyone with problems tried this patch, and has it helped at all?
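For anyone who wants to check first: something like the following should show
whether the patch is already installed and, if not, install it (standard
Solaris patch tools; the download path is made up):

  showrev -p | grep 125205        # list any installed revision of 125205
  patchadd /var/tmp/125205-07     # install from wherever it was unpacked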
Paul Jochum wrote:
Hi Richard:
I just tried your suggestion, unfortunately it doesn't work. Basically:
make a clone of the snapshot - works fine
in the clone, remove the directories - works fine
make a snapshot of the clone - works fine
destroy the clone - fails, because ZFS reports that
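For reference, the sequence above amounts to something like this (a sketch;
the pool and filesystem names are made up):

  zfs snapshot tank/fs@snap              # the original snapshot
  zfs clone tank/fs@snap tank/fsclone    # make a clone of the snapshot - works fine
  rm -rf /tank/fsclone/somedir           # in the clone, remove the directories - works fine
  zfs snapshot tank/fsclone@trimmed      # make a snapshot of the clone - works fine
  zfs destroy tank/fsclone               # fails - the clone now has a dependent snapshot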
I make a file zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
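Presumably the pool was created along these lines (a sketch; the backing-file
size is a guess, ZFS just needs a file of at least 64MB):

  mkfile 100m /export/f1.dat             # create the backing file
  zpool create filepool /export/f1.dat   # build a single-vdev pool on it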
As with any application, if you hold the vnode (or file descriptor) open
and remove the underlying file, you can still write to the file even if
it is removed. Removing the file only removes it from the namespace;
until the last reference is closed it will continue to exist.
You can use 'zpool
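The same behaviour is easy to demonstrate from a shell (a minimal sketch; the
file name is made up):

  exec 3>/tmp/demo.dat           # hold a descriptor open on the file
  rm /tmp/demo.dat               # unlink only removes it from the namespace
  echo "still writable" >&3      # writes via the open descriptor still succeed
  exec 3>&-                      # last reference closed; the blocks are freed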
Mick Russom wrote:
Sun's own v60 and Sun v65 were pure Intel reference servers that worked
GREAT!
I'm glad they worked for you. But I'll note that the critical deficiencies
in those platforms are solved by the newer Sun AMD/Intel/SPARC small form factor
rackmount servers. The new chassis are
can you guess? wrote:
Vitesse VSC410
Yes, it will help detect
hardware faults as well if they happen to occur between RAM and the
disk (and aren't otherwise detected - I'd still like to know whether
the 'bad cable' experiences reported here occurred before ATA started
CRCing its transfers),
But to create a clone you'll need a snapshot, so I think the problem
will still be there...
This might be a way around this problem though. Deleting files from snapshots
sounds like a messy approach in terms of the architecture, but deleting files
from clones would be fine.
So what's needed
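One hedged sketch of the clone route (this leans on zfs promote, which
reverses the clone/origin dependency; all names are made up):

  zfs snapshot tank/fs@cleanup            # the snapshot the clone needs
  zfs clone tank/fs@cleanup tank/fs.new   # deleting files from a clone is fine
  rm /tank/fs.new/unwanted.dat            # made-up file name
  zfs promote tank/fs.new                 # the clone no longer depends on tank/fs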
On 11/13/07, Dan Poltawski [EMAIL PROTECTED] wrote:
I've just discovered patch 125205-07, which wasn't installed on our system
because we don't have SUNWhea.
Has anyone with problems tried this patch, and has it helped at all?
We were having a pretty rough time running S10U4. While I was
This question triggered some silly questions in my mind:
Lots of folks are determined that the whole COW-to-different-locations
approach is a Bad Thing(tm), and in some cases, I guess it might actually be...
What if ZFS had a pool / filesystem property that caused zfs to do a
journaled, but non-COW
On Tue, Nov 13, 2007 at 07:33:20PM -0200, Toby Thain wrote:
Yup - that's exactly the kind of error that ZFS and WAFL do a
perhaps uniquely good job of catching.
WAFL can't catch everything: it's distantly isolated from
the CPU end.
WAFL will catch everything that ZFS catches, including
Nathan Kroenert wrote:
This question triggered some silly questions in my mind:
Lots of folks are determined that the whole COW-to-different-locations
approach is a Bad Thing(tm), and in some cases, I guess it might actually be...
There is a lot of speculation about this, but no real data.
I've
Paul Boven wrote:
Hi everyone,
We're building a storage system that should have about 2TB of storage
and good sequential write speed. The server side is a Sun X4200 running
Solaris 10u4 (plus yesterday's recommended patch cluster), the array we
bought is a Transtec Provigo 510 12-disk
I agree, being able to delete the snapshot that a clone is attached to would be
a nice feature. Until we get that, this is what I have done (in case this
helps anyone else):
1) snapshot the filesystem
2) clone the snapshot into a separate pool
3) only NFS-mount the separate pool with clones
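A sketch of those three steps, assuming the copy into the separate pool is
done with zfs send/recv (a clone itself can't cross pools; all names are
made up):

  zfs snapshot tank/fs@export                     # 1) snapshot the filesystem
  zfs send tank/fs@export | zfs recv nfspool/fs   # 2) copy into the separate pool
  zfs set sharenfs=on nfspool/fs                  # 3) NFS-share only the copy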
and when the system is rebooted, I run zpool status, which tells me that one
vdev is corrupt, so I recreate the file that I had removed. After all those
operations, I run zpool destroy pool and the system reboots again. Should
Solaris do that?
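As best I can reconstruct the sequence (the file name comes from the earlier
zpool status; the size is a guess):

  zpool status filepool           # after the reboot: one vdev reported corrupt
  mkfile 100m /export/f1.dat      # recreate the file that had been removed
  zpool destroy filepool          # at this point the system reboots again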
The 'reset: no matching NCQ I/O found' issue appears to be related to the
error recovery for bad blocks on the disk. In general it should be harmless,
but
I have looked into this. If there is someone out there who:
1) Is hitting this issue, and
2) Is running recent Solaris Nevada bits (not