It is the same for the 2530, and I am fairly certain it is also valid
for the 6130, 6140, & 6540.
-Joel
On Feb 18, 2008, at 3:51 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Joel,
>
> Saturday, February 16, 2008, 4:09:11 PM, you wrote:
>
> JM> Bob,
>
> JM> Here is how you can tell th
Any IDRxx (released immediately) is the interim relief (it also
contains the fix) provided to customers until the official patch
(which usually takes longer to be released) is available. The patch is
to be considered the permanent solution.
--
Prabahar.
Stuart Anderson wrote:
> Thanks
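As a rough check of the IDR vs. patch distinction above: assuming the IDR was
installed with patchadd like a normal patch, both kinds show up in showrev
output (the patch IDs below are just the ones mentioned in this thread):

   # showrev -p | grep 127729     (is the temporary/official patch installed?)
   # showrev -p | grep IDR        (is an interim relief patch installed?)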
Thanks for the information.
How does the temporary patch 127729-07 relate to the IDR127787 (x86) which
I believe also claims to fix this panic?
Thanks.
On Mon, Feb 18, 2008 at 08:32:03PM -0800, Prabahar Jeyaram wrote:
> The patches (127728-06 : sparc, 127729-07 : x86) which have the fix for
> t
Hi,
I got my MacBook Pro set up to dual boot between Solaris and OSX, and I
have created a zpool to use as shared storage for documents etc.
However, I got this strange thing when trying to access the zpool from
Solaris: only root can see it?? I created the zpool on OSX as they are
using an
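For what it's worth, if the pool itself imports cleanly, the usual suspect is
the ownership/mode of the mount point rather than ZFS itself. A minimal check,
assuming the pool is called 'shared' and mounts at /shared (both names made up
here):

   # zfs get mountpoint,mounted shared
   # ls -ld /shared                     (look at owner and mode of the mount point)
   # chown youruser /shared             (if it is owned by root with a restrictive mode)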
The patches (127728-06 : sparc, 127729-07 : x86) which have the fix for
this panic are in a temporary state and will be released via SunSolve soon.
Please contact your support channel to get these patches.
--
Prabahar.
Stuart Anderson wrote:
> On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anders
On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
> Is this kernel panic a known ZFS bug, or should I open a new ticket?
>
> Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion failed:
> arc_buf_remove_ref(db->db_buf, db) == 0, file: ../../common/fs/zfs/dbuf.c,
> l
Is this kernel panic a known ZFS bug, or should I open a new ticket?
Note, this happened on an X4500 running S10U4 (127112-06) with NCQ disabled.
Thanks.
Feb 18 17:55:18 thumper1 ^Mpanic[cpu1]/thread=fe8000809c80:
Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion failed:
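If a crash dump was saved (savecore enabled), the full panic string and stack
can be pulled out of it with mdb; a sketch, assuming the dump landed as
unix.0/vmcore.0 under /var/crash/<hostname>:

   # cd /var/crash/thumper1
   # mdb unix.0 vmcore.0
   > ::status      (panic message and dump time)
   > ::stack       (stack of the panicking thread)
   > $q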
Hello Joel,
Saturday, February 16, 2008, 4:09:11 PM, you wrote:
JM> Bob,
JM> Here is how you can tell the array to ignore cache sync commands
JM> and the force unit access bits...(Sorry if it wraps..)
JM> On a Solaris CAM install, the 'service' command is in "/opt/SUNWsefms/bin"
JM> To read th
> The free basic edition sounds cool, though - downloading now.
> I could use a bit of practice with VxVM/VxFS; it's always struck
> me as very good when it was good (online reorgs of storage and
> such), and an utter terror to untangle when it got messed up,
> not to mention rather more complicate
> On Sat, 16 Feb 2008, Richard Elling wrote:
>
> > "ls -l" shows the length. "ls -s" shows the size, which may be
> > different than the length. You probably want size rather than du.
>
> That is true. Unfortunately 'ls -s' displays in units of disk blocks
> and does not also consider t
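A quick way to see the length/size difference is a sparse file; the sketch
below assumes Solaris (mkfile -n notes the size without allocating blocks),
and the 100m figure is arbitrary:

   $ mkfile -n 100m sparse.dat
   $ ls -l sparse.dat      (length: 104857600 bytes)
   $ ls -s sparse.dat      (blocks actually allocated -- far fewer)
   $ du -k sparse.dat      (disk usage in kilobytes)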
> Hello,
>
> I have just done comparison of all the above
> filesystems
> using the latest filebench. If you are interested:
> http://przemol.blogspot.com/2008/02/zfs-vs-vxfs-vs-ufs-on-x4500-thumper.html
>
> Regards
> przemol
I would think there'd be a lot more variation based on workload,
s
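(For anyone wanting to reproduce or vary this by workload: a minimal filebench
run looks roughly like the below; the dataset path and the 60-second runtime
are arbitrary, and the exact invocation depends on where filebench is
installed.)

   filebench> load varmail
   filebench> set $dir=/tank/fbtest
   filebench> run 60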
On Mon, Feb 18, 2008 at 11:15:34AM -0800, Eric Schrock wrote:
>
> The 'failmode' property only applies when writes fail, or
> read-during-write dependies, such as the spacemaps. It does not affect
^
That should read 'dependencies', obviously ;-)
- Eric
--
Eric Schro
On Mon, Feb 18, 2008 at 11:52:48AM -0700, Joe Peterson wrote:
>
> Is "wait" the default behavior now? When I had CKSUM errors, reading
> the file would return EIO and stop reading at that point (returning only
> the good data so far). Do you mean it blocks access on the errored
> file, or on the
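For reference, on builds that have the property, failmode is per-pool and only
covers catastrophic pool failures (all paths to a device gone, writes
failing), not checksum errors on individual files; 'wait' is the default, with
'continue' and 'panic' as the alternatives. Assuming a pool named tank:

   # zpool get failmode tank
   # zpool set failmode=continue tank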
Richard Elling wrote:
> Adrian Saul wrote:
>> Howdy, I have at several times had issues with consumer grade PC
>> hardware and ZFS not getting along. The problem is not the disks
>> but the fact I dont have ECC and end to end checking on the
>> datapath. What is happening is that random memory er
Bob Friesenhahn writes:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
> >>> What was the interlace on the LUN ?
> >
> > The question was about LUN interlace not interface.
> > 128K to 1M works better.
>
> The "segment size" is set to 128K. The max the 2540 allows is 512K.
> Unfortuna
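One knob on the ZFS side that is often matched against the array segment size
is the dataset recordsize (128K is both the default and the maximum); purely
illustrative, with a made-up dataset name:

   # zfs get recordsize tank/db
   # zfs set recordsize=128k tank/db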
comment below...
Adrian Saul wrote:
> Howdy,
> I have at several times had issues with consumer grade PC hardware and ZFS
> not getting along. The problem is not the disks but the fact I dont have ECC
> and end to end checking on the datapath. What is happening is that random
> memory errors
On Mon, 18 Feb 2008, Ralf Ramge wrote:
> I'm a bit disturbed because I think about switching to 2530/2540
> shelves, but a maximum 250 MB/sec would disqualify them instantly, even
Note that this is single-file/single-thread I/O performance. I suggest
that you read the formal benchmark report for
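A crude way to reproduce the single-stream write number is a plain dd of a few
GB (with compression off, otherwise the zeros are not representative); the
sizes and path here are arbitrary:

   # time dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096
   # rm /tank/ddtest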
Mertol Ozyoney wrote:
>
> 2540 controler can achieve maximum 250 MB/sec on writes on the first
> 12 drives. So you are pretty close to maximum throughput already.
>
> Raid 5 can be a little bit slower.
>
I'm a bit irritated now. I have ZFS running for some Sybase ASE 12.5
databases using X4600 s
Howdy,
I have several times had issues with consumer-grade PC hardware and ZFS not
getting along. The problem is not the disks but the fact that I don't have ECC
and end-to-end checking on the datapath. What is happening is that random memory
errors and bit flips are written out to disk and when
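(On the detection side: a scrub will surface this kind of damage as CKSUM
errors per device in zpool status, though a bit flip that happens in RAM
before the checksum is computed is exactly the case ZFS cannot catch. Pool
name assumed to be tank.)

   # zpool scrub tank
   # zpool status -v tank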