Mario Goebbels wrote:
Heh, the last RAM problem I ever had was a broken 1MB memory stick in
that wannabe 486 from Cyrix, like over a decade ago. And I never test my
machines for broken sticks :)
If you don't test your RAM, how can you be sure you have no problems (unless
you exclusively use ECC memory)?
Are there any plans to support record sizes larger than 128k? We use
ZFS file systems for disk staging on our backup servers (compression
is a nice feature here), and we typically configure the disk staging
process to read and write large blocks (typically 1MB or so). This
reduces the number of
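For context, recordsize is a per-dataset ZFS property and 128K is the ceiling being asked about here. A minimal sketch of checking and tuning it (the pool/dataset name is hypothetical):

```shell
# Show the current recordsize of the staging dataset (name is hypothetical)
zfs get recordsize tank/staging

# Raise it to the 128K maximum; at the time of this thread,
# values larger than 128k are rejected
zfs set recordsize=128k tank/staging
```

Note that recordsize only affects files written after the property is changed.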
Sickness, which case are you using? I've been looking for something that
supports many HDDs. Thanks.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
For everyone else:
http://blogs.sun.com/timthomas/entry/samba_and_swat_in_solaris#comments
It looks like Nevada build 70b will be the next Solaris Express Developer Edition
(SXDE), which should drop shortly and should also have the ZFS ACL fix, but
to find the full source integration you have to
Hi all,
yesterday we had a drive failure on a fc-al jbod with 14 drives.
Suddenly the zpool using that JBOD stopped responding to I/O requests, and we
got tons of the following messages in /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/[EMAIL
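When a loop wedges like this, the usual first stops are the pool status and the FMA error log (standard Solaris 10 tooling shown; nothing here is specific to this poster's setup):

```shell
# Report only pools that currently have problems, with per-device error counts
zpool status -xv

# Dump recent FMA error telemetry, which typically records the SCSI/FC
# transport errors behind kernel warnings like the one above
fmdump -eV | tail -40
```

On Leadville-stack HBAs, fcinfo hba-port can also show whether the FC link itself is still up.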
Hi,
I have a Solaris 10u3/x86 box with a single mirrored zpool, patched with
10_Recommended as of mid-May and which has been running with no obvious
problems since that time until today.
Today, processes accessing certain ZFS files started hanging (sleeping in an
unkillable state), which
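A hedged sketch of the usual triage for unkillable processes stuck in ZFS I/O (the PID is a placeholder):

```shell
# Show the user-level stack of the hung process to see where it is blocked
pstack 1234

# From the kernel side, summarize threads by stack, filtered to the zfs module
# (run as root; ::stacks assumes a reasonably recent mdb from this era's builds)
echo "::stacks -m zfs" | mdb -k
```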
On Sep 4, 2007, at 12:09, MC wrote:
For everyone else:
http://blogs.sun.com/timthomas/entry/samba_and_swat_in_solaris#comments
It looks like Nevada build 70b will be the next Solaris Express
Developer Edition (SXDE), which should drop shortly and should
also have the ZFS ACL fix, but
Long topic; it was discussed in a previous thread.
In relation to this, there is
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
which may be of interest.
selim
On 9/4/07, Matty [EMAIL PROTECTED] wrote:
Are there any plans to support record sizes larger than 128k? We use
ZFS
Pete Bentley wrote:
Mario Goebbels wrote:
Heh, the last RAM problem I ever had was a broken 1MB memory stick in
that wannabe 486 from Cyrix, like over a decade ago. And I never test my
machines for broken sticks :)
If you don't test your RAM, how can you be sure you have no problems (unless
you exclusively use ECC memory)?
I usually keep an eagle eye on my personal systems. If something appears
to be wrong, I usually spend considerable time diagnosing it. It goes as
far as me running zpool status every time I
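Checksums make that kind of paranoia cheap: a scrub re-reads and verifies every block in the pool, so flaky RAM or disks surface as checksum errors instead of silent corruption. A minimal habit (the pool name is hypothetical):

```shell
# Walk the whole pool and verify every block checksum in the background
zpool scrub tank

# Later, check scrub progress and any errors it surfaced
zpool status -v tank
```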
I'm going to go out on a limb here and say you have an A5000 with the
1.6 disks in it. Because of their design (all drives seeing each other
on both the A and B loops), it's possible for one disk that is behaving
badly to take over the FC-AL loop and require human intervention. You
can
Hi Mark,
the drive (147GB, FC 2Gb) failed in a Xyratex JBOD.
In the past we also had the same problem with a drive that failed in an EMC CX JBOD.
Anyway, I can't understand why rebooting Solaris resolved the situation ..
Thank you,
Gino
First-time user of Solaris and ZFS.
I have Solaris 10 installed on the primary IDE drive of my motherboard. I also
have a 4-disk RAID-Z setup on my SATA connections. I set up a successful
1.5TB ZFS server with all disks operational.
Well ... I was trying out something new and I borked
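For anyone following along, a four-disk setup like that is created in one step. Device names here are hypothetical, and the ~1.5TB usable figure assumes 4 x 500GB drives (one disk's worth of capacity goes to parity):

```shell
# Create a single RAID-Z vdev from four SATA disks
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Verify the layout and available space
zpool status tank
zfs list tank
```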