Hi Karl,
Thank you for all your input, and I will keep this list updated about
this project.
Regards,
Bruno
Karl Katzke wrote:
Bruno -
Sorry, I don't have experience with OpenSolaris, but I *do* have experience
running a J4400 with Solaris 10u8.
First off, you need an LSI HBA for the
On Sun, Nov 29, 2009 at 10:38 AM, Brandon High bh...@freaks.com wrote:
I recently ran a scrub on a zpool, and it's showing that data was
repaired, but the drive doesn't have any read, write or checksum
errors. Is this normal behavior, or is something weird going on? All
the documentation
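For what it's worth, one way to dig deeper (the pool name "tank" below is
hypothetical): the per-vdev counters can be reset by a clear or a reboot, but
FMA keeps the underlying error reports, so a repair should leave a trace there:

  # List any known data errors for the pool
  zpool status -v tank
  # List the FMA error log; checksum problems show up as
  # ereport.fs.zfs.checksum events (add -V for full detail)
  fmdump -e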
Just as a random data point, I have about 80-100 MBit/s write
performance to a CIFS share on a 4x 1 TByte Seagate 7200.11
system (all four drives on the same PCI SATA Adaptec at 1.5 GBit),
2 GByte RAM, 2 GHz Athlon 64 with FreeNAS 0.7 (FreeBSD 7.2). This is raidz2.
The interface is GBit Ethernet.
Tru Huynh wrote:
On Sat, Nov 21, 2009 at 07:08:20PM +1000, James C. McPherson wrote:
If you and everybody else who is seeing this problem could provide
details about your configuration (output from cfgadm -lva, raidctl
-l, prtconf -v, what your zpool configs are, and the firmware rev
of each
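A minimal sketch of gathering all of that in one pass (the output path is
arbitrary):

  # Collect the requested configuration details into one file
  ( cfgadm -lva
    raidctl -l
    prtconf -v
    zpool status ) > /tmp/mpt-config.txt 2>&1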
Starting a new thread...
On Mon, Nov 30, 2009 at 07:59:14PM +1000, James C. McPherson wrote:
Tru Huynh wrote:
...
On a Supermicro board, with 3 hw raid6 vdevs joined in a single pool,
random hangs (weekly) which required a hardware reset, nothing in the logs.
Symptoms: rpool fine, zfs status
2009/11/27 Thomas Maier-Komor tho...@maier-komor.de:
Chavdar Ivanov schrieb:
Hi,
I successfully BFUd snv_128 over snv_125:
---
# cat /etc/release
Solaris Express Community Edition snv_125 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
James C. McPherson wrote:
Adam Cheal wrote:
I thought you had just set
set xpv_psm:xen_support_msi = -1
which is different, because that sets the xen_support_msi variable,
which lives inside the xpv_psm module.
Setting mptsas:* will have no effect on your system if you do not
have an mptsas
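To make the distinction concrete, a hedged /etc/system sketch (the mptsas line
is only the pattern under discussion, shown commented out with a placeholder
variable name):

  * Disables MSI support inside the xVM psm module:
  set xpv_psm:xen_support_msi = -1
  * An mptsas:* setting would only matter on a system where the
  * mptsas driver is actually attached, e.g.:
  * set mptsas:some_variable = 1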
Dear Saanjeevb,
In order to analyze a customer issue, I am working on a script that computes ARC
and L2ARC activity by interval.
My intention is to analyze cache efficiency as well as evictions. I would
appreciate more details
about the hits breakdown:
- Are m[fr]u_ghost_hits a subset of
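For what it's worth, a minimal interval-sampling sketch with kstat (the
statistics named below are standard zfs:0:arcstats counters; the 10-second
interval is arbitrary):

  # Print the raw counters with a timestamp every 10 seconds,
  # so per-interval deltas can be computed afterwards
  kstat -p -T d \
      zfs:0:arcstats:hits \
      zfs:0:arcstats:misses \
      zfs:0:arcstats:mru_ghost_hits \
      zfs:0:arcstats:mfu_ghost_hits \
      zfs:0:arcstats:l2_hits \
      zfs:0:arcstats:l2_misses \
      10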
Can folks confirm/deny each of these?
o The problems are not seen with Sun's version of this card
On the Thumper x4540 (which uses 6 of the same LSI 1068E controller chips), we
do not see this problem. Then again, it uses a one-to-one mapping of controller
PHY ports to internal disks; no
Hi Stuart,
On which Solaris release are you seeing this behavior?
I would like to reproduce it and file a bug, if necessary.
Thanks,
Cindy
On 11/29/09 13:06, Stuart Reid wrote:
Answered my own question...
When using the -n switch, the output is truncated, i.e. the d0 is not printed.
When actually
o The problems are not seen with Sun's version of this card
Unable to comment as I don't have a Sun card here. If Sun would like to send me
one, I would be willing to test it compared to the cards I do have. I'm running
Supermicro USAS-L8i cards (LSI 1068e based).
o The problems are not
On Mon, 30 Nov 2009, Carsten Aulbert wrote:
after the disk was exchanged, I ran 'zpool clear' and another zpool scrub
afterwards...
and guess what, now another vdev shows similar problems:
Ugh!
Now, the big question is, what could be faulty. fmadm only shows vdev checksum
problems, right
Any news on this bug? We are trying to implement write acceleration, but can't
deploy to production with this issue still not fixed. If anyone has an estimate
(e.g., would it be part of 10.02?) I would very much appreciate knowing.
Thanks,
Moshe
On Mon, Nov 30, 2009 at 2:30 PM, Moshe Vainer mvai...@doyenz.com wrote:
Any news on this bug? We are trying to implement write acceleration, but
can't deploy to production with this issue still not fixed. If anyone has an
estimate (e.g., would it be part of 10.02?) I would very much appreciate
Hi Moshe:
On Mon, Nov 30, 2009 at 20:30, Moshe Vainer mvai...@doyenz.com wrote:
Any news on this bug? We are trying to implement write acceleration, but
can't deploy to production with this issue still not fixed. If anyone has an
estimate (e.g., would it be part of 10.02?) I would very much
I am sorry, I think I confused matters a bit. I meant the bug that prevents
importing with slog device missing, 6733267.
I am aware that one can remove a slog device, but if you lose your rpool and
the device goes missing while you rebuild, you will lose your pool in its
entirety. Not a
I was responding to this:
Now I have an exported file system that I can't import because of the log
device but the disks are all there. Except the original log device which
failed.
Which actually means bug #6733267, not the one about slog removal. You can
remove now (b125) but only if the pool
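For the removal case, a hedged example (pool and device names hypothetical):

  # From b125 on, a slog can be removed, but only from a pool
  # that is currently imported:
  zpool remove tank c4t0d0

Bug 6733267 is the other case: zpool import failing outright when the slog
device has gone missing.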
Hi Eugen,
please have a look at this blog post of mine -
http://harryd71.blogspot.com/2009/06/benchmark-of-freenas-07-and-single-ssd.html
105 MByte/s Read - 77 MByte/s Write over 1 GBit/s Ethernet is not too
bad for a single SSD...
Regards,
Harry
On Mon, Nov 30, 2009 at 10:02 AM, Eugen Leitl
Mark Johnson wrote:
I think there are two different bugs here...
I think there is a problem with MSIs and some variant of mpt
card on xVM. These seem to be showing up as timeout errors.
Disabling MSIs for this adapter seems to fix this problem.
For folks seeing this problem, what HBA adapter
I haven't used it myself, but you could look at the EON software NAS appliance:
http://eonstorage.blogspot.com/
On Nov 30, 2009, at 2:14 PM, Carson Gaspar wrote:
Mark Johnson wrote:
I think there are two different bugs here...
I think there is a problem with MSIs and some variant of mpt
card on xVM. These seem to be showing up as timeout errors.
Disabling MSIs for this adapter seems to fix this
Carson Gaspar wrote:
Mark Johnson wrote:
I think there are two different bugs here...
I think there is a problem with MSIs and some variant of mpt
card on xVM. These seem to be showing up as timeout errors.
Disabling MSIs for this adapter seems to fix this problem.
For folks seeing this
Hi all,
I believe it's an accurate summary of the emails on this thread
over the last 18 hours to say that
(1) disabling MSI support in xVM makes the problem go away
(2) disabling MSI support on bare metal when you only have
disks internal to your host (no jbods), makes the problem
go away
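For anyone who wants to verify which interrupt mode an HBA actually ended up
using, one hedged way is the kernel debugger's interrupt listing:

  # Show per-device interrupt assignments, including whether each
  # vector is Fixed, MSI, or MSI-X
  echo ::interrupts -d | mdb -k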
Hi,
Sorry for not replying to one of the already open threads on this topic;
I've just joined the list for the purposes of this discussion and have
nothing in my client to reply to yet.
I have an x86_64 opensolaris machine running on a Core 2 Quad Q9650
platform with two LSI SAS3081E-R PCI-E 8
Chad Cantwell wrote:
Hi,
Sorry for not replying to one of the already open threads on this topic;
I've just joined the list for the purposes of this discussion and have
nothing in my client to reply to yet.
I have an x86_64 opensolaris machine running on a Core 2 Quad Q9650
platform with two
Hi,
I just posted a summary of a similar issue I'm having with non-Sun hardware.
For the record, it's in a Chenbro RM41416 chassis with 4 Chenbro SAS backplanes
but no expanders (each backplane is 4 disks connected by SFF-8087 cable). Each
of my LSI brand SAS3081E PCI-E cards is connected to
Hi,
Replied to your previous general query already, but in summary, they are in the
server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes
that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8
drives per card).
Chad
On Tue, Dec 01, 2009 at
Chad Cantwell wrote:
Hi,
Replied to your previous general query already, but in summary, they are in the
server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes
that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8
drives per card).
Hi Chad,
thanks
Hi,
The Chenbro chassis contains everything - the motherboard/CPU, and the disks.
As far as I know the Chenbro backplanes are basically electrical jumpers that
the LSI cards shouldn't be aware of. They pass through the SATA signals
directly from SFF-8087 cables to the disks.
Thanks,
Chad
Will a write(2) to a ZFS file be made durable atomically?
Under the hood in ZFS, writes are committed using either shadow paging or
logging, as I understand it. So what I mean to ask is whether a
write(2), pushed to ZPL, and pushed on down the stack, can be split into
multiple
Moshe Vainer wrote:
I am sorry, I think I confused matters a bit. I meant the bug that prevents
importing with slog device missing, 6733267.
I am aware that one can remove a slog device, but if you lose your rpool and
the device goes missing while you rebuild, you will lose your pool in
(1) disabling MSI support in xVM makes the problem go away
Yes here.
(6) mpt(7d) without MSI support is sloow.
That does seem to be the case. It's not so bad overall, and at least the
performance is consistent. It would be nice if this were improved.
For those of you who have been
Under the hood in ZFS, writes are committed using either shadow paging or
logging, as I understand it. So what I mean to ask is whether a
write(2), pushed to ZPL, and pushed on down the stack, can be split into
multiple transactions? Or, instead, is it guaranteed to be committed in a
On Mon, Nov 30, 2009 at 11:03:07PM -0700, Neil Perrin wrote:
A write made through the ZPL (zfs_write()) will be broken into transactions
that contain at most 128KB of user data. So a large write could well be split
across transaction groups, and thus committed separately.
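As a concrete illustration of that split (the path below is hypothetical):

  # A single 1 MB write(2); internally ZFS breaks it into multiple
  # transactions of at most 128KB of user data, so after a crash the
  # file may contain only part of this one write
  dd if=/dev/zero of=/tank/fs/bigfile bs=1024k count=1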
That answers my exact
Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via
an SFF-8087 cable
StarTech HSB430SATBK
Hmm, both are passive backplanes with one SATA tunnel per link...
no SAS expanders (LSISASx36) like those found in Supermicro or J4x00 with 4
links per connection.
wonder
On Mon, Nov 30, 2009 at 10:23:06PM -0800, Chris Frost wrote:
On Mon, Nov 30, 2009 at 11:03:07PM -0700, Neil Perrin wrote:
A write made through the ZPL (zfs_write()) will be broken into transactions
that contain at most 128KB of user data. So a large write could well be split
across transaction
After another crash I checked the syslog and there were some different errors
than the ones
I saw previously during operation:
Nov 30 20:26:11 the-vault scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci8086,2...@3/pci111d,8...@0/pci111d,8...@1/pci1000,3...@0 (mpt1):
Nov 30 20:26:11
Chad Cantwell wrote:
After another crash I checked the syslog and there were some different errors
than the ones
I saw previously during operation:
...
Nov 30 20:59:13 the-vault LSI PCI device (1000,) not supported.
...
Nov 30 20:59:13 the-vault mpt_config_space_init failed