Hi,
Brandon High <bh...@freaks.com> writes:
>
> I only looked at the Megaraid that he mentioned, which has a PCIe
> 1.0 4x interface, or 1000MB/s.
You mean x8 interface (theoretically plugged into that x4 slot below...)
> The board also has a PCIe 1.0 4x electrical slot, which is 8x
> physical.
On Wed, May 26, 2010 at 5:08 AM, Matt Connolly wrote:
> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>
> sh-4.0# zfs create rpool/iscsi
> sh-4.0# zfs set shareiscsi=on rpool/iscsi
> sh-4.0# zfs create -s -V 10g rpool/iscsi/test
>
> The underlying zpool is a mirror of two SATA drives.
On Wed, May 26, 2010 at 8:35 PM, Marc Bevand wrote:
> The Supermicro X8DTi mobo and LSISAS9211-4i HBA are both PCIe 2.0 compatible,
> so the max theoretical PCIe x4 throughput is 4GB/s aggregate, or 2GB/s in each
> direction, well above the 800MB/s bottleneck observed by Giovanni.
I only looked at the Megaraid that he mentioned, which has a PCIe
1.0 4x interface, or 1000MB/s.
You can set metaslab_gang_bang to (say) 8k to force lots of gang block
allocations.
Jeff
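For anyone wanting to try this, a sketch of setting that tunable on a live system with mdb (assuming a Solaris kernel; the value is a block-size threshold in bytes, and 0t marks decimal in mdb notation):

```shell
# Lower the gang-block threshold so any allocation of 8 KB or more
# is forced to gang (the default is large, so gang blocks are rare).
echo 'metaslab_gang_bang/W 0t8192' | mdb -kw

# Read the variable back to confirm the change took effect.
echo 'metaslab_gang_bang/D' | mdb -k
```

To make it stick across reboots, the equivalent /etc/system line would be `set zfs:metaslab_gang_bang = 8192`.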
On May 25, 2010, at 11:42 PM, Andriy Gapon wrote:
>
> I am working on improving some ZFS-related bits in FreeBSD boot chain.
> At the moment it seems that the things work mostly fine except for a case
> where the boot code needs to read gang blocks.
On Wed, 2010-05-26 at 18:37 -0700, Brandon High wrote:
>
> That's what I thought too, so you can imagine my surprise when I saw
> the man page stating otherwise.
>
> Marty, is the source available publicly? (or do you know if the pmp or
> port selector features are supported as well?) I couldn't
On Wed, May 26, 2010 at 5:52 PM, Thomas Burgess wrote:
> I thought it did. I couldn't imagine Sun using that chip in the original
> Thumper if it didn't support NCQ. Also, I've read where people have had to
> DISABLE NCQ on this driver to fix one bug or another (as a workaround).
That's what I thought too, so you can imagine my surprise when I saw
the man page stating otherwise.
I am working on improving some ZFS-related bits in FreeBSD boot chain.
At the moment it seems that the things work mostly fine except for a case where
the boot code needs to read gang blocks. We have some reports from users about
failures, but unfortunately their pools are not available for testing.
On Wed, May 26, 2010 at 9:22 PM, Brandon High wrote:
> On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni wrote:
>> SuperMicro X8DTi motherboard
>> SuperMicro SC846E1 chassis (3Gb/s backplane)
>> LSI 9211-4i (PCIex x4) connected to backplane with a SFF-8087 cable (4-lane).
>> 18 x Seagate 1TB SATA 7200rpm
I thought it did. I couldn't imagine Sun using that chip in the original
Thumper if it didn't support NCQ. Also, I've read where people have had to
DISABLE NCQ on this driver to fix one bug or another (as a workaround).
On Wed, May 26, 2010 at 8:40 PM, Marty Faltesek wrote:
> On Wed, 2010-05-26 at 17:18 -0700, Brandon High wrote:
On Wed, 2010-05-26 at 17:18 -0700, Brandon High wrote:
> > If that is the chip on the AOC-SAT2-MV8 then i'm pretty sure it does
> > support NCQ
>
> Not according to the driver documentation:
> http://docs.sun.com/app/docs/doc/819-2254/marvell88sx-7d
> "In addition, the 88SX6081 device supports t
On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni wrote:
> SuperMicro X8DTi motherboard
> SuperMicro SC846E1 chassis (3Gb/s backplane)
> LSI 9211-4i (PCIex x4) connected to backplane with a SFF-8087 cable (4-lane).
> 18 x Seagate 1TB SATA 7200rpm
>
> I was able to saturate the system at 800MB/s wi
On Wed, May 26, 2010 at 4:25 PM, Thomas Burgess wrote:
> If that is the chip on the AOC-SAT2-MV8 then i'm pretty sure it does support
> NCQ
Not according to the driver documentation:
http://docs.sun.com/app/docs/doc/819-2254/marvell88sx-7d
"In addition, the 88SX6081 device supports the SATA II P
On Thu, May 20, 2010 at 2:19 AM, Marc Bevand wrote:
> Deon Cui <...@gmail.com> writes:
>>
>> So I had a bunch of them lying around. We've bought a 16x SAS hotswap
>> case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
>> the mobo.
>>
>> In the two 16x PCI-E slots I've put in the 1068E c
On Wed, May 26, 2010 at 5:47 PM, Brandon High wrote:
> On Sat, May 15, 2010 at 4:01 AM, Marc Bevand wrote:
> > I have done quite some research over the past few years on the best (ie.
> > simple, robust, inexpensive, and performant) SATA/SAS controllers for
> ZFS.
>
> I've spent some time looking at the capabilities of a few controllers
> based on the questions about the SiI3124 and PMP support.
+storage-discuss
On Wed, May 26, 2010 at 2:47 PM, Brandon High wrote:
> I've spent some time looking at the capabilities of a few controllers
> based on the questions about the SiI3124 and PMP support.
>
> According to the docs, the Marvell 88SX6081 driver doesn't support NCQ
> or PMP, though the card does.
> > I've spent some time looking at the capabilities of a few
> > controllers based on the questions about the SiI3124 and PMP
> > support.
> >
> > According to the docs, the Marvell 88SX6081 driver doesn't support
> > NCQ or PMP, though the card does. While I'm not really performance
> > bound on
>
> I've set up an iScsi volume on OpenSolaris (snv_134) with these
> commands:
>
> sh-4.0# zfs create rpool/iscsi
> sh-4.0# zfs set shareiscsi=on rpool/iscsi
> sh-4.0# zfs create -s -V 10g rpool/iscsi/test
>
> The underlying zpool is a mirror of two SATA drives. I'm connecting
> from a Mac
On Wed, May 26, 2010 at 2:25 PM, Richard Elling wrote:
> I think the community would be happier when booting from EFI is
> solved :-)
+10
--
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sat, May 15, 2010 at 4:01 AM, Marc Bevand wrote:
> I have done quite some research over the past few years on the best (ie.
> simple, robust, inexpensive, and performant) SATA/SAS controllers for ZFS.
I've spent some time looking at the capabilities of a few controllers
based on the questions about the SiI3124 and PMP support.
-Original Message-
From: Matt Connolly
Sent: Wednesday, May 26, 2010 5:08 AM
I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs create -s -V 10g rpool/iscsi/test
The underlying z
On May 26, 2010, at 1:21 PM, Cindy Swearingen wrote:
> Hi Lori,
>
> I haven't filed it yet.
>
> We need to file a CR that allows us to successfully set bootfs to "".
+1
> The failure case in this thread was attempting to unset bootfs on
> a pool with disks that have EFI labels.
I think the community would be happier when booting from EFI is
solved :-)
On May 26, 2010, at 5:08 AM, Matt Connolly wrote:
> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>
> sh-4.0# zfs create rpool/iscsi
> sh-4.0# zfs set shareiscsi=on rpool/iscsi
> sh-4.0# zfs create -s -V 10g rpool/iscsi/test
NB shareiscsi uses the legacy iSCSI target
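For comparison, exporting the same zvol through COMSTAR instead of the legacy target would look roughly like this (a sketch, assuming the COMSTAR packages are installed; the GUID placeholder comes from sbdadm's output):

```shell
# Stop sharing via the legacy iSCSI target.
zfs set shareiscsi=off rpool/iscsi

# Enable the SCSI target mode framework and create a logical unit
# backed by the zvol.
svcadm enable stmf
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/test

# Make the LU visible to all initiators (narrow this in production),
# then create an iSCSI target and enable the target service.
stmfadm add-view <GUID-from-sbdadm-output>
itadm create-target
svcadm enable -r svc:/network/iscsi/target:default
```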
On May 26, 2010, at 8:38 AM, Neil Perrin wrote:
> On 05/26/10 07:10, sensille wrote:
>> Recently, I've been reading through the ZIL/slog discussion and
>> have the impression that a lot of folks here are (like me)
>> interested in getting a viable solution for a cheap, fast and
>> reliable ZIL dev
Hi Lori,
I haven't filed it yet.
We need to file a CR that allows us to successfully set bootfs to "".
The failure case in this thread was attempting to unset bootfs on
a pool with disks that have EFI labels.
Thanks,
Cindy
On 05/26/10 14:09, Lori Alt wrote:
Was a bug ever filed against zfs for not allowing the bootfs property to
be set to ""? We should always let that request succeed.
Was a bug ever filed against zfs for not allowing the bootfs property to
be set to ""? We should always let that request succeed.
lori
On 05/26/10 09:09 AM, Cindy Swearingen wrote:
Hi--
I'm glad you were able to resolve this problem.
I drafted some hints in this new section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues
Thanks a lot for the heads up, Garrett. I'll be watching for an update from
Nexenta then.
Dmitry
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Garrett D'Amore
Sent: Wednesday, May 26, 2010 3:57 PM
To: zfs-discuss@opensolaris.org
Subject:
On 5/26/2010 11:47 AM, Dmitry Sorokin wrote:
Hi All,
I was just wondering if the issue that affects NFS availability when
deleting large snapshots on ZFS data sets with dedup enabled was fixed.
There is a fix for this in b141 of the OpenSolaris source product. We
are looking at including
Hi All,
I was just wondering if the issue that affects NFS availability when
deleting large snapshots on ZFS data sets with dedup enabled was fixed.
More on the issue here:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37288.html
and here:
http://opensolaris.org/jive/thread.
iSCSI writes require a sync to disk for every write. SMB writes get cached in
memory, therefore are much faster.
I am not sure why it is so slow for reads.
Have you tried comstar iSCSI? I have read in these forums that it is faster.
-Scott
--
This message posted from opensolaris.org
More info:
The crashes go away just by swapping the CPU for a faster one with more horsepower.
On one box where the crash consistently happened (a slow 2-core CPU),
I no longer see the crash after swapping to a quad core.
On May 26, 2010, at 4:12 AM, Attila Mravik wrote:
> If your ZIL does use nonvolatile cache and does not honor flush
> requests then a powerloss is the same as losing the ZIL altogether
> since it will not have the data saved for a playback.
This is not a correct statement. Those are two differe
Martijn de Munnik wrote:
I have several home directories on a Solaris server. I want to move
these home directories to a S7000 storage. I know I can use zfs send |
zfs receive to move zfs filesystems. Can this be done to a S7000 storage
using ssh?
No. Check out the shadow migration feature,
On 05/26/10 07:10, sensille wrote:
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200, but it
involves a lot of development work.
Hi--
I'm glad you were able to resolve this problem.
I drafted some hints in this new section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues
We had all the clues, you and Brandon got it though.
I think my brain function was missing yesterday.
Bob Friesenhahn wrote:
> On Wed, 26 May 2010, sensille wrote:
>> The basic idea: the main problem when using a HDD as a ZIL device
>> are the cache flushes in combination with the linear write pattern
>> of the ZIL. This leads to a whole rotation of the platter after
>> each write, because after the first write returns, the head is
On Wed, 26 May 2010, sensille wrote:
The basic idea: the main problem when using a HDD as a ZIL device
are the cache flushes in combination with the linear write pattern
of the ZIL. This leads to a whole rotation of the platter after
each write, because after the first write returns, the head is
On 26 May, 2010 - sensille sent me these 4,5K bytes:
> Edward Ned Harvey wrote:
> >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >> boun...@opensolaris.org] On Behalf Of sensille
> >>
> >> The basic idea: the main problem when using a HDD as a ZIL device
> >> are the cache flushes in combination with the linear write pattern
Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of sensille
>>
>> The basic idea: the main problem when using a HDD as a ZIL device
>> are the cache flushes in combination with the linear write pattern
>> of the ZIL. This leads to a whole rotation of the platter after
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of sensille
>
> The basic idea: the main problem when using a HDD as a ZIL device
> are the cache flushes in combination with the linear write pattern
> of the ZIL. This leads to a whole rotation of the platter after
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200, but it
involves a lot of development work.
I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs create -s -V 10g rpool/iscsi/test
The underlying zpool is a mirror of two SATA drives. I'm connecting from a Mac
client with the globalSAN iSCSI initiator.
>>
>> Since this is a SSD you're talking about, unless you have enabled
>> nonvolatile write cache on that disk (which you should never do), and the
>> disk incorrectly handles cache flush commands (which it should never do),
>> then the supercap is irrelevant. All ZIL writes are to be done
>> syn
On 26/05/2010 10:08, Martijn de Munnik wrote:
I have several home directories on a Solaris server. I want to move
these home directories to a S7000 storage. I know I can use zfs send |
zfs receive to move zfs filesystems. Can this be done to a S7000 storage
using ssh?
You may be able to use the
Hi,
I have several home directories on a Solaris server. I want to move these home
directories to a S7000 storage. I know I can use zfs send | zfs receive to move
zfs filesystems. Can this be done to a S7000 storage using ssh?
thanks,
Martijn
YoungGuns
Kasteleinenkampweg 7b
5222 AX 's-Hertogenbosch
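For context, between two plain Solaris hosts the kind of move being asked about is typically piped over ssh like this (a sketch; `pool/home`, `tank/home`, and `backuphost` are placeholder names):

```shell
# Snapshot the filesystem and stream it to the remote host.
zfs snapshot pool/home@migrate
zfs send pool/home@migrate | ssh backuphost zfs receive tank/home

# A later incremental pass picks up changes made since the first send.
zfs snapshot pool/home@migrate2
zfs send -i pool/home@migrate pool/home@migrate2 | \
    ssh backuphost zfs receive tank/home
```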