On Fri, 21 May 2010, Don wrote:
You know- it would probably be sufficient to provide the SSD with
_just_ a big capacitor bank. If the host lost power it would stop
writing and if the SSD still had power it would probably use the
idle time to flush its buffers. Then there would be world peace!
On Fri, 21 May 2010, Brandon High wrote:
My understanding is that the controller contains enough cache to
buffer enough data to write a complete erase block size, eliminating
the need to read / erase / write that a partial block write entails.
It's reported to do a copy-on-write, so it doesn't need to read/erase/write data in place.
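Brandon's point - buffer a full erase block and copy-on-write the data elsewhere - can be sketched as a toy flash translation layer. Everything here is illustrative (the geometry and all names are mine, not SandForce's actual design): an overwrite is redirected to the next free page of an already-erased block, so the old copy is abandoned and no live block is ever read/erased/rewritten in place.

```python
# Toy copy-on-write flash translation layer. The constants are
# illustrative (4 KiB pages, 256 KiB erase blocks, typical for the
# era), not any vendor's real geometry.
PAGES_PER_BLOCK = 64

class CowFtl:
    def __init__(self):
        self.mapping = {}    # logical page -> (physical block, page)
        self.open_block = 0  # pre-erased block currently being filled
        self.next_page = 0

    def write(self, logical_page):
        """Redirect the write to the next free page of the open block.

        The old copy of logical_page is simply left behind as garbage;
        no live block is read, erased, and rewritten for a small write.
        """
        if self.next_page == PAGES_PER_BLOCK:
            self.open_block += 1  # block full: move to a fresh erased block
            self.next_page = 0
        self.mapping[logical_page] = (self.open_block, self.next_page)
        self.next_page += 1
```

Overwriting the same logical page twice lands the data in two different physical pages; only the mapping changes, which is why the read/erase/write penalty of a partial block write disappears.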
On Fri, 21 May 2010, David Dyer-Bennet wrote:
To be comfortable (I don't ask for "know for a certainty"; I'm not sure
that exists outside of "faith"), I want a claim by the manufacturer and
multiple outside tests in "significant" journals -- which could be the
blog of somebody I trusted, as well.
> Now, if someone would make a Battery FOB, that gives broken SSD 60
> seconds of power, then we could use the consumer SSD's in servers
> again with real value instead of CYA value.
You know- it would probably be sufficient to provide the SSD with _just_ a big
capacitor bank. If the host lost power it would stop writing and if the SSD
still had power it would probably use the idle time to flush its buffers.
> "dd" == David Dyer-Bennet writes:
dd> Just how DOES one know something for a certainty, anyway?
science.
Do a test like Lutz did on the X25-M G2; see the list archives, 2010-01-10.
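Lutz's test in the archives is the reference; for anyone who wants the gist, here is a minimal diskchecker-style sketch (my own illustration, not Lutz's actual script): a writer fsyncs checksummed, numbered records and notes the last sequence number acknowledged as stable; after you pull the plug, a verifier checks that every acknowledged record survived. A drive that ignores cache flushes will fail the verify step.

```python
import os
import struct
import zlib

REC = struct.Struct("<IQ")  # crc32, sequence number

def write_records(path, n):
    """Writer half: append checksummed, numbered records, fsync after
    each, and return the last sequence number acknowledged as stable."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    last_acked = -1
    for seq in range(n):
        payload = struct.pack("<Q", seq)
        os.write(fd, REC.pack(zlib.crc32(payload), seq))
        os.fsync(fd)      # drive must have the record on stable media now
        last_acked = seq  # pull the plug any time after this point
    os.close(fd)
    return last_acked

def verify_records(path, last_acked):
    """Verifier half: every record up to last_acked must be intact.
    A drive that ignored the flush will be missing acknowledged records."""
    seen = set()
    with open(path, "rb") as f:
        while chunk := f.read(REC.size):
            if len(chunk) < REC.size:
                break  # torn tail record; anything past it doesn't count
            crc, seq = REC.unpack(chunk)
            if crc == zlib.crc32(struct.pack("<Q", seq)):
                seen.add(seq)
    return all(seq in seen for seq in range(last_acked + 1))
```

Run the writer against the device under test, cut power mid-run, then run the verifier against what survived; any acknowledged-but-missing record means the drive lied about the flush.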
On Thu, May 20, 2010 at 2:23 PM, Miika Vesti wrote:
> "I'm pretty sure that all SandForce-based SSDs don't use DRAM as their
> cache, but take a hunk of flash to use as scratch space instead. Which
> means that they'll be OK for ZIL use."
I've read conflicting reports about whether the controller contains a volatile DRAM cache.
This is interesting. I thought all Vertex 2 SSDs were good choices for ZIL
but this does not seem to be the case.
According to http://www.legitreviews.com/article/1208/1/ Vertex 2 LE,
Vertex 2 Pro and Vertex 2 EX are SF-1500 based but Vertex 2 (without any
suffix) is SF-1200 based.
Here is the
On Fri, May 21, 2010 10:19, Bob Friesenhahn wrote:
> On Fri, 21 May 2010, Miika Vesti wrote:
>
>> AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile
>> NAND
>> grid. Whether it respects or ignores the cache flush seems irrelevant.
>>
>> There has been previous discussion about this:
On Fri, 21 May 2010, Miika Vesti wrote:
AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile NAND
grid. Whether it respects or ignores the cache flush seems irrelevant.
There has been previous discussion about this:
http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/35702
Hi guys.
yep I know about the ZIL, and SSD Slogs.
While setting Nexenta up it offered to disable the ZIL entirely. For
now I left it on. In the end I'll end up disabling the ZIL for our
software builds (hopefully only for specific filesystems, once that
feature is released) since:
1) The bui
> AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile NAND
> grid. Whether it respects or ignores the cache flush seems irrelevant.
>
> There has been previous discussion about this:
> http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/35702
>
> "I'm pretty sure that all SandForce-based SSDs don't use DRAM as their
> cache, but take a hunk of flash to use as scratch space instead. Which
> means that they'll be OK for ZIL use."
If you do not care about this NFS problem (or the others) then maybe
you can just disable the ZIL. It is a matter of working through step
1. Working through STEP 1 might be ``doesn't affect us. Disable
ZIL.'' Or it might be ``get slog with supercap''. STEP 1 will never
be ``plug in OCZ Vertex 2.''
> "rsk" == Roy Sigurd Karlsbakk writes:
> "dm" == David Magda writes:
> "tt" == Travis Tabbal writes:
rsk> Disabling ZIL is, according to ZFS best practice, NOT
rsk> recommended.
dm> As mentioned, you do NOT want to run with this in production,
dm> but it is a quick way to test.
On Thu, May 20, 2010 13:58, Roy Sigurd Karlsbakk wrote:
> - "Travis Tabbal" skrev:
>
>> Disable ZIL and test again. NFS does a lot of sync writes and kills
>> performance. Disabling ZIL (or using the synchronicity option if a
>> build with that ever comes out) will prevent that behavior, and should
>> get your NFS performance close to local.
- "Travis Tabbal" skrev:
> Disable ZIL and test again. NFS does a lot of sync writes and kills
> performance. Disabling ZIL (or using the synchronicity option if a
> build with that ever comes out) will prevent that behavior, and should
> get your NFS performance close to local. It's up to you if you want to
> leave it that way.
Disable ZIL and test again. NFS does a lot of sync writes and kills
performance. Disabling ZIL (or using the synchronicity option if a build with
that ever comes out) will prevent that behavior, and should get your NFS
performance close to local. It's up to you if you want to leave it that way.
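The cost Travis describes is easy to demonstrate outside NFS entirely. This sketch (plain Python against a local temp file; the absolute numbers only illustrate the shape of the effect, not real NFS or ZFS behavior) compares throughput when every write must be committed before the next - the guarantee the ZIL exists to make cheap - against buffered writes with a single flush at the end, which is roughly the bargain you make when you disable the ZIL.

```python
import os
import tempfile
import time

def write_throughput(sync, count=100, size=4096):
    """Return bytes/second for `count` writes of `size` bytes.

    sync=True commits every write to stable storage before the next
    one - the ordering NFS asks for and a slog makes affordable.
    sync=False buffers everything and flushes once at the end - the
    behavior you are opting into when you disable the ZIL.
    """
    buf = b"x" * size
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.time()
        for _ in range(count):
            os.write(fd, buf)
            if sync:
                os.fsync(fd)  # wait for the device to acknowledge
        if not sync:
            os.fsync(fd)      # one final flush for a fair-ish comparison
        elapsed = max(time.time() - t0, 1e-9)
        return count * size / elapsed
    finally:
        os.close(fd)
        os.unlink(path)
```

On rotating disks the gap between the two modes is typically orders of magnitude, which is exactly the gap a dedicated slog device (or disabling the ZIL, with the attendant risk) closes.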
Hi Kyle,
It is very likely that you hit a driver bug in isp. After the reboot, take a
look at the /var/adm/messages file - anything related might shed some light.
I wouldn't suspect the Intel GigE card - it's a fairly good one and the
driver is very stable.
Also, some upgrades were posted; make sure the kernel displays 13
Hi all,
I recently installed Nexenta Community 3.0.2 on one of my servers:
IBM eSeries X346
2.8Ghz Xeon
12GB DDR2 RAM
1 builtin BGE interface for management
4 port Intel GigE card aggregated for Data
IBM ServRAID 7k with 256MB BB Cache with (isp driver)
6 RAID0 single drive LUNS (so I can use t