On 8/6/2012 2:53 PM, Bob Friesenhahn wrote:
On Mon, 6 Aug 2012, Stefan Ring wrote:
Intel's brief also clears up a prior controversy over what types of
data are actually cached: per the brief, it's both user and system
data!
So you're saying that SSDs don't generally flush data to stable medium
when instructed to? So data written before an fsync is not guaranteed
to be seen after a power-down?
I mean this as constructive criticism, not as angry bickering. I totally
respect you guys doing your own thing.
Thanks, I'll try my best to address your comments...
*) Increased capacity for high-volume applications.
We do have a select number of customers striping two
X1s to double the total capacity.
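For reference, a minimal sketch of what that striping looks like from
the pool side (pool and device names here are hypothetical; ZFS
round-robins ZIL writes across multiple top-level log vdevs):

   # add two X1s as separate log vdevs; writes are striped across them
   zpool add tank log c4t0d0 c5t0d0

A mirrored slog (zpool add tank log mirror c4t0d0 c5t0d0) is the safer
variant, trading capacity for redundancy.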
I am glad to hear that both user AND system data is stored. That is
rather reassuring. :-)
I agree!
---
[Excerpt from the linked Intel Technology Brief]
What Type of Data is Protected:
During an unsafe shutdown, firmware routines in the
Intel SSD 320 Series use stored energy to commit both user data
and system data from the volatile cache to NAND.
On 08/07/2012 12:12 AM, Christopher George wrote:
>> Is your DDRdrive product still supported and moving?
>
> Yes, we now exclusively target ZIL acceleration.
>
> We will be at the upcoming OpenStorage Summit 2012,
> and encourage those attending to stop by our booth and
> say hello :-)
>
> http://www.openstoragesummit.org/
Is it well supported for Illumos?
On Mon, Aug 6, 2012 at 2:15 PM, Stefan Ring wrote:
> So you're saying that SSDs don't generally flush data to stable medium
> when instructed to? So data written before an fsync is not guaranteed
> to be seen after a power-down?
It depends on the model. Consumer models are less likely to
immediately flush data to stable media when instructed to.
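One rough userland probe (a sketch, assuming GNU dd, whose
oflag=dsync issues O_DSYNC writes; the stock Solaris dd lacks this
flag):

   # time 1000 synchronous 4k writes; suspiciously high throughput on a
   # drive *without* power-loss protection suggests it is acknowledging
   # writes from volatile cache rather than flushing to flash
   dd if=/dev/zero of=/tank/synctest bs=4k count=1000 oflag=dsync

Note the converse doesn't hold: a cap-protected drive can legitimately
ack sync writes at cache speed.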
Are people getting Intel 330s for L2ARC and 520s for slog?
Unfortunately, the Intel 520 does *not* power protect its
on-board volatile cache (unlike the Intel 320/710 SSDs).
Intel has an eye-opening technology brief describing the
benefits of "power-loss data protection" at:
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-s
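If you're not sure which model actually sits behind a device node,
illumos' iostat reports the vendor/product strings (a quick sanity
check; output format varies by release):

   # -E prints extended device statistics, including Vendor, Product
   # and Serial for each disk
   iostat -En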
Have you not seen my answer?
http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052170.html
Thanks for the responses. I read the docs that Cindy suggested and they
were educational, but I still don't understand where the missing disk space
is. I used the zfs list command and added up all the space used. If I'm
reading it right, I have <250GB of snapshots. Zpool list shows that the
pool size is larger than the total I can account for.
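One way to break that down further is the space-oriented view of zfs
list (a sketch; the pool name is a placeholder). Also remember that
zpool list reports raw space, including raidz parity, while zfs list
reports usable space, so the two are not expected to match on a raidz
pool:

   # per-dataset breakdown; usedbysnapshots + usedbydataset +
   # usedbychildren + usedbyrefreservation add up to USED
   zfs list -o space -r tank
   # raw pool view; SIZE and ALLOC include parity overhead on raidz
   zpool list tank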
Stec ZeusRAM for slog - it's expensive and small, but it's the best out
there. OCZ Talos C for L2ARC.
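For anyone following along, attaching those devices looks roughly like
this (device names hypothetical):

   # dedicated log device (slog) holding the ZIL
   zpool add tank log c5t0d0
   # cache device (L2ARC); cache vdevs cannot be mirrored
   zpool add tank cache c6t0d0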
> I think for the cleanness of the experiment, you should also include
> "sync" after the dd's, to actually commit your file to the pool.
OK that 'fixes' it:
finsdb137@root> dd if=/dev/random of=ob bs=128k count=1 && sync && while true
> do
> ls -s ob
> sleep 1
> done
0+1 records in
0+1 records out
> Can you check whether this happens from /dev/urandom as well?
It does:
finsdb137@root> dd if=/dev/urandom of=oub bs=128k count=1 && while true
> do
> ls -s oub
> sleep 1
> done
0+1 records in
0+1 records out
1 oub
1 oub
1 oub
1 oub
1 oub
4 oub
4 oub
4 oub
4 oub
4 oub
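The jump from 1 block to 4 after roughly five seconds lines up with
the transaction group commit: ZFS defers block allocation until the
txg syncs, so ls -s under-reports until then. On illumos you can
inspect the commit interval with mdb (a sketch, assuming
kernel-debugger access; zfs_txg_timeout is in seconds):

   # print the current txg commit interval
   echo "zfs_txg_timeout/D" | mdb -k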