Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.htm
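For anyone who wants to kick the tires once this hits a build, the new top-level
vdev type is raidz3; a quick sketch (the pool and device names below are made up):

  # one triple-parity group; raidz3 needs at least four devices
  zpool create tank raidz3 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
  zpool status tank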
which gap?
'RAID-Z should mind the gap on writes' ?
I believe this is in reference to the raid 5 write hole, described
here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
It's not.
So I'm not sure what the 'RAID-Z should mind the gap
Hey Bob,
MTTDL analysis shows that given normal environmental conditions, the
MTTDL of RAID-Z2 is already much longer than the life of the
computer or the attendant human. Of course sometimes one encounters
unusual conditions where additional redundancy is desired.
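For reference, the usual back-of-the-envelope MTTDL model behind statements like
this (independent failures, N disks per group, ignoring unrecoverable read errors)
is, in rough form:

  \mathrm{MTTDL}_{\mathrm{raidz2}} \approx \frac{\mathrm{MTTF}^{3}}{N(N-1)(N-2)\,\mathrm{MTTR}^{2}},
  \qquad
  \mathrm{MTTDL}_{\mathrm{raidz3}} \approx \frac{\mathrm{MTTF}^{4}}{N(N-1)(N-2)(N-3)\,\mathrm{MTTR}^{3}}

Each additional parity level buys roughly another factor of MTTF/(N x MTTR), which
is why raidz2 already pushes MTTDL far past the service life of the hardware under
normal conditions.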
To what analysis are yo
The i7 and Xeon 3300 motherboards that say they have ECC support have exactly this
problem as well.
On Wed, Jul 22, 2009 at 4:53 PM, Nicholas Lee wrote:
>
>
> On Tue, Jul 21, 2009 at 4:20 PM, chris wrote:
>
>> Thanks for your reply.
>> What if I wrap the ram in a sheet of lead?;-)
>> (hopefully the
On Tue, Jul 21, 2009 at 4:20 PM, chris wrote:
> Thanks for your reply.
> What if I wrap the ram in a sheet of lead?;-)
> (hopefully the lead itself won't be radioactive)
>
> I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
> know whether this means ECC works, or ECC
Where is the best place to read about the latest ZFS support for SSDs and its
roadmap, now that the latest ZFS release adds SSD management to ZFS?
- Henry
- Original Message
From: Richard Elling
To: Louis-Frédéric Feuillette
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, July 21, 2009 5:43:23 P
On Jul 21, 2009, at 3:00 PM, Louis-Frédéric Feuillette wrote:
On Tue, 2009-07-21 at 14:45 -0700, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20
GBytes of
data a day on a ZFS file system for 5 years (according to Intel) to
reach the expected endurance.
Fo
On 07/21/09 03:00 PM, Nicolas Williams wrote:
On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20 GBytes
Or overwrite (since the overwrites turn into COW writes of new blocks
and the old blocks are released if n
On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
> But to put this in perspective, you would have to *delete* 20 GBytes
Or overwrite (since the overwrites turn into COW writes of new blocks
and the old blocks are released if not referred to from a snapshot).
> of data a day on a ZFS
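Taking the thread's numbers at face value, 20 GBytes/day over 5 years works out to
a total that presumably corresponds to Intel's rated write endurance; a quick
sanity check:

  # total data written at 20 GBytes/day over 5 years
  echo '20 * 365 * 5' | bc      # 36500 GBytes, i.e. roughly 36.5 TBytes over the drive's life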
Louis-Frédéric Feuillette wrote:
On Tue, 2009-07-21 at 14:45 -0700, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20 GBytes of
data a day on a ZFS file system for 5 years (according to Intel) to
reach the expected endurance.
Forgive my ignorance, but is thi
On Tue, 21 Jul 2009, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20 GBytes of
data a day on a ZFS file system for 5 years (according to Intel) to reach
the expected endurance. I don't know many people who delete that
much data continuously (I suspect that th
On Tue, 2009-07-21 at 14:45 -0700, Richard Elling wrote:
> But to put this in perspective, you would have to *delete* 20 GBytes of
> data a day on a ZFS file system for 5 years (according to Intel) to
> reach the expected endurance.
Forgive my ignorance, but is this not exactly what a SSD ZIL d
On Jul 21, 2009, at 2:24 PM, Bob Friesenhahn wrote:
On Tue, 21 Jul 2009, Richard Elling wrote:
With wear leveling and zfs you would probably discover that the
drive suddenly starts to wear out all at once, once it reaches the
end of its lifetime. Unless drive ages are carefully staggered,
On Tue, 21 Jul 2009, Richard Elling wrote:
With wear leveling and zfs you would probably discover that the drive
suddenly starts to wear out all at once, once it reaches the end of its
lifetime. Unless drive ages are carefully staggered, or different types of
drives are intentionally used, it
On Wed 22/07/09 08:21 , "Richard Elling" richard.ell...@gmail.com sent:
> On Jul 21, 2009, at 12:49 PM, Bob Friesenhahn wrote:
>> With wear leveling and zfs you would probably discover that the
>> drive suddenly starts to wear out all at once, once it reaches the
>> end of its lifetime. Unless
Richard Elling wrote:
On Jul 21, 2009, at 12:49 PM, Bob Friesenhahn wrote:
On Tue, 21 Jul 2009, Andrew Gabriel wrote:
The X25-M drives referred to are Intel's Mainstream drives, using MLC
flash.
The Enterprise grade drives are X25-E, which currently use SLC flash
(less dense, more reliable,
On Jul 21, 2009, at 12:49 PM, Bob Friesenhahn wrote:
On Tue, 21 Jul 2009, Andrew Gabriel wrote:
The X25-M drives referred to are Intel's Mainstream drives, using
MLC flash.
The Enterprise grade drives are X25-E, which currently use SLC
flash (less dense, more reliable, much longer lasting/
On Tue, 21 Jul 2009, Andrew Gabriel wrote:
The X25-M drives referred to are Intel's Mainstream drives, using MLC flash.
The Enterprise grade drives are X25-E, which currently use SLC flash (less
dense, more reliable, much longer lasting/more writes). The expected lifetime
is similar to an Ente
asher...@versature.com said:
> And, on that subject, is there truly a difference between Seagate's line-up
> of 7200 RPM drives? They seem to now have a bunch:
> . . .
> Other manufacturers seem to have similar lineups. Is the difference going to
> matter to me when putting a mess of them into a
Bob Friesenhahn wrote:
On Tue, 21 Jul 2009, Richard Elling wrote:
FYI, this is actually a pretty good article which talks about
improvements in SSDs. Don't bet against Moore's Law :-)
Intel boosts speed, cuts prices of solid-state drives
http://news.cnet.com/8301-13924_3-10291582-64.html?tag=n
And pstack won't give stack on bootadm process:
devu...@zfs05:/var/crash/zfs05# pstack 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# pstack -F 23870
23870: /sbin/bootadm -a update_all
devu...@zfs05:/var/crash/zfs05# kill -9 23870
devu...@zfs05:/var/crash/zfs05# kill -9
Some more info - the system won't shut down; issuing shutdown -g0 -i5 just sits
there doing nothing.
Then I tried to find locks on the savecore I took - mdb crashes:
mdb -k ./unix.1 ./vmcore.1
mdb: failed to read panicbuf and panic_reg -- current register set will be
unavailable
Loading modules
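If the dump actually loads despite the panicbuf warning, one common way to pull
the kernel stacks for the stuck bootadm process (pid 23870 from the pstack
attempts above) is the pid2proc/findstack pipeline; a sketch, assuming the dump
is otherwise usable:

  mdb -k ./unix.1 ./vmcore.1
  > 0t23870::pid2proc | ::walk thread | ::findstack -v   # kernel stack of each thread in the process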
On Tue, 21 Jul 2009, Richard Elling wrote:
FYI, this is actually a pretty good article which talks about
improvements in SSDs. Don't bet against Moore's Law :-)
Intel boosts speed, cuts prices of solid-state drives
http://news.cnet.com/8301-13924_3-10291582-64.html?tag=newsEditorsPicksArea.0
Regarding the SATA card and the mainboard slots, make sure that
whatever you get is compatible with the OS. In my case I chose
OpenSolaris which lacks support for Promise SATA cards. As a result,
my choices were very limited since I had chosen a Chenbro ES34069 case
and Intel Little Falls 2 mainboa
You might be running into 6827199 - zfs mount -a performs mounts in the wrong
order.
There is a workaround contained in the bug
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6827199)
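If you do hit 6827199, one way to confirm it (not necessarily the exact workaround
from the bug report) is to mount the affected datasets by hand in parent-before-child
order instead of relying on zfs mount -a; dataset names below are made up:

  # mount parents before children so nothing lands on top of an already-populated directory
  zfs mount tank/export
  zfs mount tank/export/home
  zfs mount tank/export/home/alice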
FYI, this is actually a pretty good article which talks about
improvements in SSDs. Don't bet against Moore's Law :-)
Intel boosts speed, cuts prices of solid-state drives
http://news.cnet.com/8301-13924_3-10291582-64.html?tag=newsEditorsPicksArea.0
-- richard
On Jul 21, 2009, at 10:14 AM, Andre Lue wrote:
Hi Ian,
Thanks for the reply. I will check your recommendation when I
get a chance. However, this happens on any pool that has hierarchical
zfs filesystems. I noticed this problem started with snv_114.
This same filesystem structure la
On Jul 20, 2009, at 12:48 PM, Frank Middleton wrote:
On 07/19/09 06:10 PM, Richard Elling wrote:
Not that bad. Uncommitted ZFS data in memory does not tend to
live that long. Writes are generally out to media in 30 seconds.
Yes, but memory hits are instantaneous. On a reasonably busy
system
Hi Ian,
Thanks for the reply. I will check your recommendation when I get a chance.
However, this happens on any pool that has hierarchical zfs filesystems. I
noticed this problem started with snv_114. This same filesystem structure
last worked fine with snv_110.
On Jul 21, 2009, at 6:25 AM, F. Wessels wrote:
So to wrap it up. According to Will, a supermicro chassis using a
single lsi expander connected to sata disks can utilize the wide sas
port between hba and the chassis (like the J4500 Richard mentioned;
as much as I like these systems (thumper etc
Hi,
I tried to contact Joel Miller, but the mail server responds with "User unknown" :(
Is there anybody else here who received the modified firmware based on 3.2.7
from Joel Miller? I have several 6120 arrays at home and I'd like to create a
large ZFS pool over all disks without the built-in RAID c
Joseph L. Casale wrote:
Another thing to remember is the expansion slots. You mentioned putting
in a SATA controller for more drives. You'll want to make sure the board
has a slot that can handle the card you want. If you're not using
graphics then any board with a single PCI-E x16 slot should ha
Russel wrote:
OK.
So do we have a zpool import --xtg 56574 mypoolname
or help to do it (script?)
Russel
We are working on the pool rollback mechanism and hope to have that
soon. The ZFS team recognizes that not all hardware is created equal and
thus the need for this mechanism. We are usi
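For what it's worth, the recovery option that eventually showed up in zpool import
looks roughly like the following; it was not available at the time of this thread,
and the pool name is taken from Russel's example:

  # dry run: report how far back the pool would be rolled without changing anything
  zpool import -nF mypoolname
  # discard the last few transactions and import the pool
  zpool import -F mypoolname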
>Another thing to remember is the expansion slots. You mentioned putting
>in a SATA controller for more drives. You'll want to make sure the board
>has a slot that can handle the card you want. If you're not using
>graphics then any board with a single PCI-E x16 slot should handle
>anything. But if
chris wrote:
Thanks for your reply.
What if I wrap the ram in a sheet of lead?;-)
(hopefully the lead itself won't be radioactive)
I've been looking at the same thing recently.
I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
know whether this means ECC works
On 21-Jul-09, at 9:25 , F. Wessels wrote:
So to wrap it up. According to Will, a supermicro chassis using a
single lsi expander connected to sata disks can utilize the wide sas
port between hba and the chassis (like the J4500 Richard mentioned;
as much as I like these systems (thumper etc), the
So to wrap it up. According to Will, a supermicro chassis using a single lsi
expander connected to sata disks can utilize the wide sas port between hba and
the chassis (like the J4500 Richard mentioned; as much as I like these systems
(thumper etc), they're way out of my budget). Will did see more
Roger wrote:
Hello,
I am new to Solaris.
Several PDFs out there suggest any of the following:
a) Solaris comes with 128bit encryption (full filesystem)
b) Solaris supports full root encryption.
Can you send a pointer to these please, because the information is not
correct and I would like to
Peter Farmer wrote:
> Super!
> Does the export need to be called just before I import the pool to
> another server,
Yes that is correct.
> or can the export be called at the time the pool is
> created?
No. It must be done on the server that is exporting the pool so that it
can be imported as Daniel
Super!
Does the export need to be called just before I import the pool to
another server, or can the export be called at the time the pool is
created? Because in a failover I wouldn't be able to "export" the
pool before importing it.
Thanks,
Peter
2009/7/20 Daniel J. Priem :
> Peter Farmer w
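To make the failover case concrete, a sketch with a hypothetical pool name (and
the usual caveat that forcing an import of a pool that might still be active on
the other node risks corruption):

  # planned move: export on the old host, import on the new one
  oldhost# zpool export tank
  newhost# zpool import tank

  # old host died before it could export: force the import,
  # but only once you're sure the old host is really down
  newhost# zpool import -f tank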
My understanding of the root cause of these issues is that the vast majority
are happening with consumer-grade hardware that reports to ZFS that writes
have succeeded, when in fact they are still in the cache.
When that happens, ZFS believes the data is safely written, but a power cut or
c