Hi,
In simple terms, the ARC is divided into an MRU and an MFU side.
target size (c) = target MRU size (p) + target MFU size (c-p)
On Solaris, to get from the MRU to the MFU side, the block must be
read at least once in 62.5 milliseconds. For pure read-once workloads,
the data won't move to the MFU side.
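For anyone who wants to watch those two targets on a live system, they are
exposed as standard arcstats kstat counters on Solaris; this is just a quick
way to look, and the MFU target is simply c minus p, matching the formula above:
  # ARC target size (c), MRU target (p), and current size
  kstat -p zfs:0:arcstats:c zfs:0:arcstats:p zfs:0:arcstats:size
  # or dump all the ARC counters at once
  kstat -n arcstats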
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote:
> On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone wrote:
>
> > On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
> > > By the way, I see that now one of the disks is listed as degraded - too
> > many errors. Is there a goo
On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone wrote:
> On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
> > By the way, I see that now one of the disks is listed as degraded - too
> many errors. Is there a good way to identify exactly which of the disks it
> is?
>
> It's hidde
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote:
> By the way, I see that now one of the disks is listed as degraded - too many
> errors. Is there a good way to identify exactly which of the disks it is?
It's hidden in iostat -E, of all places.
--
Dan.
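A rough sketch of how that lookup usually goes (the device name here is
hypothetical; take the real one from zpool status):
  zpool status -x        # shows which vdev, e.g. c7t3d0, is degraded
  iostat -En c7t3d0      # error counts plus Vendor, Product and Serial No. for that disk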
Memtest didn't show any errors, but between Frank, early in the thread, saying
that he had found memory errors that memtest didn't catch, and the removal of
DIMMs apparently fixing the problem, I jumped too soon to the conclusion that it
was the memory. Certainly there are other explanations.
I see that
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote:
> On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage
> wrote:
>
> > It certainly has symptoms that match a marginal power supply, but I
> > measured the power consumption some time ago and found it comfortably within
> > the power supply's c
On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage wrote:
> It certainly has symptoms that match a marginal power supply, but I
> measured the power consumption some time ago and found it comfortably within
> the power supply's capacity. I've also wondered if the RAM is fine, but
> there is just som
On Apr 5, 2010, at 6:32 PM, Learner Study wrote:
> Hi Folks:
>
> I'm wondering what is the correct flow when both raid5 and de-dup are
> enabled on a storage volume
>
> I think we should do de-dup first and then raid5 ... is that
> understanding correct?
Yes. If you look at the (somewhat ou
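In practical terms, dedup is just a dataset property; the pool dedupes blocks
in the write pipeline before they are ever laid out on the raidz vdev, so there
is nothing extra to configure at the RAID level. A minimal sketch, with made-up
pool and dataset names:
  zfs set dedup=on tank/vol     # dedup is applied above the raidz layer
  zpool get dedupratio tank     # how much the pool is actually deduplicating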
It certainly has symptoms that match a marginal power supply, but I measured
the power consumption some time ago and found it comfortably within the power
supply's capacity. I've also wondered if the RAM is fine, but there is just
some kind of flaky interaction of the ram configuration I had wit
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote:
> Hi Jeff:
>
> I'm a bit confused...did you say "Correct" to my orig email or the
> reply from Daniel...
Jeff is replying to your mail, not mine.
It looks like he's read your question a little differently. By that
reading, you are correct.
On Mon, Apr 5, 2010 at 8:16 PM, Brad wrote:
> I'm wondering if the author is talking about "cache mirroring" where the
> cache is mirrored between both controllers. If that is the case, is he
> saying that for every write to the active controller, a second write is issued
> on the passive controller to keep the cache mirrored?
The author mentions multipathing software in the blog entry. Kind of
hard to mix that up with cache mirroring if you ask me.
On 4/5/2010 9:16 PM, Brad wrote:
I'm wondering if the author is talking about "cache mirroring" where the cache
is mirrored between both controllers. If that is the ca
Hi Jeff:
I'm a bit confused...did you say "Correct" to my orig email or the
reply from Daniel...Is there a doc that may explain it better?
Thanks!
On Mon, Apr 5, 2010 at 6:54 PM, jeff.bonw...@oracle.com
wrote:
> Correct.
>
> Jeff
>
> Sent from my iPhone
>
> On Apr 5, 2010, at 6:32 PM, Learner
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote:
> I'm wondering what is the correct flow when both raid5 and de-dup are
> enabled on a storage volume
>
> I think we should do de-dup first and then raid5 ... is that
> understanding correct?
Not really. Strictly speaking, ZFS do
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote:
> Is the database running locally on the machine? Or at the other end of
> something like nfs? You should have better performance using your present
> config than just about any other config ... By enabling the log devices,
> such
Hi Folks:
I'm wondering what is the correct flow when both raid5 and de-dup are
enabled on a storage volume
I think we should do de-dup first and then raid5 ... is that
understanding correct?
Thanks!
I'm wondering if the author is talking about "cache mirroring" where the cache
is mirrored between both controllers. If that is the case, is he saying that
for every write to the active controller, a second write is issued on the passive
controller to keep the cache mirrored?
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote:
> Looks like it was RAM. I ran memtest+ 4.00, and it found no problems.
Then why do you suspect the ram?
Especially with 12 disks, another likely candidate could be an
overloaded power supply. While there may be problems showing u
On 04/05/10 11:43, Andreas Höschler wrote:
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG devices
which are striped. And if this machine is running Solaris 10, then
you cannot
remove a lo
On 04/ 5/10 05:28 AM, Eric Schrock wrote:
On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote:
Am I missing something here? Under what conditions can I expect hot spares to
be recruited?
Hot spares are activated by the zfs-retire agent in response to a list.suspect
event containing o
On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote:
>
> Am I missing something here? Under what conditions can I expect hot spares
> to be recruited?
Hot spares are activated by the zfs-retire agent in response to a list.suspect
event containing one of the following faults:
fault.fs.z
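To see whether such a fault was actually diagnosed (and therefore whether the
zfs-retire agent had anything to react to), the standard FMA tools are enough;
a rough sketch, nothing here is specific to the test above:
  fmadm faulty       # currently diagnosed faults, e.g. fault.fs.zfs.device
  fmdump -v          # fault log, including the suspect lists the agents saw
  fmdump -eV         # raw ereports (ereport.fs.zfs.*) that fed the diagnosis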
> From: Kyle McDonald [mailto:kmcdon...@egenera.com]
>
> So does your HBA have newer firmware now than it did when the first
> disk
> was connected?
> Maybe it's the HBA that is handling the new disks differently now, than
> it did when the first one was plugged in?
>
> Can you down rev the HBA FW
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andreas Höschler
>
> Thanks for the clarification! This is very annoying. My intent was to
> create a log mirror. I used
>
> zpool add tank log c1t6d0 c1t7d0
>
> and this was obviously f
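For reference, the difference is a single keyword; the first line below is what
was actually run (two striped log devices), the second is the form that creates
a mirrored log, using the same device names as in the mail:
  zpool add tank log c1t6d0 c1t7d0            # two separate (striped) log devices
  zpool add tank log mirror c1t6d0 c1t7d0     # one mirrored log device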
On Apr 5, 2010, at 3:24 PM, Peter Schuller wrote:
> I will have to look into it in better detail to understand the
> consequences. Is there a paper that describes the ARC as it is
> implemented in ZFS (since it clearly diverges from the IBM ARC)?
There are various blogs, but perhaps the best docum
On 04/05/10 15:24, Peter Schuller wrote:
In the urxvt case, I am basing my claim on informal observations.
I.e., "hit terminal launch key, wait for disks to rattle, get my
terminal". Repeat. Only by repeating it very many times in very rapid
succession am I able to coerce it to be cached such tha
> In simple terms, the ARC is divided into an MRU and an MFU side.
> target size (c) = target MRU size (p) + target MFU size (c-p)
>
> On Solaris, to get from the MRU to the MFU side, the block must be
> read at least once in 62.5 milliseconds. For pure read-once workloads,
> the data won't move to the MFU side.
On Apr 5, 2010, at 2:23 PM, Peter Schuller wrote:
> That's a very general statement. I am talking about specifics here.
> For example, you can have mountains of evidence that shows that a
> plain LRU is "optimal" (under some conditions). That doesn't change
> the fact that if I want to avoid a sequ
> The ARC is designed to use as much memory as is available up to a limit. If
> the kernel allocator needs memory and there is none available, then the
> allocator requests memory back from the zfs ARC. Note that some systems have
> multiple memory allocators. For example, there may be a memory a
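On Solaris, a quick way to see how much memory the ARC is holding relative to
the rest of the kernel and the free lists is the ::memstat dcmd (run as root;
the exact category names vary a little between builds):
  echo ::memstat | mdb -k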
- "Kyle McDonald" wrote:
> I've seen the Nexenta and EON webpages, but I'm not looking to build
> my own.
>
> Is there anything out there I can just buy?
I've set up a few systems with Supermicro hardware - works well and doesn't cost
a whole lot
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
On Mon, 5 Apr 2010, Peter Schuller wrote:
It may be FreeBSD-specific, but note that I am not talking about the
amount of memory dedicated to the ARC and how it balances with free
memory on the system. I am talking about eviction policy. I could be
wrong, but I didn't think the ZFS port made significan
Kyle McDonald writes:
> I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
>
> Is there anything out there I can just buy?
In Germany, someone sells preconfigured hardware based on Nexenta:
http://www.thomas-krenn.com/de/storage-loesungen/storage-systeme/nexentastor/nexe
Install Nexenta on a Dell PowerEdge?
or one of these http://www.pogolinux.com/products/storage_director
On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald wrote:
> I've seen the Nexenta and EON webpages, but I'm not looking to build my
> own.
>
> Is there anything out there I can just buy?
>
> -Kyl
I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
Is there anything out there I can just buy?
-Kyle
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is
released on Genunix! This release marks the end of SXCE releases and Sun
Microsystems as we know it! It is dubbed the Sun-set release! Many thanks to Al
at Genunix.org for download hosting and serving the Opensolaris
> It sounds like you are complaining about how FreeBSD has implemented zfs in
> the system rather than about zfs in general. These problems don't occur
> under Solaris. Zfs and the kernel need to agree on how to allocate/free
> memory, and it seems that Solaris is more advanced than FreeBSD in th
On Mon, 5 Apr 2010, Peter Schuller wrote:
For desktop use, and presumably rapidly changing non-desktop uses, I
find the ARC cache pretty annoying in its behavior. For example this
morning I had to hit my launch-terminal key perhaps 50 times (roughly)
before it would start completing without disk
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG
devices
which are striped. And if this machine is running Solaris 10, then
you cannot
remove a log device because those updates have not made
On Sun, 4 Apr 2010, Brad wrote:
I had always thought that with mpxio, it load-balances IO requests
across your storage ports, but this article
http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
has got me thinking it's not true.
"The available bandwidth is 2 or 4Gb/s (20
On Apr 5, 2010, at 11:43 AM, Garrett D'Amore wrote:
>
> I see ereport.fs.zfs.io_failure, and ereport.fs.zfs.probe_failure. Also,
> ereport.io.service.lost and ereport.io.device.inval_state. There is indeed a
> fault.fs.zfs.device in the list as well.
The ereports are not interesting, only
Response below...
2010/4/5 Andreas Höschler
> Hi Edward,
>
> thanks a lot for your detailed response!
>
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Andreas Höschler
>>>
>>> • I would like to remove the two SSDs as log devices from
I would appreciate it if somebody could clarify a few points.
I am doing some random WRITE (100% writes, 100% random) testing and observe
that the ARC grows way beyond the "hard" limit during the test. The hard limit is
set to 512 MB via /etc/system and I see the size going up to 1 GB - how come is it
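For context, this is the usual shape of that tuning and the counters to compare
it against at run time; the 512 MB value just mirrors the poster's setting:
  # /etc/system (takes effect after a reboot)
  set zfs:zfs_arc_max = 0x20000000          # 512 MB
  # runtime view of the cap and the current ARC size
  kstat -p zfs:0:arcstats:c_max zfs:0:arcstats:size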
Not true. There are different ways that a storage array, and its
controllers, connect to the host-visible front-end ports, which might be
confusing the author, but I/O isn't duplicated as he suggests.
On 4/4/2010 9:55 PM, Brad wrote:
I had always thought that with mpxio, it load-balances IO re
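For what it's worth, on a Solaris host with mpxio enabled the load-balancing
policy actually in effect for a LUN can be checked directly rather than
inferred; a sketch, with a placeholder device path:
  mpathadm list lu                         # logical units and their operational path counts
  mpathadm show lu /dev/rdsk/cXtYYYd0s2    # current load-balance setting and per-path state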
Hi Edward,
thanks a lot for your detailed response!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andreas Höschler
• I would like to remove the two SSDs as log devices from the pool and
instead add them as a separate pool for sole use by
On 4/4/2010 11:04 PM, Edward Ned Harvey wrote:
>> Actually, it's my experience that Sun (and other vendors) do exactly
>> that for you when you buy their parts - at least for rotating drives, I
>> have no experience with SSD's.
>>
>> The Sun disk label shipped on all the drives is set up to make the
Alright, I've run the benchmarks and there isn't a difference worth mentioning,
except that I only get about 30 MB/s (to my Mac, which has an SSD as its system
disk). I've also tried copying to a RAM disk with slightly better results.
Well, now that I've restarted the server I probably won't see the
While testing a zpool with a different storage adapter using my "blkdev"
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (which is a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to work where the hot spare woul
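A minimal way to reproduce that layout on scratch devices (the names below are
placeholders, not the ones from the blkdev test):
  zpool create testpool raidz c2t0d0 c2t1d0 spare c2t2d0
  zpool status testpool     # after the induced failure, check whether the spare was attached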
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
> pool: s1
> state: ONLINE
> scrub: none requested
> config:
>
> NAME        STATE     READ WRITE CKSUM
> s1          ONLINE       0     0     0
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andreas Höschler
>
> I would like to remove the two SSDs as log devices from the pool and
> instead add them as a separate pool for sole use by the database to
> see how this enhances perform
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
> >
> > I have a problem with my zfs system, it's getting slower and slower
> > over time. When the OpenSolaris machine is rebooted and just started I
> > get abo
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson
>
> I have a problem with my zfs system, it's getting slower and slower
> over time. When the OpenSolaris machine is rebooted and just started I
> get about 30-35MB/s in read
Hi all,
while setting up our X4140 I have - following suggestions - added two
SSDs as log devices as follows
zpool add tank log c1t6d0 c1t7d0
I currently have
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool
On 5 apr 2010, at 04.35, Edward Ned Harvey wrote:
>> When running the card in copyback write cache mode, I got horrible
>> performance (with zfs), much worse than with copyback disabled
>> (which I believe should mean it does write-through), when tested
>> with filebench.
>
> When I benchmark my
Hello,
For desktop use, and presumably rapidly changing non-desktop uses, I
find the ARC cache pretty annoying in its behavior. For example this
morning I had to hit my launch-terminal key perhaps 50 times (roughly)
before it would start completing without disk I/O. There are plenty of
other examp