Hi,
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using
ZFS.
The problem is not that the ZFS ARC is using up the memory, but that the
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was delivered as part of the OpenSolaris SPARC
preview. Can anyone
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
troy.nancar...@foxtel.com.au wrote:
So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
http://www.solarisinternals.com/wiki/index.php/Arcstat
IMHO it's probably
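A quick way to see the number arcstat.pl is polling is to read the arcstats kstat directly; a minimal sketch (Solaris-specific, so treat it as illustrative rather than authoritative):

```shell
# Current ARC size in bytes -- the same counter arcstat.pl samples.
kstat -p zfs:0:arcstats:size

# For context: the current ARC target (c) and the hard ceiling (c_max).
kstat -p zfs:0:arcstats:c
kstat -p zfs:0:arcstats:c_max
```

The -p flag gives parseable name=value output, which is convenient for feeding into a monitoring check.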
Troy Nancarrow (MEL) wrote:
Hi,
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using ZFS.
The problem is not that the ZFS ARC is
Ellis, Mike wrote:
How about a generic zfs options field in the JumpStart profile?
(essentially an area where options can be specified that are all applied
to the boot-pool (with provisions to deal with a broken-out-var))
We had this discussion a while back and, IIRC, it was expected that
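For reference, the ZFS-root JumpStart profile as of Solaris 10 10/08 exposes only a single pool keyword; the generic options field proposed above would presumably sit alongside it. A sketch of the existing syntax (device names and BE name hypothetical):

```shell
# JumpStart profile fragment for a mirrored ZFS root pool.
# Syntax: pool <name> <poolsize> <swapsize> <dumpsize> <vdevlist>
pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename s10_be
```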
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC
cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
Alternatively, you can purchase non-Sun 500
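If the 15K SAS tray were pressed into service as L2ARC, attaching the devices is a one-liner; a hedged sketch with a hypothetical pool name and device paths:

```shell
# Attach two of the 146 GB 15K SAS drives as L2ARC (cache)
# devices on a pool built from the SATA tray.
zpool add tank cache c2t0d0 c2t1d0

# The drives should now show up under a "cache" section.
zpool status tank
```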
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
How about offering a choice at installation time: custom or default?
Those that don't want/need the interactive
Fajar A. Nugraha wrote:
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
troy.nancar...@foxtel.com.au wrote:
So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote:
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using
ZFS.
The problem is not that the ZFS
This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.
Lori
Rich Teer wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified.
On Wed, May 6, 2009 at 11:14 AM, Rich Teer rich.t...@rite-group.com wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
How about offering a choice at installation
Bob Friesenhahn wrote:
On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote:
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using
ZFS.
The
On Wed, 6 May 2009, Richard Elling wrote:
Memory is meant to be used. 96% RAM use is good since it represents an
effective use of your investment.
Actually, I think a percentage of RAM is a bogus metric to measure.
For example, on a 2TBytes system, you would be wasting 80 GBytes.
Perhaps
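The 80 GByte figure is just the idle 4% of a 2 TByte (here 2000 GByte) system at 96% utilisation:

```shell
# Free RAM left over at 96% use of a 2000 GB machine.
echo $(( 2000 * 4 / 100 ))   # -> 80
```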
re == Richard Elling richard.ell...@gmail.com writes:
re Note: in the Caiman world, this is only an issue for the first
re BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
be rewritten, so it's still an
Ben Rockwood's written a very useful util called arc_summary:
http://www.cuddletech.com/blog/pivot/entry.php?id=979
It's really good for looking at ARC usage (including memory usage).
You might be able to make some guesses based on kstat -n zfs_file_data
and kstat -n zfs_file_data_buf. Look for
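Those two kstats report kernel pages rather than bytes; a sketch of pulling them (Solaris-specific, illustrative only):

```shell
# Kernel memory holding cached ZFS file data, in pages.
kstat -p -n zfs_file_data
kstat -p -n zfs_file_data_buf

# Multiply the page counts by the system page size to get bytes.
pagesize
```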
On Wed, May 6, 2009 at 2:54 AM, casper@sun.com wrote:
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC
cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA
Miles Nordin wrote:
re == Richard Elling richard.ell...@gmail.com writes:
re Note: in the Caiman world, this is only an issue for the first
re BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware-backed RAID
arrays. You might be better off using
Miles Nordin wrote:
djm == Darren J Moffat darr...@opensolaris.org writes:
djm If you only present a single lun to ZFS it may not be able to
djm repair any detected errors.
And also the problems with pools becoming corrupt and unimportable,
especially when the SAN reboots
re == Richard Elling richard.ell...@gmail.com writes:
re We forget because it is no longer a problem ;-)
bug number?
re I think it is disingenuous to compare an enterprise-class RAID
re array with the random collection of hardware on which Solaris
re runs.
compare with a
Bob Friesenhahn wrote:
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware-backed RAID
arrays. You
On Thu, 7 May 2009, Scott Lawson wrote:
Something nice about the STK2540 solution is that if the server system
dies, the STK2540s can quickly be swung over to another system via a quick
'zpool import'.
Sure, provided they have it attached to a fibre channel switch or
have a nice long fibre lead.
On May 6, 2009, at 20:46, Bob Friesenhahn wrote:
After all this discussion, I am not sure if anyone adequately
answered the original poster's question as to whether a 2540 with
SAS 15K drives would provide substantial synchronous write
throughput improvement when used as an L2ARC device.
After all this discussion, I am not sure if anyone adequately answered the
original poster's question as to whether a 2540 with SAS 15K drives would
provide substantial synchronous write throughput improvement when used as
an L2ARC device.
I was under the impression that the L2ARC was to
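Worth noting for the question above: L2ARC is a read cache, while synchronous write throughput is the job of a separate intent-log (slog) device. A hedged sketch of the distinction, pool and device names hypothetical:

```shell
# A dedicated log vdev is what absorbs synchronous writes;
# cache (L2ARC) vdevs only accelerate reads.
zpool add tank log mirror c2t0d0 c2t1d0
zpool add tank cache c2t2d0
```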
On 7 May 09, at 04:03, Adam Leventhal wrote:
After all this discussion, I am not sure if anyone adequately answered the
original poster's question as to whether a 2540 with SAS 15K drives would
provide substantial synchronous write throughput improvement when used as
an L2ARC device.
I