Besides /etc/system, you could also export all the pools, use mdb to
set the same variable that /etc/system sets, and then import the pools
again. I don't know of any other mechanism to limit ZFS's memory
footprint.
If you don't do ZFS boot, manually import the pools
after the
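The export/mdb/import sequence described above might look like the following. This is only a sketch: "tank" stands in for your pool name, and the ARC tunable (shown here as zfs_arc_max, capped at 1 GiB) has gone by different names across releases, so verify it on your system first.

```
# zpool export tank                             (repeat for each pool)
# echo 'zfs_arc_max/Z 0x40000000' | mdb -kw     (write 1 GiB into the live kernel)
# zpool import tank
```

The /Z format writes a 64-bit value; -k targets the running kernel and -w enables writes.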
Hi,
Other than modifying /etc/system, how can I keep the ARC cache low at boot time?
Can I somehow create an SMF service and wire it in at a very low level to put a
fence around ZFS memory usage before other services come up?
I have a deployment scenario where I will have some reasonably large
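For reference, the /etc/system approach being discussed is a single line, read once at boot. 1 GiB is shown as an example; the value is in bytes, and the tunable name should be checked against your release:

```
set zfs:zfs_arc_max = 0x40000000
```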
Off the lists, someone suggested to me that the Inconsistent
filesystem may be the boot archive and not the ZFS filesystem (though I
still don't know what's wrong with booting b99).
Regardless, I tried rebuilding the boot_archive with bootadm
update-archive -vf and verified it by mounting it
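That rebuild-and-verify sequence might look like the sketch below. The archive path is the x86 one, and the filesystem type inside the archive varies by release, so the mount step is an assumption to adapt:

```
# bootadm update-archive -vf
# mount -F ufs -o ro `lofiadm -a /platform/i86pc/boot_archive` /mnt
# ls /mnt            (inspect, then umount /mnt and lofiadm -d the device)
```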
PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=29
ashift=9
asize=90374406144
is_log=0
DTL=161
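A few of the label fields above decode straightforwardly: ashift=9 means 512-byte sectors (2^9), metaslab_shift=29 means 512 MiB metaslabs (2^29 bytes), and asize is the usable vdev size in bytes, roughly 84 GiB here. In shell arithmetic:

```shell
# Decode vdev label fields from the listing above.
echo "sector size:   $((1 << 9)) bytes"                  # ashift=9 -> 512
echo "metaslab size: $((1 << 29)) bytes"                 # metaslab_shift=29 -> 536870912
echo "asize:         $((90374406144 / 1073741824)) GiB"  # integer GiB -> 84
```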
--
Matt Ingenthron
http://blogs.sun.com
On Sat, Mar 22, 2008 at 11:33 PM, Matt Ingenthron <matt.ingenthron@sun.com> wrote:
Hi all,
I'm migrating to a new laptop from one which has had hardware issues lately. I
kept my home directory on zfs, so in theory it should be straightforward to
send/receive, but I've had issues. I've moved the disk out of the faulty
system, though I saw the same issue there.
The behavior
One update to this, I tried a scrub. This found a number of errors on old
snapshots (long story, I'd once done a zpool replace from an old disk with
hardware errors to this disk). I destroyed the snapshots since they weren't
needed. The snapshot I was trying to send did not have any errors.
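Pulled together, the recovery-and-migration flow from these messages looks roughly like this (pool, dataset, and snapshot names are made up):

```
# zpool scrub tank
# zpool status -v tank                 (lists datasets/snapshots with errors)
# zfs destroy tank/home@old-damaged    (drop snapshots pinning bad blocks)
# zfs send tank/home@migrate | ssh newhost zfs receive tank/home
```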
card. I've never measured this or seen it measured; any pointers would
be useful. I believe the I/Os are 8 KB; the application is MySQL.
Thanks in advance,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com
I'm potentially stepping in areas I don't quite know enough about, but
others can jump in if I speak any mistruths :)
More inline...
Georg-W. Koltermann wrote:
Hi,
OK, I know zfs-fuse is still incomplete and that performance has not been a focus yet,
but still, before I'm going to use it for
to what goes on in his sausage making duties.
- Matt
p.s.: The web says the German word for "colloquial" is "umgangssprachlich".
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED
), that's available today, with
the limitation that you can't expand a raidz group itself.
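To make the limitation concrete: a pool can grow by adding another whole raidz vdev, but there is no command that adds a disk into an existing raidz group (device names below are hypothetical):

```
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0    (OK: adds a new raidz vdev)
  (no zpool add/attach form widens the existing raidz group itself)
```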
Regards,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED
Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 / +91 80 669 27521
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Matt Ingenthron
Mike Seda wrote:
Basically, is this a supported zfs configuration?
I can't see why not, but whether it's supported is something only Sun
support can speak to, not this mailing list.
You say you lost access to the array, though; a full disk failure
shouldn't cause this if you were using RAID-5 on
subcommand. In your case, you can run,
for instance: zpool status mypool.
Good luck,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439
with something
reliable.
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439
c7t6d0 ONLINE 0 0 0
c7t7d0 ONLINE 0 0 0
errors: No known data errors
Thanks in advance,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Systems Practice, Client Solutions
http://blogs.sun.com/mingenthron
After some quick experimenting, I determined that it is in fact a single
raidz pool with all 47 devices. Apparently something was either done
wrong or miscommunicated in the process.
Sorry for the bandwidth.
- Matt
Matt Ingenthron wrote:
Hi all,
Sorry for the newbie question, but I've