Could you not also pin processes to cores? Preventing context switching
should help too. I've done this for performance reasons before on a 24-core
Linux box.
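Something like this, from memory (PIDs made up for the example):

# pbind -b 2 12345      (Solaris: bind PID 12345 to processor 2)
# taskset -pc 2 12345   (Linux: move PID 12345 onto core 2)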
Sent from my HTC Desire
On 16 Feb 2011 05:12, "Richard Elling" wrote:
> On Feb 15, 2011, at 7:46 PM, ian W wrote:
>
>> Thanks..
>>
>> given this box
On Feb 15, 2011, at 7:46 PM, ian W wrote:
> Thanks..
>
> given this box runs 18 hours a day and is idle for maybe 17.5 hrs of that,
> I'd rather have the best power management I can...
>
> I would have loved to have upgraded to an i3 or even SB but the Solaris 11
> Express support for both is m
Thanks..
given this box runs 18 hours a day and is idle for maybe 17.5 hrs of that, I'd
rather have the best power management I can...
I would have loved to have upgraded to an i3 or even SB but the Solaris 11
Express support for both is marginal. (H55 chipset issues, no Sandy Bridge
support at
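(Rough sketch only, in case it's useful: on a box like this, CPU power
management is usually just a couple of lines in /etc/power.conf, applied with
pmconfig -- exact keywords vary by release, so check power.conf(4) first:)

cpupm enable
cpu-threshold 1s

# pmconfig    (re-reads /etc/power.conf and applies it)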
On 2/15/2011 1:37 PM, Torrey McMahon wrote:
On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not
a ZIL-like thing), it should be relatively simple to swamp one with
write requests (most SANs have little more than 1GB of cache), at
wh
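(Side note, and explicitly not something anyone in this thread has
recommended: when the array genuinely has battery-backed NVRAM, the flush
cost is sometimes sidestepped by telling ZFS not to issue cache-flush
requests at all, e.g. in /etc/system -- dangerous on anything without
protected cache:)

set zfs:zfs_nocacheflush = 1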
I just replaced a failing disk on one of my servers running Solaris 10
U9. The system was MPxIO enabled and I now have the old device hanging
around in the cfgadm list.
I understand from searching around that cfgadm may not be MPxIO aware
-- at least not in Solaris 10. I see a fix was pushed to
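(The usual non-MPxIO cleanup would be something like the following --
attachment-point names invented for illustration -- but as noted it's not
obvious it applies to MPxIO devices:)

# cfgadm -al                             (list attachment points)
# cfgadm -c unconfigure c2::dsk/c2t3d0   (unconfigure the stale occupant)
# devfsadm -Cv                           (prune dangling /dev links)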
Thanks Cindy.
Are you (or anyone else reading) aware of a way to disable MPxIO at
install time?
I imagine there's no harm* in leaving MPxIO enabled with single-pathed
devices -- we'll likely just keep this in mind for future installs.
Thanks,
Ray
* performance penalty -- we do see errors in our
Hi Ray,
MPxIO is on by default for x86 systems that run the Solaris 10 9/10
release.
On my Solaris 10 9/10 SPARC system, I see this:
# stmsboot -L
stmsboot: MPxIO is not enabled
stmsboot: MPxIO disabled
You can use the stmsboot CLI to disable multipathing. You are prompted
to reboot the system
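Roughly as follows, though double-check stmsboot(1M) on your release before
running anything:

# stmsboot -d          (disable MPxIO on all supported HBA ports; reboot follows)
# stmsboot -D fp -d    (or disable it only for fp/fibre-channel ports)

Under the covers this essentially flips mpxio-disable="yes" in files like
/kernel/drv/fp.conf.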
On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not
a ZIL-like thing), it should be relatively simple to swamp one with
write requests (most SANs have little more than 1GB of cache), at
which point, the SAN will be blocking on flushi
Thanks Torrey. I definitely see that multipathing is enabled... I
mainly want to understand whether there are installation scenarios where
multipathing is enabled by default (if the mpt driver thinks it can
support it, will it enable mpathd at install time?) as well as the
consequences of di
in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that
mpathadm is the storage multipath admin tool.)
If scsi_vhci is loaded in the kernel you have storage multipathing
enabled. (Check with modinfo.)
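i.e. something like:

# modinfo | grep -i vhci    (a scsi_vhci line here means the multipath driver is loaded)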
On 2/15/2011 3:53 PM, Ray Van Dolson wrote:
I'm troubleshooting an existing
On 02/16/11 09:50 AM, David Strom wrote:
Up to the moderator whether this will add anything:
I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB ZFS between
SANs. Configured a private subnet & allowed rsh on the receiving V440.
command: zfs send | (rsh zfs receive ...)
Took a whol
I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
noticed its device names are extremely hairy -- very similar to the
multipath device names: c0t5000C50026F8ACAAd0, etc, etc.
mpathadm seems to confirm:
# mpathadm list lu
/dev/rdsk/c0t50015179591CE0C1d0s2
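(And to see how many real paths sit behind one of those long names,
something like:)

# mpathadm show lu /dev/rdsk/c0t50015179591CE0C1d0s2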
Up to the moderator whether this will add anything:
I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB ZFS between
SANs. Configured a private subnet & allowed rsh on the receiving V440.
command: zfs send | (rsh zfs receive ...)
Took a whole week (7 days) and brought the receiving h
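(i.e. something of this shape, with dataset, snapshot and host names made up
for the example:)

# zfs send tank/data@migrate | rsh 192.168.10.2 zfs receive -F tank/data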
On Tue, Feb 15 at 11:18, Erik ABLESON wrote:
Just wondering if an expert can chime in on this one.
I have an older machine running 2009.11 with a zpool at version
14. I have a new machine running Solaris Express 11 with the zpool
at version 31.
I can use zfs send/recv to send a filesystem from
>I had a pool on an external drive. Recently the drive failed, but the pool
>still shows up when I run 'zpool status'
>
>Any attempt to remove/delete/export the pool ends up with unresponsiveness
>(the system is still up/running perfectly, it's just this specific command
>kind of hangs so I have to open a new ss
The best way to remove the pool is to reconnect the device and then
destroy the pool, but if the device is faulted or no longer available,
then you'll need a workaround.
If the external drive with the FAULTED pool remnants isn't connected to
the system, then rename the /etc/zfs/zpool.cache file a
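A rough sketch of that workaround (treat it as such; details can differ by
release):

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# init 6
# zpool import <poolname>   (after the reboot, re-import each healthy pool;
                             the faulted remnant simply never reappears)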
The deduped dataset is 2.1TB, there is no L2ARC, and the server has 64GB RAM.
We have currently ruled out the possibility that this is related to dedup and
ZFS, and are working to get a fix for "6996574 smbd intermittently hangs".
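(In case it's useful to anyone following along, the in-core DDT footprint can
be estimated with something like the following -- pool name made up, and the
~320 bytes/entry figure is only the usual rule of thumb:)

# zdb -DD tank    (prints the dedup table histogram; total entries x ~320 bytes
                   gives a rough in-core DDT size)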
Thanks!
I had a pool on an external drive. Recently the drive failed, but the pool
still shows up when I run 'zpool status'.
Any attempt to remove/delete/export the pool ends up with unresponsiveness
(the system is still up/running perfectly, it's just this specific command
kind of hangs so I have to open a new ssh sessio
Doh - 2008.11
On 15 Feb. 2011, at 11:18, Erik ABLESON wrote:
> I have an older machine running 2009.11 with a zpool at version 14. I have a
> new machine running Solaris Express 11 with the zpool at version 31.
Hi
I am no expert, but I have used several virtualisation environments, and I
am always in favour of passing iSCSI straight through to the VM. It creates
a much more portable system, often able to be booted on a different
virtualisation environment, or even on a dedicated server, if you choose at
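(For example, on a Solaris guest the in-guest initiator side is just the
usual iscsiadm steps -- target address made up for the example:)

# iscsiadm add discovery-address 192.168.1.50:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi    (the LUNs then appear to the guest as ordinary disks)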
Just wondering if an expert can chime in on this one.
I have an older machine running 2009.11 with a zpool at version 14. I have a
new machine running Solaris Express 11 with the zpool at version 31.
I can use zfs send/recv to send a filesystem from the older machine to the new
one without any
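(If it helps, the quickest check on each side is just the pool and filesystem
versions -- names below made up:)

# zpool get version tank
# zfs get version tank/fs
# zpool upgrade -v    (lists every pool version this release understands)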