miaojian wrote:
I created a Solaris zone for testing, and when I log in to the zone I can see
/tmp, but I can't figure out where that /tmp is coming from. If it is coming
from the global zone's /tmp, then a user in a local zone could cause undesired
effects on the global zone by filling up /tmp in t
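For context, each non-global zone normally gets its own tmpfs mount of /tmp, but every tmpfs mount draws from the same system-wide swap pool, so an unbounded /tmp in one zone can still pressure the whole machine. A minimal way to check and cap it (the 512m limit is an arbitrary example, not a recommendation):

```shell
# Inside the zone: confirm /tmp is tmpfs (swap-backed),
# not a loopback mount of the global zone's /tmp.
df -n /tmp

# In the global zone: cap the zone's /tmp by adding a size
# option to the tmpfs line in the zone's /etc/vfstab, e.g.:
#   swap  -  /tmp  tmpfs  -  yes  size=512m
```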
Ben Rockwood wrote:
I brought up a similar issue some time back
(http://www.opensolaris.org/jive/thread.jspa?threadID=20355) which pertained to
ligHTTPd. A fix for that issue was integrated into snv_57; however, I'm
seeing a similar situation with other applications, including Mongrel and Ap
miaojian wrote:
Is a resource control for max-shm-memory also planned for zones?
If you are asking if this rctl will be backported to S10u4, then
the answer is yes.
Jerry
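For reference, once the zone.max-shm-memory rctl is available, it can be set persistently through zonecfg or adjusted on a running zone with prctl. A sketch, where "myzone" and the 1g limit are placeholders:

```shell
# Persistent setting, stored in the zone's configuration:
zonecfg -z myzone 'set max-shm-memory=1g'

# Temporary change on a running zone (lost at reboot):
prctl -n zone.max-shm-memory -v 1g -r -i zone myzone
```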
___
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
Victor Feng wrote:
> Given the importance of zones and some other info, is there any formula to
> calculate the number of CPUs that system will allocate to each zone?
>
> e.g.
> Following system has only two zones with dynamic pool service enabled.
> Total number of CPU in the system is 32.
>
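One rough rule of thumb, assuming the fair share scheduler (FSS) rather than dedicated psets: under sustained load, each zone's CPU entitlement is proportional to its share count. Dynamic pools (poold) behave differently, moving CPUs between psets within their pset.min/pset.max bounds. A sketch with hypothetical zone names and share values:

```shell
# Assign CPU shares under FSS (zone names/values are examples):
zonecfg -z zoneA 'set cpu-shares=3'
zonecfg -z zoneB 'set cpu-shares=1'

# With 32 CPUs and both zones fully busy, entitlement is
# proportional to shares:
#   zoneA: 32 * 3/(3+1) = 24 CPUs' worth of time
#   zoneB: 32 * 1/(3+1) =  8 CPUs' worth of time
```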
Erich Weiler wrote:
Does the "common-agent-container" service manage Solaris 10 containers?
Does it need to be taking up time like it is even if I have no
container specifically configured?
This has nothing to do with Solaris "containers" (zones and/or resource
management).
Jerry
Dennis Clarke wrote:
Hi
On 09/01/06 01:59, Dennis Clarke wrote:
I have SVM metadevices defined in the usual way.
# metattach d7 d27
d7: submirror d27 is attached
metattach: mars: /dev/md/dsk/d0: not a metadevice
#
That last message makes no sense. I already have d0 defined and set up
fine.
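When metattach complains that an existing device is "not a metadevice", a first diagnostic step is to confirm that the device really is known to SVM on that host and that the state database replicas are healthy. A sketch (the device names follow the post; output will vary):

```shell
# Check the state database replicas for errors:
metadb -i

# Confirm d0 is actually defined; if it is missing from this
# output, re-check /etc/lvm/md.tab and how the metadevices
# were initialized:
metastat -p d0
```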
> How does Solaris load up its tasks and know when to
> say "stop, no more please"?
You might want to take a look at the zone.max-lwps
rctl. See resource_controls(5). You can set this
rctl on the global zone using something like:
# prctl -n zone.max-lwps -v 1000 -r -i zone global
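Note that a prctl setting like the one above lasts only until reboot. For a non-global zone, the same rctl can be made persistent in the zone's configuration; a sketch, where "myzone" and the limit of 1000 are placeholders:

```shell
# Persistent zone.max-lwps for a non-global zone:
zonecfg -z myzone
zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.max-lwps
zonecfg:myzone:rctl> add value (priv=privileged,limit=1000,action=deny)
zonecfg:myzone:rctl> end
zonecfg:myzone> exit
```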
The problem