Re: [zones-discuss] zlogin -C does not seem to work for me as expected
Something is not working right if you are getting the maintenance prompt. You can log into the zone with zlogin -S zonename, and then run svcs -xv to see which service has issues.

-Steve

On 03/21/11 09:43 AM, Enda O'Connor - Sun Microsystems Ireland - Software Engineer wrote:
> On 21/03/2011 16:37, Russ Weingartz wrote:
>> Hi, I'm new to this and working on an older system (s10_06), but I am trying
>> to complete the final configuration steps for adding a zone; any help would
>> be appreciated. From what I find, I need to finish the configuration after
>> the create/install/boot procedure (those steps seem to have gone fine) using
>> zlogin commands. When, as the root user, I do zlogin -C, I am told I'm
>> connected to the console and am presented with a cursor. What next? From
>> what I have seen, I should be faced with a barrage of questions allowing me
>> to complete my steps. If I wait a while and press the return key, I am told
>> I've entered system maintenance mode at /dev/console, and again a cursor
>> presents itself. So how do I get the questions to show up? Thanks for any
>> assistance.
>> rosco
>
> Hi,
> What does zoneadm list -cv say about the zone state? What does ptree -z
> zonename say is running in the zone? To automate this, one can include a
> sysidcfg file in zonepath/root/etc; see man sysidcfg to get an idea of what
> it might look like, or an example (domain name and IP addresses changed :-)):
>
>     root@kilcolgan:/export# cat sysidcfg
>     name_service=NIS { domain_name=foo.com }
>     system_locale=C
>     terminal=vt100
>     timeserver=patchmenow
>     timezone=GB-Eire
>     network_interface=PRIMARY { hostname=whitecliff
>         ip_address=100.100.100.100
>         protocol_ipv6=no
>         default_route=100.100.100.1 }
>     security_policy=none
>     display=workaround:Unknown
>     pointer=workaround:Unknown
>     monitor=workaround:Unknown
>     root_password=blahblah
>     nfs4_domain=sun.com
>     root@kilcolgan:/export#
>
> Enda

___
zones-discuss mailing list
zones-discuss@opensolaris.org
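The "include a sysidcfg file in zonepath/root/etc" step above can be sketched as a small script. This is a minimal illustration, not from the thread: the zonepath and zone name are placeholders, and the sysidcfg contents are a pared-down variant of Enda's example.

```shell
# Hypothetical zonepath for illustration; substitute your zone's real
# zonepath (as shown by "zonecfg -z myzone info zonepath").
ZONEPATH=${ZONEPATH:-/tmp/zones/myzone}
mkdir -p "$ZONEPATH/root/etc"

# Drop the sysidcfg into <zonepath>/root/etc BEFORE the zone's first boot;
# zlogin -C should then complete sys-unconfig questions automatically
# instead of presenting a bare cursor.
cat > "$ZONEPATH/root/etc/sysidcfg" <<'EOF'
system_locale=C
terminal=vt100
timezone=GB-Eire
network_interface=PRIMARY { hostname=whitecliff
    ip_address=100.100.100.100
    protocol_ipv6=no
    default_route=100.100.100.1 }
security_policy=none
name_service=NONE
root_password=blahblah
nfs4_domain=sun.com
EOF

# Then boot and attach to the console (Solaris-only, shown as comments):
#   # zoneadm -z myzone boot
#   # zlogin -C myzone
```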
Re: [zones-discuss] X11 in solaris zone
What version of Solaris are you using, and which procedures have you attempted? Are you attempting to forward X11 to the global zone by:

    global$ ssh -X <local zone ip>
    local$ xterm

or:

    global$ ssh <local zone ip>
    local$ export DISPLAY=<global zone ip>:0
    local$ xterm

For the former, you could be hitting:
http://bugs.opensolaris.org/view_bug.do?bug_id=6704823
You also need to (in the global zone) enable X11 forwarding in /etc/ssh/sshd_config, restart the ssh service, and re-ssh into the local zone.

For the latter, you need to allow incoming X11 connections in the global zone:

    # svccfg -s x11-server setprop options/tcp_listen = true

Restart X11 by logging out of and back into the X session. Allow incoming X11 connections via xhost + or similar. Connect to the local zone (probably via ssh) and export DISPLAY.

Of course, both methods assume that you can ssh from the global zone to a local zone. If the local zone has no networking configured, this will not work.

--
This message posted from opensolaris.org
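The client-side step of the second method can be sketched as below. The IP address is a stand-in for the global zone's real address (an assumption for illustration); the Solaris-only prerequisites are shown as comments so the sketch stays runnable anywhere.

```shell
# Prerequisites in the global zone (Solaris-only, as comments):
#   # svccfg -s x11-server setprop options/tcp_listen = true
#   (log out of and back into the X session to restart X11)
#   # xhost +    # or a narrower access-control rule

# Placeholder for the global zone's IP address.
GLOBAL_ZONE_IP=192.168.0.1

# Inside the local zone, after a plain "ssh <local zone ip>",
# point X clients at the global zone's display :0:
DISPLAY="${GLOBAL_ZONE_IP}:0.0"
export DISPLAY
echo "$DISPLAY"
```

With DISPLAY set this way, `xterm` in the local zone renders on the global zone's X server over TCP rather than through an ssh X11 tunnel.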
Re: [zones-discuss] Understanding Zones. Please help.
These comments apply to Solaris 10 8/07 or later (or KU patch 120011-14 or later).

There is some info on capping memory for zones here:
http://docs.sun.com/app/docs/doc/817-1592/gepte?a=view

You can specify capped memory for each zone, and rcapd will be enabled automatically. There is no need to configure rcapd or projects within any zones.

To set cpu shares for all zones, including the global:
http://docs.sun.com/app/docs/doc/817-1592/gepvn?l=en&a=view&q=cpu-shares

The cpu-share settings for all zones (including global) are applied when the zone boots. To update the shares on a running zone (including global):

    # prctl -n zone.cpu-shares -r -v value -i zone zonename

All zones with cpu shares are automatically put into the fair share scheduler. The exception is the global zone: FSS must be configured system-wide for the global zone.

Example 1: Setting caps and shares for the first time, with zones already running:
---
Update the system's default scheduler (applied at boot):

    # dispadmin -d FSS

Update the scheduler on running zones:

    # priocntl -s -c FSS -i all

Set the global zone boot-time shares:

    # zonecfg -z global
    set cpu-shares=50

Configure the running global zone shares:

    # prctl -n zone.cpu-shares -r -v 50 -i zone global

For each non-global zone, configure boot-time settings for the zone:

    # zonecfg -z zonename
    set cpu-shares=10
    add capped-memory
    set physical=200M
    set swap=300M
    end
    add capped-cpu
    set ncpus=1.5
    end
    exit

If the zone is running, configure its running settings:

    # prctl -n zone.cpu-shares -r -v 10 -i zone zonename
    # prctl -n zone.cpu-cap -s -t priv -v 150 -i zone zonename
    # prctl -n zone.max-swap -s -t priv -v 300M -i zone zonename
    # rcapadm -z zonename -m 200M

Enable the rcap service:

    # svcadm enable rcap

Example 2: Updating caps and shares on a system already configured with caps and shares from Example 1.
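The per-zone zonecfg subcommands in Example 1 can also be applied non-interactively by putting them in a command file and feeding it to zonecfg with -f. A minimal sketch, assuming a placeholder zone name of "myzone" (the Solaris-only invocation is shown as a comment):

```shell
# Write the Example 1 resource-control subcommands to a command file.
cat > /tmp/myzone.caps <<'EOF'
set cpu-shares=10
add capped-memory
set physical=200M
set swap=300M
end
add capped-cpu
set ncpus=1.5
end
EOF

# On the Solaris system you would then apply it with:
#   # zonecfg -z myzone -f /tmp/myzone.caps
# and the settings take effect at the zone's next boot.
```

This is handy when configuring the same caps across many zones: generate one file, loop over zone names.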
Update the global zone boot-time shares:

    # zonecfg -z global
    set cpu-shares=100

Update the running global zone shares:

    # prctl -n zone.cpu-shares -r -v 100 -i zone global

For each non-global zone, update its boot-time settings:

    # zonecfg -z zonename
    set cpu-shares=20
    add capped-memory
    set physical=400M
    set swap=600M
    end
    add capped-cpu
    set ncpus=2.5
    end
    exit

If the zone is running, update its running settings:

    # prctl -n zone.cpu-shares -r -v 20 -i zone zonename
    # prctl -n zone.cpu-cap -r -t priv -v 250 -i zone zonename
    # prctl -n zone.max-swap -r -t priv -v 600M -i zone zonename
    # rcapadm -z zonename -m 400M

Notes:
- The cpu cap value is multiplied by 100 for prctl in both examples.
- prctl for caps uses -s in Example 1, and -r in Example 2.
- A cpu cap is not required; just cpu-shares + capped-memory is certainly valid.
- Capped memory and cpu can be set on the global zone as well. Capping the global zone can impact system availability; use with caution.
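The first note above (the ×100 conversion between zonecfg's ncpus and prctl's zone.cpu-cap) can be checked with a one-liner; awk handles the fractional value:

```shell
# zonecfg's capped-cpu "ncpus" is a fraction of a CPU (here 2.5);
# prctl's zone.cpu-cap wants the same quantity multiplied by 100.
ncpus=2.5
cap=$(awk -v n="$ncpus" 'BEGIN { printf "%d", n * 100 }')
echo "$cap"    # the -v value to pass to prctl -n zone.cpu-cap
```

So `set ncpus=2.5` in zonecfg corresponds to `-v 250` in the prctl command above.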
Re: [zones-discuss] Problem Creating a zone
Is /zones/zone_roots/ a zfs dataset?

    # zfs create -o mountpoint=/zones rpool/zones
    # zfs create rpool/zones/zone_roots
Re: [zones-discuss] Monitor Zones resource usage
prstat -Z shows memory in use in megabytes. Look at the RSS column (RAM used) and the SWAP column (virtual memory used).

Did you reboot the zone after configuring its cpu-cap? With two cpus and a cap of 0.25, the zone should not use more than 12.5% of the total cpu. What value are you seeing? What is your prstat interval? Cpu caps are enforced over time, so in a short interval you may get a little more or a little less.

Check out the zone statistics project for work in progress:
http://opensolaris.org/os/project/zonestat/
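The 12.5% figure above follows from dividing the cap by the CPU count, since prstat -Z reports CPU as a percentage of the whole machine. A quick sketch of the arithmetic:

```shell
# zone.cpu-cap of 0.25 CPU on a 2-CPU box: prstat -Z's CPU column
# (percent of all CPUs) should hover near cap / ncpu * 100.
cap=0.25
ncpu=2
awk -v c="$cap" -v n="$ncpu" 'BEGIN { printf "%.1f%%\n", c / n * 100 }'
```

If prstat briefly shows more than this, remember the cap is enforced as an average over time, not instant by instant.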
Re: [zones-discuss] Raw devices in non-global zone + oracle 10g
Oracle in a zone can access devices added to the zone with "add device":
http://www.sun.com/bigadmin/features/articles/db_in_containers.jsp

Here's some info on oracle in a container:
http://opensolaris.org/os/community/zones/faq/#app_orcl_shmem
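An "add device" configuration for raw devices can be sketched as a zonecfg command file. The device paths and zone name below are hypothetical placeholders, not values from the thread; substitute the raw (rdsk) and block (dsk) paths your Oracle install actually uses.

```shell
# Hypothetical raw + block device pair for one Oracle datafile slice.
cat > /tmp/oradev.cfg <<'EOF'
add device
set match=/dev/rdsk/c1t1d0s0
end
add device
set match=/dev/dsk/c1t1d0s0
end
EOF

# Apply from the global zone, then reboot the zone (Solaris-only):
#   # zonecfg -z orazone -f /tmp/oradev.cfg
#   # zoneadm -z orazone reboot
```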
Re: [zones-discuss] X11 in solaris zone
svc:/application/x11/xvnc-inetd:default

This starts an Xvnc server when connected to by a vncviewer. It does an XDMCP query, so when the vncviewer connects, it gets a gdm login (gdm must be configured to enable XDMCP). When the vncviewer disconnects, the Xvnc goes away. I'm guessing this is not what you want.

I think you want a persistent Xvnc server, running in the zone as a particular user, that the application server can always pop something up on. To access this server, you would use a vncviewer from somewhere. Is this correct? Are there any security constraints?

Perhaps try [EMAIL PROTECTED]
[zones-discuss] Re: Tivoli Storage Manager (TSM) and zones
I can make some general suggestions on how to avoid duplicate backups, but I'm not very familiar with TSM or your specific backup requirements. Does the TSM backup client run in the global zone, the non-global zones, or both? What is the default TSM configuration that results in these duplicate backups? Does the default TSM configuration traverse mountpoints?

In general I would suggest NOT backing up any lofs mounts. For instance, if you want to back up everything on the system, back up all mountpoints of the fstypes that you are interested in, from the global zone. For instance, if you want to back up all ufs, zfs, and whateverfs filesystems, back up these mountpoints from the global zone:

    # fstypes='ufs|zfs|whateverfs'
    # mount -v | /usr/xpg4/bin/egrep "type ($fstypes)" | /usr/bin/awk '{ print $3 }'

And do not traverse mountpoints. I'd say that the caveat here is that if any non-global zones have non-lofs filesystems (add fs, type=ufs, for example), then those zones will need to be ready or booted in order for their filesystems to be backed up by the global zone.

If the TSM client can and needs to run in each non-global zone, then a more complex backup configuration will be required. You'll need to determine which zone (global or non-global) should be responsible for backups/restores on a per-filesystem basis.

Also, is there a reason for the logging fs option on the lofs mounts?
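The mountpoint-listing pipeline above can be exercised against canned `mount -v` output, which makes its behavior easy to check off-box. This sketch uses `grep -E` instead of Solaris's /usr/xpg4/bin/egrep so it runs anywhere; the sample lines are invented but follow the "special on mountpoint type fstype options" shape of Solaris `mount -v` output.

```shell
# Filesystem types worth backing up from the global zone; lofs is
# deliberately absent so loopback-mounted zone paths are skipped.
fstypes='ufs|zfs|whateverfs'

# Canned stand-in for "mount -v" output (hypothetical devices/paths).
cat > /tmp/mount.sample <<'EOF'
/dev/dsk/c0t0d0s0 on / type ufs read/write/setuid on Mon
rpool/export on /export type zfs read/write/setuid on Mon
/export/zones/web on /zones/web/root/export type lofs read/write on Mon
swap on /tmp type tmpfs read/write on Mon
EOF

# Select only the wanted fstypes and print the mountpoint ($3).
# The trailing space in the pattern keeps e.g. "ufsboot" from matching.
grep -E "type ($fstypes) " /tmp/mount.sample | awk '{ print $3 }'
```

On a real system you would replace the canned file with `mount -v | ...`; here the lofs and tmpfs lines are dropped and only / and /export are emitted.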