Re: new cg-manager gui tool for managing cgroups

2011-08-02 Thread Vivek Goyal
On Thu, Jul 21, 2011 at 06:36:55PM +0200, Lennart Poettering wrote:
> On Thu, 21.07.11 11:28, Vivek Goyal (vgo...@redhat.com) wrote:
> 
> > > It is already possible for different applications to use cgroups
> > > without stepping on each other, and without requiring every app
> > > to communicate with each other.
> > > 
> > > As an example, when it starts, libvirt will look at what cgroup
> > > it has been placed in, and create the VM cgroups below this point.
> > > So systemd can put libvirtd in an arbitrary location and set
> > > overall limits for the virtualization service, and it will cap
> > > all VMs. No direct communication between systemd & libvirt is
> > > required.
> > > 
> > > If applications similarly take care to honour the location in
> > > which they were started, rather than just creating stuff directly
> > > in the root cgroup, they too will interoperate nicely.
> > > 
> > > This is one of the nice aspects to the cgroups hierarchy, and
> > > why having tools/daemons which try to arbitrarily re-arrange
> > > cgroups systemwide are not very desirable IMHO.
> > 
> > This will work as long as somebody has done the top level setup and
> > planning. For example, if somebody is running a bunch of virtual machines
> > and hosting some native applications and services also on the machine,
> > then he might decide that all the virt machines can only use 8 out of
> > 10 cpus and keep 2 cpus free for native services.
> > 
> > In that case an admin ought to be able to do this top level planning
> > before handing out control of sub-hierarchies to respective applications.
> > Does systemd allow that as of today?
> 
> Right now, systemd only lets you place services in the cgroups of
> your choice; it will not configure any settings on those cgroups. (This
> is very likely to change soon though as there is a patch pending that
> makes a number of popular controls available as native config options in
> systemd.)
> 
> For the controllers like "cpuset" or the RT part of "cpu" where you
> assign resources from a limited pool we currently have no solution at
> all to deal with conflicts. Neither in libcgroup and friends, nor in
> systemd, nor in libvirt.

It is not just "cpuset" or the RT part of "cpu". The same resource question
applies to simpler things like cpu shares or blkio controller weights. For
example, one notion people seem to have is being able to view the division
of system resources in terms of percentages. Currently we don't have any
way to do that, and achieving it would require an overall view of the
hierarchy to tell whether a certain group has received a certain % of
something or not. If there is a separate manager for each part of the
hierarchy, that is hard to do.

So if we want to offer more sophisticated features to the admin, the design
becomes somewhat complex, and I am not sure whether that is worth it or not.

There is also the question of what kind of interface should be exposed to a
user/admin when it comes to allocating resources to a cgroup. Saying "give
this virtual machine/group a cpu weight of 512" does not mean much on its
own. If one wants to translate this number into a certain %, then one needs
the global view.
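
To make that concrete: a weight like cpu.shares only turns into a percentage
once you know the weights of all siblings at every level of the hierarchy,
which is exactly the global view a per-subtree manager does not have. A
minimal illustration (hypothetical Python, not part of any existing tool):

    # levels = [(my_weight, [weights of all siblings incl. mine]), ...],
    # listed from the root of the hierarchy down to the group in question.
    def effective_percentage(levels):
        fraction = 1.0
        for my_weight, sibling_weights in levels:
            fraction *= my_weight / float(sum(sibling_weights))
        return fraction * 100

    # A VM with cpu.shares=512 next to a sibling with 1024, inside a parent
    # that itself gets 1024 out of 2048, ends up with ~16.7% of the CPU,
    # something no single-subtree manager can work out on its own.
    print(effective_percentage([(1024, [1024, 1024]), (512, [512, 1024])]))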

Similarly, absolute max limits such as those offered by the blkio and cpu
controllers might not make much sense if the parent has already been
throttled to an even smaller limit.

All this raises the question of what the UI/command-line design for
configuring cgroups/limits on things like users/services/virtual machines
should look like. Right now libvirt seems to allow specifying the name of
the guest domain and some cgroup parameters (cpu shares, blkio weight, etc.)
for that domain. Again, in a hierarchy, specifying that does not mean
anything in the absolute system picture unless somebody has an overall view
of the system.

This also raises the interesting question of how the cgroup interfaces of
other UIs in the system should evolve.

So I have lots of questions/concerns but do not have good answers.
Hopefully this discussion can lead to some of the answers.

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-22 Thread Karel Zak
On Fri, Jul 22, 2011 at 02:32:08AM +0200, Lennart Poettering wrote:
> a) Your app can just place a simple unit file in /run/systemd/system, and
>issue Reload() on the manager D-Bus object. Then, you can start it
>via D-Bus, and stop it, and introspect it, and all other
>operations. When you want to get rid of it again, stop it, then
>remove the file, issue another Reload().
> 
> b) Write a "generator". These are small binaries you can drop into
>/lib/systemd/system-generators. They are spawned in parallel every
>time systemd loads its configuration. These generators can
>dynamically generate unit files from other configuration

 Nice, this is the right way to make it extendable. Thanks!

Karel

-- 
 Karel Zak  
 http://karelzak.blogspot.com
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-22 Thread Daniel P. Berrange
On Thu, Jul 21, 2011 at 04:58:50PM -0400, Vivek Goyal wrote:
> On Thu, Jul 21, 2011 at 06:17:03PM +0200, Lennart Poettering wrote:
> > On Thu, 21.07.11 15:52, Daniel P. Berrange (berra...@redhat.com) wrote:
> > 
> > > > I think it's a question of: do we want to make users go to a bunch of
> > > > different front-end tools, which don't communicate with each other, to
> > > > configure the system? I think it makes sense to have libvirt or
> > > > virt-manager and the systemd front-end be able to configure cgroups, but I
> > > > think it would also be nice if they could know when they step on each
> > > > other. I think it would also be nice if there was a way to help better
> > > > understand how the various system components are making use of cgroups
> > > > and interacting. I'd like to see an integrated desktop approach - not one
> > > > where separate components aren't communicating with each other.
> > > 
> > > It is already possible for different applications to use cgroups
> > > without stepping on each other, and without requiring every app
> > > to communicate with each other.
> > > 
> > > As an example, when it starts, libvirt will look at what cgroup
> > > it has been placed in, and create the VM cgroups below this point.
> > > So systemd can put libvirtd in an arbitrary location and set
> > > overall limits for the virtualization service, and it will cap
> > > all VMs. No direct communication between systemd & libvirt is
> > > required.
> > 
> > systemd (when run as the user) does exactly the same thing btw. It will
> > discover the group it is running in, and will create all its groups
> > beneath that.
> > 
> > In fact, right now the cgroup hierarchy is not virtualized. To make sure
> > systemd works fine in a container we employ the very same logic here: if
> > the container manager started systemd in a specific cgroup, then systemd
> > will do all its stuff below that, even if it is PID 1.
> 
> What does the cgroup hierarchy look like in the case of containers? I thought
> libvirtd does all the container management, and libvirtd is one of the services
> started by systemd.

If we assume systemd placed libvirtd into $LIBVIRT_ROOT, then libvirtd
will create the following hierarchy below that point:

$LIBVIRT_ROOT  (Where libvirtd lives)
   |
   +- libvirt   (Root for all virtual machines & containers)
       |
       +- lxc   (Root for all LXC containers)
       |   |
       |   +- c1   (Root for container with name c1)
       |   +- c2   (Root for container with name c2)
       |   +- c3   ...
       |   +- ...
       |
       +- qemu  (Root for all QEMU virtual machines)
           |
           +- v1   (Root for virtual machine with name v1)
           +- v2   (Root for virtual machine with name v2)
           +- v3   ...
           +- ...

In the future we might introduce the concept of virtual machine groups
into the API, at which point we might also have separate cgroups to
match those VM groups. That is still TBD.

The libvirt API primarily provides you control over the 3rd level in this
hierarchy, i.e. the individual VM groups. The other levels in the hierarchy
are mainly there so that we can ensure a unique namespace, because we only
validate VM name uniqueness within a driver; e.g. LXC can have a container
called 'demo' and so can QEMU at the same time.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-22 Thread Daniel P. Berrange
On Thu, Jul 21, 2011 at 04:32:52PM -0400, Vivek Goyal wrote:
> On Thu, Jul 21, 2011 at 04:15:46PM -0400, Jason Baron wrote:
> > On Thu, Jul 21, 2011 at 05:53:13PM +0200, Lennart Poettering wrote:
> > > On Thu, 21.07.11 16:36, Daniel P. Berrange (berra...@redhat.com) wrote:
> > > 
> > > > IIRC, you can already set cgroups configuration in the service's
> > > > systemd unit file, using something like:
> > > > 
> > > >  ControlGroup="cpu:/foo/bar mem:/wizz"
> > > > 
> > > > though I can't find the manpage right now.
> > > 
> > > Yes, this is supported in systemd (though without the "").
> > > 
> > > It's (tersely) documented in systemd.exec(5).
> > > 
> > > My guess is that we'll cover about 90% or so of the usecases of cgroups
> > > if we make it easy to assign cgroup-based limits to VMs, to system
> > > services and to users, like we can do with libvirt and systemd. There's
> > > still a big chunk of the 10% who want more complex setups with more
> > > arbitrary groupings, but I have my doubts the folks doing that are the
> > > ones who need a UI for that.
> > > 
> > 
> > Ok, if systemd is planning to add all these knobs and buttons to cover all 
> > of
> > these cgroups use cases, then yes, we probably don't need another management
> > layer nor the UI. I guess we could worry about the more complex setups when
> > we better understand what they are, once systemd has added more cgroup
> > support. (I only recently understood that systemd was adding all this
> > cgroups support). In fact, at that point we probably don't need
> > libcgroup either.
> 
> So the model seems to be that there are various components which control
> their own children. 
> 
> So at top level is systemd which controls users, services and libvirtd
> and will provide interfaces/options to be able to configure cgroups
> and resources for its children.
> 
> Similarly libvirtd will provide interfaces/options to be able to configure
> cgroups/resources for its children (primarily VMs, and possibly containers at
> some point in time).

FYI, we already use CGroups for our Linux container driver, in fact
it is mandatory, while use of CGroups with KVM is still optional at
this time.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Ben Boeckel
In gmane.linux.redhat.fedora.devel Tomas Mraz  wrote:
>> Right, as Dan Walsh mentioned, I need to separate this into two parts -
>> the front-end UI and a backend communicating via DBUS. This has been
>> a todo item.
> 
> Note that in case the services that the backend provides for the GUI are
> really powerful there is no real security benefit in separating the GUI
> and backend. 

Not to mention that if, say, KDE wants a GUI for it, or XFCE, or an
ncurses app pops up, being able to reuse the DBUS interface for doing
the actual work will make things a lot easier.

--Ben

-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Fri, 22.07.11 01:08, Karel Zak (k...@redhat.com) wrote:

> 
> On Thu, Jul 21, 2011 at 06:11:25PM +0200, Lennart Poettering wrote:
> > I'd expect people to just tell systemd about their preferred grouping
> > (if the default of sticking each service into a group of its own is not
> > good enough) using the ControlGroup= setting in unit files. This is
> > trivial to do, and will put things right from the beginning with no
> > complex moving around.
> 
>  It would be nice to have a shared library (or D-BUS interface) to
>  talk with systemd, then it would be possible to (on-the-fly)
>  create/modify services and have systemd execute processes for other
>  daemons (like crond).

You can actually do that, and there are a couple of hooks provided to
make this pretty:

a) Your app can just place a simple unit file in /run/systemd/system, and
   issue Reload() on the manager D-Bus object. Then, you can start it
   via D-Bus, and stop it, and introspect it, and all other
   operations. When you want to get rid of it again, stop it, then
   remove the file, issue another Reload(). (A rough sketch of this flow is
   shown right after this list.)

b) Write a "generator". These are small binaries you can drop into
   /lib/systemd/system-generators. They are spawned in parallel every
   time systemd loads its configuration. These generators can
   dynamically generate unit files from other configuration, and place
   them in a special dir in /run/systemd. We currently use this to
   create the appropriate units for /etc/crypttab, or to generate a
   getty on the kernel console if the kernel was booted with a special
   one. Eventually we'll probably rip the currently built-in SysV compat
   code in systemd out and make a generator out of it. So that we just
   dynamically create a systemd unit file from each SysV script. These
   generators must be very simple, as they are started very early at
   boot. They may not do IPC and they may not access any files outside
   of the root dir. They are primarily useful to dynamically translate
   foreign configuration into systemd unit files.

c) In case you have a number of very similar dynamic services to run, use
   a template. I.e. as part of your package, drop yourservice@.service
   into /lib/systemd/system. Then, when you dynamically need an instance
   of this, just start "yourservice@someparam.service". In that unit
   file you can then access "someparam" with %i. If you need to change
   more than one parameter this won't be useful to you, however.
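
As a rough illustration of option (a), here is a hedged Python sketch using
dbus-python and the org.freedesktop.systemd1 manager interface described
above (the unit name and its contents are invented for the example):

   import dbus

   # Drop a (hypothetical) unit file into the runtime unit directory ...
   unit_path = "/run/systemd/system/myapp-worker.service"
   with open(unit_path, "w") as f:
       f.write("[Service]\nExecStart=/usr/bin/myapp-worker\n")

   # ... then ask systemd to reload its configuration and start the unit.
   bus = dbus.SystemBus()
   proxy = bus.get_object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
   manager = dbus.Interface(proxy, "org.freedesktop.systemd1.Manager")
   manager.Reload()
   manager.StartUnit("myapp-worker.service", "replace")

   # Cleanup later: StopUnit("myapp-worker.service", "replace"), remove the
   # file, then Reload() again.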

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Karel Zak
On Thu, Jul 21, 2011 at 06:11:25PM +0200, Lennart Poettering wrote:
> I'd expect people to just tell systemd about their preferred grouping
> (if the default of sticking each service into a group of its own is not
> good enough) using the ControlGroup= setting in unit files. This is
> trivial to do, and will put things right from the beginning with no
> complex moving around.

 It would be nice to have a shared library (or D-BUS interface) to
 talk with systemd, then it would be possible to (on-the-fly)
 create/modify services and have systemd execute processes for other
 daemons (like crond).

Karel

-- 
 Karel Zak  
 http://karelzak.blogspot.com
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Vivek Goyal
On Thu, Jul 21, 2011 at 06:17:03PM +0200, Lennart Poettering wrote:
> On Thu, 21.07.11 15:52, Daniel P. Berrange (berra...@redhat.com) wrote:
> 
> > > I think it's a question of: do we want to make users go to a bunch of
> > > different front-end tools, which don't communicate with each other, to
> > > configure the system? I think it makes sense to have libvirt or
> > > virt-manager and the systemd front-end be able to configure cgroups, but I
> > > think it would also be nice if they could know when they step on each
> > > other. I think it would also be nice if there was a way to help better
> > > understand how the various system components are making use of cgroups
> > > and interacting. I'd like to see an integrated desktop approach - not one
> > > where separate components aren't communicating with each other.
> > 
> > It is already possible for different applications to use cgroups
> > without stepping on each other, and without requiring every app
> > to communicate with each other.
> > 
> > As an example, when it starts, libvirt will look at what cgroup
> > it has been placed in, and create the VM cgroups below this point.
> > So systemd can put libvirtd in an arbitrary location and set
> > overall limits for the virtualization service, and it will cap
> > all VMs. No direct communication between systemd & libvirt is
> > required.
> 
> systemd (when run as the user) does exactly the same thing btw. It will
> discover the group it is running in, and will create all its groups
> beneath that.
> 
> In fact, right now the cgroup hierarchy is not virtualized. To make sure
> systemd works fine in a container we employ the very same logic here: if
> the container manager started systemd in a specific cgroup, then systemd
> will do all its stuff below that, even if it is PID 1.

What does the cgroup hierarchy look like in the case of containers? I thought
libvirtd does all the container management, and libvirtd is one of the services
started by systemd.

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Vivek Goyal
On Thu, Jul 21, 2011 at 04:15:46PM -0400, Jason Baron wrote:
> On Thu, Jul 21, 2011 at 05:53:13PM +0200, Lennart Poettering wrote:
> > On Thu, 21.07.11 16:36, Daniel P. Berrange (berra...@redhat.com) wrote:
> > 
> > > IIRC, you can already set cgroups configuration in the service's
> > > systemd unit file, using something like:
> > > 
> > >  ControlGroup="cpu:/foo/bar mem:/wizz"
> > > 
> > > though I can't find the manpage right now.
> > 
> > Yes, this is supported in systemd (though without the "").
> > 
> > It's (tersely) documented in systemd.exec(5).
> > 
> > My guess is that we'll cover about 90% or so of the usecases of cgroups
> > if we make it easy to assign cgroup-based limits to VMs, to system
> > services and to users, like we can do with libvirt and systemd. There's
> > still a big chunk of the 10% who want more complex setups with more
> > arbitrary groupings, but I have my doubts the folks doing that are the
> > ones who need a UI for that.
> > 
> 
> Ok, if systemd is planning to add all these knobs and buttons to cover all of
> these cgroups use cases, then yes, we probably don't need another management
> layer nor the UI. I guess we could worry about the more complex setups when
> we better understand what they are, once systemd has added more cgroup
> support. (I only recently understood that systemd was adding all this
> cgroups support). In fact, at that point we probably don't need
> libcgroup either.

So the model seems to be that there are various components which control
their own children. 

So at the top level is systemd, which controls users, services and libvirtd,
and will provide interfaces/options to configure cgroups and resources for
its children.

Similarly, libvirtd will provide interfaces/options to configure
cgroups/resources for its children (primarily VMs, and possibly containers at
some point in time).

And down the line, if another significant component comes along, it does the
same thing.

So every component defines its own syntax and interfaces to configure
cgroups, and there is no global control. If somebody wants to manage the
system remotely, they have to decide what they want to control and then use
the API offered by the respective manager (systemd, libvirt, xyz)?

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Jason Baron
On Thu, Jul 21, 2011 at 05:53:13PM +0200, Lennart Poettering wrote:
> On Thu, 21.07.11 16:36, Daniel P. Berrange (berra...@redhat.com) wrote:
> 
> > IIRC, you can already set cgroups configuration in the service's
> > systemd unit file, using something like:
> > 
> >  ControlGroup="cpu:/foo/bar mem:/wizz"
> > 
> > though I can't find the manpage right now.
> 
> Yes, this is supported in systemd (though without the "").
> 
> It's (tersely) documented in systemd.exec(5).
> 
> My guess is that we'll cover about 90% or so of the usecases of cgroups
> if we make it easy to assign cgroup-based limits to VMs, to system
> services and to users, like we can do with libvirt and systemd. There's
> still a big chunk of the 10% who want more complex setups with more
> arbitrary groupings, but I have my doubts the folks doing that are the
> ones who need a UI for that.
> 

Ok, if systemd is planning to add all these knobs and buttons to cover all of
these cgroups use cases, then yes, we probably don't need another management
layer or the UI. I guess we could worry about the more complex setups when
we better understand what they are, once systemd has added more cgroup
support. (I only recently understood that systemd was adding all this
cgroups support). In fact, at that point we probably don't need
libcgroup either.

thanks,

-Jason
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Daniel J Walsh
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 07/21/2011 12:36 PM, Lennart Poettering wrote:
> On Thu, 21.07.11 11:28, Vivek Goyal (vgo...@redhat.com) wrote:
> 
>>> It is already possible for different applications to use cgroups 
>>> without stepping on each other, and without requiring every app 
>>> to communicate with each other.
>>> 
>>> As an example, when it starts, libvirt will look at what cgroup it
>>> has been placed in, and create the VM cgroups below this point.
>>> So systemd can put libvirtd in an arbitrary location and set
>>> overall limits for the virtualization service, and it will cap
>>> all VMs. No direct communication between systemd & libvirt is 
>>> required.
>>> 
>>> If applications similarly take care to honour the location in 
>>> which they were started, rather than just creating stuff
>>> directly in the root cgroup, they too will interoperate nicely.
>>> 
>>> This is one of the nice aspects to the cgroups hierarchy, and why
>>> having tools/daemons which try to arbitrarily re-arrange cgroups
>>> systemwide are not very desirable IMHO.
>> 
>> This will work as long as somebody has done the top level setup
>> and planning. For example, if somebody is running a bunch of virtual
>> machines and hosting some native applications and services also on
>> the machine, then he might decide that all the virt machines can
>> only use 8 out of 10 cpus and keep 2 cpus free for native
>> services.
>> 
>> In that case an admin ought to be able to do this top level
>> planning before handing out control of sub-hierarchies to
>> respective applications. Does systemd allow that as of today?
> 
> Right now, systemd only lets you place services in the cgroups
> of your choice; it will not configure any settings on those cgroups.
> (This is very likely to change soon though as there is a patch
> pending that makes a number of popular controls available as native
> config options in systemd.)
> 
> For the controllers like "cpuset" or the RT part of "cpu" where you 
> assign resources from a limited pool we currently have no solution
> at all to deal with conflicts. Neither in libcgroup and friends, nor
> in systemd, nor in libvirt.
> 
> However, I do think that figuring out the conflicts here is something
> to fix at the UI level -- and systemd itself (or libvirt) should not
> have to deal with this at all. The UIs should figure that out
> between themselves. I think it should be possible to come up with
> really simple schemes to deal with this however. For example, have
> every UI drop in some dir /var/lib or so a file encoding which
> resources have been taken and should not be available in the other UIs
> anymore, maybe with a descriptive string, so that those UIs can show
> who took it away.
> 
> However, I believe that adding something like that should be the
> last step to care for in a UI. So far systemd doesn't have any
> comprehensive UI. How to deal with conflicts between resource
> assignments is not a central problem that would justify moving all
> the consumers of cgroups on some additional middleware.
> 
>> To allow that I think systemd should either provide native
>> configuration capability or build on top of existing libcgroups
>> constructs like cgconfig, cgrules.conf to decide how an admin has
>> planned the resource management and in what cgroups services have
>> to be launched, IMHO.
> 
> For the systemd case I'd assume that a UI which wants to enable the 
> admin to control cgroup limits would just place a unit file for the 
> respective service in /etc/systemd/system, use ".include" to pull in
> the original unit file, and set the setting it wants to set.
> 
> Lennart
> 


One goal I have for sandbox would be to allow it to plug into the
cgroups hierarchy also.  So a user could say: I want to run all my
sandboxes in such a way that they can only use X% of my CPU and Y% of
memory. That would also allow a user to always be able to kill the sandbox.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk4ohGwACgkQrlYvE4MpobONCQCfSX64iqXWKCokiuG8/ehxfl3L
pTsAn3iRyJJNADkeY9O/osOrqcdPRL96
=aq0Q
-END PGP SIGNATURE-
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Thu, 21.07.11 19:23, Tomas Mraz (tm...@redhat.com) wrote:

> 
> On Thu, 2011-07-21 at 10:20 -0400, Jason Baron wrote: 
> > Hi,
> > 
> > On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> > > On Wed, 20.07.11 15:20, Jason Baron (jba...@redhat.com) wrote:
> > > 
> > > Heya,
> > > 
> > > > The memory and cpu share controllers are currently supported (I simply 
> > > > haven't
> > > > gotten to supporting other controllers yet). One can add/delete 
> > > > cgroups, change
> > > > configuration parameters, and see which processes are currently 
> > > > associated
> > > > with each cgroup. There is also a 'usage' view, which displays 
> > > > graphically the
> > > > real-time usage statistics. The cgroup configuration can then be saved
> > > > into /etc/cgconfig.conf using the 'save' menubar button.
> > > 
> > > How does it write that file? Does the UI run as root? That's not really
> > > desirable. It's not secure and it is cumbersome to mix applications
> > > running as different users in the same session and on the same X
> > > screen (since they cannot communicate with dbus, and so on).
> > 
> > Right, as Dan Walsh mentioned, I need to separate this into two parts -
> > the front-end UI and a backend communicating via DBUS. This has been
> > a todo item.
> 
> Note that in case the services that the backend provides for the GUI are
> really powerful there is no real security benefit in separating the GUI
> and backend. 
> 
> I am not sure that a cgroup manager would be such case though.
> 
> For example if the backend service is "interface to enable and configure
> various network based authentication services" as it would be in case of
> authconfig if it was split into GUI and backend, then you can easily
> instruct the backend to authenticate against a network service that will
> give you root user with no password. So we would have to trust the user
> X session that is the authconfig frontend running in anyway.

There's quite a substantial security benefit here. If you use stuff like
PK you can ensure that UIs only ask for and get very specific privileges
when interacting with the mechanism. That means even if you cannot trust
your UI it will be able to do very few bad things only. If you entire UI
runs as roo however, it can do pretty much everything it wants.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Tomas Mraz
On Thu, 2011-07-21 at 10:20 -0400, Jason Baron wrote: 
> Hi,
> 
> On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> > On Wed, 20.07.11 15:20, Jason Baron (jba...@redhat.com) wrote:
> > 
> > Heya,
> > 
> > > The memory and cpu share controllers are currently supported (I simply 
> > > haven't
> > > gotten to supporting other controllers yet). One can add/delete cgroups, 
> > > change
> > > configuration parameters, and see which processes are currently associated
> > > with each cgroup. There is also a 'usage' view, which displays 
> > > graphically the
> > > real-time usage statistics. The cgroup configuration can then be saved
> > > into /etc/cgconfig.conf using the 'save' menubar button.
> > 
> > How does it write that file? Does the UI run as root? That's not really
> > desirable. It's not secure and it is cumbersome to mix applications
> > running as different users in the same session and on the same X
> > screen (since they cannot communicate with dbus, and so on).
> 
> Right, as Dan Walsh mentioned, I need to separate this into two parts -
> the front-end UI and a backend communicating via DBUS. This has been
> a todo item.

Note that in case the services that the backend provides for the GUI are
really powerful, there is no real security benefit in separating the GUI
and backend. 

I am not sure that a cgroup manager would be such case though.

For example, if the backend service is "interface to enable and configure
various network-based authentication services", as it would be in the case of
authconfig if it were split into GUI and backend, then you can easily
instruct the backend to authenticate against a network service that will
give you a root user with no password. So we would have to trust the user
X session that the authconfig frontend is running in anyway.

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb

-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Jóhann B. Guðmundsson
On 07/21/2011 03:36 PM, Daniel P. Berrange wrote:
> IIRC, you can already set cgroups configuration in the service's
> systemd unit file, using something like:
>
>   ControlGroup="cpu:/foo/bar mem:/wizz"
>
> though I can't find the manpage right now.


man systemd.exec

or

http://0pointer.de/public/systemd-man/systemd.exec.html

JBG


-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Thu, 21.07.11 11:28, Vivek Goyal (vgo...@redhat.com) wrote:

> > It is already possible for different applications to use cgroups
> > without stepping on each other, and without requiring every app
> > to communicate with each other.
> > 
> > As an example, when it starts, libvirt will look at what cgroup
> > it has been placed in, and create the VM cgroups below this point.
> > So systemd can put libvirtd in an arbitrary location and set
> > overall limits for the virtualization service, and it will cap
> > all VMs. No direct communication between systemd & libvirt is
> > required.
> > 
> > If applications similarly take care to honour the location in
> > which they were started, rather than just creating stuff directly
> > in the root cgroup, they too will interoperate nicely.
> > 
> > This is one of the nice aspects to the cgroups hierarchy, and
> > why having tools/daemons which try to arbitrarily re-arrange
> > cgroups systemwide are not very desirable IMHO.
> 
> This will work as long as somebody has done the top level setup and
> planning. For example, if somebody is running a bunch of virtual machines
> and hosting some native applications and services also on the machine,
> then he might decide that all the virt machines can only use 8 out of
> 10 cpus and keep 2 cpus free for native services.
> 
> In that case an admin ought to be able to do this top level planning
> before handing out control of sub-hierarchies to respective applications.
> Does systemd allow that as of today?

Right now, systemd only lets you place services in the cgroups of
your choice; it will not configure any settings on those cgroups. (This
is very likely to change soon though as there is a patch pending that
makes a number of popular controls available as native config options in
systemd.)

For the controllers like "cpuset" or the RT part of "cpu" where you
assign resources from a limited pool we currently have no solution at
all to deal with conflicts. Neither in libcgroup and friends, nor in
systemd, nor in libvirt.

However, I do think that figuring out the conflicts here is something to
fix at the UI level -- and systemd itself (or libvirt) should not have to
deal with this at all. The UIs should figure that out between
themselves. I think it should be possible to come up with really simple
schemes to deal with this, however. For example, have every UI drop into
some dir (/var/lib or so) a file encoding which resources have been taken
and should not be available in the other UIs anymore, maybe with a
descriptive string, so that those UIs can show who took it away.

However, I believe that adding something like that should be the last
step to care for in a UI. So far systemd doesn't have any comprehensive
UI. How to deal with conflicts between resource assignments is not a
central problem that would justify moving all the consumers of cgroups
onto some additional middleware.

> To allow that I think systemd should either provide native configuration
> capability or build on top of existing libcgroups constructs like
> cgconfig, cgrules.conf to decide how an admin has planned the resource
> management and in what cgroups services have to be launched, IMHO.

For the systemd case I'd assume that a UI which wants to enable the
admin to control cgroup limits would just place a unit file for the
respective service in /etc/systemd/system, use ".include" to pull in the
original unit file, and set the setting it wants to set.
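
As a purely illustrative example of that pattern (the service name and cgroup
paths are invented here, and the ControlGroup= syntax is the one discussed
elsewhere in this thread), such a UI might drop the following into
/etc/systemd/system/foobar.service:

  .include /lib/systemd/system/foobar.service

  [Service]
  ControlGroup=cpu:/system/foobar blkio:/system/foobar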

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Thu, 21.07.11 15:52, Daniel P. Berrange (berra...@redhat.com) wrote:

> > I think it's a question of: do we want to make users go to a bunch of
> > different front-end tools, which don't communicate with each other, to
> > configure the system? I think it makes sense to have libvirt or
> > virt-manager and the systemd front-end be able to configure cgroups, but I
> > think it would also be nice if they could know when they step on each
> > other. I think it would also be nice if there was a way to help better
> > understand how the various system components are making use of cgroups
> > and interacting. I'd like to see an integrated desktop approach - not one
> > where separate components aren't communicating with each other.
> 
> It is already possible for different applications to use cgroups
> without stepping on each other, and without requiring every app
> to communicate with each other.
> 
> As an example, when it starts, libvirt will look at what cgroup
> it has been placed in, and create the VM cgroups below this point.
> So systemd can put libvirtd in an arbitrary location and set
> overall limits for the virtualization service, and it will cap
> all VMs. No direct communication between systemd & libvirt is
> required.

systemd (when run as the user) does exactly the same thing btw. It will
discover the group it is running in, and will create all its groups
beneath that.

In fact, right now the cgroup hierarchy is not virtualized. To make sure
systemd works fine in a container we employ the very same logic here: if
the container manager started systemd in a specific cgroup, then systemd
will do all its stuff below that, even if it is PID 1.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Thu, 21.07.11 10:20, Jason Baron (jba...@redhat.com) wrote:

> > Quite frankly, I think cgrulesd is a really bad idea, since it applies
> > control group limits after a process is already running. This is
> > necessarily racy (and adds quite a burden too, since you ask for
> > notifications on each exec()). I'd claim that cgrulesd is broken by
> > design and cannot be fixed.
> 
> I'm not going to claim that cgrulesd is perfect, but in the case where
> you have untrusted users, you can start their login session in a
> cgroup, and they can't break out of it. I agree it can be racy in the
> case where you want to then further limit that user at run-time (fork
> vs. re-assignment race). Another point is that the current situation
> can be no worse than the current unconstrained (no cgroup) case,
> especially when you take into account the fact that system services or
> 'trusted services' are going to be properly assigned. Perhaps, the
> authors of cgrulesd can further comment on this issue... 

Placing users in cgroups is not done by cgrulesd afaik. The PAM module
does that (and systemd can do that for you, too).

> > systemd is and will always have to maintain its own hierarchy
> > independently of everybody else.
> 
> My suggestion here was that systemd starts its own hierarchy in some
> default way, and then once configuration info is available it can move
> processes around as required (in most cases there would probably be no
> movement since we don't expect most users to override the defaults). 
> Doesn't it have to do this now, if the user requests some sort of
> customized cgroup configuration?

I'd expect people to just tell systemd about their preferred grouping
(if the default of sticking each service into a group of its own is not
good enough) using the ControlGroup= setting in unit files. This is
trivial to do, and will put things right from the beginning with no
complex moving around.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Lennart Poettering
On Thu, 21.07.11 16:36, Daniel P. Berrange (berra...@redhat.com) wrote:

> IIRC, you can already set cgroups configuration in the service's
> systemd unit file, using something like:
> 
>  ControlGroup="cpu:/foo/bar mem:/wizz"
> 
> though I can't find the manpage right now.

Yes, this is supported in systemd (though without the "").

It's (tersely) documented in systemd.exec(5).
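
Spelled out as it would presumably appear in a unit file (without the quotes,
and presumably with the kernel's controller name "memory" rather than "mem"),
the example above becomes:

  [Service]
  ControlGroup=cpu:/foo/bar memory:/wizz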

My guess is that we'll cover about 90% or so of the usecases of cgroups
if we make it easy to assign cgroup-based limits to VMs, to system
services and to users, like we can do with libvirt and systemd. There's
still a big chunk of the 10% who want more complex setups with more
arbitrary groupings, but I have my doubts the folks doing that are the
ones who need a UI for that.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Daniel P. Berrange
On Thu, Jul 21, 2011 at 11:28:45AM -0400, Vivek Goyal wrote:
> On Thu, Jul 21, 2011 at 03:52:03PM +0100, Daniel P. Berrange wrote:
> > On Thu, Jul 21, 2011 at 10:36:23AM -0400, Jason Baron wrote:
> > > Hi,
> > > 
> > > On Wed, Jul 20, 2011 at 07:01:30PM -0400, Matthias Clasen wrote:
> > > > On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> > > > > Hi,
> > > > > 
> > > > > I've been working on a new gui tool for managing and monitoring 
> > > > > cgroups, called
> > > > > 'cg-manager'. I'm hoping to get people interested in contributing to 
> > > > > this
> > > > > project, as well as to add to the conversation about how cgroups 
> > > > > should
> > > > > be configured and incorporated into distros.
> > > > > 
> > > > 
> > > > As a high-level comment, I don't think 'cgroup management' is a very
> > > > compelling rationale for an end-user graphical tool.
> > > > 
> > > > For most people it will be much better to expose cgroup information in
> > > > the normal process monitor. For people who want to use the specific
> > > > cgroup functionality of systemd, it will be better to have that
> > > > functionality available in a new service management frontend.
> > > 
> > > I've thought that displaying at least the cgroup that a process is part 
> > > of would
> > > be nice in the system monitor as well.
> > > 
> > > I think it's a question of: do we want to make users go to a bunch of
> > > different front-end tools, which don't communicate with each other, to
> > > configure the system? I think it makes sense to have libvirt or
> > > virt-manager and the systemd front-end be able to configure cgroups, but I
> > > think it would also be nice if they could know when they step on each
> > > other. I think it would also be nice if there was a way to help better
> > > understand how the various system components are making use of cgroups
> > > and interacting. I'd like to see an integrated desktop approach - not one
> > > where separate components aren't communicating with each other.
> > 
> > It is already possible for different applications to use cgroups
> > without stepping on each other, and without requiring every app
> > to communicate with each other.
> > 
> > As an example, when it starts, libvirt will look at what cgroup
> > it has been placed in, and create the VM cgroups below this point.
> > So systemd can put libvirtd in an arbitrary location and set
> > overall limits for the virtualization service, and it will cap
> > all VMs. No direct communication between systemd & libvirt is
> > required.
> > 
> > If applications similarly take care to honour the location in
> > which they were started, rather than just creating stuff directly
> > in the root cgroup, they too will interoperate nicely.
> > 
> > This is one of the nice aspects to the cgroups hierarchy, and
> > why having tools/daemons which try to arbitrarily re-arrange
> > cgroups systemwide are not very desirable IMHO.
> 
> This will work as long as somebody has done the top level setup and
> > planning. For example, if somebody is running a bunch of virtual machines
> and hosting some native applications and services also on the machine,
> then he might decide that all the virt machines can only use 8 out of
> 10 cpus and keep 2 cpus free for native services.
> 
> In that case an admin ought to be able to do this top level planning
> before handing out control of sub-hierarchies to respective applications.
> Does systemd allow that as of today?

IIRC, you can already set cgroups configuration in the service's
systemd unit file, using something like:

 ControlGroup="cpu:/foo/bar mem:/wizz"

though I can't find the manpage right now.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Daniel P. Berrange
On Wed, Jul 20, 2011 at 03:20:30PM -0400, Jason Baron wrote:
> However, I think we need to discuss how we envision cgroup configuration
> working longer term. The current consumers out of the box, that I'm aware of
> are libvirt and systemd. Currently, these consumers manage cgroups as they see
> fit. However, since they are managing shared resources, I think there probably
> is an argument to be made that it's useful to have some sort of global view
> of things, to understand how resources are being allocated, and thus a global
> way of configuring cgroups (as opposed to each new consumer doing its own
> thing).

One thing I should mention is that it is an explicit design goal of
libvirt that the concept of "cgroups" is not exposed to users of
the libvirt API. This is because the libvirt goal is to provide an
API / XML format that is independent of the specific implementation
that a hypervisor uses. Cgroups is an implementation detail of the
libvirt KVM & LXC drivers. The libvirt VMWare, Hyper-V, VirtualBox
drivers use completely different mechanisms to provide resource
controls. Thus the guest resource controls are expressed with a
common vocabulary, which is then internally translated into changes
in cgroups or some other mechanism, as appropriate for the driver.
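
For instance (an illustrative snippet, not taken from this thread; the element
names follow libvirt's domain XML tuning vocabulary of the time), a guest's
CPU weight is expressed in the domain XML rather than as a cgroup path, and
the KVM driver maps it onto cpu.shares internally:

  <domain type='kvm'>
    <name>demo</name>
    ...
    <cputune>
      <shares>2048</shares>
    </cputune>
    ...
  </domain>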

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Vivek Goyal
On Thu, Jul 21, 2011 at 03:52:03PM +0100, Daniel P. Berrange wrote:
> On Thu, Jul 21, 2011 at 10:36:23AM -0400, Jason Baron wrote:
> > Hi,
> > 
> > On Wed, Jul 20, 2011 at 07:01:30PM -0400, Matthias Clasen wrote:
> > > On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> > > > Hi,
> > > > 
> > > > I've been working on a new gui tool for managing and monitoring 
> > > > cgroups, called
> > > > 'cg-manager'. I'm hoping to get people interested in contributing to 
> > > > this
> > > > project, as well as to add to the conversation about how cgroups should
> > > > be configured and incorporated into distros.
> > > > 
> > > 
> > > As a high-level comment, I don't think 'cgroup management' is a very
> > > compelling rationale for an end-user graphical tool.
> > > 
> > > For most people it will be much better to expose cgroup information in
> > > the normal process monitor. For people who want to use the specific
> > > cgroup functionality of systemd, it will be better to have that
> > > functionality available in a new service management frontend.
> > 
> > I've thought that displaying at least the cgroup that a process is part of 
> > would
> > be nice in the system monitor as well.
> > 
> > I think it's a question of: do we want to make users go to a bunch of
> > different front-end tools, which don't communicate with each other, to
> > configure the system? I think it makes sense to have libvirt or
> > virt-manager and the systemd front-end be able to configure cgroups, but I
> > think it would also be nice if they could know when they step on each
> > other. I think it would also be nice if there was a way to help better
> > understand how the various system components are making use of cgroups
> > and interacting. I'd like to see an integrated desktop approach - not one
> > where separate components aren't communicating with each other.
> 
> It is already possible for different applications to use cgroups
> without stepping on each other, and without requiring every app
> to communicate with each other.
> 
> As an example, when it starts, libvirt will look at what cgroup
> it has been placed in, and create the VM cgroups below this point.
> So systemd can put libvirtd in an arbitrary location and set
> overall limits for the virtualization service, and it will cap
> all VMs. No direct communication between systemd & libvirt is
> required.
> 
> If applications similarly take care to honour the location in
> which they were started, rather than just creating stuff directly
> in the root cgroup, they too will interoperate nicely.
> 
> This is one of the nice aspects to the cgroups hierarchy, and
> why having tools/daemons which try to arbitrarily re-arrange
> cgroups systemwide are not very desirable IMHO.

This will work as long as somebody has done the top level setup and
planning. For example, if somebody is running a bunch of virtual machines
and hosting some native applications and services also on the machine,
then he might decide that all the virt machines can only use 8 out of
10 cpus and keep 2 cpus free for native services.

In that case an admin ought to be able to do this top level planning
before handing out control of sub-hierarchies to respective applications.
Does systemd allow that as of today?

Secondly, not all applications are going to do the cgroup management. So
we still need a common system-wide tool which can do the monitoring
to figure out problems, and also be able to do at least the top-level
resource planning before it allows the applications to do their own
planning within their top-level group.

To allow that, I think systemd should either provide native configuration
capability or build on top of existing libcgroups constructs like
cgconfig and cgrules.conf to decide how an admin has planned the resource
management and in which cgroups services have to be launched, IMHO.
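
For illustration, a top-level plan of the "8 of 10 cpus for virt" kind
described above might be written in libcgroup's cgconfig.conf roughly like
this (the group names are made up, and the cpuset values depend entirely on
the machine's topology):

  group virt {
          cpuset {
                  cpuset.cpus = "0-7";
                  cpuset.mems = "0";
          }
  }
  group native {
          cpuset {
                  cpuset.cpus = "8-9";
                  cpuset.mems = "0";
          }
  }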

> 
> > > The only role I could see for this kind of dedicated cgroup UI would be
> > > as a cgroup debugging aid, but is that really worth the effort,
> > > considering most cgroup developers probably prefer to use cmdline tools
> > > for that purpose?
> > > 
> > > 
> > 
> > The reason I started looking at this was b/c there were requests to be
> > able to use a GUI to configure cgroups. Correct me if I'm wrong, but  the 
> > answer
> > is go to the virt-manager gui, then the systemd front end, and then hand 
> > edit
> > cgrules.conf for custom rules. And then hope you don't start services in
> > the wrong order.
> 
> My point is that 'configure cgroups' is not really a task users would
> need to do. Going to the virt-manager GUI, then the systemd GUI, and so on
> is not likely to be a problem in real-world usage, because users' tasks
> do not require that they go through and touch every single cgroup on the
> system at once.
> 
> People who are using virtualization will already be using virt-manager
> to configure their VMs, so of course they expect to be able to control
> the VMs' resource utilization from there, rather than any external tool.

Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Vivek Goyal
On Thu, Jul 21, 2011 at 10:20:54AM -0400, Jason Baron wrote:

[..]
> > Quite frankly, I think cgrulesd is a really bad idea, since it applies
> > control group limits after a process is already running. This is
> > necessarily racy (and adds quite a burden too, since you ask for
> > notifications on each exec()). I'd claim that cgrulesd is broken by
> > design and cannot be fixed.
> 
> I'm not going to claim that cgrulesd is perfect, but in the case where
> you have untrusted users, you can start their login session in a
> cgroup, and they can't break out of it. I agree it can be racy in the
> case where you want to then further limit that user at run-time (fork
> vs. re-assignment race). Another point is that the current situation
> can be no worse than the current unconstrained (no cgroup) case,
> especially when you take into account the fact that system services or
> 'trusted services' are going to be properly assigned. Perhaps, the
> authors of cgrulesd can further comment on this issue... 

Agreed that cgrulesd reacts after the event and can be racy. It is a
best-effort kind of situation. A more foolproof way is to launch the
task in the right cgroup to begin with, and that can be done with various
other mechanisms available:

- pam plugin to put users in the right cgroup upon login
- cgexec command line tool to launch tasks in the right cgroup
- Applications making use of the libcgroup API to launch/fork tasks in
  the desired cgroup.

If none of the above is being used, then cgrulesengd works in the
background as a best effort to enforce the rules, and can easily be turned
off if need be.
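
For reference, what cgexec and the libcgroup API boil down to is joining the
cgroup between fork and exec, so the task never runs unconstrained. A minimal
sketch (illustrative Python against the cgroup v1 filesystem; the mount point
and group path in the comment are assumptions, not fixed paths):

  import os

  def spawn_in_cgroup(tasks_file, argv):
      pid = os.fork()
      if pid == 0:
          # Child: write our pid into the cgroup's tasks file *before* exec,
          # so the new program starts life already inside the right group.
          with open(tasks_file, "w") as f:
              f.write(str(os.getpid()))
          os.execvp(argv[0], argv)
      return pid

  # e.g. spawn_in_cgroup("/sys/fs/cgroup/cpu/virt/tasks", ["/usr/bin/some-task"])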

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managing cgroups

2011-07-21 Thread Daniel P. Berrange
On Thu, Jul 21, 2011 at 10:36:23AM -0400, Jason Baron wrote:
> Hi,
> 
> On Wed, Jul 20, 2011 at 07:01:30PM -0400, Matthias Clasen wrote:
> > On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> > > Hi,
> > > 
> > > I've been working on a new gui tool for managing and monitoring cgroups, 
> > > called
> > > 'cg-manager'. I'm hoping to get people interested in contributing to this
> > > project, as well as to add to the conversation about how cgroups should
> > > be configured and incorporated into distros.
> > > 
> > 
> > As a high-level comment, I don't think 'cgroup management' is a very
> > compelling rationale for an end-user graphical tool.
> > 
> > For most people it will be much better to expose cgroup information in
> > the normal process monitor. For people who want to use the specific
> > cgroup functionality of systemd, it will be better to have that
> > functionality available in a new service management frontend.
> 
> I've thought that displaying at least the cgroup that a process is part of 
> would
> be nice in the system monitor as well.
> 
> I think it's a question of: do we want to make users go to a bunch of
> different front-end tools, which don't communicate with each other, to
> configure the system? I think it makes sense to have libvirt or
> virt-manager and the systemd front-end be able to configure cgroups, but I
> think it would also be nice if they could know when they step on each
> other. I think it would also be nice if there was a way to help better
> understand how the various system components are making use of cgroups
> and interacting. I'd like to see an integrated desktop approach - not one
> where separate components aren't communicating with each other.

It is already possible for different applications to use cgroups
without stepping on each other, and without requiring every app
to communicate with each other.

As an example, when it starts, libvirt will look at what cgroup
it has been placed in, and create the VM cgroups below this point.
So systemd can put libvirtd in an arbitrary location and set
overall limits for the virtualization service, and it will cap
all VMs. No direct communication between systemd & libvirt is
required.

If applications similarly take care to honour the location in
which they were started, rather than just creating stuff directly
in the root cgroup, they too will interoperate nicely.

This is one of the nice aspects to the cgroups hierarchy, and
why having tools/daemons which try to arbitrarily re-arrange
cgroups systemwide are not very desirable IMHO.

> > The only role I could see for this kind of dedicated cgroup UI would be
> > as a cgroup debugging aid, but is that really worth the effort,
> > considering most cgroup developers probably prefer to use cmdline tools
> > for that purpose?
> > 
> > 
> 
> The reason I started looking at this was b/c there were requests to be
> able to use a GUI to configure cgroups. Correct me if I'm wrong, but  the 
> answer
> is go to the virt-manager gui, then the systemd front end, and then hand edit
> cgrules.conf for custom rules. And then hope you don't start services in
> the wrong order.

My point is that 'configure cgroups' is not really a task users would
need to do. Going to the virt-manager GUI, then the systemd GUI, and so on
is not likely to be a problem in real-world usage, because users' tasks
do not require that they go through and touch every single cgroup on the
system at once.

People who are using virtualization will already be using virt-manager
to configure their VMs, so of course they expect to be able to control
the VMs' resource utilization from there, rather than any external tool.

People who are configuring system services and want to control the
resources of a service on their system would expect to do so in the
same tool where they are enabling their service, along with changing
firewall rules for that service, all in the same place. They again would
have little need to go off to separate tools for cgroups or firewalling
while enabling services.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-21 Thread Jason Baron
Hi,

On Wed, Jul 20, 2011 at 07:01:30PM -0400, Matthias Clasen wrote:
> On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> > Hi,
> > 
> > I've been working on a new gui tool for managing and monitoring cgroups, 
> > called
> > 'cg-manager'. I'm hoping to get people interested in contributing to this
> > project, as well as to add to the conversation about how cgroups should
> > be configured and incorporated into distros.
> > 
> 
> As a high-level comment, I don't think 'cgroup management' is a very
> compelling rationale for an end-user graphical tool.
> 
> For most people it will be much better to expose cgroup information in
> the normal process monitor. For people who want to use the specific
> cgroup functionality of systemd, it will be better to have that
> functionality available in a new service management frontend.

I've thought that displaying at least the cgroup that a process is part of would
be nice in the system monitor as well.

I think it's a question of whether we want to make users go to a bunch of
different front end tools, which don't communicate with each other to
configure the system? I think it makes sense to have libvirt or
virt-manager and systemd front-end be able to configure cgroups, but I
think it would also be nice if they could know when they step on each
other. I think it would also be nice if there was a way to help better
understand how the various system components are making use of cgroups
and interacting. I'd like to see an integrated desktop approach - not one
where separate components aren't communicating with each other. 


> 
> The only role I could see for this kind of dedicated cgroup UI would be
> as a cgroup debugging aid, but is that really worth the effort,
> considering most cgroup developers probably prefer to use cmdline tools
> for that purpose?
> 
> 

The reason I started looking at this was because there were requests to be
able to use a GUI to configure cgroups. Correct me if I'm wrong, but the answer
is to go to the virt-manager GUI, then the systemd front end, and then hand-edit
cgrules.conf for custom rules. And then hope you don't start services in
the wrong order.

thanks,

-Jason
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-21 Thread Jason Baron

Hi,

On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> On Wed, 20.07.11 15:20, Jason Baron (jba...@redhat.com) wrote:
> 
> Heya,
> 
> > The memory and cpu share controllers are currently supported (I simply 
> > haven't
> > gotten to supporting other controllers yet). One can add/delete cgroups, 
> > change
> > configuration parameters, and see which processes are currently associated
> > with each cgroup. There is also a 'usage' view, which displays graphically 
> > the
> > real-time usage statistics. The cgroup configuration can then be saved
> > into /etc/cgconfig.conf using the 'save' menubar button.
> 
> How does it write that file? Does the UI run as root? That's not really
> desirable. It's not secure and it is cumbersome to mix applications
> running as different users in the same session and on the same X
> screen (since they cannot communicate with dbus, and so on).

Right, as Dan Walsh mentioned, I need to separate this into two parts -
the front-end UI and a backend communicating via DBUS. This had been
a todo item.
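
(As a rough sketch of what the front end's side of that split could look
like - the bus name, object path, method and arguments below are entirely
made up, since no such interface exists yet:)

dbus-send --system --print-reply \
  --dest=org.fedorahosted.CgManager \
  /org/fedorahosted/CgManager \
  org.fedorahosted.CgManager.SetParameter \
  string:"cpu" string:"/mygroup" string:"cpu.shares" string:"512"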

> 
> > Right now, the gui assumes that the various hierarchies are mounted 
> > separately,
> > but that the cpu and cpuacct are co-mounted. It's my understanding that this
> > is consistent with how systemd is doing things. So that's great.
> 
> In F15 we mount all controllers enabled in the kernel separately. In F16
> we'll optionally mount some of them together, and we'll probably ship a
> default of mounting cpu and cpuacct together if they are both enabled.
> 
> > Currently, the gui saves its state using cgconfig.conf, and cgrules.conf. It
> > will display what libvirtd and systemd are doing, but will not save the 
> > state
> > for any of the cgroups that are created by libvirt or systemd. So, it
> > basically ignores these directories as far as persistent configuration. 
> > Thus,
> > it should not conflict with what systemd or libvirtd are doing.
> 
> Quite frankly, I think cgrulesd is a really bad idea, since it applies
> control group limits after a process is already running. This is
> necessarily racy (and adds quite a burden too, since you ask for
> notifications on each exec()). I'd claim that cgrulesd is broken by
> design and cannot be fixed.

I'm not going to claim that cgrulesd is perfect, but in the case where
you have untrusted users, you can start their login session in a
cgroup, and they can't break out of it. I agree it can be racy in the
case where you want to then further limit that user at run-time (fork
vs. re-assignment race). Another point is that the situation
can be no worse than the current unconstrained (no cgroup) case,
especially when you take into account the fact that system services or
'trusted services' are going to be properly assigned. Perhaps the
authors of cgrulesd can further comment on this issue... 
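
(A minimal sketch of that case: a privileged session hook creates the group
and puts the session leader into it before the user's shell starts. The
paths, names, 512M limit and $SESSION_PID are all made up for illustration;
children forked from the session stay in the group automatically.)

mkdir -p /sys/fs/cgroup/memory/untrusted/alice
echo 512M > /sys/fs/cgroup/memory/untrusted/alice/memory.limit_in_bytes
echo "$SESSION_PID" > /sys/fs/cgroup/memory/untrusted/alice/tasks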


> 
> > Thus, various privileged consumers could make 'configuration requests' to a
> > common API, basically asking what's my configuration data. If there is 
> > already
> > data the consumer can proceed assigning tasks to those cgroups. Otherwise, 
> > it
> > can create a new request, which may or may not be allowed based upon the
> > available resources on the system. And each consumer can handle what it 
> > wants
> > to do in this case, which could potentially include tweaking the global
> > configuration.
> 
> systemd is the first process of the OS after the initrd finished. At
> that time no other process is running, and that means it is not
> practical to have systemd talk to any other daemon to get the most basic
> of its functionality working.
> 
> systemd is and will always have to maintain its own hierarchy
> independently of everybody else.

My suggestion here was that systemd starts its own hierarchy in some
default way, and then once configuration info is available it can move
processes around as required (in most cases there would probably be no
movement since we don't expect most users to override the defaults). 
Doesn't it have to do this now, if the user requests some sort of
customized cgroup configuration?

> 
> In fact I think running some arbitration daemon which clients talk to in
> order to get a group created is a bad idea anyway: this makes things needlessly
> complex and fragile. If the main reason to do such a thing is to get
> events when the cgroup configuration changes then I'd much rather see
> changes made to the kernel so that we can get notifications when groups
> are created or removed. That could be done via netlink. Another option
> would be to hook cgroupfs up with fanotify.
> 

The main point of the arbitration daemon is to help users configure
their system in a consistent way. For example, if systemd wants to
use cpuset to assign a certain service to, say, cpu 1, and libvirtd also
wants to assign a virtual machine to cpu 1, it would be nice to allow
the user to know there might be a conflict and either adjust his
settings or continue anyway. I think it would also be nice to see the

Re: new cg-manager gui tool for managin cgroups

2011-07-21 Thread Daniel P. Berrange
On Wed, Jul 20, 2011 at 07:01:30PM -0400, Matthias Clasen wrote:
> On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> > Hi,
> > 
> > I've been working on a new gui tool for managing and monitoring cgroups, 
> > called
> > 'cg-manager'. I'm hoping to get people interested in contributing to this
> > project, as well as to add to the conversation about how cgroups should
> > be configured and incorporated into distros.
> > 
> 
> As a high-level comment, I don't think 'cgroup management' is a very
> compelling rationale for an end-user graphical tool.
> 
> For most people it will be much better to expose cgroup information in
> the normal process monitor. For people who want to use the specific
> cgroup functionality of systemd, it will be better to have that
> functionality available in a new service management frontend.
> 
> The only role I could see for this kind of dedicated cgroup UI would be
> as a cgroup debugging aid, but is that really worth the effort,
> considering most cgroup developers probably prefer to use cmdline tools
> for that purpose?

I tend to agree. CGroups is really just a low level piece of infrastructure
to be used as a building block by higher level services like systemd or
libvirt. End users shouldn't know or care about cgroups directly, but
instead work off higher level concepts like

  "Allow this virtual machine a max 30% of total CPU time"

This kind of policy is best expressed in the virtualization management
tool, or in the system services configuration tool, or another high
level application.
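
(For comparison, here is roughly what such a policy has to be translated into
at the cgroup level. This assumes a kernel with CFS bandwidth control, i.e.
the cpu.cfs_quota_us / cpu.cfs_period_us knobs, which not every kernel of
that era had, and the group path for the VM is made up:)

# ~30% of one CPU: 30ms of runtime per 100ms period (the quota would be
# scaled by the CPU count to approximate 30% of total CPU time)
echo 100000 > /sys/fs/cgroup/cpu/libvirt/qemu/myvm/cpu.cfs_period_us
echo 30000 > /sys/fs/cgroup/cpu/libvirt/qemu/myvm/cpu.cfs_quota_us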

An end user tool for directly managing low level cgroups is not only an
inappropriate level of abstraction for users, but it will make it trivial
for users to totally screw up the use of cgroups by things like systemd /
libvirt by moving groups/processes to unexpected places.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Matthias Clasen
On Wed, 2011-07-20 at 15:20 -0400, Jason Baron wrote:
> Hi,
> 
> I've been working on a new gui tool for managing and monitoring cgroups, 
> called
> 'cg-manager'. I'm hoping to get people interested in contributing to this
> project, as well as to add to the conversation about how cgroups should
> be configured and incorporated into distros.
> 

As a high-level comment, I don't think 'cgroup management' is a very
compelling rationale for an end-user graphical tool.

For most people it will be much better to expose cgroup information in
the normal process monitor. For people who want to use the specific
cgroup functionality of systemd, it will be better to have that
functionality available in a new service management frontend.

The only role I could see for this kind of dedicated cgroup UI would be
as a cgroup debugging aid, but is that really worth the effort,
considering most cgroup developers probably prefer to use cmdline tools
for that purpose?


Matthias


-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Lennart Poettering
On Wed, 20.07.11 17:26, Vivek Goyal (vgo...@redhat.com) wrote:

> > /sys/fs/cgroup/cpu+cpuacct is the joint mount point.
> > 
> > /sys/fs/cgroup/cpu → /sys/fs/cgroup/cpu+cpuacct is a symlink.
> > 
> > /sys/fs/cgroup/cpuacct → /sys/fs/cgroup/cpu+cpuacct is a symlink.
> > 
> > That way to most applications it will be very easy to support this: they
> > can simply assume that the controller "foobar" is available under
> > /sys/fs/cgroup/foobar, and that's it.
> 
> I guess this will be reasonable. Just that applications need to handle
> the case that the directory they are about to create might already be present
> there.

systemd always uses the equivalent of "mkdir -p" to create its groups. So
at least systemd should be safe here.

> So down the road we should be able to co-mount memory and IO together with
> additional symlinks?

Yes, if you wish.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Vivek Goyal
On Wed, Jul 20, 2011 at 11:07:14PM +0200, Lennart Poettering wrote:
> On Wed, 20.07.11 16:42, Vivek Goyal (vgo...@redhat.com) wrote:
> 
> > 
> > On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> > 
> > [..]
> > > 
> > > > Right now, the gui assumes that the various hierarchies are mounted 
> > > > separately,
> > > > but that the cpu and cpuacct are co-mounted. It's my understanding that
> > > > this
> > > > is consistent with how systemd is doing things. So that's great.
> > > 
> > > In F15 we mount all controllers enabled in the kernel separately. In F16
> > > we'll optionally mount some of them together, and we'll probably ship a
> > > default of mounting cpu and cpuacct together if they are both enabled.
> > 
> > Last time we talked about the possibility of co-mounting memory and IO at
> > some point, and you said it is a bad idea from an application programming
> > point of view. Has that changed now?
> 
> Well, no, but yes.
> 
> After discussing this with Dhaval, the scheme we came up with is to add
> symlinks to /sys/fs/cgroup/ so that even when some controllers are
> mounted together they are still available at the separate
> directories. For example, if we mount cpu+cpuacct together, things will look
> like this:
> 
> /sys/fs/cgroup/cpu+cpuacct is the joint mount point.
> 
> /sys/fs/cgroup/cpu → /sys/fs/cgroup/cpu+cpuacct is a symlink.
> 
> /sys/fs/cgroup/cpuacct → /sys/fs/cgroup/cpu+cpuacct is a symlink.
> 
> That way to most applications it will be very easy to support this: they
> can simply assume that the controller "foobar" is available under
> /sys/fs/cgroup/foobar, and that's it.

I guess this will be reasonable. Just that applications need to handle
the case that the directory they are about to create might already be present
there.

So down the road we should be able to co-mount memory and IO together with
additional symlinks?

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Lennart Poettering
On Wed, 20.07.11 16:59, Vivek Goyal (vgo...@redhat.com) wrote:

> 
> On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> 
> [..]
> > systemd is and will always have to maintain its own hierarchy
> > independently of everybody else.
> 
> In the presentation today, you mentioned that you would like to create
> cgroups for users by default in cpu hierarchy (once RT time allocation
> issue is resolved). I am wondering what happens if an admin wants to
> change the policy a bit. Say give higher cpu shares to a specific
> user.

He can just go and do that. systemd assumes it is the exclusive owner of
/sys/fs/cgroup/systemd, but the other hierarchies can be manipulated by
others too. That means that if people want to create a hierarchy there,
then they can do that. If people want systemd to stop mucking with the
cpu hierarchy for all users, then this can be configured in a simple
config file.
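
(Concretely, for the "give a specific user higher cpu shares" example that
would just be a write into whatever per-user group exists in the cpu
hierarchy - the path below is an assumption, since the per-user cpu groups
are exactly the part still being sorted out:)

# give the group for uid 1000 double the default weight of 1024
echo 2048 > /sys/fs/cgroup/cpu/user/1000/cpu.shares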

> How is one supposed to do that? It looks like part of the
> control lies with systemd (as it is the one that creates user groups under
> cpu) and part of the control lies with the GUI tool/cgconfig.

Right now, systemd has few controls to actually make use of
controllers. It assumes that the limits on the controllers are already
configured by something else.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Lennart Poettering
On Wed, 20.07.11 16:42, Vivek Goyal (vgo...@redhat.com) wrote:

> 
> On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:
> 
> [..]
> > 
> > > Right now, the gui assumes that the various hierarchies are mounted 
> > > separately,
> > > but that the cpu and cpuacct are co-mounted. It's my understanding that
> > > this
> > > is consistent with how systemd is doing things. So that's great.
> > 
> > In F15 we mount all controllers enabled in the kernel separately. In F16
> > we'll optionally mount some of them together, and we'll probably ship a
> > default of mounting cpu and cpuacct together if they are both enabled.
> 
> Last time we talked about the possibility of co-mounting memory and IO at
> some point, and you said it is a bad idea from an application programming
> point of view. Has that changed now?

Well, no, but yes.

After discussing this with Dhaval, the scheme we came up with is to add
symlinks to /sys/fs/cgroup/ so that even when some controllers are
mounted together they are still available at the separate
directories. For example, if we mount cpu+cpuacct together, things will look
like this:

/sys/fs/cgroup/cpu+cpuacct is the joint mount point.

/sys/fs/cgroup/cpu → /sys/fs/cgroup/cpu+cpuacct is a symlink.

/sys/fs/cgroup/cpuacct → /sys/fs/cgroup/cpu+cpuacct is a symlink.

That way it will be very easy for most applications to support this: they
can simply assume that the controller "foobar" is available under
/sys/fs/cgroup/foobar, and that's it.
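
(Set up by hand, purely to illustrate the scheme - in practice the init
system would do this at boot:)

mkdir -p /sys/fs/cgroup/cpu+cpuacct
mount -t cgroup -o cpu,cpuacct cgroup /sys/fs/cgroup/cpu+cpuacct
ln -s cpu+cpuacct /sys/fs/cgroup/cpu
ln -s cpu+cpuacct /sys/fs/cgroup/cpuacct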

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Vivek Goyal
On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:

[..]
> systemd is and will always have to maintain its own hierarchy
> independently of everybody else.

In the presentation today, you mentioned that you would like to create
cgroups for users by default in cpu hierarchy (once RT time allocation
issue is resolved). I am wondering what happens if an admin wants to
change the policy a bit. Say give higher cpu shares to a specific
user.

Generally one should have been able to do this with the help of a GUI
tool: show the system view and allow changing the parameters (which
are persistent across reboots).

How is one supposed to do that? It looks like part of the
control lies with systemd (as it is the one that creates user groups under
cpu) and part of the control lies with the GUI tool/cgconfig.

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Vivek Goyal
On Wed, Jul 20, 2011 at 10:28:32PM +0200, Lennart Poettering wrote:

[..]
> 
> > Right now, the gui assumes that the various hierarchies are mounted 
> > separately,
> > but that the cpu and cpuacct are co-mounted. It's my understanding that this
> > is consistent with how systemd is doing things. So that's great.
> 
> In F15 we mount all controllers enabled in the kernel separately. In F16
> we'll optionally mount some of them together, and we'll probably ship a
> default of mounting cpu and cpuacct together if they are both enabled.

Last time we talked about the possibility of co-mounting memory and IO at
some point, and you said it is a bad idea from an application programming
point of view. Has that changed now?

Thanks
Vivek
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Daniel J Walsh

On 07/20/2011 03:20 PM, Jason Baron wrote:
> Hi,
> 
> I've been working on a new gui tool for managing and monitoring
> cgroups, called 'cg-manager'. I'm hoping to get people interested in
> contributing to this project, as well as to add to the conversation
> about how cgroups should be configured and incorporated into
> distros.
> 
> * Intro
> 
> I've created a Fedora hosted page for the project, including a screen
> shot at:
> 
> https://fedorahosted.org/cg-manager/
> 
> To build and run, it's:
> 
> $ git clone git://git.fedorahosted.org/cg-manager.git $ make $ sudo
> ./cg-gui
> 
> I've also setup a mailing list to discuss the gui at:
> 
> https://fedorahosted.org/mailman/listinfo/cg-manager-developers
> 
> Currently, I assume the root user, but I hope to relax that
> assumption in future versions. The program is still quite rough, but
> it's been fairly stable for me. It's a GTK 2.0 based application,
> written in C (to interface with libcgroup as much as possible).
> 
> * Brief summary of current functionality:
> 
> There are two top-level panes (which can be switched between using toggle
> buttons). The first one centers around cgroup controllers, and the
> second one is about configuring cgroup rules.
> 
> The memory and cpu share controllers are currently supported (I
> simply haven't gotten to supporting other controllers yet). One can
> add/delete cgroups, change configuration parameters, and see which
> processes are currently associated with each cgroup. There is also a
> 'usage' view, which displays graphically the real-time usage
> statistics. The cgroup configuration can then be saved into
> /etc/cgconfig.conf using the 'save' menubar button.
> 
> The rules view allows for the creation of rules, such as this process
> for this user goes into this cgroup. One can view rules by user and
> by process name. One can also save the rules configuration into
> /etc/cgrules.conf.
> 
> I've also introduced the concept of a 'logical' cgroup, which is
> incorporated into the cgroup pane and the rules pane. Basically, it
> allows you to group at most one cgroup from each hierarchy into a
> logical group. And then you can create rules that assign processes to
> that logical group.
> 
> * Future direction:
> 
> I've been working with Mairin Duffy on what the UI look and feel should
> eventually look like...Right now, I have a lot of the elements from
> Mairin's mockups in my UI, but its certainly not quite as polished.
> Mock-ups can be found at:
> 
> http://mairin.wordpress.com/2011/05/13/ideas-for-a-cgroups-ui/
> 
> * Integration with Fedora/systemd:
> 
> Right now, the gui assumes that the various hierarchies are mounted
> separately, but that the cpu and cpuacct are co-mounted. It's my
> understanding that this is consistent with how systemd is doing
> things. So that's great.
> 
> Currently, the gui saves its state using cgconfig.conf, and
> cgrules.conf. It will display what libvirtd and systemd are doing,
> but will not save the state for any of the cgroups that are created
> by libvirt or systemd. So, it basically ignores these directories as
> far as persistent configuration. Thus, it should not conflict with
> what systemd or libvirtd are doing.
> 
> However, I think we need to discuss how we envision cgroup
> configuration working longer term. The current consumers out of the
> box, that I'm aware of are libvirt and systemd. Currently, these
> consumers manage cgroups as they see fit. However, since they are
> managing shared resources, I think there probably is an argument to
> be made that its useful, to have some sort of global view of things,
> to understand how resources are being allocated, and thus a global 
> way of configuring cgroups (as opposed to each new consumer doing its
> own thing).
> 
> Thus, various privileged consumers could make 'configuration
> requests' to a common API, basically asking what's my configuration
> data. If there is already data the consumer can proceed assigning
> tasks to those cgroups. Otherwise, it can create a new request, which
> may or may not be allowed based upon the available resources on the
> system. And each consumer can handle what it wants to do in this
> case, which could potentially include tweaking the global 
> configuration.
> 
> So for example, in the case of systemd, it continues to have the
> default configuration that it currently has, but if the user has gone
> in and tweaked the global configuration, that configuration may be
> re-assigned once we're far enough along in the boot process to read
> what that configuration is.
> 
> Thanks,
> 
> -Jason
> 

Split the GUI in two: a privileged part (a DBUS auto-started service) and a
non-privileged part. X should not run as root. Then use DBUS for
communication between the GUI and the server.



Re: new cg-manager gui tool for managin cgroups

2011-07-20 Thread Lennart Poettering
On Wed, 20.07.11 15:20, Jason Baron (jba...@redhat.com) wrote:

Heya,

> The memory and cpu share controllers are currently supported (I simply haven't
> gotten to supporting other controllers yet). One can add/delete cgroups, 
> change
> configuration parameters, and see which processes are currently associated
> with each cgroup. There is also a 'usage' view, which displays graphically the
> real-time usage statistics. The cgroup configuration can then be saved
> into /etc/cgconfig.conf using the 'save' menubar button.

How does it write that file? Does the UI run as root? That's not really
desirable. It's not secure and it is cumbersome to mix applications
running as different users in the same session and on the same X
screen (since they cannot communicate with dbus, and so on).

> Right now, the gui assumes that the various hierarchies are mounted 
> separately,
> but that the cpu and cpuacct are co-mounted. It's my understanding that this
> is consistent with how systemd is doing things. So that's great.

In F15 we mount all controllers enabled in the kernel separately. In F16
we'll optionally mount some of them together, and we'll probably ship a
default of mounting cpu and cpuacct together if they are both enabled.

> Currently, the gui saves its state using cgconfig.conf, and cgrules.conf. It
> will display what libvirtd and systemd are doing, but will not save the state
> for any of the cgroups that are created by libvirt or systemd. So, it
> basically ignores these directories as far as persistent configuration. Thus,
> it should not conflict with what systemd or libvirtd are doing.

Quite frankly, I think cgrulesd is a really bad idea, since it applies
control group limits after a process is already running. This is
necessarily racy (and adds quite a burden too, since you ask for
notifications on each exec()). I'd claim that cgrulesd is broken by
design and cannot be fixed.

> Thus, various privileged consumers could make 'configuration requests' to a
> common API, basically asking what's my configuration data. If there is already
> data the consumer can proceed assigning tasks to those cgroups. Otherwise, it
> can create a new request, which may or may not be allowed based upon the
> available resources on the system. And each consumer can handle what it wants
> to do in this case, which could potentially include tweaking the global
> configuration.

systemd is the first process of the OS after the initrd finished. At
that time no other process is running, and that means it is not
practical to have systemd talk to any other daemon to get the most basic
of its functionality working.

systemd is and will always have to maintain its own hierarchy
independently of everybody else.

In fact I think running some arbitration daemon which clients talk to in
order to get a group created is a bad idea anyway: this makes things needlessly
complex and fragile. If the main reason to do such a thing is to get
events when the cgroup configuration changes then I'd much rather see
changes made to the kernel so that we can get notifications when groups
are created or removed. That could be done via netlink. Another option
would be to hook cgroupfs up with fanotify.

One of the nicer things about cgroups is that an "mkdir" is sufficient to
create a group and an "echo $PID > $GROUP/tasks" to add a process to
it. If you add complex systems on top of that which you need to talk to
instead of trivial "mkdirs" and "open()+write()" you make cgroups much
less attractive.
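
(In other words, the entire "API" is along these lines - the "example" group
name is made up:)

mkdir /sys/fs/cgroup/cpu/example                   # create a group
echo 512 > /sys/fs/cgroup/cpu/example/cpu.shares   # set a parameter
echo $$ > /sys/fs/cgroup/cpu/example/tasks         # move the current shell into it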

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel


new cg-manager gui tool for managin cgroups

2011-07-20 Thread Jason Baron
Hi,

I've been working on a new gui tool for managing and monitoring cgroups, called
'cg-manager'. I'm hoping to get people interested in contributing to this
project, as well as to add to the conversation about how cgroups should
be configured and incorporated into distros.

 * Intro

I've created a Fedora hosted page for the project, including a screen shot at:

https://fedorahosted.org/cg-manager/

To build and run, it's:

$ git clone git://git.fedorahosted.org/cg-manager.git
$ make
$ sudo ./cg-gui

I've also setup a mailing list to discuss the gui at:

https://fedorahosted.org/mailman/listinfo/cg-manager-developers

Currently, I assume the root user, but I hope to relax that assumption in
future versions. The program is still quite rough, but it's been fairly stable
for me. It's a GTK 2.0-based application, written in C (to interface with
libcgroup as much as possible).

 * Brief summary of current functionality:

There are two top-level panes (which can be switched between using toggle buttons).
The first one centers around cgroup controllers, and the second one is about
configuring cgroup rules.

The memory and cpu share controllers are currently supported (I simply haven't
gotten to supporting other controllers yet). One can add/delete cgroups, change
configuration parameters, and see which processes are currently associated
with each cgroup. There is also a 'usage' view, which displays graphically the
real-time usage statistics. The cgroup configuration can then be saved
into /etc/cgconfig.conf using the 'save' menubar button.
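
(For reference, the rough command-line equivalent of those operations using
the libcgroup tools - the group name and values are made up, and the GUI
writes cgconfig.conf itself rather than going through cgsnapshot:)

cgcreate -g cpu,memory:/demo                # add a cgroup in two hierarchies
cgset -r cpu.shares=512 demo                # change a configuration parameter
cgset -r memory.limit_in_bytes=256M demo
cgsnapshot > /etc/cgconfig.conf             # dump the current layout to the config file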

The rules view allows for the creation of rules, such as 'this process for this
user goes into this cgroup'. One can view rules by user and by process name. One
can also save the rules configuration into /etc/cgrules.conf.
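
(Illustrative cgrules.conf entries for rules of that shape - the user,
process and group names are made up:)

# <user>[:<process>]   <controllers>    <destination>
alice                  cpu,memory       users/alice
*:mysqld               blkio            daemons/db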

I've also introduced the concept of a 'logical' cgroup, which is incorporated
into the cgroup pane and the rules pane. Basically, it allows you to group at
most one cgroup from each hierarchy into a logical group. And then you can
create rules that assign processes to that logical group.
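
(Underneath, applying such a logical group to a process amounts to
classifying one PID into a specific group in each member hierarchy, e.g.
with the libcgroup tools - the group names and PID are made up:)

cgclassify -g cpu:/interactive -g memory:/desktop 4242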

 * Future direction:

I've been working with Mairin Duffy on what the UI look and feel should
eventually look like... Right now, I have a lot of the elements from Mairin's
mockups in my UI, but it's certainly not quite as polished. Mock-ups can be found at:

http://mairin.wordpress.com/2011/05/13/ideas-for-a-cgroups-ui/

 * Integration with Fedora/systemd:

Right now, the gui assumes that the various hierarchies are mounted separately,
but that the cpu and cpuacct are co-mounted. It's my understanding that this
is consistent with how systemd is doing things. So that's great.

Currently, the gui saves its state using cgconfig.conf and cgrules.conf. It
will display what libvirtd and systemd are doing, but will not save the state
for any of the cgroups that are created by libvirt or systemd. So, it
basically ignores these directories as far as persistent configuration goes. Thus,
it should not conflict with what systemd or libvirtd are doing.

However, I think we need to discuss how we envision cgroup configuration
working longer term. The current out-of-the-box consumers that I'm aware of
are libvirt and systemd. Currently, these consumers manage cgroups as they see
fit. However, since they are managing shared resources, I think there probably
is an argument to be made that it's useful to have some sort of global view
of things, to understand how resources are being allocated, and thus a global
way of configuring cgroups (as opposed to each new consumer doing its own
thing).

Thus, various privileged consumers could make 'configuration requests' to a
common API, basically asking what their configuration data is. If there is
already data, the consumer can proceed to assign tasks to those cgroups. Otherwise, it
can create a new request, which may or may not be allowed based upon the
available resources on the system. And each consumer can handle what it wants
to do in this case, which could potentially include tweaking the global
configuration.

So for example, in the case of systemd, it continues to have the default
configuration that it currently has, but if the user has gone in and tweaked
the global configuration, that configuration may be re-assigned once we're far
enough along in the boot process to read what that configuration is.

Thanks,

-Jason

-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel