Re: [Vserver] network isolation implementation - pros and cons

2006-07-27 Thread Matt Rechenburg

yep, to have generic device names inside the partitions.
Something like eth1, eth2, eth3... instead of eth0:1, eth0:2, eth0:3, ...
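
(Purely illustrative sketch of the difference, with made-up device
names and addresses; tunctl is the tap helper from the uml-utilities
package, and wiring the tap device to the real network is what a tool
like veth then takes care of:)

# ifconfig eth0:1 10.1.1.2 netmask 255.255.255.0 up   <- classic alias, guest sees eth0:1
# tunctl -t eth1                                      <- tap device with its own generic name
# ifconfig eth1 10.1.1.2 netmask 255.255.255.0 up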

greetz,

Matt

On 7/27/06, Daniel Lezcano <[EMAIL PROTECTED]> wrote:

Matt Rechenburg wrote:
> Hi Daniel,
>
> I found veth very useful (http://www.geocities.com/nestorjpg/veth/).
> It is a user-space tool which creates separate virtual interfaces
> using tun/tap.
>
> stay tuned,

Hi Matt,

I downloaded and played with it. Very interesting, thanks.

Vserver usually uses IP aliasing, hiding the IPs from all the other
vservers, right? Do you know for which reasons someone would use
veth instead of the vserver mechanism? (I mean reasons other than
dhcp)

   -- Daniel

___
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver


Re: [Vserver] failed on "vyum gast -- install yum" :-(

2006-07-27 Thread Guenther Fuchs
Hi there,

on Wednesday, July 26, 2006 at 5:28:36 PM the following was posted:

JP> When I want to install yum package management in my vserver I get
JP> the following error:
...
JP> Execution will continue in 5 seconds...
JP> Cannot find a valid baseurl for repo: core
JP> Error: Cannot find a valid baseurl for repo: core

This seems very much like the same problem we had with FC4 - Daniel had
a patch for it, but I can't find it at the moment.

One workaround (which I recently found after your post) is to place
the "yum" package into the distribution base files, of course only
when building a guest where you want to use internal package
management and therefore want yum installed.

When using the RPM installation described in the wiki, the place to
add the yum package is the distributions directory under
/usr/lib/util-vserver/:

# echo yum > /usr/lib/util-vserver/distributions/fc5/pkgs/99

(this creates a file "99" containing only the "yum" package, which
will then be installed at build time)
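
(If you want to double-check what the build will pull in, just look at
the whole package list for that distribution, e.g.:)

# cat /usr/lib/util-vserver/distributions/fc5/pkgs/*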

Hopefully Daniel will find some time to reply with the patch, so that
it can be included in the howto.

JP> What is wrong? Has anyone been able to install vservers on FC5?

I really don't know why this is broken again; it was working before,
as I have some FC5 guests that were built earlier. Somehow it doesn't
work here anymore either, so it's obviously due to a Fedora patch, I
guess ...

-- 
regards 'n greez,

Guenther Fuchs
(aka "muh" and "powerfox")



Re: [Vserver] Shares and Reservations in the token-bucket-algorithm

2006-07-27 Thread Herbert Poetzl
On Thu, Jul 27, 2006 at 02:32:45PM +0200, Wilhelm Meier wrote:
> Am Donnerstag, 27. Juli 2006 13:20 schrieb Herbert Poetzl:
> > On Thu, Jul 27, 2006 at 09:50:37AM +0200, Wilhelm Meier wrote:
> > > Am Mittwoch, 26. Juli 2006 22:04 schrieb Herbert Poetzl:
> > > > > > well, not everything solaris does is a good idea per se
> > > > > > and I think the current hard cpu scheduler is much
> > > > > > more powerful than the solaris proportional stuff
> > > > > > (i.e. you can consider the solaris settings a subset
> > > > > > of what you can achieve with the hard cpu scheduler)
> > > > >
> > > > > If the solaris fss is a subset, how do I set the token
> > > > > bucket values? With 3 vservers and 0.5 , 0.25 , 0.25 as
> > > > > (fillrate/intervall) for these, how do I get 2/3 for the first
> > > > > and 1/3 for the second vserver if the third vserver is idle as
> > > > > in the solaris fss case? Where is the hidden parameter?
> > > >
> > > > I take it that you 'simply' want fair scheduling
> > > > between three guests in a 2:1:1 ratio when all three
> > > > guests are hogging cpu, yes?
> > >
> > > Yes, the above settings of (fillrate/intervall) should achieve this.
> > >
> > > > in this case you simply forget the maximum values
> > > > i.e. set them to something very low to give some
> > > > kind of minimum amount of tokens per time unit, e.g.
> > > > 1/100 and use a set of 2:1:1 for the idle time values
> > > > e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time
> > > > is enabled for those guests, they will run in the
> > > > specified ratio, as long as host processes do not
> > > > consume cpu resources, in which case the remaining
> > > > cpu resources will be divided 2:1:1 between them
> > >
> > > But what happens if the third vserver falls idle?
> > >
> > > In the solaris case the remaining two vservers would get 2/3 and
> > > 1/3 of the cpus (if the host itself is idle as well).
> >
> > same here, as there are 2/6 added for each idle time tick to
> > the first guest and 1/6 for the second one, which is still
> > 2:1 as in your 2/3 and 1/3 example ...
> >
> > > In the vserver case we get idle time now: 1/4 with the values above.
> >
> > how do you come to this conclusion?
> 
> I try to summarize: we have three vservers with 1/2, 1/4, 1/4 as
> values for (fillrate/intervall)_1 (not idle time).
>
> If all three vservers have runnable processes, they get 1/2, 1/4, 1/4
> of a cpu (if we have more than one cpu we scale the values to sum up
> to the number of cpus).
> 
> If the third vserver now goes idle, the first and second vserver still
> get 1/2 and 1/4 of a cpu (if they have tokensmax in the bucket, they
> can get more of a cpu for a limited burst-time). In the long run there
> is 1/4 of a cpu idle.
> 
> If we have set up the idle time bucket and the three vservers have
> the (fillrate/intervall)_2 values 1/2, 1/4, 1/4 for this also, the
> remaining 1/4 of the cpu (which is left idle from the normal token
> bucket) is given to the vservers.
> 
> If the third vserver is still idle, the first vserver gets 1/4 * 1/2
> from the idle token bucket and this sums to 1/2 + 1/8 = 5/8. The
> second vserver sums up to 1/4 + 1/4*1/4 = 5/16.
> 
> So the ratio between the first and second vserver is still 2:1. But
> we left 1/16 of a cpu idle. And this leads to a recursion, and then
> the two active vservers get 2/3 and 1/3 of a cpu. Yes, I think I got
> it ;-)

> Thank you!

you're welcome!

> Is this type of scheduler already in the stable version?

it is in the _very stable_ development branch :)

HTH,
Herbert

> ...
> >
> > as usual, all the features are supported by my 'hack' tools
> > in various forms, this one is probably best to control with
> > the vsched (0.02) or the vcmd tool (which is non trivial to
> > use, I guess), but it would definitely be better to get this
> > functionality into mainline userspace tools ...
> >
> >   http://vserver.13thfloor.at/Experimental/TOOLS/
> 
> Thank you for the hint!
> 
> -- 
> Wilhelm


Re: [Vserver] network isolation implementation - pros and cons

2006-07-27 Thread Daniel Lezcano

Matt Rechenburg wrote:

> Hi Daniel,
>
> I found veth very useful (http://www.geocities.com/nestorjpg/veth/).
> It is a user-space tool which creates separate virtual interfaces
> using tun/tap.
>
> stay tuned,


Hi Matt,

I downloaded and played with it. Very interesting, thanks.

Vserver usually uses IP aliasing, hiding the IPs from all the other
vservers, right? Do you know for which reasons someone would use
veth instead of the vserver mechanism? (I mean reasons other than
dhcp)


  -- Daniel



Re: [Vserver] Shares and Reservations in the token-bucket-algorithm

2006-07-27 Thread Wilhelm Meier
Am Donnerstag, 27. Juli 2006 13:20 schrieb Herbert Poetzl:
> On Thu, Jul 27, 2006 at 09:50:37AM +0200, Wilhelm Meier wrote:
> > Am Mittwoch, 26. Juli 2006 22:04 schrieb Herbert Poetzl:
> > > > > well, not everything solaris does is a good idea per se
> > > > > and I think the current hard cpu scheduler is much
> > > > > more powerful than the solaris proportional stuff
> > > > > (i.e. you can consider the solaris settings a subset
> > > > > of what you can achieve with the hard cpu scheduler)
> > > >
> > > > If the solaris fss is a subset, how do I set the token
> > > > bucket values? With 3 vservers and 0.5 , 0.25 , 0.25 as
> > > > (fillrate/intervall) for these, how do I get 2/3 for the first and
> > > > 1/3 for the second vserver if the third vserver is idle as in the
> > > > solaris fss case? Where is the hidden parameter?
> > >
> > > I take it that you 'simply' want fair scheduling
> > > between three guests in a 2:1:1 ratio when all three
> > > guests are hogging cpu, yes?
> >
> > Yes, the above settings of (fillrate/intervall) should achieve this.
> >
> > > in this case you simply forget the maximum values
> > > i.e. set them to something very low to give some
> > > kind of minimum amount of tokens per time unit, e.g.
> > > 1/100 and use a set of 2:1:1 for the idle time values
> > > e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time
> > > is enabled for those guests, they will run in the
> > > specified ratio, as long as host processes do not
> > > consume cpu resources, in which case the remaining
> > > cpu resources will be divided 2:1:1 between them
> >
> > But what happens if the third vserver falls idle?
> >
> > In the solaris case the remaining two vservers would get 2/3 and 1/3
> > of the cpus (if the host itself is idle as well).
>
> same here, as there are 2/6 added for each idle time tick to
> the first guest and 1/6 for the second one, which is still
> 2:1 as in your 2/3 and 1/3 example ...
>
> > In the vserver case we get idle time now: 1/4 with the values above.
>
> how do you come to this conclusion?

I try to summarize: we have three vservers with 1/2, 1/4, 1/4 as values for 
(fillrate/intervall)_1 (not idle time).

If all three vservers have runnable processes, they get 1/2, 1/4, 1/4 of a cpu 
(if we have more than one cpu we scale the values to sum up to the number of 
cpus).

If the third vserver now goes idle, the first and second vserver still get
1/2 and 1/4 of a cpu (if they have tokensmax in the bucket, they can get more
of a cpu for a limited burst-time). In the long run there is 1/4 of a cpu
idle.

If we have set up the idle time bucket and the three vservers have the
(fillrate/intervall)_2 values 1/2, 1/4, 1/4 for this also, the remaining 1/4
of the cpu (which is left idle from the normal token bucket) is given to the
vservers.

If the third vserver is still idle, the first vserver gets 1/4 * 1/2 from the
idle token bucket and this sums to 1/2 + 1/8 = 5/8. The second vserver sums
up to 1/4 + 1/4*1/4 = 5/16.

So the ratio between the first and second vserver is still 2:1. But we left
1/16 of a cpu idle. And this leads to a recursion, and then the two active
vservers get 2/3 and 1/3 of a cpu. Yes, I think I got it ;-)
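
(Written out, the recursion is a geometric series: in every round the two
active vservers pick up 1/2 and 1/4 of whatever was still left idle in the
previous round, so

\[
  \frac{1}{2}\left(1 + \frac{1}{4} + \frac{1}{16} + \dots\right)
    = \frac{1}{2}\cdot\frac{1}{1 - 1/4} = \frac{2}{3},
  \qquad
  \frac{1}{4}\cdot\frac{1}{1 - 1/4} = \frac{1}{3}
\]

which is exactly the 2/3 and 1/3 split.)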
Thank you!

Is this type of scheduler already in the stable version?

>
...
>
> as usual, all the features are supported by my 'hack' tools
> in various forms, this one is probably best to control with
> the vsched (0.02) or the vcmd tool (which is non trivial to
> use, I guess), but it would definitely be better to get this
> functionality into mainline userspace tools ...
>
>   http://vserver.13thfloor.at/Experimental/TOOLS/

Thank you for the hint!

-- 
Wilhelm


Re: [Vserver] Shares and Reservations in the token-bucket-algorithm

2006-07-27 Thread Herbert Poetzl
On Thu, Jul 27, 2006 at 09:50:37AM +0200, Wilhelm Meier wrote:
> Am Mittwoch, 26. Juli 2006 22:04 schrieb Herbert Poetzl:
> > > >
> > > > well, not everything solaris does is a good idea per se
> > > > and I think the current hard cpu scheduler is much
> > > > more powerful than the solaris proportional stuff
> > > > (i.e. you can consider the solaris settings a subset
> > > > of what you can achieve with the hard cpu scheduler)
> > >
> > > If the solaris fss is a subset, how do I set the token
> > > bucket values? With 3 vservers and 0.5 , 0.25 , 0.25 as
> > > (fillrate/intervall) for these, how do I get 2/3 for the first and
> > > 1/3 for the second vserver if the third vserver is idle as in the
> > > solaris fss case? Where is the hidden parameter?
> >
> > I take it that you 'simply' want fair scheduling
> > between three guests in a 2:1:1 ratio when all three
> > guests are hogging cpu, yes?

> Yes, the above settings of (fillrate/intervall) should achieve this. 

> > in this case you simply forget the maximum values
> > i.e. set them to something very low to give some
> > kind of minimum amount of tokens per time unit, e.g.
> > 1/100 and use a set of 2:1:1 for the idle time values
> > e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time
> > is enabled for those guests, they will run in the
> > specified ratio, as long as host processes do not
> > consume cpu resources, in which case the remaining
> > cpu resources will be divided 2:1:1 between them
> 
> But what happens if the third vserver falls idle?

> In the solaris case the remaining two vservers would get 2/3 and 1/3
> of the cpus (if the host itself is idle as well).

same here, as there are 2/6 added for each idle time tick to
the first guest and 1/6 for the second one, which is still
2:1 as in your 2/3 and 1/3 example ...

> In the vserver case we get idle time now: 1/4 with the values above.  

how do you come to this conclusion?

> Then this idle time is shared among the three vservers according to the
> (fillrate/intervall)_2 for idle time?

idle time is advanced when the cpu would otherwise get
idle (i.e. _no_ process would be scheduled)

> Well, I don't think this is the same as in the solaris case. 
> (I mention the solaris case here because in some use cases this 
> makes sense, not because I think it is superior).

I still fail to see the difference ...   

> I want to get my understanding of vserver-scheduling right.)
> Where can one enable the idle time token bucket? 

> The user-level tools don't seem to have support for this.

as usual, all the features are supported by my 'hack' tools
in various forms, this one is probably best to control with
the vsched (0.02) or the vcmd tool (which is non trivial to
use, I guess), but it would definitely be better to get this 
functionality into mainline userspace tools ...

  http://vserver.13thfloor.at/Experimental/TOOLS/
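
(For the normal, non-idle bucket the 2:1:1 example can already be
expressed with the stock per-guest config tree; a rough sketch only,
assuming the usual /etc/vservers/<guest>/sched/ layout and made-up
guest names, while the idle time values still need the experimental
tools above:)

# echo 2 > /etc/vservers/guest1/sched/fill-rate
# echo 4 > /etc/vservers/guest1/sched/interval
# echo 1 > /etc/vservers/guest2/sched/fill-rate
# echo 4 > /etc/vservers/guest2/sched/interval
# echo 1 > /etc/vservers/guest3/sched/fill-rate
# echo 4 > /etc/vservers/guest3/sched/interval

(plus the "sched_hard" flag in /etc/vservers/<guest>/flags to make the
limit a hard one)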

best,
Herbert

> -- 
> Wilhelm 


Re: [Vserver] Shares and Reservations in the token-bucket-algorithm

2006-07-27 Thread Wilhelm Meier
Am Mittwoch, 26. Juli 2006 22:04 schrieb Herbert Poetzl:
> > >
> > > well, not everything solaris does is a good idea per se
> > > and I think the current hard cpu scheduler is much
> > > more powerful than the solaris proportional stuff
> > > (i.e. you can consider the solaris settings a subset
> > > of what you can achieve with the hard cpu scheduler)
> >
> > If the solaris fss is a subset, how do I set the token bucket values?
> > With 3 vservers and 0.5 , 0.25 , 0.25 as (fillrate/intervall) for
> > these, how do I get 2/3 for the first and 1/3 for the second vserver
> > if the third vserver is idle as in the solaris fss case? Where is the
> > hidden parameter?
>
> I take it that you 'simply' want fair scheduling
> between three guests in a 2:1:1 ratio when all three
> guests are hogging cpu, yes?

Yes, the above settings of (fillrate/intervall) should achieve this. 

>
> in this case you simply forget the maximum values
> i.e. set them to something very low to give some
> kind of minimum amount of tokens per time unit, e.g.
> 1/100 and use a set of 2:1:1 for the idle time values
> e.g. 1/3, 1/6 and 1/6. once hard cpu and idle time
> is enabled for those guests, they will run in the
> specified ratio, as long as host processes do not
> consume cpu resources, in which case the remaining
> cpu resources will be divided 2:1:1 between them

But what happens if the third vserver falls idle? In the solaris case the
remaining two vservers would get 2/3 and 1/3 of the cpus (if the host itself
is idle as well).
In the vserver case we get idle time now: 1/4 with the values above. Then this
idle time is shared among the three vservers according to the
(fillrate/intervall)_2 for idle time? Well, I don't think this is the same as
in the solaris case. (I mention the solaris case here because in some use
cases this makes sense, not because I think it is superior. I want to get my
understanding of vserver-scheduling right.)

Where can one enable the idle time token bucket? The user-level tools don't 
seem to have support for this.

-- 
Wilhelm 