Kir Kolyshkin wrote:
> OK, looks like we came to the consensus and our so-called "annual
> containers breakfast at OLS" (TM) event will happen tomorrow (Thursday)
> morning, at 8:30am, Starbucks cafe near the Rideau Centre.
I went there today and it's a small place. It's fine to have a coffee
stan
Jeff Garzik wrote:
> Eric W. Biederman wrote:
>
>>Jeff Garzik <[EMAIL PROTECTED]> writes:
>>
>>
>>>David Miller wrote:
>>>
I don't accept that we have to add another function argument
to a bunch of core routines just to support this crap,
especially since you give no way to turn it off
Kirill Korotaev wrote:
Patrick McHardy wrote:
I believe OpenVZ stores the current namespace somewhere global,
which avoids passing the namespace around. Couldn't you do this
as well?
yes, we store a global namespace context on current
(can be stored in per-cpu as well).
do you prefer
Ben Greear wrote:
> Patrick McHardy wrote:
>
>>Eric W. Biederman wrote:
>>
>>
>>>-- The basic design
>>>
>>>There will be a network namespace structure that holds the global
>>>variables for a network namespace, making those global variables
>>>per network namespace.
>>>
>>>One of those per netw
Patrick McHardy wrote:
> Eric W. Biederman wrote:
>
>>-- The basic design
>>
>>There will be a network namespace structure that holds the global
>>variables for a network namespace, making those global variables
>>per network namespace.
>>
>>One of those per network namespace global variables will
Patrick McHardy wrote:
> Patrick McHardy wrote:
>
> Ideally we should do something like this I think (please let it be
> correct :)):
>
> [...]
> So we always walk chains up to the end and NF_CT_EVICTION_RANGE is
> just a minimum. This ensures we will always get the last entry *and*
> we won't sc
Patrick McHardy wrote:
> Vasily Averin wrote:
>
>>Patrick McHardy wrote:
>>
>>
>>>+ for (i = 0; i < nf_conntrack_htable_size; i++) {
>>>+ 	hlist_for_each_entry(h, n, &nf_conntrack_hash[hash], hnode) {
>>>+ 		tmp = nf_ct_tuplehash_to_ctrack(h);
>>>+
Vasily Averin wrote:
> Patrick McHardy wrote:
>
>>+ for (i = 0; i < nf_conntrack_htable_size; i++) {
>>+ hlist_for_each_entry(h, n, &nf_conntrack_hash[hash], hnode) {
>>+ tmp = nf_ct_tuplehash_to_ctrack(h);
>>+ if (!test_bit(IPS_ASSURED_BIT,
Patrick McHardy wrote:
> Vasily Averin wrote:
>> Patrick McHardy wrote:
> -static int early_drop(struct hlist_head *chain)
> +static int early_drop(unsigned int hash)
> {
> /* Use oldest entry, which is roughly LRU */
> struct nf_conntrack_tuple_hash *h;
> struct nf_conn *ct = NU
[Dropping a few CCs since I suspect it's beginning to be annoying :)]
Patrick McHardy wrote:
> Vasily Averin wrote:
>
> Indeed, thanks. Fixed now. Also changed it to leave the loop
> if we found an entry within a chain (we want the last one of
> the chain, so we still walk it entirely) and replaced
Vasily Averin wrote:
> Patrick McHardy wrote:
>
> it is incorrect again: when cnt=0 you should break out of both loops.
Indeed, thanks. Fixed now. Also changed it to leave the loop
if we found an entry within a chain (we want the last one of
the chain, so we still walk it entirely) and replaced
hash =
Patrick McHardy wrote:
> Vasily Averin wrote:
>> it is incorrect:
>> we should count the number of checked _conntracks_, but you count the
>> number of hash buckets, i.e. "i" should be incremented/checked inside
>> the nested loop.
>
>
> I misunderstood your patch then. This one should be better.
Vasily Averin wrote:
> Patrick McHardy wrote:
>
>>+ for (i = 0; i < NF_CT_EVICTION_RANGE; i++) {
>>+ hlist_for_each_entry(h, n, &nf_conntrack_hash[hash], hnode) {
>>+ tmp = nf_ct_tuplehash_to_ctrack(h);
>>+ if (!test_bit(IPS_ASSURED_BIT, &tmp
Patrick McHardy wrote:
> + for (i = 0; i < NF_CT_EVICTION_RANGE; i++) {
> + hlist_for_each_entry(h, n, &nf_conntrack_hash[hash], hnode) {
> + tmp = nf_ct_tuplehash_to_ctrack(h);
> + if (!test_bit(IPS_ASSURED_BIT, &tmp->status))
> +
No, meminfo virtualization works perfectly.
# uname -a
Linux vz-etch 2.6.18-028stab035.1-ovz-smp #1 SMP Wed Jun 13 22:08:06 CEST
2007 i686 GNU/Linux
- Dietmar
> Perhaps you are using some old kernel which does not have
> meminfo virtualization. What's your kernel version?
Dietmar Maurer wrote:
OK, now the VPS stops correctly, but I get the following error:
vzctl set 777 --userpasswd root:test
Starting VE ...
VE is mounted
VE start in progress...
Stopping VE ...
VE was stopped
VE is unmounted
Configure meminfo: 309371
Unable to set meminfo: Invalid argument
Pe
Patrick McHardy wrote:
> Vasily Averin wrote:
>
>>When the number of conntracks reaches the nf_conntrack_max limit, early_drop()
>>tries to free one of the already-used conntracks. If it does not find any
>>conntracks that may be freed, this leads to transmission errors.
>>In current implementation th
Vasily Averin wrote:
> When the number of conntracks reaches the nf_conntrack_max limit, early_drop()
> tries to free one of the already-used conntracks. If it does not find any
> conntracks that may be freed, this leads to transmission errors.
> In the current implementation the conntracks are searched in
When the number of conntracks reaches the nf_conntrack_max limit, early_drop()
tries to free one of the already-used conntracks. If it does not find any
conntracks that may be freed, this leads to transmission errors.
In the current implementation the conntracks are searched in one hash bucket only.
It have s