Re: [PATCH] Make math_state_restore() save and restore the interrupt flag

2014-02-02 Thread Pekka Riikonen

On Sat, 1 Feb 2014, Linus Torvalds wrote:


We could do that with the whole "task_work" thing (or perhaps just
do_notify_resume(), especially after merging the "don't necessarily
return with iret" patch I sent out earlier), with additionally making
sure that scheduling does the right thing wrt a "currently dirty math
state due to kernel use".

The advantage of that would be that we really could do a *lot* of FP
math very cheaply in the kernel, because we'd pay the overhead of
kernel_fpu_begin/end() just once (well, the "end" part would be just
setting the bit that we now have dirty state, the cost would be in the
return-to-user-space-and-restore-fp-state part).

Comments? That would be much more invasive than just changing
__kernel_fpu_end(), but would bring in possibly quite noticeable
advantages under loads that use the FP/vector resources in the kernel.

This would be very good, and it needs to work in interrupt context
(softirq) also, and when we interrupt the idle task.  With networking we
can really hit kernel_fpu_begin()/end() millions of times per second, and
there's really only a need to do it once per interrupt.  This is actually
similar to what I was doing (in do_softirq()) when I noticed eagerfpu was
broken, and Nate's bug AFAICS now happens there as well.
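
To make the proposal concrete, here is a minimal sketch of the idea (all
names below are hypothetical, none of this is existing kernel API):

#include <linux/percpu.h>

/*
 * Sketch: make the "end" side cheap by only marking the FPU state
 * dirty, and pay the restore cost once on the way back to user space
 * instead of in every kernel_fpu_begin()/end() pair.
 */
static DEFINE_PER_CPU(bool, kernel_fpu_dirty);

static void sketch_kernel_fpu_end(void)
{
	/* Don't restore the user FPU state here; just remember that the
	 * registers currently hold kernel data. */
	__this_cpu_write(kernel_fpu_dirty, true);
}

static void sketch_return_to_user_fixup(void)
{
	/* Run once from the return-to-user path (do_notify_resume() or
	 * similar), so a softirq doing millions of begin/end pairs per
	 * second pays for the user-state restore only once. */
	if (__this_cpu_read(kernel_fpu_dirty)) {
		__this_cpu_write(kernel_fpu_dirty, false);
		math_state_restore();
	}
}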


Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v3] Fix lockup related to stop_machine being stuck in __do_softirq.

2013-06-06 Thread Pekka Riikonen

On Thu, 6 Jun 2013, Tejun Heo wrote:


On Thu, Jun 06, 2013 at 02:29:49PM -0700, gree...@candelatech.com wrote:

From: Ben Greear <gree...@candelatech.com>

The stop machine logic can lock up if all but one of
the migration threads make it through the disable-irq
step and the one remaining thread gets stuck in
__do_softirq.  The reason __do_softirq can hang is
that it has a bail-out based on jiffies timeout, but
in the lockup case, jiffies itself is not incremented.

To work around this, re-add the max_restart counter in __do_softirq
and stop processing softirqs after 10 restarts.

Thanks to Tejun Heo and Rusty Russell and others for
helping me track this down.

This was introduced in 3.9 by commit:  c10d73671ad30f5469
(softirq:  reduce latencies).

It may be worth looking into ath9k to see if it has issues with
its irq handler at a later date.

The hang stack traces look something like this:

...

Signed-off-by: Ben Greear <gree...@candelatech.com>


Acked-by: Tejun Heo <t...@kernel.org>

Linus, while this doesn't fix the root cause of the problem - softirq
runaway - I still think this is a worthwhile protection to have.  Ben
is in the process of finding out why the softirq runaway happens in
the first place.  We probably want to add Cc: sta...@vger.kernel.org
tag.

The counter also helps to keep the interrupted task interrupted for a
shorter period of time.  10 iterations may be a lot shorter than the 2 ms,
or 10 ms with HZ=100, so it helps interactivity as well.  This is a good
change to bring back in any case.
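
For reference, the shape of the bail-out logic with the counter re-added
looks roughly like this (a simplified sketch, not the committed code;
handle_pending_softirqs() is just a stand-in for the real per-vector loop,
and wakeup_softirqd() is the softirq.c-internal helper):

#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/sched.h>

#define MAX_SOFTIRQ_TIME	msecs_to_jiffies(2)
#define MAX_SOFTIRQ_RESTART	10

static void handle_pending_softirqs(void);	/* stand-in */

static void softirq_loop_sketch(void)
{
	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
	int max_restart = MAX_SOFTIRQ_RESTART;

restart:
	handle_pending_softirqs();

	if (local_softirq_pending()) {
		/*
		 * The time-based check alone can spin forever when
		 * jiffies stops advancing (the stop_machine lockup);
		 * the iteration counter guarantees we eventually stop
		 * and punt the remaining work to ksoftirqd.
		 */
		if (time_before(jiffies, end) && !need_resched() &&
		    --max_restart)
			goto restart;

		wakeup_softirqd();
	}
}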


Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[tip:x86/urgent] x86: Allow FPU to be used at interrupt time even with eagerfpu

2013-05-30 Thread tip-bot for Pekka Riikonen
Commit-ID:  5187b28ff08249ab8a162e802209ed04e271ca02
Gitweb: http://git.kernel.org/tip/5187b28ff08249ab8a162e802209ed04e271ca02
Author: Pekka Riikonen <priik...@iki.fi>
AuthorDate: Mon, 13 May 2013 14:32:07 +0200
Committer:  H. Peter Anvin <h...@linux.intel.com>
CommitDate: Thu, 30 May 2013 16:36:42 -0700

x86: Allow FPU to be used at interrupt time even with eagerfpu

With the addition of eagerfpu, irq_fpu_usable() now returns false
negatives, especially in the case of ksoftirqd and an interrupted idle
task, two common cases for FPU use, for example in networking/crypto.
With eagerfpu=off FPU use is possible in those contexts.  This is because
of the eagerfpu check in interrupted_kernel_fpu_idle():

...
  * For now, with eagerfpu we will return interrupted kernel FPU
  * state as not-idle. TBD: Ideally we can change the return value
  * to something like __thread_has_fpu(current). But we need to
  * be careful of doing __thread_clear_has_fpu() before saving
  * the FPU etc for supporting nested uses etc. For now, take
  * the simple route!
...
if (use_eager_fpu())
return 0;

As eagerfpu is automatically "on" on those CPUs that also have features
like AES-NI, this patch changes the eagerfpu check to return 1 in case
kernel_fpu_begin() has not been called yet.  Once it has been called,
__thread_has_fpu() will start returning 0.

Notice that with eagerfpu, __thread_has_fpu() is always true initially.
FPU use is thus always possible no matter what task is under us, unless
the state has already been saved with kernel_fpu_begin().

[ hpa: this is a performance regression, not a correctness regression,
  but since it can be quite serious on CPUs which need encryption at
  interrupt time I am marking this for urgent/stable. ]

Signed-off-by: Pekka Riikonen <priik...@iki.fi>
Link: http://lkml.kernel.org/r/alpine.gso.2.00.1305131356320...@git.silcnet.org
Cc: <sta...@vger.kernel.org> v3.7+
Signed-off-by: H. Peter Anvin <h...@linux.intel.com>
---
 arch/x86/kernel/i387.c | 14 +-
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 245a71d..cb33909 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -22,23 +22,19 @@
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
- * For now, with eagerfpu we will return interrupted kernel FPU
- * state as not-idle. TBD: Ideally we can change the return value
- * to something like __thread_has_fpu(current). But we need to
- * be careful of doing __thread_clear_has_fpu() before saving
- * the FPU etc for supporting nested uses etc. For now, take
- * the simple route!
- *
  * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
  * pair does nothing at all: the thread must not have fpu (so
  * that we don't try to save the FPU state), and TS must
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
+ *
+ * Except for the eagerfpu case when we return 1 unless we've already
+ * been eager and saved the state in kernel_fpu_begin().
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
if (use_eager_fpu())
-   return 0;
+   return __thread_has_fpu(current);
 
return !__thread_has_fpu(current) &&
(read_cr0() & X86_CR0_TS);
@@ -78,8 +74,8 @@ void __kernel_fpu_begin(void)
struct task_struct *me = current;
 
if (__thread_has_fpu(me)) {
-   __save_init_fpu(me);
__thread_clear_has_fpu(me);
+   __save_init_fpu(me);
/* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
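
For context, the usage pattern this change enables in softirq/crypto code
is the usual irq_fpu_usable() guard (a usage sketch, not part of the
patch):

#include <linux/types.h>
#include <asm/i387.h>	/* irq_fpu_usable(), kernel_fpu_begin/end() */

/* Usage sketch: with this change, irq_fpu_usable() can also return true
 * under eagerfpu when we interrupted ksoftirqd or the idle task, so the
 * fast SIMD path below is actually taken in those contexts. */
static void crypto_softirq_sketch(u8 *data, unsigned int len)
{
	if (irq_fpu_usable()) {
		kernel_fpu_begin();
		/* ... AES-NI / SSE processing of data[0..len) ... */
		kernel_fpu_end();
	} else {
		/* ... scalar fallback ... */
	}
}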
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[tip:x86/urgent] x86: Allow FPU to be used at interrupt time even with eagerfpu

2013-05-30 Thread tip-bot for Pekka Riikonen
Commit-ID:  b61601079f974b9ffd3caf08ecb3a71142adf821
Gitweb: http://git.kernel.org/tip/b61601079f974b9ffd3caf08ecb3a71142adf821
Author: Pekka Riikonen <priik...@iki.fi>
AuthorDate: Mon, 13 May 2013 14:32:07 +0200
Committer:  H. Peter Anvin <h...@linux.intel.com>
CommitDate: Thu, 30 May 2013 11:49:31 -0700

x86: Allow FPU to be used at interrupt time even with eagerfpu

With the addition of eagerfpu, irq_fpu_usable() now returns false
negatives, especially in the case of ksoftirqd and an interrupted idle
task, two common cases for FPU use, for example in networking/crypto.
With eagerfpu=off FPU use is possible in those contexts.  This is because
of the eagerfpu check in interrupted_kernel_fpu_idle():

...
  * For now, with eagerfpu we will return interrupted kernel FPU
  * state as not-idle. TBD: Ideally we can change the return value
  * to something like __thread_has_fpu(current). But we need to
  * be careful of doing __thread_clear_has_fpu() before saving
  * the FPU etc for supporting nested uses etc. For now, take
  * the simple route!
...
if (use_eager_fpu())
return 0;

As eagerfpu is automatically "on" on those CPUs that also have features
like AES-NI, this patch changes the eagerfpu check to return 1 in case
kernel_fpu_begin() has not been called yet.  Once it has been called,
__thread_has_fpu() will start returning 0.

Notice that with eagerfpu, __thread_has_fpu() is always true initially.
FPU use is thus always possible no matter what task is under us, unless
the state has already been saved with kernel_fpu_begin().

[ hpa: this is a performance regression, not a correctness regression,
  but since it can be quite serious on CPUs which need encryption at
  interrupt time I am marking this for urgent/stable. ]

Signed-off-by: Pekka Riikonen <priik...@iki.fi>
Link: http://lkml.kernel.org/r/alpine.gso.2.00.1305131356320...@git.silcnet.org
Cc: <sta...@vger.kernel.org> v3.7+
Signed-off-by: H. Peter Anvin <h...@linux.intel.com>
---
 arch/x86/kernel/i387.c | 14 +-
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 245a71d..cb33909 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -22,23 +22,19 @@
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
- * For now, with eagerfpu we will return interrupted kernel FPU
- * state as not-idle. TBD: Ideally we can change the return value
- * to something like __thread_has_fpu(current). But we need to
- * be careful of doing __thread_clear_has_fpu() before saving
- * the FPU etc for supporting nested uses etc. For now, take
- * the simple route!
- *
  * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
  * pair does nothing at all: the thread must not have fpu (so
  * that we don't try to save the FPU state), and TS must
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
+ *
+ * Except for the eagerfpu case when we return 1 unless we've already
+ * been eager and saved the state in kernel_fpu_begin().
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
if (use_eager_fpu())
-   return 0;
+   return __thread_has_fpu(current);
 
return !__thread_has_fpu(current) &&
(read_cr0() & X86_CR0_TS);
@@ -78,8 +74,8 @@ void __kernel_fpu_begin(void)
struct task_struct *me = current;
 
if (__thread_has_fpu(me)) {
-   __save_init_fpu(me);
__thread_clear_has_fpu(me);
+   __save_init_fpu(me);
/* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v3 net-next 1/4] net: implement support for low latency socket polling

2013-05-21 Thread Pekka Riikonen

On Tue, 21 May 2013, David Miller wrote:


From: Pekka Riikonen <priik...@iki.fi>
Date: Tue, 21 May 2013 19:02:19 +0200 (CEST)


On Tue, 21 May 2013, Eric Dumazet wrote:

: > > Alternatively, use a napi_id instead of a pointer.
: >
: > I'm not sure I understand what you propose.
:
: Oh well.
:
: To get a pointer to a struct net_device, we can use ifindex, and do a
: rcu lookup into a hash table to get the net_device. We do not need
: {pointer,ifindex} but {ifindex} is enough
:
: My suggestion is to not have skb->skb_ref but skb->napi_index : Its safe
: to copy its value from skb->napi_index to sk->napi_index without
: refcounting.
:
: All NAPI need to get a unique napi_index, and be inserted in a hash
: table for immediate/fast lookup.
:
Maybe even that's not needed.  Couldn't skb->queue_mapping give the
correct NAPI instance in multiqueue NICs?  The NAPI instance could easily
be made available from skb->dev.  In any case an index is much better than
a new pointer.


Queue mapping is more volatile, and consider layered devices.

Yes, true.  The napi_index then is probably the way to go.  The main thing
for me is that it doesn't increase the skb size when placed in a union with
dma_cookie (the skb has been growing lately).
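
For illustration, the id-based lookup Eric describes could look roughly
like this (a sketch with hypothetical names, not necessarily what gets
merged):

#include <linux/hashtable.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>

/* Sketch: give every NAPI instance a small unique id, keep the instances
 * in an RCU hash table, and store only the id in the skb/socket.  The id
 * can then be copied from skb to sk without any refcounting. */
static DEFINE_HASHTABLE(napi_hash_sketch, 8);

struct napi_hash_entry {
	struct hlist_node	node;
	unsigned int		napi_id;
	struct napi_struct	*napi;
};

static struct napi_struct *napi_lookup_sketch(unsigned int napi_id)
{
	struct napi_hash_entry *e;

	/* Caller holds rcu_read_lock(). */
	hash_for_each_possible_rcu(napi_hash_sketch, e, node, napi_id)
		if (e->napi_id == napi_id)
			return e->napi;
	return NULL;
}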


Pekka

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v3 net-next 1/4] net: implement support for low latency socket polling

2013-05-21 Thread Pekka Riikonen
On Tue, 21 May 2013, Eric Dumazet wrote:

: > > Alternatively, use a napi_id instead of a pointer.
: > 
: > I'm not sure I understand what you propose.
: 
: Oh well.
: 
: To get a pointer to a struct net_device, we can use ifindex, and do a
: rcu lookup into a hash table to get the net_device. We do not need
: {pointer,ifindex} but {ifindex} is enough
: 
: My suggestion is to not have skb->skb_ref but skb->napi_index : Its safe
: to copy its value from skb->napi_index to sk->napi_index without
: refcounting.
: 
: All NAPI need to get a unique napi_index, and be inserted in a hash
: table for immediate/fast lookup.
: 
Maybe even that's not needed.  Couldn't skb->queue_mapping give the
correct NAPI instance in multiqueue NICs?  The NAPI instance could easily
be made available from skb->dev.  In any case an index is much better than
a new pointer.

Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH RESEND] x86: irq_fpu_usable returns false negatives with eagerfpu

2013-05-13 Thread Pekka Riikonen
With the addition of eagerfpu, irq_fpu_usable() now returns false
negatives, especially in the case of ksoftirqd and an interrupted idle
task, two common cases for FPU use, for example in networking/crypto.
With eagerfpu=off FPU use is possible in those contexts.  This is because
of the eagerfpu check in interrupted_kernel_fpu_idle():


...
 * For now, with eagerfpu we will return interrupted kernel FPU
 * state as not-idle. TBD: Ideally we can change the return value
 * to something like __thread_has_fpu(current). But we need to
 * be careful of doing __thread_clear_has_fpu() before saving
 * the FPU etc for supporting nested uses etc. For now, take
 * the simple route!
...
if (use_eager_fpu())
return 0;

As eagerfpu is automatically "on" on those CPUs that also have features
like AES-NI, this patch changes the eagerfpu check to return 1 in case
kernel_fpu_begin() has not been called yet.  Once it has been called,
__thread_has_fpu() will start returning 0.


Notice that with eagerfpu, __thread_has_fpu() is always true initially.
FPU use is thus always possible no matter what task is under us, unless
the state has already been saved with kernel_fpu_begin().


Signed-off-by: Pekka Riikonen <priik...@iki.fi>
---
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 245a71d..cb33909 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -22,23 +22,19 @@
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
- * For now, with eagerfpu we will return interrupted kernel FPU
- * state as not-idle. TBD: Ideally we can change the return value
- * to something like __thread_has_fpu(current). But we need to
- * be careful of doing __thread_clear_has_fpu() before saving
- * the FPU etc for supporting nested uses etc. For now, take
- * the simple route!
- *
  * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
  * pair does nothing at all: the thread must not have fpu (so
  * that we don't try to save the FPU state), and TS must
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
+ *
+ * Except for the eagerfpu case when we return 1 unless we've already
+ * been eager and saved the state in kernel_fpu_begin().
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
if (use_eager_fpu())
-   return 0;
+   return __thread_has_fpu(current);

return !__thread_has_fpu(current) &&
(read_cr0() & X86_CR0_TS);
@@ -78,8 +74,8 @@ void __kernel_fpu_begin(void)
struct task_struct *me = current;

if (__thread_has_fpu(me)) {
-   __save_init_fpu(me);
__thread_clear_has_fpu(me);
+   __save_init_fpu(me);
/* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[RFC PATCH] irq_fpu_usable returns false negatives with eagerfpu

2013-05-07 Thread Pekka Riikonen

Hi,

With the addition of eagerfpu, irq_fpu_usable() now returns false
negatives, especially in the case of ksoftirqd and an interrupted idle
task, two common cases for FPU use, for example in networking.  This is
because of the eagerfpu check in interrupted_kernel_fpu_idle():


...
 * For now, with eagerfpu we will return interrupted kernel FPU
 * state as not-idle. TBD: Ideally we can change the return value
 * to something like __thread_has_fpu(current). But we need to
 * be careful of doing __thread_clear_has_fpu() before saving
 * the FPU etc for supporting nested uses etc. For now, take
 * the simple route!
...
if (use_eager_fpu())
return 0;

This should be fixed, as eagerfpu is commonly "on" by default on those CPUs
that also have features like AES-NI.


The enclosed patch changes the eagerfpu check to return 1 in case
kernel_fpu_begin() has not been called yet.  Once it has been called,
__thread_has_fpu() will start returning 0.  The state is restored in
kernel_fpu_end().  The state is thus always saved no matter what task is
under us, unless it has already been saved with kernel_fpu_begin().


This does cause the penalty of unnecessary FPU saving in the cases of
ksoftirqd and an interrupted idle task, but presumably it is cheaper than
clts/stts.  Still, there could also be separate checks for ksoftirqd (a new
check like in_ksoftirqd()) and the idle task, and then the state would not
be saved at all.  As far as I can see it would always be safe.  This is not
in the patch; a possible shape for such a check is sketched below.
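
A minimal sketch of what such a check could look like (hypothetical, not
part of this patch; it assumes the per-CPU ksoftirqd task pointer already
available via this_cpu_ksoftirqd()):

#include <linux/interrupt.h>
#include <linux/sched.h>

/* Hypothetical helper: did we interrupt this CPU's ksoftirqd thread?
 * Its FPU registers never hold live user state, so in that case the
 * save in __kernel_fpu_begin() could be skipped entirely. */
static inline bool in_ksoftirqd(void)
{
	return this_cpu_ksoftirqd() == current;
}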


I've tested this lightly, but it seems to work.  Have I missed something?

Signed-off-by: Pekka Riikonen <priik...@iki.fi>
---
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 245a71d..cb33909 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -22,23 +22,19 @@
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
- * For now, with eagerfpu we will return interrupted kernel FPU
- * state as not-idle. TBD: Ideally we can change the return value
- * to something like __thread_has_fpu(current). But we need to
- * be careful of doing __thread_clear_has_fpu() before saving
- * the FPU etc for supporting nested uses etc. For now, take
- * the simple route!
- *
  * On others, we can do a kernel_fpu_begin/end() pair *ONLY* if that
  * pair does nothing at all: the thread must not have fpu (so
  * that we don't try to save the FPU state), and TS must
  * be set (so that the clts/stts pair does nothing that is
  * visible in the interrupted kernel thread).
+ *
+ * Except for the eagerfpu case when we return 1 unless we've already
+ * been eager and saved the state in kernel_fpu_begin().
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
if (use_eager_fpu())
-   return 0;
+   return __thread_has_fpu(current);

return !__thread_has_fpu(current) &&
(read_cr0() & X86_CR0_TS);
@@ -78,8 +74,8 @@ void __kernel_fpu_begin(void)
struct task_struct *me = current;

if (__thread_has_fpu(me)) {
-   __save_init_fpu(me);
__thread_clear_has_fpu(me);
+   __save_init_fpu(me);
/* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] exporting IPv6 symbols

2000-09-06 Thread Pekka Riikonen [Adm]


: I mean show me real code, not hypothetical examples.
: 
Like code that I'm currently working on, though it's not ready and not to
be included into the main kernel.  The point is that the corresponding IPv4
symbols are exported as well, so why should we restrict IPv6?  I bet that
once IPv6 is used more commonly the need for the symbols will come.  The
thing I've noticed while programming various kernel modules is that the
real pain in the ass always is that the symbols you need aren't
exported. :)  I think that interface programmers should consider whether
their public interfaces might be needed from modules and export symbols
accordingly.

Pekka

 Pekka Riikonen| Email: [EMAIL PROTECTED]
 SSH Communications Security Corp. | http://poseidon.pspt.fi/~priikone
 Tel. +358 (0)40 580 6673  | Kasarmikatu 11 A4, SF-70110 Kuopio
 PGP KeyID A924ED4F: http://poseidon.pspt.fi/~priikone/pubkey.asc

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: [PATCH] exporting IPv6 symbols

2000-09-06 Thread Pekka Riikonen [Adm]


: What needs access to these ipv6 symbols?  Do tell...
: 
For example, any module that performs IPv6 packet manipulation, for example
in netfilter hooks. Currently such modules might be impossible to write if
they need to, for example, copy and resend packets (they might also need to
do route lookups). On 2.2 I guess that in order to even detect that a
device (struct net_device) uses IPv6 addresses we would have to export, for
example, ipv6_find_idev. On 2.4 we have dev->ip6_ptr, which is missing
from 2.2; see the sketch below. I don't know if it can be detected in any
other way in 2.2?
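
A minimal sketch of the 2.4-style check (illustrative only; the helper
name is made up):

#include <linux/netdevice.h>

/* Illustrative: on 2.4 a module can tell whether IPv6 is configured on
 * a device by checking dev->ip6_ptr, which points to the device's
 * inet6_dev when it has IPv6 addresses and is NULL otherwise. */
static int dev_has_ipv6(struct net_device *dev)
{
	return dev->ip6_ptr != NULL;
}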

Pekka
____
 Pekka Riikonen| Email: [EMAIL PROTECTED]
 SSH Communications Security Corp. | http://poseidon.pspt.fi/~priikone
 Tel. +358 (0)40 580 6673  | Kasarmikatu 11 A4, SF-70110 Kuopio
 PGP KeyID A924ED4F: http://poseidon.pspt.fi/~priikone/pubkey.asc

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



[PATCH] exporting IPv6 symbols

2000-09-06 Thread Pekka Riikonen [Adm]


Enclosed is a patch that exports some IPv6-specific symbols on 2.4 so that
IPv6 is finally accessible from modules.  I bet there are a bunch of other
functions that need to be exported as well, but this is a start.  There
are also various IPv6 functions in 2.2 that should be exported for the
same reasons; I can make a patch for 2.2 if there are no objections.  This
patch is against 2.4.0test7.

-snip-
diff -uNr ./net/netsyms.c ../linux-2.4-new/net/netsyms.c
--- ./net/netsyms.c Fri Aug 18 20:26:25 2000
+++ ../linux-2.4-new/net/netsyms.c  Wed Sep  6 13:02:10 2000
@@ -55,6 +55,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -255,6 +256,13 @@
 #ifdef CONFIG_IPV6
 EXPORT_SYMBOL(ipv6_addr_type);
 EXPORT_SYMBOL(icmpv6_send);
+EXPORT_SYMBOL(ip6_route_input);
+EXPORT_SYMBOL(ip6_route_output);
+EXPORT_SYMBOL(rt6_lookup);
+EXPORT_SYMBOL(ipv6_rcv);
+EXPORT_SYMBOL(ip6_input);
+EXPORT_SYMBOL(ip6_output);
+EXPORT_SYMBOL(ip6_forward);
 #endif
 #if defined (CONFIG_IPV6_MODULE) || defined (CONFIG_KHTTPD) || defined 
(CONFIG_KHTTPD_MODULE)
 /* inet functions common to v4 and v6 */
-snip-

Pekka

 Pekka Riikonen| Email: [EMAIL PROTECTED]
 SSH Communications Security Corp. | http://poseidon.pspt.fi/~priikone
 Tel. +358 (0)40 580 6673  | Kasarmikatu 11 A4, SF-70110 Kuopio
 PGP KeyID A924ED4F: http://poseidon.pspt.fi/~priikone/pubkey.asc

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: 2.2 / 2.4 ethernet detection order

2000-09-06 Thread Pekka Riikonen [Adm]

On Tue, 5 Sep 2000, Andries Brouwer wrote:

: I described the cause on Mon, 14 Aug 2000, and soon afterwards
: [EMAIL PROTECTED] sent a corrected (I hope - didn't check)
: patch to l-k. (Look for: Pekka Riikonen, Re: devfs / eth micro-problems)
: 
: I see that this patch didn't make it into 2.4.0test7.
: Perhaps it should be resubmitted.
: 
Ok, here it is again, and is going to Alan and Linus as well:

-snip-
diff -uNr ./drivers/net/net_init.c ../linux-2.4-new/drivers/net/net_init.c
--- ./drivers/net/net_init.cSat Jul 15 00:46:30 2000
+++ ../linux-2.4-new/drivers/net/net_init.c Tue Aug 22 12:28:44 2000
@@ -117,17 +117,15 @@
 
if (dev->name[0] == '\0' || dev->name[0] == ' ') {
strcpy(dev->name, mask);
-   if (!netdev_boot_setup_check(dev)) {
-   if (dev_alloc_name(dev, mask)<0) {
-   if (new_device)
-   kfree(dev);
-   return NULL;
-   }
+   if (dev_alloc_name(dev, mask)<0) {
+   if (new_device)
+   kfree(dev);
+   return NULL;
}
-   } else {
-   netdev_boot_setup_check(dev);
+
}
-   
+   netdev_boot_setup_check(dev);
+
/*
 *  Configure via the caller provided setup function then
 *  register if needed.
diff -uNr ./net/core/dev.c ../linux-2.4-new/net/core/dev.c
--- ./net/core/dev.cWed Jun 21 00:32:27 2000
+++ ../linux-2.4-new/net/core/dev.c Tue Aug 22 12:29:13 2000
@@ -303,28 +303,12 @@
 int netdev_boot_setup_check(struct net_device *dev)
 {
struct netdev_boot_setup *s;
-   char buf[IFNAMSIZ + 1];
-   int i, mask = 0;
-
-   memset(buf, 0, sizeof(buf));
-   strcpy(buf, dev->name);
-   if (strchr(dev->name, '%')) {
-   *strchr(buf, '%') = '\0';
-   mask = 1;
-   }
+   int i;
 
s = dev_boot_setup;
for (i = 0; i < NETDEV_BOOT_SETUP_MAX; i++) {
if (s[i].name[0] != '\0' && s[i].name[0] != ' ' &&
-   !strncmp(buf, s[i].name, mask ? strlen(buf) :
-   strlen(s[i].name))) {
-   if (__dev_get_by_name(s[i].name)) {
-   if (!mask)
-   return 0;
-   continue;
-   }
-   memset(dev->name, 0, IFNAMSIZ);
-   strcpy(dev->name, s[i].name);
+   !strncmp(dev->name, s[i].name, strlen(s[i].name))) {
dev->irq= s[i].map.irq;
dev->base_addr  = s[i].map.base_addr;
dev->mem_start  = s[i].map.mem_start;
@@ -365,6 +349,7 @@
 
 __setup("netdev=", netdev_boot_setup);
 
+
 
/*
 
Device Interface Subroutines
@@ -2481,27 +2466,26 @@
dev->iflink = -1;
dev_hold(dev);
 
+   /*
+* Allocate name. If the init()
+* fails the name will be reissued correctly.
+*/
+   if (strchr(dev->name, '%'))
+   dev_alloc_name(dev, dev->name);
+
/* 
 * Check boot time settings for the device.
 */
-   if (!netdev_boot_setup_check(dev)) {
-   /*
-* No settings found - allocate name. If the init()
-* fails the name will be reissued correctly.
-*/
-   if (strchr(dev->name, '%'))
-   dev_alloc_name(dev, dev->name);
-   }
+   netdev_boot_setup_check(dev);
 
if (dev->init && dev->init(dev)) {
/*
-*  It failed to come up. Unhook it.
+* It failed to come up. It will be unhooked later.
+* dev_alloc_name can now advance to next suitable
+* name that is checked next.
 */
-   write_lock_bh(&dev_base_lock);
-   *dp = dev->next;
dev->deadbeaf = 1;
-   write_unlock_bh(&dev_base_lock);
-   dev_put(dev);
+   dp = &dev->next;
} else {
dp = &dev->next;
dev->ifindex = dev_new_index();
@@ -2511,6 +2495,21 @@
dev->rebuild_header = default_rebuild_header;
dev_init_scheduler(dev);
set_bit(__LINK_STATE_PRESENT,
