On 03/05/2013 11:55 AM, David Miller wrote:
> From: Ben Hutchings
> Date: Tue, 5 Mar 2013 16:43:01 +
>
>> In general it appears to require a run-time check. You might need to
>> augment .
>
> On the other hand, unlike get_cycles, sched_clock() is always available.
>
On the gripping hand,
From: Eliezer Tamir
Date: Tue, 05 Mar 2013 19:15:26 +0200
> We are not very sensitive to this setting, anything on the order of
> half your round-trip time plus a few standard deviations works well.
> We are busy waiting, so setting a higher value does not change the
> results much.
This makes t
From: Ben Hutchings
Date: Tue, 5 Mar 2013 16:43:01 +
> In general it appears to require a run-time check. You might need to
> augment .
On the other hand, unlike get_cycles, sched_clock() is always available.
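For concreteness, here is a minimal sketch of the time-bounded busy-wait
under discussion, using sched_clock() (nanoseconds) as the always-available
clock; the poll callback and the sysctl_ip_ll_poll_ns variable are
placeholders, not code from the patch:

#include <linux/sched.h>	/* sched_clock(), cpu_relax() via asm headers */
#include <linux/types.h>

/* Hypothetical knob, assumed to already be stored in nanoseconds. */
extern unsigned long sysctl_ip_ll_poll_ns;

/* Spin until the per-iteration poll callback reports progress or the
 * nanosecond budget runs out.  sched_clock() is used because, unlike
 * get_cycles(), it is available on every architecture.
 */
static bool busy_poll_bounded(bool (*poll_once)(void *arg), void *arg)
{
	u64 end = sched_clock() + sysctl_ip_ll_poll_ns;

	do {
		if (poll_once(arg))
			return true;
		cpu_relax();
	} while (sched_clock() < end);

	return false;
}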
On 05/03/2013 18:43, Ben Hutchings wrote:
> On Wed, 2013-02-27 at 09:55 -0800, Eliezer Tamir wrote:
> Should the units really be cycles or, say, microseconds? I assume that
> a sysctl setter can do a conversion to cycles so that there's no need to
> multiply every time the value is used. (If the CPU d
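As a sketch of that suggestion (names invented here, and converting to
nanoseconds rather than cycles purely for illustration), the setter could
look roughly like:

#include <linux/sysctl.h>
#include <linux/time.h>			/* NSEC_PER_USEC */

static int ll_poll_us;			/* value userspace writes, in microseconds */
unsigned long sysctl_ip_ll_poll_ns;	/* converted once here, read by the fast path */

static int ll_poll_sysctl_handler(struct ctl_table *table, int write,
				  void __user *buffer, size_t *lenp,
				  loff_t *ppos)
{
	/* table->data is assumed to point at ll_poll_us in the ctl_table entry */
	int ret = proc_dointvec(table, write, buffer, lenp, ppos);

	if (!ret && write)
		sysctl_ip_ll_poll_ns = (unsigned long)ll_poll_us * NSEC_PER_USEC;

	return ret;
}

This keeps the per-packet path free of any multiplication: the fast path
only ever reads the already-converted value.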
On Wed, 2013-02-27 at 09:55 -0800, Eliezer Tamir wrote:
> Adds a new ndo_ll_poll method and the code that supports and uses it.
> This method can be used by low latency applications to busy poll ethernet
> device queues directly from the socket code. The ip_low_latency_poll sysctl
> entry controls
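From the driver side, registering such a hook would presumably look like
the sketch below; my_rx_ring, my_driver_clean_rx_irq and the exact
ndo_ll_poll prototype (taking the napi instance and returning the number of
packets cleaned) are assumptions, not taken from the patch:

#include <linux/netdevice.h>

struct my_rx_ring {
	struct napi_struct napi;
	/* ...the rest of the ring state... */
};

/* Stub for illustration: a real driver would process up to @budget RX
 * descriptors here, without waiting for an interrupt, in the caller's
 * (process) context.
 */
static int my_driver_clean_rx_irq(struct my_rx_ring *ring, int budget)
{
	return 0;
}

static int my_driver_ll_poll(struct napi_struct *napi)
{
	struct my_rx_ring *ring = container_of(napi, struct my_rx_ring, napi);

	return my_driver_clean_rx_irq(ring, 4 /* small budget */);
}

static const struct net_device_ops my_netdev_ops = {
	/* ...ndo_open, ndo_start_xmit and friends... */
	.ndo_ll_poll	= my_driver_ll_poll,	/* the new hook from this series */
};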
On Mon, 2013-03-04 at 17:28 +0200, Eliezer Tamir wrote:
> On 04/03/2013 16:52, Eric Dumazet wrote:
> > On Mon, 2013-03-04 at 10:43 +0200, Eliezer Tamir wrote:
> >
> >> One could for example increment the generation id every time the RTNL is
> >> taken. or is this too much?
> >
> > RTNL is taken for
On 04/03/2013 16:52, Eric Dumazet wrote:
> On Mon, 2013-03-04 at 10:43 +0200, Eliezer Tamir wrote:
>> One could for example increment the generation id every time the RTNL is
>> taken. or is this too much?
>
> RTNL is taken for a lot of operations, it would be better to have a
> finer grained increment.
On Mon, 2013-03-04 at 10:43 +0200, Eliezer Tamir wrote:
> One could for example increment the generation id every time the RTNL is
> taken. or is this too much?
RTNL is taken for a lot of operations, it would be better to have a
finer grained increment.
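One way to read "finer grained" is to bump a counter only where it matters,
i.e. when a napi instance that may be busy-polled is dismantled, rather
than on every RTNL acquisition.  A rough sketch, with invented names:

#include <linux/atomic.h>
#include <linux/netdevice.h>

/* Bumped only when a busy-pollable napi goes away, not on every RTNL. */
static atomic_t ll_napi_gen = ATOMIC_INIT(0);

static void ll_napi_del(struct napi_struct *napi)
{
	netif_napi_del(napi);		/* existing teardown path */
	atomic_inc(&ll_napi_gen);	/* invalidate cached napi references */
	/* the napi memory itself is still freed only after an RCU grace period */
}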
On 03/03/2013 20:35, Eric Dumazet wrote:
> On Wed, 2013-02-27 at 09:55 -0800, Eliezer Tamir wrote:
>> index 821c7f4..d1d1016 100644
>> --- a/include/linux/skbuff.h
>> +++ b/include/linux/skbuff.h
>> @@ -408,6 +408,10 @@ struct sk_buff {
>> struct sock *sk;
>> struct net_device *de
On Wed, 27 Feb 2013 at 17:55 GMT, Eliezer Tamir wrote:
> +static inline void skb_mark_ll(struct napi_struct *napi, struct sk_buff *skb)
> +{
> + skb->dev_ref = napi;
> +}
> +
> +static inline void sk_mark_ll(struct sock *sk, struct sk_buff *skb)
> +{
> + if (skb->dev_ref)
> +
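The truncated helper presumably copies the napi reference from the skb into
the socket, so that a later receive call knows which queue to busy-poll;
the body below is a guessed continuation (sk->dev_ref assumed to exist in
this series), not the patch text:

static inline void sk_mark_ll(struct sock *sk, struct sk_buff *skb)
{
	if (skb->dev_ref)
		sk->dev_ref = skb->dev_ref;	/* cache the napi this flow last used */
}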
On Sun, Mar 03, 2013 at 01:20:01PM -0800, Eric Dumazet wrote:
> On Sun, 2013-03-03 at 20:21 +0100, Andi Kleen wrote:
> > > Alternative to 2) would be to use a generation id, incremented every
> > > time a napi used in spin polling enabled driver is dismantled (and freed
> > > after RCU grace period
On Sun, 2013-03-03 at 20:21 +0100, Andi Kleen wrote:
> > Alternative to 2) would be to use a generation id, incremented every
> > time a napi used in spin polling enabled driver is dismantled (and freed
> > after RCU grace period)
> >
> > And store in sockets not only the pointer to napi_struct, b
> Alternative to 2) would be to use a generation id, incremented every
> time a napi used in spin polling enabled driver is dismantled (and freed
> after RCU grace period)
>
> And store in sockets not only the pointer to napi_struct, but the
> current generation id : If the generation id doesnt ma
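On the consumer side, Eric's scheme amounts to caching the pair (napi
pointer, generation id) in the socket and re-validating it before every
busy-poll.  A sketch with invented names, reusing the ll_napi_gen counter
from the teardown sketch above:

#include <linux/atomic.h>
#include <linux/netdevice.h>

extern atomic_t ll_napi_gen;	/* bumped whenever a pollable napi is dismantled */

struct ll_sock_ref {
	struct napi_struct *napi;
	unsigned int gen;	/* value of ll_napi_gen when napi was cached */
};

/* Returns true only if no napi has been torn down since ours was cached,
 * so dereferencing ref->napi is still known to be safe.
 */
static bool ll_ref_still_valid(struct ll_sock_ref *ref)
{
	if (!ref->napi)
		return false;
	if (ref->gen != (unsigned int)atomic_read(&ll_napi_gen)) {
		ref->napi = NULL;	/* stale: the napi may already be freed */
		return false;
	}
	return true;
}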
On Wed, 2013-02-27 at 09:55 -0800, Eliezer Tamir wrote:
> index 821c7f4..d1d1016 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -408,6 +408,10 @@ struct sk_buff {
> struct sock *sk;
> struct net_device *dev;
>
> +#ifdef CONFIG_INET_LL_RX_P
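The hunk presumably continues by adding the napi back-reference that the
skb_mark_ll()/sk_mark_ll() helpers quoted earlier rely on; the completed
option name, field type and layout below are assumptions, not the patch
text:

#ifdef CONFIG_INET_LL_RX_POLL	/* assumed completion of the truncated name */
	struct napi_struct	*dev_ref;	/* napi this skb arrived on, per skb_mark_ll() */
#endif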