Shuyu Wei :
[...]
> I still have a question: is it possible that tx_clean() runs
> between priv->tx_buff[*txbd_curr].skb = skb and dma_wmb()?
A (previous) run can take place after the priv->tx_buff[*txbd_curr].skb
assignment and before *info = cpu_to_le32(FOR_EMAC | FIRST_OR_LAST_MASK | len).
So, yes, the race you describe can happen.
On Sun, May 15, 2016 at 11:19:53AM +0200, Francois Romieu wrote:
>
> static void arc_emac_tx_clean(struct net_device *ndev)
> {
> [...]
> for (i = 0; i < TX_BD_NUM; i++) {
> unsigned int *txbd_dirty = &priv->txbd_dirty;
> struct arc_emac_bd *txbd = &priv->txbd[*txbd_dirty];
Shuyu Wei :
[...]
> I don't think taking txbd_curr and txbd_dirty only as hints is a good idea.
> That could be a big waste, since tx_clean has to go through all the txbds.
Sorry if my point was not clear: arc_emac_tx_clean() does not need
to change (at least not for the reason given in the commit message).
Sorry, the last two lines are wrong; please ignore them.
I mean I intended to ignore one or two packets.
It's just a trade-off for performance, but it
doesn't cause any memory leak.
On Sat, May 14, 2016 at 10:03:56PM +0200, Francois Romieu wrote:
> Shuyu Wei :
> > The tail of the ring buffer (txbd_dirty) should never go ahead of the
> > head (txbd_curr), or the ring buffer will be corrupted.
> >
> > This is the root cause of the race.
>
> No (see below).
>
> It may suffer from some barrier illness though.
Shuyu Wei :
> The tail of the ring buffer (txbd_dirty) should never go ahead of the
> head (txbd_curr), or the ring buffer will be corrupted.
>
> This is the root cause of the race.
No (see below).
It may suffer from some barrier illness though.
> Besides, setting the FOR_EMAC flag should be the last step of modifying
> the buffer descriptor, or a race may still occur.
The tail of the ring buffer (txbd_dirty) should never go ahead of the
head (txbd_curr), or the ring buffer will be corrupted.
This is the root cause of the race.
Besides, setting the FOR_EMAC flag should be the last step of modifying
the buffer descriptor, or a race may still occur.
Signed-off-by: Shuyu Wei