On Tue, 10 Nov 2020 at 19:32, Pavel Pisa <p...@cmp.felk.cvut.cz> wrote:
>
> Hello Peter,
>
> On Tuesday 10 of November 2020 19:24:03 Peter Maydell wrote:
> > For unaligned accesses, for 6.0, I think the code for doing
> > them to the txbuff at least is straightforward:
> >
> >    if (buff_num < CTUCAN_CORE_TXBUF_NUM &&
> >        (addr + size) < CTUCAN_CORE_MSG_MAX_LEN) {
> >       stn_le_p(s->tx_buffer[buff_num].data + addr, size, val);
> >    }
> >
> > (stn_le_p takes care of doing an appropriate-width write.)
>
> Thanks, great to know; I like that a lot.
> One small nitpick: it should be (addr + size) <= CTUCAN_CORE_MSG_MAX_LEN
>
> So the whole code I am testing now is:
>
>     if (addr >= CTU_CAN_FD_TXTB1_DATA_1) {
>         int buff_num;
>         addr -= CTU_CAN_FD_TXTB1_DATA_1;
>         buff_num = addr / CTUCAN_CORE_TXBUFF_SPAN;
>         addr %= CTUCAN_CORE_TXBUFF_SPAN;
>         if ((buff_num < CTUCAN_CORE_TXBUF_NUM) &&
>             ((addr + size) <= sizeof(s->tx_buffer[buff_num].data))) {
>             stn_le_p(s->tx_buffer[buff_num].data + addr, size, val);
>         }
>     } else {
>
> So I have applied your whole series with the above update. Everything works
> correctly on an x86_64 Linux host with Linux x86_64 and MIPS big-endian guests.
>
> Please update to this combination.

If you've got a modified patch set that you've tested, would
you mind sending it out to the list? That would avoid me
possibly making mistakes while updating the patches on my end
and then requiring you to repeat the testing.

thanks
-- PMM
