Re: [RFC 1/1] hw: tpmtisspi: add SPI support to QEMU TPM implementation

2023-12-14 Thread Iris Chen
Hi Guenter,

Not from me at this moment :)

Thanks,
Iris

On Wed, Dec 13, 2023 at 9:39 AM Guenter Roeck  wrote:

> Hi,
>
> On Tue, Aug 02, 2022 at 07:32:41PM -0700, Iris Chen wrote:
> > From: Iris Chen 
> >
> > Signed-off-by: Iris Chen 
> > ---
>
> Are there any plans to submit a new version of this patch ?
>
> Thanks,
> Guenter
>
> >  configs/devices/arm-softmmu/default.mak |   1 +
> >  hw/arm/Kconfig                           |   5 +
> >  hw/tpm/Kconfig                           |   5 +
> >  hw/tpm/meson.build                       |   1 +
> >  hw/tpm/tpm_tis_spi.c                     | 311 
> >  include/sysemu/tpm.h                     |   3 +
> >  6 files changed, 326 insertions(+)
> >  create mode 100644 hw/tpm/tpm_tis_spi.c
> >
> > diff --git a/configs/devices/arm-softmmu/default.mak b/configs/devices/arm-softmmu/default.mak
> > index 6985a25377..80d2841568 100644
> > --- a/configs/devices/arm-softmmu/default.mak
> > +++ b/configs/devices/arm-softmmu/default.mak
> > @@ -42,3 +42,4 @@ CONFIG_FSL_IMX6UL=y
> >  CONFIG_SEMIHOSTING=y
> >  CONFIG_ARM_COMPATIBLE_SEMIHOSTING=y
> >  CONFIG_ALLWINNER_H3=y
> > +CONFIG_FBOBMC_AST=y
> > diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
> > index 15fa79afd3..193decaec1 100644
> > --- a/hw/arm/Kconfig
> > +++ b/hw/arm/Kconfig
> > @@ -458,6 +458,11 @@ config ASPEED_SOC
> >      select PMBUS
> >      select MAX31785
> >
> > +config FBOBMC_AST
> > +    bool
> > +    select ASPEED_SOC
> > +    select TPM_TIS_SPI
> > +
> >  config MPS2
> >      bool
> >      imply I2C_DEVICES
> > diff --git a/hw/tpm/Kconfig b/hw/tpm/Kconfig
> > index 29e82f3c92..370a43f045 100644
> > --- a/hw/tpm/Kconfig
> > +++ b/hw/tpm/Kconfig
> > @@ -8,6 +8,11 @@ config TPM_TIS_SYSBUS
> >      depends on TPM
> >      select TPM_TIS
> >
> > +config TPM_TIS_SPI
> > +    bool
> > +    depends on TPM
> > +    select TPM_TIS
> > +
> >  config TPM_TIS
> >      bool
> >      depends on TPM
> > diff --git a/hw/tpm/meson.build b/hw/tpm/meson.build
> > index 1c68d81d6a..1a057f4e36 100644
> > --- a/hw/tpm/meson.build
> > +++ b/hw/tpm/meson.build
> > @@ -2,6 +2,7 @@ softmmu_ss.add(when: 'CONFIG_TPM_TIS', if_true: files('tpm_tis_common.c'))
> >  softmmu_ss.add(when: 'CONFIG_TPM_TIS_ISA', if_true: files('tpm_tis_isa.c'))
> >  softmmu_ss.add(when: 'CONFIG_TPM_TIS_SYSBUS', if_true: files('tpm_tis_sysbus.c'))
> >  softmmu_ss.add(when: 'CONFIG_TPM_CRB', if_true: files('tpm_crb.c'))
> > +softmmu_ss.add(when: 'CONFIG_TPM_TIS_SPI', if_true: files('tpm_tis_spi.c'))
> >
> >  specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TPM_TIS'], if_true: files('tpm_ppi.c'))
> >  specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TPM_CRB'], if_true: files('tpm_ppi.c'))
> > diff --git a/hw/tpm/tpm_tis_spi.c b/hw/tpm/tpm_tis_spi.c
> > new file mode 100644
> > index 00..c98ddcfddb
> > --- /dev/null
> > +++ b/hw/tpm/tpm_tis_spi.c
> > @@ -0,0 +1,311 @@
> > +#include "qemu/osdep.h"
> > +#include "hw/qdev-properties.h"
> > +#include "migration/vmstate.h"
> > +#include "hw/acpi/tpm.h"
> > +#include "tpm_prop.h"
> > +#include "tpm_tis.h"
> > +#include "qom/object.h"
> > +#include "hw/ssi/ssi.h"
> > +#include "hw/ssi/spi_gpio.h"
> > +
> > +#define TPM_TIS_SPI_ADDR_BYTES 3
> > +#define SPI_WRITE 0
> > +
> > +typedef enum {
> > +    TIS_SPI_PKT_STATE_DEACTIVATED = 0,
> > +    TIS_SPI_PKT_STATE_START,
> > +    TIS_SPI_PKT_STATE_ADDRESS,
> > +    TIS_SPI_PKT_STATE_DATA_WR,
> > +    TIS_SPI_PKT_STATE_DATA_RD,
> > +    TIS_SPI_PKT_STATE_DONE,
> > +} TpmTisSpiPktState;
> > +
> > +union TpmTisRWSizeByte {
> > +    uint8_t byte;
> > +    struct {
> > +        uint8_t data_expected_size:6;
> > +        uint8_t resv:1;
> > +        uint8_t rwflag:1;
> > +    };
> > +};
> > +
> > +union TpmTisSpiHwAddr {
> > +    hwaddr addr;
> > +    uint8_t bytes[sizeof(hwaddr)];
> > +};
> > +
> > +union TpmTisSpiData {
> > +    uint32_t data;
> > +    uint8_t bytes[64];
> > +};
> > +
> > +struct TpmTisSpiState {
> > +    /*< private >*/
> > +    SSIPeripheral parent_obj;
> > +
> > +    /*< public >*/
> > +    TPMState tpm_state; /* not a QOM object */
> > +    TpmTisSpiPktState tpm_tis_spi_state;
> > +
> > +    union TpmTisRWSizeByte first_byte;
> > +    union TpmTisSpiHwAddr addr;
> > +    union TpmTisSpiData data;
> > +
> > +    uint32_t data_size;
> > +    uint8_t data_idx;
> > +    uint8_t addr_idx;
> > +};
> > +
> > +struct TpmTisSpiClass {
> > +    SSIPeripheralClass parent_class;
> > +};
> > +
> > +OBJECT_DECLARE_TYPE(TpmTisSpiState, TpmTisSpiClass, TPM_TIS_SPI)
> > +
> > +static void tpm_tis_spi_mmio_read(TpmTisSpiState *tts)
> > +{
> > +    uint16_t offset = tts->addr.addr & 0xffc;
> > +
> > +    switch (offset) {
> > +    case TPM_TIS_REG_DATA_FIFO:
> > +        for (uint8_t i = 0; i < tts->data_size; i++) {
> > +            tts->data.bytes[i] = (uint8_t)tpm_tis_memory_ops.read(
> > +                                          &tts->tpm_state,
> > +                                          tts->addr.addr,
> > +

Re: [PATCH] docs: fix typo

2023-12-14 Thread Zhao Liu
On Thu, Dec 14, 2023 at 11:53:18PM +0100, Samuel Tardieu wrote:
> Date: Thu, 14 Dec 2023 23:53:18 +0100
> From: Samuel Tardieu 
> Subject: [PATCH] docs: fix typo
> X-Mailer: git-send-email 2.42.0
> 
> Signed-off-by: Samuel Tardieu 
> ---

Reviewed-by: Zhao Liu 

>  docs/tools/qemu-img.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/tools/qemu-img.rst b/docs/tools/qemu-img.rst
> index 4459c065f1..3653adb963 100644
> --- a/docs/tools/qemu-img.rst
> +++ b/docs/tools/qemu-img.rst
> @@ -406,7 +406,7 @@ Command description:
>Compare exits with ``0`` in case the images are equal and with ``1``
>in case the images differ. Other exit codes mean an error occurred during
>execution and standard error output should contain an error message.
> -  The following table sumarizes all exit codes of the compare subcommand:
> +  The following table summarizes all exit codes of the compare subcommand:
>  
>    0
>      Images are identical (or requested help was printed)
> -- 
> 2.42.0
> 
> 



[PATCH] docs: fix typo

2023-12-14 Thread Samuel Tardieu
Signed-off-by: Samuel Tardieu 
---
 docs/tools/qemu-img.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tools/qemu-img.rst b/docs/tools/qemu-img.rst
index 4459c065f1..3653adb963 100644
--- a/docs/tools/qemu-img.rst
+++ b/docs/tools/qemu-img.rst
@@ -406,7 +406,7 @@ Command description:
   Compare exits with ``0`` in case the images are equal and with ``1``
   in case the images differ. Other exit codes mean an error occurred during
   execution and standard error output should contain an error message.
-  The following table sumarizes all exit codes of the compare subcommand:
+  The following table summarizes all exit codes of the compare subcommand:
 
   0
     Images are identical (or requested help was printed)
-- 
2.42.0




Re: [RFC 3/3] aio-posix: call ->poll_end() when removing AioHandler

2023-12-14 Thread Paolo Bonzini
On Thu, Dec 14, 2023 at 9:12 PM Stefan Hajnoczi  wrote:

> Since the fd may be bypassed until ->io_poll_end() returns, we must poll
> one last time to check if an event snuck in right at the end without
> making the fd readable. If polling detected an event, then we must do
> something. We cannot drop the event.


I agree that in general we cannot. I wonder however if, in the (already
racy) case of a concurrent aio_set_fd_handler(ctx, fd, NULL, ...), you
really need to call poll_ready here.

>
> (An alternative is to poll once before monitoring the fd again. That way
> pending events are detected even if the fd wasn't readable. That is
> currently not the way aio-posix.c works though.)
>

Yes, that would be a change. If I understood Hanna's suggestions in the
issue correctly, she also mentioned doing a manual virtqueue notification
before monitoring restarts. So basically my idea boils down to implementing
that, and then cleaning up everything on top.
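
Concretely, that might look something like this -- a rough sketch against
the 8.2-era attach path in hw/virtio/virtio.c, where exactly the kick
belongs being precisely the open question:

void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
{
    aio_set_event_notifier(ctx, &vq->host_notifier,
                           virtio_queue_host_notifier_read,
                           virtio_queue_host_notifier_aio_poll,
                           virtio_queue_host_notifier_aio_poll_ready);
    aio_set_event_notifier_poll(ctx, &vq->host_notifier,
                                virtio_queue_host_notifier_aio_poll_begin,
                                virtio_queue_host_notifier_aio_poll_end);

    /* manual notification: make the fd readable so a request that
     * arrived while monitoring was off is processed on the next event
     * loop iteration instead of being lost */
    event_notifier_set(&vq->host_notifier);
}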

Paolo



> Stefan
>


Re: [RFC 3/3] aio-posix: call ->poll_end() when removing AioHandler

2023-12-14 Thread Stefan Hajnoczi
On Wed, Dec 13, 2023 at 10:52:58PM +0100, Paolo Bonzini wrote:
> On Wed, Dec 13, 2023 at 10:15 PM Stefan Hajnoczi  wrote:
> > -    /* If a read is in progress, just mark the node as deleted */
> > -    if (ctx->walking_handlers > 0) {
> > -        QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_deleted);
> > -        return false;
> > +    /* If polling was started on the node then end it now */
> > +    if (ctx->poll_started && node->io_poll_end) {
> > +        node->io_poll_end(node->opaque);
> > +
> > +        /* Poll one last time in case ->io_poll_end() raced with the event */
> > +        if (node->io_poll(node->opaque)) {
> > +            poll_ready = true;
> > +        }
> 
> Do you need this at all? If the caller is removing the handlers, they
> should have put themselves in a state where they don't care about the
> file descriptor becoming readable.

The caller no longer wishes to monitor the fd. This may be temporary
though. The fd must stay readable so that the caller can monitor it
again in the future and handle pending events.

->io_poll_begin() and ->io_poll_end() are used to bypass the fd while in
polling mode (e.g. disabling virtqueue kicks). This is a performance
optimization that avoids wasting cycles servicing fds when we're polling
anyway.
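
For the virtio case the hooks boil down to toggling guest notifications,
roughly this (simplified from hw/virtio/virtio.c):

static void virtio_queue_host_notifier_aio_poll_begin(EventNotifier *n)
{
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

    /* we are busy-polling the virtqueue anyway, so guest kicks via the
     * fd are pure overhead: suppress them */
    virtio_queue_set_notification(vq, 0);
}

static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
{
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

    /* polling has stopped; the fd is the only signal again */
    virtio_queue_set_notification(vq, 1);
}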

When the caller does:
1. aio_set_fd_handler(ctx, fd, NULL, ...)    // stop monitoring
2. aio_set_fd_handler(ctx, fd, read_fd, ...) // start monitoring

Then read_fd() should be called if the fd became readable between 1 and 2.

Since the fd may be bypassed until ->io_poll_end() returns, we must poll
one last time to check if an event snuck in right at the end without
making the fd readable. If polling detected an event, then we must do
something. We cannot drop the event.

(An alternative is to poll once before monitoring the fd again. That way
pending events are detected even if the fd wasn't readable. That is
currently not the way aio-posix.c works though.)
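
A hypothetical sketch of that alternative, run whenever a handler is
(re)registered (the helper name is made up; the AioHandler fields are the
real ones from util/aio-posix.h):

/* Hypothetical, not current aio-posix.c behavior: poll once at
 * registration so an event that fired while the fd was bypassed is
 * delivered even though the fd never became readable. */
static void aio_poll_on_register(AioHandler *node)
{
    if (node->io_poll && node->io_poll(node->opaque)) {
        node->io_poll_ready(node->opaque);
    }
}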

Stefan




Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2023-12-14 Thread Stefan Hajnoczi
On Thu, Dec 14, 2023 at 02:38:47PM +0100, Fiona Ebner wrote:
> Am 13.12.23 um 22:15 schrieb Stefan Hajnoczi:
> > But there you have it. Please let me know what you think and try your
> > reproducers to see if this fixes the missing io_poll_end() issue. Thanks!
> > 
> 
> Thanks to you! I applied the RFC (and the series it depends on) on top
> of 8.2.0-rc4 and this fixes my reproducer which drains VirtIO SCSI or
> VirtIO block devices in a loop. Also didn't encounter any other issues
> while playing around a bit with backup and mirror jobs.
> 
> The changes look fine to me, but this issue is also the first time I
> came in close contact with this code, so that unfortunately does not say
> much.

Great.

I will still try the other approach that Hanna and Paolo have suggested.
It seems more palatable. I will send a v2.

Stefan




Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2023-12-14 Thread Stefan Hajnoczi
On Thu, Dec 14, 2023 at 12:10:32AM +0100, Paolo Bonzini wrote:
> On Wed, Dec 13, 2023 at 10:15 PM Stefan Hajnoczi  wrote:
> > Alternatives welcome! (A cleaner version of this approach might be to forbid
> > cross-thread aio_set_fd_handler() calls and to refactor all
> > aio_set_fd_handler() callers so they come from the AioContext's home thread.
> > I'm starting to think that only the aio_notify() and aio_schedule_bh() APIs
> > should be thread-safe.)
> 
> I think that's pretty hard because aio_set_fd_handler() is a pretty
> important part of the handoff from one AioContext to another and also
> of drained_begin()/end(), and both of these things run in the main
> thread.
> 
> Regarding how to solve this issue, there is a lot of
> "underdocumenting" of the locking policy in aio-posix.c, and indeed it
> makes running aio_set_fd_handler() in the target AioContext tempting;
> but it is also scary to rely on the iothread being able to react
> quickly. I'm also worried that we're changing the logic just because
> we don't understand the old one, but then we add technical debt.
> 
> So, as a first step, I would take inspiration from the block layer
> locking work, and add assertions to functions like poll_set_started()
> or find_aio_handler(). Is the list_lock elevated (nonzero)? Or locked?
> Are we in the iothread? And likewise, for each list, does insertion
> happen from the iothread or with the list_lock taken (and possibly
> elevated)? Does removal happen from the iothread or with list_lock
> zero+taken?
> 
> After this step, we should have a clearer idea of the possible states
> of the node (based on the lists, the state is a subset of
> {poll_started, deleted, ready}) and draw a nice graph of the
> transitions. We should also understand if any calls to
> QLIST_IS_INSERTED() have correctness issues.
> 
> Good news, I don't think any memory barriers are needed here. One
> thing that we already do correctly is that, once a node is deleted, we
> try to skip work; see for example poll_set_started(). This also
> provides a good place to do cleanup work for deleted nodes, including
> calling poll_end(): aio_free_deleted_handlers(), because it runs with
> list_lock zero and taken, just like the tail of
> aio_remove_fd_handler(). It's the safest possible place to do cleanup
> and to take a lock. Therefore we have:
> 
> - a fast path in the iothread that runs without any concurrence with
> stuff happening in the main thread
> 
> - a slow path in the iothread that runs with list_lock zero and taken.
> The slow path shares logic with the main thread, meaning that
> aio_free_deleted_handlers() and aio_remove_fd_handler() should share
> some functions called by both.
> 
> If the code is organized this way, any wrong bits should jump out more
> easily. For example, these two lines in aio_remove_fd_handler() are
> clearly misplaced
> 
> node->pfd.revents = 0;
> node->poll_ready = false;
> 
> because they run in the main thread but they touch iothread data! They
> should be after qemu_lockcnt_count() is checked to be zero.
> 
> Regarding the call to io_poll_ready(), I would hope that it is
> unnecessary; in other words, that after drained_end() the virtqueue
> notification would be raised. Yes, virtio_queue_set_notification is
> edge triggered rather than level triggered, so it would be necessary
> to add a check with virtio_queue_host_notifier_aio_poll() and
> virtio_queue_host_notifier_aio_poll_ready() in
> virtio_queue_aio_attach_host_notifier, but that does not seem too bad
> because virtio is the only user of the io_poll_begin and io_poll_end
> callbacks. It would have to be documented though.

I think Hanna had the same idea: document that ->io_poll_end() isn't
called by aio_set_fd_handler() and shift the responsibility onto the
caller to get back into a state where notifications are enabled before
they add the fd with aio_set_fd_handler() again.

In a little more detail, the caller needs to do the following before
adding the fd back with aio_set_fd_handler() again:
1. Call ->io_poll_end().
2. Poll one more time in case an event slipped in and write to the
   eventfd so the fd is immediately readable or call ->io_poll_ready().

I think this is more or less what you described above.
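
In code, the caller-side sequence could look like this (the dev_* names
are hypothetical; event_notifier_set() latches the pending event into
the fd):

/* hypothetical device being re-added after a temporary removal */
dev_poll_end(dev);                      /* 1. undo the fd bypass          */
if (dev_poll(dev)) {                    /* 2. one last poll...            */
    event_notifier_set(&dev->notifier); /* ...latch the event into the fd */
}
aio_set_event_notifier(ctx, &dev->notifier,
                       dev_read, dev_poll, dev_poll_ready);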

I don't like pushing this responsibility onto the caller, but adding a
synchronization point in aio_set_fd_handler() is problematic, so let's
give it a try. I'll try that approach and send a v2.

Stefan

> 
> Paolo
> 
> 
> Paolo
> 
> >
> > Stefan Hajnoczi (3):
> >   aio-posix: run aio_set_fd_handler() in target AioContext
> >   aio: use counter instead of ctx->list_lock
> >   aio-posix: call ->poll_end() when removing AioHandler
> >
> >  include/block/aio.h |  22 ++---
> >  util/aio-posix.c    | 197 
> >  util/async.c        |   2 -
> >  util/fdmon-epoll.c  |   6 +-
> >  4 files changed, 152 insertions(+), 75 deletions(-)
> >
> > --
> > 2.43.0
> >
> 



Re: [RFC 0/3] aio-posix: call ->poll_end() when removing AioHandler

2023-12-14 Thread Fiona Ebner
Am 13.12.23 um 22:15 schrieb Stefan Hajnoczi:
> But there you have it. Please let me know what you think and try your
> reproducers to see if this fixes the missing io_poll_end() issue. Thanks!
> 

Thanks to you! I applied the RFC (and the series it depends on) on top
of 8.2.0-rc4 and this fixes my reproducer which drains VirtIO SCSI or
VirtIO block devices in a loop. Also didn't encounter any other issues
while playing around a bit with backup and mirror jobs.

The changes look fine to me, but this issue is also the first time I
came in close contact with this code, so that unfortunately does not say
much.

Best Regards,
Fiona