Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Thu, Mar 02, 2017 at 01:36:25AM +0200, Michael S. Tsirkin wrote:
> libtpms seems to be packaged and used outside QEMU, I don't say we need
> to have that in tree. I thought we were discussing swtpm cuse thing.

In any case, I'd like to stress that my comments aren't absolute.  I
merely described what it would take for me to be able to review these
patches properly; others more motivated might be able to do it with
the current CUSE architecture. Should someone else review and merge
these patches, I won't comment.

-- 
MST



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 02:24:20PM -0500, Stefan Berger wrote:
> > > Anyway, having it in a separate process locked down by SELinux means that 
> > > even
> > > if it does go horribly wrong it won't break qemu.
> > > 
> > > Dave
> > Since qemu does blocking ioctls on it and doesn't validate results
> > too much, it sure can break QEMU - anything from DoS to random
> > code execution. That's why we want to keep it in tree and
> > start it ourselves - I don't want CVEs claiming that not validating
> > some parameter we get from it is a remote code execution.
> > It should be just a library that yes, we can keep out of
> > process for extra security, but no, we can't just put random
> > stuff in there and never care.
> 
> So then the question is whether the implementation is hopelessly broken, or
> whether we can defend against buffer overflows so that remote code execution
> from a malicious TPM emulator cannot actually happen. I thought I was properly
> checking the allocated buffer for size and that we won't receive more than
> the expected number of bytes, but maybe it needs an additional check for
> unreasonable input.
> 
> Example of such code is here:
> 
> https://github.com/stefanberger/qemu-tpm/commit/27d332dc3b2c6bfd0fcd38e69f5c899651f3a5d8#diff-c9d7e2e1d4b17b93ca5580ec2d0d204aR188
> 
> 
> FYI:
> TPM 1.2 in libtpms:
> 
> $ wc *.c *.h | grep total
>   86130  352307 3227530 total
> 
> 
> TPM 2 in TPM 2 preview branch of libtpms:
> 
> $ wc *.c *.h | grep total
>   65318  319043 2651231 total

libtpms seems to be packaged and used outside QEMU, I don't say we need
to have that in tree. I thought we were discussing swtpm cuse thing.

> 



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 06:56:17PM +, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 06:32:19PM +, Marc-André Lureau wrote:
> > Hi
> > 
> > On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin  wrote:
> > 
> > >
> > > > You're also tying the code
> > > > into the QEMU release cycle, again for no tangible benefit.
> > >
> > > No need for ABI stability would be the benefit.
> > >
> > 
> > We are talking about the control channel ABI (the data channel is using TCG
> > defined command streams afaict - don't remember what it is called)
> > 
> > 
> > >
> > > > Conceptually
> > > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> > >
> > > Maybe it could but it isn't.
> > >
> > 
> > Right, it would be reasonable to have qemu provide its own private "swtpm"
> > (linking with libtpms, doing most of the job), that way it wouldn't have to
> > rely on a stable ABI (as long as the process isn't shared across different
> > qemu versions, which should be quite easy to achieve)
> 
> I think we need to expect to have a stable ABI no matter what. During
> upgrade cycles, it is desirable to be able to upgrade the swtpm process
> associated with a running VM.

Why? It should be part of the same rpm as QEMU,
upgrading QEMU requires VM restart and so should this.

We really really do not want a stable ABI if we can get
away with not having one.

> Whether this is done by restarting the
> process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
> FD open, you still end up with newer swtpm talking to an older QEMU. Or
> conversely you might have setup swtpm processes to populate a number of
> CUSE devices, and then later launch QEMU binaries to connect to them - at
> which point there's no guarantee the QEMU version hasn't been upgraded -
> or the user could have requested a custom QEMU binary to virt-install,
> etc.

Sounds like feature creep to me. Separate processes for parts of QEMU
for extra security make sense. A stable ABI between the parts does not.

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 01:30 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 06:18:01PM +, Dr. David Alan Gilbert wrote:

* Michael S. Tsirkin (m...@redhat.com) wrote:

On Wed, Mar 01, 2017 at 06:06:02PM +, Dr. David Alan Gilbert wrote:

* Michael S. Tsirkin (m...@redhat.com) wrote:

On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:

On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:

On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:

On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:

On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:

I had already proposed a linked-in version before I went to the out-of-process
design. Anthony's concerns back then were related to the code not being trusted
and a segfault in the code could bring down all of QEMU. That we have test
suites running over it didn't work as an argument. Some of the test suites are
private, though.

Given how bad the alternative is maybe we should go back to that one.
Same argument can be made for any device and we aren't making
them out of process right now.

IMO it's less the in-process question (modularization
of QEMU has been on the agenda for years and I don't
think anyone is against it) it's more a code control/community question.

I rather disagree. Modularization of QEMU has seen few results
because it is generally a hard problem to solve when you have a
complex pre-existing codebase.  I don't think code control has
been a factor in this - as long as QEMU can clearly define its
ABI/API between core & the modular pieces, it doesn't matter
who owns the module. We've seen this with vhost-user which is
essentially outsourcing network device backend impls to a 3rd
party project.

And it was done precisely for community reasons.  The dpdk/VPP community is
quite large and well funded, but they just can't all grok QEMU.  They
work for hardware vendors and do baremetal things.  With the split we
can focus on virtualization and they can focus on moving packets around.



QEMU's defined the vhost-user ABI/API and delegated
impl to something else.

The vhost ABI isn't easy to maintain at all though. So I would not
commit to that lightly without a good reason.

It will be way more painful if the ABI is dictated by a 3rd party
library.

Who should define it?


No one. Put it in same source tree with QEMU and forget ABI stability
issues.

You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
These are multiple thousands of lines of code each and we'll break them
apart into logical chunks and review them?

No, let's not make that mistake again - we only just got rid of the
libcacard smartcard library code from QEMU git.

Regards,
Daniel

I don't mean that as an external library. As an integral part of QEMU
adhering to our coding style etc - why not?

I don't know what the other options are.  How is depending on an ABI
with a utility with no other users and not packaged by most distros
good? You are calling out to a CUSE device but who's reviewing that
code?

vl.c weighs in at 4500 lines of code. Several thousand lines is
not small, but not unmanageable.


That's 4500 lines of fairly generic code; not like the TPM where the number
of people who really understand the details of it is pretty slim.

It's better on most counts to have it as a separate process.

Dave

Separate process we start and stop automatically I don't mind. A
separate tree with a distinct coding style where no one will ever even
look at it? Not so much.

That code is used elsewhere anyway,

Who uses it? Who packages it? Fedora doesn't ...


so asking them to change the coding style
isn't very nice.
Even if they change the coding style it doesn't mean you're suddenly going to
understand how a TPM works in detail and be able to review it.

I did in the past, but I haven't kept abreast of the recent developments.


Anyway, having it in a separate process locked down by SELinux means that even
if it does go horribly wrong it won't break qemu.

Dave

Since qemu does blocking ioctls on it and doesn't validate results
too much, it sure can break QEMU - anything from DoS to random
code execution. That's why we want to keep it in tree and
start it ourselves - I don't want CVEs claiming that not validating
some parameter we get from it is a remote code execution.
It should be just a library that yes, we can keep out of
process for extra security, but no, we can't just put random
stuff in there and never care.


So then the question is whether the implementation is hopelessly broken 
or whether we can defend against buffer overflows so that remote code 
execution from a malicious TPM emulator cannot actually happen. I thought I 
was properly checking the allocated buffer for size and that we won't 
receive more than the expected number of 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Marc-André Lureau
On Wed, Mar 1, 2017 at 10:56 PM Daniel P. Berrange 
wrote:

> On Wed, Mar 01, 2017 at 06:32:19PM +, Marc-André Lureau wrote:
> > Hi
> >
> > On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin 
> wrote:
> >
> > >
> > > > You're also tying the code
> > > > into the QEMU release cycle, again for no tangible benefit.
> > >
> > > No need for ABI stability would be the benefit.
> > >
> >
> > We are talking about the control channel ABI (the data channel is using
> TCG
> > defined command streams afaict - don't remember what it is called)
> >
> >
> > >
> > > > Conceptually
> > > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> > >
> > > Maybe it could but it isn't.
> > >
> >
> > Right, it would be reasonable to have qemu provide its own private
> "swtpm"
> > (linking with libtpms, doing most of the job), that way it wouldn't have
> to
> > rely on a stable ABI (as long as the process isn't shared across
> different
> > qemu versions, which should be quite easy to achieve)
>
> I think we need to expect to have a stable ABI no matter what. During
> upgrade cycles, it is desirable to be able to upgrade the swtpm process
> associated with a running VM. Whether this is done by restarting the
> process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
> FD open, you still end up with newer swtpm talking to an older QEMU. Or
>

I am not sure why this is required. You could require that both qemu &
helper process are restarted in this case so they stay in sync, no?


> conversely you might have setup swtpm processes to populate a number of
> CUSE devices, and then later launch QEMU binaries to connect to them - at
>

I would rather avoid a CUSE device with this private qemu helper process.


> which point there's no guarantee the QEMU version hasn't been upgraded -
> or the user could have requested a custom QEMU binary to virt-install,
> etc.
>

The point is to have the qemu binary tightly coupled with the helper process. If
they are incompatible, your installation is broken and it should fail to start.

-- 
Marc-André Lureau


Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 06:32:19PM +, Marc-André Lureau wrote:
> Hi
> 
> On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin  wrote:
> 
> >
> > > You're also tying the code
> > > into the QEMU release cycle, again for no tangible benefit.
> >
> > No need for ABI stability would be the benefit.
> >
> 
> We are talking about the control channel ABI (the data channel is using TCG
> defined command streams afaict - don't remember what it is called)
> 
> 
> >
> > > Conceptually
> > > swtpm does not depend on, or require, QEMU to be useful - it can have
> > > other non-QEMU consumers - bundling with QEMU is not helpful there.
> >
> > Maybe it could but it isn't.
> >
> 
> Right, it would be reasonable to have qemu provide its own private "swtpm"
> (linking with libtpms, doing most of the job), that way it wouldn't have to
> rely on a stable ABI (as long as the process isn't shared across different
> qemu versions, which should be quite easy to achieve)

I think we need to expect to have a stable ABI no matter what. During
upgrade cycles, it is desirable to be able to upgrade the swtpm process
associated with a running VM. Whether this is done by restarting the
process & having QEMU reconnect, or by re-exec'ing swtpm and keeping the
FD open, you still end up with newer swtpm talking to an older QEMU. Or
conversely you might have setup swtpm processes to populate a number of
CUSE devices, and then later launch QEMU binaries to connect to them - at
which point there's no guarantee the QEMU version hasn't been upgraded -
or the user could have requested a custom QEMU binary to virt-install,
etc.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Marc-André Lureau
Hi

On Wed, Mar 1, 2017 at 10:20 PM Michael S. Tsirkin  wrote:

>
> > You're also tying the code
> > into the QEMU release cycle, again for no tangible benefit.
>
> No need for ABI stability would be the benefit.
>

We are talking about the control channel ABI (the data channel is using TCG
defined command streams afaict - don't remember what it is called)


>
> > Conceptually
> > swtpm does not depend on, or require, QEMU to be useful - it can have
> > other non-QEMU consumers - bundling with QEMU is not helpful there.
>
> Maybe it could but it isn't.
>

Right, it would be reasonable to have qemu provide its own private "swtpm"
(linking with libtpms, doing most of the job), that way it wouldn't have to
rely on a stable ABI (as long as the process isn't shared across different
qemu versions, which should be quite easy to achieve)

>
> >
> > > I don't know what the other options are.  How is depending on an ABI
> > > with a utility with no other users and not packaged by most distros
> > > good? You are calling out to a CUSE device but who's reviewing that
> > > code?
> >
> > If anyone is motivated enough to review the code, they can do it whether
> > it is in QEMU git or its own git. Pulling the whole of swtpm into QEMU git
> > isn't magically going to get useful reviews done on the code. The QEMU
> > maintainers already have far more code to review than available review
> > bandwidth, and lack domain knowledge in TPM concepts.
>
> I was the only one merging TPM code so far. I don't call myself an
> expert.  If someone steps up to do the work, is trusted by Peter to
> maintain it for X years and doesn't care about the extra hurdles, more
> power to them.
>

Why not give Stefan maintainership of TPM?

>
> > > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > > the current implementation because I could not figure out what do qemu
> > > bits do without looking at the implementation. I don't want to jump
> > > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > > painful enough to manage. If some other maintainer volunteers, or if
> > > Peter wants to merge it directly from Stefan, I won't object.
> >
>

ok


> >
> > Regards,
> > Daniel
> > --
> > |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org  -o-
> http://virt-manager.org :|
> > |: http://entangle-photo.org   -o-
> http://search.cpan.org/~danberr/ :|
>
-- 
Marc-André Lureau


Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 06:18:01PM +, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (m...@redhat.com) wrote:
> > On Wed, Mar 01, 2017 at 06:06:02PM +, Dr. David Alan Gilbert wrote:
> > > * Michael S. Tsirkin (m...@redhat.com) wrote:
> > > > On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > > > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange 
> > > > > > > > > wrote:
> > > > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. 
> > > > > > > > > > Tsirkin wrote:
> > > > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger 
> > > > > > > > > > > wrote:
> > > > > > > > > > > > I had already proposed a linked-in version before I 
> > > > > > > > > > > > went to the out-of-process
> > > > > > > > > > > > design. Anthony's concerns back then were related to 
> > > > > > > > > > > > the code not being trusted
> > > > > > > > > > > > and a segfault in the code could bring down all of 
> > > > > > > > > > > > QEMU. That we have test
> > > > > > > > > > > > suites running over it didn't work as an argument. Some 
> > > > > > > > > > > > of the test suite are
> > > > > > > > > > > > private, though.
> > > > > > > > > > > Given how bad the alternative is maybe we should go back 
> > > > > > > > > > > to that one.
> > > > > > > > > > > Same argument can be made for any device and we aren't 
> > > > > > > > > > > making
> > > > > > > > > > > them out of process right now.
> > > > > > > > > > > 
> > > > > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > > > > of QEMU has been on the agenda for years and I don't
> > > > > > > > > > > think anyone is against it) it's more a code 
> > > > > > > > > > > control/community question.
> > > > > > > > > > I rather disagree. Modularization of QEMU has seen few 
> > > > > > > > > > results
> > > > > > > > > > because it is generally a hard problem to solve when you 
> > > > > > > > > > have a
> > > > > > > > > > complex pre-existing codebase.  I don't think code control 
> > > > > > > > > > has
> > > > > > > > > > been a factor in this - as long as QEMU can clearly define 
> > > > > > > > > > its
> > > > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > > > who owns the module. We've seen this with vhost-user which 
> > > > > > > > > > is
> > > > > > > > > > essentially outsourcing network device backend impls to a 
> > > > > > > > > > 3rd
> > > > > > > > > > party project.
> > > > > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > > > > community is
> > > > > > > > > quite large and well funded but they just can't all grok 
> > > > > > > > > QEMU.  They
> > > > > > > > > work for hardware vendors and do baremetal things.  With the 
> > > > > > > > > split we
> > > > > > > > > can focus on virtualization and they can focus on moving 
> > > > > > > > > packets around.
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > > > impl to something else.
> > > > > > > > > The vhost ABI isn't easy to maintain at all though. So I 
> > > > > > > > > would not
> > > > > > > > > commit to that lightly without a good reason.
> > > > > > > > > 
> > > > > > > > > It will be way more painful if the ABI is dictated by a 3rd 
> > > > > > > > > party
> > > > > > > > > library.
> > > > > > > > Who should define it?
> > > > > > > > 
> > > > > > > No one. Put it in same source tree with QEMU and forget ABI 
> > > > > > > stability
> > > > > > > issues.
> > > > > > 
> > > > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the 
> > > > > > QEMU tree?
> > > > > > These are multiple thousands of lines of code each and we'll break 
> > > > > > them
> > > > > > apart into logical chunks and review them?
> > > > > 
> > > > > No, let's not make that mistake again - we only just got rid of the
> > > > > libcacard smartcard library code from QEMU git. 
> > > > > 
> > > > > Regards,
> > > > > Daniel
> > > > 
> > > > I don't mean that as an external library. As an integral part of QEMU
> > > > adhering to our coding style etc - why not?
> > > > 
> > > > I don't know what the other options are.  How is depending on an ABI
> > > > with a utility with no other users and not packaged by most distros
> > > > good? You are calling out to a CUSE device but who's reviewing that
> > > > code?
> > > > 
> > > > vl.c weighs in at 4500 lines of code. Several thousand lines is
> > > > not small but not unmanageable.
> > > 
> > > 
> > > That's 4500 lines of fairly generic code; not like the TPM where the 
> > > number
> > > of people who really understand the details of it is 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Dr. David Alan Gilbert
* Michael S. Tsirkin (m...@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 06:06:02PM +, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (m...@redhat.com) wrote:
> > > On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange 
> > > > > > > > wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin 
> > > > > > > > > wrote:
> > > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger 
> > > > > > > > > > wrote:
> > > > > > > > > > > I had already proposed a linked-in version before I went 
> > > > > > > > > > > to the out-of-process
> > > > > > > > > > > design. Anthony's concerns back then were related to the 
> > > > > > > > > > > code not being trusted
> > > > > > > > > > > and a segfault in the code could bring down all of QEMU. 
> > > > > > > > > > > That we have test
> > > > > > > > > > > suites running over it didn't work as an argument. Some 
> > > > > > > > > > > of the test suite are
> > > > > > > > > > > private, though.
> > > > > > > > > > Given how bad the alternative is maybe we should go back to 
> > > > > > > > > > that one.
> > > > > > > > > > Same argument can be made for any device and we aren't 
> > > > > > > > > > making
> > > > > > > > > > them out of process right now.
> > > > > > > > > > 
> > > > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > > > of QEMU has been on the agenda for years and I don't
> > > > > > > > > > think anyone is against it) it's more a code 
> > > > > > > > > > control/community question.
> > > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > > because it is generally a hard problem to solve when you have 
> > > > > > > > > a
> > > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > > party project.
> > > > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > > > community is
> > > > > > > > quite large and well funded but they just can't all grok QEMU.  
> > > > > > > > They
> > > > > > > > work for hardware vendors and do baremetal things.  With the 
> > > > > > > > split we
> > > > > > > > can focus on virtualization and they can focus on moving 
> > > > > > > > packets around.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > > impl to something else.
> > > > > > > > The vhost ABI isn't easy to maintain at all though. So I would 
> > > > > > > > not
> > > > > > > > commit to that lightly without a good reason.
> > > > > > > > 
> > > > > > > > It will be way more painful if the ABI is dictated by a 3rd 
> > > > > > > > party
> > > > > > > > library.
> > > > > > > Who should define it?
> > > > > > > 
> > > > > > No one. Put it in same source tree with QEMU and forget ABI 
> > > > > > stability
> > > > > > issues.
> > > > > 
> > > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
> > > > > tree?
> > > > > These are multiple thousands of lines of code each and we'll break 
> > > > > them
> > > > > apart into logical chunks and review them?
> > > > 
> > > > No, let's not make that mistake again - we only just got rid of the
> > > > libcacard smartcard library code from QEMU git. 
> > > > 
> > > > Regards,
> > > > Daniel
> > > 
> > > I don't mean that as an external library. As an integral part of QEMU
> > > adhering to our coding style etc - why not?
> > > 
> > > I don't know what the other options are.  How is depending on an ABI
> > > with a utility with no other users and not packaged by most distros
> > > good? You are calling out to a CUSE device but who's reviewing that
> > > code?
> > > 
> > > vl.c weighs in at 4500 lines of code. Several thousand lines is
> > > not small but not unmanageable.
> > 
> > 
> > That's 4500 lines of fairly generic code; not like the TPM where the number
> > of people who really understand the details of it is pretty slim.
> > 
> > It's better on most counts to have it as a separate process.
> > 
> > Dave
> 
> Separate process we start and stop automatically I don't mind. A
> separate tree with a distinct coding style where no one will ever even
> look at it? Not so much.

That code is used elsewhere anyway, so asking them to change the coding style
isn't very nice.
Even if 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 06:11:17PM +, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 07:58:36PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange 
> > > > > > > wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin 
> > > > > > > > wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > I had already proposed a linked-in version before I went to 
> > > > > > > > > > the out-of-process
> > > > > > > > > > design. Anthony's concerns back then were related to the 
> > > > > > > > > > code not being trusted
> > > > > > > > > > and a segfault in the code could bring down all of QEMU. 
> > > > > > > > > > That we have test
> > > > > > > > > > suites running over it didn't work as an argument. Some of 
> > > > > > > > > > the test suite are
> > > > > > > > > > private, though.
> > > > > > > > > Given how bad the alternative is maybe we should go back to 
> > > > > > > > > that one.
> > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > them out of process right now.
> > > > > > > > > 
> > > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > > of QEMU has been on the agenda for years and I don't
> > > > > > > > > think anyone is against it) it's more a code 
> > > > > > > > > control/community question.
> > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > party project.
> > > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > > community is
> > > > > > > quite large and well funded but they just can't all grok QEMU.  
> > > > > > > They
> > > > > > > work for hardware vendors and do baremetal things.  With the 
> > > > > > > split we
> > > > > > > can focus on virtualization and they can focus on moving packets 
> > > > > > > around.
> > > > > > > 
> > > > > > > 
> > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > impl to something else.
> > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > commit to that lightly without a good reason.
> > > > > > > 
> > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > library.
> > > > > > Who should define it?
> > > > > > 
> > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > issues.
> > > > 
> > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
> > > > tree?
> > > > These are multiple thousands of lines of code each and we'll break them
> > > > apart into logical chunks and review them?
> > > 
> > > No, let's not make that mistake again - we only just got rid of the
> > > libcacard smartcard library code from QEMU git. 
> > 
> > I don't mean that as an external library. As an integral part of QEMU
> > adhering to our coding style etc - why not?
> 
> Changing swtpm to the QEMU coding style is a pointless exercise - just
> busy work for no functional end benefit.

I'm not sure what you are saying here. I don't appreciate extra hurdles
to review; it's hard enough as it is. If others don't care, good for
them.

> You're also tying the code
> into the QEMU release cycle, again for no tangible benefit.

No need for ABI stability would be the benefit.

> Conceptually
> swtpm does not depend on, or require, QEMU to be useful - it can have
> other non-QEMU consumers - bundling with QEMU is not helpful there.

Maybe it could but it isn't.

> 
> > I don't know what the other options are.  How is depending on an ABI
> > with a utility with no other users and not packaged by most distros
> > good? You are calling out to a CUSE device but who's reviewing that
> > code?
> 
> If anyone is motivated enough to review the code, they can do it whether
> it is in QEMU git or its own git. Pulling the whole of swtpm into QEMU git
> isn't magically going to get useful reviews done on the code. The QEMU
> maintainers already have far more code to review than available review
> bandwidth, and lack domain knowledge in TPM concepts.

I was the only one 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 07:58:36PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin 
> > > > > > > wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > I had already proposed a linked-in version before I went to 
> > > > > > > > > the out-of-process
> > > > > > > > > design. Anthony's concerns back then were related to the code 
> > > > > > > > > not being trusted
> > > > > > > > > and a segfault in the code could bring down all of QEMU. That 
> > > > > > > > > we have test
> > > > > > > > > suites running over it didn't work as an argument. Some of 
> > > > > > > > > the test suites are
> > > > > > > > > private, though.
> > > > > > > > Given how bad the alternative is maybe we should go back to 
> > > > > > > > that one.
> > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > them out of process right now.
> > > > > > > > 
> > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > think anyone is against it) it's more a code control/community 
> > > > > > > > question.
> > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > party project.
> > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > community is
> > > > > > quite large and well funded but they just can't all grok QEMU.  They
> > > > > > work for hardware vendors and do baremetal things.  With the split 
> > > > > > we
> > > > > > can focus on virtualization and they can focus on moving packets 
> > > > > > around.
> > > > > > 
> > > > > > 
> > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > impl to something else.
> > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > commit to that lightly without a good reason.
> > > > > > 
> > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > library.
> > > > > Who should define it?
> > > > > 
> > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > issues.
> > > 
> > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
> > > tree?
> > > These are multiple thousands of lines of code each and we'll break them
> > > apart into logical chunks and review them?
> > 
> > No, let's not make that mistake again - we only just got rid of the
> > libcacard smartcard library code from QEMU git. 
> 
> I don't mean that as an external library. As an integral part of QEMU
> adhering to our coding style etc - why not?

Changing swtpm to the QEMU coding style is a pointless exercise - just
busy work for no functional end benefit. You're also tying the code
into the QEMU release cycle, again for no tangible benefit. Conceptually
swtpm does not depend on, or require, QEMU to be useful - it can have
other non-QEMU consumers - bundling with QEMU is not helpful there.

> I don't know what the other options are.  How is depending on an ABI
> with a utility with no other users and not packaged by most distros
> good? You are calling out to a CUSE device but who's reviewing that
> code?

If anyone is motivated enough to review the code, they can do it whether
it is in QEMU git or its own git. Pulling the entirety of swtpm into QEMU git
isn't magically going to get useful reviews done on the code. The QEMU
maintainers already have far more code to review than available review
bandwidth, and lack domain knowledge in TPM concepts.

> Anyway, it all boils down to lack of reviewers. I know I am not merging
> the current implementation because I could not figure out what the qemu
> bits do without looking at the implementation. I don't want to jump
> between so many trees and coding styles. bios/qemu/linux/dpdk are
> painful enough to manage. If some other maintainer volunteers, or if
> Peter wants to merge it directly from Stefan, I won't object.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 06:06:02PM +, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (m...@redhat.com) wrote:
> > On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange 
> > > > > > > wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin 
> > > > > > > > wrote:
> > > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > > I had already proposed a linked-in version before I went to 
> > > > > > > > > > the out-of-process
> > > > > > > > > > design. Anthony's concerns back then were related to the 
> > > > > > > > > > code not being trusted
> > > > > > > > > > and a segfault in the code could bring down all of QEMU. 
> > > > > > > > > > That we have test
> > > > > > > > > > suites running over it didn't work as an argument. Some of 
> > > > > > > > > > the test suites are
> > > > > > > > > > private, though.
> > > > > > > > > Given how bad the alternative is maybe we should go back to 
> > > > > > > > > that one.
> > > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > > them out of process right now.
> > > > > > > > > 
> > > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > > think anyone is against it) it's more a code 
> > > > > > > > > control/community question.
> > > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > > party project.
> > > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > > community is
> > > > > > > quite large and well funded but they just can't all grok QEMU.
> > > > > > > They
> > > > > > > work for hardware vendors and do baremetal things.  With the 
> > > > > > > split we
> > > > > > > can focus on virtualization and they can focus on moving packets 
> > > > > > > around.
> > > > > > > 
> > > > > > > 
> > > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > > impl to something else.
> > > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > > commit to that lightly without a good reason.
> > > > > > > 
> > > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > > library.
> > > > > > Who should define it?
> > > > > > 
> > > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > > issues.
> > > > 
> > > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
> > > > tree?
> > > > These are multiple thousands of lines of code each and we'll break them
> > > > apart into logical chunks and review them?
> > > 
> > > No, let's not make that mistake again - we only just got rid of the
> > > libcacard smartcard library code from QEMU git. 
> > > 
> > > Regards,
> > > Daniel
> > 
> > I don't mean that as an external library. As an integral part of QEMU
> > adhering to our coding style etc - why not?
> > 
> > I don't know what the other options are.  How is depending on an ABI
> > with a utility with no other users and not packaged by most distros
> > good? You are calling out to a CUSE device but who's reviewing that
> > code?
> > 
> > vl.c weighs in at 4500 lines of code. Several thousand lines is
> > not small but not unmanageable.
> 
> 
> That's 4500 lines of fairly generic code; not like the TPM where the number
> of people who really understand the details of it is pretty slim.
> 
> It's better on most counts to have it as a separate process.
> 
> Dave

Separate process we start and stop automatically I don't mind. A
separate tree with a distinct coding style where no one will ever even
look at it? Not so much.

> > Anyway, it all boils down to lack of reviewers. I know I am not merging
> > the current implementation because I could not figure out what the qemu
> > bits do without looking at the implementation. I don't want to jump
> > between so many trees and coding styles. bios/qemu/linux/dpdk are
> > painful enough to manage. If some other maintainer volunteers, or if
> > Peter wants to merge it directly from Stefan, I won't object.
> > 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Dr. David Alan Gilbert
* Michael S. Tsirkin (m...@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> > On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin 
> > > > > > > wrote:
> > > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > > I had already proposed a linked-in version before I went to 
> > > > > > > > > the out-of-process
> > > > > > > > > design. Anthony's concerns back then were related to the code 
> > > > > > > > > not being trusted
> > > > > > > > > and a segfault in the code could bring down all of QEMU. That 
> > > > > > > > > we have test
> > > > > > > > > suites running over it didn't work as an argument. Some of 
> > > > > > > > > the test suites are
> > > > > > > > > private, though.
> > > > > > > > Given how bad the alternative is maybe we should go back to 
> > > > > > > > that one.
> > > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > > them out of process right now.
> > > > > > > > 
> > > > > > > > IMO it's less the in-process question (modularization
> > > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > > think anyone is against it) it's more a code control/community 
> > > > > > > > question.
> > > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > > because it is generally a hard problem to solve when you have a
> > > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > > party project.
> > > > > > And it was done precisely for community reasons.  dpdk/VPP 
> > > > > > community is
> > > > > > quite large and well funded but they just can't all grok QEMU.  They
> > > > > > work for hardware vendors and do baremetal things.  With the split 
> > > > > > we
> > > > > > can focus on virtualization and they can focus on moving packets 
> > > > > > around.
> > > > > > 
> > > > > > 
> > > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > > impl to something else.
> > > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > > commit to that lightly without a good reason.
> > > > > > 
> > > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > > library.
> > > > > Who should define it?
> > > > > 
> > > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > > issues.
> > > 
> > > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
> > > tree?
> > > These are multiple thousands of lines of code each and we'll break them
> > > apart into logical chunks and review them?
> > 
> > No, let's not make that mistake again - we only just got rid of the
> > libcacard smartcard library code from QEMU git. 
> > 
> > Regards,
> > Daniel
> 
> I don't mean that as an external library. As an integral part of QEMU
> adhering to our coding style etc - why not?
> 
> I don't know what the other options are.  How is depending on an ABI
> with a utility with no other users and not packaged by most distros
> good? You are calling out to a CUSE device but who's reviewing that
> code?
> 
> vl.c weighs in at 4500 lines of code. Several thousand lines is
> not small but not unmanageable.


That's 4500 lines of fairly generic code; not like the TPM where the number
of people who really understand the details of it is pretty slim.

It's better on most counts to have it as a separate process.

Dave

> Anyway, it all boils down to lack of reviewers. I know I am not merging
> the current implementation because I could not figure out what the qemu
> bits do without looking at the implementation. I don't want to jump
> between so many trees and coding styles. bios/qemu/linux/dpdk are
> painful enough to manage. If some other maintainer volunteers, or if
> Peter wants to merge it directly from Stefan, I won't object.
> 
> > -- 
> > |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ 
> > :|
> > |: http://libvirt.org  -o- http://virt-manager.org 
> > :|
> > |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ 
> > :|
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 05:20:13PM +, Daniel P. Berrange wrote:
> > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > impl to something else.
> > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > commit to that lightly without a good reason.
> > > > 
> > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > library.
> > > 
> > > Who should define it?
> > 
> > No one. Put it in same source tree with QEMU and forget ABI stability
> > issues.
> 
> That doesn't work very well in practice as you have to make sure the
> vTPM process that is running, provides exactly the same ABI as the QEMU
> process that's connecting to it. You could have a single vTPM process
> on the host serving many QEMU processes, each of which could be a
> different QEMU version, due to upgraded RPMs/Debs.
> 
> Regards,
> Daniel

I might be wrong but last time I looked each QEMU instance had to use
its own CUSE device. So the pain seems entirely self-inflicted, you
could have a process per QEMU instance, start and stop it from within
QEMU.
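The per-instance model sketched above - QEMU owning the lifetime of its own emulator process rather than sharing one long-lived CUSE device - could look roughly like this. This is a hedged illustration in Python, not QEMU code; the class name and command-line handling are invented:

```python
# Hypothetical sketch: one emulator process per VM instance, started by
# the VMM and stopped again at shutdown. The command line passed in is
# a placeholder, not a real swtpm invocation.
import signal
import subprocess


class PerInstanceEmulator:
    """Owns exactly one helper emulator process for one VM instance."""

    def __init__(self, argv):
        # Start the helper; its lifetime is tied to this object.
        self.proc = subprocess.Popen(argv)

    def stop(self):
        # Ask the helper to exit, then reap it so no zombie remains.
        self.proc.send_signal(signal.SIGTERM)
        try:
            self.proc.wait(timeout=5)
        except subprocess.TimeoutExpired:
            self.proc.kill()   # last resort if it ignores SIGTERM
            self.proc.wait()
```

With this shape there is no cross-version ABI shared between unrelated QEMU instances: each QEMU starts an emulator matched to itself and tears it down when the guest stops.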


> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 05:38:23PM +, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> > On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > > I had already proposed a linked-in version before I went to the 
> > > > > > > > out-of-process
> > > > > > > > design. Anthony's concerns back then were related to the code 
> > > > > > > > not being trusted
> > > > > > > > and a segfault in the code could bring down all of QEMU. That 
> > > > > > > > we have test
> > > > > > > > suites running over it didn't work as an argument. Some of the 
> > > > > > > > test suites are
> > > > > > > > private, though.
> > > > > > > Given how bad the alternative is maybe we should go back to that 
> > > > > > > one.
> > > > > > > Same argument can be made for any device and we aren't making
> > > > > > > them out of process right now.
> > > > > > > 
> > > > > > > IMO it's less the in-process question (modularization
> > > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > > think anyone is against it) it's more a code control/community 
> > > > > > > question.
> > > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > > because it is generally a hard problem to solve when you have a
> > > > > > complex pre-existing codebase.  I don't think code control has
> > > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > > who owns the module. We've seen this with vhost-user which is
> > > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > > party project.
> > > > > And it was done precisely for community reasons.  dpdk/VPP community 
> > > > > is
> > > > > quite large and well funded but they just can't all grok QEMU.  They
> > > > > work for hardware vendors and do baremetal things.  With the split we
> > > > > can focus on virtualization and they can focus on moving packets 
> > > > > around.
> > > > > 
> > > > > 
> > > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > > impl to something else.
> > > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > > commit to that lightly without a good reason.
> > > > > 
> > > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > > library.
> > > > Who should define it?
> > > > 
> > > No one. Put it in same source tree with QEMU and forget ABI stability
> > > issues.
> > 
> > You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> > These are multiple thousands of lines of code each and we'll break them
> > apart into logical chunks and review them?
> 
> No, let's not make that mistake again - we only just got rid of the
> libcacard smartcard library code from QEMU git. 
> 
> Regards,
> Daniel

I don't mean that as an external library. As an integral part of QEMU
adhering to our coding style etc - why not?

I don't know what the other options are.  How is depending on an ABI
with a utility with no other users and not packaged by most distros
good? You are calling out to a CUSE device but who's reviewing that
code?

vl.c weighs in at 4500 lines of code. Several thousand lines is
not small but not unmanageable.

Anyway, it all boils down to lack of reviewers. I know I am not merging
the current implementation because I could not figure out what the qemu
bits do without looking at the implementation. I don't want to jump
between so many trees and coding styles. bios/qemu/linux/dpdk are
painful enough to manage. If some other maintainer volunteers, or if
Peter wants to merge it directly from Stefan, I won't object.


> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 12:25:46PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > > I had already proposed a linked-in version before I went to the 
> > > > > > > out-of-process
> > > > > > > design. Anthony's concerns back then were related to the code not 
> > > > > > > being trusted
> > > > > > > and a segfault in the code could bring down all of QEMU. That we 
> > > > > > > have test
> > > > > > > suites running over it didn't work as an argument. Some of the 
> > > > > > > test suites are
> > > > > > > private, though.
> > > > > > Given how bad the alternative is maybe we should go back to that 
> > > > > > one.
> > > > > > Same argument can be made for any device and we aren't making
> > > > > > them out of process right now.
> > > > > > 
> > > > > > IMO it's less the in-process question (modularization
> > > > > > of QEMU has been on the agenda since years and I don't
> > > > > > think anyone is against it) it's more a code control/community 
> > > > > > question.
> > > > > I rather disagree. Modularization of QEMU has seen few results
> > > > > because it is generally a hard problem to solve when you have a
> > > > > complex pre-existing codebase.  I don't think code control has
> > > > > been a factor in this - as long as QEMU can clearly define its
> > > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > > who owns the module. We've seen this with vhost-user which is
> > > > > essentially outsourcing network device backend impls to a 3rd
> > > > > party project.
> > > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > > quite large and well funded but they just can't all grok QEMU.  They
> > > > work for hardware vendors and do baremetal things.  With the split we
> > > > can focus on virtualization and they can focus on moving packets around.
> > > > 
> > > > 
> > > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > > impl to something else.
> > > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > > commit to that lightly without a good reason.
> > > > 
> > > > It will be way more painful if the ABI is dictated by a 3rd party
> > > > library.
> > > Who should define it?
> > > 
> > No one. Put it in same source tree with QEMU and forget ABI stability
> > issues.
> 
> You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU tree?
> These are multiple thousands of lines of code each and we'll break them
> apart into logical chunks and review them?

No, let's not make that mistake again - we only just got rid of the
libcacard smartcard library code from QEMU git. 

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > I had already proposed a linked-in version before I went to the 
> > > > > out-of-process
> > > > > design. Anthony's concerns back then were related to the code not 
> > > > > being trusted
> > > > > and a segfault in the code could bring down all of QEMU. That we have 
> > > > > test
> > > > > suites running over it didn't work as an argument. Some of the test 
> > > > > suites are
> > > > > private, though.
> > > > Given how bad the alternative is maybe we should go back to that one.
> > > > Same argument can be made for any device and we aren't making
> > > > them out of process right now.
> > > > 
> > > > IMO it's less the in-process question (modularization
> > > > of QEMU has been on the agenda since years and I don't
> > > > think anyone is against it) it's more a code control/community question.
> > > I rather disagree. Modularization of QEMU has seen few results
> > > because it is generally a hard problem to solve when you have a
> > > complex pre-existing codebase.  I don't think code control has
> > > been a factor in this - as long as QEMU can clearly define its
> > > ABI/API between core & the modular pieces, it doesn't matter
> > > who owns the module. We've seen this with vhost-user which is
> > > essentially outsourcing network device backend impls to a 3rd
> > > party project.
> > And it was done precisely for community reasons.  dpdk/VPP community is
> > quite large and well funded but they just can't all grok QEMU.  They
> > work for hardware vendors and do baremetal things.  With the split we
> > can focus on virtualization and they can focus on moving packets around.
> > 
> > 
> > > QEMU's defined the vhost-user ABI/API and delegated
> > > impl to something else.
> > The vhost ABI isn't easy to maintain at all though. So I would not
> > commit to that lightly without a good reason.
> > 
> > It will be way more painful if the ABI is dictated by a 3rd party
> > library.
> 
> Who should define it?

I'm unsure of the best answer here right now. If swtpm is targeted as
being a generic component for use by arbitrary consumers, that'd tend
towards suggesting swtpm should "own" the protocol definition. On the other
hand, if we desire for QEMU to be able to replace swtpm with an alternate
impl, that might suggest QEMU define the protocol. Possibly it just does
not matter who the owner is, as long as both swtpm & QEMU maintainers
collaborate & agree on the details.

From a purely selfish QEMU maintainer POV, I'd tend to suggest a QAPI
based protocol since we have two impls of that already (monitor and
guest agent) & thus it's well understood by QEMU maintainers. Writing
a custom binary protocol might sound easy, but things can get complex
pretty quickly.
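As a rough illustration of what a QAPI-style wire protocol between QEMU and an external vTPM could look like - newline-delimited JSON with the "execute"/"return"/"error" framing the QEMU monitor already uses - consider this sketch. The command name "tpm-send" is invented for illustration; no such command is defined anywhere:

```python
# Hedged sketch of QAPI/QMP-style JSON framing for a vTPM protocol.
# All command names here are hypothetical.
import json


def encode_command(name, **arguments):
    """Frame a command QMP-style: {"execute": ..., "arguments": ...}."""
    msg = {"execute": name, "arguments": arguments}
    return (json.dumps(msg) + "\n").encode()


def decode_reply(line):
    """Unpack a reply; QMP-style errors carry a "desc" string."""
    msg = json.loads(line)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("desc", "unknown error"))
    return msg["return"]
```

Reusing a framing that QEMU maintainers already review daily is the point of the suggestion: the hard parts (versioning, introspection, error reporting) come for free rather than being reinvented in an ad-hoc binary protocol.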

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 12:16 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:

On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:

On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:

I had already proposed a linked-in version before I went to the out-of-process
design. Anthony's concerns back then were related to the code not being trusted
and a segfault in the code could bring down all of QEMU. That we have test
suites running over it didn't work as an argument. Some of the test suites are
private, though.

Given how bad the alternative is maybe we should go back to that one.
Same argument can be made for any device and we aren't making
them out of process right now.

IMO it's less the in-process question (modularization
of QEMU has been on the agenda since years and I don't
think anyone is against it) it's more a code control/community question.

I rather disagree. Modularization of QEMU has seen few results
because it is generally a hard problem to solve when you have a
complex pre-existing codebase.  I don't think code control has
been a factor in this - as long as QEMU can clearly define its
ABI/API between core & the modular pieces, it doesn't matter
who owns the module. We've seen this with vhost-user which is
essentially outsourcing network device backend impls to a 3rd
party project.

And it was done precisely for community reasons.  dpdk/VPP community is
quite large and well funded but they just can't all grok QEMU.  They
work for hardware vendors and do baremetal things.  With the split we
can focus on virtualization and they can focus on moving packets around.



QEMU's defined the vhost-user ABI/API and delegated
impl to something else.

The vhost ABI isn't easy to maintain at all though. So I would not
commit to that lightly without a good reason.

It will be way more painful if the ABI is dictated by a 3rd party
library.

Who should define it?


No one. Put it in same source tree with QEMU and forget ABI stability
issues.


You mean put the code implementing TPM 1.2 and/or TPM 2 into the QEMU 
tree? These are multiple thousands of lines of code each and we'll break 
them apart into logical chunks and review them?





Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 07:16:01PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> > On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > > I had already proposed a linked-in version before I went to the 
> > > > > > out-of-process
> > > > > > design. Anthony's concerns back then were related to the code not 
> > > > > > being trusted
> > > > > > and a segfault in the code could bring down all of QEMU. That we 
> > > > > > have test
> > > > > > suites running over it didn't work as an argument. Some of the test 
> > > > > > suites are
> > > > > > private, though.
> > > > > Given how bad the alternative is maybe we should go back to that one.
> > > > > Same argument can be made for any device and we aren't making
> > > > > them out of process right now.
> > > > > 
> > > > > IMO it's less the in-process question (modularization
> > > > > of QEMU has been on the agenda since years and I don't
> > > > > think anyone is against it) it's more a code control/community 
> > > > > question.
> > > > I rather disagree. Modularization of QEMU has seen few results
> > > > because it is generally a hard problem to solve when you have a
> > > > complex pre-existing codebase.  I don't think code control has
> > > > been a factor in this - as long as QEMU can clearly define its
> > > > ABI/API between core & the modular pieces, it doesn't matter
> > > > who owns the module. We've seen this with vhost-user which is
> > > > essentially outsourcing network device backend impls to a 3rd
> > > > party project.
> > > And it was done precisely for community reasons.  dpdk/VPP community is
> > > quite large and well funded but they just can't all grok QEMU.  They
> > > work for hardware vendors and do baremetal things.  With the split we
> > > can focus on virtualization and they can focus on moving packets around.
> > > 
> > > 
> > > > QEMU's defined the vhost-user ABI/API and delegated
> > > > impl to something else.
> > > The vhost ABI isn't easy to maintain at all though. So I would not
> > > commit to that lightly without a good reason.
> > > 
> > > It will be way more painful if the ABI is dictated by a 3rd party
> > > library.
> > 
> > Who should define it?
> 
> No one. Put it in same source tree with QEMU and forget ABI stability
> issues.

That doesn't work very well in practice as you have to make sure the
vTPM process that is running provides exactly the same ABI as the QEMU
process that's connecting to it. You could have a single vTPM process
on the host serving many QEMU processes, each of which could be a
different QEMU version, due to upgraded RPMs/Debs.
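That version-skew problem is typically absorbed by negotiating the protocol version when a client connects. A rough sketch of the idea, with invented version numbers and function names (neither QEMU nor swtpm defines anything like this here):

```python
# Toy version negotiation between a long-running vTPM process and
# QEMU clients built against different protocol revisions.
SERVER_SUPPORTED = {1, 2}  # protocol versions the vTPM process implements

def negotiate(client_versions):
    """Pick the highest protocol version both sides understand, so an
    upgraded QEMU can still talk to an older vTPM and vice versa."""
    common = SERVER_SUPPORTED & set(client_versions)
    if not common:
        raise RuntimeError("no common protocol version")
    return max(common)

assert negotiate([1, 2, 3]) == 2  # newer QEMU falls back to v2
assert negotiate([1]) == 1        # older QEMU still works
```

Without some handshake of this kind, "same source tree" only guarantees compatibility when the vTPM and QEMU binaries are deployed in lockstep.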

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 12:12:34PM -0500, Stefan Berger wrote:
> On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> > > On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > > > I had already proposed a linked-in version before I went to the 
> > > > > out-of-process
> > > > > design. Anthony's concerns back then were related to the code not 
> > > > > being trusted
> > > > > and a segfault in the code could bring down all of QEMU. That we have 
> > > > > test
> > > > > suites running over it didn't work as an argument. Some of the test 
> > > > > suites are
> > > > > private, though.
> > > > Given how bad the alternative is maybe we should go back to that one.
> > > > Same argument can be made for any device and we aren't making
> > > > them out of process right now.
> > > > 
> > > > IMO it's less the in-process question (modularization
> > > > of QEMU has been on the agenda for years and I don't
> > > > think anyone is against it) it's more a code control/community question.
> > > I rather disagree. Modularization of QEMU has seen few results
> > > because it is generally a hard problem to solve when you have a
> > > complex pre-existing codebase.  I don't think code control has
> > > been a factor in this - as long as QEMU can clearly define its
> > > ABI/API between core & the modular pieces, it doesn't matter
> > > who owns the module. We've seen this with vhost-user which is
> > > essentially outsourcing network device backend impls to a 3rd
> > > party project.
> > And it was done precisely for community reasons.  dpdk/VPP community is
> > quite large and well funded but they just can't all grok QEMU.  They
> > work for hardware vendors and do baremetal things.  With the split we
> > can focus on virtualization and they can focus on moving packets around.
> > 
> > 
> > > QEMU's defined the vhost-user ABI/API and delegated
> > > impl to something else.
> > The vhost ABI isn't easy to maintain at all though. So I would not
> > commit to that lightly without a good reason.
> > 
> > It will be way more painful if the ABI is dictated by a 3rd party
> > library.
> 
> Who should define it?
> 

No one. Put it in same source tree with QEMU and forget ABI stability
issues.

-- 
MST



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 12:02 PM, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:

On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:

On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:

I had already proposed a linked-in version before I went to the out-of-process
design. Anthony's concerns back then were related to the code not being trusted
and a segfault in the code could bring down all of QEMU. That we have test
suites running over it didn't work as an argument. Some of the test suites are
private, though.

Given how bad the alternative is maybe we should go back to that one.
Same argument can be made for any device and we aren't making
them out of process right now.

IMO it's less the in-process question (modularization
of QEMU has been on the agenda for years and I don't
think anyone is against it) it's more a code control/community question.

I rather disagree. Modularization of QEMU has seen few results
because it is generally a hard problem to solve when you have a
complex pre-existing codebase.  I don't think code control has
been a factor in this - as long as QEMU can clearly define its
ABI/API between core & the modular pieces, it doesn't matter
who owns the module. We've seen this with vhost-user which is
essentially outsourcing network device backend impls to a 3rd
party project.

And it was done precisely for community reasons.  dpdk/VPP community is
quite large and well funded but they just can't all grok QEMU.  They
work for hardware vendors and do baremetal things.  With the split we
can focus on virtualization and they can focus on moving packets around.



QEMU's defined the vhost-user ABI/API and delegated
impl to something else.

The vhost ABI isn't easy to maintain at all though. So I would not
commit to that lightly without a good reason.

It will be way more painful if the ABI is dictated by a 3rd party
library.


Who should define it?





Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 04:31:04PM +, Daniel P. Berrange wrote:
> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > I had already proposed a linked-in version before I went to the 
> > > out-of-process
> > > design. Anthony's concerns back then were related to the code not being 
> > > trusted
> > > and a segfault in the code could bring down all of QEMU. That we have test
> > suites running over it didn't work as an argument. Some of the test suites 
> > > are
> > > private, though.
> > 
> > Given how bad the alternative is maybe we should go back to that one.
> > Same argument can be made for any device and we aren't making
> > them out of process right now.
> > 
> > IMO it's less the in-process question (modularization
> > of QEMU has been on the agenda for years and I don't
> > think anyone is against it) it's more a code control/community question.
> 
> I rather disagree. Modularization of QEMU has seen few results
> because it is generally a hard problem to solve when you have a
> complex pre-existing codebase.  I don't think code control has
> been a factor in this - as long as QEMU can clearly define its
> ABI/API between core & the modular pieces, it doesn't matter
> who owns the module. We've seen this with vhost-user which is
> essentially outsourcing network device backend impls to a 3rd
> party project.

And it was done precisely for community reasons.  dpdk/VPP community is
quite large and well funded but they just can't all grok QEMU.  They
work for hardware vendors and do baremetal things.  With the split we
can focus on virtualization and they can focus on moving packets around.


> QEMU's defined the vhost-user ABI/API and delegated
> impl to something else.

The vhost ABI isn't easy to maintain at all though. So I would not
commit to that lightly without a good reason.

It will be way more painful if the ABI is dictated by a 3rd party
library.

> With the vTPM stuff here, we've not got a pre-existing feature
> we need to deal with, so the biggest blocker wrt modularization does
> not exist. Given that I think having the vTPM impl modularized is
> highly desirable, as long as we can define a sane ABI/API between
> QEMU and the external piece.  So I think anthony's point about not
> putting a vTPM impl in-process is still valid, and since Stefan's
> already done much of the work to achieve a modular design we should
> not go back to an in-process design now.

Fine by me. But with the given project I'm inclined to say
- let's keep it in tree so we don't get to maintain an ABI
- CUSE seems like a wrong interface - not portable, too
  hard to set up ...

> > It doesn't look like userspace swtpm bits have a large community of
> > developers around it, and the only user appears to be QEMU, so depending
> > on that externally does not make sense, we should just have them
> > in-tree. This way we don't need to worry about versioning etc.
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Dr. David Alan Gilbert
* Daniel P. Berrange (berra...@redhat.com) wrote:
> On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> > On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > > I had already proposed a linked-in version before I went to the 
> > > out-of-process
> > > design. Anthony's concerns back then were related to the code not being 
> > > trusted
> > > and a segfault in the code could bring down all of QEMU. That we have test
> > > suites running over it didn't work as an argument. Some of the test suite 
> > > are
> > > private, though.
> > 
> > Given how bad the alternative is maybe we should go back to that one.
> > Same argument can be made for any device and we aren't making
> > them out of process right now.
> > 
> > IMO it's less the in-process question (modularization
> > of QEMU has been on the agenda for years and I don't
> > think anyone is against it) it's more a code control/community question.
> 
> I rather disagree. Modularization of QEMU has seen few results
> because it is generally a hard problem to solve when you have a
> complex pre-existing codebase.  I don't think code control has
> been a factor in this - as long as QEMU can clearly define its
> ABI/API between core & the modular pieces, it doesn't matter
> who owns the module. We've seen this with vhost-user which is
> essentially outsourcing network device backend impls to a 3rd
> party project. QEMU's defined the vhost-user ABI/API and delegated
> impl to something else.
> 
> With the vTPM stuff here, we've not got a pre-existing feature
> we need to deal with, so the biggest blocker wrt modularization does
> not exist. Given that I think having the vTPM impl modularized is
> highly desirable, as long as we can define a sane ABI/API between
> QEMU and the external piece.  So I think anthony's point about not
> putting a vTPM impl in-process is still valid, and since Stefan's
> already done much of the work to achieve a modular design we should
> not go back to an in-process design now.

Yes, I agree.  Also it means there's potential to do things like only
allow the vTPM process to access the underlying key storage using SELinux.

Dave

> > It doesn't look like userspace swtpm bits have a large community of
> > developers around it, and the only user appears to be QEMU, so depending
> > on that externally does not make sense, we should just have them
> > in-tree. This way we don't need to worry about versioning etc.
> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 06:22:45PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> > I had already proposed a linked-in version before I went to the 
> > out-of-process
> > design. Anthony's concerns back then were related to the code not being 
> > trusted
> > and a segfault in the code could bring down all of QEMU. That we have test
> suites running over it didn't work as an argument. Some of the test suites 
> > are
> > private, though.
> 
> Given how bad the alternative is maybe we should go back to that one.
> Same argument can be made for any device and we aren't making
> them out of process right now.
> 
> IMO it's less the in-process question (modularization
> of QEMU has been on the agenda for years and I don't
> think anyone is against it) it's more a code control/community question.

I rather disagree. Modularization of QEMU has seen few results
because it is generally a hard problem to solve when you have a
complex pre-existing codebase.  I don't think code control has
been a factor in this - as long as QEMU can clearly define its
ABI/API between core & the modular pieces, it doesn't matter
who owns the module. We've seen this with vhost-user which is
essentially outsourcing network device backend impls to a 3rd
party project. QEMU's defined the vhost-user ABI/API and delegated
impl to something else.

With the vTPM stuff here, we've not got a pre-existing feature
we need to deal with, so the biggest blocker wrt modularization does
not exist. Given that I think having the vTPM impl modularized is
highly desirable, as long as we can define a sane ABI/API between
QEMU and the external piece.  So I think anthony's point about not
putting a vTPM impl in-process is still valid, and since Stefan's
already done much of the work to achieve a modular design we should
not go back to an in-process design now.

> It doesn't look like userspace swtpm bits have a large community of
> developers around it, and the only user appears to be QEMU, so depending
> on that externally does not make sense, we should just have them
> in-tree. This way we don't need to worry about versioning etc.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Michael S. Tsirkin
On Wed, Mar 01, 2017 at 09:50:38AM -0500, Stefan Berger wrote:
> I had already proposed a linked-in version before I went to the out-of-process
> design. Anthony's concerns back then were related to the code not being 
> trusted
> and a segfault in the code could bring down all of QEMU. That we have test
suites running over it didn't work as an argument. Some of the test suites are
> private, though.

Given how bad the alternative is maybe we should go back to that one.
Same argument can be made for any device and we aren't making
them out of process right now.

IMO it's less the in-process question (modularization
of QEMU has been on the agenda for years and I don't
think anyone is against it) it's more a code control/community question.

It doesn't look like userspace swtpm bits have a large community of
developers around it, and the only user appears to be QEMU, so depending
on that externally does not make sense, we should just have them
in-tree. This way we don't need to worry about versioning etc.

-- 
MST



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 10:40:13AM -0500, Stefan Berger wrote:
> On 03/01/2017 10:18 AM, Daniel P. Berrange wrote:
> > This goes back to the question Dave mentions above: ignoring the control
> > channel aspect temporarily, can the CUSE TPM support the exact same ioctl
> > interface as the existing kernel TPM device? It feels like this should
> > be possible, and if so, then this virtual TPM feature can be considered to
> > have two separate pieces.
> 
> The existing kernel device has no ioctl interface. If it had one, it
> wouldn't be the same, since the control channel implemented on the ioctl
> interface is related to low level commands such as resetting the device when
> the platform resets, etc.

Ok, well what I meant is the CUSE TPM device should implement the same
API/ABI as the existing kernel TPM and vTPM devices.

> > First enabling basic CUSE TPM device support would not require QEMU changes,
> > as we could just use the existing tpm-passthrough driver against the CUSE
> > device, albeit with the limitations around migration, snapshot etc.
> 
> ... and device reset upon VM reset. You want to have at least that since
> otherwise the PCRs will not be in the correct state once the firmware with
> TPM support starts extending them. They need to be reset and the only way
> to do that is through some control channel command.

How does that work with real TPM device passthrough? Is it simply broken
if the VM is reset?

> > Second we could consider the question of supporting a control channel as
> > a separate topic. IIUC, QEMU essentially needs a way to trigger various
> > operations in the underlying TPM implementation, when certain lifecycle
> > operations are performed on the VM. I could see this being done as a
> > simple network protocol over a UNIX socket. So, you could then add a
> > new 'chardev' property to the tpm-passthrough device, which gives the
> > ID of a character device that provides the control channel.
> 
> Why would that other control channel need to be a device rather than an
> ioctl on the device? Or maybe entirely access the emulated TPM through
> UnixIO?

I'm not suggesting another device - I'm saying use a QEMU chardev
> backend API - which lets you use multiple transports, one of which
is UNIX sockets.

> > This way QEMU does not need to have any special code to deal with CUSE
> > directly. QEMU could be used with a real TPM device, a vTPM device or
> > a CUSE TPM device, with the same driver. With both the vTPM and the
> > CUSE TPM device, QEMU would have the ability to use an out-of-band
> > control channel when migration/snapshot/etc take place.
> > 
> > This cleanly isolates QEMU from the particular design & implementation
> > that is currently used by the current swtpm code.
> 
> Someone needs to define the control channel commands. My definition is here:
> 
> https://github.com/stefanberger/qemu-tpm/commit/27d6cd856d5a14061955df7a93ee490697a7a174#diff-5cc0e46d3ec33a3f4262db773c193dfe
> 
> 
>  This won't go away even if we changed the transport for the commands.
> ioctls seem to be one way of achieving this with a character device. The
> socket based control channels of 'swtpm' use the same commands.

Essentially the control channel is an RPC interface to an external
process managing a virtual TPM. By common agreement, ioctl() is one
of the worst parts of the UNIX ABI, and so I'm loath to see
us build an RPC interface between QEMU & an external process on ioctl.
The exception would be if this was for interoperability with an
existing standard, but this isn't the case here - the ioctl proposal
is a green-field design. I can't help thinking it'd be better from
QEMU's POV to just use QAPI for the RPC.
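The alternative Daniel points at can be sketched as follows - a hedged illustration of a QAPI/QMP-style control channel, framing each command as one line of JSON over a UNIX socket instead of a binary ioctl. The `tpm-reset` command name and its fields are invented for illustration; no such command is defined by QEMU or swtpm here:

```python
import json
import socket
import threading

def send_command(sock, command, **args):
    # Frame each control command as one line of JSON, QMP-style,
    # rather than a binary ioctl whose layout both sides must share.
    msg = json.dumps({"execute": command, "arguments": args}) + "\n"
    sock.sendall(msg.encode())
    return json.loads(sock.makefile().readline())

def vtpm_stub(sock):
    # Stand-in for the external vTPM process: read one command and
    # acknowledge it with an empty result.
    f = sock.makefile("rw")
    req = json.loads(f.readline())
    f.write(json.dumps({"return": {}, "id": req["execute"]}) + "\n")
    f.flush()

# A socketpair stands in for the vTPM control socket.
qemu_end, vtpm_end = socket.socketpair()
worker = threading.Thread(target=vtpm_stub, args=(vtpm_end,))
worker.start()
reply = send_command(qemu_end, "tpm-reset", locality=0)
worker.join()
assert reply["id"] == "tpm-reset"
```

Text framing like this is trivially inspectable and versionable, which is much of the argument against a green-field ioctl design.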

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 10:24 AM, Marc-André Lureau wrote:

Hi

On Wed, Mar 1, 2017 at 6:50 PM Stefan Berger 
> wrote:


On 03/01/2017 09:17 AM, Marc-André Lureau wrote:

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger > wrote:

"Daniel P. Berrange" > wrote on 03/01/2017 07:54:14
AM:
>

> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David
Alan Gilbert
wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > 
> > > >
> > > > > > So what was the multi-instance vTPM proxy driver
patch set
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the
multi-instance
> > > > proxy works; my brief reading when I saw your patch
series seemed
> > > > to suggest it could be used instead of CUSE for the
non-container
case.
> > > One of the key things that was/is not appealing about
this CUSE
approach
> > > is that it basically invents a new ioctl() mechanism
for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU
probably doesn't
need
> > > to have any changes at all - its existing driver for
talking to TPM
> >
> > We still need the control channel with the vTPM to reset
it upon VM
reset,
> > for getting and setting the state of the vTPM upon
snapshot/suspend/resume,
> > changing locality, etc.
>
> You ultimately need the same mechanisms if using in-kernel
vTPM with
> containers as containers can support
snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the
same control
channel through sockets. So on that level we would have
support for the
operations but not integrated with anything that would
support container
migration.


Ah that might explain why you added the socket control channel,
but there is no user yet? (or some private product perhaps).
Could you tell if control and data channels need to be
synchronized in any ways?



In the general case, synchronization would have to happen, yes. So
a lock that is held while the TPM processes data would have to
lock out control channel commands that operate on the TPM data.
That may be missing. In case of QEMU being the client, not much
concurrency would be expected there just by the way QEMU interacts
with it.
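The serialization Stefan describes can be modeled with a single mutex guarding the TPM state, so a control-channel operation cannot mutate state while a data-channel command is in flight. A minimal sketch with invented names, not swtpm's actual locking:

```python
import threading

class EmulatedTPM:
    """Toy model: one lock serializes data-channel command processing
    and control-channel operations on the same TPM state."""
    def __init__(self):
        self._state_lock = threading.Lock()
        self.pcrs = [b"\x00" * 20 for _ in range(24)]  # TPM 1.2-style PCR bank

    def process_data_command(self, blob):
        with self._state_lock:  # held for the whole command
            self.pcrs[0] = blob[:20].ljust(20, b"\x00")
            return b"OK"

    def control_reset(self):
        with self._state_lock:  # blocked while a data command runs
            self.pcrs = [b"\x00" * 20 for _ in range(24)]

tpm = EmulatedTPM()
tpm.process_data_command(b"extend-pcr0")
assert tpm.pcrs[0] != b"\x00" * 20
tpm.control_reset()
assert tpm.pcrs[0] == b"\x00" * 20
```

With a single client like QEMU the lock is rarely contended, which matches the observation that little concurrency is expected in practice.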


Could the data channel be muxed in with the control channel? (that is,
only use one control socket)



You could run the data channel as part of the control channel or vice 
versa. I think the problem is that TCG hasn't defined anything in this 
area and two people in different rooms will come up with two different 
designs.






A detail: A corner case is live-migration with the TPM emulation
being busy processing a command, like creation of a key. In that
case QEMU would keep on running and only start streaming device
state to the recipient side after the TPM command processing
finishes and has returned the result. QEMU wouldn't want to get
stuck in a lock between data and control channel, so would have
other means of determining when the backend processing is done.
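Those "other means" could be as simple as a completion count the migration path waits on, separate from any channel lock. A purely illustrative sketch (names and timings invented):

```python
import threading
import time

class TPMBackend:
    """Toy model: the migration path waits for an in-flight TPM command
    to finish instead of blocking on the data/control channel lock."""
    def __init__(self):
        self._cond = threading.Condition()
        self._in_flight = 0

    def deliver(self, blob):
        # QEMU hands a command to the backend and returns immediately.
        with self._cond:
            self._in_flight += 1
        def worker():
            time.sleep(0.05)  # stand-in for a slow command, e.g. key creation
            with self._cond:
                self._in_flight -= 1
                self._cond.notify_all()
        threading.Thread(target=worker).start()

    def wait_idle(self):
        # Called before streaming device state to the migration target.
        with self._cond:
            while self._in_flight:
                self._cond.wait()

backend = TPMBackend()
backend.deliver(b"TPM_CreateWrapKey ...")
backend.wait_idle()  # returns only after the command has completed
```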




Getting back to the original out-of-process design: qemu links
with many libraries already, perhaps a less controversial
approach would be to have a linked in solution before proposing
out-of-process? This would be easier to deal with for


I had already proposed a linked-in version before I went to the
out-of-process design. Anthony's concerns back then were related
to the code not being trusted and a segfault in the code could
bring down all of QEMU. That we have test suites running over it
didn't work as an argument. Some of the test suites are
though.


I think Anthony argument is valid for anything running in qemu :) So I 
don't see why TPM would be an exception now.


Could you say how much is covered by the public test suite?


I don't know anything in terms of percentage of code coverage. But I 
think in terms of coverage of commands of a TPM 1.2 we were probably 
>95%. Now there's also TPM 2 and for that I don't know.




About tests, is there any test for qemu TIS?


For the TIS 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 10:18 AM, Daniel P. Berrange wrote:

On Wed, Mar 01, 2017 at 08:25:43AM -0500, Stefan Berger wrote:

"Daniel P. Berrange" <berra...@redhat.com> wrote on 03/01/2017 07:54:14
AM:


From: "Daniel P. Berrange" <berra...@redhat.com>
To: Stefan Berger <stef...@linux.vnet.ibm.com>
Cc: "Dr. David Alan Gilbert" <dgilb...@redhat.com>, Stefan Berger/
Watson/IBM@IBMUS, "m...@redhat.com" <m...@redhat.com>, "qemu-
de...@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA"
<cs1...@att.com>, "Xu, Quan" <quan...@intel.com>,
"silviu.vlasce...@gmail.com" <silviu.vlasce...@gmail.com>,
"hagen.la...@huawei.com" <hagen.la...@huawei.com>, "SHIH, CHING C"
<cs1...@att.com>
Date: 03/01/2017 08:03 AM
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE

TPM

On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:

On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:

On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert

wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set

about?

That's for containers.

Why have the two mechanisms? Can you explain how the

multi-instance

proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container

case.

One of the key things that was/is not appealing about this CUSE

approach

is that it basically invents a new ioctl() mechanism for talking to
a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't

need

to have any changes at all - its existing driver for talking to TPM

We still need the control channel with the vTPM to reset it upon VM

reset,

for getting and setting the state of the vTPM upon

snapshot/suspend/resume,

changing locality, etc.

You ultimately need the same mechanisms if using in-kernel vTPM with
containers as containers can support snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the same control
channel through sockets. So on that level we would have support for the
operations but not integrated with anything that would support container
migration.

This goes back to the question Dave mentions above: ignoring the control
channel aspect temporarily, can the CUSE TPM support the exact same ioctl
interface as the existing kernel TPM device? It feels like this should
be possible, and if so, then this virtual TPM feature can be considered to
have two separate pieces.


The existing kernel device has no ioctl interface. If it had one, it 
wouldn't be the same, since the control channel implemented on the 
ioctl interface is related to low level commands such as resetting the 
device when the platform resets, etc.




First enabling basic CUSE TPM device support would not require QEMU changes,
as we could just use the existing tpm-passthrough driver against the CUSE
device, albeit with the limitations around migration, snapshot etc.


... and device reset upon VM reset. You want to have at least that since 
otherwise the PCRs will not be in the correct state once the firmware 
with TPM support starts extending them. They need to be reset and the 
only way to do that is through some control channel command.





Second we could consider the question of supporting a control channel as
a separate topic. IIUC, QEMU essentially needs a way to trigger various
operations in the underlying TPM implementation, when certain lifecycle
operations are performed on the VM. I could see this being done as a
simple network protocol over a UNIX socket. So, you could then add a
new 'chardev' property to the tpm-passthrough device, which gives the
ID of a character device that provides the control channel.


Why would that other control channel need to be a device rather than an 
ioctl on the device? Or maybe entirely access the emulated TPM through 
UnixIO?





This way QEMU does not need to have any special code to deal with CUSE
directly. QEMU could be used with a real TPM device, a vTPM device or
a CUSE TPM device, with the same driver. With both the vTPM and the
CUSE TPM device, QEMU would have the ability to use an out-of-band
control channel when migration/snapshot/etc take place.

This cleanly isolates QEMU from the particular design & implementation
that is currently used by the current swtpm code.


Someone needs to define the control channel commands. My definition is here:

https://github.com/stefanberger/qemu-tpm/commit/27d6cd856d5a14061955df7a93ee490697a7a174#diff-5cc0e46d3ec33a3f4262db773c193dfe


 This won't go away even if we changed the transport for the commands. 
ioctls seem to be one way of achieving this with a character device. 
The socket based control channels of 'swtpm' use the same commands.
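Stefan's point that the command set is independent of the transport can be sketched by encoding a command structure once and carrying the same bytes over either an ioctl argument buffer or a socket. The command numbers and wire layout below are made up for illustration and are not swtpm's actual definitions:

```python
import socket
import struct

# Hypothetical control command codes -- illustrative only.
CMD_INIT, CMD_RESET, CMD_GET_TPMESTABLISHED = 1, 2, 3

def encode_command(cmd, payload=b""):
    # 4-byte big-endian command code, 4-byte payload length, then payload.
    # The same bytes could sit in an ioctl buffer or travel over a socket.
    return struct.pack(">II", cmd, len(payload)) + payload

def decode_command(data):
    cmd, length = struct.unpack_from(">II", data)
    return cmd, data[8:8 + length]

# Demo: send one command over a socketpair standing in for the control socket.
a, b = socket.socketpair()
a.sendall(encode_command(CMD_RESET))
cmd, payload = decode_command(b.recv(64))
assert cmd == CMD_RESET and payload == b""
```

Defining the commands once and treating ioctl vs. socket as interchangeable carriers is exactly why the transport debate can be separated from the command-set debate.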


   Stefan



Regards,
Daniel






Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Marc-André Lureau
Hi

On Wed, Mar 1, 2017 at 6:50 PM Stefan Berger 
wrote:

On 03/01/2017 09:17 AM, Marc-André Lureau wrote:

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger  wrote:

"Daniel P. Berrange"  wrote on 03/01/2017 07:54:14
AM:
>

> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > 
> > > >
> > > > > > So what was the multi-instance vTPM proxy driver patch set
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the
multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container
case.
> > > One of the key things that was/is not appealing about this CUSE
approach
> > > is that it basically invents a new ioctl() mechanism for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
need
> > > to have any changes at all - its existing driver for talking to TPM
> >
> > We still need the control channel with the vTPM to reset it upon VM
reset,
> > for getting and setting the state of the vTPM upon
snapshot/suspend/resume,
> > changing locality, etc.
>
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the same control
channel through sockets. So on that level we would have support for the
operations but not integrated with anything that would support container
migration.


Ah that might explain why you added the socket control channel, but there
is no user yet? (or some private product perhaps). Could you tell if
control and data channels need to be synchronized in any ways?



In the general case, synchronization would have to happen, yes. So a lock
that is held while the TPM processes data would have to lock out control
channel commands that operate on the TPM data. That may be missing. In case
of QEMU being the client, not much concurrency would be expected there just
by the way QEMU interacts with it.


Could the data channel be muxed in with the control channel? (that is, only
use one control socket)

A detail: A corner case is live-migration with the TPM emulation being busy
processing a command, like creation of a key. In that case QEMU would keep
on running and only start streaming device state to the recipient side
after the TPM command processing finishes and has returned the result. QEMU
wouldn't want to get stuck in a lock between data and control channel, so
would have other means of determining when the backend processing is done.



Getting back to the original out-of-process design: qemu links with many
libraries already, perhaps a less controversial approach would be to have a
linked in solution before proposing out-of-process? This would be easier to
deal with for


I had already proposed a linked-in version before I went to the
out-of-process design. Anthony's concerns back then were related to the
code not being trusted and a segfault in the code could bring down all of
QEMU. That we have test suites running over it didn't work as an argument.
Some of the test suites are private, though.


I think Anthony's argument is valid for anything running in qemu :) So I
don't see why TPM would be an exception now.

Could you say how much is covered by the public test suite?

About tests, is there any test for qemu TIS?


management layers etc. This wouldn't be the most robust solution, but could
get us somewhere at least for easier testing and development.


Hm. In terms of external process it's basically 'there', so I don't relate
to the 'easier testing and development' point. The various versions with QEMU +
CUSE TPM driver patches applied are here.

https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm




Some people may want to use simulated TPM with qemu without the need for
security, just to do development/testing.

Dealing with external processes also makes qemu development and testing
more difficult.

Changing the IPC interface is more complicated than having a linked-in
solution. Testing is easier if you can just start/kill one qemu process.

I can't say if it's really needed to ease progress, but at least it would
avoid that CUSE/IPC discussion for now.

I have an older version of libvirt that has the necessary patches applied
to start QEMU with the external TPM. There's also virt-manager support.


Ok, I think it would be worth listing all the up-to-date trees in
http://www.qemu-project.org/Features/TPM (btw, that page 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 08:25:43AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berra...@redhat.com> wrote on 03/01/2017 07:54:14 
> AM:
> 
> > From: "Daniel P. Berrange" <berra...@redhat.com>
> > To: Stefan Berger <stef...@linux.vnet.ibm.com>
> > Cc: "Dr. David Alan Gilbert" <dgilb...@redhat.com>, Stefan Berger/
> > Watson/IBM@IBMUS, "m...@redhat.com" <m...@redhat.com>, "qemu-
> > de...@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA" 
> > <cs1...@att.com>, "Xu, Quan" <quan...@intel.com>, 
> > "silviu.vlasce...@gmail.com" <silviu.vlasce...@gmail.com>, 
> > "hagen.la...@huawei.com" <hagen.la...@huawei.com>, "SHIH, CHING C" 
> > <cs1...@att.com>
> > Date: 03/01/2017 08:03 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
> TPM
> > 
> > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert 
> wrote:
> > > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > 
> > > > > 
> > > > > > > So what was the multi-instance vTPM proxy driver patch set 
> about?
> > > > > > That's for containers.
> > > > > Why have the two mechanisms? Can you explain how the 
> multi-instance
> > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > to suggest it could be used instead of CUSE for the non-container 
> case.
> > > > One of the key things that was/is not appealing about this CUSE 
> approach
> > > > is that it basically invents a new ioctl() mechanism for talking to
> > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't 
> need
> > > > to have any changes at all - its existing driver for talking to TPM
> > > 
> > > We still need the control channel with the vTPM to reset it upon VM 
> reset,
> > > for getting and setting the state of the vTPM upon 
> snapshot/suspend/resume,
> > > changing locality, etc.
> > 
> > You ultimately need the same mechanisms if using in-kernel vTPM with
> > containers as containers can support snapshot/suspend/resume/etc too.
> 
> The vTPM running on the backend side of the vTPM proxy driver is 
> essentially the same as the CUSE TPM used for QEMU. It has the same control 
> channel through sockets. So on that level we would have support for the 
> operations but not integrated with anything that would support container 
> migration.

This goes back to the question Dave mentions above? Ignoring the control
channel aspect temporarily, can the CUSE TPM support the exact same ioctl
interface as the existing kernel TPM device? It feels like this should
be possible, and if so, then this virtual TPM feature can be considered to
have two separate pieces.

First enabling basic CUSE TPM device support would not require QEMU changes,
as we could just use the existing tpm-passthrough driver against the CUSE
device, albeit with the limitations around migration, snapshot etc.

Second we could consider the question of supporting a control channel as
a separate topic. IIUC, QEMU essentially needs a way to trigger various
operations in the underlying TPM implementation, when certain lifecycle
operations are performed on the VM. I could see this being done as a
simple network protocol over a UNIX socket. So, you could then add a
new 'chardev' property to the tpm-passthrough device, which gives the
ID of a character device that provides the control channel.
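A minimal sketch of such a control protocol over a UNIX-socket chardev might frame each operation as a big-endian u32 command code plus a command-specific payload. The command codes below are placeholders for illustration, not the real values from swtpm's tpm_ioctl.h:

```python
import struct

# Placeholder command codes; the real control-channel commands are
# defined in swtpm's include/swtpm/tpm_ioctl.h and may differ.
CMD_INIT = 0x02
CMD_SET_LOCALITY = 0x05

def frame(cmd: int, payload: bytes = b"") -> bytes:
    # One control message: big-endian u32 command code + payload,
    # written to the control-channel UNIX socket (QEMU chardev).
    return struct.pack(">I", cmd) + payload

def parse(msg: bytes):
    # Inverse of frame(): split the command code from its payload.
    (cmd,) = struct.unpack(">I", msg[:4])
    return cmd, msg[4:]

wire = frame(CMD_SET_LOCALITY, struct.pack(">B", 3))  # "set locality 3"
print(parse(wire))
```

With framing this simple, QEMU's chardev layer carries the bytes and the backend driver only needs to know the command codes and payload layouts.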

This way QEMU does not need to have any special code to deal with CUSE
directly. QEMU could be used with a real TPM device, a vTPM device or
a CUSE TPM device, with the same driver. With both the vTPM and the
CUSE TPM device, QEMU would have the ability to use an out-of-band
control channel when migration/snapshot/etc take place.

This cleanly isolates QEMU from the particular design & implementation
that is currently used by the current swtpm code.

Regards,
Daniel
-- 
|: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o- http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 03/01/2017 09:17 AM, Marc-André Lureau wrote:

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger wrote:


"Daniel P. Berrange" wrote on 03/01/2017 07:54:14
AM:
>
> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > 
> > > >
> > > > > > So what was the multi-instance vTPM proxy driver patch set
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the
multi-instance
> > > > proxy works; my brief reading when I saw your patch series
seemed
> > > > to suggest it could be used instead of CUSE for the
non-container
case.
> > > One of the key things that was/is not appealing about this CUSE
approach
> > > is that it basically invents a new ioctl() mechanism for
talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably
doesn't
need
> > > to have any changes at all - its existing driver for talking
to TPM
> >
> > We still need the control channel with the vTPM to reset it
upon VM
reset,
> > for getting and setting the state of the vTPM upon
snapshot/suspend/resume,
> > changing locality, etc.
>
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc
too.

The vTPM running on the backend side of the vTPM proxy driver is
essentially the same as the CUSE TPM used for QEMU. It has the same 
control
channel through sockets. So on that level we would have support
for the
operations but not integrated with anything that would support
container
migration.


Ah that might explain why you added the socket control channel, but 
there is no user yet? (or some private product perhaps). Could you 
tell if control and data channels need to be synchronized in any ways?



In the general case, synchronization would have to happen, yes. So a 
lock that is held while the TPM processes data would have to lock out 
control channel commands that operate on the TPM data. That may be 
missing. In case of QEMU being the client, not much concurrency would be 
expected there just by the way QEMU interacts with it.


A detail: A corner case is live-migration with the TPM emulation being 
busy processing a command, like creation of a key. In that case QEMU 
would keep on running and only start streaming device state to the 
recipient side after the TPM command processing finishes and has 
returned the result. QEMU wouldn't want to get stuck in a lock between 
data and control channel, so would have other means of determining when 
the backend processing is done.




Getting back to the original out-of-process design: qemu links with 
many libraries already, perhaps a less controversial approach would be 
to have a linked in solution before proposing out-of-process? This 
would be easier to deal with for


I had already proposed a linked-in version before I went to the 
out-of-process design. Anthony's concerns back then were related to the 
code not being trusted and a segfault in the code could bring down all 
of QEMU. That we have test suites running over it didn't work as an 
argument. Some of the test suites are private, though.


management layers etc. This wouldn't be the most robust solution, but 
could get us somewhere at least for easier testing and development.


Hm. In terms of external process it's basically 'there', so I don't relate 
to the 'easier testing and development' point. The various versions 
with QEMU + CUSE TPM driver patches applied are here.


https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm

I have an older version of libvirt that has the necessary patches 
applied to start QEMU with the external TPM. There's also virt-manager 
support.


If CUSE is the wrong interface, then there's a discussion about this 
here. Alternatively UnixIO for data and control channel could be used.


https://github.com/stefanberger/swtpm/issues/4

   Stefan




thanks


--
Marc-André Lureau





Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Marc-André Lureau
Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <stef...@us.ibm.com> wrote:

> "Daniel P. Berrange" <berra...@redhat.com> wrote on 03/01/2017 07:54:14
> AM:
>
> > From: "Daniel P. Berrange" <berra...@redhat.com>
> > To: Stefan Berger <stef...@linux.vnet.ibm.com>
> > Cc: "Dr. David Alan Gilbert" <dgilb...@redhat.com>, Stefan Berger/
> > Watson/IBM@IBMUS, "m...@redhat.com" <m...@redhat.com>, "qemu-
> > de...@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA"
> > <cs1...@att.com>, "Xu, Quan" <quan...@intel.com>,
> > "silviu.vlasce...@gmail.com" <silviu.vlasce...@gmail.com>,
> > "hagen.la...@huawei.com" <hagen.la...@huawei.com>, "SHIH, CHING C"
> > <cs1...@att.com>
> > Date: 03/01/2017 08:03 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE
> TPM
> >
> > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
> wrote:
> > > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > 
> > > > >
> > > > > > > So what was the multi-instance vTPM proxy driver patch set
> about?
> > > > > > That's for containers.
> > > > > Why have the two mechanisms? Can you explain how the
> multi-instance
> > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > to suggest it could be used instead of CUSE for the non-container
> case.
> > > > One of the key things that was/is not appealing about this CUSE
> approach
> > > > is that it basically invents a new ioctl() mechanism for talking to
> > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
> need
> > > > to have any changes at all - its existing driver for talking to TPM
> > >
> > > We still need the control channel with the vTPM to reset it upon VM
> reset,
> > > for getting and setting the state of the vTPM upon
> snapshot/suspend/resume,
> > > changing locality, etc.
> >
> > You ultimately need the same mechanisms if using in-kernel vTPM with
> > containers as containers can support snapshot/suspend/resume/etc too.
>
> The vTPM running on the backend side of the vTPM proxy driver is
> essentially the same as the CUSE TPM used for QEMU. It has the same control
> channel through sockets. So on that level we would have support for the
> operations but not integrated with anything that would support container
> migration.
>
>
Ah that might explain why you added the socket control channel, but there
is no user yet? (or some private product perhaps). Could you tell if
control and data channels need to be synchronized in any ways?

Getting back to the original out-of-process design: qemu links with many
libraries already, perhaps a less controversial approach would be to have a
linked in solution before proposing out-of-process? This would be easier to
deal with for management layers etc. This wouldn't be the most robust
solution, but could get us somewhere at least for easier testing and
development.

thanks


-- 
Marc-André Lureau


Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger
"Daniel P. Berrange" <berra...@redhat.com> wrote on 03/01/2017 07:54:14 
AM:

> From: "Daniel P. Berrange" <berra...@redhat.com>
> To: Stefan Berger <stef...@linux.vnet.ibm.com>
> Cc: "Dr. David Alan Gilbert" <dgilb...@redhat.com>, Stefan Berger/
> Watson/IBM@IBMUS, "m...@redhat.com" <m...@redhat.com>, "qemu-
> de...@nongnu.org" <qemu-devel@nongnu.org>, "SERBAN, CRISTINA" 
> <cs1...@att.com>, "Xu, Quan" <quan...@intel.com>, 
> "silviu.vlasce...@gmail.com" <silviu.vlasce...@gmail.com>, 
> "hagen.la...@huawei.com" <hagen.la...@huawei.com>, "SHIH, CHING C" 
> <cs1...@att.com>
> Date: 03/01/2017 08:03 AM
> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
TPM
> 
> On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert 
wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > 
> > > > 
> > > > > > So what was the multi-instance vTPM proxy driver patch set 
about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the 
multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container 
case.
> > > One of the key things that was/is not appealing about this CUSE 
approach
> > > is that it basically invents a new ioctl() mechanism for talking to
> > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't 
need
> > > to have any changes at all - its existing driver for talking to TPM
> > 
> > We still need the control channel with the vTPM to reset it upon VM 
reset,
> > for getting and setting the state of the vTPM upon 
snapshot/suspend/resume,
> > changing locality, etc.
> 
> You ultimately need the same mechanisms if using in-kernel vTPM with
> containers as containers can support snapshot/suspend/resume/etc too.

The vTPM running on the backend side of the vTPM proxy driver is 
essentially the same as the CUSE TPM used for QEMU. It has the same control 
channel through sockets. So on that level we would have support for the 
operations but not integrated with anything that would support container 
migration.

   Stefan


> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://entangle-photo.org   -o- http://search.cpan.org/~danberr/ :|
> 




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Daniel P. Berrange
On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
> > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > 
> > > 
> > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > That's for containers.
> > > Why have the two mechanisms? Can you explain how the multi-instance
> > > proxy works; my brief reading when I saw your patch series seemed
> > > to suggest it could be used instead of CUSE for the non-container case.
> > One of the key things that was/is not appealing about this CUSE approach
> > is that it basically invents a new ioctl() mechanism for talking to
> > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
> > to have any changes at all - its existing driver for talking to TPM
> 
> We still need the control channel with the vTPM to reset it upon VM reset,
> for getting and setting the state of the vTPM upon snapshot/suspend/resume,
> changing locality, etc.

You ultimately need the same mechanisms if using in-kernel vTPM with
containers as containers can support snapshot/suspend/resume/etc too.

Regards,
Daniel
-- 
|: http://berrange.com  -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o- http://search.cpan.org/~danberr/ :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 02/28/2017 01:31 PM, Marc-André Lureau wrote:

Hi

On Fri, Jun 17, 2016 at 1:29 AM Stefan Berger wrote:


On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
>> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
>>> * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
 On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
>>> * Stefan Berger (stef...@linux.vnet.ibm.com
) wrote:
 On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
>>> 
>>>
> So what was the multi-instance vTPM proxy driver patch
set about?
 That's for containers.
>>> Why have the two mechanisms? Can you explain how the
multi-instance
>>> proxy works; my brief reading when I saw your patch series
seemed
>>> to suggest it could be used instead of CUSE for the
non-container case.
>> The multi-instance vtpm proxy driver basically works
through usage of an
>> ioctl() on /dev/vtpmx that is used to spawn a new front-
and backend pair.
>> The front-end is a new /dev/tpm%d device that then can be
moved into the
>> container (mknod + device cgroup setup). The backend is an
anonymous file
>> descriptor that is to be passed to a TPM emulator for
reading TPM requests
>> coming in from that /dev/tpm%d and returning responses to.
Since it is
>> implemented as a kernel driver, we can hook it into the
Linux Integrity
>> Measurement Architecture (IMA) and have it be used by IMA
in place of a
>> hardware TPM driver. There's ongoing work in the area of
namespacing support
>> for IMA to have an independent IMA instance per container
so that this can
>> be used.
>>
>> A TPM does not only have a data channel (/dev/tpm%d) but
also a control
>> channel, which is primarily implemented in its hardware
interface and is
>> typically not fully accessible to user space. The vtpm
proxy driver _only_
>> supports the data channel through which it basically relays
TPM commands and
>> responses from user space to the TPM emulator. The control
channel is
>> provided by the software emulator through an additional TCP
or UnixIO socket
>> or in case of CUSE through ioctls. The control channel
allows to reset the
>> TPM when the container/VM is being reset or set the
locality of a command or
>> retrieve the state of the vTPM (for suspend) and set the
state of the vTPM
>> (for resume) among several other things. The commands for
the control
>> channel are defined here:
>>
>>
https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>>
>> For a container we would require that its management stack
initializes and
>> resets the vTPM when the container is rebooted. (These are
typically
>> operations that are done through pulses on the motherboard.)
>>
>> In case of QEMU we would need to have more access to the
control channel,
>> which includes initialization and reset of the vTPM,
getting and setting its
>> state for suspend/resume/migration, setting the locality of
commands, etc.,
>> so that all low-level functionality is accessible to the
emulator (QEMU).
>> The proxy driver does not help with this but we should use
the swtpm
>> implementation that either has that CUSE interface with
control channel
>> (through ioctls) or provides UnixIO and TCP sockets for the
control channel.
> OK, that makes sense; does the control interface need to be
handled by QEMU
> or by libvirt or both?
 The control interface needs to be handled primarily by QEMU.

 In case of the libvirt implementation I am running an
external program
 swtpm_ioctl that uses the control channel to gracefully shut
down any
 existing running TPM emulator whose device name happens to
have the same
 name as the device of the TPM emulator that is to be created.
So it cleans
 up before starting a new TPM emulator just to make sure that
that new TPM
 instance can be started. Detail...

> Either way, I think you're saying that with your kernel
interface + a UnixIO
> socket you can avoid the CUSE stuff?
 So in case of QEMU you don't need 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-03-01 Thread Stefan Berger

On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:

On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

One of the key things that was/is not appealing about this CUSE approach
is that it basically invents a new ioctl() mechanism for talking to
a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
to have any changes at all - its existing driver for talking to TPM


We still need the control channel with the vTPM to reset it upon VM 
reset, for getting and setting the state of the vTPM upon 
snapshot/suspend/resume, changing locality, etc.


   Stefan




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2017-02-28 Thread Marc-André Lureau
Hi

On Fri, Jun 17, 2016 at 1:29 AM Stefan Berger wrote:

> On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> >> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> >>> * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
>  On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> >> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> >>> * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
>  On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> >>> 
> >>>
> > So what was the multi-instance vTPM proxy driver patch set about?
>  That's for containers.
> >>> Why have the two mechanisms? Can you explain how the multi-instance
> >>> proxy works; my brief reading when I saw your patch series seemed
> >>> to suggest it could be used instead of CUSE for the non-container
> case.
> >> The multi-instance vtpm proxy driver basically works through usage
> of an
> >> ioctl() on /dev/vtpmx that is used to spawn a new front- and
> backend pair.
> >> The front-end is a new /dev/tpm%d device that then can be moved
> into the
> >> container (mknod + device cgroup setup). The backend is an
> anonymous file
> >> descriptor that is to be passed to a TPM emulator for reading TPM
> requests
> >> coming in from that /dev/tpm%d and returning responses to. Since it
> is
> >> implemented as a kernel driver, we can hook it into the Linux
> Integrity
> >> Measurement Architecture (IMA) and have it be used by IMA in place
> of a
> >> hardware TPM driver. There's ongoing work in the area of
> namespacing support
> >> for IMA to have an independent IMA instance per container so that
> this can
> >> be used.
> >>
> >> A TPM does not only have a data channel (/dev/tpm%d) but also a
> control
> >> channel, which is primarily implemented in its hardware interface
> and is
> >> typically not fully accessible to user space. The vtpm proxy driver
> _only_
> >> supports the data channel through which it basically relays TPM
> commands and
> >> responses from user space to the TPM emulator. The control channel
> is
> >> provided by the software emulator through an additional TCP or
> UnixIO socket
> >> or in case of CUSE through ioctls. The control channel allows to
> reset the
> >> TPM when the container/VM is being reset or set the locality of a
> command or
> >> retrieve the state of the vTPM (for suspend) and set the state of
> the vTPM
> >> (for resume) among several other things. The commands for the
> control
> >> channel are defined here:
> >>
> >>
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> >>
> >> For a container we would require that its management stack
> initializes and
> >> resets the vTPM when the container is rebooted. (These are typically
> >> operations that are done through pulses on the motherboard.)
> >>
> >> In case of QEMU we would need to have more access to the control
> channel,
> >> which includes initialization and reset of the vTPM, getting and
> setting its
> >> state for suspend/resume/migration, setting the locality of
> commands, etc.,
> >> so that all low-level functionality is accessible to the emulator
> (QEMU).
> >> The proxy driver does not help with this but we should use the swtpm
> >> implementation that either has that CUSE interface with control
> channel
> >> (through ioctls) or provides UnixIO and TCP sockets for the control
> channel.
> > OK, that makes sense; does the control interface need to be handled
> by QEMU
> > or by libvirt or both?
>  The control interface needs to be handled primarily by QEMU.
> 
>  In case of the libvirt implementation I am running an external program
>  swtpm_ioctl that uses the control channel to gracefully shut down any
>  existing running TPM emulator whose device name happens to have the
> same
>  name as the device of the TPM emulator that is to be created. So it
> cleans
>  up before starting a new TPM emulator just to make sure that that new
> TPM
>  instance can be started. Detail...
> 
> > Either way, I think you're saying that with your kernel interface +
> a UnixIO
> > socket you can avoid the CUSE stuff?
>  So in case of QEMU you don't need that new kernel device driver --
> it's
>  primarily meant for containers. For QEMU one would start the TPM
> emulator
>  and make sure that QEMU has access to the data and control channels,
> which
>  are now offered as
> 
>  - CUSE interface with ioctl
>  - TCP + TCP
>  - UnixIO + TCP
>  - TCP + UnixIO
>  - UnixIO + UnixIO
>  - file descriptors passed from invoker
> >>> OK, I'm trying to remember back; I'll admit to not having
> >>> 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Stefan Berger

On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

The multi-instance vtpm proxy driver basically works through usage of an
ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
The front-end is a new /dev/tpm%d device that then can be moved into the
container (mknod + device cgroup setup). The backend is an anonymous file
descriptor that is to be passed to a TPM emulator for reading TPM requests
coming in from that /dev/tpm%d and returning responses to. Since it is
implemented as a kernel driver, we can hook it into the Linux Integrity
Measurement Architecture (IMA) and have it be used by IMA in place of a
hardware TPM driver. There's ongoing work in the area of namespacing support
for IMA to have an independent IMA instance per container so that this can
be used.
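The spawning ioctl described above can be sketched as follows. This is an illustrative Python sketch: the struct layout and ioctl encoding reflect my reading of linux/vtpm_proxy.h (and assume the common x86-style ioctl bit layout), so verify the constants against your actual kernel headers before use:

```python
import struct

# Assumed layout of struct vtpm_proxy_new_dev from linux/vtpm_proxy.h:
# five __u32 fields (flags, tpm_num, fd, major, minor).
VTPM_PROXY_NEW_DEV = struct.Struct("=IIIII")

def ioc_iowr(ioc_type: int, nr: int, size: int) -> int:
    # Linux ioctl number encoding (x86 layout): dir | size | type | nr.
    IOC_WRITE, IOC_READ = 1, 2
    return ((IOC_READ | IOC_WRITE) << 30) | (size << 16) | (ioc_type << 8) | nr

# The /dev/vtpmx ioctl that spawns a front-end /dev/tpm%d device and
# returns a backend file descriptor for the TPM emulator.
VTPM_PROXY_IOC_NEW_DEV = ioc_iowr(0xA1, 0x00, VTPM_PROXY_NEW_DEV.size)
print(hex(VTPM_PROXY_IOC_NEW_DEV))

# Actual use (requires the tpm_vtpm_proxy module and privileges)
# would look roughly like:
#   import fcntl, os
#   vtpmx = os.open("/dev/vtpmx", os.O_RDWR)
#   buf = bytearray(VTPM_PROXY_NEW_DEV.pack(0, 0, 0, 0, 0))
#   fcntl.ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, buf)
#   flags, tpm_num, fd, major, minor = VTPM_PROXY_NEW_DEV.unpack(buf)
```

After the call, the returned `tpm_num` identifies the front-end /dev/tpm%d to move into the container, and `fd` is the anonymous backend descriptor handed to the TPM emulator.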

A TPM does not only have a data channel (/dev/tpm%d) but also a control
channel, which is primarily implemented in its hardware interface and is
typically not fully accessible to user space. The vtpm proxy driver _only_
supports the data channel through which it basically relays TPM commands and
responses from user space to the TPM emulator. The control channel is
provided by the software emulator through an additional TCP or UnixIO socket
or in case of CUSE through ioctls. The control channel allows resetting the
TPM when the container/VM is being reset, setting the locality of a command, and
retrieving the state of the vTPM (for suspend) and setting the state of the vTPM
(for resume), among several other things. The commands for the control
channel are defined here:

https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

For a container we would require that its management stack initializes and
resets the vTPM when the container is rebooted. (These are typically
operations that are done through pulses on the motherboard.)

In case of QEMU we would need to have more access to the control channel,
which includes initialization and reset of the vTPM, getting and setting its
state for suspend/resume/migration, setting the locality of commands, etc.,
so that all low-level functionality is accessible to the emulator (QEMU).
The proxy driver does not help with this but we should use the swtpm
implementation that either has that CUSE interface with control channel
(through ioctls) or provides UnixIO and TCP sockets for the control channel.

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?

The control interface needs to be handled primarily by QEMU.

In case of the libvirt implementation I am running an external program
swtpm_ioctl that uses the control channel to gracefully shut down any
existing running TPM emulator whose device name happens to have the same
name as the device of the TPM emulator that is to be created. So it cleans
up before starting a new TPM emulator just to make sure that that new TPM
instance can be started. Detail...


Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?

So in case of QEMU you don't need that new kernel device driver -- it's
primarily meant for containers. For QEMU one would start the TPM emulator
and make sure that QEMU has access to the data and control channels, which
are now offered as

- CUSE interface with ioctl
- TCP + TCP
- UnixIO + TCP
- TCP + UnixIO
- UnixIO + UnixIO
- file descriptors passed from invoker
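
As a concrete illustration of the UnixIO + UnixIO combination, the two
processes can be wired up roughly as follows. The swtpm and QEMU option
names below reflect interfaces that were merged later and are
assumptions for illustration, not part of this patch series:

```python
def swtpm_argv(state_dir, data_sock, ctrl_sock):
    """Start swtpm with a UnixIO data channel ('--server') and a UnixIO
    control channel ('--ctrl'); option names assumed from current swtpm."""
    return ["swtpm", "socket",
            "--tpmstate", f"dir={state_dir}",
            "--server", f"type=unixio,path={data_sock}",
            "--ctrl", f"type=unixio,path={ctrl_sock}"]

def qemu_tpm_argv(ctrl_sock):
    """QEMU's 'emulator' TPM backend reaches the emulator through a
    chardev pointing at the control socket; flag names assumed from the
    backend that eventually landed in QEMU."""
    return ["-chardev", f"socket,id=chrtpm,path={ctrl_sock}",
            "-tpmdev", "emulator,id=tpm0,chardev=chrtpm",
            "-device", "tpm-tis,tpmdev=tpm0"]
```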

OK, I'm trying to remember back; I'll admit to not having
liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
side require a lot of code to add a qemu interface that wasn't
ioctl?

Adding these additional interfaces to the TPM was a bigger effort, yes.

Right, so that code isn't in upstream qemu, is it?


I was talking about the TPM emulator side that has been extended like 
this, not QEMU.





Doesn't using the kernel driver give you the benefit of both worlds,
i.e. the non-control side in QEMU is unchanged.

Yes. I am not sure what you are asking, though. A control channel is
necessary no matter what. The kernel driver talks to /dev/vtpm- via
a file descriptor and uses commands sent through ioctl for the control
channel. Whether QEMU now uses an fd that is a UnixIO or TCP socket to send
the commands to the TPM or an fd that uses CUSE doesn't matter much on the
side of QEMU.

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Dr. David Alan Gilbert
* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > > 
> > > > > > 
> > > > > > > > So what was the multi-instance vTPM proxy driver patch set 
> > > > > > > > about?
> > > > > > > That's for containers.
> > > > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > > to suggest it could be used instead of CUSE for the non-container 
> > > > > > case.
> > > > > The multi-instance vtpm proxy driver basically works through usage of 
> > > > > an
> > > > > ioctl() on /dev/vtpmx that is used to spawn a new front- and backend 
> > > > > pair.
> > > > > The front-end is a new /dev/tpm%d device that then can be moved into 
> > > > > the
> > > > > container (mknod + device cgroup setup). The backend is an anonymous 
> > > > > file
> > > > > descriptor that is to be passed to a TPM emulator for reading TPM 
> > > > > requests
> > > > > coming in from that /dev/tpm%d and returning responses to. Since it is
> > > > > implemented as a kernel driver, we can hook it into the Linux 
> > > > > Integrity
> > > > > Measurement Architecture (IMA) and have it be used by IMA in place of 
> > > > > a
> > > > > hardware TPM driver. There's ongoing work in the area of namespacing 
> > > > > support
> > > > > for IMA to have an independent IMA instance per container so that 
> > > > > this can
> > > > > be used.
> > > > > 
> > > > > A TPM does not only have a data channel (/dev/tpm%d) but also a 
> > > > > control
> > > > > channel, which is primarily implemented in its hardware interface and 
> > > > > is
> > > > > typically not fully accessible to user space. The vtpm proxy driver 
> > > > > _only_
> > > > > supports the data channel through which it basically relays TPM 
> > > > > commands and
> > > > > responses from user space to the TPM emulator. The control channel is
> > > > > provided by the software emulator through an additional TCP or UnixIO 
> > > > > socket
> > > > > or in case of CUSE through ioctls. The control channel allows to 
> > > > > reset the
> > > > > TPM when the container/VM is being reset or set the locality of a 
> > > > > command or
> > > > > retrieve the state of the vTPM (for suspend) and set the state of the 
> > > > > vTPM
> > > > > (for resume) among several other things. The commands for the control
> > > > > channel are defined here:
> > > > > 
> > > > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > > > 
> > > > > For a container we would require that its management stack 
> > > > > initializes and
> > > > > resets the vTPM when the container is rebooted. (These are typically
> > > > > operations that are done through pulses on the motherboard.)
> > > > > 
> > > > > In case of QEMU we would need to have more access to the control 
> > > > > channel,
> > > > > which includes initialization and reset of the vTPM, getting and 
> > > > > setting its
> > > > > state for suspend/resume/migration, setting the locality of commands, 
> > > > > etc.,
> > > > > so that all low-level functionality is accessible to the emulator 
> > > > > (QEMU).
> > > > > The proxy driver does not help with this but we should use the swtpm
> > > > > implementation that either has that CUSE interface with control 
> > > > > channel
> > > > > (through ioctls) or provides UnixIO and TCP sockets for the control 
> > > > > channel.
> > > > OK, that makes sense; does the control interface need to be handled by 
> > > > QEMU
> > > > or by libvirt or both?
> > > The control interface needs to be handled primarily by QEMU.
> > > 
> > > In case of the libvirt implementation I am running an external program
> > > swtpm_ioctl that uses the control channel to gracefully shut down any
> > > existing running TPM emulator whose device name happens to have the same
> > > name as the device of the TPM emulator that is to be created. So it cleans
> > > up before starting a new TPM emulator just to make sure that that new TPM
> > > instance can be started. Detail...
> > > 
> > > > Either way, I think you're saying that with your kernel interface + a 
> > > > UnixIO
> > > > socket you can avoid the CUSE stuff?
> > > So in case of QEMU you don't need that new kernel device driver -- it's
> > > primarily meant for containers. For QEMU one would start the TPM emulator
> > > and make sure that QEMU has access to the data and control channels, which
> > > are now offered as
> > > 
> > > - CUSE interface with ioctl
> > > - TCP + TCP
> > > - UnixIO + TCP
> > > - TCP + UnixIO
> > > - 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Stefan Berger

On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

The multi-instance vtpm proxy driver basically works through usage of an
ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
The front-end is a new /dev/tpm%d device that then can be moved into the
container (mknod + device cgroup setup). The backend is an anonymous file
descriptor that is to be passed to a TPM emulator for reading TPM requests
coming in from that /dev/tpm%d and returning responses to it. Since it is
implemented as a kernel driver, we can hook it into the Linux Integrity
Measurement Architecture (IMA) and have it be used by IMA in place of a
hardware TPM driver. There's ongoing work in the area of namespacing support
for IMA to have an independent IMA instance per container so that this can
be used.

A TPM does not only have a data channel (/dev/tpm%d) but also a control
channel, which is primarily implemented in its hardware interface and is
typically not fully accessible to user space. The vtpm proxy driver _only_
supports the data channel through which it basically relays TPM commands and
responses from user space to the TPM emulator. The control channel is
provided by the software emulator through an additional TCP or UnixIO socket
or in case of CUSE through ioctls. The control channel allows resetting the
TPM when the container/VM is reset, setting the locality of a command, and
retrieving the state of the vTPM (for suspend) and setting the state of the
vTPM (for resume), among several other things. The commands for the control
channel are defined here:

https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

For a container we would require that its management stack initializes and
resets the vTPM when the container is rebooted. (These are typically
operations that are done through pulses on the motherboard.)

In case of QEMU we would need to have more access to the control channel,
which includes initialization and reset of the vTPM, getting and setting its
state for suspend/resume/migration, setting the locality of commands, etc.,
so that all low-level functionality is accessible to the emulator (QEMU).
The proxy driver does not help with this but we should use the swtpm
implementation that either has that CUSE interface with control channel
(through ioctls) or provides UnixIO and TCP sockets for the control channel.

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?

The control interface needs to be handled primarily by QEMU.

In case of the libvirt implementation I am running an external program
swtpm_ioctl that uses the control channel to gracefully shut down any
existing running TPM emulator whose device name happens to have the same
name as the device of the TPM emulator that is to be created. So it cleans
up before starting a new TPM emulator just to make sure that that new TPM
instance can be started. Detail...


Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?

So in case of QEMU you don't need that new kernel device driver -- it's
primarily meant for containers. For QEMU one would start the TPM emulator
and make sure that QEMU has access to the data and control channels, which
are now offered as

- CUSE interface with ioctl
- TCP + TCP
- UnixIO + TCP
- TCP + UnixIO
- UnixIO + UnixIO
- file descriptors passed from invoker

OK, I'm trying to remember back; I'll admit to not having
liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
side require a lot of code to add a qemu interface that wasn't
ioctl?


Adding these additional interfaces to the TPM was a bigger effort, yes.


Doesn't using the kernel driver give you the benefit of both worlds,
i.e. the non-control side in QEMU is unchanged.


Yes. I am not sure what you are asking, though. A control channel is 
necessary no matter what. The kernel driver talks to /dev/vtpm- 
via a file descriptor and uses commands sent through ioctl for the 
control channel. Whether QEMU now uses an fd that is a UnixIO or TCP 
socket to send the commands to the TPM or an fd that uses CUSE doesn't 
matter much on the side of QEMU. The control channel may be a bit 
different when using ioctl versus an fd (for UnixIO or TCP). I 
am not sure why we would send commands through that vTPM proxy driver in 
case 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Dr. David Alan Gilbert
* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > 
> > > > 
> > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container case.
> > > The multi-instance vtpm proxy driver basically works through usage of an
> > > ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
> > > The front-end is a new /dev/tpm%d device that then can be moved into the
> > > container (mknod + device cgroup setup). The backend is an anonymous file
> > > descriptor that is to be passed to a TPM emulator for reading TPM requests
> > > coming in from that /dev/tpm%d and returning responses to. Since it is
> > > implemented as a kernel driver, we can hook it into the Linux Integrity
> > > Measurement Architecture (IMA) and have it be used by IMA in place of a
> > > hardware TPM driver. There's ongoing work in the area of namespacing 
> > > support
> > > for IMA to have an independent IMA instance per container so that this can
> > > be used.
> > > 
> > > A TPM does not only have a data channel (/dev/tpm%d) but also a control
> > > channel, which is primarily implemented in its hardware interface and is
> > > typically not fully accessible to user space. The vtpm proxy driver _only_
> > > supports the data channel through which it basically relays TPM commands 
> > > and
> > > responses from user space to the TPM emulator. The control channel is
> > > provided by the software emulator through an additional TCP or UnixIO 
> > > socket
> > > or in case of CUSE through ioctls. The control channel allows to reset the
> > > TPM when the container/VM is being reset or set the locality of a command 
> > > or
> > > retrieve the state of the vTPM (for suspend) and set the state of the vTPM
> > > (for resume) among several other things. The commands for the control
> > > channel are defined here:
> > > 
> > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > 
> > > For a container we would require that its management stack initializes and
> > > resets the vTPM when the container is rebooted. (These are typically
> > > operations that are done through pulses on the motherboard.)
> > > 
> > > In case of QEMU we would need to have more access to the control channel,
> > > which includes initialization and reset of the vTPM, getting and setting 
> > > its
> > > state for suspend/resume/migration, setting the locality of commands, 
> > > etc.,
> > > so that all low-level functionality is accessible to the emulator (QEMU).
> > > The proxy driver does not help with this but we should use the swtpm
> > > implementation that either has that CUSE interface with control channel
> > > (through ioctls) or provides UnixIO and TCP sockets for the control 
> > > channel.
> > OK, that makes sense; does the control interface need to be handled by QEMU
> > or by libvirt or both?
> 
> The control interface needs to be handled primarily by QEMU.
> 
> In case of the libvirt implementation I am running an external program
> swtpm_ioctl that uses the control channel to gracefully shut down any
> existing running TPM emulator whose device name happens to have the same
> name as the device of the TPM emulator that is to be created. So it cleans
> up before starting a new TPM emulator just to make sure that that new TPM
> instance can be started. Detail...
> 
> > Either way, I think you're saying that with your kernel interface + a UnixIO
> > socket you can avoid the CUSE stuff?
> 
> So in case of QEMU you don't need that new kernel device driver -- it's
> primarily meant for containers. For QEMU one would start the TPM emulator
> and make sure that QEMU has access to the data and control channels, which
> are now offered as
> 
> - CUSE interface with ioctl
> - TCP + TCP
> - UnixIO + TCP
> - TCP + UnixIO
> - UnixIO + UnixIO
> - file descriptors passed from invoker

OK, I'm trying to remember back; I'll admit to not having
liked using CUSE, but didn't using TCP/Unix/fd for the actual TPM
side require a lot of code to add a qemu interface that wasn't
ioctl?
Doesn't using the kernel driver give you the benefit of both worlds,
i.e. the non-control side in QEMU is unchanged.

Dave

>   Stefan
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Stefan Berger

On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.


The multi-instance vtpm proxy driver basically works through usage of an 
ioctl() on /dev/vtpmx that is used to spawn a new front- and backend 
pair. The front-end is a new /dev/tpm%d device that then can be moved 
into the container (mknod + device cgroup setup). The backend is an 
anonymous file descriptor that is to be passed to a TPM emulator for 
reading TPM requests coming in from that /dev/tpm%d and returning 
responses to it. Since it is implemented as a kernel driver, we can hook it 
into the Linux Integrity Measurement Architecture (IMA) and have it be 
used by IMA in place of a hardware TPM driver. There's ongoing work in 
the area of namespacing support for IMA to have an independent IMA 
instance per container so that this can be used.


A TPM does not only have a data channel (/dev/tpm%d) but also a control 
channel, which is primarily implemented in its hardware interface and is 
typically not fully accessible to user space. The vtpm proxy driver 
_only_ supports the data channel through which it basically relays TPM 
commands and responses from user space to the TPM emulator. The control 
channel is provided by the software emulator through an additional TCP 
or UnixIO socket or in case of CUSE through ioctls. The control channel 
allows resetting the TPM when the container/VM is reset, setting the 
locality of a command, and retrieving the state of the vTPM (for suspend) 
and setting the state of the vTPM (for resume), among several other things. 
The commands for the control channel are defined here:


https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

For a container we would require that its management stack initializes 
and resets the vTPM when the container is rebooted. (These are typically 
operations that are done through pulses on the motherboard.)


In case of QEMU we would need to have more access to the control 
channel, which includes initialization and reset of the vTPM, getting 
and setting its state for suspend/resume/migration, setting the locality 
of commands, etc., so that all low-level functionality is accessible to 
the emulator (QEMU). The proxy driver does not help with this but we 
should use the swtpm implementation that either has that CUSE interface 
with control channel (through ioctls) or provides UnixIO and TCP sockets 
for the control channel.


Stefan



Dave
P.S. I've removed Jeff from the cc because I got a bounce from
his AT&T address saying 'restricted/not authorized'


 Stefan


--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK






Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Stefan Berger

On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:

On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

One of the key things that was/is not appealing about this CUSE approach
is that it basically invents a new ioctl() mechanism for talking to
a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
to have any changes at all - its existing driver for talking to TPM
char devices ought to just work. All that would be required is libvirt
support to configure the vTPM instances.


The issue here is mainly the control channel as stated in the other email.

The CUSE TPM allows users to provide the name of the device that will 
appear in /dev. Since the kernel TPM driver basically owns the 
/dev/tpm%d names, a CUSE TPM should use a different name. I don't quite 
understand why such a device should not be able to offer an ioctl 
interface for its control channel? In case of the CUSE TPM it's not a 
hardware device underneath but a software emulation of a hardware device 
that needs an additional control channel to allow certain functionality 
to be reached that is typically hidden by the device driver. It just 
happens to have a compatible data channel that works just like /dev/tpm%d.


The ioctl interface is, in my opinion, only a problem insofar as the 
control channel commands can be larger than what the Linux CUSE driver 
supports, so the implementation had to work around this restriction. 
As stated in the other email, there's the possibility of using the TPM 
emulator with socket interfaces where the data and control channels can 
now use any combination of UnixIO and TCP sockets, so two UnixIO sockets 
(for data and control) are possible.


Stefan





Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Stefan Berger

On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:




So what was the multi-instance vTPM proxy driver patch set about?

That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

The multi-instance vtpm proxy driver basically works through usage of an
ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
The front-end is a new /dev/tpm%d device that then can be moved into the
container (mknod + device cgroup setup). The backend is an anonymous file
descriptor that is to be passed to a TPM emulator for reading TPM requests
coming in from that /dev/tpm%d and returning responses to. Since it is
implemented as a kernel driver, we can hook it into the Linux Integrity
Measurement Architecture (IMA) and have it be used by IMA in place of a
hardware TPM driver. There's ongoing work in the area of namespacing support
for IMA to have an independent IMA instance per container so that this can
be used.

A TPM does not only have a data channel (/dev/tpm%d) but also a control
channel, which is primarily implemented in its hardware interface and is
typically not fully accessible to user space. The vtpm proxy driver _only_
supports the data channel through which it basically relays TPM commands and
responses from user space to the TPM emulator. The control channel is
provided by the software emulator through an additional TCP or UnixIO socket
or in case of CUSE through ioctls. The control channel allows resetting the
TPM when the container/VM is reset, setting the locality of a command, and
retrieving the state of the vTPM (for suspend) and setting the state of the
vTPM (for resume), among several other things. The commands for the control
channel are defined here:

https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h

For a container we would require that its management stack initializes and
resets the vTPM when the container is rebooted. (These are typically
operations that are done through pulses on the motherboard.)

In case of QEMU we would need to have more access to the control channel,
which includes initialization and reset of the vTPM, getting and setting its
state for suspend/resume/migration, setting the locality of commands, etc.,
so that all low-level functionality is accessible to the emulator (QEMU).
The proxy driver does not help with this but we should use the swtpm
implementation that either has that CUSE interface with control channel
(through ioctls) or provides UnixIO and TCP sockets for the control channel.

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?


The control interface needs to be handled primarily by QEMU.

In case of the libvirt implementation I am running an external program 
swtpm_ioctl that uses the control channel to gracefully shut down any 
existing running TPM emulator whose device name happens to have the same 
name as the device of the TPM emulator that is to be created. So it 
cleans up before starting a new TPM emulator just to make sure that that 
new TPM instance can be started. Detail...



Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?


So in case of QEMU you don't need that new kernel device driver -- it's 
primarily meant for containers. For QEMU one would start the TPM 
emulator and make sure that QEMU has access to the data and control 
channels, which are now offered as


- CUSE interface with ioctl
- TCP + TCP
- UnixIO + TCP
- TCP + UnixIO
- UnixIO + UnixIO
- file descriptors passed from invoker

  Stefan




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread SERBAN, CRISTINA
Dave -
Jeff moved from AT&T to another company at the end of last week.
Ching Shih (cc'd) and I are still here, if any questions come up about this 
testing.
Thanks,
Cristina


Cristina Serban, PhD, CISSP
Lead Member of Technical Staff
AT&T Security Research Center 
Middletown, NJ - USA
  


-Original Message-
From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com] 
Sent: Thursday, June 16, 2016 4:05 AM
To: Stefan Berger <stef...@linux.vnet.ibm.com>
Cc: Stefan Berger <stef...@us.ibm.com>; m...@redhat.com; qemu-devel@nongnu.org; 
SERBAN, CRISTINA <cs1...@att.com>; Xu, Quan <quan...@intel.com>; 
silviu.vlasce...@gmail.com; hagen.la...@huawei.com; SHIH, CHING C 
<cs1...@att.com>
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


P.S. I've removed Jeff from the cc because I got a bounce from
his AT&T address saying 'restricted/not authorized'

> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Dr. David Alan Gilbert
* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > 
> > 
> > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > That's for containers.
> > Why have the two mechanisms? Can you explain how the multi-instance
> > proxy works; my brief reading when I saw your patch series seemed
> > to suggest it could be used instead of CUSE for the non-container case.
> 
> The multi-instance vtpm proxy driver basically works through usage of an
> ioctl() on /dev/vtpmx that is used to spawn a new front- and backend pair.
> The front-end is a new /dev/tpm%d device that then can be moved into the
> container (mknod + device cgroup setup). The backend is an anonymous file
> descriptor that is to be passed to a TPM emulator for reading TPM requests
> coming in from that /dev/tpm%d and returning responses to. Since it is
> implemented as a kernel driver, we can hook it into the Linux Integrity
> Measurement Architecture (IMA) and have it be used by IMA in place of a
> hardware TPM driver. There's ongoing work in the area of namespacing support
> for IMA to have an independent IMA instance per container so that this can
> be used.
> 
> A TPM does not only have a data channel (/dev/tpm%d) but also a control
> channel, which is primarily implemented in its hardware interface and is
> typically not fully accessible to user space. The vtpm proxy driver _only_
> supports the data channel through which it basically relays TPM commands and
> responses from user space to the TPM emulator. The control channel is
> provided by the software emulator through an additional TCP or UnixIO socket
> or in case of CUSE through ioctls. The control channel allows to reset the
> TPM when the container/VM is being reset or set the locality of a command or
> retrieve the state of the vTPM (for suspend) and set the state of the vTPM
> (for resume) among several other things. The commands for the control
> channel are defined here:
> 
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> 
> For a container we would require that its management stack initializes and
> resets the vTPM when the container is rebooted. (These are typically
> operations that are done through pulses on the motherboard.)
> 
> In case of QEMU we would need to have more access to the control channel,
> which includes initialization and reset of the vTPM, getting and setting its
> state for suspend/resume/migration, setting the locality of commands, etc.,
> so that all low-level functionality is accessible to the emulator (QEMU).
> The proxy driver does not help with this but we should use the swtpm
> implementation that either has that CUSE interface with control channel
> (through ioctls) or provides UnixIO and TCP sockets for the control channel.

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?
Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?

Dave

> Stefan
> 
> > 
> > Dave
> > P.S. I've removed Jeff from the cc because I got a bounce from
> > his AT&T address saying 'restricted/not authorized'
> > 
> > >  Stefan
> > > 
> > --
> > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> > 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Daniel P. Berrange
On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> 
> 
> 
> > > So what was the multi-instance vTPM proxy driver patch set about?
> > 
> > That's for containers.
> 
> Why have the two mechanisms? Can you explain how the multi-instance
> proxy works; my brief reading when I saw your patch series seemed
> to suggest it could be used instead of CUSE for the non-container case.

One of the key things that was/is not appealing about this CUSE approach
is that it basically invents a new ioctl() mechanism for talking to
a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
to have any changes at all - its existing driver for talking to TPM
char devices ought to just work. All that would be required is libvirt
support to configure the vTPM instances.

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-16 Thread Dr. David Alan Gilbert
* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:



> > So what was the multi-instance vTPM proxy driver patch set about?
> 
> That's for containers.

Why have the two mechanisms? Can you explain how the multi-instance
proxy works; my brief reading when I saw your patch series seemed
to suggest it could be used instead of CUSE for the non-container case.

Dave
P.S. I've removed Jeff from the cc because I got a bounce from
his AT address saying 'restricted/not authorized'

> 
> Stefan
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-15 Thread Stefan Berger

On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:

* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:

On 05/31/2016 09:58 PM, Xu, Quan wrote:

On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E  wrote:

* Daniel P. Berrange (berra...@redhat.com) wrote:

On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:

On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:

"Daniel P. Berrange"  wrote on 01/20/2016
10:00:41
AM:



process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't
recall the results of the discussion. Can you elaborate on why it
was rejected as an approach ? It just doesn't make much sense to
me to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO, adding way
more complexity and thus scope for failure

The TPM 1.2 implementation adds 10s of thousands of lines of code.
The TPM 2 implementation is in the same range. The concern was
having this code right in the QEMU address space. It's big, it can
have bugs, so we don't want it to harm QEMU. So we now put this
into an external process implemented by the swtpm project that
builds on libtpms which provides TPM 1.2 functionality (to be
extended with TPM 2). We cannot call APIs of libtpms directly
anymore, so we need a control channel, which is implemented through

ioctls on the CUSE device.

Ok, the security separation concern does make some sense. The use of
CUSE still seems fairly questionable to me. CUSE makes sense if you
want to provide a drop-in replacement for the kernel TPM device
driver, which would avoid the need for a new QEMU backend. If you're not
emulating an existing kernel driver ABI though, CUSE + ioctl
feels like a really awful RPC transport between 2 userspace processes.

While I don't really like CUSE; I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can
use existing host-side TPM tools to initialise/query the soft-tpm, and
those should be independent of the soft-tpm implementation.
As for the extra interfaces you need because it's a soft-tpm to set it
up, once you've already got that ioctl interface as above, then it
seems to make sense to extend that to add the extra interfaces needed.
The only thing you have to watch for there are that the extra
interfaces don't clash with any future kernel ioctl extensions, and
that the interface defined is generic enough for different soft-tpm

implementations.


Dave
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

Over the past several months, AT&T Security Research has been testing the
Virtual TPM software from IBM on the Power (ppc64) platform.

What about x86 platform?


Based on our
testing results, the vTPM software works well and as expected. Support for
libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
and was tested in a full-fledged OpenStack environment.


Cool..


We believe the vTPM functionality will improve various aspects of VM security
in our enterprise-grade cloud environment. AT&T would like to see these
patches accepted into the QEMU community as the default-standard build so
this technology can be easily adopted in various open source cloud
deployments.

Stefan: could you update status about this patch set? I'd really appreciate 
your patch..

What do you mean by 'update status'? It's pretty much still the same as
before.

https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm


The implementation of the swtpm that I use for connecting QEMU to now has
more interface choices. There's the existing CUSE + ioctl for data and
control channel or any combination of TCP and Unix sockets for data and
control channel. The libvirt based management stack I built on top of QEMU
with vTPM assumes QEMU using the CUSE interface.
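
Stefan notes that swtpm now also offers TCP and Unix sockets for the data and control channels. A socket-based control channel boils down to exchanging framed messages; the sketch below uses a local socket pair and a simple 4-byte big-endian command code as an illustration — this framing and the CMD_INIT code are assumptions, not swtpm's actual wire protocol:

```python
import socket
import struct

def send_ctrl(sock, code, payload=b""):
    """Send one framed control message: 4-byte big-endian code + payload."""
    sock.sendall(struct.pack(">I", code) + payload)

def recv_ctrl(sock, maxlen=4096):
    """Receive one control message and split off the command code."""
    data = sock.recv(maxlen)
    (code,) = struct.unpack(">I", data[:4])
    return code, data[4:]

# A local socket pair stands in for the emulator's Unix-socket
# control channel in this demonstration.
CMD_INIT = 0x01  # hypothetical command code
a, b = socket.socketpair()
send_ctrl(a, CMD_INIT, b"\x00\x01")
code, payload = recv_ctrl(b)
print(code, payload)  # 1 b'\x00\x01'
```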

So what was the multi-instance vTPM proxy driver patch set about?


That's for containers.

Stefan




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-15 Thread Dr. David Alan Gilbert
* Stefan Berger (stef...@linux.vnet.ibm.com) wrote:
> On 05/31/2016 09:58 PM, Xu, Quan wrote:
> > On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E  
> > wrote:
> > > > * Daniel P. Berrange (berra...@redhat.com) wrote:
> > > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > > > On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > > > > "Daniel P. Berrange"  wrote on 01/20/2016
> > > > > > > > 10:00:41
> > > > > > > > AM:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > process at all - it would make sense if there was a single
> > > > > > > > > swtpm_cuse shared across all QEMU's, but if there's one per
> > > > > > > > > QEMU device, it feels like it'd be much simpler to just have
> > > > > > > > > the functionality linked in QEMU.  That avoids the problem
> > > > > > > > I tried having it linked in QEMU before. It was basically 
> > > > > > > > rejected.
> > > > > > > I remember an impl you did many years(?) ago now, but don't
> > > > > > > recall the results of the discussion. Can you elaborate on why it
> > > > > > > was rejected as an approach ? It just doesn't make much sense to
> > > > > > > me to have to create an external daemon, a CUSE device and comms
> > > > > > > protocol, simply to be able to read/write a plain file containing
> > > > > > > the TPM state. It's massive over-engineering IMHO, adding way
> > > > > > > more complexity and thus scope for failure
> > > > > > The TPM 1.2 implementation adds 10s of thousands of lines of code.
> > > > > > The TPM 2 implementation is in the same range. The concern was
> > > > > > having this code right in the QEMU address space. It's big, it can
> > > > > > have bugs, so we don't want it to harm QEMU. So we now put this
> > > > > > into an external process implemented by the swtpm project that
> > > > > > builds on libtpms which provides TPM 1.2 functionality (to be
> > > > > > extended with TPM 2). We cannot call APIs of libtpms directly
> > > > > > anymore, so we need a control channel, which is implemented through
> > > ioctls on the CUSE device.
> > > > > Ok, the security separation concern does make some sense. The use of
> > > > > CUSE still seems fairly questionable to me. CUSE makes sense if you
> > > > > want to provide a drop-in replacement for the kernel TPM device
> > > > > driver, which would avoid the need for a new QEMU backend. If you're not
> > > > > emulating an existing kernel driver ABI though, CUSE + ioctl
> > > > > feels like a really awful RPC transport between 2 userspace processes.
> > > > While I don't really like CUSE; I can see some of the reasoning here.
> > > > By providing the existing TPM ioctl interface I think it means you can
> > > > use existing host-side TPM tools to initialise/query the soft-tpm, and
> > > > those should be independent of the soft-tpm implementation.
> > > > As for the extra interfaces you need because it's a soft-tpm to set it
> > > > up, once you've already got that ioctl interface as above, then it
> > > > seems to make sense to extend that to add the extra interfaces needed.
> > > > The only thing you have to watch for there are that the extra
> > > > interfaces don't clash with any future kernel ioctl extensions, and
> > > > that the interface defined is generic enough for different soft-tpm
> > > implementations.
> > > 
> > > > Dave
> > > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> > > 
> > > Over the past several months, AT&T Security Research has been testing the
> > > Virtual TPM software from IBM on the Power (ppc64) platform.
> > What about x86 platform?
> > 
> > > Based on our
> > > testing results, the vTPM software works well and as expected. Support for
> > > libvirt and the CUSE TPM allows us to create VMs with the vTPM 
> > > functionality
> > > and was tested in a full-fledged OpenStack environment.
> > > 
> > Cool..
> > 
> > > We believe the vTPM functionality will improve various aspects of VM 
> > > security
> > > in our enterprise-grade cloud environment. AT&T would like to see these
> > > patches accepted into the QEMU community as the default-standard build so
> > > this technology can be easily adopted in various open source cloud
> > > deployments.
> > Stefan: could you update status about this patch set? I'd really appreciate 
> > your patch..
> 
> What do you mean by 'update status'? It's pretty much still the same as
> before.
> 
> https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm
> 
> 
> The implementation of the swtpm that I use for connecting QEMU to now has
> more interface choices. There's the existing CUSE + ioctl for data and
> control channel or any combination of TCP and Unix sockets for data and
> control channel. The libvirt based management stack I built on top of QEMU
> with vTPM assumes QEMU using the CUSE interface.

So what was the multi-instance vTPM proxy driver patch set 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-13 Thread Stefan Berger

On 05/31/2016 09:58 PM, Xu, Quan wrote:

On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E  wrote:

* Daniel P. Berrange (berra...@redhat.com) wrote:

On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:

On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:

"Daniel P. Berrange"  wrote on 01/20/2016
10:00:41
AM:



process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't
recall the results of the discussion. Can you elaborate on why it
was rejected as an approach ? It just doesn't make much sense to
me to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO, adding way
more complexity and thus scope for failure

The TPM 1.2 implementation adds 10s of thousands of lines of code.
The TPM 2 implementation is in the same range. The concern was
having this code right in the QEMU address space. It's big, it can
have bugs, so we don't want it to harm QEMU. So we now put this
into an external process implemented by the swtpm project that
builds on libtpms which provides TPM 1.2 functionality (to be
extended with TPM 2). We cannot call APIs of libtpms directly
anymore, so we need a control channel, which is implemented through

ioctls on the CUSE device.

Ok, the security separation concern does make some sense. The use of
CUSE still seems fairly questionable to me. CUSE makes sense if you
want to provide a drop-in replacement for the kernel TPM device
driver, which would avoid the need for a new QEMU backend. If you're not
emulating an existing kernel driver ABI though, CUSE + ioctl
feels like a really awful RPC transport between 2 userspace processes.

While I don't really like CUSE; I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can
use existing host-side TPM tools to initialise/query the soft-tpm, and
those should be independent of the soft-tpm implementation.
As for the extra interfaces you need because it's a soft-tpm to set it
up, once you've already got that ioctl interface as above, then it
seems to make sense to extend that to add the extra interfaces needed.
The only thing you have to watch for there are that the extra
interfaces don't clash with any future kernel ioctl extensions, and
that the interface defined is generic enough for different soft-tpm

implementations.


Dave
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK


Over the past several months, AT&T Security Research has been testing the
Virtual TPM software from IBM on the Power (ppc64) platform.

What about x86 platform?


Based on our
testing results, the vTPM software works well and as expected. Support for
libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
and was tested in a full-fledged OpenStack environment.


Cool..


We believe the vTPM functionality will improve various aspects of VM security
in our enterprise-grade cloud environment. AT&T would like to see these
patches accepted into the QEMU community as the default-standard build so
this technology can be easily adopted in various open source cloud
deployments.

Stefan: could you update status about this patch set? I'd really appreciate 
your patch..


What do you mean by 'update status'? It's pretty much still the same as 
before.


https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm


The implementation of the swtpm that I use for connecting QEMU to now 
has more interface choices. There's the existing CUSE + ioctl for data 
and control channel or any combination of TCP and Unix sockets for data 
and control channel. The libvirt based management stack I built on top 
of QEMU with vTPM assumes QEMU using the CUSE interface.


Stefan




-Quan






Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-13 Thread Stefan Berger

On 05/31/2016 03:10 PM, Dr. David Alan Gilbert wrote:

* BICKFORD, JEFFREY E (jb6...@att.com) wrote:

* Daniel P. Berrange (berra...@redhat.com) wrote:

On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:

On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:

"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
AM:



process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't recall
the results of the discussion. Can you elaborate on why it was
rejected as an approach ? It just doesn't make much sense to me
to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO, adding way
more complexity and thus scope for failure

The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
implementation is in the same range. The concern was having this code right
in the QEMU address space. It's big, it can have bugs, so we don't want it
to harm QEMU. So we now put this into an external process implemented by the
swtpm project that builds on libtpms which provides TPM 1.2 functionality
(to be extended with TPM 2). We cannot call APIs of libtpms directly
anymore, so we need a control channel, which is implemented through ioctls
on the CUSE device.

Ok, the security separation concern does make some sense. The use of CUSE
still seems fairly questionable to me. CUSE makes sense if you want to
provide a drop-in replacement for the kernel TPM device driver, which
would avoid the need for a new QEMU backend. If you're not emulating an existing
kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
transport between 2 userspace processes.

While I don't really like CUSE; I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can use
existing host-side TPM tools to initialise/query the soft-tpm, and those
should be independent of the soft-tpm implementation.
As for the extra interfaces you need because it's a soft-tpm to set it up,
once you've already got that ioctl interface as above, then it seems to make
sense to extend that to add the extra interfaces needed.  The only thing
you have to watch for there are that the extra interfaces don't clash
with any future kernel ioctl extensions, and that the interface defined
is generic enough for different soft-tpm implementations.
Dave
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK


Over the past several months, AT&T Security Research has been testing the 
Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing 
results, the vTPM software works well and as expected. Support for libvirt and the 
CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a 
full-fledged OpenStack environment.
  
We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted into the QEMU community as the default-standard build so this technology can be easily adopted in various open source cloud deployments.

Interesting; however, I see Stefan has been contributing other kernel
patches that create a different vTPM setup without the use of CUSE;
if that's the case then I guess that's the preferable solution.


That solution is for Linux containers. It doesn't have the control 
channel we need for virtual machines where for example a reset it sent 
to the vTPM by QEMU when rebooting the VM. Instead we assume that the 
container management stack would reset the vTPM upon container restart.




Jeffrey: Can you detail a bit more about your setup, and how
you're managing the life cycle of the vTPM data?

Dave


Regards,
Jeffrey Bickford
AT&T Security Research Center
jbickf...@att.com

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK






Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-06-01 Thread BICKFORD, JEFFREY E
> * BICKFORD, JEFFREY E (jb6...@att.com) wrote:
> > > * Daniel P. Berrange (berra...@redhat.com) wrote:
> > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > >>"Daniel P. Berrange"  wrote on 01/20/2016 
> > > > > >>10:00:41
> > > > > >>AM:
> > > > > >>
> > > > > >>
> > > > > >>>process at all - it would make sense if there was a single
> > > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > > >I remember an impl you did many years(?) ago now, but don't recall
> > > > > >the results of the discussion. Can you elaborate on why it was
> > > > > >rejected as an approach ? It just doesn't make much sense to me
> > > > > >to have to create an external daemon, a CUSE device and comms
> > > > > >protocol, simply to be able to read/write a plain file containing
> > > > > >the TPM state. It's massive over-engineering IMHO, adding way
> > > > > >more complexity and thus scope for failure
> > > > > 
> > > > > The TPM 1.2 implementation adds 10s of thousands of lines of code. 
> > > > > The TPM 2
> > > > > implementation is in the same range. The concern was having this code 
> > > > > right
> > > > > in the QEMU address space. It's big, it can have bugs, so we don't 
> > > > > want it
> > > > > to harm QEMU. So we now put this into an external process implemented 
> > > > > by the
> > > > > swtpm project that builds on libtpms which provides TPM 1.2 
> > > > > functionality
> > > > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > > > anymore, so we need a control channel, which is implemented through 
> > > > > ioctls
> > > > > on the CUSE device.
> > > > 
> > > > Ok, the security separation concern does make some sense. The use of 
> > > > CUSE
> > > > still seems fairly questionable to me. CUSE makes sense if you want to
> > > > provide a drop-in replacement for the kernel TPM device driver, which
> > > > would avoid the need for a new QEMU backend. If you're not emulating an 
> > > > existing
> > > > kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> > > > transport between 2 userspace processes.
> > 
> > > While I don't really like CUSE; I can see some of the reasoning here.
> > > By providing the existing TPM ioctl interface I think it means you can use
> > > existing host-side TPM tools to initialise/query the soft-tpm, and those
> > > should be independent of the soft-tpm implementation.
> > > As for the extra interfaces you need because it's a soft-tpm to set it up,
> > > once you've already got that ioctl interface as above, then it seems to 
> > > make
> > > sense to extend that to add the extra interfaces needed.  The only thing
> > > you have to watch for there are that the extra interfaces don't clash
> > > with any future kernel ioctl extensions, and that the interface defined
> > > is generic enough for different soft-tpm implementations.
> > 
> > > Dave
> > > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> > 
> > 
> > Over the past several months, AT&T Security Research has been testing the 
> > Virtual TPM software from IBM on the Power (ppc64) platform. Based on our 
> > testing results, the vTPM software works well and as expected. Support for 
> > libvirt and the CUSE TPM allows us to create VMs with the vTPM 
> > functionality and was tested in a full-fledged OpenStack environment. 
> >  
> > We believe the vTPM functionality will improve various aspects of VM 
> > security in our enterprise-grade cloud environment. AT&T would like to see 
> > these patches accepted into the QEMU community as the default-standard 
> > build so this technology can be easily adopted in various open source cloud 
> > deployments.
> 
> Interesting; however, I see Stefan has been contributing other kernel
> patches that create a different vTPM setup without the use of CUSE;
> if that's the case then I guess that's the preferable solution.
> 
> Jeffrey: Can you detail a bit more about your setup, and how
> you're maanging the life cycle of the vTPM data?
> 
> Dave
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

Sure. We are using the various patches Stefan submitted at the beginning of the 
year on an IBM Power (ppc64) machine. The machine is running the PowerKVM 
operating system (Linux with KVM for Power architecture) with Stefan's vTPM 
patches installed. This machine is also running as a single OpenStack compute 
node for us to test the vTPM functionality through an OpenStack deployment. Our 
environment is currently running OpenStack Kilo. 
 
Our main goal has been to test the overall functionality of the vTPM 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-05-31 Thread Xu, Quan
On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E  wrote:
> > * Daniel P. Berrange (berra...@redhat.com) wrote:
> > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > >>"Daniel P. Berrange"  wrote on 01/20/2016
> > > > >>10:00:41
> > > > >>AM:
> > > > >>
> > > > >>
> > > > >>>process at all - it would make sense if there was a single
> > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > >I remember an impl you did many years(?) ago now, but don't
> > > > >recall the results of the discussion. Can you elaborate on why it
> > > > >was rejected as an approach ? It just doesn't make much sense to
> > > > >me to have to create an external daemon, a CUSE device and comms
> > > > >protocol, simply to be able to read/write a plain file containing
> > > > >the TPM state. It's massive over-engineering IMHO, adding way
> > > > >more complexity and thus scope for failure
> > > >
> > > > The TPM 1.2 implementation adds 10s of thousands of lines of code.
> > > > The TPM 2 implementation is in the same range. The concern was
> > > > having this code right in the QEMU address space. It's big, it can
> > > > have bugs, so we don't want it to harm QEMU. So we now put this
> > > > into an external process implemented by the swtpm project that
> > > > builds on libtpms which provides TPM 1.2 functionality (to be
> > > > extended with TPM 2). We cannot call APIs of libtpms directly
> > > > anymore, so we need a control channel, which is implemented through
> ioctls on the CUSE device.
> > >
> > > Ok, the security separation concern does make some sense. The use of
> > > CUSE still seems fairly questionable to me. CUSE makes sense if you
> > > want to provide a drop-in replacement for the kernel TPM device
> > > driver, which would avoid the need for a new QEMU backend. If you're not
> > > emulating an existing kernel driver ABI though, CUSE + ioctl
> > > feels like a really awful RPC transport between 2 userspace processes.
> 
> > While I don't really like CUSE; I can see some of the reasoning here.
> > By providing the existing TPM ioctl interface I think it means you can
> > use existing host-side TPM tools to initialise/query the soft-tpm, and
> > those should be independent of the soft-tpm implementation.
> > As for the extra interfaces you need because it's a soft-tpm to set it
> > up, once you've already got that ioctl interface as above, then it
> > seems to make sense to extend that to add the extra interfaces needed.
> > The only thing you have to watch for there are that the extra
> > interfaces don't clash with any future kernel ioctl extensions, and
> > that the interface defined is generic enough for different soft-tpm
> implementations.
> 
> > Dave
> > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
> 
> Over the past several months, AT&T Security Research has been testing the
> Virtual TPM software from IBM on the Power (ppc64) platform.

What about x86 platform?

> Based on our
> testing results, the vTPM software works well and as expected. Support for
> libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality
> and was tested in a full-fledged OpenStack environment.
>

Cool..

> We believe the vTPM functionality will improve various aspects of VM security
> in our enterprise-grade cloud environment. AT&T would like to see these
> patches accepted into the QEMU community as the default-standard build so
> this technology can be easily adopted in various open source cloud
> deployments.

Stefan: could you update status about this patch set? I'd really appreciate 
your patch..

-Quan



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-05-31 Thread Dr. David Alan Gilbert
* BICKFORD, JEFFREY E (jb6...@att.com) wrote:
> > * Daniel P. Berrange (berra...@redhat.com) wrote:
> > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > >>"Daniel P. Berrange"  wrote on 01/20/2016 
> > > > >>10:00:41
> > > > >>AM:
> > > > >>
> > > > >>
> > > > >>>process at all - it would make sense if there was a single
> > > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > > >>>the functionality linked in QEMU.  That avoids the problem
> > > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > > >I remember an impl you did many years(?) ago now, but don't recall
> > > > >the results of the discussion. Can you elaborate on why it was
> > > > >rejected as an approach ? It just doesn't make much sense to me
> > > > >to have to create an external daemon, a CUSE device and comms
> > > > >protocol, simply to be able to read/write a plain file containing
> > > > >the TPM state. It's massive over-engineering IMHO, adding way
> > > > >more complexity and thus scope for failure
> > > > 
> > > > The TPM 1.2 implementation adds 10s of thousands of lines of code. The 
> > > > TPM 2
> > > > implementation is in the same range. The concern was having this code 
> > > > right
> > > > in the QEMU address space. It's big, it can have bugs, so we don't want 
> > > > it
> > > > to harm QEMU. So we now put this into an external process implemented 
> > > > by the
> > > > swtpm project that builds on libtpms which provides TPM 1.2 
> > > > functionality
> > > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > > anymore, so we need a control channel, which is implemented through 
> > > > ioctls
> > > > on the CUSE device.
> > > 
> > > Ok, the security separation concern does make some sense. The use of CUSE
> > > still seems fairly questionable to me. CUSE makes sense if you want to
> > > provide a drop-in replacement for the kernel TPM device driver, which
> > > would avoid the need for a new QEMU backend. If you're not emulating an 
> > > existing
> > > kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> > > transport between 2 userspace processes.
> 
> > While I don't really like CUSE; I can see some of the reasoning here.
> > By providing the existing TPM ioctl interface I think it means you can use
> > existing host-side TPM tools to initialise/query the soft-tpm, and those
> > should be independent of the soft-tpm implementation.
> > As for the extra interfaces you need because it's a soft-tpm to set it up,
> > once you've already got that ioctl interface as above, then it seems to make
> > sense to extend that to add the extra interfaces needed.  The only thing
> > you have to watch for there are that the extra interfaces don't clash
> > with any future kernel ioctl extensions, and that the interface defined
> > is generic enough for different soft-tpm implementations.
> 
> > Dave
> > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> 
> 
> Over the past several months, AT&T Security Research has been testing the 
> Virtual TPM software from IBM on the Power (ppc64) platform. Based on our 
> testing results, the vTPM software works well and as expected. Support for 
> libvirt and the CUSE TPM allows us to create VMs with the vTPM functionality 
> and was tested in a full-fledged OpenStack environment. 
>  
> We believe the vTPM functionality will improve various aspects of VM security 
> in our enterprise-grade cloud environment. AT&T would like to see these 
> patches accepted into the QEMU community as the default-standard build so 
> this technology can be easily adopted in various open source cloud 
> deployments.

Interesting; however, I see Stefan has been contributing other kernel
patches that create a different vTPM setup without the use of CUSE;
if that's the case then I guess that's the preferable solution.

Jeffrey: Can you detail a bit more about your setup, and how
> you're managing the life cycle of the vTPM data?

Dave

> 
> Regards,
> Jeffrey Bickford
> AT&T Security Research Center
> jbickf...@att.com
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-05-31 Thread BICKFORD, JEFFREY E
> * Daniel P. Berrange (berra...@redhat.com) wrote:
> > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > >>"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
> > > >>AM:
> > > >>
> > > >>
> > > >>>process at all - it would make sense if there was a single
> > > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > > >>>QEMU device, it feels like it'd be much simpler to just have
> > > >>>the functionality linked in QEMU.  That avoids the problem
> > > >>I tried having it linked in QEMU before. It was basically rejected.
> > > >I remember an impl you did many years(?) ago now, but don't recall
> > > >the results of the discussion. Can you elaborate on why it was
> > > >rejected as an approach ? It just doesn't make much sense to me
> > > >to have to create an external daemon, a CUSE device and comms
> > > >protocol, simply to be able to read/write a plain file containing
> > > >the TPM state. Its massive over engineering IMHO and adding way
> > > >more complexity and thus scope for failure
> > > 
> > > The TPM 1.2 implementation adds tens of thousands of lines of code. The 
> > > TPM 2
> > > implementation is in the same range. The concern was having this code 
> > > right
> > > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > > to harm QEMU. So we now put this into an external process implemented by 
> > > the
> > > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > > anymore, so we need a control channel, which is implemented through ioctls
> > > on the CUSE device.
> > 
> > Ok, the security separation concern does make some sense. The use of CUSE
> > still seems fairly questionable to me. CUSE makes sense if you want to
> > provide a drop-in replacement for the kernel TPM device driver, which
> > would avoid the need for a new QEMU backend. If you're not emulating an existing
> > kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> > transport between 2 userspace processes.

> While I don't really like CUSE; I can see some of the reasoning here.
> By providing the existing TPM ioctl interface I think it means you can use
> existing host-side TPM tools to initialise/query the soft-tpm, and those
> should be independent of the soft-tpm implementation.
> As for the extra interfaces you need because it's a soft-tpm to set it up,
> once you've already got that ioctl interface as above, then it seems to make
> sense to extend that to add the extra interfaces needed.  The only thing
> you have to watch for there are that the extra interfaces don't clash
> with any future kernel ioctl extensions, and that the interface defined
> is generic enough for different soft-tpm implementations.

> Dave
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK


Over the past several months, AT&T Security Research has been testing the 
Virtual TPM software from IBM on the Power (ppc64) platform. Based on our 
testing results, the vTPM software works well and as expected. Support for 
libvirt and the CUSE TPM allows us to create VMs with vTPM functionality, 
which we tested in a full-fledged OpenStack environment. 

We believe the vTPM functionality will improve various aspects of VM security 
in our enterprise-grade cloud environment. AT&T would like to see these patches 
accepted into the QEMU community as the default-standard build so this 
technology can be easily adopted in various open source cloud deployments.

Regards,
Jeffrey Bickford
AT&T Security Research Center
jbickf...@att.com



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-28 Thread Daniel P. Berrange
On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange"  wrote on 01/20/2016 10:00:41 
> AM:
> 
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE 
> > > The CUSE TPM and associated tools can be found here:
> > > 
> > > https://github.com/stefanberger/swtpm
> > > 
> > > (please use the latest version)
> > > 
> > > To use the external CUSE TPM, the CUSE TPM should be started as 
> follows:
> > > 
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > 
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > > TPM device?  This makes me wonder why we need this separate
> 
> Correct. See reason in answer to previous email.
> 
> > process at all - it would make sense if there was a single
> > swtpm_cuse shared across all QEMU's, but if there's one per
> > QEMU device, it feels like it'd be much simpler to just have
> > the functionality linked in QEMU.  That avoids the problem
> 
> I tried having it linked in QEMU before. It was basically rejected.
> 
> > of having to manage all these extra processes alongside QEMU
> > which can add a fair bit of mgmt overhead.
> 
> For libvirt, yes, there is mgmt. overhead but it's quite transparent. So 
> libvirt is involved in the creation of the directory for the vTPMs, the 
> command line creation for the external process as well as the startup of 
> the process, but otherwise it's not a big issue (anymore). I have the 
> patches that do just that for an older libvirt version, along with setting 
> up SELinux labels, cgroups, etc. for each VM that wants an attached vTPM.

A question that just occurred is how this will work with live migration.
If we live migrate a VM we need the file that backs the guest's vTPM
device to either be on shared storage, or it needs to be copied. With
modern QEMU we are using drive-mirror to copy all block backends over
an NBD connection. If the file backing the vTPM is invisible to QEMU
hidden behind the swtpm_cuse ioctl(), then there's no way for us to
leverage QEMUs block mirror to copy across the TPM state file AFAICT.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-28 Thread Stefan Berger
"Daniel P. Berrange"  wrote on 01/28/2016 08:15:21 
AM:


> 
> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > "Daniel P. Berrange"  wrote on 01/20/2016 
10:00:41 
> > AM:
> > 
> > > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the 
CUSE 
> > > > The CUSE TPM and associated tools can be found here:
> > > > 
> > > > https://github.com/stefanberger/swtpm
> > > > 
> > > > (please use the latest version)
> > > > 
> > > > To use the external CUSE TPM, the CUSE TPM should be started as 
> > follows:
> > > > 
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > > 
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > 
> > > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > > TPM device?  This makes me wonder why we need this separate
> > 
> > Correct. See reason in answer to previous email.
> > 
> > > process at all - it would make sense if there was a single
> > > swtpm_cuse shared across all QEMU's, but if there's one per
> > > QEMU device, it feels like it'd be much simpler to just have
> > > the functionality linked in QEMU.  That avoids the problem
> > 
> > I tried having it linked in QEMU before. It was basically rejected.
> > 
> > > of having to manage all these extra processes alongside QEMU
> > > which can add a fair bit of mgmt overhead.
> > 
> > For libvirt, yes, there is mgmt. overhead but it's quite transparent. So
> > libvirt is involved in the creation of the directory for the vTPMs, the
> > command line creation for the external process as well as the startup of
> > the process, but otherwise it's not a big issue (anymore). I have the
> > patches that do just that for an older libvirt version, along with setting
> > up SELinux labels, cgroups, etc. for each VM that wants an attached vTPM.
> 
> A question that just occurred is how this will work with live migration.
> If we live migrate a VM we need the file that backs the guest's vTPM
> device to either be on shared storage, or it needs to be copied. With

The vTPM implements commands over the control channel to get the vTPM's 
state blobs upon migration (suspend) and to set them back into the vTPM at the 
end of migration (resume). The code is here:

http://lists.nongnu.org/archive/html/qemu-devel/2016-01/msg00088.html

This function implements the retrieval of the state.

+int tpm_util_cuse_get_state_blobs(int tpm_fd,
+  bool decrypted_blobs,
+  TPMBlobBuffers *tpm_blobs)


> modern QEMU we are using drive-mirror to copy all block backends over
> an NBD connection. If the file backing the vTPM is invisible to QEMU
> hidden behind the swtpm_cuse ioctl(), then there's no way for us to
> leverage QEMUs block mirror to copy across the TPM state file AFAICT.

The vTPM's state is treated like any other device's state and is 
serialized upon machine suspend (alongside all the other VM devices) and 
de-serialized upon machine resume (with the addition that the state is 
pushed into the external vTPM device over the control channel and there 
are control channel commands to resume the vTPM with that state).

It is correct that the vTPM writes its state into a plain text file 
otherwise. This vTPM state needs to go alongside the image of the VM for 
all TPM related applications to run seamlessly under all circumstances (I 
can go into more detail here but don't want to confuse). There's currently 
one problem related to running snapshots and snapshots being 'volatile' 
that I mentioned here (volatile = state of VM filesystem is discarded upon 
shutdown of the VM running a snapshot):

 https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg04047.html

I haven't gotten around to trying to run a snapshot and migrate it to 
another machine. Let's say one were to create a new file /root/XYZ while 
running that snapshot, and the snapshot is then shut down on its destination 
machine. Will that file /root/XYZ appear in the filesystem upon restart of 
that VM? The 'normal' behavior when not migrating is that a file /root/XYZ 
created while running a snapshot will not appear when restarting that 
snapshot (of course!) or when starting the machine 'normally'. So the VM 
image state is 'volatile' if a snapshot is run and then shut down. The state 
of the vTPM would have to be treated as equally volatile or non-volatile.

Does this explanation clarify things?

Regards,
Stefan 

> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-   http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org   -o-   http://virt-manager.org :|
> |: http://autobuild.org  -o-   http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
> 




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-21 Thread Stefan Berger
"Michael S. Tsirkin"  wrote on 01/21/2016 12:08:20 AM:


> 
> Well that was just one idea, it's up to you guys.
> But while modular multi-process QEMU for security
> might happen in future, I don't see us doing this
> by moving large parts of QEMU into cuse devices,
> and talking to these through ioctls.

I guess we'll have to rely on someone else to provide a different 
interface.

   Stefan



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-21 Thread Dr. David Alan Gilbert
* Daniel P. Berrange (berra...@redhat.com) wrote:
> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > >>"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
> > >>AM:
> > >>
> > >>
> > >>>process at all - it would make sense if there was a single
> > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > >>>QEMU device, it feels like it'd be much simpler to just have
> > >>>the functionality linked in QEMU.  That avoids the problem
> > >>I tried having it linked in QEMU before. It was basically rejected.
> > >I remember an impl you did many years(?) ago now, but don't recall
> > >the results of the discussion. Can you elaborate on why it was
> > >rejected as an approach ? It just doesn't make much sense to me
> > >to have to create an external daemon, a CUSE device and comms
> > >protocol, simply to be able to read/write a plain file containing
> > >the TPM state. Its massive over engineering IMHO and adding way
> > >more complexity and thus scope for failure
> > 
> > The TPM 1.2 implementation adds tens of thousands of lines of code. The TPM 2
> > implementation is in the same range. The concern was having this code right
> > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > to harm QEMU. So we now put this into an external process implemented by the
> > swtpm project that builds on libtpms which provides TPM 1.2 functionality
> > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > anymore, so we need a control channel, which is implemented through ioctls
> > on the CUSE device.
> 
> Ok, the security separation concern does make some sense. The use of CUSE
> still seems fairly questionable to me. CUSE makes sense if you want to
> provide a drop-in replacement for the kernel TPM device driver, which
> would avoid the need for a new QEMU backend. If you're not emulating an existing
> kernel driver ABI though, CUSE + ioctl feels like a really awful RPC
> transport between 2 userspace processes.

While I don't really like CUSE; I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can use
existing host-side TPM tools to initialise/query the soft-tpm, and those
should be independent of the soft-tpm implementation.
As for the extra interfaces you need because it's a soft-tpm to set it up,
once you've already got that ioctl interface as above, then it seems to make
sense to extend that to add the extra interfaces needed.  The only thing
you have to watch for there is that the extra interfaces don't clash
with any future kernel ioctl extensions, and that the interface defined
is generic enough for different soft-tpm implementations.

Dave


> 
> Regards,
> Daniel
> -- 
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-21 Thread Michael S. Tsirkin
On Thu, Jan 21, 2016 at 05:41:32AM +, Xu, Quan wrote:
> > On January 21, 2016 at 1:08pm,  wrote:
> > On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> > > On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> > > >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> > > >>"Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:
> > > >>
> > > >>>From: "Michael S. Tsirkin"  On Wed, Jan 20, 2016 at
> > > >>>10:36:41AM -0500, Stefan Berger wrote:
> > > "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58
> > AM:
> > > 
> > > >From: "Michael S. Tsirkin" 
> > > >>The CUSE TPM and associated tools can be found here:
> > > >>
> > > >>https://github.com/stefanberger/swtpm
> > > >>
> > > >>(please use the latest version)
> > > >>
> > > >>To use the external CUSE TPM, the CUSE TPM should be started as
> > > >>follows:
> > > >># terminate previously started CUSE TPM /usr/bin/swtpm_ioctl -s
> > > >>/dev/vtpm-test
> > > >>
> > > >># start CUSE TPM
> > > >>/usr/bin/swtpm_cuse -n vtpm-test
> > > >>
> > > >>QEMU can then be started using the following parameters:
> > > >>
> > > >>qemu-system-x86_64 \
> > > >>[...] \
> > > >> -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> > > >>>dev/vtpm-test
> > > \
> > > >> -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >>[...]
> > > >>
> > > >>
> > > >>Signed-off-by: Stefan Berger 
> > > >>Cc: Eric Blake 
> > > >Before we add a dependency on this interface, I'd rather see this
> > > >interface supported in kernel and not just in CUSE.
> > > For using the single hardware TPM, we have the passthrough type.
> > > >>>It's usage is
> > > limited.
> > > 
> > > CUSE extends the TPM character device interface with ioctl's.
> > > Behind the character device we can implement a TPM 1.2 and a TPM
> > > 2. Both TPM implementations require large amounts of code, which I
> > > don't think should go into the Linux kernel itself. So I don't know
> > > who would implement this interface inside the Linux kernel.
> > > 
> > >    Stefan
> > > 
> > > >>>BTW I'm not talking about the code - I'm talking about the interfaces 
> > > >>>here.
> > > >>>
> > > >>>One way would be to add support for these interface support in the 
> > > >>>kernel.
> > > >>>
> > > >>>Maybe others can be replaced with QMP events so management can take
> > > >>>the necessary action.
> > > >>>
> > > >>>As long as this is not the case, I suspect this code will have to
> > > >>>stay out of tree :( We can't depend on interfaces provided solely
> > > >>>by cuse devices on github.
> > > >>Why is that? I know that the existing ioctls cannot be modified
> > > >>anymore once QEMU accepts the code. So I don't understand it. Some
> > > >>of the ioctls are only useful when emulating a hardware device,
> > > >Maybe they can be replaced with QMP events?
> > > >These could be emitted unconditionally, and ignored by management in
> > > >passthrough case.
> > > >
> > > >>so there's no need for them in a
> > > >>kernel interface unless one was to put the vTPM code into the Linux
> > > >>kernel, but I don't see that this is happening. What is better about
> > > >>a kernel interface versus one implemented by a project on github
> > > >>assuming that the existing ioctls are stable? What is the real reason 
> > > >>here?
> > > >>
> > > >>Stefan
> > > >>
> > > >That someone went to the trouble of reviewing the interface for
> > > >long-term maintainability, portability etc. That it obeys some
> > > >existing standards for API use, coding style etc and will continue to.
> > >
> > > The same applies to the libtpms and swtpm projects as well, I suppose.
> > > If someone wants to join them, let me know.
> > >
> > > As stated, we will keep the existing ioctl stables once integrated but
> > > will adapt where necessary before that.
> > > >
> > > >In other words, kernel is already a dependency for QEMU.
> > >
> > > I don't see vTPM going into the kernel, at least I don't know of
> > > anyone trying to do that.
> > >
> > >Stefan
> > >
> > 
> > Well that was just one idea, it's up to you guys.
> > But while modular multi-process QEMU for security might happen in future, I
> > don't see us doing this by moving large parts of QEMU into cuse devices, and
> > talking to these through ioctls.
> 
> 
> IIUC, the major root issue is that the CUSE TPM is based on a software-defined
> TPM, instead of a hardware TPM.

No, the major issue is in how it's put together.

> This may bring in more security/stability issues.
> As far as I know, some trusted cloud products must be based on upstream code.
> TPM passthrough is only suitable for a limited set of VMs.
> As I mentioned, I think CUSE TPM is a good solution for trusted cloud.
>
> Hagen, could you share more user 

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Michael S. Tsirkin
On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> >>"Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:
> >>
> >>>From: "Michael S. Tsirkin" 
> >>>On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:
> 
> >From: "Michael S. Tsirkin" 
> >>The CUSE TPM and associated tools can be found here:
> >>
> >>https://github.com/stefanberger/swtpm
> >>
> >>(please use the latest version)
> >>
> >>To use the external CUSE TPM, the CUSE TPM should be started as
> >>follows:
> >># terminate previously started CUSE TPM
> >>/usr/bin/swtpm_ioctl -s /dev/vtpm-test
> >>
> >># start CUSE TPM
> >>/usr/bin/swtpm_cuse -n vtpm-test
> >>
> >>QEMU can then be started using the following parameters:
> >>
> >>qemu-system-x86_64 \
> >>[...] \
> >> -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> >>>dev/vtpm-test
> \
> >> -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >>[...]
> >>
> >>
> >>Signed-off-by: Stefan Berger 
> >>Cc: Eric Blake 
> >Before we add a dependency on this interface,
> >I'd rather see this interface supported in kernel
> >and not just in CUSE.
> For using the single hardware TPM, we have the passthrough type.
> >>>It's usage is
> limited.
> 
> CUSE extends the TPM character device interface with ioctl's. Behind the
> character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> implementations require large amounts of code, which I don't think should go
> into the Linux kernel itself. So I don't know who would implement this
> interface inside the Linux kernel.
> 
>    Stefan
> 
> >>>BTW I'm not talking about the code - I'm talking about the interfaces here.
> >>>
> >>>One way would be to add support for these interface support in the kernel.
> >>>
> >>>Maybe others can be replaced with QMP events so management
> >>>can take the necessary action.
> >>>
> >>>As long as this is not the case, I suspect this code will have to stay
> >>>out of tree :( We can't depend on interfaces provided solely by cuse
> >>>devices on github.
> >>Why is that? I know that the existing ioctls cannot be modified anymore once
> >>QEMU accepts the code. So I don't understand it. Some of the ioctls are only
> >>useful when emulating a hardware device,
> >Maybe they can be replaced with QMP events?
> >These could be emitted unconditionally, and ignored
> >by management in passthrough case.
> >
> >>so there's no need for them in a
> >>kernel interface unless one was to put the vTPM code into the Linux kernel, 
> >>but
> >>I don't see that this is happening. What is better about a kernel interface
> >>versus one implemented by a project on github assuming that the existing 
> >>ioctls
> >>are stable? What is the real reason here?
> >>
> >>Stefan
> >>
> >That someone went to the trouble of reviewing the interface for
> >long-term maintainability, portability etc. That it obeys some existing
> >standards for API use, coding style etc and will continue to.
> 
> The same applies to the libtpms and swtpm projects as well, I suppose. If
> someone wants to join them, let me know.
> 
> As stated, we will keep the existing ioctl stables once integrated but will
> adapt where necessary before that.
> >
> >In other words, kernel is already a dependency for QEMU.
> 
> I don't see vTPM going into the kernel, at least I don't know of anyone
> trying to do that.
> 
>Stefan
> 

Well that was just one idea, it's up to you guys.
But while modular multi-process QEMU for security
might happen in future, I don't see us doing this
by moving large parts of QEMU into cuse devices,
and talking to these through ioctls.

> >>>
> >>>
> >>>--
> >>>MST
> >>>



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Xu, Quan
> On January 21, 2016 at 1:08pm,  wrote:
> On Wed, Jan 20, 2016 at 04:25:15PM -0500, Stefan Berger wrote:
> > On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:
> > >On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> > >>"Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:
> > >>
> > >>>From: "Michael S. Tsirkin"  On Wed, Jan 20, 2016 at
> > >>>10:36:41AM -0500, Stefan Berger wrote:
> > "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58
> AM:
> > 
> > >From: "Michael S. Tsirkin" 
> > >>The CUSE TPM and associated tools can be found here:
> > >>
> > >>https://github.com/stefanberger/swtpm
> > >>
> > >>(please use the latest version)
> > >>
> > >>To use the external CUSE TPM, the CUSE TPM should be started as
> > >>follows:
> > >># terminate previously started CUSE TPM /usr/bin/swtpm_ioctl -s
> > >>/dev/vtpm-test
> > >>
> > >># start CUSE TPM
> > >>/usr/bin/swtpm_cuse -n vtpm-test
> > >>
> > >>QEMU can then be started using the following parameters:
> > >>
> > >>qemu-system-x86_64 \
> > >>[...] \
> > >> -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/
> > >>>dev/vtpm-test
> > \
> > >> -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > >>[...]
> > >>
> > >>
> > >>Signed-off-by: Stefan Berger 
> > >>Cc: Eric Blake 
> > >Before we add a dependency on this interface, I'd rather see this
> > >interface supported in kernel and not just in CUSE.
> > For using the single hardware TPM, we have the passthrough type.
> > >>>It's usage is
> > limited.
> > 
> > CUSE extends the TPM character device interface with ioctl's.
> > Behind the character device we can implement a TPM 1.2 and a TPM
> > 2. Both TPM implementations require large amounts of code, which I
> > don't think should go into the Linux kernel itself. So I don't know
> > who would implement this interface inside the Linux kernel.
> > 
> >    Stefan
> > 
> > >>>BTW I'm not talking about the code - I'm talking about the interfaces 
> > >>>here.
> > >>>
> > >>>One way would be to add support for these interface support in the 
> > >>>kernel.
> > >>>
> > >>>Maybe others can be replaced with QMP events so management can take
> > >>>the necessary action.
> > >>>
> > >>>As long as this is not the case, I suspect this code will have to
> > >>>stay out of tree :( We can't depend on interfaces provided solely
> > >>>by cuse devices on github.
> > >>Why is that? I know that the existing ioctls cannot be modified
> > >>anymore once QEMU accepts the code. So I don't understand it. Some
> > >>of the ioctls are only useful when emulating a hardware device,
> > >Maybe they can be replaced with QMP events?
> > >These could be emitted unconditionally, and ignored by management in
> > >passthrough case.
> > >
> > >>so there's no need for them in a
> > >>kernel interface unless one was to put the vTPM code into the Linux
> > >>kernel, but I don't see that this is happening. What is better about
> > >>a kernel interface versus one implemented by a project on github
> > >>assuming that the existing ioctls are stable? What is the real reason 
> > >>here?
> > >>
> > >>Stefan
> > >>
> > >That someone went to the trouble of reviewing the interface for
> > >long-term maintainability, portability etc. That it obeys some
> > >existing standards for API use, coding style etc and will continue to.
> >
> > The same applies to the libtpms and swtpm projects as well, I suppose.
> > If someone wants to join them, let me know.
> >
> > As stated, we will keep the existing ioctl stables once integrated but
> > will adapt where necessary before that.
> > >
> > >In other words, kernel is already a dependency for QEMU.
> >
> > I don't see vTPM going into the kernel, at least I don't know of
> > anyone trying to do that.
> >
> >Stefan
> >
> 
> Well that was just one idea, it's up to you guys.
> But while modular multi-process QEMU for security might happen in future, I
> don't see us doing this by moving large parts of QEMU into cuse devices, and
> talking to these through ioctls.


IIUC, the major root issue is that the CUSE TPM is based on a software-defined 
TPM instead of a hardware TPM.
This may bring in more security/stability issues.
As far as I know, some trusted cloud products must be based on upstream code. 
TPM passthrough is only suitable for a limited set of VMs.
As I mentioned, I think the CUSE TPM is a good solution for trusted clouds.

Hagen, could you share more use cases for the CUSE TPM?
MST, is it possible to get the CUSE TPM upstream, or are there more ARs for 
Stefan Berger?


Quan


Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Daniel P. Berrange
On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> From: Stefan Berger 
> 
> Rather than integrating TPM functionality into QEMU directly
> using the TPM emulation of libtpms, we now integrate an external
> emulated TPM device. This device is expected to implement a Linux
> CUSE interface (CUSE = character device in userspace).
> 
> QEMU talks to the CUSE TPM using much functionality of the
> passthrough driver. For example, the TPM commands and responses
> are sent to the CUSE TPM using the read()/write() interface.
> However, some out-of-band control needs to be done using the CUSE
> TPM's ioctls. The CUSE TPM currently defines and implements 15
> different ioctls for controlling certain life-cycle aspects of
> the emulated TPM. The ioctls can be regarded as a replacement for
> direct function calls to a TPM emulator if the TPM were to be
> directly integrated into QEMU.
> 
> One of the ioctls allows getting a bitmask of supported capabilities.
> Each returned bit indicates which capabilities have been implemented.
> An include file defining the various ioctls is added to QEMU.
> 
> The CUSE TPM and associated tools can be found here:
> 
> https://github.com/stefanberger/swtpm
> 
> (please use the latest version)
> 
> To use the external CUSE TPM, the CUSE TPM should be started as follows:
> 
> # terminate previously started CUSE TPM
> /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> 
> # start CUSE TPM
> /usr/bin/swtpm_cuse -n vtpm-test

IIUC, there needs to be one swtpm_cuse process running per QEMU
TPM device?  This makes me wonder why we need this separate
process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem
of having to manage all these extra processes alongside QEMU
which can add a fair bit of mgmt overhead.

> 
> QEMU can then be started using the following parameters:
> 
> qemu-system-x86_64 \
>   [...] \
> -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> -device tpm-tis,id=tpm0,tpmdev=tpm0 \
>   [...]

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Daniel P. Berrange
On Wed, Jan 20, 2016 at 05:58:02PM +0200, Michael S. Tsirkin wrote:
> On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:
> > 
> > > From: "Michael S. Tsirkin" 
> > 
> > > >
> > > > The CUSE TPM and associated tools can be found here:
> > > >
> > > > https://github.com/stefanberger/swtpm
> > > >
> > > > (please use the latest version)
> > > >
> > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > >
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > >
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > >
> > > > QEMU can then be started using the following parameters:
> > > >
> > > > qemu-system-x86_64 \
> > > >[...] \
> > > > -tpmdev 
> > > > cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test
> > \
> > > > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >[...]
> > > >
> > > >
> > > > Signed-off-by: Stefan Berger 
> > > > Cc: Eric Blake 
> > >
> > > Before we add a dependency on this interface,
> > > I'd rather see this interface supported in kernel
> > > and not just in CUSE.
> > 
> > For using the single hardware TPM, we have the passthrough type. Its usage
> > is limited.
> > 
> > CUSE extends the TPM character device interface with ioctl's. Behind the
> > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > implementations require large amounts of code, which I don't think should go
> > into the Linux kernel itself. So I don't know who would implement this
> > interface inside the Linux kernel.
> > 
> >   Stefan
> > 
> 
> BTW I'm not talking about the code - I'm talking about the interfaces here.
> 
> One way would be to add support for these interfaces in the kernel.
> 
> Maybe others can be replaced with QMP events so management
> can take the necessary action.
> 
> As long as this is not the case, I suspect this code will have to stay
> out of tree :( We can't depend on interfaces provided solely by cuse
> devices on github.

The kernel already has a userspace device interface for TPMs doesn't
it - it is what our existing 'tpm-passthrough' backend in QEMU surely
uses.

If swtpm is going to the trouble of providing device node emulation
with CUSE, I would have thought it could emulate the same interface
as the existing kernel TPM device nodes, thus removing the need for
any extra driver in QEMU ?  Otherwise it doesn't seem like there's
much point in using CUSE, as opposed to a general userspace RPC
protocol that doesn't need kernel support at all.

Regards,
Daniel



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Michael S. Tsirkin
On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:
> "Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:
> 
> > From: "Michael S. Tsirkin" 
> 
> >
> > On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > > "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:
> > >
> > > > From: "Michael S. Tsirkin" 
> > >
> > > > >
> > > > > The CUSE TPM and associated tools can be found here:
> > > > >
> > > > > https://github.com/stefanberger/swtpm
> > > > >
> > > > > (please use the latest version)
> > > > >
> > > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > > >
> > > > > # terminate previously started CUSE TPM
> > > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > > >
> > > > > # start CUSE TPM
> > > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > > >
> > > > > QEMU can then be started using the following parameters:
> > > > >
> > > > > qemu-system-x86_64 \
> > > > >[...] \
> > > > > -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > > > > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > > >[...]
> > > > >
> > > > >
> > > > > Signed-off-by: Stefan Berger 
> > > > > Cc: Eric Blake 
> > > >
> > > > Before we add a dependency on this interface,
> > > > I'd rather see this interface supported in kernel
> > > > and not just in CUSE.
> > >
> > > For using the single hardware TPM, we have the passthrough type. Its usage is limited.
> > >
> > > CUSE extends the TPM character device interface with ioctl's. Behind the
> > > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > > implementations require large amounts of code, which I don't think should go
> > > into the Linux kernel itself. So I don't know who would implement this
> > > interface inside the Linux kernel.
> > >
> > >   Stefan
> > >
> >
> > BTW I'm not talking about the code - I'm talking about the interfaces here.
> >
> > One way would be to add support for these interfaces in the kernel.
> >
> > Maybe others can be replaced with QMP events so management
> > can take the necessary action.
> >
> > As long as this is not the case, I suspect this code will have to stay
> > out of tree :( We can't depend on interfaces provided solely by cuse
> > devices on github.
> 
> Why is that? I know that the existing ioctls cannot be modified anymore once
> QEMU accepts the code. So I don't understand it. Some of the ioctls are only
> useful when emulating a hardware device,

Maybe they can be replaced with QMP events?
These could be emitted unconditionally, and ignored
by management in passthrough case.

> so there's no need for them in a
> kernel interface unless one was to put the vTPM code into the Linux kernel, 
> but
> I don't see that this is happening. What is better about a kernel interface
> versus one implemented by a project on github assuming that the existing 
> ioctls
> are stable? What is the real reason here?
> 
>Stefan
> 

That someone went to the trouble of reviewing the interface for
long-term maintainability, portability etc. That it obeys some existing
standards for API use, coding style etc and will continue to.

In other words, the kernel is already a dependency for QEMU.

> >
> >
> >
> > --
> > MST
> >
> 



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger
"Michael S. Tsirkin" <m...@redhat.com> wrote on 01/20/2016 11:03:11 AM:

> From: "Michael S. Tsirkin" <m...@redhat.com>

> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> 
> On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > >>"Daniel P. Berrange" <berra...@redhat.com> wrote on 01/20/2016 
10:00:41
> > >>AM:
> > >>
> > >>
> > >>>process at all - it would make sense if there was a single
> > >>>swtpm_cuse shared across all QEMU's, but if there's one per
> > >>>QEMU device, it feels like it'd be much simpler to just have
> > >>>the functionality linked in QEMU.  That avoids the problem
> > >>I tried having it linked in QEMU before. It was basically rejected.
> > >I remember an impl you did many years(?) ago now, but don't recall
> > >the results of the discussion. Can you elaborate on why it was
> > >rejected as an approach ? It just doesn't make much sense to me
> > >to have to create an external daemon, a CUSE device and comms
> > >protocol, simply to be able to read/write a plain file containing
> > >the TPM state. It's massive over-engineering IMHO and adding way
> > >more complexity and thus scope for failure
> > 
> > The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> > implementation is in the same range. The concern was having this code right
> > in the QEMU address space. It's big, it can have bugs, so we don't want it
> > to harm QEMU. So we now put this into an external process implemented by the
> > swtpm project that builds on libtpms, which provides TPM 1.2 functionality
> > (to be extended with TPM 2). We cannot call APIs of libtpms directly
> > anymore, so we need a control channel, which is implemented through ioctls
> > on the CUSE device.
> > 
> >Stefan
> 
> If that's the only reason for it, you can package it as part of QEMU
> source, run it as a sub-process.


I am packaging libtpms in Fedora. Once we have the CUSE interface in QEMU, 
I would package the swtpm project for Fedora as well. Both projects have 
at least been prepared for Debian packaging also.

https://github.com/stefanberger/libtpms
https://github.com/stefanberger/swtpm

The 'dist' directory has the spec file.

If I were to combine things, it would be at the libvirt layer, by introducing 
an RPM dependency there.

I doubt I will have the possibility to go back to integrating it directly 
into QEMU.

Regards,
   Stefan




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Michael S. Tsirkin
On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> >>"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
> >>AM:
> >>
> >>
> >>>process at all - it would make sense if there was a single
> >>>swtpm_cuse shared across all QEMU's, but if there's one per
> >>>QEMU device, it feels like it'd be much simpler to just have
> >>>the functionality linked in QEMU.  That avoids the problem
> >>I tried having it linked in QEMU before. It was basically rejected.
> >I remember an impl you did many years(?) ago now, but don't recall
> >the results of the discussion. Can you elaborate on why it was
> >rejected as an approach ? It just doesn't make much sense to me
> >to have to create an external daemon, a CUSE device and comms
> >protocol, simply to be able to read/write a plain file containing
> >the TPM state. Its massive over engineering IMHO and adding way
> >more complexity and thus scope for failure
> 
> The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> implementation is in the same range. The concern was having this code right
> in the QEMU address space. It's big, it can have bugs, so we don't want it
> to harm QEMU. So we now put this into an external process implemented by the
> swtpm project that builds on libtpms which provides TPM 1.2 functionality
> (to be extended with TPM 2). We cannot call APIs of libtpms directly
> anymore, so we need a control channel, which is implemented through ioctls
> on the CUSE device.
> 
>Stefan

If that's the only reason for it, you can package it as part of QEMU
source, run it as a sub-process.

> >
> >
> >Regards,
> >Daniel



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Michael S. Tsirkin
On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:
> 
> > From: "Michael S. Tsirkin" 
> 
> > >
> > > The CUSE TPM and associated tools can be found here:
> > >
> > > https://github.com/stefanberger/swtpm
> > >
> > > (please use the latest version)
> > >
> > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > >
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > >
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > >
> > > QEMU can then be started using the following parameters:
> > >
> > > qemu-system-x86_64 \
> > >[...] \
> > > -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test
> \
> > > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > >[...]
> > >
> > >
> > > Signed-off-by: Stefan Berger 
> > > Cc: Eric Blake 
> >
> > Before we add a dependency on this interface,
> > I'd rather see this interface supported in kernel
> > and not just in CUSE.
> 
> For using the single hardware TPM, we have the passthrough type. Its usage is
> limited.
> 
> CUSE extends the TPM character device interface with ioctl's. Behind the
> character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> implementations require large amounts of code, which I don't think should go
> into the Linux kernel itself. So I don't know who would implement this
> interface inside the Linux kernel.
> 
>   Stefan
> 

BTW I'm not talking about the code - I'm talking about the interfaces here.

One way would be to add support for these interfaces in the kernel.

Maybe others can be replaced with QMP events so management
can take the necessary action.

As long as this is not the case, I suspect this code will have to stay
out of tree :( We can't depend on interfaces provided solely by cuse
devices on github.



-- 
MST



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Michael S. Tsirkin
On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> From: Stefan Berger 
> 
> Rather than integrating TPM functionality into QEMU directly
> using the TPM emulation of libtpms, we now integrate an external
> emulated TPM device. This device is expected to implement a Linux
> CUSE interface (CUSE = character device in userspace).
> 
> QEMU talks to the CUSE TPM using much functionality of the
> passthrough driver. For example, the TPM commands and responses
> are sent to the CUSE TPM using the read()/write() interface.
> However, some out-of-band control needs to be done using the CUSE
> TPM's ioctls. The CUSE TPM currently defines and implements 15
> different ioctls for controlling certain life-cycle aspects of
> the emulated TPM. The ioctls can be regarded as a replacement for
> direct function calls to a TPM emulator if the TPM were to be
> directly integrated into QEMU.
> 
> One of the ioctls allows getting a bitmask of supported capabilities.
> Each returned bit indicates which capabilities have been implemented.
> An include file defining the various ioctls is added to QEMU.
> 
> The CUSE TPM and associated tools can be found here:
> 
> https://github.com/stefanberger/swtpm
> 
> (please use the latest version)
> 
> To use the external CUSE TPM, the CUSE TPM should be started as follows:
> 
> # terminate previously started CUSE TPM
> /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> 
> # start CUSE TPM
> /usr/bin/swtpm_cuse -n vtpm-test
> 
> QEMU can then be started using the following parameters:
> 
> qemu-system-x86_64 \
>   [...] \
> -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> -device tpm-tis,id=tpm0,tpmdev=tpm0 \
>   [...]
> 
> 
> Signed-off-by: Stefan Berger 
> Cc: Eric Blake 

Before we add a dependency on this interface,
I'd rather see this interface supported in kernel
and not just in CUSE.


> ---
>  hmp.c|   6 ++
>  hw/tpm/tpm_int.h |   1 +
>  hw/tpm/tpm_ioctl.h   | 215 +
>  hw/tpm/tpm_passthrough.c | 274 
> +--
>  qapi-schema.json |  18 +++-
>  qemu-options.hx  |  21 +++-
>  qmp-commands.hx  |   2 +-
>  tpm.c|  11 +-
>  8 files changed, 530 insertions(+), 18 deletions(-)
>  create mode 100644 hw/tpm/tpm_ioctl.h
> 
> diff --git a/hmp.c b/hmp.c
> index c2b2c16..5f70aac 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -863,6 +863,12 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
> tpo->has_cancel_path ? ",cancel-path=" : "",
> tpo->has_cancel_path ? tpo->cancel_path : "");
>  break;
> +case TPM_TYPE_OPTIONS_KIND_CUSE_TPM:
> +tpo = ti->options->u.passthrough;
> +monitor_printf(mon, "%s%s",
> +   tpo->has_path ? ",path=" : "",
> +   tpo->has_path ? tpo->path : "");
> +break;
>  case TPM_TYPE_OPTIONS_KIND__MAX:
>  break;
>  }
> diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
> index f2f285b..6b2c9c9 100644
> --- a/hw/tpm/tpm_int.h
> +++ b/hw/tpm/tpm_int.h
> @@ -61,6 +61,7 @@ struct tpm_resp_hdr {
>  #define TPM_TAG_RSP_AUTH1_COMMAND 0xc5
>  #define TPM_TAG_RSP_AUTH2_COMMAND 0xc6
>  
> +#define TPM_SUCCESS   0
>  #define TPM_FAIL  9
>  
>  #define TPM_ORD_ContinueSelfTest  0x53
> diff --git a/hw/tpm/tpm_ioctl.h b/hw/tpm/tpm_ioctl.h
> new file mode 100644
> index 000..a341e15
> --- /dev/null
> +++ b/hw/tpm/tpm_ioctl.h
> @@ -0,0 +1,215 @@
> +/*
> + * tpm_ioctl.h
> + *
> + * (c) Copyright IBM Corporation 2014, 2015.
> + *
> + * This file is licensed under the terms of the 3-clause BSD license
> + */
> +#ifndef _TPM_IOCTL_H_
> +#define _TPM_IOCTL_H_
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +/*
> + * Every response from a command involving a TPM command execution must hold
> + * the ptm_res as the first element.
> + * ptm_res corresponds to the error code of a command executed by the TPM.
> + */
> +
> +typedef uint32_t ptm_res;
> +
> +/* PTM_GET_TPMESTABLISHED: get the establishment bit */
> +struct ptm_est {
> +union {
> +struct {
> +ptm_res tpm_result;
> +unsigned char bit; /* TPM established bit */
> +} resp; /* response */
> +} u;
> +};
> +
> +/* PTM_RESET_TPMESTABLISHED: reset establishment bit */
> +struct ptm_reset_est {
> +union {
> +struct {
> +uint8_t loc; /* locality to use */
> +} req; /* request */
> +struct {
> +ptm_res tpm_result;
> +} resp; /* response */
> +} u;
> +};
> +
> +/* PTM_INIT */
> +struct ptm_init {
> +union {
> +struct {
> +uint32_t init_flags; /* see definitions below */
> +} req; /* request */
> +struct {

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger
"Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:

> From: "Michael S. Tsirkin" 

> 
> On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:
> > "Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:
> > 
> > > From: "Michael S. Tsirkin" 
> > 
> > > >
> > > > The CUSE TPM and associated tools can be found here:
> > > >
> > > > https://github.com/stefanberger/swtpm
> > > >
> > > > (please use the latest version)
> > > >
> > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > >
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > >
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > > >
> > > > QEMU can then be started using the following parameters:
> > > >
> > > > qemu-system-x86_64 \
> > > >[...] \
> > > > -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > > > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> > > >[...]
> > > >
> > > >
> > > > Signed-off-by: Stefan Berger 
> > > > Cc: Eric Blake 
> > >
> > > Before we add a dependency on this interface,
> > > I'd rather see this interface supported in kernel
> > > and not just in CUSE.
> > 
> > For using the single hardware TPM, we have the passthrough type. Its usage is limited.
> > 
> > CUSE extends the TPM character device interface with ioctl's. Behind the
> > character device we can implement a TPM 1.2 and a TPM 2. Both TPM
> > implementations require large amounts of code, which I don't think should go
> > into the Linux kernel itself. So I don't know who would implement this
> > interface inside the Linux kernel.
> > 
> >   Stefan
> > 
> 
> BTW I'm not talking about the code - I'm talking about the interfaces here.
> 
> One way would be to add support for these interfaces in the kernel.
> 
> Maybe others can be replaced with QMP events so management
> can take the necessary action.
> 
> As long as this is not the case, I suspect this code will have to stay
> out of tree :( We can't depend on interfaces provided solely by cuse
> devices on github.

Why is that? I know that the existing ioctls cannot be modified anymore 
once QEMU accepts the code. So I don't understand it. Some of the ioctls 
are only useful when emulating a hardware device, so there's no need for 
them in a kernel interface unless one was to put the vTPM code into the 
Linux kernel, but I don't see that this is happening. What is better about 
a kernel interface versus one implemented by a project on github assuming 
that the existing ioctls are stable? What is the real reason here?

   Stefan


> 
> 
> 
> -- 
> MST
> 




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger
"Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:

> From: "Michael S. Tsirkin" 

> > 
> > The CUSE TPM and associated tools can be found here:
> > 
> > https://github.com/stefanberger/swtpm
> > 
> > (please use the latest version)
> > 
> > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > 
> > # terminate previously started CUSE TPM
> > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > 
> > # start CUSE TPM
> > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > QEMU can then be started using the following parameters:
> > 
> > qemu-system-x86_64 \
> >[...] \
> > -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >[...]
> > 
> > 
> > Signed-off-by: Stefan Berger 
> > Cc: Eric Blake 
> 
> Before we add a dependency on this interface,
> I'd rather see this interface supported in kernel
> and not just in CUSE.

For using the single hardware TPM, we have the passthrough type. Its 
usage is limited.

CUSE extends the TPM character device interface with ioctl's. Behind the 
character device we can implement a TPM 1.2 and a TPM 2. Both TPM 
implementations require large amounts of code, which I don't think should 
go into the Linux kernel itself. So I don't know who would implement this 
interface inside the Linux kernel.

  Stefan




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Daniel P. Berrange
On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> "Daniel P. Berrange" <berra...@redhat.com> wrote on 01/20/2016 10:00:41 
> AM:
> 
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> > 
> > On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> > > From: Stefan Berger <stef...@linux.vnet.ibm.com>
> > > 
> > > Rather than integrating TPM functionality into QEMU directly
> > > using the TPM emulation of libtpms, we now integrate an external
> > > emulated TPM device. This device is expected to implement a Linux
> > > CUSE interface (CUSE = character device in userspace).
> > > 
> > > QEMU talks to the CUSE TPM using much functionality of the
> > > passthrough driver. For example, the TPM commands and responses
> > > are sent to the CUSE TPM using the read()/write() interface.
> > > However, some out-of-band control needs to be done using the CUSE
> > > TPM's ioctls. The CUSE TPM currently defines and implements 15
> > > different ioctls for controlling certain life-cycle aspects of
> > > the emulated TPM. The ioctls can be regarded as a replacement for
> > > direct function calls to a TPM emulator if the TPM were to be
> > > directly integrated into QEMU.
> > > 
> > > One of the ioctls allows getting a bitmask of supported capabilities.
> > > Each returned bit indicates which capabilities have been implemented.
> > > An include file defining the various ioctls is added to QEMU.
> > > 
> > > The CUSE TPM and associated tools can be found here:
> > > 
> > > https://github.com/stefanberger/swtpm
> > > 
> > > (please use the latest version)
> > > 
> > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > 
> > > # terminate previously started CUSE TPM
> > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > 
> > > # start CUSE TPM
> > > /usr/bin/swtpm_cuse -n vtpm-test
> > 
> > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > TPM device ?  This makes me wonder why we need this separate
> 
> Correct. See reason in answer to previous email.
> 
> > process at all - it would make sense if there was a single
> > swtpm_cuse shared across all QEMU's, but if there's one per
> > QEMU device, it feels like it'd be much simpler to just have
> > the functionality linked in QEMU.  That avoids the problem
> 
> I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't recall
the results of the discussion. Can you elaborate on why it was
rejected as an approach ? It just doesn't make much sense to me
to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO and adding way
more complexity and thus scope for failure


Regards,
Daniel



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Daniel P. Berrange
On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> >On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> >>"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
> >>AM:
> >>
> >>
> >>>process at all - it would make sense if there was a single
> >>>swtpm_cuse shared across all QEMU's, but if there's one per
> >>>QEMU device, it feels like it'd be much simpler to just have
> >>>the functionality linked in QEMU.  That avoids the problem
> >>I tried having it linked in QEMU before. It was basically rejected.
> >I remember an impl you did many years(?) ago now, but don't recall
> >the results of the discussion. Can you elaborate on why it was
> >rejected as an approach ? It just doesn't make much sense to me
> >to have to create an external daemon, a CUSE device and comms
> >protocol, simply to be able to read/write a plain file containing
> >the TPM state. It's massive over-engineering IMHO and adding way
> >more complexity and thus scope for failure
> 
> The TPM 1.2 implementation adds 10s of thousands of lines of code. The TPM 2
> implementation is in the same range. The concern was having this code right
> in the QEMU address space. It's big, it can have bugs, so we don't want it
> to harm QEMU. So we now put this into an external process implemented by the
> swtpm project that builds on libtpms which provides TPM 1.2 functionality
> (to be extended with TPM 2). We cannot call APIs of libtpms directly
> anymore, so we need a control channel, which is implemented through ioctls
> on the CUSE device.

Ok, the security separation concern does make some sense. The use of CUSE
still seems fairly questionable to me. CUSE makes sense if you want to
provide a drop-in replacement for the kernel TPM device driver, which
would avoid the need for a new QEMU backend. If you're not emulating an
existing kernel driver ABI though, CUSE + ioctl feels like a really awful
RPC transport between 2 userspace processes.

Regards,
Daniel



Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger
"Daniel P. Berrange" <berra...@redhat.com> wrote on 01/20/2016 10:00:41 
AM:

> Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> 
> On Mon, Jan 04, 2016 at 10:23:19AM -0500, Stefan Berger wrote:
> > From: Stefan Berger <stef...@linux.vnet.ibm.com>
> > 
> > Rather than integrating TPM functionality into QEMU directly
> > using the TPM emulation of libtpms, we now integrate an external
> > emulated TPM device. This device is expected to implement a Linux
> > CUSE interface (CUSE = character device in userspace).
> > 
> > QEMU talks to the CUSE TPM using much functionality of the
> > passthrough driver. For example, the TPM commands and responses
> > are sent to the CUSE TPM using the read()/write() interface.
> > However, some out-of-band control needs to be done using the CUSE
> > TPM's ioctls. The CUSE TPM currently defines and implements 15
> > different ioctls for controlling certain life-cycle aspects of
> > the emulated TPM. The ioctls can be regarded as a replacement for
> > direct function calls to a TPM emulator if the TPM were to be
> > directly integrated into QEMU.
> > 
> > One of the ioctls allows getting a bitmask of supported capabilities.
> > Each returned bit indicates which capabilities have been implemented.
> > An include file defining the various ioctls is added to QEMU.
> > 
> > The CUSE TPM and associated tools can be found here:
> > 
> > https://github.com/stefanberger/swtpm
> > 
> > (please use the latest version)
> > 
> > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > 
> > # terminate previously started CUSE TPM
> > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > 
> > # start CUSE TPM
> > /usr/bin/swtpm_cuse -n vtpm-test
> 
> IIUC, there needs to be one swtpm_cuse process running per QEMU
> TPM device ?  This makes me wonder why we need this separate

Correct. See reason in answer to previous email.

> process at all - it would make sense if there was a single
> swtpm_cuse shared across all QEMU's, but if there's one per
> QEMU device, it feels like it'd be much simpler to just have
> the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

> of having to manage all these extra processes alongside QEMU
> which can add a fair bit of mgmt overhead.

For libvirt, yes, there is mgmt. overhead but it's quite transparent. So 
libvirt is involved in the creation of the directory for the vTPMs, the 
command line creation for the external process as well as the startup of 
the process, but otherwise it's not a big issue (anymore). I have patches 
that do just that for an older libvirt version, along with setting up 
SELinux labels, cgroups etc. for each VM that wants an attached vTPM.

Regards,
   Stefan

> 
> > 
> > QEMU can then be started using the following parameters:
> > 
> > qemu-system-x86_64 \
> >[...] \
> > -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
> > -device tpm-tis,id=tpm0,tpmdev=tpm0 \
> >[...]
> 
> Regards,
> Daniel




Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger

On 01/20/2016 01:54 PM, Michael S. Tsirkin wrote:

On Wed, Jan 20, 2016 at 11:06:45AM -0500, Stefan Berger wrote:

"Michael S. Tsirkin"  wrote on 01/20/2016 10:58:02 AM:


From: "Michael S. Tsirkin" 
On Wed, Jan 20, 2016 at 10:36:41AM -0500, Stefan Berger wrote:

"Michael S. Tsirkin"  wrote on 01/20/2016 10:20:58 AM:


From: "Michael S. Tsirkin" 

The CUSE TPM and associated tools can be found here:

https://github.com/stefanberger/swtpm

(please use the latest version)

To use the external CUSE TPM, the CUSE TPM should be started as follows:

# terminate previously started CUSE TPM
/usr/bin/swtpm_ioctl -s /dev/vtpm-test

# start CUSE TPM
/usr/bin/swtpm_cuse -n vtpm-test

QEMU can then be started using the following parameters:

qemu-system-x86_64 \
    [...] \
    -tpmdev cuse-tpm,id=tpm0,cancel-path=/dev/null,path=/dev/vtpm-test \
    -device tpm-tis,id=tpm0,tpmdev=tpm0 \
    [...]


Signed-off-by: Stefan Berger 
Cc: Eric Blake 

Before we add a dependency on this interface,
I'd rather see this interface supported in kernel
and not just in CUSE.

For using the single hardware TPM, we have the passthrough type. Its usage is limited.

CUSE extends the TPM character device interface with ioctl's. Behind the
character device we can implement a TPM 1.2 and a TPM 2. Both TPM
implementations require large amounts of code, which I don't think should go
into the Linux kernel itself. So I don't know who would implement this
interface inside the Linux kernel.

   Stefan


BTW I'm not talking about the code - I'm talking about the interfaces here.

One way would be to add support for these interfaces in the kernel.

Maybe others can be replaced with QMP events so management
can take the necessary action.

As long as this is not the case, I suspect this code will have to stay
out of tree :( We can't depend on interfaces provided solely by cuse
devices on github.

Why is that? I know that the existing ioctls cannot be modified anymore once
QEMU accepts the code. So I don't understand it. Some of the ioctls are only
useful when emulating a hardware device,

Maybe they can be replaced with QMP events?
These could be emitted unconditionally, and ignored
by management in passthrough case.


so there's no need for them in a
kernel interface unless one was to put the vTPM code into the Linux kernel, but
I don't see that this is happening. What is better about a kernel interface
versus one implemented by a project on github assuming that the existing ioctls
are stable? What is the real reason here?

Stefan


That someone went to the trouble of reviewing the interface for
long-term maintainability, portability etc. That it obeys some existing
standards for API use, coding style etc and will continue to.


The same applies to the libtpms and swtpm projects as well, I suppose. 
If someone wants to join them, let me know.


As stated, we will keep the existing ioctls stable once integrated, but 
will adapt where necessary before that.




In other words, the kernel is already a dependency for QEMU.


I don't see vTPM going into the kernel, at least I don't know of anyone 
trying to do that.


   Stefan





--
MST






Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM

2016-01-20 Thread Stefan Berger

On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:

On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:

"Daniel P. Berrange"  wrote on 01/20/2016 10:00:41
AM:



process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMU's, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem

I tried having it linked in QEMU before. It was basically rejected.

I remember an impl you did many years(?) ago now, but don't recall
the results of the discussion. Can you elaborate on why it was
rejected as an approach ? It just doesn't make much sense to me
to have to create an external daemon, a CUSE device and comms
protocol, simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO and adding way
more complexity and thus scope for failure


The TPM 1.2 implementation adds 10s of thousands of lines of code. The 
TPM 2 implementation is in the same range. The concern was having this 
code right in the QEMU address space. It's big, it can have bugs, so we 
don't want it to harm QEMU. So we now put this into an external process 
implemented by the swtpm project that builds on libtpms which provides 
TPM 1.2 functionality (to be extended with TPM 2). We cannot call APIs 
of libtpms directly anymore, so we need a control channel, which is 
implemented through ioctls on the CUSE device.


   Stefan



Regards,
Daniel