On Fri, 2011-05-27 at 12:54 +0200, Ingo Molnar wrote:
> * Sasha Levin <levinsasha...@gmail.com> wrote:
> 
> > ioeventfd is a way provided by KVM to receive notifications about
> > reads and writes to PIO and MMIO areas within the guest.
> > 
> > Such notifications are useful if all we need to know is that
> > a specific area of memory has been changed, and we don't need
> > a heavyweight exit to happen.
> > 
> > The implementation uses epoll to scale to a large number of ioeventfds.
> 
> Nice! :-)
> 
> > +struct ioevent {
> > +   u64                     start;
> > +   u8                      len;
> 
> If that's an mmio address then it might be worth naming it 
> ioevent->mmio_addr, ioevent->mmio_end.
> 
> > +   void                    (*ioevent_callback_fn)(struct kvm *kvm, void *ptr);
> 
> Please only 'fn', we already know this is an ioevent.
> 
> > +   struct kvm              *kvm;
> > +   void                    *ptr;
> 
> what is the purpose of the pointer?
> 
> AFAICS it is the private data of the callback function. In such cases 
> please name them in a harmonizing fashion, such as:
> 
>       void                    (*fn)(struct kvm *kvm, void *data);
>       struct kvm              *fn_kvm;
>       void                    *fn_data;
> 
> Also, will tools/kvm/ ever run with multiple 'struct kvm' instances 
> present?

I'm not sure. We pass it around to all our functions instead of using a
global, so I assumed we might have several guests under one process.

> A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way 
> too unspecific, it tells us nothing. What is a 'kvm'?
> 
> A much better name would be 'struct machine *machine', hm? Even if 
> everyone agrees this would be a separate patch, obviously.
> 
> Also, can ioevent->kvm *ever* be different from the kvm that the 
> mmio-event receiving vcpu thread is associated with? If not then the 
> fn_kvm field is really superfluous - we get the machine from the mmio 
> handler and can pass it down to the callback function.
> 
> > +   int                     event_fd;
> 
> 'fd'
> 
> > +   u64                     datamatch;
> 
> what's a datamatch? 'cookie'? 'key'?

The kernel-side ioeventfd compares the value written to the PIO port
against 'datamatch' and signals the event only if the two values match.
It's named this way in the kernel code, so I wanted to stay consistent.
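For context, the kernel-side registration that consumes this field looks
roughly like the sketch below. This is only an illustration: the helper name
ioeventfd__add_pio() and the specific port/value are made up, but struct
kvm_ioeventfd, the KVM_IOEVENTFD ioctl and the flags are the real kernel API:

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch (hypothetical helper, not from the patch): ask KVM to signal
 * 'efd' only when the 16-bit value 'val' is written to PIO port 'port'.
 * Returns 0 on success, -1 on error, like ioctl().
 */
static int ioeventfd__add_pio(int vm_fd, uint16_t port, uint16_t val, int efd)
{
	struct kvm_ioeventfd kvm_ioevent = {
		.addr		= port,
		.len		= sizeof(uint16_t),
		.datamatch	= val,
		.fd		= efd,
		.flags		= KVM_IOEVENTFD_FLAG_PIO |
				  KVM_IOEVENTFD_FLAG_DATAMATCH,
	};

	return ioctl(vm_fd, KVM_IOEVENTFD, &kvm_ioevent);
}
```

With KVM_IOEVENTFD_FLAG_DATAMATCH set, a write of any other value to that
port still takes the normal heavyweight exit path instead of signalling the
eventfd.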

> 
> > +
> > +   struct list_head        list_used;
> 
> just 'list' is enough i think - it's obvious that ioevent->list is a 
> list of ioevents, right?
> 

We might keep a list of free ioevents if we ever decide to scale beyond
the maximum of 20 events we currently allow, so I'd rather keep the list
names specific.

> > +   kvm_ioevent = (struct kvm_ioeventfd) {
> > +           .addr                   = ioevent->start,
> > +           .len                    = ioevent->len,
> 
> Do you see how confusing the start/len naming is? Here you are 
> assigning a 'start' field to an 'addr' and the reviewer is kept 
> wondering whether that's right. If it was ->mmio_addr then it would 
> be a lot more obvious what is going on.

Yes, I'll rename them to addr/len to match the KVM naming.

> > +static void *ioeventfd__thread(void *param)
> > +{
> > +   for (;;) {
> > +           int nfds, i;
> > +
> > +           nfds = epoll_wait(epoll_fd, events, IOEVENTFD_MAX_EVENTS, -1);
> > +           for (i = 0; i < nfds; i++) {
> > +                   u64 tmp;
> > +                   struct ioevent *ioevent;
> > +
> > +                   ioevent = events[i].data.ptr;
> > +
> > +                   if (read(ioevent->event_fd, &tmp, sizeof(tmp)) < 0)
> > +                           die("Failed reading event");
> > +
> > +                   ioevent->ioevent_callback_fn(ioevent->kvm, ioevent->ptr);
> > +           }
> > +   }
> > +
> > +   return NULL;
> > +}
> > +
> > +void ioeventfd__start(void)
> > +{
> > +   pthread_t thread;
> > +
> > +   pthread_create(&thread, NULL, ioeventfd__thread, NULL);
> > +}
> 
> Shouldn't this use the thread pool, so that we know about each and 
> every worker thread we have started, in one central place?
> 
Our thread pool currently responds to events - it runs a callback when it
receives a notification to do so. It doesn't manage threads that have to
run all the time, as in this case.

That said, once we return from epoll_wait() here we do only minimal work
before handing the I/O event off to the thread pool.
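The epoll side of that split can be exercised entirely in userspace, with no
KVM involved. A minimal self-contained sketch (run_demo() is a made-up name;
it returns the counter drained from the eventfd, which a read resets to zero):

```c
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* One eventfd registered with epoll, signalled once, then drained by
 * the "worker" side - the same pattern ioeventfd__thread() uses. */
static uint64_t run_demo(void)
{
	struct epoll_event ev = { .events = EPOLLIN };
	struct epoll_event out;
	uint64_t val = 1;
	int epfd = epoll_create1(0);
	int efd  = eventfd(0, 0);

	ev.data.fd = efd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);

	/* What KVM does on a matching write: add to the eventfd counter */
	write(efd, &val, sizeof(val));

	/* Worker side: wait, then read to drain and reset the counter */
	epoll_wait(epfd, &out, 1, -1);
	read(out.data.fd, &val, sizeof(val));

	close(efd);
	close(epfd);
	return val;
}
```

Multiple writes arriving before the read coalesce into one wakeup with the
summed counter, which is why the read into tmp in ioeventfd__thread() is
needed before dispatching the callback.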


> (This might have relevance, see the big-reader-lock mail i sent 
> earlier today.)
> 
> Thanks,
> 
>       Ingo

-- 

Sasha.
