Hi Vijay,
On 04/05/2015 16:19, Vijay Kilari wrote:
>> How did you implement the interrupt mode? Could it be improved?
> 1) In the physical ITS driver, an its_device is created with devID
> 00:00.1 and 256 MSI-X reserved; it is named completion_dev and is
> global.
>> That's a lot of MSI-X reserved... Can't you use only one per domain?
>> Hmmm... I meant for all the domains, not "per domain".
> The complexity with one IRQ for all domains is that when a completion
> interrupt comes, it is difficult to find out which vITS/domain the ITS
> command came from.
While reserving a single devID sounds feasible on all future platforms,
allocating 256 MSI-X sounds more difficult: you assume that any board
will have at least 256 MSI-X free.

Also, this is not scalable. How do you plan to handle more than 256
domains? By increasing the number of reserved MSI-X?

I'm not asking you to implement the latter now... but if increasing the
number of supported domains means rewriting all the completion code and
maybe the vITS, then you should ask yourself whether the current
approach is really worth taking.
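To make the trade-off concrete, here is a minimal sketch (not Xen code;
completion_dev, the struct and function names are all invented) of why
one MSI-X event per domain makes completion demultiplexing trivial: the
event number received on the completion interrupt identifies the
vITS/domain directly, at the cost of one reserved event per domain.

```c
#include <assert.h>

/* Illustrative model of the scheme under discussion: one global
 * completion its_device whose MSI-X events are handed out one per
 * domain, so the event id on a completion interrupt identifies the
 * domain without any search. */

#define MAX_COMPLETION_EVENTS 256   /* the 256 reserved MSI-X above */

struct completion_dev {
    int event_owner[MAX_COMPLETION_EVENTS]; /* domid per event, -1 = free */
};

static void completion_dev_init(struct completion_dev *dev)
{
    for (int i = 0; i < MAX_COMPLETION_EVENTS; i++)
        dev->event_owner[i] = -1;
}

/* Reserve one completion event for a domain; returns the event id,
 * or -1 once all events are taken (the scalability limit above). */
static int completion_event_alloc(struct completion_dev *dev, int domid)
{
    for (int i = 0; i < MAX_COMPLETION_EVENTS; i++) {
        if (dev->event_owner[i] < 0) {
            dev->event_owner[i] = domid;
            return i;
        }
    }
    return -1;
}

/* In the completion interrupt handler, the event id alone tells us
 * which domain's vITS command finished -- no lookup over domains. */
static int completion_event_to_domain(const struct completion_dev *dev,
                                      int event)
{
    return dev->event_owner[event];
}
```

With a single shared event instead, completion_event_to_domain has no
equivalent: the handler would have to scan all vITSes with in-flight
commands, which is the complexity Vijay describes above.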
[..]
> I am adding one INT per command. This can be improved by adding one
> INT command for all the pending commands. The existing Linux driver
> sends 2 commands at a time.
>> You should not assume that other OSes will send 2 commands at a
>> time... It could be more or fewer.
>> Also, having an INT per command is rather slow. One INT command per
>> batch would improve the boot time.
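The per-batch INT idea could look roughly like the sketch below. The
32-byte command layout and the 0x03 INT opcode follow the GICv3 ITS
specification, but its_queue_batch and the simplified queue handling
(no wrapping, no locking) are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GITS_CMD_INT 0x03

struct its_cmd {
    uint64_t raw[4];            /* one 32-byte ITS command */
};

/* Copy a batch of already-translated commands into the physical queue,
 * then append a single INT command for the whole batch instead of one
 * INT per command.  Returns the new write index. */
static unsigned int its_queue_batch(struct its_cmd *queue, unsigned int widx,
                                    const struct its_cmd *batch, unsigned int n,
                                    uint32_t devid, uint32_t eventid)
{
    memcpy(&queue[widx], batch, n * sizeof(*batch));
    widx += n;

    queue[widx].raw[0] = GITS_CMD_INT | ((uint64_t)devid << 32); /* opcode + DeviceID */
    queue[widx].raw[1] = eventid;                                /* EventID */
    queue[widx].raw[2] = 0;
    queue[widx].raw[3] = 0;
    return widx + 1;
}
```

Batched this way, a guest that queues N commands before updating
CWRITER costs one completion interrupt instead of N.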
> We cannot limit the number of commands sent at a time. We have to send
> all the pending commands in the vITS queue when we trap on CWRITER;
> otherwise we have to check for pending commands on the completion
> interrupt and translate and send them in interrupt context, which
> complicates things and adds more delay.
If we don't limit the number of commands sent, we would allow a domain
to flood the command queue. Other domains then wouldn't be able to send
commands and would likely time out and crash. This is one possible
security issue among many others.

Nobody likes security issues; they impact both the end user and the
project. Please keep this security concern in mind before performance.
Performance is usually easier to address later.
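One hedged way to reconcile the two constraints is to bound the work
done per CWRITER trap and leave the remainder in the vITS queue for a
later pass. The struct, function name and budget value below are
invented for illustration, not a proposal of exact Xen code:

```c
#include <assert.h>

/* Forward at most CMDS_PER_PASS commands per CWRITER emulation pass,
 * so no single domain can flood the shared physical queue. */

#define CMDS_PER_PASS 16

struct vits {
    unsigned int pending;   /* guest commands not yet forwarded */
};

/* The commands beyond the budget stay queued in the vITS and are
 * picked up on a later pass (e.g. the next trap or completion
 * interrupt).  Returns the number forwarded this time. */
static unsigned int vits_process_pass(struct vits *v)
{
    unsigned int n = v->pending < CMDS_PER_PASS ? v->pending : CMDS_PER_PASS;

    /* ...translate and enqueue n commands on the physical queue here... */

    v->pending -= n;
    return n;
}
```

The design question is then only who schedules the later passes, not
whether a guest can monopolize the physical queue.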
As the vITS is only used for interrupt management (mapping, unmapping),
it's not used in hot paths such as receiving an interrupt. So we don't
care if it's "slow" from the guest point of view, as long as we emulate
the behavior correctly without impacting the other domains.
Also, what happens if the physical queue is full? You need a way to
inject new commands later.
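For the full-queue case, the usual ring arithmetic applies. In the
sketch below, CREADR (hardware read pointer) and CWRITER (software
write pointer) follow their GICv3 semantics, expressed in command-sized
slots; the slot count is illustrative:

```c
#include <assert.h>

/* One slot is kept empty to distinguish a full ring from an empty one. */

#define QUEUE_SLOTS 128   /* e.g. one 4KB page of 32-byte commands */

static unsigned int its_queue_free(unsigned int creadr, unsigned int cwriter)
{
    return (creadr + QUEUE_SLOTS - cwriter - 1) % QUEUE_SLOTS;
}

/* If there is no room for the batch, the caller must defer it and
 * retry once the ITS has advanced CREADR -- this is the "inject new
 * commands later" path. */
static int its_queue_has_room(unsigned int creadr, unsigned int cwriter,
                              unsigned int n)
{
    return its_queue_free(creadr, cwriter) >= n;
}
```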
Overall, I'm aware that the command queue emulation is huge. There are
lots of things to take into account: security, performance, concurrency
problems...
As you did before in this thread, I suggest you write down all the
possible solutions and their impacts (security, theoretical
performance...). Then we can discuss on the ML (or in an IRC/phone
meeting) and agree on a solution that satisfies everyone.

In my experience, doing this kind of thing can speed up acceptance of
the series, because everyone can then focus on the implementation when
you send a new patch.
One good example is x86 PML support. Intel sent a design doc a couple
of months ago [1]. Developers discussed the overall design, and once a
common agreement was reached, they sent the patch series.

The same would have been very helpful for understanding this series.
TBH, I spent most of my time trying to understand what your design was
and how everything works together. With a 4000-line series split into
22 patches, that's a rather big task.
That said, I'm not necessarily asking for exactly the same thing. It
could be part of the cover letter and/or commit messages.
Regards,
[1] http://www.gossamer-threads.com/lists/xen/devel/366537
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel