Hi Alessio,

[Sorry for double post]

> On 07 Jul 2015, at 10:08, Alessio Igor Bogani <alessioigorbog...@gmail.com> wrote:
> 
> Hi Dmitry,
> 
> On 6 July 2015 at 19:24, Dmitry Kalinkin <dmitry.kalin...@gmail.com> wrote:
> [...]
> > I'm not a VME expert, but it seems that VME windows are a quite limited
> > resource no matter how you allocate them. Theoretically we could put up
> > to 32 different boards in a single crate, so there won't be enough
> > windows for each driver to allocate. That said, there is no way around
> > this when putting together a really heterogeneous VME system. To
> > overcome such a problem, one could develop a different kernel API that
> > would not provide windows to the drivers, but would handle reads and
> > writes by reconfiguring windows on the fly, which in turn would
> > introduce more latency.
> 
> In my humble opinion using user-space drivers (as a workaround for the
> limited windows/images) introduces more latency than letting the VME
> driver dynamically configure windows/images. After all, VME systems
> usually aren't very dynamic by nature (who likes to continuously insert
> and remove a board which requires an insertion force between 20 and
> 50 kg?) and are instead heavily used in critical contexts, often in a
> non-stop way.
Userspace drivers are not exactly doing this differently. It's just that you
can use that interface to quickly build more flexible site-specific software
that knows about the whole VME system. There, having low-level access to
windows works well (there is a 20+ year history of such drivers). But if we
want reusable drivers, especially in the kernel, that will require some more
effort in building a driver stack.

The API I had in mind would have only vme_master_read and vme_master_write
variants that take absolute addresses (not relative to any window). These
access functions would then try to reuse any window that is already able to
serve the request, or wait for a free window and reconfigure it for the
needs of the request. After use, the window would be returned to the window
pool.
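
Roughly, a read through such an interface could look like the sketch below.
To be clear, vme_window_get(), vme_window_base() and vme_window_put() are
hypothetical pool helpers that don't exist today; only vme_master_read() is
part of the current API:

ssize_t vme_bus_read(struct vme_dev *vdev, u32 aspace, u32 cycle,
                     u32 dwidth, unsigned long long addr, void *buf,
                     size_t count)
{
        struct vme_resource *win;
        ssize_t ret;

        /* Hypothetical: find a window that already covers
         * [addr, addr + count) with matching modifiers, or sleep
         * until one is free and reprogram it. */
        win = vme_window_get(vdev, aspace, cycle, dwidth, addr, count);
        if (IS_ERR(win))
                return PTR_ERR(win);

        /* The existing vme_master_read() takes an offset relative
         * to the window base. */
        ret = vme_master_read(win, buf, count,
                              addr - vme_window_base(win));

        vme_window_put(win);    /* hypothetical: back to the pool */
        return ret;
}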
Another way to implement these would be to use DMA for everything, since DMA
doesn't have the limitations that the windows have.
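
The DMA building blocks for that already exist in the in-kernel API
(vme_dma_request() and friends). A rough sketch of a blocking read from an
absolute VME address, with abbreviated error handling (vme_dma_read_abs is
a made-up name):

static int vme_dma_read_abs(struct vme_dev *vdev, unsigned long long addr,
                            dma_addr_t buf_dma, size_t count)
{
        struct vme_resource *chan;
        struct vme_dma_list *list;
        struct vme_dma_attr *src, *dst;
        int ret = -ENOMEM;

        chan = vme_dma_request(vdev, VME_DMA_VME_TO_MEM);
        if (!chan)
                return -ENODEV;

        list = vme_new_dma_list(chan);
        src = vme_dma_vme_attribute(addr, VME_A32, VME_SCT, VME_D32);
        dst = vme_dma_pci_attribute(buf_dma);
        if (list && src && dst) {
                ret = vme_dma_list_add(list, src, dst, count);
                if (!ret)
                        ret = vme_dma_list_exec(list); /* blocks */
        }

        if (src)
                vme_dma_free_attribute(src);
        if (dst)
                vme_dma_free_attribute(dst);
        if (list)
                vme_dma_list_free(list);
        vme_dma_free(chan);
        return ret;
}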
> 
> In fact this is a big obstacle to adoption of this VME stack (at least
> for us). We use VME a lot and we care about latency as well, so we use
> only kernel-space drivers for our VME boards, but unfortunately the VME
> stack only lets us link a single board with a single window/image (so at
> most 8 boards on a tsi148). That said, the stack has proven to be rock
> solid.
The current VME stack links windows not to boards, but to device drivers. A
driver could potentially minimise window usage within its own scope (any
sort of window reuse, like mapping the whole of A16 once and using it for
all boards), but this won't work across multiple drivers. Even if all of
your drivers are economical with windows, they will still need some number
of windows each. Not that we have that many kernel drivers...
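
For instance, a driver handling several A16 boards could map all 64 KiB of
A16 through one window and address each board by offset. A sketch using the
current API (names and offsets are illustrative):

static struct vme_resource *a16_win;    /* shared by all boards */

static int map_whole_a16(struct vme_dev *vdev)
{
        a16_win = vme_master_request(vdev, VME_A16, VME_SCT, VME_D16);
        if (!a16_win)
                return -ENODEV;

        /* One window for the entire 64 KiB A16 space. */
        return vme_master_set(a16_win, 1, 0, 0x10000,
                              VME_A16, VME_SCT, VME_D16);
}

/* Each board is then reached by its base offset inside that window. */
static ssize_t read_board_reg(unsigned long long board_base,
                              unsigned long long reg, u16 *val)
{
        return vme_master_read(a16_win, val, sizeof(*val),
                               board_base + reg);
}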
> 
> Until now we have used a modified version of the old vmelinux.org stack
> for sharing windows/images between all (of our) VME kernel drivers and we
> would be very happy to see something similar in vanilla (at least
> coalescing two adjacent addresses with the same modifiers).
>  
> > Those who need such an API are welcome to develop it :)
> 
> I would be glad to try it if the maintainer is willing to accept this
> type of change.
> 
> Ciao,
> Alessio
> 
