Re: PCI arbiter memory mapping

2021-12-12 Thread Joan Lledó
Hi,

These patches implement device memory mapping in the arbiter by using the new
gnumach RPC.





Re: PCI arbiter memory mapping

2021-08-18 Thread Sergey Bugaev
On Wed, Aug 18, 2021 at 8:43 PM Joan Lledó  wrote:
> On 18/8/21 at 0:02, Sergey Bugaev wrote:
> > To me it sounds like libpciaccess should have a Hurd-specific API
> > addition that would let the user get the memory object
>
> That's a solution and can be done. But I'd like to know more about
> vm_region first. It seems it can return the object, and I don't see why
> it is a security problem to allow a task to retrieve objects belonging
> to itself.

Basically, because it allowed the task to overcome any limitations
imposed on it by max_protection (and offset/size). If you have page
2 of a memory object mapped into your address space read-only, that
shouldn't let you write to page 5.
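
To make that concrete, here is a rough sketch of what handing out the real
object port would permit; `obj' is hypothetical, since no current interface
gives it to an unprivileged task:

#include <mach.h>

/* Hypothetical: `obj' stands for the real memory object port backing a
   region of which only page 2 is mapped read-only in this task.  Nothing
   hands this port out today; the sketch only shows why it must not. */
static void
remap_everything_writable (mach_port_t obj)
{
  vm_address_t new_addr = 0;

  /* Map the whole object again, from page 0, read-write, ignoring the
     max_protection and offset/size of the original mapping. */
  vm_map (mach_task_self (), &new_addr, 8 * vm_page_size,
          0 /* mask */, TRUE /* anywhere */,
          obj, 0 /* offset: page 0, not the page we were given */,
          FALSE /* copy */,
          VM_PROT_READ | VM_PROT_WRITE,   /* cur_protection */
          VM_PROT_READ | VM_PROT_WRITE,   /* max_protection */
          VM_INHERIT_NONE);
  /* On success, pages 0..7, including page 5, are now writable. */
}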

Sergey



Re: PCI arbiter memory mapping

2021-08-18 Thread Sergey Bugaev
On Wed, Aug 18, 2021 at 8:34 PM Joan Lledó  wrote:
> On 18/8/21 at 0:13, Sergey Bugaev wrote:
> > you can no longer get the underlying memory object with vm_region ()
>
> How so? Reading the docs, I understood it does:
>
> https://www.gnu.org/software/hurd/gnumach-doc/Memory-Attributes.html
>
> "The port object_name identifies the memory object associated with this
> region"

Yes; the docs could have been clearer about this — and I was myself
confused about this not long ago. What it gives you is a _memory
object name port_ which "identifies" the memory object, in the sense
that you can get name ports for two regions (even from different
tasks) and they'll compare equal if it's the same memory object mapped
into both. And that's about the only thing you can do with the name
port. It's *not* the actual port to the underlying memory object.
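
For illustration, about the only legitimate use of the name port is an
identity check like the following sketch (it assumes the gnumach
vm_region () signature; error handling omitted):

#include <mach.h>

/* Do the regions at addr1 in task1 and addr2 in task2 share the same
   backing memory object?  Only the name ports are needed for this. */
static boolean_t
same_backing_object (task_t task1, vm_address_t addr1,
                     task_t task2, vm_address_t addr2)
{
  vm_address_t a1 = addr1, a2 = addr2;
  vm_size_t size;
  vm_prot_t prot, max_prot;
  vm_inherit_t inherit;
  boolean_t shared, same;
  mach_port_t name1, name2;   /* memory object *name* ports */
  vm_offset_t offset;

  vm_region (task1, &a1, &size, &prot, &max_prot, &inherit,
             &shared, &name1, &offset);
  vm_region (task2, &a2, &size, &prot, &max_prot, &inherit,
             &shared, &name2, &offset);

  /* Equal ports mean the same memory object is mapped in both places;
     that is all a name port is good for.  It is not the object itself. */
  same = (name1 == name2);

  mach_port_deallocate (mach_task_self (), name1);
  mach_port_deallocate (mach_task_self (), name2);
  return same;
}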

> > (Well in fact what actually changed is that gnumach at some point
> > allowed the use of "memory object name ports" in vm_map (), and now
> > once again doesn't.
>
> vm_map does receive the memory object as a parameter:
>
> https://www.gnu.org/software/hurd/gnumach-doc/Mapping-Memory-Objects.html
>
> "memory_object is the port that represents the memory object: used by
> user tasks in vm_map"

Previously, as a GNU Mach extension, you could also pass the memory
object *name* port (one you get from vm_region ()), instead of the
memory object port. But not anymore.

> > It never allowed you to get the actual memory
> > object.)
>
> vm_map receives a memory object; it doesn't return one.

Yes. What I'm saying is that vm_region () gives you the name port, which
previously could also be passed to vm_map () much like the real memory
object port (this was allowed so that mremap () could be implemented).
vm_region () never gave out the actual memory object port.
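
So, under the old extension, a simplified mremap-style move (a sketch, not
the actual glibc code) could be written as:

#include <mach.h>

/* Move a mapping, relying on the removed gnumach extension that let
   vm_map () accept the memory object *name* port in place of the real
   memory object port.  This no longer works. */
static kern_return_t
old_style_mremap (vm_address_t old_addr, vm_size_t size,
                  vm_address_t *new_addr)
{
  vm_address_t a = old_addr;
  vm_size_t sz;
  vm_prot_t prot, max_prot;
  vm_inherit_t inherit;
  boolean_t shared;
  mach_port_t name;
  vm_offset_t offset;
  kern_return_t err;

  err = vm_region (mach_task_self (), &a, &sz, &prot, &max_prot,
                   &inherit, &shared, &name, &offset);
  if (err)
    return err;

  *new_addr = 0;
  err = vm_map (mach_task_self (), new_addr, size, 0, TRUE,
                name /* accepted as if it were the object port */,
                offset, FALSE, prot, max_prot, inherit);
  if (!err)
    vm_deallocate (mach_task_self (), old_addr, size);
  mach_port_deallocate (mach_task_self (), name);
  return err;
}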

Sergey



Re: PCI arbiter memory mapping

2021-08-18 Thread Joan Lledó




On 18/8/21 at 0:02, Sergey Bugaev wrote:

To me it sounds like libpciaccess should have a Hurd-specific API
addition that would let the user get the memory object


That's a solution and can be done. But I'd like to know more about
vm_region first. It seems it can return the object, and I don't see why
it is a security problem to allow a task to retrieve objects belonging to
itself.




Re: PCI arbiter memory mapping

2021-08-18 Thread Joan Lledó




On 18/8/21 at 0:13, Sergey Bugaev wrote:

you can no longer get the underlying memory object with vm_region ()


How so? Reading the docs, I understood it does:

https://www.gnu.org/software/hurd/gnumach-doc/Memory-Attributes.html

"The port object_name identifies the memory object associated with this 
region"



(Well in fact what actually changed is that gnumach at some point
allowed the use of "memory object name ports" in vm_map (), and now
once again doesn't.


vm_map does receive the memory object as a parameter:

https://www.gnu.org/software/hurd/gnumach-doc/Mapping-Memory-Objects.html

"memory_object is the port that represents the memory object: used by 
user tasks in vm_map"



It never allowed you to get the actual memory
object.)


vm_map receives a memory object; it doesn't return one.



Re: PCI arbiter memory mapping

2021-08-17 Thread Sergey Bugaev
On Tue, Aug 17, 2021 at 9:14 PM Joan Lledó  wrote:
> > The underlying question is getting the memory object of a given memory
> > range, like vm_region does.
>
> Yes, I see that could be useful. Is vm_region not working for proxies?
>
>  > We need to be careful since we don't want any process to be able to
>  > get the memory object of any other process
>
> Is that not being checked yet? Can I call vm_region to get another
> task's memory object now?

It doesn't matter whether it's another task or not, or whether it's a
proxy or not (in fact it will never be a proxy, but that doesn't
matter): you can no longer get the underlying memory object with
vm_region () — for, uh, obvious security reasons. It was a gnumach
extension anyway — vanilla Mach doesn't support that.

(Well in fact what actually changed is that gnumach at some point
allowed the use of "memory object name ports" in vm_map (), and now
once again doesn't. It never allowed you to get the actual memory
object.)

Sergey



Re: PCI arbiter memory mapping

2021-08-17 Thread Sergey Bugaev
On Tue, Aug 17, 2021 at 12:38 AM Samuel Thibault
 wrote:
> The root pci-arbiter uses libpciaccess' x86 backend to access PCI

On Tue, Aug 17, 2021 at 9:47 PM Joan Lledó  wrote:
> Yes, and the arbiter can play two roles: root arbiter, which uses the x86
> module in libpciaccess; and nested arbiter, which uses the hurd module in
> libpciaccess.
>
> > The hardware devices connected via PCI are available (to the PCI arbiter)
> > as Mach devices
>
> Actually, the devices are available to the arbiter as libpciaccess devices

Thank you both for explaining, this is the part that I was missing:
that the arbiter itself uses libpciaccess, and that it cannot map the
devices directly; it has to map /dev/mem.

To me it sounds like libpciaccess should have a Hurd-specific API
addition that would let the user get the memory object backing the
mapping created by device_map_region (). I.e., device_map_region () is
a cross-platform API that maps the device memory into your address
space (right?), but on the Hurd there'd also be a way to actually get
the memory object it would map (and map it yourself if you so choose,
or do something else).

It would be the job of that libpciaccess API to make this object have
the right offset and everything, so that the caller wouldn't have to
worry about that. If I understand this right, its Hurd backend already
gets the memory object with the right offset and size, and would
return that directly; while the x86 backend would either have to use
the unimplemented-as-of-now offset parameter in device_map (), or
create and return an appropriate proxy from that API.
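
A possible shape for such an addition, with purely hypothetical names (the
real API would be whatever fits libpciaccess):

#include <mach.h>
#include <pciaccess.h>

/* Hypothetical Hurd-only libpciaccess extension: hand out the memory
   object backing a BAR, already adjusted to the right offset and size,
   so the caller (e.g. the PCI arbiter) can wrap it in a proxy or map it
   itself.  The name and signature are made up for this sketch. */
extern int pci_device_hurd_get_region_memobj (struct pci_device *dev,
                                              unsigned region,
                                              int write,
                                              mach_port_t *memobj);

/* Usage sketch in the arbiter: the Hurd backend would return the object
   it got from io_map (), the x86 backend the one from device_map ()
   (or a proxy fixing up the offset); the caller never sees which. */
static int
get_region_pager (struct pci_device *dev, unsigned region,
                  mach_port_t *pager)
{
  return pci_device_hurd_get_region_memobj (dev, region, 1, pager);
}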

In memory_object_create_proxy (), the kernel would take care of
short-circuiting nested proxy creation to make that a non-issue. This
will allow netfs_get_filemap (VM_PROT_READ) to create another proxy
just for enforcing read-only access without worrying that the object
might already be a proxy.
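
In-kernel, that short-circuiting could look roughly like the sketch below.
This is not gnumach code: `make_proxy' is an invented helper, the kernel
headers are elided, and only the single-range case is shown; only
memory_object_proxy_lookup () is the existing lookup used by vm_map ().

/* Kernel-side sketch: collapse nested proxies at creation time so the
   intermediate proxy can be deallocated as soon as its last send right
   goes away. */
kern_return_t
create_proxy_shortcircuit (ipc_port_t object, vm_prot_t max_protection,
                           vm_offset_t start, vm_offset_t len,
                           ipc_port_t *proxy_out)
{
  ipc_port_t real_object;
  vm_prot_t old_max;

  if (memory_object_proxy_lookup (object, &real_object, &old_max)
      == KERN_SUCCESS)
    {
      /* `object' is itself a proxy: build the new proxy directly on top
         of the object it wraps, intersecting the protections and adding
         the old proxy's own start to the requested one. */
      max_protection &= old_max;
      object = real_object;
      /* start += old_proxy_start;  (per the suggestion quoted in the
         other subthread) */
    }

  /* make_proxy () stands for the existing proxy-creation path. */
  return make_proxy (object, max_protection, start, len, proxy_out);
}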

Does that sound remotely sensible? :)
Please keep in mind that while (I think) I understand Mach VM, I have
very little idea about PCI. I'm (obviously) not in a position to
decide what's best for libpciaccess and the PCI arbiter.

Sergey



Re: PCI arbiter memory mapping

2021-08-17 Thread Joan Lledó

Hi,

I'm sorry I can't follow your discussion, I only know about the small 
part of the kernel I worked on.


On 16/8/21 at 23:07, Sergey Bugaev wrote:

I don't think I understand enough about the situation. It would help
if you or Joan were to kindly give me some more context :)


Basically, libpciaccess gets a memory object at [1]. And later I need it 
in the arbiter at [2], to create a proxy over it.


To do that, the current code stores it in a structure I created called 
pci_user_data [3], and then it reads that structure from the arbiter [4].


> What's the issue you're trying to solve?

We are looking for another way to get the pager at [4] and get rid of 
that structure.



As I understand it, there's the PCI arbiter, which is a translator
that arbitrates access to PCI, which is a hardware bus that various
devices can be connected to.


Yes, and the arbiter can play two roles: root arbiter, which uses the x86
module in libpciaccess; and nested arbiter, which uses the hurd module in
libpciaccess.



The hardware devices connected via PCI are available (to the PCI arbiter)
as Mach devices


Actually, the devices are available to the arbiter as libpciaccess devices


it's possible to use device_map () and then vm_map () to access the
device memory.


Yes, the root arbiter uses device_map() on "mem" to get the memory object;
nested arbiters use io_map() on the region files exposed by the root
arbiter to get the memory object.


Both pass the memory object to vm_map() to map the range.
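
Roughly, the two paths are as follows (a sketch; the MIG-generated header
names, exact prototypes and error handling are approximated):

#include <mach.h>
#include <hurd.h>
#include <hurd/io.h>        /* io_map () user stub (assumed header name) */
#include <device/device.h>  /* device_map () user stub (assumed header name) */

/* Root arbiter: map a BAR through the "mem" Mach device port. */
static kern_return_t
map_bar_root (mach_port_t devmem, vm_offset_t bar_base, vm_size_t bar_size,
              vm_address_t *addr)
{
  mach_port_t pager;
  kern_return_t err;

  /* device_map () does not implement its offset yet, hence the
     base + size length and the non-zero offset given to vm_map (). */
  err = device_map (devmem, VM_PROT_READ | VM_PROT_WRITE,
                    0, bar_base + bar_size, &pager, 0);
  if (err)
    return err;
  return vm_map (mach_task_self (), addr, bar_size, 0, TRUE, pager,
                 bar_base, FALSE, VM_PROT_READ | VM_PROT_WRITE,
                 VM_PROT_READ | VM_PROT_WRITE, VM_INHERIT_SHARE);
}

/* Nested arbiter: map the regionN file exposed by the root arbiter. */
static kern_return_t
map_bar_nested (file_t region_file, vm_size_t bar_size, vm_address_t *addr)
{
  mach_port_t robj, wobj;
  kern_return_t err;

  err = io_map (region_file, &robj, &wobj);
  if (err)
    return err;
  /* We map the writable object; drop the read one so it doesn't leak.
     (Deallocating wobj after vm_map () is elided here.) */
  if (robj != MACH_PORT_NULL)
    mach_port_deallocate (mach_task_self (), robj);
  return vm_map (mach_task_self (), addr, bar_size, 0, TRUE, wobj,
                 0, FALSE, VM_PROT_READ | VM_PROT_WRITE,
                 VM_PROT_READ | VM_PROT_WRITE, VM_INHERIT_SHARE);
}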


Then there's libpciaccess whose Hurd backend uses the
files exported by the PCI arbiter to get access to the PCI,


Only nested arbiters; as I said, the root arbiter uses the x86 backend.


Naturally its user can request read-only or
read-write mapping, but the PCI arbiter may decide to only return a
read-only memory object (a proxy to the device pager), in which case
libpciaccess should deallocate the port and return EPERM, or the PCI
arbiter may return the real device pager.


Yes, but that's not really relevant to our problem; I was talking about
a bug I found.




[1] 
https://gitlab.freedesktop.org/jlledom/libpciaccess/-/blob/hurd-device-map/src/x86_pci.c#L275
[2] 
http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/pci-arbiter/netfs_impl.c?h=jlledom-pci-memory-map#n613
[3] 
https://gitlab.freedesktop.org/jlledom/libpciaccess/-/blob/hurd-device-map/src/x86_pci.c#L287
[4] 
http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/pci-arbiter/netfs_impl.c?h=jlledom-pci-memory-map#n605




Re: PCI arbiter memory mapping

2021-08-17 Thread Joan Lledó

Hi,

On 16/8/21 at 20:16, Samuel Thibault wrote:

Ok, but I meant that the device_map interface already has an "offset"


Ok, now I got it; yes, I think that's better. I'll do that.


Actually I'm thinking that this is just another case of mremap().


I need help with this part: why is mremap relevant here?


The underlying question is getting the memory object of a given memory
range, like vm_region does.


Yes, I see that could be useful. Is vm_region not working for proxies?

> We need to be careful since we don't want any process to be able to
> get the memory object of any other process

Is that not being checked yet? Can I call vm_region to get another
task's memory object now?





Re: PCI arbiter memory mapping

2021-08-16 Thread Samuel Thibault
Sergey Bugaev, on Tue, 17 Aug 2021 00:30:34 +0300, wrote:
> On Tue, Aug 17, 2021 at 12:10 AM Samuel Thibault
>  wrote:
> > Yes, and we want to implement nesting: providing a sub-hurd with a
> > partial view of the PCI space.
> 
> Interesting. But I still don't think I understand what problem you
> are running into.

Please look at the actual source code; I'm sorry, I just don't have time
to spend paraphrasing the whole story.

The root pci-arbiter uses libpciaccess' x86 backend to access PCI; that
backend's x86_pci.c uses mmap(/dev/mem) to map PCI BARs. pci-arbiter wants
to expose them in /servers/bus/pci, but for that it needs a memory
object. In Joan's current branches the memory object is passed under the
hood between libpciaccess and pci-arbiter, but we want to avoid such a
horrible thing.

Then there is the sub-arbiter, which gets its mapping from libpciaccess'
hurd backend, whose underlying memory object behaves differently
(because the mem device does not yet implement the offset feature, while
the memory object proxy does). And thus his current code has a bad-looking
"if I'm dealing with a proxy rather than a device" check which we'd really
want to avoid.

Again, the devil lies in the details.

> (Hmm, how does I/O port access work in a subhurd? I'm guessing the
> subhurd has to be privileged?

The I/O port access tokens can be provided to the subhurd.
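
For context, those tokens are gnumach's i386 io-perm objects; roughly (a
sketch, assuming the mach_i386 user stubs; the header path is approximate):

#include <mach.h>
#include <mach/i386/mach_i386.h>  /* i386_io_perm_* stubs (assumed path) */

/* Grant a subhurd task access to an I/O port range, e.g. for a PCI
   device it is allowed to drive directly. */
static kern_return_t
grant_io_ports (mach_port_t device_master, task_t subhurd_task,
                unsigned short from, unsigned short to)
{
  mach_port_t io_perm;
  kern_return_t err;

  err = i386_io_perm_create (device_master, from, to, &io_perm);
  if (err)
    return err;
  /* The token itself can be handed over to the subhurd; here we enable
     it on one of its tasks directly. */
  return i386_io_perm_modify (subhurd_task, io_perm, TRUE);
}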

> But then it probably has to anyway, to access the PCI devices?)

It can access them through the memory maps provided by the pci-arbiter.

Samuel



Re: PCI arbiter memory mapping

2021-08-16 Thread Sergey Bugaev
On Tue, Aug 17, 2021 at 12:10 AM Samuel Thibault
 wrote:
> Yes, and we want to implement nesting: providing a sub-hurd with a
> partial view of the PCI space.

Interesting. But I still don't think I understand what problem you
are running into.

boot(1) already emulates Mach devices; it would itself use
libpciaccess (or the PCI arbiter directly, since it doesn't need
portability?) to get access to the host's devices that are to be
passed through to the subhurd. Then it'd return whatever memory
objects it got from the host's PCI arbiter — or proxies to them — from
emulated device_map (). The kernel would take care of short-circuiting
the nested proxies, although this is just a tiny optimization in this
case and is not required for correctness or anything like that.

Is that how you're implementing it?

(Hmm, how does I/O port access work in a subhurd? I'm guessing the
subhurd has to be privileged? But then it probably has to anyway, to
access the PCI devices?)

Sergey



Re: PCI arbiter memory mapping

2021-08-16 Thread Samuel Thibault
Sergey Bugaev, on Tue, 17 Aug 2021 00:07:29 +0300, wrote:
> On Mon, Aug 16, 2021 at 11:43 PM Samuel Thibault
>  wrote:
> > Here, possibly proxies can indeed work properly. But please do look at
> > what Joan's situation is to make sure it does.
> 
> I don't think I understand enough about the situation. It would help
> if you or Joan were to kindly give me some more context :)
> 
> As I understand it, there's the PCI arbiter, which is a translator
> that arbitrates access to PCI, which is a hardware bus that various
> devices can be connected to. The hardware devices connected via PCI
> are available (to the PCI arbiter) as Mach devices, and in particular
> it's possible to use device_map () and then vm_map () to access the
> device memory. Then there's libpciaccess whose Hurd backend uses the
> files exported by the PCI arbiter to get access to the PCI, including
> mapping device memory. Naturally its user can request read-only or
> read-write mapping, but the PCI arbiter may decide to only return a
> read-only memory object (a proxy to the device pager), in which case
> libpciaccess should deallocate the port and return EPERM, or the PCI
> arbiter may return the real device pager.
> 
> Is that right?

Yes, and we want to implement nesting: providing a sub-hurd with a
partial view of the PCI space.

Samuel



Re: PCI arbiter memory mapping

2021-08-16 Thread Sergey Bugaev
On Mon, Aug 16, 2021 at 11:43 PM Samuel Thibault
 wrote:
> Here, possibly proxies can indeed work properly. But please do look at
> what Joan's situation is to make sure it does.

I don't think I understand enough about the situation. It would help
if you or Joan were to kindly give me some more context :)

As I understand it, there's the PCI arbiter, which is a translator
that arbitrates access to PCI, which is a hardware bus that various
devices can be connected to. The hardware devices connected via PCI
are available (to the PCI arbiter) as Mach devices, and in particular
it's possible to use device_map () and then vm_map () to access the
device memory. Then there's libpciaccess whose Hurd backend uses the
files exported by the PCI arbiter to get access to the PCI, including
mapping device memory. Naturally its user can request read-only or
read-write mapping, but the PCI arbiter may decide to only return a
read-only memory object (a proxy to the device pager), in which case
libpciaccess should deallocate the port and return EPERM, or the PCI
arbiter may return the real device pager.

Is that right? What am I missing? What's the issue you're trying to solve?

Sergey



Re: PCI arbiter memory mapping

2021-08-16 Thread Samuel Thibault
I'm fine with using proxies if that happens to be optimized.

But it makes me sad to see FIXMEs, XXXs, etc. in various places that
people don't take the time to implement and instead work around.

This weekend, I at last managed to get rid of the horrible rmh
kludge in glibc that hurt us several times, and was there only
because nobody took the time to implement gnumach's 10-line-longish
0650a4ee30e34ba039940032fdff3719c1b4b000.

Here, possibly proxies can indeed work properly. But please do look at
what Joan's situation is to make sure it does.

Samuel



Re: PCI arbiter memory mapping

2021-08-16 Thread Sergey Bugaev
On Mon, Aug 16, 2021 at 10:28 PM Samuel Thibault
 wrote:
> Sergey Bugaev, on Mon, 16 Aug 2021 21:59:47 +0300, wrote:
> > IMO the proper way to get mremap () is vm_remap ()
>
> But that doesn't answer Joan's need for getting a memory object, to be
> able to put a proxy memory object on top of it to make it read-only.

Ah, I wasn't really following the discussion. I'm talking specifically
about mremap.

> But then it'll create a pile of proxies if the application calls mremap
> several times, which I guess some applications will happily try to do.

No, for two reasons:

1. The kernel should just transparently short-circuit nested proxies
at proxy creation time — make the new proxy reference the original
memory object and not the other proxy; this way the other proxy can be
deallocated when it's no longer referenced, even if the new proxy
continues to exist. I believe this is what Joan has suggested above:

> I think it'd be a better solution to move the call to
> memory_object_proxy_valid() and the start value overwrite to
> memory_object_create_proxy()

2. As always, when you map a proxy it's the original memory object
that actually gets mapped (and referenced, and kept alive); the proxy
will then get deallocated when the task does mach_port_deallocate
(mach_task_self (), proxy). In other words, the proxy is short-lived
and only serves to make this whole operation representable. (This is
transparent to userspace though: from its perspective it's the proxy
that stays mapped.)

Sergey



Re: PCI arbiter memory mapping

2021-08-16 Thread Samuel Thibault
Sergey Bugaev, on Mon, 16 Aug 2021 21:59:47 +0300, wrote:
> On Mon, Aug 16, 2021 at 9:17 PM Samuel Thibault  
> wrote:
> > > > I'd rather see a
> > > > hurd-specific libpciaccess function to get the pager.
> > >
> > > That's better, but we'd have to add that function in both hurd_pci.c and
> > > x86_pci.c. I don't like the idea of adding Hurd specific code to 
> > > x86_module
> > > but there's already a lot of it and we could avoid the existence of struct
> > > pci_user_data, which I like even less.
> >
> > Actually I'm thinking that this is just another case of mremap(). The
> > underlying question is getting the memory object of a given memory
> > range, like vm_region does. We need to be careful since we don't want
> > any process to be able to get the memory object of any other process,
> > but it does make sense, and would be useful for mremap.
> 
> IMO the proper way to get mremap () is vm_remap ()

But that doesn't answer Joan's need for getting a memory object, to be
able to put a proxy memory object on top of it to make it read-only.

> This makes me think of something cooler though: what if there was an
> API to create a proxy memory object from a (task, address, size)
> triple?

That wouldn't answer his need either: we don't want to access the task's
contents; we want to access the underlying (PCI-mmapped) memory object.

> routine vm_create_proxy (
> target_task : vm_task_t;
> address : vm_address_t;
> size : vm_size_t;
> out proxy : memory_object_t);
> 
> Then mremap () could be implemented like this:
> 
> vm_create_proxy (mach_task_self (), old_address, old_size, &proxy);
> vm_deallocate (mach_task_self (), old_address, old_size);
> vm_map (mach_task_self (), new_address, new_size, ..., proxy, ...);
> mach_port_deallocate (mach_task_self (), proxy);

But then it'll create a pile of proxies if the application calls mremap
several times, which I guess some applications will happily try to do.

Samuel



Re: PCI arbiter memory mapping

2021-08-16 Thread Sergey Bugaev
On Mon, Aug 16, 2021 at 9:17 PM Samuel Thibault  wrote:
> > > I'd rather see a
> > > hurd-specific libpciaccess function to get the pager.
> >
> > That's better, but we'd have to add that function in both hurd_pci.c and
> > x86_pci.c. I don't like the idea of adding Hurd specific code to x86_module
> > but there's already a lot of it and we could avoid the existence of struct
> > pci_user_data, which I like even less.
>
> Actually I'm thinking that this is just another case of mremap(). The
> underlying question is getting the memory object of a given memory
> range, like vm_region does. We need to be careful since we don't want
> any process to be able to get the memory object of any other process,
> but it does make sense, and would be useful for mremap.

IMO the proper way to get mremap () is vm_remap () — basically like
vm_map (), but instead of a memory object to map it takes another
(task, address) pair. OSF (or just Apple's?) Mach has it [0], although
I'd add 'anywhere' and 'unmap_source' flags to that interface.

[0] https://developer.apple.com/documentation/kernel/1585336-vm_remap

This makes me think of something cooler though: what if there was an
API to create a proxy memory object from a (task, address, size)
triple? Then you'd be able to map the new proxy, and there wouldn't be
a need for a separate vm_remap () — all without getting direct access
to the underlying memory object(s).

So, something like this:

routine vm_create_proxy (
        target_task : vm_task_t;
        address     : vm_address_t;
        size        : vm_size_t;
    out proxy       : memory_object_t);

Then mremap () could be implemented like this:

vm_create_proxy (mach_task_self (), old_address, old_size, &proxy);
vm_deallocate (mach_task_self (), old_address, old_size);
vm_map (mach_task_self (), new_address, new_size, ..., proxy, ...);
mach_port_deallocate (mach_task_self (), proxy);

(Hmm, but what about increasing the map size? This cannot be done
securely for non-anonymous mappings, can it?)

This could also be used for other fun things, like implementing
/proc/pid/mem in a coherent way. (It can be implemented in a sad way
without this by making a pager that does vm_read () / vm_write () when
asked to retrieve / store a page.)

What do you think?

Sergey



Re: PCI arbiter memory mapping

2021-08-16 Thread Samuel Thibault
Joan Lledó, on Mon, 16 Aug 2021 10:53:47 +0200, wrote:
> On 5/8/21 at 1:26, Samuel Thibault wrote:
> 
> > Is it not possible to avoid having to call memory_object_proxy_valid?
> > maybe better fix device_map into supporting non-zero offset,
> 
> I think it'd be a better solution to move the call to
> memory_object_proxy_valid() and the start value overwrite to
> memory_object_create_proxy(); that way all logic related to proxies is
> kept inside memory_object_proxy.c and other modules of the kernel don't
> need to be aware of proxies.

Ok, but I meant that the device_map interface already has an "offset"
parameter which is marked as TODO.  So it's already there and documented,
it makes sense (getting a map object at a specific offset), and it is just
waiting to be implemented:

device/dev_pager.c

/* FIXME: This is not recording offset! */
kern_return_t   device_pager_setup(
const mach_device_t device,
int prot,
vm_offset_t offset,
vm_size_t   size,
mach_port_t *pager)

And in x86_pci.c's map_dev_mem you have

/* XXX: Mach should be fixed into supporting non-zero offset */
err = device_map (devmem, prot, 0x0, mem_offset + mem_size, &pager, 0);

That looks like an exact match to get things fixed and along the way
kill an old FIXME.
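
For clarity, the suggested change boils down to this (a sketch only, with
an assumed `offset' field; not the actual gnumach code):

#include <mach.h>

/* device_pager_setup () would store the offset it is given... */
struct dev_pager_sketch
{
  vm_offset_t offset;   /* newly recorded BAR offset */
  vm_size_t size;
};

/* ...and the paging path would add it when translating an offset within
   the memory object into an offset on the device: */
static vm_offset_t
device_offset_for (const struct dev_pager_sketch *d, vm_offset_t obj_offset)
{
  /* previously: return obj_offset;  (the offset was silently dropped) */
  return d->offset + obj_offset;
}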

> > I'd rather see a
> > hurd-specific libpciaccess function to get the pager.
> 
> That's better, but we'd have to add that function in both hurd_pci.c and
> x86_pci.c. I don't like the idea of adding Hurd specific code to x86_module
> but there's already a lot of it and we could avoid the existence of struct
> pci_user_data, which I like even less.

Actually I'm thinking that this is just another case of mremap(). The
underlying question is getting the memory object of a given memory
range, like vm_region does. We need to be careful since we don't want
any process to be able to get the memory object of any other process,
but it does make sense, and would be useful for mremap.

> Apart from that, I also found a bug in hurd_pci.c:196 [1]. When the user
> asks for read/write permissions but only read is allowed, we should either:
> 
> - Deallocate robj and return EPERM

We should do that, yes.

> I think pci_device_hurd_map_range() should behave the same way
> pci_device_x86_map_range() does in the x86 module. But the implementation of
> map_dev_mem() is not clear about what happens in that case, I guess vm_map()
> handles that.

Not yet, because glibc was not careful about it, but we should aim for
that, yes.

Samuel



Re: PCI arbiter memory mapping

2021-08-16 Thread Joan Lledó

Hi,

On 5/8/21 at 1:26, Samuel Thibault wrote:


Is it not possible to avoid having to call memory_object_proxy_valid?
maybe better fix device_map into supporting non-zero offset,


I think it'd be a better solution to move the call to
memory_object_proxy_valid() and the start value overwrite to
memory_object_create_proxy(); that way all logic related to proxies is
kept inside memory_object_proxy.c and other modules of the kernel don't
need to be aware of proxies.




Does pci_device_hurd_unmap_range not need to close the pager?



Yes.


Also, map_dev_mem needs to release the pager when dev == NULL? otherwise
pci_device_x86_read_rom would leak the pager?



Yes. Or we could not pass a pager to device_map() if dev == NULL; it is
not going to be used anyway.



I'm a bit afraid of the struct pci_user_data passing between
libpciaccess and pci-arbiter


Me too.


I'd rather see a
hurd-specific libpciaccess function to get the pager.



That's better, but we'd have to add that function in both hurd_pci.c and 
x86_pci.c. I don't like the idea of adding Hurd specific code to 
x86_module but there's already a lot of it and we could avoid the 
existence of struct pci_user_data, which I like even less.


Apart from that, I also found a bug in hurd_pci.c:196 [1]. When the user 
asks for read/write permissions but only read is allowed, we should either:


- Deallocate robj and return EPERM
- Do nothing and map the range read-only, which is not what the user
asked for.


The current code deallocates wobj, which is null, leaks robj, and returns
EPERM. That's wrong, since it doesn't make much sense, but which of the
two above is correct?
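
For reference, the first option would be a fragment along these lines (a
sketch around the io_map () results, assuming the usual struct
pci_device_mapping argument named `map'):

/* The caller asked for a read/write mapping, but io_map () only gave us
   a usable read object (wobj is null). */
if ((map->flags & PCI_DEV_MAP_FLAG_WRITABLE) && wobj == MACH_PORT_NULL)
  {
    /* Refuse, and do not leak the read object. */
    if (robj != MACH_PORT_NULL)
      mach_port_deallocate (mach_task_self (), robj);
    return EPERM;
  }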


I think pci_device_hurd_map_range() should behave the same way 
pci_device_x86_map_range() does in the x86 module. But the 
implementation of map_dev_mem() is not clear about what happens in that 
case, I guess vm_map() handles that.



BTW, you can directly push "fix indentation" commits, so that after
rebasing your branch only contains actual code changes, for easier
review.


Will do.

--
[1] 
https://gitlab.freedesktop.org/jlledom/libpciaccess/-/blob/hurd-device-map/src/hurd_pci.c#L196




Re: PCI arbiter memory mapping

2021-08-15 Thread Joan Lledó

Hi

On 9/8/21 at 19:45, Samuel Thibault wrote:

I pushed the start/len start, along with a fix for the length.  Could
you proofread that part?





I ran it and it works fine.



Re: PCI arbiter memory mapping

2021-08-09 Thread Samuel Thibault
Joan Lledó, on Mon, 9 Aug 2021 23:30:47 +0200, wrote:
> On 9/8/21 at 19:45, Samuel Thibault wrote:
> > I pushed the start/len start, along with a fix for the length.  Could
> > you proofread that part?
> 
> It seems all right to me.

Thanks!
The change definitely deserved a second pair of eyes :)

Samuel



Re: PCI arbiter memory mapping

2021-08-09 Thread Joan Lledó

Hi,

On 9/8/21 at 19:45, Samuel Thibault wrote:

I pushed the start/len start, along with a fix for the length.  Could
you proofread that part?



It seems all right to me. I'll test this and check your other comments
next weekend or the following.




Re: PCI arbiter memory mapping

2021-08-09 Thread Samuel Thibault
Hello,

Joan Lledó, on Sun, 20 Jun 2021 14:59:25 +0200, wrote:
> http://git.savannah.gnu.org/cgit/hurd/gnumach.git/log/?h=jlledom-memory-object-proxy

I pushed the start/len start, along with a fix for the length.  Could
you proofread that part?

Samuel



Re: PCI arbiter memory mapping

2021-08-04 Thread Samuel Thibault
Joan Lledó, on Sun, 20 Jun 2021 14:59:25 +0200, wrote:
> +  /* Find out whether pager is already a proxy */
> +  memory_object_proxy_valid (pager, &pager_is_proxy);
> 
> +  starts[0] = pager_is_proxy ? 0 : region->base_addr;

Is it not possible to avoid having to call memory_object_proxy_valid?
It looks odd for the caller not to know what kind of object it has in
its hands. I guess you can simply add a boolean in pci_user_data?

Or else, maybe better to fix device_map into supporting a non-zero offset;
that should be easy enough, by making device_pager_setup record the offset
into a new field of dev_pager_t and just adding that offset on the fly in
device_map_page.

Does pci_device_hurd_unmap_range not need to close the pager?

Also, map_dev_mem needs to release the pager when dev == NULL? otherwise
pci_device_x86_read_rom would leak the pager?

I'm a bit afraid of the struct pci_user_data passing between
libpciaccess and pci-arbiter: if we ever want to extend the structure
the compatibility will be hard to achieve. I'd rather see a
hurd-specific libpciaccess function to get the pager.

Apart from these, the principles look good to me!


BTW, you can directly push "fix indentation" commits, so that after
rebasing your branch only contains actual code changes, for easier
review.

Samuel



Re: PCI arbiter memory mapping

2021-08-04 Thread Samuel Thibault
Hello,

Joan Lledó, on Sat, 19 Jun 2021 11:50:09 +0200, wrote:
> do you prefer diff patches by email

That's simpler for discussing the code, yes.

Samuel



Re: PCI arbiter memory mapping

2021-07-03 Thread Joan Lledó

Any thoughts on this?

On 19/6/21 at 11:50, Joan Lledó wrote:

Hi Hurd,

I finally got memory mapping working in the pci arbiter. That means any 
user with the proper permissions can map the device region files 
generated by an arbiter. This is also working for nested arbiters.


For this I made changes in libpciaccess, gnumach, and the Hurd (the
pci-arbiter and libnetfs folders).


I got the code in some branches in different repositories. How should I
share the code for your review? Do you prefer diff patches by email, or
should I clean the branches and share the links?






Re: PCI arbiter memory mapping

2021-06-20 Thread Joan Lledó

Hi,

On 20/6/21 at 3:25, Damien Zammit wrote:

Hi Joan,

On 19/6/21 7:50 pm, Joan Lledó wrote:



How does that interact with existing PCI access? For example, I think the
AHCI rump driver works with DMA, so do we need to also adjust the
pci-userspace part of rumpkernel?


I couldn't say; I know very little about that, TBH. But I'd say that if it
was working before, you shouldn't need to adjust anything, because my
changes in the arbiter added new functionality, not breaking changes to
components that already work. At the end of the day, all it does is allow
region files to be mapped with mmap() and vm_map().
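
From the user side it then really is just mmap() on a region file; for
example (the path is only an example of the files the arbiter exposes, and
the 4096 length is a placeholder for the BAR size):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  /* Example path: BAR 0 of one device under the arbiter's tree; the
     real path depends on the domain/bus/slot/function of the device. */
  int fd = open ("/servers/bus/pci/0000/00/05/0/region0", O_RDONLY);
  if (fd < 0)
    return 1;

  void *bar = mmap (NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
  if (bar == MAP_FAILED)
    {
      close (fd);
      return 1;
    }

  printf ("first word: %08x\n", *(volatile unsigned int *) bar);
  munmap (bar, 4096);
  close (fd);
  return 0;
}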


Anyway, I cleaned up the branches, you can test it yourself:

https://gitlab.freedesktop.org/jlledom/libpciaccess/-/tree/hurd-device-map

http://git.savannah.gnu.org/cgit/hurd/gnumach.git/log/?h=jlledom-memory-object-proxy

http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=jlledom-pci-memory-map



Re: PCI arbiter memory mapping

2021-06-19 Thread Damien Zammit
Hi Joan,

On 19/6/21 7:50 pm, Joan Lledó wrote:
> I finally got memory mapping working in the pci arbiter. That means any 
> user with the proper permissions can map the device region files 
> generated by an arbiter. This is also working for nested arbiters.

How does that interact with existing PCI access? For example, I think the
AHCI rump driver works with DMA, so do we need to also adjust the
pci-userspace part of rumpkernel?

Cheers,
Damien