Hi Jan,

On 7.6.2015 19:49, Jan Mareš wrote:
> I noticed that when I have a fibril which is blocked in a read from
> stdin and I want to write something to stdout, the call to write gets
> blocked until the read is finished. Is there a way to fix vfs/locfs so
> the write would finish before the read?

I am trying to figure out what could be the cause of this, i.e. what is
causing the serialization. In the past, we have had similar issues, but
managed to solve them one way or another.

For example, the calls would be serialized if sent over an
EXCHANGE_SERIALIZE session. But that's not the case with VFS, as it
uses EXCHANGE_PARALLEL. So different exchanges use different IPC
connections and can be processed in parallel. In servers (e.g. slip)
where, for some reason, EXCHANGE_PARALLEL could not be used, two
separate sessions had to be created instead.

Another thing causing serialization could be some lock in the VFS. There
is a contents_rwlock taken as a reader for each read() and as a writer
for each write(). But if the underlying file system supports concurrent
read/write and writes don't change the node's size, the lock should be
taken as a reader even for writes. At the moment, I see that locfs does
not support this, but maybe this could be reconsidered.

And finally, also the end-point filesystems themselves (such as locfs)
and/or the underlying services could introduce some serialization on the
processing of the requests.

From your description, I assume that your writer is blocking on the VFS
node's contents_rwlock because it cannot take it as a reader since locfs
does not support this. It remains to be established whether it would be
wise and possible to modify locfs in such a way that it could guarantee
that a write does not modify the node's size. On the other hand, the
nodes located on locfs are not ordinary files and their size is always
0, so maybe the change is possible.

Jakub

_______________________________________________
HelenOS-devel mailing list
[email protected]
http://lists.modry.cz/listinfo/helenos-devel
