[9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-06 Thread Siddhant
On Feb 5, 9:36 pm, [EMAIL PROTECTED] (erik quanstrom) wrote:
> > see bind(2):
>
> >   Finally, the MCACHE flag, valid for mount only, turns on
> >   caching for files made available by the mount.  By default,
> >   file contents are always retrieved from the server.  With
> >   caching enabled, the kernel may instead use a local cache to
> >   satisfy read(5) requests for files accessible through this
> >   mount point.  The currency of cached data for a file is ver-
> >   ified at each open(5) of the file from this client machine.
>
> sleazy hacks notwithstanding.
>
> - erik

Thanks for your replies, everyone! I'm new to these kernel
internals, but I think I've got the point now. (Almost... :) )
I'll sum it up. Please tell me if I'm right or wrong.

9P clients execute remote procedure calls to establish a fid on the
remote (9P) file server.
This fid has to be mapped, via a channel, to a driver in devtab[].
If the file is present in a kernel-resident file system, well and good.
Otherwise, the mount driver (also in devtab[]) steps in and converts
the local procedure call into a 9P message.
(And then the mount driver is supposed to attach a name space, so it
ends there...)


Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread erik quanstrom
> see bind(2):
> 
>   Finally, the MCACHE flag, valid for mount only, turns on
>   caching for files made available by the mount.  By default,
>   file contents are always retrieved from the server.  With
>   caching enabled, the kernel may instead use a local cache to
>   satisfy read(5) requests for files accessible through this
>   mount point.  The currency of cached data for a file is ver-
>   ified at each open(5) of the file from this client machine.
> 

sleazy hacks notwithstanding.

- erik


Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread Charles Forsyth
> 9P is used universally in Plan 9 to deal with files. If the file is
> present in a kernel-resident file system, then a direct procedure
> call is enough; else, the mount driver comes into the picture.
> If it's in a kernel-resident file system, then the file's channel is
> looked up for its Dev structure (the device driver corresponding to
> the file being referred to by the channel), which defines all those
> procedures that may finally be executed.

yes, that seems fine.  conventional device drivers are the main
file servers that are built-in, but there are others.  if you look
at section 3 of the programmer's manual you'll see a representative set.
notably, in plan 9, none of them deal with files on discs etc.
that's all done by user-level servers, which are typically found in section 4 of
the manual.

>there is no client-side caching in the plan 9 kernel.

see bind(2):

  Finally, the MCACHE flag, valid for mount only, turns on
  caching for files made available by the mount.  By default,
  file contents are always retrieved from the server.  With
  caching enabled, the kernel may instead use a local cache to
  satisfy read(5) requests for files accessible through this
  mount point.  The currency of cached data for a file is ver-
  ified at each open(5) of the file from this client machine.



Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread roger peppe
> there is no client-side caching in the plan 9 kernel.

*cough*, MCACHE?


Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread erik quanstrom
one thing of note, linux vfs implements a dcache.  this connects the
virtual memory system to the filesystem.  (but oddly in linux network
buffers are handled separately.) there is no client-side caching in
the plan 9 kernel.

there is a notable exception: an executable cache.

>> One more point, I googled a lot on "kernel resident file systems and
>> non kernel resident file systems", but I could not find a single
>> useful link. It would be great if you could specify the difference
>> between the two. I wish that eases up the situation a bit.

since the devtab[] functions map 1:1 with 9p, all the mount driver
needs to do for calls outside the kernel is to marshal/demarshal
9p messages.

it's important to remember that most in-kernel file servers could easily
exist outside the kernel.  the entire ip stack can be implemented from
user space.  (and it has been in the past.)

> "Kernel resident filesystem" in this context simply means a filesystem  
> which was created for use by the kernel; this may or may not be  
> visible to user-space applications - I'm not too sure. 

every element of devtab[] has an associated device letter.  to
mount the device, one does (typically)

	bind -a '#'^$letter /dev

for example, to bind a second ip stack on /net.alt,

	bind -a '#I1' /net.alt
 
> To sum up, you  
> use the 9 primitive operations provided by each 'Dev' when you work  
> with kernel-resident filesystems, while all other filesystems are  
> dealt with using regular 9P.

all devices are accessed through devtab.  it may be that that entry
is the mount driver.  the mount driver turns devtab[]->fn into
the corresponding 9p message.  (and vice versa.)

- erik



Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread Anant Narayanan

Hi,


One more point, I googled a lot on "kernel resident file systems and
non kernel resident file systems", but I could not find a single
useful link. It would be great if you could specify the difference
between the two. I wish that eases up the situation a bit.


I don't claim to know much about how 9P or Plan 9 work, but I will  
attempt to answer.


9P is very much like Linux's VFS. It defines how user-space  
applications can access files; whether they are stored locally, or on  
the network, or whether the information is generated on-the-fly by the  
kernel from its internal data-structures is of no consequence - 9P  
abstracts all that.


Accessing files from within the kernel is a different ball-game
(that's true for every kernel), since you don't have 9P-style
access anymore - the 'Dev' structure essentially replaces it. These
structures point to methods that are quite analogous to the 9P
operations. The methods do the work of implementing the file
operations - and these would all be different depending on whether
the files are stored on local disk, produced synthetically or
accessed over a network. This makes it easier to work with files in
the kernel, since all the operations are delegated further to the
'Dev' structures, each one doing what it knows best; quite similar
to what happens with 9P in user-space.


Charles mentions one such 'Dev', the mount driver, which merely
passes on the received request as 9P messages over a file descriptor
(possibly connected to a remote 9P server).


"Kernel resident filesystem" in this context simply means a filesystem  
which was created for use by the kernel; this may or may not be  
visible to user-space applications - I'm not too sure. To sum up, you  
use the 9 primitive operations provided by each 'Dev' when you work  
with kernel-resident filesystems, while all other filesystems are  
dealt with using regular 9P.


I hope I'm right, and that this helps.

--
Anant


[9fans] Re: 9P, procedure calls, kernel, confusion. :(

2008-02-05 Thread Siddhant
On Feb 5, 3:35 pm, [EMAIL PROTECTED] (Charles Forsyth) wrote:
> you can see the structures themselves in /sys/src/9/port/portdat.h
>
> inside the kernel, the traditional integer file descriptor indexes an array 
> of pointers to (possibly shared)
> instances of the channel data structure, Chan, which contains the integer 
> fid, an open mode, a current offset,
> and most important a pointer (or an index into a table of pointers)
> to the Dev structure for the device driver associated with the Chan.  each 
> device driver implements its
> own name space (ie, operations such as walk, dirread, stat) including the 
> file i/o
> operations (eg, open, read, write).  most drivers use a small library 
> (/sys/src/9/port/dev.c)
> to make implementing simple name spaces mainly a matter of defining a table
> and some conventional calls for walk, stat, wstat, open, and dirread (which 
> is handled as a special
> case inside the device's read operation, not as a separate entry in Dev).
>
> all open files have a Chan, but not all Chans correspond to open files (they 
> can mark
> current directories, or more interesting the current location in a walk of a 
> name space).
>
> inside the kernel, operations are done by
> Chan *c = p->fgrp->fd[fd];
> devtab[c->type]->walk(c, ...);
> devtab[c->type]->read(c, va, n, offset);
> and so on.  fairly obvious stuff, really.
>
> >Q. Does it mean that every RPC message essentially has to end up being 
> >implemented via that procedural interface?
>
> one interesting driver (ie, one Dev) is the `mount driver' 
> (/sys/src/9/port/devmnt.c).  its implementations
> of walk, open, stat, read, write, etc build 9P messages representing the 
> corresponding operations and
> exchanges them on any given file descriptor (ie, Chan) using 
> devtab[...]->write and devtab[...]->read
> so it doesn't care what sort of file descriptor is used (pipe, network, ...).
>
> (an alternative might be to use 9P messages throughout.)
>
> >>>"A table in the kernel provides a list of entry points corresponding one 
> >>>to one with the 9P messages for each device."
> >Q. Can I relate this 'table in the kernel' to the 'representation in a
>
> it's the devtab[] that lists pointers to the Dev structures representing 
> configured devices.

Thanks a lot for the reply.
For whatever I could make out, does it work like the following?

9P is used universally in Plan 9 to deal with files. If the file is
present in a kernel-resident file system, then a direct procedure
call is enough; else, the mount driver comes into the picture.
If it's in a kernel-resident file system, then the file's channel is
looked up for its Dev structure (the device driver corresponding to
the file being referred to by the channel), which defines all those
procedures that may finally be executed.
If it's not, then what?

One more point: I googled a lot on "kernel resident file systems and
non kernel resident file systems", but I could not find a single
useful link. It would be great if you could explain the difference
between the two. I hope that will clear things up a bit.

Thanks once again.