Svante Signell, on Sun 16 Sep 2012 17:53:19 +0200, wrote:
> > What is puzzling exactly?  That said, you don't need to understand that
> > part.
> 
> I want to know what's happening (not necessarily understand every
> detail).

Then I can only say that I don't even actually know. And that's just
fine: we really don't need to understand all the details to work on
stuff.

> > > Q3: I cannot see where the code is included/the macros expanded, going
> > > from setsockopt.c to RPC_socket_setopt.c?? What am I missing?
> > 
> > I don't see what you are missing. setsockopt.c calls __socket_setopt(),
> > and RPC_socket_setopt.c defines __socket_setopt(). It's a mere C call,
> > what don't you understand?
> 
> OK, it is a C call. The question is how RPC_socket_setopt.c is
> created: I assume mig is behind this, and the only trace I found in the
> build log is the invocation of mig followed by the echoing of weak_alias
> and a move of tmp_${call}.c to RPC_${call}.c, in this case with
> call=socket_getopt.

So you've found it. No need to know more about it. Details really don't
matter.

> > > There is some mig stuff before the above in the build log:
> > 
> > Again, what is the question?
> 
> See above: it seems to be mig that creates RPC_socket_getopt.c from
> its definition in socket.defs, etc.?

Yes. See rpc.mdwn:

“This is an RPC from the fs interface (see fs.defs). The
implementation of the function is thus actually generated using mig
during the glibc build in RPC_dir_lookup.c.”

> > > Q4: Is that where the build-tree/hurd-i386-libc/mach/tmp_*.c functions
> > > are created?
> > 
> > You don't need to understand that. RPC_socket_setopt.c simply provides
> > the function that setsockopt.c calls. I don't see what more you want.
> 
> See above, the truth and nothing but the truth is enough :)

The whole truth is way too complex. Again, you *don't* need it to hack
on the Hurd. Just like you don't need to know the details of how a
system call is made to understand how, on Linux, you get from open() in
glibc to sys_open() in the kernel.

> > > That was where I lost track of the function.
> > 
> > See rpc.mdwn, it's a system call. So it ends up in Mach, the kernel. And
> > you don't want to go into the details inside mach. All you need to know
> > is what is described in rpc.mdwn: mach_msg sends a message to the port.
> 
> Yes, and before rpc.mdwn was written, the gnumach and hurd reference
> manuals did not give you that hint...

The gnumach reference surely does: it documents mach_msg as doing what I
mentioned above.

> Nice indeed, reading the source code wouldn't help much here, agreed?

It would: mach_msg shows up there as a system call. Then you read the
reference about that system call.

> > > And from the build of hurd:
> > > hurd-20120710/build/pflocal/socketServer.c:
> > > mig_external kern_return_t S_socket_setopt
> > > but there is also:
> > > mig_internal void _Xsocket_setopt
> > > 
> > > Q6: Where do the _X and S_ definitions come into play?
> > 
> > See rpc.mdwn, it's mentioned there.
> 
> rpc.mdwn does not explain where and how these _X and S_ prefixed
> versions are hooked together with the RPC_*.c code. 

Sorry, but it does.

“
This generated function [in RPC_*.c] essentially encodes the parameters
into a data buffer, and makes a mach_msg system call to send the buffer
to the root filesystem port, with the dir_lookup RPC id.

The root filesystem, for instance ext2fs, was sitting in its main
loop (libdiskfs/init-first.c, master_thread_function()), which calls
ports_manage_port_operations_multithread(), which essentially simply keeps
making a mach_msg system call to receive a message, and calls the demuxer
on it, here the demuxer parameter, diskfs_demuxer. This demuxer calls the
demuxers for the various interfaces supported by ext2fs. These demuxers are
generated using mig during the hurd build. For instance, the fs interface
demuxer for diskfs, diskfs_fs_server, is in libdiskfs/fsServer.c. It simply
checks whether the RPC id is an fs interface ID, and if so uses the
diskfs_fs_server_routines array to know which function should be
called according to the RPC id. Here it's _Xdir_lookup which thus gets
called. This decodes the parameters from the message data buffer, and calls
diskfs_S_dir_lookup.
”

Just replace ext2fs with pflocal, master_thread_function with main
(since that's what calls ports_manage_port_operations_multithread),
diskfs_demuxer with pf_demuxer (since that's what is passed
to ports_manage_port_operations_multithread), dir_lookup with
socket_setopt, io* with socket* (since it's in socket.defs that
socket_setopt is defined, not io.defs), and thus diskfs_fs_server with
socket_server (since that's what pf_demuxer calls), and fsServer with
socketServer, etc., and you'll be done.

I am sorry, but the paragraph I've just written above should be
*obvious*: there is no way we are going to document exactly the same
thing for every server. It's always the same principle; you just have
to adapt the words to your situation.

> > > We also have:
> > > hurd-20120710/build/hurd/socket.msgids:socket 26000 socket_setopt 13 26013 26113
> > > hurd-20120710/build/hurd/hurd.msgids:socket 26000 socket_setopt 13 26013 26113
> > > 
> > > The socket_setopt function is defined in hurd-20120710/hurd/socket.defs:
> > > /* Set a socket option.  */
> > > routine socket_setopt (
> > >         sock: socket_t;
> > >         level: int;
> > >         option: int;
> > >         optval: data_t SCP);
> > 
> > Yes, that's the definition that mig uses to create the stubs. That's
> > what is referenced in rpc.mdwn btw.
> 
> Maybe the mig stuff could be explained more in detail.

There is documentation for mig. Go read it.
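To give a rough idea of what mig emits from the socket_setopt routine quoted above, the generated prototypes follow roughly this shape. This is an approximation from the names visible in the build logs quoted earlier; the exact parameter types depend on the interface's type declarations and the server's type translation, so check the generated files themselves:

```c
/* Client side (glibc), generated into RPC_socket_setopt.c: */
kern_return_t __socket_setopt (socket_t sock, int level, int option,
                               data_t optval,
                               mach_msg_type_number_t optvalCnt);

/* Server side (pflocal), generated into socketServer.c: the _X
   function decodes the incoming message and calls S_socket_setopt,
   which the server author writes by hand (the real pflocal version
   may take a translated socket type rather than a raw socket_t). */
mig_internal void _Xsocket_setopt (mach_msg_header_t *InHeadP,
                                   mach_msg_header_t *OutHeadP);
kern_return_t S_socket_setopt (socket_t sock, int level, int option,
                               data_t optval,
                               mach_msg_type_number_t optvalCnt);
```

So the _X/S_ split is simply mig's convention: _X is the generated decoder, S_ is the implementation you provide.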

> > > Q9: Where to find what to add here?
> > 
> > You have to invent it. I.e. check in the POSIX standard what it means
> > for a local socket to enable/disable SO_REUSEADDR, check how pflocal is
> > supposed to implement it (yes, that means reading the source code, we
> > won't document the internal mechanisms of pflocal, just like the Linux
> > kernel didn't document its own), and then implement it.
> 
> Sorry to hear. A reference manual could include such stuff, but of
> course not a user's manual.

It's NOT IMPLEMENTED YET. Why on earth do you assume that something
that's not even implemented should be documented????

Damn, grow up. It doesn't exist yet. So we have to invent it. That's how
software gets written. It doesn't grow out of nothing; people have to
write it.

> > > Q10: If gnumach is kernel space and hurd user space, what is the eglibc
> > > code?
> > 
> > Don't try to map monolithic-kernel vocabulary on the Hurd, that can't
> > work.
> > 
> > > Is this really a client-server implementation?
> > 
> > Yes: glibc is the client, the translators are the servers, and the
> > kernel is only the mailman (mach_msg)
> 
> So it looks like:
> user_code <-> libc(client) <-> gnumach(mailman) <-> hurd(server)

Yes

> Why make things so complicated, via the RPC stuff?

Because that's *PRECISELY* what makes the Hurd powerful: you can
redirect the RPC to any server.

> Implementing the whole function setsockopt in eglibc would simplify a
> lot.

But it cannot work: it's pflocal which has to change the option for the
socket, since that's where AF_UNIX sockets are actually IMPLEMENTED.

> How does the above look for a monolithic kernel like Linux? (I could
> dig that up, but maybe you know already.)

Linux would be user_code -> libc -> system call -> socket core layer ->
AF_UNIX implementation (Linux's equivalent of pflocal).

And it's actually not much less complex to read. Give it a try.

> > > Don't say that the eglibc-gnumach-hurd combination is simple, then you
> > > are not serious :(
> > 
I don't think we ever said that. It's not, and it's not really meant to
be, otherwise we wouldn't have so many powerful features.
> 
> What are the powerful features compared to a monolithic kernel?

Using your (as a user) own pflocal instead of the system-provided one,
using gdb/valgrind on it, etc. See the wiki pages about the benefits of
the Hurd.

> Sorry, I cannot see them. I only find tons of unimplemented things
> and nasty bugs.

I'm speechless.

Linux also has tons of unimplemented things and nasty bugs. That
doesn't make it too bad a kernel, at least not for all situations.

> > > And one can understand why people have problems
> > > contributing to such a complicated software structure. 
> > 
> > I can tell you, it's definitely *more* complex to contribute to the
> > Linux kernel nowadays than to the Hurd. Not only because it's a very
> > big and complex piece of software, but also because the Linux kernel
> > doesn't have internal documentation either!
> 
> Sad case for Linux too if that is true :(

That's life. You have to understand that you won't always have
documentation and people taking the time to explain things to you. The
more you get into the core of the system, the less documentation you'll
find. It has been so in every system I've seen.

> > > If somebody can explain this properly to me, I will write it down and
> > > add to the existing (incomplete) documentation.
> > 
> > Please tell what is missing from rpc.mdwn. For now I believe there is
> > already everything you need to know. The rest is details that are not
> > needed for understanding RPCs.
> 
> Why not add it to hurd.texi, or create some overview document
> describing the overall picture?

That's what I have added as rpc.mdwn.

> The wiki is good, but I for one appreciate written manuals too.

Why?

Samuel
