On Wed, 21 Sep 2016 11:29:00 -0300 Gustavo Sverzut Barbieri
<barbi...@gmail.com> said:

> On Wed, Sep 21, 2016 at 9:08 AM, Carsten Haitzler <ras...@rasterman.com>
> wrote:
> > On Tue, 20 Sep 2016 19:12:57 -0300 Gustavo Sverzut Barbieri
> > <barbi...@gmail.com> said:
> >
> >> damn raster... I had to do so I could check.
> >>
> >> dlopen -> in git.
> >>
> >> server and libproxy.so wrapper, attached with the basics, I'm not
> >> doing all the cumbersome details to get a single process running and
> >> spawn it from libproxy.so wrapper without a race condition.
> >
> > there is no race.
> >
> > connect. if connect fail, spawn, set timer to connect every 0.1 sec until
> > successful.
> >
> > there is no race. first daemon spawned that binds to the socket wins. every
> > other one will fail and exit. there is no race to deal with as the bind does
> > the job for the daemon - first one in wins and the rest fail.
> 
> there is a problem with stale sockets. If daemon dies and leaves the
> socket file, then next daemons will try to bind and fail... then
> nobody wins. If we first unlink(), then there is a race.
> 
> Or should we use an abstract socket?

or ... never die. ever. once up, stay up (of course until session shuts down).
unless of course it's a crash/bug/segv in which case there is no unlink. :)
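
the client side of that is basically just this (a rough sketch - i'm re-using
gustavo's efl-proxy-resolver binary name and a made-up socket name under
XDG_RUNTIME_DIR, not anything final):

  /* sketch: connect to the daemon's local socket, and if nobody is
   * listening, spawn the daemon and keep retrying every 0.1 sec.
   * no locking needed - if several clients race here and spawn
   * several daemons, only the first bind() wins and the rest exit. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  static int
  _try_connect(const char *path)
  {
     struct sockaddr_un addr;
     int fd = socket(AF_UNIX, SOCK_STREAM, 0);

     if (fd < 0) return -1;
     memset(&addr, 0, sizeof(addr));
     addr.sun_family = AF_UNIX;
     snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
     if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) return fd;
     close(fd);
     return -1;
  }

  int
  proxy_daemon_connect(void)
  {
     const char *run = getenv("XDG_RUNTIME_DIR");
     char path[256];
     int fd, i;

     snprintf(path, sizeof(path), "%s/efl-proxy-resolver.sock",
              run ? run : "/tmp");
     fd = _try_connect(path);
     if (fd >= 0) return fd;
     if (fork() == 0) /* a real version would also reap/detach the child */
       {
          execlp("efl-proxy-resolver", "efl-proxy-resolver", (char *)NULL);
          _exit(127);
       }
     for (i = 0; i < 50; i++) /* retry every 0.1 sec, give up after ~5 sec */
       {
          usleep(100000);
          fd = _try_connect(path);
          if (fd >= 0) return fd;
       }
     return -1;
  }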

> >> I'd simplify all of that by using dbus with acquiring a name in the
> >> session bus and let that entity control it. Also would let the dbus
> >> daemon set isolation, like not inheriting current processes limits,
> >> namespaces and all.
> >
> > no it isn't any easier with dbus. see above. this is what efreetd does now.
> > it's REALLY simple and race-free.
> 
> still doesn't address inheriting caps and other access controls...

efreetd doesn't need such caps. neither does this efl.proxy. remember how i
differentiated between "global resources" vs things like thumbnails, where
thumbs are not global resources - they are directly tied to the src file and
permissions on it and the permissions of the app etc.

a "proxy lookup" daemon is going to provide a global resource - the ability to
take an input target machine and give a resulting proxy server to use, and that
global resource may be as simple as an $http_proxy env var, or as complex as a
bunch of js in a pac file to execute to figure out which proxy to use.
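
in the simple case that lookup boils down to something like this (a sketch -
the function name is made up and the pac side is only a stub comment):

  #include <stdlib.h>
  #include <string.h>

  /* returns a malloc()ed proxy uri like "http://proxy.corp:3128", or
   * "direct://" for no proxy at all. */
  char *
  proxy_lookup(const char *target_url)
  {
     const char *env = getenv("http_proxy");

     if (env && *env) return strdup(env);
     /* the complex half: load the configured .pac file, run its
      * FindProxyForURL(url, host) js against target_url and map the
      * result to a uri - omitted here. */
     (void)target_url;
     return strdup("direct://");
  }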

> say constrained process A spawned the daemon. Then unconstrained
> process B will get the constraints.

in a constrained environment like one with SMACK this daemon would launch as
part of your login session. such constrained environments have to deal with
this. they could also set up systemd socket activation. efreetd - same story.
outside of such envs the "i'll launch if i can't connect" approach just
works (tm).

> more concrete description is: for some reason "A" was started without
> network access. The daemon inherits that and thus libproxy won't work.
> That's okay, expected.

actually it'd still work because the proxy daemon is doing a local lookup of
what to use. the app itself still does the connect and thus gets blocked. the
proxy daemon isn't meant to be doing dns lookups or anything else - just return
the $http_proxy to use, or run the js in the pac file (a nice fat bit of data
for deciding which sites may be internal and/or external by name/ip etc.) to
figure out which proxy to use.
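
and from the app (the libproxy.so wrapper) side the whole exchange is just
ask-and-answer. a sketch with a made-up wire format (url in, proxy uri plus
newline back), just to show the split of who does what:

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* fd is the daemon connection from proxy_daemon_connect() above */
  int
  proxy_query(int fd, const char *url, char *proxy, size_t proxy_size)
  {
     char req[4096];
     ssize_t n;

     snprintf(req, sizeof(req), "%s\n", url);
     if (write(fd, req, strlen(req)) < 0) return -1;
     n = read(fd, proxy, proxy_size - 1);
     if (n <= 0) return -1;
     if (proxy[n - 1] == '\n') n--;
     proxy[n] = 0;
     /* the caller then connects to e.g. "http://10.0.0.1:3128" itself,
      * or goes direct - so network access (or the lack of it) is the
      * app's own, not the daemon's. */
     return 0;
  }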

> However "B" has network access and is started afterwards, it will
> check and the daemon is there. It will use the daemon with libproxy
> that doesn't work due lack of network access. This is not expected.
> 
> I know not all systems employ fine grained capabilities and
> smack/selinux, but some do and we need to be careful.

such constrained environments have to launch the proxy daemon separately. as
above. as long as they either set up socket activation OR just launch it on
login everything works as normal.

> What I can do to solve this is to allow the server to be started from
> socket activation like systemd (also in --user variant). In that case,
> if the user cares about isolation, he uses the
> efl-proxy-resolver.socket and efl-proxy-resolver.service to start
> those on demand. The client will try to connect and it will work, thus
> it won't spawn any daemon on its own. Systemd will spawn the service
> with proper caps.

and everyone on bsd, windows, osx will hate you for doing this. tying it to
systemd as the ONLY and DEFAULT method is stupid. you'd be shooting portability
in the foot.

the PORTABLE way that works on osx, windows (assuming local sockets work),
bsd's, solaris, linux and EVERY *nix is what i described. connect, if connect
fails, launch, then keep connecting until success.

"odd" constrained env's can simply avoid ever needing to trigger the "launch"
code by ensuring the daemon is already started OR use socket activation.
ecore_con/ipc already supports the systemd socket activation stuff if the
systemd env vars are set to pass in the socket fd. all of that is/was already
done... then also ship/add the systemd socket file as part of a user session.

but FIRST do the "i'll launch it myself" code as that is universal. one code
path for everyone everywhere on every os.
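
on the daemon side both paths can share one bit of code: take the socket fd
from systemd if the activation env vars are there (the same env var dance
ecore_con/ipc handles), otherwise bind it ourselves - and if someone else
bound it first, they won, so just exit. again a sketch with a made-up socket
name:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  static int
  _listen_fd_get(const char *path)
  {
     const char *pid = getenv("LISTEN_PID");
     const char *fds = getenv("LISTEN_FDS");
     struct sockaddr_un addr;
     int fd;

     /* systemd socket activation: fds 3 onwards are the pre-bound sockets */
     if (pid && fds && (atoi(pid) == (int)getpid()) && (atoi(fds) >= 1))
       return 3; /* SD_LISTEN_FDS_START */

     /* the universal path: bind it ourselves */
     fd = socket(AF_UNIX, SOCK_STREAM, 0);
     if (fd < 0) return -1;
     memset(&addr, 0, sizeof(addr));
     addr.sun_family = AF_UNIX;
     snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
     if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
       {
          /* someone else already bound it - first one in wins, we exit */
          close(fd);
          exit(0);
       }
     if (listen(fd, 16) < 0)
       {
          close(fd);
          return -1;
       }
     return fd;
  }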

those with systemd AND with systemd managing their login session can then
rely on the socket activation. catch - the socket file has to be installed in
the right place. currently we have a mess where we install such files where
systemd says to and not in $PREFIX, which is pretty bad as it means unexpected
out-of-target dir file installs. but if we don't, then systemd isn't looking
in /usr/local/.... by default. that's all a mess and we've had this mess for
years with dbus activation files. still do with ethumb. enough. this can't be
the default/only way. it needs to be an optional extra.

those in constrained environments can use the above socket activation OR ...
just ensure these daemons are launched, to avoid problems. same story as socket
activation: they have a constrained env and they have to do some work to take
care of things in such an env.

and for the rest, things just work™. that's the important bit - that they just
work out of the box.

> If systemd is not used, then the client will fail to connect, will
> fork-exec the daemon and it will create and bind the socket on its
> own, inheriting parent's caps. A possible workaround for this is to
> die after some idle timeout, then it would "auto-fix" if problems like
> that happen.
> 
> looks good?

no. no idle die. see above. :) it's portable. it's simple. it works everywhere
until you have oddball smack/selinux etc. envs and then the env can take care
of it via launch or socket activation. first make the "works everywhere"
version. :)

> >> however as you dislike that, be my guest :-D
> >>
> >>  gcc -o efl-proxy-resolver libproxy-efl-server.c `pkg-config --cflags
> >> --libs ecore eina efl eo libproxy-1.0`
> >>  gcc -shared -o libproxy-efl.so libproxy-efl.c `pkg-config --cflags
> >> eina` -lpthread
> >
> > why did you do both a client exe and a server? just need the server... the
> > client connect/talk is in efl.net :) if you use efl.net like ecore_con
> > it'd do the XDG_RUNTIME_DIR for you for user local sockets (unless they are
> > full path). you should just have re-used efl.net inside there.
> 
> client is not an exe, it's a libproxy.so drop-in replacement so you
> can LD_PRELOAD, like from E... after all you want all your processes
> to use the same proxy configuration and benefits, right? VLC, glib,
> qt...
> 
> -- 
> Gustavo Sverzut Barbieri
> --------------------------------------
> Mobile: +55 (16) 99354-9890
> 


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ras...@rasterman.com

