Bárður Árantsson wrote:
> Yannick Lecaillez wrote:
>   
> [snip]
> I took it as being static, so
>
> system/    is _always_ mapped to filesys
> system/sw  is _always_ mapped to any backend
> user:      is _always_ mapped to any backend
>
> (And only those mappings can exist.)
>   
Well... as far as a given program's lifespan is concerned, yes.
I'm not completely sure whether we actually need to allow *any and
every* subtree to be remapped to a different backend -- keep in mind
that this makes lookups *way more difficult* to do. My feeling here is
that mapping should be limited to the first two or three levels at most
(the ones "forced" by the specification).
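To illustrate why capping the remappable depth keeps lookups cheap, here is a small sketch (all names hypothetical, not Elektra's actual API) of resolving a key to its backend by longest-prefix match -- with at most N remappable levels, each lookup tests at most N prefixes:

```python
def resolve_backend(mappings, key, max_depth=3):
    """mappings: dict of subtree-prefix -> backend name (hypothetical).

    Only the first max_depth levels can be remapped, so a lookup never
    has to test more than max_depth candidate prefixes.
    """
    parts = key.split("/")
    for depth in range(min(max_depth, len(parts)), 0, -1):
        prefix = "/".join(parts[:depth])
        if prefix in mappings:
            return mappings[prefix]
    return "filesys"  # default backend per the fixed mappings above
```

With unrestricted remapping, every level of an arbitrarily deep key would be a candidate, and the number of prefix tests would grow with key depth instead of staying constant.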
> I was going to write about an alternative mapping here which would be 
> more friendly to large networks of machines, but I think I just realized 
> that static mappings maybe are not flexible enough to remain sane for 
> both small and large configurations. (The mapping I wanted would also 
> store machine configurations in a central location, but mapped so that 
> the configurations were still separate from each other.)
>
> Can I change my mind? ;)
>
> I think we really want dynamic or at the very least 
> arbitrary-but-only-read-at-startup mappings.
>   
The second is the only viable option.
BTW, what you propose is "easily" feasible by specifying a mapping as:

---
/etc/elektra/mappings:
system/sw/subtree      backend:<applicable parameters>
---

---
/etc/elektra/backends.conf:
[berkeleydb]

[mysql]
---

where the parameters passed as "arguments" to the backend are
overrides for those in the backends.conf file.

Example:
---
/etc/elektra/mappings:
system/sw/samba/  ldap:base="ou=samba,ou=daemons,dc=example,dc=com",
    binddn="cn=samba,ou=daemons,dc=example,dc=com"

system/sw/myapplication/
mysql:host=db.example.com,user=dbuser,pass=dbpass,db=myconfigdb,query=myconfigquery
---

with a separate /etc/elektra/backend-mysql.conf containing a query
called "myconfigquery".

It is done this way so that a more flexible syntax can be used for
"programmable backends" (MySQL, PgSQL, LDAP) while the "mappings" parser
can be kept simpler and thus more robust.
In fact, even LDAP can be abstracted a bit further. Please see Postfix's
"db_common" infrastructure for a very good approach.
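A minimal sketch of such a mappings parser, assuming the hypothetical one-line format "<subtree> <backend>:<k>=<v>,<k>=<v>,..." shown above (parameter names and the merge helper are mine, for illustration):

```python
def parse_mapping_line(line):
    # "<subtree> <backend>:<k>=<v>,..." -> (subtree, backend, overrides)
    subtree, spec = line.split(None, 1)
    backend, _, params = spec.partition(":")
    overrides = {}
    if params:
        for pair in params.split(","):
            key, _, value = pair.partition("=")
            overrides[key.strip()] = value.strip().strip('"')
    return subtree, backend, overrides

def effective_params(backend_defaults, overrides):
    # backends.conf supplies the defaults; the mapping line wins on conflict.
    merged = dict(backend_defaults)
    merged.update(overrides)
    return merged
```

The whole format stays line-oriented and splittable on two characters (space and comma), which is what keeps the parser simple; anything richer lives in the per-backend conf file.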

> [--snip--]
>   
>> The fact is I would like as much flexibility and KISS (Keep It Simple,
>> Stupid) as possible. Creating pipes, processes, and agents dynamically
>> will simply add complexity without adding (IMHO) enough value.
>>     
>
> Pipes wouldn't _have_ to be created dynamically. With named pipes 
Better to use UNIX-domain sockets: less cumbersome, and they allow
"multiplexing" :-)
> they could be precreated, and any particular user's daemon could be started 
> by the user themselves -- though probably by libelektra starting it for 
> them.
>   
This was my first suggestion, if you read it carefully.
Right now, I do prefer to have the "central" kdb fork an instance for
each user requesting access to their particular "registry hives" (sic --
forgive my windoze speak).

Once again, the "daemon" running as the user is needed both for
serialization (allowing more than one program to concurrently read
and write a particular user's configuration) and for performance
(opening the berkeleydb / what-have-you just once and even allowing
caching where applicable).
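The fork-an-instance-per-user idea might be sketched like this (hypothetical socket layout and serve callback; prototype-level Python, as suggested later in the thread -- the real thing would be C):

```python
import os

def socket_path_for(uid):
    # One UNIX-domain socket per user daemon (hypothetical layout).
    return "/var/run/elektra/user-%d.sock" % uid

def spawn_user_daemon(uid, gid, serve):
    """Fork a child that drops privileges to uid/gid, then serves
    that user's tree exclusively on its own socket."""
    pid = os.fork()
    if pid == 0:
        # Child: drop privileges *before* touching any user data.
        os.setgid(gid)
        os.setuid(uid)
        serve(socket_path_for(uid))  # open the backend once, then loop
        os._exit(0)
    return pid  # parent keeps the pid for routing and reaping
```

The privilege drop is what buys the security separation discussed below: a compromised per-user daemon can only touch that user's own hive.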
> As I think I mentioned, the setup I had in mind doesn't actually require 
> separate processes -- it simply requires being able to do non-blocking 
> I/O (which the daemon needs to do anyway) and to direct requests to 
> multiple different backends. However, if you want the best possible 
> security you _have_ to separate the "user daemons" from the "system" 
> daemon (and the root:root daemon which directs requests to the 
> appropriate daemon).
>
> It _is_ somewhat complicated,
Not that much -- as soon as security is of any importance, it quickly
becomes far easier to secure multiple *simple* cooperating processes than
a big "do-it-all" daemon: go look at Sendmail if you need proof of this.
>  but frankly, I would just start by 
> creating a prototype/proof of concept in Python, Ruby or some other 
> high-level language... and only after that has show that the concept is 
> workable worry about implementing it in a low-level language like C. 
> (C++ might also be considered as the "final" language... Boost.Asio is 
> set to make non-blocking I/O very easy and Boost.Statechart makes 
> implementing server state very easy indeed.)
>   
I fail to see where AIO is *required* for this (or even desirable, most
of the time) since most operations have to be serialized anyway. I do
see one place where it is needed: within the filesys backend, in order to
avoid stalling requests... but I seriously think that the daemon
only needs to "multiplex and distribute" requests among the backends
according to the mappings.
It is up to the particular backend to decide whether to use AIO or not.

Regarding the daemon itself, nothing fancier than select() is needed,
nor can anything fancier really be justified for performance reasons
given the kind of workload it will be servicing. Moreover, restricting
ourselves to just select() [vs. epoll(), for example] avoids yet more
portability issues.
>> Adding caching to kdbd is something I'm thinking about too. It's "easy"
>> to do for local backends (i.e., stored locally on the filesystem) but
>> not as easy for remote backends: the cache has to be aware of each
>> modification, and those modifications could have been made by someone
>> other than you (your instance of kdbd).
>>     
>
> Well, you can always relax the consistency requirements slightly if 
> you're willing to live with programs possibly using slightly outdated 
> configuration information (usually not that big a deal unless you do 
> modifications on both "sides").
>   
The particular backend for that subtree can always redirect the write
request to a central "read-write" store which then replicates changes to
as many R/O slaves as needed -- this is OpenLDAP's model, by the way.
>> AFAIK, LDAP allows you to monitor some
>> branch/objectClass for modifications. But for SQL that's another story...
>> So perhaps caching would have to be done in the real
>> backends themselves rather than in kdbd.
>>     
>
> I think caching is best left to the LDAPs of the world. I'm sure there 
> is a *lot* of expertise and work that has been poured into making LDAP 
> caching work reasonably and reliably, and I would be surprised if a 
> relatively small project like Elektra could come up with something 
> *significantly* better to justify the effort... Anyway, the door on that 
> is still open once the basic stuff is in place.
>   
Indeed. Let's try to get a 1.0 version out the door and then we can
improve as much as needed.



    J.L.


_______________________________________________
Registry-list mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/registry-list
