On 18.4.2016 17:44, Petr Spacek wrote:
> On 18.4.2016 16:42, Martin Basti wrote:
>>
>>
>> On 18.04.2016 15:22, Petr Spacek wrote:
>>> On 6.4.2016 10:57, Petr Spacek wrote:
>>>> On 6.4.2016 10:50, Jan Cholasta wrote:
>>>>> On 4.4.2016 13:51, Petr Spacek wrote:
>>>>>> On 4.4.2016 13:39, Martin Basti wrote:
>>>>>>>
>>>>>>> On 31.03.2016 09:58, Petr Spacek wrote:
>>>>>>>> On 26.2.2016 15:37, Petr Spacek wrote:
>>>>>>>>> On 25.2.2016 16:46, Simo Sorce wrote:
>>>>>>>>>> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
>>>>>>>>>>> On 25.2.2016 15:28, Simo Sorce wrote:
>>>>>>>>>>>> On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
>>>>>>>>>>>>> Variant C
>>>>>>>>>>>>> ---------
>>>>>>>>>>>>> An alternative is to be lazy and dumb. Maybe it would be enough
>>>>>>>>>>>>> for the first round ...
>>>>>>>>>>>>>
>>>>>>>>>>>>> We would retain
>>>>>>>>>>>>> [first step - no change from variant A]
>>>>>>>>>>>>> * create locations
>>>>>>>>>>>>> * assign 'main' (aka 'primary' aka 'home') servers to locations
>>>>>>>>>>>>> ++ specify weights for the 'main' servers in given location, i.e.
>>>>>>>>>>>>> manually
>>>>>>>>>>>>> input (server, weight) tuples
>>>>>>>>>>>>>
>>>>>>>>>>>>> Then, backups would be the auto-generated set of all remaining
>>>>>>>>>>>>> servers from all other locations.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Additional storage complexity: 0
>>>>>>>>>>>>>
>>>>>>>>>>>>> This covers the scenario "always prefer local servers and use
>>>>>>>>>>>>> remote only as fallback" easily. It does not cover any other
>>>>>>>>>>>>> scenario.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This might be sufficient for the first run and would allow us to
>>>>>>>>>>>>> gather some
>>>>>>>>>>>>> feedback from the field.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Now I'm inclined to this variant :-)
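[Inline note: to make Variant C concrete, here is a rough Python sketch of the
intended SRV generation. The priority values 0/50 and all server names are
illustrative placeholders of mine, not taken from the design page: local
servers keep their configured weights at the highest priority, and every
server from every other location becomes a backup at a single lower priority.]

```python
# Sketch of Variant C SRV generation (illustrative only; the priority
# values 0 and 50 are placeholders, not from the design page).
def srv_records_for(location, locations):
    """locations: dict mapping location name -> {server_fqdn: weight}."""
    records = []
    # 'Main' servers of this location get the highest priority (lowest number).
    for server, weight in locations[location].items():
        records.append((0, weight, server))
    # Backups: all servers from all other locations share one lower priority,
    # each keeping the weight defined within its own location.
    for other, servers in locations.items():
        if other == location:
            continue
        for server, weight in servers.items():
            records.append((50, weight, server))
    return sorted(records)

locs = {
    "brno":   {"a.ipa.test": 100, "b.ipa.test": 50},
    "prague": {"c.ipa.test": 100},
}
# Local servers come first (priority 0), remote ones follow as backups.
print(srv_records_for("brno", locs))
```

Note how no extra storage is needed: the backup set is derived entirely from
the existing location membership, which is the whole point of Variant C.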
>>>>>>>>>>>> To be honest, this is all I always had in mind, for the first step.
>>>>>>>>>>>>
>>>>>>>>>>>> To recap:
>>>>>>>>>>>> - define a location with the list of servers (perhaps location is a
>>>>>>>>>>>> property of server objects so you can have only one location per
>>>>>>>>>>>> server,
>>>>>>>>>>>> and if you remove the server it is automatically removed from the
>>>>>>>>>>>> location w/o additional work or referential integrity necessary), 
>>>>>>>>>>>> if
>>>>>>>>>>>> weight is not defined (default) then they all have the same weight.
>>>>>>>>>>> Agreed.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> - Allow to specify backup locations in the location object, 
>>>>>>>>>>>> priorities
>>>>>>>>>>>> are calculated automatically and all backup locations have same
>>>>>>>>>>>> weight.
>>>>>>>>>>> Hmm, weights have to be inherited from the original location in all
>>>>>>>>>>> cases. Did you mean that all backup locations have the same
>>>>>>>>>>> *priority*?
>>>>>>>>>> Yes, sorry.
>>>>>>>>>>
>>>>>>>>>>> Anyway, explicit configuration of backup locations introduces the
>>>>>>>>>>> API and schema for variant A, and that is what I'm questioning
>>>>>>>>>>> above. It is hard to make it extensible enough that we avoid a
>>>>>>>>>>> headache in the future when somebody decides that more flexibility
>>>>>>>>>>> is needed OR that a link-based approach is better.
>>>>>>>>>> I think that no matter what we do, we'll need to allow admins to
>>>>>>>>>> override backup locations. In the future, if we can calculate them
>>>>>>>>>> automatically, admins will simply not set any backup location
>>>>>>>>>> explicitly (or will set some special value like "autogenerate" and
>>>>>>>>>> the system will do it for them).
>>>>>>>>>>
>>>>>>>>>> Forcing admins to mentally calculate weights to force the system to
>>>>>>>>>> autogenerate the configuration they want would be a bad experience, I
>>>>>>>>>> personally would find it very annoying.
>>>>>>>>>>
>>>>>>>>>>> In other words, to do what you propose above we would have to
>>>>>>>>>>> design the complete schema and API for variant A anyway to make
>>>>>>>>>>> sure we do not lock ourselves in, so we are not getting any savings
>>>>>>>>>>> by doing so.
>>>>>>>>>> A seemed much more complicated to me, as you wanted to define a full
>>>>>>>>>> matrix of weights for servers when they serve as backups and all
>>>>>>>>>> that.
>>>>>>>>>>
>>>>>>>>>>>> - Define a *default* location, which is the backup for any other
>>>>>>>>>>>> location but always with lower priority to any other explicitly
>>>>>>>>>>>> defined
>>>>>>>>>>>> backup locations.
>>>>>>>>>>> I would rather *always* use the default location as the backup for
>>>>>>>>>>> all other locations. It does not require any API or schema (as it
>>>>>>>>>>> equals "all servers" except "servers in this location", which can
>>>>>>>>>>> be easily calculated on the fly).
>>>>>>>>>> We can start with this, but it works well only in a star topology
>>>>>>>>>> where you have a central location that all other locations connect
>>>>>>>>>> to. As soon as you have a two-tier topology with hub locations to
>>>>>>>>>> which regional locations connect, this is wasteful.
>>>>>>>>>>
>>>>>>>>>>> This can later be extended in whatever direction we want without
>>>>>>>>>>> any upgrade/migration problem.
>>>>>>>>>>>
>>>>>>>>>>> More importantly, all the schema and API will be common for all
>>>>>>>>>>> other variants anyway, so we can start doing so and see how much
>>>>>>>>>>> time is left when it is done.
>>>>>>>>>> I am OK with this for the first step.
>>>>>>>>>> After all, locations are mostly about the "normal" case where
>>>>>>>>>> clients want to reach the local servers; the backup part is only an
>>>>>>>>>> additional feature we can keep simple for now. It's a degraded mode
>>>>>>>>>> of operation anyway, so it is probably OK to have just one default
>>>>>>>>>> backup location as a starting point.
>>>>>>>>> Okay, now we are in agreement. I will think about minimal schema and 
>>>>>>>>> API
>>>>>>>>> over
>>>>>>>>> the weekend.
>>>>>>>> Well, it took longer than one weekend.
>>>>>>>>
>>>>>>>> There were a couple of changes in the design document:
>>>>>>>> * Feature Management: CLI proposal
>>>>>>>> * Feature Management: web UI - the idea with a topology graph replaced
>>>>>>>> the original complicated table
>>>>>>>> * Feature Management: described necessary configuration outside of IPA DNS
>>>>>>>> * Version 1 parts which were moved into a separate document:
>>>>>>>> V4/DNS_Location_Mechanism_with_per_client_override
>>>>>>>> * Assumptions: removed misleading reference to DHCP, clarified role of
>>>>>>>> DNS views
>>>>>>>> * Assumptions: removed misleading mention of 'different networks' and
>>>>>>>> added a summary explaining how a Location is defined
>>>>>>>> * Implementation: high-level outline added
>>>>>>>>
>>>>>>>> Current version:
>>>>>>>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
>>>>>>>>
>>>>>>>> Full diff:
>>>>>>>> http://www.freeipa.org/index.php?title=V4%2FDNS_Location_Mechanism&diff=12603&oldid=12514
>>>>>>>>
>>>>>>>> Practical usage is described in section How to test:
>>>>>>>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#How_to_Test
>>>>>>>>
>>>>>>>>
>>>>>>>> I will think about LDAP schema after we agree on CLI.
>>>>>>>>
>>>>>>>> Petr^2 Spacek
>>>>>>>>
>>>>>>>>
>>>>>>>>> Petr^2 Spacek
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>>> - Weights for backup location servers are the same as the weight
>>>>>>>>>>>> defined
>>>>>>>>>>>> within the backup location itself, so no additional weights are
>>>>>>>>>>>> defined
>>>>>>>>>>>> for backups.
>>>>>>>>>>> Yes, that was implicitly assumed in variant A. Sorry for not
>>>>>>>>>>> mentioning it. Weight is always a relative number among servers
>>>>>>>>>>> inside one location.
>>>>>>>>>> Ok it looked a lot more complex from your description.
>>>>>>>>>>
>>>>>>>>>> Simo.
>>>>>>> Design review:
>>>>>>>
>>>>>>> 1)
>>>>>>> You missed warning when there is no backup DNS server in location
>>>>>> Thanks, added.
>>>>>>
>>>>>>
>>>>>>> 2)
>>>>>>> "Number of IPA DNS servers <= number of configured IPA locations" - I
>>>>>>> don't understand this.
>>>>>>>
>>>>>>> You need at least one DNS server per location, thus DNS servers >=
>>>>>>> locations
>>>>>> Good catch, fixed.
>>>>>>
>>>>>>
>>>>>>> 3)
>>>>>>> Design (Version 1: DNAME per client)  Link to design doesn't work for me
>>>>>> Oh, my wiki-fu was weak. Fixed.
>>>>>>
>>>>>>
>>>>>>> CLI looks good to me. Maybe we should explicitly write in the design
>>>>>>> that priorities of the SRV records will be set statically (what values?
>>>>>>> 0 for servers in the location, 100 for backup?)
>>>>>> I've added a note about static priorities. Particular values are just an
>>>>>> implementation detail, so I would not clutter the feature management
>>>>>> section with that.
>>>>> If server can be only in one location, why bother with
>>>>> location-{add,mod,remove}-member and not use server-mod:
>>>>>
>>>>>      server-mod <FQDN> --location=<NAME> [--location-weight=0..65535]
>>>>>
>>>>> ? This is the natural way to model one-to-many relationships in the API,
>>>>> consistent with existing stuff.
>>>> I originally wanted to have a location-add-member command so that
>>>> (external) DNS servers and IPA servers can be assigned to a location using
>>>> the same command:
>>>> location-add-member     LOCATION_NAME --ipa-server=<FQDN>
>>>> location-add-member     LOCATION_NAME --advertising-server=<server/view ID>
>>>>
>>>> Should I split this between
>>>> server-mod <FQDN> --location=<NAME> [--location-weight=0..65535]
>>>> and
>>>> dnsserver-mod <server/view ID> --type=external --advertise-location=...
>>>>
>>>> I do not like splitting server-to-location assignment management between 
>>>> two
>>>> commands very much. Current proposal in design page was inspired by
>>>> group-add-member command which has --users and --groups options which 
>>>> seemed
>>>> philosophically similar to me.
>>>>
>>>> Anyway, I'm open to suggestions how to handle this.
>>> Honza and I are playing with the idea that Server Roles can be re-used for
>>> Locations, too.
>>>
>>> The rough idea is that the 'advertising' server would have a role like 'DNS
>>> Location XYZ DNS server' and that the member server would have a role like
>>> 'IPA master in location XYZ'.
>>>
>>> (Pick your own names, these are just examples.)
>>>
>>> An obvious advantage is consistency in the user interface, which is
>>> something we really need.
>>>
>>> The question is where to put the equivalent of the --weight option.
>>>
>>> This would make location-add-member command unnecessary.
>>>
>> Today I found out that I misunderstood how non-IPA SRV records will work with
>> the DNS locations feature.
>>
>> I expected that other SRV records stored in the IPA domain would be copied
>> unchanged to locations, and that only the SRV records for IPA services would
>> be altered with priorities.
>>
>> However, DNS locations *will not* handle SRV records other than the IPA
>> ones, which effectively means that custom user SRV records will disappear
>> for hosts that belong to a location.
> 
> Yes, thank you for pointing this out explicitly.
> 
> I've tried to capture all I know about this to the design page:
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_2:_DNAME_per_sub-tree.29
> 
> Copy follows so we can discuss it quickly:
> 
> === Interaction with hand-made records ===
> A side-effect of DNAME-redirecting the <tt>_udp</tt> and <tt>_tcp</tt>
> subdomains is that all original names under these subdomains become
> occluded/invisible to clients (see
> [https://tools.ietf.org/html/rfc6672#section-2.4 RFC 6672 section 2.4]).
> 
> This effectively means that hand-made records in the IPA DNS domain will
> become invisible. E.g. the following record will disappear when DNS locations
> are configured and enabled on IPA domain <tt>ipa.example</tt>:
> 
>  _userservice._udp.ipa.example. SRV 0 100 123 own-server.somewhere.example
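[Inline note: the occlusion rule itself is simple to model. This is only my
sketch of the RFC 6672 section 2.4 semantics, not project code: any name
strictly below a DNAME owner becomes invisible, while the owner itself stays
visible.]

```python
# Minimal model of DNAME occlusion (RFC 6672 section 2.4):
# a name strictly below a DNAME owner is occluded; the owner itself is not.
def is_occluded(name, dname_owners):
    return any(name != owner and name.endswith("." + owner)
               for owner in dname_owners)

# The design would place DNAMEs at these owners in ipa.example:
dnames = ["_udp.ipa.example", "_tcp.ipa.example"]

assert is_occluded("_userservice._udp.ipa.example", dnames)      # hidden
assert not is_occluded("_udp.ipa.example", dnames)               # owner visible
assert not is_occluded("own-server.somewhere.example", dnames)   # unrelated
```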
> 
> This behavior is in fact necessary for seamless upgrade of replicas which do
> not understand the new template LDAP entries in the DNS tree. Old replicas
> will ignore the template entries and use the original sub-tree (and ignore
> the <tt>_locations</tt> sub-tree). New replicas will understand the entry,
> generate DNAME records, and thus occlude the old names and use only the new
> ones (in the <tt>_locations</tt> sub-tree).
> 
> Note: This would be unnecessary if IPA used the standard DNS update protocol
> against a standard DNS server with non-replicated zones, because we would not
> need to play DNAME tricks. In that case we could instead update records on
> each server separately. With the current LDAP schema we cannot do that
> without adding a non-replicated part of the LDAP tree to each DNS server.
> * If we added a non-replicated sub-tree to each IPA DNS server we would have
> another set of problems, because hand-made entries would not be replicated
> among IPA servers.
> 
> Handling of hand-made records raises some interesting questions:
> * How to find hand-made records?
> ** A blacklist on the name level or the record-data level? Which record
> fields should we compare?
> * How to handle collisions with IPA-generated records?
> ** Ignore hand-made records?
> ** Add hand-made records?
> ** Replace IPA generated ones with hand-made ones?
> * What triggers hand-made record synchronization?
> ** Should the user or IPA framework call ''ipa dnslocation-fix-records'' after
> each hand-made change to DNS records?
> ** How is this synchronization supposed to work with the DNS update protocol?
> Currently we do not have a means to trigger an action when a record is
> changed in LDAP.
> * How does it affect interaction with older IPA DNS servers (see above)?
> 
> There are several options:
> {{clarify|reason=What to do with hand-made records?}}
> * For first version, document that enabling DNS location will hide hand-made
> records in IPA domain.
> * Add non-replicated sub-trees for IPA records and somehow solve replication
> of hand-made records.
> ** What is the proper granularity? Create 20 backends so we can filter on
> name-level?
> * Do 'something' which prevents replication of IPA-generated DNS records among
> servers while still using one LDAP suffix.
> ** With this in place we can mark IPA-generated records as non-replicable
> while still replicating hand-made records as usual. (An object class like
> <tt>idnsRecordDoNotReplicate</tt>?) This would mean that we can drop whole
> <tt>_locations</tt> sub-tree and each server will hold only its own copy of
> DNS records.
> * Find, filter and copy hand-made records from the main tree into the
> <tt>_locations</tt> sub-trees. This means that every hand-made record needs
> to be copied and synchronized N times, where N = number of IPA locations.
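[Inline note: to give a sense of what the last option implies, here is a sketch
of the N-times copy. The record representation and the 'ipa_generated' flag are
invented for illustration; real entries would be LDAP objects.]

```python
# Illustrative sketch only: the dict representation and the 'ipa_generated'
# flag are made up. The point is the cost: every hand-made record gets one
# copy per location, so N locations mean N synchronized copies.
def sync_handmade(records, location_names):
    """Copy every non-IPA-generated record under each location sub-tree."""
    copies = {}
    for loc in location_names:
        copies[loc] = [dict(r) for r in records if not r["ipa_generated"]]
    return copies

records = [
    {"name": "_kerberos._udp", "ipa_generated": True},    # managed by IPA
    {"name": "_userservice._udp", "ipa_generated": False},  # hand-made
]
out = sync_handmade(records, ["brno", "prague", "london"])
# One hand-made record, three locations -> three copies to keep in sync.
assert all(len(v) == 1 for v in out.values())
```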
> 
> 
> My favorite option for the first version is 'document that enabling DNS
> location will hide hand-made records in IPA domain.'
> 
> The feature is disabled by default and needs additional configuration anyway
> so simply upgrading should not break anything.
> 
> 
> I'm eager to hear opinions and answers to questions above.

Yet another option:
Replace the DNAME with a CNAME on *each* IPA-managed name.

This would mean some changes in
https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator

In particular, idnsTemplateObject would have to somehow replace all other
attributes on a given entry (as a CNAME cannot coexist with anything else),
and we would have to somehow solve the problem with servers where the template
is present but substitution is not configured.

I will dream about that over night.
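For the record, the constraint that makes this tricky can be stated as a
one-line check. This is only a sketch of the rule from RFC 1034 section 3.6.2
(a CNAME cannot coexist with other data at the same owner name); the attribute
names are made up, not the real bind-dyndb-ldap schema:

```python
# Sketch of the CNAME exclusivity rule (RFC 1034 section 3.6.2) applied to
# the proposed per-name substitution. Attribute names are illustrative.
def cname_substitution_valid(entry):
    """An entry turned into a CNAME must not carry any other record data."""
    other_records = [k for k in entry
                     if k.endswith("Record") and k != "cNAMERecord"]
    return "cNAMERecord" not in entry or not other_records

# Substituted entry: only the CNAME survives -> valid.
assert cname_substitution_valid({"cNAMERecord": ["x._locations.ipa.test."]})
# CNAME alongside an A record at the same name -> invalid.
assert not cname_substitution_valid(
    {"cNAMERecord": ["x."], "aRecord": ["192.0.2.1"]})
```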

Petr^2 Spacek

> 
> Petr^2 Spacek
> 
> 
>> Example:
>> domain: ipa.test
>> server: server.ipa.test
>> custom SRV record in the IPA domain: _userservice._udp SRV 0 100 123
>> server.ipa.test.
>>
>> The record above will not be accessible from clients that connect to a
>> server with enabled locations. I think that users may have their own
>> services on IPA servers with custom SRV records. I don't consider this
>> behavior user-friendly, and it is a blocker for deployment of this feature.
>>
>> NACK to design from me. We should fix this.
> 
> 
> 


-- 
Petr^2 Spacek

-- 
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code
