[Freeipa-devel] [PATCH] Fix various typos

2012-09-17 Thread Martin Kosek
ACK for typo fixes made by Yuri Chornoivan (patch attached).

Pushed to master, ipa-3-0.

Martin
From c1b421a6bfdf0d0b2eec8aed409b720c6e1ab783 Mon Sep 17 00:00:00 2001
From: Yuri Chornoivan 
Date: Sun, 16 Sep 2012 19:35:56 +0300
Subject: [PATCH] Fix various typos.

https://fedorahosted.org/freeipa/ticket/3089
---
 install/tools/man/ipa-adtrust-install.1 |2 +-
 install/tools/man/ipa-replica-manage.1  |2 +-
 install/ui/test/data/ipa_init.json  |2 +-
 install/ui/test/data/ipa_init_commands.json |4 ++--
 install/ui/test/ipa_tests.js|4 ++--
 ipa-client/man/default.conf.5   |2 +-
 ipa-client/man/ipa-client-install.1 |2 +-
 ipa-client/man/ipa-getkeytab.1  |2 +-
 ipa-client/man/ipa-join.1   |2 +-
 ipa.1   |2 +-
 ipalib/errors.py|2 +-
 ipalib/plugins/automember.py|2 +-
 ipalib/plugins/dns.py   |6 +++---
 ipalib/plugins/idrange.py   |6 +++---
 ipalib/plugins/internal.py  |2 +-
 ipalib/plugins/sudorule.py  |4 ++--
 ipalib/plugins/trust.py |4 ++--
 ipalib/plugins/user.py  |2 +-
 ipalib/session.py   |4 ++--
 ipapython/log_manager.py|2 +-
 tests/test_xmlrpc/test_attr.py  |2 +-
 21 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/install/tools/man/ipa-adtrust-install.1 b/install/tools/man/ipa-adtrust-install.1
index 936e04c..5303ec2 100644
--- a/install/tools/man/ipa-adtrust-install.1
+++ b/install/tools/man/ipa-adtrust-install.1
@@ -22,7 +22,7 @@ ipa\-adtrust\-install \- Prepare an IPA server to be able to establish trust rel
 .SH "SYNOPSIS"
 ipa\-adtrust\-install [\fIOPTION\fR]...
 .SH "DESCRIPTION"
-Adds all necesary objects and configuration to allow an IPA server to create a
+Adds all necessary objects and configuration to allow an IPA server to create a
 trust to an Active Directory domain. This requires that the IPA server is
 already installed and configured.
 .SH "OPTIONS"
diff --git a/install/tools/man/ipa-replica-manage.1 b/install/tools/man/ipa-replica-manage.1
index 98103ff..2767579 100644
--- a/install/tools/man/ipa-replica-manage.1
+++ b/install/tools/man/ipa-replica-manage.1
@@ -120,7 +120,7 @@ A special user entry is created for the PassSync service. The DN of this entry i
 The following examples use the AD administrator account as the synchronization user. This is not mandatory but the user must have read\-access to the subtree.
 
 .TP
-1. Transfer the base64\-encoded Windows AD CA Certficate to your IPA Server
+1. Transfer the base64\-encoded Windows AD CA Certificate to your IPA Server
 .TP
 2. Remove any existing kerberos credentials
   # kdestroy
diff --git a/install/ui/test/data/ipa_init.json b/install/ui/test/data/ipa_init.json
index 9c158d5..0d94d9b 100644
--- a/install/ui/test/data/ipa_init.json
+++ b/install/ui/test/data/ipa_init.json
@@ -110,7 +110,7 @@
 "refresh": "Refresh the page.",
 "reload": "Reload the browser.",
 "main_page": "Return to the main page and retry the operation",
-"title": "An error has occured (${error})"
+"title": "An error has occurred (${error})"
 },
 "errors": {
 "error": "Error",
diff --git a/install/ui/test/data/ipa_init_commands.json b/install/ui/test/data/ipa_init_commands.json
index 4237cc8..2c128f7 100644
--- a/install/ui/test/data/ipa_init_commands.json
+++ b/install/ui/test/data/ipa_init_commands.json
@@ -16780,9 +16780,9 @@
 },
 {
 "class": "Password",
-"doc": "Active directory domain adminstrator's password",
+"doc": "Active directory domain administrator's password",
 "flags": [],
-"label": "Active directory domain adminstrator's password",
+"label": "Active directory domain administrator's password",
 "name": "realm_passwd",
 "noextrawhitespace": true,
 "type": "unicode"
diff --git a/install/ui/test/ipa_tests.js b/install/ui/test/ipa_tests.js
index 7a2c18b..478196c 100644
--- a/install/ui/test/ipa_tests.js
+++ b/install/ui/test/ipa_tests.js
@@ -110,7 +110,7 @@ test("Testing successful IPA.command().", function() {
 
 var xhr = {};
 var text_status = null;
-var error_thrown = {name:'ERROR', message:'An error has occured'};
+var error_thrown = {name:'ERROR', message:'An error has occurred'};
 
 var ajax_counter = 0;
 
@@ -186,7 +186,7 @@ test("Testing unsuccessful IPA.command().", function() {

Re: [Freeipa-devel] [PATCH] 212 Fix integer validation when boundary value is empty string

2012-09-17 Thread Endi Sukma Dewata

On 9/11/2012 10:09 AM, Petr Vobornik wrote:

There was an error in the number validation check. If a boundary value was an
empty string, validation of a number always failed. This patch fixes the
problem by not performing the check in these cases.

Basic unit tests for IPA.metadata_validator were created.


ACK. Some comments:

1. Instead of IPA.not_defined() it might be better to call it 
IPA.defined() to avoid double negations like this:


  if (!IPA.not_defined(metadata.minvalue, true) ...

2. The check_empty_str parameter could probably be made optional, defaulting
to true. It would make the code cleaner.
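A minimal sketch of what the suggested helper might look like. This is hypothetical: `IPA.defined` and its semantics are Endi's proposal as described above, not shipped code.

```javascript
var IPA = IPA || {};

// Hypothetical sketch of the proposed positive predicate. By default an
// empty string counts as "not defined"; passing false for check_empty_str
// treats '' as a defined value (mirroring the reviewed parameter).
IPA.defined = function(value, check_empty_str) {
    if (value === undefined || value === null) return false;
    if (check_empty_str !== false && value === '') return false;
    return true;
};

// The double negation from the review then reads naturally:
// if (IPA.defined(metadata.minvalue) && value < metadata.minvalue) ...
```

With a positive predicate, call sites avoid the `!IPA.not_defined(...)` double negation entirely.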


--
Endi S. Dewata

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 214 Fix jquery error when using '??' in a pkey

2012-09-17 Thread Endi Sukma Dewata

On 9/14/2012 8:00 AM, Petr Vobornik wrote:

This patch is only for FreeIPA 2.2. It is already fixed in 3.0.

If '??' is used in an adder dialog as a pkey, it can cause a
"jQuery15208158273949015573_1346241267446 was not called" error.

Updating the jQuery library fixes the issue. The update also reveals an incorrect
handler definition in ssh_key_widget, which is fixed as well.

https://bugzilla.redhat.com/show_bug.cgi?id=855278
https://fedorahosted.org/freeipa/ticket/3073


ACK.

--
Endi S. Dewata



Re: [Freeipa-devel] [PATCH] 302 Stricter IP network validator in dnszone-add command

2012-09-17 Thread Rob Crittenden

Martin Kosek wrote:

On 09/05/2012 01:02 PM, Jan Cholasta wrote:

Dne 5.9.2012 12:48, Martin Kosek napsal(a):

On 09/05/2012 12:36 PM, Jan Cholasta wrote:

Dne 5.9.2012 12:22, Petr Spacek napsal(a):

On 09/05/2012 11:30 AM, Jan Cholasta wrote:

Dne 5.9.2012 10:04, Martin Kosek napsal(a):

We allowed IP addresses without a network specification, which led
to unexpected results when the zone was being created. We should instead
strictly require the prefix/netmask specifying the IP network that
the reverse zone should be created for. This is already done in
the Web UI.

A unit test exercising this new validation was added.

https://fedorahosted.org/freeipa/ticket/2461



I don't like this much. I would suggest using CheckedIPAddress and not
forcing
the user to enter the prefix length instead.

CheckedIPAddress uses a sensible default prefix length if one is not
specified
(class-based for IPv4, /64 for IPv6) as opposed to IPNetwork (/32 for
IPv4,
/128 for IPv6 - this causes the erroneous reverse zones to be created as
described in the ticket).


Hello,

I don't like automatic netmask guessing. I have encountered class-based guessing
in Windows (XP?) and I was forced to overwrite the default mask all the time
...


If there was no guessing, you would have to write the netmask anyway, so I
don't see any harm in guessing here.



IMHO there is no "sensible default prefix" in the real world. I am sitting on a
network with a /23 prefix right now. Also, I have never seen a 10.x network
with a /8 prefix.



While this might be true for IPv4 in some cases, /64 is perfectly sensible for
IPv6. Also, I have never seen a 192.168.x.x network with a non-/24 prefix.

Honza



While this may be true for 192.168.x.x, it does not apply to 10.x.x.x networks,
as Petr already pointed out. I don't think there will be many people
expecting that a reverse zone of 10.0.0.0/24 would be created.


And they would be correct, because the default prefix length for a class A
network is /8, not /24.



And since FreeIPA is mainly deployed to internal networks, I assume this will
be the case for most users.

Martin



OK, but what about IPv6? Correct me if I'm wrong, but the prefix length is
going to be /64 99% of the time for IPv6.

The installer uses /24 for IPv4 addresses and /64 for IPv6 addresses, maybe
this should be used as a default here as well.

Honza



In the end, I chose a more liberal approach: instead of defining a
stricter validator for IPv4 only, I used the approach already implemented in
the installers, i.e. a default network prefix length of 24 for IPv4 and 64 for
IPv6.
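The agreed behaviour can be sketched with the standard library's `ipaddress` module. The real patch used FreeIPA's own address-handling machinery; the function name here is illustrative.

```python
import ipaddress

def parse_reverse_zone_network(value):
    """Parse a network spec, defaulting the prefix length to /24 for
    IPv4 and /64 for IPv6 when the user omits it (installer defaults)."""
    if '/' not in value:
        addr = ipaddress.ip_address(value)
        value = '%s/%d' % (addr, 24 if addr.version == 4 else 64)
    # strict=False masks out host bits instead of raising an error
    return ipaddress.ip_network(value, strict=False)
```

So 10.1.2.3 yields the network 10.1.2.0/24 rather than the surprising 10.1.2.3/32 that a bare IPNetwork default would produce.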

Updated patch attached.

Martin


Works for me. I wonder if this is a candidate for some more unit tests...

rob



Re: [Freeipa-devel] [PATCH] 0077 Check direct/reverse hostname/address resolution in ipa-replica-install

2012-09-17 Thread Rob Crittenden

Petr Viktorin wrote:

On 09/14/2012 08:46 AM, Martin Kosek wrote:

On 09/13/2012 10:35 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 09/11/2012 11:05 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 09/04/2012 07:44 PM, Rob Crittenden wrote:

Petr Viktorin wrote:


https://fedorahosted.org/freeipa/ticket/2845


Shouldn't this also call verify_fqdn() on the local hostname and not
just the master? I think this would eventually fail in the conncheck
but
what if that was skipped?

rob


A few lines above there is a call to get_host_name, which will call
verify_fqdn.



I double-checked this, it fails in conncheck. Here are my steps:

# ipa-server-install --setup-dns
# ipa-replica-prepare replica.example.com --ip-address=192.168.100.2
# ipa host-del replica.example.com

On replica, set DNS to IPA master, with hostname in /etc/hosts.

# ipa-replica-install ...

The verify_fqdn() passes because the resolver uses /etc/hosts.

The conncheck fails:

Execute check on remote master
Check connection from master to remote replica 'replica.example.com':

Remote master check failed with following error message(s):
Could not chdir to home directory /home/admin: No such file or
directory
Port check failed! Unable to resolve host name 'replica.example.com'

Connection check failed!
Please fix your network settings according to error messages above.
If the check results are not valid it can be skipped with
--skip-conncheck parameter.

The DNS test happens much later than this, and I get why; I just
don't see how useful it is unless --skip-conncheck is used.


For the record, it's because we need to check if the host has DNS
installed. We need a LDAP connection to check this.


ipa-replica-install ~rcrit/replica-info-replica.example.com.gpg
--skip-conncheck
Directory Manager (existing master) password:

ipa : ERRORCould not resolve hostname replica.example.com
using DNS. Clients may not function properly. Please check your DNS
setup. (Note that this check queries IPA DNS directly and ignores
/etc/hosts.)
Continue? [no]:

So I guess, what are the intentions here? It is certainly better than
before.

rob


If the replica is in the master's /etc/hosts, but not in DNS, the
conncheck will succeed. This check explicitly queries IPA records only
and ignores /etc/hosts so it'll notice this case and warn.



Ok, like I said, this is better than what we have. Just one nit, then you
get an ack:

+# If remote host has DNS, check forward/reverse resolution
+try:
+entry = conn.find_entries(u'cn=dns',
base_dn=DN(api.env.basedn))
+except errors.NotFound:

u'cn=dns' should be str(constants.container_dns).

rob


This is a search filter; Petr could use the one I already have in the
"dns.py::get_dns_masters()" function:
'(&(objectClass=ipaConfigObject)(cn=DNS))'

For performance's sake, I would also not search the entire tree, but
limit the
search to:

DN(('cn', 'masters'), ('cn', 'ipa'), ('cn', 'etc'), api.env.basedn)

Martin



Attaching updated patch with Martin's suggestions.


I think what Martin had in mind was:

if api.Object.dnsrecord.get_dns_masters():
...
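The check under discussion, sketched with a stub in place of the real LDAP connection. The filter string is the one Martin quotes; `NotFound` and the function name stand in for FreeIPA's actual `errors.NotFound` and internal code, so treat this as an illustration, not the merged patch.

```python
class NotFound(Exception):
    """Stand-in for ipalib's errors.NotFound."""

# Filter quoted from dns.py::get_dns_masters() in the thread.
DNS_MASTER_FILTER = '(&(objectClass=ipaConfigObject)(cn=DNS))'

def master_has_dns(conn, masters_base_dn):
    """Return True if the remote master advertises a DNS service.

    conn is any object exposing find_entries(filter, base_dn=...);
    masters_base_dn would be cn=masters,cn=ipa,cn=etc,<basedn>,
    limiting the search as Martin suggests.
    """
    try:
        return bool(conn.find_entries(DNS_MASTER_FILTER,
                                      base_dn=masters_base_dn))
    except NotFound:
        return False
```

Restricting `base_dn` to the masters subtree avoids a whole-tree search just to learn whether one service entry exists.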



Re: [Freeipa-devel] [PATCH] 0079 Update the pot file (translation source)

2012-09-17 Thread Jérôme Fenal
2012/9/17 Petr Viktorin 

> On 09/14/2012 09:36 PM, Jérôme Fenal wrote:
>
>> 2012/9/14 Petr Viktorin mailto:pvikt...@redhat.com
>
>  I pushed the pot manually.
>> Since we have infrequent explicit string freezes I don't think it's
>> necessary to configure automatic pot updates again.
>>
>>
>> Thanks Petr!
>>
>> Actually, having the strings updated on Transifex on a regular basis
>> makes it (IMHO) more manageable for translators to update the
>> translations even before a string freeze. Translating a dozen strings
>> per week is a lighter load than 339 strings in one batch.
>>
>
> A possible problem with this approach is that the translators would see
> and translate messages that don't make it into the final version. Do you
> think a more even workload would be worth the occasional extra work?
>

Having some extra work from time to time is OK. Having a huge batch of
strings to translate on a deadline is not. Especially with the day job
ramping up... ;-)


> I would like to change our i18n workflow/infrastructure. I was planning to
> (re-)start discussing this after the 3.0 release rush is done. It should be
> possible to do what you suggest.
>

Cool, thanks.


>  I also don't know whether pulls from Transifex or pushes from your side
>> have the effect of keeping a memory (in suggestions) of past or
>> close-enough strings for small modifications.
>>
>
> Sadly, I don't know much about Transifex itself. Perhaps ask the team
> there, and request the feature if it's missing.
>

This item is not that important; it is more of a wish.



>  Another comment/request, I don't know given my zero-level Python-fu:
>> would it be possible to break down the huge __doc__ strings in plugins
>> into smaller parts, as a small modification would then impact a smaller
>> string, easing maintenance instead of trying to track a one-character
>> modification in a 2000-character text.
>>
>> Does Python support concatenation of __doc__ strings?
>>
>
> That is possible on the Python side. I'm not sure how Transifex (and
> other translation tools) would cope with text split between several
> messages -- sorting and filtering the messages could take things out of
> context.
>

I'm aware of this issue, translators for others languages may raise their
hands. I could also probe trans@fp.o on that matter, to reach all of them.
That will nevertheless make changes more atomic, and overall easier to
manage on the longer term. In the past, changes were just a matter of
adding one paragraph, or adding one more usage example. Or fixing a typo.
Which is way harder to spot when you have a 1000 chars strings tagged as
changed.

Regards,

J.
-- 
Jérôme Fenal - jfenal AT gmail.com - http://fenal.org/
Paris.pm - http://paris.mongueurs.net/

Re: [Freeipa-devel] [PATCH] 1050 prevent replica orphans

2012-09-17 Thread Martin Kosek
On 09/17/2012 04:06 PM, Martin Kosek wrote:
> On 09/14/2012 09:16 PM, Rob Crittenden wrote:
>> Martin Kosek wrote:
>>> On 09/10/2012 08:34 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
> On Thu, 2012-09-06 at 17:22 -0400, Rob Crittenden wrote:
>> Martin Kosek wrote:
>>> On 08/31/2012 07:40 PM, Rob Crittenden wrote:
 Rob Crittenden wrote:
> It was possible to use ipa-replica-manage connect/disconnect/del to end 
> up
> orphaning one or more IPA masters. This is an attempt to catch and
> prevent that case.
>
> I tested with this topology, trying to delete B.
>
> A <-> B <-> C
>
> I got here by creating B and C from A, connecting B to C then deleting
> the link from A to B, so it went from A -> B and A -> C to the above.
>
> What I do is look up the servers that the delete candidate host has
> connections to and see if we're the last link.
>
> I added an escape clause if there are only two masters.
>
> rob

 Oh, this relies on my cleanruv patch 1031.

 rob

>>>
>>> 1) When I run ipa-replica-manage del --force to an already uninstalled 
>>> host,
>>> the new code will prevent the deletion because it cannot connect to
>>> it. It
>>> also crashes with UnboundLocalError:
>>>
>>> # ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
>>>
>>> Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing 
>>> removal
>>> Traceback (most recent call last):
>>>  File "/sbin/ipa-replica-manage", line 708, in 
>>>main()
>>>  File "/sbin/ipa-replica-manage", line 677, in main
>>>del_master(realm, args[1], options)
>>>  File "/sbin/ipa-replica-manage", line 476, in del_master
>>>sys.exit("Failed read master data from '%s': %s" % 
>>> (delrepl.hostname,
>>> str(e)))
>>> UnboundLocalError: local variable 'delrepl' referenced before assignment
>>
>> Fixed.
>>
>>>
>>>
>>> I also hit this error when removing a winsync replica.
>>
>> Fixed.
>>
>>>
>>>
>>> 2) As I wrote before, I think having --force option override the user
>>> inquiries
>>> would benefit test automation:
>>>
>>> +if not ipautil.user_input("Continue to delete?", False):
>>> +sys.exit("Aborted")
>>
>> Fixed.
>>
>>>
>>>
>>> 3) I don't think this code will cover this topology:
>>>
>>> A - B - C - D - E
>>>
>>> It would allow you to delete replica C even though it would separate 
>>> A-B
>>> and
>>> D-E. Though we may not want to cover this situation now, what you have is
>>> definitely helpful.
>>
>> I think you may be right. I only tested with 4 servers. With this B and
>> D would both still have 2 agreements so wouldn't be covered by the last
>> link test.
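The stronger check alluded to here is a connectivity test on the agreement graph with the deletion candidate removed. A hypothetical sketch (data layout and function name are illustrative, not the actual ipa-replica-manage code):

```python
def orphans_after_removal(agreements, candidate):
    """Return masters unreachable from the first surviving master once
    candidate is removed. agreements maps host -> set of its peers."""
    remaining = {host: peers - {candidate}
                 for host, peers in agreements.items() if host != candidate}
    if not remaining:
        return set()
    start = next(iter(remaining))          # any surviving master
    seen = {start}
    stack = [start]
    while stack:                           # simple depth-first traversal
        for peer in remaining[stack.pop()]:
            if peer not in seen:
                seen.add(peer)
                stack.append(peer)
    return set(remaining) - seen
```

In the A - B - C - D - E topology above, removing C leaves one side of the chain unreachable even though B and D each still hold two agreements, which is exactly the case the "last link" test misses.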
>
> Everything looks good now, so ACK. We just need to push it along with
> CLEANALLRUV patch.
>
> Martin
>

 Not to look a gift ACK in the mouth, but here is a revised patch. I've added a
 added a
 cleanup routine to remove an orphaned master properly. If you had tried the
 mechanism I outlined in the man page it would have worked but was
 less-than-complete. This way is better, just don't try it on a live master.

 I also added a cleanruv abort command, in case you want to kill an existing
 task.

 rob

>>>
>>> 1) CLEANRUV stuff should be in your patch 1031 and not here (but I will 
>>> comment
>>> on the code in this mail since it is in this patch)
>>>
>>>
>>> 2) In the new command definition:
>>>
>>> +"abort-clean-ruv":(1, 1, "Replica ID of to abort cleaning", ""),
>>>
>>> An error message is missing in case REPLICA_ID is not passed; this way the
>>> message
>>> shown when I omit the parameter is confusing:
>>>
>>> # ipa-replica-manage abort-clean-ruv
>>> Usage: ipa-replica-manage [options]
>>>
>>> ipa-replica-manage: error: must provide a command [force-sync | clean-ruv |
>>> disconnect | connect | list-ruv | del | re-initialize | list | 
>>> abort-clean-ruv]
>>>
>>> On another note, I am thinking about the new command(s). Since we now have
>>> an abort-clean-ruv command, we may also want to implement a list-clean-ruv
>>> command
>>> in the future to see all CLEANALLRUV tasks in progress... Or we may
>>> enhance
>>> list-ruv to print a flag like [CLEANALLRUV in progress] for RUVs for which
>>> a CLEANALLRUV task is in progress.
>>>
>>>
>>> 3) Will a clean-ruv - abort-clean-ruv - clean-ruv sequence work? If the
>>> aborted
>>> CLEANALLRUV task stays in the DIT, we may not be able to enter a new
>>> CLEANALLRUV task
>>> because we always use the same DN...
>>>
>>> Btw. did the abort CLEANALLRUV command work for you? Mine seemed to be stuck 
>>> on
>>> replicas t

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-17 Thread Martin Kosek
On 09/17/2012 04:15 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/17/2012 04:04 PM, Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/14/2012 09:17 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/06/2012 11:17 PM, Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/06/2012 05:55 PM, Rob Crittenden wrote:
> Rob Crittenden wrote:
>> Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
> Rob Crittenden wrote:
>> Martin Kosek wrote:
>>> On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
> On 07/03/2012 04:41 PM, Rob Crittenden wrote:
>> Deleting a replica can leave a replication vector (RUV) on 
>> the
>> other servers.
>> This can confuse things if the replica is re-added, and it 
>> also
>> causes the
>> server to calculate changes against a server that may no 
>> longer
>> exist.
>>
>> 389-ds-base provides a new task that propagates itself
>> to all
>> available
>> replicas to clean this RUV data.
>>
>> This patch will create this task at deletion time to 
>> hopefully
>> clean things up.
>>
>> It isn't perfect. If any replica is down or unavailable at 
>> the
>> time
>> the
>> cleanruv task fires, and then comes back up, the old RUV data
>> may be
>> re-propagated around.
>>
>> To make things easier in this case I've added two new
>> commands to
>> ipa-replica-manage. The first lists the replication ids of
>> all the
>> servers we
>> have a RUV for. Using this you can call clean_ruv with the
>> replication id of a
>> server that no longer exists to try the cleanallruv step 
>> again.
>>
>> This is quite dangerous though. If you run cleanruv against a
>> replica id that
>> does exist it can cause a loss of data. I believe I've put in
>> enough scary
>> warnings about this.
>>
>> rob
>>
>
> Good work there, this should make cleaning RUVs much easier 
> than
> with the
> previous version.
>
> This is what I found during review:
>
> 1) The list_ruv and clean_ruv command help in the man page gets quite
> lost. I think
> it would
> help if, for example, all the command info were indented. As it
> is,
> a
> user could
> easily overlook the new commands in the man page.
>
>
> 2) I would rename new commands to clean-ruv and list-ruv to 
> make
> them
> consistent with the rest of the commands (re-initialize,
> force-sync).
>
>
> 3) It would be nice to be able to run clean_ruv command in an
> unattended way
> (for better testing), i.e. respect --force option as we 
> already
> do for
> ipa-replica-manage del. This fix would aid test automation in 
> the
> future.
>
>
> 4) (minor) The new question (and the del too) does not react 
> too
> well for
> CTRL+D:
>
> # ipa-replica-manage clean_ruv 3 --force
> Clean the Replication Update Vector for
> vm-055.idm.lab.bos.redhat.com:389
>
> Cleaning the wrong replica ID will cause that server to no
> longer replicate so it may miss updates while the process
> is running. It would need to be re-initialized to maintain
> consistency. Be very careful.
> Continue to clean? [no]: unexpected error:
>
>
> 5) Help for clean_ruv command without a required parameter is
> quite
> confusing
> as it reports that command is wrong and not the parameter:
>
> # ipa-replica-manage clean_ruv
>>>

[Freeipa-devel] [PATCH] 0073 Add trust verification code

2012-09-17 Thread Alexander Bokovoy

Hi,

The following patch adds a trust verification sequence to the case where we
establish trust with knowledge of the AD administrative credentials.

As we found out, in order to validate/verify a trust, one has to have
administrative credentials for the trusted domain, since there are a
few RPCs that must be performed against the trusted domain DC's LSA
and NetLogon pipes, and these are protected by administrative credentials.

Thus, when we know the admin credentials for the remote domain, we can
perform the trust validation.

https://fedorahosted.org/freeipa/ticket/2763
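The status-reporting half of the patch below boils down to keying the status strings off the boolean "verified" result instead of an integer level. A simplified sketch, without the i18n `_()` wrappers the real code uses:

```python
_trust_status_dict = {True: 'Established and verified',
                      False: 'Waiting for confirmation by remote side'}
_trust_status_unknown = 'Unknown'

def trust_status_string(verified):
    # verified is the boolean result of the join; anything else
    # (e.g. None) falls back to the unknown string.
    return _trust_status_dict.get(verified, _trust_status_unknown)
```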


--
/ Alexander Bokovoy
>From ddf4205c8b3182cbb19328dc9f8b21ede5de3c65 Mon Sep 17 00:00:00 2001
From: Alexander Bokovoy 
Date: Thu, 13 Sep 2012 20:01:55 +0300
Subject: [PATCH] Add verification of the AD trust

Since we can only perform verification when AD admin credentials are available,
report that the trust should be verified from the AD side in other cases,
including unsuccessful verification.

Once a trust is added, its status is never stored anywhere.

https://fedorahosted.org/freeipa/ticket/2763
---
 ipalib/plugins/trust.py | 12 +++-
 ipaserver/dcerpc.py | 31 ---
 2 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/ipalib/plugins/trust.py b/ipalib/plugins/trust.py
index 
074560dc27eb121b5035ba9a8260e5ab24b2b4b5..2e20725e6343dfd7ea602dd7903745cd0a0e0c62
 100644
--- a/ipalib/plugins/trust.py
+++ b/ipalib/plugins/trust.py
@@ -60,8 +60,8 @@ _trust_type_dict = {1 : _('Non-Active Directory domain'),
 _trust_direction_dict = {1 : _('Trusting forest'),
  2 : _('Trusted forest'),
  3 : _('Two-way trust')}
-_trust_status = {1 : _('Established and verified'),
- 2 : _('Waiting for confirmation by remote side')}
+_trust_status_dict = {True : _('Established and verified'),
+ False : _('Waiting for confirmation by remote side')}
 _trust_type_dict_unknown = _('Unknown')
 
 def trust_type_string(level):
@@ -84,7 +84,7 @@ def trust_direction_string(level):
 return unicode(string)
 
 def trust_status_string(level):
-string = _trust_direction_dict.get(int(level), _trust_type_dict_unknown)
+string = _trust_status_dict.get(level, _trust_type_dict_unknown)
 return unicode(string)
 
 class trust(LDAPObject):
@@ -190,6 +190,8 @@ class trust_add(LDAPCreate):
 result['result'] = trusts[0][1]
 result['result']['trusttype'] = 
[trust_type_string(result['result']['ipanttrusttype'][0])]
 result['result']['trustdirection'] = 
[trust_direction_string(result['result']['ipanttrustdirection'][0])]
+result['result']['truststatus'] = 
[trust_status_string(result['verified'])]
+del result['verified']
 
 return result
 
@@ -272,14 +274,14 @@ class trust_add(LDAPCreate):
 if result is None:
 raise errors.ValidationError(name=_('AD Trust setup'), 
error=_('Unable to verify write permissions to the AD'))
 
-return dict(result=dict(), 
value=trustinstance.remote_domain.info['dns_domain'])
+return dict(value=trustinstance.remote_domain.info['dns_domain'], 
verified=result['verified'])
 
 # 2. We don't have access to the remote domain and trustdom password
 # is provided. Do the work on our side and inform what to do on remote
 # side.
 if 'trust_secret' in options:
 result = trustinstance.join_ad_ipa_half(keys[-1], realm_server, 
options['trust_secret'])
-return dict(result=dict(), 
value=trustinstance.remote_domain.info['dns_domain'])
+return dict(value=trustinstance.remote_domain.info['dns_domain'], 
verified=result['verified'])
 raise errors.ValidationError(name=_('AD Trust setup'), error=_('Not 
enough arguments specified to perform trust setup'))
 
 class trust_del(LDAPDelete):
diff --git a/ipaserver/dcerpc.py b/ipaserver/dcerpc.py
index 
b7ccd15d3e9008fddb6dc5419fc05c50ede39d26..86cf01dbac9aca21c35d2db65ef4d4c56e313709
 100644
--- a/ipaserver/dcerpc.py
+++ b/ipaserver/dcerpc.py
@@ -35,7 +35,7 @@ import os, string, struct, copy
 import uuid
 from samba import param
 from samba import credentials
-from samba.dcerpc import security, lsa, drsblobs, nbt
+from samba.dcerpc import security, lsa, drsblobs, nbt, netlogon
 from samba.ndr import ndr_pack
 from samba import net
 import samba
@@ -217,6 +217,7 @@ class TrustDomainInstance(object):
 if self._pipe is None:
 raise errors.RemoteRetrieveError(
 reason=_('Cannot establish LSA connection to %(host)s. Is CIFS 
server running?') % dict(host=remote_host))
+self.binding = binding
 
 def __gen_lsa_bindings(self, remote_host):
 """
@@ -251,6 +252,7 @@ class TrustDomainInstance(object):
 self.info['dns_domain'] = unicode(result.dns_domain)
 self.info['dns_forest'] = unicode(result.forest)
 self.info['guid'] = unicode(result.domain_uuid)
+self.inf

Re: [Freeipa-devel] [PATCH] Patch to allow IPA to work with dogtag 10 on f18

2012-09-17 Thread Ade Lee
On Mon, 2012-09-17 at 14:32 +0200, Petr Viktorin wrote:
> On 09/14/2012 11:19 PM, Rob Crittenden wrote:
> > Petr Viktorin wrote:
> >> On 09/12/2012 06:40 PM, Petr Viktorin wrote:
> >>> A new Dogtag build with changed pkispawn/pkidestroy locations should be
> >>> out later today. The attached patch should work with that build.
> >
> > Fresh install is failing in F-18.
> >
> > ki-tools-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.i686
> > pki-base-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-server-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-silent-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-symkey-9.0.21-1.fc18.x86_64
> > dogtag-pki-ca-theme-10.0.0-0.1.a1.20120914T0604zgit69c0684.fc18.noarch
> > pki-selinux-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-ca-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-setup-9.0.21-1.fc18.noarch
> >
> >
> > rob
> >
> >
> 
> Ade, your patch adds a step of moving 
> /var/lib/pki/pki-tomcat/alias/ca_admin_cert.p12 to /root/ca-agent.p12 
> right after calling pkispawn.
> It seems the file is not created on f18. Did something change in Dogtag 
> or are we calling it incorrectly?
> 
> 
Just for the record, as discussed on IRC, the problem is that the latest
build of apache-commons-codec (1.6.4) is missing
the /usr/share/java/apache-commons-codec.jar symlink.  A bug has been
filed https://bugzilla.redhat.com/show_bug.cgi?id=857947 and is being
addressed.

Until a new build of apache-commons-codec is available, you can use the
workaround of creating the missing link:

ln -s /usr/share/java/commons-codec.jar /usr/share/java/apache-commons-codec.jar

Also, there is a bug where pkispawn requires pki-symkey to be installed
in order to complete.  We will fix pkispawn to not require this -
because it's only needed for the TKS - and deliver that in a subsequent
build.  For now though, just make sure pki-symkey is installed.

We'll do a new build once you guys have had a chance to try the current
build out a bit and report issues.

Ade



Re: [Freeipa-devel] [PATCH] 305-308 Expand Referential Integrity checks

2012-09-17 Thread Rob Crittenden

Martin Kosek wrote:

On 09/13/2012 06:40 PM, Rob Crittenden wrote:

Martin Kosek wrote:

To test, add sudo commands, hosts or users to a sudo rule or hbac rule and then
rename or delete the linked object. After the update, the links should be
amended.

-

Many attributes in IPA (e.g. manager, memberuser, managedby, ...)
are used to store DNs of linked objects in IPA (users, hosts, sudo
commands, etc.). However, when a linked object is deleted or
renamed, the attribute pointing to it stays with the referring objects and
thus may create a dangling link, causing issues in client software
reading the data.

Directory Server has a plugin to enforce referential integrity (RI)
by checking DEL and MODRDN operations and updating affected links.
It was already used for manager and secretary attributes and
should be expanded for the missing attributes to avoid dangling
links.

As a prerequisite, all attributes checked for RI must have pres
and eq indexes to avoid performance issues. The following indexes
have been added:
* manager (pres index only)
* secretary (pres index only)
* memberHost
* memberUser
* sourcehost
* memberservice
* managedby
* memberallowcmd
* memberdenycmd
* ipasudorunas
* ipasudorunasgroup

Referential Integrity plugin was updated to check all these
attributes.

Note: this update will only fix RI on one master, as the RI plugin does
not check replicated operations.

https://fedorahosted.org/freeipa/ticket/2866


These patches look good but I'd like to see some tests associated with the
referential integrity changes in patch 308. I'm not sure we need a test for
every single combination where RI comes into play, but we should at least test
that the original sequence (sudorule/sudocmd) works as expected.

rob


Right, I should have seen that coming. I want this feature to be checked
properly, so I added tests for all RI-checked attributes.

Patches attached.

Martin



ACK, pushed to master and ipa-3-0

rob



Re: [Freeipa-devel] [PATCH] 309 Fix addattr internal error

2012-09-17 Thread Rob Crittenden

Martin Kosek wrote:

On 09/13/2012 09:19 PM, Rob Crittenden wrote:

Martin Kosek wrote:

When an ADD command is executed and a single-value object attribute
is set with both an option and addattr, IPA ends up with an internal
error.

Sanitize the values better in this case and let IPA throw
a user-friendly error. A unit test exercising this situation is added.

https://fedorahosted.org/freeipa/ticket/2429


+if not isinstance(val, (list, tuple)):
+val = [val]
+val.extend(adddict[attr])

If val is a tuple, the extend is still going to fail. Can val ever be a tuple? If
so, we'd need to convert it to a list.

rob


I don't think it can be a tuple; I am about 99% certain. So for that 1% I
added a special clause for tuples. Patch attached.

We will be able to be even more certain once Honza finishes the strict
encoding patch he was working on over the summer. With his patch, the
attributes should always be lists.
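The normalization under discussion amounts to converting the value to a mutable list before extending it with the addattr values. A hedged sketch follows; `merge_attr_values` is an illustrative name, not the actual ipalib function:

```python
def merge_attr_values(val, added):
    """Combine an option value with extra --addattr values.

    val may be a scalar, a list, or (defensively) a tuple; added is a
    list of additional values. Illustrative only -- the real logic
    lives in IPA's command processing code.
    """
    if isinstance(val, tuple):
        val = list(val)          # tuples have no extend(); copy to a list
    elif not isinstance(val, list):
        val = [val]              # wrap a scalar in a list
    val.extend(added)
    return val

print(merge_attr_values("a", ["b"]))     # ['a', 'b']
print(merge_attr_values(("a",), ["b"]))  # ['a', 'b']
```

The tuple branch is the "special clause" mentioned above: without it, `tuple.extend` would raise AttributeError.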

Martin



ACK, pushed to master and ipa-3-0

rob



[Freeipa-devel] [PATCH 0064] Improve log message about improperly formatted Resource Records

2012-09-17 Thread Petr Spacek

Hello,

this patch adds the DN to the log message about improperly formatted Resource Records.

Petr^2 Spacek
From d36ae54c593c33a45ef9936720357ff7de30c8b5 Mon Sep 17 00:00:00 2001
From: Petr Spacek 
Date: Mon, 17 Sep 2012 17:01:41 +0200
Subject: [PATCH] Improve log message about improperly formatted Resource
 Records.

Signed-off-by: Petr Spacek 
---
 src/ldap_helper.c | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/src/ldap_helper.c b/src/ldap_helper.c
index d22e8714df57edaad4cf45e6cc26ec0dbbd59108..2245cb982f26eab165a327b4ad72046f9eb4024e 100644
--- a/src/ldap_helper.c
+++ b/src/ldap_helper.c
@@ -1473,6 +1473,8 @@ ldap_parse_rrentry(isc_mem_t *mctx, ldap_entry_t *entry,
 	dns_rdata_t *rdata = NULL;
 	dns_rdatalist_t *rdlist = NULL;
 	ldap_attribute_t *attr;
+	const char *dn = "";
+	const char *data = "";
 
 	result = add_soa_record(mctx, qresult, origin, entry,
 rdatalist, fake_mname);
@@ -1501,6 +1503,11 @@ ldap_parse_rrentry(isc_mem_t *mctx, ldap_entry_t *entry,
 	return ISC_R_SUCCESS;
 
 cleanup:
+	if (entry != NULL)
+		dn = entry->dn;
+	if (buf != NULL && str_buf(buf) != NULL)
+		data = str_buf(buf);
+	log_error_r("failed to parse RR entry: dn '%s': data '%s'", dn, data);
 	return result;
 }
 
@@ -1539,7 +1546,6 @@ ldapdb_nodelist_get(isc_mem_t *mctx, ldap_instance_t *ldap_inst, dns_name_t *nam
 		dns_name_init(&node_name, NULL);
 		if (dn_to_dnsname(mctx, entry->dn,  &node_name, NULL)
 		!= ISC_R_SUCCESS) {
-			log_error("Failed to parse dn %s", entry->dn);
 			continue;
 		}
 
@@ -1551,7 +1557,6 @@ ldapdb_nodelist_get(isc_mem_t *mctx, ldap_instance_t *ldap_inst, dns_name_t *nam
 		   string, &node->rdatalist);
 		}
 		if (result != ISC_R_SUCCESS) {
-			log_error("Failed to parse RR entry (%s)", str_buf(string));
 			/* node cleaning */	
 			dns_name_reset(&node->owner);
 			ldapdb_rdatalist_destroy(mctx, &node->rdatalist);
@@ -1609,11 +1614,9 @@ ldapdb_rdatalist_get(isc_mem_t *mctx, ldap_instance_t *ldap_inst, dns_name_t *na
 	for (entry = HEAD(ldap_qresult->ldap_entries);
 		entry != NULL;
 		entry = NEXT(entry, link)) {
-		if (ldap_parse_rrentry(mctx, entry, ldap_qresult,
-		   origin, ldap_inst->fake_mname,
-		   string, rdatalist) != ISC_R_SUCCESS ) {
-			log_error("Failed to parse RR entry (%s)", str_buf(string));
-		}
+		CHECK(ldap_parse_rrentry(mctx, entry, ldap_qresult,
+   origin, ldap_inst->fake_mname,
+   string, rdatalist));
 	}
 
 	if (!EMPTY(*rdatalist)) {
-- 
1.7.11.4


Re: [Freeipa-devel] [PATCH] Patch to allow IPA to work with dogtag 10 on f18

2012-09-17 Thread Ade Lee
On Mon, 2012-09-17 at 14:32 +0200, Petr Viktorin wrote:
> On 09/14/2012 11:19 PM, Rob Crittenden wrote:
> > Petr Viktorin wrote:
> >> On 09/12/2012 06:40 PM, Petr Viktorin wrote:
> >>> A new Dogtag build with changed pkispawn/pkidestroy locations should be
> >>> out later today. The attached patch should work with that build.
> >
> > Fresh install is failing in F-18.
> >
> > ki-tools-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.i686
> > pki-base-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-server-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-silent-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-symkey-9.0.21-1.fc18.x86_64
> > dogtag-pki-ca-theme-10.0.0-0.1.a1.20120914T0604zgit69c0684.fc18.noarch
> > pki-selinux-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-ca-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
> > pki-setup-9.0.21-1.fc18.noarch
> >
> >
> > rob
> >
> >
> 
> Ade, your patch adds a step of moving 
> /var/lib/pki/pki-tomcat/alias/ca_admin_cert.p12 to /root/ca-agent.p12 
> right after calling pkispawn.
> It seems the file is not created on f18. Did something change in Dogtag 
> or are we calling it incorrectly?
> 
The failure of that step often indicates a failure of the previous
configure() step. That is, moving the file fails because it was not
created, because configuration failed.

Rob's logs seem to indicate some kind of classpath issue with the jython
code in pkispawn which calls configure() on the server. I set up an f18
machine and was able to configure an instance (outside of IPA). I will now
try with the IPA code (and your patches) to see if I can reproduce.

Ade
> 




Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-17 Thread Rob Crittenden

Martin Kosek wrote:

On 09/17/2012 04:04 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 09/14/2012 09:17 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 09/06/2012 11:17 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that propagates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the
time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propagated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob



Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I
think
it would
help if we for example have all info for commands indented. This
way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we already
do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del too) does not react too
well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite
confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping



tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping
tree,   cn=config) has not been cleaned.  You will need to rerun
the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed,
possibly
because of disconnected replicas and that they could enable the
replicas and
try again.


7) (minor) "pass" is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing read-only
but
+# this isn't a show-stopper
+root_logger.debug("No permission to switch replica to
read-only,
continuing anyway")
+pass



I think this addresses everything.

rob


Thanks, almost there! I just found one more issue which needs

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-17 Thread Martin Kosek
On 09/17/2012 04:04 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/14/2012 09:17 PM, Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/06/2012 11:17 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/06/2012 05:55 PM, Rob Crittenden wrote:
>>> Rob Crittenden wrote:
 Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/05/2012 08:06 PM, Rob Crittenden wrote:
>>> Rob Crittenden wrote:
 Martin Kosek wrote:
> On 07/05/2012 08:39 PM, Rob Crittenden wrote:
>> Martin Kosek wrote:
>>> On 07/03/2012 04:41 PM, Rob Crittenden wrote:
 Deleting a replica can leave a replication vector (RUV) on the
 other servers.
 This can confuse things if the replica is re-added, and it also
 causes the
 server to calculate changes against a server that may no longer
 exist.

 389-ds-base provides a new task that propagates itself to 
 all
 available
 replicas to clean this RUV data.

 This patch will create this task at deletion time to hopefully
 clean things up.

 It isn't perfect. If any replica is down or unavailable at the
 time
 the
 cleanruv task fires, and then comes back up, the old RUV data
 may be
 re-propagated around.

 To make things easier in this case I've added two new commands 
 to
 ipa-replica-manage. The first lists the replication ids of all 
 the
 servers we
 have a RUV for. Using this you can call clean_ruv with the
 replication id of a
 server that no longer exists to try the cleanallruv step again.

 This is quite dangerous though. If you run cleanruv against a
 replica id that
 does exist it can cause a loss of data. I believe I've put in
 enough scary
 warnings about this.

 rob

>>>
>>> Good work there, this should make cleaning RUVs much easier than
>>> with the
>>> previous version.
>>>
>>> This is what I found during review:
>>>
>>> 1) list_ruv and clean_ruv command help in man is quite lost. I
>>> think
>>> it would
>>> help if we for example have all info for commands indented. This
>>> way
>>> user could
>>> simply over-look the new commands in the man page.
>>>
>>>
>>> 2) I would rename new commands to clean-ruv and list-ruv to make
>>> them
>>> consistent with the rest of the commands (re-initialize,
>>> force-sync).
>>>
>>>
>>> 3) It would be nice to be able to run clean_ruv command in an
>>> unattended way
>>> (for better testing), i.e. respect --force option as we already
>>> do for
>>> ipa-replica-manage del. This fix would aid test automation in 
>>> the
>>> future.
>>>
>>>
>>> 4) (minor) The new question (and the del too) does not react too
>>> well for
>>> CTRL+D:
>>>
>>> # ipa-replica-manage clean_ruv 3 --force
>>> Clean the Replication Update Vector for
>>> vm-055.idm.lab.bos.redhat.com:389
>>>
>>> Cleaning the wrong replica ID will cause that server to no
>>> longer replicate so it may miss updates while the process
>>> is running. It would need to be re-initialized to maintain
>>> consistency. Be very careful.
>>> Continue to clean? [no]: unexpected error:
>>>
>>>
>>> 5) Help for clean_ruv command without a required parameter is 
>>> quite
>>> confusing
>>> as it reports that command is wrong and not the parameter:
>>>
>>> # ipa-replica-manage clean_ruv
>>> Usage: ipa-replica-manage [options]
>>>
>>> ipa-replica-manage: error: must provide a command [clean_ruv |
>>> force-sync |
>>> disconnect | connect | del | re-initialize | list | list_ruv]
>>>
>>> It seems you just forgot to specify the error message in the
>>> command
>>> definition
>>>
>>>
>>> 6) When the remote replica is down, the clean_ruv comma

Re: [Freeipa-devel] [PATCH] 1050 prevent replica orphans

2012-09-17 Thread Martin Kosek
On 09/14/2012 09:16 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/10/2012 08:34 PM, Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On Thu, 2012-09-06 at 17:22 -0400, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 08/31/2012 07:40 PM, Rob Crittenden wrote:
>>> Rob Crittenden wrote:
 It was possible to use ipa-replica-manage connect/disconnect/del to end up
 orphaning one or more IPA masters. This is an attempt to catch and
 prevent that case.

 I tested with this topology, trying to delete B.

 A <-> B <-> C

 I got here by creating B and C from A, connecting B to C then deleting
 the link from A to B, so it went from A -> B and A -> C to the above.

 What I do is look up the servers that the delete candidate host has
 connections to and see if we're the last link.

 I added an escape clause if there are only two masters.

 rob
>>>
>>> Oh, this relies on my cleanruv patch 1031.
>>>
>>> rob
>>>
>>
>> 1) When I run ipa-replica-manage del --force to an already uninstalled 
>> host,
>> the new code will prevent the deletion because it cannot connect to
>> it. It
>> also crashes with UnboundLocalError:
>>
>> # ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
>>
>> Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing 
>> removal
>> Traceback (most recent call last):
>>  File "/sbin/ipa-replica-manage", line 708, in 
>>main()
>>  File "/sbin/ipa-replica-manage", line 677, in main
>>del_master(realm, args[1], options)
>>  File "/sbin/ipa-replica-manage", line 476, in del_master
>>sys.exit("Failed read master data from '%s': %s" % 
>> (delrepl.hostname,
>> str(e)))
>> UnboundLocalError: local variable 'delrepl' referenced before assignment
>
> Fixed.
>
>>
>>
>> I also hit this error when removing a winsync replica.
>
> Fixed.
>
>>
>>
>> 2) As I wrote before, I think having --force option override the user
>> inquiries
>> would benefit test automation:
>>
>> +if not ipautil.user_input("Continue to delete?", False):
>> +sys.exit("Aborted")
>
> Fixed.
>
>>
>>
>> 3) I don't think this code will cover this topology:
>>
>> A - B - C - D - E
>>
>> It would allow you to delete replica C even though it would separate A-B
>> and
>> D-E. Though we may not want to cover this situation now, what you have is
>> definitely helpful.
>
> I think you may be right. I only tested with 4 servers. With this B and
> D would both still have 2 agreements so wouldn't be covered by the last
> link test.
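The limitation discussed above can be made concrete with a connectivity check over the whole replication graph; this is a hypothetical sketch (not the ipa-replica-manage code, which only inspects the delete candidate's direct peers):

```python
from collections import deque

def orphans_after_delete(agreements, candidate):
    """Return True if removing candidate disconnects the remaining masters.

    agreements maps each master to the set of masters it has replication
    agreements with. Hypothetical helper for illustration only.
    """
    remaining = {h: peers - {candidate}
                 for h, peers in agreements.items() if h != candidate}
    if not remaining:
        return False
    # Breadth-first search from an arbitrary remaining master.
    start = next(iter(remaining))
    seen = {start}
    queue = deque([start])
    while queue:
        for peer in remaining[queue.popleft()]:
            if peer in remaining and peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen != set(remaining)

# A - B - C - D - E: deleting C splits {A, B} from {D, E},
# yet B and D each still hold two agreements, so a direct-peer
# check alone would not catch it.
topo = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
        "D": {"C", "E"}, "E": {"D"}}
print(orphans_after_delete(topo, "C"))  # True
print(orphans_after_delete(topo, "E"))  # False
```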

 Everything looks good now, so ACK. We just need to push it along with
 CLEANALLRUV patch.

 Martin

>>>
>>> Not to look a gift ACK in the mouth, but here is a revised patch. I've added 
>>> a
>>> cleanup routine to remove an orphaned master properly. If you had tried the
>>> mechanism I outlined in the man page it would have worked but was
>>> less-than-complete. This way is better, just don't try it on a live master.
>>>
>>> I also added a cleanruv abort command, in case you want to kill an existing
>>> task.
>>>
>>> rob
>>>
>>
>> 1) CLEANRUV stuff should be in your patch 1031 and not here (but I will 
>> comment
>> on the code in this mail since it is in this patch)
>>
>>
>> 2) In the new command definition:
>>
>> +"abort-clean-ruv":(1, 1, "Replica ID of to abort cleaning", ""),
>>
>> There is no error message in case REPLICA_ID is not passed, so the error
>> message when I omit the parameter is confusing:
>>
>> # ipa-replica-manage abort-clean-ruv
>> Usage: ipa-replica-manage [options]
>>
>> ipa-replica-manage: error: must provide a command [force-sync | clean-ruv |
>> disconnect | connect | list-ruv | del | re-initialize | list | 
>> abort-clean-ruv]
>>
>> On another note, I am thinking about the new command(s). Since we now have
>> an abort-clean-ruv command, we may want to also implement a list-clean-ruv
>> command in the future to see all CLEANALLRUV tasks in progress... Or we may
>> enhance list-ruv to write a flag like [CLEANALLRUV in progress] for RUVs for
>> which a CLEANALLRUV task is in progress.
>>
>>
>> 3) Will clean-ruv - abort-clean-ruv - clean-ruv sequence work? If the aborted
>> CLEANALLRUV task stays in DIT, we may not be able to enter new CLEANALLRUV 
>> task
>> because we always use the same DN...
>>
>> Btw. did the abort CLEANALLRUV command work for you? Mine seemed to be stuck
>> on replicas that are down, just like the CLEANALLRUV command. I then had
>> CLEANALLRUV and ABORT CLEANALLRUV running in parallel for the same RUV ID. So
>> it is unclear to me what this command is actually good for.
>>
>>
>> 4)

Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-17 Thread Rob Crittenden

Martin Kosek wrote:

On 09/14/2012 09:17 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 09/06/2012 11:17 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 09/06/2012 05:55 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 09/05/2012 08:06 PM, Rob Crittenden wrote:

Rob Crittenden wrote:

Martin Kosek wrote:

On 07/05/2012 08:39 PM, Rob Crittenden wrote:

Martin Kosek wrote:

On 07/03/2012 04:41 PM, Rob Crittenden wrote:

Deleting a replica can leave a replication vector (RUV) on the
other servers.
This can confuse things if the replica is re-added, and it also
causes the
server to calculate changes against a server that may no longer
exist.

389-ds-base provides a new task that propagates itself to all
available
replicas to clean this RUV data.

This patch will create this task at deletion time to hopefully
clean things up.

It isn't perfect. If any replica is down or unavailable at the
time
the
cleanruv task fires, and then comes back up, the old RUV data
may be
re-propagated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the
servers we
have a RUV for. Using this you can call clean_ruv with the
replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a
replica id that
does exist it can cause a loss of data. I believe I've put in
enough scary
warnings about this.

rob



Good work there, this should make cleaning RUVs much easier than
with the
previous version.

This is what I found during review:

1) list_ruv and clean_ruv command help in man is quite lost. I
think
it would
help if we for example have all info for commands indented. This
way
user could
simply over-look the new commands in the man page.


2) I would rename new commands to clean-ruv and list-ruv to make
them
consistent with the rest of the commands (re-initialize,
force-sync).


3) It would be nice to be able to run clean_ruv command in an
unattended way
(for better testing), i.e. respect --force option as we already
do for
ipa-replica-manage del. This fix would aid test automation in the
future.


4) (minor) The new question (and the del too) does not react too
well for
CTRL+D:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:


5) Help for clean_ruv command without a required parameter is quite
confusing
as it reports that command is wrong and not the parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv |
force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the
command
definition


6) When the remote replica is down, the clean_ruv command fails
with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for
vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: failed
to connect to replagreement connection
(cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,

cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping


tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: replica
(cn=meTovm-055.idm.lab.
bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping
tree,   cn=config) has not been cleaned.  You will need to rerun
the
CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin -
cleanAllRUV_task: Task
failed (1)

In this case I think we should inform user that the command failed,
possibly
because of disconnected replicas and that they could enable the
replicas and
try again.


7) (minor) "pass" is now redundant in replication.py:
+except ldap.INSUFFICIENT_ACCESS:
+# We can't make the server we're removing read-only
but
+# this isn't a show-stopper
+root_logger.debug("No permission to switch replica to
read-only,
continuing anyway")
+pass



I think this addresses everything.

rob


Thanks, almost there! I just found one more issue which needs to be
fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab

[Freeipa-devel] [PATCH 0063] Notify DNS slaves if zone serial number modification was detected.

2012-09-17 Thread Petr Spacek

Hello,

this patch adds a missing notification to DNS slaves when a zone serial number
modification is detected.


Petr^2 Spacek
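The decision logic in the attached patch (cases 3a-3c in its code comment) can be condensed into a short model: compare the SOA serial stored in LDAP with the one cached in the zone register, autoincrement when only the zone data changed, and notify slaves of dynamic zones when the serial itself changed. This is an illustrative Python sketch, not the actual C implementation; the function and action names are made up:

```python
def update_zone_serial(ldap_serial, zr_serial, ldap_digest, zr_digest,
                       autoincrement, zone_dynamic):
    """Return the list of actions the zone-update step would take.

    ldap_serial/zr_serial: SOA serials from LDAP and the zone register.
    ldap_digest/zr_digest: digests of the zone's resource records.
    Illustrative model of the C logic in ldap_parse_zoneentry().
    """
    actions = []
    # 3c: serials are the same -- increment only if the data changed.
    if autoincrement and ldap_serial == zr_serial and ldap_digest != zr_digest:
        actions.append("soa_serial_increment")
    # Serial changed in LDAP -- refresh the register and notify slaves
    # of dynamic zones (the notification added by this patch).
    if ldap_serial != zr_serial:
        actions.append("update_zone_register")
        if zone_dynamic:
            actions.append("notify_slaves")
    return actions

print(update_zone_serial(5, 5, "a", "b", True, True))
# ['soa_serial_increment']
print(update_zone_serial(6, 5, "a", "a", True, True))
# ['update_zone_register', 'notify_slaves']
```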
From eb8d7fc0c02e03b9c7c90e497225536c449fab1c Mon Sep 17 00:00:00 2001
From: Petr Spacek 
Date: Mon, 17 Sep 2012 14:29:45 +0200
Subject: [PATCH] Notify DNS slaves if zone serial number modification was
 detected.

Signed-off-by: Petr Spacek 
---
 src/ldap_helper.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/src/ldap_helper.c b/src/ldap_helper.c
index 658b960f50b461fa602edf51e955f3bdd4769e1d..d22e8714df57edaad4cf45e6cc26ec0dbbd59108 100644
--- a/src/ldap_helper.c
+++ b/src/ldap_helper.c
@@ -1035,9 +1035,10 @@ ldap_parse_zoneentry(ldap_entry_t *entry, ldap_instance_t *inst)
 	isc_task_t *task = inst->task;
 	isc_uint32_t ldap_serial;
 	isc_uint32_t zr_serial;	/* SOA serial value from in-memory zone register */
-	unsigned char ldap_digest[RDLIST_DIGESTLENGTH];
+	unsigned char ldap_digest[RDLIST_DIGESTLENGTH] = {0};
 	unsigned char *zr_digest = NULL;
 	ldapdb_rdatalist_t rdatalist;
+	isc_boolean_t zone_dynamic = ISC_FALSE;
 
 	REQUIRE(entry != NULL);
 	REQUIRE(inst != NULL);
@@ -1131,13 +1132,13 @@ ldap_parse_zoneentry(ldap_entry_t *entry, ldap_instance_t *inst)
 		&& result != DNS_R_DYNAMIC && result != DNS_R_CONTINUE)
 		goto cleanup;
 
+	zone_dynamic = (result == DNS_R_DYNAMIC);
 	result = ISC_R_SUCCESS;
 
 	/* initialize serial in zone register and always increment serial
 	 * for a new zone (typically after BIND start)
 	 * - the zone was possibly changed in meanwhile */
 	if (publish) {
-		memset(ldap_digest, 0, RDLIST_DIGESTLENGTH);
 		CHECK(ldap_get_zone_serial(inst, &name, &ldap_serial));
 		CHECK(zr_set_zone_serial_digest(inst->zone_register, &name, ldap_serial,
 ldap_digest));
@@ -1154,23 +1155,28 @@ ldap_parse_zoneentry(ldap_entry_t *entry, ldap_instance_t *inst)
 	 * 3c) The old and new serials are same: autoincrement only if something
 	 * else was changed.
 	 */
+	CHECK(ldap_get_zone_serial(inst, &name, &ldap_serial));
+	CHECK(zr_get_zone_serial_digest(inst->zone_register, &name, &zr_serial,
+			&zr_digest));
 	if (inst->serial_autoincrement) {
-		CHECK(ldap_get_zone_serial(inst, &name, &ldap_serial));
 		CHECK(ldapdb_rdatalist_get(inst->mctx, inst, &name,
 &name, &rdatalist));
 		CHECK(rdatalist_digest(inst->mctx, &rdatalist, ldap_digest));
 
-		CHECK(zr_get_zone_serial_digest(inst->zone_register, &name, &zr_serial,
-&zr_digest));
 		if (ldap_serial == zr_serial) {
 			/* serials are same - increment only if something was changed */
 			if (memcmp(zr_digest, ldap_digest, RDLIST_DIGESTLENGTH) != 0)
 CHECK(soa_serial_increment(inst->mctx, inst, &name));
-		} else { /* serial in LDAP was changed - update zone register */
-			CHECK(zr_set_zone_serial_digest(inst->zone_register, &name,
-	ldap_serial, ldap_digest));
 		}
 	}
+	if (ldap_serial != zr_serial) {
+		/* serial in LDAP was changed - update zone register */
+		CHECK(zr_set_zone_serial_digest(inst->zone_register, &name,
+ldap_serial, ldap_digest));
+
+		if (zone_dynamic)
+			dns_zone_notify(zone);
+	}
 
 cleanup:
 	if (unlock)
-- 
1.7.11.4


Re: [Freeipa-devel] [PATCH] Patch to allow IPA to work with dogtag 10 on f18

2012-09-17 Thread Petr Viktorin

On 09/14/2012 11:19 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 09/12/2012 06:40 PM, Petr Viktorin wrote:

A new Dogtag build with changed pkispawn/pkidestroy locations should be
out later today. The attached patch should work with that build.


Fresh install is failing in F-18.

ki-tools-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.i686
pki-base-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
pki-server-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
pki-silent-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
pki-symkey-9.0.21-1.fc18.x86_64
dogtag-pki-ca-theme-10.0.0-0.1.a1.20120914T0604zgit69c0684.fc18.noarch
pki-selinux-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
pki-ca-10.0.0-0.33.a1.20120914T0536zgit69c0684.fc18.noarch
pki-setup-9.0.21-1.fc18.noarch


rob




Ade, your patch adds a step of moving 
/var/lib/pki/pki-tomcat/alias/ca_admin_cert.p12 to /root/ca-agent.p12 
right after calling pkispawn.
It seems the file is not created on f18. Did something change in Dogtag 
or are we calling it incorrectly?



--
Petr³



Re: [Freeipa-devel] [PATCH] 0077 Check direct/reverse hostname/address resolution in ipa-replica-install

2012-09-17 Thread Petr Viktorin

On 09/14/2012 08:46 AM, Martin Kosek wrote:

On 09/13/2012 10:35 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 09/11/2012 11:05 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 09/04/2012 07:44 PM, Rob Crittenden wrote:

Petr Viktorin wrote:


https://fedorahosted.org/freeipa/ticket/2845


Shouldn't this also call verify_fqdn() on the local hostname and not
just the master? I think this would eventually fail in the conncheck
but
what if that was skipped?

rob


A few lines above there is a call to get_host_name, which will call
verify_fqdn.



I double-checked this, it fails in conncheck. Here are my steps:

# ipa-server-install --setup-dns
# ipa-replica-prepare replica.example.com --ip-address=192.168.100.2
# ipa host-del replica.example.com

On replica, set DNS to IPA master, with hostname in /etc/hosts.

# ipa-replica-install ...

The verify_fqdn() passes because the resolver uses /etc/hosts.

The conncheck fails:

Execute check on remote master
Check connection from master to remote replica 'replica.example.com':

Remote master check failed with following error message(s):
Could not chdir to home directory /home/admin: No such file or directory
Port check failed! Unable to resolve host name 'replica.example.com'

Connection check failed!
Please fix your network settings according to error messages above.
If the check results are not valid it can be skipped with
--skip-conncheck parameter.

The DNS test happens much later than this, and I get why; I just
don't see how useful it is unless --skip-conncheck is used.


For the record, it's because we need to check if the host has DNS
installed. We need an LDAP connection to check this.


ipa-replica-install ~rcrit/replica-info-replica.example.com.gpg
--skip-conncheck
Directory Manager (existing master) password:

ipa : ERRORCould not resolve hostname replica.example.com
using DNS. Clients may not function properly. Please check your DNS
setup. (Note that this check queries IPA DNS directly and ignores
/etc/hosts.)
Continue? [no]:

So I guess, what are the intentions here? It is certainly better than
before.

rob


If the replica is in the master's /etc/hosts, but not in DNS, the
conncheck will succeed. This check explicitly queries IPA records only
and ignores /etc/hosts so it'll notice this case and warn.
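The forward/reverse consistency test can be sketched as a pure function over already-resolved records. This is an illustrative model only; the actual patch (attached below) resolves the records with dnspython against the IPA DNS server, which is how /etc/hosts gets bypassed:

```python
def check_name_address_consistency(forward, reverse, host, addresses):
    """Return a list of warnings about forward/reverse DNS mismatches.

    forward: dict mapping name -> set of addresses (A/AAAA results)
    reverse: dict mapping address -> set of names (PTR results)
    Hypothetical data-driven sketch; the installer obtains these
    records from the IPA DNS server directly, ignoring /etc/hosts.
    """
    warnings = []
    resolved = forward.get(host, set())
    if not resolved:
        warnings.append("no forward record for %s" % host)
    elif resolved != set(addresses):
        warnings.append("forward record of %s does not match %s"
                        % (host, sorted(addresses)))
    for addr in addresses:
        names = reverse.get(addr, set())
        if host not in names:
            warnings.append("reverse record of %s does not point to %s"
                            % (addr, host))
    return warnings

# Host present in /etc/hosts but missing from IPA DNS: both lookups fail.
print(check_name_address_consistency({}, {}, "replica.example.com",
                                     ["192.168.100.2"]))
```

An empty warning list means the check passes and no "Continue?" prompt is needed.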



Ok, like I said, this is better than we have. Just one nit then you get an ack:

+# If remote host has DNS, check forward/reverse resolution
+try:
+entry = conn.find_entries(u'cn=dns', base_dn=DN(api.env.basedn))
+except errors.NotFound:

u'cn=dns' should be str(constants.container_dns).

rob


This is a search filter, Petr could use the one I already have in
"dns.py::get_dns_masters()" function:
'(&(objectClass=ipaConfigObject)(cn=DNS))'

For performance's sake, I would also not search the entire tree, but limit the
search only to:

DN(('cn', 'masters'), ('cn', 'ipa'), ('cn', 'etc'), api.env.basedn)

Martin



Attaching updated patch with Martin's suggestions.

--
Petr³
From 019777646e996aed8de0b799806dcad3433a7248 Mon Sep 17 00:00:00 2001
From: Petr Viktorin 
Date: Tue, 4 Sep 2012 03:47:43 -0400
Subject: [PATCH] Check direct/reverse hostname/address resolution in
 ipa-replica-install

Forward and reverse resolution of the newly created replica is already
checked via get_host_name (which calls verify_fqdn).
Add the same check for the existing master.

Additionally, if DNS is installed on the remote host, check forward
and reverse resolution of both replicas using that DNS only
(ignoring /etc/hosts). These checks give only warnings and, in interactive
installs, a "Continue?" prompt.

https://fedorahosted.org/freeipa/ticket/2845
---
 install/tools/ipa-replica-install | 105 ++
 1 file changed, 105 insertions(+)

diff --git a/install/tools/ipa-replica-install b/install/tools/ipa-replica-install
index 267a70d8b60d96de9a9bde83b15c81ae59da1a96..a1670fb3f4ce39b1be417f4c257bbc32ce9659b4 100755
--- a/install/tools/ipa-replica-install
+++ b/install/tools/ipa-replica-install
@@ -25,6 +25,10 @@ import os, pwd, shutil
 import grp
 from optparse import OptionGroup
 
+import dns.resolver
+import dns.reversename
+import dns.exception
+
 from ipapython import ipautil
 
 from ipaserver.install import dsinstance, installutils, krbinstance, service
@@ -279,6 +283,86 @@ def check_bind():
 print "Aborting installation"
 sys.exit(1)
 
+
+def check_dns_resolution(host_name, dns_server):
+    """Check forward and reverse resolution of host_name using dns_server
+    """
+    # Point the resolver at specified DNS server
+    server_ips = list(
+        a[4][0] for a in socket.getaddrinfo(dns_server, None))
+    resolver = dns.resolver.Resolver()
+    resolver.nameservers = server_ips
+
+    root_logger.debug('Search DNS server %s (%s) for %s',
+        dns_server, server_ips, host_name)
+
+    # Get IP addresses of host_name
+    addresses = set()
+    for rt

Re: [Freeipa-devel] IPA server resolv.conf

2012-09-17 Thread Sumit Bose
On Mon, Sep 17, 2012 at 11:18:53AM +0200, Petr Spacek wrote:
> On 09/17/2012 09:15 AM, Martin Kosek wrote:
> >On 09/17/2012 09:06 AM, Petr Spacek wrote:
> >>Discussion about patch "Set master_kdc and dns_lookup_kdc to true)" reminds 
> >>one
> >>related problem:
> >>
> >>Our server installer puts line "nameserver 127.0.0.1" to /etc/resolv.conf, 
> >>but
> >>this file should contain all (or three nearest) DNS servers in IPA domain.
> >>
> >>As a result, IPA server will work even after local named crash (which is 
> >>not so
> >>rare as I want :-().
> >>
> >>New ticket:
> >>https://fedorahosted.org/freeipa/ticket/3085
> >>
> >>Martin, what do you think?
> >>
> >>How we can update resolv.conf to reflect replica addition/deletion?
> >>
> >>Should it be done manually? E.g. ipa-replica-install script can print "don't
> >>forget to add this server to /etc/resolv.conf on other servers"?
> >>
> >>Petr^2 Spacek
> >>
> >
> >It would not be difficult to pull a list of IPA masters with DNS support 
> >during
> >ipa-{server,replica}-install and write more IPs to the resolv.conf. But I 
> >think
> >there may be an issue when somebody willingly stop a remote replica or
> >uninstall it. He would also need to remove it's IP from all resolv.confs in 
> >all
> >replicas...
> >
> >Btw. why would IPA server fail when a local named crashes? A record in
> >/etc/hosts we always add should still enable local IPA services to work or 
> >do I
> >miss something?
> 
> Well... try it :-D "service named stop"
> 
> I didn't examine details of this problem, but my guess is Kerberos
> and reverse DNS lookups. Also, you need to resolve neighbouring

at least reverse DNS lookups shouldn't be the case since 'rdns = false'
in krb5.conf.

bye,
Sumit

> replica IP and so on.
> 
> 
> Name servers listed in resolv.conf are tried in order, so 127.0.0.1
> should be on first place.
> 
> man resolv.conf:
> nameserver Name server IP address
> ...  Up to MAXNS (currently 3, see <resolv.h>) name servers
> may be listed, one per keyword.  If there are multiple servers,
> the resolver library queries them in the order listed.
> ...
> (The algorithm used is to try a name server, and if the query times
> out, try the next, until out of name servers, then repeat trying all
> the name servers until a maximum number of retries are made.)
> 
> 
> Also, some update mechanism for resolv.conf would be nice. We should
> provide a "gen-resolv-conf.py" script at least, so an admin can call it
> from cron or something like that.
> 
> Petr^2 Spacek
> 
> ___
> Freeipa-devel mailing list
> Freeipa-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/freeipa-devel

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] Set master_kdc and dns_lookup_kdc to true

2012-09-17 Thread Sumit Bose
On Sat, Sep 15, 2012 at 06:14:56PM -0400, Simo Sorce wrote:
> On Sat, 2012-09-15 at 22:02 +0200, Sumit Bose wrote:
> > On Fri, Sep 14, 2012 at 05:57:23PM -0400, Rob Crittenden wrote:
> > > Sumit Bose wrote:
> > > >Hi,
> > > >
> > > >those two patches should fix
> > > >https://fedorahosted.org/freeipa/ticket/2515 . The first makes the
> > > >needed change for fresh installations. The second adds the changes
> > > >during ipa-adtrust-install if needed. I prefer to do the changes here
> > > >instead of during updates, because during updates it is not easy to see
> > > >that the Kerberos configuration was changed.
> > > >
> > > 
> > > I guess it is good form to update the RHEL 4 client installer but
> > > will anyone test it?
> > 
> > I think it would be confusing if the RHEL4 client installer had
> > different information than the default one.
> > 
> > > 
> > > Is master_kdc supported in the MIT kfw version (krb5.ini)?
> > 
> > It looks to me like the parser is built from the same sources.
> > 
> > > 
> > > This suffers from the problem Simo envisioned with ticket 931. If
> > > the /etc/hosts entry is removed then DNS will not start. We add an
> > > entry during installation, so this may be less of an issue.
> > 
> > If the /etc/hosts entry is removed DNS  will not start in either case.
> > 
> > I think the solution to #931 is setting the master_kdc option. You can
> > easily reproduce startup problems if you set 'dns_lookup_kdc = true',
> > stop sssd and try to restart named. This will run into a timeout and
> > bind will not start. The reason is that besides a KDC the Kerberos
> > client libraries also try to look up the master KDC (but it seems to be
> > ok if the lookup finally fails). If sssd is running the locator plugin
> > will return the current KDC as master. If it is not running, as in the
> > test described above, /etc/krb5.conf is used next. If it does not have a
> > master_kdc entry and 'dns_lookup_kdc = false' there is no other source
> > for the master KDC and the client libraries continue with normal
> > processing. If master_kdc is not set but 'dns_lookup_kdc = true' then a
> > DNS lookup is tried, which will run into a timeout since the DNS server
> > is not started yet. But if master_kdc is set in krb5.conf the client
> > libraries will just use this value and will not try any DNS lookup,
> > independently of the setting of dns_lookup_kdc.
> > 
> > As a side note. Since we run named as user named I wonder if it would be
> > possible to use SASL EXTERNAL auth instead of GSSAPI to bind to the LDAP
> > server. If this would work safe and secure there would be no
> > dependencies to the KDC during the startup of bind?
> 
> The reason why we use gssapi is so that all operations performed by bind
> happen as the DNS/fqdn user, and we can use ACIs targeted at the bind
> process. In order to use SASL EXTERNAL we would need the bind process to
> change euid to an unprivileged user that we then need to map to some
> specific user.

As said above, named already runs as the unprivileged user named.

> 
> In general krb5kdc should always start before named, and should not
> depend on DNS resolution. If krb5kdc is started first bind should have
> no issues. The only problem is if gssapi libraries try to use DNS
> resolution, but we should have that solved by using the krb locator
> plugin.

yes, and even if the locator plugin isn't available setting master_kdc
will make sure we never fall back to DNS for the local realm.

Just to make sure: I do not mean that the authentication type
used by named must be changed to solve potential issues. Setting
master_kdc will solve them.

bye,
Sumit
> 
> Simo.
> 
> -- 
> Simo Sorce * Red Hat, Inc * New York
> 

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] IPA server resolv.conf

2012-09-17 Thread Petr Spacek

On 09/17/2012 09:15 AM, Martin Kosek wrote:

On 09/17/2012 09:06 AM, Petr Spacek wrote:

Discussion about the patch "Set master_kdc and dns_lookup_kdc to true" reminds me
of one related problem:

Our server installer puts the line "nameserver 127.0.0.1" into /etc/resolv.conf, but
this file should contain all (or the three nearest) DNS servers in the IPA domain.

As a result, the IPA server would keep working even after a local named crash
(which is not as rare as I would like :-().

New ticket:
https://fedorahosted.org/freeipa/ticket/3085

Martin, what do you think?

How can we update resolv.conf to reflect replica addition/deletion?

Should it be done manually? E.g. ipa-replica-install script can print "don't
forget to add this server to /etc/resolv.conf on other servers"?

Petr^2 Spacek



It would not be difficult to pull a list of IPA masters with DNS support during
ipa-{server,replica}-install and write more IPs to resolv.conf. But I think
there may be an issue when somebody willingly stops a remote replica or
uninstalls it. They would also need to remove its IP from the resolv.conf on
all replicas...

Btw., why would the IPA server fail when a local named crashes? The record in
/etc/hosts we always add should still let local IPA services work, or am I
missing something?


Well... try it :-D "service named stop"

I didn't examine details of this problem, but my guess is Kerberos and reverse 
DNS lookups. Also, you need to resolve neighbouring replica IP and so on.



Name servers listed in resolv.conf are tried in order, so 127.0.0.1 should be
in first place.


man resolv.conf:
nameserver Name server IP address
...  Up to MAXNS (currently 3, see <resolv.h>) name servers may be
listed, one per keyword.  If there are multiple servers, the resolver
library queries them in the order listed.

...
(The algorithm used is to try a name server, and if the query times out, try 
the next, until out of name servers, then repeat trying all the name servers 
until a maximum number of retries are made.)
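Put together, a resolv.conf for an IPA master along the lines discussed here might look like this (addresses and hostnames are illustrative, not from the thread):

```
# Local named first: it is queried before any fallback server
nameserver 127.0.0.1
# Up to MAXNS-1 more entries: the nearest IPA DNS replicas
nameserver 192.0.2.11
nameserver 192.0.2.12
```

With this layout the local server keeps priority, and the replicas are only consulted when queries to 127.0.0.1 time out.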



Also, some update mechanism for resolv.conf would be nice. We should provide
a "gen-resolv-conf.py" script at least, so an admin can call it from cron or
something like that.


Petr^2 Spacek

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1031 run cleanallruv task

2012-09-17 Thread Martin Kosek
On 09/14/2012 09:17 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> On 09/06/2012 11:17 PM, Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/06/2012 05:55 PM, Rob Crittenden wrote:
> Rob Crittenden wrote:
>> Rob Crittenden wrote:
>>> Martin Kosek wrote:
 On 09/05/2012 08:06 PM, Rob Crittenden wrote:
> Rob Crittenden wrote:
>> Martin Kosek wrote:
>>> On 07/05/2012 08:39 PM, Rob Crittenden wrote:
 Martin Kosek wrote:
> On 07/03/2012 04:41 PM, Rob Crittenden wrote:
>> Deleting a replica can leave a replication vector (RUV) on the
>> other servers.
>> This can confuse things if the replica is re-added, and it also
>> causes the
>> server to calculate changes against a server that may no longer
>> exist.
>>
>> 389-ds-base provides a new task that propagates itself to all
>> available replicas to clean this RUV data.
>>
>> This patch will create this task at deletion time to hopefully
>> clean things up.
>>
>> It isn't perfect. If any replica is down or unavailable at the
>> time
>> the
>> cleanruv task fires, and then comes back up, the old RUV data
>> may be re-propagated around.
>>
>> To make things easier in this case I've added two new commands to
>> ipa-replica-manage. The first lists the replication ids of all 
>> the
>> servers we
>> have a RUV for. Using this you can call clean_ruv with the
>> replication id of a
>> server that no longer exists to try the cleanallruv step again.
>>
>> This is quite dangerous though. If you run cleanruv against a
>> replica id that
>> does exist it can cause a loss of data. I believe I've put in
>> enough scary
>> warnings about this.
>>
>> rob
>>
>
> Good work there, this should make cleaning RUVs much easier than
> with the
> previous version.
>
> This is what I found during review:
>
> 1) list_ruv and clean_ruv command help in man is quite lost. I
> think
> it would
> help if we for example have all info for commands indented. This
> way
> user could
> simply over-look the new commands in the man page.
>
>
> 2) I would rename new commands to clean-ruv and list-ruv to make
> them
> consistent with the rest of the commands (re-initialize,
> force-sync).
>
>
> 3) It would be nice to be able to run clean_ruv command in an
> unattended way
> (for better testing), i.e. respect --force option as we already
> do for
> ipa-replica-manage del. This fix would aid test automation in the
> future.
>
>
> 4) (minor) The new question (and the del too) does not react too
> well for
> CTRL+D:
>
> # ipa-replica-manage clean_ruv 3 --force
> Clean the Replication Update Vector for
> vm-055.idm.lab.bos.redhat.com:389
>
> Cleaning the wrong replica ID will cause that server to no
> longer replicate so it may miss updates while the process
> is running. It would need to be re-initialized to maintain
> consistency. Be very careful.
> Continue to clean? [no]: unexpected error:
>
>
> 5) Help for clean_ruv command without a required parameter is 
> quite
> confusing
> as it reports that command is wrong and not the parameter:
>
> # ipa-replica-manage clean_ruv
> Usage: ipa-replica-manage [options]
>
> ipa-replica-manage: error: must provide a command [clean_ruv |
> force-sync |
> disconnect | connect | del | re-initialize | list | list_ruv]
>
> It seems you just forgot to specify the error message in the
> command
> definition
>
>
> 6) When the remote replica is down, the clean_ruv command fails
> with an
> unexpected error:
>
> [root@vm-086 ~]# ipa-replica-manage clean_ruv 5
> Clean the Replication Update Vector for
> vm-055.idm.lab.bos.redhat.com:389
>
> Cleaning the wrong replica ID will cause that server to no
>>

Re: [Freeipa-devel] [PATCH] 0079 Update the pot file (translation source)

2012-09-17 Thread Petr Viktorin

On 09/14/2012 09:36 PM, Jérôme Fenal wrote:

2012/9/14 Petr Viktorin <pvikt...@redhat.com>:



I pushed the pot manually.
Since we have infrequent explicit string freezes I don't think it's
necessary to configure automatic pot updates again.


Thanks Petr!

Actually, having the strings updated on Transifex on a regular basis
makes it (IMHO) more manageable for translators to update the
translations even before a string freeze. Translating a dozen strings
per week is a lighter load than 339 strings in one batch.


A possible problem with this approach is that the translators would see 
and translate messages that don't make it into the final version. Do you 
think a more even workload would be worth the occasional extra work?


I would like to change our i18n workflow/infrastructure. I was planning 
to (re-)start discussing this after the 3.0 release rush is done. It 
should be possible to do what you suggest.



I also don't know whether pulls from Transifex, or pushes from your side,
preserve translation memory (as suggestions) of past or close-enough
strings, which would help with small modifications.


Sadly, I don't know much about Transifex itself. Perhaps ask the team 
there, and request the feature if it's missing.



Another comment/request, though I don't know given my zero-level Python-fu:
would it be possible to break down the huge __doc__ strings in plugins
into smaller parts? A small modification would then impact a smaller
string, easing maintenance instead of trying to track a one-character
modification in a 2000-character text.

Does Python support concatenation of __doc__ strings?


That should be possible on the Python side. I'm not sure how Transifex (and 
other translation tools) would cope with text split between several 
messages -- sorting and filtering the messages could take things out of 
context.



--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] IPA server resolv.conf

2012-09-17 Thread Martin Kosek
On 09/17/2012 09:06 AM, Petr Spacek wrote:
> Discussion about the patch "Set master_kdc and dns_lookup_kdc to true" reminds
> me of one related problem:
> 
> Our server installer puts line "nameserver 127.0.0.1" to /etc/resolv.conf, but
> this file should contain all (or three nearest) DNS servers in IPA domain.
> 
> As a result, IPA server will work even after local named crash (which is not 
> so
> rare as I want :-().
> 
> New ticket:
> https://fedorahosted.org/freeipa/ticket/3085
> 
> Martin, what do you think?
> 
> How we can update resolv.conf to reflect replica addition/deletion?
> 
> Should it be done manually? E.g. ipa-replica-install script can print "don't
> forget to add this server to /etc/resolv.conf on other servers"?
> 
> Petr^2 Spacek
> 

It would not be difficult to pull a list of IPA masters with DNS support during
ipa-{server,replica}-install and write more IPs to resolv.conf. But I think
there may be an issue when somebody willingly stops a remote replica or
uninstalls it. They would also need to remove its IP from the resolv.conf on
all replicas...

Btw., why would the IPA server fail when a local named crashes? The record in
/etc/hosts we always add should still let local IPA services work, or am I
missing something?

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] IPA server resolv.conf (was: [PATCH] Set master_kdc and dns_lookup_kdc to true)

2012-09-17 Thread Petr Spacek
Discussion about the patch "Set master_kdc and dns_lookup_kdc to true" reminds me
of one related problem:


Our server installer puts the line "nameserver 127.0.0.1" into /etc/resolv.conf, but
this file should contain all (or the three nearest) DNS servers in the IPA domain.


As a result, the IPA server would keep working even after a local named crash
(which is not as rare as I would like :-().


New ticket:
https://fedorahosted.org/freeipa/ticket/3085

Martin, what do you think?

How can we update resolv.conf to reflect replica addition/deletion?

Should it be done manually? E.g. ipa-replica-install script can print "don't 
forget to add this server to /etc/resolv.conf on other servers"?


Petr^2 Spacek

 Original Message 
Subject: Re: [Freeipa-devel] [PATCH] Set master_kdc and dns_lookup_kdc to true
Date: Sat, 15 Sep 2012 18:14:56 -0400
From: Simo Sorce 
Organization: Red Hat, Inc.
To: Sumit Bose 
CC: freeipa-devel 

On Sat, 2012-09-15 at 22:02 +0200, Sumit Bose wrote:

On Fri, Sep 14, 2012 at 05:57:23PM -0400, Rob Crittenden wrote:
> Sumit Bose wrote:
> >Hi,
> >
> >those two patches should fix
> >https://fedorahosted.org/freeipa/ticket/2515 . The first makes the
> >needed change for fresh installations. The second adds the changes
> >during ipa-adtrust-install if needed. I prefer to do the changes here
> >instead of during updates, because during updates it is not easy to see
> >that the Kerberos configuration was changed.
> >
>
> I guess it is good form to update the RHEL 4 client installer but
> will anyone test it?

I think it would be confusing if the RHEL4 client installer had
different information than the default one.

>
> Is master_kdc supported in the MIT kfw version (krb5.ini)?

It looks to me like the parser is built from the same sources.

>
> This suffers from the problem Simo envisioned with ticket 931. If
> the /etc/hosts entry is removed then DNS will not start. We add an
> entry during installation, so this may be less of an issue.

If the /etc/hosts entry is removed, DNS will not start in either case.

I think the solution to #931 is setting the master_kdc option. You can
easily reproduce startup problems if you set 'dns_lookup_kdc = true',
stop sssd and try to restart named. This will run into a timeout and
bind will not start. The reason is that besides a KDC the Kerberos
client libraries also try to look up the master KDC (but it seems to be
ok if the lookup finally fails). If sssd is running the locator plugin
will return the current KDC as master. If it is not running, as in the
test described above, /etc/krb5.conf is used next. If it does not have a
master_kdc entry and 'dns_lookup_kdc = false' there is no other source
for the master KDC and the client libraries continue with normal
processing. If master_kdc is not set but 'dns_lookup_kdc = true' then a
DNS lookup is tried, which will run into a timeout since the DNS server
is not started yet. But if master_kdc is set in krb5.conf the client
libraries will just use this value and will not try any DNS lookup,
independently of the setting of dns_lookup_kdc.
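The behaviour described above suggests a krb5.conf realm section along these lines (realm and host names are illustrative, not from the thread):

```ini
[libdefaults]
 dns_lookup_kdc = true
 rdns = false

[realms]
 EXAMPLE.COM = {
  kdc = ipa.example.com:88
  ; With master_kdc set here, the client libraries use this value and
  ; never fall back to a DNS lookup for the master KDC, so named can
  ; start before the local DNS server is up.
  master_kdc = ipa.example.com:88
 }
```

This is the configuration shape ticket #2515 is about: master_kdc pinned in krb5.conf so startup never blocks on a DNS query against a server that is not running yet.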

As a side note: since we run named as user named, I wonder if it would be
possible to use SASL EXTERNAL auth instead of GSSAPI to bind to the LDAP
server. If this worked safely and securely, there would be no
dependency on the KDC during the startup of bind.


The reason why we use gssapi is so that all operations performed by bind
happen as the DNS/fqdn user, and we can use ACIs targeted at the bind
process. In order to use SASL EXTERNAL we would need the bind process to
change euid to an unprivileged user that we then need to map to some
specific user.

In general krb5kdc should always start before named, and should not
depend on DNS resolution. If krb5kdc is started first bind should have
no issues. The only problem is if gssapi libraries try to use DNS
resolution, but we should have that solved by using the krb locator
plugin.

Simo.

--
Simo Sorce * Red Hat, Inc * New York

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel