Re: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be?

2013-01-11 Thread Petr Viktorin

On 01/10/2013 05:11 PM, John Dennis wrote:

On 01/10/2013 10:23 AM, Martin Kosek wrote:

AFAIU, the API will not change as we do the CSV processing only on
client side
and send processed entries to the server. CSV processing on old
clients should
still work fine.


Is that really true? I thought I remembered CSV parsing logic inside the
plugins.



No. CSV parsing now only happens on the client. For example, when using 
the Web UI no CSV is involved at all.


This would be a CLI change, not an API one. But I guess we need a policy 
for CLI changes, too.


--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 1080 fix migration of uniqueMember

2013-01-11 Thread Martin Kosek
On 01/10/2013 09:34 PM, Rob Crittenden wrote:
> We were asserting that the uniqueMember contained DN objects but weren't
> actually making them DN objects.
> 
> A sample entry looks like:
> 
> dn: cn=Group1,ou=Groups,dc=example,dc=com
> gidNumber: 1001
> objectClass: top
> objectClass: groupOfUniqueNames
> objectClass: posixGroup
> cn: Group1
> uniqueMember: uid=puser2,ou=People,dc=example,dc=com
> 
> rob
> 

Works fine. ACK. Pushed to master, ipa-3-1, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 346 permission-find no longer crashes with --targetgroup

2013-01-11 Thread Martin Kosek
On 01/10/2013 10:53 PM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> Target Group parameter was not processed correctly which caused
>> permission-find to always crash when this search parameter was used.
>> Fix the crash and create a unit test case to avoid future regression.
>>
>> https://fedorahosted.org/freeipa/ticket/3335
> 
> ACK
> 

Pushed to master, ipa-3-1, ipa-3-0.

Martin

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] [PATCH] 347 Avoid CRL migration error message

2013-01-11 Thread Martin Kosek
On 01/11/2013 12:10 AM, Rob Crittenden wrote:
> Martin Kosek wrote:
>> When CRL files are being migrated to a new directory, the upgrade
>> log may contain an error message raised during MasterCRL.bin symlink
>> migration. This is actually caused by the `chown' operation, which
>> tried to chown a symlinked file that had not been migrated yet.
>>
>> Sort migrated files before the migration process and put symlinks
>> at the end of the list. Also do not run chown on the symlinks as
>> it is a redundant operation since the symlinked file will be
>> chown'ed on its own.
>>
>> https://fedorahosted.org/freeipa/ticket/3336
> 
> ACK
> 

Pushed to master, ipa-3-1, ipa-3-0.

Martin
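
A generic sketch of the ordering rule described in the patch above (symlinks
migrated last, and no chown on the symlink itself); the function name, paths
and error handling are simplified for illustration:

import os
import shutil

def migrate_crl_files(paths, dest_dir, uid, gid):
    # os.path.islink() is False for regular files and True for symlinks,
    # so sorting by it puts all symlinks at the end of the list.
    for path in sorted(paths, key=os.path.islink):
        is_link = os.path.islink(path)
        target = os.path.join(dest_dir, os.path.basename(path))
        shutil.move(path, target)
        if not is_link:
            # Skip chown for symlinks; the linked file is chowned on its own.
            os.chown(target, uid, gid)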

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread Petr Viktorin
We had a small discussion off-list about how we want IPA's LDAP handling 
to look in the future.
To continue the discussion publicly I've summarized the results and 
added some of my own ideas to a page.

John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.


--
Petr³


__NOTOC__

= Overview =

Ticket [https://fedorahosted.org/freeipa/ticket/2660 #2660]: installer code
should use ldap2


This is important to do.
We really should have just one API and set of classes for dealing with LDAP.
For the DN work we had to refactor a fair amount of code in order to force
most things to funnel through one common code location.
Because our LDAP handling is so decentralized and we had so many different
APIs, classes, etc., it was a large chunk of work that is only partially
complete: things got a lot better, but it wasn't finished.

The primary thing which needs to be resolved is our use of Entity and Entry
classes.
There never should have been two almost identical classes.
One or both of Entity/Entry needs to be removed.

As it stands now we have two basic ways of accessing LDAP results.
In the installer code it's mostly via Entity/Entry objects.
But in the server code (ldap2) it's done by accessing the data as returned
by the python-ldap module (e.g. a list of (DN, attr_dict) tuples).
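
For illustration, a rough sketch (not actual IPA code) of the two styles:

  # Raw python-ldap style, as used on the ldap2 side: (dn, attrs) tuples.
  raw_results = [
      ('uid=puser2,ou=People,dc=example,dc=com',
       {'uid': ['puser2'], 'objectClass': ['top', 'posixAccount']}),
  ]
  for dn, attrs in raw_results:
      print('%s has uid %s' % (dn, attrs['uid'][0]))

  # Entity/Entry object style, as mostly used by the installer code
  # (heavily simplified; the real classes carry much more behaviour).
  class Entry(object):
      def __init__(self, dn, attrs):
          self.dn = dn
          self.data = attrs

      def getValues(self, name):
          return self.data.get(name, [])

  entry = Entry(*raw_results[0])
  print('%s has uid %s' % (entry.dn, entry.getValues('uid')[0]))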

We need to decide which of the two basic interfaces we're going to use and
converge on it.
Each approach has merits.
But 3 different APIs for interacting with ldap is 2 too many.


= Use Cases =

N/A

= Design =

== Entry representation ==

LDAP entries will be encapsulated in objects.
These will perform type checking and validation (ticket #2357).
They should grow from ldap2.LDAPEntry (which is currently just a "dn, data"
namedtuple).

These objects will behave like a dict of lists:

  entry[attrname] = [value]
  attrname in entry
  del entry[attrname]
  entry.keys(), .values(), .items()  # but NOT `for key in entry`, see below


The keys are case-insensitive but case-preserving.

We'll use lists for all attributes, even single-valued ones, because
"single-valuedness" can change.


QUESTION: Would having entry.dn as an alias for entry['dn'] be useful enough
to break the rules?


The object should also "remember" its original set of attributes,
so we don't have to retrieve them from LDAP again when it's updated.
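
A rough sketch of the intended behaviour (illustration only, not the eventual
ldap2.LDAPEntry code; the class name and internals here are made up):

  class ExampleEntry(object):
      def __init__(self, dn, attrs=None):
          self.dn = dn
          self._names = {}   # lowercased name -> name as first seen
          self._data = {}    # lowercased name -> list of values
          for name, values in (attrs or {}).items():
              self[name] = values
          # "Remember" what came from LDAP so an update can be diffed later.
          self._orig = dict(self._data)

      def _key(self, name):
          return name.lower()

      def __setitem__(self, name, values):
          key = self._key(name)
          self._names.setdefault(key, name)  # preserve the original spelling
          self._data[key] = list(values)     # always store a list

      def __getitem__(self, name):
          return self._data[self._key(name)]

      def __delitem__(self, name):
          key = self._key(name)
          del self._data[key]
          del self._names[key]

      def __contains__(self, name):
          return self._key(name) in self._data

      def keys(self):
          return [self._names[key] for key in self._data]

      def values(self):
          return list(self._data.values())

      def items(self):
          return [(self._names[key], values)
                  for key, values in self._data.items()]

  entry = ExampleEntry('uid=puser2,ou=People,dc=example,dc=com',
                       {'objectClass': ['top', 'posixAccount']})
  entry['uid'] = ['puser2']
  assert 'OBJECTCLASS' in entry
  assert entry['objectclass'] == ['top', 'posixAccount']
  assert set(entry.keys()) == {'objectClass', 'uid'}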



== The connection/backend class ==

We'll use the ldap2 class.

The class has some overly specific "helper" methods like remove_principal_key
or modify_password. We shouldn't add new ones, and the existing ones should be
moved away eventually.




== Backwards compatibility, porting ==

For compatibility with existing plugins, the LDAPEntry object will unpack to
a tuple:

dn, entry_attrs = entry
(the entry_attrs can be entry itself, so we keep the object's validation 
powers)

I'd rather not keep indexing (dn = entry[0]; entry_attrs = entry[1]).
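
A minimal sketch of the unpacking, with a made-up class name (just enough of
an entry object to show the mechanism):

  class UnpackableEntry(object):
      def __init__(self, dn):
          self.dn = dn

      def __iter__(self):
          # Allows: dn, entry_attrs = entry
          # Iteration is reserved for this compatibility unpacking, which is
          # one reason a plain `for key in entry` is not offered above.
          return iter((self.dn, self))

  entry = UnpackableEntry('cn=Group1,ou=Groups,dc=example,dc=com')
  dn, entry_attrs = entry
  assert dn == entry.dn
  assert entry_attrs is entry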

We'll also temporarily add the legacy Entry interface (toTupleList, toDict,
setValues, data, origDataDict, ...) to LDAPEntry.
The IPAdmin class will be subclassed from ldap2. All its uses will be
gradually converted to only use the ldap2 interface, at which point IPAdmin
can be removed.

The backwards compatibility stuff should be removed as soon as it's unused.

Of course code using the raw python-ldap API will also be converted to 
ldap2.


= Implementation =

No additional requirements or changes discovered during the 
implementation phase.


= Feature Management =

N/A

= Major configuration options and enablement =

N/A

= Replication =

N/A

= Updates and Upgrades =

N/A

= Dependencies =

N/A

= External Impact =

N/A

= Design page authors =

~~~, jdennis

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] [PATCH] 348 Sort LDAP updates properly

2013-01-11 Thread Martin Kosek
LDAP updates were sorted by number of RDNs in DN. This, however,
sometimes caused updates to be executed before cn=schema updates.
If the update required an objectClass or attributeType added during
the cn=schema update, the update operation failed.

Fix the sorting so that the cn=schema updates are always run first
and then the other updates sorted by RDN count.

https://fedorahosted.org/freeipa/ticket/3342
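
For illustration, here is a standalone sketch of the sort key the attached
patch uses, with DNs modelled as plain tuples of RDN strings instead of the
real DN class:

# cn=schema first (False sorts before True), then everything else ordered
# by its RDN count.
SCHEMA_DN = ('cn=schema',)

def update_sort_key(dn):
    return (dn != SCHEMA_DN, len(dn))

dns = [
    ('cn=ipaconfig', 'cn=etc', 'dc=example', 'dc=com'),
    ('cn=schema',),
    ('dc=example', 'dc=com'),
]

for dn in sorted(dns, key=update_sort_key):
    print(','.join(dn))
# cn=schema
# dc=example,dc=com
# cn=ipaconfig,cn=etc,dc=example,dc=com
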
From 51d877abf8216a3f07a222197c7be32aac7f000a Mon Sep 17 00:00:00 2001
From: Martin Kosek 
Date: Fri, 11 Jan 2013 13:43:15 +0100
Subject: [PATCH] Sort LDAP updates properly

LDAP updates were sorted by number of RDNs in DN. This, however,
sometimes caused updates to be executed before cn=schema updates.
If the update required an objectClass or attributeType added during
the cn=schema update, the update operation failed.

Fix the sorting so that the cn=schema updates are always run first
and then the other updates sorted by RDN count.

https://fedorahosted.org/freeipa/ticket/3342
---
 ipaserver/install/ldapupdate.py | 31 ++-
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/ipaserver/install/ldapupdate.py b/ipaserver/install/ldapupdate.py
index f7261adc4742fb6175969c2c0a16833e2394054f..4854410897a760f7cf3803d4308af7af82122e64 100644
--- a/ipaserver/install/ldapupdate.py
+++ b/ipaserver/install/ldapupdate.py
@@ -893,26 +893,23 @@ class LDAPUpdate:
 
 def _run_updates(self, all_updates):
 # For adds and updates we want to apply updates from shortest
-# to greatest length of the DN. For deletes we want the reverse.
-
-dn_by_rdn_count = {}
-for dn in all_updates.keys():
+# to greatest length of the DN. cn=schema must always go first to add
+# new objectClasses and attributeTypes
+# For deletes we want the reverse
+def update_sort_key(dn_update):
+dn, update = dn_update
 assert isinstance(dn, DN)
-rdn_count = len(dn)
-rdn_count_list = dn_by_rdn_count.setdefault(rdn_count, [])
-if dn not in rdn_count_list:
-rdn_count_list.append(dn)
+return dn != DN(('cn', 'schema')), len(dn)
 
-sortedkeys = dn_by_rdn_count.keys()
-sortedkeys.sort()
-for rdn_count in sortedkeys:
-for dn in dn_by_rdn_count[rdn_count]:
-self._update_record(all_updates[dn])
+sorted_updates = sorted(all_updates.iteritems(), key=update_sort_key)
 
-sortedkeys.reverse()
-for rdn_count in sortedkeys:
-for dn in dn_by_rdn_count[rdn_count]:
-self._delete_record(all_updates[dn])
+for dn, update in sorted_updates:
+self._update_record(update)
+
+# Now run the deletes in reversed order
+sorted_updates.reverse()
+for dn, update in sorted_updates:
+self._delete_record(update)
 
 def update(self, files):
 """Execute the update. files is a list of the update files to use.
-- 
1.7.11.7

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread Rob Crittenden

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a 
configuration to be set up prior to working. That is one of the nice 
things about the IPAdmin interface, it is much easier to create 
connections to other hosts.


rob

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread John Dennis

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API 
or a single point where LDAP data flows through. It might mean having 
more than one initialization method or subclassing.



--
John Dennis 

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread Rob Crittenden

John Dennis wrote:

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API
or a single point where LDAP data flows through. It might mean having
more than one initialization method or subclassing.




Right. We may need to decouple from api a bit. I haven't looked at this 
for a while but one of the problems is that api locks its values after 
finalization which can make things a bit inflexible. We use some nasty 
override code in some places but it isn't something I'd want to see 
spread further.


rob

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread Petr Viktorin

Hello list,
This discussion was started in private; I'll continue it here.

On 01/10/2013 05:41 PM, John Dennis wrote:

On 01/10/2013 04:27 AM, Petr Viktorin wrote:

On 01/09/2013 03:55 PM, John Dennis wrote:



And I could work on improving the i18n/translations infrastructure,
starting by writing up a RFE+design.



Could you elaborate as to what you perceive as the current problems and
what this work would address.



Here are my notes:



- Use fake translations for tests


We already do (but perhaps not sufficiently).


I mean use it in *all* tests, to ensure all the right things are 
translated and weird characters are handled well.

See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html
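
A sketch of what such a fake catalog could look like, built on gettext's
NullTranslations; the class name and the marker characters are made up for
this example:

import gettext

class FakeTranslations(gettext.NullTranslations):
    # Every msgid comes back wrapped in non-ASCII markers, so a test can
    # verify that a string really went through gettext and that unicode
    # survives the round trip.
    def gettext(self, message):
        return u'\u00bb' + message + u'\u00ab'

    def ngettext(self, singular, plural, n):
        return self.gettext(singular if n == 1 else plural)

translations = FakeTranslations()
assert translations.gettext('Add a new user.') == u'\u00bbAdd a new user.\u00ab'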


- Split up huge strings so the entire text doesn't have to be
retranslated each time something changes/is added


Good idea. But one question I have is should we be optimizing for our
programmers time or the translators time? The Transifex tool should make
available to translators similar existing translations (in fact it
might, I seem to recall some functionality in this area). Wouldn't it be
better to address this issue in Transifex where all projects would benefit?

Also the exact same functionality is needed to support release versions.
The strings between releases are often close but not identical. The
Transifex tool should make available a close match from a previous
version to the translator working on a new version (or vice versa). See
your issue below concerning versions.

IMHO this is a Transifex issue which needs to be solved there, not
something we should be investing precious IPA programmers time on. Plus
if it's solved in Transifex it's a *huge* win for *everyone*, not just IPA.


Huh? Splitting the strings provides additional information 
(paragraph/context boundaries) that Transifex can't get otherwise. From 
what I hear it's a pretty standard technique when working with gettext.
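
A minimal illustration of the splitting technique, with made-up help text
(not actual IPA strings); xgettext then extracts one msgid per paragraph, so
a change in one paragraph only invalidates that paragraph's translation:

from gettext import gettext as _

# Before: one huge msgid; any edit forces the whole block to be retranslated.
doc_monolithic = _("""
Manage example objects.

Example objects are used here purely for illustration.
""")

# After: one msgid per logical paragraph, joined back together at runtime.
doc_split = '\n\n'.join([
    _('Manage example objects.'),
    _('Example objects are used here purely for illustration.'),
])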


For typos, gettext has the "fuzzy" functionality that we explicitly turn 
off. I think we're on our own here.



- Keep a history/repo of the translations, since Transifex only stores
the latest version


We already do keep a history, it's in git.


It's not updated often enough. If I mess something up before a release 
and Transifex gets wiped, or if a rogue translator deletes some 
translations, the work is gone.



- Update the source strings on Transifex more often (ideally as soon as
patches are pushed)


Yes, great idea, this would be really useful and is necessary.


- Break Git dependencies: make it possible generate the POT in an
unpacked tarball


Are you talking about the fact our scripts invoke git to determine what
files to process? If so then yes, this would be a good dependency to get
rid of. However it does mean we somehow have to maintain a manifest list
of some sort somewhere.


A directory listing is fine IMO. We use it for more critical things, 
like loading plugins, without any trouble.
Also, when run in a Git repo the Makefile can compare the file list with 
what Git says and warn accordingly.



- Figure out how to best share messages across versions (2.x vs. 3.x) so
they only have to be translated once


There is a crying need for this, but isn't this a Transifex issue? Why
would we be solving this in IPA? What about SSSD and every other project,
they all have identical issues. As far as I can tell Transifex has never
addressed this issue sufficiently (see above) and the onus is on them to
do so.


I don't think waiting for Transifex will solve the problem.


- Clean up checked-in PO files even more, for nicer diffs


A nice feature, but I'm wondering to what extent we're currently suffering
because of this. It's rare that we have to compare PO files. Plus diff
is not well suited for comparing PO's because PO files with equivalent
data can be formatted differently. That's why I wrote some tools to read
PO files, normalize the contents and then do a comparison. Anyway my top
level question is is this something we really need at this point?


You're right that files have to be normalized to diff well. That's 
actually the point here :)
Anyway I'm just thinking of sorting the PO alphabetically - an extra 
option to msgattrib should do it.



- Automate & document the process so any dev can do it


Excellent goal, we're not too far from it now, but of all the things on
the list this is the most important.


--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread John Dennis

On 01/11/2013 09:55 AM, Rob Crittenden wrote:

John Dennis wrote:

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API
or a single point where LDAP data flows through. It might mean having
more than one initialization method or subclassing.




Right. We may need to decouple from api a bit. I haven't looked at this
for a while but one of the problems is that api locks its values after
finalization which can make things a bit inflexible. We use some nasty
override code in some places but it isn't something I'd want to see
spread further.


Ah, object locking, yes I've been bitten by that too. I'm not sure I 
recall having problems with locked ldap objects but I've certainly 
been frustrated with trying to modify other objects which were locked at 
api creation time.


I wonder if the object locking Jason introduced at the early stages of 
our development is an example of a good idea that's not wonderful in 
practice. You either have to find the exact moment where an object gets 
created and update it then which is sometimes awkward or worse 
impossible or you have to resort to overriding it with the setattr() big 
hammer. Judging by our use of setattr it's obvious there are numerous 
places we need to modify a locked object.


It's not clear to me if the issues with modifying a locked object are 
indicative of problems with the locking concept or bad code structure 
which forces us to violate the locking concept.
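
For anyone not familiar with the pattern, a generic sketch of
lock-after-creation and of the object.__setattr__ "big hammer" (this is not
the actual ipalib implementation, just the shape of the problem):

class Lockable(object):
    __locked = False

    def lock(self):
        self.__locked = True

    def __setattr__(self, name, value):
        if self.__locked:
            raise AttributeError('%r is locked' % self)
        super(Lockable, self).__setattr__(name, value)

obj = Lockable()
obj.env = 'initial'      # fine: not locked yet
obj.lock()

try:
    obj.env = 'changed'  # refused once locked
except AttributeError:
    pass

# The escape hatch: bypass the class's __setattr__ entirely.
object.__setattr__(obj, 'env', 'changed anyway')
assert obj.env == 'changed anyway'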



--
John Dennis 

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread Petr Viktorin

On 01/11/2013 03:55 PM, Rob Crittenden wrote:

John Dennis wrote:

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP
handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API
or a single point where LDAP data flows through. It might mean having
more than one initialization method or subclassing.


Yes. I looked at the code again and saw the same thing. Fortunately, 
there's not too much that needs the api object: creating the connection, 
`get_ipa_config` which shouldn't really be at this level, 
CrudBackend-specific things, and `normalize_dn` (which I'd really like 
to remove but it's probably not worth the effort).



My working plan now is to have an ipaldap.LDAPBackend base class (please 
give me a better name), and subclass ldap2 & IPAdmin from that.
IPAdmin would just add the legacy API which we'll try to move away from; 
ldap2 would add the api-specific setup and CrudBackend bits (plus its 
own legacy methods).
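
A rough sketch of that layout (the method names, bodies, and the api
attribute access below are placeholders; only the split between shared,
api-specific, and legacy code is the point):

class LDAPBackend(object):
    """Shared connection handling and entry access (the common API)."""

    def __init__(self, ldap_uri):
        self.ldap_uri = ldap_uri

    def get_entry(self, dn, attrs_list=None):
        pass  # placeholder: common python-ldap access would live here


class ldap2(LDAPBackend):
    """Server-side backend: api-specific setup plus the CrudBackend bits."""

    def __init__(self, api):
        super(ldap2, self).__init__(api.env.ldap_uri)
        self.api = api


class IPAdmin(LDAPBackend):
    """Installer-side class: only adds the legacy API, to be phased out."""

    def getEntry(self, dn, attrs_list=None):  # legacy spelling
        return self.get_entry(dn, attrs_list)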


So, the ticket shouldn't really be named "installer code should use 
ldap2" :)




Right. We may need to decouple from api a bit. I haven't looked at this
for a while but one of the problems is that api locks its values after
finalization which can make things a bit inflexible. We use some nasty
override code in some places but it isn't something I'd want to see
spread further.


--
Petr³

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] Redesigning LDAP code

2013-01-11 Thread Rob Crittenden

John Dennis wrote:

On 01/11/2013 09:55 AM, Rob Crittenden wrote:

John Dennis wrote:

On 01/11/2013 09:10 AM, Rob Crittenden wrote:

Petr Viktorin wrote:

We had a small discussion off-list about how we want IPA's LDAP
handling
to look in the future.
To continue the discussion publicly I've summarized the results and
added some of my own ideas to a page.
John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.




IIRC some of the python-ldap code is used b/c ldap2 may require a
configuration to be set up prior to working. That is one of the nice
things about the IPAdmin interface, it is much easier to create
connections to other hosts.


Good point. But I don't believe that issue affects having a common API
or a single point where LDAP data flows through. It might mean having
more than one initialization method or subclassing.




Right. We may need to decouple from api a bit. I haven't looked at this
for a while but one of the problems is that api locks its values after
finalization which can make things a bit inflexible. We use some nasty
override code in some places but it isn't something I'd want to see
spread further.


Ah, object locking, yes I've been bitten by that too. I'm not sure I
recall having problems with locked ldap objects but I've certainly
been frustrated with trying to modify other objects which were locked at
api creation time.

I wonder if the object locking Jason introduced at the early stages of
our development is an example of a good idea that's not wonderful in
practice. You either have to find the exact moment where an object gets
created and update it then which is sometimes awkward or worse
impossible or you have to resort to overriding it with the setattr() big
hammer. Judging by our use of setattr it's obvious there are numerous
places we need to modify a locked object.

It's not clear to me if the issues with modifying a locked object are
indicative of problems with the locking concept or bad code structure
which forces us to violate the locking concept.




The reasoning IIRC was we didn't want a plugin developer mucking with a 
lot of this for their one plugin as it would affect the entire server.


rob

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] [PATCH] 349 Test NetBIOS name clash before creating a trust

2013-01-11 Thread Martin Kosek
Give a clear message about what is wrong with the current Trust settings
before letting AD return a confusing error message.

https://fedorahosted.org/freeipa/ticket/3193
From c792dffbc65aba27d18196def91da14c2e98f5f4 Mon Sep 17 00:00:00 2001
From: Martin Kosek 
Date: Fri, 11 Jan 2013 16:33:43 +0100
Subject: [PATCH] Test NetBIOS name clash before creating a trust

Give a clear message about what is wrong with the current Trust settings
before letting AD return a confusing error message.

https://fedorahosted.org/freeipa/ticket/3193
---
 ipaserver/dcerpc.py | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/ipaserver/dcerpc.py b/ipaserver/dcerpc.py
index 54a70defc9df52db58054d29c1c9f9189a88cabb..570dc9d53789dffa50d02d915510f34e8e2d1a9f 100644
--- a/ipaserver/dcerpc.py
+++ b/ipaserver/dcerpc.py
@@ -585,6 +585,12 @@ class TrustDomainInstance(object):
 info.trust_type = lsa.LSA_TRUST_TYPE_UPLEVEL
 info.trust_attributes = lsa.LSA_TRUST_ATTRIBUTE_FOREST_TRANSITIVE
 
+if self.info['name'] == info.netbios_name.string:
+# Check that NetBIOS names do not clash
+raise errors.ValidationError(name=u'AD Trust Setup',
+error=_('this and the remote domain cannot share the same '
+'NetBIOS name: %s') % self.info['name'])
+
 try:
 dname = lsa.String()
 dname.string = another_domain.info['dns_domain']
-- 
1.7.11.7

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 10:04 AM, Petr Viktorin wrote:

Hello list,
This discussion was started in private; I'll continue it here.

On 01/10/2013 05:41 PM, John Dennis wrote:

On 01/10/2013 04:27 AM, Petr Viktorin wrote:

On 01/09/2013 03:55 PM, John Dennis wrote:



And I could work on improving the i18n/translations infrastructure,
starting by writing up a RFE+design.



Could you elaborate as to what you perceive as the current problems and
what this work would address.



Here are my notes:



- Use fake translations for tests


We already do (but perhaps not sufficiently).


I mean use it in *all* tests, to ensure all the right things are
translated and weird characters are handled well.
See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html


Ah yes, I like the idea of a test domain for strings, this is a good 
idea. Not only would it exercise our i18n code more but it could 
insulate the tests from string changes (the test would look for a 
canonical string in the test domain)





- Split up huge strings so the entire text doesn't have to be
retranslated each time something changes/is added


Good idea. But one question I have is should we be optimizing for our
programmers time or the translators time? The Transifex tool should make
available to translators similar existing translations (in fact it
might, I seem to recall some functionality in this area). Wouldn't it be
better to address this issue in Transifex where all projects would benefit?

Also the exact same functionality is needed to support release versions.
The strings between releases are often close but not identical. The
Transifex tool should make available a close match from a previous
version to the translator working on a new version (or vice versa). See
your issue below concerning versions.

IMHO this is a Transifex issue which needs to be solved there, not
something we should be investing precious IPA programmers time on. Plus
if it's solved in Transifex it's a *huge* win for *everyone*, not just IPA.


Huh? Splitting the strings provides additional information
(paragraph/context boundaries) that Transifex can't get otherwise. From
what I hear it's a pretty standard technique when working with gettext.


I'm not sure how splitting text into smaller units gives more context 
but I can see the argument for each msgid being a logical paragraph. We 
don't have too many multi-paragraph strings now so it shouldn't be too 
involved.




For typos, gettext has the "fuzzy" functionality that we explicitly turn
off. I think we're on our own here.


Be very afraid of turning on fuzzy matching. Before we moved to TX we 
used the entire gnu tool chain. I discovered a number of our PO files 
were horribly corrupted. With a lot of work I traced this down to fuzzy 
matches. If memory serves me right here is what happened.


When a msgstr was absent a fuzzy match was performed and inserted as a 
candidate msgstr. Somehow the fuzzy candidates got accepted as actual 
msgstr's. I'm not sure if we ever figured out how this happened. The two 
most likely explanations were 1) a known bug in TX that stripped the 
fuzzy flag off the msgstr or 2) a translator who blindly accepted all 
"TX suggestions". (A suggestion in TX comes from a fuzzy match).


But the real problem is the fuzzy matching is horribly bad. Most of the 
fuzzy suggestions (primarily on short strings) were wildly incorrect.


I had to go back to a number of PO files and manually locate all fuzzy 
suggestions that had been promoted to legitimate msgstr's. A tedious 
process I hope to never repeat.


BTW, if memory serves me correctly the fuzzy suggestions got into the PO 
files in the first place because we were running the full gnu tool chain 
(sorry off the top of my head I don't recall exactly which component 
inserts the fuzzy suggestion), but I think we've since turned that off, 
for a very good reason.






- Keep a history/repo of the translations, since Transifex only stores
the latest version


We already do keep a history, it's in git.


It's not updated often enough. If I mess something up before a release
and Transifex gets wiped, or if a rogue translator deletes some
translations, the work is gone.


Yes, updating more frequently is an excellent goal.




- Update the source strings on Transifex more often (ideally as soon as
patches are pushed)


Yes, great idea, this would be really useful and is necessary.


- Break Git dependencies: make it possible generate the POT in an
unpacked tarball


Are you talking about the fact our scripts invoke git to determine what
files to process? If so then yes, this would be a good dependency to get
rid of. However it does mean we somehow have to maintain a manifest list
of some sort somewhere.


A directory listing is fine IMO. We use it for more critical things,
like loading plugins, without any trouble.
Also, when run in a Git repo the Makefile can compare the file list with
what Git says and warn accordingly.


How do you 

Re: [Freeipa-devel] [PATCH 0005] Clarified error message with ipa-client-automount

2013-01-11 Thread Lynn Root

On Mon 03 Dec 2012 05:20:32 AM PST, Lynn Root wrote:

On 11/30/2012 10:35 PM, Rob Crittenden wrote:

Lynn Root wrote:

Returns a clearer hint when user is running ipa-client-automount with
possible firewall up and blocking needed ports.

Not sure if this patch is worded correctly in order to address the
potential firewall block when running ipa-client-automount. Perhaps a
different error should be thrown, rather than NOT_IPA_SERVER.

Ticket: https://fedorahosted.org/freeipa/ticket/3080


Tomas made a similar change recently in ipa-client-install which
includes more information on the ports we need. You may want to take
a look at that. It was for ticket
https://fedorahosted.org/freeipa/ticket/2816

rob

Thank you Rob - I adapted the same approach in this updated patch. Let
me know if it addresses the blocked port issue better.

Thanks!


Just bumping this thread - I think this might have fallen by the 
wayside; I certainly lost track of it myself after returning 
home/holidays.


However I noticed that this ticket 
(https://fedorahosted.org/freeipa/ticket/3080) now has an RFE tag - 
don't _believe_ that was there when I started working on it in late 
November.  I believe the whole design doc conversation was going on 
around then. I assume I'll need to start one for this?


Thanks!

--
Lynn Root

@roguelynn
Associate Software Engineer
Red Hat, Inc

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


[Freeipa-devel] [PATCH 0107] Don't fail if idnsSOAserial attribute is missing in LDAP

2013-01-11 Thread Petr Spacek

Hello,

Don't fail if idnsSOAserial attribute is missing in LDAP.

DNS zones created on remote IPA 3.0 server don't have
idnsSOAserial attribute present in LDAP.

https://bugzilla.redhat.com/show_bug.cgi?id=894131


The attached patch contains the minimal set of changes needed for resurrecting BIND.

In configurations with serial auto-increment:
- enabled (IPA 3.0+ default): some new serial is written back to LDAP nearly
immediately
- disabled: the attribute will be missing forever

--
Petr^2 Spacek
From 958f46a5ceee336e2466686bafbb203082e2ccc1 Mon Sep 17 00:00:00 2001
From: Petr Spacek 
Date: Fri, 11 Jan 2013 17:30:03 +0100
Subject: [PATCH] Don't fail if idnsSOAserial attribute is missing in LDAP.

DNS zones created on remote IPA 3.0 server don't have
idnsSOAserial attribute present in LDAP.

https://bugzilla.redhat.com/show_bug.cgi?id=894131

Signed-off-by: Petr Spacek 
---
 src/ldap_entry.c | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/src/ldap_entry.c b/src/ldap_entry.c
index 1e165ca696ccafa177f17b97bda08ed9cc344c7d..52b927d410300eb6df98ea058c3a08b426d66a70 100644
--- a/src/ldap_entry.c
+++ b/src/ldap_entry.c
@@ -350,8 +350,9 @@ ldap_entry_getfakesoa(ldap_entry_t *entry, const ld_string_t *fake_mname,
 	ldap_valuelist_t values;
 	int i = 0;
 
+	const char *soa_serial_attr = "idnsSOAserial";
 	const char *soa_attrs[] = {
-		"idnsSOAmName", "idnsSOArName", "idnsSOAserial",
+		"idnsSOAmName", "idnsSOArName", soa_serial_attr,
 		"idnsSOArefresh", "idnsSOAretry", "idnsSOAexpire",
 		"idnsSOAminimum", NULL
 	};
@@ -366,12 +367,25 @@ ldap_entry_getfakesoa(ldap_entry_t *entry, const ld_string_t *fake_mname,
 		CHECK(str_cat_char(target, " "));
 	}
 	for (; soa_attrs[i] != NULL; i++) {
-		CHECK(ldap_entry_getvalues(entry, soa_attrs[i], &values));
+		result = ldap_entry_getvalues(entry, soa_attrs[i], &values);
+		/** Workaround for
+		 *  https://bugzilla.redhat.com/show_bug.cgi?id=894131
+		 *  DNS zones created on remote IPA 3.0 server don't have
+		 *  idnsSOAserial attribute present in LDAP. */
+		if (result == ISC_R_NOTFOUND
+		&& soa_attrs[i] == soa_serial_attr) {
+			/* idnsSOAserial is missing! Read it as 1. */
+			CHECK(str_cat_char(target, "1 "));
+			continue;
+		} else if (result != ISC_R_SUCCESS)
+			goto cleanup;
+
 		CHECK(str_cat_char(target, HEAD(values)->value));
 		CHECK(str_cat_char(target, " "));
 	}
 
 cleanup:
+	/* TODO: check for memory leaks */
 	return result;
 }
 
-- 
1.7.11.7

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread Jérôme Fenal
2013/1/11 John Dennis 

> On 01/11/2013 10:04 AM, Petr Viktorin wrote:
>
>> Hello list,
>> This discussion was started in private; I'll continue it here.
>>
>> On 01/10/2013 05:41 PM, John Dennis wrote:
>>
>>> On 01/10/2013 04:27 AM, Petr Viktorin wrote:
>>>
 On 01/09/2013 03:55 PM, John Dennis wrote:

>>>
>>>  And I could work on improving the i18n/translations infrastructure,
>> starting by writing up a RFE+design.
>>
>
>>>  Could you elaborate as to what you perceive as the current problems and
> what this work would address.
>

>>>  Here are my notes:

>>>
>>>  - Use fake translations for tests

>>>
>>> We already do (but perhaps not sufficiently).
>>>
>>
>> I mean use it in *all* tests, to ensure all the right things are
>> translated and weird characters are handled well.
>> See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html
>>
>
> Ah yes, I like the idea of a test domain for strings, this is a good idea.
> Not only would it exercise our i18n code more but it could insulate the
> tests from string changes (the test would look for a canonical string in
> the test domain)
>

FWIW, KDE also uses an empty .po (i.e. empty translated messages) to make it
easier to spot strings not marked for translation.



> - Split up huge strings so the entire text doesn't have to be
 retranslated each time something changes/is added

>>>
>>> Good idea. But one question I have is should we be optimizing for our
>>> programmers time or the translators time? The Transifex tool should make
>>> available to translators similar existing translations (in fact it
>>> might, I seem to recall some functionality in this area). Wouldn't it be
>>> better to address this issue in Transifex where all projects would
>>> benefit?
>>>
>>> Also the exact same functionality is needed to support release versions.
>>> The strings between releases are often close but not identical. The
>>> Transifex tool should make available a close match from a previous
>>> version to the translator working on a new version (or vice versa). See
>>> your issue below concerning versions.
>>>
>>> IMHO this is a Transifex issue which needs to be solved there, not
>>> something we should be investing precious IPA programmers time on. Plus
>>> if it's solved in Transifex it's a *huge* win for *everyone*, not just
>>> IPA.
>>>
>>
>> Huh? Splitting the strings provides additional information
>> (paragraph/context boundaries) that Transifex can't get otherwise. From
>> what I hear it's a pretty standard technique when working with gettext.
>>
>
> I'm not sure how splitting text into smaller units gives more context but
> I can see the argument for each msgid being a logical paragraph. We don't
> have too many multi-paragraph strings now so it shouldn't be too involved.
>

One issue also discussed on this list is the problem of 100+ line strings
in man pages generated from __doc__ tags in scripts.
Those are a _real_ pain for translators to maintain when only one line is
changed.

Didn't have the time yet to explore splitting those strings, I need to take
some to do so.


>
>> For typos, gettext has the "fuzzy" functionality that we explicitly turn
>> off. I think we're on our own here.
>>
>
> Be very afraid of turning on fuzzy matching. Before we moved to TX we used
> the entire gnu tool chain. I discovered a number of our PO files were
> horribly corrupted. With a lot of work I traced this down to fuzzy matches.
> If memory serves me right here is what happened.
>
> When a msgstr was absent a fuzzy match was performed and inserted as a
> candidate msgstr. Somehow the fuzzy candidates got accepted as actual
> msgstr's. I'm not sure if we ever figured out how this happened. The two
> most likely explanations were 1) a known bug in TX that stripped the fuzzy
> flag off the msgstr or 2) a translator who blindly accepted all "TX
> suggestions". (A suggestion in TX comes from a fuzzy match).
>
> But the real problem is the fuzzy matching is horribly bad. Most of the
> fuzzy suggestions (primarily on short strings) were wildly incorrect.
>
> I had to go back to a number of PO files and manually locate all fuzzy
> suggestions that had been promoted to legitimate msgstr's. A tedious
> process I hope to never repeat.
>
> BTW, if memory serves me correctly the fuzzy suggestions got into the PO
> files in the first place because we were running the full gnu tool chain
> (sorry off the top of my head I don't recall exactly which component
> inserts the fuzzy suggestion), but I think we've since turned that off, for
> a very good reason.
>
>
>
>>  - Keep a history/repo of the translations, since Transifex only stores
 the latest version

>>>
>>> We already do keep a history, it's in git.
>>>
>>
>> It's not updated often enough. If I mess something up before a release
>> and Transifex gets wiped, or if a rogue transl

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 02:44 PM, Jérôme Fenal wrote:

2013/1/11 John Dennis <jden...@redhat.com>


Thank you Jérôme for your insights as a translator. We have a lop-sided 
perspective mostly from the developer point of view. We need to better 
understand the translator's perspective.



I'm not sure how splitting text into smaller units gives more
context but I can see the argument for each msgid being a logical
paragraph. We don't have too many multi-paragraph strings now so it
shouldn't be too involved.


One issue also discussed on this list is the problem of 100+ line
strings in man pages generated from __doc__ tags in scripts.
Those are a _real_ pain for translators to maintain when only one line
is changed.


I still think TX should attempt to match the msgid from a previous pot 
with an updated pot and show the *word* differences between the strings 
along with an edit window for the original translation. That would be so 
useful to translators I can't believe TX does not have that feature. All 
you would have to do is make a few trivial edits in the translation and 
save it.


But heck, I'm not a translator and I haven't used the translator's part 
of the TX tool much other than to explore how it works (and that was a 
while ago).




I'd see a few remarks here:
- this massive .po file would grow wildly, especially when a typo is
corrected in huge strings (__doc__), when additional sentences are
added to those, etc.
- breaking down bigger strings in smaller ones will certainly help here
in avoiding duplicated content,



- in Transifex, it is easy to upload a .po onto another branch, and only
untranslated matching strings would be updated. I used it on anaconda
where there are multiple branches between Fedora & RHEL5/6 & master,
that worked easily without breaking anything.


When you say easy to upload a .po onto another branch I assume you don't 
mean branch (TX has no such concept) but rather another TX resource. 
Anyway this is good to know, perhaps the way TX handles versions is not 
half as bad as it would appear.


--
John Dennis 

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] [PATCH 0005] Clarified error message with ipa-client-automount

2013-01-11 Thread Dmitri Pal
On 01/11/2013 12:47 PM, Lynn Root wrote:
> On Mon 03 Dec 2012 05:20:32 AM PST, Lynn Root wrote:
>> On 11/30/2012 10:35 PM, Rob Crittenden wrote:
>>> Lynn Root wrote:
 Returns a clearer hint when user is running ipa-client-automount with
 possible firewall up and blocking needed ports.

 Not sure if this patch is worded correctly in order to address the
 potential firewall block when running ipa-client-automount. Perhaps a
 different error should be thrown, rather than NOT_IPA_SERVER.

 Ticket: https://fedorahosted.org/freeipa/ticket/3080
>>>
>>> Tomas made a similar change recently in ipa-client-install which
>>> includes more information on the ports we need. You may want to take
>>> a look at that. It was for ticket
>>> https://fedorahosted.org/freeipa/ticket/2816
>>>
>>> rob
>> Thank you Rob - I adapted the same approach in this updated patch. Let
>> me know if it addresses the blocked port issue better.
>>
>> Thanks!
>
> Just bumping this thread - I think this might have fallen by the
> wayside; I certainly lost track of it myself after returning
> home/holidays.
>
> However I noticed that this ticket
> (https://fedorahosted.org/freeipa/ticket/3080) now has an RFE tag -
> don't _believe_ that was there when I started working on it in late
> November.  I believe the whole design doc conversation was going on
> around then. I assume I'll need to start one for this?
>
> Thanks!
>

It is an RFE, it just was not marked as such.
Good catch.
Yes, since it is an RFE, a design page will be required.


> -- 
> Lynn Root
>
> @roguelynn
> Associate Software Engineer
> Red Hat, Inc
>
> ___
> Freeipa-devel mailing list
> Freeipa-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/freeipa-devel


-- 
Thank you,
Dmitri Pal

Sr. Engineering Manager for IdM portfolio
Red Hat Inc.


---
Looking to carve out IT costs?
www.redhat.com/carveoutcosts/



___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel


Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread Jérôme Fenal
2013/1/11 John Dennis 

> On 01/11/2013 02:44 PM, Jérôme Fenal wrote:
>
>> 2013/1/11 John Dennis <jden...@redhat.com>
>>
>
> Thank you Jérôme for your insights as a translator. We have a lop-sided
> perspective mostly from the developer point of view. We need to better
> understand the translator's perspective.


You're welcome.
I'm not an expert at Transifex though.
I've yet to schedule a lunch with Kevin Raymond (he works a few kms away
from the French Red Hat office) who is coordinating the whole Fedora
translation effort, but customers first, yada yada... :)






 I'm not sure how splitting text into smaller units gives more
> context but I can see the argument for each msgid being a logical
> paragraph. We don't have too many multi-paragraph strings now so it
> shouldn't be too involved.
>
>
> One issue also discussed on this list is the problem of 100+ line
> strings in man pages generated from __doc__ tags in scripts.
> Those are a _real_ pain for translators to maintain when only one line
> is changed.
>

I still think TX should attempt to match the msgid from a previous pot with
> an updated pot and show the *word* differences between the strings along
> with an edit window for the original translation. That would be so useful
> to translators I can't believe TX does not have that feature. All you would
> have to do is make a few trivial edits in the translation and save it.
>

I agree with you.
But transifex developers seem to be overloaded at the moment.
I can check with Kevin (and internally) if Zanata would provide a better
home to host the translation effort.


> But heck, I'm not a translator and I haven't used the translator's part of
> the TX tool much other than to explore how it works (and that was a while
> ago).


I can understand that... :)
Hopefully, the IPA dev team is multilingual ;)

 I'd see a few remarks here:
> - this massive .po file would grow wildly, especially when a typo is
> corrected in huge strings (__doc__), when additional sentences are
> added to those, etc.
> - breaking down bigger strings in smaller ones will certainly help here
> in avoiding duplicated content,
>

 - in Transifex, it is easy to upload a .po onto another branch, and only
> untranslated matching strings would be updated. I used it on anaconda
> where there are multiple branches between Fedora & RHEL5/6 & master,
> that worked easily without breaking anything.
>

When you say easy to upload a .po onto another branch I assume you don't
> mean branch (TX has no such concept) but rather another TX resource. Anyway
> this is good to know, perhaps the way TX handles versions is not half as
> bad as it would appear.


You're right. See how anaconda is organized, for instance:
 https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059

Regards,

J.
-- 
Jérôme Fenal
___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread John Dennis

On 01/11/2013 04:00 PM, Jérôme Fenal wrote:

When you say easy to upload a .po onto another branch I assume you
don't mean branch (TX has no such concept) but rather another TX
resource. Anyway this is good to know, perhaps the way TX handles
versions is not half as bad as it would appear.


You're right. See how anaconda is organized, for instance:
https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059


We follow the same model as anaconda, a new TX resource per version.

--
John Dennis 

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/

___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] i18n infrastructure improvements

2013-01-11 Thread Jérôme Fenal
2013/1/11 John Dennis 

> On 01/11/2013 04:00 PM, Jérôme Fenal wrote:
>
>> When you say easy to upload a .po onto another branch I assume you
>> don't mean branch (TX has no such concept) but rather another TX
>> resource. Anyway this is good to know, perhaps the way TX handles
>> versions is not half as bad as it would appear.
>>
>>
>> You're right. See how anaconda is organized, for instance:
>> https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059
>>
>
> We follow the same model as anaconda, a new TX resource per version.


Yup.
Minus the frequent updates on master/head resource ipa (and no IPA  3.x
resource, but that is not a problem for IPA, given its fast pace and no
long maintenance on older branches).

-- 
Jérôme Fenal
___
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel

Re: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues

2013-01-11 Thread Rob Crittenden

Rob Crittenden wrote:

Petr Viktorin wrote:

On 01/07/2013 05:42 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

On 01/07/2013 03:09 PM, Rob Crittenden wrote:

Petr Viktorin wrote:

[...]


Works for me, but I have some questions (this is an area I know
little
about).

Can we be 100% sure these certs are always renewed together? Is
certmonger the only possible mechanism to update them?


You raise a good point. If through some mechanism someone replaces
one of
these certs it will cause the script to fail. Some notification of
this
failure will be logged though, and of course, the certs won't be
renewed.

One could conceivably manually renew one of these certificates. It is
probably a very remote possibility but it is non-zero.


Can we be sure certmonger always does the updates in parallel? If it
managed to update the audit cert before starting on the others, we'd
get
no CA restart for the others.


These all get issued at the same time so should expire at the same
time
as well (see problem above). The script will hang around for 10
minutes
waiting for the renewal to complete, then give up.


The certs might take different amounts of time to update, right?
Eventually, the expirations could go out of sync enough for it to
matter.
AFAICS, without proper locking we still get a race condition when the
other certs start being renewed some time (much less than 10 min) after
the audit one:

(time axis goes down)

 audit cert  other cert
 --  --
 certmonger does renew.
   post-renew script starts   .
  check state of other certs: OK  .
 .   certmonger starts renew
  certutil modifies NSS DB  +  certmonger modifies NSS DB  == boom!


This can't happen because we count the # of expected certs and wait
until all are in MONITORING before continuing.


The problem is that they're also in MONITORING before the whole renewal
starts. If the script happens to check just before the state changes
from MONITORING to GENERATING_CSR or whatever, we can get corruption.


The worst that would
happen is the trust wouldn't be set on the audit cert and dogtag
wouldn't be restarted.





The state the system would be in is this:

- audit cert trust not updated, so next restart of CA will fail
- CA is not restarted so will not use updated certificates


And anyway, why does certmonger do renewals in parallel? It seems
that
if it did one at a time, always waiting until the post-renew
script is
done, this patch wouldn't be necessary.



 From what Nalin told me certmonger has some coarse locking such that
renewals in the same NSS database are serialized. As you point
out, it
would be nice to extend this locking to the post renewal scripts. We
can
ask Nalin about it. That would fix the potential corruption issue.
It is
still much nicer to not have to restart dogtag 4 times.



Well, three extra restarts every few years seems like a small price to
pay for robustness.


It is a bit of a problem though because the certs all renew within
seconds so end up fighting over who is restarting dogtag. This can cause
some renewals to go into a failure state to be retried later. This is fine
functionally but makes QE a bit of a pain. You then have to make sure
that renewal is basically done, then restart certmonger and check
everything again, over and over until all the certs are renewed. This is
difficult to automate.


So we need to extend the certmonger lock, and wait until Dogtag is back
up before exiting the script. That way it'd still take longer than 1
restart, but all the renews should succeed.



Right, but older dogtag versions don't have the handy servlet to tell
that the service is actually up and responding. So it is difficult to
tell from tomcat alone whether the CA is actually up and handling requests.



Revised patch that takes advantage of new version of certmonger. 
certmonger-0.65 adds locking from the time renewal begins to the end of 
the post_save_command. This lets us be sure that no other certmonger 
renewals will have the NSS database open in read-write mode.


We need to be sure that tomcat is shut down before we let certmonger 
save the certificate to the NSS database because dogtag opens its 
database read/write and two writers can cause corruption.


rob

From 78c0e087df1d1ad1c04517e70e2e8bbecc62d591 Mon Sep 17 00:00:00 2001
From: Rob Crittenden 
Date: Tue, 2 Dec 2014 13:18:36 -0500
Subject: [PATCH] Use new certmonger locking to prevent NSS database
 corruption.

dogtag opens its NSS database in read/write mode so we need to be very
careful during renewal that we don't also open it up read/write. We
basically need to serialize access to the database. certmonger does the
majority of this work via internal locking from the point where it generates
a new key/submits a renewal through the pre_save and releases the lock after
the post_save command. This lock is held per NSS database so we're sav