[389-users] object with auxiliary object class only?

2014-06-18 Thread Petr Spacek

Hello list,

I have accidentally created an object which has only an auxiliary object class; 
389 allowed me to do it.


version: 389-ds-base-1.3.2.16-20140526081843.fc20.x86_64
from http://copr.fedoraproject.org/coprs/lkrispen/132test/

LDIF I used:

dn: ipaPublicKey=test,cn=keys,cn=sec,cn=dns,dc=ipa,dc=example
changetype: add
objectClass: ipaPublicKeyObject
objectClass: top
ipaPublicKey: test


Schema:
attributeTypes: (2.16.840.1.113730.3.8.11.53 NAME 'ipaPublicKey' DESC 'Public 
key as DER-encoded SubjectPublicKeyInfo (RFC 5280)' EQUALITY octetStringMatch 
SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 X-ORIGIN 'IPA v4' )
objectClasses: (2.16.840.1.113730.3.8.12.24 NAME 'ipaPublicKeyObject' DESC 
'Wrapped public keys' SUP top AUXILIARY MAY ( ipaPublicKey ) X-ORIGIN 'IPA v4' )
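
For reference, RFC 4512 requires every entry to carry exactly one structural object class; an entry with only an auxiliary class (plus the abstract class top) violates the schema. A schema-compliant version of the LDIF above would pair the auxiliary class with a structural one - a sketch only; the choice of 'device' and the added cn value are illustrative assumptions, not part of the original report:

```ldif
dn: ipaPublicKey=test,cn=keys,cn=sec,cn=dns,dc=ipa,dc=example
changetype: add
objectClass: top
# 'device' (RFC 4519, STRUCTURAL, MUST cn) is an arbitrary illustrative choice
objectClass: device
objectClass: ipaPublicKeyObject
cn: test
ipaPublicKey: test
```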


Is this expected, or should I open a ticket about it?

--
Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] Password synchronisation between openldap and AD 2008 R2

2014-01-16 Thread Petr Spacek

On 16.1.2014 17:07, Louis-Marie Plumel wrote:

My environment is 99% Linux and authentication is fully LDAP-based.

You are a lucky man! :-)


For some 30 workstations running Windows, I had to create an AD under 2008
R2. For various reasons, I have to synchronize passwords between LDAP and AD.
Linux users will keep authenticating against LDAP. (Windows users are in LDAP
AND AD, and if they want to change their password, they have to do it in
LDAP. That's why I want to synchronise their passwords between LDAP and AD.)


In that case you can use either 389 password synchronization (which is simpler 
for initial configuration, I guess) or an upcoming version of FreeIPA (v3.4).


=== Beginning of FreeIPA advertisement === :-D

FreeIPA is more heavy-weight, but in the long term it will ease your 
administration of Linux machines.


With FreeIPA, you will have all your users in LDAP (FreeIPA's LDAP server), and 
on the Windows workstation you will specify the username as user@LINUXDOMAIN with 
the password used for LDAP/Kerberos; that combination will allow you to log in.


Nothing will be copied to AD but the authentication request will be routed 
from Windows machine to FreeIPA server, the authentication will happen on the 
Linux side, and the result of authentication will be sent back to the Windows 
machine.


=== End of FreeIPA advertisement === :-D

Have a nice day!

Petr^2 Spacek


LM


2014/1/16 Petr Spacek 


On 16.1.2014 16:55, Louis-Marie Plumel wrote:


OK, I'm going to look at what you sent me. To be sure, can 389DS act as an
intermediary between my two current servers?

Not sure what you mean here.


Can my current LDAP be used by 389DS? I'm sorry for these questions; I'm a
novice in this domain



Could you describe what you are trying to achieve?

What is the use case? Logging in to workstations? To web apps? File sharing
over NFS with a centralized identity store? Something else?

Petr^2 Spacek


  2014/1/16 Rich Megginson 


On 01/16/2014 08:12 AM, Louis-Marie Plumel wrote:


OK, I'm going to look at what you sent me. To be sure, can 389DS act as an
intermediary between my two current servers?

Not sure what you mean here.

I have to keep my current LDAP as the master, and synchronization must
be one-directional (LDAP -> AD).

389 supports one way sync.

   Will users have to change their password?

Yes, unfortunately.


   My goal is that everything will be transparent.

Then you may want to look into IPA with AD cross domain trust as Petr
suggested.

    regards


2014/1/16 Petr Spacek 

  On 16.1.2014 15:59, Rich Megginson wrote:


  On 01/16/2014 07:57 AM, Louis-Marie Plumel wrote:


  Hello,


Currently I work with OpenLDAP.
I've installed an AD 2008 R2. My challenge is to work with both and
synchronise LDAP and AD 2008 R2. After long research on the web, I couldn't
find any information about how to synchronise passwords. That's why I came
here to see whether it's possible with 389 DS.



Yes.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Windows_Sync.html



There is also one completely different option: use a trust between the AD and
Unix domains. It depends on your requirements ...

See
http://www.freeipa.org/page/Trusts

or join mailing list
https://www.redhat.com/mailman/listinfo/freeipa-users



--
Petr^2 Spacek

Re: [389-users] Password synchronisation between openldap and AD 2008 R2

2014-01-16 Thread Petr Spacek

On 16.1.2014 16:55, Louis-Marie Plumel wrote:

OK, I'm going to look at what you sent me. To be sure, can 389DS act as an
intermediary between my two current servers?

Not sure what you mean here.


Can my current LDAP be used by 389DS? I'm sorry for these questions; I'm a
novice in this domain


Could you describe what you are trying to achieve?

What is the use case? Logging in to workstations? To web apps? File sharing over 
NFS with a centralized identity store? Something else?


Petr^2 Spacek


2014/1/16 Rich Megginson 


  On 01/16/2014 08:12 AM, Louis-Marie Plumel wrote:

OK, I'm going to look at what you sent me. To be sure, can 389DS act as an
intermediary between my two current servers?

Not sure what you mean here.

I have to keep my current LDAP as the master, and synchronization must
be one-directional (LDAP -> AD).

389 supports one way sync.

  Will users have to change their password?

Yes, unfortunately.


  My goal is that everything will be transparent.

Then you may want to look into IPA with AD cross domain trust as Petr
suggested.

   regards


2014/1/16 Petr Spacek 


On 16.1.2014 15:59, Rich Megginson wrote:


On 01/16/2014 07:57 AM, Louis-Marie Plumel wrote:


Hello,

Currently I work with OpenLDAP.
I've installed an AD 2008 R2. My challenge is to work with both and
synchronise LDAP and AD 2008 R2. After long research on the web, I couldn't
find any information about how to synchronise passwords. That's why I came
here to see whether it's possible with 389 DS.



Yes.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Windows_Sync.html



There is also one completely different option: use a trust between the AD and
Unix domains. It depends on your requirements ...

See
http://www.freeipa.org/page/Trusts

or join mailing list
https://www.redhat.com/mailman/listinfo/freeipa-users


Re: [389-users] Password synchronisation between openldap and AD 2008 R2

2014-01-16 Thread Petr Spacek

On 16.1.2014 15:59, Rich Megginson wrote:

On 01/16/2014 07:57 AM, Louis-Marie Plumel wrote:

Hello,

Currently I work with OpenLDAP.
I've installed an AD 2008 R2. My challenge is to work with both and
synchronise LDAP and AD 2008 R2. After long research on the web, I couldn't
find any information about how to synchronise passwords. That's why I came
here to see whether it's possible with 389 DS.


Yes.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Windows_Sync.html


There is also one completely different option: use a trust between the AD and
Unix domains. It depends on your requirements ...


See
http://www.freeipa.org/page/Trusts

or join mailing list
https://www.redhat.com/mailman/listinfo/freeipa-users

Have a nice day!

--
Petr^2 Spacek

Re: [389-users] How to specify number of hashing iterations for a password

2014-01-15 Thread Petr Spacek

On 15.1.2014 20:10, Rich Megginson wrote:

On 01/15/2014 11:51 AM, Richard Mixon wrote:

Nathan/Rich,

Thank you both for the responses.

We are using the 389 Directory Server for a pretty isolated situation -
authentication/authorization for external users on an "extranet" type portal
website (it integrates pieces of several different web applications).

We don't really envision (famous last words, I know) using it on a broader
basis.

Rich, I can understand why the pre-hashed passwords cause a lot of
integration points to break. Is there a good alternative that still makes
cracking your passwords prohibitively expensive?


Well, actually, yes - don't use passwords - use client certificate based
authentication . . .


SASL/GSSAPI is the most flexible option. Teach your applications SASL and you 
can use any of

http://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer#SASL_mechanisms

Naturally, some of them have the same problem with plaintext passwords but 
others do not (like GSSAPI - e.g. Kerberos).


Petr^2 Spacek


Nathan, I have a background in C, but do mostly Java these days. I will take
a look at ticket 397 and get back to you if it's something I could work on.
Can you provide me the pointers you were referring to?

Thank you - Richard



On Wed, Jan 15, 2014 at 11:25 AM, Rich Megginson wrote:
On 01/15/2014 10:38 AM, Richard Mixon wrote:

During the bind process, is there any way to tell 389 Directory
Server to hash a plaintext password n (multiple) times before
comparing it to what is stored?

I am trying to implement something similar to what's described in
this article:
http://www.stormpath.com/blog/strong-password-hashing-apache-shiro

Our plan was to use SSHA256 to hash the passwords around
200,000 times before storing them. This would at least slow down any
cracking attempts should someone get access to our directory.

I've read through the documentation on the Red Hat Directory
Server site, including the "Plug-in Guide". Under "5.8 Checking
Passwords" it refers to calling function "slapi_pw_find_sv()" -
looking at the doc for this function it does not look like
hashing multiple times is supported.

Is there some means of doing this that is not obvious to me?


No.



I can certainly do it by rewriting the security plugins for the
various servers (Tomcat, PHP WordPress, etc.) so that they hash
the plaintext password n minus 1 times before issuing the bind -
but I was hoping not to do that.


Use of pre-hashed passwords is strongly discouraged and will break
things like SASL and replication.

Does this have anything to do with
https://fedorahosted.org/389/ticket/397?



I'm relatively new to 389 directory server, but so far quite
happy to have moved to it from another directory server.

Thank you - Richard

-- Richard Mixon



--
Petr^2 Spacek

Re: [389-users] Trigger on modify

2014-01-14 Thread Petr Spacek

On 14.1.2014 19:32, Deas, Jim wrote:

Is there a way to have 389-DS trigger a script when a group is modified?
I.e., update POSIX members on changes to static and dynamic groups?


Maybe the MemberOf plugin does what you want:
http://directory.fedoraproject.org/wiki/MemberOf_Plugin

https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Advanced_Entry_Management.html#groups-cmd-memberof

You can write your own post-operation plug-in if you want to do something special:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Plug-in_Guide/index.html

--
Petr^2 Spacek

Re: [389-users] SSH Public keys

2014-01-10 Thread Petr Spacek

On 10.1.2014 12:06, Conor O'Callaghan wrote:


As an aside, if you're interested in doing Kerberos and LDAP together with
a 389-ds backend you may want to look at the FreeIPA project which handles
a lot of the integration for you. It also supports storing SSH keys.

rob



FreeIPA looks very, very nice indeed, but it doesn't look like it's built
and available for Ubuntu :(


There is an ongoing effort to port it to Debian/Ubuntu. You are more than welcome 
to contact the freeipa-devel list [1] and help us with that.


Have a nice day!

[1] https://www.redhat.com/mailman/listinfo/freeipa-devel

--
Petr^2 Spacek

Re: [389-users] SSH Public keys

2014-01-09 Thread Petr Spacek

Hello Paul!

On 9.1.2014 22:56, Paul Robert Marino wrote:

I agree FreeIPA is a good solution, but it does have limitations.

The one downside is that you lose some flexibility with FreeIPA. For
instance, in places where you may want strict security policy
separation, like a web application farm or a larger enterprise with
many subsidiaries, you may want multiple OUs with different
replication policies and security ACLs; FreeIPA doesn't support that.


Could you elaborate on what you miss in FreeIPA, please? We want to know what we 
are missing, and for which use cases...


Naturally, we can't add missing functionality if nobody tells us what is 
missing and why it is useful! :-)


Thank you for your time.

Petr^2 Spacek


On a side note, neither does the MIT Kerberos V server, strictly
speaking, but you can work around that by running multiple instances on
different ports, or you can use a Heimdal Kerberos V server.


On Thu, Jan 9, 2014 at 3:26 PM, Rob Crittenden wrote:

Jonathan Vaughn wrote:


We use Kerberos, with LDAP (389DS) as our storage backend, which makes
standing up Kerberos servers really easy, and keeps replication in
perfect sync unlike normal Kerberos "replication". Together with SSSD
and sudo-ldap this all makes a pretty powerful combination.

On RHEL/CentOS platforms, install krb5-server-ldap and configure
/etc/krb5.conf accordingly:

[dbmodules]
  REALM = {
    db_library = kldap
    ldap_kerberos_container_dn = "dc=some,dc=container"
    ldap_kdc_dn = "uid=kdc,cn=config"
    ldap_kadmind_dn = "uid=kadmin,cn=config"
    ldap_service_password_file = /var/kerberos/krb5kdc/realm/service.keyfile
    ldap_servers = "ldaps://ldap1.realm ldaps://ldap0.realm ldaps://ldap2.realm"
  }

Of course there's more to it, but you'll have to google the details; I
can't remember them off the top of my head. Create the appropriate LDAP
credentials, of course, as well as the LDAP service.keyfile ...



As an aside, if you're interested in doing Kerberos and LDAP together with a
389-ds backend you may want to look at the FreeIPA project which handles a
lot of the integration for you. It also supports storing SSH keys.

rob




On Thu, Jan 9, 2014 at 12:42 PM, Paul Robert Marino wrote:

Have you considered using Kerberos instead of SSH keys?
It's fairly transparent and doesn't require any patches.


On Thu, Jan 9, 2014 at 1:10 PM, Vesa Alho wrote:

>>> I'm just wondering if anyone has experience storing public keys in 389
>>> Directory Server to allow a user to log in using an SSH key rather than
>>> a password? I am running the server on Ubuntu 13.10 and the client is
>>> Ubuntu 12.04.
>
> Last time I checked it requires a patched openssh-server for Ubuntu. Check
> this: https://marc.waeckerlin.org/computer/blog/ssh_and_ldap
>
> -Vesa



--
Petr^2 Spacek

[389-users] unstable 389 in Fedora 20 - a time for epoch bump?

2013-12-13 Thread Petr Spacek

Hello list,

I'm sorry for nagging you, but the Fedora 20 release day is coming and I have 
experienced serious issues with 389-ds-base-1.3.2.8-1.fc20.x86_64.


https://fedorahosted.org/389/ticket/47629

Would it be worthwhile to build 389-ds-base-1.3.1 for Fedora 20 (and bump the 
epoch in the RPM spec file)? That would give us more time for debugging and testing ...


Have a nice day.

--
Petr^2 Spacek

Re: [389-users] Secondary passwords - like Google's application specific passwords

2013-11-06 Thread Petr Spacek

On 6.11.2013 17:34, Jan Tomasek wrote:

Hello,

Please, does anybody have any idea how to implement this with 389?


According to http://tools.ietf.org/html/rfc4519#section-2.41
the userPassword attribute is multi-valued.

Did you try to add multiple values to the attribute?

I never tried it, so no warranty :-)
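
As an untested sketch of that suggestion (the DN and password value are illustrative; whether multiple values are actually checked at bind time depends on the server's password storage scheme), adding a second value might look like:

```ldif
dn: uid=semik,dc=neco
changetype: modify
add: userPassword
userPassword: phone-email-secret
```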

Petr^2 Spacek


Thanks

Jan

On 11/04/2013 07:40 PM, Jan Tomasek wrote:

Hi,

My question about PAM, libscript... comes from my idea: I would like to
implement secondary passwords in a way very similar to how Google's
application-specific passwords work. [1]

We are using LDAP for centralized user management. Systems providing
services to users authenticate against this LDAP. Users save those
passwords in mail clients, on workstations, on tablets, ... We would
like to give users the option not to store their main password in
their clients. We would like to offer them alternative passwords working
for email, calendar clients and so on, on a specific device. If one of
the devices is compromised, the user only has to revoke the password for
that device.

In short, I want to offer users the possibility to generate secondary
passwords for email and so on. I expect them to create multiple
passwords marked with some nickname, like:
   phone-email
   tablet-email
   phone-calendar
and so on. Those passwords should work with a standard LDAP bind, but not
necessarily on the same suffix and/or server where the primary LDAP is. We
would like to separate the primary LDAP passwords, used for financial and
high-trust applications, from those serving email and calendar.

How to do something like this with 389 DS?

My idea is this:

uid=semik,dc=neco
objectClass: inetOrgPerson
cn: Jan Tomasek
sn: Tomasek
uid: semik
userPassword: {SSHA}...

dc=12345,uid=semik,dc=neco
objectClass: appPassword
dc: 12345
password: some-generated-password1
passwordLabel: phone-email

dc=12395,uid=semik,dc=neco
objectClass: appPassword
dc: 12395
password: some-generated-password2
passwordLabel: tablet-email

dc=12399,uid=semik,dc=neco
objectClass: appPassword
dc: 12399
password: some-generated-password3
passwordLabel: phone-calendar


I tried to implement this as PAM pass-through authentication. It works,
but it is very fragile.

I'm looking for a more robust and faster way. I know it is possible to do
this with a PreOperation plugin, but maybe there is an easier way. Or
maybe someone has already implemented such a plugin.

Any comments? Ideas?


Thanks

[1] https://support.google.com/accounts/answer/185833




--
Petr^2 Spacek

Re: [389-users] Need help configuring fine grained password policy enforcement on RHEL6 using sssd

2013-09-11 Thread Petr Spacek

On 11.9.2013 13:56, Bright, Daniel wrote:

All,

I am in the process of moving away from pam_ldap and on to pam_sss. The basic sssd setup 
is working just fine, user authentication works, getent passwd works, caching is great, 
everything looks like it's working fine except for password policy enforcement. I am 
wondering if there is some sort of password policy overlay I need to use, or a special 
setup of sssd.conf, I tried using "ldap_pwd_policy=shadow" however this doesn't 
allow me to change passwords.

If anyone could help it would be greatly appreciated, I will post a working 
config on my blog after this is done so we can help others too.


I think that sssd-us...@lists.fedorahosted.org would be a better place to post 
your question. The SSSD gurus are there :-)


Have a nice day!

--
Petr^2 Spacek

Re: [389-users] sub-tree synchronization/watching: persistent search questions

2013-06-10 Thread Petr Spacek

Hello Rich,

thank you very much for your time.

On 7.6.2013 18:12, Rich Megginson wrote:

I can see another option:
To implement 389 plugin which will provide (very partial) support for RFC
4533. The idea is to implement only state-less pieces (no cookies) and
return some error when client attempts to use a cookie.


This would also likely use entryUSN for the cookie, internallly.

Yes, that was also my idea, but I don't want to implement the 'state-full
part' of the RFC in all its complexity. Now I'm interested only in
detecting that all existing entries have been read :-)


Sure, but it would be nice to implement the whole syncrepl protocol if you're
going to have to implement it partially anyway.

I definitely agree, but unfortunately I'm tasked with something different,
and this syncRepl episode is only a small piece of the whole story :-)


Sure, but this might be enough motivation for the core 389 team to pick up and
finish syncrepl based on what you started.

Okay, I created a ticket for RFC 4533 implementation:
https://fedorahosted.org/389/ticket/47388

--
Petr^2 Spacek

Re: [389-users] sub-tree synchronization/watching: persistent search questions

2013-06-07 Thread Petr Spacek

On 7.6.2013 16:51, Rich Megginson wrote:

On 06/07/2013 08:44 AM, Petr Spacek wrote:

On 7.6.2013 16:11, Rich Megginson wrote:

On 06/07/2013 05:42 AM, Petr Spacek wrote:

I would like to get opinions from 389 gurus to following problem.

I have an application (a DNS server) which needs to read the content of one
whole sub-tree (cn=dns, dc=test) and keep it synchronized.

The work flow is:
1) Application (DNS server) starts
2) Application reads all existing data out from the sub-tree
3) Application does /something/ with the existing data and starts replying
to application clients
4) Sub-tree has to be kept in sync with LDAP server, i.e. updates from LDAP
server should be incrementally applied to the 'state' inside the application


The problem with persistent search is that it doesn't offer any reliable
'signal' that step (2) has ended. The search just runs indefinitely, and
I can't find any signal that all existing entries have already been read and
that from now on the application will get only Entry Change Notifications.


Basically, I'm looking for something like LDAP syncRepl in refreshAndPersist
mode with no cookie (RFC 4533 section 1.3.2 and section 3.4).


I know that Entry Change Notification from persistent search contains bit
field which denotes if the entry was added/modified/deleted/nothing (i.e.
not modified, just read). Unfortunately, this bit field can't be used for
*reliable* detection that all existing entries were read.


Could this 'hack' work reliably?
1) Start persistent search (in separate application thread), but suspend
result processing.
2) In another application thread, do the normal sub-tree search on the same
sub-tree. Normal search will be started *after* the persistent search.
3) Process all results from normal search first
4) Do /something application specific/
5) Start processing updates from persistent search

In my application I can cope with duplicates, when 'normal' search returned
entry cn=xyz and the persistent search returned the same entry cn=xyz again.


Could you use entryUSN?  For example - keep searching until the entryUSN in
the entry is the same as the global entryUSN, then fallback to persistent
search?


Could you elaborate on that a bit more, please? I'm not sure I understood.
What exactly does 'global' entryUSN mean?
Do you mean the 'lastUSN' value on a particular server?

Yes.

Can it work on a server where modifications are scarce? (Note that I do a
sub-tree search on a subset of the whole database.)

Not sure what you mean.  What difference does it make if modifications are
scarce?  By modifications do you mean adds/mods/modrdn/delete - that is, any
update?


I need to operate on one sub-tree of the database, not the whole database. I 
think that for this reason I can't rely on the sub-tree search encountering 
entryUSN == lastUSN.


This will never happen if 'my' sub-tree wasn't the last part of the database 
to be modified, right? (That is why I spoke about 'scarce' updates; and yes, 
update = any modification in the given sub-tree.)


Did I misunderstand something?


I considered normal search followed by persistent search with entryUSN
filter, but IMHO it will not work with entry deletion.

For example:
1) Start normal search and request entryUSN attribute (among others)
2) Process all results from search and compute max(entryUSN)
3) Start persistent search with filter (entryUSN > computedMaxValue)

I can see the race condition if an entry is deleted between steps (2) and (3).

That is exactly what I tried to solve with 'parallel' searches, i.e.
effectively avoid any time gap between steps (2) and (3).


I'm not sure what difference it makes if the update is a deletion or not, but
yes, there is a race condition.




Of course, I could read entryUSN during normal and persistent search and
then skip all results from persistent search with entryUSN <
computedMaxValue. Is that what you meant?


Yes.


Anyway, do you think that the approach with 'normal & persistent searches in 
parallel' is enough to avoid the race condition? I.e. Does it prevent me from 
missing any update? (Let's suppose that duplicate-detection is solved :-))



I can see another option:
To implement 389 plugin which will provide (very partial) support for RFC
4533. The idea is to implement only state-less pieces (no cookies) and
return some error when client attempts to use a cookie.


This would also likely use entryUSN for the cookie, internallly.

Yes, that was also my idea, but I don't want to implement the 'state-full
part' of the RFC in all its complexity. Now I'm interested only in
detecting that all existing entries have been read :-)


Sure, but it would be nice to implement the whole syncrepl protocol if you're
going to have to implement it partially anyway.
I definitely agree, but unfortunately, I'm tasked with something 

Re: [389-users] sub-tree synchronization/watching: persistent search questions

2013-06-07 Thread Petr Spacek

On 7.6.2013 16:11, Rich Megginson wrote:

On 06/07/2013 05:42 AM, Petr Spacek wrote:

I would like to get opinions from 389 gurus to following problem.

I have an application (a DNS server) which needs to read the content of one
whole sub-tree (cn=dns, dc=test) and keep it synchronized.

The work flow is:
1) Application (DNS server) starts
2) Application reads all existing data out from the sub-tree
3) Application does /something/ with the existing data and starts replying
to application clients
4) Sub-tree has to be kept in sync with LDAP server, i.e. updates from LDAP
server should be incrementally applied to the 'state' inside the application


The problem with persistent search is that it doesn't offer any reliable
'signal' that step (2) has ended. The search just runs indefinitely, and
I can't find any signal that all existing entries have already been read and
that from now on the application will get only Entry Change Notifications.


Basically, I'm looking for something like LDAP syncRepl in refreshAndPersist
mode with no cookie (RFC 4533 section 1.3.2 and section 3.4).


I know that Entry Change Notification from persistent search contains bit
field which denotes if the entry was added/modified/deleted/nothing (i.e.
not modified, just read). Unfortunately, this bit field can't be used for
*reliable* detection that all existing entries were read.


Could this 'hack' work reliably?
1) Start persistent search (in separate application thread), but suspend
result processing.
2) In another application thread, do the normal sub-tree search on the same
sub-tree. Normal search will be started *after* the persistent search.
3) Process all results from normal search first
4) Do /something application specific/
5) Start processing updates from persistent search

In my application I can cope with duplicates, when 'normal' search returned
entry cn=xyz and the persistent search returned the same entry cn=xyz again.


Could you use entryUSN?  For example - keep searching until the entryUSN in
the entry is the same as the global entryUSN, then fallback to persistent 
search?


Could you elaborate on that a bit more, please? I'm not sure I understood.
What exactly does 'global' entryUSN mean?
Do you mean the 'lastUSN' value on a particular server?
Can it work on a server where modifications are scarce? (Note that I do a 
sub-tree search on a subset of the whole database.)


I considered normal search followed by persistent search with entryUSN filter, 
but IMHO it will not work with entry deletion.


For example:
1) Start normal search and request entryUSN attribute (among others)
2) Process all results from search and compute max(entryUSN)
3) Start persistent search with filter (entryUSN > computedMaxValue)

I can see the race condition if an entry is deleted between steps (2) and (3).

That is exactly what I tried to solve with 'parallel' searches, i.e. 
effectively avoid any time gap between steps (2) and (3).



Of course, I could read entryUSN during normal and persistent search and then 
skip all results from persistent search with entryUSN < computedMaxValue. Is 
that what you meant?
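
The skip-by-entryUSN bookkeeping just described can be sketched as a pure function (illustrative only; entries are modeled as (dn, entryUSN) pairs, independent of any LDAP client API, and this does not fix the deletion race discussed above):

```python
def merge_updates(snapshot, updates):
    """snapshot: list of (dn, entryUSN) pairs from the normal search.
    updates: iterable of (dn, entryUSN) pairs from the persistent search,
    possibly overlapping the snapshot. Yields only genuinely new changes."""
    # Highest USN already seen during the initial (normal) search.
    max_usn = max((usn for _, usn in snapshot), default=-1)
    for dn, usn in updates:
        if usn <= max_usn:
            continue  # already delivered by the normal search - skip duplicate
        yield dn, usn

# Example: cn=a was seen by both searches, cn=c is a real new change.
snap = [("cn=a", 10), ("cn=b", 12)]
ups = [("cn=a", 10), ("cn=c", 13)]
new_changes = list(merge_updates(snap, ups))  # only cn=c survives
```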




I can see another option:
To implement 389 plugin which will provide (very partial) support for RFC
4533. The idea is to implement only state-less pieces (no cookies) and
return some error when client attempts to use a cookie.


This would also likely use entryUSN for the cookie, internallly.
Yes, that was also my idea, but I don't want to implement the 'state-full 
part' of the RFC in all its complexity. Now I'm interested only in detecting 
that all existing entries have been read :-)



Could somebody judge how difficult it would be? From my (naive) point of view,
the state-less parts of RFC 4533 are just 'persistent search encapsulated in
other LDAP controls'.


Thank you very much for your time.

--
Petr^2 Spacek

[389-users] sub-tree synchronization/watching: persistent search questions

2013-06-07 Thread Petr Spacek

Hello list,

I would like to get opinions from 389 gurus to following problem.

I have an application (a DNS server) which needs to read the content of one 
whole sub-tree (cn=dns, dc=test) and keep it synchronized.


The work flow is:
1) Application (DNS server) starts
2) Application reads all existing data out from the sub-tree
3) Application does /something/ with the existing data and starts replying to 
application clients
4) Sub-tree has to be kept in sync with LDAP server, i.e. updates from LDAP 
server should be incrementally applied to the 'state' inside the application



The problem with persistent search is that it doesn't offer any reliable 
'signal' that step (2) has ended. The search just runs indefinitely, and I 
can't find any signal that all existing entries have already been read and 
that from now on the application will get only Entry Change Notifications.



Basically, I'm looking for something like LDAP syncRepl in refreshAndPersist 
mode with no cookie (RFC 4533 section 1.3.2 and section 3.4).



I know that Entry Change Notification from persistent search contains bit 
field which denotes if the entry was added/modified/deleted/nothing (i.e. not 
modified, just read). Unfortunately, this bit field can't be used for 
*reliable* detection that all existing entries were read.



Could this 'hack' work reliably?
1) Start persistent search (in separate application thread), but suspend 
result processing.
2) In another application thread, do the normal sub-tree search on the same 
sub-tree. Normal search will be started *after* the persistent search.

3) Process all results from normal search first
4) Do /something application specific/
5) Start processing updates from persistent search

In my application I can cope with duplicates, e.g. when the 'normal' search 
returned entry cn=xyz and the persistent search returned the same entry cn=xyz again.
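The duplicate-tolerant merge this relies on can be sketched in Python (names are illustrative; real code would receive these results from an LDAP client library on two threads):

```python
# Sketch of the two-phase synchronization described above.
# 'initial_entries' stands for the results of the normal sub-tree search;
# change notifications come from the persistent search that was started
# *before* the normal search.

def build_state(initial_entries):
    """Step 3: load the snapshot returned by the normal search."""
    return {dn: attrs for dn, attrs in initial_entries}

def apply_notification(state, change_type, dn, attrs=None):
    """Step 5: apply one persistent-search update idempotently, so a
    duplicate ADD for an entry already in the snapshot is harmless."""
    if change_type in ("add", "modify"):
        state[dn] = attrs          # overwrite == tolerate duplicates
    elif change_type == "delete":
        state.pop(dn, None)        # deleting a missing entry is a no-op
    return state

state = build_state([("cn=xyz,cn=dns", {"a": 1})])
# the persistent search delivers cn=xyz again - the duplicate is absorbed
apply_notification(state, "add", "cn=xyz,cn=dns", {"a": 1})
apply_notification(state, "delete", "cn=gone,cn=dns")
```

The ordering guarantee (persistent search started first, updates applied last) is what makes the overwrite-on-ADD semantics safe.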



I can see another option:
To implement a 389 plugin which would provide (very partial) support for RFC 
4533. The idea is to implement only the state-less pieces (no cookies) and return 
an error when a client attempts to use a cookie.


Could somebody judge how difficult this would be? From my (naive) point of view, 
the state-less parts of RFC 4533 are just 'persistent search encapsulated in 
other LDAP controls'.



Thank you very much for your time.

--
Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] FreeIPA (was: adding ssh public keys to 389)

2013-05-09 Thread Petr Spacek

Hello,

On 8.5.2013 20:53, Steve Ovens wrote:

Hi,

I have been quite happily using 389 for around a year, and while I am not
at all advanced I have been able to add Samba and sudoers to 389. I am now
attempting to add OpenSSH keys to the user entries. I am using the
openssh-lpk_openldap.schema:


IMHO the FreeIPA project could help you a lot. It contains pre-baked solutions 
for common problems like central management of sudoers and ssh 
authorized_keys, including management utilities (with CLI, WebUI, XML RPC, 
JSON RPC).


Homepage: http://freeipa.org/
Users list: http://www.redhat.com/mailman/listinfo/freeipa-users

The home page is undergoing a major redesign at the moment, because it is a 
bit confusing to newcomers. I would recommend asking on the freeipa-users list if 
you can't find what you are looking for.


And BTW - FreeIPA is based on 389 DS.

I'm sorry for the advertisement.

Petr^2 Spacek


#
# LDAP Public Key Patch schema for use with openssh-ldappubkey
# Author: Eric AUGE 
#
# Based on the proposal of : Mark Ruijter
#


# octetString SYNTAX
attributetype ( 1.3.6.1.4.1.24552.500.1.1.1.13 NAME 'sshPublicKey'
 DESC 'MANDATORY: OpenSSH Public key'
 EQUALITY octetStringMatch
 SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 )

# printableString SYNTAX yes|no
objectclass ( 1.3.6.1.4.1.24552.500.1.1.2.0 NAME 'ldapPublicKey' SUP top
AUXILIARY
 DESC 'MANDATORY: OpenSSH LPK objectclass'
 MAY ( sshPublicKey $ uid )
 )


I have run this through ol-schema-migrate.pl and placed the output in
/etc/dirsrv/slapd-ds/schema/62sshkeys.ldif.

I have restarted the server and there were no errors produced so I am
assuming that it took the ldif fine.

How do I go about using the new schema? I have googled around quite a bit,
but I must be missing something. I would appreciate any pointers and I
fully intend on publishing a how-to on this (as I do for most things over
at overclockers.com).

Any guidance would be greatly appreciated!
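For completeness: once a schema file like the one above is loaded, the auxiliary class is attached to individual user entries with ldapmodify. A hypothetical example (the DN and key value are placeholders):

```
dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
add: objectClass
objectClass: ldapPublicKey
-
add: sshPublicKey
sshPublicKey: ssh-rsa AAAAB3Nza...placeholder jdoe@example.com
```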

--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] Can i use Same Certificate for all my ldap server

2013-04-17 Thread Petr Spacek

On 16.4.2013 23:10, Kyle Flavin wrote:

On Tue, Apr 16, 2013 at 2:04 PM, Rob Crittenden  wrote:


expert alert wrote:


Hi
I am planning to deploy all my LDAP servers with Puppet,
so I am wondering: can I use the same server certificate and CA certificate
(Directory Server) for all my servers?

If yes, then under which directory shall I place those certificates?


Although it is technically possible, it is not recommended.

All servers will share the same private key, so the chance that the key will 
be compromised is higher - you need to transfer the key securely from one 
server to another, etc.


Could you explain your use case? I'm curious :-)

--
Petr Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] replication with some attributes excluded leads to schema violation

2013-01-14 Thread Petr Spacek

On 11.1.2013 22:00, Rich Megginson wrote:

On 01/11/2013 10:07 AM, Petr Spacek wrote:

On 11.1.2013 17:05, Petr Spacek wrote:

On 11.1.2013 16:22, Rich Megginson wrote:

On 01/11/2013 08:13 AM, Petr Spacek wrote:

On 11.1.2013 15:54, Rich Megginson wrote:

On 01/11/2013 06:26 AM, Petr Spacek wrote:

Hello 389 users and developers,

I would be very happy if somebody could give me any advice about "the right
way" to solve this problem:

I have the following objectClass in the schema:
objectClasses: ( 2.16.840.1.113730.3.8.6.1 NAME 'idnsZone' DESC 'Zone
class'
SUP idnsRecord STRUCTURAL MUST ( idnsName $ idnsZoneActive $
idnsSOAmName $
idnsSOArName $ idnsSOAserial $ idnsSOArefresh $ idnsSOAretry $
idnsSOAexpire
$ idnsSOAminimum ) MAY ( idnsUpdatePolicy $ idnsAllowQuery $
idnsAllowTransfer $ idnsAllowSyncPTR $ idnsForwardPolicy $ idnsForwarders
) )

Please note MUST attribute idnsSOAserial.

I have two 389 servers on RHEL 6.4 with the same schema:
389-ds-base-1.2.11.15-10.el6.x86_64

There is a multi-master replication agreement between machines vm-115<->vm-042.

Attribute idnsSOAserial is excluded from incremental replication (export
from vm-042):
cn=meTovm-115,cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config

idnsSOAserial: (objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


The list above with proper attribute name looks like:

nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE memberof
idnssoaserial entryusn krblastsuccessfulauth krblastfailedauth
krbloginfailedcount


nsDS5ReplicatedAttributeListTotal: (objectclass=*) $ EXCLUDE entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


Now I create a new object with objectClass idnsZone on vm-042. The new
object is replicated to vm-115, but the attribute idnsSOAserial is
missing -
and this fact violates the schema.


Is it expected behaviour?


Yes.  Since idnssoaserial is excluded from incremental (I think there is a
typo above - idnsSOAserial: (objectclass=*) $ EXCLUDE is not correct),
it is
excluded from the replicated ADD operation.

Heh, that is my copy & paste fail :-)


What did I misunderstand?


Nothing?



Which approach do you recommend to application developers for dealing with
such a situation?


So you would like idnsSOAserial to be included for replicated ADD
operations
but excluded for replicated MOD operations?


It seems like the best option to me, but I'm open to any other proposal which
solves this problem.


Why don't you want idnsSOAserial to be replicated for MOD operations?


There could potentially be high update traffic (from multiple servers) and we
want to avoid replication conflicts. I will dig up the design document for the
feature using this attribute.


"Design e-mails" follow, hopefully they are complete enough to illustrate
what is going on:

The problem statement:
https://www.redhat.com/archives/freeipa-devel/2012-April/msg00222.html

Last proposed solution with non-replicated idnsSOAserial attribute:
https://www.redhat.com/archives/freeipa-devel/2012-May/msg00047.html


So idnsSOAserial is a "local only" attribute that lives in a "global" entry.

I agree that this is an attribute that should be managed internally by the
directory server, like the entryusn attribute.

Unfortunately, since it is a required attribute, you must specify it in an
LDAP ADD operation.

I don't really see an easy way to do this without the ability to allow
replication for ADD operations and disallow for MOD operations.  Please file a
389 ticket.


Thank you for the information! We will investigate entryUSN properties and file 
the ticket if we find entryUSN to be an insufficient replacement.



Petr^2 Spacek


I don't like the approach where the application has to go to *all* DSs to
initialize the excluded attribute, because:
What should the application do if it's unable to connect to one of the DSs?

What if rollback is impossible? (E.g. the attribute was initialized on
replica1 and replica2, but the link from the application to the world failed
before replica3 was initialized.)

Would it be possible to configure DS to replicate the attribute when an object
is created but not replicate further changes? It would defer all the problems
above to the DS replication mechanism and simplify applications :-)

Thank you for your time!

--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] replication with some attributes excluded leads to schema violation

2013-01-11 Thread Petr Spacek

On 11.1.2013 17:05, Petr Spacek wrote:

On 11.1.2013 16:22, Rich Megginson wrote:

On 01/11/2013 08:13 AM, Petr Spacek wrote:

On 11.1.2013 15:54, Rich Megginson wrote:

On 01/11/2013 06:26 AM, Petr Spacek wrote:

Hello 389 users and developers,

I would be very happy if somebody could give me any advice about "the right
way" to solve this problem:

I have the following objectClass in the schema:
objectClasses: ( 2.16.840.1.113730.3.8.6.1 NAME 'idnsZone' DESC 'Zone class'
SUP idnsRecord STRUCTURAL MUST ( idnsName $ idnsZoneActive $ idnsSOAmName $
idnsSOArName $ idnsSOAserial $ idnsSOArefresh $ idnsSOAretry $ idnsSOAexpire
$ idnsSOAminimum ) MAY ( idnsUpdatePolicy $ idnsAllowQuery $
idnsAllowTransfer $ idnsAllowSyncPTR $ idnsForwardPolicy $ idnsForwarders
) )

Please note MUST attribute idnsSOAserial.

I have two 389 servers on RHEL 6.4 with the same schema:
389-ds-base-1.2.11.15-10.el6.x86_64

There is a multi-master replication agreement between machines vm-115<->vm-042.

Attribute idnsSOAserial is excluded from incremental replication (export
from vm-042):
cn=meTovm-115,cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config

idnsSOAserial: (objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


The list above with proper attribute name looks like:

nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE memberof
idnssoaserial entryusn krblastsuccessfulauth krblastfailedauth
krbloginfailedcount


nsDS5ReplicatedAttributeListTotal: (objectclass=*) $ EXCLUDE entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


Now I create a new object with objectClass idnsZone on vm-042. The new
object is replicated to vm-115, but the attribute idnsSOAserial is missing -
and this fact violates the schema.


Is it expected behaviour?


Yes.  Since idnssoaserial is excluded from incremental (I think there is a
typo above - idnsSOAserial: (objectclass=*) $ EXCLUDE is not correct), it is
excluded from the replicated ADD operation.

Heh, that is my copy & paste fail :-)


What did I misunderstand?


Nothing?



Which approach do you recommend to application developers for dealing with such
a situation?


So you would like idnsSOAserial to be included for replicated ADD operations
but excluded for replicated MOD operations?


It seems like the best option to me, but I'm open to any other proposal which
solves this problem.


Why don't you want idnsSOAserial to be replicated for MOD operations?


There could potentially be high update traffic (from multiple servers) and we
want to avoid replication conflicts. I will dig up the design document for the
feature using this attribute.


"Design e-mails" follow, hopefully they are complete enough to illustrate what 
is going on:


The problem statement:
https://www.redhat.com/archives/freeipa-devel/2012-April/msg00222.html

Last proposed solution with non-replicated idnsSOAserial attribute:
https://www.redhat.com/archives/freeipa-devel/2012-May/msg00047.html


Petr^2 Spacek


I don't like the approach where the application has to go to *all* DSs to
initialize the excluded attribute, because:
What should the application do if it's unable to connect to one of the DSs?

What if rollback is impossible? (E.g. the attribute was initialized on replica1
and replica2, but the link from the application to the world failed before
replica3 was initialized.)

Would it be possible to configure DS to replicate the attribute when an object
is created but not replicate further changes? It would defer all the problems
above to the DS replication mechanism and simplify applications :-)

Thank you for your time!

--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] replication with some attributes excluded leads to schema violation

2013-01-11 Thread Petr Spacek

On 11.1.2013 16:22, Rich Megginson wrote:

On 01/11/2013 08:13 AM, Petr Spacek wrote:

On 11.1.2013 15:54, Rich Megginson wrote:

On 01/11/2013 06:26 AM, Petr Spacek wrote:

Hello 389 users and developers,

I would be very happy if somebody could give me any advice about "the right
way" to solve this problem:

I have the following objectClass in the schema:
objectClasses: ( 2.16.840.1.113730.3.8.6.1 NAME 'idnsZone' DESC 'Zone class'
SUP idnsRecord STRUCTURAL MUST ( idnsName $ idnsZoneActive $ idnsSOAmName $
idnsSOArName $ idnsSOAserial $ idnsSOArefresh $ idnsSOAretry $ idnsSOAexpire
$ idnsSOAminimum ) MAY ( idnsUpdatePolicy $ idnsAllowQuery $
idnsAllowTransfer $ idnsAllowSyncPTR $ idnsForwardPolicy $ idnsForwarders ) )

Please note MUST attribute idnsSOAserial.

I have two 389 servers on RHEL 6.4 with the same schema:
389-ds-base-1.2.11.15-10.el6.x86_64

There is a multi-master replication agreement between machines vm-115<->vm-042.

Attribute idnsSOAserial is excluded from incremental replication (export
from vm-042):
cn=meTovm-115,cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config

idnsSOAserial: (objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


The list above with proper attribute name looks like:

nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE memberof
idnssoaserial entryusn krblastsuccessfulauth krblastfailedauth
krbloginfailedcount


nsDS5ReplicatedAttributeListTotal: (objectclass=*) $ EXCLUDE entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


Now I create a new object with objectClass idnsZone on vm-042. The new
object is replicated to vm-115, but the attribute idnsSOAserial is missing -
and this fact violates the schema.


Is it expected behaviour?


Yes.  Since idnssoaserial is excluded from incremental (I think there is a
typo above - idnsSOAserial: (objectclass=*) $ EXCLUDE is not correct), it is
excluded from the replicated ADD operation.

Heh, that is my copy & paste fail :-)


What did I misunderstand?


Nothing?



Which approach do you recommend to application developers for dealing with such
a situation?


So you would like idnsSOAserial to be included for replicated ADD operations
but excluded for replicated MOD operations?


It seems like the best option to me, but I'm open to any other proposal which
solves this problem.


Why don't you want idnsSOAserial to be replicated for MOD operations?


There could potentially be high update traffic (from multiple servers) and we 
want to avoid replication conflicts. I will dig up the design document for the 
feature using this attribute.



Petr^2 Spacek


I don't like the approach where the application has to go to *all* DSs to
initialize the excluded attribute, because:
What should the application do if it's unable to connect to one of the DSs?

What if rollback is impossible? (E.g. the attribute was initialized on replica1
and replica2, but the link from the application to the world failed before
replica3 was initialized.)

Would it be possible to configure DS to replicate the attribute when an object
is created but not replicate further changes? It would defer all the problems
above to the DS replication mechanism and simplify applications :-)

Thank you for your time!



--
Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] replication with some attributes excluded leads to schema violation

2013-01-11 Thread Petr Spacek

On 11.1.2013 15:54, Rich Megginson wrote:

On 01/11/2013 06:26 AM, Petr Spacek wrote:

Hello 389 users and developers,

I would be very happy if somebody could give me any advice about "the right
way" to solve this problem:

I have the following objectClass in the schema:
objectClasses: ( 2.16.840.1.113730.3.8.6.1 NAME 'idnsZone' DESC 'Zone class'
SUP idnsRecord STRUCTURAL MUST ( idnsName $ idnsZoneActive $ idnsSOAmName $
idnsSOArName $ idnsSOAserial $ idnsSOArefresh $ idnsSOAretry $ idnsSOAexpire
$ idnsSOAminimum ) MAY ( idnsUpdatePolicy $ idnsAllowQuery $
idnsAllowTransfer $ idnsAllowSyncPTR $ idnsForwardPolicy $ idnsForwarders ) )

Please note MUST attribute idnsSOAserial.

I have two 389 servers on RHEL 6.4 with the same schema:
389-ds-base-1.2.11.15-10.el6.x86_64

There is a multi-master replication agreement between machines vm-115<->vm-042.

Attribute idnsSOAserial is excluded from incremental replication (export
from vm-042):
cn=meTovm-115,cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config

idnsSOAserial: (objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


The list above with proper attribute name looks like:

nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE memberof idnssoaserial 
entryusn krblastsuccessfulauth krblastfailedauth krbloginfailedcount



nsDS5ReplicatedAttributeListTotal: (objectclass=*) $ EXCLUDE entryusn
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


Now I create a new object with objectClass idnsZone on vm-042. The new
object is replicated to vm-115, but the attribute idnsSOAserial is missing -
and this fact violates the schema.


Is it expected behaviour?


Yes.  Since idnssoaserial is excluded from incremental (I think there is a
typo above - idnsSOAserial: (objectclass=*) $ EXCLUDE is not correct), it is
excluded from the replicated ADD operation.

Heh, that is my copy & paste fail :-)


What did I misunderstand?


Nothing?



Which approach do you recommend to application developers for dealing with such
a situation?


So you would like idnsSOAserial to be included for replicated ADD operations
but excluded for replicated MOD operations?


It seems like the best option to me, but I'm open to any other proposal which 
solves this problem.


Petr^2 Spacek


I don't like the approach where the application has to go to *all* DSs to
initialize the excluded attribute, because:
What should the application do if it's unable to connect to one of the DSs?

What if rollback is impossible? (E.g. the attribute was initialized on replica1
and replica2, but the link from the application to the world failed before
replica3 was initialized.)

Would it be possible to configure DS to replicate the attribute when an object
is created but not replicate further changes? It would defer all the problems
above to the DS replication mechanism and simplify applications :-)

Thank you for your time!


I'm not subscribed to this list, please include me in Cc explicitly.


--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

[389-users] replication with some attributes excluded leads to schema violation

2013-01-11 Thread Petr Spacek

Hello 389 users and developers,

I would be very happy if somebody could give me any advice about "the right 
way" to solve this problem:


I have the following objectClass in the schema:
objectClasses: ( 2.16.840.1.113730.3.8.6.1 NAME 'idnsZone' DESC 'Zone class' 
SUP idnsRecord STRUCTURAL MUST ( idnsName $ idnsZoneActive $ idnsSOAmName $ 
idnsSOArName $ idnsSOAserial $ idnsSOArefresh $ idnsSOAretry $ idnsSOAexpire $ 
idnsSOAminimum ) MAY ( idnsUpdatePolicy $ idnsAllowQuery $ idnsAllowTransfer $ 
idnsAllowSyncPTR $ idnsForwardPolicy $ idnsForwarders ) )


Please note MUST attribute idnsSOAserial.

I have two 389 servers on RHEL 6.4 with the same schema:
389-ds-base-1.2.11.15-10.el6.x86_64

There is a multi-master replication agreement between machines vm-115<->vm-042.

Attribute idnsSOAserial is excluded from incremental replication (export from 
vm-042):

cn=meTovm-115,cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config

idnsSOAserial: (objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn 
krblastsuccessfulauth krblastfailedauth krbloginfailedcount


nsDS5ReplicatedAttributeListTotal: (objectclass=*) $ EXCLUDE entryusn 
krblastsuccessfulauth krblastfailedauth krbloginfailedcount



Now I create a new object with objectClass idnsZone on vm-042. The new object 
is replicated to vm-115, but the attribute idnsSOAserial is missing - and this 
fact violates the schema.



Is it expected behaviour?

What did I misunderstand?

Which approach do you recommend to application developers for dealing with such 
a situation?


I don't like the approach where the application has to go to *all* DSs to 
initialize the excluded attribute, because:

What should the application do if it's unable to connect to one of the DSs?

What if rollback is impossible? (E.g. the attribute was initialized on replica1 
and replica2, but the link from the application to the world failed before replica3 
was initialized.)

Would it be possible to configure DS to replicate the attribute when an object is 
created but not replicate further changes? It would defer all the problems above 
to the DS replication mechanism and simplify applications :-)


Thank you for your time!


I'm not subscribed to this list, please include me in Cc explicitly.

--
Petr Spacek
Software engineer
Red Hat Czech, BRQ
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

[389-users] subtree stacking/subtree virtual views

2012-05-22 Thread Petr Spacek

Hello,

I'm looking for a way to "stack" LDAP sub-trees one on top of another.

What I mean: Let's have two subtrees:
dc=lower  and  dc=upper

dc=lower contains objects:
cn=obj1
cn=obj2,attr A = 2
cn=obj3


dc=upper contains objects:
cn=obj2,attr A = 4
cn=obj4


Now I push dc=upper on top of dc=lower (let's say it creates dc=stack).
Queries with base dc=stack will return:
cn=obj1 --> same object as in dc=lower
cn=obj2 --> same object as in dc=upper, attr A = 4
cn=obj3 --> same object as in dc=lower
cn=obj4 --> same object as in dc=upper

I saw overlays "relay" and "rwm" in OpenLDAP. Is there any support in 389 for 
this use case?



I need to override several records from the "lower" subtree with objects from 
the "upper" subtree. The problem is that subtree "lower" can contain 10 000 objects 
and I need to override only 5 of them. I'm searching for an efficient way to 
accomplish this without copying the whole subtree "lower" to "upper".



Thanks for your time.

Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] bypassing limits for persistent search and specific user

2012-03-14 Thread Petr Spacek

Hello,

On 03/14/2012 12:16 AM, Nathan Kinder wrote:

On 03/13/2012 04:09 PM, Petr Spacek wrote:

Hello list,

I'm looking for a way to bypass nsslapd-sizelimit and
nsslapd-timelimit for a persistent search made by a specific user (or
anything made by that user).


... snip ...

On 03/14/2012 12:16 AM, Nathan Kinder wrote:

On 03/13/2012 04:09 PM, Petr Spacek wrote:

 Is it possible to bypass limits for this connection/user

I think setting the limits based on your bind DN should work.


I did some testing and converged on this setting:
nsIdleTimeout, nsLookThroughLimit, nsSizeLimit, nsTimeLimit set to -1, 
so that the limits are disabled for the specific user.
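For the record, these are operational attributes set directly on the bind DN's entry; an example ldapmodify input (the DN is a placeholder for the DNS-sync user):

```
dn: uid=dns-sync,cn=sysaccounts,cn=etc,dc=example,dc=com
changetype: modify
replace: nsIdleTimeout
nsIdleTimeout: -1
-
replace: nsLookThroughLimit
nsLookThroughLimit: -1
-
replace: nsSizeLimit
nsSizeLimit: -1
-
replace: nsTimeLimit
nsTimeLimit: -1
```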


Is there any potential problem with this, if the user is trusted? (It's an LDAP 
server <-> DNS server "pipe".)

Are there some limits which should not be bypassed? :-)

The expected use case has a 1 LDAP to 1 DNS ratio.


Thanks for your time.


Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

Re: [389-users] bypassing limits for persistent search and specific user

2012-03-13 Thread Petr Spacek

On 03/14/2012 12:16 AM, Nathan Kinder wrote:

On 03/13/2012 04:09 PM, Petr Spacek wrote:

Hello list,

I'm looking for a way to bypass nsslapd-sizelimit and
nsslapd-timelimit for a persistent search made by a specific user (or
anything made by that user).

Please, can you point me to the right place in the documentation about
persistent search / user-specific settings in 389? I googled for a
while, but I can't find the exact way to accomplish this.

You can set user-based limits as shown here:

http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/User_Account_Management-Setting_Resource_Limits_Based_on_the_Bind_DN.html#Setting_Resource_Limits_Based_on_the_Bind_DN-Setting_Resource_Limits_Using_the_Command_Line



I found attributes nsSizeLimit and nsTimeLimit in
http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html-single/Schema_Reference/index.html#nsPagedSizeLimit
, but I'm not sure how to deploy them.


If bypassing is not possible in 389:
Is there any way to enumerate all records from a given subtree
part-by-part? (My guess: VLV or something similar.)

There is VLV, and there is also simple-paged results. Both are methods
that can be used to enumerate through search results in chunks. VLV
requires explicit configuration of a VLV index for the exact search that
you want to perform ahead of time. Simple-paged results can be used with
any search. Here are some details on using simple-paged results:

http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/using-simple-paged-results.html



I know only the basics of persistent search and next to nothing about
VLV, so sorry if I'm completely wrong.


--- Background / why I needed this / long story ---
FreeIPA project has LDAP plugin for BIND. This plugin pulls DNS
records from LDAP database and populates BIND's internal memory with
them. (Homepage: https://fedorahosted.org/bind-dyndb-ldap/)

This plugin can use persistent search, which enables reflecting
changes in LDAP inside BIND immediately.

At the moment, after startup the plugin does a persistent search for all DNS
records. This single query can return tens of thousands of records - and
of course fails, because nsslapd-sizelimit stops it.

Another problem arises with databases smaller than the sizelimit - the query
is ended after the timelimit and has to be re-established. This leads to
periodically re-downloading the whole DNS DB.

The question is:
Is it possible to bypass the limits for this connection/user

I think setting the limits based on your bind DN should work.

-NGK

OR
is the plugin completely broken by design?


Thanks for your time.

Petr^2 Spacek @ Red Hat @ Brno office


Absolutely perfect! Thanks a lot for immediate response.

Petr^2 Spacek
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users

[389-users] bypassing limits for persistent search and specific user

2012-03-13 Thread Petr Spacek

Hello list,

I'm looking for a way to bypass nsslapd-sizelimit and 
nsslapd-timelimit for a persistent search made by a specific user (or 
anything made by that user).


Please, can you point me to the right place in the documentation about 
persistent search / user-specific settings in 389? I googled for a while, 
but I can't find the exact way to accomplish this.


I found attributes nsSizeLimit and nsTimeLimit in 
http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html-single/Schema_Reference/index.html#nsPagedSizeLimit 
, but I'm not sure how to deploy them.



If bypassing is not possible in 389:
Is there any way to enumerate all records from a given subtree 
part-by-part? (My guess: VLV or something similar.)


I know only the basics of persistent search and next to nothing about 
VLV, so sorry if I'm completely wrong.



--- Background / why I needed this / long story ---
FreeIPA project has LDAP plugin for BIND. This plugin pulls DNS records 
from LDAP database and populates BIND's internal memory with them. 
(Homepage: https://fedorahosted.org/bind-dyndb-ldap/)


This plugin can use persistent search, which enables reflecting changes 
in LDAP inside BIND immediately.


At the moment, after startup the plugin does a persistent search for all DNS 
records. This single query can return tens of thousands of records - and 
of course fails, because nsslapd-sizelimit stops it.


Another problem arises with databases smaller than the sizelimit - the query is 
ended after the timelimit and has to be re-established. This leads to 
periodically re-downloading the whole DNS DB.


The question is:
 Is it possible to bypass the limits for this connection/user
OR
 is the plugin completely broken by design?


Thanks for your time.

Petr^2 Spacek  @  Red Hat  @  Brno office
--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users