Re: Possible transaction issue with LMDB

2018-08-17 Thread William Brown
On Fri, 2018-08-17 at 07:06 +0100, Howard Chu wrote:
> > I'm quite aware that it is COW - this issue is specific to COW
> > trees.
> > Transactions must be removed in order, they can not be removed out
> > of
> > order. It is about how pages are reclaimed for the freelist
> 
> Incorrect.

We may have to "agree to disagree" then. 

Thanks for your time,

-- 
Sincerely,

William




Re: Possible transaction issue with LMDB

2018-08-16 Thread William Brown
On Fri, 2018-08-17 at 01:34 +0300, Леонид Юрьев wrote:
> Hi, 
> 
> Shortly, no issue.
> 
> LMDB provides MVCC via COW, therefore any read transaction see the
> constant snapshot of DB, which is correspond to last committed write
> transaction at the time when reading was started.
> 
> Any further write transactions will create the new snapshots, which
> will be visible for further reading, but not for read transactions
> that was started before.
> 


Hi,

I'm quite aware that it is COW - this issue is specific to COW trees.
Transactions must be removed in order; they cannot be removed out of
order. It is about how pages are reclaimed for the freelist.

If you have tree state A, you have nodes 1 through 4. We'll say 1 is
the root, and 2 - 4 are the leaves.

We take the read transaction at A.

Now we conduct a write with TXN X. We copy nodes 1 (the root) and 4 to
new nodes 5 (the new root) and 6, make our update, and commit. At this
point A is the "last generation" where the old copies of 1 and 4 exist.

We begin the read transaction at B.

Now we conduct a write with TXN Y. We copy nodes 2, 3 and 5 (the root
from X); 2 and 3 are still in use at A. We now commit.

We begin a read transaction C.

At this point we now close transaction B.

B now clears its resources. Because B is the last location where 2 and
3 were "alive", they are at this point freed - however, they are *still
required* by TXN A for a valid and complete tree.

At this point, TXN A still needs nodes 1 through 4, but nodes 2 and 3
are on the freelist, where they may be reused.

You begin a new write and commit many values. This will overwrite the
content of nodes 2 and 3 (as they are in the free list), causing the
reader at TXN A to perceive invalid data.
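The walkthrough above can be condensed into a toy model. This is an illustrative sketch of the claimed reclamation behaviour, not LMDB's actual code; the class, names, and the freeing rule are all assumptions made for the example.

```python
# Toy model of the hazard described above -- NOT LMDB's real reclamation
# code. Pages retired by a writer are tagged with the newest reader
# snapshot that still sees them, and freed when that reader closes.
class COWModel:
    def __init__(self):
        self.open_readers = set()
        self.retired = {}     # page -> reader snapshot tied to its lifetime
        self.freelist = set()

    def begin_read(self, name):
        self.open_readers.add(name)

    def write_retires(self, pages, last_snapshot):
        # A write txn copies `pages`; the old versions are last visible
        # to `last_snapshot`.
        for p in pages:
            self.retired[p] = last_snapshot

    def close_read(self, name):
        self.open_readers.discard(name)
        # The rule under test: free every page whose lifetime was tied
        # to the reader that just closed -- even if older readers exist.
        for p, owner in list(self.retired.items()):
            if owner == name:
                self.freelist.add(p)
                del self.retired[p]

m = COWModel()
m.begin_read("A")                # snapshot A sees pages {1, 2, 3, 4}
m.write_retires({1, 4}, "A")     # TXN X copies 1,4 into new pages 5,6
m.begin_read("B")                # snapshot B sees {5, 2, 3, 6}
m.write_retires({2, 3, 5}, "B")  # TXN Y copies 2,3 and root 5
m.begin_read("C")
m.close_read("B")                # B closed out of order, before A

# Pages 2 and 3 are now reusable even though reader A still needs them.
assert m.freelist == {2, 3, 5}
assert m.freelist & {1, 2, 3, 4} == {2, 3} and "A" in m.open_readers
```

The model only demonstrates the ordering argument: whether a real engine exhibits it depends entirely on how its reader table and freelist actually interact.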

I have previously seen this with my in-memory-only tree, and I was able
to recreate it with LMDB. I'm not able to release the POC at this time.
The issue arises because transactions *must* be cleared in order from
oldest to newest: shared resources, and knowing when node lifetimes
expire, are fundamental to COW.

I hope that this explains the issue more thoroughly. 


> Regards,
> Leonid.
> 
> 
> 2018-08-16 13:03 GMT+03:00 William Brown :
> > Hi there,
> > 
> > While doing some integration testing of LMDB I noticed that there
> > may
> > be an issue with out of order transaction handling.
> > 
> > The scenario is:
> > 
> > Open Read TXN A
> > Open Write TXN X, and change values of the DB
> > Commit X
> > Open Read TXN B
> > Open Write TXN Y, and change values of the DB
> > Commit Y
> > Open Read TXN C
> > 
> > Abort/Close TXN B.
> > 
> > At this point, because of the page touch between A -> B and B -> C,
> > B
> > now believes that the pages of A are the "last time" they are
> > available
> > as they were all subsequently copied for TXN C. The pages of A are
> > then
> > added to the freelists when B closes. When TXN A is read from the
> > data
> > may have been altered due to future writes as LMDB attempts to use
> > previously allocated pages first. 
> > 
> > This situation is more likely to arise on large batch writes, but
> > could
> > manifest with smaller series of writes. This would be a silent
> > issue,
> > as the over-written pages may be valid, and could cause data to
> > "silently vanish" inside of the read transaction A leading to
> > unpredictable results.
> > 
> > I hope that this report helps you to diagnose and resolve the
> > issue. 
> > 
-- 
Sincerely,

William




Possible transaction issue with LMDB

2018-08-16 Thread William Brown
Hi there,

While doing some integration testing of LMDB I noticed that there may
be an issue with out of order transaction handling.

The scenario is:

Open Read TXN A
Open Write TXN X, and change values of the DB
Commit X
Open Read TXN B
Open Write TXN Y, and change values of the DB
Commit Y
Open Read TXN C

Abort/Close TXN B.

At this point, because of the page touches between A -> B and B -> C, B
believes it holds the last references to the pages of A, as they were
all subsequently copied for TXN C. The pages of A are therefore added
to the freelists when B closes. When TXN A is later read, the data may
have been altered by subsequent writes, because LMDB attempts to reuse
pages from the freelist before allocating new ones.

This situation is more likely to arise with large batch writes, but
could manifest with smaller series of writes. It would be a silent
issue: the overwritten pages may still be structurally valid, so data
could "silently vanish" inside read transaction A, leading to
unpredictable results.

I hope that this report helps you to diagnose and resolve the issue. 

-- 
Sincerely,

William




MDB questions

2018-05-07 Thread William Brown
Hi there,

I have a few questions about MDB, and I have some things I'd like to
work on.

In the docs there are a few references to binary searching. It's not
100% clear, but I assume this means a binary search of the keys within
a B-tree node, not that MDB itself is a binary search tree.
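That assumption can be sketched in a few lines (illustrative, not MDB's actual code): each node holds a sorted key array, lookup within one node is a binary search over that array, and a miss tells you which child to descend into.

```python
import bisect

# Sketch of the assumption above: a B-tree node's sorted key array is
# binary-searched; the tree as a whole is NOT a binary search tree.
def node_search(keys, target):
    """Return (found, index) for `target` within one node's key list."""
    i = bisect.bisect_left(keys, target)
    if i < len(keys) and keys[i] == target:
        return True, i          # exact match in this node
    return False, i             # on an inner node, descend into child i

node_keys = [b"alpha", b"delta", b"mike", b"tango"]
assert node_search(node_keys, b"mike") == (True, 2)
assert node_search(node_keys, b"golf") == (False, 2)   # child slot 2
```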

How does MDB provide crash resilience on the free pages?

According to the man page, free() should only be called on memory from
malloc(), but I see free() used on mmapped pages in mdb_dpage_free.
There must be something I'm missing here.

Anyway, I have two things I want to work on.

The simple one: when pages are moved from the txn free list to the env
free list (I hope that's correct), it would be good to call
madvise(MADV_REMOVE) on the data section.

The reason is that the madvise call allows supporting filesystems to
hole-punch the sparse file, reclaiming space - without MDB needing to
worry about it!
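As a rough illustration of the mechanism, here is a Linux-only sketch using fallocate(2) with PUNCH_HOLE|KEEP_SIZE, the file-level relative of madvise(MADV_REMOVE) on a mapping. The constants and ctypes plumbing are standard Linux values, but whether a punch actually succeeds depends on the filesystem (ext4, xfs, and tmpfs support it), so the sketch treats failure as "unsupported" rather than an error.

```python
import ctypes
import os
import tempfile

# Linux-only sketch: return a free page range of a sparse file to the
# filesystem. Support depends on the filesystem, hence the boolean.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(None, use_errno=True)
libc.fallocate.argtypes = (ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong)
libc.fallocate.restype = ctypes.c_int

def punch_hole(fd, offset, length):
    """Try to deallocate [offset, offset+length); False if unsupported."""
    return libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, length) == 0

with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 8192)          # two 4 KiB "pages"
    f.flush()
    size_before = os.fstat(f.fileno()).st_size
    punched = punch_hole(f.fileno(), 0, 4096)   # reclaim the first page
    # KEEP_SIZE means the logical size never changes; only disk blocks
    # are released when the filesystem supports the punch.
    assert os.fstat(f.fileno()).st_size == size_before
    if punched:
        f.seek(0)
        assert f.read(4096) == b"\0" * 4096     # the hole reads as zeros
```

Because KEEP_SIZE leaves the logical file size intact, readers of the database file see the same layout before and after the punch; only the on-disk block usage shrinks.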

The much more invasive change I want to work on is page checksumming.
Basically there are three cases I have in mind:

* No checksumming (today)
* Metadata checksumming only
* Metadata and data checksumming

These could be used in these scenarios:

* write checksums but don't verify them at run time
* write checksums, and only verify metadata on read (possibly a good
default option)
* write checksums, and verify metadata and data on read (slowest, but
has strong integrity properties for some applications)

And in all cases I want to add an "mdb_verify" command that would
assert all of these are also correct offline.

There are a few reasons for this:

* Hardware is unreliable. RAM, disks, cables, even CPU cache memory can
all exhibit bit flips and other data loss. Changing a bit in a pointer
can damage any data structure, which flows on to crashes or silent
corruption.
* Software is never perfect - checksumming allows detection of
overwrites of data from overflows or other mistakes that we as humans
all make.

I'd opt to use something fast like crc32c (Intel provides hardware
acceleration for this, usable with -march=native). The only issue I see
is that this would require an on-disk structure change, because the
current structs don't have space for it - and the checksums have to be
*first*.

http://www.lmdb.tech/doc/group__internal.html#structMDB__page

The checksum would have to be the first value *or* the last value of
the page header, so that it can be updated without affecting the result
of the checksum. The checksum for the data would have to be within the
header so that it too is asserted as correct.
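To make the placement argument concrete, here is a small sketch: the checksum occupies a fixed slot at the front of a hypothetical page header and is excluded from its own input, so writing it cannot invalidate it. The header layout is invented for illustration, and zlib.crc32 stands in for crc32c (which needs SSE4.2 intrinsics or a dedicated library).

```python
import struct
import zlib

# Hypothetical page layout: a 4-byte checksum slot first, then the rest
# of the header, then the payload. zlib.crc32 stands in for crc32c.
PAGE_SIZE = 4096
HDR = struct.Struct("<I Q H H")   # csum, pgno, flags, nkeys (illustrative)

def write_page(pgno, flags, nkeys, payload):
    body = HDR.pack(0, pgno, flags, nkeys) + payload.ljust(
        PAGE_SIZE - HDR.size, b"\0")
    # Sum everything AFTER the checksum slot, so the checksum can be
    # written into the slot without changing its own input.
    csum = zlib.crc32(body[4:])
    return struct.pack("<I", csum) + body[4:]

def verify_page(page):
    (csum,) = struct.unpack_from("<I", page)
    return csum == zlib.crc32(page[4:])

page = write_page(7, 0x02, 12, b"key/value data")
assert len(page) == PAGE_SIZE
assert verify_page(page)
corrupted = page[:100] + b"\xff" + page[101:]   # simulate a bit flip
assert not verify_page(corrupted)
```

Putting the slot last in the header works identically; the only requirement is that the slot sits at a known offset outside the summed region.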

Is this something I should pursue? Would this require an on-disk format
change? Is there something that could be done to avoid one?


Thanks,

William




Re: IETF opinion change on "implicit TLS" vs. StartTLS

2018-02-16 Thread William Brown
On Mon, 2018-02-12 at 18:10 -0800, Quanah Gibson-Mount wrote:
> --On Tuesday, February 13, 2018 9:31 AM +1000 William Brown 
> <wibr...@redhat.com> wrote:
> 
> > On Mon, 2018-02-12 at 14:30 +0100, Michael Ströder wrote:
> > > HI!
> > > 
> > > To me this rationale for SMTP submission with implicit TLS seems
> > > also
> > > applicable to LDAPS vs. StartTLS:
> > > 
> > > https://tools.ietf.org/html/rfc8314#appendix-A
> > > 
> > > So LDAPS should not be considered deprecated. Rather it should be
> > > recommended and the _optional_ use of StartTLS should be strongly
> > > discouraged.
> > 
> > Yes, I strongly agree with this. I have evidence to this fact and
> > can
> > provide it if required,
> 
> Personally, I'm all for it.  I'd suggest using the above RFC as a
> template 
> for one formalizing port 636, so it's finally a documented standard.

Great! Where do we go from here to get this formalised properly? 

> 
> --Quanah
> 
> --
> 
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by
> OpenLDAP:
> <http://www.symas.com>
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: IETF opinion change on "implicit TLS" vs. StartTLS

2018-02-12 Thread William Brown
On Mon, 2018-02-12 at 14:30 +0100, Michael Ströder wrote:
> HI!
> 
> To me this rationale for SMTP submission with implicit TLS seems also
> applicable to LDAPS vs. StartTLS:
> 
> https://tools.ietf.org/html/rfc8314#appendix-A
> 
> So LDAPS should not be considered deprecated. Rather it should be
> recommended and the _optional_ use of StartTLS should be strongly
> discouraged.

Yes, I strongly agree with this. I have evidence to this fact and can
provide it if required,



> 
> Ciao, Michael.
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: Antw: Re: ssf Security Question

2017-11-21 Thread William Brown
On Tue, 2017-11-21 at 13:39 -0800, Quanah Gibson-Mount wrote:
> --On Monday, November 20, 2017 8:43 AM +0100 Ulrich Windl 
> <ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
> > Hi!
> > 
> > BTW: Does anyone know the background of SUSE Linux Enterprise
> > Server (SLES) moving from OpenLDAP to Red Hat's directory server
> > in its next release?
> 
> Do you have a relevant link?

It's in the SLES15 beta release notes:

https://www.suse.com/betaprogram/sle-beta/

A few media outlets caught onto it too, but it was a pretty quiet
change - even we didn't know it was coming.

PS: Again full disclosure for clarity, I work on RHDS.

-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: ssf Security Question

2017-11-20 Thread William Brown
On Mon, 2017-11-20 at 11:22 +, Howard Chu wrote:
> William Brown wrote:
> > On Fri, 2017-11-17 at 08:34 +0100, Michael Ströder wrote:
> > > William Brown wrote:
> > > > Just want to point out there are some security risks with ssf
> > > > settings.
> > > > I have documented these here:
> > > > 
> > > > https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_tra
> > > > p.ht
> > > > ml
> > > 
> > > Nice writeup. I always considered SSF values to be naive and
> > > somewhat
> > > overrated. People expect too much when looking at these numbers -
> > > especially regarding the "strength" of cryptographic algorithms
> > > which
> > > changes over time anyway with new cryptanalysis results coming
> > > up.
> > > 
> > > Personally I always try to implement a TLS-is-must policy and
> > > prefer
> > > LDAPS (with correct protocol and ciphersuites configured) over
> > > LDAP/StartTLS to avoid this kind of pre-TLS leakage.
> > > Yes, I deliberately ignore "LDAPS is deprecated". ;-]
> > 
> > I agree. If only there was a standards working group that could
> > deprecate startTLS in favour of TLS  :)
> 
> I have to agree as well. On my own servers I also use TLS on other
> "plaintext" 
> ports too (such as pop3 and others) that no one has any business
> connecting to 
> in plaintext.

Yep. TLS and end-to-end is the way of the future. We need to update our
documents to support this :) 

> 
> > > Furthermore some LDAP server implementation (IIRC e.g. MS AD)
> > > refuse
> > > to
> > > accept SASL/GSSAPI bind requests sent over TLS-secured channel.
> > > Which
> > > is
> > > IMO also somewhat questionable.
> > 
> > Yes, I really agree. While a plain text port exists, data leaks are
> > possible. We should really improve this situation, where we have
> > TLS
> > for all data to prevent these mistakes.
> > 
> > I think a big part of the issue is that GSSAPI forces the
> > encryption
> > layer, and can't work via an already encrypted channel. The people
> > I
> > know involved in this space are really resistant to changing this
> > due
> > to the "kerberos centric" nature of the products.
> 
> Interesting. Our libldap/liblber works fine with GSSAPI's encryption
> layered 
> over TLS and vice versa.
> 

Sadly your libldap/liblber is not the only one we have to use. I'm told
that AD in particular, for IPA trusts, is unable to do GSSAPI-over-TLS.

Really, IMO, if the SSF is already > 1 then GSSAPI shouldn't install an
encryption layer - but you know, I'm not the one who writes the SASL
code... If you have contacts in this space, I'm open to suggestions as
to how we can proceed to improve this.

-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: One account for modifying directory and wiki

2017-11-20 Thread William Brown
On Fri, 2017-11-17 at 07:46 -0500, John Lewis wrote:
> On Fri, 2017-11-17 at 12:51 +1000, William Brown wrote:
> > On Thu, 2017-11-16 at 11:26 -0500, John Lewis wrote:
> > > I want to have one account for modifying both a LDAP directory
> > > and
> > > a
> > > Mediawiki. What tactic would you you use to do it?
> > 
> > I'm not sure this is a tough issue: the access controls are
> > seperate
> > in
> > these cases.
> > 
> > On one hand from the LDAP directory management side, you only need
> > the
> > ACI/ACL's in place on the config/tree that would allow writes to
> > appropriate locations. There is plenty of docs on aci/acl placement
> > and
> > construction for this.
> > 
> > From the mediawiki side, you can search users and use an ldap
> > backend
> > to do password checks (binds) and then use groups to provide
> > authorization control as to "who" can access the wiki.
> > 
> > I hope that helps you,
> 
> Is that configuration self serviceable, as in the user can request
> their own account with the permissions I deem them to have?

What do you mean by this? As in "make it so anyone can log in to the
wiki"? Just don't add access controls (i.e. group membership or filter
tests) in the MediaWiki LDAP config. Then anyone with a valid LDAP
account can log in, with no ACI changes needed for OpenLDAP.

Hope that helps - if I recall, MediaWiki has great LDAP connection
docs.



-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: ssf Security Question

2017-11-19 Thread William Brown
On Fri, 2017-11-17 at 08:34 +0100, Michael Ströder wrote:
> William Brown wrote:
> > Just want to point out there are some security risks with ssf
> > settings.
> > I have documented these here:
> > 
> > https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_trap.ht
> > ml
> 
> Nice writeup. I always considered SSF values to be naive and somewhat
> overrated. People expect too much when looking at these numbers -
> especially regarding the "strength" of cryptographic algorithms which
> changes over time anyway with new cryptanalysis results coming up.
> 
> Personally I always try to implement a TLS-is-must policy and prefer
> LDAPS (with correct protocol and ciphersuites configured) over
> LDAP/StartTLS to avoid this kind of pre-TLS leakage.
> Yes, I deliberately ignore "LDAPS is deprecated". ;-]

I agree. If only there was a standards working group that could
deprecate startTLS in favour of TLS  :) 

> 
> Furthermore some LDAP server implementation (IIRC e.g. MS AD) refuse
> to
> accept SASL/GSSAPI bind requests sent over TLS-secured channel. Which
> is
> IMO also somewhat questionable.

Yes, I really agree. While a plain text port exists, data leaks are
possible. We should really improve this situation, where we have TLS
for all data to prevent these mistakes.

I think a big part of the issue is that GSSAPI forces the encryption
layer, and can't work via an already encrypted channel. The people I
know involved in this space are really resistant to changing this due
to the "kerberos centric" nature of the products. 



> 
> Ciao, Michael.
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: Is existing documentation kind of vague?

2017-11-16 Thread William Brown
On Fri, 2017-11-17 at 08:27 +0200, MJ J wrote:
> No matter how you wrap poll() and select(), they will always be
> poll()
> and select() - you will always run loops around an ever increasing
> stack of file descriptors while doing I/O. BDB is always going to
> have
> the same old problems... That's what I'm talking about - sacrificing
> performance for platform portability (NSPR).
> 
> FreeIPA could be multi-tenant i.e.support top-level and subordinate
> kerberos realms if it supported a more sensible DIT layout. I know
> because I have built such a system (based on OpenLDAP) and deployed
> it
> internationally. Probably the best piece of code to come out of the
> project is bind-dyndb-ldap.

Whoa mate - I'm not here to claim that 389 is a better LDAP server - we
just do some things differently. We acknowledge our limitations, are
really working on them, and are paying down our tech debt. We want to
remove parts of NSPR, replace BDB, and more. :)

I'm here to follow the progress of the openldap project, who have a
team of people I respect greatly and want to learn from, and here to
help discussions and provide input from a different perspective.

There are certainly things that openldap does much better than us today
- and there are also things that we do differently, like the DNA
plugin's uid allocation, replication, etc.

There are also project focuses and decisions made to improve
supportability in systems like FreeIPA. We can discuss them forever,
but the reality is that today FreeIPA is not targeting multi-tenant
environments, because the majority of our consumers don't want that
functionality. We made a design decision and have to live with it. I'm
providing this information to help people construct an informed
opinion.


As mentioned, I'm not here to throw insults and criticisms, I'm here to
have positive, respectful discussions about technology, to provide
different ideas, and to learn from others :) 

Thanks,

> 
> On Fri, Nov 17, 2017 at 4:49 AM, William Brown <wibr...@redhat.com>
> wrote:
> > On Thu, 2017-11-16 at 05:54 +0200, MJ J wrote:
> > > Sure, it can be improved to become invulnerable to the
> > > academically
> > > imaginative race conditions that are not going to happen in real
> > > life.
> > > That will go to the very bottom of my list of things to do now,
> > > thanks.
> > > 
> > > FreeIPA is a cool concept, too bad it's not scalable or multi-
> > > tenant
> > > capable.
> > 
> > It's a lot more scalable depending on which features you
> > enable/disable. It won't even be multi-tenant due to the design
> > with
> > gssapi/krb.
> > 
> > At the end of the day, the atomic UID/GID alloc in FreeIPA is from
> > the
> > DNA plugin from 389-ds-base (which you can multi-instance on a
> > server
> > or multi-tentant with many backends). We use a similar method to AD
> > in
> > that each master has a pool of ids to alloc from, and they can
> > atomically request pools. This prevents the race issues you are
> > describing here with openldap.
> > 
> > So that's an option for you, because those race conditions *do* and
> > *will* happen, and it will be a bad day for you when they do.
> > 
> > 
> > Another option is an external IDM system that allocs the uid's and
> > feeds them to your LDAP environment instead,
> > 
> > Full disclosure: I'm a core dev of 389 directory server, so that's
> > why
> > I'm speaking in this context. Not here to say bad about openldap or
> > try
> > to poach you, they are a great project, just want to offer
> > objective
> > insight from "the other (dark?) side". :)
> > 
> > > 
> > > On Wed, Nov 15, 2017 at 11:09 PM, Michael Ströder <michael@stroed
> > > er.c
> > > om> wrote:
> > > > MJ J wrote:
> > > > > TLDR; in a split-brain situation, you could run into trouble.
> > > > > But
> > > > > this
> > > > > isn't the only place. Efffective systems monitoring is the
> > > > > key
> > > > > here.
> > > > > 
> > > > > Long answer;
> > > > > [..]
> > > > > The solution I posted has been in production in a large,
> > > > > dynamic
> > > > > company for several years and never encountered a problem.
> > > > 
> > > > Maybe it works for you. But I still don't understand why you
> > > > post
> > > > such a
> > > > lengthy justification insisting on your MOD_INCREMENT / read-
> > > > after-
> > > > write
>

Re: ssf Security Question

2017-11-16 Thread William Brown
On Thu, 2017-11-16 at 21:25 -0800, Quanah Gibson-Mount wrote:
> --On Friday, November 17, 2017 12:53 PM +1000 William Brown 
> <wibr...@redhat.com> wrote:
> 
> Hi William,
> 
> > Hey mate,
> > 
> > Just want to point out there are some security risks with ssf
> > settings.
> > I have documented these here:
> > 
> > https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_trap.ht
> > ml
> > 
> > This is a flaw in the ldap protocol and can never be resolved
> > without
> > breaking the standard. The issue is that by the time the ssf check
> > is
> > done, you have already cleartexted sensitive material.
> 
> I think what you mean is: There is no way with startTLS to prevent
> possible 
> leakage of credentials when using simple binds. ;)  Your blog
> certainly 
> covers this concept well, but just wanted to be very clear on what
> the 
> actual issue is. ;)  I've been rather unhappy about this for a long
> time as 
> well, and have had a discussion going on the openldap-devel list
> about 
> LDAPv4 and breaking backwards compatibility to fix this protocol bug.

Absolutely. I think it's better to just say: expect leakage. Do it
right, once, and guarantee your behaviours. It's not just simple bind,
though.

An example: because of how minssf works, we have to accept anonymous
binds at ssf=0, because we expect a StartTLS to come next. Even then, a
client can leak data in the form of the query itself - say "search
mail=secret@secret" - before the ssf check can reject the request, even
if the deployment doesn't want to leak phone numbers, mail, etc.

Sure, we aren't leaking entries, but we shouldn't leak *anything* in
this kind of environment.

Again, LDAPS is the only way to really guarantee the connection is
encrypted from the first byte to the last :)

> 
> Another note -- The reason GSSAPI shows up as an SSF of 56 is because
> it 
> has been hard coded that way in cyrus-sasl.  Starting with cyrus-
> sasl 
> version 2.1.27, which is near release, the actual SASL SSF is
> finally 
> passed back into the caller.  It may be worthwhile noting this in
> your blog 
> post. ;)

Yeah, the krb devs told me about this change recently - I should go and
update the post :) I've just been busy lately :)

Thanks mate,

> 
> Warm regards,
> Quanah
> 
> 
> --
> 
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by
> OpenLDAP:
> <http://www.symas.com>
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: ssf Security Question

2017-11-16 Thread William Brown
On Tue, 2017-11-14 at 20:56 +, Kaya Saman wrote:
> Hi,
> 
> 
> I am a little confused with this. Basically I have a client
> connecting 
> to the database, a DECT IP phone base station which doesn't support 
> STARTLS and my slapd config has settings for clients to use
> certificates 
> to connect.
> 
> 
> What would be the best way to set this up so that the DECT IP client 
> only accesses the particular place that it needs to, the AddressBook 
> section but then other clients will need to use STARTTLS for
> everything 
> else??
> 
> 
> Currently I am looking at:
> 
> https://www.openldap.org/doc/admin24/security.html
> 
> 
> https://www.openldap.org/doc/admin24/access-control.html
> 
> 
> and have currently put this in my slapd.conf:
> 
> 
> #Removed the Global? security clause
> 
> #security ssf=128
> 
> 
> #Added generic ACL for all access to require ssf of 128
> 
> access to *
>  by ssf=128 self write
>  by ssf=128 anonymous auth
>  by ssf=128 users read
> 
> 
> #Added ACL for open access to AddressBook in Read and Search only
> mode
> 
> access to dn.children="ou=AddressBook,dc=domain,dc=com"
>  by * search
>  by * read
> 
> 
> Is this correct or do I need to engage the "security" Global section
> too?
> 
> 
> Though the documentation suggests otherwise: "For fine-grained
> control, 
> SSFs may be used in access controls. See theAccess Control 
> <https://www.openldap.org/doc/admin24/access-control.html>section
> for 
> more information."
> 
> 

Hey mate,

Just want to point out there are some security risks with ssf settings.
I have documented these here:

https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_trap.html

This is a flaw in the ldap protocol and can never be resolved without
breaking the standard. The issue is that by the time the ssf check is
done, you have already cleartexted sensitive material.

I would advise that you use LDAPS with TLS instead, or provide suitable
access control over your network segments to prevent these issues.
Relying on SSF can allow data leaks from misconfigured clients.

Hope that helps, 


-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: One account for modifying directory and wiki

2017-11-16 Thread William Brown
On Thu, 2017-11-16 at 11:26 -0500, John Lewis wrote:
> I want to have one account for modifying both a LDAP directory and a
> Mediawiki. What tactic would you you use to do it?

I'm not sure this is a tough issue: the access controls are separate in
these cases.

On the LDAP directory management side, you only need the ACI/ACLs in
place on the config/tree to allow writes to the appropriate locations.
There are plenty of docs on ACI/ACL placement and construction for
this.

From the MediaWiki side, you can search users, use an LDAP backend to
do password checks (binds), and then use groups to provide
authorization control over "who" can access the wiki.

I hope that helps you,


> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: Is existing documentation kind of vague?

2017-11-16 Thread William Brown
On Thu, 2017-11-16 at 05:54 +0200, MJ J wrote:
> Sure, it can be improved to become invulnerable to the academically
> imaginative race conditions that are not going to happen in real
> life.
> That will go to the very bottom of my list of things to do now,
> thanks.
> 
> FreeIPA is a cool concept, too bad it's not scalable or multi-tenant
> capable.

It's a lot more scalable depending on which features you
enable/disable. It won't ever be multi-tenant, though, due to the
design with gssapi/krb.

At the end of the day, the atomic UID/GID allocation in FreeIPA comes
from the DNA plugin in 389-ds-base (which you can multi-instance on a
server, or make multi-tenant with many backends). We use a similar
method to AD, in that each master has a pool of ids to allocate from,
and masters can atomically request new pools. This prevents the race
issues you are describing here with openldap.

So that's an option for you, because those race conditions *do* and
*will* happen, and it will be a bad day for you when they do. 


Another option is an external IDM system that allocates the uids and
feeds them to your LDAP environment instead,

Full disclosure: I'm a core dev of 389 directory server, so that's why
I'm speaking in this context. Not here to say bad about openldap or try
to poach you, they are a great project, just want to offer objective
insight from "the other (dark?) side". :) 

> 
> On Wed, Nov 15, 2017 at 11:09 PM, Michael Ströder <michael@stroeder.c
> om> wrote:
> > MJ J wrote:
> > > TLDR; in a split-brain situation, you could run into trouble. But
> > > this
> > > isn't the only place. Efffective systems monitoring is the key
> > > here.
> > > 
> > > Long answer;
> > > [..]
> > > The solution I posted has been in production in a large, dynamic
> > > company for several years and never encountered a problem.
> > 
> > Maybe it works for you. But I still don't understand why you post
> > such a
> > lengthy justification insisting on your MOD_INCREMENT / read-after-
> > write
> > approach with possible race condition even in a single master
> > deployment
> > while there are two proper solutions with just a few lines code
> > more:
> > 
> > 1. delete-by-value to provoke a conflict like the original poster
> > mentioned by pointing to
> > http://www.rexconsulting.net/ldap-protocol-uidNumber.html
> > 
> > 2. MOD_INCREMENT with pre-read control
> > 
> > Of course none of the solutions work when hitting multiple
> > providers
> > hard in a MMR setup or in a split-brain situation. One has to
> > choose a
> > "primary" provider then.
> > BTW: AFAIK with FreeIPA each provider has its own ID range to
> > prevent that.
> > 
> > Ciao, Michael.
> 
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: restrict wildcard searches

2017-11-16 Thread William Brown
On Tue, 2017-11-14 at 12:44 +0100, Michael Ströder wrote:
> Geert Hendrickx wrote:
> > Is there a way to restrict (acl?) searches using wildcards?
> 
> AFAIK no.
> 
> > For compliancly reasons, I want to allow certain (actually most)
> > users to
> > search on eg. known email addresses, like: mail=u...@example.org,
> > but not
> > to retrieve a list of all users, like mail=*@example.org.

There is really no way to block this. Even if you disable that search,
you can still do:

uid=*
sn=*
mail=* (presence)
objectClass=*

Even with admin limits, paged search limits, etc., you cannot block
this. You can *always* get *every* entry from a server, given enough
time.
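A toy model of why size limits alone can't stop this (the directory contents and limit here are invented for illustration): a client simply refines substring filters, one character at a time, until each result set fits under the limit, and unions the results.

```python
import string

# Toy model: a server-side size limit does not stop enumeration,
# because the client refines "mail=<prefix>*" filters until every
# result set fits under the limit.
DIRECTORY = {"alice@example.org", "alan@example.org", "bob@example.org",
             "carol@example.org", "craig@example.org"}
SIZELIMIT = 2

def search(prefix):
    """Substring search 'mail=<prefix>*' with an admin size limit."""
    hits = sorted(e for e in DIRECTORY if e.startswith(prefix))
    return hits if len(hits) <= SIZELIMIT else None   # None = limit hit

def enumerate_all(prefix=""):
    hits = search(prefix)
    if hits is not None:
        return set(hits)
    found = set()
    for c in string.ascii_lowercase:      # refine the filter and retry
        found |= enumerate_all(prefix + c)
    return found

assert search("") is None                 # the naive query is blocked...
assert enumerate_all() == DIRECTORY       # ...yet every entry is recovered
```

The cost to the client grows only with the alphabet and the key distribution, not with the size limit, which is why the limit merely slows the walk down.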

I really think the question you should ask yourself is: "What's the
threat I want to counter, and how can I prevent it?"

A list of users is one thing, but perhaps the threat is a list of
users' full names - so limit access to cn/sn/displayName etc. If it's
mail addresses, then limit who can read mail. If it's an education
provider, block access to the edu* attributes instead. There are better
ways to achieve what you want here, I think: express the threat you
want to prevent, and build accordingly.
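For instance, a purely illustrative slapd.conf sketch of that approach - restricting who may read the sensitive attributes rather than trying to block filter shapes. The group DN and attribute lists are placeholders, not a recommendation for any particular deployment:

```
# Illustrative only: gate the sensitive attributes themselves.
# Placeholders: cn=helpdesk group DN, attribute lists.
access to attrs=mail,telephoneNumber
        by group.exact="cn=helpdesk,ou=Groups,dc=example,dc=com" read
        by self read
        by * none

access to attrs=cn,sn,displayName
        by users read
        by * none
```

Since read access implies search access in slapd's access levels, denying read on an attribute also stops it being harvested through filters.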



> > 
> > Sizelimit restriction is not enough, because they could still
> > iteratively
> > retrieve everything, without launching an actual dictionary attack
> > on all
> > possible mail addresses, which would be much harder.
> 
> You could remove SUBSTR matching rule from attribute type description
> of
> 'mail' (in core.schema or core.ldif).
> 
> Caveats:
> 
> 1. Probably you already know that tweaking standard schema is not
> recommend.
> 
> 2. It disables sub-string matching on 'mail' completely. You might
> solve
> this by building a partial replica or a LDAP proxy dedicated to the
> exact search on known e-mail addresses.
> 
> AFAICS other possibilities would be implementing an overlay or a
> dynacl
> module for your specific needs.
> 
> Ciao, Michael.
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane




Re: Load testing bind performance

2017-11-02 Thread William Brown
On Thu, 2017-11-02 at 09:08 +0100, Michael Ströder wrote:
> William Brown wrote:
> > On Wed, 2017-11-01 at 20:33 +0100, Michael Ströder wrote:
> > > Tim wrote:
> > > > I've used the python-ldap library to simulate other varieties
> > > > of
> > > > interactions successfully, but when it comes to binds, each
> > > > interaction seems to generate a substantial amount of traffic
> > > > behind the scenes, so suspect that *things* are happening that
> > > > is
> > > > artificially limiting the bind rate/s.>>
> > > 
> > > python-ldap itself is a pretty thin wrapper on top of libldap.
> > > Especially if you're using LDAPObject.simple_bind() or
> > > LDAPObject.simple_bind_s() [1] there is definitely no "traffic
> > > behind
> > > the scenes".
> > > 
> > > So if you have overhead on the client side I suspect your own
> > > Python
> > > code adds this.
> > 
> > python-ldap is very thin, but it does have a global mutex that can
> > prevent you "really hitting" the ldap server you are testing.
> 
> Yes, you're right. But not sure whether you really hit the GIL limit
> since python-ldap releases GIL whenever it calls libldap functions.
> And of course when running a multi-threaded client each thread should
> have its own LDAPObject instance.
> (I assume here that Python is built with thread support and python-
> ldap
> was built against libldap_r. Otherwise all calls into libldap
> (without
> _r) are serialized with a global lock.)

Yeah, the GIL isn't the issue; it's the global lock. You need to start
multiple separate Python interpreters to really generate load. We have
a Python load test, but you have to start about 16 instances of it to
really stress a server.

I've always wondered what the purpose of the ldap lock is, but that's a
topic for its own thread, I think :)
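A minimal sketch of that pattern, assuming the serializing lock is per-process: drive the load from separate OS processes rather than threads, so no single in-process lock throttles the binds. The bind itself is stubbed out here; a real driver would call `ldap.initialize(...).simple_bind_s(...)` inside `worker()`.

```python
import multiprocessing as mp
import time

# One OS process per simulated client, so no in-process libldap lock
# serializes the operations. The actual bind call is stubbed out.
def worker(n_ops, results):
    done = 0
    for _ in range(n_ops):
        time.sleep(0)          # stand-in for simple_bind_s()
        done += 1
    results.put(done)

def run_load(n_procs=4, n_ops=100):
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(n_ops, results))
             for _ in range(n_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sum(results.get() for _ in procs)

if __name__ == "__main__":
    total = run_load()
    assert total == 400        # 4 workers x 100 ops each
```

Scaling the process count until the server's CPU, not the client, is the bottleneck is the practical equivalent of "start about 16 instances" above.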

> 
> Ciao, Michael.
> 
-- 
Sincerely,

William Brown
Software Engineer
Red Hat, Australia/Brisbane

