On Thu, Nov 16, 2023, at 5:17 PM, William Faulk wrote:
>
> Since asking the question, I've been doing some research and found that the
> "cn=changelog" tree is populated by the "Retro Changelog Plugin", and on my
> systems, that has a config that limits it to the "cn=dns" subtree in my
>
> had the same CSN
That shouldn't be possible. It's an axiom of the system that CSNs are unique.
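Uniqueness falls out of the CSN structure itself. A minimal sketch in Python, assuming the commonly documented 20-hex-digit layout (timestamp, sequence number, replica ID, sub-sequence); the sample CSN strings below are hypothetical, not taken from a real server:

```python
# Hedged sketch: a 389-ds CSN is commonly described as 20 hex digits:
# 8 digits Unix timestamp, 4 sequence number, 4 replica ID, 4 sub-sequence.
# Treat this layout as an assumption, not the authoritative on-disk format.
from typing import NamedTuple

class CSN(NamedTuple):
    timestamp: int
    seqnum: int
    replica_id: int
    subseq: int

def parse_csn(s: str) -> CSN:
    assert len(s) == 20, "expected 20 hex digits"
    return CSN(int(s[0:8], 16), int(s[8:12], 16),
               int(s[12:16], 16), int(s[16:20], 16))

# Two replicas generating a CSN in the same second still differ in the
# replica-ID field, so equal CSNs from different replicas cannot occur.
a = parse_csn("655683d4000000010000")  # replica 1
b = parse_csn("655683d4000000020000")  # replica 2, same second
print(a != b)                          # True
print(a.replica_id, b.replica_id)      # 1 2
```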
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of
On Thu, Nov 16, 2023, at 2:22 PM, William Faulk wrote:
>
> Do you know how I can find mappings between CSNs and changes? Or even just
> how to see the changelog at all?
More of a meta-answer, but I suspect the CSN is available as an operational
attribute on each entry. If that hunch is
On Thu, Nov 16, 2023, at 12:54 PM, William Faulk wrote:
>
>
> Ultimately, I think I mostly understand now. A change happens on a replica,
> it assigns a CSN to it and updates its RUV to indicate that that's now the
> newest CSN it has. Then a replication event occurs with its peers and those
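The flow described above can be sketched with simplified integer CSNs; all names and data here are hypothetical illustrations, not 389-ds internals:

```python
# Hedged sketch of the supplier side: a local write gets a CSN, the local
# RUV is bumped, and at replication time only changes newer than the
# peer's RUV are sent. Integer CSNs are a simplification.
changelog = []   # (csn, replica_id, change) triples
local_ruv = {}   # replica_id -> newest CSN seen from that replica

def apply_local_change(csn, rid, change):
    changelog.append((csn, rid, change))
    local_ruv[rid] = max(local_ruv.get(rid, 0), csn)

def updates_for_peer(peer_ruv):
    # Send only what the peer has not yet seen, per replica ID.
    return [c for c in changelog if c[0] > peer_ruv.get(c[1], 0)]

apply_local_change(46, 1, "add uid=alice")
apply_local_change(47, 1, "mod uid=bob")
peer_ruv = {1: 46}  # peer has seen up to CSN 46 from replica 1
print(updates_for_peer(peer_ruv))  # [(47, 1, 'mod uid=bob')]
```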
On Wed, Nov 15, 2023, at 12:02 PM, William Faulk wrote:
> > it isn't necessary to keep track of a list of CSNs
>
> If it doesn't keep track of the CSNs, how does it know what data needs to be
> replicated?
>
> That is, imagine replica A, whose latest CSN is 48, talks to replica B, whose
>
There's also some information in patents e.g.
https://patents.google.com/patent/GB2388933A/en
I'm not sure about doc, but the basic idea iirc is that a vector clock[1]
(called replica update vector) is constructed from the sequence numbers from
each node. Therefore it isn't necessary to keep track of a list of CSNs, only
compare them to determine if another node is caught up with, or
No, unless you have some unusually large attributes (storing high-resolution
profile pictures, something like that), and/or unusually high write traffic
(constantly changing users' status, something like that), you should be fine on
modern hardware.
On Wed, May 18, 2022, at 8:48 AM, Morgan
On 10/9/2020 3:10 AM, Jan Kowalsky wrote:
I started with strace - but there are no actionable messages: I get a
schema error - but this is not causal (it has to be fixed anyway...):
Try adding the -f flag to strace. Sometimes the target process forks and
you only get output from the parent.
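A typical invocation, assuming the process is ns-slapd (adjust the name or PID for your instance; this has to run against a live server, so take it as a sketch):

```shell
# -f follows forks, -ttt adds microsecond timestamps, -o writes to a file;
# pgrep -o picks the oldest (parent) ns-slapd process
strace -f -ttt -o strace.log -p "$(pgrep -o ns-slapd)"
```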
libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
mmap in opening database environment failed trying to allocate 50
bytes. (OS err 12 - Cannot allocate memory)
One observation: this is a mmap() call failure, not an ordinary "OOM"
situation.
Some googling suggests
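If the mutex region really is too small, Berkeley DB lets you raise its limit via a DB_CONFIG file in the database environment directory; a sketch (the value is only an example, and it takes effect on the next environment open):

```
# DB_CONFIG in the BDB environment directory (e.g. the instance's db/ dir)
mutex_set_max 1000000
```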
On 8/14/2020 9:04 AM, Ben Spencer wrote:
After a little investigation I didn't find any recent information on
how well / linearly 389 scales from a CPU perspective. I also realize
this is a more complicated topic with many factors which actually play
into it.
Throwing the basic question out
On 6/23/2020 10:07 AM, Mark Reynolds wrote:
In 389 what we are seeing is that our backend txn plugins are doing
unindexed searches, but I would not call it a bug.
The unindexed search is fine per se (although probably not a great idea
if you want the op the plugin hooked to complete
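Unindexed operations are flagged in the access log with notes=A (partially unindexed) or notes=U (fully unindexed), so they are easy to spot with grep. A self-contained sketch using made-up log lines:

```shell
# Filter RESULT lines carrying unindexed-search notes.
# The two sample lines below are hypothetical, not real 389-ds output.
printf '%s\n' \
  'conn=5 op=2 RESULT err=0 tag=101 nentries=1 notes=A' \
  'conn=6 op=3 RESULT err=0 tag=101 nentries=1' \
  | grep -E 'notes=[AU]'
# On a real system, point grep at the instance log instead:
#   grep -E 'notes=[AU]' /var/log/dirsrv/slapd-INSTANCE/access
```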
On 6/23/2020 9:34 AM, Emmanuel Kasprzyk wrote:
I am working on a large Directory Server topology, which is very quickly
exhausting the available locks in BDB ( cf
https://bugzilla.redhat.com/show_bug.cgi?id=1831812 )
- Can the planned switch in 389-ds-base-1.4.next to LMDB help for such
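Until such a switch lands, the usual workaround is raising the BDB lock count; a hedged ldif sketch (the value is an example, and a restart is needed for it to take effect):

```
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 100000
```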
On 4/17/2019 9:13 AM, Crocker, Deborah wrote:
Is it okay to allow a master to accept changes while a replica is being
initialized?
While IT is initializing another replica? Yes.
This has always been ok.
On 12/13/2018 2:44 PM, Jan Kowalsky wrote:
Before struggling with this, I tried upgrading 389-ds in a snapshot:
After upgrade to 1.3.5.17-2 dirsrv starts again. Migration of the
databases and config worked.
I'll make a bet that this is unrelated (sometimes it works, sometimes it
doesn't), but
On 12/13/2018 1:37 PM, Jan Kowalsky wrote:
Well, we just added a new database at runtime, which worked fine - 389ds
was still running. After changing a replica I wanted to restart, and that
resulted in the error.
Also try turning up the logging verbosity to the max. From memory the
How can I achieve
On 12/13/2018 12:30 PM, Jan Kowalsky wrote:
after dirsrv crashed and trying to restart, I got the following errors
and dirsrv doesn't start at all:
[13/Dec/2018:20:17:28 +0100] - 389-Directory/1.3.3.5 B2018.298.1116
starting up
[13/Dec/2018:20:17:28 +0100] - Detected Disorderly Shutdown last
On 9/6/2018 6:37 PM, William Brown wrote:
I have seen this behaviour due to an issue in the design of access log
buffer flushing. During a buffer flush all other threads are delayed,
which can cause this spike.
You can confirm this by changing your access log buffer size up or
down.
Sadly
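One way to test the buffer-flush theory is to disable access log buffering entirely and see whether the spikes disappear, at the cost of a write per operation; a sketch:

```
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-logbuffering
nsslapd-accesslog-logbuffering: off
```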
On 9/6/2018 6:41 PM, William Brown wrote:
I think, looking at the data you posted, the question you're asking
is
"why, when I subject my server to a continuous search operation load,
do
some operations have much longer latency than others?".
If they are doing the same operation repeatedly,
On 9/6/2018 8:50 AM, isabella.ghiu...@nrc-cnrc.gc.ca wrote:
This does not justify it, since running 1 thread takes 0.1564 msec/op and
running 10 threads takes 0.0590 msec/op; the latter will require the
access.log to be flushed more frequently, I think, for 10 threads, and I do not
see the
On 8/15/2018 10:36 AM, Rich Megginson wrote:
Updating the csn generator and the uuid generator will cause a lot of
churn in dse.ldif. There are other housekeeping tasks which will
write dse.ldif
But if those things were being done so frequently that the resulting
filesystem I/O showed
in strace.log:
[pid 8088] 12:55:39.739539 poll([{fd=435, events=POLLOUT}], 1, 180
[pid 8058] 12:55:39.739573 <... write resumed> ) = 1 <0.87>
[pid 8088] 12:55:39.739723 <... poll resumed> ) = 1 ([{fd=435,
revents=POLLOUT}]) <0.000168>
[pid 8058] 12:55:39.739754 write(426, "dn:
While it is true that you want to have the highest possible hit ratio on
the two kinds of cache slapd maintains in order to achieve optimal read
performance, you _can_ simply configure quite small caches for slapd
(e.g. an entry cache of a few thousand entries and a few hundred MB of DB
cache) and rely on
the
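For reference, both caches are plain config attributes; a hedged ldif sketch with example sizes (the userRoot backend name is an assumption):

```
# per-backend entry cache (256 MB here, as an example)
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 268435456

# global BDB page cache
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 268435456
```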
On 8/9/2018 2:44 AM, Ludwig Krispenz wrote:
Sadly this doesn't tell us much :(
we could get a pstack along with iotop to see which threads do the IO,
regular mods or the BDB regulars like trickle, checkpointing
Also : strace
On 7/2/2018 2:54 PM, Artur Lojewski wrote:
Question: If I issue a delete operation to a read-only replica, and the delete
request ist properly resend to the actual supplier, can I expect (?) that an
immediate read to the consumer does not find the deleted object?
Note : you wrote "ist"
Can I ask why is the timeout a problem? Wouldn't the pool manager just
open a new connection when required?
Put another way : is a pool connection that has been idle for 120
seconds actually useful?
On 6/27/2018 1:50 PM, Ghiurea, Isabella wrote:
Hi List
we are running
On 6/15/2018 2:04 PM, Jan Kowalsky wrote:
What I can see are a lot of unindexed component queries, most of them like:
[15/Jun/2018:21:51:14 +0200] conn=462 op=31251 SRCH
base="ou=Domains,dc=example,dc=org" scope=2
filter="(&(objectClass=domainrelatedobject)(associatedDomain=example.net))"
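If associatedDomain is queried that often, an equality index for it should make those searches indexed; a sketch assuming the userRoot backend (reindex afterwards, e.g. with a db2index task):

```
dn: cn=associatedDomain,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsIndex
cn: associatedDomain
nsSystemIndex: false
nsIndexType: eq
```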
https://github.com/bozemanpass/newrelic_java_ldap_plugin
New Relic is a popular cloud graphing service. This is a "plugin" in
their parlance (actually it is more of an "agent" since it doesn't plug
into anything) that pushes server counters (including the database
stats) to New Relic. Written
On 2/4/2015 11:20 AM, ghiureai wrote:
Out of memory: Kill process 2090 (ns-slapd) score 954 or sacrifice child
It wasn't clear to me from your post whether you already have a good
understanding of the OOM killer behavior in the kernel.
On the chance that you're not yet familiar with its
I think you're on the right track with the comment that the startTLS
extended op is not needed if the connection is already native SSL on the
SSL port. First thing I'd try, given the printer's penchant for using
startTLS would be to tell it to connect to the non-SSL port (389 is the
default
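The two connection styles, as ldapsearch invocations (the hostname is an example; -ZZ demands a successful startTLS, while the ldaps:// URL is native SSL with no startTLS op at all):

```shell
# startTLS on the plain port (extended op upgrades the connection):
ldapsearch -H ldap://ldap.example.com:389 -ZZ -x -b "" -s base
# native SSL on the LDAPS port (no startTLS op involved):
ldapsearch -H ldaps://ldap.example.com:636 -x -b "" -s base
```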
On 10/15/2014 8:16 AM, Jan Tomasek wrote:
is http://poodlebleed.com/ related to 389? I think it is; this is not an
implementation flaw in OpenSSL, this seems to be related to the SSLv3
design.
From
On 5/29/2014 11:27 AM, John Trump wrote:
I believe they are false positives. I am just searching for proof to
provide to the person running sans.
If it were really testing for the vulnerabilities it would have to be
presenting requests that exploit them and checking for the desired
outcome
On 5/14/2014 3:11 PM, Michael Gettes wrote:
The db2bak strategy worries me cuz you’re backing up the db files and the time
it takes to back those up
on a reasonably sized ldap store is non-trivial. So, is there not a bit of
worry about indices being out of
sync with the entry store itself
On 5/14/2014 3:11 PM, Michael Gettes wrote:
db2ldif gets you the text dump of the DB. it is my understanding, at an object
level, this gets you a reliable
backup of each entry although data throughout the store may be inconsistent
while the large file is being written.
i can tell you i do
On 5/14/2014 3:11 PM, Michael Gettes wrote:
of course, you can have yet another ldap server lying around not being used by
apps and it’s purpose is to dump
the store periodically, but that may not be part of what you want to achieve
with disparate locations and such.
This is a useful approach
On 5/14/2014 7:19 PM, Michael Gettes wrote:
Thank you so much for the 3 replies. They are VERY illuminating and helpful
for me to now press ahead and better address my own particular needs based on
our “requirements”. What I now intend to do is to perform, at regular
intervals, db2bak to a
On 5/13/2014 10:12 AM, Elizabeth Jones wrote:
no need for wildcard certs… use the Subject Alt Name. Works fine. Been
doing it for years. certutil supports it as well.
/mrg
Thanks, this looks like it is what I need. I do have a question about this
though - we have a single url that we use
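The SAN approach mentioned above can be done with certutil's --extSAN option; a hedged sketch (the database path, nicknames, and hostnames are all examples):

```shell
# One cert covering the virtual name plus each real host via SAN entries
certutil -S -d /etc/dirsrv/slapd-INSTANCE -n Server-Cert \
  -s "CN=ldap.example.com" -c "CA certificate" -t ",," -g 2048 \
  --extSAN dns:ldap.example.com,dns:server1.example.com,dns:server2.example.com
```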
On 5/12/2014 9:53 AM, Elizabeth Jones wrote:
Do the certs have to have the server hostnames in them or can I create a
cert that has a virtual name and put that on all the LDAP servers?
If I understand the scenario : you are using a LB that passes through
SSL traffic to the LDAP servers
On 5/5/2014 3:37 AM, Graham Leggett wrote:
What appears to be happening is that during the replication process,
an LDAP operation that is accepted on servera is being rejected by
serverc. The replication process is brittle, and has not been coded to
handle any kind of error during the
On 5/5/2014 8:55 AM, Graham Leggett wrote:
One of the objects being replicated is a large group containing about 21000
uniqueMembers. When it comes to replicate this object, the replication pauses
for about 6 seconds or so, and at that point it times out, responding with the
following
On 5/5/2014 9:46 AM, Graham Leggett wrote:
[05/May/2014:17:36:04 +0200] NSMMReplicationPlugin - agmt=cn=Agreement
servera.example.com (servera:636): Replica has a different generation ID than the
local data.
I haven't the faintest clue what a generation ID is, how you set it, or what
the
On 5/4/2014 1:27 PM, Graham Leggett wrote:
LDAP error 1 is LDAP_OPERATIONS_ERROR, which seems to be the LDAP
equivalent of "an error has occurred". There seems to be some kind of
mechanism where a reason string is made available by the underlying
code, but this is ignored by the above code, and
On 5/4/2014 2:18 PM, Graham Leggett wrote:
Nothing in the above seems to indicate an error that I can see, but we now see
this two seconds later:
[04/May/2014:23:03:38 +0200] - ERROR bulk import abandoned
[04/May/2014:23:03:38 +0200] - import userRoot: Aborting all Import threads...
On 1/30/2014 10:18 AM, Paul Whitney wrote:
rpm -q 389-ds-base
389-ds-base-1.2.11.15-30.el6_5.x86_64
No errors, just a status:
reindex userRoot: Processed 315000 entries (pass 11) -- avg rate
15283456.5/sec, recent rate 0.0/sec. hit ratio 0%
Then errors log states threshold has dropped
On 8/21/2013 9:46 AM, Ludwig Krispenz wrote:
we don't have dedicated threads for read or write operations, in
theory writes should not block reads, but if the write threads queue
up for the backend lock there might be no threads available to do the
reads
I wasn't talking about threads. It is
On 2/21/2013 8:27 AM, Patrick Raspante wrote:
Is it required (or at least suggested) that multi-mastered directory
server instances have the equal values for dbcache and entry cache
settings? If so, what adverse effects result from not configuring the
caches similarly?
There's no
On 2/21/2013 11:11 AM, Patrick Raspante wrote:
I was mostly curious if the difference in cache configurations has any
negative effect on the integrity of the replication agreement between
the directory instances.
To illustrate, say one directory instance is managing several root
suffixes and
On 11/13/2012 11:15 AM, Rich Megginson wrote:
One would expect to see this issue in different deployments,
but I only saw it in one instance.
If it turns out that the issue I see is identical to the issue you
mentioned, I'd like to know when it was fixed.
Upon further
On 11/12/2010 8:59 AM, Gerrard Geldenhuis wrote:
I am trying to decrypt SSL traffic capture with tcpdump in wireshark.
A quick google turned up a page that said the NSS utils does not allow
you to expose your private key. Is there different way or howto that
anyone can share to help decrypt
On 11/12/2010 9:21 AM, Gerrard Geldenhuis wrote:
I created a new certificate datase with certutil, and I can view the
private key fingerprints with certutil -d . -K but I can't actually
extract the private key from the certutil database. I can create a
certificate sign request using certutil
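certutil indeed won't emit the raw key, but pk12util can export the cert-and-key pair as PKCS#12, from which OpenSSL can extract the key; a sketch (nickname and paths are examples). Note this only helps Wireshark decrypt sessions using RSA key exchange, not forward-secret (DHE/ECDHE) ciphersuites:

```shell
# Export cert + private key from the NSS database as PKCS#12:
pk12util -o server.p12 -n Server-Cert -d /etc/dirsrv/slapd-INSTANCE
# Peel the private key out with OpenSSL:
openssl pkcs12 -in server.p12 -nocerts -nodes -out server.key
```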