[389-devel] Source directory is now list-able

2014-04-07 Thread Rich Megginson
http://port389.org/sources is now open and list-able.  The default sort 
order is latest first.  The http://port389.org/wiki/Source page has been 
updated with this link.

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] [389-users] Source directory is now list-able

2014-04-08 Thread Rich Megginson

On 04/08/2014 02:24 PM, Timo Aaltonen wrote:

On 07.04.2014 21:52, Rich Megginson wrote:

http://port389.org/sources is now open and list-able.  The default sort
order is latest first.  The http://port389.org/wiki/Source page has been
updated with this link.

\o/

many thanks for this :)




Sure, it was about time we did this :P
Please let us know if there are any issues or suggested improvements.
My apache-fu is not good; perhaps there are some nice mod_autoindex
hacks...

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX

2014-04-09 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #47774 mem leak in do_search - rawbase not freed upon certain errors

2014-04-09 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47774/0001-Ticket-47774-mem-leak-in-do_search-rawbase-not-freed.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Take 2: Ticket #47772 empty modify returns LDAP_INVALID_DN_SYNTAX

2014-04-11 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47772/0001-Ticket-47772-empty-modify-returns-LDAP_INVALID_DN_SY.2.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] single valued attribute update resolution

2014-04-30 Thread Rich Megginson

On 04/25/2014 07:43 AM, Ludwig Krispenz wrote:
There are still scenarios where replication can lead to inconsistent 
states for single-valued attributes, which I think has two reasons:
- for single-valued attributes there are scenarios where modifications 
applied concurrently cannot simply be resolved without violating the 
schema
- the code to handle single-valued attribute resolution is quite 
complex, and has always been extended to resolve reported issues rather 
than simplified


I tried to specify all potential scenarios which should be handled and 
what the expected consistent state should be. In parallel I am writing a 
test suite based on the lib389 test framework to provide testcases for all 
scenarios and then test the current implementation. The doc and test 
suite can be used as a reference for a potential rework of the update 
resolution code.


Please have a look at: 
http://port389.org/wiki/Update_resolution_for_single_valued_attributes


comments, corrections, additional requirements are welcome - the doc is 
not final :-)
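A rough sketch of the kind of testcase described above (not Ludwig's actual suite; the URIs, credentials, entry DN and attribute are placeholders): apply conflicting values for a single-valued attribute on two suppliers, wait for replication, and check that both servers converge on one schema-valid value.

import time
import ldap

SUPPLIERS = ["ldap://supplier1.example.com:389", "ldap://supplier2.example.com:389"]
ENTRY = "uid=testuser,ou=people,dc=example,dc=com"
ATTR = "employeeNumber"   # treated as single-valued for this sketch

def bind(uri):
    conn = ldap.initialize(uri)
    conn.simple_bind_s("cn=Directory Manager", "password")
    return conn

conns = [bind(uri) for uri in SUPPLIERS]

# Near-simultaneous conflicting MOD_REPLACE of the same single-valued attribute.
for i, conn in enumerate(conns):
    conn.modify_s(ENTRY, [(ldap.MOD_REPLACE, ATTR, [b"value-%d" % i])])

time.sleep(30)   # allow replication to converge

values = [conn.search_s(ENTRY, ldap.SCOPE_BASE, attrlist=[ATTR])[0][1].get(ATTR)
          for conn in conns]
assert values[0] == values[1], "suppliers diverged: %r" % (values,)
assert values[0] and len(values[0]) == 1, "single-valued schema violated: %r" % (values,)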


Very nice!



Thanks,
Ludwig


--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #47831 - server restart wipes out index config if there is a default index

2014-06-25 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47831/0001-Ticket-47831-server-restart-wipes-out-index-config-i.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Redux: Ticket #47692 single valued attribute replicated ADD does not work

2014-07-10 Thread Rich Megginson

Previous fix was incomplete.
https://fedorahosted.org/389/attachment/ticket/47692/0001-Ticket-47692-single-valued-attribute-replicated-ADD-.2.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Please review lib389: start/stop may hang indefinitely

2014-09-05 Thread Rich Megginson

On 09/05/2014 10:32 AM, thierry bordaz wrote:

On 09/05/2014 01:10 PM, thierry bordaz wrote:
Detected with testcase 47838, which defines ciphers not recognized 
during SSL init. The 47838 testcase makes the full test suite hang.




Hello,

Rich pointed out to me that the indentation was bad in the second part of the 
fix. I was wrongly using tabs instead of spaces.

Here is a better fix


ack



thanks
thierry




--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #47892 coverity defects found in 1.3.3.1

2014-09-12 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/47892/0001-Ticket-47892-coverity-defects-found-in-1.3.3.1.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype

2014-09-15 Thread Rich Megginson

On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h 
b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void **value);
  int slapi_td_dn_init(void);
  int slapi_td_set_dn(char *dn);
  void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
  int slapi_td_set_plugin_locked(int *value);
  void slapi_td_get_plugin_locked(int **value);
  
--

Thanks - https://fedorahosted.org/389/ticket/47899
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Please review: Fix slapi_td_plugin_lock_init prototype

2014-09-15 Thread Rich Megginson

On 09/15/2014 07:28 PM, Petr Viktorin wrote:

On 09/15/2014 09:06 PM, Rich Megginson wrote:

On 09/15/2014 05:20 AM, Petr Viktorin wrote:

diff --git a/ldap/servers/slapd/slapi-plugin.h
b/ldap/servers/slapd/slapi-plugin.h
index f1ecfe8..268e465 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5582,7 +5582,7 @@ void slapi_td_get_val(int indexType, void 
**value);

  int slapi_td_dn_init(void);
  int slapi_td_set_dn(char *dn);
  void slapi_td_get_dn(char **dn);
-int slapi_td_plugin_lock_init();
+int slapi_td_plugin_lock_init(void);
  int slapi_td_set_plugin_locked(int *value);
  void slapi_td_get_plugin_locked(int **value);
--

Thanks - https://fedorahosted.org/389/ticket/47899


Thanks.

I read the GIT Rules page on the wiki [0], which mentions patches not 
associated with a ticket. 


That is correct.  I just wanted to make sure that this did not get lost.


If all patches do need a ticket, it would be good to update it.

[0] http://www.port389.org/docs/389ds/development/git-rules.html




--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] if you tag a release, please release a tarball too

2014-09-19 Thread Rich Megginson

On 09/19/2014 01:15 AM, Timo Aaltonen wrote:

On 19.09.2014 09:33, Timo Aaltonen wrote:

Hi

  1.3.3.3 has been tagged in git since a week ago, but there's no tarball for
it. Dunno if you have scripts for the release dance, but if you do,
please include the tarball build in them so it's not a manual thing to
remember every time ;)

I'll roll back to 1.3.3.2 in the meantime..

oh well, the 1.3.3.2 tarball doesn't match the tag:

the tarball doesn't have 55e317f2a5d8fc488e76f2b4155298a45d25 nor
0363fa49265c0c27d510064cea361eb400802548

and ldap/servers/slapd/ssl.c has a diff in the comments of the cipher
mess (from 58cb12a7b8cf9), and VERSION.sh in the tarball still has
'VERSION_PREREL=.a1' (which should be gone as of fefa20138b6a3a)

so I don't know where the tarball was built from; this isn't cool..


Yep, we screwed up, sorry about that.  I've just uploaded a new 1.3.3.3 
release, and the sources page with the new checksum is building.

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] No more spam from jenkins

2014-10-21 Thread Rich Megginson
I don't know what's wrong with jenkins.  I tried to fix it, but I cannot 
figure out what the problem is.  In the meantime, I have disabled it, so 
no more spam.  Sorry for the spam.

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] we now have epel7 branches for fedpkg . . .

2014-11-10 Thread Rich Megginson
. . . but it looks like we are gated by rhel7.1 - waiting for TLS 1.1 
fixes/packages to show up with rhel 7.1

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review #2: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts

2015-03-09 Thread Rich Megginson

I found a bug with my previous patch.

https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.2.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] [389-users] GUI console and Kerberos

2015-03-12 Thread Rich Megginson

On 03/11/2015 11:54 AM, Paul Robert Marino wrote:

Hey everyone,
I have a question. I know at least once in the past I set up the admin
console so it could utilize Kerberos passwords, based on a howto I
found once which, after I changed jobs, I could never find again.

Today I was looking for something else and I saw a mention on the site
about httpd needing to be compiled with HTTP auth support.
Well, I did a little digging and I found this file:
/etc/dirsrv/admin-serv/admserv.conf

In that file I found a lot of entries that look like this:

<LocationMatch /*/[tT]asks/[Cc]onfiguration/*>
   AuthUserFile /etc/dirsrv/admin-serv/admpw
   AuthType basic
   AuthName Admin Server
   Require valid-user
   AdminSDK on
   ADMCgiBinDir /usr/lib64/dirsrv/cgi-bin
   NESCompatEnv on
   Options +ExecCGI
   Order allow,deny
   Allow from all
</LocationMatch>


When I checked /etc/dirsrv/admin-serv/admpw, sure enough I found the
password hash for the admin user.

So my question is, before I waste time experimenting: could it possibly
be as simple as changing the auth type to Kerberos?
http://modauthkerb.sourceforge.net/configure.html


I don't know.  I don't think anyone has ever tried it.


keep in mind my Kerberos Servers do not use LDAP as the backend.


--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #48105 create tests - tests subdir - test server/client programs - test scripts

2015-03-06 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48105/0001-Ticket-48105-create-tests-tests-subdir-test-server-c.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #48106 create code doc with doxygen

2015-03-04 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48106/0001-Ticket-48106-create-code-doc-with-doxygen.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] No rule to make target dbmon.sh

2015-02-26 Thread Rich Megginson

On 02/26/2015 04:31 PM, William wrote:

On latest master I am getting:

make[1]: *** No rule to make target `ldap/admin/src/scripts/dbmon.sh',
needed by `all-am'.  Stop.

Did a git clean -f -x -d followed by autoreconf -i; ./configure
--with-openldap --prefix=/srv --enable-debug


Not sure, but you should not need to run autoreconf unless you are 
changing one of the autoconf files.  There is an autogen.sh script for 
this purpose instead of using autoreconf directly.




What am I missing?



--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Take 2: Please review: Ticket #48178 add config param to enable nunc-stans

2015-05-01 Thread Rich Megginson

Fixed problem with previous patch.

https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #48178 add config param to enable nunc-stans

2015-04-30 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48178/0001-Ticket-48178-add-config-param-to-enable-nunc-stans.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #48122 nunc-stans FD leak

2015-05-11 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48122/0001-Ticket-48122-nunc-stans-FD-leak.2.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: nunc-stans: Ticket #33 - coverity - 13178 Explicit null dereferenced

2015-05-12 Thread Rich Megginson

https://fedorahosted.org/nunc-stans/attachment/ticket/33/0001-Ticket-33-coverity-13178-Explicit-null-dereferenced.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: nunc-stans: Tickets 34-38 - coverity 13179-13183 - Dereference before NULL check

2015-05-12 Thread Rich Megginson

https://fedorahosted.org/nunc-stans/attachment/ticket/34/0001-Tickets-34-38-coverity-13179-13183-Dereference-befor.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] 389 performance testing tools

2015-05-15 Thread Rich Megginson
No readme yet, but here are the scripts: 
https://github.com/richm/389-perf-test

--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Review of plugin code

2015-08-07 Thread Rich Megginson

On 08/07/2015 05:18 PM, William Brown wrote:

On Thu, 2015-08-06 at 14:25 -0700, Noriko Hosoi wrote:

Hi William,

Very interesting plug-in!

Thanks. As a plugin, its value is quite useless due to the nsDS5ReplicaType
flags. But it's a nice simple exercise to get one's head around how the plugin
architecture works from scratch. It's one thing to patch a plugin, compared to
writing one from nothing.


Regarding the betxn plug-in, it is for putting the entire operation -- the
primary update plus the associated updates by the enabled plug-ins -- in one
transaction.  By doing so, all of the updates are committed to the DB if
and only if all of the updates are successful. Otherwise, all of them
are rolled back.  That guarantees there will be no inconsistency among
entries.

Okay, so if I can be a pain, how do betxn plug-ins handle reads? Do reads come from
within the transaction?


Yes.


Or is there a way to read from the database outside the
transaction?

Say for example:

begin
add some object Y
read Y
commit

Does read Y see the object within the transaction?


Yes.


Is there a way to make the
search happen so that it occurs outside the transaction, i.e. it doesn't see Y?


Not a nested search operation.  A nested search operation will always 
use the parent/context transaction.






In that sense, your read-only plug-in is not a good example for betxn
since it does not do any updates. :)  Considering the purpose of the
read-only plug-in, invoking it at the pre-op timing (before the
transaction) would be the best.

Very true! I kind of knew what betxn did, but I wanted to confirm more
completely in my mind. So I think what my read-only plugin does at the moment
works quite nicely then outside of betxn.

Is there a piece of documentation (perhaps the plugin guide) that lists the
order in which these operations are called?


Not sure, but in general it is:

incoming operation from client
front end processing
preoperation
call backend
bepreoperation
start transaction
betxnpreoperation
do operation in the database
betxnpostoperation
end transaction
bepostoperation
return from backend
send result to client
postoperation




Since MEP requires the updates on the DB, it's supposed to be called in
betxn.  That way, what was done in the MEP plug-in is committed or
rolled back together with the primary updates.

Makes sense.


The toughest part is the deadlock prevention.  At the start of the transaction,
it holds a DB lock.  And most plug-ins maintain their own mutex to protect
their resources.  That can easily cause a deadlock situation, especially when
multiple plug-ins are enabled (which is common :). So, please be careful
not to acquire/free locks in the wrong order...

Of course. This is always an issue in multi-threaded code and anything with
locking. Stress tests are probably good to find these deadlocks, no?


Yes.  There is some code in dblayer.c that will stress the transaction 
code by locking/unlocking many db pages concurrently with external 
operations.

https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n210
https://git.fedorahosted.org/cgit/389/ds.git/tree/ldap/servers/slapd/back-ldbm/dblayer.c#n4131




About your commented-out code in read_only.c, I guess you copied that
part from mep.c and are wondering what it is for?
There are various types of plug-ins.

 $ egrep nsslapd-pluginType dse.ldif | sort | uniq
 nsslapd-pluginType: accesscontrol
 nsslapd-pluginType: bepreoperation
 nsslapd-pluginType: betxnpostoperation
 nsslapd-pluginType: betxnpreoperation
 nsslapd-pluginType: database
 nsslapd-pluginType: extendedop
 nsslapd-pluginType: internalpreoperation
 nsslapd-pluginType: matchingRule
 nsslapd-pluginType: object
 nsslapd-pluginType: preoperation
 nsslapd-pluginType: pwdstoragescheme
 nsslapd-pluginType: reverpwdstoragescheme
 nsslapd-pluginType: syntax

The reason why slapi_register_plugin and slapi_register_plugin_ext were
implemented was:

 /*
   * Allows a plugin to register a plugin.
   * This was added so that 'object' plugins could register all
   * the plugin interfaces that it supports.
   */

On the other hand, MEP has this type:

 nsslapd-pluginType: betxnpreoperation

The type is not object, but the MEP plug-in is implemented as having
that type.  Originally, it might have been object...  Then we
introduced the support for betxn.  To make the transition to betxn
smooth, we added code to check whether betxn is in the type.  If
betxn is present, as in betxnpreoperation, the plug-in is called within
the transaction; otherwise it is called outside of the transaction.
With that switch in the configuration, we could go back to the original
behavior without rebuilding the plug-in.

Since we are not going back to the pre-betxn era, the switch may not be
too important.  But keeping it would be a good idea for consistency with
the other plug-ins.

Does this answer your question?  Please feel free to let us know if it
does not.

That answers some of 

[389-devel] Please review: Ticket #48224 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files

2015-07-13 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-logconv.pl-should-handle-.tar.xz-.txz-..patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Please review: Ticket #48224 - redux 2 - logconv.pl should handle *.tar.xz, *.txz, *.xz log files

2015-07-20 Thread Rich Megginson

https://fedorahosted.org/389/attachment/ticket/48224/0001-Ticket-48224-redux-2-logconv.pl-should-handle-.tar.x.patch
--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Please comment: [389 Project] #48285: The dirsrv user/group should be created in rpm %pre, and ideally with fixed uid/gid

2015-10-21 Thread Rich Megginson

On 10/21/2015 12:20 PM, Noriko Hosoi wrote:
Thanks to William for reviewing the patch.  I'm going to push it. 
But before doing so, I have a question regarding the autogen files.


The proposed patch requires rerunning autogen.sh and pushing the generated 
files to git.  My current env has automake 1.15 and it generates 
large diffs, as attached to this email.

-# Makefile.in generated by automake 1.13.4 from Makefile.am.
+# Makefile.in generated by automake 1.15 from Makefile.am.

Is it okay to push the attached patch 
0002-Ticket-48285-The-dirsrv-user-group-should-be-created.patch to git, 
or do we prefer to keep the diff minimal by running autogen on a host 
having the same version of automake (1.13.4)?


We should confirm that the generated configure script runs on el7.



Thanks,
--noriko

On 10/20/2015 05:48 PM, Noriko Hosoi wrote:

https://fedorahosted.org/389/ticket/48285

https://fedorahosted.org/389/attachment/ticket/48285/0001-Ticket-48285-The-dirsrv-user-group-should-be-created.patch 


git patch file (master) -- revised

If these users and groups exist on the system:

/etc/passwd:xdirsrv:x:389:389:389-ds-base:/usr/share/dirsrv:/sbin/nologin 

/etc/passwd:dirsrvy:x:390:390:389-ds-base:/usr/share/dirsrv:/sbin/nologin 


/etc/group:xdirsrv:x:389:
/etc/group:dirsrvy:x:390:

This pair is supposed to be generated:

/etc/passwd:dirsrv:x:391:391:389-ds-base:/usr/share/dirsrv:/sbin/nologin
/etc/group:dirsrv:x:391:








--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] Please review: [389 Project] #48257: Fix coverity issues - 08/24/2015

2015-11-06 Thread Rich Megginson

On 11/05/2015 05:09 PM, Noriko Hosoi wrote:

https://fedorahosted.org/389/ticket/48257

https://fedorahosted.org/389/attachment/ticket/48257/0001-Ticket-48257-Fix-coverity-issues-08-24-2015.patch 




Once this ticket is closed, is it okay to respin nunc_stans which is 
going to be version 0.1.6?


Yes.  After every "batch" of commits to nunc-stans the version should be 
bumped, where "batch" can be a single commit if no other commits are 
planned for the immediate future.




Current: rpm/389-ds-base.spec.in:%global nunc_stans_ver 0.1.5



--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] [lib389] Deref control advice needed

2015-08-26 Thread Rich Megginson

On 08/26/2015 03:28 AM, William Brown wrote:

In relation to ticket 47757, I have started work on a deref control for
Noriko.
The idea is to get it working in lib389, then get it upstreamed into pyldap.

At this point it's all done, except that the actual request control doesn't
appear to work. Could one of the lib389 / ldap python experts cast their eye
over this and let me know where I've gone wrong?

I have improved this, but am having issues with the asn1spec for ber decoding.

I have attached the updated patch, but specifically the issue is in _controls.py

I would appreciate if anyone could take a look at this, and let me know if there
is something I have missed.


Not sure, but here is some code I did without using pyasn:
https://github.com/richm/scripts/blob/master/derefctrl.py
This is quite old by now, and is probably bit rotted with respect to 
python-ldap and python3.





  controlValue ::= SEQUENCE OF derefRes DerefRes

  DerefRes ::= SEQUENCE {
      derefAttr       AttributeDescription,
      derefVal        LDAPDN,
      attrVals        [0] PartialAttributeList OPTIONAL }

  PartialAttributeList ::= SEQUENCE OF
     partialAttribute PartialAttribute


class DerefRes(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('derefAttr', AttributeDescription()),
        namedtype.NamedType('derefVal', LDAPDN()),
        namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
    )

class DerefResultControlValue(univ.SequenceOf):
    componentType = DerefRes()





    def decodeControlValue(self,encodedControlValue):
        self.entry = {}
        #decodedValue,_ = decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
        # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16),
        # Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec:
        # {TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
        decodedValue,_ = decoder.decode(encodedControlValue)
        print(decodedValue.prettyPrint())
        # Pretty print yields
        #Sequence:                                   <-- Sequence of
        # no-name=Sequence:                          <-- derefRes
        #  no-name=uniqueMember                      <-- derefAttr
        #  no-name=uid=test,dc=example,dc=com        <-- derefVal
        #  no-name=Sequence:                         <-- attrVals
        #   no-name=uid
        #   no-name=Set:
        #    no-name=test
        # For now, while asn1spec is sad, we'll just rely on it being well formed
        # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
        for result in decodedValue:
            derefAttr, derefVal, _ = result
            self.entry[str(derefAttr)] = str(derefVal)
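One thing worth trying for the asn1Spec error above (a sketch, not a confirmed fix): the decoder complains that the context tag [0] carried by attrVals is not declared in the spec, so declaring that tag on the OPTIONAL component may let the spec-driven decode succeed. AttributeDescription, LDAPDN and PartialAttributeList are assumed to be defined as in the attached _controls.py patch.

from pyasn1.type import namedtype, tag, univ

class DerefRes(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('derefAttr', AttributeDescription()),
        namedtype.NamedType('derefVal', LDAPDN()),
        # Carry the context tag [0] from the ASN.1 definition on the component so the
        # decoder can match the tagged attrVals element; if implicit tagging still
        # fails, explicitTag is the other variant to try.
        namedtype.OptionalNamedType(
            'attrVals',
            PartialAttributeList().subtype(
                implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatConstructed, 0))),
    )

class DerefResultControlValue(univ.SequenceOf):
    componentType = DerefRes()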





--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

Re: [389-devel] [lib389] Deref control advice needed

2015-09-02 Thread Rich Megginson

On 09/02/2015 10:35 AM, thierry bordaz wrote:

On 08/27/2015 02:31 AM, Rich Megginson wrote:

On 08/26/2015 03:28 AM, William Brown wrote:

In relation to ticket 47757, I have started work on a deref control for
Noriko.
The idea is to get it working in lib389, then get it upstreamed into pyldap.

At this point it's all done, except that the actual request control doesn't
appear to work. Could one of the lib389 / ldap python experts cast their eye
over this and let me know where I've gone wrong?

I have improved this, but am having issues with the asn1spec for ber decoding.

I have attached the updated patch, but specifically the issue is in _controls.py

I would appreciate if anyone could take a look at this, and let me know if there
is something I have missed.


Not sure, but here is some code I did without using pyasn:
https://github.com/richm/scripts/blob/master/derefctrl.py
This is quite old by now, and is probably bit rotted with respect to 
python-ldap and python3.




Old!!  But it worked like a charm for me. I just had to make this modification 
because of a change in python-ldap, IIRC.


OK.  But I would rather use William's version which is based on pyasn1 - 
it hurts my brain to hand code BER . . .



diff derefctrl.py /tmp/derefctrl_orig.py
0a1
>
151,152c152
< self.criticality,self.derefspeclist,self.entry = criticality,derefspeclist or [],None
< #LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
---
> LDAPControl.__init__(self,DerefCtrl.controlType,criticality,derefspeclist)
154c154
< def encodeControlValue(self):
---
> def encodeControlValue(self,value):
156c156
< for (derefattr,attrs) in self.derefspeclist:
---
> for (derefattr,attrs) in value:



"""
  controlValue ::= SEQUENCE OF derefRes DerefRes

  DerefRes ::= SEQUENCE {
  derefAttr   AttributeDescription,
  derefValLDAPDN,
  attrVals[0] PartialAttributeList OPTIONAL }

  PartialAttributeList ::= SEQUENCE OF
 partialAttribute PartialAttribute
"""

class DerefRes(univ.Sequence):
 componentType = namedtype.NamedTypes(
 namedtype.NamedType('derefAttr', AttributeDescription()),
 namedtype.NamedType('derefVal', LDAPDN()),
 namedtype.OptionalNamedType('attrVals', PartialAttributeList()),
 )

class DerefResultControlValue(univ.SequenceOf):
 componentType = DerefRes()





    def decodeControlValue(self,encodedControlValue):
        self.entry = {}
        #decodedValue,_ = decoder.decode(encodedControlValue,asn1Spec=DerefResultControlValue())
        # Gets the error: TagSet(Tag(tagClass=0, tagFormat=32, tagId=16),
        # Tag(tagClass=128, tagFormat=32, tagId=0)) not in asn1Spec:
        # {TagSet(Tag(tagClass=0, tagFormat=32, tagId=16)): PartialAttributeList()}/{}
        decodedValue,_ = decoder.decode(encodedControlValue)
        print(decodedValue.prettyPrint())
        # Pretty print yields
        #Sequence:                                   <-- Sequence of
        # no-name=Sequence:                          <-- derefRes
        #  no-name=uniqueMember                      <-- derefAttr
        #  no-name=uid=test,dc=example,dc=com        <-- derefVal
        #  no-name=Sequence:                         <-- attrVals
        #   no-name=uid
        #   no-name=Set:
        #    no-name=test
        # For now, while asn1spec is sad, we'll just rely on it being well formed
        # However, this isn't good, as without the asn1spec, we seem to actually be dropping values
        for result in decodedValue:
            derefAttr, derefVal, _ = result
            self.entry[str(derefAttr)] = str(derefVal)











--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-devel

[389-devel] Re: Logging performance improvement

2016-06-30 Thread Rich Megginson

On 06/30/2016 08:14 PM, William Brown wrote:

On Thu, 2016-06-30 at 20:01 -0600, Rich Megginson wrote:

On 06/30/2016 07:52 PM, William Brown wrote:

Hi,

I've been thinking about this for a while, so I decided to dump my
thoughts to a document. I think I won't get to implementing this for a
while, but it would really help our server performance.

http://www.port389.org/docs/389ds/design/logging-performance-improvement.html

Looks good.  Can we quantify the current log overhead?

Sure, I could probably sit down and work out a way to benchmark
this.

But without the alternative being written, hard to say. I could always
patch out logging and drop the lock in a hacked build so we can show
what "without logging contention" looks like?


That's only one part of it - you'd have to figure out some way to get 
rid of the overhead of the formatting and flushing in the operation 
threads too.


I suppose you could just write it and see what happens.
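One rough way to get a first-order number (a sketch, not from the thread; the URI, credentials, suffix, filter and iteration count are placeholders) is to time a burst of searches with the access log enabled and then disabled via nsslapd-accesslog-logging-enabled:

import time
import ldap

URI, BINDDN, BINDPW = "ldap://localhost:389", "cn=Directory Manager", "password"

def set_access_logging(enabled):
    conn = ldap.initialize(URI)
    conn.simple_bind_s(BINDDN, BINDPW)
    value = b"on" if enabled else b"off"
    conn.modify_s("cn=config",
                  [(ldap.MOD_REPLACE, "nsslapd-accesslog-logging-enabled", [value])])
    conn.unbind_s()

def searches_per_second(iterations=5000):
    conn = ldap.initialize(URI)
    conn.simple_bind_s(BINDDN, BINDPW)
    start = time.time()
    for _ in range(iterations):
        conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=demo)")
    elapsed = time.time() - start
    conn.unbind_s()
    return iterations / elapsed

for enabled in (True, False):
    set_access_logging(enabled)
    print("access log %s: %.0f searches/sec"
          % ("on" if enabled else "off", searches_per_second()))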













--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org


[389-devel] Re: Logging performance improvement

2016-06-30 Thread Rich Megginson

On 06/30/2016 07:52 PM, William Brown wrote:

Hi,

I've been thinking about this for a while, so I decided to dump my
thoughts to a document. I think I won't get to implementing this for a
while, but it would really help our server performance.

http://www.port389.org/docs/389ds/design/logging-performance-improvement.html


Looks good.  Can we quantify the current log overhead?







--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org


[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-21 Thread Rich Megginson

On 08/21/2016 07:56 PM, William Brown wrote:

On Sun, 2016-08-21 at 19:44 -0600, Rich Megginson wrote:

On 08/21/2016 05:28 PM, William Brown wrote:

On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote:

Hi William,

On 08/19/2016 02:22 AM, William Brown wrote:

On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951

https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch



As a follow up, here is a design / example document

http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

thanks for this work, it is looking great and is something we were
really missing.

But of course I have some comments  (and I know I am late).
- The naming dsadm and dsconf, and the split of tasks between them, is
the same as in Sun/Oracle DSEE, and even if there is probably no legal
restriction on using them, I'd prefer to have our own names for our own tools.

Fair enough. There is nothing saying these names are stuck in stone
right now so if we have better ideas we can change it.

I will however say that any command name should not start with numbers
(i.e. 389), and that it should generally be fast to type, easy to remember,
and less than 8 chars long if possible.

What about "adm389" and "conf389"?

Yeah, those could work.


- I'm not convinced of splitting the tasks into two utilities, you will
have different actions and options for the different
resources/subcommands anyway, so you could have one for all.

The issue is around connection to the server, and whether it needs to be
made or not. The command in the code is:

dsadm:
command:
action

dsconf:
connect to DS
command
action

So dsconf takes advantage of all the commands being remote, so it shares
common connection code. If we were to make the tools "one" we would need
to make a decorator or something to repeat this, and there are some
other issues there with the way that the argparse library works.
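For illustration, a minimal sketch of the structure described above (not from the actual patches; the option names and the example subcommand are made up): dsconf-style subcommands share one connection step before dispatch, whereas a dsadm-style tool would dispatch straight to a local action.

import argparse
import ldap

def connect(args):
    # Shared connection step used by every dsconf-style subcommand.
    conn = ldap.initialize(args.instance)
    conn.simple_bind_s(args.binddn, args.bindpw)
    return conn

def backend_list(conn, args):
    # Example remote action: list backend entries under the ldbm plugin config.
    for dn, _attrs in conn.search_s("cn=ldbm database,cn=plugins,cn=config",
                                    ldap.SCOPE_ONELEVEL):
        print(dn)

def build_parser():
    parser = argparse.ArgumentParser(prog="dsconf")
    parser.add_argument("instance", help="LDAP URI of the instance to manage")
    parser.add_argument("-D", dest="binddn", default="cn=Directory Manager")
    parser.add_argument("-w", dest="bindpw")
    sub = parser.add_subparsers(dest="command")
    sub.required = True
    be = sub.add_parser("backend-list")
    be.set_defaults(func=backend_list)
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    args.func(connect(args), args)   # connect once, then dispatch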

I think this is an arbitrary distinction - needing a connection or not -
but other projects use similar "admin client" vs. "more general use
client" e.g. OpenShift has "oadm" vs. "oc".  If this is a pattern that
admins are used to, we just need to be consistent in applying that pattern.




Also, I think, the goal should be to make all actions available local
and remote, the start/stop/install should be possible remotely via rsh
or another mechanism as long as the utilities are available on the
target machine, so I propose one dsmanage or 389manage

dsmanage is an okay name, but remote start/stop is not an easy task.

At that point, you are looking at needing to ssh, manage the acceptance
of keys, you have to know the remote server ds prefix, you need to ssh
as root (bad) or manage sudo (annoying).

We already have the ability to remote stop/start/restart the server,
with admin server at least.

Not with systemd we don't. systemd + selinux has broken that for a stack
of our products, and at the moment, we are publishing release notes that
these don't work in certain cases. And rightly so, ds should not have
the rights to touch system services in the way we were doing, it's a
huge security risk.

To make it work we need to do dbus and polkit magic, and the amount of
motivation I have to give about this problem is low, especially when
tools like ansible do it for us, much better.


You need to potentially manage
selinux, systemd etc. It gets really complicated, really fast, and at
that point I'm going to turn around and say "no, just use ansible if you
want to remote manage things".

Let's keep these tools as simple as we can, and let things like ansible,
which is designed for remote tasks, do their job.

Right, but it will take a lot of work to determine what should be done
in ansible vs. specialized tool.

Not really. An admin will know "okay, if I want to start stop services I
write action: service state=enabled dirsrv@instance". They will also
know "well I want to reconfigure plugins on DS, I use conf389/dsconf".

Anything that is yum, systemd command, etc. is ansible. Anything about
installing an instance or 389 specific we do.


I think that is an arbitrary line of demarcation.  ansible can be used 
for a lot more than that.





A better strategy is that we can potentially write a lib389 ansible
module in the future allowing us to playbook tasks for DS.

I wo

[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-21 Thread Rich Megginson

On 08/21/2016 05:28 PM, William Brown wrote:

On Fri, 2016-08-19 at 11:21 +0200, Ludwig Krispenz wrote:

Hi William,

On 08/19/2016 02:22 AM, William Brown wrote:

On Wed, 2016-08-17 at 14:53 +1000, William Brown wrote:

https://fedorahosted.org/389/ticket/48951

https://fedorahosted.org/389/attachment/ticket/48951/0001-Ticket-48951-dsadm-and-dsconf-base-files.patch
https://fedorahosted.org/389/attachment/ticket/48951/0002-Ticket-48951-dsadm-and-dsconf-refactor-installer-cod.patch
https://fedorahosted.org/389/attachment/ticket/48951/0003-Ticket-48951-dsadm-and-dsconf-Installer-options-mana.patch
https://fedorahosted.org/389/attachment/ticket/48951/0004-Ticket-48951-dsadm-and-dsconf-Ability-to-unit-test-t.patch
https://fedorahosted.org/389/attachment/ticket/48951/0005-Ticket-48951-dsadm-and-dsconf-Backend-management-and.patch



As a follow up, here is a design / example document

http://www.port389.org/docs/389ds/design/dsadm-dsconf.html

thanks for this work, it is looking great and is something we were
really missing.

But of course I have some comments  (and I know I am late).
- The naming dsadm and dsconf, and the split of tasks between them, is
the same as in Sun/Oracle DSEE, and even if there is probably no legal
restriction on using them, I'd prefer to have our own names for our own tools.

Fair enough. There is nothing saying these names are stuck in stone
right now so if we have better ideas we can change it.

I will however say that any command name should not start with numbers
(i.e. 389), and that it should generally be fast to type, easy to remember,
and less than 8 chars long if possible.


What about "adm389" and "conf389"?




- I'm not convinced of splitting the tasks into two utilities, you will
have different actions and options for the different
resources/subcommands anyway, so you could have one for all.

The issue is around connection to the server, and whether it needs to be
made or not. The command in the code is:

dsadm:
command:
action

dsconf:
connect to DS
command
action

So dsconf takes advantage of all the commands being remote, so it shares
common connection code. If we were to make the tools "one" we would need
to make a decorator or something to repeat this, and there are some
other issues there with the way that the argparse library works.


I think this is an arbitrary distinction - needing a connection or not - 
but other projects use similar "admin client" vs. "more general use 
client" e.g. OpenShift has "oadm" vs. "oc".  If this is a pattern that 
admins are used to, we just need to be consistent in applying that pattern.






Also, I think, the goal should be to make all actions available local
and remote, the start/stop/install should be possible remotely via rsh
or another mechanism as long as the utilities are available on the
target machine, so I propose one dsmanage or 389manage

dsmanage is an okay name, but remote start/stop is not an easy task.

At that point, you are looking at needing to ssh, manage the acceptance
of keys, you have to know the remote server ds prefix, you need to ssh
as root (bad) or manage sudo (annoying).


We already have the ability to remote stop/start/restart the server, 
with admin server at least.



You need to potentially manage
selinux, systemd etc. It gets really complicated, really fast, and at
that point I'm going to turn around and say "no, just use ansible if you
want to remote manage things".

Let's keep these tools as simple as we can, and let things like ansible,
which is designed for remote tasks, do their job.


Right, but it will take a lot of work to determine what should be done 
in ansible vs. specialized tool.




A better strategy is that we can potentially write a lib389 ansible
module in the future allowing us to playbook tasks for DS.


I would like to see ansible playbooks for 389.  Ansible is python, so we 
can leverage python-ldap/lib389 instead of having to fork/exec 
ldapsearch/ldapmodify.
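As a purely hypothetical sketch of that idea (the module name, options and behaviour are made up, not an existing module), an Ansible module could call python-ldap directly and report idempotent changes:

from ansible.module_utils.basic import AnsibleModule
import ldap

def main():
    module = AnsibleModule(argument_spec=dict(
        uri=dict(required=True),
        binddn=dict(default="cn=Directory Manager"),
        bindpw=dict(required=True, no_log=True),
        dn=dict(required=True),
        attr=dict(required=True),
        value=dict(required=True),
    ))
    p = module.params
    conn = ldap.initialize(p["uri"])
    conn.simple_bind_s(p["binddn"], p["bindpw"])
    wanted = [p["value"].encode("utf-8")]
    current = conn.search_s(p["dn"], ldap.SCOPE_BASE, attrlist=[p["attr"]])[0][1]
    if current.get(p["attr"]) == wanted:
        module.exit_json(changed=False)   # already in the desired state
    conn.modify_s(p["dn"], [(ldap.MOD_REPLACE, p["attr"], wanted)])
    module.exit_json(changed=True)

if __name__ == "__main__":
    main()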





This is why I kept them separate, because I wanted to have simple,
isolated domains in the commands for actions, that let us know clearly
what we are doing. It's still an open discussion though.


If this is a common pattern that admins are used to, then we should 
consider it.





- could this be made interactive? Run the command, providing some or
no options, and then have a shell-like env:
dsmanage
  >>> help
.. connect
.. create-x
  >>> connect -h 
... replica-enable 

In the current form, no. However, the way I have written it, we should
be able to pretty easily replace the command line framework on front and
drop in something that does allow interactive commands like this. I was
thinking:

https://github.com/Datera/configshell

This is already in EL, as it's part of the targetcli application.


Think MVC - just make sure you can change the View.  I tried to do this 
with setup-ds.pl - make it possible to "plug in" a different "UI".




[389-devel] Re: Please review: 48951 dsconf and dsadm foundations

2016-08-22 Thread Rich Megginson

On 08/22/2016 05:23 PM, William Brown wrote:

On Sun, 2016-08-21 at 21:33 -0600, Rich Megginson wrote:

On 08/21/2016 09:02 PM, William Brown wrote:

Anything that is yum, systemd command, etc. is ansible. Anything about
installing an instance or 389 specific we do.

I think that is an arbitrary line of demarcation.  ansible can be used
for a lot more than that.

Yes it can. But I don't have infinite time, and neither does the team.
Let's get something to work first, then we can grow what ansible is able
to integrate with. Let's design our code to be able to integrate with
ansible, but draw some basic lines on things we shouldn't duplicate and
then remove in the future. This is why I want to draw the line that
start/stop of the server, and certain remote admin tasks aren't part of
the scope here.



Saying this, in a way I'm not a fan of this also. Because we are doing
behind the scenes magic, rather than simple, implicit tasks. What
happens if someone crons this? What happens? We lose the intent of the
admin in some cases.

I think the principle should be "make it simple to do the easy things -
make it possible to do the difficult things".  In this case, if I am an
admin running a cli, I think it should "do the right thing".  If I'm
setting up a cron job, I should be able to force it to use offline mode
or whatever - it is easy to keep track of extra cli arguments if I'm
automating something vs. running interactively on the command line.

I agree with that principle, and is actually one of the guides I am
following in my design.

I think that here, we have a differing view of simple. My interpretation
is:

My idea of simple is "each task should do one specific thing, and do it
well". you have db2ldif and db2ldif_task. Each one just does that one
simple thing. The intent of the admin is clear at the moment they hit
enter.

Not if they don't know what is meant by "_task".  It might as well be
".pl" to most admins.

Most of the admins I've encountered say "I just want to get an ldif dump
from the server - I have no idea what is the difference between db2ldif
and db2ldif.pl."  I think they will say the same thing about "db2ldif"
vs. "db2ldif_task".

I was thinking about this this morning, and I think I have come to
agree with you. Let's make this "you want to get from A to B, and we work
out how to get there". Similar to ansible, which probably lends well to
us using ansible in the future for things.


Your idea of simple is "intuitive simple" for the admin, where
behaviours are inferred from running application state. The admin says
"how I want you to act" and the computer resolves the path to get there.

And - if the admin knows the tool, because the admin has learned by
experience, progressive disclosure, or RTFM, the admin can explicitly
specify the exact modes of operation using command line flags.  Using
the tool simply is easy, using the tool in an advanced fashion is possible.

I think the intent of the tool should be clear without huge amounts of
experience and rtfm. We have a huge usability and barrier to entry
problem in DS, and if we don't make changes to lower that, we will
become irrelevant. We need to make it easier to use, while retaining
every piece of advanced functionality that our experienced users
expect :) (I think we agree on this point though)


One day we will need to make a decision on which way to go with these
tools, and which path we follow, but again, for now it's open. Of
course, I am going to argue for the former, because that is the
construction of my experience. Reality is that I've seen a lot of
production systems get messed up because what seemed intuitive to the
programmer, was not the intent of the admin. We are basically having the
"boeing vs airbus" debate. Boeing has autopilots and computer
assistance, but believes the pilot is always right and will give up
control even if the pilot is going to do something the computer
disagrees with. Airbus assumes the computer is always right, and will
actively take control away from the pilot if they are going to do
something the computer disagrees with. It's about what's right: The
program? Or the human intent? And that question has never been answered.

I think the discussion doesn't fall exactly on the "boeing vs airbus"
axis, but perhaps isn't entirely orthogonal either.

As said above, I think maybe we should go down the "programmer is right"
idea, but with the ability for the sysadmin to take over if needed.


+1 - I think you've got the right idea.









--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org


[EPEL-devel] Re: nodejs update

2016-08-25 Thread Rich Megginson

On 08/11/2016 05:43 AM, Stephen Gallagher wrote:

On 08/11/2016 05:16 AM, Zuzana Svetlikova wrote:

Hi!

As some of you may know, nodejs package that is present in
EPEL is pretty outdated. The current v0.10 that we have will
go EOL in October and npm (package manager) is already not
maintained.

Currently, upstream's plan is to have two versions of Long
Term Support (LTS) at once, one in active development and one
in maintenance mode.
Currently active is v4, which is switching to maintenance in
April and v6 which is switching to LTS in October.
This is also reason why we would like to skip v4, although
both will get security updates. Nodejs v6 also comes with
newer npm and v8 (which might best be bundled, as it is in
Fedora and Software Collections) (v8 might concern ruby and
database maintainers, but old v8 package still remains in
the repo).

There was also an idea to have both LTS versions in repo,
but we're not quite sure, how we'd do it and if it's even a
good idea.

Also, another question is whether it is worth updating every year
to the new LTS or updating only after the current one goes EOL.
According to the guidelines, I'd say it's the latter, but that's
not exactly how node development works, and some feedback from
users on this would be nice, because I have none.


tl;dr Need to update nodejs, but can't decide if v4 or v6,
v4: will update sooner, shorter support (2018-04-01)
v6: longer support (2019-04-01), *might* break more things,
 won't be in stable sooner than mid-October if everything
 goes well

FYI, I think this tl;dr missed explaining why v6 won't be in stable until
mid-October. What Zuzana and I discussed on another list is that the Node.js v6
schedule has it going into LTS mode on the same day that 0.10.x reaches EOL.
However, v6 is already out and available. The major thing that changes at that
point is just that from then on, they commit to adding no more major features
(as I understand it). This is the best moment for us to switch over to it.

However, in the meantime we will probably want to be carrying 6.x in
updates-testing for at least a month prior to declaring it stable (with
autokarma disabled) with wide announcements about the impending upgrade. This
will be safe to do since Node.js 6.x has already reached a point where no
backwards-incompatible changes are allowed in, so we can start the migration
process early.


How does EPEL deal with the fact that nodejs won't work with openssl 
1.0.1?  For CentOS we have a patch that allows nodejs 4.x to build with 
openssl 1.0.1 in EL7.  Are you using a similar patch?  Do you know if 
the same patch will work with nodejs 6.x?






Also need feedback from users.


I hope I didn't forget anything important.

Regards

Zuzka
Node.js SIG








___
epel-devel mailing list
epel-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/epel-devel@lists.fedoraproject.org


[EPEL-devel] Re: nodejs update

2016-09-08 Thread Rich Megginson

On 09/08/2016 11:27 AM, Stephen Gallagher wrote:

On 08/22/2016 11:23 AM, Stephen Gallagher wrote:

On 08/11/2016 07:43 AM, Stephen Gallagher wrote:

On 08/11/2016 05:16 AM, Zuzana Svetlikova wrote:

Hi!

As some of you may know, nodejs package that is present in
EPEL is pretty outdated. The current v0.10 that we have will
go EOL in October and npm (package manager) is already not
maintained.

Currently, upstream's plan is to have two versions of Long
Term Support (LTS) at once, one in active development and one
in maintenance mode.
Currently active is v4, which is switching to maintenance in
April and v6 which is switching to LTS in October.
This is also reason why we would like to skip v4, although
both will get security updates. Nodejs v6 also comes with
newer npm and v8 (which might best be bundled, as it is in
Fedora and Software Collections) (v8 might concern ruby and
database maintainers, but old v8 package still remains in
the repo).

There was also an idea to have both LTS versions in repo,
but we're not quite sure, how we'd do it and if it's even a
good idea.

Also, another question is whether it is worth updating every year
to the new LTS or updating only after the current one goes EOL.
According to the guidelines, I'd say it's the latter, but that's
not exactly how node development works, and some feedback from
users on this would be nice, because I have none.


tl;dr Need to update nodejs, but can't decide if v4 or v6,
v4: will update sooner, shorter support (2018-04-01)
v6: longer support (2019-04-01), *might* break more things,
 won't be in stable sooner than mid-October if everything
 goes well

FYI, I think this tl;dr missed explaining why v6 won't be in stable until
mid-October. What Zuzana and I discussed on another list is that the Node.js v6
schedule has it going into LTS mode on the same day that 0.10.x reaches EOL.
However, v6 is already out and available. The major thing that changes at that
point is just that from then on, they commit to adding no more major features
(as I understand it). This is the best moment for us to switch over to it.

However, in the meantime we will probably want to be carrying 6.x in
updates-testing for at least a month prior to declaring it stable (with
autokarma disabled) with wide announcements about the impending upgrade. This
will be safe to do since Node.js 6.x has already reached a point where no
backwards-incompatible changes are allowed in, so we can start the migration
process early.


OK, as we stated before, we really need to get Node.js 6.x into the
updates-testing repository soon. We mentioned that we wanted it to sit there for
at least a month before we cut over, and "at least a month" means "by next week"
since the cut over is planned for 2016-10-01.

I'm putting together a COPR right now as a first pass at this upgrade:

https://copr.fedorainfracloud.org/coprs/g/nodejs-sig/nodejs-epel/

I've run into the following blocker issues:

* We cannot jump to 6.x in EPEL 6 easily at this time, because upstream strictly
requires GCC 4.8 or later and we only have 4.4 in EPEL 6. It might be possible
to resolve this with SCLs, but I am no expert there. Zuzana?

* Node.js 4.x and 6.x both *strictly* require functionality from OpenSSL 1.0.2
and cannot run (or indeed build) against OpenSSL 1.0.1. Currently, both EPEL 6
and EPEL 7 have 1.0.1 in their buildroots. I am not aware of any solution (SCL
or otherwise) for linking EPEL to a newer version of OpenSSL.

The OpenSSL 1.0.2 problem is a significant one; we cannot build against the
bundled copy of OpenSSL because it includes patented algorithms that are not
acceptable for inclusion in Fedora. We also cannot trivially backport Fedora's
OpenSSL 1.0.2 packages because EPEL forbids upgrading packages provided by the
base RHEL/CentOS repositories.


Right now, the only thing I can think of would be for someone to build a
parallel-installable OpenSSL 1.0.2 package for EPEL 6 and EPEL 7 (similar to the
openssl101e package available for EPEL 5) and patch our specfile to be able to
work with that instead.

This is a task I'm not anxious to embark upon personally; there is too much
overhead in maintaining a fork of OpenSSL to make me comfortable.

How shall we proceed?



OK, I spent far too much of today attempting to solve this problem. I got fairly
far into it, but at this point I have run out of time to work on it for the near
future.

What I have been trying to do:

I decided that the most expedient approach for EPEL 7 right now would be to
attempt to build OpenSSL statically into Node.js. We cannot do that with the
copy that upstream carries due to certain patents, so I decided to see if I
could script up something that would pull the source of the OpenSSL package from
Fedora Rawhide, drop it into the Node.js source tree and allow us to build it.

This sounds simple in theory, but it turns out that it's going to require a fair
bit of mucking about with the gyp build that Node.js uses. I've made some
headway on it, 

[389-devel] Re: Sign compare checking

2016-08-29 Thread Rich Megginson

On 08/28/2016 11:13 PM, William Brown wrote:

So either, this is a bug in the way openldap uses the ber_len_t type, we
have a mistake in our logic, or something else hokey is going on.

I would like to update this to:

if ( (tag != LBER_END_OF_SEQORSET) && (len == 0) && (*fstr != NULL) )

Or even:

if ( (tag != LBER_END_OF_SEQORSET) && (*fstr != NULL) )

What do you think of this assessment given the ber_len_t type?

Looks like it's intentional by the openldap team. There are some other
areas for this problem. Specifically:

int ber_printf(BerElement *ber, const char *fmt, ...);

lber.h:79:#define LBER_ERROR((ber_tag_t) -1)

We check if (ber_printf(...) != LBER_ERROR)

Of course, we can't satisfy either. We can't cast the LBER_ERROR from
uint -> int without changing the value of it, and we can't cast the
output of ber_printf from int -> uint, again, without potentially
changing the value of it. So it seems that the openldap library may be
impossible to satisfy the gcc type checking with -Wsign-compare.

For now, I may just avoid these in my fixes, as it seems like a whole
set of landmines I want to avoid ...


Part of the problem is that we wanted to support being able to use both 
mozldap and openldap, without too much "helper" code/macros/#ifdef 
MOZLDAP/etc.  It looks as though this is a place where we need to have 
some sort of helper.
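
A minimal sketch of what such a helper could look like - the name is invented, this is
not an existing slapi function - so that the unsigned/signed conversion is made explicit
in exactly one place and call sites can test the result of ber_printf() without tripping
-Wsign-compare:

#include <lber.h>

/* Hypothetical helper, illustration only: ber_printf() returns an int,
 * while LBER_ERROR is ((ber_tag_t) -1), i.e. unsigned.  Doing the
 * conversion explicitly here keeps the sign-compare reasoning in one spot. */
static inline int
slapi_ber_printf_failed(int rc)
{
    return ((ber_tag_t)rc == LBER_ERROR);
}

A macro would work just as well; the point is only that the cast is deliberate and
documented, rather than repeated at every call site.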


(as for why we still support mozldap - we still need an ldap c sdk that 
supports NSS for crypto until we can fix that in the server. Once we 
change 389 so that it can use openldap with openssl/gnutls for crypto, 
we should consider deprecating support for mozldap.)






--
389-devel mailing list
389-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/389-devel@lists.fedoraproject.org





[389-devel] Re: Close of 48241, let's not support bad crypto

2016-10-03 Thread Rich Megginson

On 10/03/2016 08:58 PM, William Brown wrote:

Hi,

I want to close #48241 [0] as "wontfix". I do not believe that it's
appropriate to provide SHA3 as a password hashing algorithm.

The SHA3 algorithm is designed to be fast, and cryptographically secure.
Its target usage is for signatures and verification of these in a rapid
manner.

The fact that this algorithm is fast, and could be implemented in
hardware is the reason it's not appropriate for password hashing.
Passwords should be hashed with a slow algorithm, and in the future, an
algorithm that is CPU and memory hard. This means that in the (hopefully
unlikely) case of a password hash leak or dump from LDAP, the attacker
must spend a huge amount of resources to brute force or attack any
password that we are storing in the system.
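
To make the "slow by design" point concrete, here is a rough sketch using OpenSSL's
PKCS5_PBKDF2_HMAC, purely to illustrate a tunable cost parameter - it is not how 389
would implement this (389 links against NSS), and the iteration count is an arbitrary
placeholder:

#include <string.h>
#include <openssl/evp.h>

/* Illustration only: the iteration count is the knob that makes every
 * guess expensive for an attacker, unlike a single fast SHA3 digest. */
static int
hash_password(const char *pw, const unsigned char *salt, int saltlen,
              unsigned char *out, int outlen)
{
    const int iterations = 30000;   /* placeholder cost; tune upward over time */

    return PKCS5_PBKDF2_HMAC(pw, (int)strlen(pw),
                             salt, saltlen,
                             iterations,
                             EVP_sha256(),
                             outlen, out);
}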


If the crypto/security team is ok with not supporting SHA3 for 
passwords, works for me.




As a result, I would like to make this ticket "wontfix" with an
explanation of why. I think it's better for us to pursue #397 [1].
PBKDF2 is a CPU hard algorithm, and scrypt is both CPU and Memory hard.
This is the direction we should be going (asap).

Thanks,


[0] https://fedorahosted.org/389/ticket/48241
[1] https://fedorahosted.org/389/ticket/397



___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org




[389-devel] Re: Close of 48241, let's not support bad crypto

2016-10-03 Thread Rich Megginson

On 10/03/2016 09:34 PM, William Brown wrote:

On Mon, 2016-10-03 at 21:26 -0600, Rich Megginson wrote:

On 10/03/2016 08:58 PM, William Brown wrote:

Hi,

I want to close #48241 [0] as "wontfix". I do not believe that it's
appropriate to provide SHA3 as a password hashing algorithm.

The SHA3 algorithm is designed to be fast, and cryptographically secure.
Its target usage is for signatures and verification of these in a rapid
manner.

The fact that this algorithm is fast, and could be implemented in
hardware is the reason it's not appropriate for password hashing.
Passwords should be hashed with a slow algorithm, and in the future, an
algorithm that is CPU and memory hard. This means that in the (hopefully
unlikely) case of a password hash leak or dump from LDAP, the attacker
must spend a huge amount of resources to brute force or attack any
password that we are storing in the system.

If the crypto/security team is ok with not supporting SHA3 for
passwords, works for me.

Who would be a point of contact to ask this?


Nikos Mavrogiannopoulos <nmavr...@redhat.com>


As a result, I would like to make this ticket "wontfix" with an
explanation of why. I think it's better for us to pursue #397 [1].
PBKDF2 is a CPU hard algorithm, and scrypt is both CPU and Memory hard.
This is the direction we should be going (asap).

Thanks,


[0] https://fedorahosted.org/389/ticket/48241
[1] https://fedorahosted.org/389/ticket/397



___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org



[389-devel] Re: Design Doc: Automatic server tuning by default

2016-11-07 Thread Rich Megginson

On 11/06/2016 04:07 PM, William Brown wrote:

On Fri, 2016-11-04 at 12:07 +0100, Ludwig Krispenz wrote:

On 11/04/2016 06:51 AM, William Brown wrote:

http://www.port389.org/docs/389ds/design/autotuning.html

I would like to hear discussion on this topic.

thread number:
   independent of the number of CPUs I would have a default minimum number of
threads,

What do you think would be a good minimum? With too many threads per CPU,
we can cause an overhead in context switching that is not efficient.


Even if the threads are unused, or mostly idle?




your test result for reduced thread number is with clients quickly
handling responses and short operations.
   But if some threads are serving lazy clients or doing database access and
have to wait, you can quickly run out of threads to handle new ops.

Mmm this is true. Nunc-Stans helps a bit here, but not completely.


In this case, where there are a lot of mostly idle clients that want to 
maintain an open connection, nunc-stans helps a great deal, both because 
epoll is much better than a giant poll() array, and because libevent 
maintains a sorted idle connection list for you.
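
As a rough illustration of that last point (callback names invented, this is not the
nunc-stans API): with libevent, a persistent read event can carry its own idle timeout,
so the callback is told when a connection has gone quiet instead of the server scanning
a large poll() table looking for idle fds.

#include <sys/time.h>
#include <event2/event.h>

static void
conn_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)fd;
    (void)arg;
    if (what & EV_TIMEOUT) {
        /* idle timeout expired: close/free the connection here */
        return;
    }
    if (what & EV_READ) {
        /* data available: read and dispatch the LDAP operation */
    }
}

static void
watch_connection(struct event_base *base, evutil_socket_t fd)
{
    struct timeval idle = { 300, 0 };   /* assumed 5 minute idle timeout */
    struct event *ev = event_new(base, fd, EV_READ | EV_PERSIST, conn_cb, NULL);

    event_add(ev, &idle);
}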




I wonder if something like 16 or 24 is a good "minimum",
and then if we detect more CPUs we start to scale up.
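
Just to make the shape of that concrete, a toy sketch - the floor, ceiling and scaling
factor below are placeholders, not a proposal for the actual formula:

#include <unistd.h>

/* Illustrative only: derive a worker thread count from the detected CPUs,
 * with a floor so small machines still have spare threads for slow/lazy
 * clients, and a ceiling to keep context switching under control. */
static long
autotune_threads(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    long threads;

    if (ncpu < 1) {
        ncpu = 1;              /* sysconf() can fail; assume one CPU */
    }
    threads = ncpu * 4;        /* assumed scaling factor */
    if (threads < 16) {
        threads = 16;          /* the sort of minimum discussed above */
    }
    if (threads > 512) {
        threads = 512;         /* arbitrary ceiling for the sketch */
    }
    return threads;
}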


entry cache:
you should not only take the available memory into account but also the
size of the database; it doesn't make sense to blow up the cache and its
associated data (eg hashtables) for a small database just because the
memory is there

Well, the cache size is "how much we *could* use" not "how much we will
use". So setting a cache size of 20GB for a 10Mb database doesn't
matter, as we'll still only use ~10Mb of memory.

The inverse of this is that if we did set the cache size based on the database size,
what happens with a large online bulkload? We would need to retune the
database cache size, which means a restart of the application. Not
something that IPA/Admins want to hear. I think it's safer to just have
the higher number.




___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org




[389-devel] Re: [discuss] Entry cache and backend txn plugin problems

2019-02-26 Thread Rich Megginson

On 2/26/19 4:26 PM, William Brown wrote:



On 26 Feb 2019, at 18:32, Ludwig Krispenz  wrote:

Hi, I need a bit of time to read the docs and clear my thoughts, but one 
comment below
On 02/25/2019 01:49 AM, William Brown wrote:

On 23 Feb 2019, at 02:46, Mark Reynolds  wrote:

I want to start a brief discussion about a major problem we have with backend 
transaction plugins and the entry caches.  I'm finding that when we get into a 
nested state of be txn plugins and one of the later plugins fails, then while 
we don't commit the disk changes (they are aborted/rolled back) we DO keep the 
entry cache changes!

For example, a modrdn operation triggers the referential integrity plugin which 
renames the member attribute in some group and changes that group's entry cache 
entry, but then later on the memberOf plugin fails for some reason.  The 
database transaction is aborted, but the entry cache changes that RI plugin did 
are still present :-(  I have also found other entry cache issues with modrdn 
and BE TXN plugins, and we know of other currently non-reproducible entry cache 
crashes as well related to mishandling of cache entries after failed operations.

It's time to rework how we use the entry cache.  We basically need a 
transaction style caching mechanism - we should not commit any entry cache 
changes until the original operation is fully successful. Unfortunately, given 
the way the entry cache is currently designed and used, this will be a major 
change.

William wrote up this doc: 
http://www.port389.org/docs/389ds/design/cache_redesign.html

But this also does not currently cover the nested plugin scenario (not yet).  I 
don't know how difficult it would be to implement William's proposal, or how 
difficult it would be to incorporate the txn style caching into his design.  
What kind of time frame could this even be implemented in?  William, what are 
your thoughts?

I like coffee? How cool are planes? My thoughts are simple :)

I think there is a pretty simple mental simplification we can make here though. 
Nested transactions “don’t really exist”. We just have *recursive* operations 
inside of one transaction.

Once reframed like that, the entire situation becomes simpler. We have one 
thread in a write transaction that can have recursive/batched operations as 
required, which means that either “all operations succeed” or “none do”. 
Really, this is the behaviour we want anyway, and it’s the transaction model of 
LMDB and other kv stores that we could consider (wired tiger, sled in the 
future).

I think the recursive/nested transactions on the database level are not the 
problem; we already handle this correctly - either all changes become persistent 
or none do.
What we do not manage are the modifications we make in parallel on in-memory 
structures like the entry cache. Changes to the EC are not managed by any txn, 
and I do not see how any of the database txn models would help - they do not 
know about the EC and so cannot abort its changes.
We would need to incorporate the EC into a generic txn model, or have a way to 
flag EC entries as garbage if a txn is aborted

The issue is we allow parallel writes, which breaks the consistency guarantees 
of the EC anyway. LMDB won’t allow parallel writes (it’s single write - 
concurrent parallel readers), and most other modern kv stores take this 
approach too, so we should be remodelling our transactions to match this IMO. 
It will make the process of how we reason about the EC much much simpler I 
think.



Some sort of in-memory data structure with fast lookup and transactional semantics (modify operations are stored as mvcc/cow so each read of the database with a given txn handle sees its own 
view of the ec, a txn commit updates the parent txn ec view, or the global ec view if no parent, from the copy, a txn abort deletes the txn's copy of the ec) is needed.  A quick google search 
turns up several hits.  I'm not sure if the B+Tree proposed at http://www.port389.org/docs/389ds/design/cache_redesign.html has transactional semantics, or if such code could be added to its 
implementation.
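
A very rough sketch of that overlay idea - names invented, no locking, a linked list 
standing in for a proper map - just to show the commit/abort shape: reads consult the 
txn's private copies first, commit publishes them to the parent (or global) view, and 
abort simply throws them away, so a failed RI/memberOf step leaves the cache untouched.

#include <stdlib.h>
#include <string.h>

struct cache_entry {
    char *dn;
    void *entry;                  /* opaque entry payload */
    struct cache_entry *next;
};

struct ec_txn {
    struct cache_entry *pending;  /* txn-local COW copies */
    struct ec_txn *parent;        /* set for nested/recursive operations */
};

static struct cache_entry *global_cache;  /* shared view (locking omitted) */

/* Reads see the txn's own view first, then any parent's, then the global cache. */
static struct cache_entry *
ec_txn_find(struct ec_txn *txn, const char *dn)
{
    for (; txn; txn = txn->parent) {
        for (struct cache_entry *ce = txn->pending; ce; ce = ce->next) {
            if (strcmp(ce->dn, dn) == 0) {
                return ce;
            }
        }
    }
    for (struct cache_entry *ce = global_cache; ce; ce = ce->next) {
        if (strcmp(ce->dn, dn) == 0) {
            return ce;
        }
    }
    return NULL;
}

/* Commit publishes the txn's copies to the parent (or global) view;
 * abort frees them so nothing from the failed operation is visible. */
static void
ec_txn_end(struct ec_txn *txn, int commit)
{
    struct cache_entry **dest = txn->parent ? &txn->parent->pending : &global_cache;

    while (txn->pending) {
        struct cache_entry *ce = txn->pending;
        txn->pending = ce->next;
        if (commit) {
            ce->next = *dest;     /* newer copy shadows any older one */
            *dest = ce;
        } else {
            free(ce->dn);
            free(ce);
        }
    }
}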


With LMDB, if we could make the on-disk entry representation the same as the in-memory entry representation, then we could use LMDB as the entry cache too - the database would be the entry 
cache as well.






If William's design is too huge of a change that will take too long to safely implement 
then perhaps we need to look into revising the existing cache design where we use 
"cache_add_tentative" style functions and only apply them at the end of the op. 
 This is also not a trivial change.

It’s pretty massive as a change - if we want to do it right. I’d say we need:

* development and testing of a MVCC/COW cache implementation (proof that it 
really really works transactionally)
* allow “disable/disconnect” of the entry cache, but with the higher level 
txn’s so that we can prove the txn semantics are correct
* re-architect our transaction calls so 

[389-devel] Re: Porting Perl scripts

2019-06-24 Thread Rich Megginson

On 6/24/19 10:00 AM, Mark Reynolds wrote:



On 6/24/19 11:46 AM, Simon Pichugin wrote:

Hi team,
I am working on porting our admin Perl scripts to Python CLI.
Please, check the list and share your opinion:

- cl-dump.pl - dumps and decodes changelog. Is it used often (if at all)?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#cl_dump.pl_Dump_and_decode_changelog

This is used often actually, and is a good debugging tool.   I think it just 
creates a task, so it should be ported to CLI (added to replication CLI sub 
commands)

- logconv.pl - parse and analyse the access logs. Pretty big one, is it a 
priority? How many people use it?
   issue is created - https://pagure.io/389-ds-base/issue/50283
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#logconv_pl

Does not need to be ported as it's a standalone tool



Would be great to eliminate perl altogether . . . but this one will be tricky 
to port to python . . .



- migrate.pl - which migration scenarios do we plan to support?
   Do we deprecate old ones? Do we need the script?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#migrate-ds.pl

This script is obsolete IMHO

- ns_accountstatus.pl, ns_inactivate.pl, ns_activate.pl - the issue is
   discussed here - https://pagure.io/389-ds-base/issue/50206
   I think we should extend status at least. Also, William put some of his
   thoughts there. What do you think, guys? Will we refactor
   (kinda deprecate) some "account lock" as William is proposing?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#ldif2db.pl_Import-ns_accountstatus.pl_Establish_account_status
I will update the ticket, but we need the same functionality of the ns_* tools, especially the new status work that went into ns_accountstatus.pl - that all came from customer escalations so 
we must not lose that functionality.

- syntax-validate.pl - it probably will go to 'healthcheck' tool
   issue is created -https://pagure.io/389-ds-base/issue/50173
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#syntax-validate.pl

Yes

- repl_monitor.pl - should we make it a part of 'healthcheck' too?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html-single/configuration_command_and_file_reference/index#repl_monitor.pl_Monitor_replication_status

Yes

Thanks,
Simon



___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org




[389-devel] Re: Logging future direction and ideas.

2019-05-10 Thread Rich Megginson

On 5/9/19 9:13 PM, William Brown wrote:

Hi all,

So I think it's time for me to write some logging code to improve the 
situation. Relevant links before we start:

https://pagure.io/389-ds-base/issue/49415
http://www.port389.org/docs/389ds/design/logging-performance-improvement.html
https://pagure.io/389-ds-base/issue/50350
https://pagure.io/389-ds-base/issue/50361


All of these links touch on issues around logging, and I think they all combine 
to create three important points:

* The performance of logging should be improved
* The amount of details (fine grain) and information in logs should improve
* The structure of the log content should be improved to aid interaction 
(possibly even machine parsable)

I will turn this into a design document, but there are some questions I would 
like some input to help answer as part of this process to help set the 
direction and tasks to achieve.

-- Should our logs as they exist today, continue to exist?

I think that my view on this is "no". I think if we make something better, we 
have little need to continue to support our legacy interfaces. Of course, this would be a 
large change and it may not sit comfortably with people.

A large part of this thinking is that the "new" log interface I want to add is 
focused on *operations* rather than auditing accesses or changes, or looking at 
errors.  The information in the current access/audit/error logs would largely be melded 
into a single operation log, and then with tools like logconv, we
could parse and extract information that would behave the same way as 
access/error/audit.

At the same time - I can see how people *may* want a "realtime" audit of operations as 
they occur (i.e. an access log), but even today this is limited by having to "wait" 
for operations to complete.

In a crash scenario, we would be able to still view the logs that are queued, 
so I think there are not so many concerns about losing information in these 
cases (in fact we'd probably have more).

-- What should the operation log look like?

I think it should be structured, and should be whole-units of information, 
related to a single operation. IE only at the conclusion of the operation is it 
logged (thus the async!). It should support arbitrary, nested timers, and would 
*not* support log levels - it's a detailed log of the process each query goes 
through.

An example could be something like:

[timestamp] - [conn=id op=id] - start operation
[timestamp] - [conn=id op=id] - start time = time ...
[timestamp] - [conn=id op=id] - started internal search '(some=filter)'
[timestamp] - [conn=id op=id parentop=id] - start nested operation
[timestamp] - [conn=id op=id parentop=id] - start time = time ...
...
[timestamp] - [conn=id op=id parentop=id] - end time = time...
[timestamp] - [conn=id op=id parentop=id] - duration = diff end - start
[timestamp] - [conn=id op=id parentop=id] - end nested operation - result -> ...
[timestamp] - [conn=id op=id] - ended internal search '(some=filter)'
...
[timestamp] - [conn=id op=id] - end time = time
[timestamp] - [conn=id op=id] - duration = diff end - start


Due to the structured, blocked nature, there would be no interleaving of 
operation messages; therefore the log would appear as:

[timestamp] - [conn=00 op=00] - start operation
[timestamp] - [conn=00 op=00] - start time = time ...
[timestamp] - [conn=00 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=00 op=00 parentop=01] - start nested operation
[timestamp] - [conn=00 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=00 op=00 parentop=01] - end time = time...
[timestamp] - [conn=00 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=00 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=00 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=00 op=00] - end time = time
[timestamp] - [conn=00 op=00] - duration = diff end - start
[timestamp] - [conn=22 op=00] - start operation
[timestamp] - [conn=22 op=00] - start time = time ...
[timestamp] - [conn=22 op=00] - started internal search '(some=filter)'
[timestamp] - [conn=22 op=00 parentop=01] - start nested operation
[timestamp] - [conn=22 op=00 parentop=01] - start time = time ...
...
[timestamp] - [conn=22 op=00 parentop=01] - end time = time...
[timestamp] - [conn=22 op=00 parentop=01] - duration = diff end - start
[timestamp] - [conn=22 op=00 parentop=01] - end nested operation - result -> ...
[timestamp] - [conn=22 op=00] - ended internal search '(some=filter)'
...
[timestamp] - [conn=22 op=00] - end time = time
[timestamp] - [conn=22 op=00] - duration = diff end - start
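
A tiny sketch of the mechanics behind "only log at the conclusion of the operation" 
(names invented, fixed-size buffer purely for brevity): events accumulate in an 
op-private buffer and the whole block is written in one go, which is what guarantees 
the non-interleaved output shown above.

#include <stdio.h>
#include <stdarg.h>

#define OP_LOG_MAX 8192

struct op_log {
    char buf[OP_LOG_MAX];
    size_t used;
};

/* Append one event to the operation's private buffer; nothing reaches
 * the log file yet. */
static void
op_log_event(struct op_log *ol, const char *fmt, ...)
{
    va_list ap;
    size_t room = sizeof(ol->buf) - ol->used;
    int n;

    if (room == 0) {
        return;               /* full; a real implementation would grow the buffer */
    }
    va_start(ap, fmt);
    n = vsnprintf(ol->buf + ol->used, room, fmt, ap);
    va_end(ap);
    if (n > 0) {
        ol->used += ((size_t)n < room) ? (size_t)n : room - 1;
    }
}

/* Called once, when the operation completes: flush the whole block so
 * events from different operations never interleave in the file. */
static void
op_log_flush(struct op_log *ol, FILE *out)
{
    fwrite(ol->buf, 1, ol->used, out);
    ol->used = 0;
}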

An alternate method for structuring could be a machine readable format like 
json:

{
 'timestamp': 'time',
 'duration': ,
 'bind': 'dn of who initiated operation',
 'events': [
 'debug': 'msg',
 'internal_search': {
  'timestamp': 'time',
  'duration': ,
  

[389-devel] Re: Future of nunc-stans

2019-10-22 Thread Rich Megginson

On 10/22/19 8:28 PM, William Brown wrote:



I think turbo mode was to try and shortcut returning to the conntable and then 
blocking on the connections poll, because the locking strategies before weren't as good. I 
think there is still some value in turbo "for now" but if we can bring in 
libevent, then it diminishes because we become event driven rather than poll driven.


"turbo mode" means "keep reading from this socket as quickly as possible until you 
get EAGAIN/EWOULDBLOCK" i.e. keep reading from the socket as fast as possible as long as there 
is data immediately available.
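
In code terms the idea is roughly this (helper steps shown as comments, names invented): 
once poll() reports the fd readable, keep reading until the kernel says there is nothing 
left, rather than going back through poll() between every read.

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

static void
drain_connection(int fd)
{
    char buf[4096];

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);

        if (n > 0) {
            /* hand the bytes to the LDAP decoder / worker thread */
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* nothing more right now: re-arm the fd in poll(), or - as
             * suggested further down - push the task back on the work queue */
            break;
        }
        /* n == 0 (peer closed) or a real error: tear the connection down */
        break;
    }
}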


Yep that's how I understood it - it's trying to prevent a longer delay until 
it's poll()-ed again.


This is very useful for replication consumers, especially during online init, 
when the supplier is feeding you data as fast as possible.  Otherwise, its 
usefulness is limited to applications where you have a single client hammering 
you with requests, of which test/stress clients form a significant percentage.


Don't you know though, microoptimising for benchmarks is the new and hip trend.

Joking aside, there probably are situations for now where it's still useful, 
but if we can bring in libevent and be event driven rather than using poll() we 
shouldn't have to worry too much.

Another option is when we hit EAGAIN/EWOULDBLOCK we move the task back to the 
slapi work q rather than re-waiting on it in the poll phase.


+1





—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs
___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org




[389-devel] Re: Future of nunc-stans

2019-10-08 Thread Rich Megginson

On 10/8/19 4:55 PM, William Brown wrote:

Hi everyone,

In our previous catch up (about 4/5 weeks ago when I was visiting Matus/Simon), 
we talked about nunc-stans and getting it at least cleaned up and into the code 
base.

I've been looking at it again, and really thinking about it and reflecting on 
it and I have a lot of questions and ideas now.

The main question is *why* do we want it merged?

Is it performance? Recently I provided a patch that yielded roughly a 30% 
speed up in overall server throughput just by changing our existing 
connection code.
Is it features? What features are we wanting from this? We have no complaints 
about our current threading model and thread allocations.
Is it maximum number of connections? We can always change the conntable to a 
better datastructure that would help scale this number higher (which would also 
yield a performance gain).


It is mostly about the c10k problem, trying to figure out a way to use 
epoll, via an event framework like libevent, libev, or libtevent, but in 
a multi-threaded way (at the time none of those were really thread safe, 
or suitable for use in the way we do multi-threading in 389).


It wasn't about performance, although I hoped that using lock-free data 
structures might solve some of the performance issues around thread 
contention, and perhaps using a "proper" event framework might give us 
some performance boost e.g. the idle thread processing using libevent 
timeouts.  I think that using poll() is never going to scale as well as 
epoll() in some cases e.g. lots of concurrent connections, no matter 
what sort of datastructure you use for the conntable.


As far as features go - it would be nice to give plugins the ability 
to inject event requests, get timeout events, using the same framework 
as the main server engine.





The more I have looked at the code, I guess with time and experience, the more 
hesitant I am to actually commit to merging it. It was designed by people who 
did not understand low-level concurrency issues and memory architectures of 
systems,


I resemble that remark.  I suppose you could "turn off" the lock-free 
code and use mutexes.



so it's had a huge number of (difficult and subtle) unsafety issues. And while 
most of those are fixed, what it does is duplicate the connection structure 
from core 389,


It was supposed to eventually replace the connection code.


leading to weird solutions like lock sharing and having to use monitors and 
more. We've tried a few times to push forward with this, but each time we end 
up with a lot of complexity and fragility.


So I'm currently thinking a better idea is to step back, re-evaluate what the 
problem is we are trying to solve for, then to solve *that*.

The question now is "what is the concern that ns would solve". Knowing 
that, we can make a plan and approach it more constructively, I think.


I agree.  There are probably better ways to solve the problems now.



At the end of the day, I'm questioning if we should just rm -r src/nunc-stans 
and rethink this whole approach - there are just too many architectural flaws 
and limitations in ns that are causing us headaches.

Ideas and thoughts?

--
Sincerely,

William
___
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org



