[SSSD-users] Re: Issues with upgrade from 1.16.3 to 2.2.0 release

2019-08-20 Thread cedric hottier
On Tue, 20 Aug 2019 at 15:15, Sumit Bose wrote:

> On Tue, Aug 20, 2019 at 02:01:40PM +0200, cedric hottier wrote:
> > Dear SSSD users,
> >
> > I would like to share with you a few issues I faced during the move from
> > the 1.16.3 to the 2.2.0 sssd release.
> > I am a Debian user and I did this move because Debian pushed the 2.2.0
> > release to the testing branch.
> >
> > My configuration may seem exotic as I use 'files' as id_provider and
> > 'krb5' as auth_provider.
> >
> > Initially with the 1.16 version I faced the following issue:
> > https://pagure.io/SSSD/sssd/issue/3591
> >
> > Thanks to Jakub Hrozek
> > <https://lists.fedorahosted.org/archives/users/5980502310531547029931685919681184321/>,
> > I was able to make it work with the following workaround:
> > id_provider=proxy proxy_lib_name=files
> > For those interested, the discussion thread is here:
> >
> > https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.org/thread/5BHXWYHNA7PT5V76CXCALZ4LVPOTRFVY/
> >
> >
> > With the move to 2.2.0, I faced several issues...
> > First, I had to remove the line services = nss, pam, ifp from sssd.conf
> > because I use systemd.
> > I think I hit the bug described here:
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886483 . I do not know
> > whether it is a Debian-specific integration issue or an sssd issue. I did
> > not find any reference to an sssd upstream bug, but in the meantime it is
> > written that "We believe that the bug you reported is fixed in the latest
> > version of sssd". It is not clear to me whether they mean the sssd Debian
> > package version or the upstream version.
> > Anyway, I faced this issue with the new Debian 2.2.0 package; let me know
> > if it is Debian-specific so that I can open a bug report on the Debian
> > side.
> >
> > Once the previous issue was fixed, I faced a segmentation fault in
> > libsss_proxy.so.
>
> Hi,
>
> I guess you are seeing https://pagure.io/SSSD/sssd/issue/3931 which
> should be fixed in sssd-2.2.1.
>
> HTH
>
> bye,
> Sumit
>
Hi,
Thank you for your reply.
It is not so obvious to me. The bug report does not mention a segmentation
fault, but rather an excessive amount of time to fetch all groups.
The ticket also mentions the condition enumerate = true, which is not
fulfilled in my case.

My config is the following:

/etc/sssd/sssd.conf:
[sssd]
services = nss, pam, ifp
domains = ECCM.LAN

[pam]
pam_verbosity = 2
offline_credentials_expiration = 0

/etc/sssd/conf.d/01_ECCM_LAN.conf:
[domain/ECCM.LAN]
debug_level = 10
id_provider = proxy
proxy_lib_name = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN
krb5_validate = true
krb5_ccachedir = /var/tmp
krb5_keytab = /etc/krb5.keytab
krb5_store_password_if_offline = true
cache_credentials = true


> > Sorry not to have the exact error message, but it should be easy to
> > reproduce:
> > id_provider = files
> > auth_provider = krb5
> > should show the issue.
> >
> > Due to this segfault, I removed the workaround for bug 3591. sssd was
> > then started properly by systemd, but I realized that bug 3591 is still
> > not fixed.
> >
> > I am afraid I am stuck on the 1.16.3 release (which does the job, but is
> > not aligned with Debian testing).
> >
> > Thanks for your feedback
> >
> > Kind Regards
> > Cedric
>
___
sssd-users mailing list -- sssd-users@lists.fedorahosted.org
To unsubscribe send an email to sssd-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.org


[SSSD-users] Issues with upgrade from 1.16.3 to 2.2.0 release

2019-08-20 Thread cedric hottier
Dear SSSD users,

I would like to share with you a few issues I faced during the move from
the 1.16.3 to the 2.2.0 sssd release.
I am a Debian user and I did this move because Debian pushed the 2.2.0
release to the testing branch.

My configuration may seem exotic as I use 'files' as id_provider and 'krb5'
as auth_provider.

Initially with the 1.16 version I faced the following issue:
https://pagure.io/SSSD/sssd/issue/3591

Thanks to Jakub Hrozek
<https://lists.fedorahosted.org/archives/users/5980502310531547029931685919681184321/>,
I was able to make it work with the following workaround:
id_provider=proxy proxy_lib_name=files
For those interested, the discussion thread is here:
https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.org/thread/5BHXWYHNA7PT5V76CXCALZ4LVPOTRFVY/
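For reference, with that workaround in place the relevant part of my domain
section looks like this (trimmed to the lines that matter; the other krb5
options are unchanged):

[domain/ECCM.LAN]
id_provider = proxy
proxy_lib_name = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN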


With the move to 2.2.0, I faced several issues...
First, I had to remove the line services = nss, pam, ifp from sssd.conf
because I use systemd.
I think I hit the bug described here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886483 . I do not know
whether it is a Debian-specific integration issue or an sssd issue. I did not
find any reference to an sssd upstream bug, but in the meantime it is written
that "We believe that the bug you reported is fixed in the latest version of
sssd". It is not clear to me whether they mean the sssd Debian package
version or the upstream version.
Anyway, I faced this issue with the new Debian 2.2.0 package; let me know if
it is Debian-specific so that I can open a bug report on the Debian side.
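
In case it helps, after removing that line my [sssd] section is reduced to
the following (a sketch of my setup; my assumption is that the Debian 2.2.0
packaging starts the nss/pam/ifp responders on demand via systemd, so the
explicit services= line is no longer needed):

[sssd]
domains = ECCM.LAN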

Once the previous issue was fixed, I faced a segmentation fault in
libsss_proxy.so.
Sorry not to have the exact error message, but it should be easy to
reproduce:
id_provider = files
auth_provider = krb5
should show the issue.
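
A minimal domain section for reproducing it should look roughly like this
(a sketch; the realm and server names are simply the ones from my setup):

[domain/ECCM.LAN]
id_provider = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN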

Due to this segfault, I removed the workaround for bug 3591. sssd was then
started properly by systemd, but I realized that bug 3591 is still not fixed.

I am afraid I am stuck on the 1.16.3 release (which does the job, but is not
aligned with Debian testing).

Thanks for your feedback

Kind Regards
Cedric
___
sssd-users mailing list -- sssd-users@lists.fedorahosted.org
To unsubscribe send an email to sssd-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.org


[SSSD-users] Re: credentials cache cleared at sssd restart pam_sss + krb5

2018-05-02 Thread cedric hottier
Dear Jakub,
Thanks a lot for your workaround. It works perfectly now.

I guess that fixing this issue is not a priority, as the
proxy_lib_name=files workaround works fine.
I did not find any bug report regarding this issue, and I think it would be
worth creating one with your workaround.

Regards
Cedric
___
sssd-users mailing list -- sssd-users@lists.fedorahosted.org
To unsubscribe send an email to sssd-users-le...@lists.fedorahosted.org


[SSSD-users] credentials cache cleared at sssd restart pam_sss + krb5

2018-05-01 Thread cedric hottier
Dear sssd users,

I observe that at each sssd start, the credentials cache is cleared. Is this
expected behavior?
If yes, is there a parameter to make this caching permanent (or at least not
erased at each sssd restart)?
My issue is that if I reboot my laptop without a connection to my KDC, I am
not able to log in due to [sysdb_cache_auth] cached credentials not available.

Here is my config: Debian testing / sssd version: 1.16.1

/etc/sssd/sssd.conf :
[sssd]
services = nss, pam, ifp
domains = ECCM.LAN

[pam]
pam_verbosity = 2
offline_credentials_expiration = 0

/etc/sssd/conf.d/01_ECCM_LAN.conf
[domain/ECCM.LAN]
debug_level = 10
id_provider = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN
krb5_validate = true
krb5_ccachedir = /var/tmp
krb5_keytab = /etc/krb5.keytab
krb5_store_password_if_offline = true

cache_credentials = true

After a fresh reboot, I am able to log in only if the krb5_server is
available.
As long as I do not restart the sssd daemon, I am able to log in.

The credentials caching seems to work properly, as I see
"Authenticated with cached credentials" at each TTY console just before
the usual login message.

But if I restart the sssd daemon while disconnected from the network, I am
not able to log in anymore. The credentials cache seems to have been
cleared.

Here is the /var/log/sssd/krb5_child.log

(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400): krb5_child started.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer] (0x1000): total buffer size: [128]
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer] (0x0100): cmd [241] uid [1000] gid [1000] validate [true] enterprise principal [false] offline [true] UPN [ced...@eccm.lan]
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer] (0x2000): No old ccache
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer] (0x0100): ccname: [FILE:/var/tmp/krb5cc_1000_XX] old_ccname: [not set] keytab: [/etc/krb5.keytab]
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [check_use_fast] (0x0100): Not using FAST.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_precreate_ccache] (0x4000): Recreating ccache
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [privileged_krb5_setup] (0x0080): Cannot open the PAC responder socket
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user] (0x0200): Trying to become user [1000][1000].
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x2000): Running as [1000][1000].
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user] (0x0200): Trying to become user [1000][1000].
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user] (0x0200): Already user [1000].
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_setup] (0x2000): Running as [1000][1000].
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [set_lifetime_options] (0x0100): No specific renewable lifetime requested.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [set_lifetime_options] (0x0100): No specific lifetime requested.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400): Will perform offline auth
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_empty_ccache] (0x1000): Creating empty ccache
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_empty_cred] (0x2000): Created empty krb5_creds.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_ccache] (0x4000): Initializing ccache of type [FILE]
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_ccache] (0x4000): returning: 0
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_send_data] (0x0200): Received error code 0
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [pack_response_packet] (0x2000): response packet size: [56]
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_send_data] (0x4000): Response sent.
(Tue May  1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400): krb5_child completed successfully

And in /var/log/sssd/sssd_ECCM_LAN.log:

(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
dbus conn: 0x560ea04b4be0
(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
Dispatching.
(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_message_handler]
(0x2000): Received SBUS method
org.freedesktop.sssd.dataprovider.getAccountInfo on path
/org/freedesktop/sssd/dataprovider
(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_get_sender_id_send]
(0x2000): Not a sysbus message, quit
(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[dp_get_account_info_handler] (0x0200): Got request for
[0x3][BE_REQ_INITGROUPS][name=files_initgr_request]
(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sss_domain_get_state]
(0x1000): Domain ECCM.LAN is Active

...

(Tue May  1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_attach_req] (0x0400):