Hi Jakub,

(Top posting to save scrolling).

Success. It looks like the c-ares package was not installed during ipa-client install:

   # rpm -qV c-ares
   package c-ares is not installed
   # yum reinstall c-ares
   ...
   Package(s) c-ares available, but not installed.
   Error: Nothing to do
   # yum clean all
   ...
   # yum install c-ares
   ...
   Installed:
   c-ares.x86_64 0:1.7.0-6.el6

   Complete!

   # service sssd restart
   Stopping sssd: cat: /var/run/sssd.pid: No such file or directory [FAILED]
   Starting sssd: [  OK  ]
   #

Now the ssh keys are working :-)
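
For the record, here's a quick sanity check that the missing library is now resolvable (the /usr/lib64 path is my assumption for x86_64):

   # ldd /usr/libexec/sssd/sssd_be | grep cares
   # rpm -qf /usr/lib64/libcares.so.2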

So, one last question: would we normally track this down the same way for a customer, or simply have them uninstall and reinstall the IPA client? Is there any disadvantage to doing that?

Thank you!

-m


On 07/28/2014 08:38 AM, Jakub Hrozek wrote:
On Mon, Jul 28, 2014 at 08:28:01AM -0400, Mark Heslin wrote:
On 07/28/2014 07:33 AM, Jakub Hrozek wrote:
On Mon, Jul 28, 2014 at 07:28:22AM -0400, Mark Heslin wrote:
Hi Jakub,

I've added the output of 'sssd -i -d4' below:

On 07/28/2014 03:39 AM, Jakub Hrozek wrote:
On Sun, Jul 27, 2014 at 10:42:34PM -0400, Mark Heslin wrote:
Folks,

I just stumbled on an odd issue. I have an OpenShift deployment with 2
brokers, 2 nodes, 1 rhc client
all running RHEL 6.5. I also have 2 IPA servers (1 server, 1 replica), 1 IPA
admin (tools) client all running RHEL 7.0.
All OpenShift hosts, client and IPA client are members of IPA domain
'interop.example.com'.

After creating ssh public keys on the IPA admin client for user 'ose-admin1'
and uploading them into IPA,
I am able to ssh with the key to all IPA domain hosts as user 'ose-admin1'
except the 2 node hosts.
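(For reference, the keys were uploaded roughly like this; the exact key file is just an example:

   $ kinit admin
   $ ipa user-mod ose-admin1 --sshpubkey="$(cat ~/.ssh/id_rsa.pub)"
)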
In looking closer at the 2 node hosts, I noticed that SSSD keeps failing to
start:

# service sssd restart
Stopping sssd: cat: /var/run/sssd.pid: No such file or directory
[FAILED]
Starting sssd: [FAILED]

Starting with debug mode shows:

   [root@node1/2 ~]# sssd -d9
   (Sun Jul 27 22:12:29:527689 2014) [sssd] [check_file] (0x0400): lstat for
[/var/run/nscd/socket] failed: [2][No such file or directory].
   (Sun Jul 27 22:12:29:529293 2014) [sssd] [ldb] (0x0400):
server_sort:Unable to register control with rootdse!
   (Sun Jul 27 22:12:29:529596 2014) [sssd] [confdb_get_domain_internal]
(0x0400): No enumeration for [interop.example.com]!
   (Sun Jul 27 22:12:29:529646 2014) [sssd] [confdb_get_domain_internal]
(0x1000): pwd_expiration_warning is -1
   (Sun Jul 27 22:12:29:529686 2014) [sssd] [server_setup] (0x0040): Becoming
a daemon.
At this point sssd became a daemon and detached from the terminal, so no
more debug info was printed. Can you run sssd again, adding "-i"
(interactive) this time?
[root@node2 ~]# sssd -i -d4
(Mon Jul 28 07:25:20 2014) [sssd] [get_ping_config] (0x0100): Time between
service pings for [interop.example.com]: [10]
(Mon Jul 28 07:25:20 2014) [sssd] [get_ping_config] (0x0100): Time between
SIGTERM and SIGKILL for [interop.example.com]: [60]
(Mon Jul 28 07:25:20 2014) [sssd] [start_service] (0x0100): Queueing service
interop.example.com for startup
/usr/libexec/sssd/sssd_be: error while loading shared libraries:
libcares.so.2: cannot open shared object file: No such file or directory
^^^ Here's the error. Can you check if c-ares is installed and has
the expected version? 'yum check' would be a good start, I think.
Here's what I found:

   # ll /usr/libexec/sssd/sssd_be
   -rwxr-xr-x. 1 root root 577480 Dec 19  2013 /usr/libexec/sssd/sssd_be

   # yum check
   Loaded plugins: priorities, security, subscription-manager
   This system is receiving updates from Red Hat Subscription Management.
   check all

   #

Seems to be clean. Thoughts?

-m

rpm -q c-ares
rpm -qV c-ares
yum reinstall c-ares

Make sure c-ares is the right architecture, the same as the sssd daemon;
libraries can be multilib.
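
For example, to compare the package architectures directly (just a sketch; adjust the package names if needed):

rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' sssd c-ares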

