Re: [Freeipa-users] Logging: IPA to Rsyslog to Logstash

2014-12-19 Thread Dmitri Pal

On 12/19/2014 11:35 AM, Innes, Duncan wrote:

Earlier this year I said I'd feed back how my IPA to Rsyslog to Logstash
experiments went.

They went badly.  And I didn't get much time.  Today, however, I managed
to get over my imaginary finishing line:

All systems are RHEL 6.6.

Rsyslog (rsyslog7-7.4.10) is configured to import logs from some dirsrv
files:

# cat /etc/rsyslog.d/dirsrv.conf
module(load="imfile" PollingInterval="2")

input(type="imfile"
   File="/var/log/dirsrv/slapd-EXAMPLE-COM/access"
   Tag="dirsrv"
   StateFile="statedirsrv"
   Facility="local0")

input(type="imfile"
   File="/var/log/dirsrv/slapd-EXAMPLE-COM/errors"
   Tag="dirsrv"
   StateFile="statedirsrverr"
   Severity="error"
   Facility="local0")

#

This pulls in those log entries on a regular basis.  Rsyslog 8 allows you
to use inotify to pick up file changes, but that's not available to me.
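
For anyone who does have rsyslog 8, I believe the only change needed is the
module load line - something like the following (untested on my side, the
parameter name is taken from the rsyslog 8 imfile docs; the input() blocks
should stay the same):

module(load="imfile" mode="inotify")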

Rsyslog is then also configured to push all logs to my Logstash servers:

# cat /etc/rsyslog.d/logstash.conf
template(name="ls_json" type="list" option.json="on")
{ constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timegenerated"
dateFormat="rfc3339")
constant(value="\",\"@version\":\"1")
constant(value="\",\"message\":\"") property(name="msg")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"my_environment\":\"dev")
constant(value="\",\"my_project\":\"Infrastructure")
constant(value="\",\"my_use\":\"IPA")
constant(value="\",\"logsource\":\"") property(name="fromhost")
constant(value="\",\"severity_label\":\"")
property(name="syslogseverity-text")
constant(value="\",\"severity\":\"") property(name="syslogseverity")
constant(value="\",\"facility_label\":\"")
property(name="syslogfacility-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"pid\":\"") property(name="procid")
constant(value="\",\"rawmsg\":\"") property(name="rawmsg")
constant(value="\",\"syslogtag\":\"") property(name="syslogtag")
constant(value="\"}\n")
}

*.* @@logstash01.example.com:5500;ls_json
$ActionExecOnlyWhenPreviousIsSuspended on
& @@logstash02.example.com:5500;ls_json
& /var/log/localbuffer
$ActionExecOnlyWhenPreviousIsSuspended off

[root@lvdlvldap02 ~]#

This pushes all logs to my Logstash servers in JSON format.  Failover
is built in by using 2 Logstash servers.
The client needs to have SELinux managed to allow rsyslog to connect to
port 5500:

# semanage port -a -t syslogd_port_t -p tcp 5500
# semanage port -l | grep 5500

The Logstash servers are then configured to listen on this port and do
some simple grokking, before sending everything to the ElasticSearch
cluster:

# cat /etc/logstash/conf.d/syslog.conf
input {
   tcp {
 type => syslogjson
 port => 5500
 codec => "json"
   }
}

filter {
   # This replaces the host field (UDP source) with the host that
generated the message (sysloghost)
   if [sysloghost] {
 mutate {
   replace => [ "host", "%{sysloghost}" ]
   remove_field => "sysloghost" # prune the field after successfully
replacing "host"
 }
   }
   if [type] == "syslogjson" {
 grok {
   patterns_dir => "/opt/logstash/patterns"
   match => { "message" => "%{VIRGINFW}" }
   match => { "message" => "%{AUDITAVC}" }
   match => { "message" => "%{COMMONAPACHELOG}" }
   tag_on_failure => []
 }
   }

   # This filter populates the @timestamp field with the timestamp that's
in the actual message
   # dirsrv logs are currently pulled in every 2 minutes, so @timestamp
is wrong
   if [syslogtag] == "dirsrv" {
 mutate {
   remove_field => [ 'rawmsg' ]
 }
 grok {
   match => [ "message", "%{HTTPDATE:log_timestamp}" ]
 }
 date {
   match => [ "log_timestamp", "dd/MMM/YYY:HH:mm:ss Z"]
   locale => "en"
   remove_field => [ "log_timestamp" ]
 }
   }
}

output {
   elasticsearch {
 protocol => node
 node_name => "Indexer01"
   }
}
#

It works well for the most part.  I'm not performing any grokking of the
actual message line as yet to pull out various bits of data into their
own separate fields, but at least I'm managing to log the access and
errors from multiple IPA servers.
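
If anyone wants to take the grokking further, a custom pattern along these
lines should pull the connection and operation numbers out of the 389-ds
access log lines (just a sketch - the pattern name and field names are my
own invention, and not every access log line has conn=/op= pairs, so keep
tag_on_failure empty):

# /opt/logstash/patterns/dirsrv
DIRSRV_ACCESS \[%{HTTPDATE:log_timestamp}\] conn=%{INT:conn} op=%{INT:op} %{GREEDYDATA:op_detail}

and then in the dirsrv section of the filter:

  grok {
    patterns_dir => "/opt/logstash/patterns"
    match => { "message" => "%{DIRSRV_ACCESS}" }
    tag_on_failure => []
  }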

The @timestamp field ends up with the timestamp from the actual message
line, so it's only down to second accuracy.  This means that multiple
log lines on the same second lose their ordering when viewed in the
Logstash/Kibana interface.  But the important thing at this point is
that they're now held centrally.

Is it feasible to alter the timestamp resolution that dirsrv uses?  This
would help separate log lines properly.


Please file a 389 RFE.



Cheers & Merry Festive Holiday thing

Duncan


Re: [Freeipa-users] dirsrv password incorrect on replicas?

2014-12-19 Thread Janelle
I am the only one who has access to these systems, so unless I did it in 
my sleep.. :-)


~J

On 12/19/14 12:14 AM, Ludwig Krispenz wrote:


On 12/18/2014 08:16 PM, Rich Megginson wrote:

On 12/18/2014 11:59 AM, Janelle wrote:
I am looking at the 2 entries in dse.ldif - and indeed they are 
different.  If I replace the one in question with the one from the 
working system, it works again.


I'm assuming by "entry" you are referring to nsslapd-rootpw in 
cn=config.




I did find - replica was created on Dec 11 at noon -- and the 
dse.ldif file CHANGED a day later?!?


The dse.ldif file changes all the time - unique id generator state, 
csn generator state, replication state, etc. etc.


BUT - nsslapd-rootpw SHOULD NOT CHANGE

No, unless someone follows the steps to change it.
Janelle, could it be that someone else was working on that server, not
knowing the root pw, and changed it in dse.ldif?


--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project


[Freeipa-users] Logging: IPA to Rsyslog to Logstash

2014-12-19 Thread Innes, Duncan
Earlier this year I said I'd feed back how my IPA to Rsyslog to Logstash
experiments went.

They went badly.  And I didn't get much time.  Today, however, I managed
to get over my imaginary finishing line:

All systems are RHEL 6.6.

Rsyslog (rsyslog7-7.4.10) is configured to import logs from some dirsrv
files:

# cat /etc/rsyslog.d/dirsrv.conf 
module(load="imfile" PollingInterval="2")

input(type="imfile"
  File="/var/log/dirsrv/slapd-EXAMPLE-COM/access"
  Tag="dirsrv"
  StateFile="statedirsrv"
  Facility="local0")

input(type="imfile"
  File="/var/log/dirsrv/slapd-EXAMPLE-COM/errors"
  Tag="dirsrv"
  StateFile="statedirsrverr"
  Severity="error"
  Facility="local0")

#

This pulls in those log entries on a regular basis.  Rsyslog 8 allows you
to use inotify to pick up file changes, but that's not available to me.

Rsyslog is then also configured to push all logs to my Logstash servers:

# cat /etc/rsyslog.d/logstash.conf 
template(name="ls_json" type="list" option.json="on")
{ constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timegenerated"
dateFormat="rfc3339")
constant(value="\",\"@version\":\"1")
constant(value="\",\"message\":\"") property(name="msg")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"my_environment\":\"dev")
constant(value="\",\"my_project\":\"Infrastructure")
constant(value="\",\"my_use\":\"IPA")
constant(value="\",\"logsource\":\"") property(name="fromhost")
constant(value="\",\"severity_label\":\"")
property(name="syslogseverity-text")
constant(value="\",\"severity\":\"") property(name="syslogseverity")
constant(value="\",\"facility_label\":\"")
property(name="syslogfacility-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"pid\":\"") property(name="procid")
constant(value="\",\"rawmsg\":\"") property(name="rawmsg")
constant(value="\",\"syslogtag\":\"") property(name="syslogtag")
constant(value="\"}\n")
}

*.* @@logstash01.example.com:5500;ls_json
$ActionExecOnlyWhenPreviousIsSuspended on
& @@logstash02.example.com:5500;ls_json
& /var/log/localbuffer
$ActionExecOnlyWhenPreviousIsSuspended off

[root@lvdlvldap02 ~]#

This pushes all logs to my Logstash servers in JSON format.  Failover
is built in by using 2 Logstash servers.
The client needs to have SELinux managed to allow rsyslog to connect to
port 5500:

# semanage port -a -t syslogd_port_t -p tcp 5500
# semanage port -l | grep 5500

The Logstash servers are then configured to listen on this port and do
some simple grokking, before sending everything to the ElasticSearch
cluster:

# cat /etc/logstash/conf.d/syslog.conf 
input {
  tcp {
type => syslogjson
port => 5500
codec => "json"
  }
}

filter {
  # This replaces the host field (UDP source) with the host that
generated the message (sysloghost)
  if [sysloghost] {
mutate {
  replace => [ "host", "%{sysloghost}" ]
  remove_field => "sysloghost" # prune the field after successfully
replacing "host"
}
  }
  if [type] == "syslogjson" {
grok {
  patterns_dir => "/opt/logstash/patterns"
  match => { "message" => "%{VIRGINFW}" }
  match => { "message" => "%{AUDITAVC}" }
  match => { "message" => "%{COMMONAPACHELOG}" }
  tag_on_failure => []
}
  }

  # This filter populates the @timestamp field with the timestamp that's
in the actual message
  # dirsrv logs are currently pulled in every 2 minutes, so @timestamp
is wrong
  if [syslogtag] == "dirsrv" {
mutate {
  remove_field => [ 'rawmsg' ]
}
grok {
  match => [ "message", "%{HTTPDATE:log_timestamp}" ]
}
date {
  match => [ "log_timestamp", "dd/MMM/YYY:HH:mm:ss Z"]
  locale => "en"
  remove_field => [ "log_timestamp" ]
}
  }
}

output {
  elasticsearch {
protocol => node
node_name => "Indexer01"
  }
}
#

It works well for the most part.  I'm not performing any grokking of the
actual message line as yet to pull out various bits of data into their
own separate fields, but at least I'm managing to log the access and
errors from multiple IPA servers.

The @timestamp field ends up with the timestamp from the actual message
line, so it's only down to second accuracy.  This means that multiple
log lines on the same second lose their ordering when viewed in the
Logstash/Kibana interface.  But the important thing at this point is
that they're now held centrally.

Is it feasible to alter the timestamp resolution that dirsrv uses?  This
would help separate log lines properly.

Cheers & Merry Festive Holiday thing

Duncan


Re: [Freeipa-users] while doing ipa-getkeytab , getting Operation failed! PrincipalName not found.

2014-12-19 Thread Dmitri Pal

On 12/19/2014 05:07 AM, Ben .T.George wrote:

Hi List

I was trying to add a Linux machine manually as a client. I was following
this:
http://docs.fedoraproject.org/en-US/Fedora/15/html/FreeIPA_Guide/linux-manual.html


While doing ipa-getkeytab on the FreeIPA server, I am getting the error
"Operation failed! PrincipalName not found."


Please help me to solve this issue.


When you do client enrollment using ipa-client you can run it in several
ways:
- as a high-level admin that has full privileges in IPA (recommended just
for demo and POC purposes)
- as a low-level admin that has permission to provision systems. Such an
admin does not have the privilege to create the host entry during
registration; the entry must already be there. The error you see above
indicates that the host entry does not exist.
- as an automated system. In this case the entry has to be pre-created,
and one can set or request IPA to generate a registration code that can
be used once as an OTP to register the client.


So if you do things manually, you need to create the host entry on the
server side first.
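
Roughly like this (hostnames and realm below are just placeholders for your
own values):

# kinit admin
# ipa host-add client.example.com
# ipa-getkeytab -s ipaserver.example.com -p host/client.example.com@EXAMPLE.COM -k /etc/krb5.keytab

For the automated/OTP variant, "ipa host-add client.example.com --random"
prints a one-time password that ipa-client-install can then consume with
its --password option.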





thanks & Regards,
Ben






--
Thank you,
Dmitri Pal

Sr. Engineering Manager IdM portfolio
Red Hat, Inc.

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project

Re: [Freeipa-users] KRB5CCNAME not defined in HTTP request environment

2014-12-19 Thread Dmitri Pal

On 12/19/2014 08:54 AM, Serafini, Adam wrote:

Hi,

I am trying to write some software that communicates with the FreeIPA 
server from a remote client.


Using Adam Young's helpful blog (
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/), 
I am successfully able to run this curl on the FreeIPA server itself:


curl -v -H referer:https://myserver.net/ipa -H 
"Content-Type:application/json" -H "Accept:application/json" 
--negotiate -u : --cacert /etc/ipa/ca.crt -d 
'{"method":"user_find","params":[[""],{}],"id":0}' -X POST 
https://myserver.net/ipa/json


But when I try to run a similar curl from my client workstation
(with the prerequisite Kerberos setup):


curl -v -H referer:https://myworkstation.net/ipa -H 
"Content-Type:application/json" -H "Accept:application/json" 
--negotiate -u : --cacert /tmp/ca.crt -d 
'{"method":"user_find","params":[[""],{}],"id":0}' -X POST 
https://myserver.net/ipa/json


The following error is generated in the Apache logs:

KerberosWSGIExecutioner.__call__: KRB5CCNAME not defined in HTTP 
request environment


Would anyone have any pointers to a fix, or a place to start
investigating? I am assuming there is a configuration problem but I have
no idea where to begin. I believe I've done all the Kerberos setup
correctly, but it's hard to tell.


It seems that curl can't find the Kerberos ticket cache.
KRB5CCNAME is an environment variable that points to the location of the
ticket cache.
Try defining it for curl and see what happens. I suppose kinit works fine
from the client you try it on.
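
Something along these lines on the workstation; run klist first and point
KRB5CCNAME at whatever "Ticket cache:" it reports (the path below is just
an example):

$ kinit admin@EXAMPLE.COM
$ klist
$ export KRB5CCNAME=FILE:/tmp/krb5cc_1000
$ curl -v --negotiate -u : --cacert /tmp/ca.crt \
    -H referer:https://myserver.net/ipa \
    -H "Content-Type:application/json" \
    -H "Accept:application/json" \
    -d '{"method":"user_find","params":[[""],{}],"id":0}' \
    -X POST https://myserver.net/ipa/json

One other thing worth checking: the referer in your client command points
at myworkstation.net, but the server side expects its own /ipa URL
(https://myserver.net/ipa) as the referer, the same as in the curl you ran
on the server itself.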




Kind regards,
Adam










--
Thank you,
Dmitri Pal

Sr. Engineering Manager IdM portfolio
Red Hat, Inc.

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project

[Freeipa-users] KRB5CCNAME not defined in HTTP request environment

2014-12-19 Thread Serafini, Adam
Hi,

I am trying to write some software that communicates with the FreeIPA server 
from a remote client.

Using Adam Young's helpful blog (
http://adam.younglogic.com/2010/07/talking-to-freeipa-json-web-api-via-curl/), 
I am successfully able to run this curl on the FreeIPA server itself:

curl -v -H referer:https://myserver.net/ipa -H "Content-Type:application/json" 
-H "Accept:application/json" --negotiate -u : --cacert /etc/ipa/ca.crt -d 
'{"method":"user_find","params":[[""],{}],"id":0}' -X POST 
https://myserver.net/ipa/json

But when I try to run a similar curl from my client workstation (with
the prerequisite Kerberos setup):

curl -v -H referer:https://myworkstation.net/ipa -H 
"Content-Type:application/json" -H "Accept:application/json" --negotiate -u : 
--cacert /tmp/ca.crt -d '{"method":"user_find","params":[[""],{}],"id":0}' -X 
POST https://myserver.net/ipa/json

The following error is generated in the Apache logs:

KerberosWSGIExecutioner.__call__: KRB5CCNAME not defined in HTTP request 
environment

Would anyone have any pointers to a fix, or a place to start investigating? I am
assuming there is a configuration problem but I have no idea where to begin. I
believe I've done all the Kerberos setup correctly, but it's hard to tell.

Kind regards,
Adam




-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project

[Freeipa-users] while doing ipa-getkeytab , getting Operation failed! PrincipalName not found.

2014-12-19 Thread Ben .T.George
Hi List

I was trying to add a Linux machine manually as a client. I was following this:
http://docs.fedoraproject.org/en-US/Fedora/15/html/FreeIPA_Guide/linux-manual.html

While doing ipa-getkeytab on the FreeIPA server, I am getting the error
"Operation failed! PrincipalName not found."

Please help me to solve this issue.


thanks & Regards,
Ben
-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project

Re: [Freeipa-users] dirsrv password incorrect on replicas?

2014-12-19 Thread Ludwig Krispenz


On 12/18/2014 08:16 PM, Rich Megginson wrote:

On 12/18/2014 11:59 AM, Janelle wrote:
I am looking at the 2 entries in dse.ldif - and indeed they are 
different.  If I replace the one in question with the one from the 
working system, it works again.


I'm assuming by "entry" you are referring to nsslapd-rootpw in cn=config.



I did find - replica was created on Dec 11 at noon -- and the 
dse.ldif file CHANGED a day later?!?


The dse.ldif file changes all the time - unique id generator state, 
csn generator state, replication state, etc. etc.


BUT - nsslapd-rootpw SHOULD NOT CHANGE

No, unless someone follows the steps to change it.
Janelle, could it be that someone else was working on that server, not
knowing the root pw, and changed it in dse.ldif?


Going to have OSSEC monitor the folders for file changes to see
what the heck is going on, WHAT changed it, and whether it happens again.
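
Something like this in ossec.conf should cover it (sketch only - the
instance directory is a placeholder for my real one):

<syscheck>
  <frequency>3600</frequency>
  <directories check_all="yes" report_changes="yes">/etc/dirsrv/slapd-EXAMPLE-COM</directories>
</syscheck>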


thanks for the help
~J


On 12/18/14 10:28 AM, Rich Megginson wrote:

On 12/18/2014 09:49 AM, Janelle wrote:

Good morning/evening All,

So, another strange thing I see with 4.1.2 running on FC21
(server).  On some replicas, if I attempt to modify the 389-ds
backend, I get credential errors.  Even ldapsearch fails - which has
me baffled.  I am trying to tune the servers, but this has me
confused as to what might cause something like this and where to
start looking for a solution.


Here is the interesting part - when the server was initially
replicated, I was able to make changes to 389-ds, but after a few
days, the credentials now show errors:


ldapsearch -x -LLL -D "cn=directory manager"  -b "cn=monitor" 
"(objectclass=*)" -W

Enter LDAP Password:
ldap_bind: Invalid credentials (49)


This doesn't make any sense.  Directory manager passwords are not 
replicated, they are local to each machine.  Directory manager 
passwords do not expire, and the error message is definitely 
"incorrect password" not "password expired".  There are no internal 
processes that touch directory manager or its password (unless there 
is something in IPA, but I doubt it). So I have no idea how "all of a
sudden" the directory manager password stops working.


You can't recover it, you can only reset it.
http://www.port389.org/docs/389ds/howto/howto-resetdirmgrpassword.html
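
From memory, the procedure on that page boils down to (instance name is a
placeholder):

# systemctl stop dirsrv@EXAMPLE-COM
# pwdhash -s SSHA512 NewSecretPassword
{SSHA512}...generated hash...
# vi /etc/dirsrv/slapd-EXAMPLE-COM/dse.ldif
    (replace the value of nsslapd-rootpw in cn=config with the hash)
# systemctl start dirsrv@EXAMPLE-COM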



Thoughts?
~J









--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go To http://freeipa.org for more info on the project