Re: Auditd statsd integration

2021-02-10 Thread LC Bruzenak
On Wed, Feb 10, 2021 at 1:07 PM LC Bruzenak  wrote:

> On Mon, Feb 8, 2021 at 7:44 PM Steve Grubb  wrote:
>
>> Hello,
>>
>> I have recently checked two experimental plugins into the audit tree. You
>> can enable them by passing --enable-experimental to configure. One of the
>> new plugins is aimed at providing audit metrics to a statsd server. The
>> idea is that you can use this to relay the metrics to InfluxDB, Prometheus,
>> or some other collector. Then you can use Grafana to visualize and alert.
>>
>> Currently, it supports the following metrics:
>>
>> kernel.audit.lost
>> kernel.audit.backlog
>> auditd.free_space
>> auditd.plugin_current_depth
>> auditd.plugin_max_depth
>> audit_events.total_count
>> audit_events.total_failed
>> audit_events.avc_count
>> audit_events.fanotify_count
>> audit_events.logins_failed
>> audit_events.logins_success
>> audit_events.anomaly_count
>> audit_events.response_count
>>
>> I'd be interested in hearing whether this would be useful, and whether
>> these are the right metrics people are interested in. Should something
>> else be
>> measured? Should an example Grafana dashboard be included?
>>
>> Let me know what you think.
>>
>> -Steve
>>
>>
> Steve,
>
> I think this could be awesome; hoping to give it a try soon. An example
> dashboard would be very helpful if you could include that.
> The stats you already point out are a good start.
>
> I'd also like to have a way to scan the per-machine kernel-assigned event
> IDs for missing ones. Would that need a separate plugin, or could
> something be done within this setup?
> I'm pretty sure there are more metrics that would be desired as well as
> some derived; e.g. take a per-user login/logoff set to identify time spent
> on a particular machine (screenlocks notwithstanding, but maybe
> eventually). Or perhaps if clients send events+heartbeats, when are they
> up/down? These are some of the questions I've heard from security overseers.
>
> And while some of these may not be inspected directly by the end users, in
> the case of trouble calls or questions they might be the exact thing I'd
> ask them to relay to me in order to diagnose a problem or answer a question
> remotely.
>
> Thx,
> LCB
>
>
... and I forgot to ask - can you include a README there which specifies
the minimum kernel/userspace level of code required?

LCB

-- 

LC (Lenny) Bruzenak
le...@magitekltd.com
--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

Re: Auditd statsd integration

2021-02-10 Thread LC Bruzenak
On Mon, Feb 8, 2021 at 7:44 PM Steve Grubb  wrote:

> Hello,
>
> I have recently checked two experimental plugins into the audit tree. You
> can enable them by passing --enable-experimental to configure. One of the
> new plugins is aimed at providing audit metrics to a statsd server. The
> idea is that you can use this to relay the metrics to InfluxDB, Prometheus,
> or some other collector. Then you can use Grafana to visualize and alert.
>
> Currently, it supports the following metrics:
>
> kernel.audit.lost
> kernel.audit.backlog
> auditd.free_space
> auditd.plugin_current_depth
> auditd.plugin_max_depth
> audit_events.total_count
> audit_events.total_failed
> audit_events.avc_count
> audit_events.fanotify_count
> audit_events.logins_failed
> audit_events.logins_success
> audit_events.anomaly_count
> audit_events.response_count
>
> I'd be interested in hearing whether this would be useful, and whether these
> are the right metrics people are interested in. Should something else be
> measured? Should an example Grafana dashboard be included?
>
> Let me know what you think.
>
> -Steve
>
>
Steve,

I think this could be awesome; hoping to give it a try soon. An example
dashboard would be very helpful if you could include that.
The stats you already point out are a good start.
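For anyone wanting to exercise a dashboard before the plugin lands, statsd metrics are just short plain-text UDP datagrams, so the metric names above are easy to emit by hand. A minimal sketch (the host/port and values here are assumptions for testing, not the plugin's actual configuration):

```python
import socket

def send_gauge(name, value, host="127.0.0.1", port=8125):
    """Send one statsd gauge datagram, e.g. 'kernel.audit.lost:0|g'."""
    payload = f"{name}:{value}|g".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()
    return payload.decode()

if __name__ == "__main__":
    # Fake a couple of the metrics listed above to light up a Grafana panel.
    print(send_gauge("kernel.audit.lost", 0))        # kernel.audit.lost:0|g
    print(send_gauge("auditd.free_space", 123456))   # auditd.free_space:123456|g
```

Since UDP is fire-and-forget, this works even with no statsd server listening, which makes it handy for smoke tests.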

I'd also like to have a way to scan the per-machine kernel-assigned event
IDs for missing ones. Would that need a separate plugin, or could
something be done within this setup?
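On the event-ID question: since the kernel assigns serial numbers monotonically per machine, missing events can in principle be spotted by looking for holes in the msg=audit(time:serial) sequence. A rough sketch of that idea (not an existing plugin; the log-line format is the only assumption):

```python
import re

# Serial number inside msg=audit(seconds.millis:serial)
SERIAL_RE = re.compile(r"msg=audit\(\d+\.\d+:(\d+)\)")

def missing_serials(lines):
    """Return serial numbers absent from a stream of audit records."""
    serials = sorted({int(m.group(1)) for l in lines if (m := SERIAL_RE.search(l))})
    gaps = []
    for prev, cur in zip(serials, serials[1:]):
        gaps.extend(range(prev + 1, cur))
    return gaps

if __name__ == "__main__":
    sample = [
        "type=SYSCALL msg=audit(1516365981.138:13125): ...",
        "type=SYSCALL msg=audit(1516365982.000:13126): ...",
        "type=SYSCALL msg=audit(1516365983.500:13129): ...",
    ]
    print(missing_serials(sample))  # -> [13127, 13128]
```

Multi-record events share one serial, so the set() above collapses them before gap detection.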
I'm pretty sure there are more metrics that would be desired as well as
some derived; e.g. take a per-user login/logoff set to identify time spent
on a particular machine (screenlocks notwithstanding, but maybe
eventually). Or perhaps if clients send events+heartbeats, when are they
up/down? These are some of the questions I've heard from security overseers.

And while some of these may not be inspected directly by the end users, in
the case of trouble calls or questions they might be the exact thing I'd
ask them to relay to me in order to diagnose a problem or answer a question
remotely.

Thx,
LCB

-- 

LC (Lenny) Bruzenak
le...@magitekltd.com

Re: Remote Logging of auditd

2018-01-22 Thread LC Bruzenak

On 01/19/2018 07:51 AM, Joshua Ammons wrote:


Hi All,

I wanted to send this out to see if anyone has encountered this 
situation before and, if so, how you handled it.  We send our auditd 
logs to a remote central logging server.  Is there any way to decode 
the hex encoded fields before sending them along?  Similar to the 
ausearch [-i] flag which interprets the encoded value?


For example, the  “data” field in a USER_TTY event:

type=USER_TTY msg=audit(1516365981.138:13125): pid=7161 uid=0 
auid=1007 ses=65 data=73657276696365206175646974642073746F70


type=USER_TTY msg=audit(1516367294.919:13331): pid=7161 uid=0 
auid=1007 ses=65 data=73797374656D63746C2073746F7020617564697464


type=USER_TTY msg=audit(1516367648.904:13375): pid=7161 uid=0 
auid=1007 ses=65 data=6964206A6F7368616D6D6F6E73


type=USER_TTY msg=audit(1516367664.832:13378): pid=7161 uid=0 
auid=1007 ses=65 
data=636174202F6574632F706173737764207C2067726570206A6F7368616D6D6F6E73


type=USER_TTY msg=audit(1516367715.041:13388): pid=7161 uid=0 
auid=1007 ses=65 
data=636174202F7661722F6C6F672F61756469742F61756469742E6C6F67207C20677265702022555345525F54545922


We have the following configured in our /etc/rsyslog.conf file:

:programname, isequal, "audispd" @SERVER_NAME:514

:programname, isequal, "auditd" @SERVER_NAME:514

^^ This, however, will send those fields in their raw format and does 
not decode the values.  Is it possible to natively interpret those 
fields before sending them to the remote server?
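Short of patching the pipeline, the hex fields themselves decode trivially; this sketch shows what ausearch -i does for the data= field (an rsyslog-side transform would still be needed to apply it in flight):

```python
def decode_hex_field(value):
    """Decode an audit hex-encoded field such as USER_TTY's data=."""
    return bytes.fromhex(value).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # The first two example records above decode to the commands typed on the TTY.
    print(decode_hex_field("73657276696365206175646974642073746F70"))      # service auditd stop
    print(decode_hex_field("73797374656D63746C2073746F7020617564697464"))  # systemctl stop auditd
```

Audit hex-encodes a field whenever it could contain whitespace or untrusted bytes, which is why errors="replace" is a safer choice than assuming clean UTF-8.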





Joshua,

What audit version are you using?
LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: audit rule problem

2017-11-15 Thread LC Bruzenak

On 11/15/2017 10:16 AM, Steve Grubb wrote:

OK. That's something that can be checked.  And I confirm this is the case.

[root@x2 ~]# auditctl -a always,exit -F arch=b64 -S open -F 
subj_type=doesnt_exist_t
[root@x2 ~]# echo $?
0
[root@x2 ~]# auditctl -l | grep doesnt_exist_t
-a always,exit -F arch=b64 -S open -F subj_type=doesnt_exist_t
[root@x2 ~]# auditctl -d always,exit -F arch=b64 -S open -F 
subj_type=doesnt_exist_t

That said, you can also write a rule with auid=4 which would be an invalid
user. The kernel has no concept of what uids are valid. So, I expect we have
the same issue with policy. I don't know if the kernel can check if a type is
valid. Typically policy is compiled into numbers and that's what the kernel
understands.

-Steve


Thanks Steve. I wouldn't mind as much if it accepted types not currently
loaded (though a warning would be nice); the troubling part is that it
subsequently discards valid events because of the bogus type.


LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: audit rule problem

2017-11-15 Thread LC Bruzenak

On 11/14/2017 05:38 PM, LC Bruzenak wrote:

System:
Linux audit 2.6.32-696.3.2.el6.x86_64 #1 SMP Wed Jun 7 11:51:39 EDT 
2017 x86_64 x86_64 x86_64 GNU/Linux

userspace audit-2.4.5-3
Red Hat Enterprise Linux Client release 6.9 (Santiago)

I changed this line in /etc/audit/audit.rules from:
-a exit,always  -F arch=b64 -S mount -S umount2 -k mount
to this:
-a exit,always  -F arch=b64 -S mount -S umount2 -F 
subj_type!=nothing_t -k mount


Reloaded my rules, and now doing (as root):
# umount /boot; mount /boot

no longer produces audit events. I did this because on another system 
(mls policy, with lots of custom types) I lost the events once I 
included some custom types installed and operational on the system, so 
I was just trying to reduce this to a reproducible case. I can almost 
see that a non-existent type might fail, but then the rule should probably
fail to load.


Ugh.
Looks like the entire problem was a non-existent subject type; I had a 
typo in the mls policy case.
So the rules accept a type which does not exist, give no warning, and then
fail to report any events.

That's my story and I'm sticking to it...

Thx,
LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





audit rule problem

2017-11-14 Thread LC Bruzenak

System:
Linux audit 2.6.32-696.3.2.el6.x86_64 #1 SMP Wed Jun 7 11:51:39 EDT 2017 
x86_64 x86_64 x86_64 GNU/Linux

userspace audit-2.4.5-3
Red Hat Enterprise Linux Client release 6.9 (Santiago)

I changed this line in /etc/audit/audit.rules from:
-a exit,always  -F arch=b64 -S mount -S umount2 -k mount
to this:
-a exit,always  -F arch=b64 -S mount -S umount2 -F subj_type!=nothing_t 
-k mount


Reloaded my rules, and now doing (as root):
# umount /boot; mount /boot

no longer produces audit events. I did this because on another system 
(mls policy, with lots of custom types) I lost the events once I 
included some custom types installed and operational on the system, so I 
was just trying to reduce this to a reproducible case. I can almost see 
that a non-existent type might fail, but then the rule should probably fail to load.


However, the bigger problem is that trying to add my other valid custom 
types into the exclusion on the mls policy machine is causing me to lose 
events. Any ideas?
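A pre-load lint of the ruleset would have caught the typo described here; the sketch below flags any subj_type value not in a list of known SELinux types (obtaining that list, e.g. from `seinfo -t`, is left as an assumption):

```python
import re

# Matches -F subj_type=foo_t and -F subj_type!=foo_t
SUBJ_RE = re.compile(r"-F\s+subj_type!?=(\S+)")

def unknown_subj_types(rule_lines, valid_types):
    """Return (rule, type) pairs whose subj_type is not a known type."""
    bad = []
    for line in rule_lines:
        for t in SUBJ_RE.findall(line):
            if t not in valid_types:
                bad.append((line.strip(), t))
    return bad

if __name__ == "__main__":
    rules = ["-a exit,always -F arch=b64 -S mount -S umount2 "
             "-F subj_type!=nothing_t -k mount"]
    # "nothing_t" is not in the (hypothetical) list of policy types.
    print(unknown_subj_types(rules, {"unconfined_t", "auditd_t"}))
```

Run against /etc/audit/audit.rules before an auditctl -R, this would turn a silent event-loss bug into a visible warning.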


Thx,
LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: ausearch message types

2016-10-31 Thread LC Bruzenak

On 10/31/2016 04:21 PM, LC Bruzenak wrote:

I'm on the 2.4.5 version of the audit code.
Has anyone thought about or implemented an exclusionary message list, 
such as:


ausearch -m ALL-avc,user_avc -ts today


Actually, in this case I'm running the search from a script, so I can
easily take the stderr results from "ausearch -i -m help", pipe them
into a sed substitution which removes the preceding text, drops the
types I don't want, and replaces the spaces with commas.
So for now I am set; still, I think this would be helpful to have at
some point.
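The sed pipeline described above can be expressed compactly; given the whitespace-separated type list that "ausearch -i -m help" prints, this sketch builds the comma-delimited -m argument with a few types excluded:

```python
def all_but(help_types, excluded):
    """Build the comma-separated -m argument from a type list minus exclusions."""
    skip = {e.upper() for e in excluded}
    keep = [t for t in help_types.split() if t.upper() not in skip]
    return ",".join(keep)

if __name__ == "__main__":
    # A tiny stand-in for the real (much longer) list ausearch prints.
    types = "ADD_GROUP ADD_USER AVC USER_AVC LOGIN USER_LOGIN"
    arg = all_but(types, ["avc", "user_avc"])
    print(arg)  # ADD_GROUP,ADD_USER,LOGIN,USER_LOGIN
    # ...which would then feed: ausearch -m <arg> -ts today
```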


--
LC Bruzenak
magitekltd.com



ausearch message types

2016-10-31 Thread LC Bruzenak

I'm on the 2.4.5 version of the audit code.
Has anyone thought about or implemented an exclusionary message list, 
such as:


ausearch -m ALL-avc,user_avc -ts today

I'd like to be able to search in this manner, where I exclude certain 
message types.
I could write a patch, but if anyone has already done this I'd happily 
use theirs.
The message type list is so long that it would be painful to type out
the comma-delimited list of all but a couple.


Thx,
LCB

--
LC Bruzenak
magitekltd.com



Re: ausearch checkpoint question

2016-10-03 Thread LC Bruzenak

On 09/29/2016 04:34 PM, Burn Alting wrote:

Lenny,

I typically use

TZ=UTC ausearch -i --input-logs \
--checkpoint /auditd_checkpoint.txt

but I also set auditd.conf to have 9 x 32MB log files so the checkpoint
code only scans the more recent files.


OK; thanks Burn. I store 20 x 100MB files; I need that many for my purposes.
I'll be testing it again under controlled conditions; seems like what I 
need in one instance.


--
LC (Lenny) Bruzenak
le...@magitekltd.com





ausearch checkpoint question

2016-09-29 Thread LC Bruzenak
I'm using the 2.4.5-3 audit rpm set and I tried using the ausearch 
"checkpoint" option a couple weeks ago.
This was on a moderately busy system (judging by my own 
systems/experience) generating say 300-400MB of data/day.


I tried the checkpoint option in a 5-minute cron job, and I noticed that 
in comparison to the "-ts recent" option, it took far longer to complete.
The "recent" option result was less than a second, whereas the 
checkpoint version took ~20 seconds every 5 minutes.


It's possible there were other factors at play; e.g. it was used on a 
mls-policy machine, and although I saw no AVCs, it's possible there were 
some access issues I didn't have time to investigate.
On my intended application, I'll be on a standard targeted-policy 
machine so this won't be a potential factor.


I need to test this again, since I'm considering using the ausearch
checkpoint capability for some new requirements. I was wondering whether
any timing results have been gathered, or if there are any tips and
tricks to getting the most out of it. Also, the man page section
describing this is a little confusing to me, so if anyone has a script
segment that would be very helpful.
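Since a script segment was requested: the sketch below is one way to drive the checkpoint from cron. Only the two ausearch options quoted in this thread are used; the wrapper logic and the checkpoint path are assumptions:

```python
import subprocess

def build_cmd(checkpoint="/var/run/ausearch_checkpoint.txt"):
    """Command line equivalent to the invocation quoted in this thread."""
    return ["ausearch", "-i", "--input-logs", "--checkpoint", checkpoint]

def run_incremental(checkpoint="/var/run/ausearch_checkpoint.txt"):
    # Returns only events newer than the last run; on success, ausearch
    # rewrites the checkpoint file itself to mark the new position.
    proc = subprocess.run(build_cmd(checkpoint), capture_output=True, text=True)
    return proc.stdout

if __name__ == "__main__":
    print(" ".join(build_cmd()))
```

A cron entry would then call run_incremental() every five minutes and feed the returned text to whatever downstream processing is needed.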


Thanks in advance,
LCB

--
LC Bruzenak
magitekltd.com



Re: New draft standards

2015-12-29 Thread LC Bruzenak

On 12/14/2015 08:34 AM, Steve Grubb wrote:

That is not exactly what I proposed. What I was proposing was to record the
translation of things that could change between systems and thus prevent
correct interpretation later. Doing all translations is technically possible
but would slow down auditd just a bit and increase the amount of data on disk.
But doing this is not really necessary for the native audit tools.

But I guess this gives me an opportunity to ask the community what tools they
are using for audit log collection and viewing? It's been a couple of years
since we had this discussion on the mailing list and I think some things have changed.

Do people use ELK?
Apache Flume?
Something else?

It might be possible to write a plugin to translate the audit logs into the
native format of these tools.


Sorry for the late reply. Translating the salient details is for me 
important.

This is especially true on systems where:
- aggregation is happening from one or more different machines (and 
cannot assume federated UIDs), and
- where records are required to be kept over long periods of time 
(system updates happen, UIDs are changed, people leave, etc)


I realize it carries a processing burden somewhere; this is inevitable 
and I believe we'll need to design for this.
We're auditing for a reason; we need proof of who did what and in 
varying degrees I believe this means persistence of accountability.
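The translation being argued for here, resolving numeric IDs into names at collection time before accounts drift or disappear, can be sketched in a few lines (the field regex is an assumption made for illustration; auparse does this properly):

```python
import pwd
import re

# Matches uid=N and auid=N fields in a raw audit record.
UID_RE = re.compile(r"\b(a?uid)=(\d+)")

def annotate_uids(record):
    """Append the current local account name to each uid/auid field."""
    def repl(m):
        try:
            name = pwd.getpwuid(int(m.group(2))).pw_name
        except KeyError:
            name = "unknown"
        return f"{m.group(0)}({name})"
    return UID_RE.sub(repl, record)

if __name__ == "__main__":
    print(annotate_uids("type=USER_TTY msg=audit(1.0:1): pid=7161 uid=0 auid=1007"))
```

Doing this on the originating machine, before aggregation, is exactly what preserves accountability once UIDs are reused or accounts are removed.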


Because I'm almost a one-stop shop where I work, and the auditing 
requirements are specific and particular, I have a homegrown log 
collection and viewing solution for now but would prefer to incorporate 
a flexible, more useful user tool. So I'm in the "something else" 
category but somewhat open to change.


LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: audit 2.4.4 released

2015-08-14 Thread LC Bruzenak

On 08/13/2015 02:30 PM, Steve Grubb wrote:

...

If you ausearch -i on that file, your screen will get underlines with all the
text. An attacker could change this to be worse than just underlining your
text. They could try to write to the window title and then bounce that back in
black on black text to the command prompt hoping the admin will press enter.

Wow; that's something unexpected. Thanks for this extra info Steve; I 
may need to backport to my version.
Are these changes isolated to the ausearch/aureport code sets or inside 
libs?
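For older versions that can't take the fix, the attack described above depends on raw control bytes reaching the terminal, so a viewer-side scrub is a reasonable stopgap. A minimal sketch (the exact upstream sanitization may differ):

```python
import re

# ANSI CSI escape sequences, e.g. "\x1b[4m" (underline) or "\x1b[0m" (reset).
CSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")
# Remaining C0 control characters (except tab/newline) plus DEL.
CTRL_RE = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def scrub(text):
    """Strip terminal escape sequences and control bytes before display."""
    return CTRL_RE.sub("", CSI_RE.sub("", text))

if __name__ == "__main__":
    print(scrub("comm=\x1b[4mevil\x1b[0m name"))  # comm=evil name
```

Removing CSI sequences first, then stray control bytes, also defuses the window-title trick since OSC/ESC bytes never reach the terminal.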


Thx,
LCB

--
LC Bruzenak
magitekltd.com



Re: [PATCH V6 05/10] audit: log creation and deletion of namespace instances

2015-05-14 Thread LC Bruzenak

On 05/14/2015 11:21 AM, Steve Grubb wrote:

Then I'd suggest we either scrap this set of patches and forget auditing of
containers. (This would have the effect of disallowing them in a lot of
environments because violations of security policy can't be detected.)

Again +1.

I personally have envisioned a use case in which I feel containers would
be architecturally ideal. However, in my situation, and I'm fairly sure
for anyone for whom the security requirements matter (i.e., WHY we use
SELinux in the first place), this is mandatory.


Without context-aware definitive audit records which discretely identify 
people/actions/objects, the use of any otherwise attractive technology 
is untenable.


LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: [PATCH V6 05/10] audit: log creation and deletion of namespace instances

2015-05-14 Thread LC Bruzenak

On 05/14/2015 09:57 AM, Steve Grubb wrote:

Also, if the host OS cannot make sense of the information being logged because
the pid maps to another process name, or a uid maps to another user, or a file
access maps to something not in the host's, then we need the container to do
its own auditing and resolve these mappings and optionally pass these to an
aggregation server.

Nothing else makes sense.

+1

Except, given that it IS a container, I'd say that for anyone who cares
about the audited data, passing events to an aggregation server would not
be optional.

At least not for any use-case I can envision.

LCB

--
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: How to audit socket close system call?

2015-01-09 Thread LC Bruzenak
On 01/08/2015 04:55 PM, Alexander Viro wrote:
> Incidentally, that's a fine example of the reasons why syscall audit is 
> useless
> for almost anything other than CYA.  It's not that syscall tracing is useless 
> -
> strace can be quite useful, actually.  It's the bogus impression of coverage
> in case of watching what live system does - a whole lot of events simply do
> not map on "somebody had done a syscall with such and such arguments".
All true & well put; thank you.
The CYA factor IS important. But the translation magic from user actions
to syscalls (and back - from intent to result) is where it gets interesting.
The forensics challenge with the data we have is what some of us are
grappling with now (forever).

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: STIG issue with auditctl -l

2014-11-20 Thread LC Bruzenak
On 11/20/2014 09:42 AM, leam hall wrote:
> The RHEL 6 STIG says:
>
>   auditctl -l | grep syscall | grep chmod
>
> Should return lines referring to chmod. Those lines are in my
> audit.rules. Just doing an:
>
>   auditctl -l | grep syscall
>
> Returns nothing. I've got no issues telling the STIG folks how to do
> their work, but wanted to make sure I know what I'm talking about
> first.
>
> Am I missing something if there's no "syscall" line(s) returned?
>
> Thanks!
>
> Leam
>

The auditctl command returns the rules loaded into the kernel.
It looks to me as if you might not have a running auditd, or else your rules
were not all successfully loaded.
This can happen if there was an error inside the ruleset and you didn't
have the "-c" or "-i" flag set to continue loading the rules.
Check your syslog for any errors on startup; also run auditctl -l and
compare the loaded rules against your file.
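The comparison suggested here, loaded rules versus the rules file, can be automated. A sketch that diffs the two lists after dropping comments and blank lines (flag normalization such as "-a always,exit" vs "-a exit,always" is ignored for simplicity):

```python
def normalize(lines):
    """Keep non-comment, non-empty rule lines, whitespace-collapsed."""
    return {" ".join(l.split()) for l in lines
            if l.strip() and not l.strip().startswith("#")}

def missing_from_kernel(file_lines, loaded_lines):
    """Rules present in audit.rules but not reported by auditctl -l."""
    return sorted(normalize(file_lines) - normalize(loaded_lines))

if __name__ == "__main__":
    # file_lines would come from /etc/audit/audit.rules,
    # loaded_lines from the output of `auditctl -l`.
    file_rules = ["# chmod watch",
                  "-a always,exit -F arch=b64 -S chmod -k perm_mod"]
    loaded = []
    print(missing_from_kernel(file_rules, loaded))
```

Anything this prints is a rule that failed to load, which is exactly the STIG finding being chased in this thread.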

HTH,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: Excluding few executable from audit.rules in redhat6.5

2014-11-17 Thread LC Bruzenak
On 11/17/2014 09:30 AM, Steve Grubb wrote:
> Well, what do you really want to do? In general, I'd look at the original 
> auditing rule to see if its scope can be narrowed. In this case, it appears 
> that you are wanting all calls to chmod. Why? Are you more concerned with 
> failed calls to chmod, meaning a user is trying to change system files? Are 
> system daemons calling chmod OK? Or do you really want everything? Or do you 
> want no events at all for that daemon no matter what the syscall?
>
> The event you are showing is that app successfully making a directory world 
> writable/readable. Its setting the sticky bit, so its "safe."
I think this is auditing because the supplied STIG rules specify it.
The "perm_mod" key is the hint. You probably do not want to remove this
rule for all chmod syscalls.

You cannot exclude an executable itself from the rule set by name.
The "exclude" option only applies to event types.
 
You could exclude it by type, except it is running as a generic
unconfined_t.
Perhaps it can be mitigated by "-F path !=" or something similar.
Check the auditctl man page for options.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: audispd audit-remote plugin and uid, gid, euid, suid, fsuid, egid, sgid, fsgid

2014-11-13 Thread LC Bruzenak
On 11/13/2014 09:01 AM, Steve Grubb wrote:
> They could unless use of those utilities are restricted. You could also setup 
> a centralized user name management system to help things. But if you want to 
> tackle this yourself, I think the uids, gids, and hostnames are the main 
> things that need interpreting locally. Everything else can be done after the 
> fact.
This subject is one I've griped about before. I'm amazed that more people
haven't mentioned it.
From an assurance perspective, having the human-understandable names of
the accounts is important.
If auditing systems aggregate records from multiple sources, this is
pretty big.

Until we can easily do something like the following, this isn't dire:

machine:         local aggregator    enterprise aggregator
------------     ----------------    ---------------------
finance sys1 ->
finance sys2 ->  fin. aggr \
finance sys3 ->             \
engineering1 ->              \
engineering2 ->  eng. aggr  ->  enterprise aggregator
engineering3 ->              /
marketing1   ->             /
marketing2   ->  mark. aggr /
marketing3   ->

In fact, to me, the ultimate assurance architecture would be to have the
username management system reside on the local auditing aggregator with
a very controlled/audited/secure interface.
Then I'd interpret the uids, gids and hns there.

My $0.02 FWIW,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: Remote logging with autitd

2014-11-02 Thread LC Bruzenak
On 11/02/2014 03:16 PM, Wouter van Verre wrote:
> Hi Steve,
>
> Many thanks for your response.
> I will be reading the presentation and the examples in the tarball and
> go from there for implementing my processing plugin.
>
> Regarding the logging to disk on the central server:
> I have node names set up for both servers now and am now getting the
> following behaviour:
>On the client server I can see the events being prefixed with
> node=Elephant in the log on that server.
>On the central server I can see that local events are being
> prefixed with node=Mongoose.
>However, events that were sent to the central server by the client
> server show up in the central server's log with
>node=localhost.localdomain. So it seems that the node information
> gets lost between the client and central server?
>
> Would you have any idea why the node information is lost?
>
>
> Many thanks,
>
> Wouter

Check /etc/audisp/audispd.conf on your client.
Look at the line with "name_format="; it probably says "hostname"
(case insensitive).
Test this by running the "hostname" command on your client.
See the audispd.conf man page for more info.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com




Re: [RFC][PATCH] audit: log join and part events to the read-only multicast log socket

2014-10-23 Thread LC Bruzenak
On 10/22/2014 04:29 PM, Paul Moore wrote:
> Well, like I said, It's probably safer that way as the code will work 
> regardless.  Time to break bad habits :)
>

I hear you. But there's working and there's working well.
As long as we don't suffer a search response degradation by changing the
assumptive order, as I said, I'm OK with going back and reworking code.
If it makes searching real data unusable, it's now broken some
operational stuff.

As it is, I already have to move the last 24 hours off to a different
area and make searching go day-by-day only. Otherwise the logs are too big.
I chose a 100MB file size a while back as a sweet spot based on the
observation that each file could be parsed in some time acceptable to
security managers.



Just some info on the amount of data I've seen in real world examples:
On a normally-busy site, each day, we will generate ~1-4GB of events. It
used to be much more, but our team has managed to keep it down. I'd say
that often more time is spent reducing the events than analyzing them.
If not, they grow to a size you simply cannot parse in people-time. I
did have way more, but we've become more adept at spotting and 
preventing event storms.

Here is a current test machine's audit log directory. I have to go look
to see what went wrong, obviously something did, but this kind of thing
can and does happen in the real world.
Note the time stamps. 100MB every minute or two is moving right along.
We push all audit data off to a collector machine so it doesn't dominate
the disks or otherwise hog the business server resources. BTW - funny
that I just happened to login to this machine to check its status.
Whatever was running amok stopped soon after 00:57. This sometimes just
happens. I've seen this go on for hours more in fielded systems. Almost
always something is really wrong, but the system is supposed to maintain
security and be usable through these times as well.

[root@audit audit]# ls -alt
total 4179016
-rw---.  1 root root  84835735 Oct 23 02:49 audit.log
drwxr-x---.  2 root root  4096 Oct 23 00:57 .
-r.  1 root root 104857789 Oct 23 00:57 audit.log.1
-r.  1 root root 104857663 Oct 23 00:56 audit.log.2
-r.  1 root root 104857639 Oct 23 00:55 audit.log.3
-r.  1 root root 104857608 Oct 23 00:54 audit.log.4
-r.  1 root root 104857874 Oct 23 00:52 audit.log.5
-r.  1 root root 104857864 Oct 23 00:51 audit.log.6
-r.  1 root root 104857670 Oct 23 00:50 audit.log.7
-r.  1 root root 104857740 Oct 23 00:49 audit.log.8
-r.  1 root root 104857644 Oct 23 00:48 audit.log.9
-r.  1 root root 104857871 Oct 23 00:47 audit.log.10
-r.  1 root root 104857602 Oct 23 00:46 audit.log.11
-r.  1 root root 104857755 Oct 23 00:45 audit.log.12
-r.  1 root root 104857907 Oct 23 00:44 audit.log.13
-r.  1 root root 104857973 Oct 23 00:43 audit.log.14
-r.  1 root root 104857632 Oct 23 00:42 audit.log.15
-r.  1 root root 104857843 Oct 23 00:42 audit.log.16
-r.  1 root root 104857769 Oct 23 00:41 audit.log.17
-r.  1 root root 104857864 Oct 23 00:40 audit.log.18
-r.  1 root root 104857862 Oct 23 00:39 audit.log.19
-r.  1 root root 104857757 Oct 23 00:38 audit.log.20
-r.  1 root root 104857601 Oct 23 00:36 audit.log.21
-r.  1 root root 104857663 Oct 23 00:35 audit.log.22
-r.  1 root root 104857711 Oct 23 00:34 audit.log.23
-r.  1 root root 104857806 Oct 23 00:33 audit.log.24
-r.  1 root root 104857722 Oct 23 00:32 audit.log.25
-r.  1 root root 104857684 Oct 23 00:31 audit.log.26
-r.  1 root root 104857761 Oct 23 00:30 audit.log.27
-r.  1 root root 104857975 Oct 23 00:29 audit.log.28
-r.  1 root root 104857702 Oct 23 00:28 audit.log.29
-r.  1 root root 104857771 Oct 23 00:27 audit.log.30
-r.  1 root root 104857990 Oct 23 00:25 audit.log.31
-r.  1 root root 104857629 Oct 23 00:24 audit.log.32
-r.  1 root root 104857663 Oct 23 00:23 audit.log.33
-r.  1 root root 104857852 Oct 23 00:22 audit.log.34
-r.  1 root root 104857841 Oct 23 00:21 audit.log.35
-r.  1 root root 104857634 Oct 23 00:20 audit.log.36
-r.  1 root root 104857690 Oct 23 00:19 audit.log.37
-r.  1 root root 104857623 Oct 23 00:18 audit.log.38
-r.  1 root root 104857711 Oct 23 00:17 audit.log.39
-r.  1 root root 104857867 Oct 23 00:02 audit.log.40


I started an "aureport -i --summary". I realize my aureport and ausearch
don't use the auparse library, but they will soon, if not already, in
the newer code.

I just wanted to give you a feel for data amounts in your
considerations. Also I have designs for some new tools I want to deploy
which definitely will use libauparse, if it's usable.

At 4GB+ of audit data, this report isn't returning yet. The nice thing
is it isn't leaking memory, or if it is, it isn't gushing (from "

Re: [RFC][PATCH] audit: log join and part events to the read-only multicast log socket

2014-10-22 Thread LC Bruzenak
On 10/22/2014 03:44 PM, Paul Moore wrote:
> We haven't changed anything yet, but I strongly believe we need to do away 
> with field ordering.  The good news is that if you explicitly search for the 
> field instead of relying on a fixed order the code should be more robust and 
> work either way. ;)
I have no doubt my old code looks like Steve's first example, not the
second.
But as I said, code can be changed if the assumptions about ordering are
thrown out.

You're making a pretty big splash over here Paul! Very impressive...
:-)

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: [RFC][PATCH] audit: log join and part events to the read-only multicast log socket

2014-10-22 Thread LC Bruzenak
On 10/22/2014 03:06 PM, Paul Moore wrote:
>> > But it illustrates the point. There are tools that depend on an ordering 
>> > and
>> > format. There are more programs that just ausearch that needs to be
>> > considered if the fields change. For example, Someone could do things like
>> > this:
>> > 
>> > retval = auparse_find_field(au, "auid");
>> > retval = auparse_next_field(au);
>> > retval = auparse_next_field(au);
>> > retval = auparse_find_field(au, "res");
>> > 
>> > Where, if the field ordering can't be guaranteed, the code becomes:
>> > 
>> > retval = auparse_find_field(au, "auid");
>> > retval = auparse_first_field(au);
>> > retval = auparse_find_field(au, "pid");
>> > retval = auparse_first_field(au);
>> > retval = auparse_find_field(au, "uid");
>> > retval = auparse_first_field(au);
>> > retval = auparse_find_field(au, "res");
> In my mind the latter code is more robust and preferable.
>
OK; I swear if you change this I'm going to parse EVERY field straight
into a SQLite file first, since I'd have to go change code anyway.
:-)

I have code based on the examples, from years back, which believes the
fields are ordered. It can be changed if needed; I'd rather not, but could.
I suspect there are others...

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: [RFC][PATCH] audit: log join and part events to the read-only multicast log socket

2014-10-22 Thread LC Bruzenak
On 10/22/2014 01:18 PM, Eric Paris wrote:
> On Wed, 2014-10-22 at 10:51 -0500, LC Bruzenak wrote:
>> On 10/22/2014 10:12 AM, Eric Paris wrote:
>>> On Wed, 2014-10-22 at 10:25 -0400, Steve Grubb wrote:
>>>
>>>> 1) For the *at syscalls, can we get the path from the FD being passed to be
>>>> able to reconstruct what is being accessed?
>>> You might sometimes be able to get A path.  But every time anyone ever
>>> says THE path they've already lost.  There is no THE path.  There might
>>> be NO path.  Every single request with THE path is always doomed to
>>> fail.
>> IIUC we've got to have some assurance that the path is legit for forensics.
>> Technically I believe I understand and concur with what you are saying
>> Eric, but as a guy on the far end of the process I know I need to be
>> able to reference a complete path to a FD.
>> One which we believe did exist at the time the mod occurred. To me,
>> sometimes isn't really good enough. But A path probably is.
>> ...
> From the PoV of the process in question there was, at some point, A
> path.  That I agree with.  But imagine I clone a new mount  namespace
> and don't share my changes with the parent namespace.  Now I mount
> something new in that child namespace.  What is A path for a file in the
> new mount?  From the parent namespace there is NO path, ABSOLUTELY NO
> PATH.  (guess which mount namespace auditd lives in, by the way).  From
> the PoV of the processes in the child mount namespace there is A path,
> but it's possibly/probably completely meaningless to the
> admin.  /etc/shadow != /etc/shadow the admin cares about...  readlink()
> doesn't work in this case either.  Sometimes there just plain is no
> path.  So yeah, I'm betting MOST of the time we can come up with A path,
> but that's not exactly what you want either   :-(
OK; interesting case. Now I hate namespaces.
:-)

Perhaps we'll have to get smarter in order to be able to work backwards
through namespaces.
Or if not solvable, maybe some of us will have to decide to not allow
ad-hoc mounted namespaces. Just saying. This stuff matters.
I'm assuming we get the namespace info in userland audit events? My
RHEL6 versions are behind where you guys are...
>
>>>> 9) Can we get events for a watched file even when a user's permissions do 
>>>> not
>>>> allow full path resolution?
>>> No.
>> No?
> Say I set a watch for failure to open /path/to/my/file.
> If someone comes along and says open(/path/to/my/file) but they do not
> have execute permissions on /path/to/ their request will be denied.  Not
> because they didn't have permission to open /path/to/my/file, but
> because they didn't have permission to open /path/to.  Watches do not,
> and can not, emit a rule for that.  The rule you requested (failure to
> open /path/to/my/file) was not violated.  The kernel did not try to
> open /path/to/my/file.  It tried to open /path/to/ and died right there.
> If you care about things being unable to open /path/to/, put a watch
> on /path/to (although I'm not 100% sure such watches actually work, but at
> least the theory is right and maybe that could be fixed)
I didn't connect the "no" answer to what I read as a more general question.
Per the STIG (& added) rules there are exit watches for EACCES and
EPERM, which IIUC would be caught in your example. Not all the way down
to the end of the path but to the point of failure. Good enough.
So I probably misinterpreted the question. Your answer cleared it up
nicely; thanks.

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: [RFC][PATCH] audit: log join and part events to the read-only multicast log socket

2014-10-22 Thread LC Bruzenak
On 10/22/2014 10:12 AM, Eric Paris wrote:
> On Wed, 2014-10-22 at 10:25 -0400, Steve Grubb wrote:
>
>> 1) For the *at syscalls, can we get the path from the FD being passed to be
>> able to reconstruct what is being accessed?
> You might sometimes be able to get A path.  But every time anyone ever
> says THE path they've already lost.  There is no THE path.  There might
> be NO path.  Every single request with THE path is always doomed to
> fail.
IIUC we've got to have some assurance that the path is legit for forensics.
Technically I believe I understand and concur with what you are saying
Eric, but as a guy on the far end of the process I know I need to be
able to reference a complete path to a FD.
One which we believe did exist at the time the mod occurred. To me,
sometimes isn't really good enough. But A path probably is.
...
>> 9) Can we get events for a watched file even when a user's permissions do not
>> allow full path resolution?
> No.
No?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: Audit format utility

2014-09-26 Thread LC Bruzenak
On 09/25/2014 10:05 PM, Steve Grubb wrote:
> But this proposal is purely about output and not searching.
I get it now; thanks Steve!

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: Audit format utility

2014-09-25 Thread LC Bruzenak
On 09/23/2014 05:03 PM, Steve Grubb wrote:
> Hello,
>
> I have been doing some thinking about allowing user defined formats to be
> declared as a parameter to ausearch. Before I commit to that, I thought it
> might be interesting to create a "mockup". I have placed a utility here:
>
Steve,

I'll be testing this ASAP to see what shakes out for me.
For user-defined formats, are you thinking maybe along the lines of
allowing a regexp ability to ausearch which would be like the
ausearch-expression capability in libauparse?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





rhel6/7 question

2014-09-12 Thread LC Bruzenak
Are there any issues with a RHEL7 auditd collecting events from RHEL6
submitters?
How about any interpretation issues, on the RHEL7 side, of these events?

Thanks in advance,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com





Re: One challenge for audit - seeking ideas

2014-06-09 Thread LC Bruzenak
On 06/09/2014 04:39 AM, Burn Alting wrote:
> All,
>
> I am looking a ways to counter the situation where a user restarts a
> service and hence all that service's auditing events are attributed to
> the auid of the user who performed the restart.
>
> That is
>
> a. User logs into system (and pam sets auid)
> b. User su's or sudo's up to a service account (auid still the same).
> c. User restarts the service
> d. All audit events resulting from the service have the user's auid.
>
> At present I am looking at solution that front-end's the
> RHEL5/RHEL6 /sbin/service command which sets the auid via a
> audit_setloginuid() call and then execv's the service script and command
> arguments.
>
> I am interested in any other solutions that people may have implemented
> successfully. Especially for the systemd replacement, if it's been done.
>
> Regards
>
> Burn
>
>
Like run_init does (in the policycoreutils rpm)?

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [RFC][PATCH] audit: Simplify by assuming the callers socket buffer is large enough

2014-03-05 Thread LC Bruzenak
On 03/05/2014 10:59 AM, Steve Grubb wrote:
> The audit system has to be very reliable. It can't lose any event or record. 
> The people that really depend on it would rather have access denied to the 
> system than lose any event. This is the reason it goes to such lengths.
+1

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: ausearch

2014-01-03 Thread LC Bruzenak
On 01/03/2014 07:58 AM, David Flatley wrote:
> When running "ausearch -i", does this read both
> the /var/log/audit/audit.log and the rotated log files in the same
> directory? Thanks.
It does, unless you specify the "-if " option.
Remember: if called from a cron script, use the "--input-logs" option.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH] Fixed reason field in audit signal logging

2013-11-07 Thread LC Bruzenak
On 11/07/2013 09:05 AM, Steve Grubb wrote:
>
> I am confused. This is the abnormal end event I have:
>
> type=ANOM_ABEND msg=audit(1303339663.307:142): auid=4325 uid=0 gid=0 ses=1 
> subj=unconfined_u:unconfined_r:unconfined_t:s0 pid=3775 comm="aureport" sig=11
>
> Why / when did we start adding text explanations? We should not do that. We 
> didn't have it before and it should not have been added. The signal number is 
> enough to identify the problem.
>
> If we did need a reason= field, all these strings with spaces will get 
> separated on parsing. They should be like "memory-violation" or "received-
> abort". And would it be better to hide this in the audit_log_abend function?
> I honestly don't understand why this was added.
>
> -Steve

Whoops; looks like I jumped the gun. I also have the same results:
node=test1 type=ANOM_ABEND msg=audit(1383674813.174:5025253):
auid=4294967295 uid=0 gid=0 ses=4294967295
subj=system_u:system_r:xserver_t:s0-s15:c0.c1023 pid=5537 comm="X" sig=6

It looked like it would add value at first read.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH] Fixed reason field in audit signal logging

2013-11-07 Thread LC Bruzenak
On 11/07/2013 08:43 AM, Eric Paris wrote:
> On Thu, 2013-11-07 at 19:09 +0530, Paul Davies C wrote:
>> The audit system logs the signals that leads to abnormal end of a process.
>> However , as of now , it always states the reason for failure of a process as
>> "memory violation" regardless of the signal delivered. This is due to the
>> audit_core_dumps() function pass the reason for failure blindly to the
>> audit_log_abend() as "memory violation".
>>
>> This patch changes the audit_core_dumps() function as to pass on the right
>> reason to the audit_log_abend based on the signal received.
>>
>> Signed-off-by:Paul Davies C
> Acked-by: Eric Paris 
>
> But we really should wait for an Ack and thoughts from steve grubb
>
FWIW I really like this fix. It will be valuable to me (once I get it in
my RHEL-6.2 systems); thanks!

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Audit Cross Compile Support

2013-09-20 Thread LC Bruzenak
On 09/20/2013 10:18 AM, clsho...@rockwellcollins.com wrote:
>
>
> I don't believe mock will be able to execute the gen_tables
> executables that are built for the ARM/PPC.  I think the only way I
> could do that would be to setup a QEMU target and that seems a little
> excessive.
>
Unless I do not understand, it appears that the mock build on koji is
able to do this. Is the arch different?
http://kojipkgs.fedoraproject.org//packages/audit/2.3.2/1.fc20/data/logs/armv7hl/build.log

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com


Re: Audit Cross Compile Support

2013-09-20 Thread LC Bruzenak
On 09/20/2013 10:22 AM, LC Bruzenak wrote:
> On 09/20/2013 10:18 AM, clsho...@rockwellcollins.com wrote:
>>
>> I don't believe mock will be able to execute the gen_tables
>> executables that are built for the ARM/PPC.  I think the only way I
>> could do that would be to setup a QEMU target and that seems a little
>> excessive. 
>
> I wonder how koji does it?
>
> I see an ARM build there:
> http://koji.fedoraproject.org/koji/buildinfo?buildID=439098
>
Oh. There are different build hosts supporting different arches.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com


Re: Audit Cross Compile Support

2013-09-20 Thread LC Bruzenak
On 09/20/2013 07:33 AM, clsho...@rockwellcollins.com wrote:
> Everyone,
>
> I apologize for my previous patch submission (8/23/2013 set of
> patches) without any background information.  I am looking to cross
> compile audit for the ARM and PPC platforms.  I am not sure if this
> has been looked into before but I would get some feedback on a general
> approach.  
>
> I tried to do a base cross compile but I ran into an issue when the
> table header files are generated, the executable generated by the
> makefile were built for the target and not the host.  I modified the
> makefile to build them for the host but I realized the executables
> would pull in the headers from the host rather than the target.  I
> attempted to work around this by porting the gen_tables.c algorithm to
> a python script to duplicate the header file generation using the
> target headers rather than the host.  Is this a good approach?  Is
> there a better way this could be done that does not
> use python?  Is this a desired feature?

I use mock. It's pretty straightforward.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com


Re: file watch question

2013-08-16 Thread LC Bruzenak
On 08/16/2013 03:19 PM, LC Bruzenak wrote:
> The line saying, "Unlike most syscall auditing rules, watches do not
> impact performance based on the number of rules sent to the kernel" -
> does this mean that BOTH the "-w" and "syscall" rules have no
> performance impact?
>
> Thx,
> LCB
>
Nevermind; I see that the rule in force (with auditctl -l) is the same
regardless of the way it is specified in the auditctl line.

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



file watch question

2013-08-16 Thread LC Bruzenak
Reading the man page for auditctl, looking at file watch rules I see this:

  -w path
      Insert a watch for the file system object at path. You cannot
      insert a watch to the top level directory. This is prohibited by
      the kernel. Wildcards are not supported either and will generate
      a warning. The way that watches work is by tracking the inode
      internally. If you place a watch on a file, it's the same as using
      the -F path option on a syscall rule. If you place a watch on a
      directory, it's the same as using the -F dir option on a syscall
      rule. The -w form of writing watches is for backwards
      compatibility and the syscall based form is more expressive.
      Unlike most syscall auditing rules, watches do not impact
      performance based on the number of rules sent to the kernel. The
      only valid options when using a watch are the -p and -k. If you
      need to do anything fancy like audit a specific user accessing a
      file, then use the syscall auditing form with the path or dir
      fields. See the EXAMPLES section for an example of converting one
      form to another.

I assume that since the "-w" form is just for backwards compatibility,
the syscall method is preferred.
Question -
The line saying, "Unlike most syscall auditing rules, watches do not
impact performance based on the number of rules sent to the kernel" -
does this mean that BOTH the "-w" and "syscall" rules have no
performance impact?
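For reference, the equivalence the man page describes looks like this in a rules file (the path and key are examples of mine, not from the man page):

```
# watch form (backwards-compatible shorthand)
-w /etc/shadow -p wa -k shadow-change

# equivalent syscall form, roughly what auditctl -l reports back
-a always,exit -F path=/etc/shadow -F perm=wa -k shadow-change
```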

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH 7/7] audit: audit feature to set loginuid immutable

2013-07-10 Thread LC Bruzenak
Eric,

NVM on this part.
I forget not everyone lives in a SElinux world...
> As far as restarting daemons - I guess in theory this obviates the
> "run_init" command?
> Or only the uid part of the context?
Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH 7/7] audit: audit feature to set loginuid immutable

2013-07-10 Thread LC Bruzenak
On 07/10/2013 01:16 PM, Eric Paris wrote:
>> > This sounds dangerous. Why would we want to allow this?
> Immutability was first introduced in kernel 3.3.  It wasn't enabled in
> the kernel config for Fedora until some time much later.  It is not
> present in any enterprise distro that I know of.
>
> Before systemd immutability was not possible.  If an admin logged in and
> restarted the sshd daemon the daemon would be running as the admin's
> loginuid.  When a new user tried to log in via ssh it would need to
> change the loginuid from the admin loginuid to their new loginuid.  We
> had to give sshd CAP_AUDIT_CONTROL in order to switch the loginuid.
> (ALL loginuid changes required CAP_AUDIT_CONTROL)
>
> When systemd came out I added immutability.  Since restarting sshd in a
> systemd world is done by init, which has no loginuid, and thus the new
> sshd would have no loginuid.  Thus a user logging in would be able to
> set a new loginuid without any permissions.
>
> We learned that admins, for various reasons, really do want to be able
> to launch daemons by hand and not via init/systemd.  In particular, we
> know of many people who launch containers by hand which allows some form
> of login (usually sshd).  With the current loginuid immutable code those
> people are UNABLE to log in inside the container.
>
> This patch series allows a privileged task (one with CAP_AUDIT_CONTROL)
> the ability to unset their own loginuid.  It allows behavior similar to
> the pre-3.3 kernels.  The big difference being that the privilege is needed
> in a helper to UNSET the loginuid, not in the network facing daemon
> (ssh) to SET the loginuid.  The series also allows the 'unsetting'
> feature to be disabled and locked so it cannot ever be enabled.

Thank you for these details; I really appreciate it.

As far as restarting daemons - I guess in theory this obviates the
"run_init" command?
Or only the uid part of the context?

And by breaking apart the unsetting (privileged helper) from the setting
(daemon itself) this is more securely accomplished?

Chewing on this a bit...regardless, thanks again for the details; it
helps tremendously.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH 7/7] audit: audit feature to set loginuid immutable

2013-07-10 Thread LC Bruzenak
On 07/10/2013 08:46 AM, Steve Grubb wrote:
>
> Currently its a compile time option. This means when its selected the auid is 
> immutable and you have a strong assurance argument that any action by the 
> subject really is the subject's account. Strong assurance may be required for 
> high assurance deployments. It would be more solid standing up in court as 
> well because the argument can be made that whatever action occurred can be 
> attributed to the subject because there is no way to change it. Its tamper-
> proof.
That was my understanding.
>
> The change means the default policy will now allow process with 
> CAP_AUDIT_CONTROL to change the auid to anything at anytime and then perform 
> actions which would be attributed to another user. There is an event logged 
> on 
> setting the loginuid, so it could be considered tamper-evident. At least as 
> long as the event's not filtered or erased.
This sounds dangerous. Why would we want to allow this?
>
> My preference is that we have a way that we can put the system into the 
> immutable mode in a way that leaves no doubts about whether the system has 
> operated under the same policy from beginning to end.
That is a better way.

From an end user perspective I can tell you that although we strive to
be diligent, the reality of reduced budgets and multi-tasking security
managers means that tamper-proof (at least as a near-to-mid-term goal)
is desired over tamper-evident.
Even if the event is not erased or filtered it means another requirement
levied on a person which I do not believe existed before.

Thanks for the info, Steve. I appreciate it!
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH 7/7] audit: audit feature to set loginuid immutable

2013-07-09 Thread LC Bruzenak
On 07/09/2013 05:24 PM, Steve Grubb wrote

...
I don't think anyone has plans to write those tools at the moment. That would 
be ideal. But even in the case where audit rules don't get loaded, there are 
audit events generated by the MAC systems and some hard coded kernel events 
and user space events. It would be nice to know they are not tampered with.
...


Question - from the title I had thought this was a good thing. But wasn't 
loginuid (and subsequently auid) already immutable?
Sorry; just not certain what this change does for the average guy...

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: key options with spaces

2013-06-19 Thread LC Bruzenak
Hey Steve,

I was expecting it to not match the one with the spaces.
I can live with any answer; either disallowing spaces or allowing spaces
and matching exactly, or (less desirable) even if it is desired to match
the first occurrence of the string and it is noted as such in the man page.

The reason I tried the spaces was at direction from a security guy who was
looking at a (now-modified) RHEL 6 STIG beta release. I am not a fan of the
spaces myself; was probably going to substitute underscores.

Thanks,
LCB


On Wed, Jun 19, 2013 at 5:34 AM, Steve Grubb  wrote:

> On Tuesday, June 11, 2013 04:04:54 PM LC Bruzenak wrote:
> > I was playing with audit rules using keys with spaces.
> > Is the following expected (ignore the logic; was just testing the
> returns)?
> >
> > # auditctl -l -k lsmod
> > LIST_RULES: exit,always watch=/sbin/lsmod perm=x key=lsmod kernel
> > LIST_RULES: exit,always watch=/bin/ping perm=x key=lsmod ping
>
> What are you expecting? I can make it not accept keys with spaces. I don't
> think putting spaces in keys is a good idea.
>
> -Steve
>



-- 

LC (Lenny) Bruzenak
le...@magitekltd.com

Re: key options with spaces

2013-06-11 Thread LC Bruzenak
On 06/11/2013 04:04 PM, LC Bruzenak wrote:
> I was playing with audit rules using keys with spaces.
> Is the following expected (ignore the logic; was just testing the returns)?
>
> # auditctl -l -k lsmod
> LIST_RULES: exit,always watch=/sbin/lsmod perm=x key=lsmod kernel
> LIST_RULES: exit,always watch=/bin/ping perm=x key=lsmod ping
>
>
> Thx,
> LCB
>
Sorry, forgot the version: audit-2.2-1.

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



key options with spaces

2013-06-11 Thread LC Bruzenak
I was playing with audit rules using keys with spaces.
Is the following expected (ignore the logic; was just testing the returns)?

# auditctl -l -k lsmod
LIST_RULES: exit,always watch=/sbin/lsmod perm=x key=lsmod kernel
LIST_RULES: exit,always watch=/bin/ping perm=x key=lsmod ping


Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: performance questions

2011-09-30 Thread LC Bruzenak
On Fri, 2011-09-30 at 09:20 -0400, Steve Grubb wrote:
> On Thursday, September 29, 2011 11:33:09 AM LC Bruzenak wrote:
...
> 
> You might try this:
...
>  
> - _get_exename(exename, sizeof(exename));
> + if (exename[0] == 0)
> + _get_exename(exename, sizeof(exename));
>   if (tty == NULL) 
>   tty = _get_tty(ttyname, TTY_PATH);
>   else if (*tty == 0)

Well, we could (and then it would work like the others) but we really
want to store the exename I think. Isn't that what becomes
"exe=" in the event?

> 
> We can probably use the return value of fprintf() +1 (for the NULL byte) and 
> just keep the running total in memory.

Oh, right. That would be more precise. Good idea!

Since we're looking, what about the fstatfs in check_disk_space? Any
thoughts on that one?

Thanks Steve!
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



performance questions

2011-09-29 Thread LC Bruzenak
I was looking at some strace results from a process using the
audit_log_user_message call and I think I see how I can eliminate some
ioctls and /proc/self lookups by setting the hostname/tty parameters to
non-NULL pointers to empty (NUL) strings.

But the exename is another story. It does a lookup each time. We have
persistent processes, each of which submits 100Ks (on the way to 1Ms) of
audit_log_user_message events daily, so it would make a difference.

I was thinking about a patch to store off the exename statically if one
isn't already in the pipeline. Let me know; I'll submit something if
not.

The other question is on the auditd side. IIUC on each event the
write_to_log function is checking the logfile size. Seems to me that we
could limit the fstat checks to say one every ten events or so. Any
problems there?

Thx,
LCB 

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



new auparse question

2011-08-31 Thread LC Bruzenak
I have an issue now with auparse_find_field.
I work around it fine though but maybe it's worth reporting.
There is a place where I do this:

const char *result;
...
result=auparse_find_field(au, "res");

and get a segfault.


If I instead do this:
const char *result;
...
auparse_first_field(au);
result=auparse_find_field(au, "res");

then it is fine.


A quick gdb test shows me :

0x77dd2a7d in nvlist_get_cur_name (au=0x617a90, name=0x4022a8
"res") at nvlist.h:40
40  static inline const char *nvlist_get_cur_name(const nvlist *l)
{return l->cur->name;}


Looking at my own code, I believe I previously had walked through the
event record using this loop:
...
auparse_first_field(au);
do {
...
} while (auparse_next_field(au) > 0);
...

and so I guess the "cur" field was undefined when I used the
auparse_find_field call.

It (auparse_find_field) calls:
...
cur_name = nvlist_get_cur_name(&r->nv);

and I guess that's where the problem happened.

So my question is - is this a bug (I would think so) or should I always
precede any auparse call sequence with at least one fresh
auparse_first_field call?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: auparse question

2011-08-31 Thread LC Bruzenak
> That does seem to be a mistake in the API. As a workaround for this, 
...

Thanks Steve,

Per Mirek, I just changed to look for the AUPARSE_TYPE_* enum for
checking the return and it is fine now. 

LCB
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



auparse question

2011-08-30 Thread LC Bruzenak
I'm using auparse_get_field_type from the parse lib.
The return value for error is "0" which is also that of the AUDIT_PID
field.

Right? I am getting some error returns that I thought were PIDs.

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



auparse question

2011-08-30 Thread LC Bruzenak
I am using the parse library, calling auparse_get_type.
It returns a 0 on failure, which I believe is also the integer value for
AUDIT_PID.

Is this correct or am I missing something?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: log files

2011-06-17 Thread LC Bruzenak
On Fri, 2011-06-17 at 15:15 -0400, Pittigher, Raymond - ES wrote:
> 
> The plan would be to rotate the log at midnight Saturday, use the
> aureport to read the file and give it some kind of format, dump the data
> into a mysql database, then parse it with php on a apache server with a
> firefox front end. Or something like that. 

OK; that was my thinking as well.
Only I roll mine up each day already and move them out of the way.

I think you would likely use a custom program which used the parse libs
to extract the searchable elements from each event.

What I was wondering is if on the front end (cgi+browser-side) you had
something in mind which existed already - or if you would code it up
from scratch with the php-mysql piece?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: log files

2011-06-17 Thread LC Bruzenak
On Fri, 2011-06-17 at 14:15 -0400, Pittigher, Raymond - ES wrote:
> What do the users of this list use to read the log files? I have tried
> Spacewalk (which is nice) but is a lot of software to install to read
> logs. I have looked at Prewikka but do not have it totally configured
> yet to give it a OK or not.

My experiences (I assume you specifically mean the audit logs):

Prewikka would be for IDS events only with the prelude plugin.
I use the audit-viewer with pre-constructed list tabs to match events
necessary for verification testing.
For faster results when looking for specific events or investigation, I
use the command line tools aureport and ausearch.

What would be great IMHO is to have a prewikka-like web interface for
the audit events.

HTH,
LCB
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: excluding auditd events

2011-06-01 Thread LC Bruzenak

> So, it turns out that apart from the human-like date description like 
> "yesterday" and "today", ausearch only accepts 2-digit years! I thought 
> we had long passed these Y2K-related issues - that is so 1999. That is 
> assuming I didn't mess things up, which is also a possibility, of 
> course! The error messages I was getting above did not help my cause either!

Too bad about not using mock; in my experience it is easier than grabbing
the needed pieces individually, and certainly easier when those get revised.

You must have read the ausearch man page which describes the date usage
and subsequently followed the pointer to the localtime man page. The
dates work as described in those pages:

$ sudo ausearch -ts 05/30/2011 | less  
works fine for me on FC10 & RHEL6.

Look at your system time - is it correct?
Use the "date" command.
Check your LC_TIME ENV variable.


> -bash-4.1# ausearch -m AVC -ts "05/26/11" | more <- works!

$ sudo ausearch -m AVC -ts "05/26/11"
Error - year is 11

This also is the same for me on FC10 & RHEL6 (audit-1.7.16 and
audit-2.1-5 respectively). So my guess is your LC_TIME or locale value
is set for 2-digit dates or something along those lines. The "date"
command should yield a clue, especially "date +%x".
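The locale dependence is easy to demonstrate with date itself; a quick sketch (GNU date assumed):

```shell
# Show what date format the current locale (and therefore ausearch -ts)
# expects. Under the C locale, %x renders two-digit years:
LC_TIME=C date -d 2011-05-26 +%x    # prints 05/26/11
# A locale whose %x uses four-digit years would print something like
# 26/05/2011 here, and ausearch would expect that form for -ts instead.
```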

LCB
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: excluding auditd events

2011-05-26 Thread LC Bruzenak
On Thu, 2011-05-26 at 16:22 +0100, Mr Dash Four wrote:
...
> 
> Without libprelude-devel I can't compile/build audispd-plugins, without 
> audispd-plugins I can't have remote logging in auditd (you get the 
> picture!).


I am using mock to compile both the 32-bit and 64-bit versions of all
the prelude code on rhel6, starting with the fc14 src rpm. You should be
able to do this on your distro as well.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Audit slowing system.

2011-05-05 Thread LC Bruzenak
You generally should start with an aureport over the time period you saw
the issue, then drill down from there. But if your audit logs are not
larger than usual maybe this isn't your problem.

What symptoms of struggling are you seeing? You said top shows nothing
out of the ordinary and no directories are getting full (I suppose you
mean /var/log/audit)...so I am not certain what behavior you see that is
leading you towards audit being the problem.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Interpreting fields in audisp-remote

2011-03-29 Thread LC Bruzenak
On Tue, 2011-03-29 at 21:07 -0400, Dmitry Krivitsky wrote:
> Hi,
> 
> I am trying to configure audisp-remote on several servers to send 
> audit logs 
> to a central server.
> Is there any way to configure 
> audisp-remote to resolve numerical user ids, 
> system call numbers, etc., 
> before sending them to the central server?
> The central server may have a 
> different list of users, different version of 
> Linux, etc., so resolving them 
> later on the central server may not work.
> 
> Thanks,
> Dmitry Krivitsky
> 

Funny; I looked back and I asked about this just over 2 years ago.
:)

With the new store-and-forward patch set from Mirek I would think this
would be almost required. Not having been through the patches yet,
though, I don't know whether it was included.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



RE: questions about auditing on a new RH 6 box

2011-01-14 Thread LC Bruzenak
On Fri, 2011-01-14 at 19:07 +, Tangren, Bill wrote:
> 
> Where can I read on how to classify events? I have been frustrated in
> the past, because I was required to generate volumes of audit logs,
> and I haven't had much success there. 

man auditctl 
look for the "-k key" section
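As a sketch, audit.rules entries tagged with a key might look like this (the file paths and key name are illustrative, not required values):

```
# audit.rules fragment (illustrative): tag identity-file changes with a key
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
# Matching events then carry key="identity" and can be pulled out with:
#   ausearch -k identity -i
#   aureport -k --summary
```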

LCB



RE: questions about auditing on a new RH 6 box

2011-01-14 Thread LC Bruzenak
On Fri, 2011-01-14 at 17:56 +, Tangren, Bill wrote:
> 
> There are LOTS of the following:
> 
> 01/14/2011 11:44:29 type=SYSCALL, arch=x86_64, syscall=mknod,
> success=yes, exit=0, a0-3=[hex numbers that vary), auid=bill.tangren,
> comm=escd, egid=bill.tangren, euid=bill.tangren,
> exe=/usr/lib64/esc-1.1.0/escd, fsgid= bill.tangren, fsuid=
> bill.tangren, gid=bill.tangren, items=2, key=null, sgid=bill.tangren,
> subject=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023,
> tty=none, uid=bill.tangren
> 
> There are also some like this, but syscall=open instead.
> 
> 
> During this time, I am logged in to a GUI, but the screensaver has
> activated, and I am doing nothing. No one else has an account. 
> 

Well, herein lies the rub...the audit rules you have in place are doing
their job.
:)

The escd is creating device files as it does its thing...do you trust
it? Assuming so, maybe there is a way to filter those out.

Can you send a couple of the results of this command? This will tell you
the top (recent) auditing processes:
% sudo aureport -ts recent -i -x --summary

Also send a couple of these results (since you said there were a lot of
escd process events). Change "recent" to "today" or a specific start
time (see the ausearch man page):
% sudo ausearch -ts recent -i -c escd


You will likely want to use aureport/ausearch just because they are
faster than the audit-viewer. But it is possible to use it...

HTH,
LCB



RE: questions about auditing on a new RH 6 box

2011-01-14 Thread LC Bruzenak
Probably can use a sampling of events as well.

LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Excluding certain processes

2011-01-05 Thread LC Bruzenak
On Wed, 2011-01-05 at 08:35 -0500, rsh...@umbc.edu wrote: 
> I'm running audit 1.7.17-3 (RHEL 5) on ~450 clients sending via audisp to
> a single server.  This is mostly working well, except that periodically, I
> get messages like:
> 
> Jan  4 07:57:33 hostfoo audispd: queue is full - dropping event
> Jan  4 07:58:04 hostfoo last message repeated 814 times
> Jan  4 07:59:05 hostfoo last message repeated 4121 times
> Jan  4 08:00:06 hostfoo last message repeated 2602 times
> Jan  4 08:00:31 hostfoo last message repeated 773 times
> 
> Reading through the man pages, I've increased the q_depth value in
> audispd.conf.  But even with it set at 9 (the maximum), many events
> are still being dropped from almost half the clients.  Setting disp_qos to
> "lossless" in auditd.conf has also not helped.
> 
> It would be nice to solve this in general.  More specifically, however, I
> know that on the worst offender, the flood of events is being caused by an
> rsync job that runs at 8 and 12.  The events look something like:
> 
> node=hostfoo.domain.com type=SYSCALL msg=audit(1294232521.544:29609884):
> arch=c03e syscall=90 success=yes exit=0 a0=7fffbe5a7f60 a1=1ed a2=1
> a3=0 items=1 ppid=4397 pid=4398 auid=4990 uid=4990 gid=100 euid=4990
> suid=4990 fsuid=4990 egid=100 sgid=100 fsgid=100 tty=(none) ses=2867
> comm="rsync" exe="/home/bob/.toast/pkg/rsync/v3.0.4/1/root/bin/rsync"
> key="perm_mod"
> 
> Is there any way I can tell the perm_mod rules in audit.rules "Don't tell
> me about it if the command is rsync"?  I couldn't find an obvious answer
> from the auditctl man page (it doesn't seem that I can just specify, say,
> comm!=rsync).
> 
> Thanks,
> 
> --Ray

Ray,

I think your example illustrates why you would not want to filter based
on command name since it is a non-standard rsync
(/home/bob/.toast/pkg/rsync/v3.0.4/1/root/bin/rsync).
Probably a trojan. :)

The problem is that you likely do not want to suppress all rsync events,
only those from the jobs you trust to run event-free.

Otherwise you are effectively overriding the rule which generates these
events in the first place, since everyone can run rsync; or maybe that
justifies removing the rule in your case?

You can do it by controlling access to rsync and then exempting selected
subjects, either by SELinux type (custom policy) or by group (egid). Or
set aside a range of UIDs which are allowed to rsync free of audit, then
exclude that range of UIDs in your rule.

You are right; rsyncs generate LOTS of events due to this rule and
basically can overflow the event queue regardless of the settings.

The easiest way (just a theory), if this is a cron job, is to run it as
a particular pseudo-user from /etc/cron.d/, then exempt that uid in the
rule with "-F euid!=1000". You could add that pseudo-user to the
sudoers file and have it run rsync with NOPASSWD.
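Sketched out under assumed names (the pseudo-user "backup", UID 4999, and the paths are all illustrative), that approach might look like:

```
# /etc/cron.d/rsync-backup (illustrative): run the job as the pseudo-user
0 8,12 * * *  backup  rsync -a /srv/data/ backuphost:/srv/data/

# audit.rules: keep the perm_mod rule, but exempt that euid
-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F euid!=4999 -k perm_mod
```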

Bottom line is that there are a few ways around it but nothing as simple
as excluding by command.

HTH,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com




Re: user audits

2010-12-03 Thread LC Bruzenak
On Fri, 2010-12-03 at 10:54 -0500, Steve Grubb wrote:
> On Friday, December 03, 2010 10:40:37 am LC Bruzenak wrote:
> > Would there be any issue with adding a couple new trusted_application
> > event types? Would any kernel mods be needed to support this?
> 
> Are these events originating in user space or the kernel? I might be 
> convinced to set 
> aside the block of IDs from 2600 - 2699 for local use if a suitable framework 
> were 
> written. This would mean that there is some config file that holds the local 
> mapping of 
> event IDs to text and ausearch/report/parse will need to be patched to 
> understand 
> local definitions.

Great!
From user space only. Analogous to signal handling of SIGUSR1, SIGUSR2,
where the framework supports the user-implemented pieces.

I was thinking ausearch/auparse would treat all the TRUSTED_APPs the
same...since they (currently) fail to parse these correctly anyway
IMHO. :)

Everything else could treat the additional user types exactly as they do
the TRUSTED_APP event now. When I dig through the events on the back end
I would be able to act differently on the ones I know I've put in place
on the originating end.

> 
> Would you be interested in this approach?

Absolutely! 
I will be starting work on some stuff which could utilize this after the
holidays. If it gets into RHEL6 I would be thrilled!

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



user audits

2010-12-03 Thread LC Bruzenak
Steve,

Would there be any issue with adding a couple new trusted_application
event types? Would any kernel mods be needed to support this?

The reason I ask is because I'd like to process some event types
differently on the back end (the aggregator) and if I could easily
identify those types it would make life easier.

Some trusted_application events are for recording "bad" security issues,
some for "good", etc. and I'd like to easily differentiate those. 

I can put something inside the event text but if possible would prefer a
couple different types, like trusted_app1, trusted_app2, etc.

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: reactive audit question

2010-11-19 Thread LC Bruzenak
On Fri, 2010-11-19 at 11:20 -0500, Steve Grubb wrote:
> 
> I didn't answer right away because I didn't have a good answer for you. If 
> the storm 
> is large enough to overrun the kernel queue, the rate limiting needs to be in 
> the 
> kernel. If auditd is able to handle the load, then perhaps you need an 
> analysis plugin 
> that performs whatever action you deem best.

Steve,
I understand; it isn't a straightforward thing and I appreciate you
thinking about it. I think I have settled on a workable solution.

I am using the unix audisp builtin and I am sampling the AVC events.
I've got a non-blocking mechanism whereby I can count the AVCs on a very
small number of senders. Then I can take action against the offenders
(kill). Not perfect, has issues, but might be satisfactory.

I'm still testing this sampling approach, making certain I don't
introduce any blockage points, which would aggravate the issue.

And while this may work on a single process sending thousands of AVCs in
a tight loop, it wouldn't work on one which gets respawned, unless I
look at the ppid or do something more clever. 
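The counting half of such a sampler can be sketched in a few lines of awk. This is a simplification of the idea, not the actual mechanism; the record format and the threshold are illustrative:

```shell
# Count AVC records per pid from a stream of raw audit records and print
# any pid that exceeds a threshold (here, more than 2 events).
count_avc_offenders() {
  awk '/type=AVC/ {
         for (i = 1; i <= NF; i++)
           if ($i ~ /^pid=/) { sub(/^pid=/, "", $i); n[$i]++ }
       }
       END { for (p in n) if (n[p] > 2) print p, n[p] }'
}

printf '%s\n' \
  'type=AVC msg=audit(1.0:1): avc: denied pid=100' \
  'type=AVC msg=audit(1.0:2): avc: denied pid=100' \
  'type=AVC msg=audit(1.0:3): avc: denied pid=100' \
  'type=SYSCALL msg=audit(1.0:4): pid=200' | count_avc_offenders
# prints: 100 3
```

A real version would have to reset the counters periodically and, as noted above, track ppid as well to catch respawning offenders.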

> 
> What is the general source of the problem right now? Was it just that the app 
> was 
> doing something that policy didn't know it could do? Or was there attacks 
> under way 
> that someone was trying something bad? Or was its just an admin mistake where 
> something didn't have the right label? Each of these has a different solution.

Mostly the first scenario you mention - that the 3rd-party application
hit an execution path we had not seen in testing. But of course it
doesn't have to be a 3rd-party app. Even ones we create can run amok
with AVCs if all code paths are not exercised under all data conditions.
Basically untestable in finite time by humans.
:)

Some things you never know the code will do - for example, in one
error-recovery case I believe some process (or a library it uses)
decides to go look at a different running process and then wants to
figure out which connections it has. Well, it doesn't get an answer
because of course policy does not allow it to see the /proc details or
some such thing, so it generates AVCs and sits in a loop retrying until
it gets an answer (forever).

Or things which are normally working fine on targeted-policy systems get
confused on MLS systems because they cannot connect to the server when
they are invoked for a process running at a higher/lower/incomparable
MLS level. Then they retry a few million times or so...

Or a process decides to see which files it can access in a big data
store. All the ones it cannot, for MAC level (MLS) reasons, all generate
AVCs. A few hundred isn't a big deal; a few million is.

Funny things happen to systems when you subject them to the real world
and real users.
:)

> 
> I think this is a complex problem and controls might be needed at several 
> spots. I'd 
> be open to hearing ideas on this too. I've also been wondering if the audit 
> daemon 
> might want to use control groups as a means of keeping itself scheduled for 
> very busy 
> systems. But i'd like to hear other people's thoughts.

I agree on the complexity. At the very least though I'd think adding a
syslog-like function whereby it can assimilate same-event audits and
then submit one event like "1000 similar events like this" would be
good.

Likely 1000 isn't even enough. At one point we were getting well over
1500 AVCs/second over a period of days. On a weekend of course. :)
Actually we were able to process that amount. I have no data on the
number of drops.

Tends to add right up. And this is just one sending host (there are
others but they are not as busy). If I had multiples, the aggregating
machine would be overrun. As processors/hardware get faster, I assume
the AVC error rates will too.

In my case, the concern is that a valuable event will be dropped off the
queue due to others like I described taking all the resources. Even
though I have increased the audispd queue size and the priorities, at
some point saturation will inevitably occur.

Thanks again!
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



reactive audit question

2010-11-12 Thread LC Bruzenak
Steve, others,

I may have asked this before, but it is becoming an issue so I thought
I'd check again anyway.

In our systems there are occasionally AVC "storms" which happen as a
result of some unforeseen (and often unknown) issue tickled by various
reasons.

At fielded sites, we are unable to fix this easily. Since we have to
keep all the audit data, this leads to many problems on a system running
over a weekend, for example, with no administrators around.

I probably need to add in either some rate-limiting code or possibly
kill off the process generating the AVCs. Rate-limiting I'd guess could
go into the auditd. If I wanted to be more brutal and kill the process,
I'd think maybe a modification to the setroubleshoot code would be
workable.

I don't think that a reactive rule is an option -
1) We have our rules locked into the kernel on startup and I'm against
changing that, and
2) maybe "normal" avc counts, under a threshold, we'd still want to see,
from that same process. Besides,
3) unless the rules have been changed, we cannot exclude AVCs from a
particular type/process anyway.

Got any thoughts/ideas/advice?

Thx,
LCB

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



adding context to exclude filter

2010-10-12 Thread LC Bruzenak
Steve (et. al.),

Has there been any talk of adding context (or another discriminator)
to the exclude capability?
I know we discussed this a while back but wanted to check again.

I would really like to be able to have a rule which says ignore all
AVCs from subjects of type foo_bar_t.

LCB

-- 
LC (Lenny) Bruzenak



Re: creating and inserting audits

2010-09-07 Thread LC Bruzenak
On Tue, 2010-09-07 at 16:38 -0400, Nestler, Roger - IS wrote:
>  

> Does this capability exist already in linux audit and I’m just not
> seeing it???
> 

man audit_log_user_message
 
> 
> Is it a bad idea to build and then to insert a custom audit/message,
> or any standard audit, into the audit.log file?

Nope.

> If so are there any problems to look out for , e.g event id/sequence
> number collisions, auparse or ausearch problems, formatting issues to
> adhere to???
> 

The text passed to audit_log_user_message is not really freeform-safe,
and it is practically limited to somewhere around 900+ bytes (by a
kernel setting, unless it has been updated since).

The parser will throw away some of your records if the text matches what
it is looking for elsewhere. Maybe Steve can point out the specs. For
example, I had this one:

> > # ausearch -ts this-week -a 22476
> > 
> >
> > in the raw log:
> > node=slim type=USER msg=audit(1244730722.536:22476): user pid=16700
> > uid=0 auid=500 ses=1 subj=user_u:user_r:user_t:s0 msg='node=jim
> > type=PATH msg=audit(06/08/2009 13:33:50.101:19267) : item=4
> > name=/var/lib/ntp/drift inode=115581 dev=fd:00 mode=file,644
ouid=ntp
> > ogid=ntp rdev=00:00 obj=system_u:object_r:ntp_drift_t:s0 :
> > exe="/usr/local/sbin/auditctl" (hostname=?, addr=?, terminal=pts/13
> > res=success)'
> >
> > Any clues?
> 
> When ausearch finds a malformed record, it discards it as a safety
measure.
> 
> -Steve

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Log rotation and client disconnects

2010-08-13 Thread LC Bruzenak
On Fri, 2010-08-13 at 11:06 -0400, rsh...@umbc.edu wrote:

> 
> (Technology preview or no, I'm very happy to have audisp; certain other
> systems aren't so lucky.)

I agree.

> 
> Well, I can't run aureport --summary; it pegs the CPU for hours and hours.
>  That's not really a big deal for me, though.  I have a script that runs
> shortly after the logs are rotated, generating a report based on the
> previous day's data.  It's using 3 aureports and one ausearch (piped
> through a bunch of stuff).  Usually takes less than 15 minutes to run.  At
> the moment, this is the main way we're using the data, though I'm hoping
> to do more in the future.  I've glanced at the audit+Prelude HOWTO, since
> Prelude can do a few other things that appeal to me.

I use this. Works pretty well.

> 
> (The ausearch used to be an aureport, but aureport --anomaly -i doesn't
> seem to get the node/host names from the logs, which is why I ended up
> writing my own thing.  Interestingly, --anomaly isn't even in the man page
> for aureport; I've no idea where I found it.  I don't know if any of this
> is different in more recent versions.)

That's a doc bug I guess. I have never heard of it.

> 
> Hrm.  This is what I have:
> 
> network_retry_time = 30
> max_tries_per_record = 60
> max_time_per_record = 5
> network_failure_action = syslog (looks like I'll be changing that)
> ...
> remote_ending_action = reconnect
> 
> Are you using the heartbeat_timeout stuff?  I haven't been.
Me:
network_retry_time = 1
max_tries_per_record = 10
max_time_per_record = 10
heartbeat_timeout = 30
...
remote_ending_action = reconnect

> 
> > Also - I have a big ugly system involving timestamps and reconnect
> > logic.
> 
> Yeah, I think I might come up with something like that, and use the "exec"
> option for network_failure_action combined with cron stuff to keep
> retrying.

That is what I do. It gets a little tricky, but it works.

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Log rotation and client disconnects

2010-08-12 Thread LC Bruzenak
On Thu, 2010-08-12 at 11:16 -0400, rsh...@umbc.edu wrote:
> > On Thursday, August 12, 2010 10:02:29 am rsh...@umbc.edu wrote:
> >> I've discovered the issue since I sent it, anyway.  If num_logs is set
> >> to
> >> 0, auditd will ignore explicit requests to rotate the logs.  I guess
> >> this
> >> may be intentional, but it's unfortunate as num_logs caps at 99 and I
> >> need
> >> to keep 365 of them.
> >
> > Have you looked at the keep_logs option for max_log_file_action?
> 
> I did, but the man page states that keep_logs is similar to rotate, so it
> sounds like if I used this option, it would still rotate the log file if
> it went above the max_log_file size, which I don't want to happen.  I
> suppose I could just set max_log_file to 9 or something (if that's
> supported).  Typically, uncompressed log files for ~400 clients on the
> central server end up being around 3-4Gb.
> 
> Thanks for all the help so far; I think I'm almost there.
> 
> --Ray

Do you not want to rotate because of the time it takes?
Yep, the keep_logs does a rotate without a limit.

The max_log_file value is an unsigned long so it should take a very
large number. However, in case there is a lot of auditing you are not
prepared for, I'd suggest limiting the file size to 2GB. The rotate time
should be similar regardless of the file size.
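In auditd.conf terms the suggestion would read something like this (values illustrative):

```
# auditd.conf fragment (illustrative values)
# max_log_file is in megabytes; keep_logs rotates without using num_logs
max_log_file = 2048
max_log_file_action = keep_logs
```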

BTW, over what time period are you getting the 3-4GB amounts? Are you
happy with the data you are getting, or could you perhaps pare it down
some with audit.rules tweaks on the senders?

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: Log rotation and client disconnects

2010-08-12 Thread LC Bruzenak
On Thu, 2010-08-12 at 10:02 -0400, rsh...@umbc.edu wrote:
> I've discovered the issue since I sent it, anyway.  If num_logs is set to
> 0, auditd will ignore explicit requests to rotate the logs.  I guess this
> may be intentional, but it's unfortunate as num_logs caps at 99 and I need
> to keep 365 of them.  I suppose that since I'll have to rename and bzip
> them anyway, I may as well just move them to another location (maybe
> /var/log/audit/archive) so that auditd doesn't "see" them, unless there's
> a better way to do this.

How big are your logfiles? Mine are 100MB each.
Each day I have to move mine out of the way for the same reasons.
However, the search tools are then impacted, since you'll need to know
where to find them. 
Also, since it appears you have a lot of data, I assume you are finding
performance issues on the audit-viewer?

> 
> I'm still not sure what to do about the disconnection issues (although
> hopefully those will be very infrequent once I'm no longer restarting any
> of the daemons).  If a client does lose the connection to the server for a
> while though (say, an hour-long network outage for networking upgrades),
> I'd like to be able to tell them to try reconnecting periodically, and the
> combination of network_retry_time and max_tries_per_record doesn't seem to
> be the way to do that.
> 
> Other than checking the logs, is there a way to determine whether or not a
> running audispd is connected to the remote server?

I do a combination of things to detect this on the sending side.
The network_failure_action of the audisp-remote.conf file allows for a
custom action using the "exec" option.

The remote_ending_action = reconnect helps if the server restarts its
auditd. Maybe your version is different from mine but I get the
reconnects...

Also - I have a big ugly system involving timestamps and reconnect
logic.

> 
> >> I'm also having separate issues with some clients disconnecting from the
> >> server, retrying twice in about a 40 second interval, and then giving
> >> up.
> >> The server isn't going down, and this isn't even happening at the same
> >> time I was restarting auditd.
> >
> > Anything written to syslog on either end?
> 
> Nothing is on the server, but this is (everything) on the client:
> 
> Aug  4 23:12:07 host1 audisp-remote: connection to host2 closed unexpectedly
> Aug  4 23:12:07 host1 audisp-remote: Connected to host2
> Aug  4 23:12:12 host1 audisp-remote: connection to host2 closed unexpectedly
> Aug  4 23:12:42 host1 audisp-remote: network failure, max retry time
> exhausted

I will go back and read your previous posts; maybe something will click.

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: year value for ausearch?

2010-04-30 Thread LC Bruzenak
On Fri, 2010-04-30 at 10:24 -0400, Steve Grubb wrote:
> 
> Yes. It should be documented in the --start --end portion of the man page. 
> What the exact format is for the locale you are in is not covered.

Thanks Steve, I believe if I use the "%x" date switch I will get what I
need for the ausearch input start time, regardless of the locale.

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: year value for ausearch?

2010-04-29 Thread LC Bruzenak
On Thu, 2010-04-29 at 00:39 -0500, LC Bruzenak wrote:
> Just got word back from a non-US fielded system that the ausearch -ts
> DAY/MONTH/YEAR doesn't work, e.g. :
> % ausearch -ts 04/29/2010 
> says that the year value is invalid
> 
> but
> % ausearch -ts 04/29/10 works.
> 
> Something with locale? It's F10 audit.
> 
> Thx,
> LCB.

Sorry I meant MONTH/DAY/YEAR as in the example.

Looking at the code, it appears to use the locale's format, so it is
likely I will need to change the ausearch input to match.

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



year value for ausearch?

2010-04-29 Thread LC Bruzenak
Just got word back from a non-US fielded system that the ausearch -ts
DAY/MONTH/YEAR doesn't work, e.g. :
% ausearch -ts 04/29/2010 
says that the year value is invalid

but
% ausearch -ts 04/29/10 works.

Something with locale? It's F10 audit.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH] mapping of reactions

2010-04-05 Thread LC Bruzenak
Now that you appear to have all supporting patches in place, would you
mind sending out a sample of how to use this? It looks very interesting.

Thx,
LCB.
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: aureport header question

2010-03-25 Thread LC Bruzenak
OK, let me try again.

1st summarize all in the dir (minor - time precision varies on report
time start/ends):
[r...@audit tmp]# aureport -if audit-mirror/ -i --summary

Summary Report
==
Range of time in logs: 03/23/2010 16:30:17.279 - 03/26/2010 01:58:02.255
Selected time for report: 03/23/2010 16:30:17 - 03/26/2010 01:58:02.255
...

2nd see events from yesterday through now (range of time in logs isn't
accurate as shown above; same files are there):
[r...@audit tmp]# aureport -if audit-mirror/ -i --summary -ts
yesterday -te today

Summary Report
==
Range of time in logs: 03/25/2010 00:01:01.519 - 03/26/2010 01:58:02.255
Selected time for report: 03/25/2010 00:00:00 - 03/26/2010 01:58:53
...

Now see the issue I was trying to illustrate earlier (ending time of
range in logs; there are definitely events there in that timeframe) :
[r...@audit tmp]# aureport -if audit-mirror/ -i --summary -ts
yesterday -te 03/26/2010 00:00:00

Summary Report
==
Range of time in logs: 03/25/2010 00:01:01.519 - 01/01/1970 00:00:00.000
Selected time for report: 03/25/2010 00:00:00 - 03/26/2010 00:00:00
Number of changes in configuration: 234
Number of changes to accounts, groups, or roles: 0
Number of logins: 7
Number of failed logins: 146
...

And this is the issue I was questioning.
Do you think it has been addressed already by possibly newer code than
I have (1.7.16)?

Thx,
LCB.
-- 
LC (Lenny) Bruzenak



Re: aureport header question

2010-03-25 Thread LC Bruzenak
On Thu, Mar 25, 2010 at 4:13 PM, LC Bruzenak  wrote:
> Steve (et. al.):
>
> I still see the aureport error (below).
> Has this been addressed in any patches in the V 2 audit release?
>
Never mind; I see the issue. There are no events. Sorry for the bother.

LCB.

-- 
LC (Lenny) Bruzenak



aureport header question

2010-03-25 Thread LC Bruzenak
Steve (et. al.):

I still see the aureport error (below).
Has this been addressed in any patches in the V 2 audit release?

Good example:
[is...@audit ~]$ sudo aureport -ts yesterday -i --summary

Summary Report
==
Range of time in logs: 03/25/2010 00:01:02.160 - 03/25/2010 15:26:29.341
Selected time for report: 03/24/2010 00:00:00 - 03/25/2010 15:26:29.341

[is...@audit ~]$ date
Thu Mar 25 16:10:11 UTC 2010

Bad example (note "Range of time in logs" line):
[is...@audit ~]$ sudo aureport -ts yesterday -te 03/25/2010 00:00:00
-i --summary

Summary Report
==
Range of time in logs: 01/01/1970 00:00:00.000 - 01/01/1970 00:00:00.000
Selected time for report: 03/24/2010 00:00:00 - 03/25/2010 00:00:00


Thx,
LCB.


-- 
LC (Lenny) Bruzenak



Re: ausearch results differ with "-i" flag

2010-03-17 Thread LC Bruzenak
On Wed, 2010-03-17 at 14:57 -0400, Steve Grubb wrote:
> 
> > What happened to the position that changing audit output from the
> kernel was
> > verboten?
> 
> This particular avc originates from user space. The application needs
> to follow the rules correctly so it doesn't mess up the logs.

User space, yes, but from the Xorg server. Because X controls accesses
internally it apparently audits stuff using the USER_AVC in this way.
My confusion is that I thought any freetext should be allowed inside the
"msg=" field and not interpreted by ausearch. 

I remember a while back though you told me why this can happen...so I
need to look back and see. I suspect it is because the parse libs work
as I think, and ausearch/aureport doesn't use those?

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



ausearch results differ with "-i" flag

2010-03-16 Thread LC Bruzenak
I am doing an ausearch and noticed that with the "-i" flag the "comm="
field appears to lose the data.
The bad thing is that this appears inside the "msg=" string, and I feel
that it shouldn't be interpreting those values anyway.

I saw that the audit-viewer does parse out the "comm=" field correctly
when I look at the same event.

First the event without the "-i" flag:

time->Tue Mar 16 21:53:50 2010
node=jcdx type=USER_AVC msg=audit(1268776430.236:6808): user pid=2835
uid=0 auid=4294967295 ses=4294967295
subj=system_u:system_r:xdm_xserver_t:s0-s15:c0.c1023 msg='avc:  denied
{ write } for request=X11:PolyRectangle comm=MLTracks resid=5d
restype=WINDOW scontext=user_u:user_r:user_t:s6:c0.c511
tcontext=system_u:object_r:xdm_rootwindow_t:s0-s15:c0.c1023
tclass=x_drawable : exe="/usr/bin/Xorg" (sauid=0, hostname=?, addr=?,
terminal=?)'



Same event appears to lose the "comm" field with the "-i" flag:

node=jcdx type=USER_AVC msg=audit(03/16/2010 21:53:50.236:6808) : user
pid=2835 uid=root auid=unset ses=4294967295
subj=system_u:system_r:xdm_xserver_t:s0-s15:c0.c1023 msg='avc:  denied
{ write } for request=X11:PolyRectangle comm=(null) resid=5d
restype=WINDOW scontext=user_u:user_r:user_t:s6:c0.c511
tcontext=system_u:object_r:xdm_rootwindow_t:s0-s15:c0.c1023
tclass=x_drawable : exe=/usr/bin/Xorg (sauid=root  hostname=?, addr=?,
terminal=?)' 
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



aureport question

2009-12-20 Thread LC Bruzenak
Steve,

The aureport utility has an option to use an alternative input file.
Because I have to move my logs, I really need an alternative input
directory, preferably a starting point, since my saved logs are:
/var/log/audit-archive/// .
Then I could do "aureport --topdir /var/log/audit-archive/2009/12 "
and get all the 12/2009 events up to now.

What do you think?

I thought about creating a different flat directory and just linking in
the files I want; however, I do not think the current options will allow
this either. I guess the easiest change would be to allow the -if
parameter to be either a directory or a file.
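In the meantime, one workaround along these lines is to concatenate the archived files and feed them to the reporter on stdin. A minimal sketch, using a made-up /tmp demo tree in place of the real archive layout (the paths and file contents below are hypothetical):

```shell
# Sketch: gather archived logs for one month and stream them into a reporter.
# The directory layout and file contents here are stand-ins for a real archive.
arch=/tmp/audit-archive-demo/2009/12
mkdir -p "$arch"
printf 'older event\n' > "$arch/audit.log.2"
printf 'newer event\n' > "$arch/audit.log.1"

# Oldest-first ordering (the highest-numbered log is the oldest),
# merged into one stream on stdout:
find "$arch" -type f -name 'audit.log*' | sort -r | xargs cat

# Against a real archive, that stream would feed the audit tools, e.g.:
#   find ... | sort -r | xargs cat | aureport --summary -i
```

The same pipeline works with ausearch, since both tools accept log data on stdin.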

Thx,
LCB.

-- 
LC (Lenny) Bruzenak



Re: audit-viewer performance

2009-12-19 Thread LC Bruzenak
On Sat, Dec 19, 2009 at 7:34 AM, Steve Grubb  wrote:
> On Friday 18 December 2009 08:42:51 pm LC Bruzenak wrote:
>> What is the plan for this tool? As I said, I think it is very nice
>> feature-wise in general but in practice it isn't living up to
>> expectations.
>> I can try to help but will take a while to get python-proficient. Or
>> is the trouble in the parse library?
>
> The audit parsing library has not been optimized for handling large data sets.
> I don't think its the entire problem you are seeing, but I'm sure its a
> contributor to the problem. I was planning to look at performance issues in a
> future release.

I should be able to help out testing, debugging, etc. since we really
use the aggregation capability on high-volume systems and therefore
have a big data store to use in testing.

> But you could test the native C library against the python version to see if
> python itself is adding delay.

I'll try to take a look at this.

I was thinking that a relational DB would help on this point. Rather
than parsing the entire log structure every time, perhaps the
audit-viewer could just query for the desired data and leverage the DB's
optimizations.
But I guess if you went for such a big change, you might also consider
making it network-capable, similar in form and function to the prewikka
viewer, which handles large amounts of data pretty well.

>
> -Steve
>
> PS - I keep a TODO file up to date that will always let you know what the
> immediate plans are: https://fedorahosted.org/audit/browser/trunk/TODO
>

Very good. Thanks Steve, and Happy Holidays!

LCB.


-- 
LC (Lenny) Bruzenak



Re: print capability for audit-viewer?

2009-12-19 Thread LC Bruzenak
On Sat, Dec 19, 2009 at 1:03 AM, Miloslav Trmac  wrote:
> - "LC Bruzenak"  wrote:
>> Is there any plan to add printing capability to the audit-viewer?
> Not currently; you can export any tab to HTML[1] and use a web browser (or 
> perhaps (lynx -dump | lpr)) to print it.  Is that an acceptable solution for 
> you?
>    Mirek

Mirek,

Thanks for the reply. I tried the export; however, it isn't the tab
contents per se which hold the important data for us. We have modified
the event tab to include the entire raw event, because in our system the
really important data is usually in the application-submitted text.

Since the event details window has no export, there is no way to print
the desired information per event.

I also tried adding the "other fields" to the columns listing; however,
that test hit a different error. The first column was "Date", and when I
tried to export that list, the export failed. So I ran the audit-viewer
from the command line and saw this error:
TypeError: __date_column_event_text takes exactly 1 argument (2 given).

>
> [1] I have just noticed that "list" exports don't work, and a fix will be 
> available in the next release.
>    Mirek
>

Was it the above error or a different one?

Thanks again,
LCB.


-- 
LC (Lenny) Bruzenak



audit-viewer performance

2009-12-18 Thread LC Bruzenak
I don't know how much the audit-viewer tool is used by folks with
substantial amounts of data, but my experience is that it is nearly
unusable for our system. I appreciate that it does a lot really well,
however it takes minutes to load our data on startup. It seems more
filter tabs hurt the performance as well.

The only purpose of this particular machine is to collect/process
audit/prelude data.
Only prelude and audit-related processes are being run on this one.
It is an HP DL380 with 2 quad-core processors, 12GB RAM and an
internal RAID running on F10.

As it is now, I move the audit data out of the /var/log/audit directory
daily, or there is no chance that the audit-viewer will complete its
load. Sometimes it never recovers and we have to kill and restart it.
The directory we are currently loading holds around 1.5GB of data, and
it takes about 5 minutes (give or take) for the viewer to load it and
become usable.

Once loaded, a big filter effort will take maybe a minute or so to
yield results.
While the data is loading there is no feedback, so after a minute or
two, uncertainty sets in about whether it is going to return any data at
all.

What is the plan for this tool? As I said, I think it is very nice
feature-wise in general but in practice it isn't living up to
expectations.
I can try to help but will take a while to get python-proficient. Or
is the trouble in the parse library?

I do not have scientific data yet, but recently I loaded one 100MB
audit file from the store. It took around 3 minutes to load. Then I
changed the source, and that one took longer. When it finally loaded,
the process size was over 2GB.

I can run some better tests and try to get some data if it is helpful.
Are there ways I can try to exercise the parse library outside the GUI
on these same files which might help me know what to look for? Or any
other ideas I can try?

Thanks,
LCB.
-- 
LC (Lenny) Bruzenak



print capability for audit-viewer?

2009-12-17 Thread LC Bruzenak
Is there any plan to add printing capability to the audit-viewer?

Thx,
LCB.
-- 
LC (Lenny) Bruzenak



Re: check_second_connection stopping my recovery?

2009-12-01 Thread LC Bruzenak
On Tue, 2009-12-01 at 13:10 -0500, Steve Grubb wrote:
> On Wednesday 18 November 2009 06:01:10 pm LC Bruzenak wrote:
> 
> Yes, it was. With the reconnect code its possible to DoS a server, so the 
> connections need to be limited. I think the best solution is to make an admin 
> tweakable setting that defaults to 1 and you can set it to 2. Your recovery 
> technique won't be needed in the long term since its planned to have a store-
> and-forward model so nothing is lost and its automatically recovered on start 
> up.
> 
> -Steve

Steve,

Your call, but it may not be worth adding a new setting.
I've already patched it out of my system, and if I'm the only one who
cares then I'd say don't worry about it. I am aware of the DoS risk, but
all senders are locked down tight, so I feel the mitigation is
sufficient. In fact, I nearly DoS-attacked myself before restricting the
recovery to at most 1 process. :)

The store-and-forward piece will be excellent. It will solve at least a
couple of issues for me: recovery and also forwarding from a DMZ machine
to an internal server which will then forward to an independent
collector.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



check_second_connection stopping my recovery?

2009-11-18 Thread LC Bruzenak
It appears to me as though the new connection code in auditd-listen.c
is stopping my recovery actions.
My aggregator is getting a constant stream of:
op=dup addr=192.168.10.10:43546 port=43546 res=no

I was going back through the events on disk, scooping them up and
sending them to the aggregation machine as Steve suggested a long
while back (using an ausearch then piping the results to
audisp-remote).
So it appears to me that this is now prohibited.  Was this intentional?

Thx,
LCB.

-- 
LC (Lenny) Bruzenak



Re: audit-1.7.16 released

2009-11-17 Thread LC Bruzenak
On Tue, Nov 17, 2009 at 9:52 PM, LC Bruzenak  wrote:
> On Tue, Nov 17, 2009 at 3:48 PM, Steve Grubb  wrote:
>> On Monday 16 November 2009 06:52:24 pm LC Bruzenak wrote:
>>> > You should have daemon start/end events at the aggregator. Are they not
>>> > getting there? Also, the aggregator should have matching
>>> > connect/disconnect events.
>>>
>>> I am not getting the DAEMON_END events. In an orderly shutdown, the
>>> network shuts down before the audit daemon does.
>>
>> OK, I'll take a look to see if things can be reordered to let this event get
>> sent.
>>
>> -Steve
>>
>
> Thanks, that would help in the case where the client shuts down normally.
> There is definitely utility in having a positive event come from the
> sender saying it is shutting down.
>
> But if the client gets the power cord yanked out it doesn't help me,
> so I'll still try to add something on the server side to add a local
> audit event as well as the syslog.
>

OK, it appears this would work as expected.
I see that "close_client" gets called on a client timeout, and it
does send an AUDIT_DAEMON_CLOSE event. I will test that ASAP.
I assume a client that just drops off would hit this case.

Thx!
LCB.

-- 
LC (Lenny) Bruzenak



Re: audit-1.7.16 released

2009-11-17 Thread LC Bruzenak
On Tue, Nov 17, 2009 at 3:48 PM, Steve Grubb  wrote:
> On Monday 16 November 2009 06:52:24 pm LC Bruzenak wrote:
>> > You should have daemon start/end events at the aggregator. Are they not
>> > getting there? Also, the aggregator should have matching
>> > connect/disconnect events.
>>
>> I am not getting the DAEMON_END events. In an orderly shutdown, the
>> network shuts down before the audit daemon does.
>
> OK, I'll take a look to see if things can be reordered to let this event get
> sent.
>
> -Steve
>

Thanks, that would help in the case where the client shuts down normally.
There is definitely utility in having a positive event come from the
sender saying it is shutting down.

But if the client gets the power cord yanked out it doesn't help me,
so I'll still try to add something on the server side to add a local
audit event as well as the syslog.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak



Re: audit-1.7.16 released

2009-11-16 Thread LC Bruzenak
On Mon, Nov 16, 2009 at 9:59 AM, Steve Grubb  wrote:
> On Friday 13 November 2009 08:14:22 pm LC Bruzenak wrote:
>> Thinking about the client network connection timeout...
>>
>> I am wondering if this is a serious enough condition to warrant
>> inserting an audit event in addition to the syslog.
>
> If you have 1000 machines and the switch dies, you will have 4000 events when
> everything reconnects. The server will record its own events and the clients
> will record duplicate copies from their end. The audit trail has not been lost
> or anything bad happened. I think this is more of a systems management issue
> than security.

And for most systems I agree. For me, if a switch fails and I get 4000
events that is acceptable.
Actually, on my systems, if a switch dies the clients will all start
shutting down. Then after it is fixed, and they reboot, they will
resend their events the way you suggested - by grabbing all of them
from the estimated network send error time and pushing them into
audisp-remote (BTW, it works great; thanks!).

>
>> For me it is, because sending a termination event from the client is
>> both difficult and unreliable, and I am supposed to provide client
>> (sender) startup/shutdown data.
>
> You should have daemon start/end events at the aggregator. Are they not
> getting there? Also, the aggregator should have matching connect/disconnect
> events.
>

I am not getting the DAEMON_END events. In an orderly shutdown, the
network shuts down before the audit daemon does. In a catastrophic or
otherwise unintended termination, obviously it would be of benefit.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak



Re: audit-1.7.16 released

2009-11-13 Thread LC Bruzenak
On Fri, 2009-10-30 at 10:56 -0400, Steve Grubb wrote:
> On Monday 26 October 2009 10:46:33 am LC Bruzenak wrote:
> > On Sat, 2009-10-17 at 11:55 -0400, Steve Grubb wrote:
> ...
> > >
> > > - If audisp-remote plugin has a queue at exit, use non-zero exit code
> > > - In auditd, tell libev to stop processing a connection when idle timeout
> > > - In auditd, tell libev to stop processing a connection when shutting
> > > down
> > >
> > > This release fixes a bug introduced in the 1.7.15 release. The main
> > > problem was that the idle timeout was not telling libev to stop
> > > processing the associated socket when it closed an idle connection.
> > > Subsequent reconnects would go into an error state and libev would
> > > immediately close the new connection. This update fixes that problem. I
> > > also applied a patch from trunk that checks the queue size on exit of
> > > audisp-remote to decide if it was successful or not.
> > >
> > > Please let me know if you run across any problems with this release.
> > 
> > Is there any indication that this event has happened being logged?
> 
> This was a bugfix where libev was not being told that the connection was 
> closed 
> by the auditd code. The fix is right below the syslog message saying this 
> happened.
> 
> -Steve

Steve,

Thinking about the client network connection timeout...

I am wondering if this is a serious enough condition to warrant
inserting an audit event in addition to the syslog.

For me it is, because sending a termination event from the client is
both difficult and unreliable, and I am supposed to provide client
(sender) startup/shutdown data. For me, the connection termination is a
good indication, so I will probably patch mine.

I wonder if it would be helpful for others as well.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: audit-1.7.16 released

2009-10-26 Thread LC Bruzenak
On Sat, 2009-10-17 at 11:55 -0400, Steve Grubb wrote:
> Hi,
> 
> I've just released a new version of the audit daemon. It can be downloaded 
> from http://people.redhat.com/sgrubb/audit. It will also be in rawhide  
> soon. The ChangeLog is:
> 
> - If audisp-remote plugin has a queue at exit, use non-zero exit code
> - In auditd, tell libev to stop processing a connection when idle timeout
> - In auditd, tell libev to stop processing a connection when shutting down
> 
> This release fixes a bug introduced in the 1.7.15 release. The main problem 
> was 
> that the idle timeout was not telling libev to stop processing the associated 
> socket when it closed an idle connection. Subsequent reconnects would go into 
> an error state and libev would immediately close the new connection. This 
> update fixes that problem. I also applied a patch from trunk that checks the 
> queue size on exit of audisp-remote to decide if it was successful or not.
> 
> Please let me know if you run across any problems with this release.
> 
> -Steve

Is any indication logged that this event has happened?
The man page suggests it might be ("...before auditd complains").

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH] Add auditd listener and remote audit protocol

2009-09-29 Thread LC Bruzenak
On Tue, 2009-09-29 at 14:51 -0400, Norman Mark St. Laurent wrote:
> Hi LCB,
> 
> I hope I answer you correctly...
> 
> I would look in your /etc/audisp/audisp-remote.conf file and note the 
> port you communicate on; as an alternative, you can grab the port with 
> "lsof -i -nP" or "netstat -taupe".  Then you can use tcpdump to watch 
> the connections.
> 
> # tcpdump -i eth0 port 1001  <-- or whatever port you have set up for 
> the remote data, on the correct NIC.
> 
> Sounds like this could help you out.
> 
> Norman Mark St. Laurent
> Conceras | Chief Technology Officer and ISSE
> Phone:  703-965-4892
> Email:  mstlaur...@conceras.com
> Web:  http://www.conceras.com
> 
> Connect. Collaborate. Conceras.
> 
> 
> 
> LC Bruzenak wrote:
> > On Thu, 2008-08-14 at 19:31 -0500, LC Bruzenak wrote:
> >   
> >> On Thu, 2008-08-14 at 20:27 -0400, Steve Grubb wrote:
> >> 
> >>> On Thursday 14 August 2008 20:22:24 LC Bruzenak wrote:
> >>>   
> >>>> I think you have a good point - this is the first cut and maybe
> >>>> 
> >> later on
> >> 
> >>>> institute a "replay daemon" or something which can send events on
> >>>> reconnect.
> >>>> 
> >>> Note that all audispd plugins take their input from stdin. At the
> >>>   
> >> worst, if 
> >> 
> >>> you had the time hacks, you could 
> >>>
> >>> ausearch --start  --end  --raw | /sbin/audisp-remote
> >>>
> >>> -Steve
> >>>   
> >
> > Steve,
> >
> > I have been doing this but I really cannot tell if the audisp-remote
> > connection succeeds; it returns "0" either way.
> > Would there be an easy way to return a non-zero failure indicator?
> >
> > Thx,
> > LCB.
> >

Norman,

Thanks for the reply, but I wasn't quite clear enough.
The context here is a recovery script: I need to get the return value of
audisp-remote within the script to decide whether the recovery succeeded
or failed.

I don't think that was clear above; my apologies, since the conversation
I referenced was > 1 year old.

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: [PATCH] Add auditd listener and remote audit protocol

2009-09-29 Thread LC Bruzenak
On Thu, 2008-08-14 at 19:31 -0500, LC Bruzenak wrote:
> 
> On Thu, 2008-08-14 at 20:27 -0400, Steve Grubb wrote:
> > On Thursday 14 August 2008 20:22:24 LC Bruzenak wrote:
> > > I think you have a good point - this is the first cut and maybe
> later on
> > > institute a "replay daemon" or something which can send events on
> > > reconnect.
> > 
> > Note that all audispd plugins take their input from stdin. At the
> worst, if 
> > you had the time hacks, you could 
> > 
> > ausearch --start  --end  --raw | /sbin/audisp-remote
> > 
> > -Steve

Steve,

I have been doing this but I really cannot tell if the audisp-remote
connection succeeds; it returns "0" either way.
Would there be an easy way to return a non-zero failure indicator?
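For later readers: the audit-1.7.16 ChangeLog elsewhere in this thread notes "If audisp-remote plugin has a queue at exit, use non-zero exit code", which makes exactly this checkable from a script. A hedged sketch of the branching logic; replay_cmd here is a hypothetical stand-in for the real `ausearch --start ... --end ... --raw | /sbin/audisp-remote` pipeline:

```shell
# Hypothetical stand-in for:
#   ausearch --start "$1" --end "$2" --raw | /sbin/audisp-remote
# Recent audisp-remote exits non-zero when its queue is not empty at exit;
# we simulate a failed replay here so the error branch is exercised.
replay_cmd() {
    false
}

if replay_cmd "11/18/2009 00:00:00" "11/18/2009 23:59:59"; then
    echo "replay ok"
else
    echo "replay failed"
fi
```

A real recovery script would log the failure and retry later rather than just echoing.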

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: audisp-prelude event not propagated?

2009-09-17 Thread LC Bruzenak
On Thu, 2009-09-17 at 11:24 -0500, LC Bruzenak wrote:
> On Wed, 2009-09-16 at 22:13 -0500, LC Bruzenak wrote:
> > On Wed, 2009-09-16 at 22:00 -0500, LC Bruzenak wrote:
> > > I have 2 machines: one collector and one sender.
> > > I added a watched file key with the ids keyword on the sender machine.
> > > 
> > > Other IDS events (login, AVCs, etc.) propagate from the sender to the
> > > collector, so I am sure they are registered correctly
> > > (prelude-manager->prelude-manager). Watched file IDS events do not get
> > > to the collector.
> > > 
> > > I have the same rule on the collector machine and verified it generated
> > > the prelude "Watched File" event when I touched a file watched with the
> > > key: 
> > > [r...@audit ~]# auditctl -l | grep ids
> > > LIST_RULES: exit,always watch=/boot/test perm=wa key=ids-file-hi
> > > 
> > > Is this a prelude issue or an audit issue?
> > > 
> > > Thx,
> > > LCB.
> > > 
> > > audit-1.7.13-1.fc10
> > > prelude-manager-0.9.14.2-2.fc10.x86_64
> > > 
> 
> ...and both machines have identical /etc/audisp/audisp-prelude.conf
> files.
> 

This must be a prelude issue, since I see the event being generated when
I run prelude-manager in debug mode. I'll stop sending info on this one
now, but if anyone else hits the same issue later, contact me and I'll
share the eventual fix.

LCB.


-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: audisp-prelude event not propagated?

2009-09-17 Thread LC Bruzenak
On Wed, 2009-09-16 at 22:13 -0500, LC Bruzenak wrote:
> On Wed, 2009-09-16 at 22:00 -0500, LC Bruzenak wrote:
> > I have 2 machines: one collector and one sender.
> > I added a watched file key with the ids keyword on the sender machine.
> > 
> > Other IDS events (login, AVCs, etc.) propagate from the sender to the
> > collector, so I am sure they are registered correctly
> > (prelude-manager->prelude-manager). Watched file IDS events do not get
> > to the collector.
> > 
> > I have the same rule on the collector machine and verified it generated
> > the prelude "Watched File" event when I touched a file watched with the
> > key: 
> > [r...@audit ~]# auditctl -l | grep ids
> > LIST_RULES: exit,always watch=/boot/test perm=wa key=ids-file-hi
> > 
> > Is this a prelude issue or an audit issue?
> > 
> > Thx,
> > LCB.
> > 
> > audit-1.7.13-1.fc10
> > prelude-manager-0.9.14.2-2.fc10.x86_64
> > 

...and both machines have identical /etc/audisp/audisp-prelude.conf
files.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: audisp-prelude event not propagated?

2009-09-16 Thread LC Bruzenak
On Wed, 2009-09-16 at 22:00 -0500, LC Bruzenak wrote:
> I have 2 machines: one collector and one sender.
> I added a watched file key with the ids keyword on the sender machine.
> 
> Other IDS events (login, AVCs, etc.) propagate from the sender to the
> collector, so I am sure they are registered correctly
> (prelude-manager->prelude-manager). Watched file IDS events do not get
> to the collector.
> 
> I have the same rule on the collector machine and verified it generated
> the prelude "Watched File" event when I touched a file watched with the
> key: 
> [r...@audit ~]# auditctl -l | grep ids
> LIST_RULES: exit,always watch=/boot/test perm=wa key=ids-file-hi
> 
> Is this a prelude issue or an audit issue?
> 
> Thx,
> LCB.
> 
> audit-1.7.13-1.fc10
> prelude-manager-0.9.14.2-2.fc10.x86_64
> 

The detail in the IDS event generated locally on the collector isn't
complete. Under "Target" there is a "Target file current" block. The
"Name" value shows up as: "/" but the "Path" value is "/boot/".

The corresponding audit event is below.

node=audit type=PATH msg=audit(09/16/2009 20:26:54.945:271) : item=1
name=/boot/test inode=16 dev=08:01 mode=file,644 ouid=root ogid=root
rdev=00:00 obj=unconfined_u:object_r:boot_t:s0 
node=audit type=PATH msg=audit(09/16/2009 20:26:54.945:271) : item=0
name=/boot/ inode=2 dev=08:01 mode=dir,755 ouid=root ogid=root
rdev=00:00 obj=system_u:object_r:boot_t:s0 
node=audit type=CWD msg=audit(09/16/2009 20:26:54.945:271) :  cwd=/root 
node=audit type=SYSCALL msg=audit(09/16/2009 20:26:54.945:271) :
arch=x86_64 syscall=open success=yes exit=3 a0=7fffbe53d9c3 a1=941
a2=1b6 a3=386016d12c items=2 ppid=2275 pid=2448 auid=root uid=root
gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root
tty=pts0 ses=1 comm=touch exe=/bin/touch
subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
key=ids-file-hi 

Maybe the two "PATH" records confused the plugin code?

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



audisp-prelude event not propagated?

2009-09-16 Thread LC Bruzenak
I have 2 machines: one collector and one sender.
I added a watched file key with the ids keyword on the sender machine.

Other IDS events (login, AVCs, etc.) propagate from the sender to the
collector, so I am sure they are registered correctly
(prelude-manager->prelude-manager). Watched file IDS events do not get
to the collector.

I have the same rule on the collector machine and verified it generated
the prelude "Watched File" event when I touched a file watched with the
key: 
[r...@audit ~]# auditctl -l | grep ids
LIST_RULES: exit,always watch=/boot/test perm=wa key=ids-file-hi

Is this a prelude issue or an audit issue?

Thx,
LCB.

audit-1.7.13-1.fc10
prelude-manager-0.9.14.2-2.fc10.x86_64

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



make auditd persistent?

2009-09-15 Thread LC Bruzenak
Steve (or anyone),

Has there been any discussion about an option to make auditd
unkillable? If this was already covered on the list, I apologize - I
lost the old emails.

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: buffer space

2009-08-27 Thread LC Bruzenak

On Thu, 2009-08-27 at 13:21 -0400, David Flatley wrote:
> When auditd starts and reports an error on a line in the audit.rules,
> does auditd continue to run with the rest of the rules?
> Or does auditd stop waiting for a correction to the rules file? An
> example would be on syscalls for fchown32 or umount. 
> Thanks.
> 
> 
> David Flatley CISSP
> 

David,

You can always run "sudo auditctl -l" to see which ones are loaded.

LCB.
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: buffer space

2009-08-18 Thread LC Bruzenak

On Tue, 2009-08-18 at 09:02 -0400, David Flatley wrote:
> When I do "service auditd rotate" I am getting in
> the /var/log/messages the following:
> 
> Error receiving audit netlink packet (No buffer space available)
> Error sending signal_info request (No buffer space available) 
> 
> At the same time I am running a regression test that is generating 20
> meg audit logs every six to eight minutes.
> 
> Is this a concern?
> 
> David Flatley 
> 

David,

What I believe is happening is that your regression test is generating
an abnormal amount of audit data. That's OK, but I think when you do the
rotate, auditd suspends disk writes while it waits for the rotation to
complete.

IIRC, the rotate starts with the highest-numbered log and rolls it to
the next higher number, then decrements the counter and repeats. So
log.13->log.14, then log.12->log.13, etc., until audit.log is moved to
audit.log.1. Then a new audit.log is created and the flow resumes.
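That order can be sketched with plain file moves. This is only a toy illustration of the rotation order, not auditd's actual code; the temp directory and the num_logs value are made up:

```shell
# Toy illustration of the rotate order: the highest-numbered log rolls
# first, the counter decrements, and finally audit.log becomes audit.log.1.
dir=$(mktemp -d)
touch "$dir/audit.log" "$dir/audit.log.1" "$dir/audit.log.2"

num_logs=4                  # made-up setting; the real value lives in auditd.conf
i=$((num_logs - 1))
while [ "$i" -ge 1 ]; do
    if [ -f "$dir/audit.log.$i" ]; then
        mv "$dir/audit.log.$i" "$dir/audit.log.$((i + 1))"
    fi
    i=$((i - 1))
done
mv "$dir/audit.log" "$dir/audit.log.1"

ls "$dir"   # now holds audit.log.1, audit.log.2, audit.log.3
```

With hundreds of log files, that many sequential renames is consistent with the rotate taking an appreciable amount of time.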

While this happens, you are stacking up events from the kernel and
eventually run out of space. On some machines where the log files are in
the hundreds (I had around 300) I have seen the rotate take an
appreciable amount of time.

So you are probably dropping events when you get the above messages;
whether that concerns you for the duration of the test is for you to
decide.

This sounds like a case where you know some application will generate
huge amounts of AVC data you do not want in the logs, and ideally you
would block those events with a rule. However, last week I believe Steve
noted that under the current kernel code (and probably auditctl rules)
you cannot selectively exclude AVCs.

LCB.
-- 
LC (Lenny) Bruzenak
le...@magitekltd.com



Re: buffer space

2009-08-17 Thread LC Bruzenak

On Mon, 2009-08-17 at 14:01 -0400, Steve Grubb wrote:
> 
> > It's a problem for me too.
> > I was thinking about just patching the ausearch code to behave as
> > desired...but hoping Steve beat me to it so there was a greatly
> reduced
> > chance of bad code...
> 
> #cat `ls /var/log/audit/a* | sort -r` | ausearch -i
> #cat `ls /var/log/audit/a* | sort -r` | aureport
> 
> cat can open more than one file at a time,
> 
> -Steve

Told you so!
:)

LCB.

-- 
LC (Lenny) Bruzenak
le...@magitekltd.com


