On Fri, Jun 29, 2012 at 9:17 AM, Eric <eric.luel...@gmail.com> wrote:
> Dan,
>
> Thank you very much for all of your information. You've been very
> helpful. I just have one more quick question and then I'll stop bugging
> you, for now. :) Is there any other way to manage the keys, or some
> sort of automated agent key management? I know ossec-authd would work
> on an internal network with individual IPs for each host, but I don't
> know how/if it would work in a mixed environment of agents, with
> multiple agents coming from one IP.
>

ossec-authd is pretty much all we have. It sets the IP to 'any' for
every new host though, so it may work for you. Might be worth testing,
but I don't have a lot of experience with it so my insight is very
limited.
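
If you want to try it, the basic flow looks like this (assuming default
install paths and the default port, 1515; double-check against your
version):

  # On the server: start the enrollment daemon
  /var/ossec/bin/ossec-authd -p 1515

  # On each agent: request a key from the server
  /var/ossec/bin/agent-auth -m <server-ip> -p 1515

The agent pulls its key over that channel and gets registered with an
IP of 'any', so the NAT shouldn't matter. One caveat: ossec-authd does
not authenticate the agents that connect to it, so only leave it
running while you're actually enrolling hosts.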

> Thanks again,
> Eric
>
>
> On Wednesday, June 27, 2012 12:21:06 PM UTC-4, dan (ddpbsd) wrote:
>>
>> On Wed, Jun 27, 2012 at 12:15 PM, Eric wrote:
>> > Thank you for the information. Is there any better way you can
>> > think of to architect this setup? One of the main concerns is that
>> > Location 1 will reuse Host1's key for Host2, which would completely
>> > confuse the people monitoring the alerts.
>> >
>>
>> You could set up local OSSEC servers and have them forward their
>> alerts to a central OSSEC server.
>>
>> Tell the locations that re-using keys is bad, and they shouldn't do
>> it. Write it out in crayon if you have to.
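
On the forwarding piece: each local server can send its alerts to the
central box over syslog. Untested sketch, from memory, with
central.example.com standing in for your central server:

  <!-- in /var/ossec/etc/ossec.conf on each local server -->
  <ossec_config>
    <syslog_output>
      <server>central.example.com</server>
      <port>514</port>
    </syslog_output>
  </ossec_config>

  # then enable the syslog output and restart
  /var/ossec/bin/ossec-control enable client-syslog
  /var/ossec/bin/ossec-control restart

The central server then treats those alerts like any other remote
syslog feed.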
>>
>> >
>> > On Wednesday, June 27, 2012 10:43:47 AM UTC-4, dan (ddpbsd) wrote:
>> >>
>> >> > Hello,
>> >> >
>> >> > I am working on a deployment that will involve multiple external
>> >> > locations (each behind NAT), all of them talking back to one
>> >> > server.
>> >> >
>> >> > Location 1 will be a mixture of Linux and Windows agents, ~10
>> >> > hosts all going out a single NAT, 1.1.1.1. Location 2 will have
>> >> > ~5 Linux machines going out a single NAT, 2.2.2.2. Location 3
>> >> > will have ~20 Windows machines going out a single NAT, 3.3.3.3.
>> >> >
>> >> > So far I have gotten this general setup to work by creating an
>> >> > individual key for each host and setting the IP address to "any".
>> >> > However, I am curious whether there is any way to set up one key
>> >> > per location and have all of that location's agents share it.
>> >> > That way I could give Location 1 keyA, they would put it on all
>> >> > of their agents, and every agent would be able to talk back to
>> >> > the portal. I have kinda sorta gotten this to work by creating
>> >> > Location1 on the OSSEC server and giving it an IP of 1.1.1.1/32.
>> >> > I know if I just use 1.1.1.1 I get a duplicate key error, but
>> >> > with a CIDR around it, it has worked sometimes and other times it
>> >> > hasn't. So that is my first question: is this scenario doable?
>> >> >
>> >>
>> >> No. Each individual agent requires its own unique key.
>> >>
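
(For reference, a key per agent just means one line per agent in
/var/ossec/etc/client.keys: id, name, IP, key. With "any" as the IP it
looks something like this, keys shortened here:

  001 host1 any 3f2a...
  002 host2 any 9c1b...

Reusing a name, or a non-"any" IP, is what triggers the duplicate
errors.)
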
>> >> > My second question: if I am able to make the above setup work,
>> >> > is there any way to distinguish the individual agents from one
>> >> > another? I know that by default, if we have the hostnames set up
>> >> > correctly, I will see Location1 as the "location" but host1
>> >> > somewhere in the log to distinguish it. Are there any additional
>> >> > fields I can force OSSEC to send with the logs, such as the
>> >> > internal IP? This especially matters for integrity checking
>> >> > alerts, since those don't even give the hostname. Can I force
>> >> > them to?
>> >> >
>> >> > Thanks in advance for any advice/information you all have.
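
On the second question: as long as each agent has its own key, the
alert header already carries the agent name, syscheck alerts included.
From memory it looks roughly like this:

  ** Alert 1340805827.1234: - ossec,syscheck
  2012 Jun 27 10:43:47 (host1) any->syscheck
  Rule: 550 (level 7) -> 'Integrity checksum changed.'
  Integrity checksum changed for: '/etc/hosts'

The '(host1) any' part is the agent's registered name and key IP, not
the NAT address, which is another reason shared keys would be a mess:
every host at a location would show up under the same name.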
