Hi,

We were hitting the same issue as well. We solved it like this:
- we have a fact that reports whether the iptables service is up on the
machine; otherwise, if the service is down, there will be no purge and we
still get the previous rules.
- we put the purge resource in a special class (something like
mycompany::firewall::puppet_purge) and force it to run in the last stage
(we use stages); a rough sketch follows below.
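
Roughly, the purge class looks like this (only a sketch: the fact name
iptables_running and the stage wiring are illustrative, not our exact code):

class mycompany::firewall::puppet_purge {
  # Only purge when the custom fact reports the iptables service as
  # running; purging while the service is down would leave the previous
  # rules in place without Puppet noticing.
  if $::iptables_running == 'true' {
    resources { 'firewall':
      purge => true,
    }
  }
}

# Wire the class into a final stage so the purge runs after everything else:
stage { 'purge_firewall': }
Stage['main'] -> Stage['purge_firewall']
class { 'mycompany::firewall::puppet_purge':
  stage => 'purge_firewall',
}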

Best regards,
Cristian Falcas



On Fri, Jul 4, 2014 at 5:47 PM, Ken Barber <[email protected]> wrote:

> So puppetlabs-firewall is an active provider: whenever it 'runs' in
> the catalog it applies the rule straight away. You are probably seeing
> this because you're applying a blocking rule (like a DROP or default
> DROP for the table) before the SSH allowance rule gets applied.
>
> Take a close look at the pre/post suggestion here:
> https://forge.puppetlabs.com/puppetlabs/firewall#beginning-with-firewall
>
> Notice how it suggests creating a coarse-grained ordering to set up the
> "DROP" rule as the very last thing that runs. Now, to be clear, this
> concept is about Puppet resource execution order, not the order in which
> the rules are set up in iptables (i.e. with the number in the title).
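>
> Roughly, the shape of that pattern (a sketch based on the README's
> example; class names and rules here are illustrative):
>
> # Every firewall rule requires my_fw::pre and comes before my_fw::post,
> # so the catch-all DROP is the last firewall change the agent applies.
> Firewall {
>   require => Class['my_fw::pre'],
>   before  => Class['my_fw::post'],
> }
>
> class my_fw::pre {
>   Firewall { require => undef } # avoid a cycle on the rules in this class
>
>   # Baseline allowances that must be in place before anything is dropped:
>   firewall { '000 accept related established':
>     proto  => 'all',
>     state  => ['RELATED', 'ESTABLISHED'],
>     action => 'accept',
>   }
>   firewall { '001 allow ssh':
>     dport  => 22,
>     proto  => 'tcp',
>     action => 'accept',
>   }
> }
>
> class my_fw::post {
>   firewall { '999 drop all':
>     proto  => 'all',
>     action => 'drop',
>     before => undef, # break the default so nothing is ordered after this
>   }
> }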
>
> ken.
>
> On Fri, Jul 4, 2014 at 1:19 PM, Danny Roberts
> <[email protected]> wrote:
> > To clarify: we have to use SSH to connect to the servers in this
> > environment. They are all VMs, and the hosting provider does not give
> > any means of accessing a console (not ideal, but sadly beyond our
> > control).
> >
> > Our standard process after building a new server is to manually run
> > Puppet once to bring it up to our standard ASAP. Normally Puppet runs
> > daemonized beyond that point.
> >
> > This is our first production environment that uses the Puppetlabs
> > Firewall module, so it is our first time encountering this in anger.
> > Oddly, the server remains unreachable via SSH after this for at least
> > 2 hours, which is enough for 3-4 Puppet runs to sort out any issues.
> > That still seems a bit long.
> >
> > I'm about to try another test by stopping the firewall before doing
> > another Puppet run on a fresh server, to see how that behaves.
> >
> >
> > On Wednesday, 2 July 2014 14:27:05 UTC+1, jcbollinger wrote:
> >>
> >> On Tuesday, July 1, 2014 9:30:57 AM UTC-5, Danny Roberts wrote:
> >>>
> >>> I am using the Puppetlabs firewall module to manage our firewall. All
> >>> servers get our core ruleset:
> >>> [...]
> >>> This worked perfectly when I spun up a server with no role (and
> >>> therefore no extra rules). However, when I spun up servers with the
> >>> 'puppet' & 'database' roles (and therefore the extra rules), it hung
> >>> at:
> >>>
> >>> Notice: /Stage[main]/Mycompany/Firewall[9001
> >>> fe701ab7ca74bd49f13b9f0ab39f3254]/ensure: removed
> >>>
> >>> My SSH session eventually disconnects with a broken pipe. The puppet
> >>> server I spun up yesterday was available when I got into the office
> >>> this morning, so it seems they do eventually come back, but it takes
> >>> some time. Is there any reason I am getting cut off like that, and is
> >>> there any way to avoid it?
> >>
> >> I'm a little confused. What does your SSH session have to do with it?
> >> I don't find it especially surprising that an existing SSH connection
> >> gets severed when the destination machine's firewall is manipulated by
> >> Puppet, if that's what you're describing. I would not necessarily have
> >> predicted it, but in retrospect it seems reasonable.
> >>
> >> I'm supposing that you were connected remotely via SSH to the machine
> >> on which the agent was running, following the progress of the run in
> >> real time. In that case, are you certain that the run was in fact
> >> interrupted at all? Maybe the output from the remote side was curtailed
> >> when your SSH connection was disrupted, but the run continued. Or, if
> >> you were running un-daemonized, perhaps the run was interrupted when
> >> severing the SSH connection produced a forced logout from the
> >> controlling terminal.
> >>
> >> Either way, the fact that the subject systems eventually recover on
> >> their own makes me suspect that the problem lies in how you were
> >> monitoring the run, rather than in your manifests. You could try
> >> running Puppet in daemon mode, or otherwise disconnected from a
> >> terminal, and checking the log after the fact to make sure everything
> >> went as it should.
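> >>
> >> For instance, a minimal sketch (the cron schedule and log path here
> >> are illustrative, not prescriptive):
> >>
> >> # Run the agent from cron, detached from any terminal, and keep its
> >> # output in a log file to inspect after the fact.
> >> cron { 'puppet-agent':
> >>   command => '/usr/bin/puppet agent --onetime --no-daemonize --logdest /var/log/puppet/agent-run.log',
> >>   user    => 'root',
> >>   minute  => '*/30',
> >> }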
> >>
> >>
> >> John
> >>
