Re: pfsync and option WITH_PF_LOCK

2017-06-11 Thread Hrvoje Popovski
On 9.6.2017. 16:31, Alexandr Nedvedicky wrote:
>> Hi all,
>>
>> with this diff I no longer see the traces reported before.
>>
> 
> thanks a lot for quick testing.
> 
> regards
> sasha
> 

Hi,

I'm running the latest snapshot with WITH_PF_LOCK in production and for now
everything is fine ... we'll see tomorrow when the angry students turn on
their phones :)

carp
pf
pfsync
pflow
dhcpd + sync
tcpdump -lnqttti pflog0 2> error.log | /usr/bin/logger -t pf -p local2.info
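
For anyone who wants to reproduce a comparable test bed, here is a minimal
sketch of the pfsync/carp side in hostname.if(5) form; the interface names,
vhid, password and addresses are made up for illustration:

# /etc/hostname.pfsync0 -- sync state over a dedicated back-to-back link
up syncdev em1

# /etc/hostname.carp0 -- shared firewall address that fails over to the peer
inet 192.0.2.1 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 pass mekmitasdigoat

The tcpdump line above simply feeds pflog(4) output through logger(1) into
syslog, so pf's logged packets end up under the local2 facility.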


many thanks for the pf MP development ...



Re: pfsync and option WITH_PF_LOCK

2017-06-09 Thread Alexandr Nedvedicky
On Fri, Jun 09, 2017 at 03:48:49PM +0200, Hrvoje Popovski wrote:
> On 9.6.2017. 14:55, Alexandr Nedvedicky wrote:
> > Hello,
> > 
> > 
> > On Fri, Jun 09, 2017 at 01:11:01PM +0200, Alexander Bluhm wrote:
> >> On Fri, Jun 09, 2017 at 10:55:46AM +0200, Alexandr Nedvedicky wrote:
> >> would it make sense to commit a poor man's solution below, before
> >> pfsync(4) gets into shape? The idea is to grab PF_LOCK just before we
> >> pass the data to PF for further processing.
> >> Whatever helps to make progress with pf is fine.  We should not
> >> delay unlocking pf until someone steps in and refactors pfsync.
> >>
> >> OK bluhm@
> >>
> > I would still like to ask Hrvoje to give it a try first. I believe the fix
> > should work, but I could not try it myself, as I don't have a working
> > pfsync setup available for testing.
> > 
> > thanks and
> > regards
> > sasha
> > 
> 
> 
> Hi all,
> 
> with this diff I no longer see the traces reported before.
> 

thanks a lot for quick testing.

regards
sasha



Re: pfsync and option WITH_PF_LOCK

2017-06-09 Thread Hrvoje Popovski
On 9.6.2017. 14:55, Alexandr Nedvedicky wrote:
> Hello,
> 
> 
> On Fri, Jun 09, 2017 at 01:11:01PM +0200, Alexander Bluhm wrote:
>> On Fri, Jun 09, 2017 at 10:55:46AM +0200, Alexandr Nedvedicky wrote:
>>> would it make sense to commit a poor man's solution below, before pfsync(4)
>>> gets into shape? The idea is to grab PF_LOCK just before we pass the data
>>> to PF for further processing.
>> Whatever helps to make progress with pf is fine.  We should not
>> delay unlocking pf until someone steps in and refactors pfsync.
>>
>> OK bluhm@
>>
> I would still like to ask Hrvoje to give it a try first. I believe the fix
> should work, but I could not try it myself, as I don't have a working
> pfsync setup available for testing.
> 
> thanks and
> regards
> sasha
> 


Hi all,

with this diff I no longer see the traces reported before.



Re: pfsync and option WITH_PF_LOCK

2017-06-09 Thread Alexandr Nedvedicky
Hello,


On Fri, Jun 09, 2017 at 01:11:01PM +0200, Alexander Bluhm wrote:
> On Fri, Jun 09, 2017 at 10:55:46AM +0200, Alexandr Nedvedicky wrote:
> > would it make sense to commit a poor man's solution below, before pfsync(4)
> > gets into shape? The idea is to grab PF_LOCK just before we pass the data
> > to PF for further processing.
> 
> Whatever helps to make progress with pf is fine.  We should not
> delay unlocking pf until someone steps in and refactors pfsync.
> 
> OK bluhm@
> 

I would still like to ask Hrvoje to give it a try first. I believe the fix
should work, but I could not try it myself, as I don't have a working
pfsync setup available for testing.

thanks and
regards
sasha

> > 
> > regards
> > sasha
> > 
> > 8<---8<---8<--8<
> > --- src/sys/net/if_pfsync.c	Fri Jun 09 09:40:12 2017 +0200
> > +++ src/sys/net/if_pfsync.c	Fri Jun 09 10:49:44 2017 +0200
> > @@ -657,6 +657,7 @@ pfsync_input(struct mbuf **mp, int *offp
> >  	struct pfsync_header *ph;
> >  	struct pfsync_subheader subh;
> >  	int offset, noff, len, count, mlen, flags = 0;
> > +	int e;
> >  
> >  	pfsyncstat_inc(pfsyncs_ipackets);
> >  
> > @@ -733,8 +734,11 @@ pfsync_input(struct mbuf **mp, int *offp
> >  			return IPPROTO_DONE;
> >  		}
> >  
> > -		if (pfsync_acts[subh.action].in(n->m_data + noff,
> > -		    mlen, count, flags) != 0)
> > +		PF_LOCK();
> > +		e = pfsync_acts[subh.action].in(n->m_data + noff, mlen, count,
> > +		    flags);
> > +		PF_UNLOCK();
> > +		if (e != 0)
> >  			goto done;
> >  
> >  		offset += mlen * count;
> > 8<---8<---8<--8<



Re: pfsync and option WITH_PF_LOCK

2017-06-09 Thread Alexander Bluhm
On Fri, Jun 09, 2017 at 10:55:46AM +0200, Alexandr Nedvedicky wrote:
> would it make sense to commit a poor man's solution below, before pfsync(4)
> gets into shape? The idea is to grab PF_LOCK just before we pass the data
> to PF for further processing.

Whatever helps to make progress with pf is fine.  We should not
delay unlocking pf until someone steps in and refactors pfsync.

OK bluhm@

> 
> regards
> sasha
> 
> 8<---8<---8<--8<
> --- src/sys/net/if_pfsync.c	Fri Jun 09 09:40:12 2017 +0200
> +++ src/sys/net/if_pfsync.c	Fri Jun 09 10:49:44 2017 +0200
> @@ -657,6 +657,7 @@ pfsync_input(struct mbuf **mp, int *offp
>  	struct pfsync_header *ph;
>  	struct pfsync_subheader subh;
>  	int offset, noff, len, count, mlen, flags = 0;
> +	int e;
>  
>  	pfsyncstat_inc(pfsyncs_ipackets);
>  
> @@ -733,8 +734,11 @@ pfsync_input(struct mbuf **mp, int *offp
>  			return IPPROTO_DONE;
>  		}
>  
> -		if (pfsync_acts[subh.action].in(n->m_data + noff,
> -		    mlen, count, flags) != 0)
> +		PF_LOCK();
> +		e = pfsync_acts[subh.action].in(n->m_data + noff, mlen, count,
> +		    flags);
> +		PF_UNLOCK();
> +		if (e != 0)
>  			goto done;
>  
>  		offset += mlen * count;
> 8<---8<---8<--8<



Re: pfsync and option WITH_PF_LOCK

2017-06-09 Thread Alexandr Nedvedicky
Hello,

would it make sense to commit a poor man's solution below, before pfsync(4)
gets into shape? The idea is to grab PF_LOCK just before we pass the data
to PF for further processing.

regards
sasha

8<---8<---8<--8<
--- src/sys/net/if_pfsync.c	Fri Jun 09 09:40:12 2017 +0200
+++ src/sys/net/if_pfsync.c	Fri Jun 09 10:49:44 2017 +0200
@@ -657,6 +657,7 @@ pfsync_input(struct mbuf **mp, int *offp
 	struct pfsync_header *ph;
 	struct pfsync_subheader subh;
 	int offset, noff, len, count, mlen, flags = 0;
+	int e;
 
 	pfsyncstat_inc(pfsyncs_ipackets);
 
@@ -733,8 +734,11 @@ pfsync_input(struct mbuf **mp, int *offp
 			return IPPROTO_DONE;
 		}
 
-		if (pfsync_acts[subh.action].in(n->m_data + noff,
-		    mlen, count, flags) != 0)
+		PF_LOCK();
+		e = pfsync_acts[subh.action].in(n->m_data + noff, mlen, count,
+		    flags);
+		PF_UNLOCK();
+		if (e != 0)
 			goto done;
 
 		offset += mlen * count;
8<---8<---8<--8<
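
For context, here is a rough sketch of what PF_LOCK()/PF_UNLOCK() boil down
to when the kernel is built with option WITH_PF_LOCK. This assumes the
rwlock-based implementation from the initial pf lock work, so treat the
exact spellings as an approximation of the real definitions in the pf
headers:

/*
 * Sketch only -- not the authoritative definitions.  Without
 * WITH_PF_LOCK the macros compile away to nothing.
 */
#ifdef WITH_PF_LOCK
extern struct rwlock	pf_lock;

#define PF_LOCK()	do {			\
		NET_ASSERT_LOCKED();		\
		rw_enter_write(&pf_lock);	\
	} while (0)
#define PF_UNLOCK()	do {			\
		PF_ASSERT_LOCKED();		\
		rw_exit_write(&pf_lock);	\
	} while (0)
#else
#define PF_LOCK()	do { } while (0)
#define PF_UNLOCK()	do { } while (0)
#endif

With the diff above, every pfsync_acts[].in handler runs with the pf lock
write-held, which is exactly what the splassert output further down in this
thread was complaining about.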



Re: pfsync and option WITH_PF_LOCK

2017-06-06 Thread Martin Pieuchot
On 06/06/17(Tue) 17:32, Hrvoje Popovski wrote:
> Hi all,
> 
> I'm getting these traces with "option WITH_PF_LOCK" enabled.
> Setup is quite standard: trunk, vlan, carp, pfsync.
> 
> 
> splassert: pf_state_insert: want 1 have 0
> splassert: pf_remove_state: want 1 have 0

This is a known issue.  pfsync(4) needs a complete redesign for MP.

Hopefully David will enjoy doing that :)

> with kern.splassert=2
> 
> splassert: pf_state_insert: want 1 have 0
> Starting stack trace...
> pf_state_insert(80442a00,800020958c58,800020958c50,ff0786c9c898,8087afe8,ff0786c9c898) at pf_state_insert+0x64
> pfsync_state_import(ff00754db23c,2,ff00754db23c,108,1,0) at pfsync_state_import+0x6bb
> pfsync_in_ins(ff00754db23c,108,1,2,130,2c) at pfsync_in_ins+0x151
> pfsync_input(800020958df0,800020958dfc,f0,2,0,800020958dfc) at pfsync_input+0x371
> ip_deliver(800020958df0,800020958dfc,f0,2,0,0) at ip_deliver+0x10f
> ip_local(ff0074da0900,800020958e50,0,0,4,fdbfa4928cfbc85b) at ip_local+0x271
> ipintr(5,3b7,8162b4de,800020958e50,ff0074da0900,800020958e50) at ipintr+0x1e
> if_netisr(0,800020958eb0,800020958eb0,80019080,8116ebf0,800020958eb0) at if_netisr+0x1b5
> taskq_thread(80019080,2a2,80019080,8137bea0,0,800020958f10) at taskq_thread+0x79
> end trace frame: 0x0, count: 248
> End of stack trace.
> 
> 
> splassert: pf_remove_state: want 1 have 0
> Starting stack trace...
> pf_remove_state(ff0786c9c9d0,800020958ca0,c,ff0787323f18,ff0074bbc390,800020958d20) at pf_remove_state+0x43
> pfsync_in_del_c(ff0074bbc3e8,c,1,2,e0,d8) at pfsync_in_del_c+0x68
> pfsync_input(800020958df0,800020958dfc,f0,2,0,800020958dfc) at pfsync_input+0x371
> ip_deliver(800020958df0,800020958dfc,f0,2,0,0) at ip_deliver+0x10f
> ip_local(ff007575e400,800020958e50,0,0,4,fdbfa4928cfbc85b) at ip_local+0x271
> ipintr(5,3b7,8162b4de,800020958e50,ff007575e400,800020958e50) at ipintr+0x1e
> if_netisr(0,800020958eb0,800020958eb0,80019080,8116ebf0,800020958eb0) at if_netisr+0x1b5
> taskq_thread(80019080,2a2,80019080,8137bea0,0,800020958f10) at taskq_thread+0x79
> end trace frame: 0x0, count: 249
> End of stack trace.
> 



pfsync and option WITH_PF_LOCK

2017-06-06 Thread Hrvoje Popovski
Hi all,

I'm getting these traces with "option WITH_PF_LOCK" enabled.
Setup is quite standard: trunk, vlan, carp, pfsync.


splassert: pf_state_insert: want 1 have 0
splassert: pf_remove_state: want 1 have 0
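
These messages come from pf's lock assertion: with option WITH_PF_LOCK,
pf_state_insert() and pf_remove_state() expect the pf lock to be write-held,
but pfsync's input path calls them without taking it first. A minimal sketch
of the kind of check that prints these lines, assuming PF_ASSERT_LOCKED() is
built on rw_status() and the kernel's splassert_fail() (RW_WRITE is 1, so
"want 1 have 0" reads as "want the write lock, hold nothing"):

/*
 * Sketch only -- the real macro lives in the pf headers.  splassert_fail()
 * prints "splassert: %s: want %d have %d" and, with kern.splassert=2,
 * also dumps a stack trace like the ones below.
 */
#define PF_ASSERT_LOCKED()	do {				\
		if (rw_status(&pf_lock) != RW_WRITE)		\
			splassert_fail(RW_WRITE,		\
			    rw_status(&pf_lock), __func__);	\
	} while (0)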


with kern.splassert=2

splassert: pf_state_insert: want 1 have 0
Starting stack trace...
pf_state_insert(80442a00,800020958c58,800020958c50,ff0786c9c898,8087afe8,ff0786c9c898) at pf_state_insert+0x64
pfsync_state_import(ff00754db23c,2,ff00754db23c,108,1,0) at pfsync_state_import+0x6bb
pfsync_in_ins(ff00754db23c,108,1,2,130,2c) at pfsync_in_ins+0x151
pfsync_input(800020958df0,800020958dfc,f0,2,0,800020958dfc) at pfsync_input+0x371
ip_deliver(800020958df0,800020958dfc,f0,2,0,0) at ip_deliver+0x10f
ip_local(ff0074da0900,800020958e50,0,0,4,fdbfa4928cfbc85b) at ip_local+0x271
ipintr(5,3b7,8162b4de,800020958e50,ff0074da0900,800020958e50) at ipintr+0x1e
if_netisr(0,800020958eb0,800020958eb0,80019080,8116ebf0,800020958eb0) at if_netisr+0x1b5
taskq_thread(80019080,2a2,80019080,8137bea0,0,800020958f10) at taskq_thread+0x79
end trace frame: 0x0, count: 248
End of stack trace.


splassert: pf_remove_state: want 1 have 0
Starting stack trace...
pf_remove_state(ff0786c9c9d0,800020958ca0,c,ff0787323f18,ff0074bbc390,800020958d20) at pf_remove_state+0x43
pfsync_in_del_c(ff0074bbc3e8,c,1,2,e0,d8) at pfsync_in_del_c+0x68
pfsync_input(800020958df0,800020958dfc,f0,2,0,800020958dfc) at pfsync_input+0x371
ip_deliver(800020958df0,800020958dfc,f0,2,0,0) at ip_deliver+0x10f
ip_local(ff007575e400,800020958e50,0,0,4,fdbfa4928cfbc85b) at ip_local+0x271
ipintr(5,3b7,8162b4de,800020958e50,ff007575e400,800020958e50) at ipintr+0x1e
if_netisr(0,800020958eb0,800020958eb0,80019080,8116ebf0,800020958eb0) at if_netisr+0x1b5
taskq_thread(80019080,2a2,80019080,8137bea0,0,800020958f10) at taskq_thread+0x79
end trace frame: 0x0, count: 249
End of stack trace.