Re: somewhat reproducable vimage panic

2020-08-10 Thread Hans Petter Selasky

On 2020-07-23 21:26, Bjoern A. Zeeb wrote:
That’ll probably work;  still, the deferred teardown work seems wrong to 
me;  I haven’t investigated;  the patch kind-of says exactly that as 
well: if “wait until deferred stuff is done” is all we are doing, why 
can we not do it on the spot then?


Hi Bjoern,

Trying to move the discussion over to Phabricator at:
https://reviews.freebsd.org/D24914

The answer to your question, I believe, is this commit:

https://svnweb.freebsd.org/base/head/sys/netinet/in_mcast.c?revision=333175&view=markup

It affects both IPv4 and IPv6.

I know that sometimes multicast entries can be freed from timer
callbacks. I think having a task for network-related configuration is
acceptable; probably one is enough. With D24914 there will be two
teardown threads, which is probably overkill, but it makes a solid
solution for now.


I don't know why Stephen didn't think about draining those tasks. I know 
some people are not actively using VIMAGE and that might be the reason.
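
For illustration only -- this is not the actual D24914 change, and the real
multicast code may use a grouptask rather than a plain task -- "draining"
a deferred teardown task with taskqueue(9) looks roughly like this:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/taskqueue.h>

static struct task inm_free_task;	/* hypothetical task name */

static void
vnet_inm_uninit(void)
{
	/*
	 * taskqueue_drain() blocks until the task is neither queued nor
	 * running, so no deferred multicast cleanup can outlive the vnet
	 * once this returns.
	 */
	taskqueue_drain(taskqueue_thread, &inm_free_task);
}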


--HPS


Re: somewhat reproducable vimage panic

2020-08-04 Thread Hans Petter Selasky

On 2020-07-25 21:21, John-Mark Gurney wrote:

So far so good...  I am getting these on occasion:
in6_purgeaddr: err=65, destination address delete failed


Maybe you could add a "kdb_backtrace()" call when that error happens?
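
(err=65 is EHOSTUNREACH.)  A minimal sketch of the suggested instrumentation,
assuming the message is logged from the error path in in6_purgeaddr() in
sys/netinet6/in6.c:

#include <sys/kdb.h>		/* kdb_backtrace() */
#include <sys/syslog.h>		/* log() */

	if (error != 0) {
		log(LOG_INFO, "%s: err=%d, destination address delete failed\n",
		    __func__, error);
		kdb_backtrace();	/* temporary: show who called us */
	}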

--HPS


Re: somewhat reproducable vimage panic

2020-07-27 Thread Hans Petter Selasky

On 2020-07-25 21:21, John-Mark Gurney wrote:

Yeah, agreed. I think hselasky has a better fix:
https://reviews.freebsd.org/D24914

I just saw his e-mail in a different thread.

I'm testing out this patch now, and I'll let people know how it goes.. It'll
be nice to not have to worry about these panics..

So far so good...  I am getting these on occasion:
in6_purgeaddr: err=65, destination address delete failed

But that's likely just a consequence of the patch preventing the panic.

The other issue that I'm now seeing is that because we don't forcefully
clear out the multicast task, it can take a good 20+ seconds from the
time a jail is destroyed to the interface appearing again in vnet0.
Pretty sure this is related to the dmesg from above...


Hi,

D24914 just ensures proper draining. Feel free to accept the patch if 
you think I should submit it. It fixes some problems seen at work too!


--HPS


Re: somewhat reproducable vimage panic

2020-07-25 Thread John-Mark Gurney
John-Mark Gurney wrote this message on Thu, Jul 23, 2020 at 16:49 -0700:
> Kristof Provost wrote this message on Thu, Jul 23, 2020 at 11:02 +0200:
> > On 23 Jul 2020, at 11:00, Bjoern A. Zeeb wrote:
> > > On 23 Jul 2020, at 8:09, Kristof Provost wrote:
> > >
> > >> On 23 Jul 2020, at 9:19, Kristof Provost wrote:
> > >>> On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:
> >  So, it's pretty easy to trigger, just attach a couple USB ethernet
> >  adapters, in my case, they were ure, but likely any two spare ethernet
> >  interfaces will work, and wire them back to back..
> > 
> > >>> I’ve been able to trigger it using epair as well:
> > >>>
> > >>> `sudo sh testinterfaces.txt epair0a epair0b`
> > >>>
> > >>> I did have to comment out the waitcarrier() check.
> > >>>
> > >> I’ve done a little bit of digging, and I think I’m starting to
> > >> see how this breaks.
> > >>
> > >> This always affects the jailed vlan interfaces. They’re getting
> > >> deleted, but the ifp doesn’t go away just yet because it’s still
> > >> in use by the multicast code.
> > >> The multicast code does its cleanup in task queues,
> > >
> > > Wow, did I miss that back then? Did I review a change and not notice? 
> > > Sorry if that was the case.
> > >
> > > Vnet teardown is blocking and forceful.
> > > Doing deferred cleanup work isn’t a good idea at all.
> > > I think that is the real problem here.
> > >
> > > I’d rather have us fix this than putting more bandaids into the
> > > code.
> > >
> > Yeah, agreed. I think hselasky has a better fix: 
> > https://reviews.freebsd.org/D24914
> > 
> > I just saw his e-mail in a different thread.
> 
> I'm testing out this patch now, and I'll let people know how it goes.. It'll
> be nice to not have to worry about these panics..

So far so good...  I am getting these on occasion:
in6_purgeaddr: err=65, destination address delete failed

But that's likely just a consequence of the patch preventing the panic.

The other issue that I'm now seeing is that because we don't forcefully
clear out the multicast task, it can take a good 20+ seconds from the
time a jail is destroyed to the interface appearing again in vnet0.
Pretty sure this is related to the dmesg from above...

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: somewhat reproducable vimage panic

2020-07-23 Thread John-Mark Gurney
Kristof Provost wrote this message on Thu, Jul 23, 2020 at 11:02 +0200:
> On 23 Jul 2020, at 11:00, Bjoern A. Zeeb wrote:
> > On 23 Jul 2020, at 8:09, Kristof Provost wrote:
> >
> >> On 23 Jul 2020, at 9:19, Kristof Provost wrote:
> >>> On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:
>  So, it's pretty easy to trigger, just attach a couple USB ethernet
>  adapters, in my case, they were ure, but likely any two spare ethernet
>  interfaces will work, and wire them back to back..
> 
> >>> I’ve been able to trigger it using epair as well:
> >>>
> >>> `sudo sh testinterfaces.txt epair0a epair0b`
> >>>
> >>> I did have to comment out the waitcarrier() check.
> >>>
> >> I’ve done a little bit of digging, and I think I’m starting to
> >> see how this breaks.
> >>
> >> This always affects the jailed vlan interfaces. They’re getting
> >> deleted, but the ifp doesn’t go away just yet because it’s still
> >> in use by the multicast code.
> >> The multicast code does its cleanup in task queues,
> >
> > Wow, did I miss that back then? Did I review a change and not notice? 
> > Sorry if that was the case.
> >
> > Vnet teardown is blocking and forceful.
> > Doing deferred cleanup work isn’t a good idea at all.
> > I think that is the real problem here.
> >
> > I’d rather have us fix this than putting more bandaids into the
> > code.
> >
> Yeah, agreed. I think hselasky has a better fix: 
> https://reviews.freebsd.org/D24914
> 
> I just saw his e-mail in a different thread.

I'm testing out this patch now, and I'll let people know how it goes.. It'll
be nice to not have to worry about these panics..

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: somewhat reproducable vimage panic

2020-07-23 Thread Bjoern A. Zeeb

On 23 Jul 2020, at 9:02, Kristof Provost wrote:


On 23 Jul 2020, at 11:00, Bjoern A. Zeeb wrote:

On 23 Jul 2020, at 8:09, Kristof Provost wrote:


On 23 Jul 2020, at 9:19, Kristof Provost wrote:

On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:

So, it's pretty easy to trigger, just attach a couple USB ethernet
adapters, in my case, they were ure, but likely any two spare ethernet
interfaces will work, and wire them back to back..


I’ve been able to trigger it using epair as well:

`sudo sh testinterfaces.txt epair0a epair0b`

I did have to comment out the waitcarrier() check.

I’ve done a little bit of digging, and I think I’m starting to 
see how this breaks.


This always affects the jailed vlan interfaces. They’re getting 
deleted, but the ifp doesn’t go away just yet because it’s still 
in use by the multicast code.

The multicast code does its cleanup in task queues,


Wow, did I miss that back then? Did I review a change and not notice? 
Sorry if that was the case.


Vnet teardown is blocking and forceful.
Doing deferred cleanup work isn’t a good idea at all.
I think that is the real problem here.

I’d rather have us fix this than putting more bandaids into the 
code.


Yeah, agreed. I think hselasky has a better fix: 
https://reviews.freebsd.org/D24914


I just saw his e-mail in a different thread.


That’ll probably work;  still, the deferred teardown work seems wrong 
to me;  I haven’t investigated;  the patch kind-of says exactly that 
as well: if “wait until deferred stuff is done” is all we are doing, 
why can we not do it on the spot then?
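
Doing it "on the spot" would amount to running the deferred-free loop
synchronously while the vnet is still alive; a sketch mirroring the task body
from Marko's patch further down the thread (hypothetical helper, list and
lock names as in sys/netinet/in_mcast.c):

static void
inm_release_sync(struct in_multi_head *inmh)
{
	struct in_multi *inm, *tinm;

	IN_MULTI_LOCK();
	SLIST_FOREACH_SAFE(inm, inmh, inm_nrele, tinm) {
		SLIST_REMOVE_HEAD(inmh, inm_nrele);
		inm_release(inm);
	}
	IN_MULTI_UNLOCK();
}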


/bz



Re: somewhat reproducable vimage panic

2020-07-23 Thread Marko Zec
On Thu, 23 Jul 2020 11:02:01 +0200
"Kristof Provost"  wrote:

> Yeah, agreed. I think hselasky has a better fix: 
> https://reviews.freebsd.org/D24914

Yup, that should be it.

> 
> I just saw his e-mail in a different thread.
> 
> Best regards,
> Kristof



Re: somewhat reproducable vimage panic

2020-07-23 Thread Kristof Provost

On 23 Jul 2020, at 11:00, Bjoern A. Zeeb wrote:

On 23 Jul 2020, at 8:09, Kristof Provost wrote:


On 23 Jul 2020, at 9:19, Kristof Provost wrote:

On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:

So, it's pretty easy to trigger, just attach a couple USB ethernet
adapters, in my case, they were ure, but likely any two spare ethernet
interfaces will work, and wire them back to back..


I’ve been able to trigger it using epair as well:

`sudo sh testinterfaces.txt epair0a epair0b`

I did have to comment out the waitcarrier() check.

I’ve done a little bit of digging, and I think I’m starting to 
see how this breaks.


This always affects the jailed vlan interfaces. They’re getting 
deleted, but the ifp doesn’t go away just yet because it’s still 
in use by the multicast code.

The multicast code does its cleanup in task queues,


Wow, did I miss that back then? Did I review a change and not notice? 
Sorry if that was the case.


Vnet teardown is blocking and forceful.
Doing deferred cleanup work isn’t a good idea at all.
I think that is the real problem here.

I’d rather have us fix this than putting more bandaids into the 
code.


Yeah, agreed. I think hselasky has a better fix: 
https://reviews.freebsd.org/D24914


I just saw his e-mail in a different thread.

Best regards,
Kristof


Re: somewhat reproducable vimage panic

2020-07-23 Thread Bjoern A. Zeeb

On 23 Jul 2020, at 8:09, Kristof Provost wrote:


On 23 Jul 2020, at 9:19, Kristof Provost wrote:

On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:

So, it's pretty easy to trigger, just attach a couple USB ethernet
adapters, in my case, they were ure, but likely any two spare ethernet
interfaces will work, and wire them back to back..


I’ve been able to trigger it using epair as well:

`sudo sh testinterfaces.txt epair0a epair0b`

I did have to comment out the waitcarrier() check.

I’ve done a little bit of digging, and I think I’m starting to see 
how this breaks.


This always affects the jailed vlan interfaces. They’re getting 
deleted, but the ifp doesn’t go away just yet because it’s still 
in use by the multicast code.

The multicast code does its cleanup in task queues,


Wow, did I miss that back then? Did I review a change and not notice? 
Sorry if that was the case.


Vnet teardown is blocking and forceful.
Doing deferred cleanup work isn’t a good idea at all.
I think that is the real problem here.

I’d rather have us fix this than putting more bandaids into the code.

/bz

PS: I love that you can repro this with epairs; that means we can write a
generic test case for this and commit it.



so by the time it gets around to doing that the ifp is already marked 
as dying and the vnet is gone.
There are still references to the ifp though, and when the multicast 
code tries to do its cleanup we get the panic.


This hack stops the panic for me, but I don’t know if this is the 
best solution:


diff --git a/sys/net/if.c b/sys/net/if.c
index 59dd38267cf..bd0c87eddf1 100644
--- a/sys/net/if.c
+++ b/sys/net/if.c
@@ -3681,6 +3685,10 @@ if_delmulti_ifma_flags(struct ifmultiaddr *ifma, int flags)
 		ifp = NULL;
 	}
 #endif
+
+	if (ifp && ifp->if_flags & IFF_DYING)
+		return;
+
 	/*
 	 * If and only if the ifnet instance exists: Acquire the address lock.
 	 */
diff --git a/sys/netinet/in_mcast.c b/sys/netinet/in_mcast.c
index 39fc82c5372..6493e2a5bfb 100644
--- a/sys/netinet/in_mcast.c
+++ b/sys/netinet/in_mcast.c
@@ -623,7 +623,7 @@ inm_release(struct in_multi *inm)

 	/* XXX this access is not covered by IF_ADDR_LOCK */
 	CTR2(KTR_IGMPV3, "%s: purging ifma %p", __func__, ifma);
-	if (ifp != NULL) {
+	if (ifp != NULL && (ifp->if_flags & IFF_DYING) == 0) {
 		CURVNET_SET(ifp->if_vnet);
 		inm_purge(inm);
 		free(inm, M_IPMADDR);

Best regards,
Kristof



Re: somewhat reproducable vimage panic

2020-07-23 Thread Kristof Provost

On 23 Jul 2020, at 9:19, Kristof Provost wrote:

On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:

So, it's pretty easy to trigger, just attach a couple USB ethernet
adapters, in my case, they were ure, but likely any two spare ethernet
interfaces will work, and wire them back to back..


I’ve been able to trigger it using epair as well:

`sudo sh testinterfaces.txt epair0a epair0b`

I did have to comment out the waitcarrier() check.

I’ve done a little bit of digging, and I think I’m starting to see 
how this breaks.


This always affects the jailed vlan interfaces. They’re getting 
deleted, but the ifp doesn’t go away just yet because it’s still in 
use by the multicast code.
The multicast code does its cleanup in task queues, so by the time it 
gets around to doing that the ifp is already marked as dying and the 
vnet is gone.
There are still references to the ifp though, and when the multicast 
code tries to do its cleanup we get the panic.
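
For context, that panic string comes from the vnet sanity check behind
CURVNET_SET() (sys/net/vnet.h, with VNET_DEBUG).  A simplified sketch of the
check, not the verbatim macro:

/* Panics when the vnet pointer is NULL or its magic number is gone. */
#define	CURVNET_SET(arg)	do {					\
	VNET_ASSERT((arg) != NULL &&					\
	    (arg)->vnet_magic_n == VNET_MAGIC_N,			\
	    ("CURVNET_SET at %s:%d %s() curvnet=%p vnet=%p",		\
	    __FILE__, __LINE__, __func__, curvnet, (arg)));		\
	curthread->td_vnet = (arg);					\
} while (0)

Once the vnet has been freed, vnet_magic_n reads back as poison, the
assertion fails, and we get exactly the "curvnet=0 vnet=0x..." panic from the
original report.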


This hack stops the panic for me, but I don’t know if this is the best 
solution:


diff --git a/sys/net/if.c b/sys/net/if.c
index 59dd38267cf..bd0c87eddf1 100644
--- a/sys/net/if.c
+++ b/sys/net/if.c
@@ -3681,6 +3685,10 @@ if_delmulti_ifma_flags(struct ifmultiaddr *ifma, int flags)
 		ifp = NULL;
 	}
 #endif
+
+	if (ifp && ifp->if_flags & IFF_DYING)
+		return;
+
 	/*
 	 * If and only if the ifnet instance exists: Acquire the address lock.
 	 */
diff --git a/sys/netinet/in_mcast.c b/sys/netinet/in_mcast.c
index 39fc82c5372..6493e2a5bfb 100644
--- a/sys/netinet/in_mcast.c
+++ b/sys/netinet/in_mcast.c
@@ -623,7 +623,7 @@ inm_release(struct in_multi *inm)

 	/* XXX this access is not covered by IF_ADDR_LOCK */
 	CTR2(KTR_IGMPV3, "%s: purging ifma %p", __func__, ifma);
-	if (ifp != NULL) {
+	if (ifp != NULL && (ifp->if_flags & IFF_DYING) == 0) {
 		CURVNET_SET(ifp->if_vnet);
 		inm_purge(inm);
 		free(inm, M_IPMADDR);

Best regards,
Kristof


Re: somewhat reproducable vimage panic

2020-07-23 Thread Kristof Provost
On 23 Jul 2020, at 0:15, John-Mark Gurney wrote:
> So, it's pretty easy to trigger, just attach a couple USB ethernet
> adapters, in my case, they were ure, but likely any two spare ethernet
> interfaces will work, and wire them back to back..
>
I’ve been able to trigger it using epair as well:

`sudo sh testinterfaces.txt epair0a epair0b`

I did have to comment out the waitcarrier() check.

Best regards,
Kristof


Re: somewhat reproducable vimage panic

2020-07-22 Thread John-Mark Gurney
Bjoern A. Zeeb wrote this message on Wed, Jul 22, 2020 at 20:43 +0000:
> On 22 Jul 2020, at 19:34, John-Mark Gurney wrote:
> 
> > John-Mark Gurney wrote this message on Tue, Jul 21, 2020 at 23:05 -0700:
> >> Peter Libassi wrote this message on Wed, Jul 22, 2020 at 06:54 +0200:
> >>> Is this related to
> >>>
> >>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
> >>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326
> >>
> >> Definitely not 234985..  I'm using ue interfaces, and so they don't
> >> get destroyed while the jail is going away...
> >>
> >> I don't think it's 238326 either.  This is 100% reliable and it's in
> >> the IP multicast code..  It looks like in_multi isn't holding an
> >> interface or address lock waiting for things to free up...
> >
> > Did a little more poking, and it looks like the vnet is free'd before
> > the ifnet is free'd causing this problem:
> > (kgdb) print inm->inm_ifp[0].if_refcount
> > $5 = 1
> > (kgdb) print inm->inm_ifp[0].if_vnet[0]
> > $6 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 
> > 0xdeadc0dedeadc0de},
> >   vnet_magic_n = 3735929054, vnet_ifcnt = 3735929054,
> >   vnet_sockcnt = 3735929054, vnet_state = 3735929054,
> >   vnet_data_mem = 0xdeadc0dedeadc0de, vnet_data_base = 
> > 16045693110842147038,
> >   vnet_shutdown = 222}
> >
> > So the multicast code is fine, it holds and releases a reference to
> > ifnet..
> >
> > The issue is that the reference to the ifnet doesn't involve a
> > reference to the vnet/prison.
> 
> Does it need to?  The ifnet cannot go away while something holds a 
> reference to it, right?

It's the other way around that's the problem.. the ifnet is holding an
invalid vnet pointer that got free'd.

Maybe the problem isn't the tear down, but that the vnet pointer isn't
changed/restored before the free?
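
A hedged sketch of that lifetime mismatch -- if_ref(9) pins the ifnet itself,
but ifp->if_vnet is a bare back pointer that takes no reference on the vnet
(simplified, assumed flow):

	struct ifnet *ifp = inm->inm_ifp;

	if_ref(ifp);			/* ifp stays allocated... */
	/* ...jail is destroyed, deferred task runs much later... */
	CURVNET_SET(ifp->if_vnet);	/* ...but if_vnet may now be freed */
	/* cleanup work */
	CURVNET_RESTORE();
	if_rele(ifp);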

> Sounds more like the teardown order is wrong (again)?
> 
> There should be no more multicast when IP etc. is gone.  That means MC
> doesn’t properly clean up after itself.

Don't know, just know that it's easy to trigger right now...  I haven't
tested on prior releases, but if you'd like me to, it isn't too hard for
me to test...

> I guess I should go back now and re-read your original problem statement 
> on how you trigger this..

So, it's pretty easy to trigger, just attach a couple USB ethernet
adapters, in my case, they were ure, but likely any two spare ethernet
interfaces will work, and wire them back to back..

Run the script attached earlier in the thread, providing it the name
of the two interfaces as arguments, and run it a few times.  You might
get failures or not.  It shouldn't matter.  After a few runs, it'll
panic...

I just tested this (to make sure my ure changes weren't causing additional
problems) using
FreeBSD-13.0-CURRENT-amd64-20200625-r362596-memstick.img.xz, so it's
stock reproducible.

Thanks for looking into this!

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: somewhat reproducable vimage panic

2020-07-22 Thread Bjoern A. Zeeb

On 22 Jul 2020, at 19:34, John-Mark Gurney wrote:

John-Mark Gurney wrote this message on Tue, Jul 21, 2020 at 23:05 -0700:

Peter Libassi wrote this message on Wed, Jul 22, 2020 at 06:54 +0200:

Is this related to

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326

Definitely not 234985..  I'm using ue interfaces, and so they don't
get destroyed while the jail is going away...

I don't think it's 238326 either.  This is 100% reliable and it's in
the IP multicast code..  It looks like in_multi isn't holding an
interface or address lock waiting for things to free up...


Did a little more poking, and it looks like the vnet is free'd before
the ifnet is free'd causing this problem:
(kgdb) print inm->inm_ifp[0].if_refcount
$5 = 1
(kgdb) print inm->inm_ifp[0].if_vnet[0]
$6 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 0xdeadc0dedeadc0de},
  vnet_magic_n = 3735929054, vnet_ifcnt = 3735929054,
  vnet_sockcnt = 3735929054, vnet_state = 3735929054,
  vnet_data_mem = 0xdeadc0dedeadc0de, vnet_data_base = 16045693110842147038,
  vnet_shutdown = 222}

So the multicast code is fine, it holds and releases a reference to
ifnet..

The issue is that the reference to the ifnet doesn't involve a
reference to the vnet/prison.


Does it need to?  The ifnet cannot go away while something holds a 
reference to it, right?


Sounds more like the teardown order is wrong (again)?

There should be no more multicast when IP etc. is gone.  That means MC
doesn’t properly clean up after itself.
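
For reference, per-vnet teardown order is driven by the subsystem/order
priorities given to VNET_SYSUNINIT(); whether multicast state is gone before
IP detaches comes down to where its hook sits in that ordering.  A sketch
with illustrative names (this is not the actual registration):

static void
vnet_mcast_uninit(const void *unused __unused)
{
	/* Release remaining multicast state while the vnet is still valid. */
}
VNET_SYSUNINIT(vnet_mcast_uninit, SI_SUB_PROTO_MC, SI_ORDER_ANY,
    vnet_mcast_uninit, NULL);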


I guess I should go back now and re-read your original problem statement 
on how you trigger this..


/bz




Re: somewhat reproducable vimage panic

2020-07-22 Thread John-Mark Gurney
John-Mark Gurney wrote this message on Tue, Jul 21, 2020 at 23:05 -0700:
> Peter Libassi wrote this message on Wed, Jul 22, 2020 at 06:54 +0200:
> > Is this related to 
> > 
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326
> 
> Definitely not 234985..  I'm using ue interfaces, and so they don't
> get destroyed while the jail is going away...
> 
> I don't think it's 238326 either.  This is 100% reliable and it's in
> the IP multicast code..  It looks like in_multi isn't holding an
> interface or address lock waiting for things to free up...

Did a little more poking, and it looks like the vnet is free'd before
the ifnet is free'd causing this problem:
(kgdb) print inm->inm_ifp[0].if_refcount 
$5 = 1
(kgdb) print inm->inm_ifp[0].if_vnet[0]  
$6 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 0xdeadc0dedeadc0de},
  vnet_magic_n = 3735929054, vnet_ifcnt = 3735929054,
  vnet_sockcnt = 3735929054, vnet_state = 3735929054,
  vnet_data_mem = 0xdeadc0dedeadc0de, vnet_data_base = 16045693110842147038,
  vnet_shutdown = 222}
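
Decoding the decimal values in that dump makes the use-after-free plain:

/*
 * 3735929054           == 0xdeadc0de          (vnet_magic_n, vnet_ifcnt, ...)
 * 16045693110842147038 == 0xdeadc0dedeadc0de  (the pointer fields)
 * 222                  == 0xde                (vnet_shutdown, a single byte)
 *
 * 0xdeadc0de is the trash pattern UMA writes over freed items in
 * INVARIANTS kernels, i.e. the struct vnet behind if_vnet was freed.
 */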

So the multicast code is fine, it holds and releases a reference to
ifnet..

The issue is that the reference to the ifnet doesn't involve a
reference to the vnet/prison.

I have noticed that a number of times after a jail destroy, one
of my interfaces doesn't make it back to the main OS.. it's just gone..

I can "make" it reappear by resetting the hardware, but that does imply
that an ifnet is hanging out in limbo...

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: somewhat reproducable vimage panic

2020-07-22 Thread John-Mark Gurney
Peter Libassi wrote this message on Wed, Jul 22, 2020 at 06:54 +0200:
> Is this related to 
> 
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326

Definitely not 234985..  I'm using ue interfaces, and so they don't
get destroyed while the jail is going away...

I don't think it's 238326 either.  This is 100% reliable and it's in
the IP multicast code..  It looks like in_multi isn't holding an
interface or address lock waiting for things to free up...

> > On 21 July 2020 at 22:23, John-Mark Gurney wrote:
> > 
> > Marko Zec wrote this message on Tue, Jul 21, 2020 at 11:31 +0200:
> >> On Tue, 21 Jul 2020 02:16:55 -0700
> >> John-Mark Gurney  wrote:
> >> 
> >>> I'm running:
> >>> FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25
> >>> 05:02:51 UTC 2020
> >>> r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
> >>> amd64
> >>> 
> >>> and I'm working on improve the if_ure driver.  I've put together a
> >>> little script that I've attached that I'm using to test the driver..
> >>> It puts a couple ue interfaces each into their own jail, configures
> >>> them, and tries to pass traffic.  This assumes that the two interfaces
> >>> are connected together.
> >>> 
> >>> Pretty regularly when destroying the jails, I get the following
> >>> panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626
> >>> inm_release() curvnet=0 vnet=0xf80154c82a80
> >> 
> >> Perhaps the attached patch could help? (disclaimer: not even
> >> compile-tested)
> > 
> > The patch compiled, but it just moved the panic earlier than before.
> > 
> > #4  0x80bc2123 in panic (fmt=<optimized out>)
> >    at ../../../kern/kern_shutdown.c:839
> > #5  0x80d61726 in inm_release_task (arg=<optimized out>,
> >    pending=<optimized out>) at ../../../netinet/in_mcast.c:633
> > #6  0x80c2166a in taskqueue_run_locked (queue=0xf800033cfd00)
> >    at ../../../kern/subr_taskqueue.c:476
> > #7  0x80c226e4 in taskqueue_thread_loop (arg=<optimized out>)
> >    at ../../../kern/subr_taskqueue.c:793
> > 
> > Now it panics at the location of the new CURVNET_SET and not the
> > old one..
> > 
> > Ok, decided to dump the contents of the vnet, and it looks like
> > it's a use after free:
> > (kgdb) print/x *(struct vnet *)0xf8012a283140
> > $2 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 
> > 0xdeadc0dedeadc0de}, vnet_magic_n = 0xdeadc0de, 
> >  vnet_ifcnt = 0xdeadc0de, vnet_sockcnt = 0xdeadc0de, vnet_state = 
> > 0xdeadc0de, vnet_data_mem = 0xdeadc0dedeadc0de, 
> >  vnet_data_base = 0xdeadc0dedeadc0de, vnet_shutdown = 0xde}
> > 
> > The patch did seem to make it happen quicker, or maybe I was just more
> > lucky this morning...
> > 
> >>> (kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
> >>> #1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
> >>> #2  0x80bc6250 in kern_reboot (howto=260)
> >>>at /usr/src/sys/kern/kern_shutdown.c:481
> >>> #3  0x80bc66aa in vpanic (fmt=<optimized out>, ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:913
> >>> #4  0x80bc6403 in panic (fmt=<optimized out>)
> >>>    at /usr/src/sys/kern/kern_shutdown.c:839
> >>> #5  0x80d6553b in inm_release (inm=0xf80029043700)
> >>>    at /usr/src/sys/netinet/in_mcast.c:630
> >>> #6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
> >>>    at /usr/src/sys/netinet/in_mcast.c:312
> >>> #7  0x80c2521a in taskqueue_run_locked
> >>> (queue=0xf80003116b00) at /usr/src/sys/kern/subr_taskqueue.c:476
> >>> #8  0x80c26294 in taskqueue_thread_loop (arg=<optimized out>)
> >>>    at /usr/src/sys/kern/subr_taskqueue.c:793
> >>> #9  0x80b830f0 in fork_exit (
> >>>callout=0x80c26200 , 
> >>>arg=0x81cf4f70 ,
> >>> frame=0xfe0049e99b80) at /usr/src/sys/kern/kern_fork.c:1052
> >>> #10 
> >>> (kgdb) 
> >>> 
> >>> I have the core files so I can get additional information.
> >>> 
> >>> Let me know if you need any additional information.
> >>> 
> >> 
> > 
> >> Index: sys/netinet/in_mcast.c
> >> ===================================================================
> >> --- sys/netinet/in_mcast.c (revision 363386)
> >> +++ sys/netinet/in_mcast.c (working copy)
> >> @@ -309,8 +309,10 @@
> >>IN_MULTI_LOCK();
> >>	SLIST_FOREACH_SAFE(inm, &inm_free_tmp, inm_nrele, tinm) {
> >>	SLIST_REMOVE_HEAD(&inm_free_tmp, inm_nrele);
> >> +  CURVNET_SET(inm->inm_ifp->if_vnet);
> >>MPASS(inm);
> >>inm_release(inm);
> >> +  CURVNET_RESTORE();
> >>}
> >>IN_MULTI_UNLOCK();
> >> }

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."

Re: somewhat reproducable vimage panic

2020-07-21 Thread Peter Libassi
Is this related to 

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234985 and
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=238326

/Peter


> On 21 July 2020 at 22:23, John-Mark Gurney wrote:
> 
> Marko Zec wrote this message on Tue, Jul 21, 2020 at 11:31 +0200:
>> On Tue, 21 Jul 2020 02:16:55 -0700
>> John-Mark Gurney  wrote:
>> 
>>> I'm running:
>>> FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25
>>> 05:02:51 UTC 2020
>>> r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
>>> amd64
>>> 
>>> and I'm working on improve the if_ure driver.  I've put together a
>>> little script that I've attached that I'm using to test the driver..
>>> It puts a couple ue interfaces each into their own jail, configures
>>> them, and tries to pass traffic.  This assumes that the two interfaces
>>> are connected together.
>>> 
>>> Pretty regularly when destroying the jails, I get the following
>>> panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626
>>> inm_release() curvnet=0 vnet=0xf80154c82a80
>> 
>> Perhaps the attached patch could help? (disclaimer: not even
>> compile-tested)
> 
> The patch compiled, but it just moved the panic earlier than before.
> 
> #4  0x80bc2123 in panic (fmt=<optimized out>)
>    at ../../../kern/kern_shutdown.c:839
> #5  0x80d61726 in inm_release_task (arg=<optimized out>,
>    pending=<optimized out>) at ../../../netinet/in_mcast.c:633
> #6  0x80c2166a in taskqueue_run_locked (queue=0xf800033cfd00)
>    at ../../../kern/subr_taskqueue.c:476
> #7  0x80c226e4 in taskqueue_thread_loop (arg=<optimized out>)
>    at ../../../kern/subr_taskqueue.c:793
> 
> Now it panics at the location of the new CURVNET_SET and not the
> old one..
> 
> Ok, decided to dump the contents of the vnet, and it looks like
> it's a use after free:
> (kgdb) print/x *(struct vnet *)0xf8012a283140
> $2 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 0xdeadc0dedeadc0de}, 
> vnet_magic_n = 0xdeadc0de, 
>  vnet_ifcnt = 0xdeadc0de, vnet_sockcnt = 0xdeadc0de, vnet_state = 0xdeadc0de, 
> vnet_data_mem = 0xdeadc0dedeadc0de, 
>  vnet_data_base = 0xdeadc0dedeadc0de, vnet_shutdown = 0xde}
> 
> The patch did seem to make it happen quicker, or maybe I was just more
> lucky this morning...
> 
>>> (kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
>>> #1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
>>> #2  0x80bc6250 in kern_reboot (howto=260)
>>>at /usr/src/sys/kern/kern_shutdown.c:481
>>> #3  0x80bc66aa in vpanic (fmt=<optimized out>, ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:913
>>> #4  0x80bc6403 in panic (fmt=<optimized out>)
>>>    at /usr/src/sys/kern/kern_shutdown.c:839
>>> #5  0x80d6553b in inm_release (inm=0xf80029043700)
>>>    at /usr/src/sys/netinet/in_mcast.c:630
>>> #6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
>>>    at /usr/src/sys/netinet/in_mcast.c:312
>>> #7  0x80c2521a in taskqueue_run_locked
>>> (queue=0xf80003116b00) at /usr/src/sys/kern/subr_taskqueue.c:476
>>> #8  0x80c26294 in taskqueue_thread_loop (arg=<optimized out>)
>>>    at /usr/src/sys/kern/subr_taskqueue.c:793
>>> #9  0x80b830f0 in fork_exit (
>>>callout=0x80c26200 , 
>>>arg=0x81cf4f70 ,
>>> frame=0xfe0049e99b80) at /usr/src/sys/kern/kern_fork.c:1052
>>> #10 
>>> (kgdb) 
>>> 
>>> I have the core files so I can get additional information.
>>> 
>>> Let me know if you need any additional information.
>>> 
>> 
> 
>> Index: sys/netinet/in_mcast.c
>> ===================================================================
>> --- sys/netinet/in_mcast.c   (revision 363386)
>> +++ sys/netinet/in_mcast.c   (working copy)
>> @@ -309,8 +309,10 @@
>>  IN_MULTI_LOCK();
>>	SLIST_FOREACH_SAFE(inm, &inm_free_tmp, inm_nrele, tinm) {
>>	SLIST_REMOVE_HEAD(&inm_free_tmp, inm_nrele);
>> +CURVNET_SET(inm->inm_ifp->if_vnet);
>>  MPASS(inm);
>>  inm_release(inm);
>> +CURVNET_RESTORE();
>>  }
>>  IN_MULTI_UNLOCK();
>> }
> 
> 
> -- 
>  John-Mark Gurney Voice: +1 415 225 5579
> 
> "All that I will do, has been done, All that I have, has not."



Re: somewhat reproducable vimage panic

2020-07-21 Thread John-Mark Gurney
Marko Zec wrote this message on Tue, Jul 21, 2020 at 11:31 +0200:
> On Tue, 21 Jul 2020 02:16:55 -0700
> John-Mark Gurney  wrote:
> 
> > I'm running:
> > FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25
> > 05:02:51 UTC 2020
> > r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
> >  amd64
> > 
> > and I'm working on improve the if_ure driver.  I've put together a
> > little script that I've attached that I'm using to test the driver..
> > It puts a couple ue interfaces each into their own jail, configures
> > them, and tries to pass traffic.  This assumes that the two interfaces
> > are connected together.
> > 
> > Pretty regularly when destroying the jails, I get the following
> > panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626
> > inm_release() curvnet=0 vnet=0xf80154c82a80
> 
> Perhaps the attached patch could help? (disclaimer: not even
> compile-tested)

The patch compiled, but it just moved the panic earlier than before.

#4  0x80bc2123 in panic (fmt=<optimized out>)
    at ../../../kern/kern_shutdown.c:839
#5  0x80d61726 in inm_release_task (arg=<optimized out>,
    pending=<optimized out>) at ../../../netinet/in_mcast.c:633
#6  0x80c2166a in taskqueue_run_locked (queue=0xf800033cfd00)
    at ../../../kern/subr_taskqueue.c:476
#7  0x80c226e4 in taskqueue_thread_loop (arg=<optimized out>)
    at ../../../kern/subr_taskqueue.c:793

Now it panics at the location of the new CURVNET_SET and not the
old one..

Ok, decided to dump the contents of the vnet, and it looks like
it's a use after free:
(kgdb) print/x *(struct vnet *)0xf8012a283140
$2 = {vnet_le = {le_next = 0xdeadc0dedeadc0de, le_prev = 0xdeadc0dedeadc0de}, 
vnet_magic_n = 0xdeadc0de, 
  vnet_ifcnt = 0xdeadc0de, vnet_sockcnt = 0xdeadc0de, vnet_state = 0xdeadc0de, 
vnet_data_mem = 0xdeadc0dedeadc0de, 
  vnet_data_base = 0xdeadc0dedeadc0de, vnet_shutdown = 0xde}

The patch did seem to make it happen quicker, or maybe I was just more
lucky this morning...

> > (kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
> > #1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
> > #2  0x80bc6250 in kern_reboot (howto=260)
> > at /usr/src/sys/kern/kern_shutdown.c:481
> > #3  0x80bc66aa in vpanic (fmt=<optimized out>, ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:913
> > #4  0x80bc6403 in panic (fmt=<optimized out>)
> >    at /usr/src/sys/kern/kern_shutdown.c:839
> > #5  0x80d6553b in inm_release (inm=0xf80029043700)
> >    at /usr/src/sys/netinet/in_mcast.c:630
> > #6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
> >    at /usr/src/sys/netinet/in_mcast.c:312
> > #7  0x80c2521a in taskqueue_run_locked
> > (queue=0xf80003116b00) at /usr/src/sys/kern/subr_taskqueue.c:476
> > #8  0x80c26294 in taskqueue_thread_loop (arg=<optimized out>)
> >    at /usr/src/sys/kern/subr_taskqueue.c:793
> > #9  0x80b830f0 in fork_exit (
> > callout=0x80c26200 , 
> > arg=0x81cf4f70 ,
> > frame=0xfe0049e99b80) at /usr/src/sys/kern/kern_fork.c:1052
> > #10 
> > (kgdb) 
> > 
> > I have the core files so I can get additional information.
> > 
> > Let me know if you need any additional information.
> > 
> 

> Index: sys/netinet/in_mcast.c
> ===================================================================
> --- sys/netinet/in_mcast.c(revision 363386)
> +++ sys/netinet/in_mcast.c(working copy)
> @@ -309,8 +309,10 @@
>   IN_MULTI_LOCK();
>	SLIST_FOREACH_SAFE(inm, &inm_free_tmp, inm_nrele, tinm) {
>	SLIST_REMOVE_HEAD(&inm_free_tmp, inm_nrele);
> + CURVNET_SET(inm->inm_ifp->if_vnet);
>   MPASS(inm);
>   inm_release(inm);
> + CURVNET_RESTORE();
>   }
>   IN_MULTI_UNLOCK();
>  }


-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: somewhat reproducable vimage panic

2020-07-21 Thread Marko Zec
On Tue, 21 Jul 2020 02:16:55 -0700
John-Mark Gurney  wrote:

> I'm running:
> FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25
> 05:02:51 UTC 2020
> r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC
>  amd64
> 
> and I'm working on improve the if_ure driver.  I've put together a
> little script that I've attached that I'm using to test the driver..
> It puts a couple ue interfaces each into their own jail, configures
> them, and tries to pass traffic.  This assumes that the two interfaces
> are connected together.
> 
> Pretty regularly when destroying the jails, I get the following
> panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626
> inm_release() curvnet=0 vnet=0xf80154c82a80

Perhaps the attached patch could help? (disclaimer: not even
compile-tested)

Marko


> (kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
> #1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
> #2  0x80bc6250 in kern_reboot (howto=260)
> at /usr/src/sys/kern/kern_shutdown.c:481
> #3  0x80bc66aa in vpanic (fmt=<optimized out>, ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:913
> #4  0x80bc6403 in panic (fmt=<optimized out>)
>    at /usr/src/sys/kern/kern_shutdown.c:839
> #5  0x80d6553b in inm_release (inm=0xf80029043700)
>    at /usr/src/sys/netinet/in_mcast.c:630
> #6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
>    at /usr/src/sys/netinet/in_mcast.c:312
> #7  0x80c2521a in taskqueue_run_locked
> (queue=0xf80003116b00) at /usr/src/sys/kern/subr_taskqueue.c:476
> #8  0x80c26294 in taskqueue_thread_loop (arg=<optimized out>)
>    at /usr/src/sys/kern/subr_taskqueue.c:793
> #9  0x80b830f0 in fork_exit (
> callout=0x80c26200 , 
> arg=0x81cf4f70 ,
> frame=0xfe0049e99b80) at /usr/src/sys/kern/kern_fork.c:1052
> #10 
> (kgdb) 
> 
> I have the core files so I can get additional information.
> 
> Let me know if you need any additional information.
> 

Index: sys/netinet/in_mcast.c
===================================================================
--- sys/netinet/in_mcast.c	(revision 363386)
+++ sys/netinet/in_mcast.c	(working copy)
@@ -309,8 +309,10 @@
 	IN_MULTI_LOCK();
 	SLIST_FOREACH_SAFE(inm, &inm_free_tmp, inm_nrele, tinm) {
 		SLIST_REMOVE_HEAD(&inm_free_tmp, inm_nrele);
+		CURVNET_SET(inm->inm_ifp->if_vnet);
 		MPASS(inm);
 		inm_release(inm);
+		CURVNET_RESTORE();
 	}
 	IN_MULTI_UNLOCK();
 }


somewhat reproducable vimage panic

2020-07-21 Thread John-Mark Gurney
I'm running:
FreeBSD test 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r362596: Thu Jun 25 05:02:51 
UTC 2020 
r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64

and I'm working on improve the if_ure driver.  I've put together a
little script that I've attached that I'm using to test the driver..
It puts a couple ue interfaces each into their own jail, configures
them, and tries to pass traffic.  This assumes that the two interfaces
are connected together.

Pretty regularly when destroying the jails, I get the following
panic: CURVNET_SET at /usr/src/sys/netinet/in_mcast.c:626 inm_release() 
curvnet=0 vnet=0xf80154c82a80

(kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
#1  doadump (textdump=1) at /usr/src/sys/kern/kern_shutdown.c:394
#2  0x80bc6250 in kern_reboot (howto=260)
at /usr/src/sys/kern/kern_shutdown.c:481
#3  0x80bc66aa in vpanic (fmt=<optimized out>, ap=<optimized out>)
    at /usr/src/sys/kern/kern_shutdown.c:913
#4  0x80bc6403 in panic (fmt=<optimized out>)
    at /usr/src/sys/kern/kern_shutdown.c:839
#5  0x80d6553b in inm_release (inm=0xf80029043700)
    at /usr/src/sys/netinet/in_mcast.c:630
#6  inm_release_task (arg=<optimized out>, pending=<optimized out>)
    at /usr/src/sys/netinet/in_mcast.c:312
#7  0x80c2521a in taskqueue_run_locked (queue=0xf80003116b00)
    at /usr/src/sys/kern/subr_taskqueue.c:476
#8  0x80c26294 in taskqueue_thread_loop (arg=<optimized out>)
    at /usr/src/sys/kern/subr_taskqueue.c:793
#9  0x80b830f0 in fork_exit (
callout=0x80c26200 , 
arg=0x81cf4f70 , frame=0xfe0049e99b80)
at /usr/src/sys/kern/kern_fork.c:1052
#10 
(kgdb) 

I have the core files so I can get additional information.

Let me know if you need any additional information.

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."
#!/bin/sh -
#
# ls testinterfaces.sh | entr sh -c 'fsync testinterfaces.sh && sh testinterfaces.sh ue0 ue1'
#

testiface="$1"
checkiface="$2"

echo starting, test $1, check $2
# ifconfig -m to get caps
getcaps()
{
	ifconfig -m "$1" | awk '$1 ~ /^capabilities=/ { split($0, a, "<"); split(a[2], b, ">"); split(b[1], caps, ","); for (i in caps) print caps[i] }'
}


if ! testjail=$(jail -i -c persist=1 path=/ name=testjail vnet=new vnet.interface="$testiface"); then
	echo failed to start test jail
	exit 1
fi

if ! checkjail=$(jail -i -c persist=1 path=/ name=checkjail vnet=new vnet.interface="$checkiface"); then
	echo failed to start check jail
	exit 1
fi

cleanup()
{
	jail -r "$testjail"
	jail -r "$checkjail"
}

trap cleanup EXIT

run()
{
	if [ x"$1" = x"check" ]; then
		jid="$checkjail"
	elif [ x"$1" = x"test" ]; then
		jid="$testjail"
	else
		echo Invalid: "$1" >&2
		exit 1
	fi

	shift
	jexec "$jid" "$@"
}

ifjail()
{
	if [ x"$1" = x"check" ]; then
		iface="$checkiface"
	elif [ x"$1" = x"test" ]; then
		iface="$testiface"
	else
		echo Invalid: "$1" >&2
		exit 1
	fi

	j="$1"
	shift

	run "$j" ifconfig "$iface" "$@"
}

waitcarrier()
{
	local i

	for i in test check; do
		while :; do
			if ifjail "$i" | grep 1000baseT >/dev/null; then
				break
			fi
			sleep .5
		done
	done
}

hwvlantest()
{
	run test ifconfig lo0 up
	run check ifconfig lo0 up

	for i in "vlanhwtag" "-vlanhwtag"; do
		ifjail test down
		ifjail check down

		ifjail test "$i"
		ifjail check -vlanhwtag

		ifjail test up
		ifjail check up

		sleep 2

		run test ifconfig $testiface.42 create 172.30.5.5/24
		run check ifconfig $checkiface.42 create 172.30.5.4/24

		waitcarrier

		run check tcpdump -p -q -c 2 -n -i "$checkiface" vlan 42 and icmp &
		sleep 1
		if ! run test ping -c 1 -t 2 172.30.5.4; then
			echo FAILED on "$i"
			exit 1
		fi
	done
}

hwvlantest