Re: pf tables locking
On Monday, 13 August 2018 15:22:33 CEST Kristof Provost wrote:
> > This function is called from pf_test only after PF_RULES_RUNLOCK().
>
> I think you’re right, this does look wrong.
>
> It’s very unlikely that this will actually lead to a crash, because
> rules (and associated tables) won’t just go away while there’s still
> state, but we could theoretically lose memory (in the pfrke_counters
> allocation), and miscount.
>
> I don’t want to re-take the rules lock for this

But what about things other than counters and disappearing tables, that is,
getting addresses out of a pool in pf_map_addr? I understand that rpool
can't change live, because it changes only when a ruleset is loaded. But
then there is pfr_pool_get, and this one operates totally unlocked.

I proposed a patch locking pools in
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230640 but now, as I see
it, locking each table seems necessary. Why not have granular locking for
each pool (or maybe rule) and for each table?

--
| pozdrawiam / greetings | powered by Debian, FreeBSD and CentOS |
| Kajetan Staszkiewicz   | jabber,email: vegeta()tuxpowered net  |
| Vegeta                 | www: http://vegeta.tuxpowered.net     |
`^---'
Re: pf tables locking
On Tuesday, 14 August 2018 01:32:17 CEST Kajetan Staszkiewicz wrote:
> > > There are some more issues I found around pf_map_addr. Some of them I
> > > mentioned in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229092.
> > > Some more came out while working on this least-states loadbalancing.
> > > I will group them into something meaningful and make another PR for
> > > them.
> >
> > Yeah, that bug is still on my todo list somewhere, but things are
> > extremely hectic at the moment, and I can’t make any promises about
> > when I’ll have time for it.
>
> I thought that was rather on my todo :)

... mostly because I thought of other issues found in pf_map_addr. I took
the liberty of opening another bug report just for those: 230640. I think
that one should be addressed first, because otherwise 229092 can't really
be fixed correctly.
Re: pf tables locking
On Tuesday, 14 August 2018 15:44:52 CEST Ermal Luçi wrote:
> If you really want to spend time on it, the best option is to pull out
> the pool concept used by the rules/nat... and manage it outside of the
> rules/states but in its own module referenced by the former ones.

Do you mean as a separate kernel module? Or totally outside of the kernel?

I was considering doing this outside of the kernel by providing a weighted
round-robin algorithm, but that would still require most of the same
patches as doing it within the kernel, in order to get counters working for
redirection tables and a state counter per table element, both of which are
missing in the kernel now.

> This would allow extensibility and proper reasoning about it.

It might be the late hour, but I really don't see how it would be
extensible. Please be more specific.
Re: pf tables locking
(sorry for the top post)

If you really want to spend time on it, the best option is to pull out the
pool concept used by the rules/nat... and manage it outside of the
rules/states but in its own module referenced by the former ones.

This would allow extensibility and proper reasoning about it.

On Tue, Aug 14, 2018 at 9:35 AM, Kajetan Staszkiewicz wrote:
> On Tuesday, 14 August 2018 16:15:48 CEST Kristof Provost wrote:
> > On 14 Aug 2018, at 0:32, Kajetan Staszkiewicz wrote:
> > > On Monday, 13 August 2018 17:59:15 CEST Kristof Provost wrote:
> > > How about this?
> > > https://github.com/innogames/freebsd/commit/d44a0d9487285fac8ed1d7372cc99cca83f616e6
> >
> > That looks good to me.
> > There’s a few minor issues, things like inconsistent indentation and
> > overly long lines, but that’s about the only criticism I have.
>
> I fixed some issues with unallocated counters and submitted bug 230619.
>
> > I see. I’m not quite sure yet if that’s a feature we want to import or
> > not, but at least your ‘support’ patches should probably go in. The
> > above one certainly.
>
> There are some more things which require changes before I can do
> least-connections balancing.
>
> If you have a moment, please have a look at
> https://github.com/innogames/freebsd/commits/iglb/11.2/GetOnWithIt_2 ,
> maybe some of those things can get imported anyway, like full support
> for counters of states.
>
> > > > Yeah, that bug is still on my todo list somewhere, but things are
> > > > extremely hectic at the moment, and I can’t make any promises
> > > > about when I’ll have time for it.
> > >
> > > I thought that was rather on my todo :)
> >
> > I’m not going to stop you. I love it when other people do the work ;)
>
> Since I have you here, let me explain the issues I see with
> pf_map_addr(). For the round-robin target a list of interface,table
> pairs can be specified. This list is iterated, and within each table
> addresses are iterated too. There is no locking around it, "because
> performance is assumed more important than round-robin precision"
> according to a comment in the code.
>
> Yet I believe there are far more serious issues possible with the
> current approach. The interface is in fact picked outside of
> pf_map_addr(). Another thread could have already moved rpool->counter to
> another table for which the interface is not valid anymore.
>
> I came up with this:
> https://github.com/innogames/freebsd/commit/61ffb96a4dc948a0b06204ff39210c0578f77f08
> although without locking this is still not really a solution. It only
> moves interface selection to inside of pf_map_addr().
>
> Another one is
> https://github.com/innogames/freebsd/commit/8fe6cd2d820052d2166afbaa311f34318a41db48
> which stores the table used for loadbalancing in the state and src_node.
> Then the table can be used for state counting.
>
> The 2 patches above are also included in the first link I gave above.

--
Ermal
___
freebsd-pf@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-pf
To unsubscribe, send any mail to "freebsd-pf-unsubscr...@freebsd.org"
Re: pf tables locking
On Tuesday, 14 August 2018 16:15:48 CEST Kristof Provost wrote:
> On 14 Aug 2018, at 0:32, Kajetan Staszkiewicz wrote:
> > On Monday, 13 August 2018 17:59:15 CEST Kristof Provost wrote:
> > How about this?
> > https://github.com/innogames/freebsd/commit/d44a0d9487285fac8ed1d7372cc99cca83f616e6
>
> That looks good to me.
> There’s a few minor issues, things like inconsistent indentation and
> overly long lines, but that’s about the only criticism I have.

I fixed some issues with unallocated counters and submitted bug 230619.

> I see. I’m not quite sure yet if that’s a feature we want to import or
> not, but at least your ‘support’ patches should probably go in. The
> above one certainly.

There are some more things which require changes before I can do
least-connections balancing.

If you have a moment, please have a look at
https://github.com/innogames/freebsd/commits/iglb/11.2/GetOnWithIt_2 ,
maybe some of those things can get imported anyway, like full support for
counters of states.

> > > Yeah, that bug is still on my todo list somewhere, but things are
> > > extremely hectic at the moment, and I can’t make any promises about
> > > when I’ll have time for it.
> >
> > I thought that was rather on my todo :)
>
> I’m not going to stop you. I love it when other people do the work ;)

Since I have you here, let me explain the issues I see with pf_map_addr().
For the round-robin target a list of interface,table pairs can be
specified. This list is iterated, and within each table addresses are
iterated too. There is no locking around it, "because performance is
assumed more important than round-robin precision" according to a comment
in the code.

Yet I believe there are far more serious issues possible with the current
approach. The interface is in fact picked outside of pf_map_addr(). Another
thread could have already moved rpool->counter to another table for which
the interface is not valid anymore.

I came up with this:
https://github.com/innogames/freebsd/commit/61ffb96a4dc948a0b06204ff39210c0578f77f08
although without locking this is still not really a solution. It only moves
interface selection to inside of pf_map_addr().

Another one is
https://github.com/innogames/freebsd/commit/8fe6cd2d820052d2166afbaa311f34318a41db48
which stores the table used for loadbalancing in the state and src_node.
Then the table can be used for state counting.

The 2 patches above are also included in the first link I gave above.
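The race described here can be modeled by its proposed fix: return the
address and its interface together, from a single read of the shared
round-robin cursor, so a concurrent cursor advance can never pair an
address with an interface from a different table. The structures and names
below are invented stand-ins for illustration, not pf's real ones:

```c
/*
 * Model of "move interface selection into pf_map_addr()": the caller
 * receives the (interface, address) pair as one unit and never consults
 * the shared cursor on its own. Names are invented for illustration.
 */
#include <assert.h>
#include <string.h>

#define DEMO_PAIRS 3

struct demo_pair {
	const char	*ifname;
	int		addr;
};

struct demo_rpool {
	struct demo_pair	pairs[DEMO_PAIRS];
	int			counter;	/* shared round-robin cursor */
};

/*
 * Select address and interface in one step. (In the kernel this would
 * still need the table lock around the cursor update to be race-free.)
 */
static struct demo_pair
demo_map_addr(struct demo_rpool *p)
{
	struct demo_pair out = p->pairs[p->counter];
	p->counter = (p->counter + 1) % DEMO_PAIRS;
	return (out);
}
```

The key property is that `out` is copied before the cursor moves, so the
interface always matches the address even if another call advances the
cursor immediately afterwards.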
Re: pf tables locking
On 14 Aug 2018, at 0:32, Kajetan Staszkiewicz wrote:
> On Monday, 13 August 2018 17:59:15 CEST Kristof Provost wrote:
> How about this?
> https://github.com/innogames/freebsd/commit/d44a0d9487285fac8ed1d7372cc99cca83f616e6

That looks good to me.
There’s a few minor issues, things like inconsistent indentation and overly
long lines, but that’s about the only criticism I have.

> > Do you have a bit more information about your use case? What are you
> > trying to accomplish with this change?
>
> I have a loadbalancer which uses pf and route-to targets. After a server
> is added to a pool, I want this server to immediately take over a lot of
> traffic. With round-robin the server receives new clients rather slowly.
> If the kernel could measure the number of states per table entry, I
> could send new clients to this new server until it serves as many
> clients as the other servers.

I see. I’m not quite sure yet if that’s a feature we want to import or not,
but at least your ‘support’ patches should probably go in. The above one
certainly.

> There are some more issues I found around pf_map_addr. Some of them I
> mentioned in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229092.
> Some more came out while working on this least-states loadbalancing. I
> will group them into something meaningful and make another PR for them.

Yeah, that bug is still on my todo list somewhere, but things are extremely
hectic at the moment, and I can’t make any promises about when I’ll have
time for it.

> I thought that was rather on my todo :)

I’m not going to stop you. I love it when other people do the work ;)

Regards,
Kristof
Re: pf tables locking
On Monday, 13 August 2018 17:59:15 CEST Kristof Provost wrote:
> pf keeps rules around until there are no more states left referencing the
> rule. Look at pf_commit_rules(): The old rules are unlinked rather than
> removed. They’re kept on the V_pf_unlinked rules list. Every so often pf
> runs through all states (in pf_purge_thread()) to mark their associated
> rules as still referenced. Only rules which are not referenced by any
> state are removed.
>
> This means that while there’s still a state which was created by the
> rule (and can thus put packets towards its table), the rule will exist.
> Once the state goes away it’ll still take one full iteration through all
> states before the rule can be freed. Hence my statement that it’s highly
> unlikely (pretty much impossible) for us to run into a situation where
> the rule no longer exists.

OK, now it makes sense.

> > I don’t want to re-take the rules lock for this, so my current
> > thinking is that the best approach would be to already get rid of the
> > potential memory leak by just always allocating the pfrke_counters
> > when the table is created (i.e. when the rule is first set). That
> > might waste a little memory if we didn’t need it, but it should
> > simplify things a bit.
> >
> > We can resolve the counting issue by using the counter_u64_*()
> > functions for them. We should be able to get away with not locking
> > this.

How about this?
https://github.com/innogames/freebsd/commit/d44a0d9487285fac8ed1d7372cc99cca83f616e6

> Do you have a bit more information about your use case? What are you
> trying to accomplish with this change?

I have a loadbalancer which uses pf and route-to targets. After a server is
added to a pool, I want this server to immediately take over a lot of
traffic. With round-robin the server receives new clients rather slowly. If
the kernel could measure the number of states per table entry, I could send
new clients to this new server until it serves as many clients as the other
servers.

> > There are some more issues I found around pf_map_addr. Some of them I
> > mentioned in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229092.
> > Some more came out while working on this least-states loadbalancing. I
> > will group them into something meaningful and make another PR for
> > them.
>
> Yeah, that bug is still on my todo list somewhere, but things are
> extremely hectic at the moment, and I can’t make any promises about when
> I’ll have time for it.

I thought that was rather on my todo :)

If you can agree on the patch sent in this message (I would still make a PR
and submit the patch there, just for documentation), I will re-work my
other patches and show you what I came up with. I had working code for
counting states per table entry; I only lack the modified round-robin
selection itself.
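The least-states selection being discussed is, at its core, a minimum scan
over the table entries. The entry layout below is invented for
illustration; in the real patches the count would live in pfrke_counters
and be maintained from state create/unlink:

```c
/*
 * Minimal sketch of least-states balancing: instead of advancing a
 * round-robin cursor, pick the entry currently carrying the fewest
 * states. Names and layout are invented for illustration.
 */
#include <assert.h>

struct demo_entry {
	int	addr;		/* fake server address */
	int	states;		/* states currently using this entry */
};

/* Return the address of the entry with the fewest states. */
static int
demo_least_states(const struct demo_entry *e, int n)
{
	int best = 0;
	for (int i = 1; i < n; i++)
		if (e[i].states < e[best].states)
			best = i;
	return (e[best].addr);
}
```

A freshly added server starts with zero states, so it absorbs all new
connections until it catches up with the others, which is exactly the
ramp-up behavior described in this message.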
Re: pf tables locking
On 13 Aug 2018, at 17:06, Kajetan Staszkiewicz wrote:
> On Monday, 13 August 2018 15:22:33 CEST Kristof Provost wrote:
> > rules (and associated tables) won’t just go away while there’s still
> > state,
>
> This is mostly what I wanted to ask about in this message. How is it
> ensured that the table and counters are gone only after everybody stops
> using them? What if I delete a table, then change the ruleset, but there
> is still an active connection keeping a state? I really had a hard time
> finding how this is guarded in the source.

pf keeps rules around until there are no more states left referencing the
rule. Look at pf_commit_rules(): The old rules are unlinked rather than
removed. They’re kept on the V_pf_unlinked rules list. Every so often pf
runs through all states (in pf_purge_thread()) to mark their associated
rules as still referenced. Only rules which are not referenced by any state
are removed.

This means that while there’s still a state which was created by the rule
(and can thus put packets towards its table), the rule will exist. Once the
state goes away it’ll still take one full iteration through all states
before the rule can be freed. Hence my statement that it’s highly unlikely
(pretty much impossible) for us to run into a situation where the rule no
longer exists.

> > I don’t want to re-take the rules lock for this, so my current
> > thinking is that the best approach would be to already get rid of the
> > potential memory leak by just always allocating the pfrke_counters
> > when the table is created (i.e. when the rule is first set). That
> > might waste a little memory if we didn’t need it, but it should
> > simplify things a bit.
> >
> > We can resolve the counting issue by using the counter_u64_*()
> > functions for them. We should be able to get away with not locking
> > this.
>
> Sure, I can use counter(9). The question, as always with my patches, is
> what can go to FreeBSD and what won't.
>
> My current goal is to modify the round-robin pf target to always point
> to the table entry with the least number of states.
>
> As I see it for now:
> 1. Modify pfrke_counters to be always allocated.
> 2. Rewrite pfrke_counters to use counter(9).
> 3. Provide a state counter in pfrke_counters.
> 4. Modify the round-robin target.
>
> 1. and 2. make a good PR. I'm not sure about 3. Do you want patches for
> a least-connections target too? I want to just replace the existing
> round-robin, but if there is any chance of getting it into kernel code,
> I could make it work as a new target in pf.conf.

Do you have a bit more information about your use case? What are you trying
to accomplish with this change?

> There are some more issues I found around pf_map_addr. Some of them I
> mentioned in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229092.
> Some more came out while working on this least-states loadbalancing. I
> will group them into something meaningful and make another PR for them.

Yeah, that bug is still on my todo list somewhere, but things are extremely
hectic at the moment, and I can’t make any promises about when I’ll have
time for it.

Regards,
Kristof
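The retention scheme Kristof describes (unlink old rules, mark them from
the states that still reference them, sweep the rest) can be modeled in
userland C. Everything here (demo_rule, demo_purge, the list handling) is
an invented stand-in for pf_commit_rules()/pf_purge_thread(), not the real
kernel code:

```c
/*
 * Mark-and-sweep model of pf's rule retention: unlinked rules stay on a
 * list; a purge pass walks all states, marks the rules they reference,
 * and frees only the unmarked ones. Names are invented for illustration.
 */
#include <assert.h>
#include <stdlib.h>

struct demo_rule {
	int			refd;	/* set by the purge pass */
	struct demo_rule	*next;	/* V_pf_unlinked-style list */
};

struct demo_state {
	struct demo_rule	*rule;	/* rule that created this state */
};

/* One purge pass: clear marks, mark from states, sweep. Returns freed count. */
static int
demo_purge(struct demo_rule **unlinked, struct demo_state *states, int nstates)
{
	int freed = 0;

	for (struct demo_rule *r = *unlinked; r != NULL; r = r->next)
		r->refd = 0;
	for (int i = 0; i < nstates; i++)
		if (states[i].rule != NULL)
			states[i].rule->refd = 1;

	for (struct demo_rule **rp = unlinked; *rp != NULL; ) {
		struct demo_rule *r = *rp;
		if (!r->refd) {
			*rp = r->next;	/* unlink and free unreferenced rule */
			free(r);
			freed++;
		} else
			rp = &r->next;
	}
	return (freed);
}
```

This also shows why a rule outlives its last state by one full pass: the
state must be gone before the mark phase for the sweep to collect the rule.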
Re: pf tables locking
On Monday, 13 August 2018 15:22:33 CEST Kristof Provost wrote:
> > I'm going through the code and I've found out that many table-related
> > functions are guarded by a lock on the pf ruleset. But that is not
> > true for pfr_update_stats. This function is called from pf_test only
> > after PF_RULES_RUNLOCK().
>
> I think you’re right, this does look wrong.
>
> It’s very unlikely that this will actually lead to a crash, because

I don't like the word "unlikely". With my traffic and frequent ruleset and
carp changes, I seem to catch all the fanciest locking bugs.

> rules (and associated tables) won’t just go away while there’s still
> state,

This is mostly what I wanted to ask about in this message. How is it
ensured that the table and counters are gone only after everybody stops
using them? What if I delete a table, then change the ruleset, but there is
still an active connection keeping a state? I really had a hard time
finding how this is guarded in the source.

> but we could theoretically lose memory (in the pfrke_counters
> allocation), and miscount.

Pre-allocating the counters seems a good idea; it will simplify some other
code.

> I don’t want to re-take the rules lock for this, so my current thinking
> is that the best approach would be to already get rid of the potential
> memory leak by just always allocating the pfrke_counters when the table
> is created (i.e. when the rule is first set). That might waste a little
> memory if we didn’t need it, but it should simplify things a bit.
>
> We can resolve the counting issue by using the counter_u64_*() functions
> for them. We should be able to get away with not locking this.

Sure, I can use counter(9). The question, as always with my patches, is
what can go to FreeBSD and what won't.

My current goal is to modify the round-robin pf target to always point to
the table entry with the least number of states.

As I see it for now:
1. Modify pfrke_counters to be always allocated.
2. Rewrite pfrke_counters to use counter(9).
3. Provide a state counter in pfrke_counters.
4. Modify the round-robin target.

1. and 2. make a good PR. I'm not sure about 3. Do you want patches for a
least-connections target too? I want to just replace the existing
round-robin, but if there is any chance of getting it into kernel code, I
could make it work as a new target in pf.conf.

Point 3. is the puzzle for me. For now I just call pfr_update_stats
(modified to handle the state counter) in pf_create_state and
pf_unlink_state. But again - how do I know if the table (I added a pointer
in struct pf_state) is still allocated in memory?

There are some more issues I found around pf_map_addr. Some of them I
mentioned in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229092. Some
more came out while working on this least-states loadbalancing. I will
group them into something meaningful and make another PR for them.
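Points 2 and 3 of the plan above can be sketched with C11 atomics standing
in for the kernel's counter(9). The struct and function names below are
invented; the real change would put counter_u64_t members in
pfrke_counters and bump them with counter_u64_add() from
pfr_update_stats(), pf_create_state() and pf_unlink_state():

```c
/*
 * Userland model of lock-free per-entry counters: packets/bytes updated
 * per packet, plus a new state counter bumped on state create and
 * decremented on state unlink. No rules lock is needed for any update.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

struct demo_kcounters {
	atomic_uint_least64_t	packets;
	atomic_uint_least64_t	bytes;
	atomic_uint_least64_t	states;	/* states using this table entry */
};

/* Per-packet update, as pfr_update_stats() would do per entry. */
static void
demo_update_stats(struct demo_kcounters *c, uint64_t len)
{
	atomic_fetch_add(&c->packets, 1);
	atomic_fetch_add(&c->bytes, len);
}

/* +1 on state create, -1 on state unlink (unsigned wraparound is safe). */
static void
demo_state_change(struct demo_kcounters *c, int delta)
{
	atomic_fetch_add(&c->states, (uint64_t)(int64_t)delta);
}
```

counter(9) goes further than plain atomics (per-CPU counters avoid
cache-line contention entirely), but the programming model is the same:
increments need no lock, only the read side aggregates.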
Re: pf tables locking
On 13 Aug 2018, at 0:09, Kajetan Staszkiewicz wrote:
> Hello group,
>
> Can anybody help me with pf_table.c and all operations on tables,
> especially pfr_update_stats? I'm working on implementing stats for
> redirection targets, that is for nat or route-to.
>
> I'm going through the code and I've found out that many table-related
> functions are guarded by a lock on the pf ruleset. But that is not true
> for pfr_update_stats. This function is called from pf_test only after
> PF_RULES_RUNLOCK().

I think you’re right, this does look wrong.

It’s very unlikely that this will actually lead to a crash, because rules
(and associated tables) won’t just go away while there’s still state, but
we could theoretically lose memory (in the pfrke_counters allocation), and
miscount.

I don’t want to re-take the rules lock for this, so my current thinking is
that the best approach would be to already get rid of the potential memory
leak by just always allocating the pfrke_counters when the table is created
(i.e. when the rule is first set). That might waste a little memory if we
didn’t need it, but it should simplify things a bit.

We can resolve the counting issue by using the counter_u64_*() functions
for them. We should be able to get away with not locking this.

Regards,
Kristof
pf tables locking
Hello group,

Can anybody help me with pf_table.c and all operations on tables,
especially pfr_update_stats? I'm working on implementing stats for
redirection targets, that is for nat or route-to.

I'm going through the code and I've found out that many table-related
functions are guarded by a lock on the pf ruleset. But that is not true for
pfr_update_stats. This function is called from pf_test only after
PF_RULES_RUNLOCK().