@shripchenko, it is OK with me, please make a pull request against master.
And please do not forget to update the docs too ;)
Thanks and regards,
Bogdan
---
Reply to this email directly or view it on GitHub:
@shripchenko, I uploaded your first patch, the one with the fixes on mallocs -
thank you.
On the second patch - the loads are evaluated (and the destination selected)
under the lock of the resources. So the first call will get the first
destination; the next call will take the second.
@bogdan-iancu, Yes. exactly!
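To make the point about evaluation under the lock concrete, here is a minimal sketch (names and data layout are illustrative, not the module's actual code) of picking the least-loaded destination while holding a single lock, so concurrent calls cannot both grab the same destination:

```c
#include <pthread.h>

/* Hypothetical simplification of the selection step described above:
 * loads are read and the destination is picked while holding one lock,
 * so two concurrent calls cannot pick the same least-loaded destination. */
static pthread_mutex_t lb_lock = PTHREAD_MUTEX_INITIALIZER;

static int pick_dest(int *load, int n)
{
    int i, best = 0;

    pthread_mutex_lock(&lb_lock);
    for (i = 1; i < n; i++)
        if (load[i] < load[best])
            best = i;
    load[best]++;          /* account for the call we are about to place */
    pthread_mutex_unlock(&lb_lock);
    return best;
}
```

With equal starting loads this naturally yields the behavior described: the first call takes the first destination, the next call takes the second, and so on.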
---
Global-level round robin? It would need a write lock at the LB data level to
store the last used index (or maybe an atomic inc).
Per-call round robin? That means every new call would start from the 1st
destination, which is actually how things are implemented now.
Or am I missing something?
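The "write lock at LB data level" option above can be sketched as follows - a single shared last-used index, advanced under a lock on every pick (all names here are illustrative, not the module's actual ones):

```c
#include <pthread.h>

/* Global round-robin with a shared last-used index protected by a lock.
 * Every pick takes the lock briefly to read and advance the index. */
static pthread_mutex_t rr_lock = PTHREAD_MUTEX_INITIALIZER;
static int rr_last = -1;        /* -1 = no destination used yet */

static int rr_next(int n_dests)
{
    int idx;

    pthread_mutex_lock(&rr_lock);
    rr_last = (rr_last + 1) % n_dests;
    idx = rr_last;
    pthread_mutex_unlock(&rr_lock);
    return idx;
}
```

The critical section is tiny (one increment and one modulo), which is the main argument for this variant over per-call state.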
---
What about doing Round Robin? The implementation would be simpler (you do not
need that array) - just remember the last selected destination. Also, in this
case we do not even need the flag - it can be the default behavior - when we
evaluate the destinations (as load) we start all the time
Global level, of course :) - but this may require locking indeed. The atomic
funcs will not help here, as we do not have an atomic get-and-inc.
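For reference, the locking concern only holds for atomic-op libraries without a combined read-and-increment; C11's `atomic_fetch_add` (and the equivalent compiler builtins) does return the value before the increment, i.e. exactly a "get + inc", which would allow a lock-free global round-robin counter. A minimal sketch, not OpenSIPS code:

```c
#include <stdatomic.h>

/* Illustrative only: C11 atomic_fetch_add returns the value *before*
 * the increment, i.e. a combined "get and inc", which makes a
 * lock-free global round-robin counter possible. */
static atomic_uint rr_counter;   /* zero-initialized at program start */

static unsigned rr_pick(unsigned n_dests)
{
    /* take the previous counter value and wrap it onto the set */
    return atomic_fetch_add(&rr_counter, 1) % n_dests;
}
```

Whether this is usable in practice depends on the atomic primitives the project's own portability layer exposes.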
---
I believe that one (or a few) mallocs is better than continuous locking.
In theory we could also soften it (add a rounding) by some (configurable)
_factor_ in the `it_l = load` equation, to slightly move from load-based
distribution to (pseudo-)equal-rate distribution (with `rand()`).
So what do you think?
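One way the "softening factor" idea could look (the names and the exact equation here are a guess for illustration, not a concrete proposal): the load used for comparison is the real load plus a random term bounded by a configurable factor. With factor 0 selection stays purely load-based; a large factor drifts toward (pseudo-)equal-rate random selection:

```c
#include <stdlib.h>

/* Sketch of the softening idea: compare destinations on an "effective"
 * load that mixes the real load with a bounded random term. */
static int effective_load(int load, int factor)
{
    if (factor <= 0)
        return load;                    /* pure load-based selection */
    return load + rand() % (factor + 1);
}

static int pick_softened(const int *load, int n, int factor)
{
    int i, best = 0, best_l = effective_load(load[0], factor);

    for (i = 1; i < n; i++) {
        int l = effective_load(load[i], factor);
        if (l < best_l) {
            best = i;
            best_l = l;
        }
    }
    return best;
}
```

The appeal is that it needs no extra locking beyond what the load evaluation already takes; the factor just perturbs the comparison.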
Ok, I got it. I was just trying to avoid allocs at any cost.
There are also some problems I found - please look at my commit:
https://github.com/shripchenko/opensips/commit/26de7268598c54c1581befe21204d592b5ce2f09
And one more commit to
Yes, I understood your logic in the original code, but you spare a couple of
mem allocs (which, btw, are pretty fast) at the cost of searching and matching
the resources - imho the effort of all those memcmps is comparable with the
effort of 3-4 extra allocs (usually the list of used
@bogdan-iancu, I did a small review.
I have doubts about this one block of code in the `lb_route()` function:
```c
/* save state - res */
/* iterate AVPs once and delete old resources */
destroy_avps(0, res_avp_name, 1 /*all*/);
/* add new resources */
for(
```
Merged #345.
---
Hi @shripchenko, I uploaded your patch - thank you very much. I made some
changes and I would be thankful if you could review and retest.
Also, please open a new pull request with the updates on the documentation
files :)
Regards,
Bogdan
---
@bogdan-iancu, pinging you again ^)
---
@bogdan-iancu, yes it does.
You can review and merge.
---
Hi @shripchenko - my apologies for the delay - I read your suggestions and I
do agree - it is a better approach to the matter. Thank you for the idea.
On the code, does the current push reflect those suggestions?
---
@bogdan-iancu, pinging you again ^)
Please review what I've done when you have time.
---
@bogdan-iancu, here it is, I'm back with the proposed commits and changes:
1. get rid of load_balance() in favor of the lb_*() family of functions, to
make the work-flow clearer and better defined:
lb_start(grp, rl, flgs) - strictly used to start an LB session. If a session
has already started, the old session
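As a toy model of the session-oriented flow being proposed, something along these lines (the struct, names and semantics are guesses for illustration, not the module's actual API - only lb_start and lb_reset are mentioned in the thread, lb_next is assumed):

```c
#include <string.h>

/* Toy model of a session-oriented lb_*() family: start creates or
 * replaces the session, next advances through failover destinations,
 * reset drops the session entirely. */
struct lb_session {
    int grp;        /* destination group */
    int n_dests;    /* destinations in the group */
    int last;       /* last destination tried, -1 = none yet */
    int active;
};

static void lb_start(struct lb_session *s, int grp, int n_dests)
{
    /* starting over an already-active session simply replaces it */
    s->grp = grp;
    s->n_dests = n_dests;
    s->last = -1;
    s->active = 1;
}

static int lb_next(struct lb_session *s)
{
    if (!s->active || s->last + 1 >= s->n_dests)
        return -1;              /* no (more) destinations to fail over to */
    return ++s->last;
}

static void lb_reset(struct lb_session *s)
{
    memset(s, 0, sizeof *s);    /* session gone; a new lb_start is needed */
}
```

The point of the split is that starting, continuing and resetting become three explicit, independent operations instead of being folded into one load_balance() call.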
Hi @shripchenko, sorry for the delay.
For 1) - right now we have only one function, load_balance(), that is used to
start and continue the LB process. We want to have in the future the
possibility to start, continue and re-start the LB process. To align it with
the rest of the modules (drouting,
@bogdan-iancu, pinging you ^)
Sorry to interrupt, but there are just a few questions left, and then I can
start updating my PR.
---
I started reviewing the patches, and:
- the first patch - a7549e00bdd3fd13fc20777bccf935d41303980e - I do not agree
with the approach - once you have started an LB session, you should not change
the set of resources. If you need to do that, first reset the existing LB
session and then create a new
@bogdan-iancu, thanks for the review!
About pt. 1:
In the original approach we cannot 'reset' an LB session by just calling
lb_reset(), since we don't know the resources set used the previous time.
Also, the original approach 'provokes' users into changing the resources set,
since despite the fact that we don't want them to do
@bogdan-iancu, is there anything I could do to get this PR accepted?
Or do you have any doubts about this patch that have to be sorted out before
merging?
I believe it makes the load_balancer module much more versatile and handy for
users.
---
@shripchenko, the only problem here is my time - the idea is really good - I
will try to review the code by next week and accept the PR.
Thanks, Bogdan
---
@shripchenko, basically what you want is to have, in the failure route, the
possibility to start a new LB session (new ID, new resources) instead of
continuing (failing over) the original LB session.
Is that correct?
---
Yes, it is part of my scenario.
I have a flow of calls to many different DIDs; normally they are dispatched by
a number of primary PBX servers (with the possibility to dispatch each call to
a different subset of servers).
So at first I try to LB calls to this primary location.
I also have to cut
Apparently, in its current implementation the load_balancer module can be used
in only one way:
load balance on a static destinations group and a static resources list.
First - because do_load_balance() ignores the new group.
Second - because do_load_balance() cleans up dialog profiles according to