After a few rounds of stress testing I got this output on one fastrouter; this
time it did not lock itself into an infinite loop, it's just that the stats
output has repeated keys:
{ "version": "1.0-dev-1734",
"uid": 0,
"gid": 0,
"cwd": "/",
"fastrouter": ["127.0.0.1:2500"],
"subscriptions": [
{ "key": "app1.domain.com",
"hits": 3128,
"nodes": [
{"name": "172.16.200.56:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322582298, "requests": 1564, "tx": 10414588, "ref": 0, "death_mark": 0},
{"name": "172.16.200.55:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322582298, "requests": 1564, "tx": 10440952, "ref": 0, "death_mark": 0}
]
},
{ "key": "pingapp.local",
"hits": 2514,
"nodes": [
{"name": "172.16.200.55:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322582290, "requests": 15, "tx": 510, "ref": 0, "death_mark": 0},
{"name": "172.16.200.56:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322582291, "requests": 13, "tx": 442, "ref": 0, "death_mark": 0}
]
},
{ "key": "app2.domain.com",
"hits": 0,
"nodes": [
{"name": "172.16.200.56:3002", "modifier1": 0, "modifier2": 0, "last_check": 1322582291, "requests": 0, "tx": 0, "ref": 0, "death_mark": 0}
]
},
{ "key": "pingapp.local",
"hits": 6,
"nodes": [
{"name": "172.16.200.55:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322582260, "requests": 4, "tx": 136, "ref": 0, "death_mark": 0},
{"name": "172.16.200.56:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322582261, "requests": 2, "tx": 68, "ref": 0, "death_mark": 0}
]
},
{ "key": "app2.domain.com",
"hits": 0,
"nodes": [
{"name": "172.16.200.56:3002", "modifier1": 0, "modifier2": 0, "last_check": 1322582261, "requests": 0, "tx": 0, "ref": 0, "death_mark": 0}
]
}
],
"cheap": 0
}
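For what it's worth, this is the quick check I'd use to spot the problem: parse the stats JSON and count how often each subscription key appears. A minimal sketch, assuming the stats payload has the shape shown above (here the "subscriptions" list is abbreviated to just the keys; in practice you would feed it the full output fetched with curl):

```python
import json
from collections import Counter

# Abbreviated stats payload with the same duplicated keys as the
# output above; only the fields needed for the check are kept.
stats_json = """
{ "version": "1.0-dev-1734",
  "subscriptions": [
    {"key": "app1.domain.com", "hits": 3128},
    {"key": "pingapp.local",   "hits": 2514},
    {"key": "app2.domain.com", "hits": 0},
    {"key": "pingapp.local",   "hits": 6},
    {"key": "app2.domain.com", "hits": 0}
  ]
}
"""

stats = json.loads(stats_json)
counts = Counter(s["key"] for s in stats["subscriptions"])
# Keys listed more than once should never appear in a healthy dump.
dupes = {k: n for k, n in counts.items() if n > 1}
print(dupes)  # prints {'pingapp.local': 2, 'app2.domain.com': 2}
```

With this output, "pingapp.local" and "app2.domain.com" each show up twice, which matches what I see above.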
2011/11/29 Roberto De Ioris <[email protected]>
>
> > Dnia wtorek, 29 listopada 2011 11:47:35 Roberto De Ioris pisze:
> >> > Hi,
> >> >
> >> > I'm doing some stress test of uWSGI with 3 servers:
> >> >
> >> > nginx -> uwsgi fastrouter -> 2x uwsgi backend
> >> > (nginx and fastrouter are on same server)
> >> >
> >> > and one backend gets 2x more requests. I've tested it with apache
> >> > benchmark
> >> > using:
> >> >
> >> > ab -n 10000 -c 1 -H "Host: app.domain.com"
> >> > "http://$nginx_address/$some_url"
> >>
> >> Ok, just committed the fix for round robin and the support for gathering
> >> statistics from the fastrouter.
> >>
> >> Some info for the more "obscure" fields:
> >>
> >> ref: the number of currently running requests on this node
> >>
> >> death_mark: if 1, this node is dead but there are still requests hanging
> >> on it (it will be completely deleted after the last request times out)
> >>
> >> last_check: the last timestamp at which the node was announced
> >
> > It works well and round robin now works: each node got 500 requests. But
> > during the first test I managed to put the fastrouter into an infinite
> > loop. Unfortunately I can't reproduce it, so I'll just describe how I got
> > this bug:
> >
> > 1. I've upgraded uWSGI, added fastrouter-stats and restarted both the
> > workers and the fastrouter
> > 2. I've started ab as before
> > 3. immediately I've run curl "fastrouter_ip:fastrouter-stats_port" and
> > I've got:
> >
> > { "version": "1.0-dev-1732",
> > "uid": 0,
> > "gid": 0,
> > "cwd": "/",
> > "fastrouter": ["=0","=0"],
> > "subscriptions": [
> > { "key": "app.domain.com",
> > "hits": 13,
> > "nodes": [
> > {"name": "172.16.200.55:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 7, "tx": 36762, "ref": 1},
> > {"name": "172.16.200.56:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 6, "tx": 36762, "ref": 0}
> > ]
> > },
> > { "key": "pingapp.local",
> > "hits": 2,
> > "nodes": [
> > {"name": "172.16.200.55:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322564634, "requests": 1, "tx": 34, "ref": 0},
> > {"name": "172.16.200.56:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322564633, "requests": 1, "tx": 34, "ref": 0}
> > ]
> > },
> > { "key": "app.domain.com",
> > "hits": 13,
> > "nodes": [
> > {"name": "172.16.200.55:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 7, "tx": 36762, "ref": 1},
> > {"name": "172.16.200.56:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 6, "tx": 36762, "ref": 0}
> > ]
> > },
> > { "key": "pingapp.local",
> > "hits": 2,
> > "nodes": [
> > {"name": "172.16.200.55:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322564634, "requests": 1, "tx": 34, "ref": 0},
> > {"name": "172.16.200.56:3000", "modifier1": 0, "modifier2": 0, "last_check": 1322564633, "requests": 1, "tx": 34, "ref": 0}
> > ]
> > },
> > { "key": "app.domain.com",
> > "hits": 13,
> > "nodes": [
> > {"name": "172.16.200.55:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 7, "tx": 36762, "ref": 1},
> > {"name": "172.16.200.56:3001", "modifier1": 0, "modifier2": 0, "last_check": 1322564749, "requests": 6, "tx": 36762, "ref": 0}
> > ]
> > },
> > [it goes a looooong way down repeating all apps all the time]
> >
> > The uWSGI fastrouter instance was eating CPU the whole time. Maybe there
> > is a race condition in the fastrouter-stats code and I've hit the time
> > window where it occurs.
> >
> >
>
> It is incredible how a feature added for one purpose (monitoring) became so
> kick-ass for another (debugging).
>
> Thanks, I have enough material to debug the problem. I will let you know
> soon.
>
>
> --
> Roberto De Ioris
> http://unbit.it
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>
--
--------------------------------
Łukasz Mierzwa