Hi Willy,
My systems were out of rotation for some other tests, so I did not get
to this until now. I have pulled the latest bits just now and tested.
Regarding maxconn, I simply kept maxconn in global/defaults at 1 million
and have this line in the backend section:
default-server maxconn 100
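For reference, that combination might be sketched as below. This is only an illustration of the setup described in the mail; the backend name, server names, and addresses are invented placeholders, not taken from the thread:

```
global
    maxconn 1000000        # process-wide connection ceiling

defaults
    mode http
    maxconn 1000000        # per-proxy default ceiling

backend bk_test
    # cap each server at 100 concurrent connections;
    # excess requests queue up in the backend
    default-server maxconn 100
    server srv1 192.0.2.10:8080
    server srv2 192.0.2.11:8080
```

With this shape, the global/defaults limits stay far above the per-server cap, so the servers themselves become the queueing point.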
Hi again,
finally I got rid of the FD lock for single-threaded accesses (most of
them) and, based on Olivier's suggestion, I implemented a per-thread
wait queue and cache-aligned some list heads to avoid undesired
cache-line sharing. For me, all of this combined resulted in a
performance increase
Hi Krishna,
On Thu, Oct 11, 2018 at 12:04:31PM +0530, Krishna Kumar (Engineering) wrote:
> (...)
I must say the improvements are pretty impressive!
Earlier number reported with 24 processes: 519K
Earlier number reported with 24 threads: 79K
New RPS with system irq tuning, today's git,
configuration changes, 24 threads: 353K
Old code with same
Thanks, will do that.
On Thu, Oct 11, 2018 at 8:37 AM Willy Tarreau wrote:
> (...)
On Thu, Oct 11, 2018 at 08:18:21AM +0530, Krishna Kumar (Engineering) wrote:
> (...)
Hi Willy,
Thank you very much for the in-depth analysis and configuration setting
suggestions.
I believe I have the 3 key items to continue with, based on your mail:
1. Thread pinning
2. Fix system irq pinning accordingly
3. Listen on all threads.
I will post the configuration changes and the resu
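A sketch of what items 1 and 3 could look like in the configuration, assuming 48 threads and the `cpu-map auto` syntax available in recent 1.9-dev builds; the frontend/backend names and bind address are placeholders, not taken from the thread:

```
global
    nbthread 48
    # item 1: pin thread N of process 1 to CPU N-1
    cpu-map auto:1/1-48 0-47

frontend fe_test
    # item 3: a single bind line is handled by all threads by default
    bind :80
    default_backend bk_test
```

Item 2 happens outside haproxy, typically by writing the matching CPU list into /proc/irq/&lt;irq&gt;/smp_affinity_list for each NIC queue so that interrupt and thread placement line up.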
Hi Krishna,
On Tue, Oct 02, 2018 at 09:18:19PM +0530, Krishna Kumar (Engineering) wrote:
(...)
> 1. HAProxy system:
> Kernel: 4.17.13,
> CPU: 48 core E5-2670 v3
> Memory: 128GB memory
> NIC: Mellanox 40g with IRQ pinning
>
> 2. Client, 48 core similar to server. Test command line:
> wrk -c 4800 -
Hi Krishna,
On Fri, Oct 05, 2018 at 02:25:13PM +0530, Krishna Kumar (Engineering) wrote:
> (...)
Sorry, but I didn't even have the time to read your mail over the last
two days
Sorry for repeating once again, but this is my last unsolicited
mail on this topic. Any directions for what to look out for?
Thanks,
- Krishna
On Thu, Oct 4, 2018 at 8:42 AM Krishna Kumar (Engineering) <
krishna...@flipkart.com> wrote:
> (...)
Thanks, will take a look!
On Thu, Oct 4, 2018 at 12:58 PM Илья Шипицин wrote:
> (...)
What I am going to try (when I have some spare time) is sampling with
google perftools:
https://github.com/gperftools/gperftools
They are great at CPU profiling.
You can try them yourself if you have time/wish :)
Thu, 4 Oct 2018 at 11:53, Krishna Kumar (Engineering) <
krishna...@flipkart.com>
1. haproxy config: Same as given above (both processes and threads were
given in the mail)
2. nginx: default, no changes.
3. sysctl's: nothing set. All changes as described earlier (e.g.
irqbalance, irq pinning, etc).
4. nf_conntrack: disabled
5. dmesg: no messages.
With the same system and settin
haproxy config, nginx config
non-default sysctls (if any)
As a side note, can you have a look at the "dmesg" output? Do you have
nf_conntrack enabled? What are its limits?
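These checks can be scripted. The sketch below assumes a Linux host where the net.netfilter sysctl keys exist only when the conntrack module is loaded (as it is not on Krishna's setup), so it falls back gracefully:

```shell
# Report nf_conntrack status and limits, with a fallback when the
# module is not loaded.
for key in net.netfilter.nf_conntrack_max net.netfilter.nf_conntrack_count; do
    val=$(sysctl -n "$key" 2>/dev/null) && echo "$key = $val" \
        || echo "$key not available (conntrack likely not loaded)"
done
# Conntrack table overflows usually show up in the kernel log.
dmesg 2>/dev/null | grep -i conntrack || echo "no conntrack messages in dmesg"
```

If the table is enabled and near nf_conntrack_max, connection drops under load are likely even before haproxy sees the traffic.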
Thu, 4 Oct 2018 at 9:59, Krishna Kumar (Engineering) <
krishna...@flipkart.com>:
> (...)
Sure.
1. Client: Use one of the following two setups:
- a single baremetal (48 core, 40g) system
Run: "wrk -c 4800 -t 48 -d 30s http://:80/128", or,
- 100 2 core vm's.
Run "wrk -c 16 -t 2 -d 30s http://:80/128" from
each VM and summarize the results u
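The summarization step for the 100-VM case might look like this sketch. It assumes each VM's wrk output has been collected into a results/ directory (the filenames and the two sample excerpts below are invented for illustration); the per-VM Requests/sec lines are then summed:

```shell
# Sketch: sum Requests/sec across per-VM wrk result files.
# Assumes each VM's "wrk -c 16 -t 2 -d 30s" output was saved as
# results/vm-*.txt; the two excerpts below stand in for real output.
mkdir -p results
printf 'Requests/sec: 5200.10\n' > results/vm-01.txt
printf 'Requests/sec: 4800.90\n' > results/vm-02.txt
# Sum the per-VM rates into a single aggregate RPS figure.
awk '/^Requests\/sec/ { total += $2 } END { printf "Aggregate RPS: %.2f\n", total }' results/vm-*.txt
```

With the sample numbers above this prints "Aggregate RPS: 10001.00"; with real wrk output it yields the combined RPS across all VMs.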
Load testing is somewhat good.
Can you describe the overall setup? (I want to reproduce and play with it.)
Thu, 4 Oct 2018 at 8:16, Krishna Kumar (Engineering) <
krishna...@flipkart.com>:
> (...)
Re-sending in case this mail was missed. To summarise the 3 issues seen:
1. Performance drops 18x with a higher number of nbthreads as compared
to nbprocs.
2. CPU utilisation remains at 100% after wrk finishes its 30-second run
(for 1.9-dev3, with both nbprocs and nbthreads).
3. Sockets on client remain i
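To quantify issue 2, one might poll the proxy's CPU usage for a few seconds after the load generator exits. This is only a sketch; the process name "haproxy" is an assumption to adjust to the actual binary name:

```shell
# Poll CPU usage of any haproxy processes a few times after wrk exits.
# If utilisation stays near 100% with no traffic, issue 2 is reproduced.
for i in 1 2 3; do
    ps -C haproxy -o pid=,%cpu=,comm= 2>/dev/null || echo "no haproxy process found"
    sleep 1
done
```

Correlating this with a quick `top` or `perf top` during the spin would show which functions are burning the CPU.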
Hi Willy, and community developers,
I am not sure if I am doing something wrong, but wanted to report
some issues that I am seeing. Please let me know if this is a problem.
1. HAProxy system:
Kernel: 4.17.13,
CPU: 48 core E5-2670 v3
Memory: 128GB memory
NIC: Mellanox 40g with IRQ pinning
2. Clie