Hello Tatsuo,

The lag is reasonable, although not too good. One transaction is
about 1.2 ms, the lag is much smaller than that, and you are at about
50% of the maximum load. I've got similar figures on my box for such
settings. It improves if you reduce the number of clients.

No, 5000 TPS = 1/5000 = 0.2 ms per transaction, no?

Hmmm... Yes, and no:-)

Transactions are handled in parallel because there are 10 clients. I look at actual transaction times (latency) from a client's perspective, not the "apparent" time due to parallelism, and compare that to the measured lag, which is also measured per client.

The transaction time I reported is derived from your maximum tps per client: 10 clients / 8300 tps = ~1.2 ms per transaction. However, there are 10 transactions in progress in parallel.
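To make the arithmetic explicit, here is a back-of-the-envelope check in Python, using the figures above (8300 tps maximum, 10 clients, throttled to 5000 tps):

    # rough figures from this thread, not a measurement
    clients = 10
    max_tps = 8300.0      # unthrottled maximum with 10 clients
    target_tps = 5000.0   # throttled rate, about 50% of the maximum

    # "apparent" time per transaction, all clients taken together
    apparent_ms = 1000.0 / target_tps            # ~0.2 ms
    # actual per-client transaction time at full speed: each client
    # completes only max_tps / clients transactions per second
    per_client_ms = 1000.0 * clients / max_tps   # ~1.2 ms

    print("apparent: %.2f ms, per client: %.2f ms" % (apparent_ms, per_client_ms))

Both numbers are right, they just do not measure the same thing.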

When you're running at 50% load, the clients basically spend 1.2 ms doing a transaction (sending queries, getting results) and 1.2 ms sleeping because of rate limiting. The reported 0.3 ms lag means that a client scheduled to sleep 1.2 ms tends to start a little later, after about 1.5 ms, but this latency does not show up in the throughput figures because the next sleep is simply shortened to catch up.
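To illustrate, here is a toy model of one throttled client (a sketch only, not pgbench's actual scheduling code; the 2 ms interval, 1.2 ms transaction time and 0-0.6 ms start jitter are made-up values chosen to mimic your figures):

    import random

    interval_ms = 2.0   # one transaction scheduled every 2 ms, i.e. 500 tps/client
    txn_ms = 1.2        # time actually spent performing the transaction
    n = 100000

    now = 0.0
    total_lag = 0.0
    for i in range(n):
        scheduled = i * interval_ms                  # the schedule never slips
        start = max(now, scheduled) + random.uniform(0.0, 0.6)  # late start = lag
        total_lag += start - scheduled
        now = start + txn_ms                         # run, then sleep until the
                                                     # next scheduled start

    print("average lag: %.3f ms" % (total_lag / n))
    print("achieved rate: %.0f tps per client" % (1000.0 * n / now))

The average lag comes out around 0.3 ms, yet the achieved rate stays at the 500 tps per client target, because each late start only shortens the following sleep.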

As you have 10 clients in one pgbench thread, the schedule says to start a new transaction for a client at a certain time, but the pgbench process is late in actually handling this client's query because it is doing other things, such as attending to one of the other clients, or being switched out so that a server process can run.

However pgbench says the average lag is 0.304 ms. So the lag is longer than the transaction itself.

See above.

I would be surprised if the cost of computing the measure were the
issue, compared to network connections and the like. With -S the
bench is cpu bound. Possibly a better scheduler/thread management on
OSX? Or more available cores?

The number of cores is the same.  I don't understand why the number of cores
is related, though.

In my mind, because "pgbench -S" is cpu bound, and with "-c 10" you have to run pgbench and 10 "postgres" backends, that is 11 processes competing for cpu time. If you have 11 cores that is mostly fine; if you have fewer, there will be some delay depending on how the OS process scheduler allocates cpu time. With a load of 50%, about 6 cores should be okay to run the load transparently (client & server).
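Roughly, under the assumption that each process is busy in proportion to the load (an estimate, not a measurement):

    processes = 1 + 10    # one pgbench process plus 10 postgres backends
    load = 0.5            # throttled to ~50% of the maximum throughput
    print("cores needed: ~%.1f" % (processes * load))   # ~5.5, so about 6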

Well, I do not know! At high load with clients running on the same box as the server, and with more client & server processes than available cores, there is a lot of competition between processes, and between clients that share a single thread, and a lot of context switching, which will result in a measured lag.

Hmm... I would like to have cleaner explanation/evidence before
committing the patch.

The lag measures you report seem pretty consistent to me given the load you're requesting, for a cpu-bound bench, with more processes to run than available cores. At least, I'm buying my own explanation, and I hope to be convincing.

If you want to isolate yourself from such effects, pgbench must run on a different host from the server, with as many threads as there are cores available, and not too many clients per thread. This is also true without throttling, but it shows more under throttling because of the lag (latency) measures.
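For instance, something along these lines, assuming 8 cores on the client host (host name, database name, duration and rate are placeholders, and the rate option is the one added by the patch):

    pgbench -h db-server -S -j 8 -c 8 -T 60 --rate=5000 bench

This gives one client per thread, and running on a separate host keeps pgbench from competing with the backends for cpu time.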

--
Fabien.
