Can someone please guide me if any standard scripts are available for doing such read/write performance tests? Or point me to any available docs?
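I am not aware of one canonical script, but pgbench itself accepts custom transaction scripts via -f, which is the usual starting point for this kind of test. A minimal sketch follows; the table name test_tab and its columns are invented here purely for illustration:

```sql
-- Save as rw_test.sql and run with something like:
--   pgbench -f rw_test.sql -c 10 -T 60 mydb
-- pgbench reports the achieved TPS at the end of the run.
\set id random(1, 100000000)
BEGIN;
SELECT col1, col2 FROM test_tab WHERE id = :id;
UPDATE test_tab SET col2 = col2 + 1 WHERE id = :id;
END;
```

The \set meta-command draws a uniformly random id per transaction; pgbench also offers random_exponential() and random_zipfian() if you want skewed access patterns rather than uniform ones.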
On Wed, 20 Dec, 2023, 10:39 am veem v, <veema0...@gmail.com> wrote:
> Thank you.
>
> That would really be helpful if such test scripts or similar setups are
> already available. Can someone please guide me to some docs, blogs, or
> sample scripts on the same?
>
> On Wed, 20 Dec, 2023, 10:34 am Lok P, <loknath...@gmail.com> wrote:
>
>> As Rob mentioned, the syntax you posted is not correct. You need to
>> process or read a certain batch of rows, like 1000 or 10k, not all 100M
>> in one shot.
>>
>> But your use case seems a common one, considering you want to compare
>> read and write performance on multiple databases with a similar table
>> structure. In that case, you may want to use test scripts that others
>> have already written rather than reinventing the wheel.
>>
>> On Wed, 20 Dec, 2023, 10:19 am veem v, <veema0...@gmail.com> wrote:
>>
>>> Thank you.
>>>
>>> Yes, we are trying to compare and see what maximum TPS we are able to
>>> reach with both the row-by-row and the batch read/write tests. That
>>> figure may then be compared with other databases with similar setups.
>>>
>>> So I wanted to understand from the experts here whether this approach
>>> is fine, or whether some other approach is advisable.
>>>
>>> I agree that the network will play a role in a real-world app, but
>>> here we mainly want to see the database's capability, as the network
>>> will play a similar role across all databases. Do you suggest some
>>> other approach to achieve this objective?
>>>
>>> On Wed, 20 Dec, 2023, 2:42 am Peter J. Holzer, <hjp-pg...@hjp.at> wrote:
>>>
>>>> On 2023-12-20 00:44:48 +0530, veem v wrote:
>>>> > So at first, we need to populate the base tables with the necessary
>>>> > data (say 100 million rows) with the required skewness, using
>>>> > random functions to generate variation in the values of different
>>>> > data types.
>>>> > Then, for the row-by-row write/read test, we can traverse in a
>>>> > cursor loop, and for the batch write/insert test, we can traverse
>>>> > in a bulk collect loop. Something like the below; this code can
>>>> > then be wrapped into a procedure, passed to pgbench, and executed
>>>> > from there. Please correct me if I'm wrong.
>>>>
>>>> One important point to consider for benchmarks is that your benchmark
>>>> has to be similar to the real application to be useful. If your real
>>>> application runs on a different node and connects to the database
>>>> over the network, a benchmark running within a stored procedure may
>>>> not be very indicative of real performance.
>>>>
>>>>         hp
>>>>
>>>> --
>>>>    _  | Peter J. Holzer    | Story must make more sense than reality.
>>>> |_|_) |                    |
>>>> | |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
>>>> __/   | http://www.hjp.at/ |       challenge!"
>>>>
>>>
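To make the quoted plan concrete, here is one possible sketch in plain SQL / PL/pgSQL. All table and column names are invented for illustration, and note that PostgreSQL has no BULK COLLECT (that is Oracle PL/SQL); the closest batch equivalent here is a single set-based statement:

```sql
-- 1. Populate a base table with skewed random data.
--    Multiplying two random() calls skews values toward 0; scale the
--    row count down from 100M for a quick trial run.
CREATE TABLE source_tab AS
SELECT g AS id,
       (random() * random() * 1000)::int AS skewed_val,
       md5(g::text) AS payload
FROM generate_series(1, 1000000) AS g;

CREATE TABLE target_tab (LIKE source_tab);

-- 2. Row-by-row write test: a PL/pgSQL loop issuing one INSERT per row.
DO $$
DECLARE
    rec source_tab%ROWTYPE;
BEGIN
    FOR rec IN SELECT * FROM source_tab LOOP
        INSERT INTO target_tab VALUES (rec.id, rec.skewed_val, rec.payload);
    END LOOP;
END $$;

-- 3. Batch write test: one set-based INSERT ... SELECT, the usual
--    PostgreSQL substitute for a bulk-collect-style loop.
TRUNCATE target_tab;
INSERT INTO target_tab SELECT * FROM source_tab;
```

Timing each step with \timing in psql (or wrapping each variant in a function driven by pgbench) gives the row-by-row vs. batch comparison, subject to Peter's caveat above that in-server loops exclude network round-trip costs that a real application would pay.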