Hi Joe and Muhammad,

I've finished the benchmarking experiments after incorporating Muhammad's
two suggestions for improving the process: 1. keep the original code and the
optimized code in two separate repos, and 2. place create_graph at the
beginning and drop_graph at the end of each SQL file.
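
Just to make suggestion 2 concrete, below is a small Python sketch of how
each benchmark .sql file is laid out now. It's only an illustration: I'm
assuming the ag_catalog.create_graph/drop_graph signatures here, and the
graph and file names are placeholders, not the actual repo contents.

    def build_sql_file(queries, graph_name="bench_graph"):
        # create_graph at the top, drop_graph at the bottom, the benchmark
        # queries in between. The schema prefix and the cascade flag on
        # drop_graph are assumptions for this sketch.
        lines = [f"SELECT * FROM ag_catalog.create_graph('{graph_name}');"]
        lines.extend(queries)
        lines.append(f"SELECT * FROM ag_catalog.drop_graph('{graph_name}', true);")
        return "\n".join(lines) + "\n"

    # e.g. a 100-query file, with placeholder comments instead of real queries
    with open("bench_100.sql", "w") as f:
        f.write(build_sql_file([f"-- query {i} goes here" for i in range(100)]))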

The experiments were conducted in essentially the same way as before: I
measured the total time for running 100, 1000 and 10000 SQL queries,
repeating each measurement 20 times consecutively and taking the average,
and running that averaging process for 10 rounds. In total that gives 200
measurements each for the original and the optimized code. Finally, I took
the overall average of those 200 measurements to get a single number to
compare between the original and the optimized code (a rough sketch of the
measurement loop follows the table below). Here are the results:

                 Original code (ms)  Optimized code (ms)   Difference (ms)
100 queries               29.233970            28.786665          0.447305
1000 queries             255.391305           250.436540          4.954765
10000 queries           2558.612315          2517.937400         40.674915
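
In case it helps when reviewing the methodology, here is a rough Python
sketch of the measurement loop described above. It is not the actual
script: run_sql_file() is a stand-in for however the query files get
executed (I'm assuming psql here), and the database name is a placeholder.

    import statistics
    import subprocess
    import time

    def run_sql_file(path):
        # Assumption: the benchmark file is executed by feeding it to psql;
        # the database name and connection details are placeholders.
        subprocess.run(["psql", "-d", "postgres", "-f", path],
                       check=True, capture_output=True)

    def measure(path, inner=20, outer=10):
        round_averages = []
        for _ in range(outer):                  # 10 rounds
            times_ms = []
            for _ in range(inner):              # 20 consecutive runs per round
                start = time.perf_counter()
                run_sql_file(path)
                times_ms.append((time.perf_counter() - start) * 1000.0)
            round_averages.append(statistics.mean(times_ms))
        # overall average across all 10 x 20 = 200 measurements
        return statistics.mean(round_averages)

    print(measure("bench_100.sql"))             # e.g. the 100-query file

Note that timing the whole client invocation like this would include
connection and client overhead, so in this sketch the numbers are
end-to-end times rather than pure server-side execution time.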

Coming back to the numbers: according to these results, the optimized code
was about 0.45 ms faster than the original for 100 queries, about 4.95 ms
faster for 1000 queries, and about 40.67 ms faster for 10000 queries. These
numbers look reasonable to me, because each time the number of queries grows
by a factor of 10, the difference in time also grows by roughly a factor of
10.
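
For what it's worth, here is a quick check of that scaling using the
numbers from the table above (just a sanity calculation, same Python):

    # The per-query saving should stay roughly constant if the total
    # difference grows ~10x along with the query count.
    results = {              # queries: (original_ms, optimized_ms), from the table
        100:   (29.233970,   28.786665),
        1000:  (255.391305,  250.436540),
        10000: (2558.612315, 2517.937400),
    }
    for n, (orig, opt) in results.items():
        diff = orig - opt
        print(f"{n:>5} queries: diff = {diff:9.6f} ms, per query = {diff / n:.6f} ms")

The saving per query comes out in the 0.004-0.005 ms range for all three
sizes, which matches the roughly 10x growth in the total difference.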

I've attached the detailed statistics report. Please let me know your
comments and suggestions.

Best regards,
Viet.

Attachment: benchmarking2.odf.ods
