On Mon, Feb 27, 2023 at 5:08 AM Kirk Wolak <wol...@gmail.com> wrote:
> On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wol...@gmail.com> wrote:
>
>> On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurj...@singh.im> wrote:
>>
>>> On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wol...@gmail.com> wrote:
>>>
>>>> ...
>>>> I think like ROW_COUNT, it should not change because of internal
>>>> commands.
>>>> ...
>>>
>>> By using \timing, the user is explicitly opting into any overhead
>>> caused by time-keeping. With this feature, the timing info will be
>>> collected all the time. So do consider evaluating the performance
>>> impact this can cause on people's workloads. They may not care about
>>> the impact in interactive mode, but in automated scripts even a
>>> moderate performance overhead would be a deal-breaker.
>>
>> Excellent point. I run lots of long scripts, but I usually set \timing
>> on, just because I turn off everything else.
>> I tested 2,000+ lines of "select 1;" (fast SQL is where any overhead is
>> most visible, so it is the right thing to test). Honestly, the
>> difference was imperceptible, approximating 0.01 seconds:
>> With timing on:  ~0.28 seconds
>> With timing off: ~0.27 seconds
>>
>> So \timing incurs no realistic penalty at this point. The only penalty
>> we could face is the time needed to write the value into the variable,
>> and that cannot be tested until it is implemented. But I will do that,
>> and I will report the results of the impact. I do not expect a big one;
>> we update SQL_COUNT without an issue, and that might be much more
>> expensive to obtain.
>
> Okay, I've written and tested this using SQL_EXEC_ELAPSED (a suggested
> improvement on the name).
> First, the instant you have ANY output, it swamps the impact (I settled
> on "SELECT 1 as v \gset xxx" to produce no output).
> Second, the variability of running even a constant script is
> mind-blowing.
> Third, I've limited the output. I built this in layers: init.sql
> initializes the psql variables I use, and run_100.sql runs another file
> (\i tst_2000.sql) 100 times, resulting in 200k selects. This is the very
> worst case.

But nobody will run 200k selects from psql. Can you try a slightly more
realistic, but still synthetic, test case?

create table foo(a int);

begin;
insert into foo values(1);
...
insert into foo values(200000);
commit;

Regards

Pavel

> Executive summary: 1,000,000 statements executed consume ~2 - 2.5
> seconds of extra time (total).
>
> So the per-statement cost is: 2.5 s / 1,000,000 = 0.0000025 s per
> statement, roughly 2.5 µs.
>
> Unfortunately, my test lines look like this:
>
> Without timing:
> done 0.198215 (500) total 98.862548 min 0.167614
> avg 0.197725096 max 0.290659
>
> With timing:
> done 0.191583 (500) total 100.729868 min 0.163280
> avg 0.201459736 max 0.275787
>
> Notice that the run with timing had a lower min and a lower max, but a
> higher average. The distance between min and avg, and between min and
> max, is big (those figures are for 1,000 selects each).
>
> Are these numbers at the "so what" level?
>
> While testing, I got the distinct impression that I am measuring
> something that changes, or that the variance in the system itself really
> swamps this on a per-statement basis. Its only impact is felt on
> millions of psql statements, and that adds up to a couple of seconds.
>
> Curious what others think before I take this any further.
>
> regards, Kirk
>
>> Thanks!
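
[Editor's sketch] For readers who want to reproduce the measurement, here
is a minimal sketch of the layered harness described above. The file names
init.sql, run_100.sql, and tst_2000.sql come from the thread; their exact
contents are assumptions.

    -- tst_2000.sql (assumed contents): 2,000 statements that produce no
    -- output. "\gset xxx" stores column v into the psql variable xxxv
    -- instead of printing it, so output cost cannot swamp the
    -- per-statement overhead being measured.
    SELECT 1 AS v \gset xxx
    SELECT 1 AS v \gset xxx
    -- ... repeated 2,000 times ...

    -- run_100.sql (assumed contents): psql scripts have no loop
    -- construct, so the inner file is simply included 100 times,
    -- giving 200k statements per run.
    \i tst_2000.sql
    \i tst_2000.sql
    -- ... repeated 100 times ...

Timing the whole run from the shell (e.g. time psql -f run_100.sql) then
gives the with/without comparison.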
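
[Editor's sketch] Hand-writing 200,000 inserts is impractical, so Pavel's
suggested script would presumably be generated. One way to do that from
psql itself (the file path is arbitrary; the table name foo is from his
message):

    -- Generate the flat script of 200,000 single-row inserts.
    \pset tuples_only on
    \pset format unaligned
    \o /tmp/ins_200k.sql
    select format('insert into foo values(%s);', g)
      from generate_series(1, 200000) g;
    \o
    \pset tuples_only off
    \pset format aligned

    -- Then, once per comparison run (with and without \timing):
    create table foo(a int);
    begin;
    \i /tmp/ins_200k.sql
    commit;
    drop table foo;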