n, so no need to have
that keyword.
regards,
-Gunther
bad choice for bulk data. It's only for
this recursive query that it sometimes makes sense.
regards,
-Gunther
sync to the same snapshot point and then split it off, or point the
read activity of the stand-by server, querying like crazy, only at
that mirror, while batching changes to the RAID master so that they can
be applied with very low overhead.
Anyone thinking about these things?
regards,
-Gunther
aring? And as I read it, the log-shipping method may not be
suitable for hot standby use?
Is this something that has been written about already?
regards,
-Gunther
throughput of the EC2 instance. Needless to say, you don't want
a t* instance, where you have only limited burst CPU capacity.
regards,
-Gunther
On 2/23/2021 1:12 PM, Maurici Meneghetti wrote:
Hi everyone,
I have 2 postgres instances created from the same dump (backup), one
on a GCP VM and the other
ut to just
increase that master db server?
regards,
-Gunther
You can't really try it
out unless you do some massively parallel magic. So I figured I'd just ask.
regards,
-Gunther
involved in
a couple of such research projects in Europe; not sure what exactly
the situation/rules are in the US.
Yes, it might work with a University directly, although I will also
contact the PostgreSQL foundation in the US.
regards,
-Gunther
could donate to, 501(c)(3) tax
exempt? I can donate and possibly find some people at Purdue University
who might take this on. Interest?
regards,
-Gunther
and proposed, especially this last one
by Tomas Vondra, been applied to the 11 releases? Should I try these
older patches from April?
regards,
-Gunther
For what it is worth, this is what I am getting:
TopMemoryContext: 67424 total in 5 blocks; 7184 free (7 chunks); 60240 used
  pgstat TabStatusArray
There isn't really any big news here. But what matters is that it works.
thanks & regards,
-Gunther Schadow
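For the archives: on PostgreSQL 14 and later such a memory context dump
can be requested from plain SQL instead of gdb; a minimal sketch, with a
hypothetical pid:

-- writes the backend's memory context tree to the server log,
-- in the same format as the TopMemoryContext dump above
SELECT pg_log_backend_memory_contexts(12345);  -- 12345: pid of the backend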
t) Rows Removed by Filter: 7415579
Planning Time: 40.559 ms
Execution Time: 6896581.566 ms
(70 rows)
For the first time this query has succeeded now. Memory was bounded. The
time of nearly two hours is crazy, but things sometimes take that long. The
important thing was not to get an out of memory error.
Thank you. Anything else you want to try, I can do it.
regards,
-Gunther
2 in ExecParallelHashJoinNewBatch (hjstate=) at nodeHashjoin.c:1127
1127    {
I don't understand why all of them are at the same location. Doesn't
make any sense to me.
But I'll leave it at that right now.
-Gunther
After applying Tomas' patch 0002 as corrected, over 0001, same thing:
integrator=# set enable_nestloop to off;
SET
integrator=# explain analyze select * from reports.v_BusinessOperation;
WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32
WARNING:
2019-04-21 19:50:21.338 UTC [6974] DETAIL: Failed on request of size 32800 in memory
context "HashBatchContext".
2019-04-21 19:50:21.338 UTC [6974] STATEMENT: explain analyze select * from
reports.v_BusinessOperation;
Next I'll apply Tomas' corrected 0002 patch on top of this and see ...
-Gunther
nts access to hack
directly, I can provide.
regards,
-Gunther
or not, etc. This would be so tremendously useful for runtime debugging
of queries. I think the utility of this can hardly be overstated. I mean
even for end-user applications of some data warehouse, where you could
probe a long running query every 5 seconds as to where the execution is.
Man, I can hardly think of a lower-hanging, more useful feature. I
am sure that if PostgreSQL had originally been written in Java, this
feature would naturally exist already.
regards and Happy Easter,
-Gunther
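There is still no way to see per-plan-node progress of an arbitrary
query, but a crude substitute is to poll pg_stat_activity every few
seconds; a minimal sketch, with a hypothetical pid:

-- shows query text, state, current wait event, and elapsed time,
-- though not the position within the plan
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS elapsed, query
  FROM pg_stat_activity
 WHERE pid = 12345;  -- 12345: the backend being watched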
oo=# update pg_class set (relpages, reltuples) = (7, 236) where relname =
'tmp_r';
UPDATE 1
but that didn't help. Somehow the planner outsmarts every such trick, so
I can't get it to follow my right outer join plan where the big table is
hashed. I am sure y'all know some way to force it.
regards,
-Gunther
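For the record, there is no supported switch that picks which side of a
hash join gets hashed; the row estimates decide that. The closest
supported knobs pin the join method itself via the planner's enable_*
settings; a sketch:

SET enable_nestloop  = off;  -- as used elsewhere in this thread
SET enable_mergejoin = off;  -- with both off, hash join is the main option left
-- now EXPLAIN the query and check which relation sits under the Hash node
RESET enable_nestloop;
RESET enable_mergejoin;

Extensions such as pg_hint_plan go further and accept explicit join-order
and join-method hints.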
even upgrading
the VM to a 32 GB VM just to crunch the heap dump. But can you tell me
just a little more as to how I need to configure this thing to get the
data you want without blowing up the memory and disk during this huge query?
regards,
-Gunther
on how ZFS might do? ZFS is of course so much more flexible.
-Gunther
SELECT * FROM (SELECT count(1) FROM BigTable) x
Parallel or not?
-Gunther
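The quickest way to answer that is to EXPLAIN it and look for Gather and
Parallel nodes; roughly, depending on version and settings:

EXPLAIN SELECT * FROM (SELECT count(1) FROM BigTable) x;
-- a parallel plan looks something like:
--   Finalize Aggregate
--     ->  Gather  (Workers Planned: 2)
--           ->  Partial Aggregate
--                 ->  Parallel Seq Scan on bigtable
-- if only a plain Aggregate over a Seq Scan shows up, check
-- max_parallel_workers_per_gather and min_parallel_table_scan_size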
option, you will see each postgres
process identify itself as to its role, e.g.
postgres: parallel worker for PID 46687
or
postgres: SELECT ...
or
postgres: wal writer
This is extremely useful.
-Gunther
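The same role information is visible from SQL as well (PostgreSQL 10 and
later) through the backend_type column; a sketch:

SELECT pid, backend_type, state, left(query, 40) AS query
  FROM pg_stat_activity
 ORDER BY backend_type;
-- distinguishes 'client backend', 'parallel worker', 'walwriter',
-- 'autovacuum launcher', and so on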
hashvalue=hashvalue@entry=0x7ffcbf92fe5c,
tupleSlot=0x2ae0ab8, hjstate=0x2a1d920) at nodeHashjoin.c:1277
(More stack frames follow...)
But now it increased:
ExecutorState: 1369353824 total in 163399 blocks; 248792 free (36 chunks);
1369105032 used
It increases every third time I stop at the breakpoint.
-Gunther
ame, "ExecutorState") == 0 && *(int
*)$rsp != 0x84e7dd && 0x84e8ad != *(int *)$rsp
Note: breakpoint 6 also set at pc 0x849030.
Breakpoint 7 at 0x849030: file aset.c, line 718.
(gdb) delete 6
Now if I continue I don't seem to be stopping any more.
Does this help now?
-Gunther
On 4/15/2019 21:49, Gunther wrote:
I'm going to try without that DISTINCT step, or perhaps by dismantling
this query until it works without this excessive memory growth.
It also failed. Out of memory. The resident memory size of the backend
was 1.5 GB before it crashed.
TopMemoryContext
ep, or perhaps by dismantling
this query until it works without this excessive memory growth.
-Gunther
0xf45320) at postmaster.c:1379
#29 0x004822dc in main (argc=3, argv=0xf45320) at main.c:228
Good, now I leave this all sitting like that for you to ask me what else
you might want to see.
We are now close to the edge of the cliff.
-Gunther
this.
thanks,
-Gunther
006bf122 in PostmasterMain (argc=argc@entry=3,
argv=argv@entry=0xf45320) at postmaster.c:1379
#25 0x004822dc in main (argc=3, argv=0xf45320) at main.c:228
That's it.
Thank you all very much for your interest in this case.
-Gunther
because
there are many tables and columns involved. Would really be nice if the
error would say exactly what plan step that ExecutorState referred to,
so one could narrow it down.
regards,
-Gunther
On 4/14/2019 17:19, Tom Lane wrote:
Gunther writes:
For weeks now, I am banging my head a
There should be more information, so that the
person can do something about it.
Any ideas?
-Gunther
an just read, I appreciate you sharing that.
I think that should be on some Wiki or FAQ somewhere. If I get a quick
and dirty explanation with some pointers, I can try to write it out into
a more complete answer that could be added to the documentation or an
FAQ.
thanks & regards,
-Gunther
On 3/21/2019 17:16, Tom Lane wrote:
Gunther writes:
I have 250 rows to delete, but they are the target of a bunch of child
tables with foreign keys ON DELETE CASCADE.
EXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage);
shows me that it uses the nested loop by Foo_pkey index to find
aced, I am sure that somewhere inside there is some data
structure representing this activity and all it would take is to dump it?
regards,
-Gunther
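For the archives, the usual suspect when cascaded deletes crawl is a
missing index on the referencing columns; a minimal sketch, with
hypothetical child table names:

-- every child table referencing Foo(id) needs an index on its FK column,
-- otherwise each cascaded delete scans the whole child table
CREATE INDEX ON Child1 (fooId);  -- Child1, Child2 are made-up names
CREATE INDEX ON Child2 (fooId);

The cascade itself runs in per-row FK triggers, which is also why EXPLAIN
on the DELETE does not show it.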
. It just goes to show that in a case like this, it is
best to just go with the partitioned table anyway.
regards,
-Gunther
g, why not
partition
the entire table. But just for the heck of it, why not this way?
regards,
-Gunther
On 3/14/2019 11:11, Jeremy Schneider wrote:
On 3/14/19 07:53, Gunther wrote:
2. build a low level "spreading" scheme which is to take the partial
files 4653828 and 4653828.1, .2, _fsm, etc. and move each to another
device and then symlink it back to that directory (I
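For completeness, the supported way to spread relations across devices is
a tablespace rather than hand-made symlinks (internally it is a symlink
under pg_tblspc anyway); a sketch, with a hypothetical path and table:

-- the directory must exist, be empty, and be owned by the postgres OS user
CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';
ALTER TABLE some_big_table SET TABLESPACE disk2;  -- rewrites it onto disk2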
and odd days, such that on any given
day one of the two pending partitions are quiet? Is that how it's done?
regards,
-Gunther
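A sketch of how that even/odd idea could look with declarative
partitioning, assuming a hypothetical jobs table with a created date
column:

CREATE TABLE Queue (
    jobId   bigint  NOT NULL,
    created date    NOT NULL DEFAULT current_date,
    pending boolean NOT NULL DEFAULT true
) PARTITION BY LIST ((extract(day FROM created)::int % 2));

CREATE TABLE Queue_even PARTITION OF Queue FOR VALUES IN (0);
CREATE TABLE Queue_odd  PARTITION OF Queue FOR VALUES IN (1);
-- on any given day only one partition receives new rows, so the other
-- one is quiet and can be reindexed or vacuumed in peace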
that?
Also, the REINDEX command always fails with a deadlock because there is
a row lock and a complete table lock involved.
I consider this ultimately a bug, or at the very least there is room for
improvement. And I am on version 11.1.
regards,
-Gunther
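REINDEX CONCURRENTLY only arrived in PostgreSQL 12, so the usual
workaround on 11.x is to build a replacement index concurrently and swap
it in; a sketch, using the index defined just below:

CREATE UNIQUE INDEX CONCURRENTLY Queue_idx_pending_new
    ON Queue (jobId) WHERE pending;         -- no long-held table lock
DROP INDEX CONCURRENTLY Queue_idx_pending;  -- drop the bloated original
ALTER INDEX Queue_idx_pending_new RENAME TO Queue_idx_pending;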
On 2/23/2019 16:13, Peter Geoghegan wrote:
On Sat, Feb 23, 2019 at 1:06 PM Gunther wrote:
I thought to keep my index tight, I would define it like this:
CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;
so that only pending jobs are in that index.
When a job is done
pending;
then it would only grow, but never shrink. Maybe that helps somehow? I
doubt it though. Adding to an index also causes deterioration, and most
of the rows would be irrelevant because they would be past work. It
would be nicer if there were another smooth way.
regards,
-Gunther
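For what it is worth, a sketch of how such a partial index is typically
consumed, with a claim-one-job query (column names as in the definition
above):

-- claim one pending job; the tiny partial index makes this cheap
SELECT jobId
  FROM Queue
 WHERE pending
 ORDER BY jobId
 LIMIT 1
 FOR UPDATE SKIP LOCKED;

-- marking it done drops the row from the partial index, but the dead
-- index tuple is only reclaimed by (auto)vacuum, which is exactly the
-- deterioration-under-churn concern above
UPDATE Queue SET pending = false WHERE jobId = 42;  -- 42: the claimed job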
Thank you Magnus. 68% steal. Indeed. You probably hit the target. Yes.
That explains the discrepancy. I need to watch and understand that CPU
credits issue.
regards,
-Gunther
On 2/21/2019 4:08, Magnus Hagander wrote:
On Thu, Feb 21, 2019 at 12:34 AM Gunther <r...@gusw.net> wrote:
total CPU% estimated by both top
and iostat? From top it looks like I have 24 worker processes using 8%
each, most of them in the R(unning) state, so that would be 192%, which,
divided over the 2 CPUs of the t2.medium instance, is really 96%. So I am
CPU bound after all?
regards,
-Gunther