take it; this sounds
more like a WIP patch.
FYI, pg_upgrade is going to need pg_dump --binary-upgrade to output the
columns in physical order with some logical ordering information, i.e.
pg_upgrade cannot be passed only logical ordering from pg_dump.
Wouldn't it need attno info too, so all
On 3/3/15 11:15 AM, Jan de Visser wrote:
On March 3, 2015 11:09:29 AM Jim Nasby wrote:
On 3/3/15 9:26 AM, Andres Freund wrote:
On 2015-03-03 15:21:24 +, Greg Stark wrote:
Fwiw this concerns me slightly. I'm sure a lot of people are doing
things like "kill -HUP `cat .../postmaste
On 3/3/15 11:26 AM, Bruce Momjian wrote:
On Tue, Mar 3, 2015 at 11:24:38AM -0600, Jim Nasby wrote:
On 3/3/15 11:23 AM, Bruce Momjian wrote:
On Thu, Feb 26, 2015 at 01:55:44PM -0800, Josh Berkus wrote:
On 02/26/2015 01:54 PM, Alvaro Herrera wrote:
This patch decouples these three things so
On 3/3/15 11:33 AM, Andres Freund wrote:
On 2015-03-03 11:09:29 -0600, Jim Nasby wrote:
On 3/3/15 9:26 AM, Andres Freund wrote:
On 2015-03-03 15:21:24 +, Greg Stark wrote:
Fwiw this concerns me slightly. I'm sure a lot of people are doing
things like "kill -HUP `cat .../post
ust the text from pg_hba? Or is
that what you're opposed to?
FWIW, I'd say that having the individual array elements be correct is
more important than what the result of array_out is. That way you could
always do array_to_string(..., ', ') and get valid pg_hba output.
--
onds?
I think there's a difference between comments about the function of a
GUC and stating the units it's specified in. It's more than annoying to
have to go and look that up where it's not stated.
Look up the units?
--
On 3/3/15 3:34 PM, David Fetter wrote:
On Tue, Mar 03, 2015 at 05:49:06PM -0300, Alvaro Herrera wrote:
Jim Nasby wrote:
FWIW, what I would find most useful at this point is a way to get
the equivalent of an AFTER STATEMENT trigger that provided all
changed rows in a MV as the result of a
On 3/3/15 11:48 AM, Andres Freund wrote:
On 2015-03-03 11:43:46 -0600, Jim Nasby wrote:
>It's certainly better than now, but why put DBAs through an extra step for
>no reason?
Because it makes it more complicated than it already is? It's nontrivial
to capture the output, esca
On 3/3/15 12:57 PM, Greg Stark wrote:
On Tue, Mar 3, 2015 at 6:05 PM, Jim Nasby wrote:
What about a separate column that's just the text from pg_hba? Or is that what
you're opposed to?
I'm not sure what you mean by that. There's a rawline field we could
put somewhere
every role added.
Yeah, but you'd still have to grant "backup" to every role created
anyway, right?
Or you could create a role that has the backup attribute and then grant
that to users. Then they'd have to intentionally SET ROLE my_backup_role
to elevate their privile
On 3/3/15 5:13 PM, Tom Lane wrote:
Jim Nasby writes:
On 3/3/15 11:48 AM, Andres Freund wrote:
It'll be confusing to have different interfaces in one/multiple error cases.
If we simply don't want the code complexity then fine, but I just don't
buy this argument. How could
result in sub-optimal planning.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Are we afraid of an
extra GUC to control it?
--
On 3/5/15 2:17 PM, Stephen Frost wrote:
* Jim Nasby (jim.na...@bluetreble.com) wrote:
On 3/4/15 2:56 PM, Stephen Frost wrote:
2) The per-session salt sent to the client is only 32 bits, meaning
that it is possible to replay an observed MD5 hash in ~16k connection
attempts.
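For context, the MD5 exchange under discussion can be sketched in Python. This is a hedged illustration of the documented response format, md5(md5(password || user) || salt), not PostgreSQL's actual C implementation:

```python
import hashlib

def pg_md5_response(password: str, user: str, salt: bytes) -> str:
    # Stage 1: what pg_authid stores -- md5(password || username), hex-encoded.
    stored = hashlib.md5(password.encode() + user.encode()).hexdigest()
    # Stage 2: what the client sends -- "md5" + md5(stage1 || 4-byte salt).
    return "md5" + hashlib.md5(stored.encode() + salt).hexdigest()

# Because the salt is only 4 bytes, an attacker who has sniffed one
# (salt, response) pair just waits for the server to issue that salt
# again and replays the response verbatim -- no password needed.
resp = pg_md5_response("secret", "alice", b"\x01\x02\x03\x04")
```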
Yes, and we have
hnology.
Yeah, lets at least get this wrapped and we can see about improving it.
I like the idea of doing a here-doc or similar in the .pid, though I
think it'd be sufficient to just prefix all the continuation lines with
a tab. An uglier option would be just stripping the newlines out.
On 3/4/15 9:10 AM, Robert Haas wrote:
On Wed, Feb 25, 2015 at 5:06 PM, Jim Nasby wrote:
Could the large allocation[2] for the dead tuple array in lazy_space_alloc
cause problems with linux OOM? [1] and some other things I've read indicate
that a large mmap will count towards total s
ot answered on one of the other lists, right?
--
On 3/2/15 10:58 AM, Sawada Masahiko wrote:
On Wed, Feb 25, 2015 at 4:58 PM, Jim Nasby wrote:
On 2/24/15 8:28 AM, Sawada Masahiko wrote:
According to the above discussion, VACUUM and REINDEX should have
trailing options. Tom seems (to me) to be suggesting that SQL-style
(bare word preceded by WITH
a stand alone shell command (pg_temp_cluster?), and
either have pg_regress call it or (probably more logical) add it to the
make files as a dependency for make check (make
temp-cluster/remove-temp-cluster or similar).
--
On 3/5/15 7:58 PM, Jim Nasby wrote:
This got answered on one of the other lists, right?
That was supposed to be off-list. I'll answer my own question: yes.
Sorry for the noise. :(
--
On 3/7/15 12:48 AM, Noah Misch wrote:
On Sat, Mar 07, 2015 at 12:46:42AM -0500, Tom Lane wrote:
Noah Misch writes:
On Thu, Mar 05, 2015 at 03:28:12PM -0600, Jim Nasby wrote:
I was thinking the simpler route of just repalloc'ing... the memcpy would
suck, but much less so than the extra
e
we got pg_class, pg_attributes and pg_type created? That would
theoretically allow us to drive much more of initdb with plain SQL
(possibly created via pg_dump).
--
On 3/7/15 4:49 PM, Andres Freund wrote:
On 2015-03-05 15:28:12 -0600, Jim Nasby wrote:
I was thinking the simpler route of just repalloc'ing... the memcpy would
suck, but much less so than the extra index pass. 64M gets us 11M tuples,
which probably isn't very common.
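The quoted figure checks out: vacuum sizes its dead-tuple array at one 6-byte ItemPointerData per tuple, so a 64MB allocation caps out just above 11M entries. A quick check (plain arithmetic, not the actual lazy_space_alloc code):

```python
ITEM_POINTER_BYTES = 6    # sizeof(ItemPointerData): 4-byte block + 2-byte offset
maintenance_work_mem = 64 * 1024 * 1024
max_dead_tuples = maintenance_work_mem // ITEM_POINTER_BYTES
print(max_dead_tuples)    # 11184810 -- the "11M tuples" mentioned above
```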
That has the
On 3/7/15 6:02 PM, Stephen Frost wrote:
* Andrew Dunstan (and...@dunslane.net) wrote:
On 03/07/2015 05:46 PM, Andres Freund wrote:
On 2015-03-07 16:43:15 -0600, Jim Nasby wrote:
Semi-related... if we put some special handling in some places for bootstrap
mode, couldn't most catalog objec
d node
If we're keeping a list, there's also hot_standby_feedback,
max_standby_archive_delay and max_standby_streaming_delay.
--
On 3/9/15 12:28 PM, Alvaro Herrera wrote:
Robert Haas wrote:
On Sat, Mar 7, 2015 at 5:49 PM, Andres Freund wrote:
On 2015-03-05 15:28:12 -0600, Jim Nasby wrote:
I was thinking the simpler route of just repalloc'ing... the memcpy would
suck, but much less so than the extra index pass
On 3/9/15 9:43 PM, Sawada Masahiko wrote:
On Fri, Mar 6, 2015 at 11:07 AM, Jim Nasby wrote:
On 3/2/15 10:58 AM, Sawada Masahiko wrote:
On Wed, Feb 25, 2015 at 4:58 PM, Jim Nasby
wrote:
On 2/24/15 8:28 AM, Sawada Masahiko wrote:
According to the above discussion, VACUUM and REINDEX
On 3/10/15 10:53 AM, Jim Nasby wrote:
On 3/10/15 9:30 AM, Robert Haas wrote:
On Sat, Mar 7, 2015 at 1:06 PM, Petr Jelinek
wrote:
You still duplicate the type cache code, but many other array functions do
that too so I will not hold that against you. (Maybe somebody should write
separate patch
On 2/22/15 5:19 AM, Pavel Stehule wrote:
2015-02-22 3:00 GMT+01:00 Petr Jelinek <p...@2ndquadrant.com>:
On 28/01/15 08:15, Pavel Stehule wrote:
2015-01-28 0:01 GMT+01:00 Jim Nasby <jim.na...@bluetreble.com>:
ions.c
1 executor/nodeWindowAgg.c
14 utils/adt/array_userfuncs.c
31 utils/adt/arrayfuncs.c
4 utils/adt/domains.c
2 utils/adt/enum.c
1 utils/adt/int.c
6 utils/adt/jsonfuncs.c
1 utils/adt/oid.c
4 utils/adt/orderedsetaggs.c
7 utils/adt/rangetypes.c
24 utils/adt/rowtypes.c
Is there any way to determine the typmod of the source data for a cast?
Perhaps a modification on get_call_expr_argtype(), though I'd hate to put that
in an extension...
BTW, it'd be nice if we better emphasized that the typmod passed to a cast
function is for the destination...
hich autovacuum doesn't get cancelled anymore.
Opinions?
What do you mean by "never succeed"? Is it skipping a large number of pages?
Might re-trying the locks within the same vacuum help, or are the user locks too
persistent?
--
On 11/10/14, 7:52 PM, Tom Lane wrote:
On the whole, I'm +1 for just logging the events and seeing what we learn
that way. That seems like an appropriate amount of effort for finding out
whether there is really an issue.
Attached is a patch that does this.
--
On 12/1/14, 11:57 AM, Andres Freund wrote:
On 2014-11-30 20:46:51 -0600, Jim Nasby wrote:
On 11/10/14, 7:52 PM, Tom Lane wrote:
On the whole, I'm +1 for just logging the events and seeing what we learn
that way. That seems like an appropriate amount of effort for finding out
whether the
There doesn't seem to be documentation on *= (or search isn't finding it). Is
this intentional?
--
g from any point.
Now that's not a bad idea. This would basically mean just saving a block number
in pg_class after every intermediate index clean and then setting that back to
zero when we're done with that relation, right?
--
oning. The
up-side (which would be a double-edged sword) is that you could leave holes in
your partitioning map. Note that in the multi-key case we could still have a
record of rangetypes.
--
UENCE SET TABLESPACE, to use an
SSD for this purpose maybe?
Why not? RAID arrays typically use stripe sizes in the 128-256k range, which
means only 16 or 32 sequences per stripe.
It still might make sense to allow controlling what tablespace a sequence is
in, but IMHO the default should just be
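The stripe math above, spelled out (assuming the default 8kB page size, since each sequence relation is a single page):

```python
PAGE_BYTES = 8 * 1024  # default BLCKSZ; a sequence occupies one page
# sequences that fit in one RAID stripe, for 128kB and 256kB stripes
seqs_per_stripe = {kb: (kb * 1024) // PAGE_BYTES for kb in (128, 256)}
print(seqs_per_stripe)  # {128: 16, 256: 32}
```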
re than once
LÍNEA 1: insert into t (col, col) values ((42, 43), (44, 43));
^
Isn't this a bit odd?
Yes, and sounds like a good way to create bugs... my vote would be to fix this.
--
to decide on parallelism
during execution instead of at plan time? That would allow us to dynamically
scale parallelism based on system load. If we don't even consider parallelism
until we've pulled some number of tuples/pages from a relation, this would also
eliminate all parallel over
al composite data would
need to be a "dumb" varlena that stores the composite HeapTupleHeader.
--
On 12/5/14, 1:22 PM, Jim Nasby wrote:
On 12/5/14, 3:42 AM, Amit Langote wrote:
> I think you are right. I think in this case we need something similar
>to column pg_index.indexprs which is of type pg_node_tree(which
>seems to be already suggested by Robert). So may be we can proc
On 12/5/14, 2:02 PM, Robert Haas wrote:
On Fri, Dec 5, 2014 at 2:52 PM, Jim Nasby wrote:
The other option would be to use some custom rowtype to store boundary
values and have a method that can form a boundary tuple from a real one.
Either way, I suspect this is better than frequently
ore that's huge because of shared buffers, but perhaps there's some way to
avoid writing those? (That means the core won't help if the bug is due to
something in a buffer, but that seems unlikely enough that the tradeoff is
worth it...)
--
On 12/5/14, 5:49 PM, Josh Berkus wrote:
On 12/05/2014 02:41 PM, Jim Nasby wrote:
Perhaps we should also officially recommend production servers be setup
to create core files. AFAIK the only downside is the time it would take
to write a core that's huge because of shared buffers, but pe
le is not adjusted for the
latest interface yet.
I've made some minor edits, with an emphasis on not changing original intent.
Each section was saved as a separate edit, so if anyone objects to something
just revert the relevant change. Once the code is available more editing can be
don
On 12/7/14, 6:16 PM, Simon Riggs wrote:
On 20 October 2014 at 10:57, Jim Nasby wrote:
Currently, a non-freeze vacuum will punt on any page it can't get a cleanup
lock on, with no retry. Presumably this should be a rare occurrence, but I
think it's bad that we just assume that and
ating new types without requiring C. C isn't an option in many (even most)
environments in today's "cloud" world, aside from the intimidation factor.
There are comments in the code that hypothesize about making cstring a full type; that
might be all that's needed.
--
while doing something
different on the disk.
If you think about it, partitioning is really a hack anyway. It clutters up
your logical set implementation with a bunch of physical details. What most
people really want when they implement partitioning is simply data locality.
--
ty in the
partitioned column.
If we allowed for a "catchall partition" and supported normal inheritance/triggers on
that partition then users could continue to do whatever they needed with data that didn't fit the
"normal" partitioning pattern.
--
Is there any particular reason we don't allow comparing char and varchar
arrays? If not I'll submit a patch.
--
On 12/8/14, 5:19 PM, Josh Berkus wrote:
On 12/08/2014 02:12 PM, Jim Nasby wrote:
On 12/8/14, 12:26 PM, Josh Berkus wrote:
4. Creation Locking Problem
high probability of lock pile-ups whenever a new partition is created on
demand due to multiple backends trying to create the partition at the
On 12/9/14, 4:19 PM, Jim Nasby wrote:
Is there any particular reason we don't allow comparing char and varchar
arrays? If not I'll submit a patch.
We're also missing operators on text and varchar arrays.
--
cular, I'm thinking that in DefineRelation we can randomize
stmt->tableElts before merging in inheritance attributes.
--
On 12/9/14, 4:30 PM, Tom Lane wrote:
Jim Nasby writes:
On 12/9/14, 4:19 PM, Jim Nasby wrote:
Is there any particular reason we don't allow comparing char and varchar
arrays? If not I'll submit a patch.
We're also missing operators on text and varchar arrays.
Adding operat
backlog of patches into the new app over the
holidays, but not before then.
FWIW, I suspect a call for help on -general or IRC would find volunteers for
any necessary data entry work...
--
't mark myself as reviewer
of any of them because I don't feel I have enough knowledge to fulfill that
role.
--
ons as needed, and it's generally easier to
write code that calls a function as opposed to glomming a text string together
and passing that to EXECUTE.
--
On 12/9/14, 5:06 PM, Jim Nasby wrote:
On 12/9/14, 4:30 PM, Tom Lane wrote:
Jim Nasby writes:
On 12/9/14, 4:19 PM, Jim Nasby wrote:
Is there any particular reason we don't allow comparing char and varchar
arrays? If not I'll submit a patch.
We're also missing operators on t
On 12/12/14, 3:48 PM, Robert Haas wrote:
On Fri, Dec 12, 2014 at 4:28 PM, Jim Nasby wrote:
Sure. Mind you, I'm not proposing that the syntax I just mooted is
actually for the best. What I'm saying is that we need to talk about
it.
Frankly, if we're going to require user
On 12/12/14, 7:16 PM, Tom Lane wrote:
Jim Nasby writes:
I'd say that array_eq (and probably _cmp) just needs to be taught to fall back
to what oper() does, but this part of the commit message gives me pause:
"Change the operator search algorithms to look for appropriate btree or
squelch in the server logfile, I think checking for the table is the right
answer.
--
he project, including reviewing patches. A
simple "thanks, but we feel it's already clear enough that there can't be anywhere
close to INT_MAX timezones" would have sufficed.
--
should display something if it times
out; otherwise you'll have a test failure and won't have any indication why.
I've attached a patch that adds logging on timeout and contains a test case
that demonstrates the rollback to savepoint bug.
--
ased SET in quotes, but we need
to do that for all GUCs that include units, so presumably there's no good way
around it.
--
comment about that.
--
From a681953a802230e73e5e4f91607eca9dd99c34f2 Mon Sep 17 00:00:00 2001
From: Jim Nasby
Date: Mon, 15 Dec 2014 18:35:50 -0600
Subject: [PATCH] Ignore config.cache
Also ad
erbose output.
BTW, what is it about a dynamic message that makes it untranslatable? Doesn't
the translation happen downstream, so that at most we'd just need two
translation messages? Or worst case we could have two separate elog calls, if
we wanted to go that route.
--
e best option I can think of for the latter is something like "failed initial lock
attempt". That's the only thing that will be true in all cases.
--
a defect that should be corrected.
If copying data/palloc is the root of numeric's performance problems then we
need to address that, because it will provide benefit across the entire
database. The pattern of (palloc; copy) is repeated throughout a large part of
the codebase.
--
to do it myself. :)
--
in on %d buffers\n"?
(Happy to do the patch either way, but I'd like us to decide what we're doing
first. ;)
--
3 Tsearch parser cache|4
1 TupleHashTable| hash
97011 TupleHashTable|8
18045 Type information cache|4
2 db hash|4
52 json object hashtable|64
28958 smgr relation table|16
--
diff --git a
On 12/18/14, 5:00 PM, Jim Nasby wrote:
2201582 20 -- Mostly LOCALLOCK and Shared Buffer
Started looking into this; perhaps https://code.google.com/p/fast-hash/ would
be worth looking at, though it requires uint64.
It also occurs to me that we're needlessly shoving a lot of 0's int
On 12/19/14, 5:13 PM, Tom Lane wrote:
Jim Nasby writes:
On 12/18/14, 5:00 PM, Jim Nasby wrote:
2201582 20 -- Mostly LOCALLOCK and Shared Buffer
Started looking into this; perhaps https://code.google.com/p/fast-hash/ would
be worth looking at, though it requires uint64.
It also occurs
e value.
git does allow you to revise a commit message; it just makes downstream pulls
uglier if the commit was already pushed (see
https://help.github.com/articles/changing-a-commit-message/). It might be
possible to minimize or even eliminate that pain via git hooks.
--
On 12/20/14, 11:51 AM, Tom Lane wrote:
Andres Freund writes:
On 2014-12-19 22:03:55 -0600, Jim Nasby wrote:
What I am thinking is not using all of those fields in their raw form to
calculate the hash value. IE: something analogous to:
hash_any(SharedBufHash, (rot(forkNum, 2) | dbNode
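The idea sketched above, packing the tag fields into a compact key before hashing rather than feeding the raw padded struct to hash_any, might look like this. A hypothetical Python illustration only: buffer_tag_hash and the use of blake2b are my inventions for the sketch, not PostgreSQL code.

```python
import hashlib
import struct

def buffer_tag_hash(db_node: int, fork_num: int, block_num: int) -> int:
    # Pack the fields with no alignment padding ("<" disables it), so
    # zeroed padding bytes never dilute the hash input.
    packed = struct.pack("<IBI", db_node, fork_num, block_num)
    # Reduce to a 64-bit value via an 8-byte digest.
    return int.from_bytes(hashlib.blake2b(packed, digest_size=8).digest(), "little")

h = buffer_tag_hash(16384, 0, 42)
```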
n the above tests, it seems to me that the maximum benefit due to
'a' is realized upto 4~8 workers
I'd think a good first estimate here would be to just use
effective_io_concurrency.
--
're going to go that route, then perhaps it would make more sense to
create a command that allows you to apply a second command to every object in a
schema. We would have to be careful about PreventTransactionChain commands.
--
MemoryContext rcontext from
accumArrayResult. Currently, the code isn't using the rcontext for anything
except for old API calls (in first call to accumArrayResult).
Until we eliminate the API though, we should leave something in place that
still uses the old one, to make certain we do
pecial-case executing
non-transactional commands dynamically, because VACUUM isn't the only one that
suffers from this problem.
--
n't be in a
transaction.
--
how about instead of solving this only for vacuum we create something
generic? :) Possibly using Robert's background worker work?
--
On 12/23/14, 7:44 AM, Robert Haas wrote:
On Mon, Dec 22, 2014 at 5:00 PM, Jim Nasby wrote:
I would MUCH rather that we find a way to special-case executing
non-transactional commands dynamically, because VACUUM isn't the only one
that suffers from this problem.
Is pg_background a soluti
On 12/20/14, 2:13 PM, Jim Nasby wrote:
On 12/20/14, 11:51 AM, Tom Lane wrote:
Andres Freund writes:
On 2014-12-19 22:03:55 -0600, Jim Nasby wrote:
What I am thinking is not using all of those fields in their raw form to
calculate the hash value. IE: something analogous to:
hash_any
On 12/24/14, 12:27 AM, Jim Nasby wrote:
There were several select-only runs on both to warm shared_buffers (set to
512MB for this test, and fsync is off).
BTW, presumably this ~380M database isn't big enough to show any problems with
hash collisions, and I'm guessing you'
On 12/23/14, 8:49 PM, Fabrízio de Royes Mello wrote:
On Tuesday, December 23, 2014, Jim Nasby <jim.na...@bluetreble.com> wrote:
On 12/23/14, 8:54 AM, Fabrízio de Royes Mello wrote:
> Right now a lot of people just work around this with things like D
On 12/24/14, 10:58 AM, Tom Lane wrote:
Andres Freund writes:
On 2014-12-24 00:27:39 -0600, Jim Nasby wrote:
pgbench -S -T10 -c 4 -j 4
master:
tps = 9556.356145 (excluding connections establishing)
tps = 9897.324917 (excluding connections establishing)
tps = 9287.286907 (excluding connections
each line that tells you the encoding for that entry?
--
k of a tapesort being faster than an
internal sort.
--
iggers to enforce RI rather than defining FKs precisely so that
they can get a serialization failure return code and do automatic
retry if it is caused by a race condition. That's less practical
to compensate for when it comes to unique indexes or constraints.
Wow, that's horrible. :(
--
"DUMP" and BYPASSRLS then I think we need to call DUMP
something else. Otherwise, it's a massive foot-gun; you get a "successful" backup only to
find out it contains only a small part of the database.
My how this has become a can of worms...
--
bly faster than SELECT 1, but it would prevent a bunch of
pointless work on the backend side, and should greatly simplify DBD's ping().
Only thing I'm not sure of is if this could be made to be safe within a COPY...
:(
--
ting the
row_security GUC to 'off', in which case you'll get an error if you hit
a table that has RLS enabled and you don't have BYPASSRLS. If you're
not checking for errors from pg_dump, well, there's a lot of ways you
could run into problems.
This also indica
On 12/29/14, 7:40 PM, Craig Ringer wrote:
On 12/30/2014 06:39 AM, Jim Nasby wrote:
How much of this issue is caused by trying to machine-parse log files?
Is a better option to improve that case, possibly doing something like
including a field in each line that tells you the encoding for that
it'd be easier to do
something like:
UPDATE table1 SET ...
WHERE ctid >= (SELECT ('(' || relpages || ',0)')::tid FROM pg_class WHERE oid =
'table1'::regclass)
;
in some kind of loop.
Obviously better to only handle what you already are then not get this in at
all
. For example, I've got some code
that's looking at fcinfo->flinfo->fn_expr, and I have no idea how likely that is to get
broken. Since it's a parse node, my guess is "likely", but I'm just guessing.
--
ing int32, 64 and 128 (if
needed?), and changing docs as needed?
Presumably that would be best as a separate patch...
--
ere is the
ability to feed tuples to more than one node simultaneously? That would allow
things like
GroupAggregate
  --> Sort(a)  \
               +--> Sort(a,b) -\
  --> Hash(b)  +                \--> SeqScan
That would allow the planner to trade off things like total
ge would affect very, very few users.
Also, note that I'm not talking about removing anything yet; that would come
later.
--
ly inlined and is fast. The SQL
function is significantly slower.
--