Noah Misch n...@leadboat.com writes:
On Wed, Aug 13, 2014 at 10:21:50AM -0400, Tom Lane wrote:
I believe that we deal with this by the expedient of checking the lengths
of tablespace paths in advance, when the tablespace is created.
The files under scrutiny here are not located in a
On Tue, Aug 12, 2014 at 11:47 PM, MauMau maumau...@gmail.com wrote:
From: Michael Paquier michael.paqu...@gmail.com
Yes, the build succeeded. I confirmed that the following files have version
info. However, unlike other files, they don't have file description. Is
this intended?
On 08/14/2014 06:53 AM, Joachim Wieland wrote:
I'm seeing an assertion failure with pg_dump -c --if-exists which is
not ready to handle BLOBs it seems:
pg_dump: pg_backup_archiver.c:472: RestoreArchive: Assertion `mark !=
((void *)0)' failed.
To reproduce:
$ createdb test
$ pg_dump -c
I think the hash-join like approach is reasonable, but I also think
you're going to run into a lot of challenges that make it more complex
for HashAgg. For instance, let's say you have the query:
SELECT x, array_agg(y) FROM foo GROUP BY x;
Say the transition state is an array (for the sake of
On Thu, Aug 14, 2014 at 3:04 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Heikki Linnakangas wrote:
What's with XLogReplayLSN and XLogReplayRecord? IMO the xlog code has
more than enough global variables already, it'd be good to avoid two
more if possible. Is there no other good way to
Hi,
I'd like to propose to add new option --immediate to pg_ctl promote.
When this option is set, recovery ignores any WAL data which have not
been replayed yet and exits immediately. Patch attached.
This promotion is faster than the normal one but can cause data loss. So
it's useful if we want to
On Thu, Aug 14, 2014 at 2:05 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Here's a full version of this refactoring patch, all the rmgr's have now
been updated to use XLogReplayBuffer(). I think this is a worthwhile change
on its own, even if we drop the ball on the rest of the WAL
On 12 August 2014 20:09, Kevin Grittner kgri...@ymail.com wrote:
Amit Khandekar amit.khande...@enterprisedb.com wrote:
On 7 August 2014 19:49, Kevin Grittner kgri...@ymail.com wrote:
Amit Khandekar amit.khande...@enterprisedb.com wrote:
I tried to google some SQLs that use REFERENCING clause
On 08/14/2014 04:01 AM, Tom Lane wrote:
I wrote:
That's a fair question. I did a very very simple hack to replace the item
offsets with item lengths -- turns out that that mostly requires removing
some code that changes lengths to offsets ;-). I then loaded up Larry's
example of a
On 08/14/2014 10:29 AM, Michael Paquier wrote:
On Thu, Aug 14, 2014 at 3:04 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Heikki Linnakangas wrote:
What's with XLogReplayLSN and XLogReplayRecord? IMO the xlog code has
more than enough global variables already, it'd be good to avoid two
On 08/14/2014 11:22 AM, Michael Paquier wrote:
1) Why changing that from ERROR to PANIC?
/* Caller specified a bogus block_index */
- elog(ERROR, "failed to restore block_index %d", block_index);
+ elog(PANIC, "failed to restore block_index %d", block_index);
No reason, just a
On 2014-08-14 12:41:35 +0300, Heikki Linnakangas wrote:
On 08/14/2014 10:29 AM, Michael Paquier wrote:
On Thu, Aug 14, 2014 at 3:04 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Heikki Linnakangas wrote:
What's with XLogReplayLSN and XLogReplayRecord? IMO the xlog code has
more than
The execution level
itself was almost trivial; it's getting the tuplestore reference
through the parse analysis and planning phases that is painful for
me.
I am not sure why you think we would need to refer the tuplestore in
the parse analysis and planner phases. It seems that we would need
On 08/14/2014 02:45 AM, Peter Geoghegan wrote:
On Wed, Aug 13, 2014 at 3:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
If the bit is unused now, should we be worrying about reclaiming it for
better use? Like say allowing jsonb's to be larger than just a quarter
of the maximum datum size?
Commit
On Wed, Aug 13, 2014 at 4:10 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Wed, Aug 13, 2014 at 2:10 PM, Fujii Masao masao.fu...@gmail.com wrote:
I sent the SIGSTOP signal to the walreceiver process in one of sync standbys,
and then ran write transactions again. In this case, they
On Sat, Jul 12, 2014 at 2:19 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Sawada Masahiko wrote:
As you said, if line number reached UINT_MAX then I think that this
case is too strange.
I think INT_MAX is enough for line number.
My point is not whether 2 billion is a better number
I confirmed that all issues are solved. The patch content looks good,
although I'm not confident in Perl. I marked this patch as ready for
committer. I didn't try the patch on MinGW.
Regards
MauMau
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to
Add --limit to limit latency under throttling
Under throttling, transactions are scheduled for execution at certain
times. Transactions may be far behind schedule and the system may catch up
with the load later. This option allows changing this behavior by
skipping transactions which are
Heikki Linnakangas wrote:
On 08/14/2014 06:53 AM, Joachim Wieland wrote:
I'm seeing an assertion failure with pg_dump -c --if-exists which is
not ready to handle BLOBs it seems:
pg_dump: pg_backup_archiver.c:472: RestoreArchive: Assertion `mark !=
((void *)0)' failed.
To reproduce:
$
2014-08-14 15:11 GMT+02:00 Alvaro Herrera alvhe...@2ndquadrant.com:
Heikki Linnakangas wrote:
On 08/14/2014 06:53 AM, Joachim Wieland wrote:
I'm seeing an assertion failure with pg_dump -c --if-exists which is
not ready to handle BLOBs it seems:
pg_dump: pg_backup_archiver.c:472:
Hi,
On Thu, Aug 14, 2014 at 10:05 AM, Etsuro Fujita fujita.ets...@lab.ntt.co.jp
wrote:
(2014/08/08 18:51), Etsuro Fujita wrote:
(2014/06/30 22:48), Tom Lane wrote:
I wonder whether it isn't time to change that. It was coded like that
originally only because calculating the values
Heikki Linnakangas wrote:
The quick fix would be to add an exception for blobs, close to where
Assert is. There are a few exceptions there already. A cleaner
solution would be to add a new argument to ArchiveEntry and make the
callers responsible for providing a DROP IF EXISTS query, but
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Thu, Aug 14, 2014 at 5:56 AM, Stephen Frost sfr...@snowman.net wrote:
Regarding this, I'm generally in the camp that says to just include it
in 'all' and be done with it- for now.
Okay, but tomorrow if someone wants to implement a
Jeff Davis pg...@j-davis.com writes:
I think the hash-join like approach is reasonable, but I also think
you're going to run into a lot of challenges that make it more complex
for HashAgg. For instance, let's say you have the query:
SELECT x, array_agg(y) FROM foo GROUP BY x;
Say the
On Mon, May 20, 2013 at 7:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Janes jeff.ja...@gmail.com writes:
I'd like to run same query repeatedly and see how long it takes each time.
I thought \watch would be excellent for this, but it turns out that using
\watch suppresses the output of
Fujii Masao masao.fu...@gmail.com writes:
I'd like to propose to add new option --immediate to pg_ctl promote.
When this option is set, recovery ignores any WAL data which have not
been replayed yet and exits immediately. Patch attached.
This promotion is faster than the normal one but can cause
On 14 Srpen 2014, 9:22, Jeff Davis wrote:
I think the hash-join like approach is reasonable, but I also think
you're going to run into a lot of challenges that make it more complex
for HashAgg. For instance, let's say you have the query:
SELECT x, array_agg(y) FROM foo GROUP BY x;
Say the
Heikki Linnakangas hlinnakan...@vmware.com writes:
For comparison, here's a patch that implements the scheme that Alexander
Korotkov suggested, where we store an offset every 8th element, and a
length in the others. It compresses Larry's example to 525 bytes.
Increasing the stride from 8 to
On Thu, 2014-08-14 at 10:06 -0400, Tom Lane wrote:
If you're following the HashJoin model, then what you do is the same thing
it does: you write the input tuple back out to the pending batch file for
the hash partition that now contains key 1001, whence it will be processed
when you get to
On Thursday, August 14, 2014, Jeff Davis pg...@j-davis.com wrote:
On Thu, 2014-08-14 at 10:06 -0400, Tom Lane wrote:
If you're following the HashJoin model, then what you do is the same
thing
it does: you write the input tuple back out to the pending batch file for
the hash partition that
Tom and Robert,
I tried a combination of PIPE lock and file lock (fcntl) as Tom had suggested.
Attached experimental patch has this logic...
Postmaster :
- get exclusive fcntl lock (to guard against race condition in PIPE-based lock)
- check PIPE for any existing readers
- open PIPE for read
On Wed, Aug 13, 2014 at 09:01:43PM -0400, Tom Lane wrote:
I wrote:
That's a fair question. I did a very very simple hack to replace the item
offsets with item lengths -- turns out that that mostly requires removing
some code that changes lengths to offsets ;-). I then loaded up Larry's
Jeff Davis pg...@j-davis.com writes:
HashJoin only deals with tuples. With HashAgg, you have to deal with a
mix of tuples and partially-computed aggregate state values. Not
impossible, but it is a little more awkward than HashJoin.
Not sure that I follow your point. You're going to have to
On Thu, Aug 7, 2014 at 4:19 PM, Peter Geoghegan p...@heroku.com wrote:
On Thu, Aug 7, 2014 at 12:17 PM, Robert Haas robertmh...@gmail.com wrote:
Gah. Hit send too soon. Also, as much as I'd prefer to avoid
relitigating the absolutely stupid debate about how to expand the
buffers, this is no
On Sat, Aug 9, 2014 at 12:54 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-08-07 21:02:54 -0400, Tom Lane wrote:
Craig Ringer cr...@2ndquadrant.com writes:
On 08/08/2014 03:54 AM, Tom Lane wrote:
FWIW, I think it's a seriously bad idea to expose LSNs in the protocol
at all.
Bruce Momjian br...@momjian.us writes:
Uh, can we get compression for actual documents, rather than duplicate
strings?
[ shrug... ] What's your proposed set of actual documents?
I don't think we have any corpus of JSON docs that are all large
enough to need compression.
This gets back to the
On 14 Srpen 2014, 18:12, Tom Lane wrote:
Jeff Davis pg...@j-davis.com writes:
HashJoin only deals with tuples. With HashAgg, you have to deal with a
mix of tuples and partially-computed aggregate state values. Not
impossible, but it is a little more awkward than HashJoin.
Not sure that I
On 14 Srpen 2014, 18:02, Atri Sharma wrote:
On Thursday, August 14, 2014, Jeff Davis pg...@j-davis.com wrote:
On Thu, 2014-08-14 at 10:06 -0400, Tom Lane wrote:
If you're following the HashJoin model, then what you do is the same
thing
it does: you write the input tuple back out to the
On Thu, Aug 14, 2014 at 12:22:46PM -0400, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Uh, can we get compression for actual documents, rather than duplicate
strings?
[ shrug... ] What's your proposed set of actual documents?
I don't think we have any corpus of JSON docs that
On Fri, Aug 8, 2014 at 4:16 AM, Jeff Davis pg...@j-davis.com wrote:
I wasn't able to reproduce your results on my machine. At -s 300, with
maintenance_work_mem set high enough to do internal sort, it took about
40s and I heard some disk activity, so I didn't think it was a valid
result. I went
Tomas Vondra t...@fuzzy.cz writes:
On 14 Srpen 2014, 18:12, Tom Lane wrote:
Not sure that I follow your point. You're going to have to deal with that
no matter what, no?
That is not how the patch works. Once the memory consumption hits work_mem,
it keeps the already existing groups in
On Thu, 2014-08-14 at 16:17 +0200, Tomas Vondra wrote:
Either it belongs to the current batch (and either it's in the hash
table, or you add it there), or it's not - in that case write it to a
temp file.
I think the part you left out is that you need two files per batch: one
for the dumped-out
On Mon, Aug 11, 2014 at 3:20 AM, Fujii Masao masao.fu...@gmail.com wrote:
There is no extra spinlock.
The version I reviewed had one; that's what I was objecting to.
Might need to add some pg_read_barrier() and pg_write_barrier() calls,
since we have those now.
--
Robert Haas
EnterpriseDB:
Hi,
I found that only CREATE_REPLICATION_SLOT of replication commands
is exposed as an index term in the document. Is this intentional?
If not, for the consistency, I think that we should either expose other
replication commands also as index terms, or remove
CREATE_REPLICATION_SLOT from the
On Thu, Aug 14, 2014 at 11:52 AM, Bruce Momjian br...@momjian.us wrote:
On Thu, Aug 14, 2014 at 12:22:46PM -0400, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Uh, can we get compression for actual documents, rather than duplicate
strings?
[ shrug... ] What's your proposed set of
On Thu, Aug 14, 2014 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
Committed that way. As the patch is by and large the same as what I
submitted for this originally, I credited myself as first author and
you as second author. I hope that seems fair.
I think that's more than fair.
On Fri, Aug 15, 2014 at 1:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11, 2014 at 3:20 AM, Fujii Masao masao.fu...@gmail.com wrote:
There is no extra spinlock.
The version I reviewed had one; that's what I was objecting to.
Sorry for confusing you. I posted the latest patch to
Bruce Momjian br...@momjian.us writes:
On Thu, Aug 14, 2014 at 12:22:46PM -0400, Tom Lane wrote:
This gets back to the problem of what test case are we going to consider
while debating what solution to adopt.
Uh, we just need one 12k JSON document from somewhere. Clearly this
is
On Thu, Aug 14, 2014 at 10:57 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Maybe this is telling us it's not worth changing the representation,
and we should just go do something about the first_success_by threshold
and be done. I'm hesitant to draw such conclusions on the basis of a
single
On Thu, Aug 14, 2014 at 10:21 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 14 Srpen 2014, 18:02, Atri Sharma wrote:
On Thursday, August 14, 2014, Jeff Davis pg...@j-davis.com wrote:
On Thu, 2014-08-14 at 10:06 -0400, Tom Lane wrote:
If you're following the HashJoin model, then what you do
On Thu, Aug 14, 2014 at 01:57:14PM -0400, Tom Lane wrote:
Maybe this is telling us it's not worth changing the representation,
and we should just go do something about the first_success_by threshold
and be done. I'm hesitant to draw such conclusions on the basis of a
single use-case though,
On Tue, Aug 12, 2014 at 1:52 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
This isn't a showstopper, but needs some thought. As the patch stands, it
uses a single key container called "PostgreSQL server key container", and
makes no attempt to delete the keys after they're no longer used.
On Thu, Aug 14, 2014 at 1:51 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Aug 15, 2014 at 1:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11, 2014 at 3:20 AM, Fujii Masao masao.fu...@gmail.com wrote:
There is no extra spinlock.
The version I reviewed had one; that's what
On 08/14/2014 11:13 AM, Bruce Momjian wrote:
On Thu, Aug 14, 2014 at 01:57:14PM -0400, Tom Lane wrote:
Maybe this is telling us it's not worth changing the representation,
and we should just go do something about the first_success_by threshold
and be done. I'm hesitant to draw such
On 2014-08-14 14:19:13 -0400, Robert Haas wrote:
On Thu, Aug 14, 2014 at 1:51 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Aug 15, 2014 at 1:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11, 2014 at 3:20 AM, Fujii Masao masao.fu...@gmail.com wrote:
There is no extra
On Thu, 2014-08-14 at 12:53 -0400, Tom Lane wrote:
Oh? So if we have aggregates like array_agg whose memory footprint
increases over time, the patch completely fails to avoid bloat?
Yes, in its current form.
I might think a patch with such a limitation was useful, if it weren't
for the fact
I did quick test on the same bookmarks to test performance of 9.4beta2 and
9.4beta2+patch
The query was the same we used in pgcon presentation:
SELECT count(*) FROM jb WHERE jb @> '{"tags": [{"term": "NYC"}]}'::jsonb;
table size | time (ms)
9.4beta2:1374 Mb |
On Thu, Aug 14, 2014 at 2:21 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-08-14 14:19:13 -0400, Robert Haas wrote:
On Thu, Aug 14, 2014 at 1:51 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Aug 15, 2014 at 1:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11,
On Thu, Aug 14, 2014 at 1:24 PM, Peter Geoghegan p...@heroku.com wrote:
On Thu, Aug 14, 2014 at 9:13 AM, Robert Haas robertmh...@gmail.com wrote:
Committed that way. As the patch is by and large the same as what I
submitted for this originally, I credited myself as first author and
you as
On 2014-08-14 14:37:22 -0400, Robert Haas wrote:
On Thu, Aug 14, 2014 at 2:21 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-08-14 14:19:13 -0400, Robert Haas wrote:
That's about the idea. However, what you've got there is actually
unsafe, because shmem->counter++ is not an atomic
On Thu, Aug 14, 2014 at 11:38 AM, Robert Haas robertmh...@gmail.com wrote:
Great. BTW, I notice to my chagrin that 'reindex table
some_table_with_an_indexed_text_column' doesn't benefit from this,
apparently because tuplesort_begin_index_btree is used, and it knows
nothing about sortsupport.
On Tue, Aug 12, 2014 at 4:02 PM, Fabien COELHO coe...@cri.ensmp.fr wrote:
Yet another very minor typo in pgbench doc.
I'm not sure of the best way to show formula in the doc.
When I built this, it left a space between the formula and the period.
Fixed that and committed.
--
Robert Haas
On Thu, Aug 14, 2014 at 3:49 PM, Larry White ljw1...@gmail.com wrote:
I attached a json file of approximately 513K. It contains two repetitions of
a single json structure. The values are quasi-random. It might make a decent
test case of meaningfully sized data.
I have a 59M in plain SQL (10M
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila amit.kapil...@gmail.com wrote:
1.
+Number of parallel connections to perform the operation. This
option will enable the vacuum
+operation to run on parallel connections, at a time one table will
be operated on one connection.
a.
On Thu, Aug 14, 2014 at 10:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Fujii Masao masao.fu...@gmail.com writes:
I'd like to propose to add new option --immediate to pg_ctl promote.
When this option is set, recovery ignores any WAL data which have not
been replayed yet and exits immediately.
On Wed, Aug 13, 2014 at 6:42 AM, Fujii Masao masao.fu...@gmail.com wrote:
I'd propose the attached WIP patch which allows us to enable WAL archiving
even in standby. The patch adds "always" as a valid value of archive_mode.
If it's set to "always", the archiver is started when the server is in
On Wed, Aug 13, 2014 at 9:33 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Stephen Frost wrote:
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
Stephen Frost wrote:
Alright, sounds like this is more-or-less the consensus. I'll see about
making it happen shortly.
Were you
On Thu, Aug 14, 2014 at 4:27 PM, Robert Haas robertmh...@gmail.com wrote:
We already have the facilities to stop replay at a defined place. But
then what? Without this patch, do we tell the customer to stop
replay, do a pg_dump of the whole database, and restore it into a new
database?
On 14.8.2014 18:54, Jeff Davis wrote:
On Thu, 2014-08-14 at 16:17 +0200, Tomas Vondra wrote:
Either it belongs to the current batch (and either it's in the hash
table, or you add it there), or it's not - in that case write it to a
temp file.
I think the part you left out is that you need
On Thu, Aug 14, 2014 at 11:55 AM, Peter Geoghegan p...@heroku.com wrote:
Clearly there are still benefits to be had for cluster and B-Tree
tuplesorts.
In a world where this general support exists, abbreviated keys could
be made to work with both of those, but not datum tuplesorts, because
that
On Thu, Aug 14, 2014 at 12:59 PM, Fujii Masao masao.fu...@gmail.com wrote:
Since I sometimes try to search the replication command in the index,
I'd feel inclined to expose all those commands as index terms...
+1.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise
I hit the following on 9.4 testing logical decoding.
TRAP: FailedAssertion("!(prev_first_lsn < cur_txn->first_lsn)", File:
"reorderbuffer.c", Line: 618)
LOG: server process (PID 3801) was terminated by signal 6: Aborted
Unfortunately I don't have a core file and I haven't been able to
reproduce
Fabrízio de Royes Mello fabriziome...@gmail.com wrote:
Robert Haas robertmh...@gmail.com wrote:
We already have the facilities to stop replay at a defined
place. But then what? Without this patch, do we tell the
customer to stop replay, do a pg_dump of the whole database, and
restore it
On Thu, Aug 14, 2014 at 4:24 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Aug 14, 2014 at 3:49 PM, Larry White ljw1...@gmail.com wrote:
I attached a json file of approximately 513K. It contains two repetitions of
a single json structure. The values are quasi-random. It might make a
Greetings,
Based on the feedback on my previous patch, I've separated only the
LIMIT part into its own feature. This version plays nicely with
inheritance. The intended use is splitting up big UPDATEs and DELETEs
into batches more easily and efficiently.
♜
*** a/doc/src/sgml/ref/delete.sgml
Peter Geoghegan p...@heroku.com writes:
On Thu, Aug 14, 2014 at 10:57 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Maybe this is telling us it's not worth changing the representation,
and we should just go do something about the first_success_by threshold
and be done. I'm hesitant to draw such
On 15/08/14 09:47, Tom Lane wrote:
Peter Geoghegan p...@heroku.com writes:
On Thu, Aug 14, 2014 at 10:57 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Maybe this is telling us it's not worth changing the representation,
and we should just go do something about the first_success_by threshold
and be
Robert Haas robertmh...@gmail.com writes:
We really should not be making changes of this type less than a month
from our ostensible release date. That is not enough time for us to
notice if the changes turn out to be not as good as we think they are.
If it weren't for the fact that we'll be
* Tom Lane (t...@sss.pgh.pa.us) wrote:
Robert Haas robertmh...@gmail.com writes:
We really should not be making changes of this type less than a month
from our ostensible release date. That is not enough time for us to
notice if the changes turn out to be not as good as we think they are.
Amit Khandekar amit.khande...@enterprisedb.com wrote:
The execution level itself was almost trivial; it's getting the
tuplestore reference through the parse analysis and planning
phases that is painful for me.
I am not sure why you think we would need to refer the
tuplestore in the parse
So, here's a destruction test case:
200,000 JSON values (plus 2 key columns)
Average width 4K (+/- 1K)
183 keys per JSON value
keys 10 to 30 characters
105 float values
70 integer values
8 text and date values
no nesting
The jsonic table is JSON
The
Josh Berkus j...@agliodbs.com writes:
So, here's a destruction test case:
200,000 JSON values (plus 2 key columns)
Average width 4K (+/- 1K)
183 keys per JSON value
Is that 183 keys exactly each time, or is 183 the average?
If so, what's the min/max number of keys?
I ask because 183 would be
On 08/14/2014 04:02 PM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
So, here's a destruction test case:
200,000 JSON values (plus 2 key columns)
Average width 4K (+/- 1K)
183 keys per JSON value
Is that 183 keys exactly each time, or is 183 the average?
Each time exactly.
It
Josh Berkus j...@agliodbs.com writes:
On 08/14/2014 04:02 PM, Tom Lane wrote:
It would be useful to see min/max/avg of pg_column_size() in both
these cases.
Well, this is 9.4, so I can do better than that. How about quartiles?
thetype |colsize_distribution
On 14.8.2014 21:47, Tomas Vondra wrote:
On 14.8.2014 18:54, Jeff Davis wrote:
On Thu, 2014-08-14 at 16:17 +0200, Tomas Vondra wrote:
Either it belongs to the current batch (and either it's in the hash
table, or you add it there), or it's not - in that case write it to a
temp file.
I think
What's more, it looks like the jsonb data is pretty much never getting
compressed --- the min is too high for that. So I'm guessing that this
example is mostly about the first_success_by threshold preventing any
compression from happening. Please, before looking at my other patch,
try
On 08/14/2014 04:47 PM, Josh Berkus wrote:
 thetype |    colsize_distribution
---------+-----------------------------
 json    | {1777,1803,1890,1940,4424}
 jsonb   | {5902,5926,5978,6002,6208}
Just realized my query was counting the whole row size instead of just
the column size. Here's just
On 08/15/2014 12:21 AM, Robert Haas wrote:
On Sat, Aug 9, 2014 at 12:54 PM, Andres Freund and...@2ndquadrant.com wrote:
I don't have a very clear idea whether this is a good idea in any form
because I can't quite imagine how this is going to be used by the
client without adding an unwarranted
On 08/14/2014 05:45 PM, Craig Ringer wrote:
Wouldn't that force client drivers - libpq, psqlODBC, PgJDBC, etc - to
all watch for explicit COMMITs sent by the application and rewrite them?
Realistically, users are going to need new drivers to take advantage of
any automated connection failover
Before changing to INT_MAX:
 thetype |    colsize_distribution
---------+-----------------------------
 json    | {1741,1767,1854,1904,2292}
 jsonb   | {3551,5866,5910,5958,6168}
After:
 thetype |    colsize_distribution
---------+-----------------------------
 json    |
On 5/6/14 10:20 PM, Peter Eisentraut wrote:
Agreed. I have committed the SGML changes that make things valid now,
but I will postpone the xmllint addition until the 9.5 branch, complete
with more documentation.
Per the above announcement, here is an updated patch, also with more
documentation
Josh Berkus j...@agliodbs.com writes:
And with Tom's test patch:
...
Since that improved things a *lot*, just +40% instead of +200%, I
thought I'd test some select queries.
That test patch is not meant to be fast, its ambition was only to see
what the effects on storage size would be. So I
On Fri, Aug 15, 2014 at 4:59 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Aug 14, 2014 at 12:59 PM, Fujii Masao masao.fu...@gmail.com wrote:
Since I sometimes try to search the replication command in the index,
I'd feel inclined to expose all those commands as index terms...
+1.
Peter Eisentraut pete...@gmx.net writes:
It would especially be valuable if someone with a different-than-mine OS
would verify whether they can install xmllint according to the
documentation, or update the documentation if not.
FWIW, xmllint appears to be part of the base libxml2 package on
Hello, I made a set of patches to fix this issue.
The attached files are the followings,
0001_Add_PGC_BACKEND_USERSET_v0.patch:
Add new GUC category PGC_BACKEND_USERSET and change
local_preload_libraries to use it.
0002_dblink_follow_change_of_set_config_options_v0.patch:
On Wed, Aug 13, 2014 at 9:20 PM, MauMau maumau...@gmail.com wrote:
From: Amit Kapila amit.kapil...@gmail.com
During my recent work on pg_basebackup, I noticed that
-T option doesn't seem to work on Windows.
The reason for the same is that while updating symlinks
it doesn't consider that on
And here are some comments about patch 2:
- Patch applies with some hunks.
- Some typos are present
s#memory segments..#memory segments. (double dots)
s#NULL#literalNULL/ (in the docs as this refers to a value)
- Your thoughts about providing separate patches for each view? What
this patch does is
make check-world creates a temporary installation in every subdirectory
it runs a test in, which is stupid: it's very slow and uses a lot of
disk space. It's enough to do this once per run. That is the essence
of what I have implemented. It cuts the time for make check-world in
half or less,