tLockTableWait(), and it seems that it
is used mostly with
xids from the heap, so the transaction definitely set its lock somewhere in the past.
I'm not sure what the best approach to handle that is. Maybe write running xacts
only if they have already
set their lock?
Also attaching pgbench script that ca
ter to just forbid preparing such transactions.
Otherwise, if realistic
examples that can block decoding actually exist, then we probably need to
reconsider the way
transactions are decoded. Anyway, this part probably needs Andres' blessing.
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
ther isolation nor atomicity. And if one
isn’t doing cross-node analytical transactions it will be safe to live without
isolation.
But living without atomicity means that some parts of the data can be lost with no
simple way to detect and fix that.
stack, waiter
> stack
> - ...
>
> I think it might be interesting to collect a few of these somewhere
> centrally once halfway mature. Maybe in src/tools or such.
Wow, that’s extremely helpful, thanks a lot.
not fsync file
"pg_logical/mappings/map-4000-4df-0_A4EA29F8-5aa5-5ae6": Too many open files in
system
I’m not sure whether this boils down to some of the previous issues
mentioned here or not, so I'm posting it
here as an observation.
plies), not only some messages.
>
> Committed that.
>
>> Also, perhaps ApplyMessageContext should be a child of
>> TopTransactionContext. (You have it as a child of ApplyContext, which
>> is under TopMemoryContext.)
>
> Left that as is.
Thanks!
> On 20 Apr 2017, at 17:01, Dilip Kumar wrote:
>
> On Thu, Apr 20, 2017 at 7:04 PM, Stas Kelvich
> wrote:
>> Thanks for noting.
>>
>> Added short description of ApplyContext and ApplyMessageContext to README.
>
> Typo
>
> /analysys/analysis
>
> On 19 Apr 2017, at 16:07, Alvaro Herrera wrote:
>
> Stas Kelvich wrote:
>
>> With patch MemoryContextStats() shows following hierarchy during slot
>> operations in
>> apply worker:
>>
>> TopMemoryContext: 83824 total in 5 blocks; 9224 free (8
> On 19 Apr 2017, at 14:30, Petr Jelinek wrote:
>
> On 19/04/17 12:46, Stas Kelvich wrote:
>>
>> Right now ApplyContext is cleaned after each transaction, and by this patch I
>> basically
>> suggest cleaning it after each command counter increment.
>
ll reset at the end of each
function involved.
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> On 19 Apr 2017, at 12:37, Petr Jelinek wrote:
>
> On 18/04/17 13:45, Stas Kelvich wrote:
>> Hi,
>>
>> currently logical replication worker uses ApplyContext to decode received
>> data
>> and that context is never freed during transaction processi
.
applycontext_bloat.patch
Description: Binary data
ilure, this happens under the tablesync worker, and putting
pgstat_report_stat() under the previous condition block should help.
However, it took me about an hour of running this script to catch the original
assert.
Can you check with that patch applied?
logical_worker.patch
Description: Binary data
> Stas, I thought this patch was very important to you, yet two releases
> in a row we are too-late-and-buggy.
I’m looking at the pgstat issue in a nearby thread right now and will switch to this
shortly.
If possible, I’m asking to delay the revert for several days.
’t cancel the transaction. At least when
COPY is called outside of a transaction block.
> On 10 Apr 2017, at 19:50, Peter Eisentraut
> wrote:
>
> On 4/10/17 05:49, Stas Kelvich wrote:
>> Here is small patch to call statistics in logical worker. Originally i
>> thought that stat
>> collection during logical replication should manually account amounts
> On 10 Apr 2017, at 05:20, Noah Misch wrote:
>
> On Wed, Apr 05, 2017 at 05:02:18PM +0300, Stas Kelvich wrote:
>>> On 27 Mar 2017, at 18:59, Robert Haas wrote:
>>> On Mon, Mar 27, 2017 at 11:14 AM, Fujii Masao wrote:
>>>> Logical replicatio
eplication workers are collected.
>> For example, this can prevent autovacuum from working on
>> those tables properly.
>
> Yeah, that doesn't sound good.
It seems that nobody is working on this, so I’m going to create the patch.
such cases and it is hard to address
or argue about.
logical_twophase_v6.diff
Description: Binary data
logical_twophase_regresstest.diff
Description: Binary data
us
one that implements the logic I’ve just described. There is a runtest.sh script that
sets up postgres, runs a python logical consumer in the background and starts the
regression test.
logical_twophase_v5.diff
Descriptio
> On 28 Mar 2017, at 00:25, Andres Freund wrote:
>
> Hi,
>
> On 2017-03-28 00:19:29 +0300, Stas Kelvich wrote:
>> Ok, here it is.
>
> On a very quick skim, this doesn't seem to solve the issues around
> deadlocks of prepared transactions vs. catalog tables.
> On 28 Mar 2017, at 00:19, Stas Kelvich wrote:
>
> * It is actually doesn’t pass one of mine regression tests. I’ve added
> expected output
> as it should be. I’ll try to send follow up message with fix, but right now
> sending it
> as is, as you asked.
>
>
> On 27 Mar 2017, at 16:29, Craig Ringer wrote:
>
> On 27 March 2017 at 17:53, Stas Kelvich wrote:
>
>> I heavily underestimated the amount of changes there, but I have almost finished
>> and will send an updated patch in several hours.
>
> Oh, brilliant! Please post what
lised that it is not useful for the main
case when commit/abort is generated after the receiver side answers the
prepares. Also, that two-pass scan is a massive change in relcache.c and
genam.c (FWIW there were no problems with the cache, but some problems
with index scan and handling one-to-m
> On 23 Mar 2017, at 15:53, Craig Ringer wrote:
>
> On 23 March 2017 at 19:33, Alexey Kondratov
> wrote:
>
>> (1) Add errors handling to COPY as a minimum program
>
> Huge +1 if you can do it in an efficient way.
>
> I think the main barrier to doing so is that the naïve approach
> creates
> On 20 Mar 2017, at 16:39, Craig Ringer wrote:
>
> On 20 March 2017 at 20:57, Stas Kelvich wrote:
>>
>>> On 20 Mar 2017, at 15:17, Craig Ringer wrote:
>>>
>>>> I thought about having special field (or reusing one of the existing
>>>>
or
> something, to make it clear what we're doing.
Yes, that will be less confusing. However, there is no queue of any kind, so
SnapBuildStartPrepare / SnapBuildFinishPrepare should work too.
> --
> Craig Ringer http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7
shot struct to force filtering xmax > snap->xmax or xmin = snap->xmin
as Petr suggested. Then this logic can reside in ReorderBufferCommit().
However this is not solving problem with catcache, so I'm looking into it right
now.
> On 17 Mar 2017, at 05:38, Craig Ringer wrote:
>
of decoding the prepare
record we
already know that it is aborted, then such decoding doesn’t make a lot of sense.
IMO the intended usage of logical 2PC decoding is to decide about commit/abort based
on answers from logical subscribers/replicas. So there will be a barrier between
prepare and commit/abort and s
> On 16 Mar 2017, at 14:44, Craig Ringer wrote:
>
> I'm going to try to pick this patch up and amend its interface per our
> discussion earlier, see if I can get it committable.
I’m working right now on an issue with building snapshots for decoding prepared
transactions. I hope I'll send an updated patch later
rm prepare
decoding with some kind of copied-end-edited snapshot. I’ll have a look at this.
be, I’m failing to understand some points. Can we maybe set up a
Skype call to discuss this and post a summary here? Craig? Peter?
postmaster.c:1330
frame #14: 0x00010e76371f postgres`main(argc=3,
argv=0x7fbcabc02b90) + 751 at main.c:228
frame #15: 0x7fffa951c255 libdyld.dylib`start + 1
frame #16: 0x7fffa951c255 libdyld.dylib`start + 1
Patch with lacking initStringInfo() attached.
init_reply_me
With 194-byte GIDs the difference in WAL size is about 18%.
So using big GIDs (as J2EE does) can cause notable WAL bloat, while small
GIDs are almost unnoticeable.
Maybe we can introduce a configuration option track_commit_gid, by analogy with
track_commit_timestamp, and make that beh
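A sketch of how such an option might look in postgresql.conf. Note that track_commit_gid is hypothetical, the option proposed above by analogy with the existing track_commit_timestamp; it is not an actual PostgreSQL setting.

```ini
# Hypothetical GUC sketched above; does not exist in PostgreSQL.
track_commit_gid = on          # write the GID into commit records
track_commit_timestamp = on    # existing GUC used as the analogy
```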
/ABORT decoded and sent
After step 3 there is no more memory state associated with that prepared transaction,
so if we fail
between 3 and 4 then we can’t know the GID unless we wrote it to the commit record (or
a table).
sewhere.
Thanks Nikhil, now I get it. Since we are talking about promotion we are on a
different timescale, and a 1-10 second
lag matters a lot.
I think I have in mind a realistic scenario where the proposed recovery code path
will hit the worst case: Google Cloud.
They have quite fast storage, but fs
we
should invent
something nastier, like writing them into a table.
> That should eliminate Simon's
> objection re the cost of tracking GIDs and still let us have access to
> them when we want them, which is the best of both worlds really.
Having 2PC decoding in core is a good thi
last segment: 0x47
patched, with constant cache_drop:
total recovery time: 86s
patched, without constant cache_drop:
total recovery time: 68s
(while the difference is significant, I bet that happens mostly because database
file segments must be re-read after the cache drop)
master, without
then we don't need to add the
> GID.
Yes, that’s also possible but seems less flexible, restricting us to some
specific GID format.
Anyway, I can measure the WAL space overhead introduced by GIDs inside commit
records
to know exactly what the cost of such an appro
o any WAL records, nor to any in-memory structures.
The other part of the story is how to find the GID during decoding of the commit
prepared record.
I did that by adding a GID field to the commit WAL record, because by the time of
decoding
all memory structures that were holding the xid<->gid cor
logical replication.
[1]
https://www.postgresql.org/message-id/EE7452CA-3C39-4A0E-97EC-17A414972884%40postgrespro.ru
logical_twophase.diff
Description: Binary data
gments is linearly related to the volume of WAL between two
> checkpoints, so max_wal_size does not really matter. What matters is
> the time it takes to recover the same amount of WAL. Increasing
> max_wal_size would give more room to reduce the overall noise between
> two measureme
s well, but just
without
spending time on file creation.
ripped by our email system. You were a direct CC so
> you received it.
>
Then, probably, my mail client did something strange. I’ll check.
> On 27 Sep 2016, at 03:30, Michael Paquier wrote:
>
> OK. I am marking this patch as returned with feedback then. Looking
> forward to seeing the next investigations. At least this review has
> taught us one thing or two.
So, here is a brand new implementation of the same thing. Now in
live_tup = 3 instead of 0.
Fix along with test is attached.
2pc-stats.patch
Description: Binary data
. During commitfests CMake build system will
> be supported by me.
> I need help with the buildfarm because my knowledge of Perl is very bad (I thought
> about rewriting the buildfarm in Python).
>
> I hope for your support.
Tried to generate an Xcode project out of cmake; the build fail
> On 21 Sep 2016, at 10:32, Michael Paquier wrote:
>
> On Tue, Sep 20, 2016 at 11:13 PM, Stas Kelvich
> wrote:
>>
>> Putting that before actual WAL replay is just following historical order of
>> events.
>> Prepared files are pieces of WAL that happened be
that is possible even
without DSM: it is possible
to allocate a static sized array storing some info about each transaction, namely
whether it is in the WAL or in a file, its xid, and its gid.
Some sort of PGXACT doppelganger only for replay purposes, instead of using the
normal one.
So, taking my comments into account, what do you think? Should we kee
> On 07 Sep 2016, at 11:07, Stas Kelvich wrote:
>
>> On 07 Sep 2016, at 03:09, Michael Paquier wrote:
>>
>>>> On 06 Sep 2016, at 12:03, Michael Paquier
>>>> wrote:
>>>>
>>>> On Tue, Sep 6, 2016 at 5:58 PM, Stas Kelvich
https://www.postgresql.org/docs/current/static/gist-extensibility.html
hile looking at StandbyRecoverPreparedTransactions() I’ve noticed that the
buffer
for the 2PC file is allocated in TopMemoryContext but never freed. That has probably
existed
for a long time.
gidlen_fixes.diff
Description: Binary data
standby_recover_pfree.diff
Description: Binary data
> On 07 Sep 2016, at 03:09, Michael Paquier wrote:
>
>>> On 06 Sep 2016, at 12:03, Michael Paquier wrote:
>>>
>>> On Tue, Sep 6, 2016 at 5:58 PM, Stas Kelvich
>>> wrote:
>>>> Oh, I was preparing new version of patch, after fresh look
> On 06 Sep 2016, at 12:09, Simon Riggs wrote:
>
> On 6 September 2016 at 09:58, Stas Kelvich wrote:
>>
>> I'll check it against my failure scenario with subtransactions and post
>> results or updated patch here.
>
> Make sure tests are added for that.
> On 06 Sep 2016, at 04:41, Michael Paquier wrote:
>
> On Sat, Sep 3, 2016 at 10:26 PM, Michael Paquier
> wrote:
>> On Fri, Sep 2, 2016 at 5:06 AM, Simon Riggs wrote:
>>> On 13 April 2016 at 15:31, Stas Kelvich wrote:
>>>
>>>> Fixed patch at
> On 31 Aug 2016, at 03:28, Craig Ringer wrote:
>
> On 25 Aug. 2016 20:03, "Stas Kelvich" wrote:
> >
> > Thanks for clarification about how restart_lsn is working.
> >
> > Digging slightly deeper into this topic revealed that problem was in two
tput plugin, and current postgres master).
h_lsn
-----------+---------------
 0/1530EF8 | 7FFF/5E7F6A30
(1 row)

postgres=# select sent_location, write_location, flush_location,
replay_location from pg_stat_replication;
 sent_location | write_location | flush_location | replay_location
---------------+----------------+----------------+-----------------
everal times I’ve run into a situation where the provider's postmaster ignores
Ctrl-C until the subscriber
node is switched off.
* Patch with small typos fixed is attached.
I’ll do more testing, just wanted to share what I have so far.
typos.diff
Description: Binary data
ntion if we were able to easily rename the old functions.
But now that will just create another pattern on top of the three existing ones (no
prefix, ts_*, tsvector_*).
> On 04 May 2016, at 20:15, Tom Lane wrote:
>
> Stas Kelvich writes:
>>> On 04 May 2016, at 16:58, Tom Lane wrote:
>>> The other ones are not so problematic because they do not conflict with
>>> SQL keywords. It's only delete() and filter() that scar
> On 04 May 2016, at 16:58, Tom Lane wrote:
>
> Stas Kelvich writes:
>>> On 03 May 2016, at 00:59, David Fetter wrote:
>>> I suspect that steering that ship would be a good idea starting with
>>> deprecation of the old name in 9.6, etc. hs_filter(), perh
p()
A recent commit added setweight(), delete(), unnest(), tsvector_to_array(),
array_to_tsvector(), filter().
The last bunch can be painlessly renamed, for example to ts_setweight, ts_delete,
ts_unnest, ts_filter.
The question is what to do with the old ones. Leave them as is? Rename to
Hi.
As discovered by Oleg Bartunov, the current filter() function for tsvector can
crash the backend.
The bug was caused by erroneous usage of the char type in a memmove argument.
tsvector_bugfix_type.diff
Description: Binary data
efine custom parameters for WITH, than
to extend parser.
> On 13 Apr 2016, at 01:04, Michael Paquier wrote:
>
> On Wed, Apr 13, 2016 at 1:53 AM, Stas Kelvich
> wrote:
>>> On 12 Apr 2016, at 15:47, Michael Paquier wrote:
>>>
>>> It looks to be the case... The PREPARE phase replayed after the
>>> stan
> On 12 Apr 2016, at 15:47, Michael Paquier wrote:
>
> On Mon, Apr 11, 2016 at 7:16 PM, Stas Kelvich wrote:
>> Michael, it looks like you are the only person who can reproduce
>> that bug. I’ve tried on a bunch of OSes and didn’t observe that behaviour,
> On 11 Apr 2016, at 18:41, Stas Kelvich wrote:
>
> Hi.
>
> SPI_execute assumes that CreateTableAsStmt always has completionTag ==
> “completionTag”.
> But it isn’t true when ‘IF NOT EXISTS’ is present.
>
>
>
Sorry, I meant completionTag == “SELEC
Hi.
SPI_execute assumes that CreateTableAsStmt always has completionTag ==
“completionTag”.
But it isn’t true when ‘IF NOT EXISTS’ is present.
spi-cta.patch
Description: Binary data
> On 08 Apr 2016, at 16:09, Stas Kelvich wrote:
>
> Tried on linux and os x 10.11 and 10.4.
>
> Still can’t reproduce, but have played around with your backtrace.
>
> I see in xlodump on slave following sequence of records:
>
> rmgr: Storage len (rec/tot):
ng into account that
the absence of that
patch in the release can cause problems with replication in some cases, as was
warned
by Jesper[1] and Andres[2].
[1] http://www.postgresql.org/message-id/5707a8cc.6080...@redhat.com
[2]
http://www.postgresql.org/message-id/80856693-5065-4392-8606-cf572a2ff..
> On 08 Apr 2016, at 21:55, Jesper Pedersen wrote:
>
> On 04/08/2016 02:42 PM, Robert Haas wrote:
>> On Tue, Jan 26, 2016 at 7:43 AM, Stas Kelvich
>> wrote:
>>> Hi,
>>>
>>> Thanks for reviews and commit!
>>
>> I apologize for bein
> On 08 Apr 2016, at 21:42, Robert Haas wrote:
>
> On Tue, Jan 26, 2016 at 7:43 AM, Stas Kelvich
> wrote:
>> Hi,
>>
>> Thanks for reviews and commit!
>
> I apologize for being clueless here, but was this patch committed?
> It's still marked as
.
If there is a deterministic way to reproduce that bug, I'll rework it and
move it to 00X_twophase.pl
hat can be caused by changing
procedures of PREPARE replay.
Just to keep things sane, here is my current diff:
twophase_replay.v4.patch
Description: Binary data
> On Apr 2, 2016, at 3:14 AM, Michael Paquier wrote:
>
> On Fri, Apr 1, 2016 at 10:53 PM, Stas Kelvich
> wrote:
>> I wrote:
>>> While testing the patch, I found a bug in the recovery conflict code
>>> path. You can do the following to reproduce it:
>&g
ocallock=0x7f90c203dac8,
> owner=0x) + 358 at lock.c:1703
>frame #9: 0x000107e70f93
> postgres`LockAcquireExtended(locktag=0x7fff581f0238, lockmode=8,
> sessionLock='\x01', dontWait='\0', reportMemoryError='\0') + 2819 at
>
> On Mar 29, 2016, at 6:04 PM, David Steele wrote:
>
> It looks like you should post a new patch or respond to Michael's
> comments. Marked as "waiting on author".
Yep, here it is.
> On Mar 22, 2016, at 4:20 PM, Michael Paquier wrote:
>
> Looking at this patch… Than
> On 24 Mar 2016, at 17:03, Robert Haas wrote:
>
> On Wed, Mar 23, 2016 at 1:44 AM, Craig Ringer wrote:
>> On 10 March 2016 at 22:50, Stas Kelvich wrote:
>>> Hi.
>>>
>>> Here is proof-of-concept version of two phase commit support for logical
>
imensions number is higher than 10-20;
intarray performs badly on data with big sets of possible coordinates, so this patch
is also intended to help with a specific, niche problem.
As people tend to use machine learning and regression models more and more,
it is interesting to have some general n-d
box
> as a point in 4-dimensional space?
Or just say 4-d vector instead of 4-d point. Looks like that will be
rigorous enough.
e
> offset = 0, nbytes = 0 case (via fseek(SEEK_END).
It is already in this diff. I added this a few messages ago.
flushdata.v4.patch
Description: Binary data
> On 18 Mar 2016, at 14:45, Stas Kelvich wrote:
>>
>>> One possible solution for that is just fallback to pg_fdatasync in case
>>> when offset = nbytes = 0.
>>
>> Hm, that's a bit heavyweight. I'd rather do an lseek(SEEK_END) to get
>>
there any check that
will guarantee that pg_flush_data will not end up with an empty body on some
platform?
# Minimal test testing streaming replication
use strict;
use warnings;
use PostgresNode;
use TestLib;
erstand why are they happening.
>
> Greetings,
>
> Andres Freund
>
>
r thread that counts all the money in the system:
select sum(v) from t;
So in a transactional system we expect that the sum should always be constant (zero
in our case, as we initialize each user with a zero balance).
But we can see that without tsdtm the total amount of money fluctuates around zero.
https://github.com/ke
ic/warm-standby.html#STREAMING-REPLICATION
by Alvaro and Michael down thread
Done. Originally I thought about reducing the number of tests (11 right now), but
now, after some debugging, I’m more convinced that it is better to include them
all, as they are really testing different code paths.
> * Add documentation for RecoverPreparedF
>
> On 11 Mar 2016, at 16:13, Stas Kelvich wrote:
>
>
>> On 10 Mar 2016, at 20:29, Teodor Sigaev wrote:
>>
>> I would like to suggest rename both functions to array_to_tsvector and
>> tsvector_to_array to have consistent name. Later we could add
>&g
>
Seems reasonable, done.
tsvector_ops-v6.diff
Description: Binary data
.
pglogical_twophase.diff
Description: Binary data
Thanks.
> On 04 Mar 2016, at 22:14, Robert Haas wrote:
>
> On Tue, Mar 1, 2016 at 4:31 AM, Stas Kelvich wrote:
>> Transaction function call sequence description in transam/REA
also changed it to
actual call nesting.
transam.readme.patch
Description: Binary data
! Fixed and added tests.
> --
> Teodor Sigaev E-mail: teo...@sigaev.ru
> WWW: http://www.sigaev.ru/
so I think we can ask the committer to
mention you in the commit message (if it gets committed).
And how do you use ts_match_locs_array? To highlight
search results? There is a function called ts_headline that can mark matches with
custom start/stop strings.
> "unrecognized weight: %d"
> (instead of %c) in tsvector_setweight_by_filter.
>
Ah, I was thinking about moving it to a separate diff and messed up. Fixed, and
attaching a diff with the same fix for the old tsvector_setweight.
tsvector_ops-v2.1.diff
Description: Binary data
tsvector_ops-v2.2.diff
Description: Binary data
Hi.
I tried that and can confirm the strange behaviour. It seems the problem is with the
small Cyrillic letter ‘х’. (simplest obscene language filter? =)
That can be reproduced with a simpler test
Stas
test.c
Description: Binary data
> On 27 Jan 2016, at 13:59, Artur Zakirov wrote:
>
> On 27.01.2016 13