Awesome, thanks!!
On Fri, Aug 4, 2017 at 11:54 PM, Tom Lane wrote:
> Shay Rojansky writes:
> > Great. Do you think it's possible to backport to the other maintained
> > branches as well, seeing as how this is quite trivial and low-impact?
>
> Already done, will be in
>
> > Doing SSL_CTX_set_session_cache_mode(context, SSL_SESS_CACHE_OFF) doesn't
> > have any effect whatsoever - I still have the same issue (session id
> > context uninitialized). I suspect session caching is an entirely different
> > feature from session tickets/RFC5077 (although it might sti
>
> On 2017-08-04 07:22:42 +0300, Shay Rojansky wrote:
> > I'm still not convinced of the risk/problem of simply setting the session
> > id context as I explained above (rather than disabling the optimization),
> > but of course either solution resolves my problem.
I tested the patch.
Doing SSL_CTX_set_session_cache_mode(context, SSL_SESS_CACHE_OFF) doesn't
have any effect whatsoever - I still have the same issue (session id
context uninitialized). I suspect session caching is an entirely different
feature from session tickets/RFC5077 (although it might stil
One more note: https://github.com/netty/netty/pull/5321/files is an
equivalent PR setting the session ID context to a constant value in netty
(which is also a server using OpenSSL). This is in line with the
documentation on SSL_CTX_set_session_id_context (
https://wiki.openssl.org/index.php/Manual:
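(For illustration only - a minimal sketch of the two server-side fixes
discussed in this thread, for a generic OpenSSL server context named ctx;
this is my sketch, not the PostgreSQL patch:)

  #include <openssl/ssl.h>

  /* Option 1, what the netty PR above does: give the server context a
     constant session id context so that resumed sessions are accepted
     instead of failing with "session id context uninitialized". */
  static const unsigned char sid_ctx[] = "generic-server";

  void configure_ssl_ctx(SSL_CTX *ctx)
  {
      SSL_CTX_set_session_id_context(ctx, sid_ctx, sizeof(sid_ctx) - 1);

      /* Option 2, disabling the optimization instead: turn off stateless
         session tickets entirely. As noted above, SSL_SESS_CACHE_OFF alone
         does not help, since tickets are a separate feature from the
         session cache. */
      SSL_CTX_set_options(ctx, SSL_OP_NO_TICKET);
  }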
>
> Shay Rojansky writes:
> > Once again, I managed to make the error go away simply by setting the
> > session id context, which seems to be a mandatory server-side step for
> > properly supporting session tickets.
>
> The fact that you made the error go away doesn'
Hi Tom and Heikki.
As Tom says, session caching and session tickets seem to be two separate
things. However, I think you may be reading more into the session ticket
feature than there is - AFAICT there is no expectation or mechanism for
restoring *application* state of any kind - the mechanism is
et me know if it's absolutely necessary).
On Mon, Jul 31, 2017 at 1:15 AM, Shay Rojansky wrote:
> Hi Tom.
>
> Again, I know little about this, but from what I understand PostgreSQL
> wouldn't actually need to do/implement anything here - the session ticket
> might be u
Hi Tom.
Again, I know little about this, but from what I understand PostgreSQL
wouldn't actually need to do/implement anything here - the session ticket
might be used only to abbreviate the SSL handshake (this would explain why
it's on by default without any application support). In other words, s
Dear hackers, a long-standing issue reported by users of the Npgsql .NET
driver for PostgreSQL may have its roots on the PostgreSQL side. I'm far
from being an SSL/OpenSSL expert so please be patient if the terms/analysis
are incorrect.
When trying to connect with Npgsql to PostgreSQL with client
>
>>>> As I said before, Npgsql for one loads data types by name, not by OID.
>>>> So this would definitely cause breakage.
>>>
>>> Why would that cause breakage?
>>
>>
>> Well, the first thing Npgsql does when it connects to a new database, is
>> to query pg_type. The type names are used to associ
>
> > 1. Does everyone agree on renaming the existing datatype without
> > changing the OID?
> >
> >
> > As I said before, Npgsql for one loads data types by name, not by OID.
> > So this would definitely cause breakage.
>
> Why would that cause breakage?
Well, the first thing Npgsql d
>
> Yes. Before doing this change, it is better to confirm the approach and
> then do all the changes.
>
> 1. Does everyone agree on renaming the existing datatype without
> changing the OID?
>
As I said before, Npgsql for one loads data types by name, not by OID. So
this would definitely cause
(separate transaction delineation
from protocol error recovery).
Note that the same issue was discussed with Craig Ringer in
https://www.postgresql.org/message-id/CAMsr%2BYEgnJ8ZAWPLx5%3DBCbYYq9SNTdwbwvUcb7V-vYm5d5uhbQ%40mail.gmail.com
On Wed, Sep 28, 2016 at 6:04 PM, Shay Rojansky wrote:
> Hi
>
> The current macaddr datatype needs to be kept for some time by renaming
> it without changing its OID, and using the newer one going forward.
>
From the point of view of a driver maintainer... Npgsql looks up data types
by their name - upon first connection to a database it queries pg_type and
m
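(For illustration only - the exact query isn't shown in this thread, but a
name-based lookup is along these lines:)

  SELECT oid, typname FROM pg_type WHERE typname = 'macaddr';

so renaming a type changes what such a query returns even though the OID
stays stable.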
>
> > Of course, this is a relatively minor performance issue (especially when
> > compared to the overall performance benefits provided by batching), and
> > providing an API distinction between adding a Sync and flushing the buffer
> > may over-complicate the API. I just thought I'd me
>
> > It has recently come to my attention that this implementation is
> > problematic because it forces the batch to occur within a transaction;
> > in other words, there's no option for a non-transactional batch.
>
> That's not strictly the case. If you explicitly BEGIN and COMMIT,
> those op
Hi all. I thought I'd share some experience from Npgsql regarding
batching/pipelining - hope this isn't off-topic.
Npgsql has supported batching for quite a while, similar to what this patch
proposes - with a single Sync message sent at the end.
It has recently come to my attention that this i
Sorry about this, I just haven't had a free moment (and it's definitely not
very high priority...)
On Wed, Sep 28, 2016 at 5:04 PM, Robert Haas wrote:
> On Mon, Aug 22, 2016 at 8:14 AM, Fabien COELHO
> wrote:
> > Hello Shay,
> >> Attached is a new version of the patch, adding an upgrade script
Hi everyone, I'd appreciate some guidance on an issue that's been raised
with Npgsql, input from other driver writers would be especially helpful.
Npgsql currently supports batching (or pipelining) to avoid roundtrips, and
sends a Sync message only at the end of the batch (so
Parse1/Bind1/Describe
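For illustration, a two-statement batch under this scheme looks something
like:

  Parse1/Bind1/Describe1/Execute1/Parse2/Bind2/Describe2/Execute2/Sync

Since the backend skips all messages after an error until it sees a Sync,
and the implicit transaction ends only at the Sync, this ties the whole
batch to a single transaction - which is exactly the problem described
above.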
Just a note from me - I also agree this thread evolved (or rather devolved)
in a rather unproductive and strange way.
One important note that came out, though, is that adding a new client
message does have a backwards compatibility issue - intelligent proxies
such as pgbouncer/pgpool will probably
Halfway through this mail I suddenly understood something, please read all
the way down before responding...
On Tue, Aug 16, 2016 at 2:16 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Shay> your analogy breaks down. Of course L2 was invented to improve performance,
> Shay> but t
>
>> I'm not going to respond to the part about dealing with prepared
>> statement errors, since I think we've already covered that and there's
>> nothing new being said. I don't find automatic savepointing acceptable, and
>> a significant change of the PostgreSQL protocol to support this doesn't
>
On Mon, Aug 15, 2016 at 3:16 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Vladimir>> Yes, that is what happens.
> Vladimir>> The idea is not to mess with gucs.
>
> Shay:> Wow... That is... insane...
>
> Someone might say that "programming languages that enable side-effects
> are i
On Sat, Aug 13, 2016 at 11:20 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Tatsuo>Interesting. What would happen if a user changes some of GUC
> parameters? Subsequent session accidentally inherits the changed GUC
> parameter?
>
> Yes, that is what happens.
> The idea is not to me
Apologies, I accidentally replied off-list, here's the response I sent.
Vladimir, I suggest you reply to this message with your own response...
On Sat, Aug 13, 2016 at 6:32 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Shay>To be honest, the mere idea of having an SQL parser insid
Vladimir wrote:
Shay>I don't know much about the Java world, but both pgbouncer and pgpool
> (the major pools?)
>
> In Java world, https://github.com/brettwooldridge/HikariCP is a very good
> connection pool.
> Neither pgbouncer nor pgpool is required.
> The architecture is: application <=> Hikar
On Thu, Aug 11, 2016 at 1:22 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> 2) The driver can use savepoints and autorollback to the good "right before
> failure" state in case of a known failure. Here's the implementation:
> https://github.com/pgjdbc/pgjdbc/pull/477
>
As far as I can
On Thu, Aug 11, 2016 at 8:39 AM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Shay:
>
>> Prepared statements can have very visible effects apart from the speedup
>> they provide (e.g. failure because of schema changes) It's not that
>> these effects can't be worked around - they can
Vladimir wrote:
Shay> As Tom said, if an application can benefit from preparing, the
> developer has the responsibility (and also the best knowledge) to manage
> preparation, not the driver. Magical behavior under the hood causes
> surprises, hard-to-diagnose bugs etc.
>
> Why do you do C# then?
>
Some comments...
For the record, I do find implicit/transparent driver-level query
preparation interesting and potentially useful, and have opened
https://github.com/npgsql/npgsql/issues/1237 to think about it - mostly
based on arguments on this thread. One big question mark I have is whether
this
On Tue, Aug 9, 2016 at 3:42 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Shay>But here's the more important general point. We're driver
> developers, not application developers. I don't really know what
> performance is "just fine" for each of my users, and what is not worth
> opt
On Tue, Aug 9, 2016 at 8:50 AM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> Shay>There are many scenarios where connections are very short-lived
> (think about webapps where a pooled connection is allocated per-request and
> reset in between)
>
> Why the connection is reset in betwee
On Mon, Aug 8, 2016 at 6:44 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
>
> The problem with "empty statement name" is statements with empty name can
>> be reused (for instance, for batch insert executions), so the server side
>> has to do a defensive copy (it cannot predict how man
Vladimir wrote:
>>> On the other hand, usage of some well-defined statement name to trigger
>>> the special case would be fine: all pgbouncer versions would pass those
>>> parse/bind/exec message as if it were regular messages.
>>
>> Can you elaborate on what that means exactly? Are you proposi
>
> We could call this "protocol 3.1" since it doesn't break backwards
>> compatibility (no incompatible server-initiated message changes, but it
>> does include a feature that won't be supported by servers which only
>> support 3.0. This could be a sort of "semantic versioning" for the protocol
>>
On Sun, Aug 7, 2016 at 6:11 PM, Robert Haas wrote:
> > I'm glad reducing the overhead of out-of-line parameters seems like an
> > important goal. FWIW, if as Vladimir seems to suggest, it's possible to
> > bring down the overhead of the v3 extended protocol to somewhere near the
> > simple protocol
>
> > I really don't get what's problematic with posting a message on a mailing
> > list about a potential performance issue, to try to get people's reactions,
> > without diving into profiling right away. I'm not a PostgreSQL developer,
> > have other urgent things to do and don't even spend mos
On Mon, Aug 1, 2016 at 12:12 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:
> The attached patch passes `make check` and it gains 31221 -> 33547
> improvement for "extended pgbench of SELECT 1".
>
> The same version gains 35682 in "simple" mode, and "prepared" mode
> achieves 46367 (just
Greg wrote:
> I think you're looking at this the wrong way around. 30% of what?
> You're doing these simple read-only selects on a database that
> obviously is entirely in RAM. If you do the math on the numbers you
> gave above the simple protocol took 678 microseconds per transaction
> and the ext
>
> Shay Rojansky:
>
>> I'm well aware of how the extended protocol works, but it seems odd for a
>> 30% increase in processing time to be the result exclusively of processing
>> 5 messages instead of just 1 - it doesn't seem like that big a deal
>> (a
>
> Without re-using prepared statements or portals, extended protocol is
> always slow because it requires more messages exchanges than simple
> protocol. In pgbench case, it always sends parse, bind, describe,
> execute and sync message in each transaction even if each transaction
> involves iden
Hi all. I know this has been discussed before, I'd like to know what's the
current position on this.
Comparing the performance of the simple vs. extended protocols with pgbench
yields some extreme results:
$ ./pgbench -T 10 -S -M simple -f /tmp/pgbench.sql pgbench
tps = 14739.803253 (excluding co
>
> I added this patch to the next CF (2016-09) under "Miscellaneous".
>
Thanks!
> Out of curiosity, what is the motivation?
I'm the owner of Npgsql, the open-source .NET driver for PostgreSQL, which
is a binary-first driver. That is, working with types that have no binary
I/O is possible but
>
> When adding new functions to an extension you need to bump the version of
> the extension by renaming the file, updating the .control file, creating an
> upgrade script, and updating the Makefile to include the new files.
Thanks for the guidance, I'll fix all that and resubmit a patch.
Hi.
Attached is a small patch which adds binary input/output for the types
added by the isn extension.
Shay
isn-binary.patch
Description: Binary data
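(Purely as an illustrative sketch of what binary I/O for such a type can
look like - the function names and the assumption that the value is stored
as a 64-bit integer are mine, not necessarily what the attached patch does:)

  #include "postgres.h"
  #include "fmgr.h"
  #include "libpq/pqformat.h"

  PG_FUNCTION_INFO_V1(isn_recv);
  PG_FUNCTION_INFO_V1(isn_send);

  /* Binary input: read a 64-bit value off the wire. */
  Datum
  isn_recv(PG_FUNCTION_ARGS)
  {
      StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);

      PG_RETURN_INT64(pq_getmsgint64(buf));
  }

  /* Binary output: emit the internal 64-bit value in network order. */
  Datum
  isn_send(PG_FUNCTION_ARGS)
  {
      int64 val = PG_GETARG_INT64(0);
      StringInfoData buf;

      pq_begintypsend(&buf);
      pq_sendint64(&buf, val);
      PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
  }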
>
> Would something like this be valid?
>
> OFFSET { start_literal | ( start_expression ) } { ROW | ROWS }
> FETCH { FIRST | NEXT} [ count_literal | ( count_expression ) ] { ROW |
> ROWS } ONLY
>
> Leaving the mandatory parentheses detail to the description, while
> adequate, seems insufficient -
Apologies, as usual I didn't read the docs carefully enough.
On Tue, May 17, 2016 at 7:13 PM, Tom Lane wrote:
> Shay Rojansky writes:
> > A user of mine just raised a strange issue... While it is possible to use a
> > parameter in a LIMIT clause, PostgreSQL does not
A user of mine just raised a strange issue... While it is possible to use a
parameter in a LIMIT clause, PostgreSQL does not seem to allow using one in
a FETCH NEXT clause. In other words, while the following works:
SELECT 1 LIMIT $1;
The following generates a syntax error:
SELECT 1 FETCH NEXT $
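(As the OFFSET/FETCH grammar quoted above suggests, wrapping the parameter
in parentheses turns it into an accepted expression:)

SELECT 1 FETCH NEXT ($1) ROWS ONLY;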
>
> We really do need "cancel up to" semantics for reliable behavior.
> Consider the case where the client has sent the query (or thinks it has)
> but the server hasn't received it yet. If the cancel request can arrive
> at the server before the query fully arrives, and we don't have "cancel
> all
>
>> I definitely agree that simply tracking message sequence numbers on both
>> sides is better. It's also a powerful feature to be able to cancel all
>> messages "up to N" - I'm thinking of a scenario where, for example, many
>> simple queries are sent and the whole process needs to be cancelled.
tocol version.
Let me know if you'd like me to update the TODO.
On Sun, Apr 24, 2016 at 6:11 PM, Tom Lane wrote:
> Shay Rojansky writes:
> > The issue I'd like to tackle is the fact that it's not possible to make
> > sure a cancellation request affects a specific query
Hi.
A while ago I discussed some reliability issues when using cancellations (
http://www.postgresql.org/message-id/CADT4RqAk0E10=9ba8v+uu0dq9tr+pn8x+ptqbxfc1fbivh3...@mail.gmail.com).
Since we were discussing some protocol wire changes recently I'd like to
propose one to help with that.
The issu
I know this has been discussed before (
http://postgresql.nabble.com/Compression-on-SSL-links-td2261205.html,
http://www.postgresql.org/message-id/BANLkTi=Ba1ZCmBuwwn7M1wvPFioT=6n...@mail.gmail.com),
but it seems to make sense to revisit this in 2016.
Since CRIME in 2012, AFAIK compression with en
>
> On googling, it seems this is related to .Net framework compatibility. I am
> using .Net Framework 4 to build the program.cs and that is what I have
> on my m/c. Are you using the same for Npgsql or some different version?
>
That is probably the problem. Npgsql 3.0 is only available for .NET
>
> Hm. Is this with a self compiled postgres? If so, is it with assertions
> enabled?
>
No, it's just the EnterpriseDB 9.5rc1 installer...
Tom's probably right about the optimized code. I could try compiling a
debug version..
>
> Is this in a backend with ssl?
>
No.
> If you go up one frame, what value does port->sock have?
>
For some reason VS is telling me "Unable to read memory" on port->sock... I
have no idea why that is...
>
> Are we sure this is a 9.5-only bug? Shay, can you try 9.4 branch tip
> and see if it misbehaves? Can anyone else reproduce the problem?
>
>
Doesn't occur with 9.4.5 either. The first version I tested which exhibited
this was 9.5beta2.
>
> Things that'd be interesting:
> 1) what are the arguments passed to WaitLatchOrSocket(), most
> importantly wakeEvents and sock
>
wakeEvents is 8387808 and so is sock.
Tom, this bug doesn't occur with 9.4.4 (will try to download 9.4.5 and
test).
>
> > > Any chance you could single-step through WaitLatchOrSocket() with a
> > > debugger? Without additional information this is rather hard to
> > > diagnose.
> > >
> >
> > Uh I sure can, but I have no idea what to look for :) Anything
> > specific?
>
> Things that'd be interesting:
> 1) what ar
>
> > The backends seem to hang when the client closes a socket without first
> > sending a Terminate message - some of the tests make this happen. I've
> > confirmed this happens with 9.5rc1 running on Windows (versions 10 and 7),
> > but this does not occur on Ubuntu 15.10. The client runs on W
let me know.
Shay
On Wed, Dec 30, 2015 at 5:32 AM, Amit Kapila
wrote:
>
>
> On Tue, Dec 29, 2015 at 7:04 PM, Shay Rojansky wrote:
>
>> Could you describe the workload a bit more? Is this rather concurrent? Do
>> you use optimized or debug builds? How long did y
>
> Could you describe the workload a bit more? Is this rather concurrent? Do
> you use optimized or debug builds? How long did you wait for the
> backends to die? Is this all over localhost, external ip but local,
> remotely?
>
The workload is a rather diverse set of integration tests executed w
>
> > The tests run for a couple of minutes, opening and closing some connections.
> > With my pre-9.5 backends, the moment the test runner exits I can see that
> > all backend processes exit immediately, and pg_stat_activity has no rows
> > (except the querying one). With 9.5beta2, however, some backend
After setting up 9.5beta2 on the Npgsql build server and running the Npgsql
test suite against it, I've noticed some weird behavior.
The tests run for a couple of minutes, opening and closing some connections.
With my pre-9.5 backends, the moment the test runner exits I can see that all
backend processes exit
>
> > Here's a patch that adds back the GUC, with default/min/max 0 and
> > GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE.
> >
> > This is my first pg patch, please be gentle with any screwups :)
>
> Why, you dummy.
>
> No, actually, this looks fine. I've committed it and back-patched
Here's a patch that adds back the GUC, with default/min/max 0
and GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE.
This is my first pg patch, please be gentle with any screwups :)
patch_tolerate_ssl_renegotiation_limit_zero
Description: Binary data
>
> > > If not, the only solution I can see is for PostgreSQL to not protest
> > > if it sees the parameter in the startup packet.
> >
> > Yeah, that's the ideal solution here as far as I'm concerned.
>
> Well, it seems that's where we're ending up then. Could you prepare a
> patch?
>
>
> As far as I remember, that was introduced because of renegotiation bugs
> with Mono:
> http://lists.pgfoundry.org/pipermail/npgsql-devel/2010-February/001074.html
> http://fxjr.blogspot.co.at/2010/03/ssl-renegotiation-patch.html
>
> Of course, with renegotiation disabled, nobody knows whether t
Just to give some added reasoning...
As Andres suggested, Npgsql sends ssl_renegotiation_limit=0 because we've
seen renegotiation bugs with the standard .NET SSL implementation (which
Npgsql uses). Seems like everyone has a difficult time with renegotiation.
As Tom suggested, it gets sent in the
Hi hackers.
I noticed ssl_renegotiation_limit has been removed in PostgreSQL 9.5, good
riddance...
However, this creates a new situation where some versions of PG allow this
parameter while others bomb when seeing it. Specifically, Npgsql sends
ssl_renegotiation_limit=0 in the startup packet to completely d
>
> > So you would suggest changing my message chain to send Bind right after
> > Execute, right? This would yield the following messages:
>
> > P1/P2/D1/B1/E1/D2/B2/E2/S (rather than the current
> > P1/D1/B1/P2/D2/B2/E1/C1/E2/C2/S)
>
> > This would mean that I would switch to using named statement
Hi hackers, some odd behavior has been reported with Npgsql and I wanted to
get your help.
Npgsql supports sending multiple SQL statements in a single packet via the
extended protocol. This works fine, but when the second query SELECTs a
value modified by the first's UPDATE, I'm getting a result a
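To illustrate with a hypothetical pair of statements (table and column
names invented):

UPDATE accounts SET balance = balance + 1 WHERE id = 1;
SELECT balance FROM accounts WHERE id = 1; -- sees the pre-UPDATE balance when batched as described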
Thanks for the help Tom and the others, I'll modify my sequence and report
if I encounter any further issues.
On Sun, Oct 4, 2015 at 7:36 PM, Tom Lane wrote:
> Shay Rojansky writes:
> >> To my mind there is not a lot of value in performing Bind until you
> >> are read
>
> I'm fairly sure that the query snapshot is established at Bind time,
> which means that this SELECT will run with a snapshot that indeed
> does not see the effects of the UPDATE.
>
> To my mind there is not a lot of value in performing Bind until you
> are ready to do Execute. The only reason
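In the message shorthand used earlier on this page (P=Parse, B=Bind,
D=Describe, E=Execute, C=Close, S=Sync), that's the difference between

  P1/D1/B1/P2/D2/B2/E1/C1/E2/C2/S

where both portals are bound before the first Execute, so the second
query's snapshot predates the UPDATE, and

  P1/P2/D1/B1/E1/D2/B2/E2/S

where each Bind is deferred until just before its Execute.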
>
> Try adding a sync before the second execute.
>
I tried inserting a Sync right before the second Execute; this caused an
error with the message 'portal "MQ1" does not exist'.
This seems like problematic behavior on its own, regardless of my issues
here (Sync shouldn't be causing an implicit clo
>
> > Npgsql supports sending multiple SQL statements in a single packet via
> > the extended protocol. This works fine, but when the second query SELECTs a
> > value modified by the first's UPDATE, I'm getting a result as if the
> > UPDATE hasn't yet occurred.
>
> Looks like the first updating stateme
Hi hackers, some odd behavior has been reported with Npgsql and I'm sure
you can help.
Npgsql supports sending multiple SQL statements in a single packet via the
extended protocol. This works fine, but when the second query SELECTs a
value modified by the first's UPDATE, I'm getting a result as if
>
> It is expected, and documented. (It's also different in 9.5, see
>
> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=c6b3c939b7e0f1d35f4ed4996e71420a993810d2
> )
>
Ah, thanks!
> > If nothing else, it seems that the concatenation operator should be listed
> > on the opera
Hi hackers.
Trying to execute the following query on PostgreSQL 9.4.4:
select 'a' >= 'b' || 'c';
Gives the result "falsec", implying that the precedence of the string
concatenation operator is lower than the comparison operator. Changing the
>= into = provides the result false, which is less sur
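For comparison, forcing each grouping explicitly:

select ('a' >= 'b') || 'c';  -- 'falsec', the grouping 9.4 actually uses
select 'a' >= ('b' || 'c');  -- false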
30 PM, Robert Haas wrote:
> On Mon, Aug 10, 2015 at 5:25 AM, Shay Rojansky wrote:
> > Thanks for the explanation Robert, that makes total sense. However, it seems
> > like the utility of PG's statement_timeout is much more limited than I
> > thought.
> >
thing similar to the
above and enforce timeouts on the client only. Any further thoughts on this
would be appreciated.
On Sun, Aug 9, 2015 at 5:21 PM, Robert Haas wrote:
> On Sat, Aug 8, 2015 at 11:30 AM, Shay Rojansky wrote:
> > the entire row in memory (imagine rows with megabyte-siz
I'd also recommend adding a sentence about this aspect of statement_timeout
in the docs to prevent confusion...
On Sat, Aug 8, 2015 at 5:30 PM, Shay Rojansky wrote:
> Thanks for your responses.
>
> I'm not using cursors or anything fancy. The expected behavior (as far as
Shay
On Sat, Aug 8, 2015 at 5:13 PM, Tom Lane wrote:
> Shay Rojansky writes:
> > Hi everyone, I'm seeing some strange behavior and wanted to confirm it.
> > When executing a query that selects a long result set, if the code
> > processing the results takes its tim
Hi everyone, I'm seeing some strange behavior and wanted to confirm it.
When executing a query that selects a long result set, if the code
processing the results takes its time (e.g. more than statement_timeout),
a timeout occurs. My expectation was that statement_timeout only affects
query *proc
4:28 PM, Shay Rojansky wrote:
> Thanks for the suggestions Tom.
>
> As I'm developing a general-purpose driver I can't do anything in
> PostgreSQL config, but it's a good workaround suggestion for users who
> encounter this error.
>
> Sending lc_messages i
etting combines both encoding and language. I guess I can
look at the user's locale preference on the client machine, try to
translate that into a PostgreSQL language/encoding and send that in
lc_messages - that seems like it might work.
Shay
On Fri, Jul 31, 2015 at 3:46 PM, Tom Lane wro
Hi hackers.
Developing Npgsql I've encountered the problem described in
http://www.postgresql.org/message-id/20081223212414.gd3...@merkur.hilbert.loc:
a German installation of PostgreSQL seems to respond to an incorrect
password with a non-UTF8 encoding of the error messages, even if the
startup m
Hi everyone.
The ParameterStatus message is currently sent for a hard-wired set of
parameters (
http://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-ASYNC
).
Just wanted to let you know that making this more flexible would be a great
help in driver implementation. Npgsql main
On Sun, Jun 14, 2015 at 6:31 PM, Tom Lane wrote:
> Shay Rojansky writes:
> > [ rely on non-blocking sockets to avoid deadlock ]
>
> Yeah, that's pretty much the approach libpq has taken: write (or read)
> when you can, but press on when you can't.
>
Good
t work
well with non-blocking sockets...
Any comments?
Shay
On Sat, Jun 13, 2015 at 5:08 AM, Simon Riggs wrote:
> On 12 June 2015 at 20:06, Tom Lane wrote:
>
>> Simon Riggs writes:
>> > On 11 June 2015 at 22:12, Shay Rojansky wrote:
>> >> Just in case it's
sing something) - if the cancellation does hit
a query the transaction will be cancelled and it's up to the user to roll
it back as is required in PostgreSQL...
On Thu, Jun 11, 2015 at 9:50 PM, Robert Haas wrote:
> On Tue, Jun 9, 2015 at 4:42 AM, Shay Rojansky wrote:
> > Ah, OK
isn't an excuse for anything, we're looking into ways of
solving this problem differently in our driver implementation.
Shay
On Thu, Jun 11, 2015 at 6:17 PM, Simon Riggs wrote:
> On 11 June 2015 at 16:56, Shay Rojansky wrote:
>
> Npgsql (currently) sends Parse for the secon
roblematic
behavior after reordering the messages (assuming we do reorder).
Thanks for your inputs...
On Thu, Jun 11, 2015 at 5:50 PM, Tom Lane wrote:
> Simon Riggs writes:
> > On 11 June 2015 at 11:20, Shay Rojansky wrote:
> >> It appears that when we send two messages in an
In Npgsql, the .NET driver for PostgreSQL, we've switched from simple to
extended protocol and have received a user complaint.
It appears that when we send two messages in an extended protocol (so two
Parse/Bind/Execute followed by a single Sync), where the first one creates
some entity (function,
SQL queries
while a cancellation on that connection is still outstanding (meaning that
the cancellation connection hasn't yet been closed). As you mentioned this
wouldn't be a 100% solution since it would only cover signal sending, but
better than nothing?
On Tue, Jun 9, 2015 at 1:0
Hi everyone.
I'm working on Npgsql and have run into a race condition when cancelling.
The issue is described in the following 10-year-old thread, and I'd like to
make sure things are still the same:
http://www.postgresql.org/message-id/27126.1126649...@sss.pgh.pa.us
My questions/comments:
-
ng of
rows from several resultsets). And the lack of the ability to execute and
retrieve 0 rows hurts this scenario as well.
Just wanted to put it out there as another argument against deprecation.
On Wed, Feb 11, 2015 at 2:05 AM, Shay Rojansky wrote:
> Thanks for understanding Robert, that
_rows=1 for now, hopefully you guys don't decide to deprecate it.
Shay
On Tue, Feb 10, 2015 at 3:00 PM, Robert Haas wrote:
> On Sun, Feb 8, 2015 at 3:56 AM, Shay Rojansky wrote:
> > Just to be precise: what is strange to me is that the max_rows feature
> > exists but
First a general comment:
> Then the driver writers that need these special API behaviors are
> reasonably expected to contribute to adding them to backend products that
> do not already have them. The database developers are not going to take on
> responsibility for the API decisions of others; a