Re: Is it possible to create a cursor with hold using extended query protocol

2024-07-10 Thread Dave Cramer
On Wed, 10 Jul 2024 at 11:04, David G. Johnston 
wrote:

> On Wednesday, July 10, 2024, Dave Cramer  wrote:
>
>> Greetings,
>>
There are suggestions that you can use the extended query protocol to fetch from a
cursor; however, I don't see how you can bind values to the cursor.
>>
>> PostgreSQL: Documentation: 16: FETCH
>> <https://www.postgresql.org/docs/16/sql-fetch.html>
>>
>> Is this possible?
>>
>
> Not that I can see.  The declare’d query isn’t shown to accept $n bindings;
> rather, it must be executable (select or values).  Per the note on declare,
> the bind phase of the fetch command under the extended protocol is used to
> determine whether values retrieved are text or binary.  Beyond that, the
> bind is really just a formality of the protocol, the same as for executing
> any other non-parameterized query that way.
>

It seems you can bind to the DECLARE, though:

execute : BEGIN
2024-07-10 11:18:57.247 EDT [98519] LOG:  duration: 0.239 ms  parse
: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <
$1
2024-07-10 11:18:57.247 EDT [98519] LOG:  duration: 0.014 ms  bind
: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <
$1
2024-07-10 11:18:57.247 EDT [98519] DETAIL:  Parameters: $1 = '400'
2024-07-10 11:18:57.248 EDT [98519] LOG:  duration: 1.080 ms  execute
: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <
$1
2024-07-10 11:18:57.248 EDT [98519] DETAIL:  Parameters: $1 = '400'
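
For anyone who wants to try the same thing from JDBC, here is a minimal
sketch of that flow (the table vactbl and the value 400 are just taken from
the log above; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorWithHoldExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/postgres";
        try (Connection con = DriverManager.getConnection(url, "postgres", "")) {
            con.setAutoCommit(false);

            // DECLARE goes through Parse/Bind/Execute, so $1 can be bound like
            // any other parameter, exactly as in the server log above.
            try (PreparedStatement declare = con.prepareStatement(
                    "DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id < ?")) {
                declare.setInt(1, 400);
                declare.execute();
            }
            con.commit();   // WITH HOLD keeps c1 usable after the transaction ends

            // FETCH itself takes no bind parameters; its bind phase only picks
            // the text/binary result format.
            try (Statement fetch = con.createStatement();
                 ResultSet rs = fetch.executeQuery("FETCH 10 FROM c1")) {
                while (rs.next()) {
                    System.out.println(rs.getObject(1));
                }
            }
        }
    }
}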

Thanks,

Dave

>
>


Is it possible to create a cursor with hold using extended query protocol

2024-07-10 Thread Dave Cramer
Greetings,

There are suggestions that you can use the extended query protocol to fetch from a
cursor; however, I don't see how you can bind values to the cursor.

PostgreSQL: Documentation: 16: FETCH
<https://www.postgresql.org/docs/16/sql-fetch.html>

Is this possible?

Dave Cramer


Re: Direct SSL connection and ALPN loose ends

2024-06-25 Thread Dave Cramer
On Tue, 25 Jun 2024 at 09:37, Vladimir Sitnikov 
wrote:

> I reviewed the documentation for "direct ALPN connections' ', and it looks
> like it could be improved.
> Here's the link:
> https://www.postgresql.org/docs/17/protocol-flow.html#PROTOCOL-FLOW-SSL
>
> The currently suggested values for "sslnegotiations" are "direct" and
> "postgres".
> The project name is PostgreSQL and the ALPN name is postgresql. Is there a
> reason why property value uses "postgres"?
> Can the value be renamed to postgresql for consistency?
>

+1. I found it strange that we are not using postgresql.

>
> "SSL". Technically, the proper term is TLS, and even the document refers
> to "IANA TLS ALPN Protocol IDs" (TLS, not SSL).
> I would not die on that hill, however, going for tlsnegotiation would look
> better than sslnegotiation.
>

+1 again, unusual to use SSL when this really is TLS.

Dave


Unusually long checkpoint time on version 16, and 17beta1 running in a docker container

2024-06-24 Thread Dave Cramer
Greetings,

While testing pgjdbc I noticed the following:

pgdb-1  | Will execute command on database postgres:
pgdb-1  | SELECT pg_drop_replication_slot(slot_name) FROM
pg_replication_slots WHERE slot_name = 'replica_one';
pgdb-1  | DROP USER IF EXISTS replica_one;
pgdb-1  | CREATE USER replica_one WITH REPLICATION PASSWORD 'test';
pgdb-1  | SELECT * FROM
pg_create_physical_replication_slot('replica_one');
pgdb-1  |
pgdb-1  | NOTICE:  role "replica_one" does not exist, skipping
pgdb-1  |  pg_drop_replication_slot
pgdb-1  | --
pgdb-1  | (0 rows)
pgdb-1  |
pgdb-1  | DROP ROLE
pgdb-1  | CREATE ROLE
pgdb-1  |   slot_name  | lsn
pgdb-1  | -+-
pgdb-1  |  replica_one |
pgdb-1  | (1 row)
pgdb-1  |
pgdb-1  | waiting for checkpoint
pgdb-1  | 2024-06-24 19:07:18.569 UTC [66] LOG:  checkpoint starting: force
wait
pgdb-1  | 2024-06-24 19:11:48.008 UTC [66] LOG:  checkpoint complete: wrote
6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 3 recycled;
write=269.438 s, sync=0.001 s, total=269.439 s; sync files=0, longest=0.000
s, average=0.000 s; distance=44140 kB, estimate=44140 kB; lsn=0/4B8,
redo lsn=0/428


Note that it takes about 4 minutes 30 seconds (write=269.438 s) to do the
checkpoint. This seems ridiculously long.

If I add a checkpoint before doing anything, there is no delay:

 Will execute command on database postgres:
pgdb-1  | checkpoint;
pgdb-1  | SELECT pg_drop_replication_slot(slot_name) FROM
pg_replication_slots WHERE slot_name = 'replica_one';
pgdb-1  | DROP USER IF EXISTS replica_one;
pgdb-1  | CREATE USER replica_one WITH REPLICATION PASSWORD 'test';
pgdb-1  | SELECT * FROM
pg_create_physical_replication_slot('replica_one');
pgdb-1  |
pgdb-1  | 2024-06-24 19:19:57.498 UTC [66] LOG:  checkpoint starting:
immediate force wait
pgdb-1  | 2024-06-24 19:19:57.558 UTC [66] LOG:  checkpoint complete: wrote
6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 2 recycled;
write=0.060 s, sync=0.001 s, total=0.061 s; sync files=0, longest=0.000 s,
average=0.000 s; distance=29947 kB, estimate=29947 kB; lsn=0/3223BA0, redo
lsn=0/3223B48
===> pgdb-1  | CHECKPOINT
pgdb-1  |  pg_drop_replication_slot
pgdb-1  | --
pgdb-1  | (0 rows)
pgdb-1  |
pgdb-1  | DROP ROLE
pgdb-1  | NOTICE:  role "replica_one" does not exist, skipping
pgdb-1  | CREATE ROLE
pgdb-1  |   slot_name  | lsn
pgdb-1  | -+-
pgdb-1  |  replica_one |
pgdb-1  | (1 row)
pgdb-1  |
pgdb-1  | waiting for checkpoint
pgdb-1  | 2024-06-24 19:19:57.614 UTC [66] LOG:  checkpoint starting: force
wait
pgdb-1  | 2024-06-24 19:19:57.915 UTC [66] LOG:  checkpoint complete: wrote
4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.301
s, sync=0.001 s, total=0.302 s; sync files=0, longest=0.000 s,
average=0.000 s; distance=14193 kB, estimate=28372 kB; lsn=0/480, redo
lsn=0/428

This starts in version 16; versions up to and including 15 do not impose
the wait.
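
For reference, the workaround in the second run is simply the explicit
CHECKPOINT at the start of the setup script. A rough JDBC equivalent of
those commands is below (URL and credentials are placeholders; CHECKPOINT
needs superuser, and the slot functions need a role with REPLICATION):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SlotSetupWorkaround {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/postgres";
        try (Connection con = DriverManager.getConnection(url, "postgres", "");
             Statement st = con.createStatement()) {

            // The up-front immediate checkpoint leaves almost nothing for the
            // later forced checkpoint ("waiting for checkpoint") to write.
            st.execute("CHECKPOINT");

            st.execute("SELECT pg_drop_replication_slot(slot_name)"
                     + " FROM pg_replication_slots WHERE slot_name = 'replica_one'");
            st.execute("DROP USER IF EXISTS replica_one");
            st.execute("CREATE USER replica_one WITH REPLICATION PASSWORD 'test'");
            st.execute("SELECT * FROM pg_create_physical_replication_slot('replica_one')");
        }
    }
}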


Dave Cramer


Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2024-05-25 Thread Dave Cramer
On Sat, 25 May 2024 at 06:40, Jelte Fennema-Nio  wrote:

> On Thu, 23 May 2024 at 20:12, Tom Lane  wrote:
> >
> > Jelte Fennema-Nio  writes:
> > > On Fri, 17 May 2024 at 21:24, Robert Haas 
> wrote:
> > >> Perhaps these are all minority positions, but I can't tell you what
> > >> everyone thinks, only what I think.
> >
> > > I'd love to hear some opinions from others on these design choices. So
> > > far it seems like we're the only two that have opinions on this (which
> > > seems hard to believe) and our opinions are clearly conflicting. And
> > > above all I'd like to move forward with this, be it my way or yours
> > > (although I'd prefer my way of course ;) )
> >
> > I got around to looking through this thread in preparation for next
> > week's patch review session.  I have a couple of opinions to offer:
> >
> > 1. Protocol versions suck.  Bumping them is seldom a good answer,
> > and should never be done if you have any finer-grained negotiation
> > mechanism available.  My aversion to this is over thirty years old:
> > I learned that lesson from watching the GIF87-to-GIF89 transition mess.
> > Authors of GIF-writing tools tended to take the easy way out and write
> > "GIF89" in the header whether they were actually using any of the new
> > version's features or not.  This led to an awful lot of pictures that
> > couldn't be read by available GIF-displaying tools, for no good reason
> > whatsoever.  The PNG committee, a couple years later, reacted to that
> > mess by designing PNG to have no version number whatsoever, and yet
> > be extensible in a fine-grained way.  (Basically, a PNG file is made
> > up of labeled chunks.  If a reader doesn't recognize a particular
> > chunk code, it can still tell whether the chunk is "critical" or not,
> > and thereby decide if it must give up or can proceed while ignoring
> > that chunk.)
> >
> > So overall, I have a strong preference for using the _pq_.xxx
> > mechanism instead of a protocol version bump.  I do not believe
> > the latter has any advantage.
>
> I'm not necessarily super opposed to only using the _pq_.xxx
> mechanism.


I find it interesting that up to now nobody has ever used this mechanism.


> I mainly think it's silly to have a protocol version number
> and then never use it. And I feel some of the proposed changes don't
> really benefit from being able to be turned on-and-off by themselves.
> My rule of thumb would be:
> 1. Things that a modern client/pooler would always request: version bump
> 2. Everything else: _pq_.xxx
>

I have to agree: why have a protocol version and then just not use it?

>
> Of the proposed changes so far on the mailing list the only 2 that
> would fall under 1 imho are:
> 1. The ParameterSet message
> 2. Longer than 32bit secret in BackendKeyData
>
> I also don't think the GIF situation you describe translates fully to
> this discussion. We have active protocol version negotiation, so if a
> server doesn't support protocol 3.1 a client is expected to fall back
> to the 3.0 protocol when communicating.


I also agree. Isn't the point of having a version number to figure out what
features the client wants and what the server can subsequently provide?

> Of course you can argue that a
> badly behaved client will fail to connect when it gets a downgrade
> request from the server, but that same argument can be made about a
> server not reporting support for a _pq_.xxx parameter that every
> modern client/pooler requests. So I don't think there's a practical
> difference in the problem you're describing.
>

+1

>
>
>
> But again if I'm alone in this, then I don't
>

I would prefer to see a well-defined protocol handshaking mechanism rather
than some strange _pq_.xxx dance.
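
For context, here is roughly what that dance looks like on the wire today.
This is only a sketch: the option name is made up, host/port/user are
placeholders, and a real client would also deal with SSL negotiation and
authentication. Unrecognized _pq_. options in the startup packet are
reported back in a NegotiateProtocolVersion message rather than causing an
error:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PqStartupSketch {
    public static void main(String[] args) throws Exception {
        String[] params = { "user", "postgres",
                            "database", "postgres",
                            "_pq_.some_future_feature", "1" };   // hypothetical option

        ByteArrayOutputStream body = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(body);
        out.writeInt(3 << 16);                      // protocol version 3.0
        for (String s : params) {
            out.write(s.getBytes(StandardCharsets.UTF_8));
            out.write(0);                           // strings are null-terminated
        }
        out.write(0);                               // terminator of the parameter list

        try (Socket sock = new Socket("localhost", 5432)) {
            DataOutputStream wire = new DataOutputStream(sock.getOutputStream());
            wire.writeInt(body.size() + 4);         // length includes itself
            body.writeTo(wire);
            wire.flush();

            // 'v' = NegotiateProtocolVersion (unknown _pq_ option or minor
            // version), 'R' = authentication request, 'E' = error.
            int first = sock.getInputStream().read();
            System.out.println("first response byte: " + (char) first);
        }
    }
}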

Dave


Re: request for database identifier in the startup packet

2024-05-09 Thread Dave Cramer
On Thu, 9 May 2024 at 15:39, Robert Haas  wrote:

> On Thu, May 9, 2024 at 3:33 PM Dave Cramer  wrote:
> > On Thu, 9 May 2024 at 15:19, Robert Haas  wrote:
> >> On Thu, May 9, 2024 at 3:14 PM Andres Freund 
> wrote:
> >> > ISTM that you could just as well query the information you'd like
> after
> >> > connecting. And that's going to be a lot more flexible than having to
> have
> >> > precisely the right information in the startup message, and most
> clients not
> >> > needing it.
> >>
> >> I agree with this.
> >>
> > Well other than the extra round trip.
>
> I mean, sure, but we can't avoid that for everyone for everything.
> There might be some way of doing something like this with, for
> example, the infrastructure that was proposed to dynamically add stuff
> to the list of PGC_REPORT GUCs, if the values you need are GUCs
> already, or were made so. But I think it's just not workable to
> unconditionally add a bunch of things to the startup packet. It'll
> just grow and grow.
>

I don't think this is unconditional. These are real-world situations where
having this information is useful.
That said, adding fields every time I ask for them would end up growing the
startup packet uncontrollably. This seems like a decent discussion to have with others.

Dave


Re: request for database identifier in the startup packet

2024-05-09 Thread Dave Cramer
On Thu, 9 May 2024 at 15:19, Robert Haas  wrote:

> On Thu, May 9, 2024 at 3:14 PM Andres Freund  wrote:
> > ISTM that you could just as well query the information you'd like after
> > connecting. And that's going to be a lot more flexible than having to
> have
> > precisely the right information in the startup message, and most clients
> not
> > needing it.
>
> I agree with this.
>
> Well other than the extra round trip.

Thanks,
Dave


Re: request for database identifier in the startup packet

2024-05-09 Thread Dave Cramer
Dave Cramer


On Thu, 9 May 2024 at 12:22, Robert Haas  wrote:

> On Thu, May 9, 2024 at 8:06 AM Dave Cramer  wrote:
> > The JDBC driver is currently keeping a per connection cache of types in
> the driver. We are seeing cases where the number of columns is quite high.
> In one case Prevent fetchFieldMetaData() from being run when unnecessary. ·
> Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.
> >
> > If we knew that we were connecting to the same database we could use a
> single cache across connections.
> >
> > I think we would require a server/database identifier in the startup
> message.
>
> I understand the desire to share the cache, but not why that would
> require any kind of change to the wire protocol.
>
The server identity is actually useful for many things, such as knowing
which instance of a cluster you are connected to.
For the cache, however, we can't use the IP address to determine which server
we are connected to, as we could be connected to a pooler.
Knowing exactly which server/database we are talking to makes it relatively easy to have a
common cache across connections. Getting that in the startup message seems
like a good place to put it.
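
Just to illustrate the alternative being suggested (the extra round trip
mentioned above), here is a sketch of deriving a cache key after connecting,
assuming the connecting role is allowed to call pg_control_system():

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class TypeCacheKey {
    // One extra query at connection setup. system_identifier comes from the
    // cluster's control file, so it still identifies the backing cluster even
    // when the connection goes through a pooler.
    static String forConnection(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT system_identifier::text || '/' || current_database()"
               + " FROM pg_control_system()")) {
            rs.next();
            return rs.getString(1);
        }
    }
}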

Dave


request for database identifier in the startup packet

2024-05-09 Thread Dave Cramer
Greetings,

The JDBC driver is currently keeping a per-connection cache of types in the
driver. We are seeing cases where the number of columns is quite high. In
one case (Prevent fetchFieldMetaData() from being run when unnecessary ·
Issue #3241 · pgjdbc/pgjdbc
<https://github.com/pgjdbc/pgjdbc/issues/3241>) there are 2.6 million columns.

If we knew that we were connecting to the same database we could use a
single cache across connections.

I think we would require a server/database identifier in the startup
message.

Dave Cramer


Possible to include xid8 in logical replication

2024-05-06 Thread Dave Cramer
Greetings,

I've been asked by the debezium developers if it is possible to include
xid8 in the logical replication protocol. Are there any previous threads on
this topic?

Is there any reason why we wouldn't include the epoch?
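
For reference, the value in question is already exposed at the SQL level;
a trivial sketch (placeholder URL/credentials, needs PostgreSQL 13 or later):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Xid8Example {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/postgres", "postgres", "");
             Statement st = con.createStatement();
             // pg_current_xact_id() returns xid8, i.e. the epoch-qualified
             // transaction id that the logical replication messages lack.
             ResultSet rs = st.executeQuery("SELECT pg_current_xact_id()")) {
            rs.next();
            System.out.println("full transaction id: " + rs.getString(1));
        }
    }
}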

Dave Cramer


Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2024-04-15 Thread Dave Cramer
On Mon, 15 Apr 2024 at 15:38, Jelte Fennema-Nio  wrote:

> On Mon, 15 Apr 2024 at 19:43, Robert Haas  wrote:
> >
> > On Sat, Apr 6, 2024 at 6:14 PM Jelte Fennema-Nio  wrote:
> > > I think for clients/drivers, the work would generally be pretty
> > > minimal. For almost all proposed changes, clients can "support" the
> > > protocol version update by simply not using the new features, ...
> >
> > I mean, I agree if a particular protocol version bump does nothing
> > other than signal the presence of some optional, ignorable feature,
> > then it doesn't cause a problem if we force clients to support it. But
> > that seems a bit like saying that eating wild mushrooms is fine
> > because some of them aren't poisonous. The point is that if we roll
> > out two protocol changes, A and B, each of which requires the client
> > to make some change in order to work with the newer protocol version,
> > then using version numbers as the gating mechanism requires that the
> > client can't support the newer of those two changes without also
> > supporting the older one. Using feature flags doesn't impose that
> > constraint, which I think is a plus.
>
> I think we're in agreement here, i.e. it depends on the situation if a
> feature flag or version bump is more appropriate. I think the
> guidelines could be as follows:
> 1. For protocol changes that require "extremely minimal" work from
> clients & poolers: bump the protocol version.
> 2. For "niche" features that require some work from clients and/or
> poolers: use a protocol parameter feature flag.
> 3. For anything in between, let's discuss on the thread for that
> specific protocol change on the tradeoffs.
>

My first thought here is that all of the above is subjective, and we will
end up discussing each case anyway.
No real argument, just an observation.

>
> On Mon, 15 Apr 2024 at 19:52, Robert Haas  wrote:
> > surely it can't be right to use protocol
> > version 3.0 to refer to a bunch of different things. But at the same
> > time, surely we don't want clients to start panicking and bailing out
> > when everything would have been fine.
>
> I feel like the ProtocolVersionNegotiation should make sure people
> don't panic and bail out. And if not, then feature flags won't help
> with this either. Because clients would then still bail out if some
> feature is not supported.
>

I don't think a client should ever bail out. It may not support something,
but IMO bailing out is not an option.

Dave

>
>


Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2024-04-05 Thread Dave Cramer
>
>
> Plus, you've got all of the consequences for non-core drivers, which
> have to both add support for the new wire protocol - if they don't
> want to seem outdated and eventually obsolete - and also test that
> they're still compatible with all supported server versions.
> Connection poolers have the same set of problems. The whole thing is
> almost a hole with no bottom. Keeping up with core changes in this
> area could become a massive undertaking for lots and lots of people,
> some of whom may be the sole maintainer of some important driver that
> now needs a whole bunch of work.
>

We already have this in many places. Headers or functions change and
extensions have to fix their code.
Catalog changes force drivers to change their code.
This argument blocks any improvement to the protocol. I don't think it's
unreasonable to expect maintainers to keep up.
We could make it easier by having a dedicated mailing list for maintainers, but that
doesn't change the work.



> I'm not sure how much it improves things if we imagine adding feature
> flags to the existing protocol versions, rather than whole new
> protocol versions, but at least it cuts down on the assumption that
> adopting new features is mandatory, and that such features are
> cumulative. If a driver wants to support TDE but not protocol
> parameters or protocol parameters but not TDE, who are we to say no?
> If in supporting those things we bump the protocol version to 3.2, and
> then 3.3 fixes a huge performance problem, are drivers going to be
> required to add support for features they don't care about to get the
> performance fixes? I see some benefit in bumping the protocol version
> for major changes, or for changes that we have an important reason to
> make mandatory, or to make previously-optional features for which
> support has become in practical terms universal part of the base
> feature set. But I'm very skeptical of the idea that we should just
> handle as many things as possible via a protocol version bump. We've
> been avoiding protocol version bumps like the plague since forever,
> and swinging all the way to the other extreme doesn't sound like the
> right idea to me.
>

+1 for not swinging too far here. But I don't think it should be a
non-starter.
Dave


Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2024-04-05 Thread Dave Cramer
On Fri, 5 Apr 2024 at 12:09, Jelte Fennema-Nio  wrote:

> On Fri, 5 Apr 2024 at 16:02, Robert Haas  wrote:
> > Often?
> >
> > I kind of hope that the protocol starts to evolve a bit more than it
> > has, but I don't want a continuous stream of changes. That will be
> > very hard to test and verify correctness, and a hassle for drivers to
> > keep up with, and a mess for compatibility.
>
> I definitely think protocol changes require a lot more scrutiny than
> many other things, given their hard/impossible to change nature.
>
> But I do think that we shouldn't be at all averse to the act of
> bumping the protocol version itself. If we have a single small
> protocol change in one release, then imho it's no problem to bump the
> protocol version. Bumping the version should be painless. So we
> shouldn't be inclined to push an already agreed upon protocol change
> to the next release, because there are some more protocol changes in
> the pipeline that won't make it in the current one.
>
> I don't think this would be any harder for drivers to keep up with,
> then if we'd bulk all changes together. If driver developers only want
> to support changes in bulk changes, they can simply choose not to
> support 3.1 at all if they want and wait until 3.2 to support
> everything in bulk then.
>
> > > So, what approach of checking feature support are you envisioning
> > > instead? A function for every feature?
> > > Something like SupportsSetProtocolParameter, that returns an error
> > > message if it's not supported and NULL otherwise. And then add such a
> > > support function for every feature?
> >
> > Yeah, something like that.
> > ...
> >
> > > I'm also not sure why you're saying a user is not entitled to this
> > > information. We already provide users of libpq a way to see the full
> > > Postgres version, and the major protocol version. I think allowing a
> > > user to access this information is only a good thing. But I agree that
> > > providing easy to use feature support functions is a better user
> > > experience in some cases.
> >
> > I guess that's a fair point. But I'm worried that if we expose too
> > much of the internals, we won't be able to change things later.
>
> I'll take a look at redesigning the protocol parameter stuff. To work
> with dedicated functions instead.
>
+1

>
> > I really intended the _pq_ prefix as a way of taking something out of
> > the GUC namespace, not as a part of the GUC namespace that users would
> > see. And I'm reluctant to go back on that. If we want to make
> > pg_protocol.${NAME} mean a wire protocol parameter, well maybe there's
> > something to that idea [insert caveats here]. But doesn't _pq_ look
> > like something that was intended to be internal? That's certainly how
> > I intended it.
>

Is this actually used in practice? If so, how?

>
> I agree that _pq_ does look internal and doesn't clearly indicate that
> it's a protocol related change. But sadly the _pq_ prefix is the only
> one that doesn't error in startup packets, waiting another 5 years
> until pg_protocol is allowed in the startup packet doesn't seem like a
> reasonable solution either.
>
> How about naming the GUCs pg_protocol.${NAME}, but still requiring the
> _pq_ prefix in the StartupPacket. That way only client libraries would
> have to see this internal prefix and they could remap it someway. I
> see two options for that:
> 1. At the server replace the _pq_ prefix with pg_protocol. So
> _pq_.${NAME} would map to pg_protocol.${name}
> 2. At the server replace the _pq_.pg_protocol prefix with pg_protocol.
> So _pq_.pg_protocol.${NAME} would map to pg_protocol.${name}.
>
> I guess you prefer option 2, because that would still leave lots of
> space to do something with the rest of the _pq_ space, i.e.
> _pq_.magic_pixie_dust can still be used for something different than a
> GUC.
>
> Bikeshedding: I think I prefer protocol.${NAME} over
> pg_protocol.${NAME}, it's shorter and it seems obvious that protocol
> is the postgres protocol in this context.
>
> This should be a fairly simple change to make.
>
> > Wouldn't libpq already know what value it last set? Or is this needed
> > because it doesn't know what the other side's default is?
>
> libpq could/should indeed know this, but for debugging/testing
> purposes it is quite useful to have a facility to read the server side
> value. I think defaults should always be whatever was happening if the
> parameter wasn't specified before, so knowing the server default is
> not something the client needs to worry about (i.e. the default is
> defined as part of the protocol spec).
>
> > Hmm, OK. I guess if the PGC_PROTOCOL flag makes it so that the GUC can
> > only be set using ParameterSet, and it also makes them
> > non-transactional, then it's fine. So to be clear, I can't set these
> > in postgresql.conf, or postgresql.auto.conf, or via ALTER $ANYTHING,
> > or via SET, or in any other way than by sending ParameterStatus
> > messages. And when I send a 

Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2024-04-05 Thread Dave Cramer
On Thu, 4 Apr 2024 at 12:45, Jelte Fennema-Nio  wrote:

> On Thu, 4 Apr 2024 at 14:50, Peter Eisentraut 
> wrote:
> > It appears there are several different perspectives about this.  My
> > intuition was that a protocol version change indicates something that we
> > eventually want all client libraries to support.  Whereas protocol
> > extensions are truly opt-in.
> >
> > For example, if we didn't have the ParameterStatus message and someone
> > wanted to add it, then this ought to be a protocol version change, so
> > that we eventually get everyone to adopt it.
> >
> > But, for example, the column encryption feature is not something I'd
> > expect all client libraries to implement, so it can be opt-in.
>
> I think that is a good way of deciding between version bump vs
> protocol extension parameter. But I'd like to make one
> clarification/addition to this logic, if the protocol feature already
> requires opt-in from the client because of how the feature works, then
> there's no need for a protocol extension parameter. e.g. if you're
> column-encryption feature would require the client to send a
> ColumnDecrypt message before the server would exhibit any behaviour
> relating to the column-encryption feature, then the client can simply
> "support" the feature by never sending the ColumnDecrypt message.
> Thus, in such cases a protocol extension parameter would not be
> necessary, even if the feature is considered opt-in.
>
> > (I believe we have added a number of new protocol messages since the
> > beginning of the 3.0 protocol, without any version change, so "new
> > protocol message" might not always be a good example to begin with.)
>
> Personally, I feel like this is something we should change. IMHO, we
> should get to a state where protocol minor version bumps are so
> low-risk that we can do them whenever we add message types. Right now
> there are basically multiple versions of the 3.0 protocol, which is
> very confusing to anyone implementing it. Different servers
> implementing the 3.0 protocol without the NegotiateVersion message is
> a good example of that.
>

Totally agree.


>
> > I fear that if we prefer protocol extensions over version increases,
> > then we'd get a very fragmented landscape of different client libraries
> > supporting different combinations of things.
>
> +1
Dave

> +1
>


Re: incorrect results and different plan with 2 very similar queries

2024-03-27 Thread Dave Cramer
Dave Cramer


On Wed, 27 Mar 2024 at 17:57, David Rowley  wrote:

> On Thu, 28 Mar 2024 at 10:33, Dave Cramer  wrote:
> > There is a report on the pgjdbc github JDBC Driver shows erratic
> behavior when filtering on CURRENT_DATE · pgjdbc/pgjdbc · Discussion #3184 (
> github.com)
> >
> > Here are the plans.
> >
> > JDBC - Nested Loop (incorrect result)
> >
> > Index Cond: (mutation >= ((CURRENT_DATE -
> '1971-12-31'::date) - 28))
>
> > JDBC - Hash Right (correct result)
> >
> > Recheck Cond: (mutation >= ((CURRENT_DATE -
> '1971-12-31'::date) - 29))
>
> I don't see any version details or queries, but going by the
> conditions above, the queries don't appear to be the same, so
> different results aren't too surprising and not a demonstration that
> there's any sort of bug.
>

Sorry, you are correct. The version is 12.14. Here is the query:

SELECT
  p.partseqno_i
, p.partno
, p.partmatch
, pfe.average_price
, pfe.sales_price
, pfe.purch_price
, pfe.average_price_2
, pfe.avg_repair_cost
, pfe.average_price_func
, pfe.fsv
, pfe.fsv_func
, p.status

FROM part p
LEFT JOIN part_fa_entity pfe ON (p.partno = pfe.partno)
WHERE 1=1
AND (p.mutation >= (CURRENT_DATE - '1971-12-31'::date)-27) ORDER BY p.partno

Dave


> David
>


incorrect results and different plan with 2 very similar queries

2024-03-27 Thread Dave Cramer
fsv, pfe.fsv_func
  Index Cond: ((pfe.partno)::text = (p.partno)::text)
Planning Time: 0.361 ms
Execution Time: 3.157 ms

AppX - Hash Join (correct result)

Sort  (cost=1352.35..1352.94 rows=238 width=83) (actual time=5.361..5.384
rows=345 loops=1)
  Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,
pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,
pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status
  Sort Key: p.partno
  Sort Method: quicksort  Memory: 73kB
  ->  Hash Right Join  (cost=472.00..1342.95 rows=238 width=83) (actual
time=0.594..4.669 rows=345 loops=1)
Output: p.partseqno_i, p.partno, p.partmatch, pfe.average_price,
pfe.sales_price, pfe.purch_price, pfe.average_price_2, pfe.avg_repair_cost,
pfe.average_price_func, pfe.fsv, pfe.fsv_func, p.status
Inner Unique: true
Hash Cond: ((pfe.partno)::text = (p.partno)::text)
->  Seq Scan on amos.part_fa_entity pfe  (cost=0.00..837.27
rows=12827 width=65) (actual time=0.006..1.581 rows=12827 loops=1)
  Output: pfe.part_fa_entityno_i, pfe.partno, pfe.entityno_i,
pfe.average_price, pfe.sales_price, pfe.purch_price, pfe.average_price_2,
pfe.avg_repair_cost, pfe.avg_repair_cost_func, pfe.fa_qty,
pfe.fa_open_iv_qty, pfe.fa_start_qty, pfe.fa_start_price,
pfe.fa_start_price_2, pfe.mutation, pfe.mutator, pfe.status,
pfe.mutation_time, pfe.created_by, pfe.created_date,
pfe.average_price_func, pfe.fa_start_price_func, pfe.fsv, pfe.fsv_func
->  Hash  (cost=469.03..469.03 rows=238 width=29) (actual
time=0.564..0.566 rows=345 loops=1)
  Output: p.partseqno_i, p.partno, p.partmatch, p.status
  Buckets: 1024  Batches: 1  Memory Usage: 30kB
  ->  Bitmap Heap Scan on amos.part p  (cost=18.14..469.03
rows=238 width=29) (actual time=0.075..0.488 rows=345 loops=1)
Output: p.partseqno_i, p.partno, p.partmatch, p.status
Recheck Cond: (p.mutation >= ((CURRENT_DATE -
'1971-12-31'::date) - 29))
Heap Blocks: exact=186
->  Bitmap Index Scan on i_42609  (cost=0.00..18.08
rows=238 width=0) (actual time=0.035..0.035 rows=356 loops=1)
  Index Cond: (p.mutation >= ((CURRENT_DATE -
'1971-12-31'::date) - 29))
Planning Time: 0.379 ms
Execution Time: 5.443 ms
<https://github.com/pgjdbc/pgjdbc/discussions/3184>

Dave Cramer


Re: Trying to build x86 version on windows using meson

2024-03-21 Thread Dave Cramer
Andres,


On Thu, 21 Mar 2024 at 12:51, Andres Freund  wrote:

> Hi,
>
> On 2024-03-21 07:11:23 -0400, Dave Cramer wrote:
> > It seems that attempting to cross-compile on an ARM machine might be
> asking
> > too much as the use cases are pretty limited.
>
> It for sure is if you don't even provide the precise commands and logs of a
> failed run...
>
>
> > So the impetus for this is that folks require 32bit versions of psqlODBC.
> > Unfortunately EDB is no longer distributing a 32 bit windows version.
> >
> > All I really need is a 32bit libpq. This seems like a much smaller lift.
> > Suggestions ?
>
> FWIW, I can cross compile postgres from linux to 32bit windows without an
> issue. If you really just need a 32bit libpq, that might actually be
> easier.
>
> cd /tmp/ && rm -rf /tmp/meson-w32 && m setup --buildtype debug
> -Dcassert=true -Db_pch=true --cross-file
> ~/src/meson/cross/linux-mingw-w64-32bit.txt /tmp/meson-w32 ~/src/postgresql
> && cd /tmp/meson-w32 && ninja
>
> file src/interfaces/libpq/libpq.dll
> src/interfaces/libpq/libpq.dll: PE32 executable (DLL) (console) Intel
> 80386, for MS Windows, 19 sections
>
> You'd need a windows openssl to actually have a useful libpq, but that
> should
> be fairly simple.
>
>
> There are two warnings that I think point to us doing something wrong, but
> they're not affecting libpq:
>
> [1585/1945 42  81%] Linking target src/bin/pgevent/pgevent.dll
> /usr/bin/i686-w64-mingw32-ld: warning: resolving _DllRegisterServer by
> linking to _DllRegisterServer@0
> Use --enable-stdcall-fixup to disable these warnings
> Use --disable-stdcall-fixup to disable these fixups
> /usr/bin/i686-w64-mingw32-ld: warning: resolving _DllUnregisterServer by
> linking to _DllUnregisterServer@0
>
>
>
Attached is the correct log file.

Dave
Build started at 2024-03-21T13:07:08.707715
Main binary: C:\Program Files\Meson\meson.exe
Build Options: '-Dextra_include_dirs=c:\Program Files\OpenSSL-Win64\include' 
-Derrorlogs=True '-Dextra_lib_dirs=c:\Program Files\OpenSSL-win64' 
'-Dprefix=c:\postgres86'
Python system: Windows
The Meson build system
Version: 1.3.1
Source dir: C:\Users\davec\projects\postgresql
Build dir: C:\Users\davec\projects\postgresql\build
Build type: native build
Project name: postgresql
Project version: 17devel
---
Detecting compiler via: `icl ""` -> [WinError 2] The system cannot find the 
file specified
---
Detecting compiler via: `cl /?` -> 0
stdout:
C/C++ COMPILER OPTIONS


  -OPTIMIZATION-

/O1 maximum optimizations (favor space) /O2 maximum optimizations (favor speed)
/Ob inline expansion (default n=0)   /Od disable optimizations (default)
/Og enable global optimization  /Oi[-] enable intrinsic functions
/Os favor code space/Ot favor code speed
/Ox optimizations (favor speed) /Oy[-] enable frame pointer omission 
/favor: select processor to optimize for, one of:
blend - a combination of optimizations for several different x86 processors
ATOM - Intel(R) Atom(TM) processors 

 -CODE GENERATION-

/Gu[-] ensure distinct functions have distinct addresses
/Gw[-] separate global variables for linker
/GF enable read-only string pooling /Gm[-] enable minimal rebuild
/Gy[-] separate functions for linker/GS[-] enable security checks
/GR[-] enable C++ RTTI  /GX[-] enable C++ EH (same as /EHsc)
/guard:cf[-] enable CFG (control flow guard)
/guard:ehcont[-] enable EH continuation metadata (CET)
/EHs enable C++ EH (no SEH exceptions)  /EHa enable C++ EH (w/ SEH exceptions)
/EHc extern "C" defaults to nothrow 
/EHr always generate noexcept runtime termination checks
/fp: choose floating-point model:
contract - consider floating-point contractions when generating code
except[-] - consider floating-point exceptions when generating code
fast - "fast" floating-point model; results are less predictable
precise - "precise" floating-point model; results are predictable
strict - "strict" floating-point model (implies /fp:except)
/Qfast_transcendentals generate inline FP intrinsics even with /fp:except
/Qspectre[-] enable mitigations for CVE 2017-5753
/Qpar[-] enable parallel code generation
/Qpar-report:1 auto-parallelizer diagnostic; indicate parallelized loops
/Qpar-report:2 auto-parallelizer diagnostic; indicate loops not parallelized
/Qvec-report:1 auto-vectorizer diagnostic; indicate vectorized loops
/Qvec-report:2 auto-vectorizer diagnostic; indicate loops not vectorized
/GL[-] enable link-time code generation 
/volatile: choose volatile model:
iso - Acquire/release semantics not guaranteed on volatile accesses
ms  - Acquire/release semantics guaranteed on volatile accesses

Re: Trying to build x86 version on windows using meson

2024-03-21 Thread Dave Cramer
On Thu, 21 Mar 2024 at 03:56, Peter Eisentraut  wrote:

> On 20.03.24 22:49, Dave Cramer wrote:
> >
> >
> >
> > On Wed, 20 Mar 2024 at 17:11, Andres Freund  > <mailto:and...@anarazel.de>> wrote:
> >
> > Hi,
> >
> > On 2024-03-20 16:14:23 -0400, Dave Cramer wrote:
> >  > I am getting the following error
> >  >
> >  > meson.build:1479:17: ERROR: Can not run test applications in this
> > cross
> >  > environment.
> >  >
> >  > Have configured for amd64_x86
> >  >
> >  > Running `meson setup --wipe build --prefix=c:\postgres86`
> >
> > This is not enough information to debug anything. At the very least
> > we need
> > the exact steps performed to set up the build and
> > meson-logs/meson-log.txt
> >
> > First off this is on an ARM64 machine
> >
> > The last error from meson-log.txt is
> >
> > ...
> > Checking if "c99" compiles: YES
> >
> > meson.build:1479:17: ERROR: Can not run test applications in this cross
> > environment.
> > ...
>
> I have never tried this, but there are instructions for cross-compiling
> with meson: https://mesonbuild.com/Cross-compilation.html


It seems that attempting to cross-compile on an ARM machine might be asking
too much, as the use cases are pretty limited.

So the impetus for this is that folks require 32-bit versions of psqlODBC.
Unfortunately EDB is no longer distributing a 32-bit Windows version.

All I really need is a 32-bit libpq. This seems like a much smaller lift.
Suggestions?

Dave


Trying to build x86 version on windows using meson

2024-03-20 Thread Dave Cramer
Greetings,

I am getting the following error

meson.build:1479:17: ERROR: Can not run test applications in this cross
environment.

Have configured for amd64_x86

Running `meson setup --wipe build --prefix=c:\postgres86`

The docs say it is possible to build Postgres for x86. Are there specific
instructions?


Dave Cramer


Re: query_id, pg_stat_activity, extended query protocol

2024-03-20 Thread Dave Cramer
>
>
>>
>> FWIW, I'd like to think that we could improve the situation, requiring
>> a mix of calling pgstat_report_query_id() while feeding on some query
>> IDs retrieved from CachedPlanSource->query_list.  I have not in
>> details looked at how much could be achieved, TBH.
>>
>
This just cropped up as a pgjdbc GitHub issue. It seems like something that
should be addressed.
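
For anyone who wants to see it quickly, here is a rough sketch of how the
issue shows up from JDBC (placeholder URL/credentials; compute_query_id and
pg_stat_activity.query_id require v14 or later):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryIdCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/postgres";
        try (Connection worker = DriverManager.getConnection(url, "postgres", "");
             Connection monitor = DriverManager.getConnection(url, "postgres", "")) {

            try (Statement st = worker.createStatement()) {
                st.execute("SET compute_query_id = on");
            }

            // A long-running statement sent through the extended query protocol.
            try (PreparedStatement ps = worker.prepareStatement("SELECT pg_sleep(?)")) {
                ps.setDouble(1, 30);
                Thread t = new Thread(() -> {
                    try { ps.execute(); } catch (Exception ignored) { }
                });
                t.start();
                Thread.sleep(1000);

                // Per the report, query_id can show up as NULL here even though
                // compute_query_id is on and the statement is running.
                try (Statement st = monitor.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT query_id, query FROM pg_stat_activity"
                       + " WHERE query LIKE 'SELECT pg_sleep%'")) {
                    while (rs.next()) {
                        System.out.println(rs.getObject(1) + " | " + rs.getString(2));
                    }
                }
                ps.cancel();
                t.join();
            }
        }
    }
}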

Dave


Re: When extended query protocol ends?

2024-02-15 Thread Dave Cramer
Hi Tatsuo,

Actually, no need; I figured it out.

I don't have a solution yet, though.

Dave Cramer
www.postgres.rocks


On Thu, 15 Feb 2024 at 19:43, Tatsuo Ishii  wrote:

> > Can you ask the OP what they are doing in the startup. I'm trying to
> > replicate their situation.
> > Looks like possibly 'setReadOnly' and 'select version()'
>
> Sure I will. By the way 'select version()' may be issued by Pgpool-II
> itself. In this case it should be 'SELECT version()', not 'select
> version()'.
>
> Best reagards,
> --
> Tatsuo Ishii
> SRA OSS LLC
> English: http://www.sraoss.co.jp/index_en/
> Japanese:http://www.sraoss.co.jp
>


Re: When extended query protocol ends?

2024-02-15 Thread Dave Cramer
On Wed, 14 Feb 2024 at 17:55, Tatsuo Ishii  wrote:

> >>> From [1] I think the JDBC driver sends something like below if
> >>> autosave=always option is specified.
> >>>
> >>> "BEGIN READ ONLY" Parse/Bind/Eexecute (in the extended query protocol)
> >>> "SAVEPOINT PGJDBC_AUTOSAVE" (in the simple query protocol)
> >>>
> >>> It seems the SAVEPOINT is sent without finishing the extended query
> >>> protocol (i.e. without Sync message). Is it possible for the JDBC
> >>> driver to issue a Sync message before sending SAVEPOINT in simple
> >>> query protocol? Or you can send SAVEPOINT using the extended query
> >>> protocol.
> >>>
> >>> [1]
> >>>
> https://www.pgpool.net/pipermail/pgpool-general/2023-December/009051.html
> >>
> >>
> >> Can you ask the OP what version of the driver they are using. From what
> I
> >> can tell we send BEGIN using SimpleQuery.
> >
> > Sure. I will get back once I get the JDBC version.
>
> Here it is:
> > JDBC driver version used:42.5.1 Regards, Karel.
>

Can you ask the OP what they are doing at startup? I'm trying to
replicate their situation.
It looks like possibly 'setReadOnly' and 'select version()'.
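
Here is roughly what I'm using to try to replicate it (placeholder URL and
credentials; the startup actions are just my guess from the pgpool report):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AutosaveRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");
        props.setProperty("password", "");
        props.setProperty("autosave", "always");   // driver then issues SAVEPOINT PGJDBC_AUTOSAVE

        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/postgres", props)) {
            con.setAutoCommit(false);
            con.setReadOnly(true);

            // The first statement should produce the BEGIN READ ONLY +
            // SAVEPOINT sequence described in the pgpool report.
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("select version()")) {
                rs.next();
                System.out.println(rs.getString(1));
            }
            con.rollback();
        }
    }
}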

Thanks,
Dave

>
>


Re: [PATCH] Add native windows on arm64 support

2024-02-13 Thread Dave Cramer
On Tue, 13 Feb 2024 at 12:52, Andres Freund  wrote:

> Hi,
>
> On 2024-02-13 12:49:33 -0500, Dave Cramer wrote:
> > > I think I might have been on to something - if my human emulation of a
> > > preprocessor isn't wrong, we'd end up with
> > >
> > > #define S_UNLOCK(lock)  \
> > > do { _ReadWriteBarrier(); (*(lock)) = 0; } while (0)
> > >
> > > on msvc + arm. And that's entirely insufficient - _ReadWriteBarrier()
> just
> > > limits *compiler* level reordering, not CPU level reordering.  I think
> it's
> > > even insufficient on x86[-64], but it's definitely insufficient on arm.
> > >
> > In fact ReadWriteBarrier has been deprecated _ReadWriteBarrier |
> Microsoft
> > Learn
> > <
> https://learn.microsoft.com/en-us/cpp/intrinsics/readwritebarrier?view=msvc-170
> >
>
> I'd just ignore that, that's just pushing towards more modern stuff that's
> more applicable to C++ than C.
>
>
> > I did try using atomic_thread_fence as per atomic_thread_fence -
> > cppreference.com
> > <https://en.cppreference.com/w/c/atomic/atomic_thread_fence>
>
> The semantics of atomic_thread_fence are, uh, very odd.  I'd just use
> MemoryBarrier().
>
#define S_UNLOCK(lock)  \
do { MemoryBarrier(); (*(lock)) = 0; } while (0)

#endif

This has no effect.

I have no idea if that is what you meant I should do.

Dave


Re: [PATCH] Add native windows on arm64 support

2024-02-13 Thread Dave Cramer
Dave Cramer
www.postgres.rocks


On Mon, 12 Feb 2024 at 16:19, Andres Freund  wrote:

> Hi,
>
> On 2024-02-12 12:50:12 -0800, Andres Freund wrote:
> > On 2024-02-12 13:28:40 -0500, Andrew Dunstan wrote:
> > I wonder if this indicates that we are either missing memory barriers
> > somewhere or that the memory barriers we end up with on msvc + arm aren't
> > correct?  Either could explain why the problem doesn't occur when
> building
> > with optimizations.
>
> I think I might have been on to something - if my human emulation of a
> preprocessor isn't wrong, we'd end up with
>
> #define S_UNLOCK(lock)  \
> do { _ReadWriteBarrier(); (*(lock)) = 0; } while (0)
>
> on msvc + arm. And that's entirely insufficient - _ReadWriteBarrier() just
> limits *compiler* level reordering, not CPU level reordering.  I think it's
> even insufficient on x86[-64], but it's definitely insufficient on arm.
>
In fact, _ReadWriteBarrier has been deprecated: _ReadWriteBarrier | Microsoft
Learn
<https://learn.microsoft.com/en-us/cpp/intrinsics/readwritebarrier?view=msvc-170>

I did try using atomic_thread_fence as per atomic_thread_fence -
cppreference.com
<https://en.cppreference.com/w/c/atomic/atomic_thread_fence>


However, for some reason, including <stdatomic.h>
causes a bunch of compiler errors.

c:\Program Files\Microsoft Visual
Studio\2022\Community\VC\Tools\MSVC\14.38.33130\include\vcruntime_c11_stdatomic.h(36):
error C2061: syntax error: identifier 'atomic_bool'

Dave


Re: When extended query protocol ends?

2024-02-13 Thread Dave Cramer
Hi Tatsuo,



On Mon, 29 Jan 2024 at 20:15, Tatsuo Ishii  wrote:

> Hello Dave,
>
> > Tatsuo Ishii  writes:
> >> Below is outputs from "pgproto" command coming with Pgpool-II.
> >> (Lines starting "FE" represents a message from frontend to backend.
> >> Lines starting "BE" represents a message from backend to frontend.)
> >
> >> FE=> Parse(stmt="", query="SET extra_float_digits = 3")
> >> FE=> Bind(stmt="", portal="")
> >> FE=> Execute(portal="")
> >> FE=> Parse(stmt="", query="SET extra_float_digits = 3")
> >> FE=> Bind(stmt="", portal="")
> >> FE=> Execute(portal="")
> >> FE=> Query (query="SET extra_float_digits = 3")
> >> <= BE ParseComplete
> >> <= BE BindComplete
> >> <= BE CommandComplete(SET)
> >> <= BE ParseComplete
> >> <= BE BindComplete
> >> <= BE CommandComplete(SET)
> >> <= BE CommandComplete(SET)
> >> <= BE ReadyForQuery(I)
> >> FE=> Terminate
> >
> >> As you can see, two "SET extra_float_digits = 3" is sent in the
> >> extended query protocol, then one "SET extra_float_digits = 3" follows
> >> in the simple query protocol. No sync message is sent. However, I get
> >> ReadyForQuery at the end. It seems the extended query protocol is
> >> ended by a simple query protocol message instead of a sync message.
> >
> >> My question is, is this legal in terms of fronted/backend protocol?
> >
> > I think it's poor practice, at best.  You should end the
> > extended-protocol query cycle before invoking simple query.
>
> From [1] I think the JDBC driver sends something like below if
> autosave=always option is specified.
>
> "BEGIN READ ONLY" Parse/Bind/Eexecute (in the extended query protocol)
> "SAVEPOINT PGJDBC_AUTOSAVE" (in the simple query protocol)
>
> It seems the SAVEPOINT is sent without finishing the extended query
> protocol (i.e. without Sync message). Is it possible for the JDBC
> driver to issue a Sync message before sending SAVEPOINT in simple
> query protocol? Or you can send SAVEPOINT using the extended query
> protocol.
>
> [1]
> https://www.pgpool.net/pipermail/pgpool-general/2023-December/009051.html


Can you ask the OP what version of the driver they are using? From what I
can tell, we send BEGIN using SimpleQuery.

Dave


Re: [PATCH] Add native windows on arm64 support

2024-02-12 Thread Dave Cramer
On Mon, 12 Feb 2024 at 15:50, Andres Freund  wrote:

> Hi,
>
> On 2024-02-12 13:28:40 -0500, Andrew Dunstan wrote:
> > On 2024-02-12 Mo 11:44, Dave Cramer wrote:
> > > OK, so I have managed to get a debugger attached to postgres.exe when
> it
> > > faults and the fault occurs at
> > >
> https://github.com/postgres/postgres/blob/09eb633e1baa3b7cd7929f3cc77f9c46f63c20b1/src/backend/utils/mmgr/dsa.c#L869
> > > span is pointing to 0x0
> >
> > Further data point. If I select a build type of 'debug' instead of
> > 'debugoptimized' the error disappears.
>
> Oh, this is quite interesting.  Dave, could you post the backtrace?
>

Andres,

I am using Visual Studio as the debugger. Here is what I have.

> postgres.exe!dsa_free(dsa_area * area, unsigned __int64 dp) Line 869 C
  postgres.exe!resize(dshash_table * hash_table, unsigned __int64
new_size_log2) Line 879 C
  postgres.exe!dshash_find_or_insert(dshash_table * hash_table, const void
* key, bool * found) Line 480 C
  postgres.exe!pgstat_get_entry_ref(PgStat_Kind kind, unsigned int dboid,
unsigned int objoid, bool create, bool * created_entry) Line 455 C
  postgres.exe!pgstat_prep_pending_entry(PgStat_Kind kind, unsigned int
dboid, unsigned int objoid, bool * created_entry) Line 1123 C
  [Inline Frame] postgres.exe!pgstat_prep_relation_pending(unsigned int)
Line 904 C
  postgres.exe!pgstat_assoc_relation(RelationData * rel) Line 139 C
  [Inline Frame] postgres.exe!ReadBufferExtended(RelationData *) Line 802 C
  postgres.exe!ReadBuffer(RelationData * reln, unsigned int blockNum) Line
737 C
  postgres.exe!read_seq_tuple(RelationData * rel, int * buf, HeapTupleData
* seqdatatuple) Line 1196 C
  postgres.exe!AlterSequence(ParseState * pstate, AlterSeqStmt * stmt) Line
481 C
  postgres.exe!ProcessUtilitySlow(ParseState * pstate, PlannedStmt * pstmt,
const char * queryString, ProcessUtilityContext context, ParamListInfoData
* params, QueryEnvironment * queryEnv, _DestReceiver * dest,
QueryCompletion * qc) Line 1679 C
  postgres.exe!standard_ProcessUtility(PlannedStmt * pstmt, const char *
queryString, bool readOnlyTree, ProcessUtilityContext context,
ParamListInfoData * params, QueryEnvironment * queryEnv, _DestReceiver *
dest, QueryCompletion * qc) Line 1080 C
  postgres.exe!ProcessUtility(PlannedStmt * pstmt, const char *
queryString, bool readOnlyTree, ProcessUtilityContext context,
ParamListInfoData * params, QueryEnvironment * queryEnv, _DestReceiver *
dest, QueryCompletion * qc) Line 530 C
  postgres.exe!ProcessUtilitySlow(ParseState * pstate, PlannedStmt * pstmt,
const char * queryString, ProcessUtilityContext context, ParamListInfoData
* params, QueryEnvironment * queryEnv, _DestReceiver * dest,
QueryCompletion * qc) Line 1263 C
  postgres.exe!standard_ProcessUtility(PlannedStmt * pstmt, const char *
queryString, bool readOnlyTree, ProcessUtilityContext context,
ParamListInfoData * params, QueryEnvironment * queryEnv, _DestReceiver *
dest, QueryCompletion * qc) Line 1080 C


If there is a better debugger to use, please let me know.

Dave



>
> I wonder if this indicates that we are either missing memory barriers
> somewhere or that the memory barriers we end up with on msvc + arm aren't
> correct?  Either could explain why the problem doesn't occur when building
> with optimizations.
>
> Greetings,
>
> Andres Freund
>


Re: [PATCH] Add native windows on arm64 support

2024-02-12 Thread Dave Cramer
Dave Cramer
www.postgres.rocks


On Mon, 12 Feb 2024 at 09:19, Andrew Dunstan  wrote:

>
> On 2024-02-12 Mo 08:51, Dave Cramer wrote:
>
>
>
> On Sat, 10 Feb 2024 at 13:28, Andrew Dunstan  wrote:
>
>>
>> On 2024-02-10 Sa 12:20, Dave Cramer wrote:
>>
>>
>>
>> On Sat, 10 Feb 2024 at 11:19, Andrew Dunstan  wrote:
>>
>>>
>>> On 2024-02-09 Fr 14:23, Dave Cramer wrote:
>>>
>>>
>>> Dave Cramer
>>> www.postgres.rocks
>>>
>>>
>>> On Fri, 9 Feb 2024 at 07:18, Dave Cramer 
>>>  wrote:
>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, 9 Feb 2024 at 00:26, Michael Paquier 
>>>> wrote:
>>>>
>>>>> On Tue, Feb 06, 2024 at 07:01:49AM -0500, Dave Cramer wrote:
>>>>> > Thanks, this patch works and
>>>>> > testing with meson passes.
>>>>>
>>>>> Only with the version posted at [1]?  Interesting, that's the same
>>>>> contents as v8 posted upthread, minus src/tools/ because we don't need
>>>>> to care about them anymore.
>>>>>
>>>>> Andrew, what's happening on the test side?  It does not seem you've
>>>>> mentioned any details about what is going wrong, or I have just missed
>>>>> them.
>>>>>
>>>>> > I'll try the buildfarm next.
>>>>>
>>>>> [1]:
>>>>> https://www.postgresql.org/message-id/ea42654a-3dc4-98b0-335b-56b7ec5e5...@dunslane.net
>>>>
>>>>
>>>> interestingly meson test does not produce any error
>>>> The buildfarm produces the following error for me:
>>>>
>>>> -SELECT relname, attname, coltypes, get_columns_length(coltypes)
>>>> - FROM check_columns
>>>> - WHERE get_columns_length(coltypes) % 8 != 0 OR
>>>> -   'name'::regtype::oid = ANY(coltypes);
>>>> - relname | attname | coltypes | get_columns_length
>>>> --+-+--+
>>>> -(0 rows)
>>>> -
>>>> +server closed the connection unexpectedly
>>>> + This probably means the server terminated abnormally
>>>> + before or while processing the request.
>>>> +connection to server was lost
>>>>
>>>
>>> Actually digging some more, here is the actual error
>>>
>>> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  server process (PID
>>> 11204) was terminated by exception 0xC005
>>> 2024-02-09 13:31:11.008 -05 postmaster[10672] DETAIL:  Failed process
>>> was running: VACUUM;
>>> 2024-02-09 13:31:11.008 -05 postmaster[10672] HINT:  See C include file
>>> "ntstatus.h" for a description of the hexadecimal value.
>>> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  terminating any
>>> other active server processes
>>> 2024-02-09 13:31:11.013 -05 postmaster[10672] LOG:  all server
>>> processes terminated; reinitializing
>>> 2024-02-09 13:31:11.034 -05 startup[6152] LOG:  database system was
>>> interrupted; last known up at 2024-02-09 13:31:01 -05
>>>
>>>
>>>
>>>
>>>
>> So how does one debug this ?
>>
>> Also if I `run meson test` I don't see this error. What does the
>> buildfarm do differently?
>>
>>
>> First it does this:
>>
>>
>> meson test -C $pgsql --no-rebuild --suite setup
>>
>>
>> Then it does this (jflag is for the number of jobs):
>>
>>
>> meson test -t $meson_test_timeout $jflag -C $pgsql --logbase checklog
>> --print-errorlogs --no-rebuild --suite regress --test-args=--no-locale
>>
>>
>
> running the above manually produces no errors ??
>
>
>
> Not for me. I get the error I previously reported. It's an access
> violation error.
>
>
> cheers
>
>
> andrew
>

OK, so I have managed to get a debugger attached to postgres.exe when it
faults and the fault occurs at
https://github.com/postgres/postgres/blob/09eb633e1baa3b7cd7929f3cc77f9c46f63c20b1/src/backend/utils/mmgr/dsa.c#L869
span is pointing to 0x0

Dave


Re: [PATCH] Add native windows on arm64 support

2024-02-12 Thread Dave Cramer
On Sat, 10 Feb 2024 at 13:28, Andrew Dunstan  wrote:

>
> On 2024-02-10 Sa 12:20, Dave Cramer wrote:
>
>
>
> On Sat, 10 Feb 2024 at 11:19, Andrew Dunstan  wrote:
>
>>
>> On 2024-02-09 Fr 14:23, Dave Cramer wrote:
>>
>>
>> Dave Cramer
>> www.postgres.rocks
>>
>>
>> On Fri, 9 Feb 2024 at 07:18, Dave Cramer 
>>  wrote:
>>
>>>
>>>
>>>
>>>
>>> On Fri, 9 Feb 2024 at 00:26, Michael Paquier 
>>> wrote:
>>>
>>>> On Tue, Feb 06, 2024 at 07:01:49AM -0500, Dave Cramer wrote:
>>>> > Thanks, this patch works and
>>>> > testing with meson passes.
>>>>
>>>> Only with the version posted at [1]?  Interesting, that's the same
>>>> contents as v8 posted upthread, minus src/tools/ because we don't need
>>>> to care about them anymore.
>>>>
>>>> Andrew, what's happening on the test side?  It does not seem you've
>>>> mentioned any details about what is going wrong, or I have just missed
>>>> them.
>>>>
>>>> > I'll try the buildfarm next.
>>>>
>>>> [1]:
>>>> https://www.postgresql.org/message-id/ea42654a-3dc4-98b0-335b-56b7ec5e5...@dunslane.net
>>>
>>>
>>> interestingly meson test does not produce any error
>>> The buildfarm produces the following error for me:
>>>
>>> -SELECT relname, attname, coltypes, get_columns_length(coltypes)
>>> - FROM check_columns
>>> - WHERE get_columns_length(coltypes) % 8 != 0 OR
>>> -   'name'::regtype::oid = ANY(coltypes);
>>> - relname | attname | coltypes | get_columns_length
>>> --+-+--+
>>> -(0 rows)
>>> -
>>> +server closed the connection unexpectedly
>>> + This probably means the server terminated abnormally
>>> + before or while processing the request.
>>> +connection to server was lost
>>>
>>
>> Actually digging some more, here is the actual error
>>
>> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  server process (PID
>> 11204) was terminated by exception 0xC005
>> 2024-02-09 13:31:11.008 -05 postmaster[10672] DETAIL:  Failed process
>> was running: VACUUM;
>> 2024-02-09 13:31:11.008 -05 postmaster[10672] HINT:  See C include file
>> "ntstatus.h" for a description of the hexadecimal value.
>> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  terminating any
>> other active server processes
>> 2024-02-09 13:31:11.013 -05 postmaster[10672] LOG:  all server processes
>> terminated; reinitializing
>> 2024-02-09 13:31:11.034 -05 startup[6152] LOG:  database system was
>> interrupted; last known up at 2024-02-09 13:31:01 -05
>>
>>
>>
>>
>>
> So how does one debug this ?
>
> Also if I `run meson test` I don't see this error. What does the buildfarm
> do differently?
>
>
> First it does this:
>
>
> meson test -C $pgsql --no-rebuild --suite setup
>
>
> Then it does this (jflag is for the number of jobs):
>
>
> meson test -t $meson_test_timeout $jflag -C $pgsql --logbase checklog
> --print-errorlogs --no-rebuild --suite regress --test-args=--no-locale
>
>

Running the above manually produces no errors?

Dave


Re: [PATCH] Add native windows on arm64 support

2024-02-10 Thread Dave Cramer
On Sat, 10 Feb 2024 at 11:19, Andrew Dunstan  wrote:

>
> On 2024-02-09 Fr 14:23, Dave Cramer wrote:
>
>
> Dave Cramer
> www.postgres.rocks
>
>
> On Fri, 9 Feb 2024 at 07:18, Dave Cramer 
>  wrote:
>
>>
>>
>>
>>
>> On Fri, 9 Feb 2024 at 00:26, Michael Paquier  wrote:
>>
>>> On Tue, Feb 06, 2024 at 07:01:49AM -0500, Dave Cramer wrote:
>>> > Thanks, this patch works and
>>> > testing with meson passes.
>>>
>>> Only with the version posted at [1]?  Interesting, that's the same
>>> contents as v8 posted upthread, minus src/tools/ because we don't need
>>> to care about them anymore.
>>>
>>> Andrew, what's happening on the test side?  It does not seem you've
>>> mentioned any details about what is going wrong, or I have just missed
>>> them.
>>>
>>> > I'll try the buildfarm next.
>>>
>>> [1]:
>>> https://www.postgresql.org/message-id/ea42654a-3dc4-98b0-335b-56b7ec5e5...@dunslane.net
>>
>>
>> interestingly meson test does not produce any error
>> The buildfarm produces the following error for me:
>>
>> -SELECT relname, attname, coltypes, get_columns_length(coltypes)
>> - FROM check_columns
>> - WHERE get_columns_length(coltypes) % 8 != 0 OR
>> -   'name'::regtype::oid = ANY(coltypes);
>> - relname | attname | coltypes | get_columns_length
>> --+-+--+
>> -(0 rows)
>> -
>> +server closed the connection unexpectedly
>> + This probably means the server terminated abnormally
>> + before or while processing the request.
>> +connection to server was lost
>>
>
> Actually digging some more, here is the actual error
>
> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  server process (PID
> 11204) was terminated by exception 0xC005
> 2024-02-09 13:31:11.008 -05 postmaster[10672] DETAIL:  Failed process was
> running: VACUUM;
> 2024-02-09 13:31:11.008 -05 postmaster[10672] HINT:  See C include file
> "ntstatus.h" for a description of the hexadecimal value.
> 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  terminating any other
> active server processes
> 2024-02-09 13:31:11.013 -05 postmaster[10672] LOG:  all server processes
> terminated; reinitializing
> 2024-02-09 13:31:11.034 -05 startup[6152] LOG:  database system was
> interrupted; last known up at 2024-02-09 13:31:01 -05
>
>
>
>
>
So how does one debug this?

Also, if I run `meson test` I don't see this error. What does the buildfarm
do differently?

Dave

> Yes, this is pretty much what I saw.
>
>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>


Re: [PATCH] Add native windows on arm64 support

2024-02-09 Thread Dave Cramer
On Fri, 9 Feb 2024 at 14:36, Andres Freund  wrote:

> Hi,
>
> On 2024-02-09 14:23:46 -0500, Dave Cramer wrote:
> > > interestingly meson test does not produce any error
> > > The buildfarm produces the following error for me:
> > >
> > > -SELECT relname, attname, coltypes, get_columns_length(coltypes)
> > > - FROM check_columns
> > > - WHERE get_columns_length(coltypes) % 8 != 0 OR
> > > -   'name'::regtype::oid = ANY(coltypes);
> > > - relname | attname | coltypes | get_columns_length
> > > --+-+--+
> > > -(0 rows)
> > > -
> > > +server closed the connection unexpectedly
> > > + This probably means the server terminated abnormally
> > > + before or while processing the request.
> > > +connection to server was lost
> > >
> >
> > Actually digging some more, here is the actual error
> >
> > 2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  server process (PID
> > 11204) was terminated by exception 0xC005
> > 2024-02-09 13:31:11.008 -05 postmaster[10672] DETAIL:  Failed process was
> > running: VACUUM;
> > 2024-02-09 13:31:11.008 -05 postmaster[10672] HINT:  See C include file
> > "ntstatus.h" for a description of the hexadecimal value.
>
> That's something like a segfault.
>
> One suspicion I have is that src/port/pg_crc32c_armv8_choose.c possibly
> doesn't properly support msvc.  It seems to assume that SIGILL can be
> trapped,
> but that IIRC doesn't work on windows.
>
> I'd check if the problem persists if you change
> cdata.set('USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 1)
> to
> cdata.set('USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 0)
>

This results in

FAILED: src/bin/pg_checksums/pg_checksums.exe
src/bin/pg_checksums/pg_checksums.pdb
"link"  /MACHINE:ARM64 /OUT:src/bin/pg_checksums/pg_checksums.exe
src/bin/pg_checksums/pg_checksums.exe.p/win32ver.res
src/bin/pg_checksums/pg_checksums.exe.p/pg_checksums.c.obj "/release"
"/nologo" "/DEBUG" "/PDB:src\bin\pg_checksums\pg_checksums.pdb"
"/INCREMENTAL:NO" "/STACK:4194304" "/NOEXP" "src/fe_utils/libpgfeutils.a"
"src/common/libpgcommon.a" "src/port/libpgport.a" "ws2_32.lib" "ws2_32.lib"
"ws2_32.lib" "ws2_32.lib" "/SUBSYSTEM:CONSOLE" "kernel32.lib" "user32.lib"
"gdi32.lib" "winspool.lib" "shell32.lib" "ole32.lib" "oleaut32.lib"
"uuid.lib" "comdlg32.lib" "advapi32.lib"
libpgcommon.a(controldata_utils.c.obj) : error LNK2001: unresolved external
symbol pg_comp_crc32c


Dave


>
>
> Also, yikes, that's an ugly way of doing hardware detection. Jumping out
> of a
> signal handler into normal code. Brrr.
>
> Greetings,
>
> Andres Freund
>


Re: [PATCH] Add native windows on arm64 support

2024-02-09 Thread Dave Cramer
Dave Cramer
www.postgres.rocks


On Fri, 9 Feb 2024 at 07:18, Dave Cramer  wrote:

>
>
>
>
> On Fri, 9 Feb 2024 at 00:26, Michael Paquier  wrote:
>
>> On Tue, Feb 06, 2024 at 07:01:49AM -0500, Dave Cramer wrote:
>> > Thanks, this patch works and
>> > testing with meson passes.
>>
>> Only with the version posted at [1]?  Interesting, that's the same
>> contents as v8 posted upthread, minus src/tools/ because we don't need
>> to care about them anymore.
>>
>> Andrew, what's happening on the test side?  It does not seem you've
>> mentioned any details about what is going wrong, or I have just missed
>> them.
>>
>> > I'll try the buildfarm next.
>>
>> [1]:
>> https://www.postgresql.org/message-id/ea42654a-3dc4-98b0-335b-56b7ec5e5...@dunslane.net
>
>
> interestingly meson test does not produce any error
> The buildfarm produces the following error for me:
>
> -SELECT relname, attname, coltypes, get_columns_length(coltypes)
> - FROM check_columns
> - WHERE get_columns_length(coltypes) % 8 != 0 OR
> -   'name'::regtype::oid = ANY(coltypes);
> - relname | attname | coltypes | get_columns_length
> --+-+--+
> -(0 rows)
> -
> +server closed the connection unexpectedly
> + This probably means the server terminated abnormally
> + before or while processing the request.
> +connection to server was lost
>

Actually digging some more, here is the actual error

2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  server process (PID
11204) was terminated by exception 0xC005
2024-02-09 13:31:11.008 -05 postmaster[10672] DETAIL:  Failed process was
running: VACUUM;
2024-02-09 13:31:11.008 -05 postmaster[10672] HINT:  See C include file
"ntstatus.h" for a description of the hexadecimal value.
2024-02-09 13:31:11.008 -05 postmaster[10672] LOG:  terminating any other
active server processes
2024-02-09 13:31:11.013 -05 postmaster[10672] LOG:  all server processes
terminated; reinitializing
2024-02-09 13:31:11.034 -05 startup[6152] LOG:  database system was
interrupted; last known up at 2024-02-09 13:31:01 -05


Dave

>
> Dave
>


Re: [PATCH] Add native windows on arm64 support

2024-02-09 Thread Dave Cramer
On Fri, 9 Feb 2024 at 00:26, Michael Paquier  wrote:

> On Tue, Feb 06, 2024 at 07:01:49AM -0500, Dave Cramer wrote:
> > Thanks, this patch works and
> > testing with meson passes.
>
> Only with the version posted at [1]?  Interesting, that's the same
> contents as v8 posted upthread, minus src/tools/ because we don't need
> to care about them anymore.
>
> Andrew, what's happening on the test side?  It does not seem you've
> mentioned any details about what is going wrong, or I have just missed
> them.
>
> > I'll try the buildfarm next.
>
> [1]:
> https://www.postgresql.org/message-id/ea42654a-3dc4-98b0-335b-56b7ec5e5...@dunslane.net


interestingly meson test does not produce any error
The buildfarm produces the following error for me:

-SELECT relname, attname, coltypes, get_columns_length(coltypes)
- FROM check_columns
- WHERE get_columns_length(coltypes) % 8 != 0 OR
-   'name'::regtype::oid = ANY(coltypes);
- relname | attname | coltypes | get_columns_length
--+-+--+
-(0 rows)
-
+server closed the connection unexpectedly
+ This probably means the server terminated abnormally
+ before or while processing the request.
+connection to server was lost

Dave


Re: [PATCH] Add native windows on arm64 support

2024-02-06 Thread Dave Cramer
On Wed, 31 Jan 2024 at 10:21, Andrew Dunstan  wrote:

>
> On 2024-01-30 Tu 17:54, Dave Cramer wrote:
>
>
>
>
> On Tue, Jan 30, 2024 at 4:56 PM Andrew Dunstan 
> wrote:
>
>>
>> On 2024-01-30 Tu 09:50, Dave Cramer wrote:
>>
>>
>>
>> On Tue, 30 Jan 2024 at 08:38, Andrew Dunstan  wrote:
>>
>>>
>>> On 2024-01-29 Mo 11:20, Dave Cramer wrote:
>>>
>>>
>>> Dave Cramer
>>> www.postgres.rocks
>>>
>>>
>>> On Mon, 29 Jan 2024 at 11:16, Andrew Dunstan 
>>> wrote:
>>>
>>>>
>>>> On 2024-01-26 Fr 09:18, Dave Cramer wrote:
>>>>
>>>>
>>>>
>>>> On Fri, 26 Jan 2024 at 07:36, Andrew Dunstan 
>>>> wrote:
>>>>
>>>>>
>>>>> On 2024-01-25 Th 20:32, Michael Paquier wrote:
>>>>> > On Thu, Jan 25, 2024 at 04:52:30PM -0500, Dave Cramer wrote:
>>>>> >> On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan 
>>>>> wrote:
>>>>> >>> On 2024-01-25 Th 16:17, Dave Cramer wrote:
>>>>> >>> Yeah, I think the default Developer Command Prompt for VS2022 is
>>>>> set up
>>>>> >>> for x86 builds. AIUI you should start by executing "vcvarsall
>>>>> x64_arm64".
>>>>> >> Yup, now I'm in the same state you are
>>>>> > Wait a minute here.  Based on [1], x64_arm64 means you can use a x64
>>>>> > host and you'll be able to produce ARM64 builds, still these will not
>>>>> > be able to run on the host where they were built.  How much of the
>>>>> > patch posted upthread is required to produce such builds?  Basically
>>>>> > everything from it, I guess, so as build dependencies can be
>>>>> > satisfied?
>>>>> >
>>>>> > [1]:
>>>>> https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170
>>>>>
>>>>>
>>>>> If you look at the table here x86 and x64 are the only supported host
>>>>> architectures. But that's OK, the x64 binaries will run on arm64 (W11
>>>>> ARM64 has x64 emulation builtin). If that didn't work Dave and I would
>>>>> not have got as far as we have. But you want the x64_arm64 argument to
>>>>> vcvarsall so you will get ARM64 output.
>>>>>
>>>>
>>>> I've rebuilt it using  x64_arm64 and with the attached (very naive
>>>> patch) and I still get an x64 binary :(
>>>>
>>>>
>>>> With this patch I still get a build error, but it's different :-)
>>>>
>>>>
>>>> [1406/2088] "link" @src/backend/postgres.exe.rsp
>>>> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
>>>> "link" @src/backend/postgres.exe.rsp
>>>>Creating library src\backend\postgres.exe.lib
>>>>
>>>> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
>>>> spin_delay referenced in function perform_spin_delay
>>>>
>>>> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>>>>
>>>
>>> Did you add the latest lock.patch ?
>>>
>>>
>>>
>>>
>>> I'm a bit confused about exactly what needs to be applied. Can you
>>> supply a complete patch to be applied to a pristine checkout that will let
>>> me build?
>>>
>>>
>>> cheers
>>>
>>
>> See attached.
>>
>>
>>
>> No, that is what is giving me the error shown above (just tried again to
>> be certain). And it's not surprising, as patch 2 #ifdef's out the
>> definition of spin_delay().
>>
>> If you can get a complete build with these patches then I suspect you're
>> not doing a proper ARM64 build.
>>
>
> Okay I will look when I get home in a week
>
>
> I made some progress. The attached is mostly taken from
> <https://postgr.es/m/dbee741f-b9b7-a0d5-1b1b-f9b532bb6...@linaro.org>
> <https://postgr.es/m/dbee741f-b9b7-a0d5-1b1b-f9b532bb6...@linaro.org>
>
> With it applied I was able to get a successful build using the buildfarm
> client. However, there are access violations when running some tests, so
> there is still some work to do, apparently.
>
>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>

Thanks, this patch works and
testing with meson passes.

I'll try the buildfarm next.

Dave


Re: When extended query protocol ends?

2024-02-01 Thread Dave Cramer
On Mon, 29 Jan 2024 at 20:15, Tatsuo Ishii  wrote:

> Hello Dave,
>
> > Tatsuo Ishii  writes:
> >> Below is outputs from "pgproto" command coming with Pgpool-II.
> >> (Lines starting "FE" represents a message from frontend to backend.
> >> Lines starting "BE" represents a message from backend to frontend.)
> >
> >> FE=> Parse(stmt="", query="SET extra_float_digits = 3")
> >> FE=> Bind(stmt="", portal="")
> >> FE=> Execute(portal="")
> >> FE=> Parse(stmt="", query="SET extra_float_digits = 3")
> >> FE=> Bind(stmt="", portal="")
> >> FE=> Execute(portal="")
> >> FE=> Query (query="SET extra_float_digits = 3")
> >> <= BE ParseComplete
> >> <= BE BindComplete
> >> <= BE CommandComplete(SET)
> >> <= BE ParseComplete
> >> <= BE BindComplete
> >> <= BE CommandComplete(SET)
> >> <= BE CommandComplete(SET)
> >> <= BE ReadyForQuery(I)
> >> FE=> Terminate
> >
> >> As you can see, two "SET extra_float_digits = 3" is sent in the
> >> extended query protocol, then one "SET extra_float_digits = 3" follows
> >> in the simple query protocol. No sync message is sent. However, I get
> >> ReadyForQuery at the end. It seems the extended query protocol is
> >> ended by a simple query protocol message instead of a sync message.
> >
> >> My question is, is this legal in terms of fronted/backend protocol?
> >
> > I think it's poor practice, at best.  You should end the
> > extended-protocol query cycle before invoking simple query.
>
> From [1] I think the JDBC driver sends something like below if
> autosave=always option is specified.
>
> "BEGIN READ ONLY" Parse/Bind/Eexecute (in the extended query protocol)
> "SAVEPOINT PGJDBC_AUTOSAVE" (in the simple query protocol)
>
> It seems the SAVEPOINT is sent without finishing the extended query
> protocol (i.e. without Sync message). Is it possible for the JDBC
> driver to issue a Sync message before sending SAVEPOINT in simple
> query protocol? Or you can send SAVEPOINT using the extended query
> protocol.
>
> [1]
> https://www.pgpool.net/pipermail/pgpool-general/2023-December/009051.html
>
>
Hi Tatsuo,

Yes, it would be possible.
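
For example, a flow along these lines (just a sketch of what the driver
could send, not what it does today) would finish the extended-query cycle
with a Sync before the simple-query SAVEPOINT:

FE=> Parse(stmt="", query="BEGIN READ ONLY")
FE=> Bind(stmt="", portal="")
FE=> Execute(portal="")
FE=> Sync
<= BE ParseComplete
<= BE BindComplete
<= BE CommandComplete(BEGIN)
<= BE ReadyForQuery(T)
FE=> Query (query="SAVEPOINT PGJDBC_AUTOSAVE")
<= BE CommandComplete(SAVEPOINT)
<= BE ReadyForQuery(T)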

Can you create an issue on github? Issues · pgjdbc/pgjdbc (github.com)


Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-30 Thread Dave Cramer
On Tue, Jan 30, 2024 at 4:56 PM Andrew Dunstan  wrote:

>
> On 2024-01-30 Tu 09:50, Dave Cramer wrote:
>
>
>
> On Tue, 30 Jan 2024 at 08:38, Andrew Dunstan  wrote:
>
>>
>> On 2024-01-29 Mo 11:20, Dave Cramer wrote:
>>
>>
>> Dave Cramer
>> www.postgres.rocks
>>
>>
>> On Mon, 29 Jan 2024 at 11:16, Andrew Dunstan  wrote:
>>
>>>
>>> On 2024-01-26 Fr 09:18, Dave Cramer wrote:
>>>
>>>
>>>
>>> On Fri, 26 Jan 2024 at 07:36, Andrew Dunstan 
>>> wrote:
>>>
>>>>
>>>> On 2024-01-25 Th 20:32, Michael Paquier wrote:
>>>> > On Thu, Jan 25, 2024 at 04:52:30PM -0500, Dave Cramer wrote:
>>>> >> On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan 
>>>> wrote:
>>>> >>> On 2024-01-25 Th 16:17, Dave Cramer wrote:
>>>> >>> Yeah, I think the default Developer Command Prompt for VS2022 is
>>>> set up
>>>> >>> for x86 builds. AIUI you should start by executing "vcvarsall
>>>> x64_arm64".
>>>> >> Yup, now I'm in the same state you are
>>>> > Wait a minute here.  Based on [1], x64_arm64 means you can use a x64
>>>> > host and you'll be able to produce ARM64 builds, still these will not
>>>> > be able to run on the host where they were built.  How much of the
>>>> > patch posted upthread is required to produce such builds?  Basically
>>>> > everything from it, I guess, so as build dependencies can be
>>>> > satisfied?
>>>> >
>>>> > [1]:
>>>> https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170
>>>>
>>>>
>>>> If you look at the table here x86 and x64 are the only supported host
>>>> architectures. But that's OK, the x64 binaries will run on arm64 (W11
>>>> ARM64 has x64 emulation builtin). If that didn't work Dave and I would
>>>> not have got as far as we have. But you want the x64_arm64 argument to
>>>> vcvarsall so you will get ARM64 output.
>>>>
>>>
>>> I've rebuilt it using  x64_arm64 and with the attached (very naive
>>> patch) and I still get an x64 binary :(
>>>
>>>
>>> With this patch I still get a build error, but it's different :-)
>>>
>>>
>>> [1406/2088] "link" @src/backend/postgres.exe.rsp
>>> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
>>> "link" @src/backend/postgres.exe.rsp
>>>Creating library src\backend\postgres.exe.lib
>>>
>>> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
>>> spin_delay referenced in function perform_spin_delay
>>>
>>> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>>>
>>
>> Did you add the latest lock.patch ?
>>
>>
>>
>>
>> I'm a bit confused about exactly what needs to be applied. Can you supply
>> a complete patch to be applied to a pristine checkout that will let me
>> build?
>>
>>
>> cheers
>>
>
> See attached.
>
>
>
> No, that is what is giving me the error shown above (just tried again to
> be certain). And it's not surprising, as patch 2 #ifdef's out the
> definition of spin_delay().
>
> If you can get a complete build with these patches then I suspect you're
> not doing a proper ARM64 build.
>
>
Okay I will look when I get home in a week

Dave

>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>


Re: [PATCH] Add native windows on arm64 support

2024-01-30 Thread Dave Cramer
On Tue, 30 Jan 2024 at 08:38, Andrew Dunstan  wrote:

>
> On 2024-01-29 Mo 11:20, Dave Cramer wrote:
>
>
> Dave Cramer
> www.postgres.rocks
>
>
> On Mon, 29 Jan 2024 at 11:16, Andrew Dunstan  wrote:
>
>>
>> On 2024-01-26 Fr 09:18, Dave Cramer wrote:
>>
>>
>>
>> On Fri, 26 Jan 2024 at 07:36, Andrew Dunstan  wrote:
>>
>>>
>>> On 2024-01-25 Th 20:32, Michael Paquier wrote:
>>> > On Thu, Jan 25, 2024 at 04:52:30PM -0500, Dave Cramer wrote:
>>> >> On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan 
>>> wrote:
>>> >>> On 2024-01-25 Th 16:17, Dave Cramer wrote:
>>> >>> Yeah, I think the default Developer Command Prompt for VS2022 is set
>>> up
>>> >>> for x86 builds. AIUI you should start by executing "vcvarsall
>>> x64_arm64".
>>> >> Yup, now I'm in the same state you are
>>> > Wait a minute here.  Based on [1], x64_arm64 means you can use a x64
>>> > host and you'll be able to produce ARM64 builds, still these will not
>>> > be able to run on the host where they were built.  How much of the
>>> > patch posted upthread is required to produce such builds?  Basically
>>> > everything from it, I guess, so as build dependencies can be
>>> > satisfied?
>>> >
>>> > [1]:
>>> https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170
>>>
>>>
>>> If you look at the table here x86 and x64 are the only supported host
>>> architectures. But that's OK, the x64 binaries will run on arm64 (W11
>>> ARM64 has x64 emulation builtin). If that didn't work Dave and I would
>>> not have got as far as we have. But you want the x64_arm64 argument to
>>> vcvarsall so you will get ARM64 output.
>>>
>>
>> I've rebuilt it using  x64_arm64 and with the attached (very naive patch)
>> and I still get an x64 binary :(
>>
>>
>> With this patch I still get a build error, but it's different :-)
>>
>>
>> [1406/2088] "link" @src/backend/postgres.exe.rsp
>> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
>> "link" @src/backend/postgres.exe.rsp
>>Creating library src\backend\postgres.exe.lib
>>
>> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
>> spin_delay referenced in function perform_spin_delay
>>
>> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>>
>
> Did you add the latest lock.patch ?
>
>
>
>
> I'm a bit confused about exactly what needs to be applied. Can you supply
> a complete patch to be applied to a pristine checkout that will let me
> build?
>
>
> cheers
>

See attached.

Dave


0001-fix-build-for-arm.patch
Description: Binary data


0002-naive-patch-to-fix-locking-for-arm64.patch
Description: Binary data


Re: [PATCH] Add native windows on arm64 support

2024-01-29 Thread Dave Cramer
Dave Cramer
www.postgres.rocks


On Mon, 29 Jan 2024 at 11:16, Andrew Dunstan  wrote:

>
> On 2024-01-26 Fr 09:18, Dave Cramer wrote:
>
>
>
> On Fri, 26 Jan 2024 at 07:36, Andrew Dunstan  wrote:
>
>>
>> On 2024-01-25 Th 20:32, Michael Paquier wrote:
>> > On Thu, Jan 25, 2024 at 04:52:30PM -0500, Dave Cramer wrote:
>> >> On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan 
>> wrote:
>> >>> On 2024-01-25 Th 16:17, Dave Cramer wrote:
>> >>> Yeah, I think the default Developer Command Prompt for VS2022 is set
>> up
>> >>> for x86 builds. AIUI you should start by executing "vcvarsall
>> x64_arm64".
>> >> Yup, now I'm in the same state you are
>> > Wait a minute here.  Based on [1], x64_arm64 means you can use a x64
>> > host and you'll be able to produce ARM64 builds, still these will not
>> > be able to run on the host where they were built.  How much of the
>> > patch posted upthread is required to produce such builds?  Basically
>> > everything from it, I guess, so as build dependencies can be
>> > satisfied?
>> >
>> > [1]:
>> https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170
>>
>>
>> If you look at the table here x86 and x64 are the only supported host
>> architectures. But that's OK, the x64 binaries will run on arm64 (W11
>> ARM64 has x64 emulation builtin). If that didn't work Dave and I would
>> not have got as far as we have. But you want the x64_arm64 argument to
>> vcvarsall so you will get ARM64 output.
>>
>
> I've rebuilt it using  x64_arm64 and with the attached (very naive patch)
> and I still get an x64 binary :(
>
>
> With this patch I still get a build error, but it's different :-)
>
>
> [1406/2088] "link" @src/backend/postgres.exe.rsp
> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
> "link" @src/backend/postgres.exe.rsp
>Creating library src\backend\postgres.exe.lib
>
> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
> spin_delay referenced in function perform_spin_delay
>
> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>

Did you add the latest lock.patch ?
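
For reference, the kind of definition an ARM64 MSVC build ultimately needs in
src/include/storage/s_lock.h looks roughly like this (a sketch only -- the
attached lock.patch may differ), since _mm_pause() is an x86/x64 intrinsic
and does not exist for ARM64:

#if defined(_MSC_VER) && defined(_M_ARM64)
/* requires <intrin.h>; __yield() is the MSVC ARM64 spin-wait hint */
static __forceinline void
spin_delay(void)
{
	__yield();
}
#endif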

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-26 Thread Dave Cramer
On Fri, 26 Jan 2024 at 07:36, Andrew Dunstan  wrote:

>
> On 2024-01-25 Th 20:32, Michael Paquier wrote:
> > On Thu, Jan 25, 2024 at 04:52:30PM -0500, Dave Cramer wrote:
> >> On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan 
> wrote:
> >>> On 2024-01-25 Th 16:17, Dave Cramer wrote:
> >>> Yeah, I think the default Developer Command Prompt for VS2022 is set up
> >>> for x86 builds. AIUI you should start by executing "vcvarsall
> x64_arm64".
> >> Yup, now I'm in the same state you are
> > Wait a minute here.  Based on [1], x64_arm64 means you can use a x64
> > host and you'll be able to produce ARM64 builds, still these will not
> > be able to run on the host where they were built.  How much of the
> > patch posted upthread is required to produce such builds?  Basically
> > everything from it, I guess, so as build dependencies can be
> > satisfied?
> >
> > [1]:
> https://learn.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-170
>
>
> If you look at the table here x86 and x64 are the only supported host
> architectures. But that's OK, the x64 binaries will run on arm64 (W11
> ARM64 has x64 emulation builtin). If that didn't work Dave and I would
> not have got as far as we have. But you want the x64_arm64 argument to
> vcvarsall so you will get ARM64 output.
>

I've rebuilt it using  x64_arm64 and with the attached (very naive patch)
and I still get an x64 binary :(

>
>


lock.patch
Description: Binary data


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 16:32, Andrew Dunstan  wrote:

>
> On 2024-01-25 Th 16:17, Dave Cramer wrote:
>
>
>
> On Thu, 25 Jan 2024 at 16:04, Anthony Roberts 
> wrote:
>
>> Hi David,
>>
>> Unix "file" or "dumpbin /headers" in vcvarsall are your best bets.
>>
>> Thanks,
>> Anthony
>>
>
>
> So there is another way, select the file in Windows Explorer and right
> click, in the compatibility tab if the "Windows on ARM" is greyed out it is
> an arm binary.
>
> So far mine are not :(
>
>
> Yeah, I think the default Developer Command Prompt for VS2022 is set up
> for x86 builds. AIUI you should start by executing "vcvarsall x64_arm64".
>

Yup, now I'm in the same state you are
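
For anyone following along, the sequence I'm now using is roughly this (the
vcvarsall path is the default VS 2022 Community install location -- adjust
for your edition):

"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x64_arm64
meson setup build --prefix=c:\postgres
ninja -C build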

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 16:04, Anthony Roberts 
wrote:

> Hi David,
>
> Unix "file" or "dumpbin /headers" in vcvarsall are your best bets.
>
> Thanks,
> Anthony
>


So there is another way, select the file in Windows Explorer and right
click, in the compatibility tab if the "Windows on ARM" is greyed out it is
an arm binary.

So far mine are not :(

Thanks,

Dave

>


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 12:30, Andrew Dunstan  wrote:

>
> On 2024-01-24 We 19:02, Michael Paquier wrote:
>
> On Wed, Jan 24, 2024 at 06:45:21AM -0500, Dave Cramer wrote:
>
> I managed to get it to build the vcvarsall arch needs to be x64. I need to
> add some options, but the patch above needs to be applied to build it.
>
> Nice.  If I may ask, what kind of host and/or configuration have you
> used to reach a state where the code can be compiled and run tests
> with meson?  If you have found specific steps, it may be a good thing
> to document that on the wiki, say around [1].
>
> Perhaps you have not included TAP?  It may be fine in terms of runtime
> checks and coverage.
>
> [1]: 
> https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Running_on_Windows
>
>
>
> I now have an ARM64 machine, so I set up a W11 ARM64 VM. I think we really
> want to build with x64_arm64, i.e. to generate native arm64 binaries.
> Setting just x64 will not do that, AIUI.
>
> I tried that with the buidfarm, setting that in the config file's call to
> PGBuild::VSenv::getenv().
>
> That upset msvc_gendef.pl, so I added this there to keep it happy:
>
> $arch = 'x86_64' if $arch eq 'aarch64';
>
> After that things went ok until I got this:
>
> [1453/2088] "link" @src/backend/postgres.exe.rsp
> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
> "link" @src/backend/postgres.exe.rsp
>Creating library src\backend\postgres.exe.lib
> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
> _mm_pause referenced in function perform_spin_delay
> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>
>
> I haven't made further progress, but I will return to it in the next day
> or so.
>
> While this will be nice to have, I think it won't really matter until
> there is ARM64 support in released versions of Windows Server. AFAICT they
> still only sell versions for x86_64
>

I've tried it with my patch attached previously and x64_arm64 and it works
fine. builds using the buildfarm as well.

Is there a definitive way to figure out if the binaries are actually arm64?

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 14:31, Andrew Dunstan  wrote:

>
> On 2024-01-25 Th 08:45, Dave Cramer wrote:
>
>
>
>
>
> I tried running my buildfarm using my git repo and my branch, but get the
> following error
> Status Line: 492 bad branch parameter
> Content:
> bad branch parameter fix_arm
>
> Web txn failed with status: 1
>
>
>
> You can't use your own branch with the buildfarm, you need to let it set
> up its own branches.
>

So I guess I have to wait until this patch is merged in ?

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 12:30, Andrew Dunstan  wrote:

>
> On 2024-01-24 We 19:02, Michael Paquier wrote:
>
> On Wed, Jan 24, 2024 at 06:45:21AM -0500, Dave Cramer wrote:
>
> I managed to get it to build the vcvarsall arch needs to be x64. I need to
> add some options, but the patch above needs to be applied to build it.
>
> Nice.  If I may ask, what kind of host and/or configuration have you
> used to reach a state where the code can be compiled and run tests
> with meson?  If you have found specific steps, it may be a good thing
> to document that on the wiki, say around [1].
>
> Perhaps you have not included TAP?  It may be fine in terms of runtime
> checks and coverage.
>
> [1]: 
> https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Running_on_Windows
>
>
>
> I now have an ARM64 machine, so I set up a W11 ARM64 VM. I think we really
> want to build with x64_arm64, i.e. to generate native arm64 binaries.
> Setting just x64 will not do that, AIUI.
>
> I tried that with the buidfarm, setting that in the config file's call to
> PGBuild::VSenv::getenv().
>
> That upset msvc_gendef.pl, so I added this there to keep it happy:
>
> $arch = 'x86_64' if $arch eq 'aarch64';
>
> After that things went ok until I got this:
>
> [1453/2088] "link" @src/backend/postgres.exe.rsp
> FAILED: src/backend/postgres.exe src/backend/postgres.pdb
> "link" @src/backend/postgres.exe.rsp
>Creating library src\backend\postgres.exe.lib
> storage_lmgr_s_lock.c.obj : error LNK2019: unresolved external symbol
> _mm_pause referenced in function perform_spin_delay
> src\backend\postgres.exe : fatal error LNK1120: 1 unresolved externals
>
>
> I haven't made further progress, but I will return to it in the next day
> or so.
>
> While this will be nice to have, I think it won't really matter until
> there is ARM64 support in released versions of Windows Server. AFAICT they
> still only sell versions for x86_64
>

I need it to build clients. The clients need arm64 libraries to link against

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Thu, 25 Jan 2024 at 08:31, Dave Cramer  wrote:

>
>
> On Wed, 24 Jan 2024 at 19:03, Michael Paquier  wrote:
>
>> On Wed, Jan 24, 2024 at 06:45:21AM -0500, Dave Cramer wrote:
>> > I managed to get it to build the vcvarsall arch needs to be x64. I need
>> to
>> > add some options, but the patch above needs to be applied to build it.
>>
>> Nice.  If I may ask, what kind of host and/or configuration have you
>> used to reach a state where the code can be compiled and run tests
>> with meson?  If you have found specific steps, it may be a good thing
>> to document that on the wiki, say around [1].
>>
>
> I am running a mac-mini M2 with parallels running windows pro 11
> The only thing required is this patch. Running in a native developer
> prompt
>
> meson setup build --prefix=c:\postgres
> cd build
> ninja
> ninja install
> ninja test
>
> all run without errors
> I also have buildfarm running and will run that today
>
> Dave
>
>>
>> Perhaps you have not included TAP?  It may be fine in terms of runtime
>> checks and coverage.
>>
>
> Does TAP have to be explicitly added to the meson build ?
>
> Dave
>


I tried running my buildfarm using my git repo and my branch, but get the
following error
Status Line: 492 bad branch parameter
Content:
bad branch parameter fix_arm

Web txn failed with status: 1


Re: [PATCH] Add native windows on arm64 support

2024-01-25 Thread Dave Cramer
On Wed, 24 Jan 2024 at 19:03, Michael Paquier  wrote:

> On Wed, Jan 24, 2024 at 06:45:21AM -0500, Dave Cramer wrote:
> > I managed to get it to build the vcvarsall arch needs to be x64. I need
> to
> > add some options, but the patch above needs to be applied to build it.
>
> Nice.  If I may ask, what kind of host and/or configuration have you
> used to reach a state where the code can be compiled and run tests
> with meson?  If you have found specific steps, it may be a good thing
> to document that on the wiki, say around [1].
>

I am running a Mac mini M2 with Parallels running Windows 11 Pro.
The only thing required is this patch. Running in a native developer
prompt:

meson setup build --prefix=c:\postgres
cd build
ninja
ninja install
ninja test

all run without errors
I also have buildfarm running and will run that today

Dave

>
> Perhaps you have not included TAP?  It may be fine in terms of runtime
> checks and coverage.
>

Does TAP have to be explicitly added to the meson build ?

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-24 Thread Dave Cramer
On Tue, 23 Jan 2024 at 18:32, Michael Paquier  wrote:

> On Tue, Jan 23, 2024 at 04:13:05PM -0500, Dave Cramer wrote:
> > On Tue, 23 Jan 2024 at 08:46, Dave Cramer 
> wrote:
> >> The attached patch works with v17. I will work on getting a buildfarm
> >> animal up shortly
>
> Thanks for doing that!
>
> > I can build using meson manually, but for some reason building with the
> > buildfarm fails.
> >
> > I'm not sure which files to attach to this to help. Andrew, what files do
> > you need ?
>
> Probably build.ninja and the contents of meson-logs/ would offer
> enough information?
> --
> Michael
>

I managed to get it to build; the vcvarsall arch needs to be x64. I need to
add some options, but the patch above needs to be applied to build it.

Dave


Re: [PATCH] Add native windows on arm64 support

2024-01-23 Thread Dave Cramer
On Tue, 23 Jan 2024 at 08:46, Dave Cramer  wrote:

>
>
> On Tue, 19 Sept 2023 at 23:48, Michael Paquier 
> wrote:
>
>> On Tue, Sep 19, 2023 at 01:35:17PM +0100, Anthony Roberts wrote:
>> > Was there an explicit request for something there? I was under the
>> > impression that this was all just suggestion/theory at the moment.
>>
>> Yes.  The addition of a buildfarm member registered into the community
>> facilities, so as it is possible to provide back to developers dynamic
>> feedback to the new configuration setup you would like to see
>> supported in PostgreSQL.  This has been mentioned a few times on this
>> thread, around these places:
>>
>> https://www.postgresql.org/message-id/20220322103011.i6z2tuj4hle23wgx@jrouhaud
>>
>> https://www.postgresql.org/message-id/dbd80715-e56b-40eb-84aa-ace70198e...@yesql.se
>>
>> https://www.postgresql.org/message-id/6d1e2719-fa49-42a5-a6ff-0b0072bf6...@yesql.se
>> --
>> Michael
>>
>
> The attached patch works with v17. I will work on getting a buildfarm
> animal up shortly
>
>
I can build using meson manually, but for some reason building with the
buildfarm fails.

I'm not sure which files to attach to this to help. Andrew, what files do
you need ?

Dave

>
>


Re: [PATCH] Add native windows on arm64 support

2024-01-23 Thread Dave Cramer
On Tue, 19 Sept 2023 at 23:48, Michael Paquier  wrote:

> On Tue, Sep 19, 2023 at 01:35:17PM +0100, Anthony Roberts wrote:
> > Was there an explicit request for something there? I was under the
> > impression that this was all just suggestion/theory at the moment.
>
> Yes.  The addition of a buildfarm member registered into the community
> facilities, so as it is possible to provide back to developers dynamic
> feedback to the new configuration setup you would like to see
> supported in PostgreSQL.  This has been mentioned a few times on this
> thread, around these places:
>
> https://www.postgresql.org/message-id/20220322103011.i6z2tuj4hle23wgx@jrouhaud
>
> https://www.postgresql.org/message-id/dbd80715-e56b-40eb-84aa-ace70198e...@yesql.se
>
> https://www.postgresql.org/message-id/6d1e2719-fa49-42a5-a6ff-0b0072bf6...@yesql.se
> --
> Michael
>

The attached patch works with v17. I will work on getting a buildfarm
animal up shortly


windows_arm_build.patch
Description: Binary data


Re: Password leakage avoidance

2024-01-03 Thread Dave Cramer
On Wed, 3 Jan 2024 at 08:53, Robert Haas  wrote:

> On Sun, Dec 24, 2023 at 12:06 PM Jonathan S. Katz 
> wrote:
> > We're likely to have new algorithms in the future, as there is a draft
> > RFC for updating the SCRAM hashes, and already some regulatory bodies
> > are looking to deprecate SHA256. My concern with relying on the
> > "encrypted_password" GUC (which is why PQencryptPasswordConn takes
> > "conn") makes it any easier for users to choose the algorithm, or if
> > they need to rely on the server/session setting.
>
> Yeah, I agree. It doesn't make much sense to me to propose that a GUC,
> which is a server-side setting, should control client-side behavior.
>
> Also, +1 for the general idea. I don't think this is a whole answer to
> the problem of passwords appearing in log files because (1) you have
> to be using libpq in order to make use of this


JDBC has it as of yesterday. I would imagine other clients will implement
it.
Dave Cramer

>
>


Re: Password leakage avoidance

2023-12-27 Thread Dave Cramer
On Wed, 27 Dec 2023 at 16:10, Tom Lane  wrote:

> Joe Conway  writes:
> > On 12/27/23 15:39, Peter Eisentraut wrote:
> >> On 23.12.23 16:13, Joe Conway wrote:
> >>> The attached patch set moves the guts of \password from psql into the
> >>> libpq client side -- PQchangePassword() (patch 0001).
>
> >> I don't follow how you get from the problem statement to this solution.
> >> This proposal doesn't avoid password leakage, does it?
> >> It just provides a different way to phrase the existing solution.
>
> > Yes, a fully built one that is convenient to use, and does not ask
> > everyone to roll their own.
>
> It's convenient for users of libpq, I guess, but it doesn't help
> anyone not writing C code directly atop libpq.  If this is the
> way forward then we need to also press JDBC and other client
> libraries to implement comparable functionality.  That's within
> the realm of sanity surely, and having a well-thought-through
> reference implementation in libpq would help those authors.
> So I don't think this is a strike against the patch; but the answer
> to Peter's question has to be that this is just part of the solution.
>

Already have one in the works for JDBC, actually predates this.
https://github.com/pgjdbc/pgjdbc/pull/3067

Dave


Re: Password leakage avoidance

2023-12-24 Thread Dave Cramer
Dave Cramer
www.postgres.rocks


On Sat, 23 Dec 2023 at 11:00, Tom Lane  wrote:

> Joe Conway  writes:
> > The attached patch set moves the guts of \password from psql into the
> > libpq client side -- PQchangePassword() (patch 0001).
>
> Haven't really read the patch, just looked at the docs, but here's
> a bit of bikeshedding:
>
> * This seems way too eager to promote the use of md5.  Surely the
> default ought to be SCRAM, full stop.  I question whether we even
> need an algorithm parameter.  Perhaps it's a good idea for
> future-proofing, but we could also plan that the function would
> make its own decisions based on noting the server's version.
> (libpq is far more likely to be au courant about what to do than
> the calling application, IMO.)
>

Using the server version has some issues. It's quite possible to encrypt a
user's password with md5 even when the server defaults to scram. So if you
change the encryption method based on the server version, pg_hba.conf would
have to be updated to allow the user to log back in.
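
As an illustration, this is the kind of thing a driver or application can
already do today with libpq, choosing the algorithm explicitly instead of
deriving it from the server (a minimal sketch -- identifier quoting and
error reporting are omitted; use PQescapeIdentifier in real code):

#include <stdbool.h>
#include <stdio.h>
#include <libpq-fe.h>

static bool
change_password(PGconn *conn, const char *user, const char *new_password)
{
	char	   *encrypted;
	PGresult   *res;
	char		sql[512];
	bool		ok;

	/* encrypt client-side with an explicit algorithm */
	encrypted = PQencryptPasswordConn(conn, new_password, user, "scram-sha-256");
	if (encrypted == NULL)
		return false;

	snprintf(sql, sizeof(sql), "ALTER USER %s PASSWORD '%s'", user, encrypted);
	res = PQexec(conn, sql);
	ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
	PQclear(res);
	PQfreemem(encrypted);
	return ok;
}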

Dave


Re: Emitting JSON to file using COPY TO

2023-12-08 Thread Dave Cramer
On Thu, 7 Dec 2023 at 08:47, David G. Johnston 
wrote:

> On Thursday, December 7, 2023, Daniel Verite 
> wrote:
>
>> Joe Conway wrote:
>>
>> > The attached should fix the CopyOut response to say one column. I.e. it
>> > ought to look something like:
>>
>> Spending more time with the doc I came to the opinion that in this bit
>> of the protocol, in CopyOutResponse (B)
>> ...
>> Int16
>> The number of columns in the data to be copied (denoted N below).
>> ...
>>
>> this number must be the number of columns in the source.
>> That is for COPY table(a,b,c)   the number is 3, independently
>> on whether the result is formatted in text, cvs, json or binary.
>>
>> I think that changing it for json can reasonably be interpreted
>> as a protocol break and we should not do it.
>>
>> The fact that this value does not help parsing the CopyData
>> messages that come next is not a new issue. A reader that
>> doesn't know the field separator and whether it's text or csv
>> cannot parse these messages into fields anyway.
>> But just knowing how much columns there are in the original
>> data might be useful by itself and we don't want to break that.
>
>
> This argument for leaving 3 as the column count makes sense to me.  I
> agree this content is not meant to facilitate interpreting the contents at
> a protocol level.
>

I'd disagree. From my POV, if the data comes back as a JSON array it is
one object, and that should be reflected in the column count.

>
>
>>
>>
>> The other question for me is, in the CopyData message, this
>> bit:
>> " Messages sent from the backend will always correspond to single data
>> rows"
>>
>> ISTM that considering that the "[" starting the json array is a
>> "data row" is a stretch.
>> That might be interpreted as a protocol break, depending
>> on how strict the interpretation is.
>>
>
Well technically it is a single row if you send an array.

Regardless, per Euler's comment above, I expect the JSON lines format is
going to be the preferred one, as the client doesn't have to wait for the
entire object before it starts parsing.
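
To make the difference concrete, here is the same two-row, three-column
result in both shapes (column names are made up; in the second form each
line is its own CopyData message):

JSON array:
[
 {"id": 1, "val": "a", "flag": true},
 {"id": 2, "val": "b", "flag": false}
]

JSON lines:
{"id": 1, "val": "a", "flag": true}
{"id": 2, "val": "b", "flag": false}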

Dave

>


Re: errors building on windows using meson

2023-12-07 Thread Dave Cramer
On Thu, 7 Dec 2023 at 14:34, Andres Freund  wrote:

> Hi,
>
> On 2023-12-07 14:16:52 -0500, Dave Cramer wrote:
> > On Thu, 7 Dec 2023 at 13:53, Andres Freund  wrote:
> >
> > > Hi,
> > >
> > > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:
> > > > state-exec: run failed: cannot create new executor meta: cannot get
> > > > matching bin by path: no matching binary by path
> > > >
> > >
> "C:\\Users\\Administrator\\AppData\\Local\\activestate\\cache\\b9117b06\\exec\\perl.EXE"
> > > > state-exec: Not user serviceable; Please contact support for
> assistance.
> > > >
> > > > anyone seen this or have a fix ?
> > >
> > > I've not seen that before. Please provide a bit more detail. Compiler,
> > > building with ninja or msbuild/visual studio, when exactly you're
> > > encountering
> > > the issue, ...
> > >
> > > Windows Server 2019
> > VS 2019
> > building with ninja
>
> I don't think this is sufficient detail to provide you with advice / fix
> problems / whatnot. Please provide complete logs of configuring and
> building.
>

I built perl from source and it worked.

Dave



>
> - Andres
>


Re: errors building on windows using meson

2023-12-07 Thread Dave Cramer
On Thu, 7 Dec 2023 at 13:53, Andres Freund  wrote:

> Hi,
>
> On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:
> > state-exec: run failed: cannot create new executor meta: cannot get
> > matching bin by path: no matching binary by path
> >
> "C:\\Users\\Administrator\\AppData\\Local\\activestate\\cache\\b9117b06\\exec\\perl.EXE"
> > state-exec: Not user serviceable; Please contact support for assistance.
> >
> > anyone seen this or have a fix ?
>
> I've not seen that before. Please provide a bit more detail. Compiler,
> building with ninja or msbuild/visual studio, when exactly you're
> encountering
> the issue, ...
>
> Windows Server 2019
VS 2019
building with ninja

Dave


> Greetings,
>
> Andres Freund
>


errors building on windows using meson

2023-12-07 Thread Dave Cramer
Greetings,

Getting the following error:

state-exec: run failed: cannot create new executor meta: cannot get
matching bin by path: no matching binary by path
"C:\\Users\\Administrator\\AppData\\Local\\activestate\\cache\\b9117b06\\exec\\perl.EXE"
state-exec: Not user serviceable; Please contact support for assistance.

anyone seen this or have a fix ?

Dave Cramer


Re: building with meson on windows with ssl

2023-11-14 Thread Dave Cramer
On Mon, 13 Nov 2023 at 20:56, Andres Freund  wrote:

> Hi,
>
> On 2023-11-12 11:41:15 -0500, Dave Cramer wrote:
> > On Sun, 12 Nov 2023 at 07:57, Dave Cramer  wrote:
> > > I am getting the following error
> > > building on HEAD
> > >
> > > Library crypto found: YES
> > > Checking for function "CRYPTO_new_ex_data" with dependencies -lssl,
> > > -lcrypto: NO
> > >
> >
> > So this is the error you get if you mix a 64 bit version of openssl and
> > build with x86 tools. Clearly my problem, but the error message is less
> > than helpful
>
> There probably is more detail in meson-logs/meson-log.txt - could you post
> that?
>
I'd have to undo what I did to fix it, but if I find time I will

>
>
> The problem could be related to the fact that on windows you (I think) can
> have binaries with both 32bit and 64bit libraries loaded.
>

I was building with the 32bit tools by mistake.

>
> Was this with msvc or gcc/mingw?
>

msvc

Dave

>
> Greetings,
>
> Andres Freund
>


Re: building with meson on windows with ssl

2023-11-12 Thread Dave Cramer
On Sun, 12 Nov 2023 at 07:57, Dave Cramer  wrote:

> Greetings,
>
> I am getting the following error
> building on HEAD
>
> Library crypto found: YES
> Checking for function "CRYPTO_new_ex_data" with dependencies -lssl,
> -lcrypto: NO
>

So this is the error you get if you mix a 64 bit version of openssl and
build with x86 tools. Clearly my problem, but the error message is less
than helpful

Dave

>


building with meson on windows with ssl

2023-11-12 Thread Dave Cramer
Greetings,

I am getting the following error
building on HEAD

Library crypto found: YES
Checking for function "CRYPTO_new_ex_data" with dependencies -lssl,
-lcrypto: NO

I have openssl 1.1.1 installed

Dave Cramer


Re: Protocol question regarding Portal vs Cursor

2023-11-08 Thread Dave Cramer
Dave Cramer


On Tue, 7 Nov 2023 at 10:26, Tom Lane  wrote:

> Dave Cramer  writes:
> > If we use a Portal it is possible to open the portal and do a describe
> and
> > then Fetch N records.
>
> > Using a Cursor we open the cursor. Is there a corresponding describe and
> a
> > way to fetch N records without getting the fields each time. Currently we
> > have to send the SQL  "fetch  N" and we get the fields and the
> > rows. This seems overly verbose.
>
> Portals and cursors are pretty much the same thing, so why not use
> the API that suits you better?
>

So in this case this is a refcursor. Based on above then I should be able
to do a describe on the refcursor and fetch using the extended query
protocol

Cool!
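
Something like this is the flow I have in mind (a sketch, assuming the
refcursor is named "c1" and we want N rows per round trip):

FE=> Describe(portal="c1")
<= BE RowDescription
FE=> Execute(portal="c1", maxRows=N)
<= BE DataRow (repeated up to N times)
<= BE PortalSuspended
FE=> Sync
<= BE ReadyForQuery(T)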

Dave


Protocol question regarding Portal vs Cursor

2023-11-07 Thread Dave Cramer
Greetings,

If we use a Portal it is possible to open the portal and do a describe and
then Fetch N records.

Using a Cursor we open the cursor. Is there a corresponding describe and a
way to fetch N records without getting the fields each time? Currently we
have to send the SQL  "fetch  N" and we get the fields and the
rows each time. This seems overly verbose.

Dave Cramer


building 32bit windows version

2023-10-12 Thread Dave Cramer
Greetings,

I've been running into challenges building 32 bit windows version. I
suspect there are no build farms and nobody really builds this.

The reason I need these is to be able to build 32 bit dll's for ODBC. At
one time EDB used to provide binaries but that doesn't appear to be the
case.

running build.bat in an x86 environment fails but that can be easily fixed
by adding

$ENV{CONFIG}="x86"; in buildenv.pl

build postgres then works as advertised, however

install  fails with

"Copying build output files...Could not copy release\zic\zic.exe to
postgres\bin\zic.exe"

Apparently 32 bit dlls are required. If there is an easier way to get
libpq.dll and the include files for building I'm all ears.


Dave Cramer


Re: Request for comment on setting binary format output per session

2023-10-10 Thread Dave Cramer
On Tue, 10 Oct 2023 at 10:25, Robert Haas  wrote:

> On Mon, Oct 9, 2023 at 4:25 PM Jeff Davis  wrote:
> > Another thing to consider is that using a GUC for binary formats is a
> > protocol change in a way that client_encoding is not. The existing
> > documentation for the protocol already specifies when binary formats
> > will be used, and a GUC would change that behavior. We absolutely would
> > need to update the documentation, and clients (like psql) really should
> > be updated.
>
> I think the idea of using a new parameterFormat value is a good one.
> Let 0 and 1 continue to mean what they mean, and let clients opt in to
> the new mechanism if they're aware of it.
>

Correct me if I am wrong, but the client has to request this. So I'm not
sure how we would be surprised ?

Dave


Re: Request for comment on setting binary format output per session

2023-10-10 Thread Dave Cramer
On Mon, 9 Oct 2023 at 17:11, Jelte Fennema  wrote:

> On Mon, 9 Oct 2023 at 21:08, Dave Cramer  wrote:
> > So if we use . would it be possible to have something like
>  which represents a set of well known types?
> > My goal here is to reduce the overhead of naming all the types the
> client  wants in binary. The list of well known types is pretty long.
> > Additionally we could have a shorthand for removing a well known type.
>
> You're only setting this once in the lifetime of the connection right,
>

Correct

> i.e. right at the start (although pgbouncer could set it once per
> transaction in the worst case). It seems like it shouldn't really
> matter much to optimize the size of the "SET format_binary=..."
> command, I'd expect it to be at most 1 kilobyte. I'm not super opposed
> to having a shorthand for some of the most commonly wanted built-in
> types, but then we'd need to decide on what those are, which would add
> even more discussion/bikeshedding to this thread. I'm not sure the win
> in size is worth that effort.
>
It's worth the effort if we use schema.typename; if we use OIDs then I'm
not that invested in this approach.

Dave


Re: Request for comment on setting binary format output per session

2023-10-09 Thread Dave Cramer
On Mon, 9 Oct 2023 at 15:00, Robert Haas  wrote:

> On Mon, Oct 9, 2023 at 11:09 AM Jelte Fennema  wrote:
> > Since the protocol already returns OIDs in the ParameterDescription
> > and RowDescription messages I don't see why using OIDs for this GUC
> > would cause any additional problems.
>
> ...but then...
>
> > With Citus the same user defined type can have
> > different OIDs on each of the servers in the cluster.
>
> I realize that your intention here may be to say that this is not an
> *additional* problem but one we have already. But it seems like one
> that we ought to be trying to solve, rather than propagating a
> problematic solution into more places.
>
> Decisions we make about the wire protocol are some of the most
> long-lasting and painful decisions we make, right up there with the
> on-disk format. Maybe worse, in some ways.
>

So if we use schema.typename, would it be possible to have something like a
predefined group name which represents a set of well known types?
My goal here is to reduce the overhead of naming all the types the client
wants in binary. The list of well known types is pretty long.
Additionally we could have a shorthand for removing a well known type.
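
Purely as an illustration of the shape I'm after (neither the parameter
name, the group alias, nor the minus syntax exists today):

SET binary_format_types = 'pg_catalog.int4, pg_catalog.timestamptz, well_known, -pg_catalog.uuid';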

Dave


Change of behaviour for creating same type name in multiple schemas

2023-10-05 Thread Dave Cramer
Greetings,

Before 16 if I created an array type in schema1 it would be named
schema1._array_type
if I created the same type in schema 2 it would have been named

schema2.__array_type

Can someone point me to where the code was changed ?
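
For anyone who wants to reproduce it, a minimal script (type and schema
names are illustrative) followed by a look at pg_type shows the generated
array type names:

CREATE SCHEMA schema1;
CREATE SCHEMA schema2;
CREATE TYPE schema1.mytype AS (a int);
CREATE TYPE schema2.mytype AS (a int);
SELECT n.nspname, t.typname
  FROM pg_type t
  JOIN pg_namespace n ON n.oid = t.typnamespace
 WHERE t.typname LIKE '%mytype';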

Thanks,
Dave Cramer


Re: Request for comment on setting binary format output per session

2023-10-04 Thread Dave Cramer
On Wed, 4 Oct 2023 at 10:17, Peter Eisentraut  wrote:

> On 31.07.23 18:27, Dave Cramer wrote:
> > On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson  > <mailto:dan...@yesql.se>> wrote:
> >
> >  > On 25 Apr 2023, at 16:47, Dave Cramer  > <mailto:davecra...@gmail.com>> wrote:
> >
> >  > Patch attached with comments removed
> >
> > This patch no longer applies, please submit a rebased version on top
> > of HEAD.
> >
> >
> > Rebased see attached
>
> I have studied this thread now.  It seems it has gone through the same
> progression with the same (non-)result as my original patch on the subject.
>
> I have a few intermediate conclusions:
>
> - Doing it with a GUC is challenging.  It's questionable layering to
> have the GUC system control protocol behavior.  It would allow weird
> behavior where a GUC set, maybe for a user or a database, would confuse,
> say, psql or pg_dump.  We probably should make some of those more robust
> in any case.  Also, handling of GUCs through connection poolers is a
> challenge.  It does work, but it's more like opt-in, and so can't be
> fully relied on for protocol correctness.
>
> - Doing it with a session-level protocol-level setting is challenging.
> We currently don't have that kind of thing.  It's not clear how
> connection poolers would/should handle it.  Someone would have to work
> all this out before this could be used.
>
> - In either case, there are issues like what if there is a connection
> pooler and types have different OIDs in different databases.  (Or,
> similarly, an extension is upgraded during the lifetime of a session and
> a type's OID changes.)  Also, maybe, what if types are in different
> schemas on different databases.
>
> - We could avoid some of the session-state issues by doing this per
> request, like extending the Bind message somehow by appending the list
> of types to be sent in binary.  But the JDBC driver currently lists 24
> types for which it supports binary, so that would require adding 24*4=96
> bytes per request, which seems untenable.
>
> I think intuitively, this facility ought to work like client_encoding.
> There, the client declares its capabilities, and the server has to
> format the output according to the client's capabilities.  That works,
> and it also works through connection poolers.  (It is a GUC.)  If we can
> model it like that as closely as possible, then we have a chance of
> getting it working reliably.  Notably, the value space for
> client_encoding is a globally known fixed list of strings.  We need to
> figure out what is the right way to globally identify types, like either
> by fully-qualified name, by base name, some combination, how does it
> work with extensions, or do we need a new mechanism like UUIDs.  I think
> that is something we need to work out, no matter which protocol
> mechanism we end up using.
>

So how is this different than the GUC that I proposed ?

Dave


Speaker Bureau

2023-09-01 Thread Dave Cramer
Greetings,

If you are on the speaker list can you send an email to ugc...@postgresql.us
indicating whether you are available to travel for meetups?

This serves the obvious purpose but also provides your email address to us.

Thanks,

Dave Cramer


Re: PostgreSQL 16 release announcement draft

2023-08-24 Thread Dave Cramer
>
>
> > Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all registered
> > trademarks of the [PostgreSQL Community Association of Canada](
> https://www.postgres.ca).
>
> Isn't this just the "PostgreSQL Community Association", no Canada?
>

Certainly confusing from the website, but in the about section is this
"PostgreSQL Community Association is a trade or business name of the PostgreSQL
Community Association of Canada."

Dave


Re: Using defines for protocol characters

2023-08-17 Thread Dave Cramer
On Thu, Aug 17, 2023 at 9:22 AM Robert Haas  wrote:

> On Thu, Aug 17, 2023 at 12:13 PM Nathan Bossart
>  wrote:
> > On Thu, Aug 17, 2023 at 09:31:55AM +0900, Michael Paquier wrote:
> > > Looks sensible seen from here.
> >
> > Thanks for taking a look.
>
> I think this is going to be a major improvement in terms of readability!


That was the primary goal
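
To make the readability point concrete, a call site goes from a bare
character literal to a named constant, e.g. (the names follow the prefix
style being discussed and are illustrative, as is the surrounding code):

	switch (firstchar)
	{
		case PqMsg_Query:			/* was: case 'Q': */
			exec_simple_query(query_string);
			break;
		case PqMsg_Terminate:		/* was: case 'X': */
			proc_exit(0);
			break;
	}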

>
Dave

>
> --
Dave Cramer


Re: Using defines for protocol characters

2023-08-09 Thread Dave Cramer
On Wed, 9 Aug 2023 at 10:34, Tom Lane  wrote:

> Dave Cramer  writes:
> > On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut 
> wrote:
> >> 3. IMO, the names of the protocol messages in protocol.sgml are
> >> canonical.  Your patch appends "Request" and "Response" in cases where
> >> that is not part of the actual name.  Also, some messages are documented
> >> to go both ways, so this separation doesn't make sense strictly
> >> speaking.  Please use the names as in protocol.sgml without augmenting
> >> them.
>
> > I've changed this a number of times. I do not mind changing it again, but
> > can we reach a consensus ?
>
> I agree with Peter: let's use the names in the protocol document
> with a single prefix.  I've got mixed feelings about whether that prefix
> should have an underscore, though.
>

Well, we're getting closer :)

Dave


Re: Using defines for protocol characters

2023-08-09 Thread Dave Cramer
On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut  wrote:

> 1. As was discussed, these definitions should go into
> src/include/libpq/pqcomm.h, not a new file.
>

I'm ambivalent, this is very easy to do.

>
> 2. I would prefer an underscore after PgMsg, like PqMsg_DescribeRequest,
> so it's easier to visually locate the start of the actual message name.
>
> 3. IMO, the names of the protocol messages in protocol.sgml are
> canonical.  Your patch appends "Request" and "Response" in cases where
> that is not part of the actual name.  Also, some messages are documented
> to go both ways, so this separation doesn't make sense strictly
> speaking.  Please use the names as in protocol.sgml without augmenting
> them.
>
>
I've changed this a number of times. I do not mind changing it again, but
can we reach a consensus ?

Dave


Re: Using defines for protocol characters

2023-08-07 Thread Dave Cramer
On Mon, 7 Aug 2023 at 16:50, Tatsuo Ishii  wrote:

> > On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:
> >> Dave Cramer  writes:
> >>> On Mon, 7 Aug 2023 at 12:59, Robert Haas 
> wrote:
> >>>> PqMsgEmptyQueryResponse or something like that seems better, if we
> >>>> want to keep the current capitalization. I'm not a huge fan of the way
> >>>> we vary our capitalization conventions so much all over the code base,
> >>>> but I think we would at least do well to keep it consistent from one
> >>>> end of a certain identifier to the other.
> >>
> >>> I don't have a strong preference, but before I make the changes I'd
> like to
> >>> get consensus.
> >>> Can we vote or whatever it takes to decide on a naming pattern that is
> >>> acceptable ?
> >>
> >> I'm good with Robert's proposal above.
> >
> > +1
>
> +1.
>
> Also we need to decide what to do with them:
>
> > #define PQMSG_REQ_PREPARED 'S'
> > #define PQMSG_REQ_PORTAL 'P'
>
> If we go "PqMsgEmptyQueryResponse", probably we should go something
> like for these?
>
> #define PqMsgReqPrepared 'S'
> #define PqMsgReqPortal 'P'
>

I went with PqMsgPortalSubCommand and PqMsgPreparedSubCommand

See attached patch

Dave


0001-Created-protocol.h.patch
Description: Binary data


Re: Using defines for protocol characters

2023-08-07 Thread Dave Cramer
On Mon, 7 Aug 2023 at 12:59, Robert Haas  wrote:

> On Mon, Aug 7, 2023 at 2:25 PM Tom Lane  wrote:
> > +1.  For ease of greppability, maybe even PQMSG_EmptyQueryResponse
> > and so on?  Then one grep would find both uses of the constants and
> > code/docs references.  Not sure if the prefix should be all-caps or
> > not if we go this way.
>
> PqMsgEmptyQueryResponse or something like that seems better, if we
> want to keep the current capitalization. I'm not a huge fan of the way
> we vary our capitalization conventions so much all over the code base,
> but I think we would at least do well to keep it consistent from one
> end of a certain identifier to the other.
>

I don't have a strong preference, but before I make the changes I'd like to
get consensus.

Can we vote or whatever it takes to decide on a naming pattern that is
acceptable?

Dave


Re: Using defines for protocol characters

2023-08-07 Thread Dave Cramer
On Mon, 7 Aug 2023 at 03:10, Alvaro Herrera  wrote:

> On 2023-Aug-07, Peter Smith wrote:
>
> > I guess, your patch would not be much different; you can still have
> > all the nice names and assign the appropriate values to the enum
> > values same as now, but using an enum you might also gain
> > type-checking in the code and also get warnings for the "switch"
> > statements if there are any cases accidentally omitted.
>
> Hmm, I think omitting a 'default' clause (which is needed when you want
> warnings for missing clauses) in a switch that handles protocol traffic
> is not great, because the switch would misbehave when the network
> counterpart sends a broken message.  I'm not sure we want to do that.
> It could become a serious security problem if confronted with a
> malicious libpq.
>
>
Any other changes required?

Dave

> --
> Álvaro Herrera PostgreSQL Developer  —
> https://www.EnterpriseDB.com/
>


Re: Using defines for protocol characters

2023-08-04 Thread Dave Cramer
On Fri, 4 Aug 2023 at 03:44, Alvaro Herrera  wrote:

> On 2023-Aug-03, Dave Cramer wrote:
>
> > New patch attached which uses  PREPARED_SUB_COMMAND   and
> > PORTAL_SUB_COMMAND instead
>
> Hmm, I would keep the prefix in this case and make the message type a
> second prefix, with the subtype last -- PQMSG_CLOSE_PREPARED,
> PQMSG_DESCRIBE_PORTAL and so on.
>
I used

#define PQMSG_REQ_PREPARED 'S'
#define PQMSG_REQ_PORTAL 'P'

Duplicating them for CLOSE PREPARED|PORTAL seems awkward
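
As a purely illustrative sketch (the names here are invented for this
example, not taken from the patch), a single pair of sub-command bytes can
serve both the Describe and the Close handlers:

/* the byte values are the real protocol sub-commands */
#define PqSubCmd_Prepared 'S'
#define PqSubCmd_Portal   'P'

static void
handle_describe_or_close(char subcmd)
{
    switch (subcmd)
    {
        case PqSubCmd_Prepared:
            /* operate on a prepared statement */
            break;
        case PqSubCmd_Portal:
            /* operate on a portal */
            break;
        default:
            /* protocol violation */
            break;
    }
}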

>
> You define PASSWORD and GSS both with 'p', which I think is bogus; in
> some place that isn't doing GSS, you've replaced 'p' with the GSS one
> (CheckSASLAuth).  I think it'd be better to give 'p' a single name, and
> not try to distinguish which is PASSWORD and which is GSS, because
> ultimately it's not important.
>
Done

>
> There are some unpatched places, such as basebackup_copy.c -- there are
> several matches for /'.'/ that correspond to protocol chars in that file.
> Also CopySendEndOfRow has one 'd', and desc.c has two 'C'.
>
> I think fe-trace.c will require further adjustment of the comments for
> each function.  We could change them to be the symbol for each char, like
> so:
>
> /* PQMSG_RESP_READY_FOR_QUERY */
> static void
> pqTraceOutputZ(FILE *f, const char *message, int *cursor)
>
>
Thanks for reviewing see attached for fixes

Dave


0001-Created-protocol.h.patch
Description: Binary data


Re: Using defines for protocol characters

2023-08-03 Thread Dave Cramer
On Thu, 3 Aug 2023 at 16:59, Tatsuo Ishii  wrote:

> > On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii  wrote:
> >
> >> > Greetings,
> >> >
> >> > Attached is a patch which introduces a file protocol.h. Instead of
> using
> >> > the actual characters everywhere in the code this patch names the
> >> > characters and removes the comments beside each usage.
> >>
> >> > +#define DESCRIBE_PREPARED   'S'
> >> > +#define DESCRIBE_PORTAL 'P'
> >>
> >> You use these for Close message as well. I don't like the idea because
> >> Close is different message from Describe message.
> >>
> >> What about adding following for Close too use them instead?
> >>
> >> #define CLOSE_PREPARED   'S'
> >> #define CLOSE_PORTAL 'P'
> >>
> >
> > Good catch.
> > I recall when writing this it was a bit hacky.
> > What do you think of PREPARED_SUB_COMMAND   and PORTAL_SUB_COMMAND
> instead
> > of duplicating them ?
>
> Nice. Looks good to me.
>
New patch attached which uses PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND
instead, and uses PQMSG_REQ_* and PQMSG_RESP_* as per Alvaro's suggestion.

Dave Cramer


0001-Created-protocol.h.patch
Description: Binary data


Re: Using defines for protocol characters

2023-08-03 Thread Dave Cramer
On Thu, 3 Aug 2023 at 15:22, Dave Cramer  wrote:

>
>
> On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii  wrote:
>
>> > Greetings,
>> >
>> > Attached is a patch which introduces a file protocol.h. Instead of using
>> > the actual characters everywhere in the code this patch names the
>> > characters and removes the comments beside each usage.
>>
>> > +#define DESCRIBE_PREPARED   'S'
>> > +#define DESCRIBE_PORTAL 'P'
>>
>> You use these for Close message as well. I don't like the idea because
>> Close is different message from Describe message.
>>
>> What about adding following for Close too use them instead?
>>
>> #define CLOSE_PREPARED   'S'
>> #define CLOSE_PORTAL 'P'
>>
>
> Good catch.
> I recall when writing this it was a bit hacky.
> What do you think of PREPARED_SUB_COMMAND   and PORTAL_SUB_COMMAND instead
> of duplicating them ?
>

While reviewing this I found a number of mistakes where I used
DATA_ROW_RESPONSE instead of DESCRIBE_REQUEST.

I can provide a patch now or wait until we resolve the above


>
> Dave
>


Re: Using defines for protocol characters

2023-08-03 Thread Dave Cramer
On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii  wrote:

> > Greetings,
> >
> > Attached is a patch which introduces a file protocol.h. Instead of using
> > the actual characters everywhere in the code this patch names the
> > characters and removes the comments beside each usage.
>
> > +#define DESCRIBE_PREPARED   'S'
> > +#define DESCRIBE_PORTAL 'P'
>
> You use these for Close message as well. I don't like the idea because
> Close is different message from Describe message.
>
> What about adding following for Close too use them instead?
>
> #define CLOSE_PREPARED   'S'
> #define CLOSE_PORTAL 'P'
>

Good catch.
I recall when writing this it was a bit hacky.
What do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND instead
of duplicating them?

Dave


Re: Using defines for protocol characters

2023-08-03 Thread Dave Cramer
On Thu, 3 Aug 2023 at 11:59, Alvaro Herrera  wrote:

> On 2023-Aug-03, Dave Cramer wrote:
>
> > Greetings,
> >
> > Attached is a patch which introduces a file protocol.h. Instead of using
> > the actual characters everywhere in the code this patch names the
> > characters and removes the comments beside each usage.
>
> I saw this one last week.  I think it's a very good idea (and in fact I had
> thought of doing it when I last messed with libpq code).
>
> I don't really like the name pattern you've chosen though; I think we
> need to have a common prefix in the defines.  Maybe prepending PQMSG_ to
> each name would be enough.  And maybe turn the _RESPONSE and _REQUEST
> suffixes you added into prefixes as well, so instead of PARSE_REQUEST
> you could make it PQMSG_REQ_PARSE, PQMSG_RESP_BIND_COMPLETE and so
> on.
>
That becomes trivial to do now that the names are defined. I presumed
someone would object to the names.
I'm fine with the names you propose, but I suggest we wait to see if anyone
objects.

Dave

>
> --
> Álvaro Herrera   48°01'N 7°57'E  —
> https://www.EnterpriseDB.com/
>  It does it in a really, really complicated way
>  why does it need to be complicated?
>  Because it's MakeMaker.
>


Using defines for protocol characters

2023-08-03 Thread Dave Cramer
Greetings,

Attached is a patch which introduces a file protocol.h. Instead of using
the actual characters everywhere in the code this patch names the
characters and removes the comments beside each usage.


Dave Cramer


0001-Created-protocol.h.patch
Description: Binary data


Re: Request for comment on setting binary format output per session

2023-07-31 Thread Dave Cramer
Dave Cramer


On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson  wrote:

> > On 25 Apr 2023, at 16:47, Dave Cramer  wrote:
>
> > Patch attached with comments removed
>
> This patch no longer applies, please submit a rebased version on top of
> HEAD.
>

Rebased see attached



>
> --
> Daniel Gustafsson
>
>


0001-Created-protocol.h.patch
Description: Binary data


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
On Fri, 14 Jul 2023 at 16:32,  wrote:

> On 2023-07-14 15:49, Dave Cramer wrote:
> > On Fri, 14 Jul 2023 at 15:40,  wrote:
> >> Perhaps an easy rule would be, if the driver itself adds RETURNING
> >> because of a RETURN_GENERATED_KEYS option, it should also force the
> >> fetch count to zero and collect all the returned rows before
> >> executeUpdate returns, and then it will have the right count
> >> to return?
> >
> > Well that kind of negates the whole point of using a cursor in the case
> > where you have a large result set.
> >
> > The rows are subsequently fetched in rs.next()
>
> I guess it comes down, again, to the question of what kind of thing
> the API client thinks it is doing, when it issues an INSERT with
> the RETURN_GENERATED_KEYS option.
>
> An API client issuing a plain INSERT knows it is the kind of thing
> for which executeUpdate() is appropriate, and the complete success
> or failure outcome is known when that returns.
>
> An API client issuing its own explicit INSERT RETURNING knows it
> is the kind of thing for which executeQuery() is appropriate, and
> that some processing (and possibly ereporting) may be left to
> occur while working through the ResultSet.
>
> But now how about this odd hybrid, where the API client wrote
> a plain INSERT, but added the RETURN_GENERATED_KEYS option?
> The rewritten query is the kind of thing the server and the
> driver need to treat as a query, but to the API client it still
> appears the kind of thing for which executeUpdate() is suited.
> The generated keys can then be examined--in the form of a
> ResultSet--but one obtained with the special method
> getGeneratedKeys(), rather than the usual way of getting the
> ResultSet from a query.
>
> Should the client then assume the semantics of executeUpdate
> have changed for this case, the outcome isn't fully known yet,
> and errors could come to light while examining the returned
> keys? Or should it still assume that executeUpdate has its
> usual meaning, all the work is done by that point, and the
> ResultSet from getGeneratedKeys() is simply a convenient,
> familiar interface for examining what came back?
>

The fly in the ointment here is when they setFetchSize and we decide to use
a Portal under the covers.

I"m willing to document around this since it looks pretty unlikely that
changing the behaviour in the server is a non-starter.


>
> I'm not sure if there's a firm answer to that one way or the
> other in the formal JDBC spec, but the latter seems perhaps
> safer to me.
>

I'll leave the user to decide their own fate.

Dave

>
> Regards,
> -Chap
>


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
On Fri, 14 Jul 2023 at 15:40,  wrote:

> On 2023-07-14 14:19, David G. Johnston wrote:
> > Because of the returning they all need a portal so far as the server is
> > concerned and the server will obligingly send the contents of the
> > portal
> > back to the client.
>
> Dave's pcap file, for the fetch count 0 case, does not show any
> portal name used in the bind, describe, or execute messages, or
> any portal close message issued by the client afterward. The server
> may be using a portal in that case, but it seems more transparent
> to the client when fetch count is zero.
>
>
There is no portal for fetch count 0.


> Perhaps an easy rule would be, if the driver itself adds RETURNING
> because of a RETURN_GENERATED_KEYS option, it should also force the
> fetch count to zero and collect all the returned rows before
> executeUpdate returns, and then it will have the right count
> to return?
>

Well that kind of negates the whole point of using a cursor in the case
where you have a large result set.

The rows are subsequently fetched in rs.next()

Solves one problem, but creates another.

Dave

>
>


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
On Fri, 14 Jul 2023 at 14:34,  wrote:

> On 2023-07-12 21:30, David G. Johnston wrote:
> > Right, and executeUpdate is the wrong API method to use, in the
> > PostgreSQL
> > world, when executing insert/update/delete with the non-SQL-standard
> > returning clause. ... ISTM that you are trying to make user-error less
> > painful.
>
> In Dave's Java reproducer, no user-error has been made, because the user
> supplied a plain INSERT with the RETURN_GENERATED_KEYS option, and the
> RETURNING clause has been added by the JDBC driver. So the user expects
> executeUpdate to be the right method, and return the row count, and
> getGeneratedKeys() to then return the rows.
>
> I've seen a possibly even more interesting result using pgjdbc-ng with
> protocol.trace=true:
>
> FetchSize=0
>  > 1.>t.>T$>Z*
>  > 2.>D.>D.>C.>Z*
> executeUpdate result: 2
> ids: 1 2
>
> FetchSize=1
>  > 2.>D.>s*
> executeUpdate result: -1
> ids: 3  > D.>s*
> 4  > C*
>  > 3.>Z*
>
> FetchSize=2
>  > 2.>D.>D.>s*
> executeUpdate result: -1
> ids: 5 6  > C*
>  > 3.>Z*
>
> FetchSize=3
>  > 2.>D.>D.>C*
>  > 3.>Z*
> executeUpdate result: 2
> ids: 7 8
>
>
> Unless there's some interleaving of trace and stdout messages happening
> here, I think pgjdbc-ng is not even collecting all the returned rows
> in the suspended-cursor case before executeUpdate returns, but keeping
> the cursor around for getGeneratedKeys() to use, so executeUpdate
> returns -1 before even having seen the later command complete, and would
> still do that even if the command complete message had the right count.
>

My guess is that pgjdbc-ng sees the -1 and doesn't bother looking any
further

Either way pgjdbc-ng is a dead project so I'm not so concerned about it.

Dave

>
> Regards,
> -Chap
>


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
On Fri, 14 Jul 2023 at 13:39,  wrote:

> On 2023-07-14 12:58, Dave Cramer wrote:
> > See attached pcap file
>
> So if the fetch count is zero and no portal is needed,
> or if the fetch count exceeds the row count and the command
> completion follows directly with no suspension of the portal, then
> it comes with the correct count, but if the portal gets suspended,
> then the later command completion reports a zero count?
>
>
Seems so, yes.

Dave


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
See attached pcap file

After the execute of the portal, it returns INSERT 0 0.
Dave Cramer


On Fri, 14 Jul 2023 at 12:57, David G. Johnston 
wrote:

> On Fri, Jul 14, 2023 at 9:50 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>>
>> Fixing that test in some manner and recompiling psql seems like it should
>> be the easiest way to produce a core-only test case.
>>
>>
> Apparently not - since it (ExecQueryUsingCursor) literally wraps the query
> in a DECLARE CURSOR SQL Command which prohibits INSERT...
>
> I suppose we'd have to write a psql equivalent of ExecQueryUsingPortal
> that iterates over via fetch to make this work...probably more than I'm
> willing to try.
>
> David J.
>


cursor.pcap
Description: Binary data


Re: CommandStatus from insert returning when using a portal.

2023-07-14 Thread Dave Cramer
David,

I will try to get a tcpdump file. Doing this in libpq seems challenging as
I'm not aware of how to create a portal in psql.

Chap

The only difference is that one instance uses a portal to fetch the results;
the other (correct one) is a normal insert where all of the rows are returned
immediately.

This is a reproducer in Java:

conn.prepareStatement("DROP TABLE IF EXISTS test_table").execute();
conn.prepareStatement("CREATE TABLE IF NOT EXISTS test_table (id
SERIAL PRIMARY KEY, cnt INT NOT NULL)").execute();

for (var fetchSize : List.of(0, 1, 2, 3)) {
System.out.println("FetchSize=" + fetchSize);

    try (var stmt = conn.prepareStatement("INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id", RETURN_GENERATED_KEYS)) {
stmt.setFetchSize(fetchSize);

var ret = stmt.executeUpdate();
System.out.println("executeUpdate result: " + ret);

var rs = stmt.getGeneratedKeys();
System.out.print("ids: ");
while (rs.next()) {
System.out.print(rs.getInt(1) + " ");
}
System.out.print("\n\n");
    }
}

Dave

On Fri, 14 Jul 2023 at 12:07,  wrote:

> On 2023-07-12 20:57, Dave Cramer wrote:
> > Without a cursor it returns right away as all of the results are
> > returned
> > by the server. However with cursor you have to wait until you fetch the
> > rows before you can get the CommandComplete message which btw is wrong
> > as
> > it returns INSERT 0 0 instead of INSERT 2 0
>
> To make sure I am following, was this describing a comparison of
> two different ways in Java, using JDBC, to perform the same operation,
> one of which behaves as desired while the other doesn't? If so, for
> my curiosity, what do both ways look like in Java?
>
> Or was it a comparison of two different operations, say one
> an INSERT RETURNING and the other something else?
>
> Regards,
> -Chap
>


Re: CommandStatus from insert returning when using a portal.

2023-07-13 Thread Dave Cramer
On Thu, 13 Jul 2023 at 10:24, David G. Johnston 
wrote:

> On Thursday, July 13, 2023, Dave Cramer  wrote:
>
>>
>> Any comment on why the CommandComplete is incorrect ?
>> It returns INSERT 0 0  if a cursor is used
>>
>
>  Looking at DECLARE it is surprising that what you describe is even
> possible.  Can you share a psql reproducer?
>

Apologies, we are using a portal, not a cursor.
Dave Cramer

>


Re: CommandStatus from insert returning when using a portal.

2023-07-13 Thread Dave Cramer
On Wed, 12 Jul 2023 at 21:31, David G. Johnston 
wrote:

> On Wed, Jul 12, 2023 at 5:57 PM Dave Cramer  wrote:
>
>> On Wed, 12 Jul 2023 at 20:00,  wrote:
>>
>>> Dave Cramer  writes:
>>> > Obviously I am biased by the JDBC API which would like to have
>>> > PreparedStatement.execute() return the number of rows inserted
>>> > without having to wait to read all of the rows returned
>>>
>>> Huh ... just how *is* PreparedStatement.execute() supposed
>>> to behave when the statement is an INSERT RETURNING?
>>>
>>
>> It's really executeUpdate which is supposed to return the number of rows
>> updated.
>>
>
> Right, and executeUpdate is the wrong API method to use, in the PostgreSQL
> world, when executing insert/update/delete with the non-SQL-standard
> returning clause.  executeQuery is the method to use.  And execute() should
> behave as if executeQuery was called, i.e., return true, which it is
> capable of doing since it has resultSet data that it needs to handle.
>
> The addition of returning turns the insert/update/delete into a select in
> terms of effective client-seen behavior.
>
> ISTM that you are trying to make user-error less painful.  While that is
> laudable it apparently isn't practical.  They can either discard the
> results and get a count by omitting returning or obtain the result and
> derive the count by counting rows alongside whatever else they needed the
> returned data for.
>
Any comment on why the CommandComplete is incorrect?
It returns INSERT 0 0 if a cursor is used.

Dave

>


Re: CommandStatus from insert returning when using a portal.

2023-07-12 Thread Dave Cramer
On Wed, 12 Jul 2023 at 20:00,  wrote:

> Dave Cramer  writes:
> > Obviously I am biased by the JDBC API which would like to have
> > PreparedStatement.execute() return the number of rows inserted
> > without having to wait to read all of the rows returned
>
> Huh ... just how *is* PreparedStatement.execute() supposed
> to behave when the statement is an INSERT RETURNING?
>

It's really executeUpdate which is supposed to return the number of rows
updated.
Without a cursor it returns right away, as all of the results are returned
by the server. However, with a cursor you have to wait until you fetch the
rows before you can get the CommandComplete message, which btw is wrong as
it returns INSERT 0 0 instead of INSERT 2 0.

Dave


Re: CommandStatus from insert returning when using a portal.

2023-07-12 Thread Dave Cramer
On Wed, 12 Jul 2023 at 17:49, Tom Lane  wrote:

> Dave Cramer  writes:
> > Obviously I am biased by the JDBC API which would like to have
> > PreparedStatement.execute() return the number of rows inserted
> > without having to wait to read all of the rows returned
>
> Umm ... you do realize that we return the rows on-the-fly?
>
I do realize that.

> The server does not know how many rows got inserted/returned
>
Well I haven't looked at the code, but it seems unintuitive that adding the
returning clause changes the semantics of insert.

> until it's run the query to completion, at which point all
> the data has already been sent to the client.  There isn't
> any way to return the rowcount before the data, and it wouldn't
> be some trivial protocol adjustment to make that work differently.
> (What it *would* be is expensive, because we'd have to store
> those rows somewhere.)
>
I wasn't asking for that, I just want the number of rows inserted.

Thanks,

Dave


Re: CommandStatus from insert returning when using a portal.

2023-07-12 Thread Dave Cramer
Dave Cramer


On Wed, 12 Jul 2023 at 16:31, David G. Johnston 
wrote:

> On Wed, Jul 12, 2023 at 1:03 PM Dave Cramer  wrote:
>
>>
>> INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id
>>
>> if a portal is used to get the results then the CommandStatus
>>
>
> IIUC the portal is not optional if you including the RETURNING clause.
>
From my testing it isn't required.

>
> There is no CommandStatus message in the protocol, the desired information
> is part of the command tag returned in the CommandComplete message.  You
> get that at the end of the command, which has been defined as when any
> portal produced by the command has been fully executed.
>

I could argue that the insert is fully completed whether I read the data or
not.

>
> You probably should add your desire to the Version 4 protocol ToDo on the
> wiki.
>
> https://wiki.postgresql.org/wiki/Todo#Wire_Protocol_Changes_.2F_v4_Protocol
>

thx, will do.

Dave

>


CommandStatus from insert returning when using a portal.

2023-07-12 Thread Dave Cramer
Greetings,

With a simple insert such as

INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id

if a portal is used to get the results, then the CommandStatus is not
returned on the execute, only when the portal is closed. After looking at
this more, it is really returned after all of the data is read, which is
consistent with the case where you don't use a portal; however, it would be
much more useful if we received the CommandStatus after the insert was
completed and before the data.

Obviously I am biased by the JDBC API which would like to have
PreparedStatement.execute() return the number of rows inserted
without having to wait to read all of the rows returned.


Dave Cramer


Re: Why is DATESTYLE, ordering ignored for output but used for input ?

2023-07-04 Thread Dave Cramer
On Mon, 3 Jul 2023 at 17:13, Matthias van de Meent <
boekewurm+postg...@gmail.com> wrote:

> On Mon, 3 Jul 2023 at 20:06, Dave Cramer  wrote:
> >
> > Greetings,
> >
> > For ISO and German dates the order DMY is completely ignored on output
> but used for input.
> >
> > test=# set datestyle to 'ISO,DMY';
> > SET
> > select '7-8-2023'::date
> > test-# ;
> > date
> > 
> >  2023-08-07
> > (1 row)
> >
> > test=# set datestyle to 'ISO,MDY';
> > SET
> > test=# select '7-8-2023'::date
> > ;
> > date
> > 
> >  2023-07-08
> > (1 row)
> >
> > Note regardless of  how the ordering is specified it is always output as
> > YMD
>
> Wouldn't that be because ISO only has one correct ordering of the day
> and month fields? I fail to see why we'd output non-ISO-formatted date
> strings when ISO format is requested. I believe the reason is the same
> for German dates - Germany's official (or most common?) date
> formatting has a single ordering of these fields, which is also the
> ordering that we supply.
>

It seems rather unintuitive that it works for some datestyles and not for
others.


>
> The code comments also seem to hint to this:
>
> > case USE_ISO_DATES:
> > case USE_XSD_DATES:
> >  /* compatible with ISO date formats */
>
> > case USE_GERMAN_DATES:
> > /* German-style date format */
>
> This has been this way since the code for ISO was originally committed
> in July of '97 with 8507ddb9 and the GERMAN formatting which was added
> in December of '97 as D.M/Y with 352b3687 (and later that month was
> updated to D.M.Y with ca23837a).
> Sadly, the -hackers archives don't seem to have any mails from that
> time period, so I couldn't find much info on the precise rationale
> around this behavior.
>

Yeah, I couldn't find much either.


>
> Kind regards,
>
> Matthias van de Meent
> Neon (https://neon.tech/)
>
> PS. That was some interesting digging into the history of the date
> formatting module.
>

Always interesting digging into the history of the project.

Dave


Why is DATESTYLE, ordering ignored for output but used for input ?

2023-07-03 Thread Dave Cramer
Greetings,

For ISO and German dates the order DMY is completely ignored on output but
used for input.

test=# set datestyle to 'ISO,DMY';
SET
test=# select '7-8-2023'::date;
    date
------------
 2023-08-07
(1 row)

test=# set datestyle to 'ISO,MDY';
SET
test=# select '7-8-2023'::date;
    date
------------
 2023-07-08
(1 row)

Note that regardless of how the ordering is specified, it is always output
as YMD.

Dave Cramer


Re: Let's make PostgreSQL multi-threaded

2023-06-10 Thread Dave Cramer
On Fri, 9 Jun 2023 at 18:29, Stephen Frost  wrote:

> Greetings,
>
> * Dave Cramer (davecramer@postgres.rocks) wrote:
> > One thing I can think of is upgrading. AFAIK dump and restore is the only
> > way to change the on disk format.
> > Presuming that eventually we will be forced to change the on disk format
> it
> > would be nice to be able to do so in a manner which does not force long
> > down times
>
> There is an ongoing effort moving in this direction.  The $subject isn't
> great, but this patch set (which we are currently working on
> updating...): https://commitfest.postgresql.org/43/3986/ attempts
> changing a lot of currently compile-time block-size pieces to be
> run-time which would open up the possibility to have a different page
> format for, eg, different tablespaces.  Possibly even different block
> sizes.  We'd certainly welcome discussion from others who are
> interested.
>
> Thanks,
>
> Stephen
>

Upgrading was just one example of the difficult problems that need to be
addressed.
My thought was that before we commit to something as potentially
resource-intensive as changing the threading model, we should compile a list
of other "big issues" and prioritize.

I realize open source is more of a scratch your itch kind of development
model, but I'm not convinced the random walk that entails is the
appropriate way to move forward. At the very least I'd like us to question
it.
Dave


Re: Let's make PostgreSQL multi-threaded

2023-06-09 Thread Dave Cramer
This is somewhat orthogonal to the topic of threading but relevant to the
use of resources.

If we are going to undertake some hard problems perhaps we should be
looking at other problems that solve other long term issues before we
commit to spending resources on changing the process model.

One thing I can think of is upgrading. AFAIK dump and restore is the only
way to change the on-disk format.
Presuming that eventually we will be forced to change the on-disk format, it
would be nice to be able to do so in a manner which does not force long
downtimes.

 Dave

>
>>


Re: Let's make PostgreSQL multi-threaded

2023-06-08 Thread Dave Cramer
On Thu, 8 Jun 2023 at 13:08, Hannu Krosing  wrote:

> I discovered this thread from a Twitter post "PostgreSQL will finally
> be rewritten in Rust"  :)
>

By the time we got around to finishing this, there would be a better
language to write it in.

Dave


Re: Named Prepared statement problems and possible solutions

2023-06-08 Thread Dave Cramer
On Thu, 8 Jun 2023 at 15:49, Jan Wieck  wrote:

> On 6/8/23 13:31, Dave Cramer wrote:
> >
> > On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik  > <mailto:knizh...@garret.ru>> wrote:
> >
>
> > So it will be responsibility of client to remember text of prepared
> > query to be able to resend it when statement doesn't exists at
> server?
> > IMHO very strange decision. Why not to handle it in connection
> > pooler (doesn't matter - external or embedded)?
> >
> >
> > I may be myopic but in the JDBC world and I assume others we have a
> > `PreparedStatement` object which has the text of the query.
> > The text is readily available to us.
> >
> > Also again from the JDBC point of view we have use un-named statements
> > normally and then name them after 5 uses so we already have embedded
> > logic on how to deal with PreparedStatements
>
> The entire problem only surfaces when using a connection pool of one
> sort or another. Without one the session is persistent to the client.
>
> At some point I created a "functional" proof of concept for a connection
> pool that did a mapping of the client side name to a pool managed server
> side name. It kept track of which query was known by a server. It kept a
> hashtable of poolname+username+query MD5 sums. On each prepare request
> it would look up if that query is known, add a query-client reference in
> another hashtable and so on. On a Bind/Exec message it would check that
> the server has the query prepared and issue a P message if not. What was
> missing was to keep track of no longer needed queries and deallocate them.
>
> As said, it was a POC. Since it was implemented in Tcl it performed
> miserable, but I got it to the point of being able to pause & resume and
> the whole thing did work with prepared statements on the transaction
> level. So it was a full functioning POC.
>
> What makes this design appealing to me is that it is completely
> transparent to every existing client that uses the extended query
> protocol for server side prepared statements.
>

Apparently this is coming in pgbouncer Support of prepared statements by
knizhnik · Pull Request #845 · pgbouncer/pgbouncer (github.com)
<https://github.com/pgbouncer/pgbouncer/pull/845>
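
Purely as an illustration of the mapping described above (a toy sketch, not
code from pgbouncer or the POC; the hash below is only a stand-in for the
MD5 keying of pool + user + query):

#include <stdio.h>

#define MAX_STMTS 16

struct stmt_entry
{
    unsigned long key;              /* hash of "user:query" */
    char          server_name[16];  /* pooler-assigned statement name */
};

static struct stmt_entry stmts[MAX_STMTS];
static int nstmts = 0;

static unsigned long
hash_text(const char *s)
{
    unsigned long h = 5381;

    while (*s)
        h = h * 33 + (unsigned char) *s++;
    return h;
}

/*
 * Look up (or register) the pooler's server-side name for this query.
 * *is_new is set when the pooler still has to send a Parse message to the
 * backend it picked; a real pooler would also track deallocation.
 */
static const char *
server_statement_name(const char *user, const char *query, int *is_new)
{
    char          buf[1024];
    unsigned long key;
    int           i;

    snprintf(buf, sizeof(buf), "%s:%s", user, query);
    key = hash_text(buf);

    for (i = 0; i < nstmts; i++)
    {
        if (stmts[i].key == key)
        {
            *is_new = 0;
            return stmts[i].server_name;
        }
    }

    if (nstmts >= MAX_STMTS)
        return NULL;                /* toy limit; a real pooler would evict */

    stmts[nstmts].key = key;
    snprintf(stmts[nstmts].server_name, sizeof(stmts[nstmts].server_name),
             "P_%d", nstmts);
    *is_new = 1;
    return stmts[nstmts++].server_name;
}

int
main(void)
{
    int is_new;
    const char *q = "select * from t where id = $1";

    printf("%s (new=%d)\n", server_statement_name("alice", q, &is_new), is_new);
    printf("%s (new=%d)\n", server_statement_name("alice", q, &is_new), is_new);
    return 0;
}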

Dave

>
>
> Jan
>
>


Re: Named Prepared statement problems and possible solutions

2023-06-08 Thread Dave Cramer
On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik  wrote:

>
>
> On 08.06.2023 6:18 PM, Dave Cramer wrote:
>
>
>
> On Thu, 8 Jun 2023 at 11:15, Jan Wieck  wrote:
>
>> On 6/8/23 10:56, Dave Cramer wrote:
>> >
>> >
>> >
>> >
>> > On Thu, 8 Jun 2023 at 10:31, Jan Wieck > > <mailto:j...@wi3ck.info>> wrote:
>> >
>> > On 6/8/23 09:53, Jan Wieck wrote:
>> >  > On 6/8/23 09:21, Dave Cramer wrote:
>> >  > The server doesn't know about all the clients of the pooler, does
>> > it? It
>> >  > has no way of telling if/when a client disconnects from the
>> pooler.
>> >
>> > Another problem that complicates doing it in the server is that the
>> > information require to (re-)prepare a statement in a backend that
>> > currently doesn't have it needs to be kept in shared memory. This
>> > includes the query string itself. Doing that without shared memory
>> in a
>> > pooler that is multi-threaded or based on async-IO is much simpler
>> and
>> > allows for easy ballooning.
>> >
>> >
>> > I don't expect the server to re-prepare the statement. If the server
>> > responds with "statement doesn't exist" the client would send a prepare.
>>
>> Are you proposing a new libpq protocol version?
>>
>
> I believe we would need to add this to the protocol, yes.
>
>
>
> So it will be responsibility of client to remember text of prepared query
> to be able to resend it when statement doesn't exists at server?
> IMHO very strange decision. Why not to handle it in connection pooler
> (doesn't matter - external or embedded)?
>

I may be myopic, but in the JDBC world (and I assume others) we have a
`PreparedStatement` object which has the text of the query.
The text is readily available to us.

Also, again from the JDBC point of view, we use un-named statements
normally and then name them after 5 uses, so we already have embedded logic
for dealing with PreparedStatements.

Dave

