Re: Does psqlodbc_10_03_0000-x86(32bit) support PostgreSQL 12.6 64bit?

2021-07-07 Thread 管竚
>hi,
>Try with a newer version of odbc driver, something like psqlodbc_12_02_

Dear Diego,
 
What a helpful reply. Thank you for the time you put into getting back to me.
I really want you to know how much I appreciate that.
 
The ASP website now works perfectly, connecting to PostgreSQL 12.6 with
psqlodbc_12_02_ (32-bit) without any issues.
 
Sincerely,
 

gzhcoder

Re: pg_upgrade as a way of cloning an instance?

2021-07-07 Thread Bruce Momjian
On Wed, Jul  7, 2021 at 10:30:15AM +0200, Luca Ferrari wrote:
> On Wed, Jul 7, 2021 at 10:22 AM Peter Eisentraut
>  wrote:
> > Yeah, seems pretty pointless.  You can just copy the data directory
> > directly and get the same effect.  pg_upgrade basically does a data
> > directory copy plus some extra stuff to convert the system catalogs
> > between versions, and if you don't need that second part, you might as
> > well do the first part directly.
> 
> 
> Agree on everything, but I was curious, since users sometimes find ways
> to do things that are almost insane!

Also, pg_upgrade will throw an error when copying from one version to the
same version if there are tablespaces, since the tablespace directory
name includes the catalog version.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Doubt on pgbouncer

2021-07-07 Thread Merlin Moncure
On Sat, Jul 3, 2021 at 10:36 AM Rama Krishnan  wrote:
>
>
> Hi Team,
>
> How can I split read and write queries using pgbouncer

Check out pgbouncer-rr.  I haven't used it, but it may do what you need.
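For what it's worth, pgbouncer-rr does the routing by letting you supply a small Python function that inspects each incoming query and returns the key of a [databases] entry in pgbouncer.ini. Here is a minimal sketch; the hook name, its signature, and the "reader"/"writer" pool keys are assumptions based on the project's README, so verify them against the pgbouncer-rr docs before relying on this:

```python
import re

def routing_rules(username, query):
    """Hypothetical pgbouncer-rr routing hook: send plain SELECTs to a
    read-replica pool and everything else to the primary.
    'reader' and 'writer' are assumed [databases] keys in pgbouncer.ini."""
    if re.match(r"\s*select\b", query, re.IGNORECASE):
        return "reader"   # route read-only queries to the replica pool
    return "writer"       # DML/DDL and anything ambiguous go to the primary
```

Note that a pure SQL-text match like this misroutes SELECTs that write (e.g. SELECT ... FOR UPDATE, or calls to functions with side effects), which is one reason to test such routing carefully before using it in production.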

merlin




Re: Does psqlodbc_10_03_0000-x86(32bit) support PostgreSQL 12.6 64bit?

2021-07-07 Thread Diego

hi,
Try with a newer version of odbc driver, something like psqlodbc_12_02_

On 07/07/2021 07:38, 管竚 wrote:












Does psqlodbc_10_03_0000-x86(32bit) support PostgreSQL 12.6 64bit?

2021-07-07 Thread 管竚

 



Created by:  gzhcoder
Email address:  gzhco...@126.com
PostgreSQL version: 12.6
Operating system:   windows 10
Description:

I have a classic ASP website on IIS. I set “Enable 32-bit Applications” to
True (so the IIS application pool spawns in 32-bit mode).
The website connects to PostgreSQL 9.4 64-bit with
psqlodbc_10_03_0000-x86 (32-bit) and works very well.
When I switch from PostgreSQL 9.4 64-bit to PostgreSQL 12.6 64-bit, I get the
error below:

Number: -2147217887
Description: Multiple-step OLE DB operation generated errors. 

I found that the problem occurred in the third step:
Step 1. Starts a new transaction with the BeginTrans method.
Step 2. Creates an OraDynaset object from the specified SQL SELECT statement
SQL-A.
Step 3. Creates an OraDynaset object from the specified SQL SELECT statement
SQL-B.
Step 4. Ends the current transaction with the CommitTrans method.

I made a sample ASP page as below:
psqlodbc.asp
-- start -

<%@ language='VBScript' %>
<%
Option Explicit

' Open an ADODB recordset for the given SQL; cursor/lock type depends on ArgMod
Function CreateDynaset(ArgSQL, ArgMod, cn)
    Dim rs
    Set rs = CreateObject("ADODB.Recordset")

    If ArgMod = 0 Then
        rs.Open ArgSQL, cn, 1, 3   ' adOpenKeyset, adLockOptimistic
    Else
        rs.Open ArgSQL, cn, 3, 2   ' adOpenStatic, adLockPessimistic
    End If

    Set CreateDynaset = rs
End Function
%>
<%
Dim sConnection, objConn, objRS1, objRS2, sql

sConnection = "DRIVER={PostgreSQL Unicode};SERVER=999.999.999.999;database=XX;UID=XX;PASSWORD=XX;port=5432"

Set objConn = Server.CreateObject("ADODB.Connection")
objConn.Open(sConnection)

'DoStep1
objConn.BeginTrans

'SQL-A
sql = "SELECT cola FROM odbctest.t_odbc_a"
On Error Resume Next

'DoStep2
Set objRS1 = CreateDynaset(sql, 8, objConn)

If Err.Number <> 0 Then
    ' Response.Write, not WScript.Echo, inside classic ASP
    Response.Write "Error in DoStep2: " & Err.Description
    Err.Clear
End If

objRS1.Close
Set objRS1 = Nothing

'SQL-B
sql = "SELECT colb FROM odbctest.t_odbc_b"
On Error Resume Next

'DoStep3
Set objRS2 = CreateDynaset(sql, 8, objConn)

If Err.Number <> 0 Then
    Response.Write "Error in DoStep3: " & Err.Description
    Err.Clear
End If

objRS2.Close
Set objRS2 = Nothing

'DoStep4
objConn.CommitTrans

objConn.Close
Set objConn = Nothing
%>

psqlodbc test

-- end -


Thanks for any help



Re: wal_keep_segments and max_wal_size

2021-07-07 Thread Kyotaro Horiguchi
At Mon, 5 Jul 2021 16:45:09 +0530, Atul Kumar  wrote in 
> hi,
> 
> (on postgres 9.6) I was just practicing to make the number of WAL files in
> the pg_xlog directory equal to the value I pass as the wal_keep_segments
> parameter.
> 
> say for example
> show wal_keep_segments;
>  wal_keep_segments
> ---
>  125

This setting ensures that this number of WAL files is preserved before the
current WAL location (shown by pg_current_wal_lsn(); on 9.6,
pg_current_xlog_location()). This means a WAL record is written to roughly
the 126th live (not-recycled) file after a checkpoint, because the setting
instructs checkpoints to leave that number of WAL files regardless of other
settings.

> so I want my WAL files not to grow beyond 125 in number, so
> I changed max_wal_size to 2 GB to limit
> the size of the pg_xlog directory.
> 
>  show max_wal_size;
>  max_wal_size
> --
>  2GB
> (1 row)

Note that max_wal_size is a soft limit, which can be exceeded in
certain cases, for reasons including wal_keep_segments.

https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-MAX-WAL-SIZE


> but I did some testing with bulk inserts, and the number of WAL files
> grew to more than 125.
> 
> and the size of pg_xlog directory also reached to 2.7 GB.
> 
> /data/apps/edb/as96/data/pg_xlog 04:05:08]$ ls | wc -l
> 173
> 
> /data/apps/edb/as96/data 04:05:11]$ du -sh pg_xlog/
> 2.7Gpg_xlog/

I'm not sure what value you set for checkpoint_completion_target, but
assuming the default of 0.5, with that setting and max_wal_size = 2GB a
checkpoint happens every 85 segments [*1].  Note that those 85 files come
on top of the 125 files kept in pg_xlog by wal_keep_segments, so it is no
wonder if the WAL files grow to more than 3GB.

*1: max_wal_size / (checkpoint_completion_target + 1.0) = 2048MB / 1.5 ≈ 1365MB ≈ 85 16MB-files
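As a back-of-the-envelope check of the arithmetic above (a sketch only; it assumes the default 16MB WAL segment size and checkpoint_completion_target = 0.5, and ignores segment recycling details):

```python
SEGMENT_MB = 16  # default WAL segment size

def checkpoint_spacing_segments(max_wal_size_mb, completion_target=0.5):
    """Approximate segments written between XLOG-triggered checkpoints:
    max_wal_size / (checkpoint_completion_target + 1.0), in 16MB files."""
    return int(max_wal_size_mb / (completion_target + 1.0) / SEGMENT_MB)

def expected_pg_xlog_segments(max_wal_size_mb, wal_keep_segments,
                              completion_target=0.5):
    """Rough upper bound on live segments in pg_xlog: the files kept by
    wal_keep_segments plus the files written between checkpoints."""
    return wal_keep_segments + checkpoint_spacing_segments(
        max_wal_size_mb, completion_target)

print(checkpoint_spacing_segments(2048))          # 85 segments between checkpoints
print(expected_pg_xlog_segments(2048, 125))       # 210 segments
print(expected_pg_xlog_segments(2048, 125) * 16)  # 3360 MB, i.e. over 3 GB
```

So with wal_keep_segments = 125 and max_wal_size = 2GB, a pg_xlog directory approaching 3GB or more is expected, not a malfunction.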

> I wonder why I saw this behavior, because I limited the size of the
> pg_xlog directory by setting max_wal_size to 2GB, which should be
> equivalent to around 125 WAL files in the pg_xlog directory.

Does the above explanation make sense to you?

> please suggest how I should make the two consistent (wal_keep_segments
> and max_wal_size).

wal_keep_segments should be set to the minimal amount required for
pg_basebackup or replication, and otherwise to zero.

If wal_keep_segments is greater than max_wal_size minus [*1], that is, 43
16MB files, max_wal_size will be exceeded before the XLOG-triggered
checkpoint caused by max_wal_size starts.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Need suggestions about live migration from PG 9.2 to PG 13

2021-07-07 Thread Magnus Hagander
On Wed, Jul 7, 2021 at 3:22 AM Lucas  wrote:
>
> Hello all,
>
> I'm currently working on a migration from PG 9.2 to PG 13 (RDS) and would 
> like some suggestions, please.
>
> Our current database stack is:
>
> master (pg 9.2) --> slave (pg 9.2) --> slave (pg 9.2 - cascading replication)
>  --> bucardo (ec2 instance) --> RDS (pg 13)
>
>
> The original plan was:
>
> Get bucardo replication working
> Deploy a test environment using the new PG 13 RDS database
> Use the test environment and test as much as possible, to make sure our 
> application works with PG 13
> Test more
> Decide on a date to move all customers to the new database
> Stop the replication
>
> However, the business decided that's not the way they want to move forward. 
> If we have issues it would impact our customers, etc, even though we do have 
> the test environment up and running and the application works with it.
>
> Now, the business wants to move a few customers to RDS and leave the rest on 
> PG 9.2... then gradually migrate them to RDS. But they want customers data to 
> be available in both databases, in case we need to move the customers back to 
> 9.2 if we face any issues. So that means a bidirectional replication.
>
> I am not comfortable with that, using Bucardo, and was hoping you guys could 
> suggest an alternative solution? If anybody has ever done something like 
> this, could you share your experiences?

Note that PostgreSQL 9.2 has been end of life for almost 5 years by
now. If I were you I'd be a *lot* more worried about that than I would
be about Bucardo.

You're also not going to find many other solutions that want to talk
to an old unsupported postgresql, in particular one which doesn't
support logical decoding (which came in postgresql 9.4, also out of
support by now). Slony might be a choice -- it doesn't do
bidirectional, but you can replicate different
tables/schemas/databases in different directions in the same system.
But that assumes that your customers are not shared in the same table
-- if so, you're stuck with something more complex.

I don't believe Slony is supported with RDS though -- RDS is quite
limited in what options you can use.



> BTW - The DB is ~ 1.5TB so pg_upgrade is out of the question, as it takes 
> ages.

As long as you run pg_upgrade in link mode, doing so on a 1.5TB
database is a very quick operation; it normally completes in a couple of
minutes including the ANALYZE step. It is a destructive operation,
so you can't go back if something goes wrong, but just keep an extra
standby node around to fail over to if everything blows up and you
have that covered.

The fact that pg_upgrade *doesn't* take ages to deal with medium size
databases and up is the main reason it *exists*.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: pg_upgrade as a way of cloning an instance?

2021-07-07 Thread Luca Ferrari
On Wed, Jul 7, 2021 at 10:22 AM Peter Eisentraut
 wrote:
> Yeah, seems pretty pointless.  You can just copy the data directory
> directly and get the same effect.  pg_upgrade basically does a data
> directory copy plus some extra stuff to convert the system catalogs
> between versions, and if you don't need that second part, you might as
> well do the first part directly.


Agree on everything, but I was curious, since users sometimes find ways
to do things that are almost insane!

Thanks,
Luca




Re: pg_upgrade as a way of cloning an instance?

2021-07-07 Thread Peter Eisentraut

On 07.07.21 08:53, Luca Ferrari wrote:

Hi all,
someone pointed out to me that pg_upgrade can be used to make a clone of
a database cluster, specifying the same binaries, such as

pg_upgrade -B /usr/pgsql-13/bin -b /usr/pgsql-13/bin -D /data/clone -d /data/src

I tested it and it seems to work, even if I don't see any point in
running it (and most notably it requires a downtime on the original
cluster).
Any opinion about that?


Yeah, seems pretty pointless.  You can just copy the data directory 
directly and get the same effect.  pg_upgrade basically does a data 
directory copy plus some extra stuff to convert the system catalogs 
between versions, and if you don't need that second part, you might as 
well do the first part directly.





pg_upgrade as a way of cloning an instance?

2021-07-07 Thread Luca Ferrari
Hi all,
someone pointed out to me that pg_upgrade can be used to make a clone of
a database cluster, specifying the same binaries, such as

pg_upgrade -B /usr/pgsql-13/bin -b /usr/pgsql-13/bin -D /data/clone -d /data/src

I tested it and it seems to work, even if I don't see any point in
running it (and most notably it requires a downtime on the original
cluster).
Any opinion about that?

Thanks,
Luca