On Wed, 2025-07-23 at 08:50 +0000, sivapostg...@yahoo.com wrote:
> Tried in PostgreSQL 11.11 and PostgreSQL 15.2 on Windows 10

Both of these choices are unsavory.  Don't use v11, which is out of support,
and with v15 use the current minor release, 15.13.

> Here we try to transfer data from one database to another (remote) database. 
> 
> The tables have between 85,000 and 3,600,000 records each, along with some
> smaller tables.
> No issues while transferring smaller sized tables.
> 
> Here I take one particular table [table1], which has 85,000 records.
> The table has a primary key, foreign key(s), and triggers.  A trigger updates
> another table [table2].
> Table2 has 2 triggers: one to compute a closing value, and another to delete
> the row if the closing value is zero.
> 
> 1.  Transfer the data from the source database to a CSV file.  85,000 records
> transferred.  No issues.
> 2.  Transfer the file to the remote location.  No issues.
> 3.  Transfer the contents of the file to the table using the COPY FROM
> command. - Fails when we try to transfer all 85,000 records at once.
> 
> The COPY FROM command is
> 
> COPY public.table1 FROM 'E:\temp\file1.csv' (FORMAT CSV, DELIMITER ',',
> HEADER TRUE)
> 
> The above command succeeds when
> 1.  The trigger on table1 is disabled, with all other constraints on.
> 2.  The number of rows is 16,000 or fewer, with the trigger enabled.  We
> haven't tried higher row counts.
> 
> The above command runs indefinitely when
> 1.  We try to transfer all 85,000 rows at once, with the trigger and other
> constraints on table1 enabled.
>      We waited 1.5 hours the first time and 2.5 hours the second time before
> cancelling the operation.
> 
> I read in the documentation that the fastest way to transfer data is the
> COPY command, and I couldn't find any limit on the amount of data it can
> transfer.
> One should easily be able to transfer millions of rows with this command.

There is no limit on the number of rows that a single COPY can insert.

You should research why processing fails for higher row counts:
- Are there any messages on the client or the server side?
- Is the backend process on the server busy (consuming CPU) when processing 
hangs?
- Do you see locks or other wait events in "pg_stat_activity"?  (See the query sketch below.)
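
For that last point, here is a minimal sketch of a query you could run in a
second session while the COPY appears stuck (filtering on the query text is
just one way to find the right backend):

  -- Run in a second session while the COPY seems to hang.
  -- "wait_event_type" and "wait_event" show what the backend is waiting on;
  -- state 'active' with no wait event usually means it is busy computing.
  SELECT pid, state, wait_event_type, wait_event,
         now() - query_start AS running_for,
         query
  FROM pg_stat_activity
  WHERE query ILIKE 'COPY%';

If the backend shows as "active" with no wait event, it is probably spending
its time in the triggers rather than waiting for a lock.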

> Here are the triggers.
> 
> The trigger function, which is called from table1 AFTER INSERT, UPDATE or DELETE:

Do you see high memory usage or paging for the backend process when the COPY
hangs?

One thing you could try is a BEFORE trigger.  That should work the same, unless
there are foreign key constraints.
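
If you want to experiment with that, here is a minimal sketch; the trigger and
function names are made up, since the original definitions are not quoted here:

  -- Hypothetical names: substitute your actual trigger and trigger function.
  -- A BEFORE row trigger function must return NEW (or OLD for a DELETE),
  -- otherwise the row operation is silently skipped.
  DROP TRIGGER IF EXISTS table1_update_table2 ON public.table1;

  CREATE TRIGGER table1_update_table2
     BEFORE INSERT OR UPDATE OR DELETE ON public.table1
     FOR EACH ROW
     EXECUTE FUNCTION table1_trigger_fn();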

> [...]
> If (Select Count(*)
>  From   table2
>  WHERE  companycode = company_code
>  AND    branchcode  = branch_code
>  AND    locationfk  = location_fk
>  AND    barcode     = variety_code ) > 0 Then
> [...]

That may well be slow, particularly without a matching index.
A better way to write that would be

  IF EXISTS (SELECT 1 FROM table2
             WHERE companycode = company_code
             AND   branchcode  = branch_code
             AND   locationfk  = location_fk
             AND   barcode     = variety_code) THEN

because that can stop processing after the first match.
It still needs an index for fast processing.
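
For example, a multicolumn index covering that lookup might look like this (the
index name is made up):

  -- Sketch: one index matching all four columns of the WHERE clause.
  CREATE INDEX IF NOT EXISTS table2_lookup_idx
     ON table2 (companycode, branchcode, locationfk, barcode);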

Yours,
Laurenz Albe

