Thanks Adrian & Hector.

I will try the copy approach & post the result here.
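
Roughly, this is the sketch I have in mind, combining the COPY-with-a-query
form Adrian quotes below with Hector's copy-to-program trick. The table and
column names (orders, order_lines, target_json) are only placeholders for our
real schema, and copy ... to program runs on the server, so it needs superuser
or pg_execute_server_program rights:

-- target table with a single jsonb column (placeholder name)
create table target_json (doc jsonb);

-- build the json from the joined tables and stream it straight into the
-- target, instead of one big insert into ... select over ~1M rows
copy (
    select row_to_json(t)
    from (select o.id, o.created_at, l.item, l.qty
          from orders o
          join order_lines l on l.order_id = o.id) t
) to program 'psql mydb postgres -c ''copy target_json from stdin''';

If the server-side program call is a permissions problem, the same thing
should work from the client with \copy (select ...) to a file and then
\copy target_json from that file.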

On Sat, Mar 9, 2024 at 9:57 PM hector vass <hector.v...@gmail.com> wrote:

>
>
> On Sat, Mar 9, 2024 at 4:10 PM Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
>> On 3/9/24 08:00, kuldeep singh wrote:
>> > Copy may not work in our scenario since we need to join data from
>> > multiple tables & then convert it to JSON using row_to_json. This
>> > JSON data eventually needs to be stored in a target table.
>>
>> Per:
>>
>> https://www.postgresql.org/docs/current/sql-copy.html
>>
>> "
>> COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
>>
>> <...>
>>
>> query
>>
>>      A SELECT, VALUES, INSERT, UPDATE, or DELETE command whose results
>> are to be copied. Note that parentheses are required around the query.
>>
>>      For INSERT, UPDATE and DELETE queries a RETURNING clause must be
>> provided, and the target relation must not have a conditional rule, nor
>> an ALSO rule, nor an INSTEAD rule that expands to multiple statements.
>> "
>>
>> >
>> > Would it be better if we break the process into batches of, say,
>> > 10,000 rows & insert each batch in its own transaction? Or is there
>> > any other better solution available?
>> >
>> > On Sat, Mar 9, 2024 at 9:01 PM hector vass <hector.v...@gmail.com>
>> > wrote:
>> >
>> >
>> >
>> >     On Sat, Mar 9, 2024 at 3:02 PM kuldeep singh
>> >     <kuldeeparor...@gmail.com> wrote:
>> >
>> >         Hi,
>> >
>> >         We are inserting close to 1M records into a table with a
>> >         single jsonb column, but the query is getting stuck.
>> >
>> >         We are using insert into ... select * .., so all the
>> >         operations are within the DB.
>> >
>> >         If we run the select query on its own, it returns the data
>> >         for all rows in 40 sec, but with the insert it gets stuck.
>> >
>> >         PG Version - 15.
>> >
>> >         What could be the problem here?
>> >
>> >         Regards,
>> >         KD
>> >
>> >
>> >     Inserting 1M rows, especially JSON that can be large, variable
>> >     in size, stored as blobs and indexed, is perhaps not the correct
>> >     way to do this. Insert performance will also depend on your
>> >     tuning. Supporting transactions, users or bulk processing are 3
>> >     sides of a compromise. You should perhaps consider that insert is
>> >     for inserting a few rows into live tables ... you might be better
>> >     off using copy or \copy, or pg_dump if you are just trying to
>> >     replicate a large table.
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>
>
> What Adrian Klaver said ^
> I discovered that even this works...
>
>
> create view myview as (select row_to_json(mytable) from mytable);
>
> create table newtable as select * from myview where 1=0;
>
> copy myview to program 'psql mydb postgres -c ''copy newtable from stdin''';
>
>
>
>
>
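
On the batching idea from my earlier mail: if a single copy/insert still
struggles at ~1M rows, this is roughly the fallback I would try, using the
same placeholder names as above and assuming orders has a numeric id we can
range over. Each chunk of 10,000 ids is committed on its own, so a failure
only rolls back the current batch (the commits require running the block
outside an explicit transaction):

do $$
declare
    lo   bigint;
    hi   bigint;
    step bigint := 10000;
begin
    select min(id), max(id) into lo, hi from orders;
    while lo <= hi loop
        insert into target_json (doc)
        select row_to_json(t)
        from (select o.id, o.created_at, l.item, l.qty
              from orders o
              join order_lines l on l.order_id = o.id
              where o.id >= lo and o.id < lo + step) t;
        commit;  -- each batch in its own transaction
        lo := lo + step;
    end loop;
end $$;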
