On 2/15/21 12:22 PM, Karthik K wrote:
Karthik K writes:
> exactly, for now, what I did was, as the table is already partitioned, I
> created 50 different connections and tried updating the target table by
> directly querying from the source partition tables. Are there any other
> techniques that I can use to speed this up? also
Yes, I'm using \copy to load the batch table.

With the new design we are doing, we expect fewer updates and more inserts going forward. One of the target columns I'm updating is indexed, so I will drop the index and try it out, and also try your suggestion above of splitting the ON CONFLICT.
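For reference, the "splitting the ON CONFLICT" idea could look something like the sketch below. The table and column names (target, target_batch, id, val, updated_at) are illustrative, not from the thread, and this assumes no concurrent writers are inserting the same keys between the two statements:

```sql
-- Instead of one INSERT ... ON CONFLICT DO UPDATE, run the two halves
-- separately so each statement can get its own, simpler plan.

-- 1. Update rows that already exist in the target.
UPDATE target t
SET    val        = b.val,
       updated_at = b.updated_at
FROM   target_batch b
WHERE  t.id = b.id;

-- 2. Insert only the rows that are genuinely new.
INSERT INTO target (id, val, updated_at)
SELECT b.id, b.val, b.updated_at
FROM   target_batch b
WHERE  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = b.id);
```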
On 2/15/21 11:41 AM, Karthik K wrote:
Exactly. For now, since the table is already partitioned, I created 50 different connections and tried updating the target table by querying directly from the source partition tables. Are there any other techniques I can use to speed this up? Also, when we use ON CONFLICT
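A rough sketch of the per-partition approach described above, assuming the source table has partitions named along the lines of target_batch_p00, target_batch_p01, and so on (names are hypothetical), with each statement issued from its own connection:

```sql
-- Connection 1: restrict the update to one source partition.
UPDATE target t
SET    val = b.val
FROM   target_batch_p00 b
WHERE  t.id = b.id;

-- Connection 2: same statement against the next partition.
UPDATE target t
SET    val = b.val
FROM   target_batch_p01 b
WHERE  t.id = b.id;

-- ... and so on for the remaining partitions.
```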
On 2/12/21 12:46 PM, Karthik Kumar Kondamudi wrote:
Hi,

I'm looking for suggestions on how I can improve the performance of the merge statement below. We have a batch process that batch-loads the data into the _batch tables using Postgres, and the task is to update the main target tables if the record exists, else insert into them. Sometimes these batch table
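The kind of statement under discussion is presumably along these lines, i.e. a single-statement upsert from the batch table into the target. This is only a minimal sketch with made-up names (target, target_batch, id, val, updated_at), not the poster's actual query:

```sql
INSERT INTO target (id, val, updated_at)
SELECT id, val, updated_at
FROM   target_batch
ON CONFLICT (id) DO UPDATE
SET    val        = EXCLUDED.val,
       updated_at = EXCLUDED.updated_at;
```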