Tim Cross writes:
> Jason Ralph writes:
>> I am planning an update on a table with 20 million records, and I have been
>> researching best practices. I will remove all indexes and foreign keys
>> prior to the update; however, I am not sure whether I should use a transaction
>> or not.
>> My thought
Jason Ralph writes:
> Hello List,
> PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313
> (Red Hat 4.4.7-23), 64-bit
>
> I am planning an update on a table with 20 million records, and I have been
> researching best practices. I will remove all indexes and foreign
Bee.Lists writes:
> I have an issue with a server (v10) that’s seeing increasing connections
> until it’s maxed out.
>
> max_connections for my 4-core server is set to 12.
>
> I’ve installed pg_stat_activity and pg_stat_statements.
>
> I access this server through a web app, and another
Why is there now a second thread for this topic?
On Tue, Jun 23, 2020 at 3:21 PM Bee.Lists wrote:
> >
> > On Jun 23, 2020, at 4:51 PM, Michael Lewis wrote:
> >
> > Do you see anything in pg_stat_activity that stays idle for a while and
> then *does* disappear on its own? Perhaps some types of
On 6/23/20 3:16 PM, Bruce Momjian wrote:
On Sat, Jun 20, 2020 at 06:28:52PM +0200, Peter J. Holzer wrote:
On 2020-06-17 20:00:51 -0700, Adrian Klaver wrote:
On 6/17/20 7:14 PM, prachi surangalikar wrote:
I have tried everything but still I could not find the solution to
this problem.
I
>
> On Jun 23, 2020, at 4:51 PM, Michael Lewis wrote:
>
> Do you see anything in pg_stat_activity that stays idle for a while and then
> *does* disappear on its own? Perhaps some types of connections are doing
> client side/application stuff before telling the DB to close the connection.
I’m
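The pg_stat_activity check being suggested above can be sketched as a query that lists sessions sitting idle past some threshold, to see whether clients really are holding connections open. The 10-minute cutoff is an arbitrary illustration, not from the thread:

```sql
-- Sketch: sessions idle longer than 10 minutes, longest-idle first.
SELECT pid, usename, application_name, state,
       now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle'
  AND now() - state_change > interval '10 minutes'
ORDER BY idle_for DESC;
```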
On Sat, Jun 20, 2020 at 06:28:52PM +0200, Peter J. Holzer wrote:
> On 2020-06-17 20:00:51 -0700, Adrian Klaver wrote:
> > On 6/17/20 7:14 PM, prachi surangalikar wrote:
> > > I have tried everything but still I could not find the solution to
> > > this problem.
> > > I made changes in the
Tom Lane schrieb am 23.06.2020 um 23:25:
I would like to extract the dependency between materialized views.
e.g. in the following situation:
create materialized view mv1 as select ;
create materialized view mv2 as select ... from mv1, ...;
or more readably,
regression=# select
Thomas Kellerer writes:
> I would like to extract the dependency between materialized views.
> e.g. in the following situation:
> create materialized view mv1 as select ;
> create materialized view mv2 as select ... from mv1, ...;
> I would like to know that mv2 depends on mv1.
> I
One last follow-up on this thread. I don't remember this being mentioned, so I
wanted to share it in case it might be of help. The following is NetApp's Best
Practices for PostgreSQL on NetApp ONTAP storage:
https://www.netapp.com/us/media/tr-4770.pdf
> Just update them and be done with it. Do the work in batches if it doesn't
> matter that concurrent accesses to the table might see some rows that have
> the old value and some the new. Given you are on PG11 and can commit within a
> function, you could adapt something like this to just run until done.
On Tue, Jun 23, 2020 at 2:29 PM Bee.Lists wrote:
> I have an issue with a server (v10) that’s seeing increasing connections
> until it’s maxed out.
>
> max_connections for my 4-core server is set to 12.
>
> I’ve installed pg_stat_activity and pg_stat_statements.
>
Do you see anything in
On Tue, Jun 23, 2020 at 2:34 PM Jim Hurne wrote:
> Sure! Below are more of the details from the same set of logs. Looking at
> them myself, I see that there is always some percentage of tuples that are
> dead but are not yet removable. And that number increases on every vacuum,
> which might
Magnus Hagander wrote on 06/22/2020 04:44:33 PM:
> Yes, that's how VACUUM FULL works (more or less).
Thanks for the confirmation and further explanation Magnus! I'm definitely
getting a better idea of how the vacuum and autovacuum facilities work.
> And for autovacuum, with a cost_delay of
I have an issue with a server (v10) that’s seeing increasing connections until
it’s maxed out.
max_connections for my 4-core server is set to 12.
I’ve installed pg_stat_activity and pg_stat_statements.
I access this server through a web app, and another client on two machines. I
also
On Tue, Jun 23, 2020 at 2:25 PM Adrian Klaver
wrote:
> On 6/23/20 9:44 AM, Edu Gargiulo wrote:
> >
> > On Tue, Jun 23, 2020 at 11:45 AM Adrian Klaver
> > <adrian.kla...@aklaver.com> wrote:
> >
> > On 6/23/20 7:37 AM, Edu Gargiulo wrote:
> > >
> > >
> > > On Tue, Jun 23,
>
> >Are you updating *every* row in the table?
>>
>> No I am using an update like so: UPDATE members SET regdate='2038-01-18'
>> WHERE regdate='2020-07-07'
>>
>> DB=# select count(*) from members where regdate = '2020-07-07';
>>   count
>> ----------
>>  17333090
>> (1 row)
>>
>>
Just
>
> > > On 23/06/2020 14:42, Klaudie Willis wrote:
> > >
> > > > I got my first hint of why this problem occurs when I looked at the
> > > > statistics. For the column in question, "instrument_ref" the
> > > > statistics claimed it to be:
> > > > The default_statistics_target=500, and analyze has
On 6/23/20 9:44 AM, Edu Gargiulo wrote:
On Tue, Jun 23, 2020 at 11:45 AM Adrian Klaver
<adrian.kla...@aklaver.com> wrote:
On 6/23/20 7:37 AM, Edu Gargiulo wrote:
>
>
> On Tue, Jun 23, 2020 at 10:56 AM Adrian Klaver
> <adrian.kla...@aklaver.com>
You can try this.
First take a backup of the table, then execute the statements below.
create table members_temp as
select , '2038-01-18' regdate from members where regdate = '2020-07-07';
delete from members where regdate = '2020-07-07';
insert into members select * from members_temp;
drop table
On Tue, Jun 23, 2020 at 11:45 AM Adrian Klaver
wrote:
> On 6/23/20 7:37 AM, Edu Gargiulo wrote:
> >
> >
> > On Tue, Jun 23, 2020 at 10:56 AM Adrian Klaver
> > <adrian.kla...@aklaver.com> wrote:
> >
> > On 6/23/20 6:48 AM, Edu Gargiulo wrote:
> > > On Tue, Jun 23, 2020 at 10:34 AM
I would like to extract the dependency between materialized views.
e.g. in the following situation:
create materialized view mv1 as select ;
create materialized view mv2 as select ... from mv1, ...;
I would like to know that mv2 depends on mv1.
I assumed this could be done through
David Gauthier writes:
> I have a table which logs the inception of an event with an
> "incept_datetime" (timestamptz) field. I want to know how many events
> occurred per day from one date to the next using midnight as a marker for
> each day reported. The end result should be something
Hi:
9.6.0 on Linux
I have a table which logs the inception of an event with an
"incept_datetime" (timestamptz) field. I want to know how many events
occurred per day from one date to the next using midnight as a marker for
each day reported. The end result should be something like...
I'd make a copy of the table, and test how long the various methods take.
On 6/23/20 10:17 AM, Jason Ralph wrote:
Hello List,
PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
I am planning an update on a table with 20Million
Adrian, you are correct. My mistake.
K
‐‐‐ Original Message ‐‐‐
On Tuesday, June 23, 2020 4:14 PM, Adrian Klaver
wrote:
> On 6/23/20 7:05 AM, Fabio Pardi wrote:
>
> > On 23/06/2020 14:42, Klaudie Willis wrote:
> >
> > > I got my first hint of why this problem occurs when I looked at
Hello List,
PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313
(Red Hat 4.4.7-23), 64-bit
I am planning an update on a table with 20 million records, and I have been
researching best practices. I will remove all indexes and foreign keys
prior to the update, however I
On 6/23/20 8:32 AM, Jason Ralph wrote:
Hello List,
PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
I am planning an update on a table with 20 million records, and I have been
researching best practices. I will remove all indexes and
On 6/23/20 7:37 AM, Edu Gargiulo wrote:
On Tue, Jun 23, 2020 at 10:56 AM Adrian Klaver
<adrian.kla...@aklaver.com> wrote:
On 6/23/20 6:48 AM, Edu Gargiulo wrote:
> On Tue, Jun 23, 2020 at 10:34 AM Adrian Klaver
> <adrian.kla...@aklaver.com>
On Tue, Jun 23, 2020 at 10:56 AM Adrian Klaver
wrote:
> On 6/23/20 6:48 AM, Edu Gargiulo wrote:
> > On Tue, Jun 23, 2020 at 10:34 AM Adrian Klaver
> > <adrian.kla...@aklaver.com> wrote:
> >
> > On 6/23/20 6:30 AM, Edu Gargiulo wrote:
> > > Hi all,
> > >
> > > We are
On 6/22/20 9:18 PM, prachi surangalikar wrote:
Please reply to the list also.
CCing list.
I tried to connect to postgres using psql and then it's asking me for a
password for user 'administrator', but I have not created any
administrator user, and when I logged in as user 'postgres' it is asking
for
On 6/23/20 6:48 AM, Edu Gargiulo wrote:
On Tue, Jun 23, 2020 at 10:34 AM Adrian Klaver
<adrian.kla...@aklaver.com> wrote:
On 6/23/20 6:30 AM, Edu Gargiulo wrote:
> Hi all,
>
> We are using postgresql 11.7 on Debian.
> I noticed that pg_dump is not including
On 6/23/20 7:05 AM, Fabio Pardi wrote:
On 23/06/2020 14:42, Klaudie Willis wrote:
I got my first hint of why this problem occurs when I looked at the
statistics. For the column in question, "instrument_ref", the
statistics claimed it to be:
The default_statistics_target=500, and analyze has
On 23/06/2020 14:42, Klaudie Willis wrote:
> I got my first hint of why this problem occurs when I looked at the
> statistics. For the column in question, "instrument_ref", the statistics
> claimed it to be:
>
> The default_statistics_target=500, and analyze has been run.
> select * from
Thanks Adrian,
> You could break it down into multiple transactions if there is way to specify
> ranges of records.
Say I couldn't break it up, would it be faster in or out of the transaction?
It depends whether you have concurrent transactions using the table. If you do,
I think it would
On Tue, Jun 23, 2020 at 10:34 AM Adrian Klaver
wrote:
> On 6/23/20 6:30 AM, Edu Gargiulo wrote:
> > Hi all,
> >
> > We are using postgresql 11.7 on Debian.
> > I noticed that pg_dump is not including records on tables with fields
> > defined as array type (integer[] and real[]). The table
Thanks Adrian,
> You could break it down into multiple transactions if there is way to specify
> ranges of records.
Say I couldn't break it up, would it be faster in or out of the transaction?
Jason Ralph
-Original Message-
From: Adrian Klaver
Sent: Tuesday, June 23, 2020 9:38 AM
To:
On 6/23/20 6:32 AM, Jason Ralph wrote:
Hello List,
PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
I am planning an update on a table with 20 million records, and I have been
researching best practices. I will remove all indexes and
On 6/23/20 6:30 AM, Edu Gargiulo wrote:
Hi all,
We are using postgresql 11.7 on Debian.
I noticed that pg_dump is not including records on tables with fields
defined as array type (integer[] and real[]). The table structure is
normally restored but they have 0 records on restoring.
What is
Hello List,
PostgreSQL 11.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313
(Red Hat 4.4.7-23), 64-bit
I am planning an update on a table with 20 million records, and I have been
researching best practices. I will remove all indexes and foreign keys
prior to the update, however I
Hi all,
We are using postgresql 11.7 on Debian.
I noticed that pg_dump is not including records on tables with fields
defined as array type (integer[] and real[]). The table structure is
normally restored but they have 0 records on restoring.
I'm wondering if this is normal behaviour of pg_dump
I didn't run it with "verbose" but otherwise, yes, several times. I can do it
again with verbose if you are interested in the output. Just give me some time.
500M rows, 50 columns, is no small job :)
K
‐‐‐ Original Message ‐‐‐
On Tuesday, June 23, 2020 2:51 PM, Ron wrote:
> Maybe I
Maybe I missed it, but did you run "ANALYZE VERBOSE bigtable;"?
On 6/23/20 7:42 AM, Klaudie Willis wrote:
Friends,
I run PostgreSQL 12.3 on Windows. I have just discovered a pretty
significant problem with PostgreSQL and my data. I have a large table,
500M rows, 50 columns. It is split in
Friends,
I run PostgreSQL 12.3 on Windows. I have just discovered a pretty significant
problem with PostgreSQL and my data. I have a large table, 500M rows, 50
columns. It is split into 3 partitions by year. In addition to the primary key,
one of the columns is indexed, and I do lookups on