Thanks for your answers.
1) The first time, I set that value to 200. I thought it was a connection
issue, so I increased max_connections to 1200.
After I sent this email, I found that max_connections is associated with
shared_buffers.
My configuration sets shared_buffers = 192GB because the PostgreSQL
documentation says to set
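For reference, a minimal postgresql.conf sketch of the two settings under discussion (the values are illustrative, taken from this thread, not recommendations; the documentation's usual starting point for shared_buffers on a dedicated server is roughly 25% of system RAM):

```
# postgresql.conf -- illustrative values from this thread, not recommendations
max_connections = 1200   # each backend consumes memory; a pooler is often better
shared_buffers = 192GB   # docs suggest starting near 25% of RAM on a dedicated box
```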
On Tue, Apr 9, 2019 at 12:14 PM Thomas Munro wrote:
> It's more doable here than elsewhere because the data on disk isn't
> persistent across server restart, let alone pg_upgrade. Let's see...
> each segment file is 256kb and we need to be able to address 2^64 *
>
On Sun, Apr 7, 2019 at 2:31 AM Pavel Suderevsky wrote:
> Probably if you advise me what could cause "pg_serial": apparent wraparound
> messages I would have more chances to handle all the performance issues.
9.6 has this code:
/*
* Give a warning if we're about to run out of
On Fri, Apr 5, 2019 at 8:35 AM Jeff Janes wrote:
> On Tue, Apr 2, 2019 at 11:31 AM Andres Freund wrote:
>> On 2019-04-02 07:35:02 -0500, Brad Nicholson wrote:
>>
>> > A blog post would be nice, but it seems to me having something about this
>> > clearly in the manual would be best, assuming it's
On 4/8/19 7:19 AM, Raghavendra Rao J S V wrote:
Thank you very much for your prompt response.
Could you explain the other admin-type operations that are not supported
by PgBouncer?
I would say anything you could not run through psql.
Regards,
Raghavendra Rao.
--
Adrian Klaver
Pavel,
On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule
wrote:
>
> On Mon, 8 Apr 2019 at 7:57, Igal Sapir wrote:
>
>> David,
>>
>> On Sun, Apr 7, 2019 at 8:11 PM David Rowley
>> wrote:
>>
>>> On Mon, 8 Apr 2019 at 14:57, Igal Sapir wrote:
>>> > However, I have now deleted about 50,000 rows
On Mon, 8 Apr 2019 at 17:22, Igal Sapir wrote:
> Pavel,
>
> On Sun, Apr 7, 2019 at 11:22 PM Pavel Stehule
> wrote:
>
>>
>> On Mon, 8 Apr 2019 at 7:57, Igal Sapir wrote:
>>
>>> David,
>>>
>>> On Sun, Apr 7, 2019 at 8:11 PM David Rowley <
>>> david.row...@2ndquadrant.com> wrote:
>>>
On Mon, 8 Apr 2019 19:21:37 +0530
Arup Rakshit wrote:
> Hi,
>
> Thanks for showing different ways to achieve the goal. So what would be
> the optimal way to solve this? I have a composite index on the company_id
> and feature_id columns of the project_features table.
there are even more ways for
Thank you very much for your prompt response.
Could you explain the other admin-type operations that are not supported by
PgBouncer?
Regards,
Raghavendra Rao.
On Mon, 8 Apr 2019 at 19:16, Scot Kreienkamp
wrote:
> Replication and several other admin type operations must connect directly
> to PG.
On 4/7/19 9:53 PM, 김준형 wrote:
Sorry for the late reply, but my server has been working without problems for a while.
> What problem occurs?
> Where is the Windows server?
By "problem" I mean that the Windows server doesn't accept new connections or
non-admin connections; only already-connected admin connections survive.
The Windows server is
Basically anything that is not written as a SQL query should connect directly
to PG. PgBouncer is really only meant for SQL-query-type connections.
From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 10:19 AM
To: Scot Kreienkamp
Cc:
Hi,
Thanks for showing different ways to achieve the goal. So what would be the
optimal way to solve this? I have a composite index on the company_id and
feature_id columns of the project_features table.
I do Ruby on Rails development, where table names are always plural by
convention. The
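As a sketch, the composite index described above might be declared as follows (the index name is hypothetical; UNIQUE is an assumption that each company/feature pair appears at most once, which matches how a join table is normally used):

```sql
-- Composite index supporting lookups by (company_id, feature_id);
-- UNIQUE also enforces one row per pair, if that fits the data model.
CREATE UNIQUE INDEX index_project_features_on_company_and_feature
    ON project_features (company_id, feature_id);
```

With this in place, a probe for "does company X have feature Y" can be answered from the index alone.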
On Mon, 8 Apr 2019 at 15:42, Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:
> Hi All,
>
> We are using PgBouncer (a connection pooling mechanism). PgBouncer uses port
> 5433.
>
> The Postgres database port number is 6433. Via port 5433, PgBouncer is
> connecting to Postgres port 6433
Replication and several other admin type operations must connect directly to
PG. They are not supported through PGBouncer.
From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 9:21 AM
To: pgsql-general@lists.postgresql.org
Subject: Getting error while
Hi All,
We are using PgBouncer (a connection pooling mechanism). PgBouncer uses port
5433.
The Postgres database port number is 6433. Via port 5433, PgBouncer is
connecting to the Postgres port 6433 database.
Now PgBouncer is establishing connections properly, but when I try to
run pg_basebackup
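pg_basebackup speaks the replication protocol, which PgBouncer does not proxy, so it has to be pointed at the real Postgres port rather than the pooler. A sketch (the host, user, and target directory are hypothetical; 6433 is the Postgres port from the message above):

```shell
# Connect pg_basebackup directly to Postgres (port 6433), not PgBouncer (5433).
pg_basebackup -h db.example.com -p 6433 -U replicator \
    -D /var/backups/base -Fp -Xs -P
```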
Resolved. Sorry for not posting the resolution earlier.
It was a good puzzler. It turns out the PostgreSQL server used
network-attached disks, and the updated table had no index on the
columns used to locate the rows, so each update required a sequential
scan of the table over the network; hence the high CPU usage.
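The fix implied by that diagnosis can be sketched as follows (table and column names are hypothetical; note it is the columns in the UPDATE's WHERE clause that need the index, not the columns being assigned):

```sql
-- Let each UPDATE locate its rows via an index scan instead of a full
-- sequential scan over network-attached storage. CONCURRENTLY avoids
-- blocking writes while the index is built.
CREATE INDEX CONCURRENTLY idx_my_table_lookup ON my_table (lookup_col);
```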
On 07.04.2019 07:06, Jess Wren wrote:
However, I can't figure out how I would integrate this into the above
query to filter out duplicate domains from the results. And because this
is the docs for "testing and debugging text search
On Mon, 8 Apr 2019 15:32:36 +0530
Arup Rakshit wrote:
hi,
> I am still having some bugs. I am getting duplicates in the result set.
>
> psql (11.0, server 10.5)
> Type "help" for help.
>
> aruprakshit=# select * from features;
>  id | name
> ----+------
>   1 | f1
>   2 | f2
>   3 | f3
>   4
I am still having some bugs. I am getting duplicates in the result set.
psql (11.0, server 10.5)
Type "help" for help.
aruprakshit=# select * from features;
 id | name
----+------
  1 | f1
  2 | f2
  3 | f3
  4 | f4
(4 rows)
aruprakshit=# select * from company;
 id | name
----+------
  1 | c1
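A common way to get exactly one row per feature (avoiding join-produced duplicates) is an EXISTS probe against the join table. This is a sketch assuming the features and company_features tables implied by the thread, with company 1 as an example:

```sql
-- EXISTS yields exactly one row per feature and is never NULL,
-- so it sidesteps both the duplicate and the NULL problems.
SELECT f.id,
       f.name,
       EXISTS (SELECT 1
               FROM company_features cf
               WHERE cf.feature_id = f.id
                 AND cf.company_id = 1) AS active
FROM features f
ORDER BY f.id;
```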
I knew there would be a more compact way; thanks for showing it. One thing I
still want to handle is making sure the column contains only true/false, but
right now it sometimes shows NULL. How can I fix this?
id | name | active
---+------+-------
 1 | f1   | true
 2 | f2   | true
 3 | f3   | false
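If the query uses a LEFT JOIN, the NULLs come from features with no matching company_features row; wrapping the comparison in COALESCE pins those to false. A sketch, assuming the features / company_features names used in this thread:

```sql
-- COALESCE turns the NULL produced by non-matching LEFT JOIN rows
-- into false, so active is always true or false.
SELECT features.id,
       features.name,
       COALESCE(company_features.company_id = 1, false) AS active
FROM features
LEFT JOIN company_features
       ON company_features.feature_id = features.id
      AND company_features.company_id = 1;
```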
Hey,
you could just use
SELECT
features.id,
features.name,
company_features.company_id = 1 as active
regards,
Szymon
On Mon, 8 Apr 2019 at 09:55, Arup Rakshit wrote:
> I have 2 tables Company and Feature. They are connected via a join table
> called CompanyFeature. I
I have two tables, Company and Feature. They are connected via a join table
called CompanyFeature. I want to build a result set that has id, name, and a
custom boolean column. The boolean column says whether the feature is present
for the company or not.
Company table:
| id |
Kevin Wilkinson wrote:
> On 10.2, we're seeing very high CPU usage when doing an update statement
> on a relatively small table (1GB). One of the updated columns is text,
> about 1K bytes. There are four threads doing similar updates
> concurrently to the same table (but different rows). Each
On Mon, 8 Apr 2019 at 7:57, Igal Sapir wrote:
> David,
>
> On Sun, Apr 7, 2019 at 8:11 PM David Rowley
> wrote:
>
>> On Mon, 8 Apr 2019 at 14:57, Igal Sapir wrote:
>> > However, I have now deleted about 50,000 rows more and the table has
>> only 119,688 rows. The pg_relation_size() still