Re: pgbouncer best practices

2023-07-09 Thread Rita
Thanks for the tips. So far, I am very happy with pgbouncer. I just increased
the number of db connections. Great piece of software!

Is there a way to see how many queued connections there are? Looking at the
stats I can't seem to figure that out.
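(Editor's note: pgbouncer's admin console reports queued clients in the
cl_waiting column of SHOW POOLS, e.g. via psql -p 6432 pgbouncer -c "SHOW
POOLS;" where the port is an assumption about the local setup. A toy Python
sketch that totals cl_waiting from that output; the column layout is
abbreviated and the sample numbers are invented:)

```python
# Toy parser for "SHOW POOLS" output from the pgbouncer admin console.
# cl_waiting counts client connections queued for a free server slot.

SAMPLE = """\
database | user  | cl_active | cl_waiting | sv_active | sv_idle
mydb     | app   | 28        | 113        | 28        | 2
"""

def queued_clients(show_pools_text):
    """Return total cl_waiting summed across all pools."""
    lines = [l for l in show_pools_text.strip().splitlines() if "|" in l]
    header = [c.strip() for c in lines[0].split("|")]
    idx = header.index("cl_waiting")
    total = 0
    for row in lines[1:]:
        cols = [c.strip() for c in row.split("|")]
        total += int(cols[idx])
    return total

print(queued_clients(SAMPLE))  # → 113
```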

On Sat, Jul 8, 2023 at 9:46 AM Ben Chobot  wrote:

> Rita wrote on 7/8/23 4:27 AM:
>
> I am not sure if it allows transaction pooling.
>
>
> Well, take the time to figure it out. With transaction pooling enabled,
> you can get away with a much lower number of server connections. For
> example, our application regularly has thousands of clients connected to
> pgbouncer and is quite happy with ~30 server connections (roughly the core
> count of the db server). If we couldn't use transaction pooling we'd be
> fighting with how to keep those thousands of connections from wasting a
> bunch of resources on our db.
>
> https://www.pgbouncer.org/features.html
>
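(Editor's note: a sketch of the pgbouncer.ini knobs behind Ben's numbers;
the values are illustrative, not recommendations:)

```ini
[pgbouncer]
pool_mode = transaction   ; recycle server slots at transaction boundaries
max_client_conn = 2000    ; clients pgbouncer itself will accept
default_pool_size = 30    ; server connections per database/user pair,
                          ; e.g. roughly the db server's core count
```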


-- 
--- Get your facts first, then you can distort them as you please.--


Re: pgbouncer best practices

2023-07-08 Thread Rita
I am not sure if it allows transaction pooling.

On Fri, Jul 7, 2023 at 12:37 PM Ben Chobot  wrote:

> Rita wrote on 7/7/23 9:23 AM:
>
> I have an application that does many db calls from a server farm. I've
> increased max_connections on postgresql to 1000 and tuned the server
> accordingly. However, I still sometimes get "can't connect to postgresql"
> errors. I installed pgbouncer on a few servers in the farm, pointed the
> traffic at pgbouncer, and things improved dramatically. My question is: is
> there a ratio of max connections to pool size I should use in my pgbouncer
> config?
>
>
> Does your application allow for transaction pooling? pgBouncer can unlock
> a lot more efficiency if you can cycle server slots on transaction
> boundaries.
>


-- 
--- Get your facts first, then you can distort them as you please.--


pgbouncer best practices

2023-07-07 Thread Rita
I have an application that does many db calls from a server farm. I've
increased max_connections on postgresql to 1000 and tuned the server
accordingly. However, I still sometimes get "can't connect to postgresql"
errors. I installed pgbouncer on a few servers in the farm, pointed the
traffic at pgbouncer, and things improved dramatically. My question is: is
there a ratio of max connections to pool size I should use in my pgbouncer
config?

-- 
--- Get your facts first, then you can distort them as you please.--


Re: pgbouncer

2023-06-28 Thread Rita
I have a dedicated database server and 3 additional physical application
servers which hit the database. The application servers periodically cannot
reach the database server (the "is the server listening on port 5432?"
message). Looking at some metrics, it seems I am pulling a lot of tuples
(300k or so) during that time.


On Wed, Jun 28, 2023 at 8:23 AM Sebastiaan Mannem  wrote:

> It greatly depends on what you want to achieve. Both have pro’s and con’s.
> What do you need pgbouncer for?
>
> Verstuurd vanaf mijn iPhone
>
> > Op 28 jun. 2023 om 13:19 heeft Rita  het volgende
> geschreven:
> >
> > 
> >
> > Seems like I may need to deploy pgbouncer for my webapp. Should I deploy
> it on the db server or on the webserver?
> > --
> > --- Get your facts first, then you can distort them as you please.--
>


-- 
--- Get your facts first, then you can distort them as you please.--


pgbouncer

2023-06-28 Thread Rita
Seems like I may need to deploy pgbouncer for my webapp. Should I deploy it
on the db server or on the webserver?
-- 
--- Get your facts first, then you can distort them as you please.--


Re: recovery.conf and archive files

2022-10-14 Thread Rita
Is the archive command even needed? I can see my standby is fully synced.

On Fri, Oct 14, 2022 at 5:27 AM Guillaume Lelarge 
wrote:

> Le jeu. 13 oct. 2022 à 12:42, Rita  a écrit :
>
>> The primary's postgresql.conf looks like this
>>
>> listen_addresses='*'
>> wal_level=replica
>> synchronous_commit=local
>> archive_mode = on
>> archive_command = 'cp %p /var/lib/pgsql/11/data/archive/%f'
>> max_wal_senders = 10
>> wal_keep_segments=10
>> synchronous_standby_names='standby0'
>> wal_log_hints=on
>>
>>
> The archive command stores the WAL in a local directory. That's what I
> said earlier.
>
>
>>
>> On Sun, Oct 9, 2022 at 8:45 AM Guillaume Lelarge 
>> wrote:
>>
>>> Hi,
>>>
>>> Le dim. 9 oct. 2022 à 13:54, Rita  a écrit :
>>>
>>>> I have primary and standby replication setup.
>>>>
>>>> On my primary the archive directory is rather large (30GB) and growing.
>>>> On my standby I have recovery.conf which has
>>>> archive_cleanup_command = 'pg_archivecleanup -d
>>>> /var/lib/pgsql/11/data/archive %r'
>>>>
>>>> I was under the impression this line would remove data from my primary
>>>> AND standby. Is that not the case?
>>>>
>>>>
>>> pg_archivecleanup will clean up the *local* directory. It won't clean up
>>> the archive directory if it's stored on the primary.
>>>
>>> If I misunderstood your issue, it would be great to send us the
>>> postgresql.conf file from your primary.
>>>
>>>
>>> --
>>> Guillaume.
>>>
>>
>>
>> --
>> --- Get your facts first, then you can distort them as you please.--
>>
>
>
> --
> Guillaume.
>


-- 
--- Get your facts first, then you can distort them as you please.--


Re: recovery.conf and archive files

2022-10-13 Thread Rita
The primary's postgresql.conf looks like this

listen_addresses='*'
wal_level=replica
synchronous_commit=local
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/11/data/archive/%f'
max_wal_senders = 10
wal_keep_segments=10
synchronous_standby_names='standby0'
wal_log_hints=on


On Sun, Oct 9, 2022 at 8:45 AM Guillaume Lelarge 
wrote:

> Hi,
>
> Le dim. 9 oct. 2022 à 13:54, Rita  a écrit :
>
>> I have primary and standby replication setup.
>>
>> On my primary the archive directory is rather large (30GB) and growing.
>> On my standby I have recovery.conf which has
>> archive_cleanup_command = 'pg_archivecleanup -d
>> /var/lib/pgsql/11/data/archive %r'
>>
>> I was under the impression this line would remove data from my primary
>> AND standby. Is that not the case?
>>
>>
> pg_archivecleanup will clean up the *local* directory. It won't clean up
> the archive directory if it's stored on the primary.
>
> If I misunderstood your issue, it would be great to send us the
> postgresql.conf file from your primary.
>
>
> --
> Guillaume.
>
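(Editor's note: to illustrate Guillaume's point, pg_archivecleanup acts on
whatever directory it is given on the machine where it runs, removing WAL
files that sort before the %r restart segment. A toy Python model of that
selection rule, not the real tool:)

```python
# Toy model of pg_archivecleanup's selection rule: WAL segment file names
# are fixed-width hex, so lexicographic order matches WAL order. Files
# strictly older than the oldest-still-needed segment (%r) get removed.

def to_remove(archive_files, oldest_kept):
    """Return the archive files that would be deleted, oldest first."""
    return sorted(f for f in archive_files if f < oldest_kept)

archive = [
    "000000010000000000000041",
    "000000010000000000000042",
    "000000010000000000000043",
]
print(to_remove(archive, "000000010000000000000043"))
# → ['000000010000000000000041', '000000010000000000000042']
```

Since the standby's archive_cleanup_command runs on the standby, it can only
prune a directory the standby can reach; it never touches a directory that
lives only on the primary.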


-- 
--- Get your facts first, then you can distort them as you please.--


recovery.conf and archive files

2022-10-09 Thread Rita
I have primary and standby replication setup.

On my primary the archive directory is rather large (30GB) and growing. On
my standby I have recovery.conf which has
archive_cleanup_command = 'pg_archivecleanup -d
/var/lib/pgsql/11/data/archive %r'

I was under the impression this line would remove data from my primary AND
standby. Is that not the case?

-- 
--- Get your facts first, then you can distort them as you please.--


Re: Pause streaming replication

2021-11-10 Thread Rita
Yes, I have read the manual and seen this. It pauses WAL replay on the
standby (select pg_is_wal_replay_paused() returns true). But on the primary,
when I look at pg_stat_replication, it still says 'streaming' in the state
column. My question was: how do I get it from 'streaming' to anything else?
(https://www.postgresql.org/docs/11/monitoring-stats.html#PG-STAT-REPLICATION-VIEW)

I want to trigger an alert when 'streaming' isn't there. Or is there a
better way to do it?
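(Editor's note: rather than forcing the state away from 'streaming', an
alert check that counts healthy walsenders may be simpler; a sketch,
assuming exactly one standby is expected:)

```sql
-- Alert when fewer streaming standbys than expected (here: 1).
SELECT count(*) < 1 AS alert
FROM pg_stat_replication
WHERE state = 'streaming';
```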

On Wed, Nov 10, 2021 at 7:24 PM Ben Chobot  wrote:

> Rita wrote on 11/10/21 1:25 PM:
> > Hello.
> >
> > I am testing alerting on my primary and standby setup. I have async
> > replication working but I would like to temporarily pause it so the
> > value of 'state' isn't streaming. (select * from pg_stat_replication).
> >
> > How can I do that?
>
> By reading the fine manual:
>
> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL-TABLE
>
> The function you are looking for is pg_wal_replay_pause().
>


-- 
--- Get your facts first, then you can distort them as you please.--


Pause streaming replication

2021-11-10 Thread Rita
Hello.

I am testing alerting on my primary and standby setup. I have async
replication working but I would like to temporarily pause it so the value
of 'state' isn't streaming. (select * from pg_stat_replication).

How can I do that?


-- 
--- Get your facts first, then you can distort them as you please.--


horizontal scaling

2020-09-25 Thread Rita
There was a lot of talk about horizontal scaling a few years ago. I was
wondering if there have been any new efforts on it.

-- 
--- Get your facts first, then you can distort them as you please.--


python async with psycopg2

2020-07-17 Thread Rita
I am trying to listen to multiple channels through the pg_notify mechanism.

select pg_notify('mychan0','foo');
select pg_notify('mychan1','bar');

In python I have something like this

import select, time
import psycopg2
import psycopg2.extensions
from psycopg2.extras import wait_select

DSN = "dbname=mydb user=user host=localhost"

def main():
    conn = psycopg2.connect(DSN, async_=True)
    wait_select(conn)
    curs = conn.cursor()
    curs.execute("LISTEN mychan0;")
    #curs.execute("LISTEN mychan1;")  # fails!
    wait_select(conn)
    while True:
        wait_select(conn)
        while conn.notifies:
            print("Notify: %s" % conn.notifies.pop().payload)
        time.sleep(1)
    conn.close()

I keep getting

psycopg2.ProgrammingError: execute cannot be used while an asynchronous
query is underway

I prefer to stick with psycopg2 library instead of a wrapper.
What am I doing wrong?
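(Editor's note: in psycopg2's async mode, each command must complete before
the next execute is issued, which is why the second LISTEN raises
ProgrammingError while the first is still in flight. A sketch of the fix,
untested here since it needs a live server; DSN taken from the post above:)

```python
import psycopg2
from psycopg2.extras import wait_select

conn = psycopg2.connect("dbname=mydb user=user host=localhost", async_=True)
wait_select(conn)
curs = conn.cursor()
curs.execute("LISTEN mychan0;")
wait_select(conn)                # let the first LISTEN finish...
curs.execute("LISTEN mychan1;")  # ...before issuing the second
wait_select(conn)
```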

-- 
--- Get your facts first, then you can distort them as you please.--


Re: Listen/Notify feedback

2020-07-12 Thread Rita
Good to know about the potential performance problems. I don't plan to have
more than 5 hosts. Also, good to know about MQTT.

On Sun, Jul 12, 2020 at 8:52 AM Andrew Smith  wrote:

> On Sun, 12 Jul 2020 at 21:39, Rita  wrote:
>
>> Thats good to know. Are there some standard patterns or best practices I
>> should follow when using messaging and with listen/notify?
>>
>> On Sat, Jul 11, 2020 at 1:44 PM Brian Dunavant 
>> wrote:
>>
>>> One aspect is if there is no one listening when a notify happens, the
>>> message is lost (e.g. no durability).   If this is important to you, it can
>>> be addressed by writing the messages to a table as well when you NOTIFY,
>>> and the listener deletes messages after they are processed.  On connection
>>> the listener can query the table to catch up on any missed messages, or
>>> messages that were mid-process during a crash.  This is trickier with more
>>> than one listener.   This isn't a whole lot more efficient than just using
>>> the table alone, but it saves you from having to poll so better response
>>> times.
>>>
>>> On Sat, Jul 11, 2020 at 8:58 AM Rita  wrote:
>>>
>>>> I am investigating various pub/sub tools such as ActiveMQ, Rabbit,
>>>> Redis, etc. I came across PostgreSQL Listen/Notify and was easily able
>>>> to write code to listen to messages. For the people who have been using
>>>> this for a while: what are its downsides, things to consider when
>>>> writing good code that uses pub/sub, how do you deal with large
>>>> messages, can I have subscribers listen to replica nodes?
>>>>
>>>> Thanks
>>>> --
>>>> --- Get your facts first, then you can distort them as you please.--
>>>>
>>>
> A couple of years ago I started looking into listen/notify in PG10 and
> found that the throughput decreased quite a bit as I added more and more
> listeners. Given the number of apps I needed to have listening and the
> number of messages that I expected to be consuming, I ended up writing a
> single listener app which then republished the messages via MQTT. Not sure
> if the performance has improved in subsequent versions (or whether this
> will affect you at all) but it's something to keep in mind.
>


-- 
--- Get your facts first, then you can distort them as you please.--


Re: Listen/Notify feedback

2020-07-12 Thread Rita
Thats good to know. Are there some standard patterns or best practices I
should follow when using messaging and with listen/notify?

On Sat, Jul 11, 2020 at 1:44 PM Brian Dunavant  wrote:

> One aspect is if there is no one listening when a notify happens, the
> message is lost (e.g. no durability).   If this is important to you, it can
> be addressed by writing the messages to a table as well when you NOTIFY,
> and the listener deletes messages after they are processed.  On connection
> the listener can query the table to catch up on any missed messages, or
> messages that were mid-process during a crash.  This is trickier with more
> than one listener.   This isn't a whole lot more efficient than just using
> the table alone, but it saves you from having to poll so better response
> times.
>
> On Sat, Jul 11, 2020 at 8:58 AM Rita  wrote:
>
>> I am investigating various pub/sub tools such as ActiveMQ, Rabbit, Redis,
>> etc. I came across PostgreSQL Listen/Notify and was easily able to write
>> code to listen to messages. For the people who have been using this for a
>> while: what are its downsides, things to consider when writing good code
>> that uses pub/sub, how do you deal with large messages, can I have
>> subscribers listen to replica nodes?
>>
>> Thanks
>> --
>> --- Get your facts first, then you can distort them as you please.--
>>
>
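(Editor's note: the table-backed pattern Brian describes can be sketched in
SQL; the table, channel, and payload names are made up:)

```sql
-- Durable companion table: producers insert a row, then NOTIFY.
CREATE TABLE pending_messages (
    id      bigserial PRIMARY KEY,
    payload text NOT NULL,
    sent_at timestamptz NOT NULL DEFAULT now()
);

-- Producer: one transaction, so the row and notification stay in sync.
BEGIN;
INSERT INTO pending_messages (payload) VALUES ('bar');
NOTIFY mychan, 'bar';
COMMIT;

-- Listener on (re)connect: catch up on anything missed while away,
-- then delete each row once it has been processed.
SELECT id, payload FROM pending_messages ORDER BY id;
DELETE FROM pending_messages WHERE id = 1;  -- after processing that row
```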

-- 
--- Get your facts first, then you can distort them as you please.--


Listen/Notify feedback

2020-07-11 Thread Rita
I am investigating various pub/sub tools such as ActiveMQ, Rabbit, Redis,
etc. I came across PostgreSQL Listen/Notify and was easily able to write
code to listen to messages. For the people who have been using this for a
while: what are its downsides, things to consider when writing good code
that uses pub/sub, how do you deal with large messages, can I have
subscribers listen to replica nodes?

Thanks
-- 
--- Get your facts first, then you can distort them as you please.--


Re: what to do after a failover

2020-01-09 Thread Rita
Thanks for the response.
I am using PostgreSQL 11.
I want something simple, and I have a strong preference for stock tools.
After the promotion, once the original master comes back online, I was
thinking of doing a pg_basebackup to re-sync it. Any thoughts on that? I had
a very hard time with pg_rewind and didn't like its complexity.
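(Editor's note: the pg_basebackup route is the "recreate a standby from a
base backup" option Michael mentions below; roughly, with paths and
connection details purely illustrative for a v11 cluster:)

```shell
# On the old primary, after the standby has been promoted:
pg_ctl -D /var/lib/pgsql/11/data stop                 # make sure it is down
mv /var/lib/pgsql/11/data /var/lib/pgsql/11/data.old  # keep the old datadir
pg_basebackup -h new-primary -U replication \
    -D /var/lib/pgsql/11/data -P -R
# -R writes recovery.conf (on v11) so the node restarts as a standby
pg_ctl -D /var/lib/pgsql/11/data start
```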



On Wed, Jan 8, 2020 at 11:31 PM Michael Paquier  wrote:

> On Wed, Jan 08, 2020 at 11:06:28PM -0500, Rita wrote:
> > I run a master and standby setup with Postgresql 11. The systems are
> > identical from a hardware and software setup.  If the master goes down I
> > can do a pg_ctl promote on the standby and point my applications to use
> the
> > standby (new master).
> >
> > Once the original master is online, when is an appropriate time to fail
> > back over? And are there any other things besides promote after the
> > failover is done?
>
> Make sure that you still have an HA configuration able to handle
> multiple degrees of failures with always standbys available after a
> promotion.
>
> The options available to rebuild your HA configuration after a
> failover depend on the version of PostgreSQL you are using.  After a
> failover the most simple solution would be to always recreate a new
> standby from a base backup taken from the freshly-promoted primary,
> though it can be costly depending on your instance.  You could also
> use pg_rewind (available in core since 9.5) to recycle the previous
> primary and reuse it as a standby of the newly promoted cluster.  Note
> that there are community-based solutions for such things, like
> pg_auto_failover or pacemaker-based stuff just to name two.  These
> rely on more complex architectures, where a third node is present to
> monitor the others (any sane HA infra ought to do at least that to be
> honest).
> --
> Michael
>


-- 
--- Get your facts first, then you can distort them as you please.--


what to do after a failover

2020-01-08 Thread Rita
I run a master and standby setup with PostgreSQL 11. The systems are
identical in hardware and software. If the master goes down I can do a
pg_ctl promote on the standby and point my applications at the standby (new
master).

Once the original master is online, when is an appropriate time to fail
back over? And are there any other things besides promote after the
failover is done?


-- 
--- Get your facts first, then you can distort them as you please.--