On 06/10/2017 at 11:32, Albert Cervera i Areny wrote:
2017-09-20 19:25 GMT+02:00 Richard PALO <richard.p...@free.fr>:


    Thought I would mention that I'm seeing roughly a 2.5x speedup using
    pgbouncer over vanilla PostgreSQL socket connections (via tryton.conf)
    where pg and proteus are running on the same iron.


I'm surprised you're seeing this speedup using pgbouncer on the same machine. 
Pgbouncer is just a connection pooler and Tryton already has a connection pool 
of its own, so it would be great if you could investigate further. Some things 
that come to mind:

- Tryton is not handling the pool of connections correctly and is thus creating 
new connections more often than it should
- You used TCP/IP sockets when you worked with PostgreSQL but UNIX sockets now 
that you talk to pgbouncer (so the difference would come from the type of 
socket, not from pgbouncer itself; a quick check is sketched below)

This is not quite true.
Initially I used unix sockets directly:
uri = postgresql://tryton@/run/postgresql/
and now as well, but with pgbouncer underneath:
uri = postgresql://tryton@:6432

pgbouncer.ini uses unix_socket_dir = /run/postgresql
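
For completeness, the relevant pgbouncer.ini section looks something like this 
(a sketch; only unix_socket_dir is from my actual setup, the database entry and 
pool settings are plausible defaults):

[databases]
; hypothetical entry: forward to the local PostgreSQL unix socket
tryton = host=/run/postgresql dbname=tryton

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
unix_socket_dir = /run/postgresql
; session pooling is pgbouncer's default and the conservative mode
pool_mode = session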

- You changed other parameters in Postgres (or Tryton). For example, you should 
usually change the following default postgresql.conf parameters:

   - Increase work_mem -> this is the amount of memory each sort or hash 
operation may use (so it effectively scales with the number of connections); 
one or two hundred megabytes is reasonable
work_mem = 256MB                        # min 64kB

   - Increase shared_buffers -> this depends on database size, but you may want 
1GB, for example (if you've got enough memory, of course)

shared_buffers = 4GB                    # min 128kB
NB: 32GB main memory on my workstation

   - Consider using synchronous_commit = off -> if you use it in production, I 
recommend you first understand its implications (we use it in our installs)

This is indeed interesting, will look into it more deeply.

   - If you run the migration process on a non-production machine, you can use 
fsync = off. This can have a huge impact on performance, but do NOT use it in 
production, EVER. I do recommend it for development environments if you know 
that no database you use is critical. It also has a huge impact when restoring 
databases.

never use these...
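
For reference, here are the postgresql.conf knobs from this thread gathered in 
one place (a sketch; the first two values are from above, the last two are 
shown only to document the settings being discussed):

work_mem = 256MB
shared_buffers = 4GB
synchronous_commit = off        # understand the implications before production
#fsync = off                    # development machines only, NEVER in production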

In any event, once all the gritty conversion details are complete, I can focus 
a bit more on runtime tuning.
I'm also wondering whether psycopg2 usage needs a closer look. I came across 
this 'best practice':

When should I save and re-use a cursor as opposed to creating a new one as 
needed?
Cursors are lightweight objects and creating lots of them should not pose any 
kind of problem. But note that cursors used to fetch result sets will cache the 
data and use memory in proportion to the result set size. Our suggestion is to 
almost always create a new cursor and dispose of old ones as soon as the data 
is not required anymore (call close() on them). The only exception is tight 
loops, where one usually uses the same cursor for a whole bunch of INSERTs or 
UPDATEs.
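
A minimal sketch of the two patterns that advice implies (psycopg2; the 
connection parameters and table name are just for illustration):

import psycopg2

# connection parameters are illustrative, matching the unix-socket setup above
conn = psycopg2.connect(host="/run/postgresql", dbname="tryton", user="tryton")

# usual case: a short-lived cursor per result set, closed as soon as the
# fetched data (cached client-side) is no longer needed
cur = conn.cursor()
cur.execute("SELECT id, name FROM account_journal")  # table name illustrative
journals = cur.fetchall()
cur.close()

# exception: a tight loop reusing one cursor for a batch of INSERTs
cur = conn.cursor()
for code, name in [("VTE", "Sales"), ("ACH", "Purchases")]:
    cur.execute(
        "INSERT INTO account_journal (code, name) VALUES (%s, %s)",
        (code, name))
cur.close()
conn.commit()
conn.close()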

So perhaps I'll start there first.


    Pulling complete moves/lines by period by fiscal year I'm averaging
    ~100 seconds per period (roughly 30 minutes per fiscal year, with a
    rough average of 21-22K lines/year)


I don't have numbers to compare against, so it's just intuition, but that does 
not sound especially fast.

At this stage, I don't believe these imports are comparable to 'vanilla' 
openerp2tryton any longer, as I'm doing a 'deep' copy now, including journals, 
periods, reconciliations and all...
(in France, my aim is to be able to use Tryton exclusively for previous fiscal 
year reports, including 'FEC'... I hope to heave OpenERP as far away as 
possible once fully migrated :P)
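
For context, the per-period fetch is roughly of this shape (a sketch, not my 
actual script; the model names are standard Tryton ones, the database name and 
config path are hypothetical):

from proteus import Model, config

# hypothetical database name and trytond config path
config.set_trytond(database='tryton', config_file='/etc/tryton/trytond.conf')

Fiscalyear = Model.get('account.fiscalyear')
Period = Model.get('account.period')
Move = Model.get('account.move')

for fy in Fiscalyear.find([]):
    for period in Period.find([('fiscalyear', '=', fy.id)]):
        # this per-period fetch is where the ~100 seconds per period go
        for move in Move.find([('period', '=', period.id)]):
            for line in move.lines:
                pass  # deep-copy/conversion work happens here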

cheers,

--

Richard PALO
