Re: [PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
On 08-10-15 at 15:10, Graeme B. Bell wrote:

http://initd.org/psycopg/docs/usage.html#large-objects

"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions."

See *

I was under the impression they meant that the l…

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
>> http://initd.org/psycopg/docs/usage.html#large-objects
>>
>> "Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions."
>>
>> See *
>
> I was under the impression they meant that the lobject was using lo…

Re: [PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
On 08-10-15 at 14:10, Graeme B. Bell wrote:

On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:

1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there is anything there

Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *e…

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
> On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:
>
>> 1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there is anything there

Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *efficient* import/export with file syste…
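For readers unfamiliar with the two code paths being contrasted here: a psycopg2 lobject can either be fed chunk-by-chunk from Python (each write() is a libpq round-trip) or created directly from a file, in which case libpq's lo_import() does the transfer. A stand-in sketch of the chunked path, using in-memory streams rather than a real lobject (function name and the 64 kB chunk size are illustrative, not from the thread):

```python
import io

def copy_chunked(src, dst, chunk_size=64 * 1024):
    """Copy src to dst in fixed-size chunks, as a Python-level
    lobject.write() loop would; returns total bytes copied.
    With a real lobject, every write() call is a client-server
    round-trip - one plausible cause of the 7-8 MB/s figure."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"x" * 200_000)
dst = io.BytesIO()
copied = copy_chunked(src, dst)
assert copied == 200_000 and dst.getvalue() == src.getvalue()
```

Passing a filename when constructing the lobject sidesteps this loop entirely, which is why the docs single out lo_import()/lo_export() as the efficient path.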

Re: [PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
On 08-10-15 at 13:37, Graeme B. Bell wrote:

Like this?

gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
zfs compressed (iozone -s 4 -a /datapool/data) = 676345

If you can get the complete tables (as in the imag…

Re: [PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
On 08-10-15 at 13:13, Graeme B. Bell wrote:

1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there is anything there

Can you include the surrounding code please (e.g. setting up the db connection) so we can see what's happening, any sync/commit type stuff afterwards.

con…

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
> Like this?
>
> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
> zfs compressed (iozone -s 4 -a /datapool/data) = 676345

If you can get the complete tables (as in the images on the blog post) with random…

Re: [PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
On 08-10-15 at 13:21, Graeme B. Bell wrote:

First the database was on a partition where compression was enabled; I changed it to an uncompressed one to see if it makes a difference, thinking maybe the cpu couldn't handle the load. It made little difference in my case.

My regular gmirror par…

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
>> First the database was on a partition where compression was enabled, I changed it to an uncompressed one to see if it makes a difference, thinking maybe the cpu couldn't handle the load.
>
> It made little difference in my case.
>
> My regular gmirror partition seems faster:
> dd bs=8k co…
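The dd test being quoted writes fixed 8 kB blocks to measure raw sequential write speed. The same measurement can be approximated from Python; the helper below writes to any file-like object (the function name and block count are illustrative, and a real test should target an actual file on the partition in question):

```python
import io
import time

def write_blocks(f, block_size=8192, count=1024):
    """Write `count` blocks of `block_size` zero bytes to f,
    a rough analogue of `dd bs=8k count=N`.
    Returns (bytes_written, elapsed_seconds)."""
    block = b"\0" * block_size
    start = time.perf_counter()
    for _ in range(count):
        f.write(block)
    elapsed = time.perf_counter() - start
    return block_size * count, elapsed

buf = io.BytesIO()
nbytes, secs = write_blocks(buf, count=128)
# On a real file, throughput in MB/s is nbytes / secs / 1e6
# (remember to f.flush() and os.fsync() before trusting the number).
```

Comparing that figure on the gmirror partition versus the zfs one isolates the filesystem from everything psycopg-related.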

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
> On 08 Oct 2015, at 11:17, Bram Van Steenlandt wrote:
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp I get 37MB/sec.
> My script achieves something like 7 or 8MB/sec on large (+100MB) files.

This may help - a great blog article…

Re: [PERFORM] large object write performance

2015-10-08 Thread Graeme B. Bell
Seems a bit slow.

1. Can you share the script (the portion that does the file transfer) with the list? Maybe you're doing something unusual there by mistake. Similarly, the settings you're using for scp.

2. What's the network like? For example, what if the underlying network is only capable of 10M…
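Point 2 is worth quantifying: a link speed quoted in megabits per second divides by 8 to give the byte-rate ceiling, before any protocol overhead. A trivial converter (pure arithmetic, nothing database-specific):

```python
def mbit_to_mbyte_per_s(mbit):
    """Link speed in Mbit/s -> theoretical maximum MB/s (no overhead)."""
    return mbit / 8.0

# A 100 Mbit/s link tops out at 12.5 MB/s before overhead, so the
# 37 MB/s scp figure quoted in the thread already implies gigabit
# or better - the network alone can't explain 7-8 MB/s.
assert mbit_to_mbyte_per_s(100) == 12.5
```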

[PERFORM] large object write performance

2015-10-08 Thread Bram Van Steenlandt
Hi, I use postgresql often but I'm not very familiar with how it works internally. I've made a small script to back up files from different computers to a postgresql database - sort of a versioning networked backup system. It works with large objects (oid in table, linked to large object), which…
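The "oid in table, linked to large object" layout described here can be sketched as follows. The table and column names are hypothetical (the original script isn't shown in full), and the SQL is held in strings so the shape is clear without a live database:

```python
# Hypothetical schema for a versioned large-object backup store:
# one row per stored file version, pointing at a large object OID
# obtained from lo_creat()/lo_import().
SCHEMA = """
CREATE TABLE backup_files (
    id        serial PRIMARY KEY,
    path      text NOT NULL,
    version   integer NOT NULL,
    stored_at timestamptz NOT NULL DEFAULT now(),
    data_oid  oid NOT NULL,          -- OID of the linked large object
    UNIQUE (path, version)
);
"""

# Parameterized insert recording one new version of a file.
INSERT_VERSION = (
    "INSERT INTO backup_files (path, version, data_oid) "
    "VALUES (%s, %s, %s)"
)

assert "data_oid" in SCHEMA and INSERT_VERSION.count("%s") == 3
```

One caveat with this layout: deleting a row does not delete the large object itself, so a real implementation also needs lo_unlink() (or a trigger) to avoid orphaned objects.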