On Tue, Nov 28, 2006 at 07:34:56PM +0200, Andrus wrote:

> This is a good idea, but it forces the use of the Postgres protocol for downloading.
Why, of course.

> This protocol has some timeouts which are too small for large file downloads.
For "sane" values of "large" I doubt this is true. A field
in PG can store about 1 GB of data (says the FAQ) and the
protocol better be able to hand out as much.

It may be that you need to increase statement_timeout -
which can be done on a per-session basis.
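
For instance, to lift the timeout for just the download
session (the value is in milliseconds, 0 means no limit):

    SET statement_timeout = 0;        -- no limit for this session
    -- or, say, allow up to one hour:
    SET statement_timeout = 3600000;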

> The Postgres protocol also adds a lot of overhead to the downloadable data.
Yes. But you wanted to use port 5432 on a machine already
running PG.

Not sure, but using a binary cursor might improve things.
Using a client library that speaks the v3 (?) protocol should
lower the overhead significantly, too.
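
Roughly like this (table and column names are made up, and a
binary cursor hands the data back in binary format, which your
client library has to be able to cope with):

    BEGIN;
    DECLARE backup_cur BINARY CURSOR FOR
        SELECT data FROM backups WHERE id = 1;
    FETCH ALL FROM backup_cur;
    COMMIT;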

> It also requires that the whole downloadable file fit into memory.
My PG knowledge isn't up to this task but I have a sneaking
suspicion this isn't really enforced by PG itself.

> ODBC
> I tried this, but was forced to store big files in 1 MB chunks in
> bytea fields and create the file from the downloaded blocks.
Other client libraries may do better here.

> Or should I really write code which divides the backup file into
> 1 MB chunks and stores them in a bytea field?
No. I would not store them in the database at all. I would
use an untrusted-language function to read the file from disk
and return a (virtual) bytea field (one which doesn't exist in
the database).
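
A minimal sketch of what I mean (names and the path are made
up; this assumes the untrusted plpython3u language is installed,
which maps bytea to Python bytes - any untrusted PL will do):

    CREATE OR REPLACE FUNCTION read_file_bytea(path text)
    RETURNS bytea AS $$
        # runs as the server user, so the file must be readable by it
        with open(path, 'rb') as f:
            return f.read()
    $$ LANGUAGE plpython3u;

    -- the client then simply selects the "virtual" bytea field:
    SELECT read_file_bytea('/var/backups/pg_dump.custom');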

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
