On Mon, Sep 20, 2010 at 10:06 AM, Willy-Bas Loos <willy...@gmail.com> wrote:
>>>> Rows are sent back in their entirety, so the PG instance would need
>>>> enough memory to work with that row.  When you're running a 32-bit
>>>> version of PG, values beyond ~100MB are a bit touch and go as to
>>>> whether they will work.
>>
>>> ?? I hear that it's possible to store up to a GiB as a value for a "text"
>>> field.
>>> How would I possibly retrieve that data from the database?
>>
>> That's the *theoretical* limit.  In a 32-bit build, the practical limit
>> is going to be quite a bit less.
> Sheesh..
>
> - why?

A database is not a filesystem, and a filesystem is not a database.
Postgres, like many databases, is designed to organize small pieces of
data for flexible retrieval.  Filesystems are designed to store pieces
of any size in a relatively inflexible way.  You can adapt either
technology to purposes it was not really designed for, but there are
going to be practical constraints.  I expect these differences will
become especially glaring when the coming revolution in storage
arrives and most systems become CPU (or even network) bound again.

There are at least three specific features in Postgres you can look at
to make dealing with large datums easier: TOAST (which is used
implicitly), large objects, and the automatic column data compression
settings.  You can also search the archives for a wealth of information
on the topic.  If I ever wrote a book on things not to do as a DBA,
though, storing 1GB+ XML files in a database would be in the first
chapter :-).  How about breaking the file down?  A rough sketch of the
options is below.
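(Table, column, and path names here are made up for illustration, and the
OID in the export call is a placeholder; this is just a sketch of the three
approaches, not a recipe.)

    -- TOAST / compression: text columns default to EXTENDED storage
    -- (compress, then move out of line); EXTERNAL keeps the value out
    -- of line but skips compression, which can speed up substring
    -- access on very large values.
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

    -- Large objects: stored in pg_largeobject and read/written in
    -- chunks, so the whole value never has to travel in a single row.
    SELECT lo_import('/path/to/big.xml');          -- server-side, returns an OID
    SELECT lo_export(12345, '/path/to/out.xml');   -- write a stored object back out

    -- Breaking the file down: store ordered chunks instead of one huge datum.
    CREATE TABLE doc_chunks (
        doc_id int,
        seq    int,
        part   text,
        PRIMARY KEY (doc_id, seq)
    );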

> - how about 64 bit?

The various 32-bit limits in Postgres come from architecture choices
and the desire to give consistent behavior across all platforms, not
necessarily from hardware constraints.

merlin
