On Apr 29, 2010, at 10:45 AM, Justin Graf wrote:
> Many people encode the binary data in Base64 and store it as a text
> data type. Then you never have to deal with escaping the bytea data
> type, which I have found can be a pain.
Damn. Wish I'd thought of that ;-)
--
Scott Ribe
scott_r...@elevated-dev
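A minimal sketch of that Base64-in-text approach (table and column names
are invented for illustration, not from the thread):

    -- Store binary payloads Base64-encoded in an ordinary text column.
    CREATE TABLE docs (
        id      serial PRIMARY KEY,
        name    text NOT NULL,
        payload text NOT NULL   -- Base64-encoded file contents
    );

    -- Encode on the way in ...
    INSERT INTO docs (name, payload)
    VALUES ('hello.txt', encode('hello world'::bytea, 'base64'));

    -- ... and decode on the way out.
    SELECT name, decode(payload, 'base64') FROM docs;

The tradeoff: Base64 inflates the data by roughly a third, in exchange
for never having to escape bytea on the wire.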
On 4/29/2010 3:18 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>
> However, that toast limit is per-table, whereas the pg_largeobject limit
> is per-database. So for example if you have a partitioned table then
> the toast limit only applies per partition. With large objects you'd
> fall over at 4 billion large objects for the whole database.
Alvaro Herrera writes:
> Each toasted object also requires an OID, so you cannot have more than 4
> billion toasted attributes in a table.
> I've never seen this to be a problem in real life, but if you're talking
> about having that many large objects, then it will be a problem with
> toast too.
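For anyone curious where that per-table limit lives: each table's toasted
values go into a TOAST relation of its own, and every toasted value
consumes one OID there as its chunk_id. A quick way to look (the table
name 'docs' and the pg_toast relation name are hypothetical):

    -- Find the TOAST table backing a given table.
    SELECT reltoastrelid::regclass
    FROM pg_class
    WHERE relname = 'docs';

    -- Count distinct toasted values in it (substitute the relation
    -- returned above; each chunk_id is one OID consumed).
    SELECT count(DISTINCT chunk_id) FROM pg_toast.pg_toast_16384;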
Justin Graf wrote:
> On 4/29/2010 12:07 PM, David Wall wrote:
> >
> >
> > Big downside for the DB is that all large objects appear to be stored
> > together in pg_catalog.pg_largeobject, which seems axiomatically
> > troubling that you know you have lots of big data, so you then store
> > them together, and then worry about running out of OIDs.
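To see that consolidation directly, the catalog can be queried like any
table; a rough sketch (large objects are stored in 2 kB pages by default,
so the sizes are approximate):

    -- Every large object in the database lives in this one catalog.
    SELECT loid, count(*) AS pages, count(*) * 2048 AS approx_bytes
    FROM pg_catalog.pg_largeobject
    GROUP BY loid
    ORDER BY approx_bytes DESC;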
On Thu, Apr 29, 2010 at 1:51 PM, David Wall wrote:
> I missed the part that BYTEA was being used since it's generally not a good
> way for storing large binary data because you are right that BYTEA requires
> escaping across the wire (client to backend) both directions, which for true
> binary data
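For what it's worth, PostgreSQL 9.0 adds a hex format for bytea that
makes the escaping far less painful; a small sketch (the table 'files'
is made up):

    CREATE TABLE files (data bytea);

    -- Hex input: '\x' followed by two hex digits per byte (9.0 and later).
    INSERT INTO files (data) VALUES (E'\\x48656c6c6f'::bytea);  -- "Hello"

    -- Ask the server to emit bytea as hex instead of escaped octal.
    SET bytea_output = 'hex';
    SELECT data FROM files;

Client libraries that pass parameters in binary mode sidestep the
escaping entirely, regardless of server version.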
On 4/29/2010 1:51 PM, David Wall wrote:
>
>> Put it another way: bytea values are not stored in the pg_largeobject
>> catalog.
>
> I missed the part that BYTEA was being used since it's generally not a
> good way for storing large binary data because you are right that
> BYTEA requires escaping
Huh??? Isn't that the point of using bytea or text datatypes?
I could have sworn bytea does not use the large object interface, it uses
TOAST. Or have I gone insane?
You're not insane :)
Put it another way: bytea values are not stored in the pg_largeobject
catalog.
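Easy to verify, for what it's worth; a sketch (table name invented):

    CREATE TABLE blobs (id serial PRIMARY KEY, data bytea);

    -- Large bytea values are compressed and/or moved out of line into
    -- this table's own TOAST relation, created automatically:
    SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'blobs';

    -- The shared large-object catalog is not involved at all:
    SELECT count(*) FROM pg_catalog.pg_largeobject;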
I missed the part that BYTEA was being used since it's generally not a
good way for storing large binary data.
On 29/04/2010 18:45, Justin Graf wrote:
> On 4/29/2010 12:07 PM, David Wall wrote:
>>
>>
>> Big downside for the DB is that all large objects appear to be stored
>> together in pg_catalog.pg_largeobject, which seems axiomatically
>> troubling that you know you have lots of big data, so you then store
>> them together, and then worry about running out of OIDs.
On 4/29/2010 12:07 PM, David Wall wrote:
>
>
> Big downside for the DB is that all large objects appear to be stored
> together in pg_catalog.pg_largeobject, which seems axiomatically
> troubling that you know you have lots of big data, so you then store
> them together, and then worry about running out of OIDs.
Things to consider when /not/ storing them in the DB:
1) Backups of DB are incomplete without a corresponding backup of the files.
2) No transactional integrity between filesystem and DB, so you will
have to deal with orphans from both INSERT and DELETE (assuming you
don't also update the file contents).
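One common mitigation for point 2, sketched here with invented names:
insert the row first, flag it committed only after the file is safely
written, and sweep for leftovers periodically.

    CREATE TABLE stored_files (
        id         serial PRIMARY KEY,
        path       text NOT NULL,            -- location on the filesystem
        committed  boolean NOT NULL DEFAULT false,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- Application flow: 1) INSERT the row, 2) write the file to 'path',
    -- 3) UPDATE ... SET committed = true.  A periodic sweeper then
    -- removes rows that never got committed:
    DELETE FROM stored_files
    WHERE NOT committed
      AND created_at < now() - interval '1 day'
    RETURNING path;  -- the sweeper unlinks these paths from the filesystem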
2010/4/28 Adrian Klaver:
> On Tuesday 27 April 2010 5:45:43 pm Anthony wrote:
>> On Tue, Apr 27, 2010 at 5:17 AM, Cédric Villemain
>> <cedric.villemain.deb...@gmail.com> wrote:
>> > store your files in a filesystem, and keep the path to the file (plus
>> > metadata, acl, etc...) in database.
>
On Tuesday 27 April 2010 5:45:43 pm Anthony wrote:
> On Tue, Apr 27, 2010 at 5:17 AM, Cédric Villemain
> <cedric.villemain.deb...@gmail.com> wrote:
> > store your files in a filesystem, and keep the path to the file (plus
> > metadata, acl, etc...) in database.
>
> What type of filesystem is good for this?
On Tue, Apr 27, 2010 at 5:17 AM, Cédric Villemain
<cedric.villemain.deb...@gmail.com> wrote:
> store your files in a filesystem, and keep the path to the file (plus
> metadata, acl, etc...) in database.
>
What type of filesystem is good for this? A filesystem with support for
storing tens of thousands of files?
On Tuesday 27 April 2010 11.17:42 Cédric Villemain wrote:
> > Anyone had this kind of design problem and how did you solve it?
>
> store your files in a filesystem, and keep the path to the file (plus
> metadata, acl, etc...) in database.
... and be careful that db and file storage do not go out of sync.
S3 is not primary storage for the files, it's a distribution system.
We want to be able to switch from S3 to another CDN if required.
So, "master" copies of the files are kept on a private server. The question
is: should it be database or filesystem?
On Tue, Apr 27, 2010 at 7:03 PM, Massa, Harald Armin wrote:
2010/4/27 Rod:
> Hello,
>
> I have a web application where users upload/share files.
> After a file is uploaded it is copied to S3 and all subsequent downloads
> are done from there.
> So in a file's lifetime it's accessed only twice: when created and
> when copied to S3.
>
> Files are documents of different sizes, from a few kilobytes
>
> No, I'm not storing RDBMS in S3. I didn't write that in my post.
> S3 is used as CDN, only for downloading files.
>
So you are storing your files on S3?
Why should you store those files additionally in a PostgreSQL database?
If you want to keep track of them / remember metadata, hashes will do.
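A minimal sketch of that hashes-plus-metadata idea (schema invented for
illustration): keep a content hash next to the S3 key, so the remote copy
can be verified without storing the bytes in PostgreSQL.

    CREATE TABLE file_meta (
        id      serial PRIMARY KEY,
        s3_key  text   NOT NULL,
        md5_hex text   NOT NULL,  -- for simple (non-multipart) uploads,
                                  -- S3's ETag is the MD5 of the object
        size    bigint NOT NULL
    );

    -- md5() is built in and accepts bytea, so the hash can be computed
    -- server-side if the bytes pass through the database at upload time.
    SELECT md5('file contents here'::bytea);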
No, I'm not storing RDBMS in S3. I didn't write that in my post.
S3 is used as CDN, only for downloading files.
On Tue, Apr 27, 2010 at 6:54 PM, John R Pierce wrote:
> Rod wrote:
>>
>> Hello,
>>
>> I have a web application where users upload/share files.
>> After a file is uploaded it is copied to S3 and all subsequent downloads
>> are done from there.
Rod wrote:
Hello,
I have a web application where users upload/share files.
After a file is uploaded it is copied to S3 and all subsequent downloads
are done from there.
So in a file's lifetime it's accessed only twice: when created and
when copied to S3.
Files are documents of different sizes, from a few kilobytes
Hello,
I have a web application where users upload/share files.
After a file is uploaded it is copied to S3 and all subsequent downloads
are done from there.
So in a file's lifetime it's accessed only twice: when created and
when copied to S3.
Files are documents of different sizes, from a few kilobytes