Re: [GENERAL] BLOB updates -> database size explodes

2013-05-30 Thread Luca Ferrari
On Thu, May 30, 2013 at 12:49 AM, Stephen Scheck wrote:

> If this hypothesis is correct, doing a vacuum should free up dead pages and
> your size expectations should be more accurate. And if that's the case
> putting more intelligence into the application could mitigate some of the
> update growth (predicting what page temporally similar updates will go to
> and grouping them into a single transaction, for instance).
>

Seems correct to me, according to
http://www.postgresql.org/docs/current/static/lo.html
I would give vacuumlo a try to see if the size decreases; if it does,
that is the right explanation.
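
vacuumlo is a separate client program, but a quick check can also be done
from psql against pg_largeobject itself -- a minimal sketch only (the size
query mirrors the one from the original post):

    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));  -- before
    VACUUM VERBOSE pg_largeobject;   -- reports how many dead row versions it found
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));  -- after
    -- note: plain VACUUM only marks dead space as reusable; VACUUM FULL (or a
    -- dump/reload) is needed to actually return the space to the operating system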

Luca




Re: [GENERAL] BLOB updates -> database size explodes

2013-05-29 Thread Stephen Scheck
This is just a guess (I haven't dug into the low-level page/disk access
Postgres code for Large Objects yet but if I'm right, the LO-based project
I'm working on will likely face the same issues you're seeing), but LOs
enjoy transactional behavior just like anything else (as far as I can tell
from my testing) and so are subject to MVCC effects. Since LOs are opaque
to Postgres and it can't infer anything about their structure, even
flipping a single bit in a LO causes the page that bit falls on to be
marked invalid (as if the page corresponded exactly to one row in a normal
table), and the page is copied to a new one along with your change(s).

If this hypothesis is correct, doing a vacuum should free up dead pages and
your size expectations should be more accurate. And if that's the case
putting more intelligence into the application could mitigate some of the
update growth (predicting what page temporally similar updates will go to
and grouping them into a single transaction, for instance).
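
For instance, something along these lines -- only a rough sketch, where the
OID 16385, the offsets and the data are made up, 131072 is 0x20000 (the
INV_WRITE flag), and the descriptor returned by lo_open is assumed to be 0
because it is the first object opened in the transaction:

    BEGIN;
    SELECT lo_open(16385, 131072);                 -- open the large object for writing
    SELECT lo_lseek(0, 0, 0);                      -- seek descriptor 0 to byte 0
    SELECT lowrite(0, E'\\101\\102\\103'::bytea);  -- first slice
    SELECT lo_lseek(0, 3072, 0);                   -- seek to byte 3072
    SELECT lowrite(0, E'\\104\\105\\106'::bytea);  -- second slice, same transaction
    SELECT lo_close(0);
    COMMIT;  -- one transaction, so each LO page touched is rewritten once, not once per write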


On Tue, May 28, 2013 at 2:53 PM, Dimitar Misev wrote:

> I'm having some issue with BLOB updates (via ECPG). The total blobs size
> should be ~280MB, but after partially updating all of them for 150 times
> the size on disk grows up from 184MB to 18GB.
>
> In more details:
>
> There are 608 blobs of size 460800 bytes. All blobs are updated piecewise
> in 150 repetitions; so first all blobs are updated in bytes 0 - 3071, then
> 3072 - 6143, etc. In the end on the disk the database is 18GB.
>
> Computing the size of pg_largeobject gives me 267MB:
>
>SELECT pg_size_pretty(count(loid) * 2048) FROM pg_largeobject;
>  pg_size_pretty
> ----------------
>  267 MB
>
> On the other hand, this gives me 18GB, and du -sh on the disk also reports
> 18.4GB:
>
>SELECT tablename,
>pg_size_pretty(size) AS size_pretty,
>pg_size_pretty(total_size) AS total_size_pretty
>FROM (SELECT *, pg_relation_size(schemaname||'.'||tablename) AS size,
> pg_total_relation_size(schemaname||'.'||tablename)
>AS total_size
>   FROM pg_tables) AS TABLES
>WHERE TABLES.tablename = 'pg_largeobject'
>ORDER BY total_size DESC;
>    tablename    | size_pretty | total_size_pretty
> ----------------+-------------+-------------------
>  pg_largeobject | 18 GB       | 18 GB
>
>
> Doing these updates takes 85 minutes on quad-core i7 with 6GB RAM and SSD
> hard disk. This is PostgreSQL 8.4.12 on Debian 64 bit.
>
> Anyone knows what's going on here?
>
>


[GENERAL] BLOB updates -> database size explodes

2013-05-28 Thread Dimitar Misev
I'm having an issue with BLOB updates (via ECPG). The total size of the blobs
should be ~280MB, but after partially updating all of them 150 times,
the size on disk grows from 184MB to 18GB.


In more details:

There are 608 blobs of size 460800 bytes. All blobs are updated
piecewise in 150 repetitions: first all blobs are updated in bytes 0
- 3071, then 3072 - 6143, etc. In the end the database on disk is 18GB.


Computing the size of pg_largeobject gives me 267MB:

   SELECT pg_size_pretty(count(loid) * 2048) FROM pg_largeobject;
  pg_size_pretty
 ----------------
  267 MB

On the other hand, this gives me 18GB, and du -sh on the disk also 
reports 18.4GB:


   SELECT tablename,
   pg_size_pretty(size) AS size_pretty,
   pg_size_pretty(total_size) AS total_size_pretty
   FROM (SELECT *, pg_relation_size(schemaname||'.'||tablename) AS size,
pg_total_relation_size(schemaname||'.'||tablename)
   AS total_size
  FROM pg_tables) AS TABLES
   WHERE TABLES.tablename = 'pg_largeobject'
   ORDER BY total_size DESC;
    tablename    | size_pretty | total_size_pretty
 ----------------+-------------+-------------------
  pg_largeobject | 18 GB       | 18 GB


Doing these updates takes 85 minutes on a quad-core i7 with 6GB RAM and
an SSD. This is PostgreSQL 8.4.12 on Debian 64 bit.


Does anyone know what's going on here?




Re: [GENERAL] Blob handling with Delphi...

2011-04-13 Thread John R Pierce

On 04/13/11 1:28 AM, Durumdara wrote:

Hi!

PG9.0, Delphi 6, Zeos.

I want to use PGSQL bytea field as normal BLOB field in Delphi.

But when I insert a value into this field, for example all characters 
(chr 0..255), and I fetch, and save it as blob stream into a file, I 
got interesting result, not what I stored here previously.


It is got \x prefix, and it is stored hexadecimal values.

Is it normal, and I needs to convert this format to readable before I 
use it, or I can get same result as in other databases/adapters (the 
stream saved BlobField.SaveToFile have equal content as 
BlobField.LoadFromFile)...


Many DBAware components can show the blob directly as Image. With PG's 
\x prefix this won't working well... :-(




In 9.0, the default encoding for BYTEA changed, and if your client
interface doesn't understand the new encoding (and isn't using the libpq
entry points for handling byte encoding), it will fail like this.
There's a SET variable that will restore the old behavior.


See
http://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-BYTEA-OUTPUT


SET bytea_output='escape';

should revert to the previous behavior, and may well fix your problem
with Delphi.
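
The difference is easy to see from psql (a quick illustration only; 'ABC'
just stands in for real binary data):

    SET bytea_output = 'hex';       -- the 9.0 default
    SELECT 'ABC'::bytea;            -- returns \x414243
    SET bytea_output = 'escape';    -- the pre-9.0 behavior that older drivers expect
    SELECT 'ABC'::bytea;            -- returns ABC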




[GENERAL] Blob handling with Delphi...

2011-04-13 Thread Durumdara
Hi!

PG9.0, Delphi 6, Zeos.

I want to use a PGSQL bytea field as a normal BLOB field in Delphi.

But when I insert a value into this field, for example all characters (chr
0..255), then fetch it and save it as a blob stream into a file, I get an
interesting result, not what I stored previously.

It has a \x prefix, and the values are stored as hexadecimal.

Is this normal, and do I need to convert this format to something readable
before I use it, or can I get the same result as in other databases/adapters
(the file saved with BlobField.SaveToFile has the same content as what
BlobField.LoadFromFile loaded)?

Many DB-aware components can show the blob directly as an image. With PG's \x
prefix this won't work well... :-(


Thanks for your help:
dd


Re: [GENERAL] Blob fields and backups

2006-11-30 Thread Jim Nasby

On Nov 30, 2006, at 5:15 AM, [EMAIL PROTECTED] wrote:
I have an Oracle DB, where my backup file is 280 GB and growing. I also
have a lot of blob fields there. When I do a backup and recovery, the blob
fields are there, and my boss is alive.

I want to know how postgresql's backup utilities deal with blob fields...


Most people that deal with binary data in PostgreSQL use bytea, which
to PostgreSQL is JustAnotherField. It'll dump and restore just fine.
The one downside is that a lot of binary values get escaped into
octal, i.e. '\000', which adds a lot of size to the dump, though the
custom dump format might get around that. I think that support for
large objects (which are more akin to Oracle blobs/clobs) is in
pg_dump as well, but I've never actually used them.

Ultimately, if you've got a 300G database, you probably don't want to
be using pg_dump anyway; instead use Point In Time Recovery.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





[GENERAL] Blob fields and backups

2006-11-30 Thread william . munoz
Hi,

I have an Oracle DB, where my backup file is 280 GB and growing. I also have a
lot of blob fields there. When I do a backup and recovery, the blob fields are
there, and my boss is alive.

I want to know how postgresql's backup utilities deal with blob fields...

Thanks,










Re: [GENERAL] BLOB & Searching

2006-06-13 Thread Joshua D. Drake

Jim C. Nasby wrote:
> On Mon, Jun 12, 2006 at 02:44:34PM -0700, [EMAIL PROTECTED] wrote:
> > 1. How can I store the word doc's in the DB, would it be best to use a
> > BLOB data type?
>
> Use a bytea field.
>
> > 2. Does Postgres support full text searching of a word document once it
>
> Nope.


Not natively. It would however be possible to use one of the .doc libs 
out there to read the binary document and run it through a parser and 
feed that to Tsearch2.


Sincerely,

Joshua D. Drake



--

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [GENERAL] BLOB & Searching

2006-06-13 Thread Jim C. Nasby
On Mon, Jun 12, 2006 at 02:44:34PM -0700, [EMAIL PROTECTED] wrote:
> 1. How can I store the word doc's in the DB, would it be best to use a
> BLOB data type?
 
Use a bytea field.
 
> 2. Does Postgres support full text searching of a word document once it

Nope.

> is loaded into the BLOB column & how would this work?   Would I have to
> unload each BLOB object, convert it back to text to search, or does
> Postgres have the ability to complete the full-text search of a BLOB,
> like MSSQL Server & Oracle do?
 
You'd want to store the plain-text version of the doc and then index
that using tsearch2. Actually, there may be a way to avoid storing the
text representation if you get clever with tsearch2... worst case you
might need to extend tsearch2 so you can feed it an arbitrary function
instead of a field.
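
Roughly like this -- a minimal sketch only, assuming the tsearch2 contrib
module is installed; the table and column names are made up, and the plain
text would be extracted from the .doc outside the database:

    CREATE TABLE documents (
        id      serial PRIMARY KEY,
        doc     bytea,      -- the original Word file
        doc_txt text,       -- plain text extracted from it
        doc_tsv tsvector    -- search vector built from doc_txt
    );

    UPDATE documents SET doc_tsv = to_tsvector(doc_txt);
    CREATE INDEX documents_tsv_idx ON documents USING gist (doc_tsv);

    SELECT id FROM documents WHERE doc_tsv @@ to_tsquery('keyword & phrase');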
 
> 3. Is there a way to export the Word Doc From the BLOB colum and dump
> it into a PDF format (I guess I am asking if someone has seen or
> written a PDF generator script/storedProc for Postgres)?

No, but you should be able to make that happen using an untrusted
language/function and some external tools.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



[GENERAL] BLOB & Searching

2006-06-13 Thread jdwatson1
Hi,
I am not 100% sure what the best solution would be, so I was hoping
someone could point me in the right direction.

I usually develop in MS tools, such as .net, ASP, SQL Server etc...,
but I really want to expand my skillset and learn as much about
Postgres
as possible.


What I need to do is design a DB that will index and store
approximately 300 Word docs, each with a size of no more than 1MB.  They
need to be able to search the Word documents for keywords/phrases to be
able to identify which one to use.


So, I need to write 2 web interfaces, a front end and a back end: a front
end for the users who will search for their documents, and a back end
for an admin person to upload new/amended documents to the DB to be
searchable.


NOW.  I could do this in the usual MS tools that I work with, using
BLOBs and the built-in full-text searching that comes with SQL Server,
but I don't have these to work with. I am working with Postgres & JSP
pages.


What I was hoping someone could help me out with was identifying the
best possible solution to use.


1. How can I store the word doc's in the DB, would it be best to use a
BLOB data type?


2. Does Postgres support full text searching of a word document once it
is loaded into the BLOB column & how would this work?   Would I have to
unload each BLOB object, convert it back to text to search, or does
Postgres have the ability to complete the full-text search of a BLOB,
like MSSQL Server & Oracle do?


3. Is there a way to export the Word doc from the BLOB column and dump
it into a PDF format (I guess I am asking if someone has seen or
written a PDF generator script/storedProc for Postgres)?


If someone could help me out, it would be greatly appreciated.


cheers, 
James




Re: [GENERAL] BLOB and OID

2005-11-04 Thread vishal saberwal
Yes, there is one, but I don't know how you would modify the PostgreSQL code.

You can implement a GUID datatype and use it for indexing, which guarantees
uniqueness and can be stored in double format.

thanks,
vish

On 11/4/05, Lolke B. Dijkstra <[EMAIL PROTECTED]> wrote:
> Hi,
>
> OID being a 4 byte int seems limited to indexing for binary objects.
> More precisely I use BLOB to store images in the database and link these
> objects to another table using the OID as FK.
> If I run out of OID there will be no way to index new images. Of course
> when not automatically creating OID for all objects in the database you
> will not easily run out of index values, but still..
> Is there an alternative way of indexing BLOB? Or can I extend the range
> of OID?
>
> Lolke
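
A rough sketch of that idea (the GUID would be generated by the application,
since there is no built-in uuid type in these releases; the names are
illustrative only):

    CREATE TABLE images (
        guid  char(36) PRIMARY KEY,   -- application-generated GUID, guarantees uniqueness
        image oid NOT NULL            -- the large object's loid, used only to fetch the data
    );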


[GENERAL] BLOB and OID

2005-11-04 Thread Lolke B. Dijkstra

Hi,

OID being a 4 byte int seems limited to indexing for binary objects. 
More precisely I use BLOB to store images in the database and link these 
objects to another table using the OID as FK.
If I run out of OID there will be no way to index new images. Of course 
when not automatically creating OID for all objects in the database you 
will not easily run out of index values, but still..
Is there an alternative way of indexing BLOB? Or can I extend the range 
of OID?


Lolke



Re: [GENERAL] Blob data type and it's efficiency on PostgreSQL

2005-09-20 Thread Daniel Schuchardt

Stas Oskin wrote:


Hi.

 

We are using PostgreSQL as the RDBMS for our product, and are very 
happy with it. Recently, we have encountered a need to store a lot of 
binary files, mainly images (up to ~100,000 files, with sizes varying 
from 300K-2MB).


 

The question is, how well PostgreSQL performs with the blob data type, 
and is it practical to store these files as blobs?


 


Thanks in advance,

Stas Oskin.

We save binary data in pgsql without problems (especially icons, report
definitions, images, ...). Be careful with dump and restore, because it
is a bit complicated to work with blobs here.


Daniel



[GENERAL] Blob data type and it's efficiency on PostgreSQL

2005-09-19 Thread Stas Oskin








Hi.

We are using PostgreSQL as the RDBMS for our product, and are very happy with
it. Recently, we have encountered a need to store a lot of binary files, mainly
images (up to ~100,000 files, with sizes varying from 300K-2MB).

The question is, how well does PostgreSQL perform with the blob data type, and
is it practical to store these files as blobs?

Thanks in advance,

Stas Oskin.








Re: [GENERAL] blob storage

2005-04-26 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> On Tue, Apr 26, 2005 at 03:41:28PM -0500, Scott Marlowe wrote:
>> If you store them as large objects, they will each get their own file.

> Huh, no, they won't.  They will be stored in the pg_largeobject table.
> It's been quite a while since they are not stored in separate files;
> though they keep the POSIX-filesystem-like semantics.

But in any case, the platform limit on file size is irrelevant because
we split tables into 1GB-size files.  The effective limit is 16TB IIRC
(see the FAQ for the correct number).

regards, tom lane



Re: [GENERAL] blob storage

2005-04-26 Thread Scott Marlowe
On Tue, 2005-04-26 at 16:42, Alvaro Herrera wrote:
> On Tue, Apr 26, 2005 at 03:41:28PM -0500, Scott Marlowe wrote:
> > On Tue, 2005-04-26 at 15:30, Travis Harris wrote:
> > > I would like to use P* to store files.  These files will probably
> > > range from 500K to 2 MB in size and there will be thousands upon
> > > thousands of them.  I was wondering how P* stores blobs, if it is all
> > > in one file, or if each blob is sored in it's own file.  The reason
> > > being, I know that windows has a 2 GB limit on files, and if they are
> > > not stored as their own files, I'll hit my limit FAST... and it'll do
> > > me no good...  If this is going to be a problem, does anyone have any
> > > suggestions?
> > 
> > If you store them as large objects, they will each get their own file.
> 
> Huh, no, they won't.  They will be stored in the pg_largeobject table.
> It's been quite a while since they are not stored in separate files;
> though they keep the POSIX-filesystem-like semantics.

Oh, I guess it's been a few years since I last played with large
objects.  Sorry for the misinformation.

I guess you can tell I prefer bytea nowadays...  :)



Re: [GENERAL] blob storage

2005-04-26 Thread Alvaro Herrera
On Tue, Apr 26, 2005 at 03:41:28PM -0500, Scott Marlowe wrote:
> On Tue, 2005-04-26 at 15:30, Travis Harris wrote:
> > I would like to use P* to store files.  These files will probably
> > range from 500K to 2 MB in size and there will be thousands upon
> > thousands of them.  I was wondering how P* stores blobs, if it is all
> > in one file, or if each blob is sored in it's own file.  The reason
> > being, I know that windows has a 2 GB limit on files, and if they are
> > not stored as their own files, I'll hit my limit FAST... and it'll do
> > me no good...  If this is going to be a problem, does anyone have any
> > suggestions?
> 
> If you store them as large objects, they will each get their own file.

Huh, no, they won't.  They will be stored in the pg_largeobject table.
It's been quite a while since they stopped being stored in separate files,
though they keep the POSIX-filesystem-like semantics.

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
Syntax error: function hell() needs an argument.
Please choose what hell you want to involve.



Re: [GENERAL] blob storage

2005-04-26 Thread Rich Shepard
On Tue, 26 Apr 2005, Travis Harris wrote:
I would like to use P* to store files.  These files will probably range
from 500K to 2 MB in size and there will be thousands upon thousands of
them. I was wondering how P* stores blobs, if it is all in one file, or if
each blob is sored in it's own file. The reason being, I know that windows
has a 2 GB limit on files, and if they are not stored as their own files,
I'll hit my limit FAST... and it'll do me no good... If this is going to be
a problem, does anyone have any suggestions?
Travis,
  I'll assume that "P*" is IM for postgres, eh?
  PostgreSQL-7.x on the 2.2.x kernel series has a limit of 2G. However, on
the 2.4.x kernels the limit is 2T; postgres-8.x on the 2.4.x kernels can
store 4T per table.
  If that pinches your need, I'd love to be selling you storage solutions.
:-)
Rich
--
Dr. Richard B. Shepard, President
Applied Ecosystem Services, Inc. (TM)
   Voice: 503-667-4517   Fax: 503-667-8863


Re: [GENERAL] blob storage

2005-04-26 Thread Joshua D. Drake
Travis Harris wrote:
> I would like to use P* to store files.  These files will probably
> range from 500K to 2 MB in size and there will be thousands upon
> thousands of them.  I was wondering how P* stores blobs,

Either as bytea or a large object.

> if it is all
> in one file, or if each blob is sored in it's own file.  The reason
> being, I know that windows has a 2 GB limit on files,

PostgreSQL automatically splits its files into 1GB segments. Also, I think
the 2GB limit is only on FAT32, not NTFS, but I am not sure.

> and if they are
> not stored as their own files, I'll hit my limit FAST... and it'll do
> me no good...  If this is going to be a problem, does anyone have any
> suggestions?

bytea works well for small files as long as you are not going to do a:

   select * from foo

where foo is going to return 500,000 records all with 500k files. If you
are going to do something like that, use large objects instead.
Sincerely,
Joshua D. Drake
Command Prompt, Inc.



Re: [GENERAL] blob storage

2005-04-26 Thread Scott Marlowe
On Tue, 2005-04-26 at 15:30, Travis Harris wrote:
> I would like to use P* to store files.  These files will probably
> range from 500K to 2 MB in size and there will be thousands upon
> thousands of them.  I was wondering how P* stores blobs, if it is all
> in one file, or if each blob is sored in it's own file.  The reason
> being, I know that windows has a 2 GB limit on files, and if they are
> not stored as their own files, I'll hit my limit FAST... and it'll do
> me no good...  If this is going to be a problem, does anyone have any
> suggestions?

If you store them as large objects, they will each get their own file.

However, you can also store them as rows in a bytea field, and
postgresql will split the table every 1 gig or so automagically.

lo is generally faster but less "database like" and more like a file-system
interface, while bytea tends to have more overhead due to the escaping /
encoding that needs to be done before storage and upon retrieval.
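
A small illustration of that escaping overhead (just to show the size
difference, nothing more):

    SELECT octet_length(E'\\000\\001\\002\\003'::bytea);              -- 4 bytes stored
    SELECT length(encode(E'\\000\\001\\002\\003'::bytea, 'escape'));  -- 16 characters on the wire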



[GENERAL] blob storage

2005-04-26 Thread Travis Harris
I would like to use P* to store files.  These files will probably
range from 500K to 2 MB in size and there will be thousands upon
thousands of them.  I was wondering how P* stores blobs, if it is all
in one file, or if each blob is stored in its own file.  The reason
being, I know that windows has a 2 GB limit on files, and if they are
not stored as their own files, I'll hit my limit FAST... and it'll do
me no good...  If this is going to be a problem, does anyone have any
suggestions?



Re: [GENERAL] Blob Fields

2005-03-04 Thread Ulrich Schwab
Alexandre da Siva wrote:

> Blobs is not Implemented on PostgreSQL, but I need to this field type on
> PosgreSQL databases, how I can to use this? I'm using delphi...
> 
> 
> ps: I readed PosgreSQL Manual and other lists and sites, but not get a
> answer for my specific problem
PostgreSQL has large objects; most probably this is what you want.
See this file in the docs:
pgsql/doc/html/largeobjects.html



Re: [GENERAL] Blob Fields

2005-03-04 Thread J. Greenlees
Alexandre da Siva wrote:
Blobs is not Implemented on PostgreSQL, but I need to this field type on 
PosgreSQL databases, how I can to use this?
I'm using delphi...
ps: I readed PosgreSQL Manual and other lists and sites, but not get a 
answer for my specific problem
http://www.postgresql.org/docs/8.0/interactive/datatype-binary.html
definitions for blob, with usage.




[GENERAL] Blob Fields

2005-03-04 Thread Alexandre da Siva



BLOBs are not implemented in PostgreSQL, but I need this field type in
PostgreSQL databases; how can I use it?
I'm using Delphi...

ps: I read the PostgreSQL manual and other lists and sites, but did not
get an answer to my specific problem.
 


Re: [GENERAL] BLOB help needed...

2004-04-27 Thread Development - multi.art.studio
Hi,
I wrote a PHP script to test this; it works beautifully.
You can download the script at
http://www.erdtrabant.de/index.php?i=500200104
volker
Guy Fraser wrote:
If you are using php, the two functions below should help.
http://ca.php.net/manual/en/function.pg-escape-bytea.php
http://ca.php.net/manual/en/function.pg-unescape-bytea.php
Taber, Mark wrote:
We’re implementing our first PostgreSQL database, and enjoying it 
very much. However, we have a table that will store binary image 
files (pie charts, etc.) for later display on a dynamic webpage. 
While we’re putting together our prototype application, I’ve been 
asked by the programmers (I’m the DBA) to “put the images in the 
database.” I can see how to do this using Large Objects, but then 
getting them out again seems problematic, and the documentation is a 
bit sketchy. Would BYTEA columns be better? However, it seems to me 
that there is no easy way using psql to load images into a BYTEA 
column. Any help would be greatly appreciated.

Regards,
Mark Taber
State of California
Department of Finance
Infrastructure & Architecture Unit
916.323.3104 x 2945




Re: [GENERAL] BLOB help needed...

2004-04-27 Thread Guy Fraser
If you are using php, the two functions below should help.
http://ca.php.net/manual/en/function.pg-escape-bytea.php
http://ca.php.net/manual/en/function.pg-unescape-bytea.php
Taber, Mark wrote:
We’re implementing our first PostgreSQL database, and enjoying it very 
much. However, we have a table that will store binary image files (pie 
charts, etc.) for later display on a dynamic webpage. While we’re 
putting together our prototype application, I’ve been asked by the 
programmers (I’m the DBA) to “put the images in the database.” I can 
see how to do this using Large Objects, but then getting them out 
again seems problematic, and the documentation is a bit sketchy. Would 
BYTEA columns be better? However, it seems to me that there is no easy 
way using psql to load images into a BYTEA column. Any help would be 
greatly appreciated.

Regards,
Mark Taber
State of California
Department of Finance
Infrastructure & Architecture Unit
916.323.3104 x 2945


[GENERAL] BLOB help needed...

2004-04-27 Thread Taber, Mark








We’re implementing our first PostgreSQL database, and
enjoying it very much.  However, we have a table that will store binary
image files (pie charts, etc.) for later display on a dynamic webpage.  While
we’re putting together our prototype application, I’ve been asked
by the programmers (I’m the DBA) to “put the images in the
database.”  I can see how to do this using Large Objects, but then
getting them out again seems problematic, and the documentation is a bit
sketchy.  Would BYTEA columns be better?  However, it seems to me
that there is no easy way using psql to load images into a BYTEA column. 
Any help would be greatly appreciated.
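
For the Large Objects route, the server-side lo_import() and lo_export()
functions can be called straight from psql -- a minimal sketch only (the path
and OID are illustrative, the files live on the database server, and both
calls require superuser rights):

    SELECT lo_import('/tmp/pie_chart.png');              -- returns the new object's OID
    SELECT lo_export(16385, '/tmp/pie_chart_out.png');   -- writes the object back to a file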

 

Regards,

 

Mark Taber

State of California

Department of Finance

Infrastructure & Architecture Unit

916.323.3104 x 2945

 








Re: [GENERAL] BLOB problem

2004-02-03 Thread Tom Lane
Rens Admiraal <[EMAIL PROTECTED]> writes:
> upload te file using pg_lo_import() (PHP function). Everything works 
> fine, fast, and I was really glad with it, till I found out that my 
> database is rapadly growing. With only 20 images the database has a size 
> of 65 MB !!!

Hard to tell much from that statement.  I'd suggest looking into the
$PGDATA directory tree to find out which files are taking up the bulk of
the space --- with that info we can give you some advice.

regards, tom lane



[GENERAL] BLOB problem

2004-02-03 Thread Rens Admiraal




Hi,

I've encountered a problem with a PostgreSQL database. I made an image
management system which stores images in a database from a PHP script.
I upload the file using pg_lo_import() (PHP function). Everything works
fine, fast, and I was really glad with it, till I found out that my
database is rapidly growing. With only 20 images the database has a
size of 65 MB !!!

Does anybody know where this comes from? I know I don't give a lot of
information; if you want more, ask for it. But I don't want to send
unreadable posts ;-)

Grtz





Re: [GENERAL] BLOB problem

2004-02-03 Thread Doug McNaught
Rens Admiraal <[EMAIL PROTECTED]> writes:

>I've encountered a problem with a PostgreSQL database. I made a image
>management system which stores images in a database from a PHP script.
>I upload te file using pg_lo_import() (PHP function). Everything works
>fine, fast, and I was really glad with it, till I found out that my
>database is rapadly growing. With only 20 images the database has a
>size of 65 MB !!!

Are you running VACUUM regularly?  You might want to do a VACUUM FULL
and see if the size goes down again.

The other possibility is that you're not dropping large objects when
they're superseded by new ones.  If you never call lo_unlink() in your
code that's probably what's happening.
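
A minimal sketch of that cleanup (the OID just stands in for whatever the old
image's loid was):

    -- after importing the replacement image and repointing your row at the new OID:
    SELECT lo_unlink(16385);     -- drop the superseded large object
    VACUUM pg_largeobject;       -- so the dead LO pages can be reclaimed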

-Doug




[GENERAL] BLOB

2001-10-17 Thread Benny Marin

Hello,

Can someone tell me how to work with large objects ?

All I want to do is store an image in my database and retrieve it
back.
What datatype do I have to use?

What is the SQL statement to send an image to the database, and how can I
retrieve it back?

I'm using php or VB.

Thanks






[GENERAL] BLOB type

2001-05-03 Thread Vladislav Breus

Does PostgreSQL have a data type like the Oracle BLOB type?

(BLOB: A binary large object. Maximum size is 4 gigabytes.)

I have about 200-300 binary files (each smaller than 1-2 MB)
which I need to store in the DB.




[GENERAL] BLOB DBI func() interface under postgres

2000-06-27 Thread Louis-David Mitterrand

Hello,

In DBD::Pg one can read (line 134):

$lobj_fd = $dbh->func($lobjId, $mode, 'lo_open');

But how is the LOB retrieved in the first place? If I pass the OID of an
existing LOB instance from a table the returned $lobj_fd is null. What
kind of $lobjId is one supposed to pass to this function to open a LOB?

The aim is to be able to read a LOB from a postgres DB without having to
lo_export the object to a file first. Can that be done with the
$dbh->func() interface? (using lo_open, lo_read, etc ..)

TIA

-- 
Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.fr




RE: [GENERAL] BLOB datatype

2000-04-10 Thread Sampath, Krishna

Look at the examples in the PostgreSQL book available on the website.
 
krishna
 

-Original Message-
From: iqbal [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 10, 2000 7:11 AM
To: [EMAIL PROTECTED]
Subject: [GENERAL] BLOB datatype


i have created a table having blob object but now i am  not able to insert a
picture into the table. Can you send me sql command to insert picture into
the table i need syntax. i tried through DB2IMAGE but failed to insert.
 
if there is any other method of putting image in the table, kindly send me
the same




[GENERAL] BLOB datatype

2000-04-10 Thread iqbal



I have created a table having a blob object, but now I am not able to insert
a picture into the table. Can you send me the SQL command to insert a picture
into the table? I need the syntax. I tried through DB2IMAGE but failed to
insert.

If there is any other method of putting an image in the table, kindly send me
the same.