Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Vance Maverick
Karsten Hilbert writes:
> Well, in my particular case it isn't so much that I *want*
> to access bytea in chunks but rather that under certain
> not-yet-pinned-down circumstances windows clients tend to go
> out-or-memory on the socket during *retrieval* (insertion is
> fine, as is put/get access from Linux clients). Doing
> chunked retrieval works on those boxen, too, so it's an
> option in our application (the user defines a chunk size
> that works, a size of 0 is treated as no-chunking).

This is my experience with a Java client too.  Writing the data with
PreparedStatement.setBinaryStream works great for long strings, but
reading it with the complementary method ResultSet.getBinaryStream runs
into the memory problem, killing the Java VM.
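In SQL terms, the chunked workaround discussed in this thread amounts to repeated substring() calls; a sketch, with invented table and column names (the ? marks are driver-level bind parameters, matching the JDBC case here):

```sql
-- Read a large bytea in pieces instead of one huge getBinaryStream call.
-- The client loops, advancing the offset by the chunk size each time,
-- until the returned piece is shorter than the chunk size.
-- Note: SUBSTRING offsets are 1-based.
SELECT substring(my_bytea FROM ? FOR ?)   -- (offset, chunk_size)
FROM my_table
WHERE id = ?;
```

With SET STORAGE EXTERNAL on the column (discussed elsewhere in this digest), each such call reads only the TOAST pages it needs rather than decompressing the whole value.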

Thanks to all for the useful feedback.  I'm going to post a note to the
JDBC list as well to make this easier to find in the future.

Vance

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] query large amount of data in c++ using libpq

2007-08-18 Thread Felix Ji
Hi all,
I am using PQexecParams() to "SELECT" about 3 million records in C++. It
takes several minutes to complete, and memory use increases dramatically
(to about 200 MB).
It seems that when I use PQexecParams(), I can't use the query result before
all the data has been sent to the client.
Is there something like a server-side cursor in libpq?
Or is there any other way to do this in C++?
Some articles say that queries over large amounts of data are handled well
automatically by pg, but I can't see how.

thanks
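(libpq itself exposes no cursor API, but the same effect is available at the SQL level: wrap the query in a cursor and FETCH in batches, each FETCH being a separate PQexec()/PQexecParams() call. A sketch, with an invented table name and an arbitrary batch size:)

```sql
-- Cursors must run inside a transaction block.
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;
```

Each FETCH materializes only that batch on the client, so memory stays bounded regardless of the total result size.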


Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Guy Rouillier

Ron Johnson wrote:


So why is Perl-SP-INSERT so much slower than Perl-SQL-INSERT?

(I can imagine that the SP code path would be longer, but since IO
is the slowest part of the system, I'm surprised that it's *that*
much slower.)


I'm guessing that since PG allows overloaded SP names, the slowness comes 
from resolving which SP to run.  But that is just a guess.  In my 
environment, I don't *have* overloaded SPs, only a single version of a 
given name.  But when I was doing the conversion from Oracle, it took me 
a couple of tries to get the SP signatures correct, so I'm pretty sure PG 
still goes through the resolution logic even if you only have a single 
instance of a given name.


--
Guy Rouillier

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Seeking datacenter PITR backup procedures [RESENDING]

2007-08-18 Thread Joey K.
Greetings,

We have several web applications with Pg 8.2.x running on isolated servers
(~25). The database size on each machine (du -h pgdata) is ~2 GB. We have
been using nightly filesystem backups (stop pg, tar backup to ftp, start pg),
and that has worked well.

We would like to move to PITR backups, since the database size will increase
over time and our current backup method will increase server downtime.

We have a central ftp backup server (yes, ftp :-) which we would like to use
for weekly full and daily incremental PITR backups.

After reading the docs, PITR is still fuzzy. Our idea for backups is (do
not worry about the syntax):

** START **

tmpwal = "/localhost/tmp"   # tmp space on server 1 for storing wal files
before ftp
Configure $pgdata/postgresql.conf archive_command = "cp %p $tmpwal/%f"

Day 1:
% psql pg_start_backup(); tar pgdata.tar --exclude pg_xlog/ pgdata
% psql pg_stop_backup()
% ftp put pgdata.tar ftpserver:/server1/day1/pgdata
% ftp put $tmpwal/* ftpserver:/server1/day1/wal
% rm -f $tmpwal/* pgdata.tar

Day 2:
% ftp put $tmpwal/* ftpserver:/server1/day2/wal
% rm -f $tmpwal/*

Day 3:
...
...

Day 7:
% rm -f $tmpwal/*
Start over

Recovery on server1 (skeleton commands),
% rm -f $tmpwal/*
% mv pgdata pgdata.hosed
% ftp get ftpbackup:/server1/day1/pgdata.tar  .
% tar -xvf pgdata.tar
% ftp get ftpbackup:/server1/day1/wal/*  $tmpwal
% ftp get ftpbackup:/server1/day2/wal/*  $tmpwal
.
.
% cp -r pgdata.hosed/pg_xlog pgdata/
% echo "restore_command = 'cp $tmpwal/%f %p'" > pgdata/recovery.conf
% start pg (recovery begins)

** END **
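The archive/restore pair in the skeleton above can be written out as matching configuration, roughly as follows (the /localhost/tmp path is the tmpwal placeholder from the sketch, not a real path):

```
# postgresql.conf on server1: ship each completed WAL segment to tmp space
archive_command = 'cp %p /localhost/tmp/%f'

# recovery.conf in the restored pgdata, created before starting recovery:
# pull segments back from the same tmp space
restore_command = 'cp /localhost/tmp/%f %p'
```

The two commands must be mirror images: whatever location archive_command writes to, restore_command must read from.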

Assumptions:
a. After pg_stop_backup(), Pg immediately recycles log files, and hence the
WAL files can be copied to backup. This is a clean starting point.
b. New WAL files generated since (a) constitute the incremental backups.

We are not sure whether WAL file names are unique; we worry that older WAL
files could be overwritten during recovery.

I'm seeking suggestions from others with experience performing PostgreSQL
PITR backups from multiple servers to a central backup server.

Thanks in advance,
Joey Krane


Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Ron Johnson

On 08/18/07 21:10, Phoenix Kiula wrote:
> On 18/08/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
>>
>> On 08/18/07 06:02, Phoenix Kiula wrote:
>> [snip]
>>> Thanks for this. I am logged in as root. Put it there and it works. I
>> Well, that's your first problem.
>>
>> And second.  And third.
> 
> 
> 
> Thanks for the kick in the derrière. Have set it all up to operate as
> user postgres. Or is that insecure too?

Whenever thinking about security, the question to ask yourself is:
am I doing anything that would make it easier for a Bad Guy to gain
access to my data or systems?  Then, do the opposite.

Examples:

Using a powerful account for mundane activities?  Use a mundane
account instead.  (As Joshua pointed out, "postgres" is a powerful
account.)

Sending important data over the wire (or worse, wireless) in clear
text?  Encrypt it.

Vulnerable to SQL injection attacks by sending fully formed SQL
statements across the wire?  Use prepared statements instead.
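The last point can be illustrated at the SQL level (drivers expose the same idea as, e.g., JDBC's PreparedStatement; the statement name here is invented, and auth_users is borrowed from elsewhere in this digest):

```sql
-- With a prepared statement the SQL text is fixed up front and user
-- input is bound as a parameter, so it can never be parsed as SQL.
PREPARE get_user (text) AS
    SELECT * FROM auth_users WHERE username = $1;

-- The argument is only ever treated as a value, never as SQL:
EXECUTE get_user('alice');
```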

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Joshua D. Drake

Phoenix Kiula wrote:
> On 18/08/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
>>
>> On 08/18/07 06:02, Phoenix Kiula wrote:
>> [snip]
>>> Thanks for this. I am logged in as root. Put it there and it works. I
>> Well, that's your first problem.
>>
>> And second.  And third.
> 
> 
> 
> Thanks for the kick in the derrière. Have set it all up to operate as
> user postgres. Or is that insecure too?

Not as insecure, but consider that postgres == PostgreSQL root.

Joshua D. Drake

> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
> 


--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Phoenix Kiula
On 18/08/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
>
> On 08/18/07 06:02, Phoenix Kiula wrote:
> [snip]
> >
> > Thanks for this. I am logged in as root. Put it there and it works. I
>
> Well, that's your first problem.
>
> And second.  And third.



Thanks for the kick in the derrière. Have set it all up to operate as
user postgres. Or is that insecure too?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Gregory Stark

"Karsten Hilbert" <[EMAIL PROTECTED]> writes:

> But maybe this can be nefariously interpreted such that I could sort-of
> implement cutoff-based extended/external switching by prepending "alter
> table ... set storage external/extended ..." to INSERTs/UPDATEs based on
> bytea parameter size. Or even writing a trigger issuing ALTER TABLE
> depending on size of insert ?

I wouldn't suggest doing that. It will bloat the pg_attribute catalog table
and require a lot of extra vacuums. I think it would also create some lock
contention issues.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Karsten Hilbert
On Sat, Aug 18, 2007 at 01:51:18PM -0400, Tom Lane wrote:

> Karsten Hilbert <[EMAIL PROTECTED]> writes:
> > Would it be feasible to add an ALTER TABLE mode
> > ... set storage externally-extended cutoff  ...
> > where  is the user configurable size of the column
> > data at which PostgreSQL switches from extended to external
> > storage strategy ?
> 
> Actually, it just occurred to me that this ties into the recent
> discussion of compression parameters
> http://archives.postgresql.org/pgsql-hackers/2007-08/msg00082.php
> (which hasn't gone further than discussion yet).  Perhaps we need
> an additional parameter which is a maximum input size to attempt
> compression at all.  IOW, the current force_input_size is not
> only useless but exactly backwards ...

I can see that a maximum size can be relevant for the
decision as to whether to *attempt* compression since large
things compress slowly and may unduly slow down queries.

As well as a minimum size to use compression on, quite
obviously.

OTOH, I'd like to be able to tell PostgreSQL to be so kind
and refrain from attempting to compress values above a
certain size even if it thought it'd make sense.

> There was some discussion in that thread (or maybe the earlier
> one on -patches) of exposing the lzcompress parameters directly
> to users, perhaps as an extended form of the current SET STORAGE
> command.  That won't happen for 8.3 but it might later.  In the
Sounds good.

> meantime, if the defaults included not attempting to compress
> multi-megabyte values, I think it'd Just Work for cases like
> yours.
Not as tweakable as I'd eventually want it but, yes, that
would sort of Just Work.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Karsten Hilbert
On Sat, Aug 18, 2007 at 09:32:33PM +0100, Gregory Stark wrote:

> I do have to wonder how you're getting the data *in* though. If it's large
> enough to have to stream out like this then how do you initially load the
> data?
Well, in my particular case it isn't so much that I *want*
to access bytea in chunks but rather that under certain
not-yet-pinned-down circumstances windows clients tend to go
out-or-memory on the socket during *retrieval* (insertion is
fine, as is put/get access from Linux clients). Doing
chunked retrieval works on those boxen, too, so it's an
option in our application (the user defines a chunk size
that works, a size of 0 is treated as no-chunking).

Karsten



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Gregory Stark

"Joshua D. Drake" <[EMAIL PROTECTED]> writes:

> Well this is a guess, but:
>
> Set existing column to storage external
> update existing column with existing data:
>
> UPDATE foo SET bar = bar;

Well, not quite. That would actually reuse the toast pointer without
decompressing it. We try to be clever about not decompressing and duplicating
toast pointers unnecessarily on updates -- in this case too clever.

You could do this:

postgres=# ALTER TABLE foo ALTER bar TYPE bytea, ALTER bar SET STORAGE external;
ALTER TABLE

(Note that you have to include the 'ALTER bar SET STORAGE external' in the
same command or the storage will get reset to the default 'extended' for bytea
even if it was previously set to 'external'.)

When I tested this though I noticed it did *not* decompress compressed data
which was small enough to store internally. This may actually be desirable for
your case since anything small enough to be stored internally is probably not
worth bothering decompressing so it can be streamed out. It will still not be
compressed next time you update it so it's not really helpful for the long
term.

If you want to decompress everything you have to do something like:

postgres=# ALTER TABLE foo ALTER bar TYPE bytea USING bar||'', ALTER bar SET 
STORAGE external;
ALTER TABLE

However note that this will require extra memory for both the decompressed
original value and the new value after "appending" the empty string.

Another option would be to update only the records which need to be
decompressed with something like

UPDATE foo SET bar=bar||'' WHERE length(bar) > pg_column_size(bar)

This at least gives you the option of doing them in small groups or even one
by one. I would suggest vacuuming between each update.



I do have to wonder how you're getting the data *in* though. If it's large
enough to have to stream out like this then how do you initially load the
data?




Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Steve Manes

Ron Johnson wrote:

Interesting.  Does PG have to initiate the Perl interpreter every
time you call a Perl-written SP?


I mean the *application* language was Perl for both the inline insert 
and the proc call.  The proc was written in plpgsql.





Re: [GENERAL] server closed the connection unexpectedly

2007-08-18 Thread Muhyiddin A.M Hayat

This is pg_log :

2007-08-19 03:00:50 LOG:  database system was shut down at 2007-08-19 02:58:26 Malay Peninsula Standard Time
2007-08-19 03:00:50 LOG:  checkpoint record is at 0/75A808
2007-08-19 03:00:50 LOG:  redo record is at 0/75A808; undo record is at 0/0; shutdown TRUE
2007-08-19 03:00:50 LOG:  next transaction ID: 0/1931; next OID: 16737
2007-08-19 03:00:50 LOG:  next MultiXactId: 1; next MultiXactOffset: 0
2007-08-19 03:00:51 LOG:  database system is ready
2007-08-19 03:01:43 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:01:43 LOG:  incomplete startup packet
2007-08-19 03:20:15 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:20:15 LOG:  incomplete startup packet
2007-08-19 03:25:30 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:25:30 LOG:  incomplete startup packet
2007-08-19 03:27:05 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:27:05 LOG:  incomplete startup packet
2007-08-19 03:33:18 WARNING:  there is no transaction in progress
2007-08-19 03:33:29 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "auth_users_pkey" for table "auth_users"
2007-08-19 03:33:30 WARNING:  there is no transaction in progress
2007-08-19 03:33:34 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:33:34 LOG:  incomplete startup packet




Thanks
- Original Message - 
From: "Raymond O'Donnell" <[EMAIL PROTECTED]>

To: "Muhyiddin A.M Hayat" <[EMAIL PROTECTED]>
Cc: 
Sent: Sunday, August 19, 2007 3:13 AM
Subject: Re: [GENERAL] server closed the connection unexpectedly



On 18/08/2007 19:30, Muhyiddin A.M Hayat wrote:


somebody help me please


You'll need to post a lot more information before anyone can help.

Is there anything in the server log? - or the Windows event log?

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly 





Re: [GENERAL] server closed the connection unexpectedly

2007-08-18 Thread Raymond O'Donnell

On 18/08/2007 19:30, Muhyiddin A.M Hayat wrote:


somebody help me please


You'll need to post a lot more information before anyone can help.

Is there anything in the server log? - or the Windows event log?

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



[GENERAL] server closed the connection unexpectedly

2007-08-18 Thread Muhyiddin A.M Hayat
Dear all,

I'm unable to connect to the postgres server; the error is:

C:\Program Files\PostgreSQL\8.2\bin>psql -U postgres siakad
Password for user postgres:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.


pg_log :

2007-08-19 03:00:50 LOG:  database system was shut down at 2007-08-19 02:58:26 
Malay Peninsula Standard Time
2007-08-19 03:00:50 LOG:  checkpoint record is at 0/75A808
2007-08-19 03:00:50 LOG:  redo record is at 0/75A808; undo record is at 0/0; 
shutdown TRUE
2007-08-19 03:00:50 LOG:  next transaction ID: 0/1931; next OID: 16737
2007-08-19 03:00:50 LOG:  next MultiXactId: 1; next MultiXactOffset: 0
2007-08-19 03:00:51 LOG:  database system is ready
2007-08-19 03:01:43 LOG:  could not receive data from client: An operation was attempted on something that is not a socket.
2007-08-19 03:01:43 LOG:  incomplete startup packet


somebody help me please



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Karsten Hilbert
On Sat, Aug 18, 2007 at 10:23:42AM -0700, Joshua D. Drake wrote:

> >> SET STORAGE EXTERNAL (before storing anything in it...)  See the
> >> ALTER TABLE reference page.

> > Now, to convert an existing bytea column I would need to add
> > a new bytea column with "set storage external", move the
> > data from the old column to the new column, remove the old
> > column, and give the new column the original name, correct ?
> 
> Set existing column to storage external
> update existing column with existing data:
> 
> UPDATE foo SET bar = bar;
> 
> Now the down side to this is you are going to create a dead row for
> every update which means a vacuum (probably full) afterward, but the way
> you describe above will do the same thing as well.
Sure.

I was a bit uneasy about the docs saying

"set storage doesn't affect existing data but only sets the
 strategy on new inserts/updates"

and hence thought using a wholly new column would somehow be
safer. But maybe this can be nefariously interpreted such
that I could sort-of implement cutoff-based
extended/external switching by prepending "alter table ...
set storage external/extended ..." to INSERTs/UPDATEs based
on the bytea parameter size. Or even by writing a trigger
that issues ALTER TABLE depending on the size of the insert?

Karsten



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Tom Lane
Karsten Hilbert <[EMAIL PROTECTED]> writes:
> Would it be feasible to add an ALTER TABLE mode
>   ... set storage externally-extended cutoff  ...
> where  is the user configurable size of the column
> data at which PostgreSQL switches from extended to external
> storage strategy ?

Actually, it just occurred to me that this ties into the recent
discussion of compression parameters
http://archives.postgresql.org/pgsql-hackers/2007-08/msg00082.php
(which hasn't gone further than discussion yet).  Perhaps we need
an additional parameter which is a maximum input size to attempt
compression at all.  IOW, the current force_input_size is not
only useless but exactly backwards ...

There was some discussion in that thread (or maybe the earlier
one on -patches) of exposing the lzcompress parameters directly
to users, perhaps as an extended form of the current SET STORAGE
command.  That won't happen for 8.3 but it might later.  In the
meantime, if the defaults included not attempting to compress
multi-megabyte values, I think it'd Just Work for cases like
yours.

regards, tom lane



Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Joshua D. Drake

Ron Johnson wrote:
> On 08/18/07 11:08, Joshua D. Drake wrote:
>> Josh Tolley wrote:
>>> On 8/18/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
 Interesting.  Does PG have to initiate the Perl interpreter every
 time you call a Perl-written SP?
>>> IIRC PostgreSQL should only load the perl interpreter once per session.
>> Right.
> 
> I'd have been stunned if you did it any other way.
> 
> So why is Perl-SP-INSERT so much slower than Perl-SQL-INSERT?
> 
> (I can imagine that the SP code path would be longer, but since IO
> is the slowest part of the system, I'm surprised that it's *that*
> much slower.)

We would have to see his script as there are several variables at hand here.

1. Is he using do or execute with his perl-sql? If he is using execute
then he is getting the advantage of prepared query inserts.

2. Did he use a single transaction? If so then he is not paying for 2
million commits, which he would be with the procedure.

Sincerely,

Joshua D. Drake











Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Karsten Hilbert
On Sat, Aug 18, 2007 at 12:49:09PM -0400, Tom Lane wrote:

> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > Should we consider setting storage external by default for the type?
> 
> No.  That would be counterproductive for the more typical case of bytea
> values in the range of some-small-number-of-kilobytes.  Or at least
> I think that's more typical than values that are so large you have to go
> out of your way to fetch them in chunks.

Would it be feasible to add an ALTER TABLE mode

... set storage externally-extended cutoff  ...

where  is the user configurable size of the column
data at which PostgreSQL switches from extended to external
storage strategy ?

Such that large bytea values would be chunkable while
smaller ones wouldn't at the discretion of the DBA.

Karsten



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Joshua D. Drake

Karsten Hilbert wrote:
> On Sat, Aug 18, 2007 at 12:20:42PM -0400, Tom Lane wrote:
> 
>> "Vance Maverick" <[EMAIL PROTECTED]> writes:
>>> My question is about performance in the postgres server.  When I execute
>>> "SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
>>> does it fetch the whole BYTEA into memory?  Or does it access only the
>>> pages that contain the requested substring?
>> Recent releases will do what you want if the column has been marked
>> SET STORAGE EXTERNAL (before storing anything in it...)  See the
>> ALTER TABLE reference page.
> Ah, thanks, good to know !
> 
> "Recent releases" seems to mean at least as far back as 8.1
> going by the docs.
> 
> Now, to convert an existing bytea column I would need to add
> a new bytea column with "set storage external", move the
> data from the old column to the new column, remove the old
> column, and give the new column the original name, correct ?
> 
> Or is there an easier way?

Well this is a guess, but:

Set existing column to storage external
update existing column with existing data:

UPDATE foo SET bar = bar;

Now the down side to this is you are going to create a dead row for
every update which means a vacuum (probably full) afterward, but the way
you describe above will do the same thing as well.

Sincerely,

Joshua D. Drake


> 
> Karsten






Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Karsten Hilbert
On Sat, Aug 18, 2007 at 12:20:42PM -0400, Tom Lane wrote:

> "Vance Maverick" <[EMAIL PROTECTED]> writes:
> > My question is about performance in the postgres server.  When I execute
> > "SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
> > does it fetch the whole BYTEA into memory?  Or does it access only the
> > pages that contain the requested substring?
> 
> Recent releases will do what you want if the column has been marked
> SET STORAGE EXTERNAL (before storing anything in it...)  See the
> ALTER TABLE reference page.
Ah, thanks, good to know !

"Recent releases" seems to mean at least as far back as 8.1
going by the docs.

Now, to convert an existing bytea column I would need to add
a new bytea column with "set storage external", move the
data from the old column to the new column, remove the old
column, and give the new column the original name, correct ?

Or is there an easier way?

Karsten

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Joshua D. Drake

Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> Should we consider setting storage external by default for the type?
> 
> No.  That would be counterproductive for the more typical case of bytea
> values in the range of some-small-number-of-kilobytes.  Or at least
> I think that's more typical than values that are so large you have to go
> out of your way to fetch them in chunks.

The typical case that I run into with Bytea is storing enough
information to where this would be useful. Specifically items such as
pdf and .doc.

Sincerely,

Joshua D. Drake

> 
>   regards, tom lane
> 






Re: [GENERAL] Blobs in Postgresql

2007-08-18 Thread Shane Ambler

Ron Johnson wrote:


On 08/17/07 23:16, Merlin Moncure wrote:

On 8/18/07, Ron Olson <[EMAIL PROTECTED]> wrote:

The language is Java. I've made some tests and they work very well for 25 MB
files... works exactly the way it should, first time. MySQL had all kinds of
nasty surprises for me when I first started working with blobs, but I can
say that I took my code, changed the driver, and it all works like a champ
(mind you, this was a quick test app).

I haven't looked at encryption at the database level... is such a thing
available? I know Oracle has some form of data encryption at the database
level so the nefarious DBA with the wide mustache and black-brimmed hat,
always going "ah ha ha ha ha", can't make off with the data, but does
Postgres have something similar?

BTW, to put into context, the database will be designed to hold evidence
(well, photos and videos of). Thus the compelling need for some security, as
well as the variation in file sizes.

Well, my assumption was that you would encrypt the data on the client
side and store it that way.

PostgreSQL has open architecture.  If you wanted to do the encryption
on the server, one possible approach that jumps out at me is to write
a small C function which receives the data, encrypts the image using a
key sent by the client (but not stored), and either stores the
encrypted image back in the database via SPI or writes it out to a
file.

There are many strategies to encrypting data...first thing to think
about is where the encryption happens, where the keys are stored, etc.


Client-side encryption is important, because with server-side
encryption, you are sending the Valuable Data across the wire (or,
even worse, wirelessly!) in cleartext form.

It's more likely that there's a packet sniffer on the network than
an Evil DBA snooping around.



The two options I see are -

1. the client encrypts the data and sends it to the DB

2. the client uses an SSL connection to the server to prevent snooping 
and lets the DB encrypt for storage.


I would suggest looking at pgcrypto in contrib for server side encryption.

The main benefit I would see from the first is it doesn't matter if 
another DB admin changes the server security settings or not. The new 
guy may set up a new server and not enforce SSL connections. Of course, if 
the client refuses non-SSL connections, you can prevent that.


Either way the app provides the key to decrypt the data for viewing, so 
the developers current and future must maintain the security level you 
choose.


What sort of security measures are taken for viewing the data? Will each 
user have a security certificate on their own USB flash drive to allow 
them to view the data? which could also prevent developers from 
accessing the data. Or is their password enough to allow the program to 
decrypt it for them?


It would really come down to which encryption method you find easiest to 
implement that provides enough security for your needs.





--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz
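
For option 2, server-side pgcrypto usage looks roughly like this (a sketch
only: the evidence table and column names are made up, and $1/$2 stand for
parameters the client binds at run time; pgcrypto must be installed from
contrib):

```sql
-- Encrypt on the way in. The image bytes ($1) and passphrase ($2) are
-- bound as query parameters by the client; the passphrase is never stored.
INSERT INTO evidence (id, img)
VALUES (1, pgp_sym_encrypt_bytea($1, $2));

-- Decrypt on the way out, ideally over an SSL connection as discussed above:
SELECT pgp_sym_decrypt_bytea(img, $1) FROM evidence WHERE id = 1;
```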



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Should we consider setting storage external by default for the type?

No.  That would be counterproductive for the more typical case of bytea
values in the range of some-small-number-of-kilobytes.  Or at least
I think that's more typical than values that are so large you have to go
out of your way to fetch them in chunks.

regards, tom lane



Re: [GENERAL] language interface in postgresql

2007-08-18 Thread Ron Mayer
David Fetter wrote:
>> Dollar-quoting is a cute technical solution to that, but you can't
>> deny that it's simpler if you just restrict the function language to
>> be SQL-ish so that CREATE FUNCTION can parse it without any
>> interesting quoting rules.  So sayeth Oracle and the SQL standards
>> committee, anyway.
> 
> I think the aforementioned entities got it wrong where we got it right :)
> 

Though Oracle says[1] 'Java-based stored procedures are a standard (ANSI 
SQL-1999
SQLJ-Part-1 a.k.a. "SQL routines using Java") database-independent alternative
to proprietary procedural extensions to SQL'; so I guess they did both think
of some flexibility in that regard, no?

[1] 
http://www.oracle.com/technology/sample_code/tech/java/jsp/samples/jwcache/Abstract.html



Re: [GENERAL] Finding my database

2007-08-18 Thread Raymond O'Donnell

On 18/08/2007 09:03, Adrian Pitt wrote:


I have put v8.0 back on as you suggested, but so far it has made no
difference. I am still unable to get the service restarted.  When installing
I changed the install to drive D, and also changed the data reference to the
directory where I moved the data to. I unchecked the initialize database
cluster and it seemed to install fine and did not create the usual data
directory within the SQL folder.  But for some perplexing reason, even
though it all was working fine on drive C, I can't for the life of me work
out what has gone wrong.  I cannot export any data if it's not finding the


What about permissions on the data directory? Apart from that, I'm 
afraid I'm out of suggestions.


If you post the exact text of the error you're getting, maybe other more 
knowledgeable people can help.


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Joshua D. Drake

Tom Lane wrote:
> "Vance Maverick" <[EMAIL PROTECTED]> writes:
>> My question is about performance in the postgres server.  When I execute
>> "SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
>> does it fetch the whole BYTEA into memory?  Or does it access only the
>> pages that contain the requested substring?
> 
> Recent releases will do what you want if the column has been marked
> SET STORAGE EXTERNAL (before storing anything in it...)  See the
> ALTER TABLE reference page.

Should we consider setting storage external by default for the type?

Joshua D. Drake

> 
>   regards, tom lane
> 
> 






Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Ron Johnson

On 08/18/07 11:08, Joshua D. Drake wrote:
> Josh Tolley wrote:
>> On 8/18/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
>>> Interesting.  Does PG have to initiate the Perl interpreter every
>>> time you call a Perl-written SP?
>> IIRC PostgreSQL should only load the perl interpreter once per session.
> 
> Right.

I'd have been stunned if you did it any other way.

So why is Perl-SP-INSERT so much slower than Perl-SQL-INSERT?

(I can imagine that the SP code path would be longer, but since IO
is the slowest part of the system, I'm surprised that it's *that*
much slower.)

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Tom Lane
"Vance Maverick" <[EMAIL PROTECTED]> writes:
> My question is about performance in the postgres server.  When I execute
> "SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
> does it fetch the whole BYTEA into memory?  Or does it access only the
> pages that contain the requested substring?

Recent releases will do what you want if the column has been marked
SET STORAGE EXTERNAL (before storing anything in it...)  See the
ALTER TABLE reference page.

regards, tom lane
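
In SQL, Tom's suggestion amounts to something like this (a sketch; the
table and column names are taken from Vance's example, and the 64 kB chunk
size and id value are arbitrary):

```sql
-- EXTERNAL = out-of-line but uncompressed storage. It must be set before
-- any data is stored in the column, and it lets substring() fetch just
-- the TOAST chunks covering the requested byte range instead of
-- detoasting the whole value.
ALTER TABLE my_table ALTER COLUMN my_bytea SET STORAGE EXTERNAL;

-- The client then loops, reading one chunk per query (offsets are 1-based):
SELECT substring(my_bytea FROM 1     FOR 65536) FROM my_table WHERE id = 1;
SELECT substring(my_bytea FROM 65537 FOR 65536) FROM my_table WHERE id = 1;
-- ...and stops once a query returns fewer bytes than requested.
```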



Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Joshua D. Drake

Josh Tolley wrote:
> On 8/18/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
>> Interesting.  Does PG have to initiate the Perl interpreter every
>> time you call a Perl-written SP?
> 
> IIRC PostgreSQL should only load the perl interpreter once per session.

Right.

Joshua D. Drake

> 
> - Josh
> 
> 






Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Josh Tolley
On 8/18/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
> Interesting.  Does PG have to initiate the Perl interpreter every
> time you call a Perl-written SP?

IIRC PostgreSQL should only load the perl interpreter once per session.

- Josh



Re: [GENERAL] Partitioning

2007-08-18 Thread Josh Tolley
On 8/18/07, Julio Cesar Sánchez González
<[EMAIL PROTECTED]> wrote:
> Hi guys,
>
> Is it natural for the master table in a partitioned setup
> (http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html)
> to contain data, or should it be empty?

I'm no partitioning expert, but I would say most of the time the
parent table is supposed to be empty. That said, partitioning schemes
need to be developed based on what data you have and what exactly
you're trying to do. If you're using the common example of keeping a
bunch of historical data where partitions represent a distinct chunk
of time (for instance, one partition for every month of data) you
probably want to keep the parent table empty. Every row has a date,
so each row naturally belongs in a child table, and it doesn't
make sense to put anything in the parent table. But I'm sure someone
can come up with a scheme where having data in the parent is also
useful.

- Josh
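
A minimal sketch of the empty-parent pattern Josh describes (the table
name, date ranges, and rule name are hypothetical; see the
ddl-partitioning page linked in the quoted question):

```sql
-- The parent holds no rows itself; each month's data lives in a child.
CREATE TABLE measurement (
    logdate  date NOT NULL,
    reading  int
);

CREATE TABLE measurement_2007_08 (
    CHECK (logdate >= DATE '2007-08-01' AND logdate < DATE '2007-09-01')
) INHERITS (measurement);

-- Redirect inserts aimed at the parent into the right child
-- (a trigger works too):
CREATE RULE measurement_insert_2007_08 AS
    ON INSERT TO measurement
    WHERE (logdate >= DATE '2007-08-01' AND logdate < DATE '2007-09-01')
    DO INSTEAD INSERT INTO measurement_2007_08 VALUES (NEW.*);

-- With constraint_exclusion = on, a date-filtered query scans only the
-- children whose CHECK constraints can match:
SELECT count(*) FROM measurement WHERE logdate >= DATE '2007-08-01';
```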



Re: [GENERAL] Interpreting statistics collector output

2007-08-18 Thread Ron Mayer
Decibel! wrote:
> On Aug 15, 2007, at 2:11 PM, Gregory Stark wrote:
>> "Decibel!" <[EMAIL PROTECTED]> writes:
>>> On Wed, Aug 15, 2007 at 01:26:02PM -0400, Steve Madsen wrote:
 On Aug 15, 2007, at 11:52 AM, Decibel! wrote:
> I can't really think of a case where a seqscan wouldn't return all the
> rows in the table... that's what it's meant to do.
>>
>> LIMIT
> 
> Ok, you got me. :P But normally you wouldn't do a LIMIT without some
> kind of an ORDER BY, which would mean scanning the whole table.


Seems "where exists" does it too, no?



test=#  explain analyze select 1 where exists (select * from bigtbl);
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Result  (cost=9681.61..9681.62 rows=1 width=0) (actual time=0.033..0.034 rows=1 loops=1)
   One-Time Filter: $0
   InitPlan
     ->  Seq Scan on bigtbl  (cost=0.00..9681.61 rows=140461 width=443) (actual time=0.027..0.027 rows=1 loops=1)
 Total runtime: 0.177 ms
(5 rows)



Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Ron Johnson

On 08/18/07 06:02, Phoenix Kiula wrote:
[snip]
> 
> Thanks for this. I am logged in as root. Put it there and it works. I

Well, that's your first problem.

And second.  And third.




Re: [GENERAL] Writing most code in Stored Procedures

2007-08-18 Thread Ron Johnson

On 08/17/07 21:45, Steve Manes wrote:
> Ron Johnson wrote:
>>> Moving all the application-bound inserts into stored procedures didn't
>>> achieve nearly the performance enhancement I'd assumed I'd get, which I
>>> figured was due to the overhead of the procs themselves.
>>
>> Would that be because the original app was written in a compiled
>> language, but the SPs in an interpreted language?
> 
> No, because the application language was Perl5 for both.  I think it was
> just the overhead of 2 million inserts via procs versus 2 million inline
> inserts (without the proc overhead).

Interesting.  Does PG have to initiate the Perl interpreter every
time you call a Perl-written SP?




Re: [GENERAL] Transactional DDL

2007-08-18 Thread Ron Mayer
Scott Marlowe wrote:
> On 8/14/07, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote:
>> Hi,
>> I read a few lines about SP compilation in postgres
>>
>> http://searchoracle.techtarget.com/originalContent/0,289142,sid41_gci1179016,00.html
>>
>> 1. stored procedure compilation is transactional.
>> "You can recompile a stored procedure on a live system, and only
>> transactions starting after that compilation will see the changes," he said.
>> "Transactions in process can complete with the old version. Oracle just
>> blocks on the busy procedure."

Really?

When I tried it [1] - changing a function definition during the
middle of a long-running-query that used the function gave
me the surprising result that some rows were processed using
the old definition of the function and some with the new one.

The explanation from Tom [2] was that there was some good
reason function lookups used SnapshotNow.

  Ron






[1] http://archives.postgresql.org/pgsql-bugs/2005-04/msg00163.php

  I have a long query something like

select slow_function(col) from large_table;

  and half way through the query, in a separate connection, I

CREATE OR REPLACE slow_function 

  I was surprised to see that some of the rows in my select
  were processed by the old definition and some by the new.


[2] http://archives.postgresql.org/pgsql-bugs/2005-04/msg00179.php
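
The experiment in [1] boils down to something like this (function and
table names are the hypothetical ones from the quoted report):

```sql
-- Session 1: a long-running query that calls the function once per row.
SELECT slow_function(col) FROM large_table;

-- Session 2, while session 1 is still running:
CREATE OR REPLACE FUNCTION slow_function(int) RETURNS int
    AS $$ SELECT $1 * 2 $$ LANGUAGE sql;

-- Because function lookups use SnapshotNow rather than the query's own
-- snapshot, rows processed after the replacement can see the new
-- definition, so a single query may mix results from both versions.
```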



Re: [GENERAL] Blobs in Postgresql

2007-08-18 Thread Ron Johnson

On 08/17/07 23:16, Merlin Moncure wrote:
> On 8/18/07, Ron Olson <[EMAIL PROTECTED]> wrote:
>> The language is Java. I've made some tests and they work very well for 25meg
>> files...works exactly the way it should, first time. MySQL had all kinds of
>> nasty surprises for me when I first started working with blobs, but I can
>> say that I took my code, changed the driver, and it all works like a champ
>> (mind you, this was a quick test app).
>>
>> I haven't looked at encryption at the database level...is such a thing
>> available? I know Oracle has some form of data encryption at the database
>> level so the nefarious DBA with the wide mustache and black brimmed hat
>> always going "ah ha ha ha ha" can't make off with the data, but does
>> Postgres have something similar?
>>
>> BTW, to put into context, the database will be designed to hold evidence
>> (well, photos and videos of). Thus the compelling need for some security, as
>> well as the variation in file sizes.
> 
> Well, my assumption was that you would encrypt the data on the client
> side and store it that way.
> 
> PostgreSQL has open architecture.  If you wanted to do the encryption
> on the server, one possible approach that jumps out at me is to write
> a small C function which receives the data, encrypts the image using a
> key sent by the client (but not stored), and either stores the
> encrypted image back in the database via SPI or writes it out to a
> file.
> 
> There are many strategies to encrypting data...first thing to think
> about is where the encryption happens, where the keys are stored, etc.

Client-side encryption is important, because with server-side
encryption, you are sending the Valuable Data across the wire (or,
even worse, wirelessly!) in cleartext form.

It's more likely that there's a packet sniffer on the network than
an Evil DBA snooping around.




[GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Vance Maverick
I'm working on reading large BYTEA fields from PostgreSQL 8.1.  (For
legacy reasons, it's unattractive to move them to large objects.)  I'm
using JDBC, and as various people have pointed out, the standard
stream-style access method runs out of memory for large BYTEAs.

Karsten Hilbert mentions using SUBSTRING to read these BYTEA fields a
chunk at a time.
I've tried this, and indeed it works.  (Once I corrected for the 1-based
indexing ;-))

My question is about performance in the postgres server.  When I execute
"SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
does it fetch the whole BYTEA into memory?  Or does it access only the
pages that contain the requested substring?

Vance



Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Magnus Hagander
Phoenix Kiula wrote:
> On 18/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>>> I am writing some simple batch scripts to login to the DB and do a
>>> pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
>>> to be asked for a password every time (which, for silly corporate
>>> reasons, is quite a convoluted one).
>>>
>> It's in the environment on the client machine. If it's for your scripts,
>> you can set it inside the script before you launch psql for example. If
>> you need it.
> 
> 
> Let's say my script was in Perl or PHP. What would the variable name
> be to set this password? My script is unlikely to call psql, I'm
> thinking of using only pg_dump and pg_restore.

As long as the interface is based on libpq, .pgpass will work. AFAIK,
this includes both Perl and PHP, and it certainly includes pg_dump and
pg_restore.



>>> 1. Where do I set up the automated password for (a) psql stuff and (b)
>>> for bash scripts or cron jobs -- I suppose both could have the same
>>> solution.
>> a) In the home directory of the user running psql.
>> b) In the home directory of the user running the cronjob.
> 
> 
> Thanks for this. I am logged in as root. Put it there and it works. I
> also put a ".psqlrc" in the home directory and that works too! Thanks!
> 
> I'd love to contribute back to the community and mention this in the
> manual for 8.2/interactive. But the community login and commenting on
> the site seems to be broken! Even after I am logged in, it does not
> show it on each page of the site, and when I submit my comment (and
> login all over again for it) it shows me a "numeric error". Where
> should I post that error?

It will appear on the site once it's been approved. But if you get an
actual error, than that needs to be fixed - please email the complete
error you get to the [EMAIL PROTECTED] mailinglist. Thanks!

//Magnus



Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Phoenix Kiula
On 18/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > I am writing some simple batch scripts to login to the DB and do a
> > pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
> > to be asked for a password every time (which, for silly corporate
> > reasons, is quite a convoluted one).
> >
>
> It's in the environment on the client machine. If it's for your scripts,
> you can set it inside the script before you launch psql for example. If
> you need it.


Let's say my script was in Perl or PHP. What would the variable name
be to set this password? My script is unlikely to call psql, I'm
thinking of using only pg_dump and pg_restore.


> > 1. Where do I set up the automated password for (a) psql stuff and (b)
> > for bash scripts or cron jobs -- I suppose both could have the same
> > solution.
>
> a) In the home directory of the user running psql.
> b) In the home directory of the user running the cronjob.


Thanks for this. I am logged in as root. Put it there and it works. I
also put a ".psqlrc" in the home directory and that works too! Thanks!

I'd love to contribute back to the community and mention this in the
manual for 8.2/interactive. But the community login and commenting on
the site seems to be broken! Even after I am logged in, it does not
show it on each page of the site, and when I submit my comment (and
login all over again for it) it shows me a "numeric error". Where
should I post that error?



Re: [GENERAL] Automating logins for mundane chores

2007-08-18 Thread Magnus Hagander
Phoenix Kiula wrote:
> I am writing some simple batch scripts to login to the DB and do a
> pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
> to be asked for a password every time (which, for silly corporate
> reasons, is quite a convoluted one).
> 
> So I read up on .pgpass. Where should this file be located? "User's
> home directory" says the manual, but there's no "home directory" for
> database users (or is there? if so, where?), only for the postgres
> user. So I promptly did "su - postgres" and added the requisite info
> in the .pgpass file therein. But that doesn't seem to automate
> anything for actual DB users.

.pgpass is read by the client. It may not even be located on the same
machine as your server, depending on where you run psql.


> Next, the manual refers to some PGPASSFILE env variable (
> http://www.postgresql.org/docs/8.2/interactive/libpq-envars.html ) but
> it is unclear where this environment is set up. I don't see any such
> setting in the postgresql.conf file, which wouldn't make much sense
> anyway. Where else can I tweak the environment variables? No pointer
> in the manual.  Or was this page only some arcane C libpq stuff?

It's in the environment on the client machine. If it's for your scripts,
you can set it inside the script before you launch psql for example. If
you need it.


> So, my questions:
> 
> 1. Where do I set up the automated password for (a) psql stuff and (b)
> for bash scripts or cron jobs -- I suppose both could have the same
> solution.

a) In the home directory of the user running psql.
b) In the home directory of the user running the cronjob.

> 2. While we're on psql, I quite like the "\timing" stuff inside psql.
> I find it very useful to have that on every time I login to psql
> console, but I don't see any command line option to automate this
> every time. The psql man page (
> http://www.postgresql.org/docs/8.2/interactive/app-psql.html ) talks
> about a "psqlrc", which on my system is found at
> "/usr/share/pgsql/psqlrc" so I opened it up, entered the only line:
> 
>\timing
> 
> And saved it. Then I started the psql console again, but there's no
> timing on by default. How can I setup default options for psql?

That should be /usr/share/pgsql/psqlrc.sample, which is just a sample
file and isn't parsed (unless your distribution did something really
strange). On a source install, you need to put the file in
/usr/local/pgsql/etc/psqlrc - since you're obviously not using a source
install, you'll need to put it wherever your package is configured to
have it (should be documented alongside the package, I hope).

The easier way is to put it in the file .psqlrc in your home directory
(same home directory as you put .pgpass in), assuming you only want this
for one user.

//Magnus
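
Putting Magnus's answers together, the two per-user files look roughly
like this (all values are placeholders; note that libpq ignores .pgpass
unless its permissions are restricted, e.g. chmod 600):

```
# ~/.pgpass -- one line per connection: hostname:port:database:username:password
localhost:5432:mydb:myuser:secret

# ~/.psqlrc -- commands psql runs at startup
\timing
```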





[GENERAL] Automating logins for mundane chores

2007-08-18 Thread Phoenix Kiula
I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons, is quite a convoluted one).

So I read up on .pgpass. Where should this file be located? "User's
home directory" says the manual, but there's no "home directory" for
database users (or is there? if so, where?), only for the postgres
user. So I promptly did "su - postgres" and added the requisite info
in the .pgpass file therein. But that doesn't seem to automate
anything for actual DB users.

Next, the manual refers to some PGPASSFILE env variable (
http://www.postgresql.org/docs/8.2/interactive/libpq-envars.html ) but
it is unclear where this environment is set up. I don't see any such
setting in the postgresql.conf file, which wouldn't make much sense
anyway. Where else can I tweak the environment variables? No pointer
in the manual.  Or was this page only some arcane C libpq stuff?

So, my questions:

1. Where do I set up the automated password for (a) psql stuff and (b)
for bash scripts or cron jobs -- I suppose both could have the same
solution.

2. While we're on psql, I quite like the "\timing" stuff inside psql.
I find it very useful to have that on every time I login to psql
console, but I don't see any command line option to automate this
every time. The psql man page (
http://www.postgresql.org/docs/8.2/interactive/app-psql.html ) talks
about a "psqlrc", which on my system is found at
"/usr/share/pgsql/psqlrc" so I opened it up, entered the only line:

   \timing

And saved it. Then I started the psql console again, but there's no
timing on by default. How can I setup default options for psql?

Many TIA!



[GENERAL] Partitioning

2007-08-18 Thread Julio Cesar Sánchez González
Hi guys,

Is it natural for the master table in a partitioned setup
(http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html)
to contain data, or should it be empty?

Thanks for all.

-- 
Regards,

Julio Cesar Sánchez González
www.sistemasyconectividad.com.mx
blog: http://darkavngr.blogspot.com

---
Ahora me he convertido en la muerte, destructora de mundos.
Soy la Muerte que se lleva todo, la fuente de las cosas que vendran.




Re: [GENERAL] Finding my database

2007-08-18 Thread Adrian Pitt
I have put v8.0 back on as you suggested, but so far it has made no
difference. I am still unable to get the service restarted.  When installing
I changed the install to drive D, and also changed the data reference to the
directory where I moved the data to. I unchecked the initialize database
cluster and it seemed to install fine and did not create the usual data
directory within the SQL folder.  But for some perplexing reason, even
though it all was working fine on drive C, I can't for the life of me work
out what has gone wrong.  I cannot export any data if it's not finding the
data.  And if I was able to export it then I wouldn't need to be going
through all the hassle I am.  Any further suggestions that I might be able
to try to get the SQL service running? What other possible reasons would
there be for it not seeing my database?

-Original Message-
From: Raymond O'Donnell [mailto:[EMAIL PROTECTED] 
Sent: Friday, 17 August 2007 10:33 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Finding my database

On 17/08/2007 05:44, [EMAIL PROTECTED] wrote:

> needed to move it to the D Drive.  After completely messing everything up, I
> am now unable to get it working.  I have uninstalled everything and now have
> v8.2 installed on the D drive, and I put the data directory back within that

I know it's not a lot of help to you at this point, but the recommended 
way to backup and restore a database is using pg_dump, and if you're 
moving from one major version to another it's the only way.

Was the previous installation of Postgres also 8.2? If not - if it was 
an earlier version - I'd put the old version back, point it at the data 
directory, then use pg_dump to export the data if you want to upgrade at 
that point.

Have a look at the following, and in particular the examples towards the 
bottom:

   http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html


HTH,

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

