Re: [GENERAL] VACUUM, 24/7 availability and 7.2

2001-10-15 Thread tek1

where in the postgres docs can we find info about the vacuum feature?

thx... 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[GENERAL] SSL

2001-10-15 Thread Jelle Ouwerkerk

Question: If the postmaster has been enabled to support TCP/IP connections
using SSL, how do I need to modify my PostgreSQL client app in order to
conduct an encrypted conversation? Currently, my app uses the PostgreSQL C
API (libpq). Is the SSL functionality already supported in libpq?

Thanks,
...Jelle




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] lo_export

2001-10-15 Thread Luke

Hi,

I want to extract images from a database.
I am currently trying to use the following script.
The problem is: if I get more than one image returned, how can I write each
one to a different output file?
As you can see, I've tried using '$dbh->func'.


use DBI;
$dbh = DBI->connect("dbi:Pg:dbname=luke") or die $DBI::errstr;

#the next statement gets the objectID, which I have called imagename.

$qry = "select imagename from images where date='7/10/01'";
$sth = $dbh->prepare($qry);
$sth->execute();

$rec_num = 0;
print "\nResults of query are: \n";

#in the next statement I want to print each returned image to a
#different filename, but I can't get func to work here at all
#any suggestions?


while (@array = $sth->fetchrow)
{
($imageName) = @array;
$rec_num = $rec_num + 1;
print "RECORD $rec_num\n";
print "  Name: $imageName\n";
$fileName = "/home/httpd/html/outimage.jpg";
$ret = $dbh->func($imageName, 'lo_export', $fileName);
}


print "\n--\n";

Luke


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] Embedded postgres ?

2001-10-15 Thread Henning Follmann

On Wed, 10 Oct 2001 14:18:41 + (UTC), John P. Looney 
<[EMAIL PROTECTED]> wrote:
>  How well is postgres suited for small applications ?
> 
>  I don't know it's architecture too well, though even with a very small
> database, it seems to use a huge number of files, which are all quite
> large, for the small amount of information in it.

Well, this is not what postgres was designed for.
For embedded systems you are looking for databases that are optimised
for running completely in memory. Also, an SQL engine is maybe not well
suited for embedded devices.
Better would be databases like Berkeley DB or GDBM. They are, though, not
comparable to an RDBMS like postgres, etc...


> 
>  For a database of say a megabyte, could someone take a guess at how small
> an installation I could get postgres down to ?
> 

Cheers,
Henning


-- 
Henning Follmann  | 8 Jane Road
Tel.: +1 908 656 7061 | New Providence, NJ 07974
[EMAIL PROTECTED] | USA



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] Any Good Way To Do Sync DB's?

2001-10-15 Thread Gordan Bobic

On Monday 15 Oct 2001 13:35, Joseph Koenig wrote:
> Your solution sounds very interesting (Not the throw away NT
> part...)

That is where a significant part of the performance improvement would come 
from, if performance was what you were after...

> ...does anyone else have any input on this? Would it work well?

It works great - I am using it on a number of servers. It also gives you a 
much more secure setup, because you can pipe everything through a single 
firewall with a squid accelerating proxy. That means that a hacker would have 
to breach your firewall before they would have even a remote opportunity to 
penetrate deeper into your network and attack the web server.

> Any idea as to what amount of traffic it would be capable of handling?

Any amount of traffic, it just means that only two connections would be 
served at any one time. If your query return and CGI page construction times 
are very low (which they should be if you are using mod_perl on a decent 
server), then you are unlikely to notice any slowdown. The excess hits would 
just get queued and processed when back-ends become available. It would also 
be a good idea to look through your scripts and attempt to connect to the 
database via Perl DBI as late as possible (not at the beginning of the script 
as a global variable, but just before the query is issued), and disconnect as 
early as possible (as soon as the query is executed and the data retrieved). 
This would not hog the connections to the PostgreSQL end as much.
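
For instance, the pattern looks roughly like this (database, table and column
names are made up; untested):

use DBI;
use strict;

sub lookup_product {
    my ($product_id) = @_;

    # ... all the cheap, non-database work happens up here ...

    # Connect only once a query is actually needed.
    my $dbh = DBI->connect("dbi:Pg:dbname=shop", "", "",
                           { RaiseError => 1, AutoCommit => 1 });

    my $sth = $dbh->prepare(
        "SELECT name, price FROM products WHERE product_id = ?");
    $sth->execute($product_id);
    my $row = $sth->fetchrow_hashref;
    $sth->finish;

    # Disconnect as early as possible, freeing the backend for the next hit.
    $dbh->disconnect;

    return $row;
}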

Of course, this is all provided you use perl CGIs and mod_perl on Apache. 
Otherwise, persistent connections don't come into the picture at all.

> If apache is only running in two instances, would that really keep the
> number of ODBC connections to 2 at max?

No, that would only keep the number of your connections from the WEB SERVER 
to two max. ODBC connections from the rest of your network would be a 
completely separate issue. Basically, if Apache only ever has two instances 
running, then it can only ever serve up to 2 requests at any one time, and 
hence the database server will never receive more than 2 simultaneous request 
from the web server - UNLESS your CGIs make multiple database connections in 
parallel. If they do, then you're fighting a losing battle, and you might as 
well give up.

If your CGIs only ever use one connection, then putting the web server behind 
an accelerating squid proxy would actually help further, by not using the 
Apache back ends to serve static documents, such as frames pages, gif/jpg 
background or button images, etc. This means that only the actual CGI 
requests would go to Apache. I have implemented this sort of a load-reduction 
solution with a reduced number of active Apache servers in my last 
consultancy contract, and it worked very well.
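
On the mod_perl side, the persistent connections usually amount to little more
than loading Apache::DBI before DBI at server startup. A minimal sketch of a
startup.pl (file locations and connection details are made up):

# pulled in from httpd.conf with:  PerlRequire /etc/httpd/conf/startup.pl
use Apache::DBI ();   # load before DBI so it can cache connections
use DBI ();

# Optionally open the connection as soon as each Apache child starts,
# rather than on the first request it serves.
Apache::DBI->connect_on_init("dbi:Pg:dbname=shop", "", "",
                             { RaiseError => 1, AutoCommit => 1 });

1;

With Apache::DBI loaded, DBI->connect calls with the same arguments hand back
the cached per-child handle and $dbh->disconnect becomes a no-op, so the CGI
scripts themselves don't need changing.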

> By the way, I fully agree with throwing away NT --

Understandable...

> unfortunately this DB is part of their inventory and
> POS system that another company set up for them. They just finished
> investing about $200K in the system, so just telling them to get rid of
> it isn't quite an answer. Thanks for all of the input from everyone.

Hang on - if they have just invested $200K into this "solution", then why 
exactly is an additional $5,500 for another licence a problem all of a 
sudden???

Regards.

Gordan

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] PG mailing list problems (was Re: Support issues)

2001-10-15 Thread Culley Harrelson

I have the exact opposite problem -- I keep turning off direct mailing for
pgsql-general (with successful confirmation) and it keeps sending me mail
anyway! haha

culley

--- David Link <[EMAIL PROTECTED]> wrote:
> Jim Caley wrote:
> > 
> > Has anyone heard any more about this problem?  I also haven't gotten
> anything
> > from pgsql-general since Oct. 1.  The other lists seem to be working
> fine for
> > me.
> 
> I am still only getting a few of the pgsql-general postings.  I know
> this by checking the archives.
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] kinda newbie - ish question

2001-10-15 Thread Richard Lynch

http://php.net/pg_getlastoid

This is the *ONLY* time you should actually use OIDs -- To get the unique
value you just inserted.

--
WARNING [EMAIL PROTECTED] address is an endangered species -- Use
[EMAIL PROTECTED]
Wanna help me out?  Like Music?  Buy a CD: http://l-i-e.com/artists.htm
Volunteer a little time: http://chatmusic.com/volunteer.htm
- Original Message -
From: Mike Judkins <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, October 08, 2001 9:31 PM
Subject: [GENERAL] kinda newbie - ish question


> hi all,
>
> I'm trying to insert a record with a PHP script. I insert a NULL value
> to get my auto increment unique key to automatically populate as
> usual. Then I want to be able to insert another value in this same row
> which is a URL. This URL is basically a path plus a filename which I
> want to have the exact same name as the unique key that was just
> generated for the row. Is there a quick way in SQL to do this or will
> I have to handle it in my script?
>
> Thanks for helping!
>
> Mike
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [GENERAL] Newbie

2001-10-15 Thread Henshall, Stuart - WCP

Did you use speech marks (double quotes) around the column names when
creating the tables? If so, that makes them case sensitive and you'll always
need them (unless they're all lower case).
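
For example (untested; shown through DBI, but only the quoting of the
identifiers matters here):

use DBI;
my $dbh = DBI->connect("dbi:Pg:dbname=test");

# created with quoted, mixed-case names:
$dbh->do(q{CREATE TABLE "Jobs" ("JobID" int4, "JobName" varchar)});

# unquoted identifiers are folded to lower case, so the parser goes looking
# for a relation named jobs and a column named jobname:
$dbh->selectall_arrayref(q{SELECT JobName FROM Jobs});       # fails
$dbh->selectall_arrayref(q{SELECT "JobName" FROM "Jobs"});   # works
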
- Stuart

> -Original Message-
> From: William Winter [SMTP:[EMAIL PROTECTED]]
> Sent: Friday, October 12, 2001 4:29 AM
> To:   [EMAIL PROTECTED]
> Subject:  [GENERAL] Newbie
> 
> I'm Having a hard time with this statement:
> 
> Select Jobs.*, ProjectedDates.* FROM Jobs, ProjectedDates WHERE (Plotted =
> False) And (JobName = 'Boulder Valley') AND (BuilderName = 'Elliott
> Homes')
> AND (Jobs.JobID = ProjectedDates.JobID)
> 
> Can anyone see why? the record and field names certainly do exist...any
> help
> appreciated!
> 
> 
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://archives.postgresql.org

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] error codes when running pg_dumpall from a perl script.

2001-10-15 Thread Simon Crute

Hi,
  If this was better posed in a different postgres group, please let me
know.

  I'm having problems when running pg_dumpall from a perl script.

Here's the line that's executing the backup.

`pg_dumpall -o |bzip2 -z -c > $os_bkup_loc/byt_pgdump_full.bz2` or warn
"Error while running probably all OK. $! \n";

During the run the following error is reported

"Error while running probably all OK.. Illegal seek "
i.e. the error code reported by $! is "Illegal seek"

I've run a backup manually, and there are no error messages.
When running the backup from within a perl script (but not sending the output
through bzip2), the error still occurs, but diffing the output of that with
the output of pg_dumpall -o when run from the command line shows only two
minor differences (which seem to be within some object IDs). I can post
that diff if anyone thinks it may help.

the environment variables PGUSER and PGPASSWORD are set earlier in the
script.
I've searched the archives of the mail lists, and found no references to
illegal seeks WRT running pg_dump.

Anyone got any clues ?

--
Simon Crute
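
For what it's worth, the warning fires because the command's stdout is
redirected to a file, so the backticks return an empty (false) string even on
success; and backticks only set $?, the child's exit status, while $! just
holds whatever errno the last failed library call left behind (hence the stray
"Illegal seek"). An untested alternative that checks the pipeline's real exit
status:

my $rc = system("pg_dumpall -o | bzip2 -zc > $os_bkup_loc/byt_pgdump_full.bz2");
# the shell reports the status of the last command in the pipeline (bzip2)
warn "backup pipeline exited with status ", $rc >> 8, "\n" if $rc != 0;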




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[GENERAL] inserting data into multiple tables

2001-10-15 Thread Tony Grant

Hello,

I have a table "refers_to"

individual_id int4
book_id int4

and another "individual"

individual_id int4 nextval('individual_serial')
name varchar
forename varchar

If the individual exists then get his id. If he doesn't already exist,
write his name to the table and get the newly created individual_id.

Then I thought of using a function to write the book_id and the
individual_id to refers_to, roughly as sketched below.
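
Untested sketch (table, column and sequence names as above; the database name
is made up, and it assumes plpgsql has been added to the database with
createlang). Shown here through Perl DBI, but any client or psql can run the
same CREATE FUNCTION:

use DBI;
my $dbh = DBI->connect("dbi:Pg:dbname=books", "", "", { RaiseError => 1 });

$dbh->do(<<'SQL');
CREATE FUNCTION add_reference(varchar, varchar, int4) RETURNS int4 AS '
DECLARE
    p_name     ALIAS FOR $1;
    p_forename ALIAS FOR $2;
    p_book     ALIAS FOR $3;
    v_id       int4;
BEGIN
    SELECT individual_id INTO v_id
      FROM individual
     WHERE name = p_name AND forename = p_forename;
    IF NOT FOUND THEN
        INSERT INTO individual (name, forename) VALUES (p_name, p_forename);
        v_id := currval(''individual_serial'');
    END IF;
    INSERT INTO refers_to (individual_id, book_id) VALUES (v_id, p_book);
    RETURN v_id;
END;
' LANGUAGE 'plpgsql';
SQL

# then, per book/individual pair:
# SELECT add_reference('name', 'forename', book_id);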

QUESTIONS

I am using JSP, so the ideal seems to be stored functions in PL/pgSQL?

What is the best order to do this in, to avoid a performance hit?

TIA

Tony Grant

-- 
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] VACUUM, 24/7 availability and 7.2

2001-10-15 Thread Joel Burton

On Wed, 10 Oct 2001, Ian Barwick wrote:

> I'm doing some work for a smallish company which conducts
> its business largely online. Currently they have a legacy
> mishmash of Oracle and MySQL databases which they wish
> to unify one one platform (RDBMS with client access via
> browser and custom serverside applications for employees
> and customers).
>
> PostgreSQL would be my primary candidate. However the company's
> operating requirments mean that the data needed for interaction
> with customers / website users must be available on a 24/7 basis.
> This is primarily a) data related to product ordering and
> tables for storing order data; and b) website user authentication
> and personalisation data (logins, user preferences etc).
>
> It is therefore not an option to have these databases offline
> at regular intervals for any significant length of time for
> VACUUMing. Replicating data to say MySQL databases is
> technically feasible, at least in the case of b) above, but
> not desirable. Are there any existing "native" PostgreSQL solutions
> to this problem?
>
> More importantly, what is the situation on VACUUM for release 7.2?
> It seems from the pgsql-hackers list that there are plans for
> a non-exclusively locking VACUUM, e.g.:
>
> 
>http://groups.google.com/groups?q=vacuum&hl=en&group=comp.databases.postgresql.hackers&rnum=1&selm=12833.990140724%40sss.pgh.pa.us
>
> (sorry about the long URL); how far advanced are they, and is
> there any kind of release schedule for 7.2?
>
> Any answers (or pointers thereto, haven't found any myself :-()
> much appreciated

There is a faster, non-exclusive-locking VACUUM in the CVS now; this
should become part of 7.2. You can download the nightly snapshot and build
it to test it with your application.

HTH.



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] writing & flushing C extensions

2001-10-15 Thread John P. Looney

 I'm trying to write a few functions in C for postgres. But, when I change
some code, I'm never sure that it'll be flushed when I do a:

 DROP FUNCTION test();
 CREATE FUNCTION test() RETURNS text
     AS '/var/lib/pgsql/functions/test.so' LANGUAGE 'c';

 Is there a way to make sure ? Even restarting postgres doesn't always
work (or else my makefiles are broken...)

Kate

-- 
___
John Looney Chief Scientist
a n t e f a c t o t: +353 1 8586004
www.antefacto.com f: +353 1 8586014


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[GENERAL] Newbie

2001-10-15 Thread William Winter

I'm having a hard time with this statement:

Select Jobs.*, ProjectedDates.* FROM Jobs, ProjectedDates WHERE (Plotted =
False) And (JobName = 'Boulder Valley') AND (BuilderName = 'Elliott Homes')
AND (Jobs.JobID = ProjectedDates.JobID)

Can anyone see why? The record and field names certainly do exist... any
help appreciated!



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [GENERAL] Any Good Way To Do Sync DB's?

2001-10-15 Thread Joseph Koenig

Your solution sounds very interesting (Not the throw away NT
part...)...does anyone else have any input on this? Would it work well?
Any idea as to what amount of traffic it would be capable of handling?
If apache is only running in two instances, would that really keep the
number of ODBC connections to 2 at max? By the way, I fully agree with
throwing away NT -- unfortunately this DB is part of their inventory and
POS system that another company set up for them. They just finished
investing about $200K in the system, so just telling them to get rid of
it isn't quite an answer. Thanks for all of the input from everyone.

Joe

Gordan Bobic wrote:
> 
> On Fri, 12 Oct 2001, Joseph Koenig wrote:
> 
> > I have a project where a client has products stored in a large Progress
> > DB on an NT server. The web server is a FreeBSD box though, and the
> > client wants to try to avoid the $5,500 license for the Unlimited
> > Connections via OpenLink software and would like to take advantage of
> > the 'free' non-expiring 2 connection (concurrent) license. This wouldn't
> > be a huge problem, but the DB can easily reach 1 million records. Is
> > there any good way to pull this data out of Progess and get it into
> > Postgres? This is way too large of a db to do a "SELECT * FROM table"
> > and do an insert for each row. Any brilliant ideas? Thanks,
> 
> Yes. Throw away NT, and replace it with Linux or FreeBSD. They are free,
> they are much more stable, and you will find that PostgreSQL on Linux is
> likely to utterly blow away PostgreSQL on NT. IIRC, something to do with
> the way NT handles forking and threads. Allegedly, this is particularly
> noticeable in a heavy multi-user environment, where lots of connections
> are spawned and killed.
> 
> Another thing you could try - Apache persistent DBI connections using
> mod_perl. Set up Apache to use mod_perl and the persistent DBI database
> connections through it. Then limit the number of Apache instances to 2
> (i.e. up to 2 simultaneous connections). Then put an accelerating
> squid proxy in front of the Apache web server, so that it can handle
> static objects without wasting precious Apache back-ends.
> 
> All this seems like a hugely complicated solution when you could just
> replace NT with Linux or FreeBSD and have a cheaper and MUCH faster
> solution, not to mention more stable.
> 
> Just my $0.02
> 
> Regards.
> 
> Gordan
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] lztext

2001-10-15 Thread t . ngockhoi




I like lztext because it makes for a fast database with some big text fields.
I don't know why it was removed in the latest release.
Can anybody help me to get and use it?

Ngoc-Khoi TO.




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Problem in installing Postgresql-7.1.2

2001-10-15 Thread Thirumoorthy Bhuvneswari

Hi,
I am new to this mailing list. I have
PostgreSQL 7.0.2 installed on RedHat Linux 6.2. I
wanted to install PostgreSQL 7.1.2, since it supports
RedHat Linux 6.2, and downloaded the RPMs. While
installing, it ended up with the following error:
"Failed dependencies:
libpg.so.2, libcrypto.so.0, libssl.so.0".
I don't know where I am wrong. Please help me.
Thanks in advance.
bhuvaneswari.t.



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[GENERAL] how to use cursors in plpgsql (7.2)???

2001-10-15 Thread hubert depesz lubaczewski

hi
can anybody post a short example on how to use cursors (cursor-returning
functions, to be exact) from postgresql 7.2?

depesz

-- 
hubert depesz lubaczewski  http://www.depesz.pl/

... vows are spoken to be broken ... [enjoy the silence]
... words are meaningless and forgettable ... [depeche mode]
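
For reference, 7.2's plpgsql cursors work through refcursor variables: the
function opens a cursor, returns its name, and the caller FETCHes from it
within the same transaction. An untested sketch with made-up names, client
side shown through Perl DBI:

use DBI;
my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                       { RaiseError => 1, AutoCommit => 0 });

# one-time setup: a function that opens a cursor and hands back its name
$dbh->do(<<'SQL');
CREATE FUNCTION get_users(refcursor) RETURNS refcursor AS '
BEGIN
    OPEN $1 FOR SELECT login, created FROM users ORDER BY login;
    RETURN $1;
END;
' LANGUAGE 'plpgsql';
SQL

# the cursor only lives until the end of the transaction, so call the
# function and fetch from it inside the same one
$dbh->selectrow_array("SELECT get_users('usercursor')");
my $rows = $dbh->selectall_arrayref("FETCH ALL IN usercursor");
print "$_->[0]\t$_->[1]\n" for @$rows;
$dbh->commit;
$dbh->disconnect;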

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] VACUUM, 24/7 availability and 7.2

2001-10-15 Thread Denis Gasparin


> >> More importantly, what is the situation on VACUUM for release 7.2?
> >> It seems from the pgsql-hackers list that there are plans for
> >> a non-exclusively locking VACUUM, e.g.:
> >>
> >> 
> 
>http://groups.google.com/groups?q=vacuum&hl=en&group=comp.databases.postgresql.hackers&rnum=1&selm=12833.990140724%40sss.pgh.pa.us
> >>
> >> (sorry about the long URL); how far advanced are they,
>
>It's long since done.

==>> Does this mean that it will not be included in 7.2? I've read the 7.2 
documentation online and I've seen that the VACUUM command has changed: 
now, when run in normal mode (issuing the VACUUM command to the backend 
without any parameters), the tables don't need to be locked, and the 
command does not minimize the space used by the database (as the current 
7.1.3 VACUUM does). From the documentation:

-
Plain VACUUM (without FULL) simply reclaims space and makes it available 
for re-use. This form of the command can operate in parallel with normal 
reading and writing of the table. VACUUM FULL does more extensive 
processing, including moving of tuples across blocks to try to compact the 
table to the minimum number of disk blocks. This form is much slower and 
requires an exclusive lock on each table while it is being processed.
-

This behaviour is similar (but not identical) to the LAZY VACUUM described by 
Tom Lane in the above link...

In conclusion, will the new VACUUM command as described above be included in 
the 7.2 version of PostgreSQL?

Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html




[GENERAL] lo_ and SU privs

2001-10-15 Thread Bell John

Under PostgreSQL 7.1.3 (RH Linux 7.1 platform) I get
the following whenever trying to manipulate large
objects either from psql or sql oriented API's (eg.
PoPy, psycopg, pypgsql (DPAPI kit)):

You must have Postgres superuser privilege to use
server-side lo_import.  Anyone can use the client-side
lo_import provided by libpq.

Is there any way to disable this behaviour or to give
a user the privileges that are being looked for? This
effectively cripples the vast majority of programmatic
interfaces to PostgreSQL, i.e. all those that are more
than wrappers around libpq.

John


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]