Re: [GENERAL] Removing OIDs without recreate

2004-05-07 Thread Együd Csaba
cygwin/7.3.4
The docs contain the option, but it doesn't work - at least not for me.

--Csaba

> -Original Message-
> From: Karel Zak [mailto:[EMAIL PROTECTED]
> Sent: 2004. május 7. 13:48
> To: Együd Csaba
> Cc: '[EMAIL PROTECTED] (E-mail)'
> Subject: Re: [GENERAL] Removing OIDs without recreate
>
>
> On Fri, May 07, 2004 at 01:13:52PM +0200, Együd Csaba wrote:
> > Using the 'ALTER TABLE tablename SET WITHOUT OIDS;' command
> > raises an error saying
> > 'ERROR:  parser: parse error at or near "set" at character 23'
> > What is wrong with it?
>
>  Which PostgreSQL version do you have?
>
> Karel
>
> --
>  Karel Zak  <[EMAIL PROTECTED]>
>  http://home.zf.jcu.cz/~zakkr/






Re: [GENERAL] Removing OIDs without recreate

2004-05-07 Thread 'Karel Zak'
On Fri, May 07, 2004 at 01:59:39PM +0200, Együd Csaba wrote:
> cygwin/7.3.4
> The docs contain the option, but it doesn't work - at least not for me.

 http://www.postgresql.org/docs/7.3/static/sql-altertable.html

 The ALTER TABLE ... SET WITHOUT OIDS syntax is new in 7.4.

 Release 7.4 notes:

ALTER TABLE ... WITHOUT OIDS (Rod)
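
 A minimal sketch of the 7.4 syntax, for reference (the table name is just
 an example):

     -- 7.4-era syntax; 7.3 predates it, hence the parse error above
     CREATE TABLE demo (id int4) WITH OIDS;
     ALTER TABLE demo SET WITHOUT OIDS;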

-- 
 Karel Zak  <[EMAIL PROTECTED]>
 http://home.zf.jcu.cz/~zakkr/



Re: [GENERAL] Removing OIDs without recreate

2004-05-07 Thread Együd Csaba
Sorry, I made a mistake: I was looking at the 7.4 docs, because I use
another server running that version.
So this won't work on 7.3.*.
Can I drop the OID column painlessly? Does it lead to any problems?
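
For what it's worth, the fallback on 7.3 seems to be the rebuild this
thread set out to avoid - a sketch, untested, with example table and
column names:

    BEGIN;
    CREATE TABLE t_new (id int4, val text) WITHOUT OIDS;  -- same columns as t
    INSERT INTO t_new SELECT id, val FROM t;
    DROP TABLE t;        -- indexes and constraints must be recreated on t_new
    ALTER TABLE t_new RENAME TO t;
    COMMIT;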

Thank you,
-- Csaba

> -Original Message-
> From: 'Karel Zak' [mailto:[EMAIL PROTECTED]
> Sent: 2004. május 7. 14:06
> To: Együd Csaba
> Cc: '[EMAIL PROTECTED] (E-mail)'
> Subject: Re: [GENERAL] Removing OIDs without recreate
>
>
> On Fri, May 07, 2004 at 01:59:39PM +0200, Együd Csaba wrote:
> > cygwin/7.3.4
> The docs contain the option, but it doesn't work - at least not for me.
>
>  http://www.postgresql.org/docs/7.3/static/sql-altertable.html
>
>  The ALTER TABLE ... SET WITHOUT OIDS syntax is new in 7.4.
>
>  Release 7.4 notes:
>
> ALTER TABLE ... WITHOUT OIDS (Rod)
>
> --
>  Karel Zak  <[EMAIL PROTECTED]>
>  http://home.zf.jcu.cz/~zakkr/






Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP

2004-05-07 Thread Lonni Friedman
Thanks for your reply.  I thought (perhaps erroneously) that there
wasn't any real difference between dropping an index and then recreating
it, and just reindexing it?

On Thu, 06 May 2004 23:00:25 +0200, Denis Braekhus <[EMAIL PROTECTED]> wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Lonni Friedman wrote:
> |>>Then nothing for a month after going to 7.3.4, and now its happening
> |>>every vacuumdb run again.
> |>Once the complaint starts appearing, I'd expect it to continue until you
> |>reindex the index.
> | That's exactly what happens.  It consistently errors until reindexed.
> | Any suggestions?  thanks.
> 
> A long shot, but we experienced a similar kind of issue, though our
> dataset was a bit different (not so many large objects).
> After having reindexed a couple of times with about the same results
> (working fine for some time, then complaining again), I dropped the
> index and recreated it. That seemed to fix the issue at our
> installation; we had no such problems after that. It ran for some months
> before I took the plunge and moved to 7.4.
> 
> This might not work for you at all, though, and since removing the index
> will probably kill your performance for a while, I cannot promise you a
> fix with this workaround.
> 
> Best Regards
> - --
> Denis



Re: [GENERAL] Preventing work with stale data in multi-user operation

2004-05-07 Thread Bastian
Hello Uwe,

thanks for your detailed explanation. That is the solution to my problem.

Regards

Bastian


[EMAIL PROTECTED] ("Uwe C. Schroeder") wrote in message news:<[EMAIL PROTECTED]>...
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> 
> Bastian,
> 
> why not build the table like this:
> 
> create table jadajada (
>   internal_number serial primary key,
>   nr int4,
>   testdaten varchar(254)
> );
> 
> The internal_number is then assigned automatically by postgres. Then you
> can shuffle your numbers around however you like; the internal number
> will never change. You use the internal number to drive the delete,
> update, etc., along the lines of:
> 
> delete from jadajada where internal_number=12345;
> 
> The "Nr." is demoted to ordinary data that can be changed at will.
> Naturally, you don't display the internal number in your application;
> it would only confuse the user.
> 
> Uwe
>



[GENERAL] My database hurts "ERROR: duplicate key violates unique constraint"

2004-05-07 Thread Mike Hunter
(Please CC me on any replies as I'm not on the list)

Hi,

After a recent power failure, a program that uses a pgsql backend
(netdisco) started to send me nastygrams.  I tried the author's suggestion
of running VACUUM FULL ANALYZE VERBOSE, but it still sends me the
messages.  The data in the database isn't too important, i.e. I'm willing
to blow away data to try to fix the problem.  Any suggestions?

Thanks for your help and thanks on behalf of my whole department for
postgresql.

Mike

- Forwarded message from Max Baker <[EMAIL PROTECTED]> -

Date: Sat, 24 Apr 2004 13:53:16 -0400
From: Max Baker <[EMAIL PROTECTED]>
To: Mike Hunter <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]
Subject: Re: [Netdisco] pgsql error messages
In-Reply-To: <[EMAIL PROTECTED]>
User-Agent: Mutt/1.4.2i

Mike,

The first thing I would try is the "Things are getting really slow"
procedure in the README for the database.


After that I would end up posting the same question on one of the postgres
mailing lists.  

-m
On Thu, Apr 15, 2004 at 04:02:54PM -0700, Mike Hunter wrote:
> Hey everybody,
> 
> I can't really blame netdisco for this one; we had a power outage and this
> machine went down ungracefully.  Now cron sends me these notes, but
> netdisco still seems to function.  
> 
> DBD::Pg::db do failed: ERROR:  duplicate key violates unique constraint "node_pkey" 
> at /usr/local/netdisco/netdisco.pm line 1076.
> DBD::Pg::db do failed: ERROR:  duplicate key violates unique constraint "node_pkey" 
> at /usr/local/netdisco/netdisco.pm line 967.
> insert_or_update(UPDATE node SET switch='128.32.11.11',active='1',time_last='Thu Apr 
> 15 14:06:05 2004',mac='00:03:93:48:e8:f8',port='FastEthernet1/2',oui='00:03:93' 
> WHERE switch = '128.32.11.11'  AND port = 'FastEthernet1/2'  AND mac = 
> '00:03:93:48:e8:f8' ) ERROR:  duplicate key violates unique constraint "node_pkey"
>  at /usr/local/netdisco/netdisco line 1810
> DBD::Pg::db do failed: ERROR:  duplicate key violates unique constraint "node_pkey" 
> at /usr/local/netdisco/netdisco.pm line 1076.
> DBD::Pg::db do failed: ERROR:  duplicate key violates unique constraint "node_pkey" 
> at /usr/local/netdisco/netdisco.pm line 967.
>  at /usr/local/netdisco/netdisco line 1810
> 
> I'm gonna try rediscovering the device I suppose...any other advice?
> 
> Mike

- End forwarded message -



Re: [GENERAL] My database hurts "ERROR: duplicate key violates unique constraint"

2004-05-07 Thread Tom Lane
Mike Hunter <[EMAIL PROTECTED]> writes:
> After a recent power failure, a program that uses a pgsql backend
> (netdisco) started to send me nastygrams.

Hmm.  Try REINDEX on the index involved.
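
For instance - the key columns below are only inferred from the UPDATE in
the forwarded error log, so treat the column list as an assumption:

    REINDEX INDEX node_pkey;
    -- if the crash left genuinely duplicated rows behind, find them first:
    SELECT switch, port, mac, count(*)
      FROM node
     GROUP BY switch, port, mac
    HAVING count(*) > 1;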

regards, tom lane



Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes:
> I guess the activity just totally outran the ability of autovac to keep up.  

Could you have been bit by autovac's bug with misreading '3e6' as '3'?
If you don't have a recent version it's likely to fail to vacuum large
tables often enough.

regards, tom lane



Re: [GENERAL] How can I do conditional 'drop table' in Postgres

2004-05-07 Thread Shachar Shemesh
Dragan Matic wrote:

if exists (select * from sysobjects where id = 
object_id(N'[dbo].[pp_fisk]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[pp_fisk]
GO

For instance, this is a valid script in MS SQL; it will drop the table 
pp_fisk only if it exists. Is there a way to do something similar in 
Postgres? Tnx in advance.

Dragan
I'm working from memory here, so exact syntax might be incorrect.

perform 1 from pg_class where relname = 'pp_fisk';
if found then
    execute 'drop table pp_fisk';
end if;
This applies to plpgsql only. Note, however, that if you are going to be 
dropping and recreating tables from this function, all queries touching 
the table must go through "execute". Otherwise the table's OID gets 
cached the first time code referencing the table (in "from") is run, and 
subsequent runs in the same session will not find the table (even if you 
create a new table with the same name). If that is a problem for you, 
consider replacing "drop table" with "delete from table", which deletes 
all rows from the table but leaves the table itself in place.
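
A fuller sketch as a function - untested, and it assumes plpgsql has 
already been installed with createlang:

    create or replace function drop_table_if_exists(text) returns boolean as '
    begin
        perform 1 from pg_class where relname = $1 and relkind = ''r'';
        if found then
            execute ''drop table '' || quote_ident($1);
            return true;
        end if;
        return false;
    end;
    ' language 'plpgsql';

    select drop_table_if_exists('pp_fisk');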

Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting
http://www.lingnu.com/


Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Ed L.
On Friday May 7 2004 9:09, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > I guess the activity just totally outran the ability of autovac to keep
> > up.
>
> Could you have been bit by autovac's bug with misreading '3e6' as '3'?
> If you don't have a recent version it's likely to fail to vacuum large
> tables often enough.

No, our autovac logs the number of changes (upd+del for vac, upd+ins+del for 
analyze) on each round of checks, and we can see it was routinely 
performing when expected.  The number of updates/deletes just far exceeded 
the thresholds.  Vac threshold was 2000, and at times there might be 
300,000 outstanding changes in the 10-30 minutes between vacuums.

Given the gradual performance degradation we saw over a period of days if 
not weeks, and the extremely high numbers of unused tuples, I'm wondering 
if there is something like a data fragmentation problem occurring in which 
we're having to read many many disk pages to get just a few tuples off each 
page?  This cluster has 3 databases (2 nearly idle) with a total of 600 
tables (about 300 in the active database).  Gzipped dumps are 1.7GB.  
max_fsm_relations = 1000 and max_fsm_pages = 1.  The pattern of ops is 
a continuous stream of inserts, sequential scan selects, and deletes.




[GENERAL] any experience with multithreaded pg apps?

2004-05-07 Thread Mark Harrison
I'm looking for feedback from anybody who has used pg in a
multi-threaded program, particularly one in which several
threads each open a database connection.
It's documented to work in that scenario, but I'm interested
in anybody who can share some real-world experience with that.
Many TIA!
Mark
--
Mark Harrison
Pixar Animation Studios


[GENERAL] Storing a file hash as primary key

2004-05-07 Thread Eduardo Pérez Ureta
I was wondering what the best way is to store a file hash (MD5 or SHA1)
and make it primary key indexed.
I have seen some people store the hexadecimal-encoded MD5 in a
CHAR(32), but it may be a better idea to use a CHAR(16) holding the
unencoded bytes, though that may cause some problems.

What do you recommend?
Do you have any experiences storing file hashes in a database?
Do you know any good opensource software that stores file hashes in the
database (to take a look)?



Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes:
> No, our autovac logs the number of changes (upd+del for vac, upd+ins+del for 
> analyze) on each round of checks, and we can see it was routinely 
> performing when expected.  The number of updates/deletes just far exceeded 
> the thresholds.  Vac threshold was 2000, and at times there might be 
> 300,000 outstanding changes in the 10-30 minutes between vacuums.

Well, in that case you probably want a lot less than "10-30 minutes"
between vacuums, at least for this particular table.  I don't know how
one configures autovac for this, but I suppose it can be done ...

> max_fsm_relations = 1000 and max_fsm_pages = 1.

Also you doubtless need max_fsm_pages a lot higher than that.  A
conservative setting would make it as big as your whole database,
eg for a 10Gb disk footprint use 10Gb/8K (something upwards of
a million) FSM page slots.
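
To see roughly how many pages your current database occupies, something
like this should do (a rough sketch; it counts tables, indexes and toast
tables in the current database):

    SELECT sum(relpages) AS pages,
           sum(relpages) * 8 / 1024 AS approx_mb
    FROM pg_class;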

regards, tom lane



Re: [GENERAL] any experience with multithreaded pg apps?

2004-05-07 Thread Bill Moran
Mark Harrison wrote:
I'm looking for feedback from anybody who has used pg in a
multi-threaded program, particularly one in which several
threads each open a database connection.
It's documented to work in that scenario, but I'm interested
in anybody who can share some real-world experience with that.
I've done some real-world work like this.

The application is a SOAP server written in C using the gsoap library to
query a PostgreSQL database.  The nature of the server is that individual
queries are very fast (generally less than .1 sec) and multiple queries
in rapid succession are likely.  Multiple simultaneous queries are also
possible, but unlikely.
I built the app thusly:
The bugger fires up and opens a connection to Postgres, and a listening
socket.  A thread mutex is used to get exclusive access to the Postgres
database connection.  When a connection arrives, an attempt is made to
lock the mutex, if it succeeds, a thread is fired off to handle that
connection and the loop returns to wait for additional connections.  When
the handling of the connection is complete, the mutex is released and the
thread ends.
If a new connection arrives and the mutex is already held by a previous
thread, a new connection is established and a flag is set to alert the
thread that it should close the connection to Postgres upon completion.
When the thread completes, it closes the connection instead of releasing
the mutex.
The results meet pretty much what we wanted:
1) Processing of a single connection is _very_ fast, since the connection
   to Postgres is already established, and the query and processing of the
   results occur very quickly.
2) The daemon _can_ handle multiple simultaneous connections.  Performance
   degrades a bit (in my tests, it's about half as fast when it has to
   establish a connection prior to the query).  Since the nature of the
   application lends itself more to rapid sequential queries from a single
   host than to multiple simultaneous queries, this is acceptable for us.
We've considered moving this to a pre-threaded model, but so far there hasn't
been a need, and there is other work to do :)  I do have another SOAP app that
does pre-forking and pre-establishes database connections, but that's not
threaded so it doesn't really apply to your question.
Reliability and performance have been good in this application.  Haven't had a
lick of trouble with Postgres or the client libraries.  We developed this
initially on 7.3, and it now runs on 7.4.
Hope this helps.

--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Ed L.
On Friday May 7 2004 11:25, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > No, our autovac logs the number of changes (upd+del for vac,
> > upd+ins+del for analyze) on each round of checks, and we can see it was
> > routinely performing when expected.  The number of updates/deletes just
> > far exceeded the thresholds.  Vac threshold was 2000, and at times
> > there might be 300,000 outstanding changes in the 10-30 minutes between
> > vacuums.
>
> Well, in that case you probably want a lot less than "10-30 minutes"
> between vacuums, at least for this particular table.  I don't know how
> one configures autovac for this, but I suppose it can be done ...

This period is the minimum time it takes to vacuum or analyze every table 
that "needs it" in round-robin fashion.  Sometimes it is much shorter 
(seconds), sometimes longer, depending on how much upd/del/ins activity 
there has been.  That seems too long/slow.

> > max_fsm_relations = 1000 and max_fsm_pages = 1.
>
> Also you doubtless need max_fsm_pages a lot higher than that.  A
> conservative setting would make it as big as your whole database,
> eg for a 10Gb disk footprint use 10Gb/8K (something upwards of
> a million) FSM page slots.

Ah, OK.  Two questions:

1)  I'm inclined to set this to handle as large a DB footprint as will be in 
the coming year or two, so maybe 3X what it is now.  What is the 
impact/cost of setting max_fsm_pages at, say, 3M for an 8GB footprint?  (3 
x 8GB/8K)

2)  Would this low setting of 1 explain the behavior we saw of seqscans 
of a perfectly analyzed table with 1000 rows requiring ridiculous amounts 
of time even after we cut off the I/O load?





Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Ed L.
On Friday May 7 2004 12:23, Ed L. wrote:
> On Friday May 7 2004 12:20, Ed L. wrote:
> > 1)  I'm inclined to set this to handle as large a DB footprint as will
> > be in the coming year or two, so maybe 3X what it is now.  What is the
> > impact/cost of setting max_fsm_pages at, say, 3M for an 8GB footprint?
> > (3 x 8GB/8K)
>
> Ok, so I see 40B per, so setting it to 3M ==> 3M * 40 = 120MB of
> additional RAM usage for this?  Any other impacts with which to be
> concerned?

Sorry, I see that's *6B* per, so setting it to 3M ==> 18MB, which is trivial 
for the benefit.  Any other concerns in setting this too high?





Re: [GENERAL] any experience with multithreaded pg apps?

2004-05-07 Thread Jeff
On May 7, 2004, at 12:43 PM, Mark Harrison wrote:

I'm looking for feedback from anybody who has used pg in a
multi-threaded program, particularly one in which several
threads each open a database connection.
It's documented to work in that scenario, but I'm interested
in anybody who can share some real-world experience with that.
Works great over here.
Our app sits around doing 50-150 requests/sec constantly and spends 
most of its time sitting there with a blank look on its face.

Just make sure you take the normal precautions you need to take with 
threaded apps (no strtok, etc.)

--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/


Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes:
> Sorry, I see that's *6B* per, so setting it to 3M ==> 18MB, which is trivial 
> for the benefit.  Any other concerns in setting this too high?

Not that I know of.

regards, tom lane



Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes:
> 2)  Would this low setting of 1 explain the behavior we saw of seqscans 
> of a perfectly analyzed table with 1000 rows requiring ridiculous amounts 
> of time even after we cut off the I/O load?

Possibly.  The undersized setting would cause leakage of disk space
(that is, new rows get appended to the end of the table even when space
is available within the table, because the system has "forgotten" about
that space due to lack of FSM slots to remember it in).  If the physical
size of the table file gets large enough, seqscans will take a long time
no matter how few live rows there are.  I don't recall now whether your
VACUUM VERBOSE results showed that the physical table size (number of
pages) was out of proportion to the actual number of live rows.  But it
sure sounds like that might have been the problem.
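
An easy way to check, substituting your table name:

    VACUUM VERBOSE yourtable;
    -- or look at the stats the last VACUUM/ANALYZE left behind:
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'yourtable';

A relpages figure wildly out of proportion to reltuples means the table
file has bloated.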

regards, tom lane



Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Jeff Boes
At some point in time, [EMAIL PROTECTED] (Tom Lane) wrote:

>
>> max_fsm_relations = 1000 and max_fsm_pages = 1.
>
>Also you doubtless need max_fsm_pages a lot higher than that.  A
>conservative setting would make it as big as your whole database,
>eg for a 10Gb disk footprint use 10Gb/8K (something upwards of
>a million) FSM page slots.

At some point, someone was going to write a "white paper" detailing how one
might go about setting these parameters. If that someone has done so, I'd love
to hear about it. If that someone hasn't ... well, how much beer would we have
to provide to get you to talk?  8-)

-- 
Jeff Boes  vox 269.226.9550 ext 24
Database Engineer fax 269.349.9076
Nexcerpt, Inc. http://www.nexcerpt.com
  ...Nexcerpt... Extend your Expertise



Re: [GENERAL] any experience with multithreaded pg apps?

2004-05-07 Thread Ben
Mark Harrison wrote:
> I'm looking for feedback from anybody who has used pg in a
> multi-threaded program, particularly one in which several
> threads each open a database connection.
> 
> It's documented to work in that scenario, but I'm interested
> in anybody who can share some real-world experience with that.

We've wrapped the libpq commands inside some code that makes use of a glib
thread pool. On startup, we have a bunch of threads make persistent
connections to the database. The rest of our app simply has to push
queries onto this pool, and the first available thread will make use of
its persistent connection to return the results as fast as possible. If
all threads are busy working on queries, our wrapper code takes care of
the queuing for us.

It's a scalable solution that we can make as complex as we want. (For
instance, if the database crashes, each thread should be able to
re-establish its connection.) It works pretty well. The only drawback is
that the application can't make use of the threadpool for multi-query
transactions, but we've wrapped all of those in stored procs anyway.




Re: [GENERAL] any experience with multithreaded pg apps?

2004-05-07 Thread Mike Mascari
Mark Harrison wrote:

I'm looking for feedback from anybody who has used pg in a
multi-threaded program, particularly one in which several
threads each open a database connection.
It's documented to work in that scenario, but I'm interested
in anybody who can share some real-world experience with that.
It works fine so long as synchronization is guaranteed. We use CORBA 
(omniORB) which presents problems in the face of a traditional 
factory pattern where each client gets a server-side, multi-threaded 
object. For example, we want the following characteristics:

1) Single app server process - multi-threaded access
2) Single-threaded access to each session object
3) Garbage collection of server threads
4) Garbage collection of server objects
5) Single transactions across multiple RPCs
#1 comes automatically with CORBA unless you use the 
SINGLE_THREAD_POLICY for the POA in which the object is created.

#2 does not come automatically with CORBA unless you use the 
SINGLE_THREAD_POLICY and create each session object in its own POA, 
which is frowned upon. Therefore, all exposed API calls must be 
wrapped in code which synchronizes on a thread-lock attribute of the 
session object.

#3 happens automatically, depending upon ORB configuration. This 
means that you cannot simply use TLS for access to the libpq 
database handle.

#4 requires manual coding.

#5 requires some voodoo as well, since a multi-threaded client could 
interleave synchronized RPC calls that screw up another thread's 
transaction.

Of course, if you're just talking about a traditional multi-threaded 
application (non-CORBA), then storing the libpq handle in TLS would 
seem like the best solution to me, so long as thread lifetime = 
database session lifetime.

The other annoyance is that the ability to debug multi-threaded 
applications has progressed slowly over the years:

a. older versions of g++ had problems throwing exceptions in 
multi-threaded applications

b. older gdb versions couldn't debug multi-threaded applications

c. kernels couldn't dump the core image of multi-threaded applications

d. later kernels dumped the core image of the thread which caused 
the violation, not all threads

Depending upon compiler and kernel, all of the above may have been 
addressed, either partially or completely. I know newer linux 
kernels can dump the image of each thread when the process dies, but 
as of RedHat 8, d. was the behavior.

HTH,

Mike Mascari

Many TIA!
Mark






Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Ed L.
On Friday May 7 2004 12:20, Ed L. wrote:
>
> 1)  I'm inclined to set this to handle as large a DB footprint as will be
> in the coming year or two, so maybe 3X what it is now.  What is the
> impact/cost of setting max_fsm_pages at, say, 3M for an 8GB footprint? 
> (3 x 8GB/8K)

Ok, so I see 40B per, so setting it to 3M ==> 3M * 40 = 120MB of additional 
RAM usage for this?  Any other impacts with which to be concerned?




Re: [GENERAL] Interpreting vacuum verbosity

2004-05-07 Thread Tom Lane
Jeff Boes <[EMAIL PROTECTED]> writes:
> At some point, someone was going to write a "white paper" detailing how one
> might go about setting these parameters.

In 7.4, it's relatively easy to check on whether your settings are
reasonable: just do a VACUUM VERBOSE (database-wide) and check the
FSM requirements indicated at the end of the tediously chatty output.

All I have handy to illustrate with is a test server that has only the
regression test database loaded in it, so these numbers are very small,
but what I see is:

INFO:  free space map: 280 relations, 520 pages stored; 4720 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 178 kB shared memory.

This says that what I actually need to keep track of the present free
space in the database is 280 FSM relation slots and 4720 FSM page slots.
So the allocated space is plenty comfy here.  If the "pages needed"
number is significantly larger than your max_fsm_pages setting, then you
have a problem.

regards, tom lane



[GENERAL] pg_restore -L option

2004-05-07 Thread Carlos

Could someone clarify the use of the -L option in pg_restore?  I have a
pg_dump of a database (options -Ft -b) and I want to restore most of the
dump, with the exception of one table, into another database.  Can I do
the following?

1) write the tar file's table of contents out to a "list" file with
   pg_restore -l

2) comment out the line for the table that I want to exclude, and

3) restore the tar file with: pg_restore -a -L <listfile> -d <dbname>

Thanks in advance for your response.


Re: [GENERAL] Storing a file hash as primary key

2004-05-07 Thread Greg Stark

Eduardo Pérez Ureta <[EMAIL PROTECTED]> writes:

> I was wondering what the best way is to store a file hash (MD5 or SHA1)
> and make it primary key indexed.
> I have seen some people storing the hexadecimal encoded MD5 in a
> CHAR(32) but it may be a better idea to use a CHAR(16) without encoding
> the string, but that may cause some problems.

I would say either char(32) or bytea(16). Not char(16) since you don't want to
treat the raw binary data using any specific character encoding or sort it
according to any locale specific rules etc.

Personally I would have preferred bytea(16) but for some reason the php
drivers seem to just drop a NULL in there when I try to store raw binary md5 hashes.
So for now I just declared it bytea with no length specification and store the
hex encoded hash.
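
In table form, the two layouts look something like this (a sketch; note
that bytea takes no length modifier, so a CHECK constraint stands in for
one):

    -- hex-encoded digest: simple and portable
    create table file_hash (
        md5 char(32) primary key
    );

    -- raw digest: half the size, but needs escaping on input
    create table file_hash_raw (
        md5 bytea primary key check (octet_length(md5) = 16)
    );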

If anyone knows how to get Pear::DB to store binary data in a bytea column, by
all means.

-- 
greg




Re: [GENERAL] Storing a file hash as primary key

2004-05-07 Thread Joe Conway
Greg Stark wrote:
Personally I would have preferred bytea(16) but for some reason the php
drivers seem to jut drop NULL there when I try to store raw binary md5 hashes.
So for now I just declared it bytea with no length specification and store the
hex encoded hash.
If anyone knows how to get Pear::DB to store binary data in a bytea column, by
all means.
Did you try using pg_escape_bytea()?

Joe



[GENERAL] graphical ERD tool

2004-05-07 Thread Dennis Gearon
OK,
   I've given up on the tool I'm using. Anyone recommend a good, 
graphical ERD CASE tool for postgres? I'm on windblowsXP.



Re: [GENERAL] C Functions, datatypes

2004-05-07 Thread Joe Conway
A Palmblad wrote:
I'm writing a function in C, and am trying to return a row,
containing Numeric and array types.  I'm building the row with
heap_formtuple.  I'm having trouble creating the numeric and array
Datums.  If anyone has some example code illustrating this, that'd be
great.
See PL/R for examples:
  http://www.joeconway.com/plr/
Specifically:

  plr.c:do_compile()
gathering required information for input conversion of return values
  pg_conversion.c:get_scalar_datum()
conversion from (char *value) to Datum using the gathered info for
scalar return datums
  pg_conversion.c:get_generic_array_datum()
conversion from multiple (char *values) to Datum using the gathered
info for array return datums
HTH,

Joe



Re: [GENERAL] graphical ERD tool

2004-05-07 Thread Dennis Gearon
I also like the looks of Mogwai. I am using Dezign, and it's OK. It used 
to be better in the previous version. Now it can't print anything with 
more than 10 entities on my machine, and that's with 1 GB of RAM at 1.8 
GHz. It renders a JPEG in 0.1 seconds, so what its printing problem is I 
don't know.

Anyway, I am mostly interested in original design, but importing will 
come up some time in the future, I'm sure. I will look at Mogwai and hope 
they get import working soon.

ERWin looks really good, as does CASE2. However, CASE2 seems mostly for 
maintenance and not original design.

Dann Corbit wrote:

-Original Message-
From: Dennis Gearon [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 07, 2004 3:01 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] graphical ERD tool

OK,
   I've given up on the tool I'm using. Anyone recommend a good, 
graphical ERD CASE tool for postgres? I'm on windblowsXP.
   

What are you using?

I like ERWin/ERX, but it is not specifically tailored to PostgreSQL
(e.g. does not understand sequences, etc.).
So there are lots of manual steps after your bare SQL generation.
I could not get DBManager to work even with SQLite, but I think some
others like that one.
I downloaded Case Studio to try that, and also was unable even to
import.
I tried Visual Case 2, but that did not work for me, even though
PostgreSQL is listed as supported.
This looked real promising:
http://mogwai.sourceforge.net/
But when I try to reverse engineer PostgreSQL tables, the list of tables
is empty, even though the connection seems fine.
This is nowhere close to ready:
http://sourceforge.net/projects/pydbdesigner/
Don't bother:
http://sourceforge.net/projects/yadbd/
 





Re: [GENERAL] pgFoundry Open For Business

2004-05-07 Thread Rick Gigger
This is a huge improvement over GBorg!  I feel much more comfortable 
with this than I ever did with GBorg.  GBorg always seemed very 
unfriendly and the crusty look and feel at first made me wonder how 
legitimate it was as a source for serious projects.  I realize now that 
it housed some great projects but I think the new GForge based site will 
be much, much better especially for people new to postgres.

Marc G. Fournier wrote:
PostgreSQL's new collaboration site for associated projects, pgFoundry,
also known as projects.postgresql.org, is up and running at
http://www.pgfoundry.org/.  This is the beginning of our transition from
our own GBorg to a framework which is maintained and improved by a broad
external community -- GForge. And of course it runs on PostgreSQL.
GForge offers us the following features over GBorg:
   -- A glossy new look
   -- Multiple search interfaces
   -- Multiple categorizations for each project
   -- User-driven project and programmer ratings
   -- Project "job openings"
   -- ability to support project "home pages" (limited to static HTML
  right now)
   -- surveys
Right now, we are taking new projects only.  Next week, we hope to start
porting old projects from GBorg, on a strictly voluntary basis.  New
projects already on pgFoundry include plperlNG, framewerk, pathetic.org,
and my2postgres.  Your new project is welcome!
If you logged in with GBorg before April 18, 2004, your login has been
copied over from Gborg, and you can just use it.  If you are a recent
GBorg user, you will need to create a new login.
Over the next few months, we will be enabling the following features (all
of which currently have some bugs)
-- Code Snippets:  A library to share small scripts and functions,
   like Roberto's old "PL/pgSQL Library", but supporting multiple
   languages;
-- lightweight personal blogs for developers
-- PostgreSQL databases for each project
So, if you've been holding on to a new project idea, please create it now!
Your new project will be accessible as <projectname>.projects.postgresql.org
as well as through pgFoundry navigation.
Please contact us at [EMAIL PROTECTED] if you can't log in or run into
other issues connecting or using the site.
pgFoundry is online due to the efforts of Andrew Dunstan, Gavin Roy, Josh
Berkus, Marc Fournier and Chris Ryan, with help from GForge's Tim Perdue.

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[GENERAL] Strengthing mail restriction rules in Postfix

2004-05-07 Thread Marc G. Fournier

Just added a rule to postfix's config file to try and clean up some of the
trash going through the server, namely:

smtpd_helo_restrictions =
permit_mynetworks,
reject_unknown_hostname,
reject_invalid_hostname,
reject_non_fqdn_hostname,
permit

smtpd_sender_restrictions =
permit_mynetworks,
reject_invalid_hostname,
reject_unknown_hostname,
reject_unknown_sender_domain,
reject_non_fqdn_hostname,
reject_non_fqdn_sender,
permit

which are defined as:

#   reject_invalid_hostname: reject HELO hostname with bad syntax.
#   reject_unknown_hostname: reject HELO hostname without DNS A or MX record.
#   reject_non_fqdn_hostname: reject HELO hostname that is not in FQDN form
#   reject_non_fqdn_sender: reject sender address that is not in FQDN form
#   reject_unknown_sender_domain: reject sender domain without A or MX record.

that should weed out some garbage before it gets into the queues ... or
those users with mis-configured servers ... so, if you are legit, and
trying to send an email through that is being rejected, take a look at the
above as potential causes ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

