[GENERAL] Re: Is PostgreSQL ready for mission critical applications?

1999-11-23 Thread Jochen Topf

The Hermit Hacker <[EMAIL PROTECTED]> wrote:
: [...]
: take a look at:
: [list deleted]
: Each one of those is mission critical to the person using it, and, in some
: cases, I'd say to the ppl that they affect (Utility Billing and POS System
: are the two that come to mind there)...
: [...]

Well, there are millions of people using Microsoft products for mission
critical applications. I would never do that. :-) Maybe my standards are
higher or my applications different. So this list really doesn't say much.
The problem with databases in general is that my standards for them are way
higher than for most other pieces of software. If my web server fails, I
restart it. If a mail server fails, I restart it. If syslog fails, I don't
have a log file. But if a database fails, it is generally a lot more trouble.
On the other hand, a database is generally, apart from the kernel, the most
complex thing running on your servers...

: Quite frankly, I think the fact that Jochen is still around *even though*
: he has problems says alot about the quality of both the software and the
: development processes that we've developed over the past year, and also
: gives a good indication of where we are going...

This is true. Despite the problems I had with PostgreSQL, the system
I am using it for still runs PostgreSQL and it sort of works. We have to
reload the database every once in a while, and some of the triggers I would
like to have don't work. But basically it works. If you don't have the
money to go for a commercial database, PostgreSQL is not a bad option. But
don't think that everything with PostgreSQL is as bright as some of the
postings make you believe. Watch your database for performance and other
problems, don't forget the backups, and think about how to build your
application so that it fails gracefully if the database screws up.

If you have an Oracle database you don't do that, you hire a DBA for it.
There is no way you can do it yourself. :-)

Jochen
-- 
Jochen Topf - [EMAIL PROTECTED] - http://www.remote.org/jochen/






[GENERAL] Re: Is PostgreSQL ready for mission critical applications?

1999-11-23 Thread Jochen Topf

Kaare Rasmussen <[EMAIL PROTECTED]> wrote:
:> But I am not imagining the random "I have rolled back the current
:> transaction and am going to terminate your database system connection
:> and exit." messages.

: I'm wondering if you ever reported these problems to this list or the
: hackers list? I've been reading both regularly and don't recall
: seeing this discussed before, but maybe I'm wrong.

: Generally I find the responsiveness of the development team way better
: than with any commercial product. _All_ problem reports are treated with
: concern. So if you didn't report them before, please take the time to
: document your experience and send the problem report to the correct
: place.

No, I haven't reported them. I did report a minor bug that I could reproduce
to the bug tracking system. But all the other problems I had were, as I said,
not reproducible. I tried to come up with a small test case for some of the
bugs and failed. Sure, I can report them all, but the developers will tell me,
and rightly so, that they can't do anything with them because they can't
reproduce them. I know that this is not very helpful, but I see no easy way
out here.

Jochen
-- 
Jochen Topf - [EMAIL PROTECTED] - http://www.remote.org/jochen/






[GENERAL] How to do this in Postgres

1999-11-23 Thread Holger Klawitter

Hi there,

I tried all I could think of with the following problem, perhaps
someone has another idea.

I have a table where for each id there may be (and often are) multiple
rows with some kind of priority.
  create table data ( id1 int4, id2 int4, <>, prio int4 );
The minimal priority is not guaranteed to be 1. There are 200k
different ids with up to 10 entries, summing up to 400k rows.

Now I want to do something like this:

select * from data where <>.

First attempt (deleting non minimal)


select a.id1, a.id2, a.prio
into bogus
from data a, data b
where a.prio > b.prio and a.id1 = b.id1 and a.id2 = b.id2;

delete from data
where id1 = bogus.id1 and id2 = bogus.id2 and prio = bogus.prio;

The join does not seem to complete. I am not sure whether I should
have waited longer, but after 4 hours without significant disk access I
do not think that this query will ever return. Indexing didn't help.

Second attempt (stored procedures)
--

create function GetData( int4, int4 )
returns data
as 'select *
from data
where id1 = $1 and id2 = $2
order by prio
limit 1'
language 'sql';

select GetData(id1,id2) from <>;

LIMIT inside function bodies is not yet implemented in Postgres (6.5.2).

Third attempt (use perl on dumped table)


I don't want to :-)
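
Fourth attempt (correlated subquery, not tried yet)
---------------------------------------------------

This is only a sketch (not tested on 6.5.2): select the rows whose prio
equals the group minimum. It assumes an index on (id1, id2) keeps the
inner min() cheap, and if two rows of a group share the minimal prio,
both of them come back.

select d.id1, d.id2, d.prio
from data d
where d.prio = (select min(b.prio)   -- smallest prio within the group
                from data b
                where b.id1 = d.id1
                  and b.id2 = d.id2);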

Regards,
Holger Klawitter
--
Holger Klawitter +49 (0)251 484 0637
[EMAIL PROTECTED] http://www.klawitter.de/






[GENERAL] Socket file lock

1999-11-23 Thread Fabian . Frederick

Hi,

Sometimes I've got a stale socket file left over in /tmp, even though
Postgres shut down cleanly the previous time :/

What would be the best way to avoid this? (The big problem is that the
postmaster can't be launched because of that leftover file.)


Regards, Fabian





Re: [GENERAL] logging stuff in the right sequence.

1999-11-23 Thread Lincoln Yeoh

Hi,

I'm trying to set up logging tables and need a bit of help. 

I would like to ensure that things are stored so that they can be retrieved
in the correct sequence.

The example at http://www.postgresql.org/docs/postgres/rules17277.htm
says:
CREATE TABLE shoelace_log (
    sl_name    char(10),   -- shoelace changed
    sl_avail   integer,    -- new available value
    log_who    name,       -- who did it
    log_when   datetime    -- when
);

 CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
WHERE NEW.sl_avail != OLD.sl_avail
DO INSERT INTO shoelace_log VALUES (
NEW.sl_name,
NEW.sl_avail,
getpgusername(),
'now'::text
);

However, is there a guarantee that the datetime is sufficient for the correct
order if an item is updated by different people one after the other at almost
the same time?

I would prefer something like 

CREATE TABLE shoelace_log (
    log_sequence serial,   -- sequence of events
    sl_name    char(10),   -- shoelace changed
    sl_avail   integer,    -- new available value
    log_who    name,       -- who did it
    log_when   datetime    -- when
);

 CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
WHERE NEW.sl_avail != OLD.sl_avail
DO INSERT INTO shoelace_log VALUES (
NEW.sl_name,
NEW.sl_avail,
getpgusername(),
'now'::text
);

However, I notice there isn't a column name specification in the DO INSERT
INTO. How would I format the INSERT INTO statement so that log_sequence is
not clobbered? Can I use the normal INSERT INTO format and specify the
columns? I haven't managed to get it to work that way. Would defining the
sequence at the end of the table help? That would be untidy though ;).

Can/should I use now() instead of 'now'::text?
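
Something like this is what I am hoping will work (untested; it assumes
the rule action accepts an explicit column list, so that log_sequence is
simply filled from the nextval() default of its serial column, and it
uses now() instead of the text cast):

CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
    WHERE NEW.sl_avail != OLD.sl_avail
    DO INSERT INTO shoelace_log (sl_name, sl_avail, log_who, log_when)
       VALUES (NEW.sl_name, NEW.sl_avail, getpgusername(), now());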

The serial type is an int4. Hmm, there may actually be more than 2 billion
updates to keep track of :). But I suppose we could cycle the logs and
resequence.

Cheerio,

Link.








[GENERAL] PL

1999-11-23 Thread Roodie

Hi!
Just a quick question: is PL/pgSQL available in the 6.5.1 version?
I cannot find it, but I need some functionality it offers.
Any help?
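
For what it's worth, my understanding from the docs is that PL/pgSQL does
ship with 6.5.x, but it has to be registered in each database before
CREATE FUNCTION ... LANGUAGE 'plpgsql' works. This is the recipe I would
try (the path to plpgsql.so is only a guess and depends on the
installation; the createlang script in the distribution should do the
same thing). Is that the right track?

CREATE FUNCTION plpgsql_call_handler () RETURNS opaque
    AS '/usr/local/pgsql/lib/plpgsql.so'   -- installation dependent path
    LANGUAGE 'C';

CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler
    LANCOMPILER 'PL/pgSQL';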

-- 
--
  Roodie aka Steve aka Farkas István   ICQ: 53623985 ; Linux, C++
VisualBasic, Quake, Ars Magica, AD&D, Lightwave 3D, Mutant Chronicles





[GENERAL] Re: Is PostgreSQL ready for mission critical applications?

1999-11-23 Thread Kaare Rasmussen

> But I am not imagining the random "I have rolled back the current
> transaction and am going to terminate your database system connection
> and exit." messages.

I'm wondering if you ever reported these problems to this list or the
hackers list? I've been reading both regularly and don't recall
seeing this discussed before, but maybe I'm wrong.

Generally I find the responsiveness of the development team way better
than with any commercial product. _All_ problem reports are treated with
concern. So if you didn't report them before, please take the time to
document your experience and send the problem report to the correct
place.






[GENERAL] Re: Is PostgreSQL ready for mission critical applications?

1999-11-23 Thread Jochen Topf

Kane Tao <[EMAIL PROTECTED]> wrote:
: The reason why opinions are so varied has alot to do with the expertise of
: each person in relation to PostgreSQL and Linux.  Often problems that are
: considered simple to resolve by some are very difficult for others.  And
: sometimes problems are caused by actions that are done out of inexperince
: with the system like cancelling certain operations in progress etc...
: You probably would not be able to determine reliability from opinions.  The
: thing is PostgreSQL is extremely reliable if u know what you are doing and
: know how to handle/get around any bugs.

Sorry, this is simply not true. We are talking about reliability here and
not about some features that might be difficult to find for the inexperienced
user or something like that. For instance, I had to fight with PostgreSQL and
Perl to get Notify to work. It might be difficult to get this to work because
of the lack of documentation or because of bugs in the way it is implemented,
but I got it to work. This is the kind of thing a beginner stumbles over and,
if not persistent enough, will label a bug, although it might only be the
documentation that is buggy, or his understanding of the workings of the
database that is just not good enough.
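
For reference, the SQL side of this is trivial; the fighting was all on
the Perl client, which has to pick the notification up after a query.
A minimal sketch (the condition name is made up for the example):

LISTEN radius_update;    -- the listening backend registers the name

-- elsewhere, e.g. right after the insert, another backend raises it:
NOTIFY radius_update;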

But I am not imagining the random "I have rolled back the current transaction
and am going to terminate your database system connection and exit." messages.
If there is a way to kill a database as a normal user, it is not reliable.
Maybe, if I knew more about PostgreSQL, I would be able to avoid triggering
the bugs, but that is not the point. The bugs should not be there, or there
should at least be a meaningful error message saying: "I am sorry Dave, I
can't let you do this, because it would trigger a bug." I have seen random
crashes without any indication of the problem, and I have seen strange
messages hinting at a problem deep down in the btree implementation or
something like that. And the worst thing is that these bugs are not repeatable
in a way that would let someone start debugging them or at least work around
them.

To be fair, I have never lost any data (or had it corrupted) that was
already *in* the database, although there is one unresolved case which might
have been a database corruption but was probably an application error. But
I have lost data because the application wasn't able to put it into the
database in the first place while the database was not accessible. That is
probably an application error too, because it only buffered data in memory
and not on disk in case of a database failure. I thought that this was
enough, because databases are supposed to be more reliable than simple
filesystems...

: Lookig at some of the other posts about reliability...the number of records
: in a database will mainly determine the ability of a database to maintain
: performance at larger file/index sizes.  It does not really impact
: stability.  Stability is mainly affected by the number of
: reads/updates/inserts that are performed.  Usually u want to look at large
: user loads, large transaction loads and large number of
: updates/inserts/deletes to gauge reliability.   I havent seen anyone post
: saying that they are running a system that does this...perhaps I just missed
: the post.

While this is generally true, a huge database can have an impact on
stability. For instance, a very small memory leak will not show in small
databases but might show in big ones, triggering a bug. Or an index grows
past some bound and a hash file has to be enlarged, or whatever. And there
are some problems of this kind in PostgreSQL. I am logging all logins and
logouts from a radius server into PostgreSQL, and after it had run well
for several months, it slowed to a crawl and vacuum wouldn't work anymore.
So, yes, I do have a lot of inserts, although about 6000 inserts a day and
a total of a few hundred thousand records is not really much.

My question from an earlier posting is still unanswered. Does anybody here
who reported PostgreSQL to be very stable use advanced features like PL/pgSQL
procedures, triggers, rules and notifies? Let's have a show of hands. I would
really like to know why I am the only one having problems. :-) Although it
might be because, as this is a PostgreSQL mailing list, most of the readers
are people who are happy with PostgreSQL, because all the others have left
and are on an Oracle list now. :-)

I would really, really like PostgreSQL to be stable and usable for mission
critical things, because it has some very nice features, is easy to set up
and maintain, and is generally a lot better than all the other databases I
know, were it not for the problems described above. I hope that my criticism
here is not perceived as PostgreSQL bashing but as an attempt to understand
why so many people are happy with PostgreSQL and I am not.

Jochen
-- 
Jochen Topf - [EMAIL PROTECTED] - http://www.remote.org/jochen/