Re: [GENERAL] WAL directory size calculation

2016-07-28 Thread Jan Wieck
On Thu, Jul 28, 2016 at 9:54 AM, Andreas Kretschmer  wrote:

> Without Replication 1 GB would be fine, even with replication. But it must
> be reliable!
>
>
The required size of WAL depends on your intended checkpoint_timeout versus
the amount of WAL generated by data turnover. A rather small 40GB database,
churning TPC-C style transactions at a rate of 1,000 TPS, can easily generate
60MB of WAL per second (if configured wrong). Keeping the WAL size at or below
1GB would require a checkpoint to complete every 17 seconds. In that case,
max_wal_size=1GB is a very wrong config option.

One problem here is that the more frequently checkpoints occur, the more full
page writes are required. That drives up the amount of WAL, requiring
checkpoints even more frequently when max_wal_size is the limiting factor.
This is a classic downward-spiral scenario.

At 1,000 TPS, the above benchmark levels out (after about 1-2 hours) at around
60-64GB of WAL space used (with max_wal_size=96GB and
checkpoint_timeout=20min). The total amount of WAL actually produced goes down
significantly (due to reduced full page writes), and transaction response time
improves in average as well as in stddev. The whole DB looks more like it is
cruising than fighting.

This example isn't a big database (40-80GB) or anything exotic. Just a
write-heavy OLTP load.
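As a back-of-the-envelope check of the arithmetic above (numbers taken from the thread; this is a sketch, not a sizing formula):

```shell
# At ~60MB/s of WAL, a 1GB (1024MB) max_wal_size forces a checkpoint to
# complete roughly every 1024/60 ~= 17 seconds (integer math rounds down).
wal_rate_mb_per_s=60
max_wal_size_mb=1024
echo "$(( max_wal_size_mb / wal_rate_mb_per_s )) seconds between checkpoints"
```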


Regards, Jan







> Andreas
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: [GENERAL] Slony error please help

2016-07-17 Thread Jan Wieck
On Sun, Jul 17, 2016 at 12:47 AM, Jan Wieck  wrote:

>
> The only thing I can imagine would be that there is another slony cluster
> (or
> remnants of it) hanging around in the 9.4 installation, possibly in
> another database.
>
>
That does reproduce the problem. I ran the new doc/pgbench-tutorial through
steps 01, 02 and 03 with a 9.4/2.2.2 installation. Upgraded to 9.4/2.2.5 but
left out the UPDATE FUNCTIONS for node 3. I could have created a fourth
database and just run INIT CLUSTER against that.

I then installed 9.5/2.2.5 in parallel and the pg_upgrade run looks like
this:

(venv)[postgres@db1 pgbench-tutorial]$ pg_upgrade -b
>> /var/lib/pgsql/test_94/bin -B /var/lib/pgsql/test_95/bin -d
>> /opt/pgsql/test_94 -D /opt/pgsql/test_95 -p 54394 -P 54395 -c
>
> Performing Consistency Checks
>
> -
>
> Checking cluster versions   ok
>
> Checking database user is the install user  ok
>
> Checking database connection settings   ok
>
> Checking for prepared transactions  ok
>
> Checking for reg* system OID user data typesok
>
> Checking for contrib/isn with bigint-passing mismatch   ok
>
> Checking for presence of required libraries fatal
>
>
>> Your installation references loadable libraries that are missing from the
>
> new installation.  You can add these libraries to the new installation,
>
> or remove the functions using them from the old installation.  A list of
>
> problem libraries is in the file:
>
> loadable_libraries.txt
>
>
>> Failure, exiting
>
> (venv)[postgres@db1 pgbench-tutorial]$ cat loadable_libraries.txt
>
> Could not load library "$libdir/slony1_funcs.2.2.2"
>
> ERROR:  could not access file "$libdir/slony1_funcs.2.2.2": No such file
>> or directory
>
>
>
If I drop the offending database or run UPDATE FUNCTIONS in it, pg_upgrade
is happy.
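For reference, the per-database fix is a slonik UPDATE FUNCTIONS run. A minimal sketch (cluster name, node id, and conninfo here are hypothetical, not taken from the setup above):

```
cluster name = pgbench;
node 3 admin conninfo = 'host=db1 port=54394 dbname=pgbench3';

update functions (id = 3);
```

After this, the node's functions reference the installed slony1_funcs library version and pg_upgrade's library check passes.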


Regards, Jan

-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: [GENERAL] Slony error please help

2016-07-16 Thread Jan Wieck
On Thu, Jun 16, 2016 at 3:36 PM, avi Singh 
wrote:

> /usr/pgsql-9.5/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -B
> /usr/pgsql-9.5/bin/ -d /var/lib/pgsql/cmates1/data/ -D
> /var/lib/pgsql/cmates1/data/9.5/ -p 5432 -P 5433 -c
>
>
> it throws this error
>
> Could not load library "$libdir/slony1_funcs.2.2.2"
> ERROR:  could not access file "$libdir/slony1_funcs.2.2.2": No such file
> or directory
>
>
Using git clones of everything on CentOS 7.1 I cannot reproduce this problem.
Even the actual pg_upgrade without -c works; the cluster starts up and
replicates just fine. I went through installing 9.4 with 2.2.2, then upgrading
to 2.2.4 (via UPDATE FUNCTIONS), and last upgrading to 9.5 with pg_upgrade.

The only thing I can imagine would be that there is another slony cluster (or
remnants of it) hanging around in the 9.4 installation, possibly in another
database.

Can you provide a list of all databases and all schemas in them from the 9.4
install?


Regards, Jan







>
>
> Not sure why it is still looking for slony1_funcs.2.2.2 even though the
> version is upgraded to 2.2.4 and it is running fine. I do see
> slony1_funcs.2.2.4.so files in the lib directory, which is how it should be
> since I have upgraded it to 2.2.4.
>
>
> Any suggestions?
>
>
> Thanks
> Avi
>
>


-- 
Jan Wieck
Senior Postgres Architect


Re: [GENERAL] Arduino SQL Connector

2014-04-18 Thread Jan Wieck

On 04/18/14 10:31, Steve Spence wrote:

Not a thing in that document about the Arduino. Just how to install
Postgres on a Raspberry Pi. My Postgres is on a hosted server at a ISP.


You intend to have thousands of Arduino devices, incapable of doing any sort
of encryption or other means of secure IP connections, directly connect to a
database that is hosted on a publicly accessible VPS?


Maybe it is just me, but to me that design has DISASTER written in bold, 
red, 120pt font all over it.



Good luck with that,
Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] Arduino SQL Connector

2014-04-17 Thread Jan Wieck

On 04/18/14 00:27, Steve Spence wrote:

On 4/18/2014 12:21 AM, John R Pierce wrote:


personal opinion:

I don't think a terminal device like a PC or an embedded system should
be talking directly to SQL at all.   instead, they should be talking
to an application server which implements the "business logic", and
THAT talks to the database.





When all we need to do is log sensor values, there's no business logic
needed. It's scientific data collection, and direct to SQL works very
nicely. It's fast and clean.


In that case you should be sending messages to a message bus or queue.

The bus/queue receiver will then push the data into the database or 
whatever downstream system.



Jan


--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] Arduino SQL Connector

2014-04-17 Thread Jan Wieck

On 04/17/14 10:49, Bruce Momjian wrote:

On Thu, Apr 17, 2014 at 10:44:36AM -0400, David Rysdam wrote:

Bruce Momjian  writes:
> On Thu, Apr 17, 2014 at 09:39:55AM -0400, Steve Spence wrote:
>> So, who wants to work on this with me? I'm a fair arduino programmer,
>> but know nothing about postgres.
>
> I would look at the MySQL one as a first step to see how that was done.
> You are basically going to need to duplicate libpq, which is a major
> undertaking.

Maybe I'm being naive, but isn't libpq already being compiled for ARM by
Debian? As long as it fits, you should be good. If it doesn't, you'll
need to strip some stuff out.


Oh, can you run Debian ARM code on Arduino?  If so, Postgres's libpq
could be used directly, though it is probably too big, as you mentioned.
The MySQL driver is C++, which surprised me.


No, to do that you'd need something like a Beaglebone, which is ARM 
Cortex A8 based and runs Linux anyway.


http://www.ti.com/tool/beaglebk?DCMP=PPC_Google_TI&k_clickid=63c22498-5f5d-3789-4b41-0dabd35d

I don't think porting the whole libpq over to an Arduino would be a good 
move. For practical purposes a small subset of functionality through 
some gateway service would probably be a better approach.


Note that I am not an Arduino user/developer. I'm more familiar with the 
Microchip PICs.



73 de WI3CK

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] WAL Replication Server + repmgr + Slony

2014-04-12 Thread Jan Wieck

On 03/20/14 10:37, basti wrote:

Hello,

at the moment we use a Postgres (9.1) Master and a Hot-Standby with WAL
Replication.

We also use Slony to copy some (not all) tables to a frontend server.
Is it possible to use repmgr to switch between master and hot-standby
without losing the Slony functionality?

When I use repmgr and switch between master and slave, what about
IP addresses and hostnames? Do I need a virtual IP that I can switch
from one server to the other?


Unless your standby moves backwards in time (async replication and 
failover losing transactions), all that is needed should be to issue 
STORE PATH commands with the new IP/hostname to the Slony replica(s).
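A minimal slonik sketch of such a STORE PATH update (cluster name, node ids, and conninfo strings are hypothetical):

```
cluster name = mycluster;
node 1 admin conninfo = 'host=new-master dbname=app';
node 2 admin conninfo = 'host=frontend1 dbname=app';

store path (server = 1, client = 2,
            conninfo = 'host=new-master dbname=app');
```

One STORE PATH per subscriber that previously pointed at the old master's address.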



Regards,
Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] Cancelling of autovacuums considered harmful

2014-04-12 Thread Jan Wieck

On 02/27/14 10:43, Scott Marlowe wrote:

On Wed, Feb 26, 2014 at 5:40 PM, Steve Crawford
 wrote:

On 02/26/2014 08:56 AM, Alvaro Herrera wrote:


...

No matter how heavily updated, regular activity should not cause
autovacuum kills.  Only heavier operations would do that (say ALTER
TABLE, etc).



"Considered harmful" got my attention. What, if any, known harm is caused?

We have many errors of this type but in our case most are due to batch
processes that have a vacuum embedded at appropriate points in the string of
commands in order to avoid excessive bloat and to ensure the tables are
analyzed for the following steps. Occasionally the autovacuum triggers
before the manual but gets canceled.

Any harm?


We have some rather large tables that have never been autovacuumed. At
first I was thinking it was due to pgsql cancelling them due to load
etc. But if it's slony getting in the way then cancelling them is
still harmful, it's just not postgres' fault.


Slony (even the very old 1.2) does not cancel anything explicitly.


Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] Linux vs FreeBSD

2014-04-10 Thread Jan Wieck

On 04/10/14 17:25, Christofer C. Bell wrote:

I'm not wanting to get after anyone here, but I want it on the record
that I am not the source of the above quote discouraging the use of
Ubuntu in a server role.  That would be Bruce Momjian. While Bruce is
entitled to his opinion, it's not one I agree with and I don't want a
Google search years from now to tie my name to that viewpoint.


Who (in their right mind) would ever think of anything but BSD in a 
server role?





Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [GENERAL] .pgpass being ignored

2013-06-24 Thread Jan Wieck
On 06/24/13 10:24, Rebecca Clarke wrote:
> I could be wrong, but shouldn't the owner of .pgpass be postgres?

The owner of ~/.pgpass is whoever owns ~ (the home directory of that user).

And ~/.pgpass must have permissions 0600 in order for libpq to actually
use it.
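A quick shell sketch of both requirements (file name and entry here are illustrative; the entry format is hostname:port:database:username:password):

```shell
# Create a pgpass-style file and give it the 0600 mode libpq insists on.
pgpass=./pgpass.demo
echo '192.168.1.4:5432:JobSearch:postgres:secret' > "$pgpass"
chmod 600 "$pgpass"             # anything looser and libpq ignores the file
# GNU stat shown; on BSD/macOS use: stat -f '%Lp'
stat -c '%a' "$pgpass"
rm -f "$pgpass"
```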


Jan


> 
> 
> On Mon, Jun 24, 2013 at 3:17 PM, Ziggy Skalski  > wrote:
> 
> On 13-06-21 06:19 PM, Stephen Rasku wrote:
> 
> I am trying to write a script that will create and populate a
> database.  I don't want to enter a password every time so I want to
> use a .pgpass file.  It has the correct permissions:
> 
>  $ ls -l $PGPASSFILE
>  -rw---  1 Stephen  staff  43 21 Jun 14:48
> /Users/Stephen/.pgpass
> 
> However, when I call createdb, it fails:
> 
>  $ createdb -h 192.168.1.4 -U postgres --no-password JobSearch
>  createdb: could not connect to database postgres:
> fe_sendauth: no
> password supplied
> 
> This is the contents of my .pgpass file:
> 
>
>  192.168.1.4:5432:DatabaseName:postgres:thisIsTheCorrectPassword
> 
> If I omit the --no-password option it will prompt me for a password
> and the command will succeed.  I am using 9.0.10 from MacPorts.
> 
> What am I doing wrong?
> 
> ...Stephen
> 
> 
> 
> Hi,
> 
> Just going from a personal experience, have you tried to open the
> .pgpass file in vi and made sure there's no trailing spaces in your
> pgpass entry?  That bit me once before :)
> 
> Ziggy
> 
> 
> 
> 
> 
> 


-- 
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin




[GENERAL] Retiring from the PostgreSQL core team

2010-05-12 Thread Jan Wieck

To whom it may concern,

this is to inform the PostgreSQL community of my retirement from my
PostgreSQL core team position.

Over the past years I have not been able to dedicate as much time to
PostgreSQL as everyone would have liked. The main reason for that was
that I was swamped with other work and private matters and simply didn't
have time. I did follow the mailing lists but did not participate much.

Looking at my publicly visible involvement over the last two years or
so, there is little that would justify me being on the core team today.
I was not involved in the release process, in patch reviewing,
organizing and have contributed little.

However, in contrast to other previous core team members, I do not plan
to disappear. Very much to the contrary. I am right now picking up some
things that have long been on my TODO wish list and Afilias is doubling
down on the commitment to PostgreSQL and Slony. We can and should talk
about that stuff next week at PGCon in Ottawa. I will also stay in close
contact with the remaining core team members, many of whom have become
very good friends over the past 15 years.

The entire core team, me included, hoped that it wouldn't come to this
and that I could have returned to active duty earlier. Things in my
little sub universe didn't change as fast as we all hoped and we all
think it is best now that I focus on getting back to speed and do some
serious hacking.

I hope to see many of you in Ottawa.


Regards
Jan

--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin




Re: [GENERAL] ERROR: XX000: cache lookup failed for relation

2008-04-05 Thread Jan Wieck

On 4/5/2008 11:02 AM, Glyn Astill wrote:

Hi Jan,

Is that still true for 1.2.12? As that's the version I'm using. Also, any
ideas on where I start to sort it out? I just want to drop the old table now
that I've removed it from replication, but the error mentioned previously is
stopping me.


Yes, this is and will be true for ALL 1.2 versions. And this is also the
reason why Slony 2.0 will NOT support any Postgres version prior to 8.3.


If you removed the table from the replication set, then this error should not
appear any more. All I can think of is that the table might have foreign key
relationships with other tables which are still involved in replication. If
that is the case, try dropping it by using the EXECUTE SCRIPT feature of the
slonik command language to perform the DROP TABLE (which is highly recommended
for any sort of DDL throughout the Slony documentation anyway).
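A hedged sketch of the EXECUTE SCRIPT approach (cluster name, set id, node id, and file path are hypothetical):

```
cluster name = mycluster;
node 1 admin conninfo = 'host=master dbname=SEE';

execute script (
    set id = 1,
    filename = '/tmp/drop_replicated_users.sql',
    event node = 1
);
```

where /tmp/drop_replicated_users.sql would contain the `DROP TABLE replicated_users;` statement, so that the DDL is applied consistently on all nodes.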



Jan



Thanks


- Original Message ----

From: Jan Wieck <[EMAIL PROTECTED]>
To: Glyn Astill <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Saturday, 5 April, 2008 3:00:04 PM
Subject: Re: [Slony1-general] ERROR:  XX000: cache lookup failed for relation

On 4/5/2008 7:47 AM, Glyn Astill wrote:
> Hi chaps,
> 
> I know there's been a bit of "activity" on this list recently - but does
> anyone know where I should start looking to resolve the below?


Yes, a "SET DROP TABLE" is mandatory prior to dropping the table itself.
This is because up to version 1.2.x, Slony deliberately corrupts the system
catalog on subscriber nodes in order to suppress triggers and rules from
firing (this can only be controlled by other means since Postgres 8.3, and
will be done so in Slony 2.0).



Jan

> 
> - Original Message 
>> From: Glyn Astill 
>> To: [EMAIL PROTECTED]

>> Cc: pgsql-general@postgresql.org
>> Sent: Friday, 4 April, 2008 3:05:18 PM
>> Subject: [Slony1-general] ERROR:  XX000: cache lookup failed for relation
>> 
>> Hi chaps,
>> 
>> I've got a problem trying to drop a table, I get the error "cache lookup 
failed 
>> for relation"
>> 
>> SEE=# drop table replicated_users;

>> ERROR:  XX000: cache lookup failed for relation 30554884
>> LOCATION:  getRelationDescription, dependency.c:2021
>> Now this table is on a slony-I slave and was in replication when I tried to
>> drop it - I presume this is a big mistake and I should never try to drop a
>> table without first dropping it from replication?
>> 
>> In addition I'd set up a trigger on the table "replicate_users".
>> 
>> If I do:
>> 
>>  select relname,oid from pg_class where relname = 'replicated_users';
>> 
>> -[ RECORD 1 ]-

>> relname | replicated_users
>> oid | 30554879
>> 
>> Thats not the same oid as the one it's complaining about.
>> 
>> Does anyone have any idea why this has happened or how I can fix it?
>> 
>> Cheers

>> Glyn
>> 
>> 
>> 
>> 
>> 
>> 
>>   ___ 
>> Yahoo! For Good helps you make a difference  
>> 
>> http://uk.promotions.yahoo.com/forgood/
>> 
>> ___

>> Slony1-general mailing list
>> [EMAIL PROTECTED]
>> http://lists.slony.info/mailman/listinfo/slony1-general
>> 
> 
> 
> 
> 


--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin











--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin




Re: [GENERAL] [Slony1-general] ERROR: XX000: cache lookup failed for relation

2008-04-05 Thread Jan Wieck

On 4/5/2008 7:47 AM, Glyn Astill wrote:

Hi chaps,

I know there's been a bit of "activity" on this list recently - but does
anyone know where I should start looking to resolve the below?


Yes, a "SET DROP TABLE" is mandatory prior to dropping the table itself.
This is because up to version 1.2.x, Slony is deliberately corrupting the
system catalog on subscriber nodes in order to suppress triggers and rules
from firing (this can only be controlled by other means since Postgres 8.3
and will be done so in Slony 2.0).



Jan



- Original Message 

From: Glyn Astill <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Friday, 4 April, 2008 3:05:18 PM
Subject: [Slony1-general] ERROR:  XX000: cache lookup failed for relation

Hi chaps,

I've got a problem trying to drop a table, I get the error "cache lookup failed 
for relation"


SEE=# drop table replicated_users;
ERROR:  XX000: cache lookup failed for relation 30554884
LOCATION:  getRelationDescription, dependency.c:2021
Now this table is on a slony-I slave and was in replication when I tried to
drop it - I presume this is a big mistake and I should never try to drop a
table without first dropping it from replication?


In addition I'd set up a trigger on the table "replicate_users".

If I do:

 select relname,oid from pg_class where relname = 'replicated_users';

-[ RECORD 1 ]-
relname | replicated_users
oid | 30554879

Thats not the same oid as the one it's complaining about.

Does anyone have any idea why this has happened or how I can fix it?

Cheers
Glyn









--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin




Re: [GENERAL] Deferred constraints and locks...

2008-02-13 Thread Jan Wieck

On 2/12/2008 3:04 PM, Tom Lane wrote:

Nathan Wilhelmi <[EMAIL PROTECTED]> writes:
Hello - Trying to track down a lock contention problem, I have a process 
that does a series of select / insert operations. At some point the 
process grabs a series of RowExclusiveLock(s) and has the obvious effect 
of stalling other processes. I logged all the statements and don't see 
any for update or explicit lock statements.


Insert statements would naturally take RowExclusiveLock, but that
doesn't block other DML operations.  So the question is what *else*
are you doing?


Those SELECT statements aren't by chance FOR UPDATE, are they?
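For anyone checking: SELECT ... FOR UPDATE takes ROW SHARE on the table plus row-level locks, and it is easy to miss in application code. A hypothetical way to look while the process runs (table name is made up; pg_locks and pg_backend_pid() are real system objects):

```sql
-- Session 1: this does lock rows, unlike a plain SELECT
BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;

-- Session 2: inspect what the other backends are holding
SELECT pid, locktype, mode, granted
  FROM pg_locks
 WHERE pid <> pg_backend_pid();
```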


Jan

--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] What's the difference between SET STORAGE MAIN and EXTENDED?

2007-09-13 Thread Jan Wieck

On 9/7/2007 11:45 AM, Tom Lane wrote:

Zoltan Boszormenyi <[EMAIL PROTECTED]> writes:

Tom Lane wrote:

Zoltan Boszormenyi <[EMAIL PROTECTED]> writes:

At the end of the day, the behaviour is the same, isn't it?


No, there's a difference in terms of the priority for pushing this
column out to toast storage, versus pushing other columns of the row
out to toast.



Thanks very much for clarifying.



I was thinking of a binary data that wouldn't fit
into the maximum inline tuple size. In this case
both MAIN and EXTENDED end up compressed
and out-of-line. I didn't consider having multiple
bytea or text columns filled with small amount of data.


It'd be pretty unwise to mark a column MAIN if it's likely to contain
wide values ("wide" meaning more than 1K or so).  As you say, it'll
still get toasted --- but not until after everything else in the row has
been toasted, even quite narrow values that happen to be of toastable
types.


Additionally, EXTENDED means that the toaster tries to get the tuple down to
1/4 of the block size. With MAIN, it won't do so.


The MAIN storage strategy would be for wide columns that you *always* touch in
*every* select *and* update, and where the access pattern always results in an
index scan. Only in that case do you gain from having the value right in the
main tuple without needing to pull it from the toast table, while not losing
the optimization of reusing external toast values if they aren't touched on
update.
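For reference, the storage strategy is set per column with ALTER TABLE (table and column names here are illustrative):

```sql
-- MAIN: compress, keep the value in the main tuple as long as possible.
-- EXTENDED (default for toastable types): compress, then move out-of-line.
ALTER TABLE documents ALTER COLUMN body    SET STORAGE MAIN;
ALTER TABLE documents ALTER COLUMN summary SET STORAGE EXTENDED;
```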



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Database performance comparison paper.

2007-02-20 Thread Jan Wieck

On 2/20/2007 3:51 PM, Andrej Ricnik-Bay wrote:

On 2/21/07, Guido Neitzer wrote:


It would be more or less the same, if you compare copy against insert
performance on PostgreSQL and state that insert should be as fast as
copy without saying why.

Btw: these guys claim to be database consultants.

Guess one should consider oneself lucky not to be their
customer, then, since they seem to base their decisions
on thin air and personal preference...


As the original author of the PHP TPC-W implementation you can find on
pgfoundry, I know pretty well what it takes to make MySQL perform about as
well as PostgreSQL under a real benchmarking scenario. I implemented all the
database access parts basically two times: once for PostgreSQL as an
experienced DB developer would do it, once turning half the queries upside
down in a horribly unintuitive way to give MySQL+InnoDB clues how to do it.
Of course I did NOT run any of those tests using MyISAM.


In the end, both implementations performed more or less the same, measured at
the HTTP interface. What the PHP+PG implementation did more elegantly in SQL,
the PHP+My implementation had to do with more PHP code. And that is where all
those crappy wannabe-benchmarks just fail to make sense to me. They measure
some common-denominator SQL statements at an abstracted DB API level. That is
just nonsense. It doesn't matter how fast a specific index scan or a specific
insert or update operation by itself is. What matters is how many parallel
simulated users of a well defined business application the System Under Test
(middleware plus database) can support within the response-time constraints.


All that said, what really scares me is that these clowns apparently 
don't even know the system of their preference. No serious DB consultant 
would even bother testing anything using MyISAM any more. It is a table 
handler only considered for "disposable data".



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Database performance comparison paper.

2007-02-19 Thread Jan Wieck

On 2/16/2007 1:10 AM, Tom Lane wrote:

extra points, use *only one* test case.  Perhaps this paper can be
described as "comparing an F-15 to a 747 on the basis of required
runway length".


Oh, this one wasn't about raw speed of trivial single table statements 
like all the others?



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Priorities for users or queries?

2007-02-16 Thread Jan Wieck

On 2/16/2007 5:05 PM, Benjamin Arai wrote:

Fair enough,  thanks for the clarification.


What you can do to throttle things in a reasonable manner would require 
that your application knows which transaction requires updating when it 
begins it. If that is the case, you can setup multiple connection pools 
with pgpool, one for reading having many physical connections, each 
shared for just a few clients, another having few physical connections 
shared by all writers. That way you will have a limited number of 
writers active at the same time.



Jan




Benjamin

Jan Wieck wrote:

On 2/16/2007 4:56 PM, Benjamin Arai wrote:

Hi Jan,

That makes sense.  Does that mean that a low-priority "road-block" can cause
a deadlock, or just a very long lock?


It doesn't cause any deadlock by itself. But the longer one holds one lock
before attempting to acquire another, the higher the risk that someone else
grabs that one and tries vice versa. So if there is a risk of deadlocks due
to the access pattern of your application, then slowing down the updating
processes will increase the risk of them happening.



Jan



Benjamin

Jan Wieck wrote:

On 2/11/2007 1:02 PM, Benjamin Arai wrote:

Hi Magnus,

Think this can be avoided as long as the queries executed on the lower
priority process never lock anything important.  In my case, I would always
be doing inserts with the lower priority process, so inversion should never
occur.  On the other hand, if some lock occurs somewhere else specific to
Postgres, then there may be an issue.  Are there any other tables locked by
the Postgres process other than those locks explicitly set by the query?


If you assume that the logical row level locks, placed by such low 
priority "road-block", would be the important thing to watch out 
for, you are quite wrong. Although Postgres appears to avoid 
blocking readers by concurrent updates using MVCC, this isn't 
entirely true. The moment one updating backend needs to scribble 
around in any heap or index block, it needs an exclusive lock on 
that block until it is done with that. It will not hold that block 
level lock until the end of its transaction, but it needs to hold it 
until the block is in a consistent state again. That means that the 
lower the priority of those updating processes, the more exclusively 
locked shared buffers you will have in the system, with the locking 
processes currently not getting the CPU because of their low priority.



Jan



Benjamin

Magnus Hagander wrote:

Most likely, you do not want to do this. You *can* do it, but you are
quite likely to suffer from priority inversion
(http://en.wikipedia.org/wiki/Priority_inversion)

//Magnus


Adam Rich wrote:
 

There is a function pg_backend_pid() that will return the PID for
the current session.  You could call this from your updating app
to get a pid to feed to the NICE command.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Benjamin 
Arai

Sent: Saturday, February 10, 2007 6:56 PM
To: pgsql-general@postgresql.org; pgsql-admin@postgresql.org
Subject: [GENERAL] Priorities for users or queries?


Hi,

Is there a way to give priorities to queries or users?  Something 
similar to NICE in Linux.  My goal is to give the updating 
(backend) application a very low priority and give the web 
application a high priority to avoid disturbing the user experience.


Thanks in advance!

Benjamin












--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Priorities for users or queries?

2007-02-16 Thread Jan Wieck

On 2/16/2007 4:56 PM, Benjamin Arai wrote:

Hi Jan,

That makes sense.  Does that mean that a low-priority "road-block" can cause
a deadlock, or just a very long lock?


It doesn't cause any deadlock by itself. But the longer one holds one lock
before attempting to acquire another, the higher the risk that someone else
grabs that one and tries vice versa. So if there is a risk of deadlocks due
to the access pattern of your application, then slowing down the updating
processes will increase the risk of them happening.



Jan



Benjamin

Jan Wieck wrote:

On 2/11/2007 1:02 PM, Benjamin Arai wrote:

Hi Magnus,

I think this can be avoided as long as the queries executed on the 
lower priority process never lock anything important.  In my case, I 
would always be doing inserts with the lower priority process, so 
inversion should never occur.  On the other hand, if some lock occurs 
somewhere else specific to Postgres then there may be an issue.  Are 
there any other tables locked by the Postgres process other than 
those locks explicitly set by the query?


If you assume that the logical row level locks, placed by such low 
priority "road-block", would be the important thing to watch out for, 
you are quite wrong. Although Postgres appears to avoid blocking 
readers by concurrent updates using MVCC, this isn't entirely true. 
The moment one updating backend needs to scribble around in any heap 
or index block, it needs an exclusive lock on that block until it is 
done with that. It will not hold that block level lock until the end 
of its transaction, but it needs to hold it until the block is in a 
consistent state again. That means that the lower the priority of 
those updating processes, the more exclusively locked shared buffers 
you will have in the system, with the locking processes currently not 
getting the CPU because of their low priority.



Jan
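Adam's pg_backend_pid()/renice idea, together with Jan's caveat, can be 
sketched as follows. This is a hedged illustration only: the session 
handling, the PID value, and the nice value are assumptions, not a 
recommendation.

```sql
-- Hedged sketch of the approach discussed in this thread.
-- Run inside the long-running updating session; the PID is only
-- meaningful while that session stays connected.
SELECT pg_backend_pid();
-- Suppose it returns 12345.  Then, on the database server, as root
-- or as the owner of the backend process:
--   renice +19 -p 12345
-- Per Jan's warning: a reniced backend can sit on exclusively locked
-- shared buffers while starved of CPU, so use this with care.
```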



Benjamin

Magnus Hagander wrote:

Most likely, you do not want to do this. You *can* do it, but you are
quite likely to suffer from priority inversion
(http://en.wikipedia.org/wiki/Priority_inversion)

//Magnus


Adam Rich wrote:
 

There is a function pg_backend_pid() that will return the PID for
the current session.  You could call this from your updating app
to get a pid to feed to the NICE command.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Benjamin Arai
Sent: Saturday, February 10, 2007 6:56 PM
To: pgsql-general@postgresql.org; pgsql-admin@postgresql.org
Subject: [GENERAL] Priorities for users or queries?


Hi,

Is there a way to give priorities to queries or users?  Something 
similar to NICE in Linux.  My goal is to give the updating 
(backend) application a very low priority and give the web 
application a high priority to avoid disturbing the user experience.


Thanks in advance!

Benjamin







  








--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Priorities for users or queries?

2007-02-16 Thread Jan Wieck

On 2/11/2007 1:02 PM, Benjamin Arai wrote:

Hi Magnus,

I think this can be avoided as long as the queries executed on the lower 
priority process never lock anything important.  In my case, I would 
always be doing inserts with the lower priority process, so inversion 
should never occur.  On the other hand, if some lock occurs somewhere else 
specific to Postgres then there may be an issue.  Are there any other 
tables locked by the Postgres process other than those locks 
explicitly set by the query?


If you assume that the logical row level locks, placed by such low 
priority "road-block", would be the important thing to watch out for, 
you are quite wrong. Although Postgres appears to avoid blocking readers 
by concurrent updates using MVCC, this isn't entirely true. The moment 
one updating backend needs to scribble around in any heap or index 
block, it needs an exclusive lock on that block until it is done with 
that. It will not hold that block level lock until the end of its 
transaction, but it needs to hold it until the block is in a consistent 
state again. That means that the lower the priority of those updating 
processes, the more exclusively locked shared buffers you will have in 
the system, with the locking processes currently not getting the CPU 
because of their low priority.



Jan



Benjamin

Magnus Hagander wrote:

Most likely, you do not want to do this. You *can* do it, but you are
quite likely to suffer from priority inversion
(http://en.wikipedia.org/wiki/Priority_inversion)

//Magnus


Adam Rich wrote:
  

There is a function pg_backend_pid() that will return the PID for
the current session.  You could call this from your updating app
to get a pid to feed to the NICE command.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Benjamin Arai
Sent: Saturday, February 10, 2007 6:56 PM
To: pgsql-general@postgresql.org; pgsql-admin@postgresql.org
Subject: [GENERAL] Priorities for users or queries?


Hi,

Is there a way to give priorities to queries or users?  Something 
similar to NICE in Linux.  My goal is to give the updating (backend) 
application a very low priority and give the web application a high 
priority to avoid disturbing the user experience.


Thanks in advance!

Benjamin







  





--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Cutting the Gborg throat

2006-08-31 Thread Jan Wieck

On 8/31/2006 9:10 AM, Dave Page wrote:
 


-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Jan Wieck

Sent: 31 August 2006 13:59
To: Joshua D. Drake
Cc: Dave Cramer; Greg Sabino Mullane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Cutting the Gborg throat

On 8/28/2006 9:36 PM, Joshua D. Drake wrote:

> Hello,
> 
> O.k. so how about a phased approach?
> 
> 1. Contact maintainers to create their new projects on 
pgfoundry and 
> begin moving tickets
> 
> 2. Migrate CVS
> 
> 3. Migrate mailing lists


Apparently something cut the throat first. GBorg is down since Sunday.


Neptune lost 2 disks at once... But what's more interesting is that
yours is the first complaint I've seen.


Meaning what?

A) Will be restored from backup
B) Data is lost finally and must be recovered from other sources


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Cutting the Gborg throat

2006-08-31 Thread Jan Wieck

On 8/28/2006 9:36 PM, Joshua D. Drake wrote:


Hello,

O.k. so how about a phased approach?

1. Contact maintainers to create their new projects on pgfoundry and 
begin moving tickets


2. Migrate CVS

3. Migrate mailing lists


Apparently something cut the throat first. GBorg is down since Sunday.


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Background Writer and performances

2006-07-18 Thread Jan Wieck

On 7/10/2006 9:49 AM, Martijn van Oosterhout wrote:

On Mon, Jul 10, 2006 at 02:56:48PM +0200, DANTE Alexandra wrote:

**
I would like to send charts to show you exactly what happens on the 
server but, with the pictures, this e-mail is not posted on the mailing 
list.

I can send charts to a personal e-mail address if needed.
**


The best idea is to upload them to a website.

By comparing the charts, I can see that the checkpoints are less 
expensive in terms of disk activity, IO/s and disk write throughput when 
the parameters are set to the maximum values, but I don't manage to 
get constant disk IO/s, disk activity and disk write throughput before 
and after a checkpoint. I was expecting to see more activity on the disks 
during the bench (and not only a peak during the checkpoint) when the 
parameters are set to the maximum values. Is that possible?


I have very little experience with the bgwriter, but on the whole, I
don't think the bgwriter will change the total number of I/Os. Rather,
it changes the timing to make them more consistent and the load more
even.


The bgwriter can only "increase" the total amount of IO. What it does is 
to write dirty pages out before a checkpoint or another backend (due to 
eviction of the buffer) has to do it. This means that without the 
bgwriter doing so, there would be a chance that a later update to the 
same buffer would hit an already dirty buffer as opposed to a now clean 
one. The upside of this increased write activity is that it happens all 
the time, spread out between the checkpoints and that this doesn't allow 
for large buffer cache configurations to accumulate tens of thousands of 
dirty buffers.


The latter is a typical problem with OLTP-type benchmarks that are 
designed closer to real world behaviour, like TPC-C and TPC-W. In those 
benchmarks, hundreds or thousands of simulated users basically go 
through the dialog steps of an application, and just like real users 
they don't fill in the form in milliseconds and slam onto the submit 
button ASAP; they need a bit of time to "think" or "type". In that 
scenario, the performance drop caused by a checkpoint lets more and more 
"users" finish their think/type phase and actually submit the next 
transaction (dialog step), causing a larger and larger number of 
concurrent DB requests and basically sending the DB server into a 
downward spiral.


The default settings are not sufficient for update intense applications.
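For reference, the 8.0/8.1-era background writer is tuned via 
postgresql.conf. The fragment below shows what I believe were the stock 
defaults of that era; it is purely illustrative, not tuning advice.

```
# postgresql.conf, 8.0/8.1-era background writer settings (defaults shown;
# illustrative only, not a recommendation)
bgwriter_delay = 200            # ms between background writer rounds
bgwriter_lru_percent = 1.0      # % of recently used buffers scanned per round
bgwriter_lru_maxpages = 5       # max dirty LRU buffers written per round
bgwriter_all_percent = 0.333    # % of the whole buffer pool scanned per round
bgwriter_all_maxpages = 5       # max dirty buffers written from the full scan
```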

I am not familiar with BenchmarkSQL, but 9 terminals with a 200 
warehouse configuration doesn't sound like it is simulating real user 
behaviour as outlined above.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Long term database archival

2006-07-12 Thread Jan Wieck

On 7/12/2006 12:18 PM, Tim Hart wrote:


Wouldn't you run into driver problems if you tried to restore a 20 year old
image? After all, you probably won't be using the same hardware in 20
years...


I can't even find the same hardware I bought "last year". That's one of 
the reasons why I use VMware on my laptop. It has a hardware abstraction 
layer that presents default XVGA and Soundblaster cards etc. to the 
guest OS. When I buy a new laptop, I just install VMware on the new 
thing, copy over the virtual machines and fire them up. They don't even 
notice that they run on entirely different hardware.



Jan



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jan Wieck
Sent: Wednesday, July 12, 2006 9:26 AM
To: Karl O. Pinc
Cc: Florian G. Pflug; pgsql-general@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [GENERAL] Long term database archival

On 7/6/2006 8:03 PM, Karl O. Pinc wrote:


On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:

Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival?  That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.



Anyway, 20 years is a _long_, _long_ time.


Yes, but our data goes back over 30 years now
and is never deleted, only added to, and I
recently had occasion to want to look at a
backup from 1994-ish.  So, yeah we probably do
really want backups for that long.  They
probably won't get used, but we'll feel better.


The best way is to not just back up the data. With today's VM technology 
it should be easy enough to back up a virtual disk that contains a full 
OS and everything installed for every major Postgres release. Note that 
you would have trouble configuring and compiling a Postgres 4.2 these 
days, because you'd need to get some seriously old tools running first 
(like bmake). And 4.2 is only what, 12 years old?


That way, you would be sure that you can actually load the data into the 
right DB version.



Jan




--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Long term database archival

2006-07-12 Thread Jan Wieck

On 7/6/2006 8:03 PM, Karl O. Pinc wrote:


On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:

Karl O. Pinc wrote:

Hi,

What is the best pg_dump format for long-term database
archival?  That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.



Anyway, 20 years is a _long_, _long_ time.


Yes, but our data goes back over 30 years now
and is never deleted, only added to, and I
recently had occasion to want to look at a
backup from 1994-ish.  So, yeah we probably do
really want backups for that long.  They
probably won't get used, but we'll feel better.


The best way is to not just back up the data. With today's VM technology 
it should be easy enough to back up a virtual disk that contains a full 
OS and everything installed for every major Postgres release. Note that 
you would have trouble configuring and compiling a Postgres 4.2 these 
days, because you'd need to get some seriously old tools running first 
(like bmake). And 4.2 is only what, 12 years old?


That way, you would be sure that you can actually load the data into the 
right DB version.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Help making a plpgsql function?

2006-07-11 Thread Jan Wieck

On 7/5/2006 3:51 PM, Bjørn T Johansen wrote:


Yes, but I need to return n fields from one table, n fields from another, 
n fields from yet another table, etc., and return this as some kind of 
record... How do I do this?


I wonder why your problem can't be solved by a simple join.


Jan
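A minimal sketch of the join Jan hints at, assuming hypothetical tables 
that share a key column (every name below is made up):

```sql
-- Hypothetical schema; table and column names are assumptions.
SELECT c.id, c.name,            -- fields from the first table
       a.street, a.city,        -- fields from a second
       b.balance                -- ... and from a third
  FROM customers c
  JOIN addresses a ON a.customer_id = c.id
  JOIN accounts  b ON b.customer_id = c.id
 WHERE c.id = 42;
```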




BTJ

On Wed, 5 Jul 2006 19:13:39 +0200
Dany De Bontridder <[EMAIL PROTECTED]> wrote:


On Wednesday 05 July 2006 16:46, Bjørn T Johansen wrote:
> I need to make a function that takes one parameter and then returns a
> "record" with x number of fields, collected from x no. of tables, i.e. I
> need to run several sql statements to collect all the needed values from x
> no. of fields and insert it into a "record" and return the "record" at the
> end...
From http://www.postgresql.org/docs/7.4/interactive/plpgsql-declarations.html

Example of a function having return type SETOF record:

create function testfunc(int) returns setof record as '
declare
    output record;
begin
    for output in select * from table1 where col1 < $1 loop
        return next output;
    end loop;
    return;
end;
' language plpgsql;

executing through sql as:

select * from testfunc(6) as (col1 int, col2 float, col3 char(20));


Regards,

D.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 6: explain analyze is your friend



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] pgsql vs mysql

2006-07-11 Thread Jan Wieck

On 7/11/2006 1:36 PM, Merlin Moncure wrote:


that said, i tried to put fairness in my comparison, many pg/mysql
comparisons ulimately resort to a dismissive mysql diss which does not
play well to the uninformed third party.  so, I made an attempt at
something with some substance.


Totally understood. The life vest that is not worn because it is too 
complicated to put on doesn't save any lives. Meaning: the simplicity of 
setting up MySQL replication means it is used more often. One just has 
to keep in mind to rebuild the slaves from time to time, because they get 
out of sync without any visible failure. Slony-I, on the other hand, has a 
steeper learning curve, and although it could serve as a much more 
reliable backup solution, it isn't used nearly as often as it should be.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Notes on converting from MySQL 5.0.x to PostgreSQL

2006-07-11 Thread Jan Wieck

On 7/11/2006 11:57 AM, Scott Marlowe wrote:


On Tue, 2006-07-11 at 10:45, Jan Wieck wrote:

On 7/10/2006 10:00 PM, Alex Turner wrote:

> http://dev.mysql.com/doc/refman/5.1/en/replication-row-based.html
> 
> 5.1


Ah, thanks. So I guess 5.1.5 and 5.1.8 must be considered minor/bugfix 
releases that ship major features. I still don't understand how people 
can use software in production that has literally zero bugfix upgrade 
path without the risk of incompatibility due to new features. I consider 
every IT manager who makes that choice simply overpaid.


Dear god!  That page made my eyes bleed.

Individual users can choose the method of replication for their
sessions?

There's a mixed method that switches back and forth?


It is totally unclear from that page what would make the server decide 
when to pick one or the other method. It seems to me that this is mainly 
an optimization for many single inserts in order to get a smaller 
binlog. Note that according to this page


http://dev.mysql.com/doc/internals/en/replication-prepared-statements.html

the master currently substitutes the parameters as literals into the 
query for prepared statements.


What is also totally unclear (maybe someone with more MySQL experience 
can answer this question) is whether the binary format actually solves 
the problems discussed, namely timestamps and autoincrement. What 
exactly happens if an insert doesn't provide a value for an autoinc or 
timestamp column? Is the server-chosen value placed into the binlog when 
using row format or not?



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] pgsql vs mysql

2006-07-11 Thread Jan Wieck

On 7/11/2006 1:08 PM, Scott Marlowe wrote:


On Tue, 2006-07-11 at 11:04, Jan Wieck wrote:

On 6/30/2006 1:07 PM, Merlin Moncure wrote:

> * mysql has a few features here and there which are nice...just to
> name a few, flush tables with lock, multiple insert, etc


The multiple insert stuff is not only non-standard, it also encourages 
the bad practice of using literal values directly in the SQL string 
versus prepared statements with placeholders.


I thought it was in the SQL 99 standard...  


The SQL bible doesn't say SQL99, it says it is a DB2 specific feature.


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Notes on converting from MySQL 5.0.x to PostgreSQL

2006-07-11 Thread Jan Wieck

On 7/10/2006 10:00 PM, Alex Turner wrote:


http://dev.mysql.com/doc/refman/5.1/en/replication-row-based.html

5.1


Ah, thanks. So I guess 5.1.5 and 5.1.8 must be considered minor/bugfix 
releases that ship major features. I still don't understand how people 
can use software in production that has literally zero bugfix upgrade 
path without the risk of incompatibility due to new features. I consider 
every IT manager who makes that choice simply overpaid.



Jan



Alex

On 7/10/06, Jan Wieck <[EMAIL PROTECTED]> wrote:


On 6/30/2006 11:12 AM, Scott Marlowe wrote:
> I agree with Tom, nice notes.  I noted a few minor issues that seem to
> derive from a familiarity with MySQL.  I'll put my corrections below...
>
> On Fri, 2006-06-30 at 08:17, Jason McManus wrote:
>> On Converting from MySQL 5.0.x to PostgreSQL 8.1.x
>> --
>> Major differences I have noted:
>> ---
>>
>> MySQL 5.0.x:
>
>> * Easy, built-in and extensive replication support.
>
> Not sure how extensive it is.  It's basically synchronous single master
> single slave, right?  It is quite easy though.

Last thing I heard was that MySQL still had only statement based
replication and that it doesn't work together with some of the new
enterprise features like triggers and stored procedures. Row level
replication is on their TODO list and this major feature will probably
appear in some minor 5.2.x release.


Jan


>
>> PostgreSQL 8.1.x:
>> * Embedded procedures in multiple native languages (stored procedures
and
>>   functions can be written in native Perl, Python, Tcl, SQL, PL/PgSQL)
>
> Note that there are a dozen or more other languages as well.  Just FYI.
> Off the top of my head, plPHP, plJ (java there's two different java
> implementations, I think) and plR (R is the open source equivalent of
> the S statistics language)
>
>> * Replication support still rudimentary.
>
> H.  I think that's an overly simplistic evaluation.  The slony
> replication engine is actually VERY advanced, but the administrative
> tools consist mostly of "your brain".  hehe.  That said, once you've
> learned how to drive it, it's quite amazing.  Keep in mind, slony can be
> applied to a living database while it's running, and can run between
> different major versions of postgresql.  That's a pretty advanced
> feature.  Plus, if the replication daemons die (kill -9ed or whatever)
> you can restart replication and slony will come right back where it was
> and catch up.
>
>> Pointers, tips, quick facts and gotchas for other people converting:
>> 
>>
>> * MySQL combines the concepts of 'database' and 'schema' into
one.  PostgreSQL
>>   differentiates the two.  While the hierarchy in MySQL is
>>   database.table.field, PostgreSQL is roughly:
database.schema.table.field.
>>   A schema is a 'logically grouped set of tables but still kept within
a
>>   particular database.'  This could allow separate applications to be
built
>>   that still rely upon the same database, but can be kept somewhat
logically
>>   separated.  The default schema in each database is called 'public',
and is
>>   the one referred to if no others are specified.  This can be modified
with
>>   'SET search_path TO ...'.
>
> This is a VERY good analysis of the difference between the two
> databases.
>
>> * Pg uses a 'template1' pseudo-database that can be tailored to provide
>>   default objects for new database creation, if you should desire.  It
>>   obviously also offers a 'template0' database that is read-only and
>>   offers a barebones database, more equivalent to the empty db created
with
>>   mysql's CREATE DATABASE statement.
>
> This isn't quite right.
>
> template0 is a locked and "pure" copy of the template database.  It's
> there for "break glass in case of emergency" use. :)
>
> template1, when you first initdb, is exactly the same as template0, but
> you can connect to it, and alter it.  Both of these are "real"
> postgresql databases.  template1 is the database that gets copied by
> default when you do "create database".  Note that you can also define a
> different template database when running create database, which lets you
> easily clone any database on your machine.  "create database newdb with
> template olddb"
>
>> * Pg uses the 'serial' column type instead of AUTO_INCREMENT.  This
allows
>>   more than one independent sequence to be sp

Re: [GENERAL] pgsql vs mysql

2006-07-11 Thread Jan Wieck

On 6/30/2006 1:07 PM, Merlin Moncure wrote:


* mysql has a few features here and there which are nice...just to
name a few, flush tables with lock, multiple insert, etc


I have no clue what flushing tables with lock might be good for. Are 
applications in MySQuirreL land usually smarter than the DB engine with 
respect to when to checkpoint or not?


The multiple insert stuff is not only non-standard, it also encourages 
the bad practice of using literal values directly in the SQL string 
versus prepared statements with placeholders. It is bad practice 
because it introduces SQL injection risks, since the responsibility for 
escaping literal values lies with the application instead of the driver.


Everything that teaches new developers bad things counts as a 
disadvantage in my book, so -1 on that for MySQL too.
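The contrast Jan draws can be sketched in PostgreSQL terms; the table 
and column names below are assumptions:

```sql
-- The criticized practice: literal values inlined into the SQL string,
-- e.g.  INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y');
-- A safer pattern: a server-side prepared statement with placeholders.
PREPARE ins (int, text) AS
    INSERT INTO t (a, b) VALUES ($1, $2);
EXECUTE ins(1, 'x');
EXECUTE ins(2, 'y');
DEALLOCATE ins;
```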



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Notes on converting from MySQL 5.0.x to PostgreSQL

2006-07-10 Thread Jan Wieck

On 6/30/2006 11:12 AM, Scott Marlowe wrote:

I agree with Tom, nice notes.  I noted a few minor issues that seem to
derive from a familiarity with MySQL.  I'll put my corrections below...

On Fri, 2006-06-30 at 08:17, Jason McManus wrote:

On Converting from MySQL 5.0.x to PostgreSQL 8.1.x
--
Major differences I have noted:
---

MySQL 5.0.x:



* Easy, built-in and extensive replication support.


Not sure how extensive it is.  It's basically synchronous single master
single slave, right?  It is quite easy though.


Last thing I heard was that MySQL still had only statement based 
replication and that it doesn't work together with some of the new 
enterprise features like triggers and stored procedures. Row level 
replication is on their TODO list and this major feature will probably 
appear in some minor 5.2.x release.



Jan





PostgreSQL 8.1.x:
* Embedded procedures in multiple native languages (stored procedures and
  functions can be written in native Perl, Python, Tcl, SQL, PL/PgSQL)


Note that there are a dozen or more other languages as well.  Just FYI. 
Off the top of my head, plPHP, plJ (java there's two different java

implementations, I think) and plR (R is the open source equivalent of
the S statistics language)  


* Replication support still rudimentary.


H.  I think that's an overly simplistic evaluation.  The slony
replication engine is actually VERY advanced, but the administrative
tools consist mostly of "your brain".  hehe.  That said, once you've
learned how to drive it, it's quite amazing.  Keep in mind, slony can be
applied to a living database while it's running, and can run between
different major versions of postgresql.  That's a pretty advanced
feature.  Plus, if the replication daemons die (kill -9ed or whatever)
you can restart replication and slony will come right back where it was
and catch up.


Pointers, tips, quick facts and gotchas for other people converting:


* MySQL combines the concepts of 'database' and 'schema' into one.  PostgreSQL
  differentiates the two.  While the hierarchy in MySQL is
  database.table.field, PostgreSQL is roughly: database.schema.table.field.
  A schema is a 'logically grouped set of tables but still kept within a
  particular database.'  This could allow separate applications to be built
  that still rely upon the same database, but can be kept somewhat logically
  separated.  The default schema in each database is called 'public', and is
  the one referred to if no others are specified.  This can be modified with
  'SET search_path TO ...'.


This is a VERY good analysis of the difference between the two
databases.
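The schema/search_path behaviour described in that bullet can be 
sketched as follows (schema and table names are made up):

```sql
-- Illustrative only; names are assumptions.
CREATE SCHEMA billing;
CREATE TABLE billing.invoices (id serial PRIMARY KEY, total numeric);

SELECT total FROM billing.invoices;   -- fully qualified reference
SET search_path TO billing, public;
SELECT total FROM invoices;           -- now resolves to billing.invoices
```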


* Pg uses a 'template1' pseudo-database that can be tailored to provide
  default objects for new database creation, if you should desire.  It
  obviously also offers a 'template0' database that is read-only and
  offers a barebones database, more equivalent to the empty db created with
  mysql's CREATE DATABASE statement.


This isn't quite right.

template0 is a locked and "pure" copy of the template database.  It's
there for "break glass in case of emergency" use. :)

template1, when you first initdb, is exactly the same as template0, but
you can connect to it, and alter it.  Both of these are "real"
postgresql databases.  template1 is the database that gets copied by
default when you do "create database".  Note that you can also define a
different template database when running create database, which lets you
easily clone any database on your machine.  "create database newdb with
template olddb"


* Pg uses the 'serial' column type instead of AUTO_INCREMENT.  This allows
  more than one independent sequence to be specified per table (though the
  utility of this may be of dubious value).  These are closer to Oracle's
  concept of sequence generators, and they can be manipulated with the
  currval(), nextval(), setval(), and lastval() functions.


Don't forget 64bit bigserials too.

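The sequence functions mentioned above can be sketched as follows (the 
table name is an assumption):

```sql
-- Illustrative serial/sequence usage; names are assumptions.
CREATE TABLE items (id serial PRIMARY KEY, label text);

INSERT INTO items (label) VALUES ('first');
SELECT currval('items_id_seq');       -- value just assigned in this session
SELECT nextval('items_id_seq');       -- manually advance the sequence
SELECT setval('items_id_seq', 100);   -- reposition the sequence
```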

* Pg requires its tables and databases be 'vacuumed' regularly to remove
  completed transaction snapshots and optimize the tables on disk.  It is
  necessary because the way that PostgreSQL implements true MVCC is by
  writing all temporary transactions to disk and setting a visibility
  flag for the record.  Vacuuming can be performed automatically, and in
  a deferred manner by using vacuum_cost settings to limit it to low-load
  periods or based upon numerous other criteria.  See the manual for more
  information.


Interestingly enough, MySQL's innodb tables do almost the exact same
thing, but their vacuum process is wholly automated.  Generally, this
means fewer issues pop up for the new dba, but when they do, they can be
a little harder to deal with.  It's about a wash.  Of course, as you
mentioned earlier, most mysql folks aren't using innodb.


* While MySQL supports transactions with t

Re: [GENERAL] Adding foreign key constraints without integrity check?

2006-06-24 Thread Jan Wieck

On 6/18/2006 11:41 PM, Wes wrote:


Is there a way to add a foreign key constraint without having to wait for it
to check the consistency of all existing records?  If a database is being
reloaded (pg_dumpall then load), it really shouldn't be necessary to check
the referential integrity - or at least I should be able to stipulate that I
am accepting that risk.

My database reload is currently taking about 6 hours to load the data, 42
hours to reindex, and about another 40 hours or so to check the foreign key
constraints (about 1.2 billion rows).  That's a very long time to be out of
commission.  I'd really like to eliminate that second 40 hours so I can get
it down to a normal weekend.


The original implementation when the feature was new was to restore the 
foreign key constraints at the end of the dump with CREATE CONSTRAINT 
TRIGGER, which does not check the existing data. pg_dump was changed to 
use ALTER TABLE instead. Maybe it would be good to have the old 
behaviour as an option?
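For reference, later PostgreSQL releases (9.1 and up) added exactly this 
as a first-class option; a hedged sketch with made-up names:

```sql
-- PostgreSQL 9.1+; table and column names are assumptions.
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id)
  NOT VALID;   -- skip scanning existing rows when adding the constraint

-- Later, when convenient, verify the pre-existing data:
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;
```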



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] GPL Licensed Files in 8.1.4

2006-06-21 Thread Jan Wieck

On 6/7/2006 4:34 PM, Bruce Momjian wrote:

Tom Lane wrote:

Andrew Sullivan <[EMAIL PROTECTED]> writes:
> At the same time, it strikes me that at least the userlock stuff, and
> maybe dbmirror as well, are candidates for pgfoundry rather than
> contrib/

We'd already agreed to move dbmirror to pgfoundry, but it just didn't
get done for 8.1.  I had not thought of pgfoundry as a reasonable
solution for userlock, but maybe that's the best thing to do with it.

A better idea would be to contact the module authors and get them to
relicense, but that might be hard.  Dal Zotto at least hasn't been
seen on these lists for a long time :-(


Here is the most recent feedback we have from Massimo:

 http://archives.postgresql.org/pgsql-hackers/2001-08/msg01001.php

 > Regarding the licencing of the code, I always release my code under GPL,
 > which is the licence I prefer, but my code in the backend is obviously
 > released under the original postgres licence. Since the module is loaded
 > dynamically and not linked into the backend I don't see a problem here.
 > If the licence becomes a problem I can easily change it, but I prefer the
 > GPL if possible.



Which means thus far he did not agree to the license change. Can we just 
move the stuff over to pgfoundry and be done with it?



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [Slony1-general] [GENERAL] Is a high tab_reloid worrying?

2006-04-17 Thread Jan Wieck

On 4/13/2006 6:19 AM, John Sidney-Woollett wrote:


My tables are defined "WITHOUT OID" - does that make a difference?


That's good so far.

The other thing that is eating OIDs is temporary objects. I personally 
consider the implementation of temp tables broken for precisely that 
reason. If your application uses temp tables, sooner or later it will 
cause an OID counter wraparound, and then you run the risk of random 
transaction failures due to duplicate key errors on CREATE TEMP TABLE.



Jan



John
Hannu Krosing wrote:
> On Thu, 2006-04-13 at 10:06, John Sidney-Woollett wrote:
>> I just added a new table to a slony replication set. The new table seems
>> to have a really high tab_reloid value of 94,198,669
> ...
>> Is this something I should be worried about? Can I find out where all
>> the intermediate OIDs have gone?
>
> Probably to data rows, unless you have all your tables defined using
> WITHOUT OID. OIDs are assigned from a global "sequence".
>
> Hannu



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] bug or not? Trigger preventing delete causes circumvention

2005-12-09 Thread Jan Wieck

On 12/8/2005 9:12 PM, Luca Pireddu wrote:

I wrote a trigger function with the intent of preventing the deletion of a 
parent record when a referencing record would not allow it.  However, the 
result is that the referencing record stays, but the referenced one is gone, 
so that my foreign key constraint is not respected.  The behaviour can be 
replicated with the following:


You did something else than you intended. You prevented deletion of the 
referencing (dependent) record. That is where you defined the trigger, 
and that is what renders the foreign key constraint's DELETE operation 
on dependent a no-op.


I guess that counts more as a pilot error.
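For reference, an untested sketch of a trigger that actually prevents deleting a parent row while dependents exist; it has to be defined on the parent table, using the table names from the example quoted below:

```sql
CREATE OR REPLACE FUNCTION check_parent_delete() RETURNS trigger AS $$
BEGIN
    -- refuse to delete a parent that still has dependent rows
    IF EXISTS (SELECT 1 FROM dependent WHERE id = OLD.id) THEN
        RAISE EXCEPTION 'cannot delete parent %: dependent rows exist', OLD.id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trig_check_parent_delete BEFORE DELETE ON parent
    FOR EACH ROW EXECUTE PROCEDURE check_parent_delete();
```

Of course, a plain REFERENCES parent (without ON DELETE CASCADE) already raises an error in that situation, so the trigger is only needed for custom behavior.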


Jan



create table parent(id serial primary key);
create table dependent (id integer primary key references parent on delete 
cascade);


create or replace function check_delete() returns trigger as $$
BEGIN
  if TG_OP = 'DELETE' then
  raise notice 'preventing delete';
  return null;
  else
  return OLD;
  end if;
END;
$$
language 'plpgsql';

CREATE TRIGGER trig_check_delete BEFORE DELETE ON dependent
FOR EACH ROW EXECUTE PROCEDURE check_delete();

insert into parent values(1);
insert into dependent values(1);
delete from parent;

The record in the dependent table is left behind, while the referenced parent 
is gone.  Is this a bug?


I'm using PostgreSQL version 8.0.4 on Linux.

Luca

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Selecting Large Object and TOAST

2005-12-04 Thread Jan Wieck

On 12/4/2005 11:45 PM, Joshua D. Drake wrote:

Well, as I said, it depends on the size of the data. Are we talking 100 
meg vector images? Then large objects. Are we talking thumbnails that 
are 32k? Then bytea.


I'd say that anything up to a megabyte or so can easily live in bytea. 
Beyond that it depends on the access pattern.


That said, for certain situations I think some sql-callable functions 
would be very handy:


lo_get(oid) returns bytea
lo_set(oid, bytea) returns void
lo_ins(bytea) returns oid
lo_del(oid) returns void

Those (and maybe some more) would allow access to traditional large 
objects through client interfaces that don't support the regular large 
object calls.
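A function like the proposed lo_get() could be sketched in PL/pgSQL on top of the existing server-side large object functions (untested; 262144 is INV_READ, and the single loread() call assumes the whole object can be fetched in one read):

```sql
CREATE OR REPLACE FUNCTION lo_get(id oid) RETURNS bytea AS $$
DECLARE
    fd      integer;
    result  bytea;
BEGIN
    fd := lo_open(id, 262144);          -- 262144 = INV_READ (0x40000)
    result := loread(fd, 1073741824);   -- read up to 1 GB in one go
    PERFORM lo_close(fd);
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```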



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Selecting Large Object and TOAST

2005-12-04 Thread Jan Wieck

On 12/4/2005 9:24 PM, Joshua D. Drake wrote:



(1) what would be the return type if i want to return a large object (
XYZ.gif) to the remote client (GUI) using stored procedure.
Can anyone give an example please?
Are there any size limitations i need to consider when returning Large
Object using procedures?
You have to use a lookup table that correlates the meta information 
(filename, content-type) with a particular loid. That way you can store 
any binary you want.


This doesn't answer the question.

Fact is that most procedural languages (including PL/pgSQL) don't have 
any access to classic large objects in the first place. So all the 
stored procedure can do is to return the identifier of the large object 
to the client and the client must then use lo_open(), lo_read() etc. to 
actually get the data of the object. Not all client interfaces support 
these fastpath based libpq functions.






How do i TOAST my data stored as Large Object?

This isn't a concern as it is all internal and automatic.

You don't. You would change your schema and application to store the 
images in bytea columns instead.


Well, I have to disagree with this. It entirely depends on the size of 
the data you are storing. Bytea is remarkably inefficient.


Which would be the data type of your choice for images?


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Selecting Large Object and TOAST

2005-12-04 Thread Jan Wieck

On 12/4/2005 7:55 PM, vishal saberwal wrote:


hi,

We are storing the Icons/IMages in the database as Large Objects using
lo_import functions.

(1) what would be the return type if i want to return a large object (
XYZ.gif) to the remote client (GUI) using stored procedure.
Can anyone give an example please?
Are there any size limitations i need to consider when returning Large
Object using procedures?

(2) A statement from documentation:
"PostgreSQL 7.1 introduced a mechanism (nicknamed "TOAST") that allows data
values to be much larger than single pages. This makes the large object
facility partially obsolete."
How do i TOAST my data stored as Large Object?


You don't. You would change your schema and application to store the 
images in bytea columns instead.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] 8.03 versus 8.04

2005-12-03 Thread Jan Wieck

On 12/3/2005 1:34 PM, [EMAIL PROTECTED] wrote:

 
Can anyone specify from his/her experience what would be the benefits of using postgresql 8.04 versus 8.03  in terms of reliability  and/or performance. 


Unless forced because there is no other way to fix a bug, we do not 
change any functionality of the system within a release branch. The 
PostgreSQL version number has three components:


major.minor.patchlevel

Every minor release has its own branch within the source repository. 
Only bugfixes will be applied to a release branch. So you can safely 
assume that the difference between x.y.z1 and x.y.z2 is reliability and 
very unlikely performance.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Slow COUNT

2005-12-02 Thread Jan Wieck

On 12/2/2005 2:02 PM, Jaime Casanova wrote:

so the way to do it is to create a trigger that records in a table the
number of rows...


Neither, because now you have to update one single row in that new 
table, which causes a row exclusive lock. That is worse than an 
exclusive lock on the original table because it has the same 
serialization of writers but the additional work to update the count 
table as well as vacuum it.


What you need is a separate table where your trigger will insert delta
rows with +1 or -1 for insert and delete. A view will sum() over that 
and tell you the true number of rows. Periodically you condense the 
table by replacing all current rows with one that represents the sum().
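A minimal, untested sketch of that scheme (table and function names are made up):

```sql
-- One delta row per INSERT/DELETE on the tracked table "mytable"
CREATE TABLE mytable_count_delta (delta integer NOT NULL);

CREATE OR REPLACE FUNCTION mytable_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO mytable_count_delta VALUES (+1);
        RETURN NEW;
    ELSE                                -- DELETE
        INSERT INTO mytable_count_delta VALUES (-1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count_t AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_count_trig();

-- The true row count, without scanning mytable:
CREATE VIEW mytable_count AS
    SELECT COALESCE(sum(delta), 0) AS n FROM mytable_count_delta;
```

Because each transaction inserts its own delta rows instead of updating a shared counter row, writers do not serialize on one tuple; the periodic condensing step (DELETE all deltas and INSERT one row holding their sum, in a single transaction) keeps the delta table small.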



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] mysql replace in postgreSQL?

2005-11-01 Thread Jan Wieck

On 10/31/2005 11:58 AM, Lincoln Yeoh wrote:


At 08:24 AM 10/30/2005 -0800, David Fetter wrote:

> >http://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

>
> Erm, doesn't it have the same race conditions?

No, don't believe it does.  Have you found some?


Depends on how you do things.

As I mentioned, it's only fine if you have the relevant uniqueness constraint.


One would use MySQL's REPLACE INTO to avoid duplicates. To deliberately 
omit the UNIQUE constraint in order to make the stored procedure 
solution fail would smell a lot like the old MySQL crashme BS ... first 
create and drop 10,000 tables to bloat the system catalog, next vacuum 
with a user that doesn't have privileges to vacuum system catalogs 
(because we told them to vacuum after that silly crap test), then show 
that the system is still slow.


Using REPLACE INTO in one place and creating duplicates on purpose in 
another seems to make zero sense to me. Until someone can explain the 
reason for that to me, I claim that a UNIQUE constraint on such a key 
is a logical consequence.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Oracle 10g Express - any danger for Postgres?

2005-11-01 Thread Jan Wieck

On 11/1/2005 8:49 AM, Tom Lane wrote:

Jan Wieck <[EMAIL PROTECTED]> writes:

On 10/31/2005 1:14 PM, Chris Browne wrote:

The fact that it appears "a joke" to people wanting to deploy big
databases doesn't prevent it from taking a painful bite out of, oh,
say, certain vendors that forgot to own their own transactional
storage engine...


It's not a joke. It fits exactly the "small web application" needs. Who 
will want to pay for a commercial MySQL license when they can run Oracle 
for free?


People who can't figure out how to configure Postgres are not likely to
get far with Oracle ;-).  Unless Oracle has made some *huge* strides in
ease of installation/administration with 10g, I see this making
practically no dent in MySQL.  Or PG for that matter.  All they're
really likely to accomplish is to cannibalize some of their own low-end
sales.


With those limitations, there isn't much left to "configure". We are 
talking about a 4GB maximum DB size. That is one default tablespace with 
appropriate default extent sizes and pctinc. All the user needs to choose 
is one of 3 canned config files for using 256, 512 or 1024 MB of RAM.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Oracle 10g Express - any danger for Postgres?

2005-11-01 Thread Jan Wieck

On 10/31/2005 1:14 PM, Chris Browne wrote:


The fact that it appears "a joke" to people wanting to deploy big
databases doesn't prevent it from taking a painful bite out of, oh,
say, certain vendors that forgot to own their own transactional
storage engine...


It's not a joke. It fits exactly the "small web application" needs. Who 
will want to pay for a commercial MySQL license when they can run Oracle 
for free? Remember, the "can fix it yourself" aspect of open source 
doesn't really exist in the MySQL world, so those customers aren't really 
looking for open source, they are looking for cheap or free. With the 
control over InnoDB, Oracle has an influence on what XE is competing 
against. Both offers compete with MS SQL Express as well, so they hit a 
lot of small database competition with one stone.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] NULL != text ?

2005-10-20 Thread Jan Wieck

On 10/20/2005 6:10 AM, Alban Hertroys wrote:


Michael Glaesemann wrote:
if (OLD.value IS NOT NULL and NEW.value IS NOT NULL and OLD.value <>  
NEW.value) or OLD.value IS NULL or NEW.value IS NULL


But that's untested and I have a hard time thinking in three-value  logic.


For completeness' sake: because of short-circuit evaluation, that boils down to:

if (OLD.value IS NULL OR NEW.value IS NULL OR OLD.value <> NEW.value)


That would result in TRUE if both, OLD and NEW are NULL. Is that what 
you intended?



Jan



The last part of the expression is only evaluated if both OLD.value and 
NEW.value aren't NULL.
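For what it's worth, the NULL-safe change test can also be written with SQL's IS DISTINCT FROM predicate, which treats NULL as a comparable value (sketch):

```sql
-- TRUE when the value changed, counting NULL <-> non-NULL as a change;
-- FALSE when both sides are NULL or both are equal:
IF OLD.value IS DISTINCT FROM NEW.value THEN
    -- value changed
END IF;
```

Note that this yields FALSE when both OLD.value and NEW.value are NULL, unlike the expanded three-clause expression above.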





--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] [HACKERS] 'a' == 'a '

2005-10-20 Thread Jan Wieck

On 10/20/2005 2:17 AM, Greg Stark wrote:


(I can't believe anyone really wants varchar to be space padded. Space padding
always seemed like a legacy feature for databases with fixed record length
data types. Why would anyone want a string data type that can't represent all
strings?)


They must have buried that "bow to COBOL" so deep in the code that they 
had no choice but to abuse their power and stuff this cruft into the 
standard.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Jan Wieck

On 10/19/2005 3:46 PM, Dann Corbit wrote:

Would you want varchar(30) 'Dann Corbit' to compare equal to bpchar(30)
'Dann Corbit'?

I would.  


wieck=# select 'Jan'::varchar(20) = 'Jan'::char(20);
 ?column?
--
 t
(1 row)

wieck=# select 'Jan'::char(20) = 'Jan'::varchar(20);
 ?column?
--
 t
(1 row)

wieck=# select version();
   version
-
 PostgreSQL 8.0.4 on i386-unknown-freebsd4.9, compiled by GCC 2.95.4
(1 row)


Did I miss anything?


Jan



If both are considered character types by the language, then they must
compare that way.

Perhaps there are some nuances that I am not aware of.  But that is how
things ought to behave, if I were king of the forest.


-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Terry Fielder
Sent: Wednesday, October 19, 2005 12:37 PM
To: Marc G. Fournier
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

OK, I am not an expert on the SQL standard, but I thought the

definition

varied by data type e.g. varchar <> bpchar

Terry

Marc G. Fournier wrote:
> On Wed, 19 Oct 2005, [EMAIL PROTECTED] wrote:
>
>> I was referring to trailing blanks, but did not explicitly say it,
>> though showed it in the examples.  I am pretty sure that the SQL
>> standard says that trailing whitespace is insignificant in string
>> comparison.
>
>
> Then we are broken too :)
>
> # select 'a ' = 'a  ';
>  ?column?
> --
>  f
> (1 row)
>
> 
> Marc G. Fournier   Hub.Org Networking Services
(http://www.hub.org)
> Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ:
7615664
>
> ---(end of

broadcast)---

> TIP 4: Have you searched our list archives?
>
>   http://archives.postgresql.org
>

--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of

broadcast)---

TIP 6: explain analyze is your friend





--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] PostgreSQL Gotchas

2005-10-17 Thread Jan Wieck

On 10/17/2005 10:16 AM, Tom Lane wrote:


Jan Wieck <[EMAIL PROTECTED]> writes:
What is bad about leaving pg_catalog all lower case and expecting everyone 
to query the catalog quoted?


The fact that it will break every nontrivial client currently in
existence.  Those quotes aren't there in the clients and we can't
suddenly mandate them to become there.  Also, that approach does not
work for built-in functions (MAX, etc)


Ugh ... got me on the MAX part.


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] PostgreSQL Gotchas

2005-10-17 Thread Jan Wieck

On 10/16/2005 12:40 PM, Tom Lane wrote:

Martijn van Oosterhout  writes:

Actually, perhaps an even more restricted version would be better.
Lowercase quoted identifiers only if they are all uppercase. So then:


No, I think the original proposal was better.  This one doesn't fix
things for the lusers who think "MixedCase" and unquoted MixedCase
should be the same.


What is bad about leaving pg_catalog all lower case and expecting everyone 
to query the catalog quoted? We would be totally free as to the 
flexibility of the new option. One setting could even act 100% according 
to the standard ... a feature I'm missing so far in all the proposals.



Jan




Note to implementor: In 'SELECT 1 as "Title"', the quoted string should
not be lowercased, even if you are lowercasing everything else...


You don't get to have that, I think, because the lexer is not context
aware.  It's not clear to me why it's a good idea anyway.

regards, tom lane



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Oracle buys Innobase

2005-10-16 Thread Jan Wieck

On 10/16/2005 5:25 PM, Marc G. Fournier wrote:

On Sun, 16 Oct 2005, Chris Travers wrote:


Marc G. Fournier wrote:






They do not "own" MaxDB. They license it, just like Innodb.



Damn, do they ever have alot of "loose ends" ... what part, exactly, 
constitutes "MySQL" vs third party add ons? :)


If MaxDB, InnoDB, and DBD engines are all licensed, then they have problems.


Thank god our biggest headaches have been "can we include readline, since 
it's GPL?" and "we need to re-write ARC *just in case* IBM decides to 
enforce their patent" :)


You mean "their eventually-someday-to-be patent".


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Oracle buys Innobase

2005-10-15 Thread Jan Wieck

On 10/15/2005 6:22 AM, Thomas Beutin wrote:

Maybe they lost the development know-how for the only transaction-safe 
table type of the current mysql releases, but they still "own" the 
former Adabas/MaxDB/SAP-DB code with transaction-safe tables. Probably 
they'll force the "union" of the mysql and SAP-DB code bases to keep their 
transaction competence, but this is just my €0,02...


First, InnoDB is not the only transaction safe table type in MySQL. 
Although a poor stepchild today, there is still BDB.


Second, MySQL AB does not own the MaxDB code. I never fully understood 
what that contract was about, maybe someone from MySQL AB can explain 
that, but to my knowledge SAP AG did not transfer the copyright.


They could also go back to NuSphere, aka Multera, aka PeerDirect and ask 
what happened to the Progress storage engine Rocket.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] PostgreSQL Gotchas

2005-10-14 Thread Jan Wieck

On 10/13/2005 2:40 PM, Tom Lane wrote:

Chris Travers <[EMAIL PROTECTED]> writes:
So basically, the problem is that any fix for case folding would touch a 
fair bit of code and possibly cause other problems.  However, I haven't 
seen anyone worry about performance issues in such a fix, just that it 
might be a fair bit of work.


More that it's likely to create serious forwards, backwards, and
sideways compatibility problems :-(.  The work involved is mostly
in figuring out how to deal with that.

Since the end reward for all this work would be having to read CATALOGS
WRITTEN IN ALL UPPER CASE, none of the key developers seem very
interested ...


Just an idea ...

if the release that offers UPPER case folding as an option also makes 
sure that all internal and tool references to system catalog objects are 
properly quoted (as they IMHO should be anyway), then it would be 
reduced to a third party tool/application problem accessing the system 
catalog in a database that has this new config option selected.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-12 Thread Jan Wieck

On 10/12/2005 6:18 PM, Marc G. Fournier wrote:


On Wed, 12 Oct 2005, Jussi Mikkola wrote:


Hi,

Well, if the PostgreSQL developers would be hired away from the project with 
big money, would that not mean, that the project would be a good path to earn 
a lot of money. So, new talented developers could join the project and see 
that as a path to high salary jobs??


Wow, what a twisted way to look at it ... not entirely inaccurate, but 
twisted :)


Oracle could even develop an exceptional interest in keeping PostgreSQL 
alive as it's "future DB engineer forge".



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-11 Thread Jan Wieck

On 10/11/2005 6:31 PM, Bruce Momjian wrote:


Jim C. Nasby wrote:

Of course one flip-side to all this is that if Oracle does attack us it
actually lends credibility; it means they see PostgreSQL as a threat. At
this point that could do more good for us than harm, depending on how
exactly the attacked.


Well, that was MySQL's reaction to it, but I think the harm far
outweighs the good for them.  It's more like, "Oracle finds MySQL a
threat, what is MySQL going to do now!"  We don't want that kind of
outcome.  Also, there are ways of attacking that do not show Oracle as
an aggressor, like hiring PostgreSQL developers.


From the fact that there was first an Oracle announcement and then some 
"calming words" from MySQL, we can tell that this wasn't friendly. If it 
had been, they would have had a joint press release instead of 
this big grin from Oracle and that clenched-teeth smile from MySQL in 
return. So I agree, they are in deep trouble.


Now, as much as I agree that we should be careful and watch out, I don't 
think we should be jumping to conclusions either. Nobody outside Oracle 
knows right now what their real plan and their real target with that 
acquisition is.


Don't forget that only a part, although a substantial part, of Oracle's 
revenue comes out of the database business. One possibility is that they 
try to do birth control against a low-cost R/3 backend, which 
undoubtedly would be very bad for their CRM and ERP business in several 
ways. After failing to build any open source community, SAP had found 
MySQL, who was willing to maintain the SAP-DB sourcecode for them. If 
Oracle squishes MySQL now, SAP is back to square one on that project. 
There are many R/3 installations out there that go well beyond 1/4 
million dollars per year in DB license fees alone. So even if they can 
only delay this development by two to three years, it might pay off 
quite well.


And look at it, all Oracle would have to do is to be so open source 
friendly that they make InnoDB GPL only. Can you imagine the confusion 
in the MySQL fan club if Oracle releases the next GPL version of InnoDB 
and MySQL AB announces that they ripped out InnoDB support and favor 
something with half the feature set instead?



Jan



---




On Tue, Oct 11, 2005 at 06:04:40PM -0400, Bruce Momjian wrote:
> We have entered a new phase in the possible attacks on PostgreSQL.
> 
> The purchase of InnoDB clearly shows Oracle is ready to expend money to

> slow down competitive database technology.  Now that MySQL has been
> attacked, we should expect to be the next target.
> 
> Let's assume Oracle is willing to spend 1% of their revenue or net

> income on attacking PostgreSQL.  Given this financial statement:
> 
> 	http://finance.yahoo.com/q/is?s=ORCL&annual
> 
> that would be USD $20-100 million.  (The Oracle financial statement will

> eventually disclose the purchase price of InnoDB, and we can use that as
> a minimum amount they would be willing to spend.)
> 
> Now, I think Oracle realizes that the database will eventually become a

> commodity based on their purchase of Peoplesoft and other application
> technology.  However, every financial period they delay that time is
> more profit for them, so it is a cost/benefit of how much it is worth to
> slow down PostgreSQL.  Obviously they thought the InnoDB purchase was
> worth it to slow down or control MySQL.  Our goal should be to make the
> cost of attacks higher than the benefit.
> 
> Here are the three most likely attacks on our project:
> 
> o  Hiring 
> 
> Oracle could hire a large portion of our paid or volunteer developers,

> thereby slowing down the project.  Individuals would probably be
> approach as "We like your work on PostgreSQL and would like your
> expertise in improving Oracle", but of course once hired what they did
> for Oracle would be unimportant.  What would be important is what they
> _don't_ do for PostgreSQL.
> 
> o  Trademark
> 
> Marc Fournier owns the PostgreSQL trademark and domain names.  He could

> be attacked, perhaps by hiring him to do a job, causing it to fail, then
> suing him to obtain the trademark, and therefore the right to own the
> domain names.  The trademark has not been enforced, and it would be hard
> to enforce at this stage, but I think it would be effective in gaining
> control of the domain names.
> 
> o  Patents
> 
> Most technology people agree the software patent system is broken, but

> it could be a potent weapon against us, though we have shown we can
> efficiently remove patent issue from our code.
> 
> 
> There is probably nothing Oracle can do to permanently harm us, but

> there are a variety of things they can do to temporarily slow us down,
> and it is likely a attempt will be made in the future.  There are also
> possible threats to PostgreSQL support companies, though they are
> somewhat independent of the project.
> 
> -- 
>   Bruce Momjian 

Re: [GENERAL] Suggest forums software for postgresql?

2005-10-11 Thread Jan Wieck

On 10/3/2005 5:38 PM, Chris St Denis wrote:


Can anyone suggest good forums software to use with postgresql? I want to
integrate the forums users into my website's user base with a view. 


phpBB2 ... works just fine including upgrade procedures.


Jan



I know of FudForums but it doesn't work well with views (the upgrade script
in particular chokes very badly on them) and seems kinda buggy in general.

The other I know of is phpBB but it has a bad security track record, broke
pretty badly when I tried to integrate it, and just isn't as feature rich as
I'd like. 



Can anyone recommend any other forums software that works well?


---(end of broadcast)---
TIP 6: explain analyze is your friend



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



Re: [GENERAL] Oracle buys Innobase

2005-10-10 Thread Jan Wieck

On 10/10/2005 1:32 PM, Dann Corbit wrote:


From:
http://www.filmsite.org/whof4.html

Valiant: Come on. Nobody's gonna drive this lousy freeway when they can take 
the Red Car for a nickel.
Doom: Oh, they'll drive. They'll have to. You see, I bought the Red Car so I 
could dismantle it.

I don't think Oracle has any interest in InnoDB other than to pull the rug out 
from under the commercial version of MySQL.  Ranks right up there with MS's 
gutting of STAK and Sun's claim of language ownership for Java.
IMO-YMMV.


And this might not even be meant personally against MySQL. There is this 
old tit-for-tat between Oracle and SAP, you know? Some of that finger 
wrestling led to SAP having this other database they don't really know 
what to do with (Adabas-D AKA SAP-DB AKA MaxDB). SAP still owns the 
rights to that code, but MySQL does all the maintenance and support for 
it. And as I understood it, there were plans to rebuild the MaxDB 
functionality in a future version of MySQL because the MaxDB code isn't 
exactly maintenance friendly.


Now here is the million-dollar question: How many SAP R/3 customers would 
choose that new MySQL version over Oracle while Oracle has their hand on 
that drain plug Innobase?



Jan



Re: [GENERAL] Oracle buys Innobase

2005-10-08 Thread Jan Wieck

On 10/8/2005 12:13 PM, Bruce Momjian wrote:


Jan Wieck wrote:
To have a really good position when talking to Oracle, MySQL will need 
to brush up on the BDB support, and that pretty quick.


What about the patents InnoDB might hold?  It would be easier to enforce
a patent based on the fact that they are using code actually developed
by the patent holder.


That too.

What strikes me a little odd is how brief the responses from the MySQL 
side are. Marten Mickos welcomes them, does some 2 sentence handwaving 
about licensing and the glorious freedom of open source, and then the 
rest of the statement is the usual blah blah about MySQL that you find 
in every other press release.


It almost seems as if MySQL wasn't exactly prepared for this deal to 
come through - or worse, that they are surprised about it. Although I 
can't believe they wouldn't have known about it in advance.



Jan



Re: [GENERAL] Oracle buys Innobase

2005-10-08 Thread Jan Wieck

On 10/8/2005 4:34 AM, Andreas Kretschmer wrote:


Bruce Momjian  schrieb:

Ultimately, MySQL should drop InnoDB.


http://forums.mysql.com/read.php?3,48400,48400#msg-48400

InnoDB is GPL. But, i'm also confused.

My guess: a fork in the future.


This whole GPL forking thing is still the same as it was before. One can 
only take the last version, released under GPL, and build a GPL-only 
project based on it.


Oracle bought the copyright of InnoDB with the company. So if anything 
goes wrong during their upcoming relicensing talk, MySQL can of course 
fork off a GPL version of InnoDB, but that fork cannot be included in 
their commercial version of MySQL. What value would that fork have for 
them then? Using a pure GPL fork of InnoDB is in conflict with their own 
licensing scheme and I don't think MySQL is in the position to say bye 
to dual licensing.


To have a really good position when talking to Oracle, MySQL will need 
to brush up on the BDB support, and that pretty quick.



Jan



Re: [GENERAL] PostgreSQL 8.1 vs. MySQL 5.0?

2005-10-08 Thread Jan Wieck

On 10/6/2005 4:37 AM, Tzvetan Tzankov wrote:
They have collation and multiple charactersets per table etc., which 
actually is from 4.1 (not new in 5.0), and PostgreSQL has only one 
collation per database cluster :-(
Otherwise I think their features are all there, but most of them cannot 
be used together (you can have foreign keys, but not together with fulltext ...)


AFAIK MySQL's fulltext indexing is only supported on MyISAM tables, so 
if you want to use it, you lose ACID, hot backup and a couple other nice 
things entirely for that part of your data. Many MySQL users still 
believe that the pluggable storage engine design is an advantage ... I 
think one storage engine that supports the full feature set is better.



Jan



CSN wrote:

Just so I know (and am armed ;) ), are there any new
comparable features in MySQL 5.0 that aren't in
PostgreSQL up to the forthcoming 8.1? AFAIK, PG just
lacks updatable views (which are on the TODO).

MySQL 5.0 new features
http://dev.mysql.com/doc/mysql/en/mysql-5-0-nutshell.html

Thanks,
CSN







Re: [GENERAL] Triggers after a rule

2005-09-29 Thread Jan Wieck

On 9/28/2005 5:44 AM, Wijnand Wiersma wrote:


Hi list,

I am currently trying to give normal users some read access to some
tables in the database. I also need to give update access to one
column of one table.

I have the table contact, the user should not be able to read or
update anything in it, except for his own record. So I created the
view v_my_account. When the user selects * from it he only sees his
own record. That works great. I also made a rule:
CREATE RULE update_v_my_account AS ON UPDATE TO v_my_account
DO INSTEAD
UPDATE contact set pause=NEW.pause where username=USER;


You probably want that to be

  DO INSTEAD
  UPDATE contact set pause=NEW.pause where username=OLD.username;

This will still not allow the user to update others' records, because 
the internal query tree for the update will have the view's WHERE clause 
attached too, and that already limits the result set.




This does not work since there are some triggers on the contact table
and the trigger function selects the contact table and I don't want to
give the user access to that.


You want the trigger functions to be declared SECURITY DEFINER.
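A hedged sketch of that suggestion (function name and body are placeholders, not from the thread; dollar-quoting assumes 8.0-era or later syntax). A SECURITY DEFINER function runs with the privileges of its owner, so its internal SELECT on contact succeeds even for users with no privileges on that table:

```sql
-- Runs as the function owner, not the invoking user (names are illustrative)
CREATE FUNCTION contact_check() RETURNS trigger AS $$
BEGIN
    -- this lookup works even though the caller cannot read contact directly
    PERFORM 1 FROM contact WHERE username = NEW.username;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```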


Jan



Re: [GENERAL] RI_ConstraintTrigger question

2005-09-27 Thread Jan Wieck

On 9/27/2005 3:27 PM, Tom Lane wrote:


Jan Wieck <[EMAIL PROTECTED]> writes:

On 9/27/2005 12:20 AM, George Essig wrote:

We have a database with about 30 tables and some RI. The RI constraints,
however, were not named upon creation of the database 2-3 years ago and
now when we get an error it contains "<unnamed>" for the constraint.


What you could do is to dump the database, edit the dump and restore it. 


Why not just drop and re-add the FK constraints?


Dropping unnamed constraints can be a bit tedious.


Jan



Re: [GENERAL] RI_ConstraintTrigger question

2005-09-27 Thread Jan Wieck

On 9/27/2005 12:20 AM, George Essig wrote:


On 9/26/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


We have a database with about 30 tables and some RI. The RI constraints,
however, were not named upon creation of the database 2-3 years ago and
now when we get an error it contains "<unnamed>" for the constraint.

I tried Google and the documentation, and I still have 2 questions -

1. Is it possible to rename RI_ConstraintTrigger, so that we do not get
"<unnamed>" in the errors.

2. Is there an explanation somewhere of what the RI_FKey_ procedures mean?



I think RI stands for referential integrity. Foreign keys used to be
implemented using 'create constraint trigger', which automatically names
triggers 'RI_ConstraintTrigger_' followed by an integer, which I guess is
an oid (object id).


CREATE CONSTRAINT TRIGGER was an interface also provided for database 
dumps, so that the constraints can be restored in the schema without 
checking the reloaded data. This possibility has since been abandoned.


This however has nothing to do with naming constraints. Newer PG 
versions have a different default naming scheme for constraints the 
user didn't explicitly provide a name for, which is Table_Column_fkey 
instead of "<unnamed>". This is stored in pg_trigger.tgconstrname.


What you could do is to dump the database, edit the dump and restore it. 
If it's a big database, you might want to take separate schema- and 
data-dumps.



Jan



Constraint triggers execute functions to implement a constraint. RI_FKey_...
are the functions that implement foreign key constraints for different
events like insert, update, and delete.

When you upgrade a database it's likely that the oids for different database
objects will change. It sounds like somehow you upgraded and retained
references to old oids which don't exist anymore. Just a guess.

I suggest you upgrade to a newer version of PostgreSQL, drop all of the
'RI_ConstraintTrigger_' triggers, and recreate the foreign keys.

George Essig






Re: [GENERAL] Replicating new sequences

2005-09-26 Thread Jan Wieck

On 9/22/2005 3:54 PM, Todd Eddy wrote:

I know this gets asked all the time, but I'm having issues with  
replication.  I got Slony setup between two computers and that does  
replication of transactions.  But we have a table that because of how  
it works new sequences are added on a somewhat regular basis, maybe a  
couple times a day.  Also replication needs to be "almost" instant  
because we distribute the load so updates go to one database and  
selects happen on another database.  So having these sequences  
replicate on a fairly quick basis is important.


I think what we need is a log based replication solution instead of  
the trigger based solution of Slony or other replication systems out there.


I think it makes little difference how the changes are recorded 
(triggers or WAL log analysis). The thing that matters is that both are 
asynchronous replication methods which allow the master to run ahead.


What you are asking for is a synchronous replication system.

I hope that your load balancing system is smart enough to at least 
guarantee that "SELECT ... FOR UPDATE" queries are executed in the same 
"master transaction" as their corresponding "UPDATE" queries. If that is 
the case, you shouldn't really have much of a problem. If not, I am not 
even sure a synchronous replication system under MVCC can guarantee 
consistency for you.



Jan



Re: [GENERAL] Cost based SELECT/UPDATE

2005-09-22 Thread Jan Wieck

On 9/7/2005 10:45 PM, Leonid Safronie wrote:


Hi, ppl

Is there any way to do SELECTs with different priorities?

Once a month I need to do some complex reports on table with over 7
billion rows, which implies several nested SELECTS and grouping (query
runs over 20 minutes on P4/2.4GHz). Concurrently, there are over 50
processes updating tables in the same database, including table being
SELECTed to do monthly report. The issue is that response time for
these 50 processes is very important unlike for report generation, and
time spent by these processes while report running is unacceptable for
my production environment (response time grows from 1-3 seconds up to
1-2 minutes).

Is there any way to give different priorities to different
queries/transactions, as it's done for VACUUMing (vacuum_cost_*
options in config file)?



Fiddling with Postgres backend priorities (nice) has been beaten to death 
before, and was always rejected for good reasons.


Which Postgres version is this? Everything pre-8.0 will suffer from 
complete cache eviction on sequential scans of huge tables.


Have you thought about replicating the database to a "reporting slave"?


Jan



Re: [GENERAL] Postgresql Hosting

2005-09-08 Thread Jan Wieck

On 8/4/2005 10:28 PM, Richard Sydney-Smith wrote:
I have asked my internet host to include postgresql as part of their 
service but it seems that there are issues in getting it to work with 
"cpanel", which is their support service for their clients. Is there a 
reason why PostgreSQL is harder to host than MySQL? Are there any docs I 
can point the server admin to that would help him?


If not, what service providers are people recommending?


http://cwihosting.com/

Apache with Frontpage extensions (if you want them), PHP, PostgreSQL and 
ssh access including crontab support. Having pl/pgsql added to template1 
was done in no time. I only had to put a binary cvs executable there so 
that I can develop somewhere else and deploy the changes via cvs update.



Jan



Re: [GENERAL] Query stucked in pg_stat_activity

2005-08-09 Thread Jan Wieck

On 8/9/2005 12:21 PM, Tom Lane wrote:


Csaba Nagy <[EMAIL PROTECTED]> writes:

I've executed a "select pg_stat_reset();" as superuser, and all went
away except the offending row...


That only resets the I/O counts (and only for one database), not the
backend activity info.



This reminds me I've forgot to ask, is there any other way of getting
rid of those ghost entries than via a big load?


Not at the moment.  It might be worth teaching the pgstats code to
cross-check the activity list every so often, but the only place
where it'd really fit naturally is vacuum_tabstats which is probably
not executed often enough to be helpful.

Or maybe we could just filter the data on the reading side: ignore
anything the stats collector reports that doesn't correspond to a
live backend according to the PGPROC array.

Jan, any thoughts?


The reset call is supposed to throw away everything. If it leaves crap 
behind, I'd call that a bug.


IIRC the pg_stat functions don't examine the shared memory, but rely 
entirely on information from the stats file. It sure would be possible 
to add something there that checks the PGPROC array.



Jan



Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Jan Wieck

On 7/28/2005 2:28 PM, Tom Lane wrote:


Jan Wieck <[EMAIL PROTECTED]> writes:

On 7/28/2005 2:03 PM, Tom Lane wrote:

Well, there's the problem --- the stats subsystem is designed in a way
that makes it rewrite its entire stats collection on every update.
That's clearly not going to scale well to a large number of tables.
Offhand I don't see an easy solution ... Jan, any ideas?


PostgreSQL itself doesn't work too well with tens of thousands of 
tables.


Really?  AFAIK it should be pretty OK, assuming you are on a filesystem
that doesn't choke with tens of thousands of entries in a directory.
I think we should put down a TODO item to see if we can improve the
stats subsystem's performance in such cases.


Okay, I should be more specific. The problem with tens of thousands of 
tables does not exist just because of them being there. It will emerge 
if all those tables are actually used because it will mean that you'd 
need all the pg_class and pg_attribute rows cached and also your vfd 
cache will constantly rotate.


Then again, the stats file is only written. There is nothing that 
actually forces the blocks out. On a busy system, one individual stats 
file will be created, written to, renamed, live for 500ms and be thrown 
away by the next stat files rename operation. I would assume that with a 
decent filesystem and appropriate OS buffers, none of the data blocks of 
most stat files even hit the disk. I must be missing something.
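The create-write-rename lifecycle described above can be sketched like this (the file name and contents are illustrative, not the server's actual stats format):

```shell
# Write the new stats into a temporary file first ...
tmp=$(mktemp ./pgstat.tmp.XXXXXX)
echo "table 1234: 42 scans" > "$tmp"

# ... then atomically replace the old file. rename() within one filesystem
# is atomic, so a reader only ever sees either the old or the new complete
# file, and a short-lived temp file's blocks may never hit the disk at all.
mv "$tmp" ./pgstat.stat

cat ./pgstat.stat    # prints: table 1234: 42 scans
```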



Jan



Re: [GENERAL] Megabytes of stats saved after every connection

2005-07-28 Thread Jan Wieck

On 7/28/2005 2:03 PM, Tom Lane wrote:


Phil Endecott <[EMAIL PROTECTED]> writes:
For some time I had been trying to work out why every connection to my 
database resulted in several megabytes of data being written to the 
disk, however trivial the query.  I think I've found the culprit: 
global/pgstat.stat.  This is with 7.4.7.


This is for a web application which uses a new connection for each CGI 
request.  The server doesn't have a particularly high disk bandwidth and 
this mysterious activity had been the bottleneck for some time.  The 
system is a little unusual as one of the databases has tens of thousands 
of tables (though I saw these writes whichever database I connected to).


Well, there's the problem --- the stats subsystem is designed in a way
that makes it rewrite its entire stats collection on every update.
That's clearly not going to scale well to a large number of tables.
Offhand I don't see an easy solution ... Jan, any ideas?


PostgreSQL itself doesn't work too well with tens of thousands of 
tables. I don't see much of an easy solution either. The best workaround 
I can offer is to move that horror-DB to a separate postmaster with 
stats disabled altogether.



Jan



Re: [GENERAL] plpgsql constraint checked data fails to restore

2005-06-23 Thread Jan Wieck

Added pgsql-hackers
Added Bruce Momjian


On 6/23/2005 12:19 PM, Michael Fuhr wrote:
The question I have is how exactly you manage to get the trigger fired 
when restoring the dump. By default, the dump created by pg_dump will 
create the table, fill in the data and create the trigger(s) only after 
that.


Not true for CHECK constraints -- pg_dump creates them with the
CREATE TABLE statement:


This is still true in 8.1's pg_dump, even though check constraints can 
be added later. Even though it is bad practice to have functions that 
rely on or even manipulate other objects in a CHECK constraint, I think 
pg_dump should add the check constraints in the same manner as it does 
triggers.


Bruce, do we have a TODO item for this?


Jan



Re: [GENERAL] plpgsql constraint checked data fails to restore

2005-06-23 Thread Jan Wieck

On 6/20/2005 1:23 PM, Lee Harr wrote:


Some of the data required by the check function
is being restored after the data being checked
by the function and so it all fails the constraint.


Are you saying that the check function perform queries against other
data?  That might not be a good idea -- consider what happens if
the data changes: would changes invalidate records that had previously
passed the check but that wouldn't pass now if they were checked
again?


You ask some great questions. Thanks.


But not the really important one :-)



I think maybe I just got a little constraint-happy. The way I have
it, there is definitely a possibility for the other data to change
out from under the constraint. That can't be good.

Right now, I don't really see another way to check what I wanted
to check, so I am just going to remove the constraint.

When I get a few minutes, I will post my simplified example and
maybe someone will have a good idea.


The question I have is how exactly you manage to get the trigger fired 
when restoring the dump. By default, the dump created by pg_dump will 
create the table, fill in the data and create the trigger(s) only after 
that. From that I conclude that you are taking a data-only dump and 
restore the schema first either from a text file or a separate pg_dump 
schema only.


If you do keep your schema in external ascii files and do data-only 
dumps, you have to split the schema into table creation (without 
constraints, indexes, etc.) and a second part that adds all the 
constraints and indexes after the data is loaded.
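One way to get that split automatically; note that the --section switch only exists in pg_dump releases well after the ones discussed in this thread, so treat this as a sketch of the idea rather than advice that applies to those versions (database names are placeholders):

```shell
# 1. table definitions only, no constraints or indexes
pg_dump --schema-only --section=pre-data mydb  > schema_pre.sql
# 2. the data itself
pg_dump --data-only mydb                       > data.sql
# 3. constraints, indexes, triggers - applied after the data is loaded
pg_dump --schema-only --section=post-data mydb > schema_post.sql

# restore in the same order
psql newdb -f schema_pre.sql
psql newdb -f data.sql
psql newdb -f schema_post.sql
```

On versions without --section, the equivalent is exactly what the post describes: keep the schema in two hand-maintained files and load the constraint/index file last.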



Jan



Re: [GENERAL] numeric precision when raising one numeric to another.

2005-06-06 Thread Jan Wieck

On 5/20/2005 2:26 PM, Tom Lane wrote:


numeric_power can in theory deliver an exact answer when the exponent is
a positive integer.  Division can deliver an exact answer in some cases
too --- but the spec doesn't say it must do so when possible.  So I
would say that there is no spec requirement for special behavior for
integral exponents.


There are cases where numeric_power could in theory deliver an exact 
answer for a fractional exponent. That is when the exponent is a natural 
fraction, because the result is the m'th root of x^n (for n/m). As an 
example, 4^1.5 = 8. Of course, the m'th root needs to produce a finite 
result, which I think is not guaranteed for arbitrary numbers.


I'm not advocating to do that, just saying it is theoretically possible 
for a subset of possible inputs.
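The example works because, for a rational exponent n/m in lowest terms:

```latex
x^{n/m} = \sqrt[m]{x^{n}},
\qquad\text{e.g.}\quad
4^{1.5} = 4^{3/2} = \sqrt{4^{3}} = \sqrt{64} = 8 .
```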



Jan



Re: [GENERAL] plperl function fails to "fire" Slony trigger

2005-04-22 Thread Jan Wieck
On 4/22/2005 2:08 PM, Tom Lane wrote:
Sven Willenberger <[EMAIL PROTECTED]> writes:
We have a replication set up between 2 servers using Slony; both are
runnind PostgreSQL 8.0.1. The issue is that when updates/inserts are
made to a replicated table, the replication does not occur; apparently
this is due to spi_exec somehow not allowing/causing the slony trigger
function to fire.
Yuck :-(.  The only idea that comes to mind is that 8.0 changed the
timing of trigger firing --- the triggers are probably firing while your
function still has control, whereas in earlier releases they'd only fire
after it returns.  Could this be breaking some assumption Slony makes
about the order of operations?
			regards, tom lane
Slony triggers are AFTER ROW triggers. All they do is one SPI_execp() to 
insert the log row. The only way that could possibly be suppressed is by 
bypassing the executor and doing direct heap_ access.

So how does plperl manage that?
Jan


Re: [GENERAL] I'm OWNER of the db but I get `permission denied` when

2005-04-20 Thread Jan Wieck
On 3/23/2005 11:22 AM, Joshua D. Drake wrote:
David Gagnon wrote:
Hi all,
I just created a new db which userX owns.  I log in via pgAdminIII 
with the same user but I can't update pg_class.

You are a datdba but not a superuser :). You have to be a superuser
to update pg_class.
and not only a superuser, but one with usecatupd set to true as well.
Jan
Sincerely,
Joshua D. Drake
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'ic'
I get:ERROR:  permission denied for relation pg_class
I do that on my dev env.  The only difference I saw between users is 
that my DEV user has the privilege to create databases (but it shouldn't 
matter...?!)

We create the db with : createdb -O userX -E UNICODE webCatalogTest
Thanks for your help!
/David




Re: [GENERAL] Compression

2005-03-21 Thread Jan Wieck
On 3/20/2005 10:50 PM, Bruce Momjian wrote:
Stanislaw Tristan wrote:
Is it possible to compress traffic between server and client while the 
server returns query results?
This is very relevant for dial-up users.
What is the solution?
No, unless SSL compresses automatically.
You can use ssh port forwarding with compression. Works quite well.
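A typical invocation of that approach (host names, ports, and database name are placeholders): -C enables compression on the tunnel and -L forwards a local port to the server's PostgreSQL port.

```shell
# Compressed tunnel: local port 5433 -> port 5432 on the database host
ssh -C -N -L 5433:localhost:5432 user@db.example.com &

# The client then connects through the tunnel:
psql -h localhost -p 5433 mydb
```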
Jan


Re: [GENERAL] [pgsql-advocacy] MySQL worm attacks Windows servers

2005-03-07 Thread Jan Wieck
On 2/6/2005 4:31 PM, Greg Stark wrote:
Jan Wieck <[EMAIL PROTECTED]> writes:
No, Peter.
Posting a vulnerability on a public mailing list "before" there is a known fix
for it means that you put everyone who has that vulnerability into jeopardy.
Vulnerabilities are a special breed of bugs and need to be exterminated a
little differently.
Many people disagree with this. Posting the vulnerability isn't what puts
people into jeopardy, the presence of the vulnerability puts people in
jeopardy. Posting it at least allows people to disable the feature or close
off access. Or at least monitor for possible intrusions. Not posting it leaves
people in jeopardy and in the dark about it. 

If you think you're the first one to find the vulnerability you're probably
wrong. Often malicious hackers who search for vulnerabilities find them and
keep them secret long before they're reported. 

How would you feel if your system was compromised and then you found out later
that it was a known security hole in a feature you had no need for and the
vulnerability had been kept secret?
It's interesting that everyone advocating for "immediate public report" 
is always talking about vulnerabilities that can be taken care of by
disabling some unused feature. What do you do if you find a 
vulnerability in the text/varchar data type multibyte handling? Still 
tell the world about it before having a fix?

Jan


Re: [GENERAL] [pgsql-advocacy] MySQL worm attacks Windows servers

2005-02-06 Thread Jan Wieck
On 1/30/2005 10:18 AM, Peter Eisentraut wrote:
Dawid Kuroczko wrote:
I think it is in good taste that when you find a
bug/vulnerability/etc first you contact the author (in this case:
core), leave them some time to fix the problem and then go on
announcing it to the
world.
In this case, core is not the author of the object in question.  And of 
course, to report a "bug/vulnerability/etc" you would write to 
pgsql-bugs, not core.

No, Peter.
Posting a vulnerability on a public mailing list "before" there is a 
known fix for it means that you put everyone who has that vulnerability 
into jeopardy. Vulnerabilities are a special breed of bugs and need to 
be exterminated a little differently.

Jan


Re: [GENERAL] Is there a peer-to-peer server solution with PG?

2005-02-05 Thread Jan Wieck
On 2/4/2005 5:56 AM, Mike Nolan wrote:
If you have so much update load that one server cannot accommodate that
load, then you should wonder why you'd expect that causing every one
of these updates to be applied to (say) 3 servers would "diminish"
this burden.
The update/query load isn't the real issue here, it's that these two
servers will be 800 miles apart and there are some advantages in having
each office connect to its local database rather than having one of
them connect to the remote master. 
You do realize that any multimaster replication system that is designed 
to avoid complex, business-process-based conflict resolution 
mechanisms necessarily has to be based on two-phase commit or similar? So 
your global write transaction throughput will be limited by the latency 
of your WAN, no matter what bandwidth you have. And as per RFC 1925: No 
matter how hard you push and no matter what the priority, you can't 
increase the speed of light.
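The latency bound is easy to quantify: a serialized two-phase commit needs at least one WAN round trip, so per session (the RTT figure is illustrative, not from the thread):

```latex
\text{commits/sec per session} \;\le\; \frac{1}{\text{RTT}},
\qquad\text{e.g.}\quad
\text{RTT} = 20\,\text{ms} \;\Rightarrow\; \le 50\ \text{commits/sec}.
```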

I think what you are really looking for is an application-internal, 
abstraction-layer-based multimaster replication approach.

Jan

The Slony-1 approach will work, assuming I've got sufficient network
bandwidth to support it plus the traffic from the remote office plus 
existing outside traffic from our public website.  

That's one of those things you just don't know will work until you
have it built, so I'm looking for other options now while I have time
to consider them.  Once I get on-site in two weeks it'll be a lot more hectic.
--
Mike Nolan


Re: [GENERAL] pgpool 2.5b2 released

2005-02-03 Thread Jan Wieck
On 2/2/2005 11:57 AM, Bruce Momjian wrote:
Tatsuo Ishii wrote:
Pgpool 2.5b2 supports "master slave mode" which can cope with
master/slave replication softwares such as Slony-I. In this mode
pgpool sends non SELECT queries to master only. SELECTs are load
balanced by pgpool.
Other features of 2.5b2 include:
- ability to add timestamp to each log entry
- control to whether cache connection info or not
pgpool 2.5b2 is available at:
http://pgfoundry.org/projects/pgpool/
Wow, that is great!  I know Jan was waiting for this.
This is incredible! I definitely did wait for this and some people at 
the Solutions Linux here in Paris did ask about it. Too bad that I 
didn't read this earlier, it would have made their day.

Thank you Tatsuo. I will check this out when I'm back home.
Jan


Re: [GENERAL] When to encrypt

2004-12-10 Thread Jan Wieck
On 12/6/2004 6:10 PM, Greg Stark wrote:
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
Actually, hard disk encryption is useful for one thing: so if somebody
kills the power and takes the hard disk/computer, the data is safe.
While it's running it's vulnerable though...
Where do you plan to keep the key?
I was wondering where he keeps his servers. Are they kept under a little 
rain shelter in the parking lot, so that whoever steals them at least 
can't sue the company for hurting his back while carrying the racks outside?

Jan


Re: [GENERAL] When to encrypt

2004-12-06 Thread Jan Wieck
On 12/6/2004 1:33 AM, Derek Fountain wrote:
On Monday 06 December 2004 12:31, you wrote:
Derek Fountain <[EMAIL PROTECTED]> writes:
> If another SQL Injection vulnerability turns up (which it might, given
> the state of the website code),
You will never see another SQL injection vulnerability if you simply switch
to always using prepared queries and placeholders.

Indeed, but I'm still interested in the general answer. The server I have been 
looking at was hopelessly insecure and SQL injection is only one of its 
problems. There were several other ways in! Assume, for example, an attacker 
can write his own script directly into the website document tree. In this 
case prepared queries don't help protect what's in the database. The attacker 
can use them himself if he likes!
I don't quite see how encrypted storage of data can solve your problem. 
Somehow the web application must be able to unlock/decrypt the data. 
Either on a per session level, or by passing in a key with every query. 
Giving out the encrypt/decrypt keys to the end users, so that they have 
to supply them at login time, is probably as secure as putting them in 
cleartext onto the homepage. So they must be stored readable somewhere 
by the middleware system.

It does obscure the data a little more. At the same time it might give 
the Web application developer a completely false sense of security.
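
As a sketch of why the key placement matters (this assumes the pgcrypto
contrib module is installed; the table, column and key below are made up
for illustration), note that the key has to accompany every statement, so
whichever layer issues the queries must hold it in readable form:

```sql
-- Hypothetical pgcrypto usage: the key travels with every statement.
CREATE TABLE secrets (id serial PRIMARY KEY, payload bytea);

INSERT INTO secrets (payload)
    VALUES (pgp_sym_encrypt('card number 1234', 'middleware-key'));

-- Anyone (or any injected script) that can read the middleware's key
-- can decrypt the data just the same:
SELECT pgp_sym_decrypt(payload, 'middleware-key') FROM secrets;
```

An attacker who can plant scripts in the document tree can read the key
from wherever the middleware reads it, which is the point above.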

Jan


Re: [GENERAL] Constaints

2004-12-05 Thread Jan Wieck
On 12/5/2004 12:48 PM, Jake Stride wrote:
Bruno Wolff III wrote:
On Sun, Dec 05, 2004 at 16:55:33 +,
 Jake Stride <[EMAIL PROTECTED]> wrote:
 

Is there any way to declare a constant that you can then use within a 
PostgreSQL 'session'? I am connecting from a PHP-based application and 
trying to integrate another.
   

At the worst you should be able to use a table with a row for each session
that includes the value to be used for each session.
 

Would this be a postgresql session? If so how do I go about accessing it 
from a query/setting the value of it? I have looked at set authorization 
but I don't think this is where I should be looking!
PostgreSQL does not have userland session variables. You would have to 
write some custom set/get functions in a procedural language that is 
capable of holding global data across function calls (like PL/Tcl).
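
A minimal sketch of that PL/Tcl approach might look like the following
(function names are made up, and it assumes the pltcl language is
installed; PL/Tcl's GD array persists across calls within one session):

```sql
CREATE FUNCTION set_session_var(text, text) RETURNS text AS '
    set GD($1) $2
    return $2
' LANGUAGE pltcl;

CREATE FUNCTION get_session_var(text) RETURNS text AS '
    if {[info exists GD($1)]} {
        return $GD($1)
    }
    return ""
' LANGUAGE pltcl;

SELECT set_session_var('company_id', '42');
SELECT get_session_var('company_id');  -- same session sees the value back
```

Since GD lives in the session's Tcl interpreter, each connection gets its
own independent set of values, which is what the original poster wanted.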

Jan


Re: [GENERAL] 3rd RFD: comp.databases.postgresql

2004-12-04 Thread Jan Wieck
On 12/3/2004 4:12 PM, Mike Cox wrote:
Jan Wieck wrote:
So how exactly do you think that big number of non-developer advanced
PostgreSQL users will populate the comp.* groups? I don't see them there
right now, and without them the comp.* groups are the wrong groups
because you will not get answers to questions there.
Look at this poll.
http://scripts.postgresql.org/survey.php?View=1&SurveyID=36
The question is only about _an official newsgroup_. To contradict my 
statement above, the survey question would have to ask about _an 
official comp.* group NOT gated to the mailing lists_. And it would also 
have to point out who already said very clearly that and why they would 
stay on the mailing lists only.

Interpreting this survey result as
"a large number of experienced and advanced PostgreSQL users would 
consider to move to a newsgroup where their questions will not be read 
by Tom Lane, Stephen Szabo (and many other key players who said NO to 
this on the mailing list already)"

is IMHO too far a stretch.
Jan


Re: [GENERAL] 3rd RFD: comp.databases.postgresql

2004-12-03 Thread Jan Wieck
On 12/3/2004 3:32 PM, Woodchuck Bill wrote:
[EMAIL PROTECTED] ("Joshua D. Drake") wrote in
news:[EMAIL PROTECTED]: 


So the current state of affairs is that we have the gated, official 
pgsql.* newsgroups, and the comp.* stuff is not gated in either
direction? 
Yes.
If that's the case, there should be a weekly/monthly reminder posting
on the comp.* side set up, pointing out that these are not official
groups and that real PostgreSQL questions are better asked somewhere
else, if the intention is to reach real developers and get real help.
I don't want to see people wasting a lot of time or getting confused
because they found the wrong newsgroups first.
Why would the comp.* group be the *wrong* group? Just an additional 
resource. The proponent said that he would post weekly pointers about the 
pgsql.* hierarchy to the comp.* group, but expecting him to post something 
negative about the comp.* group is asking too much of him. This group is 
not meant to be a competing resource..it is just another channel, and 
another plug for the open-source community. Stop treating it like a bad 
thing. 

You are insulting non-developer advanced pgsql.* users that would be using 
the comp.* group by inferring that only the developers are capable of 
answering questions. Do the Oracle developers, or MSsql developers 
participate in the respective comp.* groups for their products? Most 
probably not. Are those newsgroups extremely useful resources for users of 
those products? Very much so.
I didn't say that only developers are capable of that.
Since the mailing list to comp.databases.postgresql.general gating was 
stopped over a week ago, there has been zero communication on that 
newsgroup. I guess, that currently all of the developers and advanced 
users are either on the mailing list or using the pgsql.* groups.

And since there are no forums at all where you have direct access to 
Oracle or MSSQL developers, this isn't exactly what I call a good 
example. Would they still be that useful if, as in our case, all 
developers, experienced DBAs and advanced users were on oracle.* or 
microsoft.* groups already?

So how exactly do you think that big number of non-developer advanced 
PostgreSQL users will populate the comp.* groups? I don't see them there 
right now, and without them the comp.* groups are the *wrong* groups 
because you will not get answers to questions there.

Jan


Re: [GENERAL] 3rd RFD: comp.databases.postgresql

2004-12-03 Thread Jan Wieck
On 12/3/2004 1:59 PM, Marc G. Fournier wrote:
On Fri, 3 Dec 2004, Net Virtual Mailing Lists wrote:
My only suggestion:
I don't care what you do with the newsgroups, just don't screw with the
mailing lists.  If the mailing lists go away, I will be *EXTREMELY*
disappointed!
this RFD in no way affects the mailing lists, and is in no way an 
'official PostgreSQL newsgruop' ... the 'official newsgroups' are the 
gated ones under pgsql.* ...
So the current state of affairs is that we have the gated, official 
pgsql.* newsgroups, and the comp.* stuff is not gated in either direction?

If that's the case, there should be a weekly/monthly reminder posting on 
the comp.* side set up, pointing out that these are not official groups 
and that real PostgreSQL questions are better asked somewhere else, if 
the intention is to reach real developers and get real help. I don't 
want to see people wasting a lot of time or getting confused because 
they found the wrong newsgroups first.

Jan


Re: [GENERAL] Select Database

2004-12-02 Thread Jan Wieck
On 12/2/2004 4:39 AM, ON.KG wrote:
Hi All!
How could I select another database without new connection?
For example, in PHP+MySQL we have mysql_select_db('database_name');
You can't. An existing session cannot change the database connected to.
Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-12-01 Thread Jan Wieck
On 12/1/2004 1:25 PM, Woodchuck Bill wrote:
Jan, Gary may be blunt at times, but try to understand things from his
perspective. He is posting to Usenet. He expects his replies to appear on 
Usenet. You are accustomed to your way of writing and reading messages. He 
is accustomed to his way. Perhaps a bit overstated, his point is that if 
[...]
Do I really have to try to understand things from his perspective?
Please realize that nearly all of the long standing developers, key 
community members and active supporters of the PostgreSQL project have 
usenet experience. People like Gary are one of the reasons why we prefer 
using mailing lists to do our communication today, and it has served us 
well over several years. The short survey among these people (who are 
the ones answering the vast majority of questions on our lists) showed 
that basically none of us would consider using news instead of these 
mailing lists.

We have also developed the habit to use group-reply to all header 
addresses since many of us (due to the high email volume) use some kind 
of email sorting and filtering programs (like procmail) to not find all 
list email in our inbox, but to find emails that are in reply to our own 
messages there, or in special folders.

And there you have the real problem and incompatibility. The PostgreSQL 
developers and experts are all on the mailing lists, and they will stay 
there and most probably continue to use group reply most of the time. 
That means that as long as postings to the newsgroups are gated to the 
lists, the people asking questions on the newsgroups will find email 
answers in their inbox, and if we stop gating from news to lists they 
will get very few answers from the experts because we don't even see the 
questions. I don't know if some "I guess ..." from another newbie is as 
helpful as an answer from Tom Lane backed by code knowledge, but that 
is what most answers on the news side would probably look like without 
gating.

Gary threatened to report any further email from me as harassment to 
[EMAIL PROTECTED] and [EMAIL PROTECTED]. The IMHO correct response was 
to add a kill line to the news->list gateway that simply stops his 
postings from getting into my (or anybody else's) way. The kill line got added 
and nobody on the mailing lists will see his postings any more. 
Fortunately most people involved in PostgreSQL are very level headed, 
have a high level of tolerance and usually real world problems. The 
newsgroup stir-up that attracted all the net.kook attention will calm down 
and Gary will find another "field of work" pretty soon. The people 
posting questions via news will learn that and why they get the real 
answers via email to their inbox. And they will care as little about it 
as they did so far, because the DBA who has a production database at 
halt in the middle of the night really doesn't care how the answer that 
gets his server going again arrived, he is stressed, overworked and has 
way bigger fish to fry than "an email response to a news posting isn't 
correct, your lists are broken, yadda, yadda". He does the happydance 
for getting an answer in 2 hours without dialing through 3 hotline levels.

So to answer my own question: No, I don't have to try to understand 
things from his perspective. Gary Burnore is irrelevant, will disappear 
very soon and we will just continue to go about our business, help real 
people with real database related problems.

Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-12-01 Thread Jan Wieck
On 11/30/2004 11:46 PM, Marc G. Fournier wrote:
On Tue, 30 Nov 2004, Jan Wieck wrote:
On 11/30/2004 2:37 PM, Gary L. Burnore wrote:
Perhaps I wasn't clear.  I don't care WHO you are. I've already asked you 
once to stay out of my email.  Further emails from you will be reported to 
both Yahoo and Comcast as harassment.

I'm not on your list.
   _I_ am posting to a USENet discussion group.  Your list is broken.
Do NOT email me again.
Oh my,
after reading this he really caught my attention. You have to google for 
"Gary Burnore" a little. This guy has a record ...

It seems to me that the whole RFD/CFV thing has attracted a bunch of net 
kooks and individuals who have nothing better to do than wasting other 
peoples time. Marc, can you add a kill line on the mail/news gateway so that 
messages from this guy (and as they pop up more of his kind) don't pollute 
our mailing lists and stay on the news side of it only? If not I will just 
add a /dev/null line for this idiot to my procmail config.
Done :)
Thanks!
And he's pretty much considered a net.kook on news.groups itself too ...
That, etc. and so on ... ranges from spamming to criminal charges for 
molesting. The only good thing I found about him is that he can serve as 
a bad example.

Jan


Re: [GENERAL] [ANNOUNCE] USENET vs Mailing Lists Poll ...

2004-11-30 Thread Jan Wieck
On 11/30/2004 5:27 PM, Mike Cox wrote:
Ultimately, the RFD is about providing a place for _Usenet_ PostgreSQL users
who have been neglected for quite some time.  With the ease of posting to
the big 8 group, and the very large propagation, I can see why the
comp.databases.postgresql group will be very popular.  I originally tried
to include the developers so they could follow the comp PostgreSQL group
through their mailing list, but that proved too technically difficult. 

If they want to follow what will be a huge PostgreSQL usenet community in
the big 8, they will have to subscribe to comp.databases.postgresql. :-)
Mike,
I do recognize your honesty and good intentions. You originally tried to 
scratch an itch of many people. That is, that the PostgreSQL newsgroups 
were not carried by their NSP.

In doing so, you have opened a can of worms (happens). As usual, a once 
opened can of worms can only be re-canned by using a bigger can. If you 
think that telling 99% of the "knowledge" on these mailing lists that 
they are only 1% of the users and that a huge PostgreSQL usenet 
community will discuss a lot of interesting stuff aside of them will 
change much, you're wrong ;-)

I have been contributing to things via USENET and whatnot for over 15 
years and all I know is that people either make the mistake to abandon a 
good open source product (and pay bucks to some greedy company instead) 
or they find the way to the forum, where the real "knowledge" is 
answering, and stop reading the unproductive mailing lists or newsgroups 
at all. Many of the PostgreSQL contributors are like me - long standing 
open source developers, contributors, people who left USENET behind 
years ago and who know that for "them" nothing will change as long as 
they don't unsubscribe from the mailing lists, no matter what happens on 
a newsgroup. The committed users will follow where we go and the 
professional users are there already, waiting for us.

Jan


Re: [GENERAL] [ANNOUNCE] USENET vs Mailing Lists Poll ...

2004-11-30 Thread Jan Wieck
On 11/30/2004 5:55 PM, Woodchuck Bill wrote:
Marc G. Fournier <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]: 

[EMAIL PROTECTED] (Bill Harris) writes:
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

"If there was an official newsgroup for postgresql, would you switch
to using Usenet from using the mailing lists?"

As a side note, for those that do vote 'yes', please note that there
is an official pgsql.* hierarchy gated from the mailing lists, that
is available at news.postgresql.org, if you do wish to use a news
reader vs a mail reader ...

FWIW, I voted yes, but my vote depended upon it being a
comp.databases.postgresql.* hierarchy, done according to USENET
guidelines.  I sense that would be a lot more important for PostgreSQL
in the long term and a lot more sustainable in general than a pgsql.*
hierarchy.  It's been my experience that processes done outside the
norm tend to have extra problems along the way that cost more than the
immediate gratification is worth, even if it does seem more painful at
the time.
Just as an FYI ... the latest RFD is for *one*
comp.databases.postgresql group to be created, that is not-gated ...
this means that those using it would not have the benefit(s) that
those using the pgsql.* hierarchy do, namely access to the wealth of
knowledge/experience of those on the mailing lists ... 
Which is all the contributing developers, all the key people in the 
project. So who would that newsgroup be for?

I had posed the 'who would use USENET' question on -hackers previous
to the poll, and the general opinion was "not in this life time" by
ppl like PeterE, TomL, JoshuaD, etc ... the thread can be seen:
 http://archives.postgresql.org/pgsql-hackers/2004-11/msg01110.php
Trying to sway the vote?
Perhaps.
The long term solution for this incompatibility seems clear to me. Set 
it up as a moderated newsgroup under pgsql.* and have the moderator bot 
respond with a fixed "if you want your message to be read by all 
PostgreSQL community members, you must post to the underlying mailing 
list ..." with a reference how to do the nomail subscribe etc. and the 
gateway setting Follow-Up-To: and so on so that news-lurkers usually 
mail it to the list server anyway. Everything else will lead to constant 
work on Marc's side, delayed or double posts, all the crap people have 
been complaining about.

Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-30 Thread Jan Wieck
On 11/30/2004 2:37 PM, Gary L. Burnore wrote:
Perhaps I wasn't clear.  I don't care WHO you are. I've already asked you 
once to stay out of my email.  Further emails from you will be reported to 
both Yahoo and Comcast as harassment.

I'm not on your list.
   _I_ am posting to a USENet discussion group.  Your list is broken.
Do NOT email me again.
Oh my,
after reading this he really caught my attention. You have to google for 
"Gary Burnore" a little. This guy has a record ...

It seems to me that the whole RFD/CFV thing has attracted a bunch of net 
kooks and individuals who have nothing better to do than wasting other 
peoples time. Marc, can you add a kill line on the mail/news gateway so 
that messages from this guy (and as they pop up more of his kind) don't 
pollute our mailing lists and stay on the news side of it only? If not I 
will just add a /dev/null line for this idiot to my procmail config.

Jan

At 10:31 AM 11/30/2004, you wrote:
On 11/29/2004 11:53 PM, Gary L. Burnore wrote:
Stay out of my email.
This is a PostgreSQL-related topic discussed on PostgreSQL mailing lists 
and you react like this to a mail from a PostgreSQL CORE team member? 
Rethink your attitude.

Jan
At 11:50 PM 11/29/2004, you wrote:
On 11/23/2004 4:46 PM, Gary L. Burnore wrote:
It's ok. Mysql's better anyway.
This is the attitude I've seen from many of the pro-usenet people. If I
don't get it my way I will bash your project and try to do harm.
I am too one of those who have left usenet many years ago. Partly
because of people with this attitude. And I don't consider it much of a
loss if we lose the "message" to these people.
Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-30 Thread Jan Wieck
On 11/29/2004 11:53 PM, Gary L. Burnore wrote:
Stay out of my email. 
This is a PostgreSQL-related topic discussed on PostgreSQL mailing lists 
and you react like this to a mail from a PostgreSQL CORE team member? 
Rethink your attitude.

Jan
At 11:50 PM 11/29/2004, you wrote:
On 11/23/2004 4:46 PM, Gary L. Burnore wrote:
It's ok. Mysql's better anyway.
This is the attitude I've seen from many of the pro-usenet people. If I
don't get it my way I will bash your project and try to do harm.
I am too one of those who have left usenet many years ago. Partly
because of people with this attitude. And I don't consider it much of a
loss if we lose the "message" to these people.
Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-29 Thread Jan Wieck
On 11/23/2004 4:37 PM, Woodchuck Bill wrote:
[EMAIL PROTECTED] (Patrick B Kelly) wrote in news:E55E257B-3D95-11D9-
[EMAIL PROTECTED]:
The list has been deluged with  
countless angry process oriented messages filled with vitriol and  
devoid of any content regarding the purpose of this forum, we have been  
bombarded with profanity, and the lists have been dropped from google.   
The lists were dropped from Google because the newsgroups are using 
unauthorized (*stolen*) comp.* namespace. The groups were created by 
identity theft and criminal e-mail forgery of the comp.* hierarchy manager 
at the time. Did you know that? 
Isn't what Marc just proposed, to stop forwarding traffic into that
stolen namespace, exactly the right thing?
I agree with Marc. It was time to stop this nonsense debate and make
decisions. However unpopular these decisions might be to the fistful of
contributors that use newsgroups, the way major members of the
PostgreSQL community are treated here does IMHO more harm to the project
than any single NSP carrying any of their groups is worth.
Jan


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-29 Thread Jan Wieck
On 11/23/2004 4:46 PM, Gary L. Burnore wrote:
It's ok. Mysql's better anyway.
This is the attitude I've seen from many of the pro-usenet people. If I
don't get it my way I will bash your project and try to do harm.
I am too one of those who have left usenet many years ago. Partly
because of people with this attitude. And I don't consider it much of a
loss if we lose the "message" to these people.
Jan


Re: [GENERAL] OID's

2004-11-18 Thread Jan Wieck
On 11/16/2004 6:32 AM, Holger Klawitter wrote:
A little bit OT, but:
  is there a way of removing duplicate rows in a table without OIDs? 
There is still the CTID.
Jan
Mit freundlichem Gruß / With kind regards
Holger Klawitter
- --
lists  klawitter  de



Re: [GENERAL] OID's

2004-11-18 Thread Jan Wieck
On 11/16/2004 4:52 AM, Michael Glaesemann wrote:
On Nov 16, 2004, at 6:42 PM, Peter Eisentraut wrote:
Am Dienstag, 16. November 2004 10:01 schrieb Joolz:
Michael Glaesemann zei:
OIDS are a system level implementation. They are no longer required
(you can make tables without OIDS) and they may go away someday.
Out of curiosiry: how will we handle blobs once the OID's are gone?
They won't go away.  This is one reason.
Peter,
You sound pretty certain. I can imagine there might be a way to handle 
BLOBs without OIDs. I'm not saying that I know what it is, but I 
recognize the possibility.
A sequence and converting the blob identifier to int8 would be one ...
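
A hedged sketch of that idea (the table and column names are invented;
this is just one way an int8 sequence could replace an OID-based blob
identifier):

```sql
CREATE SEQUENCE blob_id_seq;

CREATE TABLE blobs (
    blob_id int8 NOT NULL DEFAULT nextval('blob_id_seq'),
    segno   int4 NOT NULL,       -- chunk number within the blob
    data    bytea,
    PRIMARY KEY (blob_id, segno)
);
```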
Jan


Re: [GENERAL] PostgreSQL on Guest Host (VMWare)

2004-11-15 Thread Jan Wieck
On 11/15/2004 5:05 PM, ON.KG wrote:
Hi!
Description:
VMware 4.0
Main host is WinXP Pro (on FAT32)
and Guest Host is WinXP Pro (on NTFS)
I hope you're running the guest with fully preallocated virtual disks. 
Any FAT inconsistency caused by a system crash could destroy your entire 
guest otherwise.

On Guest Host - PostgreSQL 8.0-beta2-dev3
From Main host i'm trying to connect to PostgreSQL to Guest host
But as a result i'm receiving next message:
Connection Refused (0x274D/10061)
Is the server running on host xxx.xxx.xxx.xxx and accepting TCP/IP
connections on port 5432? 

Tell me please, what is the problem?
Windows XP has some basic firewall functionality. Could it be that this 
is blocking inbound access on the Guest side?


Jan


Re: [GENERAL] I spoke with Marc from the postgresql mailing list.

2004-11-12 Thread Jan Wieck
On 11/7/2004 8:06 PM, Brian {Hamilton Kelly} wrote:
On Sunday, in article <[EMAIL PROTECTED]>
 [EMAIL PROTECTED] "Mike Cox" wrote:
Also, he pointed out that for those who want to get the postgresql groups
when their usenet sever doesn't carry them, the solution would be to point
their newsreaders to news.postgresql.org.
Which only confirms my opinion that he's a fuckwit.  
And you think that sort of comment makes your opinion welcome here?
I haven't used usenet news for a few years, and I am shocked by the loss 
of quality over there. Certain technical groups used to be very much on 
the level of politeness and respect, our mailing lists reflect. If this 
is what we can expect if we encourage more ISP's to carry our lists, 
then I am strictly for "discouraging". Maybe our goal should not be to 
make the PostgreSQL lists available by default on every news server.

Jan


Re: [GENERAL] The reasoning behind having several features outside

2004-10-30 Thread Jan Wieck
On 10/30/2004 3:39 PM, Marc G. Fournier wrote:
On Sat, 30 Oct 2004, Joshua D. Drake wrote:

Replication is one subsystem not included in source tree. But PostgreSQL
has other subsystems that are included such as plugins for procedural
languages. So isn't the same risks involved with them?
No, not really. Because items such as plPerl don't require
changes to the backend. Replication requires changes to the backend.
Replication doesn't require changes to the backend ... neither eRServer or 
Slony-I do, at least ...
Slony-II will probably need some additional features. But I consider it 
a sign of an immature design not to be able to define those features in 
the form of an API, and instead to start hacking together a prototype 
inside the backend that later down the road gets promoted to a solution, 
because there never will be time to turn the findings of that 
exploratory phase into a design and develop according to it.

Jan


Re: [GENERAL] ON DELETE trigger blocks delete from my table

2004-10-25 Thread Jan Wieck
On 10/25/2004 3:53 PM, Naeem Bari wrote:
I understand. Makes sense. Is there any way for my trigger function to
"know" whether it is being called on a delete or on an update? Because I do
need to "return new" on update... and I really don't want to write 2
different functions, one for update and one for delete...
I would change the trigger to fire on "after" rather than "before" as Jan
Wieck suggests, but does that mean that if the trigger fails, the
transaction would be committed anyway?
The variable TG_OP contains one of the strings 'INSERT', 'UPDATE' or 
'DELETE', as per the documentation:

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html
Making it an AFTER trigger still ensures that the transaction is rolled 
back if the trigger fails. It also ensures that no trigger fired later 
can modify the NEW row after your auditing has taken place. As your 
trigger stands, this doesn't matter to you. But as soon as you include 
some new value in your auditing table, you might be surprised not to find 
that new value in the row.

In PostgreSQL, a BEFORE trigger procedure on INSERT or UPDATE can modify 
values in NEW because it is called before the new row is actually stored 
in the table. This is useful for enforcing timestamps, usernames, 
derived values, you name it. AFTER triggers can't do that and are 
guaranteed to see the values that really have been stored.
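
Putting the two points together, here is a minimal sketch (not from the
original thread) of a single AFTER trigger procedure that audits both
UPDATE and DELETE via TG_OP, reusing the x_job_status columns quoted in
this thread. The parent table name "job_status" is an assumption; the
original poster never names it.

```sql
-- Sketch: one AFTER trigger procedure for both UPDATE and DELETE.
-- The audited table name "job_status" is assumed, not from the thread.
CREATE OR REPLACE FUNCTION public.func_job_status_audit()
  RETURNS trigger AS
'
begin
  -- TG_OP tells us which operation fired the trigger
  if TG_OP = ''UPDATE'' or TG_OP = ''DELETE'' then
    insert into x_job_status values ( OLD.job_id, OLD.job_status_type_id,
        OLD.status_date, OLD.notes, OLD.edit_person_id, OLD.edit_date);
  end if;
  -- the return value of an AFTER row trigger is ignored
  return NULL;
end;
'
  LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER trig_job_status_audit
  AFTER UPDATE OR DELETE ON job_status
  FOR EACH ROW EXECUTE PROCEDURE public.func_job_status_audit();
```

Because it fires AFTER the row change, a failure in the function still
rolls back the whole transaction, but the function can no longer alter
the row being stored.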

Jan

Thanks for your help!
naeem
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 25, 2004 2:48 PM
To: Naeem Bari
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] ON DELETE trigger blocks delete from my table 

"Naeem Bari" <[EMAIL PROTECTED]> writes:
CREATE OR REPLACE FUNCTION public.func_job_status_upd()
  RETURNS trigger AS
'
begin
  insert into x_job_status values ( OLD.job_id, OLD.job_status_type_id,
      OLD.status_date, OLD.notes, OLD.edit_person_id, OLD.edit_date);
  return new;
end;
'
  LANGUAGE 'plpgsql' VOLATILE;
If this is a BEFORE trigger, you probably need "RETURN OLD".  "NEW" will
be NULL in a delete trigger, so you're returning NULL which cancels the
operation.
regards, tom lane



Re: [GENERAL] ON DELETE trigger blocks delete from my table

2004-10-25 Thread Jan Wieck
On 10/25/2004 3:47 PM, Tom Lane wrote:
"Naeem Bari" <[EMAIL PROTECTED]> writes:
CREATE OR REPLACE FUNCTION public.func_job_status_upd()
  RETURNS trigger AS
'
begin
  insert into x_job_status values ( OLD.job_id, OLD.job_status_type_id,
      OLD.status_date, OLD.notes, OLD.edit_person_id, OLD.edit_date);
  return new;
end;
'
  LANGUAGE 'plpgsql' VOLATILE;
If this is a BEFORE trigger, you probably need "RETURN OLD".  "NEW" will
be NULL in a delete trigger, so you're returning NULL which cancels the
operation.
... which would then again not work for the UPDATE case (though not with 
the same internal consequences).
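
If the function stays a BEFORE trigger, the TG_OP variable discussed
elsewhere in this thread resolves the conflict: return OLD for a delete
and NEW for an update. A minimal sketch (not from the original posts),
using the poster's own function and column names:

```sql
-- Sketch: BEFORE trigger returning the correct row variable per operation.
CREATE OR REPLACE FUNCTION public.func_job_status_upd()
  RETURNS trigger AS
'
begin
  insert into x_job_status values ( OLD.job_id, OLD.job_status_type_id,
      OLD.status_date, OLD.notes, OLD.edit_person_id, OLD.edit_date);
  if TG_OP = ''DELETE'' then
    -- NEW is NULL on DELETE; returning it would silently cancel the delete
    return OLD;
  end if;
  return NEW;
end;
'
  LANGUAGE 'plpgsql' VOLATILE;
```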

Jan
regards, tom lane


