Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Honza Pazdziora
On Wed, Jun 09, 2004 at 09:13:05PM +0530, Shridhar Daithankar wrote:
> 
> Well that is easy. In the service file just say
> 
> [Cluster1]
>  datapath=/data/foo
> 
> [Cluster2]
>  datapath=/data/foo1
> 
> and postgresql.conf could still reside inside each cluster to provide 
> specific configuration.
> 
> Then have a script which can say 'service postgresql cluster1 start'

This is an awful way of doing configuration. Why should different
installations share anything in one file? Running

/usr/bin/pg_ctl -C /etc/postgres.isp1.conf start

seems much more maintainable. And /etc/postgres.isp1.conf can specify
that the data files are in /bigdisk/data/isp1x or wherever you
please.

> Postgresql as a database server is a service. A cluster is a service 
> instance. A service configuration file documents all service instances and 
> their parameters required for all tuning and control purposes. Add the 
> possibility of multiple versions of postgresql on the same box. That sums it up 

One file does not add the possibility of multiple versions of postgresql
on the same box; it merely makes it harder.

> Well, I wish I could have some archives link handy but suffice to say that 
> Tom has rejected this idea many times before..

That does not necessarily mean the idea is broken. Tom's main
objection (IIRC) was that he needs to be able to have multiple
postgresqls on one machine. That can easily be achieved, either by
specifying datadirectory in the configuration file, or even defaulting
to the same directory where the .conf file is stored when no
datadirectory option is used.

-- 

 Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
 .project: Perl, mod_perl, DBI, Oracle, large Web systems, XML/XSL, ...
Only self-confident people can be simple.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Nested xacts: looking for testers and review

2004-06-09 Thread Stephan Szabo

On Wed, 9 Jun 2004, Stephan Szabo wrote:

> On Wed, 9 Jun 2004, Alvaro Herrera wrote:
>
> > On Sun, May 30, 2004 at 04:07:27AM -0700, Stephan Szabo wrote:
> > > On Sat, 29 May 2004, Alvaro Herrera wrote:
> >
> > > > Ah, this seems to work.  I'll implement it and I'll let you know how it
> > > > goes.
> > >
> > > Ugh... There's one further wrinkle I hadn't thought about, imagine the
> > > following:
> >
> > Ok Stephan, thank you very much for your help.  I implemented this and
> > it seems to work, at least on my (limited?) test scenario.  If you have
> > some spare time I'd like you to test it and see if you can break it (I
> > posted it to -patches yesterday).
>
> Unfortunately, I've gotten it to fail, but I haven't looked in depth (I'm
> at work, so I'm doing it during compilations and such.)
>
> I made a file, ins, containing a 100-line COPY into a table named fk, all
> with the value 1. Then if I do the following:
>
> create table pk(a int primary key);
> create table fk(a int references pk(a) initially deferred);
> insert into pk values (1);
>
> begin;
> begin;
> \i ins
> \i ins
> rollback;
> \i ins
> commit;
>
> It appears to fail with an exception on the third \i ins, with
> tail_thisxact pointing into a bad place.

Okay - I think I see what's going on here.

It looks like deferredTriggerInvokeEvents is being run (immediate_only),
and since deferredTriggers->events_imm is NULL it's using
deferredTriggers->events as the start of the list to check. That value,
however, isn't getting reset in DeferredTriggerEndSubXact in the case that
the entire list was created in an aborted subtransaction.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [PATCHES] [HACKERS] serverlog function (log_destination file)

2004-06-09 Thread Andrew Dunstan

Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Looks good to me.  The only issue I saw was that the default file name
>> mentioned in postgresql.conf doesn't match the actual default.
>
> I'm really not happy with the concept that the postmaster overrides
> its stderr direction.

Me neither, without more thought.

If we start logging to a file explicitly, do we need to revisit the log
rotation discussion, which seems to have gone nowhere several times
recently?

cheers
andrew
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] thread safety tests

2004-06-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Are people OK with requiring PGUSER, $USER, $LOGNAME, or the username to
> be supplied by the connection string in libpq on platforms that want
> threads and don't have getpwuid_r() (Solaris, FreeBSD, etc.)?

AFAICS that was not what Jan was suggesting at all.  I don't like it
either --- changing the user-visible behavior based on whether we think
the platform is thread-safe or not is horrid.

What I understood Jan to be saying is that we should be willing to build
the most thread-safe approximation we can when --enable-thread-safety
is requested.  Don't bomb out if you don't have getpwuid_r, just give
a warning and then use getpwuid.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PATCHES] [HACKERS] serverlog function (log_destination file)

2004-06-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Looks good to me.  The only issue I saw was that the default file name
> mentioned in postgresql.conf doesn't match the actual default.

I'm really not happy with the concept that the postmaster overrides
its stderr direction.

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Why hash indexes suck

2004-06-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> [blink]  This seems to miss out on the actual point of the thread (hash
>> bucket size shouldn't be a disk page) in favor of an entirely
>> unsupported sub-suggestion.

> Yes, I was unsure of the text myself.  I have changed it to:
>   * Allow hash buckets to fill disk pages, rather than being
> sparse

OK, though maybe "pack hash index buckets onto disk pages more
efficiently" would be clearer.

> If we sorted the keys, how do we insert new entries efficiently?

That is why I called it "unsupported".  I'm not clear what would happen
in buckets that overflow onto multiple pages --- do we try to maintain
ordering across all the pages, or just within a page, or what?  How much
does this buy us versus what it costs to maintain?  Maybe there's a win
there but I think it's pretty speculative ...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PATCHES] [HACKERS] serverlog function (log_destination file)

2004-06-09 Thread Bruce Momjian

Oh, it would need doc additions. I can do that when I apply, or you can
resubmit.


---

Andreas Pflug wrote:
> Magnus Hagander wrote:
> 
> >Specifically about the logs, I still think there is a lot of value to
> >being able to read the logs remotely even if you can't restart
> >postmaster.
> >
> Since I believe that retrieving the logs easily without server file 
> access is a feature that's welcomed by many users, here's my proposal.
> 
> The attached diff will
> - add a guc-variable log_filename
> - extend log_destination to accept the keyword 'file'
> - elog to that file if configured
> - provide two functions pg_logfile_length() and pg_logfile to obtain the 
> contents.
> 
> int4 pg_logfile_length()
> cstring pg_logfile(int4 size, int4 position)
> size (may be null meaning max) is the chunk size (max: currently 5)
> position (may be null meaning -size) is the start position; positive 
> counting from log file start, negative from end.
> 
> 
> Regards,
> Andreas
> 

> Index: backend/postmaster/postmaster.c
> ===
> RCS file: /projects/cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v
> retrieving revision 1.402
> diff -u -r1.402 postmaster.c
> --- backend/postmaster/postmaster.c   3 Jun 2004 02:08:03 -   1.402
> +++ backend/postmaster/postmaster.c   8 Jun 2004 18:07:30 -
> @@ -532,6 +532,9 @@
>   /* If timezone is not set, determine what the OS uses */
>   pg_timezone_initialize();
> 
> +/* open alternate logfile, if any */
> + LogFileOpen();
> +
>  #ifdef EXEC_BACKEND
>   write_nondefault_variables(PGC_POSTMASTER);
>  #endif
> Index: backend/storage/ipc/ipc.c
> ===
> RCS file: /projects/cvsroot/pgsql-server/src/backend/storage/ipc/ipc.c,v
> retrieving revision 1.87
> diff -u -r1.87 ipc.c
> --- backend/storage/ipc/ipc.c 12 Dec 2003 18:45:09 -  1.87
> +++ backend/storage/ipc/ipc.c 8 Jun 2004 18:07:31 -
> @@ -111,6 +111,8 @@
> 
> on_proc_exit_list[on_proc_exit_index].arg);
> 
>   elog(DEBUG3, "exit(%d)", code);
> +
> + LogFileClose();
>   exit(code);
>  }
> 
> Index: backend/utils/adt/misc.c
> ===
> RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/misc.c,v
> retrieving revision 1.34
> diff -u -r1.34 misc.c
> --- backend/utils/adt/misc.c  2 Jun 2004 21:29:29 -   1.34
> +++ backend/utils/adt/misc.c  8 Jun 2004 18:07:36 -
> @@ -103,3 +103,64 @@
>  {
>   PG_RETURN_INT32(pg_signal_backend(PG_GETARG_INT32(0),SIGINT));
>  }
> +
> +Datum pg_logfile_length(PG_FUNCTION_ARGS)
> +{
> + extern FILE *logfile; // in elog.c
> +
> + if (logfile)
> + PG_RETURN_INT32(ftell(logfile));
> + PG_RETURN_INT32(0);
> +}
> +
> +
> +#define MAXLOGFILECHUNK 5
> +Datum pg_logfile(PG_FUNCTION_ARGS)
> +{
> + size_t size=MAXLOGFILECHUNK;
> + char *buf=0;
> + size_t nbytes;
> +
> + char *filename = LogFileName();
> + if (filename)
> + {
> + if (!PG_ARGISNULL(0))
> + size = PG_GETARG_INT32(0);
> + if (size > MAXLOGFILECHUNK)
> + {
> + size = MAXLOGFILECHUNK;
> + ereport(WARNING,
> + (errcode(ERRCODE_OUT_OF_MEMORY),
> +  errmsg("Maximum size is %d.", 
> MAXLOGFILECHUNK)));
> + }
> +
> + FILE *f=fopen(filename, "r");
> + if (f)
> + {
> + if (PG_ARGISNULL(1))
> + fseek(f, -size, SEEK_END);
> + else
> + {
> + long pos = PG_GETARG_INT32(1);
> + if (pos >= 0)
> + fseek(f, pos, SEEK_SET);
> + else
> + fseek(f, pos, SEEK_END);
> + }
> + buf = palloc(size+1);
> + nbytes = fread(buf, 1, size, f);
> + buf[nbytes] = 0;
> +
> + fclose(f);
> + }
> + else
> + {
> + ereport(WARNING,
> + (errcode(ERRCODE_NO_DATA),
> +  errmsg("Could not open log file %s.", 
> filename)));
> + }
> + free(filename);
> + }
> +
> + PG_RETURN_CSTRING(buf);
> +}
> Index: backend/utils/error/elog.c
> ===
> RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/error/elog.c,v
> retrieving revision 1.140
> diff -u -r1.140 elog.c
> --- backend/util
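
As a usage sketch only (not part of the submitted patch, and going by the
signatures Andreas describes above), the new functions would be called
like this:

-- total bytes currently written to the alternate log file
select pg_logfile_length();

-- a NULL position means -size, i.e. the last 1000 bytes of the log
select pg_logfile(1000, NULL);

-- 500 bytes starting at byte 2000, counting from the start of the file
select pg_logfile(500, 2000);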

Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian

Are people OK with requiring PGUSER, $USER, $LOGNAME, or the username to
be supplied by the connection string in libpq on platforms that want
threads and don't have getpwuid_r() (Solaris, FreeBSD, etc.)?

If so, I can easily create a patch and apply it.

---

Jan Wieck wrote:
> On 6/9/2004 1:44 PM, Bruce Momjian wrote:
> 
> > Jan Wieck wrote:
> >> On 6/9/2004 1:04 PM, Bruce Momjian wrote:
> >> 
> >> > What we really need is a way to do the uid->username mapping in a
> >> > thread-safe way.  Could we check the environment for $USER or $LOGNAME?
> >> > Could we require them to be set for thread builds on OS's without
> >> > getpwuid_r and in cases where the username is not specified in the
> >> > connection string?
> >> 
> >> Maybe not as popular, but what about breaking backward compatibility and 
> >> require the DB name to be specified, no username fallback? How many 
> >> applications really rely on that feature? And people who are used to it 
> >> from the commandline can set PGDATABASE in their .profile to get it back.
> > 
> > That is only part of where the username is used. I assume it is also
> > used for connections when the username isn't supplied, not just as the
> > default for the database name.
> > 
> > Basically on those platforms, either the username would have to be in
> > the environment, or supplied as part of the connection string.
> > 
> 
> We have PGUSER, PGHOST, PGPORT, PGDATABASE, all of them you can set in 
> your .profile, why do we need to look up the uid at all?
> 
> 
> Jan
> 
> -- 
> #==#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.  #
> #== [EMAIL PROTECTED] #
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PATCHES] [HACKERS] serverlog function (log_destination file)

2004-06-09 Thread Bruce Momjian

Looks good to me.  The only issue I saw was that the default file name
mentioned in postgresql.conf doesn't match the actual default.

Is this ready to be added to the patch queue?

---

Andreas Pflug wrote:
> Magnus Hagander wrote:
> 
> >Specifically about the logs, I still think there is a lot of value to
> >being able to read the logs remotely even if you can't restart
> >postmaster.
> >
> Since I believe that retrieving the logs easily without server file 
> access is a feature that's welcomed by many users, here's my proposal.
> 
> The attached diff will
> - add a guc-variable log_filename
> - extend log_destination to accept the keyword 'file'
> - elog to that file if configured
> - provide two functions pg_logfile_length() and pg_logfile to obtain the 
> contents.
> 
> int4 pg_logfile_length()
> cstring pg_logfile(int4 size, int4 position)
> size (may be null meaning max) is the chunk size (max: currently 5)
> position (may be null meaning -size) is the start position; positive 
> counting from log file start, negative from end.
> 
> 
> Regards,
> Andreas
> 


Re: [HACKERS] Why hash indexes suck

2004-06-09 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Added to TODO:
> > * Order heap pointers on hash index pages by hash value and ctid
> 
> [blink]  This seems to miss out on the actual point of the thread (hash
> bucket size shouldn't be a disk page) in favor of an entirely
> unsupported sub-suggestion.

Yes, I was unsure of the text myself.  I have changed it to:

* Allow hash buckets to fill disk pages, rather than being
  sparse

If we sorted the keys, how do we insert new entries efficiently?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Why hash indexes suck

2004-06-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Added to TODO:
>   * Order heap pointers on hash index pages by hash value and ctid

[blink]  This seems to miss out on the actual point of the thread (hash
bucket size shouldn't be a disk page) in favor of an entirely
unsupported sub-suggestion.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Why hash indexes suck

2004-06-09 Thread Bruce Momjian

Added to TODO:

* Order heap pointers on hash index pages by hash value and ctid

---

Zeugswetter Andreas SB SD wrote:
> 
> > We could safely sort on the hash value, but I'm not sure how effective
> > that would be, considering that we're talking about values that already
> > hashed into the same bucket --- there's likely not to be very many
> > distinct hash values there.
> 
> I think we can safely put that on the todo list.
> The existing hash algorithm is very good, so I would on the 
> contrary believe that only a few keys share a hash value per page-sized bucket.
> For the equal keys case it does not matter since we want all of the rows anyway.
> For the equal hash value case it would probably be best to sort by ctid.
> 
> TODO ?: order heap pointers inside hash index pages by hash value and ctid
> 
> Andreas
> 
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[HACKERS] CHECK constraints inconsistencies

2004-06-09 Thread Michael Glaesemann
In a recent discussion on IRC, some anomalies concerning CHECK 
constraints were brought to light, in that in some cases they do not 
guarantee that the data within the table satisfies them. For example 
(against 7.4.1),

test=# create table foo (
test(# foo_stamp timestamptz not null,
test(# foo_int   integer not null,
test(# check (foo_stamp > current_timestamp)
test(# );
CREATE TABLE
test=#
test=# insert into foo values (now() + '20 seconds'::interval, 3);
INSERT 647207 1
test=# \d foo;
Table "public.foo"
  Column   |   Type   | Modifiers
---+--+---
 foo_stamp | timestamp with time zone | not null
 foo_int   | integer  | not null
Check constraints:
"$1" CHECK (foo_stamp > 'now'::text::timestamp(6) with time zone)
test=# select foo_stamp, foo_int, now() as now_stamp from foo;
          foo_stamp           | foo_int |          now_stamp           
------------------------------+---------+------------------------------
 2004-03-01 21:38:35.54846+09 |       3 | 2004-03-01 21:39:02.91338+09
(1 row)
test=# update foo set foo_int = 4;
ERROR:  new row for relation "foo" violates check constraint "$1"
test=# insert into foo values (now() - '10 seconds'::interval,3);
ERROR:  new row for relation "foo" violates check constraint "$1"
The CHECK constraint ( foo_stamp > current_timestamp ) is only checked 
on INSERT and UPDATE (and fails appropriately in such cases). In the 
case of the SELECT statement, it's clear that the data within the table 
no longer satisfies the CHECK constraint.

Another example, using an admittedly strange CHECK constraint:
test=# create table f (a float, check (a < random()));
CREATE TABLE
test=# insert into f values (0.02);
INSERT 647211 1
test=# insert into f values (0.03);
INSERT 647212 1
test=# insert into f values (0.04);
INSERT 647213 1
test=# insert into f values (0.99);
ERROR:  new row for relation "f" violates check constraint "$1"
test=# select * from f;
  a   
------
 0.02
 0.03
 0.04
(3 rows)
While it may make sense under certain conditions to test against a 
random number at a specific time, what does it mean for the data to 
always be less than random(), as the CHECK constraint implies?

In both cases, the CHECK constraint uses a function that is stable or 
volatile. It was suggested that functions used in CHECK constraints be 
restricted to immutable, as are functions used in indexes on 
expressions, at least until PostgreSQL can guarantee that the CHECK 
constraints will hold at times beyond INSERT and UPDATE.

Similar functionality can be attained using ON INSERT and ON UPDATE
trigger functions, which in the case of stable or volatile functions give
a more accurate description of what is actually being protected.
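
A minimal sketch of that trigger-based equivalent for the foo_stamp
example above (assuming PL/pgSQL is installed; the function and trigger
names are illustrative only):

create function foo_stamp_guard() returns trigger as '
begin
    -- enforce the condition at INSERT/UPDATE time, which is all a
    -- volatile check can honestly promise
    if new.foo_stamp <= current_timestamp then
        raise exception ''foo_stamp must be later than the current time'';
    end if;
    return new;
end;
' language plpgsql;

create trigger foo_stamp_guard before insert or update on foo
    for each row execute procedure foo_stamp_guard();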

If functions such as CURRENT_TIMESTAMP are allowed in CHECK constraints 
and they are evaluated on SELECT as well as on INSERT or UPDATE, 
another thing to consider is what the proper behavior would be when 
rows are found to be in violation of the constraint. Should the 
offending rows be deleted?

Michael Glaesemann
grzm myrealbox com
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] [pgsql-www] Collaboration Tool Proposal

2004-06-09 Thread Andrew Sullivan
On Fri, Feb 27, 2004 at 10:48:57AM -0800, Josh Berkus wrote:
> 
> RT:   I've been using RT for OSCON, and am not wowed by it.Of course, I 
> can say the same of BZ and GForge-Tracker.   From my perspective, it's 
> neither better nor worse than the other solutions, although the interaction 
> with e-mail is nice.
> More importantly, *we* would have to do the port to PostgreSQL.   This is 

That's not true.  RT 3.2 supports PostgreSQL out of the box, and at
least one of Best Practical's customers (Afilias) requires that MySQL
not be the platform (because I'm just too worried about the current
license).  That isn't to say it's the only choice, but it does indeed
support Postgres.  Jesse Vincent has told me, also, that PostgreSQL
support is important to him.

RT is pretty flexible for managing issues, bugs, problems, &c.  I'm
not real sure it's right for this job, but it might be.  CPAN appears
to use it, for instance.

A
-- 
Andrew Sullivan  | [EMAIL PROTECTED]
In the future this spectacle of the middle classes shocking the avant-
garde will probably become the textbook definition of Postmodernism. 
--Brad Holland

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] Slony-I Beta2 now available

2004-06-09 Thread Jan Wieck
A new BETA tarball for the Slony-I replication system is now available
for download from the project homepage:

http://gborg.postgresql.org/project/slony1/projdisplay.php
Regards, the Slony-I project team

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Nested xacts: looking for testers and review

2004-06-09 Thread Stephan Szabo
On Wed, 9 Jun 2004, Alvaro Herrera wrote:

> On Sun, May 30, 2004 at 04:07:27AM -0700, Stephan Szabo wrote:
> > On Sat, 29 May 2004, Alvaro Herrera wrote:
>
> > > Ah, this seems to work.  I'll implement it and I'll let you know how it
> > > goes.
> >
> > Ugh... There's one further wrinkle I hadn't thought about, imagine the
> > following:
>
> Ok Stephan, thank you very much for your help.  I implemented this and
> it seems to work, at least on my (limited?) test scenario.  If you have
> some spare time I'd like you to test it and see if you can break it (I
> posted it to -patches yesterday).

Unfortunately, I've gotten it to fail, but I haven't looked in depth (I'm
at work, so I'm doing it during compilations and such.)

I made a file, ins, containing a 100-line COPY into a table named fk, all
with the value 1. Then if I do the following:

create table pk(a int primary key);
create table fk(a int references pk(a) initially deferred);
insert into pk values (1);

begin;
begin;
\i ins
\i ins
rollback;
\i ins
commit;

It appears to fail with an exception on the third \i ins, with
tail_thisxact pointing into a bad place.
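
For concreteness, the ins file would contain something like the following
(a hypothetical reconstruction; the original file is described above but
not shown):

copy fk from stdin;
1
1
1
\.

with the "1" data line repeated so that each \i ins loads 100 rows.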

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] proposal: be smarter about i/o patterns in index scan

2004-06-09 Thread Bruce Momjian

Is there a TODO here?

---

Tom Lane wrote:
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> > ... Introducing a new query execution step sounds
> > like a better/easier idea than was I was going to do, which was to
> > rework some of the access methods to operate on vectors instead of
> > scalars.  That idea looks increasingly difficult to implement.
> 
> One thing that I think is worth doing in any case is to alter the API
> for the index AMs' getnext functions so that they can return multiple
> TIDs per call (probably into an array supplied by the caller).  We could
> for example return all matching tuples from a single index page in one
> call, and then let index_getnext iterate through them without re-calling
> the index AM.  (Making it work per-index-page would allow us to keep the
> current VACUUM interlocking conventions in place exactly, so that there
> would be no risk of breaking anything.  All tuples returned in a given
> call would still be protected by an index-page lock.)
> 
> We'd have to make such an API change anyway to support unordered
> indexscan, but it should be a benefit even for ordered scans, because
> it should considerably reduce the locking overhead for indexscans that
> fetch multiple tuples.  In particular it might alleviate the BufMgrLock
> contention problems that were under discussion last month.  (The test
> case we were using to illustrate that problem fetched several hundred
> tuples per indexscan, so it clearly could benefit.  Extent of benefit
> unknown at this time, though.)
> 
> The tricky part of this is figuring out how to handle mark/restore and
> scan direction switching in a way that doesn't complicate the code to
> the point of unmaintainability.  I think it may be possible to keep all
> the extra complication within indexam.c, but haven't thought through the
> details.
> 
> This seems like a relatively localized change, and might be a good place
> for you to start.
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Nested xacts: looking for testers and review

2004-06-09 Thread Bruce Momjian
Alvaro Herrera wrote:
> > One interesting idea would be for COMMIT to affect the outer
> > transaction, and END not affect the outer transaction.  Of course that
> > kills the logic that COMMIT and END are the same, but it is an
> > interesting idea, and doesn't affect backward compatibility because
> > END/COMMIT behave the same in non-nested transactions.
> 
> I implemented this behavior by using parameters to COMMIT/END.  I didn't
> want to add new keywords to the grammar so I just picked up
> "COMMIT WITHOUT ABORT".  (Originally I had thought "COMMIT IGNORE
> ERRORS" but those would be two new keywords and I don't want to mess
> around with the grammar.  If there are different opinions, tweaking the
> grammar is easy).
> 
> So the behavior I originally implemented is still there:
> 
> alvherre=# begin;
> BEGIN
> alvherre=# begin;
> BEGIN
> alvherre=# select foo;
> ERROR:  column "foo" does not exist
> alvherre=# commit;
> COMMIT
> alvherre=# select 1;
> ERROR:  current transaction is aborted, commands ignored until end of
> transaction block
> alvherre=# commit;
> COMMIT

Perfect.  Your suggested behavior is best.  I think I like "COMMIT
IGNORE ABORT" best, but we can disucss this some more.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread pgsql
> On Wed, Jun 09, 2004 at 13:41:27 -0400,
>   [EMAIL PROTECTED] wrote:
>>
>> Sigh, because vacuums take away from performance. Imagine a table that
>> has
>> to be updated on the order of a few thousand times a minute. Think about
>> the drop in performance during the vacuum.
>>
>> On a one row table, vacuum is not so bad, but try some benchmarks on a
>> table with a goodly number of rows.
>
> But you only need to rapidly vacuum the one table that is keeping your
> totals record. This isn't going to be a big hit in performance relative
> to the updates that are going on. You don't need to vacuum the tables
> you are doing the inserts or updates to at that same rate.
>

I have been talking about two types of problems which are both based on
PostgreSQL's behavior with frequently updated tables.

Summary table: In the single-row table system, you have to vacuum very
frequently, and this affects performance.

Frequently updated tables: think about the session table for a website.
Each new user gets a new session row. Every time they refresh or act on
the site, the row is updated. When they leave or their session times out,
the row is deleted. I wrote a RAM-only session manager for PHP because
PostgreSQL couldn't handle the volume. (2000 hits a second)
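
The shape of the problem, as a hypothetical sketch (table and identifiers
invented for illustration):

create table web_session (
    session_id text primary key,
    user_id    integer,
    last_seen  timestamptz not null default now()
);

-- every page hit does something like this, thousands of times a minute;
-- each update leaves a dead tuple behind until vacuum reclaims it
update web_session set last_seen = now() where session_id = 'abc123';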

If you have an active site, with hundreds or thousands of hits a second,
vacuuming the table constantly is not practical.

I don't think anyone who has seriously looked at these issues has
concluded that PostgreSQL works fine in these cases. The question is what,
if anything, can be done? The frequent update issue really affects
PostgreSQL's acceptance in web applications, an area where MySQL seems to
do a better job.

IMHO, this issue, a two-stage-commit-based replication system, and a real
and usable setup/configuration system are all that stand between
PostgreSQL and serious enterprise deployment.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Jim C. Nasby
On Tue, Jun 08, 2004 at 07:16:45PM -0400, [EMAIL PROTECTED] wrote:
> >
> >
> > [EMAIL PROTECTED] wrote:
> >
> >>I've been down several roads about how to handle data that has to change
> >>on a very frequent and rapid manner.
> >>
> >>Think about summary tables, WEB session tables, etc. As great as MVCC is
> >>for the vast majority of uses. The overhead of updates and deletes can
> >>kill a project that needs to constantly update tables.
> >>
> >>
> >>
> >>
> >
> > Are you saying that MVCC has *by design* a higher overhead for updates
> > and deletes? or are you referring to the gradual loss of performance as
> > a consequence of many dead tuples?
> >
> > I am guessing you mean the latter, but best to be sure :-)
> The best phrasing would be "the accumulating overhead of deletes and
> updates."
> 
> Yes.
 
Doesn't pg_autovacuum largely take care of this issue?
-- 
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] sequences and "addval('myseq', value)"

2004-06-09 Thread Christopher Browne
In the last exciting episode, [EMAIL PROTECTED] wrote:
>> [EMAIL PROTECTED] writes:
>>> Anyway, I'm not quite getting the idea of caching sequence values. I
>>> understand the performance benefits, but it seems problematic across
>>> multiple backends, almost ensuring "holes" in the sequence of numbers.
>>
>> The point is to reduce lock contention on the sequence table.  Since
>> lack-of-holes is explicitly *not* a design goal, there is no downside
>> that I see.
>>
> I knew that, but it wasn't until I thought of using a sequence as a shared
> variable that it sort of hit me.
>
> The question is, what do you think of an "addval" function for sequences.
> As used:
>
> Executed in a trigger:
>
> select addval('mysum', val);
>
> Executed:
> select currval('mysum');
>
> Instead of:
>
> select sum(val) from largetable;

That seems completely ridiculous, as far as I can see.

The fundamental problem with it is that sequence updates (at least the
ones resulting from pulling values from them) "commit themselves"
irrespective of whether the transaction does.

> The problem I, and I know many other people, are having is that
> large sums cannot be obtained without a table scan. A summary table
> can be created, but if you have any sort of volume, you have to
> vacuum the summary table constantly.
>
> Using the sequence construct as sort of an accumulator just might
> fit the bill for this sort of thing.

No, it would not.

What would fit the bill would be the use of some sort of "accumulator
table" that would get an insert every time the main table got touched
(see the trigger sketch after the example below).  The inserts would be
within the same transaction context, and therefore "MVCC-valid."

create table summary_table (
  id serial not null unique,
  tname text not null,
  rows integer
);

create table large_table (
  id serial not null unique,
  name text,
  txn_date timestamptz default now(),
  quantity integer not null,
  units text not null,
  price numeric(12,2),
  value numeric(12,2)
);

insert into large_table (name, quantity, units, price, value) values
('foo', 1, 'units', 25, 25);
insert into large_table (name, quantity, units, price, value) values
('foo', 2, 'units', 25, 50);
insert into large_table (name, quantity, units, price, value) values
('foo', 3, 'units', 25, 75);
insert into large_table (name, quantity, units, price, value) values
('foo', 4, 'units', 25, 100);
-- ...the same four inserts repeated several more times to populate the table
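
The glue Browne describes, an insert into summary_table fired by every
change to large_table, could be wired up with a trigger along these lines
(a hedged sketch, not part of the original message; PL/pgSQL and the
function name are assumptions):

create function large_table_count() returns trigger as '
begin
    -- one accumulator row per inserted row; it commits or rolls back
    -- together with the insert that fired it
    insert into summary_table (tname, rows) values (''large_table'', 1);
    return new;
end;
' language plpgsql;

create trigger large_table_count after insert on large_table
    for each row execute procedure large_table_count();

-- the running total is then an MVCC-consistent sum:
select sum(rows) from summary_table where tname = 'large_table';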

Re: [HACKERS] simple_heap_update: tuple concurrently updated -- during INSERT

2004-06-09 Thread SZŰCS Gábor
Dear Tom,

Thanks for your reply.
Thinking about it, yes; there are triggers that (may) do updates on this
table, and there is a master table "pakolas" ("pakolas_cikktetel" is a
detail of it) that I touch, and yes, it has a NOTIFY in an AFTER trigger
(that is one of the reasons I touch that table ;) )

Q1. So is this everything that can be said -- NOTIFY calls
simple_heap_update that is concurrently updated by a different transaction?

Q2. I'm not sure if it's a good idea, but if the transaction simply fails
due to a NOTIFY, can I hack around it so that it won't ruin the entire
transaction, only NOTIFY?

Q3. Is there a way to degrade this error to warning or maybe notice (without
recompiling the server source ;) )?

Thanks again,
G.
%--- cut here ---%
\end

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "SZŰCS Gábor" <[EMAIL PROTECTED]>
Cc: "PGH" <[EMAIL PROTECTED]>
Sent: Wednesday, June 09, 2004 5:23 PM
Subject: Re: [HACKERS] simple_heap_update: tuple concurrently updated -- 
during INSERT


> "=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=" <[EMAIL PROTECTED]> writes:
> >   ERROR:  simple_heap_update: tuple concurrently updated.
> >   LOG:  statement: INSERT INTO pakolas_cikktetel
> > (cikk, minoseg, helyrol, mennyi, pakolas, sorszam, helyre) VALUES
> > (102165, 1, 1488, '25', 68615, 1, 1338)
>
> Hmm.  AFAIK, an INSERT statement per se will never call
> simple_heap_update at all --- that routine is only used for system
> catalog updates.  I'm wondering what is going on behind the scenes
> here.  Have you got any triggers or rules that this INSERT would
> fire, and if so what do they do?  Maybe you are using NOTIFY?
>
> regards, tom lane
>


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Nested xacts: looking for testers and review

2004-06-09 Thread Alvaro Herrera
On Fri, May 28, 2004 at 04:05:40PM -0400, Bruce Momjian wrote:

Bruce,

> One interesting idea would be for COMMIT to affect the outer
> transaction, and END not affect the outer transaction.  Of course that
> kills the logic that COMMIT and END are the same, but it is an
> interesting idea, and doesn't affect backward compatibility because
> END/COMMIT behave the same in non-nested transactions.

I implemented this behavior by using parameters to COMMIT/END.  I didn't
want to add new keywords to the grammar so I just picked up
"COMMIT WITHOUT ABORT".  (Originally I had thought "COMMIT IGNORE
ERRORS" but those would be two new keywords and I don't want to mess
around with the grammar.  If there are different opinions, tweaking the
grammar is easy).

So the behavior I originally implemented is still there:

alvherre=# begin;
BEGIN
alvherre=# begin;
BEGIN
alvherre=# select foo;
ERROR:  column "foo" does not exist
alvherre=# commit;
COMMIT
alvherre=# select 1;
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
alvherre=# commit;
COMMIT


However, if one wants the behavior you propose in a script, use
the following:

alvherre=# begin;
BEGIN
alvherre=# begin;
BEGIN
alvherre=# select foo;
ERROR:  column "foo" does not exist
alvherre=# commit without abort;
COMMIT
alvherre=# select 1;
 ?column? 
----------
        1
(1 row)

alvherre=# commit;
COMMIT


The patch is attached.  It applies only after the previous patch,
obviously.

-- 
Alvaro Herrera ()
"Ciencias políticas es la ciencia de entender por qué
los políticos actúan como lo hacen"  (netfunny.com)
diff -Ncr --exclude-from=diff-ignore 10bgwriter/src/backend/access/transam/xact.c 13commitOpt/src/backend/access/transam/xact.c
*** 10bgwriter/src/backend/access/transam/xact.c	2004-06-08 17:34:49.0 -0400
--- 13commitOpt/src/backend/access/transam/xact.c	2004-06-09 12:00:49.0 -0400
***
*** 2125,2131 
   *EndTransactionBlock
   */
  void
! EndTransactionBlock(void)
  {
TransactionState s = CurrentTransactionState;
  
--- 2125,2131 
   *EndTransactionBlock
   */
  void
! EndTransactionBlock(bool ignore)
  {
TransactionState s = CurrentTransactionState;
  
***
*** 2163,2172 
/*
 * here we are in an aborted subtransaction.  Signal
 * CommitTransactionCommand() to clean up and return to the
!* parent transaction.
 */
case TBLOCK_SUBABORT:
!   s->blockState = TBLOCK_SUBENDABORT_ERROR;
break;
  
case TBLOCK_STARTED:
--- 2163,2177 
/*
 * here we are in an aborted subtransaction.  Signal
 * CommitTransactionCommand() to clean up and return to the
!* parent transaction.  If we are asked to ignore the errors
!* in the subtransaction, the parent can continue; else,
!* it has to be put in aborted state too.
 */
case TBLOCK_SUBABORT:
!   if (ignore)
!   s->blockState = TBLOCK_SUBENDABORT_OK;
!   else
!   s->blockState = TBLOCK_SUBENDABORT_ERROR;
break;
  
case TBLOCK_STARTED:
diff -Ncr --exclude-from=diff-ignore 10bgwriter/src/backend/parser/gram.y 13commitOpt/src/backend/parser/gram.y
*** 10bgwriter/src/backend/parser/gram.y	2004-06-03 20:46:48.0 -0400
--- 13commitOpt/src/backend/parser/gram.y	2004-06-09 11:51:04.0 -0400
***
*** 225,232 
target_list update_target_list insert_column_list
insert_target_list def_list opt_indirection
group_clause TriggerFuncArgs select_limit
!   opt_select_limit opclass_item_list transaction_mode_list
!   transaction_mode_list_or_empty
TableFuncElementList
prep_type_clause prep_type_list
execute_param_clause
--- 225,232 
target_list update_target_list insert_column_list
insert_target_list def_list opt_indirection
group_clause TriggerFuncArgs select_limit
!   opt_select_limit opclass_item_list transaction_commit_opts
!   transaction_mode_list transaction_mode_list_or_empty
TableFuncElementList
prep_type_clause prep_type_list
execute_param_clause
**

[HACKERS] transaction aborts in SPI

2004-06-09 Thread craig perras
I've appended a quote from Ch 41 on the SPI; I'd like
to make sure I understand the implications of this. If
I've allocated resources inside my procedure (file
handles or what-not), how would I clean those up in
the case of a transaction abort? Could I Notify my
application that the transaction aborted (IIUC they
aren't sent in the case of a transaction abort)?

thanks!
--craig


Note that if during the execution of a procedure the
transaction is aborted because of an error in a
command, then control will not be returned to your
procedure. Rather, all work will be rolled back and
the server will wait for the next command from the
client. A related restriction is the inability to
execute BEGIN, COMMIT, and ROLLBACK (transaction
control statements) inside a procedure. Both of these
restrictions will probably be changed in the future.




__
Do you Yahoo!?
Friends.  Fun.  Try the all-new Yahoo! Messenger.
http://messenger.yahoo.com/ 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Nested xacts: looking for testers and review

2004-06-09 Thread Alvaro Herrera
On Sun, May 30, 2004 at 04:07:27AM -0700, Stephan Szabo wrote:
> On Sat, 29 May 2004, Alvaro Herrera wrote:

> > Ah, this seems to work.  I'll implement it and I'll let you know how it
> > goes.
> 
> Ugh... There's one further wrinkle I hadn't thought about, imagine the
> following:

Ok Stephan, thank you very much for your help.  I implemented this and
it seems to work, at least on my (limited?) test scenario.  If you have
some spare time I'd like you to test it and see if you can break it (I
posted it to -patches yesterday).

Or please see my test case below.  Is it missing something?  Note that
if I take out any of the four inserts just before the end of the
transaction, the whole thing is rejected.

DROP TABLE foo CASCADE;
DROP TABLE bar CASCADE;
CREATE TABLE foo (A INT UNIQUE);
CREATE TABLE bar (A INT REFERENCES foo(A) DEFERRABLE);

DELETE FROM bar;
DELETE FROM foo;
INSERT INTO foo VALUES (1);
INSERT INTO foo VALUES (2);
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO bar VALUES (1);
BEGIN;
INSERT INTO bar VALUES (3);
COMMIT;
BEGIN;
BEGIN;
INSERT INTO bar VALUES (4);
COMMIT;
INSERT INTO foo VALUES (3);
SET CONSTRAINTS ALL IMMEDIATE;
ROLLBACK;
SET CONSTRAINTS ALL DEFERRED;
BEGIN;
INSERT INTO bar VALUES (5);
COMMIT;
BEGIN;
BEGIN;
INSERT INTO bar VALUES (6);
ROLLBACK;
COMMIT;
BEGIN;
INSERT INTO bar VALUES (7);
COMMIT;
BEGIN;
BEGIN;
INSERT INTO bar VALUES (9);
COMMIT;
COMMIT;
INSERT INTO foo VALUES(3);
INSERT INTO foo VALUES(5);
INSERT INTO foo VALUES(7);
INSERT INTO foo VALUES(9);
COMMIT;

-- 
Alvaro Herrera ()
"Acepta los honores y aplausos y perderás tu libertad"


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] sequences and "addval('myseq', value)"

2004-06-09 Thread pgsql
> On 6/8/2004 11:46 AM, [EMAIL PROTECTED] wrote:
>
>>>
>>> This strikes me as a complete nonstarter.
>>
>> Tom, I have to chuckle here. You HATE every suggestion I ever make. I
>> can't think of one thing I've suggested over the years that was ever met
>> with enthusiasm. Never change. :-)
>
> I happen to agree with Tom on this entire thread. I do not think that
> sequences should be abused as a replacement for global shared variables.
>
> I do think that PostgreSQL should instead have some sort of shared
> memory segment for user variables. The accessor functions would place
> locks and the like, and would have mechanisms like UNDO on rollback.
> With all the limitations that has, inflexible size, garbage collection
> and so on, it would yet be far superior to anything you've come up with
> here.

My original suggestion was to have some sort of global variable system. I
thought using an existing construct would have been more palatable.

I was wrong.
>
>
> Jan
>
> --
> #==#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.  #
> #== [EMAIL PROTECTED] #
>


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Assignment to array elements

2004-06-09 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> What I'm curious about is where the original behaviour came from. Is
> it just because insert with subscripts was never implemented? Or was
> there a rationale for ignoring the subscripts?

It's been awhile, but I think that "ignore the subscripts" may have been
something I put in, because the original behavior was even more broken.
Old-timers will recall that the array behavior we got from Berkeley was
bizarrely broken in a whole lot of ways :-(

The regression tests expect that this will work:

INSERT INTO arrtest (a, b[1][2][2], c, d[2][1])
   VALUES ('{}', '{3,4}', '{foo,bar}', '{bar,foo}');

(AFAICT this has been in there since Berkeley) and you'll notice that
the values supplied for b and d do not agree at all with the
dimensionality specified by the insert targetlist.  There are several
other errors of the same kind.  So the net effect has certainly always
been that subscripts supplied here were ignored.

Given that we never documented that you could write a subscript in
INSERT, I doubt anybody ever tried, so the lack of functionality didn't
get noticed.

With the patch I'm about to commit, the subscripts *are* functional
and so the above command draws an error.  I've updated the regression
tests to do

INSERT INTO arrtest (a, b[1:2], c, d[1:2])
   VALUES ('{}', '{3,4}', '{foo,bar}', '{bar,foo}');

which inserts the same data that this test has always inserted.

Note that you have to write array-slice subscripts if your intention is
to insert more than one element this way.  It appears that the Berkeley
guys may have intended to allow this shorthand as an equivalent to the
above:

INSERT INTO arrtest (a, b[2], c, d[2])
   VALUES ('{}', '{3,4}', '{foo,bar}', '{bar,foo}');

but I think that's a lousy idea since it forecloses assigning just a
single array element, which seems like behavior at least as useful
as the slice case.  With the patch, you can do this:

INSERT INTO arrtest (b[1], b[2])
   VALUES (3, 4);

to insert the same b array value as the above.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Alvaro Herrera
On Wed, Jun 09, 2004 at 01:41:27PM -0400, [EMAIL PROTECTED] wrote:
> > On Wed, Jun 09, 2004 at 10:49:20PM +0800, Christopher Kings-Lynne wrote:

> > Also he said that the problem was solved with enough lazy VACUUM
> > scheduling.  I don't understand why he doesn't want to use that
> > solution.
> 
> Sigh, because vacuums take away from performance. Imagine a table that has
> to be updated on the order of a few thousand times a minute. Think about
> the drop in performance during the vacuum.
> 
> On a one row table, vacuum is not so bad, but try some benchmarks on a
> table with a goodly number of rows.

Hmm, this can be a problem if VACUUM pollutes the shared buffer pool.
So what about a new buffer replacement policy that takes this into
account and is not fooled by VACUUM?  This is already implemented in
7.5.  Also, how about a background writer process that writes dirty
buffers so that backends don't have to wait for IO to complete when a
dirty buffer has to be written?  This is also in current CVS.


Have you tried and measured how the current CVS code performs?  Jan
Wieck reported a lot of performance improvement some time ago while he
was developing this.  The code has changed since then, and I have not
seen any measurements.

-- 
Alvaro Herrera ()
Oh, oh, the Galacian girls will do it for pearls,
And those of Arrakis for water! But if you seek ladies
Who burn like flames, try a daughter of Caladan! (Gurney Halleck)


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> On 6/9/2004 1:44 PM, Bruce Momjian wrote:
> 
> > Jan Wieck wrote:
> >> On 6/9/2004 1:04 PM, Bruce Momjian wrote:
> >> 
> >> > What we really need is a way to do the uid->username mapping in a
> >> > thread-safe way.  Could we check the environment for $USER or $LOGNAME?
> >> > Could we require them to be set for thread builds on OS's without
> >> > getpwuid_r and in cases where the username is not specified in the
> >> > connection string?
> >> 
> >> Maybe not as popular, but what about breaking backward compatibility and 
> >> require the DB name to be specified, no username fallback? How many 
> >> applications really rely on that feature? And people who are used to it 
> >> from the commandline can set PGDATABASE in their .profile to get it back.
> > 
> > That is only part of where the username is used. I assume it is also
> > used for connections when the username isn't supplied, not just as the
> > default for the database name.
> > 
> > Basically on those platforms, either the username would have to be in
> > the environment, or supplied as part of the connection string.
> > 
> 
> We have PGUSER, PGHOST, PGPORT, PGDATABASE, all of them you can set in 
> your .profile, why do we need to look up the uid at all?

In case they don't set it, which is very common, I am sure.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Scott Marlowe
On Wed, 2004-06-09 at 11:41, [EMAIL PROTECTED] wrote:
> > On Wed, Jun 09, 2004 at 10:49:20PM +0800, Christopher Kings-Lynne wrote:
> >> >I love PG, I've been using it since version 6x, and it has gotten
> >> >fantastic over the years, and in many cases, I would choose it over
> >> >Oracle, but for systems that need frequent updates, I have a lot of
> >> >concerns.
> >>
> >> ...that's the price you pay for concurrency man...
> >
> > Also he said that the problem was solved with enough lazy VACUUM
> > scheduling.  I don't understand why he doesn't want to use that
> > solution.
> >
> 
> Sigh, because vacuums take away from performance. Imagine a table that has
> to be updated on the order of a few thousand times a minute. Think about
> the drop in performance during the vacuum.
> 
> On a one row table, vacuum is not so bad, but try some benchmarks on a
> table with a goodly number of rows.

Several points:

All databases pay to clean up the mess they've made, so to speak.  In
PostgreSQL you get to choose when, instead of always paying the price at
the end of each transaction.

Lazy vacuum does not impact performance nearly as much as the old full
vacuum.  With the sleep / delay patch that's been passed around on
-hackers, its impact on the rest of the database is virtually zero.

Properly set up fsm settings, the pg_autovacuum daemon, and an
installation of the sleep / delay patch mentioned above make this a
non-issue.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Joshua D. Drake

> Sigh, because vacuums take away from performance.

This is a known issue that has been pretty much resolved for 7.5. Vacuum
in 7.5 does not use anywhere near as many IO resources.

> Imagine a table that has
> to be updated on the order of a few thousand times a minute. Think about
> the drop in performance during the vacuum.
> On a one row table, vacuum is not so bad, but try some benchmarks on a
> table with a goodly number of rows.

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 1:44 PM, Bruce Momjian wrote:
Jan Wieck wrote:
On 6/9/2004 1:04 PM, Bruce Momjian wrote:
> What we really need is a way to do the uid->username mapping in a
> thread-safe way.  Could we check the environment for $USER or $LOGNAME?
> Could we require them to be set for thread builds on OS's without
> getpwuid_r and in cases where the username is not specified in the
> connection string?
Maybe not as popular, but what about breaking backward compatibility and 
requiring the DB name to be specified, with no username fallback? How many 
applications really rely on that feature? And people who are used to it 
from the command line can set PGDATABASE in their .profile to get it back.
That is only part of where the username is used. I assume it is also
used for connections when the username isn't supplied, not just as the
default for the database name.
Basically on those platforms, either the username would have to be in
the environment, or supplied as part of the connection string.
We have PGUSER, PGHOST, PGPORT, PGDATABASE, all of which you can set in 
your .profile, so why do we need to look up the uid at all?
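
For instance (these are the standard libpq environment variables; the
values below are made up):

    # in ~/.profile
    PGUSER=alice
    PGDATABASE=alicedb
    export PGUSER PGDATABASE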

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Joshua D. Drake

Also he said that the problem was solved with enough lazy VACUUM
scheduling.  I don't understand why he doesn't want to use that
solution.
Because even lazy VACUUM is horrendous for performance, but as I said in
a further post, this has been pretty much fixed (by Jan, I believe) in 7.5.
Sincerely,
Joshua D. Drake



--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Bruno Wolff III
On Wed, Jun 09, 2004 at 13:41:27 -0400,
  [EMAIL PROTECTED] wrote:
> 
> Sigh, because vacuums take away from performance. Imagine a table that has
> to be updated on the order of a few thousand times a minute. Think about
> the drop in performance during the vacuum.
> 
> On a one row table, vacuum is not so bad, but try some benchmarks on a
> table with a goodly number of rows.

But you only need to rapidly vacuum the one table that is keeping your
totals record. This isn't going to be a big hit in performance relative
to the updates that are going on. You don't need to vacuum the tables
you are doing the inserts or updates to at that same rate.
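
Assuming the hot table is a one-row totals table called totals in a
database mydb (names invented), that can be as crude as a cron entry:

    # lazy-vacuum only the hot table, once a minute
    * * * * *   psql -d mydb -c 'VACUUM totals'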

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> On 6/9/2004 1:04 PM, Bruce Momjian wrote:
> 
> > What we really need is a way to do the uid->username mapping in a
> > thread-safe way.  Could we check the environment for $USER or $LOGNAME?
> > Could we require them to be set for thread builds on OS's without
> > getpwuid_r and in cases where the username is not specified in the
> > connection string?
> 
> Maybe not as popular, but what about breaking backward compatibility and 
> requiring the DB name to be specified, with no username fallback? How many 
> applications really rely on that feature? And people who are used to it 
> from the command line can set PGDATABASE in their .profile to get it back.

That is only part of where the username is used. I assume it is also
used for connections when the username isn't supplied, not just as the
default for the database name.

Basically on those platforms, either the username would have to be in
the environment, or supplied as part of the connection string.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 1:04 PM, Bruce Momjian wrote:
What we really need is a way to do the uid->username mapping in a
thread-safe way.  Could we check the environment for $USER or $LOGNAME?
Could we require them to be set for thread builds on OS's without
getpwuid_r and in cases where the username is not specified in the
connection string?
Maybe not as popular, but what about breaking backward compatibility and 
requiring the DB name to be specified, with no username fallback? How many 
applications really rely on that feature? And people who are used to it 
from the command line can set PGDATABASE in their .profile to get it back.

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread pgsql
> On Wed, Jun 09, 2004 at 10:49:20PM +0800, Christopher Kings-Lynne wrote:
>> >I love PG, I've been using it since version 6x, and it has gotten
>> >fantastic over the years, and in many cases, I would choose it over
>> >Oracle, but for systems that need frequent updates, I have a lot of
>> >concerns.
>>
>> ...that's the price you pay for concurrency man...
>
> Also he said that the problem was solved with enough lazy VACUUM
> scheduling.  I don't understand why he doesn't want to use that
> solution.
>

Sigh, because vacuums take away from performance. Imagine a table that has
to be updated on the order of a few thousand times a minute. Think about
the drop in performance during the vacuum.

On a one row table, vacuum is not so bad, but try some benchmarks on a
table with a goodly number of rows.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] sequences and "addval('myseq', value)"

2004-06-09 Thread Jan Wieck
On 6/8/2004 11:46 AM, [EMAIL PROTECTED] wrote:
This strikes me as a complete nonstarter.
Tom, I have to chuckle here. You HATE every suggestion I ever make. I
can't think of one thing I've suggested over the years that was ever met
with enthusiasm. Never change. :-)
I happen to agree with Tom on this entire thread. I do not think that 
sequences should be abused as a replacement for global shared variables.

I do think that PostgreSQL should instead have some sort of shared 
memory segment for user variables. The accessor functions would place 
locks and the like, and would have mechanisms like UNDO on rollback. 
With all the limitations that has, inflexible size, garbage collection 
and so on, it would yet be far superior to anything you've come up with 
here.

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian

Well, looks like you are missing getpwuid_r():

Your system uses getpwuid() which is not thread-safe. **

and we don't have any workaround for this.

---

Jan Wieck wrote:
> On 6/9/2004 11:16 AM, Bruce Momjian wrote:
> 
> > Jan Wieck wrote:
> >> On 6/9/2004 9:36 AM, Bruce Momjian wrote:
> >> 
> >> > Jan Wieck wrote:
> >> >> I am wondering why thread_test.c is checking for mktemp()? That function 
> >> >> is nowhere used in the libpq.
> >> > 
> >> > Uh, it isn't checking for mktemp, it is using it, and it is using it
> >> > because someone didn't like hard-coded paths I was using in the past. 
> >> > Is there something wrong with using mktemp?  I have heard of no
> >> > portability problems, except some need six X's, and we updated that.
> >> 
> >> There seems to be a portability issue here. Stefan Kaltenbrunner 
> >> reported a configure failure on sparc64-unknown-openbsd3.5 and the 
> >> config.log says:
> >> 
> >> /tmp//ccx22029.o: In function `main':
> >> /tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
> >> consider
> >>   using mkstemp()
> > 
> > Yes, I was wondering how mktemp was going to guard against concurrent
> > access.  I have applied the following patch to use mkstemp().
> > 
> >> Which is only a warning at this time; it fails later on getpwuid().
> > 
> > Oh, I will need to hear more about that failure.
> 
> The relevant part of the config.log is:
> 
> configure:17942: checking thread safety of required library functions
> configure:17967: gcc -o conftest -O2 -fno-strict-aliasing -pthread 
> -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -DIN_CONFIGURE 
>conftest.c -lz -lreadline -lcurses -lresolv -lcompat -lm
> -lutil   >&5
> /tmp//ccx22029.o: In function `main':
> /tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
> consider using mkstemp()
> configure:17970: $? = 0
> configure:17972: ./conftest
> Your errno is thread-safe.
> Your system has sterror_r();  it does not need strerror().
> Your system uses getpwuid() which is not thread-safe. **
> Your system has getaddrinfo();  it does not need gethostbyname()
>or gethostbyname_r().
> 
> ** YOUR PLATFORM IS NOT THREAD-SAFE. **
> configure:17975: $? = 1
> configure: program exited with status 1
> configure: failed program was:
> #line 17961 "configure"
> #include "confdefs.h"
> #include "./src/tools/thread/thread_test.c"
> configure:17984: result: no
> configure:17986: error:
> *** Thread test program failed.  Your platform is not thread-safe.
> *** Check the file 'config.log'for the exact reason.
> 
> 
> 
> > 
> > 
> > 
> > 
> > 
> > Index: src/tools/thread/thread_test.c
> > ===
> > RCS file: /cvsroot/pgsql-server/src/tools/thread/thread_test.c,v
> > retrieving revision 1.30
> > diff -c -c -r1.30 thread_test.c
> > *** src/tools/thread/thread_test.c  28 May 2004 18:37:10 -  1.30
> > --- src/tools/thread/thread_test.c  9 Jun 2004 15:03:29 -
> > ***
> > *** 104,110 
> >   {
> > pthread_t   thread1,
> > thread2;
> > ! 
> > if (argc > 1)
> > {
> > fprintf(stderr, "Usage: %s\n", argv[0]);
> > --- 104,111 
> >   {
> > pthread_t   thread1,
> > thread2;
> > !   int fd;
> > !   
> > if (argc > 1)
> > {
> > fprintf(stderr, "Usage: %s\n", argv[0]);
> > ***
> > *** 120,130 
> > /* Make temp filenames, might not have strdup() */
> > temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
> > strcpy(temp_filename_1, TEMP_FILENAME_1);
> > !   mktemp(temp_filename_1);
> >   
> > temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
> > strcpy(temp_filename_2, TEMP_FILENAME_2);
> > !   mktemp(temp_filename_2);
> > 
> >   #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
> > if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)
> > --- 121,133 
> > /* Make temp filenames, might not have strdup() */
> > temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
> > strcpy(temp_filename_1, TEMP_FILENAME_1);
> > !   fd = mkstemp(temp_filename_1);
> > !   close(fd);
> >   
> > temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
> > strcpy(temp_filename_2, TEMP_FILENAME_2);
> > !   fd = mkstemp(temp_filename_2);
> > !   close(fd);
> > 
> >   #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
> > if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)
> 
> 
> -- 
> #==#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.  #
> #== [EMAIL PROTECTED] #

Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> On 6/9/2004 11:45 AM, Bruce Momjian wrote:
> 
> > Jan Wieck wrote:
> >> The problem is that if your thread-safety tests fail, there is no way to 
> >> build libpq with -pthread and -DREENTRANT or whatever is required on 
> >> that platform. On Solaris this results in errno being defined as
> >> 
> >>  extern int errno;
> >> 
> >>  as opposed to
> >> 
> >>  #define errno *(errno())
> >> 
> >> which makes libpq on Solaris completely useless for every program that 
> >> uses threads for "something". There is still value in compiling it with 
> >> thread support compiler flags, even if it will not result in a thread 
> >> safe libpq.
> > 
> > Well, first we should find out how to get the thread test to pass for
> > that platform, but for cases where we can't (FreeBSD doesn't have
> > getpwuid_r()), we are stuck.  (That might be your Solaris problem as
> > well.)
> 
> There is no problem with thread safety on Solaris. The configure script 
> for 7.4.2 is broken for it, leading to a broken libpq when using 
> --enable-thread-safety.

Oh, yes, to be fixed in 7.4.3.

> > I looked over the code and the only place getpwuid_r (through
> > pqGetpwuid) is used is in libpq to look up the default username based on
> > the euid for the connection to the backend.  Unfortunately, I can't find
> > any other way to do such a lookup in a thread-safe manner unless we do a
> > system() or lock/read /etc/passwd ourselves, both of which are ugly.
> > 
> > I can't imagine how some OS's cannot give us a thread-safe way to do
> > this.  
> > 
> > When FreeBSD can't enable threads in 7.5, folks are going to be upset. 
> > In 7.4 we allowed it by having our own C code lock/copy the passwd
> > structure, but someone pointed out that calls to getpwuid() in other
> > places in the client code don't have such locking, so it would not work,
> > so it was removed for 7.5.
> 
> I disagree that all or nothing is a good strategy. What you have changed 
> this to is to deny using PostgreSQL from multithreaded applications on 
> platforms that have no getpwuid_r() at all, if the platform 
> happens to require any special thread compiler options for libpq to work 
> in general.

Yes, this was the agreed approach.  It wasn't all my idea.

> Take Slony as an example. It is multithreaded, and we aren't happy that 
> we have to guard the pg_connect() call with a mutex against multiple 
> concurrent calls. But since our connections are long-lived, 
> this is no problem. And nowhere else in the entire code is any call to 
> getpwuid() or anything else. So we have the situation under control. But 
> I really don't want to tell people in the build instructions that they 
> have to edit libpq's Makefile because PostgreSQL's ./configure script is 
> too restrictive.

Yep, a problem.  However, when we create libpq we can't know that your
app isn't going to call getpwuid itself, can we; and even if we did, we
couldn't know if other libraries you are calling are using it.  I bet
you don't even know that.

> And just to make your day, your tests for thread safety are incomplete. 

Why incomplete?  Because it doesn't handle kerberos?  See below.

> The reason why we use a mutex now on all platforms, thread safe or not, 
> is because in the event you have a kerberos lib available (which is not 
> thread safe), pg_connect() can crash whether you use kerberos or not. So 
> I think when compiling for --enable-thread-safe we should disable 
> kerberos in libpq, right?

I thought we added kerberos locking to our current CVS.

As for your general issue, I don't think we want to head down the road
of having a libpq that is safe only if certain qualifications are made
of clients calling it.  If folks know enough to deal with those
qualifications, they can use CFLAGS to pass compiler flags to their
libpq build or hack up the code to bypass the thread checks.

What we really need is a way to do the uid->username mapping in a
thread-safe way.  Could we check the environment for $USER or $LOGNAME?
Could we require them to be set for thread builds on OS's without
getpwuid_r and in cases where the username is not specified in the
connection string?
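
A sketch of that fallback order (not libpq's actual code; the helper name
and buffer handling here are invented for illustration):

    /* Illustrative only: environment first, getpwuid_r() when available. */
    #include <pwd.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int
    default_username(char *buf, size_t buflen)
    {
        const char *name = getenv("PGUSER");

        if (name == NULL)
            name = getenv("USER");
        if (name == NULL)
            name = getenv("LOGNAME");
        if (name != NULL)
        {
            strncpy(buf, name, buflen - 1);
            buf[buflen - 1] = '\0';
            return 0;
        }
    #ifdef HAVE_GETPWUID_R
        {
            struct passwd pw,
                       *result = NULL;
            char        pwbuf[1024];

            if (getpwuid_r(geteuid(), &pw, pwbuf, sizeof(pwbuf),
                           &result) == 0 && result != NULL)
            {
                strncpy(buf, result->pw_name, buflen - 1);
                buf[buflen - 1] = '\0';
                return 0;
            }
        }
    #endif
        return -1;              /* no thread-safe answer on this platform */
    }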

If FreeBSD and Solaris both have this issue, it is important for us to
solve it.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] pg_dumpall.c missing include time.h?

2004-06-09 Thread Bruce Momjian

Added time.h to pg_dumpall.c.  Thanks.

---

Alvaro Herrera wrote:
> I got:
> 
> gcc -O2 -fno-strict-aliasing -g -Wall -Wmissing-prototypes -Wmissing-declarations 
> -I/home/alvherre/CVS/pgsql/source/13commitOpt/src/interfaces/libpq 
> -I../../../src/include -I/home/alvherre/CVS/pgsql/source/13commitOpt/src/include 
> -D_GNU_SOURCE  -DFRONTEND  -c -o pg_dumpall.o 
> /home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c -MMD
> /home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c: In 
> function `dumpTimestamp':
> /home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:826: 
> warning: implicit declaration of function `time'
> /home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:828: 
> warning: implicit declaration of function `strftime'
> /home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:828: 
> warning: implicit declaration of function `localtime'
> 
> -- 
> Alvaro Herrera ()
> "La naturaleza, tan fr?gil, tan expuesta a la muerte... y tan viva"
> 
> 
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Alvaro Herrera
On Wed, Jun 09, 2004 at 10:49:20PM +0800, Christopher Kings-Lynne wrote:
> >I love PG, I've been using it since version 6x, and it has gotten
> >fantastic over the years, and in many cases, I would choose it over
> >Oracle, but for systems that need frequent updates, I have a lot of
> >concerns.
> 
> ...that's the price you pay for concurrency man...

Also he said that the problem was solved with enough lazy VACUUM
scheduling.  I don't understand why he doesn't want to use that
solution.

-- 
Alvaro Herrera ()
"People get annoyed when you try to debug them."  (Larry Wall)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] simple_heap_update: tuple concurrently updated -- during INSERT

2004-06-09 Thread Tom Lane
"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=" <[EMAIL PROTECTED]> writes:
> Q1. So is this everything that can be said -- NOTIFY calls
> simple_heap_update that is concurrently updated by a different transaction?

If that's what it is, then there's still a question: why?  The notify
code has enough locking that this failure shouldn't happen.  If you can
reproduce this I'd like to look into it.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Tablespaces

2004-06-09 Thread Greg Stark

Dennis Bjorklund <[EMAIL PROTECTED]> writes:

> On Thu, 26 Feb 2004, Gavin Sherry wrote:
> 
> > Comments? Questions? Suggestions?
> 
> Is that plan that in the future one can split a single table into 
> different table spaces? Like storing all rows with year < 1999 in one 
> tablespace and the rest in another?

That's a separate orthogonal feature called "partitioned tables". There's some
amount of resistance to the idea amongst postgres people, and nobody has
spoken up as interested in implementing it, but there's also lots of
interest from users. A good patch would probably go a long way to convincing
people :)

Tablespaces are about being able to store different tables in different physical
locations on disk. A first version of this has actually been implemented for
7.5 using symlinks.

Partitioned tables and tablespaces do indeed have a certain amount of synergy.
But even in a single tablespace your example makes sense.
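
For reference, the 7.5 work exposes that roughly like this (syntax as it
stands in development; it could still change before release):

    CREATE TABLESPACE fastdisk LOCATION '/mnt/raid10/pgdata';
    CREATE TABLE hot_stuff (id int) TABLESPACE fastdisk;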

> With the rule system and two underlying tables one could make it work by 
> hand I think.

The rule system could be used to do this, but there was some discussion of
using inherited tables to handle it. However neither handles the really hard
part of detecting queries that use only a part of the table and taking that
into account in generating the plan.
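
Combined with inherited tables, the hand-rolled version might look like
this (table and column names invented):

    CREATE TABLE sales (saledate date, amount numeric);
    CREATE TABLE sales_pre1999 () INHERITS (sales);
    CREATE TABLE sales_recent  () INHERITS (sales);

    -- route inserts on the parent into the right child
    CREATE RULE sales_ins_old AS ON INSERT TO sales
        WHERE new.saledate <  '1999-01-01'
        DO INSTEAD INSERT INTO sales_pre1999 VALUES (new.saledate, new.amount);
    CREATE RULE sales_ins_new AS ON INSERT TO sales
        WHERE new.saledate >= '1999-01-01'
        DO INSTEAD INSERT INTO sales_recent VALUES (new.saledate, new.amount);

    SELECT count(*) FROM sales;    -- inheritance scans both children

but the planner still has no idea it could skip sales_pre1999 for a query
that only touches recent dates, which is the hard part mentioned above.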

-- 
greg


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Shridhar Daithankar
Honza Pazdziora wrote:
On Wed, Jun 09, 2004 at 07:53:19PM +0530, Shridhar Daithankar wrote:
Well, the statement 'postgresql.conf outside data directory' isn't going to 
win, I think.
One day there won't be any data directory because the data will be
on raw partitions. Then you will _have_ to have the configuration
somewhere else. Ideally, only the absolute minimum of parameters
needed to start the server ought to be in the external configuration.
Everything else may happily reside within the database storage.
Given that postgresql by design does not try to outsmart the OS in any overlapping 
functionality, I doubt we will see raw storage anytime soon, but anyway..


Let me put it in a different way. What you are asking for is a service 
configuration file. It is *not* the same as the current postgresql configuration 
file. It will/should be unique to a particular installation of postgresql, 
i.e. something like /etc/postgresql/7.4.2/service.conf
Why? The administrator may want to run a second cluster on the same
machine, share a couple of options using the "include" directive while
preserving separate configuration, including the location of data
store, for things that should be different for each of these
clusters.
Well that is easy. In the service file just say
[Cluster1]
 datapath=/data/foo
[Cluster2]
 datapath=/data/foo1
and postgresql.conf could still reside inside each cluster to provide specific 
configuration.

Then have a script which can say 'service postgresql cluster1 start'
I think it is a rather good idea to add service configuration to default 
What is that "service" you mention?
Postgresql as a database server is a service. A cluster is a service instance. 
A service configuration file documents all service instances and their 
parameters required for all tuning and control purposes. Add the possibility of 
multiple versions of postgresql on the same box. That sums it up pretty well..

Also, pulling postgresql.conf out of the cluster has a drawback: all the 
clusters would have to share the same tuning parameters, which is not exactly 
ideal.
Why would they _have_ to? Pulling postgresql.conf out of the cluster only
means that the file resides somewhere else and in it the location of
the data directory is specified. It does not mandate there will only
be one cluster and it does not mean that each cluster cannot have
completely different configuration file.
Well, I wish I had an archives link handy, but suffice it to say that Tom 
has rejected this idea many times before..

 Shridhar
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> On 6/9/2004 9:36 AM, Bruce Momjian wrote:
> > 
> >> Also, I would suggest that we allow building libpq with 
> >> thread-specific compiler options, even if it is not entirely thread safe. It 
> >> would require that a really multithreaded application has to have 
> >> mutexes around certain operations, but being entirely unable to 
> >> configure in a way that adds thread compile options to the CFLAGS makes 
> >> libpq completely useless for multithreaded programs on some platforms 
> >> (for example Solaris).
> > 
> > I am confused what you are suggesting.  Are you saying to use thread
> > flags but not the other things that make it thread-safe?  There isn't
> > much else other than the flags actually.  Now that more OS's are
> > thread-safe by default, we could consider using threading if it is
> > available by default.  We would need some way of reporting that to the
> > user, perhaps via an installed readme file or a pg_config output option.
> 
> The problem is that if your thread-safety tests fail, there is no way to 
> build libpq with -pthread and -DREENTRANT or whatever is required on 
> that platform. On Solaris this results in errno being defined as
> 
>  extern int errno;
> 
>  as opposed to
> 
>  #define errno *(errno())
> 
> which makes libpq on Solaris completely useless for every program that 
> uses threads for "something". There is still value in compiling it with 
> thread support compiler flags, even if it will not result in a thread 
> safe libpq.

Well, first we should find out how to get the thread test to pass for
that platform, but for cases where we can't (FreeBSD doesn't have
getpwuid_r()), we are stuck.  (That might be your Solaris problem as
well.)

I looked over the code and the only place getpwuid_r (through
pqGetpwuid) is used is in libpq to look up the default username based on
the euid for the connection to the backend.  Unfortunately, I can't find
any other way to do such a lookup in a thread-safe manner unless we do a
system() or lock/read /etc/passwd ourselves, both of which are ugly.

I can't imagine how some OS's cannot give us a thread-safe way to do
this.  

When FreeBSD can't enable threads in 7.5, folks are going to be upset. 
In 7.4 we allowed it by having our own C code lock/copy the passwd
structure, but someone pointed out that calls to getpwuid() in other
places in the client code don't have such locking, so it would not work,
so it was removed for 7.5.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 11:16 AM, Bruce Momjian wrote:
Jan Wieck wrote:
On 6/9/2004 9:36 AM, Bruce Momjian wrote:
> Jan Wieck wrote:
>> I am wondering why thread_test.c is checking for mktemp()? That function 
>> is nowhere used in the libpq.
> 
> Uh, it isn't checking for mktemp, it is using it, and it is using it
> because someone didn't like hard-coded paths I was using in the past. 
> Is there something wrong with using mktemp?  I have heard of no
> portability problems, except some need six X's, and we updated that.

There seems to be a portability issue here. Stefan Kaltenbrunner 
reported a configure failure on sparc64-unknown-openbsd3.5 and the 
config.log says:

/tmp//ccx22029.o: In function `main':
/tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
consider
  using mkstemp()
Yes, I was wondering how mktemp was going to guard against concurrent
access.  I have applied the following patch to use mkstemp().
Which is only a warning at this time; it fails later on getpwuid().
Oh, I will need to hear more about that failure.
The relevant part of the config.log is:
configure:17942: checking thread safety of required library functions
configure:17967: gcc -o conftest -O2 -fno-strict-aliasing -pthread 
-D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -DIN_CONFIGURE 
  conftest.c -lz -lreadline -lcurses -lresolv -lcompat -lm
-lutil   >&5
/tmp//ccx22029.o: In function `main':
/tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
consider using mkstemp()
configure:17970: $? = 0
configure:17972: ./conftest
Your errno is thread-safe.
Your system has sterror_r();  it does not need strerror().
Your system uses getpwuid() which is not thread-safe. **
Your system has getaddrinfo();  it does not need gethostbyname()
  or gethostbyname_r().

** YOUR PLATFORM IS NOT THREAD-SAFE. **
configure:17975: $? = 1
configure: program exited with status 1
configure: failed program was:
#line 17961 "configure"
#include "confdefs.h"
#include "./src/tools/thread/thread_test.c"
configure:17984: result: no
configure:17986: error:
*** Thread test program failed.  Your platform is not thread-safe.
*** Check the file 'config.log'for the exact reason.



Index: src/tools/thread/thread_test.c
===
RCS file: /cvsroot/pgsql-server/src/tools/thread/thread_test.c,v
retrieving revision 1.30
diff -c -c -r1.30 thread_test.c
*** src/tools/thread/thread_test.c	28 May 2004 18:37:10 -	1.30
--- src/tools/thread/thread_test.c	9 Jun 2004 15:03:29 -
***
*** 104,110 
  {
  	pthread_t	thread1,
  thread2;
! 
  	if (argc > 1)
  	{
  		fprintf(stderr, "Usage: %s\n", argv[0]);
--- 104,111 
  {
  	pthread_t	thread1,
  thread2;
! 	int			fd;
! 	
  	if (argc > 1)
  	{
  		fprintf(stderr, "Usage: %s\n", argv[0]);
***
*** 120,130 
  	/* Make temp filenames, might not have strdup() */
  	temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
  	strcpy(temp_filename_1, TEMP_FILENAME_1);
! 	mktemp(temp_filename_1);
  
  	temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
  	strcpy(temp_filename_2, TEMP_FILENAME_2);
! 	mktemp(temp_filename_2);
  	
  #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
  	if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)
--- 121,133 
  	/* Make temp filenames, might not have strdup() */
  	temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
  	strcpy(temp_filename_1, TEMP_FILENAME_1);
! 	fd = mkstemp(temp_filename_1);
! 	close(fd);
  
  	temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
  	strcpy(temp_filename_2, TEMP_FILENAME_2);
! 	fd = mkstemp(temp_filename_2);
! 	close(fd);
  	
  #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
  	if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Assignment to array elements

2004-06-09 Thread Greg Stark

Tom Lane <[EMAIL PROTECTED]> writes:

> What I would like to do about this is define INSERT to a subscripted
> column name as working the same way that an assignment to a element or
> slice of a zero-dimension array presently does --- that is, you get an
> actual array back and not a NULL.  It would also fall out that UPDATE
> of a null array value would behave that way.

That certainly seems nice. Though updating part of a null array value seems
a little sketchy in theory, it would sure be convenient for me.

What I'm curious about is where the original behaviour came from. Is it just
because insert with subscripts was never implemented? Or was there a rationale
for ignoring the subscripts?
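
For reference, my reading of the proposed behaviour, on a throwaway table:

    CREATE TABLE t (id int, arr int[]);
    -- today this leaves arr NULL; under the proposal it becomes '{42}':
    INSERT INTO t (id, arr[1]) VALUES (1, 42);
    -- and likewise an UPDATE of a still-null array value:
    UPDATE t SET arr[2] = 7 WHERE id = 1;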

-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 11:45 AM, Bruce Momjian wrote:
Jan Wieck wrote:
The problem is that if your thread-safety tests fail, there is no way to 
build libpq with -pthread and -DREENTRANT or whatever is required on 
that platform. On Solaris this results in errno being defined as

 extern int errno;
 as opposed to
 #define errno *(errno())
which makes libpq on Solaris completely useless for every program that 
uses threads for "something". There is still value in compiling it with 
thread support compiler flags, even if it will not result in a thread 
safe libpq.
Well, first we should find out how to get the thread test to pass for
that platform, but for cases where we can't (FreeBSD doesn't have
getpwuid_r()), we are stuck.  (That might be your Solaris problem as
well.)
There is no problem with thread safety on Solaris. The configure script 
for 7.4.2 is broken for it, leading to a broken libpq when using 
--enable-thread-safety.

I looked over the code and the only place getpwuid_r (through
pqGetpwuid) is used is in libpq to look up the default username based on
the euid for the connection to the backend.  Unfortunately, I can't find
any other way to do such a lookup in a thread-safe manner unless we do a
system() or lock/read /etc/passwd ourselves, both of which are ugly.
I can't imagine how some OS's cannot give us a thread-safe way to do
this.  

When FreeBSD can't enable threads in 7.5, folks are going to be upset. 
In 7.4 we allowed it by having our own C code lock/copy the passwd
structure, but someone pointed out that calls to getpwuid() in other
places in the client code don't have such locking, so it would not work,
so it was removed for 7.5.
I disagree that all or nothing is a good strategy. What you have changed 
this to is to deny using PostgreSQL from multithreaded applications on 
platforms that have no getpwuid_r() at all, if the platform 
happens to require any special thread compiler options for libpq to work 
in general.

Take Slony as an example. It is multithreaded, and we aren't happy that 
we have to guard the pg_connect() call with a mutex against multiple 
concurrent calls. But since our connections are of long living nature 
this is no problem. And nowhere else in the entire code is any call to 
getpwuid() or anything else. So we have the situation under control. But 
I really don't want to tell people in the build instructions that they 
have to edit libpq's Makefile because PostgreSQL's ./configure script is 
too restrictive.
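
The guard in question is just this pattern (a sketch; Slony's actual code
may differ):

    #include <pthread.h>
    #include <libpq-fe.h>

    static pthread_mutex_t connect_lock = PTHREAD_MUTEX_INITIALIZER;

    static PGconn *
    connect_guarded(const char *conninfo)
    {
        PGconn     *conn;

        /* the connect call is not reentrant here, so serialize callers */
        pthread_mutex_lock(&connect_lock);
        conn = PQconnectdb(conninfo);
        pthread_mutex_unlock(&connect_lock);
        return conn;
    }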

And just to make your day, your tests for thread safety are incomplete. 
The reason why we use a mutex now on all platforms, thread safe or not, 
is because in the event you have a kerberos lib available (which is not 
thread safe), pg_connect() can crash whether you use kerberos or not. So 
I think when compiling for --enable-thread-safe we should disable 
kerberos in libpq, right?

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[HACKERS] pg_dumpall.c missing include time.h?

2004-06-09 Thread Alvaro Herrera
I got:

gcc -O2 -fno-strict-aliasing -g -Wall -Wmissing-prototypes -Wmissing-declarations 
-I/home/alvherre/CVS/pgsql/source/13commitOpt/src/interfaces/libpq 
-I../../../src/include -I/home/alvherre/CVS/pgsql/source/13commitOpt/src/include 
-D_GNU_SOURCE  -DFRONTEND  -c -o pg_dumpall.o 
/home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c -MMD
/home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c: In function 
`dumpTimestamp':
/home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:826: warning: 
implicit declaration of function `time'
/home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:828: warning: 
implicit declaration of function `strftime'
/home/alvherre/CVS/pgsql/source/13commitOpt/src/bin/pg_dump/pg_dumpall.c:828: warning: 
implicit declaration of function `localtime'

-- 
Alvaro Herrera ()
"La naturaleza, tan frágil, tan expuesta a la muerte... y tan viva"


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread Christopher Kings-Lynne
I love PG, I've been using it since version 6x, and it has gotten
fantastic over the years, and in many cases, I would choose it over
Oracle, but for systems that need frequent updates, I have a lot of
concerns.
...that's the price you pay for concurrency man...
Chris
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread pgsql
>> I love PG, I've been using it since version 6x, and it has gotten
>> fantastic over the years, and in many cases, I would choose it over
>> Oracle, but for systems that need frequent updates, I have a lot of
>> concerns.
>
> ...that's the price you pay for concurrency man...

I think that's a cop-out. Other databases can handle this case fine and
they have MVCC.

Are we not "open source" "free software" proponents? Isn't one of our
motivations that we can do it better?

This *is* a problem with PostgreSQL, and it *is* a concern for a
reasonable number of potential deployments.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 9:36 AM, Bruce Momjian wrote:
Jan Wieck wrote:
I am wondering why thread_test.c is checking for mktemp()? That function 
is nowhere used in the libpq.
Uh, it isn't checking for mktemp, it is using it, and it is using it
because someone didn't like hard-coded paths I was using in the past. 
Is there something wrong with using mktemp?  I have heard of no
portability problems, except some need six X's, and we updated that.
There seems to be a portability issue here. Stefan Kaltenbrunner 
reported a configure failure on sparc64-unknown-openbsd3.5 and the 
config.log says:

/tmp//ccx22029.o: In function `main':
/tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
consider
 using mkstemp()

Which is only a warning at this time; it fails later on getpwuid().
Jan

Also, I would suggest that we allow building libpq with 
thread-specific compiler options, even if it is not entirely thread safe. It 
would require that a really multithreaded application has to have 
mutexes around certain operations, but being entirely unable to 
configure in a way that adds thread compile options to the CFLAGS makes 
libpq completely useless for multithreaded programs on some platforms 
(for example Solaris).
I am confused what you are suggesting.  Are you saying to use thread
flags but not the other things that make it thread-safe?  There isn't
much else other than the flags actually.  Now that more OS's are
thread-safe by default, we could consider using threading if it is
available by default.  We would need some way of reporting that to the
user, perhaps via an installed readme file or a pg_config output option.

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Andrew Dunstan
Greg Sabino Mullane wrote:
(warning: rehashing of issues ahead)
[snipped: suggestion to have much more documentation in sample postgresql.conf]
 

Wasn't RedHat working on a configuration utility for Postgres? That 
seems to me like a much more productive way to go.

cheers
andrew
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> On 6/9/2004 9:36 AM, Bruce Momjian wrote:
> 
> > Jan Wieck wrote:
> >> I am wondering why thread_test.c is checking for mktemp()? That function 
> >> is nowhere used in the libpq.
> > 
> > Uh, it isn't checking for mktemp, it is using it, and it is using it
> > because someone didn't like hard-coded paths I was using in the past. 
> > Is there something wrong with using mktemp?  I have heard of no
> > portability problems, except some need six X's, and we updated that.
> 
> There seems to be a portability issue here. Stefan Kaltenbrunner 
> reported a configure failure on sparc64-unknown-openbsd3.5 and the 
> config.log says:
> 
> /tmp//ccx22029.o: In function `main':
> /tmp//ccx22029.o(.text+0x8c): warning: mktemp() possibly used unsafely; 
> consider
>   using mkstemp()

Yes, I was wondering how mktemp was going to guard against concurrent
access.  I have applied the following patch to use mkstemp().

> Which is only a warning at this time; it fails later on getpwuid().

Oh, I will need to hear more about that failure.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
Index: src/tools/thread/thread_test.c
===
RCS file: /cvsroot/pgsql-server/src/tools/thread/thread_test.c,v
retrieving revision 1.30
diff -c -c -r1.30 thread_test.c
*** src/tools/thread/thread_test.c  28 May 2004 18:37:10 -  1.30
--- src/tools/thread/thread_test.c  9 Jun 2004 15:03:29 -
***
*** 104,110 
  {
pthread_t   thread1,
thread2;
! 
if (argc > 1)
{
fprintf(stderr, "Usage: %s\n", argv[0]);
--- 104,111 
  {
pthread_t   thread1,
thread2;
!   int fd;
!   
if (argc > 1)
{
fprintf(stderr, "Usage: %s\n", argv[0]);
***
*** 120,130 
/* Make temp filenames, might not have strdup() */
temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
strcpy(temp_filename_1, TEMP_FILENAME_1);
!   mktemp(temp_filename_1);
  
temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
strcpy(temp_filename_2, TEMP_FILENAME_2);
!   mktemp(temp_filename_2);

  #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)
--- 121,133 
/* Make temp filenames, might not have strdup() */
temp_filename_1 = malloc(strlen(TEMP_FILENAME_1) + 1);
strcpy(temp_filename_1, TEMP_FILENAME_1);
!   fd = mkstemp(temp_filename_1);
!   close(fd);
  
temp_filename_2 = malloc(strlen(TEMP_FILENAME_2) + 1);
strcpy(temp_filename_2, TEMP_FILENAME_2);
!   fd = mkstemp(temp_filename_2);
!   close(fd);

  #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R)
if (gethostname(myhostname, MAXHOSTNAMELEN) != 0)

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
On 6/9/2004 9:36 AM, Bruce Momjian wrote:

Also, I would suggest that we allow building libpq with 
thread-specific compiler options, even if it is not entirely thread safe. It 
would require that a really multithreaded application has to have 
mutexes around certain operations, but being entirely unable to 
configure in a way that adds thread compile options to the CFLAGS makes 
libpq completely useless for multithreaded programs on some platforms 
(for example Solaris).
I am confused what you are suggesting.  Are you saying to use thread
flags but not the other things that make it thread-safe?  There isn't
much else other than the flags actually.  Now that more OS's are
thread-safe by default, we could consider using threading if it is
available by default.  We would need some way of reporting that to the
user, perhaps via an installed readme file or a pg_config output option.
The problem is that if your thread-safety tests fail, there is no way to 
build libpq with -pthread and -DREENTRANT or whatever is required on 
that platform. On Solaris this results in errno being defined as

extern int errno;
as opposed to
#define errno *(errno())
which makes libpq on Solaris completely useless for every program that 
uses threads for "something". There is still value in compiling it with 
thread support compiler flags, even if it will not result in a thread 
safe libpq.
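
Roughly, the Solaris <errno.h> behavior being described (a simplified
sketch; the real header has more conditions):

    #if defined(_REENTRANT) || defined(_TS_ERRNO)
    extern int *___errno(void);
    #define errno (*(___errno()))   /* per-thread errno: thread-safe */
    #else
    extern int errno;                /* one shared global: races between threads */
    #endif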

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] simple_heap_update: tuple concurrently updated -- during INSERT

2004-06-09 Thread Tom Lane
"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=" <[EMAIL PROTECTED]> writes:
>   ERROR:  simple_heap_update: tuple concurrently updatedog.
>   LOG:  statement: INSERT INTO pakolas_cikktetel
> (cikk, minoseg, helyrol, mennyi, pakolas, sorszam, helyre) VALUES
> (102165, 1, 1488, '25', 68615, 1, 1338)

Hmm.  AFAIK, an INSERT statement per se will never call
simple_heap_update at all --- that routine is only used for system
catalog updates.  I'm wondering what is going on behind the scenes
here.  Have you got any triggers or rules that this INSERT would
fire, and if so what do they do?  Maybe you are using NOTIFY?

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Honza Pazdziora
On Wed, Jun 09, 2004 at 07:53:19PM +0530, Shridhar Daithankar wrote:
> 
> Well, the statement 'postgresql.conf outside data directory' isn't going to 
> win, I think.

One day there won't be any data directory because the data will be
on raw partitions. Then you will _have_ to have the configuration
somewhere else. Ideally, only the absolute minimum of parameters
needed to start the server ought to be in the external configuration.
Everything else may happily reside within the database storage.

> Let me put it in a different way. What you are asking for is a service 
> configuration file. It is *not* the same as the current postgresql configuration 
> file. It will/should be unique to a particular installation of postgresql, 
> i.e. something like /etc/postgresql/7.4.2/service.conf

Why? The administrator may want to run a second cluster on the same
machine, share a couple of options using the "include" directive while
preserving separate configuration, including the location of data
store, for things that should be different for each of these
clusters.

> I think it is a rather good idea to add service configuration to default 

What is that "service" you mention?

> Also, pulling postgresql.conf out of the cluster has a drawback: all the 
> clusters would have to share the same tuning parameters, which is not exactly 
> ideal.

Why would they _have_ to? Pulling postgresql.conf out of the cluster only
means that the file resides somewhere else and in it the location of
the data directory is specified. It does not mandate there will only
be one cluster and it does not mean that each cluster cannot have
completely different configuration file.

-- 

 Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
 .project: Perl, mod_perl, DBI, Oracle, large Web systems, XML/XSL, ...
Only self-confident people can be simple.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] sequences and "addval('myseq', value)"

2004-06-09 Thread Greg Stark

[EMAIL PROTECTED] writes:

> This is *NOT* a perfect or elegant solution. There is, however, an
> important problem. How do you maintain a value that is visible to the
> database, but does not incur the cost of a huge number of updates or a
> full table scan? I'm talking about systems that need to take thousands of
> inserts per minute?

Hm, the real question here is how does postgres handle a particular
circumstance, namely a table with a small number of very busy rows taking
hundreds or thousands of updates per second, with few rollbacks.

In the actual case the busy records are always read in short transactions
(single-record reads), so there's a small number of active records,
probably usually just one, at any time.

Currently postgres handles this by creating a new version for every update and
then delaying the cleanup of the old version until a vacuum can be done.

Oracle on the other hand creates new versions and stores the old version in a
rollback segment. So only the most recent version is in the heap. This incurs
cleanup overhead in the critical path of the transaction.

I don't think it makes sense to move to the Oracle-style method of course. But
perhaps there's some way to move the cleanup closer to the end of the
transaction.

I'm thinking something along the lines of the bgwriter vacuuming every block
as it's written out. That forces every update to generate a vacuum of that
same block at some point in the near future.

I think something like this has been proposed before, and the objection was
that it would force index pages to be read in the midst of bgwriter trying to
get pages out.

But if we're talking about a small, very busy table then the index will
presumably be entirely in memory anyways. Perhaps every page in memory could
hold an update count and trigger an immediate vacuum if 2 updates have been
performed on the page while still in the shared pool, or some heuristic like
that.
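
In pseudo-C, the heuristic would amount to something like this (everything
here is invented, including the single-page vacuum primitive):

    extern void vacuum_single_page(int blkno);      /* hypothetical */

    typedef struct
    {
        int         blkno;
        int         update_count;   /* updates seen while cached */
    } PageState;

    static void
    note_update(PageState *p)
    {
        if (++p->update_count >= 2)
        {
            /* reclaim dead tuples on just this block */
            vacuum_single_page(p->blkno);
            p->update_count = 0;
        }
    }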

But I think it would be worth experimenting with things like this. If
something like it could relieve a lot of the vacuum related angst then it
might be worth some other angst about extra i/o in bgwriter.

-- 
greg


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Rules

2004-06-09 Thread Tom Lane
Darko Prenosil <[EMAIL PROTECTED]> writes:
> CREATE RULE child_ins AS ON INSERT TO test.child
> DO INSTEAD SELECT test.testfi(NEW);

> INSERT INTO test.child(id,id_data,opis,podaci) VALUES (1,1,'Opis','podaci');

>   (-403)ERROR:  cannot handle whole-row reference

It works in CVS tip ;-).  No chance in 7.4 or earlier, however.
You'll have to pass the components of the row as separate arguments
to the function.
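
I.e., on 7.4, something along these lines (assuming test.testfi is
redeclared to take the four columns from the INSERT above):

    CREATE RULE child_ins AS ON INSERT TO test.child
        DO INSTEAD
        SELECT test.testfi(NEW.id, NEW.id_data, NEW.opis, NEW.podaci);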

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> One idea that has been floated around is to pull the docs automatically
> from SGML and put them in postgresql.conf.

In theory, postgresql.conf.sample could be a generated file: pull the
docs from SGML and the default values from the GUC tables.  However I'm
of the same opinion Bruce mentioned, that duplicating the docs in the
file isn't an improvement.  Perhaps it would help to put an explicit
pointer to the docs at the top of the file?

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Shridhar Daithankar
[EMAIL PROTECTED] wrote:
I have a LOT of opinions about postgresql.conf, and frankly, I think more
comments are not where the problems lie.
If you *really* want to make configuring postgresql easier,
postgresql.conf HAS to live outside the data directory and specify where
everything is. postgresql.conf should do exactly as one would assume it
does, configure postgresql.
Right now it doesn't. Right now it just sets parameters, and the "-D" option or
PGDATA environment variable *really* configures postgresql. If you do not
know how a machine is set up, you have to look for the install. Hopefully,
the previous administrator did not have any test directories which would
confuse the search. Sorry, I'm ranting.
In an ideal world, I envision Postgresql having a default location for
postgresql.conf, in this file will be the declarations for where the data
directory is, possible included files, etc. i.e. the stuff I've been
pushing literally for years. I am not saying that the current behavior should
change in any way; what I am saying is that a more world-compatible
methodology should be possible.
Once the postgresql.conf file is out of the data directory, you have a new
paradigm from which to work. One could write a setup application, in java
or something, which creates a new postgresql.conf file, right down to
where you want the installed directory to be, and THAT is used by initdb.
The setup application can also provide context sensitive help for each of
the setting. The user may not even *know* that there is such a file as
postgresql.conf.
Well, the statement 'postgresql.conf outside data directory' isn't going to win, 
I think.

postgresql.conf is a cluster configuration file. I remember previous discussion 
on this, and I agree that a cluster configuration file should remain in the 
cluster itself.

Let me put it in a different way. What you are asking for is a service configuration 
file. It is *not* the same as the current postgresql configuration file. It will/should 
be unique to a particular installation of postgresql, i.e. something like 
/etc/postgresql/7.4.2/service.conf

I think it makes a lot of sense then. It would allow maintaining different 
clusters, as in another thread where the OP wanted different locales/collations. 
And it would still allow multiple versions of postgresql to be installed.

I remember chasing Mandrake a couple of years ago not to make the postgresql 
database live in /var/lib/data... but to make it configurable. It didn't go 
anywhere, though.

I think it is a rather good idea to add service configuration to the default 
postgresql install. Maybe Linux distro vendors can customize it through their init 
scripts.

Also, pulling postgresql.conf out of the cluster has a drawback: all the clusters 
would have to share the same tuning parameters, which is not exactly ideal. If we 
define a services file with multiple clusters, we would still provide ultimate 
control to the DBA/system admin.

Just a thought..
 Shridhar


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Honza Pazdziora
On Wed, Jun 09, 2004 at 12:33:03PM +0200, Grega Bremec wrote:
> 
> Collate order for those databases, however, needs to be different. Obviously,

[...]

> Is it possible to do either of these things that could solve this problem
> adequately:
> 
> - somehow manage to make one postmaster run on top of two separate
>   database clusters that would each have a different collate ordering
>   scheme
> 
> - use some other method of initializing one database from a different
>   template and taking with it LC_COLLATE setting (I suppose not, as
>   the "${PGDATA}/global/" directory is global to the cluster)
> 
> - use a patch that would add such functionality or upgrade to a version
>   (even if release-candidate, beta is not really an option, i gather)
>   of PostgreSQL that supported it
> 
> - in absence of any other viable solution, change the global setting of
>   the database cluster without having to dump/reinitdb/restore it

If you do not need the collating sequence to affect index operations,
you can use the nls_string function to sort using

order by nls_string(column, 'sl_SI.utf-8')

where any locale can be specified at runtime. The nls_string
result can also be used to compare strings in a collating manner; however,
indexes will not be used in that case, which may or may not be a
problem for your usage.
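
For example (assuming a people table with a name column):

    SELECT name FROM people
    ORDER BY nls_string(name, 'sl_SI.utf-8');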

The source of nls_string with installation instructions can be found
at

http://www.fi.muni.cz/~adelton/l10n/

-- 

 Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
 .project: Perl, mod_perl, DBI, Oracle, large Web systems, XML/XSL, ...
Only self-confident people can be simple.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Thomas Hallgren
I agree. I wasn't aware of that restriction.

regards,

Thomas Hallgren

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Thomas Hallgren" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, June 09, 2004 15:56
Subject: Re: [HACKERS] Question regarding dynamic_library_path 


> "Thomas Hallgren" <[EMAIL PROTECTED]> writes:
> > And only the super user can use directory components in a module name?
> 
> Only superusers can create C functions in the first place, so quibbling
> about what paths they can use doesn't seem that useful ...
> 
> regards, tom lane
> 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[HACKERS] simple_heap_update: tuple concurrently updated -- during INSERT

2004-06-09 Thread SZŰCS Gábor
Dear Gurus,

I tried to sift through the archives but was lost in the technical
details. Please feel free to tell me a search keyword suitable for my case,
if there is one.

QUESTION1: is this error _always_ harmless (other than transaction
rollback)?
QUESTION2: is this reported exactly like other errors? i.e. in psql, through
psqlodbc, etc.

As you can see below, this came during an INSERT. I don't really understand
the warning.

QUESTION3: May I assume that the transaction aborted even before it began?
QUESTION4: If that's so, does it rule out triggers and such?
QUESTION5: May it be a problem in 7.3.x that's fixed in 7.4.y?

System: PostgreSQL 7.3.4, Dual Xeon HT

Log:
  ERROR:  simple_heap_update: tuple concurrently updated
  LOG:  statement: INSERT INTO pakolas_cikktetel
(cikk, minoseg, helyrol, mennyi, pakolas, sorszam, helyre) VALUES
(102165, 1, 1488, '25', 68615, 1, 1338)
  DEBUG:  AbortCurrentTransaction
  WARNING:  AbortTransaction and not in in-progress state

I read in an earlier post that turning on statement logging and duration may
help. If we can reproduce this error, I will. So far this is the only
occurrence.
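
For reference, this is what I plan to switch on if it happens again (7.3-era
parameter names, assuming I read the documentation right):

# postgresql.conf
log_statement = true
log_duration = true
log_timestamp = true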

G.
%--- cut here ---%
\end


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Grega Bremec
Hello, List,

I recently stumbled across a problem that I can't really get across.

We have a database cluster (PG 7.4.2) that was initialized as follows:

$ pg_controldata /data/dir
pg_control version number:            72
Catalog version number:               200310211
Database cluster state:               in production
pg_control last modified:             sre 09 jun 2004 03:00:26 CEST
Current log file ID:                  20
Next log file segment:                63
...                                   ...
Database block size:                  8192
Blocks per segment of large relation: 131072
Maximum length of identifiers:        64
Maximum number of function arguments: 32
Date/time type storage:               64-bit integers
Maximum length of locale name:        128
LC_COLLATE:                           C
LC_CTYPE:                             C

Inside that cluster, there are several databases using different encodings:

template1=# \l
        List of databases
    Name    |  Owner   | Encoding
------------+----------+----------
 db1        | ownera   | UNICODE
 db2        | ownera   | UNICODE
 db3        | ownerb   | LATIN2
 db4        | ownerc   | LATIN2
 db5        | ownera   | LATIN2
 db6        | ownera   | LATIN2
 template0  | postgres | UNICODE
 template1  | postgres | UNICODE

Collate order for those databases, however, needs to be different. Obviously,
db3, db4, db5 and db6 will want to use some collate ordering scheme based on
ISO-8859-2, whereas the other two could possibly have table- or even column-
based collate requirements, as they contain unicode data in UTF-8 encoding,
which doesn't give any guarantees wrt the contents of these databases.

Producing a list of requirements and/or imposing certain conventions on the
format of data stored in those tables is outside the scope of my authority;
the only reasonable assumption I can make is that these databases could be
limited to one collating order per database (or several databases, as it is)
without much hassle.

Also, running several postmasters on this same machine is not an option, as
it only has 1.5GB RAM, of which only 1GB is available for cache (~260MB is
swapped after roughly a month's uptime, but that doesn't change much after
settling down in a week or two).

My question to the list would be the following:

Is it possible to do either of these things that could solve this problem
adequately:

- somehow manage to make one postmaster run on top of two separate
  database clusters that would each have a different collate ordering
  scheme

- use some other method of initializing one database from a different
  template and taking with it LC_COLLATE setting (I suppose not, as
  the "${PGDATA}/global/" directory is global to the cluster)

- use a patch that would add such functionality, or upgrade to a version
  (even if release-candidate; beta is not really an option, I gather)
  of PostgreSQL that supported it

- in absence of any other viable solution, change the global setting of
  the database cluster without having to dump/reinitdb/restore it

I thank you in advance for your valuable input,
-- 
Grega Bremec
Senior Administrator
Noviforum Ltd., Software & Media
http://www.noviforum.si/


pgp2j0EXb4ihK.pgp
Description: PGP signature


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Tom Lane
"Thomas Hallgren" <[EMAIL PROTECTED]> writes:
> And only the super user can use directory components in a module name?

Only superusers can create C functions in the first place, so quibbling
about what paths they can use doesn't seem that useful ...
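
(For reference, creating a C function always names an object file path
explicitly; e.g., with an illustrative path and symbol:

CREATE FUNCTION add_one(integer) RETURNS integer
    AS '/usr/local/pgsql/lib/funcs.so', 'add_one'
    LANGUAGE C STRICT;

so path handling only ever comes into play for superusers.)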

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] [ADMIN] Refined LC_COLLATE or multiple database clusters?

2004-06-09 Thread Tom Lane
Grega Bremec <[EMAIL PROTECTED]> writes:
> Collate order for those databases, however, needs to be different.

If you need multiple LC_COLLATE settings then you have to run multiple
postmasters.  There is no other solution today, nor likely to be one in
the near future.

> Also, running several postmasters on this same machine is not an option,

Sure it is.  Just don't go overboard with the shared_buffers settings.
Let the kernel do the bulk of the buffering and you'll be fine.
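
A minimal sketch of such a setup (paths, ports, and locale names are
illustrative):

initdb -D /data/cluster_c  --locale=C
initdb -D /data/cluster_sl --locale=sl_SI
pg_ctl -D /data/cluster_c  -o "-p 5432" -l /var/log/pg_c.log  start
pg_ctl -D /data/cluster_sl -o "-p 5433" -l /var/log/pg_sl.log start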

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread pgsql
I have a LOT of opinions about postgresql.conf, and frankly, I think more
comments are not where the problems lie.

If you *really* want to make configuring postgresql easier,
postgresql.conf HAS to live outside the data directory and specify where
everything is. postgresql.conf should do exactly what one would assume it
does: configure postgresql.

Right now it doesn't. Right now it just sets parameters, and the "-D" option or
PGDATA environment variable *really* configures postgresql. If you do not
know how a machine is set up, you have to look for the install. Hopefully,
the previous administrator did not have any test directories which would
confuse the search. Sorry, I'm ranting.

In an ideal world, I envision PostgreSQL having a default location for
postgresql.conf; in this file will be the declarations for where the data
directory is, possible included files, etc., i.e. the stuff I've been
pushing literally for years. I am not saying that the current behavior should
change in any way; what I am saying is that a more universally compatible
methodology should be possible, along the lines sketched below.
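
To illustrate, the well-known entry point might look like this (hypothetical
syntax; none of these declarations exist today):

# /etc/postgresql.conf
data_directory = '/bigdisk/pgdata/main'
include '/etc/postgresql/tuning.conf'
# ...the usual parameters follow...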

Once the postgresql.conf file is out of the data directory, you have a new
paradigm from which to work. One could write a setup application, in Java
or something, which creates a new postgresql.conf file, right down to
where you want the install directory to be, and THAT is used by initdb.
The setup application can also provide context-sensitive help for each of
the settings. The user may not even *know* that there is such a file as
postgresql.conf.

The application should behave like a control panel of sorts which would
allow you to modify an existing configuration file and optionally restart
the correct postgresql process.

(Obviously, this program can be made to look at other files in other
locations.)

The work flow would be something like this:

Run the setup application.
Choose volume and directory for the database.
Set various options.
Press OK.
Standard "are you sure" dialog.
Run initdb with the chosen settings.

On Linux, FreeBSD, or Windows, offer the option to start the server at system
start.

This is all basically doable right now. Granted, we have to hammer out a
few details in my configuration patch, but there are no showstoppers that
I can see. I even have a Windows application which will start and monitor
a Cygwin PostgreSQL. It should be easily modified for a MinGW version.

That, IMHO, would *really* make PostgreSQL easy, almost brainless to
install and use.

>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
>
> (warning: rehashing of issues ahead)
>
> Sometimes when I talk to technical people about using PostgreSQL,
> I get a strong reaction along the lines of "it's ugly, complex, and
> hard to set up." While we have gotten better than we used to be,
> some of this is still true. I usually press for further details.
> One of the top items mentioned is the configuration files,
> especially postgresql.conf. Specifically, it is non-standard and
> cryptic.
>
> One thing that would go a long way is to remove the confusing
> "default is commented out" behavior. This is ugly and unnecessary:
> simply explicitly list every value.
>
> The second thing that would help is better documentation. It doesn't
> have to be quite as verbose as the httpd.conf file, but a nice
> multi-line explanation of every item in the file, perhaps with a
> URL for further information, would be a very nice addition. Moving
> the comments above each item, rather than trying to squeeze some of
> them next to the parameter, would also make the file more readable
> and more consistent.
>
> For example, this original sample:
>
> #shared_buffers = 1000  # min 16, at least max_connections*2, 8KB each
> #work_mem = 1024        # min 64, size in KB
>
> becomes:
>
> ## shared_buffers (kilobytes)
> ## Sets the number of shared memory buffers used by the database server.
> ## Increasing the number of buffers makes it more likely backends will find
> ## the information they need in the cache, thus avoiding an expensive
> ## operating system request. The default is typically 1000, but may be less
> ## if your kernel settings will not support it. Each buffer is 8192 bytes.
> ## The minimum value is 16, or twice the value of the 'max_connections'
> ## parameter. Settings significantly higher than the minimum are usually
> ## needed for good performance.
> ## Values of a few thousand are recommended for production installations.
> ## URL: http://www.postgresql.org/docs/current/static/runtime-config.html
>
> shared_buffers = 1000
>
>
> ## work_mem (kilobytes)
> ## (Previously known as sort_mem)
> ## Sets the amount of memory to be used by internal sort operations and hash
> ## tables before switching to temporary disk files. The default value is
> ## 1024 kilobytes (1 MB).
> ## Note that for a complex query, several sort or hash operations might be
> ## running in parallel; each one will be allowed to use up to the value set
> ## here before it starts to put data into temporary files. Also, several
> ## running sessions could be doing sort operations simultaneously. So the
> ## total memory used could be many times the value set here.

Re: [HACKERS] thread safety tests

2004-06-09 Thread Bruce Momjian
Jan Wieck wrote:
> I am wondering why thread_test.c is checking for mktemp()? That function
> is nowhere used in libpq.

Uh, it isn't checking for mktemp, it is using it, and it is using it
because someone didn't like the hard-coded paths I was using in the past.
Is there something wrong with using mktemp?  I have heard of no
portability problems, except that some platforms need six X's, and we updated that.
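
For what it's worth, if mktemp() ever did become a problem, the usual
replacement is mkstemp(); a minimal sketch:

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/thread_test_XXXXXX";
    int fd = mkstemp(path);     /* creates and opens the file atomically */

    if (fd < 0)
        return 1;
    /* ... run the test against the open file ... */
    close(fd);
    unlink(path);
    return 0;
}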

> Also, I would suggest that we allow building libpq with thread-specific
> compiler options, even if it is not entirely thread-safe. It would
> require that a really multithreaded application have mutexes around
> certain operations, but being entirely unable to configure in a way that
> adds thread compile options to the CFLAGS makes libpq completely useless
> for multithreaded programs on some platforms (for example, Solaris).

I am confused about what you are suggesting.  Are you saying to use thread
flags but not the other things that make it thread-safe?  There isn't
much else other than the flags, actually.  Now that more OS's are
thread-safe by default, we could consider using threading if it is
available by default.  We would need some way of reporting that to the
user, perhaps via an installed readme file or a pg_config output option.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Improving postgresql.conf

2004-06-09 Thread Bruce Momjian

We discussed this and thought that it would end up duplicating stuff
already in the docs, and removing the comments means that you have no
way to know which settings are at non-default values.  Both seem to be
a loss.

Are people saying the Apache config files are easier to use?  I actually
find them quite hard to understand, especially httpd.conf.

One idea that has been floated around is to pull the docs automatically
from SGML and put them in postgresql.conf.  We do that for psql's use of
\help now, so it seems this is possible.

---

Greg Sabino Mullane wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>  
>  
> (warning: rehashing of issues ahead)
>  
> Sometimes when I talk to technical people about using PostgreSQL,
> I get a strong reaction along the lines of "it's ugly, complex, and
> hard to set up." While we have gotten better than we used to be,
> some of this is still true. I usually press for further details.
> One of the top items mentioned is the configuration files,
> especially postgresql.conf. Specifically, it is non-standard and
> cryptic.
>   
> One thing that would go a long way is to remove the confusing
> "default is commented out" behavior. This is ugly and unnecessary:
> simply explicitly list every value.
>  
> The second thing that would help is better documentation. It doesn't
> have to be quite as verbose as the httpd.conf file, but a nice
> multi-line explanation of every item in the file, perhaps with a
> URL for further information, would be a very nice addition. Moving
> the comments above each item, rather than trying to squeeze some of
> them next to the parameter, would also make the file more readable
> and more consistent.
>   
> For example, this original sample:
>   
> #shared_buffers = 1000  # min 16, at least max_connections*2, 8KB each
> #work_mem = 1024        # min 64, size in KB
>   
> becomes:
>   
> ## shared_buffers (kilobytes)
> ## Sets the number of shared memory buffers used by the database server.
> ## Increasing the number of buffers makes it more likely backends will find
> ## the information they need in the cache, thus avoiding an expensive operating
> ## system request. The default is typically 1000, but may be less if your
> ## kernel settings will not support it. Each buffer is 8192 bytes. The minimum
> ## value is 16, or twice the value of the 'max_connections' parameter. Settings
> ## significantly higher than the minimum are usually needed for good performance.
> ## Values of a few thousand are recommended for production installations.
> ## URL: http://www.postgresql.org/docs/current/static/runtime-config.html
>   
> shared_buffers = 1000
>   
>   
> ## work_mem (kilobytes)
> ## (Previously known as sort_mem)
> ## Sets the amount of memory to be used by internal sort operations and hash
> ## tables before switching to temporary disk files. The default value is
> ## 1024 kilobytes (1 MB).
> ## Note that for a complex query, several sort or hash operations might be
> ## running in parallel; each one will be allowed to use up to the value set
> ## here before it starts to put data into temporary files. Also, several
> ## running sessions could be doing sort operations simultaneously. So the
> ## total memory used could be many times the value set here.
>   
> work_mem = 1024
>  
>  
>  
> - --
> Greg Sabino Mullane [EMAIL PROTECTED]
> PGP Key: 0x14964AC8 200406090615
>  
> -BEGIN PGP SIGNATURE-
>  
> iD8DBQFAxuPnvJuQZxSWSsgRAgAtAKDReW6WOREYapbWi61yacBi05im6gCeLrvd
> Dj/mlTtUh97C1gHVkJTtLyY=
> =J0CZ
> -END PGP SIGNATURE-
> 
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[HACKERS] thread safety tests

2004-06-09 Thread Jan Wieck
I am wondering why thread_test.c is checking for mktemp()? That function
is nowhere used in libpq.

Also, I would suggest that we allow building libpq with thread-specific
compiler options, even if it is not entirely thread-safe. It would
require that a really multithreaded application have mutexes around
certain operations, but being entirely unable to configure in a way that
adds thread compile options to the CFLAGS makes libpq completely useless
for multithreaded programs on some platforms (for example, Solaris).

Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] bug in 7.4 SET WITHOUT OIDs

2004-06-09 Thread Bruce Momjian
Tom Lane wrote:
> Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> > Yep, Tom fixed it good.
> 
> Was this another of those darn regurgitated-from-February messages?
> I'm about ready to go out and acquire missile targeting coordinates
> for pcbuddy.com ...

No, it was me cleaning out my old email.  It wasn't clear to me that
this item had been fixed already.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Thomas Hallgren
And only the super user can use directory components in a module name?

regards,

Thomas Hallgren

"Andrew Dunstan" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
>
> Thomas Hallgren wrote:
>
> >Isn't the fact that "dynamic_library_path" can
> >be changed at any time by a user a serious security flaw?
> >
> >
> It's not "a user". Only the superuser can change it.
>
> cheers
>
> andrew
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Frequently updated tables

2004-06-09 Thread pgsql
>
> [EMAIL PROTECTED] wrote:
>
>>The best phrasing would be "the accumulating overhead of deletes and
>>updates."
>>
>>Yes.
>>
>>
>
> Are you using 7.3?
>
> I am asking because in 7.3 high update / delete tables could suffer
> (index and toast) bloat that was untamable via (lazy) VACUUM and FSM.
> I believe this is fixed in 7.4, so it should be possible to achieve on
> disk size control of tables / indexes by configuring FSM and (lazy)
> VACUUM. Did you find this not to be the case?
>
Interesting, the company is using 7.3.4. One single-row summary table got
up to 2 million dead rows. A select from that single row took a quarter of
a second. A regular vacuum did not fix it; only a vacuum full did.
However, when the test was re-run with constant vacuums, it did not get
out of hand.
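
For the record, what kept it under control was simply a lazy vacuum running
against the hot table every few minutes; a cron sketch, with illustrative
names:

# vacuum/analyze the one-row summary table every five minutes
*/5 * * * * /usr/local/pgsql/bin/vacuumdb -z -t summary -d proddb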

My concern is performance, and yes, for inserts PostgreSQL is great. For
data that is constantly being updated, PostgreSQL is a bit weak. Think
about a table with a few million rows that needs to be updated a few
thousand times a minute.

I love PG, I've been using it since version 6.x, and it has gotten
fantastic over the years; in many cases I would choose it over
Oracle. But for systems that need frequent updates, I have a lot of
concerns.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[HACKERS] Rules

2004-06-09 Thread Darko Prenosil
Why is this wrong?

DROP SCHEMA test CASCADE ;
CREATE SCHEMA test;

CREATE TABLE test.parent (
id serial PRIMARY KEY,
opis text 
);

CREATE TABLE test.child_data (
id serial PRIMARY KEY,
id_parent int ,
podaci text,
FOREIGN KEY (id_parent) REFERENCES test.parent(id)
);

CREATE VIEW test.child AS 
SELECT  p.id AS id,
p.opis AS opis,
c.id AS id_data,
c.id_parent AS id_parent,
c.podaci AS podaci 
FROM
test.parent p, test.child_data c 
WHERE c.id_parent = p.id;
 
CREATE FUNCTION test.testfi(test.child) RETURNS bool AS '
DECLARE
_NEW ALIAS FOR \$1;
BEGIN
RAISE NOTICE ''%'',_NEW.opis;
INSERT INTO test.parent (id,opis) VALUES (_NEW.id,_NEW.opis);
INSERT INTO test.child_data (id,id_parent,podaci) VALUES 
(_NEW.id_data,_NEW.id,_NEW.podaci);
RETURN TRUE;
END; ' 
LANGUAGE 'plpgsql';

CREATE RULE child_ins AS ON INSERT TO test.child
DO INSTEAD SELECT test.testfi(NEW);


INSERT INTO test.child(id,id_data,opis,podaci) VALUES (1,1,'Opis','podaci');


Gives:
(-403)ERROR:  cannot handle whole-row reference

Can I pass NEW & OLD from a rule into a function in any way?
Any suggestions?

Regards !

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Andrew Dunstan

Thomas Hallgren wrote:
> Isn't the fact that "dynamic_library_path" can
> be changed at any time by a user a serious security flaw?

It's not "a user". Only the superuser can change it.
cheers
andrew
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Thomas Hallgren
"Tommi Maekitalo" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> in linux you can change LD_LIBRARY_PATH in a running process, but it does not
> help. The library loader initializes itself at process startup, and changing
> LD_LIBRARY_PATH does not change the actual path the process uses for dlopen.
>
That's bad. My nice suggestion just shattered into pieces :-)

But then again, perhaps not. Isn't the fact that "dynamic_library_path" can
be changed at any time by a user a serious security flaw? It enables the
user to load a module from an arbitrary place and then execute what's in
that module with postmaster privileges. I.e. there's nothing preventing
such a user from accessing all data in the database using low-level C routines
and bypassing the security imposed by PostgreSQL.

IMHO, the DB admin should be able to limit what's loaded by defining loader
constraints. Regardless of the previous discussion, I would suggest the
following:

1. Prohibit use of paths containing a directory component in SQL.
2. Make dynamic_library_path a PGC_POSTMASTER variable.

This would close the security hole and give the DB Admin full control over
what a user can and cannot load.
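
Concretely, the admin would then pin the path down once in postgresql.conf
(value illustrative), and nobody could widen it afterwards:

# postgresql.conf -- read only at postmaster start under this proposal
dynamic_library_path = '$libdir:/usr/local/pgsql/lib/trusted'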

Now, back to my original suggestion.

Using the current design, the postmaster would still not be able to set the
LD_LIBRARY_PATH from the dynamic_library_path on Unix, since all it does when
a new connection is established is a fork. So on a Unix machine with Linux
behavior the postmaster would need to parse the config file on startup,
merge dynamic_library_path with LD_LIBRARY_PATH (or whatever is applicable),
set the environment, and then re-exec itself using some flag indicating
"stage2". Stage 2 would do exactly what's done today but ignore the
dynamic_library_path setting completely.
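
In pseudo-C, stage 1 would amount to something like this (very much a sketch;
the flag name and the merged value are made up, and it assumes argv[0] holds
a usable path for execv):

#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc > 1)       /* invoked with "--stage2": path is already in effect */
    {
        /* ... normal postmaster startup, ignoring dynamic_library_path ... */
        return 0;
    }
    setenv("LD_LIBRARY_PATH", "/merged/path", 1);  /* illustrative merge result */

    char *args[] = { argv[0], "--stage2", NULL };
    execv(argv[0], args);   /* re-exec: the fresh loader sees the new value */
    return 1;               /* reached only if exec failed */
}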

I realize that this is stretching it a little :-). The part concerning the
security leak is important, though.

Kind regards,

Thomas Hallgren


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] Improving postgresql.conf

2004-06-09 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
 
(warning: rehashing of issues ahead)
 
Sometimes when I talk to technical people about using PostgreSQL,
I get a strong reaction along the lines of "it's ugly, complex, and
hard to set up." While we have gotten better than we used to be,
some of this is still true. I usually press for further details.
One of the top items mentioned is the configuration files,
especially postgresql.conf. Specifically, it is non-standard and
cryptic.
  
One thing that would go a long way is to remove the confusing
"default is commented out" behavior. This is ugly and unnecessary:
simply explicitly list every value.
 
The second thing that would help is better documentation. It doesn't
have to be quite as verbose as the httpd.conf file, but a nice
multi-line explanation of every item in the file, perhaps with a
URL for further information, would be a very nice addition. Moving
the comments above each item, rather than trying to squeeze some of
them next to the parameter, would also make the file more readable
and more consistent.
  
For example, this original sample:
  
#shared_buffers = 1000  # min 16, at least max_connections*2, 8KB each
#work_mem = 1024        # min 64, size in KB
  
becomes:
  
## shared_buffers (kilobytes)
## Sets the number of shared memory buffers used by the database server.
## Increasing the number of buffers makes it more likely backends will find
## the information they need in the cache, thus avoiding an expensive operating
## system request. The default is typically 1000, but may be less if your
## kernel settings will not support it. Each buffer is 8192 bytes. The minimum
## value is 16, or twice the value of the 'max_connections' parameter. Settings
## significantly higher than the minimum are usually needed for good performance.
## Values of a few thousand are recommended for production installations.
## URL: http://www.postgresql.org/docs/current/static/runtime-config.html
  
shared_buffers = 1000
  
  
## work_mem (kilobytes)
## (Previously known as sort_mem)
## Sets the amount of memory to be used by internal sort operations and hash tables before
## switching to temporary disk files. The default value is 1024 kilobytes (1 MB).
## Note that for a complex query, several sort or hash operations might be running in parallel;
## each one will be allowed to use up to the value set here before it starts to put data
## into temporary files. Also, several running sessions could be doing sort operations
## simultaneously. So the total memory used could be many times the value set here.
  
work_mem = 1024
 
 
 
- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200406090615
 
-BEGIN PGP SIGNATURE-
 
iD8DBQFAxuPnvJuQZxSWSsgRAgAtAKDReW6WOREYapbWi61yacBi05im6gCeLrvd
Dj/mlTtUh97C1gHVkJTtLyY=
=J0CZ
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[HACKERS] Comments on patch for date_trunc( 'week', ... );

2004-06-09 Thread Robert Creager

Per a brief conversation with Tom, I've created a patch adding support for
'week' within the date_trunc function.

Within the patch I added a couple of test cases and associated target output,
and changed the documentation to add 'week' appropriately.
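
For example, with the patch applied I would expect something like this
(assuming the ISO convention of weeks starting on Monday):

SELECT date_trunc('week', timestamp '2004-06-09 17:00:00');
-- expected: 2004-06-07 00:00:00, the Monday beginning that week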

Comments?  Should I have bypassed this list and subscribed/posted to the patches
list?

Cheers,
Rob

-- 
 17:07:39 up 15 days, 46 min,  4 users,  load average: 2.18, 2.23, 2.27
Linux 2.4.21-0.13_test #60 SMP Sun Dec 7 17:00:02 MST 2003


date_trunc.patch
Description: Binary data


pgpUKUkBmR9YU.pgp
Description: PGP signature


Re: [HACKERS] pgAdmin

2004-06-09 Thread Robert Creager
When grilled further on (29 Feb 2004 08:46:36 -0800),
[EMAIL PROTECTED] (Hammer) confessed:

> Quick one:
> Anyone know how to use Putty to open a connection up under SSH which
> will allow pgAdmin III to connect to a postgresql database ie. Only
> access to server postgresql is on is via ssh.
> 

Trivial.  In the connection configuration within PuTTY, Connection/SSH/Tunnels,
fill in a source port, a destination (machine:5432), select the 'Local' radio
button and click Add.  Save the connection, then open it.  In this instance,
from pgAdmin (I'm using pgAdmin II V1.6), the host is 'localhost' and the port
is the source port you chose.
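
If you prefer the command line, the OpenSSH equivalent of that tunnel would be
(ports and host purely illustrative):

ssh -L 5433:localhost:5432 user@dbserver
# then point pgAdmin at host 'localhost', port 5433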

Cheers,
Rob

-- 
 17:03:13 up 15 days, 42 min,  4 users,  load average: 2.08, 2.19, 2.27
Linux 2.4.21-0.13_test #60 SMP Sun Dec 7 17:00:02 MST 2003


pgpCYUfzMxhcO.pgp
Description: PGP signature


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Tommi Maekitalo
Hi,

in linux you can change LD_LIBRARY_PATH in a running process, but it does not
help. The library loader initializes itself at process startup, and changing
LD_LIBRARY_PATH does not change the actual path the process uses for
dlopen.
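
A tiny test makes this visible (library name illustrative; build with
cc test.c -ldl):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* too late: the loader captured LD_LIBRARY_PATH at startup */
    setenv("LD_LIBRARY_PATH", "/opt/mylibs", 1);

    void *h = dlopen("libfoo.so", RTLD_NOW);  /* will not search /opt/mylibs */
    printf("%s\n", h ? "loaded" : dlerror());
    return 0;
}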

Tommi Mäkitalo

Am Dienstag, 8. Juni 2004 19:14 schrieb Thomas Hallgren:
> From: "Bruce Momjian" <[EMAIL PROTECTED]>
>
> > I think the idea is that you want to specify the path in the config
> > file, after the app has already started.  I don't think you can modify
> > the environment variable after the app has started, and even if you can,
> > it seems simpler to just do it in our code and specify the exact path
> > rather than having it poke around in whatever LD_LIBRARY_PATH is set to.
>
> Ok, that makes sense. But I think most systems have the ability to change
> the environment of a running process (setenv on Posix systems) and the
> current design, simple as it is, doesn't work in all cases.
>
> regards,
>
> Thomas Hallgren
>
>
>
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Tablespaces

2004-06-09 Thread Josh Berkus
Gavin,

#1:  I really think that we should have a way to set a "default tablespace" 
for any database in a cluster.    This property would be vitally important 
for anyone wishing to use tablespaces to impose quotas.   First, the 
superuser would:
ALTER DATABASE db1 ALTER DEFAULT_TABLESPACE partition2;
then any regular users creating tables in that database would, by default, 
have TABLESPACE partition2 automatically appended to them by the parser 
unless overridden in the creation statement by specifying another, specific, 
tablespace.

Alternately, the default tablespace could be set through a GUC.   In my mind, 
this would be inferior on 2 counts:
1) It would require adding Yet Another Miscellaneous GUC Variable.
2) It would preclude large, multiuser installations from seamlessly using 
tablespaces for quotas, because there would be no way to transparently set 
the GUC differently for each user or database.


#2: Permissions:
I see the permissions issue as quite transparent.   First, I agree that only 
the superuser should have the right to create, alter, or drop tablespaces. 
'nuff said.
Second, as far as I can see, there is only one relevant permission for regular 
users:  USE.   Either the user is permitted to create objects in that 
tablespace, or he/she is not.  Other permissions, such as read access, should 
NOT be set by tablespace, as such permissions are already governed by 
database, table, and schema; to add a SELECT restriction to tablespaces would 
frequently result in paralytic snarls of conflicting permissions on complex 
installations.
Thus, by my proposal, the only GRANT for tablespaces (executed by a superuser) 
would be:
GRANT USE ON tablespace1 TO user;
This permission would ONLY be accessed for CREATE/ALTER TABLE, and CREATE 
INDEX statements.
Easy, neh?

#3: ALTER TABLE  CHANGE TABLESPACE:
This is strictly in the class of "would be a very nice & useful feature if 
it's not too difficult".   

Given how painful it is to drop & replace a table with multiple dependencies 
(on some databases, only possible by dropping & re-loading the entire 
database), it would be nice to have an ALTER TABLE command that moved the 
table to another tablespace.    It doesn't *seem* to me that this would be a 
very challenging bit of programming, as the operation would be very similar 
to REINDEX in the manipulation of files.   (But what do I know, really?)
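
The statement I have in mind would be something like this (syntax purely
illustrative, not a proposal for the exact grammar):

ALTER TABLE big_table SET TABLESPACE partition2;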

Once tablespaces are a feature and some users start using them for quota 
management, there will quickly develop situations where the original 
tablespace for a db runs out of room and can't be resized.   Being able to 
move the table "in situ" then becomes vital, especially on very large 
databases ... and when someday combined with partitioned tables, will become 
essential.

Further, we will get an *immediate* flurry of requests from users who just 
upgraded to 7.5 and want to make use of the tablespaces feature on an 
existing production database.

ALTER INDEX ... CHANGE TABLESPACE is *not* needed, though, as there are no 
issues other than time which I know of with dropping & re-creating an index.

If ALTER TABLE CHANGE TABLESPACE has some major technical hurdles, then I 
think it's one of those things that could be put off until the next version 
of tablespaces, or even held until Partitioned Tables is developed for a 
combined solution.    But it would be nice to have.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] anoncvs problem

2004-06-09 Thread Andrew Dunstan
I believe any non empty password works.
cheers
andrew
Oliver Elphick wrote:
> Following instructions on
> http://developer.postgresql.org/docs/postgres/cvs.html does not
> currently work:
> 
> $ cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login
> Logging in to
> :pserver:[EMAIL PROTECTED]:2401/projects/cvsroot
> CVS password:
> cvs login: authorization failed: server anoncvs.postgresql.org rejected
> access to /projects/cvsroot for user anoncvs


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] Question regarding dynamic_library_path

2004-06-09 Thread Thomas Hallgren
Oh, sorry. This is HP-UX 11.x. But you can get the same using shl_load (HP-UX
10.x) and passing the flag DYNAMIC_PATH, provided the executable is linked with
+s. So it's still no showstopper.

If you do find that it is impossible on some older OS (HP-UX 11 arrived 4
years ago), then why not just disable this feature on that particular
version of the OS and make a note for the user that in order to get it, it's
time to upgrade?

IMO it's not good policy to withhold good new features just so that customers
sticking to old stuff don't feel bad about not getting them. You need to
assure backward compatibility to keep everyone happy.

Kind regards,

Thomas Hallgren

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Thomas Hallgren" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, June 09, 2004 7:24 AM
Subject: Re: [HACKERS] Question regarding dynamic_library_path


> "Thomas Hallgren" <[EMAIL PROTECTED]> writes:
> >> ...  Furthermore, AFAICT shl_load()
> >> requires an exact path spec for the initial shared library; it won't
> >> do a path search for that, only for dependencies.)
> >>
> > Here's an excerpt from the HPUX manual pages on dlopen(3C).
>
> HPUX which?  There is no dlopen() in HPUX 10.20, which is what
> I'm running here; nor do we use dlopen() in any HPUX version
> given the current state of port/dynloader/hpux.c.
>
> regards, tom lane
>



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]