[HACKERS] socket calls in signal handler (WAS: APC + socket restrictions under Win32?)

2004-03-08 Thread Claudio Natoli

Hi all,

Was just discussing the issues related to the above off list with Magnus:
http://archives.postgresql.org/pgsql-hackers-win32/2004-03/msg00041.php

Whilst we can think of a number of work-arounds (the simplest being a
suggestion by Magnus: set a flag, like APCcalled, to false before the
select() call, and repeat the select() if APCcalled == true on return from
select), we were wondering if having socket calls inside signal handlers was
a good idea in any case?

The specific (and possibly only? are there others?) issue is the call to
pgstat_beterm from reaper/CleanupProc, invoked by a SIGCHLD. Can this call
be deferred to the main loop (ie. ServerLoop) and is there any merit in
doing so? 

[Seems like pgstat_fetch_stat_numbackends could get seriously out of date,
in pathological scenarios... other gotchas?]

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Tablespaces

2004-03-08 Thread Andrew Sullivan
On Mon, Mar 08, 2004 at 02:07:35AM +0200, Marko Karppinen wrote:
 One thing to keep in mind is that system administrators don't see
 symlinks as being informational -- they see them as the actual UI
 for the redirection in question. So their expectation is that they'll
 be able to move the actual directory around at will (as long as they
 update the symlink to match).

This is a good point.  It's worth keeping in mind, too, that in large
shops, the DBAs and the sysadmins often are in separate departments
with separate management, precisely because the database system has
traditionally been somewhat divorced from the OS (as an aside, I
suspect that this sort of separation is part of the reason for the
popularity of raw filesystems among DBAs.  Even if they didn't
provide better speed, it's just preferable not to have to involve
another department).  System administrators in such places have been
known to decide to reorganise the disks, assuming that the database
just has its own home.  For such a sysadmin, a pile of symlinks would
be fair game for reorganisation.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Tablespaces

2004-03-08 Thread Andreas Pflug
Andrew Sullivan wrote:

reorganise the disks, assuming that the database
just has its own home.  For such a sysadmin, a pile of symlinks would
be fair game for reorganisation.
 

Please take into consideration that symlinks might be everyday work for 
*nix admins, but for Windows admins they're very uncommon. Additionally, Windows 
admins are accustomed to GUI tools, and many of them will stumble if 
forced to use a command line. Worse, junctions are not 
distinguishable in Explorer; only the shell's dir command knows about 
junctions. This obfuscation makes junctions quite a bad choice for admin 
purposes.

IMHO there are only two viable options:
- no tablespaces for win32, i.e. recommend *nix for bigger installations
- a different tablespace storage approach, e.g. simple desktop links 
(or the like) redirecting to a directory.

Regards,
Andreas


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] IN joining

2004-03-08 Thread Dennis Haney




Tom Lane wrote:

  Dennis Haney [EMAIL PROTECTED] writes:
  
  

  Joining {b,c} to {a} does not meet any of those four allowed cases.

  

Exactly my point... So why ever bother creating the {b,c} node which is 
legal by the above definition?

  
  
We don't, because there is no such join clause.

  

No, but we create the equality via the implied equality mechanism...

select * from a, b where a.id = b.id3 and a.id in (select c.id2 from c);

rtable is (after in-optimization):
resno refname relid inFromCl
- - - 
1 a  17143 inFromCl
2 b  17151 inFromCl
3 IN_subquery [subquery]
4 c  17147 inFromCl

in gdb:
break joinrels.c:563
commands
call bms_is_subset(ininfo->lefthand, rel1->relids)
call bms_equal(ininfo->righthand, rel2->relids)
call bms_is_subset(ininfo->lefthand, rel2->relids)
call bms_equal(ininfo->righthand, rel1->relids)
x/t rel1->relids.words
x/t rel2->relids.words
x/t joinrelids.words
p jointype
printf "%s\n",
pretty_format_node_dump(nodeToString(((RestrictInfo*)((RestrictInfo*)restrictlist)->clause)->clause))
end

then we get this join:

Breakpoint 4, make_join_rel (root=0x8307bc8, rel1=0x8316920,
rel2=0x8316b10, jointype=JOIN_UNIQUE_INNER)
 at joinrels.c:563
563 switch (jointype)
$92 = 0 '\0'
$93 = 1 '\001'
$94 = 0 '\0'
$95 = 0 '\0'
0x83169ac: 0100
0x8316b9c: 0001
0x832670c: 00010100
$96 = JOIN_UNIQUE_INNER
 {OPEXPR
 :opno 96
 :opfuncid 0
 :opresulttype 16
 :opretset false
 :args (
 {VAR
 :varno 4
 :varattno 1
 :vartype 23
 :vartypmod -1
 :varlevelsup 0
 :varnoold 4
 :varoattno 1
 }

 {VAR
 :varno 2
 :varattno 1
 :vartype 23
 :vartypmod -1
 :varlevelsup 0
 :varnoold 2
 :varoattno 1
 }
 )
 }


-- 
Dennis





[HACKERS] one byte data type

2004-03-08 Thread Shachar Shemesh
Hi gang,

Is there a datatype that means one byte? I'm importing a database from 
SQL Server, and some of the rows there are of this type. When I convert 
them to int2, I have discrepancies with the program that uses the 
database (OLE DB uses binary mode transfers, so the size of the variable 
is important beyond its legal range).

Shachar

--
Shachar Shemesh
Lingnu Open Systems Consulting
http://www.lingnu.com/
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] avg() for timestamp

2004-03-08 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
   Neil Conway [EMAIL PROTECTED] wrote:
 It seems to me the following should Just Work:
 nconway=# select avg(a) from t1;
 ERROR:  function avg(timestamp without time zone) does not exist

 While there is a way to calculate an average timestamp, I don't think
 there is an easy way to do this automatically with say a polymorphic
 aggregate. You need to know that there is a related type interval that
 can be used to keep track of differences in timestamps and that can be
 added back to a timestamp at the end.

Given that this would be done with C code, I doubt we'd go to the
trouble of implementing it that way.  We'd just cheat: add up the 
numeric values of the timestamps and divide at the end.  float8
makes a perfectly fine accumulator ;-)
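
For illustration, a minimal SQL sketch of that cheat, reusing the t1/a names
from the example above (going through epoch seconds is just one convenient
way to do it, not the only possible formulation):

  -- average the epoch seconds, then convert back to a timestamp
  SELECT timestamp 'epoch'
         + avg(extract(epoch FROM a)) * interval '1 second' AS avg_ts
  FROM t1;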

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[HACKERS] question about API to b-tree in PG

2004-03-08 Thread GeGeZ
Hello,
	this is my first post, so Hello again :)
I have a problem. I am trying to create another implementation of FTI in 
PG. What will be different? I am going to create my new index on text 
type field:
	CREATE TABLE test (id int, mytext text);
	CREATE INDEX myindex on test USING myindex (mytext) ;

The myindex index will be just ordinary postgresql's btree on tokenized 
text (so before inserting into btree, I will tokenize text, and then I 
will insert every found token). I mean, instead of keeping text in 
index, I will keep terms from tokenized column mytext. So:
INSERT INTO test VALUES (1, 'this is my first text');
In index I would like to keep following objects: 'this', 'is', 'my', 
'first', 'text'.

My question is how to access a btree index using the API (not by operators, 
etc.). Especially, I need the following functions:
create empty btree index (I do not know how to create index without 
table connected to it)
insert tuple
remove tuple
destroy index
find term in btree index.

Is there any manual to btree API?

I found only pure C source files:
src/backend/access/index and src/backend/access/nbtree.
Sorry for my poor English,
Yours,
Gegez
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] IN joining

2004-03-08 Thread Tom Lane
Dennis Haney [EMAIL PROTECTED] writes:
 Exactly my point... So why ever bother creating the {b,c} node which is 
 legal by the above definition?
 
 We don't, because there is no such join clause.
 
 No, but we create the equality via the implied equality mechanism...

 select * from a, b where a.id = b.id3 and a.id in (select c.id2 from c);

Oh, I had forgotten that your original example involved an implied
equality.  I don't see that anything is wrong though.  The join path
that will result from considering the implied equality will be like

((UNIQUE-ified subselect) INNER JOIN b) INNER JOIN a

which is perfectly legal and perhaps even a winner.  Once you stick a
UNIQUE on top of the IN's subselect, you can treat the IN as exactly
like a plain equality join.
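
Informally, once the subselect's output has been made unique, the query is
equivalent to a plain join, roughly as in this sketch (reusing the tables from
the example; column types assumed compatible):

  SELECT a.*, b.*
  FROM (SELECT DISTINCT id2 FROM c) AS cu   -- the UNIQUE-ified subselect
       JOIN a ON a.id = cu.id2
       JOIN b ON b.id3 = a.id;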

[ thinks a bit... ]  Actually I guess there is a problem here: we won't
actually generate that plan, because this test is too strict:

/*
 * If we already joined IN's RHS to any part of its LHS in
 * either input path, then this join is not constrained (the
 * necessary work was done at a lower level).
 */
if (bms_overlap(ininfo->lefthand, rel1->relids) &&
    bms_is_subset(ininfo->righthand, rel1->relids))
    continue;
if (bms_overlap(ininfo->lefthand, rel2->relids) &&
    bms_is_subset(ininfo->righthand, rel2->relids))
    continue;

I think it should be

/*
 * If we already joined IN's RHS to anything else in
 * either input path, then this join is not constrained (the
 * necessary work was done at a lower level).
 */
if (bms_is_subset(ininfo->righthand, rel1->relids) &&
    !bms_equal(ininfo->righthand, rel1->relids))
    continue;
if (bms_is_subset(ininfo->righthand, rel2->relids) &&
    !bms_equal(ininfo->righthand, rel2->relids))
    continue;

Comments?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] one byte data type

2004-03-08 Thread Tom Lane
Shachar Shemesh [EMAIL PROTECTED] writes:
 Is there a datatype that means one byte?

You might be able to use the "char" type (note the quotes).  I am not
sure how well it will cope with storing zeroes (nulls) though.
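
A minimal sketch of what that might look like (table, column, and values are
made up purely for illustration):

  CREATE TABLE flags (id int, f "char");   -- "char": the internal one-byte type
  INSERT INTO flags VALUES (1, 'A');
  SELECT id, f FROM flags;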

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] IN joining

2004-03-08 Thread Dennis Haney
Tom Lane wrote:

[SNIP: a repetition of my first post ;) ]

I think it should be

   /*
* If we already joined IN's RHS to anything else in
* either input path, then this join is not constrained (the
* necessary work was done at a lower level).
*/
   if (bms_is_subset(ininfo->righthand, rel1->relids) &&
   !bms_equal(ininfo->righthand, rel1->relids))
   continue;
   if (bms_is_subset(ininfo->righthand, rel2->relids) &&
   !bms_equal(ininfo->righthand, rel2->relids))
   continue;
Comments?
 

It's good.
It was pretty much what I was thinking was wrong to begin with.
Whether the generated plans are valid is a different issue ;)
--
Dennis
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] one byte data type

2004-03-08 Thread Shachar Shemesh
Tom Lane wrote:

Shachar Shemesh [EMAIL PROTECTED] writes:
 

Is there a datatype that means one byte?
   

You might be able to use the "char" type (note the quotes).  I am not
sure how well it will cope with storing zeroes (nulls) though.
			regards, tom lane
 

Hmm, this will also screw up textual queries. Not important for this 
particular case, but not a good general solution. What are the chances 
of adding such a type for 7.4.2?

 Shachar
P.S.
Notice how this message arrives directly as well, despite spam filters.
--
Shachar Shemesh
Lingnu Open Systems Consulting
http://www.lingnu.com/
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[HACKERS] question about selecting across multiple dbs

2004-03-08 Thread Joe Maldonado
Hello,
I see that there is an item Queries across databases or servers (two-phase 
commit) on the todo list's urgent header. I have tried asking this question on the other 
lists and have not yet gotten a suitable answer to this question...When is this functionality 
expected to be available in PostgreSQL?  I as well as many others want to be able to run queries 
such as
SELECT a.field_a, b.field_b FROM user:[EMAIL PROTECTED] AS a NATURAL JOIN local_table AS b;

Thanks in advance,

-Joe

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[HACKERS] Data from Binary Cursor

2004-03-08 Thread Flavio
Hi all,

I am using a Binary Cursor to retrieve data (polygons) from my DB. However, when I 
dump the data into a file I get a bunch of meaningless data; in other words, I 
see one big string of data that makes no sense to me. 
Question:
1) how can I convert that data back to readable integer/double numbers?
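
For reference, a binary cursor of the kind being described is set up roughly
like this (table and column names are invented for the sketch):

  BEGIN;
  DECLARE poly_cur BINARY CURSOR FOR SELECT boundary FROM parcels;
  FETCH 5 FROM poly_cur;   -- rows come back in the backend's internal binary format
  CLOSE poly_cur;
  COMMIT;

Dropping the BINARY keyword makes the same cursor return the ordinary text
representation instead.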

Thanks a lot... any information will be great

peace,
Flavio 

_
Linux - uvscan antivirus

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] question about selecting across multiple dbs

2004-03-08 Thread Rod Taylor
On Mon, 2004-03-08 at 11:57, Joe Maldonado wrote:
 Hello,
   I see that there is an item Queries across databases or servers (two-phase 
 commit) on the todo list's urgent header. I have tried asking this question on the 
 other lists and have not yet gotten a suitable answer to this question...When is 
 this functionality expected to be available in PostgreSQL?  I as well as many others 
 want to be able to run queries such as
 
 SELECT a.field_a, b.field_b FROM user:[EMAIL PROTECTED] AS a NATURAL JOIN 
 local_table AS b;

Cross database selects can be accomplished using contrib/dblink.

It will be some time before cross database writes or locks will be
possible.
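
For example, a rough sketch of such a cross-database read via dblink, assuming
the dblink functions have been installed in the local database (the connection
string, remote table, and join column are all made up for illustration):

  SELECT a.field_a, b.field_b
  FROM dblink('dbname=otherdb',
              'SELECT id, field_a FROM remote_table')
         AS a(id int, field_a text)
       JOIN local_table AS b USING (id);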



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Josh Berkus
Neil,

 In the simple test I performed, raising the default_statistics_target 
 from 10 to 25 resulted in a 40% increase in the time to ANALYZE a 
 large table. (I picked 25 more or less at random -- would 15 or 20 be 
 better?)

I find that very interesting, since I haven't found much higher increases to 
carry a proportionate penalty.   For example, on an 11-column table raising 3 
columns to statistics=250 merely doubled the ANALYZE time.  I have not done 
exact timing, but would be happy to 

It may also be the case that for those people for whom 10 
 is an insufficient stats target, 25 is also insufficient.

It is.   I've found that problem queries, especially those caused by real, 
uneven distribution of data, require raising statistics to 150-400 in order 
to fix.  This is much too high a level to assign as a default.
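
For context, raising the target on individual problem columns is done with
ALTER TABLE ... SET STATISTICS; a minimal sketch (table and column names are
illustrative only):

  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 250;
  ANALYZE orders;   -- rebuild the column's statistics at the new target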

 Any comments on whether increasing the default stats target is a good 
 idea for 7.5? (Details on the test I performed are included below)

No.   I don't think it's a good idea to raise the default for *all* columns; 
for one thing, I'd really hate to think what, say, a default stats of 100 
would do to a TEXT column with an average of 8K of data per row.

Further, in 7.5 we'll be introducing correlated stats for multi-column indexes 
(unless something's gone off with that?), which should help a lot of problem 
queries and shift our whole emphasis away from brute-forcing ANALYZE through 
increasing stats into the 100's.   

If you really want to tackle this issue, though, here's what I suggest:

1) add a GUC called default_statistics_indexed, which starts at say 100 or 50.
2) Whenever the user indexes a column, automatically increase the stats
to the level in default_statistics_indexed, if they are at the level in 
default_statistics_target.

This will then give indexed columns automatically a somewhat higher level of 
stats analysis than other columns.   This should help a lot of slow query 
problems, yet effectively leave the selection of important columns in the 
hands of the DBA.   Make sense?

Also, another great feature in this department would be to extend the 
multi-column correlation statistics to cover foreign keys, as a way of 
improving cross-table estimates.

Anyway, keep me in the loop on this; I have a lot of very complex databases I 
can test such issues on.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] question about selecting across multiple dbs

2004-03-08 Thread Joe Maldonado
On Mon, 08 Mar 2004 12:18:22 -0500, Rod Taylor [EMAIL PROTECTED] wrote:

On Mon, 2004-03-08 at 11:57, Joe Maldonado wrote:
Hello,
I see that there is an item Queries across databases or servers (two-phase 
commit) on the todo list's urgent header. I have tried asking this question on the other 
lists and have not yet gotten a suitable answer to this question...When is this functionality 
expected to be available in PostgreSQL?  I as well as many others want to be able to run queries 
such as
SELECT a.field_a, b.field_b FROM user:[EMAIL PROTECTED] AS a NATURAL JOIN local_table AS b;
Cross database selects can be accomplished using contrib/dblink.

It will be some time before cross database writes or locks will be
possible.
THANKS, this looks very promising, and it seems that there are functions available for insert/update/delete.

Again thanks

-Joe



--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] question about selecting across multiple dbs

2004-03-08 Thread Joe Conway
Rod Taylor wrote:
On Mon, 2004-03-08 at 11:57, Joe Maldonado wrote:
I see that there is an item Queries across databases or servers
(two-phase commit) on the todo list's urgent header. I have tried
asking this question on the other lists and have not yet gotten a
suitable answer to this question...When is this functionality
expected to be available in PostgreSQL?  I as well as many others
want to be able to run queries such as
Cross database selects can be accomplished using contrib/dblink.

It will be some time before cross database writes or locks will be 
possible.
Actually cross database writes can also be done with dblink, but without 
2PC, or something similar, it is hard to make it bulletproof.

More to the original point, there is no one (that I'm aware of) actively 
working on internal backend cross database functionality such as what 
the OP seems to be expecting. There has been discussion on this in the 
past (see the mail archives), and there is a SQL2003 spec covering it 
(see SQL/MED). Patches would be warmly welcomed ;-). If I don't get beat 
to it, I might find the time and interest someday, but there is no 
expected date of arrival.

Joe

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] socket calls in signal handler (WAS: APC + socket restrictions under Win32?)

2004-03-08 Thread Kurt Roeckx
On Mon, Mar 08, 2004 at 09:12:52PM +1100, Claudio Natoli wrote:
 
 Hi all,
 
 Was just discussing the issues related to the above off list with Magnus:
 http://archives.postgresql.org/pgsql-hackers-win32/2004-03/msg00041.php
 
 Whilst we can think of a number of work-arounds (the simplest being a
 suggestion by Magnus: set a flag, like APCcalled, to false before the
 select() call, and repeat the select() if APCcalled == true on return from
 select), we were wondering if having socket calls inside signal handlers was
 a good idea in any case?

Is this a win32 only thing, or is it more general?

OpenBSD for instance has some documentation on which functions
are safe to be called from a signal handler, and socket
operations aren't part of it.  See for instance their manpages
about signal and sigaction.


Kurt


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Tom Lane
Josh Berkus [EMAIL PROTECTED] writes:
 It is.   I've found that problem queries, especially those caused by real, 
 uneven distribution of data, require raising statistics to 150-400 in order 
 to fix.  This is much to high a level to assign as a default.

That's basically what's bothering me about the suggestion to increase to
25 --- I'm dubious that it will do any good.

 Further, in 7.5 we'll be introducing correlated stats for multi-column indexes 
 (unless something's gone off with that?)

News to me.  It's certainly not there now.

 This will then give indexed columns automatically a somewhat higher
 level of stats analysis than other columns.

That is potentially a good idea.  There's still the question of what is
a reasonable default, though.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Bruce Momjian
Tom Lane wrote:
 Josh Berkus [EMAIL PROTECTED] writes:
  It is.   I've found that problem queries, especially those caused by real, 
  uneven distribution of data, require raising statistics to 150-400 in order 
  to fix.  This is much to high a level to assign as a default.
 
 That's basically what's bothering me about the suggestion to increase to
 25 --- I'm dubious that it will do any good.
 
  Further, in 7.5 we'll be introducing correlated stats for multi-column indexes 
  (unless something's gone off with that?)
 
 News to me.  It's certainly not there now.
 
  This will then give indexed columns automatically a somewhat higher
  level of stats analysis than other columns.
 
 That is potentially a good idea.  There's still the question of what is
 a reasonable default, though.

Do all the columns have to have the same number of statistics buckets? 
Could the stats collector adjust the number of buckets based on the
data somehow?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Do all the columns have to have the same number of statistics buckets? 

They do not, but the effort spent by ANALYZE is proportional to the
largest stats target among all the columns of the table.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Josh Berkus
Tom,

  Further, in 7.5 we'll be introducing correlated stats for multi-column 
indexes 
  (unless something's gone off with that?)

This was discussed on Hackers in October; a complete implementation was shown, 
and I thought it was committed at that time.   If not, what happened to it?

Dammit, it's impossible to find anything in the archives if you don't have 
some good keywords or at least the author.  Is the author reading this?   Will 
you speak up?

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Clark C. Evans
It would be wonderful to be able to create comments
on users and groups.  In particular, I need a place
to store the user's name.  Yes, I could make a user
table, but that seems overkill as all of the other
aspects of a user are already in the metadata.

Best,

Clark
-- 
Clark C. Evans Prometheus Research, LLC
Chief Technology Officer   Turning Data Into Knowledge
[EMAIL PROTECTED] www.prometheusresearch.com
(main) 203.777.2550(cell) 203.444.0557

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Do all the columns have to have the same number of statistics buckets? 
 
 They do not, but the effort spent by ANALYZE is proportional to the
 largest stats target among all the columns of the table.

Could we use previous stats to determine how many buckets to use when
running ANALYZE?  Also, if columns have a different number of buckets,
does that mean that we don't have the same per-query overhead for a
larger stats target?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Slony-I makes progress

2004-03-08 Thread Andreas Pflug
Jan Wieck wrote:

Alex J. Avriette wrote:

On Fri, Mar 05, 2004 at 12:47:23AM +0100, Jochem van Dieten wrote:

I personally don't think that a GUI tool should be the province of 
the Slony project.  Seriously.   I think that Slony should focus on a 


I very much agree with this, but this is Jan's baby, so I didn't say
anything. I have personally never used a GUI with a postgres database
(well, okay, I used one for a bit to troubleshoot a problem my boss
was having with a pg node once), and I don't really plan to. I guess
I was unaware this is a common usage pattern.


I was explicitly asking for opinions and input. I don't want this to 
be my baby. In the end I am a developer, not a DBA. I know how to do 
it, but don't have the ultimate wisdom about how to manage it.

If somebody likes to contribute a gui tool, I'm sure we could help to 
implement this in pgAdmin3.

Regards,
Andreas


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Tom Lane
Clark C. Evans [EMAIL PROTECTED] writes:
 It would be wonderful to be able to create comments
 on users and groups.  In particular, I need a place
 to store the user's name.  Yes, I could make a user
 table, but that seems overkill as all of the other
 aspects of a user are already in the metadata.

This seems like a good idea, but I'd recommend leaving it as a TODO
until after we finish the planned revisions for SQL role support.
(Peter E. has made noises about working on that, but I dunno what
his timeframe for it is.)  In particular, it's not clear that there
will still be a hard and fast separation between users and groups
after that happens, so it seems premature to wire such an assumption
into the syntax.

Another small problem that would have to be faced is that users and
groups don't have OIDs.  We could physically get away with a type-cheat
of storing their integer IDs into pg_description instead, but I'm worried
that would create issues of its own.
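
For context, pg_description keys each comment by the described object's OID; a
quick way to see its shape (output omitted):

  SELECT objoid, classoid, objsubid, description
  FROM pg_description
  LIMIT 1;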

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Bruce Momjian
Tom Lane wrote:
 Clark C. Evans [EMAIL PROTECTED] writes:
  It would be wonderful to be able to create comments
  on users and groups.  In particular, I need a place
  to store the user's name.  Yes, I could make a user
  table, but that seems overkill as all of the other
  aspects of a user are already in the metadata.
 
 This seems like a good idea, but I'd recommend leaving it as a TODO
 until after we finish the planned revisions for SQL role support.
 (Peter E. has made noises about working on that, but I dunno what
 his timeframe for it is.)  In particular, it's not clear that there
 will still be a hard and fast separation between users and groups
 after that happens, so it seems premature to wire such an assumption
 into the syntax.
 
 Another small problem that would have to be faced is that users and
 groups don't have OIDs.  We could physically get away with a type-cheat
 of storing their integer IDs into pg_description instead, but I'm worried
 that would create issues of its own.

Another problem is that pg_description is per-database, while
pg_user/group are global for all databases.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Tom Lane
Josh Berkus [EMAIL PROTECTED] writes:
 Further, in 7.5 we'll be introducing correlated stats for multi-column 
 indexes (unless something's gone off with that?)

 This was discussed on Hackers in October, a complete implementation was shown, 
 I thought it was committed at that time.   If not, what happened to it?

Are you sure you're not thinking of stats for functional indexes?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] raising the default default_statistics_target

2004-03-08 Thread Josh Berkus
Tom,

 Are you sure you're not thinking of stats for functional indexes?

Positive. I even remember seeing that the patch was accepted.

The patch specifically had to do with a multi-column correlation algorithm for 
improving the selectivity of multi-column indexes.

Problem is, with 1400 posts per month August to October, I can't find it, and 
the keywords that I think are obvious don't turn anything up.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Andrew Dunstan
Bruce Momjian wrote:

Another problem is that pg_description is per-database, while
pg_user/group are global for all databases.
 

databases are also per cluster, but we have comments on those.

Could we keep the user/group comments in those tables instead of in 
pg_description?

cheers

andrew

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Dave Page
It's rumoured that Andrew Dunstan once said:
 Bruce Momjian wrote:

Another problem is that pg_description is per-database, while
pg_user/group are global for all databases.



 databases are also per cluster, but we have comments on those.

 Could we keep the user/group comments in those tables instead of in
 pg_description?

Ahh, just like they used to live in pg_language.lancompiler for languages :-)

Regards, Dave




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Mike Mascari
Andrew Dunstan wrote:
Bruce Momjian wrote:

Another problem is that pg_description is per-database, while
pg_user/group are global for all databases.
 

databases are also per cluster, but we have comments on those.

Could we keep the user/group comments in those tables instead of in 
pg_description?
The comments are stored only in the database's pg_description where 
the COMMENT ON took place. This caused dump/reload problems. I 
believe Rod Taylor added the new warning:

[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1541
(1 row)
[EMAIL PROTECTED] COMMENT ON DATABASE test IS 'Hello';
WARNING:  database comments may only be applied to the current database
COMMENT
[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1541
(1 row)
[EMAIL PROTECTED] COMMENT ON DATABASE estore IS 'A good comment';
COMMENT
[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1542
(1 row)
[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1541
(1 row)
Mike Mascari

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Bruce Momjian

This doesn't look good.  If we throw a WARNING, why do we not insert
anything into pg_description?  Seems we should throw an error, or do the
insert with a warning.

---

Mike Mascari wrote:
 Andrew Dunstan wrote:
  Bruce Momjian wrote:
  
  Another problem is that pg_description is per-database, while
  pg_user/group are global for all databases.
 
   
 
  databases are also per cluster, but we have comments on those.
  
  Could we keep the user/group comments in those tables instead of in 
  pg_description?
 
 The comments are stored only in the database's pg_description where 
 the COMMENT ON took place. This caused dump/reload problems. I 
 believe Rod Taylor added the new warning:
 
 [EMAIL PROTECTED] select count(*) from pg_description;
   count
 ---
1541
 (1 row)
 
 [EMAIL PROTECTED] COMMENT ON DATABASE test IS 'Hello';
 WARNING:  database comments may only be applied to the current database
 COMMENT
 
 [EMAIL PROTECTED] select count(*) from pg_description;
   count
 ---
1541
 (1 row)
 
 [EMAIL PROTECTED] COMMENT ON DATABASE estore IS 'A good comment';
 COMMENT
 
 [EMAIL PROTECTED] select count(*) from pg_description;
   count
 ---
1542
 (1 row)
 
 [EMAIL PROTECTED] select count(*) from pg_description;
   count
 ---
1541
 (1 row)
 
 
 Mike Mascari
 
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Mike Mascari
Bruce Momjian wrote:

This doesn't look good.  If we throw a WARNING, why do we not insert
anything into pg_description.  Seems we should throw an error, or do the
insert with a warning.
It essentially makes the behavior deprecated and allows dumps to be 
restored properly (without the extra-database comments.) Here's a 
thread on the topic:

http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=bf1obi%24u7k%241%40FreeBSD.csie.NCTU.edu.tw&rnum=7&prev=/groups%3Fq%3D%2522COMMENT%2BON%2BDATABASE%2522%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3Den

I don't know if Rod has plans to change attempts to COMMENT ON 
non-local databases to an ERROR in 7.5 or not. It was my fault from 
the beginning - but once I'd implemented COMMENT ON for tables and 
columns I just couldn't stop... :-)

Mike Mascari

Mike Mascari wrote:
..
The comments are stored only in the database's pg_description where 
the COMMENT ON took place. This caused dump/reload problems. I 
believe Rod Taylor added the new warning:

[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1541
(1 row)
[EMAIL PROTECTED] COMMENT ON DATABASE test IS 'Hello';
WARNING:  database comments may only be applied to the current database
COMMENT
[EMAIL PROTECTED] select count(*) from pg_description;
 count
---
  1541
(1 row)


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Tom Lane
Rod Taylor [EMAIL PROTECTED] writes:
 Comments longer than ~7k would need a toast table. At the moment, toast
 tables don't work on a global basis.

Sure they do ... in fact, all the shared catalogs have one.

I think the idea of putting comments directly into pg_shadow and friends
is too icky to consider, though.  If we really wanted to support this
stuff then we'd make *one* shared table that is just like
pg_description, but is used for shared objects.
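
Purely as an illustration of that idea (this is not an existing catalog, just
a sketch of the shape such a shared table might take):

  CREATE TABLE shared_description_sketch (
      objoid      oid  NOT NULL,   -- OID of the shared object (user, group, database, ...)
      classoid    oid  NOT NULL,   -- which shared catalog the object belongs to
      description text NOT NULL    -- the comment itself
  );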

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Andrew Dunstan
Rod Taylor wrote:

On Mon, 2004-03-08 at 14:46, Andrew Dunstan wrote:
 

Bruce Momjian wrote:

   

Another problem is that pg_description is per-database, while
pg_user/group are global for all databases.


 

databases are also per cluster, but we have comments on those.

Could we keep the user/group comments in those tables instead of in 
pg_description?
   

Comments longer than ~7k would need a toast table. At the moment, toast
tables don't work on a global basis.
 

Well, presumably we don't want to keep their life story ;-)

I was just thinking out loud I guess - I see there are wrinkles I hadn't 
considered.

cheers

andrew

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] socket calls in signal handler (WAS: APC + socket restrictions under Win32?)

2004-03-08 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 To me this sounds like we have to make a general solution, and not win32
 specific, to get the socket calls out of the signal handler.

Hold on one second here.  I thought this thread was discussing some
local problem in the Win32 workaround for lack of signals?

The postmaster's use of nominally unsafe stuff in signal handlers is not
and never has been a problem, because there is only one place in the
main loop where signals are unblocked, thus no possibility for something
to interrupt something else.  I don't like the idea of redesigning that
code just because someone misunderstands it.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] COMMENT ON [GROUP/USER]

2004-03-08 Thread Bruce Momjian
Rod Taylor wrote:
-- Start of PGP signed section.
 On Mon, 2004-03-08 at 15:46, Bruce Momjian wrote:
  This doesn't look good.  If we throw a WARNING, why do we not insert
  anything into pg_description.  Seems we should throw an error, or do the
  insert with a warning.
 
 It used to be an error, but that had problems with pg_restore (it
 doesn't like errors). When it was changed to a warning, I think it was
 still reasonable to leave it out of pg_description in anticipation of
 the syntax being changed to:
 
 COMMENT ON DATABASE IS 'Hello';
 
 The above always applies the comment to the current database.

OK, I added a comment to the C code:

ereport(WARNING,/* throw just a warning so pg_restore doesn't fail */

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] socket calls in signal handler (WAS: APC + socket restrictions under Win32?)

2004-03-08 Thread Magnus Hagander
 The postmaster's use of nominally unsafe stuff in signal 
 handlers is not
 and never has been a problem, because there is only one place in the
 main loop where signals are unblocked, thus no possibility for 
 something
 to interrupt something else.  I don't like the idea of 
redesigning that
 code just because someone misunderstands it.
 
 
 You're saying the above is not valid because we block signals?

It's all about reentrance of functions where it's not safe to do
so.

Either you avoid it in the signal handler or you avoid it by only
allowing it during a safe period.

Ok. Let me try to get this straight :-)

Since we *only* permit signals during the select() call, we are safe as
long as we don't call select() inside the signal handlers? (since
select() is not on the list of safe functions). Which we don't.

That makes sense :-) 

If that is indeed the case, I withdraw all my comments and misdirected
ideas, and say we go for a win32 specific workaround :-)


//Magnus

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] Catching up

2004-03-08 Thread Bruce Momjian
I know folks are waiting for things from me (patches applied, changes
made). I took the weekend off to relax and am now pedaling as fast as I
can.

FYI, my upcoming trips look like:

May - Zurich
June - Germany (Linuxtag)
July - Oregon (O'Reilly)
September - China, Japan

I also have a Washington DC trip in a week.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] PITR Functional Design v2 for 7.5

2004-03-08 Thread Simon Riggs
PITR Functional Design v2 for 7.5
[EMAIL PROTECTED]

Currently, PostgreSQL provides Crash Recovery but not yet full Point In Time
Recovery (PITR). The following document provides a design which enhances
the existing robustness features to include full PITR. Since one of the
primary objectives for PITR is robustness, this design is provided in
advance of patches to allow those features and behaviours to be
subjected to the rigours of [HACKERS] before final coding is attempted.
We're really not that far now from making this work, hence the attention
on up-front planning.

Thanks for your comments, Best Regards, Simon Riggs, 2nd Quadrant

 
Review of current Crash Recovery

Crash recovery is catered for by the use of WAL logging, or xlogs. xlogs
are written to disk immediately before a transaction is acknowledged as
committed. xlogs contain REDO information sufficient to rollforward any
changes from a known starting position. The known starting position is
also recorded by keeping track of which transactions have completed in a
file structure known as the clog. Clogs are also written to disk as
transactions commit.

The changed data pages are not written immediately back to disk. They do
not need to be, because the entries in the xlog and clog taken together
are sufficient to recover from a crash.  Every so often a full checkpoint
process is created that will perform a full synchronisation of changed
(or dirty) data pages back to disk. When a checkpoint is complete it will
write the last transaction id to the xlog as a marker, and it will trim
the clog files to the last transaction id. The frequency of checkpoints
is controllable. Changed data pages are written back to disk as a
background process called the bg_writer (or lazy writer), reducing the
effect of checkpoints on busy workloads.
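
For reference, the checkpoint-frequency settings referred to above as
controllable can be inspected from psql; a small sketch, output omitted:

  SHOW checkpoint_segments;   -- WAL segments written between automatic checkpoints
  SHOW checkpoint_timeout;    -- maximum time between checkpoints, in seconds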

In crash recovery, the database files are presumed to be intact, but not
necessarily up to date. When the postmaster comes up again, it checks the
clog to discover what the last checkpointed transaction id was. Using
this, it then scans through the available xlog files to the marker
written by the checkpoint at that time. REDO entries following that
marker are then reapplied to the data pages as far as possible until the
system is brought to the best available point.

If the appropriate xlogs are not available, no recovery is possible.

Following initdb, there will be at least 1 xlog. As new data is written
to xlog, new files will be allocated as required. As a result of
checkpointing, there will be a time when xlogs are no longer required for
crash recovery. At each checkpoint, if there is an xlog that is no longer
required, the last one will be recycled or removed. Xlogs will be
recycled back to the front of the queue, so that we do not need to delete
and create files constantly. A certain maximum number of files will be
kept as preallocated logs; this limit is controllable. When the limit is
reached, xlogs will be removed rather than being recycled. As a result,
the number of xlogs may vary considerably over time, but mostly they will
cycle around, maintaining a roughly steady-state number of xlogs and
therefore predictably constant space utilisation.

If an xlog cannot be written because the available space is full, then
the transaction that depended upon the xlog write will not be able to
commit, nor will any subsequent transactions until the space situation
alleviates. Currently, this imposes a limit on the size of any
transaction based upon the available diskspace in the pg_xlog directory.

Xlogs are relatively high volume, clogs are relatively low. An out of
space condition on clog is typically unlikely.

Failure analysis:
- If a transaction fails, no changes will be committed to xlog and the
clog
entry will show the transaction aborted. 
- If a transaction succeeds, its changes are committed to xlog and the
clog
entry shows the transactions succeeded.
- If xlog directory fills or is otherwise unwritable, a PANIC is raised
- If clog directory fills or is otherwise unwritable, a PANIC is raised

Point in Time Recovery (PITR)

PITR features are designed to extend the existing Crash Recovery
features so that a recovery can take place in situations where a crash
recovery would not have been possible. These situations are:
- database objects have been dropped
- xlogs do not go back far enough in time to allow rollforward recovery
- the database files are not intact and need to be completely replaced
before rollforward

To do this, a full physical backup of the system is required for
recovery. When tablespaces are available, it should be possible to
restore and recover individual tablespaces. In addition, xlogs will need
to be moved out of the normal xlog filesystem to an archive destination.

PITR Proposed Solution

The proposed solution is to allow the existing crash recovery detection
and rollforward logic to be utilised directly to perform PITR, which
should allow the minimum number of changes and additional code.

To allow this to occur, 

Re: [HACKERS] Out of space situation and WAL log pre-allocation (was

2004-03-08 Thread Simon Riggs
Bruce Momjian
 Simon Riggs wrote:
  User-selectable behaviour? OK. That's how we deal with fsync; I can
  relate to that. That hadn't been part of my thinking because of the
  importance I'd attached to the log files themselves, but I can go
with
  that, if that's what was meant.
 
  So, if we had a parameter called Wal_archive_policy that has 3
settings:
  None = no archiving
  Optimistic = archive, but if for some reason log space runs out then
  make space by dropping the oldest archive logs
  Strict = if log space runs out, stop further write transactions from
  committing, by whatever means, even if this takes down dbms.
 
  That way, we've got something akin to transaction isolation level
with
  various levels of protection.
 
 Yep, we will definitely need something like that.  Basically whenever
 the logs are being archived, you have to stop the database if you
can't
 archive, no?

That certainly was my initial feeling, though I believe it is possible
to accommodate both viewpoints. I would not want to have only the
alternative viewpoint, I must confess.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Out of space situation and WAL log pre-allocation (was Tablespaces)

2004-03-08 Thread Simon Riggs
Joe Conway [mailto:[EMAIL PROTECTED]
 Simon Riggs wrote:
  Tom Lane [mailto:[EMAIL PROTECTED] That should be user-scriptable
  policy, in my worldview.
 
  O... and other dbms will freeze when this situation is hit, rather
  than continue and drop archive logs.]
 
 Been there, done that, don't see how it's any better. I hesitate to be
 real specific here, but let's just say the end result was restore from
 backup :-(
 
  So, if we had a parameter called Wal_archive_policy that has 3
  settings: None = no archiving Optimistic = archive, but if for some
  reason log space runs out then make space by dropping the oldest
  archive logs Strict = if log space runs out, stop further write
  transactions from committing, by whatever means, even if this takes
  down dbms.
 
 That sounds good to me. For the Optimistic case, we need to yell
 loudly if we do find ourselves needing to drop segments. For the
 Strict case, we just need to be sure it works correctly ;-)

Good.

"Yell loudly" really needs to happen sometime earlier, which is, as Gavin
originally thought, something to do with tablespaces.

Strict behaviour is fairly straightforward, you just PANIC!

I'd think we could rename these to
"Fail Operational" rather than "Optimistic"
"Fail Safe" rather than "Strict"
...the other names were a bit like "I'm right" and "but I'll do yours
too" ;}

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] [PERFORM] WAL Optimisation - configuration and usage

2004-03-08 Thread Simon Riggs
Tom Lane
 Simon Riggs [EMAIL PROTECTED] writes:
  The behaviour I wish to add is:
  Keep wal_debug as a value between 0 and 16.
  If =0 then no debug output (default).
  Use following bitmasks against the value
  Mask 1 = XLOG Checkpoints get logged
  Mask 2 = Archive API calls get logged
  Mask 4 = Transaction - commits get logged
  Mask 8 = Flush & INSERTs get logged
 
 I see no value in reverting Neil's change.  The above looks way too
much
 like old-line assembler-programmer thinking to me, anyway.  Why not
 invent a separate, appropriately named boolean variable for each thing
 you want to control?  Even C programmers manage to avoid doing the
sort
 of mental arithmetic that the above would force onto DBAs.
 
 As for whether it should be #ifdef'd or not, I'd have no objection to
 turning WAL_DEBUG on by default in pg_config_manual.h for the duration
 of PITR development.  One should not however confuse short-term
 debugging needs with features that the average user is going to need
 indefinitely.  (It was not too long ago that there was still debugging
 code for btree index building in there, for crissakes.)

...erm, I guess you didn't like that one then? ;}

 As for whether it should be #ifdef'd or not, I'd have no objection to
 turning WAL_DEBUG on by default in pg_config_manual.h for the duration
 of PITR development.  

Yes OK, thank you.

 Why not
 invent a separate, appropriately named boolean variable for each thing
 you want to control?

Yes, OK, will do.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Out of space situation and WAL log pre-allocation (was

2004-03-08 Thread Bruce Momjian
Simon Riggs wrote:
 Bruce Momjian
  Simon Riggs wrote:
   User-selectable behaviour? OK. That's how we deal with fsync; I can
   relate to that. That hadn't been part of my thinking because of the
   importance I'd attached to the log files themselves, but I can go
 with
   that, if that's what was meant.
  
   So, if we had a parameter called Wal_archive_policy that has 3
 settings:
   None = no archiving
   Optimistic = archive, but if for some reason log space runs out then
   make space by dropping the oldest archive logs
   Strict = if log space runs out, stop further write transactions from
   committing, by whatever means, even if this takes down dbms.
  
   That way, we've got something akin to transaction isolation level
 with
   various levels of protection.
  
  Yep, we will definately need something like that.  Basically whenever
  the logs are being archived, you have to stop the database if you
 can't
  archive, no?
 
 That certainly was my initial feeling, though I believe it is possible
 to accommodate both viewpoints. I would not want to have only the
 alternative viewpoint, I must confess.
 

Added to PITR TODO list.  Anything else to add:

http://momjian.postgresql.org/main/writings/pgsql/project


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] socket calls in signal handler (WAS: APC + socket r

2004-03-08 Thread Claudio Natoli

 If that is indeed the case, I withdraw all my comments and misdirected
 ideas, and say we go for a win32 specific workaround :-)

We just need to be very careful about what this work-around is though...

For instance, whilst the idea of an APCcalled flag will work just fine,
mechanically, I'm extremely hesitant to adopt it until we *know* for sure
that, by making a socket call inside the APC, we aren't somehow munging
another part of the socket state. [after all, we didn't anticipate the
current issue]

Unfortunately, afaics, the MSDN documentation is pretty slim on details in
this regard.

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] socket calls in signal handler (WAS: APC + socket r

2004-03-08 Thread Claudio Natoli

Tom Lane wrote:
 I don't like the idea of redesigning that
 code just because someone misunderstands it.

Fair enough, on both counts. However, the original question which was asked
out of ignorance (that I'll freely admit) doesn't seem to have been directly
addressed:

Claudio Natoli wrote:
 The specific (and possibly only? are their others?) issue is the call to
 pgstat_beterm from reaper/CleanupProc, invoked by a SIGCHLD. Can this call
 be deferred to the main loop (ie. ServerLoop) and is there any merit in
 doing so? 

Just canvassing for options. If we can get a win32 specific change that we
trust, great! (I think it goes without saying that, throughout the work on
this port, we've tried to avoid changing the existing code as much as
possible). However, if we can not, I'd like to have other options, and am
exploring this possibility.

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] postgres Mac OS X startup script

2004-03-08 Thread Bruce Momjian

Patch applied.  Thanks.

---



Ray Aspeitia wrote:
 Hello,
 
 I wanted to submit some changes to the bundled postgres startup 
 script for Mac OS X. I added calls to utilize the bundled apache 
 rotatelogs script in the DB startup for log rotation. Also modified 
 startup parameters file to allow using the SystemStarter utility to 
 start/stop/restart postgres with a rotating log file.
 
 The script credits David Wheeler, 2002. I wrote him a message about 
 the changes and he suggested I post them here. I explain some of the 
 changes below.
 
 Not sure how to submit the changes. I have 3 files, PostgreSQL 
 script, StartupParameters.plist file, pg_startupscript.diff file. 
 The diff file was run against the original PostgreSQL script file. 
 I'll try attaching them to this message. If they get filtered I can 
 resend if needed.
 
 Thanks.
 
 Ray A.
 
 
 
 
 1) Changed the Provides parameter in StartupParameters.plist to 
 "PostgreSQL" from "postgres database", simply for ease of typing. It 
 seems that the SystemStarter utility uses the Provides value if you 
 want to control the script. This way I did not have to enclose it in 
 quotes on the command line. The modified StartupParameters.plist is now an 
 XML document also.
 
 
 2) For the startup script I added 2 user modifiable variables:
 
 # do you want to rotate the log files, 1=true 0=false
 ROTATELOGS=1
 
 # logfile rotate in seconds
 ROTATESEC=604800
 
 I also added a non modifiable variable:
 
 # The apache log rotation utility
 LOGUTIL=/usr/sbin/rotatelogs
 
 I modified the StartService and RestartService functions to execute 
 the new commands if the user wants log rotation.

[ Attachments skipped by the archive: PostgreSQL startup script,
StartupParameters.plist, pg_startupscript.diff ]

 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Bruce Momjian

Yes, I think it looks good.

---

Tatsuo Ishii wrote:
 LOCK TABLE table NO WAIT is OK for 7.5? If ok, I will make patches
 against current with some docs changes.
 --
 Tatsuo Ishii
 
   Tatsuo Ishii wrote:
   I agree with Tom here.  I have used the Oracle NOWAIT feature in the 
   past and think it is a great feature IMHO.  But when you need to use it, 
   you want it to apply very specifically to a single statement.  Using a 
   sledge hammer when you need a tweezers isn't the right way to go.


I once wrote patches for 7.3 to implement this feature for the LOCK
statement. For example:

test=# LOCK TABLE sales NO WAIT;
ERROR:  Cannot aquire relation lock

If there's enough interest, I will modify and submit it for 7.5.
--
Tatsuo Ishii

   
   
   That would be great.
   Many people are asking for that.
   Maybe I have time to implement that for SELECT FOR UPDATE.
  
  Here it is(against 7.3.3).
  --
  Tatsuo Ishii
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 LOCK TABLE table NO WAIT is OK for 7.5? If ok, I will make patches
 against current with some docs changes.

Dept of minor gripes: can we do this without turning NO into a
keyword?  Even as a nonreserved word, I think that would be annoying.
"no" is a common abbreviation for "number", so I think it's likely to
get used as a column name.

If Oracle spells it NOWAIT then I'd be much happier with that...

regards, tom lane



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Tatsuo Ishii
If NOWAIT is the choice, I could live with it. If there's no
objection, I will go with NOWAIT, not NO WAIT.
--
Tatsuo Ishii

 Tatsuo Ishii [EMAIL PROTECTED] writes:
  LOCK TABLE table NO WAIT is OK for 7.5? If ok, I will make patches
  against current with some docs changes.
 
 Dept of minor gripes: can we do this without turning NO into a
 keyword?  Even as a nonreserved word, I think that would be annoying.
 no is a common abbreviation for number so I think it's likely to
 get used as a column name.
 
 If Oracle spells it NOWAIT then I'd be much happier with that...
 
   regards, tom lane
 



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Christopher Kings-Lynne
 If NOWAIT is the choice, I could live with it. If there's no
 objection, I will go with NOWAIT, not NO WAIT.

How about WITHOUT WAIT, which is like many of our other commands?

Chris



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 If NOWAIT is the choice, I could live with it. If there's no
 objection, I will go with NOWAIT, not NO WAIT.

 How about WITHOUT WAIT, which is like many of our other commands?

The first question in my mind is exactly how does Oracle spell this?
Let's not go inventing compatible-with-no-one syntax if we don't have to.

regards, tom lane



Re: [HACKERS] [PATCHES] NO WAIT ...

2004-03-08 Thread Tatsuo Ishii
It seems NOWAIT is the winner...
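
Just to illustrate the intended use from an application, here is a rough
libpq sketch against the proposed syntax (obviously untested, since the
feature isn't in yet; the database and table names are only examples):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));

    /* try to grab the lock without blocking (proposed 7.5 syntax) */
    res = PQexec(conn, "LOCK TABLE sales NOWAIT");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        /* lock not immediately available: give up and do something else */
        fprintf(stderr, "could not lock sales: %s", PQerrorMessage(conn));
        PQclear(res);
        PQclear(PQexec(conn, "ROLLBACK"));
    }
    else
    {
        PQclear(res);
        /* ... work on the exclusively locked table ... */
        PQclear(PQexec(conn, "COMMIT"));
    }

    PQfinish(conn);
    return 0;
}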
--
Tatsuo Ishii

 Oracle uses NOWAIT so we should go for that one.
   
   Regards,
 
   Hans
 
 
 
 Tatsuo Ishii wrote:
  If NOWAIT is the choice, I could live with it. If there's no
  objection, I will go with NOWAIT, not NO WAIT.
  --
  Tatsuo Ishii
  
  
 Tatsuo Ishii [EMAIL PROTECTED] writes:
 
 LOCK TABLE table NO WAIT is OK for 7.5? If ok, I will make patches
 against current with some docs changes.
 
 Dept of minor gripes: can we do this without turning NO into a
 keyword?  Even as a nonreserved word, I think that would be annoying.
 no is a common abbreviation for number so I think it's likely to
 get used as a column name.
 
 If Oracle spells it NOWAIT then I'd be much happier with that...
 
 regards, tom lane
 
  
  
 
 
 -- 
 Cybertec Geschwinde u Schoenig
 Schoengrabern 134, A-2020 Hollabrunn, Austria
 Tel: +43/2952/30706 or +43/664/233 90 75
 www.cybertec.at, www.postgresql.at, kernel.cybertec.at
 
 
 



Re: [HACKERS] Thread safe connection-name mapping in ECPG. Is it

2004-03-08 Thread Shridhar Daithankar
On Sunday 07 March 2004 20:28, Michael Meskes wrote:
 On Wed, Mar 03, 2004 at 07:40:40PM +0530, Shridhar Daithankar wrote:
  Is this fine?
   * Allow a 'connection *' pointer to be specified instead of a string to
   denote a connection.
  ...

 I personally have no problem with this as long as it does not break
 compatibility to the code we allow now.

I searched through the SQL92 standard over the weekend (Sunday and Monday; I 
had a working Saturday :-)) and need to correct some of the assumptions I 
stated previously.

In ECPG we cannot do away with connection names as strings, because the 
standard requires them. Hence, if we provide a connection pointer to denote a 
connection, that would be a PostgreSQL-only extension, and it should be 
documented with a warning about the potential portability problem.
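
To make that concrete, here is a tiny ECPG sketch; the string-named
connection is the standard form, and the pointer-based variant (shown only
in the comment) is the hypothetical extension under discussion, with names
chosen purely for illustration:

EXEC SQL BEGIN DECLARE SECTION;
    int     cnt;
EXEC SQL END DECLARE SECTION;

int
main(void)
{
    /* standard embedded SQL: the connection is named by a string */
    EXEC SQL CONNECT TO testdb AS conn1;
    EXEC SQL AT conn1 SELECT count(*) INTO :cnt FROM pg_class;

    /*
     * Proposed (not yet existing) PostgreSQL-only extension: let the
     * application hold a 'connection *' and pass that instead of the
     * string name, saving the name-to-connection lookup on every
     * statement.  Exact syntax still to be worked out.
     */

    EXEC SQL DISCONNECT conn1;
    return 0;
}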

With the responses so far, I believe it is OK for me to go ahead and actually 
try some coding now :-)

Will keep things posted.

 Shridhar






[HACKERS] [OT] Repository [was Re: [PERFORM] Feature request: smarter use of conditional indexes]

2004-03-08 Thread Shridhar Daithankar
On Sunday 07 March 2004 09:16, Tom Lane wrote:
 Personally I consider -c format the only one of the three that is
 readable for reviewing purposes, so even if I weren't intending
 immediate application, I'd ask for -c before looking at the patch.
 There are some folks who consider -u format readable, but I'm not
 one of them ...

I was wondering what people use to keep track of their personal development, 
especially when they do not have CVS commit access.

I am toying with the idea of using GNU arch for personal use. It encourages 
branching, merging, and having as many repository trees as you like. I haven't 
tried it in the field yet, but if it delivers what it promises, it could be a 
great help.

I know that PostgreSQL does not need as many branches as, say, the Linux 
kernel does, but having a good tool does not hurt, does it? :-)

 Just a thought..

 Shridhar
