[HACKERS] missing pg_clog files ?

2003-09-22 Thread Patrick Welche
There was a thread *last year* about missing pg_clog files caused by dodgy
practices in glibc. I am seeing something similar *now* with a server

 PostgreSQL 7.4beta1 on i386-unknown-netbsdelf1.6X, compiled by GCC 2.95.3

accessed by a similar client, and also by a client

 PostgreSQL 7.4devel on i686-pc-linux-gnu, compiled by GCC 2.95.4


The following works:

  select * from olddata02_03vac offset 2573718 limit 1;

however

  select * from olddata02_03vac offset 2573719 limit 1;
ERROR:  could not access status of transaction 1664158221
DETAIL:  open of file "/usr/local/pgsql/data/pg_clog/0633" failed: No such file or 
directory

and

# ls -l pg_clog
total 32
-rw---  1 postgres  postgres  16384 Sep 22 13:12 
#


Is it true that the problem was with the server, so the fact that a glibc
client was connecting and possibly doing a vacuum is irrelevant?

What can I do now to fix it? Is this a known problem with beta1, which is now old?

Cheers,

Patrick

---(end of broadcast)---
TIP 8: explain analyze is your friend
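As an aside, the clog segment name in that error follows directly from the bogus transaction ID. A sketch of the arithmetic, assuming the layout constants in the 7.4 sources (2 status bits per transaction, 8 KB pages, 32 pages per SLRU segment):

```shell
# Each transaction needs 2 status bits, so one 8192-byte clog page covers
# 8192 * 4 = 32768 transactions, and one 32-page segment file covers 1048576.
# Segment files are named with the segment number printed as %04X.
xid=1664158221
printf '%04X\n' $(( xid / (8192 * 4 * 32) ))    # prints 0633
```

That reproduces the "0633" file the server tried to open: the file is not really missing, the transaction number itself is garbage.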


Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Tom Lane
Patrick Welche <[EMAIL PROTECTED]> writes:
>   select * from olddata02_03vac offset 2573719 limit 1;
> ERROR:  could not access status of transaction 1664158221
> DETAIL:  open of file "/usr/local/pgsql/data/pg_clog/0633" failed: No such file or 
> directory

> # ls -l pg_clog
> total 32
> -rw---  1 postgres  postgres  16384 Sep 22 13:12 

What you have here is a corrupted tuple (viz, a silly transaction number).

It would be useful to look at the page containing the tuple to see if
any pattern can be detected in the corruption.  To do this, get the
ctid of the prior tuple:
select ctid from olddata02_03vac offset 2573718 limit 1;
This will give you a result "(blocknumber,tuplenumber)".  The bogus
tuple is probably on the same page, though possibly further along.
Next find a dump tool --- I usually use Red Hat's pg_filedump:
http://sources.redhat.com/rhdb/tools.html
Dump out the page(s) in question and send them along.

regards, tom lane



Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Patrick Welche
On Mon, Sep 22, 2003 at 10:50:22AM -0400, Tom Lane wrote:
> Patrick Welche <[EMAIL PROTECTED]> writes:
> >   select * from olddata02_03vac offset 2573719 limit 1;
> > ERROR:  could not access status of transaction 1664158221
> > DETAIL:  open of file "/usr/local/pgsql/data/pg_clog/0633" failed: No such file or 
> > directory
> 
> > # ls -l pg_clog
> > total 32
> > -rw---  1 postgres  postgres  16384 Sep 22 13:12 
> 
> What you have here is a corrupted tuple (viz, a silly transaction number).
> 
> It would be useful to look at the page containing the tuple to see if
> any pattern can be detected in the corruption.  To do this, get the
> ctid of the prior tuple:
>   select ctid from olddata02_03vac offset 2573718 limit 1;

 (71716,15)

> This will give you a result "(blocknumber,tuplenumber)".  The bogus
> tuple is probably on the same page, though possibly further along.
> Next find a dump tool --- I usually use Red Hat's pg_filedump:
>   http://sources.redhat.com/rhdb/tools.html
> Dump out the page(s) in question and send them along.

I hope I guessed the right syntax...

% pg_filedump -R 71716 data/base/17148/283342

***
* PostgreSQL File/Block Formatted Dump Utility - Version 2.0-Alpha
*
* File: data/base/17148/283342
* Options used: -R 71716 
*
* Dump created on: Mon Sep 22 16:21:29 2003
***

Block 71716 
 -
 Block Offset: 0x23048000         Offsets: Lower     176 (0x00b0)
 Block: Size 8192  Version    1            Upper     236 (0x00ec)
 LSN:  logid      1 recoff 0xd308022c      Special  8192 (0x2000)
 Items:   39                   Free Space:   60
 Length (including item array): 180

 -- 
 Item   1 -- Length:  204  Offset: 7988 (0x1f34)  Flags: USED
 Item   2 -- Length:  204  Offset: 7784 (0x1e68)  Flags: USED
 Item   3 -- Length:  204  Offset: 7580 (0x1d9c)  Flags: USED
 Item   4 -- Length:  204  Offset: 7376 (0x1cd0)  Flags: USED
 Item   5 -- Length:  204  Offset: 7172 (0x1c04)  Flags: USED
 Item   6 -- Length:  204  Offset: 6968 (0x1b38)  Flags: USED
 Item   7 -- Length:  204  Offset: 6764 (0x1a6c)  Flags: USED
 Item   8 -- Length:  204  Offset: 6560 (0x19a0)  Flags: USED
 Item   9 -- Length:  204  Offset: 6356 (0x18d4)  Flags: USED
 Item  10 -- Length:  204  Offset: 6152 (0x1808)  Flags: USED
 Item  11 -- Length:  204  Offset: 5948 (0x173c)  Flags: USED
 Item  12 -- Length:  204  Offset: 5744 (0x1670)  Flags: USED
 Item  13 -- Length:  204  Offset: 5540 (0x15a4)  Flags: USED
 Item  14 -- Length:  204  Offset: 5336 (0x14d8)  Flags: USED
 Item  15 -- Length:  204  Offset: 5132 (0x140c)  Flags: USED
 Item  16 -- Length:  204  Offset: 4928 (0x1340)  Flags: USED
 Item  17 -- Length:  204  Offset: 4724 (0x1274)  Flags: USED
 Item  18 -- Length:  204  Offset: 4520 (0x11a8)  Flags: USED
 Item  19 -- Length:  204  Offset: 4316 (0x10dc)  Flags: USED
 Item  20 -- Length:  204  Offset: 4112 (0x1010)  Flags: USED
 Item  21 -- Length:  204  Offset: 3908 (0x0f44)  Flags: USED
 Item  22 -- Length:  204  Offset: 3704 (0x0e78)  Flags: USED
 Item  23 -- Length:  204  Offset: 3500 (0x0dac)  Flags: USED
 Item  24 -- Length:  204  Offset: 3296 (0x0ce0)  Flags: USED
 Item  25 -- Length:  204  Offset: 3092 (0x0c14)  Flags: USED
 Item  26 -- Length:  204  Offset: 2888 (0x0b48)  Flags: USED
 Item  27 -- Length:  204  Offset: 2684 (0x0a7c)  Flags: USED
 Item  28 -- Length:  204  Offset: 2480 (0x09b0)  Flags: USED
 Item  29 -- Length:  204  Offset: 2276 (0x08e4)  Flags: USED
 Item  30 -- Length:  204  Offset: 2072 (0x0818)  Flags: USED
 Item  31 -- Length:  204  Offset: 1868 (0x074c)  Flags: USED
 Item  32 -- Length:  204  Offset: 1664 (0x0680)  Flags: USED
 Item  33 -- Length:  204  Offset: 1460 (0x05b4)  Flags: USED
 Item  34 -- Length:  204  Offset: 1256 (0x04e8)  Flags: USED
 Item  35 -- Length:  204  Offset: 1052 (0x041c)  Flags: USED
 Item  36 -- Length:  204  Offset:  848 (0x0350)  Flags: USED
 Item  37 -- Length:  204  Offset:  644 (0x0284)  Flags: USED
 Item  38 -- Length:  204  Offset:  440 (0x01b8)  Flags: USED
 Item  39 -- Length:  204  Offset:  236 (0x00ec)  Flags: USED


*** End of Requested Range Encountered. Last Block Read: 71716 ***


Cheers,

Patrick



Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Tom Lane
Patrick Welche <[EMAIL PROTECTED]> writes:
> I hope I guessed the right syntax...
> % pg_filedump -R 71716 data/base/17148/283342

Yes, but this doesn't give all the available info.  Add -i and -f
options.  A plain -d dump might be interesting too.

regards, tom lane



Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Patrick Welche
On Mon, Sep 22, 2003 at 11:33:30AM -0400, Tom Lane wrote:
> Patrick Welche <[EMAIL PROTECTED]> writes:
> > I hope I guessed the right syntax...
> > % pg_filedump -R 71716 data/base/17148/283342
> 
> Yes, but this doesn't give all the available info.  Add -i and -f
> options.  A plain -d dump might be interesting too.

Indeed, the plain -d dump says that I have a chunk of /var/mail/prlw1
in 1000-13ff. No wonder postgres complained!

Highlight:

  0fe0: 0600     
  0ff0: 0100 3e00    >...
  1000: 52657475 726e2d70 6174683a 203c7072  Return-path: <pr
  1410:   01002418 0f001a00  ..$.

Would you be interested in the full dump anyway? It seems this is trashed
and I need to bring out the backups, right? What follows is speculation as to
how: I read a very large mail file with mutt, which I think uses mmap. That
still leaves the question of how it ended up in the database. Is it worth
reloading into the same database server, or upgrading to current CVS?
NetBSD-1.6ZC/i386 with 2Gb memory.

Thanks for the help!

Cheers,

Patrick



Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Tom Lane
Patrick Welche <[EMAIL PROTECTED]> writes:
> Indeed, the plain -d dump says that I have a chunk of /var/mail/prlw1
> in 1000-13ff. No wonder postgres complained!

Yipes.  We have seen this sort of thing once or twice in the past.
I don't know whether you are looking at a disk drive fault (dropping
the right data onto the wrong sector) or a kernel fault (getting
confused about which buffer holds which file block), but I believe
it's one or the other.  It's hard to see how anything at the application
level could have gotten those two files switched.  You might look to see
if there are any updates available for your kernel.

As for recovery, you probably want to drop and reload at least that one
table.  Whether it's worth a complete reload is your call.

regards, tom lane



Re: [HACKERS] Killing the backend to cancel a long waiting query

2003-09-22 Thread Paulo Scardine
I think I can implement it as C functions.
It would be nice to have something like:

Test=# select pg_list_backends();
  pid  | conn_id  |   user   | database  | time | host  | status
---+--+--+---+--+---+
  4724 | 35445134 | marcelo  | test  | 0:34 | 192.168.2.212 | select
 18737 | 15412337 | postgres | template1 | 0:00 | 192.168.1.65  | idle
 18693 | 11448964 | postgres | test  | 0:00 | 127.0.0.1 | idle
(3 rows)

Test=# select pg_stopquery_pid(4724);
 pg_stopquery_pid
--
0

Is it worth it?

--
Paulo Scardine

- Original Message - 
From: "Dave Page" <[EMAIL PROTECTED]>
> > It would be awesome for phpPGAdmin as well.  eg. Superusers would be
> > able to cancel sql queries that gumby users are running, etc.
>
> I'll second that for pgAdmin. I have times in the past where it would have
> been useful on my own systems as well.
> Regards, Dave.




Re: [HACKERS] missing pg_clog files ?

2003-09-22 Thread Alvaro Herrera
On Mon, Sep 22, 2003 at 05:03:28PM +0100, Patrick Welche wrote:
> On Mon, Sep 22, 2003 at 11:33:30AM -0400, Tom Lane wrote:
> > Patrick Welche <[EMAIL PROTECTED]> writes:
> > > I hope I guessed the right syntax...
> > > % pg_filedump -R 71716 data/base/17148/283342
> > 
> > Yes, but this doesn't give all the available info.  Add -i and -f
> > options.  A plain -d dump might be interesting too.
> 
> Indeed, the plain -d dump says that I have a chunk of /var/mail/prlw1
> in 1000-13ff. No wonder postgres complained!

For the record, what filesystem is this on?  Is it ReiserFS by any
chance?

-- 
Alvaro Herrera ()
"Learn to be more ashamed before yourself than before others." (Democritus)



Re: [HACKERS] Killing the backend to cancel a long waiting query

2003-09-22 Thread Robert Treat
On Mon, 2003-09-22 at 13:53, Paulo Scardine wrote:
> I can implement it as C functions, I think.
> Would be nice to have something like:
> 
> Test=# select pg_list_backends();
>   pid  | conn_id  |   user   | database  | time | host  | status
> ---+--+--+---+--+---+
>   4724 | 35445134 | marcelo  | test  | 0:34 | 192.168.2.212 | select
>  18737 | 15412337 | postgres | template1 | 0:00 | 192.168.1.65  | idle
>  18693 | 11448964 | postgres | test  | 0:00 | 127.0.0.1 | idle
> (3 rows)
> 
> Test=# select pg_stopquery_pid(4724);
>  pg_stopquery_pid
> --
> 0
> 
> Is it worth it?

Yes, but you may need a way to ensure that the query canceled is the
same one that you really want to cancel. (Imagine some connection
pooling and bad luck mixed together, so that the long running query
actually ends but another query fires up between the time you look up
the long running query and the time you issue the kill...) Maybe
transaction id as well as pid for arguments?

Robert Treat 
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL




Re: [HACKERS] PostgreSQL not ACID compliant?

2003-09-22 Thread Hannu Krosing
Heikki Tuuri kirjutas P, 21.09.2003 kell 12:51:
> Tom,
> 
> - Original Message - 
> From: "Tom Lane" <[EMAIL PROTECTED]>
> To: "Heikki Tuuri" <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Sunday, September 21, 2003 10:32 AM
> Subject: Re: [HACKERS] PostgreSQL not ACID compliant?
> 
> 
> > "Heikki Tuuri" <[EMAIL PROTECTED]> writes:
> > > if you set the transaction isolation level SERIALIZABLE in MySQL/InnoDB,
> > > then InnoDB uses next-key locking in every SELECT, and transactions
> really
> > > are serializable in the mathematical sense.
> >
> > My understanding is that next-key locking only helps when all your
> > predicates are point or range searches against an available b-tree
> > index.
> 
> all SQL queries are performed through index searches.

Does that mean that an index is always needed for the predicate column, or
is this an abstract "index" in some relational algebra sense?

>  That is why next-key
> locking enforces serializability. IBM researcher C. Mohan has written papers
> about next-key locking. Next-key locking is an approximation of predicate
> locking. We simply lock more to make sure the 'predicates' themselves are
> locked.
> 
> > While that might cover many practical cases, it can hardly
> > be called a complete solution to the problem of serializability.
> 
> It is a complete solution.

Is this solution complete only for the MAX() case, or is it a general
solution working for things like AVG or STDDEV, and perhaps all
user-defined aggregates as well?

> Another approximation of predicate locking is
> table level locking, a solution which Oracle used some 15 years ago, if you
> switched it on the SERIALIZABLE isolation level.

Table level locking seems to be a complete solution indeed, just not
concurrent at all. It may be that we have to forget concurrency to get
complete and general serializability ;(

Or is "next key locking" something more than a solution for getting
continuous nextval()'s?

--
Hannu



Re: [HACKERS] Killing the backend to cancel a long waiting query

2003-09-22 Thread Paulo Scardine
I do not know how to do that. I learned that I can send a SIGINT to a
backend to cancel a query, but I don't know how to kill just one transaction.
I have only "pg_getpid" and "pg_killpid" as interfaces to the system functions
"getpid" and "kill".

BTW, what information can I get about the current running backends on the
internal structures? Or do I have to ask the OS?

Thank you,
--
Paulo Scardine

- Original Message - 
From: "Robert Treat" <[EMAIL PROTECTED]>
Subject: Re: [HACKERS] Killing the backend to cancel a long waiting query
>
> Yes, but you may need a way to ensure that the query canceled is the
> same one that you really want to cancel. (imagine come connection
> pooling and bad luck mixed together so that the long running query
> actually ends but another query fires up between the time you lookup the
> long running query and the time you issue the kill...). maybe
> transaction id as well as pid for arguments?
>




[HACKERS] Back from Mexico

2003-09-22 Thread Bruce Momjian
I am back from speaking in Mexico.  I was one of the keynote speakers at
this event.  This is evidence of how far PostgreSQL has come in just the
past year.  PostgreSQL is now a "must have" for most open-source events.

FYI, we have lots of PostgreSQL users in Mexico.

Catching up on email now.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[HACKERS] More Preliminary DBT-2 Test Results with PostgreSQL 7.3.4

2003-09-22 Thread markw
http://developer.osdl.org/markw/74/

I had a couple of hiccups doubling the database size,
but I have results with proper linux kernel profile data now.  The
increase in database size has decreased the overall performance, as
expected...

I haven't had the opportunity to try different database parameters, such
as effective cache sizes, WAL buffers, etc., yet.  I'm trying to track
down why I'm attempting to insert duplicate keys into one of the tables,
which I hope will turn out to be a simple driver error.  I will get back
with more data when I can.

-- 
Mark Wong - - [EMAIL PROTECTED]
Open Source Development Lab Inc - A non-profit corporation
12725 SW Millikan Way - Suite 400 - Beaverton, OR 97005
(503) 626-2455 x 32 (office)
(503) 626-2436  (fax)



[HACKERS] old pgindent change

2003-09-22 Thread Nigel J. Andrews

There was a simple change committed in revision 1.47 of pgindent, listed as
being "More updates for GNU indent".

The questions are: why? and surely I can't be the only one who's hit this
problem since November 2001?

On a debian (woody or potato, whichever one had a 2.2 series kernel) using
GNU bash 2.03.0 via /bin/sh (in the pgindent script) I get:


which indent = /usr/local/bin/indent
indent -version = Berkeley indent 5.17

status = 0
You do not appear to have 'indent' installed on your system.

By adding appropriate echo commands before and after the indent -version line
in the script; I've attached my slightly modified version for completeness. I
can never remember which way around the test of $? succeeds or fails, but the
above experiment, plus a look at the nearby tests of $? in pgindent, seems to
show that there is a problem here.

A simple test of 2.05a.0 on a newer system using:

ls
if [ $? -eq 0 ]; then echo success aaa; fi
lls
if [ $? -eq 0 ]; then echo success bbb; fi

shows that the lls (non-existent executable) status fails the test as expected
while the plain ls status passes.

I'm obviously missing something very significant here, or there's a very
strange oddity that's been there, and specifically placed there, for nearly
2 years.
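One subtlety with the instrumented script as attached: $? always reflects the most recent command, so the echo added between "indent -version" and the status test replaces the value being tested. A minimal sketch of the safer pattern, using a stand-in indent function that exits 0 (as Berkeley indent 5.17 evidently does here):

```shell
indent() { return 0; }            # stand-in: Berkeley indent 5.17 exits 0

indent -version >/dev/null 2>&1
status=$?                         # capture at once; any later command clobbers $?
echo "status = $status"
if [ "$status" -ne 1 ]; then
    echo "this indent does not exit with status 1, so the install check fires"
fi
```

With the status captured first, the diagnostic echo can no longer change the outcome of the test.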


-- 
Nigel J. Andrews
#!/bin/sh

echo
echo -n Path=
echo $PATH
echo
# Known bugs:
#
# Blank line is added after, seen as a function definition, no space
# after *:
#   y = (int) x *y;

trap "rm -f /tmp/$$ /tmp/$$a" 0 1 2 3 15
entab </dev/null
if [ "$?" -ne 0 ]
then	echo "Go to the src/tools/entab directory and do a 'make' and 'make install'." >&2
	echo "This will put the 'entab' command in your path." >&2
	echo "Then run $0 again."
	exit 1
fi
echo
a=`which indent`
echo "which indent = $a"
a=`indent -version 2>&1`
echo "indent -version = $a"
echo
indent -version </dev/null >/dev/null 2>&1
echo "status = $?"
if [ "$?" -ne 1 ]
then	echo "You do not appear to have 'indent' installed on your system." >&2
	exit 1
fi
indent -gnu </dev/null >/dev/null 2>&1
if [ "$?" -eq 0 ]
then	echo "You appear to have GNU indent rather than BSD indent." >&2
	echo "See the pgindent/README file for a description of its problems." >&2
	EXTRA_OPTS="-cdb -bli0 -npcs -cli4 -sc"
else	echo "Hope you installed /src/tools/pgindent/indent.bsd.patch." >&2
	EXTRA_OPTS="-cli1"
fi

for FILE
do
cat "$FILE" |
# convert // comments to /* */
sed 's;^\([ ]*\)//\(.*\)$;\1/* \2 */;g' |
# Avoid bug that converts 'x =- 1' to 'x = -1'
sed 's;=- ;-= ;g' |
# mark some comments for special treatment later
sed 's;/\*  *---;/*---X_X;g' |
# workaround for indent bug with 'else' handling
sed 's;\([} ]\)else[]*\(/\*.*\)$;\1else\
\2;g' | 
detab -t4 -qc |
# work around bug where function that defines no local variables misindents
# switch() case lines and line after #else.  Do not do for struct/enum.
awk '   BEGIN   {line1 = ""; line2 = ""}
{
line2 = $0;
if (NR >= 2)
print line1;
if (NR >= 2 &&
line2 ~ "^{[]*$" &&
line1 !~ "^struct" &&
line1 !~ "^enum" &&
line1 !~ "^typedef" &&
line1 !~ "^extern[  ][  ]*\"C\"" &&
line1 !~ "=" &&
line1 ~ ")")
print "int  pgindent_func_no_var_fix;";
line1 = line2;
}
END {
if (NR >= 1)
print line1;
}' |
# prevent indenting of code in 'extern "C"' blocks
awk '   BEGIN   {line1 = ""; line2 = ""; skips = 0}
{
line2 = $0;
if (skips > 0)
skips--;
if (line1 ~ "^#ifdef[   ]*__cplusplus" &&
line2 ~ "^extern[   ]*\"C\"[]*$")
{
print line1;
print line2;
if (getline && $0 ~ /^{[]*$/)
print "/* Open extern \"C\" */";
else	print $0;
line2 = "";
skips = 2;
}
else if (line1 ~ "^#ifdef[  ]*__cplusplus" &&
line2 ~ "^}[]*$")
{
print line1;
print "/* Close extern \"C\" */";
line2 = "";
skips = 2;
}
else
  

Re: [HACKERS] old pgindent change

2003-09-22 Thread Nigel J. Andrews

On Mon, 22 Sep 2003, Nigel J. Andrews wrote:

> 
> There was a simple change commited in revision 1.47 of pgindent, listed as
> being "More updates for GNU indent".
> 
> The questions are: why? and surely I can't be the only one whose hit this
> problem since November 2001?
> 
> ...

I also had to apply the following change in order to avoid an awk compilation
error:

*** pgindentMon Sep 22 23:54:40 2003
--- pgindent.newMon Sep 22 23:54:27 2003
***
*** 65,71 
line1 !~ "^typedef" &&
line1 !~ "^extern[  ][  ]*\"C\"" &&
line1 !~ "=" &&
!   line1 ~ ")")
print "int  pgindent_func_no_var_fix;";
line1 = line2;
}
--- 65,71 
line1 !~ "^typedef" &&
line1 !~ "^extern[  ][  ]*\"C\"" &&
line1 !~ "=" &&
!   line1 ~ "\)")
print "int  pgindent_func_no_var_fix;";
line1 = line2;
}


The changed line was also added in November 2001, in revision 1.48 this time.

awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan

and again with /bin/sh being a link to bash 2.03.0


-- 
Nigel J. Andrews




Re: [HACKERS] More Preliminary DBT-2 Test Results with PostgreSQL 7.3.4

2003-09-22 Thread Greg Stark

[EMAIL PROTECTED] writes:

> http://developer.osdl.org/markw/74/

Are those response times in the right unit? 7-10s?

Are these mostly full table scans and big batch updates?

Personally, I'm more interested in seeing OLTP-oriented benchmarks testing
quick index based transactions in the 20-200ms range, not big i/o-bound batch
transactions. There's certainly a need for both, but I think the former are
much more important to micro-optimize, at least for my needs.

-- 
greg


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Improving REINDEX for system indexes (long)

2003-09-22 Thread Hiroshi Inoue
I've just put back your previous change, sorry.
As I already mentioned many times, it must be the first thing.

Though I don't remember my code completely yet, I would
reply to some points.
Unfortunately REINDEX wasn't an eagerly wanted command when
I implemented it. Though I wanted to introduce a per-index
flag, I unwillingly used an existing per-table flag (relhasindex)
instead. Because it was impossible to make REINDEX transaction-safe
then, such a flag was needed to suppress inconsistency as much
as possible.
I also unwillingly introduced the ReindexProcessing mode (flag)
because I didn't think of other quick solutions.

IIRC the main reason why I gave up the REINDEX functionality
on nailed relations was the difficulty of reindexing pg_class
and the handling of relcache overflow. I didn't have much time
to test it. In addition, REINDEX wasn't a recognized command
then and I found no one to discuss the situation with.

Tom Lane wrote:
> 
> I've been looking at the issues involved in reindexing system tables,
> and I now have what I think is a fairly defensible set of proposals.
> 
> We should whenever possible use the same reindexing technique used by
> CLUSTER: 

REINDEX was the first command which used the pg_class.relfilenode
functionality. The pg_class.relfilenode was essentially my proposal.

> 1. There is a problem for a shared index, because we have no way to
> update pg_class.relfilenode in all the other databases besides ours.
> I see no good way around this problem.

So the current REINDEX disallows on-line reindex on shared relations.

> 2. There is a problem for a nailed-in-cache index, because the relcache
> isn't prepared to cope with relfilenode updates for such indexes.
> However, that is fixable.

My code works pretty well with nailed relations except pg_class,
#if defined (ENABLE_REINDEX_NAILED_RELATIONS).

> 3. There is a problem for an index on pg_class itself: doing heap_update
> on a pg_class row must make new index entries.  We have to be careful
> that the new row does appear in the updated index, while not making
> entries in old-and-possibly-broken indexes.  This is doable.

Yes.

Sorry, I have no time to continue the discussion now.

regards,
Hiroshi Inoue
http://www.geocities.jp/inocchichichi/psqlodbc/

Re: [HACKERS] [GENERAL] Can't Build 7.3.4 on OS X

2003-09-22 Thread Eric Ridge
On Sep 21, 2003, at 9:03 PM, Tom Lane wrote:
> Eric Ridge <[EMAIL PROTECTED]> writes:
>> any ideas here?  7.3.2 and 7.4beta3 compile just fine (I noticed that
>> 7.4 has something more cross-platform for tas).  What happened in 7.3.4
>> that broke it?
> That makes no sense at all --- AFAICT there were *no* darwin or ppc
> specific changes between 7.3.2 and 7.3.4.  Can you double check?

Looks like I misspoke.  7.3.2 does not link on 10.3 either.  I don't 
know if this makes things better or worse.  Here's the output from 
7.3.2.  7.3.4 says the same thing.

gcc -no-cpp-precomp -O3 -D__APPLE__ -Wall -Wmissing-prototypes 
-Wmissing-declarations access/SUBSYS.o bootstrap/SUBSYS.o 
catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o 
lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o 
optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o 
rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lz 
-lreadline -ltermcap -lresolv -ldl -lm  -o postgres
ld: Undefined symbols:
_tas
make[2]: *** [postgres] Error 1

I purposely defined -O3 and -D__APPLE__ when I ./configure-d.  With or 
w/o -D__APPLE__ things still fail.  I had a whole mess of things going 
wrong with my 10.3 beta box (including a hardware problem), and I must 
have gotten things confused.  Again, 7.3.2 doesn't link either.

I don't think the OS X 10.3 betas are readily available (I've paid to 
be in Apple's developer program), so if you don't have access to 10.3 
but have some idea as to what would cause this problem with tas, I'll 
do whatever I can to help test.

eric



Re: [HACKERS] [GENERAL] Can't Build 7.3.4 on OS X

2003-09-22 Thread E R
On Sep 21, 2003, at 9:03 PM, Tom Lane wrote:
> That makes no sense at all --- AFAICT there were *no* darwin or ppc
> specific changes between 7.3.2 and 7.3.4.  Can you double check?

Not really knowing what I'm doing, I took s_lock.c and s_lock.h from 
7.4beta3, copied 'em into the 7.3.4 src tree, and recompiled.  
Surprisingly, everything compiled and 7.3.4 seems to be running okay (I 
haven't run the regression tests or tried to stress test it yet).  I'm 
not sure this contributes to this thread in a meaningful way, but I 
thought it was worth mentioning.

eric
