Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Alfred Perlstein

* Yusuf Goolamabbas [EMAIL PROTECTED] [001222 15:47] wrote:
 Nope, no luck with cvs -Rq also. Methinks it's some repository
 permission issue. Don't know if CVSup would help either. I don't have
 cvsup installed on this machine. 

CVSup would work, that's what I use.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] externalizing PGresult?

2000-12-21 Thread Alfred Perlstein

Is there anything for encoding a PGresult struct into something I
can pass between processes?  Something like turning it into a
platform-independent stream that I can pass between machines?
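As far as I know libpq has nothing built in for this (hence the question). Below is a minimal sketch of the kind of length-prefixed, network-byte-order framing one might layer on top: plain C strings stand in for the cell values that PQgetvalue()/PQntuples()/PQnfields() would supply from a real PGresult, and every function name here is made up for illustration.

```c
/*
 * Hypothetical sketch (not part of libpq): encode result values into a
 * platform-independent byte stream.  Layout: ntuples and nfields as
 * uint32 in network order, then each value as uint32 length + bytes.
 */
#include <arpa/inet.h>  /* htonl / ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static char *put_u32(char *p, uint32_t v) {
    uint32_t n = htonl(v);
    memcpy(p, &n, 4);
    return p + 4;
}

static const char *get_u32(const char *p, uint32_t *v) {
    uint32_t n;
    memcpy(&n, p, 4);
    *v = ntohl(n);
    return p + 4;
}

/* Encode nrows x ncols string values into a malloc'd buffer. */
char *pgresult_encode(const char *vals[], int nrows, int ncols,
                      size_t *outlen) {
    size_t len = 8;                       /* two uint32 header fields */
    for (int i = 0; i < nrows * ncols; i++)
        len += 4 + strlen(vals[i]);
    char *buf = malloc(len), *p = buf;
    p = put_u32(p, (uint32_t)nrows);
    p = put_u32(p, (uint32_t)ncols);
    for (int i = 0; i < nrows * ncols; i++) {
        size_t l = strlen(vals[i]);
        p = put_u32(p, (uint32_t)l);
        memcpy(p, vals[i], l);
        p += l;
    }
    *outlen = len;
    return buf;
}
```

Decoding is the mirror image: read the two counts, then ntuples * nfields length-prefixed values.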

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Idea for reducing planning time

2000-12-15 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [001215 10:34] wrote:
  
  sorry, meant to respond to the original and deleted it too fast ... 
  
  Tom, if the difference between 7.0 and 7.1 is such that there is a
  performance decrease, *please* apply the fix ... with the boon that OUTER
  JOINs will provide, would hate to see us with a performance hit reducing
  that impact ...
  
  One thing I would like to suggest for this stage of the beta, though, is
  that a little 'peer review' before committing the code might be something
  that would help 'ease' implementing stuff like this and Vadim's VACUUM
  code ... read through Vadim's code and see if it looks okay to you ... get
  Vadim to read through your code/patch and see if it looks okay to him
  ... it adds a day or two to the commit cycle, but at least you can say it
  was reviewed before committed ...
  
 
 Totally agree.  In the old days, we posted all our patches to the list
 so people could see.  We used to make cvs commits only on the main
 server, so we had the patch handy, and it made sense to post it.  Now
 that we have remote cvs, we don't do it as much, but in this case, cvs
 diff -c is a big help.

It seems that Tom has committed his fixups but we're still waiting
on Vadim?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]



Re: [HACKERS] Why vacuum?

2000-12-14 Thread Alfred Perlstein

* Ross J. Reedstrom [EMAIL PROTECTED] [001214 07:57] wrote:
 On Thu, Dec 14, 2000 at 12:07:00PM +0100, Zeugswetter Andreas SB wrote:
  
  They all have an overwriting storage manager. The current storage manager
  of PostgreSQL is non overwriting, which has other advantages.
  
  There seem to be 2 answers to the problem:
  1. change to an overwrite storage manager
  2. make vacuum concurrent capable
  
  The tendency here seems to be towards an improved smgr.
  But, it is currently extremely cheap to calculate where a new row
  needs to be located physically. This task is *a lot* more expensive
  in an overwrite smgr. It needs to maintain a list of pages with free slots,
  which has all sorts of concurrency and persistence problems.
  
 
 Not to mention the recent thread here about people recovering data that
 was accidentally deleted, or from damaged db files: the old tuples serve
 as redundant backup, in a way. Not a real compelling reason to keep a
 non-overwriting smgr, but still a surprise bonus for those who need it.

One could make vacuum optional such that it either:

1) always overwrites
2) will not overwrite data until a vacuum is called (perhaps with
   a date option to specify how much deleted data you wish to
   reclaim); data can be marked free, but not free for re-use,
   until vacuum is run.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Why vacuum?

2000-12-14 Thread Alfred Perlstein

* mlw [EMAIL PROTECTED] [001214 09:30] wrote:
 "Martin A. Marques" wrote:
  
  El Mié 13 Dic 2000 16:41, bpalmer escribió:
   I noticed the other day that one of my pg databases was slow,  so I ran
   vacuum on it,  which brought a question to mind:  why the need?  I looked
   at my oracle server and we aren't doing anything of the sort (that I can
   find),  so why does pg need it?  Any info?
  
  I know nothing about Oracle, but I can tell you that Informix has an
  'update statistics' command; I don't know if it's similar to vacuum.
  What vacuum does is clean the database of rows that were left behind by
  updates and deletes; nonetheless, the tables get shrunk, so searches get
  faster.
  
 
 While I would like Postgres to update statistics on its own once in a
 while, I like vacuum in general.
 
 I would rather trade unused disk space for performance. The last thing
 you need during high loads is the database deciding that it is time to
 clean up.

Even worse is having to scan a file that has grown to 20x its size
because you haven't vacuumed in a while.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Why vacuum?

2000-12-13 Thread Alfred Perlstein

* Martin A. Marques [EMAIL PROTECTED] [001213 15:15] wrote:
 El Mié 13 Dic 2000 16:41, bpalmer escribió:
  I noticed the other day that one of my pg databases was slow,  so I ran
  vacuum on it,  which brought a question to mind:  why the need?  I looked
  at my oracle server and we aren't doing anything of the sort (that I can
  find),  so why does pg need it?  Any info?
 
 I know nothing about Oracle, but I can tell you that Informix has an 
 'update statistics' command; I don't know if it's similar to vacuum. 
 What vacuum does is clean the database of rows that were left behind by 
 updates and deletes; nonetheless, the tables get shrunk, so searches get 
 faster.

Yes, PostgreSQL requires vacuuming quite often, otherwise queries and
updates start taking ungodly amounts of time to complete.  If you're
having problems because vacuum locks up your tables for too long,
you might want to check out:

http://people.freebsd.org/~alfred/vacfix/

It has some tarballs with patches that speed up vacuum; depending
on how you access your tables, you can see up to a 20x reduction in
vacuum time.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Idea for reducing planning time

2000-12-13 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001213 15:18] wrote:
 
 I'm trying to resist the temptation to make this change right now :-).
 It's not quite a bug fix --- well, maybe you could call it a performance
 bug fix --- so I'm kind of thinking it shouldn't be done during beta.
 OTOH I seem to have lost the argument that Vadim shouldn't commit VACUUM
 performance improvements during beta, so maybe this should go in too.
 What do you think?

If you're saying that you're OK with the work Vadim has done, please
let him know; I'm assuming he hasn't committed out of respect for your
still-standing objection.

If you're terribly against it then say so again; I would just rather
it not happen because you objected than because of missed communication.

As far as the work you're proposing, how much of a gain is it over
the current code?  2x? 3x? 20x? :)  There's a difference between a
slight performance increase and something too good to pass up.

thanks,
-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Why vacuum?

2000-12-13 Thread Alfred Perlstein

* xuyifeng [EMAIL PROTECTED] [001213 18:54] wrote:
 I have this nasty problem too. Early on I didn't understand the
 problem, but after we used it for a while we found our table growing
 too fast without inserting any records (we use update). This behaviour
 is most like the M$ Access database I used a long time ago, which
 doesn't reuse deleted record space and fills up your hard disk after
 several hours. The nasty vacuum blocks any other users from operating
 on the table; this is a big problem for a large table, because it
 blocks too long for other users to run queries. We have a project
 affected by this problem, and sadly we decided to use a closed-source
 database - SYBASE on Linux; we didn't have any other options. :(
 
 Note that SYBASE and Informix both have an 'update statistics' command,
 but they run it fast, in seconds, without blocking any other user,
 which is nice. Ya, that's good technology!

http://people.freebsd.org/~alfred/vacfix/

-Alfred



[HACKERS] (one more time) Patches with vacuum fixes available.

2000-12-11 Thread Alfred Perlstein

I know you guys are pretty busy with the upcoming release but I
was hoping for more interest in this work.

With this (which needs forward porting) we're able to cut
vacuum time down from ~10 minutes to under 30 seconds.

The code is a no-op unless you compile with a special option (MNMB)
or specify the special vacuum flag (VLAZY), and it doesn't look like
it messes with anything otherwise.

I was hoping to see it go into 7.0.x because of its non-intrusiveness,
and also because Vadim wrote it, so he understands it well enough that
it shouldn't cause any problems (and on the slight chance that it
does, he should be able to fix it).

Basically Vadim left it up to me to campaign for acceptance of this
work and he said he wouldn't have a problem bringing it in as long
as it was ok with the rest of the development team.

So can we get a go-ahead on this? :)

thanks,
-Alfred

- Forwarded message from Alfred Perlstein [EMAIL PROTECTED] -

From: Alfred Perlstein [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [HACKERS] Patches with vacuum fixes available for 7.0.x
Date: Thu, 7 Dec 2000 14:57:32 -0800
Message-ID: [EMAIL PROTECTED]
User-Agent: Mutt/1.2.5i
Sender: [EMAIL PROTECTED]

We recently had a very satisfactory contract completed by
Vadim.

Basically Vadim has been able to reduce the amount of time
taken by a vacuum from 10-15 minutes down to under 10 seconds.

We've been running with these patches under heavy load for
about a week now without any problems except one:
  don't 'lazy' (new option for vacuum) a table which has just
  had an index created on it, or at least don't expect it to
  take any less time than a normal vacuum would.

There are three patchsets, available at:

http://people.freebsd.org/~alfred/vacfix/

complete diff:
http://people.freebsd.org/~alfred/vacfix/v.diff

only lazy vacuum option to speed up index vacuums:
http://people.freebsd.org/~alfred/vacfix/vlazy.tgz

only lazy vacuum option to scan from the start of modified
data:
http://people.freebsd.org/~alfred/vacfix/mnmb.tgz

Although the patches are for 7.0.x, I'm hoping that they
can be forward-ported to 7.1 (if Vadim hasn't done it
already).

enjoy!

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."

- End forwarded message -

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] (one more time) Patches with vacuum fixes available.

2000-12-11 Thread Alfred Perlstein

* The Hermit Hacker [EMAIL PROTECTED] [001211 14:27] wrote:
 On Mon, 11 Dec 2000, Bruce Momjian wrote:
 
   Alfred Perlstein [EMAIL PROTECTED] writes:
Basically Vadim left it up to me to campaign for acceptance of this
work and he said he wouldn't have a problem bringing it in as long
as it was ok with the rest of the development team.
So can we get a go-ahead on this? :)
   
   If Vadim isn't sufficiently confident of it to commit it on his own
   authority, I'm inclined to leave it out of 7.1.  My concern is mostly
   schedule.  We are well into beta cycle now and this seems like way too
   critical (not to say high-risk) a feature to be adding after start of
   beta.
  
  I was wondering if Vadim was hesitant because he had done this under
  contract.  Vadim, are you concerned about reliability or are there other
  issues?
 
 Irrelevant .. we are post-beta release, and this doesn't fix a bug, so it
 doesn't go in ...

I'm hoping this just means it won't be investigated until the release
is made?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] (one more time) Patches with vacuum fixes available .

2000-12-11 Thread Alfred Perlstein

* Andrew Snow [EMAIL PROTECTED] [001211 20:21] wrote:
 
 On Mon, 11 Dec 2000, Tom Lane wrote:
 
  "Mikheev, Vadim" [EMAIL PROTECTED] writes:
   If there are no objections then I'm ready to add changes to 7.1.
   Else, I'll produce patches for 7.1 just after release and incorporate
   changes into 7.2.
  
  I'd vote for the second choice.  I do not think we should be adding new
  features now.  Also, I don't know about you, but I have enough bug fix,
  testing, and documentation work to keep me busy till January even
  without any new features...
 
 It'd be really naughty to add it to the beta at this stage.  Would it be
 possible to add it to the 7.1 package with some kind of compile-time option?
 So that those of us who do want to use it, can.

One is a compile-time option (CFLAGS+=-DMNMB); the other doesn't
happen unless you ask for it:

vacuum lazy table;

I don't understand what the deal here is; as I said, it's optional
code that you won't see unless you ask for it.

[children: 0 12/11/2000 21:57:20 x]
Vacuuming link.
[children: 0 12/11/2000 21:57:54 x]

-rw---  1 pgsql  pgsql  134627328 Dec 11 21:57 link
-rw---  1 pgsql  pgsql  261201920 Dec 11 21:57 link_triple_idx

Yup, 30 seconds, the table is 134 megabytes and the index is 261 megs.

I think normally this takes about 10 or so _minutes_.

On our faster server:

[children: 0 12/11/2000 22:17:50 x]
Vacuuming referer_link.
[children: 0 12/11/2000 22:18:09 x]

-rw---  1 pgsql  wheel  273670144 Dec 11 22:15 link
-rw---  1 pgsql  wheel  641048576 Dec 11 22:15 link_triple_idx

The time is ~19 seconds, the table is 273 megs, and the index 641 megs.

Dual 800MHz, RAID 5 disks.

I think the users deserve this patch. :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] abstract: fix poor constant folding in 7.0.x, fixed in 7.1?

2000-12-07 Thread Alfred Perlstein

I have an abstract solution for a problem in postgresql's
handling of what should be constant data.

We had a problem with a query taking way too long; basically
we had this:

select
  date_part('hour',t_date) as hour,
  transval as val
from st
where
  id = 500 
  AND hit_date >= '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan
  AND hit_date <= '2000-12-07 14:27:24-08'::timestamp
;

turning it into:

select
  date_part('hour',t_date) as hour,
  transval as val
from st
where
  id = 500 
  AND hit_date >= '2000-12-06 14:27:24-08'::timestamp
  AND hit_date <= '2000-12-07 14:27:24-08'::timestamp
;

(doing the -24 hours separately)

The values of cost went from:
(cost=0.00..127.24 rows=11 width=12)
to:
(cost=0.00..4.94 rows=1 width=12)

By simply assigning each SQL "function" a taint value for constness,
one could easily reduce:
  '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan
to:
  '2000-12-07 14:27:24-08'::timestamp
by applying the expression and rewriting the query.

Each function should have a marker indicating whether, given a
const input, the output might vary; that way subexpressions can
be collapsed until an input becomes non-const.

Here, let's break up:
  '2000-12-07 14:27:24-08'::timestamp - '24 hours'::timespan

What we have is:
   timestamp(const) - timespan(const)

we have timestamp defined like so:
const timestamp(const string)
non-const timestamp(non-const)

and timespan like so:
const timespan(const string)
non-const timespan(non-const)

So now we have:
   const timestamp((const string)'2000-12-07 14:27:24-08')
 - const timespan((const string)'24 hours')
---
   const
 - const

   const

then eval the query.

You may want to allow a function to have a hook where it can
examine a const input, because depending on the value it may or
may not be able to return a const; for instance, some string
you passed to timestamp() could cause it to return non-const data.
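The folding rule described above can be sketched in a few lines of C. This is a toy model, not PostgreSQL's planner: integers stand in for timestamp/timespan values, and the struct fields (with `func_cachable` playing the role of the constness "taint" marker) are invented for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy expression node: a leaf constant or a function call. */
typedef struct Expr {
    bool is_const;                /* already a constant?           */
    bool func_cachable;           /* const inputs -> const output? */
    int  value;                   /* stand-in for the const value  */
    int  nargs;
    struct Expr *args[2];
    int (*eval)(struct Expr *);   /* evaluate from folded args     */
} Expr;

/* Stand-in for "timestamp - timespan". */
static int eval_minus(Expr *e) {
    return e->args[0]->value - e->args[1]->value;
}

/* Collapse subexpressions bottom-up until an input is non-const. */
static bool fold(Expr *e) {
    if (e->is_const)
        return true;
    bool all_const = true;
    for (int i = 0; i < e->nargs; i++)
        if (!fold(e->args[i]))
            all_const = false;
    if (all_const && e->func_cachable) {
        e->value = e->eval(e);    /* run the function once, at plan time */
        e->is_const = true;
    }
    return e->is_const;
}
```

A function marked non-cachable stops the collapse at that point, exactly as a non-const input would.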

Or maybe this is fixed in 7.1?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] abstract: fix poor constant folding in 7.0.x, fixed in 7.1?

2000-12-07 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001207 16:45] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  Each function should have a marker that explains whether when given
  a const input if the output might vary, that way subexpressions can
  be collapsed until an input becomes non-const.
 
 We already have that and do that.
 
 The reason the datetime-related routines are generally not marked
 'proiscachable' is that there's this weird notion of a CURRENT time
 value, which means that the result of a datetime calculation may
 vary depending on when you do it, even though the inputs don't.
 
 Note that CURRENT here does not mean translating 'now' to current
 time during input conversion, it's a special-case data value inside
 the system.
 
 I proposed awhile back (see pghackers thread "Constant propagation and
 similar issues" from mid-September) that we should eliminate the CURRENT
 concept, so that datetime calculations can be constant-folded safely.
 That, um, didn't meet with universal approval... but I still think it
 would be a good idea.

I agree with you that doing anything to be able to fold these would
be nice.  However, as mentioned in my abstract, when a constant
makes it into a function you can provide a hook so that the
function can report whether or not that particular constant is
cachable.

If the date functions used that hook to get a glimpse of the constant
data passed in, they could return 'cachable' if it doesn't contain
the 'CURRENT' stuff you're talking about.

something like this could be called on input to "maybe-cachable"
functions:

#include <strings.h>	/* strcasecmp */

int
date_cachable_hook(const char *datestr)
{
	if (strcasecmp("current", datestr) == 0)
		return (UNCACHEABLE);	/* value depends on evaluation time */
	return (CACHEABLE);
}

Or maybe I'm misunderstanding what CURRENT implies?

I do see that on:
  http://www.postgresql.org/mhonarc/pgsql-hackers/2000-09/msg00408.html

both you and Thomas Lockhart agree that CURRENT is a broken concept
because it can cause btree inconsistencies and should probably be
removed anyway.

No one seems to dispute that, and then the thread leads off into
discussions about optimizer hints.

 In the meantime you can cheat by defining functions that you choose
 to mark ISCACHABLE, as has been discussed several times in the archives.

Yes, but it doesn't help the naive user (me :) ) much. :(

Somehow I doubt that people would complain if 'CURRENT' were ifdef'd out.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Patches with vacuum fixes available for 7.0.x

2000-12-07 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001207 17:10] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  Basically Vadim has been able to reduce the amount of time
  taken by a vacuum from 10-15 minutes down to under 10 seconds.
 
 Cool.  What's it do, exactly?



The first is a bonus that Vadim gave us to speed up index
vacuums.  I'm not sure I understand it completely, but it
works really well. :)

here's the README he gave us:

   Vacuum LAZY index cleanup option

The LAZY vacuum option introduces a new way of cleaning up indices.
Instead of reading the entire index file to remove index tuples
pointing to deleted table records, with the LAZY option vacuum
performs index scans using keys fetched from the table record
to be deleted. Vacuum checks each result returned by the index
scan to see if it points to the target heap record, and removes
the corresponding index tuple.
This can greatly speed up index cleaning if not too many
table records were deleted/modified between vacuum runs.
Vacuum uses the new option on the user's demand.

New vacuum syntax is:

vacuum [verbose] [analyze] [lazy] [table [(columns)]]



The second is one of the suggestions I gave on the lists a while
back: keeping track of the "last dirtied" block in the data files
so that only the tail end of the file is scanned for deleted rows.
I think what he instead did was keep a table that holds all the
modified blocks, and vacuum only scans those:

  Minimal Number Modified Block (MNMB)

This feature tracks the MNMB of required tables with triggers,
to avoid vacuum reading unmodified table pages. Triggers
store the MNMB in per-table files in a specified directory
($LIBDIR/contrib/mnmb by default) and create these files if they
do not exist.

Vacuum first looks up functions

mnmb_getblock(Oid databaseId, Oid tableId)
mnmb_setblock(Oid databaseId, Oid tableId, Oid block)

in the catalog. If *both* functions are found *and* no ANALYZE
option was specified, then vacuum calls mnmb_getblock to obtain the
MNMB for the table being vacuumed and starts reading the table from
the block number returned. After the table is processed, vacuum calls
mnmb_setblock to update the data in the file to the last table block
number.
Neither mnmb_getblock nor mnmb_setblock tries to create the file.
If there is no file for the table being vacuumed, then mnmb_getblock
returns 0 and mnmb_setblock does nothing.
mnmb_setblock() may be used to set the MNMB in the file to 0 and force
vacuum to read the entire table if required.

To compile MNMB you have to add -DMNMB to CUSTOM_COPT
in src/Makefile.custom.
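The file handling described in that README might look roughly like this. This is my own sketch, not Vadim's code: the path argument and the plain-text file format are assumptions, and the real functions take (databaseId, tableId) and map them to a file under $LIBDIR/contrib/mnmb.

```c
#include <stdio.h>

/* Read a table's MNMB.  A missing or unreadable file reads as 0,
 * which makes vacuum scan the whole table. */
unsigned long mnmb_getblock(const char *path) {
    unsigned long block = 0;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%lu", &block) != 1)
            block = 0;
        fclose(f);
    }
    return block;
}

/* Update a table's MNMB after vacuum.  Per the README, this never
 * creates the file: if the triggers haven't made one, do nothing. */
int mnmb_setblock(const char *path, unsigned long block) {
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;                /* no file: nothing to update */
    fclose(f);
    f = fopen(path, "w");         /* rewrite the existing file */
    if (f == NULL)
        return -1;
    fprintf(f, "%lu\n", block);
    fclose(f);
    return 0;
}
```

Passing 0 to mnmb_setblock() restores the "scan everything" behaviour, matching the forced-full-vacuum escape hatch the README mentions.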

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Patches with vacuum fixes available for 7.0.x

2000-12-07 Thread Alfred Perlstein

* Tom Samplonius [EMAIL PROTECTED] [001207 18:55] wrote:
 
 On Thu, 7 Dec 2000, Alfred Perlstein wrote:
 
  We recently had a very satisfactory contract completed by
  Vadim.
  
  Basically Vadim has been able to reduce the amount of time
  taken by a vacuum from 10-15 minutes down to under 10 seconds.
 ...
 
   What size database was that on?

Tables were around 300 megabytes.

   I'm looking at moving a 2GB database from MySQL to Postgres.  Most of that
 data is one table with 12 million records, to which we post about 1.5
 million records a month.  MySQL's table locking sucks, but as long as we are
 careful about what reports we run and when, we can avoid the problem.  
 However, Postgres' vacuum also sucks.  I have no idea how long our
 particular database would take to vacuum, but I don't think it would be
 very nice.

We only do about 54,000,000 updates to a single table per month.

   That also leads to the erserver thing.  erserver sounds nice, but I sure
 wish it was possible to get more details on it.  It seems rather
 intangible right now.  If erserver is payware, where do I buy it?

Contact Pgsql Inc. I think it's free, but you have to discuss terms
with them.

   This is getting a bit off-topic now...

Scalabilty is hardly ever off-topic. :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Spinlocks may be broken.

2000-12-05 Thread Alfred Perlstein

I'm debugging some code here where I get problems related to
spinlocks; anyhow, while running through the files I noticed
that the UNLOCK code seems sort of broken.

What I mean is that on machines that have loosely ordered
memory models you can have problems, because data that's
supposed to be protected by the lock may not get flushed
out to main memory until after the unlock happens.

I'm pretty sure you guys need memory barrier ops.
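To make the point concrete in today's terms (C11 atomics did not exist in 2000, and this is not PostgreSQL's actual s_lock code, just a restatement of the semantics): the unlock store needs release ordering so that all writes to the protected data become visible before the lock reads as free, and the test-and-set on the other side needs acquire ordering.

```c
#include <stdatomic.h>

typedef atomic_int slock_t;

/* Test-and-set: returns the previous value; 0 means we got the lock.
 * Acquire ordering keeps protected reads from moving above the lock. */
static int tas(slock_t *lock) {
    return atomic_exchange_explicit(lock, 1, memory_order_acquire);
}

/* Unlock with release ordering: on a loosely ordered machine a plain
 * store here would be exactly the bug described above. */
static void s_unlock(slock_t *lock) {
    atomic_store_explicit(lock, 0, memory_order_release);
}
```

On x86 the release store compiles to a plain store (the hardware model is strongly ordered), which is why the problem only shows up on other architectures.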

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Need help with phys backed shm segments (Postgresql+FreeBSD).

2000-12-05 Thread Alfred Perlstein

On FreeBSD 4.1.1 and above there's a sysctl tunable called
kern.ipc.shm_use_phys; when set to 1 it's supposed to
make the kernel's handling of shared memory much more
efficient, at the expense of making the shm segment unpageable.

I tried to use this option with 7.0.3 and FreeBSD 4.2 but
for some reason spinlocks keep getting mucked up (there's
a log at the tail end of this message).

Anyone using Postgresql on FreeBSD probably wants this to work,
otherwise using extremely large chunks of shm and many backends
active can exhaust kernel memory.

I was wondering if any of the more experienced developers could
take a look at what's happening here.

Here's the log. The number in parens is the address of the lock;
on tas() the value printed to the right is the value in _ret, and
for the others, it's the value before the lock count is set.

S_INIT_LOCK: (0x30048008) - 0
S_UNLOCK: (0x30048008) - 0
S_INIT_LOCK: (0x3004800c) - 0
S_UNLOCK: (0x3004800c) - 0
S_INIT_LOCK: (0x30048010) - 0
S_UNLOCK: (0x30048010) - 0
S_INIT_LOCK: (0x30048011) - 0
S_UNLOCK: (0x30048011) - 0
S_INIT_LOCK: (0x30048012) - 0
S_UNLOCK: (0x30048012) - 0
S_INIT_LOCK: (0x30048018) - 0
S_UNLOCK: (0x30048018) - 0
S_INIT_LOCK: (0x3004801c) - 0
S_UNLOCK: (0x3004801c) - 0
S_INIT_LOCK: (0x3004801d) - 1
S_UNLOCK: (0x3004801d) - 1
S_INIT_LOCK: (0x3004801e) - 0
S_UNLOCK: (0x3004801e) - 0
S_INIT_LOCK: (0x30048024) - 127
S_UNLOCK: (0x30048024) - 127
S_INIT_LOCK: (0x30048028) - 255
S_UNLOCK: (0x30048028) - 255
S_INIT_LOCK: (0x30048029) - 0
S_UNLOCK: (0x30048029) - 0
S_INIT_LOCK: (0x3004802a) - 0
S_UNLOCK: (0x3004802a) - 0
S_INIT_LOCK: (0x30048030) - 1
S_UNLOCK: (0x30048030) - 1
S_INIT_LOCK: (0x30048034) - 0
S_UNLOCK: (0x30048034) - 0
S_INIT_LOCK: (0x30048035) - 0
S_UNLOCK: (0x30048035) - 0
S_INIT_LOCK: (0x30048036) - 0
S_UNLOCK: (0x30048036) - 0
S_INIT_LOCK: (0x3004803c) - 50
S_UNLOCK: (0x3004803c) - 50
S_INIT_LOCK: (0x30048040) - 10
S_UNLOCK: (0x30048040) - 10
S_INIT_LOCK: (0x30048041) - 0
S_UNLOCK: (0x30048041) - 0
S_INIT_LOCK: (0x30048042) - 0
S_UNLOCK: (0x30048042) - 0
S_INIT_LOCK: (0x30048048) - 1
S_UNLOCK: (0x30048048) - 1
S_INIT_LOCK: (0x3004804c) - 80
S_UNLOCK: (0x3004804c) - 80
S_INIT_LOCK: (0x3004804d) - 1
S_UNLOCK: (0x3004804d) - 1
S_INIT_LOCK: (0x3004804e) - 0
S_UNLOCK: (0x3004804e) - 0
S_INIT_LOCK: (0x30048054) - 0
S_UNLOCK: (0x30048054) - 0
S_INIT_LOCK: (0x30048058) - 1
S_UNLOCK: (0x30048058) - 1
S_INIT_LOCK: (0x30048059) - 1
S_UNLOCK: (0x30048059) - 1
S_INIT_LOCK: (0x3004805a) - 0
S_UNLOCK: (0x3004805a) - 0
S_INIT_LOCK: (0x30048060) - 0
S_UNLOCK: (0x30048060) - 0
S_INIT_LOCK: (0x30048064) - 0
S_UNLOCK: (0x30048064) - 0
S_INIT_LOCK: (0x30048065) - 0
S_UNLOCK: (0x30048065) - 0
S_INIT_LOCK: (0x30048066) - 0
S_UNLOCK: (0x30048066) - 0
S_INIT_LOCK: (0x3004806c) - 0
S_UNLOCK: (0x3004806c) - 0
S_INIT_LOCK: (0x30048070) - 0
S_UNLOCK: (0x30048070) - 0
S_INIT_LOCK: (0x30048071) - 0
S_UNLOCK: (0x30048071) - 0
S_INIT_LOCK: (0x30048072) - 0
S_UNLOCK: (0x30048072) - 0
S_INIT_LOCK: (0x30048078) - 0
S_UNLOCK: (0x30048078) - 0
S_INIT_LOCK: (0x3004807c) - 0
S_UNLOCK: (0x3004807c) - 0
S_INIT_LOCK: (0x3004807d) - 0
S_UNLOCK: (0x3004807d) - 0
S_INIT_LOCK: (0x3004807e) - 0
S_UNLOCK: (0x3004807e) - 0
tas (0x30048054) - 0
tas (0x30048059) - 0
tas (0x30048058) - 0
S_UNLOCK: (0x30048054) - 1
tas (0x30048048) - 0
tas (0x3004804d) - 0
tas (0x3004804c) - 0
S_UNLOCK: (0x30048048) - 1
tas (0x30048048) - 0
S_UNLOCK: (0x3004804c) - 1
S_UNLOCK: (0x3004804d) - 1
S_UNLOCK: (0x30048048) - 1
tas (0x30048048) - 0
tas (0x3004804d) - 0
tas (0x3004804c) - 0
S_UNLOCK: (0x30048048) - 1
tas (0x30048048) - 0
S_UNLOCK: (0x3004804c) - 1
S_UNLOCK: (0x3004804d) - 1
S_UNLOCK: (0x30048048) - 1
tas (0x30048048) - 0
tas (0x3004804d) - 4
tas (0x3004804d) - 1
tas (0x3004804d) - 1
tas (0x3004804d) - 1
tas (0x3004804d) - 1
tas (0x3004804d) - 1
tas (0x3004804d) - 1
tas (0x3004804d) - 1

repeats (it's stuck)


-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE

2000-12-05 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001205 07:14] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  Anyhow, to address the problem I've removed struct mount from
  userland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x
  (stable).
 
 That might fix things on your box, but we can hardly rely on it as an
 answer for everyone running FreeBSD :-(.
 
 Anyway, I've already worked around the problem by rearranging the PG
 headers so that plperl doesn't need to import s_lock.h ...

Well, I didn't say it was completely our fault. It's just that we
try pretty hard not to let those types of structs leak into userland,
and for us to "steal" something called s_lock from userland, well,
that's no good. :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Spinlocks may be broken.

2000-12-05 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001205 07:24] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  I'm pretty sure you guys need memory barrier ops.
 
 On a machine that requires such a thing, the assembly code for UNLOCK
 should include it.  Want to provide a patch?

My assembler is extremely rusty; you can probably find such code
in the NetBSD or Linux kernels for all the archs you want to support.
I wouldn't feel confident providing a patch, and all I have is x86
hardware.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Need help with phys backed shm segments (Postgresql+FreeBSD).

2000-12-05 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001205 07:43] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  Here's the log, the number in parens is the address of the lock,
  on tas() the value printed to the right is the value in _ret,
  for the others, it's the value before the lock count is set.
 
 This looks to be the trace of a SpinAcquire()
 (see src/backend/storage/ipc/spin.c):

Yes, those are my debug printfs :).

  tas (0x30048048) - 0
  tas (0x3004804d) - 0
  tas (0x3004804c) - 0
  S_UNLOCK: (0x30048048) - 1
 
 followed by SpinRelease():
 
  tas (0x30048048) - 0
  S_UNLOCK: (0x3004804c) - 1
  S_UNLOCK: (0x3004804d) - 1
  S_UNLOCK: (0x30048048) - 1
 
 followed by a failed attempt to reacquire the same SLock:
 
  tas (0x30048048) - 0
  tas (0x3004804d) - 4
  tas (0x3004804d) - 1
  tas (0x3004804d) - 1
  tas (0x3004804d) - 1
  tas (0x3004804d) - 1
 
 And that looks completely broken :-( ... something's clobbered the
 exlock field of the SLock struct, apparently.  Are you sure this
 kernel feature you're trying to use actually works?

No, I'm not sure, actually. :)  I'll look into it further, but I
was wondering if there was something I could do to debug the
locks better.  I think I'll add some S_MAGIC or something to
the struct to see if the whole thing is getting clobbered or
what...  If you have any suggestions, let me know.

 BTW, if you're wondering why an SLock needs to contain *three*
 hardware spinlocks, the answer is that it doesn't.  This code has
 been greatly simplified in current sources...

It did look a bit strange...

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] beta testing version

2000-12-05 Thread Alfred Perlstein

   I totaly missed your point here. How closing source of 
   ERserver is related to closing code of PostgreSQL DB server?
   Let me clear things:
  
   1. ERserver isn't based on WAL. It will work with any version >= 6.5
  
   2. WAL was partially sponsored by my employer, Sectorbase.com,
   not by PG, Inc.
  
  Has somebody thought about putting PG under the GPL licence 
  instead of the BSD? 
  PG Inc would still be able to make their money giving support 
  (just like IBM, HP and Compaq are doing their share with Linux),
  without being able to close the code.

This gets brought up every couple of months; I don't see the point
in denying any of the current Postgresql developers the chance
to make some money selling a non-freeware version of Postgresql.

We can also look at it another way: let's say ER server was meant
to be closed source.  If the code it was derived from had been GPL'd,
then that chance was gone before it even happened.  Hence no
reason to develop it.

*poof* no ER server.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] COPY BINARY is broken...

2000-12-01 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001201 14:57] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  I would rip it out.
 
 I thought about that too, but was afraid to suggest it ;-)

I think you'd agree that you have more fun and important things to
do than to deal with this yucky interface. :)

 How many people are actually using COPY BINARY?

I'm not using it. :)

How about adding COPY XML?











(kidding of course about the XML, but it would make postgresql more
buzzword compliant :) )

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE

2000-11-28 Thread Alfred Perlstein

* Larry Rosenman [EMAIL PROTECTED] [001128 20:52] wrote:
 My offer stands for you as well, if you'd like an account
 on this P-III 600E, you are welcome to one...

I just remembered my laptop in the other room; it's a pretty recent 4.2.

I'll give it shot.

Yes, it's possible to forget about a computer...
   http://people.freebsd.org/~alfred/images/lab.jpg

:)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] location of Unix socket

2000-11-17 Thread Alfred Perlstein

* Oliver Elphick [EMAIL PROTECTED] [001117 16:41] wrote:
 At present the Unix socket's location is hard-coded as /tmp.
 
 As a result of a bug report, I have moved it in the Debian package to 
 /var/run/postgresql/.  (The bug was that tmpreaper was deleting it and
 thus blocking new connections.)
 
 I suppose that we cannot assume that /var/run exists across all target
 systems, so could the socket location be made a configurable parameter
 in 7.1?

What about X sockets and ssh-agent sockets, and so on?

Where's the source to this thing? :)

It would make more sense to fix tmpreaper to ignore non-regular
files.
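The fix amounts to an lstat() check before deleting anything; a sketch (should_reap is a hypothetical helper, not tmpreaper's actual code):

```c
#include <stdio.h>
#include <sys/stat.h>

/* A reaper should lstat() each entry and skip anything that is not
 * a regular file -- sockets, FIFOs, device nodes, symlinks. */
int should_reap(const char *path)
{
    struct stat st;

    if (lstat(path, &st) != 0)
        return 0;               /* vanished; nothing to do */
    return S_ISREG(st.st_mode); /* only plain files are candidates */
}
```

With this check a Unix-domain socket such as the postmaster's would survive a sweep of /tmp.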

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c xlog.c)

2000-11-16 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [001116 11:59] wrote:
  At 02:13 PM 11/16/00 -0500, Bruce Momjian wrote:
  
   I think the default should probably be no delay, and the documentation
   on enabling this needs to be clear and obvious (i.e. hard to miss).
  
  I just talked to Tom Lane about this.  I think a sleep(0) just before
  the flush would be the best.  It would relinquish the cpu slice if
  another process is ready to run.  If no other backend is running, it
  probably just returns.  If there is another one, it gives it a chance to
  complete.  On return from sleep(0), it can check if it still needs to
  flush.  This would tend to bunch up flushers so they flush only once,
  while not delaying cases where only one backend is running.
  
  This sounds like an interesting approach, yes.
 
 In OS kernel design, you try to avoid process herding bottlenecks. 
 Here, we want them herded, and giving up the CPU may be the best way to
 do it.

Yes, but if everyone yields you're back where you started, and with
128 or more backends do you really want to cause possibly that many
context switches per fsync?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c xlog.c)

2000-11-16 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001116 13:31] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  It might make more sense to keep a private copy of the last time
  the file was modified per-backend by that particular backend and
  a timestamp of the last fsync shared globally so one can forgo the
  fsync if "it hasn't been dirtied by me since the last fsync"
  This would provide a rendezvous point for the fsync call although
  cost more as one would need to periodically call gettimeofday to
  set the modified by me timestamp as well as the post-fsync shared
  timestamp.
 
 That's the hard way to do it.  We just need to keep track of the
 endpoint of the log as of the last fsync.  You need to fsync (after
 returning from sleep()) iff your commit record position > fsync
 endpoint.  No need to ask the kernel for time-of-day.

Well, that breaks when you move to an overwriting storage manager;
however, if you use the oid instead, that optimization would survive
the change to an overwriting storage manager?
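Tom's endpoint test can be sketched roughly like this (names invented for illustration; the real state would live in shared memory under a spinlock, and the positions would be WAL log record pointers):

```c
/* Shared state (in reality: shmem, protected by a spinlock). */
static long flushed_up_to = 0;  /* log position covered by the last fsync */
static int  fsync_calls   = 0;  /* how many real fsyncs we issued */

/* Called by a backend after (conceptually) yielding the CPU:
 * fsync only if our commit record lies beyond the flushed endpoint. */
void flush_if_needed(long my_commit_pos, long current_log_end)
{
    if (my_commit_pos <= flushed_up_to)
        return;                  /* somebody else already flushed past us */
    fsync_calls++;               /* stand-in for the real fsync() */
    flushed_up_to = current_log_end;
}

int fsyncs_issued(void)
{
    return fsync_calls;
}
```

The point is that backends whose commit records were already covered by someone else's flush skip the fsync entirely, so flushers bunch up without any time-of-day calls.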

-Alfred



Re: [HACKERS] RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c xlog.c)

2000-11-16 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [001116 12:31] wrote:
   In OS kernel design, you try to avoid process herding bottlenecks. 
   Here, we want them herded, and giving up the CPU may be the best way to
   do it.
  
  Yes, but if everyone yields you're back where you started, and with
  128 or more backends do you really want to cause possibly that many
  context switches per fsync?
 
 You are going to kernel call/yield anyway to fsync, so why not try and
 if someone does the fsync, we don't need to do it.  I am suggesting
 re-checking the need for fsync after the return from sleep(0).

It might make more sense to keep a private copy of the last time
the file was modified per-backend by that particular backend, and
a timestamp of the last fsync shared globally, so one can forgo the
fsync if "it hasn't been dirtied by me since the last fsync".

This would provide a rendezvous point for the fsync call, although
it would cost more, as one would need to periodically call gettimeofday to
set the modified-by-me timestamp as well as the post-fsync shared
timestamp.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 486 Optimizations...

2000-11-15 Thread Alfred Perlstein

* Peter Eisentraut [EMAIL PROTECTED] [001115 08:15] wrote:
 
 I couldn't say I like these options, because they seem arbitrary, but
 given that it only affects the 0 univel users and the 3 bsdi users left
 (freebsd will be fixed), I wouldn't make a fuss.

BSDi still has a market niche, and they are actively porting to
more platforms.

 
 I do feel more strongly about removing '-pipe', but it's not something I'm
 going to pursue.

Why?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 486 Optimizations...

2000-11-14 Thread Alfred Perlstein

* Larry Rosenman [EMAIL PROTECTED] [001114 13:42] wrote:
 Anyone care if I build a patch to kill the -m486 type options in the
 following files:
 
 $ grep -i -- 486 *
 bsdi:  i?86)  CFLAGS="$CFLAGS -m486";;
 freebsd:CFLAGS='-O2 -m486 -pipe'
 univel:CFLAGS='-v -O -K i486,host,inline,loop_unroll -Dsvr4'
 $ pwd
 /home/ler/pg-dev/pgsql/src/template
 $

I have a patch pending for FreeBSD to support alpha builds that
also disables -m486 so if you left the freebsd template alone it
would be ok.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 486 Optimizations...

2000-11-14 Thread Alfred Perlstein

* Trond Eivind Glomsrød [EMAIL PROTECTED] [001114 13:45] wrote:
 Larry Rosenman [EMAIL PROTECTED] writes:
 
  Anyone care if I build a patch to kill the -m486 type options in the
  following files:
  
  $ grep -i -- 486 *
  bsdi:  i?86)  CFLAGS="$CFLAGS -m486";;
  freebsd:CFLAGS='-O2 -m486 -pipe'
  univel:CFLAGS='-v -O -K i486,host,inline,loop_unroll -Dsvr4'
 
 Why would you want to? Not all gccs support -mpentium/mpentiumpro etc.

The idea is to remove it entirely (I hope), not add even more
arch-specific compile flags.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 486 Optimizations...

2000-11-14 Thread Alfred Perlstein

* Larry Rosenman [EMAIL PROTECTED] [001114 13:47] wrote:
 * Alfred Perlstein [EMAIL PROTECTED] [001114 15:46]:
  * Larry Rosenman [EMAIL PROTECTED] [001114 13:42] wrote:
   Anyone care if I build a patch to kill the -m486 type options in the
   following files:
   
   $ grep -i -- 486 *
   bsdi:  i?86)  CFLAGS="$CFLAGS -m486";;
   freebsd:CFLAGS='-O2 -m486 -pipe'
   univel:CFLAGS='-v -O -K i486,host,inline,loop_unroll -Dsvr4'
   $ pwd
   /home/ler/pg-dev/pgsql/src/template
   $
  
  I have a patch pending for FreeBSD to support alpha builds that
  also disables -m486 so if you left the freebsd template alone it
  would be ok.
 I have a P-III, I don't want the template to specify it *AT ALL*. 
 (this is on FreeBSD 4.2-BETA). 

My patches set i386-FreeBSD to -O2 and alpha-FreeBSD to -O, no
worries.

 It seems like GCC does the right (or mostly right) thing without 
 the -m option

heh. :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] IRC?

2000-11-14 Thread Alfred Perlstein

I remember a few developers used to gather on efnet irc;
there was a lot of instability recently that seems to have
cleared up even more recently.

Are you guys planning on coming back?  Or have you all
moved to a different network?


-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: One more [HACKERS] 486 Optimizations...

2000-11-14 Thread Alfred Perlstein

* igor [EMAIL PROTECTED] [001114 20:46] wrote:
 Hi ,
 
 I would like to increase the performance of PG 7.02 on i486;
 where can I read about this?  Maybe there are flags for
 postgres?

Check your C compiler's manpage for the relevant optimization
flags; be aware that some compilers can emit broken code when
pushed to their highest optimization levels.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 7.0.2 dies when connection dropped mid-transaction

2000-11-11 Thread Alfred Perlstein

* The Hermit Hacker [EMAIL PROTECTED] [001109 20:19] wrote:
 On Thu, 9 Nov 2000, Tom Lane wrote:
 
  The Hermit Hacker [EMAIL PROTECTED] writes:
   Tom, if you can plug this one in the next, say, 48hrs (Saturday night),
  
  Done.  Want to generate some new 7.0.3 release-candidate tarballs?
 
 Done, and just forced a sync to ftp.postgresql.org of the new tarballs
 ... if nobody reports any probs with this by ~midnight tomorrow night,
 I'll finish up the 'release links' and get vince to add release info to
 the WWW site, followed by putting out an official announcement ...
 
 Great work, as always :)

Just wanted to confirm that we haven't experienced the bug since we
applied Tom's patch several days ago.

thanks for the excellent work!

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] cygwin gcc problem.

2000-11-11 Thread Alfred Perlstein

* Gary MacDougall [EMAIL PROTECTED] [00 11:28] wrote:
 I'm trying to compile postgresql on Windows 2000.  I've followed the directions 
accordingly.
 
 When I run the "configure" script, and I get the following error message:
 
 
 configure: error: installation or configuration problem: C compiler cannot
 create executables.
 
 If anyone has any clues, I'd greatly appreciate the assistance.

I think you need to ask on the cygwin lists.  If you're compiling
this on Windows 2000 you already need a compiler to compile it.

I would just find the binary distribution and install that.

-Alfred



Re: [HACKERS] RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c xlog.c)

2000-11-11 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [00 00:16] wrote:
  * Tatsuo Ishii [EMAIL PROTECTED] [001110 18:42] wrote:

Yes, though we can change this. We can also now implement the
feature that Bruce wanted for so long and so much -:) -
fsync the log not on each commit but every ~5 sec, if
losing some recent commits is acceptable.
   
   Sounds great.
  
  Not really, I thought an ack on a commit would mean that the data
  is actually in stable storage, breaking that would be pretty bad
  no?  Or are you only talking about when someone is running with
  async Postgresql?
 
 The default is to sync on commit, but we need to give people options of
 several seconds delay for performance reasons.  Inforimx calls it
 buffered logging, and it is used by most of the sites I know because it
 has much better performance that sync on commit.
 
 If the machine crashes five seconds after commit, many people don't have
 a problem with just re-entering the data.

We have several critical tables and running certain updates/deletes/inserts
on them in async mode worries me.  Would it be possible to add a
'set' command to force a backend into fsync mode and perhaps back
into non-fsync mode as well?

What about setting an attribute on a table that could mean
a) anyone updating me better fsync me.
b) anyone updating me better fsync me as well as fsyncing
   anything else they touch.

I swear one of these days I'm going to get more familiar with the
codebase and actually submit some useful patches for the backend.
:(

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]



Re: [HACKERS] RE: [COMMITTERS] pgsql/src/backend/access/transam ( xact.c xlog.c)

2000-11-10 Thread Alfred Perlstein

* Tatsuo Ishii [EMAIL PROTECTED] [001110 18:42] wrote:
  
  Yes, though we can change this. We can also now implement the
  feature that Bruce wanted for so long and so much -:) -
  fsync the log not on each commit but every ~5 sec, if
  losing some recent commits is acceptable.
 
 Sounds great.

Not really, I thought an ack on a commit would mean that the data
is actually in stable storage, breaking that would be pretty bad
no?  Or are you only talking about when someone is running with
async Postgresql?

Although this doesn't have an effect on my current application,
when running Postgresql with sync commits and WAL, can one expect
the old behavior, i.e. success only after data and metadata (log)
are written?

Another question I had was: what effect would a mid-fsync
crash have on a system using WAL?  Let's say someone yanks the
power while the OS is in the midst of an fsync; will all be ok?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] 7.0.2 dies when connection dropped mid-transaction

2000-11-09 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [001109 18:55] wrote:
  I guess the immediate question is do we want to hold up 7.0.3 release
  for a fix?  This bug is clearly ancient, so I'm not sure it's
  appropriate to go through a fire drill to fix it for 7.0.3.
  Comments?
 
 We have delayed 7.0.3 already.  Tom is fixing so many bugs, we may find
 at some point that Tom never stops fixing bugs long enough for us to do
 a release.  I say let's push 7.0.3 out.  We can always do 7.0.4 later if
 we wish.

I think being able to crash the backend by just dropping a connection
during a pretty trivial query is a bad thing, and it'd be more
prudent to wait.  I have no problem syncing with your CVS,
but people using RedHat RPMs and FreeBSD packages are going to wind
up with this bug if you cut the release before squashing it. :(

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]



Re: [HACKERS] 7.0.2 dies when connection dropped mid-transaction

2000-11-09 Thread Alfred Perlstein

* The Hermit Hacker [EMAIL PROTECTED] [001109 20:19] wrote:
 On Thu, 9 Nov 2000, Tom Lane wrote:
 
  The Hermit Hacker [EMAIL PROTECTED] writes:
   Tom, if you can plug this one in the next, say, 48hrs (Saturday night),
  
  Done.  Want to generate some new 7.0.3 release-candidate tarballs?
 
 Done, and just forced a sync to ftp.postgresql.org of the new tarballs
 ... if nobody reports any probs with this by ~midnight tomorrow night,
 I'll finish up the 'release links' and get vince to add release info to
 the WWW site, followed by putting out an official announcement ...
 
 Great work, as always :)

Tom rules.

*thinking freebsd port should add user tgl rather than pgsql*

:)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] VACUUM causes violent postmaster death

2000-11-03 Thread Alfred Perlstein

* Dan Moschuk [EMAIL PROTECTED] [001103 14:55] wrote:
 
 Server process (pid 13361) exited with status 26 at Fri Nov  3 17:49:44 2000
 Terminating any active server processes...
 NOTICE:  Message from PostgreSQL backend:
 The Postmaster has informed me that some other backend died abnormally and 
possibly corrupted shared memory.
 I have rolled back the current transaction and am going to terminate your 
database system connection and exit.
 Please reconnect to the database system and repeat your query.
 
 This happens fairly regularly.  I assume exit code 26 is used to dictate
 that a specific error has occurred.
 
 The database is a decent size (~3M records) with about 4 indexes.

What version of postgresql?  Tom Lane recently fixed some severe problems
with vacuum and heavily used databases; the fix should be in the latest
7.0.2-patches/7.0.3 release.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] VACUUM causes violent postmaster death

2000-11-03 Thread Alfred Perlstein

* Dan Moschuk [EMAIL PROTECTED] [001103 15:32] wrote:
 
 |  This happens fairly regularly.  I assume exit code 26 is used to dictate
 |  that a specific error has occurred.
 |  
 |  The database is a decent size (~3M records) with about 4 indexes.
 | 
 | What version of postgresql?  Tom Lane recently fixed some severe problems
 | with vacuum and heavily used databases, the fix should be in the latest
 | 7.0.2-patches/7.0.3 release.
 
 It's 7.0.2-patches from about two or three weeks ago.

Make sure pgsql/src/backend/commands/vacuum.c is at:

revision 1.148.2.1
date: 2000/09/19 21:01:04;  author: tgl;  state: Exp;  lines: +37 -19
Back-patch fix to ensure that VACUUM always calls FlushRelationBuffers.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Alpha FreeBSD port of PostgreSQL !!!

2000-11-03 Thread Alfred Perlstein

* Nathan Boeger [EMAIL PROTECTED] [001103 15:43] wrote:
 Is anyone working on the port of PostgreSQL for Alpha/FreeBSD?  I have
 been waiting for over a year very very patiently !!!
 
 I really love my Alpha FreeBSD box and I want to use PostgreSQL on it...
 but postgresql does not build.
 
 If they need a box I am more than willing to give them complete access
 to my Alpha !
 
 please let me know

Part of the problem is that Postgresql assumes FreeBSD == -m486; since
I have absolutely no 'configure/automake' clue, that's where I faltered
when initially trying to compile on FreeBSD.

I have access to a FreeBSD box through the FreeBSD project and would
like to have another shot at it, but I was hoping one of the guys
more intimate with autoconf could lend me a hand.

thanks,
-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Query cache import?

2000-10-31 Thread Alfred Perlstein

I never saw much traffic regarding Karel's work on making stored
procedures:

http://people.freebsd.org/~alfred/karel-pgsql.txt

What happened with this?  It looked pretty interesting. :(

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Re: [GENERAL] Query caching

2000-10-31 Thread Alfred Perlstein

* Steve Wolfe [EMAIL PROTECTED] [001031 13:47] wrote:
 
  (Incidentally, we've toyed around with developing a query-caching
   system that would sit between PostgreSQL and our DB libraries.

   Sounds amazing, but requires some research, I guess. However, in many
  cases one would be more than happy with cached connections. Of course,
  cached query results can be naturally added to that, but just connections
  are OK to start with. Security
 
 To me, it doesn't sound like it would be that difficult of a project, at
 least not for the likes of the PostgreSQL developers.  It also doesn't seem
 like it would really introduce any security problems, not if it were done
 inside of PostgreSQL.  Long ago, I got sidetracked from my endeavors in C,
 and so I don't feel that I'm qualified to do it.  (otherwise, I would have
 done it already. : ) )   If you wanted it done in Perl or Object Pascal, I
 could help. : )
 
 Here's a simple design that I was tossing back and forth.  Please
 understand that I'm not saying this is the best way to do it, or even a good
 way to do it.  Just a possible way to do it.  I haven't been able to give it
 as much thought as I would like to.  Here goes.
 
 
 Implementation
 

[snip]

Karel Zak [EMAIL PROTECTED] implemented stored procedures for
postgresql but still hasn't been approached about integrating them.

You can find his second attempt to get a response from the developers
here:

http://people.freebsd.org/~alfred/karel-pgsql.txt

--
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Restricting permissions on Unix socket

2000-10-31 Thread Alfred Perlstein

* Peter Eisentraut [EMAIL PROTECTED] [001031 12:57] wrote:
 I'd like to add an option or two to restrict the set of users that can
 connect to the Unix domain socket of the postmaster, as an extra security
 option.
 
 I imagine something like this:
 
 unix_socket_perm = 0660
 unix_socket_group = pgusers
 
 Obviously, permissions that don't have 6's in there don't make much sense,
 but I feel this notation is the most intuitive way for admins.
 
 I'm not sure how to do the group thing, though.  If I use chown(2) then
 there's a race condition, but doing savegid; create socket; restoregid
 might be too awkward?  Any hints?

Set your umask to 777 then go to town.
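A sketch of that approach (hypothetical code, not the actual postmaster): create the socket under umask 0777 so it is born with mode 000, then chown/chmod it to the desired group ownership afterwards; there is no window in which an unwanted user can connect, so the chown race Peter worries about disappears.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

/* Create a Unix socket that starts out with mode 000 and is only
 * opened up (here to 0660, user+group) once ownership is settled.
 * Returns the socket file's final permission bits, or -1 on error. */
int restricted_socket_mode(void)
{
    char dir[] = "/tmp/sockdemoXXXXXX";
    struct sockaddr_un addr;
    struct stat st;
    mode_t old;
    int fd, rc, mode;

    if (mkdtemp(dir) == NULL)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s/s.demo", dir);

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    old = umask(0777);           /* socket file appears with mode 000 */
    rc = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
    umask(old);
    if (rc < 0)
        return -1;

    /* a chown(addr.sun_path, -1, gid) would go here, then: */
    if (chmod(addr.sun_path, 0660) < 0)   /* user + group only */
        return -1;

    if (stat(addr.sun_path, &st) < 0)
        return -1;
    mode = (int) (st.st_mode & 0777);

    close(fd);
    unlink(addr.sun_path);
    rmdir(dir);
    return mode;
}
```

(Note that on some systems connect() ignores socket file permissions entirely, so the containing directory's mode is the more portable control.)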

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] INHERITS doesn't offer enough functionality

2000-10-18 Thread Alfred Perlstein

* Oliver Elphick [EMAIL PROTECTED] [001018 04:59] wrote:
 Bruce Momjian wrote:
Alfred Perlstein wrote:
 
 I noticed that INHERITS doesn't propagate indices.  It'd be nice
 if there was an option to do so.

Yep it would. Are you volunteering?

   
   Added to TODO:
   
  * Allow inherited tables to inherit index
 
 What is the spec for this?  
 
 Do you mean that inheriting tables should share a single index with their
 ancestors, or that each descendant should get a separate index on the
 same pattern as its ancestors'?  
 
 With the former, the inherited index could be used to enforce a primary
 key over a whole inheritance hierarchy, and would presumably make it
 easier to implement RI against an inheritance hierarchy.  Is this what
 you have in mind?

Not really, it's more of a convenience issue for me: a 'derived table'
should inherit the attributes of the 'base table' (including indices).
Having an index shared between two tables is an interesting idea but
not what I had in mind.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] The lightbulb just went on...

2000-10-17 Thread Alfred Perlstein

* Michael J Schout [EMAIL PROTECTED] [001017 08:50] wrote:
 Tom:
 
 I think I may have been seeing this problem as well.  We were getting
 crashes very often with 7.0.2 during VACUUM's if activity was going
 on to our database during the vacuum (even though the activity was 
 light).  Our solution in the meantime was to simply disable the
 applications during a vacuum to avoid any activity during the vacuum,
 and we have not had a crash on vacuum since that happened.  If this
 sounds consistent with the problem you think Alfred is having, then
 I would be willing to test your patch on our system as well.
 
 If you think it would help, feel free to send me the patch and I will
 do some testing on it for you.

I'm not sure if you've been subscribed to this list for long, but
it would have been nice if you had spoken up when I initially
reported the problems, so that the developers realized this wasn't
a completely isolated incident.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Otvet: WAL and indexes (Re: [HACKERS] WAL status todo)

2000-10-16 Thread Alfred Perlstein

* Mikheev, Vadim [EMAIL PROTECTED] [001016 09:33] wrote:
 I don't understand why WAL needs to log internal operations of any of
 the index types.  Seems to me that you could treat indexes as black
 boxes that are updated as side effects of WAL log items for heap tuples:
 when adding a heap tuple as a result of a WAL item, you just call the
 usual index insert routines, and when deleting a heap tuple as a result
 
 On recovery backend *can't* use any usual routines:
 system catalogs are not available.
 
 of undoing a WAL item, you mark the tuple invalid but don't physically
 remove it till VACUUM (thus no need to worry about its index entries).
 
 One of the purposes of WAL is immediate removing tuples 
 inserted by aborted xactions. I want make VACUUM
 *optional* in future - space must be available for
 reusing without VACUUM. And this is first, very small,
 step in this direction.

Why would vacuum become optional?  Would WAL offer an option to
not reclaim free space?  We're hoping that vacuum becomes unneeded
when postgresql is run with some flag indicating that we're
uninterested in time travel.

How much longer do you estimate until you can make it work that way?

thanks,
-Alfred



Re: [HACKERS] Re: Otvet: WAL and indexes (Re: [HACKERS] WAL status todo)

2000-10-16 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001016 09:47] wrote:
 "Mikheev, Vadim" [EMAIL PROTECTED] writes:
  I don't understand why WAL needs to log internal operations of any of
  the index types.  Seems to me that you could treat indexes as black
  boxes that are updated as side effects of WAL log items for heap tuples:
  when adding a heap tuple as a result of a WAL item, you just call the
  usual index insert routines, and when deleting a heap tuple as a result
 
  On recovery backend *can't* use any usual routines:
  system catalogs are not available.
 
 OK, good point, but that just means you can't use the catalogs to
 discover what indexes exist for a given table.  You could still create
 log entries that look like "insert indextuple X into index Y" without
 any further detail.

One thing you guys may wish to consider is selectively fsyncing
system catalogs and marking them dirty when opened for write:

postgres:  i need to write to a critical table...
opens table, marks dirty
completes operation and marks undirty and fsync

-or-

postgres:  i need to write to a critical table...
opens table, marks dirty
crash, burn, smoke (whatever)

Now you may still have the system tables broken, however the chances
of that may be significantly reduced depending on how often writes
must be done to them.

It's a hack, but depending on the amount of writes done to critical
tables it may reduce the window for these inconvenient situations
significantly.
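The mark-dirty pattern above can be sketched as follows (names invented; this is not the backend's actual relation code). The flush happens before the dirty flag is cleared, so a crash between the two steps leaves the relation flagged as suspect rather than silently inconsistent:

```c
/* Toy model of "mark dirty on open-for-write, fsync on completion". */
typedef struct {
    int dirty;        /* set while a write is in flight */
    int syncs;        /* stand-in counter for fsync() calls */
} CriticalRel;

void critical_write_begin(CriticalRel *r)
{
    r->dirty = 1;     /* persisted before the write in the real scheme */
}

void critical_write_end(CriticalRel *r)
{
    r->syncs++;       /* flush first...                       */
    r->dirty = 0;     /* ...then clear the flag, so a crash
                       * in between leaves 'dirty' set         */
}
```

Recovery would then only need to distrust relations found with the dirty flag still set.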

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Core dump

2000-10-12 Thread Alfred Perlstein

* Dan Moschuk [EMAIL PROTECTED] [001012 09:47] wrote:
 
 Sparc solaris 2.7 with postgres 7.0.2
 
 It seems to be reproducable, the server crashes on us at a rate of about
 every few hours.
 
 Any ideas?
 
 GNU gdb 4.17
 Copyright 1998 Free Software Foundation, Inc.

[snip]

 #78 0x1dd210 in elog (lev=0, 
 fmt=0x21a9b0 "Message from PostgreSQL backend:\n\tThe Postmaster has informed me 
that some other backend died abnormally and possibly corrupted shared memory.\n\tI 
have rolled back the current transaction and am going "...)
 at elog.c:312
 #79 0x1636f8 in quickdie (postgres_signal_arg=16) at postgres.c:713
 #80 signal handler called
 #81 0xff195dd4 in _poll ()
 #82 0xff14e79c in select ()
 #83 0x14df58 in s_lock_sleep (spin=18) at s_lock.c:62
 #84 0x14dfa0 in s_lock (lock=0xff270011 "ÿ", file=0x2197c8 "spin.c", line=127)
 at s_lock.c:76
 #85 0x154620 in SpinAcquire (lockid=0) at spin.c:127
 #86 0x149100 in ReadBufferWithBufferLock (reln=0x2ce4e8, blockNum=4323, 
 bufferLockHeld=1 '\001') at bufmgr.c:297

% uname -sr
SunOS 5.7

from sys/signal.h:

#define SIGUSR1 16  /* user defined signal 1 */

Are you sure you don't have any application running amok sending
signals to processes it shouldn't?  Getting a superfluous signal
seems out of place; this doesn't look like a crash or anything,
because USR1 isn't delivered by the kernel afaik.

And why are you using solaris?  *smack*

And why isn't the postmaster either blocking these signals or shutting
down cleanly on receipt of them?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: pg_dump possible fix, need testers. (was: Re: [HACKERS] pg_dump disaster)

2000-10-12 Thread Alfred Perlstein

* Tom Lane [EMAIL PROTECTED] [001012 12:14] wrote:
 Alfred Perlstein [EMAIL PROTECTED] writes:
  I'm pretty sure I know what to do now, it's pretty simple actually,
  I can examine the state of the connection, if it's in PGASYNC_COPY_IN
  then I don't grow the buffer, I inform the application that the 
  data will block; if it's not PGASYNC_COPY_IN I allow the buffer to grow
  protecting the application from blocking.
 
 From what I recall of the prior discussion, it seemed that a state-based
 approach probably isn't the way to go.  The real issue is how many
 routines are you going to have to change to deal with a three-way return
 convention; you want to minimize the number of places that have to cope
 with that.  IIRC the idea was to let pqPutBytes grow the buffer so that
 its callers didn't need to worry about a "sorry, won't block" return
 condition.  If you feel that growing the buffer is inappropriate for a
 specific caller, then probably the right answer is for that particular
 caller to make an extra check to see if the buffer will overflow, and
 refrain from calling pqPutBytes if it doesn't like what will happen.
 
 If you make pqPutByte's behavior state-based, then callers that aren't
 expecting a "won't block" return will fail (silently :-() in some states.
 While you might be able to get away with that for PGASYNC_COPY_IN state
 because not much of libpq is expected to be exercised in that state,
 it strikes me as an awfully fragile coding convention.  I think you will
 regret that choice eventually, if you make it.

It is a somewhat fragile change, but much less intrusive than anything
else I could think of.  It removes the three-way return value from
the send code for everything except the COPY IN state, where
it's the application's job to handle it.  When a blocking condition
would occur, we do what the non-blocking code does, except instead
of blocking we buffer the data in its entirety.

My main question is whether there are any cases where we might
"go nuts" sending data to the backend, with the exception of the
COPY code?

-- or --

I could make a function to check the buffer space and attempt to
flush the buffer (in a non-blocking manner) to be called from
PQputline and PQputnbytes if the connection is non-blocking.

However, since those are external functions, my question is whether you
know of any other uses for those functions besides COPY'ing
into the backend?

Since afaik I'm the only schmoe using my non-blocking stuff,
restricting the check for buffer space to non-blocking connections
wouldn't break any APIs from my PoV.

How should I proceed?

--
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] calling PQendcopy() without blocking.

2000-10-10 Thread Alfred Perlstein

At times I need to call PQendcopy; how do I determine that it won't
block waiting for output from the backend?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] more crashes with 7.0.2 from oct 4th-PATCHES

2000-10-09 Thread Alfred Perlstein

*Sigh*, still not having much fun over here:

# gdb /usr/local/pgsql/bin/postgres postgres.36561.core 
#0  0x8063b0f in nocachegetattr (tuple=0xbfbfe928, attnum=2, 
tupleDesc=0x84ca368, isnull=0xbfbfe7af "") at heaptuple.c:494
494 off = att_addlength(off, att[j]->attlen, tp + off);
(gdb) list
489  */
490 off = att_align(off, att[j]->attlen, att[j]->attalign);
491 
492 att[j]->attcacheoff = off;
493 
494 off = att_addlength(off, att[j]->attlen, tp + off);
495 }
496 
497 return (Datum) fetchatt(&(att[attnum]), tp + 
att[attnum]->attcacheoff);
498 }
(gdb) bt
#0  0x8063b0f in nocachegetattr (tuple=0xbfbfe928, attnum=2, 
tupleDesc=0x84ca368, isnull=0xbfbfe7af "") at heaptuple.c:494
#1  0x8075851 in GetIndexValue (tuple=0xbfbfe928, hTupDesc=0x84ca368, 
attOff=2, attrNums=0x84e9768, fInfo=0x0, attNull=0xbfbfe7af "")
at indexam.c:445
#2  0x80903be in FormIndexDatum (numberOfAttributes=4, 
attributeNumber=0x84e9768, heapTuple=0xbfbfe928, heapDescriptor=0x84ca368, 
datum=0x84e9518, nullv=0x84ba170 "", fInfo=0x0) at index.c:1256
#3  0x80a05e6 in vc_repair_frag (vacrelstats=0x84ba290, onerel=0x84c6788, 
vacuum_pages=0xbfbfe9d0, fraged_pages=0xbfbfe9c0, nindices=1, 
Irel=0x84ba118) at vacuum.c:1634
#4  0x809e3b9 in vc_vacone (relid=1315147913, analyze=0, va_cols=0x0)
at vacuum.c:640
#5  0x809d9ac in vc_vacuum (VacRelP=0xbfbfea60, analyze=0 '\000', va_cols=0x0)
at vacuum.c:299
#6  0x809d934 in vacuum (vacrel=0x84ba0e8 "\030", verbose=1, analyze=0 '\000', 
va_spec=0x0) at vacuum.c:223
#7  0x810ca8c in ProcessUtility (parsetree=0x84ba110, dest=Remote)
at utility.c:694
#8  0x810a44e in pg_exec_query_dest (
query_string=0x81cd370 "VACUUM verbose webhit_details_formatted;", 
dest=Remote, aclOverride=0) at postgres.c:617
#9  0x810a3a9 in pg_exec_query (
query_string=0x81cd370 "VACUUM verbose webhit_details_formatted;")
at postgres.c:562
#10 0x810b336 in PostgresMain (argc=9, argv=0xbfbff0e0, real_argc=10, 
real_argv=0xbfbffb40) at postgres.c:1588
#11 0x80f0742 in DoBackend (port=0x8464000) at postmaster.c:2009
#12 0x80f02d5 in BackendStartup (port=0x8464000) at postmaster.c:1776
#13 0x80ef4f9 in ServerLoop () at postmaster.c:1037
#14 0x80eeede in PostmasterMain (argc=10, argv=0xbfbffb40) at postmaster.c:725
#15 0x80bf3eb in main (argc=10, argv=0xbfbffb40) at main.c:93
#16 0x8063495 in _start ()
(gdb) 

Isn't there something someone can offer to help track down why this
is happening?  It only seems to happen when we enable a particular
database script, and it always happens while a pg_dump is
running in the background alongside other updates, or during a
vacuum.

All my crashes seem to happen in this file in 'nocachegetattr()';
is there anything I can do to provide more comprehensive error
reporting?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



(forw) [HACKERS] more crashes

2000-10-04 Thread Alfred Perlstein

I just wanted to repost this one more time in case developers didn't
catch it.  I have a reliable way to make postgresql crash after a
couple of hours over here and a backtrace that looks like a good
catch.

My apologies if this is one time too many; I won't be posting it again.

thanks for your time,
-Alfred

- Forwarded message from Alfred Perlstein [EMAIL PROTECTED] -

From: Alfred Perlstein [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED]
Subject: [HACKERS] more crashes
Date: Mon, 2 Oct 2000 15:17:12 -0700
Message-ID: [EMAIL PROTECTED]
User-Agent: Mutt/1.2.4i
X-Mailing-List: [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]

This time I'm pretty sure I caught the initial crash during an update:

I disabled the vacuum analyze and still got table corruption with a crash:

Two crash dumps of 7.0.2 + some patches:

 *$Header: /home/pgcvs/pgsql/src/backend/access/common/heaptuple.c,v 1.62 2000/04/12 17:14:36 momjian Exp $

Program terminated with signal 11, Segmentation fault.
Reading symbols from /usr/lib/libcrypt.so.2...done.
Reading symbols from /usr/lib/libm.so.2...done.
Reading symbols from /usr/lib/libutil.so.3...done.
Reading symbols from /usr/lib/libreadline.so.4...done.
Reading symbols from /usr/lib/libncurses.so.5...done.
Reading symbols from /usr/lib/libc.so.4...done.
Reading symbols from /usr/libexec/ld-elf.so.1...done.
#0  0x8063aa7 in nocachegetattr (tuple=0x84ae9fc, attnum=4, 
tupleDesc=0x84a6368, isnull=0x84afc20 "") at heaptuple.c:537
537 off = att_addlength(off, att[i]->attlen, tp + off);
(gdb) bt
#0  0x8063aa7 in nocachegetattr (tuple=0x84ae9fc, attnum=4, 
tupleDesc=0x84a6368, isnull=0x84afc20 "") at heaptuple.c:537
#1  0x80a027f in ExecEvalVar (variable=0x84974b0, econtext=0x84aedd8, 
isNull=0x84afc20 "") at execQual.c:314
#2  0x80a0d97 in ExecEvalExpr (expression=0x84974b0, econtext=0x84aedd8, 
isNull=0x84afc20 "", isDone=0xbfbfe6db "\001ØíJ\b+ù\021\b\004èJ\b\001")
at execQual.c:1214
#3  0x80a090a in ExecEvalFuncArgs (fcache=0x84afc38, econtext=0x84aedd8, 
argList=0x84974d8, argV=0xbfbfe6dc, 
argIsDone=0xbfbfe6db "\001ØíJ\b+ù\021\b\004èJ\b\001") at execQual.c:635
#4  0x80a09c1 in ExecMakeFunctionResult (node=0x8496a40, arguments=0x84974d8, 
econtext=0x84aedd8, isNull=0xbfbfe7db "", 
isDone=0xbfbfe75b "\b\214ç¿¿\027\016\n\bHuI\bØíJ\bÛç¿¿X\017B`\004")
at execQual.c:711
#5  0x80a0b37 in ExecEvalOper (opClause=0x8497548, econtext=0x84aedd8, 
isNull=0xbfbfe7db "") at execQual.c:902
#6  0x80a0e17 in ExecEvalExpr (expression=0x8497548, econtext=0x84aedd8, 
isNull=0xbfbfe7db "", isDone=0xbfbfe7e0 "\001É\016\b") at execQual.c:1249
#7  0x80a1011 in ExecTargetList (targetlist=0x8497fd8, nodomains=6, 
targettype=0x84aefb0, values=0x84aee48, econtext=0x84aedd8, 
isDone=0xbfbfe90b "\001,é¿¿.K\n\bPÝJ\b\214H\n\bé¿¿çA\023\b\030ÀH\b ")
at execQual.c:1511
#8  0x80a12af in ExecProject (projInfo=0x84aee20, 
isDone=0xbfbfe90b "\001,é¿¿.K\n\bPÝJ\b\214H\n\bé¿¿çA\023\b\030ÀH\b ")
at execQual.c:1721
#9  0x80a1365 in ExecScan (node=0x84add50, accessMtd=0x80a488c IndexNext)
at execScan.c:155
#10 0x80a4b2e in ExecIndexScan (node=0x84add50) at nodeIndexscan.c:288
#11 0x809fb6d in ExecProcNode (node=0x84add50, parent=0x84add50)
at execProcnode.c:272
#12 0x809ed59 in ExecutePlan (estate=0x84ae8a0, plan=0x84add50, 
operation=CMD_UPDATE, offsetTuples=0, numberTuples=0, 
direction=ForwardScanDirection, destfunc=0x84afaf0) at execMain.c:1052
#13 0x809e2ba in ExecutorRun (queryDesc=0x84ae888, estate=0x84ae8a0, 
feature=3, limoffset=0x0, limcount=0x0) at execMain.c:327
#14 0x80f92ca in ProcessQueryDesc (queryDesc=0x84ae888, limoffset=0x0, 
limcount=0x0) at pquery.c:310
#15 0x80f9347 in ProcessQuery (parsetree=0x84965d0, plan=0x84add50, 
dest=Remote) at pquery.c:353
#16 0x80f7ef0 in pg_exec_query_dest (
query_string=0x81a9370 "\nUPDATE\n  webhit_details_formatted\nSET\n  attr_hits = 
attr_hits + '1' \nWHERE\n  counter_id = '11909'\n  AND attr_type = 
'ATTR_OPERATINGSYS'\n  AND attr_name = 'win95'\n  AND attr_vers = '0'\n;", 
dest=Remote, aclOverride=0) at postgres.c:663
#17 0x80f7db9 in pg_exec_query (
query_string=0x81a9370 "\nUPDATE\n  webhit_details_formatted\nSET\n  attr_hits = 
attr_hits + '1' \nWHERE\n  counter_id = '11909'\n  AND attr_type = 
'ATTR_OPERATINGSYS'\n  AND attr_name = 'win95'\n  AND attr_vers = '0'\n;")
at postgres.c:562
#18 0x80f8d1a in PostgresMain (argc=9, argv=0xbfbff0dc, real_argc=10, 
real_argv=0xbfbffb3c) at postgres.c:1590
#19 0x80e1d06 in DoBackend (port=0x843f400) at postmaster.c:2009
#20 0x80e1899 in BackendStartup (port=0x843f400) at postmaster.c:1776
#21 0x80e0abd in ServerLoop () at postmaster.c:1037
#22 0x80e04be in PostmasterMain (argc=10,

Re: [HACKERS] Note about include files

2000-10-02 Thread Alfred Perlstein

* Peter Eisentraut [EMAIL PROTECTED] [001002 02:51] wrote:
 The file "postgres.h" (or "c.h" or "config.h", whatever is used) needs to
 be the very *first* file included by each source file.  Next time you
 touch a source file, please check that this is the case.
 
 The obvious failure mode is that if config.h redefines const, volatile, or
 inline then it will cause confusion when some system headers are included
 before and some after that definition.
 
 The slightly more esoteric problem I encountered is that when you compile
 with CC='gcc -std=c99 -pedantic' on a glibc platform (i.e., "Linux") then
 you need to define _SVID_SOURCE and _BSD_SOURCE before including any
 system header in order to get the full feature set from the headers.
 
 (Unfortunately, the flex output does not observe this rule either, so we
 can't be 100% pedantic warning safe without doing surgery on those files.)

gcc supports the '-include' directive which may be what you want.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] more crashes

2000-10-02 Thread Alfred Perlstein
772814392
(gdb) print tp
$4 = 0x5eab73d0 "\205."
(gdb) print tp+off
$7 = 0x8cbbaa08 Address 0x8cbbaa08 out of bounds
(gdb) print usecache
$8 = 0 '\000'
(gdb) print !VARLENA_FIXED_SIZE(att[i])
No symbol "VARLENA_FIXED_SIZE" in current context.
(gdb) print att[i]
$9 = 0x84a66c8
(gdb) print *(att[i]) 
$10 = {attrelid = 3518994475, attname = {
data = "attr_vers", '\000' repeats 22 times, 
alignmentDummy = 1920234593}, atttypid = 1043, 
  attdisbursion = 0.125293151, attlen = -1, attnum = 4, attnelems = 0, 
  attcacheoff = -1, atttypmod = 36, attbyval = 0 '\000', attstorage = 112 'p', 
  attisset = 0 '\000', attalign = 105 'i', attnotnull = 0 '\000', 
  atthasdef = 0 '\000'}
(gdb) print i
$11 = 3
(gdb) print *(att[0])
$12 = {attrelid = 3518994475, attname = {
data = "counter_id", '\000' repeats 21 times, 
alignmentDummy = 1853189987}, atttypid = 23, 
  attdisbursion = 0.000228356235, attlen = 4, attnum = 1, attnelems = 0, 
  attcacheoff = 0, atttypmod = -1, attbyval = 1 '\001', attstorage = 112 'p', 
  attisset = 0 '\000', attalign = 105 'i', attnotnull = 0 '\000', 
  atthasdef = 0 '\000'}
(gdb) print *(att[1])
$13 = {attrelid = 3518994475, attname = {
data = "attr_type", '\000' repeats 22 times, 
alignmentDummy = 1920234593}, atttypid = 1043, 
  attdisbursion = 0.0928893909, attlen = -1, attnum = 2, attnelems = 0, 
  attcacheoff = 4, atttypmod = 36, attbyval = 0 '\000', attstorage = 112 'p', 
  attisset = 0 '\000', attalign = 105 'i', attnotnull = 0 '\000', 
  atthasdef = 0 '\000'}
(gdb) print *(att[2])
$14 = {attrelid = 3518994475, attname = {
data = "attr_name", '\000' repeats 22 times, 
alignmentDummy = 1920234593}, atttypid = 1043, 
  attdisbursion = 0.370779663, attlen = -1, attnum = 3, attnelems = 0, 
  attcacheoff = -1, atttypmod = 36, attbyval = 0 '\000', attstorage = 112 'p', 
  attisset = 0 '\000', attalign = 105 'i', attnotnull = 0 '\000', 
  atthasdef = 0 '\000'}
(gdb) print attnum
$15 = 4
(gdb) print *(att[4])
$16 = {attrelid = 3518994475, attname = {
data = "attr_hits", '\000' repeats 22 times, 
alignmentDummy = 1920234593}, atttypid = 20, attdisbursion = 0.0573871136, 
  attlen = 8, attnum = 5, attnelems = 0, attcacheoff = -1, atttypmod = -1, 
  attbyval = 0 '\000', attstorage = 112 'p', attisset = 0 '\000', 
  attalign = 100 'd', attnotnull = 0 '\000', atthasdef = 1 '\001'}


--


I'm pretty sure this is a pg_dump that died when the first crash
happened above:

 *$Header: /home/pgcvs/pgsql/src/backend/commands/copy.c,v 1.106.2.2 2000/06/28 06:13:01 tgl Exp $


Program terminated with signal 10, Bus error.
#0  0x482a7d95 in ?? (?? )
#1  0x808c393 in CopyTo (rel=0x84e7890, binary=0 '\000', oids=0 '\000', 
fp=0x0, delim=0x8159fa9 "\t", null_print=0x8159fab "\\N") at copy.c:508
#2  0x808bf99 in DoCopy (relname=0x84930e8 "", binary=0 '\000', oids=0 '\000', 
from=0 '\000', pipe=1 '\001', filename=0x0, delim=0x8159fa9 "\t", 
null_print=0x8159fab "\\N") at copy.c:374
#3  0x80f98a3 in ProcessUtility (parsetree=0x8493110, dest=Remote)
at utility.c:262
#4  0x80f7e5e in pg_exec_query_dest (query_string=0x81a9370 "", dest=Remote, 
aclOverride=0) at postgres.c:617
#5  0x80f7db9 in pg_exec_query (query_string=0x81a9370 "") at postgres.c:562
#6  0x80f8d1a in PostgresMain (argc=9, argv=0xbfbff0bc, real_argc=10, 
real_argv=0xbfbffb1c) at postgres.c:1590
#7  0x80e1d06 in DoBackend (port=0x843f000) at postmaster.c:2009
#8  0x80e1899 in BackendStartup (port=0x843f000) at postmaster.c:1776
#9  0x80e0abd in ServerLoop () at postmaster.c:1037
#10 0x80e04be in PostmasterMain (argc=10, argv=0xbfbffb1c) at postmaster.c:725
#11 0x80aee43 in main (argc=10, argv=0xbfbffb1c) at main.c:93
#12 0x80633c5 in _start ()
(gdb) up
#1  0x808c393 in CopyTo (rel=0x84e7890, binary=0 '\000', oids=0 '\000', 
fp=0x0, delim=0x8159fa9 "\t", null_print=0x8159fab "\\N") at copy.c:508
508 string = (char *) 
(*fmgr_faddr(out_functions[i]))
(gdb) print out_functions[i]
$1 = {fn_addr = 0, fn_plhandler = 0, fn_oid = 0, fn_nargs = 0}
(gdb) print i
$2 = 2
(gdb) print isnull
$3 = 0 '\000'
(gdb) print tupDesc
No symbol "tupDesc" in current context.
(gdb) print tuple
$4 = 0x8493268
(gdb) print *tuple
$5 = {t_len = 0, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 0}, 
  t_datamcxt = 0x0, t_data = 0x0}
(gdb) print value
$6 = 1072927316
(gdb) print *value
Cannot access memory at address 0x3ff39254.
(gdb) print oids
$7 = 0 '\000'
(gdb) print binary
$8 = 0 '\000'
(gdb) print string
$9 = 0xfffc Address 0xfffc out of bounds

Now I think I have the initial spot where it all goes to pot (the
initial traceback).  I really appreciate the continued help and
pointers I've been given, and was wondering if someone could
help me out a bit more.

Sorry for being such a pain; if any other info is needed, please ask.

Thanks for your time,
-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] IF YOU'RE WORKING ON REPLICATION PLEASE CONTACT ME

2000-09-29 Thread Alfred Perlstein

* Erich [EMAIL PROTECTED] [000929 12:13] wrote:
 
 I have a very serious need for replication for my Postgres
 application.  Rather than deciding to spend $40k on a commercial DBMS
 with replication (minimum cost) I decided I would invest my money in
 hiring a contractor to add support for the feature I need into
 Postgres.  The patch that gets written will be released under the same
 license as Postgres, so hopefully it will be added to a future
 distribution.
 
 Anyway, I know that I'm not the first to want replication in Postgres,
 and I imagine someone else has probably done some work on it, or is
 currently working on it.  If that's you, please contact me, because
 it's possible that we can avoid some duplication (replication?) of
 effort.  Also, I want to make sure that this work is done in a way
 which would allow it to be integrated into a future release if at all
 possible.

Check this out, I'm sure your contribution can help realize the
replication server.

http://www.pgsql.com/press/PR_5.html

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: pg_dump possible fix, need testers. (was: Re: [HACKERS] pg_dump disaster)

2000-09-29 Thread Alfred Perlstein

* Bruce Momjian [EMAIL PROTECTED] [000929 19:30] wrote:
 Can someone remind me of where we left this?

I really haven't figured out a correct way to deal with the output
buffer.  I'll keep trying to come up with one.

-Alfred




Re: [HACKERS] pgsql is 75 times faster with my new index scan

2000-09-26 Thread Alfred Perlstein

* [EMAIL PROTECTED] [EMAIL PROTECTED] [000926 02:33] wrote:
 Hello,
 I recently spoke about extending index scan to be able
 to take data directly from index pages.
[snip]
 
 Is someone interested in this ??

Considering the speedup, I sure as hell am interested. :)

When are we going to have this?

-Alfred


