Re: [HACKERS] beta testing version

2000-12-04 Thread Don Baccus

At 11:59 PM 12/3/00 -0400, The Hermit Hacker wrote:
> the sanctity of the *core* server is *always*
>foremost in our minds, no matter what other projects we are working on ...

What happens if financially things aren't entirely rosy with your company?
The problem in taking itty-bitty steps in this direction is that you're
involving outside money interests that don't necessarily adhere to this
view.

Having taken the first steps toward a proprietary, closed source future, would
you pledge to bankrupt your company rather than accept a large capital
investment with an ROI based on proprietary extensions to the core that
might not be likely to come out of the non-tainted side of the development
house?

Would your company sign a contract to that effect with independent parties,
i.e. that it would never violate the sanctity of the *core*?  Even if it means
you go broke?  And that your investors go broke?

Or would your investors prefer you not make such a formal commitment, in order
to keep options open if things don't go well?

(in the early 80's my company received a total of $8,000,000 in pre-IPO
capital investments, so I have some experience with the expectations of investors.
It tends to make me a bit paranoid.  I'm not the only COO to have such experiences
while living the life).

What happens in two years if those investors in eRServer haven't gotten adequate
return on their investment?  Do you have a formal agreement that the source will
be released regardless?  Can the community inspect the agreement so we can judge
for ourselves whether or not this assurance is adequately backed by contract
language?

Are your agreements Open Source? :)



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-04 Thread Don Baccus

At 09:42 PM 12/3/00 -0600, Ross J. Reedstrom wrote:

>This paragraph from erserver.com:
>
>eRServer development is currently concentrating on core, universal
>functions that will enable individuals and IT professionals
>to implement PostgreSQL ORDBMS solutions for mission critical
>datawarehousing, datamining, and eCommerce requirements. These
>initial developments will be published under the PostgreSQL Open
>Source license, and made available through our sites, Certified
>Platinum Partners, and others in PostgreSQL community.
>
>led me (and many others) to believe that this was going to be a tighly
>integrated service, requiring code in the PostgreSQL core, since that's the
>normal use of 'core' around here.

Right.  This is a big source of misunderstanding.  There's still the fact
that 50% of the PG steering committee is involved in [partially] closed
source development based on PG, though.  This figure disturbs me.

50% is a lot.  It's like ... half, right?  Or did I miss something in the
conversion?

This represents a significant change from the past, where 0%, AFAIK, were
involved in closed source PG add-ons.

>Now that I know it's a completely external implementation, I feel bad about
>griping about deadlines. I _do_ wish I'd known this _design choice_ a bit
>earlier, as it impacts how I'll try to do some things with pgsql, but that's
>my own fault for over interpreting press releases and pre-announcements.

If 50% of the steering committee is to embark on such a task under a closed source
or semi-closed source development model, it would seem common courtesy to inform the
community of the facts as soon as they were decided upon.

In fact, it might seem to be common courtesy to float the notion in the community,
to gauge reaction and to build support, before finalizing such a decision.

AFAIC this arrived out of nowhere, a sort of stealth "50% of the steering committee
has decided to embark on a semi-proprietary solution to the replication problem that
you won't see as open source for [up to] two years after its completion".

That's a paradigm shift.   Whether right or wrong, there's a responsibility to
communicate the fact that 50% of the steering committee has decided to partially
abandon the open source development model for one that is (in some cases) closed
for two years and (in other cases) forever.




- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



[HACKERS] Re: beta testing version

2000-12-04 Thread xuyifeng

Hi,

How long has PG 7.1 been in beta testing? Can it be released before Christmas day?
Will PG 7.1 be able to recover the database from a system crash?
 
Thanks,
 
 XuYifeng
 
 



Re: [HACKERS] compiling pg 7.0.3 on sco 5.0.5

2000-12-04 Thread Tom Lane

"Billy G. Allie" <[EMAIL PROTECTED]> writes:
> ... The DISABLE_COMPLEX_MACRO definition was originally put in to work
> around a macro size limitation of the UnixWare 2.1 C compiler (and
> later the SCO UDK (Universal Development Kit)).  If the gnu C compiler
> is being used it should not be defined.

Hm.  Is anyone likely to still be using a version of that compiler that
still has such limitations?

I ask because we recently pulled "#define DISABLE_COMPLEX_MACRO" from
port/sco.h, on the grounds that various people were seeing more harm
than good from it.  But I'm suddenly wondering whether those people
might've been using gcc.  I wonder if

#ifndef __GNUC__
#define DISABLE_COMPLEX_MACRO
#endif

in port/sco.h would be the smart way to go.

regards, tom lane



RE: [HACKERS] SQL 'in' vs join.

2000-12-04 Thread Andrew Snow


> Now, given the two components, each with very low costs, it chooses to
> do a sequential scan on the table. I don't get it. 


Read the FAQ?

http://www.postgresql.org/docs/faq-english.html#4.23
"4.23) Why are my subqueries using IN so slow?")


- Andrew





[HACKERS] Add-on

2000-12-04 Thread pejac


I have written an application dealing with the EAN13 and EAN8 types; how can I
submit it?



Re: [HACKERS] Using Threads?

2000-12-04 Thread Karel Zak

On Mon, 27 Nov 2000, Junfeng Zhang wrote:

> Hello all,
> 
> I am new to postgreSQL. When I read the documents, I find out the Postmaster
> daemon actual spawns a new backend server process to serve a new client
> request. Why not use threads instead? Is that just for a historical reason,
> or some performance/implementation concern?

 It's partly a historical reason, but not only that.  PostgreSQL allows
 user-defined modules (functions), which means that a bad module or a bug
 in the core code crashes one backend only, while the postmaster keeps
 running.  In the thread model, a crash takes down all running backends.
 The locking method is a big difference too.

Karel
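
A minimal sketch of the process-per-connection model Karel describes (illustrative
only -- this is not the actual postmaster code, and the function names are invented):
the parent keeps accepting connections and forking, so a crash in one child leaves
the postmaster and the other backends running.

    /* Illustrative process-per-connection loop -- a sketch, not the real postmaster. */
    #include <signal.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical per-client work; a real backend would run the query loop here. */
    static void
    handle_session(int client_fd)
    {
        /* ... read queries, send results ... */
        close(client_fd);
    }

    static void
    serve_forever(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);           /* let the kernel reap exited children */

        for (;;)
        {
            int client_fd = accept(listen_fd, NULL, NULL);

            if (client_fd < 0)
                continue;

            if (fork() == 0)                /* child: one backend process per client */
            {
                handle_session(client_fd);  /* a crash or exit here hits only this child */
                _exit(0);
            }
            close(client_fd);               /* parent just keeps accepting */
        }
    }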




Re: [HACKERS] beta testing version

2000-12-04 Thread Michael Fork

Judging by the information below, taken *directly* from PostgreSQL, Inc.
website, it appears that they will be releasing all code into the main
source code branch -- with the exception of "Advanced Replication and
Distributed Information capabilities" (to which capabilities they are
referring is not made clear) which may remain proprietary for up to 24
months "in order to assist us in recovering development costs and continue
to provide funding for our other Open Source contributions."

I have interpreted this to mean that basic replication (server -> server,
server -> client, possibly more) will be available shortly for Postgres
(with the release of 7.1?) and that the more advanced features will
follow behind.  This is one of the last features that was missing from
Postgres (along with recordset-returning functions and clusters, among
others) and was holding it back from the enterprise market -- and I do
not blame PostgreSQL, Inc. one bit for withholding some of the more
advanced features to recoup their development costs.  It was *their time*
and *their money* they spent developing the *product*, and it must be
recouped for projects like this to make sense in the future (who knows,
maybe next they will implement RS-returning SPs or clusters, projects
funded with their profit from the advanced replication and
distributed information capabilities that they *may* withhold -- would
people still be whining then?)

Michael Fork - CCNA - MCP - A+ 
Network Support - Toledo Internet Access - Toledo Ohio

(http://www.pgsql.com/press/PR_5.html)
"At the moment we are limiting our test groups to our existing Platinum
Partners and those clients whose requirements include these
features." advises Jeff MacDonald, VP of Support Services. "We expect to
have the source code tested and ready to contribute to the open source
community before the middle of October. Until that time we are considering
requests from a number of development companies and venture capital groups
to join us in this process."

Davidson explains, "These initial Replication functions are important to
almost every commercial user of PostgreSQL. While we've fully funded all
of this development ourselves, we will be immediately donating these
capabilities to the open source PostgreSQL Global Development Project as
part of our ongoing commitment to the PostgreSQL community." 

http://www.erserver.com/
eRServer development is currently concentrating on core, universal
functions that will enable individuals and IT professionals to implement
PostgreSQL ORDBMS solutions for mission critical datawarehousing,
datamining, and eCommerce requirements. These initial developments will be
published under the PostgreSQL Open Source license, and made available
through our sites, Certified Platinum Partners, and others in PostgreSQL
community.

Advanced Replication and Distributed Information capabilities are also
under development to meet specific business and competitive requirements
for both PostgreSQL, Inc. and clients. Several of these enhanced
PostgreSQL, Inc. developments may remain proprietary for up to 24 months,
with availability limited to clients and partners, in order to assist us
in recovering development costs and continue to provide funding for our
other Open Source contributions. 

On Sun, 3 Dec 2000, Hannu Krosing wrote:

> The Hermit Hacker wrote:
> IIRC, this thread woke up on someone complaining about PostgreSQl inc
> promising 
> to release some code for replication in mid-october and asking for
> confirmation 
> that this is just a schedule slip and that the project is still going on
> and 
> going to be released as open source.
> 
> What seems to be the answer is: "NO, we will keep the replication code
> proprietary".
> 
> I have not seen this answer myself, but i've got this impression from
> the contents 
> of the whole discussion.
> 
> Do you know if this is the case ?
> 
> ---
> Hannu
> 











Re: [HACKERS] beta testing version

2000-12-04 Thread Don Baccus

At 07:11 AM 12/4/00 +, Thomas Lockhart wrote:

>We are offering our services and expertise to a community outside
>-hackers, as a business formed in a way that this new community expects
>to see. Nothing special or sinister here. Other than it seems to have
>raised the point that you expected each of us to be working for you,
>gratis, on projects you find compelling, using all of our available
>time, far into the future just as each of us has over the last five
>years.

No, not at all.  Working gratis is not the issue, as I made clear. There
are - despite your rather condescending statement implying otherwise -
business models that lead to revenue without abandoning open source.

I'm making a decent living following such a business model, thank
you very much.  I'm living proof that it is possible.

...

>A recent example of non-sinister change in another area is the work done
>to release 7.0.3. This is a release which would not have happened in
>previous cycles, since we are so close to beta on 7.1. But GB paid Tom
>Lane to work on it as part of *their* business plan, and he shepherded it
>through the cycle. There was no outcry from you at this presumption, and
>on this diversion of community resources for this effort. Not sure why,
>other than you chose to pick some other fight.

There's a vast difference between releasing 7.0.3 in open source form TODAY
and eRServer, which may not be released in open source form for up to two
years after it enters the market on a closed source, proprietary footing.
To suggest there is no difference, as you seem to be doing, is a hopelessly
unconvincing argument.

The fact that you seem blind to the difference is one reason why PG, Inc 
worries me (since you are a principal in the company).

The reason you heard no outcry from me in the PG 7.0.3 case is because there
*is* a difference between it and a semi-proprietary product like eRServer.
If GB had held Tom's work on PG 7.0.3 and released it only in (say) a packaged
release for purchase, saying "we'll release it to the CVS tree after we
recoup our investment", there would've been an outcry from me, bet on it.

Probably others, too...

>And no matter which fight you chose, you're wasting the time of others
>as you fight your demons.

Well, I guess I'll have to stay off my medication, otherwise my demons
might disappear.  I'm a regular miracle of medical science until I forget
to take them.



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-04 Thread Thomas Lockhart

> In fact, it might seem to be common courtesy...

An odd choice of words coming from you Don.

We are offering our services and expertise to a community outside
-hackers, as a business formed in a way that this new community expects
to see. Nothing special or sinister here. Other than it seems to have
raised the point that you expected each of us to be working for you,
gratis, on projects you find compelling, using all of our available
time, far into the future just as each of us has over the last five
years.

After your recent spewing, it irks me a little to admit that this will
not change, and that we are likely to continue to each work on OS
PostgreSQL projects using all of our available time, just as we have in
the past.

A recent example of non-sinister change in another area is the work done
to release 7.0.3. This is a release which would not have happened in
previous cycles, since we are so close to beta on 7.1. But GB paid Tom
Lane to work on it as part of *their* business plan, and he shepherded it
through the cycle. There was no outcry from you at this presumption, and
on this diversion of community resources for this effort. Not sure why,
other than you chose to pick some other fight.

And no matter which fight you chose, you're wasting the time of others
as you fight your demons.

   - Thomas



Re: AW: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ?

2000-12-04 Thread Tom Lane

Zeugswetter Andreas SB <[EMAIL PROTECTED]> writes:
>> Today I inserted (unsigned char) casts into all the  function
>> calls I could find.  This issue should be fixed as of current cvs.
>> Please try it again when you have time.

> I am sceptical of the many casts. Wouldn't the clean solution be to use
> unsigned char throughout the code?

No; see the prior discussion.

> The casts only help to avoid compiler
> warnings or errors. They do not solve the underlying problem.

You are mistaken.

regards, tom lane
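
For readers following the thread, a sketch of the underlying issue (illustrative
only, not the committed code): the <ctype.h> functions take an int that must be
representable as an unsigned char (or be EOF), so passing a plain char that happens
to be negative -- e.g. a KOI8-R byte on a platform where char is signed -- is
undefined behaviour; the cast makes the value non-negative before the call.

    #include <ctype.h>

    /* Sketch of the problem the casts address -- not the committed code. */
    int
    lower_first_byte(const char *p)
    {
        /*
         * If plain char is signed, a byte such as 0xC1 (a KOI8-R letter) becomes
         * a negative int here, and tolower() on a negative value other than EOF
         * is undefined behaviour:
         *
         *     return tolower(*p);              -- the unsafe version
         */
        return tolower((unsigned char) *p);     /* safe: argument is 0..255 */
    }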



Re: [HACKERS] Using Threads?

2000-12-04 Thread Myron Scott

I may be wrong, but I think that PGSQL is not threaded mostly due to
historical reasons.  It looks to me like the source has developed over
time such that much of it is not reentrant, with many global variables
throughout.  In addition, the parser is generated by flex, which
can be made to generate reentrant code but is still not thread safe because
global variables are used.
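
A toy illustration of the reentrancy point above (hypothetical code, not taken from
the PostgreSQL or flex sources): state kept in globals is shared by every thread,
whereas state passed in explicitly can be instantiated once per thread.

    /* Hypothetical contrast between global and reentrant scanner state. */

    /* Non-reentrant style: all callers -- and all threads -- share this state. */
    static const char *g_input = "";
    static int         g_pos;

    int
    next_char_global(void)
    {
        return g_input[g_pos] ? g_input[g_pos++] : -1;
    }

    /* Reentrant style: each thread owns its own scanner state. */
    typedef struct ScannerState
    {
        const char *input;
        int         pos;
    } ScannerState;

    int
    next_char_reentrant(ScannerState *s)
    {
        return s->input[s->pos] ? s->input[s->pos++] : -1;
    }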

That being said, I experimented with the 7.0.2 source and came up with a
multithreaded backend for PGSQL which uses Solaris threads.  It seems to
work, but I drifted very far from the original source.  I
had to hack flex to generate thread-safe code as well.  I use it as a
linked library with my own fe<->be protocol.  This ended up being much, much
more than I bargained for, and looking back I probably would not have tried
had I known any better.


Myron Scott


On Mon, 27 Nov 2000, Junfeng Zhang wrote:

> Hello all,
> 
> I am new to postgreSQL. When I read the documents, I find out the Postmaster
> daemon actual spawns a new backend server process to serve a new client
> request. Why not use threads instead? Is that just for a historical reason,
> or some performance/implementation concern?
> 
> Thank you very much.
> Junfeng
> 




Re: [HACKERS] beta testing version

2000-12-04 Thread Ron Chmara

Horst Herb wrote:
> > > Branding. Phone support lines. Legal departments/Lawsuit prevention. Figuring
> > > out how to prevent open source from stealing the thunder by duplicating
> > > features. And building a _product_.
> Oops. You didn't really mean that, did you? Could it be that there are some
> people out there thinking "let them free software fools do the hard initial
> work, once things are working nicely, we take over, add a few "secret"
> ingredients, and voila - the commercial product has been created?

That wasn't the _intended_ meaning, but I suppose that it's a related issue.

I was referring to companies expending variable amounts of time and resources
on a new closed source technology, only to have their market share shrivel up
as OSS coders rapidly duplicate their efforts and release free code or a
less expensive product.

To put it in proper context:
If the project under discussion was reverse engineered (or even clean room
"re-engineered") and released as a separate, open source, product (or
even just "free" code), the demand for the PG, Inc. software is placed at
risk.

The actual size and scope of the project is irrelevant, as determined
OSS advocates have pretty much taken on any and every viable project. It's
not really about "stealing" code efforts, any more than RedHat "stole"
Linux, or than Pg has been stealing features from other ORDBMSs... it's that
OSS is a difficult market to capture if you are selling closed source
code that can be created, or duplicated, by others.

Stronghold and Raven(?) were more successful products before the OSS
encryption efforts took off. Now anybody can build an SSL server, without
paying for licenses that used to cost thousands (I know there's the RSA
issue in this history as well, but let's be realistic about who actually
obeyed all those laws, okay?). Zend is trying to build an IDE for
PHP, but the open-source market moves fast enough that within a few
months of release, there will be clones, reverse engineered versions,
etc. SSH tried valiantly to close their code base which created
a market for OpenSSH. You see it time and again, there's a closed
version/extension/plug-in, feature, and an OSS clone gets built up
for it. GUI for sendmail? OSS now. New AIM protocols? gaim was on
it in days. New, proprietary, M$ mail software that took years to build
up, research, and develop? Give the OSS hordes a few months. New,
closed, SMB protocols? Give the samba team a few days, maybe a few
weeks.

To wrap up this point, a closed derivative (or closed new) product is
now competing against OSS pools of developers, which is much harder to
stop than a single closed source company. It's difficult to compete
on code quality or code features; you have to compete with a
*product* that is better than anything globally co-ordinated code
hackers can build themselves.

-Bop

--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land.  Your bopping may vary.



Re: [HACKERS] beta testing version

2000-12-04 Thread The Hermit Hacker

On Sun, 3 Dec 2000, Don Baccus wrote:

> At 11:59 PM 12/3/00 -0400, The Hermit Hacker wrote:
> > the sanctity of the *core* server is *always*
> >foremost in our minds, no matter what other projects we are working on ...
> 
> What happens if financially things aren't entirely rosy with your
> company? The problem in taking itty-bitty steps in this direction is
> that you're involving outside money interests that don't necessarily
> adhere to this view.
> 
> Having taken the first steps to a proprietary, closed source future,
> would you pledge to bankrupt your company rather than accept a large
> capital investment with an ROI based on proprietary extensions to the
> core that might not be likely to come out of the non-tainted side of
> the development house?

You mean sort of like Great Bridge investing in core developers?  Quite
frankly, I have yet to see anything but good come out of Tom as a result
of that, as now he has more time on his hands ... then again, maybe Outer
Joins was a bad idea? *raised eyebrow*

PgSQL is *open source* ... that means that if you don't like it, take the
code, fork off your own version if you don't like what's happening to the
current tree and build your own community *shrug*  





AW: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ?

2000-12-04 Thread Zeugswetter Andreas SB


> Today I inserted (unsigned char) casts into all the  function
> calls I could find.  This issue should be fixed as of current cvs.
> Please try it again when you have time.

I am sceptical of the many casts. Wouldn't the clean solution be to use
unsigned char throughout the code? The casts only help to avoid compiler
warnings or errors. They do not solve the underlying problem.

Andreas 



[HACKERS] Bitmap index

2000-12-04 Thread pejac

Hi,

On other RDBMSs (Oracle, etc.) there is an index type called a bitmap index that
greatly improves performance compared to a btree index for boolean-like values
(such as a sex column, which is either M or F). I would like to know if such an
index will be implemented in PostgreSQL.



Re: [HACKERS] Using Threads?

2000-12-04 Thread The Hermit Hacker

On Mon, 27 Nov 2000, Junfeng Zhang wrote:

> Hello all,
> 
> I am new to postgreSQL. When I read the documents, I find out the
> Postmaster daemon actual spawns a new backend server process to serve
> a new client request. Why not use threads instead? Is that just for a
> historical reason, or some performance/implementation concern?

Several reasons, 'historical' probably being the strongest right now
... since PostgreSQL was never designed for threading, it's about as
'un-thread-safe' as they come, and cleaning that up will/would be a
complete nightmare (should eventually be done, mind you) ...

The other is stability ... right now, if one backend drops away, for
whatever reason, it doesn't take down the whole system ... if you ran
things as one process, and that one process died, you just lost your whole
system ...





RE: [HACKERS] 8192 BLCKSZ ?

2000-12-04 Thread Don Baccus

At 10:52 AM 12/2/00 +1100, Andrew Snow wrote:
>
>
>> The cost difference between 32K vs 8K disk reads/writes are so small
>> these days when compared with overall cost of the disk operation itself,
>> that you can even measure it, well below 1%. Remember seek times
>> advertised on disks are an average.
>
>It has been said how small the difference is - therefore in my opinion it
>should remain at 8KB to maintain best average performance with all existing
>platforms.

With versions <= PG 7.0, the motivation that's been stated isn't performance
based as much as an option to let you stick relatively big chunks of text
(~40k-ish+ for lzText) in a single row without resorting to classic PG's
ugly LOB interface or something almost as ugly as the built-in LOB handler
I did for AOLserver many months ago.  The performance arguments have mostly
been of the form "it won't really cost you much and you can use rows that
are so much longer ..."

I think there's been recognition that 8KB is a reasonable default, along with
lamenting (at least on my part) that the fact that this is just a DEFAULT hasn't
been well communicated, leading many casual surveyors of DB alternatives to
believe that it is truly a hard-wired limitation -- causing PG's reputation to
suffer as a result.  One could argue that PG's reputation would've been
enhanced in past years if a 32KB block size limit rather than an 8KB block size
default had been emphasized.

But you wouldn't have to change the DEFAULT in order to make this claim!  It
would've been just a matter of emphasizing the limit rather than the default.

PG 7.1 will pretty much end any confusion.  The segmented approach used by
TOAST should work well (the AOLserver LOB handler I wrote months ago works
well in the OpenACS context, and uses a very similar segmentation scheme, so
I expect TOAST to work even better).  Users will still be able to change to
larger block sizes (perhaps a wise thing to do if a large percentage of their
data won't fit into a single PG block).  Users using the default will
be able to store rows of *awesome* length, efficiently.
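
For anyone who does want a larger block size on the 7.0.x series, the usual route
(if memory serves -- verify against your own source tree) is to edit the BLCKSZ
define before building, along these lines:

    /*
     * src/include/config.h (7.0-era layout -- check your own tree).
     * BLCKSZ must be a power of 2 and, in these releases, no larger than 32768.
     */
    #ifndef BLCKSZ
    #define BLCKSZ 32768        /* default is 8192; raise it to allow longer rows */
    #endif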




- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] redundancy and disk i/o

2000-12-04 Thread Don Baccus

At 07:30 PM 11/28/00 -0800, Sandeep Joshi wrote:
>Hi,
>I have two questions
>
>1. Is it possible to set up a set of redundant disks for a database? one
>of them being remote from the database?

If you're talking about replication, PostgreSQL, Inc. will be offering a
solution to its $19,000/yr Platinum Partners shortly.  It will be released
in open source form no more than two years after its release in proprietary
form.

Check out http://www.erserver.com for more details, and http://www.pgsql.com
for more details on the PostgreSQL, Inc. partnership program.

Locally, you can use RAID.  Are there open-source journaling filesystems that
offer filesystem-level replication out there?



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-04 Thread The Hermit Hacker

On Mon, 4 Dec 2000, Don Baccus wrote:

> >A recent example of non-sinister change in another area is the work done
> >to release 7.0.3. This is a release which would not have happened in
> >previous cycles, since we are so close to beta on 7.1. But GB paid Tom
> >Lane to work on it as part of *their* business plan, and he shepherded it
> >through the cycle. There was no outcry from you at this presumption, and
> >on this diversion of community resources for this effort. Not sure why,
> >other than you chose to pick some other fight.
> 
> There's a vast difference between releasing 7.0.3 in open source form
> TODAY and eRServer, which may not be released in open source form for
> up to two years after it enters the market on a closed source,
> proprietary footing. To suggest there is no difference, as you seem to
> be doing, is a hopelessly unconvincing argument.

Except, eRServer, the basic model, will be released Open Source, and, if
all goes as planned, in time for inclusion in contrib of v7.1 ... 





[HACKERS] EAN13 for postgresql

2000-12-04 Thread pejac

I have written an application dealing with the EAN13 and EAN8 types; how can I
submit it?





[HACKERS] Wrong FOR UPDATE lock type

2000-12-04 Thread Jan Wieck

Hi,

I'm about 99.67% sure that the lock type chosen in the
FOR UPDATE case (line 511 of parse_relation.c) should be
RowExclusiveLock instead of RowShareLock.  Actually I get
"Deadlock risk" debug messages when selecting FOR UPDATE and
then actually doing the UPDATE.

Should I change it?


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





[HACKERS] triggers and actions tree

2000-12-04 Thread Fabio Nanni

Hello all,

I am new to postgreSQL.
When I perform an action on a psql database (e.g. insert into a table),
further actions may be induced via trigger firing:
  - is it possible to know at any time the exact action chain?
  - is it possible to know at any time whether control is inside a
trigger (and which one)?
Sorry, I tried to search on www.postgresql.org but I wasn't able to
find anything useful.

These questions arise because I'm trying to keep two identical
psql databases in sync; I have audited tables and an audit trail. I'm facing the
problem of recognising which actions in the trail were due to a trigger
firing rather than explicitly commanded.

Thank you in advance
Fabio



re : Re: [HACKERS] Add-on

2000-12-04 Thread pejac

EAN13 and EAN8 are European bar code formats.

You can convert an ISBN or ISSN to EAN13.
My add-on adds a new type, can convert an ISBN to an EAN,
and calculates the key (check digit) of the EAN. Moreover, in a few days
the add-on will be able to store PNG or JPEG images of the bar codes
in a blob type.
TODO: add UPC-A, UPC-E, EAN128 and other bar code types

Best regards

PEJAC Pascal



[HACKERS] Bitmap index

2000-12-04 Thread pejac

Hi,

On other RDBMSs (Oracle, etc.) there is an index type called a bitmap index that
greatly improves performance compared to a btree index for boolean-like values
(such as a sex column, which is either M or F). I would like to know if such an
index will be implemented in PostgreSQL.

Best regards,

PEJAC Pascal



Re: [HACKERS] beta testing version

2000-12-04 Thread Vince Vielhaber

On Thu, 30 Nov 2000, Nathan Myers wrote:

> Second, the transaction log is not, as has been noted far too frequently
> for Vince's comfort, really written atomically.  The OS has promised
> to write it atomically, and given the opportunity, it will.  If you pull
> the plug, all promises are broken.

Say what?

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
 Online Campground Directory     http://www.camping-usa.com
 Online Giftshop Superstore      http://www.cloudninegifts.com
==






Re: [HACKERS] compiling pg 7.0.3 on sco 5.0.5

2000-12-04 Thread Larry Rosenman

* Tom Lane <[EMAIL PROTECTED]> [001204 09:27]:
> "Billy G. Allie" <[EMAIL PROTECTED]> writes:
> > ... The DISABLE_COMPLEX_MACRO definition was originally put in to work
> > around a macro size limitation of the UnixWare 2.1 C compiler (and
> > later the SCO UDK (Universal Development Kit)).  If the gnu C compiler
> > is being used it should not be defined.
> 
> Hm.  Is anyone likely to still be using a version of that compiler that
> still has such limitations?
> 
> I ask because we recently pulled "#define DISABLE_COMPLEX_MACRO" from
> port/sco.h, on the grounds that various people were seeing more harm
> than good from it.  But I'm suddenly wondering whether those people
> might've been using gcc.  I wonder if
> 
>   #ifndef __GNUC__
>   #define DISABLE_COMPLEX_MACRO
>   #endif
> 
> in port/sco.h would be the smart way to go.
Based on my running both CURRENT UDK and GCC on my UnixWare 7 boxes
with CURRENT sources, I think we may need to see if anyone complains. 

LER
> 
>   regards, tom lane
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [HACKERS] redundancy and disk i/o

2000-12-04 Thread Dominic J. Eidson

On Tue, 28 Nov 2000, Sandeep Joshi wrote:

> 1. Is it possible to set up a set of redundant disks for a database? one 
> of them being remote from the database?

Call IBM Global Services, and tell them you are interested in purchasing
an RS/6000 with 7133 SSA drives, one tray off-site using the fiber
extenders.

With those, you can have your drives up to 2.4 km from the server they're
connected to, while they still are local to the machine. (And you still
get 4 simultaneous reads/writes in each direction of the loop, for a total
of 160 Mbyte/sec transfer.)


-- 
Dominic J. Eidson
"Baruk Khazad! Khazad ai-menu!" - Gimli
---
http://www.the-infinite.org/  http://www.the-infinite.org/~dominic/




Re: [HACKERS] compiling pg 7.0.3 on sco 5.0.5

2000-12-04 Thread Dave Smith

Tom Lane wrote:

> "Billy G. Allie" <[EMAIL PROTECTED]> writes:
> 
>> ... The DISABLE_COMPLEX_MACRO definition was originally put in to work
>> around a macro size limitation of the UnixWare 2.1 C compiler (and
>> later the SCO UDK (Universal Development Kit)).  If the gnu C compiler
>> is being used it should not be defined.
> 
> 
> Hm.  Is anyone likely to still be using a version of that compiler that
> still has such limitations?
> 
> I ask because we recently pulled "#define DISABLE_COMPLEX_MACRO" from
> port/sco.h, on the grounds that various people were seeing more harm
> than good from it.  But I'm suddenly wondering whether those people
> might've been using gcc.  I wonder if
> 
>   #ifndef __GNUC__
>   #define DISABLE_COMPLEX_MACRO
>   #endif
> 
> in port/sco.h would be the smart way to go.
> 
>   regards, tom lane

Well, I recompiled with the stock cc shipped in the SCO development
package for OpenServer 5. It was released in '97.




RE: [HACKERS] Wrong FOR UPDATE lock type

2000-12-04 Thread Mikheev, Vadim

> I'm  about  99.67% sure that the lock type choosen in the
> FOR UPDATE case (line  511  of  parse_relation.c)  should  be
> RowExclusiveLock  instead  of  RowShareLock.   Actually I get
> "Deadlock risk" debug messages when selecting FOR UPDATE  and
> then really UPDATE.

http://www.postgresql.org/users-lounge/docs/6.5/user/x3116.htm

RowShareLock
Acquired by SELECT FOR UPDATE and LOCK TABLE for IN ROW SHARE MODE
statements. 

Conflicts with ExclusiveLock and AccessExclusiveLock modes. 

Vadim



Re: [HACKERS] redundancy and disk i/o

2000-12-04 Thread Thomas Lockhart

> If you're talking about replication, PostgreSQL, Inc. will be offering a
> solution to its $19,000/yr Platinum Partners shortly.  It will be released
> in open source form no more than two years after its release in proprietary
> form.
> Check out http://www.erserver.com for more details, and http://www.pgsql.com
> for more details on the PostgreSQL, Inc. partnership program.

Thanks Don for the reference. As you know, there will also be a "roll
your own" replication toolset contributed by PostgreSQL Inc. under the
BSD license in the PostgreSQL contrib/ directory for the 7.1 release,
assuming that this inclusion is acceptable to the community. Given the
general interest, I hope that this won't be an issue, and that the
recent flames will have died down enough to not be a continued
distraction.

  - Thomas



Re: [HACKERS] Using Threads?

2000-12-04 Thread Ross J. Reedstrom

Myron - 
Putting aside the fork/threads discussion for a moment (the reasons,
both historical and other, such as inter-backend protection, are well
covered in the archives), the work you did sounds like an interesting
experiment in code redesign. Would you be willing to release the hacked
code somewhere for others to learn from? Hacking flex to generate
thread-safe code is of itself interesting, and the question about PG and
threads comes up so often, that an example of why it's not a simple task
would be useful.

Ross

On Mon, Dec 04, 2000 at 12:20:20AM -0800, Myron Scott wrote:
> I maybe wrong but I think that PGSQL is not threaded mostly due to
> historical reasons.  It looks to me like the source has developed over
> time where much of the source is not reentrant with many global variables
> throughout.  In addition, the parser is generated by flex which
> can be made to generate reentrant code but is still not thread safe b/c
> global variables are used.
> 
> That being said, I experimented with the 7.0.2 source and came up with a
> multithreaded backend for PGSQL which uses Solaris Threads. It seems to
> work, but I drifted very far from the original source.  I
> had to hack flex to generate threadsafe code as well.  I use it as a
> linked library with my own fe<->be protocol. This ended up being much much
> more than I bargained for and looking back would probably not have tried
> had I known any better.
> 
> 
> Myron Scott
> 



[HACKERS] update on compiling postgres on sco

2000-12-04 Thread Arno A. Karner

Posting for others who may need this; hopefully the searchable mailing list works
in the future.

Commenting out the complex-macro define allowed me to compile on SCO 5.0.5 using
the UDK compiler, without the C++ stuff. But SCO's UDK solution breaks almost
everything else I compile on SCO 5.0.5. When you install their compatibility
stuff, they introduce two sets of libraries: one for SCO, one for UnixWare
compatibility. The compatibility libraries they give you DO NOT replace all the
shared libraries on the system -- notably /lib/prot.so; nothing under /lib has a
compatible library installed under /udk/usr/lib. The UDK compilers, both C and
C++, use the new libraries by default, so as soon as you need a library that is
not available under /udk/usr/lib you're screwed. There is a Skunkware version of
gcc, but it passes a non-existent option (-b) to the SCO assembler, and even the
SCO assembler uses the new UDK libs. Removing C++ on SCO does not fix the problem
as SCO says it should, and since there is no binutils ported to SCO Skunkware with
gas and the other tools, this really sucks.

I'm not sure if I can install my old SDK (licensed on SCO 5.0.2) on SCO 5.0.5, or
whether the SDK and UDK can co-exist; and how do you manage the library directories
that are searched automatically, or add the -Xo option that SCO says will allow
stuff to compile with less strict error checking? With that Windows-model approach,
stuff compiles and links but may or may not work, or has unknown problems. Bottom
line from my testing: SCO's UDK on OpenServer 5.0.5 breaks all open source code, or
at least most of what I want to use.

Advice for future people who want to use SCO OpenServer: skip the UDK and its
compatibility layer. For those wishing to use the backwards compatibility from
UnixWare to SCO OpenServer, expect conflicts and unsupported libraries; link
static and pray.

I hope like hell Caldera slaps some sense into SCO fast, or at least ports
binutils and gcc; then maybe you could just license the header files for 100 bucks
instead of buying crappy compiler technology for 500 bucks when all you want is
the header files.
-- 
My opinions are my own and not that of my employer even if I am self
employed
Tech Net, Inc.--FREE THE MACHINES-- 
651.224.2223
627 Palace Ave. #2   [EMAIL PROTECTED]  
[EMAIL PROTECTED]
St. Paul, MN 55102-3517www.tnss.com 
wanted: adsl/cable modem with static ip at reasonable price
accept-txt: us-ascii,html,pdf
accept-dat: ascii-delimited,sql insert statments



Re: [HACKERS] compiling pg 7.0.3 on sco 5.0.5

2000-12-04 Thread Peter Eisentraut

Tom Lane writes:

> I ask because we recently pulled "#define DISABLE_COMPLEX_MACRO" from
> port/sco.h, on the grounds that various people were seeing more harm
> than good from it.  But I'm suddenly wondering whether those people
> might've been using gcc.

We can be fairly certain that they weren't, unless GCC started accepting
SCO's compiler flags (or someone altered the compiler flags and filed a
*very* incomplete bug report).

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




AW: AW: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ?

2000-12-04 Thread Zeugswetter Andreas SB


> > I am sceptical of the many casts. Wouldn't the clean solution be to use
> > unsigned char throughout the code?
> 
> No; see the prior discussion.
> 
> > The casts only help to avoid compiler
> > warnings or errors. They do not solve the underlying problem.
> 
> You are mistaken.

You are of course correct that they might solve the particular underlying problem;
sorry, I did not actually read or verify the committed code.
But don't they in general obfuscate cases where the callee does want
unsigned/signed chars?

My assumption would be that we need [un]signed char casts for library functions,
but we should not need them for internal code, no? What is actually the reason
to have both in PostgreSQL code?

My concern stems from a very bad experience with wrong signedness of chars
on AIX.

Andreas



Re: [HACKERS] Postgresql on dynix/ptx system

2000-12-04 Thread Peter Eisentraut

Radek Fleks writes:

> I'm compiling (well, trying to compile) the latest version of PostgreSQL on a
> Sequent Dynix/ptx 4.4.7 system. The compilation process with gcc (2.7.2,
> ported to dynix/ptx) is reporting several errors.

It's not so interesting at this point to port PostgreSQL 7.0.*, given that
PostgreSQL 7.1 should go beta sometime, er, this year.  If you want to
port 7.1 then you should be looking into the following files and/or
directories for platform specific stuff:

configure.in
src/template
src/makefiles
src/include/port
src/Makefile.shlib
src/backend/port/dynloader
src/include/storage/s_lock.h
src/backend/storage/buffer/s_lock.c

Once you have gotten past the fact that configure will complain about your
system not being supported (which you should fix in configure.in and
re-run autoconf), showing actual compiler output will help.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [HACKERS] Using Threads?

2000-12-04 Thread The Hermit Hacker


If we were to do this in steps, I believe that one of the major problems
right now is that we have global variables up the wazoo ... my
'thread-awareness' is limited, as I've yet to use them, so excuse my
ignorance ... if we got patches that cleaned up the code in stages, moving
towards a cleaner code base, then we could get it into the main source
tree ... ?

 On Mon, 4 Dec 2000, Ross J. Reedstrom wrote:

> Myron - 
> Putting aside the fork/threads discussion for a moment (the reasons,
> both historical and other, such as inter-backend protection, are well
> covered in the archives), the work you did sounds like an interesting
> experiment in code redesign. Would you be willing to release the hacked
> code somewhere for others to learn from? Hacking flex to generate
> thread-safe code is of itself interesting, and the question about PG and
> threads comes up so often, that an example of why it's not a simple task
> would be useful.
> 
> Ross
> 
> On Mon, Dec 04, 2000 at 12:20:20AM -0800, Myron Scott wrote:
> > I maybe wrong but I think that PGSQL is not threaded mostly due to
> > historical reasons.  It looks to me like the source has developed over
> > time where much of the source is not reentrant with many global variables
> > throughout.  In addition, the parser is generated by flex which
> > can be made to generate reentrant code but is still not thread safe b/c
> > global variables are used.
> > 
> > That being said, I experimented with the 7.0.2 source and came up with a
> > multithreaded backend for PGSQL which uses Solaris Threads. It seems to
> > work, but I drifted very far from the original source.  I
> > had to hack flex to generate threadsafe code as well.  I use it as a
> > linked library with my own fe<->be protocol. This ended up being much much
> > more than I bargained for and looking back would probably not have tried
> > had I known any better.
> > 
> > 
> > Myron Scott
> > 
> 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




[HACKERS] [Fwd: Re: [CORE] Going Beta on Monday ...]

2000-12-04 Thread Thomas Lockhart

> I browsed through the CVS logs and made this list of the important
> stuff.  There's a ton of less important stuff...

Shall we consider this the start of the list then? I think there may be
a couple of things already mentioned in the release note stubs for 7.1
too.

 - Thomas

Additional items:

AT TIME ZONE clause for date/time types
OVERLAPS operator support rewritten

WAL --- fsync reliability without the performance hit
TOAST --- 8K row limit is no longer significant
outer joins (per SQL92 syntax, not Oracle's)
subselects in FROM clause
views and subselects now allow union/intersect/except, order by, limit
views containing grouping, aggregates, DISTINCT work now
bit-string types work now
function manager overhaul: fixes portability problems, NULL-argument
handling
memory management overhaul: prevent memory leak accumulation during
queries
drop table and rename table are now rollback-able (transaction-safe)
extensive overhaul of configure/build mechanism
overhaul of parameter-setting mechanisms (postmaster flags,
postmaster.opts,
etc)
more efficient large-object implementation
pg_dump can dump large objects now
pg_dump does the right thing with user-added objects in template1
support for binding postmaster's IP socket to a virtual host name
support for placing postmaster's Unix socket file elsewhere than /tmp
keep reference counts on syscache entries to avoid dropping still-used
entries
Protect against changes in LOCALE environment causing corrupted indexes
better handling of unknown-type literals (default to string type more
readily)
inet/cidr datatypes cleaned up
LIKE/ESCAPE implemented, also ILIKE (case-insensitive LIKE)
aggregate-function support redesigned: only one transition function now,
cleaner handling of NULLs
STDDEV() and VARIANCE() aggregates added
SUM() and AVG() on integer datatypes use NUMERIC accumulators
Child tables are now scanned by default -- ie, if foo has children then
SELECT FROM foo means SELECT FROM foo*.  Ditto for UPDATE and DELETE.
Use SELECT FROM ONLY foo if you don't want this behavior.
vacuum analyze does the analyze part without holding exclusive lock



Re: [HACKERS] Wrong FOR UPDATE lock type

2000-12-04 Thread Jan Wieck

Mikheev, Vadim wrote:
> > I'm  about  99.67% sure that the lock type choosen in the
> > FOR UPDATE case (line  511  of  parse_relation.c)  should  be
> > RowExclusiveLock  instead  of  RowShareLock.   Actually I get
> > "Deadlock risk" debug messages when selecting FOR UPDATE  and
> > then really UPDATE.
>
> http://www.postgresql.org/users-lounge/docs/6.5/user/x3116.htm
>
> RowShareLock
> Acquired by SELECT FOR UPDATE and LOCK TABLE for IN ROW SHARE MODE
> statements.
>
> Conflicts with ExclusiveLock and AccessExclusiveLock modes.

Tom,

IIRC the "Deadlock risk" debug message is from you.  I think
it must get a little smarter.  IMHO an application that wants
to UPDATE something in a transaction but must SELECT the
row(s) first to do its own calculation on them should use
SELECT FOR UPDATE.  Is that debug output really appropriate in
this case (it raises from RowShareLock to RowExclusiveLock
because of the UPDATE of the previously FOR UPDATE selected
row)?


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





Re: [HACKERS] [Fwd: Re: [CORE] Going Beta on Monday ...]

2000-12-04 Thread Larry Rosenman

Macaddr manufacturer table update now in SQL table
syslog configurability improvements.
* Thomas Lockhart <[EMAIL PROTECTED]> [001204 13:42]:
> > I browsed through the CVS logs and made this list of the important
> > stuff.  There's a ton of less important stuff...
> 
> Shall we consider this the start of the list then? I think there may be
> a couple of things already mentioned in the release note stubs for 7.1
> too.
> 
>  - Thomas
> 
> Additional items:
> 
> AT TIME ZONE clause for date/time types
> OVERLAPS operator support rewritten
> 
> WAL --- fsync reliability without the performance hit
> TOAST --- 8K row limit is no longer significant
> outer joins (per SQL92 syntax, not Oracle's)
> subselects in FROM clause
> views and subselects now allow union/intersect/except, order by, limit
> views containing grouping, aggregates, DISTINCT work now
> bit-string types work now
> function manager overhaul: fixes portability problems, NULL-argument
> handling
> memory management overhaul: prevent memory leak accumulation during
> queries
> drop table and rename table are now rollback-able (transaction-safe)
> extensive overhaul of configure/build mechanism
> overhaul of parameter-setting mechanisms (postmaster flags,
> postmaster.opts,
>   etc)
> more efficient large-object implementation
> pg_dump can dump large objects now
> pg_dump does the right thing with user-added objects in template1
> support for binding postmaster's IP socket to a virtual host name
> support for placing postmaster's Unix socket file elsewhere than /tmp
> keep reference counts on syscache entries to avoid dropping still-used
> entries
> Protect against changes in LOCALE environment causing corrupted indexes
> better handling of unknown-type literals (default to string type more
> readily)
> inet/cidr datatypes cleaned up
> LIKE/ESCAPE implemented, also ILIKE (case-insensitive LIKE)
> aggregate-function support redesigned: only one transition function now,
>   cleaner handling of NULLs
> STDDEV() and VARIANCE() aggregates added
> SUM() and AVG() on integer datatypes use NUMERIC accumulators
> Child tables are now scanned by default -- ie, if foo has children then
>   SELECT FROM foo means SELECT FROM foo*.  Ditto for UPDATE and DELETE.
>   Use SELECT FROM ONLY foo if you don't want this behavior.
> vacuum analyze does the analyze part without holding exclusive lock
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [HACKERS] beta testing version

2000-12-04 Thread Don Baccus

At 02:47 PM 12/1/00 -0500, Tom Lane wrote:

>All we can do is the best we can ;-).  In that light, I think it's
>reasonable for Postgres to proceed on the assumption that fsync does
>what it claims to do, ie, all blocks are written when it returns.
>We can't realistically expect to persuade a disk controller that
>reorders writes to stop doing so.  We can, however, expect that we've
>minimized the probability of failures induced by anything other than
>disk hardware failure or power failure.

Right.  This is very much the guarantee that RAID (non-zero) makes, 
except "other than disk hardware failure" is replaced by "other than
the failure of two drives".  RAID gives you that (a very, very substantial
boost, which is why it is so popular for DB servers).  It doesn't give
you power failure assurance, for much the same reason that PG (or Oracle,
etc) can't.

If transaction processing alone could give you protection against a 
single disk hardware failure, Oracle wouldn't've bothered implementing
mirroring in the past before software (and even reasonable hardware)
RAID was available.

Likewise, if mirroring + transaction processing could protect against
disks hosing themselves in power failure situations, Oracle wouldn't 
suggest that enterprise-level customers invest in external disk
subsystems with battery backup sufficient to guarantee that everything
the db server believes has been written really is written.

Of course, Oracle license fees are high enough that proper hardware
support tends to look cheap in comparison...

Vadim's WAL code is excellent, and the fact that we run in essence
with -F performance and also less write activity to the disk both
increases performance, and tends to lessen the probability that the
disk will actually be writing a block when the power goes off.  The
dice aren't quite so loaded against the server with this lowered
disk activity...
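
As an aside for readers new to the WAL discussion, the ordering being relied on
looks roughly like this (an illustrative sketch, not Vadim's actual code): the log
record describing a change is forced to disk with fsync() before the data pages it
describes may be written.

    /* Illustrative write-ahead ordering -- a sketch, not the actual WAL code. */
    #include <stddef.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    commit_record(int wal_fd, const void *rec, size_t len)
    {
        /* 1. Append the log record describing the change. */
        if (write(wal_fd, rec, len) != (ssize_t) len)
            return -1;

        /*
         * 2. Force it to stable storage.  Everything hinges on fsync() really
         *    meaning "on the platter" -- a disk or controller that lies about
         *    write completion can still lose the record on power failure.
         */
        if (fsync(wal_fd) != 0)
            return -1;

        /* 3. Only now may the data pages the record describes be written out. */
        return 0;
    }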



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



Re: [HACKERS] beta testing version

2000-12-04 Thread bpalmer

Can we PLEASE kill this thread?  There are only a handful of people who
are making contributions here and nothing really new is being said.  I
agree that the issue should be discussed,  but this does not seem like the
right forum.

Thanks.
- brandon


b. palmer,  [EMAIL PROTECTED]
pgp:  www.crimelabs.net/bpalmer.pgp5




Re: [HACKERS] Wrong FOR UPDATE lock type

2000-12-04 Thread Tom Lane

Jan Wieck <[EMAIL PROTECTED]> writes:
> I'm  about  99.67% sure that the lock type choosen in the
> FOR UPDATE case (line  511  of  parse_relation.c)  should  be
> RowExclusiveLock  instead  of  RowShareLock.   Actually I get
> "Deadlock risk" debug messages when selecting FOR UPDATE  and
> then really UPDATE.

> Should I change it?

Not sure, but if you do change it, that's *not* the only place.  I coded
that as RowShareLock because that was what was getting grabbed by the
executor for SELECT FOR UPDATE.  I believe the rewriter may need to be changed
as well, since it can also be the first grabber of a lock for a rel.

Note also that the docs say SELECT FOR UPDATE gets RowShareLock!

The "deadlock risk" message is not very bright, and I wouldn't suggest
changing the code just because of that.  I'm not even sure I want to
leave that check in the release version ...

regards, tom lane



Re: [HACKERS] Using Threads?

2000-12-04 Thread Tom Lane

The Hermit Hacker <[EMAIL PROTECTED]> writes:
>> Why not use threads instead? Is that just for a
>> historical reason, or some performance/implementation concern?

> Several reasons, 'historical' probably being the strongest right now
> ... since PostgreSQL was never designed for threading, its about as
> 'un-thread-safe' as they come, and cleaning that up will/would be a
> complete nightmare (should eventually be done, mind you) ...

> The other is stability ... right now, if one backend drops away, for
> whatever reason, it doesn't take down the whole system ... if you ran
> things as one process, and that one process died, you just lost your whole
> system ...

Portability is another big reason --- using threads would create lots
of portability headaches for platforms that had no threads or an
incompatible threads library.  (Not to mention buggy threads libraries,
not-quite-thread-safe libc routines, yadda yadda.)

The amount of work required looks far out of proportion to the payoff...

regards, tom lane



Re: [HACKERS] Using Threads?

2000-12-04 Thread Bruce Guenter

On Mon, Nov 27, 2000 at 11:42:24PM -0600, Junfeng Zhang wrote:
> I am new to postgreSQL. When I read the documents, I find out the Postmaster
> daemon actual spawns a new backend server process to serve a new client
> request. Why not use threads instead? Is that just for a historical reason,
> or some performance/implementation concern?

Once all the questions regarding "why not" have been answered, it would
be good to also ask "why use threads?"  Do they simplify the code?  Do
they offer significant performance or efficiency gains?  What do they
give, other than being buzzword compliant?
-- 
Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/



Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ?

2000-12-04 Thread Oleg Bartunov

On Sun, 3 Dec 2000, Tom Lane wrote:

> Date: Sun, 03 Dec 2000 18:13:47 -0500
> From: Tom Lane <[EMAIL PROTECTED]>
> To: Oleg Bartunov <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ? 
> 
> Oleg Bartunov <[EMAIL PROTECTED]> wrote a couple months ago:
>  It's clear that we must use 'unsigned char' instead of 'char'
>  and corrected version runs ok on both systems. That's why I suspect
>  that gcc 2.95.2 has different default under FreeBSD which could
>  cause problem with LC_CTYPE in 7.0.2 
> 
> > ok. will check this. I've recompile 7.0.2 on freebsd with -funsigned-char
> > and the problem has gone away. This prove my suggestion. I also 
> > checked 6.5 and it has the same probelm on FreeBSD. Also,
> > this makes clear many complains about broken locale under FreeBSD
> > I got from people. 
> > Hmm, current cvs has the same problem :-(
> 
> Today I inserted (unsigned char) casts into all the  function
> calls I could find.  This issue should be fixed as of current cvs.
> Please try it again when you have time.
> 

Just tried on FreeBSD 3.4-STABLE, current cvs, gcc version 2.95.2 
19991024 (release), ru-RU.KOI8-R locale, postgresql configured with
--enable-locale, no gcc option like -funsigned-char.
Looks like your changes did the right job!

regards,

Oleg


>   regards, tom lane
> 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




Re: AW: AW: [HACKERS] broken locale in 7.0.2 without multibyte support (FreeBSD 4.1-RELEASE) ?

2000-12-04 Thread Tom Lane

Zeugswetter Andreas SB <[EMAIL PROTECTED]> writes:
> But don't they in general obfuscate cases where the callee does want
> unsigned/signed chars ?

Well, it's ugly, but I don't think we have much choice.  Seems to me
that changing to "unsigned char" throughout the backend would obfuscate
things *more* than coding <ctype.h> calls as

    char   *p;
    ...
    x = tolower((unsigned char) *p);

which is what I actually did.

There are lots of places where "char" variables are used that will never
see a ctype call.  Do we institute a coding rule that plain "char"
is verboten in *all* cases, whether or not they're relevant to ctype
calls?  If not, how do we check that "char" is being used safely?
Aren't we likely to get compiler warnings from passing "unsigned char *"
to libc functions that are declared to take plain "char *"?

I don't think that path is an improvement over a coding rule that ctype
functions must be applied to unsigned chars.  IMHO the latter is less
intrusive overall, and no harder to check for violations.

> My concern stems from a very bad experience with wrong signedness of chars
> on AIX.

I agree that this is something we'll have to watch.  I don't see any
cleaner answer, though.

regards, tom lane



Re: [HACKERS] Using Threads?

2000-12-04 Thread Junfeng Zhang

All the major operating systems should have POSIX threads implemented.
Actually this can be configurable--multithreads or one thread.

A thread-only server is unsafe, I agree. Maybe the following model can be a
little better: several servers, each multi-threaded. Every server can
support a maximum number of requests simultaneously. If anything bad
happens, it is limited to that server. 

The downside of the process model is not the startup time. It is about
kernel resources and context-switch cost. Processes consume much more
kernel resource than threads, and have a much higher cost for context
switches. The scalability of the thread model is much better than that
of the process model.

-Junfeng

On Mon, 4 Dec 2000, Thomas Lockhart wrote:

> > I am new to postgreSQL. When I read the documents, I find out the Postmaster
> > daemon actually spawns a new backend server process to serve a new client
> > request. Why not use threads instead? Is that just for a historical reason,
> > or some performance/implementation concern?
> 
> Both. Not all systems supported by PostgreSQL have a standards-compliant
> threading implementation (even more true for the systems PostgreSQL has
> supported over the years).
> 
> But there are performance and reliability considerations too. A
> thread-only server is likely more brittle than a process-per-client
> implementation, since all threads share the same address space.
> Corruption in one server might more easily propagate to other servers.
> 
> The time to start a backend is quite often small compared to the time
> required for a complete session, so imho the differences in absolute
> speed are not generally significant.
> 
>- Thomas
> 




Re: postgres docs (was Re: [HACKERS] Crash during WAL recovery?)

2000-12-04 Thread Peter Eisentraut

Norman Clarke writes:

> I am interested in volunteering some time to helping with the documentation

Good.  Not sure exactly what you want to do, but we need help in just
about every area, including proof-reading/copy-editing sort of stuff,
markup/consistency improvements, verification of examples, trying out the
outlined procedures from the point of view of a naïve user, rewriting old
stuff, documenting new stuff, etc.

Since we're going beta any minute now the primary focus would currently be
on getting everything completed and updated, rather than undertaking major
rewrites.

The Developer's Guide, which should be found at or near
www.postgresql.org/devel-corner/docs, has an appendix that explains how the
documentation is handled.  Contributions are accepted even if you don't
completely understand DocBook or don't want to bother installing the
tools.  (OTOH, it's very rewarding to have installed the tools and to have
understood DocBook. :-))

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




[HACKERS] Debian build failing...

2000-12-04 Thread Thomas Good

Hi.  Could any kind soul tell me what's amiss here?  I'm trying
to build pg7.0.3 on a friend's box - over the net.
Kind of like driving from the backseat.  ;-)

My src builds but the linker barfs with:

make[2]: Leaving directory `/usr/local/postgresql-7.0.3/src/backend/utils'
gcc -I../include -I../backend   -O2 -Wall -Wmissing-prototypes -Wmissing-declarations 
-o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o 
executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o parser/SUBSYS.o 
nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o 
rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o  ../utils/version.o 
-lcrypt -lnsl -ldl -lm -lutil -lncurses  -export-dynamic
/usr/lib/libdl.a(dlsym.o): In function `doit.2':
dlsym.o(.text+0x22): undefined reference to `_dl_default_scope'
dlsym.o(.text+0x4c): undefined reference to `_dl_default_scope'
make[1]: *** [postgres] Error 1
make[1]: Leaving directory `/usr/local/postgresql-7.0.3/src/backend'
make: *** [all] Error 2

TIA,
Tom



   SVCMC - Center for Behavioral Health  

Thomas Good  tomg@ { admin | q8 } .nrnet.org
IS Coordinator / DBA Phone: 718-354-5528 
 Fax:   718-354-5056  

Powered by:  PostgreSQL s l a c k w a r e  FreeBSD:
   RDBMS   |-- linux  The Power To Serve






Re: [HACKERS] Wrong FOR UPDATE lock type

2000-12-04 Thread Tom Lane

Jan Wieck <[EMAIL PROTECTED]> writes:
> Tom,
> IIRC  the  "Deadlock risk" debug message is from you. I think
> it must get a little smarter. IMHO an application that wants
> to  UPDATE  something  in  a  transaction but must SELECT the
> row(s) first to do its own calculation on them,  should  use
> SELECT FOR UPDATE. Is that debug output really appropriate in
> this case (it raises from  RowShareLock  to  RowExclusiveLock
> because  of  the  UPDATE  of the previous FOR UPDATE selected
> row)?

Well, there is a theoretical chance of deadlock --- not against other
transactions doing the same thing, since RowShareLock and
RowExclusiveLock don't conflict, but you could construct deadlock
scenarios involving other transactions that grab ShareLock or
ShareRowExclusiveLock.  So I don't think it's appropriate for the
"deadlock risk" check to ignore RowShareLock->RowExclusiveLock
upgrades.

But I'm not sure the check should be enabled in production releases
anyway.  I just put it in as a quick and dirty debug check.  Perhaps
it should be under an #ifdef that's not enabled by default.

regards, tom lane



Re: [HACKERS] Using Threads?

2000-12-04 Thread Adam Haberlach

On Mon, Dec 04, 2000 at 02:28:10PM -0600, Bruce Guenter wrote:
> On Mon, Nov 27, 2000 at 11:42:24PM -0600, Junfeng Zhang wrote:
> > I am new to postgreSQL. When I read the documents, I find out the Postmaster
> > daemon actually spawns a new backend server process to serve a new client
> > request. Why not use threads instead? Is that just for a historical reason,
> > or some performance/implementation concern?
> 
> Once all the questions regarding "why not" have been answered, it would
> be good to also ask "why use threads?"  Do they simplify the code?  Do
> they offer significant performance or efficiency gains?  What do they
> give, other than being buzzword compliant?

Typically (on a well-written OS, at least), the spawning of a thread
is much cheaper than the creation of a new process (via fork()).  Also,
since everything in a group of threads (I'll call 'em a team) shares the
same address space, there can be some memory overhead savings.

-- 
Adam Haberlach   |"California's the big burrito, Texas is the big
[EMAIL PROTECTED]  | taco ... and following that theme, Florida is
http://www.newsnipple.com| the big tamale ... and the only tamale that 
'88 EX500| counts any more." -- Dan Rather 



[HACKERS] INSERT INTO ... SELECT problem

2000-12-04 Thread Alex Perel

Hi everyone,

I've recently encountered a bizarre problem that manifests itself reliably
on my running copy of postgres. I have a system set up to track IPs. The 
arrangement uses two mutually-exclusive buckets, one for free IPs and
the other for used ones. There are rules set up on the used pool to
remove IPs from the free on insert, and re-add them on delete.

The structure of the tables is:

CREATE TABLE "ips_free" (
"block_id" int4 NOT NULL,
"ip" inet NOT NULL,
"contact_id" int4,
"alloc_type" int4,
PRIMARY KEY ("block_id", "ip")
);

CREATE TABLE "ips_used" (
"block_id" int4 NOT NULL,
"ip" inet NOT NULL,
"contact_id" int4,
"alloc_type" int4,
PRIMARY KEY ("block_id", "ip")
);

The applicable rule that acts on insert to ips_used is:

CREATE RULE ip_allocated_rule AS 
 ON INSERT 
 TO ips_used
 DO DELETE FROM ips_free
  WHERE ips_free.block_id = NEW.block_id
AND ips_free.ip   = NEW.ip;

When I tried to minimize the total number of queries in a data load, I
tried to get the block ID (see above for the schema definition) using
INSERT INTO ... SELECT. A query like

INSERT INTO ips_used 
  (
   block_id,
   ip,
   contact_id
  )
SELECT block_id
 , ip
 , '1000'
  FROM ips_free
 WHERE ip = '10.10.10.10'

simply returns with "INSERT 0 0" and in fact removes the IP from the
free bucket without adding it to the USED bucket. I really can't
explain this behavior and I'm hoping someone can shed a little bit of
light on it. 

I am running PostgreSQL 7.0.0 on sparc-sun-solaris2.7, compiled by gcc 2.95.2

Thanks


Alex


-- 
  Alex G. Perel  -=-  AP5081
[EMAIL PROTECTED]  -=-  [EMAIL PROTECTED]
   play  -=-  work  
 
Disturbed Networks - Powered exclusively by FreeBSD
== The Power to Serve -=- http://www.freebsd.org/ 




Re: [HACKERS] Using Threads?

2000-12-04 Thread Dan Lyke

Adam Haberlach writes:
> Typically (on a well-written OS, at least), the spawning of a thread
> is much cheaper than the creation of a new process (via fork()).

This would be well worth testing on some representative sample
systems.

Within the past year and a half at one of my gigs some coworkers did
tests on various platforms (Irix, Solaris, a few variations of Linux
and *BSDs) and concluded that in fact the threads implementations were
often *slower* than using processes for moving and distributing the
sorts of data that they were playing with.

With copy-on-write, and interprocess pipes that run at roughly
memcpy() speeds, it was determined for that application that the
best way to split up tasks was fork()ing and dup().

As always, your mileage will vary, but the one thing that consistently
amazes me on Un*x-like operating systems is that usually the
programmatically simplest way to implement something has been
optimized all to heck.

A lesson that comes hard to those of us who grew up on MS systems.

Dan



Re: [HACKERS] 8192 BLCKSZ ?]

2000-12-04 Thread Jan Wieck

Don Baccus wrote:
>
> ...
> I expect TOAST to work even better).  Users will still be able to change to
> larger blocksizes (perhaps a wise thing to do if a large percentage of their
> data won't fit into a single PG block).   Users using the default will
> be able to store rows of *awesome* length, efficiently.

Depends...

Actually  the  toaster already jumps in if your tuples exceed
BLCKSZ/4, so with the default of 8K blocks it  tries  to  keep
all tuples smaller than 2K. The reasons behind that are:

1.  An average tuple size of 8K means an average of 4K unused
space at the end of each block. Wasting  space  means  to
waste IO bandwidth.

2.  Since  big  items  are  unlikely  to  be search criteria,
needing to read them into memory for every  check  for  a
match on other columns is a waste again.  So the more big
items are off from the main tuple, the smaller  the  main
table becomes, the more likely it is that the main tuples
(holding  the  keys)  are  cached  and  the   cheaper   a
sequential scan becomes.

Of  course,  especially  for  2. there is a break even point.
That is when the extra fetches to send toast  values  to  the
client  cost  more  than  there  was  saved from not doing it
during  the  main  scan  already.  A  full  table  SELECT   *
definitely  costs  more  if  TOAST  is involved. But who does
unqualified SELECT * from a multi-gig table without  problems
anyway?   Usually  you  pick  a single or a few based on some
other key attributes - don't you?

Let's make an example. You have a forum server that  displays
one  article  plus the date and sender of all follow-ups. The
article bodies are usually big (1-10K). So you do a SELECT  *
to  fetch  the actually displayed article, and another SELECT
sender, date_sent just to get the info for the follow-ups. If
we  assume a uniform distribution of body size and an average
of 10 follow-ups, that'd mean that we  save  52K  of  IO  and
cache usage for each article displayed.
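
In SQL terms the access pattern looks roughly like this (the table and
column names here are made up for illustration, they are not from any
real schema):

CREATE TABLE articles (
    id        int4 PRIMARY KEY,
    parent_id int4,         -- NULL for the original article
    sender    text,
    date_sent timestamp,
    body      text          -- big value, kept off in the toast table
);

-- The one article actually being displayed: its large body is fetched
-- from the toast table exactly once.
SELECT * FROM articles WHERE id = 42;

-- The follow-ups: only the small columns are read, so their toasted
-- bodies never have to be touched at all.
SELECT sender, date_sent FROM articles WHERE parent_id = 42;

The saving comes almost entirely from the second query: without TOAST
the follow-up bodies would sit in the main heap and be read anyway.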


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





Re: [HACKERS] Using Threads?

2000-12-04 Thread Bruce Guenter

On Mon, Dec 04, 2000 at 03:17:00PM -0800, Adam Haberlach wrote:
>   Typically (on a well-written OS, at least), the spawning of a thread
> is much cheaper than the creation of a new process (via fork()).

Unless I'm mistaken, the back-end is only forked when starting a new
connection, in which case the latency of doing the initial TCP three-way
handshake and start-up queries is much larger than any process creation cost.  On
Linux 2.2.16 on a 500MHz PIII, I can do the fork/exit/wait sequence in
about 164us.  On the same server, I can make/break a PostgreSQL
connection in about 19,000us (with 0% CPU idle, about 30% CPU system).
Even if we can manage to get a thread for free, and assume that the fork
from postmaster takes more than 164us, it won't make a big difference
once the other latencies are worked out.

> Also, since everything in a group of threads (I'll call 'em a team)

Actually, you call them a process.  That is the textbook definition.

> shares the
> same address space, there can be some memory overhead savings.

Only slightly.  All of the executable and libraries should already be
shared, as will all non-modified data.  If the data is modified by the
threads, you'll need separate copies for each thread anyway, so the net
difference is small.

I'm not denying there would be a difference.  Compared to separate
processes, threads are more efficient.  Doing a context switch between
threads means there are no PTE invalidations, which makes them quicker
than between processes.  Creation would be a bit faster due to just
linking in the VM to a new thread rather than marking it all as COW.
The memory savings would come from reduced fragmentation of the modified
data (if you have 1 byte modified on each of 100 pages, the thread would
grow by a few K, compared to 400K for processes).  I'm simply arguing
that the differences don't appear to be significant compared to the
other costs involved.
-- 
Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/



RE: [HACKERS] Using Threads?

2000-12-04 Thread Matthew

*snip*
> > 
> > Once all the questions regarding "why not" have been answered, it would
> > be good to also ask "why use threads?"  Do they simplify the code?  Do
> > they offer significant performance or efficiency gains?  What do they
> > give, other than being buzzword compliant?
> 
The primary advantage that I see is that a single postgres process
can benefit from multiple processors. I see little advantage to using threads
for client connections.



Re: [HACKERS] Using Threads?

2000-12-04 Thread Bruce Guenter

On Mon, Dec 04, 2000 at 02:30:31PM -0800, Dan Lyke wrote:
> Adam Haberlach writes:
> > Typically (on a well-written OS, at least), the spawning of a thread
> > is much cheaper than the creation of a new process (via fork()).
> This would be well worth testing on some representative sample
> systems.

Using the following program for timing process creation and cleanup:

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
  int i;
  int pid;
  /* 100,000 iterations: matches the reported 16.71s total, 167us each */
  for (i=0; i<100000; ++i) {
    pid=fork();
    if(pid==-1) exit(1);
    if(!pid) _exit(0);          /* child exits immediately */
    waitpid(pid,0,0);           /* parent reaps the child */
  }
  exit(0);
} 

And using the following program for timing thread creation and cleanup:

#include <stdlib.h>
#include <pthread.h>

void *threadfn(void *arg) { pthread_exit(0); }

int main() {
  int i;
  pthread_t thread;
  /* 100,000 iterations: matches the reported 12.10s total, 121us each */
  for (i=0; i<100000; ++i) {
    if (pthread_create(&thread, 0, threadfn, 0)) exit(1);
    if (pthread_join(thread, 0)) exit(1);
  }
  exit(0);
} 

On a relatively unloaded 500MHz PIII running Linux 2.2, the fork test
program took a minimum of 16.71 seconds to run (167us per
fork/exit/wait), and the thread test program took a minimum of 12.10
seconds to run (121us per pthread_create/exit/join).  I use the minimums
because those would be the runs where the tasks were least interfered
with by other tasks.  This amounts to a roughly 25% speed improvement
for threads over processes, for the null-process case.

If I add the following lines before the for loop:
  char *m;
  m=malloc(1024*1024);
  memset(m,0,1024*1024);      /* dirty 1MB so fork() has pages to mark COW */
The cost for doing the fork balloons to 240us, whereas the cost for
doing the thread is constant.  So, the cost of marking the pages as COW
is quite significant (using those numbers, 73us/MB).

So, forking a process with lots of data is expensive.  However, most of
the PostgreSQL data is in a SysV IPC shared memory segment, which
shouldn't affect the fork numbers.
-- 
Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/



Re: [HACKERS] Using Threads?

2000-12-04 Thread Lamar Owen

Matthew wrote:
> The primary advantage that I see is that a single postgres process
> can benefit from multiple processors. I see little advantage to using threads
> for client connections.

Multiprocessors best benefit multiple backends.  And the current forked
model lends itself admirably to SMP.

And I say that even after using a multithreaded webserver (AOLserver)
for three and a half years.  Of course, AOLserver also sanely uses the
multi process PostgreSQL backends in a pooled fashion, but that's beside
the point.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [HACKERS] Bitmap index

2000-12-04 Thread Nathan Myers

On Mon, Dec 04, 2000 at 04:28:47PM +0100, [EMAIL PROTECTED] wrote:
> 
> on other RDBMSs (Oracle, etc.), there is an index called a bitmap index
> that greatly improves performance compared to a btree index for boolean
> values (such as a sex value, which is either M or F). I would like to
> know if such an index will be implemented inside PostgreSQL.

Yes, please do send in your implementation for review.

Nathan Myers
[EMAIL PROTECTED]



Re: [HACKERS] Using Threads?

2000-12-04 Thread Myron Scott

I would love to distribute this code to anybody who wants it.  Any
suggestions for a good place?  However, calling the
work a code redesign is a bit generous.  This was more like a
brute force hack.  I just moved all the connection-related global
variables to a thread-local "environment variable" and bypassed much of
the postmaster code.

I did this so I could port my app which was originally designed for
Oracle OCI and Java.  My app uses very few SQL statements but uses them
over and over.  I wanted true prepared statements linked to Java with JNI.
I got both as well as batched transaction writes ( which was more relevant
before WAL).  

In my situation, threads seemed much more flexible to implement, and I
probably could not have done the port without them.


Myron 

On Mon, 4 Dec 2000, Ross J. Reedstrom wrote:

> Myron - 
> Putting aside the fork/threads discussion for a moment (the reasons,
> both historical and other, such as inter-backend protection, are well
> covered in the archives), the work you did sounds like an interesting
> experiment in code redesign. Would you be willing to release the hacked
> code somewhere for others to learn from? Hacking flex to generate
> thread-safe code is of itself interesting, and the question about PG and
> threads comes up so often, that an example of why it's not a simple task
> would be useful.
> 
> Ross
> 




RE: [HACKERS] SQL to retrieve FK's, Update/Delete action, etc. (fwd)

2000-12-04 Thread Michael Fork

There ya go, I figured it out :)  Given the name of a table, this query will
return all foreign keys in that table, the table the primary key is in,
the name of the primary key, whether they are deferrable, whether they are
initially deferred, and the action to be performed (RESTRICT, SET NULL,
etc.).  To get the foreign key and primary key column and table names, you
must parse the null-terminated pg_trigger.tgargs.

When I get the equivalent query working for primary keys I will send it
your way -- or if you beat me to it, send it my way (I am working on some
missing functionality from the ODBC driver)
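
For what it's worth, a first guess at the primary-key side -- untested,
just a sketch built around pg_index.indisprimary, so take the join
conditions with a grain of salt -- might look something like:

SELECT ic.relname,
a.attname
FROM pg_class pc,
pg_index pi,
pg_class ic,
pg_attribute a
WHERE  pc.relname = '<table name>'
AND pc.oid = pi.indrelid
AND pi.indisprimary
AND pi.indexrelid = ic.oid
AND a.attrelid = ic.oid
AND a.attnum > 0
ORDER BY a.attnum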

Michael Fork - CCNA - MCP - A+
Network Support - Toledo Internet Access - Toledo Ohio

SELECT pt.tgargs,
pt.tgnargs,
pt.tgdeferrable,
pt.tginitdeferred,
pg_proc.proname,
pg_proc_1.proname
FROM pg_class pc,
pg_proc pg_proc,
pg_proc pg_proc_1,
pg_trigger pg_trigger,
pg_trigger pg_trigger_1,
pg_proc pp,
pg_trigger pt
WHERE  pt.tgrelid = pc.oid
AND pp.oid = pt.tgfoid
AND pg_trigger.tgconstrrelid = pc.oid
AND pg_proc.oid = pg_trigger.tgfoid
AND pg_trigger_1.tgfoid = pg_proc_1.oid
AND pg_trigger_1.tgconstrrelid = pc.oid
AND ((pc.relname='<table name>')
AND (pp.proname LIKE '%%ins')
AND (pg_proc.proname LIKE '%%upd')
AND (pg_proc_1.proname LIKE '%%del')
AND (pg_trigger.tgrelid=pt.tgconstrrelid)
AND (pg_trigger_1.tgrelid = pt.tgconstrrelid)) 


On Tue, 5 Dec 2000, Christopher Kings-Lynne wrote:

> Hi Michael,
> 
> I am on the phpPgAdmin development team, and I have been wanting to add this
> functionality to phpPgAdmin.  I will start working with your query as soon
> as possible, and I will use phpPgAdmin as a testbed for the functionality.
> 
> I really appreciate having your query as a working basis, because it's
> really hard trying to figure out the system tables!
> 
> Chris
> 
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED]]On Behalf Of Michael Fork
> > Sent: Sunday, December 03, 2000 12:23 PM
> > To: [EMAIL PROTECTED]
> > Subject: [HACKERS] SQL to retrieve FK's, Update/Delete action, etc.
> >
> >
> > Given the name of a table, I need to find all foreign keys in that table
> > and the table/column that they refer to, along with the action to be
> > performed on update/delete.  The following query works, but only when
> > there is 1 foreign key in the table, when there is more than 2 it grows
> > exponentially -- which means I am missing a join.  However, given my
> > limitied knowledge about the layouts of the postgres system tables, and
> > the pg_trigger not being documented on the web site, I have been unable to
> > get the correct query.  Is this possible, and if so, what join(s) am I
> > missing?
> >
> > SELECT pt.tgargs,
> > pt.tgnargs,
> > pt.tgdeferrable,
> > pt.tginitdeferred,
> > pg_proc.proname,
> > pg_proc_1.proname
> > FROM pg_class pc,
> > pg_proc pg_proc,
> > pg_proc pg_proc_1,
> > pg_trigger pg_trigger,
> > pg_trigger pg_trigger_1,
> > pg_proc pp,
> > pg_trigger pt
> > WHERE pt.tgrelid = pc.oid
> > AND pp.oid = pt.tgfoid
> > AND pg_trigger.tgconstrrelid = pc.oid
> > AND pg_proc.oid = pg_trigger.tgfoid
> > AND pg_trigger_1.tgfoid = pg_proc_1.oid
> > AND pg_trigger_1.tgconstrrelid = pc.oid
> > AND ((pc.relname='tblmidterm')
> > AND (pp.proname LIKE '%ins')
> > AND (pg_proc.proname LIKE '%upd')
> > AND (pg_proc_1.proname LIKE '%del'))
> >
> > Michael Fork - CCNA - MCP - A+
> > Network Support - Toledo Internet Access - Toledo Ohio
> >
> 





Re: [HACKERS] Using Threads?

2000-12-04 Thread Tom Lane

Bruce Guenter <[EMAIL PROTECTED]> writes:
> [ some very interesting datapoints ]
>
> So, forking a process with lots of data is expensive.  However, most of
> the PostgreSQL data is in a SysV IPC shared memory segment, which
> shouldn't affect the fork numbers.

I believe (but don't have numbers to prove it) that most of the present
backend startup time has *nothing* to do with thread vs process
overhead.  Rather, the primary startup cost has to do with initializing
datastructures, particularly the system-catalog caches.  A backend isn't
going to get much real work done until it's slurped in a useful amount
of catalog cache --- for example, until it's got the cache entries for
pg_class and the indexes thereon, it's not going to accomplish anything
at all.

Switching to a thread model wouldn't help this cost a bit, unless
we also switch to a shared cache model.  That's not necessarily a win
when you consider the increased costs associated with cross-backend
or cross-thread synchronization needed to access or update the cache.
And if it *is* a win, we could get most of the same benefit in the
multiple-process model by keeping the cache in shared memory.

The reason that a new backend has to do all this setup work for itself,
rather than inheriting preloaded cache entries via fork/copy-on-write
from the postmaster, is that the postmaster isn't part of the ring of
processes that can access the database files directly.  That was done
originally for robustness reasons: since the PM doesn't have to deal
with database access, cache invalidation messages, etc etc yadda yadda,
it is far simpler and less likely to crash than a real backend.  If we
conclude that shared syscache is not a reasonable idea, it might be
interesting to look into making the PM into a full-fledged backend
that maintains a basic set of cache entries, so that these entries are
immediately available to new backends.  But we'd have to take a real
hard look at the implications for system robustness/crash recovery.

In any case I think we're a long way away from the point where switching
to threads would make a big difference in connection startup time.

regards, tom lane



Re: [HACKERS] Using Threads?

2000-12-04 Thread Tom Samplonius


On Mon, 4 Dec 2000, Junfeng Zhang wrote:

> All the major operating systems should have POSIX threads implemented.
> Actually this can be configurable--multithreads or one thread.

  I don't understand this.  The OS can be configured for one thread?  How
would that be of any use?

> A thread-only server is unsafe, I agree. Maybe the following model can be a
> little better: several servers, each multi-threaded. Every server can
> support a maximum number of requests simultaneously. If anything bad
> happens, it is limited to that server. 

  There is no difference.  If anything bad happens with the current
multi-process server, all the postgres backends shutdown because the
shared memory may be corrupted.

> The downside of the process model is not the startup time. It is about
> kernel resources and context-switch cost. Processes consume much more
> kernel resource than threads, and have a much higher cost for context
> switches. The scalability of the thread model is much better than that
> of the process model.

  What kernel resources does a process use?  There is some VM mapping
overhead, a process table entry, and a file descriptor table.  It is
possible to support thousands of processes today.  For instance,
ftp.freesoftware.com supports up to 5000 FTP connections using a slightly
modified ftpd (doesn't use inetd anymore).  That means with 5000 users
connected, that works out to 5000 processes active.  Amazing but true.

  Some OSes (Linux is the main one) implement threads as pseudo processes.
Linux threads are processes with a shared address space and file
descriptor table.

  Context switch cost for threads can be lower if you are switching to a
thread in the same process.  That of course assumes that all context
switches will occur within the same process, or the Linux
everything-is-a-process model isn't used.

> -Junfeng

Tom




Re: [HACKERS] beta testing version

2000-12-04 Thread Thomas Lockhart

> This paragraph from erserver.com:
> eRServer development is currently concentrating on core, universal
> functions that will enable individuals and IT professionals
> to implement PostgreSQL ORDBMS solutions for mission critical
> datawarehousing, datamining, and eCommerce requirements. These
> initial developments will be published under the PostgreSQL Open
> Source license, and made available through our sites, Certified
> Platinum Partners, and others in PostgreSQL community.
> led me (and many others) to believe that this was going to be a tighly
> integrated service, requiring code in the PostgreSQL core, since that's the
> normal use of 'core' around here.

"Around here" isn't "around there" ;)

As you can see, "core" == "fundamental" in the general sense, in a
statement not written specifically for the hacker community but for the
world at large. In many cases, taking one syllable rather than four is a
good thing, but sorry it led to confusion.

My schedule is completely out of whack, partly from taking the afternoon
off to cool down from the personal attacks being lobbed in my direction.

Will pick things up as time permits, but we should have some code for
contrib/ in time for beta2, if it is acceptable to the community to put
it in there.

- Thomas



Re: [HACKERS] INSERT INTO ... SELECT problem

2000-12-04 Thread Tom Lane

Alex Perel <[EMAIL PROTECTED]> writes:
> CREATE RULE ip_allocated_rule AS 
>  ON INSERT 
>  TO ips_used
>  DO DELETE FROM ips_free
>   WHERE ips_free.block_id = NEW.block_id
> AND ips_free.ip   = NEW.ip;

> INSERT INTO ips_used 
>   (
>block_id,
>ip,
>contact_id
>   )
> SELECT block_id
>  , ip
>  , '1000'
>   FROM ips_free
>  WHERE ip = '10.10.10.10'

Hmm.  The rule will generate a query along these lines:

DELETE FROM ips_free
FROM ips_free ipsfree2
WHERE ips_free.block_id = ipsfree2.block_id
  AND ips_free.ip   = ipsfree2.ip
  AND ipsfree2.ip = '10.10.10.10';

(I'm using ipsfree2 to convey the idea of a self-join similar to
"SELECT FROM ips_free, ips_free ipsfree2" ... I don't believe the
above is actually legal syntax for DELETE.)

This ends up deleting all your ips_free entries for ip = '10.10.10.10',
which seems to be what you want ... but I think the query added by
the rule is executed before the actual INSERT, which leaves you with
nothing to insert.

There's been some debate in the past about whether an ON INSERT rule
should fire before or after the INSERT itself.  I lean to the "after"
camp myself, which would fix this problem for you.  However, you are
treading right on the hairy edge of circular logic here.  You might want
to think about using a trigger rather than a rule to do the deletes.
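
Just as an illustration (not tested, the function and trigger names are
made up here, and this assumes plpgsql is installed in your database),
a trigger-based sketch of the same thing could look roughly like:

CREATE FUNCTION ip_allocated_trig() RETURNS opaque AS '
BEGIN
    -- the row has already been fetched from ips_free by the time this
    -- fires, so deleting it here does not starve the INSERT ... SELECT
    DELETE FROM ips_free
     WHERE ips_free.block_id = NEW.block_id
       AND ips_free.ip       = NEW.ip;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER ip_allocated_trig
    BEFORE INSERT ON ips_used
    FOR EACH ROW EXECUTE PROCEDURE ip_allocated_trig();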

regards, tom lane