Re: [GENERAL] CREATE USER

2000-05-31 Thread Tom Lane

"Hiroshi Inoue" <[EMAIL PROTECTED]> writes:
> How about starting new transaction automatically after committing
> "create user ..." at backend side if "create user" is the first command
> of the transaction ?

So then
begin;
create user ...;
rollback;

would do the wrong thing --- silently?

I don't think that's an improvement :-(

The only reason CREATE USER isn't rollbackable is that the flat password
file is updated immediately by a trigger, rather than at transaction
commit.  The right fix would be to defer the update until commit (which
is certainly doable, though it might mean hardwired support for the
update instead of doing it in a generic trigger function).
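The defer-until-commit idea can be illustrated with a toy sketch; this is not PostgreSQL's implementation, and all names here are made up. The side effect is queued, applied on commit, and discarded on rollback:

```python
# Toy illustration of deferring a side effect (like rewriting the flat
# password file) until commit, so that ROLLBACK discards it.
# Hypothetical names; not PostgreSQL internals.

class Transaction:
    def __init__(self):
        self.pending = []              # side effects queued until commit

    def create_user(self, name):
        # Instead of a trigger firing immediately, the flat-file update
        # is merely queued inside the transaction.
        self.pending.append(("update_password_file", name))

    def commit(self):
        applied = list(self.pending)   # now the deferred actions happen
        self.pending.clear()
        return applied

    def rollback(self):
        self.pending.clear()           # the deferred actions never happen
        return []

tx = Transaction()
tx.create_user("alice")
print(tx.rollback())  # [] -- the flat file was never touched
```

With the immediate-trigger scheme, by contrast, the file would already have been rewritten by the time ROLLBACK runs, which is exactly the hazard described above.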

If that seems like too much work, how about downgrading the "create
user not allowed in transaction" error to a "please don't abort now"
notice?  It's pretty silly that CREATE USER is stiffnecked about this
when DROP TABLE is not --- the bad consequences of rolling back DROP
TABLE are a lot worse.

regards, tom lane



Re: [GENERAL] Re: Speed of locating tables

2000-05-31 Thread Jurgen Defurne

carl garland wrote:

> >  Don't even think about 10^5 separate tables in a database :-(.  It's
> > not so much that PG's own datastructures wouldn't cope, as that very
> > few Unix filesystems can cope with 10^5 files in a directory.  You'd
> > be killed on directory search times.
>
> This doesn't really answer the initial question of how long it takes to
> locate a table in a large 100+ table db, and where and when these
> lookups occur.

Normally, this lookup should occur the first time a table is referenced.
After that, the process should keep the file open, so it doesn't need to
look the file up anymore. If all is really well, the file is also kept open
by the OS, so that anyone wishing to use the same file gets the file handle
from the OS without another directory lookup (is this the case with Linux?)

>
> I understand the concern for directory search times but what if your
> partition for the db files is under XFS or some other journaling fs that
> allows for very quick search times on large directories.  I also
> saw that there may be concern over PG's own datastructures, in that the
> master tables that hold the table and index entries require a seq
> search for locating the tables.  Why support a large # of tables in PG
> if exceeding a certain limit causes severe performance problems?  What if
> your data model requires more than 1,000,000 tables?
>

If the implementation is like the above, directory search times are much
less of a concern, although a directory might get fragmented and be spread
out across the disk (with 100+ tables it will be fragmented). However, it
is the bookkeeping of the disk itself that will be of concern. This
bookkeeping is done with i-nodes, of which there is a limited amount.

Suppose you have 1M+ tables, and you have 1TB of space. That makes about
1 MB per table. Suppose you have a FS which works with 4k blocks; then you
need 269 blocks per table. Since the original figure is not a round one,
this gives a waste of 0.2%, which amounts to 2.1 GB. Since i-nodes have a
limited number of pointers to manage blocks (12 direct data blocks under
Linux, I think), each file needs about 23 pointer structures (the inode
plus indirect blocks), which comes to 23,000,000 in all.

This might not be quite unmanageable, but there is also the fragmentation on
all these tables which bogs down your seek times.

All this to show that the usage of 1M+ tables generates an enormous amount
of work, which would tax your IO enormously. Even with the directory search
above eliminated, you still have to search your inode table. You could
cache it, but then you will need (at an estimate of 128 bytes per inode)
probably about 32 MB of RAM (at 1/100th of the real space needed), which
doesn't seem too bad, but which could be used more productively.
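The back-of-envelope arithmetic above can be checked with a short script; the 12-direct-pointer figure is the author's stated Linux assumption, and the "23 per file" count lumps the inode together with its indirect blocks:

```python
# Double-check the figures above: 1 TB spread over 1M tables on a
# 4k-block FS, with ~12 direct block pointers per inode (assumed).
tables = 1_000_000
space = 2**40                                    # 1 TB
block = 4096

per_table = space / tables                       # ~1.05 MB per table
blocks_per_table = int(-(-per_table // block))   # ceil -> 269 blocks
used = blocks_per_table * block
waste_pct = 100 * (used - per_table) / used      # ~0.2% -> ~2.2 GB overall
groups_per_file = -(-blocks_per_table // 12)     # ~23 pointer groups/file
total_groups = groups_per_file * tables          # ~23,000,000 in all

print(blocks_per_table, round(waste_pct, 1), int(total_groups))
# 269 0.2 23000000
```

Caching 1/100th of those 23M structures at 128 bytes each comes to roughly 28 MB, in the same ballpark as the ~32 MB estimate in the text.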

About the size of the datamodel I say this: I think that you'd need a
mighty long time and an enormous number of analysts to reach a datamodel of
1M+ tables, or else it is based upon a large number of simple tables, in
which case it could be reduced in size.

I'm sorry, but my feeling is that 1M+ tables for a datamodel is preposterous.

Jurgen Defurne
[EMAIL PROTECTED]




[GENERAL] Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?

2000-05-31 Thread Louis-David Mitterrand

On Wed, May 31, 2000 at 09:23:27PM -0400, Bruce Momjian wrote:
> > 3. Mailing Lists.  We use software that allows us to use remote sites for
> >    'mail relaying'.  Basically, instead of our central server having to
> >    service *all* remote addresses, it offloads email onto remote servers
> >    to do the distribution.  For instance, by dumping all email destined for
> >    subscribers in France to a server residing in France, the central
> >    server has to send one email message "Across the pond", and let the
> >    server in France handle the other servers.  If you are interested in
> >    providing a relay point, email [EMAIL PROTECTED] (me) for details on how
> >    to get set up for this.

FWIW this not as good an idea as it seems. I know of many .fr domains
that are hosted in the US. My own .ch is in St-Louis (MI), whereas some
clients' .com are hosted right here in Paris.

This setup is the reason I was unable to get {-hackers,-general} list
traffic for a week: a faulty "relay" for my Swiss .ch domain
apparently refused to relay back to the US where this domain lives.

Domains are disconnected from geography nowadays, and increasingly so as
time goes on.

-- 
Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.fr

I don't build computers, I'm a cooling engineer.
   -- Seymour Cray, founder of Cray Inc. 



RE: [GENERAL] CREATE USER

2000-05-31 Thread Hiroshi Inoue

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Thomas Lockhart
> 
> > I have to create users via an ODBC connection, but I get this error:
> >   CREATE USER: may not be called in a transaction block
> > With psql I have no problems. Anybody know what is happening?
> 
> Yeah. There are too many transaction block wrappers (BEGIN/COMMIT
> statements) in the ODBC driver. I'm not sure if we have patches for it
> yet...
>

Currently the following command sequence isn't allowed.

begin;
create user ...
commit;

This wouldn't be good for odbc/jdbc/ecpg etc.

How about starting new transaction automatically after committing
"create user ..." at backend side if "create user" is the first command
of the transaction ?

Regards.

Hiroshi Inoue
[EMAIL PROTECTED]



Re: [GENERAL] CREATE USER

2000-05-31 Thread Thomas Lockhart

> I have to create users via an ODBC connection, but I get this error:
>   CREATE USER: may not be called in a transaction block
> With psql I have no problems. Anybody know what is happening?

Yeah. There are too many transaction block wrappers (BEGIN/COMMIT
statements) in the ODBC driver. I'm not sure if we have patches for it
yet...

- Thomas



RE: [GENERAL] Postgresql performance on NT

2000-05-31 Thread Lincoln Yeoh

>We support servers on NT, but not on Win95/98.  Cygwin supports both,

What's the performance like? 

NT doesn't do forks well, and Postgres uses fork(), right?

Cheerio,

Link.




[GENERAL] Re: [HACKERS] Oft Ask: How to contribute to PostgreSQL?

2000-05-31 Thread Bruce Momjian

Not sure this belongs in the FAQ.  Seems more of a web page thing.


> 
> Due to a recent thread started on pgsql-hackers, I'm posting this to the
> lists.  Vince is planning on putting in appropriate links for some of
> this, and, Bruce, can we maybe put it into the FAQ?
> 
> I'm not an English major, so this is more technicalese than anything
> else... or, a rambling of an un-ordered mind, however you want to classify
> it :)
> 
> 
> 
> There are several ways that people can contribute to the PostgreSQL
> project, and, below, I'm going to try and list them...
> 
> 1. Code.  We have a TODO list available at
>    http://www.postgresql.org/docs/todo.html, which lists enhancements that
>    have been picked out as needed.  Some of them take time to learn the
>    intricacies of the code, some require no more than time.  Contributing
>    code, although not the only way to contribute, is always one of the more
>    valuable ways of improving any Open Source project.
> 
> 2. Web Site.  http://www.postgresql.org is mirrored on many sites around
>    the world, as is ftp://ftp.postgresql.org.  By increasing the number of
>    mirrors available around the world, you help reduce the load on any one
>    site, as well as improve the accessibility to the code.  If you have
>    the resources to provide a mirror, both hardware and bandwidth, this is
>    another means of contributing to the project.  All our mirrors are
>    required to use rsync, in order to be listed, with details on this
>    found at http://www.postgresql.org/howtomirror.html
> 
> 3. Mailing Lists.  We use software that allows us to use remote sites for
>    'mail relaying'.  Basically, instead of our central server having to
>    service *all* remote addresses, it offloads email onto remote servers
>    to do the distribution.  For instance, by dumping all email destined for
>    subscribers in France to a server residing in France, the central
>    server has to send one email message "Across the pond", and let the
>    server in France handle the other servers.  If you are interested in
>    providing a relay point, email [EMAIL PROTECTED] (me) for details on how
>    to get set up for this.
> 
> 4. Financial.  In June of 1999, PostgreSQL, Inc was formed as the
>    "Commercial Arm" of the PostgreSQL Project.  Although it was originally
>    formed to provide Commercial Support for PostgreSQL, it has expanded to
>    include Consulting services, PostgreSQL Merchandise (ElephantWear) and,
>    most recently, Database Hosting services.
> 
>    As our mission statement (http://www.pgsql.com/mission.html) states,
>    our purpose (among several) is to provide funding for various projects,
>    whether they be Advertising or Programming.  Although not currently
>    available, when the new site is up there will be a set of
>    pages off of http://www.pgsql.com that will provide a cleaner means of
>    contributing financially towards having features implemented, as well as
>    showing funds available for various projects.  For instance, 25% of the
>    revenue from Support Contracts will be ear-marked for stuff like
>    Advertising and a General Pool that we can use to fund projects that we
>    feel are important from a "commercial deployment" standpoint.
> 
> 
> Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
> Systems Administrator @ hub.org 
> primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 
> 
> 
> 
> 
> 
> 
> 


-- 
  Bruce Momjian                         |  http://www.op.net/~candle
  [EMAIL PROTECTED]                     |  (610) 853-3000
  +  If your life is a hard drive,      |  830 Blythe Avenue
  +  Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Postmaster won't -HUP

2000-05-31 Thread Martijn van Oosterhout

Jerry Lynde wrote:
> 
> Hello out there,
> 
> I'm having a problem with a production server. Actually, there are two
> problems. The semi-trivial problem is that Postgres won't die using the
> service mechanism. As root, I "service postgres stop" and then "service
> postgres start" after a reasonable wait. The restart will earn me a
> "StreamServerPort: cannot bind to port" which indicates that the process
> never died. A ps ax confirms the persistence of postmaster. When I kill -9
> the processes (postmaster, the /bin/sh -c postgres, and logger) the
> process will claim to start with "service postgres start", but it reports no
> PID and doesn't show up in ps ax. It is clearly not running at this point.

Hmm, on Debian it always dies, though it does take a while sometimes.
Also, try to avoid kill -9 because then it can't clean up shared memory,
locks, temporary files, etc. Try kill -INT or -TERM.

> The real problem, which caused all this debugging, is that twice so far,
> for no apparent reason, I have pegged the processors on the server. The
> machine has two 500MHz processors with 256 MB RAM. I have a hard time
> believing that one small query can bring that machine to its knees, but it
> has, twice. The queries were run through a hard coded php front end (for
> testing purposes). Any insight on these two problems would be appreciated
> greatly.

Heh. Yes, I also wonder why "select * from bigtable", where bigtable is
a 400,000 row table, crashes the machine :). (This is pg_dump btw.)
Maybe you should post the query.

Actually, there is one other situation where I've nearly killed the
machine. I have two big tables, "bigtable" and "largetable". Then do this
query:

select * from bigtable where largetable.a = 1;

(Of course the actual query was much longer, but this one demonstrates the
problem.) It actually does a join between those two tables even though
only one is mentioned in the FROM part. Running EXPLAIN over this showed
that it would take a *very* long time to complete.

This is probably not a problem in newer versions though. What version
are you using?
-- 
Martijn van Oosterhout <[EMAIL PROTECTED]>
http://cupid.suninternet.com/~kleptog/



Re: [GENERAL] TOP SESSIONS?

2000-05-31 Thread Mike Mascari

mikeo wrote:
> 
> hi,
> in oracle you would use these two cursors to determine who was connected and
> what they were doing.
> 
> select distinct s.sid sid, s.serial# serial, s.status status, osuser, spid ,
> count(o.sid) counter, s.username username, s.program program, sql_address
> from v$session s, v$open_cursor o, v$process p
> where s.sid = o.sid(+)
> and paddr = addr
> group by s.sid,s.serial#, s.status , osuser, spid ,s.username, s.program ,sql_address
> order by 1,3
> 
> select distinct name
> from sys.audit_actions, v$sqltext
> where address = ?
> and action = command_type
> 
> does anyone know what tables in postgres would give me the same or similar
> information?
> 
> TIA,
> mikeo

PostgreSQL attempts to communicate what queries are being
performed by setting the process title of the connected
backend while processing a SQL statement, much like sendmail does. You
should be able to determine who's connected and what they're
doing with something like:

ps axf

You'll see who's connected to what database from what machine and
the type of query being executed. I don't know of any tables in
PostgreSQL which would provide similar information.

Hope that helps, 

Mike Mascari



Re: [GENERAL] Postmaster won't -HUP

2000-05-31 Thread Ed Loehr

Jerry Lynde wrote:
> 
> I'm having a problem with a production server. Actually, there are two
> problems. The semi-trivial problem is that Postgres won't die using the
> service mechanism. As root, I "service postgres stop" and then "service
> postgres start" after a reasonable wait. The restart will earn me a
> "StreamServerPort: cannot bind to port" which indicates that the process
> never died. A ps ax confirms the persistence of postmaster. When I kill -9
> the processes (postmaster, the /bin/sh -c postgres, and logger) the
> process will claim to start with "service postgres start", but it reports no
> PID and doesn't show up in ps ax. It is clearly not running at this point.
> The real problem, which caused all this debugging, is that twice so far,
> for no apparent reason, I have pegged the processors on the server. The
> machine has two 500MHz processors with 256 MB RAM. I have a hard time
> believing that one small query can bring that machine to its knees, but it
> has, twice. The queries were run through a hard coded php front end (for
> testing purposes). Any insight on these two problems would be appreciated
> greatly.

Hello Jerry,

What version are you running?

What does your server log show?  Any other logs?

Can you show us the small query, # of rows, etc?

Regards,
Ed Loehr



Re: [GENERAL] Postmaster won't -HUP

2000-05-31 Thread Joseph Shraibman

In version 7.0 postgres waits for all clients to close their connections
before exiting.  Before that, it just quit.

Jerry Lynde wrote:
> 
> Hello out there,
> 
> I'm having a problem with a production server. Actually, there are two
> problems. The semi-trivial problem is that Postgres won't die using the
> service mechanism. As root, I "service postgres stop" and then "service
> postgres start" after a reasonable wait. The restart will earn me a
> "StreamServerPort: cannot bind to port" which indicates that the process
> never died. A ps ax confirms the persistence of postmaster. When I kill -9
> the processes (postmaster, the /bin/sh -c postgres, and logger) the
> process will claim to start with "service postgres start", but it reports no
> PID and doesn't show up in ps ax. It is clearly not running at this point.
> 
> The real problem, which caused all this debugging, is that twice so far,
> for no apparent reason, I have pegged the processors on the server. The
> machine has two 500MHz processors with 256 MB RAM. I have a hard time
> believing that one small query can bring that machine to its knees, but it
> has, twice. The queries were run through a hard coded php front end (for
> testing purposes). Any insight on these two problems would be appreciated
> greatly.
> 
> thanks,
> 
> Jerry Lynde
> 
> Jerry Lynde
> IT - Invocation/Evocation, Banishing, et al.
> "In E-commerce, the E stands for Evil."
> Due Diligence Inc.
> http://www.diligence.com
> Phone: (406) 728-0001 x232
> Fax: (406) 728-0006



[GENERAL] Postmaster won't -HUP

2000-05-31 Thread Jerry Lynde

Hello out there,

I'm having a problem with a production server. Actually, there are two 
problems. The semi-trivial problem is that Postgres won't die using the 
service mechanism. As root, I "service postgres stop" and then "service 
postgres start" after a reasonable wait. The restart will earn me a 
"StreamServerPort: cannot bind to port" which indicates that the process
never died. A ps ax confirms the persistence of postmaster. When I kill -9
the processes (postmaster, the /bin/sh -c postgres, and logger) the
process will claim to start with "service postgres start", but it reports no
PID and doesn't show up in ps ax. It is clearly not running at this point.

The real problem, which caused all this debugging, is that twice so far, 
for no apparent reason, I have pegged the processors on the server. The 
machine has two 500MHz processors with 256 MB RAM. I have a hard time
believing that one small query can bring that machine to its knees, but it 
has, twice. The queries were run through a hard coded php front end (for 
testing purposes). Any insight on these two problems would be appreciated 
greatly.

thanks,

Jerry Lynde



Jerry Lynde
IT - Invocation/Evocation, Banishing, et al.
"In E-commerce, the E stands for Evil."
Due Diligence Inc.
http://www.diligence.com
Phone: (406) 728-0001 x232
Fax: (406) 728-0006



[GENERAL] CREATE USER

2000-05-31 Thread Diego Schvartzman

Hi!
I have to create users via an ODBC connection, but I get this error:
  CREATE USER: may not be called in a transaction block
With psql I have no problems. Anybody know what is happening?

Thanks

Diego Schvartzman
Email: [EMAIL PROTECTED]
ICQ# 1779434




Re: [GENERAL] Trouble-free vacuum w/concurrent writes? (was"PostgreSQL capabilities")

2000-05-31 Thread Ed Loehr

Charles Tassell wrote:
> 
> No, that's not what he said.  You can back up the database while it's still
> being used (the pg_dump runs in a transaction) but you still can't vacuum a
> database while it's in use.  Vacuuming is more along the lines of a defrag:
> it updates the indexes and maintains stats.

Oops...I had blurred the two in my mind, as they are both a part of my
back-up process.  Thanks for pointing that out.  So, dumping concurrently
with reads/writes is fine, while vacuum with concurrent writes continues
to be problematic (though sounds like there will be some improvements on
the vacuum front in a coming release).

Regards,
Ed Loehr

> At 12:16 PM 5/31/00, Ed Loehr wrote:
> >Bruce Momjian wrote:
> > >
> > > > Alex Pilosov wrote:
> > > > >
> > > > > > http://networkdna.com/database/index.html mentions that PostgreSQL is
> > > > > > capable of "Online backup". What does that exactly mean?
> > > > >
> > > > > It means Postgres can do a reliable backup (a consistent snapshot) of a
> > > > > database without shutting down the database.
> > > >
> > > > Hmmm.  My backup procedure, based on earlier discussions in this group,
> > > > involves blocking all write-access during a pg_dump.  That is effectively
> > > > shutting down the database from my perspective.  Is there a quicker way
> > > > to take a consistent snapshot while still allowing writes?
> > >
> > > With 6.5 and 7.0, pg_dump grabs a consistent snapshot of the database at
> > > the time it starts, and dumps that.  No reason to shut out users.
> >
> >Can other folks confirm that this is no longer a problem?  Are people
> >successfully vacuuming while allowing full read/write access to the db?
> >
> >Regards,
> >Ed Loehr



[GENERAL] PostgreSQL article in LinuxWorld

2000-05-31 Thread Bruce Momjian

There is an article about open source database on the LinuxWorld web
site:
http://www.linuxworld.com/linuxworld/lw-2000-05/lw-05-database.html

It mentions PostgreSQL.  I was interviewed for the article.

-- 
  Bruce Momjian                         |  http://www.op.net/~candle
  [EMAIL PROTECTED]                     |  (610) 853-3000
  +  If your life is a hard drive,      |  830 Blythe Avenue
  +  Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Trouble-free vacuum w/concurrent writes? (was "PostgreSQL capabilities")

2000-05-31 Thread Charles Tassell

No, that's not what he said.  You can back up the database while it's still
being used (the pg_dump runs in a transaction) but you still can't vacuum a
database while it's in use.  Vacuuming is more along the lines of a defrag:
it updates the indexes and maintains stats.

At 12:16 PM 5/31/00, Ed Loehr wrote:
>Bruce Momjian wrote:
> >
> > > Alex Pilosov wrote:
> > > >
> > > > > http://networkdna.com/database/index.html mentions that PostgreSQL is
> > > > > capable of "Online backup". What does that exactly mean?
> > > >
> > > > It means Postgres can do a reliable backup (a consistent snapshot) of a
> > > > database without shutting down the database.
> > >
> > > Hmmm.  My backup procedure, based on earlier discussions in this group,
> > > involves blocking all write-access during a pg_dump.  That is effectively
> > > shutting down the database from my perspective.  Is there a quicker way
> > > to take a consistent snapshot while still allowing writes?
> >
> > With 6.5 and 7.0, pg_dump grabs a consistent snapshot of the database at
> > the time it starts, and dumps that.  No reason to shut out users.
>
>Can other folks confirm that this is no longer a problem?  Are people
>successfully vacuuming while allowing full read/write access to the db?
>
>Regards,
>Ed Loehr




[GENERAL] TOP SESSIONS?

2000-05-31 Thread mikeo

hi,
in oracle you would use these two cursors to determine who was connected and
what they were doing.   

select distinct s.sid sid, s.serial# serial, s.status status, osuser, spid ,
count(o.sid) counter, s.username username, s.program program, sql_address
from v$session s, v$open_cursor o, v$process p
where s.sid = o.sid(+)
and paddr = addr
group by s.sid,s.serial#, s.status , osuser, spid ,s.username, s.program ,sql_address
order by 1,3

select distinct name
from sys.audit_actions, v$sqltext
where address = ?
and action = command_type


does anyone know what tables in postgres would give me the same or similar information?

TIA,
mikeo



Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Dustin Sallings

On Wed, 31 May 2000, Ed Loehr wrote:

# Hmmm.  My backup procedure, based on earlier discussions in this group,
# involves blocking all write-access during a pg_dump.  That is
# effectively shutting down the database from my perspective.  Is there a
# quicker way to take a consistent snapshot while still allowing writes? 

Use a filesystem that supports snapshotting.

--
SA, beyond.com   My girlfriend asked me which one I like better.
pub  1024/3CAE01D5 1994/11/03 Dustin Sallings <[EMAIL PROTECTED]>
|Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE 
L___ I hope the answer won't upset her. 




Re: [GENERAL] Perl interfaces?

2000-05-31 Thread Philip Hallstrom

> Philip Hallstrom <[EMAIL PROTECTED]> writes:
> > I took a look around and was unable to find a Perl DBI driver for
> > PostgreSQL... does one exist that I'm missing?
> 
> DBD-Pg, at rev 0.93 last I looked at the CPAN archives.  For some
> bizarre reason it's not listed on the index page about DBI drivers,
> but you can find it if you search by module name.

Thanks for the info all... I couldn't believe it didn't exist, but now I
know why I couldn't find it.

Thanks again!

-philip




[ANNOUNCE] PostgreSQL book sent to publisher

2000-05-31 Thread Bruce Momjian

I have sent off the first draft of my book to the publisher for review.

Since last week, I have doubled the size of the Administration chapter (20),
and updated all the SQL output to match the 7.0 format.

The book is accessible at:

http://www.postgresql.org/docs/awbook.html

Comments welcomed.  At some point, as the publishing date gets closer, I
will no longer be able to make changes to the book.

-- 
  Bruce Momjian                         |  http://www.op.net/~candle
  [EMAIL PROTECTED]                     |  (610) 853-3000
  +  If your life is a hard drive,      |  830 Blythe Avenue
  +  Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026



[GENERAL] Trouble-free vacuum w/concurrent writes? (was "PostgreSQL capabilities")

2000-05-31 Thread Ed Loehr

Bruce Momjian wrote:
> 
> > Alex Pilosov wrote:
> > >
> > > > http://networkdna.com/database/index.html mentions that PostgreSQL is
> > > > capable of "Online backup". What does that exactly mean?
> > >
> > > It means Postgres can do a reliable backup (a consistent snapshot) of a
> > > database without shutting down the database.
> >
> > Hmmm.  My backup procedure, based on earlier discussions in this group,
> > involves blocking all write-access during a pg_dump.  That is effectively
> > shutting down the database from my perspective.  Is there a quicker way
> > to take a consistent snapshot while still allowing writes?
> 
> With 6.5 and 7.0, pg_dump grabs a consistent snapshot of the database at
> the time it starts, and dumps that.  No reason to shut out users.

Can other folks confirm that this is no longer a problem?  Are people
successfully vacuuming while allowing full read/write access to the db?

Regards,
Ed Loehr



Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Bruce Momjian

> Alex Pilosov wrote:
> > 
> > > http://networkdna.com/database/index.html mentions that PostgreSQL is
> > > capable of "Online backup". What does that exactly mean?
> > 
> > It means Postgres can do a reliable backup (a consistent snapshot) of a
> > database without shutting down the database.
> 
> Hmmm.  My backup procedure, based on earlier discussions in this group,
> involves blocking all write-access during a pg_dump.  That is effectively
> shutting down the database from my perspective.  Is there a quicker way
> to take a consistent snapshot while still allowing writes?

With 6.5 and 7.0, pg_dump grabs a consistent snapshot of the database at
the time it starts, and dumps that.  No reason to shut out users.

-- 
  Bruce Momjian                         |  http://www.op.net/~candle
  [EMAIL PROTECTED]                     |  (610) 853-3000
  +  If your life is a hard drive,      |  830 Blythe Avenue
  +  Christ can be your backup.         |  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Ed Loehr

Alex Pilosov wrote:
> 
> > http://networkdna.com/database/index.html mentions that PostgreSQL is
> > capable of "Online backup". What does that exactly mean?
> 
> It means Postgres can do a reliable backup (a consistent snapshot) of a
> database without shutting down the database.

Hmmm.  My backup procedure, based on earlier discussions in this group,
involves blocking all write-access during a pg_dump.  That is effectively
shutting down the database from my perspective.  Is there a quicker way
to take a consistent snapshot while still allowing writes?

Regards,
Ed Loehr



Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Mihai Gheorghiu

Thanks a lot.
Now, what is WAL?
When is it scheduled for implementation?


>It means Postgres can do a reliable backup (a consistent snapshot) of a
>database without shutting down the database.
>
>What you are asking for is replication, which is not easy to implement,
>and almost damn impossible to get it RIGHT. (*curse at both Sybase and
>Oracle replication servers*). (i.e. how do you resolve replication
>conflicts, how do you resync databases for which you don't have
>transaction logs, etc). I assume for Postgres, replication is a
>possibility after WAL is implemented...
>
>On Tue, 30 May 2000, Mihai Gheorghiu wrote:
>
>> http://networkdna.com/database/index.html mentions that PostgreSQL is
>> capable of "Online backup". What does that exactly mean?
>> I'd like to be able to run a synchronization (in MS Access terms) (or
>> "incremental backup"???), i.e. to have two databases in two locations,
>> normally using only one of them and updating the other one. (Normal full
>> backup looks unrealistic for 300MB over 1/3 of a T1.) In case the
>> communication line between the two centers fails, users at the two ends
>> should be able to use the local databases, and changes made during
>> communication downtime be appended to the other database after
communication
>> resume.
>




Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Alex Pilosov

See archives of this mailing list.

WAL is write-ahead logging, a more conventional way of assuring atomicity
(i.e., before a transaction is written to the database, it is written to
the transaction log, which can be replayed). Replication can be achieved by
streaming the transaction log to your replicated computer, which will apply
it to its own database.
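The log-then-apply discipline and log-shipping replication described above can be sketched in miniature (a toy model, nothing like PostgreSQL's actual WAL format):

```python
# Toy write-ahead log: every change is appended to the log BEFORE touching
# the "database"; a replica is brought up to date by replaying that log.

def apply(db, record):
    op, key, value = record
    if op == "put":
        db[key] = value
    elif op == "delete":
        db.pop(key, None)

log = []       # the write-ahead log: the single source of truth
primary = {}

def execute(record):
    log.append(record)      # 1. write to the log first
    apply(primary, record)  # 2. then apply to the database

execute(("put", "a", 1))
execute(("put", "b", 2))
execute(("delete", "a", None))

# Replication: ship the log to another machine and replay it there.
replica = {}
for record in log:
    apply(replica, record)

print(replica == primary)  # True
```

The same replay mechanism is what makes crash recovery possible: anything applied to the database is, by construction, already in the log.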

I think it's scheduled for postgresql 7.2...

On Wed, 31 May 2000, Mihai Gheorghiu wrote:

> Thanks a lot.
> Now, what is WAL?
> When is it scheduled for implementation?
> 
> 
> >It means Postgres can do a reliable backup (a consistent snapshot) of a
> >database without shutting down the database.
> >
> >What you are asking for is replication, which is not easy to implement,
> >and almost damn impossible to get it RIGHT. (*curse at both Sybase and
> >Oracle replication servers*). (i.e. how do you resolve replication
> >conflicts, how do you resync databases for which you don't have
> >transaction logs, etc). I assume for Postgres, replication is a
> >possibility after WAL is implemented...
> >
> >On Tue, 30 May 2000, Mihai Gheorghiu wrote:
> >
> >> http://networkdna.com/database/index.html mentions that PostgreSQL is
> >> capable of "Online backup". What does that exactly mean?
> >> I'd like to be able to run a synchronization (in MS Access terms) (or
> >> "incremental backup"???), i.e. to have two databases in two locations,
> >> normally using only one of them and updating the other one. (Normal full
> >> backup looks unrealistic for 300MB over 1/3 of a T1.) In case the
> >> communication line between the two centers fails, users at the two ends
> >> should be able to use the local databases, and changes made during
> >> communication downtime be appended to the other database after
> communication
> >> resume.
> >
> 
> 




[GENERAL] No offence..but..

2000-05-31 Thread Henrik Ridder

I now have a program that works against a PostgreSQL database. I have a
wrapper class that gets the query.
The sequence is like this in PostgreSQL:

res = PQexec(conn, query);
value = PQgetvalue(res, tuple, col);

The problem is col, because you don't know what kind of column the
program sends into this class. And I don't want to specify it in the
program every time.
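One way out of this, sketched below in Python with a stand-in Result class rather than a real driver API, is to lean on the result set's own metadata (libpq exposes this via PQnfields and PQfname), so the wrapper can handle any column layout generically:

```python
# Illustrative sketch of generic column handling: if the result carries its
# own column metadata, the caller never has to name columns. The Result
# class is a hypothetical stand-in, not a real driver type.

class Result:
    def __init__(self, columns, rows):
        self.columns = columns   # analogous to libpq's PQfname(res, col)
        self.rows = rows         # analogous to PQgetvalue(res, tuple, col)

def fetch_as_dicts(res):
    # Iterate every column by index, so callers need no column knowledge.
    return [dict(zip(res.columns, row)) for row in res.rows]

res = Result(["id", "name"], [(1, "ada"), (2, "grace")])
print(fetch_as_dicts(res))
# [{'id': 1, 'name': 'ada'}, {'id': 2, 'name': 'grace'}]
```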

(Hope somebody understands this) 
Now I am trying to do the same in Oracle with OCI. Does anybody know if
and how it could be done?

Henrik



[GENERAL] pg_dump -c doesn't order DROP TABLE correctly w.r.t inheritance

2000-05-31 Thread Louis-David Mitterrand

I am starting to use inheritance between tables, and when performing a
pg_dump -c (i.e. a clean drop schema) the DROP TABLE statements are not
correctly ordered: "DROP TABLE mother_table" comes before "DROP TABLE
child_table", so a "psql -f dump.sql" fails to clear the tables, because
you can't DROP a table before its child tables have been dropped.
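[Editor's note: a minimal sketch of the failure, using the table names from the report above.]

```sql
CREATE TABLE mother_table (id int4);
CREATE TABLE child_table () INHERITS (mother_table);

-- The order pg_dump -c emits fails:
DROP TABLE mother_table;   -- fails: child_table still inherits from it

-- Dropping children before parents succeeds:
DROP TABLE child_table;
DROP TABLE mother_table;
```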

-- 
Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.fr

I would like to meet the guy who invented sex, just to see what he's
working on now.



Re: [GENERAL] PostgreSQL capabilities

2000-05-31 Thread Alex Pilosov

It means Postgres can do a reliable backup (a consistent snapshot) of a
database without shutting down the database. 

What you are asking for is replication, which is not easy to implement,
and almost damn impossible to get it RIGHT. (*curse at both Sybase and
Oracle replication servers*). (i.e. how do you resolve replication
conflicts, how do you resync databases for which you don't have
transaction logs, etc). I assume for Postgres, replication is a
possibility after WAL is implemented...

On Tue, 30 May 2000, Mihai Gheorghiu wrote:

> http://networkdna.com/database/index.html mentions that PostgreSQL is
> capable of "Online backup". What does that exactly mean?
> I'd like to be able to run a synchronization (in MS Access terms) (or
> "incremental backup"???), i.e. to have two databases in two locations,
> normally using only one of them and updating the other one. (Normal full
> backup looks unrealistic for 300MB over 1/3 of a T1.) In case the
> communication line between the two centers fails, users at the two ends
> should be able to use the local databases, and changes made during
> communication downtime be appended to the other database after communication
> resume.




Re: [GENERAL] pg_hba.conf and password/crypt authorization

2000-05-31 Thread Peter Eisentraut

On Tue, 30 May 2000, Dana Eckart wrote:

> Things work fine with the authorization type in pg_hba.conf set to "trust".
> However, when I try to use either "crypt" or "password" psql fails to
> authenticate either the postgresql super-user (mylocal in my case) or any
> other non-privileged user.

In a fresh installation the database users don't have passwords yet, so
you must give them some first. (ALTER USER xxx WITH PASSWORD 'yyy').
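[Editor's note: for reference, a minimal sketch of what the corresponding pg_hba.conf entry might look like in this era's format. The address and mask below are examples only; adjust them to your own network.]

```
# TYPE  DATABASE  IP_ADDRESS   MASK             AUTH_TYPE
host    all       127.0.0.1    255.255.255.255  crypt
```

With such an entry in place, connections from that address are rejected until the user's password (set via ALTER USER) matches.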


-- 
Peter Eisentraut  Sernanders väg 10:115
[EMAIL PROTECTED]   75262 Uppsala
http://yi.org/peter-e/Sweden




[GENERAL]

2000-05-31 Thread Radu Coroi

Hello pgsql-general,

I have a problem. I want to move the company I work for from Informix Online
and SCO to PostgreSQL + Linux, and I want this transition to be, if
possible, very smooth.
Has anybody on this list made this transition, and found or written tools
to import Informix dbexport output and to compile .4gl files?
Can I find something like r4gl from Informix for PostgreSQL? This
transition will (probably) be done on hundreds of servers, the
administrators are not so good at database configuration and fine
tuning, and the programmers are used to this tool.


-- 
Best regards,
 Radu  mailto:[EMAIL PROTECTED]





[GENERAL] Problems with index in 7.0

2000-05-31 Thread Peter Keller

Hi all,
I made an upgrade (via pg_dump) from 6.5 to 7.0 (i686-pc-linux-gnu,
compiled by gcc egcs-2.91.66).

I created an index on column gmkg_tl by doing:
create index ix_flurst_gmkg_tl_ix on flurstueck using btree (gmkg_tl
bpchar_ops);

gmkg_tl is char(16).

When I do a
select * from flurstueck where gmkg_tl = 'kaBV00110x9W06Zq';

in version 6.5 the query plan is:
Aggregate  (cost=124.82 rows=1217 width=12)
  ->  Index Scan using ix_flurst_gmkg_tl on flurstueck  (cost=124.82
rows=1217 width=12)

This is OK, but when I do the same thing in version 7.0 the query plan
is:
NOTICE:  QUERY PLAN:

Seq Scan on flurstueck  (cost=0.00..4784.31 rows=2445 width=16)

Yes, I did a VACUUM / VACUUM ANALYZE.

Any ideas?
Thanks,
Peter
--
Bezirksfinanzdirektion Muenchen
  Vermessungsabteilung 
.
 Peter Keller   :  Tel: (+49) 089-2190-2594 
 Vermessungsrat z.A.:  Fax: (+49) 089-2190-2459
 Alexandrastr. 3:  mailto:[EMAIL PROTECTED]
 80538 Muenchen :  web: http://www.bayern.de/vermessung