Re: [GENERAL] how do you run your backups?

2001-01-26 Thread Vadim Mikheev

  Is there a way to start with yesterday's dump and load each transaction
  log dump in order until you have a complete restore of the database?

WAL-based BAR (backup and restore) is not implemented in 7.1.
Try Rserv from 7.1's contrib - it generates consistent incremental
snapshots.

 Also, does a pg_dump guarantee a consistent view of all of the tables in a
  database at a given snapshot in time, ...

Yes. And I believe this is reflected in the docs.

Vadim





Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions

1999-11-29 Thread Vadim Mikheev

Tom Lane wrote:
 
 Bruce Momjian [EMAIL PROTECTED] writes:
  I have to say that I'm going to change on-disk database/table/index
  file names to _OID_! This is required by WAL, because log records
  will contain just database/table/index oids, not names, and after a
  crash, recovery will not be able to read pg_class to get the
  database/table/index name from the oid ...
 
  Wow, that is a major pain.  Anyone else think so?
  Using oid's instead of names may give us some ability to fix some other
  bugs, though.
 
 Yes, and yes.  I've been trying to nerve myself to propose that, because
 it seems the only reasonable way to make rollback of RENAME TABLE and
 DROP TABLE work safely.  It'll be a pain in the neck for debugging and
 admin purposes though.

So, no more nerves needed, Tom, yeh? -:)
It would be nice if someone else, not me, implemented this...

 Can we make some sort of usually-correct-but-not-guaranteed-correct
 dump that shows which corresponds to what?  Maybe something similar
 to the textfile dump of pg_shadow that the postmaster uses for password
 authentication?  Then at least you'd have some shot at figuring out
 which file was what in extremis...

As was proposed: a utility to create a dir with the database name
(in addition to the dir with the database oid, where the data really live)
and symlinks there: table_name -> ../db_oid/table_oid

Vadim





Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions

1999-11-28 Thread Vadim Mikheev

Bruce Momjian wrote:
 
  if PostgreSQL could successfully rollback DDL statements sanely (and thus
  diverge from ORACLE). I guess I don't expect that to happen successfully
  until something the equivalent of TABLESPACES is implemented and there is
  a disassociation between table names, index names and their filesystem
  counterparts and to be able to "undo" filesystem operations. That, it
  seems to me, will be a major undertaking and not going to happen any
  time soon...
 
 Ingres has table names that don't match on-disk file names, and it is a
 pain to administer because you can't figure out what is going on at the
 file system level.  Table files have names like AAAHFGE.

I have to say that I'm going to change on-disk database/table/index
file names to _OID_! This is required by WAL, because log records
will contain just database/table/index oids, not names, and after a
crash, recovery will not be able to read pg_class to get the
database/table/index name from the oid ...

Vadim





Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions

1999-11-28 Thread Vadim Mikheev

Mike Mascari wrote:
 
 Will that aid in fixing a problem such as this:
 
 session 1:
 
 CREATE TABLE example1(value int4);
 BEGIN;
 
 session 2:
 
 BEGIN;
 ALTER TABLE example1 RENAME TO example2;
 
 session 1:
 
 INSERT INTO example1 VALUES (1);
 END;
 NOTICE: Abort Transaction and not in in-progress state
 ERROR: Cannot write block 0 of example1 [test] blind
 
 session 2:
 
 END;
 NOTICE: Abort Transaction and not in in-progress state
 ERROR: Cannot write block 0 of example1 [test] blind

Seems that oid file names will fix this...
Currently, each shared buffer description structure has
database/table names for the purposes of "blind" writes
(when the backend cache has no entry for the relation, so
bufmgr can't use the cache to get names from oids).
ALTER TABLE ... RENAME renames the relation file(s) but doesn't
change the relation name inside the shared buffers...

 Mike (implicit commit) Mascari

-:)))

Vadim





Re: [HACKERS] Re: [GENERAL] drop/rename table and transactions

1999-11-28 Thread Vadim Mikheev

Bruce Momjian wrote:
 
  I have to say that I'm going to change on-disk database/table/index
  file names to _OID_! This is required by WAL, because log records
  will contain just database/table/index oids, not names, and after a
  crash, recovery will not be able to read pg_class to get the
  database/table/index name from the oid ...
 
 Wow, that is a major pain.  Anyone else think so?

Why is it so painful?
We can write a utility to construct a database dir with table names
symlinked to the real table files -:)
Actually, I don't understand
why you would need to know what is what, (c) -:)

 Using oid's instead of names may give us some ability to fix some other
 bugs, though.

Yes.

Vadim





Re: [GENERAL] Postgres concurrency : urgent

1999-11-11 Thread Vadim Mikheev

Marcin Inkielman wrote:
 
 I 'm using :
 
 LOCK tablename IN ACCESS EXCLUSIVE MODE;
^^
This blocks concurrent read access - is that what you really want?

 to control concurrent transactions - it works for me
 (postgres 6.5.2)
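
If the intent is only to keep writers from interfering while still
allowing reads, a weaker lock mode may serve. A hypothetical sketch
(the table name is taken from the quoted example; the column is invented):

```sql
-- Hypothetical sketch: serialize writers without blocking plain readers.
-- (ACCESS EXCLUSIVE, as quoted above, blocks readers too.)
BEGIN;
LOCK tablename IN SHARE ROW EXCLUSIVE MODE;  -- other writers wait here
UPDATE tablename SET counter = counter + 1;  -- invented column
COMMIT;  -- concurrent SELECTs were never blocked
```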

Vadim





Re: [GENERAL] Postgres concurrency : urgent

1999-11-11 Thread Vadim Mikheev

V Krishnaraj wrote:
 
 This application works fine when we are in single user. When we
 go into multi-user, the app has concurrency problems.
 
 The application has a sql query and updates onto the base table in
 the sql query. We are using Select FOR UPDATE to open the
 cursor and multiple update statements to update values on different

FOR UPDATE cursors are not implemented yet. How could you be using them?

 base tables. When multiple users are running the program, after a
 few iterations postmaster startsputting out messages that there are
 concurrency problems due to which the execute before the fetch
 fails.

What messages?

Vadim





[GENERAL] Re: Postgres concurrency : urgent

1999-11-11 Thread Vadim Mikheev

V Krishnaraj wrote:
 
 After a few times, lots of messages are spewed out in the postgres
 server log. A typical message looks like this:
 NOTICE: Message from PostgreSQL backend:
 The Postmaster has informed me that some other backend died abnormally
 and possibly corrupted shared memory.
 I have rolled back the current transaction and am going to terminate
 your database system connection and exit.
 Please reconnect to the database system and repeat your query.

Typical messages are not the interesting ones -:)
Please find which backend exited with a non-zero code, and whether there
was a message before that or not. If there was no message about some
exceptional condition, then try to re-compile with assertions
enabled (configure --enable-cassert), or post the gdb
output for the .../data/base/_your_database_/core file to us.

 2. I'm surprised that select for update is not implemented. I looked at
 the manual and it says this is implemented in v 6.5.1. So also my
 programs are behaving consistent with Select for Update (giving a
 concurrent access message on already locked rows).

Well, FOR UPDATE is not implemented for server-side cursors.
But it works for a plain SELECT.
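
A minimal sketch of the working form, with an invented accounts table
(the cursor variant, DECLARE ... FOR UPDATE, is the part that is not
implemented):

```sql
-- Hypothetical example: a plain SELECT ... FOR UPDATE locks the selected
-- rows until the transaction ends; a second session issuing the same
-- SELECT ... FOR UPDATE on the same row would wait here.
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```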

 3.I'm not sure whether what I want is
   LOCK tablename IN ACCESS EXCLUSIVE MODE;
 I'll have to test this.
 What exactly I want is, I want the selects on transactions to wait till
 the main locking transaction updates and commits. I want to queue all
 requests in a serialzed fashion.

Vadim





Re: [GENERAL] Postgres INSERTs much slower than MySQL?

1999-10-21 Thread Vadim Mikheev

Lincoln Yeoh wrote:
 
 At 04:38 PM 20-10-1999 +0800, Vadim Mikheev wrote:
 You hit buffer manager/disk manager problems, or you ate all the disk space.
 As for "modifying" - I meant insertion, deletion, update...
 
 There was enough disk space (almost another gig more). So it's probably
 some buffer manager problem. Is that the postgres buffer manager or is it a
 Linux one?
 
 Are you able to duplicate that problem? All I did was to turn off
 autocommit and start inserting.

I created a table with a text column and inserted 100 rows
of '!' x rand(256) without problems on a Sun Ultra, 6.5.2.
I ran postmaster with only the -S flag.
And while inserting, I ran select count(*) from _table_
in another session from time to time - guess what was
returned all the time before the commit? -:))

  Well anyway the Postgres inserts aren't so much slower if I only commit
  once in a while. Only about 3 times slower for the first 100,000 records.
  So the subject line is now inaccurate :). Not bad, I like it.
 
 Hope that it will be much faster when WAL is implemented...
 
 What's WAL? Is postgres going to be faster than MySQL? That would be pretty
^^^
No.

 impressive- transactions and all. Woohoo!

WAL is Write-Ahead Logging - transaction logging.
This will reduce the # of fsyncs (among other things) Postgres has
to perform now.
The test above took nearly 38 min without the -F flag and 24 min
with -F (no fsync at all).
With WAL, the same test without -F will be nearly as fast as with
-F is now.

But what makes me unhappy right now is that with -F, COPY FROM takes
JUST 3 min !!! (And 16 min without -F.)
Isn't the parsing/planning overhead too big?!

Vadim





Re: [GENERAL] Postgres INSERTs much slower than MySQL?

1999-10-20 Thread Vadim Mikheev

Lincoln Yeoh wrote:
 
 It's now a lot faster. Now only about 5 or so times slower. Cool.
 
 But it wasn't unexpected that I got the following after a while ;).
 
 NOTICE:  BufferAlloc: cannot write block 990 for joblist/central
 
 NOTICE:  BufferAlloc: cannot write block 991 for joblist/central
 DBD::Pg::st execute failed: NOTICE:  BufferAlloc: cannot write block 991
 for joblist/central
 Error executing insert!NOTICE:  BufferAlloc: cannot write block 991 for
 joblist/central
 Database handle destroyed without explicit disconnect.
 
 I don't mind that. I was actually waiting to see what would happen and
 my jaw would have dropped if MVCC could handle Multi Versions with
 10,000,000 records!

It doesn't seem to be an MVCC problem. MVCC uses transaction ids,
not tuple ones, and so should work with any number of rows
modified by a concurrent transaction... In theory... -:))

Vadim





Re: [GENERAL] Postgres INSERTs much slower than MySQL?

1999-10-20 Thread Vadim Mikheev

Lincoln Yeoh wrote:
 
 At 04:12 PM 20-10-1999 +0800, Vadim Mikheev wrote:
 It doesn't seem to be an MVCC problem. MVCC uses transaction ids,
 not tuple ones, and so should work with any number of rows
 modified by a concurrent transaction... In theory... -:))
 
 OK. Dunno what I hit then. I wasn't modifying rows, I was inserting rows.

You hit buffer manager/disk manager problems, or you ate all the disk space.
As for "modifying" - I meant insertion, deletion, update...

 How many rows (blocks) can I insert before I have to do a commit?

Each transaction can have up to 2^32 commands.
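
Since each commit currently forces an fsync, the batching the poster
describes can be sketched like this (table name invented):

```sql
-- Hypothetical batching sketch: group many INSERTs into one transaction
-- instead of committing each row individually.
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- ... many more inserts; up to 2^32 commands per transaction ...
COMMIT;  -- one commit (and its fsync) covers the whole batch
```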

 Well anyway the Postgres inserts aren't so much slower if I only commit
 once in a while. Only about 3 times slower for the first 100,000 records.
 So the subject line is now inaccurate :). Not bad, I like it.

Hope that it will be much faster when WAL is implemented...

Vadim





Re: [GENERAL] Client dies in transaction ?

1999-08-09 Thread Vadim Mikheev

Leon wrote:
 
 speed of joins and sudden client death. :)  The third question
 should be: what do I do with transaction deadlock, since there
 is no lock wait timeouts?

The server should abort one of the transactions to resolve a deadlock -
we don't use timeouts for this.
Did you get unresolved deadlocks?
If so, then there is a bug in the lock manager.
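
A hypothetical two-session schedule that produces such a deadlock
(table t and its rows are invented):

```sql
-- Session A:                       -- Session B:
BEGIN;                              BEGIN;
UPDATE t SET v = 0 WHERE id = 1;    UPDATE t SET v = 0 WHERE id = 2;
UPDATE t SET v = 0 WHERE id = 2;    UPDATE t SET v = 0 WHERE id = 1;
-- A waits for B's lock on id = 2; B waits for A's lock on id = 1.
-- The server detects the cycle and aborts one of the two transactions
-- with an ERROR; the survivor can COMMIT normally.
```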

Vadim



Re: [GENERAL] OID IS INVALID?

1999-07-21 Thread Vadim Mikheev

Mark Wilson wrote:
 
 Last night I reformatted the disk, create a new filesystem and restored
 the last good full backup (from about a week ago) hoping I could get a
^^^
What did you use for backup?
pg_dump/copy, or did you cp the data files to another place?

 version of the system back on the disk prior to this event.  Unfortunately
 I'm still given the same exact errors shown above.  Does anyone have an
 idea of what I can to do fix these problems.  I really need to get the
 database back online quickly.

Either the disk is still bad, or you didn't use pg_dump/copy for the backup.

Vadim



Re: [GENERAL] Stuck in a vacuum.

1999-07-05 Thread Vadim Mikheev

Stuart Rison wrote:
 
 Since I thought that probably left the table a bit messed up, I started a:
 
 VACUUM blast_hits;
 
 It's using 95% of the cpu and seems to be going nowhere (at least not in
 the 30 minutes it has been running so far).
 
 QUESTION 2: What do I do now?  Is there any way I can kill the VACUUM or
 will they be the final nail in the table's coffin?

VACUUM uses transactions, so there shouldn't be problems with
stopping it.
After that, try to drop all indices on blast_hits and re-vacuum
(in verbose mode).

BTW, PG version?

Vadim



Re: [GENERAL] TODO list elements

1998-12-14 Thread Vadim Mikheev

Hiroshi Inoue wrote:
 
  And note - this will be not row level locking, but
  multi-version concurrency control.
 
 
 What does it mean ?
 LLL in 6.5 doesn't include row level locking ?

Some systems (Informix, Sybase) use locking for concurrency control;
others (Oracle, Interbase) use multi-versioning for this.
I'm implementing multi-version concurrency control.

Vadim



Re: [GENERAL] TODO list elements

1998-12-14 Thread Vadim Mikheev

Hiroshi Inoue wrote:
 
 My words might have been obscure.
 
 What I meant was:
 
  How do writers block other writers in LLL?
 
 Certainly readers block no writers (or readers) in LLL.
 But do writers block no writers, or only same-row writers, or
 same-table writers?
 
 Currently, writers block same-table writers (readers also)?

Only same-row writers will be blocked.
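
A hypothetical illustration of that blocking rule (table t is invented):

```sql
-- Session A:
BEGIN;
UPDATE t SET v = v + 1 WHERE id = 1;  -- A now holds a lock on row id = 1

-- Session B, concurrently:
UPDATE t SET v = v + 1 WHERE id = 2;  -- different row: not blocked
UPDATE t SET v = v + 1 WHERE id = 1;  -- same row: waits until A ends
SELECT * FROM t;                      -- readers are never blocked
```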

Vadim



Re: [GENERAL] TRANSACTION ISOLATION

1998-08-25 Thread Vadim Mikheev

claudio navarro wrote:
 
 I would like to know which isolation level PostgreSQL has?
 Dirty read, read committed or repeatable read?

Serializable only.

Vadim



Re: [GENERAL] row oids as foreign keys in other tables ?

1998-07-29 Thread Vadim Mikheev

Matt McClure wrote:
  
   If vacuum does not alter row oids, then I have another question.  How does
   postgres re-use oids?  I've seen the numbers grow and grow, but despite

  It doesn't.
 
 Doesn't the fact that postgres never re-uses deleted (and therefore no
 longer in use anywhere) oids create a problem when you reach the upper
 bound?  Or is the upper bound on oids so ridiculously high that it
 shouldn't be a concern?  Or does postgres have a scheme for increasing
 oids without bound entirely?

We have plans to make the use of global oids in user tables optional...

 
 In any case, using row oids from one table as values in another table
 won't ever be an issue, right?

Right. But you could also use sequences...
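
A sketch of the sequence-based alternative, using the concert/concert_song
names from the thread (column names are invented):

```sql
-- Generate stable keys from a sequence instead of relying on row oids;
-- sequence-generated values survive dump/restore, unlike oids.
CREATE SEQUENCE concert_id_seq;
CREATE TABLE concert (
    id   int4 DEFAULT nextval('concert_id_seq') NOT NULL,
    name text
);
CREATE TABLE concert_song (
    concert_id int4,  -- refers to concert.id
    song_id    int4
);
```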

Vadim



Re: [GENERAL] row oids as foreign keys in other tables ?

1998-07-28 Thread Vadim Mikheev

Matt McClure wrote:
 
 You say that vacuum "re-writes" the database.  Does it alter row oids???
 ^^
No.

 If so, my scheme completely corrupts my database whenever I do a vacuum,
 since in concert and song the row oids would change, but my inserted
 values would remain the same in concert_song, right?
 
 If vacuum does not alter row oids, then I have another question.  How does
 postgres re-use oids?  I've seen the numbers grow and grow, but despite
  
It doesn't.

 deletes, etc, I have never seen a lower oid get re-used.  How does this
 work?

Vadim