Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-14 Thread nor...@dunbar-it.co.uk
Ann Harrison  wrote:

>And a different good morning to you Norman,
And already it's evening. :-)

>Of course, and thank you for the detailed and complete description of a
>mechanism I glossed over irresponsibly.
You're welcome. You may be surprised at how many people think that Oracle uses 
log files for undo. 

> ...


>Ha ha ha!  You don't get month end this month because your DBA screwed
>up!  Try Firebird next time.
But I do! I looked after over 600 Oracle databases and 43 Firebird ones until 
recently.

More problems with Oracle.

Cheers,
Norm.



-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-14 Thread Ann Harrison
And a different good morning to you Norman,


> If I may be so bold ...

Of course, and thank you for the detailed and complete description of a
mechanism I glossed over irresponsibly.

> As long as the DBA correctly sizes the UNDO tablespace and sets the
> UNDO_RETENTION parameter to a sensible value, data changes can happily be
> rolled back all the time.
>
> If the DBA failed miserably in his/her duties, "ORA-01555 Snapshot too
> old" errors are the result. And they are most irritating. Long-running
> SELECTs - batch reports, for example - tend to show this error most often.

Ha ha ha!  You don't get month end this month because your DBA screwed
up!  Try Firebird next time.

Cheers,

Ann


Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-14 Thread Norman Dunbar
Morning Ann,

On 13/12/11 18:18, Ann Harrison wrote:
> ... The method used by
> Oracle (and MySQL's InnoDB) keeps old versions in log files which are
> purged periodically.  A long running transaction can get inconsistent
> data if the log file that contained the record version appropriate for
> it has been purged.

If I may be so bold ...

Oracle uses log files for REDO and uses rollback segments or UNDO 
segments (depending on the Oracle version) for UNDO. It never uses log 
files for UNDO - and UNDO is what provides Read Consistency/MVCC in an 
Oracle database.

Changes are written to the LOG_BUFFER (in memory) and periodically - on 
commit, every 3 seconds at most, or when the buffer is about one-third 
full - flushed to the REDO logs. These REDO logs might be archived to 
disc when they fill up; that depends on the database's archive log mode, 
though.

These logs are used when a database is restored and rolled forward 
(using the RECOVER DATABASE command, for example).
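
For anyone following along at home, the current mode is easy to check. A 
rough sketch, run as a suitably privileged user in SQL*Plus (the V$ views 
below are the standard ones):

    -- Is the database archiving its redo logs?
    SELECT log_mode FROM v$database;

    -- The online redo log groups, their sizes and states
    SELECT group#, bytes, status FROM v$log;

    -- SQL*Plus shortcut: archive mode, destination and sequence numbers
    ARCHIVE LOG LIST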

In order to roll back changes and to ensure read consistency, UNDO is 
used. UNDO blocks do live on disc - as undo tablespace files - but are 
also cached in memory in the buffer cache alongside data blocks etc.

When a SELECT starts, the data returned come from the data blocks. Each 
row in a block has an indicator of when it was last updated. If an 
uncommitted update is pending, or if a commit has taken place since this 
SELECT started, then the data read from that block have changed - and are 
not consistent with the start time of this SELECT's transaction.

When this is detected, Oracle "rolls back" the changes to the start time 
of the SELECT by looking up the UNDO block(s) associated with the 
transaction that made the changes. If that produces the correct 
(consistent) data, that's what you get.

If it turns out that there were other transactions that also changed the 
data, they too will be detected and undone.

In this way you only ever see data that was consistent at the start of 
your own transaction.
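
A tiny demonstration of this, assuming a hypothetical table T(ID, VAL) 
and a read-only transaction in the first session so that reads stay 
consistent to the transaction's start:

    -- Session A
    SET TRANSACTION READ ONLY;
    SELECT val FROM t WHERE id = 1;   -- say this returns 10

    -- Session B
    UPDATE t SET val = 20 WHERE id = 1;
    COMMIT;

    -- Session A, still in the same transaction
    SELECT val FROM t WHERE id = 1;   -- still 10, rebuilt from UNDO
    COMMIT;
    SELECT val FROM t WHERE id = 1;   -- a new transaction now sees 20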

As long as the DBA correctly sizes the UNDO tablespace and sets the 
UNDO_RETENTION parameter to a sensible value, data changes can happily be 
rolled back all the time.
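
For reference, the retention target is set in seconds, and V$UNDOSTAT 
records (per ten-minute slice) the longest query seen and how many 
ORA-01555 errors occurred, which is handy for sizing. A sketch, assuming 
an spfile is in use:

    -- Aim to keep committed undo for an hour
    ALTER SYSTEM SET UNDO_RETENTION = 3600 SCOPE=BOTH;

    -- Longest running query and "snapshot too old" count per slice
    SELECT begin_time, maxquerylen, ssolderrcnt
      FROM v$undostat
     ORDER BY begin_time;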

If the DBA failed miserably in his/her duties, "ORA-01555 Snapshot too 
old" errors are the result. And they are most irritating. Long-running 
SELECTs - batch reports, for example - tend to show this error most often.

This mechanism is also used to carry out a CONSISTENT=YES export of the 
data - any changes made since the start of the export are "rolled back" 
to get the data as it was when the export started.
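
With the legacy exp utility that is just the CONSISTENT flag on the 
command line; something along these lines (credentials and file names 
made up):

    exp system/password FULL=y CONSISTENT=y FILE=full.dmp LOG=full_exp.log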

UNDO segments are in three states:

Active - currently in use by a transaction.

Unexpired - a transaction has committed, but the UNDO_RETENTION period 
has not yet expired. This segment should not be reused.

Expired - this segment is free for reuse.

When a transaction needs a new UNDO segment, it first attempts to use an 
expired one. If there are none, it tries to create a new one in the free 
space, if any, of the undo tablespace. If it cannot, it re-uses an 
unexpired segment. This renders the data in that segment unavailable and 
is the cause of the "snapshot too old" errors mentioned above.
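
You can watch the balance of the three states in the data dictionary. 
Strictly speaking the view reports undo extents rather than whole 
segments, but it tells the same story (DBA privileges needed):

    -- How much undo is currently active, unexpired and expired?
    SELECT status,
           COUNT(*)                        AS extents,
           ROUND(SUM(bytes) / 1024 / 1024) AS megabytes
      FROM dba_undo_extents
     GROUP BY status;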

Note to self: next time, be brief!


Cheers,
Norm.

-- 
Norman Dunbar
Dunbar IT Consultants Ltd

Registered address:
Thorpe House
61 Richardshaw Lane
Pudsey
West Yorkshire
United Kingdom
LS28 7EL

Company Number: 05132767


Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-13 Thread Ann Harrison
On Tue, Dec 13, 2011 at 1:53 PM, Helen Borrie  wrote:
>
> What the client can "see" depends on the isolation level of the transaction.  
> At all levels, your client's transaction sees its own changes.  Those will 
> not be visible to any other transaction until your client's transaction 
> commits.


Helen is completely correct.  The mechanism I described is the one
used by Snapshot (concurrency) transactions.  That's the mode that
takes best advantage of MVCC for
readers.


Cheers,

Ann


Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-13 Thread Helen Borrie
At 03:39 AM 14/12/2011, Joey S. wrote:
>Hello,
>
>I am researching methods to achieve high throughput for a learning chat
>application I am designing as a self-led student.  I came across the
>Multiversion Concurrency Control (MVCC) method of ACID and have a theoretical
>question: though it seems remote that a reader of data wouldn't get an "old"
>read, what has been the experience, and when this occurs what methods are
>available to alert the reading user that the data has been updated?

What the client can "see" depends on the isolation level of the transaction.  
At all levels, your client's transaction sees its own changes.  Those will not 
be visible to any other transaction until your client's transaction commits.
  
-- Read Committed allows a client to see any changes committed after his 
transaction started.  It is the responsibility of the client application to 
re-query the database to bring the remote view up to date. 

-- Snapshot isolation (a.k.a. concurrency) keeps a persistent view of the 
database state as it was when your client's transaction started.  Your client 
won't see changes committed by other transactions until his transaction commits.

-- Snapshot Table Stability (a.k.a. consistency) is like concurrency, but it 
also prevents any other transactions from committing work on the tables it is 
reading.  On the other side of the coin, it is difficult for the consistency 
transaction to get write access to the tables if other transactions already 
have it.  You wouldn't use this level often, if ever. 
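
In DSQL - from isql, say - the three levels look roughly like this; most 
drivers expose the same choices through their own transaction-parameter 
settings:

    -- Read Committed: sees others' work as it is committed
    SET TRANSACTION READ WRITE WAIT
        ISOLATION LEVEL READ COMMITTED RECORD_VERSION;

    -- Snapshot (concurrency): a stable view as at transaction start
    SET TRANSACTION READ WRITE WAIT
        ISOLATION LEVEL SNAPSHOT;

    -- Snapshot Table Stability (consistency): also locks out writers
    SET TRANSACTION READ WRITE NO WAIT
        ISOLATION LEVEL SNAPSHOT TABLE STABILITY;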

A transaction can *never* see pending writes for other transactions ("Dirty 
Read").  However, a pending write by one transaction will always cause a wait 
(if allowed by the lock resolution policy) or a lock conflict in any others 
that attempt to change the same rows. 

How you "alert the user" to changes is up to you.  The methods available vary 
according to what you are using to wrap the API - some drivers have extreme 
"smarts" to detect changes.  Commonly, any such methods are based on trapping 
lock conflict exceptions, the nature of which will depend on both the isolation 
level and the lock resolution policy (wait/no wait). 

Another approach is to use the Events mechanism in triggers, whereby an After 
Insert or Update or Delete trigger will issue a POST_EVENT when a 
state-changing operation is committed.  Clients are set up to listen for these 
events and you write the application to respond in some appropriate way, e.g., 
re-query the tables (refresh) in Read Committed;  or attempt to commit and 
refresh in Snapshot/concurrency.
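
A minimal sketch of such a trigger, with a made-up table MESSAGES and 
event name NEW_MESSAGE (clients would register for the event through 
their driver's event-listening API):

    SET TERM ^ ;

    CREATE TRIGGER trg_messages_ai FOR messages
    ACTIVE AFTER INSERT POSITION 0
    AS
    BEGIN
      -- Listeners are only notified when this transaction commits
      POST_EVENT 'new_message';
    END^

    SET TERM ; ^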

./heLen



Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-13 Thread Ann Harrison
On Tue, Dec 13, 2011 at 9:39 AM, Joey S.  wrote:
>
> I am researching methods to achieve high throughput for a learning chat
> application I am designing as a self-led student.  I came across the
> Multiversion Concurrency Control (MVCC) method of ACID and have a theoretical
> question: though it seems remote that a reader of data wouldn't get an "old"
> read, what has been the experience, and when this occurs what methods are
> available to alert the reading user that the data has been updated?


MVCC can be implemented in several ways.  The method used by Firebird
(and InterBase and PostgreSQL) guarantees that old versions of records
will be available as long as there are transactions that can read
them.  All these systems store old records in the actual data files
and use a garbage collector (or vacuum) to remove them when the last
transaction that can possibly read them exits.  The method used by
Oracle (and MySQL's InnoDB) keeps old versions in log files which are
purged periodically.  A long running transaction can get inconsistent
data if the log file that contained the record version appropriate for
it has been purged.

So, the experience is that using the Firebird/Postgres method, readers
always get a consistent view of data.  With Oracle and InnoDB, less so.

Firebird keeps information about each running transaction, including
the oldest transaction that was running when the transaction started.
When choosing record versions to remove, Firebird compares the
transaction identifier in the record version header with the "oldest
of the oldest" - i.e. the oldest transaction that was running when the
oldest transaction now running started - and keeps one version older
than that.

If the oldest transaction running is 200 and the oldest transaction
running when it started was 175 and the chain of record versions goes
199, 176, 175, 174, 173, 140, 123, Firebird can remove the versions
created by 173, 140, and 123.   The newer versions will all stay until
transaction 200 exits and the next "oldest of the oldest" is higher
than 175.
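
If you want to watch those markers move, gstat -h reports them, and on 
Firebird 2.1 or later you can also query the monitoring tables - roughly:

    -- Transaction counters that drive garbage collection
    SELECT mon$oldest_transaction,  -- oldest interesting transaction
           mon$oldest_active,       -- oldest transaction still running
           mon$oldest_snapshot,     -- the "oldest of the oldest" threshold
           mon$next_transaction
      FROM mon$database;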

Does that help?

Best regards,

Ann


Re: [firebird-support] Multi-Version Consistent Read Question

2011-12-13 Thread Alexey Kovyazin
Hello Joey,

Can you please tell us at which university you are studying?
Why did you choose Firebird - was it recommended to you?

Regards,
Alexey Kovyazin

> Hello,
>
> I am researching methods to achieve high throughput for a learning chat
> application I am designing as a self-led student. I came across the
> Multiversion Concurrency Control (MVCC) method of ACID and have a
> theoretical question: though it seems remote that a reader of data
> wouldn't get an "old" read, what has been the experience, and when this
> occurs what methods are available to alert the reading user that the data
> has been updated?
>
> Thank you,
> Joey


