Re: [firebird-support] gfix -v -full sufficient?

2020-01-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Jan 11, 2020, at 7:37 AM, Kjell Rilbe kjell.ri...@marknadsinformation.se 
> [firebird-support]  wrote:
> 
> Does gfix -v -full detect all kinds of database corruption? Or are there 
> corruptions that cannot be detected? 

Gfix finds only errors in the physical structure of the database.  These 
include problems like doubly allocated or unallocated pages, bad back version 
pointers, and orphaned back version chains. 

Gfix will not find logical data errors like duplicates in a unique constraint 
or broken foreign key relationships. 

Cheers,

Ann
> 


Re: [firebird-support] Sweep isn't moving OIT

2019-11-26 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Nov 26, 2019 at 3:26 PM lucassch...@gmail.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
> I'm having a problem while running manually sweep, it's not increasing OIT
> as expected.
>
As you mentioned in your bug report, the problem was a transaction stuck in
limbo - the first half of a
two-phase commit.  Sweep, quite rightly, doesn't clean up limbo
transactions.

> Just to explain better:
>
> I'm copying the DB as a backup, and I'm trying to sweep it, so I stopped
> Firebird, restarted, tried to sweep, stopped Fb again, tried to sweep with
> offline Fb server, tried gfix -mend, which found 3 warnings (0 fixed):
>
Sweep doesn't require stopping Firebird and stopping Firebird is going to
have no effect on
the transaction that's half way through a two-phase commit.

>
> backup Thu Nov 14 18:27:53 2019
> Database: /db/db_no_sweep.fdb
> Validation started
> Warning: Relation has 2 orphan backversions {0 in use} in table
> CONTAS_A_RECEBER (247)
> Warning: Relation has 3 orphan backversions {0 in use} in table
> NFE_LOTE_DADOS (253)
> Warning: Relation has 8 orphan backversions {0 in use} in table
> NF_SAIDA_CAPA (575)
>
Orphan back versions waste space but have no effect on the consistency of
the database.  They are created if Firebird is stopped during garbage
collection.  That shouldn't happen, but neither programs nor operating
systems are immune to crashes.  When the garbage collector - thread
or cooperative -
finds old versions of a record that are no longer needed by any active
transaction, it first sets
the back version pointer to zero in the oldest version of the record that
is needed.  Then it
goes to the page that holds the back version and marks that space as free
and available for
reuse.

>
> backup Thu Nov 14 18:53:44 2019
> Database: /db/db_no_sweep.fdb
> Validation finished: 0 errors, 3 warnings, 0 fixed
>
> I've got no error, no warning at all, any ideas?
>
As you suggest elsewhere, Sweep with a verbose mode might report that
transaction xyzzy is stuck
in the middle of a two-phase commit.  That would have saved you a lot of
looking.

Good luck,

Ann


Re: [firebird-support] Table is fragmented?

2019-11-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Nov 22, 2019 at 11:47 AM rudi.fe...@multidadosti.com.br
[firebird-support]  wrote:

>
>
> Recently I ran a "Database statistics" with ibexpert on one of our dbs,
> and there were a lot of tables marked in yellow, with the tooltip saying
> "Table xxx Is fragmented"
>
>
>
> I couldn't quite find an explanation for it online, so I have a few doubts
> about it:
>


> • Is this the same as index fragmentation?
>

Sort of. Index fragmentation occurs when new entries are added to the index
that
require splitting an index page.  Suppose you have an index in which one
page
holds all the entries from "England" to "Japan".  Someone adds "Ireland"
which
doesn't fit.  As a result the page must be split into two pages - "England"
to "Georgia"
and "Germany" to "Japan".  Firebird allocates a new page for the "Germany"
to
"Japan" part of the index.  That page will tend to be at the end of the
database file.
If you had your index settled into pages in index order - which would
happen after
a backup and restore - each index page would sit in the file between the
page
logically before it and the page logically after it*.  After the page
split, the "Germany"
to "Japan" page would be at the end of the file, rather than physically
next to
"England" to "Georgia".  Back in the early 1990's with slowly rotating
magnetic
disks, that made a difference, I guess.
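The split described above can be sketched in a few lines of Python. This is a toy model I'm supplying, not Firebird's actual B-tree code; the page contents and capacity are invented for illustration:

```python
import bisect

def insert_with_split(page, key, capacity):
    """Insert key in sorted order; if the page overflows, split it in
    half and return the new right-hand page.  In Firebird the new page
    would be allocated from free space, which tends to be at the end
    of the database file."""
    bisect.insort(page, key)
    if len(page) <= capacity:
        return None          # key fit; no split needed
    mid = len(page) // 2
    right = page[mid:]       # upper half moves to the new page
    del page[mid:]
    return right

page = ["England", "France", "Georgia", "Germany", "Italy", "Japan"]
right = insert_with_split(page, "Ireland", capacity=6)
print(page)   # ['England', 'France', 'Georgia']
print(right)  # ['Germany', 'Ireland', 'Italy', 'Japan']
```

Note that the original page keeps the lower half in place, while the upper half lands on a freshly allocated page - the physical-order gap Ann describes.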

Table fragmentation occurs when new data is added to a table and the new
data
page is not adjacent to other data pages for the same table.  In an
unfragmented
table, a sequential scan of all records reads a series of physically
adjacent pages.
Or did, in the early history of rotating disks.  Striping, redundant
storage, SSDs -
the world has moved on to the point where a database ought not to worry
about physical storage locations.


> • Is this something to be worried about?
>

No.  Stable storage technology is way beyond the point where being able to
stream
pages off a disk is relevant at all.


> • Does gfix or backup/restore fix the problem?
>

Since there's no problem ... fixing it is hard.

A restore will unfragment indexes and data.  The backup gets all the data
for one
table then moves on to the next, so the restore creates all data pages on
logically
adjacent pages.  Currently gbak restores indexes one at a time, so they too
are
on logically adjacent pages.  Building indexes in parallel is a reasonable
optimization,
in which case the pages for several indexes would be interspersed.  But as
above,
it doesn't matter at all.

• How can I query the metadata to figure out if there are fragmented tables
> in my db?
>
>
>
You can't.  The physical file page numbers for data pages are stored in the
database on
special pages that ibexpert knows how to find and read.  However, it
doesn't matter at
all, since storage is so very different than it was thirty years ago.

Good luck,

Ann

* At least Firebird would think that the pages were adjacent.  In fact, the
file system could
put them anywhere or in several places.


Re: [firebird-support] After downgrading database FB3 to FB2.5 - no permission for read/select access to DATABASE

2019-11-19 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Nov 19, 2019, at 7:27 AM, kok...@yahoo.com [firebird-support] 
>  wrote:
> I upgraded the production database, let it work for some weeks, then I 
> realized that due to some limitations (update of system tables) I am not able 
> to use it with 3.0
> 

What direct system table updates are you doing?  I’m curious about missing 
features in the more secure and reliable interface. 

Cheers,

Ann

Re: [firebird-support] What is the optimum pageSize?

2019-11-17 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Nov 17, 2019, at 6:22 AM, jerz...@o2.pl [firebird-support] 
>  wrote:
> 
> On Sat, 16 Nov 2019 10:29:44 +0100, Mark Rotteveel m...@lawinegevaar.nl 
> [firebird-support] wrote:
> 
In Release Notes for these versions.
Maximum size of record was not changed.
> 
> 
>>> This is a very big limitation, a few larger char fields with multibyte 
>>> coding and we get an exception. I don't understand why this limitation 
>>> hasn't been removed.
> 
>> Because it is hard to do. A lot of internals in Firebird and its clients 
>> rely on this maximum size (hint: the number itself fits in a 2-byte 
>> unsigned integer). It is not like you just need to update a MAX_ROW_SIZE 
>> constant and it is changed.
> 
> I understand it's not trivial. But that's why there are new versions like 3, 
> 4 or now 5 to avoid such limitations.
> 
>> Consider that PostgreSQL has a limit of around 8KB (with its default 
>> page size). However, PostgreSQL has the advantage that variable length 
>> fields can be automatically stored out-of-line (ie basically as a blob, 
>> but that is transparent to the user).

Perhaps a better solution for Firebird would be a similar text type that holds 
arbitrary length strings. 
> 
> However, according to the documentation "row size" is much larger. But maybe 
> I am misreading this information.
> 
> https://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F
> 
> -- 
> kind regards
> jerzy
> 
> 
> 
> 
> Posted by: jerz...@o2.pl
> 
> 
> ++
> 
> Visit http://www.firebirdsql.org and click the Documentation item
> on the main (top) menu.  Try FAQ and other links from the left-side menu 
> there.
> 
> Also search the knowledgebases at 
> http://www.ibphoenix.com/resources/documents/ 
> 
> ++
> 
> 
> Yahoo Groups Links
> 
> 
> 


Re: [firebird-support] What are the trade-offs of CHAR vs. VARCHAR?

2019-11-13 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Nov 13, 2019, at 10:01 AM, Dimitry Sibiryakov s...@ibphoenix.com 
> [firebird-support]  wrote:
> 
> 13.11.2019 15:51, blackfalconsoftw...@outlook.com [firebird-support] wrote:
>> However, internally, the CHAR field definition is more efficient as the 
>> database does not 
>> have to perform any field size calculations at the moment that data is being 
>> updated to a 
>> VARCHAR field.
> 
>   No such calculations are performed in Firebird. Both CHAR and VARCHAR are 
> always stored 
> with full declared length. 
> 

In memory, yes.  But the entire record will be compressed before being written 
to disk. 

Good luck,

Ann


Re: [firebird-support] What are the trade-offs of CHAR vs. VARCHAR?

2019-11-13 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Nov 12, 2019, at 10:44 PM, Richard Damon rich...@damon-family.org 
> [firebird-support]  wrote:
> 
>> On 11/12/19 12:38 PM, cte...@gmail.com [firebird-support] wrote:
>>  
>> 
>> What are the trade-offs of CHAR vs. VARCHAR?  I know that VARCHAR
>> consumes less space.  Anything thing else (are VARCHAR searches slower)?
>> 
> In some implementations of SQL (I don't know if firebird is one of
> them), a row without any variable length items (like VARCHAR) and thus
> of fixed length could be stored in a somewhat optimized way making its
> access somewhat faster because all the records were the same size.

In Firebird all records are compressed on disk. 
> 
> VARCHAR also doesn't always take less space, as very short CHAR fields
> can be smaller than the overhead of a VARCHAR, and if the CHAR field is
> storing a value that is always the same length (like maybe a hash code)
> the overhead of VARCHAR is just wasted.
> 
> -- 
> Richard Damon


Re: [firebird-support] SQL

2019-08-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Aug 28, 2019, at 12:54 AM, Kjell Rilbe kjell.ri...@marknadsinformation.se 
> [firebird-support]  wrote:
> 
> Den 2019-08-05 kl. 13:20, skrev mohamed hamza medh...@hotmail.com 
> [firebird-support]:
>> 
>> Hi All,
>> 
>>   I am new to sql,  I am coming  from  Xbase  system .
>>  I want to know if it 's  possible to execute query using last query 
>> result?
>> 
>> EXAMPLE
>> 1  select * from persons where state='CA'
>> 
>> 2 select * from persons where age> 18   but  we have to use the 
>> result of 1
>> 
>> I DO NOT WANT TO DO AN AND ( STATE = 'CA' AND AGE > 18 );
> 
> 
> First: Why not use an AND? That would be the most natural solution.

An AND will perform best.  If both terms are indexed, Firebird will use both 
indexes.  
> 
> Second: No, a plain sequence of SQL queries does not support that. But you 
> can put them in a stored procedure or an "execute block" statement ...

Would a CTE work?

Good luck,

Ann

Re: [firebird-support] index vs page size

2019-06-21 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Jun 20, 2019, at 11:10 AM, Nick Upson n...@telensa.com [firebird-support] 
>  wrote:
> 
> 
> 
> Hi
> 
> I have a large 100Gb+ firebird 2.5 database and I'm trying to deal with some 
> performance issues.
> 
> One indicator I have found is that several large tables have indexes with a 
> depth of 3 (and one of 4). The guidance I have seen says to increase the page 
> size until the depth is below 3 but the pagesize is already maxed out at 16k. 
> Can anyone suggest other actions I could take.

The guidance should have been three or less, not less than three.  One index 
with four levels should not be a crisis if you have adequate page cache. 

Good luck,

Ann



> Kind Regards,
> Nick
> 
> Nick Upson | Principal Operations Engineer
> office +44 01799 588800 support hotline +44 1799 399200
> DDI +44 01799 588621
> telensa.com | @telensa | Linkedin
> 
> 
> 


Re: [firebird-support] FB 2.1 transaction question

2019-05-16 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, May 13, 2019 at 5:08 AM Rustam rusta...@ukr.net [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> Important notice: modified rows in that 'dead' transaction remains locked
> along this TA existence. So, your another connections will get 'lock
> conflict' errors ('no wait' TA's), or waiting forever ('wait' TA's) when
> try to modify same rows.
>

As Mark Rotteveel says below, if a connection is broken with an active
transaction, Firebird will detect that the
connection and thus the transaction are dead and change the state of the
transaction to dead.  That will unblock
waiting transactions, etc.  When Firebird detects a record level conflict,
it first checks the state of the blocking
transaction, then if the transaction appears to be active, it checks the
state of the connection.  So in the case of
a network glitch or program exit without cleanup, Firebird will set the
state of the blocking transaction to dead
without manual intervention.

On the other hand, if someone goes to lunch or on holiday with an
application connected to the database and
a transaction with uncommitted changes running, then manual cleanup is
required, along with better programming
discipline.



> Even if you just want reconnect on unexpected connection lose and finish
> your previous work.
>

That doesn't happen.

>
> For such extremal cases your application can present for SYSDBA special
> window that shows table with data from MON$ATTACHMENTS (login, connection's
> start date, client IP, app name).
> Deleting record from MON$ATTACHMENTS will close connection, so you can
> continue.
>

Again, only necessary in the case where the connection remains active while
the transaction is stalled.

Good luck,

Ann

>
>
>
>
> The data isn't committed, so it isn't visible to any other transaction.
> If a connection is closed with an active transaction, the transaction
> will eventually be rolled back by Firebird. See also "“Dead”
> transactions" in the Firebird book.
>
>  Mark Rotteveel
>
>
>
> 


Re: [firebird-support] Can not create backup of Firebird database because of the errors

2018-09-20 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Sep 20, 2018 at 5:30 AM Georgiy Kovalov g...@liconic.com
[firebird-support]  wrote:

>
>
> At the time of backup firebird database (gbak -g -ig) I have the following
> error:
>
> gbak:writing data for table ORDERS
> gbak: ERROR:message length error (encountered 532, expected 528)
> gbak: ERROR:gds_$receive failed
> gbak:Exiting before completion due to errors
>
OK.  The first thing to do is to run gbak with the -o and -v options.
That will tell you which table
is causing the error.  Message length errors can be caused by
compression/decompression
problems.  When you've identified the table with the error, you can probe
it to find the bad
record.  Probe with a sorted, non-indexed query and you'll eventually find
a record that will cause
the same error.



> When I'm using *gfix* with different parameters (-v -full, -mend,
> -ignore), I have the message:
>
> Summary of validation errors
> Number of index page errors : 540
>
Mend is a little dangerous, but essentially it clears invalid pointers to
pages and records, so you get
a null pointer instead of a bad one.  Whatever the bad one pointed to will
go from unavailable to
ready to be removed.  Certainly not the option I would use first.  Index
page errors are generally
benign.  Firebird indexes can contain pointers to deleted records after a
crash or hard shutdown.
That's expected and eventually fixed if the entry is visited.


> In *firebird.log* file I see the lines:
>
> PC (Server) Thu Sep 20 08:37:01 2018
> Database: E:\...GDB
> Index 2 is corrupt on page 134706 level 1. File: 
> ..\..\..\src\jrd\validation.cpp, line: 1699
>  in table COMPONENTS (197)
>
>
>
That's a problem that can be fixed by rebuilding the index.  However, it's
unlikely to be the source
of your gbak problem because gbak doesn't backup or reference user defined
indexes.

Good luck,

Ann


Re: [firebird-support] Sequential auto incremental numbering

2018-09-06 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Sep 5, 2018 at 10:07 AM, Gabor Boros mlngl...@bgss.hu
[firebird-support]  wrote:

> On 2018. 09. 02. at 17:03, 'Christian Giesen' ch...@xt.co.za
> [firebird-support] wrote:
>
> >
> > I have a need to generate unique sequential invoice/credit note numbers.
> > I fully understand the usage of generators to produce unique identifiers
> > in tables. However, according to Firebird documentation the use of
> > generators does NOT guarantee sequential numbering.
>

Right.  Generators are outside transaction control.  Once they're
allocated, they're gone.
Rolling back the transaction that created the value does not reset the
generator so it
leaves a hole in the sequence.

Selecting max(whatever) from ...  doesn't work either, except single user,
because two
concurrent transactions can see the same max value.  Creating monotonically
increasing
serial number with no gaps requires serializing concurrent transactions.
Anything else
is going to risk leaving gaps.

>
> I use and suggest a table instead of generator. More flexible and
> transaction controlled solution. If you need more than one (and
> independent) sequential number just use same table with more records.
>

 The simplest way to create monotonically increasing serial numbers is to
create a
table A with one long integer column.  Store an initial value in it.  For
your application,
start a transaction, update table A setting the column to itself plus one
and returning
the value of the column.  If the update fails, rollback the transaction and
start a new
one.  When the update succeeds, you've got your serial identifier.  If
there's an error
in subsequent database activities, roll the transaction back and retry the
whole
thing.  The roll back will reset the serial identifier column.
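The difference between the two approaches can be modeled in a few lines of Python. This is my illustration, not Firebird code: a generator hands out values outside transaction control, while the single-row table's `UPDATE ... SET n = n + 1 RETURNING n` runs inside the transaction, so a rollback releases the number:

```python
class GeneratorSeq:
    """Mimics a Firebird generator: values are allocated outside
    transaction control, so a rollback cannot give one back."""
    def __init__(self):
        self.value = 0

    def gen_id(self):
        self.value += 1
        return self.value

class TableSeq:
    """Mimics the single-row table: the increment is part of the
    transaction, so a rollback undoes it."""
    def __init__(self):
        self.committed = 0
        self.pending = None

    def begin_and_increment(self):
        self.pending = self.committed + 1
        return self.pending

    def commit(self):
        self.committed = self.pending

    def rollback(self):
        self.pending = None   # the increment evaporates with the transaction

gen = GeneratorSeq()
gen.gen_id()                           # 1 - suppose this transaction rolls back
invoice_a = gen.gen_id()               # 2 - the sequence now has a gap at 1

tbl = TableSeq()
tbl.begin_and_increment()              # 1 - suppose this transaction rolls back
tbl.rollback()
invoice_b = tbl.begin_and_increment()  # 1 again - no gap
tbl.commit()
```

The price of the gap-free table, as noted above, is that concurrent transactions serialize on that single row.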

Good luck,

Ann


Re: [firebird-support] Sequential auto incremental numbering

2018-09-03 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Sep 3, 2018, at 6:00 AM, 'River~~' river14ap...@gmail.com 
> [firebird-support]  wrote:
> 
> Two ideas that should work on any SQL database so long as you never delete 
> invoices

Remember that a transaction can roll back, so delete is not your only 
challenge.  
> 
> I am not a guru for this dbms so if a more specific technique is posted it is 
> likely to be faster
> 
> Note however that while my method may be inefficient, it is more likely to be 
> portable. You pays your money and you takes your choice
> 
> 
> With SQL and code in another language:
> 
> Start transaction
> Select max (invoice) ...
> Calculate next invoice 
> Insert...
> Commit
> 

That just narrows the window of chance that two transactions will see the same 
next value.  

> Or if you want to do it all in SQL then create a view that selects max 
> invoice plus one, and insert one row at a time from that view into the table. 
> The other columns would then be presented as calculated values 

I think that does nothing to solve the problem of seeing the same max value 
from two concurrent transactions. 
> 
> Either of the above are reasonably efficient if you have an index on the 
> invoice number, as firebird can easily find the first and last entries from 
> the index and IIRC if the only column retrieved is the indexed value firebird 
> does not even access the table. 

Firebird does access the table to ensure that the last index entry is a record 
version visible to the current transaction.  

Read the article on the IBObjects site. 

Good luck,

Ann

> 


Re: [firebird-support] Query - Regarding a Firebird Source Control Application

2018-05-27 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
In addition to Mark Rotteveel's description of the system tables, you might want 
to look at the code in isql which supports the -x command.  It should include 
everything needed to create a complete DDL script from the system tables.  

Good luck,
Ann

Re: [firebird-support] Firebird 3 - Auto Garbage collection with Sweep interval = 0

2018-02-19 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Hi Everyone,

Adding just a bit more detail to the description of garbage collection
in Firebird.

For those who know little about Firebird, when a record is updated,
Firebird
does not replace the old version with the new.  Instead, it creates a new
version
and chains the old version behind the new.  Each record version is marked
with
the identity of the transaction that created it.  Concurrent transactions
can see
different versions of the same record.  Each transaction sees the newest
version
that was committed when the reading transaction started (in concurrency
isolation)
or the newest committed version (in read committed isolation).


The challenge is to remove versions too old for any active transaction
to read
without spending all day doing it.  The answer is garbage collection -
which can
be "cooperative", "background", or "combined."  Combined (as you might
guess)
decides on a case by case basis whether to do cooperative or background
clean-up.  Classic and SuperClassic modes only support cooperative garbage
collection.

Although it is possible to create a connection that does not remove
unlovable
ancient record versions, generally, any time a connection touches a record
with
back versions too old to be seen by any active transaction, it will cause
those old
versions to be removed.  (The exception is the mode gbak uses to create a
backup
with the -g switch, when the intention is to restore the backup as the new
working
copy of the database.  Cleaning a database you plan to discard is a waste
of time.)

   When a normal transaction reads a record, it checks for back versions of
the
record that were created by a transaction that no running transaction cares
about.
Specifically, it checks for a version, newer than the next oldest, that
every running
transaction can read. The older versions are a pure waste of space.   If it
tries to
read a record marked as deleted, it determines whether the deletion is
visible to
all active transactions and, if so, causes the deleted stub to be removed.

Cooperative garbage collection.

 If the system is using cooperative garbage collection, the active
transaction
breaks the chain of back versions after the version that all running
transactions
can read, releases the space used by the old versions to the system for
reuse,
and cleans up any index entries that reference that version.
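The chain-trimming step can be modeled like this. This is a simplification I'm supplying, with visibility reduced to a single transaction-id comparison (the real engine consults the full transaction state):

```python
def trim_chain(versions, oldest_active_txn):
    """versions: a record's back-version chain, newest first, as
    (creating_txn_id, data) pairs.  Find the newest version that every
    running transaction can read; everything behind it is garbage."""
    for i, (txn, _) in enumerate(versions):
        if txn < oldest_active_txn:       # visible to all active transactions
            garbage = versions[i + 1:]    # older versions can never be read
            del versions[i + 1:]          # cut the chain ...
            return garbage                # ... and hand the space back for reuse
    return []                             # every version is still needed

chain = [(90, "v4"), (70, "v3"), (50, "v2"), (30, "v1")]
freed = trim_chain(chain, oldest_active_txn=60)
print(chain)  # [(90, 'v4'), (70, 'v3'), (50, 'v2')]
print(freed)  # [(30, 'v1')]
```

Version v2 (transaction 50) survives because it is the newest version the oldest active transaction (60) can read; only v1 behind it is garbage.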

If there are few back versions to remove, they are likely to be on the
same
page with the record version that the transaction read, so the I/O overhead
is small.

Background garbage collection.

If an unlucky transaction happens to read a record that has many back
versions,
it has to do a lot of work that it wasn't planning on doing.  In
SuperServer configuration,
Firebird can run a background thread that cleans up old versions.  When a
user
transaction runs into a record that needs garbage collection, it puts that
record's
identifier on a list for the background thread to clean up.  When the
background
thread cleans up a record, it cleans up all the records on that page.

   That sounds like a big win, but on a busy system the background garbage
thread
can fall behind.  Think of a strike by sanitation workers.

Deleted stubs.

  Some applications tend not to revisit deleted records.  Well, duh...  why
would you?
As a result, a page full of deleted stubs and their back versions may never
be garbage
collected either cooperatively or in background.  A full table scan, either
as part of a
sweep or backup or programmatically will cause deleted stubs to be removed.

Good luck,

Ann






>
>
> 
>


Re: [firebird-support] Index process in Firebird 2.1.4

2017-10-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Oct 19, 2017, at 11:45 PM, gn7s...@gmail.com [firebird-support] 
>  wrote:
> 
> 
> 
> Hi!
> I'm going to develop a commercial program with Firebird2.1, in Delphi7.
> 
As noted, 2.1 is obsolete. 
> I want to know whether the following index is picked by the optimizer.
> 
> [The first question] 
> CREATE INDEX IDX1 ON PROJECT(A)
> CREATE INDEX IDX2 ON PROJECT(B)
> CREATE INDEX IDX3 ON PROJECT(C)
> CREATE INDEX IDX4 ON PROJECT(D)
> 
> SELECT * PROJECT WHERE A=1 and B=2 and C=3 and D=4
> 
> The optimizer would choose the fastest way among a table full scan and four 
> indexes. Is this right?
> 

If columns A, B, C, and D all have a lot of different values - think of street 
addresses rather than eye color - then Firebird can use all four indexes. It 
builds bit maps of matching records from each index and finds records that 
match all the criteria. 
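The bitmap combination can be pictured as set intersection. The record numbers below are invented for illustration:

```python
# Record numbers each single-column index scan reports as matching
# its own predicate (hypothetical values).
idx_a = {3, 8, 15, 21, 40}   # rows where A = 1
idx_b = {8, 15, 22, 40}      # rows where B = 2
idx_c = {2, 8, 15, 40, 51}   # rows where C = 3
idx_d = {8, 15, 33, 40}      # rows where D = 4

# Firebird ANDs the four bitmaps; only rows present in every set
# are actually fetched from the table.
candidates = idx_a & idx_b & idx_c & idx_d
print(sorted(candidates))  # [8, 15, 40]
```

Because the intersection happens before any row is fetched, four selective single-column indexes can work nearly as well as one compound index for this query.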

Note that the SHOW PLAN command shows only the first index for any table - 
maybe newer versions show more...

Good luck,

Ann

> 
> [The second question] 
> CREATE INDEX IDX1 ON PROJECT(A)
> CREATE INDEX IDX2 ON PROJECT(B,C,D)
> 
> SELECT * PROJECT WHERE A=1 and B=2 and C=3 and D=4
> 
> The optimizer would choose the fastest way among a table full scan and two 
> indexes. Is this right?
> 
> 
> 
> 
> 
> [The third question] 
> CREATE INDEX IDX ON PROJECT(B,C,D)
> 
> SELECT * PROJECT WHERE A=1 and B=2 and C=3 and D=4
> 
> The optimizer would choose the fastest way among a table full scan and the IDX 
> index. Is this right?
> 
> 
> 
> Please tell me.
> Thanks!
> 
> 
> 
> 


Re: [firebird-support] Always use BIGINT?

2017-07-06 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Sun, Jul 2, 2017 at 1:50 PM, Slavomir Skopalik skopa...@elektlabs.cz
[firebird-support]  wrote:

>
>
> I test general impact of compression and I found that for FB 2.5 and FB3
> is critical to take another DB page from cache.
>
Sure.

OK, you're considering storing the decimal value 13111.  In straightforward
hex, that's 0x33, 0x37.

In little endian format,
   BIGINT with that value is  0x37,0x33,0x0,0x0,0x0,0x0, 0x0, 0x0
   INT with that value is  0x37,0x33,0x0,0x0
   SMALLINT with that value is 0x37,0x33

> BIGINT compressed: 3x ( 2 bytes to compress zeroes + 3 bytes for
> uncompressable data). Total: 15 bytes.
>
 Right.  Each column is stored as 0x2, 0x37, 0x33, 0xFA, 0x0 - five bytes.
The 0x2 means that the next two
bytes should be decompressed as they are.  The 0xFA (-6) means that the
next byte should be repeated
six times.   If the BIGINT were followed by another BIGINT that's null or
has a value of zero, the run-length
compression would require no additional storage.  The -6 would be a -14.

INT compressed: Each column is stored as 0x4, 0x37, 0x33, 0x0, 0x0 - once
again, five bytes.  The two
trailing zeros are not compressed because the compression is the same size
as the uncompressed data.
Three of those columns in a row would also be 15 bytes.  If the next column
was a null INT or an INT
value 0, the compressed format would be five bytes 0x2, 0x37, 0x33, 0xFA, 0x0
for the two columns.

> SMALLINT compressed: 1 byte for uncompressable header + 3x (0x33, 0x37).
> Total: 7 bytes.
>
Right. The stored format for the three columns is 0x6, 0x37, 0x33,0x37,
0x33,0x37, 0x33.  If
the next column is zero or null, you add two more bytes.

 And as long as you're absolutely certain that you'll never store any value
greater than
32767, go for it.   If you're wrong and you need a larger value, you can
just ALTER the
column to a INT, after you figure out what the problem is and get any users
who've read
that column to close their connections.

> The performance gain can vary from few percent to ten times and it is NON
> LINEAR.
>
The effect of compression is data dependent and the effect on performance
is application
dependent.

> Conclusion of my testings:
>
> 1. Size of compressed row has significant impact on performance
>
Absolutely agree.

> 2. SMALLINT will help a little but
>
And has such a limited range, it's a programmer trap. INT and BIGINT are
largely
equivalent.

> 3. VARCHAR and CHARACTER SET has much more impact. Mainly UTF8.
>
VARCHAR adds two bytes for the length of the column.  Its effect on
compression
is less obvious because trailing spaces aren't stored in VARCHAR, and the
places
where you find three identical characters in a string are rare.

For storage, UTF8 does expand characters that aren't ASCII.  And that can
be
a factor, but it's pretty much the cost of multi-lingual applications.

> 4. If performance is not at first place, use INTEGER because upgrade is
> very problematic (you have to drop all PK, FK and indexes)
>
Right, because BIGINT doesn't use the normal numeric key generation.  Which
it could, without
losing any precision, if the low-end digits that are lost in conversion to
double precision were simply
appended to the generated key.  Sigh.

Sincerely,

Ann


Re: [firebird-support] Always use BIGINT?

2017-07-02 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Sat, Jul 1, 2017 at 6:23 AM, Köditz, Martin martin.koed...@it-syn.de
[firebird-support]  wrote:

>
>
> what do you mean with alignment?
>

It varies from machine to machine, but generally in order to add or compare
two integers
they must be aligned on a "natural" boundary.  For modern machines, the
minimum
boundary is 32 bits ... I think some 64 bit machines require a 64 bit
boundary, but it's
been three years since I've thought about this at all, and I could be
wrong.  What that
means in practice is that when building a record in memory, Firebird pads
numbers so
they fall on 32 or 64 bit boundaries.
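The padding can be seen with Python's ctypes, which uses the platform compiler's native alignment rules. This is illustrative only; the exact layout is compiler- and platform-dependent:

```python
import ctypes

class Rec(ctypes.Structure):
    # A 2-byte SMALLINT followed by a 4-byte INT: the compiler inserts
    # two bytes of padding so the INT starts on a 4-byte boundary.
    _fields_ = [("small", ctypes.c_int16), ("big", ctypes.c_int32)]

print(ctypes.sizeof(Rec))  # 8 on typical platforms, not 6
```

Those two padding bytes are the in-memory cost Ann refers to: a SMALLINT next to a wider number often saves nothing once alignment is accounted for.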

Cheers,

Ann

>
>
>
>


Re: [firebird-support] Always use BIGINT?

2017-06-30 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Jun 30, 2017 at 10:30 AM, Slavomir Skopalik skopa...@elektlabs.cz
[firebird-support]  wrote:

>
> advantage of SmallInt vs INT vs BIGINT is data density (keep record
> shortest as possible).
>
> If you have one column, it is irrelevant, if you have ten columns, you
> will see speed improvement.
>

Did you test that, or just assume it?  Remember that data compression also
affects
record size.

Cheers,

Ann


Re: [firebird-support] Always use BIGINT?

2017-06-30 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Jun 30, 2017 at 5:37 AM, Köditz, Martin martin.koed...@it-syn.de
[firebird-support] <firebird-support@yahoogroups.com> wrote:

>
> In 2014 I was at the FB conference in Prague. If I remember correctly, Ann
> Harrison told the audience that there are no advantages when using SMALLINT
> or INT versus BIGINT. But I don’t remember the reason. Was it the block
> size of modern architecture? Maybe Ann can make it clear again? Or someone
> else. Would be great.
>

I'm afraid I don't completely remember, but I'm pretty sure that the
problem is not block size, but alignment.

Cheers,

Ann



Re: [firebird-support] Changing collation

2017-06-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Jun 27, 2017 at 1:39 PM, Tomasz Tyrakowski
t.tyrakow...@sol-system.pl [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
> Is it possible to change the collation of a varchar column in place
> (without making a temporary column, copying data, dropping the original
> column, making it anew and copying the data back)?


Not if you have indexes, a declared primary key, or a declared unique
constraint.  The index keys are a transformation of the value into a format
that produces the desired ordering through binary comparisons.


> Would setting
> RDB$RELATION_FIELDS.RDB$COLLATION do the trick (and is it safe to do)?
>

Not safe, won't work, and will produce very odd results on range queries on
any
index.  Or, say, a query that causes Firebird to return rows in index
order.


> I've got several old production databases which have different
> collations on different columns (probably by mistake) and would like to
> sort it out.
>

However messy with inter-column dependencies, the only way to make the
change is to create a new column with the desired collation, copy the data,
drop the old column, recreate the old column with the new collation and
repopulate it.


> Also, does the RDB$RELATION_FIELDS.RDB$COLLATION somehow depend on the
> default character set of the database, or is it universal (i.e. does the
> collation code from RDB$RELATION_FIELDS.RDB$COLLATION always mean the
> same, or does a single code mean different collations in databases with
> different default character sets)?
>

The format of the keys depends on the character set - different character
sets
have different meanings for some byte values, which collate differently.

Good luck,

Ann


Re: [firebird-support] Foreign key different field type

2017-06-20 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Jun 19, 2017 at 1:21 PM, Thomas Steinmaurer t...@iblogmanager.com
[firebird-support]  wrote:

> >
> > Can someone show me example when it is usefull to have different field
> > type in [referenced and referencing keys of a foreign key relationship]?
> >
> >
> > Why this is not forbidden?
>

Why should it be forbidden?  Not everything that's dumb is disallowed.

>
> Perhaps cause the referenced column in table test1 can only store a
> subset (SMALLINT) of the value range of the INTEGER used in test2?
>
> Even if this works at DDL time, I wonder how strict the optimizer then
> is in JOIN statements using an index. Haven't tried.
>

Nor have I tried, but Firebird uses the same key representation for most
numeric
columns, so having mixed sizes of numbers - or different scales -  in a
foreign key
shouldn't matter to the optimizer.  The exception - which may have been
changed -
was the representation of int64. Having a single key format for numbers
makes
it simple to increase the size of columns or change the scale.

Good luck,

Ann


Re: AW: [firebird-support] RE: Firebird SuperClassic hangs

2017-05-09 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On May 9, 2017, at 9:10 AM, Freddy Ertl f.e...@speexx.com [firebird-support] 
> <firebird-support@yahoogroups.com> wrote
> 
> that's the question, that I have: How to find out what causes the deadlock 
> scans. I have thousands of transactions per minutes, doing all possible kind 
> of things. What to look for? Where?

Your lock header reported an actual deadlock, which is quite rare.  If at all
possible, instrument your code to find actual deadlock errors, ignoring the more
common "update conflict" errors.

And, although I'm sure you've said it before, what transaction isolation level
are you using?

Good luck,

Ann
> 
> Best regards
> Freddy Ertl
> 
> Von: firebird-support@yahoogroups.com 
> [mailto:firebird-support@yahoogroups.com]
> Gesendet: Dienstag, 9. Mai 2017 15:05
> An: firebird-support@yahoogroups.com
> Betreff: RE: [firebird-support] RE: Firebird SuperClassic hangs
> 
> 
> 
>> Deadlock scans: 289, Deadlocks: 1, Scan interval: 10
> 
> Is probably the culprit, 289 deadlock scans, one actual deadlock found
> 
> "Deadlock scans. The number of times that the lock manager walked a chain of
> locks and owners looking for deadlocks. The lock manager initiates a deadlock 
> scan
> when a process has been waiting 10 seconds for a lock."
> 
> "Deadlocks. The number of actual deadlocks found. A deadlock occurs when
> Process A wants a lock on Resource 1, which is held in an incompatible mode
> by Process B, and Process B wants a lock on some Resource 2, which is held in
> an incompatible mode by Process A.
> 
> "Each owner stands around glowering at the other and neither will do anything
> to improve the situation, so the lock manager returns a fatal error to one or 
> the
> other. Deadlocks can also occur with a single resource if both owners start 
> with
> read locks and request conversions to write locks. However, deadlocks always
> involve two owners (or two separate transactions).
> 
> Errors that are returned as "lock conflict" from "no wait" lock requests will 
> not
> be recorded in the lock table as deadlocks because only one owner is waiting.
> 
> Errors returned as "deadlock" with a secondary message specifying "update 
> conflicts
> with concurrent update" are not actual deadlocks either. What has happened in 
> those
> cases is that one owner has modified (or erased) a record and moved on. 
> Another
> concurrent owner has attempted to modify (or erase) the same record, noticed
> that the most recent version is one he can't see, waited to find out how the
> other transaction ended up, and found, to his disappointment, that the other
> transaction succeeded.
> In that case, our patient transaction can't modify the record because it 
> can't find
> out what its late contemporary actually did."
> 
> "Scan interval. The lock manager waits some period of time after a request 
> starts
> waiting before it starts a deadlock scan. The default interval is 10 seconds, 
> which may be
> long considering the change in CPU performance since 1983. Deadlock scans 
> should not be done
> whenever there's a wait because waiting is normal and scans are not free."
> 
> A deadlock scan would cause your server to hiccup while the lock manager 
> tries to
> find if there is a real deadlock within the lock manager. You might want to 
> investigate
> why deadlock scans are being instigated.
> 
> With thanks to Ann Harrison :-)
> 
> Regards
> Paul Beach
> 
> 
> 
> [Non-text portions of this message have been removed]
> 
> 
> 
> 
> Posted by: Freddy Ertl <f.e...@speexx.com>
> 
> 
> ++
> 
> Visit http://www.firebirdsql.org and click the Documentation item
> on the main (top) menu.  Try FAQ and other links from the left-side menu 
> there.
> 
> Also search the knowledgebases at 
> http://www.ibphoenix.com/resources/documents/ 
> 
> ++
> 
> 
> Yahoo Groups Links
> 
> 
> 


Re: [firebird-support] Firebird 2.5 - Alter table - drop default - bug or feature

2017-04-27 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
>  Apr 25, 2017, at 12:21 PM, Lester Caine les...@lsces.co.uk 
> [firebird-support]  wrote:

> 
> On 25/04/17 16:50, Mark Rotteveel m...@lawinegevaar.nl
> [firebird-support] wrote:
>> The SQL standard is quite clear at what needs to happen: adding a new 
>> column with a default should behave as if that column has existed from 
>> the original create table (btw: irrespective of the NOT NULL constraint, 
>> something that if I'm not mistaken also Firebird doesn't do correctly).

Right.  When a record is read, Firebird converts it from the format in which it
was stored to the current format, ignoring intervening formats.  ALTER TABLE does
not cause changes to existing rows in the table (in general; there may be
exceptions).  The result is the appearing and disappearing default values.
> 
> Exactly the problem in my book. If you add 'DEFAULT' without a 'NOT
> NULL', then in my book the unpopulated fields should remain NULL unless
> I populate them. When you now add creating the field later then things
> should be consistent but there is no consistent way of handling things
> based on the SQL standard inconsistencies.
> 
>> This has the effect that all existing records should get the default 
>> value as if they had that value from the start (either stored or 
>> virtual). It also means that a subsequent alteration of the default (new 
>> value or dropping the default), should not lead to changes to the value 
>> of those rows that existed before the column was added with; the columns 
>> need to retain the original default.

If Firebird were to convert a record from its stored format through each 
intervening format, the result would be more logical but it would be a change 
to a behavior that's over 30 years old. And there are advantages to the current 
behavior.  If you alter a table in a way that invalidates existing values, 
another alter table can undo the damage.  
> 
> The argument that other engines put forward is this idea that a record
> does not need to store a full set of fields, some can be 'virtual' and
> only exist when something is stored in them. I HOPE that this is not
> something that Firebird plans to adopt? In my book the 'original value'
> is always 'NULL' unless other rules require something replaces it, and
> an empty field magically showing some default value is not a safe way of
> working.

Firebird doesn't store null fields, instead it stores an array of bits that 
indicate whether or not a field is null.  Between that, compression, and 
computed fields, there's a lot of magic going on. 
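A toy sketch of the null-bit idea Ann describes (my own model, not Firebird's actual record layout):

```python
def pack_null_flags(values):
    """Pack one bit per field: 1 = NULL, 0 = value present (toy model)."""
    bits = 0
    for i, v in enumerate(values):
        if v is None:
            bits |= 1 << i
    # one byte per 8 fields, stored as a little-endian bit array
    nbytes = (len(values) + 7) // 8
    return bits.to_bytes(nbytes, "little")

def is_null(flags, i):
    """Read field i's null bit back out of the packed array."""
    return bool(flags[i // 8] & (1 << (i % 8)))

row = ["Ann", None, 42, None]
flags = pack_null_flags(row)
print([is_null(flags, i) for i in range(4)])  # [False, True, False, True]
```

The record itself then only needs to store the non-null values; the bit array says which fields were skipped.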

Good luck,

Ann


Re: [firebird-support] Read Only Transaction

2017-03-08 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Mar 8, 2017, at 7:17 AM, Aldo Caruso aldo.car...@argencasas.com 
> [firebird-support] <firebird-support@yahoogroups.com> wrote:
> 
> 
> 
> Thanks for your answer.
> 
> Regarding the "blob not found" error, there is a thread about this issue [1]
> 
> Nevertheless, I believe that RO/RC transactions are assigned a unique ID, 
> because as I start them the next transaction ID keeps incrementing.
> 

Right. They're precommitted but do get unique ids.  I'm getting old  

Cheers,

Ann
> [1] 
> https://groups.yahoo.com/neo/groups/firebird-support/conversations/topics/111668
> 
> Aldo Caruso
> 
>> El 07/03/17 a las 15:52, Ann Harrison aharri...@ibphoenix.com 
>> [firebird-support] escribió:
>> > On Mar 7, 2017, at 10:30 AM, Aldo Caruso   
>> > aldo.car...@argencasas.com wrote:
>> 
>> > 
>> > Supposing a connection only needs to read from a database. Which are the 
>> > benefits of starting a read only transaction ?
>> > 
>> I doubt that there is any difference in performance that you could measure. 
>> Almost all transactions change the database - cooperative garbage 
>> collection, incrementing the next transaction id on the header page, ending 
>> the transaction on the TIP. The security checks have to be made to insure 
>> read access - checking for higher levels is no more expensive. Declaring a 
>> transaction to be read-only is a slight check on programmers - Firebird will 
>> detect and block writes that were added carelessly in a point version of an 
>> application.
>> 
>> However, a read-only, read-committed transaction has merit if you don't need 
>> consistency. Firebird doesn't assign a unique transaction id to RO/RC 
>> transactions, so they don't change the next transaction id, they don't block 
>> garbage collection, and they don't change the state of a bit on a TIP. From 
>> time to time (IIRC) they'll return a "blob not found" error if the record 
>> holding the blob is garbage collected between the time the RO/RC transaction 
>> reads the record and retrieves the blob. 
>> 
>> Basically, read-only was implemented decades ago to give InterBase a check 
>> mark on standard compliance and Firebird added a hack to provide low 
>> overhead, low consistency transactions. 
>> 
>> Good luck,
>> 
>> Ann
>> 


Re: [firebird-support] Read Only Transaction

2017-03-07 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
> On Mar 7, 2017, at 10:30 AM, Aldo Caruso aldo.car...@argencasas.com  wrote:

> 
> Supposing a connection only needs to read from a database.  Which are the 
> benefits of starting a read only transaction ?
> 
I doubt that there is any difference in performance that you could measure.  
Almost all transactions change the database - cooperative garbage collection, 
incrementing the next transaction id on the header page,  ending the 
transaction on the TIP.  The security checks have to be made to insure read 
access - checking for higher levels is no more expensive.  Declaring a 
transaction to be read-only is a slight check on programmers - Firebird will 
detect and block writes that were added carelessly in a point version of an 
application.

However, a read-only, read-committed transaction has merit if you don't need 
consistency.  Firebird doesn't assign a unique transaction id to RO/RC 
transactions, so they don't change the next transaction id, they don't block 
garbage collection, and they don't change the state of a bit on a TIP.  From 
time to time (IIRC) they'll return a "blob not found" error if the record 
holding the blob is garbage collected between the time the RO/RC transaction 
reads the record and retrieves the blob.  

Basically, read-only was implemented decades ago to give InterBase a check mark 
on standard compliance and Firebird added a hack to provide low overhead, low 
consistency transactions.  

Good luck,


Ann

Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Mar 1, 2017, at 9:53 AM, Aldo Caruso wrote:
> 
> 
> 
> Ann,
> 
>   Thanks for your detailed answer.
> 
> But this leads me to the following question: If snapshot transactions 
> have their own copy of the Transaction Inventory Pages ( TIP ), taken when 
> the transaction started, transaction A couldn't see the state of transaction 
> B or transaction C ( because they are not in this copy ).
> 
> So I guess that snapshot transactions use its own copy of TIP for reading 
> ( selects ), but when it comes to updates or deletes, in order to search for 
> collisions, they should also access the global shared TIP.
> 


Not exactly.  A snapshot transaction has a copy of the relevant section of the
TIP as of the time it starts.  Transactions started later always count as
"active" because they could not have committed before it started.

When a snapshot transaction tries to update a record and finds that the most 
recent version was created by a transaction that was active or unknown 
according to its copy of the TIP, it attempts to take a lock on that 
transaction's id. 

Every transaction holds an exclusive lock on its id until it ends.  When the 
lock is granted to the waiting transaction, it knows that the other transaction 
has ended and asks the transaction management module for the final state of the 
dead transaction.  
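The visibility rule above can be sketched in a few lines (a toy model of my own, not Firebird code):

```python
# Toy model of snapshot visibility: a snapshot transaction keeps a
# private copy of the TIP taken when it starts, and any transaction
# with a higher id counts as active no matter what it did later.
COMMITTED, ACTIVE = "committed", "active"

class SnapshotTxn:
    def __init__(self, my_id, tip):
        self.my_id = my_id
        self.tip = dict(tip)          # private copy of the TIP at start

    def sees(self, writer_id):
        if writer_id == self.my_id:   # our own changes are visible
            return True
        if writer_id > self.my_id:    # started later: always "active" to us
            return False
        return self.tip.get(writer_id) == COMMITTED

tip = {10: COMMITTED, 11: ACTIVE}
snap = SnapshotTxn(12, tip)
tip[11] = COMMITTED                   # 11 commits after snap started...
print(snap.sees(10), snap.sees(11))   # True False - the private copy doesn't change
```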

Good luck,

Ann
> 


Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Feb 28, 2017, at 3:07 PM, Aldo Caruso wrote:
> 
>   I'm trying to understand interactions between transactions with 
> different isolation levels.

The problem is not mixed isolation levels.  You would get the same behavior if
all transactions were Snapshot.

The scenario (described below in detail) is

Start transaction A

Start transaction B

B updates some record

B commits

Start transaction C

C updates the same record

A attempts to update that record and waits for C to end

However C ends, A will fail: because of a conflict with C if C commits, or a
conflict with B if C rolls back.  Why doesn't A just fail immediately?

The answer is that Firebird checks only the most recent record version for 
conflicts.  Checking the next back version would avoid having A wait in this 
case but would require more reading in general. 

Good luck,

Ann


> 
>   My environment is Firebird Super Server 2.5 (v. 2.5.2) on Linux 
> Ubuntu 14.04
> 
>   I'm testing by opening two consoles and running isql-fb on both consoles.
> 
> On the first one I start a SNAPSHOT transaction ( lets call it 
> transaction A ).
> 
> Afterwards, on the second console, I start another transaction with a 
> READ COMMITTED RECORD_VERSION isolation level ( lets call it transaction 
> B ).
> 
> In transaction B, I update a record X and commit the transaction.
> 
> In the same second console I start a third transaction, also READ 
> COMMITTED RECORD_VERSION ( lets call it transaction C ).
> 
> In transaction C, I update record X, but neither commit it nor roll it 
> back ( transaction C remains active ).
> 
> On the first console, in transaction A, I update record X and it 
> obviously waits for a commit or a rollback of transaction C.
> 
> Back on the second console, in transaction C, I roll it back.
> 
> Not surprisingly, on the first console, transaction A ends waiting with 
> a collision error, because although transaction C rolled back, 
> transaction B, that had started after transaction A, had made an update 
> and committed it. As the isolation level of transaction A was SNAPSHOT, 
> it finds a collision and so an error is risen.
> 
> So we have the following situation:
> 
> If transaction C ends rolling back (as in the example), transaction A 
> raises an error because of the collision with the previous transaction 
> that touched and committed the record ( transaction B ).
> 
> If transaction C ends committing, transaction A also raises an error 
> because of the collision with transaction C.
> 
> Whichever way transaction C ends ( committing or rolling back ), the 
> waiting update in transaction A is aborted with an error message because 
> of a collision.
> 
> The question is which was the point of waiting in transaction A for the 
> end of transaction C ? Why didn't it raise an error in the same moment I 
> tried to do an update in transaction A ? Couldn't it foresee that the 
> record was previously updated by a committed transaction B, which 
> started after the SNAPSHOT transaction A, and so whichever the outcome 
> of C were there would be a collision ?
> 
> Note also that if in the example above, we wouldn't have started 
> transaction C, transaction A would raise an error as soon as it tried to 
> update the record that was updated and committed by B ( I tested it ).
> 
> Thanks for any answer.
> 
> Aldo Caruso
> 
> 
> 
> 
> 
> Posted by: Aldo Caruso 
> 


Re: [firebird-support] Is there an easy way for input rows from a script file?

2017-02-16 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Feb 15, 2017, at 6:31 PM, setysvar setys...@gmail.com [firebird-support] 
>  wrote:
> 
> 
> 
> Den 15.02.2017 17:07, skrev 'Walter R. Ojeda Valiente' 
> sistemas2000profesio...@gmail.com [firebird-support]:
>> Hello everybody
>> 
>> If I connect to a database "Database1" using ISQL, I can write something 
>> like:
>> 
>> OUTPUT MyFile.DAT;
>> SELECT * FROM MyTable;
>> OUTPUT;
>> 
>> And all the rows of "MyTable" will go to the text file "MyFile.DAT". That's 
>> ok and works fine.
>> But now, I want to connect to "Database2", which also have a table called 
>> "MyTable" and with the same structure.
>> After that, I want to insert into "MyTable" (of "Database2") the rows 
>> contained in "MyFile.DAT"
>> How can I do such thing without a lot of effort writing an INSERT command in 
>> each line of "MyFile.DAT"?
>> 
>> Greetings.
>> Walter.
> 
> Hi Walter. I'm not directly answering your question (mainly due to not using 
> isql myself, so I don't know the answer), but I can think of two alternative 
> ways. 
> 
> (1) (the more standard answer) Make sure your output file is in a fixed 
> length format, then create an external table for this file, transfer the data 
> using INSERT INTO "MyTable" SELECT * FROM  and 
> then finally drop the external table.
> 
> (2) Use qli rather than isql to transfer data directly using something 
> similar to this recipe: 
> https://www.ibphoenix.com/resources/documents/how_to/doc_42
> qli is an ancient part of InterBase, probably used before SQL was invented, 
> you can find it in the same directory as isql.
> 
> I've only used qli once many years ago (probably on Fb 0.9.4 or 1.5), but it 
> worked nicely when I needed it. Though I don't think qli has been updated 
> for the last few Firebird versions, so maybe it doesn't work anymore. And you 
> may have a hard time finding information about it beyond the document 
> referred to above. Though I suppose:
> 
> ready "Database1" as src;
> ready "Database2" as trg;
> trg."MyTable" = src."MyTable";
> 
> doesn't need much explanation and should be simple for you to try on a test 
> database. I expect it to replace the target rather than add to it (though I 
> don't know), but the one time I used qli, I think I was transferring to an 
> empty table.


For what it's worth, qli won't magically delete existing records.  But it 
probably won't work either

Cheers,

Ann

> 
> 
> 


Re: [firebird-support] How to change cpu utilization in Firebird engine?

2017-01-06 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Jan 5, 2017 at 9:54 PM, trsk...@yahoo.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
> As you know, on a single connection, Firebird's CPU utilization is calculated
> as 100% / number of cores.
>

That's not an arbitrary limit.  Firebird is not capable of running a single
connection on multiple threads,
and a single thread runs only on a single core.  Firebird 3 in SuperServer
mode can run different
connections on different threads - a great step forward! - but when there
is a single connection it
runs on a single thread.  Prior to V3, the only way you could use multiple
cores was to run Firebird
in Classic or Super Classic mode.


>
> On my 6 cores cpus, its only utilized 100% / 6 = 16%.
>

Right.  That's the way it is.



> During database maintenance & testing (backup, restore, testing, etc) ,
> sometimes, i feel it was to slow.
>

You could try classic, but I suspect that you're running a single
connection which cannot be run on more than one core.

>
> I want to change cpu utilization only for my own use. I have downloaded
> Firebird source code, please help me to point out which codes I should
> change & if my knowledge is enough is I only have basic C?
>

The Firebird group does not impose the "single core per connection" limit
arbitrarily just to annoy users.  Breaking a query up so it runs on
multiple cores is a hard problem, so I doubt very much that someone with
basic C skills and little background in developing database managers would
have much success implementing it.

Good luck,

Ann


Re: [firebird-support] info

2017-01-05 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Jan 4, 2017 at 9:41 AM, Nico Speleers - Carfac BVBA
nico.spele...@carfac.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

We want to create table- and fieldnames (Firebird 3) in camelcase. If we
> want to write a select statement we have to use quotes around a tablename.
> This is not userfriendly. Is there a way that we don't have to use the
> quotes in SQL-statements ?
> We don't want to create our table- and fieldnames in lowercase or
> uppercase.
>
OK.  Here's how it works.  You can create a table called TablaRasa with a
create table statement.

Create table TablaRasa (...);

You can then reference TablaRasa in any SQL statement like this:

Insert into TablaRasa ... ;
Select ... from TablaRasA ;
Update TAblaRAsa ... ;
Delete from "TABLARASA" ... ;
Drop table TABLARASA;

You can use any capitalization you want.  However, if you ask Firebird to
Show Tables it
will show you TABLARASA.  The SQL standard specifies that unquoted
identifiers are
case insensitive.  That part of the standard was written in 1982, when
computers were
much slower and memory constrained than modern parking meters.  A later
version of
the standard introduced quoted identifiers which preserve case, allow
special characters
(anything other than numbers, letters, $, and _) and allow overlap between
key words
and identifiers.

InterBase (Firebird's ancestor) implemented case insensitive identifiers by
upper casing
identifiers wherever they were referenced and, specifically, storing the
upper case in
its system tables.  That was a cheap and easy way to handle the problem -
and when
you have (by modern standards) no disk space, no memory, and the computing
power
of an abacus, cheap and easy are good.

For reasons lost in history, when InterBase added quoted identifiers, the
developers
decided just to store the quoted value - without quotes, of course - in the
system tables.
Deciding whether an identifier was quoted or not was simple - a name that
contains a
 character that isn't legal in an unquoted identifier must be a quoted
identifier.
Myself, I would have added a field that indicates whether the name is
quoted or not
and preserved the case of unquoted identifiers right then, but they didn't.
  Users
complained about the ugliness of upper case even then in the 90's.  (Yes,
children,
there were people who cared about appearances that long ago.)

That solution produced the anomaly that a quoted identifier that contains
no characters
outside those allowed in unquoted identifiers behaves like an unquoted
identifier.  While

Create table "TABLA RASA" (...);

creates a table that can be referenced only as "TABLA RASA" - quotation
marks required,

The similar statement without the space

Create table "TABLARASA" (...);

creates a table that can be referenced as tablarasa, TablaRasa, TAblaRAsa,
etc. And,
inversely, if you create a table with an unquoted identifier, you can
reference it in
quoted upper case - example above.
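The storage rule behind this anomaly fits in a few lines (a toy model of my own, not InterBase or Firebird code):

```python
def stored_name(identifier):
    # Toy model of InterBase-style name storage: quoted identifiers are
    # stored verbatim (quotes stripped), unquoted identifiers are
    # upper-cased before being stored in the system tables.
    if identifier.startswith('"') and identifier.endswith('"'):
        return identifier[1:-1]
    return identifier.upper()

# The anomaly: quoted "TABLARASA" collapses to the same stored name as
# any capitalization of the unquoted identifier...
print(stored_name('"TABLARASA"') == stored_name("TablaRasa"))  # True
# ...while a name containing a space can only ever be reached with quotes.
print(stored_name('"TABLA RASA"'))                             # TABLA RASA
```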


I don't know how that behavior fits the SQL standard, but it's a minor
violation compared with
the behavior of MySQL and not terribly dissimilar to PostgreSQL.
PostgreSQL chose to
use lower case as the default because it was prettier and didn't look as if
the code was
SHOUTING at you.  InterBase chose upper case because in some European
languages,
classical French for one, the transformation from lower to upper is
deterministic, but the
translation from upper to lower can produce several results.  Not that we
were all that
far into internationalization in '83, but we were thinking about it.


MySQL chose to use the case rules for the environment it runs in - case
sensitive
on Linux, case insensitive on Windows. Mac/OS had different case rules for
different file
systems.   That looks very natural if you run in a single environment, but
it is a bit of a
bitch if you try to create portable applications.  MySQL also uses
different rules for different
types of identifiers.  You could look it up.  Hell, I'll look it up.

http://dev.mysql.com/doc/refman/5.5/en/identifier-case-sensitivity.html?ff=nopfpls


So, in Firebird, preserving the given case of identifiers in the system
tables would just be
a matter of programming and a change to the system tables which is a bigger
problem
but not out of the scope of a major version. The question is whether that
effort is worth
the price given the challenges of making Firebird more secure, faster, more
reliable, and
more conformant with new SQL features, given the limited programming
resources available.

In sum, handwritten code can use upper and lower case in identifiers
without quotes and
look pretty.  If you want to generate names for triggers and so on from the
original format,
that works too.  Trigger TablaRasaTrigInsert1 is exactly equivalent to
TABLARASATrigInsert1
and you can reference it as either.  Machine generated code can put quotes
around every
name it finds in the system tables and create working ugly code.  The
behavior is standard
compliant and works in 

Re: [firebird-support] Explicit Transaction + Suspend

2016-11-25 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Nov 24, 2016, at 2:59 PM, fabia...@itbizolutions.com.au [firebird-support] 
>  wrote:
> 
> Just wondering can my app start a transaction with Begin Transaction and 
> then call a stored procedure that contains the SUSPEND sentence at the end 
> while the transaction is still opened?

Yes a transaction can include invoking a procedure that includes SUSPEND.

> Is there any special consideration 
> when commencing a transaction, other than to ensure the process finishes as 
> soon as possible with a commit or rollback, and no human intervention is 
> needed in between Begin Transaction and Commit , such as messages on screen 
> or buttons needed to be pressed, etc?

In general, transactions should be kept short, but your definition is limiting. 
 A transaction is a single logical unit of work that should be executed in a 
consistent setting.  If that involves human interaction, fine, as long as you 
have some mechanism to limit the response time.  You don't want to hold a 
transaction open for hours - and in some applications even for minutes - but 
sometimes branching and human interaction are important.  Firebird's MVCC was 
designed to provide consistent results while allowing interaction - as opposed 
to a lock-based serializable consistency that generates read/write deadlocks.

Good luck,

Ann


Re: [firebird-support] Re: Best way to delete millions of rows

2016-11-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Nov 22, 2016, at 6:56 AM, kragh.tho...@yahoo.com [firebird-support] 
 wrote:
> 
> Pagesize is 16384 and pagebuffers is 256. Servermode is superclassic. 
> If I drop the primary key/index everything works as expected. 
> 

Interesting.  GUIDs produce really fat indexes.  Firebird uses prefix
compression, and GUIDs vary in their first bytes, so they don't compress well.
Keys generated one after another land on widely separated index leaf pages.
Since records are deleted and garbage collected in the order they were stored,
index garbage collection could easily touch a different page for each key.  With
only 256 buffers, index garbage collection runs pages through the cache wildly -
with luck they're cached by the file system.

I'm not saying that shutting down the machine for seconds or minutes at a time 
is acceptable, but that the place to look is in index garbage collection with a 
small page cache and large randomly generated key values.

For what little it's worth, I'd bet that a 64-bit primary key generated by a 
sequence would not show this problem.  Nor, for that matter, would a GUID that 
had been re-organized to put the volatile bytes at the end...
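To make the compression point concrete, here is a rough measurement (my own sketch using Python's uuid module, not Firebird's actual key format) of how much prefix adjacent sorted keys share:

```python
import uuid

def shared_prefix(a, b):
    """Length of the common leading substring of two keys."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def avg_savings(keys):
    """Average prefix shared between neighbours in sorted key order."""
    keys = sorted(keys)
    pairs = zip(keys, keys[1:])
    return sum(shared_prefix(a, b) for a, b in pairs) / (len(keys) - 1)

sequential = [f"{n:032d}" for n in range(1000)]       # generator-style keys
random_ids = [uuid.uuid4().hex for _ in range(1000)]  # GUID-style keys

print(avg_savings(sequential))  # large: long shared prefixes compress away
print(avg_savings(random_ids))  # small: almost nothing for prefix compression
```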

Good luck,

Ann

Re: [firebird-support] Digest Number 9396

2016-11-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Do you see the same behavior if the guid field is not indexed?

> On Nov 22, 2016, at 3:09 AM, tiberiu_horv...@yahoo.com [firebird-support] 
>  wrote:
> 
> 
> 
> All right, thank you all for your time !
> 
> I have code that uses temporary tables in at least 30 places in my programs. 
> This is the way I did complicated queries. Most of these procedures were 
> written 10 years ago (Interbase, Firebird 1.0). I cannot change everything 
> (replace every query that uses temporary tables), cannot do this without 
> proper testing. 
> 
> In the first phase I changed every field definition of these temporary tables, 
> defined my domains 
> 
> Integer -> MyIntegerDomain
> 
> and changed the definition (DDL) of the temporary tables in my code. 
> 
> I will surely change the code / queries that use those temporary tables. 
> 
> 
> Tiberiu


Re: [firebird-support] Digest Number 9396

2016-11-20 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Nov 19, 2016, at 11:20 PM, Dalton Calford dcalf...@distributel.ca 
> [firebird-support]  wrote:
> 
> 
> 
> Every time you do not define a domain when creating a column in a table, 
> firebird creates a new domain ...   So, you perform a backup and it recreates 
> all the prior domains first, then, proceeds to create new domains.   

I think that's not correct prior to V3 and probably not for V3 either.  Gbak 
maintains the relationship between fields and the domains on which they're 
based. The problem with extremely high domain numbers is probably related to 
temporary tables. 

Good luck,

Ann



> 
> Either way, you need to get your data out of the database and into one that 
> is restorable.   You also need to change your create table routines to use 
> pre-defined domains.   What you did not realize is that regardless of whether 
> you want to use domains, you will be using them.   The difference is whether 
> you use a domain you define, or let the system create domains on the fly.
> 
> At least with domains you create, you can reuse them and you do not encounter 
> the issue you are seeing.
> 
> 1 GB is not very much; my smallest DB that I maintain is 20 GB.  
> 
> With FB 3, you can do a few things to speed up the process, but, at this 
> point, you may need more help than simple forum list support can give.
> 
> Where on the planet are you located?   Perhaps we can get you in touch with a 
> FB specialist in your area.
> 
> best regards
> 
> Dalton
> 
> From: firebird-support@yahoogroups.com  on 
> behalf of tiberiu_horv...@yahoo.com [firebird-support] 
> 
> Sent: November 19, 2016 9:12:59 AM
> To: firebird-support@yahoogroups.com
> Subject: Re: [firebird-support] Digest Number 9396
>  
> I use only 
> 
> CREATE TABLE   and DROP TABLE  metadata commands in runtime. I 
> populate the tables and that's all. I don't create domains, not at all. 
> 
> I use FB 3.0 on a Linux server (I don't maintain that server) . 
> 
> The database file is very big - 1+ GB in size, my customers append around 
> 3000 records each day. 
> 
> To try to fix my domain name problem I do as you suggested : I do the restore 
> of the new database file with the correct structure and datapump the records. 
> I just did that yesterday (I have some 6 million records in a few tables) but 
> this process is very time consuming - around 4 hours of work. 
> 
> I have one possible explanation :  last time I upgraded from FB 2.55 to FB 
> 3.0, I did the process on a Windows machine : backup FB 2.55 and restore FB 
> 3.0 on the machine. I then copied the FDB file to Linux (Samba), and didn't 
> perform a normal restore on Linux (because it is very time-consuming and my 
> customer couldn't wait). Maybe something bad happened then with those domain 
> names. Yesterday I did this : 
> 1. Exported the records (in some XML files) 
> 2. Restored an empty database file (full restore - not copy the FDB file) 
> 3. Datapumped the records (from the XML files) . 
> 
> I will see the result and keep you informed. 
> 
> 
> Thank you , 
> 
> Tiberiu


Re: [firebird-support] New auto-increment column, existing data

2016-11-16 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Nov 15, 2016, at 3:58 PM, 'Some One' anotherpersonsomewh...@mail.com 
> [firebird-support]  wrote:
> 
> I am trying to add new auto-increment primary keys for existing table.
> .
>  select rdb$db_key
>  from mytable
>  into :dbkey
>   .

As several people have written generators or sequences are the right way to 
create unique values for primary keys.  They can be started at a value of your 
choice.  The syntax depends on the Firebird version but checking the release 
notes or on-line documentation for Generator or Sequence will get you the right 
answer.

The rdb$db_key is a unique identifier for a record at a particular time, but it 
is not stable over time or across backup/restore cycles.  If you store the 
rdb$db_key as your primary key, eventually you will get duplicate key errors 
when you try to store new records.  For the hard core, the rdb$db_key uniquely 
identifies a record within a table - records in different tables can have the 
same rdb$db_key.  It's a three part value.  The first part is the sequence of 
the pointer page for the table that holds the actual database page number that 
holds the record.  Pointer pages are pages that hold an array of page numbers - 
pretty simple.  The second part is the offset on the pointer page of the actual 
data page number.  The third part is the identity of the index entry on that 
data page that gives the location and length of the record.  (Records are 
stored with run-length compression, so their length varies.)

When a record is deleted, its rdb$db_key will be reused.  When a database is 
backed up with gbak and restored, gbak creates a new empty database and stores 
all the backed up record into it.  Unless the database is very stable, records 
will be stored in different places, resulting in different rdb$db_key values.  

In short, do not store rdb$db_key values in your database!

> This fails with error "malformed string", it looks like either rdb$db_key 
> does not match char(8)
> 
Right.  The rdb$db_key is eight bytes of binary some of which will not be valid 
characters in any normal character set.  If you want to use the rdb$db_key as a 
short-term locator value, use character set Octets which is value agnostic.
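A hedged sketch of the sequence approach for adding a key to an existing table - assuming Firebird 3 syntax and hypothetical object names (Firebird 2.x would use GEN_ID(seq, 1) instead of NEXT VALUE FOR, and cannot ALTER COLUMN ... SET NOT NULL directly):

```sql
-- Create a sequence and a new key column, then backfill existing rows.
CREATE SEQUENCE seq_mytable_id;
ALTER TABLE mytable ADD id BIGINT;

-- Each row gets the next value from the sequence.
UPDATE mytable SET id = NEXT VALUE FOR seq_mytable_id;

-- Promote the populated column to primary key.
ALTER TABLE mytable ALTER COLUMN id SET NOT NULL;
ALTER TABLE mytable ADD CONSTRAINT pk_mytable PRIMARY KEY (id);
```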

Good luck,

Ann


Re: [firebird-support] Reviving an old firebird database (file conversion/programming hints)

2016-10-13 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]


> On Oct 13, 2016, at 4:37 AM, Urs Liska fireb...@openlilylib.org 
> [firebird-support]  wrote:
> 
>> Am 13.10.2016 um 10:18 schrieb Elmar Haneke el...@haneke.de 
>> [firebird-support]:
>> 
>> > 1)
>> > I assume on LInux (Debian based Mint version and an Ubuntu flavor) I
>> > install a variant of the server (in the repository I find the 2.5
>> > variants, but I'd be also happy installing a   downloaded 3.0 
>> > instead).
>> > Then I need a generic DB manager like e.g. Flamerobin to do general
>> > operations and basic DB editing. Or is it equally possible to do that in
>> > the terminal?
>> >
>> > Is that correct?
>> 
>> That's mostly correct. 64-bit Firebird might be unable to open an old
>> database - I ran into that problem moving an FB 1.5 database to FB 2.5.
>> 
>> In that case you have to install either a 32-bit package or an older package.
>> 
>> > 2)
>> > How can I determine the version of my existing database file? And how
>> > can I convert that to a current one in the expected case it isn't
>> > compatible with my current installation? There is only the .fdb file and
>> > no backup available.
>> 
>> Once accessible FlameRobin can tell you the version of datafile.
>> 
>> To convert to a current database you have to create a backup and restore.
>> 
>> Elmar
> 
> So that boils down to having to try out different versions of Firebird until 
> I find one that can open the DB?
>   

Try gstat -h.  It should tell you the version of the database file even if the 
server won't open it.  

Good luck,

Ann
> 
> Posted by: Urs Liska 


Re: [firebird-support] Optimizer request

2016-09-10 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
I wrote

> > select o.*
> >   from orders o
> >  inner join partners p
> >  where p.partid = o.opartid
> >    and p.country = 'Spain';
>
> Helen suggests:

>  select o.*
>    from orders o
>   inner join partners p
>      on p.partid = o.opartid
>   where p.country = 'Spain';
>

which is more correct.  But the Firebird optimizer is smart enough to find
join terms, even when misplaced, so the two generate the same plans.  Again,
with more complex queries, results may differ.

Cheers,

Ann


Re: [firebird-support] Optimizer request

2016-09-09 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Sep 9, 2016 at 8:30 AM, 'Djordje Radovanovic' softsis...@sbb.rs
[firebird-support]  wrote:

>
> I tried query with subquery in where clause and found big issue for this
> type of subquery.
>
> select * from orders where orders.partid in (select partners.partid from
> partners where partners.country = ‘Spain’)
>
> Performance Analysis returns me this
>
> partners 687660 non index reads
> orders  28657 index reads
>
> If you analyze this result you’ll find that there is 687659 unnecessary
> non index reads. If developer of optimizer accept that all queries on the
> left side of where clause has priority (and there is no way to be opposite)
> than we have big improvement in optimization.
>
> Is there an index on partners.country?  What plans are generated for each
query?

If I were writing this query, I'd write

select o.*
 from orders o
inner join partners p
 where p.partid = o.opartid
and p.country = 'Spain';

All that silliness about "select from (select from )" probably doesn't save
anything in
this case - it does when the inner select is complicated, but not here.
Firebird won't
carry around unneeded fields from the partners table.

What is the distribution of partners.country?
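If there is no index on partners.country, a minimal sketch of adding one (the index name is hypothetical):

```sql
-- Hypothetical index so the optimizer can drive the join from the
-- restriction on country instead of scanning all of partners.
CREATE INDEX idx_partners_country ON partners (country);
```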

Good luck,

Ann



Re: [firebird-support] gbak error

2016-08-31 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Does the error happen on file creation or after some data has been written?



> On Aug 30, 2016, at 4:36 PM, Nick Upson n...@telensa.com [firebird-support] 
>  wrote:
> 
> 
> 
> no, not a file name collision and plenty of disk space
> 
> this just started happening, no change to installed s/w 
> 
> Nick Upson,
> Principal Operations Engineer, Telensa Ltd.
> Direct +44 (0) 1799 533252, Support Hotline +44 (0) 1799 399200
> 
> 
>> On 30 August 2016 at 21:14, Helen Borrie hele...@iinet.net.au 
>> [firebird-support]  wrote:
>> Hello Nick,
>> 
>> > I'm running fb2.1 on centos5, recently a system has started
>> > throwing this error when asked to do a backup, the gbak parameters
>> > are unchanged, permissions all seem ok, any suggestions what I might check
>> 
>> > gbak: ERROR:Unexpected I/O error while writing to backup file
>> > gbak:Exiting before completion due to errors
>> 
>> "gbak parameters are unchanged." Could be you already have a file in
>> that location with the same name.
>> 
>> HB
> 
> 
> 
> 


Re: [firebird-support] Database/Backup consistency check

2016-07-26 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Jul 4, 2016 at 2:52 PM, 'Steffen Heil (Mailinglisten)'
li...@steffen-heil.de [firebird-support] 
wrote:

Late, but I was away...

>
> I have a database on a hard drive which is failing. The hard drive already
> has some bad sectors and we are about to replace it.
>
> Now I have a backup of that database created with gbak that I could use.
> However I am not sure if the hard drive was still fine, when the backup was
> created.
>

If the backup succeeded, then the database file had no corrupted data pages
or system pages.  It might have corrupted pages in indexes on user tables.


> On the other hand, I can right now access the database just fine and I
> could try to simply copy it.


Copying it is a good option if a backup fails - you'll have something to
try to recover from.

> As a third option I could create a new backup to external media right now.

That would have been the best option.  If the backup succeeded, then you
would know that all your data is fine and up to date.

>
> Now my questions:
> - Is there any way to check a copied database file for corruption?
>   If the file copy works without error (no file system I/O error) can I be
>   sure, or is there a command to be sure, that the file is completely intact?

Gfix -v works on the physical level.  Backup or sweep checks the logical
consistency of data and system structures but not user indexes.


> - If I restore the backup and gbak does not report an error, can I be sure
> that everything is okay?
>

Yes.  If there were bad index pages, they will be replaced.


> - If I create a new backup and gbak does not report an error, can I be
> sure that the new backup is okay?
>

Yes, the backup will be OK - yes there have been bugs in backup but they
were generally mishandled dependencies ... not bad pages

>
> There used to be a checksum in the page header, but IIRC that field was
> abandoned.
>

Decades ago.


Good luck,

Ann


Re: [firebird-support] Database design. Simple question!

2016-07-26 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Jul 25, 2016 at 5:51 PM, shg_siste...@yahoo.com.ar
[firebird-support]  wrote:
>
> Hello! I  have a FB 2.5 database with a tabla "invoices". I need to add a
> field in that table which will be null most of the times. Lefts say, that
> 90% of the records will have this new field in null.
>
> What is your advice? To create a new table with a FK to "invoices" and
> save the needed field in a separate table (only when the value is not null
> of course)
>
Each record in the second table would incur per-record storage overhead, and
the outer join needed to put together the full record can make query
optimization difficult.  Really, that's a bad idea.


> or just create the new field in the main "invoices" table? I think I read
> somewhere that null fields use very little space in the database. Is that
> true?
>
Null fields take up very little space.  Every record has space for a null
flag for each field in the record.  The space is there whether or not the
field is null.  A null field is filled with spaces or zeros, depending on
the type. Run-length compression will probably make it disappear.

But seriously, people, don't worry so much about data size!  The basic ODS
on which Firebird is built was designed in 1984 - thirty-two years ago!
Back then, DEC, where Jim and I worked, had three database groups.  We got
together and asked management if there was some way we could get a gigabyte
of disk to do some performance and limits testing.  Management said, "NO.
Too expensive.  No real world application will see those limits."  DEC made
disks and our group was part of disk engineering.  Now you can get 5TB for
$120.  Yes, dense records reduce the amount of I/O, but remember that
Firebird was designed for small disks and almost no memory - it's miserly
about space.

Good luck,

Ann


Re: [firebird-support] Engine12 mistery

2016-06-15 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
The way I think of the Y-valve* is that the stem of the Y is the client -
whether the normal fbclient or the Java client or other language specific
clients.  The connection request goes from the client to a provider.
Right now, the providers are Remote (which may be built into the Y-valve)
and Engine12.  Someone could create a provider that translated Firebird
requests to Oracle/PostgreSQL/MySQL or whatever and offer a provider
to a different back end.  If there's enough demand, Firebird 2.5 could be
cut back to be a provider to older versions of Firebird.

So the stem is the client and the branches are the providers.

Cheers,

Ann

* Y-valve is a nautical term.  On a boat, the head (toilet) outflow goes
through
a Y-valve.  One leg takes the effluent to a holding tank, the other dumps it
overboard.  No parallel implied between what goes through the Firebird
Y-valve
and the nautical term.

On Tue, Jun 14, 2016 at 4:46 PM, fabia...@itbizolutions.com.au
[firebird-support]  wrote:

> 
>
>
> Mark
>
> Thank you, I think I am beggining to understand the concept, so the
> provider's have to be combined including ALWAYS Engine12 (the core FB
> server procedure/method that really goes to the file on disk and does the
> reading and writting to the DB). So the following combinations are valid:
> Providers = Remote + Engine12
> Providers = Loopback + Engine12
> Providers = Remote  + Loopback + Engine12 (the first two in any orther)
>
> And any Providers combination excluding Engine12 would not work because it
> is not including the component that goes to the DB file and reads/writes
> from /to it.
>
> It seems Engine12 is not the same "type of component" as the other
> Providers. In the documentation it is refered as a Y valve, if I understand
> it correctly Engine12 is the base of the Y, while the other providers are
> the top left and top right of the Y, you can have any top you want, BUT YOU
> MUST HAVE the base always, else it does not work, correct?
>
>
> Cheers,
> Fabian
>
>
>
> - Original Message -
> *From:* Mark Rotteveel m...@lawinegevaar.nl [firebird-support]
> 
> *To:* firebird-support@yahoogroups.com
> *Sent:* Wednesday, June 15, 2016 5:32 AM
> *Subject:* Re: [firebird-support] Engine12 mistery
>
> On 2016-06-14 19:44, fabia...@itbizolutions.com.au [firebird-support]
> wrote:
> > My understanding from reading the documentation is:
> >
> > Provider = Remote means the client is on the network, excluding
> > 127.0.0.1
> > Provider = Loopback means the client is actually on the same OS
> > instance as the FB3 engine, and it is using the 127.0.0.1 to access
> > the database to avoid the "embedded server concept" from answering
> > the
> > request, as it would capture the DB file and will not allow any other
> > clients from remote /network source.
> > Engine12 = The local server takes control of the database as if it
> > was
> > an embedded server, killing all future chance of accessing the DB
> > from
> > the network, so Engine12 is if I understand correctly the way to talk
> > to the DB engine in "exclusive" mode when you want to perform
> > maintenance or work on the security database.
> >
> > Is the above wrong?
>
> Yes, your understanding is wrong. Engine12 **is** the component that
> actually does the work with the database file. The rest is just plumbing
> to connect to servers (or make fbclient connect to a local server),
> select the right provider, etc.
>
> As the release notes say:
>
> http://www.firebirdsql.org/file/documentation/release_notes/html/en/3_0/rnfb30-engine.html
>
> * Remote (establish network connection)
> * Engine12 (main database engine)
* Loopback (force network connection to the local server for <database name> without an explicit network protocol being supplied).
>
> There was also talk of providing an 'Engine11' which would essentially
> be a Firebird 2.5 you could use to connect to ODS 11.2 and older
> database files.
>
> Your confusion likely stems from the fact that Firebird embedded now is
> fbclient + Engine12, as opposed to Firebird server + Engine12 for a
> normal database server, while Firebird embedded used to be a separate
> DLL/SO. BTW: In normal situations fbclient **does not** have access to
> Engine12, so it only behaves as a client library.
>
> Mark
>
>
>
> 
> Posted by: Mark Rotteveel 


Re: [firebird-support] Re: Is Firebird 3 ready for Production?

2016-05-26 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, May 26, 2016 at 8:30 AM, fabia...@itbizolutions.com.au
[firebird-support]  wrote:

>
> I wrote:
>
>
>   In V3.0, Firebird is multi-threaded at the client statement level.  It
> does not decompose queries and schedule the pieces on different processors.
>
>
>
> I understand, I thought true SMP was referring to assigning each client's
> request to all CPUs at once, instead of balancing the clients' requests
> across processors. So the current version (FB 3) is probably going to
> outperform the FB 2.54 when there are concurrent requests passed to the
> server engine because each request will go to a different processor,
> correct? The more concurrent requests we have the more we are going to
> notice the difference, right?
>

Right.  The strength of a relational database is its ability to process
multiple client requests simultaneously.  That's the goal of Firebird V3.
If all you want to do is strip data off the disk as fast as possible, there
are better technologies.

Splitting client requests across processors in general is hard.  Well, it's
less hard if you don't care if the result is performance improvement.  In
general, splitting a single requests across processors is beneficial if
there's lots of computing or if the request can be split so different parts
access different disks.  A simple "select count(*) from MyTable" cannot be
improved by splitting it across processors unless the table can also be
split across disks in a deterministic way.   One example that could be
improved by splitting a single request is large sorts, which can be
improved by splitting them so one thread is fetching records while others
sort subsets.  With large memories and 64 bit address spaces, lots of
sorting can happen in memory.

But frankly, if I were the core developers, I'd want to put my efforts into
other areas that will provide more benefits than fast large sorts.

Good luck,

Ann


Re: [firebird-support] Re: Is Firebird 3 ready for Production?

2016-05-25 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, May 25, 2016 at 2:53 PM, Ann Harrison <aharri...@ibphoenix.com>
wrote:

> On Wed, May 25, 2016 at 1:11 PM, fabia...@itbizolutions.com.au
> [firebird-support] <firebird-support@yahoogroups.com> wrote:
>
>>
>>  Now on the flip side, the performance sucks, it is worst than with FB
>> 2.54, and when looking at the task manager on windows it appears only one
>> processor it doing the job, as if the code was not SMP enabled very
>> strange.
>>
>
> One possibility is that you're testing V3.0 SuperServer single user.  In
> V3.0, Firebird is multi-threaded at the client statement level.  It does
> not decompose queries and schedule the pieces on different processors.
> That means that a full-table scan runs on only one processor.  Two
> simultaneous full-table scans will run on two processors.
>

I should have continued to say that two full-table scans probably won't be
any faster in 3.0 than they were in 2.5 because you're measuring disk
transfers and adding processors doesn't make the disk go faster.  I might
also have mentioned that sharing caches and other internal structures
between simultaneous threads is one challenge.  Distributing queries across
processors is another and totally different challenge.  The Firebird
developers were wise (in my opinion) to take the challenges one at a time.

Good luck,

Ann


Re: [firebird-support] Re: Is Firebird 3 ready for Production?

2016-05-25 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, May 25, 2016 at 1:11 PM, fabia...@itbizolutions.com.au
[firebird-support]  wrote:

>
>  Now on the flip side, the performance sucks, it is worst than with FB
> 2.54, and when looking at the task manager on windows it appears only one
> processor it doing the job, as if the code was not SMP enabled very
> strange.
>

One possibility is that you're testing V3.0 SuperServer single user.  In
V3.0, Firebird is multi-threaded at the client statement level.  It does
not decompose queries and schedule the pieces on different processors.
That means that a full-table scan runs on only one processor.  Two
simultaneous full-table scans will run on two processors.

Good luck,

Ann


Re: Re: [firebird-support] update all records

2016-05-07 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, May 6, 2016 at 2:05 AM, liviuslivius liviusliv...@poczta.onet.pl
[firebird-support]  wrote:
He wrote:

> all records inserted by tr2 are not updated if tr1 is e.g. snapshot
> or during tr1 update there are inserts.

I wrote:

> Actually, that's not true.

And I was wrong and lazy.  Should have tested it.  Records inserted by one
transaction will not be affected by an update by another concurrent
transaction.

My apologies.

Best regards,

Ann

>
>


Re: [firebird-support] update all records

2016-05-05 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Sergio asked why this statement

  update stock set actualizar = 0 where 1=1

doesn't update all records in the stock table.   First, the clause "where
1=1" is unnecessary.  The update statement does not require a where clause
and in the absence of a where clause, it affects all records in the table.

, liviusliv...@poczta.onet.pl [firebird-support] responded


> one of possible reason is that records are not visible to transaction with
> this update statement.
> e.g.
> 1. you start transaction (tr1)
> 2. someone start transaction 2(tr2)
> 3. tr2 insert some records
> 4. tr1 run update
> 5. tr2 commit
> 6. tr1 commit
>
> all records inserted by tr2 are not updated if tr1 is e.g. snapshot
> or during tr1 update there are inserts.
>

Actually, that's not true.  If tr2 inserts records before tr1 attempts its
update, tr1 will discover an update conflict when it finds one of tr2's new
records.  If tr1 is a no-wait transaction, it will report an error
immediately otherwise it will wait for tr2's commit.  Then, if tr1 is a
snapshot transaction, when tr2 commits, tr1 will report an error.  If tr1
is a read-committed transaction, it will silently update tr2's new records
without having looked at them.*

If, however, the sequence is that both transactions start, then tr1 does
its update and tr2 stores its new records after the update completes, both
transactions will succeed and the records inserted by tr2 will not be
updated.  Which would also be the case if tr1 ran to completion before tr2
started.

What's slightly more likely is that tr1 and tr2 are concurrent and the
sequence of actions is that tr1 modifies all the stock records then tr2
reads all the stock records and sees the old versions (assuming that tr2 is
not a "no record version" transaction).  Even if tr2 is a read-committed
transaction, it will read the next older version of records if the most
recent is not committed.

So, Sergio, a bit more information would help us give you a better answer.
Are there multiple transactions running?  Why do you think some records
weren't updated?  Do you know what transaction options you're using?

Cheers,

Ann

* If the inserts and update are running at the same time, the most likely
case is that tr2's inserts will be physically the last entries in the
table.  Tr1's updates will be made in storage order, so the last records it
tries to modify will be the records tr2 created.  If the table
has had lots of records deleted, it's possible (I think) for tr2 to store
its records on partially empty pages in the first part of the table and
avoid a collision with tr1 if tr1 has already modified the records it found
on those pages.  That's very similar to having tr1 perform its update
before tr2 stores its records.


Re: [firebird-support] SELECT WITH LOCK, WAIT transaction, deadlock

2016-04-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 28, 2016 at 11:39 AM, Ann Harrison <aharri...@ibphoenix.com>
wrote:

A lot of stuff including:

> In ancient history, even in read-committed mode, Firebird blocked
> transactions from updating records if the most recent version was created
> by a concurrent transaction.  At the time, we considered anyone who liked
> inconsistency not worth worrying much about.  It was a long time ago.
>
Or maybe not.  I mentioned this thread to Jim who's creating yet another
databases thirty-plus years after InterBase.  His reaction?  "Read
committed is flawed in premise.   You can reach any conclusion from a
flawed premise."

Cheers,

Ann


Re: [firebird-support] SELECT WITH LOCK, WAIT transaction, deadlock

2016-04-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 28, 2016 at 2:16 PM, Gabor Boros gaborbo...@yahoo.com
[firebird-support]  wrote:

> ...
>
> Try to utilize WITH LOCK because I need many unique number generators in
> a table field. Like a generator/sequence but under transaction control.
>

Okay...  What's the requirement that keeps you from using generators?  Do
you intend to grab the number in a transaction that does other work?  What
is the concurrency requirement?  What you're thinking about doing will
serialize the generation of identifiers and all other actions of those
transactions.  I'm not totally convinced that using ReadCommitted/wait will
create an auditable series of numbers in the case where a transaction
fails. This article may give you a way to look at the problem.
http://www.ibobjects.com/TechInfo.html#ti_AuditableSeries
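One hedged sketch of a transaction-controlled counter along those lines - table and column names are hypothetical, and note the caveat above that this serializes every transaction touching the same row:

```sql
-- One row per number series; updated inside the transaction that
-- consumes the number.
CREATE TABLE counters (
    name VARCHAR(31) NOT NULL PRIMARY KEY,
    val  BIGINT DEFAULT 0 NOT NULL
);

-- The UPDATE creates an uncommitted record version, so concurrent
-- transactions in wait mode queue here until commit or rollback.
-- If this transaction rolls back, the number is never handed out,
-- keeping the series gap-free (but not concurrent).
UPDATE counters
   SET val = val + 1
 WHERE name = 'invoice'
RETURNING val;
```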

Good luck,

Ann


Re: [firebird-support] SELECT WITH LOCK, WAIT transaction, deadlock

2016-04-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 28, 2016 at 9:19 AM, Gabor Boros gaborbo...@yahoo.com
[firebird-support]  wrote:

> On 2016. 04. 28. 14:55, Dimitry Sibiryakov s...@ibphoenix.com
> [firebird-support] wrote:
> > 28.04.2016 13:41, Gabor Boros gaborbo...@yahoo.com [firebird-support]
> wrote:
> >> The message is:
> >> deadlock
> >> update conflicts with concurrent update
> >> concurrent transaction number is ...
> >
> >That's an expected behavior.
>
> Are you sure?
>
> In the language reference for "isc_tpb_read_committed + isc_tpb_wait" I
> see this: "Update conflict exceptions can never be raised by an explicit
> lock statement in this TPB mode."
>
>
> http://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-dml-select.html#fblangref25-dml-with-lock


Hmmm.  Interesting.  The whole text in the block for isc_tpb_read_committed
+ isc_tpb_wait is:

"If there is an active transaction holding ownership on this record (via
explicit locking or by a normal optimistic* write-lock), the transaction
attempting the explicit lock waits for the outcome of blocking transaction
and when it finishes, attempts to get the lock on the record again.

Update conflict exceptions can never be raised by an explicit lock
statement in this TPB mode."

As you may know, the implementation for "Select ... with lock" is to add an
update to the select statement.  The update doesn't change anything in the
record, but does create a new version belonging to the current
transaction.  That new version blocks other transactions from updating the
record until it is committed or rolled back.  So the question really is the
behavior of an update in read-committed/wait mode.

In ancient history, even in read-committed mode, Firebird blocked
transactions from updating records if the most recent version was created
by a concurrent transaction.  At the time, we considered anyone who liked
inconsistency not worth worrying much about.  It was a long time ago.  I
thought that more recent Firebird versions allowed read-committed
transactions to update records as long as the most recent version was
committed.  If that's right, then the documentation is right.  But your
tests suggest otherwise.  And maybe I'm just confused.  If not, I wonder if
there might be some error in the back-off and retry under contention.

Here's what happens during a wait (be patient, it's a little complicated).
Although Firebird doesn't lock records, it does maintain a lock table to
control access to some shared resources.  When a transaction starts, it
takes out an exclusive lock on its own transaction id.  When a transaction
must wait for another transaction, it requests a shared lock on its
transaction id.  When a transaction ends, its lock on its transaction id is
freed - however it ended.  The lock management code signals the waiting
transactions that they can get their lock - which they don't particularly
want - they just want too wait for their predecessor's end.  In the case of
waiting for a potential update conflict to end, the waiting transactions
re-read the record they were trying to update and check the most recent
version.  If read-committed transactions can update any committed record
version, then the first waiting transaction sees the most recent version as
committed and updates it.  In the case in point, that transaction's "Select
... with lock" succeeds and all is well.

But suppose there were several transactions all trying to update the same
record and all waiting for the same transaction to finish.  The second and
subsequent transactions could retry and see a new uncommitted version.  In
theory, those transactions should request a new lock on the transaction
that created the new uncommitted record version and wait again.

Or maybe I'm confused.

Good luck,


Ann



* When I studied concurrency control, "optimistic" concurrency control
meant that all transactions just went to work ignoring each other.  When a
transaction ended, it built a dependency graph of sorts between itself and
its contemporaries and if that graph was clean, it committed, otherwise it
rolled back.  That makes the working part of a transaction really fast and
the commit both slow and unlikely to succeed in a contentious environment.
Firebird's concurrency control is not "optimistic" in that sense.  Like
other "pessimistic" systems, in Firebird, a transaction blocks reads and
updates to record versions it creates as it runs and frees the record
versions when it commits.  That's why I growl whenever someone suggests
that Firebird's concurrency control is "optimistic" - that sounds to me
like a synonym for "stupid**".

**I'm sure there are many applications where optimistic concurrency control
is not stupid.  Mike Stonebraker has exploited most of them, though he
analyzes the transactions before they're run rather than after.


Re: [firebird-support] Re: Connecting to Firebird database from two or more pc with same username

2016-04-18 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Apr 18, 2016 at 10:03 AM, 'Thomas Steinmaurer' t...@iblogmanager.com
[firebird-support]  wrote:

> > On Thu, 14 Apr 2016 12:28:41 +0200 (CEST), 'Thomas Steinmaurer'
> > t...@iblogmanager.com [firebird-support]  >
> > wrote:
> >>
> >>
> >> Yep. If one want to have some sort of client login/user available on
> >> the server-side, which might be totally different to the Firebird
> >> user used upon connect time, a common approach is to put that login
> >> name into a context variable via RDB$SET_CONTEXT(...) and query that
> >> in context of the connection with RDB$GET_CONTEXT(...). Needless to
>


> > Virgo Pärna wrote:

>
> >   Unless I'm misunderstanding it, it is not possible to read
> > context variable from another connection?
>
> No. There are two "namespaces" USER_SESSION and USER_TRANSACTION which
> also define the visibility of context variable values.
>
> Check out:
> http://www.firebirdsql.org/refdocs/langrefupd25-intfunc-set_context.html


Maybe I'm confused, but I think Virgo is asking about two different
connections using the same
user credentials, not an application server with a single connection to
the database and multiple
connections from the client to the application.  Different connections have
different USER_SESSION
namespaces even if they use the same user credentials, I think.
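
A minimal sketch of what Thomas describes (the variable name APP_LOGIN is invented for the example); the value set in one connection's USER_SESSION namespace is not visible from another connection, even under the same credentials:

```sql
-- Connection A, logged in as APPUSER:
SELECT RDB$SET_CONTEXT('USER_SESSION', 'APP_LOGIN', 'alice')
FROM RDB$DATABASE;

SELECT RDB$GET_CONTEXT('USER_SESSION', 'APP_LOGIN')
FROM RDB$DATABASE;   -- returns 'alice'

-- Connection B, also logged in as APPUSER (a separate attachment):
SELECT RDB$GET_CONTEXT('USER_SESSION', 'APP_LOGIN')
FROM RDB$DATABASE;   -- returns NULL: the namespace is per connection
```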

Cheers,

Ann


Re: [firebird-support] Connecting to Firebird database from two or more pc with same username

2016-04-14 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 14, 2016 at 3:17 AM, Luigi Siciliano luigi...@tiscalinet.it
[firebird-support]  wrote:

>
>If a user try to connect from two or more PC with same username,
> Firebird accepts the connection but is safe without problems or is best
> to avoid like the pest?
>

Firebird keeps track of activity in terms of connections and transactions,
which are created dynamically and
are unique to a connection or transaction.  User login information is used
only for granting (or denying) access
for security purposes.  What this means is that you're not going to corrupt
the database or interfere with other
connections' work if everybody uses the same login criteria.  The problem
comes when you actually want to
grant (or deny) access to some people and not to others.  In that case, you
should have different user names
for different users, or create classes of user with a single user name and
a single set of privileges.
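
One way to set up such a "class of user" is a role rather than separate accounts; a sketch with made-up object names:

```sql
-- A role groups the privileges; several people can share it.
CREATE ROLE CLERK;
GRANT SELECT, INSERT, UPDATE ON ORDERS TO CLERK;
GRANT CLERK TO ENTRY_USER;

-- Each client then connects with that role, e.g. from isql:
-- isql mydb.fdb -user ENTRY_USER -password ... -role CLERK
```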

Good luck,

Ann


Re: [firebird-support] VIEW optimization

2016-04-07 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 7, 2016 at 3:41 PM, Ann Harrison <aharri...@ibphoenix.com>
wrote:

> On Thu, Apr 7, 2016 at 2:27 PM, Gabor Boros gaborbo...@yahoo.com
> [firebird-support] <firebird-support@yahoogroups.com> wrote:
>
>> 2016. 04. 07. 19:31 keltezéssel, 'Mark Rotteveel' m...@lawinegevaar.nl
>> [firebird-support] írta:
>>
>
>
Just in case you think Firebird should be able to ignore parts of an outer
join,
let me disillusion you.


>  select d.DepartmentName from Departments d
> *left outer* join Employees e on e.DeptID = d.DeptID
>
> Suppose you have 10 Departments and 10 Employees, but all the
> Employees have the same DeptID.  Then you should get the same
> DepartmentName ten times.
>

The left outer join will return the nine unmatched Departments, returning
one DepartmentName for each unmatched Department.  The name of the
Department with 10 Employees will be returned 10 times.

Good luck,

Ann



Re: [firebird-support] VIEW optimization

2016-04-07 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 7, 2016 at 2:27 PM, Gabor Boros gaborbo...@yahoo.com
[firebird-support]  wrote:

> 2016. 04. 07. 19:31 keltezéssel, 'Mark Rotteveel' m...@lawinegevaar.nl
> [firebird-support] írta:
> >
> >
> > For a join (inner join), the rows must exist in both tables, so both
> > tables need to be evaluated.
> >
>
> I understand this. Try describe my problem with other words.
> Is the VIEW an atomic (or compiled like a STORED PROCEDURE) thing?
>

For views, only the view definition is stored.  When a query references a
view,
the view is compiled and optimized like any other query.  If possible, it
is
integrated into the larger query.
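
For example (all names hypothetical), a view is stored only as its definition; a query against it is merged with the outer query when the outer query is compiled:

```sql
CREATE VIEW V_EMP (EMP_ID, EMP_NAME, DEPT_NAME) AS
  SELECT e.EmpID, e.Name, d.DepartmentName
  FROM Employees e
  JOIN Departments d ON d.DeptID = e.DeptID;

-- Compiled as if the join were written inline, so the plan still
-- includes Departments even though only EMP_NAME is selected:
SELECT EMP_NAME FROM V_EMP WHERE EMP_ID = 10;
```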


> I have a VIEW with 4 fields (FIELD1..4), 1 come from the "FROM table"
> (FIELD1) and 3 from JOINs (FIELD2..4). If execute a SELECT FIELD1 FROM
> ... PLAN contains the JOINs. Why? If I write a SELECT manually not
> include JOINs if not needed for the resulted fields, because want
> maximum performance.
>
>
As Mark and others explained, both sides of a join must always be
evaluated, even if you reference only fields from one table.  Consider
this case:

 select d.DepartmentName from Departments d
join Employees e on e.DeptID = d.DeptID

Suppose there are no Employees at all, but 10 Departments.  That
query should return no results because what was asked was to return
the DepartmentName of Departments with Employees.

Suppose you again have 10 Departments, but only 1 Employee and
that Employee has a DeptID that matches one of the Departments.
Then the query should return one DepartmentName.

Suppose you have 10 Departments and 10 Employees, but all the
Employees have the same DeptID.  Then you should get the same
DepartmentName ten times.

You're thinking of a case where there is a (possibly unstated) referential
relationship between the table that you're getting fields from and the other
table in the view.  The table you're getting data from is the referencing
table and the other is the referenced table.  In that particular case, the
join doesn't matter, but there's no way that Firebird can know that.

Good luck,


Ann


Re: [firebird-support] help to repair a corrupted database

2016-04-07 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Apr 7, 2016 at 11:28 AM, korkl...@yahoo.it [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> hi, i'm trying to repair a corrupted db for firebird 2.5.2,  the database
> file is of about 30 gb.
>
> i have disconnected all the user, i did a copy of the file and then i
> executed the command
> gfix -validate 'dbname' with the credentials of the user that created the
> db (not the sysdba)
>
> now the gfix process is 90 minutes that is on 0 cpu usage and 1 mb of ram
> but the firebird process sometime works,
>
> is the gfix process freezed or it's normal?
>

Probably normal.  Gfix itself doesn't do much; it just connects to the
database with options that cause Firebird to validate the database.



> can i terminate it and retry or maybe this can add other problems to my
> database file?
>

I wouldn't terminate it for another couple of hours.  It's unlikely that
stopping gfix will
have adverse effects on the database, though stopping the Firebird server
could.

What are the symptoms of the corruption?

Good luck,

Ann


Re: [firebird-support] db corruption

2016-04-04 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Apr 4, 2016 at 2:41 AM, 'Andrew Zenz' and...@aimsoftware.com.au
[firebird-support]  wrote:
...


>
>
> I am using WI-V2.5.3.26780 Firebird 2.5 ODS 11.2
>
> ...
>
> For both databases while running gbak:
>
> gbak -user sysdba -password masterkey -b -ig -l corrupt.fdb corruptbak.fbk
> –v
>
> I get the error:
>
> gbak: ERROR:internal Firebird consistency check (decompression overran
> buffer (179), file: sqz.cpp l
>
> ine: 239)
>
> gbak: ERROR:gds_$receive failed
>
> gbak:Exiting before completion due to errors
>
> gbak: ERROR:internal Firebird consistency check (can't continue after
> bugcheck)
>
>
>
> When I run gfix on both:
>
> gfix -user sysdba -password masterkey -validate -full -no_update
> corrupt.fdb
>
> I get the error:
>
> internal Firebird consistency check (cannot find tip page (165), file:
> tra.cpp line: 2375)
>

The first error - decompression overran buffer - is a bad sign, but it may
affect only one record,
or with luck only one back version.  But, that last error - cannot find tip
page - is bad.  TIP pages
contain transaction state.  Losing a TIP means that some transactions may
be lost after reporting
a successful commit.   When you see that error, make a read-only copy of
the database before
you do anything else or Firebird may start garbage collecting good data.

Gfix handles a limited number of common problems.  IBSurgeon has a free
tool, FirstAid, that analyzes
corrupted databases and reports back all the types of corruption.  In
another mode, which is not free,
it will fix some problems.  More complicated corruptions require some
degree of manual intervention.
IBPhoenix, for one, has people and tools that can fix many corruptions.  In
your case, since you've
lost a TIP, I'd recommend professional help if you want as much of the
data back as possible.

>
> I can pump some (most) of the data from them but hit errors on some tables
> (unfortunately large important
> ones).  One DB is 1.3Gb, the other is over 11Gb.
>

On a read-only copy of the database, you can probe with isql or another
data browsing tool to identify
bad records with indexed retrievals.  This works best on a primary key.
Look for records with primary
key less than some value.  If that works look for records with a key
between that value and some other
value.  When you hit an error, narrow the range.  That process should
eventually identify all the bad
records, and, with luck, you can pump the rest using key ranges.
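
The probing might look like this in isql, on the read-only copy; the table T, key ID, the ranges, and which probes fail are all made up for illustration:

```sql
-- Bisect the key range until the bad records are isolated:
SELECT COUNT(*) FROM T WHERE ID < 500000;                    -- ok
SELECT COUNT(*) FROM T WHERE ID BETWEEN 500000 AND 1000000;  -- fails
SELECT COUNT(*) FROM T WHERE ID BETWEEN 500000 AND 750000;   -- ok
SELECT COUNT(*) FROM T WHERE ID BETWEEN 750001 AND 1000000;  -- fails: narrow further

-- Then pump only the ranges that read cleanly:
INSERT INTO T_COPY SELECT * FROM T WHERE ID < 750000;
```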

Good luck,

Ann


Re: [firebird-support] Do I need to "recompile" dependant stored procedure?

2016-03-24 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Mar 24, 2016 at 5:13 PM, 'Wodzu' brucedickin...@wp.pl
[firebird-support]  wrote:

>
>
> I would have to dig in for example o that behavior, maybe it only seemed
> to me that this has occurred.
>

Don't bother digging, but if it happens again, take notes.

Cheers,

Ann



Re: [firebird-support] Do I need to "recompile" dependant stored procedure?

2016-03-24 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Mar 24, 2016 at 7:24 AM, brucedickin...@wp.pl [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> I have two procedures:
>
> 1. PROC_A
> 2. PROC_B - which is using internally PROC_A.
>
> Now, I've made some changes in PROC_A, do I need to recompile PROC_B as
> well?
>

That depends on what you've done to PROC_A.  Obviously, if you've changed
its parameters,
then yes, you have to change and recompile PROC_B.


> If so, what is the shortest way of doing it?
>

Again, it depends.  The easiest - brute force - way is to shut down your
server ... or at least
all connections to that database.  Compiled procedures are memory resident
and go away on
database shutdown.  If not, I think ALTER is the only alternative.
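
If PROC_A's parameters changed, PROC_B has to be redefined to match; a sketch (the signatures and body are invented - in isql, wrap it in SET TERM ^ ; ... ^):

```sql
-- Redefining PROC_B forces it to recompile against the new PROC_A:
ALTER PROCEDURE PROC_B
RETURNS (RESULT INTEGER)
AS
BEGIN
  EXECUTE PROCEDURE PROC_A RETURNING_VALUES :RESULT;
  SUSPEND;
END
```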

>
> I am pretty sure I've noticed some bad behaviour of PROC_B if I've left it
> without recompilation.
>

You shouldn't if there haven't been changes to the parameters of A.  If you
do notice something,
you should report the symptoms and error messages.

Good luck,

Ann


Re: [firebird-support] violation of PRIMARY or UNIQUE KEY constraint

2016-03-15 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Mar 15, 2016 at 8:04 AM, Tim Ward t...@telensa.com
[firebird-support]  wrote:

>
>
> Now, I seem to recall seeing somewhere that actually a primary key (and
> any other unique index?) is independent of transactions, so the following:
>
> (1) Transaction A inserts record X
> (2) Transaction B attempts to insert record X (having first checked that
> it doesn't exist, which it doesn't as far as transaction B is concerned,
> because transaction A hasn't committed yet)
>
> results in the error.
>
> Have I remembered this behaviour of primary keys correctly? Please could
> someone remind me where the documentation is if so?
>

On the documentation front, no, though it's probably in Helen's book, but
your memory is correct.  When Firebird attempts to put
a new entry in the unique/primary key index on behalf of Transaction B, it
notices that there's an entry for A and that A's transaction
is not committed.  Firebird causes B to wait for A to complete (unless it's
no wait), then gets an error if A commits and succeeds
if A failed.

Good luck,

Ann



Re: [firebird-support] Composite index - issue or not existing feature?

2016-03-15 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Mar 14, 2016 at 5:30 PM, setysvar setys...@gmail.com
[firebird-support]  wrote:

>
>
> Hi Karol! At the risk of this being confusing or even incorrect (I trust
> Dmitry or Ann will correct if it is incorrect).
>
> >SELECT * FROM dbo.XXX X WHERE X.A BETWEEN 2 AND 30 *AND* X.B BETWEEN 5
> AND 60
>
> My understanding of this is that Firebird (in theory) have two choices.
>
> Either
> (a) use XXX__A__B once using it exclusively for A
> or
> (b) use XXX__A__B 29 times (2 through 30) using it for both A and B.
>
> Firebird prefers to use (a).
>

Err, not quite.  There's no guarantee that there are only 29 values between
2 and 30.  That depends on the datatype, which is not available to the
index code.  With the exception of 64-bit integers (long sad story), all
numeric values in Firebird indexes are represented as mangled double
precision floating point, so there could be 29*(2**56) values between 2 and
30. (Probably my math is off)  And multiple instances of each.


>
> If you added a new index covering only B, then Firebird could use
> XXX__A__B UNIQUE for A and also the new index for B (unlike many other
> databases, Firebird can utilize several indexes for each table of a query)
>

Right.

Cheers,

Ann


Re: [firebird-support] Re: Composite index - issue or not existing feature?

2016-03-14 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Mar 14, 2016 at 3:41 AM, liviuslivius liviusliv...@poczta.onet.pl
[firebird-support]  wrote:

> W dniu 2016-03-14 08:36:40 użytkownik Dmitry Yemanov
> dim...@users.sourceforge.net [firebird-support] <
> firebird-support@yahoogroups.com> napisał:
>
>
> > SELECT * FROM dbo.XXX X WHERE X.A BETWEEN 2 AND 30 *AND* X.B BETWEEN 5
> > AND 60
> ...
> > As you can see only A key is used but B key should be also used.
> > I am missing something?
>
> Yes, you do. If the first segment is matched for non-equality, then the
> following segments cannot be used.
>
>
> Why?
> Index is a Tree? And if i found VALUE 2 in A key then i can fast find
> value 5 in sub key (leaf)
> You scan through keys in A, and then in finded nodes you look for leafs in
> B
>

Expanding on Dmitry's answer...

Yes, the index is a tree, but it's not a tree the way you imagine it.  At
least not the way
I imagine you imagine it, which is something like this: the top of the
index has ranges of values for the first field in the index, which point
to narrower ranges of values for
the first field, going down to the point where you get a single value for
the first field with
ranges of values for the second field hanging under it.  Indexes like that
get unbalanced
easily and tend to be very deep.

A Firebird index key is a single value built out of all parts of a compound
key with some
extra stuff dribbled around so you can tell the difference between the
pairs of values
"deadlock" "search" and "dead" "locksearch".  The whole key is mangled so
it compares
correctly bytewise.  An index entry has a prefix, a key - possibly missing
some front
bytes - and the database key (record id) of the matching record.  A single
level Firebird
index is just a stream of index entries.  It's a little more complicated
than that to reduce
the cost of reading large pages(*).

When there are too many entries to fit on a single page, Firebird splits
the page and
creates a new level(**) above the level with index keys and record ids.
The upper level
has index keys, record ids, and the page number of the lower level index
page that
starts with that index key and record id(***).  The lower levels have
pointers to their
upper level, and to their right and left neighbors.

Each time an index page splits(****), the first key and record id pair of
the new page
gets pushed up to the next level.  Eventually the upper level has to split
and a new,
even higher level is created pointing to the next layer down.

Which is a very long winded way of saying that the Firebird index handling
code
hasn't a clue that it's looking at a compound index, let alone where the
values are
split.  If you want to do range retrievals on several fields, create an
index for each
one.  Then, when Firebird executes the search, it will search one index,
building
a bitmap of record ids in that range, then search the other building a
second bitmap
and combine the two bitmaps to retrieve only those records that match both
criteria.

Good luck,

Ann


(*) Index entries are variable length so you can't use a binary search on
an index page.  When pages got bigger than 4Kb, the time spent on index
searches went mostly into reading, on average, half the page.  Now indexes
in databases with large page sizes have an internal index that cuts the
average read of a 16Kb page from 8Kb to 1Kb.

(**) To simplify internal bookkeeping, the first physical page allocated to
an index
will always be the top page, so there's some fancy data shuffling when a
new level
is created.  First Firebird creates two new pages, then it moves the data
from the
old top level to the new pages and fills the old top level with pointers
down to the
new pages.

(***) The record id is propagated to the upper levels to make each index
entry unique,
even when the keys aren't.  That saves a lot of time in index garbage
collection but
isn't otherwise interesting.

(****) The way a page split works depends on where the entry that caused
the split
would go on the page. If the new entry goes in the middle of the page, half
the entries
are moved to the new page.   If it would have been the last entry, most of
the entries
stay on the old page and only enough to start the new page goes on that
page. So
if you're loading records in key order - or using a sequence to create your
keys - the
index stays dense.


Re: [firebird-support] differences between firebird 2.5 cs and 3.0 rc2 ss

2016-03-05 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Mar 4, 2016 at 3:22 AM, 'Checkmail' check_m...@satron.de
[firebird-support]  wrote:

>
>
>
> If I must go back tot he 2.5 version of firebird, how can I gbak the new
> backup-file to the old version?
>

If you haven't used V3 features in data definitions or the new
authorization options, reverting is pretty easy.  Leave V3 installed for
the backup.   Find a copy of gbak from a 2.5 distribution and use it to
create a backup file.  Install Firebird 2.5.x and use the 2.5.x gbak to
restore the database.  If you've used 3.0 features including the various
authorization options, you'll probably need to extract the database
metadata, edit out references to new features and create a new V2.5.x
database in a separate environment.  You'll need to have both versions of
Firebird available simultaneously.  Use one of the database migration tools
to copy the data from the V3 database to the V2.5 one.  You can find migration
tools here: http://ibphoenix.com/download/tools/migration

Good luck,

Ann


Re: [firebird-support] differences between firebird 2.5 cs and 3.0 rc2 ss

2016-03-03 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Mar 3, 2016 at 11:16 AM, 'Checkmail' check_m...@satron.de
[firebird-support]  wrote:

>
>
>
> I’ve postet my problems with the new odbc driver and firebird 2.5 cs, the
> slow queries and all other issus. Now I have installed the firebird 3.0
> rc2, the performance is great, superservers takes all cpu-cores and the
> problems now banned. I have not changed anything, is this a result of the
> superserver or the version 3 of firebird?
>

Both.  Firebird V3 introduces parallel execution of queries in
SuperServer.  SuperServer was always multi-threaded but only one thread
could run at any one time even on a multi-processor server.  The benefit of
threading in earlier versions was that queries could be interleaved, so a
long report didn't block all other requests.  With V3, threads do run in
parallel taking advantage of all the processing power of a multi-core
machine.

Glad things are working better for you!

Good luck,

Ann


Re: [firebird-support] Unable to retreive database

2016-03-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Mar 1, 2016 at 6:02 AM, jojecodewo...@yahoo.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> My firebird database got corrupted and I am trying to retrieve it. For
> retrieving database I tried using *gfix.exe* and its commands and got my
> tables back but unfortunately I could not retrieve my stored procedures
> from database. Is there any way through which I can retrieve database along
> with stored procedure
>

OK, guessing what went wrong is easier if you tell us what error messages
you're seeing.

>
> Is there any other way through which I can retrieve my corrupted
> database…..?
>

IBFIrstAid.  If that doesn't work, IBPhoenix offers consulting to repair
databases.

>
>
> Also I would appreciate if someone can also point out how the database
> generally gets corrupted/damaged….!!
>
>
>
Generally, databases don't get corrupted.  One way to do it is to put the
database on a shared drive
and override the configuration parameters that keep two instances of
Firebird on different machines
from accessing the database at the same time.  Without some hint of what
errors you're seeing
it's hard to guess.

Good luck,

Ann


[firebird-support] Gbak and indexes

2016-02-25 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Someone on a different forum asked whether gbak restores wouldn't be faster
if gbak backed up indexes so it didn't have to rebuild them.  The writer
wanted the
clean-up and compaction that a gbak backup and restore provides, but not the
time spent retrieving records, sorting, and rebuilding indexes.

There's a problem with that.  An old index is of absolutely no use in a
restored database exactly because records are in different places.  The
index contains the physical location of records.   Change the location,
invalidate the index.   That's the short answer.

The longer answer:

An index contains a key value and a "record id" also called a db-key. When
an index search finds an acceptable approximation of the value desired, it
uses the db-key to find the corresponding record.

 (It's actually more complicated than that, and depends on whether the
index is being used to optimize a sort, but in the end, an index search
finds a db-key and uses it to find a record.)

What's in a db-key?  Three things: a pointer page sequence number, an
offset into the pointer page, and an index offset on the data page.  A
pointer page is a part of the structure of a table that's not visible
outside the database.  It's simply a page that contains a vector of page
numbers.  Those page numbers belong to the data pages in the table.  The
RDB$PAGES table contains the sequence number and page number of pointer
pages for all tables, among other things.  After a table has been
referenced, the pointer page information is kept in memory. The pointer
page may be in cache or Firebird may need to read it from disk.

With the pointer page in its virtual hand, Firebird reads the page number
at the offset indicated by the db-key to get the page number of the
appropriate data page and finds the data page in cache or on disk.  The
offset on the data page in the db-key is an indirect pointer to the
record.  Each data page has an index to the records on it containing the
actual offset and length of the record, so a page can be reorganized
without changing the db-keys of the records on it.
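
The db-key is visible from SQL as the pseudo-column RDB$DB_KEY, which makes it easy to see that it encodes position rather than data (the table name is made up):

```sql
-- Show the db-key alongside a few rows:
SELECT RDB$DB_KEY, T.* FROM SOME_TABLE T ROWS 5;

-- Within the same transaction a db-key gives a direct, index-free fetch:
SELECT * FROM SOME_TABLE WHERE RDB$DB_KEY = ?;
```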

A gbak restore creates a new database and populates it with the data it
backed up from the old database.  Records will be stored on different pages
and at different offsets.  Pages that were part of one table in the old
database may be part of a different table or some other structure in the
new database.

Gbak restore could be made faster if Firebird recognized at a commit that
the transaction created several indexes on the same table and built all the
indexes in parallel rather than reading the table for each index, but using
an old index in a new database is not a good idea.

Good luck,

Ann


Re: [firebird-support] Restoring Backup

2016-02-24 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Feb 24, 2016 at 12:01 PM, Hugo Eyng hugoe...@msn.com
[firebird-support]  wrote:

Referring to the buffers requested for gbak:

>
>
> Why 12? Why not? :)
>

There's a cost to managing a large buffer cache.  Firebird will use the
entire
cache even if there's no particular reason to do so.

A gbak backup needs a few dozen pages for system table stuff that's
referenced frequently and for each table backed up another few pages
that help locate data.  The data pages are read once and then are of no
more interest.   So the default cache size is fine for a backup.   Two
hundred pages is probably enough to backup almost any database.  If
you use a huge cache, it will be filled with useless data pages.

A gbak restore needs the same few dozen pages of system tables and
some pages for internal table management.  When it is restoring data, it
fills a data page and goes on to the next one.  A large cache will fill with
pages that will not be referenced again until the indexes are built.  To
build indexes, Firebird reads records and sorts by keys.  That might
suggest that keeping millions of pages in cache would improve performance
by eliminating disk reads.   However, unless you've got a huge amount
of memory, those data pages will reduce the amount of memory available
for the sort.  The sort algorithm produces chunks of sorted key/record id
pairs which are subsequently merged.  The more chunks that can be kept
in memory, the faster the sort.   If you've used all your machine's  memory
for the cache, the sort will be slow and may start paging.

Good luck,

Ann


Re: [firebird-support] What is more effective execute block or select?

2016-01-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Jan 27, 2016 at 5:01 PM, setysvar setys...@gmail.com
[firebird-support]  wrote:

>  >If the question is which is likely to perform better, then the answer
> is the select.
> ...
>  >In general, execute block should be used when you want to change what
> Firebird
>  >considers fundamental elements of a quer...
>
> I'd never thought I would ask YOU this question, but are you sure, Ann?
>

Sure?  No.

> I just wonder if you've thought EXECUTE STATEMENT where you've written
> EXECUTE BLOCK.
>

Probably.  Does EXECUTE BLOCK allow you to build up the block at runtime?
If so, then I think I may be right unless the compiler is clever enough to
recognize
that your particular block is static.

>
> UPDATE <HugeTable> h
> SET <field> = (SELECT <value> FROM <SmallTable> t WHERE
> h.<key> = t.<key>)
>
> is much slower than
>
> EXECUTE BLOCK AS
> Declare variable a integer;
> Declare variable b integer;
> BEGIN
>FOR SELECT DISTINCT <key>, <value>
>FROM <SmallTable>
>INTO :a, :b do
>  UPDATE HugeTable
> SET <field> = :b
> WHERE <key> = :a;
> END
>

GDS/Galaxy aka InterBase, aka Firebird was built around a relational
language that practitioners at the time would have called "procedural" as
opposed to Quel which was "declarative".  Both lost to SQL which was just
ugly.  The procedural language typically
used nested loops "for <table1> for <table2> do <action> end-for end-for",
which maps very nicely into the PSQL
FOR SELECT.   The SQL UPDATE statement is one of the ugliest parts of that
ugly language and makes it very difficult to optimize
the case where you're drawing values from a small table to update a large
table.

If EXECUTE BLOCK requires static queries, then I'm completely wrong.  If
not, you might be better writing procedures for updates like this, or as
Mark suggests, MERGE.
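
The MERGE Mark suggests expresses the same update without the correlated subquery; a sketch with hypothetical table and column names:

```sql
MERGE INTO HugeTable h
USING (SELECT DISTINCT KeyField, ValueField FROM SmallTable) t
ON h.KeyField = t.KeyField
WHEN MATCHED THEN
  UPDATE SET TargetField = t.ValueField;
```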

Cheers,

Ann


Re: [firebird-support] Page is of wrong type (expected 5, found 0)

2016-01-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Jan 28, 2016 at 4:12 PM, Martin Schwedhelm silo...@yahoo.com
[firebird-support]  wrote:

>
>
> As far as I can tell (via gstat) forced write is enabled.
> Also, regarding virtual machines, at least ESX(i) does not cache guest OS
> writes
> (http://www.yellow-bricks.com/2011/04/07/mythbusters-esxesxi-caching-io/).
>

OK.  There are myths about disks that lie about writing stuff, but that's
way on the outer edge
of probability, especially since you've got lots of pages of zeros.

>
> Maybe a thing that was not clear in my initial post:
> We don't use a sql command that deletes that many records or drop tables.
> The missing (zeroed) data should not be zeroed at all.
>

When Firebird releases a page because it went empty or the table was
dropped or whatever, it doesn't zero the page.  My somewhat feeble
recollection is that pages of zeros are either newly allocated and never
used, or trampled by rogue processes.

Good luck,

Ann


Re: [firebird-support] What is more effective execute block or select?

2016-01-27 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Jan 27, 2016 at 7:48 AM, hamacker sirhamac...@gmail.com
[firebird-support]  wrote:

>
>
> I would know whats methods is more effective, per exemple:
>

Example below shows two convoluted ways to return 'Y' or 'N' depending on
whether an item with a particular id exists: a select against rdb$database
and an execute block.

If the question is which is likely to perform better, then the answer is
the select.
Select statements can be compiled and optimized once and reused, avoiding
repeated validation of access rights and metadata references.  The execute
block must be compiled, optimized, and access-checked each time it's used.

In general, execute block should be used when you want to change what
Firebird
considers fundamental elements of a query - the fields returned, the tables
accessed,
the sorting elements, and the conditions in the on and where clauses.

If the question is which of these particular queries is more effective, the
answer is
neither.  Mark pointed out that there are much simpler ways to determine if
an item
with a particular value exists, though they simply fail to return a value
when the
answer is no.  In most cases it's pretty easy to build the rest of the
logic in whatever
it is that's asking the question.

Good luck,

Ann




> To know if itens exists or not, select
> select
>   case
> when exists(select 1 from cv_itens  where id_cv=51716)
> then 'Y'
> else 'N'
>   end as existe_itens
> from rdb$database
>
> OR using block execute:
> execute block
> returns(exist varchar(1))
> as
> begin
>   exist='N';
>   if (exists(select 1 from cv_itens  where id_cv=51716)) then exist='Y';
>   suspend;
> end
>
> I think that first method using rdb$database as source, checking
> permissions and others things and second method is not and more powerfull
> using psql.
>
> I would like to know what you guys think about.
>
>
>
> 
>


Re: [firebird-support] Page is of wrong type (expected 5, found 0)

2016-01-23 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Sat, Jan 23, 2016 at 9:41 AM, Martin Schwedhelm silo...@yahoo.com
[firebird-support]  wrote:

>
>
> I have multiple (8) databases where no connection is possible and gfix
> prints the above message.
>
> Firebird version: 2.5.2
> Operating system: Windows 2003 - Windows 2008 (VmWare)
> Size: Varying, ~100MB to ~14GB single file
> ODS: 11.1 / 11.2
>
> It looks like the database file is truncated and when resized to the
> previous size
> because all pages starting from a random page till the end of the file are
> filled with zeros.
> In one case only the first 9MB of a 7GB database survived.
>

Are forced writes enabled?

>
> Before the corruption all systems involved were restarted (windows-update,
> etc.).
> Could an unsafe shutdown of the database server explain the behaviour?
>

Yes, if you're not using forced writes.

Good luck,

Ann


Re: [firebird-support] Forced writes on Linux?

2016-01-17 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Jan 15, 2016 at 5:15 AM, Tim Ward t...@telensa.com
[firebird-support]  wrote:

> I know the folklore around forced writes
>
> - turning forced writes on is safer
>

Not exactly folklore.  Firebird doesn't have a journal but instead depends
on careful writes to keep the database constantly consistent.  After a
crash - system or database - there's no recovery process.  The database is
available instantly.

The basic principle of careful write is writing data before pointers. When
adding data, the page with the data is written before the page that points
to it.  When removing data, the pointer is cleared before the data.  This
principal applies to internal structures like pointers as well as records
and indexes.  If you allow the operating system to write pages in its
preferred order, the database on disk may be inconsistent.  As long as
nothing goes wrong, the state of the database on disk isn't an issue.  At
one time, Windows didn't flush its page cache until a file was closed,
leading to losing hours or days of work.

What's different now is that machines are both faster and more reliable.
If nothing crashes, careful write doesn't matter.
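The data-before-pointers idea can be shown with a toy model (pure Python with invented structures, not Firebird's actual page code):

```python
# Toy model: a "disk" of pages and a pointer page referencing data pages.
# Careful write: when adding, write the data page before the pointer to it.
# A crash between the two writes leaves an unreferenced page (wasted space)
# rather than a pointer to garbage - the database stays consistent.
disk = {}            # page_number -> contents
pointer_page = []    # page numbers considered part of the database

def careful_add(page_no, contents, crash_after_data=False):
    disk[page_no] = contents          # step 1: the data page first
    if crash_after_data:
        return                        # simulated crash: pointer never written
    pointer_page.append(page_no)      # step 2: the pointer second

careful_add(1, "record A")
careful_add(2, "record B", crash_after_data=True)   # crash mid-operation

# Every pointer still leads to valid data despite the crash.
consistent = all(p in disk for p in pointer_page)
print(pointer_page, consistent)
```

Reversing the order (pointer first) is exactly what an operating system's write cache may do, which is why forced writes matter.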



> - turning forced writes on is slower
> - the safety bit is much more of an issue with Windows than with Linux,
> to the extent that it used not to work on Linux and nobody noticed for
> years.
>
> But I haven't found any actual, y'know, like, data, evidence, so on.
> Things like (with reference to Linux, I'm not interested in Windows):
>
> (1) Is there any data about how much slower it is, eg has anyone done
> any benchmark runs on their systems? - I've found just one blog entry
> somewhere with a number ("up to three times slower") but without any
> (published) data behind it.
>

I wouldn't particularly trust benchmarks that are more than three years
old.

>
> (2) Is there any evidence about how much safer it is? Statistics on
> corruptions with and without? Analysis of individual database corruption
> events showing whether turning on forced writes would or would not have
> prevented the corruption event?
>

The evidence I have is too old to be relevant, so I can offer only theory,
not
practice.  In theory, there should be no difference...

Cheers,

Ann


Re: [firebird-support] Bad performance of Firebird in Windows Server 2012

2015-12-29 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Dec 29, 2015 at 2:44 PM, Macma mac...@wp.pl [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> Do I have to change any configuration or the matter is that if I can´t
> enable disk cache, there is nothing I can do to improve the performance?
>
> Try to disable force write on that database.
>

After you have a UPS installed and an aggressive backup schedule.

Good luck,

Ann


Re: [firebird-support] Re: UPDATE to same record causing heavy disk I/O

2015-12-24 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Thu, Dec 24, 2015 at 1:03 PM, Dmitry Yemanov dim...@users.sourceforge.net
[firebird-support]  wrote:

> 24.12.2015 05:31, 'Leyne, Sean' wrote:
> >
> > With today's unlimited availability of disk space and silly-low cost per
> GB for storage, would an argument to dispense with the delta and simply
> store a full copy of the record (not including BLOB) be worthy of
> discussion?
>
> It's not about storage cost, but about IOPS. Bigger record = more I/O
> for the same data = slower performance. Situation is better for SSDs,
> but "silly-low cost" does not really apply there.
>

Right.  The logic was never about saving space on disk, except to the
extent that it reduces the amount of I/O necessary to complete a query.

>
> > I know that Jim has mentioned that in his later db engine he has adopted
> a reverse approach which has the latest version stored in full and for
> transactions required back versions responsible processing the deltas.  In
> this way, the latest version of the row are always complete so that the
> back versions can be dropped very efficiently.
>
> Isn't it exactly how Firebird works?
>

Yes it is.  The primary record version - the most recently created one - is
always complete. The earlier record versions may be whole or deltas.

Jim did handle back versions differently in Netfrastructure and slightly
differently again in NuoDB.  InterBase was designed for systems where a
whole megabyte of memory was a luxury, so stuff had to go to disk as
quickly as possible.  When designing for systems with more generous
memory, he chose to keep only the most recently committed record on disk.
That version, any important back versions, and the newest uncommitted
version were all maintained in memory.  If the system went down, any old
transactions that needed old versions went down with it.

NuoDB did approximately the same thing, except that it was distributed, so
old versions had to be maintained a bit more carefully so losing one node
would never lose all old versions.

His latest database, AmorphousDB, handles versioning at the attribute level
rather than the record level, but follows the model that only the most
recently committed version of an attribute is worth the cost of a disk
write.

Cheers,

Ann


Re: [firebird-support] UPDATE to same record causing heavy disk I/O

2015-12-23 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Dec 23, 2015 at 7:12 AM, Alexey Kovyazin a...@ib-aid.com
[firebird-support]  wrote:

>
> longest chain of versions here).
>
> If you will see a long chain of versions, it means that you are updating
> the same record while some other writeable transaction is active, or some
> old transaction was rollabacked with mark on TIP, so it makes Firebird to
> preserve versions of updated record.
>

Updating a record will always create a back version, even if there are no
other users in the database.  Back versions function in three ways:

  1) They provide a non-blocking consistent view of data for concurrent
transactions.
  2) They allow Firebird to detect and prevent conflicting updates.
  3) They are the mechanism for verb, savepoint, and transaction rollback

The third function must be available even in single user applications.
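The rollback role can be modeled as a simple version stack (an illustrative sketch, not Firebird's on-disk record format):

```python
# Illustrative sketch: each update pushes the prior state as a back version;
# rollback restores it.  This is why back versions are created even with a
# single user in the database - they are the undo mechanism.
class Record:
    def __init__(self, value):
        self.value = value
        self.back_versions = []       # chain of prior states

    def update(self, new_value):
        self.back_versions.append(self.value)
        self.value = new_value

    def rollback(self):
        self.value = self.back_versions.pop()

r = Record("v1")
r.update("v2")
r.update("v3")
r.rollback()          # undo the last update
print(r.value, r.back_versions)
```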

Good luck,

Ann

>
>
>
> 
>


Re: [firebird-support] Re: UPDATE to same record causing heavy disk I/O

2015-12-23 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Dec 23, 2015 at 12:01 PM, Dmitry Yemanov
dim...@users.sourceforge.net [firebird-support] <
firebird-support@yahoogroups.com> wrote:

> 23.12.2015 01:36, Ann Harrison wrote:
>
> >
> > ...Your first update will create a back
> > version that's just the difference between the old record state and the
> > new state.  The second (or maybe third) will create a back version
> > that's the whole record (IIRC) - much larger and possibly off page.
>
> Correct, but from another side third, fourth, etc updates will not
> create any new versions. May be worth checking whether a "heavy" second
> update could be better than creating a long version chain.
>

Right. When we were working on InterBase 1.1 (I think) a friend of Jim's
suggested using deltas for back versions to save space.  He's still a
friend, but that feature was a real trial to implement and debug, partly
because we ran out of bits in the record header.  There's one that says
"the version behind me is a delta", but not one that says "I am a delta."
 The first one is essential because it means that you've got to set up a
copy of the record on which to apply deltas.  The second would have been
very nice for detecting bugs that lead to having the first bit set when it
shouldn't be or not set when it should.

Once the deltas were working reasonably well, we discovered that some users
were updating a single record dozens of times in a transaction.  Not a use
case we'd considered.  I don't remember whether we decided to create a
complete back version after two or three updates in one transaction, though
I vaguely remember arguing that like Martinis, one or two was pretty
reasonable, but after three things are likely to go off in an unexpected
direction.  With small records, a complete back version saves space
compared with three deltas.  With big records and small changes... not so
much.
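The trade-off can be put in rough numbers (all sizes below are invented for illustration; real deltas are byte-level differences whose size depends on the change):

```python
# Rough arithmetic on the delta-versus-full-copy trade-off.
# Assumed, illustrative sizes - not measured Firebird numbers.
small_record_size = 40      # bytes; a delta header plus a few changed bytes
delta_size = 30             # is not much smaller than a small record

# Three updates of a small record in one transaction:
three_deltas = 3 * delta_size            # 90 bytes of back-version chain
one_full_copy = small_record_size        # 40 bytes, chain cut at one version
small_record_prefers_full = one_full_copy < three_deltas

# Three updates of a big record with small changes:
big_record_size = 4000
three_big_deltas = 3 * delta_size        # still only 90 bytes
big_record_prefers_full = big_record_size < three_big_deltas

print(small_record_prefers_full, big_record_prefers_full)
```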

Cheers,

Ann

>
>
> Dmitry
>
>
>
>
> 
> Posted by: Dmitry Yemanov <dim...@users.sourceforge.net>
> 
>


Re: [firebird-support] UPDATE to same record causing heavy disk I/O

2015-12-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Dec 22, 2015 at 2:17 PM, Caroline Beltran
caroline.d.belt...@gmail.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> Since I began using Firebird, I have kept my transactions (type
> concurrency) very short and then call COMMIT immediately afterward.  This
> has worked very well.
>
> I recently had the need to perform more complex processing and what I did
> was to keep everything short and modular.  But I am now seeing that my
> design in this specific case was flawed.
>
> I am updating different parts of the same record repeatedly and I believe
> that this is causing multiple back versions which causing excessive disk
> write I/O and slowing things down terribly:
>
> a) begin a transaction, update FIELD_1 of MYTABLE, COMMIT, and end
> transaction.
>

Just curious: why do you say both "COMMIT" and "end transaction"?  Commit
ends the transaction.

Good luck,

Ann

> b) begin a transaction, update FIELD_2 of MYTABLE, COMMIT, and end
> transaction.
> c) begin a transaction, update FIELD_3 of MYTABLE, COMMIT, and end
> transaction.
> d) begin a transaction, update FIELD_4 of MYTABLE, COMMIT, and end
> transaction.
> e) begin a transaction, update FIELD_5 of MYTABLE, COMMIT, and end
> transaction.
> Note: other tables are inserted and updated during during transactions a-e
> but those tables are not presenting any problems.  The problem is with
> MYTABLE.
>
> Of course, I normally update all fields in one transaction but in this
> particular case, determining the contents of each field is a complex
> process that requires manipulation and analysis of the the data provided by
> a number of other Firebird SELECT queries to the database.
>
> I am averaging about 300 transactions per minute during this process that
> may last 12 hours and during that time, things get terribly slow.
>
> So can someone confirm my suspicions, will each of the 5 transactions
> above to the same row of data cause 5 new 'back versions'?
>
> Like I said, I have always kept transactions very short.  I am thinking of
> something like this instead:
>
> a) begin a transaction, update FIELD_1 of MYTABLE.
> b) update FIELD_2 of MYTABLE.
> c) update FIELD_3 of MYTABLE.
> d) update FIELD_4 of MYTABLE.
> e) update FIELD_5 of MYTABLE, COMMIT, and end transaction.
> If something fails anywhere in between, I would ROLLBACK this single
> transaction.
>
> Keeping my transactions shorter and more modular as above is easier from a
> development point of view but I have the back version performance issue.
> Although the second method means a much longer transaction, I won't have
> back versions to deal with.  Do you think that this approach would be
> better?
>
> Thank you
>
> P.S.  Sweeping the database does not help with the performance problem,
> the only temporary solution to regain performance is to backup using GBAK
> and restore.
>
>
> 
>


Re: [firebird-support] Numeration without hole, Is right Before Insert Trigger?

2015-12-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Dec 22, 2015 at 9:40 AM, Luigi Siciliano luigi...@tiscalinet.it
[firebird-support]  wrote:

>
>I must assign a serial number, without hole, in a column of a fiscal
> document.  I must assign the number only when I know if the document is
> complete
> and I think the right moment is on a Before Insert Trigger for the table.
>

Yes, that's a good place, but you've got to be very careful.
Generators/sequences won't work because they're deliberately
non-transactional: once you take a value it's gone, and if your operation
fails, you'll have a hole.

>
> Is right or the insertion can fail? If not right, when I must assign the
> number to be sure of not have a hole in numeration?
>

One way to get numbers without holes is to create a table with one field
that contains the seed for your numbers.  In your before-insert trigger,
update that field by adding one to it, then read it to get the new value.
Unfortunately, if someone else has inserted a record concurrently, your
transaction will wait, then get an error, and you'll need to re-run the
whole thing.
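The update-then-read seed table can be sketched with SQLite through Python (illustrative only; in Firebird this would live in the before-insert trigger, and a concurrent writer would wait or fail as described):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE number_seed (next_number INTEGER)")
con.execute("INSERT INTO number_seed VALUES (0)")
con.commit()

def take_gapless_number(con):
    # Update first (the write lock serializes concurrent takers), then read
    # the new value.  Both happen inside the caller's open transaction, so
    # a rollback undoes the increment too - no hole appears.
    con.execute("UPDATE number_seed SET next_number = next_number + 1")
    return con.execute("SELECT next_number FROM number_seed").fetchone()[0]

first = take_gapless_number(con)
con.rollback()                    # the document failed: increment undone
second = take_gapless_number(con)
con.commit()
print(first, second)              # both attempts see number 1 - no gap
```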

Check the FAQs at FirebirdSQL.org for other ways of handling this problem.


Good luck,

Ann


>
>


Re: [firebird-support] UPDATE to same record causing heavy disk I/O

2015-12-22 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Dec 22, 2015 at 2:17 PM, Caroline Beltran
caroline.d.belt...@gmail.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> Since I began using Firebird, I have kept my transactions (type
> concurrency) very short and then call COMMIT immediately afterward.  This
> has worked very well.
>

Good, but that too can be overdone.  Each transaction start causes a
change to the header page, and each commit causes the changed pages,
including a transaction inventory page, to be written to disk.  There's
some grouping of writes, but as a rule, assume that each transaction you
create causes two extra page writes beyond the data and indexes.

>
> I recently had the need to perform more complex processing and what I did
> was to keep everything short and modular.  I am updating different parts of
> the same record repeatedly and I believe that this is causing multiple back
> versions which causing excessive disk write I/O and slowing things down
> terribly:
>

> a) begin a transaction, update FIELD_1 of MYTABLE, COMMIT, and end
> transaction.
> b) begin a transaction, update FIELD_2 of MYTABLE, COMMIT, and end
> transaction.
> c) begin a transaction, update FIELD_3 of MYTABLE, COMMIT, and end
> transaction.
> d) begin a transaction, update FIELD_4 of MYTABLE, COMMIT, and end
> transaction.
> e) begin a transaction, update FIELD_5 of MYTABLE, COMMIT, and end
> transaction.
>

There are several problems with this.  One is the significant transaction
overhead you introduce.  A second, as you've guessed, is that you're
creating a back version for each update.  Another is that any transaction
reading your record between updates will see some fields that have been
changed and others that haven't.  Another, and not insignificant, is the
danger that some other transaction will change part or all of your record
between your transactions, leaving the record inconsistent.


> Of course, I normally update all fields in one transaction but in this
> particular case, determining the contents of each field is a complex
> process that requires manipulation and analysis of the the data provided by
> a number of other Firebird SELECT queries to the database.
>
> I am averaging about 300 transactions per minute during this process that
> may last 12 hours and during that time, things get terribly slow.
>

 Probably some information from the monitoring tables will let someone else
give you good advice.

>
> So can someone confirm my suspicions, will each of the 5 transactions
> above to the same row of data cause 5 new 'back versions'?
>

Absolutely.

>
> Like I said, I have always kept transactions very short.  I am thinking of
> something like this instead:
>
> a) begin a transaction, update FIELD_1 of MYTABLE.
> b) update FIELD_2 of MYTABLE.
> c) update FIELD_3 of MYTABLE.
> d) update FIELD_4 of MYTABLE.
> e) update FIELD_5 of MYTABLE, COMMIT, and end transaction.
> If something fails anywhere in between, I would ROLLBACK this single
> transaction.
>

That's not going to work either.  Your first update will create a back
version that's just the difference between the old record state and the new
state.  The second (or maybe third) will create a back version that's the
whole record (IIRC) - much larger and possibly off page. Off page matters
because it doubles the page writes.

>
> Keeping my transactions shorter and more modular as above is easier from a
> development point of view but I have the back version performance issue.
> Although the second method means a much longer transaction, I won't have
> back versions to deal with.  Do you think that this approach would be
> better?
>

No, just do all the computations in a single transaction and update the
record once with all the changes.
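A sketch of that single-transaction shape, using SQLite via Python (field names follow the thread; the values and error handling are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mytable
               (id INTEGER PRIMARY KEY,
                field_1, field_2, field_3, field_4, field_5)""")
con.execute("INSERT INTO mytable VALUES (1, NULL, NULL, NULL, NULL, NULL)")
con.commit()

# Do all the complex computation first, against read-only queries...
computed = {"field_1": 10, "field_2": 20, "field_3": 30,
            "field_4": 40, "field_5": 50}

# ...then one transaction, one UPDATE, one back version.
try:
    con.execute(
        "UPDATE mytable SET field_1=?, field_2=?, field_3=?, field_4=?, "
        "field_5=? WHERE id=1",
        tuple(computed[f"field_{i}"] for i in range(1, 6)))
    con.commit()
except Exception:
    con.rollback()                # any failure undoes everything at once
    raise

row = con.execute("SELECT field_1, field_5 FROM mytable WHERE id=1").fetchone()
print(row)
```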

>
>
> P.S.  Sweeping the database does not help with the performance problem,
> the only temporary solution to regain performance is to backup using GBAK
> and restore.
>

That's why I suspect there's more to it than just back versions and would
like to see something about I/O, reads, writes, fetches, marks, etc.

Good luck,

Ann

>
>
> 
>


Re: [firebird-support] Is it save to append some data at end of the binary firebird database file?

2015-12-04 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
Fabiano Kureck suggested sticking application version information in the
checksum slot of the page header.

Mark Rotteveel quite correctly responded.
>
>
> That is very dangerous to do, the same bytes might be reused differently
> in a newer ODS. Either use a normal Firebird table to store version info,
> or do it outside of the database. Do not hack things in the internal
> structure.
>

The page header is not a good place to stick information. It does change to
support new features.  However, the database header page has a place that
you might be able to use.  The format of the header page is defined in
ods.h.   The first part is fixed.  The second part is a string of
"clumplets" which are groups of attribute-length-data triplets.  By design,
Firebird skips over clumplets it doesn't understand.  You could write a
program that adds a new clumplet of a type undefined by Firebird to hold
your version information.  It wouldn't survive a gbak backup/restore cycle.
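Clumplets can be sketched as type-length-data triplets; the following is an illustrative parser in Python with invented type codes - the real layout is the one defined in ods.h:

```python
# Parse a clumplet stream: each entry is 1 byte type, 1 byte length, data.
# Readers skip entries whose type they don't recognize - which is what
# would let a foreign clumplet ride along in the header page unharmed.
KNOWN_TYPES = {1, 2}                     # invented type codes

def parse_clumplets(buf):
    known, pos = {}, 0
    while pos < len(buf):
        ctype, clen = buf[pos], buf[pos + 1]
        data = buf[pos + 2 : pos + 2 + clen]
        if ctype in KNOWN_TYPES:
            known[ctype] = data
        pos += 2 + clen                  # unknown types skipped, not fatal
    return known

stream = (bytes([1, 3]) + b"abc"         # a "known" clumplet
          + bytes([99, 4]) + b"v1.2"     # type 99: our private version info
          + bytes([2, 2]) + b"xy")       # another known clumplet
print(parse_clumplets(stream))           # type 99 is silently skipped
```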

The discussion was about finding portable mechanism to identify the
application version of a database.  Are you aware that databases are not
portable across machines with different endian characteristics?

Good luck,

Ann

>
>

> 
> --
>


Re: [firebird-support] Deleting records and deadlocks

2015-12-02 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Dec 2, 2015 at 11:49 AM, Tim Ward t...@telensa.com
[firebird-support]  wrote:

>
>
> Yes I do know it's not a real deadlock, I was using the word because I
> knew it would be understood and because, I'm pretty sure?, I've seen it
> in one of the relevant error messages.
>

This is really ancient history, but the reason that update conflicts are
reported as deadlocks is rooted in the early dark ages of computing, say
around 1981.  DEC, to its distress, found itself developing two relational
databases, one a relational layer on a CODASYL system, and one relational
from the ground up using multi-generational concurrency control (created by
Jim Starkey).

Management decreed that the two systems be compatible shared libraries.
Change a link and your application goes from using one to using the other
without changes.  That meant compatible everything but the ODS - system
tables, API, and error codes.  The solution for error codes was to agree
on major error codes that an application could receive and respond to,
with the possibility of minor codes for system-specific behavior.  Since
the solution to an update conflict (not a possibility in the traditional
lock-based implementation) is the same as the resolution of a deadlock -
rollback and try again - it was lumped under deadlock.

When Jim created InterBase, he used the same interfaces in hopes of gaining
market share from VAX/RDBMS users who wanted a compatible interface on
other hardware.  Alas, by that time anyone still using VAXen was so
addicted to DEC that they wouldn't consider other hardware let alone
foreign software.

Cheers,

Ann


Re: [firebird-support] Deleting records and deadlocks

2015-12-02 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Dec 2, 2015, at 6:35 AM, Tim Ward t...@telensa.com [firebird-support] 
>  wrote:
> 
> What about if two concurrent transactions are both trying to *delete* 
> the *same* record at once? - from the point of the view of the user's 
> objectives there's no reason why this shouldn't work, as either way the 
> record is going to end up deleted, which is what the user wants, but 
> would I be right in guessing that Firebird isn't that clever, and that 
> deadlocks are possible in this scenario?

It's not nearly that simple.  What would happen if one of those transactions 
rolled back? 

Good luck,

Ann


Re: [firebird-support] Is it save to append some data at end of the binary firebird database file?

2015-12-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Dec 1, 2015 at 7:42 AM, 'Moessinger, Semjon' s.moessin...@pi.ws
[firebird-support]  wrote:
>
>
>
> My main intent is to add some version information to a firebird database
> file, since I will use firebird as embedded database and installers/update
> mechanism will need this information.
>

Not a great idea.  Unusual, imaginative, but probably catastrophic.  When
Firebird needs to add more data, it will append some number of blocks to
the end of the file.  I don't remember the algorithm exactly, but the space
will be some number of disk blocks that work out to full pages.  On a good
day, your information will be overwritten.  On a bad day, the allocation
will fail or become misaligned and make the database unusable.

>
>
> The information should be available platform independent and I would
> prefer a solution not needing any database access.
>

Maybe distribute a separate file?

Good luck,

Ann

>
>


Re: [firebird-support] SQL Optimation best way

2015-11-04 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Wed, Nov 4, 2015 at 8:41 AM, 'checkmail' check_m...@satron.de
[firebird-support]  wrote:

>
>
>  two tables
>
>
>
> A:
>
> Field1, Field2, Field3, Field4, Field5
>
>
>
> B
>
> FieldA,  FieldB, FieldC
>
>
>
> Now I would like to get the following:
>
>
>
> Sum(A.field3) where a.field1 = condition1 and a.field2 = condition2
>
> In the same statement I would include
>
> Sum(b.fieldc) where b.fielda = a.field4 and b.fieldb = a.field5
>
>
>

Without having tried it, I suggest:

select sum(A.field3) from A
 where A.field1 = condition1 and A.field2 = condition2
UNION ALL
select sum(B.fieldc) from B
  join A on B.fielda = A.field4
        and B.fieldb = A.field5
 where A.field1 = condition1
   and A.field2 = condition2

If A.field3 and B.fieldc are of different types you may need to cast them.

Good luck,

Ann



>


Re: [firebird-support] Re: Firebird Indexing problem

2015-10-23 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Oct 23, 2015 at 2:40 AM, drci...@yahoo.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> I don't think I understand you fully. Indexed reads a hudreds of times
> faster than non indexed.
>

Yes, when you're looking for a specific value.  When you're looking for
something that is NOT a specific value, then it's often faster to read the
table in its storage order rather than reading through an index.  In fact,
if I remember correctly, the Firebird optimizer will never choose indexed
access when the lookup is based on non-equality.


> And the table will have millions of records, so I do need them.
>

Reading millions of records through an index is unlikely to be faster than
reading the same millions in storage order.
Normally, when Firebird reads records through an index, it constructs a bit
map of the record numbers of records that match the indexed condition.  The
record numbers include the page on which the record is located.  If the
bitmap includes 90% of the pages in the table, Firebird will then read the
table in storage order (which is also record number order), skipping 10% of
the pages, and the work that went into building the bitmap is completely
wasted.

"then this is cheaper to scan table (500 records) then use 400 indexed
> reads"
> What do you mean scan table? How do I do that? Like a stored procedure
> with for select? I don't understand, please explain.
>

You don't have to do anything.  The Firebird optimizer analyzes your query,
attempting to find indexed paths to data.  If it doesn't find usable
indexed access, it resorts to a table scan.


Good luck,

Ann


Re: AW: [firebird-support] Optimize query for date ranges

2015-10-13 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Oct 13, 2015 at 3:47 PM, 'Parzival' parzival1...@gmx.at
[firebird-support]  wrote:

> Hello Thomas,
>
> the result set contains 3 record for criteria.
>
> Records DATUM >= '5.10.2015' = 102
> Records DATUM <= '11.10.2015' = 26
>
> It seems that both queries need to be executed and then the subset of
> matching data for both date conditions is provided.
>

Here's the query:

SELECT * FROM ARBEITSEINTEILUNG AS a where a.baustelle = '12345' AND
a.DATUM >= '05.10.2015' AND a.DATUM <= '11.10.2015';

I don't think that's the way it works.  The query plan shows a one-key
lookup, because that's the main stream of the query execution.  However,
Firebird can use multiple indexes on a single query, which it will do this
time.  First it builds a bit map of the db_keys of records with baustelle =
12345.   It then uses one of the indexes (ascending one would hope) on
DATUM to get the db_keys of records with DATUM between 05.10.2015 and
11.10.2015 (which should be stored as a date type and not a string).
Firebird then ANDs the two bitmaps and returns the rows that match both
criteria.
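The two-index AND can be modeled with sets of record numbers (illustrative only; Firebird's bitmaps are sparse structures, and the record numbers encode the page):

```python
# Model: each index lookup yields the record numbers matching one condition;
# ANDing the bitmaps leaves only rows satisfying both conditions, which are
# then fetched in record-number (storage) order.  Record numbers invented.
index_baustelle = {3, 7, 12, 19, 25}     # records with baustelle = '12345'
index_datum     = {7, 8, 12, 30}         # records with DATUM in the week

candidates = index_baustelle & index_datum   # the ANDed bitmap
fetch_order = sorted(candidates)             # storage order = number order
print(fetch_order)
```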

You might get better performance on this query with a compound index on
baustelle and datum.  You'll pay for it when you insert, update, or delete
rows from arbeitseinteilung.

You seemed concerned about the number of reads this query requires.  Did
you measure the number of reads with a cold cache (i.e., a freshly started
database) or one that had already done something with arbeitseinteilung?
46 is a low number for a completely cold cache.

Good luck,

Ann


Re: [firebird-support] Optimize query for date ranges

2015-10-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Oct 11, 2015, at 5:00 AM, 'Parzival' parzival1...@gmx.at 
> [firebird-support]  wrote:
> i am running a simple query where the result contains 3 records. The query 
> should provide all records in a specific week.
> 
> 
>  
> 
> SELECT * FROM ARBEITSEINTEILUNG AS a where a.baustelle = '12345' AND WOCHE = 
> '2015-41';
> 
>  
> 
> Some times I dont have a week but two dates:
> 
>  
> 
> In this case – the very same number of records = 3 is the result set I see 
> that the table Arbeitseinteilung gots 42 reads.
> 
>  
> 
> SELECT * FROM ARBEITSEINTEILUNG AS a where a.baustelle = '12345' AND a.DATUM 
> >= '05.10.2015' AND a.DATUM <= '11.10.2015';
> 
> How can I improve the query?
> 

Do you have an index on DATUM?  What are the plans for the two queries?

Good luck,

Ann

Re: [firebird-support] Very Strange: Query giving different results

2015-10-05 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Oct 5, 2015 at 12:59 PM, 'Alexis (PROINTER)' ale...@prointersl.com
[firebird-support]  wrote:

> Hi, we have found a very strange problem with our firebird database, a
> record on a table of firebird databes is not always beeing returned:
>
> 1 - Using flamerobin we are never fetching record 1648 of our datble.
>
> 2 - Using an application we are developing (with vb.net on vs2010) the
> record is fetched:
>
> 3 - If we query from the application with condition IDESTADO = 3 wich the
> desired record has that value that record is also not fetched:
>
>
Yahoo ate your table structure, so this is just a guess.  You may have an
index that's missing an entry - that shouldn't happen, of course.  Using
isql, try fetching the record by primary key value, then by the primary key
concatenated with an empty string (if it's a string) or plus zero (if it's
a number).  Then try the same thing on any secondary indexes.

Good luck,

Ann


Re: [firebird-support] AW: Performance comparision for primary keys - int vs. varchar

2015-10-03 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Fri, Oct 2, 2015 at 5:09 PM, Köditz, Martin martin.koed...@it-syn.de
[firebird-support]  wrote:

>
>
> I’m using integer IDs for primary keys in my tables. What if I would use
> varchar fields instead? Will I lose performance in that case, especially
> for big tables? Will joins still work as fast as they do for the integer
> column.
>

Your performance should be about the same with varchars or integers.
Firebird always compares index keys bytewise and only the significant part
of the value is stored.

A single field key is first converted to one of the three canonical types:
 string with collation, double precision, and (sadly) 64 bit integer.
Dates become double precision floating point numbers.

Strings that have a collation other than their byte value are converted to
their collation format.  That's something of a black art and expands the
size of the string, but the result is that the string finds its correct
place when sorted with other strings of the same collation.  'A', 'a', 'â',
'á', 'Ă', 'ã', 'ä', 'å', 'ă', 'ą', 'Ā' all appear in their appointed
places.  (Sorry for what that did to your email client; in mine,
that's eleven variants on 'A'.)  Trailing blanks are not included in the
key.

The double precision number is mangled so it too sorts bytewise - roughly
invert the sign, then exponent, then mantissa, truncating trailing zeros.
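
That mangling can be sketched with the classic trick for making IEEE 754 doubles sort bytewise. This is a guess at the general idea only, not Firebird's actual key format (which, as noted above, also truncates trailing zero bytes):

```python
import struct

def double_key(x):
    """Map a double onto 8 bytes whose bytewise order matches numeric order.
    Hypothetical sketch of the general technique, not Firebird's real format."""
    b = struct.pack(">d", float(x))          # big-endian IEEE 754 double
    if b[0] & 0x80:                          # negative: invert every bit
        return bytes(0xFF - c for c in b)
    return bytes([b[0] ^ 0x80]) + b[1:]      # non-negative: flip the sign bit

keys = [double_key(x) for x in (-2.5, -1.0, 0.0, 0.5, 3.0)]
assert keys == sorted(keys)                  # bytewise order == numeric order
```

Once keys look like this, comparing raw bytes is all an index needs, which is exactly what prefix-compressed entries require.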

Depending on the endianness of 64bit integers on the computer, they too are
mangled so they compare bytewise.  That may seem like a disoptimization,
but index keys are not stored on natural boundaries and they undergo prefix
compression so there's no way to use a larger comparison than byte by byte.


Compound keys are much the same.  Each part is converted to its index key
type and padded to a multiple of 4 bytes.  After every four bytes, Firebird
adds a byte with the position of the current field of the key.  Thus an
index on LastName, FirstName, ZodiacSign would come out as 1Harr1ison2Ann
3Gemi3ni.  This avoids the embarrassment of confusing Damnation with Dam
nation.
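
As a toy model of that layout (my own sketch of the scheme described above, not Firebird's actual key bytes), padding each field to a multiple of four and tagging every chunk with its 1-based field position reproduces the example:

```python
def compound_key(fields):
    """Toy model: pad each field to a multiple of 4 bytes, split it into
    4-byte chunks, and tag every chunk with its field position."""
    chunks = []
    for pos, value in enumerate(fields, start=1):
        padded = value + " " * (-len(value) % 4)   # blank-pad to 4n
        chunks += [str(pos) + padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "".join(chunks).rstrip()                # trailing pad blanks drop out

print(compound_key(["Harrison", "Ann", "Gemini"]))
# -> 1Harr1ison2Ann 3Gemi3ni
```

Because every chunk carries its field position, "Damnation" in one field and ("Dam", "nation") in two fields produce different keys.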

Why did I say "(sadly)" above?  Because having a single format for numbers
allows Firebird to change the size of numbers without recreating indexes on
them.  But when Borland added 64 bit integers back - InterBase had 64 bit
integers from the beginning on Vaxes - some bright spark realized that
double precision has 56 bits of precision and 64 bit integers have 64
bits.  On the other hand, Firebird indexes are designed to handle some
imprecision ... or the remaining 8 bits could be tacked on the end...
whatever.  So you have to rebuild indexes when going from Numeric/Decimal 9
to Numeric/Decimal 12.  Sad.

"Prefix compression?"  When storing any key other than the first on a page,
or the first after a jump, Firebird looks at the preceding key, truncates
the part of the beginning of the next key that duplicates its predecessor,
and tacks the length of the truncated part onto the beginning.  Thus the
strings "AAAA", "AAAB", "AAAC", "AABC" become "AAAA", "3B", "3C", and
"2BC".  There is a problem with some formats of
GUID which put the volatile part of the number first, followed by the fixed
part.  That defeats prefix compression and inflates the size of indexes.
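
A minimal sketch of that compression over plain strings (the real entries store lengths and key bytes inside the page, of course):

```python
def prefix_compress(keys):
    """Replace the prefix each key shares with its predecessor
    by the length of that shared prefix."""
    out, prev = [], ""
    for key in keys:
        n = 0
        while n < min(len(prev), len(key)) and prev[n] == key[n]:
            n += 1
        out.append(f"{n}{key[n:]}" if n else key)
        prev = key
    return out

print(prefix_compress(["AAAA", "AAAB", "AAAC", "AABC"]))
# -> ['AAAA', '3B', '3C', '2BC']
```

Run the same function over keys whose leading bytes are random (as with some GUID formats) and the shared prefixes are empty, so nothing compresses.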

"Jump?" - Prefix compression reduces the size of indexes by a lot, reducing
I/O, but requires reading across the whole page to decipher the key.  Fine
with 1K pages, but with larger page sizes the computation was
unacceptable.  So each index page now has an index of its own pointing to
the offsets of uncompressed entries. That index is called a jump vector.
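
The idea can be sketched like this (a hypothetical structure — a real page stores byte offsets and the vector layout differs): keep an uncompressed restart entry every few keys, binary-search the restarts, and decompress forward only from the nearest one instead of from the start of the page.

```python
import bisect

def build_entries(keys, interval=4):
    """Prefix-compress sorted keys, but restart (store the full key)
    every `interval` entries and record those restarts in a jump vector."""
    entries, jumps, prev = [], [], ""
    for i, key in enumerate(keys):
        n = 0
        if i % interval:                       # ordinary compressed entry
            while n < min(len(prev), len(key)) and prev[n] == key[n]:
                n += 1
        else:                                  # restart point, stored whole
            jumps.append((key, i))
        entries.append((n, key[n:]))
        prev = key
    return entries, jumps

def contains(entries, jumps, target):
    """Binary-search the jump vector, then expand entries forward
    from the nearest restart rather than from the first entry."""
    pos = bisect.bisect_right([k for k, _ in jumps], target) - 1
    if pos < 0:
        return False
    current = ""
    for n, suffix in entries[jumps[pos][1]:]:
        current = current[:n] + suffix         # undo prefix compression
        if current == target:
            return True
        if current > target:                   # keys are sorted: stop early
            return False
    return False

keys = ["AAAA", "AAAB", "AAAC", "AABC", "ABBB", "ABCA", "BAAA", "BABA"]
entries, jumps = build_entries(keys)
assert all(contains(entries, jumps, k) for k in keys)
```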


More than you wanted to know.  Somewhere on the IBPhoenix web site there
should be a couple of papers called Firebird for Experts (or something like
that) - one of them goes into gory detail on the layout of index pages.

Good luck,

Ann




Re: [firebird-support] Error on Gfix

2015-09-30 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Tue, Sep 29, 2015 at 4:50 PM, Jeferson Sisto jsis...@gmail.com
[firebird-support]  wrote:

>
>
> Ok, I know this full procedure about corrupt database.
>
> But, when the unique problem about corrupt database is ORPHAN PAGE. In
> this especific situation, is necessary to execute all the steps, including
> the backup/restore?
>
> Looking my situation, I'm having ONLY orphan page in a database that have
> 20~30 GB. In this database, the backup/restore procedure delay
> significative time...
>

Orphan pages are just lost space.  If you've only got three or four (at
16KB per page) in a 20 GB database by all means ignore it.   Orphan record
versions are the same, but even less significant.

Both those errors can occur when the database shuts down without writing
out its cache completely.  They're totally benign.


For those who care deeply, here's the situation.  Firebird uses careful
write to maintain on-disk consistency.  That translates to writing the
thing pointed to before the pointer when creating objects and removing the
pointer before the object when releasing objects.

Consider the case of allocating a new page to a table.

1) the page number is removed from the free page list, so it can't be
allocated twice, and the free page list is written.
2) the page is formatted to look like a data page for that table and
written to the database.
3) the page number is entered into the list of pages that belong to the
table and that list is written to the database.

If the database stops between 1 and 2 or 2 and 3, that page is neither free
nor associated with a table, index, or internal structure.  But it doesn't
cause any problems either.  It just sits there, wasting a small amount of
space until gfix removes it or the database is restored from a backup.
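
A toy simulation of that ordering (not Firebird code) shows why a crash between any two of the writes leaves, at worst, an orphan page rather than an inconsistency:

```python
def allocate_page(db, crash_after=0):
    """Careful-write allocation: each step is flushed before the next.
    crash_after simulates the server dying between two of the writes."""
    page = db["free"].pop()            # 1) take the page off the free list
    if crash_after == 1:
        return None                    #    crash: page orphaned, never reused twice
    db["formatted"].add(page)          # 2) format and write the page itself
    if crash_after == 2:
        return None                    #    crash: still just an orphan
    db["table_pages"].append(page)     # 3) link it into the table's page list
    return page

for crash in (1, 2):
    db = {"free": [7], "formatted": set(), "table_pages": []}
    allocate_page(db, crash_after=crash)
    # the page is neither free nor reachable: wasted space, but consistent
    assert 7 not in db["free"] and 7 not in db["table_pages"]
```

Reversing the order — linking the page into the table before writing it — would risk a table pointing at an unformatted page, which is real corruption.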

Similarly when Firebird garbage collects an old record version, it first
clears the pointer to the version in the next newer version, then clears
the entry on the data page that locates the actual record.  Last, it
removes index entries that are no longer valid.  If there's a failure
between clearing the record pointer and the entry on the data page, that
record version sits there until gfix removes it or the database is restored.

Good luck,

Ann


Re: [firebird-support] Re: SELECT for get the number of contexts

2015-09-27 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Sun, Sep 27, 2015 at 1:37 PM, 'Walter R. Ojeda Valiente'
sistemas2000profesio...@gmail.com [firebird-support] <
firebird-support@yahoogroups.com> wrote:

>
>
> The only way to know how many contexts a stored procedure has is looking
> at the source code and counting them by hand?
>
>
The limit on contexts is per statement, not for an entire stored procedure,
trigger, or whatever.  A statement is a SELECT, INSERT, DELETE, or UPDATE.

Each table reference is a context.  So, for example, a reflexive join has
more than one context for the same table.  An update with lots of
subselects has contexts for each part of the subselect.   A statement that
references a view has contexts for every table reference in the view.

So, don't worry about the total number of contexts in a stored procedure,
but do worry about the use of complex views in apparently simple
statements.

Good luck,

Ann






Re: [firebird-support] Securing database against corruption on systems that suddenly get turned off

2015-09-23 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Sep 21, 2015 at 4:53 AM, Glenn Thomas Hvidsten g...@vensafe.no
[firebird-support]  wrote:

>
>
>
> We have FirebirdSQL (2.1) running on a client computer (Windows). This
> client has some problems with his power grid which causes the PC to
> suddenly and unexpectedly shut down in infrequent intervals (usually once
> or twice per day). Most of the time this doesn’t cause any problems, but
> once in a while this causes the database to be corrupted.
>
> I’m not at all surprised that the database gets corrupted like this (as
> turning off a computer without shutting everything down gracefully can have
> that effect).
>
>
If you have turned off forced writes, corruption is likely.  If forced
write is on, what are the errors you're seeing when the database is
corrupted?  I'm not suggesting that you disregard advice about adding a UPS
and make backups - excellent suggestions - but the Firebird architecture
should survive a power failure if writes are being forced to disk in the
correct order.  So I'm curious...

Good luck,

Ann


Re: [firebird-support] About firebird performance

2015-09-05 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Sat, Sep 5, 2015 at 12:00 PM, Hector Sánchez hec...@planatec.es
[firebird-support]  wrote:

>
>
> I discovered something else quite interesting:
>
> I realized both IB and FB dbs have pagebuffers set to 100.000. I changed
> FB to 0 and now it works like I expected!!..Same query now completes on FB
> on 50 sec. It seems this is the problem although I don't understand why.
>
> Nevertheless, I'll try on a physical machine as you and Carlos suggest and
> see how it behaves.
>
>
Very odd.   Could you collect the performance stats for the query on the
two systems? Specifically, I'd like to know how much physical I/O each did
- that's reported as Reads and Writes.  Fetches and Marks don't matter
here.

Cheers,

Ann

On another system I worked on we realized as the cache got very large that
a totally fair LRU was taking up a significant fraction of the CPU load.
That might be what you're seeing.


Re: [firebird-support] Firebird backup on virtual machine

2015-08-28 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Aug 28, 2015, at 2:40 AM, Macma mac...@wp.pl [firebird-support] 
firebird-support@yahoogroups.com wrote:
 
 My biggest concern was that after a restore from a Veeam backup the Firebird
 database file could be damaged. If not, there is nothing I need to worry
 about. I just want to be sure that when someone restores the full vmdk file
 my system will work and survive such an operation.

Unless Veeam works some magic that stops writes to the database while it makes 
its copy, there is a risk that the copy will not reflect the state of the 
database at any one instant.  Firebird's careful write guarantees that at any 
instant the database is correct, so it will survive a server, file system, or 
O/S crash.  

You can use gbak and ask Veeam to backup only the backup file.  You can stop 
the database before the backup starts.  You can ask for a call that blocks 
updates while the backup is being done.  You could probably pay for a feature 
to redirect writes to a sequential file to be applied to the database after the 
backup.  I think using Nbak won't work because (IIRC) it stores its page images 
in a random access file, so the page image file will have the same problem as 
the database when copied block by block.

Good luck,

Ann




Re: [firebird-support] Re: Differences when adding a Primary Key

2015-08-21 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

 On Aug 21, 2015, at 2:13 PM, 'Walter R. Ojeda Valiente' 
 sistemas2000profesio...@gmail.com [firebird-support] 
 firebird-support@yahoogroups.com wrote:
 
 Without using CONSTRAINT it doesn't work, and in that case the name is chosen 
 by Firebird, not by me.
 
 
 To add a Primary Key to a table we can write:
 
 ALTER TABLE MyTable ADD PRIMARY KEY (ID);
 
 or we can write:
 
 ALTER TABLE MyTable ADD CONSTRAINT MyPK PRIMARY KEY (ID);
 
 In the first case, Firebird puts the name of the Primary Key, in the 
 second case we choose that name, but...
 
 Why sometimes the first case fails and the second case always work?
 
 But (just sometimes) the first case fails.
 

How does it fail?  What's the error message? Or does it kill the connection?  
Crash the server?  Freeze the O/S?  Set the machine on fire?

Cheers,

Ann

Re: [firebird-support] Is it safe to have multiple instances of Gbak with garbage collection?

2015-08-20 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
 On Aug 19, 2015, at 5:49 PM, tvd...@ymail.com [firebird-support] 
 firebird-support@yahoogroups.com wrote:
 
 I believe it is generally safe to have multiple simultaneously running 
 instances of Gbak backing up the same server, when all of them read data from 
 within their own transaction.

Right.  Gbak is just another application.  It generates its own BLR, which is 
unusual, but actually easier than generating SQL, if you happen to be a 
program.  If it were a modern program, it would generate dynamic SQL.  It's 
also pretty smart about dependencies between system table elements. However, 
there is no magic.  A Gbak backup is just a read-only application, like any 
other.

 Is that any different when Gbak is not supplied with the argument to skip 
 garbage collection?

All applications that read the database do garbage collection by default.  
Cooperative garbage collection and the garbage collection thread work together 
all the time.

 Does that put either the backup file or (worse) the database itself at risk 
 of corruption? Perhaps someone can explain how this would work out under the 
 hood.

No magic, except the normal magic of MVCC. 

My question to you is why run multiple simultaneous backups?  

Good luck,

Ann
 
 


Re: [firebird-support] Composite Primary Key in JOIN - performance

2015-08-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

 On Aug 11, 2015, at 4:59 AM, steve.decle...@yahoo.com [firebird-support] 
 firebird-support@yahoogroups.com wrote:
 ...when I take a look at the small lookup tables used to store payment 
 conditions, currency codes, titles, etc, the primary key is always 
 VARCHAR(4) to either store a generator number of a user defined CODE. But 
 most of the time the value is just coming from a generator and with '0' as 
 prefixes, like '0001', '0002'. It doesn't make much sense to me as the code 
 is not shown anywhere. I guess the design would be better if a SMALLINT was 
 used, am I right ? 2 Bytes instead of 8, correct ?
 
In general, fixed size small textual fields should be CHAR rather than VARCHAR. 
 Since the value stored is always the same length, the two bytes that describe 
the actual vs. declared length are wasted.  And yes, an integer type would be 
better if the content will always be numeric.  I have a personal problem with 
small int based on unpleasant experiences when they overflow, wrap around, and 
are generally a nuisance.  But as long as you're certain you'll never have more 
than 32,767 of them... 
 The PROJECT table has a primary key VARCHAR(7) and even worse the SUBPROJECT 
 table has composite primary key made up of the PROJECTNUMBER VARCHAR(7) AND A 
 UNIQUE SUBPROJECTNUMBER for each project, VARCHAR(4).
 
 So JOINs look like this :
 
 JOIN PROJECT PR ON PR.PR_NR = AColumn
 
 JOIN SUBPROJECT SU ON SU.SU_NR = PR.PR_NR AND SU.SU_SUB = AColumn
 
 The quotation table has only 30.000 records and is not performing too good. 
 That SUBPROJECT JOIN has to JOIN on 2 fields because of the composite primary 
 key. VARCHAR(7) + (4) = 11, sounds like 22 bytes to me, instead of 4 bytes of 
 an Integer.
 

Firebird's index key generator is sensitive to major datatypes and tries to 
minimize the key size by eliminating trailing blanks in strings - which won't 
exist in your case because the actual data is zero filled on the left, not 
blank filled on the right.  Numeric keys are represented as a mangled double 
precision float.  The mangling causes the number to compare correctly bytewise 
(moving and inverting the sign) and eliminates trailing zeros after the decimal 
point.  

Compound keys are handled so they also compare bytewise and should be 
reasonably quick.


So, yes, I do think you'd be better off with integer keys (skip the small int) 
but no, I don't think that's the problem with your queries.  Things that would 
help figure out why looking up stuff in a 30K record table is slow might 
include:  query plan, indexes on non-primary keys, query stats specifically 
reads vs. fetches, and probably other things I'm not thinking of now.

Generally, you're right that lying to Firebird by calling a number a variable 
length character string will lead to unhappiness...  just not this particular 
unhappiness.

Good luck,

Ann

Re: [firebird-support] Reads inconsistency between FB TraceManager and IB Planalyser

2015-08-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

 On Aug 11, 2015, at 12:22 PM, 'Louis van Alphen' lo...@nucleo.co.za 
 [firebird-support] firebird-support@yahoogroups.com wrote:
 
 Sure I understand that, but the reads/fetches are 3000 to return 1 row.

Which involves a four way join, on top of having to find and use the system 
tables to identify the user tables, columns, and indexes, then find the 
appropriate index pages, pointer pages, data pages, etc. for it's internal 
queries and your query.  The second time, most of the heavily used stuff is in 
cache.

Good luck,

Ann
 
 
 
 From: firebird-support@yahoogroups.com 
 [mailto:firebird-support@yahoogroups.com] 
 Sent: 11 August 2015 06:15 PM
 To: firebird-support@yahoogroups.com
 Subject: Re: [firebird-support] Reads inconsistency between FB TraceManager 
 and IB Planalyser
 
 
 
 
 
 Hi Lois,
 
 Reads value means actual reads from the disk.
 If you run query twice, necessary database pages are cached, and you will see 
 in stats more fetches  - i.e., calls to cache, not record fetches.
 
 Actually, always is better to run query twice, to get heated cache.
 
 Regards,
 Alexey Kovyazin
 IBSurgeon
 
 
 
 
 
 
 
 
 
 I am trying to find a performance issue in a system and I am using 2 tools 
 just for comparison and verification. One being FB TraceManager (FBTM) and 
 the other is an old util called
 
 Interbase Planalyzer (IP)
 
 
 
 It seems that the particular query from the view TRACKED_ITEM_LOCATION_DETAIL 
 is slow.
 
 From FBTM,while monitoring the app, I get the following raw output:
 
 
 
 --
 
 2015-08-11T16:35:52.4990 (6180:00F9A948) EXECUTE_STATEMENT_FINISH
 
D:\PROJECTS\KKI\TECH\DATABASE\PRODUCTION 
 DATABASES\2015-07-14\DIGITAN.KKI.FDB (ATT_34, SYSDBA:NONE, NONE, 
 TCPv4:127.0.0.1)
 
(TRA_20898, READ_COMMITTED | NO_REC_VERSION | WAIT | READ_WRITE)
 
 Statement 3407:
 
 ---
 
 select * from TRACKED_ITEM_LOCATION_DETAIL where ITEM_ID = ? order by ID
 
 ^^^
 
 PLAN JOIN (JOIN (JOIN (TRACKED_ITEM_LOCATION_DETAIL TILD ORDER TILD_PK_IDX 
 INDEX (TILD_ITEM_IDX), TRACKED_ITEM_LOCATION_DETAIL TL INDEX (TL_PK_IDX)), 
 TRACKED_ITEM_LOCATION_DETAIL FTL INDEX (TL_PK_IDX)), 
 TRACKED_ITEM_LOCATION_DETAIL USR INDEX (USR_PK_IDX))
 
 param0 = bigint, 2147191655
 
 1 records fetched
 
112 ms, 3316 read(s), 3325 fetch(es)
 
 Table Natural IndexUpdateInsert
 Delete   Backout Purge   Expunge
 
 ***
 
 USER_ 1
 
 TRACKING_LOCATION_1
 
 TRACKED_ITEM_LOCATION_DETAIL_ 1
 
 --
 
 
 
 3325 fetches and  112ms seem way over the top. The cache hit ratio is 0%.
 
 
 
 However, when I use IP and manually enter the same SQL, I get the following:
 
 Prepare time 1ms
 
 Execution time 148ms
 
 Fetch time 4ms
 
 With a total of 7 fetches. Not sure where FBTM gets 3K fetches?
 
 
 
 Also, if I remove the order by, IP reports a drastic reduction in execution 
 time i.e. down to 6msec. Does the ORDER BY on the PK make such a difference?
 
 
 
 I am not understanding what is going on.
 
 
 
 Either way, here are the table  view info  stats
 
 
 
 Thanks
 
 Louis van Alphen 
 
 
 
 --
 
 CREATE TABLE TRACKED_ITEM_LOCATION_DETAIL_ (
 
UID   DOM_UID /* DOM_UID = VARCHAR(36) */,
 
IDDOM_ID /* DOM_ID = BIGINT NOT NULL */,
 
IS_DELETEDDOM_BINARY /* DOM_BINARY = SMALLINT DEFAULT 0 NOT NULL 
 CHECK ((  value in ( 0,1) )) */,
 
CREATED_DTM   DOM_DTM default current_timestamp /* DOM_DTM = TIMESTAMP 
 */,
 
CREATED_USER_ID   DOM_FK /* DOM_FK = BIGINT */,
 
ROW_ORIGINDOM_FK /* DOM_FK = BIGINT */,
 
ITEM_ID   DOM_FK NOT NULL /* DOM_FK = BIGINT */,
 
LOCATION_ID   DOM_FK NOT NULL /* DOM_FK = BIGINT */,
 
FROM_LOCATION_ID  DOM_FK /* DOM_FK = BIGINT */,
 
START_DTM DOM_DTM NOT NULL /* DOM_DTM = TIMESTAMP */,
 
END_DTM   DOM_DTM /* DOM_DTM = TIMESTAMP */,
 
START_DAT DOM_DAT NOT NULL /* DOM_DAT = DATE */,
 
END_DAT   DOM_DAT /* DOM_DAT = DATE */
 
 );
 
 
 
 ALTER TABLE 

Re: [firebird-support] Reads inconsistency between FB TraceManager and IB Planalyser

2015-08-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

 On Aug 11, 2015, at 1:30 PM, 'Louis van Alphen' lo...@nucleo.co.za 
 [firebird-support] firebird-support@yahoogroups.com wrote:
 
 Thanks Ann, so maybe the 2 tools' metrics are different. So maybe FBTM
 includes all (low level) reads and IP only user table reads.

Or maybe one runs some queries before the one it measures so all the system 
table queries are compiled, optimized, and executed, filling the cache with 
good stuff.
 
  Also, if I remove the order by, IP reports a drastic reduction in
 execution time i.e. down to 6msec. Does the ORDER BY on the PK make such a
 difference?
 

Unh, maybe.   Between the list, my mail system and yours, I've lost most of the 
formatting and can't look at the original while typing (grrr).  You seem to 
use left outer joins and that can be a problem because (in general) the order 
in which outer joins are declared in the query is the order in which they have 
to be accessed in the query plan.   

For example, a three way join of students, registrations, and courses with 
inner joins can be run starting with any of the three tables and get the same 
results.  

Select s.student_name, c.course_name 
from students s 
 join registrations r on r.student_id = s.student_id
 join courses c on c.course_id = r.course_id

The optimizer can choose to start with registrations and lookup students and 
courses by primary key, or students and look up a registration by student_id 
and a course by the course_id in registrations, or courses - registrations - 
students.  Or sort the students and registrations and merge them, looking up 
courses from the matching pairs.  Or other things, probably.

However, this query can be executed in only one way: students - registrations 
- courses

Select s.student_name, c.course_name 
from students s 
left outer join registrations r on r.student_id = s.student_id
 left outer join courses c on c.course_id = r.course_id

If that way isn't optimal, too bad.  The query must be executed that way or 
you'll miss all the students who aren't registered for courses and all the 
registrations that don't correspond to any courses.

It may be that by adding the sort you've convinced Firebird that it has to read 
and sort all the records in that table, then start from the sorted result.  Try 
using one of the various syntaxes that tells it to return only the first record 
found that matches the criteria.  That may cause Firebird to walk the index ... 
read records in index order ...  rather than reading the table and sorting it.

Now that probably sounds really dumb, but in fact, walking a table in index 
order is generally a lot slower than reading it in physical order and sorting 
the result.  Yes, sort is n(log n), but there's a big K applied that is the 
cost of a random record read.  So, if you expect a lot of results, sort first 
and ask for matches later   If not, use 'Select FIRST n' or 'LIMIT n' 
or whatever the standards committee chose to bless ... maybe  'FETCH FIRST 
n'.  Any of those hints will convince Firebird to walk the table in index 
order.

This query is likely to read and sort the students first, then start looking 
for the rest of the data unless there's a good index on student_name.  

Select s.student_name, c.course_name 
from students s 
left outer join registrations r on r.student_id = s.student_id
 left outer join courses c on c.course_id = r.course_id
 where s.student_name = 'Dinah MacQuarie'
 order by s.student_id

This may give a faster result in the absence of an student_name index.  It may 
not, but do consider trying it when playing with query forms...

Select first 30 s.student_name, c.course_name 
from students s 
left outer join registrations r on r.student_id = s.student_id
 left outer join courses c on c.course_id = r.course_id
 where s.student_name = 'Dinah MacQuarie'
 order by s.student_id

It's worth noting that Firebird does not store records in primary key order 
unless the records happen to arrive in primary key order.  Even when records 
are created in primary key order - as with a generated key - deleted records 
may be replaced with records created much later.   Lots of databases cluster 
records on primary key, making it unnecessary to sort when reading the primary 
key.   That has other costs, particularly with natural keys...  This post is 
already way too long to take on that issue.


Now, just to complicate things, it's possible to turn left outer joins to inner 
joins - at least if you're a person.   Maybe the Firebird optimizer knows about 
this, and maybe it doesn't.  Certainly in the previous century one database 
developer - initials JAS - thought that if you were dumb enough to use outer 
joins when you shouldn't, the optimizer shouldn't reward your stupidity.   He's 
become more benevolent in his old age.  Anyway, here's the case:

Select s.student_name, c.course_name 
from students s 
 join 

Re: [firebird-support] Plan evaluation in stored procedures

2015-08-10 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]




 On Aug 10, 2015, at 8:15 AM, Aldo Caruso aldo.car...@argencasas.com 
 [firebird-support] firebird-support@yahoogroups.com wrote:
 
 
 does it mean that if the database is shut down periodically the stored 
 procedure plans would be recalculated ? Shutting the database periodically 
 solves the problem ?
 

Not Sean, but yes.  Shutting down the database will cause stored procedures to 
be reoptimized. 


Good luck,

Ann
 
 On 9 de agosto de 2015 20:14:46 GMT-03:00, 'Leyne, Sean' 
 s...@broadviewsoftware.com [firebird-support] 
 firebird-support@yahoogroups.com wrote:
 Aldo,
 
  My question is: does the optimizer evaluates the plan for the query each
  time the stored procedure is excecuted or is it evaluated only once, when
  the stored procedure is compiled to BLR.
  In the latter case it would be a dissadvantage because as time passes,
  table cardinality and indeces distribution vary, and if the plan was 
  frozen at
  the stored procedure compilation time, may be that it would't be optimal 
  any
 
 The plan for SP is determined when the SP is invoked for the first time from 
 disk.
 
  So your worst case is not as bad as you think, but if you are running 
  superserver and the database never drops to zero active connections, the SP 
  will remain in memory and the plan will not reflect the latest index stats.
 
 Sean
 
 
 

