Re: [firebird-support] SQL slower after N executions?

2020-05-02 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
OK, so the last change was successful: the +0 as suggested by Set works. 
The query now uses the faster query plan through the entire batch (at 
least it still does after 1769 chunks).

Many thanks for all your input! I would still find it interesting to 
hear some theories about why the engine behaves like this, perhaps from 
the dev team...?

Regards,
Kjell

On 2020-05-01 at 09:53, Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support] wrote:
>
> Well, the changed subselect didn't do the trick, but I think it speeds
> up the query slightly, overall.
>
> Now, it's this query that's causing problems:
>
> insert into "TmpFKExportUhant" ("ECO_ID", "Bärartyp")
> select FtgOmsar."Uppgiftshanterare", 'FöretagOmsättningsår' "Bärartyp"
> from "Företag" F
> inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
> inner join "FöretagOmsättningsår" FtgOmsar on FtgOmsar."Företag" =
> F."ECO_ID" + 0
> inner join "År" Ar on Ar."ECO_ID" = FtgOmsar."Omsättningsår" + 0
> where not exists (
>     select 1
>     from "FöretagOmsättningsår" FtgOmsar2
> inner join "År" Ar2 on Ar2."ECO_ID" = FtgOmsar2."Omsättningsår"
>     where FtgOmsar2."Företag" = F."ECO_ID"
>   and Ar2."Årtal" > Ar."Årtal"
>   );
>
> The "breaking point" has now moved to 361 executions. Before that point,
> the "fast" execution plan looks like this (both styles):
>
> PLAN JOIN (
>   L NATURAL,
>   F INDEX ("IX_PK_Företag"),
>   FTGOMSAR INDEX ("IX_FöretagOmsättningsåDBN"),
>   AR INDEX ("IX_PK_År")
> )
>
> Select Expression
>     -> Nested Loop Join (inner)
>     -> Filter
>     -> Table "FöretagOmsättningsår" as "FTGOMSAR2" Access By ID
>     -> Bitmap
>     -> Index "IX_FöretagOmsättningsåDBN" Range Scan
> (full match)
>     -> Filter
>     -> Table "År" as "AR2" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_År" Unique Scan
> Select Expression
>     -> Nested Loop Join (inner)
>     -> Table "TmpFKExportId" as "L" Full Scan
>     -> Filter
>     -> Table "Företag" as "F" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_Företag" Unique Scan
>     -> Filter
>     -> Table "FöretagOmsättningsår" as "FTGOMSAR" Access By ID
>     -> Bitmap
>     -> Index "IX_FöretagOmsättningsåDBN" Range Scan
> (full match)
>     -> Filter
>     -> Table "År" as "AR" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_År" Unique Scan
>
> After the slowdown, the "bad" query plan looks like this:
>
> PLAN JOIN (
>   L NATURAL,
>   F INDEX (IX_PK_Företag),
>   FTGOMSAR INDEX (IX_FöretagOmsättningsåDBN),
>   AR INDEX (IX_PK_År)
> )
>
> Select Expression
>     -> Nested Loop Join (inner)
>     -> Filter
>     -> Table "FöretagOmsättningsår" as "FTGOMSAR2" Access By ID
>     -> Bitmap
>     -> Index "IX_FöretagOmsättningsåDBN" Range Scan
> (full match)
>     -> Filter
>     -> Table "År" as "AR2" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_År" Unique Scan
> Select Expression
>     -> Nested Loop Join (inner)
>     -> Table "År" as "AR" Full Scan
>     -> Filter
>     -> Table "FöretagOmsättningsår" as "FTGOMSAR" Access By ID
>     -> Bitmap
>     -> Index "IX_FöretagOmsättningså9OF" Range Scan
> (full match)
>     -> Filter
>     -> Table "TmpFKExportId" as "L" Access By ID
>     -> Bitmap
>     -> Index "PK_TmpFKExportId" Unique Scan
>     -> Filter
>     -> Table "Företag" as "F" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_Företag" Unique Scan
>
> It's worth noting that the old-style query plans are identical, but the
> new-style ones do have a difference, which is probably causing the 
>

Re: ODP: ODP: ODP: [firebird-support] SQL slower after N executions?

2020-05-01 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
e of slowdown/change of query strategy.

The only thing I can see in FBMonitor's graphs is that about 35 minutes 
before the slowdown, the I/O activity graph shows an increase in cache 
hits, but no apparent change in disk reads or disk writes. Studying the 
actual statements around that time, I fail to see any significant change 
in the statistics. The statistics do change significantly at the time of 
slowdown though, when the bad query plan starts being used.

I think Firebird's behavior is really really weird here. It consistently 
changes to a worse query plan for the same query on essentially the same 
data, just because it's already been executed N times, where N = 316-320 
with the old query and apparently N = ~361 with the current query.

This must be caused by some internal transient state, associated with 
the app's activity history. Some buffer filling up or some other 
resource being depleted, or something like that. Is it a bug? If so, how 
on earth could it possibly be investigated? Ugh...
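If nothing else, Karol's earlier tip about refreshing index statistics might be worth automating between chunks. A minimal sketch (index names taken from the plans above; whether this actually prevents the plan flip is untested):

```sql
-- Recompute selectivity for the indices involved in the problem query.
-- Could be run e.g. every N chunks.
SET STATISTICS INDEX "IX_FöretagOmsättningsåDBN";
SET STATISTICS INDEX "IX_PK_År";
SET STATISTICS INDEX "IX_PK_Företag";
SET STATISTICS INDEX "PK_TmpFKExportId";
```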

Regards,
Kjell

On 2020-05-01 at 01:12, Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support] wrote:
>
> Hi Set!
>
> Understood and I thank you for pointing it out. My last attempt was to
> put all the Uhant."ECO_ID" in a temp table and then use that to run the
> last part of the query. That avoids the unions, but adds some extra work
> to store the records in the temp table. All in all I suppose it's
> slightly more work for the engine.
>
> But as it turns out, it's that part of the query that suddenly takes a
> lot longer to execute.
>
> Trying now with the alternative subselect you suggested earlier, i.e.
> not exists (...). If that doesn't work out, I'll try the +0 you suggest
> below. Thumbs up!
>
> Regards,
> Kjell
>
> On 2020-04-30 at 23:14, Svein Erling Tysvær setys...@gmail.com
> [firebird-support] wrote:
> > Hi Kjell,
> >
> > I'm not used to seeing plans this way, but if I'm reading the plan
> > correctly, then adding +0 or || '' (depending on the field type)
> > immediately after your first union like this:
> >
> >      select 'FöretagOmsättningsår' "Bärartyp", FtgOmsar."ECO_ID"
> > "Bärare", Uhant."ECO_ID"
> >      from "Företag" F
> >      inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
> >      inner join "FöretagOmsättningsår" FtgOmsar on FtgOmsar."Företag"
> > = F."ECO_ID" +0
> >      inner join "År" Ar on Ar."ECO_ID" = FtgOmsar."Omsättningsår" +0
> >      inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" =
> > FtgOmsar."Uppgiftshanterare"
> >
> > ought to speed up your query. The point being that your slow plan only
> > is an option if indices can be used for FtgOmsar."Omsättningsår" and
> > F."ECO_ID" and that this kind of addition prevents those indices from
> > being used.
> >
> > HTH,
> > Set
> >
> > On Thu 30 Apr 2020 at 17:43, Kjell Rilbe
> > kjell.ri...@marknadsinformation.se [firebird-support] wrote:
> >
> > Thanks Karol! I will consider an explicit plan if my current rewrite
> > of the query doesn't pan out.
> >
> > The table "TmpFKExportId" will always have the same number of
> > records, but a different set (each chunk of the batch will load the
> > same number of IDs, but of course different ID values). Statistics
> > should be constant.
> >
> > The other tables will vary slightly over time, but the database
> > contains
> > 2+ million companies and related data. During the batch, the only
> > updates to this data is from our TM staff who phone companies and
> > enter
> > data manually, one company at a time. So overall, the change rate is
> > minute. And in particular, there's no big change exactly 318
> > chunks into
> > the batch job, every time.
> >
> > Yes, the query is hardcoded as a string literal into the app's source
> > code. It can't get more "same" than that. :-)
> >
> > Regards,
> > Kjell
> >
> > On 2020-04-30 at 17:06, Karol Bieniaszewski
> > liviusliv...@poczta.onet.pl [firebird-support] wrote:
> > >
> > > I suppose you have two different queries – one with a where clause
> > > and one without on one of the tables involved in the query.
> > > Are you sure that the queries are the same?
> > >
> > > But it is also quite usual that the plan changes after new data
> > > changes.
> > > Is it true in your case that some table got more records after the
> > > fill than it had before?
> > > Especially this one?:
> > >
> > > You can also try modifying your query to force a static plan (some
> > > +0), or update index statistics after some number of operations.
> > >
> > > Regards,
> > > Karol Bieniaszewski
> > >
> > > [Non-text portions of this message have been removed]
> > >
> > >
> >
> >
> >
> >
> >
> > 
> > Posted by: Kjell Rilbe <kjell.ri...@marknadsinformation.se>
> > 
> >
> > ++
> >
> > Visit http://www.firebirdsql.org and click the Documentation item
> > on the main (top) menu.  Try FAQ and other links from the
> > left-side menu there.
> >
> > Also search the knowledgebases at
> > http://www.ibphoenix.com/resources/documents/
> >
> > ++
> > 
> >
> > Yahoo Groups Links
> >
> >
> > firebird-support-fullfeatu...@yahoogroups.com
> > <mailto:firebird-support-fullfeatu...@yahoogroups.com>
> >
> >
> >
>
>
> 





Re: ODP: ODP: ODP: [firebird-support] SQL slower after N executions?

2020-04-30 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi Set!

Understood and I thank you for pointing it out. My last attempt was to 
put all the Uhant."ECO_ID" in a temp table and then use that to run the 
last part of the query. That avoids the unions, but adds some extra work 
to store the records in the temp table. All in all I suppose it's 
slightly more work for the engine.

But as it turns out, it's that part of the query that suddenly takes a 
lot longer to execute.

Trying now with the alternative subselect you suggested earlier, i.e. 
not exists (...). If that doesn't work out, I'll try the +0 you suggest 
below. Thumbs up!

Regards,
Kjell

On 2020-04-30 at 23:14, Svein Erling Tysvær setys...@gmail.com 
[firebird-support] wrote:
> Hi Kjell,
>
> I'm not used to seeing plans this way, but if I'm reading the plan 
> correctly, then adding +0 or || '' (depending on the field type) 
> immediately after your first union like this:
>
>      select 'FöretagOmsättningsår' "Bärartyp", FtgOmsar."ECO_ID" 
> "Bärare", Uhant."ECO_ID"
>      from "Företag" F
>      inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
>      inner join "FöretagOmsättningsår" FtgOmsar on FtgOmsar."Företag" 
> = F."ECO_ID" +0
>      inner join "År" Ar on Ar."ECO_ID" = FtgOmsar."Omsättningsår" +0
>      inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
> FtgOmsar."Uppgiftshanterare"
>
> ought to speed up your query. The point being that your slow plan only 
> is an option if indices can be used for FtgOmsar."Omsättningsår" and 
> F."ECO_ID" and that this kind of addition prevents those indices from 
> being used.
>
> HTH,
> Set
>
> On Thu 30 Apr 2020 at 17:43, Kjell Rilbe
> kjell.ri...@marknadsinformation.se [firebird-support] wrote:
>
> Thanks Karol! I will consider an explicit plan if my current rewrite
> of the query doesn't pan out.
>
> The table "TmpFKExportId" will always have the same number of
> records, but a different set (each chunk of the batch will load the
> same number of IDs, but of course different ID values). Statistics
> should be constant.
>
> The other tables will vary slightly over time, but the database
> contains
> 2+ million companies and related data. During the batch, the only
> updates to this data is from our TM staff who phone companies and
> enter
> data manually, one company at a time. So overall, the change rate is
> minute. And in particular, there's no big change exactly 318
> chunks into
> the batch job, every time.
>
> Yes, the query is hardcoded as a string literal into the app's source
> code. It can't get more "same" than that. :-)
>
> Regards,
> Kjell
>
> On 2020-04-30 at 17:06, Karol Bieniaszewski
> liviusliv...@poczta.onet.pl [firebird-support] wrote:
> >
> > I suppose you have two different queries – one with a where clause
> > and one without on one of the tables involved in the query.
> > Are you sure that the queries are the same?
> >
> > But it is also quite usual that the plan changes after new data
> > changes.
> > Is it true in your case that some table got more records after the
> > fill than it had before?
> > Especially this one?:
> >
> > You can also try modifying your query to force a static plan (some
> > +0), or update index statistics after some number of operations.
> >
> > Regards,
> > Karol Bieniaszewski
> >
> >
> >
>
>
>
>
>
>
> 





Re: ODP: ODP: ODP: [firebird-support] SQL slower after N executions?

2020-04-30 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Thanks Karol! I will consider an explicit plan if my current rewrite of 
the query doesn't pan out.

The table "TmpFKExportId" will always have the same number of records, 
but a different set (each chunk of the batch will load the same number 
of IDs, but of course different ID values). Statistics should be constant.

The other tables will vary slightly over time, but the database contains 
2+ million companies and related data. During the batch, the only 
updates to this data is from our TM staff who phone companies and enter 
data manually, one company at a time. So overall, the change rate is 
minute. And in particular, there's no big change exactly 318 chunks into 
the batch job, every time.

Yes, the query is hardcoded as a string literal into the app's source 
code. It can't get more "same" than that. :-)
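For the record, pinning a plan explicitly would look roughly like this. This is only a sketch: the select list is shortened to one column, and the PLAN clause must name the aliases and indices exactly as the engine reports them:

```sql
-- Hypothetical sketch of an explicit PLAN clause (Firebird syntax);
-- the real query would of course list all its joins here.
select F."ECO_ID"
from "TmpFKExportId" L
inner join "Företag" F on F."ECO_ID" = L."ECO_ID"
PLAN JOIN (L NATURAL, F INDEX ("IX_PK_Företag"))
```

The downside is the usual one: a hardcoded plan breaks if an index is renamed or dropped.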

Regards,
Kjell

On 2020-04-30 at 17:06, Karol Bieniaszewski 
liviusliv...@poczta.onet.pl [firebird-support] wrote:
>
> I suppose you have two different queries – one with a where clause and
> one without on one of the tables involved in the query.
> Are you sure that the queries are the same?
>
> But it is also quite usual that the plan changes after new data
> changes.
> Is it true in your case that some table got more records after the
> fill than it had before?
> Especially this one?:
>
> You can also try modifying your query to force a static plan (some +0),
> or update index statistics after some number of operations.
>
> Regards,
> Karol Bieniaszewski
>
>
> 





Re: ODP: ODP: [firebird-support] SQL slower after N executions?

2020-04-30 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Maybe it's worth noting that if I execute that exact query in 
FlameRobin, while the batch is running and suffering 50+ second 
execution time, FlameRobin gets it executed in 2-4 seconds every single 
time.

Weird.

I'll try to rewrite the query using an execute block, a temporary table, 
or something like that, to reduce query complexity.
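A first attempt might be to replace the big UNION with one EXECUTE BLOCK that runs each former union branch as a separate INSERT into a temp table. This is only a sketch under assumptions: the temp table and column names are borrowed from other queries in this thread, only one branch is shown, and in isql the block needs SET TERM around it:

```sql
-- Hypothetical sketch: one INSERT per former union branch, so each
-- branch gets its own (hopefully stable) plan.
execute block
as
begin
  delete from "TmpFKExportUhant";

  insert into "TmpFKExportUhant" ("ECO_ID", "Bärartyp")
  select Uhant."ECO_ID", 'Företag'
  from "Företag" F
  inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
  inner join "Uppgiftshanterare" Uhant
    on Uhant."ECO_ID" = F."Uppgiftshanterare";

  -- ... repeat for the other branches ...
end
```

Whether splitting it up actually sidesteps the plan flip is untested.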

Kjell

On 2020-04-30 at 13:04, Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support] wrote:
>
> Yes, but the transaction just wasn't there.
>
> I did let it run during the night. Here are FBMonitor diagrams from
> about 1 hour before slowdown up to now:
> https://cdn.mise.se/FirebirdDiagrams.png
>
> The vertical line in each diagram marks the time of slowdown (at about
> 22:05 last night).
>
> The upper left diagram shows in red that I accidentally left a
> transaction open in FlameRobin at about 20:50, which does not correlate
> to the time of slowdown. I committed it at 7:43 this morning.
>
> The I/O activity diagram shows that the cache activity goes up
> significantly at this time, while disk reads drop slightly. Disk writes
> drop a bit earlier. Nothing significant seems to happen with these
> params at the time of slowdown, except that there's a spike of disk
> reads at that moment. Nothing noteworthy in any other diagrams as far as
> I can see.
>
> Looking in the statement history from FBMonitor, for the SQL query of
> interest, I see a significant change in the statistics and execution
> plan at the time of slowdown.
>
> ===Before slowdown===
> Fetches lies around 150 GB
> Table scan lies around 94-95 %
> Sequential reads lies around 2.3 million
> Indexed reads lies around 130 thousand
> Fragment reads lies around 650
>
> ===After slowdown===
> Fetches lies around 1.17 TB (up a factor 8)
> Table scan lies around 24 % (down 70 %)
> Sequential reads lies around 4.7 million (up a factor 2)
> Indexed reads lies around 14.7 million (up a factor 113)
> Fragment reads lies around 2300 (up a factor 3.5)
>
> The execution plan (new format, but unfortunately truncated) changes
> from this:
>
> === BEFORE SLOWDOWN ===
>
> Select Expression
>     -> Singularity Check
>     -> Aggregate
>     -> Nested Loop Join (inner)
>     -> Filter
>     -> Table "FöretagOmsättningsår" as "UHANT
> FTGOMSAR2" Access By ID
>     -> Bitmap
>     -> Index "IX_FöretagOmsättningsåDBN" Range
> Scan (full match)
>     -> Filter
>     -> Table "År" as "UHANT AR2" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_År" Unique Scan
> Select Expression
>     -> Filter
>     -> Table "Uppgiftsspärr" as "SP2" Access By ID
>     -> Bitmap
>     -> Index "IX_Uppgiftsspärr_Uppgi60J" Range Scan (full
> match)
> Select Expression
>     -> Nested Loop Join (inner)
>     -> Filter
>     -> Table "Uppgiftshanterare" as "UHANT2" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_Uppgiftshanterare" Unique Scan
>     -> Filter
>     -> Table "Uppgiftsspärr" as "SP2" Access By ID
>     -> Bitmap
>     -> Index "IX_Uppgiftsspärr_Uppgi8MK" Range Scan
> (full match)
> Select Expression
>     -> Filter
>     -> Table "Uppgiftshållare" as "UH2" Access By ID
>     -> Bitmap
>     -> Index "IX_PK_Uppgiftshållare" Unique Scan
> Select Expression
>     -> Unique Sort (record length: 484, key length: 20)
>     -> Filter
>     -> Nested Loop Join (outer)
>     -> Nested Loop Join (outer)
>     -> Nested Loop Join (inner)
>     -> Nested Loop Join (inner)
>     -> Unique Sort (record length: 322, key
> length: 112)
>     -> Union
>     -> Nested Loop Join (inner)
>     -> Table "TmpFKExportId" as
> "UHANT L" Full Scan
>     -> Filter
>     -> Table "Företag" as
> "UHANT F" Access By ID
>     -> Bitmap
>     -> Index
> "IX_PK_Företag" Unique Scan

Re: ODP: ODP: [firebird-support] SQL slower after N executions?

2020-04-30 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Yes, but the transaction just wasn't there.

I did let it run during the night. Here are FBMonitor diagrams from 
about 1 hour before slowdown up to now:
https://cdn.mise.se/FirebirdDiagrams.png

The vertical line in each diagram marks the time of slowdown (at about 
22:05 last night).

The upper left diagram shows in red that I accidentally left a 
transaction open in FlameRobin at about 20:50, which does not correlate 
to the time of slowdown. I committed it at 7:43 this morning.

The I/O activity diagram shows that the cache activity goes up 
significantly at this time, while disk reads drop slightly. Disk writes 
drop a bit earlier. Nothing significant seems to happen with these 
params at the time of slowdown, except that there's a spike of disk 
reads at that moment. Nothing noteworthy in any other diagrams as far as 
I can see.

Looking in the statement history from FBMonitor, for the SQL query of 
interest, I see a significant change in the statistics and execution 
plan at the time of slowdown.

===Before slowdown===
Fetches lies around 150 GB
Table scan lies around 94-95 %
Sequential reads lies around 2.3 million
Indexed reads lies around 130 thousand
Fragment reads lies around 650

===After slowdown===
Fetches lies around 1.17 TB (up a factor 8)
Table scan lies around 24 % (down 70 %)
Sequential reads lies around 4.7 million (up a factor 2)
Indexed reads lies around 14.7 million (up a factor 113)
Fragment reads lies around 2300 (up a factor 3.5)

The execution plan (new format, but unfortunately truncated) changes 
from this:

=== BEFORE SLOWDOWN ===

Select Expression
     -> Singularity Check
     -> Aggregate
     -> Nested Loop Join (inner)
     -> Filter
     -> Table "FöretagOmsättningsår" as "UHANT 
FTGOMSAR2" Access By ID
     -> Bitmap
     -> Index "IX_FöretagOmsättningsåDBN" Range 
Scan (full match)
     -> Filter
     -> Table "År" as "UHANT AR2" Access By ID
     -> Bitmap
     -> Index "IX_PK_År" Unique Scan
Select Expression
     -> Filter
     -> Table "Uppgiftsspärr" as "SP2" Access By ID
     -> Bitmap
     -> Index "IX_Uppgiftsspärr_Uppgi60J" Range Scan (full 
match)
Select Expression
     -> Nested Loop Join (inner)
     -> Filter
     -> Table "Uppgiftshanterare" as "UHANT2" Access By ID
     -> Bitmap
     -> Index "IX_PK_Uppgiftshanterare" Unique Scan
     -> Filter
     -> Table "Uppgiftsspärr" as "SP2" Access By ID
     -> Bitmap
     -> Index "IX_Uppgiftsspärr_Uppgi8MK" Range Scan 
(full match)
Select Expression
     -> Filter
     -> Table "Uppgiftshållare" as "UH2" Access By ID
     -> Bitmap
     -> Index "IX_PK_Uppgiftshållare" Unique Scan
Select Expression
     -> Unique Sort (record length: 484, key length: 20)
     -> Filter
     -> Nested Loop Join (outer)
     -> Nested Loop Join (outer)
     -> Nested Loop Join (inner)
     -> Nested Loop Join (inner)
     -> Unique Sort (record length: 322, key 
length: 112)
     -> Union
     -> Nested Loop Join (inner)
     -> Table "TmpFKExportId" as 
"UHANT L" Full Scan
     -> Filter
     -> Table "Företag" as 
"UHANT F" Access By ID
     -> Bitmap
     -> Index 
"IX_PK_Företag" Unique Scan
     -> Filter
     -> Table 
"Uppgiftshanterare" as "UHANT UHANT" Access By ID
     -> Bitmap
     -> Index 
"IX_PK_Uppgiftshanterare" Unique Scan
=== DIFF STARTS HERE ===
     -> Nested Loop Join (inner)
     -> Table "TmpFKExportId" as 
"UHANT L" Full Scan
     -> Filter
     -> Table "Företag" as 
"UHANT F" Access By ID
     -> Bitmap
     -> Index 
"IX_PK_Företag" Unique Scan
     -> Filter
     -> Table 
"FöretagOmsättningsår" as "UHANT FTGOMSAR" Access By ID
     -> Bitmap
     -> Index 
"IX_FöretagOmsättningsåDBN" Range Scan (full match)
     -> Filter
  

Re: ODP: [firebird-support] SQL slower after N executions?

2020-04-29 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Thanks,

I tried firing up Gregor Kobler's FBMonitor (similar to the old but dead 
Sinatica Monitor).

I found that OAT is 222085103 while OIT and OST (what's that?) are 
261140768 and growing. I assume the OAT must have got stuck somehow 
quite a while back, since it's so far behind the others. The machine was 
rebooted for upgrades rather recently, and I can't see any stuck 
transactions, so the OAT must be a zombie one.

No auto sweep (interval set to 0), so I'm trying a manual sweep now. 
Guess I should schedule that weekly or something.

I couldn't see anything interesting happening with the trans numbers 
when I exited the batch app. Will restart it after the manual sweep and 
see if anything interesting shows up. The slowdown, if still there, 
would probably happen after I go to sleep, so I'll have to follow up 
tomorrow.
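For following up, the same transaction markers can also be polled from SQL via the monitoring tables instead of running gfix -h, e.g.:

```sql
-- The transaction markers straight from the monitoring tables
-- (available since Firebird 2.1). OST = oldest snapshot.
select MON$OLDEST_TRANSACTION,  -- OIT
       MON$OLDEST_ACTIVE,       -- OAT
       MON$OLDEST_SNAPSHOT,     -- OST
       MON$NEXT_TRANSACTION
from MON$DATABASE;
```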

Other info about the database, in case it says anything:

Page size: 16384
ODS: 12.0
Page buffers: 614400 (roughly 9.5 Gbyte)
Sweep interval: 0 (like I mentioned)
Forced writes: off (yes, I know the risks)

 From databases.conf:
DefaultDbCachePages = 600K (same as Page buffers above, of course)
FileSystemCacheThreshold = 1000K

 From firebird.conf:
FileSystemCacheSize = 40
TempCacheLimit = 4G
MaxUnflushedWrites = 100
MaxUnflushedWriteTime = 60
ServerMode = Super

All other performance params default. Some auth and access restriction 
params are non default, but I assume that's irrelevant.

System has 40 Gbyte RAM and the temp disks have 70+ Gbyte free.
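As a sanity check on those numbers (both DefaultDbCachePages and FileSystemCacheThreshold are in pages, so with 600K buffers below the 1000K threshold the filesystem cache should, if I read the release notes right, remain in use):

```sql
-- 614400 pages à 16384 bytes = 10066329600 bytes, i.e. roughly 9.4 GiB,
-- matching the "roughly 9.5 Gbyte" above.
select 614400 * cast(16384 as bigint) as cache_bytes
from RDB$DATABASE;
```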

Regards,
Kjell

On 2020-04-29 at 14:00, Karol Bieniaszewski 
liviusliv...@poczta.onet.pl [firebird-support] wrote:

> Hi
>
> There are many possibilities without access i can only hint you:
>
> Look at MON$TRANSACTIONS – maybe you have an active one which stops 
> garbage collection.
>
> Look also at the sort buffer setting in firebird.conf.
>
> Look at the buffers setting in the database itself (gfix -h shows you 
> the value).
>
> Look also at the automatic sweep settings (gfix -h shows that value too).
>
> Update Firebird to the most recent version – the official one is 3.0.5, 
> but I use the most recent snapshot without problems.
>
> More can be said after some details are provided.
>
> regards,
>
> Karol Bieniaszewski
>
> 





[firebird-support] SQL slower after N executions?

2020-04-29 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

I have a batch job that among other things executes a certain SQL query 
to load some data.

This query takes about 10-12 seconds the first ~320 times, but then 
suddenly takes ~45-50 seconds every time.

Any idea what could be causing this?

The query is rather long, see below. Firebird 3.0.4 on Windows Server 
2016 64 bit. App is ASP Framework using FirebirdSql.Data.FirebirdClient 
7.1.1.0.

I can't see any other operations slowing down at the same point in time. 
I can't see any particular change in the app's memory usage (it varies 
up and down both before and after the time of slowdown). I did try a 
change of Firebird's cache settings (more page buffers), but it didn't 
seem to have any effect (yes I did restart FB service to ensure changes 
took effect). Windows file system cache doesn't seem to hit the roof at 
this particular time, so I see no connection there.

Stopping the batch and restarting it, with the app still open, doesn't 
help, but restarting the entire app leads to the shorter execution time 
again.

Each execution of this SQL query is done in a new connection (not 
pooled). Before this SQL, the same connection is also used to empty the 
table "TmpFKExportId" and then fill it with some new data (a simple list 
of 250 bigints).

The app does do other DB work that spans multiple executions of this SQL 
query, using a separate connection. Those operations do not exhibit any 
noticeable slowdown.

I'm at a loss... Any suggestions of things to try are most welcome!

SQL:

select distinct Upg."ECO_ID", Upg."ECO_TYPE"
from (
     select 'Företag' "Bärartyp", F."ECO_ID" "Bärare", Uhant."ECO_ID"
     from "Företag" F
     inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
     inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
F."Uppgiftshanterare"
     union
     select 'FöretagOmsättningsår' "Bärartyp", FtgOmsar."ECO_ID" 
"Bärare", Uhant."ECO_ID"
     from "Företag" F
     inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
     inner join "FöretagOmsättningsår" FtgOmsar on FtgOmsar."Företag" = 
F."ECO_ID"
     inner join "År" Ar on Ar."ECO_ID" = FtgOmsar."Omsättningsår"
     inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
FtgOmsar."Uppgiftshanterare"
     where Ar."Årtal" = (
     select max(Ar2."Årtal")
     from "FöretagOmsättningsår" FtgOmsar2
     inner join "År" Ar2 on Ar2."ECO_ID" = FtgOmsar2."Omsättningsår"
     where FtgOmsar2."Företag" = F."ECO_ID"
   )
     union
     select 'ITmiljö' "Bärartyp", ITmilj."ECO_ID" "Bärare", Uhant."ECO_ID"
     from "Företag" F
     inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
     inner join "ITmiljö" ITmilj on ITmilj."ECO_ID" = F."ITmiljö"
     inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
ITmilj."Uppgiftshanterare"
     union
     select 'Befattningsinnehav' "Bärartyp", BI."ECO_ID" "Bärare", 
Uhant."ECO_ID"
     from "Företag" F
     inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
     inner join "Befattningsinnehav" BI on BI."Företag" = F."ECO_ID"
     inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
BI."Uppgiftshanterare"
     union
     select 'Person' "Bärartyp", P."ECO_ID" "Bärare", Uhant."ECO_ID"
     from "Företag" F
     inner join "TmpFKExportId" L on L."ECO_ID" = F."ECO_ID"
     inner join "Befattningsinnehav" BI on BI."Företag" = F."ECO_ID"
     inner join "Person" P on P."ECO_ID" = BI."Person"
     inner join "Uppgiftshanterare" Uhant on Uhant."ECO_ID" = 
P."Uppgiftshanterare"
   ) Uhant
inner join "Uppgiftshållare" UH on UH."Hanterare" = Uhant."ECO_ID"
inner join "Uppgift" Upg on Upg."Hållare" = UH."ECO_ID"
left join "Uppgiftsspärr" SpUhant on SpUhant."Uppgiftshanterare" = 
Uhant."ECO_ID" and Uhant."Bärartyp" in ('Epostadress', 'Person')
left join "Uppgiftsspärr" SpUH on SpUH."Uppgiftshållare" = UH."ECO_ID"
where Upg."TogsBort" is null
   and SpUhant."ECO_ID" is null
   and SpUH."ECO_ID" is null
   and not exists (
     select 1
     from "Uppgiftshållare" UH2
     where UH2."ECO_ID" = Upg."RelateradHållare"
   and (
     exists (
   select 1
   from "Uppgiftsspärr" Sp2
   where Sp2."Uppgiftshållare" = UH2."ECO_ID"
     )
     or exists (
   select 1
   from "Uppgiftsspärr" Sp2
   inner join "Uppgiftshanterare" Uhant2 on Uhant2."ECO_ID" = 
Sp2."Uppgiftshanterare"
   where Uhant2."ECO_ID" = UH2."Hanterare"
     )
   )
   )

Regards,
Kjell






Re: [firebird-support] How to inspect nbak state of a Firebird database

2020-04-22 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Excellent! Thanks!

Feel free to answer that on StackOverflow too (link below). If not, I 
will do it myself after testing it and mark my own answer as the correct 
one.
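For reference, the read itself should then be just a one-row query (the column lives on MON$DATABASE):

```sql
-- 0 = normal (nbak_state_normal), 1 = stalled (nbak_state_diff), 2 = merge.
select MON$BACKUP_STATE from MON$DATABASE;
```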

Regards,
Kjell

On 2020-04-22 at 10:08, 'Paul Beach' pbe...@mail.ibphoenix.com 
[firebird-support] wrote:

>
> MON$DATABASE -> MON$BACKUP_STATE -> 0 - Normal, 1 - Stalled, 2 - Merge.
>
> Regards
> Paul
>
> -Original Message-
> From: firebird-support@yahoogroups.com 
> [mailto:firebird-support@yahoogroups.com]
> Sent: 22 April 2020 09:56
> To: Firebird Forum
> Subject: [firebird-support] How to inspect nbak state of a Firebird 
> database
>
> Is there a way in C# to inspect the nbak state of a Firebird database?
>
> The state in the database header would be set to |nbak_state_diff| or
> |nbak_state_normal|. But how do I actually read this value?
>
> https://stackoverflow.com/questions/61359915/how-to-inspect-nbak-state-of-a-firebird-database
>






[firebird-support] How to inspect nbak state of a Firebird database

2020-04-22 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Is there a way in C# to inspect the nbak state of a Firebird database?

The state in the database header would be set to |nbak_state_diff| or 
|nbak_state_normal|. But how do I actually read this value?

https://stackoverflow.com/questions/61359915/how-to-inspect-nbak-state-of-a-firebird-database

Regards,
Kjell






Re: [firebird-support] Page buffers & cache threshold

2020-03-10 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
On 2020-03-08 at 23:16, liviuslivius liviusliv...@poczta.onet.pl 
[firebird-support] wrote:
> Hi
>
> I do not know which rel notes have you read but this one is quite good 
> about
>
> https://firebirdsql.org/rlsnotesh/rnfb25-fbconf-fscache.html

Yes, that's the one. A few things are unclear to me.

1. What does "allow the page cache to be duplicated to the filesystem 
cache" mean? Does it mean that the same pages are kept in both FB's page 
buffer and in the file system cache? What's the alternative? Is it that 
FB prevents Windows from keeping DB pages in the file system cache? What 
are the pros and cons of either alternative?

2. OK, so the threshold compared to page buffer size determines if file 
system cache is used or not. But how do different threshold values 
affect things apart from that? I.e. if page buffer size is X, and 
threshold is changed from Y to Z where both Y and Z > X, what happens? 
Similarly, if both Y and Z < X, what happens? Nothing? It's just a 
threshold with yes/no effect? FileSystemCacheSize controls the other 
aspects?

Regards,
Kjell






Re: [firebird-support] Page buffers & cache threshold

2020-03-08 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2020-03-08 kl. 10:45, skrev liviuslivius liviusliv...@poczta.onet.pl 
[firebird-support]:
> About 3
> "DefaultDbCachePages" in firebird.conf is used only at server start. 
> Then restart is needed. But you have 2 more options and you should 
> check it maybe you override default. Buffers you can set also in 
> databases.conf and in db file itself by gfix buffers.
>
> And it is used in priority.
> 1. buffers set in db by gfix if 0 then go to point 2.
> 2. Setting in databases.conf if not present go to point 3.
> 3. Setting from firebird.conf

Ah, that explains the limited firebird.exe process commit size at least. 
Apparently I have 131072 set in the database, so it overrides the 
setting in databases.conf. I wasn't aware of that priority order. Will 
change DB setting to 0 to be able to control it from databases.conf.
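The priority order described above can be modeled as a small lookup (a toy sketch; 0 stands for "not set", matching the gfix convention):

```python
def effective_buffers(db_header: int, databases_conf: int,
                      firebird_conf: int) -> int:
    """Resolve the page buffer count the way the server does."""
    if db_header:         # 1. value stored in the database by gfix -buffers
        return db_header
    if databases_conf:    # 2. per-database setting in databases.conf
        return databases_conf
    return firebird_conf  # 3. global default from firebird.conf

# With 131072 stored in the database, the databases.conf value is ignored:
print(effective_buffers(131072, 1310720, 2048))  # → 131072
```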

Thanks!

Other questions remain...

Regards,
Kjell






[firebird-support] Page buffers & cache threshold

2020-03-08 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Trying to figure out the best settings for my database and app usage. 
Firebird 3.0.4 SuperServer on virtualized Windows Server 2016 with 40 
Gbyte RAM and fast SSD disks (separate disks for OS and DB). Database 
almost 200 Gbyte. Page size 16k.

We can assume that no other heavy work is running on the server. The app 
itself uses a few Gbyte RAM.

I have set in databases.conf:
DefaultDbCachePages = 1310720
FileSystemCacheThreshold = 640K

And in firebird.conf:

FileSystemCacheSize = 50%
TempCacheLimit = 4G
MaxUnflushedWrites = 100
MaxUnflushedWriteTime = 60

All other settings (except some security ones) are set to default.

1. Would you recommend any changes to these settings, and why?

2. I see that the firebird.exe process has commit size about 2.2 Gbyte, 
which seems small considering the DefaultDbCachePages = 1310720 setting. 
Shouldn't this result in a page cache of 20 Gbyte?
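For reference, the arithmetic behind that expectation (settings as quoted above):

```python
pages = 1310720        # DefaultDbCachePages
page_size = 16 * 1024  # 16K page size
cache_bytes = pages * page_size
print(cache_bytes / 2**30)  # → 20.0 (GiB)
```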

3. I did change the DefaultDbCachePages setting without restarting the 
superserver process, but I did make sure all connections to the database 
in question were closed. Does this setting take effect only when the 
server process is restarted?

4. It's not clear to me how the FileSystemCacheThreshold setting 
actually works. I have read the release notes for FB 2.5 as well as the 
comments inside firebird.conf and a couple of support list messages. But 
I still find it a bit muddy... I can see that the database file is 
cached, using RAMMap.exe. It reports "Standby" as "28733844 K" for that 
file.

It seems that the file system cache works and keeps large parts of the 
database file in the cache, but I do realize that the batch will for the 
most part need to read "new" data. The important thing ought to be to 
keep the indexes in cache. Right?

Reason/background: The use case is a batch operation that will do mostly 
(only?) reads and will read large parts of the database. The batch does 
its job in "chunks", and I can see that after a while the time to 
process each chunk goes up from about half a minute to about a full 
minute, or even worse. Restarting the app makes it go back to about half 
a minute per chunk, but just stopping the batch operation (without 
restarting the app) doesn't seem to help. I will investigate this more, 
of course. It's possible that the app has some garbage piling up that 
might slow things down.

Mvh,
Kjell



Kjell Rilbe
Telefon: 0733-44 24 64

Marknadsinformation i Sverige AB
Sundbybergs Torg 1 H 7 tr
172 67 Sundbyberg, Sweden
www.marknadsinformation.se <http://www.marknadsinformation.se/>
08-514 905 90

Företagskontakt.se <http://www.f%C3%B6retagskontakt.se/>
Personkontakt.se <http://www.personkontakt.se/>
DI Gasell 2018, DI Gasell 2019





[firebird-support] Can i change write mode "On the fly" and does it take effect right away?

2020-01-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Related to my preceding question:

Can I execute "gfix -write sync" and "gfix -write async" while the 
database is in active use? I.e. while it has connections from a 
superserver process?

If yes, will the changed write mode take effect immediately, or does it 
require a restart of the superserver process, all connections to be 
closed, or what? I.o.w. when does the changed write mode actually take 
effect?

I don't see this documented, but please point me to it if it is.

Regards,
Kjell








Re: [firebird-support] Firebird begin backup and forced writes off, timestamp update delayed

2020-01-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2020-01-24 kl. 09:08, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:

> Hi,
>
> As posted here:
> https://dba.stackexchange.com/questions/258124/firebird-begin-backup-and-forced-writes-off-timestamp-update-delayed
>
> Feel free to reply here, at StackExchange, or both. I will make sure the
> StackExchange question gets updated to show any helpful info that
> appears here.
>
> We have a Firebird database that's almost 200 Gbyte in size, and for
> performance we have forced writes off. Please don't debate the risks of
> forced writes, we are aware of them. This is Firebird 3.0.4 on Windows
> Server 2016.
>
> For backup, we use alter database begin backup and alter database end
> backup, and copy the main database file using FastCopy. We can see that
> the delta file is created right away when executing alter database begin
> backup. But the main database file usually gets an updated timestamp
> quite some time later, often within a few minutes, but sometimes it
> takes longer.
>
> I assume this is caused by forced writes off and the fact that Windows
> may delay some writes for an arbitrary amount of time. In other words, I
> assume that the Firebird engine does in fact not write to the main
> database file after alter database begin backup, but writes that were
> made before this may be delayed by Windows for quite a while, meaning
> it's not in fact safe to start copying the main database file until
> Windows has flushed all writes.
>
> Now, my question is how to properly handle this to achieve safe and
> reliable backups?
>
> Up to now I've scheduled file copy to 3 hours after alter database begin
> backup, and I also included a dummy transaction right after the alter
> database begin backup.
>
> But are there better approaches?
>
> I came up with the idea to use gfix to switch forced writes on before
> executing alter database begin backup (and switch it back off later). I
> assume this will cause the locked state to be flushed to disk right
> away, but I also assume that writes that were made before switching
> forced writes on will still suffer the arbitrary delay from Windows'
> write cache. Correct? Or is gfix or the Firebird engine actually able to
> force flush all previous writes that are already in the Windows write
> cache?
>
> Another idea is to use Sysinternals Sync util to flush Windows' write
> cache for the entire disk. For overall system performance, this would not
> be a problem, considering backup is scheduled to a low-traffic time of
> day.
>
> We could use nbackup instead of FastCopy. Would this help? In other
> words: would nbackup's reads of the main database file see the new
> still-cached versions of the database pages being copied, or would it
> see the outdated on-disk versions?
>
> In fact, I'm not sure if FastCopy actually sees the new still-cached
> versions or not, but it fails when it notices that the source file's
> timestamp has been changed since it started copying, so it fails anyway.
> There's no apparent way to avoid this.

Come to think of it, if it's possible, shouldn't alter database begin 
backup (as well as nbackup -L) always flush any writes still in Windows' 
cache? I mean, the purpose of the operation is to make sure the main 
database file is frozen, but if preceding cached writes are not flushed, 
this frozen state is actually not guaranteed.

Question: Is it possible for alter database begin backup and nbackup -L 
to force flush any preceding writes that are still in Windows' cache?

Regards,
Kjell








[firebird-support] Firebird begin backup and forced writes off, timestamp update delayed

2020-01-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

As posted here:
https://dba.stackexchange.com/questions/258124/firebird-begin-backup-and-forced-writes-off-timestamp-update-delayed

Feel free to reply here, at StackExchange, or both. I will make sure the 
StackExchange question gets updated to show any helpful info that 
appears here.

We have a Firebird database that's almost 200 Gbyte in size, and for 
performance we have forced writes off. Please don't debate the risks of 
forced writes, we are aware of them. This is Firebird 3.0.4 on Windows 
Server 2016.

For backup, we use alter database begin backup and alter database end 
backup, and copy the main database file using FastCopy. We can see that 
the delta file is created right away when executing alter database begin 
backup. But the main database file usually gets an updated timestamp 
quite some time later, often within a few minutes, but sometimes it 
takes longer.

I assume this is caused by forced writes off and the fact that Windows 
may delay some writes for an arbitrary amount of time. In other words, I 
assume that the Firebird engine does in fact not write to the main 
database file after alter database begin backup, but writes that were 
made before this may be delayed by Windows for quite a while, meaning 
it's not in fact safe to start copying the main database file until 
Windows has flushed all writes.

Now, my question is how to properly handle this to achieve safe and 
reliable backups?

Up to now I've scheduled file copy to 3 hours after alter database begin 
backup, and I also included a dummy transaction right after the alter 
database begin backup.

But are there better approaches?

I came up with the idea to use gfix to switch forced writes on before 
executing alter database begin backup (and switch it back off later). I 
assume this will cause the locked state to be flushed to disk right 
away, but I also assume that writes that were made before switching 
forced writes on will still suffer the arbitrary delay from Windows' 
write cache. Correct? Or is gfix or the Firebird engine actually able to 
force flush all previous writes that are already in the Windows write cache?

Another idea is to use Sysinternals Sync util to flush Windows' write 
cache for the entire disk. For overall system performance, this would not 
be a problem, considering backup is scheduled to a low-traffic time of day.

We could use nbackup instead of FastCopy. Would this help? In other 
words: would nbackup's reads of the main database file see the new 
still-cached versions of the database pages being copied, or would it 
see the outdated on-disk versions?

In fact, I'm not sure if FastCopy actually sees the new still-cached 
versions or not, but it fails when it notices that the source file's 
timestamp has been changed since it started copying, so it fails anyway. 
There's no apparent way to avoid this.

Mvh,
Kjell








Re: [firebird-support] Isql exclusive mode?

2020-01-23 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2020-01-23 kl. 11:32, skrev Dimitry Sibiryakov s...@ibphoenix.com 
[firebird-support]:

> 23.01.2020 08:32, Kjell Rilbe kjell.ri...@marknadsinformation.se 
> [firebird-support] wrote:
>> Can anyone explain in which situations this happens, and in particular
>> how I can make sure it does NOT happen?
> If it is version 3 - do not use embedded mode.

I guess that's what I found and noted in the StackExchange question.

So is it correct that I avoid embedded mode by specifying 
"localhost/3050:AliasName" instead of just "AliasName"?

Kjell








Re: [firebird-support] Isql exclusive mode?

2020-01-22 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2020-01-23 kl. 08:32, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:

> I have a vague recollection that Firebird's isql utility will in some
> situations make an exclusive connection to the database, meaning that
> any other connection attempts will be refused.
>
> I fail to find information about this.
>
> Can anyone explain in which situations this happens, and in particular
> how I can make sure it does NOT happen?
>
> My intended usage is to execute some commands/queries in a scheduled
> task, and I don't want to run the risk of blocking other database work
> while this task is executing.
>
> Also asked at DBA@StackExchange, feel free to reply there or here or
> both:
> https://dba.stackexchange.com/questions/258038/firebird-isql-exclusive-mode

From a few simple tests it seems like I get an exclusive connection, 
i.e. no other connections are allowed, when there are no other 
connections present. Conversely, if there are other connections when 
isql connects, isql's connection doesn't block other connections.

I tested this with Firebird 3.0.4 in superserver mode and FlameRobin, 
using the same database alias for both isql and FlameRobin connections. 
The alias is defined with an absolute local file path.

Should I include host and port to ensure that isql always connects 
through the superserver (allowing other connections) rather than opening 
the file directly (blocking other connections)?

Kjell








[firebird-support] Isql exclusive mode?

2020-01-22 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

I have a vague recollection that Firebird's isql utility will in some 
situations make an exclusive connection to the database, meaning that 
any other connection attempts will be refused.

I fail to find information about this.

Can anyone explain in which situations this happens, and in particular 
how I can make sure it does NOT happen?

My intended usage is to execute some commands/queries in a scheduled 
task, and I don't want to run the risk of blocking other database work 
while this task is executing.

Also asked at DBA@StackExchange, feel free to reply there or here or both:
https://dba.stackexchange.com/questions/258038/firebird-isql-exclusive-mode

Mvh,
Kjell








[firebird-support] Is it safe to risk multiple Firebird nbackup -L?

2020-01-13 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Please see my question here:
https://dba.stackexchange.com/questions/257259/is-it-safe-to-risk-multiple-firebird-nbackup-l

Quoting here, but wish you will reply at the site, if acceptable to you.

I'm using Firebird 3.0 nbackup to lock + copy + unlock each night.

If some steps fail or are skipped e.g. due to system restart (updates), 
it may happen that the master is not unlocked (-N). Next night, it will 
already be in locked state, but the scheduled backup task will lock it 
again (-L).

Is this safe?

Will the second -L have any effect, or is it a no-op, so a single unlock 
(-N) will restore it to normal state? Or do I have to safeguard against 
a second lock (-L) if it's already in locked state?

Mvh,
Kjell








Re: [firebird-support] gfix -v -full sufficient?

2020-01-11 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2020-01-11 kl. 15:05, skrev Ann Harrison aharri...@ibphoenix.com 
[firebird-support]:

> > On Jan 11, 2020, at 7:37 AM, Kjell Rilbe 
> kjell.ri...@marknadsinformation.se [firebird-support] 
>  wrote:
> >
> > Does gfix -v -full detect all kinds of database corruption? Or are 
> there
> > corruptions that cannot be detected
>
> Gfix finds only errors in the physical structure of the database. 
> These include problems like doubly allocated or unallocated pages, bad 
> back version pointers, and orphaned back version chains.
>
> Gfix will not find logical data errors like duplicates in a unique 
> constraint or broken foreign key relationships.

Understood. But I take it then, that gfix -v -full does detect ALL kinds 
of errors in the physical database structure?

A gbak backup + restore cycle should find logical errors, I assume.

But a restore with the -inactive switch wouldn't find any violations of 
constraints that are based on underlying indices, right? I.e. it 
wouldn't detect duplicates in unique constraints or primary key violations?

Regards,
Kjell








[firebird-support] gfix -v -full sufficient?

2020-01-11 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Does gfix -v -full detect all kinds of database corruption? Or are there 
corruptions that cannot be detected that way?

I haven't seen any indications of anything wrong, I'm just trying to 
check as best I can after a migration. I have already tested a full gbak 
backup successfully, and will also try a restore from that backup.

Mvh,
Kjell








[firebird-support] Safe to clean out C:\ProgramData\firebird folder when FB offline?

2020-01-10 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Is it safe to clean out the contents of the C:\ProgramData\firebird 
folder, i.e. wipe it, when the Firebird service (superserver) is not 
running?

I understand that it contains lock tables etc. so should not be touched 
while FB is running. But it's not clear to me if it can be wiped safely 
when FB is not running, or if it contains data that can be vital when FB 
starts up again.

My situation is that I'm migrating a VM with an FB installation. 
Migration has been done like this, due to practical reasons (uptime vs. 
file transfer & VM conversion time):

1. Snapshot of source VM, i.e. nightly backup is copied to new location. 
Source stays up and running. Copy process takes about 1 day.

2. Snapshot is unpacked at target location, converted from VMWare to 
HyperV and brought online for additional reconfig and system testing.

3. A few days pass.

4. Both source and target Firebird services are stopped, so no database 
activity is going on anywhere.

5. Sync files from source to target, including database files. This file 
transfer is much smaller than in step 1 so it can be done during offline 
time.

In step 5 I find diffs in the C:\ProgramData\firebird folder, and I'm 
wondering what would be the best approach:

A) Wipe the folder at target.

B) Sync so target has the same content as source.

C) Leave target as is.

Please note that when FB service is started again at TARGET, the 
database files will be identical with those at the SOURCE at the time of 
FB shutdown, and probably won't "match" the contents of 
C:\ProgramData\firebird at TARGET. I would assume that this fact rules 
out option C).

Regards,
Kjell








[firebird-support] Union with or without distinct in the separate queries?

2020-01-02 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Not important, just got curious: Consider a query like this:

select c1 from t1
union select c1 from t2;

This will eliminate all duplicates, since I didn't specify "union all". 
But would this be more or less efficient than (or the same as) this:

select distinct c1 from t1
union select distinct c1 from t2;

I'm thinking that the latter will reduce the number of records to 
consider "as early as possible" and may reduce resource usage that way. 
On the other hand it would involve three "distinct" operations rather 
than a single one... So maybe the answer is that it depends on the data, 
which indices are available and applicable, and may differ from case to 
case? Or perhaps the query engine recognizes that the two queries are 
logically equivalent and executes them identically?



Regards,

Kjell
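One way to see why the two forms are logically equivalent: UNION without ALL already deduplicates across both inputs, so pre-applying DISTINCT to each side cannot change the result set. A toy illustration in Python set terms (whether the engine exploits this equivalence is exactly the performance question above, which this does not answer):

```python
t1 = ["a", "a", "b"]
t2 = ["b", "c", "c"]

# select c1 from t1 union select c1 from t2
plain_union = set(t1) | set(t2)

# select distinct c1 from t1 union select distinct c1 from t2
pre_distinct_union = set(set(t1)) | set(set(t2))

assert plain_union == pre_distinct_union == {"a", "b", "c"}
```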








Re: [firebird-support] How do I limit an SQL search list to 10 records?

2019-12-12 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-12-12 kl. 22:36, skrev Clyde Eisenbeis cte...@gmail.com 
[firebird-support]:
> Marcin, Minor tweak => works!  I also added DESC to retrieve the 
> newest records using the field key name (number is an auto increment).
> `
> "SELECT FIRST " + stNumOfRecordsToRetrieve + " * FROM " + stTableName 
> + " ORDER BY " + fstKeyID + " DESC";
> `
> It's interesting that the newest records are found first (DESC), then 
> the number of records retrieved are from those DESC record results.


Yes, the "subset clause" is applied last of all, so specified ordering 
will be respected.

Unless I'm mistaken, FIRST N is Firebird specific. The SQL standard 
syntax for this is:

SELECT *
FROM T
ORDER BY YOUR_KEY DESC
ROWS N;

where N would be the number of rows to retrieve. You also have:

SELECT *
FROM T
ORDER BY YOUR_KEY DESC
ROWS N TO M;

which will retrieve records N, N+1, N+2, ..., M. Useful for pagination 
and not supported, as far as I know, by the Firebird specific syntax.

Regards,
Kjell






Re: [firebird-support] Fail to create external table

2019-12-02 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-12-02 kl. 16:56, skrev Dimitry Sibiryakov s...@ibphoenix.com 
[firebird-support]:
> 02.12.2019 16:35, Kjell Rilbe kjell.ri...@marknadsinformation.se 
> [firebird-support] wrote:
>> Any ideas?
> In short: give up. Just give up.
> In long: both client side and server side ANSI code pages must support 
> these letter.

They do. All involved operating systems use Windows-1252.

I realized now that I failed to actually set the connection charset for 
the isql session.

I tried now with "chcp 1252", then started isql.exe and, before 
connecting, issued the "set names win1252;" command. In that state, I was 
able to select existing UTF-8 data and get it properly transliterated in 
the isql session.

In this state, I tried again to create the external table, and again got 
the file name "TestÃ¥Ã¤Ã¶.txt", which is the Win1252 interpretation of 
"Teståäö.txt" in UTF-8 encoding.

In other words, it would seem that isql.exe correctly transliterated my 
win1252 encoded input string "Teståäö.txt" and sent it to the server 
that way. The server database character set is UTF-8, so I suspect that 
the server then transliterates the file name into a UTF-8 string and 
passes that to the file system using a non-Unicode API call, which would 
then expect the string to be encoded with the operating system's code 
page, in this case Windows-1252. I think that's where the error occurs.
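The suspected double encoding is easy to reproduce at the byte level (Python used purely to illustrate the effect described above):

```python
name = "Teståäö.txt"
# Encode as UTF-8 (what the server sends), then interpret those same
# bytes as Windows-1252 (what a non-Unicode file system call would do):
mangled = name.encode("utf-8").decode("cp1252")
print(mangled)  # → TestÃ¥Ã¤Ã¶.txt
```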

Should I report this as a bug? I.e. that Firebird server engine fails to 
transliterate the file name for an external table from the database 
character set into the OS character set when making OS API calls for the 
corresponding file operations.

Regards,
Kjell







[firebird-support] Fail to create external table

2019-12-02 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Using Firebird 3.0.4.33054 on Windows (64 bit), I want to create an 
external table where the external file name contains Swedish letters. I 
would think this is a reasonable desire these days.

After failing to get the character encoding right in FlameRobin, I 
thought I'd try isql.exe.

So I fired up a command prompt, set codepage to UTF-8 (chcp 65001). Then 
connected to my database in isql and did a select to check that it 
indeed does display existing UTF-8 data correctly, to ensure that I 
actually got the connection character set right. Seems to work.

Then I try this SQL:

create table "ExSlask" external file 'D:\MISE\Data\Firebird\Teståäö.txt' 
("Id" char(5) character set win1252, "CRLF" char(2) character set win1252);

When I press enter, isql.exe exits without any message whatsoever. I'm 
just returned to the Windows command prompt. No external table is created.

Any ideas?

The same SQL executes fine in FlameRobin, but the resulting file name is 
"Teståäö.txt", which means that FlameRobin and/or Firebird fails to 
correctly convert the UTF-8 encoded string in the SQL into the file 
system's encoding (Win1252). I have FlameRobin connected using UTF-8 
connection charset.

Mvh,
Kjell






Re: [firebird-support] What key word specifies a search for an entire word?

2019-11-28 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-11-28 kl. 17:30, skrev cte...@gmail.com [firebird-support]:
> I have this:
>
> "SELECT fstName, fiKeyID FROM Members WHERE lower(fstName) LIKE 
> lower('%' || @p0 || '%')"
>
> Assume fstName includes "Richard" and "Rich".  If I search for "Rich" 
> it finds both names.
>
> What do I modify so it finds only "Rich"?


I usually solve that kind of problem like this:

SELECT fstName, fiKeyID FROM Members WHERE ' ' || lower(fstName) || ' ' 
LIKE lower('% ' || @p0 || ' %')

Note that I add a leading and a trailing space to the searched string, 
and also to the sought string (inside the % wildcards). This will ensure 
that it finds the sought string only if it's surrounded by spaces, and 
will also find it at the beginning and at the end of the searched 
string, since we added spaces there.

The downside is that no index can be used for this search, but that's 
probably true with other approaches too.

An alternative could be:

SELECT fstName, fiKeyID FROM Members

WHERE lower(fstName) LIKE lower(@p0 || ' %') -- At the start? Note space 
before %
   OR lower(fstName) LIKE lower('% ' || @p0) -- At the end? Note space 
after %
   OR lower(fstName) LIKE lower('% ' || @p0 || ' %') -- In the middle? 
Note spaces inside %
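The padding trick in the first query can be sketched outside SQL; this Python fragment (a hypothetical helper, lowercasing both sides as the SQL does) mimics its behavior:

```python
def whole_word_match(haystack: str, needle: str) -> bool:
    # Surround both strings with spaces, so a word at the start or end
    # of the haystack is still bounded by spaces on both sides.
    return " " + needle.lower() + " " in " " + haystack.lower() + " "

print(whole_word_match("Richard Rich", "Rich"))  # → True
print(whole_word_match("Richard", "Rich"))       # → False
```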

Regards,
Kjell






[firebird-support] Plugin(?) for external tabels in CSV or Excel?

2019-10-09 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

We work in .NET and C# and we often import and export Excel or CSV data. 
It would be really good to be able to select from and insert (append) 
directly into an external CSV file or Excel file.

I am aware of the currently supported external table feature, but the 
format is rather difficult to use, especially for non-char data and for 
char data in encodings with variable length characters (e.g. UTF8). I 
have written a tool to create such files (with UTF8 strings), but it's a 
free standing app and I need to prepare the file first, then use the 
tool's generated DDL to create the external table within Firebird, before 
I can actually access the data from SQL.

I know new FB versions have support for plugins and as far as I understand 
even a separate engine could be plugged in, right? (I may be wrong here, 
haven't read up on it properly...)

Anyway, would it be possible to write some kind of plugin for FB 3 to 
allow it to select from or insert into either CSV or Excel files or 
both? I would code the actual plugin myself, of course. I'm just asking 
if it's worth investigating (not waste time if it's a known dead end).

If it is indeed possible, I'd appreciate some pointers on how to get 
started: relevant docs or some sample code, or whatever.

If I'm successful I'd be happy to open source it, but if I am to write 
it, it will be C#/.NET (too much effort for me otherwise).

Thanks,
Kjell








Re: [firebird-support] Array datatype?

2019-08-28 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-08-29 kl. 07:17, skrev 'Check_Mail' check_m...@satron.de 
[firebird-support]:

> Okay,
>
> unfortunately there is no coherence between the weekday 0-6 and my
> table-Field Montag, Dienstag etc. (developed historically) and the execute
> statement works not too (not with "variable = variable + variable" and not
> with "select variable + variableb from rdb$database")
>
> Thank you, Dimitry!

You should probably consider a lookup table, i.e. whatever you would 
like to put in an array, put it in a separate table instead, one record 
for each array entry.

If the "array" is different for each execution of the procedure, 
consider using a global temporary table, GTT. You can find more on this 
page (about 1/4 down):
https://firebirdsql.org/rlsnotesh/rnfb210-ddl.html

Regards,
Kjell








Re: [firebird-support] SQL

2019-08-27 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-08-05 kl. 13:20, skrev mohamed hamza medh...@hotmail.com 
[firebird-support]:
>
> Hi All,
>
>   I am new to sql,  I am coming  from  Xbase  system .
>  I want to know if it 's  possible to execute query using last query 
> result?
>
> EXAMPLE
> 1  select * from persons where state='CA'
>
> 2 select * from persons where age> 18       but  we have to use the 
> result of 1
>
> I DO NOT WANT TO DO AN AND        (  STATE = 'CA'    AND  AGE > 18  ) ;


First: Why not use an AND? That would be the most natural solution.

Second: No, a plain sequence of SQL queries does not support that. But you 
can put them in a stored procedure or an "execute block" statement, and 
save results in variables for later use.

For example:

set term #;
execute block as
declare person_id int;
begin
   for select id from persons where state = 'CA' into :person_id
   do select * from persons where age > 18 and id = :person_id;
end#
set term ;#

But you still end up with an "and" in your second SQL.

Maybe it's best if you explain what you're actually trying to achieve, 
and what the actual problem is, because using AND is not really a 
problem at all.

Regards,
Kjell Rilbe






Re: [firebird-support] Performance of deleting records based on another table

2019-08-27 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-08-17 kl. 12:51, skrev Joost van der Sluis jo...@cnoc.nl 
[firebird-support]:
>
> Hi all,
>
> I finally realized today that I do not know how to properly remove
> records based on another table.
>
> Suppose I have two tables, one with data (called BIG), and one table
> (SMALL) with only some id's that has to be removed from the BIG-table.
>
> I cannot get this to work, without having Firebird looping over all
> records in BIG naturally, once for each record in SMALL.
>

This is a problem I have faced many times. The best solution I've been 
able to find is an execute block statement that iterates the records in 
SMALL and uses a "delete ... where big.id = :found_small_id" statement 
to delete each matching BIG record. Something like this (including set 
term if required for your SQL tool):

set term #;
execute block as
declare id int;
begin
   -- delete the BIG rows one by one, driven by the ids in SMALL
   for select small.id from small into :id
   do delete from big where id = :id;
end#
set term ;#
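
For comparison, the single-statement form would be the one below; as
discussed in the thread, Firebird's optimizer may plan it as repeated scans
of BIG, which is why the execute-block loop above can win:

```sql
delete from big b
where exists (select 1 from small s where s.id = b.id);
```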

Regards,
Kjell Rilbe



[Non-text portions of this message have been removed]



Re: [firebird-support] Round gives inconsistent results for DOUBLE PRECISION expressions

2019-07-29 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-07-29 kl. 05:49, skrev Geoff Worboys
ge...@telesiscomputing.com.au [firebird-support]:
>
> Hi,
>
> Firebird v2.5
>
> Can anyone explain why the following example gives inconsistent
> results. The first two I consider correct (rounded up to 8.42),
> the last four I consider incorrect (rounded down to 8.41).
>
> EXECUTE BLOCK
> RETURNS (
>   "Result" DOUBLE PRECISION
> ) AS
> DECLARE A DOUBLE PRECISION;
> DECLARE B DOUBLE PRECISION;
> DECLARE C DOUBLE PRECISION;
> BEGIN
>   A = 170.0;
>   B = 4.95;
>   C = 100.0;
>
>   "Result" = round( a * (b / c), 2 );
>   SUSPEND;
>   "Result" = a * (b / c);
>   "Result" = round("Result", 2);
>   SUSPEND;
>
>   "Result" = round( a * b / c, 2 );
>   SUSPEND;
>   "Result" = a * b / c;
>   "Result" = round("Result", 2);
>   SUSPEND;
>
>   "Result" = round( (a * b) / c, 2 );
>   SUSPEND;
>   "Result" = (a * b) / c;
>   "Result" = round("Result", 2);
>   SUSPEND;
> END
>
> I was changing over from a UDF of my own that used an explicitly
> DOUBLE PRECISION input parameter. When I use that UDF in the
> above code, all results return the same 8.42 value.
>
> Geoff Worboys

Geoff,

The value 8.415 (and probably other intermediate results) cannot be
exactly represented in binary form, just like 1/3 cannot in decimal form:
you get an infinite number of binary "decimals". That means that even the
double precision variables that you use will store rounded results.
These small rounding errors cause some expressions to land just under the
exact result 8.415 and thus be rounded down, while others land just above
8.415 and are rounded up.

If you need this to be handled in an exact manner, you will have to use a
data type that does not suffer from this kind of rounding error.
Note: all floating point types have this problem.
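
For illustration, a variant of the same block using an exact type:
Firebird's NUMERIC/DECIMAL with precision up to 18 is stored as a scaled
integer, so values like 4.95 are held exactly and every grouping of the
expression should round the same way:

```sql
EXECUTE BLOCK
RETURNS ("Result" NUMERIC(18, 2)) AS
DECLARE A NUMERIC(18, 4);
DECLARE B NUMERIC(18, 4);
DECLARE C NUMERIC(18, 4);
BEGIN
  A = 170.0;
  B = 4.95;
  C = 100.0;
  -- exact fixed-point arithmetic: no binary representation error,
  -- so the grouping of the operations no longer matters
  "Result" = round(A * B / C, 2);
  SUSPEND;
  "Result" = round(A * (B / C), 2);
  SUSPEND;
END
```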



Regards,
Kjell Rilbe








Re: [firebird-support] Help interpret plan

2019-06-12 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-06-12 kl. 13:40, skrev Svein Erling Tysvær setys...@gmail.com 
[firebird-support]:
> How many records are there in BUPD, are the other indexes in the plan 
> selective and can you show us the query? Just wondering if it can be 
> rewritten so that the query performs better (I assume the reason for 
> you asking about the plan being that you have a query that is too slow).

I could, but there's a left join to F on one column = a column from the
join, OR another column in (a subselect with a where on another column from
the join), which I don't really think could be improved. I decided to
ignore the result of the query. I can achieve what I need by other means.

Thanks anyway! To you too, Dimitry, for confirming how to interpret the
plan, which will be generally useful to me. :-)

Regards,
Kjell






[firebird-support] Help interpret plan

2019-06-12 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
I got this plan for a query:

PLAN SORT (
   JOIN (
     JOIN (
   JOIN (
     BUPD NATURAL,
     UPD INDEX ("IX_PK_Uppdatering"),
     SP INDEX ("IX_Uppgiftsspärr_TilläG2M")
   ),
   E INDEX ("IX_Epostadress_UppgiftLZH")
     ),
     F NATURAL
   )
)

Would this mean that the table F would be subject to a full natural scan 
for each record of the preceding joins?

Or does it mean that F is scanned once during which each F record is 
matched against the result set of the joins?

F is large (well, ~3 million records) and the join result set is small 
(~1 records).

Mvh,
Kjell





Re: [firebird-support] Re: Drop table safe when other users are active?

2019-06-03 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-05-26 kl. 13:28, skrev hv...@users.sourceforge.net
[firebird-support]:
>
> > A few years ago we suffered Firebird corruption and got it fixed with
> > the help of IB Surgeon. I was told at the time that dropping tables
> > while other users were connected was the probable cause, and that this
> > was a bug in Firebird that had been around for a long time. I think
> > this would be for Firebird 2.5.
>
> What bug? Is it registered at the tracker? Do you have a test case?
>

I've submitted a bug report now:
http://tracker.firebirdsql.org/browse/CORE-6073

Regards,
Kjell








Re: [firebird-support] Re: Drop table safe when other users are active?

2019-05-27 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]


Den 2019-05-26 kl. 13:28, skrev hv...@users.sourceforge.net
[firebird-support]:
>
> > A few years ago we suffered Firebird corruption and got it fixed with
> > the help of IB Surgeon. I was told at the time that dropping tables
> > while other users were connected was the probable cause, and that this
> > was a bug in Firebird that had been around for a long time. I think
> > this would be for Firebird 2.5.
>
> What bug? Is it registered at the tracker? Do you have a test case?
>
> > Now I'm wondering if this is a bug that is known to have been fixed
> > since then? In other words: Is it now (version 3.0.4) safe to drop
> > tables when the database is in use by multiple users?
>
> I physically can't answer if some unknown bug is fixed or not. There
> were a lot of changes and refactorings during the FB3 release cycle.
> It is impossible to give any kind of correct answer to such a question,
> sorry.
>
> Regards,
> Vlad
>

I realize that it's difficult. I was under the impression that this is a
well-known problem, since it appears to be well known to the guys at
IBSurgeon. Apparently not. It's probably highly intermittent, considering
I have dropped a lot of tables in non-exclusive mode through the years
before I got a corruption and learned about this problem. A test case?
Yeah, well, I could try... But I think it would be better if the dev team
were to communicate with those who know the problem a lot better, i.e.
please talk to IBSurgeon.

Regards,
Kjell








Re: [firebird-support] Drop table safe when other users are active?

2019-05-25 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
I see no one replied. OK.

1. Alexey Kovyazin at IBSurgeon confirms that the problem still exists.
He's the one who fixed my corruption before. He should know...

2. He also says that there's a "rule" that DDL should always be executed
in an exclusive connection. Where is this rule documented, if it exists?

It would seem odd to ignore the drop table problem and also not document
that it exists and that DDL should be executed in an exclusive connection.
In fact, if DDL is supposed to be executed only in an exclusive connection,
the engine should refuse to execute DDL in any non-exclusive connection
and return a suitable error.

If there's already an issue in the tracker about this, I'd appreciate a
pointer to it, so I would at least be able to follow it.

Mvh,
Kjell

Den 2019-05-24 kl. 11:28, skrev Kjell Rilbe
kjell.ri...@marknadsinformation.se [firebird-support]:
>
> Hi,
>
> A few years ago we suffered Firebird corruption and got it fixed with
> the help of IB Surgeon. I was told at the time that dropping tables
> while other users were connected was the probable cause, and that this
> was a bug in Firebird that had been around for a long time. I think this
> would be for Firebird 2.5.
>
> Now I'm wondering if this is a bug that is known to have been fixed
> since then? In other words: Is it now (version 3.0.4) safe to drop
> tables when the database is in use by multiple users?
>
> Mvh,
> Kjell






[firebird-support] Drop table safe when other users are active?

2019-05-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

A few years ago we suffered Firebird corruption and got it fixed with
the help of IB Surgeon. I was told at the time that dropping tables
while other users were connected was the probable cause, and that this
was a bug in Firebird that had been around for a long time. I think this
would be for Firebird 2.5.

Now I'm wondering if this is a bug that is known to have been fixed
since then? In other words: Is it now (version 3.0.4) safe to drop
tables when the database is in use by multiple users?

Mvh,
Kjell






Re: [firebird-support] Consecutive values from different fields

2019-01-23 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2019-01-23 kl. 11:53, skrev 'Autoneer' myauton...@gmail.com
[firebird-support]:
>
> Good day all
>
> I have a question, if you can help me PLEASE.
>
> Using a Firebird SQL query, is it possible to count the consecutive
> number of matching values in different fields? I need to determine the
> LAST consecutive days a driver has been working.
>
> i.e. my table:
>
> DPID  | WEEKDATE   | DRIVERNR | DRIVER     | D1      | D2      | D3      | D4      | D5      | D6      | D7
> 83145 | 12.11.2018 | 697      | JOHN SMITH | DRIVING |         |         |         |         |         |
> 83290 | 19.11.2018 | 697      | JOHN SMITH | DRIVING |         | LEAVE   | LEAVE   | LEAVE   | LEAVE   | LEAVE
> 83435 | 26.11.2018 | 697      | JOHN SMITH | DRIVING |         |         |         |         |         |
> 84160 | 31.12.2018 | 697      | JOHN SMITH |         | DRIVING | DRIVING | DRIVING | DRIVING | DRIVING | DRIVING
> 84305 | 07.01.2019 | 697      | JOHN SMITH | DRIVING | AWOL    |         |         |         |         |
> 84450 | 14.01.2019 | 697      | JOHN SMITH | DRIVING |         |         |         |         |         | DRIVING
> 84595 | 21.01.2019 | 697      | JOHN SMITH | DRIVING | DRIVING | DRIVING | DRIVING |         |         |
>
> Using the data above my result should be 5.

A recursive query that first selects the last day and then keeps selecting
the day before, counting, until a non-driving day is found?
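
A rough, untested sketch of that idea (table and column names are guesses
from the posted layout, and WEEKDATE is assumed to be the date of D1):
unpivot the seven day columns into one row per date, then walk backwards
from the most recent driving day. For the sample data this should yield 5.

```sql
with recursive
days (daydate, status) as (
    -- unpivot D1..D7 into (date, status) rows for one driver
    select weekdate,     d1 from roster where drivernr = 697
    union all
    select weekdate + 1, d2 from roster where drivernr = 697
    union all
    select weekdate + 2, d3 from roster where drivernr = 697
    union all
    select weekdate + 3, d4 from roster where drivernr = 697
    union all
    select weekdate + 4, d5 from roster where drivernr = 697
    union all
    select weekdate + 5, d6 from roster where drivernr = 697
    union all
    select weekdate + 6, d7 from roster where drivernr = 697
),
streak (daydate, n) as (
    -- anchor: the most recent driving day on record
    select max(daydate), 1 from days where status = 'DRIVING'
    union all
    -- step: extend one day backwards while still driving
    select s.daydate - 1, s.n + 1
    from streak s
    join days d on d.daydate = s.daydate - 1
               and d.status = 'DRIVING'
)
select max(n) as last_consecutive_days from streak;
```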



Regards,
Kjell








[firebird-support] Firebird vs. PostgreSQL

2018-11-06 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

We've been using Firebird "forever". For upcoming projects, and also as a
possible replacement in existing ones, we're considering PostgreSQL.

What would you say are the pros and cons of Firebird vs. PostgreSQL?

I'm aware that the answers may very well differ depending on the application
and usage patterns, but I would like to start with considerations in general
terms, e.g. framework support (e.g. Entity Framework), admin tool availability
and usability (Windows and possibly Mac), query performance in general,
resource load, SQL standard compliance, stability, development progress (new
features, bug fixes, ...) etc.

Mvh,
Kjell


Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se<http://www.marknadsinformation.se/>
08-514 905 90

[Företagskontakt.se]<http://www.f%C3%B6retagskontakt.se/>
[Personkontakt.se]<http://www.personkontakt.se/>





Re: [firebird-support] ISQL connects with any credentials?

2018-09-18 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2018-09-17 kl. 16:04, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:
>
> I just noticed that on my server I can connect to ANY database using ANY
> credentials, even with user names that don't exist. And for existing
> user name, it doesn't matter what password I specify - I can still
> connect and select from e.g. rdb$relations.
>
> Why does this happen?
>

It has been pointed out to me that it's because isql uses an embedded
connection when the database is specified without a host, and in that
situation no authentication occurs. Problem solved.

Regards,
Kjell





[firebird-support] ISQL connects with any credentials?

2018-09-17 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
I just noticed that on my server I can connect to ANY database using ANY 
credentials, even with user names that don't exist. And for existing 
user name, it doesn't matter what password I specify - I can still 
connect and select from e.g. rdb$relations.

Why does this happen?

Mvh,
Kjell



Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se/>
08-514 905 90

Företagskontakt.se <http://www.f%C3%B6retagskontakt.se/>
Personkontakt.se <http://www.personkontakt.se/>





[firebird-support] Transliteration problem with table locks?

2018-09-15 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

Seems this message wasn't posted properly yesterday. Retrying.

I've been using Firebird for a long time, successfully using quoted
identifiers with Swedish characters åäö in them. I am at version 3.0.3
(on Windows Server 2016, 64 bit).

I tried this code in C# using FirebirdClient 6.3.0.0:

Dictionary<string, FbTransactionBehavior> locks =
    new Dictionary<string, FbTransactionBehavior>() {
    { "KörningInstans", FbTransactionBehavior.Protected |
        FbTransactionBehavior.LockWrite },
    { "Körning", FbTransactionBehavior.Protected |
        FbTransactionBehavior.LockWrite }
}; // Note "ö" in the table names above.
FbTransaction trans = connection.BeginTransaction(new 
FbTransactionOptions() {
    LockTables = locks,
    TransactionBehavior = FbTransactionBehavior.Consistency | 
FbTransactionBehavior.Write
}); // Exception!

The BeginTransaction call fails with an exception "arithmetic exception, 
numeric overflow, or string truncation
Cannot transliterate character between character sets". Error code in 
the exception is 335544321. SQLSTATE 22000. It contains 3 errors:
1. type 1, error code 335544321, no message.
2. type 1, error code 335544565, no message.
3. type 0, error code 335544321, message "arithmetic exception, numeric 
overflow, or string truncation\r\nCannot transliterate character between 
character sets".

The database is created like this in isql:
create database 'KorningarDev' user DEV password '***' page_size 4096 
set names 'UTF8' default character set UTF8 collation UNICODE;

And the connection string looks like this:
Server=xxx.xxx.xxx.xxx;Port=3050;Database=KorningarDev;Charset=UTF8;User=DEV;Password=***

So, I see no reason why there would be any transliteration problems: 
Windows unicode -> UTF8, and then UTF8 all the way, right?

Is this a bug, and would that bug be most likely in .NET FirebirdClient 
or in the FB engine? (I.e. where should I report it?)

Regards,
Kjell Rilbe



[firebird-support] Transliteration problem with table locks?

2018-09-15 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

I've been using Firebird for a long time, using quoted identifiers
with Swedish characters åäö in them. I am now at version 3.0.3 (on
Windows Server 2016, 64 bit).

Now I tried this code in C# using FirebirdClient 6.3.0.0:

Dictionary<string, FbTransactionBehavior> locks =
    new Dictionary<string, FbTransactionBehavior>() {
   { "KörningInstans", FbTransactionBehavior.Protected |
        FbTransactionBehavior.LockWrite },
   { "Körning", FbTransactionBehavior.Protected |
        FbTransactionBehavior.LockWrite }
};
FbTransaction trans = connection.BeginTransaction(new 
FbTransactionOptions() {
   LockTables = locks,
   TransactionBehavior = FbTransactionBehavior.Consistency | 
FbTransactionBehavior.Write
});

The BeginTransaction call fails with an exception "arithmetic exception, 
numeric overflow, or string truncation
Cannot transliterate character between character sets". Error code in 
the exception is 335544321. SQLSTATE 22000. It contains 3 errors:
1. type 1, error code 335544321, no message.
2. type 1, error code 335544565, no message.
3. type 0, error code 335544321, message "arithmetic exception, numeric 
overflow, or string truncation\r\nCannot transliterate character between 
character sets".

The database is created like this in isql:
create database 'KorningarDev' user DEV password '***' page_size 4096 
set names 'UTF8' default character set UTF8 collation UNICODE;

And the connection string looks like this:
Server=xxx.xxx.xxx.xxx;Port=3050;Database=KorningarDev;Charset=UTF8;User=DEV;Password=***

So, I see no reason why there would be any transliteration problems:
Windows unicode -> UTF8, and then UTF8 all the way, right?

Is this a bug, and would that bug be most likely in .NET FirebirdClient 
or in the FB engine? (I.e. where should I report it?)

Regards,
Kjell Rilbe





Re: [firebird-support] CLI to firebird

2018-08-01 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2018-08-01 kl. 10:44, skrev Mark Rotteveel m...@lawinegevaar.nl 
[firebird-support]:
>
> On 2018-08-01 09:14, Kjell Rilbe kjell.ri...@marknadsinformation.se
> [firebird-support] wrote:
> > The benefits, as I see it, are the object trees, command history, data
> > grid, CSV export, log output with stats, syntax highlighting, etc. That
> > said, I wish they would bring FlameRobin up to fully support Firebird
> > 3.0 (and fix some other issues).
>
> Unfortunately, there is no magical "they" that will just do this. If you
> want and need it, then make time to contribute or find out if you can
> sponsor someone to do it for you.
>
> A lot of Firebird related projects are languishing because of lack of
> active contributors. If people want this to change, they need to step up
> and contribute.
>

Good point. I'll consider possible options.

Regards,
Kjell





Re: [firebird-support] CLI to firebird

2018-08-01 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2018-07-31 kl. 10:44, skrev trueriver river14ap...@gmail.com 
[firebird-support]:
> I am used to creating tables and users and even databases by typing 
> SQL into a command line client.
>
> Forgive me if I am missing some clue in the documentation, but there I 
> have not found anything about how to do that for Firebird 3.0
>
> FlameRobin composes create table SQL for you and you fill in the gaps, 
> but I want to just type it in at a prompt, as in Oracle, Mysql, 
> Mariadb and loads of other DBMSs I have used over the years
>
> I can see that FlameRobin is a gentler learning curve for newcomers 
> but when you know SQL (and can adapt to the inconsistent way it's 
> implemented) it feels so clunky to keep having to swap from mouse to 
> keyboard

ISQL has been mentioned and found, I know. I just wanted to mention that 
there's absolutely nothing in FlameRobin that would force you to let 
FlameRobin compose SQL for you. You just open an SQL query window and 
write your ad hoc SQL there.

I never use the SQL generation capabilities (except to extract DDL for 
existing objects on occasion). I always write my SQL manually, as you 
would in ISQL.

The benefits, as I see it, are the object trees, command history, data 
grid, CSV export, log output with stats, syntax highlighting, etc. That 
said, I wish they would bring FlameRobin up to fully support Firebird 
3.0 (and fix some other issues).

Regards,
Kjell Rilbe





Re: [firebird-support] Database file modified shortly after NBACKUP -L

2018-04-07 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
den 2018-04-07 10:37, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:
>
> den 2018-04-06 18:08, skrev hv...@users.sourceforge.net 
> [firebird-support]:
> >
> > > For now, I've added, between lock and copy, a dummy isql script that
> > > does a select and commits, and then a 1 hour delay before starting
> > copy.
> > > But I fear this won't guarantee that the flush happens before copy
> > starts.
> >
> > How often Firebird flushes database is ruled by two configuration
> > parameters:
> > MaxUnflushedWrites and MaxUnflushedWriteTime. Also, note - for database
> > with ForcedWrites = ON there is no explicit flush at all.
>
> I had the flush params commented out = set to defaults.
> Now trying with:
> MaxUnflushedWrites = 1000
> MaxUnflushedWriteTime = 300
>
> I assume that should imply that a delay of 300 seconds (5 min) after
> NBACKUP -L should be sufficient. Will try with ten minutes.
>

Didn't work. I still got a modified file error on copy.

Now trying with these settings
MaxUnflushedWrites = 100
MaxUnflushedWriteTime = 60

And also 100 select transactions in an isql script between the lock and the
copy. If this works as expected, I would assume that each transaction
results in at least one disk write (header page?), which should force a
flush. But perhaps the MaxUnflushedWrites param doesn't mean the number of
write operations to any page, but rather the number of pages that have been
written to? In that case I'm not sure my dummy SQL will have any effect.

> I'm reluctant to use a script/mode/setting that essentially ignores the
> flush and copies anyway. I assume this would imply a risk of a corrupt
> copy. If not, please explain why.
>

Anyone?

Regards,
Kjell


Re: [firebird-support] Re: Database file modified shortly after NBACKUP -L

2018-04-07 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
den 2018-04-06 18:08, skrev hv...@users.sourceforge.net [firebird-support]:
>
> > Thanks! As a workaround, I attempted gfix -write sync, but alas, it 
> will
> > work only if no other attachments. We start the copy at midnight, when
> > it's likely there won't be other connections, but can't be guaranteed.
> >
> > For now, I've added, between lock and copy, a dummy isql script that
> > does a select and commits, and then a 1 hour delay before starting 
> copy.
> > But I fear this won't guarantee that the flush happens before copy 
> starts.
>
> > While waiting for a fix(?) I'd appreciate other suggestions on how to
> > force that flush to happen sooner.
>
> How often Firebird flushes database is ruled by two configuration 
> parameters:
> MaxUnflushedWrites and MaxUnflushedWriteTime. Also, note - for database
> with ForcedWrites = ON there is no explicit flush at all.
>
> I've made a quick look at FastCopy source code and found that if 
> "Nonstop"
> checkbox is checked - it will ignore changed timestamp of source file 
> (while
> still should react on real copy errors).
>
> Or, you may want to disable updating of "modify file" timestamp at OS 
> level -
> this is often used at Windows servers to reduce IO load.
>
> Hope it helps,
> Vlad
>

Many thanks for your efforts. I have force writes off (gfix -write 
async). I realize that forced writes on might solve the problem, but I'm 
not sure we will get acceptable performance in that mode. Might give it 
a try though.

I had the flush params commented out = set to defaults.
Now trying with:
MaxUnflushedWrites = 1000
MaxUnflushedWriteTime = 300

I assume that should imply that a delay of 300 seconds (5 min) after 
NBACKUP -L should be sufficient. Will try with ten minutes.

I'm reluctant to use a script/mode/setting that essentially ignores the 
flush and copies anyway. I assume this would imply a risk of a corrupt 
copy. If not, please explain why.

Regards,
Kjell


Re: [firebird-support] Re: Database file modified shortly after NBACKUP -L

2018-04-06 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
den 2018-04-06 10:53, skrev hv...@users.sourceforge.net [firebird-support]:
>
> > After testing a bit more, I notice that the locked database file has 
> its
>
> > timestamp updated when firebird.exe does a "FlushBuffersFile" 
> operation,
> > which can occur several minutes after the NBACKUP -L operation.
> >
> > Isn't that a bug? Everything should be flushed to file immediately when
> > (before?) the database file is locked, shouldn't it?
>
>   Very interesting, thanks for investigation.
>
>   When the engine flushes the page cache, it calls FlushFileBuffers (on
> Windows) for the database file(s), shadow file(s) (if present) and the
> physical backup difference (delta) file (if the database is in stalled or
> merge mode). Note, the database files are flushed regardless of the
> backup state. It was considered a no-op if there were no writes to the
> database file - and, of course, in stalled mode there are no writes to
> the database file. You may verify it using the same ProcMon tool. But!
> FlushFileBuffers can write file metadata too, and it seems we are dealing
> with exactly this case (and yes - it is not exactly a no-op, as we see now).
>
>   Now we should decide what to do with it.
>
> Regards,
> Vlad

Thanks! As a workaround, I attempted gfix -write sync, but alas, it will
work only if there are no other attachments. We start the copy at midnight,
when it's likely there won't be other connections, but that can't be
guaranteed.

For now, I've added, between lock and copy, a dummy isql script that 
does a select and commits, and then a 1 hour delay before starting copy. 
But I fear this won't guarantee that the flush happens before copy starts.

While waiting for a fix(?) I'd appreciate other suggestions on how to
force that flush to happen sooner.

Regards,
Kjell



Re: [firebird-support] Re: Database file modified shortly after NBACKUP -L

2018-04-06 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Den 2018-04-05 09:13, skrev hv...@users.sourceforge.net [firebird-support]:
>
> ---In firebird-support@yahoogroups.com,  wrote :
>
> ...
> >
> > It seems like the original database file is touched about a half to one
> > minute after being locked with NBACKUP -L.
> >
> > Why is the locked database file touched?
> >
> > I'm not sure if it's actually modified, but the timestamp is 
> certainly updated.
>
>   Firebird itself doesn't update file timestamp directly.
> But filesystem often does it in lazy\background mode (for performance 
> reason).
>
>   You may know for sure using ProcessMonitor tool, if you wish

After testing a bit more, I notice that the locked database file has its 
timestamp updated when firebird.exe does a "FlushBuffersFile" operation, 
which can occur several minutes after the NBACKUP -L operation.

Isn't that a bug? Everything should be flushed to file immediately when 
(before?) the database file is locked, shouldn't it?

I'd appreciate some feedback on this.

Regards,
Kjell



Re: [firebird-support] Re: Database file modified shortly after NBACKUP -L

2018-04-06 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
den 2018-04-06 07:56, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:

> den 2018-04-05 09:13, skrev hv...@users.sourceforge.net 
> [firebird-support]:
> >
> > ---In firebird-support@yahoogroups.com, <kjell.rilbe@...> wrote :
> >
> > ...
> > >
> > > It seems like the original database file is touched about a half 
> to one
> > > minute after being locked with NBACKUP -L.
> > >
> > > Why is the locked database file touched?
> > >
> > > I'm not sure if it's actually modified, but the timestamp is
> > certainly updated.
> >
> >   Firebird itself doesn't update file timestamp directly.
> > But filesystem often does it in lazy\background mode (for performance
> > reason).
> >
> >   You may know for sure using ProcessMonitor tool, if you wish
>
> Oh, sorry Vlad, I didn't see your reply at first. My message thread view
> was apparently corrupt in my e-mail client. Fixed that now.
>
> Sounds like a reasonable explanation. I've inserted a five minute delay
> between lock and copy, but last night's run failed anyway. Guess I'll
> have to investigate a bit more. Thanks for the suggestion with procmon.
>

Process Monitor reveals that the locked database file receives a new
timestamp on the first write by the firebird.exe process after the NBACKUP
-L, excepting writes that occur immediately after the lock.

I will try to insert some dummy isql command into the script to see if I 
can force this to happen before the copy starts.

Kjell



Re: [firebird-support] Re: Database file modified shortly after NBACKUP -L

2018-04-05 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
den 2018-04-05 09:13, skrev hv...@users.sourceforge.net [firebird-support]:
>
> ---In firebird-support@yahoogroups.com,  wrote :
>
> ...
> >
> > It seems like the original database file is touched about a half to one
> > minute after being locked with NBACKUP -L.
> >
> > Why is the locked database file touched?
> >
> > I'm not sure if it's actually modified, but the timestamp is 
> certainly updated.
>
>   Firebird itself doesn't update file timestamp directly.
> But filesystem often does it in lazy\background mode (for performance 
> reason).
>
>   You may know for sure using ProcessMonitor tool, if you wish

Oh, sorry Vlad, I didn't see your reply at first. My message thread view
was apparently corrupt in my e-mail client. Fixed that now.

Sounds like a reasonable explanation. I've inserted a five minute delay 
between lock and copy, but last night's run failed anyway. Guess I'll 
have to investigate a bit more. Thanks for the suggestion with procmon.

Regards,
Kjell



Re: [firebird-support] Database file modified shortly after NBACKUP -L

2018-04-05 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
No clues?

Repeating the question in short form:
Why would it be that the database file is being touched about ½-1 minute 
AFTER being locked with NBACKUP -L?

I can see the timestamp of the file being updated.

Regards,
Kjell
-- 

Kjell Rilbe
Telefon: 0733-44 24 64
Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

den 2018-04-05 09:03, skrev Kjell Rilbe 
kjell.ri...@marknadsinformation.se [firebird-support]:
>
> Hi,
>
> I've been using FastCopy to make backup copies of a ~180 Gbyte Firebird
> database, under NBACKUP-L state.
>
> This has been working fine for years with Firebird 2.5 and an older
> version of FastCopy.
>
> After migrating the database to a new server and upgrading Firebird to
> 3.0 and FastCopy to latest version 3.41, I'm experiencing a problem that
> I find a bit strange.
>
> The backup script does this:
> 1. NBACKUP -L
> 2. FastCopy.
> 3. NBACKUP -N on original database file.
> 4. NBACKUP -F on copy.
>
> What happens is that FastCopy detects that the source file is modified
> during the copy operation and bails out.
>
> It seems like the original database file is touched about a half to one
> minute after being locked with NBACKUP -L.
>
> Why is the locked database file touched?
>
> I'm not sure if it's actually modified, but the timestamp is certainly
> updated.
>
> I am also not sure if this is new for Firebird 3, or if it happened with
> 2.5 too, but the old version of FastCopy failed to do the modification
> check.
>
> Regards,
> Kjell
> -- 
>
> Marknadsinformation logotyp
>
> Kjell Rilbe
> Telefon: 0733-44 24 64
>
> Marknadsinformation i Sverige AB
> Ulvsundavägen 106C
> 168 67 Bromma
> www.marknadsinformation.se <http://www.marknadsinformation.se>
> 08-514 905 90
>
> Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
> Personkontakt.se <http://personkontakt.se>
>
> 




[firebird-support] Database file modified shortly after NBACKUP -L

2018-04-05 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

I've been using FastCopy to make backup copies of a ~180 Gbyte Firebird 
database, under NBACKUP-L state.

This has been working fine for years with Firebird 2.5 and an older 
version of FastCopy.

After migrating the database to a new server and upgrading Firebird to 
3.0 and FastCopy to latest version 3.41, I'm experiencing a problem that 
I find a bit strange.

The backup script does this:
1. NBACKUP -L
2. FastCopy.
3. NBACKUP -N on original database file.
4. NBACKUP -F on copy.
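The four steps above can be sketched as a driver script. This is a hypothetical outline only; the paths, the copy-tool invocation, and the use of Python are placeholders, not the actual setup described here:

```python
# Hypothetical sketch of the nbackup lock/copy/unlock/fixup cycle.
# Paths and the copy tool are placeholders.
import subprocess

DB = r"D:\Data\mydb.fdb"        # assumed database path
COPY = r"E:\Backup\mydb.fdb"    # assumed destination path


def backup_cycle(db, copy, dry_run=False):
    cmds = [
        ["nbackup", "-L", db],    # 1. lock: new writes go to the delta file
        ["fastcopy", db, copy],   # 2. copy the locked main file (any copier)
        ["nbackup", "-N", db],    # 3. unlock: delta is merged back
        ["nbackup", "-F", copy],  # 4. fixup the copy so it can be attached
    ]
    if dry_run:
        return cmds               # just report what would run
    for cmd in cmds:
        subprocess.run(cmd, check=True)  # stop the cycle on any failure
    return cmds
```

`check=True` matters here: if the copy step fails, the script stops before unlocking, so the problem is visible instead of silently producing an unusable copy.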

What happens is that FastCopy detects that the source file is modified 
during the copy operation and bails out.

It seems like the original database file is touched about a half to one 
minute after being locked with NBACKUP -L.

Why is the locked database file touched?

I'm not sure if it's actually modified, but the timestamp is certainly 
updated.

I am also not sure if this is new for Firebird 3, or if it happened with 
2.5 too, but the old version of FastCopy failed to do the modification 
check.

Regards,
Kjell
-- 

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>




[firebird-support] Page size vs. index depth?

2018-03-18 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi!

I'm in the process of migrating a database from Firebird 2.5 to 3.0. 
Since this requires a backup restore cycle, I thought I'd take the 
opportunity to investigate an increased page size to achieve lower index 
depths.

The database is ~170 Gbyte and a couple of tables contain ~300 million 
records. Those tables do reach a high index depth unfortunately.

Now, I'm wondering if it's worthwhile to increase the page size from 8k 
to 16k.

The index depth distribution with 8k page size is:
Depth 1: 353 indices
Depth 2: 533 indices
Depth 3: 146 indices
Depth 4: 8 indices

And for 16k page size (after backup/restore to FB 3):
Depth 1: 383 indices
Depth 2: 586 indices
Depth 3: 67 indices
Depth 4: 1 index

There is an improvement obviously. But I assume a larger page size has 
other drawbacks?
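For what it's worth, the depth counts above are roughly what a simple fanout model predicts. A sketch; the keys-per-page figures are assumptions, since the real fanout depends on key size and prefix compression:

```python
def estimated_depth(n_records, keys_per_page):
    # Rough B-tree depth: each extra level multiplies capacity by the fanout.
    depth, capacity = 1, keys_per_page
    while capacity < n_records:
        capacity *= keys_per_page
        depth += 1
    return depth

# ~300 million keys: depth 4 at an assumed fanout of 400 (8k pages),
# depth 3 at an assumed fanout of 800 (16k pages) -- consistent with
# the distributions listed above.
```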

So, would you recommend 8k or 16k in this case? Why? What would you need 
to know about my usage pattern to be able to make a recommendation?

The large tables are mostly written to but they are also heavily queried 
sometimes, and always using some index (too slow otherwise).

Regards,
Kjell
--

Kjell Rilbe
Telefon: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90




[firebird-support] Firebird SQL 3.0 login from legacy tools?

2018-02-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Hi,

New Windows 2016 server with Firebird 3 SuperServer x64 on port 3050.

I'm trying to connect to this server from two legacy tools: Sinática 
Monitor and FlameRobin. I can successfully connect with Upscene Database 
Workbench and Firebird Maestro, but the old tools fail, saying "Your 
user name and password are not defined. Ask your database administrator 
to setup a Firebird login."

I have set firebird.conf as follows, and thought that would be sufficient:

AuthServer = Legacy_Auth, Srp, Win_Sspi
AuthClient = Legacy_Auth, Srp

For each connection attempt, firebird.log shows a message like this:

INET/inet_error: read errno = 10054, client host = HOSTNAME, address = 
127.0.0.1/49652, user = LoggedInWindowsUser

What could be amiss? I've tried "reading around" but found nothing. :-)

Feel free to reply here, as an alternative (I think the forum should 
move there, actually):
https://superuser.com/questions/1297659/firebird-sql-3-0-login-from-legacy-tools
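One thing worth checking, offered as a guess since the cause isn't confirmed here, is wire encryption and the user manager: Legacy_Auth clients cannot negotiate wire encryption, and the login must exist in the legacy security manager. A hypothetical firebird.conf sketch:

```
# Hypothetical settings for accepting legacy clients on Firebird 3;
# verify against your actual configuration before use.
AuthServer = Legacy_Auth, Srp, Win_Sspi
AuthClient = Legacy_Auth, Srp, Win_Sspi
WireCrypt = Enabled     # legacy clients cannot encrypt the wire, so not Required
UserManager = Legacy_UserManager, Srp
```

After changing UserManager, the login may also need to be (re)created under the legacy manager (e.g. with gsec) so that a legacy-style password hash exists.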

Regards,
Kjell
-- 

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>




[firebird-support] Image based VM backup and Firebird?

2017-10-05 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]













Re: [firebird-support] embedded database with Chinese path

2017-10-03 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Dimitry Sibiryakov s...@ibphoenix.com [firebird-support] wrote:
> 03.10.2017 2:25, Hamish Moffatt ham...@risingsoftware.com [firebird-support] 
> wrote:
>> Any ideas?
> Give up. Firebird does not support unicode characters in database name 
> and path. Only ANSI.

Forgive me for butting in, but... Perhaps it's high time to change FB 
then? Unicode is "everywhere" these days, and FB should keep up.

Just my opinion of course.

Kjell






++

Visit http://www.firebirdsql.org and click the Documentation item
on the main (top) menu.  Try FAQ and other links from the left-side menu there.

Also search the knowledgebases at http://www.ibphoenix.com/resources/documents/ 

++


Yahoo Groups Links

<*> To visit your group on the web, go to:
http://groups.yahoo.com/group/firebird-support/

<*> Your email settings:
Individual Email | Traditional

<*> To change settings online go to:
http://groups.yahoo.com/group/firebird-support/join
(Yahoo! ID required)

<*> To change settings via email:
firebird-support-dig...@yahoogroups.com 
firebird-support-fullfeatu...@yahoogroups.com

<*> To unsubscribe from this group, send an email to:
firebird-support-unsubscr...@yahoogroups.com

<*> Your use of Yahoo Groups is subject to:
https://info.yahoo.com/legal/us/yahoo/utos/terms/



Re: [firebird-support] Automation tools?

2017-08-16 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]













Re: [firebird-support] Automation tools?

2017-08-15 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Dimitry Sibiryakov s...@ibphoenix.com [firebird-support] wrote:
> 15.08.2017 15:36, Kjell Rilbe kjell.ri...@marknadsinformation.se 
> [firebird-support] wrote:
>> We're in need of a tool where we can author sequences of SQL queries against 
>> FB and non-DB
>> commands like file operations etc. I'd like to call them "batch runs" to 
>> perform recurring
>> tasks.
>>
>> Do you know of such tools or what that kind of tool would be called in 
>> general?
> This kind of tool used to be called "batch files" or "shell scripts". You can 
> use isql as
> well because it has a "shell" command, but the isql language lacks execution 
> flow control directives.

Nononono... That's much too crude.

"Integration tools" perhaps?

Regards,
Kjell









[firebird-support] Automation tools?

2017-08-15 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]

Hi,

We're in need of a tool where we can author sequences of SQL queries 
against FB and non-DB commands like file operations etc. I'd like to 
call them "batch runs" to perform recurring tasks.


Do you know of such tools or what that kind of tool would be called in 
general?


We need to be able to schedule some runs. Others need to be triggered by 
e.g. the appearance of a file in a local or FTP/SFTP folder. Yet others 
need to be executed ad hoc, i.e. started manually.


We also need to be able to enter conditions/checks between each step, to 
check results, e.g. reasonable number of affected records, and bail out 
or at least send a warning e-mail message to me if something goes wrong 
or looks suspicious.


Regards,
Kjell
--

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>




Re: [firebird-support] Full join and indices

2017-01-24 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]













[firebird-support] Full join and indices

2017-01-23 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]

Hi,

I just needed to do a full outer join between two very simple tables G 
and N, each with a single-column PK of identical type char(12). I 
noticed that the plan produced was:


PLAN JOIN (N NATURAL, G NATURAL)

I stopped the query after ~5 minutes. Then I thought I could do this 
calculation in two steps, so I did a left outer join from G to N first, 
then a left outer join from N to G (equivalent to a right outer join 
from G to N). These queries were executed with these plans:


PLAN JOIN (G NATURAL, N INDEX (PK_N))
PLAN JOIN (N NATURAL, G INDEX (PK_G))

They both executed in less than half a minute each.

Now, I was wondering why the full outer join can't do a left join using 
the right table's PK and then a right join using the other table's PK. 
This is essentially what I did with the latter two queries.


The full join operation seems to do something that's a lot less 
efficient (whatever it is that it does).


I'm still on FB 2.5.2, so... has this been improved in 3.0?

Thanks,
Kjell
--

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>



Re: [firebird-support] Re: ODP: FB 3 issues with String from FB 2.54

2016-05-25 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
Fabian Ernesto Chocron fabia...@itbizolutions.com.au [firebird-support] 
wrote:

After playing with the problem for a while I noticed the definition of
the field was Varchar (8192) charset ASCII collate ASCII, so that was
probably causing the problem. I changed the domain settings to be
Charset NONE, Collate NONE, and that fixed the issue! 
Probably better to use UTF-8, but perhaps make that switch as a separate 
operation before or after the upgrade to FB 3.


Kjell
--

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>



Re: [firebird-support] FB 3 issues with String from FB 2.54

2016-05-25 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]

fabia...@itbizolutions.com.au [firebird-support] wrote:


I have been trying to migrate from FB 2.54 to FB 3 for a few weeks, 
and after hitting a string-related error for some time I have got to 
the point where I do understand the issue, but I don't know how to 
solve it. The issue is pretty simple: the FB 2.54 DB contains a few 
characters that are not allowed in the FB 3 database. One example of 
a character causing an error during the restore was "Mcgarrity’s" 
(note the ’), as it appears to be outside the scope of the FB3 string 
domain. I have tried creating a new FB3 DB with many different 
charsets but none works. The other string causing issues is for 
example "΢ÈíÑźÚ"; I have many records with this type of string 
because the DB contains raw emails received by the system, stored in 
varchars, and apparently some emails contain very weird characters. 
All were handled by FB 2.54 but FB3 rejects the records. I have been 
able to isolate all records with issues using IBExpert's table data 
comparer function, as it created a script with all records from all 
tables from FB 2.54, and when running the script against FB 3.0 it 
singles out all the offending records.



Can anyone advise what options I have available to force FB3.0 to 
accept any stuff into string fields?




The first thing that comes to mind is that ' needs to be duplicated 
inside string literals, so if your method of upgrading is backing up to 
insert statements, it might be the case that the backup file is 
incorrectly formatted in this regard. I.e. to insert Mcgarrity's you 
would insert this string literal: 'Mcgarrity''s' (note the duplicated ').


Apart from that I would recommend UTF-8 for "everything", i.e. old table 
charset, connection charset when performing backup, backup file format, 
connection charset for importing the backup file into FB3, charset for 
all char/varchar columns in new database. Not sure all those "places" 
are even applicable.


Are you using gbak for backup from 2.5 and restore into 3.0, or how are 
you actually trying to perform the upgrade?


Regards,
Kjell
--

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>



Re: [firebird-support] Doubt in Firebird

2016-05-25 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]

lionel napoleon lionelnapol...@yahoo.com [firebird-support] wrote:
I use Firebird 2.5.6. I just wanted to know if there is a method by 
which I can speed up an insert query (insert about 5 records) into a 
table. Right now it is taking about 40 minutes. My front end is 
VBScript and I use a prepared statement. Any help would be 
appreciated.


Hi,

In addition to Helen's advice, if it's still slow, I recommend using an 
external table. Write the data to a fixed-width format text file, create 
an external table referencing that file, then insert the records using an 
insert into ... select from statement.


I've found this approach beats any client insert iteration approach by 
several orders of magnitude.


For example, if you need to insert records with a string, a number and a 
date, create a text file like this (preferably with a fixed-width 
character encoding like ISO-8859-1 or Win1252):


This is some string with trailing space padding   \t0044838\t2016-05-25\r\n
Here's another string \t545\t2016-01-31\r\n

Save this file as e.g. C:\Dummy.txt and then create external table:

create table "ExDummy" external file 'D:\DataDIA\Data\Firebird\Dummy.txt' (
  "TheString" char(50) character set win1252, -- Adjust to your max string length.
  "Tab1" char(1) character set win1252,
  "TheNumber" char(7) character set win1252, -- Adjust according to your required number of digits.
  "Tab2" char(1) character set win1252,
  "TheDate" char(10) character set win1252, -- Use a date format that Firebird can cast from easily.
  "CRLF" char(2) character set win1252
);

You can skip the \t separators and even skip line breaks if you wish. In 
that case, remove them from the external table create statement accordingly.


Next, insert the data into your actual data table:

insert into "MyTable" ("S", "N", "D")
select trim(cast("TheString" as varchar(50))),
  cast("TheNumber" as int),
  cast("TheDate" as date)
from "ExDummy";

You can then drop the external table, or keep it permanently for 
repeated batch imports. (Note: dropping tables should be done with 
exclusive database access to avoid risk of corruption).


It is possible to create external file in Firebird-native format rather 
than text format, but that's so complicated that it's probably not worth 
the effort.


This method will normally import several thousand records per second, 
but speed obviously depends on factors that Helen mentioned, e.g. 
indexes, index tree depth, page size, ...


Regards,
Kjell

--

Marknadsinformation logotyp

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

Företagskontakt.se <http://xn--fretagskontakt-vpb.se>
Personkontakt.se <http://personkontakt.se>



Re: [firebird-support] How To Speed Up Update SQL

2015-11-03 Thread Kjell Rilbe kjell.ri...@marknadsinformation.se [firebird-support]
I've found that an execute block often helps when an update has 
complex join-like filter conditions.

Something like this (untested):

set term #;
execute block as
declare PK yourprimarykeycolumntype;
begin
  for
    select MB.PRIMARYKEYCOLUMN
    from MY_BOOK MB
    inner join MY_BOOK_HEADER MBH
      on MBH.PK_BOOK_HEADER = MB.SK_BOOK_HEADER
    where Upper(Trim(MBH.HEADER_BOOK_CODE)) = Upper(Trim('127518010109038'))
      and MBH.FK_BOOK_GROUP = '{79B79C33-CE56-4084-912B-6DD9F70B3DC4}'
    into :PK
  do
    update MY_BOOK MB
    set BOOK_NAME = 'Book Name1',
      BOOK_DESCRIPTION = 'Book Desfcription1'
    where MB.PRIMARYKEYCOLUMN = :PK;
end#
set term ;#

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

Marknadsinformation i Sverige AB
Ulvsundavägen 106C
168 67 Bromma
www.marknadsinformation.se <http://www.marknadsinformation.se>
08-514 905 90

On 2015-10-14 11:21, Vishal Tiwari vishuals...@yahoo.co.in 
[firebird-support] wrote:
> Hi All,
>
> I have 480 Update SQL statements which I am executing using ADScript 
> component in Delphi 2010.
>
> Update SQL updates data in only one table called MY_BOOK, but there is 
> a join with MY_BOOK_HEADER table. Below is the Update SQL, which is 
> taking 81 seconds to execute all 480 Update SQL statements. Any idea 
> how could I improve the speed of these 480 Update SQL statements ?
>
> Update SQL:
>
> Update MY_BOOK MB SET BOOK_NAME = 'Book Name1' , BOOK_DESCRIPTION = 
> 'Book Desfcription1'
> Where MB.BOOK_TYPE = 4 And
> Exists (Select 1 from MY_BOOK_HEADER MBH
> Where
> Upper(Trim(MBH.HEADER_BOOK_CODE)) = Upper(Trim('127518010109038'))
> And MBH.FK_BOOK_GROUP =  '{79B79C33-CE56-4084-912B-6DD9F70B3DC4}'
> And MBH.PK_BOOK_HEADER = MB.SK_BOOK_HEADER
> );
>
>
> Thanks In Advance.
>
> With Best Regards.
>
> Vishal
> 




[firebird-support] Charset of external table filename?

2015-09-24 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
Hi,

I'm trying to create an external table to import a list of id numbers 
like this:

create table "ExLöpnr" external file 'D:\Löpnr.txt' (
   "Löpnr" char(7) character set win1252,
   "CRLF" char(2) character set win1252
);

But when I try to select from it, Firebird creates a new empty file 
named "LöPNR.TXT". This is what "LÖPNR.TXT" looks like if it's UTF8 
encoded but displayed as if it were WIN1252/ANSI/LATIN1 encoded.

Checking the system tables, the filename looks correct, but I assume the 
system tables encode that string in UTF8 (or UNICODE-FSS which would be 
identical in this case if I'm not mistaken).

It seems like Firebird does not transliterate this string correctly when 
passing it to the operating system's file operations.
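That kind of mangling can be illustrated: if a UTF-8 encoded name is handed unchanged to an ANSI file API, the bytes get reinterpreted. A sketch (the filename here is just an example):

```python
# Illustration of UTF-8 bytes being read back as Win1252 (cp1252).
name = "Löpnr.txt"  # example filename with one non-ASCII character
mangled = name.encode("utf-8").decode("cp1252")
# The UTF-8 bytes of "ö" (0xC3 0xB6) become two Win1252 characters:
assert mangled == "LÃ¶pnr.txt"
```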

This is in FB version 2.5.2.26540.

Known problem/bug? Or am I missing something? Fixed in later version of 
2.5 or in version 3? Any workaround except renaming the file to plain ascii?

Regards,
Kjell

-- 
--
Kjell Rilbe
DataDIA AB
E-post: kjell.ri...@datadia.se
Telefon: 08-761 06 55
Mobil: 0733-44 24 64



Re: Odp: [firebird-support] Performance diff between insert...select and for select ... do?

2015-04-14 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
liviusliv...@poczta.onet.pl [firebird-support] wrote:

 Why not merge?

Yes, indeed. Might have been worth a shot, but I suspect it would have 
been no more than marginally better. Even merge does have to do an index 
lookup for each record to check if one exists or not, doesn't it? Or 
does it ignore the index lookup and simply add a new record version 
regardless?

Furthermore, even the straight insert (with PK and indices 
dropped/inactive) was orders of magnitude slower than the proprietary 
text file I resorted to (no surprise really).

Regards,
Kjell




Re: [firebird-support] Performance diff between insert...select and for select ... do?

2015-04-13 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
Kjell Rilbe kjell.ri...@datadia.se [firebird-support] wrote:

 Hi,

 I'm writing a utility that will need to do two things for each record in
 an external table and for this purpose I use a for select ... do
 construct in an execute block. I do it this way because external tables
 can't be indexed and I will scan the entire external table anyway.

 The two operations are:
 1. Update one existing record in the target table.
 2. Insert new record in the same target table.

 In steady state the target table will contain about 20 million records
 and the external table will contain about 10 thousand records.

 But the first time I run this, the target table will be empty and the
 external table will contain about 18 million records. The update will
 never find a record to update during this first execution.

 Would I lose a lot of hours if I use the same execute block/for select
 construct the first time? The alternative would be to do a regular
 insert into target table select from externaltable the first time.


As a follow-up to this question, my tests with real data showed that the 
execute block was *very* much slower than a simple insert from the 
external table for the initial import into an empty target table. I 
think it was something like 10 minutes vs. 10 hours, give or take...

I also noted that in steady state, the Firebird solution as a whole was 
very slow. The thing is that for each run of this utility, I would need 
to visit close to 100 % of the records. I ended up tossing Firebird 
altogether and implemented a simple text file format instead, and a 
load-modify-write pattern, making good use of abundant RAM. Went from 
5-10 hours to about 5 minutes.

SQL databases are good for many things, but in this case, they suck (I'm 
assuming Firebird is not significantly worse than any other brand in 
this case).

Regards,
Kjell


Re: [firebird-support] Extract week of the year (within year)

2015-03-24 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
Svein Erling Tysvær 
svein.erling.tysv...@kreftregisteret.no [firebird-support] wrote:

 is possible to get week of the year within year?

 I see that built in functions only support ISO-8601 (or i do not know 
 some parameter)

 Actually, Karol, “week within year” is not the week. 1 January 
 could be week 53 and 31 December could be week 1. If you want to 
 consider 1 January always as part of week 1 (even if that day could be 
 the only day of week 1), I’d suggest you try something like:

 select (extract(yearday from cast('1.1.2014' as date))-extract(weekday 
 from cast('1.1.2014' as date)))/7+1

 from rdb$database


 If you want the first seven days of a week to always be week 1 (and 
 changing which day a week start with for each year), you could drop 
 the second EXTRACT.


Also, note that the rules for which week should be week number 1 are 
slightly different for different locales. As far as I know, there are 
two different rules, but there could be more. (What about locales where 
Sunday is considered first day of the week, as opposed to Monday?)

So, if you want to calculate the official week number according to a 
specific locale's calendar rules, you'll have to look this up 
thoroughly. I think Excel's week number function gets it right now 
(which hasn't always been the case).
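Svein's quoted formula can be sanity-checked against ISO week numbering. A small sketch mirroring the truncating integer division (the dates are just examples):

```python
from datetime import date


def week_within_year(d):
    # Mirrors the quoted formula: (yearday - weekday) / 7 + 1, with the
    # truncating integer division Firebird uses. yearday is 0-based;
    # weekday follows Firebird's Sunday=0 convention.
    yearday = d.timetuple().tm_yday - 1
    weekday = (d.weekday() + 1) % 7  # Python Monday=0 -> Firebird Sunday=0
    return int((yearday - weekday) / 7) + 1


# The two numberings disagree at year borders: 2014-12-31 falls in
# ISO week 1 (of 2015), but is week 52 "within" 2014 by this formula.
```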

Kjell


Re: [firebird-support] Performance diff between insert...select and for select ... do?

2015-02-15 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
Mark Rotteveel m...@lawinegevaar.nl [firebird-support] wrote:

 On 13-2-2015 14:55, Kjell Rilbe kjell.ri...@datadia.se
 [firebird-support] wrote:
  I'm writing a utility that will need to do two things for each record in
  an external table and for this purpose I use a for select ... do
  construct in an execute block. I do it this way because external tables
  can't be indexed and I will scan the entire external table anyway.
 
  The two operations are:
  1. Update one existing record in the target table.
  2. Insert new record in the same target table.

 Maybe you can use MERGE, it uses a source, a target and can
 conditionally insert, update (and delete) record? I assume your target
 table isn't external and is indexed.


Thanks, that might be worth looking into. I suspect it won't work since 
I will need to do BOTH update AND insert for each external record, but I 
haven't checked what merge can do yet... :-)

Kjell
 -- 
 --
 Kjell Rilbe
 DataDIA AB
 E-post: kjell.ri...@datadia.se
 Telefon: 08-761 06 55
 Mobil: 0733-44 24 64


[firebird-support] Performance diff between insert...select and for select ... do?

2015-02-13 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
Hi,

I'm writing a utility that will need to do two things for each record in 
an external table and for this purpose I use a for select ... do 
construct in an execute block. I do it this way because external tables 
can't be indexed and I will scan the entire external table anyway.

The two operations are:
1. Update one existing record in the target table.
2. Insert new record in the same target table.

In steady state the target table will contain about 20 million records 
and the external table will contain about 10 thousand records.

But the first time I run this, the target table will be empty and the 
external table will contain about 18 million records. The update will 
never find a record to update during this first execution.

Would I lose a lot of hours if I use the same execute block/for select 
construct the first time? The alternative would be to do a regular 
insert into target table select from externaltable the first time. My 
guess is that the unnecessary update will take increasingly long to 
execute as the target table grows towards 18 million records, wasting a 
lot of time for me, even if a suitable index is present.

Just to give a measure of the system's performance as it is, a simple 
update on one column for 18 million records in the target table 
currently seems to take about 6 hours. But I could drop a couple of 
indices and perhaps I should bump up the page size from 4k to 8k or 16k.

Regards,
Kjell
-- 

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

DataDIA AB
Ulvsundavägen 106
168 67 Bromma
www.datadia.se http://www.datadia.se
08-514 905 90

Företagskontakt.se http://xn--fretagskontakt-vpb.se - urval av företag 
och kontaktinformation
Personkontakt.se http://personkontakt.se - urval av hushållsadresser




Re: [firebird-support] Backups and the Delta File

2014-10-14 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]

russ...@smxi.com [firebird-support] wrote:
We have a large database that runs a full nbackup every night and an 
hourly incremental backup. Is the full backup meant to clear the delta 
file? This doesn't seem to get any smaller.


Also if I restore a gbak backup, what should I do with the delta file?



If you have an active delta file, you have the database in nbackup 
locked state, either because an nbackup operation terminated abnormally, 
or you manually locked the database and neglected to unlock it. The 
delta file is always merged into the live database immediately upon unlock.


On restore, there should be no delta file involved at all. There can be 
multiple nbackup files, one for each level you've backed up, but that's a 
different thing.


Regards,
Kjell


Re: [firebird-support] loading data from a .csv file?

2014-05-24 Thread Kjell Rilbe kjell.ri...@datadia.se [firebird-support]
On 2014-05-18 21:23, Alan Shank li...@notoriousshankbrothers.com 
[firebird-support] wrote:

 Is there some way to load data into a table in Firebird, like LOAD DATA
 INFILE... in Mysql? I have comma-delimited files from Mysql that I want
 to load into Firebird tables.

 I have Firebird2.5-super on Ubuntu 14.04, 64-bit.


Thomas mentioned external tables. I've tried a couple of other data 
pump methods, both free and commercial import utilities as well as my 
own C# import loop (Windows and .Net in that case obviously). They all 
use a loop of insert statements, one for each .csv file row. While it 
works fine, it's generally very slow.

I've now switched completely to external tables, or in some cases (not 
so many records) generating insert statements and executing them in one 
query batch. It does require that you modify the file, but it's 
extremely fast!

I usually convert the file to fixed-width format, with or without 
separators, matching an FB table with char columns (with an extra dummy 
column for each separator, if present, and an extra dummy column for the 
line break). It's easiest with a character set that has a fixed number of 
bytes per character, like ISO 8859-1, ASCII or WIN1252, because FB 
expects each column to occupy the same number of bytes, not characters. I.e. 
with UTF8 it expects a char(10) column to occupy 40 bytes in the file. 
The problem with that is that in a text editor such a string will appear 
as anything from 10 to 40 characters, depending on which characters you 
enter: ten 4-byte characters take the same byte length as forty 1-byte 
characters. So such a file will not be fixed-width if you count 
characters. If you convert the format with some utility, you have to pad 
the strings to 40 bytes, taking care to really count bytes, not 
characters. With fixed-width character sets it's much easier.

Then use an external table like this (if separators are present and line 
breaks in your system are a single character):

create table "External" external file 'yourfilepath' (
   DataCol1 char(10) character set ISO8859_1,
   Sep1 char(1) character set ISO8859_1,
   DataCol2 char(25) character set ISO8859_1,
   Sep2 char(1) character set ISO8859_1,
   DataCol3 char(7) character set ISO8859_1,
   Linebreak char(1) character set ISO8859_1
);

Then import into your real table, making sure the char columns are 
appropriately converted:

insert into MyTable (
   SomeString, -- varchar(10)
   SomeDate, -- timestamp
   SmallInteger -- int
)
select trim(cast(DataCol1 as varchar(10))) SomeString,
   cast(DataCol2 as timestamp) SomeDate,
   cast(DataCol3 as int) SmallInteger
from "External";

Good luck!
Kjell



Re: [firebird-support] NBackup Levels

2014-05-07 Thread Kjell Rilbe
Steffen Heil (Mailinglisten) writes:

 Hi

   I could simply create backups with incrementing levels, move the
   backup to the other server(s) and apply them there (that database is
   offline).
   However, I suspect there is a limit for the number of backup levels a
   database can have.

  You don't need a lot of backup levels to do what you want. For 
 example, do a level 0 backup once a month/year/whatever, then do a
  level 1 backup every weekend, a level 2 backup every day, and if 
 required a level 3 backup every hour/whatever.

 For another type of service I already have a backup script that 
 creates 5 levels of backups:
 Level 0 at the first Sunday of every quarter.
 Level 1 at the first Sunday every month.
 Level 2 at every Sunday.
 Level 3 every day.
 Level 4 every hour.

 But for this project I want more than hourly consistency. I am 
 targeting 5 minutes or even less. That could be done using:
 Level 5 every 5 minutes.

 However in this case there might be lots of days with nearly no 
 difference and then there may be some days with gigabytes of changes.
 Using an approach as above would mean to copy all these changes up to 
 23 times...
 I would really like to prevent that kind of extra traffic AND more 
 important that delay in synchronization.
 Moreover, the servers hard drives may be rather slow and the database 
 may grow up to 200 GB.
 (During operation there are relatively few reads and only some writes, 
 the database is idle 99% of the time, so for operation the slow
 io system is not a problem.)


I understand your problem. Would it be possible to measure the size of 
each level 5 backup, and if it's over a certain threshold use a higher 
level for the rest of that day to keep from copying the same large set 
of data over and over?

In essence, you would create a script that backs up every five minutes, 
and if the last backup size was below a certain threshold use the same 
level as before, otherwise bump it up. Reset to level 3 each night.

The benefit is that you won't create a large number of levels unless 
it's actually needed to reduce backup traffic.
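A minimal sketch of that decision rule (Python, illustration only; the threshold and level numbers are assumptions, not tested values):

```python
def next_level(prev_level: int, last_backup_bytes: int,
               threshold_bytes: int = 500 * 1024 * 1024,
               max_level: int = 10) -> int:
    """Pick the nbackup level for the next run: reuse the previous
    level while its backups stay small, and bump it once a backup
    grows past the threshold so the next run only copies pages
    changed since that large backup."""
    if last_backup_bytes > threshold_bytes and prev_level < max_level:
        return prev_level + 1
    return prev_level

# A quiet interval keeps the level; a gigabyte-sized backup bumps it:
print(next_level(5, 10 * 1024 * 1024))    # 5
print(next_level(5, 1024 * 1024 * 1024))  # 6
```

Each night the scheduler would then reset to level 3, as described above.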

Regards,
Kjell







Re: Odp: [firebird-support] Bug? Come back

2014-05-07 Thread Kjell Rilbe
liviusliv...@poczta.onet.pl writes:

 Hi,

 I'm coming back to this question. Is this really correct? After 
 discussion with the team I now have doubts.

 We have here parenthesis. Any expression in parenthesis should be 
 first calculated
 Then this is not test
 NOT NULL -  NULL only
 Not ( NULL - NULL)


What do you mean by that last line, Not (NULL - NULL)?

Think of NULL as UNKNOWN, and add the rule that when a value is forced to 
true/false, NULL evaluates to false; then the logic becomes rather intuitive:

1. if NULL then 1 else 2 end returns 2.

2. if not NULL then 1 else 2 end also returns 2, because not NULL is NULL 
and the expression degenerates to the same as example 1 above.
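The same three-valued logic can be modelled in a few lines (Python illustration, not Firebird code; None stands in for SQL NULL):

```python
def sql_not(value):
    """SQL NOT: unknown stays unknown, i.e. NOT NULL is NULL."""
    return None if value is None else not value

def as_condition(value):
    """Forcing a value to true/false: NULL (unknown) acts as false."""
    return value is True

# Example 1: if NULL then 1 else 2 end  ->  2
print(1 if as_condition(None) else 2)           # 2
# Example 2: if not NULL then 1 else 2 end  ->  2, since not NULL is NULL
print(1 if as_condition(sql_not(None)) else 2)  # 2
```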

Regards,
Kjell



Re: [firebird-support] NBackup Levels

2014-05-06 Thread Kjell Rilbe
On 2014-05-05 22:27, Steffen Heil (Mailinglisten) wrote:

 I know that nbackup keeps track of which blocks in the database 
 changed and which did not so that backups contain only the blocks that 
 changes since the last backup of the next lower level.
 I want to use that feature to do syncs between servers.

 I could simply create backups with incrementing levels, move the 
 backup to the other server(s) and apply them there (that database is 
 offline).
 However, I suspect there is a limit for the number of backup levels a 
 database can have.


You don't need a lot of backup levels to do what you want. For example, 
do a level 0 backup once a month/year/whatever, then do a level 1 backup 
every weekend, a level 2 backup every day, and if required a level 3 
backup every hour/whatever.

Copy the backup files to the remote server, and use nbackup there, to 
restore the most current backup chain. Granted, the restore operations 
on the other server(s) will take a bit of time, but if those servers are 
offline, that would not really be a problem. In fact, I see no real 
point in doing those restores until you actually need to put the backup 
online. Just save the most recent chain of backup files, and you're 
ready to do a restore when the need arises.

 Another strategy would be to lock the database, take a copy, unlock it 
 and then use external software to do the diff.
 However that would mean that the database would have to be copied very 
 often and that a lot of extra space was needed.
 That would mean a lot of disc io.

 That could be reduced if I locked the database, synced it using some 
 software such as rsync, and then unlocked it again.
 I would not need extra copies any more, but a) the database would need 
 to be offline much longer (remote sync needs more time) and b) it 
 would have to be done for every replica again.


This is the approach I use for my 100+ Gbyte database on modest hardware 
(Windows server with 7200 rpm scsi traditional non-ssd harddisks). I use 
FastCopy (http://ipmsg.org/tools/fastcopy.html.en) to copy the database 
files while it's locked and then rsync the copy to our backup server. 
The FastCopy takes about 45-60 minutes and the rsync usually takes a 
couple of hours.

There is NO downtime! The application is online when the database is 
locked and of course also during rsync.

 So here are my questions:

 a) How many levels of backups can a database have (nbackup, not gbak)?


Sorry, I don't know. But I don't think you need more than 3-5 levels.

 b) How bad is it (in terms of performance) to lock a database for a 
 longer period of time?


I've seen no problems whatsoever with the scheme I described, in which 
the database is locked for about an hour. I don't see how/why you would 
need any more than that.

 c) Is there any way to get the information used by nbackup without 
 actually creating a new backup level? (Just use this information to 
 use external block copying?)


Sorry, I don't know.

 d) Is there any other good way to create a live exact replica?


Shadow file feature? Third party replication tools?

Regards,
Kjell




[firebird-support] Transliteration of identifiers, query buffers etc?

2014-04-23 Thread Kjell Rilbe
Hi,

I've been wondering a bit about how Firebird handles transliteration of 
various parts of a query, in particular regarding (quoted) identifiers.

My situation is that I have a database with default charset UTF8 and all 
char/varchar columns use this charset. I also always use UTF8 as 
connection charset.

I would assume that this means that Firebird expects to receive query 
strings encoded in UTF8, including identifiers and string literals that 
appear in the query.

At the same time, I know that the identifiers are stored in columns with 
charset Unicode_FSS, which as far as I understand is identical to UTF8 
except that 1) it will accept malformed strings and 2) it will allocate a 
buffer that fits 4 x maxlength bytes and will accept any string that 
fits in that buffer, even if the number of Unicode characters > maxlength.

Are there any other differences between Unicode_FSS and UTF8?
Is every valid UTF8 string of at most maxlength characters identical to 
the corresponding Unicode_FSS string?

Also, string literals can be specified to be in some other charset than 
UTF8 - does this mean that the query buffer sent to the server actually 
contains segments with different encodings? Or is the query buffer 
always 100% encoded in the connection charset?

I tried this with UTF8 connection charset:
select _win1252 'asdfö' Test
from rdb$database

It returns this:
asdfö

So, it seems the string literal is encoded in UTF8 and sent that way to 
the server, which interprets it as encoded in WIN1252. So, it seems the 
buffer itself is 100% UTF8. Right?

What about identifiers? Assume I have an identifier "Åäöü€ÉÈÏÿñ". Is 
there any circumstance in which Firebird could get into trouble with this, 
assuming I always quote it and always use UTF8 connection charset?






Re: [firebird-support] Online DDL

2014-03-23 Thread Kjell Rilbe
On 2014-03-23 09:37, Jesus Garcia wrote:

 I had a database corruption recently and was told that it may have 
 been caused by dropping tables while the database was online with 
 other users connected. Apparently there is some kind of race 
 condition in the page allocation that can cause the same page to be 
 allocated to different things when doing that. It ONLY concerns 
 dropping tables.
 What version of Firebird are you using? If it is one of the latest, 
 it would be helpful to open a ticket in the tracker so it can be solved.
 I think it is a serious bug to corrupt a database, and more serious if 
 it happens while executing DDL statements.


Running FB 2.5, but I thought it was a known issue. At least the guy at 
IBSurgeon gave me that impression, although he didn't state it 
explicitly. That's how it was repaired, by the way. The affected table 
was not necessary for me, so he simply used their tools to create 
empty metadata for it, losing all old/existing data content. After that I 
did a gbak backup/restore to make sure everything was nice and clean.

Kjell



Re: [firebird-support] Re: nbackup strategy advice

2014-03-16 Thread Kjell Rilbe
On 2014-03-16 07:27, hugo.lar...@yahoo.com wrote:

 Hello Paul.
 Thanks for your advice.
 My strategy was based on a backup data once approach, but this would 
 produce too many files, I now realize.

 But I still want to avoid to backup the entire database (N=0) on 
 regular basis.
 What's your opinion about this approach?
 First backup N=0
 Every day N=1 for a month (replace file each time)
 Increment N next month.

 This would produce 12 files every year.


That wouldn't work. I don't think you understand how Nbackup works.

N=0 backs up everything.
N=1 backs up every page that's changed since last N=0.
N=2 backs up every page that's changed since last N=1.
...
N=k backs up every page that's changed since last N=k-1.

So, with your suggested scheme, first month would go like this:

Day 1: backup entire database (N=0).
Day 2: backup pages changed since day 1 (N=1).
Day 3: backup pages changed since day 1 (N=1). Will include all pages 
copied day 2 + additional pages changed since day 2.
Day 4: backup pages changed since day 1 (N=1). Will include all pages 
copied day 3 + additional pages changed since day 3.
...
Day 28/30/31: backup pages changed since day 1 (N=1). Will include all 
pages copied the day before + additional pages changed the last day.

If in month 2 you increment N to 2, you will get this:
Day 1: backup pages changed since last day of month 1 (N=2).
Day 2: backup pages changed since last day of month 1 (N=2).
...
Day 28/30/31: backup pages changed since last day of month 1 (N=2).

At the end of the year, your actual final backup sequence will be:
N=0: Initial backup first day of the year.
N=1: Last backup of month 1.
N=2: Last backup of month 2.
...
N=12: Last backup of month 12.

It would probably make more sense to do it like this:

First day of year: N=0, initial complete backup.
First day of each month: N=1, will contain all pages changed since first 
day of year.
First day of each week: N=2, will contain all pages changed since first 
day of month.
Each day: N=3, will contain all pages changed since first day of week.

If two such days coincide, you still need to run both colliding levels 
(lower N first, higher N directly afterwards), or the sequence will be 
broken next day.

This way, you will have a daily backup that's complete, consisting of 
four parts (N=0, 1, 2 and 3).

In general, Nbackup should be run with each value of N at a regular 
interval, with tighter intervals for higher values of N. Incrementing N 
over time as you suggested is not suitable.
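The interval scheme above can be sketched as a level picker (Python illustration; treating Monday as the first day of the week is an assumption):

```python
import datetime

def levels_for(day: datetime.date) -> list:
    """Return the nbackup levels to run on a given day under the
    scheme above, lowest level first so the chain stays unbroken on
    days where several levels coincide."""
    if day.month == 1 and day.day == 1:
        start = 0            # first day of year
    elif day.day == 1:
        start = 1            # first day of month
    elif day.weekday() == 0:
        start = 2            # first day of week (Monday)
    else:
        start = 3            # any other day
    return list(range(start, 4))

print(levels_for(datetime.date(2014, 1, 1)))  # [0, 1, 2, 3]
print(levels_for(datetime.date(2014, 3, 1)))  # [1, 2, 3]
print(levels_for(datetime.date(2014, 3, 3)))  # [2, 3]  (a Monday)
print(levels_for(datetime.date(2014, 3, 5)))  # [3]
```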

Note that Nbackup has no way of detecting any database corruption, so 
if it happens it will go completely undetected. It might be a good idea 
to combine it with a regular gbak or gfix -v (or gfix -v -full), as 
often as is viable.

I might also mention the possibility to simply lock the database with 
Nbackup and copy the database file with rsync. This will probably give 
similar or better performance over a slow connection. That's what I do 
for our 200 Gbyte database, although that's on a single server with a 
backup volume attached via a high bandwidth network. :-)

Regards,
Kjell



Re: [firebird-support] Re: nbackup strategy advice

2014-03-16 Thread Kjell Rilbe
On 2014-03-16 11:58, Paul Vinkenoog wrote:

 Kjell Rilbe wrote:

  First day of year: N=0, initial complete backup.
  First day of each month: N=1, will contain all pages changed since first
  day of year.
  First day of each week: N=2, will contain all pages changed since first
  day of month.
  Each day: N=3, will contain all pages changed since first day of week.
 
  If two such days coincide, you still need to run both colliding levels
  (lower N first, higher N directly afterwards), or the sequence will be
  broken next day.

 That's not necessary, and the higher level backup will add nothing at
 that moment.

 Suppose you make a level-2 backup every Sunday. Then it makes sense to
 schedule the level-3 backups daily from Mon-Sat.

 If you want to make a point-in-time restore later, it will involve 4 files
 if that point in time is on Mon-Sat, and 3 files if it is on a Sunday.
 (And 2 files if it is the first day of the month.)

 All you have to do is determine the most recent backup before the chosen
 point in time. If that is a level N, you need N+1 files for the restore
 (levels 0-N, each one being the most recent file of that level before
 time 'T').


My idea was to avoid having to keep track of N for the last backup. The 
script & scheduling would be a bit simpler. And I do realize the N=1 
backup right after the N=0 backup will store nothing. It will, however, 
make it possible to do the N=2 backup the next day and have a complete 
N-chain. Skipping the N=1 backup and going right for N=2 won't work. Or 
am I missing something?
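Paul's point-in-time rule can be sketched like this (Python illustration; timestamps are arbitrary units): for each level, pick the most recent backup at or before the chosen time, walking up from level 0.

```python
def restore_chain(backups, t):
    """Given (level, timestamp) pairs for all backups taken so far,
    pick the most recent backup of each level at or before time t,
    walking up from level 0 until a level has no candidate.
    Returns the restore chain, level 0 first."""
    chain = []
    level = 0
    while True:
        candidates = [ts for lv, ts in backups if lv == level and ts <= t]
        if not candidates:
            break
        chain.append((level, max(candidates)))
        level += 1
    return chain

# Level 0 on day 1, level 1 on day 8, level 2 on day 15, level 3 daily:
backups = [(0, 1), (1, 8), (2, 15), (3, 16), (3, 17)]
print(restore_chain(backups, 17))  # [(0, 1), (1, 8), (2, 15), (3, 17)]
print(restore_chain(backups, 15))  # [(0, 1), (1, 8), (2, 15)]
```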

Kjell



Re: [firebird-support] Find all records on all tables that violate constraints?

2014-02-28 Thread Kjell Rilbe
On 2014-02-28 08:42, Svein Erling Tysvær wrote:

 Reinventing the wheel can sometimes be fun, Kjell. I would expect this 
 statement

 SELECT list('SELECT ''' || TRIM(RDB$RELATION_NAME) ||''' 
 RELATION_NAME, ''' || TRIM(RDB$FIELD_NAME) ||
 ''' FIELD_NAME, COUNT(*) FROM ' || TRIM(RDB$RELATION_NAME) ||
 ' WHERE ' || TRIM(RDB$FIELD_NAME) || ' IS NULL GROUP BY 1, 2 ', '
 UNION ')
 FROM RDB$RELATION_FIELDS
 WHERE RDB$NULL_FLAG = 1

 to return an SQL statement that you can execute to find which 
 combination of tables and fields contains NULL values (though it will 
 take time since it has to avoid using indexes, at least I don't think 
 you'll find NULLs if you use indexes).


Nice one! Thanks! After fixing some quotes (mixed case table names etc) 
it seems to work nicely. Got 910 cases... Will have to split them up I 
think - too many unions otherwise.

Regards,
Kjell



Re: [firebird-support] Find all records on all tables that violate constraints?

2014-02-28 Thread Kjell Rilbe
On 2014-02-28 09:41, Kjell Rilbe wrote:

 On 2014-02-28 08:42, Svein Erling Tysvær wrote:
 
  Reinventing the wheel can sometimes be fun, Kjell. I would expect this
  statement
 
  SELECT list('SELECT ''' || TRIM(RDB$RELATION_NAME) ||'''
  RELATION_NAME, ''' || TRIM(RDB$FIELD_NAME) ||
  ''' FIELD_NAME, COUNT(*) FROM ' || TRIM(RDB$RELATION_NAME) ||
  ' WHERE ' || TRIM(RDB$FIELD_NAME) || ' IS NULL GROUP BY 1, 2 ', '
  UNION ')
  FROM RDB$RELATION_FIELDS
  WHERE RDB$NULL_FLAG = 1
 
  to return an SQL statement that you can execute to find which
  combination of tables and fields contains NULL values (though it will
  take time since it has to avoid using indexes, at least I don't think
  you'll find NULLs if you use indexes).
 


This is what I ended up with, issuing one select per table to reduce 
table scans:

select 'select ''' || trim(RDB$RELATION_NAME) || ''' "Table", count(*) 
"Count" from "' || trim(RDB$RELATION_NAME) ||
'" where ' || list('"' || trim(RDB$FIELD_NAME) || '" is null', ' or ') || ';'
from RDB$RELATION_FIELDS
where RDB$NULL_FLAG = 1
group by trim(RDB$RELATION_NAME);

With over 200 tables I manually grouped them in groups of 25 with union 
and an outer select to only return rows with count > 0.

Thanks for the great starting point!
Kjell




[firebird-support] Change page size from 8k to 16k?

2014-02-27 Thread Kjell Rilbe
I'm going to do a gbak backup/restore on my 100 Gbyte database this 
weekend, and was wondering if perhaps I should bump up the page size 
from 8 kbyte to 16 kbyte.

What should I look for?

I'm running gstat -a -i right now. Is this the appropriate options to 
get the relevant figures?

There are two tables with roughly 200 million records, perhaps 1-2 
tables with about 20 million records. The rest should be in the range of 
100 thousand records or less.

Regards,
Kjell



[firebird-support] Find all records on all tables that violate constraints?

2014-02-27 Thread Kjell Rilbe
Hi,

It seems I've been a bit sloppy when introducing new constraints in my 
database. At least I found one table with a record that has null in a 
column that I've altered to not null.

So, is there any nice convenient way to find all such records in all 
tables? I'm thinking an execute statement that iterates system tables to 
find all not null columns and select where XXX is null on each such 
table and column...? I think that would suffice in this case, 
considering the set of constraints I have.

Regards,
Kjell


Re: [firebird-support] Find all records on all tables that violate constraints?

2014-02-27 Thread Kjell Rilbe
On 2014-02-28 00:58, Dmitry Kuzmenko wrote:

 Friday, February 28, 2014, 3:29:34 AM, you wrote:

 KR It seems I've been a bit sloppy when introducing new constraints in my
 KR database. At least I found one table with a record that has null in a
 KR column that I've altered to not null.

 KR So, is there any nice convenient way to find all such records in all
 KR tables?

 Well, I have an idea that can hugely speed up the whole database 
 scanning process - you should check only those tables 
 that have rdb$format > 1, because tables that have rdb$format = 1 
 were not altered since the last restore or their first creation.


Thanks Dmitry! I think I can figure it out either way (execute statement 
or C# app). I was trying to save some time by not reinventing the wheel. :-)

Regards,
Kjell



Re: [firebird-support] Change page size from 8k to 16k?

2014-02-27 Thread Kjell Rilbe
On 2014-02-28 00:36, Dmitry Kuzmenko wrote:

 Hello, Kjell!

 Friday, February 28, 2014, 3:23:34 AM, you wrote:

 KR I'm going to do a gbak backup/restore on my 100 Gbyte database this
 KR weekend, and was wondering if perhaps I should bump up the page size
 KR from 8 kbyte to 16 kbyte.

 KR What should I look for?

 KR I'm running gstat -a -i right now. Is this the appropriate options to
 KR get the relevant figures?

 No. The best way, always: minimum gstat -h, or maximum gstat -r.
 Only gstat -r shows record size, versions, etc.
 You should look at indices that have depth greater than 3 (equal to 4
 or more).


OK, I ran gstat -r and found a few indices with depth = 4, as follows:

---
Företag (146)
 Primary pointer page: 231, Index root page: 232
 Average record length: 292.38, total records: 2462168
 Average version length: 0.00, total versions: 0, max versions: 0
 Data pages: 101698, data page slots: 101698, average fill: 92%
 Fill distribution:
  0 - 19% = 0
 20 - 39% = 0
 40 - 59% = 1
 60 - 79% = 3
 80 - 99% = 101694

 -- Not searched very often. Inserts are not time critical. Probably 
not an issue.
 Index IX_Företag_NamnCI (22)
 Depth: 4, leaf buckets: 22405, nodes: 2462168
 Average data length: 34.17, total dup: 345654, max dup: 968
 Fill distribution:
  0 - 19% = 58
 20 - 39% = 15
 40 - 59% = 20446
 60 - 79% = 28
 80 - 99% = 1858

---
Uppgift (172)
 Primary pointer page: 284, Index root page: 285
 Average record length: 77.90, total records: 208909973
 Average version length: 0.00, total versions: 0, max versions: 0
 Data pages: 3049518, data page slots: 3049518, average fill: 80%
 Fill distribution:
  0 - 19% = 0
 20 - 39% = 0
 40 - 59% = 0
 60 - 79% = 2001636
 80 - 99% = 1047882

 -- Heavily used for both searches and inserts.
 -- A performance issue here would have an impact. Most critical for 
reading.
 Index IX_PK_Uppgift (0)
 Depth: 4, leaf buckets: 314034, nodes: 208909973
 Average data length: 5.01, total dup: 0, max dup: 0
 Fill distribution:
  0 - 19% = 47
 20 - 39% = 0
 40 - 59% = 5952
 60 - 79% = 2785
 80 - 99% = 305250

 -- Heavily used for both searches and inserts.
 -- A performance issue here would have an impact. Most critical for 
reading.
 Index IX_Uppgift_Hållare (2)
 Depth: 4, leaf buckets: 434584, nodes: 208909973
 Average data length: 3.95, total dup: 45804696, max dup: 998003
 Fill distribution:
  0 - 19% = 555
 20 - 39% = 0
 40 - 59% = 316053
 60 - 79% = 1476
 80 - 99% = 116500
---
Uppgiftshållare (175)
 Primary pointer page: 290, Index root page: 291
 Average record length: 42.23, total records: 166793365
 Average version length: 0.00, total versions: 0, max versions: 0
 Data pages: 1689968, data page slots: 1689968, average fill: 72%
 Fill distribution:
  0 - 19% = 1
 20 - 39% = 0
 40 - 59% = 0
 60 - 79% = 1689967
 80 - 99% = 0

 -- Heavily used for both searches and inserts.
 -- A performance issue here would have an impact. Most critical for 
reading.
 Index IX_PK_Uppgiftshållare (0)
 Depth: 4, leaf buckets: 247620, nodes: 166793365
 Average data length: 5.05, total dup: 0, max dup: 0
 Fill distribution:
  0 - 19% = 1
 20 - 39% = 0
 40 - 59% = 141
 60 - 79% = 0
 80 - 99% = 247478

 -- Heavily used for both searches and inserts.
 -- A performance issue here would have an impact. Most critical for 
reading.
 Index IX_Uppgiftshållare_HanDS2 (1)
 Depth: 4, leaf buckets: 155606, nodes: 166793365
 Average data length: 0.48, total dup: 151362720, max dup: 46
 Fill distribution:
  0 - 19% = 144
 20 - 39% = 1
 40 - 59% = 40447
 60 - 79% = 619
 80 - 99% = 114395

 -- Heavily used for both searches and inserts.
 -- A performance issue here would have an impact. Most critical for 
reading.
 Index UppgiftshållareUnika (3)
 Depth: 4, leaf buckets: 424641, nodes: 166793365
 Average data length: 10.97, total dup: 0, max dup: 0
 Fill distribution:
  0 - 19% = 1669
 20 - 39% = 0
 40 - 59% = 99013
 60 - 79% = 2327
 80 - 99% = 321632

So, based on that gstat info and my comments for each, would you 
recommend to try 16 kbyte page size?

 Anyway, you can always compare performance. 100 GB is not such a big
 database that it could not be set back to 8k at an appropriate time if
 you find that something becomes slower at 16k.


I know it's not to be regarded as very large compared to many others, 
but it's large enough to require careful choices to get adequate 

Re: RES: [firebird-support] Wrong page type, how to examine/fix?

2014-02-26 Thread Kjell Rilbe
On 2014-02-24 22:02, Fabiano - Desenvolvimento SCI wrote:

 The right thing to do to avoid this kind of corruption is to make daily 
 backups and ensure you can restore them!

 If you can't back up or restore the entire database (including 
 reactivating all foreign keys) you must stop immediately and try to 
 fix the database.

 I think you ran this database corrupted for a long time, or without a 
 backup/restore for a long time.

 Fabiano.


Well, I usually run a full file copy using NBackup lock/unlock each 
night, and also a full gbak each weekend to ensure that the DB is OK. I 
also use a copy of the last NBackup copy as a test database for test & 
development, which essentially means I know the copy works. This time, 
however, I had some disturbances, so the last copy was a few days old, 
which means I'd rather pay for help to get it fixed than revert to the 
backup. Also, the gbak run was missing for the last two weekends. The 
corruption has probably been present for about 2 weeks.

In the long run, I intend to switch to a more powerful server so that I 
can run more regular backups and keep a longer backlog of backups 
without various kinds of performance problems...

Thanks all the same!
Kjell



[firebird-support] Wrong page type, how to examine/fix?

2014-02-24 Thread Kjell Rilbe
On a manual sweep I got this error message:

database file appears corrupt (D:\DATADIA\DATA\FIREBIRD\BACKUP\DIAMONDS.FDB)
-wrong page type
-page 12301932 is of wrong type (expected 4, found 7)

How do I proceed to examine this error and how would I fix it?

What are page types 4 and 7 and what might have caused this? I've had no 
obvious malfunctions or problems.

The database is about 100 Gbyte, so I'd prefer not to waste time on 
sidetracks, so to say. :-)

Regards,
Kjell
-- 

Kjell Rilbe
Telefon: 08-761 06 55
Mobil: 0733-44 24 64

DataDIA AB
Ulvsundavägen 106
168 67 Bromma
www.datadia.se http://www.datadia.se
08-514 905 90

Företagskontakt.se http://xn--fretagskontakt-vpb.se - selection of companies 
and contact information
Personkontakt.se http://personkontakt.se - selection of household addresses




Re: [firebird-support] Wrong page type, how to examine/fix?

2014-02-24 Thread Kjell Rilbe
On 2014-02-24 13:57, Ann Harrison wrote:



  On Feb 24, 2014, at 7:24 AM, Kjell Rilbe kjell.ri...@datadia.se
 
  database file appears corrupt 
 (D:\DATADIA\DATA\FIREBIRD\BACKUP\DIAMONDS.FDB)
  -wrong page type
  -page 12301932 is of wrong type (expected 4, found 7)
 
 Alas, not a great situation. Page type 4 is a pointer page - an array 
 of data page numbers that's one level of indirection in resolving 
 record locations. Page type 7 is an index page. Fortunately, pointer 
 pages can be reconstructed from the rest of the database structure. I 
 think that the free version of IBFirstAid will tell you which table is 
 missing a pointer page. From there, the paid version should be able to 
 fix the problem.


If the errors are located in a specific index (or unique constraint), 
would it be a good idea to drop that index/constraint and try a regular 
gbak backup/restore cycle?

And if the errors are located in a table's data pages - drop the table 
and do gbak backup/restore?

I did get an indication that it might be in a table I can recreate by 
other means...

Regards,
Kjell



Re: [firebird-support] Wrong page type, how to examine/fix?

2014-02-24 Thread Kjell Rilbe
On 2014-02-24 18:31, Kjell Rilbe wrote:

 On 2014-02-24 13:57, Ann Harrison wrote:
 
 
 
   On Feb 24, 2014, at 7:24 AM, Kjell Rilbe kjell.ri...@datadia.se
  
   database file appears corrupt
  (D:\DATADIA\DATA\FIREBIRD\BACKUP\DIAMONDS.FDB)
   -wrong page type
   -page 12301932 is of wrong type (expected 4, found 7)
  
  Alas, not a great situation. Page type 4 is a pointer page - an array
  of data page numbers that's one level of indirection in resolving
  record locations. Page type 7 is an index page. Fortunately, pointer
  pages can be reconstructed from the rest of the database structure. I
  think that the free version of IBFirstAid will tell you which table is
  missing a pointer page. From there, the paid version should be able to
  fix the problem.
 

 If the errors are located in a specific index (or unique constraint),
 would it be a good idea to drop that index/constraint and try a regular
 gbak backup/restore cycle?

 And if the errors are located in a table's data pages - drop the table
 and do gbak backup/restore?

 I did get an indication that it might be in a table I can recreate by
 other means...


Tried to drop the entire table that I found has corruption, but as I 
suspected it didn't work. Pity. :-) I also see there's no way to get 
gbak to skip that table, which would probably have been useful. Or am I 
wrong?

Regards,
Kjell






Re: [firebird-support] Wrong page type, how to examine/fix?

2014-02-24 Thread Kjell Rilbe
On 2014-02-24 21:13, Jesus Garcia wrote:

 Hello, Kjell, do you know the source of the issue?

 Is it a reproducible error in the engine that has caused the corruption?


No idea, really. But I do know I did some sloppy "do before I think" ad 
hoc SQL work when I created the table that has the corruption, and I 
note that IBSurgeon writes in one of their docs that dropping tables 
while other transactions might be working (or trying to work) with them 
can confuse the engine and cause corruption. If that is the case (which 
I find a bit scary) it might be what happened to me (although I was 
alone, working with a couple of transactions fiddling with that table 
back and forth...).

Regards,
Kjell




Re: [firebird-support] Firebird and transactions question

2013-11-28 Thread Kjell Rilbe













Re: [firebird-support] Object in use... wtf???

2013-11-26 Thread Kjell Rilbe













Re: [firebird-support] Re: Object in use... wtf???

2013-11-26 Thread Kjell Rilbe













Re: [firebird-support] Object in use... wtf???

2013-11-26 Thread Kjell Rilbe












