[firebird-support] what can cause gfix -sweep to never finish ?

2012-03-05 Thread Vander Clock Stephane
Hello,

gfix -sweep has been running for 7 days now. When I look in the 
monitoring tables, I see that the gfix process is doing something (it is 
fetching pages), so it is not frozen, but it is far too long! The database 
is 200 GB, with multi-user activity on it.

Thanks in advance
stéphane



Re: [firebird-support] Re: Optimize this simple SQL

2012-02-01 Thread Vander Clock Stephane
Hello philippe,

It's not so easy to guess; the plan randomly uses one of them...

PLAN (HASH INDEX (HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX,
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX,
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX,
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX,
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_X1_Y5_IDX))

Replacing the last HASH_X1_Y5_IDX with HASH_X1_Y2_IDX or HASH_X1_Y3_IDX or
... gives roughly the same speed.

Even replacing it with all the indexes at the same time (HASH_X1_Y1_IDX,
HASH_X1_Y2_IDX, HASH_X1_Y3_IDX, HASH_X1_Y4_IDX, HASH_X1_Y5_IDX),
the speed stays about the same...

Also, I don't really understand why I need an index on
CREATE ASC INDEX HASH_IDX ON HASH (X1_Y1, X1_Y2, X1_Y3, X1_Y4, X1_Y5);

instead of just CREATE ASC INDEX HASH_IDX ON HASH (X1_Y1, X1_Y2);

(i.e., I am trying to understand
http://explainextended.com/2010/05/19/things-sql-needs-determining-range-cardinality/ )

Thanks for everything!
stéphane

On 2/1/2012 3:10 PM, philippe makowski wrote:
>
> Vander Clock Stephane [2012-01-31 16:30] :
> > Now with this strategy I don't remember: do I need to keep the
> > indexes
> > CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
> > CREATE INDEX HASH_X1_Y2_IDX ON HASH (X1_Y2);
> > CREATE INDEX HASH_X1_Y3_IDX ON HASH (X1_Y3);
> > CREATE INDEX HASH_X1_Y4_IDX ON HASH (X1_Y4);
> > CREATE INDEX HASH_X1_Y5_IDX ON HASH (X1_Y5);
> >
> > or can I remove them?
> watch the PLAN, and you'll have the answer I guess
>
> 





Re: [firebird-support] SQL Performance question

2012-02-01 Thread Vander Clock Stephane
It depends on where your filter will be the most selective,
i.e. on VareFrvStrNr or on Dato...

For example, if there are zillions of records with VareFrvStrNr = '090179',
then it is better to use the index on Dato and read the data in Dato order
until you see one row with VareFrvStrNr = '090179'.

On the other hand, if there are only a few records with
VareFrvStrNr = '090179', it is better for the engine
to retrieve ALL the records with VareFrvStrNr = '090179' and look among
them for the lowest Dato...

OK, Firebird has this great ability to use two indexes :) but that is not
always so good, and it is what it tries to do in your query.

The Firebird engine cannot always be clever enough to know which
strategy to use!

So in that case you can specify the plan in your query :)
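
For reference, a minimal sketch of forcing the plan in the MIN() query
(index name taken from Michael's message; untested, adjust to your schema):

Select Min(Dato) As MinimuSTG from Transaktioner
Where
VareFrvStrNr='090179'
AND Art=11
And Transaktioner.Afdeling_ID<>'9'
PLAN (TRANSAKTIONER INDEX (TRANS_VAREFRVSTRNR))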


On 2/1/2012 2:49 PM, Michael Vilhelmsen wrote:
>
> Hi
>
> I have a table containing some 33.879.139 records.
>
> If I do a simple select like this:
>
> Select Dato from Transaktioner
> Where
> VareFrvStrNr='090179'
> AND Art=11
> And Transaktioner.Afdeling_ID<>'9'
>
> it fetches 11 records within 0.3 seconds.
> It will use the index on VareFrvStrNr according to the plan.
> PLAN (TRANSAKTIONER INDEX (TRANS_VAREFRVSTRNR))
>
> If I instead use this SQL:
>
> Select Min(Dato)As MinimuSTG from Transaktioner
> Where
> VareFrvStrNr='090179'
> AND Art=11
> And Transaktioner.Afdeling_ID<>'9'
>
> It takes at least 3 seconds. Often more.
> Now the plan is like this:
> PLAN (TRANSAKTIONER ORDER TRANS_DATO INDEX (TRANS_VAREFRVSTRNR))
>
> The TRANS_DATO index has almost no duplicates.
> The TRANS_VAREFRVSTRNR can have some duplicates.
>
> Why is there such a big difference in the performance?
> can I do anything to optimize this?
>
> Michael
>
> 





Re: [firebird-support] Changing database default character set

2012-01-31 Thread Vander Clock Stephane
By the way, it could be good to add this feature to gbak, because if the 
number of bytes per character is the same,
it shouldn't be very hard to do... no?

On 2/1/2012 1:50 AM, Thomas Steinmaurer wrote:
>
> > I'm migrating from Firebird 2.0 to Firebird 2.5 ( both on Debian
> > Linux )
> > My databases were created with default character set NONE.
> > ¿ Is there a way to change this default when restoring ?
>
> No, sorry.
>
> Re-create the database via DDLs with the new character set and then use
> a tool to transfer the data. Depending on the targeted character set
> (especially multi-byte CS), you might run into issues when e.g. longish
> [VAR]CHAR fields are indexed.
>
> -- 
> With regards,
> Thomas Steinmaurer (^TS^)
> Firebird Technology Evangelist
>
> http://www.upscene.com/
> http://www.firebirdsql.org/en/firebird-foundation/
>
> 





Re: [firebird-support] Re: Stored Procedure question.

2012-01-31 Thread Vander Clock Stephane
You can also use UPDATE OR INSERT:

UPDATE OR INSERT INTO CLIENT (name, surname, address, SSN, total_bought)
VALUES (:Name, :SurName, :Address, :SSN, :Amount);
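
A variant with an explicit MATCHING clause, in case SSN is not the primary
key of CLIENT (a sketch):

UPDATE OR INSERT INTO CLIENT (name, surname, address, SSN, total_bought)
VALUES (:Name, :SurName, :Address, :SSN, :Amount)
MATCHING (SSN);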

On 2/1/2012 1:33 AM, Andrew wrote:
>
>
> Woody,
>
> THANKS A BUNCH!
>
> Andrew
>
> --- In firebird-support@yahoogroups.com 
> , "Woody"  
> wrote:
> >
> > From: "Andrew" 
> > Sent: Tuesday, January 31, 2012 3:04 PM
> > To:  >
> > Subject: [firebird-support] Stored Procedure question.
> >
> > > Hi all,
> > >
> > > I'm trying to do a stored procedure that does a select with 
> parameters and
> > > then, if records aren't found, do the insert and proceed with the DML.
> > >
> > > I have to implement this as a Stored procedure (third party app) 
> and I'm
> > > stumped. I tried to google it but there were a gazzilion examples 
> and none
> > > of them had this example.
> > >
> > > something like:
> > >
> > > Select name, surname, address from client where SSN = 1234
> > > if not found (or recordcount = 0, or empty = true) then
> > > insert (name, surname, address, SSN) in client ('john', 'doe', 'aa
> > > street', 1234);
> > >
> > > Edit client set total_bought = total_bought + 100.00;
> >
> > Try something like:
> >
> > if not exists(select 1 from client where SSN = :SSN) then
> > begin
> > insert into client (name, surname, address, SSN, total_bought)
> > values (:Name, :SurName, :Address, :SSN, :Amount);
> > end else
> > begin
> > update client set total_bought = total_bought + :Amount
> > where SSN = :SSN;
> > end;
> >
> >
> > Each of the variables with the colon ( : ) in front of them should 
> be passed
> > in as arguments. This is just one way to accomplish what you want. 
> I'm sure
> > you'll get several examples.
> >
> > HTH
> > Woody (TMW)
> >
>
> 





Re: [firebird-support] batch script

2012-01-31 Thread Vander Clock Stephane
"c:\Program files\Firebird\bin\isql" -user SYSDBA -password masterke 
-terminator ; -sqldialect 3 -charset ISO8859_1 -cache 1024 -input 
C:\SalesLabourBudget\spBudgetInputForFreshRevision.sql

in spBudgetInputForFreshRevision.sql

Don't forget to add at the beginning:

CONNECT 'WINREDATA.FDB';

Also check whether spBudgetInputForFreshRevision.sql is encoded in UTF-8 or
in ANSI: if it is UTF-8, the first 3 bytes may be a BOM, which can raise an
error.
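
For reference, a sketch of what the top of the script could look like (path
and credentials taken from the original message; SET TERM is only needed if
the script contains procedure bodies):

CONNECT 'D:\RiTEQFireBird\RELEASE2\TIMETEQ-FIREBIRD-TEST-RELEASE2.FDB'
  USER 'sysdba' PASSWORD 'masterke';
SET TERM ^ ;
/* stored procedure DDL goes here, each statement terminated with ^ */
SET TERM ; ^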

stephane

On 1/30/2012 3:09 PM, Mahesh Pratihari wrote:
>
> Hi All,
>
> Please guide me how to execute the batch script for a file, I have
> written in this way, could you let me know where is my fault, it showing
> error
>
> @echo off
>
> cls
>
> set SName=bg4ws0650
>
> set UName=sysdba
>
> set Pwd=masterke
>
> set DbName=D:\RiTEQFireBird\RELEASE2\TIMETEQ-FIREBIRD-TEST-RELEASE2.FDB
>
> @echo on
>
> CD C:\Program Files (x86)\Firebird\Firebird_2_5\bin
>
> rem echo Importing Table and Stored Procedure to Firebird Database
>
> rem echo Added on 05-01-2012 for Sales and Budget-
>
> isql.exe -U %UName% -P %Pwd% -d %DbName% -S %SName% -i
> C:\SalesLabourBudget\spBudgetInputForFreshRevision.sql
>
> pause
>
> Thanks,
>
> Mahesh Pratihari
>
> Sonata Software Limited
>
> Phone : +91 80 3097 1527
>
> Mobile : +91 99808 37446
>
> www.sonata-software.com 
>
> Please don't print this email unless you really need to. This will
> preserve trees on our planet.
>
> [Non-text portions of this message have been removed]
>
> 





Re: [firebird-support] gbak connection string

2012-01-31 Thread Vander Clock Stephane
In that case it is better to run gbak through the Services API (the -se
service_mgr switch) :) it is much faster :)
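
For reference, a minimal sketch of a backup over the Services API (the
backup file name is hypothetical; with -se the paths are resolved on the
server):

gbak -b -user SYSDBA -password masterkey -se servername:service_mgr c:\myfolder\mydatabse.fdb c:\backup\mydatabase.fbk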

On 1/31/2012 12:30 AM, Sergio H. Gonzalez wrote:
>
> (FB 2.5)
>
> Hello! is there any problem in using gbak with this string:
>
> gbak c:\myfolder\mydatabse.fdb [etc...]
>
> while there are other PCs connected to the same databse with tcp/ip:
>
> servername:c:\myfolder\mydatabse.fdb ???
>
> If I remember well, mixing string connections could damage the database?
> Is the same with gback?
>
> thanks
>
> -s
>
> [Non-text portions of this message have been removed]
>
> 





Re: [firebird-support] last record from a table

2012-01-31 Thread Vander Clock Stephane
How is the ID generated? :) If it is a generator, you can call that
generator to get the latest ID :)
It will be fast as hell :)
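
For reference, a minimal sketch, assuming the ID comes from a generator
(GEN_MYTABLE_ID is a hypothetical name):

select MyDate from MyTable
where id = (select gen_id(GEN_MYTABLE_ID, 0) from RDB$DATABASE);

gen_id(..., 0) reads the current generator value without incrementing it;
note it only matches an existing row if the last generated ID was actually
committed.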

On 1/27/2012 11:47 PM, Sergio H. Gonzalez wrote:
>
> (Firebird 2.5)
>
> Hello! this is probably an old question, but didn't find nothing on the
> net...
>
> Here's a sample table: MyTable (has two fields) = ID: integer & 
> MyDate: Date
>
> Is this the best (fastest) way to get the LAST inserted record from a 
> table?
>
> select MyDate from MyTable where id = (select max(id) from MyTable)
>
> Thanks!!
>
> [Non-text portions of this message have been removed]
>
> 





Re: [firebird-support] Optimize this simple SQL

2012-01-31 Thread Vander Clock Stephane
Zero knowledge?
My teacher taught me that the most important thing to know is
"je ne sais qu'une chose, c'est que je ne sais rien" (I know one thing:
that I know nothing)... it's from Socrates :)




On 1/31/2012 11:10 PM, Dmitry Kuzmenko wrote:
>
> Hello, Vander!
>
> Tuesday, January 31, 2012, 7:30:27 PM, you wrote:
>
> VCS> one year ago i ask advise to optimize this SQL :
>
> Seems that you try to learn for free?
>
> You are asking too many newbie questions, and at the same
> time seem not to be reading anything.
> I think you will be ignored soon.
> I don't like this, but I also don't like
> asking lots of questions with zero knowledge of anything.
>
> --
> Sincerely,
> Dmitry Kuzmenko, www.ib-aid.com
>
> 





Re: [firebird-support] Is our FB 2.5 Db corrupted ?

2012-01-31 Thread Vander Clock Stephane
Is it FB 2.5.0 or FB 2.5.1? Because with 2.5.0 I also had this kind of
problem with the indexes (corrupted all the time);
2.5.1 now works like a charm :)

Try gfix, if you have the time, to see whether the indexes are corrupted.
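
For reference, a minimal validation sketch (path and credentials are
placeholders; validation generally wants exclusive access to the database):

gfix -v -full -user SYSDBA -password masterkey C:\data\mydb.fdb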

Also, CommitRetaining is not so good...

On 1/30/2012 2:51 PM, Colin wrote:
>
> We have a 5GB Database FB2.5 Win2008 Server 4 Core M/C, 143 tables, 
> problem with table INV 48K Records
>
> When everyone logs off and then log on, access to INV is slow for the 
> last 7K Records. If we fetch all, it takes forever, but eventually can 
> log off and log on again and all is ok. Same is OK if we set all 
> indexes ACTIVE (takes 4 hours for this table), or do a backup. Also if 
> we sweep. this takes the same or more time.
>
> Questions:
>
> If we backup and restore the database is like new? data exported and 
> then reloaded into a copied schema? This seems to work.
>
> Can sweep occur if the database has connections, but no activity? If 
> it did, then we would not get all the sweep operations stacked up.
>
> Why does sweep (or the slow backup) take so long and if so, why is 
> there no cpu load? I would have thought that the CPU would have been busy.
>
> In the app - one transaction and all datasets/queries/procedures are 
> commitretaining.
>
> And, and why always this table - treated much the same as other tables.
>
> Are there special settings for the database connection? and how can we 
> know that this might happen (so we could backup and restore before the 
> last user logged out)?
>
> Lawrence
>
> 





Re: [firebird-support] when an index is needed ?

2012-01-31 Thread Vander Clock Stephane
Yes, I understand...

What I want to say is: after how many "page fetches" does it become better
to have an index?
For example, I want all the articles from the database ordered by date.
The database has 1000 articles...
Is it really useful to have an index on the price right now? I
think no...
Now the database has 10 articles;
is it really useful to have an index on the price right now? I
think yes...

My tests show that with more than 1000 records it starts to be better to
use an index...

I would like to hear your opinion...

On 1/31/2012 1:07 AM, Leyne, Sean wrote:
>
>
> > From with amount of record in a table it's start to be usefull to 
> have an index
> > ?
> > 100? 1000? 1 ?
> >
> > Actually i thing about 200 ? maybe 1000 ?
>
> Adding an index is not about the number of rows/records, it is about 
> the "uniqueness" of the index.
>
> Consider, A table with every person in the world:
>
> - it should not have an index for sex.
> - it also should not have an index for Country (China + India = 25% of database)
> - A compound index by country, last name, date of birth would be of 
> some value.
>
>
> You want to build few indexes that help narrow down the scope of the 
> search.
>
> Remember that the engine will need to read each candidate row to check 
> the current search field values against the query criteria.
>
> In some cases, indexes actually slow things down by causing a large 
> number of random disk IOs. A non-indexed SQL is not always a bad thing.
>
>
> Sean
>
> 





Re: [firebird-support] Optimize this simple SQL

2012-01-31 Thread Vander Clock Stephane

> 1 - What PLAN does the engine generate for the query?
>

PLAN (HASH INDEX (HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, 
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, 
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, 
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, 
HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_IDX, HASH_X1_Y5_IDX))

Replacing the last HASH_X1_Y5_IDX with HASH_X1_Y2_IDX or HASH_X1_Y3_IDX or
... gives roughly the same speed.
Even replacing it with all the indexes at the same time (HASH_X1_Y1_IDX,
HASH_X1_Y2_IDX, HASH_X1_Y3_IDX, HASH_X1_Y4_IDX, HASH_X1_Y5_IDX),
the speed stays about the same...

>
> 2 - What other types of queries to do run? Do you ever search for rows 
> without X1_Y1, like X1_Y2 and X1_Y3 only?
>

No, I run ONLY this kind of query (just the numbers change):

Select
   *
from
   HASH
where
   x1_y1 in
(110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140) and
   x1_y2 >= 110 and
   x1_y2 <= 130 and
   x1_y3 >= 160 and
   x1_y3 <= 180 and
   x1_y4 >= 20 and
   x1_y4 <= 40 and
   x1_y5 >= 110 and
   x1_y5 <= 130;

(the first "in" instead of >= and <= it's a trick given by philippe)

>
> Stephane, it seems you have been asking this list to think for you and 
> have not tried things for yourself...
>

Thanks as always for the help :) and I always follow this rule: you never
look stupid for asking, but you do look stupid for not asking :)
I can promise you I have tried a lot of things :)

I even made some demo projects (see the link below) and compared the
benchmark with the R-tree in SQLite, etc.:
http://sourceforge.net/projects/alcinoe/files/alsqlbenchmark/1.01/

:)

Now it's mostly that I want to understand the index strategy...
to understand how the index on 5 columns can do the work. I can understand
how it helps based on 2 columns (X1_Y1, X1_Y2),
but not why 5 columns (X1_Y1, X1_Y2, X1_Y3, X1_Y4, X1_Y5) help.

Thanks again for your help!
stéphane





[firebird-support] Optimize this simple SQL

2012-01-31 Thread Vander Clock Stephane
Hello,

One year ago I asked for advice on optimizing this SQL:

CREATE TABLE HASH(
   ID INTEGER NOT NULL,
   x1_y1 SMALLINT NOT NULL,
   x1_y2 SMALLINT NOT NULL,
   x1_y3 SMALLINT NOT NULL,
   x1_y4 SMALLINT NOT NULL,
   x1_y5 SMALLINT NOT NULL,
  PRIMARY KEY (ID)
);
CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
CREATE INDEX HASH_X1_Y2_IDX ON HASH (X1_Y2);
CREATE INDEX HASH_X1_Y3_IDX ON HASH (X1_Y3);
CREATE INDEX HASH_X1_Y4_IDX ON HASH (X1_Y4);
CREATE INDEX HASH_X1_Y5_IDX ON HASH (X1_Y5);

fill it with 20 million rows

Select
   ID
from
   HASH
where
   x1_y1 >= <#randomnumber1> - 20 and
   x1_y1 <= <#randomnumber1> + 20 and
   x1_y2 >= <#randomnumber2> - 20 and
   x1_y2 <= <#randomnumber2> + 20 and
   x1_y3 >= <#randomnumber3> - 20 and
   x1_y3 <= <#randomnumber3> + 20 and
   x1_y4 >= <#randomnumber4> - 20 and
   x1_y4 <= <#randomnumber4> + 20 and
   x1_y5 >= <#randomnumber5> - 20 and
   x1_y5 <= <#randomnumber5> + 20;

and Philippe Makowski suggested this to me:

CREATE ASC INDEX HASH_IDX ON HASH (X1_Y1, X1_Y2, X1_Y3, X1_Y4, X1_Y5);

Select
   ID
from
   HASH
where
   X1_Y1 IN
(10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50)
  and
   x1_y2 >= <#randomnumber2> - 20 and
   x1_y2 <= <#randomnumber2> + 20 and
   x1_y3 >= <#randomnumber3> - 20 and
   x1_y3 <= <#randomnumber3> + 20 and
   x1_y4 >= <#randomnumber4> - 20 and
   x1_y4 <= <#randomnumber4> + 20 and
   x1_y5 >= <#randomnumber5> - 20 and
   x1_y5 <= <#randomnumber5> + 20;

That was very (very) much more efficient (10x faster)!!
This is based on
http://explainextended.com/2010/05/19/things-sql-needs-determining-range-cardinality/

Now with this strategy I don't remember: do I need to keep the
indexes
CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
CREATE INDEX HASH_X1_Y2_IDX ON HASH (X1_Y2);
CREATE INDEX HASH_X1_Y3_IDX ON HASH (X1_Y3);
CREATE INDEX HASH_X1_Y4_IDX ON HASH (X1_Y4);
CREATE INDEX HASH_X1_Y5_IDX ON HASH (X1_Y5);

or can I remove them?

Thanks in advance
stéphane


[firebird-support] Sweep interval : OST - OIT OR OST - OAT ?

2012-01-30 Thread Vander Clock Stephane
Hello,

When I read this doc:
http://www.ibphoenix.com/resources/documents/general/doc_67
it says the automatic sweep is kicked off when the difference between the
OAT and the OST is greater than the sweep interval.

But when I read this:
http://www.firebirdnews.org/?p=4475
it says the automated sweep will start if the difference between the OST
(Oldest Snapshot Transaction) and the OIT is greater than the defined sweep
interval.

Which is the correct answer?

Also, what is the OIT: isc_info_oldest_transaction or
isc_info_oldest_active?
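
For reference, in Firebird 2.1+ the counters themselves can be read from
the monitoring tables (a minimal sketch):

select MON$OLDEST_TRANSACTION, MON$OLDEST_ACTIVE, MON$OLDEST_SNAPSHOT,
       MON$NEXT_TRANSACTION, MON$SWEEP_INTERVAL
from MON$DATABASE;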

Thanks in advance
stéphane


[firebird-support] when an index is needed ?

2012-01-30 Thread Vander Clock Stephane
Hello,

From what number of records in a table does it start to be useful to have
an index?
100? 1000? 1?

Actually I am thinking about 200? Maybe 1000?

i.e., for queries like:
select ... from ... where ... order by ...

Thanks in advance
stéphane


Re: [firebird-support] parallel update - Email found in subject

2012-01-30 Thread Vander Clock Stephane
OK, I confirm that parameterized queries are MUCH, MUCH faster than normal
queries (up to 3x faster!)

It was a little "hard" to understand the API well enough to use
parameterized queries properly...
I made a component to make them simpler to use (Delphi) and call the query
like:
update(SQL, array of params);

Now, when I read this article:

http://codicesoftware.blogspot.com/2008/04/fastest-way-to-insert-100k-registers.html

I think that a possibility to call an insert like in MySQL:

INSERT INTO testtable (iobjid, ifield0, ifield1) VALUES ({0}, {1}, {2}), 
0, 30, 5, ({0}, {1}, {2}), 0, 30, 5 ...

would not be so bad, since the problem with parameterized queries is that
they need special components, and it would be good to be able to run a
parameterized query from inside plain SQL text.
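
For reference, one way Firebird already allows several inserts in a single
statement is EXECUTE BLOCK (a minimal sketch; the table comes from the
article above and the values are only illustrative):

EXECUTE BLOCK AS
BEGIN
  INSERT INTO testtable (iobjid, ifield0, ifield1) VALUES (1, 0, 30);
  INSERT INTO testtable (iobjid, ifield0, ifield1) VALUES (2, 0, 30);
  INSERT INTO testtable (iobjid, ifield0, ifield1) VALUES (3, 0, 30);
END

From isql it needs a SET TERM around it, and the values are still embedded
in the SQL text rather than being true parameters.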


On 1/26/2012 11:19 PM, Leyne, Sean wrote:
>
> > > Or is this what you're already doing, just that your example was
> > > simplified?
> > >
> > > HTH,
> > > Set
> > >
> >
> > yes, of course parametized queries will be more fast, but i don't 
> think they
> > will change the ratio in the test (it's will simply be more fast for 
> both
> > variantes)
>
> I would disagree!
>
> Your test was skewed and did not reflected the reality of 
> prepare/parameterized statements! (the 2 scenarios are "apples and 
> oranges")
>
> By using un-prepared statements, you reduced the disk IO load of your 
> test and thus the overhead of Classic server page synchronization. In 
> essence your test created "openings" to allow for multiple disk IO to 
> occur.
>
> Using prepared statements will increase the speed of a single 
> connection but also increase the synchronization overhead and disk 
> contention. Thus reducing the benefit of parallel connections.
>
> Sean
>
> 





Re: [firebird-support] parallel update - Email found in subject

2012-01-27 Thread Vander Clock Stephane
OK, I will try parameterized queries and see...

thanks !

On 1/26/2012 11:19 PM, Leyne, Sean wrote:
>
> > > Or is this what you're already doing, just that your example was
> > > simplified?
> > >
> > > HTH,
> > > Set
> > >
> >
> > yes, of course parametized queries will be more fast, but i don't 
> think they
> > will change the ratio in the test (it's will simply be more fast for 
> both
> > variantes)
>
> I would disagree!
>
> Your test was skewed and did not reflected the reality of 
> prepare/parameterized statements! (the 2 scenarios are "apples and 
> oranges")
>
> By using un-prepared statements, you reduced the disk IO load of your 
> test and thus the overhead of Classic server page synchronization. In 
> essence your test created "openings" to allow for multiple disk IO to 
> occur.
>
> Using prepared statements will increase the speed of a single 
> connection but also increase the synchronization overhead and disk 
> contention. Thus reducing the benefit of parallel connections.
>
> Sean
>
> 





Re: [firebird-support] parallel update

2012-01-26 Thread Vander Clock Stephane

> Changing the entire sql statement 10 times like your example do 
> takes a considerable amount of time. The easiest way for you to speed 
> things up would be to use parameters, prepare once and execute 10 
> times, i.e. something like (using IBO and Pascal since that is what I 
> use):
>
> TIB_DSQL1.SQL.Add('insert Into Table_A(ID) VALUES(:MyParam)');
> TIB_DSQL1.Prepare;
> for I:=1 to 10 do
> begin
> TIB_DSQL1.Params[0].AsString:='<#randomchar>';
> TIB_DSQL1.Execute;
> end;
>
> Or is this what you're already doing, just that your example was 
> simplified?
>
> HTH,
> Set
>

Yes, of course parameterized queries will be faster, but I don't think
they will change the ratio in the test (it will simply be faster
for both variants).








Re: [firebird-support] parallel update

2012-01-25 Thread Vander Clock Stephane
Dear Ann,

> Sorry for the dumb question, but why do you want to do those
> operations in parallel? The SuperServer runs only one thread at a
> time, ties a connection to an thread, and uses only one processor. Do
> you think it will be faster interweaving n commands than running the
> operations sequentially? And of course, the classic architecture has
> the same general limitations, except there is a process per
> connection.
>

No dumb question at all...

To answer you, I did this simple test
(on FB SuperClassic, single processor):

Create 3 tables
Create table TABLE_A(ID VARCHAR(15));
Create table TABLE_B(ID VARCHAR(15));
Create table TABLE_C(ID VARCHAR(15));

**
Now I do this in a loop with a single thread and one connection:

Start transaction
insert Into Table_A(ID) VALUES('<#randomchar>');  //
insert Into Table_A(ID) VALUES('<#randomchar>');  // (100 000 loop)
insert Into Table_A(ID) VALUES('<#randomchar>');  //
commit transaction

average Insert time taken for each thread : 0.24 ms
total time to insert 300 000 rec: 34.8 seconds

**
Now with 3 different threads and 3 different connections:

thread1
Start transaction
insert Into Table_A(ID) VALUES('<#randomchar>');  (100 000 loop)
commit transaction

Thread2
Start transaction
insert Into Table_B(ID) VALUES('<#randomchar>'); (100 000 loop)
commit transaction

thread3
Start transaction
insert Into Table_C(ID) VALUES('<#randomchar>'); (100 000 loop)
commit transaction

average Insert time taken for each thread : 0.12 ms
total time to insert 300 000 rec: 18.7 seconds

So the parallel version is 2 times faster! (I was hoping for 3 times, but I
did the test on a slow SATA hard drive and a single-processor computer,
which can explain it.)

So in the end the parallel inserts are much faster (2x) than the
sequential inserts!
So I think it could be useful if Firebird allowed us to send it a "batch
of SQL" to execute, and it would itself execute all the SQLs in parallel in
different threads.

Thanks for everything
stéphane







> >
> > is their any way to execute theses n update in parallele inside one
> > connection and one transaction ?
>
> No.
>
> Good luck,
>
> Ann
>
> 





[firebird-support] parallel update

2012-01-25 Thread Vander Clock Stephane
Hello,

I want to do these updates simultaneously:

Update table1 ...
Update table2 ...
etc...
Update tablen ...

Table1, table2, ... tableN are not connected in any way.

Now, I can do these updates in parallel using n connections, but
I would like to avoid using n connections (meaning starting n transactions, etc.).

Is there any way to execute these n updates in parallel inside one
connection and one transaction?

Thanks in advance
stéphane


Re: [firebird-support] Ascending or Descending Index

2012-01-24 Thread Vander Clock Stephane
As always... thanks, Ann!

On 1/24/2012 10:01 PM, Ann Harrison wrote:
>
> Stephane,
>
> >
> > Without the consideration of the order by, i there any difference
> > between ascending and descending index for queries like
> >
> > select ... where fieldA > 123;
> > select ... where fieldA = 123;
> > select ... where fieldA < 123;
>
> No difference. The direction of the index matters for ORDER BY and
> for MIN (ascending index) or MAX (descending index).
>
> Ignoring the issues of Nulls, a descending index is identical to an
> ascending index except that the keys are inverted. Nulls are
> complicated in both directions.
>
> Good luck,
>
> Ann
>
> 





[firebird-support] Ascending or Descending Index

2012-01-24 Thread Vander Clock Stephane
Hello,

Without considering ORDER BY, is there any difference
between an ascending and a descending index for queries like

select ... where fieldA > 123;
select ... where fieldA = 123;
select ... where fieldA < 123;

or do both give the same performance?

thanks
stéphane


Re: [firebird-support] Re: Fwd: How to avoid deadlock ?

2012-01-16 Thread Vander Clock Stephane

>
> I don't get the point, sorry. If you perform SELECT in the same
> transaction as INSERT, then neither SELECT WITH LOCK nor UPDATE should
> wait. If the transactions are different, then both should wait. In this
> latter case, SELECT WITH LOCK will not wait only if you have already
> updated the record in the same (second) transaction, but again, the same
> applies to the regular UPDATE.
>

It's OK, thanks; it was just that my understanding of WITH LOCK was not
very good. I just want to say that SELECT ... WITH LOCK works ONLY
if there are records returned by the SELECT :) but that's obvious :)





Re: [firebird-support] Re: Fwd: How to avoid deadlock ?

2012-01-15 Thread Vander Clock Stephane
Thanks, Dmitry!

Now, yes, it works the way I want...

Just one remark: WITH LOCK has a little weakness
compared to the equivalent lock done with an UPDATE...

For example:

Start transaction
Insert ... fieldA=1;

then

Select ... where FieldA=1 with lock

=> returns immediately :(
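
For reference, a minimal sketch of the explicit-lock pattern from option (b)
in Dmitry's reply quoted below (names follow the Table_A example; untested):

select Field_A from Table_A with lock;
update Table_A set Field_A = Field_A + 1;

The idea is that the SELECT ... WITH LOCK takes the row lock (and, with
isc_tpb_wait, waits while another transaction holds it) before the new
value is computed and written.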



On 1/14/2012 12:01 PM, Dmitry Yemanov wrote:
>
> 14.01.2012 0:29, Vander Clock Stephane wrote:
> >
> > What i want to say :
> >
> > Table_A
> > Field_A
> > 12
> >
> > Transac 1 (isc_tpb_wait) :
> > update Table_A Set Field_A = Field_A + 1;
> >
> > At the same time
> >
> > Transac 2 (isc_tpb_wait) :
> > update Table_A Set Field_A = Field_A + 1;
> >
> > here we imagine a dead lock will appear !
> > but we set isc_tpb_wait to say to wait
> > that the lock dispear
> >
> > BUT the probleme is that at the begining
> >
> > Transac_1 Is the first to do the job (FieldA = 12)
> > Transac_2 wait (FieldA = 12)
> > Transac 1 finish the job (FieldA = 13)
> > Transac_2 ok i can do the job ... ooh no i can't because fieldA <> what
> > it was at the begining 12 :(
>
> As far as I understand, your problem can be worked around in two
> different ways in Firebird:
>
> a) by using read-committed no-record-version isolation mode
> b) by locking the records explicitly (SELECT WITH LOCK) before updating
>
> Dmitry
>
> 





Re: [Bulk] [firebird-support] Is Update or Insert slower than a single Update (or Insert)

2012-01-14 Thread Vander Clock Stephane

> It is logical for me, since before inserting, it needs to know if
> similar record already exists (to be update). Sso, more operations are
> being done (more disk I/O) compared to a single insert.
>

I was thinking that during an UPDATE OR INSERT, the engine would first try
to "insert the record"... and if there is no error, then OK (so exactly
like an insert), and
if there is an error, only then try to update the record (the engine
already knowing which page to update thanks to the previously failed insert).

Now I know that is not the reality ;)





Re: [Bulk] [firebird-support] Is Update or Insert slower than a single Update (or Insert)

2012-01-14 Thread Vander Clock Stephane
Thanks to my paranoia: I was thinking, like everyone, that

Update or insert
OR
Insert

would be close to the same speed (or very similar).

So I did the test... and guess what?

Update or insert is 2x slower than insert alone :( :( :(

10 Update or insert => 0.501 ms average insert time
10 insert => 0.300 ms average insert time

Sad, but interesting to know!



On 1/13/2012 6:53 PM, Vander Clock Stephane wrote:
>
> Hello,
>
> Is "Update or Insert" more slower than an single Update (or insert) ??
>
> thanks by avance
> stéphane
>
> 





Re: [firebird-support] Fwd: How to avoid deadlock ?

2012-01-13 Thread Vander Clock Stephane


> Il y a une trentaine d'années, quelqu'un m'avait dit « Ce qui n'est pas
> clair n'est pas français. » (About thirty years ago, someone told me:
> "What is not clear is not French.")
>

C'est clair :) (It's clear :))

What I want to say:

Table_A
Field_A
12

Transac 1 (isc_tpb_wait):
update Table_A Set Field_A = Field_A + 1;

At the same time

Transac 2 (isc_tpb_wait):
update Table_A Set Field_A = Field_A + 1;

Here we imagine a deadlock will appear!
But we set isc_tpb_wait to say: wait
until the lock disappears.

BUT the problem is that at the beginning:

Transac_1 is the first to do the job (FieldA = 12)
Transac_2 waits (FieldA = 12)
Transac_1 finishes the job (FieldA = 13)
Transac_2: OK, I can do the job... oh no, I can't, because FieldA <> what
it was at the beginning (12) :(

That is how Firebird actually works (if I am not mistaken).

But you summed it up:


> In read committed mode, Firebird would return $20 as the balance until
> the transaction that added $80 commits. After the commit, the same
> transaction that saw $20 would see $100, but it still can't add $5
> because the $80 was added by a transaction that was not committed when
> the current transaction started.
>

Exactly what I want to say... but with an instruction like FieldA =
FieldA + 1
it normally shouldn't really matter, because the transaction sees, as
you say, the $80 (after the commit of the first transaction and the release
of the lock)
and can do the math...


> In NuoDB, snapshot transactions get
> update conflicts by default, and read committed transactions wait for
> the conflicting write to be committed so the new value is visible,
> then proceed.
>

Yes, it's sad that NuoDB is still in beta :(






[firebird-support] Is Update or Insert slower than a single Update (or Insert)

2012-01-13 Thread Vander Clock Stephane
Hello,

Is "Update or Insert" more slower than an single Update (or insert) ??

Thanks in advance
stéphane


Re: [firebird-support] Fwd: How to avoid deadlock ?

2012-01-13 Thread Vander Clock Stephane

>
> The usual answer (which is in the FAQ) is to have the change to
> Table_Data insert a row into table_stats indicating that it added a
> value (+1) or removed one (-1). Periodically, run a procedure to
> aggregate the results.
> Updates will cause deadlocks unless they are serialized, meaning that
> only one transaction can change Table_Data at a time.
>

Thanks, Ann.

Yes, I need to use an intermediate system (like a table) that will store
the changes...
which means more disk space, more memory, and more slowness :(
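
For reference, a minimal sketch of the intermediate table Ann describes
(table, trigger and column names are hypothetical; untested):

CREATE TABLE TABLE_STATS_DELTA (DELTA INTEGER NOT NULL);

SET TERM ^ ;
CREATE TRIGGER TABLE_DATA_COUNT FOR TABLE_DATA
AFTER INSERT OR DELETE
AS
BEGIN
  IF (INSERTING) THEN
    INSERT INTO TABLE_STATS_DELTA (DELTA) VALUES (1);
  ELSE
    INSERT INTO TABLE_STATS_DELTA (DELTA) VALUES (-1);
END^
SET TERM ; ^

A periodic job can then, in one short transaction, add SUM(DELTA) from
TABLE_STATS_DELTA to Table_Stats.NB_IDs and delete the aggregated delta
rows, so concurrent writers never touch the same row.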

It could be good if it were possible to have something like a "mutex"
inside the database that we could activate in a trigger...


Or at least something like:

"While the record is locked (deadlock), wait (xx seconds max) and then do
the query... and it
doesn't matter if the value changed between the time the transaction
started, the time you
waited and the time you executed the query."





[firebird-support] How to avoid deadlock ?

2012-01-12 Thread Vander Clock Stephane
Hello,

Say we have one table Data and one table Stats:

Table_Data
ID
1
2
5
8


Table_Stats
NB_IDs
4

Every time a user adds or deletes one row in
Table_Data, a trigger will update
Table_Stats (NB_IDs) by increasing or decreasing
the current number in it.

Several users can update different rows at the same time in
Table_Data.

How do I avoid deadlocks when the trigger updates the NB_IDs
in Table_Stats? What mechanism should I use?

Thanks in advance
stéphane


[firebird-support] Fwd: How to avoid deadlock ?

2012-01-12 Thread Vander Clock Stephane

Hello,

Say we have one table Data and one table Stats:

Table_Data
ID
1
2
5
8


Table_Stats
NB_IDs
4

Every time a user adds or deletes one row in
Table_Data, a trigger will update
Table_Stats (NB_IDs) by increasing or decreasing
the current number in it.

Several users can update different rows at the same time in
Table_Data.

How do I avoid deadlocks when the trigger updates the NB_IDs
in Table_Stats? What mechanism should I use?

Thanks in advance
stéphane



Re: [firebird-support] How to store all char (#0 for exemple) in OCTET char column

2012-01-10 Thread Vander Clock Stephane
Thanks, the x'...' hex literal fits my need perfectly.
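
For reference, a minimal usage sketch (table and column names are
hypothetical):

CREATE TABLE T (BIN4 CHAR(4) CHARACTER SET OCTETS);
INSERT INTO T (BIN4) VALUES (x'00FF10CA');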

On 1/10/2012 11:03 PM, Mark Rotteveel wrote:
>
> On 10-1-2012 14:17, Vander Clock Stephane wrote:
> > Hello,
> >
> > How, via SQL text can i store bytes (that include for exemple #0, #1,
> > etc..) in OCTETS char column ?
> >
> > Var char(16) i m happy i have for exemple
> > CHAR_TO_UUID() and
> > UUID_TO_CHAR()
> > for for other length ?
>
> Depending on your access library, you could simply use parametrized
> queries and supply an array of bytes. Otherwise you would need to look
> at the hex string literal introduced in - I believe - 2.5: x'<hex pairs>', e.g. x'CAFEBABE':
> http://www.firebirdsql.org/file/documentation/reference_manuals/reference_material/html/langrefupd25-hexbinstrings.html
>
> -- 
> Mark Rotteveel
>
> 





Re: [firebird-support] Why write async is bad ?

2012-01-10 Thread Vander Clock Stephane

>
> VCS> it's mean that if i m the unlucky guy to commit the 5th update 
> then my
> VCS> commit will be more slower because
> VCS> it's will also commit all the previous update.
>
> why do you have so strange assumptions? You've made practical tests?
> Or, if this is just a theory, what it is based on?
>

Yes, I did test :( and every 5 updates (it's configured like this in the
.conf)
the update is very, very slow... I was curious why, and I discovered that
it's
because it flushes all the previous writes to the disk (looking at the
disk I/O in Task Manager).





Re: [firebird-support] Why write async is bad ?

2012-01-10 Thread Vander Clock Stephane
OK, thanks Ann, now I understand why the writes are done inside a
connection thread...
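
For reference, forced writes can be toggled per database with gfix (a
sketch; path and credentials are placeholders):

gfix -write sync -user SYSDBA -password masterkey C:\data\mydb.fdb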

On 1/11/2012 1:08 AM, Ann Harrison wrote:
>
> Stepane,
>
> >
> > when i setup write async to true, it's mean the the database will store
> > in memory the write and after several update (5 by default?) all the 
> write will be flush to
> > the disk.
>
> Firebird's internal consistency depends on "careful write" - Firebird
> writes database pages in a specific order to avoid dangling pointers.
> Asynchronous writes are done in the order that the operating system
> finds convenient. That is not a good idea. If you have a system
> crash, the database on disk is likely to be corrupt and difficult, if
> not impossible to recover.
> >
> > The main drawback, is that all the write are flush to the disk using the
> > current connection thread :(
>
> No, I wouldn't consider that the main drawback. Potentially
> irrecoverable corruption is a pretty serious drawback.
>
> > i was thinking that it's a separate thread that was responsible to flush
> > to disk :( or i m wrong?
>
> Is it wrong to live dangerously? Maybe so, maybe not.
>
> Good luck,
>
> Ann
>
> 





[firebird-support] How to store all char (#0 for exemple) in OCTET char column

2012-01-10 Thread Vander Clock Stephane
Hello,

How, via SQL text, can I store bytes (including for example #0, #1,
etc.) in an OCTETS char column?

For char(16) I am happy: I have, for example,
CHAR_TO_UUID() and
UUID_TO_CHAR().
But what about other lengths?

Thanks in advance
stéphane





[firebird-support] Why write async is bad ?

2012-01-10 Thread Vander Clock Stephane
Hello,

When I set write async to true, it means that the database will store the
writes in memory,
and after several updates (5 by default?) all the writes will be flushed to
the disk.

The main drawback is that all the writes are flushed to the disk using the
current connection's thread :(

That means that if I am the unlucky guy who commits the 5th update, then my
commit will be slower, because
it will also flush all the previous writes.

I was thinking that a separate thread was responsible for flushing
to disk :( or am I wrong?



Re: [firebird-support] UTF8 in firebird ?

2012-01-08 Thread Vander Clock Stephane
I don't know; I just tried, and ASCII seems to accept chars > 127. I think
internally ASCII is based on 8 bits, not 7...

On 1/8/2012 1:33 AM, Mark Rotteveel wrote:
>
> On 7-1-2012 18:29, Vander Clock Stephane wrote:
> >
> >> I think you're talking about raw UTF-8 bytes; as other have
> >> suggested, you should be using CHARACTER SET OCTETS. Which
> >> means no characters, just bytes (octets).
> >>
> >
> > yes sorry i m confuse about character, code point or raw UTF8 byte...
> > actually i prefere to use ASCII instead of OCTECTS because ASCII feet
> > better to store ASCII char (that what is RAW UTF8 char)
>
> The Firebird character set ASCII allows (or should allow, not sure; see
> the bug that was posted earlier in the thread) only up to 0x7F, as ASCII
> is only defined for 0 to 127 (0x00 to 0x7F), so you won't be able to
> store all the higher-order bytes that are generated by UTF-8 encoding.
>
> -- 
> Mark Rotteveel
>
> 





Re: [firebird-support] UTF8 in firebird ?

2012-01-07 Thread Vander Clock Stephane

> I think you're talking about raw UTF-8 bytes; as other have
> suggested, you should be using CHARACTER SET OCTETS. Which
> means no characters, just bytes (octets).
>

Yes, sorry, I am confusing characters, code points and raw UTF-8 bytes...
Actually I prefer to use ASCII instead of OCTETS because ASCII fits
better for storing ASCII chars (which is what raw UTF-8 bytes are).


> > where you see that some bytes are forbidden in ISO8859_1 ?
> > firebird never complain about it !
>
> Then it could be said this is a bug, like here:
>
> http://tech.groups.yahoo.com/group/firebird-support/message/112680
> http://tracker.firebirdsql.org/browse/CORE-3416
>
> -- 
> Michael Ludwig
>
> 





Re: [firebird-support] UTF8 in firebird ?

2012-01-07 Thread Vander Clock Stephane
Dear Ann,
>
> You've got
> some choices. You can pick one of the almost OK character sets. You
> can use UTF8 and not overspecify field lengths and choose field
> lengths that are likely to compress well with Firebird's RLE when
> they're empty. Or, you can use the fairly well defined interfaces for
> character sets and collations and define your own - or hire someone to
> do it for you.
>

Yes, of course, but I was also curious how many people were aware that their
UTF8 database can be much slower than an equivalent ASCII database :)
I am sure most of them don't know about it when they choose UTF8...








[firebird-support] ASCII OR ISO8859_1 to store UTF8 char ?

2012-01-06 Thread Vander Clock Stephane
Hello,

Just one question: I think both are the same, but which is the best to
store UTF8 code points,
ASCII or ISO8859_1?

Because I think ASCII can handle only chars < 127, right?


Re: [firebird-support] UTF8 in firebird ?

2012-01-06 Thread Vander Clock Stephane

> > no, you can store in iso-8859-1 ALL the UTF8 char :)
> > this is the purpose of utf8, to stay compatible with all the previous
> > system.
>
> No it isn't possible. You could attempt to store unicode codepoints in
> ISO-8859-1 by inventing your own encoding,
>
Not inventing my own encoding! Simply storing the code points in ISO8859_1
(1 UTF8 code point = 1 byte).

> but you cannot store UTF-8
> encoded characters in ISO-8859-1 because the multi-byte encodings do not
> fit in a single byte ISO-8859-1. If you would take multiple ISO-8859-1
> characters to store the encoding, you cannot do that because some 
> bytes (7F
> - 9F) are not allowed in ISO-8859-1 (they are used in Windows-1252 
> which is
> based on ISO-8859-1, but also uses 7F-9F).
>

Where do you see that some bytes are forbidden in ISO8859_1? Firebird never
complains about it!

http://www.arkadia.com/rus/ seems to work :) It's based on an ISO8859_1
database where UTF8 is stored!

>
> > UTF8 use only ascii > 127 to encode special char. but as i know
> > you i m sure you already know it before ... i just speak here about
> > storage, not decoding 
>
> If you talk about storage of UTF8 without using actual UTF8, you need to
> use CHARACTER SET OCTETS.
>

Yes, but I rather prefer the collation of ISO8859_1, as we mostly
target Latin languages. Of course this breaks when UTF8 code points are found.


>
> > take this exemple: in html all special char are handle like &ecute; <
> > etc... did that
> > mean that i will need to x 5 the size of my varchar field that i use to
> > store
> > html encoded text ?? of course not except if i store cyrrilic or
> > chinesse char ...
>
> That is not comparable at all as they are escape sequences not character
> encodings (and if you use UTF8 as your page encoding for HTML, you don't
> need to use most escape sequences).
>

Not comparable at all? Sorry, it's fully comparable! When they invented
UTF8 they just thought about a mechanism to encode chars (like
HTML does). So they said: all chars < ASCII 127 stay as they are, and all
chars > ASCII 127 are encoded in 2 or more bytes. Something like what HTML does:
for example the char ">" is encoded in the 4 bytes "&gt;".

>
> Spanish and French work just fine with ISO-8859-1, if you also need
> Polish, then yes you will definitely need UTF8.
>

And have a 2x bigger database and a 2x slower system
because of this :(





Re: [firebird-support] Re: UTF8 database : how to speed up the database by up to 100% !

2012-01-06 Thread Vander Clock Stephane
My database is in front of me! The sizes are clearly reported... no
mistake possible.
For the same amount of bytes written in both databases, the UTF8 one can be
from 20% up to 100% bigger than
the ISO8859_1 database... Yes, the speed test can be influenced by
lots of factors, but the size cannot!
And bigger size = more I/O = slower; that is a fact :)

On 1/6/2012 11:02 AM, ma_golyo wrote:
>
> Hi!
>
> You tested something wrong. Our real database test shows no such 
> thing. (not only ascii, but hungarian characters too)
>
> Size : WIN1250 : 2.246 M, UTF8 : 2.418 M
> Speed : Nothing changed, same speed.
>
> --- In firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>, Vander Clock Stephane 
>  wrote:
> >
> > Hello,
> >
> > Why UTF8 database is not a good choice
> >
> > read carrefully this post and you will see how to speed by up
> > to 100% you database (if you use UTF8 off course)
> >
> > I do some benchmark of UTF8 database vs ISO8859_1 database.
> > I think the way that Firbird handle utf8 database is not optimal
> >
> > let speak first about the utf8
> > UTF8 it's just a way to encode special character like è à etc ..
> > for this utf8 will use combination of char upper than ascii #127
> > In this way, and it's not the less, UTF8 stay compatible with all
> > software that work with 8 bit string. thank to this we can use
> > a firebird database defined as iso8859_1 to store UTF8
> >
> > Now The wrong choice about utf8 in firebird
> > Firebird consider than it need 4 bytes (to confirm i was thinking 3)
> > to store 1 utf8 character that true when we speak about cyrillic
> > char for exemple but not when we speak about latin language (french,
> > italian,
> > spanish, etc.). In these language most of the char need only one byte
> > in utf8 (a, b, c, etc.) and only around 20% (é, à, etc.) of the
> > char need 2 bytes in UTF8
> >
> > now let thing i target such country (portugal, spain, france, italian,
> > etc..)
> > what kind of charset will best fit my database ? of course UTF8 ! but is
> > it neccessary that when i declare a varchar(255) for my varchar column
> > firebird handle in background a varchar(1250) ?? absolutely not
> > as in these language most of the char need only one byte to be encoded
> > in UTF8
> >
> > now you will say me: is their any penalty for this ? after all varchar
> > column
> > are compressed ?
> >
> > yep, their is and i do some bechnmark to prouve it
> >
> > 1rt i create 2 database, one in iso8859_1 and the other in UTF8
> >
> > CREATE DATABASE 'c:\test_utf8.fdb'
> > USER 'sysdba'
> > PASSWORD 'masterkey'
> > PAGE_SIZE 4096
> > DEFAULT CHARACTER SET UTF8;
> >
> > CREATE DATABASE 'c:\test_iso88591.fdb'
> > USER 'sysdba'
> > PASSWORD 'masterkey'
> > PAGE_SIZE 4096
> > DEFAULT CHARACTER SET ISO8859_1;
> >
> > after i add in each database 2 simple table
> >
> > CREATE TABLE TEST_A(
> > DATA1 VARCHAR(2000),
> > DATA2 VARCHAR(2000),
> > DATA3 VARCHAR(2000),
> > DATA4 VARCHAR(2000),
> > DATA5 VARCHAR(2000),
> > DATA6 VARCHAR(5000)
> > );
> >
> > CREATE TABLE TEST_b(
> > DATA01 VARCHAR(100),
> > DATA02 VARCHAR(100),
> > DATA03 VARCHAR(100),
> > DATA04 VARCHAR(100),
> > DATA05 VARCHAR(100),
> > DATA06 VARCHAR(100),
> > DATA07 VARCHAR(100),
> > DATA08 VARCHAR(100),
> > DATA09 VARCHAR(100),
> > DATA10 VARCHAR(100),
> > DATA11 VARCHAR(100),
> > DATA12 VARCHAR(100),
> > DATA13 VARCHAR(100),
> > DATA14 VARCHAR(100),
> > DATA15 VARCHAR(100),
> > DATA16 VARCHAR(100),
> > DATA17 VARCHAR(100),
> > DATA18 VARCHAR(100),
> > DATA19 VARCHAR(100),
> > DATA20 VARCHAR(100),
> > DATA21 VARCHAR(100),
> > DATA22 VARCHAR(100),
> > DATA23 VARCHAR(100),
> > DATA24 VARCHAR(100),
> > DATA25 VARCHAR(100),
> > DATA26 VARCHAR(100),
> > DATA27 VARCHAR(100),
> > DATA28 VARCHAR(100),
> > DATA29 VARCHAR(100),
> > DATA30 VARCHAR(100),
> > DATA31 VARCHAR(100),
> > DATA32 VARCHAR(100),
> > DATA33 VARCHAR(100),
> > DATA34 VARCHAR(100),
> > DATA35 VARCHAR(100),
> > DATA36 VARCHAR(100),
> > DATA37 VARCHAR(100),
> > DATA38 VARCHAR(100),
> > DATA39 VARCHAR(100),
> > DATA40 VARCHAR(100),
> > DATA41 VARCHAR(100),
> > DATA42 VARCHAR(100),
> > DATA43 VARCHAR(100),
> > DATA44 VARCHAR(100),
> > DATA45 VARCHAR(100),
&

Re: [firebird-support] UTF8 in firebird ?

2012-01-06 Thread Vander Clock Stephane

> Vander Clock Stephane wrote:
> > no, you can store in iso-8859-1 ALL the UTF8 char :)
>
> No this is incorrect. What you can store in ISO-8859-1 are
> all the UTF8 codepoints not characters. Once you understand
> the difference you will also understand that to do so means
> that none of your indexes, sorting etc will work properly.
>

Of course I was speaking about code points! I am not (yet) so crazy as to
think I can put all the symbols on Earth in 1 byte :)
My indexes work perfectly; my sorting does not (of course)!
That is why I wrote this post about UTF8; otherwise I would stay with
my ISO8859_1 columns and everything would be perfect.

> If you want simple byte storage and to hell with proper
> unicode character collation then use character set OCTETS.
>
OCTETS or ISO8859_1, it's the same in fact... I still need to go, like
you say, into the hell of proper Unicode character collation in
both cases :(

> If you want to Firebird to collate your characters as is
> appropriate for a given locale then use the character-set
> relevant to that locale or use UTF8 to get full unicode
> support and wear the cost.
>

I don't understand: you spend so much on development to gain speed,
you even make it possible to optimize things like the TcpRemoteBufferSize,
and here I give you an option to make the system 2x faster "easily",
and the answer I get is "wear the cost"??

What is the problem with saying that we can define the size of a UTF8
varchar column
in BYTES (and in fact 80% of the UTF8 chars we use in Europe need only 1
byte)
instead of "4 bytes" per char! And even 4 bytes is not enough for my sci-fi
database
where the Klingon Empire has its accounts!

Keep UTF8 as it is if you want, but why not add a new charset, like
UTF8_SVDC, that is completely equal to UTF8 except that it considers that
when I write varchar(250) it means 250 bytes (or 250 code points, if you prefer)?

stephane





Re: [firebird-support] UTF8 in firebird ?

2012-01-06 Thread Vander Clock Stephane

> > yes, at least some options in the database (or in the create statement)
> to
> > define the size in byte of 1 UTF8 char
> >
> > For exemple by default 1 utf8 char = 4 bytes (like it is now) and i can
> > be able to
> > customize it to be egual to 1 bytes.
>
> Then it is no longer UTF-8.
>

I think you have a misunderstanding, because UTF8 is "FROM" 1 to 6 bytes
(or even more in theory)!
So why would 3 (or 4) bytes be UTF8 and not 1 or 2??


> What should change is how Firebird handles and
> stores variable length character encodings like UTF-8, because you are
> right: some parts of Firebird treat it as if it is always 4 bytes, and it
> really should not do that.
>

There is no better way than letting the user set up himself the length of
the varchar
column in bytes instead of x4 bytes. For my column of 250 characters,
if I know I will target Russian or Chinese
I will set up varchar(750), and if I know I will target French and Spanish
I will
set up varchar(300). And if I want to be ABSOLUTELY
sure I can enter 250 chars in any case, I will set it up as varchar(1500)
(Klingon extension?)!

>
> However redefining UTF-8 to fit your specific wishes is not the right way.
>
> Mark
>
> 





Re: [firebird-support] UTF8 in firebird ?

2012-01-05 Thread Vander Clock Stephane

> No, you cannot use a column defined as ISO-8859-1 to store UTF8, because
> 1) ISO-8859-1 does not contain all UTF characters, and 2) some bytes in
> ISO-8859-1 do not represent any characters. You could however use a column
> defined as CHARACTER SET OCTETS to store the byte representation of a UTF
> string, but then you would need to take care of decoding yourself.
>

No, you can store ALL the UTF8 chars in ISO-8859-1 :)
This is the purpose of UTF8: to stay compatible with all the previous
systems.
UTF8 uses only ASCII > 127 to encode special chars. But as I know
you, I am sure you already knew that... I am just speaking here about
storage, not decoding.

>
> UTF-8 is a variable encoding that requires 1 to 4 bytes to encode
> characters (theoretically 5 and 6 bytes is possible as well, but this is
> unused for compatibility with the number characters that can be encoded in
> UTF-16). This means that Firebird will use upto 4 bytes per character in
> the DB and - afaik - 4 bytes per character in memory because of the 
> way the
> memory buffer is allocated.
>

Take this example: in HTML all special chars are handled like &ecute;,
&lt;, etc... Does that
mean that I need to multiply by 5 the size of the varchar field that I use
to store
HTML-encoded text?? Of course not, except if I store Cyrillic or
Chinese chars...
chinesse char ...

>
> If you target Portugal, Spain, France and Italy, then ISO-8859-1 should be
> enough for your needs.
>

But this is exactly the purpose of UTF8!
That is why they use only chars > 127 to encode the special chars
and leave untouched all the ASCII < 127! So why would I need to make some
nightmare
conversions between Polish chars, Spanish chars, French chars, etc. in
ISO8859-1 when UTF8 is defined exactly for this!

UTF8 is not perfect for Russian or Chinese chars, where 100% of the chars
need to be encoded
(for that it's mostly UTF16); UTF8 is perfect just for Latin chars, where
only 20% of
the chars need to be encoded.

>
> Unfortunately, as Ann indicates, the RLE used by Firebird is per byte, and
> not per character. This means that the compression is less efficient then
> with single byte charactersets because of the way codepoints above 127 are
> encoded, and I believe that there remaining 0x00 bytes at the end of the
> string are also RLE encoded and stored (which I think is something which
> could and should be changed).
>

Yes, but in the current state it's 2x better to use a single-byte charset
(OCTETS or
ISO8859-1 for example) to store UTF8 chars... SAD :(

> > and to finish i add (only) 64000 reccords in both table (only with
> > varchar containing ascii between a..z)
>
> What is the exact content and length and are their repeating characters in
> it?
>

Random length and random ASCII chars (from a to z), but the exact same
length in both databases.
database

> > so the utf8 database is around 35% more bigger than the ISO8859_1
> database!
>
> What happens when you backup and restore the databases?
>

They stay the same.

>
> > select count(*) from TEST_A
> > in iso8859_1: 212 ms
> > in utf8: 382 ms !!! UP to 80% more slower 
>
> I am not 100%, but this probably has to do with the fact that Firebird
> will need to allocate larger buffers in memory for UTF-8 characters.
>

Yes, probably, but it's still far from being a good UTF8 implementation :(


> > when i declare in utf8 varchar(250) i want to reserve 250 bytes not 1000
>
> > bytes and i know (like for exemple in html)
> > that some char can be encoded in more than one byte! if i know that i
> > will handle russian char, i will set UP
> > as varchar(750) and if i know that i will handle only latin language i
> > will set up to varchar(300) ...
> > this setup must be done only by the database administrator ...
>
> That is not how it is supposed to work. You define the size in characters
> as specified in the standards, not in byte. If that is a problem for you,
> you should look at using CHARACTER SET OCTETS and handle your own decoding
> and encoding from and to UTF8.
>
> I think things should (and probably can) be improved, but not by breaking
> the SQL standards.
>

I will simply say:

"Standard is just a word! A system that is 2 times slower is a fact!"

Which do you prefer? Me, I know which one I would not prefer :)

And you even say yourself that, in the true standard, UTF8 can be encoded
in up to 6 bytes! :)






Re: [firebird-support] UTF8 in firebird ?

2012-01-05 Thread Vander Clock Stephane

> If you're only handling Western European alphabets, you could probably
> use Latin-1 (which has a formal name that eludes me at the moment.)
>

But UTF8 is exactly here for that! :( It's just the implementation in
Firebird
that is not good!

> The longer term solution is for the Firebird project to look at its
> data representation and find something that works better with UTF8.
>

Yes, at least some option in the database (or in the CREATE statement) to
define the size in bytes of one UTF8 character.

For example, by default 1 UTF8 char = 4 bytes (as it is now), and I would be
able to
customize it to be equal to 1 byte.





[firebird-support] UTF8 database : how to speed up the database by up to 100% !

2012-01-05 Thread Vander Clock Stephane
Hello,

Why a UTF8 database is not a good choice.

Read this post carefully and you will see how to speed up
your database by up to 100% (if you use UTF8, of course).

I did some benchmarks of a UTF8 database vs an ISO8859_1 database.
I think the way Firebird handles UTF8 databases is not optimal.

Let's speak first about UTF8.
UTF8 is just a way to encode special characters like è, à, etc.;
for this, UTF8 uses combinations of bytes above ASCII #127.
In this way, and not least, UTF8 stays compatible with all
software that works with 8-bit strings. Thanks to this we can use
a Firebird database defined as ISO8859_1 to store UTF8.

Now the questionable choice about UTF8 in Firebird:
Firebird considers that it needs 4 bytes (to be confirmed, I was thinking 3)
to store one UTF8 character. That is true when we speak about Cyrillic
characters, for example, but not when we speak about Latin languages (French, Italian,
Spanish, etc.). In these languages most characters need only one byte
in UTF8 (a, b, c, etc.) and only around 20% (é, à, etc.) of the
characters need 2 bytes in UTF8.
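An easy way to see the real byte usage of a Latin string in UTF8 is CHAR_LENGTH vs
OCTET_LENGTH (both built in since Firebird 2.1); only a small illustration, and the exact
figures may differ slightly depending on the server version:

SELECT CHAR_LENGTH(_UTF8 'référence'),   -- 9 characters
       OCTET_LENGTH(_UTF8 'référence')   -- 11 bytes: only the two 'é' take 2 bytes each
FROM RDB$DATABASE;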

Now let's say I target such countries (Portugal, Spain, France, Italy,
etc.):
what character set will best fit my database? UTF8, of course! But is
it necessary that, when I declare a varchar(255) column,
Firebird handles in the background a buffer of 1020 bytes (255 x 4)? Absolutely not,
since in these languages most characters need only one byte when encoded
in UTF8.

Now you will tell me: is there any penalty for this? After all, varchar
columns
are compressed?

Yes, there is, and I did some benchmarks to prove it.

First I create 2 databases, one in ISO8859_1 and the other in UTF8:

CREATE DATABASE 'c:\test_utf8.fdb'
   USER 'sysdba'
   PASSWORD 'masterkey'
   PAGE_SIZE 4096
   DEFAULT CHARACTER SET UTF8;

CREATE DATABASE 'c:\test_iso88591.fdb'
   USER 'sysdba'
   PASSWORD 'masterkey'
   PAGE_SIZE 4096
   DEFAULT CHARACTER SET ISO8859_1;

After that I add 2 simple tables in each database:

CREATE TABLE TEST_A(
   DATA1 VARCHAR(2000),
   DATA2 VARCHAR(2000),
   DATA3 VARCHAR(2000),
   DATA4 VARCHAR(2000),
   DATA5 VARCHAR(2000),
   DATA6 VARCHAR(5000)
);

CREATE TABLE TEST_b(
   DATA01 VARCHAR(100),
   DATA02 VARCHAR(100),
   DATA03 VARCHAR(100),
   DATA04 VARCHAR(100),
   DATA05 VARCHAR(100),
   DATA06 VARCHAR(100),
   DATA07 VARCHAR(100),
   DATA08 VARCHAR(100),
   DATA09 VARCHAR(100),
   DATA10 VARCHAR(100),
   DATA11 VARCHAR(100),
   DATA12 VARCHAR(100),
   DATA13 VARCHAR(100),
   DATA14 VARCHAR(100),
   DATA15 VARCHAR(100),
   DATA16 VARCHAR(100),
   DATA17 VARCHAR(100),
   DATA18 VARCHAR(100),
   DATA19 VARCHAR(100),
   DATA20 VARCHAR(100),
   DATA21 VARCHAR(100),
   DATA22 VARCHAR(100),
   DATA23 VARCHAR(100),
   DATA24 VARCHAR(100),
   DATA25 VARCHAR(100),
   DATA26 VARCHAR(100),
   DATA27 VARCHAR(100),
   DATA28 VARCHAR(100),
   DATA29 VARCHAR(100),
   DATA30 VARCHAR(100),
   DATA31 VARCHAR(100),
   DATA32 VARCHAR(100),
   DATA33 VARCHAR(100),
   DATA34 VARCHAR(100),
   DATA35 VARCHAR(100),
   DATA36 VARCHAR(100),
   DATA37 VARCHAR(100),
   DATA38 VARCHAR(100),
   DATA39 VARCHAR(100),
   DATA40 VARCHAR(100),
   DATA41 VARCHAR(100),
   DATA42 VARCHAR(100),
   DATA43 VARCHAR(100),
   DATA44 VARCHAR(100),
   DATA45 VARCHAR(100),
   DATA46 VARCHAR(100),
   DATA47 VARCHAR(100),
   DATA48 VARCHAR(100),
   DATA49 VARCHAR(100),
   DATA50 VARCHAR(100)
);

And finally I add (only) 64000 records in both tables (only with
varchar values containing ASCII characters between a..z).

you can use my bench tool software to do the test
http://sourceforge.net/projects/alcinoe/files/alsqlbenchmark/1.01/

at the end:
the size of the test_iso88591.fdb is 264 MB
the size of the test_utf8.fdb is 403 MB

so the UTF8 database is roughly 50% bigger than the ISO8859_1 one (put the other
way, the ISO8859_1 file is about 35% smaller) for the same data!

This difference seems to increase with the amount of data (number of
varchar columns) in the database.

In fact the UTF8 database can be up to 100% bigger in size
than the ISO8859_1 database for the SAME DATA (low ASCII characters)!

Now let's speak about the speed.

In fact, no surprise at all: a bigger file means more I/O, means
more cache, means more memory, means slower :(

To benchmark the speed you must take great care of the Windows cache, the
fragmentation of the file, etc.; not so easy to do ...

But, for example, simply run:

select count(*) from TEST_A
in iso8859_1: 212 ms
in utf8: 382 ms !!! Up to 80% slower.

So you understand: the big problem is that Firebird uses 4 bytes (or 3, I am
still not sure) to handle a UTF8 character,
even when 1 byte would be enough for 80% of the characters :(

I am not even speaking about the maximum index key size, which in UTF8 is 4 times
smaller than in ISO8859_1!
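To make that concrete: the index key limit is roughly a quarter of the page size in
Firebird 2.x (about 1024 bytes with PAGE_SIZE 4096), and a UTF8 character counts for 4
bytes of it. A rough illustration with made-up names (the exact limit also depends on
the collation and page size):

CREATE TABLE T_ISO (TXT VARCHAR(300) CHARACTER SET ISO8859_1);
CREATE INDEX T_ISO_IDX ON T_ISO (TXT);   -- fine: the key is at most 300 bytes

CREATE TABLE T_UTF (TXT VARCHAR(300) CHARACTER SET UTF8);
CREATE INDEX T_UTF_IDX ON T_UTF (TXT);   -- fails: 300 x 4 bytes exceeds the maximum key size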

How to correct this? FIRST, UTF8 is ASCII! Consider UTF8 like HTML,
where a character like é is encoded as &eacute;: it is still
just ASCII at the base!

When I declare varchar(250) in UTF8 I want to reserve 250 bytes, not 1000
bytes, and I know (like for example in HTML)

[firebird-support] UTF8 in firebird ?

2012-01-05 Thread Vander Clock Stephane
Hello,

I want to know if UTF8 is a good choice in Firebird, so I did some
tests. Can you give me your opinion?

I did some benchmarks of a UTF8 database vs an ISO8859_1 database.
I think the way Firebird handles UTF8 databases is not optimal.

Let's speak first about UTF8.
UTF8 is just a way to encode special characters like è, à, etc.;
for this, UTF8 uses combinations of bytes above ASCII #127.
In this way, and not least, UTF8 stays compatible with all
software that works with 8-bit strings. Thanks to this we can use
a Firebird database defined as ISO8859_1 to store UTF8.

Now the questionable choice about UTF8 in Firebird:
Firebird considers that it needs 4 bytes (to be confirmed, I was thinking 3)
to store one UTF8 character. That is true when we speak about Cyrillic
characters, for example, but not when we speak about Latin languages (French, Italian,
Spanish, etc.). In these languages most characters need only one byte
in UTF8 (a, b, c, etc.) and only around 20% (é, à, etc.) of the
characters need 2 bytes in UTF8.

Now let's say I target such countries (Portugal, Spain, France, Italy,
etc.):
what character set will best fit my database? UTF8, of course! But is
it necessary that, when I declare a varchar(255) column,
Firebird handles in the background a buffer of 1020 bytes (255 x 4)? Absolutely not,
since in these languages most characters need only one byte when encoded
in UTF8.

Now you will tell me: is there any penalty for this? After all, varchar
columns
are compressed?

Yes, there is, and I did some benchmarks to prove it.

First I create 2 databases, one in ISO8859_1 and the other in UTF8:

CREATE DATABASE 'c:\test_utf8.fdb'
   USER 'sysdba'
   PASSWORD 'masterkey'
   PAGE_SIZE 4096
   DEFAULT CHARACTER SET UTF8;

CREATE DATABASE 'c:\test_iso88591.fdb'
   USER 'sysdba'
   PASSWORD 'masterkey'
   PAGE_SIZE 4096
   DEFAULT CHARACTER SET ISO8859_1;

After that I add 2 simple tables in each database:

CREATE TABLE TEST_A(
   DATA1 VARCHAR(2000),
   DATA2 VARCHAR(2000),
   DATA3 VARCHAR(2000),
   DATA4 VARCHAR(2000),
   DATA5 VARCHAR(2000),
   DATA6 VARCHAR(5000)
);

CREATE TABLE TEST_b(
   DATA01 VARCHAR(100),
   DATA02 VARCHAR(100),
   DATA03 VARCHAR(100),
   DATA04 VARCHAR(100),
   DATA05 VARCHAR(100),
   DATA06 VARCHAR(100),
   DATA07 VARCHAR(100),
   DATA08 VARCHAR(100),
   DATA09 VARCHAR(100),
   DATA10 VARCHAR(100),
   DATA11 VARCHAR(100),
   DATA12 VARCHAR(100),
   DATA13 VARCHAR(100),
   DATA14 VARCHAR(100),
   DATA15 VARCHAR(100),
   DATA16 VARCHAR(100),
   DATA17 VARCHAR(100),
   DATA18 VARCHAR(100),
   DATA19 VARCHAR(100),
   DATA20 VARCHAR(100),
   DATA21 VARCHAR(100),
   DATA22 VARCHAR(100),
   DATA23 VARCHAR(100),
   DATA24 VARCHAR(100),
   DATA25 VARCHAR(100),
   DATA26 VARCHAR(100),
   DATA27 VARCHAR(100),
   DATA28 VARCHAR(100),
   DATA29 VARCHAR(100),
   DATA30 VARCHAR(100),
   DATA31 VARCHAR(100),
   DATA32 VARCHAR(100),
   DATA33 VARCHAR(100),
   DATA34 VARCHAR(100),
   DATA35 VARCHAR(100),
   DATA36 VARCHAR(100),
   DATA37 VARCHAR(100),
   DATA38 VARCHAR(100),
   DATA39 VARCHAR(100),
   DATA40 VARCHAR(100),
   DATA41 VARCHAR(100),
   DATA42 VARCHAR(100),
   DATA43 VARCHAR(100),
   DATA44 VARCHAR(100),
   DATA45 VARCHAR(100),
   DATA46 VARCHAR(100),
   DATA47 VARCHAR(100),
   DATA48 VARCHAR(100),
   DATA49 VARCHAR(100),
   DATA50 VARCHAR(100)
);

And finally I add (only) 64000 records in both tables (only with
varchar values containing ASCII characters between a..z).

you can use my bench tool software to do the test
http://sourceforge.net/projects/alcinoe/files/alsqlbenchmark/1.01/

at the end:
the size of the test_iso88591.fdb is 264 MB
the size of the test_utf8.fdb is 403 MB

so the UTF8 database is roughly 50% bigger than the ISO8859_1 one (put the other
way, the ISO8859_1 file is about 35% smaller) for the same data!

This difference seems to increase with the amount of data (number of
varchar columns) in the database.

In fact the UTF8 database can be up to 100% bigger in size
than the ISO8859_1 database for the SAME DATA (low ASCII characters)!

Now let's speak about the speed.

In fact, no surprise at all: a bigger file means more I/O, means
more cache, means more memory, means slower.

To benchmark the speed you must take great care of the Windows cache, the
fragmentation of the file, etc.; not so easy to do ...

But, for example, simply run:

select count(*) from TEST_A
in iso8859_1: 212 ms
in utf8: 382 ms !!! Up to 80% slower.

So you understand: the big problem is that Firebird uses 4 bytes (or 3, I am
still not sure) to handle a UTF8 character,
even when 1 byte would be enough for 80% of the characters.

I am not even speaking about the maximum index key size, which in UTF8 is 4 times
smaller than in ISO8859_1!

How to correct this? FIRST, UTF8 is ASCII! Consider UTF8 like HTML,
where a character like é is encoded as &eacute;: it is still
just ASCII at the base!

When I declare varchar(250) in UTF8 I want to reserve 250 bytes, not 1000
bytes, and I know (like for example in HTML)
that some characters can be encoded in more than one byte! If I know

[firebird-support] Bug ?

2012-01-05 Thread Vander Clock Stephane
I don't know if this is a bug; just to confirm:

CREATE DATABASE 'c:\test_utf8.fdb'
   USER 'sysdba'
   PASSWORD 'masterkey'
   PAGE_SIZE 4096
   DEFAULT CHARACTER SET UTF8;

CREATE TABLE TEST_A(
   ID VARCHAR(32) NOT NULL,
   DATA1 VARCHAR(5000),
   DATA2 VARCHAR(5000),
   DATA3 VARCHAR(5000),
   DATA4 VARCHAR(5000),
   DATA6 VARCHAR(5000),
   DATA10 VARCHAR(5000),
  PRIMARY KEY (ID)
);

=> ERROR: new record size is too big (that is expected: six VARCHAR(5000) UTF8 columns need about 120,000 bytes, over the 64 KB record limit)

but now

CREATE TABLE TEST_A(
   ID VARCHAR(32) NOT NULL,
  PRIMARY KEY (ID)
);

=> ERROR cannot create index RDB$PRIMARY2 !!!

stéphane


Re: [firebird-support] Re: Table more more than 100 millions rows ?

2011-12-30 Thread Vander Clock Stephane
 >What kind of data are stored in table?

It is simply the ID of the picture associated with each item in the database.
We access the data in the database only through queries like

select ... where ID = xxx


On 12/30/2011 11:24 AM, karolbieniaszewski wrote:
>
> What kind of data are stored in table?
> Do you search in this table?
> If this is some historical logs (e.g. GPS position of vehicle) then 
> you should divide data not in several tables but in several databases 
> e.g. monthly data then your "master" database will be small,
> faster backup/restore process and easy maintain
>
> Karol Bieniaszewski
>
> --- In firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>, Vander Clock Stephane 
>  wrote:
> >
> > actually except during backup restore, no ... but i want to know with
> > probleme i will face :)
> >
> > On 12/28/2011 8:18 PM, Milan Babuskov wrote:
> > >
> > > Vander Clock Stephane wrote:
> > > > Actually i have a table with more than 100 millions rows :( and it's
> > > > growing growing, i thing we will be around 200 millions rows next
> > > years ...
> > > > is it bad ? or did i need to start to think to split the data in
> > > several
> > > > table ?
> > >
> > > Do you have any problems with it?
> > >
> > > --
> > > Milan Babuskov
> > >
> > > ==
> > > The easiest way to import XML, CSV
> > > and textual files into Firebird:
> > > http://www.guacosoft.com/xmlwizard
> > > ==
> > >
> > >
> >
> >
> > [Non-text portions of this message have been removed]
> >
>
> 





Re: [firebird-support] Table more more than 100 millions rows ?

2011-12-28 Thread Vander Clock Stephane
Actually, except during backup/restore, no ... but I want to know which
problems I will face :)

On 12/28/2011 8:18 PM, Milan Babuskov wrote:
>
> Vander Clock Stephane wrote:
> > Actually i have a table with more than 100 millions rows :( and it's
> > growing growing, i thing we will be around 200 millions rows next 
> years ...
> > is it bad ? or did i need to start to think to split the data in 
> several
> > table ?
>
> Do you have any problems with it?
>
> -- 
> Milan Babuskov
>
> ==
> The easiest way to import XML, CSV
> and textual files into Firebird:
> http://www.guacosoft.com/xmlwizard
> ==
>
> 





[firebird-support] Table more more than 100 millions rows ?

2011-12-27 Thread Vander Clock Stephane
Hello,

Actually I have a table with more than 100 million rows :( and it keeps
growing; I think we will be around 200 million rows next year ...
Is that bad? Or do I need to start thinking about splitting the data across several
tables?

stéphane



Re: [firebird-support] Re: Best raid configuration

2011-12-27 Thread Vander Clock Stephane
I don't understand; it is written there that RAID 10 is the recommended one!?

On 12/27/2011 4:55 PM, Ismael L. Donis Garcia wrote:
>
> I have had never no type of problem. The one that is not recommended 
> is the raid 10.
>
> Best Regards
> =
> || ISMAEL ||
> =
> - Original Message -
> From: trskopo
> To: firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>
> Sent: Monday, December 26, 2011 9:30 PM
> Subject: [firebird-support] Re: Best raid configuration
>
> Hi,
>
> Someone in this forum said that raid 5 is bad for firebird server.
>
> Regards,
> Sugiarto
>
> --- In firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>, "Ismael L. Donis Garcia" 
>  wrote:
> >
> > For my understanding I raid 5 or raid 0
> >
> > Best Regards
> > =
> > || ISMAEL ||
> > =
> > - Original Message -
> > From: Vander Clock Stephane
> > To: firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>
> > Sent: Monday, December 26, 2011 12:37 PM
> > Subject: [firebird-support] Best raid configuration
> >
> >
> >
> > Hello,
> >
> > i need to setup a new fb server with 7 sas hard drive
> > With best configuration i can choose ? raid 0, raid 5, raid 6? etc ...
> >
> > thanks by advance
> > stéphane
> >
>
> [Non-text portions of this message have been removed]
>
> 





Re: [firebird-support] Re: Best raid configuration

2011-12-27 Thread Vander Clock Stephane
From what I see here:

http://www.kendalvandyke.com/2009/02/disk-performance-hands-on-part-5-raid.html

"Generally speaking, it's common knowledge that RAID 5 offers better read
performance while RAID 10 offers better write performance."

RAID 5:

about 10% faster in read than RAID 10
!! 65% !! slower in write than RAID 10!




On 12/27/2011 6:30 AM, trskopo wrote:
>
> Hi,
>
> Someone in this forum said that raid 5 is bad for firebird server.
>
> Regards,
> Sugiarto
>
> --- In firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>, "Ismael L. Donis Garcia" 
>  wrote:
> >
> > For my understanding I raid 5 or raid 0
> >
> > Best Regards
> > =====
> > || ISMAEL ||
> > =
> > - Original Message -
> > From: Vander Clock Stephane
> > To: firebird-support@yahoogroups.com 
> <mailto:firebird-support%40yahoogroups.com>
> > Sent: Monday, December 26, 2011 12:37 PM
> > Subject: [firebird-support] Best raid configuration
> >
> >
> >
> > Hello,
> >
> > i need to setup a new fb server with 7 sas hard drive
> > With best configuration i can choose ? raid 0, raid 5, raid 6? etc ...
> >
> > thanks by advance
> > stéphane
> >
>
> 





Re: [firebird-support] What the best big table or several small table

2011-12-27 Thread Vander Clock Stephane

> > For this i setup my location ID as a string like this :
> >
> > for US > NEW YORK(State) > NEW YORK(City) the id will be
> > US-0001234-0001245
> >
> > for US > NEW YORK(State) > NEW YORK(City) > Manhattan the id will be
> > US-0001234-0001245--0001298
>
> Stupid question, maybe, but why not use postal codes? - they're predefined
> and denser. At least in the US, codes tend to be well-localized, meaning
> that 02139 is adjacent to 02138 and both are in Cambridge Mass.
>

It's not a postcode but just a geographical ID:
every location has a unique integer ID,
and we add a secondary string ID based on the IDs of its parents.

Manhattan's ID is 0001298 (integer);
Manhattan's secondary ID is US-0001234-0001245--0001298 (string).


> >
> > now in my query if an user want to see all the property in manhattan i
> > simply do
> > Where Location like 'US-0001234-0001245--0001298%';
> >
>
> OK. This query has one of two problems. The more likely is that you're
> not using parameterized queries. Parameterized queries are queries where
> you prepare the query without the input values then execute sending just
> the values. That saves the cost of parsing, compiling, and optimizing the
> query over and over again.
>

Yes, but I think this is negligible, no?


>
> The second, and less likely, is that you're using LIKE with a parameter,
> which means that the condition can't use an index because the parameter
> could start with a '%'. Instead of LIKE use STARTING WITH (or is it
> STARTS WITH?).
>

It is always STARTING WITH.
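For the record, a small sketch of what that looks like (the table name is made up and
parameter markers are shown as '?'); unlike LIKE with a parameter, STARTING WITH can
always use the index on LOCATION:

SELECT FIRST 200 *
FROM PROPERTY
WHERE LOCATION STARTING WITH ?     -- e.g. 'US-0001234-0001245'
  AND NB_BEDROOM = ?;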


>
> > and for query like
> >
> > where Location like 'US%' and nb_bedroom > 2 and nb_bedroom < 3
> >
> > i m in trouble because of the "like" i can not use the multiple column
> > index on (LOCATION, NB_BEDROOM) ...
> > i can use only the single column index on location + single column 
> index
> > on nb_bedroom (and believe me speed is not the same)
>
> Right. But you could use an index on (NB_BEDROOM, LOCATION), if
> you were prepared to build the query as NB_BEDROOM = x and LOCATION
> starting with y OR NB_BEDROOM = x+1 and 
>

Yes, but unfortunately, while that can work with the number of bedrooms, it will not work
with fields like price or surface, for example, because the gap between min and
max is much
bigger :(


> >
> > so for this i simply "duplicate" the row in the datase!
> >
>
> Arrggg!!! It's well known that if you've got two copies of the same 
> data, at least one is wrong. Having four copies of every piece of data 
> is ... well, not generally considered good design.
>

I know, I know ... but apart from "denormalizing" the database I don't see
any solution to my problem :(


> How about having separate fields with each part of the location, so if 
> you know you're
> looking for a specific block in Manhattan, you look for
>
> LOCATION_COUNTRY = 'US' and
> LOCATION_CITY =  and
> LOCATION_ARRONDISMENT =  and
> LOCATION_CARRE = 
>
> If you need less detail, ask for fewer fields. You'd have more 
> indexes, but if the read load is substantially heavier than the update 
> load - which I'd guess it is by 100:1 or more - then more indexes 
> aren't that much of a problem.
>

I actually thought about this, but depending on the country there are sometimes
more than 10 levels!
E.g.: United States > Alaska > Aleutian Islands > Aleutian Islands > Fox
Islands > Krenitzin Islands > Akutan Island > Akutan

So I would need to add at least all these fields:

location_level1 (country)
location_level2
location_level3
location_level4
location_level5
location_level6
location_level7
location_level8
location_level9
location_level10

But this will not help my queries like:

where
   location_level3 = 0023954 and
   Price > 10 and
   price < 20
order by
   location_level3, CREATION_date

Which index should be used?? A multi-column index on
(location_level3, Price) OR
(location_level3, Creation_date) ?

OK, I actually have the same problem in my current design,
but here I would also need to create far too many indexes to cover all the
possible cases!

(location_level1, Price)
(location_level1, Creation_date)
(location_level2, Price)
(location_level2, Creation_date)
...
(location_level10, Price)
(location_level10, Creation_date)


In my design, where I duplicate the rows, I can have queries like:

where
   location = 0023954 and
   Price > 10 and
   price < 20
order by
   CREATION_date

and I only need to create indexes like these (sketched just below):

(location, Price)
(location, Creation_date)
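Putting the pieces above together, a rough sketch of this denormalized design; the table
name, index names and values are only illustrative:

CREATE TABLE PROPERTY_SEARCH (
   LOCATION      VARCHAR(40) NOT NULL,   -- one row per ancestor location of the property
   PRICE         INTEGER,
   CREATION_DATE TIMESTAMP
);
CREATE INDEX PROP_LOC_PRICE ON PROPERTY_SEARCH (LOCATION, PRICE);
CREATE INDEX PROP_LOC_DATE  ON PROPERTY_SEARCH (LOCATION, CREATION_DATE);

SELECT FIRST 200 SKIP 0 *
FROM PROPERTY_SEARCH
WHERE LOCATION = '0023954' AND PRICE > 100000 AND PRICE < 200000
ORDER BY LOCATION, CREATION_DATE;   -- LOCATION is fixed by the WHERE clause, so the
                                    -- (LOCATION, CREATION_DATE) index can drive the sort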





[firebird-support] Best raid configuration

2011-12-26 Thread Vander Clock Stephane
Hello,

I need to set up a new FB server with 7 SAS hard drives.
Which configuration should I choose? RAID 0, RAID 5, RAID 6, etc.?

thanks by advance
stéphane


Re: [firebird-support] About nbackup

2011-12-26 Thread Vander Clock Stephane
THANKS Dmitry for this long explanation!

 > what happen if for exemple we increase the width of
 > one varchar column during the backup ?
 >bad question.

:(

 >So, it's like snapshot that you make in time.
 >For example, Level 0 - each month, level 1 - each week, and level 2 -
 >each day.

Yes, I understand, but with a level 2 each day it means that if I am on
day 4 since the last
level 1, nbackup will back up the changes from the last 4 days ... which
means a bigger
backup file and more I/O ... and this when I have already
backed up the changes of the 3 previous days :(



On 12/26/2011 1:45 AM, Dmitry Kuzmenko wrote:
>
> Hello, Vander!
>
> Sunday, December 25, 2011, 10:57:32 PM, you wrote:
>
> VCS> for exemple i see that during the backup, the database file is locked
> VCS> and all write are done in a separate delta file ... is this can slow
> VCS> down the server ?
> VCS> especially when the separate file is merge back with
> VCS> the database file ?
> Sure, because while writes goes to delta is ok, copying or scanning
> database for nbackup levels produces additional IO.
>
> VCS> what happen if for exemple we increase the width of
> VCS> one varchar column during the backup ?
> bad question.
>
> VCS> also i not understand the level strategy, in the doc they say we 
> will
> VCS> rarelly go beyong level 3 or 4 but me what i want to do is to 
> save on a
> VCS> daily basis ... so for me it's will be 30 levels min (because every
> VCS> month i will reset)... what wrong in this ?
>
> 30 levels is nonsense. Also I doubt that nbackup can handle
> command line with 30 files specified.
> Level 0 - full copy of DB.
> Level 1 - only pages that was changed from level 0
> Level 2 - only pages that was changed from level 1
> etc.
>
> So, it's like snapshot that you make in time.
> For example, Level 0 - each month, level 1 - each week, and level 2 -
> each day.
> Thus with only 3 levels (from 0 to 2) you can restore copy of the
> database for each day. Levels goes this way:
> 0
> -1
> --2 - pages that changed from level 1
> -1 - again, only pages changed from making level 0
> --2 - only pages changed from level 1
>
> So, you can combine some level 2 only with level 1 made before it, not
> after, of course.
> Creating such "a ladder" with 30 levels doesn't make any sense,
> because on restore database can be combined with levels down from
> highest, i.e. 2+1+0 in that example.
> As you plan to make each day 1 level plus, you will need for example
> at 15-th day to specify all 15 levels you made. Instead,
> you can make level 1 backup each day, and then use level 0+1 to
> restore database for particular day starting from the current month.
>
> Depending of the application activity, each level will have different
> size. It is hard to predict level sizes, because applications can
> rewrite same data, or different data.
>
> So, here is IO also. Using 30 levels needs to apply them all, one by
> one to the level 0 backup.
>
> And, remember, that nbackup makes physical copy of the database,
> page by page, and it is not interested about data or transaction
> numbers, etc.
>
> You can make level 0 copy of the database if you just stop the
> Firebird and copy database to another file.
>
> VCS> and to finish how nbackup can know what was changed from the last 
> backup
> VCS> ?
>
> see above.
>
> VCS> does it's mean that every page in the database file have a version
> VCS> number ?
>
> It starts to have marks after each level of nbackup you make.
> That is how nbackup understands what pages was changed from the
> last nbackup level made.
>
> -- 
> Dmitry Kuzmenko, www.ib-aid.com
>
> 





Re: [firebird-support] gbak -CREATE_DATABASE -buffers 0 => not work :(

2011-12-26 Thread Vander Clock Stephane
 >I'm curious, what error,

gbak: ERROR:expected page buffers, encountered "0"
gbak:Exiting before completion due to errors

 >and why you ever need -buffers 0
 >in the command line, since it is default value for the new
 >created database?

Because it is not written clearly that the default value for
buffers is 0, and I have the habit of always specifying
default values explicitly, because you never know what they will be
in future versions ...


On 12/26/2011 1:47 AM, Dmitry Kuzmenko wrote:
>
> Hello, Vander!
>
> Sunday, December 25, 2011, 5:56:45 PM, you wrote:
>
> VCS> hello,
>
> VCS> is it a bug
>
> VCS> gbak -CREATE_DATABASE -buffers 0 ... => gave an error
>
> I'm curious, what error, and why you ever need -buffers 0
> in the command line, since it is default value for the new
> created database?
>
> -- 
> Dmitry Kuzmenko, www.ib-aid.com
>
> 





Re: [firebird-support] Index Strategy

2011-12-26 Thread Vander Clock Stephane
 >With Firebird, the choices are simpler, though not better. In the
 >absence of the FIRST and ORDER clauses, it will use an index. on
 >Price, plus an index on Location, plus an index on Name, plus An
 >index on Year. With those clauses, it can use only Creation_Date
 >because it will emulate the sort with an index walk, and can
 >walk only one index

Yes, my problem is the ORDER BY :(

Actually I MUST specify the PLAN manually.

1/ If I think that LOCATION and NAME will be a selective enough filter, I use
INDEX (LOCATION, NAME, CREATION_DATE);
this way there is no in-memory ORDER BY, and the filter on the other fields
(price, year, etc.) is applied manually row by row:

WHERE LOCATION = xxx and NAME = yyy
ORDER BY LOCATION, NAME, CREATION_DATE


2/ If LOCATION and NAME are too wide a filter
(they return too many rows), then I use one of the most selective of these:
 > INDEX (LOCATION, NAME, PRICE)
 > INDEX (LOCATION, NAME, YEAR)
 > INDEX (PRICE)
 > INDEX (YEAR)
 > INDEX (CREATION_DATE)
and the ORDER BY is done in memory on the returned result.

The problem is that this is far from a perfect solution:

1/ How do I know which index to use? (Currently I benchmark each query and
remember the
result to know whether the previous choice was accurate or not.)

2/ I always need to choose between an index based on the ORDER BY
and one based on the criteria ... but sometimes it is not possible to get a good
result.


But I think this situation is very common (for anyone who needs the
ORDER BY), which is
why I am curious how others deal with such a problem.
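For reference, a minimal sketch of the two strategies described above; the index names,
plan hints and values are only illustrative, and ascending indexes are assumed:

-- Assumed indexes:
--   CREATE INDEX IDX_LOC_NAME_DATE  ON CAR (LOCATION, NAME, CREATION_DATE);
--   CREATE INDEX IDX_LOC_NAME_PRICE ON CAR (LOCATION, NAME, PRICE);

-- Strategy 1: let the compound index drive the sort (no in-memory ORDER BY);
-- the remaining conditions (PRICE, YEAR) are filtered row by row.
SELECT FIRST 100 SKIP 0 *
FROM CAR
WHERE LOCATION = 'NEW YORK CITY' AND NAME = 'BMW'
  AND PRICE > 30000 AND PRICE < 40000 AND YEAR > 2004
ORDER BY LOCATION, NAME, CREATION_DATE
PLAN (CAR ORDER IDX_LOC_NAME_DATE);

-- Strategy 2: filter with the most selective index and sort the (hopefully small)
-- result set in memory.
SELECT FIRST 100 SKIP 0 *
FROM CAR
WHERE LOCATION = 'NEW YORK CITY' AND NAME = 'BMW'
  AND PRICE > 30000 AND PRICE < 40000 AND YEAR > 2004
ORDER BY CREATION_DATE
PLAN SORT ((CAR INDEX (IDX_LOC_NAME_PRICE)));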

stéphane


On 12/26/2011 11:22 AM, Ann Harrison wrote:
>
>
> Stephane,
>
> >
> > I have a problem that i thing any web site can have !
> >
> > Let say i m a car dealer WebSite and want to permit user to search my
> > zillions car Firebrd database
> >
> > Typycal search will be
> >
> > SELECT
> > FIRST 100 SKIP 0,
> > *
> > FOR
> > CAR
> > Where
> > NAME = 'BMW' AND
> > LOCATION = 'NEW YORK CITY' AND
> > PRICE > 3 AND
> > PRICE < 4 AND
> > YEAR > 2004
> > ORDER BY CREATION_DATE DESC
> >
> >
> > So here what index strategy to choose ?
> >
> > INDEX (LOCATION, NAME, PRICE) ?
> > INDEX (LOCATION, NAME, YEAR) ?
> > INDEX (LOCATION, NAME, CREATION_DATE) ?
> > INDEX (PRICE) ?
> > INDEX (YEAR) ?
> > INDEX (CREATION_DATE) ?
>





Re: [firebird-support] 1000 databases per server ?

2011-12-25 Thread Vander Clock Stephane
Thanks Dmitry!



On 12/26/2011 1:54 AM, Dmitry Kuzmenko wrote:
>
> Hello, Vander!
>
> Sunday, December 25, 2011, 5:54:51 PM, you wrote:
>
> VCS> yes, but what i want to know if for the "same amout of data",
> VCS> is their any disadvantage (or advantage) to use several database 
> instead
> VCS> of one ?
>
> I don't see any need to keep 1000 client's data in one database,
> except you need to make queries over some or all clients data by one
> sql statement.
> So, if each client's data is separate from others, than it is no
> any advantage to keep them in one database.
> Also, in terms of failure or maintenance keeping several client's data
> in one database causes threats to grow.
>
> If you have 10k clients, I do not suggest to separate them all
> to 10k databases, because it was already spoken that it can cause
> problems with the filesystem.
> Anyway, splitting clients to different databases allow you to
> handle databases on different servers, so, you can scale your
> system performance (instead of trying to fix problems with 10k
> clients and 1 database for all of them).
>
> -- 
> Dmitry Kuzmenko, www.ib-aid.com
>
> 





Re: [firebird-support] Index Strategy

2011-12-25 Thread Vander Clock Stephane
Not sure I understand :)

On 12/26/2011 11:11 AM, Ann Harrison wrote:
>
>
>
> Cheers,
>
> Ann
>
> On Dec 22, 2011, at 4:12 AM, Vander Clock Stephane 
> mailto:svanderclock%40yahoo.fr>> wrote:
>
> > Hello,
> >
> > I have a problem that i thing any web site can have !
> >
> > Let say i m a car dealer WebSite and want to permit user to search my
> > zillions car Firebrd database
> >
> > Typycal search will be
> >
> > SELECT
> > FIRST 100 SKIP 0,
> > *
> > FOR
> > CAR
> > Where
> > NAME = 'BMW' AND
> > LOCATION = 'NEW YORK CITY' AND
> > PRICE > 3 AND
> > PRICE < 4 AND
> > YEAR > 2004
> > ORDER BY CREATION_DATE DESC
> >
> >
> > So here what index strategy to choose ?
> >
> > INDEX (LOCATION, NAME, PRICE) ?
> > INDEX (LOCATION, NAME, YEAR) ?
> > INDEX (LOCATION, NAME, CREATION_DATE) ?
> > INDEX (PRICE) ?
> > INDEX (YEAR) ?
> > INDEX (CREATION_DATE) ?
> >
> > because all of this depend on how many row will be returned !
> >
> > for exemple if many many rows are returned by the filter then it's will
> > be better
> > to retrieve the row in the ORDER BY and filter them row by row :
> >
> > SELECT
> > ...
> > ORDER BY NAME, LOCATION, CREATION_DATE DESC
> >
> > INDEX (LOCATION, NAME, CREATION_DATE) ?
> >
> > but in other way i only few row are returned by the query then it's 
> will
> > be better
> > retrieve the row already filtered and order them in memory :
> >
> > SELECT
> > ..
> > ORDER BY CREATION_DATE DESC
> >
> > using the most(S) usefull index :
> > INDEX (LOCATION, NAME, PRICE) ?
> > INDEX (LOCATION, NAME, YEAR) ?
> > INDEX PRICE
> > INDEX YEAR
> >
> > but here also how to know with index will be usefull ??
> > sometime price sometime year, sometime both ...
> >
> >
> > How do you handle such situation ? here the biggest probleme is
> > of course the need of the order by
> >
> >
> > thanks you by advance
> > stéphane
> >
> >
> > 
> >
> > ++
> >
> > Visit http://www.firebirdsql.org and click the Resources item
> > on the main (top) menu. Try Knowledgebase and FAQ links !
> >
> > Also search the knowledgebases at http://www.ibphoenix.com
> >
> > ++
> > Yahoo! Groups Links
> >
> >
> >
>
> 





[firebird-support] About nbackup

2011-12-25 Thread Vander Clock Stephane
Hello,

On paper nbackup looks wonderful, but what is going on behind the scenes that
we need to know about?

For example, I see that during the backup the database file is locked
and all writes go to a separate delta file ... can this slow
down the server, especially when the separate file is merged back into
the database file? And what happens if, for example, we increase the width of
a varchar column during the backup?
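For context, the delta mechanism above can also be driven from SQL; as far as I
understand, this is what nbackup uses under the hood:

ALTER DATABASE BEGIN BACKUP;   -- freeze the main file; changes go to the .delta file
-- ... copy the database file at the OS level ...
ALTER DATABASE END BACKUP;     -- merge the delta back into the main file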

Also, I don't understand the level strategy. The docs say we will
rarely go beyond level 3 or 4, but what I want to do is back up on a
daily basis ... so for me that would be at least 30 levels (because every
month I would reset). What is wrong with that?

And finally, how does nbackup know what has changed since the last backup?
Does it mean that every page in the database file has a version
number?

thanks for all
stéphane


[firebird-support] gbak -CREATE_DATABASE -buffers 0 => not work :(

2011-12-25 Thread Vander Clock Stephane
hello,

Is this a bug?

gbak -CREATE_DATABASE -buffers 0 ... => gives an error :(

thanks by advance
stéphane



Re: [firebird-support] 1000 databases per server ?

2011-12-25 Thread Vander Clock Stephane
Yes, but what I want to know is: for the "same amount of data",
is there any disadvantage (or advantage) in using several databases instead
of one?

On 12/25/2011 3:42 PM, Milan Babuskov wrote:
>
> Vander Clock Stephane wrote:
> > so is it a probleme to have on one server 1000 databases (does it's 
> will
> > use more memory or more
> > power resource than a single database) ?
>
> As for Firebird, this depends on the server architecture used and
> available RAM and your expectation of concurrent users using different
> databases at the same time.
>
> Beside that, this could pose a problem for some filesystems if you
> stored them all in a single directory. I suggest you desing a hierarchy
> of subdirectories in such way that you do not have more than 100
> database files in each.
>
> -- 
> Milan Babuskov
>
> ==
> The easiest way to import XML, CSV
> and textual files into Firebird:
> http://www.guacosoft.com/xmlwizard
> ==
>
> 





[firebird-support] 1000 databases per server ?

2011-12-24 Thread Vander Clock Stephane
Hello,

I want to split our main database into several databases to use the power
of several servers.
I need to choose between one database for several clients or one
database per client.
We have around 10 000 clients, so that means 10 000 databases (it will
not be 1 server
of course; say 1000 clients per server).

So is it a problem to have 1000 databases on one server (will they
use more memory or more
resources than a single database)?

thanks you by advance
stéphane


Re: [firebird-support] backup very slow and how often sweep a database ? - Email found in subject

2011-12-24 Thread Vander Clock Stephane
I think I found how it happened!!

The gbak switch for the user is -user and the one for "no reserve" is -use, and if I
am not mistaken both can be abbreviated to "-u"!
I am sure it happened somewhat like this :(

stéphane


On 12/23/2011 10:13 AM, Thomas Steinmaurer wrote:
>
> > As i say in my previous message (as always half of my messages never
> > appear in this forum)
> > I really not understand why my database become NO RESERVE :(
>
> Perhaps by "accident" via your access components?
>
> -- 
> With regards,
> Thomas Steinmaurer (^TS^)
> Firebird Technology Evangelist
>
> http://www.upscene.com/
> http://www.firebirdsql.org/en/firebird-foundation/
>
> > i never do this ! and i don't know for how many time it's like this ...
> > but really strange
> >
> > On 12/23/2011 12:15 AM, Leyne, Sean wrote:
> >>
> >> Dmitry,
> >>
> >>> Next, your database have NO RESERVE flag turned on.
> >>> This is BAD, since you speaking about updates.
> >>
> >> Good catch! I completely missed that!
> >>
> >> I agree, this is VERY BAD!
> >>
> >> Sean
> >>
> >>
> >
> >
> > [Non-text portions of this message have been removed]
> >
> >
> >
> > 
> >
> > ++
> >
> > Visit http://www.firebirdsql.org and click the Resources item
> > on the main (top) menu. Try Knowledgebase and FAQ links !
> >
> > Also search the knowledgebases at http://www.ibphoenix.com
> >
> > ++
> > Yahoo! Groups Links
> >
> >
> >
>
> 





Re: [firebird-support] What the best big table or several small table

2011-12-24 Thread Vander Clock Stephane

>
> Are you really using concatenated strings containing several distinct
> dataitems as your key? Given your example the identifier for Manhattan
> is 27 bytes, if you would use a key of seperate fields you need a
> CHAR(2), and 3 INTEGERs so 2 bytes + 3*4 bytes = 14 bytes. Assuming that
> the identifier for NEW YORK(State) is globally unique, you probably
> don't even need to use the US prefix losing 2 bytes.
>

Yes, it could be better ... it was just done like this at the beginning.



> Individual fields:
> WHERE country_code = 'US' and nb_bedroom > 2 and nb_bedroom < 3
>
> (BTW: I assume you are aware that that condition is always false, unless
> New York as fractional rooms ;)
>

Stupid example, sorry :(





Re: [firebird-support] backup very slow and how often sweep a database ? - Email found in subject

2011-12-22 Thread Vander Clock Stephane
But I connect through the API and only through the API :(
It is really strange ... anyway, I will investigate more carefully.
Also, before FB 2.5.1 I was using a snapshot build of Firebird ...
Anyway, now that I know about it I will take care of it.

thanks for all

On 12/23/2011 10:13 AM, Thomas Steinmaurer wrote:
>
> > As i say in my previous message (as always half of my messages never
> > appear in this forum)
> > I really not understand why my database become NO RESERVE :(
>
> Perhaps by "accident" via your access components?
>
> -- 
> With regards,
> Thomas Steinmaurer (^TS^)
> Firebird Technology Evangelist
>
> http://www.upscene.com/
> http://www.firebirdsql.org/en/firebird-foundation/
>
> > i never do this ! and i don't know for how many time it's like this ...
> > but really strange
> >
> > On 12/23/2011 12:15 AM, Leyne, Sean wrote:
> >>
> >> Dmitry,
> >>
> >>> Next, your database have NO RESERVE flag turned on.
> >>> This is BAD, since you speaking about updates.
> >>
> >> Good catch! I completely missed that!
> >>
> >> I agree, this is VERY BAD!
> >>
> >> Sean
> >>
> >>
> >
> >
> > [Non-text portions of this message have been removed]
> >
> >
> >
> > 
> >
> > ++
> >
> > Visit http://www.firebirdsql.org and click the Resources item
> > on the main (top) menu. Try Knowledgebase and FAQ links !
> >
> > Also search the knowledgebases at http://www.ibphoenix.com
> >
> > ++
> > Yahoo! Groups Links
> >
> >
> >
>
> 





Re: [firebird-support] backup very slow and how often sweep a database ? - Email found in subject

2011-12-22 Thread Vander Clock Stephane
I agree too (thanks again Sean).

As I said in my previous message (as always, half of my messages never
appear in this forum),
I really don't understand why my database became NO RESERVE :(

I never did this! And I don't know for how long it has been like this ...
really strange.

On 12/23/2011 12:15 AM, Leyne, Sean wrote:
>
> Dmitry,
>
> > Next, your database have NO RESERVE flag turned on.
> > This is BAD, since you speaking about updates.
>
> Good catch! I completely missed that!
>
> I agree, this is VERY BAD!
>
> Sean
>
> 





[firebird-support] read only read commited transaction - is it safe to let them open undefinitively ?

2011-12-22 Thread Vander Clock Stephane
Hello,

Is it safe to leave a read-only read-committed
transaction open indefinitely?
Can it disturb something in the server (sweep? backup? etc.)?

stéphane


[firebird-support] Index Strategy

2011-12-22 Thread Vander Clock Stephane
Hello,

I have a problem that I think any web site can have!

Let's say I am a car dealer website and want to let users search my
Firebird database of zillions of cars.

A typical search would be:

SELECT
   FIRST 100 SKIP 0
   *
FROM
   CAR
WHERE
 NAME = 'BMW' AND
 LOCATION = 'NEW YORK CITY' AND
 PRICE > 3 AND
 PRICE < 4 AND
 YEAR > 2004
ORDER BY CREATION_DATE DESC


So which index strategy should I choose here?

INDEX (LOCATION, NAME, PRICE) ?
INDEX (LOCATION, NAME, YEAR) ?
INDEX (LOCATION, NAME, CREATION_DATE) ?
INDEX (PRICE) ?
INDEX (YEAR) ?
INDEX (CREATION_DATE) ?

Because all of this depends on how many rows will be returned!

For example, if many rows are returned by the filter, then it is
better
to retrieve the rows in ORDER BY order and filter them row by row:

SELECT
...
ORDER BY NAME, LOCATION, CREATION_DATE DESC

INDEX (LOCATION, NAME, CREATION_DATE) ?

But if, on the other hand, only a few rows are returned by the query, then it is
better to
retrieve the rows already filtered and order them in memory:

SELECT
..
ORDER BY CREATION_DATE DESC

using the most useful index(es):
INDEX (LOCATION, NAME, PRICE) ?
INDEX (LOCATION, NAME, YEAR) ?
INDEX PRICE
INDEX YEAR

But here also, how do I know which index will be useful??
Sometimes price, sometimes year, sometimes both ...


How do you handle such a situation? The biggest problem here is
of course the need for the ORDER BY.


thanks you by advance
stéphane


Re: [firebird-support] What the best big table or several small table

2011-12-22 Thread Vander Clock Stephane

> > i have one table with lot of index (around 30, record around 20 
> millions)
>
> 30 indices. And they all are useful? How selective are they?
>

Yes, all of them are necessary :( but I think I will start a new thread to talk
about
the indexes, because I have some more questions that may be interesting
to someone else.
for someone else


>
> > We access the data only throught query like
> >
> > select First 200 skip 0  from MyTable where c = XY and d = zw and A
> > > x and b< Z ... ORDERBY C, D
> >
> > what is the best :
>
> Take an eye on the execution plan, I/O stats in the monitoring table or
> via the trace api.
>
> Btw, you know that with the usage of FIRST above, you probably won't get
> your expected result?
>
> You probably want to do:
>
> select first 200 skip 0 ... from (
> select ... from order by c, d
> )
>

Hmm, I don't understand, where is the difference?


> > 1/ One table with lot of record ?
> > 2/ Several tables with fewer records ?
>
> Things are getting more complicated with several tables IMHO. So if you
> don't have a strong reason e.g. maintenance or performance wise, go with
> one table.
>
> Basically, Firebird can handle tables with 20 millions records. It
> vastly depends on how optimized the access is.
>
> > because i can split easyly the number of row in 10 differents tables
> >
> > also is an empty on not very used table use firebird ressource ?
>
> Sorry, I don't understand this question.
>

1. OK, let's say I am a real estate website and I have properties for sale
all over the world.
For this I must allow the user to see all properties in one location (for
example New York),
but also in the locations inside it (for example Manhattan).

For this I set up my location ID as a string like this:

for US > NEW YORK(State) > NEW YORK(City) the id will be
US-0001234-0001245

for US > NEW YORK(State) > NEW YORK(City) > Manhattan the id will be
US-0001234-0001245--0001298

etc...

Now, in my query, if a user wants to see all the properties in Manhattan, I
simply do:
Where Location like 'US-0001234-0001245--0001298%';

And to see all the properties in New York City (including Manhattan):
Where Location like 'US-0001234-0001245%';

And to see all the properties in the US:
Where Location like 'US%';

So that is OK, BUT the problem is that I also need to add some other filter
criteria (like nb_room, surface, etc.),

and for a query like

where Location like 'US%' and nb_bedroom > 2 and nb_bedroom < 3

I am in trouble: because of the "like" I cannot use the multi-column
index on (LOCATION, NB_BEDROOM) ...
I can only use the single-column index on LOCATION plus the single-column index
on NB_BEDROOM
(and believe me, the speed is not the same).

So for this I simply "duplicate" the rows in the database!

For every property with Location 'US-0001234-0001245--0001298' I
create 4 rows in
the database with these locations:

Row 1: US
Row 2: US-0001234
Row 3: US-0001234-0001245
Row 4: US-0001234-0001245--0001298

In this way I can easily use my multi-column index for the query
where Location = 'US' and nb_bedroom > 2 and nb_bedroom < 3

But now you understand ... I can either keep one table for all the rows OR
split these rows into 4 different tables:

TABLE_LOCATION_LEVEL1
Row 1: US

TABLE_LOCATION_LEVEL2
Row 2: US-0001234

TABLE_LOCATION_LEVEL3
Row 3: US-0001234-0001245

TABLE_LOCATION_LEVEL3
Row 4: US-0001234-0001245--0001298


But before doing so I just need to know the cost
of having multiple tables instead of one big one!
These tables would also live in different databases / servers to reduce the
load on the
main database.













Re: RES: [firebird-support] Re: Optimizing Firebird for SSD

2011-12-22 Thread Vander Clock Stephane
I think that a RAM disk with a solid power supply can be a good choice.
As soon as a power failure is detected, the software stops Firebird and
copies the entire RAM disk to the RAID disks before the UPS goes down.
The whole question is finding the right software and doing all the tests.

In fact, that is what your RAID controller does internally :) it has a small
battery
(it can keep the data in memory, generally a 256 MB cache, for 72 hours),
and as soon as it restarts after a power failure it writes the data to the disks!



On 12/21/2011 10:34 AM, karolbieniaszewski wrote:
>
> Do you know what is RAM DISC?
> If not then "google" for it ;-)
>
> I can simple say that RAM DISC is some virtual disc created in RAM MEMORY.
> After creation you use it as normal hard drive (copy database on it 
> and use as normal) with all benefits of RAM speed :)
>
> but you must remember that if power failure or some hardware error 
> occur and your server stop - all data from RAM will be cleared.
> Because of that you must do backup in some short interval.
>
> e.g. your database is 3GB you can create RAM DISC 4GB and do backup to 
> normal HARD DISC or better you can create RAM DISC 8GB and do backup 
> to also RAM DISC and when backup process completed simple copy backup 
> file to HARD DISC what is tone faster.
>
> RAM is now cheap - but i do not know what is your database size and 
> what scenario can be accomplished.
>
> --- In firebird-support@yahoogroups.com 
> , "Fabiano" 
>  wrote:
> >
> > How put the entire database on RAM Disk?
> >
> >
>
> 





[firebird-support] What the best big table or several small table

2011-12-21 Thread Vander Clock Stephane
Hello,

I have one table with a lot of indexes (around 30; around 20 million records).

We access the data only through queries like:

select first 200 skip 0 * from MyTable where C = xy and D = zw and A > x and B < z ...
order by C, D

What is best:

1/ One table with lots of records?
2/ Several tables with fewer records?

(I can easily split the rows across 10 different tables.)

Also, does an empty or rarely used table consume Firebird resources?

thanks for your help


Re: [firebird-support] backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane

> Long running read committed read-only transactions aren't a problem at
> all. If you are using Firebird 2.5, I wonder what the Trace API returns
> in respect to transaction tracing.
>

OK, so I will start thinking about a transaction pool (only for read-only
read-committed transactions).

Can you confirm:

if a read-only transaction (isc_tpb_read + isc_tpb_read_committed +
isc_tpb_rec_version)
dies without any commit or rollback (network
error, application bug, etc.), does it really not matter to the Firebird server?
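For reference, I believe the SQL-level equivalent of that TPB is the following (just to
make the isolation settings explicit; syntax as in the Firebird 2.x docs, to be
double-checked):

SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED RECORD_VERSION;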

Also, what lifetime should I allow for each pooled transaction before
committing it? 30 s? 1 min? 10 min? more?

Thanks by advance





Re: [firebird-support] backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane

>
> That's quite a number, when running in 24x7 mode! Although due to the
> example above, it seems you do explicit transaction management. So,
> either you still use automcommit somewhere, which results in a new
> transaction for every piece of work, our you have a lot of connections
> and work to be done by explicit transactions.
>
> I really think that you are using autocommit as well. The least
> expensive transactions are READ COMMIT READ ONLY, so basically make use
> of at least read-only transactions, when you are selecting stuff from
> the database and don't write.
>

Not really autocommit, but I don't leave any transaction open.
When the code needs some data, it calls a global procedure that gives
it the data it needs. The procedure opens (and closes) a transaction
each time and queries the FB server.

Now I could open a transaction when each thread starts and close it when the
thread ends, but I always hear that transactions must be as short as possible :(
stéphane





Re: [firebird-support] Re: backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane

> > to select or update the data we do only like this :
> >
> > procedure doSQL
> > begin
> > StartTransaction
> > Try
> > select or update data
> > committransaction
> > except
> > rollbacktransaction
> > end;
> > end;
>
> I'm curious what access components you use? Perhaps ZEOS?
>

No, direct API calls :) ... but I took inspiration from the UIB code.

> > for now the only explanation i see is that we do very lot of transaction
> > (average 225 / Secondes is it lot ?)
> > on the server and this why 2 sean to be too little ... with around
> > 19 millions transactions by days
> > i thing i must use much more than 2 (1 million?)
>
> I still think you won't resolve the problem with a larger sweep
> interval. ;-)
>

OK, I was just confused: it is not the gap between the next
transaction and the OIT that matters, but the gap between the
OST (oldest snapshot) and the OIT! (Many
books get this wrong; luckily I read an article from Vlad about
it.)

In my case it is always close to 1, so no sweep is really needed!
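For the record, these counters can also be watched from SQL via the monitoring tables
(Firebird 2.1+); a small sketch:

SELECT MON$OLDEST_TRANSACTION AS OIT,
       MON$OLDEST_ACTIVE      AS OAT,
       MON$OLDEST_SNAPSHOT    AS OST,
       MON$NEXT_TRANSACTION   AS NEXT_TX
FROM MON$DATABASE;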






Re: [firebird-support] Re: backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane

>
> one of our client have ~118gb database,
> and performance is much better than yours. Since that database have
> ~300 concurrent users, they paid attention to the storage, and bought
> new raid (md3220 from dell with 8 or 12 hard drives).
> Now they have
> sweep - from 5 to 40 minutes, depending on garbage amount
> restore - ~7 hours
> backup - ~2 hours
>

In fact, on the server itself the backup takes around 2 hours (restore 24 hours, but
that is
because of the indexes, I am not surprised). But a backup run from a different server
than the
FB server takes twice as long :(

Hard drives: 5 x RAID 0 SAS 15k on a Dell server.

Yes, I take some risk, but I have no other choice: I need speed!


> >> yes, up to 2-4 times faster. we made tests 3 years ago and found that.
> VCS> hmm i not understand why ?
>
> when gbak runs itself, it also makes backup by itself, telling
> server to read the data. So, here is "intercommunication" between
> 2 processes - gbak and server.
>

That explains why it is so slow from the remote computer :)

> VCS> everyday the difference between oldest transaction and next 
> transaction
> VCS> grow by more than 20 000 :(
>
> Then you need to check transaction management in your applications.
> First task is to eliminate long running transactions whenever
> possible, or use read read committed transactions for reading
>

To select or update data we only do it like this:

procedure doSQL
begin
   StartTransaction
   Try
 select or update data
 committransaction
   except
 rollbacktransaction
   end;
end;

For now the only explanation I see is that we run a very large number of transactions
(average 225 / second; is that a lot?)
on the server, and this is why 20 000 seems to be too little ... with around
19 million transactions per day
I think I must use a much larger threshold than 20 000 (1 million?).






Re: [firebird-support] Firebird classic vs supperclassic

2011-12-21 Thread Vander Clock Stephane
Not sure, the pros will confirm, but I think Classic does more I/O on the disk
(locking, etc.) than SuperClassic.

On 12/17/2011 8:29 PM, jegazarco wrote:
>
> Hello, we have to select between the two models for a new client. Is a 
> critical system that will run 24x7x365 with an average of 100-200 
> concurrent users and 5 Gb database growth a year, starting with 10 Gb 
> database.
>
> The deployment will Windows 2008 R2 64, 32 Gb RAM and the most 
> important is the reliability of the system.
>
> Superclassic is out a year, but a lot of people prefer classic over 
> SC. I would like to know experiences of people using them, and the 
> reliability of both from your POV.
>
> Regards,
> Jesus
>
> 





Re: [firebird-support] backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane


On 12/21/2011 12:08 PM, Thomas Steinmaurer wrote:
>
> > it's a disaster  i disable the gfix -sweep every night and launch it
> > only when the difference between oldest transaction and next transaction
> > is> 20 000 but this become true only after 5 hours :( so at the end
> > everyday the difference between oldest transaction and next transaction
> > grow by more than 20 000 :(
> >
> > i can not run gfix everyday, it's took hours to finish :(
> >
> > what i can do ?
>
> Find the reason why you produce such a transaction gap within 5 hours.
> Is it just the OIT which is behind or OAT as well? What's the output of
> gstat -h?
>

OK, I think I found it ... a few hours ago the difference
between the OIT and the next transaction was 50 000, but
now I tried gstat -h again and the difference is only 5 ...

Database header page information:
 Flags   0
 Checksum12345
 Generation  1003772708
 Page size   8192
 ODS version 11.2
 Oldest transaction  1003732753
 Oldest active   1003732754
 Oldest snapshot 1003732754
 Next transaction1003732757
 Bumped transaction  1
 Sequence number 0
 Next attachment ID  39942
 Implementation ID   26
 Shadow count0
 Page buffers0
 Next header page0
 Database dialect3
 Creation date   Oct 30, 2011 2:23:04
 Attributes  force write, no reserve

 Variable header data:
 Sweep interval: 0
 *END*

I don't run any long transactions on the server, but it is hard
to know whether one select or update took long to return, as a lot of
applications are
connected to the database; all applications use the same procedure
to select or update the data:

procedure doSQL
begin
   StartTransaction
   Try
 select or update data
 committransaction
   except
 rollbacktransaction
   end;
end;
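Since it is hard to see from the application side which transactions stay open too long,
the monitoring tables can help; a sketch (Firebird 2.1+, column names as in the system
tables):

SELECT FIRST 10
       t.MON$TRANSACTION_ID,
       t.MON$TIMESTAMP,
       a.MON$REMOTE_ADDRESS,
       a.MON$REMOTE_PROCESS
FROM MON$TRANSACTIONS t
JOIN MON$ATTACHMENTS a ON a.MON$ATTACHMENT_ID = t.MON$ATTACHMENT_ID
WHERE t.MON$STATE = 1            -- 1 = active
ORDER BY t.MON$TRANSACTION_ID;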

For now the only explanation I see is that we run a very large number of transactions
(average 225 / second; is that a lot?)
on the server, and this is why 20 000 seems to be too little ... with around
19 million transactions per day
I think I must use a much larger threshold than 20 000 (1 million?).

How many transactions per second can Firebird handle?


thanks for all
> 





Re: [firebird-support] backup very slow and how often sweep a database ?

2011-12-21 Thread Vander Clock Stephane
It's a disaster ... I disabled the nightly gfix -sweep and launch it
only when the difference between the oldest transaction and the next transaction
is > 20 000, but that becomes true after only 5 hours :( So in the end,
every day the difference between the oldest transaction and the next transaction
grows by more than 20 000 :(

I cannot run gfix every day; it takes hours to finish :(

What can I do?

stéphane

On 12/20/2011 8:41 PM, Ann Harrison wrote:
>
> Stéphane,
>
> > the probleme is that the sweep can take up to 5 hours to finish and the
> > backup up to 12 hours to finish :(
> > i know that in the backup i don't put the -g params (inhibit garbage
> > collection) but someone say me that
> > the -g do something else that the gfix.exe -sweep doesn't do ...
>
> As others have said, sweep does something that gbak doesn't. Specifically,
> sweep resets the "oldest interesting transaction" - the first 
> transaction in the
> system that may have rolled back leaving changes in the database. Keeping
> that number accurate was very important a decade ago when memory was
> expensive and rare and Firebird did not automatically undo changes when
> a transaction rolled back leaving the server alive.
>
> When reading records, Firebird must check the state of transactions newer
> than the oldest interesting. At startup, Firebird produces a bit
> vector of those
> transactions where each pair of bits indicates the transaction's state:
> rolled back,
> committed, limbo, or active. The shorter the vector, the better, but 
> 64K bits
> isn't as interesting as it was a decade ago.
> >
> > Now I need to know how often I need to run gfix.exe -sweep and
> > gbak.exe -B without the -g switch. Every day no longer seems
> > possible because of the time taken
> > by these processes...
> >
> > Once a month?
> >
> Backup as often as lets you sleep at night. Sweep once a week if possible,
> once a month if not.
>
> Good luck,
>
> Ann
>
> 
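
Just to put the 2-bits-per-transaction figure into numbers (my own
back-of-the-envelope arithmetic, not from Ann's post):

  gap of 20 000 transactions      ->  20 000 * 2 / 8      =     5 000 bytes (~5 KB)
  gap of 19 000 000 transactions  ->  19 000 000 * 2 / 8  = 4 750 000 bytes (~4.5 MB)

so the state vector itself stays fairly small even with a large gap; the
expensive part is rather the record versions the sweep has to visit.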





Re: SV: [firebird-support] backup very slow and how often sweep a database ?

2011-12-20 Thread Vander Clock Stephane
No, of course, it's disabled :)

On 12/20/2011 6:42 PM, Poul Dige wrote:
>
> How large is your DB?
>
> And just a thought: is auto sweep enabled on your database? I mean: 
> could it by accident run an automatic sweep already when you start 
> your own?
>
> Kind regards
> Poul
>
>
> From: firebird-support@yahoogroups.com
> [mailto:firebird-support@yahoogroups.com] On behalf of Vander Clock
> Stephane
> Sent: 20 December 2011 08:22
> To: firebird-support@yahoogroups.com
> Subject: [firebird-support] backup very slow and how often sweep a
> database ?
>
>
>
> Hello,
>
> Every day I sweep the database and then back it up:
>
> gfix.exe -sweep
>
> and just after, a backup like this:
>
> gbak.exe -B -t -v -z
>
> The problem is that the sweep can take up to 5 hours to finish and the
> backup up to 12 hours to finish :(
> I know that I don't pass the -g switch (inhibit garbage
> collection) to the backup, but someone told me that
> it does something that gfix.exe -sweep doesn't do ...
>
> Now I need to know how often I need to run gfix.exe -sweep and
> gbak.exe -B without the -g switch. Every day no longer seems
> possible because of the time taken
> by these processes...
>
> Once a month?
>
> Thanks in advance for your help
> stéphane
>
>
>
> 





[firebird-support] Update another database from inside trigger

2011-12-20 Thread Vander Clock Stephane
Hello,

Is it possible (in a fast, efficient way) to update another database from
a trigger?
I don't want to open and close a connection to the other database every time
the trigger is fired (because I think that would be expensive, no?)

Thank you in advance
stéphane
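
Since ODS 11.2 corresponds to Firebird 2.5, one option that may be worth testing
is EXECUTE STATEMENT ... ON EXTERNAL DATA SOURCE inside the trigger; as far as I
understand, the external connection is cached and reused for the lifetime of the
local attachment rather than reopened on every firing, but that is worth
verifying under load. A minimal sketch, with all table names, paths and
credentials made up:

set term ^ ;

create trigger trg_source_table_ai for source_table
active after insert position 0
as
begin
  /* push the new row into a second database; the connection string and
     credentials below are placeholders */
  execute statement
    ('insert into mirror_table (id, created_at) values (:id, :ts)')
    (id := new.id, ts := current_timestamp)
    on external 'otherhost:/data/other.fdb'
    as user 'SYSDBA' password 'masterkey';
end
^

set term ; ^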


Re: [firebird-support] Re: backup very slow and how often sweep a database ?

2011-12-20 Thread Vander Clock Stephane
Thanks Dmitry!

>
> AJ> The problem is that the sweep can take up to 5 hours to finish and the
> AJ> backup up to 12 hours to finish :(
>
> What is the size of the database?
>

120 GB


>
> AJ> I know that I don't pass the -g switch (inhibit garbage
> AJ> collection) to the backup, but someone told me that
> AJ> it does something that gfix.exe -sweep doesn't do ...
>
> Sweep, if it could, has already collected the garbage. So, it is useless
> to run gbak without -g after a sweep. If sweep couldn't collect garbage
> due to some active "interesting" transaction, then gbak cannot collect
> garbage either, so -g is again useless.
>
> You need to check gstat -h before you run sweep to know whether sweep
> will help, and after sweep to know whether it helped.
>

OK, thanks, now I will do it like this: run the sweep only when the OIT is
more than 20 000 behind the next transaction.
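
If it helps, that check can also be done with a plain query instead of parsing
the gstat -h output; a small sketch, assuming the MON$DATABASE columns available
since Firebird 2.1:

select mon$next_transaction - mon$oldest_transaction as oit_gap,
       mon$next_transaction - mon$oldest_active      as oat_gap
from mon$database;

If oit_gap comes back above 20 000, the sweep is worth running; if only oat_gap
is large, something still has a transaction open and sweeping will not gain much
until it ends.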

>
> AJ> Note: When I started to use the ServerManager
> AJ> switch (-se service_mgr) I noticed a significant speedup for gbak.
>
> Yes, up to 2-4 times faster. We ran tests 3 years ago and found that.
>

Hmm, I don't understand why?





[firebird-support] backup very slow and how often sweep a database ?

2011-12-20 Thread Vander Clock Stephane
Hello,

Every day I sweep the database and then back it up:

gfix.exe -sweep

and just after, a backup like this:

gbak.exe -B -t -v -z

The problem is that the sweep can take up to 5 hours to finish and the
backup up to 12 hours to finish :(
I know that I don't pass the -g switch (inhibit garbage
collection) to the backup, but someone told me that
it does something that gfix.exe -sweep doesn't do ...

Now I need to know how often I need to run gfix.exe -sweep and
gbak.exe -B without the -g switch. Every day no longer seems
possible because of the time taken
by these processes...

Once a month?

Thanks in advance for your help
stéphane


Re: [firebird-support] Re: Optimizing Firebird for SSD

2011-12-18 Thread Vander Clock Stephane

> >
> > I'm not sure if I would use consumer SSDs as a host for Firebird
> > databases at all. For sure not in a RAID with no TRIM support etc.
> > Firebird is known to write very frequently e.g. on the transaction
> > inventory page (TIP). So, even small, but a lot of writes are going on,
> > which is a pretty serious scenario for the life-time and performance of
> > a SSD. And due to similar life-time patterns, SSDs in a RAID might fail
> > pretty much at the same time, although I don't have any real evidence
> > for that.
>

That was also the case with SAS drives! :)

Three years ago we bought several SAS drives at the same time to put them in
a RAID (Seagate SAS 15k), and 2 of them crashed at the same time (24 h between
the 2 crashes)
... this was because the drives came from the same "lot" ... RAID 5 can
survive one disk crash, and we thought that the probability of 2
simultaneous hard drive crashes was near impossible ... it was not!







[firebird-support] How to create an user and grant him select to all the table in the database ?

2011-11-17 Thread Vander Clock Stephane
Hello,

How do I create a user and grant him only SELECT rights on all the tables
in the database?

Thanks in advance
stéphane
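
There is no single GRANT SELECT ON ALL TABLES statement in Firebird 2.x, but the
grants can be generated from the system tables. A sketch with a made-up user
name READER (CREATE USER needs Firebird 2.5; on older versions create the user
with gsec instead):

create user reader password 'secret';

/* generates one GRANT per user table and view; execute the generated lines afterwards */
select 'GRANT SELECT ON "' || trim(rdb$relation_name) || '" TO READER;'
from rdb$relations
where coalesce(rdb$system_flag, 0) = 0
order by rdb$relation_name;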


Re: [firebird-support] how to put null at the end in oder by ASC

2011-09-11 Thread Vander Clock Stephane

> Why do you need an index for sorting, Stephane? Generally, indexes are 
> great for JOIN and WHERE clauses, not for sorting. Do you have 
> millions of records and cannot eliminate 99% of them through a WHERE 
> clause? If so, are all of the records of interest or only a fraction? 
> My guess is that there may be other ways to solve your problem that 
> involves a WHERE clause...
>


Thanks Sven, but unfortunately yes, that is the case: the result set contains
millions of records :(
With the index it can return in a few milliseconds; without it, it takes
a few seconds, which is unacceptable for us ...

The only way I have found so far is to add a "where xxx is not null"
condition when they sort.

I think that programmatically it cannot be so hard to add to Firebird the
ability to use an index and still put the NULL values at the end.
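
For the record, the "is not null" workaround looks like this on a made-up table
T with an ascending index on XXX; because the default NULL placement is not
overridden, the sort can still use the index:

select id, xxx
from t
where xxx is not null      /* drops the NULL rows, so NULLS LAST is not needed */
order by xxx asc;          /* the plan can navigate the ascending index on xxx */

If the NULL rows are still needed, they can be fetched with a second query
(where xxx is null) and appended on the client side; putting NULLS LAST back
into the ORDER BY would disable the index again.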



> Set
>
> 





Re: [firebird-support] how to put null at the end in oder by ASC

2011-09-02 Thread Vander Clock Stephane
Unfortunately:

If you override the default NULLs placement, no index will be used for 
sorting. In Firebird 1.5,
that is the case with NULLS FIRST. In 2.0 and higher, with NULLS LAST on 
ascending and
NULLS FIRST on descending sorts.

But I need the index :(


On 8/31/2011 12:17 AM, Mark Rotteveel wrote:
>
> On Tue, 30 Aug 2011 12:20:56 +0400, Vander Clock Stephane wrote:
> > Hello,
> >
> > how to put the NULL values at the end of the result (and not at the
> > beginning) when we are doing
> >
> > select xxx order by xxx ASC
>
> Something like:
> ORDER BY field ASC NULLS LAST
>
> See also section Sorts in
> http://www.firebirdsql.org/pdfmanual/Firebird-Null-Guide.pdf
>
> 





Re: [firebird-support] Why drop table can be sometime very very long ?

2011-09-02 Thread Vander Clock Stephane
Thanks Ann!

On 8/30/2011 11:52 PM, Ann Harrison wrote:
>
> Stephane,
>
> >
> > > > when i do drop table, sometime it's take hours to return 
> (especially on
> > > > big table)
> > > > why ?
> > >
> >
> > No, the database has only one client connected, the client that does the
> > drop table ...
> > It doesn't fail, it's just very, very long!
> > As soon as it succeeds, creating and dropping the table again is
> > very fast!
> > I think it's connected with the number of records in the table (in my
> > case a lot, and a lot of indexes too)
> >
>
> Dropping a big table with lots of indexes will take longer than 
> dropping an
> empty table because of the amount of work done. Each page used by the 
> table
> must be marked free on its Page Inventory Page.
>
> To release a data page, Firebird reads the RDB$PAGES table to find pointer
> pages and marks the pages listed on the pointer pages as free until 
> the page
> is empty, then removes the pointer page entry from RDB$PAGES, then 
> marks the
> pointer page as free on its page inventory page. If enough entries are
> removed from RDB$PAGES to free up pages there, then those pages are also
> released, using the same algorithm.
>
> To release index pages, Firebird must read the whole index. It's been a
> very long time since I looked at the code that removes an index, but my
> recollection is that it goes to the lowest level of the index and releases
> pages from left to right, following the sibling pointers, then starts from
> the top again and releases the next level up.
>
> Checking the performance statistics for the number of pages read and 
> written
> will help you understand what's going on.
>
> Good luck,
>
> Ann
>
>
> 
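
To watch those page reads and writes while the DROP TABLE runs, the
per-attachment I/O counters can be polled from a second connection; a sketch
assuming the Firebird 2.1+ monitoring tables:

select a.mon$attachment_id, a.mon$user,
       io.mon$page_reads, io.mon$page_writes, io.mon$page_fetches, io.mon$page_marks
from mon$attachments a
join mon$io_stats io on io.mon$stat_id = a.mon$stat_id
order by io.mon$page_reads desc;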





[firebird-support] how to put null at the end in oder by ASC

2011-08-30 Thread Vander Clock Stephane
Hello,

How do I put the NULL values at the end of the result (and not at the
beginning) when doing

select xxx order by xxx ASC

Thanks in advance
stéphane


Re: [firebird-support] Why drop table can be sometime very very long ?

2011-08-30 Thread Vander Clock Stephane


On 8/29/2011 9:44 AM, Thomas Steinmaurer wrote:
>
> > When I do a DROP TABLE, it sometimes takes hours to return (especially on
> > a big table)
> > Why?
>
> Are you perhaps running the DROP TABLE statement in a transaction with
> the WAIT option, while the table is in use by concurrent transactions?
>

No, the database has only one client connected, the client that does the
drop table ...
It doesn't fail, it's just very, very long!
As soon as it succeeds, creating and dropping the table again is
very fast!
I think it's connected with the number of records in the table (in my
case a lot, and a lot of indexes too).





[firebird-support] Why drop table can be sometime very very long ?

2011-08-28 Thread Vander Clock Stephane
hello,

When I do a DROP TABLE, it sometimes takes hours to return (especially on
a big table).
Why?

Thanks in advance
stéphane


Re: [firebird-support] Good way to do...

2011-08-23 Thread Vander Clock Stephane

>
> I would strongly argue that option 2 goes against the basic idea of a
> relational database, and because of that will probably generate a lot of
> trouble along the way.
>

Yes, I agree, but sometimes denormalizing the database is the only
solution ...

>
> I have a database that's about 56 Gbyte and has 150 million or more
> records (each) in two of the tables.
>
> I find query performance to be very agreeable, provided an index can be
>

Yes, but when you need ORDER BY, even with an index it's not fun at all :(
Should the engine use the index on the query filter or the index on the ORDER BY?
That of course depends on how many records the query filter will return ...
with few records returned it's better to use the index on the query filter;
with a lot of records returned it's better to use the index on the ORDER BY ...
but the optimization engine cannot know this in advance.
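
One thing that sometimes removes that dilemma is a compound index whose leading
segment is the equality filter and whose trailing segment is the sort column; as
far as I know the engine can then walk the index and return the rows already
sorted (a PLAN ... ORDER), so no separate sort pass is needed. A sketch with
made-up names:

create asc index bigtable_contact_fieldw_idx
  on bigtable (contact_id, fieldw);

select *
from bigtable
where contact_id = :contact_id
order by contact_id, fieldw;   /* repeating the leading segment helps the optimizer pick the ORDER plan */

The other filter columns would still be applied while the index is walked, so
whether this beats a filtered read plus an explicit sort depends on how
selective they are.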

>
> So, provided all your queries will be able to use the index on
> Contact_ID, I think you will be fine. And if not, then there are
> probably better ways to handle the situation than your option 2, e.g.
> moving records to an archive table or "throwing hardware at it".
>

I already split the big table into 56 "archive-like" tables to reduce
the size. For now the performance is OK, but what about in one year, with the
data still growing :( I know that this system will soon hit the limits of
Firebird OR of the server itself.





Re: [firebird-support] Good way to do...

2011-08-23 Thread Vander Clock Stephane

> - How many distinct contact ids do you expect
>

around 100 000

> - How does your query pattern look like?
>

select xxx from BIGTABLE where
  contactID = xxx and
  fieldA = xxx and
  ...
  fieldN = xxx
order by fieldW


>
> Because of latter, is contact id the only criteria for filtering rows or
> are there more (e.g. date/time, product etc ... information)?
>
No, many more criteria than just the contact ID.


> You always can work with an archive and "most recent" table (maintained
> by triggers), in case of performance slow down due to increasing data
> volume. This, possibly is in relationship with your query pattern as well.
>

I already do it like this, and it gave me an 80 000 000 record table :(


Thanks for all, Thomas







[firebird-support] Good way to do...

2011-08-23 Thread Vander Clock Stephane
Hello,

We have one table with 80 million records (and growing). Each record is
assigned to a given contact ID, and only that contact ID can read
its records.
I think it's a pretty classic scenario ...
We have 2 options:

1/ Keep all the records in one single table, add a Contact_ID column, and
filter all queries on this column (a sketch is given at the end of this message).
Good: easy, only one table to manage.
Bad: a very big table and very heavy indexes (almost all indexes must
include the Contact_ID field to be efficient).

2/ Create a dedicated table xxx_ContactID for each contact.
Good: performance, because every table is small, indexes are small, etc.;
also the possibility to split across several servers to gain performance
(but I think it's not really possible for a trigger to update a
table in another database).
Bad: managing the creation/destruction of tables from a trigger is not really
supported in Firebird, and what happens when there are more contacts than
Firebird's maximum allowed number of tables?

So what do you advise in such a case?

Thanks in advance
stéphane
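
For what it's worth, a minimal sketch of option 1 (all names are made up); the
idea is simply that every index used by a per-contact query gets Contact_ID as
its leading segment:

create table contact_data (
  id          bigint not null primary key,
  contact_id  integer not null,
  field_a     varchar(50),
  field_w     timestamp
);

create index contact_data_cid_a_idx on contact_data (contact_id, field_a);
create index contact_data_cid_w_idx on contact_data (contact_id, field_w);

/* every query filters on the contact first, so the one big table behaves like many small ones */
select *
from contact_data
where contact_id = :contact_id and field_a = :a
order by contact_id, field_w;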