Thanks, these tips are very helpful.

#1: This gives a very good improvement; please see the new results below. But will turning off the write barrier cause data loss in case of power failure? The write barrier seems to be about the ordering of journal transaction content relative to the commit record, so how does it impact Firebird so dramatically?

#2: Totally agree.

#3: Yes, I am using Classic or SuperClassic.

The other thing I am still wondering about is that increasing the number of threads did not improve performance much. In my experiment it starts at 5 tx/sec with 1 thread and increases only to 8-9 tx/sec with 6 threads. Each run uses 200 insertions per transaction, one transaction per thread, and 128 bytes per record.

With 1 CPU, 1 transaction in total:
* Finish Insert: total=25600 bytes elapsed=00.193 throughput=132642.487047 bytes/sec, 1036.269430 ops/sec, 5.181347 tx/sec succeeded=1 <old value was 4.098361 tx/sec>

With 2 CPUs, 2 transactions in total:
* Finish Insert: total=51200 bytes elapsed=00.292 throughput=175342.465753 bytes/sec, 1369.863014 ops/sec, 6.849315 tx/sec succeeded=2 <old was 1.106807 tx/sec>

With 3 CPUs, 3 transactions in total:
* Finish Insert: total=76800 bytes elapsed=00.375 throughput=204800.000000 bytes/sec, 1600.000000 ops/sec, 8.000000 tx/sec succeeded=3 <old was 1.262095 tx/sec>

With 4 CPUs, 4 transactions in total:
* Finish Insert: total=102400 bytes elapsed=00.473 throughput=216490.486258 bytes/sec, 1691.331924 ops/sec, 8.456660 tx/sec succeeded=4 <old was 0.743080 tx/sec>

With 5 CPUs, 5 transactions in total:
* Finish Insert: total=128000 bytes elapsed=00.590 throughput=216949.152542 bytes/sec, 1694.915254 ops/sec, 8.474576 tx/sec succeeded=5 <old was 0.777847 tx/sec>

With 6 CPUs, 6 transactions in total:
* Finish Insert: total=153600 bytes elapsed=00.717 throughput=214225.941423 bytes/sec, 1673.640167 ops/sec, 8.368201 tx/sec succeeded=6 <old was 0.783801 tx/sec>
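A quick check of the figures above (elapsed times copied from the result lines) makes the plateau easier to see: total tx/sec stalls around 8.5 while the effective time per transaction stops shrinking past 3-4 threads, which is the pattern you would expect if a fixed per-commit cost (such as a synchronous flush) is serialized across threads. This is just arithmetic on the reported numbers, not new measurements:

```python
# (threads, total transactions, elapsed seconds) taken from the runs above.
runs = [(1, 1, 0.193), (2, 2, 0.292), (3, 3, 0.375),
        (4, 4, 0.473), (5, 5, 0.590), (6, 6, 0.717)]

for threads, n_tx, elapsed in runs:
    tx_per_sec = n_tx / elapsed       # matches the reported tx/sec figures
    per_tx = elapsed / n_tx           # effective wall time per transaction
    # per_tx drops from 0.193s to ~0.118s and then flattens: adding
    # threads past that point buys almost nothing, consistent with a
    # serialized commit/flush path rather than a CPU bottleneck.
    print(f"{threads} thread(s): {tx_per_sec:.2f} tx/sec, {per_tx*1000:.1f} ms/tx")
```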
To: firebird-support@yahoogroups.com
From: firebird-support@yahoogroups.com
Date: Thu, 16 Jun 2016 23:06:12 -0700
Subject: [firebird-support] Re: performance issue with firebird 3.0 embedded on linux

Hi!

#1: FB does not like ext4. See "Forced Writes Performance impact on #Ubuntu with ext4 no barrier" on www.firebirdnews.org, from Carlos H. Cantu, about the effect of forced writes on script executing time (Linux Ubuntu 10.04.3, ext4).

#2: HT is not necessarily a good idea for data-intensive applications (e.g. databases). See "Be aware: To Hyper or not to Hyper" on blogs.msdn.microsoft.com: customers observed that on high-end Hyperthreading (HT) enabled hardware, performance degraded in some cases under high load.

#3: Use Classic server, or SuperClassic (SuperServer does not like concurrency in a single database).

#4: I think the bottleneck here is the disk, not the engine itself. We ran a bulk-insert test, and the best method was to run EXECUTE BLOCK statements with as many inserts as possible in each, in a single thread. We achieved 8-9K records/sec for small tables and 3-4K records/sec for bigger tables (the tables were not indexed, and it was a simple HDD).
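The batching idea in tip #4 can be sketched as follows: instead of sending 200 separate INSERT statements, build one Firebird EXECUTE BLOCK statement containing all of them, so the whole batch is one round trip and one statement prepare. The table and column names (TEST_TAB, ID, PAYLOAD) are made up for illustration, and the generated statement is still subject to Firebird's maximum statement size, so very large batches must be split:

```python
def build_bulk_insert_block(rows):
    """Build a single Firebird EXECUTE BLOCK statement that performs
    many inserts in one round trip. Table/column names are hypothetical."""
    lines = ["EXECUTE BLOCK AS BEGIN"]
    for rid, payload in rows:
        # Values are inlined as literals for brevity; real code should
        # escape them or pass them as EXECUTE BLOCK input parameters.
        lines.append(f"  INSERT INTO TEST_TAB (ID, PAYLOAD) "
                     f"VALUES ({rid}, '{payload}');")
    lines.append("END")
    return "\n".join(lines)

# One statement covering a whole 200-row transaction, as in the test above.
sql = build_bulk_insert_block((i, "x" * 10) for i in range(200))
```

The resulting string would then be executed once per transaction through whatever Firebird client driver is in use; the win comes from amortizing the per-statement and per-round-trip overhead across the batch, while the per-commit flush cost is paid only once.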