Re: [Firebird-net-provider] Questions with performance metrics doing large inserts into DB

2014-08-10 Thread Jiří Činčura
It's a roundtrip per group of records (depending on the size of the data), not per record.

Putting the records into a DataTable will not be faster in any way, as it uses 
a DataReader internally.

--
Mgr. Jiří Činčura
Independent IT Specialist


From: Paulo Gomes paulo_...@sapo.pt
Sent: Saturday, August 09, 2014 4:30:41 PM
To: For users and developers of the Firebird .NET providers
Subject: Re: [Firebird-net-provider] Questions with performance metrics doing 
large inserts into DB

Hi Edward,

I think you answered it yourself: the enumerator makes a trip to the database for
each record.

You should try to break the data into smaller packages (10, for example) and
use a DataAdapter to get the data into a DataTable.
If possible, also use several transactions (one per package) to keep memory
usage under control.
If performance is a must, you can always use different threads (one for
reading and one for writing), though you will probably put a wanted-dead-or-alive
sign on my name for mentioning it.

Note: the number of records per package should depend on their average size.
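
For illustration, a minimal sketch of that package-by-package idea with
FirebirdClient (table names, column names, and the paging query are made up;
treat it as a starting point rather than exact code):

using System.Data;
using FirebirdSql.Data.FirebirdClient;

public static class PackagedCopy
{
    public static void Copy(FbConnection source, FbConnection target, int packageSize)
    {
        int offset = 0;
        while (true)
        {
            // Fetch one package of source rows into a DataTable via a DataAdapter.
            var package = new DataTable();
            string select = string.Format(
                "SELECT ID, NAME FROM SRC_TABLE ORDER BY ID ROWS {0} TO {1}",
                offset + 1, offset + packageSize);
            using (var adapter = new FbDataAdapter(select, source))
            {
                adapter.Fill(package);
            }
            if (package.Rows.Count == 0)
                break;

            // One transaction per package keeps memory and undo information bounded.
            using (FbTransaction tx = target.BeginTransaction())
            using (var insert = new FbCommand(
                "INSERT INTO DST_TABLE (ID, NAME) VALUES (@id, @name)", target, tx))
            {
                insert.Parameters.Add("@id", FbDbType.Integer);
                insert.Parameters.Add("@name", FbDbType.VarChar);
                foreach (DataRow row in package.Rows)
                {
                    insert.Parameters["@id"].Value = row["ID"];
                    insert.Parameters["@name"].Value = row["NAME"];
                    insert.ExecuteNonQuery();
                }
                tx.Commit();
            }
            offset += packageSize;
        }
    }
}

Because each package gets its own transaction, a failure only rolls back the
current package and memory use stays roughly constant.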

Regards
Paulo Gomes


On 09-08-2014 06:56, Edward Mendez wrote:

> You should start the transaction explicitly. Otherwise it's one transaction per
> command, and in batch processing that gets out of hand really quickly.

I agree with you 100%. The part that I hadn't given much thought to was the
isolation level to pass as a parameter into the transaction object.
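
For illustration, a minimal sketch of starting the transaction explicitly and
passing an isolation level (connection string, table, and row count are
placeholders):

using System.Data;
using FirebirdSql.Data.FirebirdClient;

class ExplicitTransactionSketch
{
    static void Main()
    {
        using (var connection = new FbConnection(
            "database=localhost:demo.fdb;user=SYSDBA;password=masterkey"))
        {
            connection.Open();
            // One explicit transaction for the whole batch instead of one
            // implicit transaction per command.
            using (FbTransaction transaction =
                connection.BeginTransaction(IsolationLevel.ReadCommitted))
            using (var insert = new FbCommand(
                "INSERT INTO DEMO (ID) VALUES (@id)", connection, transaction))
            {
                insert.Parameters.Add("@id", FbDbType.Integer);
                for (int i = 0; i < 5000; i++)
                {
                    insert.Parameters["@id"].Value = i;
                    insert.ExecuteNonQuery();
                }
                transaction.Commit();
            }
        }
    }
}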


> I would rather write a correct algorithm than spend hours on micro-optimizing
> loops. BTW, did you know asm loops are faster than Delphi and .NET loops?

Ah, ASM, that is definitely a blast from the past.

In trying to eliminate certain bottlenecks, I moved a small sample of the
source data to a local FB database and ran the .NET test; there, reading the data
was practically instantaneous. I think the network latency on my corporate
network is another factor.

Over the weekend, I will move a more sizable sample to my local DB and
attempt the test with the reads and writes. I downloaded Red Gate (trial) and
will profile the app. I will keep everyone posted on any findings.

 Thanks,

 Edward Mendez



Re: [Firebird-net-provider] Questions with performance metrics doing large inserts into DB

2014-08-10 Thread Jiří Činčura
There is still some room for improvement at the network level. The packets are not 
batched as much as they could be while the server would still understand them. Dmitri Y. 
reported it to me some months ago.

It just needs a lot of time with Wireshark, tweaking the socket until it behaves the 
same as fbclient.dll.

Maybe somebody could help me with that as well.

--
Mgr. Jiří Činčura
Independent IT Specialist

From: Alexander Muylaert-Gelein amuylaert_gel...@hotmail.com
Sent: Friday, August 08, 2014 11:10:43 PM
To: firebird-net-provider@lists.sourceforge.net
Subject: Re: [Firebird-net-provider] Questions with performance metrics doing 
large inserts into DB

Hi

Attached are some examples of properly configured transactions.

If you can put the reader in read-only mode and the writer in WriteNoUndo, you 
gain some speed. But, euh, don't expect magic here. It might be interesting to 
profile it though and post the results back.


using System;
using FirebirdSql.Data.FirebirdClient;

public static class Transaction
{
    // Write transaction: read committed, record version, wait up to 10 seconds on locks.
    private static FbTransactionOptions s_WriteTransaction = new FbTransactionOptions()
    {
        TransactionBehavior = FbTransactionBehavior.Write |
                              FbTransactionBehavior.ReadCommitted |
                              FbTransactionBehavior.RecVersion |
                              FbTransactionBehavior.Wait,
        WaitTimeout = new TimeSpan(0, 0, 0, 10) // seconds
    };

    public static FbTransactionOptions WriteTransactions
    {
        get { return s_WriteTransaction; }
    }

    // Same as the write transaction, but with NoAutoUndo: no automatic undo log is
    // kept for the transaction, which helps for large batch inserts.
    private static FbTransactionOptions s_WriteNoUndoTransaction = new FbTransactionOptions()
    {
        TransactionBehavior = FbTransactionBehavior.Write |
                              FbTransactionBehavior.ReadCommitted |
                              FbTransactionBehavior.RecVersion |
                              FbTransactionBehavior.Wait |
                              FbTransactionBehavior.NoAutoUndo,
        WaitTimeout = new TimeSpan(0, 0, 0, 10) // seconds
    };

    public static FbTransactionOptions WriteNoUndoTransactions
    {
        get { return s_WriteNoUndoTransaction; }
    }

    // Read-only transaction: read committed, record version, no waiting on locks.
    private static FbTransactionOptions s_ReadTransaction = new FbTransactionOptions()
    {
        TransactionBehavior = FbTransactionBehavior.Read |
                              FbTransactionBehavior.ReadCommitted |
                              FbTransactionBehavior.RecVersion |
                              FbTransactionBehavior.NoWait
    };

    public static FbTransactionOptions ReadTransactions
    {
        get { return s_ReadTransaction; }
    }

    public static FbTransaction BeginReadTransaction(this FbConnection aConnection)
    {
        return aConnection.BeginTransaction(s_ReadTransaction);
    }

    public static FbTransaction BeginWriteTransaction(this FbConnection aConnection)
    {
        return aConnection.BeginTransaction(s_WriteTransaction);
    }

    public static FbTransaction BeginWriteNoUndoTransaction(this FbConnection aConnection)
    {
        return aConnection.BeginTransaction(s_WriteNoUndoTransaction);
    }
}
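
For what it's worth, a hedged usage sketch of the helpers above (it assumes the
same usings as the class, two already open FbConnections, and made-up table and
column names):

public static class CopyExample
{
    public static void CopyRows(FbConnection source, FbConnection target)
    {
        // Read with a read-only transaction, write with a no-undo transaction.
        using (FbTransaction readTx = source.BeginReadTransaction())
        using (FbTransaction writeTx = target.BeginWriteNoUndoTransaction())
        using (var select = new FbCommand("SELECT ID, NAME FROM SRC_TABLE", source, readTx))
        using (var insert = new FbCommand(
            "INSERT INTO DST_TABLE (ID, NAME) VALUES (@id, @name)", target, writeTx))
        {
            insert.Parameters.Add("@id", FbDbType.Integer);
            insert.Parameters.Add("@name", FbDbType.VarChar);

            using (FbDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    insert.Parameters["@id"].Value = reader["ID"];
                    insert.Parameters["@name"].Value = reader["NAME"];
                    insert.ExecuteNonQuery();
                }
            }
            writeTx.Commit();
        }
    }
}

The idea is that the read transaction does not wait on locks, and NoAutoUndo
skips the per-transaction undo log, which is where the bulk-insert speedup
should come from.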

> I read somewhere that looping in .NET is somewhat slower than in Delphi, but
> there are things you can do to optimize the .NET loops.

Agreed, some tricks might be applied. But that will bring you down from 18.7 
seconds to 18.69998 seconds. Optimizing loops is actually not the way to go.

What you could do is make a small test project with a few simple tables. Add 
your simulations to it and make sure it compiles. Then, once you have this 
minimal project building, I would like to receive a copy and I can profile it for 
you.

Maybe I can already see what is wrong. Like Jiri mentioned, .NET is slower 
than the FIB/fbclient (C++) DLL. But let us assume that is 10% overhead. So 
4 seconds Delphi = 5 seconds .NET is fine for me. 18 seconds... I'm 
interested in speeding things up then :-)

thanks

a


From: emendez...@nc.rr.com
To: firebird-net-provider@lists.sourceforge.net
Date: Fri, 8 Aug 2014 09:41:03 -0400
Subject: Re: [Firebird-net-provider] Questions with performance metrics doing 
large inserts into DB


Alexander,

Thank you for the feedback.

> Two more things...
>
> 1. Your transaction parameters please. How do you create them, what settings?

To be honest I've never really thought about the transaction settings other 
than the defaults. Are there specific settings I should be using for just 
reading “stale” data?

> 2. Did you check your source-read logic? Could you maybe fake data, so we
> know if it is the insert that is slow. Maybe the materializing of the
> source record is slow, or fetching it?

I was doing further testing last night and removed the insert logic to see if 
it was the reading of the data that was slowing it down. With the removal of 
the insert logic and just reading and looping for 5000