A large bank (like one of the Big 4 in Australia) has a staggering number of applications. Even running what you'd think is the simplest product involves multiple applications, from opening an account through to day-to-day transacting, especially given the multiple channels that might be available.
From: ozdotnet-boun...@ozdotnet.com On Behalf Of Greg Low (罗格雷格博士)
Sent: Monday, 19 September 2016 3:23 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Entity Framework - the lay of the land

People always use banks as the canonical example, but I had one at a local bank where I went to an ATM and did a transfer "From Account" -> "To Account", where both accounts were with the same bank. It came out of the "from", and never went into the "to". After what seemed like hours on the phone, they told me that "the person who had typed in the account number had got it wrong". I said "person???" "typed????" That's when they explained to me that their savings system wasn't really connected to their credit card system, and on that afternoon the integration link had broken down, so they were printing out the transactions from one and typing them into the other. There really was a little person in the ATM.

Regards, Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com On Behalf Of Stephen Price
Sent: Monday, 19 September 2016 1:50 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land

While on the topic of databases... I made a flight booking via the Altitude points system yesterday and it failed. Gave me a number to call during business hours. Turns out only the return flight was booked, but nothing was charged. That's not very atomic, hey? 😊 Hehe, love that dial-up db connection idea...
________________________________
From: ozdotnet-boun...@ozdotnet.com on behalf of Greg Low (罗格雷格博士) <g...@greglow.com>
Sent: Monday, 19 September 2016 11:06:05 AM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land

I remember, many years ago, connecting the devs to the DB via a dial-up 64k modem. Worked wonders for the code that came back. Suddenly they noticed every call.

Regards, Greg

From: ozdotnet-boun...@ozdotnet.com On Behalf Of David Connors
Sent: Monday, 19 September 2016 12:34 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land

On Mon, 19 Sep 2016 at 10:38 Greg Keogh <gfke...@gmail.com> wrote:

> I had an argument internally that caching was good, with the alternate side saying that "cache invalidation" was hard so they never use it.

I think it is "hard", but don't write it off completely. Search for "second level cache" and you'll see it's not that trivial to use properly. Some ORMs have it as an optional feature. You've got to consider what to cache, eviction or expiry policy, concurrency, capacity, etc. I implemented simple caching in a server app a long time ago, then about a year later I put performance counters into the code and discovered that in live use the cache was usually going empty before it was accessed, so it was mostly ineffective. Luckily I could tweak it into working. So caching is great, but be careful -- GK

I'd argue caching is a good idea so long as it is not a substitute for good performance optimisation as you go.
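Greg's checklist (what to cache, eviction or expiry policy, concurrency, capacity) can be sketched in a few lines. This is an illustrative sketch in Python, not any particular ORM's second-level cache; the class name and the TTL/capacity numbers are invented for the example.

```python
import threading
import time
from collections import OrderedDict

class TtlCache:
    """Minimal cache illustrating the checklist: per-entry expiry (TTL),
    LRU eviction at a capacity limit, and a lock for concurrent access."""

    def __init__(self, capacity=1000, ttl_seconds=60.0):
        self._capacity = capacity
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._entries = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        with self._lock:
            entry = self._entries.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if time.monotonic() >= expires_at:  # expired: drop and miss
                del self._entries[key]
                return None
            self._entries.move_to_end(key)      # mark as recently used
            return value

    def put(self, key, value):
        with self._lock:
            self._entries[key] = (value, time.monotonic() + self._ttl)
            self._entries.move_to_end(key)
            while len(self._entries) > self._capacity:  # evict LRU first
                self._entries.popitem(last=False)

cache = TtlCache(capacity=2, ttl_seconds=0.05)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)        # over capacity: "a" (least recently used) evicted
print(cache.get("a"))    # None
print(cache.get("b"))    # 2
time.sleep(0.1)
print(cache.get("b"))    # None -- entry expired
```

Even this toy shows why it's "hard": invalidation isn't handled at all, so a write to the underlying store leaves stale entries until the TTL runs out.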
As a general discipline we roll with a rule I call "10x representative data load", which means we take whatever we think the final system is going to run with for a data set, load each dev with 10x of that on their workstations, and make them live that dream. The reality is that a bit of planning for optimal indexes, as well as casting an eye over the execution plan after you write each proc, isn't a lot of dev overhead. At least you know that when what you have built rolls out, it performs as well as it can given other constraints.

David.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363
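The "10x representative data load" rule above amounts to a seeding step: take the representative data set and write ten copies of it into each dev workstation's database, remapping keys so the copies don't collide. A minimal sketch in Python; the row shape, column names, and `inflate` helper are invented for illustration.

```python
def inflate(rows, factor=10, key="id"):
    """Return `factor` copies of `rows`, offsetting the primary key
    of each copy so all rows stay unique -- the '10x' dev data load."""
    stride = max((r[key] for r in rows), default=0) + 1
    inflated = []
    for copy in range(factor):
        for r in rows:
            clone = dict(r)
            clone[key] = r[key] + copy * stride  # remap key per copy
            inflated.append(clone)
    return inflated

# A tiny stand-in for the "representative" production-like data set.
representative = [
    {"id": 1, "account": "CHQ-001", "balance": 250.0},
    {"id": 2, "account": "SAV-002", "balance": 910.5},
]
dev_load = inflate(representative, factor=10)
print(len(dev_load))  # 20 rows: 2 representative rows x 10
```

The point isn't the copying itself; it's that devs then feel every missing index and chatty query on data ten times heavier than production, instead of discovering it after rollout.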