scenario:

we have some important legacy applications here, originally developed
as the proverbial "kitchen sinks" (each should have been 3 or more
separate apps, but they all got rolled in together because that's what
you did then). All are currently running fine on CF6.1 (although I'm
looking for any excuse to upgrade them to CF8 - but that's a separate
project).

basically, I've got a bunch of apps that are generating data that
other (new) applications need to use as core read-only data.

background:

at my last job (a large enterprise), the systems we developed
"subscribed" to "feeds" of enterprise aggregated data, which we used
as the read-only core data that made up our systems. "Subscription"
meant filling out forms and getting them approved by various managers,
and "feeds" were nightly DTS-type jobs (or whatever the Oracle
equivalent is) dumping data directly into our SQL Server DBs, blowing
away the old data and replacing it with fresh dumps.

I think their aggregated data came from their reporting systems, not
live ones (so it was perhaps "third-hand" - I could never get a clear
answer), but it was read-only and the known limitation was that it was
24 to 48 hours old.
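For what it's worth, the safest version of that "blow away and replace" pattern I've seen loads into a staging table first and only swaps it in after a sanity check, so a bad dump never clobbers good data. A rough sketch (in Python with SQLite just to illustrate; the table names and threshold are made up):

```python
import sqlite3

# Hypothetical sketch: nightly feed lands in a staging table and is
# only swapped into the live table if it passes a sanity check.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE customers_staging (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

def load_feed(rows, min_expected=1):
    """Load tonight's dump into staging; only swap it in if it looks sane."""
    conn.execute("DELETE FROM customers_staging")
    conn.executemany("INSERT INTO customers_staging VALUES (?, ?)", rows)
    count = conn.execute("SELECT COUNT(*) FROM customers_staging").fetchone()[0]
    if count < min_expected:  # e.g. a truncated or partial feed
        raise ValueError(f"feed looks short: {count} rows")
    conn.execute("DELETE FROM customers")
    conn.execute("INSERT INTO customers SELECT * FROM customers_staging")
    conn.commit()

load_feed([(1, "Acme"), (2, "Globex"), (3, "Initech")])
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 3
```

An empty or suspiciously small dump then raises instead of wiping the live table, and we keep working off yesterday's data.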

Was much data actually changed on their systems before being fed to
us? I doubt it; even on a busy day, I'm sure less than 5% of the
customer or product details were updated. It was at peak times each
quarter, when bulk customers and products were added to the Oracle
systems, that we needed it to eventually trickle down to us.

Problem 1:

Occasionally the Oracle-wielding DBAs changed something at their end,
and out of the 100,000 or so records dumped nightly into one of our
DBs, they would "forget" about 5,000 or so, orphaning data in our
systems. If we were lucky enough to find it, it usually took a
manager-to-manager yelling match to fix - until the next time it
happened.

Question 1: is there a better way of handling situations like that
(missing records)? Thankfully the new systems I'm working on are using
(low) tens of thousands of records, not (high) hundreds of thousands,
and I'll be more privy to any changes.
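One cheap guard I'm considering for this: before accepting a dump, diff the feed's keys against the keys our data actually references, and refuse (or at least alert on) any load that would orphan records. A rough sketch, with made-up names:

```python
# Hypothetical reconciliation check: flag any IDs our system references
# that have gone missing from tonight's feed, instead of silently
# orphaning them and finding out weeks later.

def find_orphans(feed_ids, referenced_ids):
    """Return IDs our data points at that are absent from the feed."""
    return sorted(set(referenced_ids) - set(feed_ids))

# previous feeds had IDs 1..5; tonight they "forgot" 4 and 5
tonight = [1, 2, 3]
our_foreign_keys = [1, 2, 4, 5]

orphans = find_orphans(tonight, our_foreign_keys)
if orphans:
    # alert before loading, rather than a yelling match later
    print(f"refusing load: feed is missing referenced records {orphans}")
```

It turns "if we were lucky enough to find it" into an automatic check that fires the night it happens.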

Problem 2:

a limitation we had (with the feeds from the Oracle systems) was that
the data was "stale". We didn't know about any new customers added to
their systems until the next day - at best.

Question 2: the "push" from the "owners" of the data actually worked
well at keeping things simple: it gave us pre-aggregated data and
insulated us from user error and stuff-ups (if they had a problem and
went offline, we kept on working with the last updated data). There
was also no need for us to fire a webservice call at them asking for
records created or updated since last time, nor any need for us to
expose webservices so their systems could update us directly. BUT
should we have pushed them to expose a series of webservices for
on-demand data gathering as the better way to go? And what would have
been the best way for us to know about _their_ new customers as they
were created?
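If we had gone the pull route, I imagine it would have looked something like this: ask their (hypothetical) webservice for records modified since a watermark timestamp, then advance the watermark after a successful sync. Sketched in Python with a stand-in function for the remote call:

```python
from datetime import datetime

# Hypothetical "pull" alternative to the nightly push: fetch only
# records created or updated since our last successful sync.

def fetch_changes_since(source_rows, watermark):
    """Stand-in for a webservice call: rows modified after `watermark`."""
    return [r for r in source_rows if r["updated"] > watermark]

their_system = [
    {"id": 1, "name": "Acme",    "updated": datetime(2008, 1, 1)},
    {"id": 2, "name": "Globex",  "updated": datetime(2008, 1, 3)},
    {"id": 3, "name": "Initech", "updated": datetime(2008, 1, 5)},
]

last_sync = datetime(2008, 1, 2)
changes = fetch_changes_since(their_system, last_sync)
last_sync = max(r["updated"] for r in changes)  # advance the watermark
print([r["id"] for r in changes])  # [2, 3]
```

That would have solved the staleness, but it also needs their tables to carry a reliable last-updated timestamp, which is exactly the kind of thing I could never get a clear answer about.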

Problem 3:

for this new development, we need to send some data back to the legacy
CF6.1 systems: either a value that gets changed externally and needs
to be updated, or some of the new systems' data that the legacy
systems need to plug directly into their processing logic.

Question 3: I can't help thinking I shouldn't be passing data between
these applications, but should be sending messages instead. Thoughts?
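To make that concrete, this is the shape I have in mind: the new system publishes a "value changed" message, and the legacy side subscribes and applies it to its own copy, so neither app writes into the other's tables. A minimal in-process sketch (a real broker or gateway would replace this; all the names are invented):

```python
from collections import defaultdict

# Toy publish/subscribe: handlers register for a topic, publishers
# fire messages at the topic without knowing who consumes them.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    for handler in subscribers[topic]:
        handler(message)

legacy_values = {}

# the legacy CF app's side would register a handler like this
subscribe("value.changed",
          lambda msg: legacy_values.update({msg["key"]: msg["value"]}))

# the new system publishes instead of updating legacy tables directly
publish("value.changed", {"key": "discount_rate", "value": 0.15})
print(legacy_values)  # {'discount_rate': 0.15}
```

The appeal is that the apps stay decoupled: the new system doesn't need to know about legacy schemas, and the legacy side can apply the change with its own logic.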


sorry for the war 'n' peace version. I've been arguing with myself
which way to go for a while now and just looking for some fresh
eyes/thoughts.

suggestions?

thanx
barry.b

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"CFCDev" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/cfcdev?hl=en
-~----------~----~----~----~------~----~------~--~---