Well, it finally happened.  The source data DoNotCall files I download from
the FTC monthly for clients finally pushed over the 2GB file size limit for
the Area Codes used by my clients!  Luckily I saw this freight train coming
about 2 or 3 years ago, and began to evaluate various SQL database solutions
to use as a back end database, while using VFP on the Front End to still
process the various combinations of Area Code and Phone Number records for
each individual client.  But, I put off doing much more than some casual
testing and playing around with Firebird, MySQL, PostgreSQL, and even a
little MS SQL Server.  Oracle wa$ never an option, and full blown M$ SQL
Server was out for the same reason.  I eventually settled on PostgreSQL, and
studied my arse off in spurts and spatters so I could try to get ready to
make the jump into migrating the DoNotCall table build process for my
clients to the new PostgreSQL based back end.

So, after a few hours of coding and processing this Easter/Passover Sunday
(okay, 17 hours and counting), I finally nailed it down.  I must say I am
very impressed thus far with this VFP/PostgreSQL hybrid model.  I had
previously made a little progress with PostgreSQL on some preparatory steps
in migrating a few of my other applications to a hybrid VFP Front End and
PostgreSQL back end model.  But I had done nothing this intense, where I am
dealing with a few hundred million records in a database table
(PostgreSQL DoNotCall table) that is over 2.5GB in size (thus far, I have
not yet included phone number data for any Area Codes other than what my
clients need).  I put together a fairly simple parameterized view within VFP
that pulls only the records for the Area Codes I need to process, one Area
Code at a time.  Then I use some more table based logic to assemble the VFP
record set for the Area Code & Phone Numbers needed by each client.  The
final tables for each client are still well within the 2Gb file limitation,
so no rush to start migrating my End User application quite yet <G>.  But
that is coming along with some other enhancements.
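For anyone curious, the parameterized remote view I describe above boils down to something like the sketch below.  This is only an illustration, not my production code -- the connection, view, and table names (pgConn, v_dnc_by_ac, donotcall) are made up for the example:

```foxpro
* Sketch only -- DSN, connection, view, and table names are hypothetical.
OPEN DATABASE dnc_front
CREATE CONNECTION pgConn DATASOURCE "PG_DNC"   && ODBC DSN pointing at PostgreSQL

* Remote view parameterized on the Area Code, so only one
* Area Code's records come across at a time.
CREATE SQL VIEW v_dnc_by_ac ;
   REMOTE CONNECTION pgConn ;
   AS SELECT areacode, phone FROM donotcall WHERE areacode = ?pcAreaCode

* Pull the records for one Area Code
pcAreaCode = "585"
USE v_dnc_by_ac IN 0
```

From there the per-client table-based logic just works against the view's cursor like any other VFP record set.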

My next step is to replicate the current PostgreSQL database that is running
on one of my Windows 2003 Servers over to my Suse Linux Enterprise Server
(v-10), which already has PostgreSQL running on it (so far used for test
purposes only).  After that I may just cut my teeth on learning Python to
replace the VFP side of the process, just to get a feel for what else lies
before me in the way of tricks and traps.  One of the little tricks I use for
SQL-SELECT processing with fairly large VFP tables (3 million records, 78MB
table) really came in handy when pulling records with my VFP parameterized
view, since the PostgreSQL records were coming over real slow.  I issued a
RECCOUNT() (after performing a REQUERY() with the new Parameter value)
against the Parameterized View, and POW!  The records flew over pronto!
Works great for the 78MB table following a SQL-SELECT, and works just as
well for the much larger PostgreSQL table result set!
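For what it's worth, that trick is only a couple of lines.  The view and variable names below are illustrative, and the speed-up relies on RECCOUNT() forcing VFP to finish fetching the remote result set up front instead of progressively:

```foxpro
* Sketch only -- v_dnc_by_ac and pcAreaCode are hypothetical names.
pcAreaCode = "585"         && new parameter value for the view
SELECT v_dnc_by_ac
REQUERY()                  && re-run the parameterized view

* Touching RECCOUNT() forces the full result set across the wire,
* so subsequent scans of the cursor run at local speed.
lnRecs = RECCOUNT("v_dnc_by_ac")
```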

WooHoo!  Nothing like a little set of victories in a relatively new arena to
make my day!


Gil



Gilbert M. Hale 
New Freedom Data Resources 
Pittsford, NY 
585-359-8085 - Office (Rolls To Cellular) 
585-202-4341 - Cellular/VoiceMail 
[EMAIL PROTECTED] 





_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://leafe.com/mailman/listinfo/profox
OT-free version of this list: http://leafe.com/mailman/listinfo/profoxtech
Searchable Archive: http://leafe.com/archives/search/profox
This message: http://leafe.com/archives/byMID/profox/[EMAIL PROTECTED]
