For me, I've done kind of the opposite. In other words, I'll connect to a
SQL DB (Postgres, Oracle, whatever), download the whole database to VFP,
read info from the information_schema views, recreate indexes, etc., then
work with the data in the local VFP DB for analysis and so forth.
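That pull-it-all-down-locally workflow can be sketched roughly like this. This is just an illustration, not my actual code: it uses Python with SQLite on both ends purely so the sketch is self-contained, where in practice the source would be Postgres/Oracle (queried via information_schema) and the destination would be VFP DBFs; SQLite's sqlite_master plays the role of information_schema here.

```python
import sqlite3

def copy_database(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    """Recreate every user table from src in dst and copy its rows,
    mimicking the 'download the whole database locally' workflow."""
    # sqlite_master stands in for information_schema on a real DB server
    tables = src.execute(
        "SELECT name, sql FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
    ).fetchall()
    for name, ddl in tables:
        dst.execute(ddl)  # recreate the table structure locally
        rows = src.execute(f"SELECT * FROM {name}").fetchall()
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            dst.executemany(
                f"INSERT INTO {name} VALUES ({placeholders})", rows
            )
    dst.commit()
```

Once the copy exists, you can index and restructure it however the analysis demands without touching production.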

So it's not really the complete opposite, because the real prod database is
not 'moved' to VFP; the local copy is just there for analysis, tests, and so
on. I found the speed and flexibility the VFP DB gives me trumps what I get
from the DB servers. Of course that is a generalization, and a major factor
here is the 'locality' of access. Also, there are cases where I am not the
DBA of the prod database, which makes working with VFP an even better option
(I can index and restructure freely based on what I'm trying to figure out).

And I fully understand the reasons behind the individual "file size"
limitations of DBFs, DBCs, etc. It really was not disk space or anything
like that; it was how many bytes to use for "addressing", and back in the
FoxBase days I think the 32-bit integer was about the largest available (a
signed 32-bit byte offset tops out at 2GB). I have wondered, though,
whether there are ways to use the 'extra bytes' in the DBF header to expand
the address space. The idea being that older tools would only be able to
access the "first" 2GB of data, but newer tools could be written to look at
the other bytes and switch to 64-bit numbers to go further. Anyway, no
biggie. It is simple enough to split data into multiple tables based on
some value in the data, and then stored procedures or other code can easily
"calculate" which table(s) to pull for a query using that value.
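The "calculate which table(s)" part is just a stable function of the split value. A minimal sketch, in Python rather than VFP and with hypothetical names (`table_for`, `tables_for_range`, a dense integer key, and a made-up rows-per-table cap):

```python
def table_for(base: str, key: int, rows_per_table: int = 1_000_000) -> str:
    """Name of the split table holding a given key, e.g. 'orders_0002'.
    Assumes dense integer keys; any stable function of the data works."""
    return f"{base}_{key // rows_per_table:04d}"

def tables_for_range(base: str, lo: int, hi: int,
                     rows_per_table: int = 1_000_000) -> list[str]:
    """All split tables a query over keys [lo, hi] needs to touch,
    so the calling code can UNION (or loop) just those tables."""
    first = lo // rows_per_table
    last = hi // rows_per_table
    return [f"{base}_{n:04d}" for n in range(first, last + 1)]
```

In VFP the same idea would live in a stored procedure or a wrapper class that builds the SELECT against the computed table name(s); each split table stays comfortably under the 2GB file cap.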

And as a quick side note about the 'temporary cursor' approach I used
instead of MS's table/record buffering... once I started using it and built
a few code classes around it, I never wanted to use the standard buffering
options again. And I never ran into the odd 'table-header-lock' stuff, or
whatever it was, that folks sometimes hit with MS's buffering options. The
thing I liked best was that I got very fine-grained control over the
"buffering" logic and how to resolve conflicts. I had a truly separate
"data set" that I could analyze six ways to Sunday against the original
data, even compare who was locking what, "pause" updates with specific
messages, and so on. Was that needed for every project - nope. But I did
need it a couple of times. Well, 'need' may be an overstatement; I'll just
say what the client wanted was not feasible with the standard buffering
approaches, as far as I could figure out.
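The conflict-resolution part of that roll-your-own buffering boils down to a three-way compare: the snapshot you took, your edited copy, and whatever is live in the table at commit time. A sketch of that core in Python, with a hypothetical `three_way_merge` over dict-shaped records; this illustrates the idea, not the actual class code:

```python
def three_way_merge(original: dict, mine: dict, live: dict):
    """Resolve one record field by field. My edit wins unless the live
    row also changed the same field to something different, in which
    case the field is flagged as a conflict for the caller to handle
    (prompt the user, log it, 'pause' the update with a message, etc.)."""
    merged, conflicts = {}, []
    for field in original:
        base, ours, theirs = original[field], mine[field], live[field]
        if ours == base:
            merged[field] = theirs          # I didn't touch it: keep live
        elif theirs == base or theirs == ours:
            merged[field] = ours            # only I changed it: mine wins
        else:
            merged[field] = theirs          # both changed it differently
            conflicts.append(field)
    return merged, conflicts
```

The payoff over the built-in buffering is that the conflict list is yours: you decide per field whether to overwrite, back off, or surface a specific message, rather than taking TABLEUPDATE's all-or-nothing behavior.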

-Charlie

On Wed, Nov 27, 2019 at 9:17 AM MB Software Solutions, LLC <
mbsoftwaresoluti...@mbsoftwaresolutions.com> wrote:

> On 11/27/2019 3:47 AM, Alan Bourke wrote:
> > On Tue, 26 Nov 2019, at 6:51 PM, Tracy Pearson wrote:
> >> Charlie,
> >>
> >> To respond to your wish for larger DBF capacities.
> > There's also Advantage Database Server (now owned by SAP) which lets you
> work with DBF\CDX\FPT over 4GB.
>
>
> Ever since I switched to MySQL (and later MariaDB) back in 2004, I've
> never wanted to go back to DBFs for major app tables. Never.
>
>
[excessive quoting removed by server]

_______________________________________________
Post Messages to: ProFox@leafe.com
Subscription Maintenance: https://mail.leafe.com/mailman/listinfo/profox
OT-free version of this list: https://mail.leafe.com/mailman/listinfo/profoxtech
Searchable Archive: https://leafe.com/archives
This message: 
https://leafe.com/archives/byMID/cajgvlx0k2hxt1_t9+xp7kybgqk-genvxnp_tststpqgy0qa...@mail.gmail.com
** All postings, unless explicitly stated otherwise, are the opinions of the 
author, and do not constitute legal or medical advice. This statement is added 
to the messages for those lawyers who are too stupid to see the obvious.
