Hi Adam,
Yes, the 'offline' clients are using REALSQLDatabases with an identical
schema.
That's essentially what I was doing for the local version: using standard
SQLExecute calls and DatabaseRecords to insert, update, delete, etc. in the
REALSQLDatabase. My thought was then to query the local database for records
where 'to update' = true, then generate the commands to update the server
(sending them through an HTTPSocket).
Any thoughts on how to transmit the data? I was thinking the most secure way
would be to generate an HTTP POST with the command type (1 for add, 2 for
delete, etc.) and add the parameters as part of the post.
Something like server.php?1&username&password&tablename&John&Smith&555-1212
Then have the PHP middleware assemble the SQL.
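To make the idea concrete, here is a small sketch (in Python, standing in for the REALbasic HTTPSocket side) of encoding one sync command as a POST body with named fields rather than positional values in the URL, so the PHP side can validate each field individually. All field names ("cmd", "f_" prefixes, etc.) are invented for illustration, not part of Rob's actual scheme:

```python
# Hypothetical client-side encoding of one sync command. Named fields
# replace the positional server.php?1&... style so the middleware can
# check and bind each value separately.
from urllib.parse import urlencode

def build_sync_post(command, table, fields, username, password):
    """Encode one sync command as an application/x-www-form-urlencoded body."""
    payload = {
        "cmd": command,    # 1 = add, 2 = delete, etc.
        "user": username,
        "pass": password,  # only ever over HTTPS, never plain HTTP
        "table": table,
    }
    # Prefix data columns so they cannot collide with the control fields.
    for name, value in fields.items():
        payload["f_" + name] = value
    return urlencode(payload)

body = build_sync_post(1, "contacts",
                       {"first": "John", "last": "Smith", "phone": "555-1212"},
                       "rob", "secret")
```

The same string would be written into the HTTPSocket's POST content on the REALbasic side.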
Thanks,
Rob
----- Original Message -----
From: "Adam Shirey" <[EMAIL PROTECTED]>
To: "REALbasic NUG" <[email protected]>
Sent: Friday, January 12, 2007 1:08 PM
Subject: Re: Converting desktop app to multi-user "Smart Client"
Hi Rob,
I assume that for 'offline' mode, you'll have a reliable method for storing
data -- perhaps a REALSQLDatabase with the same schema as the server, so you
can run queries and store data to be synchronized? The natural first step is
to let offline clients insert data that can be easily queried and
manipulated; then, when you sync up, a function finds everything with its
dirty bit set, assembles the SQL queries, and executes them.
Easier yet, create one DatabaseRecord for each dirty object and insert
it --
no SQL query required, and it can be BLOB-friendly.
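The dirty-bit pass described above can be sketched roughly like this (Python with sqlite3 standing in for a REALSQLDatabase, which is SQLite under the hood; the table, column names, and `send` callback are all invented for illustration):

```python
# Find every row with its dirty bit set, ship it, then clear the bit.
import sqlite3

def sync_dirty_rows(conn, send):
    """Send each dirty row via send(row), then clear its flag."""
    cur = conn.execute(
        "SELECT uuid, first, last, phone FROM contacts WHERE to_update = 1")
    for row in cur.fetchall():
        send(row)  # e.g. POST the row to the PHP middleware
        conn.execute("UPDATE contacts SET to_update = 0 WHERE uuid = ?",
                     (row[0],))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (uuid TEXT PRIMARY KEY, first TEXT, "
             "last TEXT, phone TEXT, to_update INTEGER)")
conn.execute("INSERT INTO contacts VALUES ('u1','John','Smith','555-1212',1)")
conn.execute("INSERT INTO contacts VALUES ('u2','Jane','Doe','555-3434',0)")
sent = []
sync_dirty_rows(conn, sent.append)
```

In a real client you would only clear the flag after the server acknowledges the row, so a dropped connection doesn't lose the change.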
Knowing a little more about your implementation may help with further
suggestions.
HTH!
-Adam
On 1/12/07, Rob <[EMAIL PROTECTED]> wrote:
Hi all,
I have created a desktop database front end to handle some of our company's
workflow processes. We are now hoping to modify the app so individual users
can run it on their local machines (both online and offline) and sync data
to a central server.
The server will most likely be a Linux box using PHP as middleware and MySQL
as the database. The system will have approx. 20-30 unique users who will
have different levels of access (controlled by the database).
In planning for this upgrade I have identified two potential issues:
1) Determining the best way to handle local-to-network data sync.
I will be creating a UUID for each record, so I'm not worried about
duplicate record IDs. What does concern me is how to determine which records
need to be synced. The only thing I can think of is to have a "last
modified" date field for server queries and a "to update" Boolean field.
When the sync command is called, the desktop app would get a recordset based
on the "to update" flag, then generate the SQL to be sent to the server. The
app would then request all records updated after the last sync (stored as a
local property).
Does this make sense or is there a better way to deal with synchronizing
data?
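The pull half of that plan (fetch everything modified since the locally stored last-sync timestamp, then advance the timestamp) could look roughly like this. This is a hedged sketch only: the server response is simulated as a list of dicts, and the field names are illustrative, not from the actual schema:

```python
# Select rows changed after the local last-sync watermark and compute
# the new watermark. ISO-8601 timestamps compare correctly as strings.
def pull_updates(server_rows, last_sync):
    """Return (rows changed after last_sync, new watermark)."""
    changed = [r for r in server_rows if r["modified"] > last_sync]
    new_last_sync = max((r["modified"] for r in changed), default=last_sync)
    return changed, new_last_sync

rows = [
    {"uuid": "u1", "modified": "2007-01-10T09:00:00"},
    {"uuid": "u2", "modified": "2007-01-12T13:08:00"},
]
changed, mark = pull_updates(rows, "2007-01-11T00:00:00")
```

One caveat with "last modified" timestamps: client and server clocks can disagree, so it is safer to use the server's clock for the watermark than the desktop's.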
2) What would be the best way to handle SQL strings over the network? Send
the whole string? Or send the parameters plus an SQL identifier and have the
PHP middleware assemble the SQL?
I don't think either would be that complicated, but I am worried about
security.
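On the security point, the second option (parameters plus an identifier) is generally the safer one, because the middleware can keep a fixed list of statements and only ever bind client values as parameters, so client input never becomes SQL text. A minimal sketch, with sqlite3 standing in for the PHP/MySQL side and invented statement names:

```python
# The middleware holds a whitelist of statements; clients send only a
# statement name and values, which are always bound as parameters.
import sqlite3

STATEMENTS = {
    "add_contact": "INSERT INTO contacts (uuid, first, last) VALUES (?, ?, ?)",
    "delete_contact": "DELETE FROM contacts WHERE uuid = ?",
}

def run_command(conn, name, params):
    """Execute only a known statement; unknown names raise KeyError."""
    sql = STATEMENTS[name]
    conn.execute(sql, params)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contacts (uuid TEXT PRIMARY KEY, first TEXT, last TEXT)")
# Even a hostile value is stored as a literal string, not executed as SQL.
run_command(conn, "add_contact",
            ("u1", "Robert'); DROP TABLE contacts;--", "Smith"))
```

Sending whole SQL strings, by contrast, means the server must either trust or fully parse whatever arrives.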
Thanks,
Rob
_______________________________________________
Unsubscribe or switch delivery mode:
<http://www.realsoftware.com/support/listmanager/>
Search the archives of this list here:
<http://support.realsoftware.com/listarchives/lists.html>