Yes
On 28 Mar 2013 06:33, "Katherine Moss" <katherine.m...@gordon.edu> wrote:

>  Yes, but can RavenDB replicate to SQL Server or SQL Server Express?
>
> *From:* ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] *On Behalf Of* Heinrich Breedt
> *Sent:* Wednesday, March 27, 2013 2:12 PM
> *To:* g...@greglow.com; ozDotNet
> *Subject:* RE: occasionally connected application design problem
>
>
> RavenDB with replication might be an option.
>
> On 28 Mar 2013 02:05, "Greg Low (GregLow.com)" <g...@greglow.com> wrote:
>
> Hi Corneliu,
>
>
> I don’t mind roll-your-own solutions; they work OK. The biggest issue I come
> across with those LastUpdated-type columns is how they get updated. I often
> see people run into trouble when they don’t understand how the values are set.
>
>
> For example, on a DB2 site recently, they had a scheme like this but it
> missed the odd row. It turned out they were using a trigger to set the
> LastUpdated column. The value only became visible when the trigger finished
> running, yet it was set to the time the trigger started, so occasionally a
> row appeared to have jumped backwards in time. Similar issues can happen
> when the columns are updated in proc code, and worse again when they are
> updated from client code.
>
>
> 99% of those problems are avoided by saying “give me everything since last
> time, but only up to a few seconds ago” instead of getting everything up to
> the most recent moment.
>
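> [Editor's note: a minimal sketch of that windowed query, in Python with SQLite for brevity; the table, column names and timestamps are illustrative, not from the thread.]

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Id INTEGER PRIMARY KEY, LastUpdated TEXT)")

now = datetime(2013, 3, 27, 12, 0, 0, tzinfo=timezone.utc)
conn.executemany("INSERT INTO Orders VALUES (?, ?)", [
    (1, (now - timedelta(minutes=5)).isoformat()),  # safely settled
    (2, (now - timedelta(seconds=1)).isoformat()),  # still "in flight" -- skip this pass
])

last_sync = (now - timedelta(hours=1)).isoformat()
watermark = (now - timedelta(seconds=10)).isoformat()  # "a few seconds ago"

# Everything since last time, but only up to the watermark; row 2 will be
# picked up on the next pass, once it is safely in the past.
changed = [r[0] for r in conn.execute(
    "SELECT Id FROM Orders WHERE LastUpdated > ? AND LastUpdated <= ? ORDER BY Id",
    (last_sync, watermark))]
print(changed)  # [1]
```

> The ISO-8601 UTC strings compare correctly as text, which is why the watermark comparison works without parsing dates.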
>
> Sync Framework does work reasonably well. Azure Data Sync is still in
> preview, but we’ve had pretty good success with it for simple scenarios.
>
>
> Replication is probably overkill and isn’t supported on all editions, so
> that might be a limiting factor.
>
>
> And just to add to the “can’t believe that pasta sauce is a vege”
> discussion in the USA: we have nothing to be proud of. The GST exemption on
> food that the Greens required before passing the legislation has led to
> significant chunks of our Tax Act being devoted to defining what “food” is.
>
>
> For example, we now have several pages in the Australian Tax Act covering
> the definition of the word “pudding”. I shudder to think what it cost us
> all to have that written <sigh>.
>
>
> Regards,
>
> Greg
>
> Dr Greg Low
>
> 1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
>
> SQL Down Under | Web: www.sqldownunder.com
>
>
> *From:* ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] *On Behalf Of* Corneliu I. Tusnea
> *Sent:* Wednesday, 27 March 2013 8:07 PM
> *To:* ozDotNet
> *Subject:* Re: occasionally connected application design problem
>
>
> Greg,
>
>
> I'm sure the SQL guys will tell you about some "magical tool" that can do
> all of this for you, hands-free and without any headaches (fingers
> crossed), but my take would be the good old REST API model.
>
>
> 1. For every table, have three columns: LastUpdated, LastUploaded and
> LastDownloaded. For every change you make locally, update LastUpdated to
> UTC now (never use local times!).
>
> 2. Keep a table with the "sync status" of each table, where all you need
> to store is TableName, LastUploaded and LastDownloaded.
>
> 3. Have a background thread that monitors for network events (don't
> continuously ping your server, as you'll burn the battery on those
> devices):
> http://www.codeproject.com/Articles/64975/Detect-Internet-Network-Availability
>
> 4. When you have connectivity, select the top 100 rows from each table
> where the row's LastUpdated is later than the LastUploaded recorded for
> that table in the status table (in other words, all the rows that have
> changed since the last successful upload).
>
> You then try to push those rows to the server. For every row that "made it"
> to the server, you update LastUploaded to UtcNow, or better still, to the
> time just before you started the sync.
>
> 5. You do the reverse for downloading data: ask the server for all changes
> since your LastDownloaded. Once all the changes have been received, you
> update your own LastDownloaded.
>
> With a bit of reflection and some clear naming conventions, you could code
> all of this generically enough that it simply runs over your database,
> regardless of the number of tables and columns.
>
>
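> [Editor's note: a minimal sketch of steps 1, 2 and 4 above, in Python with SQLite standing in for SQL Server Express; the REST upload is stubbed out with a lambda, and all table and column names are illustrative.]

```python
import sqlite3
from datetime import datetime, timezone

def utcnow_iso():
    return datetime.now(timezone.utc).isoformat()

local = sqlite3.connect(":memory:")
local.executescript("""
    CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT, LastUpdated TEXT);
    CREATE TABLE SyncStatus (TableName TEXT PRIMARY KEY,
                             LastUploaded TEXT, LastDownloaded TEXT);
    INSERT INTO SyncStatus VALUES
        ('Customers', '1970-01-01T00:00:00+00:00', '1970-01-01T00:00:00+00:00');
""")

def upload_changes(conn, table, push_to_server, batch=100):
    """Step 4: push rows changed since this table's LastUploaded watermark."""
    started = utcnow_iso()  # note the time *before* selecting, per the thread's advice
    (last_uploaded,) = conn.execute(
        "SELECT LastUploaded FROM SyncStatus WHERE TableName = ?",
        (table,)).fetchone()
    rows = conn.execute(
        f"SELECT * FROM {table} WHERE LastUpdated > ? LIMIT ?",
        (last_uploaded, batch)).fetchall()
    if all(push_to_server(row) for row in rows):
        # Only advance the watermark once every row "made it" to the server.
        conn.execute("UPDATE SyncStatus SET LastUploaded = ? WHERE TableName = ?",
                     (started, table))
    return len(rows)

# One locally edited row; the lambda stands in for the REST call.
local.execute("INSERT INTO Customers VALUES (1, 'Acme', '2013-03-27T12:00:00+00:00')")
sent = upload_changes(local, "Customers", push_to_server=lambda row: True)
print(sent)                                                  # 1 row uploaded
print(upload_changes(local, "Customers", lambda row: True))  # 0 -- nothing new
```

> Step 5 (download) would be the mirror image of `upload_changes`, advancing LastDownloaded instead.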
>
> I'm now going to let the SQL guys deliver their magical tool :)
>
> Regards,
>
> Corneliu.
>
> On Wed, Mar 27, 2013 at 8:41 PM, Greg Harris <harris.gre...@gmail.com>
> wrote:
>
> Dear People,
>
> I need some good ideas for a design issue I am facing…
>
> The application will be geographically dispersed and only occasionally
> connected to the internet, over a slow / unreliable connection.
>
> The users at remote branch offices do daily data entry into their own
> local databases (probably SQL Server Express).
>
> On a periodic basis, the remote branch offices need to synchronise data
> with head office (probably a full SQL Server database).
>
> Most (99%) of the data will travel from the remote branch offices to head
> office; some reference data may travel back to the remote branch offices.
>
> There are a couple of design ideas I have had:
>
> SQL Server Replication (http://msdn.microsoft.com/en-us/library/ms151198.aspx):
> I do not know how well this will work over links of such poor quality, or
> how easy (or hard) it will be to support remotely.
>
> Program-based updates: have a program running in the background at each
> site, attempting a connection with head office and transferring data. All
> rows would carry a transferred-status flag, set once a successful transfer
> has been acknowledged.
>
> File extracts: once an hour, produce a text file (with a checksum) of all
> data entered in the last hour; a background job copies the file to the
> head office server, which then applies the updates.
>
> Please share with me and the group the design ideas and experiences that
> worked well for you, and the ones you would avoid if faced with the same
> design decision again today.
>
> Many thanks
>
> Greg Harris
>
