> Yes, but can RavenDB replicate to SQL Server or SQL Server Express? Does it need to anyway? The OP stated "probably SQL express databases", which I'm assuming means he hasn't picked the database for both the front and back end (pls confirm).
RavenDB would be highly suitable for this scenario. You could have one server
instance of RavenDB, run your clients in embedded mode, and have them sync
with each other (http://ravendb.net/docs/server/scaling-out/replication/from-embedded-server).
The licences for this scenario would probably cost much less than a single SQL licence.

On Thu, Mar 28, 2013 at 1:34 PM, Corneliu I. Tusnea <corne...@acorns.com.au> wrote:

> Greg,
>
> You are right, roll-your-own is not perfect. It's actually very tricky to
> get right, but if it's broken it can be fixed :) (open source, here I come :)
>
> Like everything else, getting the updates right is the key. We use a
> pattern like the one above, and I think it can be done in code if your code
> is consistent. E.g. have a proper set of base classes to track those fields,
> and a single point through which every save/update is routed, so the values
> are always updated, and so on.
>
> Regarding "get everything up to a few seconds ago": yes, that is very
> important, also because the database might have a different time resolution,
> so the same "timestamp" might be slightly different.
> One extra point: there could be a few minutes' difference between the
> date-time of the client and the server. When we triggered a sync, we always
> requested from the server all data from 5 minutes before the known sync
> time onwards.
>
> Sync Framework is, yes, maybe one of the best alternatives.
>
> Azure Data Sync... I can't get it to work. I can't even save the list of
> tables/columns I want to sync. It's telling me the POST payload is too big :)
> Looks like the infrastructure wanted to be too safe and assumed nobody
> would try to post a checkbox next to 100 tables with a few thousand columns :)
>
> On Thu, Mar 28, 2013 at 2:05 AM, Greg Low (GregLow.com) <g...@greglow.com> wrote:
>
>> Hi Corneliu,
>>
>> I don't mind roll-your-own solutions. They work OK. The biggest issue I
>> come across with those LastUpdated-type columns is checking how they get
>> updated. I often see people run into issues with them if they don't
>> understand how they were updated.
>>
>> For example, doing work on a DB2 site recently, they had a scheme like
>> this but missed the odd row. It turns out they were using a trigger to
>> set the LastUpdated column value. But the value appeared when the trigger
>> finished running, yet it was set to the time the trigger started. So it
>> occasionally looked like a row had jumped backwards in time. Similar
>> issues can happen when the columns are updated in proc code, and worse
>> again when they are updated from client code.
>>
>> 99% of those problems are avoided by saying "give me everything since
>> last time, but only up to a few seconds ago" instead of getting everything
>> up to the most recent.
>>
>> Sync Framework does work reasonably well. Azure Data Sync is still in
>> preview, but we've had pretty good success with it for simple scenarios.
>>
>> Replication is probably overkill and isn't supported on all editions, so
>> that might be a limiting factor.
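A minimal sketch of Greg's "only up to a few seconds ago" rule, in Python
against a DB-API connection ("?" placeholders as in sqlite3). The table name,
the LastUpdated column and the 10-second lag are illustrative assumptions,
not anything from a specific product:

    import datetime

    SAFETY_LAG = datetime.timedelta(seconds=10)   # "a few seconds ago"

    def fetch_changes(conn, table, last_sync_utc):
        # Pull rows changed since the last watermark, but stop short of
        # "now" so rows whose LastUpdated is still being written (e.g. by
        # a slow trigger) are not silently skipped.
        upper = datetime.datetime.utcnow() - SAFETY_LAG
        rows = conn.execute(
            f"SELECT * FROM {table} "                 # internal table name, not user input
            "WHERE LastUpdated > ? AND LastUpdated <= ? "
            "ORDER BY LastUpdated",
            (last_sync_utc, upper)).fetchall()
        return rows, upper   # persist "upper" as the new watermark

Because the window's upper bound is saved as the next watermark, a row that
lands inside the safety lag is simply picked up on the following pass.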
>> And just to add to the "can't believe that pasta sauce is a vege"
>> discussion in the USA: we have nothing to be proud of. The GST exemption
>> on food that the Greens needed before passing the legislation has led to
>> significant chunks of our Tax Act that define what "food" is.
>>
>> For example, we now have several pages in the Australian Tax Act that
>> cover the definitions of the word "pudding". I shudder to think what it
>> cost us all to have that written <sigh>.
>>
>> Regards,
>>
>> Greg
>>
>> Dr Greg Low
>>
>> 1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
>> SQL Down Under | Web: www.sqldownunder.com
>>
>> From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Corneliu I. Tusnea
>> Sent: Wednesday, 27 March 2013 8:07 PM
>> To: ozDotNet
>> Subject: Re: occasionally connected application design problem
>>
>> Greg,
>>
>> I'm sure the SQL guys will tell you about some "magical tool" that can do
>> all of this for you hands-free and without any headaches (fingers crossed),
>> but my take would be the good old REST API model.
>>
>> 1. For every table, have three columns: LastUpdated, LastUploaded and
>> LastDownloaded. On every local change, update LastUpdated to UTC now
>> (never use local times!).
>>
>> 2. Keep a table with the "sync status" of each table, where all you need
>> to store is the TableName, LastUploaded and LastDownloaded.
>>
>> 3. Have a background thread that monitors for network events (don't
>> continuously ping your server, as you'll burn the battery of those devices):
>> http://www.codeproject.com/Articles/64975/Detect-Internet-Network-Availability
>>
>> 4. When you have connectivity, all you need to do is select the top 100
>> rows from each table where the row's LastUpdated is greater than the
>> LastUploaded recorded for that table in the status table.
>> (In other words, you select all the rows that changed after the point
>> recorded in your status table.)
>> You then try to push those back to your server. For every row that "made
>> it" to the server, you update LastUploaded to UtcNow, or even better, to
>> the time just before you started the sync. (A rough sketch of one such
>> upload pass follows after step 5.)
>>
>> 5. You do the reverse for downloading data: you ask the server for all
>> changes since your LastDownload. Once all the changes have been received,
>> you update your own LastDownload.
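Not the promised magical tool, but a rough Python sketch of what one upload
pass (steps 2 and 4 above) could look like. The SyncStatus table, the column
names and the push_to_server callback are illustrative assumptions, not a
real library's API:

    import datetime

    def upload_pass(conn, push_to_server, batch=100):
        # One pass of step 4: push locally changed rows, table by table.
        # Stamp the watermark BEFORE selecting, so anything modified while
        # the sync runs is picked up next time ("the time just before you
        # started the sync").
        sync_started = datetime.datetime.utcnow()
        tables = conn.execute(
            "SELECT TableName, LastUploaded FROM SyncStatus").fetchall()
        for table, last_uploaded in tables:
            rows = conn.execute(
                f"SELECT * FROM {table} WHERE LastUpdated > ? "
                "ORDER BY LastUpdated LIMIT ?",
                (last_uploaded, batch)).fetchall()
            if rows and push_to_server(table, rows):   # your REST call; must ack
                conn.execute(
                    "UPDATE SyncStatus SET LastUploaded = ? WHERE TableName = ?",
                    (sync_started, table))
                conn.commit()
        # Step 5 (download) is the mirror image, driven by LastDownloaded.

Advancing LastUploaded only after the server acknowledges the batch means a
dropped connection just causes the same rows to be re-sent, so the server
side should treat uploads as idempotent upserts.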
>> With a bit of reflection and some clear naming conventions, you could
>> code all of this generically enough that you can simply run it over your
>> database regardless of the number of tables and columns.
>>
>> I'm now going to let the SQL guys deliver their magical tool :)
>>
>> Regards,
>>
>> Corneliu.
>>
>> On Wed, Mar 27, 2013 at 8:41 PM, Greg Harris <harris.gre...@gmail.com> wrote:
>>
>> Dear People,
>>
>> I need some help to get some good ideas for a design issue I am facing…
>>
>> The application will be geographically dispersed and only occasionally
>> connected to the internet, with a slow / unreliable connection.
>> The users at remote branch offices do daily data entry into their own
>> local databases (probably SQL Express databases).
>> On a periodic basis, the remote branch offices need to synchronise data
>> with head office (probably a full SQL Server database).
>> Most (99%) of the data will travel from the remote branch offices to head
>> office; some reference data may travel back to the remote branch offices.
>>
>> There are a couple of design ideas that I have had:
>>
>> SQL Server Replication (http://msdn.microsoft.com/en-us/library/ms151198.aspx):
>> I do not know how well this will work on wires of such poor quality, or
>> how easy (hard) it will be to support remotely.
>>
>> Program-based updates: have a program running in the background at each
>> site, attempting connection with head office and transferring data. All
>> rows would have a transferred-status flag that would be set once a
>> successful transfer has been acknowledged.
>>
>> File extracts: once an hour, produce a text file (with a checksum) of all
>> data entered in the last hour; a background job copies the file to the
>> head office server, which then applies the updates.
>>
>> Please share with me and the group what design ideas and experiences you
>> have had that worked well, and the ones you would avoid if faced with the
>> same design decision again today.
>>
>> Many thanks
>> Greg Harris
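For the file-extracts option, a tiny Python sketch of producing an hourly
extract whose integrity head office can verify before applying it. The CSV
layout and the companion .sha256 side file are just one possible convention,
not anything the original post specifies:

    import csv
    import hashlib
    import io

    def write_extract(rows, path):
        # Branch side: write the hour's rows as CSV, plus a SHA-256 digest
        # in a side file so truncation/corruption on the unreliable link
        # can be detected.
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        body = buf.getvalue().encode("utf-8")
        with open(path, "wb") as f:
            f.write(body)
        with open(path + ".sha256", "w") as f:
            f.write(hashlib.sha256(body).hexdigest() + "\n")

    def verify_extract(path):
        # Head-office side: recompute and compare before applying updates.
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        with open(path + ".sha256") as f:
            expected = f.read().strip()
        return actual == expected

A file that fails verification can simply be re-requested, which suits the
slow, occasionally connected links described above.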