> On 4/4/2014 1:21 PM, da...@dandymadeproductions.com wrote:
>> On working with the MyJSQLView database GUI access tool it has been
>> determined that a local file/memory database would be valuable to
>> perform recurring analysis on datasets from the connected datasource.
>> Sqlite is being considered as the local database.
>
> If I understand you correctly, you're suggesting making a local snapshot
> of a networked database to optimize performance. I'm not sure what
> remote database you're using, but it seems to me with properly designed
> prepared statements there won't be much gain in downloading everything
> in advance to the local machine, especially since the download will
> certainly include more data than is actually needed. Additionally
> consider the loss of coherency when the upstream database is modified
> but the local snapshot becomes stale.
The assumption is that the networked database, the datasource, could be
on the local LAN or the Internet. The 'snapshot' would not necessarily
be everything, but rather a subset of the datasource content selected by
a SELECT statement. The application already has a mechanism in place
that lets the user store queries in a bucket for reuse. I guess a
similar commercial term for this would be ETL, but without the
transform perhaps. I believe one of the commercial tools out there,
Tableau, uses this exact concept. The benefit I see from this local
file/memory database is that I have found some processing of data for
analysis occurs over and over to derive comparison results. By having
the data local, the user can perform these analyses without constantly
re-querying the production database. The analysis tools can also remain
unchanged since the data is still coming from an RDBMS. It is assumed
that the user knows the data can be stale at any point beyond the
initial load.

>
>> All the underlying code has been created for conversion between
>> datasource and local database. The code is now being developed to
>> perform the DB to DB transfer population. The basic algorithm being
>> considered is:
>>
>> Process 1, Loop: 1,2,3
>> 1. Read Datasource row from query.
>> 2. Create SQL INSERT statement.
>> 3. Write SQL INSERT into queue.
>>
>> Process 2, Loop: 4,5
>> 4. Read SQL INSERT from queue.
>> 5. Write SQL INSERT to SQLite db.
>
> The queue seems to be an unnecessary intermediary. Simply alternate
> between reading from the remote database and writing the received data
> to the SQLite database. This simpler design is also more amenable to
> prepared statements which offer indispensable performance and security
> benefits.

Thank you for that input. My general thoughts were along the lines of
your simpler design; a rough sketch of what that might look like is at
the end of this message. The only reason the queue was considered is
that it is quite conceivable a network datasource would be the choke
point, so a queue filled by several threads in process 1 could speed up
the population.

>
> Do keep in mind that an SQL database consists not only of INSERTs (the
> data) but also CREATE TABLEs (the schema) plus indexes and triggers and
> views.

The data is the only interest here, besides indexes for the new table
data. Routines have already been completed to re-create the structure
of the datasource database.

>
>> Perhaps someone from this forum could possibly comment on another open
>> source project with a similar type of db to db transfer that could be
>> studied, or an alternative algorithm.
>
> It doesn't matter what database you use, the algorithm remains the same:
> read source, write destination, repeat. Many database systems have
> convenient import routines for common data sources. For example, the
> SQLite shell has the .import command for loading a file into a table.
>
> --
> Andy Goth | <andrew.m.goth/at/gmail/dot/com>

These import routines are exactly how I have had to populate a local
SQLite database when testing analysis plugins. The process of setting
up the database table(s), exporting the data from the datasource, and
importing it into the local file/memory database would be much simpler
for non-expert users if automated, so they can focus on deriving
results from analysis with a local, higher-performance file/memory
database.

Thank you, Andy, for your comments.

Dana M. Proctor
MyJSQLView Project Manager
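
P.S. For the curious, here is a rough, untested sketch of the simpler,
queue-less design using JDBC prepared statements. The class and table
names are hypothetical; it assumes the destination table has already
been created in the local database with a column layout matching the
SELECT, and that a SQLite JDBC driver (for example the Xerial driver
with its jdbc:sqlite: URLs) is on the classpath.

import java.sql.*;

public class SnapshotLoader {
    // Copy the result of a user-supplied SELECT on the remote datasource
    // into an existing table of the local SQLite file/memory database.
    public static void copyQuery(Connection source, Connection sqlite,
                                 String selectSql, String destTable)
            throws SQLException {
        try (Statement read = source.createStatement();
             ResultSet rs = read.executeQuery(selectSql)) {

            int cols = rs.getMetaData().getColumnCount();

            // Build "INSERT INTO destTable VALUES (?, ?, ...)" once and
            // reuse it as a prepared statement for every row.
            StringBuilder sql = new StringBuilder("INSERT INTO ")
                    .append(destTable).append(" VALUES (");
            for (int i = 1; i <= cols; i++) {
                sql.append(i == 1 ? "?" : ", ?");
            }
            sql.append(")");

            sqlite.setAutoCommit(false);  // one transaction for the whole load
            try (PreparedStatement write =
                     sqlite.prepareStatement(sql.toString())) {
                int pending = 0;
                while (rs.next()) {
                    for (int i = 1; i <= cols; i++) {
                        write.setObject(i, rs.getObject(i));
                    }
                    write.addBatch();
                    if (++pending % 1000 == 0) {
                        write.executeBatch();  // flush every 1000 rows
                    }
                }
                write.executeBatch();
                sqlite.commit();
            } catch (SQLException e) {
                sqlite.rollback();
                throw e;
            }
        }
    }
}

A call would look something like copyQuery(remote,
DriverManager.getConnection("jdbc:sqlite:snapshot.db"), storedQuery,
"analysis_snapshot"). If the queued, multi-threaded variant ever proves
necessary, the inner loop could instead have reader threads feed a
BlockingQueue of row arrays to a single SQLite writer thread, since
SQLite only allows one writer at a time anyway.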