[sqlite] sqlite sync over network

2019-12-05 Thread George
Hi,
I need to sync SQLite tables over the network between a few devices.
Please help me with an opinion on what would be the best approach?
thanks, 
George
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite sync over network

2019-12-05 Thread Simon Slavin
On 5 Dec 2019, at 12:54pm, George  wrote:

> I need to sync SQLite tables over the network between a few devices.
> Please help me with an opinion on what would be the best approach?

Unfortunately this is a subject which has no simple solution.  The things you 
have to do are not simple.  To help us advise you, you might answer these 
questions.

Do all your devices make changes to the database, or are changes made by just 
one of them ?

What governs when to synchronise ?  Should it be done at regular intervals ?
If so, how often ?  Otherwise, what should trigger synchronisation ?

Are all your devices permanently on and connected to the internet, or might 
some of them be turned off or out of contact some of the time ?

Are your devices all proper computers or do you have to worry about battery-use 
and bandwidth on some of them ?


Re: [sqlite] sqlite sync over network

2019-12-05 Thread George
Hi Simon,
Thanks for helping me. Let me explain our setup:

Changes -> bidirectional. All terminals can save to the db. The amount of collected
data is not big, and the interval between readings will be a minimum of 2 seconds
when we push more readings. The data itself is very small, like reading temperature
from a probe.
Terminals -> ARM boards with Debian Linux.
PC -> this is the master, a kind of server. The OS is Ubuntu or Windows; right now
it's Ubuntu. Here we study all the data and sometimes make changes to the database
too, but it's maintenance: deleting old data, changing names and settings of
terminals.
No internet connection. We run this on our private LAN. Everything is local and
high speed.
No problem with energy consumption, batteries, etc. Everything runs on a backup UPS.

George C.

-gXg



Re: [sqlite] sqlite sync over network

2019-12-05 Thread George
Hi Simon,
I forgot to talk about the sync trigger. It would be great for it to happen after
an insert: we read a probe on terminal T1, save the data, and in an after-insert
trigger also queue a job for sync, leaving the work to another thread. If a sync
job fails, it must be retried, or on timeout stay in a sync log table.
If that's not possible, then we have to see what else we can do. It's not really
time-critical, so we can be flexible here too.
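A minimal sketch of this "queue on insert" idea. All table, column, and trigger names here are made up for illustration; the real schema on the terminals is not shown in the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (
    id INTEGER PRIMARY KEY,
    probe TEXT,
    value REAL,
    taken_at TEXT DEFAULT (datetime('now'))
);
-- rows queued here are picked up by the background sync thread;
-- jobs that fail stay behind and are retried on the next pass
CREATE TABLE sync_queue (
    reading_id INTEGER NOT NULL,
    attempts   INTEGER NOT NULL DEFAULT 0
);
CREATE TRIGGER queue_for_sync AFTER INSERT ON readings
BEGIN
    INSERT INTO sync_queue (reading_id) VALUES (new.id);
END;
""")

# saving a reading automatically queues it for sync
conn.execute("INSERT INTO readings (probe, value) VALUES ('T1', 21.5)")
pending = conn.execute("SELECT COUNT(*) FROM sync_queue").fetchone()[0]
print(pending)  # 1
```

The sync thread would read `sync_queue`, push rows to the master, and delete queue entries that succeeded, leaving failed ones for the next attempt.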
George C.



Re: [sqlite] sqlite sync over network

2019-12-05 Thread Simon Slavin
Okay.  Do you really need all of the terminals to have up-to-date data from all 
the terminals ?

If not, it's a simple problem: each terminal sends data to the master, which
puts all the incoming data into the same database.  Terminals don't need a SQL
database at all.  They simply report changes to the master, as often as they
need to.

Information sent from each terminal can be in the form of a text file of new
data in CSV format.  Incoming text files pile up in a directory on the master.
It is important that the master does not try to process data into the SQLite
database immediately, since this would introduce a delay and perhaps problems
with locking.  It's faster just to accept the data, store it in a file on
disk, and worry about processing it separately.

A program on the master server continually reads any file available, adds its 
data to the central database, then deletes the file.  If there are no files 
waiting to be processed it can idle for a second or two.  Another program, 
which can run at the same time, gives you access to this database so you can 
read the data and do your maintenance.

Does that work for you ?
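A rough sketch of the master-side loop described above: pick up any waiting CSV file, load its rows into the central database in one transaction, then delete the file. The directory layout, database path, and column names are all assumptions for illustration:

```python
import csv
import glob
import os
import sqlite3
import time


def process_once(conn, incoming_dir):
    """Load every waiting CSV file into the central database, then delete it."""
    for path in sorted(glob.glob(os.path.join(incoming_dir, "*.csv"))):
        with open(path, newline="") as f:
            rows = [(r["terminal"], r["probe"], r["value"], r["taken_at"])
                    for r in csv.DictReader(f)]
        with conn:  # one transaction per file: either all of its rows land, or none do
            conn.executemany(
                "INSERT INTO readings (terminal, probe, value, taken_at) "
                "VALUES (?, ?, ?, ?)", rows)
        os.remove(path)  # delete only after the commit succeeded


def run(db_path, incoming_dir):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS readings "
                 "(terminal TEXT, probe TEXT, value REAL, taken_at TEXT)")
    while True:
        process_once(conn, incoming_dir)
        time.sleep(2)  # idle briefly when no files are waiting
```

Deleting the file only after the commit means a crash mid-import re-imports the file rather than losing it; deduplication, if needed, would be handled by keys in the `readings` table.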


Re: [sqlite] sqlite sync over network

2019-12-05 Thread George

We already have an app written in Qt on all the terminals, and SQLite is already
used by this app. The only missing part is the sync, so that we have data on the
PC for reports, and also on-screen reports on the terminals if an operator needs
to check. We need SQLite because it helps us manage data on the terminal and
display it with a Qt table view and sort/filter proxy model.
We don't have the PC app yet; we are working on it now, and of course still
improving the terminal app. Terminals use a touch display (a Samsung board), and
the user can check the status of the whole system.
 



Re: [sqlite] sqlite sync over network

2019-12-05 Thread Reid Thompson
On Thu, 2019-12-05 at 12:54, George wrote:

> Hi,
> I need to sync SQLite tables over the network between a few devices.
> Please help me with an opinion on what would be the best approach?
> thanks,
> George

This may suit your needs; it is based on SQLite.

Bedrock was built by Expensify, and is a networking and distributed transaction 
layer built atop SQLite, the fastest, most reliable, and most widely 
distributed database in the world.

https://bedrockdb.com/




Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Zhu, Liang [AUTOSOL/ASSY/US]
If I do not do VACUUM, my database size just keeps rising; eventually the
database size gets to over 90% of the storage size, and I can't save data to the
database any more.

Thank you,
Liang

-Original Message-
From: sqlite-users  On Behalf Of 
Gerry Snyder
Sent: Thursday, December 5, 2019 12:12 AM
To: SQLite mailing list 
Subject: [EXTERNAL] Re: [sqlite] what is the optimized way to do the vacuum?

On Wed, Dec 4, 2019, 9:28 PM Zhu, Liang [AUTOSOL/ASSY/US] < 
liang@emerson.com> wrote:

> All SQLite experts,
>
> I have one table; I am inserting and deleting records to and from this
> table every 250 ms, and I always maintain 1,000 rows in it.  I have
> another table; I am inserting and deleting data to and from that
> table every 1 s, and its record count stays at 200,000 rows.
> Can I get a recommendation on the optimal technique to
> vacuum my database?
>
> Thank you,
> Liang
>

Why do you think that you need to vacuum at all?




[sqlite] Enable Or Disable Extension Loading

2019-12-05 Thread Jose Isaias Cabrera

Greetings!

Just to be sure...

The function,

int sqlite3_enable_load_extension(sqlite3 *db, int onoff);

enables or disables the loading of extensions for a database connection[1].
Once it's set, will it stay on?  Or does one need to turn it on every time one
connects to the database?  It appears that the latter is the correct behavior,
but I just want to make sure.  Thanks.

josé

[1] https://www.sqlite.org/c3ref/enable_load_extension.html


Re: [sqlite] sqlite sync over network

2019-12-05 Thread Simon Slavin
On 5 Dec 2019, at 4:48pm, George  wrote:

> We have an app already done in Qt on all terminals. Sqlite it's already used 
> by this app. 

When a terminal makes up a SQL INSERT command to add new data to the database, 
have it also save this command to a file.  That's the file you send to the 
master computer so that the master database of all transactions can be updated.
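One way to sketch this suggestion: wrap local inserts so that each one also appends an equivalent statement to a log file that can later be shipped to the master and replayed. Table and column names are made up; note the quoting caveat in the comment:

```python
import os
import sqlite3
import tempfile


def insert_reading(conn, log_path, probe, value):
    """Insert locally, and append an equivalent SQL statement to a log file
    that can later be shipped to the master and replayed there."""
    conn.execute("INSERT INTO readings (probe, value) VALUES (?, ?)",
                 (probe, value))
    conn.commit()
    # naive literal quoting -- fine for simple probe names and numbers, but a
    # real implementation should escape values more carefully
    with open(log_path, "a") as f:
        f.write("INSERT INTO readings (probe, value) VALUES ('%s', %s);\n"
                % (probe.replace("'", "''"), float(value)))


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (probe TEXT, value REAL)")
log_path = os.path.join(tempfile.mkdtemp(), "sync_log.sql")
insert_reading(conn, log_path, "T1", 21.5)
line = open(log_path).read()
```

The master can then execute the shipped statements in order; using the row's own timestamp or key in the statement keeps replays idempotent.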


Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Simon Slavin
On 5 Dec 2019, at 6:31pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> If I do not do VACUUM, my database size just keeps rising; eventually the
> database size gets to over 90% of the storage size, and I can't save data to
> the database any more.

VACUUM should not be saving you any space.  And VACUUM puts a lot of traffic 
through your storage device which will eventually kill it.

What journal mode are you using ?  In other words, what does the command
PRAGMA journal_mode
output ?

Does your application close the connection correctly ?  In other words, do you 
call sqlite3_close() or sqlite3_close_v2() and check the result code it returns 
?


Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread David Raymond
SQLite is pretty good at using free space inside the file. So if inserting 
something is going to run you out of space, then it's going to run you out of 
space whether the file was previously vacuumed or not.

Also, a reminder that when VACUUM runs, SQLite makes a brand-new copy of the
database, then goes through and updates the pages of the original file, which
requires writes to the rollback journal. So if your database is size n, the
worst-case scenario is that VACUUM will peak at using 3n worth of disk space
(original file, copy, journal). If your database is already at 90% of your
storage, then you're going to have a hard time vacuuming it anyway.

You could consider using incremental vacuum to clean up free space without 
re-creating the whole file, but that has to be enabled when the database file 
is created. Again though, that only frees up unused space. If an insert is 
making your database size bigger, then you don't have any unused space to clean 
up.
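A small sketch of the incremental-vacuum setup described above, showing that `auto_vacuum` must be chosen before the first table is created, and that `PRAGMA incremental_vacuum` releases pages from the freelist. File paths are throwaway temp files:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
# auto_vacuum must be chosen before the first table is created
# (changing it on an existing database requires a full VACUUM to take effect)
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "x" * 500) for i in range(2000)])
conn.commit()

conn.execute("DELETE FROM t")
conn.commit()
free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]

# release free pages back to the OS without rewriting the whole file
conn.execute("PRAGMA incremental_vacuum").fetchall()
free_after = conn.execute("PRAGMA freelist_count").fetchone()[0]
```

As the message notes, this only returns pages that are already unused; it does not repack partially full pages the way a full VACUUM does.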




Re: [sqlite] Enable Or Disable Extension Loading

2019-12-05 Thread Keith Medcalf

On Thursday, 5 December, 2019 11:39, Jose Isaias Cabrera  
asked:

>Just to be sure...
>
>The function,
>
>int sqlite3_enable_load_extension(sqlite3 *db, int onoff);
>
>enables or disables a database to allow or disallow the loading of
>extensions[1].  Once it's set, will it stay on?  Or does one need to be
>turn it on every time one connects to the database?  It appears that the
>latter is the correct behavior, but I just want to make sure.  Thanks.

Each time a connection is opened, the default setting of the load_extension flag
is set according to the SQLITE_ENABLE_LOAD_EXTENSION compile-time define, which
defaults to 0.

Using either the sqlite3_enable_load_extension or sqlite3_db_config makes a 
change to the setting for that connection only which remains in effect until 
the connection is closed or the configuration of the connection is changed 
again (using the same API calls).  Each connection needs to enable the loading 
of extensions separately and the change only applies to that connection.  (It 
applies to the connection, not the database)

See also:  https://sqlite.org/c3ref/load_extension.html
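The per-connection behaviour can be seen from Python's `sqlite3` module, whose `Connection.enable_load_extension()` wraps this C API. Whether the method exists at all depends on how Python was compiled, hence the guard:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# enable_load_extension() wraps sqlite3_enable_load_extension(); it is only
# present if this Python build was compiled with extension-loading support
if hasattr(conn, "enable_load_extension"):
    conn.enable_load_extension(True)   # affects this connection only
    conn.enable_load_extension(False)  # turn it back off when done

# a new connection -- even to the same database file -- starts over with
# loading disabled, so each connection must opt in separately
conn2 = sqlite3.connect(":memory:")
```

In other words, the flag is never persisted anywhere in the database file; it lives only in the in-memory connection object.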

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.







Re: [sqlite] Enable Or Disable Extension Loading

2019-12-05 Thread Jose Isaias Cabrera

Keith Medcalf, on Thursday, December 5, 2019 02:24 PM, wrote...

Thanks.  Yes, I saw the load extension text, but somehow it left me thinking 
that it was a setting per DB and not per connection.  All clear now. Thanks.

josé


[sqlite] Query freezing on latest SQLite 3.30.1 build

2019-12-05 Thread Clovis Ribeiro, MyABCM
Folks,

The following query, when executed against the attached database using SQLite 
3.30.1 (running on Windows OS) will take forever to execute. If we remove all 
columns from both tables that are not actually used in the query, it is 
executed in milliseconds.

SELECT COUNT(*) FROM
(SELECT
 src.member_id src_id,
 dst.member_id dst_id,
 asg.contribution_percentage
FROM
 mdl_assignments asg
INNER JOIN
 mdl_member_instances src ON asg.source_mbi_id = src.id
INNER JOIN
 mdl_member_instances dst ON asg.destination_mbi_id = dst.id
WHERE
 src.period_scenario_id = 1 AND dst.period_scenario_id = 1) T

When testing the same query with older versions of SQLite we used in the past 
(more than 6 years ago), the query also executed in milliseconds.

We have executed several different tests but could not figure out why this 
query hangs on the latest version of SQLite but runs fast in older versions or 
when we remove columns from the tables in the database.

Hope this can help you improve SQLite.

Thanks

Clovis Ribeiro
MyABCM



Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Roman Fleysher
A side note about VACUUM:

If I remember correctly, tables which do not have an INTEGER PRIMARY KEY may
have their rowid values reassigned. Be careful if you are using rowid.

Roman


From: sqlite-users  on behalf of 
Simon Slavin 
Sent: Thursday, December 5, 2019 2:03 PM
To: SQLite mailing list 
Subject: Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?



Re: [sqlite] Query freezing on latest SQLite 3.30.1 build

2019-12-05 Thread David Raymond
The mailing list strips off all attachments, so you'll have to provide another 
place to get that.

Alternatively could you post the schema here as text, along with the explain 
query plan output from the slow version and from a fast version?
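For reference, EXPLAIN QUERY PLAN output can be produced like any other query; the snippet below runs it against toy stand-in tables whose columns are guessed from the reported query (the real schema was in the stripped attachment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# toy stand-ins for the reported schema (column lists guessed from the query)
conn.executescript("""
CREATE TABLE mdl_member_instances (
    id INTEGER PRIMARY KEY, member_id INTEGER, period_scenario_id INTEGER);
CREATE TABLE mdl_assignments (
    id INTEGER PRIMARY KEY, source_mbi_id INTEGER,
    destination_mbi_id INTEGER, contribution_percentage REAL);
""")
plan = conn.execute("""
EXPLAIN QUERY PLAN
SELECT COUNT(*) FROM mdl_assignments asg
INNER JOIN mdl_member_instances src ON asg.source_mbi_id = src.id
INNER JOIN mdl_member_instances dst ON asg.destination_mbi_id = dst.id
WHERE src.period_scenario_id = 1 AND dst.period_scenario_id = 1
""").fetchall()
for row in plan:
    # the last column is the human-readable step, e.g. "SCAN asg"
    # or "SEARCH src USING INTEGER PRIMARY KEY (rowid=?)"
    print(row[-1])
```

Comparing this output between the fast and slow builds is usually the quickest way to see which join order or index choice changed.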



Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Zhu, Liang [AUTOSOL/ASSY/US]
We are using journal_mode=delete, and sqlite3_close().

Liang



Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Simon Slavin
On 5 Dec 2019, at 8:07pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> We are using Journal_mode=delete, sqlite3_close().

Please check that when all your connections are closed, all temporary files are 
deleted.  So if your database is called 'database.sql' then there should be no 
other file starting with that name, for instance 'database.sql-wal', 
'database.sql-shm', or 'database.sql-journal'.

Does your application add data, then delete data, then add more data, etc. ?  
Or does it just add data ?

VACUUM should not be necessary.  And it can use a lot of space while it's 
working.  We need to find out why you are having to use it.


Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Richard Hipp
On 12/5/19, Simon Slavin  wrote:
>
> VACUUM should not be saving you any space.

It might, depending on what he is doing.

If a single page of the database holds (say) 30 rows of data, and the
OP deletes 10 rows
from that page, that leaves some empty space on the page.  That empty
space is reused later, but only if new rows are inserted that have
keys that belong on the page in question.  If new content is appended
to the table (for example, if this is a ROWID table with automatically
chosen rowids and new rows are inserted) then the empty space freed up
by deleted rows on interior pages will continue to go unused.

Once a sufficient number of rows are removed from a page, and the free
space on that page gets to be a substantial fraction of the total
space for the page, then the page is merged with adjacent pages,
freeing up a whole page for reuse.  But as doing this reorganization
is expensive, it is deferred until a lot of free space accumulates on
the page.  (The exact thresholds for when a rebalance occurs are
written down some place, but they do not come immediately to my mind,
as the whole mechanism *just works* and we haven't touched it in about
15 years.)

So, if the OP is adding rows to the end of a table, intermixed with
deleting random rows from the middle of the table, then the table will
grow in size and VACUUM will restore it to the minimum size.

But the OP is wrong on this point:  The table does not grow *without
bound*.  There is an upper bound on the amount of free space within a
table.  If you go above that bound, then space is automatically
reclaimed.  But, it might be that the upper bound is larger than what
the OP can tolerate.
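The delete/insert pattern described above can be observed directly with pragmas; a toy demonstration (all names invented) showing the freelist growing after deletes and the file shrinking after VACUUM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "x" * 200) for i in range(5000)])
conn.commit()
before = conn.execute("PRAGMA page_count").fetchone()[0]

# delete most rows from the middle of the table; wholly emptied pages go
# on the freelist (visible via PRAGMA freelist_count), but the database
# does not shrink on its own
conn.execute("DELETE FROM t WHERE id % 5 != 0")
conn.commit()
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]

conn.execute("VACUUM")  # repack to the minimum size
after = conn.execute("PRAGMA page_count").fetchone()[0]
```

Watching `freelist_count` over time, as the OP does, is a reasonable way to decide whether that internal free space is actually accumulating.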
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vacuum?

2019-12-05 Thread Zhu, Liang [AUTOSOL/ASSY/US]
According to sqlite.org, the purpose of VACUUM is as follows: the VACUUM command
rebuilds the database file, repacking it into a minimal amount of disk space.

I am trying to resolve an issue. I am keeping the record count in each row for
the table with 20 rows. After the table fills up to 200,000 records, when I
delete data and insert new data, the record count can get all over the place;
it can sometimes increment by up to 200 from one record to the next.
I am thinking it might be related to vacuum. I vacuum when freelist_count
reaches 1,000.

Thank you,
Liang

-Original Message-
From: sqlite-users  On Behalf Of 
Richard Hipp
Sent: Thursday, December 5, 2019 3:57 PM
To: SQLite mailing list 
Subject: Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vaccum?

On 12/5/19, Simon Slavin  wrote:
>
> VACUUM should not be saving you any space.

It might, depending on what he is doing.

If a single page of the database holds (say) 30 rows of data, and the OP 
deletes 10 rows from that page, that leaves some empty space on the page.  That 
empty space is reused later, but only if new rows are inserted that have keys 
that belong on the page in question.  If new content is appended to the table 
(for example, if this is a ROWID table with automatically chosen rowids and new 
rows are inserted) then the empty space freed up by deleted rows on interior 
pages will continue to go unused.

Once a sufficient number of rows are removed from a page, and the free space on 
that page gets to be a substantial fraction of the total space for the page, 
then the page is merged with adjacent pages, freeing up a whole page for reuse. 
 But as doing this reorganization is expensive, it is deferred until a lot of 
free space accumulates on the page.  (The exact thresholds for when a rebalance 
occurs are written down some place, but they do not come immediately to my 
mind, as the whole mechanism *just works* and we haven't touched it in about
15 years.)

So, if the OP is adding rows to the end of a table, intermixed with deleting 
random rows from the middle of the table, then the table will grow in size and 
VACUUM will restore it to the minimum size.

But the OP is wrong on this point:  The table does not grow *without bound*.  
There is an upper bound on the amount of free space within a table.  If you go 
above that bound, then space is automatically reclaimed.  But, it might be that 
the upper bound is larger than what the OP can tolerate.
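The freelist behaviour described above is easy to observe directly through PRAGMA freelist_count and page_count. A minimal sketch in Python (the table name and row counts are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, payload TEXT)")

# Fill several pages, then delete most rows so whole pages land on the freelist.
db.executemany("INSERT INTO data (payload) VALUES (?)",
               [("x" * 100,) for _ in range(5000)])
db.execute("DELETE FROM data WHERE id <= 4980")
db.commit()

before_pages = db.execute("PRAGMA page_count").fetchone()[0]
free_before = db.execute("PRAGMA freelist_count").fetchone()[0]

db.execute("VACUUM")  # rebuilds the file, dropping the free pages

after_pages = db.execute("PRAGMA page_count").fetchone()[0]
free_after = db.execute("PRAGMA freelist_count").fetchone()[0]

print(free_before, free_after, before_pages, after_pages)
```

After the mass delete, freelist_count is nonzero; VACUUM drops it back to zero and shrinks page_count accordingly.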
--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite sync over network

2019-12-05 Thread Jens Alfke


> On Dec 5, 2019, at 8:05 AM, George  wrote:
> 
> Changes -> bidirectional. All terminals can save to db. Amount of collected 
> data it's not big and frequency between readings will be minimum 2sec. When 
> we push more readings. Data itself it's very small. Like reading temperature 
> from a probe.

The product I work on — Couchbase Mobile* — does this, but it may not be 
exactly what you want because it uses SQLite internally on the client side, and 
not at all on the server. So it's not something you can plug your existing 
SQLite-based code into. (Our data model is not relational but JSON-based.)

Data sync is hard. I've been working in this area since 2011, so I think I have
some expertise here :)

Problems you're likely to run into:

1. You can't use regular integer primary keys, because different clients will 
end up creating rows with the same keys and cause collisions when they sync. 
You either have to use UUIDs as keys, or else somehow partition the key space 
in a deterministic way, like prefixing a fixed client ID to a key.
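Both key schemes from point 1 can be sketched with SQLite directly. A minimal illustration (the table and the per-device client ID are hypothetical):

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
# A TEXT primary key sidesteps the integer-rowid collisions described above.
db.execute("CREATE TABLE readings (key TEXT PRIMARY KEY, temp REAL)")

client_id = "terminal-7"  # hypothetical fixed per-device identifier

def make_key():
    # A UUID alone would do; prefixing the client ID also partitions
    # the key space deterministically per device.
    return f"{client_id}:{uuid.uuid4()}"

k1, k2 = make_key(), make_key()
db.execute("INSERT INTO readings VALUES (?, ?)", (k1, 21.5))
db.execute("INSERT INTO readings VALUES (?, ?)", (k2, 21.7))

rows = db.execute("SELECT key FROM readings").fetchall()
print(len(rows))
```

Two devices generating keys this way can never collide, so rows sync cleanly.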

2. If multiple entities can update the db, there will likely be conflicts. 
Conflicts are hard to manage, and how you do it is entirely dependent on your 
high-level schema. In the worst case, conflicts require human intervention.
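One common (and lossy) policy for point 2 is last-writer-wins by timestamp. A toy sketch — not what Couchbase does, just the simplest deterministic rule:

```python
def resolve(local, remote):
    """Last-writer-wins: keep the version with the later timestamp.
    Ties break on client ID so every peer picks the same winner."""
    key_local = (local["ts"], local["client"])
    key_remote = (remote["ts"], remote["client"])
    return local if key_local >= key_remote else remote

# Two conflicting updates to the same logical record.
a = {"ts": 100, "client": "A", "temp": 21.5}
b = {"ts": 105, "client": "B", "temp": 21.9}

winner = resolve(a, b)
print(winner["temp"])
```

The resolution is symmetric: whichever side runs it, the same version wins, which is the property any automatic conflict policy must have.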

3. You need a message-oriented protocol. It's best to keep a continuous 
bidirectional connection open; WebSocket is a good fit. You'll have to deal 
with errors establishing the connection, and with unexpected disconnects, by 
retrying periodically.

4. Schema upgrades in a distributed system are a mess. If your system is 
centralized enough you can take it down and upgrade every peer's database, then 
bring it back up, but of course that doesn't work in a decentralized system. 
(One of the main reasons Couchbase is schema-less.)

5. Keep in mind there is no central source of truth. Data takes finite time to 
propagate, and transient errors greatly increase that time. Even if you have a 
central server, it will be behind the clients that create the data, so it 
doesn't have the latest info. No one does.

—Jens

* https://www.couchbase.com/products/mobile
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the vaccum?

2019-12-05 Thread Keith Medcalf

That does not make any sense at all.  How are you deleting old rows?  The 
easiest way is to use the table rowid ...

delete from data where rowid < (select max(rowid) - 20 from data);
insert into data (... data but not rowid ...) values (...);

This will explode after you have inserted 9223372036854775807 rows -- at 4 
records per second that is roughly 73 billion years.  You will end up with the 
database size stabilizing at a few pages more than the size of the data.  If 
you can keep a count of the inserts (in a program variable) and only do the 
delete every pageful of rows or so, that will reduce I/O significantly (as will 
batching the inserts, of course).

ie:

static int c = 0;

void insertRow(sqlite3 *db /*, ... column values ... */) {
  c += 1;
  if (c >= 1000) {
     /* trim once per 1000 inserts rather than on every insert */
     sqlite3_exec(db,
         "DELETE FROM data WHERE rowid < (SELECT max(rowid) - 20 FROM data)",
         0, 0, 0);
     c = 0;
  }
  /* insert into data (... data but not rowid ...) values (...); */
}
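The same rolling-window trim is easy to check end to end. A sketch in Python (the single-column table layout is assumed for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE data (value REAL)")  # rowid is implicit

TRIM_EVERY = 1000
inserted = 0

def insert_row(value):
    global inserted
    inserted += 1
    if inserted % TRIM_EVERY == 0:
        # Keep only the newest 20 rows, addressed by rowid.
        db.execute("DELETE FROM data WHERE rowid < "
                   "(SELECT max(rowid) - 20 FROM data)")
    db.execute("INSERT INTO data (value) VALUES (?)", (value,))

for i in range(5000):
    insert_row(float(i))

count = db.execute("SELECT count(*) FROM data").fetchone()[0]
min_rowid = db.execute("SELECT min(rowid) FROM data").fetchone()[0]
print(count, min_rowid)
```

Between trims the table grows by up to 1000 rows, then collapses back to the window, so the file size stabilizes instead of growing without bound.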

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Zhu, Liang [AUTOSOL/ASSY/US]
>Sent: Thursday, 5 December, 2019 14:31
>To: SQLite mailing list 
>Subject: Re: [sqlite] [EXTERNAL] Re: what is the optimized way to do the
>vaccum?
>
>According to the SQLlite.org, the purpose of vacuum is as follows,
>VACUUM command rebuilds the database file, repacking it into a minimal
>amount of disk space.
>
>I am trying to resolving an issue,   I am keeping the record count in
>each row for the table with 20 row,  After the table fill up 200,000
>record,  when I deleting the data and inserting new data.  my record
>count can get all over the place,  the record count some time can be
>incremented up to 200 from one record to the next.
> I am thinking it might be related to vacuum.  I am vacuum when
>freelist_count reaches to 1000.
>
>Thank you,
>Liang
>



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users