Andy,
I can state that SQLite easily went up to 100GB with zero issues for us.
We decided to change the structure of our database and we reduced our
database size as we didn't want to incur extra VPN costs. We also
realised we had designed the database wrong and it held a lot of useless data.
Hartwig,
You have got most of the tricks we know about. Other more experienced
developers may provide a better insight.
We had to move about 60GB of table data around and we ended up doing
what you have done, with one extra bit: we batched the jobs up in
multiples of 10,000 between BEGIN and COMMIT.
that this is not meant
to be criticism of SQLite but rather one of the tradeoffs we know
about and make. We win on so many other things that we have no
issues.
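For anyone wanting to copy the approach, a minimal sketch of that batching pattern, assuming hypothetical old_data/new_data tables and an integer id key (the real schema and the surrounding driver code aren't shown in this thread):

```
-- Copy in chunks of 10,000 rows, each chunk inside its own transaction,
-- so no single commit (or its journal/-wal file) becomes enormous.
BEGIN;
INSERT INTO new_data SELECT * FROM old_data WHERE id >= 1     AND id <= 10000;
COMMIT;

BEGIN;
INSERT INTO new_data SELECT * FROM old_data WHERE id >= 10001 AND id <= 20000;
COMMIT;
-- ...and so on until the copy is finished.
```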
Rob
On 4 Sep 2019, at 12:02, ingo wrote:
On 4-9-2019 12:24, Rob Willett wrote:
Peng,
Dropping very large tables is time consuming. Dropping a 59GB table
takes quite a long time for us even on fast hardware. Dropping smaller
tables is faster though.
Not sure what size tables you have but something to think about. We
experimented with new tables and changing old tables
On Sat, Aug 31, 2019 at 6:24 AM Rob Willett
wrote:
Hi,
We have a very similar system that captures traffic incident information
such as accidents, roadworks, traffic jams and sends personalised
information to drivers based on their routes.
We realised after a few years that our original design for the database
was inefficient as we had a
Tom,
Because the file is the database, your preferences for how things are
displayed are just that, your preferences. Don't combine the two. If you
have two users with different preferences, who wins?
We move the SQLite database around all the time, across different
servers. We do not want
featured sqlite bindings out there. Though Tom is using
perl6, not perl5. I have no idea how its version compares.
(At the risk of going off topic, File::Slurp has issues. File::Slurper
is a
better alternative.)
On Wed, Apr 3, 2019, 12:08 AM Rob Willett
wrote:
Tom,
We use the Perl DBD::SQLite module. It works very well and I cannot
recall a single issue with it in the last four years. There's not as
much support for Perl on this mailing list as it's not as popular, but
most issues you will probably encounter will be at the design level
rather than
Millions of rows is not even large, never mind huge or very huge :)
We have tables with hundreds of millions of rows, we got to billions of
rows in a single table before we changed the logic. From memory we had
67GB for a single database and I reckon 60GB was one table. Not many
issues at all
Whilst Time Machine does not do snapshots the way enterprise storage does
snapshots (literally a freeze and recovery point), Time Machine does make
backups suitable for booting from. Apple considers Time Machine suitable
for home use backups.
You can back up with Time Machine and boot from it. My
Also are they using Time Machine on a networked drive?
Whilst Time Machine was not supposed to work across networks, people
have made it work using 3rd party software. I know because we tried it
for a laugh and abandoned it (and Time Machine) as it was wholly
unreliable.
However I think the
From: sqlite-users
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Rob
Willett
Sent: Tuesday, July 31, 2018 11:42 AM
To: SQLite mailing list
Subject: Re: [sqlite] [EXTERNAL] Re: Very, very slow commits -
Possibly solved
Gunter,
Thanks for this. We have already started on
improve performance to create a "covering index" by adding
the (one or two) fields retrieved to the index used for locating the
record. This allows SQLite to retrieve these fields directly from the
index BTree without referring back to the table BTree.
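For illustration, a covering index of that kind might look like the sketch below, reusing the Disruptions table and status column mentioned elsewhere in these threads and a hypothetical summary column:

```
-- status locates the rows; summary is carried in the index so the
-- SELECT below can be answered from the index B-tree alone.
CREATE INDEX Disruptions_covering_idx ON Disruptions (status, summary);

SELECT summary FROM Disruptions WHERE status = 2;
```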
I assume the order of the fields of ea
, at 16:18, Simon Slavin wrote:
On 31 Jul 2018, at 2:59pm, Rob Willett
wrote:
We've created a new table based on your ideas, moved the collate into
the table, analysed the database. We did **not** add COLLATE NOCASE
to the columns which are defined as integers. Would that make a
difference
thought you were simply adding records into the database - I'm
failing to
grasp how this is slowing down in the new database.
Thanks,
Chris
On Tue, Jul 31, 2018 at 3:30 PM Rob Willett
wrote:
Dear all,
We think we have now found the issue with the slow commits.
We believe this is due
think we know what it is).
Thanks to everybody who contributed ideas, we appreciate the help.
Rob
On 31 Jul 2018, at 15:19, Rob Willett wrote:
Simon,
As an exercise we have just added in COLLATE NOCASE to our integer
columns.
Whoops! We thought this would make no difference but it's added
to the integer definition without COLLATE
NOCASE.
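A short sketch of the distinction being discussed, with hypothetical column names: COLLATE NOCASE belongs on text columns in the table definition and adds nothing useful to integer columns.

```
CREATE TABLE Disruptions_new (
    id       INTEGER PRIMARY KEY,   -- integer: no COLLATE needed
    status   INTEGER,               -- integer: no COLLATE needed
    category TEXT COLLATE NOCASE    -- text: case-insensitive comparisons and index use
);
```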
Rob
On 31 Jul 2018, at 14:59, Rob Willett wrote:
Simon,
Apologies for taking so long to get back, we've been building a test
system and its taken a long time.
We're just getting round to trying your ideas out to see what difference
they make,
We've created a new table based on your ideas, moved the collate into
the table, analysed the
has been a problem for us. We've struggled to get a
build for it.
Rob
On 30 Jul 2018, at 13:49, Droedel wrote:
Hi Rob,
Answers are in the text below
On Mon, Jul 30, 2018, at 13:45, Rob Willett wrote:
Droedel,
We don't think there is significant read access. The database is a
single
30, 2018, at 5:53 AM, Rob Willett
wrote:
I would wonder why writing the data to a 60GB database and doing a
commit is fast and writing exactly the same data to the 600MB
database is different. The programs for doing it are the same, the
database schema is identical.
I assume the hardware
Warren,
On 30 Jul 2018, at 12:28, Warren Young wrote:
On Jul 30, 2018, at 4:51 AM, Rob Willett
wrote:
The system has been running for 2-3 years
Has performance changed over that span? Which direction?
Performance hasn't changed on the large 60GB database. It's pretty
consistent
fill factor ;-) Maybe there's something similar possible in
SQLite but I couldn't find a pragma for this.
Oh, and do both databases have the same page size, preferably 4K?
Regards,
Droedel
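For reference, a sketch of how to check (and, for a rollback-journal database, change) the page size being asked about:

```
PRAGMA page_size;          -- report the current page size
PRAGMA page_size = 4096;   -- for an existing file this only takes effect after...
VACUUM;                    -- ...the database is rebuilt
```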
On Mon, Jul 30, 2018, at 12:51, Rob Willett wrote:
Droedel,
Thanks for the comprehensive reply. We hav
this kind of problem is caused by other applications
(logging / ...) causing too much baseload. %util should be low when
your application isn't running.
Just my 2 cents.
Kind regards,
Droedel
On Sun, Jul 29, 2018, at 10:14, Rob Willett wrote:
Hi,
Background
We've been refactoring ou
.
Rob
On 29 Jul 2018, at 11:45, J. King wrote:
On July 29, 2018 5:47:29 AM EDT, Rob Willett
wrote:
John,
Thanks for the prompt reply and a very good question.
We've dropped the indexes and the commit is now very quick, approx
two
seconds
However the overall performance of the run is much
Anton,
Dropped the indexes and created them without order.
We'll need to look at what your second para means. It could be a major
and massive change.
Rob
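A sketch of what "creating the indexes without order" might look like, reusing the Disruptions_idx4 name that appears in an earlier thread; the real index list isn't shown in this listing.

```
-- Drop the ordered/collated version and recreate it plain.
DROP INDEX IF EXISTS Disruptions_idx4;
CREATE INDEX Disruptions_idx4 ON Disruptions (status);
```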
On 29 Jul 2018, at 11:52, Djelf wrote:
Rob,
Try creating indexes without order.
Or, try to make a column with a hash of the values
.
Where were you going with that question?
Thanks
Rob
On 29 Jul 2018, at 10:33, John Found wrote:
What is the performance without the indexes?
On Sun, 29 Jul 2018 10:20:11 +0100
"Rob Willett" wrote:
Update 1
We've copied the disruptions table to a new table, dropped the old
tab
said).
Rob
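A rough sketch of the copy-and-drop refactor described in that update, with the filtering condition omitted since it isn't shown here; note that CREATE TABLE ... AS SELECT does not copy indexes, so they have to be recreated afterwards.

```
CREATE TABLE Disruptions_new AS
    SELECT * FROM Disruptions;       -- the real job filtered out ~99% of the rows here
DROP TABLE Disruptions;              -- the step that proved painfully slow on a ~50GB table
ALTER TABLE Disruptions_new RENAME TO Disruptions;
```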
On 29 Jul 2018, at 9:14, Rob Willett wrote:
Hi,
Background
We've been refactoring our database to reduce the size of it. Through
some simple logic we've managed to pull out 99% of the data to reduce
the size from 51GB down to approx 600MB. This logic has been to remove
rows that are almost the same but not quite identical. As with all
it by ~99% like that?
I think that the OP's organisation discovered the 'relational' part of
RDBMS and implemented normalisation.
To Rob Willett: Ryan Smith's explanation is better than mine. Please
read his post and ignore my wooly one. I tried three ways to get
across the sequential-access vs
Dan,
Thanks for the reply.
pragma secure_delete;
1
sqlite>
Which is a bit of a surprise as we have never seen it before. We had to
look it up using your reference.
No idea why that is set (if 1 means it is).
Should we simply ctrl-C the deletion, turn secure_delete off and then
try again?
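For anyone hitting the same thing, a sketch of checking and turning the pragma off before the drop (the table name here is hypothetical):

```
PRAGMA secure_delete;        -- 1 means freed pages are zeroed, which slows big drops
PRAGMA secure_delete = OFF;
DROP TABLE Disruptions_old;  -- hypothetical name for the table being discarded
```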
ng to make sure there are no unusual edge cases (or bugs)
based before we promote it to live.
Many thanks
Rob
On 17 Jul 2018, at 12:05, Simon Slavin wrote:
On 17 Jul 2018, at 8:37am, Rob Willett
wrote:
I suspect that part of the issue is the VPS provider we use has a
rate limiter on
to assure us the database is accurate for
our needs.
Thanks to all the people who offered advice and help
Rob
On 17 Jul 2018, at 7:02, Rob Willett wrote:
Richard,
Thanks for the comprehensive update.
We appreciate that there are tradeoffs and that dropping a table is a
rarer operation than
the portability of
sqlite for a fast table drop? The answer is No.
Rob
On 16 Jul 2018, at 22:59, Richard Hipp wrote:
On 7/16/18, Rob Willett wrote:
It does look as if one of sqlite's weaknesses is dropping very, very
large tables.
Right. If every table were stored in a separate file, a DROP TABLE
Jay,
I think your approach would have been quicker. Ten hours so far and it's
still deleting the table :(
Rob
On 17 Jul 2018, at 2:16, Jay Kreibich wrote:
On Jul 16, 2018, at 3:01 PM, rob.sql...@robertwillett.com wrote:
Hi,
We're doing a massive tidy on our database which is approx 50GB.
Mmm. It's still taking an awfully long time, though the -wal file is
very small.
It does look as if one of sqlite's weaknesses is dropping very, very
large tables.
Oh well, let's let it run through the night.
Rob
On 16 Jul 2018, at 21:25, Rob Willett wrote:
Dan,
We've killed the process
, at 21:17, Dan Kennedy wrote:
On 07/17/2018 03:12 AM, Rob Willett wrote:
Dan,
Thanks for the reply.
pragma secure_delete;
1
sqlite>
Which is a bit of a surprise as we have never seen it before. We had
to
look it up using your reference.
No idea why that is set (if 1 means it is).
Should
.
Check your build-essential install?
On Tue, Jul 10, 2018 at 1:05 AM, Dan Kennedy
wrote:
On 9 Jul 2018, at 16:05, Dan Kennedy wrote:
On 07/09/2018 09:26 PM, Rob Willett wrote:
Dan,
Thanks for the information. We did try that first, failed and just
assumed we were idiots and went to the other download.
Just downloaded it again, so we can check and be on the safe side
Dan,
Thanks for the information. We did try that first, failed and just
assumed we were idiots and went to the other download.
Just downloaded it again, so we can check and be on the safe side, and
we get the same issue.
cp /jambuster/src/sqlite-src-324/ext/session/sqlite3session.h
Clemens,
Thanks. We were up to date anyway, but we did check and we still get the
same error.
It does look like a compiler bug, but nobody else seems to have
reported it, which I think is very odd. I cannot be the first person to
compile this version of SQLite on Ubuntu 16.04 server.
was to list all the ways to accomplish printing something out, there's
always another way to do it. I will learn here as well.
Many thanks to everybody,
Rob
On 16 May 2018, at 11:25, Clemens Ladisch wrote:
Rob Willett wrote:
select CASE
WHEN EXISTS (select 1 from blocked where email =
'rwillett.dr
Hi,
I'm experimenting with an email server, Mail In a Box. It's a free
product and gives you a mail server in a box with SSL certificates,
multiple domains and seems to work well.
One thing it doesn't do is allow people to specify emails to block.
It uses SQLite to store the underlying data
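The query quoted above is cut off in this listing; a hedged reconstruction of the general shape, with a placeholder address and assuming a simple blocked(email) table:

```
SELECT CASE
         WHEN EXISTS (SELECT 1 FROM blocked WHERE email = 'someone@example.com')
         THEN 1
         ELSE 0
       END;
```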
Mike,
We use SQLite to store BLOB data to do with polygons and R-trees for
mapping traffic data. We're running 70-90GB (not looked recently) and
the size has never been an issue for us (which is why we've not looked).
However binary images might start to get towards the field size limit
We have 17 in one database and 1 in another.
Rob
On 17 Mar 2018, at 11:43, x wrote:
0 in my first sqlite3 DB
From: sqlite-users on
behalf of Jean-Christophe Deschamps
Sent: Saturday, March
Dear all,
I've enjoyed reading about the Github vs Fossil discussion. There's been
some passionate and provocative statements :)
What it's done for me is make me think again about how we manage our
source code control which is always good. We very recently moved from
our existing text based
an
>
> On 19 September 2017 at 15:58, Rob Willett <rob.sql...@robertwillett.com>
> wrote:
>
>> Hi,
>>
>> I'm trying to find the source to sqldiff and am struggling to locate it.
>>
>> I've looked on
>>
>> https://sqlite.org/download.h
Hi,
I'm trying to find the source to sqldiff and am struggling to locate it.
I've looked on
https://sqlite.org/download.html
and downloaded
sqlite-amalgamation-3200100.zip
sqlite-autoconf-3200100.tar.gz
sqlite-src-3200100.zip
unzipped (or gunzipped) them all and still can't see sqldiff.
Rob
On 10 Jul 2017, at 14:14, Igor Korot wrote:
Rob,
On Mon, Jul 10, 2017 at 7:06 AM, Rob Willett
<rob.sql...@robertwillett.com> wrote:
Vishal,
SQLite isn't a traditional client/server relational database, therefore
there isn't a port to open up. It runs on a local machine.
Now there are wrappers around SQLite to extend it, I assume this ODBC
driver is one of them.
I suspect people here *may* know the answer regarding any
weeks work :(
We've upgraded Sqlite CLI to the latest 3.18.0 and are trying this out
in a nice safe environment.
Thanks
Rob
On 15 May 2017, at 12:38, Richard Hipp wrote:
On 5/15/17, Richard Hipp <d...@sqlite.org> wrote:
On 5/15/17, Rob Willett <rob.sql...@robertwillett.com> wrot
deleted.
http://sqlite.1065341.n5.nabble.com/journal-files-not-always-removed-td83700.html#a83705
On Mon, May 15, 2017 at 3:01 AM, Rob Willett
<rob.sql...@robertwillett.com>
wrote:
Hi,
We've encountered what we think is an odd situation and we can't find
any
explanation f
Hi,
We've encountered what we think is an odd situation and we can't find
any explanation for why this is. We're also not sure if its a problem or
not.
A brief summary is that we are doing a major database upgrade and are
doing (lots) of testing. The box is a Linux box running Ubuntu 14.04
unneeded fields and adding
in a covering index), we went down to 38 secs.
This has meant we are no longer looking at a second database just for
analytics, we just needed to learn to use the database we already had :)
Rob
On 18 Mar 2017, at 5:48, Rob Willett wrote:
We've just implemented
David,
We're English and masters of the understatement ;)
Rob
On 20 Mar 2017, at 15:04, David Raymond wrote:
"... Our query has moved from 32 mins to 54 secs.
We're quite happy with that performance increase."
I have to admit that the hearty chuckle which that statement produced
from me
, so thanks for all the help in getting us to this stage.
We have kept copies of the query planner bytecode output if anybody is
interested. Gunter has had copies, but if anybody else would like them,
please ask.
Many thanks again for all the help,
Rob
On 17 Mar 2017, at 22:12, Rob Willett
.
Thanks very much for the help so far. Tomorrow is more tricky but I'll
read up on covering indexes to see how to use them.
Rob
On 17 Mar 2017, at 18:39, Simon Slavin wrote:
On 17 Mar 2017, at 6:22pm, Rob Willett <rob.sql...@robertwillett.com>
wrote:
4. Work through returning just th
three international rugby
games tomorrow.
Rob
On 17 Mar 2017, at 18:15, Simon Slavin wrote:
On 17 Mar 2017, at 5:30pm, Rob Willett <rob.sql...@robertwillett.com>
wrote:
echo "select * from Disruptions where status = 2 OR status = 6;" |
sqlite3 tfl.sqlite > /dev/null
if we have made the
same mistake elsewhere.
Thanks for your help, we can post the bytecode if people are interested.
Rob
On 17 Mar 2017, at 11:41, Rob Willett wrote:
Gunter,
I would never presume to describe anybody as a Nerd!
We're just going back to very first position with the
once for each status value, whereas I expect the query with
UNION ALL to have 2 copies of the search (which would not affect the
run time) and maybe even a temporary table of results (which would
take longer and use more memory).
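To make the comparison concrete, here are the two query shapes being discussed (the OR form appears verbatim elsewhere in this thread):

```
-- One scan handling both values with OR:
SELECT * FROM Disruptions WHERE status = 2 OR status = 6;

-- The UNION ALL rewrite: one indexable search per status value.
SELECT * FROM Disruptions WHERE status = 2
UNION ALL
SELECT * FROM Disruptions WHERE status = 6;
```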
-Original Message-
From: sqlite-users
[mailto:sqlit
run them twice to see the
effect. No idea how long this will take but suspect a few hours :) I
will post back the results as other people may (or may not) find this
helpful.
Thanks
Rob
On 17 Mar 2017, at 10:57, Hick Gunter wrote:
On 17 Mar 2017, at 10:20am, Rob Willett
<rob.sql...@robertwillett.co
:
On 17 Mar 2017, at 10:20am, Rob Willett <rob.sql...@robertwillett.com>
wrote:
CREATE INDEX "Disruptions_idx4" ON Disruptions ("status" COLLATE
NOCASE ASC);
[…]
As part of the larger more complex query, we are executing the query
```
select * from Disruptions wher
Hi,
We've been struggling with a complex query that we have written. One of
the elements of this complex query is a select statement that doesn't
appear to use an index when we *think* it should do. We're not going to
ask you to debug a large complex SQL query (unless you have nothing else
I've been following this thread with interest. I have used PuTTY for
years as it's the de facto standard for decent ssh terminals on Windows
boxes.
A slightly more radical suggestion for the log files. Since PuTTY is
open source, have a look at the code and see if you can easily add in a
John,
There is a lot of documentation on the SQLite website.
Here's the 'official' docs on creating a table
https://www.sqlite.org/lang_createtable.html
A SQLite database can contain many tables. I am unsure if there is an
upper limit; if there is, it's more tables than I have ever created.
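For completeness, a minimal illustrative example of the syntax the linked page describes (table and column names are just examples):

```
CREATE TABLE contacts (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT UNIQUE
);
```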
Simon,
We use the Perl DBD module all the time. What I would expect to see is
(IGNORE THE LINE BREAKS)
my $sth = $dbh->prepare("INSERT INTO
func_begin_change(
author,
author_contact,
author_contact_method,
author_shortname,
We don't use Windows Server or System.Data.SQLite so can't comment.
I'd be astonished if it's SQLite itself that's at fault here.
Rob
On 4 Oct 2016, at 13:24, Werner Kleiner wrote:
Thanks for help.
Hopefully I give you the correct answer, because a colleague has
written the C# program.
We had
We've done inserts of tens of thousands of rows at a time; we may well have done
hundreds of thousands in one single transaction. I've no doubt other
people do even larger transactions.
I would assume the problem lies elsewhere.
What error message are you getting? What's the OS, the environment, the disk,
is it the right job for the tool, etc. etc.?
Rob
On 2 Oct 2016, at 22:47, Domingo Alvarez Duarte wrote:
Hello Rob !
How do you use those big databases ?
Append only ? Read only ? Constant read write ? Foreign keys ?
Could you show the schema to talk about it ?
Cheers !
On 02/10/16 17:44, Rob Willett wrote:
W
We have production databases with 30-50GB and have no issues with
managing them. Mind you, we have more memory, though we only have 8GB as
that's all our VM provider allows us.
After saying that, we have never seen any performance issues that weren't
due to our schemas or our bad design. Our
Jean-Christophe
Thanks for the update on wal-mode. Your explanation is clear and makes
sense to us. We can see why we would have a 224MB -wal file; we
experimented with killing processes whilst updating and generally
messing around and SQLite did what it was supposed to do. I wouldn't
say
Smith wrote:
On 2016/08/07 8:55 AM, Rob Willett wrote:
Richard, Ryan,
Thanks for this. We were dimly aware of WAL but until now hadn’t
needed to use it.
We’ve done a quick check with it and it *seems* to work on a test
database. We’ve all read the docs again and paid attention to
https
be expected. Is there a problem here? It doesn’t appear to
be a problem but would welcome any comments.
Thanks for taking the time to reply.
Rob
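For anyone following the WAL discussion, switching a database to WAL and forcing a checkpoint (which is what folds a -wal file like the 224MB one mentioned above back into the main file) are both single pragmas; a sketch:

```
PRAGMA journal_mode = WAL;         -- persistent: the database stays in WAL mode afterwards
PRAGMA wal_checkpoint(TRUNCATE);   -- copy -wal contents into the main file and truncate it
```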
On 6 Aug 2016, at 22:35, R Smith wrote:
On 2016/08/06 10:50 PM, Rob Willett wrote:
Our understanding of this is that many processes can READ
Hi,
We've been using SQLite through Perl for some time now and have started
to get more adventurous.
Our SQLite database is around 32GB in size, is created and manipulated
by a single Perl process and is working well.
What we now want to do is mine the database using a very long running
I agree with Tim.
I filter all my SQLite messages into their own folder and read as needed.
I prefer this method to a forum as I can then keep the messages with me.
I'm often off the internet and it's good to have them for reading. Also
I can look back through them and get hints and tips about
more to see what's going on.
Top marks for spotting our deliberate omission :)
Rob
On 5 May 2016, at 17:42, J Decker wrote:
> On Thu, May 5, 2016 at 9:38 AM, Rob Willett
> wrote:
>> Mmmm…. Initial tests are not conclusive, it does look as if using
>> rsync
>> --inpl
enough, though takes time, and see how that compares.
Thanks for the suggestion,
Rob,
On 5 May 2016, at 16:42, J Decker wrote:
> Instead of cp, rsync might help it is able to send delta changes.
>
> On Wed, May 4, 2016 at 10:55 AM, Rob Willett
> wrote:
>> Scott,
>>
>
at 10:55 AM, Rob Willett
> wrote:
>> Scott,
>>
>> OK, We can see how to do this (I think). Our app is written in Perl
>> and we'd
>> just need to capture the command we write down. The only issue I can
>> think
>> of is the prepare statement and makin
Dan,
That's NOT the case for us, so that explains why things are slow. Mmm…
as I recall we never did get a backup to finish…. Now we know why :)
Rob
On 4 May 2016, at 18:53, Dan Kennedy wrote:
> On 05/05/2016 12:45 AM, Rob Willett wrote:
>> Ryan,
>>
>> Ah! The penny dro
the time to reply.
Rob
On 4 May 2016, at 18:52, Scott Robison wrote:
> On Wed, May 4, 2016 at 11:47 AM, Rob Willett
>
> wrote:
>
>> Scott,
>>
>> Thats an interesting idea. Is there an option in SQLite to do this
>> for us,
>> or do we have to w
Scott,
That's an interesting idea. Is there an option in SQLite to do this for
us, or do we have to write a small shim in our app?
I like the idea of this as it's simple and elegant.
Rob
On 4 May 2016, at 16:51, Scott Robison wrote:
>>> This is going to become a bigger problem for us as the
Keep-It-Simple shell
script. We pause the queue upstream which stops the update process, do a
cp and then restart the queue again. All of this is doable in shell
script.
Rob
On 4 May 2016, at 18:22, R Smith wrote:
> On 2016/05/04 2:35 PM, Rob Willett wrote:
>> Dominque,
>>
>> We p
simply turns the tap back on is very simple
to code and to see working. I like easy, simple solutions because I'm
simple at heart.
Best wishes,
Rob
On 4 May 2016, at 14:24, Simon Slavin wrote:
> On 4 May 2016, at 1:35pm, Rob Willett
> wrote:
>
>> I think that unless we can get th
2016, at 12:58, Clemens Ladisch wrote:
> Rob Willett wrote:
>> We're trying to backup a 10GB live running database
>> 'as-fast-as-we-
>> possibly-can' without stopping updates coming in.
>
> How much memory do you have? I guess you can't simply read the enti
in
workflow might be the easiest and simplest solution.
I know you have mentioned VFS shims further down the email trail but
that's certainly a step too far for us.
Thanks
Rob
On 4 May 2016, at 12:22, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 1:13 PM, Rob Willett
> wrote:
>
Thanks for the reply,
Yes Example 2 in https://www.sqlite.org/backup.html is what we are
talking about. It was very slow to run for us.
Rob
On 4 May 2016, at 12:08, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 12:44 PM, Rob Willett
> wrote:
>> We're trying to backup a 10GB
Hi,
We think we know the answer to this, but we'll ask the question
anyway.
We're trying to backup a 10GB live running database
'as-fast-as-we-possibly-can' without stopping updates coming in. The
updates come every 2-3 mins, and write a chunk of data in. We can't
really stop the database
Cecil,
Linux reporting 8 cores is due to hyper-threading on the four cores.
That's normal.
One of the cores reporting a high usage is normal if you have a
single-threaded app that simply cannot be moved, e.g. many Perl programs
exhibit this behaviour. That's fine and to be expected.
I do not
Cecil,
If you have a load average of 15 then that normally means you have a
massively overloaded Linux box. I don't know your system but I get
worried around a load average of 3-4 on our boxes. Load Average is a
very crude measurement but a high number tends to be bad.
If your CPU is only
ftime and we
will be in our logic.
Thanks to everybody for the help, we've learnt something new today
which is always good.
Best wishes,
Rob
On 18 Feb 2016, at 12:45, Richard Hipp wrote:
> On 2/18/16, Rob Willett wrote:
>>
>> select
>> strftime('%s' , starttime),
>>
does not work.
The only difference is the boolean operand.
We know how to solve the problem, we're puzzled though as our
understanding is clearly wrong.
Thanks for replying,
Rob
On 18 Feb 2016, at 12:14, Tim Streater wrote:
> On 18 Feb 2016 at 10:20, Rob Willett
> wrote:
&
+ (180 * 60))
not work.
Thanks,
Rob
On 18 Feb 2016, at 11:25, Quan Yong Zhai wrote:
> It looks like strftime('%s',...) should return an integer and
> strftime('%J', ...) should return a double value. But it always returns a
> text value.
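The usual workaround for that, sketched here with the StartTime column named later in this listing and a hypothetical table name, is to cast before doing arithmetic or numeric comparisons:

```
-- strftime() returns TEXT, so cast it before adding seconds or comparing numerically.
SELECT *
  FROM someTable
 WHERE CAST(strftime('%s', StartTime) AS INTEGER)
       < CAST(strftime('%s', 'now') AS INTEGER) + (180 * 60);
```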
Hi,
I'm sure this is a really dumb question but I'm struggling to
understand why the following SQL is needed for what should be a trivial
SQL expression.
I've minimised the example down to (hopefully) make it simpler.
I have a table with an ISO date StartTime in it held as a string.
To add to the responses, we use SQLite for our main database which is
currently around 60GB in size, so size isn't an issue for SQLite. I am
aware of other people with far, far larger SQLite databases. We did a
quick test and ran up to a couple of hundred DBs of data in SQLite
and it seemed
I *think* this is due to you creating an integer when you first create
the entries
Try changing from
INSERT INTO fmtemp VALUES (1, 0);
to
INSERT INTO fmtemp VALUES (1, 0.0);
Just did
macpro:js rwillett$ sqlite3
SQLite version 3.8.10.2 2015-05-20 18:17:19
Enter ".help" for usage hints.
A quick update on handling the select query. We have some speed results
which are quite illuminating and positive.
We've been playing around with building a few web services, one of which was
the postcode lookup that people have kindly helped with.
We have one service that is a simple
Ryan,
Thanks. We've got 100 requests a second which may be enough. We'll keep looking
though for any time.
Rob.
> On 18 Sep 2015, at 18:26, R.Smith wrote:
>
> >>>Rob: "We want to do postal code ==> GPS..."
>
> >>Me: "You can use google apis..."
>
> >Rob: "Our business is GPS and GIS
> On 18 Sep 2015, at 16:54, R.Smith wrote:
>
>
>
> On 2015-09-18 08:17 AM, Rob Willett wrote:
>> What we wanted was a postcode to GPS lookup, it was not to pinpoint a house,
>> you would need a little more information to be that specific, e.g. house
>