Thanks to all who replied to my post, and sorry for the delayed response.
I had trouble joining the list and had not realised that my post had gone
through until I tried to join the list again.
Simon Slavin slavins at bigfraud.org wrote:

> What you might find increases your speed is to make sure
Jim Callahan jim.callahan.orlando at gmail.com wrote:

> 1. My condolences; with those dimensions you are heading for big
> data/Hadoop land.

Heheh, thanks, I am so determined not to go there! SQLite has been
such a nice, simple database to use that I will do anything to avoid all
the complexity of
On 19 Aug 2014, at 8:25am, Paul Dillon paul.dil...@gmail.com wrote:

> I was using count(first_field) instead of selecting all the fields; I
> can't imagine that could be the problem.

There's an optimization in SQLite which means you can do COUNT(*) and it
will fetch no data at all. It's faster.
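For example (a sketch only; big_table stands in for your real table name):

    -- COUNT(*) can be answered from the table b-tree without reading
    -- any column data:
    SELECT COUNT(*) FROM big_table;

    -- COUNT(first_field) must read first_field from every row so it
    -- can skip NULLs, dragging the whole column through the page cache:
    SELECT COUNT(first_field) FROM big_table;

The two only return the same number when first_field is never NULL.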
On Thu, Aug 7, 2014 at 5:51 AM, Paul Dillon paul.dil...@gmail.com wrote:

> 1. Will moving these 8 query fields to a smaller table improve query
> performance when joined to the larger table? My logic is that this small
> table would only be about 5% the size of the full table, so the full table
Hello,

I would like to know if splitting a big table into two smaller ones, and
then using a join in my queries, would speed up performance.

My table is 100-300GB in size and has about 150 columns. There are 8
fields that I frequently use in my queries, which require full table scans
to
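Roughly the split I have in mind (a sketch only; the table and column
names are made up):

    -- Narrow table holding the 8 frequently-queried fields:
    CREATE TABLE hot (
        id     INTEGER PRIMARY KEY,
        field1 TEXT,
        field2 TEXT   -- ... and the other 6 hot fields
    );

    -- Wide table holding the remaining ~140 columns, one row per id:
    CREATE TABLE cold (
        id     INTEGER PRIMARY KEY REFERENCES hot(id),
        field9 TEXT   -- ... and the rest of the columns
    );

    -- Scans touch only the small table; the join pulls the wide
    -- columns in on demand:
    SELECT hot.field1, cold.field9
    FROM hot JOIN cold ON cold.id = hot.id
    WHERE hot.field2 = 'something';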
On 7 Aug 2014, at 4:51am, Paul Dillon paul.dil...@gmail.com wrote:

> 1. Will moving these 8 query fields to a smaller table improve query
> performance when joined to the larger table? My logic is that this small
> table would only be about 5% the size of the full table, so the full table
> scan to
A few observations:

1. My condolences; with those dimensions you are heading for big
data/Hadoop land.

2. Worry about the number of rows; that's what feeds into the big-O: O(n).
Assuming your 150 columns translate into a 1.5k to 2k record length, that
means your 300 gigabyte file must have 150 to 200 million records.
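Spelled out (decimal units, my assumption):

    300 GB ≈ 300,000,000,000 bytes
    300,000,000,000 / 2,000 bytes per record = 150,000,000 records
    300,000,000,000 / 1,500 bytes per record = 200,000,000 records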