Hello,
I re-checked the suspect MySQL performance today, after a reboot, on:
"select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st;"
This particular operation now takes 7.1 seconds.
I may have misused MySQL Workbench.
==> I updated the figure to the
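For cross-checking such figures, here is a minimal timing sketch using Python's built-in sqlite3 module (the table and column names come from the benchmark; the database path is an assumption):

```python
import sqlite3
import time

def time_query(db_path, sql, runs=2):
    """Run `sql` several times and return each wall-clock duration in seconds.
    Running it at least twice makes cache effects visible (first run cold,
    second run warm)."""
    con = sqlite3.connect(db_path)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        con.execute(sql).fetchall()  # fetch everything so the query fully executes
        timings.append(time.perf_counter() - start)
    con.close()
    return timings

# Hypothetical usage (assumes the fec table is already loaded in fec.db):
# time_query("fec.db",
#            "select cand_nm, contbr_st, sum(contb_receipt_amt) as total "
#            "from fec group by cand_nm, contbr_st;")
```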
Hi Elefterios, Simon,
Wes McKinney gave us:
- a fully detailed benchmark case (data + reproducible test),
- where SQLite was:
  . abnormally less good than Postgresql (so could be better),
  . SQL databases in general were abnormally less good,
  . a hint, "vertica", was given.
Maybe
OK,
Just updated with the 3.8.4 beta of 2014-03-05.
I also re-did some previous measures, as:
- the testing method improved a little,
- I measured more carefully that SQLite also has a sort of caching benefit
when you run a query twice on Windows 7.
Regards,
On Wed, Mar 5, 2014 at 7:25 PM, Richard Hipp wrote:
> MySQL does very well on query 8 which is a repeat of query 6. This might
> be because MySQL implements a query cache. It remembers the result of each
> query and if that query occurs again, without an intervening INSERT,
>
On Wed, Mar 5, 2014 at 9:29 AM, big stone wrote:
> Timing updates with Mysql 5.6.16
>
MySQL does very well on query 8 which is a repeat of query 6. This might
be because MySQL implements a query cache. It remembers the result of each
query and if that query occurs again,
On Wed, Mar 5, 2014 at 9:29 AM, big stone wrote:
> Timing updates with Mysql 5.6.16
>
I wonder if you could update the timings for the current SQLite 3.8.4 beta?
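The query-cache behaviour described above can be illustrated with a toy memoizing wrapper (a concept sketch only, not MySQL's actual implementation): results are remembered per exact SQL string, and thrown away as soon as any write occurs.

```python
class QueryCachingConnection:
    """Toy illustration of a MySQL-style query cache: SELECT results are
    memoized by exact SQL text, and any write invalidates the whole cache."""

    def __init__(self, con):
        self.con = con
        self.cache = {}

    def query(self, sql):
        if sql in self.cache:          # repeat query, no intervening write: hit
            return self.cache[sql]
        rows = self.con.execute(sql).fetchall()
        self.cache[sql] = rows
        return rows

    def execute_write(self, sql, params=()):
        self.cache.clear()             # an intervening INSERT/UPDATE invalidates
        self.con.execute(sql, params)
```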
--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
Timing updates with Mysql 5.6.16
test =
https://raw.github.com/stonebig/ztest_donotuse/master/benchmark_test01.txt
results =
https://github.com/stonebig/ztest_donotuse/blob/master/benchmark_test01_measures.GIF?raw=true
The result in .csv format for Mikael.
Sorry, I'm very bad at HTML; I hope someone can re-post it as a nice-looking
HTML table.
Note:
- Postgresql is not tuned at all, and its figures vary a lot between two
measures,
- I couldn't do a Pandas measure because of "not enough memory".
On Sun, Mar 2, 2014 at 1:55 PM, big stone wrote:
>==> Why such an 'x6' speed-up, as we need to scan the whole table anyway?
>
SQLite implements GROUP BY by sorting on the terms listed in the GROUP BY
clause. Then as each row comes out, it compares the GROUP BY
big stone,
Can you please compile a chart (text format is ok) that puts the
numbers from your last mail in relation to the numbers from your email
prior to that, so that everyone is perfectly clear about how the
optimizations you applied improve on the numbers published in
the
Hi Mikael,
I'm not an expert in rtree virtual table handling, but you may try and post
the result here.
Adding the test of the -O2 compiled SQLite3.8.3.exe (size 801 KB vs 501 KB
for the standard SQLite, 'size'-optimized):
- feeding data:
  . in disk database: 151 seconds
  . in memory database:
Hi again,
I tuned the SQLite experiment a little:
- to get rid of the 19th-column message,
- to measure the previous tests with more precise figures,
- the effect of the suggested index:
CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
- the effect of using a filesystem
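Whether the suggested index actually covers the query can be checked with EXPLAIN QUERY PLAN (a sketch using Python's sqlite3; the empty table here is a stand-in for the real fec data, since the plan does not depend on the rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table fec(cand_nm text, contbr_st text, contb_receipt_amt real)")
con.execute("CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt)")

plan = con.execute(
    "explain query plan "
    "select cand_nm, contbr_st, sum(contb_receipt_amt) as total "
    "from fec group by cand_nm, contbr_st"
).fetchall()
# The plan detail should mention 'USING COVERING INDEX xyzzy2': the query is
# answered from the index alone, with rows already delivered in GROUP BY
# order, so no separate table lookups and no sort pass are needed.
print(plan)
```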
big stone,
What are the same results using RTree? (Also feel free to add -O2.)
Thanks
2014-03-02 17:25 GMT+01:00 big stone :
> Hi again,
>
> This is what I mean : we should have an updated "speed" page where we could
> objectively measure.
>
> In the mean time, I
On Sun, Mar 2, 2014 at 11:25 AM, big stone wrote:
>
> ** Speed Tests **
> test1 = select cand_nm, sum(contb_receipt_amt) as total from fec group by
> cand_nm;
> ==> SQLite 21 seconds (wes = 72s)
> ==> Postgresql 4.8 seconds stable (44 seconds first time ?) (wes =4.7)
>
>
Hi again,
This is what I mean: we should have an updated "speed" page where we could
objectively measure.
In the meantime, I painfully, partially reproduced two of the figures from
Wes.
Procedure:
download
ftp://ftp.fec.gov/FEC/Presidential_Map/2012/P0001/P0001-ALL.zip
unzip to
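One way to get an unzipped CSV into an SQLite fec table is a loader along these lines (a sketch, assuming a comma-separated file with a header row; whether that matches the exact procedure used here is an assumption):

```python
import csv
import sqlite3

def load_csv(csv_path, db_path, table="fec"):
    """Create `table` from the CSV's header row and bulk-insert all rows.
    Every column is stored as text; numeric casts are left to the queries."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join('"%s"' % c for c in header)
        marks = ", ".join("?" for _ in header)
        con = sqlite3.connect(db_path)
        con.execute("create table %s (%s)" % (table, cols))
        con.executemany("insert into %s values (%s)" % (table, marks), reader)
        con.commit()
        con.close()
```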
On 2 Mar 2014, at 1:48pm, Elefterios Stamatogiannakis wrote:
> IMHO, a benchmark like this is useless without more information. Some
> questions that I would like to see answered:
>
> - Which SQLite and Postgres versions were used?
> - Are the SQLite indexes covering
IMHO, a benchmark like this is useless without more information.
Some questions that I would like to see answered:
- Which SQLite and Postgres versions were used?
- Are the SQLite indexes covering ones?
- Have any performance pragmas been used?
Also, interval joins ("between") are hard
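For reference, the settings usually meant by "performance pragmas" can be applied like this (a sketch of common choices; which ones, if any, the benchmark used is exactly the open question above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Common performance-oriented pragmas; each trades durability or memory for speed:
con.execute("pragma journal_mode = MEMORY")   # keep the rollback journal in RAM
con.execute("pragma synchronous = OFF")       # no fsync after each transaction
con.execute("pragma cache_size = -64000")     # negative value = size in KiB (~64 MB)
con.execute("pragma temp_store = MEMORY")     # temp tables/indices in RAM
```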
Hello,
This morning I saw Pandas/Wes McKinney communicating figures:
- starting at 37'37" of http://vimeo.com/79562736,
- showing a slide where SQLite "is" 15 times slower than Postgresql.
==> the dataset is public: