On 3 Mar 2014, at 3:41am, romtek wrote:
> Thanks, Simon. Interestingly, for this server, disk operations aren't
> particularly fast. One SQLite write op takes about 4 times longer than on a
> HostGator server.
That supports the idea that the storage is simulated (or 'virtualised') to a
high degree.
Thanks, Simon. Interestingly, for this server, disk operations aren't
particularly fast. One SQLite write op takes about 4 times longer than on a
HostGator server.
I wonder if what I/you described also means that this file system isn't
likely to support the file locks needed for SQLite to control access
On 3 Mar 2014, at 2:14am, romtek wrote:
> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individually as when they are performed as one explicit transaction. I've
> varied the number of ops up to
In case this gives somebody a clue, the server in question is on
http://vps.net/.
On Sun, Mar 2, 2014 at 8:14 PM, romtek wrote:
> Hi,
>
> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individual
Hi,
On one of my hosting servers (this one is a VPS), a bunch of write
operations take practically the same amount of time when they are performed
individually as when they are performed as one explicit transaction. I've
varied the number of ops up to 200 -- with similar results. Why is that?
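For reference, the behaviour described above can be checked with a minimal timing sketch. This uses Python's `sqlite3` module (an assumption; the original post doesn't name a language) to compare N individual autocommitted inserts against the same N inserts wrapped in one transaction. On ordinary disks the batched version is usually much faster, because each autocommit forces a sync to durable storage; on heavily virtualised storage that gap can shrink or vanish.

```python
import os
import sqlite3
import tempfile
import time

def time_inserts(n, one_transaction):
    """Time n single-row inserts, either autocommitted or batched."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    os.remove(path)  # let SQLite create the file itself
    con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    con.execute("CREATE TABLE t(x INTEGER)")
    start = time.perf_counter()
    if one_transaction:
        con.execute("BEGIN")
    for i in range(n):
        con.execute("INSERT INTO t VALUES (?)", (i,))
    if one_transaction:
        con.execute("COMMIT")
    elapsed = time.perf_counter() - start
    count = con.execute("SELECT count(*) FROM t").fetchone()[0]
    con.close()
    os.remove(path)
    return elapsed, count

t_indiv, n_indiv = time_inserts(200, one_transaction=False)
t_batch, n_batch = time_inserts(200, one_transaction=True)
print(f"individual commits: {t_indiv:.3f}s  one transaction: {t_batch:.3f}s")
```

If both runs take about the same time, as reported above, the per-commit sync is probably not reaching real hardware.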
It's gotta be great to see your code end up in a TV show and have an actor
say "There's your problem" and you get to say "Not in my code!" It would
have been epic to be sitting there as a bystander for that particular
event. heh
On Sun, Mar 2, 2014 at 6:39 PM, Darren Duncan wrote:
> On 3/2
On 02.03.2014 21:38, Elefterios Stamatogiannakis wrote:
Under this view, the efficiency of the virtual table API is very
important. The above query only uses 2 VTs, but we have other queries
that use many more VTs than that.
Max's tests in C show 2x the CPU work, but he explains that the test is
On 3/2/2014, 9:34 AM, Richard Hipp wrote:
Reports on twitter say that the "nanobots" in the TV drama "Revolution"
have source code in the season two finale that looks like this:
https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
Compare to the SQLite source code here:
http://www.sqlite.org/
Kees wrote, answering Ashleigh:
> If you prefer a graphical user interface, I can recommend
> the SQLite Manager plugin in the Firefox web browser.
>> If anyone knows a better way to read and understand the files I would
>> greatly appreciate it
>> I think the file ext. is a plist.
>> Live, love
On Sun, Mar 2, 2014 at 12:34 PM, Richard Hipp wrote:
> Reports on twitter say that the "nanobots" in the TV drama "Revolution"
> have source code in the season two finale that looks like this:
>
> https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
>
> Compare to the SQLite source code here:
>
On Sun, Mar 2, 2014 at 1:55 PM, big stone wrote:
> ==> Why such a 'x6' speed-up, as we need to scan the whole table anyway?
>
SQLite implements GROUP BY by sorting on the terms listed in the GROUP BY
clause. Then as each row comes out, it compares the GROUP BY columns to
the previous row
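That sorting step is visible in EXPLAIN QUERY PLAN output. A small sketch (Python `sqlite3`; the table loosely mirrors the `fec` example from this thread, but the exact columns here are my choice): without a suitable index SQLite builds a temporary b-tree to perform the GROUP BY sort, while a covering index on the grouped column delivers the rows already in order and the temp b-tree disappears.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fec(cand_nm TEXT, contb_receipt_amt REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return " | ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT cand_nm, sum(contb_receipt_amt) FROM fec GROUP BY cand_nm"
before = plan(q)  # sort implemented with a temp b-tree
con.execute("CREATE INDEX fec_idx ON fec(cand_nm, contb_receipt_amt)")
after = plan(q)   # index already delivers rows in cand_nm order
print(before)
print(after)
```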
big stone,
Could you please compile a chart (text format is fine) that relates the
numbers from your last mail to the numbers from your email prior to
that, so that everyone is perfectly clear about how the optimizations
you applied improve beyond the numbers published in the ori
Hi Mikael,
I'm not an expert in rtree virtual table handling, but you may try it and
post the results here.
Adding the test of the -O2 compiled SQLite3.8.3.exe (size 801 KB vs 501 KB
for the standard SQLite, size-optimized):
- feeding data :
. in disk database : 151 seconds
. in memory database :
We have both input and output virtual tables that avoid hitting the hard
disk and are also able to compress the incoming and outgoing data.
We have a virtual table that takes as input a query and sends the data
to a port on another machine. This virtual table is called "OUTPUT". And
another vi
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
wrote:
>
> Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
> TPCH contains 16 columns, which for 10M rows would require 160M xColumn
> callbacks, to pass it through the virtual table API. These callbacks are
>
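The arithmetic behind the quoted numbers is worth spelling out. A quick sanity check (the 50 ns per-call cost below is an assumed, illustrative figure, not a measurement of SQLite's actual xColumn overhead):

```python
# Sanity-check the callback count quoted above: 16 columns x 10M rows.
rows = 10_000_000
cols = 16
callbacks = rows * cols
print(f"{callbacks:,} xColumn callbacks")

# Even a tiny per-call cost adds up at this volume. At an assumed
# (illustrative) 50 ns per callback:
per_call_ns = 50
overhead_s = callbacks * per_call_ns / 1e9
print(f"~{overhead_s:.0f} s of pure callback overhead")
```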
Shouldn't you add "nanobots" to the "famous" user list, just below
flame, and above the "android" droids?
Biggest Companies use SAP
Smallest Companions use SQLite.
Hi again,
I tuned the SQLite experiment a little:
- to get rid of the 19th-column message,
- to measure the previous tests with more precise figures,
- to measure the effect of the suggested index:
CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt);
- to measure the effect of using a filesystem data
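Whether `xyzzy2` actually acts as a covering index for the benchmark query can be checked from the query plan. A sketch (Python `sqlite3`; the aggregate query is my guess at the shape of the test query, since the thread truncates it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE fec(cand_nm TEXT, contbr_st TEXT, contb_receipt_amt REAL)"
)
con.execute(
    "CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st, contb_receipt_amt)"
)

q = ("SELECT cand_nm, contbr_st, sum(contb_receipt_amt) "
     "FROM fec GROUP BY cand_nm, contbr_st")
# All referenced columns are in the index, so SQLite can scan the index
# alone (a "covering" scan) and never touch the table.
detail = " | ".join(r[3] for r in con.execute("EXPLAIN QUERY PLAN " + q))
print(detail)
```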
On Sun, Mar 2, 2014 at 12:34 PM, Richard Hipp wrote:
> Reports on twitter say that the "nanobots" in the TV drama "Revolution"
> have source code in the season two finale that looks like this:
>
> https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
>
> Compare to the SQLite source code here:
>
Reports on twitter say that the "nanobots" in the TV drama "Revolution"
have source code in the season two finale that looks like this:
https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
Compare to the SQLite source code here:
http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
--
D. R
big stone,
What are the corresponding results using RTree? (Also feel free to add -O2.)
Thanks
2014-03-02 17:25 GMT+01:00 big stone :
> Hi again,
>
> This is what I mean : we should have an updated "speed" page where we could
> objectively measure.
>
> In the mean time, I painfully partially reprod
On Sun, Mar 2, 2014 at 11:25 AM, big stone wrote:
>
> ** Speed Tests **
> test1 = select cand_nm, sum(contb_receipt_amt) as total from fec group by
> cand_nm;
> ==> SQLite 21 seconds (wes = 72s)
> ==> Postgresql 4.8 seconds stable (44 seconds first time ?) (wes =4.7)
>
>
My guess is that PG is
Hi again,
This is what I mean: we should have an updated "speed" page where we could
objectively measure.
In the meantime, I painfully, partially reproduced two of the figures from
Wes.
Procedure:
- download ftp://ftp.fec.gov/FEC/Presidential_Map/2012/P0001/P0001-ALL.zip
- unzip to P0
On 2 Mar 2014, at 1:48pm, Elefterios Stamatogiannakis wrote:
> IMHO, a benchmark like this is useless without any more information. Some
> questions that i would like to see answered:
>
> - Which SQLite and Postgres versions were used?
> - Are the SQLite indexes, covering ones?
> - Have any pe
IMHO, a benchmark like this is useless without more information.
Some questions that I would like to see answered:
- Which SQLite and Postgres versions were used?
- Are the SQLite indexes covering ones?
- Have any performance pragmas been used?
Also, interval joins ("between") are hard
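For reference, "performance pragmas" usually means settings along the following lines (Python `sqlite3`; the specific values are illustrative choices of mine, not settings reported anywhere in this thread):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
con = sqlite3.connect(os.path.join(d, "bench.db"))

# journal_mode returns the mode actually in effect, so read it back
journal = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("PRAGMA synchronous=NORMAL")  # fewer syncs than the FULL default
con.execute("PRAGMA cache_size=-64000")   # negative = size in KB (~64 MB)
con.execute("PRAGMA temp_store=MEMORY")   # keep temp b-trees off the disk
sync = con.execute("PRAGMA synchronous").fetchone()[0]
print(journal, sync)  # synchronous reads back as a number (1 = NORMAL)
```

Whether any of these were set matters a lot when comparing against a server database like Postgres, whose defaults differ.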
In our performance tests we try to work with data and queries that are
representative of what we would find in a typical DB.
This means a lot of "small" values (ints, floats, small strings), and
5-20 columns.
Our main test case is TPCH, a standard DB benchmark. The "lineitem"
table of TPCH c
Hello,
This morning I saw Pandas/Wes McKinney presenting figures:
- starting at 37'37" of http://vimeo.com/79562736,
- showing a slide where SQLite "is" 15 times slower than PostgreSQL.
==> the dataset is public :
http://www.fec.gov/disclosurep/PDownload.do?candId=P0001&electionYr=2012