Re: [sqlite] What is wrong with these queries?

2012-12-29 Thread Igor Korot
Yuriy,

On Sat, Dec 29, 2012 at 8:49 PM, Yuriy Kaminskiy  wrote:
> Igor Korot wrote:
>> Hi, ALL,
>>
>> sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value 
>> integer,
>> currvalue double, foreign key(id) references leagues(id), foreign 
>> key(playerid)
>> references players(playerid));
>> sqlite> INSERT INTO leagueplayers VALUES(1,(SELECT playerid,value,currvalue 
>> FROM
>>  players));
>> Error: table leagueplayers has 4 columns but 2 values were supplied
>>
>> AFAICT, I am trying to insert 4 values in the table.
>
> No, you
> 1) insert *two* values: 1 and the (SELECT) expression;
> 2) use the (SELECT) expression improperly:
> There is no array type in SQLite. Only a single-column result is allowed for a
> (SELECT) expression, e.g.:
> sqlite> SELECT (SELECT 1,2);
> Error: only a single result allowed for a SELECT that is part of an expression
>
> Only the first discovered error can be returned (and in this case it was the
> inconsistent number of columns for the INSERT).
>
>> Does anybody have an idea what is wrong?
>
> Correct:
>
> INSERT INTO leagueplayers (id, playerid, value, currvalue)
>  SELECT 1, playerid,value,currvalue FROM players;

That worked.

Thank you.



Re: [sqlite] What is wrong with these queries?

2012-12-29 Thread Yuriy Kaminskiy
Igor Korot wrote:
> Hi, ALL,
> 
> sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value 
> integer,
> currvalue double, foreign key(id) references leagues(id), foreign 
> key(playerid)
> references players(playerid));
> sqlite> INSERT INTO leagueplayers VALUES(1,(SELECT playerid,value,currvalue 
> FROM
>  players));
> Error: table leagueplayers has 4 columns but 2 values were supplied
> 
> AFAICT, I am trying to insert 4 values in the table.

No, you
1) insert *two* values: 1 and the (SELECT) expression;
2) use the (SELECT) expression improperly:
There is no array type in SQLite. Only a single-column result is allowed for a
(SELECT) expression, e.g.:
sqlite> SELECT (SELECT 1,2);
Error: only a single result allowed for a SELECT that is part of an expression

Only the first discovered error can be returned (and in this case it was the
inconsistent number of columns for the INSERT).

> Does anybody have an idea what is wrong?

Correct:

INSERT INTO leagueplayers (id, playerid, value, currvalue)
 SELECT 1, playerid,value,currvalue FROM players;
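
For reference, here is a minimal shell session showing both the failure mode
and the fix.  It assumes the leagueplayers table created above; the players
rows are just made-up sample data.

sqlite> CREATE TABLE players(playerid integer, value integer, currvalue double);
sqlite> INSERT INTO players VALUES(10, 5, 1.5);
sqlite> INSERT INTO players VALUES(11, 7, 2.0);
sqlite> SELECT (SELECT playerid, value, currvalue FROM players);
Error: only a single result allowed for a SELECT that is part of an expression
sqlite> INSERT INTO leagueplayers (id, playerid, value, currvalue)
   ...>   SELECT 1, playerid, value, currvalue FROM players;
sqlite> SELECT * FROM leagueplayers;
1|10|5|1.5
1|11|7|2.0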



[sqlite] What is wrong with these queries?

2012-12-29 Thread Igor Korot
Hi, ALL,

sqlite> CREATE TABLE leagueplayers(id integer, playerid integer, value integer,
currvalue double, foreign key(id) references leagues(id), foreign key(playerid)
references players(playerid));
sqlite> INSERT INTO leagueplayers VALUES(1,(SELECT playerid,value,currvalue FROM
 players));
Error: table leagueplayers has 4 columns but 2 values were supplied

AFAICT, I am trying to insert 4 values in the table.

Does anybody have an idea what is wrong?

Thank you.


[sqlite] System.Data.SQLite version 1.0.83.0 released

2012-12-29 Thread Joe Mistachkin

System.Data.SQLite version 1.0.83.0 (with SQLite 3.7.15.1) is now available
on the System.Data.SQLite website:

 http://system.data.sqlite.org/

Further information about this release can be seen at

 http://system.data.sqlite.org/index.html/doc/trunk/www/news.wiki

Please post on the SQLite mailing list (sqlite-users at sqlite.org) if you
encounter any problems with this release.

--
Joe Mistachkin



Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Simon Slavin

On 29 Dec 2012, at 9:45pm, Michael Black  wrote:

> During the 1M commit the CPU drops to a couple % and the disk I/O is pretty
> constant...albeit slow

For the last few years, since multi-core processors have been common on 
computers, SQLite performance has usually been limited by the performance of 
storage.  Several times I've recommended to some users that rather than pouring 
their money and development effort into unintuitive programming (e.g. splitting 
one TABLE into smaller ones) they just upgrade from spinning disks to an SSD.

Simon.


Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Michael Black
Referencing the C program I sent earlier... I've found a COMMIT every 1M
records does best.  I had an extra zero on my 100,000, which gives the EKG
appearance.
I averaged 25,000 inserts/sec over 50M records with no big knees in the
performance (there is a noticeable knee on the commit itself at around 12M
records), but the average performance curve is pretty smooth.
With a smaller commit interval you're flushing out the index too often,
which, it would seem, causes an awful lot of disk thrashing.
During the 1M-record commit the CPU drops to a couple of percent and the
disk I/O is pretty constant... albeit slow.

P.S. I'm using 3.7.15.1
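
Schematically the batching described above looks like this (two tiny batches
shown; in the real run each batch is on the order of a million inserts, and
the table is the t_foo table from the C program elsewhere in this thread):

BEGIN;
INSERT OR REPLACE INTO t_foo(key,value) VALUES (x'0001020304050607', x'a0a1a2a3a4a5a6a7');
INSERT OR REPLACE INTO t_foo(key,value) VALUES (x'08090a0b0c0d0e0f', x'b0b1b2b3b4b5b6b7');
COMMIT;
BEGIN;
INSERT OR REPLACE INTO t_foo(key,value) VALUES (x'1011121314151617', x'c0c1c2c3c4c5c6c7');
COMMIT;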




Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Michael Black
I wrote a C program doing your thing (with random data, so each key is
unique).

I see some small knees at 20M and 23M -- but nothing like what you're seeing
as long as I don't do the COMMIT.
Seems the COMMIT is what's causing the sudden slowdown.
When doing the COMMIT I see your dramatic slowdown (an order of magnitude)
at around 5M records... regardless of cache size, so cache size isn't the
problem.
I'm guessing the COMMIT is paging out the index which starts thrashing the
disk.
Increasing the COMMIT to every 100,000 seems to help a lot.  The plot looks
almost like an EKG then with regular slowdowns.


And... when not doing the commit, is it normal for memory usage to increase
like the WAL file does?


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include "sqlite3.h"

time_t base_seconds;
suseconds_t base_useconds;

void tic() {
  struct timeval tv;
  gettimeofday(&tv,NULL);
  base_seconds=tv.tv_sec;
  base_useconds=tv.tv_usec;
}

// returns time in seconds since tic() was called
double toc() {
  struct timeval tv;
  gettimeofday(&tv,NULL);
  double mark=(tv.tv_sec-base_seconds)+(tv.tv_usec-base_useconds)/1.0e6;
  return mark;
}

// report (and optionally abort on) an unexpected SQLite return code
void checkrc(sqlite3 *db,int expectedrc,int rc,int fatal,char *msg,char *str) {
  if (rc != expectedrc) {
    fprintf(stderr,msg,str);
    fprintf(stderr,"%s\n",sqlite3_errmsg(db));
    if (fatal) {
      exit(1);
    }
  }
}

int main(int argc, char *argv[]) {
  int rc;
  long i;
  char *sql,*errmsg=NULL;
  char *databaseName="data.db";
  sqlite3 *db;
  sqlite3_stmt *stmt1,*stmt2;
  remove(databaseName);
  rc=sqlite3_open_v2(databaseName,&db,
                     SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE,NULL);
  checkrc(db,SQLITE_OK,rc,1,"Error opening database '%s': ",databaseName);
  sql="create table if not exists t_foo "
      "(key binary(16) primary key, value binary(16))";
  rc=sqlite3_prepare_v2(db,sql,-1,&stmt1,NULL);
  checkrc(db,SQLITE_OK,rc,1,"Error preparing statement '%s': ",sql);
  rc=sqlite3_step(stmt1);
  checkrc(db,SQLITE_DONE,rc,1,"Error executing statement '%s': ",sql);
  rc=sqlite3_finalize(stmt1);
  checkrc(db,SQLITE_OK,rc,1,"Error finalizing statement '%s': ",sql);
  rc=sqlite3_exec(db,"PRAGMA journal_mode=WAL",NULL,NULL,&errmsg);
  checkrc(db,SQLITE_OK,rc,1,"Error on WAL mode statement '%s': ",sql);
  rc=sqlite3_exec(db,"PRAGMA synchronous=OFF",NULL,NULL,&errmsg);
  checkrc(db,SQLITE_OK,rc,1,"Error on synchronous mode statement '%s': ",sql);
  rc=sqlite3_exec(db,"PRAGMA cache_size=100000",NULL,NULL,&errmsg);
  checkrc(db,SQLITE_OK,rc,1,"Error on cache size statement '%s': ",sql);
  sql="BEGIN";
  rc=sqlite3_exec(db,sql,NULL,NULL,&errmsg);
  checkrc(db,SQLITE_OK,rc,1,"Error executing statement '%s': ",sql);
  sql="insert or replace into t_foo(key,value) values(?,?)";
  rc=sqlite3_prepare_v2(db,sql,-1,&stmt2,NULL);
  checkrc(db,SQLITE_OK,rc,1,"Error preparing statement '%s': ",sql);
  tic();
  for(i=0; i<50000000; ++i) { // ~50M rows, as in the runs discussed above
    char key[16],value[16];
    long number=random();
    if (i>0 && (i % 100000)==0) { // print the running insert rate
      printf("%ld,%g \n",i,100000/toc());
      tic();
    }
#if 0 // COMMIT?
    if (i>0 && (i % 1000)==0) { // try 100,000
      sql="COMMIT";
      rc=sqlite3_exec(db,sql,NULL,NULL,&errmsg);
      checkrc(db,SQLITE_OK,rc,1,"Error executing statement '%s': ",sql);
      sql="BEGIN";
      rc=sqlite3_exec(db,sql,NULL,NULL,&errmsg);
      checkrc(db,SQLITE_OK,rc,1,"Error executing statement '%s': ",sql);
    }
#endif
    memcpy(key,&number,8);      // 16-byte key built from the same random value
    memcpy(&key[8],&number,8);
    memcpy(value,&number,8);
    rc=sqlite3_bind_blob(stmt2,1,key,16,SQLITE_STATIC);
    checkrc(db,SQLITE_OK,rc,1,"Error bind1 statement '%s': ",sql);
    rc=sqlite3_bind_blob(stmt2,2,value,16,SQLITE_STATIC);
    checkrc(db,SQLITE_OK,rc,1,"Error bind2 statement '%s': ",sql);
    rc=sqlite3_step(stmt2);
    checkrc(db,SQLITE_DONE,rc,1,"Error executing statement '%s': ",sql);
    rc=sqlite3_reset(stmt2);
    checkrc(db,SQLITE_OK,rc,1,"Error resetting statement '%s': ",sql);
  }
  sqlite3_finalize(stmt2);
  sqlite3_close(db);
  return 0;
}


-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Saturday, December 29, 2012 8:19 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Write performance question for 3.7.15


On 29 Dec 2012, at 12:37pm, Stephen Chrzanowski  wrote:

> My guess would be the OS slowing things down with write caching.  The
> system will hold so much data in memory as a cache to write to the disk,
> and when the cache gets full, the OS slows down and waits on the HDD.  Try
> doing a [dd] to a few gig worth of random data and see if you get the same
> kind of slow down.

Makes sense.  That would be revealing of how much memory the operating system
is using for caching.  Once you hit 30M rows you exceed the amount of memory
the system is using for caching, and it has to start reading or writing disk
for every operation, which is far slower.  Or it's the amount of memory that
the operating system is allowing the benchmarking process to use.  Or some
other OS limitation.


Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Valentin Davydov
On Fri, Dec 28, 2012 at 03:35:17PM -0600, Dan Frankowski wrote:
> 
> 3. Would horizontal partitioning (i.e. creating multiple tables, each for a
> different key range) help?

This would seriously impair read performance (you'd have to access two indices
instead of one).

Valentin Davydov.


Re: [sqlite] Fwd: Write performance question for 3.7.15

2012-12-29 Thread Valentin Davydov
On Fri, Dec 28, 2012 at 03:34:02PM -0600, Dan Frankowski wrote:
> I am running a benchmark of inserting 100 million (100M) items into a
> table. I am seeing performance I don't understand. Graph:
> http://imgur.com/hH1Jr. Can anyone explain:
> 
> 1. Why does write speed (writes/second) slow down dramatically around 28M
> items?

Most probably, the indices became too large to fit in the in-memory cache.
You can verify this by tracing system activity: this threshold should
manifest itself as a drastic increase in _read_ operations on the disk(s).

> 2. Are there parameters (perhaps related to table size) that would change
> this write performance?

CACHE_SIZE. It makes sense to enlarge it up to all the available memory.
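
For example (illustrative values only; size it to your RAM):

PRAGMA cache_size=400000;    -- 400,000 pages, roughly 400 MB at the default 1 KiB page size
PRAGMA cache_size=-1000000;  -- a negative value is a size in KiB, so roughly 1 GB here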

Valentin Davydov.


Re: [sqlite] Write performance question for 3.7.15

2012-12-29 Thread Simon Slavin

On 29 Dec 2012, at 12:37pm, Stephen Chrzanowski  wrote:

> My guess would be the OS slowing things down with write caching.  The
> system will hold so much data in memory as a cache to write to the disk,
> and when the cache gets full, the OS slows down and waits on the HDD.  Try
> doing a [dd] to a few gig worth of random data and see if you get the same
> kind of slow down.

Makes sense.  That would be revealing of how much memory the operating system is
using for caching.  Once you hit 30M rows you exceed the amount of memory the
system is using for caching, and it has to start reading or writing disk for
every operation, which is far slower.  Or it's the amount of memory that the
operating system is allowing the benchmarking process to use.  Or some other OS
limitation.

But the underlying information in our responses is that it's not a decision 
built into SQLite.  There's nothing in SQLite which says we use a fast strategy 
for up to 25M rows and then a slower one from then on.

A good way to track it down would be to close the database at the point where 
performance starts to tank and look at how big the file is.  That size should 
give a clue about which resource the OS is limiting.  Another test might be to 
add an extra unindexed column to the test database and fill it with a fixed 
text string in each row.  If this changes the number of rows before the cliff 
edge, then the limit depends on total filesize.  If it doesn't, then it depends 
on the size of the index being searched for each INSERT.
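
For the second experiment the padded schema could look something like this
(the padding string is arbitrary; because the benchmark's INSERT names only
key and value, the DEFAULT fills the new column in every row):

create table if not exists t_foo (
  key binary(16) primary key,
  value binary(16),
  padding text default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');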

Simon.


Re: [sqlite] Fwd: Write performance question for 3.7.15

2012-12-29 Thread Stephen Chrzanowski
My guess would be the OS slowing things down with write caching.  The
system will hold so much data in memory as a cache to write to the disk,
and when the cache gets full, the OS slows down and waits on the HDD.  Try
doing a [dd] to a few gig worth of random data and see if you get the same
kind of slow down.

On Fri, Dec 28, 2012 at 4:42 PM, Michael Black  wrote:

> Perhaps the rowid index cache gets too big?  I assume you don't have any
> indexes of your own?
>
> Does the knee change if you say, double your cache_size?
>
> Default should be 2000;
>
> pragma cache_size=4000;
>
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Frankowski
> Sent: Friday, December 28, 2012 3:34 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Fwd: Write performance question for 3.7.15
>
> I am running a benchmark of inserting 100 million (100M) items into a
> table. I am seeing performance I don't understand. Graph:
> http://imgur.com/hH1Jr. Can anyone explain:
>
> 1. Why does write speed (writes/second) slow down dramatically around 28M
> items?
> 2. Are there parameters (perhaps related to table size) that would change
> this write performance?
>
> ===
>
> Create and insert statements:
>
> create table if not exists t_foo (
>   key binary(16) primary key,
>   value binary(16));
>
> insert or replace into t_foo (key, value) values (?, ?)
>
> key and value are each 16-byte arrays.
>
> I turn auto-commit off and commit every 1000 inserts.
> I set synchronous mode to OFF and journaling mode to WAL (write-ahead log).
>
> I am using sqlite 3.7.15 through the Xerial JDBC driver (see
> https://bitbucket.org/xerial/sqlite-jdbc). I built it myself, due to a
> glibc incompatibility (see
> https://groups.google.com/d/msg/Xerial/F9roGuUjH6c/6RuxqmG6UK4J).
>
> I am running on Gentoo. Output of uname -a:
>
> Linux mymachine 3.2.1-c42.31 #1 SMP Mon Apr 30 10:55:12 CDT 2012 x86_64
> Quad-Core AMD Opteron(tm) Processor 1381 AuthenticAMD GNU/Linux
>
> It has 8G of memory.