Re: [sqlite] BACK API Questions

2009-12-15 Thread Martin.Engelschalk
Hello Raghu,

it seems to me that you misunderstand the backup API. This API does not 
"know" or care about your schema and tables; it backs up the database 
page by page. In that sense, the API has no notion of "old" content and 
cannot add only "new" content.
The backup API therefore cannot help you here. You will have to open both 
the in-memory db and the file db and copy the data you need to back up 
yourself, before deleting it from the in-memory db.
See the attach-command: http://www.sqlite.org/lang_attach.html
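A minimal sketch of that approach ("events" is a placeholder for one of
your own tables, and the same table must already exist in the file db):

/* Sketch: flush the in-memory db into an attached file db, then
 * clear the in-memory copy. "events" is a hypothetical table. */
#include <sqlite3.h>
#include <stdio.h>

static int flush_to_file(sqlite3 *mem_db)
{
    const char *sql =
        "ATTACH DATABASE 'backup.db' AS filedb;"
        "BEGIN;"
        "INSERT INTO filedb.events SELECT * FROM main.events;"
        "DELETE FROM main.events;"
        "COMMIT;"
        "DETACH DATABASE filedb;";
    char *err = 0;
    int rc = sqlite3_exec(mem_db, sql, 0, 0, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "flush failed: %s\n", err);
        sqlite3_free(err);
    }
    return rc;
}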

Martin

Raghavendra Thodime wrote:
> Hi,
>     I am using an in-memory db where I execute only inserts. Periodically, 
> I back up the in-memory db to a file-based db, and as soon as I back up I 
> want to clear the in-memory db so that I have enough memory for subsequent 
> inserts. The next time I back up, I want to back up only the newly written 
> records. Is there a simple way to accomplish this? Right now I can't 
> execute a delete on the in-memory db, because the deletion would be 
> reflected in the file db on the next backup, which is what I don't want.
>
>   What I tried was: as soon as I back up, I close the current in-memory db 
> and open a new one for the subsequent inserts. But the next time I backed 
> up to the same old file db, the backup operation overwrote all of its old 
> contents. Is there a way to append to the old DB instead of overwriting?
>
> Help will be appreciated...
>
>  Thank You
> Raghu
>


Re: [sqlite] BACK API Questions

2009-12-15 Thread Raghavendra Thodime
Hi Martin,
   Thanks for your reply.

   You mean to say that I need to modify the sqlite3 code to choose which 
pages need to be copied, and copy them myself? Or is there some other API I 
can use for this purpose? My main problem is that I will have a huge influx 
of real-time data which I want to write first to memory (for computational 
efficiency) and later back up to the file db in the background. I also don't 
have much memory to spare, so I want to erase the data from the in-memory db 
as soon as I back it up, while keeping the data in the file db.

   I don't know how ATTACH is going to help my cause. Am I missing something 
here?

Thanks
Raghu




Re: [sqlite] BACK API Questions

2009-12-15 Thread P Kishor
On Tue, Dec 15, 2009 at 2:44 PM, Raghavendra Thodime wrote:
> Hi Martin,
>   Thanks for your reply.
>
> You mean to say that I need to modify the sqlite3 code to choose which
> pages need to be copied, and copy them myself?

You don't need to modify sqlite3 code, but you need to write your own
code inside your application to do what you want. The sqlite backup
API, from what I understand, is not designed to solve the problem you
are trying to solve.



Re: [sqlite] BACK API Questions

2009-12-15 Thread Raghavendra Thodime


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: Tuesday, December 15, 2009 2:48 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] BACK API Questions

On Tue, Dec 15, 2009 at 2:44 PM, Raghavendra Thodime wrote:
> Hi Martin,
>   Thanks for your reply.
>
> You mean to say that I need to modify the sqlite3 code to choose which
> pages need to be copied, and copy them myself?

> You don't need to modify sqlite3 code, but you need to write your own
> code inside your application to do what you want. The sqlite backup
> API, from what I understand, is not designed to solve the problem you
> are trying to solve.
== Kishor -- I still don't get it. How can I copy from the in-memory db to 
the file db from my application? I don't want to use SQL queries again to do 
that, because it would be as slow as inserting into the file DB directly. 
But during backup, sqlite copies pages in bulk, so I believe it is a lot 
faster. If you know a faster way to copy from the in-memory db to the file 
db, please let me know.
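
For reference, my backup step today looks roughly like this (a minimal
sketch, error handling trimmed):

/* Whole-db copy with the backup API: pages are copied in bulk, but
 * the destination is overwritten wholesale; nothing is appended. */
#include <sqlite3.h>

static int backup_to_file(sqlite3 *mem_db, const char *path)
{
    sqlite3 *file_db;
    sqlite3_backup *bak;
    int rc = sqlite3_open(path, &file_db);

    if (rc == SQLITE_OK) {
        bak = sqlite3_backup_init(file_db, "main", mem_db, "main");
        if (bak) {
            sqlite3_backup_step(bak, -1);      /* -1 = copy all pages */
            rc = sqlite3_backup_finish(bak);
        } else {
            rc = sqlite3_errcode(file_db);
        }
    }
    sqlite3_close(file_db);
    return rc;
}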



Re: [sqlite] BACK API Questions

2009-12-15 Thread Max Vlasov
> You don't need to modify sqlite3 code, but you need to write your own
> code inside your application to do what you want. The sqlite backup
> API, from what I understand, is not designed to solve the problem you
> are trying to solve.
> == Kishor -- I still don't get it. How can I copy from the in-memory db
> to the file db from my application? I don't want to use SQL queries again
> to do that, because it would be as slow as inserting into the file DB
> directly. But during backup, sqlite copies pages in bulk, so I believe it
> is a lot faster. If you know a faster way to copy from the in-memory db
> to the file db, please let me know.
>
>
A database file (sqlite's, for example) has a complex format built on
several levels of abstraction (bytes, sectors, pages). When you append more
than one record, it is not one contiguous block of data appended at the end;
many operations are involved, usually touching blocks of data spread over
more than one sector. If performance is a high priority for you, I'd
recommend a series of transactions against one main database file instead
of using two databases.
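
Roughly like this (a sketch; the table name and the 50-row batch size are
just illustrative placeholders):

/* Sketch: batched inserts into one file db, committing every BATCH rows. */
#include <sqlite3.h>

#define BATCH 50

static int bulk_insert(sqlite3 *db, int nrows)
{
    sqlite3_stmt *stmt;
    int i, rc;

    rc = sqlite3_prepare_v2(db,
        "INSERT INTO events(payload) VALUES(?1)", -1, &stmt, 0);
    if (rc != SQLITE_OK) return rc;

    for (i = 0; i < nrows; i++) {
        if (i % BATCH == 0)
            sqlite3_exec(db, "BEGIN", 0, 0, 0);

        sqlite3_bind_text(stmt, 1, "sample", -1, SQLITE_STATIC);
        sqlite3_step(stmt);                    /* expect SQLITE_DONE */
        sqlite3_reset(stmt);

        if (i % BATCH == BATCH - 1)
            sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }
    if (nrows % BATCH != 0)                    /* close any trailing txn */
        sqlite3_exec(db, "COMMIT", 0, 0, 0);

    sqlite3_finalize(stmt);
    return SQLITE_OK;
}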


Re: [sqlite] BACK API Questions

2009-12-15 Thread Raghavendra Thodime
Max,
  Thanks. It looks like I will have to optimize using transactions. I will 
try that.





Re: [sqlite] BACK API Questions

2009-12-15 Thread Raghavendra Thodime
  I did try batches of transactions with the synchronous=OFF PRAGMA set. The 
performance improved slightly, but as the db file grew larger and larger, 
the performance degraded considerably. Is this expected? Or is there a 
workaround for this?

Thanks
Raghu



Re: [sqlite] BACK API Questions

2009-12-16 Thread Max Vlasov
On Wed, Dec 16, 2009 at 9:30 AM, Raghavendra Thodime wrote:

> I did try batches of transactions with the synchronous=OFF PRAGMA set. The
> performance improved slightly, but as the db file grew larger and larger,
> the performance degraded considerably. Is this expected? Or is there a
> workaround for this?
>

If you did everything according to this topic:

(19) INSERT is really slow - I can only do few dozen INSERTs per second (
http://www.sqlite.org/faq.html#q19 )

from the FAQ, then the extra overhead probably comes from your development
language/library. Also be aware that you will probably never get the same
speed as simply copying the same amount of data with file-system routines,
since the db engine has to maintain indexes and do other extra work. So
whether performance is "high" or "low" can be a subjective estimate.


Re: [sqlite] BACK API Questions

2009-12-16 Thread Raghavendra Thodime
Max,
  I agree with what you are saying. To eliminate such factors, this is what 
I have done: I created a schema with multiple tables and no indexes, no 
joins, and no foreign key constraints. Basically, I wanted to keep it as 
simple as possible so that inserts would be fast. Then I simulated the 
inserts with as little C code as possible, with around 50 inserts within 
each transaction (between BEGIN and END). I also had synchronous=OFF and 
journaling turned off (I even tried MEMORY-based journaling). So I 
eliminated all the external overhead I could think of to make it faster.
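
The core of my simulation looks roughly like this (a sketch; the
one-column table stands in for my real schema):

/* Sketch of the simulation: synchronous=OFF, journaling off, 50
 * inserts per transaction, timing stats every 20,000 rows. */
#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    clock_t t0;
    int i;

    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "PRAGMA synchronous=OFF;"
                     "PRAGMA journal_mode=OFF;"
                     "CREATE TABLE IF NOT EXISTS t(v INTEGER);", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t(v) VALUES(?1)", -1, &stmt, 0);

    t0 = clock();
    for (i = 1; i <= 200000; i++) {
        if (i % 50 == 1) sqlite3_exec(db, "BEGIN", 0, 0, 0);
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
        if (i % 50 == 0) sqlite3_exec(db, "COMMIT", 0, 0, 0);

        if (i % 20000 == 0) {                  /* periodic timing stats */
            printf("%6d rows: %.2f s\n", i,
                   (double)(clock() - t0) / CLOCKS_PER_SEC);
            t0 = clock();
        }
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}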

  I simulated thousands of inserts to see how the system performs under 
heavy load, printing time-consumption stats periodically. In such a test I 
do see a considerable improvement. But as I kept the simulations running 
longer, the inserts took far more time in the later stages than they did 
initially. For example, in a run of 200,000 inserts, the first 20,000 
completed in 9 secs, but the last 20,000 (from the 180,000th to the 
200,000th) took almost 110 secs, more than 10 times longer than at the 
start. These results were consistent across every iteration of the 
simulation.

  So I am afraid this won't scale as I keep inserting into the same db file.

  My question is: is this behavior expected? Am I doing something wrong? Is 
my assumption correct that insert time is proportional to the size of the 
db file? Since I am simulating with the same application code and with no 
schema complications, I assume there should not be many external factors 
affecting the system in the long run.

  Your help is appreciated...

Thanks
Raghu



Re: [sqlite] BACK API Questions

2009-12-16 Thread David Bicking
Raghu,

Could you use either the backup API or a simple ATTACH to copy the data
from the memory database to a new file-based db? A second process could
then, much more slowly, poll to see whether a new "temporary" file was
available, attach it, insert its data (albeit slowly) into the
consolidated file db, and then delete the "temporary" file.

Inserts into the memory db would be fast, and moving the data to disk
would be fast. Building the master db would be slow, but unless someone
can figure out why that step is slowing down so much, I don't think you
can avoid the time cost of consolidating the data.
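
The consolidating step could look roughly like this (a sketch; the file
path and the "events" table are placeholders):

/* Sketch of the second, slow process: fold a finished temporary db
 * into the consolidated file db, then delete the temp file. */
#include <sqlite3.h>
#include <stdio.h>

static int consolidate(sqlite3 *master_db, const char *tmp_path)
{
    char sql[512];
    char *err = 0;
    int rc;

    snprintf(sql, sizeof sql,
        "ATTACH DATABASE '%s' AS incoming;"
        "BEGIN;"
        "INSERT INTO main.events SELECT * FROM incoming.events;"
        "COMMIT;"
        "DETACH DATABASE incoming;", tmp_path);

    rc = sqlite3_exec(master_db, sql, 0, 0, &err);
    if (rc == SQLITE_OK)
        remove(tmp_path);                      /* done with the temp file */
    else
        sqlite3_free(err);
    return rc;
}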

David



Re: [sqlite] BACK API Questions

2009-12-16 Thread Max Vlasov
>
> For example, in a run of 200,000 inserts, the first 20,000 completed in 9
> secs, but the last 20,000 (from the 180,000th to the 200,000th) took
> almost 110 secs, more than 10 times longer than at the start. These
> results were consistent across every iteration of the simulation.
>
>
I have several observations about your results:

- As far as I know, the rowid is always indexed, so there is always at
least one B-tree index for any table. So the bigger the database, the
slower it is to append a record. The dependency is not linear, but it
exists.
- When you're inside a transaction, sqlite "delegates" the write-caching
logic to the OS, so if the OS decides to cache one sector and not another,
there is little we can do about it. This can depend on your RAM size,
file-cache size, and hard-disk characteristics.
- You probably don't want to use sqlite if you plan to develop, say, a
billing system for a big mobile carrier :). There is a page at sqlite.org
with recommendations on appropriate uses of sqlite. Can you decide in
advance what speed results you would consider good?