Re: [sqlite] "Error: disk I/O error" on big databases vacuuming

2011-03-08 Thread Alexey Pechnikov
There is >30 GB of free space.

2011/3/8 Jay A. Kreibich :
> On Tue, Mar 08, 2011 at 07:51:22PM +0300, Alexey Pechnikov scratched on the 
> wall:
>> I am trying to vacuum a database about 11 GB in size on a Debian Squeeze
>> host with 1.5 GB of RAM:
>>
>> sqlite3 test.db 'vacuum;'
>> Error: disk I/O error
>>
>> Note: no new files are created during the vacuum process (a journal
>> should be created, shouldn't it?).
>
>  Yes.  A copy of the database and a journal file.  Both may reach the
>  size of the original database.  So, if your database is 11GB in size,
>  you may need as much as 22GB of free disk space to complete the
>  vacuum process.
>
>   -j
>
>



-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "Error: disk I/O error" on big databases vacuuming

2011-03-08 Thread Teg
Hello Alexey,

Tuesday, March 8, 2011, 11:51:22 AM, you wrote:

AP> I am trying to vacuum a database about 11 GB in size on a Debian Squeeze
AP> host with 1.5 GB of RAM:

AP> sqlite3 test.db 'vacuum;'
AP> Error: disk I/O error

AP> Note: no new files are created during the vacuum process (a journal
AP> should be created, shouldn't it?).

AP> But this works correctly:
AP> sqlite3 test.db '.dump'|sqlite3 test2.db


On my Windows 7 box using current SQLite, vacuuming anything over about
7 gigs seems to be hit or miss. I haven't pursued it. I just dump the
DBs to SQL and re-generate them, like you do. It's far faster than
vacuum.

I never have less than 100 gigs free when I've tested this.
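
[Editor's note: the dump-and-rebuild procedure Teg and Alexey describe (the
same effect as `sqlite3 test.db '.dump' | sqlite3 test2.db`) can be sketched
with Python's standard-library sqlite3 module. The function name and paths
below are illustrative only; they appear in no message in this thread.]

```python
import sqlite3

def dump_and_rebuild(src_path, dst_path):
    """Copy a database by dumping it to SQL text and replaying the dump
    into a fresh file.  The rebuilt copy is compact, like the result of:
        sqlite3 src.db '.dump' | sqlite3 dst.db
    but no intermediate copy of src_path itself is written."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # iterdump() yields the same SQL statements as the shell's .dump,
        # including BEGIN TRANSACTION / COMMIT around the data.
        dst.executescript("\n".join(src.iterdump()))
    finally:
        src.close()
        dst.close()
```

Unlike VACUUM, this writes the new database wherever you choose, so the
free-space requirement can be satisfied on a different filesystem.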

-- 
Best regards,
 Teg                            mailto:t...@djii.com



Re: [sqlite] "Error: disk I/O error" on big databases vacuuming

2011-03-08 Thread Jay A. Kreibich
On Tue, Mar 08, 2011 at 07:51:22PM +0300, Alexey Pechnikov scratched on the 
wall:
> I am trying to vacuum a database about 11 GB in size on a Debian Squeeze
> host with 1.5 GB of RAM:
> 
> sqlite3 test.db 'vacuum;'
> Error: disk I/O error
> 
> Note: no new files are created during the vacuum process (a journal
> should be created, shouldn't it?).

  Yes.  A copy of the database and a journal file.  Both may reach the
  size of the original database.  So, if your database is 11GB in size,
  you may need as much as 22GB of free disk space to complete the
  vacuum process.
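
[Editor's note: Jay's roughly-2x rule of thumb can be checked mechanically
before issuing the VACUUM. A minimal sketch using Python's standard library;
the helper name is made up for illustration, and 2x is a rough upper bound,
not an exact figure.]

```python
import os
import shutil
import sqlite3

def vacuum_if_room(db_path):
    """Run VACUUM only when free space is at least twice the database
    size, since the rewritten copy and the rollback journal can each
    approach the size of the original file."""
    needed = 2 * os.path.getsize(db_path)
    free = shutil.disk_usage(os.path.dirname(os.path.abspath(db_path))).free
    if free < needed:
        raise RuntimeError(
            "about %d bytes needed, only %d free" % (needed, free))
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("VACUUM")  # must run outside any open transaction
    finally:
        conn.close()
```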

   -j


-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] "Error: disk I/O error" on big databases vacuuming

2011-03-08 Thread Dan Kennedy
On 03/08/2011 11:51 PM, Alexey Pechnikov wrote:
> I am trying to vacuum a database about 11 GB in size on a Debian Squeeze
> host with 1.5 GB of RAM:
>
> sqlite3 test.db 'vacuum;'
> Error: disk I/O error
>
> Note: no new files are created during the vacuum process (a journal
> should be created, shouldn't it?).
>
> But this works correctly:
> sqlite3 test.db '.dump'|sqlite3 test2.db

Is the temp file space filling up?
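
[Editor's note: if the temp area is indeed the culprit, SQLite's temporary
files can be redirected to a filesystem with more room. On Unix the library
consults the SQLITE_TMPDIR environment variable when it creates its first
temporary file. A sketch; the function name is illustrative, and the caller
supplies a directory on a roomy filesystem.]

```python
import os
import sqlite3

def vacuum_with_tmpdir(db_path, big_tmp_dir):
    """VACUUM with SQLite's temporary files redirected to big_tmp_dir.
    SQLITE_TMPDIR is read when the library opens its first temp file,
    so it must be set before the connection does any temp-file work."""
    os.environ["SQLITE_TMPDIR"] = big_tmp_dir
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("PRAGMA temp_store = FILE")  # keep temp data on disk
        conn.execute("VACUUM")
    finally:
        conn.close()
```

From the shell, the equivalent is roughly
`SQLITE_TMPDIR=/path/to/big/disk sqlite3 test.db 'vacuum;'`.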

