[sqlite] Intro SQL ?

2005-09-13 Thread Richard Nagle

Well, now I'm ready for the next step:
looking for an intro to SQL...

I've been searching Google all night and found hundreds of sites
that start off with a table, but nothing showing how to create
a database and place it in a directory, or in the same folder
as the SQL application.

Looking for an intro that shows commands to list the
databases and tables you just created, or that lets you see
into other folders.

Again, looking for the intro to the intro of SQL,
where we start with the very basics.

So, does anyone have a few links...

Major Thanks -
Rick



[sqlite] compiling standalone SQLite for Mac users (was Re: Sqlite ( off list ))

2005-09-13 Thread Puneet Kishor

Richard,

Kindly send the questions to the list, as there are way more 
knowledgeable SQLite and Mac users on the list than I am... however, I 
will fumble through an answer below...


On Sep 13, 2005, at 6:47 PM, Richard Nagle wrote:


Now what command would I use to compile a standalone
application for Mac users?

Since I have the build folder.



Building a standalone app is a bit tricky... you have to ask why you 
want to do that. SQLite is a "faceless" application. Think of the 
command-line as the face to SQLite. What you really want to build is a 
face to SQLite. For that, you might be well advised to use Tito Ciuro's 
very nice QuickLite, which includes SQLite. See 
. Or download the 
SQLite Browser from sf.net. It has SQLite built in. Or the Perl DBD, or 
PHP... they all come with SQLite.


If you simply want to provide other users with the compiled files that 
you built, well, you can grab the SQLite binary from /usr/local/bin, 
and the supporting libraries from /usr/local/lib and, for good measure, 
the header file from /usr/local/include, put them in a similar folder 
hierarchy, zip them up, and make them available. But then, they will 
have to ensure they put the files in their correct places, and they 
still will not have an "application" so to speak. That would be way too 
much work both on your part and on your users' part... wouldn't it be 
just easier for them to download the source and build it themselves?
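
If you do want to go the bundling route, a rough sketch (the staging 
directory and archive name below are invented for the example):

mkdir -p sqlite-dist/usr/local/bin sqlite-dist/usr/local/lib sqlite-dist/usr/local/include
cp /usr/local/bin/sqlite3 sqlite-dist/usr/local/bin/
cp /usr/local/lib/libsqlite3* sqlite-dist/usr/local/lib/
cp /usr/local/include/sqlite3.h sqlite-dist/usr/local/include/
(cd sqlite-dist && zip -r ../sqlite-3.2.5-macosx.zip usr)

Your users would then unpack the archive and copy the contents of usr/ into 
place themselves. Otherwise, having them build from source looks like this: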


cd ~
mkdir sqlite_src
cd sqlite_src
curl http://www.sqlite.org/sqlite-3.2.5.tar.gz -o "sqlite-3.2.5.tar.gz"
tar xzf sqlite-3.2.5.tar.gz
cd sqlite-3.2.5
./configure
make
sudo make install
make doc
cp -R doc ~/Sites/

You are done.


--
Puneet Kishor



Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Will Leshner


On Sep 13, 2005, at 3:19 PM, Brett Wilson wrote:


The patch says "improve and support locking on the OSX platform (as
well as others)". I see at least some enums in there for MSDOS NFS,
etc.


Well, looking closer at the code, I'm beginning to think it might  
very well be a generic solution that isn't specific to Mac OS X.


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Brett Wilson
The patch says "improve and support locking on the OSX platform (as
well as others)". I see at least some enums in there for MSDOS, NFS,
etc.

Can anybody clarify whether this works on other platforms as well?

Basically, the question I have about this patch is: if I access the DB
from more than one connection in the same process (regardless of OS or
FS), will I be safe?

Thanks,
Brett


On 9/13/05, Will Leshner <[EMAIL PROTECTED]> wrote:
> 
> On Sep 13, 2005, at 2:19 PM, D. Richard Hipp wrote:
> 
> > Apple has contributed patches to SQLite that claim
> > to fix this problem.  Those patches may one day find
> > their way into the default release.  In the meantime,
> > you can find the patches at:
> 
> 
> I'm not positive, but I think the Apple patches are Mac OS X-specific.
> 
> --
> REALbasic news and tips: http://rbgazette.com
> KidzMail & KidzLog: http://haranbanjo.com
> 
> 
>


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Will Leshner


On Sep 13, 2005, at 2:19 PM, D. Richard Hipp wrote:


Apple has contributed patches to SQLite that claim
to fix this problem.  Those patches may one day find
their way into the default release.  In the meantime,
you can find the patches at:



I'm not positive, but I think the Apple patches are Mac OS X-specific.

--
REALbasic news and tips: http://rbgazette.com
KidzMail & KidzLog: http://haranbanjo.com




Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread D. Richard Hipp
On Tue, 2005-09-13 at 14:09 -0700, Brett Wilson wrote:
> I'm still concerned about the warnings on the web page about some
> networked file systems not supporting locking. There will be multiple
> DB connections from the same process. They might even be
> multithreaded. Might we have a problem in this case?
> 

Most network filesystems do fcntl locks incorrectly if
at all.  An incorrect implementation of fcntl can result
in database corruption.

Apple has contributed patches to SQLite that claim
to fix this problem.  Those patches may one day find
their way into the default release.  In the meantime,
you can find the patches at:

http://www.sqlite.org/cvstrac/tktview?tn=1240
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] Installer Question ?

2005-09-13 Thread Puneet Kishor


On Sep 13, 2005, at 4:03 PM, Richard Nagle wrote:


..
G4:~/desktop/sqlite-3.2.5/bld rn$ sudo make install
Password:
tclsh ../tclinstaller.tcl 3.2
/usr/bin/install -c -d /usr/local/lib
./libtool --mode=install /usr/bin/install -c libsqlite3.la 
/usr/local/lib
/usr/bin/install -c .libs/libsqlite3.0.8.6.dylib 
/usr/local/lib/libsqlite3.0.8.6.dylib
(cd /usr/local/lib && rm -f libsqlite3.0.dylib && ln -s 
libsqlite3.0.8.6.dylib libsqlite3.0.dylib)




(cd /usr/local/lib && rm -f libsqlite3.dylib && ln -s 
libsqlite3.0.8.6.dylib libsqlite3.dylib)

/usr/bin/install -c .libs/libsqlite3.lai /usr/local/lib/libsqlite3.la
/usr/bin/install -c .libs/libsqlite3.a /usr/local/lib/libsqlite3.a
ranlib /usr/local/lib/libsqlite3.a
chmod 644 /usr/local/lib/libsqlite3.a
--
Libraries have been installed in:
   /usr/local/lib

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `DYLD_LIBRARY_PATH' environment variable
 during execution


Seems like you solved your earlier compiler problems.

Read the above message. It tells you where the libraries have been 
installed.






See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
--
/usr/bin/install -c -d /usr/local/bin
./libtool --mode=install /usr/bin/install -c sqlite3 /usr/local/bin
/usr/bin/install -c .libs/sqlite3 /usr/local/bin/sqlite3
/usr/bin/install -c -d /usr/local/include
/usr/bin/install -c -m 0644 sqlite3.h /usr/local/include
/usr/bin/install -c -d /usr/local/lib/pkgconfig;
/usr/bin/install -c -m 0644 sqlite3.pc /usr/local/lib/pkgconfig;
G4:~/desktop/sqlite-3.2.5/bld rn$



Look at the above... you see that something called sqlite3 has been 
installed in /usr/local/bin.


it also tells you where the header file has gone (/usr/local/include) 
in case you need it to build something else with SQLite.





==
It would appear from this that all went well; however,
when I open a terminal window and type sqlite, nothing happens.

So, gurus, where is it? I can see it in the bld folder,
but where is the exe file of sqlite3?

In other words, how do I run the application?



the 'exe' is sqlite3, a binary file (.exe is simply a Windows 
convention to indicate a binary file... such an extension is neither 
required nor customarily used on Unix).


You will need to run sqlite3 either by calling it with its full path

/usr/local/bin/sqlite3

or by including its directory in your PATH (consult the docs for your 
shell)

or by setting up an alias to the above in your environment.
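
For example, a minimal sketch assuming the bash shell (these lines would go 
in ~/.bash_profile; the alias name is just an illustration):

export PATH="$PATH:/usr/local/bin"
alias sq='/usr/local/bin/sqlite3'

After opening a new terminal (or sourcing the file), typing sqlite3 should 
find the binary.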


--
Puneet Kishor



Re: [sqlite] Installer Question ( oops )

2005-09-13 Thread Richard Nagle

Found it... /usr/local/bin/sqlite3
I guess I need to build a .bash_profile
and add this path.

Oh, since I have the build in the bld dir of sqlite,
is that a compiled version or just a build version?
I can see the installed parts of sqlite,
but was wondering how one would make a .dmg
version of sqlite for other Mac users, without the need
for them to do any compiling.

Thanks -
Richard



Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Brett Wilson
I'm still concerned about the warnings on the web page about some
networked file systems not supporting locking. There will be multiple
DB connections from the same process. They might even be
multithreaded. Might we have a problem in this case?

Brett

On 9/13/05, Roger Binns <[EMAIL PROTECTED]> wrote:
> > One possibility is if we think we'll be doing a lot of UI,
> > to make an in-memory "read-only" copy of everything we will
> > need.
> 
> Alternatively you can implement your own "platform" backend
> just as there are for Windows, Linux, Mac etc.  It isn't very
> many functions to implement.
> 
> Since you are doing your own locking, you can turn most of the
> locking calls into no-ops.  You could even track when you do
> writes and let locking go through then.
> 
> Roger
>


[sqlite] Installer Question ?

2005-09-13 Thread Richard Nagle

G4:~/desktop/sqlite-3.2.5/bld rn$ ls
Makefile        callback.lo     func.o          main.lo         parse.c         random.o        trigger.o       vdbeaux.lo
alter.lo        callback.o      hash.lo         main.o          parse.h         select.lo       update.lo       vdbeaux.o
alter.o         complete.lo     hash.o          mkkeywordhash   parse.lo        select.o        update.o        vdbefifo.lo
analyze.lo      complete.o      insert.lo       opcodes.c       parse.o         sqlite3         utf.lo          vdbefifo.o
analyze.o       config.log      insert.o        opcodes.h       parse.out       sqlite3.h       utf.o           vdbemem.lo
attach.lo       config.status   keywordhash.h   opcodes.lo      parse.y         sqlite3.pc      util.lo         vdbemem.o
attach.o        date.lo         legacy.lo       opcodes.o       pragma.lo       table.lo        util.o          where.lo
auth.lo         date.o          legacy.o        os_unix.lo      pragma.o        table.o         vacuum.lo       where.o
auth.o          delete.lo       lemon           os_unix.o       prepare.lo      tclsqlite.lo    vacuum.o
btree.lo        delete.o        lempar.c        os_win.lo       prepare.o       tclsqlite.o     vdbe.lo
btree.o         expr.lo         libsqlite3.la   os_win.o        printf.lo       tokenize.lo     vdbe.o
build.lo        expr.o          libtclsqlite3.la  pager.lo      printf.o        tokenize.o      vdbeapi.lo
build.o         func.lo         libtool         pager.o         random.lo       trigger.lo      vdbeapi.o




G4:~/desktop/sqlite-3.2.5/bld rn$ sudo make install
Password:
tclsh ../tclinstaller.tcl 3.2
/usr/bin/install -c -d /usr/local/lib
./libtool --mode=install /usr/bin/install -c libsqlite3.la /usr/local/lib
/usr/bin/install -c .libs/libsqlite3.0.8.6.dylib 
/usr/local/lib/libsqlite3.0.8.6.dylib
(cd /usr/local/lib && rm -f libsqlite3.0.dylib && ln -s 
libsqlite3.0.8.6.dylib libsqlite3.0.dylib)




(cd /usr/local/lib && rm -f libsqlite3.dylib && ln -s 
libsqlite3.0.8.6.dylib libsqlite3.dylib)

/usr/bin/install -c .libs/libsqlite3.lai /usr/local/lib/libsqlite3.la
/usr/bin/install -c .libs/libsqlite3.a /usr/local/lib/libsqlite3.a
ranlib /usr/local/lib/libsqlite3.a
chmod 644 /usr/local/lib/libsqlite3.a
--
Libraries have been installed in:
   /usr/local/lib

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `DYLD_LIBRARY_PATH' environment variable
 during execution

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
--
/usr/bin/install -c -d /usr/local/bin
./libtool --mode=install /usr/bin/install -c sqlite3 /usr/local/bin
/usr/bin/install -c .libs/sqlite3 /usr/local/bin/sqlite3
/usr/bin/install -c -d /usr/local/include
/usr/bin/install -c -m 0644 sqlite3.h /usr/local/include
/usr/bin/install -c -d /usr/local/lib/pkgconfig;
/usr/bin/install -c -m 0644 sqlite3.pc /usr/local/lib/pkgconfig;
G4:~/desktop/sqlite-3.2.5/bld rn$



==
It would appear from this that all went well; however,
when I open a terminal window and type sqlite, nothing happens.

So, gurus, where is it? I can see it in the bld folder,
but where is the exe file of sqlite3?

In other words, how do I run the application?



Thanks -
Richard


Re: [sqlite] any plans for out-of-row blobs?

2005-09-13 Thread Robert Simpson
- Original Message - 
From: "Kurt Welgehausen" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, September 13, 2005 12:46 PM
Subject: Re: [sqlite] any plans for out-of-row blobs?



There is no way to retrieve part of a
blob value in SQLite.

 From my understanding, most databases store
blob values separate from the rest of the row.


The common way to handle this in SQLite is to store the
blob in a file and store the file name in the db.


There's also plenty of other ways to get creative with the storage ... 
here's one off the top of my head:


CREATE TABLE FOO (Id INTEGER PRIMARY KEY, ... extra columns, whatever ... );
CREATE TABLE FOO_BlobData (FooId INTEGER, BlobData BLOB);

Then when you write blob data, do it in 4k chunks to FOO_BlobData.  To 
select it out,


SELECT * FROM FOO_BlobData WHERE FooId = $myid ORDER BY rowid

Just step through the results at that point; you'll get the blob back in 4k 
chunks.
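
A rough, hedged sketch of the idea using the sqlite3 shell (the table and 
column names follow the example above; the chunk contents here are dummy 
text, and in real code each INSERT would carry the next 4k slice of the 
blob):

sqlite3 blobs.db <<'EOF'
CREATE TABLE FOO (Id INTEGER PRIMARY KEY);
CREATE TABLE FOO_BlobData (FooId INTEGER, BlobData BLOB);
INSERT INTO FOO (Id) VALUES (1);
-- one row per 4k chunk, inserted in order
INSERT INTO FOO_BlobData VALUES (1, 'chunk 1 ...');
INSERT INTO FOO_BlobData VALUES (1, 'chunk 2 ...');
-- read the chunks back in insertion order
SELECT BlobData FROM FOO_BlobData WHERE FooId = 1 ORDER BY rowid;
EOF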

Robert




Re: [sqlite] any plans for out-of-row blobs?

2005-09-13 Thread Kurt Welgehausen
> There is no way to retrieve part of a
> blob value in SQLite.
>
>  From my understanding, most databases store
> blob values separate from the rest of the row.

The common way to handle this in SQLite is to store the
blob in a file and store the file name in the db.
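
A minimal illustration of that approach (the file path and table layout here 
are invented for the example):

dd if=/dev/zero of=/tmp/big_object.bin bs=4096 count=256 2>/dev/null
sqlite3 files.db "CREATE TABLE blobs (id INTEGER PRIMARY KEY, path TEXT);
INSERT INTO blobs (path) VALUES ('/tmp/big_object.bin');"

The application then opens /tmp/big_object.bin directly and seeks to read 
only the part it needs.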

Regards


[sqlite] any plans for out-of-row blobs?

2005-09-13 Thread Kervin L. Pierre


I need to retrieve parts of the Blob column
value.  The blob may simply be too large for
memory at runtime.

There is no way to retrieve part of a
blob value in SQLite.

From my understanding, most databases store
blob values separate from the rest of the row.

Would that be a possibility for SQLite?  This
would also solve the issue of blobs being
read entirely into memory if a column defined
after the blob column is retrieved.

Regards,
Kervin


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Joel Lucsy
How about reading it all into :memory: and attaching the network database, 
with a trigger so that when an update is made to the :memory: database the 
change is reflected in the network copy?

-- 
Joel Lucsy
"The dinosaurs became extinct because they didn't have a space program." -- 
Larry Niven


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Jay Sprenkle
> > I don't think it will matter what database you use if you're forced to have
> > it hosted over a network.
> > As far as I know they all rely on the underlying locking mechanism in the
> > OS, which is inherently slow over a network.
> >
> > Can you avoid multiple locking latency hits by just getting a lock at
> > startup and never relinquishing it?
> >
> >
> What about copying (importing) the network database to a ":memory:"
> database, and periodically copy (export) it back?
> 
> 
If it's large, like he mentioned, that will take a while. It sounds like it 
needs to be profiled
to see what the slow parts are.


-- 
---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call
http://www.lulu.com/content/77264


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Dennis Cote

Brett Wilson wrote:


Hi everybody,

I'm working on replacing a lot of Firefox's storage systems to use
sqlite. It has been going well so far except for one issue.

The database file is stored in the user's Mozilla profile directory.
In companies and Universities using Linux, this directory is often
hosted over the network. In these cases, sqlite performance can be
quite poor.

I am aware that writing performance can be bad in these cases, but we
don't do too many writes. I am mostly concerned about read
performance, since a lot of this stuff (history, bookmarks) drive the
UI. The lag, even over a fast network, can be noticable. I am also
concerned about file locking, since the documentation does not
recommend using files over the network.

Can anybody suggest what to do about this problem? This is not
something that can be avoided, since some people will have this
configuration and will not have any say about it. Firefox must perform
reasonably in these cases.

One thing that could work in our favor is that Mozilla already does
locking on the profile, so access will be restricted to our one
process. Is there anything that we can do to take advantage of this to
avoid having to query the file for reads even when the content is
cached? It looks like we will have multiple database connections from
this process.

I will work on minimizing the number of queries in the common cases,
but any little bit of perfomance will help here.

Thanks,
Brett

 


Brett,

If you can't tolerate the delays accessing the database over the 
network, can you make a copy of the database in a temp directory on the 
local machine on startup? If you copy the file after you lock the 
profile it should be safe to copy down to local storage. Then use the 
local database while the application runs, and finally copy the database 
back to the server, if it has been modified, when the application quits. 
If you want to be more resistant to data loss due to power failures or 
program crashes, you could copy the modified database back to the server 
after every write. This would be slower, but you say you are not as 
concerned about write performance.
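
A hedged sketch of that flow (the profile path and file names below are 
invented; the real Mozilla profile layout will differ):

NETDB=~/.mozilla/profile/storage.db
LOCALDB=/tmp/storage-local.db

cp "$NETDB" "$LOCALDB"      # after the profile lock is held
# ... the application works against $LOCALDB while it runs ...
cp "$LOCALDB" "$NETDB"      # on clean shutdown, if modified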





Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Dennis Jenkins

Jay Sprenkle wrote:

I'm glad to see someone is working on this since it was a very noticable 
problem for me.


I don't think it will matter what database you use if you're forced to have 
it hosted over a network.
As far as I know they all rely on the underlying locking mechanism in the 
OS, which is inherently slow over a network.


Can you avoid multiple locking latency hits by just getting a lock at 
startup and never relinquishing it?
 

What about copying (importing) the network database to a ":memory:" 
database, and periodically copy (export) it back?




Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Brett Wilson
This was brought up but seems like a bad idea for several reasons.
First, the database could get pretty big. This would really kill start
up and shut down times, which is very important to FF users. Second,
what would happen if we crash?

One possibility is if we think we'll be doing a lot of UI, to make an
in-memory "read-only" copy of everything we will need. Writes go to
both the networked file and our memory cache. It would be difficult to
figure out when/how this could happen, though, and we would have to
balance latency vs. the time to slurp all the data locally.

Brett



On 9/13/05, Eric Scouten <[EMAIL PROTECTED]> wrote:
> To what extent is the database shared (either intentionally or
> unintentionally)? Or, put another way, do you have an option to cache
> data locally?
> 
> Since you are restricting access to the profile to a single process at a
> time, your best bet is probably to make a local copy of the DB during
> the app session (you do have *some* local temporary storage, right?) and
> copy that DB back to the network storage at the end of the session.
> 
> -Eric
> 
> 
> P.S. Thanks for your work on Firefox. Like SQLite, it's a great piece of
> software and I'm grateful to be able to use it.
> 
> 
> 
> Brett Wilson wrote:
> > Hi everybody,
> >
> > I'm working on replacing a lot of Firefox's storage systems to use
> > sqlite. It has been going well so far except for one issue.
> >
> > The database file is stored in the user's Mozilla profile directory.
> > In companies and Universities using Linux, this directory is often
> > hosted over the network. In these cases, sqlite performance can be
> > quite poor.
> >
> > I am aware that writing performance can be bad in these cases, but we
> > don't do too many writes. I am mostly concerned about read
> > performance, since a lot of this stuff (history, bookmarks) drive the
> > UI. The lag, even over a fast network, can be noticable. I am also
> > concerned about file locking, since the documentation does not
> > recommend using files over the network.
> >
> > Can anybody suggest what to do about this problem? This is not
> > something that can be avoided, since some people will have this
> > configuration and will not have any say about it. Firefox must perform
> > reasonably in these cases.
> >
> > One thing that could work in our favor is that Mozilla already does
> > locking on the profile, so access will be restricted to our one
> > process. Is there anything that we can do to take advantage of this to
> > avoid having to query the file for reads even when the content is
> > cached? It looks like we will have multiple database connections from
> > this process.
> >
> > I will work on minimizing the number of queries in the common cases,
> > but any little bit of perfomance will help here.
> >
> > Thanks,
> > Brett
> 
>


Re: [sqlite] how to store date/time in sqlite?

2005-09-13 Thread Will Leshner


On Sep 13, 2005, at 11:07 AM, Mark Wyszomierski wrote:

Moving from a mysql database to sqlite. I had some date/time
fields in my mysql database. I would just populate them using the now()
function. How could I achieve the same in my new sqlite database?


This page might be helpful:




Re: [sqlite] how to store date/time in sqlite?

2005-09-13 Thread Mark Wyszomierski
Ah excellent, thanks Jay,
 Mark

 On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote: 
> 
> See the wiki section of the documentation on the web site. There's a page 
> devoted to this. 
> 
> On 9/13/05, Mark Wyszomierski < [EMAIL PROTECTED]> wrote: 
> > 
> > Hi all,
> > Moving from a mysql database to sqlite. I had some date/time fields in 
> > my 
> > mysql database. I would just populate them using the now() function. How
> > could I achieve the same in my new sqlite database?
> > Thanks!
> > Mark
> > 
> > 
> 
> 
> -- 
> ---
> The Castles of Dereth Calendar: a tour of the art and architecture of 
> Asheron's Call 
> http://www.lulu.com/content/77264
>


Re: [sqlite] how to store date/time in sqlite?

2005-09-13 Thread Jay Sprenkle
See the wiki section of the documentation on the web site. There's a page 
devoted to this.
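
As a quick, minimal sketch of the usual approach (the table here is made up 
for the example): datetime('now') is a rough equivalent of MySQL's now(), 
and CURRENT_TIMESTAMP works as a column default:

sqlite3 test.db <<'EOF'
CREATE TABLE log (msg TEXT, created DEFAULT CURRENT_TIMESTAMP);
INSERT INTO log (msg) VALUES ('uses the default');
INSERT INTO log VALUES ('explicit time', datetime('now'));
SELECT * FROM log;
EOF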

On 9/13/05, Mark Wyszomierski <[EMAIL PROTECTED]> wrote:
> 
> Hi all,
> Moving from a mysql database to sqlite. I had some date/time fields in my
> mysql database. I would just populate them using the now() function. How
> could I achieve the same in my new sqlite database?
> Thanks!
> Mark
> 
> 


-- 
---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call
http://www.lulu.com/content/77264


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Jay Sprenkle
I'm glad to see someone is working on this since it was a very noticeable 
problem for me.

I don't think it will matter what database you use if you're forced to have 
it hosted over a network.
As far as I know they all rely on the underlying locking mechanism in the 
OS, which is inherently slow over a network.

Can you avoid multiple locking latency hits by just getting a lock at 
startup and never relinquishing it?



On 9/13/05, Brett Wilson <[EMAIL PROTECTED]> wrote:
> 
> Hi everybody,
> 
> I'm working on replacing a lot of Firefox's storage systems to use
> sqlite. It has been going well so far except for one issue.
> 
> The database file is stored in the user's Mozilla profile directory.
> In companies and Universities using Linux, this directory is often
> hosted over the network. In these cases, sqlite performance can be
> quite poor.
> 
> I am aware that writing performance can be bad in these cases, but we
> don't do too many writes. I am mostly concerned about read
> performance, since a lot of this stuff (history, bookmarks) drive the
> UI. The lag, even over a fast network, can be noticable. I am also
> concerned about file locking, since the documentation does not
> recommend using files over the network.
> 
> Can anybody suggest what to do about this problem? This is not
> something that can be avoided, since some people will have this
> configuration and will not have any say about it. Firefox must perform
> reasonably in these cases.
> 
> One thing that could work in our favor is that Mozilla already does
> locking on the profile, so access will be restricted to our one
> process. Is there anything that we can do to take advantage of this to
> avoid having to query the file for reads even when the content is
> cached? It looks like we will have multiple database connections from
> this process.
> 
> I will work on minimizing the number of queries in the common cases,
> but any little bit of perfomance will help here.
> 
> Thanks,
> Brett
> 



-- 
---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call
http://www.lulu.com/content/77264


Re: [sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Eric Scouten
To what extent is the database shared (either intentionally or 
unintentionally)? Or, put another way, do you have an option to cache 
data locally?


Since you are restricting access to the profile to a single process at a 
time, your best bet is probably to make a local copy of the DB during 
the app session (you do have *some* local temporary storage, right?) and 
copy that DB back to the network storage at the end of the session.


-Eric


P.S. Thanks for your work on Firefox. Like SQLite, it's a great piece of 
software and I'm grateful to be able to use it.




Brett Wilson wrote:

Hi everybody,

I'm working on replacing a lot of Firefox's storage systems to use
sqlite. It has been going well so far except for one issue.

The database file is stored in the user's Mozilla profile directory.
In companies and Universities using Linux, this directory is often
hosted over the network. In these cases, sqlite performance can be
quite poor.

I am aware that writing performance can be bad in these cases, but we
don't do too many writes. I am mostly concerned about read
performance, since a lot of this stuff (history, bookmarks) drive the
UI. The lag, even over a fast network, can be noticable. I am also
concerned about file locking, since the documentation does not
recommend using files over the network.

Can anybody suggest what to do about this problem? This is not
something that can be avoided, since some people will have this
configuration and will not have any say about it. Firefox must perform
reasonably in these cases.

One thing that could work in our favor is that Mozilla already does
locking on the profile, so access will be restricted to our one
process. Is there anything that we can do to take advantage of this to
avoid having to query the file for reads even when the content is
cached? It looks like we will have multiple database connections from
this process.

I will work on minimizing the number of queries in the common cases,
but any little bit of perfomance will help here.

Thanks,
Brett




Re: [sqlite] sqlite3_errCode and error handling

2005-09-13 Thread Antony Sargent

I submitted a bug with a similar problem a while back:

http://www.sqlite.org/cvstrac/tktview?tn=1366

the bug was closed with this comment by drh:

"SQLITE_CORRUPT gets returned when you call sqlite3_reset() or 
sqlite3_finalize().


This is unfortunate, I know. We should have designed the API so that the 
specific error was returned directly from sqlite3_step(). But we didn't. 
And nobody commented on the problem during the beta period. We cannot 
change it now without breaking backwards compatibility. So we are stuck 
with the current mode of operation."


We changed our code so that in the face of an error from sqlite3_step(), 
we call sqlite3_reset(), which then at least in the case I was testing 
(looking for SQLITE_CORRUPT) returns the right error code.



Cariotoglou Mike wrote:

When a sqlite3_step call fails, say on a unique key violation of an
INSERT statement,
the returned error code is SQLITE_ERROR, which is documented.
Calling sqlite3_errcode at this point, however, also gives the same
error, which is not.
The correct error, which is SQLITE_CONSTRAINT, is only returned when
calling sqlite3_finalize, which may happen at a much later stage in the
code.


I believe the error code should be returned right at the point where the
error happens, because if it is not, the structure of the code suffers,
since the finalization code for a
statement needs to be modal.





[sqlite] how to store date/time in sqlite?

2005-09-13 Thread Mark Wyszomierski
 Hi all,
 Moving from a mysql database to sqlite. I had some date/time fields in my 
mysql database. I would just populate them using the now() function. How 
could I achieve the same in my new sqlite database?
 Thanks!
Mark


[sqlite] test suite failures

2005-09-13 Thread Dennis Cote

Hi All,

I just built the latest SQLite from CVS head (3.2.5+) on Windows using 
MinGW. This is the same build process I normally use with no problems. 
When I run the test suite I am getting failures on some tests as shown 
below:


12 errors out of 13455 tests
Failures on these tests: conflict-6.2 conflict-6.3 conflict-6.7 
conflict-6.8 conflict-6.9 conflict-6.10 conflict-6.11 conflict-6.12 
conflict-6.13 sync-1.1 sync-1.2 sync-1.3

make: *** [test] Error 1


Has anyone else seen these errors? Are they expected with the current 
CVS version?




The sync test errors make no sense since they only apply to UNIX 
environments (according to the comments in the test script). When I run 
this test by itself I get no tests and no errors:


$ ./testfixture ../sqlite/test/sync.test
0 errors out of 0 tests
Failures on these tests:

When I run the conflict tests alone I get the following results:

$ ./testfixture ../sqlite/test/conflict.test
conflict-1.0... Ok
conflict-1.1... Ok
conflict-1.2... Ok
conflict-1.3... Ok
conflict-1.4... Ok
conflict-1.5... Ok
conflict-1.6... Ok
conflict-1.7... Ok
conflict-2.0... Ok
conflict-2.1... Ok
conflict-2.2... Ok
conflict-2.3... Ok
conflict-2.4... Ok
conflict-2.5... Ok
conflict-2.6... Ok
conflict-2.7... Ok
conflict-3.0... Ok
conflict-3.1... Ok
conflict-3.2... Ok
conflict-3.3... Ok
conflict-3.4... Ok
conflict-3.5... Ok
conflict-3.6... Ok
conflict-3.7... Ok
conflict-4.0... Ok
conflict-4.1... Ok
conflict-4.2... Ok
conflict-4.3... Ok
conflict-4.4... Ok
conflict-4.5... Ok
conflict-4.6... Ok
conflict-4.7... Ok
conflict-4.8... Ok
conflict-4.9... Ok
conflict-4.10... Ok
conflict-4.11... Ok
conflict-5.0... Ok
conflict-5.1... Ok
conflict-5.2... Ok
conflict-5.3... Ok
conflict-5.4... Ok
conflict-5.5... Ok
conflict-5.6... Ok
conflict-5.7... Ok
conflict-5.8... Ok
conflict-5.9... Ok
conflict-5.10... Ok
conflict-5.11... Ok
conflict-5.12... Ok
conflict-5.13... Ok
conflict-5.14... Ok
conflict-5.15... Ok
conflict-5.16... Ok
conflict-6.0... Ok
conflict-6.1... Ok
conflict-6.2...
Expected: [0 {7 6 9} 1 1]
Got: [0 {7 6 9} 1 0]
conflict-6.3...
Expected: [0 {6 7 3 9} 1 1]
Got: [0 {6 7 3 9} 1 0]
conflict-6.4... Ok
conflict-6.5... Ok
conflict-6.6... Ok
conflict-6.7...
Expected: [0 {6 7 3 9} 1 1]
Got: [0 {6 7 3 9} 1 0]
conflict-6.8...
Expected: [0 {7 6 9} 1 1]
Got: [0 {7 6 9} 1 0]
conflict-6.9...
Expected: [0 {6 7 3 9} 1 1]
Got: [0 {6 7 3 9} 1 0]
conflict-6.10...
Expected: [0 {7 6 9} 1 1]
Got: [0 {7 6 9} 1 0]
conflict-6.11...
Expected: [0 {6 7 3 9} 1 1]
Got: [0 {6 7 3 9} 1 0]
conflict-6.12...
Expected: [0 {6 7 3 9} 1 1]
Got: [0 {6 7 3 9} 1 0]
conflict-6.13...
Expected: [0 {7 6 9} 1 1]
Got: [0 {7 6 9} 1 0]
conflict-6.14... Ok
conflict-6.15... Ok
conflict-6.16... Ok
conflict-7.1... Ok
conflict-7.2... Ok
conflict-7.2.1... Ok
conflict-7.3... Ok
conflict-7.4... Ok
conflict-7.5... Ok
conflict-7.5.1... Ok
conflict-7.6... Ok
conflict-7.7... Ok
conflict-7.7.1... Ok
conflict-8.1... Ok
conflict-8.1.1... Ok
conflict-8.2... Ok
conflict-8.2.1... Ok
conflict-8.3... Ok
conflict-8.3.1... Ok
conflict-8.4... Ok
conflict-8.4.1... Ok
conflict-8.5... Ok
conflict-8.5.1... Ok
conflict-8.6... Ok
conflict-8.6.1... Ok
conflict-8.99... Ok
conflict-9.1... Ok
conflict-9.2... Ok
conflict-9.3... Ok
conflict-9.4... Ok
conflict-9.5... Ok
conflict-9.6... Ok
conflict-9.7... Ok
conflict-9.8... Ok
conflict-9.9... Ok
conflict-9.10... Ok
conflict-9.11... Ok
conflict-9.12... Ok
conflict-9.13... Ok
conflict-9.14... Ok
conflict-9.15... Ok
conflict-9.16... Ok
conflict-9.17... Ok
conflict-9.18... Ok
conflict-9.19... Ok
conflict-9.20... Ok
conflict-9.21... Ok
conflict-9.22... Ok
conflict-9.23... Ok
conflict-9.24... Ok
conflict-9.25... Ok
conflict-9.26... Ok
conflict-10.1... Ok
conflict-10.2... Ok
conflict-11.1... Ok
conflict-11.2... Ok
conflict-11.3... Ok
conflict-11.5... Ok
conflict-11.6... Ok
9 errors out of 126 tests
Failures on these tests: conflict-6.2 conflict-6.3 conflict-6.7 
conflict-6.8 conflict-6.9 conflict-6.10 conflict-6.11 conflict-6.12 
conflict-6.13



Any clarification will be appreciated.


[sqlite] Network-based DB performance for Mozilla

2005-09-13 Thread Brett Wilson
Hi everybody,

I'm working on replacing a lot of Firefox's storage systems to use
sqlite. It has been going well so far except for one issue.

The database file is stored in the user's Mozilla profile directory.
In companies and universities using Linux, this directory is often
hosted over the network. In these cases, sqlite performance can be
quite poor.

I am aware that writing performance can be bad in these cases, but we
don't do too many writes. I am mostly concerned about read
performance, since a lot of this stuff (history, bookmarks) drives the
UI. The lag, even over a fast network, can be noticeable. I am also
concerned about file locking, since the documentation does not
recommend using files over the network.

Can anybody suggest what to do about this problem? This is not
something that can be avoided, since some people will have this
configuration and will not have any say about it. Firefox must perform
reasonably in these cases.

One thing that could work in our favor is that Mozilla already does
locking on the profile, so access will be restricted to our one
process. Is there anything that we can do to take advantage of this to
avoid having to query the file for reads even when the content is
cached? It looks like we will have multiple database connections from
this process.

I will work on minimizing the number of queries in the common cases,
but any little bit of performance will help here.

Thanks,
Brett


RE: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread GreatNews


This problem can be remedied by defragging the hard disk from time to time.
Copying the whole db file might also do. But I'm wondering if it's possible
to prevent this problem from happening, or at least reduce the chances of
getting fragmented. Sqlite can reuse free pages that were originally occupied
by deleted records, so it might be able to implement a command to pre-allocate
some pages in a big chunk?  Requesting a big chunk of disk at once usually
results in fewer fragments.

Thanks,

Jack


-Original Message-
From: Dennis Jenkins [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 13, 2005 11:39 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Pre-allocating disk space to avoid db file fragments

Jay Sprenkle wrote:

>On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
>  
>
>>Actually, you can defrag the database file yourself, if you have admin 
>>rights (b/c you need to open a handle to the physical device).
>>
>>
>>
>>
>I thought he needed an automated solution to include in his code 
>released to users.
>
>  
>
Yeah.  His code can defrag the file if it has the correct permissions and is
running on Windows NT 4 or better.  He can do this whenever he has the need
to.  Am I missing something?  Maybe I missed the original goal and only
focused on the "need to defrag" angle.



Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Dennis Jenkins

Jay Sprenkle wrote:


On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
 


Actually, you can defrag the database file yourself, if you have admin
rights (b/c you need to open a handle to the physical device).


   

I thought he needed an automated solution to include in his code released to 
users.


 

Yeah.  His code can defrag the file if it has the correct permissions 
and is running on Windows NT 4 or better.  He can do this whenever he 
has the need to.  Am I missing something?  Maybe I missed the original 
goal and only focused on the "need to defrag" angle.




Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Jay Sprenkle
On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
> 
> Actually, you can defrag the database file yourself, if you have admin
> rights (b/c you need to open a handle to the physical device).
> 
> 
I thought he needed an automated solution to include in his code released to 
users.

-- 
---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call
http://www.lulu.com/content/77264


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Dennis Jenkins

Jay Sprenkle wrote:


On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
 


It depends on lots of things: the OS, the filesystem, the % free space
on the file system, other processes that are causing the OS to allocate
disk blocks. I have noticed that Windows XP totally sucks at keeping
files fragment free when copying them. Even if there is enough free
space to hold the destination file contiguously, the OS won't do it. I
have rarely bothered to check file fragmentation on Linux and FreeBSD
systems, so I don't know how those handle it (but I would assume it to
be much more intelligent than NTFS).
   




ugh! Thanks for letting us know about that.

There's no way I know of to control fragmentation.
I've been assuming if you copy a complete file within a short time period
to a new location it will likely be less fragmented that the original. It's 
not

always true, but in my experience it's simple and generally tends to be
true over the long run. If a user will not do defrag on their disk there's 
not

a lot you can do to correct for it.

 

Actually, you can defrag the database file yourself, if you have admin 
rights (b/c you need to open a handle to the physical device).




Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Ben Clewett
A small warning about running VACUUM too often.  Any prepared 
statements will fail if they were prepared before the VACUUM and used 
afterwards.  I had a daemon which did a VACUUM autonomously, which 
occasionally coincided with a user request and broke it :)


Jay Sprenkle wrote:

On 9/13/05, Ray Mosley <[EMAIL PROTECTED]> wrote:


Is it overkill to VACUUM every time the last user terminates?




It depends.
If your program is very active rearranging database records every time it 
runs probably not.





--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Jay Sprenkle
On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
> 
> >
> It depends on lots of things: the OS, the filesystem, the % free space
> on the file system, other processes that are causing the OS to allocate
> disk blocks. I have noticed that Windows XP totally sucks at keeping
> files fragment free when copying them. Even if there is enough free
> space to hold the destination file contiguously, the OS won't do it. I
> have rarely bothered to check file fragmentation on Linux and FreeBSD
> systems, so I don't know how those handle it (but I would assume it to
> be much more intelligent than NTFS).


ugh! Thanks for letting us know about that.

There's no way I know of to control fragmentation.
I've been assuming that if you copy a complete file to a new location within
a short time period, it will likely be less fragmented than the original.
It's not always true, but in my experience it's simple and generally tends
to be true over the long run. If a user won't defrag their disk, there's not
a lot you can do to correct for it.


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Jay Sprenkle
On 9/13/05, Ray Mosley <[EMAIL PROTECTED]> wrote:
> 
> Is it overkill to VACUUM every time the last user terminates?


It depends.
If your program is very active rearranging database records every time it 
runs, probably not.


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Dennis Jenkins

Jay Sprenkle wrote:


On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
 


Even vacuuming won't defrag the file. Disk space is allocated by the OS
and the OS makes no guarantees.
   




Won't Dr. Hipp's method of making a backup copy also defrag the file?

i.e.

execute begin exclusive to lock it.
copy the file
commit
rename the files and use the backup copy as the new current database.

Assuming your disk free space isn't heavily fragmented.
If it is fragmented I believe this will tend to reduce the fragmentation 
with time,

depending on what else is going on at the same time on the machine.
 

It depends on lots of things: the OS, the filesystem, the % free space 
on the file system, other processes that are causing the OS to allocate 
disk blocks.  I have noticed that Windows XP totally sucks at keeping 
files fragment free when copying them.  Even if there is enough free 
space to hold the destination file contiguously, the OS won't do it.  I 
have rarely bothered to check file fragmentation on Linux and FreeBSD 
systems, so I don't know how those handle it (but I would assume it to 
be much more intelligent than NTFS).


To Ben's point, I neglected to consider table space fragmentation.  He 
has a very good point.  I read the source code to the VACUUM function.  
My understanding is that the resulting file won't have any table space 
fragmentation, but I could be wrong.




Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Ray Mosley
Is it overkill to VACUUM every time the last user terminates?

On 9/13/05, Ben Clewett <[EMAIL PROTECTED]> wrote: 
> 
> An old COBOL system we had did this. It never allocated less than 64
> blocks of disk space. It did work.
> 
> A lot of modern file systems (eg, EXT2 and EXT3) do this anyway by
> reserving space after your file for later use. So if you are using a
> file system with plenty of free space, file expansion will (mostly) be
> as a continuous extension of exiting data.
> 
> Apart from file fragmentation, there is also table space fragmentation.
> A sequential read through an index on a table may not be a sequential
> read along a disk cylinder. Therefore resulting in low performance. I
> don't know whether VACUUM helps or hinders this effect.
> 
> From experience I know that dumping an entire DB as SQL, then
> destroying database, then parsing back in. Can result in significant
> read performance gains. Where database is not cached by OS file cache
> system. I would *guess* that where the database is cached, none of this
> will make much difference. :)
> 
> Just my two pence worth...
> 
> 
> Cory Nelson wrote:
> > I think his issue is that the database is changing size too often. He
> > wants it to automatically expand in larger chunks so there is less
> > fragmentation on the disk.
> >
> > Good idea, assuming it's settable via pragma.
> >
> > On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
> >
> >>On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:
> >>
> >>>Hi D. Richard Hipp,
> >>>
> >>>I'm developing a desktop rss reader using your excellent sqlite engine.
> >>>One
> >>>issue my users found is that sqlite database can get heavily fragmented
> >>>over
> >>>time. I'm wondering if it's a viable suggestion that sqlite 
> pre-allocates
> >>>disk space when creating database, and grows the db file by bigger
> >>>chunk(e.g. grow by 20% or so in size each time)?
> >>
> >>
> >>
> >>Why not do a vacuum every 10th time (or something similar) you exit the
> >>program?
> >>
> >>---
> >>The Castles of Dereth Calendar: a tour of the art and architecture of
> >>Asheron's Call
> >>http://www.lulu.com/content/77264
> >>
> >>
> >
> >
> >
> 
> 
> --
> Ben Clewett
> +44(0)1923 46
> Project Manager
> Road Tech Computer Systems Ltd
> http://www.roadrunner.uk.com
> 
> 


-- 
Ray Mosley


Re: [sqlite] Problem with DETACH on 2.8.16

2005-09-13 Thread Dennis Cote

D. Richard Hipp wrote:


On Tue, 2005-09-13 at 09:08 +0200, Laurent wrote:
 


I get the error :

  Assertion failed: xHash!=0, file hash.c, line 299
  Abnormal program termination

   



I tried the same sequence of commands and it worked
fine for me.
 


Richard,

I can confirm Laurent's problem exists in the 2.8.15 version of the 
shell under Windows. I have copied a session log of the commands below:


   C:\Documents and Settings\DennisC>sqlite x1.dbx
   SQLite version 2.8.15
   Enter ".help" for instructions
   sqlite> create table documents (a);
   sqlite> create index i on documents(a);
   sqlite> .q
  
   C:\Documents and Settings\DennisC>sqlite x2.dbx

   SQLite version 2.8.15
   Enter ".help" for instructions
   sqlite> create table documents (a);
   sqlite> create index i on documents(a);
   sqlite> .q
  
   C:\Documents and Settings\DennisC>sqlite

   SQLite version 2.8.15
   Enter ".help" for instructions
   sqlite> attach 'x1.dbx' as d1;
   sqlite> attach 'x2.dbx' as d2;
   sqlite> detach d1;
   Assertion failed: xHash!=0, file ../sqlite/src/hash.c, line 299
  
   This application has requested the Runtime to terminate it in an 
unusual way.

   Please contact the application's support team for more information.
  
   C:\Documents and Settings\DennisC>


I don't have 2.8.16 handy, so I can't try that version. I will try again 
with latest as well.


Dennis Cote


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Jay Sprenkle
On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
> 
> >
> Even vacuuming won't defrag the file. Disk space is allocated by the OS
> and the OS makes no guarantees.


Won't Dr. Hipp's method of making a backup copy also defrag the file?

i.e.

execute begin exclusive to lock it.
copy the file
commit
rename the files and use the backup copy as the new current database.

Assuming your disk free space isn't heavily fragmented.
If it is fragmented I believe this will tend to reduce the fragmentation 
with time,
depending on what else is going on at the same time on the machine.


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Ben Clewett
An old COBOL system we had did this.  It never allocated less than 64 
blocks of disk space.  It did work.


A lot of modern file systems (eg, EXT2 and EXT3) do this anyway by 
reserving space after your file for later use.  So if you are using a 
file system with plenty of free space, file expansion will (mostly) be 
as a continuous extension of existing data.


Apart from file fragmentation, there is also table space fragmentation. 
A sequential read through an index on a table may not be a sequential 
read along a disk cylinder, resulting in low performance.  I 
don't know whether VACUUM helps or hinders this effect.


From experience I know that dumping an entire DB as SQL, then 
destroying the database, then parsing it back in can result in significant 
read performance gains where the database is not cached by the OS file 
cache.  I would *guess* that where the database is cached, none of this 
will make much difference. :)


Just my two pence worth...


Cory Nelson wrote:

I think his issue is that the database is changing size too often.  He
wants it to automatically expand in larger chunks so there is less
fragmentation on the disk.

Good idea, assuming it's settable via pragma.

On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote:


On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:


Hi D. Richard Hipp,

I'm developing a desktop rss reader using your excellent sqlite engine.
One
issue my users found is that sqlite database can get heavily fragmented
over
time. I'm wondering if it's a viable suggestion that sqlite pre-allocates
disk space when creating database, and grows the db file by bigger
chunk(e.g. grow by 20% or so in size each time)?




Why not do a vacuum every 10th time (or something similar) you exit the
program?

---
The Castles of Dereth Calendar: a tour of the art and architecture of
Asheron's Call
http://www.lulu.com/content/77264









--
Ben Clewett
+44(0)1923 46
Project Manager
Road Tech Computer Systems Ltd
http://www.roadrunner.uk.com


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Dennis Jenkins

Jay Sprenkle wrote:


On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:
 


Hi D. Richard Hipp,

I'm developing a desktop rss reader using your excellent sqlite engine. 
One
issue my users found is that sqlite database can get heavily fragmented 
over

time. I'm wondering if it's a viable suggestion that sqlite pre-allocates
disk space when creating database, and grows the db file by bigger
chunk(e.g. grow by 20% or so in size each time)?
   





Why not do a vacuum every 10th time (or something similar) you exit the 
program?


---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call

http://www.lulu.com/content/77264

 

Even vacuuming won't defrag the file.  Disk space is allocated by the OS 
and the OS makes no guarantees.


If the program is running on Windows, then you can defrag the file 
yourself (if you have admin rights).  You can read about the Win32 defrag 
APIs here http://www.sysinternals.com/Information/DiskDefragmenting.html.


We don't defrag the database file, but I do vacuum it whenever the slack 
space exceeds 25% of the total file space used by the database file.  We 
check for this when our application starts up. 

I ported a hack from sqlite2 to sqlite3 that calculates the amount of 
slack space.  I submitted it (probably improperly) for inclusion into 
sqlite a few weeks ago.  I can provide it to you if you wish, just email me.


You could do the following:

1) Create a dummy table and fill it with a gazillion [1] rows of junk.
2) Defrag the database file.
3) Drop the dummy table.
4) You just created lots of slack space that will be reused by sqlite 
before sqlite extends the disk file (I think).


[1] for suitable values of "a gazillion".
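
A rough sketch of steps 1 and 3 above using the sqlite3 shell (step 2, the 
defrag call, is OS-specific and omitted; the table name, filler text, and 
row counts are arbitrary):

sqlite3 app.db <<'EOF'
CREATE TABLE padding (junk TEXT);
INSERT INTO padding VALUES ('just some filler text to take up space .......');
-- each of these doubles the row count; repeat until the file is big enough
INSERT INTO padding SELECT junk FROM padding;
INSERT INTO padding SELECT junk FROM padding;
INSERT INTO padding SELECT junk FROM padding;
DROP TABLE padding;
EOF

After the DROP, those pages stay in the file as free pages for SQLite to 
reuse (as long as you don't VACUUM, which would shrink the file again).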



Re: [sqlite] Problem with DETACH on 2.8.16

2005-09-13 Thread Laurent



Hello Richard,

I'm sorry, but if I do:

1. Create a new directory.
2. Download sqlite 2.8.16 (Windows version from sqlite.org).
3. Create 2 files script1 and script2 (see attached).
4. Run the commands:

   sqlite x1.dbx

Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Cory Nelson
I think his issue is that the database is changing size too often.  He
wants it to automatically expand in larger chunks so there is less
fragmentation on the disk.

Good idea, assuming it's settable via pragma.

On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
> On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:
> >
> > Hi D. Richard Hipp,
> >
> > I'm developing a desktop rss reader using your excellent sqlite engine.
> > One
> > issue my users found is that sqlite database can get heavily fragmented
> > over
> > time. I'm wondering if it's a viable suggestion that sqlite pre-allocates
> > disk space when creating database, and grows the db file by bigger
> > chunk(e.g. grow by 20% or so in size each time)?
> 
> 
> 
> Why not do a vacuum every 10th time (or something similar) you exit the
> program?
> 
> ---
> The Castles of Dereth Calendar: a tour of the art and architecture of
> Asheron's Call
> http://www.lulu.com/content/77264
> 
> 


-- 
Cory Nelson
http://www.int64.org


Re: [sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread Jay Sprenkle
On 9/13/05, GreatNews <[EMAIL PROTECTED]> wrote:
> 
> Hi D. Richard Hipp,
> 
> I'm developing a desktop rss reader using your excellent sqlite engine. 
> One
> issue my users found is that sqlite database can get heavily fragmented 
> over
> time. I'm wondering if it's a viable suggestion that sqlite pre-allocates
> disk space when creating database, and grows the db file by bigger
> chunk(e.g. grow by 20% or so in size each time)?



Why not do a vacuum every 10th time (or something similar) you exit the 
program?
 
---
The Castles of Dereth Calendar: a tour of the art and architecture of 
Asheron's Call
http://www.lulu.com/content/77264


[sqlite] Pre-allocating disk space to avoid db file fragments

2005-09-13 Thread GreatNews
Hi D. Richard Hipp,
 
I'm developing a desktop rss reader using your excellent sqlite engine. One
issue my users found is that the sqlite database can get heavily fragmented
over time. I'm wondering if it's a viable suggestion that sqlite pre-allocate
disk space when creating a database, and grow the db file by a bigger
chunk (e.g. grow by 20% or so in size each time)?
 
Thanks,
 
Jack
 
 


Re: [sqlite] Problem with DETACH on 2.8.16

2005-09-13 Thread D. Richard Hipp
On Tue, 2005-09-13 at 09:08 +0200, Laurent wrote:
> I get the error :
> 
>Assertion failed: xHash!=0, file hash.c, line 299
>Abnormal program termination
> 

I tried the same sequence of commands and it worked
fine for me.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



[sqlite] Problem with DETACH on 2.8.16

2005-09-13 Thread Laurent
Hello,

I'm using sqlite 2.8.16.
I have 2 databases, x1.dbx and x2.dbx, created with the same commands: 

  create table documents (a);
  create index i on documents(a);

Then, if I run :

  attach 'x1.dbx' as d1; 
  attach 'x2.dbx' as d2;
  detach d1;

I get the error :

   Assertion failed: xHash!=0, file hash.c, line 299
   Abnormal program termination

This error will occur using the command line (under Windows) or the DLL.

Could you please check.  
Thanks in advance.

Laurent.