Thanks, Roger. Your second suggestion does the trick. The first, however,
returns: . Can you explain why?
Thanks again.
Karl
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Roger Binns
Sent: Sunday, October 26, 2008 9:32 PM
To: General Discussion of S
On Oct 27, 2008, at 12:38 PM, Julian Bui wrote:
> Thanks for the reply dan.
>
>> You probably don't "need" clustered indexing as such, but this would be
>> the kind of case where it provides some advantages. You can get the same
>> effect in SQLite by including all the data columns in your index
>> definition.

Unfortunately, because I will be makin
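The suggestion quoted above — put the timestamp column first in the index and then include every column the query reads, so SQLite can answer a time-range SELECT from the index alone — can be sketched as follows. The table and column names here are hypothetical stand-ins, not Julian's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (ts INTEGER, device INTEGER, value REAL);
    -- "Covering" index: ts first for the range scan, then every other
    -- column the SELECT needs, so the table itself is never visited.
    CREATE INDEX readings_ts_cover ON readings(ts, device, value);
""")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(t, t % 3, t * 0.5) for t in range(100)])

# The query plan should report the covering index being used.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT device, value FROM readings "
    "WHERE ts BETWEEN 10 AND 20").fetchall()
rows = conn.execute(
    "SELECT device, value FROM readings "
    "WHERE ts BETWEEN 10 AND 20").fetchall()
```

Because every needed column is in the index, the range query never touches the table's rows at all, which is the "same effect" as a clustered index that Dan describes.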
On Oct 26, 2008, at 5:15 PM, Julian Bui wrote:
> Hi all,
>
> I have records w/ a timestamp attribute which is not unique and cannot be
> used as a primary key. These records will be inserted according to
> timestamp value. From this important fact, I've gathered I need a clustered
> index since my SELECT statements use a time-range in the WHERE clause.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Karl Lautman wrote:
> Can someone point out to me the syntax error in the following? I've omitted
> the set-up code for brevity, but cur is a cursor with a connection to the
> database. Thanks.
>
> x = cur.execute('last_insert_rowid()')

last_i
Cory Nelson wrote:
> On Sun, Oct 26, 2008 at 8:17 PM, Mohit Sindhwani <[EMAIL PROTECTED]> wrote:
>
>> I'm setting up a cron job to delete a bunch of old records every 2 weeks,
>> and initially I just wanted to do DELETE FROM table WHERE
>> datetime(created_on, 'localtime') < some_date.
Can someone point out to me the syntax error in the following? I've omitted
the set-up code for brevity, but cur is a cursor with a connection to the
database. Thanks.
>>> x = cur.execute('last_insert_rowid()')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
x = cur.execute('las
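The traceback above is cut off, but the likely cause of the error is that `last_insert_rowid()` is a bare SQL expression, not a complete statement — `execute()` needs something it can run, such as a SELECT. A minimal sketch of the fix (the table here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO t (name) VALUES ('first')")

# 'last_insert_rowid()' alone is not a statement; wrap it in SELECT:
x = cur.execute("SELECT last_insert_rowid()").fetchone()[0]
```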
I'm setting up a cron job to delete a bunch of old records every 2 weeks,
and initially I just wanted to do DELETE FROM table WHERE
datetime(created_on, 'localtime') < some_date.
Then, I remembered about vacuum - do I need to vacuum the database
whenever I delete the records? Should I just
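To make the question concrete, here is a sketch of the cron job's core with hypothetical table and column names. Note that VACUUM is not required for correctness — DELETE already marks the pages as free for reuse — it only rebuilds the file so the space is returned to the filesystem:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, created_on TEXT)")
conn.executemany("INSERT INTO records (created_on) VALUES (?)",
                 [("2008-0%d-15 00:00:00" % m,) for m in range(1, 10)])
conn.commit()

cutoff = "2008-05-01 00:00:00"
conn.execute("DELETE FROM records WHERE datetime(created_on, 'localtime') < ?",
             (cutoff,))
conn.commit()

# Optional: deleted pages are only marked free; VACUUM rebuilds the
# file so it actually shrinks on disk.
conn.execute("VACUUM")
remaining = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
```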
My question is: how can I improve delete speed? The current speed cannot
satisfy my application.
Igor Tandetnik wrote:
>
> "yhuang" <[EMAIL PROTECTED]>
> wrote in message news:[EMAIL PROTECTED]
>> I create a DB and only one table in the DB. There are 3641043 records
>> in the DB file. Min id is 27081364
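The thread is cut off here, but two common remedies for slow bulk deletes in SQLite are an index on the column in the WHERE clause and grouping the work into a single explicit transaction rather than thousands of autocommits. An illustrative sketch, not yhuang's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, stamp INTEGER)")
conn.executemany("INSERT INTO log (stamp) VALUES (?)",
                 [(i % 100,) for i in range(10000)])
conn.commit()

# 1. An index on the predicate column turns the delete's full scan
#    into an index seek.
conn.execute("CREATE INDEX log_stamp ON log(stamp)")

# 2. One explicit transaction instead of one commit per statement.
with conn:
    conn.execute("DELETE FROM log WHERE stamp < ?", (50,))

left = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```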
What's the right way to use update/commit/rollback hooks to produce a
replay log?
Currently I'm doing it at a high level by just recording all SQL
statements into a replay log, and that works really well except that it
fails in some cases, like with the use of CURRENT_TIMESTAMP. (Replaying
that will i
On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> Puneet, I think I see what you're saying about the data types and their
> affinities, but what does that have to do with the MUCH bigger table size
> than what was expected?
>
That you were expecting smallint and bigint to behave the way they
On Oct 26, 2008, at 5:58 PM, Julian Bui wrote:
> Hi everyone,
>
> I have records in my db that consist of smallint, bigint, smallint, double,
> char(8). By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 bytes per
> record. I also have an index over the attribute that is a double.
Puneet, I think I see what you're saying about the data types and their
affinities, but what does that have to do with the MUCH bigger table size
than what was expected?
Also, I made a mistake. In an attempt to censor out my table name and
attribute names I forgot to fix everything. So yes, I am
extending my own reply...
On 10/26/08, P Kishor <[EMAIL PROTECTED]> wrote:
> On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> > Hi everyone,
> >
> > I have records in my db that consist of smallint, bigint, smallint, double,
> > char(8). By my calculation that comes to 2 + 8 + 2 + 8
On 10/26/08, Julian Bui <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> I have records in my db that consist of smallint, bigint, smallint, double,
> char(8). By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
> record. I also have an index over the attribute that is a double.
Why
Hi everyone,
I have records in my db that consist of smallint, bigint, smallint, double,
char(8). By my calculation that comes to 2 + 8 + 2 + 8 + 8 = 28 Bytes per
record. I also have an index over the attribute that is a double.
I inserted 100,000 records into a clean database and the database
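Part of the answer hinted at in the replies: SQLite does not store fixed-width columns. Declared types like smallint or char(8) only assign a type *affinity*; every stored value carries its own type, integers use a variable-length encoding, and each record and page adds header overhead, so the file is larger than a naive 28-bytes-per-row estimate. A small sketch of the affinity behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE samples
                (a smallint, b bigint, c smallint, d double, e char(8))""")
conn.execute(
    "INSERT INTO samples VALUES (1, 1234567890123, 2, 0.5, 'abcdefgh')")

# Declared column types become affinities; each stored value still
# reports its own storage class via typeof().
types = conn.execute(
    "SELECT typeof(a), typeof(b), typeof(c), typeof(d), typeof(e) "
    "FROM samples").fetchone()
```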
Cancel that last plea for help: Excel was adding a trailing space in one
of the fields containing numbers, which I did not discover until
looking at the CSV file in a text editor. A quick search-and-replace
later, I've dumped the data back into SQLite and the problem is solved.
I'm fiddling with SQLite running on Mac OSX and am using the Firefox
extension SQLite Manager ...
I have some Excel tables I'd like to re-create in SQLite.
It looks easy to me:
1. Save your Excel table as a .csv file.
2. In SQLite Manager use DATABASE-->IMPORT ... and follow the directions.
Gre
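As an alternative to the SQLite Manager GUI, the same import can be scripted with Python's csv and sqlite3 modules. The file contents, table, and column names below are hypothetical stand-ins; stripping whitespace on each field also guards against the stray trailing spaces Excel sometimes exports:

```python
import csv, sqlite3, io

# Stand-in for the exported file; in practice: open("table.csv", newline="")
csv_text = "name,qty\nwidget,3\ngadget,5\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 ([field.strip() for field in row] for row in reader))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
total = conn.execute("SELECT SUM(qty) FROM items").fetchone()[0]
```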
On 10/26/08, Karl Lautman <[EMAIL PROTECTED]> wrote:
> Let's say I've just added a record to a table where one of the columns is
> designated an integer primary key, and I have sqlite autoincrement that
> column for me with each insert. Now I need to add some records to other
> tables linked to
Let's say I've just added a record to a table where one of the columns is
designated an integer primary key, and I have sqlite autoincrement that
column for me with each insert. Now I need to add some records to other
tables linked to the first record. So I want to use the first record's PK
value
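In Python's sqlite3 module, the rowid generated for an INTEGER PRIMARY KEY insert is available as `cursor.lastrowid` (equivalent to SQL's `SELECT last_insert_rowid()`), and that value can seed the rows in the linked tables. A sketch with hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE child  (parent_id INTEGER, note TEXT);
""")
cur.execute("INSERT INTO parent (name) VALUES ('first')")
pk = cur.lastrowid  # the auto-generated INTEGER PRIMARY KEY value

# Use the captured key to link the dependent records.
cur.execute("INSERT INTO child (parent_id, note) VALUES (?, ?)",
            (pk, "linked"))
linked = cur.execute("SELECT parent_id FROM child").fetchone()[0]
```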
On Oct 25, 2008, at 12:21 PM, Dan wrote:
>> I thought the two pAux parameters were odd - one bare and one cast to
>> (int), so I looked up rtreeInit().
>
> Good point. I removed the first of the two "pAux" parameters from
> rtreeInit(). It was not being used.
>
> http://www.sqlite.org/cvstrac/ch
Thanks Doug.
On Sat, Oct 25, 2008 at 11:48 AM, Doug <[EMAIL PROTECTED]> wrote:
> Hi Jay --
>
> I used to have a problem like this a few years back. I don't remember all
> the hows and whys, but my apps call the following at start up and the
> problems are gone:
>
> _tsetlocale(LC_ALL, _T(""));
>
Hi all,
I have records w/ a timestamp attribute which is not unique and cannot be
used as a primary key. These records will be inserted according to
timestamp value. From this important fact, I've gathered I need a clustered
index since my SELECT statements use a time-range in the WHERE clause.