Hello all,
Is it quicker (slower or the same) to execute a query on a view or to
execute the original query from which this view was created?
I'm basically looking for the best (fastest) way to execute thousands of
queries to check whether they return any results or not. At the moment
I'm doing
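(On the view question: as far as I know SQLite expands a view's definition into the enclosing statement at prepare time, so querying the view and running the original SELECT should produce essentially the same plan. For the "does this query return any rows" case, wrapping the query in SELECT EXISTS with LIMIT 1 lets SQLite stop at the first matching row. A minimal sketch in Python's sqlite3 module -- the table `t` and the predicates are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

def has_rows(conn, where_clause, params=()):
    # EXISTS stops scanning at the first matching row, so we never
    # pay for materializing the full result set.
    sql = "SELECT EXISTS(SELECT 1 FROM t WHERE %s LIMIT 1)" % where_clause
    return bool(conn.execute(sql, params).fetchone()[0])

print(has_rows(conn, "x = ?", (2,)))   # True
print(has_rows(conn, "x = ?", (99,)))  # False
```

For thousands of such checks, preparing the statement once and only rebinding values is the other big win.)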
Joe Wilson <[EMAIL PROTECTED]> wrote:
>
> As for the stats from sqlite3_analyzer, they seem to be in the right ballpark.
> But I'm not sure its heuristic accounts for rows that are significantly larger
> than the page size, though. In such cases I am seeing higher than expected
> fragmentation af
Joe Wilson <[EMAIL PROTECTED]> wrote:
>
> See also: Changes to support fragmentation analysis in sqlite3_analyzer.
> http://www.sqlite.org/cvstrac/chngview?cn=3634
>
I'm not real sure those patches are working right.
I need to revisit that whole fragmentation analysis
thing before the next relea
--- turnleftjohn <[EMAIL PROTECTED]> wrote:
> improved dramatically. So I attempted the creation of the index off hours on
> the production system, and after 4 hours no index. I can't detect any
> activity at all. The journal file and the .db file just sit at the same size
> for 4 hours. Why is this failing? It seems like it is just sitting there
> doing nothing. When I c
Thanks to you all.
I don't know if [char](32) is valid standard SQL or not,
but Microsoft uses this type in SQL Server 2000 (and above?).
The SQL statements that I used to create tables in SQLite3 were auto-generated
by my SQL Server 2000 Enterprise Manager,
and SQLite3 did not report any error about this.
I am new to sqlite. I have done some reading up. I have a table that I am
having difficulty creating an index on. The table holds ~180 million rows,
simple four column, integer schema. It is taking up about 1.5 gigs of
space. I inherited this application and it is slowing down. The table has
P Kishor wrote:
Mac/Unix person meself, but the Windows XP sort is pretty darn good as
well.
I'll take a look. Last time I used it it was useless. Win9x days? these
days (especially for a one off) I'd probably go straight to doing it in
Python to avoid x-platform syntax issues.
Martin
Ah yes, I should read more carefully :)
Thanks, right, I was actually guaranteeing uniqueness originally by just
fetching and then inserting only if there wasn't a match (I needed a rowid
if the row existed anyway). Now I'm guaranteeing uniqueness by letting sort
do the work for me, but simila
Chris Jones wrote:
I probably should have made this more explicit, but in sqlite, every row has
a unique identifier named rowid, which exists even if it isn't explicitly
declared in the schema, and I was depending on that. If you declare a
PRIMARY KEY, then this replaces rowid.
A tiny cor
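(The tiny correction is presumably that only an INTEGER PRIMARY KEY column becomes an alias for rowid; other PRIMARY KEY declarations create an ordinary unique index and leave rowid separate. A quick demonstration via Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# No explicit primary key: SQLite still gives every row a hidden rowid.
conn.execute("CREATE TABLE plain (name TEXT)")
conn.execute("INSERT INTO plain VALUES ('a')")
rid = conn.execute("SELECT rowid, name FROM plain").fetchone()
print(rid)  # (1, 'a')

# An INTEGER PRIMARY KEY column is an alias for rowid itself.
conn.execute("CREATE TABLE keyed (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO keyed (name) VALUES ('b')")
print(conn.execute("SELECT id, rowid FROM keyed").fetchone())  # (1, 1)
```
)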
Chris Jones <[EMAIL PROTECTED]> writes:
> Derrell.Lipman wrote:
>>
>> Chris Jones <[EMAIL PROTECTED]> writes:
>>
>> I don't think that your original solution solves that problem either. You
>> first posted this schema:
>>
>>> My schema looks as follows:
>>>
>>> CREATE TABLE rawfen ( fen VARCH
Hi, two days have passed since I sent this message to the list, but I
have received no replies so far.
As I thought that maybe not everybody received it, I am resending it one
more time.
I also feel that my message might be unclear, so I also suggest trying
it, and I give "steps to reproduce" below.
Before this, I
Derrell.Lipman wrote:
>
> Chris Jones <[EMAIL PROTECTED]> writes:
>
> I don't think that your original solution solves that problem either. You
> first posted this schema:
>
>> My schema looks as follows:
>>
>> CREATE TABLE rawfen ( fen VARCHAR(80) );
>> CREATE INDEX rawfen_idx_fen ON rawfe
Chris Jones wrote:
So, I did a "sort -u -S 1800M fenout.txt > fenoutsort.txt"
The sort took about 45 minutes, which is acceptable for me (it was much
longer without the -S option to tell it to make use of more memory), and
then loading the table was very efficient. Inserting all the rows into m
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> You could also improve the locality in the database file further by
> running a VACUUM command after it has been created. This will move the
> pages around so that the pages of the table are contiguous and so are the
> pages of the index, rather than h
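(A small sketch of the VACUUM step in Python, with a throwaway file database; the table and row counts are invented just to create free pages:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode

conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("x" * 100,) for _ in range(1000)])
conn.execute("DELETE FROM t WHERE rowid % 2 = 0")  # leave holes in the file

before = os.path.getsize(path)
# VACUUM rewrites the whole file, packing table pages and index pages
# contiguously and dropping the free pages.
conn.execute("VACUUM")
after = os.path.getsize(path)
print(before, after)
```
)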
I haven't heard of Dev-Cpp.
If you use MinGW gcc from http://mingw.org/ and MSYS, you shouldn't have any
problems.
--- timm2 <[EMAIL PROTECTED]> wrote:
> I use MinGW as it was installed by Dev-Cpp, I think; there are no Cygwin
> files.
> Tim
>
> > Are you mixing Cygwin and MinGW libraries and/
Oracle has MERGE. Search for UPSERT and SQL Server.
On 3/22/07, Doug <[EMAIL PROTECTED]> wrote:
The conflict algorithms in SQLite are brilliant. It's exactly what I've
needed, and after working with SQLite for so long, I had forgotten that it
wasn't part of the SQL standard (as far as I know--b
Hi Igor,
Igor Tandetnik wrote:
http://www.phpbuilder.com/manual/en/function.sqlite-fetch-array.php
"Fetches the next row from the given result handle. If there are no more
rows, returns FALSE, otherwise returns an associative array representing
the row data."
You need to call it repeatedly
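(The same fetch-until-exhausted pattern in Python's sqlite3 module, for comparison -- the two tables here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x)")
conn.execute("CREATE TABLE b (x)")

cur = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
# Iterating the cursor calls the fetch repeatedly until no rows remain,
# which is exactly what the PHP loop has to do with sqlite_fetch_array.
tables = [row[0] for row in cur]
print(tables)  # ['a', 'b']
```
)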
The conflict algorithms in SQLite are brilliant. It's exactly what I've
needed, and after working with SQLite for so long, I had forgotten that it
wasn't part of the SQL standard (as far as I know--but it should be!!!)
For those of you using MS SQL or Oracle, is there any way to get the same
resu
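(For readers less familiar with the SQLite side being praised here, the conflict algorithms look like this -- a minimal sketch, key/value names invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('a', 'old')")

# OR REPLACE: delete the conflicting row, then insert the new one.
conn.execute("INSERT OR REPLACE INTO kv VALUES ('a', 'new')")
print(conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])  # new

# OR IGNORE: keep the existing row, silently drop the new insert.
conn.execute("INSERT OR IGNORE INTO kv VALUES ('a', 'ignored')")
print(conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])  # new
```

On MS SQL and Oracle the closest equivalent is MERGE, as noted above, though it takes considerably more syntax.)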
Christian Mattar <[EMAIL PROTECTED]> wrote:
I've been having trouble with SQLite in PHP. Basically I want to
iterate over all tables of a database. I use the following query:
$handle = sqlite_open("db.sqlite");
$result = sqlite_query($handle, "SELECT name FROM sqlite_master WHERE
type='table' ORD
Hi everyone!
I've been having trouble with SQLite in PHP. Basically I want to iterate
over all tables of a database. I use the following query:
$handle = sqlite_open("db.sqlite");
$result = sqlite_query($handle, "SELECT name FROM sqlite_master WHERE
type='table' ORDER BY name");
$tables = sqlite_
Tue, 20 Mar 2007 14:15:52 - Ajay Arora wrote:
> [snip]
>And it gives me an error saying "e.extract_sequence" does not exist".
DROP TABLE IF EXISTS extract_master;
CREATE TABLE extract_master (
extract_sequence INTEGER PRIMARY KEY,
file_location    TEXT,
active
On Tue, 20 Mar 2007 13:24:17 +0800, you wrote:
>if you create a table use following statement (script generated from MS SQL
>Server 2000)
>CREATE TABLE [XTollData] (
> [DutyID] [char] (32) NOT NULL ,
> [CarNumber] [char] (10) NULL
>);
>
>SQLite3_Column_decltype will treat DutyID as data type 'ch
A fast technique to achieve your objective is to perform what I believe
is called a "monkey puzzle" sort. The data is not moved, instead an
array of descriptors to each element is sorted. The output is realized
by scanning the list of descriptors and picking up the associated record
from the
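(The descriptor-sort idea is easy to sketch: sort an array of indices by key and leave the records themselves untouched. A toy version in Python, with made-up records:

```python
# Records stay in place; only the small index array is rearranged.
records = [("carol", 3), ("alice", 1), ("bob", 2)]

# Sort descriptors (indices) by the key field of the record they point at.
order = sorted(range(len(records)), key=lambda i: records[i][0])
print(order)  # indices in key order

# Realize the sorted output by walking the descriptor list.
for i in order:
    print(records[i])
```

The win is that the sort moves small fixed-size descriptors instead of copying large records around.)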
An issue with cache is cache shadowing, the churning as data is copied
from one cache to another to another.
An example is the speed-up achieved on network accesses by using
sendfile or TransmitFile and bypassing up to four levels of buffering
for a message being despatched to a network interf
On 3/22/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:
Chris Jones wrote:
> realized that the unix "sort"
If I'd known you were on 'nix I'd have suggested using 'sort' and/or
'md5sum' about 12 hours ago. ;)
Mac/Unix person meself, but the Windows XP sort is pretty darn good as well.
--
Pune
Chris Jones wrote:
realized that the unix "sort"
If I'd known you were on 'nix I'd have suggested using 'sort' and/or
'md5sum' about 12 hours ago. ;)
Martin
You could sort the table then perform a merge which removes duplicates.
Chris Jones wrote:
I don't think that solves my problem. Sure, it guarantees that the IDs are
unique, but not the strings.
My whole goal is to be able to create a unique identifier for each string,
in such a way that I
On 3/22/07, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
Voxen <[EMAIL PROTECTED]> wrote:
>> There's no "like" - you do use connection pointer directly from
>> thread B. The fact that the piece of code thread B currently
>> executes is a method of an object that happened to be created by
>> thread A
I use MinGW as it was installed by Dev-Cpp, I think; there are no Cygwin files.
Tim
> Are you mixing Cygwin and MinGW libraries and/or header files?
>
> --- timm2 <[EMAIL PROTECTED]> wrote:
> > Using dev-c++ 4.9.9.2-wx-beta-6.8 with mingw 3.4.2 I have got the error:
> >
> > sqlite3.A(.text+0x44
Thanks everyone for your feedback.
I ended up doing a presort on the data, and then adding the data in order.
At first I was a little concerned about how I was going to implement an
external sort on a data set that huge, and realized that the unix "sort"
command can handle large files, and in f
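(The shape of this approach, condensed into a Python sketch -- `sorted(set(...))` stands in for the external `sort -u`, and the schema is the one quoted elsewhere in the thread:

```python
import sqlite3

strings = ["pear", "apple", "pear", "banana"]  # raw data with duplicates

# External equivalent: sort -u -S 1800M fenout.txt > fenoutsort.txt
unique_sorted = sorted(set(strings))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rawfen (fen TEXT)")
conn.execute("CREATE INDEX rawfen_idx_fen ON rawfen (fen)")

# Inserting in key order means every insert lands at the right edge of
# the index B-tree instead of dirtying a random page -- the locality win.
conn.executemany("INSERT INTO rawfen VALUES (?)",
                 [(s,) for s in unique_sorted])
print(conn.execute("SELECT count(*) FROM rawfen").fetchone()[0])  # 3
```
)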
On Thursday, 22 March 2007 13:16, [EMAIL PROTECTED] wrote:
> Jakub Ladman <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> > I need to know how this works in detail:
> > I have sqlite3 database file stored on SD/MMC FLASH card and i would to
> > insert rows to some tables often. There will be triggers to
Jonas Henriksen wrote:
I need to return the last data for each data_logger. In the testdata
there is only 1 distinct data_logger_id
executing:
select max(date_time) as date_time, data_logger_id
from data_values
where date_time>datetime('now','-2 hours')
group by data_logger_id
...takes ca 1,5
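(The usual fix for a slow max-per-group query like this is a composite index on (data_logger_id, date_time), which lets SQLite seek to the end of each logger's range instead of scanning. A sketch with invented sample rows, loosely following the schema above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE data_values (
    data_logger_id INTEGER, date_time TEXT, value REAL)""")
conn.executemany("INSERT INTO data_values VALUES (?, ?, ?)",
                 [(1, "2007-03-22 10:00", 1.0),
                  (1, "2007-03-22 11:00", 2.0),
                  (2, "2007-03-22 09:30", 3.0)])

# Composite index: each group's max(date_time) is at the end of its range.
conn.execute(
    "CREATE INDEX dv_logger_time ON data_values (data_logger_id, date_time)")

rows = conn.execute("""
    SELECT data_logger_id, max(date_time)
    FROM data_values
    GROUP BY data_logger_id
    ORDER BY data_logger_id""").fetchall()
print(rows)
```
)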
Chris Jones wrote:
I've read elsewhere that this is a data locality issue, which certainly
makes sense.
And in those threads, a suggestion has been made to insert in sorted order.
But it's unclear to me exactly what the sorting function would need to be -
it's likely my sorting function (say s
On 3/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
"P Kishor" <[EMAIL PROTECTED]> wrote:
> Richard,
>
> On 3/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> ...
> > The problem is that your working set is bigger than your cache
> > which is causing thrashing. I suggest a solution lik
Voxen <[EMAIL PROTECTED]> wrote:
There's no "like" - you do use connection pointer directly from
thread B. The fact that the piece of code thread B currently
executes is a method of an object that happened to be created by
thread A is immaterial.
That clears things and it shows me I need to ope
"P Kishor" <[EMAIL PROTECTED]> wrote:
> Richard,
>
> On 3/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> ...
> > The problem is that your working set is bigger than your cache
> > which is causing thrashing. I suggest a solution like this:
> >
> > Add entries to table ONE until the table a
At 04:47 22/03/2007, you wrote:
I don't think that solves my problem. Sure, it guarantees that the IDs are
unique, but not the strings.
My whole goal is to be able to create a unique identifier for each string,
in such a way that I dont have the same string listed twice, with different
identif
Richard,
On 3/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
..
The problem is that your working set is bigger than your cache
which is causing thrashing. I suggest a solution like this:
Add entries to table ONE until the table and its unique index get
so big that they no longer fit in ca
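(As I understand the suggestion, it continues: periodically flush the staging table into the big table in key order, so the big index is written sequentially. A toy sketch of that two-table pattern -- table and column names invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE one (s TEXT UNIQUE)")  # small staging table
conn.execute("CREATE TABLE two (s TEXT UNIQUE)")  # big destination table

# Stage a batch; the staging table's index stays small enough to cache.
conn.executemany("INSERT OR IGNORE INTO one VALUES (?)",
                 [("zeta",), ("alpha",), ("mid",)])

# Merge in key order so the destination index is appended to, not
# updated at random pages. The ORDER BY is the important part.
conn.execute("INSERT OR IGNORE INTO two SELECT * FROM one ORDER BY s")
conn.execute("DELETE FROM one")
print(conn.execute("SELECT s FROM two ORDER BY s").fetchall())
```
)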
Gerry Snyder <[EMAIL PROTECTED]> wrote:
> Chris Jones wrote:
> Hi all,
> I have a very simple schema. I need to assign a unique identifier
> to a large collection of strings, each at most 80-bytes, although
> typically shorter.
Would it help to hash the strings, then save them in the DB, checki
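(One way the hashing idea could look in practice -- a sketch, not Gerry's exact proposal: store a fixed-width hash beside each string and put the unique index on the hash, keeping the key small:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE strings (h BLOB UNIQUE, s TEXT)")

def add(conn, s):
    # 16-byte digest regardless of string length; the UNIQUE index
    # on it rejects duplicates cheaply.
    h = hashlib.md5(s.encode()).digest()
    conn.execute("INSERT OR IGNORE INTO strings VALUES (?, ?)", (h, s))

for s in ["abc", "def", "abc"]:
    add(conn, s)
print(conn.execute("SELECT count(*) FROM strings").fetchone()[0])  # 2
```

The trade-off is the (tiny, but nonzero) chance of hash collisions silently merging two distinct strings.)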
Hi All,
SQL Maestro Group is happy to announce the release of SQLite Maestro 7.3, a
powerful Windows GUI tool for SQLite databases administration and
development. The new version is immediately available at:
http://www.sqlmaestro.com/products/sqlite/maestro/
The new version is mostly a mainte
There's no "like" - you do use connection pointer directly from thread
B. The fact that the piece of code thread B currently executes is a
method of an object that happened to be created by thread A is
immaterial.
Thanks Igor.
That clears things and it shows me I need to open/close the data
Gil:
In your threads, don't declare sqlite3 *db private or public; instead
make it a local variable in each method. Then if one class calls a
method from another, a separate database pointer will exist (on each
thread's local stack) and you will have no problems. Let me know how
it works.
--
Ri
If the database schema is not small then opening a connection does make
a difference. In initial testing we found opening a connection with 100
simple test tables takes 3 ms. Later testing with our actual schema, which
has fewer tables but is more complex, takes 17 ms (~65 tables with indexes an
On 3/22/07, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
Gil Delavous <[EMAIL PROTECTED]>
wrote:
> However, what happens when a thread calls a method from the main
> thread, this one using its own sqlite connection?
The question doesn't make any sense to me, sorry. Methods don't belong
to threads.
Ion Silvestru <[EMAIL PROTECTED]> wrote:
> > drh wrote:
> > INSERT INTO two SELECT * FROM one ORDER BY unique_column;
>
> >The ORDER BY is important here.
>
> This is an excerpt from SQLite documentation:
>
> The second form of the INSERT statement takes its data from a SELE
Ian Frosst <[EMAIL PROTECTED]> wrote:
The problem here though, is that I'm doing the prepare/step/finalize
each time I want to execute the query, even though the only thing
that is going to change are the values. Can I write a loop which
prepares the SQL, using named parameters, then in a loop j
Chris Jones <[EMAIL PROTECTED]> writes:
> I don't think that solves my problem. Sure, it guarantees that the IDs are
> unique, but not the strings.
>
> My whole goal is to be able to create a unique identifier for each string,
> in such a way that I dont have the same string listed twice, with
> drh wrote:
> INSERT INTO two SELECT * FROM one ORDER BY unique_column;
>The ORDER BY is important here.
This is an excerpt from SQLite documentation:
The second form of the INSERT statement takes its data from a SELECT statement.
The number of columns in the result of the
Hi all, I'm looking for some clarification on the usage of bound parameters
in queries. I have one SQL statement that is going to be executed numerous
times (thousands of times, in a fairly tight loop.) Right now I am using a
string, "INSERT INTO tableX (col1, col2) VALUES ('%s', '%s')". and cal
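(The bound-parameter version of this loop, sketched in Python's sqlite3 module with the tableX schema from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableX (col1 TEXT, col2 TEXT)")

rows = [("a%d" % i, "b%d" % i) for i in range(1000)]

# The statement is prepared once and reused; only the bound values
# change per row -- no per-row string formatting, no quoting problems.
conn.executemany("INSERT INTO tableX (col1, col2) VALUES (?, ?)", rows)
print(conn.execute("SELECT count(*) FROM tableX").fetchone()[0])  # 1000
```

In the C API the equivalent is sqlite3_prepare once, then a loop of sqlite3_bind_*, sqlite3_step, and sqlite3_reset, with sqlite3_finalize only at the very end.)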
Voxen <[EMAIL PROTECTED]> wrote:
So, from my example, let say thread A created the object "main", and
thread B created the object "print".
When thread B calls the method from object "main" (as shown by the
example), my question was to know if the sqlite connection opened by
object "main" can be
Hi,
I'm new to sqlite but have experience from other systems. I have a
question regarding speed of a query when using "group by":
table:
create table data_values(
data_value_id integer not null primary key autoincrement,
data_type_id integer not null references data_types(data_typ
Jakub Ladman <[EMAIL PROTECTED]> wrote:
> Hi
>
> I need to know how this works in detail:
> I have sqlite3 database file stored on SD/MMC FLASH card and i would to
> insert
> rows to some tables often. There will be triggers to clean old and obsolete
> data.
> How often it will write to databas
Gerry Snyder <[EMAIL PROTECTED]> wrote:
> Chris Jones wrote:
> > Hi all,
> >
> > I have a very simple schema. I need to assign a unique identifier to a
> > large collection of strings, each at most 80-bytes, although typically
> > shorter.
> >
> > The problem is I have 112 million of them.
>
>
Gil Delavous <[EMAIL PROTECTED]>
wrote:
I was wondering if it's better to open a sqlite database connection
when my application launches, use it all along the process life, and
close it when it exits... or if it's better to open/close the database
connection each time a method has to query/stor
Hi Igor,
Thanks for your reply. I might have confused things, sorry about that.
So, from my example, let say thread A created the object "main", and
thread B created the object "print".
When thread B calls the method from object "main" (as shown by the
example), my question was to know if t
Hi,
I was wondering if it's better to open a sqlite database connection when my
application launches, use it all along the process life, and close it
when it exits... or if it's better to open/close the database connection
each time a method has to query/store data, thus leaving the databas
Gil Delavous <[EMAIL PROTECTED]>
wrote:
However, what happens when a thread calls a method from the main
thread, this one using its own sqlite connection?
The question doesn't make any sense to me, sorry. Methods don't belong
to threads. Any function in a program can, in principle, be executed
Hi,
I've read that it is better that each thread creates its own sqlite
connection, and that's what I'm doing.
However, what happens when a thread calls a method from the main thread,
this one using its own sqlite connection?
For example:
Main thread:
void main::query_something() {
// qu
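(The one-connection-per-thread advice, sketched in Python rather than the C++ of the question -- threading.local() stands in for whatever per-thread storage the application uses:

```python
import sqlite3
import threading

local = threading.local()
DB_PATH = ":memory:"  # placeholder path; each thread opens its own handle

def get_conn():
    # One connection per thread; a handle is never shared across threads,
    # no matter which object's method happens to be executing.
    if not hasattr(local, "conn"):
        local.conn = sqlite3.connect(DB_PATH)
    return local.conn

results = []

def worker():
    conn = get_conn()
    results.append(conn.execute("SELECT 1").fetchone()[0])

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # [1]
```

The point Igor makes below is exactly this: what matters is which thread executes the call, not which object the method belongs to.)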
> Hi
>
> I need to know how this works in detail:
> I have sqlite3 database file stored on SD/MMC FLASH card and i would to insert
> rows to some tables often. There will be triggers to clean old and obsolete
> data.
> How often it will write to database file? After every INSERT command, or it
> g
Hi
I need to know how this works in detail:
I have an sqlite3 database file stored on an SD/MMC FLASH card and I would like
to insert rows into some tables often. There will be triggers to clean old and
obsolete data.
How often will it write to the database file? After every INSERT command, or
does it go to some b
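(The short answer, as I understand SQLite's behavior: each transaction is synced to the file at COMMIT, and an INSERT outside an explicit transaction is its own transaction. So batching inserts inside BEGIN/COMMIT is the usual way to limit writes to flash media. A sketch:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we
# control transactions explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log (msg TEXT)")

# Outside a transaction every INSERT is its own commit (one sync each).
# Wrapping the batch means the file is written once per batch, which
# matters a lot on slow, wear-limited media like an SD/MMC card.
conn.execute("BEGIN")
for i in range(100):
    conn.execute("INSERT INTO log VALUES (?)", ("row %d" % i,))
conn.execute("COMMIT")
print(conn.execute("SELECT count(*) FROM log").fetchone()[0])  # 100
```
)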