Back in the day I used Oracle OCI and did "array" inserts, where you
would load an array for each column to be inserted, bind the arrays to
the insert statement, and then do one big insert.
It was quite a fast way to load data.
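The same idea maps onto other APIs. Here is a minimal sketch using Python's stdlib sqlite3 module (table and column names are made up): one array per column, OCI-style, zipped into rows and inserted in a single batched call inside one transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

# one array per column, as in the OCI array-bind style
col_a = list(range(1000))
col_b = [f"name{i}" for i in range(1000)]

with conn:  # one transaction around the whole batch, like the single big insert
    conn.executemany("INSERT INTO t (a, b) VALUES (?, ?)", zip(col_a, col_b))

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

Wrapping the whole batch in one transaction is what makes this fast in SQLite; a commit per row would dominate the runtime.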
Joe Wilson wrote:
Some people on the list have noted that inserting pre
for a query like
select * from a join b on a.x = b.z
Does anyone know how to get all the column names of the fields that would be
returned from the query?
I am using the DBD::SQLite Perl module.
Thanks
Jim
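Most SQLite bindings expose the result column names on the prepared statement before any row is fetched (in Perl/DBI this lives on the statement handle). A sketch of the idea with Python's stdlib sqlite3, where tables a and b are invented here to match the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER, p TEXT)")
conn.execute("CREATE TABLE b (z INTEGER, q TEXT)")

cur = conn.execute("SELECT * FROM a JOIN b ON a.x = b.z")
# cursor.description is populated as soon as the SELECT executes,
# even if the result set is empty
names = [d[0] for d in cur.description]
print(names)  # ['x', 'p', 'z', 'q']
```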
On 3/22/06, Teg <[EMAIL PROTECTED]> wrote:
> Hello Jay,
>
> Best way I've found to get great performance out of strings and
> vectors is to re-use the strings and vectors. String creation speed is
> completely dependent on allocation speed so, by re-using the strings,
> you only grow the ones that
Hello Jay,
The best way I've found to get great performance out of strings and
vectors is to re-use them. String creation speed is completely
dependent on allocation speed, so by re-using the strings you only grow
the ones that aren't already big enough to hold the new
string data
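The same principle applies outside C++. A minimal Python sketch (the buffer size and data source are invented): one preallocated buffer is filled in place on every iteration, rather than allocating a fresh string per chunk.

```python
import io

src = io.BytesIO(b"x" * 10_000)  # stand-in for any data source
buf = bytearray(1024)            # allocated once, reused for every chunk
total = 0
while True:
    n = src.readinto(buf)        # fills the existing buffer in place
    if not n:                    # readinto returns 0 at end of stream
        break
    total += n
print(total)  # 10000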
Hi Ian,
This one has been interesting! I'm trying to reproduce the problem with
several different databases; so far, no luck. I use SQLite 2.8.x (the project
is already ongoing) and will test when I have time. I will post results if
I find anything of value.
At 06:41 PM 3/21/06 +, you wrote:
>
>O
On 3/22/06, Micha Bieber <[EMAIL PROTECTED]> wrote:
> Finally, I've learned my lesson. Since it might be of some interest to
> beginners:
>
> 1) Use the associated sqlite3_bind_* variants for your data.
> I made the mistake of converting back and forth to strings beforehand.
>
> 2) It broke my
> > My application is geared towards users who want to find a specific name
> > in a list of names, and then want to have the possibility to scroll
> > backwards or forwards. For example, if I search for "Sprenkle" I want
> > to show the user a window with "Sprenkle" in the middle, preceded by the
When you issue the VACUUM statement, the OS ends up loading a lot of the
data from the database into its disk cache. Since you're running the select
right afterwards, SQLite ends up loading the pages from the underlying
OS cache, so yes, it's going to *appear* faster than if you had issued an
au
On 2006-03-02, at 13:35, [EMAIL PROTECTED] wrote:
The VACUUM command does something very much like this:
sqlite3 olddb .dump | sqlite3 newdb; mv newdb olddb
I say "much like" the above because there are some
important differences. The VACUUM command transfers
the data from the old a
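The space-reclaiming effect is easy to observe from script code. A small sketch with Python's stdlib sqlite3 (file name and row counts are arbitrary): fill a table, delete everything, and compare the file size before and after VACUUM.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (v TEXT)")
with conn:
    # ~1 MB of payload
    conn.executemany("INSERT INTO t VALUES (?)",
                     [("x" * 500,) for _ in range(2000)])

conn.execute("DELETE FROM t")  # frees pages inside the file, not to the OS
conn.commit()
before = os.path.getsize(path)

conn.execute("VACUUM")         # rebuilds the file, dropping the free pages
after = os.path.getsize(path)
conn.close()
print(after < before)  # True
```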
"Iulian Popescu" <[EMAIL PROTECTED]> writes:
> I checked the versions and indeed the one I'm using on Windows is 3.0.8,
> whereas the one on Linux is 3.1.2. That said, as far as I understand,
> and please correct me if I'm wrong, the two PRAGMAs are just commands you
> run to modify the o
Hi Derrell,
I checked the versions and indeed the one I'm using on Windows is 3.0.8,
whereas the one on Linux is 3.1.2. That said, as far as I understand,
and please correct me if I'm wrong, the two PRAGMAs are just commands you
run to modify the operation of the SQLite library. I haven't
JP wrote:
Jay Sprenkle wrote:
Is there a way I can scroll through a particular index? For example:
1. Scroll forward/backward on a given set of records
2. Start at position X
3. Start at a record that matches a criterion
SQL is optimized to manipulate a set of records. It's much faster to
ex
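One set-based way to build such a scrolling window is keyset pagination against the index: one query fetches the rows at-or-after the target in index order, a second fetches the rows before it by scanning the same index in reverse. A sketch with Python's stdlib sqlite3 (the table mirrors the clients example elsewhere in the thread; names and window sizes are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (custid INTEGER PRIMARY KEY, lastname TEXT)")
conn.execute("CREATE INDEX cidx ON clients (lastname)")
conn.executemany(
    "INSERT INTO clients (lastname) VALUES (?)",
    [(n,) for n in ["Adams", "Baker", "Lee", "Sprenkle", "Wilson", "Young"]],
)

target = "Sprenkle"
# rows at and after the target, in index order
forward = [r[0] for r in conn.execute(
    "SELECT lastname FROM clients WHERE lastname >= ?"
    " ORDER BY lastname LIMIT 3", (target,))]
# rows before the target, walked backwards through the same index
backward = [r[0] for r in conn.execute(
    "SELECT lastname FROM clients WHERE lastname < ?"
    " ORDER BY lastname DESC LIMIT 2", (target,))]

window = list(reversed(backward)) + forward
print(window)  # ['Baker', 'Lee', 'Sprenkle', 'Wilson', 'Young']
```

Scrolling forward or backward from there is just re-running one of the two queries with the edge of the current window as the new target.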
Finally, I've learned my lesson. Since it might be of some interest to
beginners:
1) Use the associated sqlite3_bind_* variants for your data.
I made the mistake of converting back and forth to strings beforehand.
2) It broke my program design a bit, but setting up large
STL-vector-based C++
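Point 1 in miniature: binding a value with its native type avoids the precision loss (and parsing cost) of a string detour. A sketch in Python's stdlib sqlite3, whose `?` placeholder plays the role of the matching sqlite3_bind_* call; the six-digit format below is an invented example of a lossy conversion.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (v REAL)")
x = 0.1 + 0.2  # 0.30000000000000004: has no short decimal form

# native bind: the double is stored as-is
conn.execute("INSERT INTO m VALUES (?)", (x,))
# string detour, as in the mistake above: %.6f silently rounds the value
conn.execute("INSERT INTO m VALUES (%s)" % format(x, ".6f"))

native, via_text = [r[0] for r in
                    conn.execute("SELECT v FROM m ORDER BY rowid")]
print(native == x, via_text == x)  # True False
```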
"Iulian Popescu" <[EMAIL PROTECTED]> writes:
> I'm doing an application port from Windows to Linux and one of the
> problems I'm facing is when executing the following statement through a call
> to sqlite3_exec():
>
> SELECT mytable.'mycolumn' FROM table
>
> The registered callback function 4th ar
Hello,
I'm doing an application port from Windows to Linux and one of the
problems I'm facing is when executing the following statement through a call
to sqlite3_exec():
SELECT mytable.'mycolumn' FROM table
The registered callback function's 4th argument (a char**), denoting the column
> > This may take a while, about 20 hours maybe. The partition has approx
> > 10GB, I can't afford more. Let's hope that this is sufficient.
>
> 20 hours seems rather long. Even if you have to worry about uniqueness
> constraints, there are ways to deal with that that should be much faster
> (deal
Ulrik Petersen wrote:
Hi JP,
JP wrote:
Anyway, maybe a separate topic: I tried to create a "snapshot" window of
the above using plain SQL, but it doesn't seem to work on SQLite 3.3.4:
CREATE TABLE clients (custid integer primary key, lastname varchar(50));
CREATE INDEX cidx ON clients (lastname);
(in
Hi JP,
JP wrote:
Anyway, maybe a separate topic: I tried to create a "snapshot" window of
the above using plain SQL, but it doesn't seem to work on SQLite 3.3.4:
CREATE TABLE clients (custid integer primary key, lastname varchar(50));
CREATE INDEX cidx ON clients (lastname);
(insert 10,000 records her
Jay Sprenkle wrote:
Is there a way I can scroll through a particular index? For example:
1. Scroll forward/backward on a given set of records
2. Start at position X
3. Start at a record that matches a criterion
SQL is optimized to manipulate a set of records. It's much faster to execute
"update
On Thu, Mar 16, 2006 at 09:53:27PM +0100, Daniel Franke wrote:
>
> > That would be an excellent question to add to the FAQ:
> > "How do I estimate the resource requirements for a database?"
>
> I spent some time creating 3GB of sample data (just zeros, about half the
> size of the actual data s
core source code supports it. The provider I wrote that Brad mentions
is
for VS2005 and .NET 2.0, but there does exist a .NET 1.1 provider from
Finisar: http://sourceforge.net/projects/adodotnetsqlite
I can't vouch for its performance since I've never actually used it.
I've used the Finisar on
Hello Robert,
On Wed, March 22, 2006 15:32, Robert Simpson wrote:
...
> I can't vouch for its performance since I've never actually used it.
We have been using SQLite under WinCE since version 2.1.7, with excellent
performance. We have been able to handle databases with more than 10
records and up t
> -Original Message-
> From: Monkey Code [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 22, 2006 12:24 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Has anyone used sqlite for Pocket PC development?
>
> Hi,
>
> I am planning to use sqlite with VS .Net 2003 Smart device C#
> ap
> Is there a way I can scroll through a particular index? For example:
>
> 1. Scroll forward/backward on a given set of records
> 2. Start at position X
> 3. Start at a record that matches a criterion
SQL is optimized to manipulate a set of records. It's much faster to execute
"update mytable set myc
I am planning to use sqlite with VS .Net 2003 Smart device C#
application. Just wondering if anyone has blazed down this path
before and has any insights to share.
The big thing to remember is that you are not programming for a desktop
device, nor even a laptop. If you can limit the use of th
What you've described here is column partitioning. Most databases implement
row partitioning, where the rows in the table are split between multiple
hidden sub-tables based on the value(s) in one or more columns within the row.
The most common application of this is separating date-based
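A toy illustration of date-based row partitioning, using ordinary SQLite tables plus a view in place of the hidden sub-tables a real engine would manage for you (all names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# one sub-table per partition key (the year)
for year in (2005, 2006):
    conn.execute(f"CREATE TABLE events_{year} (day TEXT, msg TEXT)")
# a view reunites the partitions for querying
conn.execute(
    "CREATE VIEW events AS "
    "SELECT * FROM events_2005 UNION ALL SELECT * FROM events_2006"
)

def insert_event(day, msg):
    # the partitioning rule: route each row on the year prefix of its date
    conn.execute(f"INSERT INTO events_{day[:4]} VALUES (?, ?)", (day, msg))

insert_event("2005-07-01", "old")
insert_event("2006-03-22", "new")

total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
per_part = conn.execute("SELECT COUNT(*) FROM events_2006").fetchone()[0]
print(total, per_part)  # 2 1
```

The payoff in a real engine is that a query filtered on the partition column only touches the matching sub-table.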