On 08/19/2014 11:00 AM, sqlite-users-requ...@sqlite.org wrote:
10. Re: Long time to drop tables. (Jonathan Moules)
12. Re: Long time to drop tables. (Simon Slavin)
--
Long time for me too!
This was run on an 11.7GB
Joe Fisher wrote on Tuesday, August 19, 2014 5:11 PM
> I use the temp table because every CSV file has a header with the
> column names.
Can you just import the files, header row and all, into your destination table
and delete the 2000 header rows at the end?
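If the data rows never repeat the header text, that cleanup can be a single statement at the end. A minimal sketch, assuming a hypothetical destination table named observations whose first column is sample_id:

DELETE FROM observations WHERE sample_id = 'sample_id';   -- removes the ~2000 imported header rows in one pass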
On Aug 19, 2014, at 11:11 PM, joe.fis...@tanguaylab.com
wrote:
> Is there something better I can do to improve this process?
PRAGMA journal_mode = off;
http://www.sqlite.org/pragma.html#pragma_journal_mode
> Perhaps one transaction? Perhaps turn something off?
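Both of those ideas can be sketched together; a rough outline only, assuming the sqlite3 command-line shell (for the .mode and .import dot-commands) and the same hypothetical observations / sample_id names as above:

PRAGMA journal_mode = OFF;   -- no rollback journal: faster, but unsafe if the import is interrupted
.mode csv
BEGIN;                       -- one transaction around all of the imports
.import file_0001.csv observations
.import file_0002.csv observations
-- ... one .import per CSV file ...
DELETE FROM observations WHERE sample_id = 'sample_id';   -- drop the imported header rows
COMMIT;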
On 19 Aug 2014, at 10:13pm, John Drescher wrote:
>> I'm running the following script on more than 1000 2MB CSV files and I'd
>> like to speed it up if possible. I noticed that a 'WAL' is running. Is there
>> something better I can do to improve this process?
>
> Use an SSD.
Directly after opening your db:
PRAGMA synchronous = OFF;
PRAGMA journal_mode = WAL;
It should fly then, at the cost of risking db corruption in case of a
crash.
Ben(jamin Stadin)
On 19.08.14 at 23:11, "joe.fis...@tanguaylab.com" wrote:
> I'm running the following script on more than 1000 2MB CSV files and I'd
> like to speed it up if possible. I noticed that a 'WAL' is running. Is there
> something better I can do to improve this process?
Use an SSD.
John
I'm running the following script on more than 1000 2MB CSV files and I'd
like to speed it up if possible. I noticed that a 'WAL' is running. Is
there something better I can do to improve this process? Perhaps one
transaction? Perhaps turn something off? It took about 1.5 hours to run.
I use the temp table because every CSV file has a header with the column names.
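Reading between the lines, each file presumably goes through a staging step something like the following; a rough sketch only, with hypothetical names (temp table csv_stage, destination table observations, first column sample_id), assuming the sqlite3 shell's .import:

CREATE TEMP TABLE csv_stage (sample_id TEXT, value TEXT);
.mode csv
.import data_0001.csv csv_stage
DELETE FROM csv_stage WHERE sample_id = 'sample_id';   -- drop the imported header row
INSERT INTO observations SELECT * FROM csv_stage;
DROP TABLE csv_stage;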
On Tue, Aug 19, 2014 at 10:38 AM, Richard Hipp wrote:
>
>
>
> On Tue, Aug 19, 2014 at 10:34 AM, Levente Kovacs
> wrote:
>
>> I started to worry about this issue, because I am in the middle of developing an
>> application, and yesterday it started to work,
On Tue, Aug 19, 2014 at 10:34 AM, Levente Kovacs
wrote:
> I started to worry about this issue, because I am in the middle of developing an
> application, and yesterday it started to work, and I only SELECT
> a few times, and it causes noticeable disk access. I'm still
I started to worry about this issue, because I am in the middle of developing an
application, and yesterday it started to work, and I only SELECT
a few times, and it causes noticeable disk access. I'm still on a magnetic HDD,
but the application will be running on an SSD or flash drive.
Let me start
On 19 Aug 2014, at 11:41am, Jonathan Moules
wrote:
> There are some foreign keys.
Put all your DROP TABLE commands into one big transaction.
Make sure when you DROP your TABLEs that you are dropping them in an order
which won't trigger any foreign key violations.
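A minimal sketch of that, with hypothetical table names in which measurements declares a FOREIGN KEY referencing samples, so the child table is dropped first:

BEGIN;
DROP TABLE measurements;   -- child table: references samples
DROP TABLE samples;        -- parent table: safe once nothing references it
COMMIT;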
On 19 Aug 2014, at 8:25am, Paul Dillon wrote:
> I was
> using count(first_field) instead of selecting all the fields; I can't imagine
> that could be the problem.
There's an optimization in SQLite which means you can do COUNT(*) and it will
fetch no data at all. It's
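A small illustration with a hypothetical table name: COUNT(column) has to examine the column so it can skip NULLs, while COUNT(*) only needs the number of rows.

SELECT COUNT(*) FROM readings;            -- row count only; no column values need to be read
SELECT COUNT(first_field) FROM readings;  -- must read first_field to leave out NULLs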
Hi,
Thanks for the replies.
The entire thing is happening on a single desktop machine. The database is
on a standard HDD. Using SQLiteStudio; SQLite version 3.7.16.1.
There are some foreign keys.
Autovacuum is off, I think (I don't know how to check, but the table size
never automatically shrinks).
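For what it's worth, the current setting can be read back with a pragma:

PRAGMA auto_vacuum;   -- returns 0 = NONE (off), 1 = FULL, 2 = INCREMENTAL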
Wycliff Mabwai wrote:
> syntax near "Where".
>
> SQLITEcmd2.CommandText = "UPDATE RecordGrid SET
> LineNumber='" & reade20.GetInt32(11) & "',self_empty_info_gender_PRect= '"
> & IIf(reade20.IsDBNull(22), String.Empty, reade20.GetString(22)) & "," &
> IIf(reade20.IsDBNull(23),
Jim Callahan jim.callahan.orlando at gmail.com wrote:
> 1. My condolences; with those dimensions you are heading for "big
> data"/Hadoop land.
Heheh, thanks, I am so determined not to go there! SQLite has been
such a nice, simple database to use; I will do anything to avoid all
the complexity of
Thanks to all who replied to my post, and sorry for the delayed response.
I had trouble joining the list and had not realised that my post went
through until I tried to join the list again.
Simon Slavin slavins at bigfraud.org wrote:
> What you might find increases your speed is to make sure