Jonathon thejunk...@... writes:
Awesome. Thanks for the quick reply Deech :)
J
Awesome? You must be easily impressed/scared!
;-)
MikeW
___
sqlite-users mailing list
sqlite-users@sqlite.org
The import has a big limitation: it cannot import a file when a
field spans multiple lines. I don't know if the same applies to the
export...
Cheers,
Sylvain
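If the shell's .import chokes on fields that span lines, one workaround is to parse the CSV outside the shell: Python's csv module handles quoted fields containing embedded newlines. A minimal sketch, with an invented table and sample data:

```python
import csv
import sqlite3

# Workaround sketch: csv.reader copes with a quoted field that spans
# multiple lines, which .import (at least at the time) did not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER, body TEXT)")

data = 'id,body\n1,"first line\nsecond line"\n2,plain\n'
reader = csv.reader(data.splitlines(keepends=True))
next(reader)  # skip the header row
rows = [(int(r[0]), r[1]) for r in reader]
conn.executemany("INSERT INTO notes VALUES (?, ?)", rows)

print(conn.execute("SELECT body FROM notes WHERE id = 1").fetchone()[0])
# first line
# second line
```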
On Tue, Jan 6, 2009 at 11:08 AM, MikeW mw_p...@yahoo.co.uk wrote:
Jonathon thejunk...@... writes:
Awesome. Thanks for
On Thu, Jan 01, 2009 at 08:19:01PM -0500, D. Richard Hipp wrote:
FWIW, nested transactions (in the form of SAVEPOINTs) will appear in
the next SQLite release, which we hope to get out by mid-January.
Is that going to be 4.0.x then? I'm assuming there will need to be
incompatible file format
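For reference, SAVEPOINTs give a nested transaction scope on top of an ordinary transaction. A minimal sketch via Python's sqlite3 (the table and savepoint names are invented for illustration):

```python
import sqlite3

# Sketch of nested-transaction behavior with SAVEPOINTs.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")          # nested scope begins
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")        # undo only the inner work
conn.execute("COMMIT")                 # outer insert survives

print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```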
Hello!
In a message of Tuesday 06 January 2009 15:33:42, Sylvain Pointeau wrote:
The import has a big limitation: it cannot import a file when a
field spans multiple lines. I don't know if the same applies to the
export...
See the VirtualText extension from the SpatiaLite project.
Best
By searching the archives of this list, I was able to come up with
this syntax to identify duplicate records and place a single copy of
each duplicate into another table:
CREATE TABLE dup_killer (member_id INTEGER, date DATE);
INSERT INTO dup_killer (member_id, date)
  SELECT * FROM talks
  GROUP BY member_id, date HAVING count(*) > 1;
But, now that I have the copies in the dup_killer table, I have not
been able to discover an efficient way to go back to the
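The duplicate-finding query can be exercised end to end on a toy table. A small sketch with invented sample data, restoring the `>` in `count(*) > 1` that the archive appears to have eaten:

```python
import sqlite3

# Populate a toy talks table, then copy one representative of each
# duplicated (member_id, date) pair into dup_killer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE talks (member_id INTEGER, date DATE);
INSERT INTO talks VALUES (1, '2009-01-01'), (1, '2009-01-01'), (2, '2009-01-02');
CREATE TABLE dup_killer (member_id INTEGER, date DATE);
INSERT INTO dup_killer (member_id, date)
  SELECT member_id, date FROM talks
  GROUP BY member_id, date HAVING count(*) > 1;
""")
print(conn.execute("SELECT * FROM dup_killer").fetchall())
# [(1, '2009-01-01')]
```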
Hello!
In a message of Tuesday 06 January 2009 19:29:43, Craig Smith wrote:
By searching the archives of this list, I was able to come up with
this syntax to identify duplicate records and place a single copy of
each duplicate into another table:
There is simple way: dump your database
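The dump-and-reload idea can also be done in place: recreate the table with a UNIQUE constraint and let INSERT OR IGNORE drop the duplicate rows on the way in. A hedged sketch with invented names and data:

```python
import sqlite3

# Rebuild talks through a UNIQUE-constrained copy; INSERT OR IGNORE
# silently skips rows that would violate the constraint.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE talks (member_id INTEGER, date DATE);
INSERT INTO talks VALUES (1, '2009-01-01'), (1, '2009-01-01'), (2, '2009-01-02');
CREATE TABLE talks_clean (member_id INTEGER, date DATE,
                          UNIQUE (member_id, date));
INSERT OR IGNORE INTO talks_clean SELECT member_id, date FROM talks;
DROP TABLE talks;
ALTER TABLE talks_clean RENAME TO talks;
""")
print(conn.execute("SELECT count(*) FROM talks").fetchone())  # (2,)
```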
On Tue, 6 Jan 2009 08:29:43 -0800, Craig Smith
cr...@macscripter.net wrote in General Discussion of
SQLite Database sqlite-users@sqlite.org:
By searching the archives of this list, I was able to come up with
this syntax to identify duplicate records and place a single copy of
each duplicate
Craig Smith cr...@macscripter.net wrote:
By searching the archives of this list, I was able to come up with
this syntax to identify duplicate records and place a single copy of
each duplicate into another table:
CREATE TABLE dup_killer (member_id INTEGER, date DATE); INSERT INTO
dup_killer
Thanks for your reply.
That's a lot of files. Or did you mean rows?
Are you sure? There can be many other reasons.
There are a lot of files, so I don't know exactly why at this time,
but I think network latency can't be ruled out.
/Edward
On Wed, Jan 7, 2009 at 4:07 AM, Kees Nuyt
I think the question was about the structure of your data
An SQLite database is a file and can contain many tables; tables can contain
many rows.
Do you have 20 million sqlite databases?
This information can help people formulate an answer.
On Tue, Jan 6, 2009 at 6:14 PM, Edward J. Yoon
Do you have 20 million sqlite databases?
Yes.
On Wed, Jan 7, 2009 at 12:36 PM, Jim Dodgen j...@dodgen.us wrote:
I think the question was about the structure of your data
An SQLite database is a file and can contain many tables; tables can contain
many rows.
Do you have 20 million sqlite
On 1/6/09, Edward J. Yoon edwardy...@apache.org wrote:
Do you have 20 million sqlite databases?
Yes.
Since all these databases are just files, you should stuff them into a
Postgres database, then write an application that extracts the
specific row from the pg database with 20 mil rows giving
Thanks,
In more detail, SQLite is used for the user-based applications (20 million is
the number of app users), and MySQL is used to address each user's location
(file path on the NAS).
On Wed, Jan 7, 2009 at 1:31 PM, P Kishor punk.k...@gmail.com wrote:
On 1/6/09, Edward J. Yoon edwardy...@apache.org wrote:
Do
Hi.
I have a database with a table having a text column that averages 7.1
characters per row; there are 11 million rows in this table. In one
version of the source file that is used to populate this table, the file
is laid out such that the column data is populated in sorted order; in
On Jan 6, 2009, at 6:14 PM, sqlite-users-requ...@sqlite.org wrote:
delete from talks where exists
(select 1 from talks t2
where talks.member_id = t2.member_id and talks.date = t2.date and
talks.rowid > t2.rowid);
Igor, this worked fabulously, thank you very much. I also tried your
other
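For completeness, the rowid-based DELETE can be verified on a toy table (with the `>` restored in `talks.rowid > t2.rowid`, which the archive appears to have dropped). Sample data is invented:

```python
import sqlite3

# Each row is deleted if an identical row with a smaller rowid exists,
# so exactly one copy of every duplicate group survives.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE talks (member_id INTEGER, date DATE);
INSERT INTO talks VALUES (1, '2009-01-01'), (1, '2009-01-01'), (2, '2009-01-02');
DELETE FROM talks WHERE EXISTS
  (SELECT 1 FROM talks t2
   WHERE talks.member_id = t2.member_id AND talks.date = t2.date
     AND talks.rowid > t2.rowid);
""")
print(sorted(conn.execute("SELECT * FROM talks").fetchall()))
# [(1, '2009-01-01'), (2, '2009-01-02')]
```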