Hello!
I am new to the list, and new to SQLite, so hopefully I can get a good
start.
I've just started using SQL, and this program was recommended to me by a
co-worker... so I tried the sqlite3 test.db command to start up my new
DB, but when I hit enter, it returns with a ...> prompt, so when I enter…
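The ...> prompt is the sqlite3 shell's continuation prompt: it appears whenever the statement typed so far has not been terminated with a semicolon. A minimal reconstructed session (the table is illustrative, not from the original message):

$ sqlite3 test.db
sqlite> create table t(x)
   ...> ;
sqlite> .quit

Typing the missing ; on the continuation line completes and executes the statement.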
Hi pepone. onrez,
I create a separate table for each group in order to avoid redundant
data in the table. Your solution is interesting, but I think it
allows redundancy in "GroupUsers". For example:
+--------+---------+
| userId | groupId |
+--------+---------+
| foo    | groupA  | <--- !
| bar    | …
You need a join to display the group names:
SELECT * FROM GroupUsers
INNER JOIN Groups ON GroupUsers.groupId = Groups.groupId
WHERE GroupUsers.userId = 'user';
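The duplicates can also be ruled out in the schema itself; a sketch, with the column names taken from the example above:

CREATE TABLE GroupUsers(
  userId  TEXT NOT NULL,
  groupId TEXT NOT NULL,
  PRIMARY KEY (userId, groupId)  -- the same user cannot be put in the same group twice
);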
On 11/18/06, pepone. onrez <[EMAIL PROTECTED]> wrote:
> Hi Micro
> Why do you create a separate table for each group?
> Usually the groups are all in a single table…
Hi Micro
Why do you create a separate table for each group?
Usually the groups are all in a single table.
I think a better approach is one table for users, another for groups, and a
third table for the relation between users and groups; you can view all
the groups that a user belongs to with a
select * FROM…
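A sketch of that three-table layout and the membership query (the names are illustrative; the original message is cut off):

CREATE TABLE users (userId  TEXT PRIMARY KEY);
CREATE TABLE groups(groupId TEXT PRIMARY KEY);
CREATE TABLE user_groups(
  userId  TEXT NOT NULL REFERENCES users(userId),
  groupId TEXT NOT NULL REFERENCES groups(groupId),
  PRIMARY KEY (userId, groupId)
);

-- all the groups a given user belongs to
SELECT g.*
FROM groups AS g
INNER JOIN user_groups AS ug ON ug.groupId = g.groupId
WHERE ug.userId = 'foo';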
Hi,
I'm a novice with databases, and I have a question about how the SQL
language works: I have a system of user groups, where each group is a
table in sqlite_master (each group's table contains the UIDs of its
members). But I don't know how to get the groups of a user in a
single query.
I tried…
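With one table per group there is no generic single query; each group table has to be named explicitly, for example with a UNION ALL (a sketch, assuming two group tables groupA and groupB that each hold a uid column):

SELECT 'groupA' AS groupName FROM groupA WHERE uid = 42
UNION ALL
SELECT 'groupB' FROM groupB WHERE uid = 42;

This is why the single users/groups/membership layout suggested elsewhere in the thread makes the query so much simpler.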
The way the undo-redo is described in the wiki involves triggers that, for
each table, insert a record of the change into another table which logs the
changes. This has a price in performance. It also complicates things
when triggers are already used for other purposes.
So I wonder if journals…
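For reference, the wiki pattern being described comes down to one logging trigger per table and operation, roughly like this (a sketch, assuming a table t1(a, b); quote() formats a value as an SQL literal):

CREATE TABLE undolog(seq INTEGER PRIMARY KEY, sql TEXT);

-- log an undo statement for every deleted row
CREATE TRIGGER t1_undo_delete AFTER DELETE ON t1 BEGIN
  INSERT INTO undolog(sql) VALUES(
    'INSERT INTO t1(a,b) VALUES(' || quote(old.a) || ',' || quote(old.b) || ');'
  );
END;

Every change then costs an extra INSERT into undolog, which is the performance price mentioned above.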
"Isaac Raway" <[EMAIL PROTECTED]> wrote:
>
> At any rate, anyone have experience syncing SQLite DB files?
>
I have done this on two separate projects.
In the first case, the databases to be synced all had a fixed
set of records (a few hundred thousand rows). New rows were
never added or deleted…
$ ./sqlite3.exe v.db vacuum
ATTACH 'C:\TMP\etilqs_SOVEJE7Rni84Zzy' AS vacuum_db;
PRAGMA vacuum_db.synchronous=OFF
BEGIN EXCLUSIVE;
CREATE TABLE vacuum_db.t1(a, b, primary key(b, a))
CREATE TABLE vacuum_db.t2(c, d)
CREATE INDEX vacuum_db.t2i on t2(d, c)
INSERT INTO vacuum_db.'t1' SELECT * FROM 't1';
I am looking at a design that will require syncing a disconnected SQLite DB
file on clients' machines to a central server. The version of the DB on the
server will also be modified periodically, so there is a chance that new
records will be created in either, and also updated. Conflicts therefore are…
Seth Falcon <[EMAIL PROTECTED]> wrote:
> Are there any DB tricks you can point me to for dealing with low
> cardinality columns? If I need to access rows as quickly as possible
> according to a low-cardinality column value (e.g. allele = 0/1, strange
> = 0/1), would it make more sense to split these into…
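To make the two layouts being compared concrete (a sketch; the table and column names are hypothetical):

-- one table, with a composite index covering the low-cardinality flags
CREATE TABLE snp(id INTEGER PRIMARY KEY, allele INTEGER, strange INTEGER, val TEXT);
CREATE INDEX snp_flags ON snp(allele, strange);

-- versus splitting the rows into one table per flag value
CREATE TABLE snp_allele0(id INTEGER PRIMARY KEY, strange INTEGER, val TEXT);
CREATE TABLE snp_allele1(id INTEGER PRIMARY KEY, strange INTEGER, val TEXT);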
Add a print statement to vacuum.c to see what SQL statements
are actually executed during VACUUM:
diff -u -3 -p -r1.64 vacuum.c
--- src/vacuum.c    10 Oct 2006 13:07:36 -0000    1.64
+++ src/vacuum.c    18 Nov 2006 17:18:07 -0000
@@ -26,6 +26,7 @@
 */
 static int execSql(sqlite3 *db, c…
On 11/18/06, P Kishor <[EMAIL PROTECTED]> wrote:
> didn't try any of your tricks, but can confirm that VACUUM is very
> slow on a similar db I have...
Since you obviously have some CPU cycles and RAM to spare, according
to my experience at least, you'll benefit greatly by doing it
yourself instead…
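"Doing it yourself" here presumably means rebuilding the data into a fresh database instead of running VACUUM; a sketch, with illustrative file and table names:

ATTACH DATABASE 'fresh.db' AS fresh;
CREATE TABLE fresh.table1 AS SELECT * FROM table1;
-- CREATE TABLE ... AS SELECT copies no indexes or primary keys;
-- recreate them on fresh.table1, then swap the files.

Building the indexes once after the bulk copy, rather than maintaining them row by row, is presumably where the time is saved.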
> > Upon loading a saved file into the application the database on filesystem is
> > loaded into an ADO.Net DataSet. This is used by the application until the user
> > saves to disk again when all of the changes to the DataSet are saved back to
> > the database on disk.
> >
>
> In all of the des…
didn't try any of your tricks, but can confirm that VACUUM is very
slow on a similar db I have...
table1 -- 210k rows x 6 cols, 4 indexes, 1 pk
table2 -- 36k rows x 6 cols, 4 indexes, 1 pk
table3 -- 16k rows x 6 cols, 4 indexes, 1 pk
table4 -- 5M rows x 4 cols, 2 indexes, 1 pk
total size on file…
Nemanja Corlija wrote:
> I have a db with one table that has a text primary key and 16 text
> columns in total.
> After importing data from a CSV file, the db had 5M rows and the file
> size was 833MB. After some big DELETEs, the db had around 3M rows, and
> 500MB after "VACUUMing".
> Running VACUUM for more than an hour fil…
I have a db with one table that has a text primary key and 16 text
columns in total.
After importing data from a CSV file, the db had 5M rows and the file
size was 833MB. After some big DELETEs, the db had around 3M rows, and
500MB after "VACUUMing".
Running VACUUM for more than an hour filled the new db with ~300MB w…
[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
> > Hi,
> > I would like a bit of advice before starting to make changes to my
> > application.
> >
> > I've written a program in C# for personnel departments and at present all of
> > the data is stored in memory until the user saves and then i…