[sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread Jens Miltner
Hello,

I just upgraded to sqlite 3.6.10 and keep getting a segmentation  
violation when running the full tests on Mac OS X:
The last test completed is consistently thread001.1.3.

We're using a custom Xcode build of sqlite, so there's a chance that  
it has to do with our build settings. Unfortunately, I did not succeed  
in building the testfixture using the sqlite Makefile, so I can't  
really compare with the "official" build :(
Anyway, here are the build flags used to compile the source when building
the testfixture in our custom project:

/Developer/usr/bin/gcc-4.0 -x c -arch i386 -fmessage-length=0 -pipe
  -Wno-trigraphs -fpascal-strings -fasm-blocks -O2 -Wunused-label
  -DSQLITE_OS_UNIX=1 -DSQLITE_OMIT_CURSOR -DSQLITE_THREADSAFE=1
  -DHAVE_USLEEP=1 -DSQLITE_THREAD_OVERRIDE_LOCK=-1 -DSQLITE_TEMP_STORE=1
  -DSQLITE_MAX_SQL_LENGTH=1 -DSQLITE_OMIT_MEMORY_ALLOCATION=1
  -D__MACOS__=1 -DFD_SETSIZE=8192 -D_NONSTD_SOURCE=1 -DSQLITE_TEST=1
  -DTCLSH=2 -DSQLITE_CRASH_TEST=1 -DSQLITE_SERVER=1 -DSQLITE_PRIVATE=
  -DSQLITE_CORE -DTEMP_STORE=1 -isysroot /Developer/SDKs/MacOSX10.5.sdk
  -fvisibility=hidden -mmacosx-version-min=10.4 -gdwarf-2

(FWIW, we're building everything using the amalgamation file.)

Obviously, I'm a little concerned due to the crash in the test suite.

If I'm missing some vital build settings here, I'd appreciate it if
someone could point this out to me.
Has anybody else been running the test suite on Mac OS X? Are you
seeing similar results, or does it work for you?
Are there any instructions on how to properly build the testfixture using
the SQLite-provided Makefile?

Thanks,
-jens



Re: [sqlite] request to become co-maintainer of DBD::SQLite

2009-01-19 Thread Stefan Evert

Dear Duncan,

thanks for taking on this job!  I have recently started using SQLite  
quite heavily from Perl scripts -- it is astonishingly efficient, even  
for simple queries on a 70 GB database (Google's Web 1T 5-gram  
database, in case someone's curious) with Perl callback functions --  
so I'd be more than happy to see an up-to-date version of DBD::SQLite.

Since I'm lazy enough to rely on OS-provided SQLite installations on  
various computers, I'm using at least three different old versions of  
SQLite in parallel, DBD::SQLite being the oldest of all ... (no  
compatibility problems at all, though, so kudos to all SQLite  
developers!).

>> I have been stuck back at 3.4 for various issues.
>>
>> I do Perl and C and offer some help.

Same here.  I feel reasonably at home both in C and Perl, and I've  
written some simple XS code.  I don't have any experience with DBI,  
which seems to have its own method of compiling C extensions for DBD  
modules (from a quick look at the DBD::SQLite sources).

Just let us know how/whether we can help you!




Best regards,
Stefan Evert

[ stefan.ev...@uos.de | http://purl.org/stefan.evert ]





[sqlite] fts3 and ranking

2009-01-19 Thread Torsten Curdt
I've been searching and reading quite a bit in the archives:

So far I found the proposal of a "rank" column

 http://www.nabble.com/Ranking-in-fts.-td11034641.html

Also Scott mentioned the internal "dump_terms" and "dump_doclist" functions

 http://www.nabble.com/FTS-statistics-and-stemming-td18298526.html

But none of this seems to be a definitive answer yet.



So, is there a way to rank FTS3 results in SQLite? Or is someone
working on this?


cheers
--
Torsten


Re: [sqlite] How to use BEGIN & COMMIT in my C program?

2009-01-19 Thread MikeW
Igor Tandetnik  writes:

> 
> "Pramoda M. A" 
> wrote in message
> news:f7846b8f3c78c049b6a1dff861f6c16f031cd...@...
> > How to use BEGIN and COMMIT in my C program?
> 
> They are statements. You execute them as you would any other statement, 
> like INSERT or UPDATE.
> 
> Igor Tandetnik 

Or more specifically, *SQL* statements.

MikeW
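
To make that concrete, here is a minimal sketch (not from the original
thread) of running BEGIN and COMMIT through sqlite3_exec(); the database
and table names are placeholders.

#include <stdio.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db;
  char *zErr = 0;

  if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;

  /* BEGIN and COMMIT are ordinary SQL statements: execute them the same
  ** way as any INSERT or UPDATE. */
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);
  if( sqlite3_exec(db, "COMMIT", 0, 0, &zErr)!=SQLITE_OK ){
    fprintf(stderr, "commit failed: %s\n", zErr);
    sqlite3_free(zErr);
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);   /* undo on failure */
  }
  sqlite3_close(db);
  return 0;
}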





Re: [sqlite] confusing with how to to this in sqlite

2009-01-19 Thread Rachmat Febfauza
I later noticed that this query creates hundreds of MB of temporary files
when I execute it.
Why does SQLite make such an enormous temporary table? My table is only 9 MB
and has 12000 rows.

When I compare with MySQL again, it does not create such a large temporary table.



- Original Message 
From: "Griggs, Donald" 
To: General Discussion of SQLite Database 
Sent: Tuesday, December 30, 2008 2:52:11 AM
Subject: Re: [sqlite] confusing with how to to this in sqlite



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Rachmat Febfauza
Sent: Sunday, December 28, 2008 9:13 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] confusing with how to to this in sqlite

Thanks, Simon, for the explanation.

After the holiday I worked on optimizing my query. The awal1 table actually
contains 12000 rows, and so does akhir1. How can I improve performance?

I added indexes on tables awal1 and akhir1 with the following syntax:

create index awal1i1 on awal1(Code,Category,Product,Location,"Begin");
create index akhir1i1 on akhir1(Code,Category,Product,Location,"End");

Is this CREATE INDEX syntax right? Or must I give each column its own
index, e.g. create index awal1i1 on awal1(Code); create index awal1i2 on
awal1(Product); etc.?

I also want to know how to improve the performance of my query. Any hints?

One more question: is SQLite suitable for large database files? My
application may grow up to a 1 GB database file.

Thanks again

================================================================

Regarding syntax:
If you don't get an error, the syntax is acceptable. ;-)

Sqlite *does* support compound indices.  However:
   -- You may want to use "EXPLAIN QUERY PLAN" as a prefix to your
SELECT (just running it as a test) to ensure that your index is actually
used.  (A sketch of doing this from C appears after this list.)
   -- You can quickly experiment with using a simple index on "BEGIN" or
"PRODUCT" instead and measure the times.
   -- As you measure times, be aware of possible "caching effects" --
i.e. the first run may be slower than subsequent runs of the same query
on the same tables.
   -- Make sure you see the link on performance, below.
   -- Make sure you group many INSERTs within a single TRANSACTION
(if appropriate to your application).  This can make a dramatic
difference.
   -- You'll want to be familiar with the PRAGMAs that can affect
performance: http://www.sqlite.org/pragma.html   (but note that some of
these can be used to trade data safety for performance -- make sure
you're making an informed choice)
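
As mentioned in the first point above, here is a minimal sketch (mine, not
Donald's) of running EXPLAIN QUERY PLAN from C; the query, file name, and
expected index name are only illustrative.

#include <stdio.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  const char *zSql =
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM awal1 WHERE Code='x' AND Category='y'";

  if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;
  if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      int i, n = sqlite3_column_count(pStmt);
      for(i=0; i<n; i++){
        /* Look for something like "TABLE awal1 WITH INDEX awal1i1". */
        printf("%s ", (const char*)sqlite3_column_text(pStmt, i));
      }
      printf("\n");
    }
    sqlite3_finalize(pStmt);
  }
  sqlite3_close(db);
  return 0;
}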


Regarding:
" is sqlite suitable for large database file? coz my apps may grow
up to 1 giga database file."

Have you read http://www.sqlite.org/whentouse.html
and http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations ?  If
not, you'll want to.

Many folks successfully run sqlite on multi-gigabyte databases, BUT
-- In those cases, the simplicity and small footprint of sqlite may
be less compelling.
-- Are there any features in http://www.sqlite.org/omitted.html
whose absence you will grieve?  You might go over the detailed
feature lists for postgres, MySQL, etc. with the same question in mind.
-- How much concurrent access do you anticipate?
-- Will you control the queries (so as to optimize them and the
indices), or will the database frequently be subjected to ad hoc queries
(which *might* benefit from a sophisticated query optimizer)?

Hope this helps,
  Donald



  


[sqlite] Compressed dump SQLite3 database

2009-01-19 Thread vlemaire
Hello,

We need to produce copies of our databases for archiving.
It is a requirement that those copies be as small as
possible, without having to perform an external compression.
VACUUM doesn't seem to perform any compression (it only reclaims
fragmented space); is there any other way to do that?

Vincent







Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread Eric Minbiole
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?

If you can't use an external compression program (which would almost 
certainly help reduce the size of your archived database), then there 
are a couple of options I can think of:

1. When you create the copy of your database, you could drop all of the 
indices from the copy, then vacuum.  Depending on your schema, this has 
the potential to remove some redundant information.  (At the expense of 
query speed, of course.)  You could always re-create the indices, if 
needed, when reading the archive.  (A rough sketch of this step appears 
at the end of this message.)

2. If that doesn't help enough, run the sqlite3_analyzer (from 
http://sqlite.org/download.html) to see which table(s) are using the 
most disk space.  Focus on these tables to see if you can save space: 
Can you better normalize the schema to reduce repeated values?  Can some 
(non-vital) data be omitted from the archive?  etc.

If the above two options don't help enough, then I would reconsider the 
external compression tool.  zlib, for example, is a relatively 
lightweight, open source compression library that may do well on your 
database.

Hope this helps,
  Eric
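
Referring back to option 1 above, here is a rough sketch (with a
hypothetical index name and minimal error handling) of shrinking an
archive copy by dropping an index and vacuuming:

#include <sqlite3.h>

/* Shrink an archive copy: drop a (hypothetical) index, then VACUUM. */
static int shrink_archive(const char *zPath){
  sqlite3 *db;
  int rc = sqlite3_open(zPath, &db);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "DROP INDEX IF EXISTS idx_big_table_col", 0, 0, 0);
  }
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "VACUUM", 0, 0, 0);
  }
  sqlite3_close(db);
  return rc;
}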


Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread D. Richard Hipp

On Jan 19, 2009, at 3:50 AM, Jens Miltner wrote:

> Hello,
>
> I just upgraded to sqlite 3.6.10 and keep getting a segmentation
> violation when running the full tests on Mac OS X:
> The last test completed is consistently thread001.1.3.
>

This was a problem in the testing logic, not in the SQLite library  
itself.  The test logic was trying to run a threading test with some  
of the mutexes disabled.  Everything works correctly once mutexes are  
enabled properly.
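
As background (this is not part of Dr. Hipp's message), an application
opts into fully mutex-protected, serialized mode either by compiling with
-DSQLITE_THREADSAFE=1 or at start-up, roughly like this:

#include <sqlite3.h>

/* Call before any other SQLite API.  SERIALIZED keeps all mutexes
** enabled, so a connection may be used from multiple threads. */
int init_sqlite_threading(void){
  int rc = sqlite3_config(SQLITE_CONFIG_SERIALIZED);
  if( rc!=SQLITE_OK ) return rc;
  return sqlite3_initialize();
}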

Of course, I wasted 4 hours tracking the problem down.  This is  
yet another episode that demonstrates how threads are a pernicious  
evil that should be studiously avoided in any program that you  
actually want to work.  Threads cause hard-to-trace bugs.  Threads  
result in non-deterministic behavior.  Threads make programs run  
slower.  Just say "No" to threads...

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread jose isaias cabrera

"D. Richard Hipp" said,
> slower.  Just say "No" to threads...

Can I quote you on that? :-)

josé 



Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread Martin Engelschalk
Hello,

Threads: use them, but don't abuse them
Threads don't kill programs, programmers do ;-)

Martin

D. Richard Hipp wrote:
> On Jan 19, 2009, at 3:50 AM, Jens Miltner wrote:
>
>   
>> Hello,
>>
>> I just upgraded to sqlite 3.6.10 and keep getting a segmentation
>> violation when running the full tests on Mac OS X:
>> The last test completed is consistently thread001.1.3.
>>
>> 
>
> This was a problem in the testing logic, not in the SQLite library  
> itself.  The test logic was trying to run a threading test with some  
> of the mutexes disabled.  Everything works correctly once mutexes are  
> enabled properly.
>
> Of course, I wasted 4 hours tracking the problem down.  This is  
> yet another episode that demonstrates how threads are a pernicious  
> evil that should be studiously avoided in any program that you  
> actually want to work.  Threads cause hard-to-trace bugs.  Threads  
> result in non-deterministic behavior.  Threads make programs run  
> slower.  Just say "No" to threads...
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>


Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread Eric Minbiole
> Of course, I wasted 4 hours tracking the problem down.  This is  
> yet another episode that demonstrates how threads are a pernicious  
> evil that should be studiously avoided in any program that you  
> actually want to work.  Threads cause hard-to-trace bugs.  Threads  
> result in non-deterministic behavior.  Threads make programs run  
> slower.  Just say "No" to threads...

Let me start by saying that I have a great respect for SQLite and its 
developers.  I'm extremely pleased with the code itself as well as with 
the great support community. :)

However, I'm a bit surprised by the "threads are evil" mantra. 
Certainly, threads can cause "hard-to-trace bugs" when used improperly. 
  However the same can be said for many other language constructs, such 
as pointers, dynamic allocation, goto statements, etc.  Any tool can get 
you into trouble if abused.

No matter how you slice it, concurrent programming can be tricky.  While 
multi-thread and multi-process approaches each have pros and cons, the 
dangers are the same: the programmer must take care to ensure that 
any shared resource is accessed safely.  When used properly, either 
approach can work reliably.

I have no doubt that there are many cases where the multi-process 
approach has clear benefits.  Indeed, if one prefers the multi-process 
approach, then by all means use it.  However, a multi-threaded approach 
can have benefits as well.  Advocating a "one size fits all" approach 
for everyone, without knowing the details of a particular application, 
just seems an oversimplification to me.

Sorry for my rant :)

~Eric


Re: [sqlite] reading beyond end of file

2009-01-19 Thread Dave Toll
Returning SQLITE_IOERR_SHORT_READ in this case solves my problem.

Many thanks,
Dave.


-Original Message-
From: D. Richard Hipp [mailto:d...@hwaci.com] 
Sent: 16 January 2009 15:57
To: General Discussion of SQLite Database
Subject: Re: [sqlite] reading beyond end of file


On Jan 16, 2009, at 6:54 PM, D. Richard Hipp wrote:

>
> On Jan 16, 2009, at 6:43 PM, Noah Hart wrote:
>
>> Just a random thought ... This is new code in pager.c,
>> and if Pager->journalOff  is at the end of the file,
>> then perhaps it could cause his problem.
>>
>>   **
>>   ** To work around this, if the journal file does appear to
>> contain
>>   ** a valid header following Pager.journalOff, then write a 0x00
>>   ** byte to the start of it to prevent it from being recognized.
>>   */
>>   rc = sqlite3OsRead(pPager.jfd, zMagic, 8, jrnlOff);
>>
>
>
> Noah is correct.  There was a bug in my earlier assert statement.   
> The code above reads past the end of the journal file when you are  
> in persistent journaling mode.
>


Note that correct behavior of the xRead method of the VFS in this case  
is to return SQLITE_IOERR_SHORT_READ since it should be reading 0 bytes.

D. Richard Hipp
d...@hwaci.com
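
For anyone implementing a custom VFS, here is a rough sketch of an xRead
that behaves as described; it assumes a file object that wraps a plain
POSIX descriptor, which is an assumption of this sketch and not part of
the thread.

#include <string.h>
#include <unistd.h>
#include <sqlite3.h>

typedef struct MyFile MyFile;
struct MyFile {
  sqlite3_file base;   /* base class - must come first */
  int fd;              /* underlying POSIX file descriptor (assumed layout) */
};

static int myRead(sqlite3_file *pFile, void *zBuf, int iAmt, sqlite3_int64 iOfst){
  MyFile *p = (MyFile*)pFile;
  ssize_t got = pread(p->fd, zBuf, (size_t)iAmt, (off_t)iOfst);
  if( got==iAmt ) return SQLITE_OK;
  if( got<0 ) return SQLITE_IOERR_READ;
  /* Short read (e.g. reading at or past EOF): zero-fill the remainder
  ** and report SQLITE_IOERR_SHORT_READ so the core handles it. */
  memset((char*)zBuf + got, 0, (size_t)(iAmt - got));
  return SQLITE_IOERR_SHORT_READ;
}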






Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread John Stanton
Just use something like gzip to make a compressed version of the 
database for storage.  You would most likely save up to 80% of the 
space.  The .gz files are an industry standard for compression.

vlema...@ausy.org wrote:
> Hello,
>
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?
>
> Vincent
>
>
>
>
>



Re: [sqlite] Assigning REGEX from javascript

2009-01-19 Thread Noah Hart
It turns out there is a fairly simple solution (thanks to Mirnal Kant).

In javascript:

//functions to be created for the db
var smDbFunctions = {
  // for use as:  WHERE col REGEXP pattern
  // the column value arrives as the second argument (index 1)
  regexp: {
onFunctionCall: function(val) {
  var re = new RegExp(val.getString(0));
  if (val.getString(1).match(re))
return 1;
  else
return 0;
}
  }
};

after instantiating a SQLite instance:

Database.createFunction("REGEXP", 2, smDbFunctions.regexp);

This does work, see Mirnal's SQLite Manager version 0.4.3 for proof of
concept.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Noah Hart
Sent: Tuesday, January 13, 2009 9:29 AM
To: General Discussion of SQLite Database
Subject: [sqlite] Assigning REGEX from javascript

BACKGROUND:  

Firefox includes SQLite version 3.5.9, it also allows extensions, which
are written in javascript and can call the embedded SQLite engine.

As expected, executing the following SQL statement 'SELECT "TEXT" REGEX
"T*";' gives an error, since there is no REGEX function included.

javascript includes a native regex function.

SQLite allows loadable extensions via SELECT load_extension('filename');

QUESTION:
Is it possible to load a javascript extension which could be registered
to do REGEX?


Regards,

Noah









Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread Christian Smith
On Mon, Jan 19, 2009 at 06:22:33PM +0100, vlema...@ausy.org wrote:
> Hello,
> 
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?


If you're taking snapshots of your databases while live, be careful
not to just copy the database files, as changes may be occurring in the
database while you're taking a copy, leaving you with an inconsistent
file.

In that case, you'll probably want to script the archiving: perhaps
use the sqlite shell's .dump command to take a text dump of the
database (which will lock it correctly), or start an exclusive
transaction while you copy the raw database file and then roll back to
unlock the database once you've finished copying.

Either way, by scripting it, you'll have the opportunity to also
compress the file (dump or raw).  Imposing an arbitrary no-compression
requirement without good justification seems like a poor policy.
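
A rough sketch of the "exclusive transaction, copy, then rollback"
approach Christian describes; the file names are placeholders and error
handling is minimal.

#include <stdio.h>
#include <sqlite3.h>

/* Copy the raw database file while holding an exclusive lock on it. */
static int archive_copy(sqlite3 *db, const char *zSrc, const char *zDst){
  char buf[8192];
  size_t n;
  FILE *in, *out;

  if( sqlite3_exec(db, "BEGIN EXCLUSIVE", 0, 0, 0)!=SQLITE_OK ) return 1;

  in = fopen(zSrc, "rb");
  out = fopen(zDst, "wb");
  if( in && out ){
    while( (n = fread(buf, 1, sizeof(buf), in))>0 ) fwrite(buf, 1, n, out);
  }
  if( in ) fclose(in);
  if( out ) fclose(out);

  /* Nothing was modified, so a rollback just releases the lock. */
  sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
  return 0;
}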


> 
> Vincent
> 

Christian


[sqlite] 'UPDATE shop_orders SET comments=comments||? WHERE oid=?', ('test', '1')

2009-01-19 Thread Gert Cuykens
How do I do the following?

comments=comments||?

When I add a comment, nothing happens.

(Please reply to my email address; I'm not subscribed to the list.)


Re: [sqlite] fulltest *malloc* test failures

2009-01-19 Thread BardzoTajneKonto
> These are simulated malloc() failures.  They are important for embedded  
> devices (which tend to run out of memory) but not so much on Solaris.   
> When was the last time you remember that malloc() really failed on a  
> workstation or a server? 
 
It's very easy to cause a malloc failure on a 32-bit desktop computer, especially
with sqlite configured to use about an 800 MB cache inside a JVM started with a
large -Xmx. Unfortunately, the JVM crashes after a malloc failure. SQLite usually
simply returns "out of memory"; however, I've experienced several crashes caused
by sqlite (I'm using an old version, and the problems I've seen were already
corrected a long time ago).




Re: [sqlite] SQLite 3.6.8+ breaks YUM

2009-01-19 Thread Tuan Hoang

Tuan Hoang wrote:

Hi,

I've been back-porting SQLite 3.x to CentOS 4.7 for some development 
work.  I've been taking the SRPMS from koji.fedoraproject.org and 
rebuilding them.


All has been fine through v3.6.7, but when I recently tried to upgrade to 
3.6.10 (by just updating the SPEC file and rebuilding), the YUM updater 
no longer works.  In particular, the python-sqlite package exits with an 
error when it tries to read its cache file (I assume that it's a SQLite 
DB).  I checked the in-between builds, and one of the changes in v3.6.8 
triggered this error.


Is there anyone else with a similar problem?  FWIW, I've also done this 
under CentOS 5.2, and it breaks YUM there too.


Thanks,
Tuan

P.S.  Please reply all since I'm not subscribed to the mailing list.



I did a little more debugging with yum and its use of 
python-sqlite.  It appears that the database is not corrupt, but rather 
that the database can't be created at all.


The attached CREATE TABLE statements work fine with v3.6.7 and before 
(at least the ones that I've tried).  As of v3.6.8 up through v3.6.10, 
YUM can no longer create these tables.


Did the string "release" suddenly become a keyword?  If so, why?

Tuan
CREATE TABLE db_info (dbversion INTEGER, checksum TEXT);
CREATE TABLE packages (
  pkgKey INTEGER PRIMARY KEY, pkgId TEXT, name TEXT, arch TEXT,
  version TEXT, epoch TEXT, release TEXT, summary TEXT, description TEXT,
  url TEXT, time_file INTEGER, time_build INTEGER, rpm_license TEXT,
  rpm_vendor TEXT, rpm_group TEXT, rpm_buildhost TEXT, rpm_sourcerpm TEXT,
  rpm_header_start INTEGER, rpm_header_end INTEGER, rpm_packager TEXT,
  size_package INTEGER, size_installed INTEGER, size_archive INTEGER,
  location_href TEXT, checksum_type TEXT, checksum_value TEXT);
CREATE TABLE files (name TEXT, type TEXT, pkgKey INTEGER);
CREATE TABLE requires (name TEXT, flags TEXT, epoch TEXT, version TEXT,
  release TEXT, pkgKey INTEGER, pre BOOLEAN DEFAULT FALSE);
CREATE TABLE provides (name TEXT, flags TEXT, epoch TEXT, version TEXT,
  release TEXT, pkgKey INTEGER);
CREATE TABLE conflicts (name TEXT, flags TEXT, epoch TEXT, version TEXT,
  release TEXT, pkgKey INTEGER);
CREATE TABLE obsoletes (name TEXT, flags TEXT, epoch TEXT, version TEXT,
  release TEXT, pkgKey INTEGER);
CREATE INDEX packagename ON packages (name);
CREATE INDEX packageId ON packages (pkgId);
CREATE INDEX pkgrequires on requires (pkgKey);
CREATE INDEX pkgprovides on provides (pkgKey);
CREATE INDEX pkgconflicts on conflicts (pkgKey);
CREATE INDEX pkgobsoletes on obsoletes (pkgKey);
CREATE INDEX providesname ON provides (name);
CREATE TRIGGER removals AFTER DELETE ON packages
  BEGIN
    DELETE FROM files WHERE pkgKey = old.pkgKey;
    DELETE FROM requires WHERE pkgKey = old.pkgKey;
    DELETE FROM provides WHERE pkgKey = old.pkgKey;
    DELETE FROM conflicts WHERE pkgKey = old.pkgKey;
    DELETE FROM obsoletes WHERE pkgKey = old.pkgKey;
  END;


Re: [sqlite] SQLite 3.6.8+ breaks YUM

2009-01-19 Thread Noah Hart
Fixed here ---

11:04
Check-in [6186] : Allow recently added keywords 'savepoint' and
'release' to be used as database object names. Just as they could be
prior to 3.6.8. Ticket #3590. 



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Tuan Hoang
Sent: Monday, January 19, 2009 11:59 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite 3.6.8+ breaks YUM

Tuan Hoang wrote:
> Hi,
> 
> I've been back-porting SQLite 3.x to CentOS 4.7 for some development 
> work.  I've been taking the SRPMS from koji.fedoraproject.org and 
> rebuilding them.
> 
> All has been fine through v3.6.7 but when I tried to recently upgrade 
> to 3.6.10 (by just updating the SPEC file and rebuilding), the YUM 
> updater no longer works.  In particular the python-sqlite package 
> exits with an error when it tries to read it's cache file (I assume 
> that it's a SQLite DB).  I checked the in-between builds and one of 
> the changes in v3.6.8 has triggered this error.
> 
> Is there anyone else with a similar problem?  FWIW, I've also done 
> this under CentOS 5.2 and it also breaks its YUM too.
> 
> Thanks,
> Tuan
> 
> P.S.  Please reply all since I'm not subscribed to the mailing list.
> 

I did a little more debugging with the yum and it's use of
python-sqlite.  It appears that the database is not corrupt, but rather
that the database can't be created at all.

The attached CREATE TABLE statements work fine with v3.6.7 and before
(at least the ones that I've tried).  As of v3.6.8 up through v3.6.10,
YUM can no longer create these tables.

Did the string "release" suddenly become a keyword?  If so, why?

Tuan







Re: [sqlite] SQLite 3.6.8+ breaks YUM

2009-01-19 Thread D. Richard Hipp

On Jan 19, 2009, at 2:59 PM, Tuan Hoang wrote:

> Tuan Hoang wrote:
>> Hi,
>> I've been back-porting SQLite 3.x to CentOS 4.7 for some  
>> development work.  I've been taking the SRPMS from  
>> koji.fedoraproject.org and rebuilding them.
>> All has been fine through v3.6.7 but when I tried to recently  
>> upgrade to 3.6.10 (by just updating the SPEC file and rebuilding),  
>> the YUM updater no longer works.  In particular the python-sqlite  
>> package exits with an error when it tries to read it's cache file  
>> (I assume that it's a SQLite DB).  I checked the in-between builds  
>> and one of the changes in v3.6.8 has triggered this error.
>> Is there anyone else with a similar problem?  FWIW, I've also done  
>> this under CentOS 5.2 and it also breaks its YUM too.
>> Thanks,
>> Tuan
>> P.S.  Please reply all since I'm not subscribed to the mailing list.
>
> I did a little more debugging with the yum and it's use of python- 
> sqlite.  It appears that the database is not corrupt, but rather  
> that the database can't be created at all.
>
> The attached CREATE TABLE statements work fine with v3.6.7 and  
> before (at least the ones that I've tried).  As of v3.6.8 up through  
> v3.6.10, YUM can no longer create these tables.
>
> Did the string "release" suddenly become a keyword?  If so, why?

RELEASE is a command name associated with SAVEPOINTs.  SAVEPOINT  
support was added for version 3.6.8.

The current CVS contains a work-around in the parser.  The next  
release (3.6.11) will allow "release" to be used as a column name  
without quoting.  See ticket #3590 for details. 
http://www.sqlite.org/cvstrac/tktview?tn=3590

D. Richard Hipp
d...@hwaci.com
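
Until 3.6.11 ships, one workaround is simply to quote the identifier; for
example (a sketch, not taken from yum's sources):

#include <sqlite3.h>

/* Double-quoting lets the keyword be used as a column name again. */
static int create_requires_table(sqlite3 *db){
  const char *zSql =
    "CREATE TABLE requires ("
    "  name TEXT, flags TEXT, epoch TEXT, version TEXT,"
    "  \"release\" TEXT, pkgKey INTEGER, pre BOOLEAN DEFAULT FALSE)";
  return sqlite3_exec(db, zSql, 0, 0, 0);
}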





Re: [sqlite] 'UPDATE shop_orders SET comments=comments||? WHERE oid=?', ('test', '1')

2009-01-19 Thread Igor Tandetnik
"Gert Cuykens" 
wrote in message
news:ef60af090901180715s58fdc033p2c4ba7df6fb90...@mail.gmail.com
> How do i do the following ?
>
> comments=comments||?
>
> When I add a comment nothing happens ?

What's in comments field before the update? Is it NULL, by any chance? 
NULL || 'anything' = NULL.

Also, you probably want to compare oid to 1 (a numeric literal), not '1' 
(a string literal).

Igor Tandetnik
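
A minimal sketch of the resulting statement from C (not part of Igor's
reply), using coalesce() so a NULL comments column does not swallow the
concatenation, and binding oid as a number; table and column names follow
the subject line.

#include <sqlite3.h>

static int append_comment(sqlite3 *db, int oid, const char *zText){
  sqlite3_stmt *pStmt;
  const char *zSql =
    "UPDATE shop_orders "
    "SET comments = coalesce(comments,'') || ? "
    "WHERE oid = ?";
  int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_text(pStmt, 1, zText, -1, SQLITE_TRANSIENT);
  sqlite3_bind_int(pStmt, 2, oid);      /* a number, not the string '1' */
  rc = sqlite3_step(pStmt);
  sqlite3_finalize(pStmt);
  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}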





[sqlite] Recover deleted records

2009-01-19 Thread Alex Mandel
I was looking through old posts and this one seemed quite similar to my
situation with one exception. I have full knowledge of what data I'm
trying to recover.
http://thread.gmane.org/gmane.comp.db.sqlite.general/35764

The basics: data was deleted from 4 tables via an ODBC connection.  No
VACUUM has been run, and looking at a copy of the database in a text
editor (SciTE) I can see a lot of the data completely intact.

What I don't know is:
1. For the non-ASCII characters in the file, what are they, and how can I
decode the ones that might hold, say, numeric data?  (Text appears to be
stored as plain text.)
2. Is there a pattern of record starts/ends or other markers that would be
useful to look for if I'm writing a script to parse the file back into at
least an unformatted text dump?

Is the "deleted" data in the "Free Pages" and is there anything in the
API to interact with data in this area so I could loop over it and
extract pieces one by one?

Any other leads I should be following?

Thanks,
Alex


Re: [sqlite] SQLite 3.6.8+ breaks YUM

2009-01-19 Thread Robert L Cochran
D. Richard Hipp wrote:
> On Jan 19, 2009, at 2:59 PM, Tuan Hoang wrote:
>
>   
>> Tuan Hoang wrote:
>> 
>>> Hi,
>>> I've been back-porting SQLite 3.x to CentOS 4.7 for some  
>>> development work.  I've been taking the SRPMS from  
>>> koji.fedoraproject.org and rebuilding them.
>>> All has been fine through v3.6.7 but when I tried to recently  
>>> upgrade to 3.6.10 (by just updating the SPEC file and rebuilding),  
>>> the YUM updater no longer works.  In particular the python-sqlite  
>>> package exits with an error when it tries to read it's cache file  
>>> (I assume that it's a SQLite DB).  I checked the in-between builds  
>>> and one of the changes in v3.6.8 has triggered this error.
>>> Is there anyone else with a similar problem?  FWIW, I've also done  
>>> this under CentOS 5.2 and it also breaks its YUM too.
>>> Thanks,
>>> Tuan
>>> P.S.  Please reply all since I'm not subscribed to the mailing list.
>>>   
>> I did a little more debugging with the yum and it's use of python- 
>> sqlite.  It appears that the database is not corrupt, but rather  
>> that the database can't be created at all.
>>
>> The attached CREATE TABLE statements work fine with v3.6.7 and  
>> before (at least the ones that I've tried).  As of v3.6.8 up through  
>> v3.6.10, YUM can no longer create these tables.
>>
>> Did the string "release" suddenly become a keyword?  If so, why?
>> 
>
> RELEASE is a command name assocated with SAVEPOINTs.  SAVEPOINT  
> support was added for version 3.6.8.
>
> The current CVS contains a work-around in the parser.  The next  
> release (3.6.11) will allow "release" to be used as a column name  
> without quoting.  See ticket #3590 for details. 
> http://www.sqlite.org/cvstrac/tktview?tn=3590
>
> D. Richard Hipp
> d...@hwaci.com
>
>   
I too am experiencing this problem with sqlite-3.6.10 on yum as
installed on CentOS 5.2. Thank you Tuan, Noah, and Richard for finding
and addressing this.

Bob Cochran
Greenbelt, Maryland, USA

