Hi Clemens,
I changed my code, now it works fine.
thank you,
regards,
/nicoo
On Thu, 15 Jan 2015 14:35:49 +0100
Clemens Ladisch wrote:
> Nicolas Jäger wrote:
> > do
> > {
> > rc = sqlite3_step(stmt);
> > std::cout << sqlite3_column_text (stmt, 0) <<","
On 2015/01/15 23:18, Baruch Burstein wrote:
Hi,
If I have a table with an index, and I INSERT or DELETE a large number of
rows in one statement, does SQLite update the index separately for each
record, or is it smart enough to update the index just once for all the
changed records?
In a B-Tree
> How do I save PDF files in SQLite?
Why do you want to store PDF files in SQLite? What is the "real" problem that
this solves for you?
There is http://www.sqlite.org/sqlar/doc/trunk/README.md which does what you're
literally asking, but I feel that with more context we can give you better
advice.
On 01/15/2015 12:52 PM, Dave Dyer wrote:
> Of course that's possible, but .dump produced what superficially
> appeared to be a perfectly consistent text file.
Note that .dump writes the output and then on encountering problems
attempts the table
>
>Try doing: sqlite3 old-database .dump | sqlite3 fixed-database
>
>Then verify that "fixed-database" still contains all of your data.
This doesn't work on these databases, even undamaged ones. I think
it's a buffer size problem with sqlite3. The databases contain some
rather long text fields.
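The `sqlite3 old-database .dump | sqlite3 fixed-database` pipeline suggested above can also be scripted. A minimal sketch using Python's stdlib `sqlite3` module, whose `iterdump()` emits the same SQL text as the shell's `.dump` command (file paths here are made up):

```python
import sqlite3

def rebuild_database(src_path: str, dst_path: str) -> None:
    """Rebuild a database by replaying its SQL dump into a fresh file:
    the scripted equivalent of  sqlite3 src .dump | sqlite3 dst"""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # iterdump() yields the schema and data as SQL statements,
    # wrapped in BEGIN TRANSACTION ... COMMIT
    dst.executescript("\n".join(src.iterdump()))
    src.close()
    dst.close()
```

Like the shell pipeline, this only recovers what `.dump` can still read from the damaged file.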
From the SQLite3 shell (recent version), use the readfile('filename')
function to import into a blob field, and the writefile('filename',field)
for exporting back to a file.
See here: http://www.sqlite.org/cli.html
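Note that readfile()/writefile() are built into the shell tool (the fileio extension), not the core library. From application code the equivalent is reading the bytes yourself and binding them as a BLOB parameter. A sketch, assuming an invented docs(name, data) table:

```python
import sqlite3

def store_file(con, name, path):
    # read the raw bytes and bind them as a BLOB parameter
    with open(path, "rb") as f:
        con.execute("INSERT INTO docs(name, data) VALUES (?, ?)",
                    (name, f.read()))

def export_file(con, name, path):
    # fetch the BLOB back and write it out unchanged
    (data,) = con.execute("SELECT data FROM docs WHERE name = ?",
                          (name,)).fetchone()
    with open(path, "wb") as f:
        f.write(data)
```

The bytes come back exactly as stored, so this works for PDFs or any other binary file.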
-Original Message-
From: John Payne
Sent: Thursday, January 15, 2015
On 1/15/15, Dave Dyer wrote:
> The likely cause of corruption is that this is probably a
> database being accessed on a networked disk.
>
> --
>
> sqlite> select * from preference_table where preferenceset='foo';
> sqlite> drop index preferenceindex;
> (11) database
Hi,
If I have a table with an index, and I INSERT or DELETE a large number of
rows in one statement, does SQLite update the index separately for each
record, or is it smart enough to update the index just once for all the
changed records?
--
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı
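As far as I know, SQLite maintains each index as every row is inserted or deleted; there is no deferred batch update, which is why the drop-and-recreate approach discussed elsewhere in this thread can win for bulk loads. A rough way to compare the two, with made-up table and sizes:

```python
import sqlite3, time

def bulk_insert(drop_index: bool, n: int = 50_000) -> float:
    """Time a bulk load, either maintaining the index row by row
    or dropping it first and rebuilding it once at the end."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(v INTEGER)")
    con.execute("CREATE INDEX t_v ON t(v)")
    if drop_index:
        con.execute("DROP INDEX t_v")
    start = time.perf_counter()
    with con:  # one transaction for the whole load
        con.executemany("INSERT INTO t VALUES (?)",
                        ((i,) for i in range(n)))
    if drop_index:
        con.execute("CREATE INDEX t_v ON t(v)")  # rebuild once, sorted
    return time.perf_counter() - start
```

Absolute timings depend on page cache, row count, and index width, so measure on your own data before committing to either approach.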
The likely cause of corruption is that this is probably a
database being accessed on a networked disk.
--
sqlite> select * from preference_table where preferenceset='foo';
sqlite> drop index preferenceindex;
(11) database corruption at line 52020 of [2677848087]
(11) statement aborts at 24:
On 1/15/15, Dave Dyer wrote:
>
>>
>>
>>> it wasn't possible to drop the index in question
>>
>>what happened when you tried ? Were you using your own code or the SQLite
>> shell tool ?
>
> sqlite shell tool. Same complaint, "database corrupted".
First type: ".log
>
>
>> it wasn't possible to drop the index in question
>
>what happened when you tried ? Were you using your own code or the SQLite
>shell tool ?
sqlite shell tool. Same complaint, "database corrupted".
>My guess is that you actually have file-level corruption which just happened
>to
On 15 Jan 2015, at 8:24pm, Dave Dyer wrote:
> 1) the generic error 11 "database corrupt" could have been more
> specific. It would have been handy to know that the complaint was
> about duplicate indexes, and which index, or even which table was
> involved.
>
> 2) it
On 15 Jan 2015, at 8:24pm, John Payne wrote:
> How do I save PDF files in SQLite?
Read the bytes of the file and save them in a BLOB field.
But I have to warn you ...
> I'm not a programmer,
SQLite is a tool for programmers. It makes database facilities available to
I have a case of a damaged database, where the only damage appears to be
that somehow the index uniqueness constraint is violated. As long as the
operations don't touch the index, the db operates without complaint.
I was eventually able to construct a copy with good indexes, but
1) the generic
How do I save PDF files in SQLite? Is there a preferred method? All the
online suggestions seem to require writing some custom code. Is there an
add-on for saving PDF or other digital objects? I'm not a programmer, do
not know PHP, and am rather clueless about how to proceed.
Thanks
John Payne
> I understand that the WAL log must take a lot of space. What I don't
> understand is that it was 7x larger than the resulting DB size. (Actual
> quotient is even larger because I compared to the DB size that contained
> also other tables.)
Unlike a rollback journal, a WAL file can contain multiple copies of the
same database page, one for each commit since the last checkpoint.
Sorry Carlos - vanilla sqlite is required.
It's not a big issue for me.
Cheers
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
On 15 Jan 2015, at 3:44pm, Jan Slodicka wrote:
> Index rebuild (using sqlite3 shell) took 123 sec. This would suggest that it
> might be better to run with deleted indexes and rebuild them at the end.
That is as expected, and is standard advice for cases where you are adding
a large number of rows: drop the indexes, do the inserts, then recreate the
indexes.
I'll add the results from additional tests.
First of all, I forced a commit after each 100,000 records inserted into a
single table. (A complication for us.)
Some numbers for a table with a single index and 3,423,000 inserted records:
Intermediate commits successively took 764 msec, 2164 msec,
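The forced-commit-every-100,000-records pattern described above can be sketched like this (table name and batch size are illustrative):

```python
import sqlite3

def insert_in_batches(con, rows, batch_size=100_000):
    """Insert rows, committing after every batch_size inserts so the
    journal/WAL never has to hold the whole load in one transaction."""
    con.isolation_level = None  # manage transactions manually
    cur = con.cursor()
    cur.execute("BEGIN")
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t(v) VALUES (?)", (row,))
        if i % batch_size == 0:
            cur.execute("COMMIT")  # close this batch's transaction
            cur.execute("BEGIN")   # and start the next one
    cur.execute("COMMIT")
```

Each intermediate COMMIT forces the accumulated pages to disk, which is why the commit times grow as the index gets larger.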
Nicolas Jäger wrote:
> do
> {
> rc = sqlite3_step(stmt);
> std::cout << sqlite3_column_text(stmt, 0) << "," << sqlite3_column_text(stmt, 2) << std::endl;
> } while(rc == SQLITE_ROW);
sqlite3_step() returns SQLITE_ROW when there is a row, or SQLITE_DONE
when there are no more rows, or an error code. Your do/while reads the
column values before testing rc, so after the last row it calls
sqlite3_column_text() once more when there is no current row.
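A check-before-use version of the same loop, sketched with Python's sqlite3 module (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tags(name TEXT)")
con.executemany("INSERT INTO tags VALUES (?)", [("red",), ("blue",)])

cur = con.execute("SELECT name FROM tags ORDER BY name")
while True:
    row = cur.fetchone()   # analogous to rc = sqlite3_step(stmt)
    if row is None:        # analogous to rc != SQLITE_ROW
        break              # stop BEFORE touching the column values
    print(row[0])
```

The point is the ordering: fetch, test the result, and only then use the row.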
Hi
I'm new to the list and newish to SQLite and would appreciate some tips.
I'm attempting to create an application that requires a spatial rtree query,
and this works extremely well using the x86 version of the
System.Data.SQLite library
Hi, I do use Xojo (Realbasic) to develop applications with SQLite databases.
I also use some SQLite extensions from:
http://www.monkeybreadsoftware.de/SQLiteExtension/index.shtml
I guess they would be useful (as a generic SQLite loadable extension) in your
Toolkit.
You may ask Christian to support
Hi,
I've been discovering/using sqlite3 for three days, and I've hit a problem
that I don't understand. I have this code:
bool
DataTable_manager::searchByTags(const std::vector<std::string> & tags_list) {
int rc;
sqlite3_stmt *stmt;
for (auto & tag : tags_list)
{
std::cout << tag << std::endl;
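The fragment above loops over the tags to build a query. One safe way to do a multi-tag search is one `?` placeholder per tag, sketched here in Python with an invented item_tags(item, tag) schema:

```python
import sqlite3

def search_by_tags(con, tags):
    """Return items carrying ALL the given tags, binding each tag
    as a parameter instead of pasting it into the SQL string."""
    placeholders = ",".join("?" for _ in tags)
    sql = (f"SELECT item FROM item_tags WHERE tag IN ({placeholders}) "
           f"GROUP BY item HAVING COUNT(DISTINCT tag) = ?")
    return [r[0] for r in con.execute(sql, (*tags, len(tags)))]
```

The HAVING clause keeps only items matched by every tag; drop it to get any-tag semantics.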
On 14 Jan 2015 at 23:13, Simon Slavin wrote:
> On 14 Jan 2015, at 10:40pm, Baruch Burstein wrote:
>
>> Of course, this is just at the theoretical level. As you said, your app
>> probably wouldn't need to worry about this.
>
> I think a previous poster
Simon Slavin-3 wrote
>> - WAL log size 7.490 GB
>
> Please repeat your tests but as the first command after opening your
> database file issue
>
> PRAGMA journal_size_limit = 100
>
> With this change the WAL file may still grow to 7 GB while that particular
> transaction is being executed
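Issuing the pragma right after opening, as suggested, looks like this in Python's sqlite3 module (the 1 MB limit is arbitrary; the thread used 100 bytes):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
# Switch to WAL journaling; the pragma reports the mode now in effect.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# Cap the journal: after a checkpoint SQLite truncates the -wal file
# back toward this many bytes instead of leaving it at its peak size.
con.execute("PRAGMA journal_size_limit=1048576")  # 1 MB
con.execute("CREATE TABLE t(x)")
con.commit()
```

The limit does not stop the WAL growing during one huge transaction; it only controls what is kept afterwards.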
Richard Hipp-3 wrote
> What is your page size?
1024
Richard Hipp-3 wrote
> Your original post said you inserted two rows for each transaction.
> How big are those two rows?
Sorry for the misleading information. Here is a more formal algorithm:
foreach table
{
BEGIN
insert all downloaded
Thanks Peter
Coding outside of SQLite is easy - it's doing it with just SQLite/SQL
that I was after :(
Cheers
Paul
On 01/15/2015 12:28 AM, Jan Slodicka wrote:
Richard Hipp-3 wrote
No other active readers or writers.
Are you sure?
Writers for sure.
As far as readers are concerned, things are too complex to make an absolute
statement. (I shall check once more.)
Some APIs that might be helpful:
*