[sqlite] Missing db name with the SQLITE_ATTACH action callback
Hello,

I am trying to implement a cache of one connection's metadata (databases, tables, columns, indexes). It seems possible to automatically update the cache by using an authorizer. But there is one problem with the SQLITE_ATTACH action:

#define SQLITE_ATTACH        24   /* Filename        NULL            */
#define SQLITE_DETACH        25   /* Database Name   NULL            */

Only the filename is available, and it is optional/not unique (for :memory: and temp databases). Would you mind adding the database name as the fourth argument of the authorizer callback?

Thanks and regards.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
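The complaint above can be observed from Python's stdlib sqlite3 bindings, which pass the C authorizer arguments straight through. This is an illustrative sketch, not part of the original post; the action code 24 is SQLITE_ATTACH as quoted from sqlite3.h above, and the callback names are ours:

```python
import sqlite3

SQLITE_ATTACH = 24  # action code from sqlite3.h, quoted in the post above

calls = []

def authorizer(action, arg1, arg2, db_name, trigger):
    # For SQLITE_ATTACH, arg1 carries the filename; the database name
    # chosen in the ATTACH statement is not passed (db_name is None).
    calls.append((action, arg1, arg2, db_name))
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.set_authorizer(authorizer)
conn.execute("ATTACH DATABASE ':memory:' AS aux")

attach_calls = [c for c in calls if c[0] == SQLITE_ATTACH]
print(attach_calls)
```

The 'aux' name never reaches the callback, which is exactly the gap the post asks to close.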
Re: [sqlite] Detecting multiple CHECK failures
Hello,

I think that SQLite reports the first constraint which fails:

http://sqlite.org/changes.html
2012-05-14 (3.7.12)
Report the name of specific CHECK constraints that fail.

sqlite> CREATE TABLE test (data TEXT CONSTRAINT notEmpty CHECK (length(data) > 0));
sqlite> INSERT INTO test VALUES ('');
Error: CHECK constraint failed: notEmpty

Regards.

On Tue, Oct 7, 2014 at 11:11 PM, Simon Slavin slav...@bigfraud.org wrote:
> On 7 Oct 2014, at 10:00pm, Peter Haworth p...@lcsql.com wrote:
>> I'm a great believer in using CHECK constraints to do as much validation as possible within the database rather than code it in my application. However, I think I'm right in saying that as soon as a CHECK constraint fails, an error is returned to my application, so no other CHECK constraints are executed. In a data entry type of application, this isn't ideal, as users would prefer to see all the errors they need to correct in one message.
>
> For most ways in which SQLite can refuse to do something, you have no way to know why it refused. The results don't include the name of a constraint which failed, or anything else of any use. You simply get a result code which tells you that the operation failed because of the data in your command (rather than because the command had bad syntax or referred to a table/index/column which didn't exist).
>
>> I can't think of a way round this but am wondering if anyone has found a technique to return all CHECK constraint errors at once.
>
> It would appear that in SQLite the CHECK constraints are useful only in ensuring your database doesn't reflect things that are impossible. It is of no use at all in knowing why a command is rejected. Ideally, if a result code indicates a constraint failure, there would be a way to retrieve a list of the names of the constraints which would have been violated. However this is not possible in SQLite3 at all without a major rewrite. SQLite3 just gets a binary indication of whether any constraints were violated.
>
> Simon.
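The first-failure behaviour described in this thread can be reproduced from Python's stdlib sqlite3. A sketch under our own assumptions: the two constraint names are hypothetical, with both constraints violated by the empty string, and only the first declared one is reported:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two named CHECK constraints on one column; '' violates both,
# but only the first failing constraint is named in the error.
conn.execute("""
    CREATE TABLE test (
        data TEXT
            CONSTRAINT notEmpty CHECK (length(data) > 0)
            CONSTRAINT nonBlank CHECK (data <> '')
    )
""")

msg = None
try:
    conn.execute("INSERT INTO test VALUES ('')")
except sqlite3.IntegrityError as e:
    # No list of all violated constraints is available; checking stops here.
    msg = str(e)
print(msg)
```

This matches Simon's point: the application gets one failure, not the full set.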
Re: [sqlite] JDBC and savepoints
Hello,

Are you sure?
http://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#setSavepoint()

Regards.

On Sun, Jun 29, 2014 at 12:15 PM, hala hala_alesa...@hotmail.com wrote:
> JDBC does not support savepoints for SQLite. Is there any replacement for savepoints? If not, what can be used for bulk inserts to ensure the possibility of rolling back without losing much data?
>
> --
> View this message in context: http://sqlite.1065341.n5.nabble.com/JDBC-and-savepoints-tp76304.html
> Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] importing CSV with a random empty line at the end
Hello,

I've taken the time to investigate the problem: only the last parameter is not correctly bound to NULL. May I suggest a patch:

--- shell_.c	2014-06-15 14:22:39.0 +0200
+++ shell.c	2014-06-15 14:23:11.0 +0200
@@ -2553,7 +2553,7 @@
                         filling the rest with NULL\n",
                         sCsv.zFile, startLine, nCol, i+1);
         i++;
-        while( i<nCol ){ sqlite3_bind_null(pStmt, i); i++; }
+        while( i<=nCol ){ sqlite3_bind_null(pStmt, i); i++; }
       }
     }
     if( sCsv.cTerm==sCsv.cSeparator ){

Regards.

On Mon, Jun 2, 2014 at 9:34 PM, Gert Van Assche ger...@gmail.com wrote:
> gwenn, thanks for this. I did not understand what you saw, and then I realized my shell exe was probably too old. I downloaded the new exe and this solves the problem just fine!
> thanks for your help.
> gert
>
> 2014-06-02 19:03 GMT+02:00 gwenn gwenn.k...@gmail.com:
>> Hello,
>> It doesn't fail for me (it may depend on the constraints on the target table) but the behaviour is unexpected:
>>
>> $ echo '1|test' > empty.csv
>> $ sqlite3
>> SQLite version 3.8.4.3 2014-04-03 16:53:12
>> sqlite> create table test(opt text, data text not null);
>> sqlite> .import empty.csv test
>> empty.csv:2: expected 2 columns but found 1 - filling the rest with NULL
>> sqlite> select * from test;
>> 1|test
>> |test
>>
>> As the bindings are not cleared, it is not a null value but the previously bound value which is inserted.
>> Regards.
>>
>> On Mon, Jun 2, 2014 at 5:41 PM, Gert Van Assche ger...@gmail.com wrote:
>>> All, I received 100.000 UTF-8 files (average size 50kb) ready for import in an SQLite db. 90% of them go fine, but some files have an empty line at the very end of the file (so an extra EOL before the EOF). Of course, the import fails... Is there an easy way to get rid of that extra empty line before I import the file, or is there a way to ignore an empty line?
>>> thanks
>>> Gert
Re: [sqlite] FTS module and DB close
Ok,

Maybe the solution is:
1) try to close the connection: sqlite3_close
2) if the error code is SQLITE_BUSY,
   a) use sqlite3_next_stmt to finalize dangling statements
   b) retry closing the connection

Step (1) ensures that FTS related statements are finalized.

Thanks.

On Sat, Jun 7, 2014 at 7:49 PM, gwenn gwenn.k...@gmail.com wrote:
> Hello,
> How do you prevent double free/finalize of statements created by the FTS module? I am using sqlite3_next_stmt to finalize all dangling statements before closing the connection, but the program crashes because the FTS module finalizes them too when sqlite3_close is called...
> Maybe I should use sqlite3_close_v2 (not available on Mac OS X: SQLite version 3.7.13)?
> Thanks
[sqlite] FTS module and DB close
Hello,

How do you prevent double free/finalize of statements created by the FTS module? I am using sqlite3_next_stmt to finalize all dangling statements before closing the connection, but the program crashes because the FTS module finalizes them too when sqlite3_close is called...

Maybe I should use sqlite3_close_v2 (not available on Mac OS X: SQLite version 3.7.13)?

Thanks
[sqlite] Statement cache does not play well with sqlite3_stmt_readonly
Hello,

The function sqlite3_stmt_readonly returns true/1 for the statement DROP TABLE IF EXISTS test when the table test does not exist. But if this drop statement is cached, sqlite3_stmt_readonly still returns true even after creating the table test. The only way I've found to make sqlite3_stmt_readonly return false is to execute/step the drop statement. Do you know another way to make SQLite reevaluate the readonly status?

Thanks.
Re: [sqlite] Statement cache does not play well with sqlite3_stmt_readonly
Ok, Thanks.

On Tue, Jun 3, 2014 at 8:42 PM, Richard Hipp d...@sqlite.org wrote:
> On Tue, Jun 3, 2014 at 2:27 PM, gwenn gwenn.k...@gmail.com wrote:
>> Hello,
>> The function sqlite3_stmt_readonly returns true/1 for the statement DROP TABLE IF EXISTS test when the table test does not exist. But if this drop statement is cached, sqlite3_stmt_readonly still returns true even after creating the table test. The only way I've found to make sqlite3_stmt_readonly return false is to execute/step the drop statement. Do you know another way to make SQLite reevaluate the readonly status?
>
> Thank you for the bug report. Because this is a very minor issue and because we are well into the test cycle for version 3.8.5 already, we are going to defer looking into this problem until after the 3.8.5 release.
>
> --
> D. Richard Hipp
> d...@sqlite.org
Re: [sqlite] importing CSV with a random empty line at the end
Hello,

It doesn't fail for me (it may depend on the constraints on the target table) but the behaviour is unexpected:

$ echo '1|test' > empty.csv
$ sqlite3
SQLite version 3.8.4.3 2014-04-03 16:53:12
sqlite> create table test(opt text, data text not null);
sqlite> .import empty.csv test
empty.csv:2: expected 2 columns but found 1 - filling the rest with NULL
sqlite> select * from test;
1|test
|test

As the bindings are not cleared, it is not a null value but the previously bound value which is inserted.

Regards.

On Mon, Jun 2, 2014 at 5:41 PM, Gert Van Assche ger...@gmail.com wrote:
> All, I received 100.000 UTF-8 files (average size 50kb) ready for import in an SQLite db. 90% of them go fine, but some files have an empty line at the very end of the file (so an extra EOL before the EOF). Of course, the import fails... Is there an easy way to get rid of that extra empty line before I import the file, or is there a way to ignore an empty line?
>
> thanks
> Gert
Re: [sqlite] Small bug with .import shell command
Hello,

May I suggest a patch?

--- shell.c	2014-05-26 18:51:40.0 +0200
+++ shell.orig.c	2014-05-26 18:50:37.0 +0200
@@ -1917,8 +1917,6 @@
     if( c=='\n' ){
       p->nLine++;
       if( p->n>0 && p->z[p->n-1]=='\r' ) p->n--;
-    }else if( !p->z && c==cSep ){
-      csv_append_char(p, 0);
     }
     p->cTerm = c;
   }

Regards.

On Sat, May 24, 2014 at 9:46 AM, gwenn gwenn.k...@gmail.com wrote:
> Hello,
> When the first value of the first line is empty, .import fails:
>
> $ echo '|test' > ko.csv
> $ echo '|test' > ok.csv
> $ sqlite3
> SQLite version 3.8.4.3 2014-04-03 16:53:12
> sqlite> .import ko.csv test
> ko.csv: empty file
> sqlite> .import ok.csv test
> sqlite>
>
> An error happens also when the table already exists:
>
> sqlite> create table test (id text, data text);
> sqlite> .import ko.csv test
> ko.csv:1: expected 2 columns but found 1 - extras ignored
> sqlite>
>
> Maybe CSVReader.z should be pre-allocated?
> Regards.
[sqlite] Small bug with .import shell command
Hello,

When the first value of the first line is empty, .import fails:

$ echo '|test' > ko.csv
$ echo '|test' > ok.csv
$ sqlite3
SQLite version 3.8.4.3 2014-04-03 16:53:12
sqlite> .import ko.csv test
ko.csv: empty file
sqlite> .import ok.csv test
sqlite>

An error happens also when the table already exists:

sqlite> create table test (id text, data text);
sqlite> .import ko.csv test
ko.csv:1: expected 2 columns but found 1 - extras ignored
sqlite>

Maybe CSVReader.z should be pre-allocated?

Regards.
[sqlite] Shared-Cache Mode
Hello,

Is there any way to know if a connection participates in shared-cache mode? I've read http://sqlite.org/sharedcache.html which specifies how to set but not how to get the mode!

Regards.
[sqlite] Retrieve a int or a sqlite_int64
Hello,

Is there any way to differentiate a value persisted with sqlite3_bind_int from another persisted with sqlite3_bind_int64? How to know which method between sqlite3_value_int and sqlite3_value_int64 should be used to retrieve the value back?

Regards.
Re: [sqlite] Retrieve a int or a sqlite_int64
Ok, Thanks.

On Tue, May 13, 2014 at 11:59 PM, Teg t...@djii.com wrote:
> Hello Charles,
>
> Tuesday, May 13, 2014, 3:12:09 PM, you wrote:
> CS> Load it with sqlite3_value_int64 every time. If the number fits in a 32 bit
> CS> integer, then you can store it in one.
>
> This is what I do. Everything is 64 bits to be future proof.
>
> CS> Charles
>
> --
> Best regards,
> Teg  t...@djii.com
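The answer in this thread ("load it with sqlite3_value_int64 every time") follows from the fact that SQLite records a single INTEGER storage class regardless of which bind API produced the value. A quick illustrative check with Python's stdlib sqlite3 (table and column names are ours):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
# One value fitting in 32 bits, one requiring 64 bits.
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2**40,)])

# Both rows report the same storage class: nothing in the database
# remembers which width was used to bind them.
rows = conn.execute("SELECT v, typeof(v) FROM t ORDER BY v").fetchall()
print(rows)
```

Since the storage class cannot distinguish the two, always reading with sqlite3_value_int64 (and narrowing afterwards if the value fits) is the safe choice.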
[sqlite] Unqualified table name and pragma
Hello,

SQLite behaviour is consistent (the temp database is searched first):

create table test (main text);
create temporary table test (temporary text);
insert into test values ('unqualified'); -- in temp table
select * from test; -- temp table
-- unqualified
pragma table_info(test); -- temp table
-- 0|temporary|text|0||0

But the pragma documentation page is misleading:
http://sqlite.org/pragma.html
"A pragma may have an optional database name before the pragma name. The database name is the name of an ATTACH-ed database or it can be "main" or "temp" for the main and the TEMP databases. If the optional database name is omitted, "main" is assumed."

pragma table_info(test);
-- is the same as
pragma temp.table_info(test);
-- and not:
pragma main.table_info(test);

Regards.
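The shadowing described above can be confirmed from Python's stdlib sqlite3. This is our own sketch; the column names main_col/temp_col are hypothetical markers chosen to make the resolution visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (main_col TEXT)")
conn.execute("CREATE TEMPORARY TABLE test (temp_col TEXT)")

# The unqualified pragma resolves the name against temp first,
# mirroring ordinary table-name resolution; qualifying with main.
# reaches the shadowed table.
unqualified = [row[1] for row in conn.execute("PRAGMA table_info(test)")]
qualified = [row[1] for row in conn.execute("PRAGMA main.table_info(test)")]
print(unqualified, qualified)
```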
[sqlite] Different behaviour of auxiliary data between 3.7.17 and 3.8.0
Hello,

I am not sure but it seems there is a regression between versions 3.7.17 and 3.8.0. It impacts custom/user-declared functions and auxiliary data.

sqlite-amalgamation-3071700 gwen$ gcc -I/usr/local/Cellar/glib/2.38.2/include/glib-2.0 -I/usr/local/Cellar/glib/2.38.2/lib/glib-2.0/include sqlite3.c auxdata.c -o auxdata -L/usr/local/Cellar/glib/2.38.2/lib -lglib-2.0
sqlite-amalgamation-3071700 gwen$ ./auxdata
loop 1
(0) compiling...
z
(0) reusing...
y
loop 2
(0) reusing...
z
(0) reusing...
y

sqlite-amalgamation-3080300 gwen$ gcc -I/usr/local/Cellar/glib/2.38.2/include/glib-2.0 -I/usr/local/Cellar/glib/2.38.2/lib/glib-2.0/include sqlite3.c auxdata.c -o auxdata -L/usr/local/Cellar/glib/2.38.2/lib -lglib-2.0
sqlite-amalgamation-3080300 gwen$ ./auxdata
loop 1
(0) compiling...
z
(0) reusing...
y
loop 2
(0) compiling...
z
(0) reusing...
y

The auxiliary data is reused in the second loop with SQLite 3.7.17 but not with SQLite 3.8.0. What is the expected/correct behaviour?

Regards

Here is the content of auxdata.c:

#include <stdlib.h>
#include <stdio.h>
#include <glib.h>
#include "sqlite3.h"

static void log(void *pArg, int iErrCode, const char *zMsg) {
  printf("(%d) %s\n", iErrCode, zMsg);
}

static void glibRegexpDelete(void *p){
  GRegex *pRegex = (GRegex *)p;
  g_regex_unref(pRegex);
}

static void glibReplaceAllFunc(
  sqlite3_context *ctx,
  int argc,
  sqlite3_value **argv
){
  GError *err = NULL;
  GRegex *p;
  gchar *result = NULL;
  (void)argc;  /* Unused parameter */

  const gchar *str = (const gchar *) sqlite3_value_text(argv[1]);
  if (!str) {
    return;
  }
  const gchar *replacement = (const gchar *) sqlite3_value_text(argv[2]);
  if (!replacement) {
    sqlite3_result_error(ctx, "no replacement string", -1);
    return;
  }

  p = sqlite3_get_auxdata(ctx, 0);
  if( !p ){
    const gchar *re = (const gchar *) sqlite3_value_text(argv[0]);
    if( !re ){
      /*sqlite3_result_error(ctx, "no regexp", -1);*/
      return;
    }
    p = g_regex_new(re, 0, 0, &err);
    if( p ){
      sqlite3_set_auxdata(ctx, 0, p, glibRegexpDelete);
    }else{
      char *e2 = sqlite3_mprintf("%s: %s", re, err->message);
      sqlite3_result_error(ctx, e2, -1);
      sqlite3_free(e2);
      g_error_free(err);
      return;
    }
    sqlite3_log(0, "compiling...");
  } else {
    sqlite3_log(0, "reusing...");
  }

  result = g_regex_replace(p, str, -1, 0, replacement, 0, &err);
  if (err) {
    sqlite3_result_error(ctx, err->message, -1);
    g_error_free(err);
    return;
  }
  sqlite3_result_text(ctx, result, -1, g_free);
}

int main(int argc, char **argv) {
  sqlite3_config(SQLITE_CONFIG_LOG, log, NULL);
  sqlite3 *db = NULL;
  sqlite3_stmt *stmt = NULL;
  char *zErrMsg = NULL;
  const char *z;
  int rc = 0;
  rc = sqlite3_open_v2(":memory:", &db, SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
  if (db == NULL || SQLITE_OK != rc) {
    fprintf(stderr, "Error: unable to open database: %s\n", sqlite3_errmsg(db));
    exit(1);
  }
  sqlite3_create_function_v2(db, "regex_replace", 3, SQLITE_UTF8, 0, glibReplaceAllFunc, NULL, NULL, NULL);
  rc = sqlite3_prepare_v2(db, "select regex_replace('.', 'abcde', r) from (select 'z' as r union all select 'y')", -1, &stmt, NULL);
  if (stmt == NULL || SQLITE_OK != rc) {
    fprintf(stderr, "Error: prepare stmt: %s\n", sqlite3_errmsg(db));
    exit(1);
  }
  for (int i = 1; i <= 2; i++) {
    printf("loop %d\n", i);
    rc = sqlite3_step(stmt);
    while (rc == SQLITE_ROW) {
      z = (const char*)sqlite3_column_text(stmt, 0);
      printf("%s\n", z);
      rc = sqlite3_step(stmt);
    }
    if (SQLITE_OK != rc && SQLITE_DONE != rc) {
      fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
      exit(1);
    }
    rc = sqlite3_reset(stmt);
    if (SQLITE_OK != rc) {
      fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
      exit(1);
    }
  }
  sqlite3_finalize(stmt);
  sqlite3_close(db);
}
Re: [sqlite] Different behaviour of auxiliary data between 3.7.17 and 3.8.0
Here you are:

#include <stdlib.h>
#include <stdio.h>
#include "sqlite3.h"

static void reuseAuxDataCountFunc(
  sqlite3_context *ctx,
  int argc,
  sqlite3_value **argv
){
  int *reuseAuxDataCount;
  (void)argc;  /* Unused parameter */
  reuseAuxDataCount = (int*)sqlite3_get_auxdata(ctx, 0);
  if (reuseAuxDataCount == NULL) {
    reuseAuxDataCount = (int *)malloc(sizeof(int));
    if (reuseAuxDataCount == NULL) {
      sqlite3_result_error_nomem(ctx);
      return;
    }
    *reuseAuxDataCount = 0;
    sqlite3_set_auxdata(ctx, 0, reuseAuxDataCount, free);
  } else {
    (*reuseAuxDataCount)++;
  }
  sqlite3_result_int(ctx, *reuseAuxDataCount);
}

int main(int argc, char **argv) {
  sqlite3 *db = NULL;
  sqlite3_stmt *stmt = NULL;
  const char *z;
  int rc = 0;
  rc = sqlite3_open_v2(":memory:", &db, SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
  if (db == NULL || SQLITE_OK != rc) {
    fprintf(stderr, "Error: unable to open database: %s\n", sqlite3_errmsg(db));
    exit(1);
  }
  sqlite3_create_function_v2(db, "reuseAuxDataCountFunc", 1, SQLITE_UTF8, 0, reuseAuxDataCountFunc, NULL, NULL, NULL);
  /* at least one constant must be passed to make SQLite reuse auxiliary data... */
  rc = sqlite3_prepare_v2(db, "select reuseAuxDataCountFunc('test') from (select 1 union all select 2)", -1, &stmt, NULL);
  if (stmt == NULL || SQLITE_OK != rc) {
    fprintf(stderr, "Error: prepare stmt: %s\n", sqlite3_errmsg(db));
    exit(1);
  }
  for (int i = 1; i <= 2; i++) {
    printf("loop %d\n", i);
    rc = sqlite3_step(stmt);
    while (rc == SQLITE_ROW) {
      z = (const char*)sqlite3_column_text(stmt, 0);
      printf("%s\n", z);
      rc = sqlite3_step(stmt);
    }
    if (SQLITE_OK != rc && SQLITE_DONE != rc) {
      fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
      exit(1);
    }
    rc = sqlite3_reset(stmt);
    if (SQLITE_OK != rc) {
      fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
      exit(1);
    }
  }
  sqlite3_finalize(stmt);
  sqlite3_close(db);
}

sqlite-amalgamation-3071700 gwen$ ./auxdata
loop 1
0
1
loop 2
2
3
sqlite-amalgamation-3080300 gwen$ ./auxdata
loop 1
0
1
loop 2
0
1

But it appears that SQLite is behaving as specified in:
http://sqlite.org/c3ref/get_auxdata.html
"SQLite is free to discard the metadata at any time, including: ... when sqlite3_reset() or sqlite3_finalize() is called for the SQL statement, or ..."

Sorry for the false alarm. I will try to find another strategy to keep the compiled regexp...

Regards.

On Sun, Feb 9, 2014 at 7:34 PM, Richard Hipp d...@sqlite.org wrote:
> Can you provide an example program that omits the glib.h dependency?
>
> On Sun, Feb 9, 2014 at 10:50 AM, gwenn gwenn.k...@gmail.com wrote:
>> Hello,
>> I am not sure but it seems there is a regression between versions 3.7.17 and 3.8.0. It impacts custom/user-declared functions and auxiliary data.
>> [original report and auxdata.c trimmed]
Re: [sqlite] Different behaviour of auxiliary data between 3.7.17 and 3.8.0
Yes, you are right.
Thanks for the investigation.

On Sun, Feb 9, 2014 at 11:54 PM, Richard Hipp d...@sqlite.org wrote:
> This behavior change is in response to ticket
> http://www.sqlite.org/src/info/406d3b2ef9 - a diff across several check-ins that makes this change can be seen here:
> http://www.sqlite.org/src/vdiff?from=b1b0de29fdf7de83&to=62465ecba7431e1d&sbs=1&dc=25
>
> Note that the behavior change brings the implementation into agreement with the historical documentation. The documentation was clarified and enhanced as part of this change. But the key statements in the old documentation were:
>
> "If [the sqlite3_set_auxdata destructor] is not NULL, SQLite will invoke the destructor function given by the 4th parameter to sqlite3_set_auxdata() on the metadata when the corresponding function parameter changes or when the SQL statement completes, whichever comes first. SQLite is free to call the destructor and drop metadata on any parameter of any function at any time. The only guarantee is that the destructor will be called before the metadata is dropped."
>
> The corresponding text in the revised documentation is similar:
>
> "SQLite is free to discard the metadata at any time, including:
>  * when the corresponding function parameter changes, or
>  * when [sqlite3_reset()] or [sqlite3_finalize()] is called for the SQL statement, or
>  * when sqlite3_set_auxdata() is invoked again on the same parameter, or
>  * during the original sqlite3_set_auxdata() call when a memory allocation error occurs."
>
> The revised documentation is on the website here:
> http://www.sqlite.org/c3ref/get_auxdata.html
>
> So as far as I can tell, the current implementation is doing what it is supposed to do. Or did I misunderstand the complaint?
>
> On Sun, Feb 9, 2014 at 10:50 AM, gwenn gwenn.k...@gmail.com wrote:
>> Hello,
>> I am not sure but it seems there is a regression between versions 3.7.17 and 3.8.0. It impacts custom/user-declared functions and auxiliary data.
>> [original report and auxdata.c trimmed]
Re: [sqlite] Any tool to create erd from sqlite database?
Hello,

There is a minimalist one here:
https://github.com/gwenn/sqliterd
It depends on C/Go compilers and the graphviz dot command...

Regards.

On Fri, Sep 20, 2013 at 2:26 PM, Jason H scorp...@yahoo.com wrote:
> Don't forget about ODBC tools... Just use an SQLite ODBC driver...
>
> From: dd durga.d...@gmail.com
> To: General Discussion of SQLite Database sqlite-users@sqlite.org
> Sent: Friday, September 20, 2013 8:11 AM
> Subject: [sqlite] Any tool to create erd from sqlite database?
>
> I am looking for a tool which generates ER diagrams from an existing database. Any suggestions?
> Thanks in advance.
Re: [sqlite] sqlite3: .import command handles quotation incorrectly
Hello,

I've tested the improved .import command and it seems that there is a bug with an empty not-quoted field:

$ cat empty.csv
A|B
|
$ ./a.out
SQLite version 3.8.0 2013-06-28 23:55:45
sqlite> .import empty.csv test
empty.csv:2: expected 2 columns but found 1 - filling the rest with NULL
sqlite>

I am not sure, but it seems to be here:

  }else{
    csv_append_char(p, c); // FIXME
    while( (c = fgetc(p->in))!=EOF && c!=cSep && c!='\n' ){

Regards.

On Thu, Jun 27, 2013 at 1:01 AM, Richard Hipp d...@sqlite.org wrote:
> On Wed, Jun 26, 2013 at 6:23 PM, RSmith rsm...@rsweb.co.za wrote:
>> I have done ludicrous amounts of testing and evaluating imports for and from CSVs
>
> I made a go at improving the CSV importer for the upcoming SQLite 3.8.0 release. Please see the latest trunk check-in. Your expert feedback would certainly be welcomed here.
>
> Note that in the new .import command, the table named in the second argument need not exist now, and the shell will create it for you automatically, giving it column names as determined by the first row of the CSV file. That seemed like it might be a handy feature.
>
> The other changes to the new .import are that it issues error messages (but tries to continue muddling through) if the input does not conform to rfc4180, and it correctly handles quoted data that extends across multiple lines or that contains embedded commas.
>
> --
> D. Richard Hipp
> d...@sqlite.org
[sqlite] Timezone is supported by date/time functions but is not documented
Hello,

SQLite's datetime function correctly parses timestrings with a timezone:

sqlite> select datetime('2013-04-30T18:38:54Z');
2013-04-30 18:38:54
sqlite> select datetime('2013-04-30T20:38:54+02:00');
2013-04-30 18:38:54

But this is not documented:
http://sqlite.org/lang_datefunc.html
http://sqlite.org/datatype3.html#datetime

May I suggest updating these pages accordingly.

Thanks.
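The two transcripts above can be reproduced programmatically; a small sketch using Python's stdlib sqlite3, which simply forwards the SQL to SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A trailing 'Z' and an explicit UTC offset denote the same instant;
# both are accepted and normalized to UTC.
utc = conn.execute("SELECT datetime('2013-04-30T18:38:54Z')").fetchone()[0]
local = conn.execute("SELECT datetime('2013-04-30T20:38:54+02:00')").fetchone()[0]
print(utc, local)
```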
Re: [sqlite] How to verify referential integrity of SQLite database
Hello,

You can give the following tool a try if you want:
https://github.com/gwenn/checkfkey
But I'm not sure that it correctly handles composite keys.

Regards.

On Thu, Dec 13, 2012 at 4:22 PM, Jean-Christophe Deschamps j...@antichoc.net wrote:
> Jay A. Kreibich wrote:
>>> I can also see situations when someone might want to run one set or the other set of checks. Breaking it out, so that these checks are done by a different PRAGMA (integrity_check_v2 ?) seems like a wise idea.
>>
>> Indeed; with a separate PRAGMA fk_integrity_check, it would be possible to run the check even when foreign keys are not currently enabled. This would be a useful thing to do just before enabling foreign keys.
>
> Isn't something else than a pragma more appropriate?
>
> SELECT consistency_check() FROM mytable;
> would return rows from a specific table where any constraint, unicity or FK is violated:
> rowid | constraint_name | diag_code
>
> SELECT consistency_check_all();
> would return rows from every table in turn where any constraint, unicity or FK is violated:
> table_name | rowid | constraint_name | diag_code
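Shortly after this thread, SQLite 3.7.16 (2013) added PRAGMA foreign_key_check, which performs essentially the check discussed here, even while foreign keys are disabled. An illustrative sketch via Python's stdlib sqlite3, with table names of our own choosing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Foreign keys stay disabled (the default), so a violating row can be inserted.
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (pid INTEGER REFERENCES parent(id));
    INSERT INTO child VALUES (99);  -- no parent row with id 99
""")

# Each result row: (table containing the violation, rowid, referenced
# table, index of the violated foreign key constraint).
violations = conn.execute("PRAGMA foreign_key_check").fetchall()
print(violations)
```

This would be the natural "run the check before enabling foreign keys" workflow that the quoted message asks for.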
Re: [sqlite] Bug: Inconsistency wrt. indirectly defined columns
If you want, you can verify automatically that all the FK columns have a type matching the referenced columns by using (and tweaking) an old tool named 'genfkey' (see http://www.sqlite.org/faq.html#q22 but the 'readme' link is broken). Regards. On Thu, Nov 8, 2012 at 6:29 PM, Simon Slavin slav...@bigfraud.org wrote: On 8 Nov 2012, at 5:27pm, stahl...@dbs.uni-hannover.de wrote: But inferring the FK's type from the referenced PK would cause applications which rely on the FK's type affinity being 'none' to be broken, no? At this sort of level of bug-compatibility, you have to say "Will not be fixed until SQLite4." Simon.
Re: [sqlite] VTab xRename
Thanks for your suggestion Jay.

static sqlite3_module csvModule = {
  0,              /* iVersion */
  csvCreate,      /* xCreate - create a table */
  csvConnect,     /* xConnect - connect to an existing table */
  csvBestIndex,   /* xBestIndex - Determine search strategy */
  csvDisconnect,  /* xDisconnect - Disconnect from a table */
  csvDestroy,     /* xDestroy - Drop a table */
  csvOpen,        /* xOpen - open a cursor */
  csvClose,       /* xClose - close a cursor */
  csvFilter,      /* xFilter - configure scan constraints */
  csvNext,        /* xNext - advance a cursor */
  csvEof,         /* xEof */
  csvColumn,      /* xColumn - read data */
  csvRowid,       /* xRowid - read data */
  0,              /* xUpdate - write data */
  0,              /* xBegin - begin transaction */
  0,              /* xSync - sync transaction */
  0,              /* xCommit - commit transaction */
  0,              /* xRollback - rollback transaction */
  0,              /* xFindFunction - function overloading */
  0               /* xRename - rename the table */
};

sqlite> .load ./csv.sqlext
sqlite> create virtual table test using csv(test1.csv, ',', USE_HEADER_ROW);
sqlite> select * from test;
1|2|3
a|b|c
a|b|c
a|b|c
..
z
a|b|c,d
sqlite> alter table test rename to test1;
sqlite> select * from test;
Error: no such table: test
sqlite> select * from test1;
1|2|3
a|b|c
a|b|c
a|b|c
..
z
a|b|c,d
sqlite>

So it seems that SQLite properly handles a virtual table rename even when xRename is not specified by the module. Regards. On Tue, Oct 23, 2012 at 10:50 PM, Jay A. Kreibich j...@kreibi.ch wrote: On Tue, Oct 23, 2012 at 10:16:07PM +0200, gwenn scratched on the wall: Hello, The documentation says the xRename function is mandatory: http://sqlite.org/vtab.html#xrename "The xRename method is required for every virtual table implementation." But it seems possible to not specify it: static const sqlite3_module fts3aux_module = { ... 0, /* xRename */ ... }; And when you attempt to rename the table, what happens? The virtual table interface is "advanced", in the sense that there are very few safety nets or double-checks. It is designed to be used by an intelligent programmer that knows their stuff. You need to do what the docs say, exactly, or something bad can happen. That's not to say something bad will happen right away. The fact that you can assign a NULL function pointer to the xRename() function only means the system is not double-checking your work when you pass in the structure... it does not mean that passing a NULL is allowed. I strongly suspect that if you do not provide an xRename() function, and someone attempts to rename the table, the whole application will simply crash. Your fault. -j -- Jay A. Kreibich J A Y @ K R E I B I.C H "Intelligence is like underwear: it is important that you have it, but showing it to the wrong people has the tendency to make them feel uncomfortable." -- Angela Johnson
[sqlite] VTab xRename
Hello, The documentation says the xRename function is mandatory: http://sqlite.org/vtab.html#xrename The xRename method is required for every virtual table implementation. But it seems possible to not specify it: static const sqlite3_module fts3aux_module = { ... 0, /* xRename */ ... }; Regards. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Jdbc Blob Incremental I/O
Hi, I've been trying to support incremental I/O in a JDBC driver. By forcing the user to access the rowid before the blob, it's possible to use only the JDBC API for loading a Blob:

// CREATE TABLE test (data BLOB);
// INSERT INTO test (data) VALUES (zeroblob(1024));
ResultSet rs = stmt.executeQuery("SELECT rowid, data FROM test");
rs.getRowId(1);
final Blob blob = rs.getBlob(2);
...

Indeed, with the rowId kept internally by the driver and with the column index (2), I can retrieve all the data needed by 'sqlite3_blob_open': dbName, tblName, colName. I will also support this alternative:

PreparedStatement pstmt = c.prepareStatement("SELECT data FROM test WHERE rowid = :rowid");
pstmt.setRowId(1, ...); // rowId value kept internally
ResultSet rs = pstmt.executeQuery();
final Blob blob = rs.getBlob(1);

But for update/insert, it doesn't work because sqlite3_column_name, sqlite3_column_origin_name, sqlite3_column_table_name and sqlite3_column_database_name can only be used with SELECT:

PreparedStatement pstmt = c.prepareStatement("UPDATE test SET data = :blob WHERE rowid = :rowid");
pstmt.setRowId(2, new RowIdImpl(rowid));
pstmt.setBinaryStream(1, new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5, 6})); // fails
pstmt.executeUpdate();

Do you see a way to write a blob incrementally by using only the JDBC API? (I've checked all the other implementations, but they don't support reading, nor writing...) Thanks. (The driver is here: https://github.com/gwenn/sqlite-jna/tree/master/src/main/java/org/sqlite/driver)
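For illustration, the bookkeeping the driver needs is visible from any binding: sqlite3_blob_open() takes (db, table, column, rowid), and selecting the rowid alongside the blob column supplies the piece a driver cannot otherwise recover. A sketch with Python's stdlib sqlite3 module (ordinary row access here, not incremental I/O):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (data BLOB)")
con.execute("INSERT INTO test (data) VALUES (zeroblob(1024))")
# Selecting the rowid next to the blob column is what lets a driver later
# call sqlite3_blob_open(db, "main", "test", "data", rowid, flags) in C.
rowid, data = con.execute("SELECT rowid, data FROM test").fetchone()
print(rowid, len(data))  # → 1 1024
```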
Re: [sqlite] SQLite JDBC generated key
Thanks for your replies. I will add a tweak to ignore column access by name when running SELECT last_insert_rowid(); On Thu, Apr 5, 2012 at 2:17 AM, Kees Nuyt k.n...@zonnet.nl wrote: On Wed, 4 Apr 2012 21:08:24 +0200, gwenn gwenn.k...@gmail.com wrote: 2) Do you know if there are other bindings that implement/support this kind of feature ? I almost forgot to mention: SELECT last_insert_rowid(); http://www.sqlite.org/lang_corefunc.html -- Regards, Kees Nuyt ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] SQLite JDBC generated key
* In the JDBC API, there is a method to retrieve the generated key during an insert: http://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#getGeneratedKeys() * With the SQLite API, there is: sqlite3_last_insert_rowid. Let's suppose that: - the primary key is correctly declared to make it an alias for the rowid, - and the connection is not shared. 1) Do you know how to retrieve the column name of the primary key (the table name is not known)? 2) Do you know if there are other bindings that implement/support this kind of feature? Thanks.
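On question 1, one hedged approach (assuming the driver can first recover the table name from the INSERT statement, which the post notes is the hard part): PRAGMA table_info reports which column is the primary key, and a lone primary key column declared INTEGER is an alias for the rowid. A sketch with Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
cur = con.execute("INSERT INTO t (v) VALUES ('x')")
last_id = cur.lastrowid  # wraps sqlite3_last_insert_rowid()
print(last_id)  # → 1

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# a single pk column whose declared type is INTEGER aliases the rowid.
pk = [r for r in con.execute("PRAGMA table_info(t)") if r[5] == 1]
print(pk[0][1], pk[0][2])  # → id INTEGER
```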
[sqlite] Typo in source code comment
/*** EXPERIMENTAL ***
**
** Register a function to be invoked when a transaction comments.
** If the invoked function returns non-zero, then the commit becomes a
** rollback.
*/
SQLITE_API void *sqlite3_commit_hook(

:s/comments/commits/
[sqlite] Tiny correction
Hello, It seems that icuFunctionError can be simplified:

  char zBuf[128];
  sqlite3_snprintf(128, zBuf, "ICU error: %s(): %s", zName, u_errorName(e));
  zBuf[127] = '\0';  /* <- useless */

In the documentation: "As long as the buffer size is greater than zero, sqlite3_snprintf() guarantees that the buffer is always zero-terminated." Regards.
Re: [sqlite] sqlite3_set_auxdata invalid pointer
I guess that sqlite3_set_auxdata cannot be called by the xStep implementation of an aggregate function. The doc says: "The following two functions may be used by scalar SQL functions to associate metadata with argument values." I will try with sqlite3_aggregate_context. Sorry for the disturbance. On Sat, Jan 14, 2012 at 10:36 PM, gwenn gwenn.k...@gmail.com wrote: Hello, I am trying to add custom aggregate function support in a golang driver (scalar functions are ok). While testing, I got this:

*** glibc detected *** ./6.out: realloc(): invalid pointer: 0x02daa1c5 ***
=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x72656)[0x2b9a7b5da656]
/lib/x86_64-linux-gnu/libc.so.6(realloc+0x312)[0x2b9a7b5e0762]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x30387)[0x2b9a7b2ec387]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x16a5b)[0x2b9a7b2d2a5b]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x234da)[0x2b9a7b2df4da]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(sqlite3_set_auxdata+0xb6)[0x2b9a7b2e2d86]

I just have enough skills to debug with gdb and to find this line:

62056: pVdbeFunc = sqlite3DbRealloc(pCtx->s.db, pVdbeFunc, nMalloc);

Could you please help me find what I am doing wrong? I just call sqlite3_set_auxdata in my xStep function. Thanks.
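The fix suggested above (per-aggregate state via sqlite3_aggregate_context rather than auxdata) has a direct analogue in higher-level bindings, where state lives on an object created for each aggregate invocation. A sketch with Python's stdlib sqlite3 module:

```python
import sqlite3

# In the C API, per-invocation state for an aggregate belongs in
# sqlite3_aggregate_context(); sqlite3_set_auxdata() is documented for
# scalar functions only.  Python's binding models the same idea: state
# lives on the aggregate object, one instance per invocation.
class Product:
    def __init__(self):
        self.acc = 1          # the "aggregate context"
    def step(self, value):
        self.acc *= value
    def finalize(self):
        return self.acc

con = sqlite3.connect(":memory:")
con.create_aggregate("product", 1, Product)
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(2,), (3,), (4,)])
result = con.execute("SELECT product(v) FROM t").fetchone()[0]
print(result)  # → 24
```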
[sqlite] sqlite3_set_auxdata invalid pointer
Hello, I am trying to add custom aggregate function support in a golang driver (scalar functions are ok). While testing, I got this:

*** glibc detected *** ./6.out: realloc(): invalid pointer: 0x02daa1c5 ***
=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x72656)[0x2b9a7b5da656]
/lib/x86_64-linux-gnu/libc.so.6(realloc+0x312)[0x2b9a7b5e0762]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x30387)[0x2b9a7b2ec387]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x16a5b)[0x2b9a7b2d2a5b]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(+0x234da)[0x2b9a7b2df4da]
/home/gwen/Test/sqlite-autoconf-3070900/.libs/libsqlite3.so.0(sqlite3_set_auxdata+0xb6)[0x2b9a7b2e2d86]

I just have enough skills to debug with gdb and to find this line:

62056: pVdbeFunc = sqlite3DbRealloc(pCtx->s.db, pVdbeFunc, nMalloc);

Could you please help me find what I am doing wrong? I just call sqlite3_set_auxdata in my xStep function. Thanks.
Re: [sqlite] [csv extension] Error while reading long lines
Here is some tests: --- csv1.test +++ csv1.test @@ -23,10 +23,11 @@ # # csv-1.*: Creating/destroying csv tables. # csv-2.*: Linear scans of csv data. # csv-3.*: Test renaming an csv table. # csv-4.*: CREATE errors +# csv-5.*: Dirty header, long line, escaped quotes, escaped newlines. # ifcapable !csv { finish_test return @@ -36,10 +37,13 @@ set test1csv [file join [file dirname [info script]] test1.csv] # This file is delimited by '|' and has quoted fields. set test2csv [file join [file dirname [info script]] test2.csv] # This file is delimited by '|'. It does NOT have quoted fields. set test3csv [file join [file dirname [info script]] test3.csv] +# This file contains a dirty header, one long line, escaped quotes, escaped +# new lines. +set test4csv [file join [file dirname [info script]] test4.csv] # # Test cases csv-1.* test CREATE and DROP table statements. # @@ -249,5 +253,40 @@ catchsql CREATE VIRTUAL TABLE t1 USING csv('foo') } {1 {Error opening CSV file: 'foo'}} do_test csv-4.1.3 { catchsql CREATE VIRTUAL TABLE t1 USING csv(foo foo) } {1 {Error opening CSV file: 'foo foo'}} + +# +# Test cases csv-5.* test file with dirty header and long line. 
+# + +do_test csv-5.1.1 { + execsql CREATE VIRTUAL TABLE t1 USING csv('$test4csv') + execsql CREATE VIRTUAL TABLE t2 USING csv('$test4csv', ',', USE_HEADER_ROW) +} {} +do_test csv-5.1.2 { + execsql { +SELECT col1 FROM t1 limit 1 offset 1; + } +} {123456789} +do_test csv-5.1.3 { + execsql { +SELECT * FROM t1 limit 1 offset 3; + } +} {{123456789 +} { +} {} {123456789 +} {1234\\567'89 +} {123456789 +} {123456789 +} 1234\5678\9 123456789\ {}} +do_test csv-5.1.4 { + execsql { +SELECT col1,col2,col3 FROM t1 limit 1 offset 4; + } +} {{} ' {}} +do_test csv-5.1.5 { + execsql { +SELECT col1 FROM t1 limit 1 offset 5; + } +} {'} And the test file (test4.csv): col 1,col.2,col-3,col!4,c...@5,col;6,col%7,col*8,col=9,col'10 123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 123456789 , ,,123456789 ,1234\\567'89 ,123456789 ,123456789 ,123456789,123456789, ,', ' On Thu, May 13, 2010 at 9:28 PM, gwenn gwenn.k...@gmail.com wrote: Done! Index: ext/csv/csv.c === --- ext/csv/csv.c +++ ext/csv/csv.c @@ -60,10 +60,11 @@ char *zRow; /* Buffer for current CSV row */ char cDelim; /* Character to use for delimiting columns */ int nCol;/* Number of columns in current row */ int maxCol; /* Size of aCols array */ char **aCols;/* Array of parsed columns */ + int *aEscapedQuotes; /* Number of escaped quotes for each column in aCols */ }; /* ** An CSV cursor object. 
@@ -120,10 +121,11 @@ */ static char *csv_getline( CSV *pCSV ){ int n = 0; int bEol = 0; int bShrink = 0; + int bQuotedCol = 0; /* allocate initial row buffer */ if( pCSV-maxRow 1 ){ pCSV-zRow = sqlite3_malloc( 100 ); if( pCSV-zRow ){ @@ -135,10 +137,13 @@ /* read until eol */ while( !bEol ){ /* grow row buffer as needed */ if( n+100pCSV-maxRow ){ int newSize = pCSV-maxRow*2 + 100; + if( newSize=pCSV-db-aLimit[SQLITE_LIMIT_LENGTH] ){ +return 0; + } char *p = sqlite3_realloc(pCSV-zRow, newSize); if( !p ) return 0; pCSV-maxRow = newSize; pCSV-zRow = p; bShrink = -1; @@ -150,19 +155,32 @@ pCSV-zRow[n] = '\0'; bEol = -1; break; } /* look for line delimiter */ -while( pCSV-zRow[n] ){ n++; } -if( (n0) ((pCSV-zRow[n-1]=='\n') || (pCSV-zRow[n-1]=='\r')) ){ +while( pCSV-zRow[n] ){ + if( pCSV-zRow[n]=='\' ){ +if( bQuotedCol ) { + if( pCSV-zRow[n+1]=='\' ) { /* escaped */ +n++; + }else{ +bQuotedCol = 0; + } +}else if( n==0 || pCSV-zRow[n-1]==pCSV-cDelim ){ + bQuotedCol = 1; +} + } + n++; +} +if( (n0) ((pCSV-zRow[n-1]=='\n') || (pCSV-zRow[n-1]=='\r')) !bQuotedCol ){ pCSV-zRow[n-1] = '\n'; /* uniform line ending */ pCSV-zRow[n] = '\0'; bEol = -1; } } if( bShrink ){ -pCSV-zRow = realloc( pCSV-zRow, n+1 ); +pCSV-zRow = sqlite3_realloc( pCSV-zRow, n+1 ); pCSV-maxRow = n+1; } return bEol ? pCSV-zRow : 0; } @@ -317,15 +335,16 @@ /* allocate initial space for the column pointers */ if( pCSV-maxCol 1
Re: [sqlite] [csv extension] Error while reading long lines
if( pCSV-aEscapedQuotes[i] ){ + char *z; + + int nByte = (int)(strlen(col) - pCSV-aEscapedQuotes[i]); + if( nBytepCSV-db-aLimit[SQLITE_LIMIT_LENGTH] ){ +sqlite3_result_error_toobig( ctx ); +z = 0; + }else{ +z = sqlite3_malloc( nByte ); +if( !z ){ + sqlite3_result_error_nomem( ctx ); +} + } + if( z ){ +int j,k; +for(j=0, k=0; col[j]; j++){ + z[k++] = col[j]; + if( col[j]=='\' ){ +/* unescape quote */ +j++; + } +} +z[k] = 0; +sqlite3_result_text( ctx, z, k, sqlite3_free ); + } }else{ sqlite3_result_text( ctx, col, -1, SQLITE_TRANSIENT ); } } @@ -473,10 +535,11 @@ /* finalize any prepared statements here */ csv_close( pCSV ); if( pCSV-zRow ) sqlite3_free( pCSV-zRow ); if( pCSV-aCols ) sqlite3_free( pCSV-aCols ); +if( pCSV-aEscapedQuotes ) sqlite3_free( pCSV-aEscapedQuotes ); sqlite3_free( pCSV ); } return 0; } @@ -539,10 +602,11 @@ return SQLITE_NOMEM; } /* intialize virtual table object */ memset(pCSV, 0, sizeof(CSV)+nDb+nName+nFile+3); + pCSV-db = db; pCSV-nBusy = 1; pCSV-base.pModule = csvModule; pCSV-cDelim = cDelim; pCSV-zDb = (char *)pCSV[1]; pCSV-zName = pCSV-zDb[nDb+1]; @@ -608,11 +672,11 @@ *pzErr = sqlite3_mprintf(%s, aErrMsg[4]); sqlite3_free(zSql); csvRelease( pCSV ); return SQLITE_ERROR; } - zSql = sqlite3_mprintf(%s%s%s, zTmp, zCol, zTail); + zSql = sqlite3_mprintf(%s\%s\%s, zTmp, zCol, zTail); }else{ zSql = sqlite3_mprintf(%scol%d%s, zTmp, i+1, zTail); } sqlite3_free(zTmp); } I'll add some testcases... On Sat, May 8, 2010 at 9:44 PM, gwenn gwenn.k...@gmail.com wrote: Ok, I've just added support to embedded new lines and partial support to escaped double-quotes. By partial support, I mean they are not unescaped yet... 
Index: ext/csv/csv.c === --- ext/csv/csv.c +++ ext/csv/csv.c @@ -120,10 +120,11 @@ */ static char *csv_getline( CSV *pCSV ){ int n = 0; int bEol = 0; int bShrink = 0; + int bQuotedCol = 0; /* allocate initial row buffer */ if( pCSV-maxRow 1 ){ pCSV-zRow = sqlite3_malloc( 100 ); if( pCSV-zRow ){ @@ -150,19 +151,32 @@ pCSV-zRow[n] = '\0'; bEol = -1; break; } /* look for line delimiter */ -while( pCSV-zRow[n] ){ n++; } -if( (n0) ((pCSV-zRow[n-1]=='\n') || (pCSV-zRow[n-1]=='\r')) ){ +while( pCSV-zRow[n] ){ + if( pCSV-zRow[n]=='\' ){ +if( bQuotedCol ) { + if( pCSV-zRow[n+1]=='\' ) { /* escaped */ +n++; + }else{ +bQuotedCol = 0; + } +}else if( n==0 || pCSV-zRow[n-1]==pCSV-cDelim ){ + bQuotedCol = 1; +} + } + n++; +} +if( (n0) ((pCSV-zRow[n-1]=='\n') || (pCSV-zRow[n-1]=='\r')) !bQuotedCol ){ pCSV-zRow[n-1] = '\n'; /* uniform line ending */ pCSV-zRow[n] = '\0'; bEol = -1; } } if( bShrink ){ -pCSV-zRow = realloc( pCSV-zRow, n+1 ); +pCSV-zRow = sqlite3_realloc( pCSV-zRow, n+1 ); pCSV-maxRow = n+1; } return bEol ? 
pCSV-zRow : 0; } @@ -332,17 +346,23 @@ do{ /* if it begins with a quote, assume it's a quoted col */ if( *s=='\' ){ s++; /* skip quote */ pCSV-aCols[nCol] = s; /* save pointer for this col */ - /* TBD: handle escaped quotes */ /* find closing quote */ - s = strchr(s, '\'); - if( !s ){ -/* no closing quote */ -pCSV-eof = -1; -return SQLITE_ERROR; + while( 1 ){ +s = strchr(s, '\'); +if( !s ){ + /* no closing quote */ + pCSV-eof = -1; + return SQLITE_ERROR; +}else if ( *(s+1)=='\' ){ + /* TBD: replace all escaped quotes by a single one */ + s+=2; +}else{ + break; +} } *s = '\0'; /* null terminate this col */ /* fall through and look for following ,\n */ s++; }else{ @@ -608,11 +628,11 @@ *pzErr = sqlite3_mprintf(%s, aErrMsg[4]); sqlite3_free(zSql); csvRelease( pCSV ); return SQLITE_ERROR; } - zSql = sqlite3_mprintf(%s%s%s, zTmp, zCol, zTail); + zSql = sqlite3_mprintf(%s\%s\%s, zTmp, zCol, zTail); }else{ zSql = sqlite3_mprintf(%scol%d%s, zTmp, i+1, zTail); } sqlite3_free(zTmp); } On Sat, May 8, 2010 at 3:45 PM, gwenn gwenn.k...@gmail.com wrote: While looking in csv1.test, I found a solution to the case when header row contains spaces: just wrap the column name with double quotes. Index: ext/csv/csv.c
Re: [sqlite] [csv extension] Error while reading long lines
While looking in csv1.test, I found a solution to the case when header row contains spaces: just wrap the column name with double quotes. Index: ext/csv/csv.c === --- ext/csv/csv.c +++ ext/csv/csv.c @@ -158,11 +158,11 @@ pCSV-zRow[n] = '\0'; bEol = -1; } } if( bShrink ){ -pCSV-zRow = realloc( pCSV-zRow, n+1 ); +pCSV-zRow = sqlite3_realloc( pCSV-zRow, n+1 ); pCSV-maxRow = n+1; } return bEol ? pCSV-zRow : 0; } @@ -608,11 +608,11 @@ *pzErr = sqlite3_mprintf(%s, aErrMsg[4]); sqlite3_free(zSql); csvRelease( pCSV ); return SQLITE_ERROR; } - zSql = sqlite3_mprintf(%s%s%s, zTmp, zCol, zTail); + zSql = sqlite3_mprintf(%s\%s\%s, zTmp, zCol, zTail); }else{ zSql = sqlite3_mprintf(%scol%d%s, zTmp, i+1, zTail); } sqlite3_free(zTmp); } Index: ext/csv/csv1.test === --- ext/csv/csv1.test +++ ext/csv/csv1.test @@ -23,10 +23,11 @@ # # csv-1.*: Creating/destroying csv tables. # csv-2.*: Linear scans of csv data. # csv-3.*: Test renaming an csv table. # csv-4.*: CREATE errors +# csv-5.*: Dirty header and long line. # ifcapable !csv { finish_test return @@ -36,10 +37,12 @@ set test1csv [file join [file dirname [info script]] test1.csv] # This file is delimited by '|' and has quoted fields. set test2csv [file join [file dirname [info script]] test2.csv] # This file is delimited by '|'. It does NOT have quoted fields. set test3csv [file join [file dirname [info script]] test3.csv] +# This file contains a dirty header and one long line. +set test4csv [file join [file dirname [info script]] test4.csv] # # Test cases csv-1.* test CREATE and DROP table statements. # @@ -249,5 +252,14 @@ catchsql CREATE VIRTUAL TABLE t1 USING csv('foo') } {1 {Error opening CSV file: 'foo'}} do_test csv-4.1.3 { catchsql CREATE VIRTUAL TABLE t1 USING csv(foo foo) } {1 {Error opening CSV file: 'foo foo'}} + +# +# Test cases csv-5.* test file with dirty header and long line. 
+# + +do_test csv-5.1.1 { + execsql CREATE VIRTUAL TABLE t1 USING csv('$test4csv') + execsql CREATE VIRTUAL TABLE t2 USING csv('$test4csv', ',', USE_HEADER_ROW) +} {} ADDEDext/csv/test4.csv col 1,col.2,col-3,col!4,c...@5,col;6,col%7,col*8,col=9,col'10 123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789,123456789 I tried to handle double-quoted fields with embedded line breaks without success but I am not stuck yet. Regards. On Wed, Apr 21, 2010 at 8:22 PM, gwenn gwenn.k...@gmail.com wrote: Thanks for this great extension. It works smoothly with 500Mo files. And it's a workaround to some shortcomings of the '.import' command: - no need to create a table before, - no need to delete the header row before/after, - no error if the number of columns is not homogeneous, - ... It's a nightmare to work with the CSV format but I have to. I made a quick and dirty fix to the USE_HEADER_ROW mode to replace whitespaces, slashes or hyphens by underscores. But I look for a better solution. Is there any way to make sure a string is a valid column name? Regards On Mon Apr 19 17:32:13 GMT 2010, Shane Harrelson shane at sqlite.org wrote: Thanks for the report. The extension is still very a much a work-in-progress and any feedback is greatly appreciated. -Shane On Sun, Apr 18, 2010 at 12:51 PM, gwenn gwenn.kahz at gmail.com wrote: Hello, There is a little bug/typo in the csv extension when lines exceed 100 characters: *** glibc detected *** sqlite3: realloc(): invalid pointer: 0x00ad1a78 *** === Backtrace: = /lib/libc.so.6[0x7f6dab009d16] /lib/libc.so.6(realloc+0x321)[0x7f6dab00fda1] ./libSqliteCsv.so[0x7f6da9ef9dbf] A possible patch is: --- /tmp/SQLite-d474195a997b9d94/ext/csv/csv.c 2009-11-05 05:14:30.0 +0100 +++ csv.c 2010-04-18 18:48:04.0 +0200 @@ -160,7 +160,7 @@ } } if( bShrink ){ -pCSV-zRow = realloc( pCSV-zRow, n+1 ); +pCSV-zRow = sqlite3_realloc( pCSV-zRow, n+1 ); pCSV-maxRow = n+1; } return bEol ? pCSV-zRow : 0; Regards. 
Re: [sqlite] [csv extension] Error while reading long lines
Ok, I've just added support for embedded new lines and partial support for escaped double quotes. By partial support, I mean they are not unescaped yet...

Index: ext/csv/csv.c
===================================================================
--- ext/csv/csv.c
+++ ext/csv/csv.c
@@ -120,10 +120,11 @@
 */
 static char *csv_getline( CSV *pCSV ){
   int n = 0;
   int bEol = 0;
   int bShrink = 0;
+  int bQuotedCol = 0;

   /* allocate initial row buffer */
   if( pCSV->maxRow < 1 ){
     pCSV->zRow = sqlite3_malloc( 100 );
     if( pCSV->zRow ){
@@ -150,19 +151,32 @@
       pCSV->zRow[n] = '\0';
       bEol = -1;
       break;
     }
     /* look for line delimiter */
-    while( pCSV->zRow[n] ){ n++; }
-    if( (n>0) && ((pCSV->zRow[n-1]=='\n') || (pCSV->zRow[n-1]=='\r')) ){
+    while( pCSV->zRow[n] ){
+      if( pCSV->zRow[n]=='"' ){
+        if( bQuotedCol ){
+          if( pCSV->zRow[n+1]=='"' ){ /* escaped */
+            n++;
+          }else{
+            bQuotedCol = 0;
+          }
+        }else if( n==0 || pCSV->zRow[n-1]==pCSV->cDelim ){
+          bQuotedCol = 1;
+        }
+      }
+      n++;
+    }
+    if( (n>0) && ((pCSV->zRow[n-1]=='\n') || (pCSV->zRow[n-1]=='\r')) && !bQuotedCol ){
       pCSV->zRow[n-1] = '\n'; /* uniform line ending */
       pCSV->zRow[n] = '\0';
       bEol = -1;
     }
   }
   if( bShrink ){
-    pCSV->zRow = realloc( pCSV->zRow, n+1 );
+    pCSV->zRow = sqlite3_realloc( pCSV->zRow, n+1 );
     pCSV->maxRow = n+1;
   }
   return bEol ? pCSV->zRow : 0;
 }
@@ -332,17 +346,23 @@
   do{
     /* if it begins with a quote, assume it's a quoted col */
     if( *s=='"' ){
       s++; /* skip quote */
       pCSV->aCols[nCol] = s; /* save pointer for this col */
-      /* TBD: handle escaped quotes */
       /* find closing quote */
-      s = strchr(s, '"');
-      if( !s ){
-        /* no closing quote */
-        pCSV->eof = -1;
-        return SQLITE_ERROR;
+      while( 1 ){
+        s = strchr(s, '"');
+        if( !s ){
+          /* no closing quote */
+          pCSV->eof = -1;
+          return SQLITE_ERROR;
+        }else if( *(s+1)=='"' ){
+          /* TBD: replace all escaped quotes by a single one */
+          s += 2;
+        }else{
+          break;
+        }
       }
       *s = '\0'; /* null terminate this col */
       /* fall through and look for following ,\n */
       s++;
     }else{
@@ -608,11 +628,11 @@
         *pzErr = sqlite3_mprintf("%s", aErrMsg[4]);
         sqlite3_free(zSql);
         csvRelease( pCSV );
         return SQLITE_ERROR;
       }
-      zSql = sqlite3_mprintf("%s%s%s", zTmp, zCol, zTail);
+      zSql = sqlite3_mprintf("%s\"%s\"%s", zTmp, zCol, zTail);
     }else{
       zSql = sqlite3_mprintf("%scol%d%s", zTmp, i+1, zTail);
     }
     sqlite3_free(zTmp);
   }

On Sat, May 8, 2010 at 3:45 PM, gwenn gwenn.k...@gmail.com wrote: While looking in csv1.test, I found a solution to the case when the header row contains spaces: just wrap the column name with double quotes.

Index: ext/csv/csv.c
===================================================================
--- ext/csv/csv.c
+++ ext/csv/csv.c
@@ -158,11 +158,11 @@
       pCSV->zRow[n] = '\0';
       bEol = -1;
     }
   }
   if( bShrink ){
-    pCSV->zRow = realloc( pCSV->zRow, n+1 );
+    pCSV->zRow = sqlite3_realloc( pCSV->zRow, n+1 );
     pCSV->maxRow = n+1;
   }
   return bEol ? pCSV->zRow : 0;
 }
@@ -608,11 +608,11 @@
         *pzErr = sqlite3_mprintf("%s", aErrMsg[4]);
         sqlite3_free(zSql);
         csvRelease( pCSV );
         return SQLITE_ERROR;
       }
-      zSql = sqlite3_mprintf("%s%s%s", zTmp, zCol, zTail);
+      zSql = sqlite3_mprintf("%s\"%s\"%s", zTmp, zCol, zTail);
     }else{
       zSql = sqlite3_mprintf("%scol%d%s", zTmp, i+1, zTail);
     }
     sqlite3_free(zTmp);
   }

Index: ext/csv/csv1.test
===================================================================
--- ext/csv/csv1.test
+++ ext/csv/csv1.test
@@ -23,10 +23,11 @@
 #
 #   csv-1.*: Creating/destroying csv tables.
 #   csv-2.*: Linear scans of csv data.
 #   csv-3.*: Test renaming an csv table.
 #   csv-4.*: CREATE errors
+#   csv-5.*: Dirty header and long line.
 #

 ifcapable !csv {
   finish_test
   return
@@ -36,10 +37,12 @@
 set test1csv [file join [file dirname [info script]] test1.csv]
 # This file is delimited by '|' and has quoted fields.
 set test2csv [file join [file dirname [info script]] test2.csv]
 # This file is delimited by '|'. It does NOT have quoted fields.
 set test3csv [file join [file dirname [info script]] test3.csv]
+# This file contains a dirty header and one long line.
+set test4csv [file join [file dirname [info script]] test4.csv]

 #
 # Test cases csv-1.* test CREATE and DROP table statements.
 #
@@ -249,5 +252,14 @@
   catchsql CREATE VIRTUAL TABLE t1 USING csv('foo')
 } {1 {Error opening CSV file: 'foo'}}
 do_test csv-4.1.3 {
   catchsql CREATE VIRTUAL TABLE t1 USING csv(foo
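The cases this patch targets (escaped double quotes, i.e. "" collapsing to ", and newlines embedded inside a quoted field) are standard RFC 4180 behaviour; a reference parser such as Python's stdlib csv module shows the expected result:

```python
import csv
import io

# A quoted field may contain the delimiter, an escaped quote ("" -> ")
# and an embedded newline; the newline must not end the record, which is
# exactly what the bQuotedCol flag in the patch tracks.
data = 'a,"b""c","line1\nline2"\n'
rows = list(csv.reader(io.StringIO(data)))
print(rows)  # → [['a', 'b"c', 'line1\nline2']]
```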
Re: [sqlite] [csv extension] Error while reading long lines
http://www2.sqlite.org/src/dir?name=ext/csv On Sat, Apr 24, 2010 at 9:43 AM, Jan janus...@gmx.net wrote: This sounds very useful. But where can I get this extension? Sorry, I could not find anything. Jan Am 21.04.2010 20:22, schrieb gwenn: Thanks for this great extension. It works smoothly with 500Mo files. And it's a workaround to some shortcomings of the '.import' command: - no need to create a table before, - no need to delete the header row before/after, - no error if the number of columns is not homogeneous, - ... It's a nightmare to work with the CSV format but I have to. I made a quick and dirty fix to the USE_HEADER_ROW mode to replace whitespaces, slashes or hyphens by underscores. But I look for a better solution. Is there any way to make sure a string is a valid column name? Regards On Mon Apr 19 17:32:13 GMT 2010, Shane Harrelsonshane at sqlite.org wrote: Thanks for the report. The extension is still very a much a work-in-progress and any feedback is greatly appreciated. -Shane On Sun, Apr 18, 2010 at 12:51 PM, gwenngwenn.kahz at gmail.com wrote: Hello, There is a little bug/typo in the csv extension when lines exceed 100 characters: *** glibc detected *** sqlite3: realloc(): invalid pointer: 0x00ad1a78 *** === Backtrace: = /lib/libc.so.6[0x7f6dab009d16] /lib/libc.so.6(realloc+0x321)[0x7f6dab00fda1] ./libSqliteCsv.so[0x7f6da9ef9dbf] A possible patch is: --- /tmp/SQLite-d474195a997b9d94/ext/csv/csv.c 2009-11-05 05:14:30.0 +0100 +++ csv.c 2010-04-18 18:48:04.0 +0200 @@ -160,7 +160,7 @@ } } if( bShrink ){ -pCSV-zRow = realloc( pCSV-zRow, n+1 ); +pCSV-zRow = sqlite3_realloc( pCSV-zRow, n+1 ); pCSV-maxRow = n+1; } return bEol ? pCSV-zRow : 0; Regards. 
[sqlite] [csv extension] Error while reading long lines
Thanks for this great extension. It works smoothly with 500 MB files. And it's a workaround for some shortcomings of the '.import' command: - no need to create a table before, - no need to delete the header row before/after, - no error if the number of columns is not homogeneous, - ... It's a nightmare to work with the CSV format but I have to. I made a quick and dirty fix to the USE_HEADER_ROW mode to replace whitespaces, slashes or hyphens by underscores. But I am looking for a better solution. Is there any way to make sure a string is a valid column name? Regards On Mon Apr 19 17:32:13 GMT 2010, Shane Harrelson shane at sqlite.org wrote: Thanks for the report. The extension is still very much a work-in-progress and any feedback is greatly appreciated. -Shane On Sun, Apr 18, 2010 at 12:51 PM, gwenn gwenn.kahz at gmail.com wrote: Hello, There is a little bug/typo in the csv extension when lines exceed 100 characters:

*** glibc detected *** sqlite3: realloc(): invalid pointer: 0x00ad1a78 ***
=== Backtrace: =
/lib/libc.so.6[0x7f6dab009d16]
/lib/libc.so.6(realloc+0x321)[0x7f6dab00fda1]
./libSqliteCsv.so[0x7f6da9ef9dbf]

A possible patch is:

--- /tmp/SQLite-d474195a997b9d94/ext/csv/csv.c 2009-11-05 05:14:30.0 +0100
+++ csv.c 2010-04-18 18:48:04.0 +0200
@@ -160,7 +160,7 @@
     }
   }
   if( bShrink ){
-    pCSV->zRow = realloc( pCSV->zRow, n+1 );
+    pCSV->zRow = sqlite3_realloc( pCSV->zRow, n+1 );
     pCSV->maxRow = n+1;
   }
   return bEol ? pCSV->zRow : 0;

Regards.
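On the "valid column name" question: rather than sanitizing the header, any string can be made a legal SQLite identifier by wrapping it in double quotes and doubling any embedded quotes, which is also what the later `\"%s\"` patch in this thread does. A sketch via Python's stdlib sqlite3 module; `quote_identifier` is an illustrative helper, not an SQLite API:

```python
import sqlite3

def quote_identifier(name):
    # Any string is a legal SQLite identifier once wrapped in double
    # quotes, with embedded double quotes doubled.
    return '"' + name.replace('"', '""') + '"'

con = sqlite3.connect(":memory:")
cols = ["col 1", "col.2", "col-3", 'col"4']
sql = "CREATE TABLE t (%s)" % ", ".join(quote_identifier(c) for c in cols)
con.execute(sql)
names = [row[1] for row in con.execute("PRAGMA table_info(t)")]
print(names)  # → ['col 1', 'col.2', 'col-3', 'col"4']
```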
[sqlite] [csv extension] Error while reading long lines
Hello, There is a little bug/typo in the csv extension when lines exceed 100 characters:

*** glibc detected *** sqlite3: realloc(): invalid pointer: 0x00ad1a78 ***
=== Backtrace: =
/lib/libc.so.6[0x7f6dab009d16]
/lib/libc.so.6(realloc+0x321)[0x7f6dab00fda1]
./libSqliteCsv.so[0x7f6da9ef9dbf]

A possible patch is:

--- /tmp/SQLite-d474195a997b9d94/ext/csv/csv.c 2009-11-05 05:14:30.0 +0100
+++ csv.c 2010-04-18 18:48:04.0 +0200
@@ -160,7 +160,7 @@
     }
   }
   if( bShrink ){
-    pCSV->zRow = realloc( pCSV->zRow, n+1 );
+    pCSV->zRow = sqlite3_realloc( pCSV->zRow, n+1 );
     pCSV->maxRow = n+1;
   }
   return bEol ? pCSV->zRow : 0;

Regards.