Hi,

Please ignore the earlier emails.  The extended error code is 1034 when trying 
CREATE TABLE and 266 when trying INSERT.

I have given the correct log generated during INSERT below.  Thanks.

Regards
Arun

Enter file name: 
/spiffs/test.db
fn: FullPathName
fn:FullPathName:Success
fn: Open
/spiffs/test.db
fn:Open:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
Opened database successfully

Welcome to SQLite console!!
---------------------------

Database file: /spiffs/test.db

1. Open database
2. Execute SQL
3. Execute Multiple SQL
4. Close database
5. List folder contents
6. Rename file
7. Delete file

Enter choice: 2
Enter SQL (max 500 characters):
INSERT INTO test VALUES (shox96_0_2c('This wont get inserted'))
fn: Access
fn:Access:Success
fn: FileSize
fn: FlushBuffer
fn:FlushBuffer:Success
fn:FileSize:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Open
/spiffs/test.db-journal
fn:Open:Success
fn: Write
fn:Write:Success
fn: Write
fn:Write:Success
fn: Write
fn:Write:Success
fn: Write
fn:Write:Success
fn: Write
fn:Write:Success
fn: Write
fn: FlushBuffer
fn: DirectWrite:
fn:DirectWrite:Success
fn:FlushBuffer:Success
fn:Write:Success
fn: Write
fn:Write:Success
fn: Read
fn: FlushBuffer
fn: DirectWrite:
fn:DirectWrite:Success
fn:FlushBuffer:Success
fn: FileSize
fn: FlushBuffer
fn:FlushBuffer:Success
fn:FileSize:Success
fn: FileSize
fn: FlushBuffer
fn:FlushBuffer:Success
fn:FileSize:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: FileSize
fn: FlushBuffer
fn:FlushBuffer:Success
fn:FileSize:Success
fn: Read
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Read:Success
fn: Write
fn: FlushBuffer
fn:FlushBuffer:Success
fn:Write:Success
fn: Sync
fn: FlushBuffer
fn: DirectWrite:
fn:DirectWrite:Success
fn:FlushBuffer:Success
fn:Sync:Success
SQL error: 266 disk I/O error


 ---- On Fri, 12 Apr 2019 17:47:21 +0530 Arun - Siara Logics (cc) 
<a...@siara.cc> wrote ----
 > I also tried INSERT on an existing database.  This time the extended error 
 > is 266.  I am giving below the log.
 > Also, there are two warnings printed during open:
 > (21) API call with invalid database connection pointer
 > (21) misuse at line 152855 of [fb90e7189a]
 > 
 > Regards
 > Arun
 > 
 > Enter file name: 
 > /spiffs/test.db
 > (21) API call with invalid database connection pointer
 > (21) misuse at line 152855 of [fb90e7189a]
 > fn: FullPathName
 > fn:FullPathName:Success
 > fn: Open
 > /spiffs/test.db
 > fn:Open:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > Opened database successfully
 > 
 > Welcome to SQLite console!!
 > ---------------------------
 > 
 > Database file: /spiffs/test.db
 > 
 > 1. Open database
 > 2. Execute SQL
 > 3. Execute Multiple SQL
 > 4. Close database
 > 5. List folder contents
 > 6. Rename file
 > 7. Delete file
 > 
 > Enter choice: 2
 > Enter SQL (max 500 characters):
 > INSERT INTO test VALUES ('This wont get inserted')
 > fn: Access
 > fn:Access:Success
 > fn: FileSize
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn:FileSize:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Open
 > /spiffs/test.db-journal
 > Create mode
 > fn:Open:Success
 > fn: Write
 > fn:Write:Success
 > fn: Write
 > fn:Write:Success
 > fn: Write
 > fn:Write:Success
 > fn: Write
 > fn:Write:Success
 > fn: Write
 > fn:Write:Success
 > fn: Write
 > fn: FlushBuffer
 > fn: DirectWrite:
 > fn:DirectWrite:Success
 > fn:FlushBuffer:Success
 > fn:Write:Success
 > fn: Write
 > fn:Write:Success
 > fn: Read
 > fn: FlushBuffer
 > fn: DirectWrite:
 > fn:DirectWrite:Success
 > fn:FlushBuffer:Success
 > fn: FileSize
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn:FileSize:Success
 > fn: FileSize
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn:FileSize:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: FileSize
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn:FileSize:Success
 > fn: Read
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn: Write
 > fn: FlushBuffer
 > fn:FlushBuffer:Success
 > fn:Write:Success
 > fn: Sync
 > fn: FlushBuffer
 > fn: DirectWrite:
 > fn:DirectWrite:Success
 > fn:FlushBuffer:Success
 > fn:Sync:Success
 > SQL error: 266 disk I/O error
 > Time taken:310381 us
 > 
 > 
 >  ---- On Fri, 12 Apr 2019 17:30:14 +0530 Arun - Siara Logics (cc) 
 > <a...@siara.cc> wrote ----
 >  > Hi, Thank you for the suggestion.  The sqlite3_extended_errcode() is 1034 
 > disk I/O error.
 >  > Regards
 >  > Arun
 >  > 
 >  >  ---- On Fri, 12 Apr 2019 17:06:00 +0530 Richard Hipp <d...@sqlite.org> 
 > wrote ----
 >  >  > On 4/12/19, Arun - Siara Logics (cc) <a...@siara.cc> wrote:
 >  >  > > fn:DirectWrite:Success
 >  >  > > fn:FlushBuffer:Success
 >  >  > > fn:Sync:Success
 >  >  > > SQL error: disk I/O error
 >  >  > >
 >  >  > > At the end, there are two files on disk: vfs_test.db (0 bytes) and
 >  >  > > vfs_test.db-journal (512 bytes).  There is no problem reading a 
 > database.
 >  >  > > But when CREATE or INSERT is involved, it gives disk I/O error.
 >  >  > >
 >  >  > > Any idea why it is throwing disk I/O error, inspite of the previous 
 > sync
 >  >  > > success?  Any suggestions on how I could figure it out?
 >  >  > 
 >  >  > Please tell us the sqlite3_extended_errcode().  Also, consider
 >  >  > enabling the error and warning log
 >  >  > (https://www.sqlite.org/errlog.html)
 >  >  > 
 >  >  > 
 >  >  > >
 >  >  > > Regards
 >  >  > > Arun
 >  >  > >
 >  >  > >
 >  >  > > _______________________________________________
 >  >  > > sqlite-users mailing list
 >  >  > > sqlite-users@mailinglists.sqlite.org
 >  >  > > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
 >  >  > >
 >  >  > 
 >  >  > 
 >  >  > -- 
 >  >  > D. Richard Hipp
 >  >  > d...@sqlite.org
 >  >  > 
 >  > 
 >  > 
 > 
 > 

