Hi, I often pipe tables into sqlite3 (from .gz archives or from multiple files). I found this
also works for large files that otherwise fail with `Error: cannot open "huge.file"`:

cat huge.file | sqlite3 somedb '.import /dev/stdin hugetable'
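
The same trick covers the .gz and multiple-file cases; for example (the file
names here are just placeholders, assuming both archives share hugetable's
column layout):

zcat part1.gz part2.gz | sqlite3 somedb '.import /dev/stdin hugetable'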

But it could be slower than using a `real` file import. Does anyone have an idea?
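
A rough way I could check, assuming a sample file small enough that the direct
import still succeeds, is to time both paths into fresh databases:

# sample.file, db1 and db2 are placeholders; under bash, `time` covers the whole pipeline
time sqlite3 db1 '.import sample.file sampletable'
time cat sample.file | sqlite3 db2 '.import /dev/stdin sampletable'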

L. 


