Hi, Simon,

Based on my test, creating a >3GB file on my Solaris 10 works fine. Do you have 
any other suggestions?
============================================
bash-3.00# du -hs logs/*
 7.0G   logs/SilentAlarm.log
  25K   logs/SilentAlarmError.log
  10M   logs/systemErr1.txt
 4.4M   logs/systemErr10.txt
  10M   logs/systemErr2.txt
  10M   logs/systemErr3.txt
  10M   logs/systemErr4.txt
  10M   logs/systemErr5.txt
  10M   logs/systemErr6.txt
  10M   logs/systemErr7.txt
  10M   logs/systemErr8.txt
  10M   logs/systemErr9.txt
 256K   logs/systemOut1.txt
bash-3.00# ls -lt database/*
-rw-r--r--   1 root     root        1544 Nov 25 15:25 database/silentalarm.db-journal
-rw-r--r--   1 root     root     2147483648 Nov 25 15:25 database/silentalarm.db
bash-3.00# ulimit -a
core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
open files                    (-n) 8192
pipe size          (512 bytes, -p) 10
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 29995
virtual memory        (kbytes, -v) unlimited
bash-3.00#
============================================
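For reference, one way to script such a test is a single dd of non-zero data. This is a minimal sketch; the path /opt/bigtest.bin is hypothetical, and any directory on the ufs slice would do:

```shell
# Write ~3 GB of pseudo-random (non-zero) data so ufs cannot store
# the file sparsely. Shrink SIZE_MB for a quick smoke test.
SIZE_MB=3072                    # 3072 MiB = 3221225472 bytes
FILE=/opt/bigtest.bin           # hypothetical path on the ufs slice
dd if=/dev/urandom of="$FILE" bs=1048576 count="$SIZE_MB" 2>/dev/null
ls -l "$FILE"                   # confirm the size, then clean up
rm "$FILE"
```

Random data satisfies Simon's "not all hex zeros" condition: a file of zeros could be stored as a sparse file without actually allocating 3 GB of blocks.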
Regards,
Liang Kunming.
-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Liang Kunming
Sent: 23 November 2013 9:52
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Does sqlite has db file-size restriction on Solaris 10?

Hi, Simon,

I am using 64-bit Solaris 10 and the file system type is ufs. The trace is below:

============================================
bash-3.00# df -v
Mount Dir  Filesystem        blocks     used     free  %used
/dev/dsk/c0t5000CCA01286BB88d0s0
/                          94601683  4847315 88808352     6%
/devices   /devices               0        0        0     0%
/system/co ctfs                   0        0        0     0%
/proc      proc                   0        0        0     0%
/etc/mntta mnttab                 0        0        0     0%
/etc/svc/v swap             4676759      211  4676548     1%
/system/ob objfs                  0        0        0     0%
/etc/dfs/s sharefs                0        0        0     0%
/platform/sun4v/lib/libc_psr/libc_psr_hwcap2.so.1
/platform/                 94601683  4847315 88808352     6%
/platform/sun4v/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
/platform/                 94601683  4847315 88808352     6%
/dev/fd    fd                     0        0        0     0%
/tmp       swap             4679532     2984  4676548     1%
/var/run   swap             4676558       10  4676548     1%
/dev/dsk/c0t5000CCA01286BB88d0s1
/opt                       161363185 80324957 79424597    51%
bash-3.00# fstyp /dev/dsk/c0t5000CCA01286BB88d0s1
ufs
bash-3.00# isainfo -v
64-bit sparcv9 applications
        hpc vis3 fmaf asi_blk_init vis2 vis popc
32-bit sparc applications
        hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
============================================

I will test whether I can create a 3GB file on this file system and share the 
result here. Thanks.
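One detail worth checking alongside that test: ufs only allows files larger than 2 GB when the slice is mounted with the largefiles option (the default since Solaris 2.6, but nolargefiles may have been set explicitly). A quick sketch, assuming /opt from the trace above is the mount point holding the database:

```shell
# List the active mount options for /opt; 'largefiles' must appear
# among them for >2 GB files to be possible on this ufs slice.
mount | grep '^/opt '
# Also check whether /etc/vfstab forces 'nolargefiles' at boot.
grep '/opt' /etc/vfstab
```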

Regards,
Liang Kunming.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: 23 November 2013 9:23
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Does sqlite has db file-size restriction on Solaris 10?

On 23 Nov 2013, at 1:03am, Liang Kunming <kunming.li...@utstar.com> wrote:

> I have run into an issue using SQLite on Solaris 10. The db file was made by 
> SQLite version 3.4.2, and the sqlite3 binary was compiled on the Solaris 10 
> platform (attached). When the db file reaches 2147483648 bytes (2 gigabytes), 
> the file size cannot increase any more, and queries and writes also fail; the 
> exception is shown below. Does anyone know a solution to this issue? Thanks 
> very much.

Your trace indicates that you are using a network file system (perhaps the one 
called NFS) to access the drive your database is on.  Which file system and/or 
network file system are you using to access that drive ?

Can you please test that it is possible to make a 3GB file of any kind on that 
drive.  A long text file would do fine, but not a file completely filled with 
hex zeros (0x00).

Simon.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users