Just a quick sanity check ...
The server is Linux with a DLT7000 and 25 GB free on the holding disk; the
client (brain) is Solaris 8 with a 15 GB partition (7 GB used) to back up for
the first time.

I get this error in the raw log:
FAIL dumper brain c1t1d0s0 0 ["data write: File too large"]
  sendbackup: start [brain:c1t1d0s0 level 0]
  sendbackup: info BACKUP=/usr/local/gnome/bin/gtar
  sendbackup: info RECOVER_CMD=/usr/local/bin/gzip -dc |/usr/local/gnome/bin/gtar -f... -
  sendbackup: info COMPRESS_SUFFIX=.gz
  sendbackup: info end

and this in amdump.1:
driver: hdisk-state time 153.095 hdisk 0: free 21616884 dumpers 1
driver: result time 3319.648 from dumper0: FAILED 00-00001 ["data write: File too large"]
dumper: kill index command

Is this all because of the 2 GB file size limit on stock Linux kernels?

And is:

chunksize 2gb

the right way to limit it? Or should I use "chunksize 1950 Mb" to stay safely under the limit?
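
For context, here's a rough sketch of how I understand the chunksize setting
fits into the holdingdisk block in amanda.conf on the server; the directory
path and sizes below are just placeholders, not my actual values:

  holdingdisk hd1 {
      directory "/dumps/amanda"   # placeholder holding-disk path
      use 25 Gb                   # how much of the disk Amanda may use
      chunksize 1950 Mb           # keep each chunk file under the 2 GB limit
  }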

Cheers,
Mark

-- 
http://www.mchang.org/
http://decss.zoy.org/

