Hi Martin,
Thank you very much. That was the issue.
Regards,
Ryan
On Thu, Jun 18, 2020 at 5:37 AM Martin Simmons wrote:
> > On Wed, 17 Jun 2020 20:35:41 -0700, Ryan Sizemore said:
> >
> > The spool directory is owned by bacula (and permissioned to allow
> > write by all for troubleshooting):
> >
> > root@pacific:/etc/bacula# ls -lsa /scratch/
> > total 28
> >  4 drwx------  4 bacula bacula  4096 Jun 18 01:55 .
> >  4 drwxr-xr-x 26 root   root    4096 Jun 18 01:55 ..
> > 16 drwx------  2 root   root   16384 Jun 17 19:45 lost+found
> >  4 drwxrwxrwx  2 bacula bacula  4096 Jun 18 03:03 spool
>
> Maybe the

On 18.06.20 at 05:35, Ryan Sizemore wrote:
> Device {
>   Name = LTO-4
>   Media Type = LTO-4
>   Archive Device = /dev/nst0
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Maximum File Size = 10GB
>   AutoChanger = yes
>   Maximum Spool Size =
On 6/17/2020 11:35 PM, Ryan Sizemore wrote:
> Hi,
>
> I have a Job that I want to use data spooling with. The Job reads from a
> locally-mounted NFS share and writes to an LTO-4 tape. Since writing to
> tape will be faster than reading over the network, I want to spool the
> data locally. However, when I run the job, it terminates with an error that i
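[Editor's note] For readers following this thread: data spooling in Bacula is enabled per Job in the Director configuration (SpoolData) and bounded per Device in the Storage Daemon configuration (Spool Directory, Maximum Spool Size). A minimal sketch of the two pieces working together is below; the Job name and the 200GB cap are illustrative, not values from this thread, while the device path and spool directory match the ones Ryan posted:

```
# bacula-sd.conf -- Device resource for the LTO-4 drive
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0       # tape device from the thread
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 10GB
  AutoChanger = yes
  Spool Directory = /scratch/spool # local spool area from the thread
  Maximum Spool Size = 200GB       # illustrative cap; size to the local disk
}

# bacula-dir.conf -- turn spooling on for the Job
Job {
  Name = "NFSBackup"               # illustrative name
  ...
  SpoolData = yes                  # spool to disk first, then despool to tape
}
```

With SpoolData enabled, the SD writes the job's data to the spool directory at NFS-read speed and then streams it to the tape drive in one fast pass, which keeps the drive from shoe-shining when the source is slower than the tape.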