Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-31 Thread Martin Simmons
> On Tue, 29 Dec 2009 13:52:08 -0700, Paul Greidanus said:
> 
> On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
> 
> >> * Paul Greidanus wrote on 28.12.09 at 23:44:
> >> I'm trying to restore files I have backed up on the NFS server that I'm 
> >> using to back VMware, but I'm getting similar errors to this every time I 
> >> try to restore:
> >> 
> >> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
> >> Restore.2009-12-28_12.10.28_54
> >> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
> >> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
> >> 11, drive 0" command.
> >> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
> >> drive 0" command.
> >> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 
> >> 0", status is OK.
> >> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" 
> >> on device "TL2000-1" (/dev/rmt/0n).
> >> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
> >> file:block 473:0.
> >> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
> >> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
> >> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
> >> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
> >> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
> >> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
> >> restored file 
> >> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk 
> >> not correct. Original 8589934592, restored 445841408.
> >> 
> >> Files are backed up from a zfs snapshot which is created just before the 
> >> backup starts. Every other file I am attempting to restore works just 
> >> fine... 
> >> 
> >> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
> >> servers that have .vmdk files on them?
> > 
> > No, but I could imagine that this might have something to do with
> > some sparse-file setting.
> > 
> > Have you checked how much space of your 8 GB flat vmdk is actually being
> > used? Maybe this was 445841408 bytes at backup time?
> > 
> > Does the same happen if you do not use pre-allocated vmdk-disks?
> > (Which is better anyway most of the time if you use NFS instead of VMFS.)
> > 
> 
> All I use is preallocated disks, especially on NFS. I don't think I can
> actually use sparse disks on NFS.
> 
> As a test, I created a 100 GB file from /dev/zero, and tried backing that up
> and restoring it, and I get this:
> 
> 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
> file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
> 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
> "TL2000-1" (/dev/rmt/0n), Volume "10L4"
> 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
> 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
> file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
> 66365161472, restored 376340827.
> 
> So, this tells me that whatever's going on, it's not VMware that's causing me
> the trouble. I'm wondering if I'm running into problems with ZFS snapshot
> backups, or just something with large files and Bacula?

Looks like a tape drive or tape problem, since you get an I/O error when reading.

Did you check the syslog on the SD machine?

Does bls give the same error for that volume?
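
For example, something along these lines (the config path is a guess for your
setup):

  tail -100 /var/adm/messages        # assuming the SD host runs Solaris
  bls -c /etc/bacula/bacula-sd.conf -j -V 09L4 /dev/rmt/0n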

__Martin



Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
>> -Original Message-
>> From: Paul Greidanus [mailto:paul.greida...@gmail.com]
>> Sent: Tuesday, 29 December 2009 21:52
>> To: bacula-users@lists.sourceforge.net
>> Subject: Re: [Bacula-users] Cannot restore VMmware/ZFS
>> 
>> 
>> On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
>> 
>>> * Paul Greidanus wrote on 28.12.09 at 23:44:
>>>> I'm trying to restore files I have backed up on the NFS server that I'm 
>>>> using to back VMware, but I'm getting similar errors to this every time I 
>>>> try to restore:
>>>> 
>>>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
>>>> Restore.2009-12-28_12.10.28_54
>>>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>>>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
>>>> 11, drive 0" command.
>>>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
>>>> drive 0" command.
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 
>>>> 0", status is OK.
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" 
>>>> on device "TL2000-1" (/dev/rmt/0n).
>>>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
>>>> file:block 473:0.
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
>>>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
>>>> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
>>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>>>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
>>>> restored file 
>>>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk 
>>>> not correct. Original 8589934592, restored 445841408.
>>>> 
>>>> Files are backed up from a zfs snapshot which is created just before the 
>>>> backup starts. Every other file I am attempting to restore works just 
>>>> fine... 
>>>> 
>>>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
>>>> servers that have .vmdk files on them?
>>> 
>>> No, but I could imagine that this might have something to do with
>>> some sparse-file setting.
>>> 
>>> Have you checked how much space of your 8 GB flat vmdk is actually being
>>> used? Maybe this was 445841408 bytes at backup time?
>>> 
>>> Does the same happen if you do not use pre-allocated vmdk-disks?
>>> (Which is better anyway most of the time if you use NFS instead of VMFS.)
>>> 
>> 
>> All I use is preallocated disks, especially on NFS. I don't think I can
>> actually use sparse disks on NFS.
>> 
>> As a test, I created a 100 GB file from /dev/zero, and tried backing that up
>> and restoring it, and I get this:
>> 
>> 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 
>> at file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>> 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
>> "TL2000-1" (/dev/rmt/0n), Volume "10L4"
>> 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
>> 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of 
>> restored file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. 
>> Original 66365161472, restored 376340827.
>> 
>> So, this tells me that whatever's going on, it's not VMware that's causing
>> me the trouble. I'm wondering if I'm running into problems with ZFS
>> snapshot backups, or just something with large files and Bacula?
>> 
>> Paul

Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Fahrer, Julian
Hey Paul,

I don't have enough space on the test system right now. I just created a new zfs
without compression/dedup and a 1 GB file on a Solaris 10u6 system.
I could back up and restore from a snapshot without errors.
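
For reference, the test boils down to roughly this (pool/dataset names are
only examples):

  zfs create -o compression=off tank/btest
  mkfile 1g /tank/btest/test1g
  zfs snapshot tank/btest@backup

and then a Bacula backup of /tank/btest/.zfs/snapshot/backup/ plus a restore
of the file from that job.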

Could you post your zfs config?
zfs get all 

Julian

-Original Message-
From: Paul Greidanus [mailto:paul.greida...@gmail.com]
Sent: Tuesday, 29 December 2009 23:00
To: Fahrer, Julian
Cc: bacula-users@lists.sourceforge.net
Subject: Re: AW: [Bacula-users] Cannot restore VMmware/ZFS

Solaris is OpenSolaris 2009.06, and I don't think I have compression or dedup
specifically enabled.

Can you try backing up and restoring a 100 GB file full of zeros from a snapshot?

Paul

On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:

> What Solaris are you using?
> Is ZFS compression/dedup enabled?
> Maybe I could run some tests for you. I have had no problems with ZFS so far.
> 
> 
> -Original Message-
> From: Paul Greidanus [mailto:paul.greida...@gmail.com]
> Sent: Tuesday, 29 December 2009 21:52
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Cannot restore VMmware/ZFS
> 
> 
> On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
> 
>> * Paul Greidanus wrote on 28.12.09 at 23:44:
>>> I'm trying to restore files I have backed up on the NFS server that I'm 
>>> using to back VMware, but I'm getting similar errors to this every time I 
>>> try to restore:
>>> 
>>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
>>> Restore.2009-12-28_12.10.28_54
>>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
>>> 11, drive 0" command.
>>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
>>> drive 0" command.
>>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 
>>> 0", status is OK.
>>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" on 
>>> device "TL2000-1" (/dev/rmt/0n).
>>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
>>> file:block 473:0.
>>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
>>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
>>> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
>>> restored file 
>>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
>>> correct. Original 8589934592, restored 445841408.
>>> 
>>> Files are backed up from a zfs snapshot which is created just before the 
>>> backup starts. Every other file I am attempting to restore works just 
>>> fine... 
>>> 
>>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
>>> servers that have .vmdk files on them?
>> 
>> No, but I could imagine that this might have something to do with
>> some sparse-file setting.
>> 
>> Have you checked how much space of your 8 GB flat vmdk is actually being
>> used? Maybe this was 445841408 bytes at backup time?
>> 
>> Does the same happen if you do not use pre-allocated vmdk-disks?
>> (Which is better anyway most of the time if you use NFS instead of VMFS.)
>> 
> 
> All I use is preallocated disks, especially on NFS. I don't think I can
> actually use sparse disks on NFS.
> 
> As a test, I created a 100 GB file from /dev/zero, and tried backing that up
> and restoring it, and I get this:
> 
> 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
> file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
> 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
> "TL2000-1" (/dev/rmt/0n), Volume "10L4"
> 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
> 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
> file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
> 66365161472, restored 376340827.
> 
> So, this tells me that whatever's going on, it's not VMware that's causing me
> the trouble. I'm wondering if I'm running into problems with ZFS snapshot
> backups, or just something with large files and Bacula?

Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
Solaris is OpenSolaris 2009.06, and I don't think I have compression or dedup
specifically enabled.

Can you try backing up and restoring a 100 GB file full of zeros from a snapshot?
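
Something like this is what I have in mind (paths are made up):

  mkfile 100g /tank/test/zeros100g
  zfs snapshot tank/test@backup

then back up /tank/test/.zfs/snapshot/backup/zeros100g, restore it, and check
with ls -l that the restored size matches the original.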

Paul

On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:

> What Solaris are you using?
> Is ZFS compression/dedup enabled?
> Maybe I could run some tests for you. I have had no problems with ZFS so far.
> 
> 
> -Original Message-
> From: Paul Greidanus [mailto:paul.greida...@gmail.com]
> Sent: Tuesday, 29 December 2009 21:52
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Cannot restore VMmware/ZFS
> 
> 
> On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:
> 
>> * Paul Greidanus wrote on 28.12.09 at 23:44:
>>> I'm trying to restore files I have backed up on the NFS server that I'm 
>>> using to back VMware, but I'm getting similar errors to this every time I 
>>> try to restore:
>>> 
>>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
>>> Restore.2009-12-28_12.10.28_54
>>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
>>> 11, drive 0" command.
>>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
>>> drive 0" command.
>>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 
>>> 0", status is OK.
>>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" on 
>>> device "TL2000-1" (/dev/rmt/0n).
>>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
>>> file:block 473:0.
>>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
>>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
>>> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
>>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
>>> restored file 
>>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
>>> correct. Original 8589934592, restored 445841408.
>>> 
>>> Files are backed up from a zfs snapshot which is created just before the 
>>> backup starts. Every other file I am attempting to restore works just 
>>> fine... 
>>> 
>>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
>>> servers that have .vmdk files on them?
>> 
>> No, but I could imagine that this might have something to do with
>> some sparse-file setting.
>> 
>> Have you checked how much space of your 8 GB flat vmdk is actually being
>> used? Maybe this was 445841408 bytes at backup time?
>> 
>> Does the same happen if you do not use pre-allocated vmdk-disks?
>> (Which is better anyway most of the time if you use NFS instead of VMFS.)
>> 
> 
> All I use is preallocated disks, especially on NFS. I don't think I can
> actually use sparse disks on NFS.
> 
> As a test, I created a 100 GB file from /dev/zero, and tried backing that up
> and restoring it, and I get this:
> 
> 29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
> file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
> 29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
> "TL2000-1" (/dev/rmt/0n), Volume "10L4"
> 29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
> 29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
> file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
> 66365161472, restored 376340827.
> 
> So, this tells me that whatever's going on, it's not VMware that's causing me
> the trouble. I'm wondering if I'm running into problems with ZFS snapshot
> backups, or just something with large files and Bacula?
> 
> Paul




Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Fahrer, Julian
What Solaris are you using?
Is ZFS compression/dedup enabled?
Maybe I could run some tests for you. I have had no problems with ZFS so far.
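
You can check with something like

  zfs get compression,dedup tank/yourdataset

(on builds that predate dedup the property does not exist, so just check
compression in that case).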


-Original Message-
From: Paul Greidanus [mailto:paul.greida...@gmail.com]
Sent: Tuesday, 29 December 2009 21:52
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Cannot restore VMmware/ZFS


On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:

> * Paul Greidanus wrote on 28.12.09 at 23:44:
>> I'm trying to restore files I have backed up on the NFS server that I'm 
>> using to back VMware, but I'm getting similar errors to this every time I 
>> try to restore:
>> 
>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
>> Restore.2009-12-28_12.10.28_54
>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
>> 11, drive 0" command.
>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
>> drive 0" command.
>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 0", 
>> status is OK.
>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" on 
>> device "TL2000-1" (/dev/rmt/0n).
>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
>> file:block 473:0.
>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
>> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
>> restored file 
>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
>> correct. Original 8589934592, restored 445841408.
>> 
>> Files are backed up from a zfs snapshot which is created just before the 
>> backup starts. Every other file I am attempting to restore works just 
>> fine... 
>> 
>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
>> servers that have .vmdk files on them?
> 
> No, but I could imagine that this might have something to do with
> some sparse-file setting.
> 
> Have you checked how much space of your 8 GB flat vmdk is actually being
> used? Maybe this was 445841408 bytes at backup time?
> 
> Does the same happen if you do not use pre-allocated vmdk-disks?
> (Which is better anyway most of the time if you use NFS instead of VMFS.)
> 

All I use is preallocated disks, especially on NFS. I don't think I can
actually use sparse disks on NFS.

As a test, I created a 100 GB file from /dev/zero, and tried backing that up and
restoring it, and I get this:

29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
"TL2000-1" (/dev/rmt/0n), Volume "10L4"
29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
66365161472, restored 376340827.

So, this tells me that whatever's going on, it's not VMware that's causing me
the trouble. I'm wondering if I'm running into problems with ZFS snapshot
backups, or just something with large files and Bacula?

Paul



Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus

On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:

> * Paul Greidanus wrote on 28.12.09 at 23:44:
>> I'm trying to restore files I have backed up on the NFS server that I'm 
>> using to back VMware, but I'm getting similar errors to this every time I 
>> try to restore:
>> 
>> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
>> Restore.2009-12-28_12.10.28_54
>> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
>> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 
>> 11, drive 0" command.
>> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
>> drive 0" command.
>> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 0", 
>> status is OK.
>> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" on 
>> device "TL2000-1" (/dev/rmt/0n).
>> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
>> file:block 473:0.
>> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
>> at file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
>> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
>> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
>> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
>> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
>> restored file 
>> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
>> correct. Original 8589934592, restored 445841408.
>> 
>> Files are backed up from a zfs snapshot which is created just before the 
>> backup starts. Every other file I am attempting to restore works just 
>> fine... 
>> 
>> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
>> servers that have .vmdk files on them?
> 
> No, but I could imagine that this might have something to do with
> some sparse-file setting.
> 
> Have you checked how much space of your 8 GB flat vmdk is actually being
> used? Maybe this was 445841408 bytes at backup time?
> 
> Does the same happen if you do not use pre-allocated vmdk-disks?
> (Which is better anyway most of the time if you use NFS instead of VMFS.)
> 

All I use is preallocated disks, especially on NFS. I don't think I can
actually use sparse disks on NFS.

As a test, I created a 100 GB file from /dev/zero, and tried backing that up and
restoring it, and I get this:

29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
file:blk 13:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
"TL2000-1" (/dev/rmt/0n), Volume "10L4"
29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
66365161472, restored 376340827.

So, this tells me that whatever's going on, it's not VMware that's causing me
the trouble. I'm wondering if I'm running into problems with ZFS snapshot
backups, or just something with large files and Bacula?

Paul


Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-28 Thread Marc Schiffbauer
* Paul Greidanus wrote on 28.12.09 at 23:44:
> I'm trying to restore files I have backed up on the NFS server that I'm using 
> to back VMware, but I'm getting similar errors to this every time I try to 
> restore:
> 
> 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
> Restore.2009-12-28_12.10.28_54
> 28-Dec 12:10 krikkit-dir JobId 1433: Using Device "TL2000-1"
> 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger "unload slot 11, 
> drive 0" command.
> 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger "load slot 3, 
> drive 0" command.
> 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger "load slot 3, drive 0", 
> status is OK.
> 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume "09L4" on 
> device "TL2000-1" (/dev/rmt/0n).
> 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume "09L4" to 
> file:block 473:0.
> 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 at 
> file:blk 475:0 on device "TL2000-1" (/dev/rmt/0n). ERR=I/O error.
> 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
> "TL2000-1" (/dev/rmt/0n), Volume "09L4"
> 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
> 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
> restored file 
> /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
> correct. Original 8589934592, restored 445841408.
> 
> Files are backed up from a zfs snapshot which is created just before the 
> backup starts. Every other file I am attempting to restore works just fine... 
> 
> Is anyone out there doing ZFS snapshots for VMware, or backing up NFS servers 
> that have .vmdk files on them?

No, but I could imagine that this might have something to do with
some sparse-file setting.

Have you checked how much space of your 8 GB flat vmdk is actually being
used? Maybe this was 445841408 bytes at backup time?
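
For example, compare the logical size with what is actually allocated, using
the snapshot path from your log:

  ls -l /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk
  du -k /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk

ls -l should show the 8589934592 from the job log; du shows the blocks that
are really allocated on disk.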

Does the same happen if you do not use pre-allocated vmdk-disks?
(Which is better anyway most of the time if you use NFS instead of VMFS.)


-Marc
-- 
8AAC 5F46 83B4 DB70 8317  3723 296C 6CCA 35A6 4134
