A pity that it is not possible. Don't you agree that this would be a good strategy when disk space is critical, and thus a good enhancement for Bareos?

For me, it would be okay to write one whole Full during consolidation, when "Always Incremental Max Full Age" is hit, into the consolidation pool on disk, and then migrate it back to tape. You could set the timespan so that each month only one Full is consolidated, and then migrate it away from disk before the next one is scheduled; see the sketch below.
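
For illustration, a rough (untested) sketch of what I have in mind; the
resource names come from my setup, the JobDefs and all values are just
placeholders:

  # Consolidate job: "Max Full Consolidations = 1" limits how many jobs per
  # run may be upgraded to a full consolidation, so only one Full at a time
  # has to fit on disk.
  Job {
    Name = "Consolidate"
    Type = Consolidate
    Accurate = yes
    Max Full Consolidations = 1
    JobDefs = "DefaultJob"          # placeholder defaults
  }

  # The consolidated pool points at the tape pool, so a migration job can
  # move the freshly consolidated Full off the disk again.
  Pool {
    Name = "AI-Consolidated"
    Pool Type = Backup
    Storage = Bareos-AIC
    Next Pool = Fulltape
  }

  Job {
    Name = "Migrate-Fulls-To-Tape"
    Type = Migrate
    Pool = AI-Consolidated          # read pool; target is its Next Pool
    Selection Type = Job
    Selection Pattern = ".*"        # placeholder job-name regex
    JobDefs = "DefaultJob"
  }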

Anyhow, thank you for your explanation and proposals. I'll check my options then, especially "Max Virtual Full Interval", or gather more disk space...

Kind Regards
Tom

On 23.07.2024 at 08:57, Bruno Friedmann (bruno-at-bareos) wrote:
I would say it is because it is designed the way it is: the Full also moves along the timeline, as documented, so from time to time it needs to be read and rewritten.

To cover your need, you may want to achieve almost what AI does, but without the AI facilities: run a Virtual Full, say, each month, with that one going to tape; then you can recycle your previous Full and Incrementals.
Check the Maximum Virtual Full Interval parameter, for example (see the sketch below).

Just a rough idea.
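
In configuration terms, roughly like this (pool names taken from your
setup; values and the JobDefs are placeholders, untested):

  Job {
    Name = "ai-backup"
    Type = Backup
    Level = Incremental
    Accurate = yes
    Pool = AI-Incremental
    # Upgrade the job to a Virtual Full once a month. The Virtual Full is
    # written to the Next Pool of the job's pool, so pointing that at the
    # tape pool sends the monthly Full to tape (in your current AI setup,
    # Next Pool points at AI-Consolidated instead).
    Max Virtual Full Interval = 30 days
    JobDefs = "DefaultJob"          # placeholder defaults
  }

  Pool {
    Name = "AI-Incremental"
    Pool Type = Backup
    Storage = Bareos-AII
    Next Pool = Fulltape            # monthly Virtual Full goes to tape
  }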

On Monday 22 July 2024 at 16:13:24 UTC+2 Thomas Kempf wrote:

    OK, what I don't understand is the necessity of having the same media
    type for read access. I just read from tape.
    Wouldn't it be possible to read the Full from one device (media type),
    consolidate it with the Incrementals from the other device (media type),
    and write the consolidated Full data to the consolidated pool?
    That way you could avoid having to keep a Full of all AI jobs on disk
    at the same time...

    On 22.07.2024 at 14:50, Bruno Friedmann (bruno-at-bareos) wrote:
     > Well, it tries (for a good reason: the configuration) to also
     > consolidate the Full, so it fails when it tries to access data
     > stored on tape.
     >
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Fatal error: stored/acquire.cc:214
     > > No suitable device found to read Volume "B00006L8"
     >
     > AI works with 2 distinct storages, but having the same media type.
     > So in your case, Full and Inc would be on disk, and then tape can be
     > used to get a consolidated Virtual Full (like every week, month or
     > so), as described in the documentation.
     >
     > On Monday 22 July 2024 at 09:48:59 UTC+2 Thomas Kempf wrote:
     >
     > Hello Bruno,
     > I thought that I tried to do what you proposed...
     > Write the full consolidation job, including the first Full from tape,
     > to the Consolidation pool, and then migrate everything back to tape.
     > So I thought there would be no need to write to the tape during the
     > consolidated Full.
     > As with the incremental consolidation, data is read from type (file)
     > and written to type (filec). At least, that's how I interpret the log:
     > > 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AII0001" to read.
     > > 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AIC0001" to write.
     > Or am I missing something here?
     >
     > On 22.07.2024 at 09:33, Bruno Friedmann (bruno-at-bareos) wrote:
     > > This setup will not work: if your first Full is on tape, the
     > > consolidation won't be able to have a read and a write storage of
     > > the same media type.
     > >
     > > The tape should be used to create a VF copy of consolidated jobs,
     > > for example.
     > >
     > > On Friday 19 July 2024 at 14:50:17 UTC+2 Thomas Kempf wrote:
     > >
     > > Hello,
     > > I'm reviving this old thread because I think I have a setup like
     > > the one Brock describes, but I still have some error in the setup,
     > > or I have found a bug in Bareos. I'd be really glad if you could
     > > help me...
     > >
     > > There is one LTO-8 drive with an associated pool "Fulltape"
     > > (media type "lto-8") and the disk storage with the pools
     > > "AI-Incremental" (media type "file") and "AI-Consolidated"
     > > (media type "filec"), sketched below.
     > > At the beginning I did a Full into the "Fulltape" pool on LTO-8.
     > > After that, daily AI Incrementals on disk in the AI-Incremental
     > > pool, and additionally daily consolidation jobs to AI-Consolidated.
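     > >
     > > For reference, a stripped-down sketch of the relevant resources
     > > (addresses, passwords etc. omitted; names for the tape storage and
     > > device are made up):
     > >
     > >   Storage {                     # director's view of the tape drive
     > >     Name = "Tape"               # made-up name
     > >     Device = "LTO8-Drive"       # made-up SD device name
     > >     Media Type = "lto-8"
     > >   }
     > >   Storage {
     > >     Name = "Bareos-AII"
     > >     Device = "Bareos-AII0001"
     > >     Media Type = "file"
     > >   }
     > >   Storage {
     > >     Name = "Bareos-AIC"
     > >     Device = "Bareos-AIC0001"
     > >     Media Type = "filec"
     > >   }
     > >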
     > > This ran error-free and smoothly for 8 months. Then "Always
     > > Incremental Max Full Age" was reached, and I expected a full backup
     > > in AI-Consolidated, which I wanted to migrate back to tape again
     > > afterwards.
     > > Alas, this full consolidation throws an error and fails to change
     > > the read device to the tape.
     > > Is this a bug in Bareos?
     > >
     > > Here is the error:
     > >
     > > 19-Jul 13:27 hueper-dir JobId 63962: Start Virtual Backup JobId 63962,
     > > Job=BETTERONE-AI-MULTIPOLSTER.2024-07-19_13.27.23_47
     > > 19-Jul 13:27 hueper-dir JobId 63962: Bootstrap records written to
     > > /var/lib/bareos/hueper-dir.restore.8.bsr
     > > 19-Jul 13:27 hueper-dir JobId 63962: Consolidating JobIds
     > > 59638,63851,62915,62939,62963 containing 215949 files
     > > 19-Jul 13:27 hueper-dir JobId 63962: Connected Storage daemon at
     > > schorsch-sd.ad.hueper.de:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
     > > 19-Jul 13:27 hueper-dir JobId 63962: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
     > > 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AII0001" to read.
     > > 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AIC0001" to write.
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Volume "BO-AI-Consolidated-31509"
     > > previously written, moving to end of data.
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Ready to append to end of Volume
     > > "BO-AI-Consolidated-31509" size=238
     > > 19-Jul 13:27 schorsch-sd JobId 63962: stored/acquire.cc:157 Changing read
     > > device. Want Media Type="LTO-8" have="File" device="Bareos-AII0001"
     > > (/bareos-data/BO-AI-Incremental)
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AII0001"
     > > (/bareos-data/BO-AI-Incremental).
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Fatal error: stored/acquire.cc:214
     > > No suitable device found to read Volume "B00006L8"
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AIC0001"
     > > (/bareos-data/BO-AI-Consolidated).
     > > 19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AII0001"
     > > (/bareos-data/BO-AI-Incremental).
     > > 19-Jul 13:27 hueper-dir JobId 63962: Replicating deleted files from
     > > jobids 59638,63851,62915,62939,62963 to jobid 63962
     > > 19-Jul 13:27 hueper-dir JobId 63962: Error: Bareos hueper-dir
     > > 23.0.3~pre135.a9e3d95ca (28May24):
     > >   Build OS:               Debian GNU/Linux 11 (bullseye)
     > >   JobId:                  63962
     > >   Job:                    BETTERONE-AI-MULTIPOLSTER.2024-07-19_13.27.23_47
     > >   Backup Level:           Virtual Full
     > >   Client:                 "betterone-fd" 23.0.3~pre95.0aeaf0d6d (15Apr24) 13.2-RELEASE,freebsd
     > >   FileSet:                "betterone-ai-multipolster" 2021-02-12 12:02:55
     > >   Pool:                   "BO-AI-Consolidated" (From Job Pool's NextPool resource)
     > >   Catalog:                "MyCatalog" (From Client resource)
     > >   Storage:                "Bareos-AIC" (From Storage from Pool's NextPool resource)
     > >   Scheduled time:         19-Jul-2024 13:27:23
     > >   Start time:             23-Mai-2024 19:06:26
     > >   End time:               23-Mai-2024 19:23:02
     > >   Elapsed time:           16 mins 36 secs
     > >   Priority:               25
     > >   Allow Mixed Priority:   yes
     > >   SD Files Written:       0
     > >   SD Bytes Written:       0 (0 B)
     > >   Rate:                   0.0 KB/s
     > >   Volume name(s):
     > >   Volume Session Id:      9
     > >   Volume Session Time:    1721383710
     > >   Last Volume Bytes:      0 (0 B)
     > >   SD Errors:              1
     > >   SD termination status:  Fatal Error
     > >   Accurate:               yes
     > >   Bareos binary info:     Bareos community build (UNSUPPORTED): Get
     > >                           professional support from https://www.bareos.com
     > >   Job triggered by:       User
     > >   Termination:            *** Backup Error ***
     > >
     > > Kind Regards
     > > Tom
     > >
     > > On 26.12.2023 at 14:05, Brock Palen wrote:
     > > > Doesn't work.
     > > >
     > > > During the consolidation of a Full, you have to read back from the
     > > > Full pool and write to the Full pool. So you always need to have
     > > > two working devices: one to read, and one to write to the
     > > > AI-Consolidate pool.
     > > >
     > > > That's why in my setup tape is really a pool to 'migrate to make
     > > > space', so I can read back from it (Bareos often correctly switches
     > > > to read from that pool). But the AI-Consolidate pool is on disk,
     > > > and that's where all the shuffling happens (see the sketch below).
     > > >
     > > > There is no way to do AI without two devices, and enough disk to
     > > > at least create one full backup for whatever you are backing up.
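     > > >
     > > > For illustration, a migrate-to-make-space setup might look roughly
     > > > like this (directive names as in the Bareos docs; resource names
     > > > and values are only examples, untested):
     > > >
     > > >   Pool {
     > > >     Name = "AI-Consolidate"        # disk pool where the shuffling happens
     > > >     Pool Type = Backup
     > > >     Next Pool = "Tape"             # example tape pool; migration target
     > > >     Migration High Bytes = 800G    # start migrating above this usage
     > > >     Migration Low Bytes = 600G     # migrate until usage falls below this
     > > >   }
     > > >
     > > >   Job {
     > > >     Name = "Migrate-To-Tape"
     > > >     Type = Migrate
     > > >     Pool = "AI-Consolidate"        # read pool; target is its Next Pool
     > > >     Selection Type = PoolOccupancy # select volumes by pool usage
     > > >     JobDefs = "DefaultJob"         # placeholder defaults
     > > >   }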
     > > >
     > > >
     > > > Brock Palen
     > > > bro...@mlds-networks.com
     > > > www.mlds-networks.com
     > > > Websites, Linux, Hosting, Joomla, Consulting
     > > >
     > > >> On Dec 24, 2023, at 12:54 AM, Russell Harmon <eatnu...@gmail.com> wrote:
     > > >>
     > > >> On Sat, Dec 23, 2023 at 17:21 Brock Palen <bro...@mlds-networks.com> wrote:
     > > >> Correct. Because when you run your consolidate with a Full, it has
     > > >> to read the old Full, likely from your tape drive. So it has to
     > > >> write to disk.
     > > >>
     > > >> What if I flip things around: use disk for incrementals and tape
     > > >> for Fulls? Would I then just need to make sure I run a consolidate
     > > >> job before I run out of disk?