B.H.
On Tue, Jul 7, 2015 at 4:14 PM, Mordechay Kaganer wrote:
>
>
> The conclusion is: to actually reclaim the duplicated space you have
> to include all snapshots that may point to the file.
>
Tried to dedupe the real data, including all snapshots. Still no free
space gain. This time, this loo
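As a hedged sketch of what "include all snapshots" means in practice
(hypothetical paths; -d performs the actual dedupe and -r recurses), the
run has to list the snapshots next to the live subvolume, and read-only
snapshots must first be made writable, since the dedupe ioctl updates
extent references:

# btrfs property set -ts /backup/snapshots/srv.2015-07-06 ro false
# duperemove -dr /backup/srv /backup/snapshots/srv.2015-07-06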
B.H.
On Tue, Jul 7, 2015 at 9:27 AM, Ryan Bourne wrote:
> To clarify, if I did the following:
>
> # btrfs subvolume create a
> # dd bs=1M count=10 if=/dev/urandom of=a/1
> # dd if=a/1 of=a/2
> # btrfs subvolume snapshot a b
>
> then I have four files containing the same data. a/1, b/1 share extents
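One way to check which of those four files really share extents is
fiemap, e.g. via filefrag (a sketch; on btrfs, shared extents carry a
"shared" flag in the flags column):

# sync                      # flush delayed allocation first
# filefrag -v a/1 a/2 b/1 b/2

Here a/1 and b/1 should report the same physical extent (the snapshot
shares it), while a/2 was written by a plain dd and sits in its own
extent until something dedupes it.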
On 7/07/15 9:07 AM, Mark Fasheh wrote:
> Yes, I forgot about that, but in your case almost everything will be
> reported shared. BTW, I have to leave my office now but will get to
> the rest of your e-mail later.
> --
> Mark Fasheh
--
To clarify, if I did the following:

# btrfs subvolume create a
# dd bs=1M count=10 if=/dev/urandom of=a/1
# dd if=a/1 of=a/2
# btrfs subvolume snapshot a b

then I have four files containing the same data.
On Tue, Jul 07, 2015 at 02:03:06AM +0300, Mordechay Kaganer wrote:
>
> Checked some more pairs; most extents appear as "shared". In some
> cases there is a "last encoded" extent, not shared, with length 4096.
>
> Since I use snapshots, could "shared" also mean "shared between snapshots"?
Yes, I forgot about that, but in your case almost everything will be
reported shared.
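A minimal sketch of that effect, assuming an empty btrfs mount as the
working directory: a snapshot alone is enough to make fiemap report the
source file's extents as shared, with no deduplication involved.

# btrfs subvolume create sv
# dd bs=1M count=4 if=/dev/urandom of=sv/f
# sync
# filefrag -v sv/f              # no "shared" flag yet
# btrfs subvolume snapshot -r sv sv-snap
# filefrag -v sv/f              # the same extent now reports "shared"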
B.H.
On Tue, Jul 7, 2015 at 1:34 AM, Mark Fasheh wrote:
>>
>> It runs successfully for several hours and prints out many files which
>> are indeed duplicates, like this:
>>
>> Showing 4 identical extents with id 5164bb47
>> Start        Length    Filename
>> 0.0          4.8M      ""
>> 0.0
On Tue, Jul 07, 2015 at 12:54:01AM +0300, Mordechay Kaganer wrote:
> I have a btrfs volume which is used as a backup target for rsync from
> the main servers. It contains many duplicate files across different
> subvolumes, and I have some read-only snapshots of each subvolume,
> which are created every time after the backup completes.
B.H.
Hello.
I have a btrfs volume which is used as a backup target for rsync from
the main servers. It contains many duplicate files across different
subvolumes, and I have some read-only snapshots of each subvolume,
which are created every time after the backup completes.
I was trying to gain some free space by deduplicating these files.
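For reference, the backup flow described above might look roughly like
this (hypothetical host and paths). rsync's --inplace matters here: by
default rsync writes a changed file to a temporary copy and renames it,
which gives every changed file fresh extents and discards any sharing a
previous dedupe pass established; --inplace rewrites only the changed
blocks.

# rsync -a --inplace --delete server:/data/ /backup/srv/
# btrfs subvolume snapshot -r /backup/srv \
      /backup/snapshots/srv.$(date +%F)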