On 20.02.2013 at 02:14, Liu Bo bo.li@oracle.com wrote:
I think I know why inode_cache keeps us from freeing space: inode_cache
adds a cache_inode to each btrfs root, and this cache_inode is iput at
the very last stage during umount, i.e. after we do the cleanup work on old
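If inode_cache is suspected of pinning space, one quick way to rule it out is to mount the filesystem without that option (a hypothetical fstab sketch; the device and mount point are examples, and the remaining options mirror the setup mentioned later in this thread):

```
# /etc/fstab -- same mount, but with inode_cache dropped from the options
/dev/md11  /mnt/backup  btrfs  noatime,nodiratime  0  0
```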
On 29.04.2012 at 01:53, Hubert Kario h...@qbs.com.pl wrote:
On Sunday 01 of April 2012 11:42:23 Jérôme Poulin wrote:
On Sun, Apr 1, 2012 at 11:27 AM, Norbert Scheibner s...@gmx.net wrote:
Some users tested this patch successfully for weeks or months in 2 or 3
kernel versions since
Good luck!
I know it's been discussed more than once, but as a user I really would like to
see the patch for allowing this in the kernel.
Some users tested this patch successfully for weeks or months in 2 or 3 kernel
versions since then, true?
I'd say by creating a snapshot, it's nothing else
On Sun, 01 Apr 2012 18:30:13 +0300 Konstantinos Skarlatos wrote:
+1 from me too, I would save enormous amounts of space with a patch
like that, at least until dedupe is implemented. We could call it poor
man's dedupe.
That's my point. This poor man's dedupe would solve my problems here very
On Sun, 01 Apr 2012 19:45:13 +0300 Konstantinos Skarlatos wrote:
That's my point. This poor man's dedupe would solve my problems here
very well. I don't need a zfs-variant of dedupe. I can implement such a
file-based dedupe with userland tools and would be happy.
Do you have any scripts
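For what it's worth, the file-based userland dedupe idea mentioned above can be sketched in a few lines of shell. This is a hypothetical sketch, not a tested tool; `find_dupes` and all paths are made-up names. The idea: group files by checksum, then collapse each duplicate into a reflink copy of the first occurrence.

```shell
# Print "original|duplicate" pairs for files with identical content.
# md5sum output is "HASH  PATH": 32 hex chars plus two spaces, so the
# path starts at column 35.
find_dupes() {
    find "$1" -type f -print0 | xargs -0 md5sum |
    awk 'seen[$1]  { print seen[$1] "|" substr($0, 35) }
         !seen[$1] { seen[$1] = substr($0, 35) }'
}

# On btrfs, each pair could then be collapsed so both names share the
# same extents copy-on-write (this part needs btrfs to actually dedupe):
#   find_dupes /mnt/backup | while IFS='|' read -r keep dup; do
#       cp --reflink=always "$keep" "$dup"
#   done
```

Unlike hardlinking, a reflink copy keeps the files independent afterwards: writing to one leaves the other untouched, which is what makes this safe for backup trees.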
On Sun, 01 Apr 2012 19:22:42 +0200 Klaus A. Kreil wrote:
I am just an interested reader on the btrfs list and so far have never
posted or sent a message to the list, but I do have a dedup bash script
that searches for duplicates underneath a directory (provided as an
argument) and hard
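A script of the kind described above can be quite small. The following is a hypothetical sketch of the same idea, not Klaus's actual script; the function name and paths are made up:

```shell
# Hash every file under a directory, then hard-link files with identical
# content together. Note hardlinked "copies" are no longer independent:
# modifying one modifies all of them, so this only suits read-only data.
hardlink_dupes() {
    find "$1" -type f -print0 | xargs -0 md5sum | sort |
    while read -r hash file; do
        if [ "$hash" = "$prev_hash" ]; then
            ln -f "$prev_file" "$file"   # replace duplicate with a hard link
        else
            prev_hash=$hash prev_file=$file
        fi
    done
}
```

Sorting by hash makes identical files adjacent, so one pass with a single "previous file" variable is enough.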
On Fri, 10 Feb 2012 00:20:55 +0600 Roman Mamedov r...@romanrm.ru wrote:
AFAIK the only reliable way currently to ensure the space after a
subvolume deletion is freed is to remount the FS.
Have you tried it yourself? I think the problem was the remount
before the space has been completely
On Sat, 11 Feb 2012 19:56:32 +0600 Roman Mamedov r...@romanrm.ru wrote:
Have you tried it yourself? I think the problem was the remount
before the space has been completely freed in the background. It
left a valid and working fs, with work still to do.
Yes, after some snapshot deletions the
Good luck!
I now use kernel 3.2. The filesystem was originally created under 2.6.39 on 1
whole hdd, mounted with noatime,nodiratime,inode_cache. I use it for backups:
rsync the whole system to a subvolume, snapshot it, and then delete some
tempfiles in the snapshot, which are 90% of the
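The backup cycle described above boils down to something like the following sketch. The mount point, snapshot naming, and tempfile path are examples, not the poster's actual setup:

```shell
rsync -aHAX --delete / /mnt/backup/current/     # mirror the system
btrfs subvolume snapshot /mnt/backup/current \
      /mnt/backup/snap-$(date +%F)              # freeze today's state
rm -rf /mnt/backup/snap-$(date +%F)/var/tmp/*   # drop the tempfiles
```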
On Thu, 9 Feb 2012 12:11:19 -0600 Chester wrote:
A similar thing has happened to me recently. The snapshot deletion
happens asynchronously and should continue after a reboot (in my
case). If you boot up your system and leave it idle, take a look at
iotop. You might see a [btrfs-cleaner] doing
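Since the deletion is asynchronous, one crude way to script around it is to poll until the reported free space stops changing between samples. This is a hypothetical helper, not an official tool; on btrfs, `btrfs filesystem df` gives a more truthful picture than plain `df`, but `df` keeps the sketch portable:

```shell
# Poll a mount point until available space is identical in two
# consecutive samples, i.e. background cleanup has (probably) finished.
wait_for_cleanup() {
    mnt=$1 interval=${2:-10} tries=${3:-360}
    prev=
    while [ "$tries" -gt 0 ]; do
        free=$(df -P "$mnt" | awk 'NR==2 { print $4 }')
        if [ "$free" = "$prev" ]; then
            echo "free space settled at ${free} KiB available"
            return 0
        fi
        prev=$free
        tries=$((tries - 1))
        sleep "$interval"
    done
    echo "gave up waiting on $mnt" >&2
    return 1
}
```

Watching iotop for [btrfs-cleaner] remains the more direct check; this just gives scripts something to block on.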
Another reboot - another try - only this time I ran btrfsctl -a first.
Now the first process to stay in uninterruptible sleep was the umount after
these tests.
modprobe btrfs
./btrfsctl -a
Scanning for Btrfs filesystems
./btrfs-show
Label: none uuid: ca5e7037-a65c-45d8-b954-f64ab0799964
Hi,
During some btrfs tests of my own on a btrfs volume started with 5 devices of
different sizes, some snapshots and subvolumes, and a few large files, I removed
one device after another (always rebalancing after each remove) until I ended up
with 3.
I use the latest btrfs-tools snapshot and the
This happened after reboot:
./btrfs-show
Label: none uuid: ca5e7037-a65c-45d8-b954-f64ab0799964
Total devices 2 FS bytes used 6.01GB
devid    5 size 623.25GB used 7.32GB path /dev/md15
devid    1 size 9.31GB used 7.32GB path /dev/md11
./btrfsck /dev/md11
failed to open
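For reference, one shrink step of the procedure described above (remove a device, then rebalance) looks roughly like this; the device and mount point are examples:

```shell
btrfs device delete /dev/sdc /mnt/pool    # migrate extents off the device
btrfs filesystem balance /mnt/pool        # spread data across what remains
```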