Hello,
Thanks a lot for the explanations. In the v0.18 version I used, I indeed hit patch #1.
The other patches don't seem to apply (maybe in v0.19?).
Cheers, Oliver
On Saturday 11 July 2009 03:11:34 ashf...@whisperpc.com wrote:
Oliver,
I just tried btrfs with a big blocksize (-n,-l,-s) of 256K
, Oliver wrote:
Hi All,
on a testing machine I installed four HDDs, configured as RAID6. For a
test I removed one of the drives (/dev/sdk) while the volume was mounted
and data was being written to it. This worked well, as far as I can see.
Some I/O errors were written to /var/log/syslog, but the
ideas on this - at the moment it's only an idea, but
I'm interested to know if a) it would be possible to implement it in a
complex filesystem like btrfs, and b) it would prove useful if
implemented.
Thanks
Oliver.
PS. I realise this could be implemented with a user-space daemon which
polls available disk space and deletes
have two
similar files, can the space used by the identical parts of the files be
saved?)
Has any thought been put into either 1) or 2)? Are either possible or
desired?
Thanks
Oliver
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to [EMAIL
Having three possible states for each file would seem sensible:
1. Compression Enabled - this file or folder will be compressed.
2. Compression Disabled - this file or folder will never be compressed.
3. Not Specified - this will inherit the compression state from its parent.
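For reference, current btrfs exposes roughly these three states from userspace via per-path properties and file attributes; the paths below are only examples, and the exact semantics should be checked against the btrfs-progs and chattr documentation:

```shell
# Sketch only - paths are examples.

# 1. Compression enabled: new writes under this directory are compressed,
#    and newly created children inherit the setting.
btrfs property set /data/logs compression zstd

# 2. Compression disabled: the 'm' attribute marks a file as
#    "do not compress", overriding mount-wide compression.
chattr +m /data/raw.img

# 3. Not specified: an empty value resets the property, so the path falls
#    back to the inherited / mount-wide behaviour.
btrfs property set /data/logs compression ""
```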
To keep this
2) Keep a tree of checksums for data blocks, so that a block of data can
be located by its checksum. Whenever a data block is about to be
written, check whether it matches any known block, and if it does then
don't bother duplicating the data on disk. I suspect this option may
not be
On Wed, 2008-12-10 at 13:07 -0700, Anthony Roberts wrote:
When a direct read
comparison is required before sharing blocks, it is probably best done
by a stand-alone utility, since we don't want to wait for a read of a full
extent every time we want to write one.
Can a stand-alone
writing that script to test on my ext3 disk just to see
how much duplicate wasted data I really have.
Thanks
Oliver
It would be interesting to see how many duplicate *blocks* there are
across the filesystem, agnostic to files...
Is this something your script does, Oliver?
My script doesn't yet exist, although when created it would, yes. I was
thinking of just making a BASH script and using dd to extract
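One possible shape for such a dd-based survey script (the block size and the counting policy, counting each extra copy of a block once, are my assumptions):

```shell
# Split a file (or block device) into fixed-size blocks, checksum each
# with md5sum, and print how many blocks duplicate an earlier block.
blockscan() {
    target="$1"; bs="${2:-4096}"
    size=$(wc -c < "$target")
    nblocks=$(( (size + bs - 1) / bs ))
    for ((i = 0; i < nblocks; i++)); do
        # Read exactly one block at offset i*bs and hash it.
        dd if="$target" bs="$bs" skip="$i" count=1 2>/dev/null | md5sum
    done | sort | uniq -c | awk '$1 > 1 { dup += $1 - 1 } END { print dup + 0 }'
}
```

For example, `blockscan /path/to/image 4096` prints 0 for a file with no repeated 4K blocks. Reading block-by-block via dd is slow; it is only meant to estimate how much a block-level dedup could save.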
+ software is 2GB (after nulls are removed), the total size for 20 VMs
could be ~6GB (remembering there will be extra redundancy the more VMs
you add) - not a bad saving.
Thanks
Oliver
Hi,
While this sounds nice in theory, in reality, since eraseblocks are
generally very large, and with hardware-based block remapping (FTL), you can
never be sure which data blocks are at risk when rewriting just one block.
There is a good chance that rewriting one block of data somewhere
if (nritems == BTRFS_NODEPTRS_PER_BLOCK(root))
        BUG();
^ You seem to have missed one.
Actually, that one was left on purpose: the condition passed to BUG_ON()
must not have any side effects, and I do not know enough about btrfs to
tell whether BTRFS_NODEPTRS_PER_BLOCK() has any, so it was left as is.
to cheap SSDs.
Not much btrfs can do about it, though. If the piece of data that triggers
the bug could be identified, workarounds could possibly be introduced for
the particular buggy controllers.
Oliver Mattos
(resent, as I emailed the wrong recipients before)
I seem to have observed a file on a (writable) snapshot changing
although there were no writes occurring on the snapshot itself. This is
not supposed to happen, right?
Sequence of events:
1. A (writable) snapshot @home-2014-04-16 is taken on a @home subvolume
mounted at /home.
2. The
Am 17.04.2014 17:56, schrieb Chris Mason:
On 04/17/2014 11:39 AM, Oliver O. wrote:
I seem to have observed a file on a (writable) snapshot changing
although there were no writes occurring on the snapshot itself. This is
not supposed to happen, right?
Was this a nodatacow file?
-chris
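Chris's question can be checked from userspace: nodatacow shows up as the 'C' file attribute (the path below is an example):

```shell
# Check for the 'C' (nodatacow) attribute on the file in question
# (path is an example):
lsattr /home/user/somefile
# Output along the lines of "---------------C--- /home/user/somefile"
# means the file is nodatacow: its data is overwritten in place and
# carries no checksums, which changes how writes interact with snapshots.
```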
Am 17.04.2014 18:11, schrieb Oliver O.:
Am 17.04.2014 17:56, schrieb Chris Mason:
On 04/17/2014 11:39 AM, Oliver O. wrote:
I seem to have observed a file on a (writable) snapshot changing
although there were no writes occurring on the snapshot itself. This is
not supposed to happen, right
Am 17.04.2014 23:29, schrieb Oliver O.:
Conclusions:
The sequence of events seems to be:
1. The file was changed with generation 193090.
2. The snapshot for backup (@home-2014-04-16) was taken (generation
194551).
3. As the backup was reading from the snapshot, it was seeing stale
data
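Generations like the ones in these conclusions can be inspected with standard btrfs-progs tools; the device and mount point below are examples:

```shell
# Current filesystem generation, read from the superblock
# (device name is an example):
btrfs inspect-internal dump-super /dev/sdX | grep '^generation'

# List files in a subvolume modified after a given generation, e.g.
# everything changed since the snapshot taken at generation 194551:
btrfs subvolume find-new /home 194551
```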
a situation like this?
Thanks!
--
Regards,
Oliver
than
1GB and took just a couple of seconds.
The only idea I have now is to copy everything to a new fs, chroot into
it and update-grub. Is this the best way to go?
--
Regards,
Oliver
% /var/run
none 435M 0 435M 0% /var/lock
none 844G 293G 552G 35% /var/lib/ureadahead/debugfs
/dev/sdc1 2.0G 26M 1.9G 2% /boot
--
Regards,
Oliver
I fear, I broke my FS by running btrfsck. I tried 'btrfsck --repair' and
it fixed several problems but finally crashed with some debug message
from 'extent-tree.c', so I also tried 'btrfsck --repair
--init-extent-tree'. Since then I can't mount the FS anymore:
mount -t btrfs
On 01.01.2014 22:58, Chris Murphy wrote:
On Jan 1, 2014, at 2:27 PM, Oliver Mangold o.mang...@gmail.com wrote:
I fear, I broke my FS by running btrfsck. I tried 'btrfsck --repair' and it
fixed several problems but finally crashed with some debug message from
'extent-tree.c', so I also tried
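Not specific to this report, but the usual advice is to exhaust read-only options before reaching for --repair or --init-extent-tree; a sketch (device and mount point are examples):

```shell
# 1. Read-only check first; reports problems without writing anything:
btrfs check --readonly /dev/sdX

# 2. Try mounting with a backup tree root before rewriting metadata
#    ("-o recovery" on older kernels, "usebackuproot" on newer ones):
mount -o ro,usebackuproot /dev/sdX /mnt

# 3. If the FS will not mount at all, copy files off it instead of
#    repairing in place:
btrfs restore /dev/sdX /tmp/recovery
```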
apt a checksum?
>Bear in mind that if it is unreliable hardware, then continued use
> of the FS in read-write operation is likely to cause additional
> damage.
Of course.
I would then, in any case, after the filesystem is up again, clean up, do a
fresh external backup, scratch
Hi Duncan,
thanks for your extensive reply!
Am 28.01.2017 um 06:00 schrieb Duncan:
> All three options apparently default to 64K (as that's what I see here
> and I don't believe I've changed them), but can be changed. See the
> kernel options help and where it points for more.
>
Indeed, I
Am 26.01.2017 um 12:01 schrieb Oliver Freyermuth:
>Am 26.01.2017 um 11:00 schrieb Hugo Mills:
>>We can probably talk you through fixing this by hand with a decent
>> hex editor. I've done it before...
>>
> That would be nice! Is it fine via the mailing list?
> Po
Am 28.01.2017 um 13:37 schrieb Janos Toth F.:
> I usually compile my kernels with CONFIG_X86_RESERVE_LOW=640 and
> CONFIG_X86_CHECK_BIOS_CORRUPTION=N because 640 kilobyte seems like a
> very cheap price to pay in order to avoid worrying about this (and
> skip the associated checking + monitoring).
Am 29.01.2017 um 17:44 schrieb Hans van Kranenburg:
> On 01/29/2017 03:02 AM, Oliver Freyermuth wrote:
>> Am 28.01.2017 um 23:27 schrieb Hans van Kranenburg:
>>> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
>>>> Am 26.01.2017 um 12:01 schrieb Oliver Freyermuth
Am 29.01.2017 um 20:28 schrieb Hans van Kranenburg:
> On 01/29/2017 08:09 PM, Oliver Freyermuth wrote:
>>> [..whaaa.. text.. see previous message..]
>> Wow - this nice python toolset really makes it easy, bigmomma holding your
>> hands ;-) .
>>
>> Indeed, I
> and there are patches for btrfs check which will fix those in most
> cases.
I'll schedule a memcheck as soon as I can turn off the machine for a while,
which sadly may be a week or so from now...
>
>Hugo.
>
>> Cheers and thanks for any suggestions,
Oliver
PS: Please put my mail in CC, I'm not subscribed to the list. Thanks!
FS. I will take
an external backup of the content within the next 24 hours using that, then I
am ready to try anything you suggest.
Cheers and thanks!
Oliver
Am 28.01.2017 um 23:27 schrieb Hans van Kranenburg:
> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
>> Am 26.01.2017 um 12:01 schrieb Oliver Freyermuth:
>>> Am 26.01.2017 um 11:00 schrieb Hugo Mills:
>>>>We can probably talk you through fixing this by hand
in
question, so I'll rename that folder and restore just that from backup for now.
Is the debug-information still of interest? If so, I can share it (but would
not post it publicly to the list since many filenames are in there...).
It weighs in at about 2 x 80 MiB after xz compression.
Or
ite some
days.
If you can think of any other information which may be useful to diagnose the
underlying issue which caused that corruption, just let me know. I'll keep
the image of the broken FS around for a few weeks.
Cheers,
Oliver
erts can figure out something from my uploaded
debug info
to prevent such things in the future.
Thanks a lot in any case for your experience report!
I hope my "repair experience" from my other mail, written from a user's
perspective, may at some point also be of help to you (even though
rm, this indeed
answers all my questions as a user.
I hope it will be helpful to many other btrfs-users in the future.
Best wishes,
Oliver
send" still unusable to me -
I guess it's not ready for general use just yet.
Is there any information I can easily extract / provide to allow the experts to
fix this issue?
The kernel log shows nothing.
Thanks a lot,
Oliver
. balancing metadata with -musage=0, I'd guess)
needed to make them become active afterwards?
If there is any documentation on this and I missed it, please RTFM me.
Cheers and thanks a lot,
Oliver
"rw,noatime,compress=zlib,ssd,space_cache,commit=120".
Apart from that: No RAID or any other special configuration involved.
Cheers and any help appreciated,
Oliver
ed
to replay a backup for other reasons?
Cheers and thanks for your reply,
Oliver
my systems are SSD-only; to avoid unnecessary writes, I'll just wait until
I really need to replay a backup.
Thanks a lot and best regards,
Oliver
the past 'btrfs send' got broken
after dedupe, which got fixed; now it is just extremely slow).
For me, this means I have to stay with rsync backups, which are sadly
incomplete since special FS attrs
like "C" for nocow are not backed up.
Cheers and thanks for your reply,
Ol
the first / few to
complain about this as a user, I did not feel like my use case was
special or exotic (at least, up to now).
Thanks a lot,
Oliver
> Thanks,
> Qu
read from
stream" issue.
Needless to say, everything is fine again after downgrading to btrfs-progs 4.8.3.
Cheers,
Oliver
PS: While I will check back for replies in the archive, a courtesy cc will be
appreciated since I am not subscribed to the list.
apshot @parent-snapshot/@child1
@parent-snapshot/@child2
(st_dev=129, st_ino=256) @parent-snapshot
(st_dev=21, st_ino=2) @parent-snapshot/@child1
(st_dev=21, st_ino=2) @parent-snapshot/@child2
Delete subvolume (no-commit): '/home/oliver/Downloads/Btrfs/@parent-snapshot'
Delete subvolume (no-commit
If you clicked on the link to this topic: Thank you!
I have the following setup:
6x 500GB HDD-Drives
1x 32GB NVME-SSD (Intel Optane)
I used bcache to set up the SSD as the caching device, with all six
drives as backing devices. After all that was in place, I formatted the
six HDDs with
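The setup described can be sketched roughly as follows; the device names and the btrfs profile are assumptions, since the message does not give them:

```shell
# SSD as the cache set, six HDDs as backing devices (names are examples):
make-bcache -C /dev/nvme0n1
make-bcache -B /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
# Attach each backing device to the cache set (UUID from bcache-super-show):
# echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# Then create the filesystem on the resulting bcache devices, e.g.:
mkfs.btrfs -d raid10 -m raid1 /dev/bcache{0..5}
```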