overheads if 'too many' snapshots/subvols
are made?
If snapshots were to be taken once a minute and retained, what breaks first?
What are 'reasonable' (maximum) numbers for frequency and number of held
versions?
Thanks,
Martin
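Taking the once-a-minute question literally, a minimal rotation sketch (the paths and the retention cap are assumptions, not tested btrfs limits; BTRFS defaults to `echo btrfs` so pasting this is a dry run until you set BTRFS=btrfs on a real subvolume):

```shell
#!/bin/sh
# Rotation sketch for frequent snapshots. Paths and the retention cap are
# assumptions, not tested btrfs limits; BTRFS defaults to "echo btrfs" so
# this is a dry run until BTRFS=btrfs is set on a real subvolume.
BTRFS=${BTRFS:-echo btrfs}
VOL=${VOL:-/tmp/demo-vol}            # assumed btrfs subvolume mount point
SNAPDIR=${SNAPDIR:-$VOL/.snapshots}
KEEP=${KEEP:-1440}                   # e.g. one day of per-minute snapshots

mkdir -p "$SNAPDIR"
$BTRFS subvolume snapshot -r "$VOL" "$SNAPDIR/$(date +%Y%m%d-%H%M%S)"

# Prune everything older than the newest $KEEP (GNU head's negative count).
ls -1 "$SNAPDIR" | sort | head -n -"$KEEP" | while read -r name; do
    $BTRFS subvolume delete "$SNAPDIR/$name"
done
```

The bound on held versions here is purely administrative; what actually breaks first at scale is the question put to the list above.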
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs
that I'm using at present...
OK, so the way of managing all that is going to be a little different.
How would you want that?
Regards,
Martin
by
filesystem label is a nice/good idea. But is there any interest for that
to be picked up? Put in a bug/feature request onto bugzilla?
I would guess that most developers focus on mount point and let
fstab/mtab sort out the detail...
Regards,
Martin
,
Martin
Dear Devs,
Would there be any problem using nbd (/dev/nbdX) devices to gain
btrfs-raid across multiple physical hosts across a network? (For a sort
of btrfs-drbd! :-) )
Regards,
Martin
http://en.wikipedia.org/wiki/Network_block_device
http://www.drbd.org/
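For what it's worth, a sketch of what such an nbd-backed btrfs-raid1 might look like (hostA/hostB, the default nbd port, and the device/mount names are all assumptions; guarded so it is a no-op unless explicitly enabled):

```shell
#!/bin/sh
# Sketch only: hostA/hostB, the default nbd port, and the device/mount
# names are assumptions. No-op unless RUN=1 is set on hosts that
# actually run nbd-server.
if [ "${RUN:-0}" = 1 ]; then
    nbd-client hostA 10809 /dev/nbd0          # attach remote export 1
    nbd-client hostB 10809 /dev/nbd1          # attach remote export 2
    # raid1 for both data and metadata across the two network devices
    mkfs.btrfs -d raid1 -m raid1 /dev/nbd0 /dev/nbd1
    mount /dev/nbd0 /mnt/netpool
    state="mounted"
else
    state="dry-run"
fi
```

Unlike drbd, nothing here replicates writes synchronously at the block layer; btrfs itself does the mirroring across the network devices, with whatever latency that implies.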
On 19/05/13 18:39, Clemens Eisserer wrote:
Hi Martin,
So, an interesting variation could be to have filesystem level raid
operating on ext4 or nilfs or whatever... Would that be a sensible idea?
That's already supported by using LVM. What do you think you would gain
from layering on top
On 19/05/13 20:34, Chris Murphy wrote:
On May 19, 2013, at 12:59 PM, Martin m_bt...@ml1.co.uk wrote:
btrfs-raid offers a greater variety and far greater flexibility of
raid options individually for filedata and metadata at the
filesystem level.
Well it really doesn't. The btrfs raid
be done with a /sbin/(u?)mount.btrfs 'helper'?
Regards,
Martin
copies of data/metadata across 4 physical disks.
When might that hit? Or is there a stable patch that can be added into
kernel 3.8.13?
Regards,
Martin
On 19/05/13 18:32, Martin wrote:
Dear Devs,
Would there be any problem using nbd (/dev/nbdX) devices to gain
btrfs-raid across multiple physical hosts across a network? (For a sort
of btrfs-drbd! :-) )
Regards,
Martin
http://en.wikipedia.org/wiki/Network_block_device
http
various:
INFO: task rsync:11022 blocked for more than 180 seconds
and one:
INFO: task btrfs-endio-wri:10816 blocked for more than 180 seconds
Further detail listed below.
What's the fix or any debug worthwhile?
Regards,
Martin
x1 of these:
kernel: INFO: task rsync:11022 blocked for more
:
The following 'block rsv returned -28' is repeated 7 times until there
is a call trace for:
WARNING: at fs/btrfs/super.c:256 __btrfs_abort_transaction+0x3d/0xad().
Then, the mount is set read-only.
How to fix or debug?
Thanks,
Martin
kernel: [ cut here ]
kernel: WARNING
On 05/06/13 16:05, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
/etc/fstab mounts with the options:
noatime,noauto,space_cache,inode_cache
All
On 05/06/13 16:43, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
On 05/06/13 16:05, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef
to try?
For that size of storage and with many hard links, is there any
advantage formatting with leaf/node size greater than the default 4kBytes?
Thanks,
Martin
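If anyone wants to experiment with that, a formatting sketch with 16K nodes (node/leaf size is a mkfs-time choice only; flag spellings follow 3.x-era btrfs-progs; the device list matches the thread; guarded as a no-op by default):

```shell
#!/bin/sh
# Node/leaf size is a mkfs-time choice and cannot be changed later. Flag
# spellings follow 3.x-era btrfs-progs (-l leafsize, -n nodesize); the
# device list matches the thread. No-op unless RUN=1.
if [ "${RUN:-0}" = 1 ]; then
    mkfs.btrfs -L bu-16TB_0 -l 16384 -n 16384 -d raid1 -m raid1 /dev/sd[cdef]
    state="formatted"
else
    state="dry-run"
fi
```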
On 05/06/13 22:12, Martin wrote:
On 05/06/13 17:24, David Sterba wrote:
On Wed, Jun 05, 2013 at 04:43:29PM +0100, Hugo Mills wrote:
OK, so you've got plenty of space to allocate. There were some
issues in this area (block reserves and ENOSPC, and I think
specifically addressing the issue
.
Total writes to the two disks is equal.
This is noticeable for example when running emerge --sync or running
compiles on Gentoo.
Is this a known feature/problem or worth looking/checking further?
Regards,
Martin
On 28/06/13 16:39, Hugo Mills wrote:
On Fri, Jun 28, 2013 at 11:34:18AM -0400, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 02:59:45PM +0100, Martin wrote:
On kernel 3.8.13:
Using two equal performance SATAII HDDs, formatted for btrfs
raid1 for both data and metadata and:
The second disk
On 28/06/13 18:04, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 09:55:31AM -0700, George Mitchell wrote:
On 06/28/2013 09:25 AM, Martin wrote:
On 28/06/13 16:39, Hugo Mills wrote:
On Fri, Jun 28, 2013 at 11:34:18AM -0400, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 02:59:45PM +0100, Martin wrote
allocated: 3155176812544
referenced 3155176812544
Btrfs Btrfs v0.19
Command exited with non-zero status 1
So: What does that little lot mean?
The drives were mounted and active during an unexpected power-plug pull :-(
Safe to mount again or are there other checks/fixes needed?
Thanks,
Martin
On 29/06/13 10:41, Russell Coker wrote:
On Sat, 29 Jun 2013, Martin wrote:
Mmmm... I'm not sure trying to balance historical read/write counts is
the way to go... What happens for the use case of an SSD paired up with
a HDD? (For example an SSD and a similarly sized Raptor or enterprise
SCSI
?
(There are perhaps about 5% new or changed files each time.)
Thanks,
Martin
best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious sequence/recipe to follow for recovery?
Thanks,
Martin
Further details:
Linux 3.10.7-gentoo-r1 #2 SMP Fri Sep 27 23:38
Chris,
All agreed. Further comment inlined:
(Should have mentioned more prominently that the hardware problem has
been worked-around by limiting the sata to 3Gbit/s on bootup.)
On 28/09/13 21:51, Chris Murphy wrote:
On Sep 28, 2013, at 1:26 PM, Martin m_bt...@ml1.co.uk wrote:
Writing
On 28/09/13 20:26, Martin wrote:
... btrfsck bombs out with LOTs of errors...
How best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious sequence/recipe to follow
On 28/09/13 23:54, Martin wrote:
On 28/09/13 20:26, Martin wrote:
... btrfsck bombs out with LOTs of errors...
How best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious
Chris,
Thanks for good comment/discussion.
On 29/09/13 03:06, Chris Murphy wrote:
On Sep 28, 2013, at 4:51 PM, Martin m_bt...@ml1.co.uk wrote:
Stick with forced 3Gbps, but I think it's worth while to find out
what the actual problem is. One day you forget about this 3Gbps SATA
link
On 29/09/13 06:11, Duncan wrote:
Martin posted on Sun, 29 Sep 2013 03:10:37 +0100 as excerpted:
So...
Any options for btrfsck to fix things?
Or is anything/everything that is fixable automatically fixed on the
next mount?
Or should:
btrfs scrub /dev/sdX
be run first?
Or?
What
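On the scrub question above: scrub is run against a mounted filesystem rather than a bare device. A hedged sketch (/mnt/array is an assumed mount point; guarded as a no-op by default):

```shell
#!/bin/sh
# Scrub verifies checksums against a *mounted* filesystem rather than a
# bare device; /mnt/array is an assumed mount point. No-op unless RUN=1.
if [ "${RUN:-0}" = 1 ]; then
    btrfs scrub start /mnt/array     # kicks off a background scrub
    btrfs scrub status /mnt/array    # progress and per-device error counts
    state="started"
else
    state="dry-run"
fi
```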
On 29/09/13 22:29, Martin wrote:
Looking up what's available for Gentoo, the maintainers there look to be
nicely sharp with multiple versions available all the way up to kernel
3.11.2...
That is being pulled in now as expected:
sys-kernel/gentoo-sources-3.11.2
There's also the latest
listed below.
What next best to try?
Safer to try again, but this time with no_space_cache,no_inode_cache?
Thanks,
Martin
)
On 29/09/13 22:29, Martin wrote:
On 29/09/13 06:11, Duncan wrote:
What does btrfs do (or can do) for recovery?
Here's a general-case answer (courtesy gmane
occurred whilst trying to delete a known
bad directory tree. No worries for losing the data in that.
But how best to clean up the filesystem errors?
Thanks,
Martin
On 03/10/13 17:56, Martin wrote:
On 03/10/13 01:49, Martin wrote:
Summary:
Mounting -o recovery,noatime worked well and allowed
whatever
data can be read and start again?
All that lot sounds good for a wiki page ;-)
Thanks,
Martin
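The recovery-mount salvage described above can be sketched as follows (read-only mount with the 3.x-era recovery option, then copy data off; /dev/sdc and both paths are assumptions; guarded as a no-op by default):

```shell
#!/bin/sh
# Salvage sketch: read-only mount with the 3.x-era recovery option, then
# copy data off. /dev/sdc and both paths are assumptions. No-op unless RUN=1.
if [ "${RUN:-0}" = 1 ]; then
    mkdir -p /mnt/rescue
    mount -o ro,recovery,noatime /dev/sdc /mnt/rescue
    rsync -aHx /mnt/rescue/ /mnt/backup-target/
    state="copied"
else
    state="dry-run"
fi
```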
On 04/10/13 19:32, Duncan wrote:
Martin posted on Fri, 04 Oct 2013 16:47:19 +0100 as condensed:
There's ad-hoc comment for various commands to recover from filesystem
errors.
But what do they actually do and when should what command be used?
What do they do exactly and what
next?
Thanks,
Martin
In the meantime, trying:
btrfsck /dev/sdc
gave the following output + abort:
parent transid verify failed on 915444523008 wanted 16974 found 13021
Ignoring transid failure
btrfsck: cmds-check.c:1066: process_file_extent: Assertion `!(rec->ino
!= key->objectid
On 28/09/13 20:26, Martin wrote:
AMD
E-450 APU with Radeon(tm) HD Graphics AuthenticAMD GNU/Linux
Just in case someone else stumbles across this thread due to a related
problem for my particular motherboard...
There appears to be a fatal hardware bug for the interrupt line deassert
for a PCIe
.
The output attached.
What next?
Thanks,
Martin
On 05/10/13 12:32, Martin wrote:
No comment so blindly trying:
btrfsck --repair /dev/sdc
gave the following abort:
btrfsck: extent-tree.c:2736: alloc_reserved_tree_block: Assertion
`!(ret)' failed.
Full output attached.
All
.)
Thanks,
Martin
On 05/10/13 14:18, Martin wrote:
So...
The hint there is btrfsck: extent-tree.c:2736, so trying:
btrfsck --repair --init-extent-tree /dev/sdc
That ran for a while until:
kernel: btrfsck[16610]: segfault at cc ip 0041d2a7 sp
7fffd2c2d710 error 4
?
Thanks,
Martin
Further detail:
On 07/10/13 20:03, Chris Murphy wrote:
On Oct 7, 2013, at 8:56 AM, Martin m_bt...@ml1.co.uk wrote:
Or try mount -o recovery,noatime again?
Because of this: free space inode generation (0) did not match free
space cache generation (1607)
Try mount
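The free space cache generation mismatch mentioned above can be cleared by mounting once with clear_cache, which discards and regenerates the cache. A sketch (/dev/sdc and the mount point are assumptions; guarded as a no-op by default):

```shell
#!/bin/sh
# One-shot rebuild of the free space cache after a generation mismatch:
# mounting once with clear_cache discards and regenerates it. /dev/sdc
# and the mount point are assumptions. No-op unless RUN=1.
if [ "${RUN:-0}" = 1 ]; then
    mount -o clear_cache,noatime /dev/sdc /mnt/array
    state="mounted"
else
    state="dry-run"
fi
```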
the
number of devices used for mirroring, striping, and error-correction?
Thanks,
Martin
is
rattling through a gazillion files and the syslog gets swamped.
Unfortunately, I don't know beforehand what files to mark no-cow unless
I no-cow the entire user/applications.
Thoughts?
Thanks,
Martin
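One way to avoid marking files one by one, as discussed above, is to set the attribute on a directory so files created afterwards inherit it (existing files are unaffected; the path is an assumption; guarded as a no-op by default):

```shell
#!/bin/sh
# NOCOW sketch: chattr +C on a directory makes files *created afterwards*
# inherit the attribute; existing files are unaffected. The path is an
# assumption. No-op unless RUN=1.
if [ "${RUN:-0}" = 1 ]; then
    mkdir -p /srv/sqlite-data
    chattr +C /srv/sqlite-data       # new files here skip copy-on-write
    lsattr -d /srv/sqlite-data       # the 'C' flag should be listed
    state="set"
else
    state="dry-run"
fi
```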
On 24/03/14 20:19, Duncan wrote:
Martin posted on Mon, 24 Mar 2014 19:47:34 + as excerpted:
Possible fix:
btrfs checks the ratio of filesize versus number of fragments and for a
bad ratio either: [...]
3: Automatically defragments the file.
See the autodefrag mount option
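For reference, autodefrag is enabled per mount; a hedged /etc/fstab sketch (the label is taken from earlier in the thread, the mount point is an assumption):

```
# autodefrag watches for small random rewrites and queues the file for
# background defragmentation (mount point is an assumption)
LABEL=bu-16TB_0  /mnt/array  btrfs  noatime,autodefrag  0 0
```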
On 24/03/14 21:52, Marc MERLIN wrote:
On Mon, Mar 24, 2014 at 07:17:12PM +, Martin wrote:
Thanks for the very good summary.
So... In very brief summary, btrfs raid5 is very much a work in progress.
If you know how to use it, which I didn't until now, it's technically very
usable
. :-)
Regards,
Martin
' is an interesting
approach. However, is that appropriate and useful considering the real
world failure mechanisms that are to be guarded against?
Do you see or measure any real advantage?
Regards,
Martin
On 18/05/14 17:09, Russell Coker wrote:
On Sat, 17 May 2014 13:50:52 Martin wrote:
[...]
Do you see or measure any real advantage?
Imagine that you have a RAID-1 array where both disks get ~14,000 read
errors.
This could happen due to a design defect common to drives of a particular
is a very human thing... ;-)
Sorry:
Interesting idea but not convinced there's any advantage for disk/SSD
storage.
Regards,
Martin
for deleting snapshots.
Aside: I've held off from using kernel 3.12 and 3.13 due to curious
happenings on my test system. kernel 3.14.4 is behaving well so far.
Hope that gives a few clues.
Good luck,
Martin
On 02/06/14 14:22, Josef Bacik wrote:
On 05/30/2014 06:00 PM, Martin wrote:
OK... I'll jump in...
On 30/05/14 21:43, Josef Bacik wrote:
Hello,
TL;DR: I want to only do snapshot-aware defrag on inodes in snapshots
that haven't changed since the snapshot was taken. Yay or nay
On 04/06/14 10:19, Erkki Seppala wrote:
Martin m_bt...@ml1.co.uk writes:
The *ONLY* application that I know of that uses atime is Mutt and then
*only* for mbox files!...
However, users, such as myself :), can be interested in when a certain
file has been last accessed. With snapshots I
file causing excessive
fragmentation?
Align the data writes to 16kByte or 64kByte boundaries/chunks?
Are mmap-ed files a similar problem to using a swap file and so should
the same btrfs file swap code be used for both?
Not looked over the code so all random guesses...
Regards,
Martin
operation?
Regards,
Martin
On 09/02/12 01:42, Liu Bo wrote:
On 02/09/2012 03:24 AM, Martin wrote:
[ No problem for 4kByte sector HDDs. However, for SSDs... ]
However for SSDs...
I'm using for example a 60GByte SSD that has:
8kB page size;
16kB logical to physical mapping chunk size;
2MB erase block
...
Some good comments:
On 10/02/12 18:18, Martin Steigerwald wrote:
Hi Martin,
Am Mittwoch, 8. Februar 2012 schrieb Martin:
My understanding is that for x86 architecture systems, btrfs only
allows a sector size of 4kB for a HDD/SSD. That is fine for the
present HDDs assuming the partitions
be trained to clean out their inboxes or to be
more hierarchically tidy... :-( )
Or is btrfs yet too premature to suffer such use?
Regards,
Martin
On 02/05/12 00:18, Martin wrote:
How well suited is btrfs to low-end and high-end FLASH devices?
Paraphrasing from a thread elsewhere:
FLASH can be categorised into two classes, which have extremely
different characteristics:
(a) the low-end (USB, SDHC, CF, cheap ATA SSD);
A good FYI
.
Thoughts welcomed.
Is btrfs development at the 'optimising' stage now, or is it all still
very much a 'work in progress'?
Regards,
Martin
your 'filesystem'.
If you need fast random access, then use SSDs.
Plausible?
Regards,
Martin
used?
Does that suggest better optimisation of the (meta)data, or just a
greater housekeeping overhead to shuffle data to new offsets?
Regards,
Martin
with the Sandforce controller that implements
its own data compression and data deduplication. How well does btrfs fit
with those compared to other non-data-compression controllers?
Regards,
Martin
On 19/05/12 18:36, Martin Steigerwald wrote:
Am Freitag, 18. Mai 2012 schrieb Sander:
Martin wrote (ao):
Are there any format/mount parameters that should be set for using
btrfs on SSDs (other than the ssd mount option)?
If possible, format the whole device, do not partition the ssd
?
Regards,
Martin
On 23/05/12 05:19, Calvin Walton wrote:
On Tue, 2012-05-22 at 22:47 +0100, Martin wrote:
I've got two recent examples of SSDs. Their pristine state from the
manufacturer shows:
Device Model: OCZ-VERTEX3
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Device Model: OCZ
there for me is that running rsync or running
a deduplication script might hit too many hard links that were perfectly
fine when on ext4.
Regards,
Martin
of Chris' raid5/6 work will
also fix this when it lands.
Interesting...
The source problem is how the COW fragments under expected normal use...
Is all this unavoidable unless we rethink the semantics?
Regards,
Martin
for me are the raid and snapshots.
The killer though is for how robust the filesystem is against corruption
and random data/hardware failure.
btrfsck?
Always keep multiple backups!
Regards,
Martin
fragmentation with such as sqlite
files are greater concerns!
(Yes, there is the manual fix of NOCOW... I also put such horrors into
tmpfs and snapshot that... All well and good but all unnecessary admin
tasks!)
Regards,
Martin
idea in the first place?!
(There's more than one physical set of backups but I'd rather not suffer
weeks to recover from one hiccup in the filesystem... Should I partition
btrfs down to smaller gulps, or does the structure of btrfs in effect
already do that?)
Thanks,
Martin
?...
A good writeup! Thanks for a good giggle. :-)
Regards,
Martin
On 01/04/13 15:44, Harald Glatt wrote:
On Mon, Apr 1, 2013 at 2:50 PM, Josef Bacik jba...@fusionio.com wrote:
Hello,
I was bored this weekend so I hacked up online dedup for Btrfs. It's working
quite well so I think it can
On 18/04/13 15:06, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks each
where I wish to use the disk packs aggregated for 16TB and up to
64TB backups...
Can btrfs...?
1:
Mirror data
On 18/04/13 20:44, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 05:29:10PM +0100, Martin wrote:
On 18/04/13 15:06, Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
I have a number of esata disk packs holding 4 physical disks
each where I wish to use
On 18/04/13 20:48, Alex Elsayed wrote:
Hugo Mills wrote:
On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
Dear Devs,
snip
Note that esata shows just the disks as individual physical disks, 4 per
disk pack. Can physical disks be grouped together to force the RAID data
to be mirrored
an initrd and grub operates fine for the btrfs raid.
What is the special magic to do this without the need for an initrd?
Is the comment/patch below from last year languishing unknown? Or is
there some problem with that kernel approach?
Thanks,
Martin
See:
http://forums.gentoo.org/viewtopic-t
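One approach that has been reported to work without an initrd is naming the remaining raid members on the kernel command line via rootflags, so the kernel need not rely on a userspace 'btrfs device scan'. A sketch (device names are assumptions):

```
# GRUB kernel line sketch; /dev/sda2 and /dev/sdb2 are assumed members
linux /vmlinuz root=/dev/sda2 rootflags=device=/dev/sda2,device=/dev/sdb2 ro
```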
...
Martin
) down to
the two devices.
The missing device was an old HDD that had physically failed. No data
was lost for that example failure.
Hope of interest,
Martin
?
This all started from trying to delete/repair a directory tree of a few
MBytes of files...
Regards,
Martin
On 22/10/13 19:17, Josef Bacik wrote:
On Tue, Oct 22, 2013 at 06:58:48PM +0100, Martin wrote:
Dear list,
I've been trying to recover a 2TB single disk btrfs from a good few days
ago as already commented on the list. btrfsck complained of an error in
the extents and so I tried:
btrfsck
On 23/10/13 17:21, Josef Bacik wrote:
On Wed, Oct 23, 2013 at 04:32:51PM +0100, Martin wrote:
Any further debug useful?
Nope I know where it's breaking, I need to fix how we init the extent tree.
Thanks,
Good stuff.
If of help, I can test new code or a patch for that example. (I'll
+++ b/cmds-check.c
Hey! Quick work!...
Is that worth patching locally and trying against my example?
Thanks,
Martin
On 25/10/13 19:31, Josef Bacik wrote:
On Fri, Oct 25, 2013 at 07:27:24PM +0100, Martin wrote:
On 25/10/13 19:01, Josef Bacik wrote:
Unfortunately you can't run --init-extent-tree if you can't actually read
the
extent root. Fix this by allowing partial starts with no extent root
On 28/10/13 15:11, Josef Bacik wrote:
On Sun, Oct 27, 2013 at 12:16:12AM +0100, Martin wrote:
On 25/10/13 19:31, Josef Bacik wrote:
On Fri, Oct 25, 2013 at 07:27:24PM +0100, Martin wrote:
On 25/10/13 19:01, Josef Bacik wrote:
Unfortunately you can't run --init-extent-tree if you can't
) writes...
Testing in progress,
Regards,
Martin
This uses 16KB or the page size, whichever is bigger. If you're doing a
mixed block group mkfs, it uses the sectorsize instead.
Since the kernel refuses to mount a mixed block group FS where the
metadata leaf size doesn't match the data
On 07/11/13 01:25, Martin wrote:
On 28/10/13 15:11, Josef Bacik wrote:
Ok I've sent
[PATCH] Btrfs-progs: rework open_ctree to take flags, add a new one
which should address your situation. Thanks,
Josef,
Tried your patch:
Signed-off-by: Josef Bacik jba...@fusionio.com
On 11/11/13 22:52, Martin wrote:
On 07/11/13 01:25, Martin wrote:
OK so Chris Mason and the Gentoo sys-fs/btrfs-progs- came to the
rescue to give:
# btrfs version
Btrfs v0.20-rc1-591-gc652e4e
From that, I've tried running again:
# btrfsck --repair /dev/sdc
giving thus far
.
There looks to be a repeating pattern of calls. Is this working though
the same test repeated per btrfs block? Are there any variables that can
be checked with gdb to see how far it has gone so as to guess how long
it might need to run?
Phew?
Hope of interest,
Regards,
Martin
On 13/11/13 12:08
On 07/11/13 01:25, Martin wrote:
[...]
And the patching fails due to mismatching code...
I have the Gentoo source for:
Btrfs v0.20-rc1-358-g194aa4a
(On Gentoo 3.11.5, will be on 3.11.6 later today.)
What are the magic incantations to download your version of source code
to try
in open_ctree_fs_info ()
#5 0x0041812e in cmd_check ()
#6 0x00404904 in main ()
Still no further output. btrfsck running at 100% on a single core and
with no apparent disk activity. All for a 2TB hdd.
Should it take this long?...
Regards,
Martin
On 15/11/13 17:18, Martin wrote
mount with the degraded option?)
Regards,
Martin
at the moment is that for using multiple disks: Any
actions seem to be applied to the list of devices in sequence
one-by-one. There's no apparent intelligence to consider present pool
- new pool of devices as a whole.
More development!
Regards,
Martin
-a-time, this is not going to finish in reasonable time.
How come so very slow?
Any hints/tips/fixes or abandon the test?
Regards,
Martin
On 19/11/13 06:34, Martin wrote:
Continuing:
gdb bt now gives:
#0 0x0042075a in btrfs_search_slot ()
#1 0x00427bb4
On 20/11/13 17:08, Duncan wrote:
Martin posted on Wed, 20 Nov 2013 06:51:20 + as excerpted:
It's now gone back to a pattern from a full week ago:
(gdb) bt #0 0x0042d576 in read_extent_buffer ()
#1 0x0041ee79 in btrfs_check_node ()
#2 0x00420211 in check_block
...
This is on kernel 3.11.5 and Btrfs v0.20-rc1-591-gc652e4e.
Can easily upgrade to the latest kernel at the expense of killing the
existing btrfsck run.
Regards,
Martin
On 21/11/13 23:37, Chris Mason wrote:
Quoting Martin (2013-11-08 18:53:06)
On 08/11/13 22:01, Chris Mason wrote:
Hi everyone,
This patch is now the tip of the master branch for btrfs-progs, which
has been updated to include most of the backlogged progs patches.
Please take a look and give
On 22/11/13 13:40, Chris Mason wrote:
Quoting Martin (2013-11-22 04:03:41)
* QA Notice: Package triggers severe warnings which indicate that it
* may exhibit random runtime failures.
* disk-io.c:91:5: warning: dereferencing type-punned pointer will break
strict-aliasing rules
On 22/11/13 19:57, Chris Mason wrote:
Quoting Martin (2013-11-22 14:50:17)
On 22/11/13 13:40, Chris Mason wrote:
Quoting Martin (2013-11-22 04:03:41)
* QA Notice: Package triggers severe warnings which indicate that it
* may exhibit random runtime failures.
* disk-io.c:91:5
the ones I have tried) for ext4 and btrfs. You must mount with the
nobarrier option...
Regards,
Martin
positive comment: Good progress, thanks.
Regards,
Martin
(OK, that's the last of the positives for the Christmas present. Back to
bugging! ;-) )
On 25/11/13 21:45, Chris Mason wrote:
Hi everyone,
I've tagged the current btrfs-progs repo as v3.12. The new idea is that
instead of making
On 20/11/13 20:00, Martin wrote:
On 20/11/13 17:08, Duncan wrote:
Martin posted on Wed, 20 Nov 2013 06:51:20 + as excerpted:
It's now gone back to a pattern from a full week ago:
(gdb) bt #0 0x0042d576 in read_extent_buffer ()
#1 0x0041ee79 in btrfs_check_node ()
#2
cleanly amputated rather than
too-painfully-slowly repaired?...
Just a few wild ideas ;-)
Regards,
Martin
to ensure that data is
rewritten before suffering flash memory bitrot?
Is not the firmware in SSDs aware to rewrite any too-long unchanged data?
Regards,
Martin
on
the original system with all the kernel modules?
Thanks,
Martin