?
I have it disabled and yet I have things like:
Oh, this is insane. This filefrag has been running for over a minute
already, hogging one core and eating almost 100% of its processing power.
merkaba:/home/martin/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
/usr/bin/time -v filefrag
extents I get:
merkaba:/home/martin/.kde/share/apps/nepomuk/repository/main/data/virtuosobackend
echo 3 > /proc/sys/vm/drop_caches ; /usr/bin/time -v dd if=soprano-virtuoso.db
of=/dev/null bs=1M
2418+0 records in
2418+0 records out
2535456768 bytes (2.5 GB) copied, 13.9546 s, 182 MB/s
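A minimal sketch of that cold-cache read test, with the redirect into drop_caches spelled out; the temp file here stands in for the real soprano-virtuoso.db, and the cache-drop step is guarded because it needs root:

```shell
#!/bin/sh
# Cold-cache sequential read benchmark (sketch; the temp file is a
# stand-in for the real database file).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 status=none   # create an 8 MiB test file
sync
# Drop the page cache so the next read really hits the device (root only).
[ "$(id -u)" -eq 0 ] && echo 3 > /proc/sys/vm/drop_caches
# dd reports bytes copied, elapsed time and throughput on stderr.
out=$(dd if="$f" of=/dev/null bs=1M 2>&1)
echo "$out"
rm -f "$f"
```

Without the cache drop the second read is served from RAM and the throughput number says nothing about the disk.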
own recommendation at the moment.
Then an occasional fstrim, and maybe mount with noatime (because who cares
about atimes anyway?)…
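For reference, that suggestion could look like the fragment below in /etc/fstab; the UUID is a placeholder and the compress option is just an example, with fstrim run now and then from cron:

```
# /etc/fstab — example entry (UUID is a placeholder)
UUID=0123abcd-example  /  btrfs  defaults,noatime,compress=lzo  0  0

# example crontab entry: trim free space weekly
# 0 3 * * 0  /sbin/fstrim -v /
```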
Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
--
To unsubscribe from this list: send the line
On Saturday, 25 January 2014, 15:06:24, Kai Krakow wrote:
Martin Steigerwald mar...@lichtvoll.de wrote:
Okay, I have seen 260 MB/s. But frankly I am pretty sure that Virtuoso
isn't doing this kind of large-scale I/O on a highly fragmented file. It's
a database. It's random access. My
On Fri, Jan 17, 2014 at 06:24:26PM +, Duncan wrote:
Martin Walter posted on Fri, 17 Jan 2014 15:18:41 +0100 as excerpted:
Our problem is a ZFS with 20,000 quota-enabled home directories and 100
snapshots.
We would really like to do the same with btrfs, but we don't know how
before such a bold conversion.
Thanks,
a memory error since
then and I am not aware of any co-workers having had memory errors on their
laptops. But then… those are usually enterprise grade laptops, which to my
knowledge nonetheless just use RAM without ECC. I don't think that this
ThinkPad T520 uses ECC RAM.
.
As to the error messages: I do not know how critical those are.
I usually just scrub my filesystems once in a while and would only try btrfs
check on one that fails the scrubbing or has problems mounting or (in some
cases) yields strange messages in dmesg.
hour 20,000 snapshots and delete the same
amount.
Is there any chance to get real user quotas with btrfs?
Martin
way to find out how much it might still allocate
and at what point it fails – and that without writing tons of data first.
fallocate just triggers allocation and does not write any actual data.
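That point can be seen with a quick sketch (the temp file is a throwaway example): the extents get reserved, but nothing is written.

```shell
#!/bin/sh
# Reserve space without writing any data: fallocate only allocates
# extents. The temp file is a throwaway example.
f=$(mktemp)
fallocate -l 1M "$f"            # reserve 1 MiB of extents, write nothing
info=$(stat -c 'size=%s blocks=%b' "$f")
echo "$info"                    # size is 1 MiB even though nothing was written
rm -f "$f"
```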
of play now for the Chris Mason or whatever
latest 'stable' branch of btrfs on git?
In other words: Should we always update the btrfs userspace tools to the
latest even though we may be running one or two kernels behind that?...
Thanks,
Martin
amount of data)?
Thanks and happy new year,
or IOPS numbers.
Kernel in use:
martin@merkaba:~ cat /proc/version
Linux version 3.13.0-rc4-tp520 (martin@merkaba) (gcc version 4.8.2 (Debian
4.8.2-10) ) #39 SMP PREEMPT Tue Dec 17 13:57:12 CET 2013
Characteristics of backup data:
About 239 GiB, lzo-compressed, with lots of small mail files
OK... So for backing up across a local network to a second physical host...
Is btrfs send/receive stable enough now to be used?
How does send/receive compare to using rsync for backups?
Any comments please from those using such things?
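For what it's worth, the usual send/receive backup pattern looks roughly like the sketch below; the subvolume paths, snapshot names and host are all hypothetical, and the btrfs steps are guarded so the script only acts when a btrfs source actually exists:

```shell
#!/bin/sh
# Hedged sketch of an incremental btrfs send/receive backup to a second
# host. SRC, SNAPDIR and DEST are hypothetical placeholders.
SRC=/home
SNAPDIR=/home/.snapshots
DEST=backuphost
if [ "$(id -u)" -eq 0 ] && command -v btrfs >/dev/null 2>&1 \
   && [ "$(stat -f -c %T "$SRC" 2>/dev/null)" = btrfs ]; then
    today=$(date +%F)
    btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/$today"
    sync
    # First run: full stream. Later runs: -p <previous snapshot> sends only
    # the delta, which is where send/receive beats rsync's full-tree scan.
    btrfs send "$SNAPDIR/$today" | ssh "$DEST" btrfs receive /backup
    result=sent
else
    result=skipped   # no btrfs source here; nothing done
fi
echo "$result"
```

The main trade-off versus rsync: send/receive computes the delta from subvolume generation numbers instead of walking and stat-ing every file, but it requires btrfs on both ends.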
Thanks,
Martin
device IO queues?)
Regards,
Martin
to ensure that data is
rewritten before suffering flash memory bitrot?
Isn't the firmware in SSDs aware enough to rewrite data that has sat unchanged for too long?
Regards,
Martin
on
the original system with all the kernel modules?
Thanks,
Martin
cleanly amputated rather than
too-painfully-slowly repaired?...
Just a few wild ideas ;-)
Regards,
Martin
positive comment: Good progress, thanks.
Regards,
Martin
(OK, that's the last of the positives for the Christmas present. Back to
bugging! ;-) )
On 25/11/13 21:45, Chris Mason wrote:
Hi everyone,
I've tagged the current btrfs-progs repo as v3.12. The new idea is that
instead of making
On 20/11/13 20:00, Martin wrote:
On 20/11/13 17:08, Duncan wrote:
Martin posted on Wed, 20 Nov 2013 06:51:20 + as excerpted:
It's now gone back to a pattern from a full week ago:
(gdb) bt
#0 0x0042d576 in read_extent_buffer ()
#1 0x0041ee79 in btrfs_check_node ()
#2
the ones I have tried) for ext4 and btrfs. You must mount with the
nobarrier option...
Regards,
Martin
On 21/11/13 23:37, Chris Mason wrote:
Quoting Martin (2013-11-08 18:53:06)
On 08/11/13 22:01, Chris Mason wrote:
Hi everyone,
This patch is now the tip of the master branch for btrfs-progs, which
has been updated to include most of the backlogged progs patches.
Please take a look and give
On 22/11/13 13:40, Chris Mason wrote:
Quoting Martin (2013-11-22 04:03:41)
* QA Notice: Package triggers severe warnings which indicate that it
* may exhibit random runtime failures.
* disk-io.c:91:5: warning: dereferencing type-punned pointer will break
strict-aliasing rules
On 22/11/13 19:57, Chris Mason wrote:
Quoting Martin (2013-11-22 14:50:17)
On 22/11/13 13:40, Chris Mason wrote:
Quoting Martin (2013-11-22 04:03:41)
* QA Notice: Package triggers severe warnings which indicate that it
* may exhibit random runtime failures.
* disk-io.c:91:5
On 20/11/13 17:08, Duncan wrote:
Martin posted on Wed, 20 Nov 2013 06:51:20 + as excerpted:
It's now gone back to a pattern from a full week ago:
(gdb) bt
#0 0x0042d576 in read_extent_buffer ()
#1 0x0041ee79 in btrfs_check_node ()
#2 0x00420211 in check_block
...
This is on kernel 3.11.5 and Btrfs v0.20-rc1-591-gc652e4e.
Can easily upgrade to the latest kernel at the expense of killing the
existing btrfsck run.
Regards,
Martin
mount with the degraded option?)
Regards,
Martin
at the moment is that for using multiple disks: any
actions seem to be applied to the list of devices in sequence,
one by one. There's no apparent intelligence to consider the present
pool → new pool of devices as a whole.
More development!
Regards,
Martin
-a-time, this is not going to finish in a reasonable time.
How come it is so very slow?
Any hints/tips/fixes or abandon the test?
Regards,
Martin
On 19/11/13 06:34, Martin wrote:
Continuing:
gdb bt now gives:
#0 0x0042075a in btrfs_search_slot ()
#1 0x00427bb4
On 07/11/13 01:25, Martin wrote:
[...]
And the patching fails due to mismatching code...
I have the Gentoo source for:
Btrfs v0.20-rc1-358-g194aa4a
(On Gentoo 3.11.5, will be on 3.11.6 later today.)
What are the magic incantations to download your version of source code
to try
in open_ctree_fs_info ()
#5 0x0041812e in cmd_check ()
#6 0x00404904 in main ()
Still no further output. btrfsck is running at 100% on a single core,
with no apparent disk activity. All for a 2 TB HDD.
Should it take this long?...
Regards,
Martin
On 15/11/13 17:18, Martin wrote
.
There looks to be a repeating pattern of calls. Is this working through
the same test repeated per btrfs block? Are there any variables that can
be checked with gdb to see how far it has gone so as to guess how long
it might need to run?
Phew?
Hope of interest,
Regards,
Martin
On 13/11/13 12:08
On 11/11/13 22:52, Martin wrote:
On 07/11/13 01:25, Martin wrote:
OK so Chris Mason and the Gentoo sys-fs/btrfs-progs- came to the
rescue to give:
# btrfs version
Btrfs v0.20-rc1-591-gc652e4e
From that, I've tried running again:
# btrfsck --repair /dev/sdc
giving thus far
On 07/11/13 01:25, Martin wrote:
On 28/10/13 15:11, Josef Bacik wrote:
Ok I've sent
[PATCH] Btrfs-progs: rework open_ctree to take flags, add a new one
which should address your situation. Thanks,
Josef,
Tried your patch:
Signed-off-by: Josef Bacik jba...@fusionio.com
) writes...
Testing in progress,
Regards,
Martin
This uses 16KB or the page size, whichever is bigger. If you're doing a
mixed block group mkfs, it uses the sectorsize instead.
Since the kernel refuses to mount a mixed block group FS where the
metadata leaf size doesn't match the data
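Assuming btrfs-progs is installed, the two cases described above can be illustrated on a scratch file-backed image, so no real device is touched; the mkfs steps are skipped cleanly when the tools are absent:

```shell
#!/bin/sh
# Illustrate default vs mixed-block-group mkfs on a file-backed scratch
# image; does nothing destructive, skipped if btrfs-progs is missing.
img=$(mktemp)
truncate -s 512M "$img"
if command -v mkfs.btrfs >/dev/null 2>&1 \
   && mkfs.btrfs -f "$img" >/dev/null 2>&1 \
   && mkfs.btrfs -f --mixed "$img" >/dev/null 2>&1; then
    # First mkfs: default node size (16 KiB, or the page size if larger).
    # Second mkfs: mixed block groups, where node size equals sector size.
    result=formatted
else
    result=skipped   # btrfs-progs not available; nothing done
fi
rm -f "$img"
echo "$result"
```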
On 28/10/13 15:11, Josef Bacik wrote:
On Sun, Oct 27, 2013 at 12:16:12AM +0100, Martin wrote:
On 25/10/13 19:31, Josef Bacik wrote:
On Fri, Oct 25, 2013 at 07:27:24PM +0100, Martin wrote:
On 25/10/13 19:01, Josef Bacik wrote:
Unfortunately you can't run --init-extent-tree if you can't
On 25/10/13 19:31, Josef Bacik wrote:
On Fri, Oct 25, 2013 at 07:27:24PM +0100, Martin wrote:
On 25/10/13 19:01, Josef Bacik wrote:
Unfortunately you can't run --init-extent-tree if you can't actually read
the
extent root. Fix this by allowing partial starts with no extent root
+++ b/cmds-check.c
Hey! Quick work!...
Is that worth patching locally and trying against my example?
Thanks,
Martin
On 22/10/13 19:17, Josef Bacik wrote:
On Tue, Oct 22, 2013 at 06:58:48PM +0100, Martin wrote:
Dear list,
I've been trying to recover a 2TB single disk btrfs from a good few days
ago as already commented on the list. btrfsck complained of an error in
the extents and so I tried:
btrfsck
On 23/10/13 17:21, Josef Bacik wrote:
On Wed, Oct 23, 2013 at 04:32:51PM +0100, Martin wrote:
Any further debug useful?
Nope I know where it's breaking, I need to fix how we init the extent tree.
Thanks,
Good stuff.
If of help, I can test new code or a patch for that example. (I'll
.
Thanks in advance
Martin
# btrfsck /dev/sdc2
parent transid verify failed on 38158336 wanted 96844 found 97302
parent transid verify failed on 38158336 wanted 96844 found 97302
parent transid verify failed on 38158336 wanted 96844 found 97302
parent transid verify failed on 38158336 wanted 96844
?
This all started from trying to delete/repair a directory tree of a few
MBytes of files...
Regards,
Martin
) down to
the two devices.
The missing device was an old HDD that had physically failed. No data
was lost for that example failure.
Hope of interest,
Martin
...
Martin
?
Thanks,
Martin
Further detail:
On 07/10/13 20:03, Chris Murphy wrote:
On Oct 7, 2013, at 8:56 AM, Martin m_bt...@ml1.co.uk wrote:
Or try mount -o recovery,noatime again?
Because of this: free space inode generation (0) did not match free
space cache generation (1607)
Try mount
.)
Thanks,
Martin
On 05/10/13 14:18, Martin wrote:
So...
The hint there is btrfsck: extent-tree.c:2736, so trying:
btrfsck --repair --init-extent-tree /dev/sdc
That ran for a while until:
kernel: btrfsck[16610]: segfault at cc ip 0041d2a7 sp
7fffd2c2d710 error 4
next?
Thanks,
Martin
In the meantime, trying:
btrfsck /dev/sdc
gave the following output + abort:
parent transid verify failed on 915444523008 wanted 16974 found 13021
Ignoring transid failure
btrfsck: cmds-check.c:1066: process_file_extent: Assertion `!(rec->ino
!= key->objectid
On 28/09/13 20:26, Martin wrote:
AMD
E-450 APU with Radeon(tm) HD Graphics AuthenticAMD GNU/Linux
Just in case someone else stumbles across this thread due to a related
problem for my particular motherboard...
There appears to be a fatal hardware bug for the interrupt line deassert
for a PCIe
.
The output attached.
What next?
Thanks,
Martin
On 05/10/13 12:32, Martin wrote:
No comment so blindly trying:
btrfsck --repair /dev/sdc
gave the following abort:
btrfsck: extent-tree.c:2736: alloc_reserved_tree_block: Assertion
`!(ret)' failed.
Full output attached.
All
occurred whilst trying to delete a known
bad directory tree. No worries for losing the data in that.
But how best to clean up the filesystem errors?
Thanks,
Martin
On 03/10/13 17:56, Martin wrote:
On 03/10/13 01:49, Martin wrote:
Summary:
Mounting -o recovery,noatime worked well and allowed
whatever
data can be read and start again?
All that lot sounds good for a wiki page ;-)
Thanks,
Martin
On 04/10/13 19:32, Duncan wrote:
Martin posted on Fri, 04 Oct 2013 16:47:19 +0100 as condensed:
There's ad-hoc comment for various commands to recover from filesystem
errors.
But what do they actually do and when should what command be used?
What do they do exactly and what
listed below.
What next best to try?
Safer to try again but this time with no_space_cache,no_inode_cache?
Thanks,
Martin
)
On 29/09/13 22:29, Martin wrote:
On 29/09/13 06:11, Duncan wrote:
What does btrfs do (or can do) for recovery?
Here's a general-case answer (courtesy gmane
On 29/09/13 06:11, Duncan wrote:
Martin posted on Sun, 29 Sep 2013 03:10:37 +0100 as excerpted:
So...
Any options for btrfsck to fix things?
Or is anything/everything that is fixable automatically fixed on the
next mount?
Or should:
btrfs scrub /dev/sdX
be run first?
Or?
What
On 29/09/13 22:29, Martin wrote:
Looking up what's available for Gentoo, the maintainers there look to be
nicely sharp with multiple versions available all the way up to kernel
3.11.2...
That is being pulled in now as expected:
sys-kernel/gentoo-sources-3.11.2
There's also the latest
best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious sequence/recipe to follow for recovery?
Thanks,
Martin
Further details:
Linux 3.10.7-gentoo-r1 #2 SMP Fri Sep 27 23:38
Chris,
All agreed. Further comment inlined:
(Should have mentioned more prominently that the hardware problem has
been worked-around by limiting the sata to 3Gbit/s on bootup.)
On 28/09/13 21:51, Chris Murphy wrote:
On Sep 28, 2013, at 1:26 PM, Martin m_bt...@ml1.co.uk wrote:
Writing
On 28/09/13 20:26, Martin wrote:
... btrfsck bombs out with LOTs of errors...
How best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious sequence/recipe to follow
On 28/09/13 23:54, Martin wrote:
On 28/09/13 20:26, Martin wrote:
... btrfsck bombs out with LOTs of errors...
How best to recover from this?
(This is a 'backup' disk so not 'critical' but it would be nice to avoid
rewriting about 1.5TB of data over the network...)
Is there an obvious
Chris,
Thanks for good comment/discussion.
On 29/09/13 03:06, Chris Murphy wrote:
On Sep 28, 2013, at 4:51 PM, Martin m_bt...@ml1.co.uk wrote:
Stick with forced 3Gbps, but I think it's worth while to find out
what the actual problem is. One day you forget about this 3Gbps SATA
link
On Friday, 20 September 2013, 22:34:15, Josef Bacik wrote:
On Sat, Sep 21, 2013 at 12:25:02AM +0200, Martin Steigerwald wrote:
Hi!
I tried to create a snapshot today like this:
merkaba:/mnt/debian-zeit ls -l
total 0
drwxr-xr-x 1 root root 210 Sep 20 11:48 root
merkaba
On Saturday, 21 September 2013, 10:54:55, Martin Steigerwald wrote:
On Friday, 20 September 2013, 22:34:15, Josef Bacik wrote:
On Sat, Sep 21, 2013 at 12:25:02AM +0200, Martin Steigerwald wrote:
Hi!
I tried to create a snapshot today like this:
merkaba:/mnt/debian-zeit ls
there is a complete documentation on the defaults.
[1] https://wiki.debian.org/fstab#Field_definitions
or external $successor-of-SSD will be replacing external hard disks.
?
(There are perhaps about 5% new or changed files each time.)
Thanks,
Martin
in case you need more information. Also any hint
how to recover from such situation would be really welcome...
Cheers,
Martin
allocated: 3155176812544
referenced 3155176812544
Btrfs Btrfs v0.19
Command exited with non-zero status 1
So: What does that little lot mean?
The drives were mounted and active during an unexpected power-plug pull :-(
Safe to mount again or are there other checks/fixes needed?
Thanks,
Martin
On 29/06/13 10:41, Russell Coker wrote:
On Sat, 29 Jun 2013, Martin wrote:
Mmmm... I'm not sure trying to balance historical read/write counts is
the way to go... What happens for the use case of an SSD paired up with
a HDD? (For example an SSD and a similarly sized Raptor or enterprise
SCSI
.
Total writes to the two disks is equal.
This is noticeable for example when running emerge --sync or running
compiles on Gentoo.
Is this a known feature/problem or worth looking/checking further?
Regards,
Martin
On 28/06/13 16:39, Hugo Mills wrote:
On Fri, Jun 28, 2013 at 11:34:18AM -0400, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 02:59:45PM +0100, Martin wrote:
On kernel 3.8.13:
Using two equal performance SATAII HDDs, formatted for btrfs
raid1 for both data and metadata and:
The second disk
On 28/06/13 18:04, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 09:55:31AM -0700, George Mitchell wrote:
On 06/28/2013 09:25 AM, Martin wrote:
On 28/06/13 16:39, Hugo Mills wrote:
On Fri, Jun 28, 2013 at 11:34:18AM -0400, Josef Bacik wrote:
On Fri, Jun 28, 2013 at 02:59:45PM +0100, Martin wrote
On 05/06/13 22:12, Martin wrote:
On 05/06/13 17:24, David Sterba wrote:
On Wed, Jun 05, 2013 at 04:43:29PM +0100, Hugo Mills wrote:
OK, so you've got plenty of space to allocate. There were some
issues in this area (block reserves and ENOSPC, and I think
specifically addressing the issue
various:
INFO: task rsync:11022 blocked for more than 180 seconds
and one:
INFO: task btrfs-endio-wri:10816 blocked for more than 180 seconds
Further detail listed below.
What's the fix or any debug worthwhile?
Regards,
Martin
x1 of these:
kernel: INFO: task rsync:11022 blocked for more
:
The following block rsv returned -28 is repeated 7 times until there
is a call trace for:
WARNING: at fs/btrfs/super.c:256 __btrfs_abort_transaction+0x3d/0xad().
Then, the mount is set read-only.
How to fix or debug?
Thanks,
Martin
kernel: [ cut here ]
kernel: WARNING
On 05/06/13 16:05, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
/etc/fstab mounts with the options:
noatime,noauto,space_cache,inode_cache
All
On 05/06/13 16:43, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
On 05/06/13 16:05, Hugo Mills wrote:
On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
Dear Devs,
I have x4 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef
to try?
For that size of storage and with many hard links, is there any
advantage formatting with leaf/node size greater than the default 4kBytes?
Thanks,
Martin
On Thursday, 23 May 2013, 18:41:11, George Mitchell wrote:
On 05/23/2013 09:08 AM, Martin Steigerwald wrote:
3) As to my knowledge mount times of large partitions can be quite
long with ReiserFS 3.
That may well be, but I certainly wouldn't consider btrfs mount times
fast
of the
limitations of snapshots. They're NOT the same as separate backups. I
believe you know that already and just didn't mention it, but I'm worried
about others who might come across your comment.
Well, a snapshot is not a backup. Just like a RAID is also not a backup.
:)
set it I saw
significant improvement.
Without going back to check the wiki, IIRC it was there that the /sys
paths it checks for that detection are listed. Those paths are then
based on what the drive itself claims. If it claims to be rotating
storage...
This is:
martin@merkaba:~ cat /sys
reformatting?
Seems there are still 'no space left on device' bugs left. Or some have been
introduced with 3.10.
Thanks,
On Saturday, 25 May 2013, 19:36:03, Martin Steigerwald wrote:
Hi!
Now I ran into it myself, the thing I read again and again on this mailing
list: during apt-get upgrade I get 'no space left on device'.
But there is:
merkaba:~ df -hT /
Filesystem     Type  Size  Used  Avail Use
On Saturday, 25 May 2013, 14:13:07, Martin Steigerwald wrote:
The SSD is in use for about 2 years. I left about 25 GiB free of the 300 GB
it
has.
merkaba:~ smartctl -a /dev/sda | grep Host
225 Host_Writes_32MiB 0x0032 100 100 000Old_age Always
- 261260
On Saturday, 25 May 2013, 23:29:41, Leonidas Spyropoulos wrote:
On Sat, May 25, 2013 at 1:13 PM, Martin Steigerwald mar...@lichtvoll.de
wrote:
On Saturday, 25 May 2013, 03:58:12, Duncan wrote:
[...]
And can be verified by:
martin@merkaba:~ grep ssd /proc/mounts
/dev/mapper/merkaba
On Tuesday, 21 May 2013, 13:19:31, Martin wrote:
Yep, ReiserFS has stood the test of time very well and I'm still using
and abusing it on various servers, all the way from something like
a decade ago!
Very interesting. I only used it for a short time and it worked.
But co-workers lost
On 19/05/13 18:32, Martin wrote:
Dear Devs,
Would there be any problem to use nbd (/dev/ndX) devices to gain
btrfs-raid across multiple physical hosts across a network? (For a sort
of btrfs-drbd! :-) )
Regards,
Martin
http://en.wikipedia.org/wiki/Network_block_device
http
be done with a /sbin/(u?)mount.btrfs 'helper'?
Regards,
Martin
copies of data/metadata across 4 physical disks.
When might that hit? Or is there a stable patch that can be added into
kernel 3.8.13?
Regards,
Martin
On 19/05/13 20:34, Chris Murphy wrote:
On May 19, 2013, at 12:59 PM, Martin m_bt...@ml1.co.uk wrote:
btrfs-raid offers a greater variety and far greater flexibility of
raid options individually for filedata and metadata at the
filesystem level.
Well it really doesn't. The btrfs raid
with
Oracle DB on it or maybe a swap device. Or for filesystems not (yet) supporting
VFS hot data tracking.
Thanks,
that I'm using at present...
OK, so the way of managing all that is going to be a little different.
How would you want that?
Regards,
Martin
On Sunday, 19 May 2013, 21:43:14, Zhi Yong Wu wrote:
On Sun, May 19, 2013 at 6:41 PM, Martin Steigerwald mar...@lichtvoll.de
wrote:
On Thursday, 9 May 2013, 07:13:56, Zhi Yong Wu wrote:
[…]
ZFS and BTRFS have shown that RAID support within the filesystem can make
a lot of sense. I
by
filesystem label is a nice/good idea. But is there any interest for that
to be picked up? Put in a bug/feature request onto bugzilla?
I would guess that most developers focus on mount point and let
fstab/mtab sort out the detail...
Regards,
Martin
,
Martin
Dear Devs,
Would there be any problem to use nbd (/dev/ndX) devices to gain
btrfs-raid across multiple physical hosts across a network? (For a sort
of btrfs-drbd! :-) )
Regards,
Martin
http://en.wikipedia.org/wiki/Network_block_device
http://www.drbd.org/
On 19/05/13 18:39, Clemens Eisserer wrote:
Hi Martin,
So, an interesting variation could be to have filesystem-level RAID
operating on ext4 or nilfs or whatever... Would that be a sensible idea?
That's already supported by using LVM. What do you think you would gain
from layering on top
,
and the file in the example is 34GB.
Any ideas what's happening here?
Yes. The command just triggers the defragmentation which takes place in the
background. Try a sync afterwards :)
Ciao,
On Saturday, 11 May 2013, 17:57:11, Tim Eggleston wrote:
Yes. The command just triggers the defragmentation which takes place
in the
background. Try a sync afterwards :)
Sorry Martin, I should have specified that I wondered if it was like
the scrub operation in that respect, so I left
overheads if 'too many' snapshots/subvols
are made?
If snapshots were to be taken once a minute and retained, what breaks first?
What are 'reasonable' (maximum) numbers for frequency and number of held
versions?
Thanks,
Martin
notices with df output for the filesystem, but I
thought I'd mention it, just in case.
Thanks,