On 2016-08-01 13:15, Chris Murphy wrote:
On Mon, Aug 1, 2016 at 10:58 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-08-01 12:19, Chris Murphy wrote:
On Mon, Aug 1, 2016 at 10:08 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
MD and DM RAID handle this
On 2016-07-26 10:42, Chris Murphy wrote:
On Tue, Jul 26, 2016 at 3:37 AM, Kurt Seo wrote:
2016-07-26 5:49 GMT+09:00 Chris Murphy :
On Mon, Jul 25, 2016 at 1:25 AM, Kurt Seo wrote:
Hi all
I am currently
On 2016-07-26 13:07, David Sterba wrote:
On Mon, Jul 11, 2016 at 10:44:30AM +0900, Satoru Takeuchi wrote:
+ chdir("/");
You should check the return value of chdir(). Otherwise
we get the following warning message at build time.
Can we actually fail
ly seen one of those in at least a few months. In
general, BTRFS is moving fast enough that reports older than a kernel
release cycle are generally out of date unless something confirms
otherwise, but I do distinctly recall such issues being commonly
reported in the past.
On 10 August 2016 at 15:46,
On 2016-08-09 18:20, Dave T wrote:
Thank you for the info, Duncan.
I will use Alt-SysRq-s, Alt-SysRq-u, Alt-SysRq-b. This is the best
description / recommendation I've read on the subject. I had read
about these special key sequences before but I could never remember
them and I didn't fully
On 2016-08-10 02:27, Duncan wrote:
Dave T posted on Tue, 09 Aug 2016 23:27:56 -0400 as excerpted:
btrfs scrub returned with uncorrectable errors. Searching in dmesg
returns the following information:
BTRFS warning (device dm-0): checksum error at logical N on
/dev/mapper/[crypto] sector:
On 2016-08-03 17:55, Graham Cobb wrote:
On 03/08/16 21:37, Adam Borowski wrote:
On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote:
Are there any btrfs commands (or APIs) to allow a script to create a
list of all the extents referred to within a particular (mounted)
subvolume? And is
On 2016-08-12 11:06, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 12 Aug 2016 08:04:42 -0400 as
excerpted:
On a file server? No, I'd ensure proper physical security is
established and make sure it's properly secured against network based
attacks and then not worry about it. Unless you
On 2016-08-15 03:50, Qu Wenruo wrote:
Hi,
Recently I found that manpage of mkfs is saying minimal device number
for RAID5 and RAID6 is 2 and 3.
Personally speaking, although I understand that RAID5/6 only requires
1/2 devices for parity stripe, it is still quite strange behavior.
Under most
On 2016-08-11 16:23, Dave T wrote:
What I have gathered so far is the following:
1. my RAM is not faulty and I feel comfortable ruling out a memory
error as having anything to do with the reported problem.
2. my storage device does not seem to be faulty. I have not figured
out how to do more
On 2016-07-21 09:34, Chris Murphy wrote:
On Thu, Jul 21, 2016 at 6:46 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-20 15:58, Chris Murphy wrote:
On Sun, Jul 17, 2016 at 3:08 AM, Hendrik Friedel <hend...@friedels.name>
wrote:
Well, btrfs does write data ve
On 2016-07-20 15:58, Chris Murphy wrote:
On Sun, Jul 17, 2016 at 3:08 AM, Hendrik Friedel wrote:
Well, btrfs does write data very differently from many other file systems. On
every write the file is copied to another place, even if just one bit is
changed. That's special
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear lvm fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
Thanks to RAID1 of the metadata I can still access the data residing on the
On 2016-07-18 14:31, Hendrik Friedel wrote:
Hello and thanks for your replies,
It's a Seagate Expansion Desktop 5TB (USB3). It is probably a
ST5000DM000.
this is a TGMR, not an SMR disk:
TGMR is a derivative of giant magneto-resistance, and is what's been
used in hard disk drives for decades now.
On 2016-07-18 15:05, Hendrik Friedel wrote:
Hello Austin,
thanks for your reply.
OK, thanks. So TGMR does not say whether or not the device is SMR,
right?
I'm not 100% certain about that. Technically, the only non-firmware
difference is in the read head and the tracking. If it were
On 2016-07-15 14:45, Matt wrote:
On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear lvm fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed
On 2016-07-11 12:58, Tomasz Torcz wrote:
On Mon, Jul 11, 2016 at 07:17:28AM -0400, Austin S. Hemmelgarn wrote:
On 2016-07-11 03:26, Tomasz Torcz wrote:
On Tue, Jun 21, 2016 at 11:16:59AM -0400, Austin S. Hemmelgarn wrote:
Currently, balance operations are run synchronously in the foreground
On 2016-07-12 11:22, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 12 Jul 2016 08:25:24 -0400 as
excerpted:
As far as daemonization, I have no man-page called daemon in section
seven, yet I have an up-to-date upstream copy of the Linux man pages. My
guess is that this is a systemd man page
On 2016-07-11 17:07, Chris Murphy wrote:
On Fri, Jul 8, 2016 at 6:24 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
To clarify, I'm not trying to argue against adding support, I'm arguing
against it being mandatory.
By "D-Bus support" I did not mean to indicate ma
On 2016-07-13 00:39, Andrei Borzenkov wrote:
On 12.07.2016 15:25, Austin S. Hemmelgarn wrote:
I'm not changing my init system just to add functionality that should
already exist in btrfs-progs. The fact that the balance ioctl is
synchronous was a poor design choice, and we need to provide
On 2016-07-17 05:08, Hendrik Friedel wrote:
Hi Thomasz,
@Dave I have added you to the conversation, as I refer to your notes
(https://github.com/kdave/drafts/blob/master/btrfs/smr-mode.txt)
thanks for your reply!
It's a Seagate Expansion Desktop 5TB (USB3). It is probably a
ST5000DM000.
On 2016-06-25 12:44, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
Well, the obvious major advantage that comes to mind for me for checksumming
parity is that it would let us scrub the parity data itself and verify it.
OK bu
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy wrote:
I started a systemd-devel@ thread since that's where most udev stuff
gets talked about.
https://lists.freedesktop.org/archives/systemd-devel/2016-July/037031.html
On 2016-07-05 05:28, Joerg Schilling wrote:
Andreas Dilger wrote:
I think in addition to fixing btrfs (because it needs to work with existing
tar/rsync/etc. tools) it makes sense to *also* fix the heuristics of tar
to handle this situation more robustly. One option is if
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do about
this, whether it's a udev rule or something that happens in the kernel
itself. Pretty much the only hardware setup unaffected by
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy <li...@colorremedies.com>
wrote:
I started a systemd-devel@
On 2016-07-06 08:39, Andrei Borzenkov wrote:
Sent from iPhone
On 6 July 2016, at 15:14, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
O
On 2016-07-07 09:49, Francesco Turco wrote:
I have a USB flash drive with an encrypted Btrfs filesystem where I
store daily backups. My problem is that this btrfs filesystem gets
corrupted very often, after a few days of usage. Usually I just reformat
it and move along, but this time I'd like to
On 2016-07-07 10:55, Francesco Turco wrote:
On 2016-07-07 16:27, Austin S. Hemmelgarn wrote:
This seems odd, are you trying to access anything over NFS or some other
network filesystem protocol here? If not, then I believe you've found a
bug, because I'm pretty certain we shouldn't
On 2016-07-07 12:52, Goffredo Baroncelli wrote:
On 2016-07-06 14:48, Austin S. Hemmelgarn wrote:
On 2016-07-06 08:39, Andrei Borzenkov wrote:
[]
To be entirely honest, if it were me, I'd want systemd to
fsck off. If the kernel mount(2) call succeeds, then the
filesystem was ready enough
On 2016-07-08 07:14, Tomasz Kusmierz wrote:
Well, I was able to run memtest on the system last night, that passed with
flying colors, so I'm now leaning toward the problem being in the sas card.
But I'll have to run some more tests.
Seriously, use the "stres.sh" for a couple of days. When I was
On 2016-07-07 16:20, Chris Murphy wrote:
On Thu, Jul 7, 2016 at 1:59 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
D-Bus support needs to be optional, period. Not everybody uses D-Bus (I
have dozens of systems that get by just fine without it, and know hundreds
of other people
On 2016-07-08 12:10, Francesco Turco wrote:
On 2016-07-07 19:57, Chris Murphy wrote:
Use F3 to test flash:
http://oss.digirati.com.br/f3/
I tested my USB flash drive with F3 as you suggested, and there's no
indication it is a fake device.
---
# f3probe --destructive /dev/sdb
F3
On 2016-07-06 14:23, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 12:04 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov <arvidj...@gmail.com>
wrote:
3) can we query btrfs whether it
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It should be obvious that a file that offers content also has allocated blocks.
What you mean then is that POSIX _implies_ that this is the case, but
does not say whether or no
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov wrote:
3) can we query btrfs whether it is mountable in degraded mode?
according to documentation, "btrfs device ready" (which udev builtin
follows) checks "if it has ALL of its
On 2016-07-06 14:45, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 11:18 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 12:43, Chris Murphy wrote:
So does it make sense to just set the default to 180? Or is there a
smarter way to do this? I don't know.
Just th
On 2016-07-06 12:05, Austin S. Hemmelgarn wrote:
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It should be obvious that a file that offers content also has
allocated blocks.
What you mean then is that POSIX _implies_ that
On 2016-07-06 10:53, Joerg Schilling wrote:
Antonio Diaz Diaz wrote:
Joerg Schilling wrote:
POSIX requires st_blocks to be != 0 if the file contains data.
Please, could you provide a reference? I can't find such requirement at
On 2016-07-06 12:43, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 5:51 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do
On 2016-07-06 18:59, Tomasz Kusmierz wrote:
On 6 Jul 2016, at 23:14, Corey Coughlin wrote:
Hi all,
Hoping you all can help, have a strange problem, think I know what's going
on, but could use some verification. I set up a raid1 type btrfs filesystem on
an
On 2016-07-11 03:26, Tomasz Torcz wrote:
On Tue, Jun 21, 2016 at 11:16:59AM -0400, Austin S. Hemmelgarn wrote:
Currently, balance operations are run synchronously in the foreground.
This is nice for interactive management, but is kind of crappy when you
start looking at automation and similar
On 2016-07-07 14:58, Chris Murphy wrote:
On Thu, Jul 7, 2016 at 12:23 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
Here's how I would picture the ideal situation:
* A device is processed by udev. It detects that it's part of a BTRFS
array, updates blkid and whateve
On 2016-08-04 17:12, Chris Murphy wrote:
On Thu, Aug 4, 2016 at 2:51 PM, Martin wrote:
Thanks for the benchmark tools and tips on where the issues might be.
Is Fedora 24 rawhide preferred over ArchLinux?
I'm not sure what Arch does any differently to their kernels
On 2016-08-05 06:56, Lutz Vieweg wrote:
On 08/04/2016 10:30 PM, Chris Murphy wrote:
Keep in mind the list is rather self-selecting for problems. People
who aren't having problems are unlikely to post their non-problems to
the list.
True, but the number of people inclined to post a bug report
On 2016-08-04 13:43, Martin wrote:
Hi,
I would like to find rare raid6 bugs in btrfs, where I have the following hw:
* 2x 8 core CPU
* 128GB ram
* 70 FC disk array (56x 500GB + 14x 1TB SATA disks)
* 24 FC or 2x SAS disk array (1TB SAS disks)
* 16 FC disk array (1TB SATA disks)
* 12 SAS disk
On 2016-08-09 05:50, MegaBrutal wrote:
2016-06-03 14:43 GMT+02:00 Austin S. Hemmelgarn <ahferro...@gmail.com>:
Also, since you're on a new enough kernel, try 'lazytime' in the mount options
as well; this defers all on-disk timestamp updates for up to 24 hours or until
the inode gets w
On 2016-08-09 07:50, Thomas wrote:
Hello!
First things first:
Mailing lists are asynchronous. You will almost _never_ get an
immediate response, and will quite often not get a response for a few
hours at least. Sending a message more than once when you don't get a
response does not make it
On 2016-06-29 14:12, Saint Germain wrote:
On Wed, 29 Jun 2016 11:28:24 -0600, Chris Murphy
wrote :
Already got a backup. I just really want to try to repair it (in
order to test BTRFS).
I don't know that this is a good test because I think the file system
has
On 2016-06-28 08:14, Steven Haigh wrote:
On 28/06/16 22:05, Austin S. Hemmelgarn wrote:
On 2016-06-27 17:57, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 10:17:04AM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-06
On 2016-08-15 08:19, Martin wrote:
The smallest disk of the 122 is 500GB. Is it possible to have btrfs
see each disk as only e.g. 10GB? That way I can corrupt and resilver
more disks over a month.
Well, at least you can easily partition the devices for that to happen.
Can it be done with
On 2016-08-15 09:39, Martin wrote:
That really is the case, there's currently no way to do this with BTRFS.
You have to keep in mind that the raid5/6 code only went into the mainline
kernel a few versions ago, and it's still pretty immature as far as kernel
code goes. I don't know when (if
On 2016-08-15 10:06, Daniel Caillibaud wrote:
On 15/08/16 at 08:32, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote:
ASH> > I'm a newbie with btrfs, and I have a problem with high load after each btrfs
subvolume delete
On 2016-08-15 09:38, Martin wrote:
Looking at the kernel log itself, you've got a ton of write errors on
/dev/sdap. I would suggest checking that particular disk with smartctl, and
possibly checking the other hardware involved (the storage controller and
cabling).
I would kind of expect BTRFS
On 2016-08-15 06:39, Daniel Caillibaud wrote:
Hi,
I'm a newbie with btrfs, and I have a problem with high load after each btrfs subvolume
delete
I use snapshots on lxc hosts under debian jessie with
- kernel 4.6.0-0.bpo.1-amd64
- btrfs-progs 4.6.1-1~bpo8
For backup, I have each day, for each
On 2016-08-15 08:19, Martin wrote:
I'm not sure what Arch does any differently to their kernels from
kernel.org kernels. But bugzilla.kernel.org offers a Mainline and
Fedora drop down for identifying the kernel source tree.
IIRC, they're pretty close to mainline kernels. I don't think they
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comments are welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
Personally, I agree that we should warn when trying to do this,
On 2016-08-15 10:32, Anand Jain wrote:
On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device
RAID6.
Any comments are welcome.
Based on looking at the code, we do in fact support 2/3
of the send stream.
Signed-off-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
Suggested-by: Graham Cobb <g.bt...@cobb.uk.net>
---
Changes since v1:
* Updated the description based on suggestions from Graham Cobb.
Inspired by a recent thread on the ML.
This could probably be m
On 2017-02-03 14:17, Graham Cobb wrote:
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
Ironically, I ended up having time sooner than I thought. The message
doesn't appear to be in any of the archives yet, but the message ID is:
<20170203134858.75210-1-ahferro...@gmail.com>
Ah. I
On 2017-01-30 23:58, Duncan wrote:
Oliver Freyermuth posted on Sat, 28 Jan 2017 17:46:24 +0100 as excerpted:
Just don't count on restore to save your *** and always treat what it
can often bring to current as a pleasant surprise, and having it fail
won't be a down side, while having it work,
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as excerpted:
I have been testing btrfs send/receive. I like it.
During those tests I discovered that it is possible to access and modify
(add files, delete files ...) of the new receive snapshot
On 2017-02-04 16:10, Kai Krakow wrote:
Am Sat, 04 Feb 2017 20:50:03 +
schrieb "Jorg Bornschein" :
February 4, 2017 1:07 AM, "Goldwyn Rodrigues"
wrote:
Yes, please check if disabling quotas makes a difference in
execution time of btrfs balance.
Just
On 2017-02-05 23:26, Duncan wrote:
Hans van Kranenburg posted on Sun, 05 Feb 2017 22:55:42 +0100 as
excerpted:
On 02/05/2017 10:42 PM, Alexander Tomokhov wrote:
Is it possible, having two drives to do raid1 for metadata but keep
data on a single drive only?
Nope.
Would be a really nice
On 2017-02-05 06:54, Kai Krakow wrote:
Am Wed, 1 Feb 2017 17:43:32 +
schrieb Graham Cobb <g.bt...@cobb.uk.net>:
On 01/02/17 12:28, Austin S. Hemmelgarn wrote:
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as
excerpted:
[...]
I'
On 2017-02-07 08:53, Peter Zaitsev wrote:
Hi,
I have tried BTRFS from Ubuntu 16.04 LTS for write intensive OLTP MySQL
Workload.
It did not go very well ranging from multi-seconds stalls where no
transactions are completed to the finally kernel OOPS with "no space left
on device" error
On 2017-02-07 10:00, Timofey Titovets wrote:
2017-02-07 17:13 GMT+03:00 Peter Zaitsev :
Hi Hugo,
For the use case I'm looking for I'm interested in having snapshot(s)
open at all time. Imagine for example snapshot being created every
hour and several of these snapshots
On 2017-02-07 10:20, Timofey Titovets wrote:
I think that you have a problem with extent bookkeeping (if I
understand how btrfs manages extents).
So to deal with it, try enabling compression, as compression will force
all extents to be fragmented to ~128KiB each.
No, it will compress everything
On 2017-02-07 13:59, Peter Zaitsev wrote:
Jeff,
Thank you very much for explanations. Indeed it was not clear in the
documentation - I read it simply as "if you have snapshots enabled
nodatacow makes no difference"
I will rebuild the database in this mode from scratch and see how
performance
On 2017-02-07 15:36, Kai Krakow wrote:
Am Tue, 7 Feb 2017 09:13:25 -0500
schrieb Peter Zaitsev :
Hi Hugo,
For the use case I'm looking for I'm interested in having snapshot(s)
open at all time. Imagine for example snapshot being created every
hour and several of these
On 2017-02-07 14:39, Kai Krakow wrote:
Am Tue, 7 Feb 2017 10:06:34 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
4. Try using in-line compression. This can actually significantly
improve performance, especially if you have slow storage devices and
a really
On 2017-02-07 14:31, Peter Zaitsev wrote:
Hi Hugo,
As I re-read it closely (and also other comments in the thread) I now
understand there is a difference in how nodatacow works even if snapshots are
in place.
On autodefrag I wonder is there some more detailed documentation about how
autodefrag
On 2017-02-07 14:47, Kai Krakow wrote:
Am Mon, 6 Feb 2017 08:19:37 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
MDRAID uses stripe selection based on latency and other measurements
(like head position). It would be nice if btrfs implemented similar
functiona
On 2017-02-07 15:19, Kai Krakow wrote:
Am Tue, 7 Feb 2017 14:50:04 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
Also does autodefrag works with nodatacow (ie with snapshot) or
are these exclusive ?
I'm not sure about this one. I would assume based on the fact
On 2017-02-08 13:38, Libor Klepáč wrote:
Hello,
inspired by the recent discussion on BTRFS vs. databases, I wanted to ask
about the suitability of BTRFS for hosting a Cyrus IMAP server spool. I
haven't found any recent article on this topic.
I'm preparing the migration of our mail server to Debian Stretch, i.e.
On 2017-02-08 09:46, Peter Grandi wrote:
My system is or seems to be running out of disk space but I
can't find out how or why. [ ... ]
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G   26G  2.1G  93% /
[ ... ]
So from chunk level, your fs is already full.
On 2017-02-08 08:46, Tomasz Torcz wrote:
On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote:
It is exponentially safer in BTRFS
to run single data single metadata than half raid1 data half raid1 metadata.
Why?
To convert to profiles _designed_ for a single device
On 2017-02-02 05:52, Graham Cobb wrote:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins and
users use every day are equally workarounds. Setting 007 perms on a dir
that doesn't have anything immediately security vulnerable in it, simply
to
On 2017-02-01 17:48, Duncan wrote:
Adam Borowski posted on Wed, 01 Feb 2017 12:55:30 +0100 as excerpted:
On Wed, Feb 01, 2017 at 05:23:16AM +, Duncan wrote:
Hans Deragon posted on Tue, 31 Jan 2017 21:51:22 -0500 as excerpted:
But the current scenario makes it difficult for me to put
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case fail completely. The fix had no dependencies itself and
I don't see what's bad in mounting a RAID
On 2017-02-07 15:54, Kai Krakow wrote:
Am Tue, 7 Feb 2017 15:27:34 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
I'm not sure about this one. I would assume based on the fact that
many other things don't work with nodatacow and that regular defrag
doesn't wor
On 2017-02-07 13:27, David Sterba wrote:
On Fri, Feb 03, 2017 at 08:48:58AM -0500, Austin S. Hemmelgarn wrote:
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security related aspects of btrfs-receive. The
first part covers the fact that the subvolume
On 2017-02-07 22:21, Hans Deragon wrote:
Greetings,
On 2017-02-02 10:06, Austin S. Hemmelgarn wrote:
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case
On 2017-02-07 20:49, Nicholas D Steeves wrote:
Dear btrfs community,
Please accept my apologies in advance if I missed something in recent
btrfs development; my MUA tells me I'm ~1500 unread messages
out-of-date. :/
I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while
doing
On 2017-02-08 08:26, Martin Raiber wrote:
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
On 2017-02-07 17:28, Kai Krakow wrote:
Am Thu, 19 Jan 2017 15:02:14 -0500
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" <alejan...@mosteo.com> wrote:
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
the database files as if the database had been killed in-flight. Like shutting
the system down in the
On 2017-02-03 04:14, Duncan wrote:
Graham Cobb posted on Thu, 02 Feb 2017 10:52:26 + as excerpted:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins
and users use every day are equally workarounds. Setting 007 perms on
a dir that
On 2017-02-03 10:44, Graham Cobb wrote:
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
I can look at making a patch for this, but it may be next week before I
have time (I'm not great at multi-tasking when it comes to software
development, and I'm in the middle of helping to fix a bug
of the send stream.
Signed-off-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
---
Inspired by a recent thread on the ML.
This could probably be more thorough, but I felt it was more important
to get it documented as quickly as possible, and this should cover the
basic info that most
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cannot get the filesystem mounted
because of the following error: BTRFS: missing devices(1) exceeds the
On 2017-01-28 04:17, Andrei Borzenkov wrote:
On 27.01.2017 23:03, Austin S. Hemmelgarn wrote:
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cannot get
On 2017-01-28 00:00, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 27 Jan 2017 07:58:20 -0500 as
excerpted:
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3
of the memory. I'll leave that running for a day or so
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3 of the
memory. I'll leave that running for a day or so, but of course it will not
provide a clear answer...
A small update: while the online memtester is without any errors
On 2017-02-23 19:54, Qu Wenruo wrote:
At 02/23/2017 06:51 PM, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and
obviously after working fine in staging it broke in production. Or
rather: we upgraded from 4.4 to 4.9 and enabled the space_cache. Our
On 2017-02-23 08:19, Christian Theune wrote:
Hi,
just for future reference if someone finds this thread: there is a bit of
output I’m seeing with this crashing kernel (unclear whether related to btrfs
or not):
31 | 02/23/2017 | 09:51:22 | OS Stop/Shutdown #0x4f | Run-time critical stop
|
On 2017-02-23 05:51, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and obviously after
working fine in staging it broke in production. Or rather: we upgraded from 4.4
to 4.9 and enabled the space_cache. Our production volume is around 50TiB
usable
On 2017-02-14 11:46, Austin S. Hemmelgarn wrote:
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple expla
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any files in
the subvolume that have the NOCOW attribute will not have that attribute
in the snapshot. Some further testing indicates that
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any