On Thu, Oct 23, 2014 at 10:35 PM, Zygo Blaxell
ce3g8...@umail.furryterror.org wrote:
- single profile: we can tolerate zero missing disks,
so we don't allow rw mounts even if degraded.
That seems like the wrong logic here. By all means mount read-only by
On Fri, Oct 24, 2014 at 05:13:27AM +, Duncan wrote:
Zygo Blaxell posted on Thu, 23 Oct 2014 22:35:29 -0400 as excerpted:
My pet peeve: if balance is converting profiles from RAID1 to single,
the conversion should be *instantaneous* (or at least small_constant *
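The RAID1-to-single conversion being discussed is driven with the balance convert filters; a minimal sketch, assuming a filesystem mounted at the hypothetical path /mnt:

```shell
# Convert data chunks to the single profile and metadata to dup.
# -dconvert / -mconvert are the standard balance convert filters.
btrfs balance start -dconvert=single -mconvert=dup /mnt
```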
On Fri, Oct 24, 2014 at 06:58:25AM -0400, Rich Freeman wrote:
On Thu, Oct 23, 2014 at 10:35 PM, Zygo Blaxell
ce3g8...@umail.furryterror.org wrote:
- single profile: we can tolerate zero missing disks,
so we don't allow rw mounts even if degraded.
That
On Fri, Oct 24, 2014 at 12:07 PM, Zygo Blaxell
ce3g8...@umail.furryterror.org wrote:
We could also leave this as an option to the user mount -o
degraded-and-I-want-to-lose-my-data, but in my opinion the use
case is very, very exceptional.
Well, it is only exceptional
Also a device replace operation requires that the replacement be the same size
(or maybe larger), while a remove and replace allows the replacement to be
merely large enough to contain all the data. Given the size variation in what
might be called the same size disk by manufacturers this isn't
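The two approaches being compared can be written out; the device names and mount point below are hypothetical:

```shell
# Option 1: replace -- the target must be at least as large as the source.
btrfs replace start /dev/sdb /dev/sdc /mnt

# Option 2: add then remove -- the new device only needs room for the data,
# at the cost of relocating every chunk off the old device.
btrfs device add /dev/sdc /mnt
btrfs device remove /dev/sdb /mnt
```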
Robert White posted on Wed, 22 Oct 2014 22:18:09 -0700 as excerpted:
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the
moment it's falling short. As it's easy to add large drives and get
very large multiple device volumes, the
Russell Coker posted on Thu, 23 Oct 2014 18:39:52 +1100 as excerpted:
Also a device replace operation requires that the replacement be the
same size (or maybe larger), while a remove and replace allows the
replacement to be merely large enough to contain all the data. Given the
size variation
On Wed, 22 Oct 2014 14:40:47 +0200, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th device
to my FS, it took 4 days to move 25GBs of data.
It's
On 2014-10-23 05:19, Miao Xie wrote:
On Wed, 22 Oct 2014 14:40:47 +0200, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th device to my
FS, it took
On Wed, Oct 22, 2014 at 10:18:09PM -0700, Robert White wrote:
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the
moment it's falling short. As it's easy to add large drives and get very
large multiple device volumes, the
Austin S Hemmelgarn posted on Thu, 23 Oct 2014 07:39:28 -0400 as
excerpted:
On 2014-10-23 05:19, Miao Xie wrote:
Now my colleague and I are implementing the scrub/replace for RAID5/6
and I have a plan to reimplement the balance and split it off from the
metadata/file data process. The main
On Fri, Oct 24, 2014 at 01:05:39AM +, Duncan wrote:
Austin S Hemmelgarn posted on Thu, 23 Oct 2014 07:39:28 -0400 as
excerpted:
On 2014-10-23 05:19, Miao Xie wrote:
Now my colleague and I are implementing the scrub/replace for RAID5/6
and I have a plan to reimplement the balance and
But 5000 snapshots?
Why? Are you *TRYING* to test btrfs until it breaks, or TRYING to
demonstrate a balance taking an entire year?
Remember a given btrfs filesystem is not necessarily a backup
destination for data from one source.
It can be, say, 30 or 60 daily snapshots, plus several
Tomasz Chmielewski posted on Wed, 22 Oct 2014 09:14:14 +0200 as excerpted:
Remember a given btrfs filesystem is not necessarily a backup
destination for data from one source.
It can be, say, 30 or 60 daily snapshots, plus several monthly, for each
data source * number of data sources.
So
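The arithmetic behind a snapshot count like that is easy to check; the per-source retention and source count below are illustrative assumptions, not figures from the thread:

```shell
daily=30       # daily snapshots kept per data source (assumed)
monthly=12     # monthly snapshots kept per data source (assumed)
sources=50     # data sources backing up to one filesystem (assumed)
per_source=$(( daily + monthly ))
total=$(( per_source * sources ))
echo "$total snapshots"    # → 2100 snapshots
```

Even modest per-source retention multiplies quickly once many sources share one filesystem.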
On 2014-10-21 16:44, Arnaud Kapp wrote:
Hello,
I would like to ask if the balance time is related to the number of
snapshot or if this is related only to data (or both).
I currently have about 4TB of data and around 5k snapshots. I'm thinking
of going raid1 instead of single. From the numbers
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th device to my
FS, it took 4 days to move 25GBs of data.
It's long term untenable. At some point it must be fixed. It's
On Oct 21, 2014, at 9:43 PM, Chris Murphy li...@colorremedies.com wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
On 21.10.2014 20:59, Tomasz Chmielewski wrote:
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete,
On 22/10/2014 14:40, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th
device to my FS, it took 4 days to move 25GBs of data.
It's long term
On Wed, Oct 22, 2014 at 07:41:32AM +, Duncan wrote:
Tomasz Chmielewski posted on Wed, 22 Oct 2014 09:14:14 +0200 as excerpted:
Tho that is of course per subvolume. If you have multiple subvolumes
on the same filesystem, that can still end up being a thousand or two
snapshots per
On 10/22/2014 01:08 PM, Zygo Blaxell wrote:
I have datasets where I record 14000+ snapshots of filesystem directory
trees scraped from test machines and aggregated onto a single server
for deduplication...but I store each snapshot as a git commit, not as
a btrfs snapshot or even subvolume.
We
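The commit-per-snapshot scheme Zygo describes can be sketched as below; the paths, identity, and two-state history are hypothetical, and each scraped tree state becomes one git commit instead of a btrfs snapshot:

```shell
# Store each scraped tree state as a git commit, not a btrfs snapshot.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email "archiver@example.com"   # hypothetical identity
git config user.name "archiver"

echo "state 1" > tree.txt
git add -A && git commit -qm "snapshot 2014-10-22T01:00"
echo "state 2" > tree.txt
git add -A && git commit -qm "snapshot 2014-10-22T02:00"

count=$(git rev-list --count HEAD)
echo "$count snapshots stored as commits"
```

The history then scales with git's object store and packing rather than with btrfs metadata trees.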
Chris Murphy posted on Wed, 22 Oct 2014 12:15:25 -0400 as excerpted:
Granted I'm ignoring the fact there are 5000+ snapshots[.]
The short term, maybe even medium term, it's "doctor, it hurts
when I do this!" and the doctor says, "well then don't do that!"
LOL! Nicely said! =:^)
--
Duncan -
On Wed, Oct 22, 2014 at 01:37:15PM -0700, Robert White wrote:
On 10/22/2014 01:08 PM, Zygo Blaxell wrote:
I have datasets where I record 14000+ snapshots of filesystem directory
trees scraped from test machines and aggregated onto a single server
for deduplication...but I store each snapshot
On Oct 22, 2014, at 4:08 PM, Zygo Blaxell zblax...@furryterror.org wrote:
If you have one subvolume per user and 1000 user directories on a server,
it's only 5 snapshots per user (last hour, last day, last week, last
month, and last year).
Sure. So if Btrfs is meant to address
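Zygo's example works out as follows, using the figures from the message above:

```shell
users=1000           # one subvolume per user
snaps_per_user=5     # last hour, day, week, month, year
total=$(( users * snaps_per_user ))
echo "$total snapshots"   # → 5000 snapshots, yet only 5 per user
```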
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the moment
it's falling short. As it's easy to add large drives and get very large
multiple device volumes, the snapshotting needs to scale also.
I'd say per user, it's reasonable to
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
# time btrfs balance start -v /home
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x0): balancing
METADATA (flags 0x0): balancing
SYSTEM (flags 0x0):
On 21.10.2014 20:59, Tomasz Chmielewski wrote:
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
Looks normal to me. Last time I started a balance after adding 6th
device to my FS, it took 4 days to move 25GBs of data. Some
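For a rough comparison of the two balances reported in this thread, the effective relocation rates (wall-clock, integer KB/s) can be computed as:

```shell
# 120 GB in ~3 weeks (Tomasz) vs 25 GB in 4 days (Piotr)
kb() { echo $(( $1 * 1024 * 1024 )); }      # GB -> KB

rate_tomasz=$(( $(kb 120) / (21 * 86400) ))
rate_piotr=$(( $(kb 25) / (4 * 86400) ))
echo "${rate_tomasz} KB/s vs ${rate_piotr} KB/s"   # → 69 KB/s vs 75 KB/s
```

Both work out to well under 100 KB/s of effective relocation, orders of magnitude below the raw throughput of a single disk.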
Hello,
I would like to ask if the balance time is related to the number of
snapshot or if this is related only to data (or both).
I currently have about 4TB of data and around 5k snapshots. I'm thinking
of going raid1 instead of single. From the numbers I see this seems
totally impossible
That's an unmanageably large and probably pointless number of snapshots,
guys.
I mean 150 is a heck of a lot, and 5000 is almost unfathomable in terms
of possible usefulness.
Snapshots are cheap but they aren't free.
Each snapshot is effectively stapling down one version of your entire
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
On 21.10.2014 20:59, Tomasz Chmielewski wrote:
FYI - after a failed disk and replacing it I've run a balance; it took
almost 3 weeks to complete, for 120 GBs of data:
Looks normal to me. Last time I started a balance after
On Tue, Oct 21, 2014 at 06:10:27PM -0700, Robert White wrote:
That's an unmanageably large and probably pointless number of
snapshots, guys.
I mean 150 is a heck of a lot, and 5000 is almost unfathomable in
terms of possible usefulness.
Snapshots are cheap but they aren't free.
This could
Robert White posted on Tue, 21 Oct 2014 18:10:27 -0700 as excerpted:
Each snapshot is effectively stapling down one version of your entire
metadata tree, right? So imagine leaving tape spikes (little marks on
the floor to keep track of where something is so you can put it back)
for the last