Marc MERLIN posted on Sat, 13 May 2017 13:54:31 -0700 as excerpted:
> Kernel 4.11, btrfs-progs v4.7.3
>
> I run scrub and balance every night, been doing this for 1.5 years on
> this filesystem.
> But it has just started failing:
> saruman:~# btrfs balance start -musage=0 /mnt/btrfs_pool1
> Don
Imran Geriskovan posted on Fri, 12 May 2017 15:02:20 +0200 as excerpted:
> On 5/12/17, Duncan <1i5t5.dun...@cox.net> wrote:
>> FWIW, I'm in the market for SSDs ATM, and remembered this from a couple
>> weeks ago so went back to find it. Thanks. =:^)
>>
>> (I'm currently still on quarter-TB genera
On Sat, May 13, 2017 at 6:41 PM, Andreas Dilger wrote:
> On May 10, 2017, at 11:10 PM, Eric Biggers wrote:
>>
>> On Wed, May 10, 2017 at 01:14:37PM -0700, Darrick J. Wong wrote:
>>> [cc btrfs, since afaict that's where most of the dedupe tool authors hang
>>> out]
>> Yes, PIDs have traditionall
gargamel:/sys/block/bcache16/bcache# echo 1 > stop
bcache: bcache_device_free() bcache16 stopped
[ cut here ]
WARNING: CPU: 7 PID: 11051 at lib/idr.c:383 ida_remove+0xe8/0x10b
ida_remove called for id=16 which is not allocated.
Modules linked in: uas usb_storage veth ip6ta
On Sat, May 13, 2017 at 3:39 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> When I was doing my ssd research the first time around, the going
> recommendation was to keep 20-33% of the total space on the ssd entirely
> unallocated, allowing it to use that space as an FTL erase-block
> management pool.
Hi,
Chris Murphy suggested we move the discussion in this bugzilla thread:
https://bugzilla.kernel.org/show_bug.cgi?id=115851
To here, the mailing list.
Going to quote him to give context:
"This might be better discussed on list to ensure there's congruence in
dev and user expectations; and in p
My apologies, this was for the bcache list, sorry about this.
On Sun, May 14, 2017 at 08:25:22AM -0700, Marc MERLIN wrote:
>
> gargamel:/sys/block/bcache16/bcache# echo 1 > stop
>
> bcache: bcache_device_free() bcache16 stopped
> [ cut here ]
> WARNING: CPU: 7 PID: 11051
All the stuff that Chris wrote holds true; I just wanted to add flash-specific
information (from my experience of writing low-level code for operating flash).
So with flash, to erase you have to erase a large allocation block; usually it
used to be 128kB (plus some crc data and stuff makes more than
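A rough illustration of why the large erase block matters: rewriting a single page in place forces the FTL to rewrite every page in that erase block. The 128 kB figure is from the message above; the 4 kB page size is an assumption for the sketch:

```python
ERASE_BLOCK = 128 * 1024   # erase granularity cited in the message above
PAGE = 4 * 1024            # assumed flash page size

def worst_case_write_amplification(erase_block, page):
    """Worst case: updating one page costs rewriting every page
    in its erase block."""
    return erase_block // page

print(worst_case_write_amplification(ERASE_BLOCK, PAGE))  # 32
```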
On 05/13/2017 10:54 PM, Marc MERLIN wrote:
> Kernel 4.11, btrfs-progs v4.7.3
>
> I run scrub and balance every night, been doing this for 1.5 years on this
> filesystem.
What are the exact commands you run every day?
> But it has just started failing:
> [...]
> saruman:~# btrfs fi usage /mnt/bt
On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote:
> On 05/13/2017 10:54 PM, Marc MERLIN wrote:
> > Kernel 4.11, btrfs-progs v4.7.3
> >
> > I run scrub and balance every night, been doing this for 1.5 years on this
> > filesystem.
>
> What are the exact commands you run every da
On 05/14/2017 08:01 PM, Tomasz Kusmierz wrote:
> All stuff that Chris wrote holds true, I just wanted to add flash
> specific information (from my experience of writing low level code
> for operating flash)
Thanks!
> [... erase ...]
> In terms of over provisioning of SSD it's a give and take
> r
On 14/05/2017 at 22:15, Marc MERLIN wrote:
> On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote:
>> On 05/13/2017 10:54 PM, Marc MERLIN wrote:
>>> Kernel 4.11, btrfs-progs v4.7.3
>>>
>>> I run scrub and balance every night, been doing this for 1.5 years on this
>>> filesystem.
>>
On Sun, May 14, 2017 at 01:15:09PM -0700, Marc MERLIN wrote:
> On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote:
> > On 05/13/2017 10:54 PM, Marc MERLIN wrote:
> > > Kernel 4.11, btrfs-progs v4.7.3
> > >
> > > I run scrub and balance every night, been doing this for 1.5 years on
On Sun, 14 May 2017 13:15:09 -0700, Marc MERLIN wrote:
> On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote:
> > On 05/13/2017 10:54 PM, Marc MERLIN wrote:
> > > Kernel 4.11, btrfs-progs v4.7.3
> > >
> > > I run scrub and balance every night, been doing this for 1.5
> > > yea
On Sun, 14 May 2017 22:57:26 +0200, Lionel Bouton wrote:
> I've coded a Ruby script which tries to balance the cost of
> reallocating a group against the need for it. The basic idea is that it
> tries to keep the proportion of free space "wasted" by being allocated
> although it isn't used b
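The script itself isn't in the thread; as a hedged sketch of the idea it describes (trigger a balance only when the allocated-but-unused share of the filesystem gets too large), the core decision might look like this. The 10% threshold is an illustrative assumption, not a value from the script:

```python
def should_balance(total_bytes, allocated_bytes, used_bytes,
                   max_wasted_fraction=0.10):
    """Weigh the cost of a balance against the need for it: run one
    only when the allocated-but-unused ("wasted") space exceeds
    max_wasted_fraction of the whole filesystem."""
    wasted = allocated_bytes - used_bytes
    return wasted / total_bytes > max_wasted_fraction

# 1024 units total, 600 allocated, 450 used -> 150 wasted (~14.6%)
print(should_balance(1024, 600, 450))  # True
```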
On 5/14/17, Tomasz Kusmierz wrote:
> In terms of over provisioning of SSD it's a give and take relationship … on
> good drive there is enough over provisioning to allow a normal operation on
> systems without TRIM … now if you would use a 1TB drive daily without TRIM
> and have only 30GB stored on
On 14/05/2017 at 23:30, Kai Krakow wrote:
> On Sun, 14 May 2017 22:57:26 +0200, Lionel Bouton wrote:
>
>> I've coded a Ruby script which tries to balance the cost of
>> reallocating a group against the need for it.[...]
>> Given its current size, I should probably push it on github...
> Y
On Sun, May 14, 2017 at 09:21:11PM +, Hugo Mills wrote:
> > 2) balance -musage=0
> > 3) balance -musage=20
>
>In most cases, this is going to make ENOSPC problems worse, not
> better. The reason for doing this kind of balance is to recover unused
> space and allow it to be reallocated. The
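Hugo's point is that a usage-filtered balance only rewrites chunks at most N% full, so how much space it reclaims depends on how empty those chunks are. A common pattern (a sketch of general practice, not a recommendation made in this thread) is to walk the threshold up gradually so each cheap pass frees room for the next, heavier one:

```python
def balance_commands(mountpoint, thresholds=(0, 5, 10, 20)):
    """Generate usage-filtered balance invocations in increasing
    order of threshold. The ladder of values is an illustrative
    assumption; tune it to the filesystem."""
    return [f"btrfs balance start -dusage={t} -musage={t} {mountpoint}"
            for t in thresholds]

for cmd in balance_commands("/mnt/btrfs_pool1"):
    print(cmd)
```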
Theoretically all sectors in over provisioning are erased - practically they are
either erased, waiting to be erased, or broken.
What you have to understand is that sectors on an SSD are not where you really
think they are - they can swap places with sectors in the over-provisioning
area, they can swap
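The "sectors are not where you think" point is the flash translation layer: the drive keeps a logical-to-physical map and directs each rewrite to a fresh, pre-erased block instead of writing in place. A toy model of that remapping (purely illustrative, nothing like a real FTL's data structures):

```python
class ToyFTL:
    """Toy flash translation layer: each write of a logical sector
    goes to whichever physical block is free, so a logical address
    never pins one physical location."""
    def __init__(self, physical_blocks):
        self.free = list(range(physical_blocks))  # pool of erased blocks
        self.map = {}                             # logical -> physical

    def write(self, logical):
        old = self.map.get(logical)
        self.map[logical] = self.free.pop(0)      # take an erased block
        if old is not None:
            self.free.append(old)                 # old copy awaits erase

ftl = ToyFTL(4)
ftl.write(0)
ftl.write(0)          # rewrite logical sector 0
print(ftl.map[0])     # 1 -> data moved to a new block, not rewritten in place
```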