Kai Krakow posted on Fri, 12 May 2017 20:27:56 +0200 as excerpted:
> In the end, the more continuous blocks of free space there are, the
> better the chance for proper wear leveling.
Talking about which...
When I was doing my SSD research the first time around, the going
recommendation was to
On 12.05.2017 20:07, Chris Murphy wrote:
> On Thu, May 11, 2017 at 5:24 PM, Ochi wrote:
>> Hello,
>>
>> here is the journal.log (I hope). It's quite interesting. I rebooted the
>> machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing
>> afterwards (around timestamp 66.*).
On Fri, 12 May 2017 20:36:44 +0200
Kai Krakow wrote:
> My concern is with fail scenarios of some SSDs which die unexpectedly and
> horribly. I found some reports of older Samsung SSDs which failed
> suddenly and unexpectedly, and in a way that the drive completely died:
> No
On Sat, 13 May 2017 14:52:47 +0500, Roman Mamedov wrote:
> On Fri, 12 May 2017 20:36:44 +0200
> Kai Krakow wrote:
>
> > My concern is with fail scenarios of some SSDs which die unexpectedly
> > and horribly. I found some reports of older Samsung SSDs
With a larger file system (in this case 22 TB), ext2fs_read_block_bitmap()
fails with an EXT2_ET_CANT_USE_LEGACY_BITMAPS error after ext2fs_open().
To overcome this, we need to pass the EXT2_FLAG_64BITS flag to ext2fs_open()
and also use 64-bit functions such as ext2fs_get_block_bitmap_range2,
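A minimal sketch of that 64-bit open path, assuming libext2fs from e2fsprogs;
the device path is illustrative and the error handling is only a skeleton:

```c
/* Sketch: opening a large ext4 file system with 64-bit bitmap support.
 * Without EXT2_FLAG_64BITS, ext2fs_read_block_bitmap() fails with
 * EXT2_ET_CANT_USE_LEGACY_BITMAPS on file systems this big. */
#include <ext2fs/ext2fs.h>
#include <et/com_err.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    ext2_filsys fs;
    errcode_t err;
    const char *dev = argc > 1 ? argv[1] : "/dev/sdb1"; /* illustrative */

    /* Pass EXT2_FLAG_64BITS so the library builds 64-bit bitmaps. */
    err = ext2fs_open(dev, EXT2_FLAG_64BITS, 0, 0, unix_io_manager, &fs);
    if (err) {
        com_err("ext2fs_open", err, "while opening %s", dev);
        return 1;
    }

    err = ext2fs_read_block_bitmap(fs);
    if (err) {
        com_err("ext2fs_read_block_bitmap", err, "while reading bitmap");
        ext2fs_close(fs);
        return 1;
    }

    /* With 64-bit bitmaps, use the *2 accessors, e.g. to copy a range
     * of block bits into a buffer: */
    unsigned char buf[16] = { 0 };
    blk64_t first = fs->super->s_first_data_block;
    err = ext2fs_get_block_bitmap_range2(fs->block_map, first,
                                         8 * sizeof(buf), buf);
    if (err)
        com_err("ext2fs_get_block_bitmap_range2", err, "while copying range");

    ext2fs_close(fs);
    return err ? 1 : 0;
}
```

The legacy (non-`2`) accessors take 32-bit block numbers, so on a 22 TB
file system they cannot address the upper blocks; the `*2` variants take
`blk64_t` throughout.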
On Sat, 13 May 2017 09:39:39 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
> Kai Krakow posted on Fri, 12 May 2017 20:27:56 +0200 as excerpted:
>
> > In the end, the more continuous blocks of free space there are, the
> > better the chance for proper wear leveling.
>
> Talking about
>
> Ping?
>
> Any comments?
>
> Thanks,
> Qu
Can I inject corruption with the existing script [1] and expect offline
scrub to fix it? If so, I'll give it a try and let you know the results.
[1] https://patchwork.kernel.org/patch/9583455/
Cheers,
Lakshmipathi.G
Hello,
okay, I think I now have a repro that is stupidly simple, I'm not even
sure if I'm overlooking something here. No multi-device btrfs involved, but
notably it does happen with btrfs, but not with e.g. ext4.
[Sidenote: At first I thought it had to do with systemd-cryptsetup
opening multiple
On 13.05.2017 18:28, Ochi wrote:
> Hello,
>
> okay, I think I now have a repro that is stupidly simple, I'm not even
> sure if I'm overlooking something here. No multi-device btrfs involved, but
> notably it does happen with btrfs, but not with e.g. ext4.
>
I could not reproduce it with a single device.
Kernel 4.11, btrfs-progs v4.7.3
I run scrub and balance every night; I've been doing this for 1.5 years on
this filesystem.
But it has just started failing:
saruman:~# btrfs balance start -musage=0 /mnt/btrfs_pool1
Done, had to relocate 0 out of 235 chunks
saruman:~# btrfs balance start -dusage=0
Hi Liu,
On Wed, Mar 22, 2017 at 1:40 AM, Liu Bo wrote:
> On Sun, Mar 19, 2017 at 07:18:59PM +0200, Alex Lyakas wrote:
>> We have a commit_root_sem, which is a read-write semaphore that protects the
>> commit roots.
>> But it is also used to protect the list of caching block
> Anyway, that 20-33% left entirely unallocated/unpartitioned
> recommendation still holds, right?
I never liked that idea. And I really disliked how people considered
it to be (and even passed it down as) some magical, absolute
stupid-proof fail-safe thing (because it's not).
1: Unless you
On May 10, 2017, at 11:10 PM, Eric Biggers wrote:
>
> On Wed, May 10, 2017 at 01:14:37PM -0700, Darrick J. Wong wrote:
>> [cc btrfs, since afaict that's where most of the dedupe tool authors hang
>> out]
>>
>> On Wed, May 10, 2017 at 02:27:33PM -0500, Eric W. Biederman
On Sat, May 13, 2017 at 07:41:24PM -0600, Andreas Dilger wrote:
> On May 10, 2017, at 11:10 PM, Eric Biggers wrote:
> >
> > On Wed, May 10, 2017 at 01:14:37PM -0700, Darrick J. Wong wrote:
> >> [cc btrfs, since afaict that's where most of the dedupe tool authors hang
> >>