On Wed, 17 Jul 2013 00:44:15 +0400 CoolCold wrote:
> Neil, I've tried to look through the commit logs but failed to find the
> commit where discard/trim support was added.
> I was looking via
> http://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/log/drivers/md?id=9f2a940965286754f3a34d5737c3097c05db8725&qt=grep&q=discard+support
> , tried just "discard" without
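The cgit URL above encodes a grep search over the history of drivers/md. Assuming a local checkout of the kernel tree, a roughly equivalent search (a sketch, not the exact cgit query) is:

```shell
# Search commit messages touching drivers/md for "discard"
# (run inside a kernel source checkout; falls back gracefully otherwise).
git log --oneline -i --grep='discard' -- drivers/md 2>/dev/null \
    || echo "not inside a git repository"
```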
Part of this depends on the exact failure mode. I've seen cases where drives
fail, and the drive does a bunch of retries, then the OS does a bunch of
retries, and eventually the read fails, but in the meantime, everything stalls
for a long time.
I've even seen the same thing in at least one
Thanks for the replies.
After some further testing:
When I ran a repair via the md's sync_action, the system would throttle
I/O to the RAID-1 down to 14 KB/s or even less once it hit a certain number
of blocks, effectively locking the system every time.
It turned out to be a bad SSD (it also failed
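For reference, a repair pass like the one described is requested by writing to the array's sync_action file in sysfs. A minimal sketch, where md1 is an example device name and the writability check makes it safe to dry-run:

```shell
# Request a repair pass on an md array and show its progress.
# MD_SYS can be overridden for a dry run; writing requires root.
MD_SYS=${MD_SYS:-/sys/block/md1/md}
if [ -w "$MD_SYS/sync_action" ]; then
    echo repair > "$MD_SYS/sync_action"
    cat /proc/mdstat     # shows the resync/repair progress bar
else
    echo "sync_action not writable at $MD_SYS (need root and a real array)"
fi
```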
On Sat, 13 Jul 2013 06:34:19 -0400 "Justin Piszcz"
wrote:
> Hello,
>
> Running 3.10 and I see the following for an md-raid1 of two SSDs:
>
> Checking /sys/block/md1/queue:
> add_random: 0
> discard_granularity: 512
> discard_max_bytes: 2147450880
> discard_zeroes_data: 0
> hw_sector_size: 512
On 13/07/13 18:34, Justin Piszcz wrote:
And possibly:
discard_zeroes_data: 1
Does it though?
Here's my 6 x SSD RAID10 that definitely discards.
brad@srv:~$ grep . /sys/block/md2/queue/*
/sys/block/md2/queue/add_random:0
/sys/block/md2/queue/discard_granularity:33553920
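Another way to confirm an array really honours discard is `fstrim -v`, which reports how many bytes it trimmed; a nonzero figure means the whole stack (filesystem, md, SSD) passes discards through. A sketch, where /srv is a hypothetical mount point standing in for wherever the md2 filesystem is mounted:

```shell
# Ask the filesystem to discard its unused blocks; -v prints bytes trimmed.
# MNT is an example mount point; substitute your own.
MNT=${MNT:-/srv}
fstrim -v "$MNT" 2>/dev/null \
    || echo "fstrim failed on $MNT (not mounted, unsupported, or not root)"
```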
Hello,
Running 3.10 and I see the following for an md-raid1 of two SSDs:
Checking /sys/block/md1/queue:
add_random: 0
discard_granularity: 512
discard_max_bytes: 2147450880
discard_zeroes_data: 0
hw_sector_size: 512
iostats: 0
logical_block_size: 512
max_hw_sectors_kb: 32767
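The discard-related queue values above can be pulled straight from sysfs for the array and each member disk, which makes it easy to see at which layer discard support stops. The device names below are examples only:

```shell
# Show discard-related queue limits for an md array and its members.
# A discard_max_bytes of 0 at any layer means discards stop there.
for dev in md1 sda sdb; do
    echo "== $dev =="
    grep . /sys/block/"$dev"/queue/discard_* 2>/dev/null \
        || echo "  (no such device here)"
done
```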