On Thu, Apr 28, 2016 at 7:09 AM, Matthias Bodenbinder
wrote:
> On 26.04.2016 at 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
>> this drivebay. The command lsusb -v could show that. So your HW
>> setup is like JBOD, not RAID.
>
> Here is the
Gareth Pye posted on Thu, 28 Apr 2016 15:24:51 +1000 as excerpted:
> PDF doc info dates it at 23/1/2013, which is the best guess that can
> easily be found.
Well, "easily" is relative, but motivated by your observation I first
confirmed it, then decided to see what google had to say about the
a
PDF doc info dates it at 23/1/2013, which is the best guess that can
easily be found.
On 26.04.2016 at 18:42, Holger Hoffstätte wrote:
> On 04/26/16 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
>> this drivebay. The command lsusb -v could show that. So your HW
>> setup is like JBOD, not RAID.
>
> I hate to quote the "harmful
On 26.04.2016 at 18:19, Henk Slager wrote:
> It looks like a JMS567 + SATA port multipliers behind it are used in
> this drivebay. The command lsusb -v could show that. So your HW
> setup is like JBOD, not RAID.
Here is the output of lsusb -v:
Bus 003 Device 004: ID 152d:0567 JMicron Techno
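For reference, a quick way to confirm that several discs sit behind this one
USB bridge (a sketch; the 152d:0567 ID is taken from the output above, the
bus/device numbers and sd* names will differ per system):

  # lsusb -v -d 152d:0567 | less     # full descriptor of the JMicron bridge
  # ls -l /sys/block/ | grep usb     # USB-attached discs share the same usb path in their symlink target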
On 04/26/16 18:19, Henk Slager wrote:
> It looks like a JMS567 + SATA port multipliers behind it are used in
> this drivebay. The command lsusb -v could show that. So your HW
> setup is like JBOD, not RAID.
I hate to quote the "harmful" trope, but..
SATA Port Multipliers Considered Harmful
ht
On Thu, Apr 21, 2016 at 7:27 PM, Matthias Bodenbinder
wrote:
> On 21.04.2016 at 13:28, Henk Slager wrote:
>>> Can anyone explain this behavior?
>>
>> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
>> this test. What is on WD20 is unclear to me, but the raid1 array is
>> {WD75,
On Sat, Apr 23, 2016 at 9:07 AM, Matthias Bodenbinder
wrote:
>
> Here is my newest test. The backports provide a 4.5 kernel:
>
>
> kernel: 4.5.0-0.bpo.1-amd64
> btrfs-tools: 4.4-1~bpo8+1
>
>
> This time the raid1 is automatically unmounted after I unplug the device and
> it can not be m
On 2016/04/23 16:07, Matthias Bodenbinder wrote:
Here is my newest test. The backports provide a 4.5 kernel:
kernel: 4.5.0-0.bpo.1-amd64
btrfs-tools: 4.4-1~bpo8+1
This time the raid1 is automatically unmounted after I unplug the device and it
can not be mounted while the device is mi
On 23.04.2016 at 09:07, Matthias Bodenbinder wrote:
> 14# mount /mnt/raid1/
> mount: wrong fs type, bad option, bad superblock on /dev/sdh,
>        missing codepage or helper program, or other error
>
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so.
>
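When one raid1 member is missing, a plain mount fails like the above; a
degraded mount is the usual way back in (a sketch, assuming the same fstab
entry and mount point, and noting that some kernels may still refuse it):

  # mount -o degraded /mnt/raid1
  # btrfs filesystem show           # lists the remaining members and should flag the missing device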
Here is my newest test. The backports provide a 4.5 kernel:
kernel: 4.5.0-0.bpo.1-amd64
btrfs-tools: 4.4-1~bpo8+1
This time the raid1 is automatically unmounted after I unplug the device and it
can not be mounted while the device is missing. See below.
Matthias
1) turn on the
On 2016/04/22 14:32, Qu Wenruo wrote:
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900:
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22
Matthias Bodenbinder wrote on 2016/04/21 19:40 +0200:
On 21.04.2016 at 07:43, Qu Wenruo wrote:
There are already unmerged patches which will partly provide mdadm-level
behavior, such as automatically switching to degraded mode without making the fs RO.
The original patchset:
http://comments.gmane.
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900:
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be be
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here w
On 21.04.2016 at 07:43, Qu Wenruo wrote:
> There are already unmerged patches which will partly provide mdadm-level
> behavior, such as automatically switching to degraded mode without making the fs RO.
>
> The original patchset:
> http://comments.gmane.org/gmane.comp.file-systems.btrfs/48335
The desc
On 21.04.2016 at 13:28, Henk Slager wrote:
>> Can anyone explain this behavior?
>
> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
> this test. What is on WD20 is unclear to me, but the raid1 array is
> {WD75, WD50, SP2504C}
> So the test as described by Matthias is not what a
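To verify which physical drives are actually members of the raid1 (a sketch;
the model names match Henk's summary, the rest of the output depends on the
system):

  # btrfs filesystem show              # devid, size and device path of every member
  # lsblk -o NAME,MODEL,SIZE,FSTYPE    # maps the /dev/sdX names to the WD/Samsung models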
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi
wrote:
> On 2016/04/20 14:17, Matthias Bodenbinder wrote:
>>
>>> On 18.04.2016 at 09:22, Qu Wenruo wrote:
>>>
>>> BTW, it would be better to post the dmesg for better debug.
>>
>>
>> So here we go. I did the same test again. Here is a full log of what
On 2016-04-21 02:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I
did. It seems to me like
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like
On 04/21/2016 01:15 PM, Matthias Bodenbinder wrote:
On 20.04.2016 at 15:32, Anand Jain wrote:
1. mount the raid1 (2 discs with different sizes)
2. unplug the biggest drive (hotplug)
Btrfs won't know that you have unplugged a disk.
Though it experiences IO failures, it won't close t
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
1. mount
Liu Bo wrote on 2016/04/20 23:02 -0700:
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
On 20.04.2016 at 09:25, Qu Wenruo wrote:
Unfortunately, this is the designed behavior.
The fs is rw just because it doesn't hit any c
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
>
>
> Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
> >On 20.04.2016 at 09:25, Qu Wenruo wrote:
> >
> >>
> >>Unfortunately, this is the designed behavior.
> >>
> >>The fs is rw just because it doesn't hit any critical problem.
>
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
On 20.04.2016 at 09:25, Qu Wenruo wrote:
Unfortunately, this is the designed behavior.
The fs is rw just because it doesn't hit any critical problem.
If you try to touch a file and then sync the fs, btrfs will become RO
immediately.
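That is easy to confirm once a raid1 member has been pulled (a sketch,
assuming the array is still mounted read-write at /mnt/raid1):

  # touch /mnt/raid1/testfile
  # sync                     # the commit hits the missing device and the transaction aborts
  # dmesg | tail             # expect a transaction abort / forced readonly message
  # mount | grep raid1       # the filesystem now shows up as ro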
On 20.04.2016 at 09:25, Qu Wenruo wrote:
>
> Unfortunately, this is the designed behavior.
>
> The fs is rw just because it doesn't hit any critical problem.
>
> If you try to touch a file and then sync the fs, btrfs will become RO
> immediately.
>
> Btrfs fails to read space cache, no
On 20.04.2016 at 15:32, Anand Jain wrote:
>> 1. mount the raid1 (2 discs with different sizes)
>
>> 2. unplug the biggest drive (hotplug)
>
> Btrfs won't know that you have unplugged a disk.
> Though it experiences IO failures, it won't close the bdev.
Well, as far as I can tell mdadm can h
1. mount the raid1 (2 discs with different sizes)
2. unplug the biggest drive (hotplug)
Btrfs won't know that you have unplugged a disk.
Though it experiences IO failures, it won't close the bdev.
3. try to copy something to the degraded raid1
This will work as long as you do _no
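Those silent IO failures can be watched from user space while the disk is
gone (a sketch; /mnt/raid1 is assumed to be the mount point):

  # btrfs device stats /mnt/raid1   # per-device read/write/flush error counters keep climbing
  # dmesg | tail                    # the kernel still logs the failed IO against the pulled device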
Matthias Bodenbinder wrote on 2016/04/20 07:17 +0200:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
1.
On 18.04.2016 at 09:22, Qu Wenruo wrote:
> BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
1. mount the raid1 (2 discs with different sizes)
2. unplug t
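The same hot-unplug can also be simulated from software, which makes the
sequence Matthias describes easier to reproduce (a sketch; the sdh device
node and the SCSI host number are placeholders):

  # echo 1 > /sys/block/sdh/device/delete            # detach the disk as if it were pulled
  # echo '- - -' > /sys/class/scsi_host/host6/scan   # rescan later to let it reappear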
Not quite sure about raid1 behavior.
But your "hotplug" seems to be the problem.
IIRC Btrfs is known to have problems with a re-appearing device.
If the hot-removed device is fully wiped before being re-plugged, it should
not cause the RO mount (abort transaction).
BTW, it would be better to post the dmes
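Qu's suggestion of wiping the pulled disk before re-plugging could look like
this (a sketch; /dev/sdh is a placeholder, and wipefs destroys the btrfs
signature on that disk, so it then has to be re-added):

  # wipefs -a /dev/sdh                     # clear all signatures so the stale copy is not re-detected
  # btrfs device add /dev/sdh /mnt/raid1   # the exact re-add/rebalance steps depend on how the fs
                                           # currently sees the old device (missing vs. present)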
Hi,
I have a raid1 with 3 drives: 698, 465 and 232 GB. I copied 1.7 GB of data to that
raid1, balanced the filesystem and then removed the bigger drive (hotplug).
The data was still available. Now I copied the /root directory to the raid1. It
showed up via ls -l. Then I plugged in the missing har
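A setup like the one described here can be created and exercised as follows
(a sketch; device names and the data path are placeholders, and -m raid1 is
assumed alongside -d raid1):

  # mkfs.btrfs -d raid1 -m raid1 /dev/sdf /dev/sdg /dev/sdh   # three differently sized drives
  # mount /dev/sdf /mnt/raid1
  # cp -a /some/data /mnt/raid1/
  # btrfs balance start /mnt/raid1                            # spread the chunks before pulling a drive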