On 2016-10-21 18:13, Peter Becker wrote:
if you have >750 GB free you can simply remove one of the drives:
btrfs device delete /dev/sd[x] /mnt
# power off, replace device
btrfs device add /dev/sd[y] /mnt
Make sure to balance afterwards if you do this; otherwise the new disk will
be pretty much unused.
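The remove-then-add sequence above can be sketched end to end. Device names and the mount point here (/dev/sdx, /dev/sdy, /mnt) are placeholders; adjust them to your setup:

```shell
# Remove the old drive from the mounted filesystem; btrfs migrates its
# data onto the remaining devices (needs enough free space to hold it).
btrfs device delete /dev/sdx /mnt

# Power off, physically swap in the new drive, boot, then add it:
btrfs device add /dev/sdy /mnt

# The new device starts out empty, so spread existing chunks onto it.
btrfs balance start /mnt
```

A full balance rewrites every chunk and can take many hours on a large array, so plan for that.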
if not, you can use a USB-SATA adapter or an eSATA port and do the following:
btrfs device add /dev/sd[y] /mnt
btrfs device delete /dev/sd[x] /mnt
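With a spare port (USB-SATA or eSATA) the order flips to add-then-delete, which avoids needing the free space up front and skips the separate balance, since the delete itself migrates the old drive's chunks, landing them largely on the new (emptiest) device. A sketch with placeholder device names:

```shell
# Attach the new drive via the spare port and add it to the filesystem:
btrfs device add /dev/sdy /mnt

# Then remove the old one; its data is relocated as part of the delete.
btrfs device delete /dev/sdx /mnt
```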
On Sat, Oct 22, 2016 at 09:03:16AM +1100, Gareth Pye wrote:
> I've got a BTRFS array that is of mixed size disks:
> 2x750G
> 3x1.5T
> 3x3T
> And it's getting fuller than I'd like. The problem is that adding
> disks is harder than one would like, as the computer only has 8 SATA
> ports. Is it viable to do the following to upgrade one of the disks?
> A) Take array offline
On 12/01/2014 06:47 AM, Oliver wrote:
Hi All,
on a testing machine I installed four HDDs, configured as RAID6. For a
test I removed one of the drives (/dev/sdk) while the volume was mounted
and data was being written to it. This worked well, as far as I can see.
Some I/O errors were written to /var/log/syslog, but the
Dear Robert,
thank you for all the possibilities! I think options 4 + 3 would be my
preferred ones.
Before I received your answer I had already played around with that volume,
and converting it to RAID5 seems to work too; I'll attach the steps I
took. Soon I'll try your solution and do some
I started experimenting with an 8-drive BTRFS RAID6, but rebalanced it to
RAID10 a while ago. Recently, though, I've run into problems when trying to
replace a drive.
Nov 18 10:18:44 ganymede kernel: BTRFS warning (device sdk): dev_replace
cannot yet handle RAID5/RAID6
Even though df shows
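Since `btrfs replace` refuses to run here ("cannot yet handle RAID5/RAID6" on that kernel), the usual fallback at the time was the add/delete route, which works regardless of profile. A hedged sketch, with /dev/sdnew and /mnt as placeholders:

```shell
# Add the replacement drive first so there is somewhere to migrate data:
btrfs device add /dev/sdnew /mnt

# Then remove the failed one; use 'missing' if the old device is no
# longer present (requires a degraded mount in that case).
btrfs device delete missing /mnt
```

The delete relocates all chunks that lived on the failed device, so it is considerably slower than a true replace, but it does not depend on dev_replace support for the RAID profile.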
Still, some things aren't matching; it could be that the
underlying group profile changed without
your notice (yes, in some circumstances it can).
Can you show 'btrfs fi df mnt'?
Anand
On 10/19/14 07:12, Vincent. wrote:
I've upgraded to kernel 3.17.0 to update btrfs.ko and it's
Suman Chakravartula posted on Fri, 17 Oct 2014 19:02:49 -0700 as
excerpted:
On 2014-10-17 18:47, Vincent. wrote:
Hi !
I have a faulty drive in my raid10 and want it to be replaced. The working
drives are xvd[bef] and the replacement drive is xvdc.
This is something I ran into the other day. Key
Hi !
I have a faulty drive in my raid10 and want it to be replaced.
The working drives are xvd[bef] and the replacement drive is xvdc.
When I mount my drive in RW:
#mount -odegraded /dev/xvdb /tank
#dmesg -c
[ 6207.294513] btrfs: device fsid 728ef4d8-928c-435c-b707-f71c459e1520
devid 1 transid 551398
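On raid10 (unlike RAID5/6 at the time), `btrfs replace` should work from a degraded mount like this and copies data directly onto the new device. A sketch of the sequence; the devid below is a placeholder, so check it with `btrfs filesystem show` first:

```shell
# Mount degraded, since one member is gone:
mount -o degraded /dev/xvdb /tank

# Find the devid of the missing device:
btrfs filesystem show /tank

# Replace that devid with the new drive (-B runs in the foreground):
btrfs replace start -B <devid> /dev/xvdc /tank
btrfs replace status /tank
```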
On 2014-10-17 18:47, Vincent. wrote:
Hi !
I have a faulty drive in my raid10 and want it to be replaced.
The working drives are xvd[bef] and the replacement drive is xvdc.
This is something I ran into the other day. The key difference is that I
was running the 3.17.1 kernel and 3.16 btrfs-progs.
When I
Some things aren't matching well. The issue is:
[ 6219.703606] Btrfs: too many missing devices, writeable mount is not
allowed
But per Vincent only xvdc is missing in a raid10 (both data and metadata
are raid10 ?)
Anand
On 10/18/14 10:02, Suman Chakravartula wrote:
On