On Sat, 9 Jan 2021 at 22:40, Zygo Blaxell wrote:
>
> On Fri, Jan 08, 2021 at 08:29:45PM +0100, Andrea Gelmini wrote:
> > On Fri, 8 Jan 2021 at 09:36, wrote:
> > > What happens when I poison one of the drives in the mdadm array using
> > > this command? Will a
I'm trying to transfer a btrfs snapshot via the network.
First attempt: neither nc program exits after the transfer is complete. When
I Ctrl-C the sending side, the receiving side exits OK.
btrfs subvolume delete /mnt/rec/snapshots/*
receive side:
# nc -l -p 6790 | btrfs receive /mnt/rec/sna
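Spelled out, a minimal sketch of the nc-based transfer described above (the port, hostname, and snapshot paths are illustrative assumptions, not values from the thread):

```shell
# Receive side (run first): listen on TCP port 6790 and feed the
# incoming stream to btrfs receive. Port and paths are example values.
nc -l -p 6790 | btrfs receive /mnt/rec/snapshots

# Send side: stream a read-only snapshot to the receiver.
# -q 0 (nc.traditional) or -N (OpenBSD nc) tells nc to close the
# connection and exit once stdin hits EOF, avoiding the hang that
# otherwise needs a Ctrl-C.
btrfs send /mnt/src/snapshots/snap1 | nc -q 0 receiver.example.org 6790
```

Which of -q and -N is available depends on the nc variant installed; check nc -h before relying on either.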
On 2021-01-10 11:34, cedric.dew...@eclipso.eu wrote:
I'm trying to transfer a btrfs snapshot via the network.
First attempt: neither nc program exits after the transfer is complete. When
I Ctrl-C the sending side, the receiving side exits OK.
btrfs subvolume delete /mnt/rec/snapshots/*
Hi Thomas,
On Sat, 9 Jan 2021 at 1:33, Thomas Bogendoerfer wrote:
On Sat, Jan 09, 2021 at 12:58:05AM +0100, Thomas Bogendoerfer wrote:
On Fri, Jan 08, 2021 at 08:20:43PM +, Paul Cercueil wrote:
> Hi Thomas,
>
> 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this
com
On Sun, 10 Jan 2021 11:34:27 +0100
" " wrote:
> I'm trying to transfer a btrfs snapshot via the network.
>
> First attempt: neither nc program exits after the transfer is complete.
> When I Ctrl-C the sending side, the receiving side exits OK.
It is a common annoyance that nc doesn't exit
On Sun, Jan 10, 2021 at 11:34:27AM +0100, wrote:
> I'm trying to transfer a btrfs snapshot via the network.
>
> First attempt: neither nc program exits after the transfer is complete.
> When I Ctrl-C the sending side, the receiving side exits OK.
>
> btrfs subvolume delete /mnt/rec/snapsho
I migrated a system to btrfs which was hosting virtual machines with
qemu.
Using it without disabling copy-on-write was a mistake, of course, and
it became horribly fragmented and slow.
So I tried copying it to a new file... but it has actual *errors* too,
which I think are because it was using th
> On 10.01.2021 at 12:35, Paul Cercueil wrote:
>
> Hi Thomas,
>
> On Sat, 9 Jan 2021 at 1:33, Thomas Bogendoerfer wrote:
>> On Sat, Jan 09, 2021 at 12:58:05AM +0100, Thomas Bogendoerfer wrote:
>>> On Fri, Jan 08, 2021 at 08:20:43PM +, Paul Cercueil wrote:
>>> > Hi Thomas,
>>> >
>>
On 2021-01-10 12:52, David Woodhouse wrote:
I migrated a system to btrfs which was hosting virtual machines with
qemu.
Using it without disabling copy-on-write was a mistake, of course, and
it became horribly fragmented and slow.
So I tried copying it to a new file... but it has actual *error
On Sun, 2021-01-10 at 13:08 +0100, Forza wrote:
>
> On 2021-01-10 12:52, David Woodhouse wrote:
> > I migrated a system to btrfs which was hosting virtual machines with
> > qemu.
> >
> > Using it without disabling copy-on-write was a mistake, of course, and
> > it became horribly fragmented and sl
It is a common annoyance that nc doesn't exit in such a scenario; it needs
to be Ctrl-C'ed after you verify that the transfer is over.
Instead, at host2 try:
ssh host1 "btrfs send ..." | btrfs receive ...
This is also much more secure.
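The ssh approach, written out as a sketch (host name and snapshot paths are illustrative, not from the thread):

```shell
# Run on the receiving host: ssh carries the stream, so there is no open
# listening port, the data is encrypted in transit, and the pipeline exits
# on its own when btrfs send reaches EOF — no Ctrl-C needed.
# Host and paths are example values.
ssh host1 "btrfs send /mnt/src/snapshots/snap1" | btrfs receive /mnt/rec/snapshots
```

btrfs send must run as root on the remote side, so in practice this is often `ssh root@host1 ...` or a sudo-wrapped command.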
--
With respect,
Roman
I was practicing on a test system, the r
On 10/01/2021 07:41, cedric.dew...@eclipso.eu wrote:
> I've tested some more.
>
> Repeatedly sending the difference between two consecutive snapshots creates a
> structure on the target drive where all the snapshots share data. So 10
> snapshots of 10 files of 100MB take up 1GB, as expected.
>
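The sharing described above comes from incremental sends; a sketch, assuming snap1 has already been received on the target (snapshot names and mount points are example values):

```shell
# Send only the delta between snap1 (the parent, present on both sides)
# and snap2. The received snap2 then shares unchanged extents with the
# already-received snap1 instead of duplicating them.
btrfs send -p /mnt/src/snapshots/snap1 /mnt/src/snapshots/snap2 \
    | btrfs receive /mnt/rec/snapshots
```

Repeating this for each consecutive pair of snapshots reproduces the shared structure on the target.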
On Sun, Jan 10, 2021 at 01:06:44PM +, Graham Cobb wrote:
> On 10/01/2021 07:41, cedric.dew...@eclipso.eu wrote:
> > I've tested some more.
> >
> > Repeatedly sending the difference between two consecutive snapshots creates
> > a structure on the target drive where all the snapshots share data
By the way, Cedric, your SMTP server is rejecting mail from mine
with a "forged HELO" error. I'm not sure why, but I've not knowingly
encountered this with anyone else's mail server.
Hugo.
--
Hugo Mills | The last man on Earth sat in a room. Suddenly, there
hugo@... carfax.org.
On 10.01.2021 16:21, Hugo Mills wrote:
> On Sun, Jan 10, 2021 at 01:06:44PM +, Graham Cobb wrote:
>> On 10/01/2021 07:41, cedric.dew...@eclipso.eu wrote:
>>> I've tested some more.
>>>
>>> Repeatedly sending the difference between two consecutive snapshots creates
>>> a structure on the target dr
On 1/9/21 10:23 PM, Zygo Blaxell wrote:
On a loaded test server, I observed 90th percentile fsync times
drop from 7 seconds without preferred_metadata to 0.7 seconds with
preferred_metadata when all the metadata is on the SSDs. If some metadata
ever lands on a spinner, we go back to almost 7 se
On Sun, Jan 10, 2021 at 4:54 AM David Woodhouse wrote:
>
> I filed https://bugzilla.redhat.com/show_bug.cgi?id=1914433
>
> What I see is that *both* disks of the RAID-1 have data which is
> consistent, and does not match the checksum that btrfs expects:
Yeah either use nodatacow (chattr +C) or do
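The nodatacow route can be sketched like this (directory and file names are examples, not from the thread). Note that chattr +C only takes effect on newly created files, so the flag is set on the directory and inherited by files created inside it:

```shell
# Mark the images directory NOCOW so new files inherit the flag...
mkdir -p /var/lib/images
chattr +C /var/lib/images

# ...then create the VM image fresh inside it. --reflink=never forces a
# full copy into a new (NOCOW) file rather than sharing extents with the
# old, fragmented, copy-on-write image.
cp --reflink=never /old/vm.img /var/lib/images/vm.img
```

Disabling CoW also disables btrfs checksumming and compression for those files, which is the usual trade-off for VM images and databases.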
On 2021/1/9 4:53 PM, Anand Jain wrote:
I ran scrub when disk space was low on a btrfs with some raid1 csum
errors, on a system with kernel 5.11.0-rc2+. It led to a transaction
abort and a read-only FS.
Would you please provide `btrfs fi usage` output?
There is a bug in recent kernel that
On 11/1/21 8:38 am, Qu Wenruo wrote:
On 2021/1/9 4:53 PM, Anand Jain wrote:
I ran scrub when disk space was low on a btrfs with some raid1 csum
errors, on a system with kernel 5.11.0-rc2+. It led to a transaction
abort and a read-only FS.
Would you please provide `btrfs fi usage` outpu