Ubuntu create snapshot before each release upgrade
sudo mount /dev/sda6 /mnt -o rw,subvol=/;
ls /mnt
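A minimal sketch of the suggestion, assuming the device is /dev/sda6 as in the command above and that the root subvolume follows Ubuntu's "@" naming (the snapshot name itself is hypothetical):

```shell
# Mount the top level of the filesystem (subvol=/), then take a read-only
# snapshot of the root subvolume before running the release upgrade.
sudo mount /dev/sda6 /mnt -o rw,subvol=/
SNAP="@_pre-upgrade-$(date +%Y%m%d)"
sudo btrfs subvolume snapshot -r /mnt/@ "/mnt/$SNAP"
sudo umount /mnt
```

Rolling back is then a matter of booting with subvol= pointing at the snapshot, or renaming it over the damaged subvolume.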
2015-11-14 9:16 GMT+03:00 Brenton Chapin :
> Thanks for the ideas. Sadly, no snapshots, unless btrfs does that by
> default. Never heard of snapper before.
>
> Don't see
Looks good,
Acked-by: Christoph Hellwig
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
I'm looking to make a "production copy" of my music and video library
for use in our media server. It is not my intent to create any form
of RAID array, but rather to treat each drive independently where the
filesystem is concerned and then to create a single view of the drives
using mhddfs. As the
On 2015-11-14 11:43, audio muze wrote:
> I can turn checksumming
> off given it's of no utility where a Btrfs volume is comprised of a
> single device only?
The checksums are used to detect data corruption; in the case of btrfs-raid,
the checksums are *also* used to pick the good copy.
BR
From: Filipe Manana
We were using only 1 transaction unit when attempting to delete an unused
block group but in reality we need 3 + N units, where N corresponds to the
number of stripes. We were accounting only for the addition of the orphan
item (for the block group's free
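So the reservation grows with the stripe count; the arithmetic, using the fixed 3 units from the patch description (the example stripe counts are invented):

```shell
# 3 fixed units plus one per stripe of the block group being deleted.
for stripes in 1 2 4; do
    echo "stripes=$stripes units=$((3 + stripes))"
done
```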
From: Filipe Manana
The following pair of changes fix an issue observed in a production
environment where any file operations done by a package manager failed
with ENOSPC. Forcing a commit of the current transaction (through "sync")
didn't help; neither did a balance operation with the
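The underlying arithmetic of that ENOSPC state can be sketched with invented numbers: once the whole device is allocated to chunks, nothing short of freeing a chunk back to the unallocated pool helps:

```shell
# All numbers invented, in GiB: device fully allocated to data chunks.
total=100; data_alloc=99; meta_alloc=1
unalloc=$((total - data_alloc - meta_alloc))
echo "unallocated=${unalloc}GiB"    # 0GiB: no room to grow metadata
# A filtered balance (e.g. "btrfs balance start -dusage=5") can return
# nearly-empty data chunks to the unallocated pool.
```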
From: Filipe Manana
It's possible to reach a state where the cleaner kthread isn't able to
start a transaction to delete an unused block group, due to lack of
free metadata space and lack of unallocated device space from which to
allocate a new metadata block group.
On 2015-11-13 11:20, Anand Jain wrote:
>
> Thanks for comments.
>
> On 11/13/2015 03:21 AM, Goffredo Baroncelli wrote:
>> On 2015-11-09 11:56, Anand Jain wrote:
>>> These set of patches provides btrfs hot spare and auto replace support
>>> for you review and comments.
>>
>> Hi Anand,
>>
>> is
Duncan posted on Sat, 14 Nov 2015 16:37:14 +0000 as excerpted:
> Hugo Mills posted on Sat, 14 Nov 2015 14:31:12 +0000 as excerpted:
>
>>> I have read the Gotcha[1] page:
>>>
>>>Files with a lot of random writes can become heavily fragmented
>>> (10000+ extents) causing thrashing on HDDs and
On Sun, 2015-11-15 at 09:29 +0800, Qu Wenruo wrote:
> > > If type is wrong, all the extents inside the chunk should be
> > > reported as mismatch type with chunk.
> > Isn't that the case? At least there are so many reported extents...
>
> If you posted all the output
Sure, I posted
I've gone ahead and created a single drive Btrfs filesystem on a 3TB
drive and started copying content from a raid5 array to the Btrfs
volume. Initially, copy speeds were very good, sustained at ~145MB/s,
and I left it to run overnight. This morning I ran btrfs fi usage
/mnt/btrfs and it reported
audio muze posted on Sun, 15 Nov 2015 05:27:00 +0200 as excerpted:
> I've gone ahead and created a single drive Btrfs filesystem on a 3TB
> drive and started copying content from a raid5 array to the Btrfs
> volume. Initially copy speeds were very good sustained at ~145MB/s and
> I left it to
On Sunday 15 November 2015 04:01:57 Duncan wrote:
>audio muze posted on Sun, 15 Nov 2015 05:27:00 +0200 as excerpted:
>> I've gone ahead and created a single drive Btrfs filesystem on a 3TB
>> drive and started copying content from a raid5 array to the Btrfs
>> volume. Initially copy speeds were
Hi,
On Fri, Nov 13, 2015 at 09:41:01AM -0800, Marc MERLIN wrote:
> root@polgara:/mnt/btrfs_root# du -sh *
> 28G   @
> 28G   @_hourly.20151113_08:04:01
> 4.0K  @_last
> 4.0K  @_last_rw
> 28G   @_rw.20151113_00:02:01
> root@polgara:/mnt/btrfs_root# df -h .
> Filesystem Size Used
Hi List,
I have read the Gotcha[1] page:
Files with a lot of random writes can become heavily fragmented
(10000+ extents) causing thrashing on HDDs and excessive multi-second
spikes of CPU load on systems with an SSD or **large amount of RAM**.
Why could a large amount of memory worsen the
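One plausible mechanism (an assumption on my part, not something the wiki page states): more RAM lets more dirty pages accumulate before writeback kicks in, so each flush hits the disk with a larger burst of scattered extents at once. A back-of-envelope sketch using the kernel's default vm.dirty_ratio of 20% (the RAM sizes are invented):

```shell
# Rough upper bound on how much dirty data one writeback burst may cover.
for ram_mib in 4096 32768; do
    echo "ram=${ram_mib}MiB dirty_burst=$((ram_mib * 20 / 100))MiB"
done
```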
It might be that your metadata is quite scattered, and if the 320GB is
an HDD and not an SSD, then this 11s is just what it takes.
Scattered metadata might be caused by the autodefrag mount option, I
think (and by the fs getting older and changing often).
What is the output of btrfs fi df / ?
You
Goffredo Baroncelli posted on Sat, 14 Nov 2015 12:09:21 +0100 as
excerpted:
> On 2015-11-14 11:43, audio muze wrote:
>> I can turn checksumming off given it's of no utility where a Btrfs
>> volume is comprised of a single device only?
>
> The checksums are used to detect data corruption; in
On Sat, Nov 14, 2015 at 10:11:31PM +0800, CHENG Yuk-Pong, Daniel wrote:
> Hi List,
>
>
> I have read the Gotcha[1] page:
>
>Files with a lot of random writes can become heavily fragmented
> (10000+ extents) causing thrashing on HDDs and excessive multi-second
> spikes of CPU load on systems
Hugo Mills posted on Sat, 14 Nov 2015 14:31:12 +0000 as excerpted:
>> I have read the Gotcha[1] page:
>>
>>Files with a lot of random writes can become heavily fragmented
>> (10000+ extents) causing thrashing on HDDs and excessive multi-second
>> spikes of CPU load on systems with an SSD or
On 2015/11/14 10:29, Christoph Anton Mitterer wrote:
On Sat, 2015-11-14 at 09:22 +0800, Qu Wenruo wrote:
Manually checked they all.
thanks a lot :-)
Strangely, they are all OK... although it's good news for you.
Oh man... you're so mean ;-D
They are all tree blocks and are all in