Re: Massive loss of disk space

2017-08-04 Thread Austin S. Hemmelgarn
On 2017-08-04 10:45, Goffredo Baroncelli wrote: On 2017-08-03 19:23, Austin S. Hemmelgarn wrote: On 2017-08-03 12:37, Goffredo Baroncelli wrote: On 2017-08-03 13:39, Austin S. Hemmelgarn wrote: [...] Also, as I said below, _THIS WORKS ON ZFS_. That immediately means that a CoW filesystem

Re: Massive loss of disk space

2017-08-04 Thread Goffredo Baroncelli
On 2017-08-03 19:23, Austin S. Hemmelgarn wrote: > On 2017-08-03 12:37, Goffredo Baroncelli wrote: >> On 2017-08-03 13:39, Austin S. Hemmelgarn wrote: [...] >>> Also, as I said below, _THIS WORKS ON ZFS_. That immediately means that a >>> CoW filesystem _does not_ need to behave like BTRFS does.

Re: Massive loss of disk space

2017-08-03 Thread pwm
In 30 seconds I should be able to fill about 200MB/s * 30s = 6GB. Requiring 6GB of additional headroom so that the parity file has room to grow is possible to live with on a 10TB disk. It seems that for SnapRAID to have any chance to work correctly with parity on a BTRFS partition, it would need

Re: Massive loss of disk space

2017-08-03 Thread Austin S. Hemmelgarn
On 2017-08-03 13:15, Marat Khalili wrote: On August 3, 2017 7:01:06 PM GMT+03:00, Goffredo Baroncelli wrote: The file is physically extended ghigo@venice:/tmp$ fallocate -l 1000 foo.txt For clarity let's replace the fallocate above with: $ head -c 1000 foo.txt ghigo@venice:/tmp$ ls -l foo.txt

Re: Massive loss of disk space

2017-08-03 Thread Austin S. Hemmelgarn
On 2017-08-03 12:37, Goffredo Baroncelli wrote: On 2017-08-03 13:39, Austin S. Hemmelgarn wrote: On 2017-08-02 17:05, Goffredo Baroncelli wrote: On 2017-08-02 21:10, Austin S. Hemmelgarn wrote: On 2017-08-02 13:52, Goffredo Baroncelli wrote: Hi, [...] consider the following scenario: a)

Re: Massive loss of disk space

2017-08-03 Thread Marat Khalili
On August 3, 2017 7:01:06 PM GMT+03:00, Goffredo Baroncelli wrote: >The file is physically extended > >ghigo@venice:/tmp$ fallocate -l 1000 foo.txt For clarity let's replace the fallocate above with: $ head -c 1000 foo.txt >ghigo@venice:/tmp$ ls -l foo.txt >-rw-r--r-- 1 ghigo ghigo 1000 Aug 3 18:00
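
A minimal C sketch of the distinction being discussed here (the path and size are illustrative, not taken from the thread): fallocate() grows both the reported file size and the allocated block count without writing a single byte, which fstat() makes visible by comparing st_size against st_blocks.

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/tmp/foo.txt", O_CREAT | O_RDWR, 0644);
      if (fd < 0) { perror("open"); return EXIT_FAILURE; }

      /* Reserve 1000 bytes, like `fallocate -l 1000 foo.txt`. */
      if (fallocate(fd, 0, 0, 1000) != 0) { perror("fallocate"); return EXIT_FAILURE; }

      /* Both the file size and the allocated block count have grown,
       * even though no data was ever written. */
      struct stat st;
      if (fstat(fd, &st) != 0) { perror("fstat"); return EXIT_FAILURE; }
      printf("size: %lld bytes, allocated: %lld bytes\n",
             (long long)st.st_size, (long long)st.st_blocks * 512);
      close(fd);
      return EXIT_SUCCESS;
  }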

Re: Massive loss of disk space

2017-08-03 Thread Goffredo Baroncelli
On 2017-08-03 13:39, Austin S. Hemmelgarn wrote: > On 2017-08-02 17:05, Goffredo Baroncelli wrote: >> On 2017-08-02 21:10, Austin S. Hemmelgarn wrote: >>> On 2017-08-02 13:52, Goffredo Baroncelli wrote: Hi, >> [...] >> consider the following scenario: a) create a 2GB file

Re: Massive loss of disk space

2017-08-03 Thread Goffredo Baroncelli
On 2017-08-03 13:44, Marat Khalili wrote: > On 02/08/17 20:52, Goffredo Baroncelli wrote: >> consider the following scenario: >> >> a) create a 2GB file >> b) fallocate -o 1GB -l 2GB >> c) write from 1GB to 3GB >> >> after b), the expectation is that c) always succeeds [1]: i.e. there is >> enough
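
Spelled out as a minimal C sketch (assuming [1] refers to the usual fallocate() contract that writes into a reserved range must not fail with ENOSPC; the mount point is illustrative):

  #define _GNU_SOURCE
  #define _FILE_OFFSET_BITS 64
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void) {
      const off_t GiB = (off_t)1 << 30;
      int fd = open("/mnt/test/bigfile", O_CREAT | O_RDWR, 0644);
      if (fd < 0) { perror("open"); return EXIT_FAILURE; }

      /* a) create a 2GB file (ftruncate used here for brevity; writing
       *    real data makes the later CoW point even stronger) */
      if (ftruncate(fd, 2 * GiB) != 0) { perror("ftruncate"); return EXIT_FAILURE; }

      /* b) fallocate -o 1GB -l 2GB: reserve the byte range 1GiB..3GiB */
      if (fallocate(fd, 0, 1 * GiB, 2 * GiB) != 0) {
          perror("fallocate"); return EXIT_FAILURE;
      }

      /* c) after b), writes anywhere in 1GiB..3GiB are expected to
       *    succeed without ENOSPC -- the guarantee being debated here
       *    (one page written as a stand-in for the full range) */
      char buf[4096];
      memset(buf, 0xAA, sizeof(buf));
      if (pwrite(fd, buf, sizeof(buf), 1 * GiB) < 0) { perror("pwrite"); }

      close(fd);
      return EXIT_SUCCESS;
  }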

Re: Massive loss of disk space

2017-08-03 Thread Austin S. Hemmelgarn
On 2017-08-03 07:44, Marat Khalili wrote: On 02/08/17 20:52, Goffredo Baroncelli wrote: consider the following scenario: a) create a 2GB file b) fallocate -o 1GB -l 2GB c) write from 1GB to 3GB after b), the expectation is that c) always succeeds [1]: i.e. there is enough space on the

Re: Massive loss of disk space

2017-08-03 Thread Marat Khalili
On 02/08/17 20:52, Goffredo Baroncelli wrote: consider the following scenario: a) create a 2GB file b) fallocate -o 1GB -l 2GB c) write from 1GB to 3GB after b), the expectation is that c) always succeeds [1]: i.e. there is enough space on the filesystem. Due to the COW nature of BTRFS, you

Re: Massive loss of disk space

2017-08-03 Thread Austin S. Hemmelgarn
On 2017-08-02 17:05, Goffredo Baroncelli wrote: On 2017-08-02 21:10, Austin S. Hemmelgarn wrote: On 2017-08-02 13:52, Goffredo Baroncelli wrote: Hi, [...] consider the following scenario: a) create a 2GB file b) fallocate -o 1GB -l 2GB c) write from 1GB to 3GB after b), the expectation

Re: Massive loss of disk space

2017-08-02 Thread Duncan
Goffredo Baroncelli posted on Wed, 02 Aug 2017 19:52:30 +0200 as excerpted: > it seems that BTRFS always allocates the maximum space required, without > considering the one already allocated. Is it too conservative? I think not: > consider the following scenario: > > a) create a 2GB file > b)
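
Worked through with the numbers from the scenario, the claim reads as follows: the file already occupies 2GB and the fallocate covers the 2GB range from 1GB to 3GB. Because CoW cannot promise to overwrite the existing 1GB of that range in place, BTRFS reserves the full 2GB requested as fresh space, so the call needs 2GB (existing) + 2GB (newly reserved) = 4GB available at that moment, even though the file will finally occupy only 3GB.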

Re: Massive loss of disk space

2017-08-02 Thread Goffredo Baroncelli
On 2017-08-02 21:10, Austin S. Hemmelgarn wrote: > On 2017-08-02 13:52, Goffredo Baroncelli wrote: >> Hi, >> [...] >> consider the following scenario: >> >> a) create a 2GB file >> b) fallocate -o 1GB -l 2GB >> c) write from 1GB to 3GB >> >> after b), the expectation is that c) always succeeds

Re: Massive loss of disk space

2017-08-02 Thread Austin S. Hemmelgarn
On 2017-08-02 13:52, Goffredo Baroncelli wrote: Hi, On 2017-08-01 17:00, Austin S. Hemmelgarn wrote: OK, I just did a dead simple test by hand, and it looks like I was right. The method I used to check this is as follows: 1. Create and mount a reasonably small filesystem (I used an 8G

Re: Massive loss of disk space

2017-08-02 Thread Goffredo Baroncelli
Hi, On 2017-08-01 17:00, Austin S. Hemmelgarn wrote: > OK, I just did a dead simple test by hand, and it looks like I was right. > The method I used to check this is as follows: > 1. Create and mount a reasonably small filesystem (I used an 8G temporary LV > for this, a file would work too

Re: Massive loss of disk space

2017-08-02 Thread Austin S. Hemmelgarn
On 2017-08-02 00:14, Duncan wrote: Austin S. Hemmelgarn posted on Tue, 01 Aug 2017 10:47:30 -0400 as excerpted: I think I _might_ understand what's going on here. Is that test program calling fallocate using the desired total size of the file, or just trying to allocate the range beyond the

Re: Massive loss of disk space

2017-08-01 Thread Duncan
Austin S. Hemmelgarn posted on Tue, 01 Aug 2017 10:47:30 -0400 as excerpted: > I think I _might_ understand what's going on here. Is that test program > calling fallocate using the desired total size of the file, or just > trying to allocate the range beyond the end to extend the file? I've >

Re: Massive loss of disk space

2017-08-01 Thread Austin S. Hemmelgarn
On 2017-08-01 12:50, pwm wrote: I did a temporary patch of the snapraid code to start fallocate() from the previous parity file size. Like I said though, it's BTRFS that's misbehaving here, not snapraid. I'm going to try to get some further discussion about this here on the mailing list, and

Re: Massive loss of disk space

2017-08-01 Thread pwm
I did a temporary patch of the snapraid code to start fallocate() from the previous parity file size. Finally have a snapraid sync up and running. Looks good, but will take quite a while before I can try a scrub command to double-check everything. Thanks for the help. /Per W On Tue, 1 Aug
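
A minimal sketch of what such a patch amounts to (the helper name and target size are illustrative, not snapraid's actual code): only the region beyond the current end of file is handed to fallocate(), so ranges that are already allocated are never requested again.

  #define _GNU_SOURCE
  #define _FILE_OFFSET_BITS 64
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Grow the file to new_size by reserving only the tail past the
   * current end, instead of re-requesting the whole 0..new_size range. */
  static int extend_from_current_size(int fd, off_t new_size) {
      struct stat st;
      if (fstat(fd, &st) != 0) return -1;
      if (new_size <= st.st_size) return 0;   /* nothing to extend */
      return fallocate(fd, 0, st.st_size, new_size - st.st_size);
  }

  int main(void) {
      int fd = open("/mnt/snap_04/snapraid.parity", O_NOFOLLOW | O_RDWR);
      if (fd < 0) { perror("open"); return 1; }
      if (extend_from_current_size(fd, (off_t)6 << 40) != 0)  /* e.g. 6TiB */
          perror("fallocate");
      close(fd);
      return 0;
  }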

Re: Massive loss of disk space

2017-08-01 Thread Austin S. Hemmelgarn
On 2017-08-01 11:24, pwm wrote: Yes, the test code is as below - trying to match what snapraid tries to do: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <unistd.h> #include <sys/stat.h> int main() { int fd = open("/mnt/snap_04/snapraid.parity",O_NOFOLLOW|O_RDWR); if (fd < 0) {

Re: Massive loss of disk space

2017-08-01 Thread pwm
Yes, the test code is as below - trying to match what snapraid tries to do: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <unistd.h> #include <sys/stat.h> int main() { int fd = open("/mnt/snap_04/snapraid.parity",O_NOFOLLOW|O_RDWR); if (fd < 0) { printf("Failed opening parity file
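
The listing above is truncated and the archive stripped the original header names, so a self-contained version has to fill in some blanks. In this sketch the restored header list, the target size, and everything past the truncation point are inferred, with the program requesting the desired total size from offset zero (the variant Duncan's follow-up question asks about):

  #define _GNU_SOURCE
  #define _FILE_OFFSET_BITS 64
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/stat.h>

  int main(void) {
      int fd = open("/mnt/snap_04/snapraid.parity", O_NOFOLLOW | O_RDWR);
      if (fd < 0) {
          printf("Failed opening parity file - %s\n", strerror(errno));
          return EXIT_FAILURE;
      }

      /* Request the full desired file size from offset zero, the way
       * snapraid does -- including ranges that are already allocated. */
      off_t total = (off_t)8 << 30;            /* assumed target: 8GiB */
      if (fallocate(fd, 0, 0, total) != 0) {
          printf("fallocate failed - %s\n", strerror(errno));
          close(fd);
          return EXIT_FAILURE;
      }
      close(fd);
      return EXIT_SUCCESS;
  }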

Re: Massive loss of disk space

2017-08-01 Thread Austin S. Hemmelgarn
On 2017-08-01 10:47, Austin S. Hemmelgarn wrote: On 2017-08-01 10:39, pwm wrote: Thanks for the links and suggestions. I did try your suggestions but it didn't solve the underlying problem. pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04 Dumping filters: flags 0x1, state

Re: Massive loss of disk space

2017-08-01 Thread Austin S. Hemmelgarn
On 2017-08-01 10:39, pwm wrote: Thanks for the links and suggestions. I did try your suggestions but it didn't solve the underlying problem. pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04 Dumping filters: flags 0x1, state 0x0, force is off DATA (flags 0x2): balancing,

Re: Massive loss of disk space

2017-08-01 Thread pwm
Thanks for the links and suggestions. I did try your suggestions but it didn't solve the underlying problem. pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04 Dumping filters: flags 0x1, state 0x0, force is off DATA (flags 0x2): balancing, usage=20 Done, had to relocate

Re: Massive loss of disk space

2017-08-01 Thread Hugo Mills
Hi, Per, Start here: https://btrfs.wiki.kernel.org/index.php/FAQ#if_your_device_is_large_.28.3E16GiB.29 In your case, I'd suggest using "-dusage=20" to start with, as it'll probably free up quite a lot of your existing allocation. And this may also be of interest, in how to read the
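
For context: the usage=20 filter restricts the balance to data chunks that are at most 20% full, so their extents are compacted into fewer chunks and the emptied chunks are returned to the unallocated pool - a cheap first step on a filesystem whose space is largely allocated but not actually used.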

Massive loss of disk space

2017-08-01 Thread pwm
I have a 10TB file system with a parity file for a snapraid. However, I suddenly cannot extend the parity file despite the file system only being about 50% filled - I should have 5TB of unallocated space. When trying to extend the parity file, fallocate() just returns ENOSPC, i.e. that the