Re: discard synchronous on most SSDs?

2014-03-15 Thread Marc MERLIN
On Sat, Mar 15, 2014 at 11:26:27AM +, Duncan wrote:
> Chris Samuel posted on Sat, 15 Mar 2014 17:48:56 +1100 as excerpted:
> 
> > $ sudo smartctl --identify /dev/sdb | fgrep 'Trim bit in DATA SET
> > MANAGEMENT'
> >  169  0  1   Trim bit in DATA SET MANAGEMENT command
> >  supported
> > $
> > 
> > If that command returns nothing then it's not reported as supported (and
> > I've tested that).  You can get the same info with hdparm -I.
> 
> > My puzzle now is that I have two SSD drives that report supporting NCQ
> > TRIM (one confirmed via product info) but report only supporting SATA
> > 3.0 not 3.1.
> 
> My SATA 2.5 SSDs, reported on earlier, report support for it too, so it's 
> apparently not SATA 3.1 limited.  (Note that I'm simply grepping word 169 
> in the command below, since word 169 is trim support...)
> 
> sudo smartctl --identify /dev/sda | grep '^ 169'
>  169  - 0x0001   Data Set Management support
>  169  0  1   Trim bit in DATA SET MANAGEMENT command supported
> 
> Either that or that feature bit simply indicates trim support, not NCQ 
> trim support.

Mmmh, so now I'm confused.

See this:

=== START OF INFORMATION SECTION ===
Device Model:     INTEL SSDSC2BW180A3L
Serial Number:    CVCV215200XU180EGN
LU WWN Device Id: 5 001517 bb28c5317
Firmware Version: LE1i
User Capacity:    180,045,766,656 bytes [180 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 3.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Mar 15 15:49:06 2014 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

polgara:/usr/src# smartctl --identify /dev/sda | grep '^ 169'
 169  - 0x0001   Data Set Management support
 169  0  1   Trim bit in DATA SET MANAGEMENT command supported

This is a super old SSD from 3 years ago. Clearly it can't support
synchronous discard, right?

Yet, deleting a kernel tree also takes 1.5 seconds:
polgara:/usr/src# time rm -rf linux-3.14-rc5/
real    0m1.441s
user    0m0.048s
sys     0m1.352s


So maybe it's not the SATA level that matters, but just the value of word 169?

Either way, this SSD is more than 2 years old, maybe 3 actually.
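
(For reference, the comparison boils down to something like this -- the
mount point and paths are only illustrative:)

mount -o remount,discard /mnt/btrfs
tar xf linux-3.14-rc5.tar.xz -C /mnt/btrfs/src && sync
time rm -rf /mnt/btrfs/src/linux-3.14-rc5

mount -o remount,nodiscard /mnt/btrfs
tar xf linux-3.14-rc5.tar.xz -C /mnt/btrfs/src && sync
time rm -rf /mnt/btrfs/src/linux-3.14-rc5

# If the discard run is dramatically slower (tens of seconds rather than
# ~1.5s), the drive is doing non-queued, i.e. synchronous, TRIM.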

Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901


Re: [PATCH 1/2] btrfs: Cleanup the btrfs_workqueue related function type

2014-03-15 Thread quwen...@cn.fujitsu.com
On Fri, 14 Mar 2014 15:39:03 +0100, David Sterba wrote:
> On Thu, Mar 06, 2014 at 04:19:50AM +, quwen...@cn.fujitsu.com wrote:
>> @@ -23,11 +23,13 @@
>>   struct btrfs_workqueue;
>>   /* Internal use only */
>>   struct __btrfs_workqueue;
>> +struct btrfs_work;
>> +typedef void (*btrfs_func_t)(struct btrfs_work *arg);
> I don't see what's wrong with the non-typedef type, CodingStyle
> discourages from using typedefs in general (Chapter 5).
>
> The name btrfs_func_t is a generic, if you really need to use a typedef
> here, please change it to something closer to the workqueues, eg.
> btrfs_work_func_t.
>
btrfs_func_t just follows the work_func_t naming style;
in btrfs it's only used to reduce the length of the prototype definitions.

Since btrfs_func_t is only used in btrfs, this patch can be ignored.

Thanks
Qu

Re: discard synchronous on most SSDs?

2014-03-15 Thread Chris Samuel

On 15/03/14 22:26, Duncan wrote:
> Either that or that feature bit simply indicates trim support, not NCQ
> trim support.

You're quite right.  I outsmarted myself: I noticed that the kernel tests 
for ATA_LOG_NCQ_SEND_RECV_DSM_TRIM and unsets it for drives that don't 
support NCQ DSM TRIM, then saw DSM TRIM in the SATA 3.1 spec and inferred 
the two were the same thing.


Looking closer at the kernel code that decides which TRIM to use: it tests 
ATA_LOG_NCQ_SEND_RECV_DSM_TRIM and falls back to ATA_DSM_TRIM if the drive 
can't do the NCQ version.


Mea culpa!

--
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC


Re: Snapper on Ubuntu

2014-03-15 Thread David Disseldorp
On Sat, 15 Mar 2014 18:47:49 +0100
Hendrik Friedel  wrote:

> > I think you may have forgotten to specify the config snapper is supposed
> > to use. Try
> >
> > # snapper -c home create
> > # snapper -c Video create  
> 
> Thanks, that was it. I would have expected an Error-Message though...

Snapper uses the "root" config by default. /root snapshots were created
successfully when you ran "snapper create" without specifying an explicit
config.
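
(To double-check which configs exist and where the snapshots ended up --
the config names are the ones from your setup, the rest is plain snapper
usage:)

# snapper list-configs
# snapper -c home list
# snapper -c Video list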

Cheers, David


Re: Snapper on Ubuntu

2014-03-15 Thread Hendrik Friedel

> I think you may have forgotten to specify the config snapper is supposed
> to use. Try
>
> # snapper -c home create
> # snapper -c Video create


Thanks, that was it. I would have expected an Error-Message though...

Greetings,
Hendrik


Re: Incremental backup for a raid1

2014-03-15 Thread George Mitchell
Michael,  I am currently using rsync INSTEAD of btrfs backup tools.  I 
really don't see any way that it could be compatible with the backup 
features of btrfs.  As I noted in my post, it is definitely not a 
perfect solution, but it is doing the job for me.  What I REALLY want in 
this regard is n-way mirroring to get me out of the simplex trap 
completely.  At that point, I can have more confidence in btrfs snapshot 
capability.


On 03/15/2014 04:35 AM, Michael Schuerig wrote:
> On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
>> I currently use rsync to a separate drive to maintain a
>> backup copy, but it is not integrated into the array like n-way would
>> be, and is definitely not a perfect solution.
>
> Could you explain how you're using rsync? I was just about to copy a
> btrfs filesystem to another disk. That filesystem has several subvolumes
> and about 100 snapshots overall. Owing to COW, this amounts to about
> 1.2TB. However, I reckon that rsync doesn't know anything about COW and
> accordingly would blow up my data immensely on the destination disk.
>
> How do I copy a btrfs filesystem preserving its complete contents? How
> do I update such a copy?
>
> Yes, I want to keep the subvolume layout of the original and I want to
> copy all snapshots. I don't think send/receive is the answer, but it's
> likely I don't understand it well enough. I'm concerned that a
> send/receive-based approach is not robust against mishaps.
>
> Consider: I want to incrementally back up a filesystem to two external
> disks. For this I'd have to keep, for each subvolume, a snapshot
> corresponding to its state on the backup disk. If I make any mistake in
> managing these snapshots, I can't update the external backup anymore.
>
> Also, I don't understand whether send/receive would allow me to
> copy/update a subvolume *including* its snapshots.
>
> Things have become a little more complicated than I had hoped for, but
> I've only been using btrfs for a couple of weeks.
>
> Michael





Re: Snapper on Ubuntu

2014-03-15 Thread Michael Schuerig
On Saturday 15 March 2014 15:05:22 Hendrik Friedel wrote:
> Hello,
> 
> I am not sure, whether this is the right place to ask this question
> -if not, please advise.
> 
> Ubuntu installs on btrfs, creating subvolumes for the homes (/home),
> the root home (/root) and the root (/) named @home, @root and @
> respectively.
> 
> When I install snapper I configure it like this
> snapper -c rt create-config /
> snapper -c home create-config /home
> snapper -c root create-config /root
> snapper -c Video create-config /mnt/BTRFS/Video/
> 
> After executing snapper create several times, this results in
> 
> #btrfs subvolume list /
> ID 258 gen 2615 top level 5 path @
> ID 259 gen 2611 top level 5 path @root
> ID 260 gen 2555 top level 5 path @home
> ID 281 gen 2555 top level 5 path @home/.snapshots
> ID 282 gen 2606 top level 5 path @root/.snapshots
> ID 283 gen 2562 top level 5 path @root/.snapshots/1/snapshot
> ID 284 gen 2563 top level 5 path @root/.snapshots/2/snapshot
> ID 285 gen 2573 top level 5 path @root/.snapshots/3/snapshot
[...]
> So, this all works for @root only, not for the other subvolumes.
> 
> Do you have any suggestions, how to find the cause?

I think you may have forgotten to specify the config snapper is supposed 
to use. Try

# snapper -c home create
# snapper -c Video create

Michael

-- 
Michael Schuerig
mailto:mich...@schuerig.de
http://www.schuerig.de/michael/



Snapper on Ubuntu

2014-03-15 Thread Hendrik Friedel

Hello,

I am not sure whether this is the right place to ask this question - if 
not, please advise.


Ubuntu installs on btrfs, creating subvolumes for the homes (/home), the 
root home (/root) and the root (/) named @home, @root and @ respectively.


When I install snapper I configure it like this
snapper -c rt create-config /
snapper -c home create-config /home
snapper -c root create-config /root
snapper -c Video create-config /mnt/BTRFS/Video/

After executing snapper create several times, this results in

#btrfs subvolume list /
ID 258 gen 2615 top level 5 path @
ID 259 gen 2611 top level 5 path @root
ID 260 gen 2555 top level 5 path @home
ID 281 gen 2555 top level 5 path @home/.snapshots
ID 282 gen 2606 top level 5 path @root/.snapshots
ID 283 gen 2562 top level 5 path @root/.snapshots/1/snapshot
ID 284 gen 2563 top level 5 path @root/.snapshots/2/snapshot
ID 285 gen 2573 top level 5 path @root/.snapshots/3/snapshot
ID 286 gen 2577 top level 5 path @root/.snapshots/4/snapshot
ID 287 gen 2582 top level 5 path @root/.snapshots/5/snapshot
ID 288 gen 2583 top level 5 path @root/.snapshots/6/snapshot
ID 290 gen 2605 top level 258 path .snapshots
ID 291 gen 2599 top level 5 path @root/.snapshots/7/snapshot
ID 292 gen 2600 top level 5 path @root/.snapshots/8/snapshot
ID 293 gen 2605 top level 5 path @root/.snapshots/9/snapshot


#btrfs subvolume list /mnt/BTRFS/Video/
ID 258 gen 4560 top level 5 path Video
ID 259 gen 4557 top level 258 path VDR
ID 275 gen 672 top level 258 path Filme
ID 284 gen 816 top level 258 path Homevideo
ID 288 gen 1048 top level 258 path VideoSchnitt
ID 1874 gen 1288 top level 5 path rsnapshot
ID 1875 gen 4245 top level 5 path backups
ID 2265 gen 4560 top level 258 path .snapshots

So, this all works for @root only, not for the other subvolumes.

Do you have any suggestions, how to find the cause?

Regards,
Hendrik


Re: [PATCH] Btrfs: remove transaction from send

2014-03-15 Thread Hugo Mills
On Fri, Mar 14, 2014 at 10:44:04PM +, Hugo Mills wrote:
> On Fri, Mar 14, 2014 at 02:51:22PM -0400, Josef Bacik wrote:
> > On 03/13/2014 06:16 PM, Hugo Mills wrote:
> > >On Thu, Mar 13, 2014 at 03:42:13PM -0400, Josef Bacik wrote:
> > >>Lets try this again.  We can deadlock the box if we send on a box and try 
> > >>to
> > >>write onto the same fs with the app that is trying to listen to the send 
> > >>pipe.
> > >>This is because the writer could get stuck waiting for a transaction 
> > >>commit
> > >>which is being blocked by the send.  So fix this by making sure looking 
> > >>at the
> > >>commit roots is always going to be consistent.  We do this by keeping 
> > >>track of
> > >>which roots need to have their commit roots swapped during commit, and 
> > >>then
> > >>taking the commit_root_sem and swapping them all at once.  Then make sure 
> > >>we
> > >>take a read lock on the commit_root_sem in cases where we search the 
> > >>commit root
> > >>to make sure we're always looking at a consistent view of the commit 
> > >>roots.
> > >>Previously we had problems with this because we would swap a fs tree 
> > >>commit root
> > >>and then swap the extent tree commit root independently which would cause 
> > >>the
> > >>backref walking code to screw up sometimes.  With this patch we no longer
> > >>deadlock and pass all the weird send/receive corner cases.  Thanks,
> > >
> > >There's something still going on here. I managed to get about twice
> > >as far through my test as I had before, but I again got an "unexpected
> > >EOF in stream", with btrfs send returning 1. As before, I have this in
> > >syslog:
> > >
> > >Mar 13 22:09:12 s_src@amelia kernel: BTRFS error (device sda2): did not 
> > >find backref in send_root. inode=1786631, offset=825257984, 
> > >disk_byte=36504023040 found extent=36504023040\x0a
> > >
> > 
> > I just noticed that the offset you have there is freaking gigantic,
> > like 700mb, which is way larger than what an extent should be.  Here
> > is a newer debug patch, just chuck the old one and put this instead
> > and re-run
> > 
> > http://paste.fedoraproject.org/85486/39482301
> 
>That last run, with the above patch, failed again, at approximately
> the same place again. The only output in dmesg is:
> 
> [ 6488.168469] BTRFS error (device sda2): did not find backref in send_root. 
> inode=1786631, offset=825257984, disk_byte=36504023040 found 
> extent=36504023040, len=1294336

root@amelia:~# btrfs insp ino 1786631 /
//srv/vm/armand.img
root@amelia:~# ls -l /srv/vm/armand.img 
-rw-rw-r-- 1 root kvm 40 Jan 30 08:11 /srv/vm/armand.img
root@amelia:~# filefrag /srv/vm/armand.img
/srv/vm/armand.img: 17436 extents found

   This is a VM image, not currently operational. It probably has
sparse extents in it somewhere.

   The full filefrag -ev output is at [1], but the offset it's
complaining about is 825257984 = 201479 4k blocks:

 ext: logical_offset:physical_offset: length:   expected: flags:
17200:   201478..  201478:7220724..   7220724:  1:8923002:
17201:   201479..  201481:8912386..   8912388:  3:7220725:
17202:   201482..  201482:8923002..   8923002:  1:8912389:

   This seems unexceptional.
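
   (Sanity check on the arithmetic above -- the byte offset divided by the
4KiB block size:)

$ echo $((825257984 / 4096))
201479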

   Hugo.

[1] http://carfax.org.uk/files/temp/filefrag.txt

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- "Can I offer you anything? Tea? Seedcake? ---
 Glass of Amontillado?"  




[PATCH] Btrfs-progs: fsck: fix memory leak and unnecessary call to free

2014-03-15 Thread Rakesh Pandit
Free already allocated memory to item1_data if malloc fails for
item2_data in swap_values. Seems to be a typo from commit 70749a77.

Signed-off-by: Rakesh Pandit 
---
 cmds-check.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cmds-check.c b/cmds-check.c
index d1cafe1..60708d0 100644
--- a/cmds-check.c
+++ b/cmds-check.c
@@ -2380,7 +2380,7 @@ static int swap_values(struct btrfs_root *root, struct btrfs_path *path,
 		return -ENOMEM;
 	item2_data = malloc(item2_size);
 	if (!item2_data) {
-		free(item2_data);
+		free(item1_data);
 		return -ENOMEM;
 	}
 
-- 
1.8.5.3



Re: [PATCH] Btrfs: fix deadlock with nested trans handles

2014-03-15 Thread Duncan
Rich Freeman posted on Fri, 14 Mar 2014 18:40:25 -0400 as excerpted:

> And some more background.  I had more reboots over the next two days at
> the same time each day, just after my crontab successfully completed. 
> One of the last thing it does is runs the snapper cleanups which delete
> a bunch of snapshots.  During a reboot I checked and there were a bunch
> of deleted snapshots, which disappeared over the next 30-60 seconds
> before the panic, and then they would re-appear on the next reboot.
> 
> I disabled the snapper cron job and this morning had no issues at all.
>  One day isn't much to establish a trend, but I suspect that this is
> the cause.  Obviously getting rid of snapshots would be desirable at
> some point, but I can wait for a patch.  Snapper would be deleting about
> 48 snapshots at the same time, since I create them hourly and the
> cleanup occurs daily on two different subvolumes on the same filesystem.

Hi, Rich.  Imagine seeing you here! =:^)  (Note to others, I run gentoo 
and he's a gentoo dev, so we normally see each other on the gentoo 
lists.  But btrfs comes up occasionally there too, so we knew we were 
both running it, I'd just not noticed any of his posts here, previously.)

Three things:

1) Does running the snapper cleanup command from that cron job manually 
trigger the problem as well?

Presumably if you run it manually, you'll do so at a different time of 
day, thus eliminating the possibility that it's a combination of that and 
something else occurring at that specific time, as well as confirming 
that it is indeed the snapper cleanup that triggers it.

2) What about modifying the cron job to run hourly, or perhaps every six 
hours, so it's deleting only 2 or 12 instead of 48 at a time?  Does that 
help?

If so then it's a thundering herd problem.  While definitely still a bug, 
you'll at least have a workaround until it's fixed. 

3) I'd be wary of letting too many snapshots build up.  A couple hundred 
shouldn't be a huge issue, but particularly when snapshot-aware defrag 
was still enabled, people were reporting problems with thousands of 
snapshots, so I'd recommend trying to keep it under 500 or so per subvol 
(so under 1000 total, since you're snapshotting two different subvols).

So an hourly cron job deleting, or at least thinning down, snapshots over 
say 2 days old, possibly in the same cron job that creates the new snaps, 
might be a good idea.  That'd only delete two at a time, the same rate 
they're created, but with a 48-hour set of snaps kept before deletion.
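
(Purely as an illustration of that rotation -- a sketch using plain
read-only btrfs snapshots rather than snapper, with made-up paths, run
hourly from cron:)

#!/bin/bash
# take one read-only snapshot per run, drop anything older than 48 hours
SNAPDIR=/mnt/pool/.snaps            # hypothetical snapshot directory
SRC=/mnt/pool/home                  # hypothetical subvolume to snapshot
NOW=$(date +%Y%m%d-%H%M)
CUTOFF=$(date -d '2 days ago' +%Y%m%d-%H%M)

btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/home-$NOW"

for snap in "$SNAPDIR"/home-*; do
    stamp=$(basename "$snap"); stamp=${stamp#home-}
    # timestamped names sort lexically, so a plain string compare works
    if [[ "$stamp" < "$CUTOFF" ]]; then
        btrfs subvolume delete "$snap"
    fi
done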

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Incremental backup for a raid1

2014-03-15 Thread Hugo Mills
On Sat, Mar 15, 2014 at 12:35:30PM +0100, Michael Schuerig wrote:
> On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
> > I currently use rsync to a separate drive to maintain a 
> > backup copy, but it is not integrated into the array like n-way would 
> > be, and is definitely not a perfect solution.
> 
> Could you explain how you're using rsync? I was just about to copy a 
> btrfs filesystem to another disk. That filesystem has several subvolumes 
> and about 100 snapshots overall. Owing to COW, this amounts to about 
> 1.2TB. However, I reckon that rsync doesn't know anything about COW and 
> accordingly would blow up my data immensely on the destination disk.
> 
> How do I copy a btrfs filesystem preserving its complete contents? How 
> do I update such a copy?
> 
> Yes, I want to keep the subvolume layout of the original and I want to 
> copy all snapshots. I don't think send/receive is the answer, but it's 
> likely I don't understand it well enough. I'm concerned that a 
> send/receive-based approach is not robust against mishaps.

   send/receive is the answer, but it's going to be a bit more
complicated to manage *all* of the snapshots. (Questions -- do you
actually need them all backed up? Can you instead do incremental
backups of the "main" subvol and keep each of those independently on
the backup machine instead?)

> Consider: I want to incrementally back up a filesystem to two external 
> disks. For this I'd have to keep, for each subvolume, a snapshot 
> corresponding to its state on the backup disk. If I make any mistake in 
> managing these snapshots, I can't update the external backup anymore.

   Correct (I got bitten by this last week with my fledgling backup
process). You need a place that stores the "current state" subvolumes
that's not going to be touched by anything else, and you can't clean
up any given base until you're certain that there's a good new one
available on both sides. One thing that helps here is that send
requires the snapshot being sent to be marked read-only, so it's not
possible to change it at all -- but you can delete them.

> Also, I don't understand whether send/receive would allow me to 
> copy/update a subvolume *including* its snapshots.

   Snapshots aren't owned by subvolumes. Once you've made a snapshot,
that snapshot is a fully equal partner of the subvol that it was a
snapshot of -- there is no hierarchy of ownership. This means that you
will have to send each snapshot independently.

   What send allows you to do is to specify that one or more
subvolumes on the send side can be assumed to exist on the receive
side (via -p and -c). If you do that, the stream can then use them as
clone sources (i.e. should make shared CoW copies from them, rather
than sending all the data).
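
   (A minimal sketch of that cycle, for reference -- the paths and snapshot
names are invented, only the send/receive options are real:)

# one-time: read-only baseline snapshot, sent in full
btrfs subvolume snapshot -r /mnt/data/home /mnt/data/home.2014-03-14
btrfs send /mnt/data/home.2014-03-14 | btrfs receive /mnt/backup/

# each later run: new read-only snapshot, send only the delta vs. the base
btrfs subvolume snapshot -r /mnt/data/home /mnt/data/home.2014-03-15
btrfs send -p /mnt/data/home.2014-03-14 /mnt/data/home.2014-03-15 \
    | btrfs receive /mnt/backup/

# only once the new snapshot exists on *both* sides is it safe to delete
# the old base on the source; the backup side keeps every received copy
btrfs subvolume delete /mnt/data/home.2014-03-14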

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- ...  one ping(1) to rule them all, and in the ---  
 darkness bind(2) them.  




[PATCH] Btrfs-progs: return with -ENOMEM if malloc fails

2014-03-15 Thread Rakesh Pandit
Prevent segfault if memory allocation fails for sargs in get_df
(cmds-filesystem.c).

Signed-off-by: Rakesh Pandit 
---
 cmds-filesystem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index c9e27fc..7eb6e9d 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -146,7 +146,7 @@ static int get_df(int fd, struct btrfs_ioctl_space_args **sargs_ret)
 	sargs = malloc(sizeof(struct btrfs_ioctl_space_args) +
 			(count * sizeof(struct btrfs_ioctl_space_info)));
 	if (!sargs)
-		ret = -ENOMEM;
+		return -ENOMEM;
 
 	sargs->space_slots = count;
 	sargs->total_spaces = 0;
-- 
1.8.5.3



Re: Incremental backup for a raid1

2014-03-15 Thread Michael Schuerig
On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
> I currently use rsync to a separate drive to maintain a 
> backup copy, but it is not integrated into the array like n-way would 
> be, and is definitely not a perfect solution.

Could you explain how you're using rsync? I was just about to copy a 
btrfs filesystem to another disk. That filesystem has several subvolumes 
and about 100 snapshots overall. Owing to COW, this amounts to about 
1.2TB. However, I reckon that rsync doesn't know anything about COW and 
accordingly would blow up my data immensely on the destination disk.

How do I copy a btrfs filesystem preserving its complete contents? How 
do I update such a copy?

Yes, I want to keep the subvolume layout of the original and I want to 
copy all snapshots. I don't think send/receive is the answer, but it's 
likely I don't understand it well enough. I'm concerned that a 
send/receive-based approach is not robust against mishaps.

Consider: I want to incrementally back up a filesystem to two external 
disks. For this I'd have to keep, for each subvolume, a snapshot 
corresponding to its state on the backup disk. If I make any mistake in 
managing these snapshots, I can't update the external backup anymore.

Also, I don't understand whether send/receive would allow me to 
copy/update a subvolume *including* its snapshots.

Things have become a little more complicated than I had hoped for, but 
I've only been using btrfs for a couple of weeks.

Michael

-- 
Michael Schuerig
mailto:mich...@schuerig.de
http://www.schuerig.de/michael/



Re: discard synchronous on most SSDs?

2014-03-15 Thread Duncan
Chris Samuel posted on Sat, 15 Mar 2014 17:48:56 +1100 as excerpted:

> $ sudo smartctl --identify /dev/sdb | fgrep 'Trim bit in DATA SET
> MANAGEMENT'
>  169  0  1   Trim bit in DATA SET MANAGEMENT command
>  supported
> $
> 
> If that command returns nothing then it's not reported as supported (and
> I've tested that).  You can get the same info with hdparm -I.

> My puzzle now is that I have two SSD drives that report supporting NCQ
> TRIM (one confirmed via product info) but report only supporting SATA
> 3.0 not 3.1.

My SATA 2.5 SSDs, reported on earlier, report support for it too, so it's 
apparently not SATA 3.1 limited.  (Note that I'm simply grepping word 169 
in the command below, since word 169 is trim support...)

sudo smartctl --identify /dev/sda | grep '^ 169'
 169  - 0x0001   Data Set Management support
 169  0  1   Trim bit in DATA SET MANAGEMENT command supported

Either that or that feature bit simply indicates trim support, not NCQ 
trim support.

But it can be noted that if SATA 3.1 requires trim to be NCQ whenever it's 
supported at all (spinning rust would thus get a pass), then claiming 3.1 
support as well as trim support should be the equivalent of claiming NCQ 
trim support -- likely with no indicator, pre-3.1, of whether that trim 
support is NCQ or not.

... Which would mean that my SATA 2.5 and your SATA 3.0 drives are simply 
indicating trim support, not specifically NCQ trim support.

I guess you'd have to check the SATA 2.5 and 3.0 specs to find that out.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: discard synchronous on most SSDs?

2014-03-15 Thread Holger Hoffstätte
On Fri, 14 Mar 2014 21:21:16 -0700, Marc MERLIN wrote:

> On Fri, Mar 14, 2014 at 08:46:09PM +, Holger Hoffstätte wrote:
>> On Fri, 14 Mar 2014 15:57:41 -0400, Martin K. Petersen wrote:
>> 
>> > So right now I'm afraid we don't have a good way for a user to
>> > determine whether a device supports queued trims or not.
>> 
>> Mount with discard, unpack kernel tree, sync, rm -rf tree.
>> If it takes several seconds, you have sync discard, no?
> 
> Mmmh, interesting point.
> 
> legolas:/usr/src# time rm -rf linux-3.14-rc5
> real    0m1.584s
> user    0m0.008s
> sys     0m1.524s
> 
> I remounted my FS with remount,nodiscard, and the time was the same.
> 
>> This changed somewhere around kernel 3.8.x; before that it used to be
>> acceptably fast. Since then I only do batch trims, daily (server) or
>> weekly (laptop).
> 
> I'm never really timed this before. Is it supposed to be faster than
> 1.5s on a fast SSD?

No, ~1s + noise is OK and seems normal, depending on filesystem and
phase of the moon. To contrast, here is the output from my laptop,
which has an old but still-going-strong Intel G2 with ext4:

$smartctl -i /dev/sda | grep ATA
ATA Version is:   ATA/ATAPI-7 T13/1532D revision 1
SATA Version is:  SATA 2.6, 3.0 Gb/s

without discard:
rm -rf linux-3.12.14  0.05s user 1.28s system 98% cpu 1.364 total

remounted with discard & after an initial manual fstrim:
rm -rf linux-3.12.14  1.90s user 0.02s system 2% cpu 1:07.45 total

I think these numbers speak for themselves. :)

It's really good to know that SATA 3.1 apparently fixed this.
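
(For the batch-trim approach mentioned above, a trivial cron script is
enough -- the mount points and schedule are only examples:)

#!/bin/sh
# e.g. /etc/cron.weekly/fstrim (daily on a busy server, weekly on a laptop)
fstrim -v /
fstrim -v /home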

cheers
Holger
