The current code has the following bugs in subvolume sync:
1: If there is more than one subvolume to sync, the program
   loops infinitely (a reproduction sketch follows below).
2: It exits with a non-zero return code in the normal case.
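
A minimal reproduction sketch of case 1 (assuming a freshly created
btrfs filesystem mounted at /mnt):

  $ btrfs subvolume create /mnt/subvol1
  $ btrfs subvolume create /mnt/subvol2
  $ btrfs subvolume delete /mnt/subvol1
  $ btrfs subvolume delete /mnt/subvol2
  $ btrfs subvolume sync /mnt   # hangs with the buggy code; should finish and exit 0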

This patch adds misc-tests/007-subvolume-sync to cover the above cases.

Signed-off-by: Zhao Lei <zhao...@cn.fujitsu.com>
---
 tests/misc-tests/007-subvolume-sync/test.sh | 32 +++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
 create mode 100755 tests/misc-tests/007-subvolume-sync/test.sh

diff --git a/tests/misc-tests/007-subvolume-sync/test.sh b/tests/misc-tests/007-subvolume-sync/test.sh
new file mode 100755
index 0000000..d4019f4
--- /dev/null
+++ b/tests/misc-tests/007-subvolume-sync/test.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+# test that btrfs subvolume sync runs normally with more than one deleted subvolume
+#
+# - btrfs subvolume sync must not loop indefinitely
+# - btrfs subvolume sync must return 0 in the normal case
+
+source $TOP/tests/common
+
+check_prereq mkfs.btrfs
+check_prereq btrfs
+
+setup_root_helper
+prepare_test_dev
+
+run_check $SUDO_HELPER $TOP/mkfs.btrfs -f "$TEST_DEV"
+run_check $SUDO_HELPER mount "$TEST_DEV" "$TEST_MNT"
+
+# Check the following in both the single- and multiple-subvolume cases:
+# 1: subvolume sync must not loop indefinitely
+# 2: the return value must be correct
+#
+run_check $SUDO_HELPER $TOP/btrfs subvolume create "$TEST_MNT"/mysubvol1
+run_check $SUDO_HELPER $TOP/btrfs subvolume create "$TEST_MNT"/mysubvol2
+run_check $SUDO_HELPER $TOP/btrfs subvolume delete "$TEST_MNT"/mysubvol1
+run_check $SUDO_HELPER $TOP/btrfs subvolume delete "$TEST_MNT"/mysubvol2
+run_check $SUDO_HELPER $TOP/btrfs subvolume sync "$TEST_MNT"
+
+run_check $SUDO_HELPER $TOP/btrfs subvolume create "$TEST_MNT"/mysubvol
+run_check $SUDO_HELPER $TOP/btrfs subvolume delete "$TEST_MNT"/mysubvol
+run_check $SUDO_HELPER $TOP/btrfs subvolume sync "$TEST_MNT"
+
+run_check $SUDO_HELPER umount $TEST_MNT
-- 
1.8.5.1
