[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
re-attaching the files

On Fri, Sep 22, 2023 at 5:25 PM Joseph Fernandes wrote:
> Hello All,
>
> I found a weird issue with ceph_readdirplus_r() when used along with
> ceph_ll_lookup_vino(), on ceph version 17.2.5
> (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable).
>
> Any help is really appreciated.
>
> Thanks in advance,
> -Joe
>
> Test scenario:
>
> A. Create a CephFS subvolume "4" and create a directory "user_root" in the
> root of the subvolume:
>
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23# ceph fs subvolume ls cephfs
> [
>     {
>         "name": "4"
>     }
> ]
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23# ls -l
> total 0
> drwxrwxrwx 2 root root 0 Sep 22 09:16 user_root
>
> B. In the "user_root" directory, create some files and directories:
>
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# mkdir dir1 dir2
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# ls
> dir1 dir2
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# echo "Hello Worldls!" > file1
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# echo "Hello Worldls!" > file2
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# ls
> dir1 dir2 file1 file2
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# cat file*
> Hello Worldls!
> Hello Worldls!
>
> C. Create a subvolume snapshot "sofs-4-5" (please ignore the older
> snapshots):
>
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23# ceph fs subvolume snapshot ls cephfs 4
> [
>     {
>         "name": "sofs-4-1"
>     },
>     {
>         "name": "sofs-4-2"
>     },
>     {
>         "name": "sofs-4-3"
>     },
>     {
>         "name": "sofs-4-4"
>     },
>     {
>         "name": "sofs-4-5"
>     }
> ]
>
> Here "sofs-4-5" has snapshot id 6. I got this from libcephfs and have
> verified it at snapshot_inode_lookup.cpp#L212 (attached to the email).
>
> # Content within the snapshot
>
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23# cd .snap/
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap# ls
> _sofs-4-1_1099511627778  _sofs-4-2_1099511627778  _sofs-4-3_1099511627778  _sofs-4-4_1099511627778  _sofs-4-5_1099511627778
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap# cd _sofs-4-5_1099511627778/
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778# ls
> user_root
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778# cd user_root/
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root# ls
> dir1 dir2 file1 file2
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root# cat file*
> Hello Worldls!
> Hello Worldls!
>
> D. Delete all the files and directories in "user_root":
>
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# rm -rf *
> root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root# ls
>
> E. Using libcephfs in a C++ program (attached to this email), do the
> following:
>
> 1. Get the Inode of "user_root" using ceph_ll_walk().
> 2. Open the directory using the Inode received from ceph_ll_walk() and do
>    ceph_readdirplus_r(). We don't see any dentries (except "." and "..")
>    as we have deleted all files and directories in the active filesystem.
>    This is expected and correct!
>
> =/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/=
>
> Path/Name : "/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/"
> Inode Address   : 0x7f5ce0009900
> Inode Number    : 1099511629282
> Snapshot Number : 18446744073709551614
[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Hi Joseph,

On Fri, Sep 22, 2023 at 5:27 PM Joseph Fernandes wrote:
> [...]
[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Hello Venky,

Nice to hear from you :) Hope you are doing well.

I tried as you suggested:

root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# mkdir dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# echo "Hello Worldls!" > file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# echo "Hello Worldls!" > file1
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# ls
dir1 dir2 file1 file2

Create a new snapshot called "sofs-4-6":

root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# ceph fs subvolume snapshot ls cephfs 4
[
    {
        "name": "sofs-4-1"
    },
    {
        "name": "sofs-4-2"
    },
    {
        "name": "sofs-4-3"
    },
    {
        "name": "sofs-4-4"
    },
    {
        "name": "sofs-4-5"
    },
    {
        "name": "sofs-4-6"
    }
]

And delete the files and directories from the active filesystem:

root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# rm -rf *
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# ls
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# cd .snap
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root/.snap# ls
_sofs-4-6_1099511627778
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root/.snap# cd _sofs-4-6_1099511627778/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root/.snap/_sofs-4-6_1099511627778# ls
dir1 dir2 file1 file2

Now I modified the program and executed it; I am getting the same result:

root@ss-joe-01(bash):/home/hydrauser# ./snapshot_inode_lookup

=/volumes/_nogroup/4/user_root/=

Path/Name : "/volumes/_nogroup/4/user_root/"
Inode Address   : 0x7f7c10008f00
Inode Number    : 1099511628293
Snapshot Number : 18446744073709551614
Inode Number    : 1099511628293
Snapshot Number : 18446744073709551614
.   Ino: 1099511628293 SnapId: 18446744073709551614 Address: 0x7f7c10008f00
..  Ino: 1099511627778 SnapId: 18446744073709551614 Address: 0x7f7c100086f0

=1099511628293:7=

Path/Name : "1099511628293:7"
Inode Address   : 0x7f7c10009710
Inode Number    : 1099511628293
Snapshot Number : 7
Inode Number    : 1099511628293
Snapshot Number : 7
.   Ino: 1099511628293 SnapId: 7 Address: 0x7f7c10009710
..  Ino: 1099511628293 SnapId: 7 Address: 0x7f7c10009710

=/volumes/_nogroup/4/user_root/.snap/_sofs-4-6_1099511627778=

Path/Name : "/volumes/_nogroup/4/user_root/.snap/_sofs-4-6_1099511627778"
Inode Address   : 0x7f7c10009710
Inode Number    : 1099511628293
Snapshot Number : 7
Inode Number    : 1099511628293
Snapshot Number : 7
.     Ino: 1099511628293 SnapId: 7 Address: 0x7f7c10009710
..    Ino: 1099511628293 SnapId: 18446744073709551615 Address: 0x55efc15b4640
file1 Ino: 1099511628297 SnapId: 7 Address: 0x7f7c1000a030
dir1  Ino: 1099511628294 SnapId: 7 Address: 0x7f7c1000a720
dir2  Ino: 1099511628295 SnapId: 7 Address: 0x7f7c1000ada0
file2 Ino: 1099511628296 SnapId: 7 Address: 0x7f7c1000b420

=1099511628293:7=

Path/Name : "1099511628293:7"
Inode Address   : 0x7f7c10009710
Inode Number    : 1099511628293
Snapshot Number : 7
Inode Number    : 1099511628293
Snapshot Number : 7
.     Ino: 1099511628293 SnapId: 7 Address: 0x7f7c10009710
..    Ino: 1099511628293 SnapId: 18446744073709551615 Address: 0x55efc15b4640
file1 Ino: 1099511628297 SnapId: 7 Address: 0x7f7c1000a030
dir1  Ino: 1099511628294 SnapId: 7 Address: 0x7f7c1000a720
dir2  Ino: 1099511628295 SnapId: 7 Address: 0x7f7c1000ada0
file2 Ino: 1099511628296 SnapId: 7 Address: 0x7f7c1000b420

root@ss-joe-01(bash):/home/hydrauser#

I have attached the modified program and ceph client logs from this run.

Cheers,
Joe

On Fri, Sep 22, 2023 at 8:54 PM Venky Shankar wrote:
> [...]
[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
CC ceph-users (apologies, I forgot last time I replied)

On Tue, Sep 26, 2023 at 1:11 PM Joseph Fernandes wrote:
> Hello Venky,
>
> Did you get a chance to look into the updated program? Am I missing
> something? I suppose it's something trivial I am missing, as I see these
> APIs used in NFS Ganesha as well, and they would have been tested there:
>
> https://github.com/nfs-ganesha/nfs-ganesha/blob/2a57b6d53295426247b200cd100ba0741b12aff9/src/FSAL/FSAL_CEPH/export.c#L322
>
> Thanks,
> ~Joe
>
> On Fri, Sep 22, 2023 at 10:52 PM Joseph Fernandes wrote:
>> [...]