Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released
Also forgot to mention that it's impossible to change any volume settings while 3.4 clients are attached, but I can unmount them all, change the setting, and then mount them all again.

# gluster vol set data2 nfs.disable off
volume set: failed: Staging failed on nas5-10g. Error: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again

On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote:

OK. I've just unmounted the data2 volume from a machine called tape1, and am now trying to remount - it's hanging.

/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log

On the server:

# gluster vol status
Status of volume: data2
Gluster process                  Port   Online  Pid
---------------------------------------------------
Brick nas5-10g:/data17/gvol      49152  Y       6553
Brick nas5-10g:/data18/gvol      49153  Y       6564
Brick nas5-10g:/data19/gvol      49154  Y       6575
Brick nas5-10g:/data20/gvol      49155  Y       6586
Brick nas6-10g:/data21/gvol      49160  Y       20608
Brick nas6-10g:/data22/gvol      49161  Y       20613
Brick nas6-10g:/data23/gvol      49162  Y       20614
Brick nas6-10g:/data24/gvol      49163  Y       20621

Task Status of Volume data2
---------------------------------------------------
There are no active volume tasks

Sending the SIGUSR1 killed the mount processes and I don't see any state dumps. What path should they be in? I'm running Gluster installed via rpm and I don't see a /var/run/gluster directory.

On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote:

Franco,

When your clients perceive a hang, could you check the status of the bricks by running:

# gluster volume status VOLNAME
(run this on one of the 'server' machines in the cluster.)

Could you also provide the statedump of the client(s), by issuing the following command:

# kill -SIGUSR1 pid-of-mount-process
(run this on the 'client' machine.)

This would dump the state information of the client (the file operations in progress, memory consumed, etc.) onto a file under $INSTALL_PREFIX/var/run/gluster.
Please attach this file in your response.

thanks,
Krish

- Original Message -

Hi

My clients are running 3.4.1. When I try to mount from lots of machines simultaneously, some of the mounts hang. Stopping and starting the volume clears the hung mounts.

Errors in the client logs:

[2014-05-28 01:47:15.930866] E [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.

Let me know if you want more information.

Cheers,

On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:

On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:

SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz

This beta release is intended to verify the changes that should resolve the bugs listed below. We appreciate tests done by anyone. Please leave a comment in the respective bug report with a short description of the success or failure. Visiting one of the bug reports is as easy as opening the bugzilla.redhat.com/$BUG URL; for the first in the list, this results in http://bugzilla.redhat.com/765202 .

Bugs expected to be fixed (31 in total since 3.5.0):

#765202 - lgetxattr called with invalid keys on the bricks
#833586 - inodelk hang from marker_rename_release_newp_lock
#859581 - self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
#986429 - Backupvolfile server option should work internal to GlusterFS framework
#1039544 - [FEAT] gluster volume heal info should list the entries that actually required to be healed.
#1046624 - Unable to heal symbolic Links
#1046853 - AFR : For every file self-heal there are warning messages reported in glustershd.log file
#1063190 - [RHEV-RHS] Volume was not accessible after server side quorum was met
#1064096 - The old Python Translator code (not Glupy) should be removed
#1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain
#1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with open(), seek(), write()
#1078061 - Need ability to heal mismatching user extended attributes without any
Re: [Gluster-devel] Change in glusterfs[master]: NetBSD build fix for gettext
Done
Pranith

- Original Message -
From: Emmanuel Dreyfus m...@netbsd.org
To: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, May 29, 2014 1:53:12 PM
Subject: Re: [Gluster-devel] Change in glusterfs[master]: NetBSD build fix for gettext

http://build.gluster.org/job/regression/4603/consoleFull : FAILED

Is it possible to reschedule this test? I feel like something went wrong that is not related to my change.

--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] regarding fsetattr
hi,

When I run the following program on a fuse mount it fails with ENOENT. When I look at the mount logs, it prints an error for setattr instead of fsetattr. Wondering if anyone knows why the fop comes as setattr instead of fsetattr.

Log:

[2014-05-29 09:33:38.658023] W [fuse-bridge.c:1056:fuse_setattr_cbk] 0-glusterfs-fuse: 2569: SETATTR() gfid:ae44dd74-ff45-42a8-886e-b4ce2373a267 => -1 (No such file or directory)

Program:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>

int main ()
{
        int ret = 0;
        int fd = open ("a.txt", O_CREAT|O_RDWR, 0644); /* mode added so the O_CREAT call is well-defined */
        if (fd < 0)
                printf ("open failed: %s\n", strerror (errno));
        ret = unlink ("a.txt");
        if (ret < 0)
                printf ("unlink failed: %s\n", strerror (errno));
        if (write (fd, "abc", 3) < 0)
                printf ("Not able to print %s\n", strerror (errno));
        ret = fchmod (fd, S_IRUSR|S_IWUSR|S_IXUSR);
        if (ret < 0)
                printf ("fchmod failed %s\n", strerror (errno));
        return 0;
}

Pranith
Re: [Gluster-devel] regarding fsetattr
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Cc: Brian Foster bfos...@redhat.com
Sent: Thursday, May 29, 2014 3:08:33 PM
Subject: regarding fsetattr

[quoted program and log snipped; see the message above]

Based on Vijay's inputs I checked in fuse-bridge and this is what I see:

1162        if (fsi->valid & FATTR_FH &&
1163            !(fsi->valid & (FATTR_ATIME|FATTR_MTIME))) {
1164                /* We need no loc if kernel sent us an fd and
1165                 * we are not fiddling with times */
1166                state->fd = FH_TO_FD (fsi->fh);
1167                fuse_resolve_fd_init (state, &state->resolve, state->fd);
1168        } else {
1169                fuse_resolve_inode_init (state, &state->resolve, finh->nodeid);
1170        }
1171

(gdb) p fsi->valid
$4 = 1
(gdb) p (fsi->valid & FATTR_FH)
$5 = 0

fsi->valid doesn't have FATTR_FH. Who is supposed to set it?

Pranith
Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]
The test-case in question uses the cluster framework, where the logs are saved in /d/backends and are not present in the compressed log file. So I wrote a patch, http://review.gluster.org/#/c/7924/, which creates the glusterd logs for the cluster framework in the logdir instead of /d/backends. Vijay, can we please merge this patch, so that the next time this scenario is hit we will have the proper logs.

Regards,
Avra

On 05/29/2014 07:19 AM, Pranith Kumar Karampuri wrote:

Avra,
The patch submitted by you, http://review.gluster.com/#/c/7889, also failed this test today.

Pranith

- Original Message -
From: Avra Sengupta aseng...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, May 28, 2014 5:04:40 PM
Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]

Pranith, I am looking into a priority issue for snapshot (https://bugzilla.redhat.com/show_bug.cgi?id=1098045) right now. I will get started with this spurious failure as soon as I finish it, which should be by eod tomorrow at the latest.

Regards,
Avra

On 05/28/2014 06:46 AM, Pranith Kumar Karampuri wrote:

FYI, this test failed more than once yesterday. The same test failed both times.

Pranith

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Avra Sengupta aseng...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, May 28, 2014 6:43:52 AM
Subject: Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]

CC gluster-devel
Pranith

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Avra Sengupta aseng...@redhat.com
Sent: Wednesday, May 28, 2014 6:42:53 AM
Subject: Spurious failure ./tests/bugs/bug-1049834.t [16]

hi Avra,
Could you look into it?
Patch             == http://review.gluster.com/7889/1
Author            == Avra Sengupta aseng...@redhat.com
Build triggered by == amarts
Build-url         == http://build.gluster.org/job/regression/4586/consoleFull
Download-log-at   == http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
Test written by   == Author: Avra Sengupta aseng...@redhat.com

./tests/bugs/bug-1049834.t [16]

#!/bin/bash

. $(dirname $0)/../include.rc
. $(dirname $0)/../cluster.rc
. $(dirname $0)/../volume.rc
. $(dirname $0)/../snapshot.rc

cleanup;

1     TEST verify_lvm_version
2     TEST launch_cluster 2
3     TEST setup_lvm 2
4     TEST $CLI_1 peer probe $H2
5     EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
6     TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
7     EXPECT 'Created' volinfo_field $V0 'Status'
8     TEST $CLI_1 volume start $V0
9     EXPECT 'Started' volinfo_field $V0 'Status'

#Setting the snap-max-hard-limit to 4
10    TEST $CLI_1 snapshot config $V0 snap-max-hard-limit 4
PID_1=$!
wait $PID_1

#Creating 3 snapshots on the volume (which is the soft-limit)
11    TEST create_n_snapshots $V0 3 $V0_snap
12    TEST snapshot_n_exists $V0 3 $V0_snap

#Creating the 4th snapshot on the volume and expecting it to be created
# but with the deletion of the oldest snapshot i.e 1st snapshot
13    TEST $CLI_1 snapshot create ${V0}_snap4 ${V0}
14    TEST snapshot_exists 1 ${V0}_snap4
15    TEST ! snapshot_exists 1 ${V0}_snap1
***16 TEST $CLI_1 snapshot delete ${V0}_snap4
17    TEST $CLI_1 snapshot create ${V0}_snap1 ${V0}
18    TEST snapshot_exists 1 ${V0}_snap1

#Deleting the 4 snaps
#TEST delete_n_snapshots $V0 4 $V0_snap
#TEST ! snapshot_n_exists $V0 4 $V0_snap

cleanup;

Pranith
Re: [Gluster-devel] regarding fsetattr
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, May 29, 2014 3:37:37 PM
Subject: Re: [Gluster-devel] regarding fsetattr

[earlier quoted messages with the test program, the fuse-bridge code, and the gdb output snipped; see above]

fsi->valid doesn't have FATTR_FH. Who is supposed to set it?

Had a discussion with Brian Foster on IRC. The issue is that gluster depends on the client fd being passed down to perform the operation, whereas setattr is sent on an inode from the VFS to fuse; since gluster doesn't hold any reference to the inode once the unlink happens, this issue is seen. I will have one more conversation with Brian to find out what needs to be fixed.

Pranith
Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]
On 05/29/2014 06:04 PM, Avra Sengupta wrote:

The test-case in question uses the cluster framework, where the logs are saved in /d/backends and are not present in the compressed log file. So I wrote a patch, http://review.gluster.org/#/c/7924/, which creates the glusterd logs for the cluster framework in the logdir instead of /d/backends. Vijay, can we please merge this patch, so that the next time this scenario is hit we will have the proper logs.

Thanks, I have merged this patch.

-Vijay
Re: [Gluster-devel] [Gluster-users] Need testers for GlusterFS 3.4.4
On 29/05/2014, at 8:04 PM, Ben Turner wrote:

From: James purplei...@gmail.com
Sent: Wednesday, May 28, 2014 5:21:21 PM

On Wed, May 28, 2014 at 5:02 PM, Justin Clift jus...@gluster.org wrote:

Hi all,

Are there any Community members around who can test the GlusterFS 3.4.4 beta (rpms are available)?

I've provided all the tools and how-to to do this yourself. It should take about 20 min. Old example:

https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/

The same process should work, except base your testing on the latest vagrant article, if you haven't set it up already:

https://ttboj.wordpress.com/2014/05/13/vagrant-on-fedora-with-libvirt-reprise/

I can help out here; I'll have a chance to run through some stuff this weekend. Where should I post feedback?

Excellent Ben! Please send feedback to gluster-devel. :)

+ Justin

--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Re: [Gluster-devel] Spurious failure in tests/basic/bd.t [22, 23, 24, 25]
CC'ing to the correct ID of Mohan

On Fri, May 30, 2014 at 5:45 AM, Pranith Kumar Karampuri pkara...@redhat.com wrote:

hi Mohan,
Could you please look into this:

Patch             == http://review.gluster.com/#/c/7926/1
Author            == Avra Sengupta aseng...@redhat.com
Build triggered by == amarts
Build-url         == http://build.gluster.org/job/regression/4615/consoleFull
Download-log-at   == http://build.gluster.org:443/logs/regression/glusterfs-logs-20140529:10:51:46.tgz
Test written by   == Author: M. Mohan Kumar mo...@in.ibm.com

./tests/basic/bd.t [22, 23, 24, 25]

0     #!/bin/bash
1
2     . $(dirname $0)/../include.rc
3
4     function execute()
5     {
6             cmd=$1
7             shift
8             ${cmd} $@ >/dev/null 2>&1
9     }
10
11    function bd_cleanup()
12    {
13            execute vgremove -f ${V0}
14            execute pvremove ${ld}
15            execute losetup -d ${ld}
16            execute rm ${BD_DISK}
17            cleanup
18    }
19
20    function check()
21    {
22            if [ $? -ne 0 ]; then
23                    echo prerequisite $@ failed
24                    bd_cleanup
25                    exit
26            fi
27    }
28
29    SIZE=256 #in MB
30
31    bd_cleanup;
32
33    ## Configure environment needed for BD backend volumes
34    ## Create a file with configured size and
35    ## set it as a temporary loop device to create
36    ## physical volume & VG. These are basic things needed
37    ## for testing BD xlator; if any one of these steps fails,
38    ## test script exits
39    function configure()
40    {
41            GLDIR=`$CLI system:: getwd`
42            BD_DISK=${GLDIR}/bd_disk
43
44            execute truncate -s${SIZE}M ${BD_DISK}
45            check ${BD_DISK} creation
46
47            execute losetup -f
48            check losetup
49            ld=`losetup -f`
50
51            execute losetup ${ld} ${BD_DISK}
52            check losetup ${BD_DISK}
53            execute pvcreate -f ${ld}
54            check pvcreate ${ld}
55            execute vgcreate ${V0} ${ld}
56            check vgcreate ${V0}
57            execute lvcreate --thin ${V0}/pool --size 128M
58    }
59
60    function volinfo_field()
61    {
62            local vol=$1;
63            local field=$2;
64            $CLI volume info $vol | grep ^$field: | sed 's/.*: //';
65    }
66
67    function volume_type()
68    {
69            getfattr -n volume.type $M0/. --only-values --absolute-names -e text
70    }
71
72    TEST glusterd
73    TEST pidof glusterd
74    configure
75
76    TEST $CLI volume create $V0 ${H0}:/$B0/$V0?${V0}
77    EXPECT $V0 volinfo_field $V0 'Volume Name';
78    EXPECT 'Created' volinfo_field $V0 'Status';
79
80    ## Start volume and verify
81    TEST $CLI volume start $V0;
82    EXPECT 'Started' volinfo_field $V0 'Status'
83
84    TEST glusterfs --volfile-id=/$V0 --volfile-server=$H0 $M0
85    EXPECT '1' volume_type
86
87    ## Create posix file
88    TEST touch $M0/posix
89
90    TEST touch $M0/lv
91    gfid=`getfattr -n glusterfs.gfid.string $M0/lv --only-values --absolute-names`
92    TEST setfattr -n user.glusterfs.bd -v lv:4MB $M0/lv
93    # Check if LV is created
94    TEST stat /dev/$V0/${gfid}
95
96    ## Create filesystem
97    sleep 1
98    TEST mkfs.ext4 -qF $M0/lv
99    # Cloning
100   TEST touch $M0/lv_clone
101   gfid=`getfattr -n glusterfs.gfid.string $M0/lv_clone --only-values --absolute-names`
102   TEST setfattr -n clone -v ${gfid} $M0/lv
103   TEST stat /dev/$V0/${gfid}
104
105   sleep 1
106   ## Check mounting
107   TEST mount -o loop $M0/lv $M1
108   umount $M1
109
110   # Snapshot
111   TEST touch $M0/lv_sn
112   gfid=`getfattr -n glusterfs.gfid.string $M0/lv_sn --only-values --absolute-names`
113   TEST setfattr -n snapshot -v ${gfid} $M0/lv
114   TEST stat /dev/$V0/${gfid}
115
116   # Merge
117   sleep 1
**118 TEST setfattr -n merge -v $M0/lv_sn $M0/lv_sn
**119 TEST ! stat $M0/lv_sn
**120 TEST ! stat /dev/$V0/${gfid}
121
122
123   rm $M0/* -f
124
**125 TEST umount $M0
126   TEST $CLI volume stop ${V0}
127   EXPECT 'Stopped' volinfo_field $V0 'Status';
128   TEST $CLI volume delete ${V0}
129
130   bd_cleanup

Pranith

--
http://raobharata.wordpress.com/
[Gluster-devel] Initiative to increase developer participation
hi,

We are taking an initiative to come up with some easy bugs that we can help volunteers in the community send patches for. Goals of this initiative:

- Each maintainer needs to come up with a list of bugs that are easy to fix in their components.
- All the developers who are already active in the community should help the newcomers by answering their questions.
- Improve developer documentation to address FAQs.
- Over time, turn these newcomers into experienced glusterfs developers :-)

Maintainers,
Could you please come up with the initial list of bugs by next Wednesday, before the community meeting?

Niels,
Could you send out the guideline for marking the bugs as easy fixes, and also the wiki link for backports?

PS: This is not just for newcomers to the community but also for existing developers to explore other components. Please feel free to suggest and give feedback to improve this process :-).

Pranith
Re: [Gluster-devel] Spurious failure in tests/basic/bd.t [22, 23, 24, 25]
- Original Message -
From: Bharata B Rao bharata@gmail.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, M. Mohan Kumar mohankuma...@gmail.com
Sent: Friday, May 30, 2014 8:28:15 AM
Subject: Re: [Gluster-devel] Spurious failure in tests/basic/bd.t [22, 23, 24, 25]

CC'ing to the correct ID of Mohan

Thanks!
Pranith

[quoted copy of the build details and the full ./tests/basic/bd.t listing snipped; see the previous message]
Re: [Gluster-devel] Initiative to increase developer participation
CC gluster-devel
Pranith

- Original Message -
From: HUANG Qiulan huan...@ihep.ac.cn
To: Pranith Kumar Karampuri pkara...@redhat.com
Sent: Friday, May 30, 2014 9:10:54 AM
Subject: Re: [Gluster-devel] Initiative to increase developer participation

Hi Pranith,

I'm glad to participate in the Gluster developer team. First, let me introduce myself briefly: I'm a staff member of the Computing Center, Institute of High Energy Physics, Chinese Academy of Sciences. I have deployed Gluster 3.2.7 in our computing farm with 5 servers, which provides about 315TB of storage services for physicists. On top of the production package I have made many changes, such as to the data distribution, and have optimized lookup so that requests are sent only to the hash and hash+1 bricks instead of to all bricks, and so on. Recently, I developed a distributed metadata service for Gluster which is being tested. I hope you are interested in the work I have done. Thank you.

Cheers,
Qiulan

===
Computing Center, the Institute of High Energy Physics, China
Huang, Qiulan       Tel: (+86) 10 8823 6010-105
P.O. Box 918-7      Fax: (+86) 10 8823 6839
Beijing 100049      P.R. China
Email: huan...@ihep.ac.cn
===

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
Sent: Friday, May 30, 2014
To: Gluster Devel gluster-devel@gluster.org, gluster-users gluster-us...@gluster.org
Cc: Kaushal Madappa kmada...@redhat.com
Subject: [Gluster-devel] Initiative to increase developer participation

[quoted text of the original announcement snipped; see the message above]