Forgot to mention: inside the directories reported as not empty, the files are listed like this:

-?????????? ? ?    ?     ?            ? Hal8723APhyReg.h
-?????????? ? ?    ?     ?            ? Hal8723UHWImg_CE.h
-?????????? ? ?    ?     ?            ? hal_com.h
-?????????? ? ?    ?     ?            ? HalDMOutSrc8723A.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_BB.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_FW.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_MAC.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_RF.h
-?????????? ? ?    ?     ?            ? hal_intf.h
-?????????? ? ?    ?     ?            ? HalPwrSeqCmd.h
-?????????? ? ?    ?     ?            ? ieee80211.h
-?????????? ? ?    ?     ?            ? odm_debug.h
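
For what it's worth, the question marks are what "ls -l" prints when it cannot
stat() an entry through the client mount. To compare, the same entries can be
checked directly on a brick with something like this (a generic sketch; the
real in-brick path, which I haven't pasted above, goes in place of the
placeholder):

stat /zfs/brick0/brick/<path-to-that-directory>/hal_com.h
getfattr -d -m . -e hex /zfs/brick0/brick/<path-to-that-directory>/hal_com.h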

Thanks,
JF


On 16/03/15 11:45, JF Le Fillâtre wrote:
> 
> Hello all,
> 
> So, another day, another issue. I was trying to play with quotas on my volume:
> 
> ================================================================================
> [root@stor104 ~]# gluster volume status
> Status of volume: live
> Gluster process                                 Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick stor104:/zfs/brick0/brick                 49167   Y       13446
> Brick stor104:/zfs/brick1/brick                 49168   Y       13457
> Brick stor104:/zfs/brick2/brick                 49169   Y       13468
> Brick stor106:/zfs/brick0/brick                 49159   Y       14158
> Brick stor106:/zfs/brick1/brick                 49160   Y       14169
> Brick stor106:/zfs/brick2/brick                 49161   Y       14180
> NFS Server on localhost                         2049    Y       13483
> Quota Daemon on localhost                       N/A     Y       13490
> NFS Server on stor106                           2049    Y       14195
> Quota Daemon on stor106                         N/A     Y       14202
>  
> Task Status of Volume live
> ------------------------------------------------------------------------------
> Task                 : Rebalance           
> ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
> Status               : completed           
> ================================================================================
> 
> 
> I'm not sure whether the "N/A" port for "Quota Daemon on localhost" is normal,
> but that's another topic.
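> 
> (For what it's worth, a quick way to double-check whether the quota daemon is
> actually running despite the "N/A" port would be something like this; I'm not
> pasting the output here:)
> 
> ================================================================================
> [root@stor104 ~]# ps aux | grep [q]uotad
> ================================================================================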
> 
> While the quotas were enabled, I did some tests, copying a whole tree of small
> files (the Linux kernel sources) to the volume to see what performance I
> would get, and it was really low. So I decided to disable the quotas again.
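> 
> (For completeness, the disable was done with the standard command; the exact
> invocation below is reconstructed from memory rather than copied from my
> terminal:)
> 
> ================================================================================
> [root@stor104 ~]# gluster volume quota live disable
> ================================================================================
> 
> After that, the quota daemons were gone from the volume status: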
> 
> 
> ================================================================================
> [root@stor104 ~]# gluster volume status
> Status of volume: live
> Gluster process                                 Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick stor104:/zfs/brick0/brick                 49167   Y       13754
> Brick stor104:/zfs/brick1/brick                 49168   Y       13765
> Brick stor104:/zfs/brick2/brick                 49169   Y       13776
> Brick stor106:/zfs/brick0/brick                 49159   Y       14282
> Brick stor106:/zfs/brick1/brick                 49160   Y       14293
> Brick stor106:/zfs/brick2/brick                 49161   Y       14304
> NFS Server on localhost                         2049    Y       13790
> NFS Server on stor106                           2049    Y       14319
>  
> Task Status of Volume live
> ------------------------------------------------------------------------------
> Task                 : Rebalance           
> ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
> Status               : completed           
> ================================================================================
> 
> 
> I remounted the volume from the client and tried deleting the directory
> containing the sources, which gave me a very long list of errors like these:
> 
> 
> ================================================================================
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ftrace/test.d/kprobe’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ptrace’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v0.0’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v3.5’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/powerpc’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/scripts/python/Perf-Trace-Util’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/Documentation’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/ui/tui’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/util/include’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/lib’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/virtio’: Directory not empty
> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/virt/kvm’: Directory not empty
> ================================================================================
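> 
> (In case it's useful, this is roughly how the leftovers can be looked for
> directly on the bricks, on both storage nodes; "virt/kvm" below is just one
> of the directories from the list above, and I haven't pasted the output:)
> 
> ================================================================================
> [root@stor104 ~]# ls -la /zfs/brick*/brick/linux-3.18.7/virt/kvm
> ================================================================================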
> 
> 
> I did my homework on Google, but the information I found says that this
> happens when the contents of the bricks have been modified locally. That is
> definitely not the case here: I have *not* touched the contents of the bricks.
> 
> So my question is: could disabling the quotas have had side effects on
> Gluster's metadata? If so, what can I do to force Gluster to rescan all the
> brick directories and "import" the local files?
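> 
> (The only thing I can think of along those lines is forcing a lookup of
> everything from the client side, roughly as below, but I honestly don't know
> whether that is the right approach; the prompt is just a generic client:)
> 
> ================================================================================
> [root@client ~]# find /glusterfs/live/linux-3.18.7 -exec stat {} + > /dev/null
> ================================================================================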
> 
> GlusterFS version: 3.6.2
> 
> The setup of my volume:
> 
> 
> ================================================================================
> [root@stor104 ~]# gluster volume info
>  
> Volume Name: live
> Type: Distribute
> Volume ID: 
> Status: Started
> Number of Bricks: 6
> Transport-type: tcp
> Bricks:
> Brick1: stor104:/zfs/brick0/brick
> Brick2: stor104:/zfs/brick1/brick
> Brick3: stor104:/zfs/brick2/brick
> Brick4: stor106:/zfs/brick0/brick
> Brick5: stor106:/zfs/brick1/brick
> Brick6: stor106:/zfs/brick2/brick
> Options Reconfigured:
> features.quota: off
> performance.readdir-ahead: on
> nfs.volume-access: read-only
> cluster.data-self-heal-algorithm: full
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> performance.force-readdirp: off
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 32
> performance.flush-behind: on
> performance.client-io-threads: on
> performance.cache-size: 32GB
> performance.cache-refresh-timeout: 60
> performance.cache-max-file-size: 4MB
> nfs.disable: off
> cluster.eager-lock: on
> cluster.min-free-disk: 1%
> server.allow-insecure: on
> diagnostics.client-log-level: ERROR
> diagnostics.brick-log-level: ERROR
> ================================================================================
> 
> 
> It is mounted from the client with these fstab options:
> 
> ================================================================================
> stor104:live  
> defaults,backupvolfile-server=stor106,direct-io-mode=disable,noauto
> ================================================================================
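> 
> (For completeness, the full line in /etc/fstab has this shape; the mount point
> and the trailing fields below are written from memory, the options are the
> ones above:)
> 
> ================================================================================
> stor104:live  /glusterfs/live  glusterfs  defaults,backupvolfile-server=stor106,direct-io-mode=disable,noauto  0 0
> ================================================================================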
> 
> Attached are the log files from stor104.
> 
> Thanks a lot for any help!
> JF
> 
> 
> 
> 

-- 

 Jean-François Le Fillâtre
 -------------------------------
 HPC Systems Administrator
 LCSB - University of Luxembourg
 -------------------------------
 PGP KeyID 0x134657C6
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
