gluster volume set help?
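
That should print the same output as below. For example (using "myvol" only as a placeholder volume name):

    # Print the default value and description of every settable option
    gluster volume set help

    # Print the current value of every option on an existing volume
    gluster volume get myvol all

    # Set a single option
    gluster volume set myvol cluster.lookup-optimize on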

—
regards
Aravinda Vishwanathapura
https://kadalu.io

> On 06-Mar-2020, at 6:39 AM, gil han Choi <ghchoi.c...@gmail.com> wrote:
> 
> Hello
> 
> I used a command to print out the default values and descriptions of all 
> options, but I can't remember which command I used and can't find it.
> Which command shows this output?
> 
> Option: cluster.lookup-unhashed
> Default Value: on
> Description: This option, if set to ON, does a lookup through all the 
> sub-volumes in case a lookup didn't return any result from the hashed 
> subvolume. If set to OFF, it does not do a lookup on the remaining subvolumes.
> 
> Option: cluster.lookup-optimize
> Default Value: on
> Description: This option, if set to ON, enables the optimization of negative 
> lookups by not doing a lookup on non-hashed subvolumes for files, in case 
> the hashed subvolume does not return any result. This option disregards the 
> lookup-unhashed setting when enabled.
> 
> Option: cluster.min-free-disk
> Default Value: 10%
> Description: Percentage/size of disk space after which the process starts 
> balancing out the cluster, and logs will appear in the log files.
> 
> Option: cluster.min-free-inodes
> Default Value: 5%
> Description: After the system has only N% of inodes left, warnings start to 
> appear in the log files.
> 
> Option: cluster.rebalance-stats
> Default Value: off
> Description: This option, if set to ON, displays and logs the time taken for 
> migration of each file during the rebalance process. If set to OFF, the 
> rebalance logs will only display the time spent in each directory.
> 
> Option: cluster.subvols-per-directory
> Default Value: (null)
> Description: Specifies the directory layout spread. Takes number of 
> subvolumes as default value.
> 
> Option: cluster.readdir-optimize
> Default Value: off
> Description: This option, if set to ON, enables the optimization that allows 
> DHT to request that non-first subvolumes filter out directory entries.
> 
> Option: cluster.rebal-throttle
> Default Value: normal
> Description: Sets the maximum number of parallel file migrations allowed on 
> a node during the rebalance operation. The default value is normal and allows 
> a max of max((processing units - 4) / 2, 2) files to be migrated at a time. 
> Lazy will allow only one file to be migrated at a time and aggressive will 
> allow a max of max((processing units - 4) / 2, 4).
> 
> Option: cluster.lock-migration
> Default Value: off
> Description:  If enabled this feature will migrate the posix locks associated 
> with a file during rebalance
> 
> Option: cluster.force-migration
> Default Value: off
> Description: If disabled, rebalance will not migrate files that are being 
> written to by an application
> 
> Option: cluster.weighted-rebalance
> Default Value: on
> Description: When enabled, files will be allocated to bricks with a 
> probability proportional to their size.  Otherwise, all bricks will have the 
> same probability (legacy behavior).
> 
> Option: cluster.entry-change-log
> Default Value: on
> Description: This option exists only for backward compatibility and 
> configuring it doesn't have any effect
> 
> Option: cluster.read-subvolume
> Default Value: (null)
> Description: inode-read fops happen only on one of the bricks in replicate. 
> Afr will prefer the one specified using this option if it is not stale. 
> Option value must be one of the xlator names of the children. Ex: 
> <volname>-client-0 till <volname>-client-<number-of-bricks - 1>
> 
> Option: cluster.read-subvolume-index
> Default Value: -1
> Description: inode-read fops happen only on one of the bricks in replicate. 
> AFR will prefer the one specified using this option if it is not stale. 
> Allowed values range from -1 to replica-count - 1.
> 
> Option: cluster.read-hash-mode
> Default Value: 1
> Description: inode-read fops happen only on one of the bricks in replicate. 
> AFR will prefer the one computed using the method specified using this option.
> 0 = first readable child of AFR, starting from 1st child.
> 1 = hash by GFID of file (all clients use same subvolume).
> 2 = hash by GFID of file and client PID.
> 3 = brick having the least outstanding read requests.
> 
> Option: cluster.background-self-heal-count
> Default Value: 8
> Description: This specifies the number of per client self-heal jobs that can 
> perform parallel heals in the background.
> 
> Option: cluster.metadata-self-heal
> Default Value: off
> Description: Using this option we can enable/disable metadata (i.e. 
> permissions, ownership, xattrs) self-heal on the file/directory.
> 
> Option: cluster.data-self-heal
> Default Value: off
> Description: Using this option we can enable/disable data self-heal on the 
> file. "open" means data self-heal action will only be triggered by file open 
> operations.
> 
> Option: cluster.entry-self-heal
> Default Value: off
> Description: Using this option we can enable/disable entry self-heal on the 
> directory.
> 
> Option: cluster.self-heal-daemon
> Default Value: on
> Description: This option applies to only self-heal-daemon. Index directory 
> crawl and automatic healing of files will not be performed if this option is 
> turned off.
> 
> Option: cluster.heal-timeout
> Default Value: 600
> Description: time interval for checking the need to self-heal in 
> self-heal-daemon
> 
> Option: cluster.self-heal-window-size
> Default Value: 1
> Description: Maximum number of blocks per file for which the self-heal 
> process would be applied simultaneously.
> 
> Option: cluster.data-change-log
> Default Value: on
> Description: This option exists only for backward compatibility and 
> configuring it doesn't have any effect
> 
> Option: cluster.metadata-change-log
> Default Value: on
> Description: This option exists only for backward compatibility and 
> configuring it doesn't have any effect
> 
> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between "full", "diff". The "full" algorithm copies the 
> entire file from source to sink. The "diff" algorithm copies to sink only 
> those blocks whose checksums don't match with those of the source. If no 
> option is configured, the algorithm is chosen dynamically as follows: if the 
> file does not exist on one of the sinks, or an empty file exists, or if the 
> source file size is about the same as the page size, the entire file will be 
> read and written (i.e. the "full" algorithm); otherwise the "diff" algorithm 
> is chosen.
> 
> Option: cluster.eager-lock
> Default Value: on
> Description: Enable/Disable eager lock for replica volume. Lock phase of a 
> transaction has two sub-phases. First is an attempt to acquire locks in 
> parallel by broadcasting non-blocking lock requests. If lock acquisition 
> fails on any server, then the held locks are unlocked and we revert to a 
> blocking locks mode sequentially on one server after another.  If this option 
> is enabled the initial broadcasting lock request attempts to acquire a full 
> lock on the entire file. If this fails, we revert back to the sequential 
> "regional" blocking locks as before. In the case where such an "eager" lock 
> is granted in the non-blocking phase, it gives rise to an opportunity for 
> optimization, i.e., if the next write transaction on the same FD arrives 
> before the unlock phase of the first transaction, it "takes over" the full 
> file lock. Similarly if yet another data transaction arrives before the 
> unlock phase of the "optimized" transaction, that in turn "takes over" the 
> lock as well. The actual unlock now happens at the end of the last 
> "optimized" transaction.
> 
> Option: disperse.eager-lock
> Default Value: on
> Description: Enable/Disable eager lock for regular files on a disperse 
> volume. If a fop takes a lock and completes its operation, it waits for the 
> next 1 second before releasing the lock, to see if the lock can be reused for 
> the next fop from the same client. If ec finds any lock contention within 1 
> second, it releases the lock immediately before the time expires. This 
> improves the performance of file operations. However, as it takes the lock on 
> the first brick, for a few operations like read, discovery of lock contention 
> might take a long time and can actually degrade performance. If eager lock is 
> disabled, the lock will be released as soon as the fop completes.
> 
> Option: disperse.other-eager-lock
> Default Value: on
> Description: It's equivalent to the eager-lock option but for non-regular 
> files.
> 
> Option: disperse.eager-lock-timeout
> Default Value: 1
> Description: Maximum time (in seconds) that a lock on an inode is kept held 
> if no new operations on the inode are received.
> 
> Option: disperse.other-eager-lock-timeout
> Default Value: 1
> Description: It's equivalent to the eager-lock-timeout option but for 
> non-regular files.
> 
> Option: cluster.quorum-type
> Default Value: none
> Description: If value is "fixed" only allow writes if quorum-count bricks are 
> present.  If value is "auto" only allow writes if more than half of bricks, 
> or exactly half including the first, are present.
> 
> Option: cluster.quorum-count
> Default Value: (null)
> Description: If quorum-type is "fixed" only allow writes if this many bricks 
> are present.  Other quorum types will OVERWRITE this value.
> 
> Option: cluster.choose-local
> Default Value: true
> Description: Choose a local subvolume (i.e. Brick) to read from if 
> read-subvolume is not explicitly set.
> 
> Option: cluster.self-heal-readdir-size
> Default Value: 1KB
> Description: readdirp size for performing entry self-heal
> 
> Option: cluster.ensure-durability
> Default Value: on
> Description: If this option is on, AFR performs fsyncs for transactions to 
> make sure the changelogs/data are written to the disk.
> 
> Option: cluster.consistent-metadata
> Default Value: no
> Description: If this option is enabled, readdirp will force lookups on those 
> entries read whose read child is not the same as that of the parent. This 
> will guarantee that all read operations on a file serve attributes from the 
> same subvol as long as it holds  a good copy of the file/dir.
> 
> Option: cluster.heal-wait-queue-length
> Default Value: 128
> Description: This specifies the number of heals that can be queued for the 
> parallel background self heal jobs.
> 
> Option: cluster.favorite-child-policy
> Default Value: none
> Description: This option can be used to automatically resolve split-brains 
> using various policies without user intervention. "size" picks the file with 
> the biggest size as the source. "ctime" and "mtime" pick the file with the 
> latest ctime and mtime respectively as the source. "majority" picks a file 
> with identical mtime and size in more than half the number of bricks in the 
> replica.
> 
> Option: diagnostics.latency-measurement
> Default Value: off
> Description: If on, stats related to the latency of each operation are 
> tracked inside GlusterFS data structures.
> 
> Option: diagnostics.dump-fd-stats
> Default Value: off
> Description: If on, stats related to file operations are tracked inside 
> GlusterFS data structures.
> 
> Option: diagnostics.brick-log-level
> Default Value: INFO
> Description: Changes the log-level of the bricks
> 
> Option: diagnostics.client-log-level
> Default Value: INFO
> Description: Changes the log-level of the clients
> 
> Option: diagnostics.brick-sys-log-level
> Default Value: CRITICAL
> Description: Gluster's syslog log-level
> 
> Option: diagnostics.client-sys-log-level
> Default Value: CRITICAL
> Description: Gluster's syslog log-level
> 
> Option: diagnostics.brick-logger
> Default Value: (null)
> Description: (null)
> 
> Option: diagnostics.client-logger
> Default Value: (null)
> Description: (null)
> 
> Option: diagnostics.brick-log-format
> Default Value: (null)
> Description: (null)
> 
> Option: diagnostics.client-log-format
> Default Value: (null)
> Description: (null)
> 
> Option: diagnostics.brick-log-buf-size
> Default Value: 5
> Description: (null)
> 
> Option: diagnostics.client-log-buf-size
> Default Value: 5
> Description: (null)
> 
> Option: diagnostics.brick-log-flush-timeout
> Default Value: 120
> Description: (null)
> 
> Option: diagnostics.client-log-flush-timeout
> Default Value: 120
> Description: (null)
> 
> Option: diagnostics.stats-dump-interval
> Default Value: 0
> Description: Interval (in seconds) at which to auto-dump statistics. Zero 
> disables automatic dumping.
> 
> Option: diagnostics.fop-sample-interval
> Default Value: 0
> Description: Interval in which we want to collect FOP latency samples.  2 
> means collect a sample every 2nd FOP.
> 
> Option: diagnostics.stats-dump-format
> Default Value: json
> Description:  The dump-format option specifies the format in which to dump 
> the statistics. Select between "text", "json", "dict" and "samples". Default 
> is "json".
> 
> Option: diagnostics.fop-sample-buf-size
> Default Value: 65535
> Description: The maximum size of our FOP sampling ring buffer.
> 
> Option: diagnostics.stats-dnscache-ttl-sec
> Default Value: 86400
> Description: The interval after which a cached DNS entry will be 
> re-validated. Default: 24 hrs.
> 
> Option: performance.cache-max-file-size
> Default Value: 0
> Description: Maximum file size which would be cached by the io-cache 
> translator.
> 
> Option: performance.cache-min-file-size
> Default Value: 0
> Description: Minimum file size which would be cached by the io-cache 
> translator.
> 
> Option: performance.cache-refresh-timeout
> Default Value: 1
> Description: The cached data for a file will be retained for 
> 'cache-refresh-timeout' seconds, after which data re-validation is performed.
> 
> Option: performance.cache-priority
> Default Value: 
> Description: Assigns priority to filenames with specific patterns so that 
> when a page needs to be ejected out of the cache, the page of a file whose 
> priority is the lowest will be ejected earlier
> 
> Option: performance.cache-size
> Default Value: 32MB
> Description: Size of the read cache.
> 
> Option: performance.io-thread-count
> Default Value: 16
> Description: Number of threads in IO threads translator which perform 
> concurrent IO operations
> 
> Option: performance.high-prio-threads
> Default Value: 16
> Description: Max number of threads in IO threads translator which perform 
> high priority IO operations at a given time
> 
> Option: performance.normal-prio-threads
> Default Value: 16
> Description: Max number of threads in IO threads translator which perform 
> normal priority IO operations at a given time
> 
> Option: performance.low-prio-threads
> Default Value: 16
> Description: Max number of threads in IO threads translator which perform low 
> priority IO operations at a given time
> 
> Option: performance.least-prio-threads
> Default Value: 1
> Description: Max number of threads in IO threads translator which perform 
> least priority IO operations at a given time
> 
> Option: performance.enable-least-priority
> Default Value: on
> Description: Enable/Disable least priority
> 
> Option: performance.iot-watchdog-secs
> Default Value: (null)
> Description: Number of seconds a queue must be stalled before starting an 
> 'emergency' thread.
> 
> Option: performance.iot-cleanup-disconnected-reqs
> Default Value: off
> Description: 'Poison' queued requests when a client disconnects
> 
> Option: performance.iot-pass-through
> Default Value: false
> Description: Enable/Disable io threads translator
> 
> Option: performance.io-cache-pass-through
> Default Value: false
> Description: Enable/Disable io cache translator
> 
> Option: performance.qr-cache-timeout
> Default Value: 1
> Description: (null)
> 
> Option: performance.cache-invalidation
> Default Value: false
> Description: When "on", invalidates/updates the metadata cache, on receiving 
> the cache-invalidation notifications
> 
> Option: performance.ctime-invalidation
> Default Value: false
> Description: Quick-read by default uses mtime to identify changes to file 
> data. However, there are applications like rsync which explicitly set mtime, 
> making it unreliable for the purpose of identifying changes in file content. 
> Since ctime also changes when the content of a file changes and it cannot be 
> set explicitly, it is suitable for identifying staleness of cached data. 
> This option makes quick-read prefer ctime over mtime to validate its cache. 
> However, using ctime can result in false positives, as ctime changes with 
> mere attribute changes like permissions, without changes to file data. So, 
> use this only when mtime is not reliable.
> 
> Option: performance.flush-behind
> Default Value: on
> Description: If this option is set to ON, it instructs the write-behind 
> translator to perform flush in the background, by returning success (or any 
> errors, if any of the previous writes failed) to the application even before 
> the flush FOP is sent to the backend filesystem.
> 
> Option: performance.nfs.flush-behind
> Default Value: on
> Description: If this option is set to ON, it instructs the write-behind 
> translator to perform flush in the background, by returning success (or any 
> errors, if any of the previous writes failed) to the application even before 
> the flush FOP is sent to the backend filesystem.
> 
> Option: performance.write-behind-window-size
> Default Value: 1MB
> Description: Size of the write-behind buffer for a single file (inode).
> 
> Option: performance.resync-failed-syncs-after-fsync
> Default Value: (null)
> Description: If the sync of "cached writes issued before fsync" (to the 
> backend) fails, this option configures whether to retry syncing them after 
> fsync or forget them. If set to on, cached writes are retried until a "flush" 
> fop (or a successful sync) on sync failures. fsync itself fails irrespective 
> of the value of this option.
> 
> Option: performance.nfs.write-behind-window-size
> Default Value: 1MB
> Description: Size of the write-behind buffer for a single file (inode).
> 
> Option: performance.strict-o-direct
> Default Value: off
> Description: This option when set to off, ignores the O_DIRECT flag.
> 
> Option: performance.nfs.strict-o-direct
> Default Value: off
> Description: This option when set to off, ignores the O_DIRECT flag.
> 
> Option: performance.strict-write-ordering
> Default Value: off
> Description: Do not let later writes overtake earlier writes even if they do 
> not overlap
> 
> Option: performance.nfs.strict-write-ordering
> Default Value: off
> Description: Do not let later writes overtake earlier writes even if they do 
> not overlap
> 
> Option: performance.write-behind-trickling-writes
> Default Value: on
> Description: (null)
> 
> Option: performance.aggregate-size
> Default Value: 128KB
> Description: Will aggregate writes until data of the specified size is fully 
> filled for a single file, provided there are no dependent fops on cached 
> writes. This option just sets the aggregate size. Note that aggregation won't 
> happen if performance.write-behind-trickling-writes is turned on. Hence turn 
> off performance.write-behind-trickling-writes so that writes are aggregated 
> up to a max of "aggregate-size" bytes.
> 
> Option: performance.nfs.write-behind-trickling-writes
> Default Value: on
> Description: (null)
> 
> Option: performance.lazy-open
> Default Value: yes
> Description: Perform open in the backend only when a necessary FOP arrives 
> (e.g. writev on the FD, unlink of the file). When the option is disabled, 
> perform the backend open right after unwinding open().
> 
> Option: performance.read-after-open
> Default Value: yes
> Description: Read is sent only after the actual open happens and a real fd is 
> obtained, instead of doing it on an anonymous fd (similar to write).
> 
> Option: performance.open-behind-pass-through
> Default Value: false
> Description: Enable/Disable open behind translator
> 
> Option: performance.read-ahead-page-count
> Default Value: 4
> Description: Number of pages that will be pre-fetched
> 
> Option: performance.read-ahead-pass-through
> Default Value: false
> Description: Enable/Disable read ahead translator
> 
> Option: performance.readdir-ahead-pass-through
> Default Value: false
> Description: Enable/Disable readdir ahead translator
> 
> Option: performance.md-cache-pass-through
> Default Value: false
> Description: Enable/Disable md cache translator
> 
> Option: performance.md-cache-timeout
> Default Value: 1
> Description: Time period after which cache has to be refreshed
> 
> Option: performance.cache-swift-metadata
> Default Value: (null)
> Description: Cache swift metadata (user.swift.metadata xattr)
> 
> Option: performance.cache-samba-metadata
> Default Value: (null)
> Description: Cache samba metadata (user.DOSATTRIB, security.NTACL xattr)
> 
> Option: performance.cache-capability-xattrs
> Default Value: (null)
> Description: Cache xattrs required for capability based security
> 
> Option: performance.cache-ima-xattrs
> Default Value: (null)
> Description: Cache xattrs required for IMA (Integrity Measurement 
> Architecture)
> 
> Option: performance.md-cache-statfs
> Default Value: off
> Description: Cache statfs information of filesystem on the client
> 
> Option: performance.xattr-cache-list
> Default Value: (null)
> Description: A comma separated list of xattrs that shall be cached by 
> md-cache. The only wildcard allowed is '*'
> 
> Option: performance.nl-cache-pass-through
> Default Value: false
> Description: Enable/Disable nl cache translator
> 
> Option: features.encryption
> Default Value: off
> Description: enable/disable client-side encryption for the volume.
> 
> Option: network.frame-timeout
> Default Value: 1800
> Description: Time frame after which the (file) operation would be declared as 
> dead, if the server does not respond for a particular (file) operation.
> 
> Option: network.ping-timeout
> Default Value: 42
> Description: Time duration for which the client waits to check if the server 
> is responsive.
> 
> Option: network.tcp-window-size
> Default Value: (null)
> Description: Specifies the window size for tcp socket.
> 
> Option: client.ssl
> Default Value: off
> Description: enable/disable client.ssl flag in the volume.
> 
> Option: network.remote-dio
> Default Value: disable
> Description: If enabled, in open/creat/readv/writev fops, the O_DIRECT flag 
> will be filtered at the client protocol level so the server will still 
> continue to cache the file. This works similarly to NFS's behavior with 
> O_DIRECT. Anon-fds can choose to readv/writev using O_DIRECT.
> 
> Option: client.event-threads
> Default Value: 2
> Description: Specifies the number of event threads to execute in parallel. 
> Larger values would help process responses faster, depending on available 
> processing power. Range 1-32 threads.
> 
> Option: network.inode-lru-limit
> Default Value: 16384
> Description: Specifies the limit on the number of inodes in the lru list of 
> the inode cache.
> 
> Option: auth.allow
> Default Value: *
> Description: Allow a comma separated list of addresses and/or hostnames to 
> connect to the server. Option auth.reject overrides this option. By default, 
> all connections are allowed.
> 
> Option: auth.reject
> Default Value: (null)
> Description: Reject connections from a comma separated list of addresses 
> and/or hostnames. This option overrides the auth.allow option. By default, 
> all connections are allowed.
> 
> Option: server.allow-insecure
> Default Value: on
> Description: (null)
> 
> Option: server.root-squash
> Default Value: off
> Description: Map requests from uid/gid 0 to the anonymous uid/gid. Note that 
> this does not apply to any other uids or gids that might be equally 
> sensitive, such as user bin or group staff.
> 
> Option: server.all-squash
> Default Value: off
> Description: Map requests from any uid/gid to the anonymous uid/gid. Note 
> that this does not apply to any other uids or gids that might be equally 
> sensitive, such as user bin or group staff.
> 
> Option: server.anonuid
> Default Value: 65534
> Description: value of the uid used for the anonymous user/nfsnobody when 
> root-squash/all-squash is enabled.
> 
> Option: server.anongid
> Default Value: 65534
> Description: value of the gid used for the anonymous user/nfsnobody when 
> root-squash/all-squash is enabled.
> 
> Option: server.statedump-path
> Default Value: /var/run/gluster
> Description: Specifies directory in which gluster should save its statedumps.
> 
> Option: server.outstanding-rpc-limit
> Default Value: 64
> Description: Parameter to throttle the number of incoming RPC requests from a 
> client. 0 means no limit (can potentially run out of memory)
> 
> Option: server.ssl
> Default Value: off
> Description: enable/disable server.ssl flag in the volume.
> 
> Option: auth.ssl-allow
> Default Value: *
> Description: Allow a comma separated list of common names (CN) of the clients 
> that are allowed to access the server. By default, all TLS authenticated 
> clients are allowed to access the server.
> 
> Option: server.manage-gids
> Default Value: off
> Description: Resolve groups on the server-side.
> 
> Option: server.dynamic-auth
> Default Value: on
> Description: When 'on' perform dynamic authentication of volume options in 
> order to allow/terminate client transport connection immediately in response 
> to *.allow | *.reject volume set options.
> 
> Option: server.gid-timeout
> Default Value: 300
> Description: Timeout in seconds for the cached groups to expire.
> 
> Option: server.event-threads
> Default Value: 2
> Description: Specifies the number of event threads to execute in parallel. 
> Larger values would help process responses faster, depending on available 
> processing power.
> 
> Option: server.tcp-user-timeout
> Default Value: 42
> Description: (null)
> 
> Option: server.keepalive-time
> Default Value: (null)
> Description: (null)
> 
> Option: server.keepalive-interval
> Default Value: (null)
> Description: (null)
> 
> Option: server.keepalive-count
> Default Value: (null)
> Description: (null)
> 
> Option: transport.listen-backlog
> Default Value: 1024
> Description: This option sets the value of the backlog argument, which 
> defines the maximum length to which the queue of pending connections for the 
> socket fd may grow.
> 
> Option: performance.write-behind
> Default Value: on
> Description: enable/disable write-behind translator in the volume.
> 
> Option: performance.read-ahead
> Default Value: on
> Description: enable/disable read-ahead translator in the volume.
> 
> Option: performance.readdir-ahead
> Default Value: on
> Description: enable/disable readdir-ahead translator in the volume.
> 
> Option: performance.io-cache
> Default Value: on
> Description: enable/disable io-cache translator in the volume.
> 
> Option: performance.open-behind
> Default Value: on
> Description: enable/disable open-behind translator in the volume.
> 
> Option: performance.quick-read
> Default Value: on
> Description: enable/disable quick-read translator in the volume.
> 
> Option: performance.nl-cache
> Default Value: off
> Description: enable/disable negative entry caching translator in the volume. 
> Enabling this option improves performance of 'create file/directory' workload
> 
> Option: performance.stat-prefetch
> Default Value: on
> Description: enable/disable meta-data caching translator in the volume.
> 
> Option: performance.client-io-threads
> Default Value: on
> Description: enable/disable io-threads translator in the client graph of 
> volume.
> 
> Option: performance.nfs.write-behind
> Default Value: on
> Description: enable/disable write-behind translator in the volume
> 
> Option: performance.force-readdirp
> Default Value: true
> Description: Convert all readdir requests to readdirplus to collect stat info 
> on each entry.
> 
> Option: performance.cache-invalidation
> Default Value: false
> Description: When "on", invalidates/updates the metadata cache, on receiving 
> the cache-invalidation notifications
> 
> Option: performance.global-cache-invalidation
> Default Value: true
> Description: When "on", purges all read caches in kernel and glusterfs stack 
> whenever a stat change is detected. Stat changes can be detected while 
> processing responses to file operations (fop) or through upcall 
> notifications. Since purging caches can be an expensive operation, it's 
> advised to have this option "on" only when a file can be accessed from 
> multiple different Glusterfs mounts and caches across these different mounts 
> are required to be coherent. If a file is not accessed across different 
> mounts (a simple example is having only one mount for a volume), it's advised 
> to keep this option "off", as all file modifications go through caches, 
> keeping them coherent. This option overrides the value of 
> performance.cache-invalidation.
> 
> Option: features.uss
> Default Value: off
> Description: enable/disable User Serviceable Snapshots on the volume.
> 
> Option: features.snapshot-directory
> Default Value: .snaps
> Description: Entry point directory for entering the snapshot world. The value 
> can contain only [0-9a-z-_], must start with a dot (.), and cannot exceed 255 
> characters.
> 
> Option: features.show-snapshot-directory
> Default Value: off
> Description: Show the entry point in readdir output of the snapdir-entry-path 
> which is set by Samba.
> 
> Option: features.tag-namespaces
> Default Value: off
> Description: This option enables this translator's functionality that tags 
> every fop with a namespace hash for later throttling, stats collection, 
> logging, etc.
> 
> Option: network.compression
> Default Value: off
> Description: enable/disable network compression translator
> 
> Option: network.compression.window-size
> Default Value: -15
> Description: Size of the zlib history buffer.
> 
> Option: network.compression.mem-level
> Default Value: 8
> Description: Memory allocated for internal compression state. 1 uses minimum 
> memory but is slow and reduces compression ratio; memLevel=9 uses maximum 
> memory for optimal speed. The default value is 8.
> 
> Option: network.compression.min-size
> Default Value: 0
> Description: Data is compressed only when its size exceeds this.
> 
> Option: network.compression.compression-level
> Default Value: -1
> Description: Compression levels 
> 0 : no compression, 1 : best speed, 
> 9 : best compression, -1 : default compression 
> 
> Option: features.quota-deem-statfs
> Default Value: on
> Description: If set to on, it takes quota limits into consideration while 
> estimating fs size (df command). (Default is on.)
> 
> Option: nfs.transport-type
> Default Value: (null)
> Description: Specifies the nfs transport type. Valid transport types are 
> 'tcp' and 'rdma'.
> 
> Option: nfs.rdirplus
> Default Value: (null)
> Description: When this option is set to off NFS falls back to standard 
> readdir instead of readdirp
> 
> Option: features.read-only
> Default Value: off
> Description: When "on", makes a volume read-only. It is turned "off" by 
> default.
> 
> Option: features.worm
> Default Value: off
> Description: When "on", makes a volume get write once read many  feature. It 
> is turned "off" by default.
> 
> Option: features.worm-file-level
> Default Value: off
> Description: When "on", activates the file level worm. It is turned "off" by 
> default.
> 
> Option: features.worm-files-deletable
> Default Value: on
> Description: When "off", doesn't allow the Worm filesto be deleted. It is 
> turned "on" by default.
> 
> Option: features.default-retention-period
> Default Value: 120
> Description: The default retention period for the files.
> 
> Option: features.retention-mode
> Default Value: relax
> Description: The mode of retention (relax/enterprise). It is relax by default.
> 
> Option: features.auto-commit-period
> Default Value: 180
> Description: Auto commit period for the files.
> 
> Option: storage.linux-aio
> Default Value: off
> Description: Support for native Linux AIO
> 
> Option: storage.batch-fsync-mode
> Default Value: reverse-fsync
> Description: Possible values:
>       - syncfs: Perform one syncfs() on behalf of a batch of fsyncs.
>       - syncfs-single-fsync: Perform one syncfs() on behalf of a batch of 
> fsyncs and one fsync() per batch.
>       - syncfs-reverse-fsync: Perform one syncfs() on behalf of a batch of 
> fsyncs and fsync() each file in the batch in reverse order.
>       - reverse-fsync: Perform fsync() of each file in the batch in reverse 
> order.
> 
> Option: storage.batch-fsync-delay-usec
> Default Value: 0
> Description: Num of usecs to wait for aggregating fsync requests
> 
> Option: storage.owner-uid
> Default Value: -1
> Description: Support for setting uid of brick's owner
> 
> Option: storage.owner-gid
> Default Value: -1
> Description: Support for setting gid of brick's owner
> 
> Option: storage.node-uuid-pathinfo
> Default Value: off
> Description: return glusterd's node-uuid in pathinfo xattr string instead of 
> hostname
> 
> Option: storage.health-check-interval
> Default Value: 30
> Description: Interval in seconds for a filesystem health check, set to 0 to 
> disable
> 
> Option: storage.build-pgfid
> Default Value: off
> Description: Enable placeholders for gfid to path conversion
> 
> Option: storage.gfid2path-separator
> Default Value: :
> Description: Path separator for glusterfs.gfidtopath virt xattr
> 
> Option: storage.reserve
> Default Value: 1
> Description: Percentage of disk space to be reserved. Set to 0 to disable
> 
> Option: storage.force-create-mode
> Default Value: 0000
> Description: Mode bit permission that will always be set on a file.
> 
> Option: storage.force-directory-mode
> Default Value: 0000
> Description: Mode bit permission that will be always set on directory
> 
> Option: storage.create-mask
> Default Value: 0777
> Description: Any bit not set here will be removed from the modes set on a 
> file when it is created.
> 
> Option: storage.create-directory-mask
> Default Value: 0777
> Description: Any bit not set here will be removed from the modes set on a 
> directory when it is created.
> 
> Option: storage.max-hardlinks
> Default Value: 100
> Description: max number of hardlinks allowed on any one inode.
> 0 is unlimited, 1 prevents any hardlinking at all.
> 
> Option: features.ctime
> Default Value: on
> Description: When this option is enabled, time attributes (ctime, mtime, 
> atime) are stored in an xattr to keep them consistent across the replica and 
> distribute set. The time attributes stored at the backend are not considered.
> 
> Option: config.gfproxyd
> Default Value: off
> Description: If this option is enabled, the proxy client daemon called 
> gfproxyd will be started on all the trusted storage pool nodes
> 
> Option: cluster.server-quorum-type
> Default Value: none
> Description: It can be set to none or server. When set to server, this option 
> enables the specified volume to participate in the server-side quorum. This 
> feature is on the server-side i.e. in glusterd. Whenever the glusterd on a 
> machine observes that the quorum is not met, it brings down the bricks to 
> prevent data split-brains. When the network connections are brought back up 
> and the quorum is restored, the bricks in the volume are brought back up.
> 
> Option: cluster.server-quorum-ratio
> Default Value: (null)
> Description: Sets the quorum percentage for the trusted storage pool.
> 
> Option: changelog.changelog-barrier-timeout
> Default Value: 120
> Description: After 'timeout' seconds since the time 'barrier' option was set 
> to "on", unlink/rmdir/rename  operations are no longer blocked and previously 
> blocked fops are allowed to go through
> 
> Option: features.barrier-timeout
> Default Value: 120
> Description: After 'timeout' seconds since the time 'barrier' option was set 
> to "on", acknowledgements to file operations are no longer blocked and 
> previously blocked acknowledgements are sent to the application
> 
> Option: features.trash
> Default Value: off
> Description: Enable/disable trash translator
> 
> Option: features.trash-dir
> Default Value: .trashcan
> Description: Directory for trash files
> 
> Option: features.trash-eliminate-path
> Default Value: (null)
> Description: Eliminate paths to be excluded from trashing
> 
> Option: features.trash-max-filesize
> Default Value: 5MB
> Description: Maximum size of file that can be moved to trash
> 
> Option: features.trash-internal-op
> Default Value: off
> Description: Enable/disable trash translator for internal operations
> 
> Option: cluster.enable-shared-storage
> Default Value: disable
> Description: Create and mount the shared storage volume 
> (gluster_shared_storage) at /var/run/gluster/shared_storage on enabling this 
> option. Unmount and delete the shared storage volume on disabling this 
> option.
> 
> Option: locks.trace
> Default Value: off
> Description: Trace the different lock requests to logs.
> 
> Option: locks.mandatory-locking
> Default Value: off
> Description: Specifies the mandatory-locking mode. Valid options are 'file' 
> to use linux style mandatory locks, 'forced' to use volume strictly under 
> mandatory lock semantics only and 'optimal' to treat advisory and mandatory 
> locks separately on their own.
> 
> Option: cluster.quorum-reads
> Default Value: no
> Description: This option has been removed. Reads are not allowed if quorum is 
> not met.
> 
> Option: features.timeout
> Default Value: (null)
> Description: Specifies the number of seconds the quiesce translator will wait 
> for a CHILD_UP event before force-unwinding the frames it has currently 
> stored for retry.
> 
> Option: features.failover-hosts
> Default Value: (null)
> Description: It is a comma separated list of hostnames/IP addresses. It 
> specifies the list of hosts where the gfproxy daemons are running, to which 
> thin clients can fail over.
> 
> Option: features.shard
> Default Value: off
> Description: enable/disable sharding translator on the volume.
> 
> Option: features.shard-block-size
> Default Value: 64MB
> Description: The size unit used to break a file into multiple chunks
> 
> Option: features.shard-deletion-rate
> Default Value: 100
> Description: The number of shards to send deletes on at a time
> 
> Option: features.cache-invalidation
> Default Value: off
> Description: When "on", sends cache-invalidation notifications.
> 
> Option: features.cache-invalidation-timeout
> Default Value: 60
> Description: After 'timeout' seconds since the time client accessed any file, 
> cache-invalidation notifications are no longer sent to that client.
> 
> Option: features.leases
> Default Value: off
> Description: When "on", enables leases support
> 
> Option: features.lease-lock-recall-timeout
> Default Value: 60
> Description: After 'timeout' seconds since the recall_lease request has been 
> sent to the client, the lease lock will be forcefully purged by the server.
> 
> Option: disperse.background-heals
> Default Value: 8
> Description: This option can be used to control number of parallel heals
> 
> Option: disperse.heal-wait-qlength
> Default Value: 128
> Description: This option can be used to control number of heals that can wait
> 
> Option: dht.force-readdirp
> Default Value: on
> Description: This option, if set to ON, forces the use of readdirp, and hence 
> also displays the stats of the files.
> 
> Option: disperse.read-policy
> Default Value: gfid-hash
> Description: inode-read fops happen only on 'k' number of bricks in n=k+m 
> disperse subvolume. 'round-robin' selects the read subvolume using 
> round-robin algo. 'gfid-hash' selects read subvolume based on hash of the 
> gfid of that file/directory.
> 
> Option: cluster.shd-max-threads
> Default Value: 1
> Description: Maximum number of parallel heals SHD can do per local brick. 
> This can substantially lower heal times, but can also crush your bricks if 
> you don't have the storage hardware to support this.
> 
> Option: cluster.shd-wait-qlength
> Default Value: 1024
> Description: This option can be used to control number of heals that can wait 
> in SHD per subvolume
> 
> Option: cluster.locking-scheme
> Default Value: full
> Description: If this option is set to granular, self-heal will stop being 
> compatible with afr-v1, which helps afr be more granular while self-healing
> 
> Option: cluster.granular-entry-heal
> Default Value: no
> Description: If this option is enabled, self-heal will resort to granular way 
> of recording changelogs and doing entry self-heal.
> 
> Option: features.locks-revocation-secs
> Default Value: 0
> Description: Maximum time a lock can be taken out before being revoked.
> 
> Option: features.locks-revocation-clear-all
> Default Value: false
> Description: If set to true, will revoke BOTH granted and blocked (pending) 
> lock requests if a revocation threshold is hit.
> 
> Option: features.locks-revocation-max-blocked
> Default Value: 0
> Description: A number of blocked lock requests after which a lock will be 
> revoked to allow the others to proceed. Can be used in conjunction with 
> revocation-clear-all.
> 
> Option: features.locks-notify-contention
> Default Value: no
> Description: When this option is enabled and a lock request conflicts with a 
> currently granted lock, an upcall notification will be sent to the current 
> owner of the lock to request it to be released as soon as possible.
> 
> Option: features.locks-notify-contention-delay
> Default Value: 5
> Description: This value determines the minimum amount of time (in seconds) 
> between upcall contention notifications on the same inode. If multiple lock 
> requests are received during this period, only one upcall will be sent.
> 
> Option: disperse.shd-max-threads
> Default Value: 1
> Description: Maximum number of parallel heals SHD can do per local brick.  
> This can substantially lower heal times, but can also crush your bricks if 
> you don't have the storage hardware to support this.
> 
> Option: disperse.shd-wait-qlength
> Default Value: 1024
> Description: This option can be used to control number of heals that can wait 
> in SHD per subvolume
> 
> Option: disperse.cpu-extensions
> Default Value: auto
> Description: force the cpu extensions to be used to accelerate the galois 
> field computations.
> 
> Option: disperse.self-heal-window-size
> Default Value: 1
> Description: Maximum number of blocks (128KB each) per file for which the 
> self-heal process would be applied simultaneously.
> 
> Option: cluster.use-compound-fops
> Default Value: no
> Description: This option exists only for backward compatibility and 
> configuring it doesn't have any effect
> 
> Option: performance.parallel-readdir
> Default Value: off
> Description: If this option is enabled, the readdir operation is performed in 
> parallel on all the bricks, thus improving the performance of readdir. Note 
> that the performance improvement is higher in large clusters
> 
> Option: performance.rda-request-size
> Default Value: 131072
> Description: size of buffer in readdirp calls initiated by readdir-ahead 
> 
> Option: performance.rda-cache-limit
> Default Value: 10MB
> Description: maximum size of cache consumed by readdir-ahead xlator. This 
> value is global and total memory consumption by readdir-ahead is capped by 
> this value, irrespective of the number/size of directories cached
> 
> Option: performance.nl-cache-positive-entry
> Default Value: (null)
> Description: enable/disable storing of entries that were looked up and found 
> to be present in the volume, so that lookups on non-existent files are served 
> from the cache.
> 
> Option: performance.nl-cache-limit
> Default Value: 131072
> Description: The value over which caching will be disabled for a while and 
> the cache is cleared based on LRU.
> 
> Option: performance.nl-cache-timeout
> Default Value: 60
> Description: Time period after which cache has to be refreshed
> 
> Option: cluster.brick-multiplex
> Default Value: off
> Description: This global option can be used to enable/disable brick 
> multiplexing. Brick multiplexing ensures that compatible brick instances can 
> share one single brick process.
> 
> Option: cluster.max-bricks-per-process
> Default Value: 250
> Description: This option can be used to limit the number of brick instances 
> per brick process when brick-multiplexing is enabled. If not explicitly set, 
> this tunable is set to 0 which denotes that brick-multiplexing can happen 
> without any limit on the number of bricks per process. Also this option can't 
> be set when the brick-multiplexing feature is disabled.
> 
> Option: cluster.halo-enabled
> Default Value: False
> Description: Enable Halo (geo) replication mode.
> 
> Option: cluster.halo-shd-max-latency
> Default Value: 99999
> Description: Maximum latency for shd halo replication in msec.
> 
> Option: cluster.halo-nfsd-max-latency
> Default Value: 5
> Description: Maximum latency for nfsd halo replication in msec.
> 
> Option: cluster.halo-max-latency
> Default Value: 5
> Description: Maximum latency for halo replication in msec.
> 
> Option: cluster.halo-max-replicas
> Default Value: 99999
> Description: The maximum number of halo replicas; replicas beyond this value 
> will be written asynchronously via the SHD.
> 
> Option: cluster.halo-min-replicas
> Default Value: 2
> Description: The minimum number of halo replicas before adding out-of-region 
> replicas.
> 
> Option: features.ctime
> Default Value: on
> Description: enable/disable utime translator on the volume.
> 
> Option: ctime.noatime
> Default Value: on
> Description: enable/disable noatime option with ctime enabled.
> 
> Option: feature.cloudsync-storetype
> Default Value: (null)
> Description: Defines which remote store is enabled
> 

________



Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
