[Gluster-users] Heketi v8.0.0 available for download

2018-09-12 Thread John Mulligan
Heketi v8.0.0 is now available [1].

This is the new stable version of Heketi.

Major additions in this release:
* Resumable delete of Volumes and Block Volumes
* Server administrative modes
* Throttling of concurrent operations
* Support configuration of block hosting volume options
* Heketi cli command to fetch operation counters
* Support setting restrictions on block hosting volumes, to prevent block 
hosting volumes from taking new block volumes
* Add an option to destroy data while adding a device to a node
* Heketi Container: load an initial topology if HEKETI_TOPOLOGY_FILE is set
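
As an illustration of that last item, loading an initial topology at container
start could look roughly like the following. This is only a sketch: the image
name/tag and the host paths are assumptions, not an official invocation.

  docker run -d --name heketi \
      -v /etc/heketi:/etc/heketi \
      -e HEKETI_TOPOLOGY_FILE=/etc/heketi/topology.json \
      heketi/heketi:8

Here /etc/heketi on the host is assumed to contain the heketi.json config and
the topology.json to be loaded.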

This release contains numerous stability and bug fixes. A more detailed 
changelog is available at the release page [1].


Special thanks to Michael Adam and Raghavendra Talur for assisting me with 
creating my first release.

-- John M. on behalf of the Heketi team


[1] https://github.com/heketi/heketi/releases/tag/v8.0.0


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread John Strunk
You're right.
I'm not the person to answer on the specifics of Ubuntu, so hopefully one
of our packagers will chime in.

I have opened the following bug against the documentation:
https://bugzilla.redhat.com/show_bug.cgi?id=1628369
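
That said, my rough understanding of the Ubuntu side is that you switch PPAs and
then upgrade the packages, something along these lines (an untested sketch; the
exact PPA names may differ, so please wait for a packager to confirm):

  # remove the old PPA and add the one for the target release
  sudo add-apt-repository --remove ppa:gluster/glusterfs-3.13
  sudo add-apt-repository ppa:gluster/glusterfs-4.1
  sudo apt update
  # on a live cluster, upgrade one server at a time per the upgrade guide
  sudo apt install glusterfs-server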

-John


On Wed, Sep 12, 2018 at 4:05 PM Nicolas SCHREVEL wrote:

> Hi John,
>
> But there is no info about 3.13:
> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
>
> "Upgrade procedure to Gluster 4.1, from Gluster 4.0.x, 3.12.x, and 3.10.x
>
> NOTE: Upgrade procedure remains the same as with 3.12 and 3.10
> releases"
>
> And there is no info about "PPA" management.
> Do I have to remove the 3.13 PPA first, or install over it...?
>
> Whereas when installing on Ubuntu there is information in the documentation:
>
> https://docs.gluster.org/en/latest/Install-Guide/Install/#for-ubuntu
>
> Nicolas Schrevel
>
> Le 12/09/2018 à 21:16, John Strunk a écrit :
>
> The upgrade guide covers live upgrade.
> https://docs.gluster.org/en/latest/Upgrade-Guide/
>
> -John
>
>
> On Wed, Sep 12, 2018 at 3:10 PM Nicolas SCHREVEL wrote:
>
>> Hi,
>>
>> I have two clusters with 3 bricks on Ubuntu 16.04 with the GlusterFS 3.13 PPA.
>>
>> What is the best way to upgrade from 3.13.2 to 4.0?
>>
>> As always, I found a lot of tutorials on installing GlusterFS, but not on
>> maintaining and upgrading a live system...
>>
>> Thanks
>>
>> --
>> Nicolas SCHREVEL
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread Nicolas SCHREVEL

Hi John,

But there is no info about 3.13: 
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/


"Upgrade procedure to Gluster 4.1, from Gluster 4.0.x, 3.12.x, and 3.10.x

    NOTE: Upgrade procedure remains the same as with 3.12 and 3.10 
releases"


And there is no info about "PPA" management.
Do I have to remove the 3.13 PPA first, or install over it...?

Whereas when installing on Ubuntu there is information in the documentation:

https://docs.gluster.org/en/latest/Install-Guide/Install/#for-ubuntu

Nicolas Schrevel

Le 12/09/2018 à 21:16, John Strunk a écrit :

The upgrade guide covers live upgrade.
https://docs.gluster.org/en/latest/Upgrade-Guide/

-John


On Wed, Sep 12, 2018 at 3:10 PM Nicolas SCHREVEL <nicolas.schre...@l3ia.fr> wrote:


Hi,

I have two clusters with 3 bricks on Ubuntu 16.04 with the GlusterFS
3.13 PPA.

What is the best way to upgrade from 3.13.2 to 4.0?

As always, I found a lot of tutorials on installing GlusterFS, but not on
maintaining and upgrading a live system...

Thanks

-- 
Nicolas SCHREVEL



___
Gluster-users mailing list
Gluster-users@gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread John Strunk
The upgrade guide covers live upgrade.
https://docs.gluster.org/en/latest/Upgrade-Guide/
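
In short it is a rolling, one-server-at-a-time procedure. Very roughly (the guide
is authoritative; this is just the shape of it, and <volname> is a placeholder):

  # on each server in turn:
  killall glusterfs glusterfsd glusterd
  # upgrade the gluster packages with your package manager
  systemctl start glusterd
  # wait for self-heal to finish before moving on to the next server
  gluster volume heal <volname> info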

-John


On Wed, Sep 12, 2018 at 3:10 PM Nicolas SCHREVEL wrote:

> Hi,
>
> I have two clusters with 3 bricks on Ubuntu 16.04 with the GlusterFS 3.13 PPA.
>
> What is the best way to upgrade from 3.13.2 to 4.0?
>
> As always, I found a lot of tutorials on installing GlusterFS, but not on
> maintaining and upgrading a live system...
>
> Thanks
>
> --
> Nicolas SCHREVEL
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Upgrade from 3.13 ?

2018-09-12 Thread Nicolas SCHREVEL

Hi,

I have two clusters with 3 bricks on Ubuntu 16.04 with the GlusterFS 3.13 PPA.

What is the best way to upgrade from 3.13.2 to 4.0?

As always, I found a lot of tutorials on installing GlusterFS, but not on 
maintaining and upgrading a live system...


Thanks

--
Nicolas SCHREVEL


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] vfs_gluster broken

2018-09-12 Thread Terry McGuire
Hey Anoop.  Thanks for looking into this.  Responses inline:

> On Sep 11, 2018, at 23:42, Anoop C S  wrote:
> 
> On Tue, 2018-09-11 at 15:10 -0600, Terry McGuire wrote:
>> Hello list.  I had happily been sharing a Gluster volume with Samba using 
>> vfs_gluster, but it has
>> recently stopped working right.  I think it might have been after updating 
>> Samba from 4.6.2 to
>> 4.7.1 (as part of updating CentOS 7.4 to 7.5). The shares suffer a variety 
>> of weird issues,
>> including:
>> 
>> - sporadic connection refusals (credentials are accepted as valid, but 
>> volume is unavailable)
> 
> Does that mean that, after authentication, the share is not listed at all?

On a Mac, after the auth dialog disappears (suggesting the auth was valid), a 
dialog appears saying the volume is unavailable.  Can’t recall what the 
behaviour on Windows is, but it would be equivalent.  (This error doesn’t 
happen often, so I can’t quickly reproduce it, and I can’t even quite be sure 
it’s related to this problem, but I suspect it is.)

> 
>> - on Mac, when attempting to write a file: "The operation can’t be completed 
>> because an unexpected
>> error occurred (error code -50)."
> 
> How is this write performed? via Finder or via command-line?

That error appears when using the Finder.  It happens pretty much all the time, 
and is the clearest symptom of this problem.  Using the command-line, anything 
that attempts I/O with the share gives an “Invalid argument” error:

Mac:~ root# ls /Volumes/
Macintosh HD    module

Mac:~ root# ls -l /Volumes/
ls: module: Invalid argument
total 8
lrwxr-xr-x  1 root  wheel  1 24 Aug 15:48 Macintosh HD -> /

Mac:~ root# touch /Volumes/module/test
touch: /Volumes/module/test: Invalid argument
Mac:~ root# 

> 
>> - on Windows, sometimes when writing and sometimes when reading: "Z:\ is not 
>> accessible. The
>> parameter is incorrect"
> 
> How is this write performed? via Explorer or Powershell?

Explorer.
> 
>> -on Mac and Windows, the contents of the volume in Finder/Explorer windows 
>> sometimes disappears,
>> sometimes reappearing later, sometimes not.
>> - on Mac (and similar on Windows), volume icon sometimes disappears - maybe 
>> the volume unmounts,
>> but it's unclear.
> 
> Is this a clustered Samba setup i.e, with CTDB for high availability?

Not clustered.  Just a plain vanilla Samba.
> 
>> All these issues vanish when I switch to sharing the FUSE-mounted volume, 
>> but, of course, I lose
>> the advantages of vfs_gluster.
> 
> Can you please attach the output of `testparm -s` so as to look through how 
> Samba is setup?

From our test server (“nomodule-nofruit” is currently the only well-behaved 
share):

[root@mfsuat-01 ~]# testparm -s
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[share1]"
Processing section "[share2]"
Processing section "[nomodule]"
Processing section "[nomodule-nofruit]"
Processing section "[module]"
Processing section "[IPC$]"
WARNING: No path in service IPC$ - making it unavailable!
NOTE: Service IPC$ is flagged unavailable.
Loaded services file OK.
idmap range not specified for domain '*'
ERROR: Invalid idmap range for domain *!

WARNING: You have some share names that are longer than 12 characters.
These may not be accessible to some older clients.
(Eg. Windows9x, WindowsMe, and smbclient prior to Samba 3.0.)
WARNING: some services use vfs_fruit, others don't. Mounting them in 
conjunction on OS X clients results in undefined behaviour.

Server role: ROLE_DOMAIN_MEMBER

# Global parameters
[global]
log file = /var/log/samba/log.%m
map to guest = Bad User
max log size = 50
realm = .AD.UALBERTA.CA
security = ADS
workgroup = STS
glusterfs:volume = mfs1
idmap config * : backend = tdb
access based share enum = Yes
force create mode = 0777
force directory mode = 0777
include = /mfsmount/admin/etc/mfs/smb_shares.conf
kernel share modes = No
read only = No
smb encrypt = desired
vfs objects = glusterfs


[share1]
path = /share1
valid users = @mfs-...@.ad.ualberta.ca


[share2]
path = /share2
valid users = @mfs-test-gr...@.ad.ualberta.ca


[nomodule]
kernel share modes = Yes
path = /mfsmount/share1
valid users = @mfs-...@.ad.ualberta.ca
vfs objects = fruit streams_xattr


[nomodule-nofruit]
kernel share modes = Yes
path = /mfsmount/share1
valid users = @mfs-...@.ad.ualberta.ca
vfs objects = 


[module]
path = /share1
valid users = @mfs-...@.ad.ualberta.ca


[IPC$]
available = No
vfs objects = 
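
If it would help, I believe the gluster VFS module also has its own logging knobs
that I could turn up on the [module] share, something like the two lines below.
The log path is arbitrary and the option names are my best understanding of
vfs_glusterfs, so they may need checking:

glusterfs:logfile = /var/log/samba/glusterfs-mfs1.log
glusterfs:loglevel = 7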


> 
>> My gluster version initially was 3.10.12.  I’ve since updated to gluster 
>> 3.12.13, but the symptoms
>> are the same.
>> 
>> Does this sound familiar to anyone?
> 
> All mentioned symptoms point towards a 

[Gluster-users] Minutes from Community Meeting, September 12th, 15:00 UTC

2018-09-12 Thread Amye Scavarda
Attendees: amye, sankarshan, jstrunk, loadtheacc, spisla80, shyam

Agenda:
How are the planned releases coming along? [sankarshan - for someone from
TLC]
See email from 10 September (
https://lists.gluster.org/pipermail/gluster-devel/2018-September/055374.html),
but tracking toward an early October release for 5. Needs more documentation,
but the release is focused more on stability and code health.

The “GCS” email barely had any discussion on mailing lists - are there
unresolved questions? [sankarshan - for audience]
This is likely a Vijay question, will hold for next meeting on 26 Sept.

Is it possible to have someone from Glusto test framework team share
current topics/roadmap with community [sankarshan - for a Glusto maintainer]
- Loadtheacc (Jonathan Holloway) to provide an update on gluster-devel.

mountpoint.io - going around the table feedback? [sankarshan - for
participants if folks do join]
- amye to start a thread on -users

Other items:
CSI from Jstrunk: Work is going on for a file-based (FUSE) CSI driver,
targeting GD2. As we get a GCS implementation available for initial
preview, there should be more announcements about this.
Next Gluster Summit EU: Nothing planned, but DevConf.cz and FOSDEM are
coming up in Jan/Feb 2019.

Next community meeting is 26 September in #gluster-meeting, 15:00 UTC.

On Tue, Sep 11, 2018 at 5:00 PM Amye Scavarda  wrote:

> We'll be in #gluster-meeting on IRC, agenda lives in:
> https://bit.ly/gluster-community-meetings
> No agenda items yet, maybe you have some?
> - amye
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Failures during rebalance on gluster distributed disperse volume

2018-09-12 Thread Mauro Tridici
Dear All,

I recently added 3 servers (each one with 12 bricks) to an existing Gluster 
Distributed Disperse Volume.
The volume extension completed without error, and I had already executed the 
rebalance procedure with the fix-layout option with no problem.
I just launched the rebalance procedure without the fix-layout option, but, as 
you can see in the output below, some failures have been detected.

[root@s01 glusterfs]# gluster v rebalance tier2 status
        Node   Rebalanced-files          size       scanned      failures       skipped         status    run time in h:m:s
   ---------   ----------------   -----------   -----------   -----------   -----------   ------------   ------------------
   localhost              71176         3.2MB       2137557       1530391          8128    in progress             13:59:05
     s02-stg                  0        0Bytes             0             0             0      completed             11:53:28
     s03-stg                  0        0Bytes             0             0             0      completed             11:53:32
     s04-stg                  0        0Bytes             0             0             0      completed              0:00:06
     s05-stg                 15        0Bytes         17055             0            18      completed             10:48:01
     s06-stg                  0        0Bytes             0             0             0      completed              0:00:06
Estimated time left for rebalance to complete :        0:46:53
volume rebalance: tier2: success

In the volume rebalance log file, I detected a lot of error messages similar to 
the following ones:

[2018-09-12 13:15:50.756703] E [MSGID: 0] [dht-rebalance.c:1696:dht_migrate_file] 0-tier2-dht: Create dst failed on - tier2-disperse-6 for file - /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2005-12_grid.nc
[2018-09-12 13:15:50.757025] E [MSGID: 109023] [dht-rebalance.c:2733:gf_defrag_migrate_single_file] 0-tier2-dht: migrate-data failed for /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2005-12_grid.nc
[2018-09-12 13:15:50.759183] E [MSGID: 109023] [dht-rebalance.c:844:__dht_rebalance_create_dst_file] 0-tier2-dht: fallocate failed for /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2005-09_grid.nc on tier2-disperse-9 (Operation not supported)
[2018-09-12 13:15:50.759206] E [MSGID: 0] [dht-rebalance.c:1696:dht_migrate_file] 0-tier2-dht: Create dst failed on - tier2-disperse-9 for file - /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2005-09_grid.nc
[2018-09-12 13:15:50.759536] E [MSGID: 109023] [dht-rebalance.c:2733:gf_defrag_migrate_single_file] 0-tier2-dht: migrate-data failed for /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2005-09_grid.nc
[2018-09-12 13:15:50.777219] E [MSGID: 109023] [dht-rebalance.c:844:__dht_rebalance_create_dst_file] 0-tier2-dht: fallocate failed for /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2006-01_grid.nc on tier2-disperse-10 (Operation not supported)
[2018-09-12 13:15:50.777241] E [MSGID: 0] [dht-rebalance.c:1696:dht_migrate_file] 0-tier2-dht: Create dst failed on - tier2-disperse-10 for file - /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2006-01_grid.nc
[2018-09-12 13:15:50.777676] E [MSGID: 109023] [dht-rebalance.c:2733:gf_defrag_migrate_single_file] 0-tier2-dht: migrate-data failed for /CSP/sp1/CESM/archive/sps_200508_003/atm/hist/postproc/sps_200508_003.cam.h0.2006-01_grid.nc
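
(In case it is useful, the occurrences can be counted with something like the 
following; I am assuming the default rebalance log location for the volume:)

grep -c "fallocate failed" /var/log/glusterfs/tier2-rebalance.log
grep -c "migrate-data failed" /var/log/glusterfs/tier2-rebalance.log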

Could you please help me to understand what is happening and how to solve it?

Our Gluster implementation is based on Gluster v.3.10.5

Thank you in advance,
Mauro

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 4.1.x geo-replication "changelogs could not be processed completely" issue

2018-09-12 Thread Kotte, Christian (Ext)
> There seems to be a bug, please raise a bug report. For now, as a workaround, add 
> the following line at the end of the configuration file on all the master nodes
> with any editor. After adding it on all master nodes, stop and start geo-rep.

> rsync-options = --ignore-missing-args

> configuration file: 
> /var/lib/glusterd/geo-replication/<mastervol>_<slavehost>_<slavevol>/gsyncd.conf

I did that on the master and interim master. Nothing has changed.

Just to make sure. Here is what I did:
/var/lib/glusterd/geo-replication/<mastervol>_<slavehost>_<slavevol>/gsyncd.conf
[vars]
stime-xattr-prefix = 
trusted.glusterfs.d0b96093-d48d-4b92-bd91-7c88a3c33dcb.10dc1a40-70b2-4f3c-b874-2653fa778134
rsync-options = --ignore-missing-args

gluster volume geo-replication <mastervol> geoaccount@<slave_node>::<slavevol> stop
gluster volume geo-replication <mastervol> geoaccount@<slave_node>::<slavevol> start
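
(If editing the file by hand is not the intended way, I suppose the same option 
could also be set via the geo-rep config interface, something like the line below, 
though I have not verified that the key name matches:)

gluster volume geo-replication <mastervol> geoaccount@<slave_node>::<slavevol> config rsync-options "--ignore-missing-args"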

>> I read somewhere that if I delete the geo-replication with 
>> “reset-sync-time”, the changelogs are cleared, but this doesn’t happen.
> changelogs are not cleared, but in the new geo-rep session, the old 
> changelogs are not used for syncing.

When I delete the geo-replication with “delete reset-sync-time” and then create 
it again, I get the same warnings and errors. However, the changelog names are 
different. It looks like a new changelog file is created every time, and I 
already have hundreds of CHANGELOG.* files. Is this normal behaviour?

# gluster volume geo-replication <mastervol> geoaccount@<slavehost>::<slavevol> delete reset-sync-time
Deleting geo-replication session between <mastervol> & geoaccount@<slavehost>::<slavevol> has been successful
# gluster volume geo-replication <mastervol> geoaccount@<slavehost>::<slavevol> create
Total available size of master is greater than available size of slave
<slavehost>::<slavevol> is not empty. Please delete existing files in <slavehost>::<slavevol> and retry, or use force to continue without deleting the existing files.
geo-replication command failed
# gluster volume geo-replication <mastervol> geoaccount@<slavehost>::<slavevol> create force
Creating geo-replication session between <mastervol> & geoaccount@<slavehost>::<slavevol> has been successful
# gluster volume geo-replication <mastervol> geoaccount@<slavehost>::<slavevol> start
Starting geo-replication session between <mastervol> & geoaccount@<slavehost>::<slavevol> has been successful

[2018-09-12 10:48:42.957526] I [master(worker /bricks/brick1/brick):1460:crawl] _GMaster: slave's time stime=(1536749259, 0)
[2018-09-12 10:48:45.2098] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.4931 num_files=1 job=2 return_code=23
[2018-09-12 10:48:45.2836] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:47.48713] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5327 num_files=1 job=1 return_code=23
[2018-09-12 10:48:47.50287] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:49.110692] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.4517 num_files=1 job=3 return_code=23
[2018-09-12 10:48:49.112823] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:51.505287] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.4960 num_files=1 job=2 return_code=23
[2018-09-12 10:48:51.507086] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:53.557701] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5014 num_files=1 job=1 return_code=23
[2018-09-12 10:48:53.559295] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:55.642981] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5239 num_files=1 job=3 return_code=23
[2018-09-12 10:48:55.644567] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:48:58.44553] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5315 num_files=1 job=2 return_code=23
[2018-09-12 10:48:58.46200] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:49:00.74062] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5080 num_files=1 job=1 return_code=23
[2018-09-12 10:49:00.75681] W [master(worker /bricks/brick1/brick):1346:process] _GMaster: incomplete sync, retrying changelogs files=['CHANGELOG.1536749320']
[2018-09-12 10:49:02.155219] I [master(worker /bricks/brick1/brick):1944:syncjob] Syncer: Sync Time Taken duration=1.5041 num_files=1 

[Gluster-users] Distributed-Replicated with mismatch number of bricks

2018-09-12 Thread Jose V. Carrión
Hi,

I would like to implement a distributed-replicated architecture, but some
of my nodes have a different number of bricks. My idea is to do a replica 2
across 6 nodes (all bricks with the same size).

My gluster architecture is:

Name node  | Brick 1 | Brick 2
------------------------------
Node1      | R1      |
Node2      | R1      |
Node3      | R2      | R3
Node4      | R2      | R3
Node5      | R4      | R5
Node6      | R4      | R5

My questions are:

1. Would this be correct?

gluster volume create vol0 replica 2 transport tcp \
node1:/data/gluster/vol0/brick1 node2:/data/gluster/vol0/brick1 \
node3:/data/gluster/vol0/brick1 node4:/data/gluster/vol0/brick1 \
node3:/data/gluster/vol0/brick2 node4:/data/gluster/vol0/brick2 \
node5:/data/gluster/vol0/brick1 node6:/data/gluster/vol0/brick1 \
node5:/data/gluster/vol0/brick2 node6:/data/gluster/vol0/brick2
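
I assume I could then double-check how the replica pairs were formed with:

gluster volume info vol0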

2. If I need to add a new node in the future, will I be able to extend my
distributed-replicated system? Something like:

Name node  | Brick 1 | Brick 2
------------------------------
Node1      | R1      |
Node2      | R2      |
Node3      | R3      | R4
Node4      | R3      | R4
Node5      | R5      | R6
Node6      | R5      | R6
Node7      | R1      | R2

Node7  |  R1  | R2   |


gluster volume create vol0 replica 2 transport tcp \
node1:/data/gluster/vol0/brick1 node7:/data/gluster/vol0/brick1 \
node2:/data/gluster/vol0/brick1 node7:/data/gluster/vol0/brick2 \
node3:/data/gluster/vol0/brick1 node4:/data/gluster/vol0/brick1 \
node3:/data/gluster/vol0/brick2 node4:/data/gluster/vol0/brick2 \
node5:/data/gluster/vol0/brick1 node6:/data/gluster/vol0/brick1 \
node5:/data/gluster/vol0/brick2 node6:/data/gluster/vol0/brick2
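
Or, on the existing volume, I guess the extension would rather be done with
add-brick plus a rebalance. A rough sketch (the pairing is only an illustration
and assumes a hypothetical second new node, node8, since both bricks of a replica
pair should not sit on the same node):

gluster volume add-brick vol0 \
node7:/data/gluster/vol0/brick1 node8:/data/gluster/vol0/brick1
gluster volume rebalance vol0 start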

3. My nodes will also mount the volumes (acting as gluster clients), so in
order to optimize read performance, which is the better setup:
distributed-replicated or replicated only?


Thanks in advance.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users