Re: [Gluster-users] geo-replication not syncing files...

2015-11-10 Thread Wade Fitzpatrick
Your ssh commands connect to port 2503 - is that port listening on the 
slaves?
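
A quick way to verify that end to end (a sketch, reusing the slave host and
key path from the config quoted below):

# from a master node: is anything listening on 2503, and does the geo-rep key work?
nc -zv gluster-wien-02 2503
ssh -p 2503 -i /var/lib/glusterd/geo-replication/secret.pem root@gluster-wien-02 'echo ok'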

Does it use privilege-separation?

Don't force it to changelog without an initial sync using xsync.

The warning "fuse: xlator does not implement release_cbk" was fixed in 
3.6.0alpha1, but it looks like the fix could easily be backported: 
https://github.com/gluster/glusterfs/commit/bca9eab359710eb3b826c6441126e2e56f774df5


On 11/11/2015 3:20 AM, Dietmar Putz wrote:

Hi all,

I need some help with a geo-replication issue...
Recently I upgraded two 6-node distributed-replicated Gluster clusters from 
Ubuntu 12.04.5 LTS to 14.04.3 LTS and from GlusterFS 3.4.7 to 3.5.6.
Since then geo-replication does not start syncing but has remained in the 
state shown in the 'status detail' output below for about 48 hours.


I followed the hints for upgrading with an existing geo-replication setup:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

The master_gfid_file.txt was created and applied to the slave volume, and 
geo-replication was started with the 'force' option.
In the gluster.log on the slave I find thousands of lines with messages like:
".../.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1 (Operation not 
permitted)"

and no files are synced.

I'm not sure what's going on, and since about 40 TB of data were already 
replicated by the old 3.4.7 setup, I am wary of experimenting...

So I have some questions... maybe somebody can give me some hints...

1. As shown in the example below, the trusted.gfid of the same file differs 
between the master and the slave volume. As far as I understood the upgrade 
howto, after applying the master_gfid_file.txt on the slave the GFIDs should 
be the same on master and slave... is that right? (One way to compare them 
is sketched below.)
2. As shown in the config below, the change_detector is 'xsync'. Somewhere I 
read that xsync is used for the initial replication and that it switches to 
'changelog' later, once the entire sync is done. Should I try to set the 
change_detector to 'changelog'? Does that make sense?
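
One way to compare the GFIDs directly (a sketch; run it against the brick
backends, not the client mounts. The master brick path /gluster-export is
taken from the status output below; the slave brick path is a placeholder):

# on a master node
getfattr -n trusted.gfid -e hex /gluster-export/path/to/some/file
# on the corresponding slave node (substitute the real slave brick path)
getfattr -n trusted.gfid -e hex /SLAVE-BRICK-PATH/path/to/some/file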


Any other ideas that could help me solve this problem?

best regards
dietmar




[ 11:10:01 ] - root@gluster-ger-ber-09  ~ $glusterfs --version
glusterfs 3.5.6 built on Sep 16 2015 15:27:30
...
[ 11:11:37 ] - root@gluster-ger-ber-09  ~ $cat 
/var/lib/glusterd/glusterd.info | grep operating-version

operating-version=30501


[ 10:55:35 ] - root@gluster-ger-ber-09  ~ $gluster volume 
geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 status 
detail


MASTER NODE         MASTER VOL    MASTER BRICK     SLAVE                                 STATUS         CHECKPOINT STATUS    CRAWL STATUS    FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-ger-ber-09  ger-ber-01    /gluster-export  gluster-wien-05-int::aut-wien-01      Active         N/A                  Hybrid Crawl    0              8191             0                0                  0
gluster-ger-ber-11  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-10  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-12  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-07  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-08  ger-ber-01    /gluster-export  gluster-wien-04-int::aut-wien-01      Passive        N/A                  N/A             0              0                0                0                  0

[ 10:55:48 ] - root@gluster-ger-ber-09  ~ $


[ 10:56:56 ] - root@gluster-ger-ber-09  ~ $gluster volume 
geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config

special_sync_mode: partial
state_socket_unencoded: 
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.socket
gluster_log_file: 
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log
ssh_command: ssh -p 2503 -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem

ignore_deletes: true
change_detector: xsync
ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/tar_ssh.pem
working_dir: 
/var/run/gluster/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01

remote_gsyncd: /nonexistent/gsyncd
log_file: 
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131

Re: [Gluster-users] 'No data available' at clients, brick xattr ops errors on small I/O -- XFS stripe issue or repeat bug?

2015-11-10 Thread LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Update: I've tried a second cluster with (AFAIK) identical backing storage 
configuration from the LUNs up, identical gluster/xfsprogs/kernel on the 
servers, identical volume setup, and identical kernel/gluster on the clients. 
The reproducer does not fail on the new system. So far I can't find any delta 
between the two clusters' setups other than the brick count (28 bricks across 
8 servers on the failing one, 14 bricks across 4 servers on the new one).

Re: [Gluster-users] [Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi

2015-11-10 Thread satish kondapalli
Hi

I am not running iostat on the network interface; I am running it on the
storage device (SSD card: iostat /dev/nvme0n1 1).
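
One way to test the page-cache theory from the quoted thread (a sketch;
assumes root and the device name above) is to drop the kernel caches
between runs and watch the device again:

sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
# re-run the fio job and watch the NVMe device in another terminal:
iostat -x /dev/nvme0n1 1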

Sateesh

On Tue, Nov 10, 2015 at 1:47 AM, Piotr Rybicki  wrote:

> On 2015-11-10 at 04:01, satish kondapalli wrote:
>
> Hi,
>>
>> I am running a performance test comparing fuse and libgfapi. I have a
>> single node; client and server run on the same node. I have an NVMe SSD
>> device as storage.
>>
>> My volume info::
>>
>> [root@sys04 ~]# gluster vol info
>> Volume Name: vol1
>> Type: Distribute
>> Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.16.71.19:/mnt_nvme/brick1
>> Options Reconfigured:
>> performance.cache-size: 0
>> performance.write-behind: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.strict-o-direct: on
>>
>>
>> fio Job file::
>>
>> [global]
>> direct=1
>> runtime=20
>> time_based
>> ioengine=gfapi
>> iodepth=1
>> volume=vol1
>> brick=172.16.71.19
>> rw=read
>> size=128g
>> bs=32k
>> group_reporting
>> numjobs=1
>> filename=128g.bar
>>
>> While doing a sequential read test, I am not seeing any data transfer on
>> the device with the iostat tool. It looks like the gfapi engine is reading
>> from the cache, because I am reading from the same file with different
>> block sizes.
>>
>> But I disabled io-cache for my volume. Can someone tell me where fio is
>> reading the data from?
>>
>
> Hi.
>
> It is normal not to see traffic on the Ethernet interface when using the
> native RDMA protocol (rather than TCP via IPoIB).
>
> Try perfquery -x to see the traffic counters increase on the RDMA interface.
>
> Regards
> Piotr Rybicki

[Gluster-users] geo-replication not syncing files...

2015-11-10 Thread Dietmar Putz

Hi all,

I need some help with a geo-replication issue...
Recently I upgraded two 6-node distributed-replicated Gluster clusters from 
Ubuntu 12.04.5 LTS to 14.04.3 LTS and from GlusterFS 3.4.7 to 3.5.6.
Since then geo-replication does not start syncing but has remained in the 
state shown in the 'status detail' output below for about 48 hours.


I followed the hints for upgrading with an existing geo-replication setup:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

The master_gfid_file.txt was created and applied to the slave volume, and 
geo-replication was started with the 'force' option.
In the gluster.log on the slave I find thousands of lines with messages like:
".../.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1 (Operation not 
permitted)"

and no files are synced.

I'm not sure what's going on, and since about 40 TB of data were already 
replicated by the old 3.4.7 setup, I am wary of experimenting...

So I have some questions... maybe somebody can give me some hints...

1. As shown in the example below, the trusted.gfid of the same file differs 
between the master and the slave volume. As far as I understood the upgrade 
howto, after applying the master_gfid_file.txt on the slave the GFIDs should 
be the same on master and slave... is that right?
2. As shown in the config below, the change_detector is 'xsync'. Somewhere I 
read that xsync is used for the initial replication and that it switches to 
'changelog' later, once the entire sync is done. Should I try to set the 
change_detector to 'changelog'? Does that make sense? (A sketch of the 
config command follows below.)
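
For reference, if one did want to flip the detector by hand (note that the
reply earlier in this digest advises against forcing changelog before an
initial xsync pass), the geo-replication config interface would presumably
be used like this (a sketch reusing the volume names from below; the value
spelling 'changelog' is my assumption):

# inspect the current value
gluster volume geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config change_detector
# set it explicitly
gluster volume geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config change_detector changelog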


Any other ideas that could help me solve this problem?

best regards
dietmar




[ 11:10:01 ] - root@gluster-ger-ber-09  ~ $glusterfs --version
glusterfs 3.5.6 built on Sep 16 2015 15:27:30
...
[ 11:11:37 ] - root@gluster-ger-ber-09  ~ $cat 
/var/lib/glusterd/glusterd.info | grep operating-version

operating-version=30501


[ 10:55:35 ] - root@gluster-ger-ber-09  ~ $gluster volume 
geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 status detail


MASTER NODE         MASTER VOL    MASTER BRICK     SLAVE                                 STATUS         CHECKPOINT STATUS    CRAWL STATUS    FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-ger-ber-09  ger-ber-01    /gluster-export  gluster-wien-05-int::aut-wien-01      Active         N/A                  Hybrid Crawl    0              8191             0                0                  0
gluster-ger-ber-11  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-10  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-12  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-07  ger-ber-01    /gluster-export  ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
gluster-ger-ber-08  ger-ber-01    /gluster-export  gluster-wien-04-int::aut-wien-01      Passive        N/A                  N/A             0              0                0                0                  0

[ 10:55:48 ] - root@gluster-ger-ber-09  ~ $


[ 10:56:56 ] - root@gluster-ger-ber-09  ~ $gluster volume 
geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config

special_sync_mode: partial
state_socket_unencoded: 
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.socket
gluster_log_file: 
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log
ssh_command: ssh -p 2503 -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem

ignore_deletes: true
change_detector: xsync
ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
working_dir: 
/var/run/gluster/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01

remote_gsyncd: /nonexistent/gsyncd
log_file: 
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.log

socketdir: /var/run
state_file: 
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.status
state_detail_file: 
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%

Re: [Gluster-users] Starting RDMA volume fails

2015-11-10 Thread Jochen Becker

Hi,

It seems the RHEL/CentOS package glusterfs-rdma (from the Gluster repo) does 
not work together with the Mellanox OFED.
I suspect the problem is its dependency on libibverbs, which in turn depends 
on the rdma package of RHEL/CentOS.
Installing glusterfs-rdma might break your Mellanox OFED installation: at 
least on the four systems (with three different kernels) I tried, the 
openibd.service became unusable because of the rdma_cm module installed by 
the rdma package.
Is there something I can do if I want to use GlusterFS but need the Mellanox 
OFED, or do I have to discard the idea of using GlusterFS for my setup?


Best regards,
Jochen


On 09.11.2015 at 23:31, Jochen Becker wrote:

Hi folks,

I have some problems starting a replica volume on a two-node 
InfiniBand setup.
Both systems run the same hardware, and InfiniBand (IPoIB, 
ibverbs) seems to work well.


The OS is CentOS 7.1, freshly installed and updated; Mellanox OFED is in use, 
openibd is running, and both gluster peers are in the connected state with 
each other. Creating the volume was no problem at all, but starting it 
always fails. With the force option the volume seems to start but 
cannot be mounted.


Here are the entries in mnt-bricks-instances.log written during 
the command gluster volume start instances:


[2015-11-09 22:01:00.153360] I [MSGID: 100030] 
[glusterfsd.c:2318:main] 0-/usr/sbin/glusterfsd: Started running 
/usr/sbin/glusterfsd version 3.7.5 (args: /usr/sbin/glusterfsd -s 
compute02 --volfile-id instances.compute02.mnt-bricks-instances -p 
/var/lib/glusterd/vols/instances/run/compute02-mnt-bricks-instances.pid -S 
/var/run/gluster/8f5e59a0b8d5949b51b4c276192b0725.socket --brick-name 
/mnt/bricks/instances -l 
/var/log/glusterfs/bricks/mnt-bricks-instances.log --xlator-option 
*-posix.glusterd-uuid=52109ce5-6173-4d22-bffc-a03c71d24791 
--brick-port 49152 --xlator-option instances-server.listen-port=49152 
--volfile-server-transport=rdma)
[2015-11-09 22:01:00.169326] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2015-11-09 22:01:00.177472] I [graph.c:269:gf_add_cmdline_options] 
0-instances-server: adding option 'listen-port' for volume 
'instances-server' with value '49152'
[2015-11-09 22:01:00.177519] I [graph.c:269:gf_add_cmdline_options] 
0-instances-posix: adding option 'glusterd-uuid' for volume 
'instances-posix' with value '52109ce5-6173-4d22-bffc-a03c71d24791'
[2015-11-09 22:01:00.177826] I [MSGID: 115034] 
[server.c:403:_check_for_auth_option] 0-/mnt/bricks/instances: skip 
format check for non-addr auth option 
auth.login./mnt/bricks/instances.allow
[2015-11-09 22:01:00.177913] I [MSGID: 115034] 
[server.c:403:_check_for_auth_option] 0-/mnt/bricks/instances: skip 
format check for non-addr auth option 
auth.login.9d64b3ec-9d24-41ac-ba84-cf58c67c9b21.password
[2015-11-09 22:01:00.177916] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 2
[2015-11-09 22:01:00.179299] I 
[rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
Configured rpc.outstanding-rpc-limit with value 64
[2015-11-09 22:01:00.181636] W [MSGID: 101002] 
[options.c:957:xl_opt_validate] 0-instances-server: option 
'listen-port' is deprecated, preferred is 
'transport.rdma.listen-port', continuing with correction
[2015-11-09 22:01:00.183742] W [MSGID: 103071] 
[rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event 
channel creation failed [Keine Berechtigung]
[2015-11-09 22:01:00.183782] W [MSGID: 103055] [rdma.c:4899:init] 
0-rdma.instances-server: Failed to initialize IB Device
[2015-11-09 22:01:00.183796] W 
[rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma' 
initialization failed
[2015-11-09 22:01:00.183866] W [rpcsvc.c:1597:rpcsvc_transport_create] 
0-rpc-service: cannot create listener, initing the transport failed
[2015-11-09 22:01:00.183884] W [MSGID: 115045] [server.c:1019:init] 
0-instances-server: creation of listener failed
[2015-11-09 22:01:00.183898] E [MSGID: 101019] 
[xlator.c:428:xlator_init] 0-instances-server: Initialization of 
volume 'instances-server' failed, review your volfile again
[2015-11-09 22:01:00.183912] E [graph.c:322:glusterfs_graph_init] 
0-instances-server: initializing translator failed
[2015-11-09 22:01:00.183921] E [graph.c:661:glusterfs_graph_activate] 
0-graph: init failed
[2015-11-09 22:01:00.184429] W [glusterfsd.c:1236:cleanup_and_exit] 
(-->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x331) [0x7f7b6aee02f1] 
-->/usr/sbin/glusterfsd(glusterfs_process_volfp+0x126) 
[0x7f7b6aedb0f6] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x69) 
[0x7f7b6aeda6d9] ) 0-: received signum (0), shutting down


I think the problem starts at 2015-11-09 22:01:00.183742, where the event 
channel creation with rdma_cm fails. "Keine Berechtigung" is German and 
means something like missing permissions/rights. The rdma_cm module is 
loaded and I can't find any other problem with the InfiniBand or RDMA 
setup. I have

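A few checks that might narrow down the rdma_cm permission error above (a
sketch; the device path is the stock one, and the locked-memory hint is
only a guess):

ls -l /dev/infiniband/   # the rdma_cm event channel is created via /dev/infiniband/rdma_cm
lsmod | grep rdma_cm     # confirm the module is really loaded
ulimit -l                # locked-memory limit; ibverbs wants a generous or unlimited value
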
Re: [Gluster-users] [CentOS-devel] What Gluster versions would you like to see in the CentOS Storage SIG?

2015-11-10 Thread Prasun Gera
It would be nice to have RHGS's equivalent in CentOS, in keeping with the
general theme of CentOS. Any reason why that hasn't been considered?

On Tue, Nov 10, 2015 at 1:25 AM, C.L. Martinez  wrote:

> On 11/10/2015 09:18 AM, Niels de Vos wrote:
>
>> Hi Gluster users running on CentOS!
>>
>> As you may have heard before, we're planning on providing stable Gluster
>> releases and related packages through the CentOS Storage SIG [0]. We
>> would like to know what version of Gluster and which versions of CentOS
>> are most wanted by our users.
>>
>> The current support for Gluster defines 3 stable releases at a time.
>> This means that 3.7, 3.6 and 3.5 are supported by the Gluster Community.
>> Once 3.8 is released, 3.5 will become unsupported and will not receive
>> any updates anymore. 3.8 is planned to be released early 2016 [1].
>>
>> We can provide all Gluster packages for CentOS-7 and 6, but CentOS-5 can
>> only get recent versions of the Gluster client.
>>
>> Now, we want to know which combinations our users would like to see in the
>> CentOS Storage SIG:
>>
>>   - CentOS-7 + GlusterFS 3.7: latest and greatest, will be included
>>   - CentOS-6 + GlusterFS 3.7: very much used release, also included
>>
>>   - CentOS-7 + GlusterFS 3.6: some users, you?
>>   - CentOS-6 + GlusterFS 3.6: some users, you?
>>
>>   - CentOS-7 + GlusterFS 3.5: fewer users, you?
>>   - CentOS-6 + GlusterFS 3.5: fewer users, you?
>>
>>   - CentOS-5 + GlusterFS 3.7 (client only): nobody?
>>   - CentOS-5 + GlusterFS 3.6: nobody?
>>   - CentOS-5 + GlusterFS 3.5: nobody?
>>
>>
>> Please speak up and let us know what versions you depend on for the next few
>> months. You can reply to this email to the list (note that it is
>> x-posted, one mailinglist is sufficient for your reply), directly to me
>> or over IRC in #centos-devel or #gluster.
>>
>> Many thanks,
>> Niels
>>
>>
>> 0. https://wiki.centos.org/SpecialInterestGroup/Storage
>> 1. https://www.gluster.org/community/roadmap/
>>
>>
> Hi all,
>
>  My production environments run Gluster 3.7 on CentOS 7 and 6. I am not
> planning to use CentOS-5.
>
> Thanks.
>
>

[Gluster-users] Cancellation of tomorrow's weekly Gluster Community Meeting

2015-11-10 Thread Niels de Vos
Due to public holidays in India, the majority of regular attendees will
not be attending the weekly meeting tomorrow. Therefore we are cancelling
tomorrow's occurrence.

The next community meeting will take place next week Wednesday on 18
November at 12:00 UTC in #gluster-meeting on Freenode IRC.

Happy Diwali everyone,
Niels



Re: [Gluster-users] [ovirt-users] oVirt host installations failing due to missing GPG key (key location no longer valid)

2015-11-10 Thread Robert Story
On Mon, 9 Nov 2015 16:48:33 -0500 Neal wrote:
NG> I attempted to set up an oVirt host today through the engine UI, and it
NG> kept bombing out on the GPG key, saying it couldn't retrieve it (HTTP
NG> 404). I went to the engine logs and found out that the key that it
NG> can't retrieve is the one for GlusterFS, which lived at
NG> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
NG> according to the logs and
NG> the /etc/yum.repos.d/ovirt-3.6-dependencies.repo file.

I'm seeing this on my oVirt 3.5 hosts as well.


Robert

-- 
Senior Software Engineer @ Parsons



Re: [Gluster-users] [Gluster-devel] iostat not showing data transfer while doing read operation with libgfapi

2015-11-10 Thread Piotr Rybicki

On 2015-11-10 at 04:01, satish kondapalli wrote:

Hi,

I am running a performance test comparing fuse and libgfapi. I have a
single node; client and server run on the same node. I have an NVMe SSD
device as storage.

My volume info::

[root@sys04 ~]# gluster vol info
Volume Name: vol1
Type: Distribute
Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.16.71.19:/mnt_nvme/brick1
Options Reconfigured:
performance.cache-size: 0
performance.write-behind: off
performance.read-ahead: off
performance.io-cache: off
performance.strict-o-direct: on


fio Job file::

[global]
direct=1
runtime=20
time_based
ioengine=gfapi
iodepth=1
volume=vol1
brick=172.16.71.19
rw=read
size=128g
bs=32k
group_reporting
numjobs=1
filename=128g.bar

While doing a sequential read test, I am not seeing any data transfer on
the device with the iostat tool. It looks like the gfapi engine is reading
from the cache, because I am reading from the same file with different
block sizes.

But I disabled io-cache for my volume. Can someone tell me where fio is
reading the data from?


Hi.

It is normal not to see traffic on the Ethernet interface when using the 
native RDMA protocol (rather than TCP via IPoIB).


Try perfquery -x to see the traffic counters increase on the RDMA interface.
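
For example (a sketch; assumes infiniband-diags is installed and that the
extended counters expose the PortXmitData/PortRcvData fields):

# query extended port counters repeatedly to watch them increase during the test
watch -n1 'perfquery -x | grep -E "PortXmitData|PortRcvData"'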

Regards
Piotr Rybicki


Re: [Gluster-users] [CentOS-devel] What Gluster versions would you like to see in the CentOS Storage SIG?

2015-11-10 Thread C.L. Martinez

On 11/10/2015 09:18 AM, Niels de Vos wrote:

Hi Gluster users running on CentOS!

As you may have heard before, we're planning on providing stable Gluster
releases and related packages through the CentOS Storage SIG [0]. We
would like to know what version of Gluster and which versions of CentOS
are most wanted by our users.

The current support for Gluster defines 3 stable releases at a time.
This means that 3.7, 3.6 and 3.5 are supported by the Gluster Community.
Once 3.8 is released, 3.5 will become unsupported and will not receive
any updates anymore. 3.8 is planned to be released early 2016 [1].

We can provide all Gluster packages for CentOS-7 and 6, but CentOS-5 can
only get recent versions of the Gluster client.

Now, we want to know which combinations our users would like to see in the
CentOS Storage SIG:

  - CentOS-7 + GlusterFS 3.7: latest and greatest, will be included
  - CentOS-6 + GlusterFS 3.7: very much used release, also included

  - CentOS-7 + GlusterFS 3.6: some users, you?
  - CentOS-6 + GlusterFS 3.6: some users, you?

  - CentOS-7 + GlusterFS 3.5: fewer users, you?
  - CentOS-6 + GlusterFS 3.5: fewer users, you?

  - CentOS-5 + GlusterFS 3.7 (client only): nobody?
  - CentOS-5 + GlusterFS 3.6: nobody?
  - CentOS-5 + GlusterFS 3.5: nobody?


Please speak up and let us know what versions you depend on for the next few
months. You can reply to this email to the list (note that it is
x-posted, one mailinglist is sufficient for your reply), directly to me
or over IRC in #centos-devel or #gluster.

Many thanks,
Niels


0. https://wiki.centos.org/SpecialInterestGroup/Storage
1. https://www.gluster.org/community/roadmap/



Hi all,

 My production environments run Gluster 3.7 on CentOS 7 and 6. I am not 
planning to use CentOS-5.


Thanks.



Re: [Gluster-users] [ovirt-users] oVirt host installations failing due to missing GPG key (key location no longer valid)

2015-11-10 Thread Sandro Bonazzola
On Tue, Nov 10, 2015 at 8:17 AM, Sandro Bonazzola 
wrote:

>
>
> On Mon, Nov 9, 2015 at 10:48 PM, Neal Gompa  wrote:
>
>> Hello oVirt and Gluster folks,
>>
>> I'm emailing both mailing lists to ask about my failing oVirt
>> installation due to the GlusterFS repository files pointing to an invalid
>> GPG key path.
>>
>> I attempted to set up an oVirt host today through the engine UI, and it
>> kept bombing out on the GPG key, saying it couldn't retrieve it (HTTP 404).
>> I went to the engine logs and found out that the key that it can't retrieve
>> is the one for GlusterFS, which lived at
>> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
>> according to the logs and the /etc/yum.repos.d/ovirt-3.6-dependencies.repo
>> file.
>>
>> Going to the URL indicated that, sure enough, the key doesn't exist
>> there. Instead, it appears to exist at
>> http://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
>>
>> Can someone please fix this so that I can install oVirt and Gluster again?
>>
>
> I'm fixing the ovirt-3.6-dependencies.repo file.
> Thanks for the report.
>

patch: https://gerrit.ovirt.org/48325
test builds: http://jenkins.ovirt.org/job/ovirt-release_master_gerrit/185/
Neal, can you help test them?
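
Until the fixed package lands, a one-line local workaround would presumably
be (a sketch; both key URLs are the ones quoted in this thread):

# point the repo file at the key's new location
sed -i 's|/LATEST/EPEL.repo/pub.key|/LATEST/pub.key|' /etc/yum.repos.d/ovirt-3.6-dependencies.repo
yum clean metadata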



>
>
>>
>> Thanks in advance,
>> Neal Gompa
>>
>> --
>> 真実はいつも一つ!/ Always, there's only one truth!
>>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Re: [Gluster-users] [ovirt-users] oVirt host installations failing due to missing GPG key (key location no longer valid)

2015-11-10 Thread Sandro Bonazzola
On Mon, Nov 9, 2015 at 10:48 PM, Neal Gompa  wrote:

> Hello oVirt and Gluster folks,
>
> I'm emailing both mailing lists to ask about my failing oVirt installation
> due to the GlusterFS repository files pointing to an invalid GPG key path.
>
> I attempted to set up an oVirt host today through the engine UI, and it
> kept bombing out on the GPG key, saying it couldn't retrieve it (HTTP 404).
> I went to the engine logs and found out that the key that it can't retrieve
> is the one for GlusterFS, which lived at
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
> according to the logs and the /etc/yum.repos.d/ovirt-3.6-dependencies.repo
> file.
>
> Going to the URL indicated that, sure enough, the key doesn't exist there.
> Instead, it appears to exist at
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
>
> Can someone please fix this so that I can install oVirt and Gluster again?
>

I'm fixing the ovirt-3.6-dependencies.repo file.
Thanks for the report.


>
> Thanks in advance,
> Neal Gompa
>
> --
> 真実はいつも一つ!/ Always, there's only one truth!
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Re: [Gluster-users] [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-10 Thread Stefano Danzi

Hello!
It's a test environment, so I have only one node.
If I start glusterd manually a few seconds after boot I have no problems; 
this error occurs only during boot.


I think that something changed during the upgrade. Maybe glusterd now 
starts before networking or rpc.

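If glusterd really races the network at boot, ordering it after
network-online via a systemd drop-in is a common fix (a sketch, assuming
systemd manages glusterd on this host; the drop-in file name is arbitrary):

mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-for-network.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload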

On 06/11/2015 at 5:29, Sahina Bose wrote:

Did you upgrade all the nodes too?
Are some of your nodes unreachable?

Adding gluster-users for glusterd error.

On 11/06/2015 12:00 AM, Stefano Danzi wrote:


After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when 
the host boots.

Manually starting the service after boot works fine.

gluster log:

[2015-11-04 13:37:55.360876] I [MSGID: 100030] 
[glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running 
/usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p 
/var/run/glusterd.pid)
[2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init] 
0-management: Maximum allowed open file descriptors set to 65536
[2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init] 
0-management: Using /var/lib/glusterd as working directory
[2015-11-04 13:37:55.464540] W [MSGID: 103071] 
[rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm 
event channel creation failed [Nessun device corrisponde]
[2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init] 
0-rdma.management: Failed to initialize IB Device
[2015-11-04 13:37:55.464566] W 
[rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma' 
initialization failed
[2015-11-04 13:37:55.464616] W 
[rpcsvc.c:1597:rpcsvc_transport_create] 0-rpc-service: cannot create 
listener, initing the transport failed
[2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init] 
0-management: creation of 1 listeners failed, continuing with 
succeeded transport
[2015-11-04 13:37:57.663862] I [MSGID: 106513] 
[glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 30600
[2015-11-04 13:37:58.284522] I [MSGID: 106194] 
[glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list] 
0-management: No missed snaps list.
[2015-11-04 13:37:58.287477] E [MSGID: 106187] 
[glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd: 
resolve brick failed in restore
[2015-11-04 13:37:58.287505] E [MSGID: 101019] 
[xlator.c:428:xlator_init] 0-management: Initialization of volume 
'management' failed, review your volfile again
[2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init] 
0-management: initializing translator failed
[2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate] 
0-graph: init failed
[2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6] 
-->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-: 
received signum (0), shutting down










Re: [Gluster-users] [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-10 Thread Stefano Danzi

Hi!
I have only one node (a test system); I didn't change any IP address, and 
the entry is in /etc/hosts.

I think that glusterd now starts before networking.

On 06/11/2015 at 6:32, Atin Mukherjee wrote:

[glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd:
resolve brick failed in restore

The above log is the culprit here. Generally this function fails when
GlusterD fails to resolve the associated host of a brick. Has any of the
node undergone an IP change during the upgrade process?

~Atin
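
A quick way to check that (a sketch; the paths below are the stock glusterd
working-directory layout, so adjust if yours differs):

# list the brick hosts glusterd has on record
grep -h '^hostname=' /var/lib/glusterd/vols/*/bricks/* | sort -u
# verify that each recorded host still resolves on this node
getent hosts HOSTNAME_FROM_ABOVE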

On 11/06/2015 09:59 AM, Sahina Bose wrote:

Did you upgrade all the nodes too?
Are some of your nodes unreachable?

Adding gluster-users for glusterd error.

On 11/06/2015 12:00 AM, Stefano Danzi wrote:

After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when the
host boots.
Manually starting the service after boot works fine.

gluster log:

[2015-11-04 13:37:55.360876] I [MSGID: 100030]
[glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running
/usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p
/var/run/glusterd.pid)
[2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init]
0-management: Maximum allowed open file descriptors set to 65536
[2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init]
0-management: Using /var/lib/glusterd as working directory
[2015-11-04 13:37:55.464540] W [MSGID: 103071]
[rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
channel creation failed [Nessun device corrisponde]
[2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init]
0-rdma.management: Failed to initialize IB Device
[2015-11-04 13:37:55.464566] W
[rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma'
initialization failed
[2015-11-04 13:37:55.464616] W [rpcsvc.c:1597:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init]
0-management: creation of 1 listeners failed, continuing with
succeeded transport
[2015-11-04 13:37:57.663862] I [MSGID: 106513]
[glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd:
retrieved op-version: 30600
[2015-11-04 13:37:58.284522] I [MSGID: 106194]
[glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list]
0-management: No missed snaps list.
[2015-11-04 13:37:58.287477] E [MSGID: 106187]
[glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd:
resolve brick failed in restore
[2015-11-04 13:37:58.287505] E [MSGID: 101019]
[xlator.c:428:xlator_init] 0-management: Initialization of volume
'management' failed, review your volfile again
[2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init]
0-management: initializing translator failed
[2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate]
0-graph: init failed
[2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit]
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d]
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6]
-->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-:
received signum (0), shutting down







[Gluster-users] Glusterfs on native Xenserver

2015-11-10 Thread Tal Bar-Or
Hello all,
I need advice regarding building a XenServer cluster whose nodes are mounted
with a ZFS filesystem and GlusterFS, all on Dell R720 servers equipped with
256 GB of memory and two dedicated 10G interconnects.
My question: is that configuration possible, and has it been done already?
Any suggestion will be helpful.
Please advise.
Thanks
-- 
Tal Bar-or

[Gluster-users] gdeploy 1.1 is available!

2015-11-10 Thread Sachidananda URS
Hi,

I'm pleased to announce the 1.1 version of gdeploy[1]. RPMs can be
downloaded from:
http://download.gluster.org/pub/gluster/gdeploy/1.1/



In this release we have:

Patterns for hostnames in the configuration files.
Backend setup config changes.
Rerunning the config does not throw errors.
Backend reset.
A more intuitive configuration file format (the old format is still supported).
Host-specific and group-specific configurations.

And support for the following GlusterFS features.

* Quota
* Snapshot
* Geo-replication
* Subscription manager
* Package install
* Firewalld
* Samba
* CTDB
* CIFS mount

Some sample configuration files can be found at:
https://github.com/gluster/gdeploy/tree/1.1/examples
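
For a rough idea of the flavor, a hypothetical minimal configuration might
look like this (the section and key names are loosely modeled on the linked
examples, which remain the authoritative syntax):

[hosts]
10.70.46.13
10.70.46.17

[backend-setup]
devices=/dev/vdb
mountpoints=/gluster/brick1

[volume]
action=create
volname=sample_vol
replica=yes
replica_count=2
force=yes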

Happy deploying!

-sac

[1] https://github.com/gluster/gdeploy/tree/1.1

[Gluster-users] Minutes of todays Gluster Community Bug Triage meeting

2015-11-10 Thread Niels de Vos
On Tue, Nov 10, 2015 at 12:06:45PM +0100, Niels de Vos wrote:
> 
> Hi all,
> 
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
> 
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
> 
> Currently the following items are listed:
> * Roll Call
> * Status of last weeks action items
> * Group Triage
> * Open Floor
> 
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.


Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-10/gluster-meeting.2015-11-10-12.00.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-10/gluster-meeting.2015-11-10-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-10/gluster-meeting.2015-11-10-12.00.log.html


  Meeting summary

1.   a. Agenda is at https://public.pad.fsfe.org/p/gluster-bug-triage 
(ndevos, 12:00:52)
2. Roll Call (ndevos, 12:00:56)
3. Group Triage (ndevos, 12:05:11)
 a. there are 22 bugs that failed earlier triage, and need to get 
picked up in this meeting (ndevos, 12:05:57)
4. Open Floor (ndevos, 12:25:41)
 a. REMINDER: bugs and the status of their patches in Gerrit: 
http://bugs.cloud.gluster.org/ (ndevos, 12:25:53)
 b. Nandaja and Manikandan are interested in volunteering for the 
Automated bug workflow project (ndevos, 12:26:14)

   Meeting ended at 12:38:12 UTC (full logs).

  Action items

1. (none)

  People present (lines said)

1. ndevos (36)
2. Manikandan (13)
3. gem (7)
4. jiffin (5)
5. kdhananjay (4)
6. zodbot (2)
7. kkeithley_ (2)
8. hgowtham (1)
9. rafi (1)

   Generated by MeetBot 0.1.4.



[Gluster-users] concurrent "gluster volume status" crashes the command (v3.4 and v3.7)

2015-11-10 Thread Engelmann Florian
Dear list,

Running "gluster volume status" concurrently on all 3 GlusterFS nodes 
(actually LXC containers) somehow crashes the command. Two nodes reply 
"Another transaction is in progress. Please try again after sometime." and 
on the 3rd node the command hangs forever. Stopping the hanging command and 
running it again also results in "Another transaction is in progress. 
Please try again after sometime." on that machine.

strace exits like:

[...]
connect(7, {sa_family=AF_LOCAL, sun_path="/var/run/gluster/quotad.socket"}, 
110) = -1 ENOENT (No such file or directory)
fcntl(7, F_GETFL)   = 0x802 (flags O_RDWR|O_NONBLOCK)
fcntl(7, F_SETFL, O_RDWR|O_NONBLOCK)= 0
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLPRI|EPOLLOUT|EPOLLONESHOT, {u32=1, 
u64=4294967297}}) = 0
pipe([8, 9])= 0
fcntl(9, F_SETFD, FD_CLOEXEC)   = 0
pipe([10, 11])  = 0
fcntl(10, F_GETFL)  = 0 (flags O_RDONLY)
fstat(10, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f67780e5000
lseek(10, 0, SEEK_CUR)  = -1 ESPIPE (Illegal seek)
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0x7f67780d9a50) = 28493
close(-1)   = -1 EBADF (Bad file descriptor)
close(11)   = 0
close(-1)   = -1 EBADF (Bad file descriptor)
close(9)= 0
read(8, "", 4)  = 0
close(8)= 0
read(10, "gsyncd.py 0.0.1\n", 4096) = 16
wait4(28493, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 28493
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=28493, si_status=0, 
si_utime=5, si_stime=1} ---
close(10)   = 0
munmap(0x7f67780e5000, 4096)= 0
close(-1)   = -1 EBADF (Bad file descriptor)
close(-2)   = -1 EBADF (Bad file descriptor)
close(-1)   = -1 EBADF (Bad file descriptor)
mmap(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, 
-1, 0) = 0x7f6773545000
mprotect(0x7f6773545000, 4096, PROT_NONE) = 0
clone(child_stack=0x7f6773d44f70, 
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID,
 parent_tidptr=0x7f6773d459d0, tls=0x7f6773d45700, child_tidptr=0x7f6773d459d0) 
= 28496
mmap(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, 
-1, 0) = 0x7f6772d44000
mprotect(0x7f6772d44000, 4096, PROT_NONE) = 0
clone(child_stack=0x7f6773543f70, 
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID,
 parent_tidptr=0x7f67735449d0, tls=0x7f6773544700, child_tidptr=0x7f67735449d0) 
= 28497
futex(0x7f67735449d0, FUTEX_WAIT, 28497, NULLAnother transaction is in 
progress. Please try again after sometime.
 
+++ exited with 1 +++

I had to stop all volumes and restart glusterd to solve that problem.

Host OS: Ubuntu 14.04 LTS
LXC OS:  Ubuntu 14.04 LTS


We first hit this issue with 3.4.2 (official Ubuntu package) and upgraded to 
3.7.5 (Launchpad) to check whether the problem persists. It does. Any ideas?
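
The CLI takes a cluster-wide lock per transaction, which is presumably what
the concurrent invocations are fighting over. Until the hang itself is
understood, serializing the calls on each node at least avoids piling up
transactions from the same machine (a sketch; the lock-file path is
arbitrary):

flock /var/run/gluster-cli.lock gluster volume status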

Thank you for your help,
Florian


[Gluster-users] REMINDER: Gluster Community Bug Triage meeting ~in one hour

2015-11-10 Thread Niels de Vos

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Niels



Re: [Gluster-users] What Gluster versions would you like to see in the CentOS Storage SIG?

2015-11-10 Thread Nux!
Hi Niels,

Whichever version you think will last the longest and can be seamlessly 
upgraded.

We still run 3.4 in production, because an upgrade requires downtime we 
can't afford. :-)
I wouldn't like to hit the same problems with future versions, hence the above. 

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Niels de Vos" 
> To: centos-de...@centos.org, gluster-users@gluster.org
> Sent: Tuesday, 10 November, 2015 09:18:08
> Subject: [Gluster-users] What Gluster versions would you like to see in the 
> CentOS Storage SIG?

> Hi Gluster users running on CentOS!
> 
> As you may have heard before, we're planning on providing stable Gluster
> releases and related packages through the CentOS Storage SIG [0]. We
> would like to know what version of Gluster and which versions of CentOS
> are most wanted by our users.
> 
> The current support for Gluster defines 3 stable releases at a time.
> This means that 3.7, 3.6 and 3.5 are supported by the Gluster Community.
> Once 3.8 is released, 3.5 will become unsupported and will not receive
> any updates anymore. 3.8 is planned to be released early 2016 [1].
> 
> We can provide all Gluster packages for CentOS-7 and 6, but CentOS-5 can
> only get recent versions of the Gluster client.
> 
> Now, we want to know which combinations our users would like to see in the
> CentOS Storage SIG:
> 
> - CentOS-7 + GlusterFS 3.7: latest and greatest, will be included
> - CentOS-6 + GlusterFS 3.7: very much used release, also included
> 
> - CentOS-7 + GlusterFS 3.6: some users, you?
> - CentOS-6 + GlusterFS 3.6: some users, you?
> 
> - CentOS-7 + GlusterFS 3.5: fewer users, you?
> - CentOS-6 + GlusterFS 3.5: fewer users, you?
> 
> - CentOS-5 + GlusterFS 3.7 (client only): nobody?
> - CentOS-5 + GlusterFS 3.6: nobody?
> - CentOS-5 + GlusterFS 3.5: nobody?
> 
> 
> Please speak up and let us know what versions you depend on for the next few
> months. You can reply to this email to the list (note that it is
> x-posted, one mailinglist is sufficient for your reply), directly to me
> or over IRC in #centos-devel or #gluster.
> 
> Many thanks,
> Niels
> 
> 
> 0. https://wiki.centos.org/SpecialInterestGroup/Storage
> 1. https://www.gluster.org/community/roadmap/
> 


[Gluster-users] What Gluster versions would you like to see in the CentOS Storage SIG?

2015-11-10 Thread Niels de Vos
Hi Gluster users running on CentOS!

As you may have heard before, we're planning on providing stable Gluster
releases and related packages through the CentOS Storage SIG [0]. We
would like to know what version of Gluster and which versions of CentOS
are most wanted by our users.

The current support for Gluster defines 3 stable releases at a time.
This means that 3.7, 3.6 and 3.5 are supported by the Gluster Community.
Once 3.8 is released, 3.5 will become unsupported and will not receive
any updates anymore. 3.8 is planned to be released early 2016 [1].

We can provide all Gluster packages for CentOS-7 and 6, but CentOS-5 can
only get recent versions of the Gluster client.

Now, we want to know which combinations our users would like to see in the
CentOS Storage SIG:

 - CentOS-7 + GlusterFS 3.7: latest and greatest, will be included
 - CentOS-6 + GlusterFS 3.7: very much used release, also included

 - CentOS-7 + GlusterFS 3.6: some users, you?
 - CentOS-6 + GlusterFS 3.6: some users, you?

 - CentOS-7 + GlusterFS 3.5: fewer users, you?
 - CentOS-6 + GlusterFS 3.5: fewer users, you?

 - CentOS-5 + GlusterFS 3.7 (client only): nobody?
 - CentOS-5 + GlusterFS 3.6: nobody?
 - CentOS-5 + GlusterFS 3.5: nobody?


Please speak up and let us know what versions you depend on for the next few
months. You can reply to this email to the list (note that it is
x-posted, one mailinglist is sufficient for your reply), directly to me
or over IRC in #centos-devel or #gluster.

Many thanks,
Niels


0. https://wiki.centos.org/SpecialInterestGroup/Storage
1. https://www.gluster.org/community/roadmap/

