Re: [Gluster-users] Turn off replication

2018-04-11 Thread Karthik Subrahmanya
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez  wrote:

> Hi Karthik
>
> Looking at the information you have provided me, I would like to make sure
> that I’m running the right commands.
>
> 1.   gluster volume heal scratch info
>
If the count is non-zero, trigger the heal and wait for the heal info count to
become zero.
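For example (a rough sketch; the exact heal info output varies a little between
gluster versions, but every brick should report "Number of entries: 0" before
you proceed):

# trigger a heal, then re-check until all bricks show zero pending entries
gluster volume heal scratch
gluster volume heal scratch info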

> 2. gluster volume remove-brick scratch *replica 1 *
> gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
>
> 3.  gluster volume add-brick *"#"* scratch gluster02ib:/gdata/brick1/scratch
> gluster02ib:/gdata/brick2/scratch
>
>
> Based on the configuration I have, brick 1 from nodes A and B are tied
> together, and brick 2 from nodes A and B are also tied together. Looking at
> your remove command (step #2), it seems that you want me to remove bricks 1
> and 2 from node B (gluster02ib). Is that correct? I thought the data was
> distributed in bricks 1 (between nodes A and B) and duplicated on bricks 2
> (nodes A and B).
>
Data is duplicated between bricks 1 of nodes A & B and between bricks 2 of
nodes A & B, and data is distributed across these two pairs.
You need not remove bricks 1 & 2 from node B specifically. The idea here is to
keep one copy from each of the replica pairs.

>
> Also, when I add the bricks back to gluster, do I need to specify whether it
> is distributed or replicated? And do I need a configuration #? For example,
> in your command (step #2) you have "replica 1" when removing bricks; do I
> need to do the same when adding the nodes back?
>
No. You just need to erase the data on those bricks and add them back to the
volume. The previous remove-brick command will make the volume plain
distribute. Then simply adding the bricks without specifying any "#" will
expand the volume as a plain distribute volume.

>
> I'm planning on moving ahead with these changes in a few days. At this point
> each brick has 14 TB, and adding bricks 1 from nodes A and B I have a total
> of 28 TB. After doing the whole process (removing and adding bricks) I should
> be able to see a total of 56 TB, right?
>
Yes, after all these steps you will have 56 TB in total.
After adding the bricks, run a volume rebalance so that the data which was
present previously is moved to the correct bricks.
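Putting it all together, the sequence would look roughly like this (a sketch
only; the brick paths are taken from your volume info, and please double-check
everything before running the destructive steps):

# 1. make sure there is nothing left to heal
gluster volume heal scratch info

# 2. drop to one copy per replica pair (here keeping the bricks on gluster01ib)
gluster volume remove-brick scratch replica 1 \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force

# 3. erase the data on the removed bricks on gluster02ib before reusing them

# 4. add them back; no replica count is needed, the volume is now plain distribute
gluster volume add-brick scratch \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch

# 5. spread the previously written data across all four bricks
gluster volume rebalance scratch start
gluster volume rebalance scratch status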

HTH,
Karthik

>
> Thanks
>
> Jose
>
>
>
>
> -
> Jose Sanchez
> Systems/Network Analyst 1
> Center of Advanced Research Computing
> 1601 Central Ave.
> MSC 01 1190
> Albuquerque, NM 87131-0001
> carc.unm.edu
> 575.636.4232
>
> On Apr 7, 2018, at 8:29 AM, Karthik Subrahmanya 
> wrote:
>
> Hi Jose,
>
> Thanks for providing the volume info. You have 2 subvolumes, and data is
> replicated within the bricks of each subvolume.
> The first one consists of Node A's brick1 & Node B's brick1, and the second
> one consists of Node A's brick2 and Node B's brick2.
> You don't have the same data on all 4 bricks. Data is distributed
> between these two subvolumes.
> To remove the replica you can use the command
> gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch
> gluster02ib:/gdata/brick2/scratch force
> so that you will have one copy of the data from each of the replica pairs.
> Before doing this make sure the "gluster volume heal scratch info" count is
> zero, so the copies you retain will have the correct data.
> After the remove-brick, erase the data from the backend.
> Then you can expand the volume by following the steps at [1].
>
> [1] https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes
>
> Regards,
> Karthik
>
> On Fri, Apr 6, 2018 at 11:39 PM, Jose Sanchez 
> wrote:
>
>> Hi Karthik
>>
>> This is our configuration: it is 2 x 2 = 4, all replicated, and each
>> brick has 14 TB. We have 2 nodes, A and B, each one with bricks 1 and 2.
>>
>> Node A (replicated: A1 (14 TB) and B1 (14 TB)), and the same with node B
>> (replicated: A2 (14 TB) and B2 (14 TB)).
>>
>> Do you think we need to degrade the node first before removing it? I believe
>> the same copy of data is on all 4 bricks; we would like to keep one of them
>> and add the other bricks as extra space.
>>
>> Thanks for your help on this
>>
>> Jose
>>
>>
>>
>>
>>
>> [root@gluster01 ~]# gluster volume info scratch
>>
>> Volume Name: scratch
>> Type: Distributed-Replicate
>> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: on
>>
>> [root@gluster01 ~]# gluster volume status all
>> Status of volume: scratch
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>> 
>> --
>> Brick

[Gluster-users] how to get the true used capacity of the volume

2018-04-11 Thread hannan...@shudun.com
I created a volume, mounted it, and used the df command to view the volume's
available and used space.
After some testing, I think the "used" value displayed by df is the sum of the
space used on the disks on which the bricks are located, not the sum of the
space used by the brick directories.
(I know the available capacity is the physical space of all the disks if no
quota is set, but the used space should not be the sum of the space used by the
hard disks; it should be the sum of the sizes of the brick directories, because
bricks of different volumes may share one disk.)

In my case:
I want to create multiple volumes on the same disks (for better performance,
each volume will use all disks of our server cluster): one volume for NFS with
replica 2, one volume for NFS with replica 3, and one volume for SAMBA.
I want to get the capacity already used by each volume, but right now when one
volume writes data, the "used" value of the other volumes also increases when
viewed with the df command.

Examples:
eg1:
I create a volume with two bricks, and the two bricks are on one disk. I write
1 TB of data to the volume. Using the df command to view the space used by the
volume, it shows the volume uses 2 TB of space.

eg2:
When I create a volume on the root partition and don't write any data to it,
df still shows that the volume has used some space.
In fact, that space is not the size of the brick directory, but the space
already used on the disk on which the brick is located.

How do I get the capacity of each volume in this case?

[root@f08n29 glusterfs-3.7.20]# df -hT | grep f08n29
f08n29:/usage_test fuse.glusterfs   50G   24G   27G  48% /mnt

[root@f08n29 glusterfs-3.7.20]# gluster volume info usage_test
Volume Name: usage_test
Type: Distribute
Volume ID: d9b5abff-9f69-41ce-80b3-3dc4ba1d77b3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: f08n29:/brick1
Options Reconfigured:
performance.readdir-ahead: on

[root@f08n29 glusterfs-3.7.20]# du -sh /brick1
100K    /brick1

Is there any command that can check the actual space used by each volume in 
this situation?
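The only approximation I can think of is summing du over the brick directories
listed in volume info (a rough sketch only; it counts each replica separately
and has to be run on every server for its local bricks):

# list the brick paths of a volume, then du each local brick directory
gluster volume info usage_test | awk -F': ' '/^Brick[0-9]+/ {print $2}'
du -sh /brick1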

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread mabi
Thank you Ravi for your comments. I do understand that it might not be very 
wise to risk any mistakes by rushing this fix into 3.12.8. In that case I will 
be more patient and wait for 3.12.9 next month.

‐‐‐ Original Message ‐‐‐
On April 11, 2018 5:09 PM, Ravishankar N  wrote:

> Mabi,
>
> It looks like one of the patches is not a straightforward cherry-pick to the
> 3.12 branch. Even though the conflict might be easy to resolve, I don't think
> it is a good idea to hurry it for tomorrow. We will definitely have it ready
> by the next minor release (or if by chance the release is delayed and the
> backport is reviewed and merged before that). Hope that is acceptable.
>
> -Ravi
>
> On 04/11/2018 01:11 PM, mabi wrote:
>
>> Dear Jiffin,
>>
>> Would it be possible to have the following backported to 3.12:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1482064
>>
>> See my mail with subject "New 3.12.7 possible split-brain on replica 3" on 
>> the list earlier this week for more details.
>>
>> Thank you very much.
>>
>> Best regards,
>> Mabi
>>
>> ‐‐‐ Original Message ‐‐‐
>> On April 11, 2018 5:16 AM, Jiffin Tony Thottan (jthot...@redhat.com) wrote:
>>
>>> Hi,
>>>
>>> It's time to prepare the 3.12.8 release, which falls on the 10th of
>>> each month, and hence would be 12-04-2018 this time around.
>>>
>>> This mail is to call out the following,
>>>
>>> 1) Are there any pending *blocker* bugs that need to be tracked for
>>> 3.12.7? If so mark them against the provided tracker [1] as blockers
>>> for the release, or at the very least post them as a response to this
>>> mail
>>>
>>> 2) Pending reviews in the 3.12 dashboard will be part of the release,
>>> *iff* they pass regressions and have the review votes, so use the
>>> dashboard [2] to check on the status of your patches to 3.12 and get
>>> these going
>>>
>>> 3) I have made checks on what went into 3.10 post 3.12 release and if
>>> these fixes are already included in 3.12 branch, then status on this is 
>>> *green*
>>> as all fixes ported to 3.10, are ported to 3.12 as well.
>>>
>>> @Mlind
>>>
>>> IMO https://review.gluster.org/19659 looks like a minor feature to me. Can
>>> you please provide a justification for why it needs to be included in the
>>> 3.12 stable release?
>>>
>>> And please rebase the change as well
>>>
>>> @Raghavendra
>>>
>>> The smoke test failed for https://review.gluster.org/#/c/19818/. Can you
>>> please check the same?
>>>
>>> Thanks,
>>> Jiffin
>>>
>>> [1] Release bug tracker:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8
>>>
>>> [2] 3.12 review dashboard:
>>> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>>
>> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-11 Thread Anastasia Belyaeva
Hello everybody!

I have 3 gluster servers (*gluster 3.12.6, CentOS 7.2*; those are actually
virtual machines located on 3 separate physical XenServer 7.1 servers).

They are all connected via an InfiniBand network. iperf3 shows around *23
Gbit/s* network bandwidth between each 2 of them.

Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2) with a logical
volume created on top of it, formatted with *xfs*. Gluster top reports the
following throughput:

root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count 524288
> list-cnt 0
> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
> Throughput *631.82 MBps *time 3.3989 secs
> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
> Throughput *566.96 MBps *time 3.7877 secs
> Brick: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
> Throughput *546.65 MBps *time 3.9285 secs


root@fsnode2 ~ $ gluster volume top r2vol write-perf bs 4096 count 524288
> list-cnt 0
> Brick: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
> Throughput *539.60 MBps *time 3.9798 secs
> Brick: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
> Throughput *580.07 MBps *time 3.7021 secs


And two *pure replicated* ('replica 2' and 'replica 3') volumes. The
'replica 2' volume is for testing purposes only.

> Volume Name: r2vol
> Type: Replicate
> Volume ID: 4748d0c0-6bef-40d5-b1ec-d30e10cfddd9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
> Brick2: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
> Options Reconfigured:
> nfs.disable: on
>


> Volume Name: r3vol
> Type: Replicate
> Volume ID: b0f64c28-57e1-4b9d-946b-26ed6b499f29
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
> Brick2: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
> Brick3: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
> Options Reconfigured:
> nfs.disable: on



The *client* is also gluster 3.12.6, a CentOS 7.3 virtual machine, with a *FUSE mount*:

> root@centos7u3-nogdesktop2 ~ $ mount |grep gluster
> gluster-host.ibnet:/r2vol on /mnt/gluster/r2 type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> gluster-host.ibnet:/r3vol on /mnt/gluster/r3 type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)



*The problem* is that there is a significant performance loss with smaller
block sizes. For example:

*4K block size*
[replica 3 volume]
root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s*

[replica 2 volume]
root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 12.0149 s, *89.4 MB/s*

*512K block size*
[replica 3 volume]
root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
of=/mnt/gluster/r3/file$RANDOM bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB) copied, 5.27207 s, *204 MB/s*

[replica 2 volume]
root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
of=/mnt/gluster/r2/file$RANDOM bs=512K count=2048
2048+0 records in
2048+0 records out
1073741824 bytes (1.1 GB) copied, 4.22321 s, *254 MB/s*

With a bigger block size it's still not where I expect it to be, but at least
it starts to make some sense.

I've been trying to solve this for a very long time with no luck.
I've already tried both kernel tuning (different 'tuned' profiles and the ones
recommended in the "Linux Kernel Tuning" section) and tweaking gluster volume
options, including write-behind/flush-behind/write-behind-window-size.
The latter, to my surprise, didn't make any difference. At first I thought it
was a buffering issue, but it turns out gluster does buffer writes, just not
very efficiently (at least that is what it looks like in the *gluster profile
output* further below).
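For reference, the write-behind related options mentioned above are set along
these lines (a sketch with example values, not the exact ones from every run):

gluster volume set r3vol performance.write-behind on
gluster volume set r3vol performance.flush-behind on
gluster volume set r3vol performance.write-behind-window-size 4MB
gluster volume get r3vol performance.write-behind-window-size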

root@fsnode2 ~ $ gluster volume profile r3vol info clear
> ...
> Cleared stats.


root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
> of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 10.9743 s, 97.8 MB/s



> root@fsnode2 ~ $ gluster volume profile r3vol info
> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
> ---
> Cumulative Stats:
> Block Size:          4096b+     8192b+     16384b+
> No. of Reads:             0          0           0
> No. of Writes:         1576       4173       19605
>
> Block Size:         32768b+    65536b+    131072b+
> No. of Reads:             0          0           0
> No. of Writes:         1847        657
>  %-latency

[Gluster-users] Minutes from today's community meeting (11 April 2018)

2018-04-11 Thread Amye Scavarda
Thanks to all who attended!
Joe Julian to host our next one at 25 April, 15:00 UTC.
https://bit.ly/gluster-community-meetings has our agenda, feel free to add
topics!

===
#gluster-meeting: Gluster Community Meeting  - 11 April 2018



Meeting started by amye at 15:01:43 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2018-04-11/gluster_community_meeting_-_11_april_2018.2018-04-11-15.01.log.html
.



Meeting summary
---
* removal of old deb packages from repo. Why?  (amye, 15:04:46)
  * LINK: https://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.15/
(kkeithley, 15:10:16)

* - gluster option man/help command  (amye, 15:15:32)
  * ACTION: Shyam to hunt through github for the correct issue  (amye,
15:21:28)

* - Switching minor releases to once-in-2-months updates, rather than minor
  releases every month (based on #bugs fixed), post the initial 3-6 minor
  releases, thoughts/concerns? [Shyam]  (amye, 15:24:10)

Meeting ended at 15:42:08 UTC.



Action Items

* Shyam to hunt through github for the correct issue



People Present (lines said)
---
* amye (34)
* ivan_rossi (25)
* kkeithley (21)
* joes_phone (19)
* shyam (16)
* zodbot (3)
* joes-phone (2)


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-11 Thread Alex K
On Wed, Apr 11, 2018 at 4:35 AM, TomK  wrote:

> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing.  Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
Yes, this disables quorum so as to avoid the issue. Glad that this helped.
Bear in mind though that it is easier to run into split-brain issues when
quorum is disabled; that's why at least 3 nodes are recommended. Just to note
that I also have a 2-node cluster which has been running without issues for a
long time.
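For reference, toggling these on a volume is roughly (using the gv01 volume
from the output below; adjust the volume name as needed):

gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none

# and later, to go back to the recommended quorum behaviour on a 3-node setup:
gluster volume set gv01 cluster.quorum-type auto
gluster volume set gv01 cluster.server-quorum-type server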


> Thank you for that.
>
> Cheers,
> Tom
>
> Hi,
>>
>> You need 3 nodes at least to have quorum enabled. In 2 node setup you
>> need to disable quorum so as to be able to still use the volume when one of
>> the nodes go down.
>>
>> On Mon, Apr 9, 2018, 09:02 TomK (tomk...@mdevsys.com) wrote:
>>
>> Hey All,
>>
>> In a two-node glusterfs setup with one node down, I can't use the second
>> node to mount the volume.  I understand this is expected behaviour?
>> Is there any way to allow the secondary node to function and then replicate
>> what changed to the first (primary) node when it's back online?  Or should I
>> just go for a third node to allow for this?
>>
>> Also, how safe is it to set the following to none?
>>
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>>
>>
>> [root@nfs01 /]# gluster volume start gv01
>> volume start: gv01: failed: Quorum not met. Volume operation not
>> allowed.
>> [root@nfs01 /]#
>>
>>
>> [root@nfs01 /]# gluster volume status
>> Status of volume: gv01
>> Gluster process TCP Port  RDMA Port
>>  Online  Pid
>> 
>> --
>> Brick nfs01:/bricks/0/gv01  N/A   N/AN
>>N/A
>> Self-heal Daemon on localhost   N/A   N/AY
>> 25561
>>
>> Task Status of Volume gv01
>> 
>> --
>> There are no active volume tasks
>>
>> [root@nfs01 /]#
>>
>>
>> [root@nfs01 /]# gluster volume info
>>
>> Volume Name: gv01
>> Type: Replicate
>> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: nfs01:/bricks/0/gv01
>> Brick2: nfs02:/bricks/0/gv01
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> nfs.trusted-sync: on
>> performance.cache-size: 1GB
>> performance.io-thread-count: 16
>> performance.write-behind-window-size: 8MB
>> performance.readdir-ahead: on
>> client.event-threads: 8
>> server.event-threads: 8
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> [root@nfs01 /]#
>>
>>
>>
>>
>> ==> n.log <==
>> [2018-04-09 05:08:13.704156] I [MSGID: 100030]
>> [glusterfsd.c:2556:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
>> 3.13.2 (args: /usr/sbin/glusterfs --process-name fuse
>> --volfile-server=nfs01 --volfile-id=/gv01 /n)
>> [2018-04-09 05:08:13.711255] W [MSGID: 101002]
>> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family'
>> is
>> deprecated, preferred is 'transport.address-family', continuing with
>> correction
>> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not
>> available"
>> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 1
>> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 2
>> [2018-04-09 05:08:13.738114] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 3
>> [2018-04-09 05:08:13.738203] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 4
>> [2018-04-09 05:08:13.738324] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 5
>> [2018-04-09 05:08:13.738330] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 6
>> [2018-04-09 05:08:13.738655] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 7
>> [2018-04-09 05:08:13.738742] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with in

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread Ravishankar N

Mabi,

It looks like one of the patches is not a straightforward cherry-pick
to the 3.12 branch. Even though the conflict might be easy to resolve, I
don't think it is a good idea to hurry it for tomorrow. We will
definitely have it ready by the next minor release (or if by chance the
release is delayed and the backport is reviewed and merged before
that). Hope that is acceptable.


-Ravi

On 04/11/2018 01:11 PM, mabi wrote:

Dear Jiffin,

Would it be possible to have the following backported to 3.12:

https://bugzilla.redhat.com/show_bug.cgi?id=1482064



See my mail with subject "New 3.12.7 possible split-brain on replica 
3" on the list earlier this week for more details.


Thank you very much.

Best regards,
Mabi

‐‐‐ Original Message ‐‐‐
On April 11, 2018 5:16 AM, Jiffin Tony Thottan  
wrote:



Hi,

It's time to prepare the 3.12.8 release, which falls on the 10th of
each month, and hence would be 12-04-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.7? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this 
is *green*

as all fixes ported to 3.10, are ported to 3.12 as well.

@Mlind

IMO https://review.gluster.org/19659 looks like a minor feature to me.
Can you please provide a justification for why it needs to be included in
the 3.12 stable release?


And please rebase the change as well

@Raghavendra

The smoke test failed for https://review.gluster.org/#/c/19818/. Can you
please check the same?


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-11 Thread mabi
Dear Jiffin,

Would it be possible to have the following backported to 3.12:

https://bugzilla.redhat.com/show_bug.cgi?id=1482064

See my mail with subject "New 3.12.7 possible split-brain on replica 3" on the 
list earlier this week for more details.

Thank you very much.

Best regards,
Mabi

‐‐‐ Original Message ‐‐‐
On April 11, 2018 5:16 AM, Jiffin Tony Thottan  wrote:

> Hi,
>
> It's time to prepare the 3.12.8 release, which falls on the 10th of
> each month, and hence would be 12-04-2018 this time around.
>
> This mail is to call out the following,
>
> 1) Are there any pending *blocker* bugs that need to be tracked for
> 3.12.7? If so mark them against the provided tracker [1] as blockers
> for the release, or at the very least post them as a response to this
> mail
>
> 2) Pending reviews in the 3.12 dashboard will be part of the release,
> *iff* they pass regressions and have the review votes, so use the
> dashboard [2] to check on the status of your patches to 3.12 and get
> these going
>
> 3) I have made checks on what went into 3.10 post 3.12 release and if
> these fixes are already included in 3.12 branch, then status on this is 
> *green*
> as all fixes ported to 3.10, are ported to 3.12 as well.
>
> @Mlind
>
> IMO https://review.gluster.org/19659 looks like a minor feature to me. Can
> you please provide a justification for why it needs to be included in the
> 3.12 stable release?
>
> And please rebase the change as well
>
> @Raghavendra
>
> The smoke test failed for https://review.gluster.org/#/c/19818/. Can you
> please check the same?
>
> Thanks,
> Jiffin
>
> [1] Release bug tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8
>
> [2] 3.12 review dashboard:
> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users