Re: [Gluster-users] Rebalance without changing layout

2017-06-21 Thread Raghavendra Gowdappa


- Original Message -
> From: "Tahereh Fattahi" 
> To: gluster-users@gluster.org
> Sent: Friday, May 19, 2017 12:21:53 AM
> Subject: [Gluster-users] Rebalance without changing layout
> 
> Hi
> Is it possible to rebalance data in gluster without changing layout?
> When I use rebalance with force, the layout changes. I don't want the layout
> to change; I just want to balance the data over the existing layout.

Currently this is not possible, but it looks like something we should have. I've
filed a GitHub issue at [1].

[1] https://github.com/gluster/glusterfs/issues/250
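
For reference, a quick sketch of the rebalance modes that exist today (<VOLNAME>
is a placeholder); note that none of them migrates data without also fixing the
layout:

# gluster volume rebalance <VOLNAME> fix-layout start   # recalculate the layout only; no data is moved
# gluster volume rebalance <VOLNAME> start              # fix the layout, then migrate data to match it
# gluster volume rebalance <VOLNAME> start force        # as above, but migrate files even if the destination brick has less free space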

regards,
Raghavendra
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting minutes, 2017-06-21

2017-06-21 Thread Kaleb S. KEITHLEY
===
#gluster-meeting: Gluster Community Meeting
===


Meeting started by kkeithley at 15:13:53 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-06-21/gluster_community_meeting.2017-06-21-15.13.log.html
.



Meeting summary
---
* roll call  (kkeithley, 15:14:12)

* AIs from last meeting  (kkeithley, 15:19:21)

* related projects  (kkeithley, 15:33:56)
  * ACTION: JoeJulian to invite Harsha to next community meeting to
discuss Minio  (kkeithley, 15:50:21)
  * https://review.openstack.org/#/q/status:open+project:openstack/swift3,n,z
    (kkeithley, 15:50:49)
  * there's definitely versioning work going on; a bunch of patches
    need reviews...  (kkeithley, 15:50:57)
  * The infra for simplified reverts is done btw.  (kkeithley, 15:51:30)

* open floor  (kkeithley, 15:54:32)

Meeting ended at 16:07:14 UTC.




Action Items

* JoeJulian to invite Harsha to next community meeting to discuss Minio




Action Items, by person
---
* JoeJulian
  * JoeJulian to invite Harsha to next community meeting to discuss
Minio
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (54)
* ndevos (40)
* nigelb (35)
* JoeJulian (10)
* tdasilva (9)
* shyam (7)
* zodbot (3)
* jstrunk (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-21 Thread Shyam

On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote:



On Tue, Jun 20, 2017 at 7:37 PM, Shyam wrote:

Hi,

Release tagging has been postponed by a day to accommodate a fix for
a regression that has been introduced between 3.11.0 and 3.11.1 (see
[1] for details).

As a result 3.11.1 will be tagged on the 21st June as of now
(further delays will be notified to the lists appropriately).


The required patches have landed upstream and are undergoing review.
Could we do the tagging tomorrow? We don't want to rush the patches; we
want to make sure we don't introduce any new bugs at this time.


Agreed, considering the situation we would be tagging the release 
tomorrow (June-22nd 2017).

Thanks,
Shyam

[1] Bug awaiting fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1463250


"Releases are made better together"

On 06/06/2017 09:24 AM, Shyam wrote:

Hi,

It's time to prepare the 3.11.1 release, which falls on the 20th of
each month [4], and hence would be June-20th-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to
this
mail

2) Pending reviews in the 3.11 dashboard will be part of the
release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.11 and get
these going

3) Empty release notes are posted here [3], if there are any
specific
call outs for 3.11 beyond bugs, please update the review, or leave a
comment in the review, for us to pick it up

Thanks,
Shyam/Kaushal

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1


[2] 3.11 review dashboard:

https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard

[3] Release notes WIP: https://review.gluster.org/17480


[4] Release calendar:
https://www.gluster.org/community/release-schedule/

___
Gluster-devel mailing list
gluster-de...@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
maintainers mailing list
maintain...@gluster.org 
http://lists.gluster.org/mailman/listinfo/maintainers

--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-21 Thread Pranith Kumar Karampuri
On Tue, Jun 20, 2017 at 7:37 PM, Shyam  wrote:

> Hi,
>
> Release tagging has been postponed by a day to accommodate a fix for a
> regression that has been introduced between 3.11.0 and 3.11.1 (see [1] for
> details).
>
> As a result 3.11.1 will be tagged on the 21st June as of now (further
> delays will be notified to the lists appropriately).
>

The required patches have landed upstream and are undergoing review. Could we
do the tagging tomorrow? We don't want to rush the patches; we want to make
sure we don't introduce any new bugs at this time.


>
> Thanks,
> Shyam
>
> [1] Bug awaiting fix: https://bugzilla.redhat.com/show_bug.cgi?id=1463250
>
> "Releases are made better together"
>
> On 06/06/2017 09:24 AM, Shyam wrote:
>
>> Hi,
>>
>> It's time to prepare the 3.11.1 release, which falls on the 20th of
>> each month [4], and hence would be June-20th-2017 this time around.
>>
>> This mail is to call out the following,
>>
>> 1) Are there any pending *blocker* bugs that need to be tracked for
>> 3.11.1? If so mark them against the provided tracker [1] as blockers
>> for the release, or at the very least post them as a response to this
>> mail
>>
>> 2) Pending reviews in the 3.11 dashboard will be part of the release,
>> *iff* they pass regressions and have the review votes, so use the
>> dashboard [2] to check on the status of your patches to 3.11 and get
>> these going
>>
>> 3) Empty release notes are posted here [3], if there are any specific
>> call outs for 3.11 beyond bugs, please update the review, or leave a
>> comment in the review, for us to pick it up
>>
>> Thanks,
>> Shyam/Kaushal
>>
>> [1] Release bug tracker:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1
>>
>> [2] 3.11 review dashboard:
>> https://review.gluster.org/#/projects/glusterfs,dashboards/d
>> ashboard:3-11-dashboard
>>
>>
>> [3] Release notes WIP: https://review.gluster.org/17480
>>
>> [4] Release calendar: https://www.gluster.org/community/release-schedule/
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No, you don't need to do any of that. Just executing the volume-set commands is
sufficient for the changes to take effect.
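
For instance (an illustrative sketch; <VOLNAME> is a placeholder), you can apply
an option and confirm it has taken effect on the running volume with:

# gluster volume set <VOLNAME> performance.stat-prefetch on
# gluster volume get <VOLNAME> performance.stat-prefetch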


-Krutika

On Wed, Jun 21, 2017 at 3:48 PM, Chris Boot  wrote:

> [replying to lists this time]
>
> On 20/06/17 11:23, Krutika Dhananjay wrote:
> > Couple of things:
> >
> > 1. Like Darrell suggested, you should enable stat-prefetch and increase
> > client and server event threads to 4.
> > # gluster volume set  performance.stat-prefetch on
> > # gluster volume set  client.event-threads 4
> > # gluster volume set  server.event-threads 4
> >
> > 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
> > https://review.gluster.org/#/c/16966/
> >
> > With these two changes, we saw great improvement in performance in our
> > internal testing.
>
> Hi Krutika,
>
> Thanks for your input. I have yet to run any benchmarks, but I'll do
> that once I have a bit more time to work on this.
>
> I've tweaked the options as you suggest, but that doesn't seem to have
> made an appreciable difference. I admit that without benchmarks it's a
> bit like sticking your finger in the air, though. Do I need to restart
> my bricks and/or remount the volumes for these to take effect?
>
> I'm actually running GlusterFS 3.10.2-1. This is all coming from the
> CentOS Storage SIG's centos-release-gluster310 repository.
>
> Thanks again.
>
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No. It's just that in the internal testing that was done here, increasing
the thread count beyond 4 did not improve the performance any further.
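
If you want to check what your volumes currently use before tuning (an
illustrative sketch; <VOLNAME> is a placeholder):

# gluster volume get <VOLNAME> client.event-threads
# gluster volume get <VOLNAME> server.event-threads
# gluster volume set <VOLNAME> client.event-threads 4
# gluster volume set <VOLNAME> server.event-threads 4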

-Krutika

On Tue, Jun 20, 2017 at 11:30 PM, mabi  wrote:

> Dear Krutika,
>
> Sorry for asking so naively, but can you tell me on what basis you recommend
> that the client and server event-threads parameters for a volume should be
> set to 4?
> 
> Is this figure, for example, based on the number of cores a GlusterFS server
> has?
>
> I am asking because I noticed my GlusterFS volumes are set to 2, and I would
> like to set these parameters to something meaningful for performance tuning.
> My setup is a two-node replica with GlusterFS 3.8.11.
>
> Best regards,
> M.
>
>
>
>  Original Message 
> Subject: Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance
> Local Time: June 20, 2017 12:23 PM
> UTC Time: June 20, 2017 10:23 AM
> From: kdhan...@redhat.com
> To: Lindsay Mathieson 
> gluster-users , oVirt users 
>
> Couple of things:
> 1. Like Darrell suggested, you should enable stat-prefetch and increase
> client and server event threads to 4.
> # gluster volume set  performance.stat-prefetch on
> # gluster volume set  client.event-threads 4
> # gluster volume set  server.event-threads 4
>
> 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
> https://review.gluster.org/#/c/16966/
>
> With these two changes, we saw great improvement in performance in our
> internal testing.
>
> Do you mind trying these two options above?
> -Krutika
>
> On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> Have you tried with:
>>
>> performance.strict-o-direct : off
>> performance.strict-write-ordering : off
>> They can be changed dynamically.
>>
>>
>> On 20 June 2017 at 17:21, Sahina Bose  wrote:
>>
>>> [Adding gluster-users]
>>>
>>> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:
>>>
 Hi folks,

 I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
 6 bricks, which themselves live on two SSDs in each of the servers (one
 brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
 SSDs. Connectivity is 10G Ethernet.

 Performance within the VMs is pretty terrible. I experience very low
 throughput and random IO is really bad: it feels like a latency issue.
 On my oVirt nodes the SSDs are not generally very busy. The 10G network
 seems to run without errors (iperf3 gives bandwidth measurements of >=
 9.20 Gbits/sec between the three servers).

 To put this into perspective: I was getting better behaviour from NFS4
 on a gigabit connection than I am with GlusterFS on 10G: that doesn't
 feel right at all.

 My volume configuration looks like this:

 Volume Name: vmssd
 Type: Distributed-Replicate
 Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 2 x (2 + 1) = 6
 Transport-type: tcp
 Bricks:
 Brick1: ovirt3:/gluster/ssd0_vmssd/brick
 Brick2: ovirt1:/gluster/ssd0_vmssd/brick
 Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
 Brick4: ovirt3:/gluster/ssd1_vmssd/brick
 Brick5: ovirt1:/gluster/ssd1_vmssd/brick
 Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
 Options Reconfigured:
 nfs.disable: on
 transport.address-family: inet6
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
 performance.low-prio-threads: 32
 network.remote-dio: off
 cluster.eager-lock: enable
 cluster.quorum-type: auto
 cluster.server-quorum-type: server
 cluster.data-self-heal-algorithm: full
 cluster.locking-scheme: granular
 cluster.shd-max-threads: 8
 cluster.shd-wait-qlength: 1
 features.shard: on
 user.cifs: off
 storage.owner-uid: 36
 storage.owner-gid: 36
 features.shard-block-size: 128MB
 performance.strict-o-direct: on
 network.ping-timeout: 30
 cluster.granular-entry-heal: enable

 I would really appreciate some guidance on this to try to improve things
 because at this rate I will need to reconsider using GlusterFS
 altogether.

>>>
>>> Could you provide the gluster volume profile output while you're running
>>> your I/O tests?
>>> # gluster volume profile  start
>>> to start profiling
>>> # gluster volume profile  info
>>> for the profile output.
>>>
>>>

 Cheers,
 Chris

 --
 Chris Boot
 bo...@bootc.net
 ___
 Users mailing list
 us...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>>
>>>
>>>