Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-22 Thread FNU Raghavendra Manjunath
Hi,

This is the agenda for tomorrow's community meeting for the NA/EMEA timezones.

https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both




On Thu, Apr 11, 2019 at 4:56 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi All,
>
> Below are the final details of our community meeting, and I will be sending
> invites to the mailing list following this email. You can add the Gluster
> Community Calendar so you can get notifications for the meetings.
>
> We are starting the meetings next week. For the first meeting, we need one
> volunteer from the users to discuss their use case: what went well, what
> went badly, etc., preferably from the APAC region. The NA/EMEA meeting
> follows the week after.
>
> Draft Content: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
> 
> Gluster Community Meeting
> Previous meeting minutes:
>
>- http://github.com/gluster/community
>
>
> Date/Time: Check the community calendar
> 
> Bridge
>
>- APAC friendly hours
>   - Bridge: https://bluejeans.com/836554017
>- NA/EMEA
>   - Bridge: https://bluejeans.com/486278655
>
> --
> Attendance
>
>- Name, Company
>
> Host
>
>- Who will host the next meeting?
>   - The host will need to send out the agenda to the mailing list 24-12
>   hours in advance, and also make sure to send out the meeting minutes.
>   - The host will need to reach out to at least one user who can talk
>   about their use case, their experience, and their needs.
>   - The host needs to send the meeting minutes as a PR to
>   http://github.com/gluster/community (a sketch of the git flow follows).
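>
> A minimal sketch of that PR flow (assuming a fork-based GitHub workflow;
> the fork URL and the minutes file path are illustrative):
>
>   git clone https://github.com/<your-fork>/community.git
>   cd community
>   git checkout -b minutes-2019-04-23
>   cp ~/minutes.md meetings/2019-04-23.md
>   git add meetings/2019-04-23.md
>   git commit -m "Add community meeting minutes for 2019-04-23"
>   git push origin minutes-2019-04-23
>   # then open a pull request against gluster/community on GitHub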
>
> User stories
>
>- Discuss one use case from a user.
>   - How was the architecture derived? What volume type and options were
>   used?
>   - What were the major issues faced? How can we improve on them?
>   - What worked well?
>   - How can we all collaborate well, so it is a win-win for both the
>   community and the user?
>
> Community
>
>- Any release updates?
>- Blocker issues across the project?
>- Metrics
>   - Number of new bugs since the previous meeting. How many are not triaged?
>   - Number of emails; anything unanswered?
>
> Conferences / Meetups
>
>- Any conference in next 1 month where gluster-developers are going?
>gluster-users are going? So we can meet and discuss.
>
> Developer focus
>
>- Any design specs to discuss?
>- Metrics of the week?
>   - Coverity
>   - Clang-Scan
>   - Number of patches from new developers.
>   - Did we increase test coverage?
>   - [Atin] Also talk about the most frequent test failures in the CI and
>   carve out an AI (action item) to get them fixed.
>
> RoundTable
>
>- 
>
> 
>
> Regards,
> Amar
>
> On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> Thanks for the feedback Darrell,
>>
>> The new proposal is to have one at a North America 'morning' time (10 AM
>> PST), and another at an ASIA-friendly daytime hour, which is 7 PM/6 PM in
>> Australia, 9 PM in New Zealand, 5 PM in Tokyo, and 4 PM in Beijing.
>>
>> For example, if we choose every other Tuesday for the meeting, and the 1st
>> of the month is a Tuesday, we would use the North America time on the 1st,
>> and on the 15th it would be the ASIA/Pacific time.
>>
>> Hopefully this way we can cover all the timezones. The meeting minutes
>> will be committed to the GitHub repo, which will make it easier for
>> everyone to stay aware of what is happening.
>>
>> Regards,
>> Amar
>>
>> On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic wrote:
>>
>>> As a user, I’d like to visit more of these, but the time slot is my 3 AM.
>>> Any possibility of a rolling schedule (move the meeting +6 hours each week,
>>> with rolling attendance from maintainers?) or an occasional regional
>>> meeting offset 12 hours from the one you’re proposing?
>>>
>>>   -Darrell
>>>
>>> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
>>> atumb...@redhat.com> wrote:
>>>
>>> All,
>>>
>>> We currently have 3 meetings which are public:
>>>
>>> 1. Maintainer's Meeting
>>>
>>> - Runs once every 2 weeks (on Mondays); current attendance is around
>>> 3-5 on average, and not much is discussed.
>>> - Without majority attendance, we can't take any decisions either.
>>>
>>> 2. Community meeting
>>>
>>> - 

[Gluster-users] Community Happy Hour at Red Hat Summit

2019-04-22 Thread Amye Scavarda
The Ceph and Gluster teams are joining forces to put on a Community
Happy Hour in Boston on Tuesday, May 7th as part of Red Hat Summit.

More details, including RSVP, are at:
https://cephandglusterhappyhour_rhsummit.eventbrite.com
-- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 5.6 slow read despite fast local brick

2019-04-22 Thread Strahil Nikolov
As I had the option to rebuild the volume, I did so, and it still reads quite
a bit slower than before the 5.6 upgrade.
I have set cluster.choose-local to 'on', but performance is still the same.
Volume Name: data_fast
Type: Replicate
Volume ID: 888a32ea-9b5c-4001-a9c5-8bc7ee0bddce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
cluster.choose-local: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
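
For reference, one way to check whether reads are actually served by the
local brick is gluster's built-in profiler (a sketch; profiling adds some
overhead, so stop it once done):

  gluster volume profile data_fast start
  # ... run the slow read workload from the FUSE mount ...
  gluster volume profile data_fast info    # per-brick FOP counts and latency
  gluster volume profile data_fast stop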

Any issues expected when downgrading the version?

Best Regards,
Strahil Nikolov

On Monday, April 22, 2019, at 0:26:51 GMT-4, Strahil wrote:
 
 
Hello Community,

I have been left with the impression that FUSE mounts will read from both
local and remote bricks, is that right?

I'm using oVirt as a hyperconverged setup, and despite my slow network
(currently 1 Gbit/s, to be expanded soon) I was expecting that at least the
reads from the local brick would be fast, yet I can't reach more than 250 MB/s
while the 2 data bricks are NVMe with much higher capabilities.

Is there something I can do about that?
Maybe change cluster.choose-local, as I don't see it on my other volumes?
What are the risks associated with that?
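
For reference, toggling and verifying the option are one-liners (a sketch
using the volume below; whether flipping it live on a hyperconverged setup
is safe is exactly the risk question above):

  gluster volume set data_fast cluster.choose-local on
  gluster volume get data_fast cluster.choose-local   # confirm the value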

Volume Name: data_fast
Type: Replicate
Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
cluster.choose-local: off
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
network.ping-timeout: 30
cluster.enable-shared-storage: enable


Best Regards,
Strahil Nikolov

  ___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster release 6.1

2019-04-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
6.1 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 6.1:
https://download.gluster.org/pub/gluster/glusterfs/6/6.1/

[2] Release notes for 6.1:
https://docs.gluster.org/en/latest/release-notes/6.1/
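
For reference, a minimal install sketch on CentOS 7 (assuming the CentOS
Storage SIG packages; repo and package names vary by distribution):

  yum install centos-release-gluster6
  yum install glusterfs-server
  systemctl enable --now glusterd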

___
maintainers mailing list
maintain...@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] v6.0 release notes fix request

2019-04-22 Thread Shyam Ranganathan
Thanks for reporting; this is fixed now.

On 4/19/19 2:57 AM, Artem Russakovskii wrote:
> Hi,
> 
> https://docs.gluster.org/en/latest/release-notes/6.0/ currently contains
> a list of fixed bugs that's run-on and should be fixed with proper line
> breaks:
> [screenshot attached: image.png]
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] adding thin arbiter

2019-04-22 Thread Karthik Subrahmanya
Hi,

Currently we do not support converting an existing volume to a thin-arbiter
volume. Replacing the thin-arbiter brick with a new one is also not
supported.
You can create a fresh thin-arbiter volume using the GD2 framework and play
around with it. Feel free to share your experience with thin-arbiter.
The GD1 CLIs are being implemented; we will keep this list posted as and
when they are ready to use (a sketch of the planned syntax follows below).
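
For anyone who wants to experiment in the meantime, a sketch of the GD1
create syntax being worked on (based on the thin-arbiter design; the exact
syntax may change before it ships, and the hosts/paths are illustrative):

  gluster volume create testvol replica 2 thin-arbiter 1 \
      server1:/bricks/brick1 server2:/bricks/brick2 \
      server3:/bricks/thin-arbiter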

Regards,
Karthik

On Fri, Apr 19, 2019 at 8:39 PM  wrote:

> Hi guys,
>
> I have an existing volume with 3 replicas, one of which is an
> arbiter. Is there a way to change the arbiter to a thin-arbiter? I tried
> removing the arbiter brick and adding it back, but the add-brick command
> doesn't take the --thin-arbiter option.
>
> xpk
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Settings for VM hosting

2019-04-22 Thread Krutika Dhananjay
On Fri, Apr 19, 2019 at 12:48 PM  wrote:

> On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote:
> > Looks good mostly.
> > You can also turn on performance.stat-prefetch, and also set
>
> Ah, the corruption bug has been fixed, I missed that. Great!
>
> > client.event-threads and server.event-threads to 4.
>
> I didn't realize that would also apply to libgfapi?
> Good to know, thanks.
>
> > And if your bricks are on ssds, then you could also enable
> > performance.client-io-threads.
>
> I'm surprised by that; the doc says "This feature is not recommended for
> distributed, replicated or distributed-replicated volumes."
> Since this volume is just a replica 3, shouldn't this stay off?
> The disks are all NVMe, which I assume would count as SSD.
>

They're not recommended if you're using slower disks (HDDs, for instance), as
the option can increase the number of fsyncs triggered by the replicate
module, and their slowness can degrade performance. With NVMe/SSDs this
should not be a problem, and the net result of enabling client-io-threads
there should be an improvement in perf.
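
Putting this thread's suggestions together as CLI calls (a sketch; 'myvol'
is a placeholder volume name, and whether each option helps depends on the
workload and hardware):

  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol client.event-threads 4
  gluster volume set myvol server.event-threads 4
  # only for SSD/NVMe-backed bricks:
  gluster volume set myvol performance.client-io-threads on
  # hyperconverged setups can additionally test reads with:
  gluster volume set myvol cluster.choose-local off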

-Krutika


> > And if your bricks and hypervisors are on the same set of machines
> > (hyperconverged), then you can turn off cluster.choose-local and see if
> > it helps read performance.
>
> Thanks, we'll give those a try!
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users