Re: [Gluster-devel] gluster-block v0.4 is alive!

2019-05-06 Thread Niels de Vos
On Thu, May 02, 2019 at 11:04:41PM +0530, Prasanna Kalever wrote:
> Hello Gluster folks,
> 
> Gluster-block team is happy to announce the v0.4 release [1].
> 
> This is the new stable version of gluster-block; many new and
> exciting features and interesting bug fixes are part of this release.
> Please find the big list of release highlights and notable fixes at [2].
> 
> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and setup guide at [4].
> If you are a new user, check out the demo video linked in the README
> doc [5], which is a good introduction to the project.
> There are good examples about how to use gluster-block both in the man
> pages [6] and test file [7] (also in the README).
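To give a flavour of the CLI described in the man pages, a typical session might look like the sketch below (the volume name, host IPs, and size are made up for illustration; check the v0.4 man pages [6] for the exact syntax and options):

```shell
# Create a 1GiB block device backed by the gluster volume 'block-test',
# exported from three hosts for multipath (ha 3):
gluster-block create block-test/sample-block ha 3 \
    192.168.1.11,192.168.1.12,192.168.1.13 1GiB

# Inspect and clean up:
gluster-block list block-test
gluster-block info block-test/sample-block
gluster-block delete block-test/sample-block
```

These commands need a running gluster volume and the gluster-blockd daemon on the listed hosts.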
> 
> gluster-block is part of the Fedora package collection; an updated package
> with release version v0.4 will be made available soon. Community-provided
> packages will also be made available soon at [8].

Updates for Fedora are available in the testing repositories:

Fedora 30: https://bodhi.fedoraproject.org/updates/FEDORA-2019-76730d7230
Fedora 29: https://bodhi.fedoraproject.org/updates/FEDORA-2019-cc7cdce2a4
Fedora 28: https://bodhi.fedoraproject.org/updates/FEDORA-2019-9e9a210110

Installation instructions can be found at the above links. Please leave
testing feedback as comments on the Fedora Update pages.

Thanks,
Niels


> Please spend a minute to report any issue you come across using this
> handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
> 
> We would like to thank all our users and contributors for filing bugs and
> providing fixes, as well as the whole team involved in the huge
> pre-release testing effort.
> 
> 
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
> 
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Proposing to bring the previous ganesha HA cluster solution back to gluster code as a gluster-7 feature

2019-05-06 Thread Jiffin Tony Thottan

Hi

On 04/05/19 12:04 PM, Strahil wrote:

Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper 
fencing.
As Gluster is already a cluster of its own, it makes sense to control 
everything from there.

Best Regards,



Yeah, I agree with your point. What I meant to say is that, by default, this
feature won't provide any fencing mechanism; users need to configure fencing
for the cluster manually. In the future we can try to include a default
fencing configuration for the ganesha cluster as part of the Ganesha HA
configuration.

Regards,

Jiffin



Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan wrote:


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only
hosting bricks, but other workloads as well.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith
resources 'shoot the other node in the head'.
I would be happy to see an easy-to-deploy option (let's say
'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs
and taking care of the NFS locks, so that no disruption is felt by the clients.


It does take care of those, but certain prerequisites need to be followed.
Please note that fencing won't be configured for this setup; we may think
about it in the future.
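For anyone configuring fencing manually on such a pacemaker/corosync cluster, the setup might look roughly like the following pcs commands. This is only an illustrative sketch: the host names, IPMI addresses, and credentials are made up, and fence-agent parameter names vary between versions, so check `pcs stonith describe fence_ipmilan` on your distribution:

```shell
# Illustrative only: one IPMI STONITH device per node, then enable fencing.
pcs stonith create fence-node1 fence_ipmilan \
    pcmk_host_list=node1 ip=10.0.0.101 username=admin password=secret \
    lanplus=1 op monitor interval=60s
pcs property set stonith-enabled=true

# A floating IP that moves with the ganesha service, so NFS clients
# keep the same mount target during failover:
pcs resource create ganesha-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24 op monitor interval=10s
```

Repeat the stonith resource for each node in the cluster; without a working fence device per node, pacemaker cannot safely recover from split-brain situations.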

--

Jiffin


Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
 
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS
instead of fuse mounts. Having an integrated, designed-in process to coordinate
multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
 
Hi all,


Some of you folks may be familiar with HA solution provided for nfs-ganesha by 
gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project
"Storhaug". However, Storhaug has not progressed much over the last two years
and its development is currently halted, so we are planning to restore the old
HA ganesha solution to the gluster code repository, with some improvements,
targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set of
patches [2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


[Gluster-devel] New in GlusterFS

2019-05-06 Thread Rajib Hossen
Hello all,
I am new to glusterfs development. I would like to contribute to the Erasure
Coding part of glusterfs. I have already studied the non-systematic code and
its theory. Now, I want to know how erasure-coded read/write works in terms of
code. Can you please point me to any documentation that will help me understand
the glusterfs EC read/write path and the code structure? Any help is
appreciated. Thank you very much.
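Not an official answer, but the core encode/rebuild idea behind EC read/write can be sketched in a few lines. The toy below uses plain XOR parity (k data fragments plus one parity fragment, RAID-5 style); the real GlusterFS EC xlator uses non-systematic Reed-Solomon arithmetic over GF(2^8), so treat this only as an illustration of how a write fans fragments out and a degraded read rebuilds a missing one:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length fragments."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)              # ceiling division
    padded = data.ljust(frag_len * k, b"\0")   # zero-pad to a multiple of k
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = xor_bytes(parity, f)
    return frags + [parity]                    # any k of these k+1 suffice

def rebuild(frags: list, lost: int) -> bytes:
    """Recompute the fragment at index `lost` by XOR-ing the survivors."""
    survivors = [f for i, f in enumerate(frags) if i != lost and f is not None]
    out = survivors[0]
    for f in survivors[1:]:
        out = xor_bytes(out, f)
    return out

# A write encodes and fans fragments out to bricks; a degraded read
# rebuilds the missing fragment from the k survivors:
frags = encode(b"hello gluster ec", 4)
frags[2] = None                                # simulate a brick going down
print(rebuild(frags, 2))                       # -> b'uste'
```

Real Reed-Solomon coding replaces the single XOR parity with m redundancy fragments computed via matrix multiplication over a Galois field, which is what lets a dispersed volume survive the loss of any m bricks.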

Sincerely,
Md Rajib Hossen

Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-06 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:

> Hello Gluster folks,
>
> Gluster-block team is happy to announce the v0.4 release [1].
>
> This is the new stable version of gluster-block; many new and
> exciting features and interesting bug fixes are part of this release.
> Please find the big list of release highlights and notable fixes at [2].
>
>
Good work Team (Prasanna and Xiubo Li to be precise)!!

This was a much-needed release for the gluster-block project, mainly because
of the number of improvements made since the last release. Also, gluster-block
release 0.3 was not compatible with the glusterfs-6.x series.

All, feel free to use it if your deployment has any use case for block
storage, and give us feedback. We are happy to make sure gluster-block is
stable for you.

Regards,
Amar


> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and setup guide at [4].
> If you are a new user, check out the demo video linked in the README
> doc [5], which is a good introduction to the project.
> There are good examples about how to use gluster-block both in the man
> pages [6] and test file [7] (also in the README).
>
> gluster-block is part of the Fedora package collection; an updated package
> with release version v0.4 will be made available soon. Community-provided
> packages will also be made available soon at [8].
>
> Please spend a minute to report any issue you come across using this
> handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
>
> We would like to thank all our users and contributors for filing bugs and
> providing fixes, as well as the whole team involved in the huge
> pre-release testing effort.
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
>
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)