[Gluster-devel] Fwd: [ovirt-users] ovirt35 - deep dive - Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
Getting the alias for gluster-devel right now ;-). -Vijay

-------- Original Message --------
Subject: [Gluster-users] Fwd: [ovirt-users] ovirt35 - deep dive - Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
Date: Fri, 12 Sep 2014 14:13:37 +0530
From: Vijay Bellur vbel...@redhat.com
To: gluster-users Discussion List gluster-us...@gluster.org
CC: 'gluster-de...@nongnu.org' gluster-de...@nongnu.org

Hi All,

There is a hangout scheduled by the oVirt community to discuss Nagios + oVirt integration for monitoring gluster deployments. If you are interested in this topic, please feel free to join the hangout (details below).

Regards,
Vijay

-------- Original Message --------
Subject: [ovirt-users] ovirt35 - deep dive - Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
Date: Thu, 11 Sep 2014 06:43:08 -0400 (EDT)
From: Barak Azulay bazu...@redhat.com
To: anb...@redhat.com, sab...@redhat.com, us...@ovirt.org, de...@ovirt.org

The following is a new meeting request:

Subject: ovirt35 - deep dive - Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
Organizer: Barak Azulay bazu...@redhat.com
Time: Monday, September 15, 2014, 4:00:00 PM - 5:00:00 PM GMT+02:00 Jerusalem
Invitees: anb...@redhat.com; sab...@redhat.com; us...@ovirt.org; de...@ovirt.org

*~*~*~*~*~*~*~*~*~*

Nagios has been integrated with ovirt-engine to monitor gluster deployments, enabling administrators to monitor the health of their deployments. Administrators can use the auto-config script to automatically configure Nagios for monitoring their gluster deployment (hosts, volumes).

The UI plugin manifests itself in the form of two tabs:

1. Dashboard -- overall view of the deployment (not in 3.5; part of our plan for a future release)
2. Trends -- graphs displayed in accordance with the entity selected in the system tree when the tab is open

In this session we will go through the overall integration architecture, the auto-config script, and the monitoring UI plugin.
Google hangout link: https://plus.google.com/events/ccf2tev5tg5eh7ntelph95psuis
Wiki link: http://www.ovirt.org/Features/Nagios_Integration

___
Users mailing list
us...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] GlusterFS 3.6 Test Day Preparations
Hi All,

As part of the GlusterFS 3.6 release, we are planning to hold GlusterFS 3.6 test days starting from next week. All the features listed on the 3.6 planning page (http://www.gluster.org/community/documentation/index.php/Planning36) have to be updated to give a clear picture of what exactly each feature provides, the user experience, etc. The most important part is updating the HOW TO TEST section in each feature page. The feature page and its HOW TO TEST section will be used heavily for testing the feature, so component owners are requested to spend some time updating these pages, so that the test days will be more productive.

--Humble
Re: [Gluster-devel] [Gluster-users] Proposal for GlusterD-2.0
----- Original Message -----
From: Jeff Darcy jda...@redhat.com
To: Balamurugan Arumugam b...@gluster.com
Cc: Justin Clift jus...@gluster.org, gluster-us...@gluster.org, Gluster Devel gluster-devel@gluster.org
Sent: Thursday, September 11, 2014 7:45:52 PM
Subject: Re: [Gluster-users] [Gluster-devel] Proposal for GlusterD-2.0

> Yes. I recently came across Salt in the context of unified storage
> management for gluster and ceph, which is still in the planning phase. I
> could see it covering the complete infrastructure requirement, from
> glusterd up to unified management. Calamari, the ceph management tool,
> already uses Salt. Salt (or any such infrastructure) would be the ideal
> solution if gluster, ceph, and the unified management layer all used it.

I think the idea of using Salt (or similar) is interesting, but it's also key that Ceph still has its mon cluster as well. (Is mon calamari an *intentional* Star Wars reference?)

As I see it, glusterd or anything we use to replace it has multiple responsibilities:

(1) Track the current up/down state of cluster members and resources.
(2) Store configuration and coordinate changes to it.
(3) Orchestrate complex or long-running activities (e.g. rebalance).
(4) Provide service discovery (the current portmapper).

Salt and its friends clearly shine at (2) and (3), though they outsource the actual data storage to an external data store. With such a data store, (4) becomes pretty trivial. The sticking point for me is (1). How does Salt handle that need, or how might it be satisfied on top of the facilities Salt does provide? I can see *very* clearly how to do it on top of etcd or consul. Could those in fact be used for Salt's data store? It seems like Salt shouldn't need a full-fledged industrial-strength database, just something with high consistency/availability and some basic semantics. Maybe we should try to engage with the Salt developers to come up with ideas. Or find out exactly what functionality they found still needs to be in the mon cluster and not in Salt.
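[To make responsibility (1) concrete, here is a minimal in-memory sketch of etcd/consul-style liveness tracking with expiring keys: each member refreshes a TTL'd key, and a member whose key has expired is considered down. The `Registry` class, key layout, and TTL value are illustrative assumptions, not the real etcd or consul API.]

```python
import time

class Registry:
    """Toy model of TTL-key liveness tracking (not a real etcd client)."""

    def __init__(self, ttl=10):
        self.ttl = ttl
        self._keys = {}  # key -> expiry timestamp

    def heartbeat(self, member, now=None):
        # Each cluster member periodically refreshes its own key;
        # the refresh pushes the expiry `ttl` seconds into the future.
        now = time.time() if now is None else now
        self._keys["/members/%s" % member] = now + self.ttl

    def alive(self, now=None):
        # Members whose keys have not yet expired are considered up.
        now = time.time() if now is None else now
        return sorted(k.rsplit("/", 1)[1]
                      for k, exp in self._keys.items() if exp > now)

reg = Registry(ttl=10)
reg.heartbeat("node1", now=0)
reg.heartbeat("node2", now=0)
print(reg.alive(now=5))   # both heartbeats still within TTL
print(reg.alive(now=11))  # TTLs expired: members considered down
```

[A watch on the `/members/` prefix would then give the management daemon up/down notifications, which is the part that is obvious with etcd/consul and unclear with Salt alone.]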
Salt has a way to push events from salt-minions to the salt-master. At the salt-master, we would extend the reactor [1] to handle such (any) events. This helps to solve (1).

Regards,
Bala

[1] http://docs.saltstack.com/en/latest/topics/reactor/
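[The minion-event-to-reactor flow described above can be sketched as a tiny tag-matching dispatcher. The event tags, handler names, and decorator are invented for illustration; the real Salt reactor matches tags from the event bus against reactor SLS files rather than Python callbacks.]

```python
# Hypothetical sketch of reactor-style event handling, not Salt's API.
handlers = {}

def reactor(prefix):
    """Register a handler for events whose tag starts with `prefix`."""
    def register(fn):
        handlers.setdefault(prefix, []).append(fn)
        return fn
    return register

state = {"up": set()}  # stands in for responsibility (1): up/down state

@reactor("gluster/brick/up")
def brick_up(tag, data):
    state["up"].add(data["brick"])

@reactor("gluster/brick/down")
def brick_down(tag, data):
    state["up"].discard(data["brick"])

def fire_event(tag, data):
    """A minion pushes an event; the master dispatches by tag prefix."""
    for prefix, fns in handlers.items():
        if tag.startswith(prefix):
            for fn in fns:
                fn(tag, data)

fire_event("gluster/brick/up", {"brick": "node1:/b1"})
fire_event("gluster/brick/up", {"brick": "node2:/b1"})
fire_event("gluster/brick/down", {"brick": "node1:/b1"})
print(sorted(state["up"]))  # only node2's brick remains up
```

[The open question from the thread remains: events tell you when something *changed*, but detecting a silently dead member still needs a timeout/TTL mechanism on top.]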
Re: [Gluster-devel] How does GD_SYNCOP work?
On Fri, Sep 12, 2014 at 06:31:55AM +0200, Emmanuel Dreyfus wrote:
> It is fine for me that glusterd_bricks_select_heal_volume() finds 3
> bricks, they are the 3 remaining alive bricks. However I am surprised to
> see the first in the list having rpc-conn.name = management. It should be
> a brick name here, right? Or is this glustershd?

Reading the code, it has to be glustershd.

I have tracked down most of the problem. The request to glustershd times out before the reply comes, because glustershd gets stuck in an infinite loop. In afr_shd_gather_index_entries(), the obtained offset is corrupted (a huge negative value), and the loop never ends. I will not look for why this offset is corrupted.

-- 
Emmanuel Dreyfus
m...@netbsd.org
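[The failure mode described above can be sketched as a readdir-style loop that blindly trusts the offset returned by each read. The function names, the fake reader, and the sanity check are illustrative; afr_shd_gather_index_entries() itself is C code in the AFR self-heal daemon.]

```python
def gather_index_entries(read_batch, max_iterations=1000):
    """Collect entries by repeatedly reading from the last offset.

    `read_batch(offset)` returns (entries, next_offset); an empty
    batch signals end-of-directory.
    """
    offset = 0
    entries = []
    for _ in range(max_iterations):
        batch, next_offset = read_batch(offset)
        if not batch:
            return entries
        # Guard: a corrupted (negative or non-advancing) offset would
        # otherwise make this loop spin forever, as observed here.
        if next_offset <= offset:
            raise ValueError("bogus offset %d after %d"
                             % (next_offset, offset))
        entries.extend(batch)
        offset = next_offset
    raise RuntimeError("too many iterations")

def corrupt_reader(offset):
    # Simulates the bug: entries returned with a huge negative offset.
    return (["entry"], -2**62)

try:
    gather_index_entries(corrupt_reader)
except ValueError as e:
    print("detected:", e)
```

[With the guard, the corrupted offset turns a hang into a loud error; without it, `offset` never advances past the bogus value and the loop spins, which matches the timeout seen on the glusterd side.]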
Re: [Gluster-devel] How does GD_SYNCOP work?
Emmanuel Dreyfus m...@netbsd.org wrote:
> I will not look for why this offset is corrupted.

s/not/now/ of course...

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Re: [Gluster-devel] [Gluster-users] Proposal for GlusterD-2.0
> Has anyone looked into whether LogCabin can provide the consistent small
> storage based on Raft for Gluster?
> https://github.com/logcabin/logcabin
> I have no experience with using it so I cannot say if it is good or
> suitable. I do know that the following project uses it, and it's just not
> as easy to set up as Gluster is - it also has ZooKeeper support etc.
> https://ramcloud.atlassian.net/wiki/display/RAM/RAMCloud

LogCabin is the canonical implementation of Raft, by the author of the Raft protocol, so it was the first implementation I looked at. Sad to say, it didn't seem that stable. AFAIK RAMCloud - itself an academic project - is the only user, whereas etcd and consul are being used by multiple projects and in production. Also, I found the etcd code more readable than LogCabin, despite the fact that I've worked in C++ before and had never seen any Go code until that time. Then again, those were early days for all three projects (consul didn't even exist yet), so things might have changed.