[Gluster-users] Ganesha+Gluster strange issue: incomplete directory reads

2021-08-26 Thread Ivan Rossi
Hello list,

Ganesha (but not Gluster) newbie here.
This is the first time I have had to set up Ganesha to serve a Gluster
volume, but it seems I have stumbled on a weird issue. I hope it is due to
my inexperience with Ganesha.

I need to serve a Gluster volume to some old production VMs that cannot
install a recent Gluster client, so they need to access the Gluster data
using the standard Linux NFS client. Since the native Gluster NFS server is
gone, I had to go the Ganesha route.

The volume being served is used for bioinformatics analysis, with each
subdirectory containing on the order of a thousand files, a few of them
large (think 50 GB each).

Now the issue:

When the volume is mounted on the client (using *NFSv3*), directory reads
SOMETIMES return an INCOMPLETE list of files. The problem goes away if you
redo the read in a different way, as if the first directory metadata read
did not complete successfully but its partial result was cached anyway.

The problem does not manifest if there are few files in the directory or if
they are all small (think < 1 GB).

Direct access to the files is OK even if they did not show up in the ls.
E.g. :

mount -t nfs ganesha:/srv/glfs/work /mnt/
# wrong result
ls /mnt/47194616IMS187mm10 | wc -l
ls: reading directory /mnt/47194616IMS187mm10: Input/output error
304

# right (NB: 'ls -l' returns one line more than plain 'ls', because of the 'total' header)
ls -l /mnt/47194616IMS187mm10 | wc -l
668

# after 'ls -l' now even plain ls returns the expected number of files

ls /mnt/47194616IMS187mm10 | wc -l
667

Furthermore, I see the Input/output error message only because of the pipe
to wc; if I just run plain ls in a terminal, it fails silently, returning a
partial list.

If the client mounts the volume using *NFSv4* everything looks as expected.

mount -t nfs -o vers=4.0 ganesha:/work /mnt/
ls /mnt/47194616IMS187mm10 | wc -l
667

but, as you can guess, my confidence in using Ganesha in production is
somewhat shaky ATM.

My feeling is that it is a Ganesha problem, or something lacking in the
Ganesha configuration for Gluster. My Ganesha configuration is basically
just defaults; no failover configuration either.
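
For reference, a minimal Gluster-FSAL export for a setup like this looks
roughly as follows (a sketch only: the volume name and paths are inferred
from the mount commands below, not copied from my actual ganesha.conf):

EXPORT {
    Export_Id = 1;
    Path = "/srv/glfs/work";
    Pseudo = "/work";
    Access_Type = RW;
    Squash = No_root_squash;
    Protocols = 3, 4;
    Transports = TCP;
    SecType = sys;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";   # ganesha runs on a gluster peer
        Volume = "work";
    }
}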

My Gluster setup has nothing strange: I am just serving a replica 3 (R3)
volume, and the defaults are fine to get a fast volume given the hardware.
Furthermore, the volume looks fine from the Gluster clients.

I am using Gluster 8.4 and Ganesha 3.4 on Debian 10 (buster), with packages
coming from the Gluster and Ganesha repos, not the Debian ones.

Has anyone seen anything similar before?
Did I stumble on a bug?
Any advice or common wisdom to share?

Ivan Rossi




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] State of Gluster project

2020-06-18 Thread Ivan Rossi


On 6/17/20 6:19 AM, Dmitry Melekhov wrote:

On 17.06.2020 01:06, Mahdi Adnan wrote:

Hello,

 I'm wondering what's the current and future plan for Gluster project 
overall, I see that the project is not as busy as it was before "at 
least this is what I'm seeing" Like there are fewer blogs about what 
the roadmap or future plans of the project, the deprecation of 
Glusterd2, even Red Hat Openshift storage switched to Ceph.
As the community of this project, do you feel the same? Is the 
deprecation of Glusterd2 concerning? Do you feel that the project is 
slowing down somehow? Do you think Red Hat is abandoning the project 
or giving fewer resources to Gluster?




Glusterd2 was a mistake, imho. Its deprecation means nothing.

For me it looks like gluster is now stable; this is why it is not as
busy as before.


Some parts of Gluster have been really stable for a very long time now,
which is good, IMHO. I want to be bored by storage, because valuable data
is there. And new features obviously make the situation less boring ;)
because they are... new (and buggy).


Furthermore, it should be remembered that Gluster has been driven for a
long time by Red Hat, which is a company, and has used the Gluster
community in a way similar to what they do with Fedora/RHEL.


If you want maximum stability, you pay for RH Gluster Storage (or whatever
it is called now). If you go with the community version, you take risks
similar to running Fedora in production.


Having said that, I really appreciated the statement that core 
development will focus on stability first.


The Gluster community now is very different from what it was just 2 years
ago. Several Gluster people left RH or moved to different projects.
Conversely, new companies are now involved with Gluster. In a sense it may
be a new start. Let's see how it turns out.



Ivan






Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Proposal to change gNFS status

2019-11-21 Thread Ivan Rossi
Kudos.

On Thu 21 Nov 2019, 11:31, Amar Tumballi wrote:

> Hi All,
>
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Depricated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> TL;DR;
>
> I understand the current maintainers are not able to focus on maintaining
> it, as the focus of the project, as described earlier, is the NFS-Ganesha
> based integration with glusterfs. But I am volunteering, along with Xie
> Changlong (currently working at Chinamobile), to keep the feature running
> as it did in previous versions. Hence the status of 'Odd Fixes'.
>
> Before sending the patch to make these changes, I am proposing it here
> now, as gNFS is not even shipped with the latest glusterfs-7.0 releases. I
> have heard from some users that it was working great for them with earlier
> releases, as all they wanted was NFSv3 support, and not much of the extra
> features of gNFS. Also note that, even though the packages are not built,
> none of the regression tests using gNFS have been dropped from the latest
> master, so it has kept working the same way for at least the last 2 years.
>
> Through this email, I request the package maintainers to please add
> '--with gnfs' (or --enable-gnfs) back to their release scripts, so that
> those users who want gNFS can happily continue to use it. Also, a note to
> users/admins: the status is 'Odd Fixes', so don't expect any
> 'enhancements' to the features provided by gNFS.
>
> Happy to hear feedback, if any.
>
> Regards,
> Amar
>
> 
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
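
For reference, the build flags mentioned above would be used roughly like
this (a sketch; the exact conditional depends on the glusterfs.spec and
configure options shipped with each release):

# RPM-based packaging
rpmbuild -ba glusterfs.spec --with gnfs

# building from source
./configure --enable-gnfs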


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glustered 2018 schedule

2018-03-21 Thread Ivan Rossi
A short report on how Glustered 2018 went.

http://www.biodec.com/it/blog/looking-back-at-glustered-2018

2018-02-28 15:15 GMT+01:00 Ivan Rossi <rouge2...@gmail.com>:

> Today we published the program for the "Glustered 2018" meeting (Bologna,
> Italy, 2018-03-08)
> Hope to see some of you here.
>
>  http://www.incontrodevops.it/events/glustered-2018/
>
> Ivan
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glustered 2018 schedule

2018-02-28 Thread Ivan Rossi
Today we published the program for the "Glustered 2018" meeting (Bologna,
Italy, 2018-03-08)
Hope to see some of you here.

 http://www.incontrodevops.it/events/glustered-2018/

Ivan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glustered 2018 in Bologna (IT)

2017-12-22 Thread Ivan Rossi
We are happy to announce that Glustered 2018, a Gluster community meeting,
will take place on  March 8th 2018 in Bologna (Italy), back-to-back with
Incontro Devops Italia
(http://2018.incontrodevops.it) and in the same venue as the main event.

http://www.incontrodevops.it/events/glustered-2018/

The tentative schedule will have a (confirmed) keynote by Niels de Vos,
plus technical talks, use-case presentations and/or community space.

The Call for Papers is now open: please send proposals to i...@biodec.com.

Bologna is well connected, by cheap direct flights, to most of the major
European airports, so there is the potential to grow beyond a local event
and have a nice European meetup of the community. Please help make the
event a success by submitting proposals; it is also up to you...

Tickets are free, but registration will be required (limited room).

Merry Christmas and a happy new year.

Ivan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] BLQ Gluster community meeting anyone?

2017-11-08 Thread Ivan Rossi
Hello community,

My company is willing to host a Gluster-community meeting in Bologna
(Italy) on March 8th 2018, back-to-back with Incontro Devops Italia (
http://2018.incontrodevops.it) and in the same venue as the conference.

I think that having 2-3 good technical talks, plus some BOFs/lightning
talks/open-space discussions, will make for a nice half-day event. It is
also probable that one or more of the devs will be on-site (they
half-promised...).

However, I would like to understand if there is enough interest in the
community to warrant the effort.

What do you think about it? Anyone interested in attending/contributing?

Ivan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Summit BOF - Encryption

2017-11-07 Thread Ivan Rossi
We had a BOF about how to do file-level volume encryption.

Coupled with geo-replication, this feature would be useful for secure
off-site archiving/backup/disaster-recovery of Gluster volumes.

TL;DR: It might be possible using the EncFS stacked file system on top of a
Gluster mount, but it is experimental and untested. At the moment, you are
on your own.

- The built-in encryption translator is strongly deprecated and it may be
  removed altogether from the code base in the future.

- The kernel-based ecryptfs (http://ecryptfs.org/) stacked file system has a
  known bug with NFS and possibly other network file systems.

- Stacking EncFS (https://github.com/vgough/encfs) on top of a Gluster
  mount should, in principle, work with both native and NFS mounts.
  Performance is going to be low, but still workable in some of the use
  cases of interest (see the sketch after this list).

- Long-term solution: a client-side translator based on the EncFS code.
  ATM there is no plan to develop it.
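
A minimal sketch of the EncFS stacking idea (server, volume and paths are
made up; encfs asks interactively for its configuration and a password on
first use):

# mount the Gluster volume first (native client shown; an NFS mount works the same way)
mount -t glusterfs server:/myvol /mnt/gluster
mkdir -p /mnt/gluster/.enc /mnt/clear
# the ciphertext lives on the Gluster volume, the cleartext view stays local to this client
encfs /mnt/gluster/.enc /mnt/clear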

Hope it is useful to others too.

Ivan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Ivan Rossi
2017-10-11 15:37 GMT+02:00 ML :

> After some extra reading about LVM snapshots & Gluster, I think I can
> conclude it may be a bad idea to use it on big storage bricks.
>
> I understood that the LVM maximum metadata, used to store the snapshots
> data, is about 16GB.
>

LVM metadata are used to store changed METADATA, not data.
Thin-provisioned snapshots may usually grow up to the unallocated capacity
of the thin pool.


> So if I have a brick with a volume around 10TB (for example), daily
> snapshots, files changing ~100GB: the LVM snapshot is useless.
>
> LVM snapshots don't seem to be a good idea with very big LVM
> partitions.
>
> Did I miss something? It is hard to find clear documentation on the subject.
>

The LVM documentation (RH has very good docs available on the web) and even
the lvcreate man page are OK; not a lightweight read, but OK. You need
thin-provisioned LVM pools to have snapshots in Gluster.
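
A minimal sketch of a thin-provisioned brick (device name, sizes and paths
are made up; metadata and chunk sizes are left at their defaults here, see
the LVM docs for tuning):

pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb
# create the thin pool, then a thin LV inside it for the brick;
# gluster volume snapshots are thin snapshots of this LV
lvcreate -L 9T --thinpool brick_pool gluster_vg
lvcreate -V 9T --thin -n brick1 gluster_vg/brick_pool
mkfs.xfs -i size=512 /dev/gluster_vg/brick1
mount /dev/gluster_vg/brick1 /bricks/brick1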


>
> Le 11/10/2017 à 09:07, Ric Wheeler a écrit :
>
>> On 10/11/2017 09:50 AM, ML wrote:
>>
>>> Hi everyone,
>>>
>>> I've read on the gluster & redhat documentation, that it seems
>>> recommended to use XFS over LVM before creating & using gluster volumes.
>>>
>>> Sources :
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storag
>>> e/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>>> http://gluster.readthedocs.io/en/latest/Administrator%20Guid
>>> e/Setting%20Up%20Volumes/
>>>
>>> My point is : do we really need LVM ?
>>> For example , on a dedicated server with disks & partitions that will
>>> not change of size, it doesn't seems necessary to use LVM.
>>>
>>> I can't understand clearly which partitioning strategy would be the best
>>> for "static size" hard drives :
>>>
>>> 1 LVM+XFS partition = multiple gluster volumes
>>> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
>>> or 1 XFS partition = multiple gluster volumes
>>> or 1 XFS partition = 1 gluster volume per XFS partition
>>>
>>> What do you use on your servers ?
>>>
>>> Thanks for your help! :)
>>>
>>> Quentin
>>>
>>
>> Hi Quentin,
>>
>> Gluster relies on LVM for snapshots - you won't get those unless you
>> deploy on LVM.
>>
>> Regards,
>> Ric
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-04 Thread Ivan Rossi
The latter one is the one I have been referring to. And it is pretty
dangerous, IMHO.

On 31 Aug 2017, 01:19, <lemonni...@ulrar.net> wrote:

> Solved as of 3.7.12. The only bug left is when adding new bricks to
> create a new replica set; not sure where we are now on that bug, but
> that's not a common operation (well, at least for me).
>
> On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote:
> > There has been a bug associated with sharding that led to VM corruption;
> > it has been around for a long time (difficult to reproduce, I understood).
> > I have not seen reports of it for some time after the last fix, so
> > hopefully VM hosting is now stable.
> >
> > 2017-08-30 3:57 GMT+02:00 Everton Brogliatto <broglia...@gmail.com>:
> >
> > > Ciao Gionatan,
> > >
> > > I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide
> > > storage for oVirt 4.x and I have had no major issues so far.
> > > I have done online upgrades a couple of times, power losses,
> maintenance,
> > > etc with no issues. Overall, it is very resilient.
> > >
> > > Important thing to keep in mind is your network, I run the Gluster
> nodes
> > > on a redundant network using bonding mode 1 and I have performed
> > > maintenance on my switches, bringing one of them off-line at a time
> without
> > > causing problems in my Gluster setup or in my running VMs.
> > > Gluster recommendation is to enable jumbo frames across the
> > > subnet/servers/switches you use for Gluster operations. Your switches
> must
> > > support MTU 9000 + 208 at least.
> > >
> > > There were two occasions where I purposely caused a split brain
> situation
> > > and I was able to heal the files manually.
> > >
> > > Volume performance tuning can make a significant difference in
> Gluster. As
> > > others have mentioned previously, sharding is recommended when running
> VMs
> > > as it will split big files in smaller pieces, making it easier for the
> > > healing to occur.
> > > When you enable sharding, the default sharding block size is 4MB which
> > > will significantly reduce your writing speeds. oVirt recommends the
> shard
> > > block size to be 512MB.
> > > The volume options you are looking here are:
> > > features.shard on
> > > features.shard-block-size 512MB
> > >
> > > I had an experimental setup in replica 2 using an older version of
> Gluster
> > > few years ago and it was unstable, corrupt data and crashed many
> times. Do
> > > not use replica 2. As others have already said, minimum is replica 2+1
> > > arbiter.
> > >
> > > If you have any questions that I perhaps can help with, drop me an
> email.
> > >
> > >
> > > Regards,
> > > Everton Brogliatto
> > >
> > >
> > > On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <g.da...@assyoma.it>
> > > wrote:
> > >
> > >> On 26-08-2017 07:38 Gionatan Danti wrote:
> > >>
> > >>> I'll surely give a look at the documentation. I have the "bad" habit
> > >>> of not putting into production anything I know how to repair/cope
> > >>> with.
> > >>>
> > >>> Thanks.
> > >>>
> > >>
> > >> Mmmm, this should read as:
> > >>
> > >> "I have the "bad" habit of not putting into production anything I do
> NOT
> > >> know how to repair/cope with"
> > >>
> > >> Really :D
> > >>
> > >>
> > >> Thanks.
> > >>
> > >> --
> > >> Danti Gionatan
> > >> Supporto Tecnico
> > >> Assyoma S.r.l. - www.assyoma.it
> > >> email: g.da...@assyoma.it - i...@assyoma.it
> > >> GPG public key ID: FF5F32A8
> > >> ___
> > >> Gluster-users mailing list
> > >> Gluster-users@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-users
> > >>
> > >
> > >
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > >
>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-30 Thread Ivan Rossi
There has been a bug associated with sharding that led to VM corruption;
it has been around for a long time (difficult to reproduce, I understood).
I have not seen reports of it for some time after the last fix, so
hopefully VM hosting is now stable.

2017-08-30 3:57 GMT+02:00 Everton Brogliatto :

> Ciao Gionatan,
>
> I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide
> storage for oVirt 4.x and I have had no major issues so far.
> I have done online upgrades a couple of times, power losses, maintenance,
> etc with no issues. Overall, it is very resilient.
>
> Important thing to keep in mind is your network, I run the Gluster nodes
> on a redundant network using bonding mode 1 and I have performed
> maintenance on my switches, bringing one of them off-line at a time without
> causing problems in my Gluster setup or in my running VMs.
> Gluster recommendation is to enable jumbo frames across the
> subnet/servers/switches you use for Gluster operations. Your switches must
> support MTU 9000 + 208 at least.
>
> There were two occasions where I purposely caused a split brain situation
> and I was able to heal the files manually.
>
> Volume performance tuning can make a significant difference in Gluster. As
> others have mentioned previously, sharding is recommended when running VMs
> as it will split big files in smaller pieces, making it easier for the
> healing to occur.
> When you enable sharding, the default sharding block size is 4MB which
> will significantly reduce your writing speeds. oVirt recommends the shard
> block size to be 512MB.
> The volume options you are looking here are:
> features.shard on
> features.shard-block-size 512MB
>
> I had an experimental setup in replica 2 using an older version of Gluster
> few years ago and it was unstable, corrupt data and crashed many times. Do
> not use replica 2. As others have already said, minimum is replica 2+1
> arbiter.
>
> If you have any questions that I perhaps can help with, drop me an email.
>
>
> Regards,
> Everton Brogliatto
>
>
> On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti 
> wrote:
>
>> On 26-08-2017 07:38 Gionatan Danti wrote:
>>
>>> I'll surely give a look at the documentation. I have the "bad" habit
>>> of not putting into production anything I know how to repair/cope
>>> with.
>>>
>>> Thanks.
>>>
>>
>> Mmmm, this should read as:
>>
>> "I have the "bad" habit of not putting into production anything I do NOT
>> know how to repair/cope with"
>>
>> Really :D
>>
>>
>> Thanks.
>>
>> --
>> Danti Gionatan
>> Supporto Tecnico
>> Assyoma S.r.l. - www.assyoma.it
>> email: g.da...@assyoma.it - i...@assyoma.it
>> GPG public key ID: FF5F32A8
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] On Gluster resiliency

2016-12-23 Thread Ivan Rossi
The last few days have been tense because an R3 3.8.5 Gluster cluster that
I built has been plagued by problems.

The first symptom has been a continuous stream in the client logs of:

[2016-12-17 15:55:02.047508] E [MSGID: 108009]
[afr-open.c:187:afr_openfd_fix_open_cbk]
0-hisap-prod-1-replicate-0: Failed to open
/home/galaxy/HISAP/java/lib/java/jre1.7.0_51/jre/lib/rt.jar on subvolume
hisap-prod-1-client-2 [Transport endpoint is not connected]

followed by very frequent peer disconnections/reconnections and a
continuous stream of files to be healed on several volumes.

The problem has been traced back to a flaky X540-T2 10GbE NIC embedded in
one of the peers' motherboards, which was incapable of keeping the correct
10Gbit speed negotiation with the switch.

The motherboard on the peer has been replaced, and then the volumes healed
quickly back to complete health. All of this while the users kept running
some heavy-duty bioinformatics applications (NGS data analysis) on top of
Gluster. No user noticed ANYTHING despite a major hardware problem and the
off-lining of a peer.
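
For reference, heal progress on a replica volume can be followed with the
standard heal commands (the volume name here is just inferred from the log
excerpt above):

gluster volume heal hisap-prod-1 info
gluster volume heal hisap-prod-1 statistics heal-count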

This is a RESILIENT system, in my book.

Gluster people, despite the constant stream of problems and requests
for help that you see on the ML and IRC, rest assured that you are
building a nice piece of software, at least IMHO.

Keep up the good work, and Merry Christmas.

Ivan Rossi
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication as backup

2016-11-25 Thread Ivan Rossi
I would not say that it is the only and official way.
For example, the bareos (bareos.org) backup system can talk to Gluster via
gfapi, IIRC.

On 21 Nov 2016, 17:32, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
>
> 2016-11-21 15:48 GMT+01:00 Aravinda :
> > When you set a checkpoint, you can watch the status of checkpoint
> > completion using geo-rep status. Once the checkpoint completes, it is
> > guaranteed that everything created before the Checkpoint Time is synced
> > to the slave. (Note: it only ensures that all creates/updates done
> > before the checkpoint time are synced; Geo-rep may also sync files
> > which are created/modified after the Checkpoint Time.)
> >
> > Read more about Checkpoint here:
> > http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/#checkpoint
>
> Thank you.
> So, can I assume this is the official (and only) way to properly
> backup a Gluster storage ?
> I'm saying "only" way because it would be impossible to backup a multi
> terabyte storage with any other software.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
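
For reference, the checkpoint mechanism quoted above is driven with the
standard geo-replication CLI, roughly like this (master volume and slave
names are placeholders):

gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
gluster volume geo-replication mastervol slavehost::slavevol status detail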
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterd seems to be ignoring that the underling filesystem got missing

2016-09-26 Thread Ivan Rossi
for completeness:

https://bugzilla.redhat.com/show_bug.cgi?id=1378978

2016-09-23 18:06 GMT+02:00 Luca Gervasi :
> Hi guys,
> I've got a strange problem involving this timeline (matches the "Log
> fragment 1" excerpt)
> 19:56:50: disk is detached from my system. This disk is actually the brick
> of the volume V.
> 19:56:50: LVM sees the disk as unreachable and starts its maintenance
> procedures
> 19:56:50: LVM umounts my thin provisioned volumes
> 19:57:02: Health check on specific bricks fails thus moving the brick to a
> down state
> 19:57:32: XFS filesystem umounts
>
> At this point, the brick filesystem is no longer mounted. The underlying
> filesystem is empty (it misses the brick directory too). My assumption was
> that gluster would stop itself in such conditions: it does not.
> Gluster slowly fills my entire root partition, creating its full tree.
>
> My only warning point is the disk that starts to fill its inodes to 100%.
>
> I've read the release notes for every version subsequent to mine (3.7.14,
> 3.7.15) without finding relevant fixes, and at this point I'm pretty sure
> it is some undocumented bug.
> Servers were made symmetric.
>
> Could you please help me understand how to avoid gluster continuing to
> write on an unmounted filesystem? Thanks.
>
> I'm running a 3 node replica on 3 azure vms. This is the configuration:
>
> MD (yes, i use md to aggregate 4 disks into a single 4Tb volume):
> /dev/md128:
> Version : 1.2
>   Creation Time : Mon Aug 29 18:10:45 2016
>  Raid Level : raid0
>  Array Size : 4290248704 (4091.50 GiB 4393.21 GB)
>Raid Devices : 4
>   Total Devices : 4
> Persistence : Superblock is persistent
>
> Update Time : Mon Aug 29 18:10:45 2016
>   State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
>
>  Chunk Size : 512K
>
>Name : 128
>UUID : d5c51214:43e48da9:49086616:c1371514
>  Events : 0
>
> Number   Major   Minor   RaidDevice State
>0   8   800  active sync   /dev/sdf
>1   8   961  active sync   /dev/sdg
>2   8  1122  active sync   /dev/sdh
>3   8  1283  active sync   /dev/sdi
>
> PV, VG, LV status
>   PV VG  Fmt  Attr PSize PFree DevSize PV UUID
>   /dev/md127 VGdata  lvm2 a--  2.00t 2.00t   2.00t
> Kxb6C0-FLIH-4rB1-DKyf-IQuR-bbPE-jm2mu0
>   /dev/md128 gluster lvm2 a--  4.00t 1.07t   4.00t
> lDazuw-zBPf-Duis-ZDg1-3zfg-53Ba-2ZF34m
>
>  VG  Attr   Ext   #PV #LV #SN VSize VFree VG UUID
> VProfile
>   VGdata  wz--n- 4.00m   1   0   0 2.00t 2.00t
> XI2V2X-hdxU-0Jrn-TN7f-GSEk-7aNs-GCdTtn
>   gluster wz--n- 4.00m   1   6   0 4.00t 1.07t
> ztxX4f-vTgN-IKop-XePU-OwqW-T9k6-A6uDk0
>
>  LV  VG  #Seg Attr   LSize   Maj Min KMaj KMin Pool
> Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID
> LProfile
>   apps-data   gluster1 Vwi-aotz--  50.00g  -1  -1  253   12
> thinpool0.08
> znUMbm-ax1N-R7aj-dxLc-gtif-WOvk-9QC8tq
>   feedgluster1 Vwi-aotz-- 100.00g  -1  -1  253   14
> thinpool0.08
> hZ4Isk-dELG-lgFs-2hJ6-aYid-8VKg-3jJko9
>   homes   gluster1 Vwi-aotz--   1.46t  -1  -1  253   11
> thinpool58.58
> salIPF-XvsA-kMnm-etjf-Uaqy-2vA9-9WHPkH
>   search-data gluster1 Vwi-aotz-- 100.00g  -1  -1  253   13
> thinpool16.41
> Z5hoa3-yI8D-dk5Q-2jWH-N5R2-ge09-RSjPpQ
>   thinpoolgluster1 twi-aotz--   2.93t  -1  -1  2539
> 29.85  60.00
> oHTbgW-tiPh-yDfj-dNOm-vqsF-fBNH-o1izx2
>   video-asset-manager gluster1 Vwi-aotz-- 100.00g  -1  -1  253   15
> thinpool0.07
> 4dOXga-96Wa-u3mh-HMmE-iX1I-o7ov-dtJ8lZ
>
> Gluster volume configuration (all volumes use the same exact configuration,
> listing them all would be redundant)
> Volume Name: vol-homes
> Type: Replicate
> Volume ID: 0c8fa62e-dd7e-429c-a19a-479404b5e9c6
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glu01.prd.azr:/bricks/vol-homes/brick1
> Brick2: glu02.prd.azr:/bricks/vol-homes/brick1
> Brick3: glu03.prd.azr:/bricks/vol-homes/brick1
> Options Reconfigured:
> performance.readdir-ahead: on
> cluster.server-quorum-type: server
> nfs.disable: disable
> cluster.lookup-unhashed: auto
> performance.nfs.quick-read: on
> performance.nfs.read-ahead: on
> performance.cache-size: 4096MB
> cluster.self-heal-daemon: enable
> diagnostics.brick-log-level: ERROR
> diagnostics.client-log-level: ERROR
> nfs.rpc-auth-unix: off
> nfs.acl: off
> performance.nfs.io-cache: on
> performance.client-io-threads: on
> performance.nfs.stat-prefetch: on
> performance.nfs.io-threads: on
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> performance.md-cache-timeout: 1
> performance.cache-refresh-timeout: 1
> performance.io-thread-count: 16
> performance.high-prio-threads: 16
> 

Re: [Gluster-users] So what are people using for 10G nics

2016-08-29 Thread Ivan Rossi
X540-T2 now, but in the past we used Solarflare with no particular issues.

On 26 Aug 2016, 22:32, "Diego Remolina" wrote:

> Servers now also come with the copper 10Gbit network adapters built in the
> motherboard (Dell R730, supermicro, etc). But for those that do not, I have
> used the Intel X540-T2 adapters with Centos 7 and RHEL7.
>
> As for switches, our infrastructure uses expensive Cisco 9XXX series and
> FEX expanders, so cannot really say much about "inexpensive" ones.
>
> Diego
>
> On Aug 26, 2016 16:05, "WK"  wrote:
>
>> Prices seem to be dropping online at NewEgg etc and going from 2 nodes to
>> 3 nodes for a quorum implies a lot more traffic than would be comfortable
>> with 1G.
>>
>> Any NIC/Switch recommendations for RH/Cent 7.x and Ubuntu 16?
>>
>>
>> -wk
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Fwd: CFP for Gluster Developer Summit

2016-08-26 Thread Ivan Rossi
If there is interest, I may give a (short) talk,

"the life of a consultant listed on gluster.org/support"

about the use cases that we have encountered in the last two years.

2016-08-12 21:48 GMT+02:00 Vijay Bellur :
> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are looking
> to have talks and discussions related to the following themes in the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to reduce our
> technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
>
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightening talks or something similar.
>
> Please do reach out to me or Amye if you have any questions.
>
> Thanks!
> Vijay
>
> [1] https://www.gluster.org/events/summit2016/
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Diamond metrics collector

2016-03-21 Thread Ivan Rossi
Jumping in...
Is anyone aware of a similar plugin for collectd? I am looking at Diamond
(mostly because it is Python), but I already have collectd in place.

PS: I know, I could do it myself, but time is a limited resource.

2016-03-18 21:39 GMT+01:00 Charles Williams :
> Grant,
>
> Thanks for the reminder. Just forked, uploaded and submitted the PR.
>
> Chuck
>
> On Fri, 2016-03-18 at 10:27 -0700, Grant Ridder wrote:
>> Charles,
>>
>> Thanks for the write-up!  I see near the end you mention "Until the
>> collector is officially accepted into the Diamond project" but I
>> don't see a PR on the GitHub repo for the project.  Can you
>> elaborate on this?
>>
>> -Grant
>>
>> On Fri, Mar 18, 2016 at 1:24 AM, Charles Williams wrote:
>> > On Wed, 2016-03-16 at 17:42 +0100, Niels de Vos wrote:
>> > > On Wed, Mar 16, 2016 at 02:24:00PM +0100, Charles Williams wrote:
>> > > > On Wed, 2016-03-16 at 14:07 +0100, Niels de Vos wrote:
>> > > > > On Wed, Mar 16, 2016 at 10:51:52AM +0100, Charles Williams wrote:
>> > > > > > Hey all,
>> > > > > >
>> > > > > > Finally took the time to hammer out a Diamond metrics collector.
>> > > > > > It's currently functional beta (in production on one of our
>> > > > > > gluster clusters). Would like to get some help testing it a bit.
>> > > > > > So if you have Diamond collecting metrics for you, then try it out.
>> > > > > >
>> > > > > > https://wiki.itadmins.net/filesystems/glusterfs_diamond_metrics
>> > > > >
>> > > > > I'm missing a little detail here. I've never heard of "Diamond
>> > > > > metrics" before. A little duckduckgo'ing suggests that it would be
>> > > > > this project:
>> > > > >
>> > > > >   https://github.com/python-diamond/Diamond
>> > > > >
>> > > > > That sounds very interesting. Monitoring is something every
>> > > > > environment needs and each one has different requirements. Diamond
>> > > > > support for Gluster can definitely use some examples.
>> > > > >
>> > > > > Do you think you could write a blog post or wiki article that
>> > > > > contains screenshots or so? It also may be very helpful to get your
>> > > > > collector included in the standard Diamond installation. Have you
>> > > > > thought about sending it to the python-Diamond project?
>> > > > >
>> > > > > Thanks!
>> > > > > Niels
>> > > >
>> > > > Niels,
>> > > >
>> > > > Anyone needing this collector knows what diamond is already. ;)
>> > > >
>> > > > But I guess I could have added a bit of context for those NOT in
>> > > > the know. And that is the correct project. It used to belong to
>> > > > Brightcove but they gave it up a while ago.
>> > > >
>> > > > I have to admit, it is a great metrics collection suite. Very
>> > > > lightweight, fast and extensible.
>> > > >
>> > > > There isn't a lot to explain, but I will see if I can clear some
>> > > > time to write a short something about it. Will also be submitting
>> > > > it for inclusion in the package as soon as I feel there are no
>> > > > major issues (hence the call for testers).
>> > >
>> > > Great, thanks for explaining! I'm looking forward to seeing feedback
>> > > from users and your progress on inclusion in the main Diamond project.
>> > >
>> > > Niels
>> >
>> > ok peeples,
>> >
>> > here is a quick writeup I did. just touches on a few points and
>> > explains briefly how the install goes.
>> >
>> > https://www.itadmins.net/2016/03/18/getting-useful-metrice-from-glusterfs-into-grafana-using-diamond/#more-678
>> >
>> > enjoy
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Differences between RHSS and gluster.org codebases

2015-03-14 Thread Ivan Rossi
This is mainly for the RH people.

I am not sure I understand correctly how different the Gluster server code
is between the commercial RHSS and gluster.org.

Is it the same codebase, or is RHSS a separate fork?
Is the relationship between the two codebases similar to that of the
RHEL/Fedora projects?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users