Re: [Gluster-users] [Gluster-devel] Version uplift query

2019-02-27 Thread Milind Changire
You might want to check what build.log says, especially at the very
bottom.

Here's a hint from StackExchange.
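
For reference, a quick way to pull up the end of those logs might be something
like this (assuming configure was run from the top of the source tree and the
build output was captured to build.log; exact paths may differ on your setup):

    tail -n 50 build.log     # last lines of the captured build output
    tail -n 100 config.log   # the autoconf log usually holds the real compiler/preprocessor error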

On Thu, Feb 28, 2019 at 12:42 PM ABHISHEK PALIWAL 
wrote:

> I am trying to build Gluster5.4 but getting below error at the time of
> configure
>
> conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory
>
> Could you please help me what is the reason of the above error.
>
> Regards,
> Abhishek
>
> On Wed, Feb 27, 2019 at 8:42 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> GlusterD2 is not yet called out for standalone deployments.
>>
>> You can happily update to glusterfs-5.x (recommend you to wait for
>> glusterfs-5.4 which is already tagged, and waiting for packages to be
>> built).
>>
>> Regards,
>> Amar
>>
>> On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi,
>>>
>>> Could  you please update on this and also let us know what is GlusterD2
>>> (as it is under development in 5.0 release), so it is ok to uplift to 5.0?
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi,

 Currently we are using Glusterfs 3.7.6 and thinking to switch on
 Glusterfs 4.1 or 5.0, when I see there are too much code changes between
 these version, could you please let us know, is there any compatibility
 issue when we uplift any of the new mentioned version?

 Regards
 Abhishek

>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>> Amar Tumballi (amarts)
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Milind
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread ABHISHEK PALIWAL
I am trying to build Gluster 5.4 but am getting the error below at
configure time:

conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory

Could you please help me understand what is causing this error?
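
For context: the conftest.c that includes ac_nonexistent.h is a standard
autoconf probe, so that line showing up in config.log is normal; configure
aborting on it usually points at the C preprocessor sanity check failing,
often a toolchain or cross-compile environment problem rather than a GlusterFS
bug. A minimal way to find the surrounding test in the log, assuming the
default config.log in the build directory, might be:

    grep -n -B 5 -A 15 'ac_nonexistent' config.log | less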

Regards,
Abhishek

On Wed, Feb 27, 2019 at 8:42 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> GlusterD2 is not yet called out for standalone deployments.
>
> You can happily update to glusterfs-5.x (recommend you to wait for
> glusterfs-5.4 which is already tagged, and waiting for packages to be
> built).
>
> Regards,
> Amar
>
> On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi,
>>
>> Could  you please update on this and also let us know what is GlusterD2
>> (as it is under development in 5.0 release), so it is ok to uplift to 5.0?
>>
>> Regards,
>> Abhishek
>>
>> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi,
>>>
>>> Currently we are using Glusterfs 3.7.6 and thinking to switch on
>>> Glusterfs 4.1 or 5.0, when I see there are too much code changes between
>>> these version, could you please let us know, is there any compatibility
>>> issue when we uplift any of the new mentioned version?
>>>
>>> Regards
>>> Abhishek
>>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Amar Tumballi (amarts)
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread Amudhan P
Hi Poornima,

Instead of killing the processes, how about stopping the volume, then
stopping the service on the nodes, and then updating GlusterFS?

Can't we follow the above steps?
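
For what it's worth, a rough sketch of that sequence (volume and package
names below are placeholders, this assumes a Debian/Ubuntu-style install, and
the official upgrade guide for your version should be checked first):

    gluster volume stop myvol            # stop the volume once, cluster-wide
    systemctl stop glusterd              # then, on each node: stop the management daemon
    apt-get update
    apt-get install --only-upgrade glusterfs-server glusterfs-client
    systemctl start glusterd
    gluster volume start myvol           # once, after every node has been upgraded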

regards
Amudhan

On Thu, Feb 28, 2019 at 8:16 AM Poornima Gurusiddaiah 
wrote:

>
>
> On Wed, Feb 27, 2019, 11:52 PM Ingo Fischer  wrote:
>
>> Hi Amar,
>>
>> sorry to jump into this thread with an connected question.
>>
>> When installing via "apt-get" and so using debian packages and also
>> systemd to start/stop glusterd is the online upgrade process from
>> 3.x/4.x to 5.x still needed as described at
>> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/ ?
>>
>> Especially because there is manual killall and such for processes
>> handled by systemd in my case. Or is there an other upgrade guide or
>> recommendations for use on ubuntu?
>>
>> Would systemctl stop glusterd, then using apt-get update with changes
>> sources and a reboot be enough?
>>
>
> I think you would still need to kill the process manually, AFAIK systemd
> only stops glusterd not the other Gluster processes like
> glusterfsd(bricks), heal process etc. Reboot of system is not required, if
> that's what you meant by reboot. Also you need follow all the other steps
> mentioned, for the cluster to work smoothly after upgrade. Especially the
> steps to perform heal are important.
>
> Regards,
> Poornima
>
>
>> Ingo
>>
>> Am 27.02.19 um 16:11 schrieb Amar Tumballi Suryanarayan:
>> > GlusterD2 is not yet called out for standalone deployments.
>> >
>> > You can happily update to glusterfs-5.x (recommend you to wait for
>> > glusterfs-5.4 which is already tagged, and waiting for packages to be
>> > built).
>> >
>> > Regards,
>> > Amar
>> >
>> > On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL
>> > mailto:abhishpali...@gmail.com>> wrote:
>> >
>> > Hi,
>> >
>> > Could  you please update on this and also let us know what is
>> > GlusterD2 (as it is under development in 5.0 release), so it is ok
>> > to uplift to 5.0?
>> >
>> > Regards,
>> > Abhishek
>> >
>> > On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL
>> > mailto:abhishpali...@gmail.com>> wrote:
>> >
>> > Hi,
>> >
>> > Currently we are using Glusterfs 3.7.6 and thinking to switch on
>> > Glusterfs 4.1 or 5.0, when I see there are too much code changes
>> > between these version, could you please let us know, is there
>> > any compatibility issue when we uplift any of the new mentioned
>> > version?
>> >
>> > Regards
>> > Abhishek
>> >
>> >
>> >
>> > --
>> >
>> >
>> >
>> >
>> > Regards
>> > Abhishek Paliwal
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org 
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> >
>> >
>> > --
>> > Amar Tumballi (amarts)
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread Poornima Gurusiddaiah
On Wed, Feb 27, 2019, 11:52 PM Ingo Fischer  wrote:

> Hi Amar,
>
> sorry to jump into this thread with an connected question.
>
> When installing via "apt-get" and so using debian packages and also
> systemd to start/stop glusterd is the online upgrade process from
> 3.x/4.x to 5.x still needed as described at
> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/ ?
>
> Especially because there is manual killall and such for processes
> handled by systemd in my case. Or is there an other upgrade guide or
> recommendations for use on ubuntu?
>
> Would systemctl stop glusterd, then using apt-get update with changes
> sources and a reboot be enough?
>

I think you would still need to kill the processes manually; AFAIK systemd
only stops glusterd, not the other Gluster processes like glusterfsd
(bricks), the self-heal daemon, etc. A reboot of the system is not required,
if that's what you meant by reboot. You also need to follow all the other
steps mentioned for the cluster to work smoothly after the upgrade; the
steps to perform heals are especially important.
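
As a minimal per-node sketch of the offline steps from the upgrade guide
(the volume name is a placeholder; adapt the package commands to your
distribution):

    systemctl stop glusterd
    pkill glusterfs                      # also matches glusterfsd (bricks) and the self-heal daemon
    apt-get update && apt-get install --only-upgrade glusterfs-server
    systemctl start glusterd
    gluster volume heal myvol            # replicated/dispersed volumes: trigger heal
    gluster volume heal myvol info       # and wait for pending entries to drain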

Regards,
Poornima


> Ingo
>
> Am 27.02.19 um 16:11 schrieb Amar Tumballi Suryanarayan:
> > GlusterD2 is not yet called out for standalone deployments.
> >
> > You can happily update to glusterfs-5.x (recommend you to wait for
> > glusterfs-5.4 which is already tagged, and waiting for packages to be
> > built).
> >
> > Regards,
> > Amar
> >
> > On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL
> > mailto:abhishpali...@gmail.com>> wrote:
> >
> > Hi,
> >
> > Could  you please update on this and also let us know what is
> > GlusterD2 (as it is under development in 5.0 release), so it is ok
> > to uplift to 5.0?
> >
> > Regards,
> > Abhishek
> >
> > On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL
> > mailto:abhishpali...@gmail.com>> wrote:
> >
> > Hi,
> >
> > Currently we are using Glusterfs 3.7.6 and thinking to switch on
> > Glusterfs 4.1 or 5.0, when I see there are too much code changes
> > between these version, could you please let us know, is there
> > any compatibility issue when we uplift any of the new mentioned
> > version?
> >
> > Regards
> > Abhishek
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org 
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> > --
> > Amar Tumballi (amarts)
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Jim Kinney
It sounds like new bricks were added and they mounted over the top of
existing bricks.
gluster volume status  detail
This will give you the data you need to find where the real files are. You
can look in those locations to confirm the data is intact.
Stopping the gluster volume is a good first step. Then, as a safeguard,
you can unmount the filesystem that holds the data you want. Now remove
the gluster volume(s) that are the problem - all of them if needed. Remount
the real filesystem(s). Create new gluster volumes with the correct names.
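
As a rough illustration of that sequence (volume names, brick paths and
mount points below are placeholders, not the exact ones from this cluster):

    gluster volume status badvol detail      # the Brick/Device fields show where the data really lives
    gluster volume stop badvol
    umount /bricks/dataX                     # safeguard the filesystem holding the real data
    gluster volume delete badvol
    mount /bricks/dataX                      # remount the real filesystem
    gluster volume create goodvol server1:/bricks/dataX/goodvol ...
    # note: reusing brick directories may require clearing the old
    # trusted.glusterfs.volume-id xattr first
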
On Wed, 2019-02-27 at 16:56 -0500, Tami Greene wrote:
> That makes sense.  System is made of four data arrays with a hardware
> RAID 6 and then the distributed volume on top.  I honestly don't know
> how that works, but the previous administrator said we had
> redundancy.  I'm hoping there is a way to bypass the safeguard of
> migrating data when removing a brick from the volume, which in my
> beginner's mind, would be a straight-forward way of remedying the
> problem.  Hopefully once the empty bricks are removed, the "missing"
> data will be visible again in the volume.
> 
> On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney 
> wrote:
> > Keep in mind that gluster is a metadata process. It doesn't really
> > touch the actual volume files. The exception is the .glusterfs and
> > .trashcan folders in the very top directory of the gluster volume.
> > 
> > When you create a gluster volume from brick, it doesn't format the
> > filesystem. It uses what's already there.
> > 
> > So if you remove a volume and all it's bricks, you've not deleted
> > data.
> > 
> > That said, if you are using anything but replicated bricks, which
> > is what I use exclusively for my needs, then reassembling them into
> > a new volume with correct name might be tricky. By listing the
> > bricks in the exact same order as they were listed when creating
> > the wrong name volume when making the correct named volume, it
> > should use the same method to put data on the drives as previously
> > and not scramble anything. 
> > 
> > On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
> > > I sent this and realized I hadn't registered.  My apologies for
> > > the duplication
> > > Subject: Added bricks with wrong name and now need to remove them
> > > without destroying volume.
> > > To:  
> > > 
> > > 
> > > 
> > > Yes, I broke it. Now I need help fixing it.
> > >  
> > > I have an existing Gluster Volume, spread over 16 bricks and 4
> > > servers; 1.5P space with 49% currently used .  Added an
> > > additional 4 bricks and server as we expect large influx of data
> > > in the next 4 to 6 months.  The system had been established by my
> > > predecessor, who is no longer here.
> > >  
> > > First solo addition of bricks to gluster.
> > >  
> > > Everything went smoothly until “gluster volume add-brick Volume
> > > newserver:/bricks/dataX/vol.name"
> > > (I don’t have the exact response as I worked on
> > > this for almost 5 hours last night) Unable to add-brick as “it is
> > > already mounted” or something to that affect.
> > > Double checked my instructions, the name of the
> > > bricks. Everything seemed correct.  Tried to add again adding
> > > “force.”  Again, “unable to add-brick”
> > > Because of the keyword (in my mind) “mounted” in
> > > the error, I checked /etc/fstab, where the name of the mount
> > > point is simply /bricks/dataX.
> > > This convention was the same across all servers, so I thought I
> > > had discovered an error in my notes and changed the name to
> > > newserver:/bricks/dataX. 
> > > Still had to use force, but the bricks were added.
> > > Restarted the gluster volume vol.name. No errors.
> > > Rebooted; but /vol.name did not mount on reboot as the /etc/fstab
> > > instructs. So I attempted to mount manually and discovered a had
> > > a big mess on my hands.
> > > “Transport endpoint not
> > > connected” in addition to other messages.
> > > Discovered an issue between certificates and the
> > > auth.ssl-allow list because of the hostname of new server.  I
> > > made correction and /vol.name mounted.
> > > However, df -h indicated the 4 new bricks were
> > > not being seen as 400T were missing from what should have been
> > > available.
> > >  
> > > Thankfully, I could add something to vol.name on one machine and
> > > see it on another machine and I wrongly assumed the volume was
> > > operational, even if the new bricks were not recognized.
> > > So I tried to correct the main issue by,
> > > gluster volume remove vol.name
> > > newserver/bricks/dataX/
> > > received prompt, data will be migrated before
> > > brick is removed continue (or something to that) and I started
> > > the process, think this won’t take long because there is no data.
> > > After 10 minutes and no apparent progress on the
> > > process, I did panic, thinking worse case sc

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Tami Greene
That makes sense. The system is made of four data arrays with a hardware RAID
6 and then the distributed volume on top. I honestly don't know how that
works, but the previous administrator said we had redundancy. I'm hoping
there is a way to bypass the safeguard of migrating data when removing a
brick from the volume, which, in my beginner's mind, would be a
straightforward way of remedying the problem. Hopefully once the empty
bricks are removed, the "missing" data will be visible again in the volume.

On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney  wrote:

> Keep in mind that gluster is a metadata process. It doesn't really touch
> the actual volume files. The exception is the .glusterfs and .trashcan
> folders in the very top directory of the gluster volume.
>
> When you create a gluster volume from brick, it doesn't format the
> filesystem. It uses what's already there.
>
> So if you remove a volume and all it's bricks, you've not deleted data.
>
> That said, if you are using anything but replicated bricks, which is what
> I use exclusively for my needs, then reassembling them into a new volume
> with correct name might be tricky. By listing the bricks in the exact same
> order as they were listed when creating the wrong name volume when making
> the correct named volume, it should use the same method to put data on the
> drives as previously and not scramble anything.
>
> On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
>
> I sent this and realized I hadn't registered.  My apologies for the
> duplication
>
> Subject: Added bricks with wrong name and now need to remove them without
> destroying volume.
> To: 
>
>
>
> Yes, I broke it. Now I need help fixing it.
>
>
>
> I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
> 1.5P space with 49% currently used .  Added an additional 4 bricks and
> server as we expect large influx of data in the next 4 to 6 months.  The
> system had been established by my predecessor, who is no longer here.
>
>
>
> First solo addition of bricks to gluster.
>
>
>
> Everything went smoothly until “gluster volume add-brick Volume
> newserver:/bricks/dataX/vol.name"
>
> (I don’t have the exact response as I worked on this for
> almost 5 hours last night) Unable to add-brick as “it is already mounted”
> or something to that affect.
>
> Double checked my instructions, the name of the bricks.
> Everything seemed correct.  Tried to add again adding “force.”  Again,
> “unable to add-brick”
>
> Because of the keyword (in my mind) “mounted” in the
> error, I checked /etc/fstab, where the name of the mount point is simply
> /bricks/dataX.
>
> This convention was the same across all servers, so I thought I had
> discovered an error in my notes and changed the name to
> newserver:/bricks/dataX.
>
> Still had to use force, but the bricks were added.
>
> Restarted the gluster volume vol.name. No errors.
>
> Rebooted; but /vol.name did not mount on reboot as the /etc/fstab
> instructs. So I attempted to mount manually and discovered a had a big mess
> on my hands.
>
> “Transport endpoint not connected” in
> addition to other messages.
>
> Discovered an issue between certificates and the
> auth.ssl-allow list because of the hostname of new server.  I made
> correction and /vol.name mounted.
>
> However, df -h indicated the 4 new bricks were not being
> seen as 400T were missing from what should have been available.
>
>
>
> Thankfully, I could add something to vol.name on one machine and see it
> on another machine and I wrongly assumed the volume was operational, even
> if the new bricks were not recognized.
>
> So I tried to correct the main issue by,
>
> gluster volume remove vol.name newserver/bricks/dataX/
>
> received prompt, data will be migrated before brick is
> removed continue (or something to that) and I started the process, think
> this won’t take long because there is no data.
>
> After 10 minutes and no apparent progress on the process,
> I did panic, thinking worse case scenario – it is writing zeros over my
> data.
>
> Executed the stop command and there was still no progress,
> and I assume it was due to no data on the brick to be remove causing the
> program to hang.
>
> Found the process ID and killed it.
>
>
> This morning, while all clients and servers can access /vol.name; not all
> of the data is present.  I can find it under cluster, but users cannot
> reach it.  I am, again, assume it is because of the 4 bricks that have been
> added, but aren't really a part of the volume because of their incorrect
> name.
>
>
>
> So – how do I proceed from here.
>
>
> 1. Remove the 4 empty bricks from the volume without damaging data.
>
> 2. Correctly clear any metadata about these 4 bricks ONLY so they may be
> added correctly.
>
>
> If this doesn't restore the volume to f

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Jim Kinney
Keep in mind that gluster is a metadata process. It doesn't really
touch the actual volume files. The exception is the .glusterfs and
.trashcan folders in the very top directory of the gluster volume.
When you create a gluster volume from a brick, it doesn't format the
filesystem. It uses what's already there.
So if you remove a volume and all its bricks, you've not deleted data.
That said, if you are using anything but replicated bricks, which is
what I use exclusively for my needs, then reassembling them into a new
volume with the correct name might be tricky. By listing the bricks in
the exact same order as they were listed when the wrongly named volume
was created, the correctly named volume should use the same method to
put data on the drives as before and not scramble anything.
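
Purely as an illustration (the names are made up; the real brick list and
order have to come from how the wrongly named volume was created):

    # same bricks, in the same order as the old volume listed them
    gluster volume create rightname \
        server1:/bricks/data1/vol server2:/bricks/data1/vol \
        server3:/bricks/data1/vol server4:/bricks/data1/vol
    gluster volume start rightname

    # gluster may refuse if the old volume-id xattrs are still present on the
    # bricks; check its error output before forcing anything
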
On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
> I sent this and realized I hadn't registered.  My apologies for the
> duplication
> Subject: Added bricks with wrong name and now need to remove them
> without destroying volume.
> To:  
> 
> 
> 
> Yes, I broke it. Now I need help fixing it.
>  
> I have an existing Gluster Volume, spread over 16 bricks and 4
> servers; 1.5P space with 49% currently used .  Added an additional 4
> bricks and server as we expect large influx of data in the next 4 to
> 6 months.  The system had been established by my predecessor, who is
> no longer here.
>  
> First solo addition of bricks to gluster.
>  
> Everything went smoothly until “gluster volume add-brick Volume
> newserver:/bricks/dataX/vol.name"
> (I don’t have the exact response as I worked on this
> for almost 5 hours last night) Unable to add-brick as “it is already
> mounted” or something to that affect.
> Double checked my instructions, the name of the
> bricks. Everything seemed correct.  Tried to add again adding
> “force.”  Again, “unable to add-brick”
> Because of the keyword (in my mind) “mounted” in the
> error, I checked /etc/fstab, where the name of the mount point is
> simply /bricks/dataX.
> This convention was the same across all servers, so I thought I had
> discovered an error in my notes and changed the name to
> newserver:/bricks/dataX. 
> Still had to use force, but the bricks were added.
> Restarted the gluster volume vol.name. No errors.
> Rebooted; but /vol.name did not mount on reboot as the /etc/fstab
> instructs. So I attempted to mount manually and discovered a had a
> big mess on my hands.
> “Transport endpoint not connected” in
> addition to other messages.
> Discovered an issue between certificates and the
> auth.ssl-allow list because of the hostname of new server.  I made
> correction and /vol.name mounted.
> However, df -h indicated the 4 new bricks were not
> being seen as 400T were missing from what should have been available.
>  
> Thankfully, I could add something to vol.name on one machine and see
> it on another machine and I wrongly assumed the volume was
> operational, even if the new bricks were not recognized.
> So I tried to correct the main issue by,
> gluster volume remove vol.name
> newserver/bricks/dataX/
> received prompt, data will be migrated before brick
> is removed continue (or something to that) and I started the process,
> think this won’t take long because there is no data.
> After 10 minutes and no apparent progress on the
> process, I did panic, thinking worse case scenario – it is writing
> zeros over my data.
> Executed the stop command and there was still no
> progress, and I assume it was due to no data on the brick to be
> remove causing the program to hang.
> Found the process ID and killed it.
> 
> 
> This morning, while all clients and servers can access /vol.name; not
> all of the data is present.  I can find it under cluster, but users
> cannot reach it.  I am, again, assume it is because of the 4 bricks
> that have been added, but aren't really a part of the volume because
> of their incorrect name.
>  
> So – how do I proceed from here.  
> 
> 
> 1. Remove the 4 empty bricks from the volume without damaging data.
> 2. Correctly clear any metadata about these 4 bricks ONLY so they may
> be added correctly.
> 
> 
> If this doesn't restore the volume to full functionality, I'll write
> another post if I cannot find answer in the notes or on line.
>  
> Tami-- 
> 
> 
> 
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/

___
Gluster-users mailing list

[Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Tami Greene
I sent this and realized I hadn't registered.  My apologies for the
duplication

Subject: Added bricks with wrong name and now need to remove them without
destroying volume.
To: 



Yes, I broke it. Now I need help fixing it.



I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
1.5P space with 49% currently used .  Added an additional 4 bricks and
server as we expect large influx of data in the next 4 to 6 months.  The
system had been established by my predecessor, who is no longer here.



First solo addition of bricks to gluster.



Everything went smoothly until “gluster volume add-brick Volume
newserver:/bricks/dataX/vol.name”.

(I don’t have the exact response, as I worked on this for
almost 5 hours last night.) Unable to add-brick as “it is already mounted”
or something to that effect.

Double-checked my instructions and the names of the bricks.
Everything seemed correct. Tried to add again, adding “force.” Again,
“unable to add-brick”.

Because of the keyword (in my mind) “mounted” in the error,
I checked /etc/fstab, where the name of the mount point is simply
/bricks/dataX.

This convention was the same across all servers, so I thought I had
discovered an error in my notes and changed the name to
newserver:/bricks/dataX.

Still had to use force, but the bricks were added.

Restarted the gluster volume vol.name. No errors.

Rebooted, but /vol.name did not mount on reboot as /etc/fstab
instructs. So I attempted to mount manually and discovered I had a big mess
on my hands.

“Transport endpoint not connected” in
addition to other messages.

Discovered an issue between certificates and the
auth.ssl-allow list because of the hostname of new server.  I made
correction and /vol.name mounted.

However, df -h indicated the 4 new bricks were not being
seen as 400T were missing from what should have been available.



Thankfully, I could add something to vol.name on one machine and see it on
another machine and I wrongly assumed the volume was operational, even if
the new bricks were not recognized.

So I tried to correct the main issue by,

gluster volume remove vol.name newserver/bricks/dataX/

received the prompt “data will be migrated before the brick is
removed, continue?” (or something to that effect) and I started the process,
thinking this won’t take long because there is no data.

After 10 minutes and no apparent progress on the process, I
did panic, thinking worst-case scenario – it is writing zeros over my data.

Executed the stop command and there was still no progress,
which I assume was due to there being no data on the brick to be removed,
causing the program to hang.

Found the process ID and killed it.


This morning, while all clients and servers can access /vol.name, not all
of the data is present. I can find it under the cluster, but users cannot
reach it. I am, again, assuming it is because of the 4 bricks that have been
added but aren't really a part of the volume because of their incorrect
name.



So – how do I proceed from here?


1. Remove the 4 empty bricks from the volume without damaging data.

2. Correctly clear any metadata about these 4 bricks ONLY so they may be
added correctly.


If this doesn't restore the volume to full functionality, I'll write
another post if I cannot find an answer in the notes or online.


Tami--


-- 
Tami
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Tami Greene
Yes, I broke it. Now I need help fixing it.



I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
1.5P space with 49% currently used .  Added an additional 4 bricks and
server as we expect large influx of data in the next 4 to 6 months.  The
system had been established by my predecessor, who is no longer here.



First solo addition of bricks to gluster.



Everything went smoothly until “gluster volume add-brick Volume
newserver:/bricks/dataX/vol.name”.

(I don’t have the exact response, as I worked on this for
almost 5 hours last night.) Unable to add-brick as “it is already mounted”
or something to that effect.

Double-checked my instructions and the names of the bricks.
Everything seemed correct. Tried to add again, adding “force.” Again,
“unable to add-brick”.

Because of the keyword (in my mind) “mounted” in the error,
I checked /etc/fstab, where the name of the mount point is simply
/bricks/dataX.

This convention was the same across all servers, so I thought I had
discovered an error in my notes and changed the name to
newserver:/bricks/dataX.

Still had to use force, but the bricks were added.

Restarted the gluster volume vol.name. No errors.

Rebooted, but /vol.name did not mount on reboot as /etc/fstab
instructs. So I attempted to mount manually and discovered I had a big mess
on my hands.

“Transport endpoint not connected” in
addition to other messages.

Discovered an issue between certificates and the
auth.ssl-allow list because of the hostname of new server.  I made
correction and /vol.name mounted.

However, df -h indicated the 4 new bricks were not being
seen as 400T were missing from what should have been available.



Thankfully, I could add something to vol.name on one machine and see it on
another machine and I wrongly assumed the volume was operational, even if
the new bricks were not recognized.

So I tried to correct the main issue by,

gluster volume remove vol.name newserver/bricks/dataX/

received the prompt “data will be migrated before the brick is
removed, continue?” (or something to that effect) and I started the process,
thinking this won’t take long because there is no data.

After 10 minutes and no apparent progress on the process, I
did panic, thinking worst-case scenario – it is writing zeros over my data.

Executed the stop command and there was still no progress,
which I assume was due to there being no data on the brick to be removed,
causing the program to hang.

Found the process ID and killed it.


This morning, while all clients and servers can access /vol.name, not all
of the data is present. I can find it under the cluster, but users cannot
reach it. I am, again, assuming it is because of the 4 bricks that have been
added but aren't really a part of the volume because of their incorrect
name.



So – how do I proceed from here?


1. Remove the 4 empty bricks from the volume without damaging data.

2. Correctly clear any metadata about these 4 bricks ONLY so they may be
added correctly.


If this doesn't restore the volume to full functionality, I'll write
another post if I cannot find an answer in the notes or online.
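
For reference, one commonly suggested sequence for steps 1 and 2 (removing an
empty brick without data migration, then clearing its gluster metadata so the
path can be reused) looks roughly like the following; the brick paths are
placeholders and this should be double-checked against the docs for your
gluster version before running anything:

    gluster volume remove-brick vol.name newserver:/bricks/dataX force   # "force" skips data migration
    # then, on newserver, for each wrongly added brick directory:
    setfattr -x trusted.glusterfs.volume-id /bricks/dataX
    setfattr -x trusted.gfid /bricks/dataX
    rm -rf /bricks/dataX/.glusterfs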


Tami--
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread Ingo Fischer
Hi Amar,

sorry to jump into this thread with a related question.

When installing via "apt-get", and so using Debian packages and also
systemd to start/stop glusterd, is the online upgrade process from
3.x/4.x to 5.x still needed, as described at
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/ ?

I ask especially because the guide has a manual killall and such for
processes that are handled by systemd in my case. Or is there another
upgrade guide, or are there other recommendations, for use on Ubuntu?

Would a "systemctl stop glusterd", then an apt-get upgrade with the changed
sources, and a reboot be enough?
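
For what it's worth, a quick way to see what systemd actually leaves running
after stopping glusterd would be:

    systemctl stop glusterd
    pgrep -af gluster      # brick (glusterfsd) and other gluster daemons typically keep running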

Ingo

Am 27.02.19 um 16:11 schrieb Amar Tumballi Suryanarayan:
> GlusterD2 is not yet called out for standalone deployments.
> 
> You can happily update to glusterfs-5.x (recommend you to wait for
> glusterfs-5.4 which is already tagged, and waiting for packages to be
> built).
> 
> Regards,
> Amar
> 
> On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL
> mailto:abhishpali...@gmail.com>> wrote:
> 
> Hi,
> 
> Could  you please update on this and also let us know what is
> GlusterD2 (as it is under development in 5.0 release), so it is ok
> to uplift to 5.0?
> 
> Regards,
> Abhishek
> 
> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL
> mailto:abhishpali...@gmail.com>> wrote:
> 
> Hi,
> 
> Currently we are using Glusterfs 3.7.6 and thinking to switch on
> Glusterfs 4.1 or 5.0, when I see there are too much code changes
> between these version, could you please let us know, is there
> any compatibility issue when we uplift any of the new mentioned
> version? 
> 
> Regards
> Abhishek
> 
> 
> 
> -- 
> 
> 
> 
> 
> Regards
> Abhishek Paliwal
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> -- 
> Amar Tumballi (amarts)
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread Amar Tumballi Suryanarayan
GlusterD2 is not yet called out for standalone deployments.

You can happily update to glusterfs-5.x (I recommend you wait for
glusterfs-5.4, which is already tagged; we are waiting for packages to be
built).
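
Once the 5.4 packages are published, a quick way to check what your package
source offers and what ends up installed (Debian/Ubuntu example, adapt to
your distro) might be:

    apt-get update
    apt-cache policy glusterfs-server    # shows which 5.x build the configured repo provides
    glusterfs --version                  # after upgrading, confirms the running version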

Regards,
Amar

On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
wrote:

> Hi,
>
> Could  you please update on this and also let us know what is GlusterD2
> (as it is under development in 5.0 release), so it is ok to uplift to 5.0?
>
> Regards,
> Abhishek
>
> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi,
>>
>> Currently we are using Glusterfs 3.7.6 and thinking to switch on
>> Glusterfs 4.1 or 5.0, when I see there are too much code changes between
>> these version, could you please let us know, is there any compatibility
>> issue when we uplift any of the new mentioned version?
>>
>> Regards
>> Abhishek
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread ABHISHEK PALIWAL
Hi,

Could you please give us an update on this, and also let us know what
GlusterD2 is (as it is under development in the 5.0 release)? Is it OK to
uplift to 5.0?
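
For context on the uplift mechanics: after every node runs the new version,
the cluster op-version is normally bumped as a separate step, roughly like
this (take the exact number from what the cluster itself reports):

    gluster volume get all cluster.max-op-version   # highest op-version the upgraded cluster supports
    gluster volume set all cluster.op-version <value reported above>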

Regards,
Abhishek

On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
wrote:

> Hi,
>
> Currently we are using GlusterFS 3.7.6 and are thinking of switching to
> GlusterFS 4.1 or 5.0. I see there are a lot of code changes between these
> versions. Could you please let us know whether there are any compatibility
> issues when we uplift to either of the mentioned versions?
>
> Regards
> Abhishek
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster and bonding

2019-02-27 Thread Jorick Astrego

On 2/25/19 6:01 PM, Alvin Starr wrote:
> On 2/25/19 11:48 AM, Boris Zhmurov wrote:
>> On 25/02/2019 14:24, Jorick Astrego wrote:
>>>
>>> Hi,
>>>
>>> Have not measured it as we have been running this way for years now
>>> and haven't experienced any problems with "transport endpoint is not
>>> connected” with this setup.
>>>
>>
>> Hello,
>>
>> Jorick, how often (during those years) did your NICs break?
>>
> Over the years (30) I have had problems with bad ports on switches.
>
> With some manufacturers being worse than others.
>
>
Hi,

I have been doing infra for 25 years and I have seen really everything
break, plus PDSS (People Doing Stupid Sh*t).

The NICs these days are of excellent quality and we have never had one break
in 10 years. We do a lot of testing before we put anything into production,
and we have had some other issues that have the same effect (switch failure,
someone pulling the wrong cable, LACP misconfiguration).

Actually we went from LACP with stacked switches to balance-alb. There
were more configuration errors with LACP and we had stacked switches
getting messed up. We now have separate L2 storage switches.

And the GlusterFS developers think it's the best bonding mode for their
application, so you don't have to take my word for it ;-)

https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/

*best bonding mode for Gluster client is mode 6 (balance-alb)*, this
allows client to transmit writes in parallel on separate NICs much
of the time. A peak throughput of 750 MB/s on writes from a single
client was observed with bonding mode 6 on 2 10-GbE NICs with jumbo
frames. That's 1.5 GB/s of network traffic.

another way to balance both transmit and receive traffic is bonding
mode 4 (802.3ad) but this requires switch configuration (trunking
commands)

still another way to load balance is bonding mode 2 (balance-xor)
with option "xmit_hash_policy=layer3+4". The bonding modes 6 and 2
will not improve single-connection throughput, but improve aggregate
throughput across all connections.
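
As a concrete sketch of what mode 6 can look like in a Debian/Ubuntu
/etc/network/interfaces stanza (interface names and addresses below are made
up, and the ifenslave package needs to be installed):

    auto bond0
    iface bond0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bond-slaves eno1 eno2        # the two storage NICs
        bond-mode balance-alb        # bonding mode 6, no switch-side configuration needed
        bond-miimon 100              # link monitoring interval in ms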

Regards,

Jorick Astrego






Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A    KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede    BTW NL821234584B01



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users