Hi Pranith,
The problem is on xstoocky02, where the Linux logical volume has
been completely rebuilt with mkfs.
How will you rebuild the lost data, if we did mkfs?
Pranith
That is a scratch volume; the users know that the data could be lost. On
that node, the data are lost. Now I want to restart the volume.
What kind of workload do you have? Why are you using stripe?
Pranith
That is a 14-node cluster doing a lot of analysis in R or with classical
biology tools. These computations use very large files, from 30 GB to
1 TB. It is managed with a scheduler, and the nodes are interconnected
at 10 Gb/s.
So
On 02/01/2016 02:59 PM, Pierre Léonard wrote:
On 02/01/2016 04:52 PM, Pierre Léonard wrote:
Hi Pranith,
Data that fell in that stripe where you did mkfs is lost. So I am not
understanding if the rest of the stripes that have been available are
useful for your application.
Yes, you are right. So I will destroy the volume and rebuild it. I don't
see any other solution.
Many thanks,
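Destroying and recreating the volume, as agreed above, could look roughly like the following. This is only a sketch: the brick path /export/brick2 and the bash loop are assumptions, not from the thread; only the volume name gvscratch and the stripe 7, 14-node layout appear in the messages.

```shell
#!/bin/bash
# Tear down the damaged volume (the striped data on it is already lost).
gluster volume stop gvscratch
gluster volume delete gvscratch

# Old brick directories still carry GlusterFS xattrs and a .glusterfs/
# tree; wipe and recreate them on every node before reuse.
# (Brick path is an assumed example.)
for host in xstoocky{01..14}; do
    ssh "$host" 'rm -rf /export/brick2 && mkdir -p /export/brick2'
done

# 14 bricks with stripe 7 gives a distributed-stripe volume of 2 stripe sets.
gluster volume create gvscratch stripe 7 \
    xstoocky{01..14}:/export/brick2
gluster volume start gvscratch
```

Note that the `create` will refuse brick directories that still contain GlusterFS metadata, which is why the wipe step comes first.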
On 01/29/2016 02:03 PM, Pierre Léonard wrote:
hi Pranith,
hi Pierre,
Could you send volume info output of the volume where you are
trying to do this operation and also point out which brick is giving
problem.
Pranith
Following is the volume info of xstoocky01, where I tried to start the
volume:
[root@xstoocky01 glusterfs]#
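For context, the information Pranith asked for comes from the standard GlusterFS CLI, run on any node in the trusted pool (the volume name gvscratch is taken from the message below):

```shell
gluster volume info gvscratch      # volume type, stripe count, brick list
gluster volume status gvscratch    # which brick processes are online
gluster peer status                # confirm all 14 peers are connected
```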
Hi all,
I have a stripe 7 volume across 14 nodes. One of the bricks crashed and
I replaced the failed disk. Now, on that node, the brick is entirely new.
Then, when I want to start the volume, gluster answers:
[root@xstoocky01 brick2]# gluster volume start gvscratch
volume start: gvscratch: failed:
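For readers hitting the same error: after mkfs, a brick directory loses the trusted.glusterfs.volume-id extended attribute, and glusterd refuses to start a brick that lacks it. Two commonly used remedies are sketched below; the brick path is an assumption, since it is not given in the thread.

```shell
# Option 1: force the start; glusterd re-stamps the brick metadata.
# Note: this brings the volume up but does NOT recover the striped
# data that lived on the rebuilt brick.
gluster volume start gvscratch force

# Option 2: restore the volume-id xattr by hand. Read it from a brick
# on a healthy node, then write it onto the rebuilt brick:
getfattr -n trusted.glusterfs.volume-id -e hex /export/brick2   # healthy node
setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-above> /export/brick2
```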
hi Pierre,
Could you send volume info output of the volume where you are
trying to do this operation and also point out which brick is giving
problem.
Pranith
On 01/28/2016 08:55 PM, Pierre Léonard wrote: