Hi Atin,
I will be waiting for your response.
On Mon, Nov 21, 2016 at 10:00 AM, ABHISHEK PALIWAL wrote:
> Hi Atin,
>
> The system is an embedded system and these dates are from before the system
> gets in time sync.
>
> Yes, I have also seen these two files in the peers directory on the 002500 board
> and I want to know the reason why gluster creates the second file when there
> is an old file existing.
On 21/11/16 11:13, Alexandr Porunov wrote:
The glusterfs version is 3.8.5.
Here is what I have installed:
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-gluster
yum install glusterfs-server
It should be part of glusterfs-server.
Thank you for the explanation!
I will keep it in mind.
Sincerely,
Alexandr
On Mon, Nov 21, 2016 at 7:39 AM, Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:
> Hi,
>
> Glad you could get it rectified. But having the same slave volume for two
> different
> geo-rep sessions is never recommended.
The glusterfs version is 3.8.5.
Here is what I have installed:
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-gluster
yum install glusterfs-server
yum install glusterfs-geo-replication
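After an install like the one above, it can help to confirm which gluster packages and version actually landed. A generic check, not specific to this setup:

```shell
# List the gluster rpms that are installed, then confirm the client version.
rpm -qa | grep -i gluster
glusterfs --version | head -1
```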
Unfortunately it doesn't work if I just add the script.
Hi,
Glad you could get it rectified. But having the same slave volume for two
different geo-rep sessions is never recommended. The two sessions end up writing to the
same slave node. It's always a one-master-volume-to-many-different-slave-volumes
configuration if required. If ssh-keys are deleted on the slave
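For reference, the recommended layout described above, one master volume replicating to distinct slave volumes, might be set up along these lines (the volume and host names here are hypothetical):

```shell
# One master volume, two *different* slave volumes -- never the same slave
# volume for two sessions. Create and start one geo-rep session per slave:
gluster volume geo-replication mastervol slavehost1::slavevol1 create push-pem
gluster volume geo-replication mastervol slavehost1::slavevol1 start

gluster volume geo-replication mastervol slavehost2::slavevol2 create push-pem
gluster volume geo-replication mastervol slavehost2::slavevol2 start

# Verify both sessions are Active/Passive as expected:
gluster volume geo-replication status
```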
On 21/11/16 01:07, Alexandr Porunov wrote:
I have installed it from rpm. No, that file isn't there. The folder
"/var/lib/glusterd/hooks/1/set/post/" is empty.
Which gluster version and which gluster rpms have you installed?
For the time being, just download this file[1] and copy it to the above location.
Hi Atin,
The system is an embedded system and these dates are from before the system
gets in time sync.
Yes, I have also seen these two files in the peers directory on the 002500 board
and I want to know the reason why gluster creates the second file when
there is an old file existing. Even when you see the content
atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_uuid/duplicate_uuid/glusterd_2500/peers$ ls -lrt
total 8
-rw-------. 1 atin wheel 71 *Jan 1 1970* 5be8603b-18d0-4333-8590-38f918a22857
-rw-------. 1 atin wheel 71 Nov 18 03:31 26ae19a6-b58f-446a-b079-411d4ee57450
In board 2500, look at the dates.
Hope you will see it in the logs.
On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL wrote:
> Hi Atin,
>
> It is not getting wiped off; we have changed the configuration path from
> /var/lib/glusterd to /system/glusterd.
>
> So, the contents will remain the same as before.
>
> On Mon, Nov 21, 2016 at 9:
Hi Atin,
It is not getting wiped off; we have changed the configuration path from
/var/lib/glusterd to /system/glusterd.
So, the contents will remain the same as before.
On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee wrote:
> Abhishek,
>
> rebooting the board does wipe off /var/lib/glusterd contents in
Abhishek,
rebooting the board does wipe off /var/lib/glusterd contents in your setup,
right (as per my earlier conversation with you)? In that case, how are you
ensuring that the same node gets back the older UUID? If you don't, then
this is bound to happen.
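For context: glusterd derives its peer identity from the UUID= line in glusterd.info under its working directory, so if that directory is wiped on reboot, glusterd generates a fresh UUID and the old peer file shows up as a duplicate. A minimal sketch of saving and restoring that line around a wipe (the paths and UUID below are illustrative; on a real node the directory is /var/lib/glusterd, or /system/glusterd in this setup):

```shell
# Sketch: preserve glusterd's UUID across a wipe of its working directory.
# Temp dirs stand in for /var/lib/glusterd and persistent storage.
GLUSTERD_DIR=$(mktemp -d)
BACKUP=$(mktemp)

# Simulate an existing glusterd.info:
echo "UUID=5be8603b-18d0-4333-8590-38f918a22857" > "$GLUSTERD_DIR/glusterd.info"
echo "operating-version=30800" >> "$GLUSTERD_DIR/glusterd.info"

# Before reboot: save the UUID line somewhere persistent.
grep '^UUID=' "$GLUSTERD_DIR/glusterd.info" > "$BACKUP"

# After reboot (directory wiped): restore it before starting glusterd,
# so the node comes back with its old identity instead of a new UUID.
rm -f "$GLUSTERD_DIR/glusterd.info"
cat "$BACKUP" > "$GLUSTERD_DIR/glusterd.info"

grep '^UUID=' "$GLUSTERD_DIR/glusterd.info"
```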
On Mon, Nov 21, 2016 at 9:11 AM, ABHISH
Hi Atin,
Ok.Thank you for your reply.
Thanks,
Xin
On 2016-11-21 10:00:36, "Atin Mukherjee" wrote:
Hi Xin,
I've not got a chance to look into it yet. The delete-stale-volume function is in
place to take care of wiping off volume configuration data which has been
deleted from the cluster. However
Hi Xin,
I've not got a chance to look into it yet. The delete-stale-volume function is
in place to take care of wiping off volume configuration data which has
been deleted from the cluster. However, we need to revisit this code to see
if this function is needed anymore, given we recently added a validation
Hi Atin,
Thank you for your support.
And any conclusions about this issue?
Thanks,
Xin
On 2016-11-16 20:59:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 1:53 PM, songxin wrote:
ok, thank you.
On 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 12:47 PM,
I have installed it from rpm. No, that file isn't there. The folder
"/var/lib/glusterd/hooks/1/set/post/" is empty.
Sincerely,
Alexandr
On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan wrote:
> Did you install from rpm or directly from sources? Can you check whether the
> following script is present?
>
Did you install from rpm or directly from sources? Can you check whether the
following script is present?
/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
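A quick way to check for that hook script on a node (a sketch; the path is the standard hook location quoted above):

```shell
# Report whether the shared-storage hook script is present and executable.
hook=/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
if [ -x "$hook" ]; then
    echo "hook present: $hook"
else
    echo "hook missing: $hook"
fi
```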
--
Jiffin
On 20/11/16 13:33, Alexandr Porunov wrote:
To enable shared storage I used the following command:
# gluster volume set all cluster.enable-shared-storage enable
To enable shared storage I used the following command:
# gluster volume set all cluster.enable-shared-storage enable
But it seems that it doesn't create gluster_shared_storage automatically.
# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist
Do I need to manually create it?
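If the hook script is missing or never ran, the shared storage volume can be created and mounted by hand, which is roughly what the hook automates. A sketch, assuming a three-node pool; the hostnames are hypothetical, and the brick path and mount point follow what the stock hook script uses:

```shell
# Manual equivalent of S32gluster_enable_shared_storage.sh:
# a replica volume named gluster_shared_storage spanning the pool nodes.
gluster volume create gluster_shared_storage replica 3 \
    node1:/var/lib/glusterd/ss_brick \
    node2:/var/lib/glusterd/ss_brick \
    node3:/var/lib/glusterd/ss_brick force
gluster volume start gluster_shared_storage
mount -t glusterfs node1:/gluster_shared_storage /run/gluster/shared_storage
```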