On 19/11/16 21:47, Alexandr Porunov wrote:
Unfortunately I don't have that log file, but I do have
'run-gluster-shared_storage.log', and it contains errors I can't explain.
Here is the content of 'run-gluster-shared_storage.log':
Make sure shared storage is up and running using "gluster volume
As per the subject: I have created a 3.9 volume in containers and would
like to copy some data from a 3.8 volume, but I get errors when trying to
mount it under 3.8.
Thanks,
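When a mount from an older client fails like this, a verbose client-side log
usually shows what the 3.8 client is choking on; one common cause is a volume
option set by 3.9 that the older client does not understand. A minimal sketch,
assuming hypothetical server/volume names (server39, gv39) that are not from
this thread; log-level and log-file are standard mount.glusterfs options:

```shell
# Compose a diagnostic mount command for the failing 3.8 client.
# "server39" and "gv39" are placeholder names, not from this thread.
OPTS="log-level=DEBUG,log-file=/tmp/gv39-mount.log"
CMD="mount -t glusterfs -o $OPTS server39:/gv39 /mnt/gv39"
echo "$CMD"
```

Running the composed command (as root, with the real names filled in) and then
reading /tmp/gv39-mount.log should name the offending option or translator.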
[2016-11-20 01:37:02.554365] I
Hi,
Sorry for the late reply. I think I will wait for the 3.10 LTS release to
try it. I am on 3.7.11 and it is very stable for us.
On Thu, Nov 17, 2016 at 1:05 PM, Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban
> wrote:
[2016-11-19 10:37:01.581737] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfs: Started running
OK, I have figured out what it was.
I had a session not with the 'root' user but with the 'geoaccount' user.
It seems that we can't have two sessions to the same node (even if the
users are different). After deleting the session with the 'geoaccount' user
I was able to create a session with the 'root' user.
On Sat, Nov
Hi folks,
There are lots of warning messages like
- `remote operation failed` with the path: /path//xxx (uuid) [ No such
file or directory error]
- `gfid mismatch detected`
after expanding my distribute-replicated (24x2=48) volume.
The configuration is:
```
Options Reconfigured:
```
Hello,
I had geo-replication set up between master nodes and slave nodes. I have
removed the SSH keys used for authorization from the slave nodes. Now I can
neither create a session for the slave nodes nor remove the old, useless
session. Is it possible to manually remove a session from all the nodes?
Here is the
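For reference, a stale session is normally torn down with the geo-replication
stop/delete subcommands; 'force' on stop skips checks that fail when the slave
is unreachable (for example, after its SSH keys were removed). A hedged
sketch; the volume and host names below are placeholders, not from this thread:

```shell
# Placeholder names; substitute your actual master volume, slave host
# and slave volume.
MASTER_VOL="mastervol"; SLAVE_HOST="slavehost"; SLAVE_VOL="slavevol"
SESSION="$MASTER_VOL $SLAVE_HOST::$SLAVE_VOL"
# Stop the session even if the slave cannot be reached, then delete it:
echo "gluster volume geo-replication $SESSION stop force"
echo "gluster volume geo-replication $SESSION delete"
```

If glusterd still lists the session afterwards, its metadata lives under
/var/lib/glusterd/geo-replication/ on the master nodes and can, as a last
resort, be removed by hand with glusterd stopped.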
On 11/19/2016 04:13 PM, Alexandr Porunov wrote:
It still doesn't work.
I have created that dir:
# mkdir -p /var/run/gluster/shared_storage
and then:
# mount -t glusterfs 127.0.0.1:gluster_shared_storage
/var/run/gluster/shared_storage
Mount failed. Please check the log file for more
I have disabled shared_storage then removed that folder
(/var/run/gluster/shared_storage). Then I have enabled shared_storage again.
Here is the execution of the commands after that:
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: ddff3200-ff93-429f-990f-648b6d9ec237
Status:
Where can I find the proper log file to read? Because
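For what it's worth, the client log for a GlusterFS mount ends up under
/var/log/glusterfs/, named after the mount point with the leading '/' dropped
and the remaining '/' replaced by '-' (note that /var/run is usually a symlink
to /run, so the mount point resolves to /run/gluster/shared_storage). That is
exactly where the 'run-gluster-shared_storage.log' mentioned earlier comes
from. A small sketch of the derivation:

```shell
# Derive the client log filename from the (resolved) mount point:
# strip the leading '/', turn the remaining '/' into '-', append ".log".
mountpoint="/run/gluster/shared_storage"
logname="$(echo "${mountpoint#/}" | tr '/' '-').log"
echo "/var/log/glusterfs/$logname"
# Inspect the mount failure with, e.g.:
# tail -n 50 "/var/log/glusterfs/$logname"
```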
Would it be possible to set up a geo-replicated cluster to be used as a
backup? Obviously, files deleted on the master would also be deleted on the
replica, so that alone isn't feasible as a backup, BUT if I schedule a
snapshot every night (on the replicated node), I can use the snapshot as a
backup.
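That is a common pattern: snapshots on the slave are independent of
geo-replication, so a nightly cron entry on a slave node works. A hedged
sketch; the volume name "slavevol" is a placeholder, and gluster appends a
timestamp to the snapshot name unless no-timestamp is given:

```shell
# Compose a crontab line that snapshots the slave volume at 02:00 nightly.
# "slavevol" is a placeholder name, not from this thread.
VOL="slavevol"
CRON_LINE="0 2 * * * gluster snapshot create nightly_$VOL $VOL"
echo "$CRON_LINE"
# Optional rotation so old snapshots are pruned automatically:
# gluster snapshot config $VOL snap-max-hard-limit 7
# gluster snapshot config auto-delete enable
```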
On 11/19/2016 01:39 AM, Alexandr Porunov wrote:
Hello,
I am trying to enable shared storage for Geo-Replication, but I am not sure
that I am doing it properly.
Here is what I do:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success
# mount -t glusterfs
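For comparison, my understanding is that `cluster.enable-shared-storage
enable` itself creates a replicated volume named gluster_shared_storage and
auto-mounts it at /var/run/gluster/shared_storage on the cluster nodes, so a
manual mount is often unnecessary. If mounting by hand anyway, the full
command would look like this sketch (127.0.0.1 as in the thread):

```shell
# Compose the manual mount of the shared-storage volume created by
# "gluster volume set all cluster.enable-shared-storage enable".
VOLSPEC="127.0.0.1:/gluster_shared_storage"
MNT="/var/run/gluster/shared_storage"
CMD="mkdir -p $MNT && mount -t glusterfs $VOLSPEC $MNT"
echo "$CMD"
```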