dear all, I just subscribed and started reading docs,
but still not sure if I got the hang of it all
is GlusterFS for something simple like:
box A <-> box B
/some_folder    /some_folder
so /some_folder on both boxes would contain same data
if yes, then does setting only
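If that two-box scenario is the goal, it is the replicated-volume case, and a minimal sketch with the gluster CLI would look like this (hostnames boxA/boxB, volume name "shared" and the /bricks paths are all hypothetical; bricks should be dedicated directories, not the mountpoint itself):

```shell
# probe the second box from the first (run once)
gluster peer probe boxB

# one brick per box; "replica 2" keeps the two bricks in sync
gluster volume create shared replica 2 \
    boxA:/bricks/some_folder boxB:/bricks/some_folder
gluster volume start shared

# on each box, mount the volume where the data should appear
mount -t glusterfs localhost:/shared /some_folder
```

With this, /some_folder on both boxes shows the same data, served through the volume rather than by syncing directories directly.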
dear all
has anybody got glusterfs working on OL 6.2,
the fuse version is a bit higher than, for instance, the one that
comes with SL 6.1; a number of SL 6.1 boxes are working just
fine, yet Oracle Linux fails to mount anything with the following error:
0-glusterfs-fuse: FUSE init failed (No such file or
.
On 24/03/12 04:06, Kevein Liu wrote:
Why Oracle? I prefer CentOS to the rubbish Oracle
Kevein | 刘洋
Phone:+86 186 0034 5212
On 2012-3-23, at 23:08, lejeczek pelj...@yahoo.co.uk wrote:
dear all
has anybody got glusterfs working on OL 6.2,
fuse version is bit higher than
dear all,
how does the setup in the subject work?
I can see users run into a number of different troubles, like
here:
http://gluster.org/pipermail/gluster-users/2011-February/006603.html
has anybody got it worked out? m$ office stuff, does it work
with gluster?
cheers
Pawel
actually I get locking problems very frequently, even when
only copying files from a samba/gluster net share,
is there a fix/solution for these problems?
On 26/03/12 12:14, lejeczek wrote:
dear all,
how does the setup in the subject work?
I can see users run into a number of different troubles,
like here
-users-boun...@gluster.org] On behalf of lejeczek
Sent: Monday, 26 March 2012 13:14
To: gluster-users@gluster.org
Subject: [Gluster-users] gluster as a storage backend to samba
dear all,
how does the setup in the subject work?
I can see users run into a number of different troubles, like here:
http
Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of lejeczek
Sent: Monday, 26 March 2012 13:28
To: gluster-users@gluster.org
Subject: Re
is it safe at all,
a) in a disaster, imagine the CPU blows up
b) deliberately, in a controlled fashion
thanks everybody
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
the gluster to use different IP whereas everything else
remains unchanged?
cheers
lejeczek
would still work, does it?
thanks
On 22/03/12 13:35, Brian Candler wrote:
On Thu, Mar 22, 2012 at 10:25:04AM +, lejeczek wrote:
is it possible to set up server side replication with command line?
www.gluster.org Documentation Administration Guide will get you to here:
http
recommends anything?
cheers
On 23/04/12 17:13, Brian Candler wrote:
On Mon, Apr 23, 2012 at 04:44:32PM +0100, lejeczek wrote:
yes, precisely
in the past I had AFR running this way:
box A loopback client - box A server - box B server - box B
loopback client
but similarly replace
and replication would still work, wouldn't it?
thanks
On 22/03/12 13:35, Brian Candler wrote:
On Thu, Mar 22, 2012 at 10:25:04AM +, lejeczek wrote:
is it possible to set up server side replication with command line?
www.gluster.org Documentation Administration Guide will get you to here
.. in a replicated volume, so backend storage could be
easily tampered with.
any such features planned for introduction/inclusion?
regards
hi everybody
would a tool such as dbench be a valid benchmark for gluster?
and, most importantly, is there any formula to estimate raw
fs to gluster performance ratio for different setups?
for instance:
having a replicated volume, two bricks, fuse mountpoint to
volume via non-congested 1Gbps
well, if dbench was to give me close-to-real numbers then
these numbers look quite .. well, disappointing
quick tests on a single-brick volume, fuse-mounted
locally, show a massive disproportion
raw fs / gluster fuse
336.224 / 17.7982
I believe the volume is a standard one, meaning it
thanks for posting
I'd be curious to see what kind of disproportion you get
between: raw fs / single-brick volume with a local fuse
mountpoint which effectively points back to the same raw fs
from my quick tests I saw a massive gap between the two
thanks
On 30/04/12 18:39, Wipe_Out wrote:
possible) which is important for more complex
configurations, for simpler ones this bonus does not
outweigh poor performance gluster suffers from, well, in my
opinion.
thanks
On 02/05/12 13:09, Amar Tumballi wrote:
On 05/02/2012 02:22 PM, lejeczek wrote:
thanks for posting
I'd be curious
hi everyone,
trying geo-repl first, I've followed the official howto and the
process claimed "success" up until I went for status: "Faulty"
Errors I see:
...
[2017-02-01 12:11:38.103259] I [monitor(monitor):268:monitor]
Monitor: starting gsyncd worker
hi,
I have a four-peer gluster and one is failing, well, kind of..
If on a working peer I do:
$ gluster volume add-brick QEMU-VMs replica 3
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
force
volume add-brick: failed: Commit failed on whale.priv Please
check log file for
-file setup on the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create
push-pem force
4. Start geo-rep
Thanks and Regards,
Kotresh H R
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: gluster-users@gluster.org
Sent: Th
iles directly from bricks?
many thanks,
L.
regards,
nag pavan
- Original Message -
From: "lejeczek"<pelj...@yahoo.co.uk>
To: gluster-users@gluster.org
Sent: Tuesday, 7 February, 2017 2:00:51 AM
Subject: [Gluster-users] Input/output error - would not heal
hi all
I'm hittin
many thx.
L.
thanks,
nagpavan
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: "Nag Pavan Chilakam" <nchil...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, 7 February, 2017 10:53:07 PM
Subject: Re: [Gluster-users] Input/output e
don't
see those errors any more.
Should I now be looking at something particular more closely?
b.w.
L.
On Wed, Feb 1, 2017 at 7:49 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi,
I have a four-peer gluster and one is failing, we
On 01/02/17 19:30, lejeczek wrote:
On 01/02/17 14:44, Atin Mukherjee wrote:
I think you have hit
https://bugzilla.redhat.com/show_bug.cgi?id=1406411 which
has been fixed in mainline and will be available in
release-3.10 which is slated for next month.
To prove you have hit the same
dear all
should gluster update geo repl when a volume changes?
eg. bricks are added, taken away.
reason I'm asking is because it does not seem like gluster is
doing it on my systems?
Well, I see gluster removed a node from geo-repl, a brick that
I removed.
But I added a brick to a vol and it's
hi everyone,
I've been browsing the list's messages and it seems to me that
users struggle; I do.
I do what I thought was simple, I follow the official docs.
As root, I always do..
]$ gluster system:: execute gsec_create
]$ gluster volume geo-replication WORK
10.5.6.32::WORK-Replica create push-pem
or socket exhaustion.
something to do with the kernel version; I run centos off
kernel-ml and v4.9.5 was where this message persisted, now
with 4.9.6 it's gone.
I wonder if gluster dev guys test centos releases also
against ml kernels.
thanks,
L.
On February 17, 2017 7:47:23 AM PST, lejeczek
On 09/02/17 06:07, Nag Pavan Chilakam wrote:
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: "Nag Pavan Chilakam" <nchil...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Wednesday, 8 February, 2017 7:15:29 PM
Subject: Re: [Gluster-u
hi everyone
I see that something like this in mount:
... 127.0.0.1,10.5.6.100 make mounts available as long as
one server is up & running.
Is that a rule of thumb, and does it mean that when both vol servers
are available the mount will be talking ONLY to the
first one, unless it goes down for
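For what it's worth, with a fuse mount the servers listed are only used to fetch the volume layout; after that the client talks to all bricks of the replica directly, so either server going down is tolerated. A sketch of the equivalent mount with an explicit fallback volfile server (IPs from the thread, mountpoint hypothetical):

```shell
# 10.5.6.49 serves the volfile; 10.5.6.100 is tried if it is unreachable.
# Once mounted, the client connects to every brick itself.
mount -t glusterfs -o backupvolfile-server=10.5.6.100 \
    10.5.6.49:/USER-HOME /mnt/user-home
```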
hi everyone
I have a volume (there are more that should be working but
they are scheduled for tonight so I will know later) that
rdiff-backup fails to backup.
The error rdiff-backup spits out I see for the first time.
There is only one thing I could think of which I could link
with this problem -
On 08/02/17 10:06, Kotresh Hiremath Ravishankar wrote:
Hi lejeczek,
Try stop force.
gluster vol geo-rep :: stop force
I think something broke:
[2017-02-12 18:30:09.970683] E
[resource(/__.aLocalStorages/3/0-GLUSTERs/3GLUSTER--DATA):234:errlog]
Popen: command &quo
hi everyone
my gluster insists that:
~]$ gluster vol quota DATA list
Path  Hard-limit  Soft-limit  Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
hi
there is a vol with:
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
storage.owner-uid: 107
storage.owner-gid: 107
As you see it is for libvirt/qemu, which is all working fine; I
cannot think at this
hi guys,
should rdiff-backup struggle to back up a glusterfs mount?
I'm trying glusterfs and was hoping, expecting, I could keep
on rdiff-backing up data. I back up directly to
local (non-gluster) storage (xfs) and get this:
$ rdiff-backup --exclude-other-filesystems
--exclude-symbolic-links
hi everyone
it does not work for me, is it supposed to work?
Autofs does not complain nor report any errors nor problems.
* -fstype=glusterfs -rw 10.5.6.49,10.5.6.100:/USER-HOME/&
but this does:
everything -fstype=glusterfs -rw 10.5.6.49,10.5.6.100:/USER-HOME
so wildcard keys do not work?
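One possible explanation, offered as an assumption rather than a confirmed diagnosis: the wildcard entry asks for a subdirectory mount (/USER-HOME/&), and mounting a subdirectory of a gluster volume only works with subdir-mount support (glusterfs 3.12 and later on both client and server); the working entry mounts the whole volume. Side by side:

```shell
# wildcard key; & expands to the key, i.e. a subdirectory mount,
# which needs glusterfs >= 3.12 subdir-mount support:
*           -fstype=glusterfs,rw  10.5.6.49,10.5.6.100:/USER-HOME/&

# whole-volume mount - works on any version:
everything  -fstype=glusterfs,rw  10.5.6.49,10.5.6.100:/USER-HOME
```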
n no probs.
On Wed, Dec 14, 2016 at 1:50 AM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
libvirt/qemu does not get to gluster vols when one has
these:
Upgraded:
glusterfs.x86_64 3.7.17-1.el7 glusterfs-api.x86_64
3.7.17-1.el7
glusterfs-c
...emergency/recovery mode, because, eg. a storage device
could not be mounted during boot, (not a gluster storage nor
a storage which gluster uses)
I log in in rescue mode, I fix that storage/mount problem
(whatever that might be, does not matter) and I do
$ systemctl default
and I find
hi everyone,
this I'd guess is tunable somewhere? yet I cannot get there.
I see:
[2017-03-16 16:55:32.264981] W
[fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 426:
ACCESS() /me => -1 (Permission denied)
[2017-03-16 16:55:32.265192] W
[fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse:
On 02/08/17 02:22, Atin Mukherjee wrote:
Are you referring to other names of peer status output? If
so, then a peerinfo entry having other names populated
means it might be having multiple n/w interfaces or the
reverse address resolution is picking this name. But why
are you worried on the
also, now after the upgrade gluster claims, on some vols, a
long list in heal info, and these amongst them:
Brick
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-USER-HOME
Status: Connected
what are these entries?
On 02/08/17 02:19, Atin Mukherjee wrote:
This means shd client is not
what I've just noticed - the brick in question does show up as:
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-GROUP-WORK  N/A  N/A  N  N/A
for one particular vol. Status for other vols(so far) shows
it ok.
Would this be a volume problem or a brick problem,
But I had not killed anything, unless the system did for some
reason and silently, but I'd not think so.
It seems that one brick is particularly ill about it all.
I'd have to restart it but mostly this would not do and I'd
actually reboot the system; then for a short while it would
be ok, only soon
on number of bricks there might
be too many brick ops involved. This is the reason we
introduced --timeout option in CLI which can be used to
have a larger time out value. However this fix is
available from release-3.9 onwards.
On Mon, Jul 24, 2017 at 3:54 PM, lejeczek
<pelj...@yahoo.co.uk <
... or in other words - can samba break (on Centos 7.3) if
one goes with a gluster version too high?
hi fellas.
I wonder because I see:
smbd[4088153]: Unknown gluster ACL version: -847736808
smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
hi fellas
would you know what could be the problem with: vol status
detail times out always?
After I did the above I had to restart glusterd on the peer
from which the command was issued.
I run 3.8.14. Everything seems to work ok.
many thanks
L.
On 27/07/17 14:13, lejeczek wrote:
... or in other words - can samba break (on Centos 7.3) if
one goes with a gluster version too high?
hi fellas.
I wonder because I see:
smbd[4088153]: Unknown gluster ACL version: -847736808
smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
../source3
.. is this default/desired behaviour?
And is this configurable/controllable behaviour?
I'm thinking - it would be nice not to have the whole vol go
read-only (three peers in cluster) but at the same time have
gluster alert/highlight the problem to a user/admin.
ver. 3.10.3
thanks.
L.
hi fellas
I wonder if gluster with a peer connected via vpn tunnel is
something you would use for production?
@devel - is such a scenario even a valid(approved) one?
many thanks, L.
dear fellas
I've a pool, it's all on the same subnet. Now I'd like to add a
peer which is on a subnet which will not be
available/accessible to all the peers.
a VolV:
peer X 10.0.0.1 <-> 10.0.0.2 peer Y 192.168.0.2 <-> peer Z 192.168.0.3
# so here 192.168.0.3 and 10.0.0.1 do not see each other.
hi guys/gals
I realize that this question must have been asked before; I
searched and found some posts on the web on how to
tweak/tune gluster, however..
What I hope is that some experts and/or devel could write a
bit more, maybe compose a doc on - How to investigate and
trouble gluster's
I wonder - was it the upgrade from 3.8 to 3.10 that caused this
problem.
Can these files be deleted? And if yes, would this be enough?
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi fellas,
same old
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is one brick failing in one vol;
when it happens it's always the same vol on the same brick.
Command: gluster vol status $vol - would show the brick not online.
Restarting glusterd with systemctl does
On 13/09/17 06:21, Gaurav Yadav wrote:
Please provide the output of gluster volume info, gluster
volume status and gluster peer status.
Apart from above info, please provide glusterd logs,
cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
<pelj...@yahoo.co
On 12/09/17 12:59, Niels de Vos wrote:
On Tue, Sep 12, 2017 at 10:01:14AM +0100, lejeczek wrote:
@devel
hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make sure its package
maintainer(s) are responsible for incomplete man
I emailed the logs earlier to just you.
On 13/09/17 11:58, Gaurav Yadav wrote:
Please send me the logs as well i.e glusterd.logs and
cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
On 13/09/17 06:21,
@devel
hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make
sure its package maintainer(s) are responsible for
incomplete man pages.
Often man pages are neglected by authors, too often, and man
is, should always be "the
On 28/09/17 17:05, lejeczek wrote:
On 13/09/17 20:47, Ben Werthmann wrote:
These symptoms appear to be the same as I've recorded in
this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
<atin.mukhe
a...@redhat.com <mailto:gya...@redhat.com>> wrote:
Please send me the logs as well i.e glusterd.logs
and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>>
wrote:
t
cksum, causing state in "State: Peer Rejected (Connected)".
This inconsistency arise due to upgrade you did.
Workaround:
1. Go to node 10.5.6.17
2. Open the info file from "/var/lib/glusterd/vols//info" and remove
"tier-enabled=0".
3. Restart glusterd services
4. Peer probe again.
Thanks
Gaurav
On Thu, Aug 31, 2017 at 3:37 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
attached the lot as per your request.
W
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect -
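A quick way to cross-check the "bricks that are down" claim from that message (a sketch; the volume name is the one quoted above):

```shell
# which bricks (and the self-heal daemon) are online, with PIDs:
gluster volume status GROUP-WORK

# on each peer, confirm the brick processes actually run:
pgrep -af glusterfsd
```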
healed on volume QEMU-VMs
has been unsuccessful on bricks that are down. Please check
if all brick processes are running.
On 04/09/17 11:47, Atin Mukherjee wrote:
Please provide the output of gluster volume info, gluster
volume status and gluster peer status.
On Mon, Sep 4, 2017 at
On 30/08/17 07:18, Gaurav Yadav wrote:
Could you please send me "info" file which is placed in
"/var/lib/glusterd/vols/" directory from all the
nodes along with
glusterd.logs and command-history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek
<pelj...@y
hi
I see:
..
[2017-08-29 12:53:41.708756] W [MSGID: 101095]
[xlator.c:162:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/3.10.5/xlator/features/ganesha.so:
cannot open shared object file: No such file or directory
..
and I wonder.. because nothing provides that lib(in terms of
rpm
hi there
I run off 3.10.5, have 3 peers with vols in replication.
Each time I copy some data on a client(which is a peer too)
I see something like it:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs
has been successful
Brick
hi fellas,
same old, same old
in log of the probing peer I see:
...
2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
.. or just a way to make samba, when samba shares via the
glusterfs api, show in a share/vol a submount (like an fs
bind) - how, if possible at all?
I guess it would have to be some sort of crossing from one
vol to another vol's dir or something, hmm...
many thanks, L.
hi everyone
I assume such a situation where a network segment changes, in
the simplest case where one provides a box (a brick) a new,
faster net interface. So after that, boxes have two nics, and then
bricks get introduced to them via gluster probe $_newIPs.
Ideally @ a developer - how gluster handles
into gluster would be
great, but I'm not sure gluster has any mechanics for notifying clients
of changes since most of the logic is in the client, as I understand it.
On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:
hi guys will we have gluster with inotify? some
point
any mechanics for notifying clients
of changes since most of the logic is in the client, as I understand it.
On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:
hi guys will we have gluster with inotify? some
point / never? thanks, L
hi guys
I've had a two-replica volume, added a third brick and now I see hundreds of
thousands of files to heal; interestingly, though, only on the two bricks that
already constituted the volume.
The volume prior to expansion was, according to gluster, okay, and when I
added the third brick it immediately started
hi guys
is that configurable somewhere as a global setting?
many thanks, L.
hi guys
something is wrong with my gluster, it says there are files
healing but it does not seem like it actually heals anything.
Here is, apologies for the biggish snippet, a bit of log from
one volume. I cannot decode it but have a feeling that an
expert/devel can spot something that is not completely okay
On 01/05/18 23:59, Vijay Bellur wrote:
On Tue, May 1, 2018 at 5:46 AM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder outside of autofs mounted gluster vol,
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder outside of autofs mounted gluster vol, to a
regular fs and removing acl works as expected.
Inside mounted gluster vol I seem to be able to
modify/remove ACLs for users, groups and masks but that one
simple, important
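One thing worth ruling out here, as an assumption rather than a confirmed fix: a fuse mount only honours POSIX ACLs when mounted with the acl option, and autofs maps often omit it (server/volume names below are hypothetical):

```shell
# manual mount with ACL support enabled:
mount -t glusterfs -o acl server:/myvol /mnt/myvol

# for autofs, add acl to the map entry options, e.g.:
#   *  -fstype=glusterfs,acl,rw  server:/myvol/&

# then verify that setfacl -b behaves:
setfacl -m u:someuser:r /mnt/myvol/dir
setfacl -b /mnt/myvol/dir
getfacl /mnt/myvol/dir
```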
hi guys
will we have gluster with inotify? some point / never?
thanks, L.
hi people
I wonder if anybody experiences any problems with vols in
replica mode that run across IPoIB links while libvirt stores
qcow images on such a volume?
I wonder if maybe devel could confirm it should just work,
and then I should blame the hardware/Infiniband.
I have a direct IPoIB link
hi everyone
I understand this should be trivial.
I'm amazed by the lack of a tool in the gluster tool kit which
would swap a peer's IP when it is the only change in the
cluster. A peer naturally is a brick in volumes.
Or am I oblivious to the fact that the gluster command line
actually does this?
What do
hi everyone
do you guys know why incrond does not catch/see what happens
inside autofs-mounted gluster volumes?
I rsync into such a mount point and incron is oblivious; from its
perspective nothing happened.
many thanks, L.
hi everyone
I think geo-repl needs ssh and keys in order to work, but
does anything else? Self-heal perhaps?
Reason I ask is that I had some old keys gluster put in when
I had geo-repl, which I removed, and self-heal has now gone rogue;
I cannot get statistics:
..
Gathering crawl statistics on volume
sorry guys to spam a bit - I hope someone from redhat could
check whether - freeipa-us...@redhat.com - is up & ok?
I've been a subscriber for a couple of years but now,
suddenly(?) I cannot mail there, I get:
"
Sorry, we were unable to deliver your message to the
following address.
hi guys
is it possible to configure things such as the log dir, or is it
fixed at compile time?
I see the daemon fails to start:
...
[2018-03-09 12:32:57.341142] I [MSGID: 100030]
[glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started
running /usr/sbin/glusterd version 3.13.1 (args:
it's such a shame devel have not improved these bits yet. Would be nice
to have hooks managed by cli.
You can disable it with removing a hook script exists in
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
- Original Message -
From: lejeczek
hi guys,
I have a Samba (centos 7.5) which does not pick up gluster's quota. More
specifically it shows 0 bytes free even if I increase quotas.
Where in gluster I could start troubleshooting, if possible?
many thanks, L.
hi guys
can we mix both versions? My cluster is 3.12.x version but I'd like to
add another peer in 4.1.x version. Will that work?
And if yes would it be a good path to migrate everything to 4.1.x, by
adding/replacing nodes/peers with 4.1.x?
many thanks, L.
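Mixed versions are generally tolerated only during a rolling upgrade, and the cluster op-version is the thing to watch; a sketch (the 40100 value is an assumption for 4.1.x):

```shell
# what the cluster currently runs at:
gluster volume get all cluster.op-version

# after every peer is on 4.1.x, raise it:
gluster volume set all cluster.op-version 40100
```

Until the op-version is raised, newer peers speak the old protocol, which is what makes the add-then-migrate path workable.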
hi guys
I presume that because 4.1.x has been in the EPEL repo it is confirmed and
validated to work 100% with a default samba installation.
But, I'd prefer to hear you guys say you ACTUALLY have your samba working
100% with 4.1.x. Anybody?
many thanks, L.
On 09/11/2018 15:08, Kaleb S. KEITHLEY wrote:
On 11/9/18 8:12 AM, lejeczek wrote:
hi guys
I presume that because 4.1.x has been in the EPEL repo it is confirmed and
validated to work 100% with a default samba installation.
GlusterFS — any version — is _not_ in EPEL.
However it is in the CentOS Storage
hi everyone
I'm hoping devel might be reading this, but if not - anybody tried
glusterfs off PyPy?
If yes and it works then what was/is the experience?
many thanks, L.
hi guys,
after a reboot Ganesha export are not there. Suffices to do:
$ systemctl restart nfs-ganesha - and all is good again.
Would you have any ideas why?
I'm on Centos 7.6 with nfs-ganesha-gluster-2.7.6-1.el7.x86_64;
glusterfs-server-6.4-1.el7.x86_64.
many thanks, L.
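A guess worth testing here: nfs-ganesha may be starting before glusterd has the volumes up, so the export fails at boot and succeeds on a later manual restart. A hedged systemd ordering drop-in (unit names as shipped on CentOS 7):

```shell
mkdir -p /etc/systemd/system/nfs-ganesha.service.d
cat > /etc/systemd/system/nfs-ganesha.service.d/order.conf <<'EOF'
[Unit]
After=network-online.target glusterd.service
Wants=network-online.target
EOF
systemctl daemon-reload
```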
hi guys,
is it possible to add an iface, either globally or per volume, through which
gluster would be available? And if yes, then how?
many thanks, L.
/glusterd.info file and figure out
if you have a file with this uuid at /var/lib/glusterd/peers/*.
If you find any such file, please delete it and restart glusterd
on that node.
On Fri, Oct 11, 2019 at 3:15
PM lejeczek <pelj...@yahoo.co.uk>
hi everyone,
I do not suppose it is a matter of tweaking Gluster (or Samba), as these
problems I think started appearing after an upgrade of Samba
(samba-4.9.1-6.el7.x86_64) and/or Gluster (glusterfs-6.5-1.el7.x86_64).
How it manifests is that Samba operations that a user performs are
incredibly slow.
hi everyone
I've been running glusterfs 6 for a while and either I did not notice or
it just started to pop:
[2019-10-07 09:17:37.071409] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8faa)
[0x7fd6204d3faa]
On 30/01/2019 20:26, Artem Russakovskii wrote:
> I found a similar issue
> here: https://bugzilla.redhat.com/show_bug.cgi?id=1313567. There's a
> comment from 3 days ago from someone else with 5.3 who started seeing
> the spam.
>
> Here's the command that repeats over and over:
> [2019-01-30
hi everyone,
do those options need to be 'on' if the cluster does not use geo-repl?
> geo-replication.indexing
on
> geo-replication.ignore-pid-check
on
hi guys,
as per the subject.
Only one thing that I'd like to tell first is that on that peer/node
Samba runs. The other two peers do not show this in their logs.
In the gluster log for the volume I get plenty of:
...
t)
[2019-10-11 09:40:40.768647] E [socket.c:3498:socket_connect]
0-glusterfs:
On 08/01/2020 11:28, Ravishankar N wrote:
>
> On 08/01/20 3:55 pm, lejeczek wrote:
>> On 08/01/2020 02:08, Ravishankar N wrote:
>>> On 07/01/20 8:07 pm, lejeczek wrote:
>>>> Which process should I be gdbing, selfheal's?
>>>>
>>> No the b
hi everyone.
I see this:
$ gluster volume heal QEMU_VMs info
Brick swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Status: Connected
Number of entries: 0
Brick rider-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
/default.sock
/private/sbus-dp_disk.812
/pam
On 07/01/2020 07:08, Ravishankar N wrote:
>
>
> On 06/01/20 8:12 pm, lejeczek wrote:
>> And when I start this volume, in log on the brick which shows gfids:
> I assume these messages are from the self-heal daemon's log
> (glustershd.log). Correct me if I am mistaken.
>>
On 07/01/2020 13:11, Ravishankar N wrote:
>
> On 07/01/20 4:38 pm, lejeczek wrote:
>>
>> 3. These files which the brick/replica shows appear to exist on only
>> that very brick/replica:
> Right, so the mknods are failing on the other 2 bricks (as seen from
> th