Hi,
I am trying to understand why geo-replication during the "History Crawl" starts
failing on each of the three bricks, one after the other. I have enabled
DEBUG for all the logs configurable by the geo-replication command.
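For reference, this is roughly how the levels were raised; a sketch using placeholder master/slave names, and assuming the log_level / gluster_log_level config keys (the exact key spelling can differ between versions):

  gluster volume geo-replication mastervol slavehost::slavevol config log_level DEBUG
  gluster volume geo-replication mastervol slavehost::slavevol config gluster_log_level DEBUG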
Running glusterfs v4.16, the behaviour is as follows:
- The "History Crawl" wor
>>> not.
>>> We also know that there are options to improve performance. But first of
>>> all we are interested
>>> in whether there are reference values.
>>> Regards
>>> David Spisla
>>>
an do it.
>
> I compared the file /var/lib/glusterd/vols/export/info and it is the same
> in both, though entries are in different order.
>
> Diego
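In case the ordering difference is hiding a real change, an order-insensitive comparison along these lines might help (server2 is a hypothetical name for the second server, same path assumed on both):

  diff <(sort /var/lib/glusterd/vols/export/info) \
       <(ssh server2 sort /var/lib/glusterd/vols/export/info)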
>
>
>
>
> On Tue, Jan 15, 2019 at 5:03 AM Davide Obbi
> wrote:
>
>>
>>
>> On Tue, Jan 15, 2019 at 2:18
On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina wrote:
> Dear all,
>
> I was running gluster 3.10.12 on a pair of servers and recently upgraded
> to 4.1.6. There is a cron job that runs nightly on one machine, which
> rsyncs the data on the servers over to another machine for backup purposes.
> Th
s, and rebooted all nodes.
>
>
>
> *From:* Davide Obbi
> *Sent:* Monday, January 7, 2019 12:47 PM
> *To:* Matt Waymack
> *Cc:* Raghavendra Gowdappa ;
> gluster-users@gluster.org List
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> l
affected, even on the same volumes. This is
> seemingly only affecting the FUSE clients.
>
>
>
> *From:* Davide Obbi
> *Sent:* Sunday, January 6, 2019 12:26 PM
> *To:* Raghavendra Gowdappa
> *Cc:* Matt Waymack ; gluster-users@gluster.org List <
> gluster-users@glus
I guess you already tried unmounting, stopping/starting the volume, and mounting again?
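In case it helps, the cycle I had in mind is roughly the following, with a hypothetical mount point and the gv1 volume/host names from the volume info elsewhere in the thread:

  umount /mnt/gv1                              # hypothetical client mount point
  gluster volume stop gv1
  gluster volume start gv1
  mount -t glusterfs tpc-glus1:/gv1 /mnt/gv1   # remount through one of the nodes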
On Mon, Jan 7, 2019 at 7:44 PM Matt Waymack wrote:
> Yes, all volumes use sharding.
>
>
>
> *From:* Davide Obbi
> *Sent:* Monday, January 7, 2019 12:43 PM
> *To:* Matt Waymack
> *Cc:* Raghavendra Go
pc-glus3:/exp/b5/gv1
>>> Brick15: tpc-arbiter2:/exp/b5/gv1 (arbiter)
>>> Brick16: tpc-glus1:/exp/b6/gv1
>>> Brick17: tpc-glus3:/exp/b6/gv1
>>> Brick18: tpc-arbiter2:/exp/b6/gv1 (arbiter)
>>> Brick19: tpc-glus1:/exp/b7/gv1
>>> Brick20: tpc-glu
which file?
>
> I think you've hit on the cause of the issue. Thinking back, we've had
> some extended power outages, and due to a misconfiguration in the swap
> file device name a couple of the nodes did not come up. I didn't
> catch it for a while, so maybe the delet
If the long GFID does not correspond to any file, it could mean the file has
been deleted by the client mounting the volume.
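As a side note, when checking whether a GFID from the logs still maps to a real file, something like this can be run directly on a brick; a sketch with a placeholder GFID and one of the brick paths from the thread:

  GFID=<gfid-from-the-client-log>              # hypothetical placeholder
  BRICK=/exp/b5/gv1
  # regular files are hard-linked under .glusterfs/<first2>/<next2>/<gfid>;
  # no match usually means the file no longer exists on that brick
  find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path "*/.glusterfs/*"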
I think this happens when the delete was issued while the number of active bricks
did not reach quorum majority, or a second brick was taken down while another was down, or
-sync 1.5TB of data over our WAN connection.
>
> Does glusterfs allow for this?
>
> > command
> > not found
> > geo-replication command failed
> >
> > Do you know where the problem is?
> >
> > Thanks in advance!
> >
> > BR
Hi,
I have been conducting performance tests over the past day on our new HW
where we plan to deploy a scalable file-system solution. I hope the results
can be helpful to someone, and I would appreciate feedback regarding
optimizations and volume xlator setup.
If necessary volume profiling has bee
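For anyone who wants to reproduce the numbers, volume profiling can be captured roughly like this (volname is a placeholder for the volume under test):

  gluster volume profile volname start
  # ... run the benchmark workload ...
  gluster volume profile volname info      # per-brick FOP counts and latencies
  gluster volume profile volname stop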
Bug 1649252 submitted
thanks
On Tue, Nov 13, 2018 at 2:41 AM Raghavendra Gowdappa
wrote:
>
>
> On Mon, Nov 12, 2018 at 9:36 PM Davide Obbi
> wrote:
>
>> Hi,
>>
>> i have noticed that this option is repeated twice with different values
>> in gluster 4.1.
Hi,
I have noticed that this option is listed twice with different values in
gluster 4.1.5 if you run gluster volume get volname all:
performance.cache-size 32MB
...
performance.cache-size 128MB
Is that right?
Regards
anding the problem better.
>
> Thanks,
> Vijay
>
> On Tue, Nov 6, 2018 at 6:16 AM Davide Obbi
> wrote:
>
>> Hi,
>>
>> i am testing gluster-block and i am wondering if someone has used it and
>> have some feedback regarding its performance.. just to set so
Hi,
I am testing gluster-block and I am wondering if someone has used it and
has some feedback regarding its performance, just to set some
expectations. For example:
- I have deployed a block volume using heketi on a 3-node gluster 4.1
cluster; it's a replica 3 volume.
- I have mounted it via iscs
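For context, the provisioning and client attach would look roughly like this; a sketch with placeholder hosts/targets, assuming heketi's blockvolume subcommand and standard open-iscsi tooling:

  heketi-cli blockvolume create --size=100 --ha=3                  # size in GB; flags assumed from heketi-cli help
  iscsiadm -m discovery -t sendtargets -p gluster-node1:3260       # gluster-node1 is a placeholder
  iscsiadm -m node -T <target-iqn-from-discovery> -p gluster-node1:3260 --login
  # the LUN then appears as a local /dev/sdX block device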
Hi,
after running volume stop/start the error disappeared and the volume can be
mounted from the server.
Regards
On Tue, Oct 9, 2018 at 3:27 PM Davide Obbi wrote:
>
> Hi,
>
> i have enabled SSL/TLS on a cluster of 3 nodes, the server to server
> communication seems workin
Hi,
I have enabled SSL/TLS on a cluster of 3 nodes. The server-to-server
communication seems to be working, since gluster volume status returns the
three bricks, but we are unable to mount from the client; the client can
also be one of the gluster nodes itself.
Options:
/var/lib/glusterd/secure-acce
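For reference, a typical setup looks roughly like this; a sketch assuming the standard TLS option names and that certificates and the CA are already deployed on all nodes and the client:

  touch /var/lib/glusterd/secure-access        # management-plane TLS (servers and clients)
  gluster volume set volname client.ssl on     # I/O-path TLS; volname is a placeholder
  gluster volume set volname server.ssl on
  gluster volume set volname auth.ssl-allow 'node1-cn,client1-cn'   # hypothetical certificate CNs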
.
> The disk limit is enforced for all the files in that directory once they
> are created.
>
> On Fri, Sep 14, 2018 at 11:35 AM Davide Obbi
> wrote:
>
>> Here:
>>
>> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Directory%20Quota/
>>
>> K
as mentioned.
>
> As far as I know, it is not possible to do so.
> Quota's limits are stored on the directory itself.
> Without the directory being there, quota can't store the limit for
> future directories.
>
>
> On Thu, Sep 13, 2018 at 4:48 PM Davide Obbi
> w
Hi,
According to glusterdoc:
"Note: You can set the disk limit on the directory even if it is not created.
The disk limit is enforced immediately after creating that directory."
However, if I try to set the limit on directories not existing on the volume,
I get:
quota command failed : Failed to get t
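Based on the replies above (the limit can only be stored on a directory that already exists), the working sequence would look roughly like this; volname and the path are placeholders:

  gluster volume quota volname enable
  mkdir /mnt/volname/projects                         # create the directory on the mounted volume first
  gluster volume quota volname limit-usage /projects 10GB
  gluster volume quota volname list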
It didn't make a difference. I will try to re-configure with a 2x3 config.
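For completeness, the option suggested in the quoted reply below is set per volume roughly like this (volname is a placeholder):

  gluster volume set volname cluster.lookup-optimize on
  gluster volume get volname cluster.lookup-optimize    # verify it took effect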
On Fri, Aug 31, 2018 at 1:48 PM Raghavendra Gowdappa
wrote:
> another relevant option is setting cluster.lookup-optimize on.
>
> On Fri, Aug 31, 2018 at 3:22 PM, Davide Obbi
> wrote:
>
>> #gluster vol
2018, 8:49 PM Raghavendra Gowdappa
> wrote:
>
>>
>>
>> On Thu, Aug 30, 2018 at 8:38 PM, Davide Obbi
>> wrote:
>>
>>> yes "performance.parallel-readdir on and 1x3 replica
>>>
>>
>> That's surprising. I thought performance.p
yes "performance.parallel-readdir on and 1x3 replica
On Thu, Aug 30, 2018 at 5:00 PM Raghavendra Gowdappa
wrote:
>
>
> On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi
> wrote:
>
>> Thanks Amar,
>>
>> i have enabled the negative lookups cache on the volum
Hello,
did anyone ever manage to achieve reasonable waiting times while performing
metadata-intensive operations such as git clone, untar, etc.? Is this a
feasible workload, or will it never be in scope for glusterfs?
I'd like to know, if possible, which options affect such
volume per
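For what it's worth, the small-file/metadata options most often mentioned on this list are along these lines; only a sketch, volname is a placeholder and the values need testing per workload:

  gluster volume set volname performance.parallel-readdir on
  gluster volume set volname performance.nl-cache on        # negative lookup cache
  gluster volume set volname cluster.lookup-optimize on
  gluster volume set volname network.inode-lru-limit 200000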
issue open[2] Happy to get contributions, and help in getting a
>> > newer approach to Quota.
>> >
>> >
>> > These are our set of initial features which we propose to take out of
>> ‘fully’ supported features. While we are in the process of making the
>
Hi,
how do I tell heketi to use glusterd2 instead of glusterd?
When I perform the topology load I get the following error:
Creating node gluster01 ... Unable to create node: New Node doesn't have
glusterd running
This suggests that it is looking for glusterd; in fact, if I switch to glusterd
it finds
Hi,
I'm trying to create a volume using glusterd2, however curl does not return
anything, and in verbose mode I get the above-mentioned error. The only
gluster service on the hosts is "glusterd2.service".
/var/log/glusterd2/glusterd2.log:
time="2018-08-13 18:05:15.395479" level=info msg="runtime er
Hi,
does anyone know why glusterd2 4.1 is not available in the main centos
repos?
http://mirror.centos.org/centos/7/storage/x86_64/gluster-4.1/
while it is available in the buildlogs?
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.1/
thanks
Davide
ch failed
>
> Error: Unable to find Peer ID
>
>
>
>
>
>
>
> --
> regards
> Aravinda VK
>
Hi,
I'm testing some operations with gluster4 and glustercli.
I have re-installed a node with the same host name/IP and added it back to
the cluster.
This resulted in a double entry, which could be expected, but then I am not
able to remove the old entry, the new one, or any other node.
So at this point
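For reference, the removal being attempted was along these lines; the subcommand names are assumptions based on the glusterd2 CLI help, and the peer ID is a placeholder:

  glustercli peer list                       # assumed: show peers and their IDs
  glustercli peer remove <peer-id>           # this is what fails with "Unable to find Peer ID"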
AM, Davide Obbi wrote:
> > Thanks Kaleb,
> >
> > any chance I can make the node work after the downgrade?
> > thanks
>
> Without knowing what doesn't work, I'll go out on a limb and guess that
> it's an op-version problem.
>
> Shut down you
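If it is an op-version mismatch, it can be checked and, if needed, bumped cluster-wide roughly like this (the target number is a placeholder and must not exceed what the installed build supports):

  gluster volume get all cluster.op-version        # currently active cluster op-version
  gluster volume get all cluster.max-op-version    # highest op-version this build supports
  gluster volume set all cluster.op-version <number>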
ger they'll be there. I suggest you copy them
> if you think you're going to need them in the future.)
>
>
>
> On 05/15/2018 04:58 AM, Davide Obbi wrote:
> > hi,
> >
> > i noticed that this repo for glusterfs 3.13 does not exists anymore at:
> >
> &g
Hi,
I noticed that the repo for glusterfs 3.13 does not exist anymore at:
http://mirror.centos.org/centos/7/storage/x86_64/
I knew it was not going to be long-term supported; however, the downgrade to
3.12 breaks the server node. I believe the issue is with:
*[2018-05-15 08:54:39.981101] E [MSGID: