Hi,
This is about teaming in general, not Gluster specifically.
We used teaming for some time and switched back to bonding because we had
issues with teaming's load balancing.
With the teaming config set to "LACP, eth, ipv4, ipv6", the result was that one
interface was utilized more than the other one, and in some cases one
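For reference, that hash setup lives in the teamd runner config; ours looked
roughly like this (device and port names below are placeholders, not our real
ones):

    # /etc/teamd/team0.conf -- minimal LACP sketch
    {
      "device": "team0",
      "runner": {
        "name": "lacp",
        "active": true,
        "tx_hash": ["eth", "ipv4", "ipv6"]
      },
      "ports": { "eth1": {}, "eth2": {} }
    }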
Hello everybody
I have 3 datacenters in different regions. Can I deploy my own cloud
storage with the help of GlusterFS on the physical nodes? If I can, what are
the differences between GlusterFS cloud storage and local Gluster storage?
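To make the question concrete, what I imagine is something like this
(hostnames and brick paths are made up):

    # one node per datacenter, one replica in each -- just a sketch
    gluster peer probe dc2-node1
    gluster peer probe dc3-node1
    gluster volume create cloudvol replica 3 \
        dc1-node1:/bricks/b1 dc2-node1:/bricks/b1 dc3-node1:/bricks/b1
    gluster volume start cloudvol

(I realise synchronous replication across regions may be a bad idea because
of latency; that is part of what I am asking.)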
Thanks for your attention :)
I finally did find some stats on teaming
http://rhelblog.redhat.com/2014/06/23/team-driver/
On 6/19/2017 10:42 AM, WK wrote:
OK, at least it's not an *issue* with Gluster. I didn't expect any, but
you never know.
I have been amused at the 'lack' of discussion on Teaming performance
found
Hi,
I tried this on my host by setting the corresponding ports, but I didn't see
the issue on my machine locally.
However, with the logs you have sent it is pretty much clear the issue is
related to ports only.
I will try to reproduce it on some other machine. Will update you as soon
as possible.
Thanks
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote:
I installed 3.8.12 a while back and the packages seem to have been
updated since (2017-06-13), prompting me for updates.
I haven't seen any release announcements or notes on this though.
Bump - new versions are 3.8.12-2. Just curious as to
OK, at least it's not an *issue* with Gluster. I didn't expect any, but
you never know.
I have been amused at the 'lack' of discussion on Teaming performance
found on Google searches.
There are lots of 'here it is and here is how to set it up'
articles/posts, but no 'ooh-wee-wow it is
Hi Vijay
Thanks. Is it straightforward to upgrade from 3.8 to 3.11 on a production
system? I have around 300 TB of data and approximately 60-80 million files.
Is there any other optimisation which I can try at the same time?
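For example, would options like these be worth setting for a volume with
that many files? I am just guessing from the docs here, and 'bigvol' stands
in for my volume name:

    gluster volume set bigvol performance.readdir-ahead on
    # parallel-readdir only exists in newer releases, so after the upgrade:
    gluster volume set bigvol performance.parallel-readdir on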
Thanks
Kashif
On Sun, Jun 18, 2017 at 4:57 PM, Vijay Bellur
Gentle reminder ...
On 15 June 2017 at 10:43, Nithya Balachandran wrote:
> Hi,
>
> We are looking at improving our documentation (
> http://gluster.readthedocs.io/en/latest/) and would like your feedback.
>
> Please let us know what would make the documentation more useful
Hi,
I just remembered that I once filed a bug at Red Hat:
https://bugzilla.redhat.com/show_bug.cgi?id=1434000
Could this be the same problem? This time, though, it's not a few files but
hundreds of thousands.
BTW: I tried disabling bitrot, but it didn't help.
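For reference, this is what I ran (volume name is a placeholder):

    gluster volume bitrot myvol scrub status   # to see if the scrubber flagged anything
    gluster volume bitrot myvol disable        # how I disabled it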
Best Regards
Bernhard
2017-06-19
Hi,
I checked the attributes of one of the files with I/O errors:
root@chastcvtprd04:~# getfattr -d -e hex -m -
/data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_1050932/CHUNK_11126559/SFILE_CONTAINER_014
getfattr: Removing leading '/' from absolute path names
# file:
On Mon, Jun 19, 2017 at 7:01 PM, Joe Julian wrote:
> Isn't this just brick multiplexing?
>
I initially thought about that, but with brick multiplexing the PID should
be the same, which is not the case here.
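A quick way to cross-check on an affected node (a sketch; assumes a release
that has multiplexing at all):

    gluster volume get all cluster.brick-multiplex   # is multiplexing enabled?
    gluster volume status                            # compare the PID/port of each brick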
>
>
> On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee
Hi,
we use a bunch of replicated gluster volumes as a backend for our
backup. Yesterday I noticed that some synthetic backups failed because
of I/O errors.
Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads
of I/O errors.
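(For anyone reproducing: a whitespace-safe variant of that sweep would be

    find /gluster_vol -type f -print0 | xargs -0 md5sum > /dev/null

with the md5sum output discarded so only the errors show.)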
The brick log file shows the errors below:
[2017-06-19
Isn't this just brick multiplexing?
On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee wrote:
>On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote:
>
>> Hi, all
>>
>>
>>
>> I found two of my bricks from different volumes are using the same port
>> 49154 on
I haven't done any testing of performance differences, but on my oVirt/RHEV
setup I use standard bonding, as that's what it supports. On the standalone
gluster nodes I use teaming for bonding.
Teaming may be slightly easier to manage, but not by much if you are
already used to bond setups. I haven't
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote:
> Hi, all
>
>
>
> I found two of my bricks from different volumes are using the same port
> 49154 on the same glusterfs server node, is this normal?
>
No, it's not.
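To narrow it down, it would help to see what is actually bound to that port
(49154 taken from your mail):

    gluster volume status    # the port each brick is supposed to use
    ss -tlnp | grep 49154    # the process that actually owns the port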
Can you please help me with the following information:
1.
On Fri, 2017-06-16 at 16:15 -0400, Ludwig Gamache wrote:
> All,
>
> I just enabled the trashcan feature on our volumes. It is working as
> expected. However, I can't
> seem to find the rules to empty the trashcan. Is there any automated process
> to do that?
No such policies have been
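The usual workaround is a scheduled cleanup of the .trashcan directory from
a client mount, something like this (path and retention are only examples):

    # purge trashed files older than 30 days
    find /mnt/glustervol/.trashcan -type f -mtime +30 -delete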