I am still facing this issue. Any suggestions?
On Fri, May 27, 2016 at 10:48 AM, ABHISHEK PALIWAL
wrote:
> Any hint from the logs?
>
> On Thu, May 26, 2016 at 11:59 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>>
>>
>> On Thu, May 26, 2016 at 11:54 AM,
The meeting minutes for this week's meeting are available at:
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-01/weekly_community_meeting_01june2016.2016-06-01-12.00.html
Minutes (text) :
Does no one have any suggestions? Would the scenario I have been toying with
work: remove the brick from the node with the out-of-sync snapshots,
destroy all associated logical volumes, and then add the brick back as an
arbiter node?
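For reference, the sequence I have in mind would look roughly like the commands below. This is only a sketch with hypothetical volume and brick names (`myvol`, `node3:/bricks/brick1/data`, `vg_bricks/brick1`); please sanity-check the replica counts against your actual layout before running anything:

```shell
# Sketch only -- hypothetical names, try against a test setup first.

# 1. Remove the brick whose snapshots are out of sync
#    (replica count drops from 3 to 2).
gluster volume remove-brick myvol replica 2 node3:/bricks/brick1/data force

# 2. On node3: destroy the logical volume that backed the brick.
lvremove /dev/vg_bricks/brick1

# 3. Recreate the LV/filesystem, then re-add the brick as an arbiter.
gluster volume add-brick myvol replica 3 arbiter 1 node3:/bricks/brick1/data
```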
On 1 June 2016 at 13:40, Alastair Neil
> This could be because of the nufa xlator. Since the files are present on
> the brick, I don't suspect RDMA here.
Agreed.
> Is nufa still supported? Could this be a bug in nufa + dht?
Until we explicitly decide to stop building and distributing it, it's still
"supported" in some sense, but only
This could be because of the nufa xlator. Since the files are present on
the brick, I don't suspect RDMA here.
Raghavendra,
Is nufa still supported? Could this be a bug in nufa + dht?
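To confirm whether nufa is actually in play on the affected volume, something like this should show it (a sketch; `scratch` is the volume name from the `volume info` output quoted elsewhere in this thread):

```shell
# Check whether the nufa option is enabled on the volume.
gluster volume get scratch cluster.nufa

# Alternatively, look for the nufa xlator in the generated volfiles on a node.
grep -rn nufa /var/lib/glusterd/vols/scratch/
```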
On 06-Jun-2016 10:29 PM, "Fedele Stabile"
wrote:
> I add some information about my
Let me add some information about my cluster:
[root@wn001 glusterfs]# gluster volume info
Volume Name: scratch
Type: Distribute
Volume ID: fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e
Status: Started
Number of Bricks: 32
Transport-type: rdma
Bricks:
Brick1: ib-wn001:/bricks/brick1/gscratch0
Brick2:
Good morning to all the community.
I have a problem and I would like to ask for your kind help.
It happens that newly written files from an MPI application that uses
InfiniBand disappear from GlusterFS, although they are present on a brick...
Please, can anyone help me solve this problem?
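Not a fix, but a way to narrow this down: from a client mount you can ask GlusterFS which brick(s) it thinks hold a file, and compare that with the brick where you actually see it. A sketch, with `/mnt/scratch/somefile` as a placeholder path:

```shell
# On a client: ask glusterfs which brick(s) hold the file
# (trusted.glusterfs.pathinfo is a virtual xattr answered by the client stack).
getfattr -n trusted.glusterfs.pathinfo /mnt/scratch/somefile

# On the brick server: check the file and its xattrs directly on the backend.
stat /bricks/brick1/gscratch0/somefile
getfattr -d -m . -e hex /bricks/brick1/gscratch0/somefile
```

If the file is visible on the backend but `pathinfo` (or a plain `ls` on the mount) does not show it, that points at the distribute layer rather than the transport.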
thanks for your reply
On Sun, Jun 5, 2016 at 10:42 PM, Kaleb Keithley wrote:
>
>
> - Original Message -
> > From: "袁仲" >
> >
> >
> > I have checked the source code of GlusterFS; the communication between
> > cli and glusterd, glusterd and
Also, I see lots of entries in pmap output:
===
7ef9ff8f3000 4K - [ anon ]
7ef9ff8f4000 8192K rw--- [ anon ]
7efa000f4000 4K - [ anon ]
7efa000f5000 8192K rw--- [ anon ]
===
If I sum them, I get the following:
===
# pmap 15109 | grep '[ anon ]' |
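The truncated pipeline above can be completed with awk; a sketch that sums the size column (awk's numeric coercion ignores the trailing `K`, so the sizes add up directly in KB):

```shell
# Sum the sizes (in KB) of all anonymous mappings reported by pmap for PID 15109.
pmap 15109 | awk '/\[ anon \]/ { sum += $2 } END { printf "%d KB anon total\n", sum }'
```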
I believe multi-threaded shd had not been merged into the 3.7 branch as of
3.7.11 (inclusive), because I've found this [1].
[1] https://www.gluster.org/pipermail/maintainers/2016-April/000628.html
06.06.2016 12:21, Kaushal M wrote:
Has multi-threaded SHD been merged into 3.7.* by any chance? If not,
what I'm saying below doesn't apply.
We saw problems when encrypted transports were used, because the RPC
layer was not reaping threads (doing pthread_join) when a connection
ended. This led to similar observations of huge VIRT
Hello.
We use v3.7.11, replica 2 setup between 2 nodes + 1 dummy node for
keeping volumes metadata.
Now we observe huge VSZ (VIRT) usage by glustershd on the dummy node:
===
root 15109 0.0 13.7 76552820 535272 ? Ssl May26 2:11
/usr/sbin/glusterfs -s localhost --volfile-id