Hi list,
Applying as an organization for GSOC requires us to prepare a publicly
accessible list of project ideas.
Anyone within the community who has an idea for a project and is willing
to mentor students, please add the idea at [1]. (Add it even if you
can't or won't mentor.) There is
On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal. I deleted the
files from all of the bricks
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
heal using 'gluster volume heal homegfs full'. Even after the full
heal,
On 02/02/2015 10:29 PM, Krishnan Parthasarathi wrote:
IOW, given a d_off and a common routine, pass the d_off along with this
(i.e. the current xlator) to get the subvol that the d_off belongs to. This
routine would decode the d_off for the leaf ID as encoded in the
client/protocol layer, and match its
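The snippet above is cut off, but the idea it describes (stamping a leaf/subvol
ID into some bits of the 64-bit d_off and having one common routine decode it
back) can be illustrated with a minimal, hypothetical C sketch. The names
(doff_encode, doff_decode, LEAF_BITS) and the choice of using the high bits are
assumptions made here for illustration only; they are not gluster's actual API.

/* Hypothetical sketch: pack a leaf (subvol) ID into the top LEAF_BITS
 * of a 64-bit d_off, and decode it again in a common routine.
 * Illustrative only; not gluster's real encoding. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define LEAF_BITS 8   /* assumed width reserved for the leaf ID */

/* Keep the low bits of the backend offset, stamp the leaf ID on top. */
static uint64_t
doff_encode (uint64_t backend_off, unsigned leaf_id)
{
        return (((uint64_t) leaf_id) << (64 - LEAF_BITS)) |
               (backend_off & (~0ULL >> LEAF_BITS));
}

/* Recover the leaf ID (to pick the subvol) and the original backend
 * offset (to resume the readdir on that subvol). */
static void
doff_decode (uint64_t d_off, unsigned *leaf_id, uint64_t *backend_off)
{
        *leaf_id     = (unsigned) (d_off >> (64 - LEAF_BITS));
        *backend_off = d_off & (~0ULL >> LEAF_BITS);
}

int
main (void)
{
        unsigned leaf;
        uint64_t off;
        uint64_t d_off = doff_encode (0x1234, 3);

        doff_decode (d_off, &leaf, &off);
        printf ("leaf=%u off=0x%" PRIx64 "\n", leaf, off);
        return 0;
}

Reserving the high bits keeps small backend offsets usable as-is, at the cost
of capping the number of leaves at 2^LEAF_BITS; the trade-off would differ if
the low bits were used instead.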
Hi gluster-devel@gluster.org,
Your email was added by kshlms...@gmail.com to receive software defect notifications from
Coverity Scan
for the gluster/glusterfs project.
To confirm and activate these notifications,
click here.
If you do not wish to receive these emails, you may
Today we discussed the Geo-rep Status design; a summary of the discussion:
- No use case for the 'Deletes pending' column; should we retain it?
- No separate column for Active/Passive. A worker can be Active/Passive
only when it is Stable (it can't be Faulty and Active)
- Rename Not Started status
Cancel this issue. I found the problem. Explanation below...
We use Active Directory to manage our user accounts; however, sssd
doesn't seem to play well with gluster. When I turn it on, the CPU load
shoots up to between 80% and 100% and stays there (previously submitted bug
report). So,
3.4, not 2.4... Need more coffee!!!
On 02/03/2015 11:12 AM, Joe Julian wrote:
Odd, I was using sssd with home directories on gluster from 2.0
through 2.4 and never had a problem (I'm no longer at that company,
but they still have home directories on Gluster). Might be worth
another look.
On
Odd, I was using sssd with home directories on gluster from 2.0 through
2.4 and never had a problem (I'm no longer at that company, but they
still have home directories on Gluster). Might be worth another look.
On 02/03/2015 10:45 AM, David F. Robinson wrote:
Cancel this issue. I found the
I rsync'd 20 TB over to my gluster system and noticed that I had some
directories missing even though the rsync completed normally.
The rsync logs showed that the missing files were transferred.
I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*' and the
files were on the bricks.
Hi KB,
We're interested in the offer to have our CI stuff (Jenkins jobs)
run on the CentOS CI infrastructure.
We have some initial questions, prior to testing things technically.
* Is there a document that describes the CentOS Jenkins setup
(like provisioning of the slaves)?
* Is it
- Original Message -
+1. We just need to make sure CentOS would give the required access to
gluster community members even if they are not doing anything in the CentOS
community.
Good points guys. We'll definitely need to find out prior to even doing
any technical testing.
I'll ask the
Sorry. Thought about this a little more. I should have been clearer.
The files were on both bricks of the replica, not just one side. So,
both bricks had to have been up... The files/directories just don't show
up on the mount.
I was reading and saw a related bug
It sounds to me like the files were only copied to one replica, weren't
there for the initial ls which triggered a self heal, and
were there for the last ls because they were healed. Is there any chance
that one of the replicas was down during the rsync? It could be that you
lost
Like these?
data-brick02a-homegfs.log:[2015-02-03 19:09:34.568842] I
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting
connection from
gfs02a.corvidtec.com-18563-2015/02/03-19:07:58:519134-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:09:41.286551] I