[Gluster-devel] Ideas list for GSOC

2015-02-03 Thread Kaushal M
Hi list,

Applying as an organization for GSOC requires us to prepare a publicly
accessible list of project ideas.

Anyone within the community who has an idea for a project and is willing
to mentor students, please add the idea at [1]. (Add ideas even if you
can't/won't mentor as well.) There is already an existing list of ideas on
the page, most of them without any mentors. Anyone wishing to just be a
mentor can pick one or a few from this list and add themselves as mentors.

I'm hoping we can build a sizable list by the time the organization
registration opens next week.

Thanks.

~kaushal

[1]: http://www.gluster.org/community/documentation/index.php/Projects
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] failed heal

2015-02-03 Thread Pranith Kumar Karampuri


On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal.  I deleted the 
files from all of the bricks 
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full 
heal using 'gluster volume heal homegfs full'.  Even after the full 
heal, the entries below still show up.

How do I clear these?
3.6.1 had an issue where files undergoing I/O would also be shown in the
output of 'gluster volume heal <volname> info'; we addressed that in
3.6.2. Is this output from 3.6.1 by any chance?
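
For example (assuming the standard gluster CLI is available on the servers),
something like this would confirm the version in use and show whether the
entries clear once I/O settles:

# Check the installed GlusterFS version (first line of output)
gluster --version | head -1

# Re-run heal info once no I/O is in flight against those files
gluster volume heal homegfs info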


Pranith

[root@gfs01a ~]# gluster volume heal homegfs info
Gathering list of entries to be healed on volume homegfs has been 
successful

Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 10
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
gfid:a6fc9011-74ad-4128-a232-4ccd41215ac8
gfid:bc17fa79-c1fd-483d-82b1-2c0d3564ddc5
gfid:ec804b5c-8bfc-4e7b-91e3-aded7952e609
gfid:ba62e340-4fad-477c-b450-704133577cbb
gfid:4843aa40-8361-4a97-88d5-d37fc28e04c0
gfid:c90a8f1c-c49e-4476-8a50-2bfb0a89323c
gfid:090042df-855a-4f5d-8929-c58feec10e33
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 2
gfid:f96b4ddf-8a75-4abb-a640-15dbe41fdafa
/hpc_shared/motorsports/gmics/Raven/p3/70_rke
Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 7
gfid:5d08fe1d-17b3-4a76-ab43-c708e346162f
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
gfid:427d3738-3a41-4e51-ba2b-f0ba7254d013
gfid:8ad88a4d-8d5e-408f-a1de-36116cf6d5c1
gfid:0e034160-cd50-4108-956d-e45858f27feb
Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com mailto:david.robin...@corvidtec.com
http://www.corvidtechnologies.com


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] RFC: d_off encoding at client/protocol layer

2015-02-03 Thread Shyam

On 02/02/2015 10:29 PM, Krishnan Parthasarathi wrote:

IOW, given a d_off and a common routine, pass the d_off with this (i.e.
the current xlator) to get the subvol that the d_off belongs to. This
routine would decode the d_off for the leaf ID as encoded in the
client/protocol layer, match it to a subvol relative to this xlator, and
return that for further processing. (It may consult the graph, or store
the range of IDs that any subvol has w.r.t. client/protocol, and deliver
the result appropriately.)


What happens to this scheme when bricks are repeatedly added/removed?


The result should be no different from what the current scheme in the
code does, i.e. encode the subvol ID based on the children of DHT (via
dht_subvol_cnt), which indirectly means the order of children seen in
the graph.


I would further state that this change does not remove that limitation;
rather, it just moves the encoding to a single point.
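
As a purely illustrative sketch of the idea (the bit split below is
hypothetical, not the actual layout used in the code): part of the d_off
carries the leaf/subvol ID and the rest carries the brick-local offset,
e.g. in shell arithmetic:

# Illustrative only -- assumes the top 8 bits of a 64-bit d_off hold the leaf ID
LEAF_BITS=8; TOTAL_BITS=64
leaf_id=3          # ID assigned to a leaf (client/protocol) xlator
brick_doff=123456  # d_off as returned by the brick
d_off=$(( (leaf_id << (TOTAL_BITS - LEAF_BITS)) | brick_doff ))
decoded=$(( d_off >> (TOTAL_BITS - LEAF_BITS) ))
echo "combined d_off=$d_off decoded leaf=$decoded"

The common routine would do the reverse split and map the decoded leaf ID
back to a subvolume of the current xlator.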




IIUC, the leaf xlator encoding proposed should be performed during graph
initialization and would remain static for the lifetime of the graph.
When bricks are added or removed, it would trigger a graph change, and
the new encoding would be computed. Further, it is guaranteed that
ongoing (readdir) FOPs would complete in the same (old) graph and therefore
the encoding should be unaffected by bricks being added/removed.



I would differ with the reasoning here: NFS clients store the d_off
values returned on directory scans, so it is possible that they come
back with those d_off values after a graph switch; in that case it would
be a fresh opendir followed by a seek to the d_off provided (with all
the subvol ID decoding, etc.).


So in short, we are not immune to this.

Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity Scan subscription confirmation

2015-02-03 Thread scan-admin
Hi gluster-devel@gluster.org,


  Your email was added by kshlms...@gmail.com to receive software defect notifications from
  Coverity Scan
  for the gluster/glusterfs project.



  To confirm and activate these notifications,
  click here.



  If you do not wish to receive these emails, you may safely ignore this message.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Improving Geo-replication Status and Checkpoints

2015-02-03 Thread Aravinda

Today we discussed the Geo-rep Status design; here is a summary of the discussion.

- No use case for the Deletes Pending column; should we retain it?
- No separate column for Active/Passive. A worker can be Active/Passive
only when it is Stable (it can't be Faulty and Active).

- Rename the Not Started status to Created.
- Checkpoint columns will be retained in the Status output until we
support multiple checkpoints: three columns (Completed, Checkpoint Time
and Completion Time) instead of a single column.
- There is still confusion about Files Pending and Files Synced and what
numbers they should show; geo-rep can't map them to an exact count on disk.
  Venky suggested showing Entry, Data and Metadata pending as three
columns (and removing Files Pending and Files Synced).

- Rename Files Skipped to Failures.

Status output proposed:
---
MASTER NODE - Master node hostname/IP
MASTER VOL - Master volume name
MASTER BRICK - Master brick path
SLAVE USER - Slave user as which the geo-rep session is established
SLAVE - Slave host and volume name (HOST::VOL format)
STATUS - Created/Initializing../Started/Active/Passive/Stopped/Faulty
LAST SYNCED - Last synced time (based on the stime xattr)
CRAWL STATUS - Hybrid/History/Changelog
CHECKPOINT STATUS - Yes/No/N/A
CHECKPOINT TIME - Checkpoint set time
CHECKPOINT COMPLETED - Checkpoint completion time
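
For illustration only (neither the formatting nor the exact values are
final), a row with the proposed columns might render something like this
for a hypothetical master volume 'mastervol' and slave 'slavehost::slavevol':

gluster volume geo-replication mastervol slavehost::slavevol status detail

MASTER NODE  MASTER VOL  MASTER BRICK      SLAVE USER  SLAVE                STATUS  LAST SYNCED          CRAWL STATUS
node1        mastervol   /bricks/b1/brick  root        slavehost::slavevol  Active  2015-02-03 18:05:32  Changelog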

Not yet decided
---
FILES SYNCD - Number of Files Synced
FILES PENDING - Number of Files Pending
DELETES PENDING - Number of Deletes Pending
FILES SKIPPED - Number of Files Skipped
ENTRIES - Create/Delete/MKDIR/RENAME etc.
DATA - Data operations
METADATA - SETATTR, SETXATTR etc.

Let me know your suggestions.

--
regards
Aravinda


On 02/02/2015 04:51 PM, Aravinda wrote:

Thanks Sahina, replied inline.

--
regards
Aravinda

On 02/02/2015 12:55 PM, Sahina Bose wrote:


On 01/28/2015 04:07 PM, Aravinda wrote:

Background
--
We have `status` and `status detail` commands for GlusterFS
geo-replication. This mail is to fix the existing issues in these
command outputs. Let us know if we need any other columns that would
help users get a meaningful status.


Existing output
---
Status command output
MASTER NODE - Master node hostname/IP
MASTER VOL - Master volume name
MASTER BRICK - Master brick path
SLAVE - Slave host and volume name (HOST::VOL format)
STATUS - Stable/Faulty/Active/Passive/Stopped/Not Started
CHECKPOINT STATUS - Details about Checkpoint completion
CRAWL STATUS - Hybrid/History/Changelog

Status detail -
MASTER NODE - Master node hostname/IP
MASTER VOL - Master volume name
MASTER BRICK - Master brick path
SLAVE - Slave host and volume name (HOST::VOL format)
STATUS - Stable/Faulty/Active/Passive/Stopped/Not Started
CHECKPOINT STATUS - Details about Checkpoint completion
CRAWL STATUS - Hybrid/History/Changelog
FILES SYNCD - Number of Files Synced
FILES PENDING - Number of Files Pending
BYTES PENDING - Bytes pending
DELETES PENDING - Number of Deletes Pending
FILES SKIPPED - Number of Files skipped


Issues with existing status and status detail:
--

1. Active/Passive and Stable/Faulty status are mixed up - the same column
is used to show both the active/passive state and the Stable/Faulty
state. If an Active node goes faulty, it is difficult to tell from the
status whether the Active node or the Passive one is faulty.
2. No info about the last synced time; unless we set a checkpoint it is
difficult to know up to what time data has been synced to the slave. For
example, if an admin wants to know whether all the files created 15
minutes ago have been synced, it is not possible without setting a checkpoint.

3. Wrong values in metrics.
4. When multiple bricks are present on the same node, the status shows
Faulty when any one of the workers on that node is faulty.


Changes:

1. Active nodes will be prefixed with * to identify them as active.
(In XML output an active tag will be introduced with values 0 or 1.)
2. A new column will show the last synced time, which minimizes the
need for the checkpoint feature. Checkpoint status will be shown only in
status detail.
3. Checkpoint Status is removed; a separate checkpoint command will be
added to the gluster CLI. (We can introduce a multiple-checkpoint
feature with this change.)
4. Status values will be Not
Started/Initializing/Started/Faulty/Stopped. Stable is changed to
Started.
5. A Slave User column will be introduced to show the user as which the
geo-rep session is established. (Useful for non-root geo-rep.)
6. The Bytes Pending column will be removed. It is not possible to
identify the delta without simulating the sync. For example, we use
rsync to sync data from master to slave; if we need to know how much
data is to be transferred, we have to run the rsync command with the
--dry-run flag before running the actual command. With tar-ssh we have
to stat all the files identified for sync to calculate the total bytes
to be synced. Both

Re: [Gluster-devel] gluster 3.6.2 ls issues

2015-02-03 Thread David F. Robinson

Cancel this issue.  I found the problem.  Explanation below...

We use Active Directory to manage our user accounts; however, sssd
doesn't seem to play well with gluster.  When I turn it on, the cpu load
shoots up to 80-100% and stays there (previously submitted bug report).
So, what I did on my gluster machines to keep the uid/gid mappings
updated (required due to server.manage-gids=on) is write a script that
starts sssd, grabs all of the groups/users from the server, writes out
the /etc/group and /etc/passwd files, and then shuts down sssd.  I
didn't realize that sssd uses the locally cached file.  My script was
running faster than sssd was updating the cache file, so this particular
user wasn't in the SBIR group on all of the machines.  He was in that
group on gfs01a, but not on gfs01b (replica pair) or gfs02a/02b.  I
guess this gave him enough permission to cd into the directory, but for
some strange reason he couldn't do an ls and have the directory name
show up.


The only reason I do any of this is that I had to use
server.manage-gids to overcome the 32-group limitation.  This requires
that my storage system have all of the user accounts and groups.  The
preferred option would be to simply use sssd on my storage systems, but
it doesn't seem to play well with gluster.
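
A minimal sketch of the kind of refresh script described above
(hypothetical; assumes sssd is run as a system service and that
enumeration of AD users/groups is enabled -- the real script may differ):

# Pull AD users/groups via sssd into flat files, then stop sssd again
service sssd start
sleep 60                          # give sssd time to enumerate the AD entries
getent passwd > /tmp/passwd.new   # merged local + AD users
getent group  > /tmp/group.new    # merged local + AD groups
# only replace the live files if the dumps are non-empty
[ -s /tmp/passwd.new ] && cp /tmp/passwd.new /etc/passwd
[ -s /tmp/group.new  ] && cp /tmp/group.new  /etc/group
service sssd stop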


David


-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: Gluster Devel gluster-devel@gluster.org; 
gluster-us...@gluster.org gluster-us...@gluster.org

Sent: 2/3/2015 12:56:40 PM
Subject: gluster 3.6.2 ls issues

On my gluster filesystem mount, I have a user for whom an ls does not
show all of the directories.  Note that the A15-029 directory doesn't
show up.  However, as kbutz I can cd into the directory.


As root (also tested as several other users), I get the following from 
an ls -al

[root@sb1 2015.1]# ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 .
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ..
drwxrws--x  5 cczech    sbir   606 Jan 30 12:58 A15-007
drwxrws--x  5 kbutz sbir   291 Feb  3 12:11 A15-029
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058
drwxrws--x  3 tanderson sbir   216 Jan 28 09:43 AF151-072
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102
drwxrws--x  4 aaronward sbir   493 Feb  3 09:58 AF151-114
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174
drwxrws--x  3 dstowe    sbir   192 Jan 27 12:25 AF15-AT28
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA
As user kbutz, I get the following:
sb1:sbir/2015.1 ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 ./
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ../
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063/
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088/
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058/
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102/
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174/
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA/
Note that I can still cd into the non-listed directory as kbutz:

[kbutz@sb1 ~]$ cd /homegfs/documentation/programs/sbir/2015.1
A15-063/  A15-088/  AF151-058/  AF151-102/  AF151-174/  NASA/

sb1:sbir/2015.1 cd A15-029
A15-029_proposal_draft_rev1.docx*  CB_work/  gun_work/  Refs/

David

===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster 3.6.2 ls issues

2015-02-03 Thread Joe Julian

3.4, not 2.4... Need more coffee!!!

On 02/03/2015 11:12 AM, Joe Julian wrote:
Odd, I was using sssd with home directories on gluster from 2.0 
through 2.4 and never had a problem (I'm no longer at that company, 
but they still have home directories on Gluster). Might be worth 
another look.


On 02/03/2015 10:45 AM, David F. Robinson wrote:

Cancel this issue.  I found the problem.  Explanation below...
We use Active Directory to manage our user accounts; however, sssd
doesn't seem to play well with gluster. When I turn it on, the
cpu load shoots up to 80-100% and stays there (previously
submitted bug report).  So, what I did on my gluster machines to keep
the uid/gid mappings updated (required due to server.manage-gids=on) is
write a script that starts sssd, grabs all of the groups/users from the
server, writes out the /etc/group and /etc/passwd files, and then
shuts down sssd.  I didn't realize that sssd uses the locally cached
file.  My script was running faster than sssd was updating the cache
file, so this particular user wasn't in the SBIR group on all of the
machines.  He was in that group on gfs01a, but not on gfs01b (replica
pair) or gfs02a/02b.  I guess this gave him enough permission to cd
into the directory, but for some strange reason he couldn't do an
ls and have the directory name show up.
The only reason I do any of this is because I had to use 
server.manage-gids to overcome the 32-group limitation.  This 
requires that my storage system have all of the user accounts and 
groups.  The preferred option would be to simply use sssd on my 
storage systems, but it doesn't seem to play well with gluster.

David
-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com 
mailto:david.robin...@corvidtec.com
To: Gluster Devel gluster-devel@gluster.org 
mailto:gluster-devel@gluster.org; gluster-us...@gluster.org 
gluster-us...@gluster.org mailto:gluster-us...@gluster.org

Sent: 2/3/2015 12:56:40 PM
Subject: gluster 3.6.2 ls issues
On my gluster filesystem mount, I have a user for whom an ls does not
show all of the directories.  Note that the A15-029 directory doesn't
show up.  However, as kbutz I can cd into the directory.
As root (also tested as several other users), I get the following
from an ls -al:

[root@sb1 2015.1]# ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 .
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ..
drwxrws--x  5 cczech    sbir   606 Jan 30 12:58 A15-007
drwxrws--x  5 kbutz     sbir   291 Feb  3 12:11 A15-029
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058
drwxrws--x  3 tanderson sbir   216 Jan 28 09:43 AF151-072
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102
drwxrws--x  4 aaronward sbir   493 Feb  3 09:58 AF151-114
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174
drwxrws--x  3 dstowe    sbir   192 Jan 27 12:25 AF15-AT28
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA
As user kbutz, I get the following:
sb1:sbir/2015.1 ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 ./
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ../
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063/
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088/
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058/
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102/
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174/
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA/
Note that I can still cd into the non-listed directory as kbutz:
[kbutz@sb1 ~]$ cd /homegfs/documentation/programs/sbir/2015.1
A15-063/  A15-088/  AF151-058/  AF151-102/  AF151-174/  NASA/

sb1:sbir/2015.1 cd A15-029
A15-029_proposal_draft_rev1.docx*  CB_work/  gun_work/  Refs/
David
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com mailto:david.robin...@corvidtec.com
http://www.corvidtechnologies.com http://www.corvidtechnologies.com/









___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster 3.6.2 ls issues

2015-02-03 Thread Joe Julian
Odd, I was using sssd with home directories on gluster from 2.0 through 
2.4 and never had a problem (I'm no longer at that company, but they 
still have home directories on Gluster). Might be worth another look.


On 02/03/2015 10:45 AM, David F. Robinson wrote:

Cancel this issue.  I found the problem.  Explanation below...
We use Active Directory to manage our user accounts; however, sssd
doesn't seem to play well with gluster.  When I turn it on, the
cpu load shoots up to 80-100% and stays there (previously
submitted bug report).  So, what I did on my gluster machines to keep
the uid/gid mappings updated (required due to server.manage-gids=on) is
write a script that starts sssd, grabs all of the groups/users from the
server, writes out the /etc/group and /etc/passwd files, and then shuts
down sssd.  I didn't realize that sssd uses the locally cached file.
My script was running faster than sssd was updating the cache file, so
this particular user wasn't in the SBIR group on all of the machines.
He was in that group on gfs01a, but not on gfs01b (replica pair) or
gfs02a/02b.  I guess this gave him enough permission to cd into the
directory, but for some strange reason he couldn't do an ls and have
the directory name show up.
The only reason I do any of this is because I had to use 
server.manage-gids to overcome the 32-group limitation.  This requires 
that my storage system have all of the user accounts and groups.  The 
preferred option would be to simply use sssd on my storage systems, 
but it doesn't seem to play well with gluster.

David
-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com 
mailto:david.robin...@corvidtec.com
To: Gluster Devel gluster-devel@gluster.org 
mailto:gluster-devel@gluster.org; gluster-us...@gluster.org 
gluster-us...@gluster.org mailto:gluster-us...@gluster.org

Sent: 2/3/2015 12:56:40 PM
Subject: gluster 3.6.2 ls issues
On my gluster filesystem mount, I have a user for whom an ls does not
show all of the directories.  Note that the A15-029 directory doesn't
show up.  However, as kbutz I can cd into the directory.
As root (also tested as several other users), I get the following
from an ls -al:

[root@sb1 2015.1]# ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 .
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ..
drwxrws--x  5 cczech    sbir   606 Jan 30 12:58 A15-007
drwxrws--x  5 kbutz     sbir   291 Feb  3 12:11 A15-029
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058
drwxrws--x  3 tanderson sbir   216 Jan 28 09:43 AF151-072
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102
drwxrws--x  4 aaronward sbir   493 Feb  3 09:58 AF151-114
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174
drwxrws--x  3 dstowe    sbir   192 Jan 27 12:25 AF15-AT28
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA
As user kbutz, I get the following:
sb1:sbir/2015.1 ls -al
total 16
drwxrws--x 13 streadway sbir   868 Feb  3 12:48 ./
drwxrws--- 46 root  sbir 16384 Feb  3 10:50 ../
drwxrws--x  3 randerson sbir   219 Feb  3 11:52 A15-063/
drwxrws--x  4 abirnbaum sbir   223 Feb  3 10:14 A15-088/
drwxrws--x  2 anelson   sbir   270 Jan 27 14:30 AF151-058/
drwxrws--x  3 streadway sbir   162 Jan 21 13:28 AF151-102/
drwxrws--x  3 streadway sbir   162 Feb  3 12:07 AF151-174/
drwxrws--x  3 kboyett   sbir   199 Jan 28 09:43 NASA/
Note that I can still cd into the non-listed directory as kbutz:
[kbutz@sb1 ~]$ cd /homegfs/documentation/programs/sbir/2015.1
A15-063/  A15-088/  AF151-058/  AF151-102/  AF151-174/  NASA/

sb1:sbir/2015.1 cd A15-029
A15-029_proposal_draft_rev1.docx*  CB_work/  gun_work/  Refs/
David
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com mailto:david.robin...@corvidtec.com
http://www.corvidtechnologies.com http://www.corvidtechnologies.com/





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] missing files

2015-02-03 Thread David F. Robinson
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*', and
the files were on the bricks.  After I did this 'ls', the files then
showed up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls make them show up on the FUSE mount?
3) How can I prevent this from happening again?

Note, I also mounted the gluster volume using NFS and saw the same 
behavior.  The files/directories were not shown until I did the ls on 
the bricks.


David



===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster testing on CentOS CI

2015-02-03 Thread Justin Clift
Hi KB,

We're interested in the offer to have our CI stuff (Jenkins jobs)
run on the CentOS CI infrastructure.

We have some initial questions, prior to testing things technically.

 * Is there a document that describes the CentOS Jenkins setup
   (like provisioning of the slaves)?

 * Is it possible for Gluster Community members to log into a Jenkins
   slave and troubleshoot specific failures?

   e.g. if something fails a test, can one of the Community members log
   into the slave node to investigate (with root perms)?

 * We also want to be sure that our Community members (developers) will
   have access in the Jenkins interface to do the work for our jobs.

   e.g. creating new jobs and editing existing ones (so our tests improve
   over time), rerunning failed jobs, and so on.  We don't need full
   admin access, but we do need the ability to manage our own jobs.

Regards and best wishes,

Justin Clift

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Possibility to move our Jenkins setup to the CentOS Jenkins infra

2015-02-03 Thread Justin Clift
- Original Message -
 +1. We just need to make sure CentOS would give the required access to
 Gluster community members even if they are not doing anything in the
 CentOS community.

Good points guys.  We'll definitely need to find out prior to even doing
any technical testing.

I'll ask the CentOS guys now, and we can take it from there. :)

+ Justin

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] missing files

2015-02-03 Thread David F. Robinson
Sorry.  Thought about this a little more. I should have been clearer.  
The files were on both bricks of the replica, not just one side.  So, 
both bricks had to have been up... The files/directories just don't show 
up on the mount.


I was reading and saw a related bug
(https://bugzilla.redhat.com/show_bug.cgi?id=1159484), where it was
suggested to run:


find <mount> -d -exec getfattr -h -n trusted.ec.heal {} \;


I get a bunch of errors for operation not supported:

[root@gfs02a homegfs]# find wks_backup -d -exec getfattr -h -n 
trusted.ec.heal {} \;
find: warning: the -d option is deprecated; please use -depth instead, 
because the latter is a POSIX-compliant feature.

wks_backup/homer_backup/backup: trusted.ec.heal: Operation not supported
wks_backup/homer_backup/logs/2014_05_20.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_21.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_18.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_19.log: trusted.ec.heal: Operation 
not supported
wks_backup/homer_backup/logs/2014_05_22.log: trusted.ec.heal: Operation 
not supported

wks_backup/homer_backup/logs: trusted.ec.heal: Operation not supported
wks_backup/homer_backup: trusted.ec.heal: Operation not supported
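
(For reference, the same query in the POSIX-compliant form the warning suggests:)

find wks_backup -depth -exec getfattr -h -n trusted.ec.heal {} \;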

-- Original Message --
From: Benjamin Turner bennytu...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Devel gluster-devel@gluster.org; 
gluster-us...@gluster.org gluster-us...@gluster.org

Sent: 2/3/2015 7:12:34 PM
Subject: Re: [Gluster-devel] missing files

It sounds to me like the files were only copied to one replica, weren't
there for the initial ls which triggered a self heal, and were there for
the last ls because they were healed.  Is there any chance that one of
the replicas was down during the rsync?  It could be that you lost a
brick during the copy or something like that.  To confirm, I would look
for disconnects in the brick logs as well as check glustershd.log to
verify the missing files were actually healed.


-b

On Tue, Feb 3, 2015 at 5:37 PM, David F. Robinson 
david.robin...@corvidtec.com wrote:
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*', and
the files were on the bricks.  After I did this 'ls', the files then
showed up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls make them show up on the FUSE mount?
3) How can I prevent this from happening again?

Note, I also mounted the gluster volume using NFS and saw the same 
behavior.  The files/directories were not shown until I did the ls 
on the bricks.


David



===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] missing files

2015-02-03 Thread Benjamin Turner
It sounds to me like the files were only copied to one replica, weren't
there for the initial ls which triggered a self heal, and were there for
the last ls because they were healed.  Is there any chance that one of
the replicas was down during the rsync?  It could be that you lost a
brick during the copy or something like that.  To confirm, I would look
for disconnects in the brick logs as well as check glustershd.log to
verify the missing files were actually healed.
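
For example (assuming the default log locations under /var/log/glusterfs;
adjust the paths if your setup differs):

# Look for client disconnects in the brick logs on each server
grep -i "disconnecting connection" /var/log/glusterfs/bricks/*.log

# Check the self-heal daemon log for heal activity on the affected files
grep -i selfheal /var/log/glusterfs/glustershd.log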

-b

On Tue, Feb 3, 2015 at 5:37 PM, David F. Robinson 
david.robin...@corvidtec.com wrote:

  I rsync'd 20-TB over to my gluster system and noticed that I had some
 directories missing even though the rsync completed normally.
 The rsync logs showed that the missing files were transferred.

 I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*', and
 the files were on the bricks.  After I did this 'ls', the files then
 showed up on the FUSE mounts.

 1) Why are the files hidden on the fuse mount?
 2) Why does the ls make them show up on the FUSE mount?
 3) How can I prevent this from happening again?

 Note, I also mounted the gluster volume using NFS and saw the same
 behavior.  The files/directories were not shown until I did the ls on the
 bricks.

 David



  ===
 David F. Robinson, Ph.D.
 President - Corvid Technologies
 704.799.6944 x101 [office]
 704.252.1310 [cell]
 704.799.7974 [fax]
 david.robin...@corvidtec.com
 http://www.corvidtechnologies.com





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] missing files

2015-02-03 Thread David F. Robinson

Like these?

data-brick02a-homegfs.log:[2015-02-03 19:09:34.568842] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs02a.corvidtec.com-18563-2015/02/03-19:07:58:519134-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:09:41.286551] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-12804-2015/02/03-19:09:38:497808-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:16:35.906412] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs02b.corvidtec.com-27190-2015/02/03-19:15:53:458467-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 19:51:22.761293] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-25926-2015/02/03-19:51:02:89070-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 20:54:02.772180] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01b.corvidtec.com-4175-2015/02/02-16:44:31:179119-homegfs-client-2-0-1
data-brick02a-homegfs.log:[2015-02-03 22:44:47.458905] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-29467-2015/02/03-22:44:05:838129-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:47:42.830866] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30069-2015/02/03-22:47:37:209436-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:48:26.785931] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30256-2015/02/03-22:47:55:203659-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:53:25.530836] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30658-2015/02/03-22:53:21:627538-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:56:14.033823] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-30893-2015/02/03-22:56:01:450507-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:56:55.622800] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31080-2015/02/03-22:56:32:665370-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 22:59:11.445742] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31383-2015/02/03-22:58:45:190874-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:06:26.482709] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-31720-2015/02/03-23:06:11:340012-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:10:54.807725] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32083-2015/02/03-23:10:22:131678-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:13:35.545513] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32284-2015/02/03-23:13:21:26552-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-03 23:14:19.065271] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-32471-2015/02/03-23:13:48:221126-homegfs-client-2-0-0
data-brick02a-homegfs.log:[2015-02-04 00:18:20.261428] I 
[server.c:518:server_rpc_notify] 0-homegfs-server: disconnecting 
connection from 
gfs01a.corvidtec.com-1369-2015/02/04-00:16:53:613570-homegfs-client-2-0-0


-- Original Message --
From: Benjamin Turner bennytu...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Devel gluster-devel@gluster.org; 
gluster-us...@gluster.org gluster-us...@gluster.org

Sent: 2/3/2015 7:12:34 PM
Subject: Re: [Gluster-devel] missing files

It sounds to me like the files were only copied to one replica, weren't
there for the initial ls which triggered a self heal, and were there for
the last ls because they were healed.  Is there any chance that one of
the replicas was down during the rsync?  It could be that you lost a
brick during the copy or something like that.  To confirm, I would look
for disconnects in the brick logs as well as check glustershd.log to
verify the missing files were actually healed.


-b

On Tue, Feb 3, 2015 at 5:37 PM, David F. Robinson 
david.robin...@corvidtec.com wrote:
I rsync'd 20-TB over to my gluster system and noticed that I had some 
directories missing even though the rsync completed normally.

The rsync logs showed that the missing files were transferred.

I went to the bricks and did an 'ls -al /data/brick*/homegfs/dir/*', and
the files were on the bricks.  After I did this 'ls', the files then
showed up on the FUSE mounts.


1) Why are the files hidden on the fuse mount?
2) Why does the ls