Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-02 Thread Krutika Dhananjay
Glad the fixes worked for you. Thanks for that update!

-Krutika

On Tue, Aug 2, 2016 at 7:31 PM, David Gossage 
wrote:

> So far both dd commands that failed previously worked fine on 3.7.14
>
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it would error out.  I am now creating a test
> VM with default disk caching settings (pretty sure oVirt is defaulting to
> none rather than writeback/through).  So far all shards are being created
> properly.
>
> Load is skyrocketing, but I have all 3 gluster bricks running off 1 hard
> drive on the test box, so I would expect horrible io/load issues with that.
>
> Very promising so far.  Thank you developers for your help in working
> through this.
>
> Once I have the VM installed and running, I will test for a few days and make
> sure it doesn't have any freeze or locking issues, then will roll this out
> to the working cluster.
>
>
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Wed, Jul 27, 2016 at 8:37 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay 
>> wrote:
>>
>>> Yes please, could you file a bug against glusterfs for this issue?
>>>
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1360785
>>
>>
>>>
>>>
>>> -Krutika
>>>
>>> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 Has a bug report been filed for this issue or should I create one
 with the logs and results provided so far?

 *David Gossage*
 *Carousel Checks Inc. | System Administrator*
 *Office* 708.613.2284

 On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur 
> wrote:
>
>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
>> samp...@neutraali.net> wrote:
>> > Here is a quick way how to test this:
>> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
>> dataset. gluster-test1 is server and gluster-test2 is client mounting 
>> with
>> FUSE.
>> >
>> > Writing file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > dd: failed to open ‘file’: Invalid argument
>> >
>> > Enable network.remote-dio on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> network.remote-dio enable
>> > volume set: success
>> >
>> > Writing small file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > 1+0 records in
>> > 1+0 records out
>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>> >
>> > Writing bigger file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=100 bs=1M
>> > 100+0 records in
>> > 100+0 records out
>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>> >
>> > Enable Sharding on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> features.shard enable
>> > volume set: success
>> >
>> > Writing small file  with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=1 bs=1M
>> > 1+0 records in
>> > 1+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>> >
>> > Writing bigger file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=100 bs=1M
>> > dd: error writing ‘file3’: Operation not permitted
>> > dd: closing output file ‘file3’: Operation not permitted
>> >
>>
>>
>> Thank you for these tests! would it be possible to share the brick and
>> client logs?
>>
>
> Not sure if his tests are the same as my setup, but here is what I end up
> with:
>
> Volume Name: glustershard
> Type: Replicate
> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.71.10:/gluster1/shard1/1
> Brick2: 192.168.71.11:/gluster1/shard2/1
> Brick3: 192.168.71.12:/gluster1/shard3/1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.quick-read: off
> cluster.self-heal-window-size: 1024
> 

Re: [Gluster-users] Failed file system

2016-08-02 Thread Leno Vo
if you don't want any downtime (in the case that your node 2 really dies), you
have to create a new gluster SAN (if you have the resources of course, with 3
nodes as much as possible this time), and then just migrate your VMs (or files);
therefore no downtime, but you have to cross your fingers that the only remaining
node will not die too...  also, without sharding the VM migration, especially an
RDP one, will be slow for users to access until it has migrated.
you have to start testing sharding, it's fast and cool...
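
For anyone who wants to try that, here is a minimal sketch of turning sharding on
for an existing volume; the volume name "vmstore" is only a placeholder, and the
block size mirrors the 64MB value used elsewhere in these threads. Note that
sharding only applies to files created after it is enabled:

# enable sharding on an existing volume (volume name is a placeholder)
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 64MB
# confirm the options took effect
gluster volume info vmstore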

 

On Tuesday, August 2, 2016 2:51 PM, Andres E. Moya 
 wrote:
 

 couldn't we just add a new server by
gluster peer probe
gluster volume add-brick replica 3 (will this command succeed with 1 current failed brick?)
let it heal, then
gluster volume remove-brick

From: "Leno Vo" 
To: "Andres E. Moya" , "gluster-users" 

Sent: Tuesday, August 2, 2016 1:26:42 PM
Subject: Re: [Gluster-users] Failed file system

you need to have downtime to recreate the second node. Two nodes is actually
not good for production, and you should have put RAID 1 or RAID 5 under your
gluster storage. When you recreate the second node you might try running only the
VMs that need to be up and keep the rest of the VMs down, but stop all backups and,
if you have replication, stop it too.  if you have a 1G nic, 2 CPUs and less than
8G RAM, then i suggest turning off all the VMs during recreation of the second node.
someone said if you have sharding with 3.7.x, maybe some VIP VMs can stay up...
if it is just a filesystem, then just turn off the backup service until you
recreate the second node. depending on your resources and how big your storage
is, it might take hours to recreate it, or even days...
here's my process for recreating the second or third node (copied and modified
from the net):
# make sure the partition is already added
This procedure is for replacing a failed server, IF your newly installed server
has the same hostname as the failed one:
(If your new server will have a different hostname, see this article instead.)
For the purposes of this example, the server that crashed will be server3 and the
other servers will be server1 and server2.
On both server1 and server2, make sure hostname server3 resolves to the correct
IP address of the new replacement server.
# On either server1 or server2, do
grep server3 /var/lib/glusterd/peers/*
This will return a uuid followed by ":hostname1=server3"
# On server3, make sure glusterd is stopped, then do
echo UUID={uuid from previous step} > /var/lib/glusterd/glusterd.info
# actual testing below,
[root@node1 ~]# cat /var/lib/glusterd/glusterd.info
UUID=4b9d153c-5958-4dbe-8f91-7b5002882aac
operating-version=30710
# the second line is new.  maybe not needed...
On server3:
make sure that all brick directories are created/mounted
start glusterd
peer probe one of the existing servers
# restart glusterd, check that the full peer list has been populated using
gluster peer status
(if peers are missing, probe them explicitly, then restart glusterd again)
# check that the full volume configuration has been populated using
gluster volume info
# if volume configuration is missing, do the following on the other node
gluster volume sync "replace-node" all
# on the node to be replaced
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/v1/info | cut -d= -f2 | sed 's/-//g') /gfs/b1/v1
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/v2/info | cut -d= -f2 | sed 's/-//g') /gfs/b2/v2
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/config/info | cut -d= -f2 | sed 's/-//g') /gfs/b1/config/c1
mount -t glusterfs localhost:config /data/data1
# install ctdb if not yet installed and put it back online; use the steps on
# creating the ctdb config, but use your common sense not to delete or modify
# the current one.
gluster vol heal v1 full
gluster vol heal v2 full
gluster vol heal config full
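
A small addition that is not part of the recipe above: once those full heals are
kicked off, progress can be watched per volume, for example:

# entries still pending heal, listed per brick
gluster vol heal v1 info
# the same information as a per-brick count
gluster vol heal v1 statistics heal-count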
 

On Tuesday, August 2, 2016 11:57 AM, Andres E. Moya 
 wrote:
 

 Hi, we have a 2 node replica setup
on 1 of the nodes the file system that had the brick on it failed, not the OS
can we re-create a file system and mount the bricks on the same mount point?

what will happen: will the data from the other node sync over, or will the
failed node wipe out the data on the other node?

what would be the correct process?

Thanks in advance for any help
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


   


  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Can't delete files from gluster volume in Windows

2016-08-02 Thread David Cowe
Hello,

Glusterfs 3.7.10

We have a glusterfs volume made up of 1 brick on each of our 4 nodes.

The volume is setup using tiering. The hot tier has 2 bricks in a replicate
and the cold tier has 2 bricks in a replicate.

 We use samba (4.2) and ctdb to mount the volume to our windows clients via
cifs.
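
For context, if the share goes through Samba's vfs_glusterfs module (rather than
re-exporting a FUSE mount), the share definition typically looks roughly like the
sketch below; the share and volume names here are only placeholders:

[gluster-share]
    vfs objects = glusterfs
    glusterfs:volume = myvol
    glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
    glusterfs:loglevel = 7
    path = /
    kernel share modes = no
    read only = no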

We cannot delete a file from the cifs mounted volume on Windows. The file
deletes ok on the Windows side without error but it does not delete from
the glusterfs volume on the storage nodes! When refreshing the Windows
cifs mounted volume (using f5), the file reappears.

 We can install the gluster client on a Linux machine and mount the gluster
volume and delete a file without any of the above issues of the file
reappearing.

We can also do this on Linux mounting it via nfs.


Our problem is to do with Gluster and Samba. Any thoughts?


Regards,

David
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fuse memleaks, all versions

2016-08-02 Thread Yannick Perret

So here are the dumps, gzip'ed.

What I did:
1. mounting the volume, removing all its content, umounting it
2. mounting the volume
3. performing a cp -Rp /usr/* /root/MNT
4. performing a rm -rf /root/MNT/*
5. taking a dump (glusterdump.p1.dump)
6. re-doing 3, 4 and 5 (glusterdump.p2.dump)
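
(Steps 5 and 6 amount to signalling the FUSE client; a minimal sketch, assuming a
single glusterfs client process on the box, with the dump written under
/var/run/gluster/ by default:)

kill -USR1 $(pgrep -x glusterfs)     # ask the client to write a statedump
ls -lt /var/run/gluster/ | head      # the newest glusterdump.*.dump.* file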

VSZ/RSS are respectively:
- 381896 / 35688 just after mount
- 644040 / 309240 after 1st cp -Rp
- 644040 / 310128 after 1st rm -rf
- 709576 / 310128 after 1st kill -USR1
- 840648 / 421964 after 2nd cp -Rp
- 840648 / 44 after 2nd rm -rf

I created a small script that performs these actions in an infinite loop:
while /bin/true
do
  cp -Rp /usr/* /root/MNT/
  ps -C glusterfs -o vsz=,rss=    # record VSZ/RSS of the glusterfs process
  rm -rf /root/MNT/*
  ps -C glusterfs -o vsz=,rss=    # record VSZ/RSS of the glusterfs process
done

At this time here are the values so far:
971720 533988
1037256 645500
1037256 645840
1168328 757348
1168328 757620
1299400 869128
1299400 869328
1364936 980712
1364936 980944
1496008 1092384
1496008 1092404
1627080 1203796
1627080 1203996
1692616 1315572
1692616 1315504
1823688 1426812
1823688 1427340
1954760 1538716
1954760 1538772
2085832 1647676
2085832 1647708
2151368 1750392
2151368 1750708
2282440 1853864
2282440 1853764
2413512 1952668
2413512 1952704
2479048 2056500
2479048 2056712

So at this time the glusterfs process takes not far from 2 GB of resident
memory, having only performed exactly the same actions 'cp -Rp /usr/*
/root/MNT' + 'rm -rf /root/MNT/*' over and over.


Swap usage is starting to increase a little, and I haven't seen any memory
drop at this time.
I can understand that the kernel may not release the removed files (after rm
-rf) immediately, but the first 'rm' occurred at ~12:00 today and it is
~17:00 here, so I can't understand why so much memory is still used.
I would expect the memory to grow during 'cp -Rp', then shrink after
'rm', but it stays the same. Even if it stays the same, I would expect it
not to grow further while cp-ing again.


I am letting the cp/rm loop run to see what will happen. Feel free to ask
for other data if it may help.


Please note that I'll be on holiday for 3 weeks starting at the end of this
week, so I will mostly not be able to perform tests during this time (the
network connection is too bad where I'm going).


Regards,
--
Y.

On 02/08/2016 at 05:11, Pranith Kumar Karampuri wrote:



On Mon, Aug 1, 2016 at 3:40 PM, Yannick Perret 
> 
wrote:


On 29/07/2016 at 18:39, Pranith Kumar Karampuri wrote:



On Fri, Jul 29, 2016 at 2:26 PM, Yannick Perret
> wrote:

Ok, last try:
after investigating more versions I found that FUSE client
leaks memory on all of them.
I tested:
- 3.6.7 client on debian 7 32bit and on debian 8 64bit (with
3.6.7 servers on debian 8 64bit)
- 3.6.9 client on debian 7 32bit and on debian 8 64bit (with
3.6.7 servers on debian 8 64bit)
- 3.7.13 client on debian 8 64bit (with 3.8.1 servers on
debian 8 64bit)
- 3.8.1 client on debian 8 64bit (with 3.8.1 servers on
debian 8 64bit)
In all cases compiled from sources, apart from 3.8.1 where
.debs were used (due to a configure runtime error).
For 3.7 it was compiled with --disable-tiering. I also tried
to compile with --disable-fusermount (no change).

In all of these cases the memory (resident & virtual) of the
glusterfs process on the client grows with each activity and never
reaches a maximum (and never shrinks).
"Activity" for these tests is cp -Rp and ls -lR.
The client I let grow the longest reached ~4 GB of RAM. On
smaller machines it ends with the OOM killer killing the glusterfs
process, or glusterfs dying due to an allocation error.

In 3.6 memory seems to grow continuously, whereas in 3.8.1 it
grows in "steps" (430400 kB → 629144 (~1min) → 762324 (~1min)
→ 827860…).

All tests performed on a single test volume used only by my
test client. Volume in a basic x2 replica. The only
parameters I changed on this volume (without any effect) are
diagnostics.client-log-level set to ERROR and
network.inode-lru-limit set to 1024.


Could you attach statedumps of your runs?
The following link has steps to capture
this(https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
). We basically need to see what are the memory types that are
increasing. If you could help find the issue, we can send the
fixes for your workload. There is a 3.8.2 release in around 10
days I think. We can probably target this issue for that?

Here are statedumps.
Steps:
1. mount -t glusterfs ldap1.my.domain:SHARE /root/MNT/ (here VSZ
and RSS are 381896 35828)
2. take a dump with kill -USR1  (file
glusterdump.n1.dump.1470042769)
3. perform a 'ls -lR /root/MNT | wc -l' (btw result of wc -l is
518396 

Re: [Gluster-users] managing slow drives in cluster

2016-08-02 Thread Jay Berkenbilt
So we managed to work around the behavior by setting

sysctl -w vm.dirty_bytes=5000
sysctl -w vm.dirty_background_bytes=2500

In our environment with our specific load testing, this prevents the
disk flush from taking longer than gluster's timeout and avoids the
whole problem with gluster timing out. We haven't finished our
performance testing, but initial results suggest that it is no worse
than the performance we had with our previous home-grown solution. In
our previous home grown solution, we had a fuse layer that was calling
fsync() on every megabyte written as soon as there were 10 megabytes
worth of requests in the queue, which was effectively emulating in user
code what these kernel parameters do but with even smaller numbers.
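
As a side note that is not from the original message, the values in effect can be
checked at any time, and if they end up being kept they would normally be persisted
with a drop-in file; the file name below is arbitrary:

# show the values currently in effect
sysctl vm.dirty_bytes vm.dirty_background_bytes
# to keep them across reboots, put the same key = value pairs into a file such as
# /etc/sysctl.d/90-dirty-limits.conf and reload everything with:
sysctl --system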

Thanks for the note below about the potential patch. I applied this to
3.8.1 with the fix based on the code review comment and have that in my
back pocket in case we need it, but we're going to try with just the
kernel tuning for now. These parameters are decent for us anyway
because, for other reasons based on the nature of our application and
certain customer requirements, we want to keep the amount of dirty data
really low.

It looks like the code review has been idle for some time. Any reason?
It looks like a simple and relatively obvious change (not to take
anything away from it at all, and I really appreciate the pointer). Is
there anything potentially unsafe about it? Like are there some cases
where not always appending to the queue could cause damage to data if
the test wasn't exactly right or wasn't doing exactly what it was
expecting? If I were to run our load test against the patch, it wouldn't
catch anything like that because we don't actually look at the content
of the data written in our load test. In any case, if the kernel tuning
doesn't completely solve the problem for us, I may pull this out and do
some more rigorous testing against it. If I do, I can comment on the
code change.

For now, unless I post otherwise, we're considering our specific problem
to be resolved, though I believe there remains a potential weakness in
gluster's ability to report that it is still up in the case of a slower
disk write speed on one of the nodes.
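
For reference, the 42-second window discussed in this thread corresponds to
GlusterFS's network.ping-timeout volume option, so the client-side tolerance
itself is tunable; a minimal sketch, with the volume name being a placeholder:

gluster volume get myvol network.ping-timeout      # defaults to 42 seconds
gluster volume set myvol network.ping-timeout 60   # tolerate longer brick stalls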

--Jay

On 08/01/2016 01:29 AM, Mohammed Rafi K C wrote:
>
> On 07/30/2016 10:53 PM, Jay Berkenbilt wrote:
>> We're using glusterfs in Amazon EC2 and observing certain behavior
>> involving EBS volumes. The basic situation is that, in some cases,
>> clients can write data to the file system at a rate such that the
>> gluster daemon on one or more of the nodes may block in disk wait for
>> longer than 42 seconds, causing gluster to decide that the brick is
>> down. In fact, it's not down, it's just slow. I believe it is possible
>> by looking at certain system data to tell the difference from the system
>> with the drive on it between down and working through its queue.
>>
>> We are attempting a two-pronged approach to solving this problem:
>>
>> 1. We would like to figure out how to tune the system, including either
>> or both of adjusting kernel parameters or glusterd, to try to avoid
>> getting the system into the state of having so much data to flush out to
>> disk that it blocks in disk wait for such a long time.
>> 2. We would like to see if we can make gluster more intelligent about
>> responding to the pings so that the client side is still getting a
>> response when the remote side is just behind and not down. Though I do
>> understand that, in some high performance environments, one may want to
>> consider a disk that's not keeping up to have failed, so this may have
>> to be a tunable parameter.
>>
>> We have a small team that has been working on this problem for a couple
>> of weeks. I just joined the team on Friday. I am new to gluster, but I
>> am not at all new to low-level system programming, Linux administration,
>> etc. I'm very much open to the possibility of digging into the gluster
>> code and supplying patches
> Welcome to Gluster. It is great to see a lot of ideas within days :).
>
>
>>  if we can find a way to adjust the behavior
>> of gluster to make it behave better under these conditions.
>>
>> So, here are my questions:
>>
>> * Does anyone have experience with this type of issue who can offer any
>> suggestions on kernel parameters or gluster configurations we could play
>> with? We have several kernel parameters in mind and are starting to
>> measure their affect.
>> * Does anyone have any background on how we might be able to tell that
>> the system is getting itself into this state? Again, we have some ideas
>> on this already, mostly by using sysstat to monitor stuff, though
>> ultimately if we find a reliable way to do it, we'd probably code it
>> directly by looking at the relevant stuff in /proc from our own code. I
>> don't have the details with me right now.
>> * Can someone provide any pointers to where in the gluster code the ping
>> logic is handled and/or how one might go about making 

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-02 Thread David Gossage
On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> On 2/08/2016 5:07 PM, Kaushal M wrote:
>
>> GlusterFS-3.7.14 has been released. This is a regular minor release.
>> The release-notes are available at
>>
>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md
>>
>
> Thanks Kaushal, I'll check it out
>
>
So far on my test box it's working as expected.  At least the issues that
prevented it from running as before have disappeared.  Will need to see how
my test VM behaves after a few days.



-- 
> Lindsay Mathieson
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-02 Thread David Gossage
So far both dd commands that failed previously worked fine on 3.7.14

Once I deleted old content from test volume it mounted to oVirt via storage
add when previously it would error out.  I am now creating a test VM with
default disk caching settings (pretty sure oVirt is defaulting to none
rather than writeback/through).  So far all shards are being created
properly.

Load is skyrocketing, but I have all 3 gluster bricks running off 1 hard
drive on the test box, so I would expect horrible io/load issues with that.

Very promising so far.  Thank you developers for your help in working
through this.

Once I have the VM installed and running, I will test for a few days and make
sure it doesn't have any freeze or locking issues, then will roll this out
to the working cluster.



*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284

On Wed, Jul 27, 2016 at 8:37 AM, David Gossage 
wrote:

> On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay 
> wrote:
>
>> Yes please, could you file a bug against glusterfs for this issue?
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1360785
>
>
>>
>>
>> -Krutika
>>
>> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> Has a bug report been filed for this issue or should I create one with
>>> the logs and results provided so far?
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>



 On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur 
 wrote:

> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
> samp...@neutraali.net> wrote:
> > Here is a quick way how to test this:
> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
> dataset. gluster-test1 is server and gluster-test2 is client mounting with
> FUSE.
> >
> > Writing file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > dd: failed to open ‘file’: Invalid argument
> >
> > Enable network.remote-dio on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster
> network.remote-dio enable
> > volume set: success
> >
> > Writing small file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > 1+0 records in
> > 1+0 records out
> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
> >
> > Writing bigger file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
> >
> > Enable Sharding on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster
> features.shard enable
> > volume set: success
> >
> > Writing small file  with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=1 bs=1M
> > 1+0 records in
> > 1+0 records out
> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
> >
> > Writing bigger file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > dd: error writing ‘file3’: Operation not permitted
> > dd: closing output file ‘file3’: Operation not permitted
> >
>
>
> Thank you for these tests! would it be possible to share the brick and
> client logs?
>

 Not sure if his tests are the same as my setup, but here is what I end up
 with:

 Volume Name: glustershard
 Type: Replicate
 Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
 Status: Started
 Number of Bricks: 1 x 3 = 3
 Transport-type: tcp
 Bricks:
 Brick1: 192.168.71.10:/gluster1/shard1/1
 Brick2: 192.168.71.11:/gluster1/shard2/1
 Brick3: 192.168.71.12:/gluster1/shard3/1
 Options Reconfigured:
 features.shard-block-size: 64MB
 features.shard: on
 server.allow-insecure: on
 storage.owner-uid: 36
 storage.owner-gid: 36
 cluster.server-quorum-type: server
 cluster.quorum-type: auto
 network.remote-dio: enable
 cluster.eager-lock: enable
 performance.stat-prefetch: off
 performance.io-cache: off
 performance.quick-read: off
 cluster.self-heal-window-size: 1024
 cluster.background-self-heal-count: 16
 nfs.enable-ino32: off
 nfs.addr-namelookup: off
 nfs.disable: on
 performance.read-ahead: off
 performance.readdir-ahead: on



  dd if=/dev/zero 
 of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
 oflag=direct count=100 

[Gluster-users] Gluster replica over WAN...

2016-08-02 Thread Gilberto Nunes
Hello list...
This is my first post on this list.

I have here two IBM servers, each with 9 TB of hard disk.
Between these servers, I have a WAN connecting two offices, let's say OFFICE1
and OFFICE2.
This WAN connection is over fibre channel.
When I set up gluster as a replica with two bricks, and mount the
gluster volume in another folder, like this:

mount -t glusterfs localhost:/VOLUME /STORAGE


and when I go to that folder and try to access the content, I get a lot of
timeouts... Even a single ls takes a long time to return the list.

This folder, /STORAGE, is accessed by many users through a samba share.
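
For reference, the setup described above amounts to roughly the following; the
hostnames and brick paths are only placeholders:

# one brick per office, replicated across the WAN link
gluster volume create VOLUME replica 2 office1:/bricks/brick1 office2:/bricks/brick1
gluster volume start VOLUME
# mounted locally on each server and exported to users over Samba
mount -t glusterfs localhost:/VOLUME /STORAGE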

So, when OFFICE1 accesses the files on the gluster server over
\\server\share, there is a long delay before the files are shown... Sometimes it
times out.

My question is: is there some way to make gluster work faster in
this scenario?

Thanks a lot.

Best regards

-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-08-02 Thread Muthu Vigneshwaran
Hi all,

The weekly Gluster bug triage is about to take place in an hour

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

Appreciate your participation

Regards,
Muthu Vigneshwaran
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-02 Thread Lindsay Mathieson

On 2/08/2016 5:07 PM, Kaushal M wrote:

GlusterFS-3.7.14 has been released. This is a regular minor release.
The release-notes are available at
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md


Thanks Kaushal, I'll check it out

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users