The cache works by remembering 128KB pages within files; effectively,
these are blocks in your terminology.
Thanks
On Wed, 11 Mar 2015 at 12:36 Jon Heese jonhe...@jonheese.com wrote:
Hello,
I have a two-server, two-brick (one brick per server) replicated Gluster
3.6.2 volume, and I'm interested in the
O_DIRECT support in fuse has been available for quite some time now, surely
well before 3.4.
On Fri, Feb 13, 2015, 02:37 Pedro Serotto pedro.sero...@yahoo.es wrote:
Dear All,
I am actually using the following software stack:
debian wheezy with kernel 3.2.0-4-amd64, glusterfs 3.6.2, openstack Juno,
Hi Barry!
Your observation is right. Sometime after 3.0 (not sure which exact
version, probably 3.1) Gluster introduced POSIX ACL support (on the server
side). Until then, if FUSE let a request through into Gluster, the server
assumed the request to be authenticated - however, FUSE does not support POSIX
Atomicity of rename() has two aspects. One is the back-end view (for crash
consistency): having an unambiguous single point in time when the rename
is declared complete. DHT does quite a few tricks to make this atomicity
work well in practice. The other is the effect on the API, in particular the
It would be convenient if the time were appended to the snap name on the fly
(when receiving the list of snap names from glusterd?) so that the timezone
could be applied dynamically (which is what users would expect).
Thanks
On Thu Jan 08 2015 at 3:21:15 AM Poornima Gurusiddaiah pguru...@redhat.com
GlusterFS did undergo a few rounds of license changes. We have finally
settled on dual licensing, GPL v2 / LGPL v3 or later, for all the code in
glusterfs.git outside contrib/.
Thanks
On Thu, Oct 16, 2014 at 12:36 AM, Zhou Ganhong zhoug...@gmail.com wrote:
Hi, all
I am
The only reason O_APPEND gets stripped on the server side is one of the
following xlators:
- stripe
- quiesce
- crypt
If you have any of these, please try unloading/reconfiguring without these
features and try again.
Thanks
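For reference, a quick way to check whether a volume has any of these translators loaded (a rough sketch; the volume name myvol and the standard /var/lib/glusterd location are assumptions):

# check the volume type and any reconfigured options
gluster volume info myvol
# look for the stripe/quiesce/crypt translators in the generated volfiles
grep -E 'stripe|quiesce|crypt' /var/lib/glusterd/vols/myvol/*.vol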
On Sat, Sep 6, 2014 at 3:31 PM, mike
+1 for all the points.
On Wed, Aug 13, 2014 at 11:22 AM, Jeff Darcy jda...@redhat.com wrote:
I.1 Generating the master volume key
The master volume key should be generated by the user on a trusted machine.
Recommendations on master key generation are provided in section 6.2 of
the
Whether flush-behind is enabled or not, close() will guarantee that all
previous write()s on that fd have been acknowledged by the server. It is just
the post-processing of close() itself which is performed in the background
when flush-behind is enabled. The word flush here is probably confusing as it
is
On Mon, Jul 28, 2014 at 10:43 AM, Richard van der Hoff
rich...@swiftserve.com wrote:
On 28/07/14 18:05, Anand Avati wrote:
Whether flush-behind is enabled or not, close() will guarantee all
previous write()s on that fd have been acknowledged by server.
Thanks Anand. So can you explain why
On Tue, Jun 24, 2014 at 10:43 AM, Justin Clift jus...@gluster.org wrote:
On 24/06/2014, at 6:34 PM, Vijay Bellur wrote:
Hi All,
Since there has been traction for ports of GlusterFS to other unix
distributions, we thought of adding maintainers for the various ports that
are around. I am
Is it possible that each of your bricks is in its own VM, and the VM system
drives (where /var/lib/glusterd resides) are all placed on the same host
drive? Glusterd updates happen synchronously even in the latest release, and
the change to use buffered writes + fsync went into master only
On Mon, May 19, 2014 at 8:39 AM, Niels de Vos nde...@redhat.com wrote:
The 32 limit you are hitting is caused by FUSE. The Linux kernel module
provides the groups of the process that accesses the FUSE-mountpoint
through /proc/$PID/status (line starting with 'Groups:'). The kernel
does not
) release of gluster and fuse? Or, are we stuck with
32-groups until the fixes are released in the next version?
It wasn't clear if the fixes were to take you up to the 93-group limit or
beyond it...
David
-- Original Message --
From: Anand Avati av...@gluster.org
To: Niels de Vos
On Thu, May 8, 2014 at 4:45 AM, Ira Cooper i...@redhat.com wrote:
Also inline.
- Original Message -
The scalability factor I mentioned simply had to do with the core
infrastructure (depending on very basic mechanisms like the epoll wait
thread, the entire end-to-end flow of a
On Thu, May 8, 2014 at 4:48 AM, Jeff Darcy jda...@redhat.com wrote:
If snapview-server runs on all servers, how does a particular client
decide which one to use? Do we need to do something to avoid hot spots?
Overall, it seems like having clients connect *directly* to the snapshot
volumes
On Thu, May 8, 2014 at 4:53 AM, Jeff Darcy jda...@redhat.com wrote:
* How do clients find it? Are we dynamically changing the client
side graph to add new protocol/client instances pointing to new
snapview-servers, or is snapview-client using RPC directly? Are
the
On Thu, May 8, 2014 at 12:20 PM, Jeff Darcy jda...@redhat.com wrote:
They were: a) snap view generation requires privileged ops to
glusterd. So moving this task to the server side solves a lot of those
challenges.
Not really. A server-side component issuing privileged requests
whenever
I did now. I'd recommend adding a check for libintl.h in configure.ac and
failing gracefully, suggesting that the user install gettext.
Thanks
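A minimal sketch of such a check for configure.ac (illustrative only, not the actual patch):

AC_CHECK_HEADER([libintl.h], [],
                [AC_MSG_ERROR([libintl.h not found; please install gettext])])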
On Fri, Apr 4, 2014 at 10:59 PM, Dennis Schafroth den...@schafroth.dk wrote:
On 05 Apr 2014, at 07:38 , Anand Avati av...@gluster.org wrote:
And here:
./gf-error
Build fails for me:
Making all in libglusterfs
Making all in src
CC libglusterfs_la-dict.lo
CC libglusterfs_la-xlator.lo
CC libglusterfs_la-logging.lo
logging.c:26:10: fatal error: 'libintl.h' file not found
#include <libintl.h>
^
1 error generated.
make[4]: ***
in /usr/local, but it requires sudo rights; setting the rights is underway
brew install gettext
It will require setting some CFLAGS / LDFLAGS when running ./configure:
LDFLAGS=-L/usr/local/opt/gettext/lib
CPPFLAGS=-I/usr/local/opt/gettext/include
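So the configure invocation would look roughly like:

./configure LDFLAGS=-L/usr/local/opt/gettext/lib \
            CPPFLAGS=-I/usr/local/opt/gettext/include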
cheers,
:-Dennis
On 05 Apr 2014, at 06:56 , Anand Avati av
The most likely reason is that someone deleted these files manually from the
brick directories. You must never access/modify the data in the brick
directories directly; all modifications must happen through a gluster
client mount point. You may inspect the file contents to figure out if you
still
Can you please post some logs (the logs of the client which is exporting iSCSI)?
It is hard to diagnose issues without logs.
thanks,
Avati
On Wed, Mar 5, 2014 at 9:28 AM, Carlos Capriotti capriotti.car...@gmail.com
wrote:
Hi all. Again.
I am still fighting that VMware esxi cannot use striped
Jay,
there are a few parts to consistency.
- file data consistency: libgfapi by itself does not perform any file data
caching; it is entirely dependent on the set of translators (write-behind,
io-cache, read-ahead, quick-read) that are loaded, and the effect of those
xlators is the same in both FUSE
Hi,
Allowing the noforget option in FUSE will not help your cause. Gluster
presents the address of the inode_t as the nodeid to FUSE. In turn, FUSE
creates a filehandle using this nodeid for knfsd to export to the NFS client.
When knfsd fails over to another server, FUSE will decode the handle
encoded
On Sat, Dec 14, 2013 at 5:58 AM, James purplei...@gmail.com wrote:
On Sat, Dec 14, 2013 at 3:28 AM, Vijay Bellur vbel...@redhat.com wrote:
On 12/13/2013 04:05 AM, James wrote:
I just noticed that the Gluster Gerrit [1] doesn't use HTTPS!
Can this be fixed ASAP?
Configured now,
I have the same question. Do you have an excessively high --entry-timeout
parameter on your FUSE mount? In any case, the 'Structure needs cleaning'
error should not surface up to FUSE, and that is still a bug.
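For example, a mount with a short entry timeout would look something like this (server, volume and mount point names are made up):

mount -t glusterfs -o entry-timeout=0 server1:/myvol /mnt/myvol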
On Thu, Dec 12, 2013 at 12:46 PM, Maik Kulbe
i...@linux-web-development.de wrote:
How do you
Looks like your issue was fixed by patch http://review.gluster.org/4989/ in
the master branch. Backporting this to release-3.4 now.
Thanks!
Avati
On Thu, Dec 12, 2013 at 1:26 PM, Anand Avati av...@gluster.org wrote:
I have the same question. Do you have excessively high --entry-timeout
parameter
Please provide the full client and server logs (in a bug report). The
snippets give some hints, but are not very meaningful without the full
context/history since mount time (they have after-the-fact symptoms, but
not the part which shows the reason why the disconnects happened).
Even before looking
James,
This is the right way to think about the problem. I have more specific
comments in the script, but just wanted to let you know this is a great
start.
Thanks!
On Wed, Nov 27, 2013 at 7:42 AM, James purplei...@gmail.com wrote:
Hi,
This is along the lines of tools for sysadmins. I plan
Scott,
It is really unfortunate that you were bitten by that bug. I am hoping to
convince you to at least not abandon the deployment this early with some
responses:
- Note that you typically don't have to proactively rebalance your volume.
If your new data comes in the form of new directories, they
You are probably using 3.3 or older? This has been fixed in 3.4 (
http://review.gluster.org/5414)
Thanks,
Avati
On Wed, Dec 4, 2013 at 2:05 PM, Michael Lampe mlam...@googlemail.com wrote:
I've installed GlusterFS on our 23-node Beowulf cluster. Each node has a
disc which provides a brick for
.
Am I perhaps misinterpreting the problem?
-Michael
Anand Avati wrote:
You are probably using 3.3 or older? This has been fixed in 3.4 (
http://review.gluster.org/5414)
Thanks,
Avati
On Wed, Dec 4, 2013 at 2:05 PM, Michael Lampe
mlam...@googlemail.com wrote:
I've installed
Nguyen,
I did not realize you were using RDMA. Can you paste the gluster client
logs as well?
Thanks,
Avati
On Tue, Dec 3, 2013 at 4:16 PM, Nguyen Viet Cuong mrcuon...@gmail.com wrote:
Hi Keithley,
Please find the bug in the attached log file. I experienced this bug on
both 3.4.0 and 3.4.1.
the IPoIB
interface.
I have already re-installed 3.2.7 on that server for shipping. I will
install 3.4.1 on other servers and send you client logs, but please wait
for a while.
Regards,
Cuong
On Wed, Dec 4, 2013 at 9:35 AM, Anand Avati av...@gluster.org wrote:
Nguyen,
I did
You are seeing a side-effect of http://review.gluster.com/3631. This means:
if your backend filesystem uses 4KB blocks, then the block count reported
by gluster will be at worst 7 blocks smaller (4KB / 512 - 1).
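To see the worst-case shortfall for your own backend, something like the following works (the brick path is hypothetical):

# fundamental block size of the brick filesystem, in bytes
stat -f -c %S /bricks/brick1
# worst-case shortfall in 512-byte blocks, per the formula above
echo $(( $(stat -f -c %S /bricks/brick1) / 512 - 1 ))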
On Tue, Nov 26, 2013 at 3:13 AM, Maik Kulbe
i...@linux-web-development.de wrote:
So
From man (2) stat:
blksize_t st_blksize; /* blocksize for file system I/O */
blkcnt_t st_blocks; /* number of 512B blocks allocated */
The 128K you are seeing is st_blksize which is the recommended I/O
transfer size. The number of consumed blocks is always
, Anand Avati wrote:
Can you provide the following details from the time of the pop3 test on
FUSE mount:
1. mount FUSE client with -LTRACE and logs from that session
I'm unable to do that
#mount -t glusterfs -LTRACE gluster1:/mailtest /homegluster
mount: no such partition found
maybe I
Or actually, is it a 32-bit binary? (Running file /usr/bin/binary on the
pop3 daemon should reveal this.) If it is, try mounting the FUSE client with
-o enable-ino32 and retry the pop3 daemon.
Avati
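In concrete terms (reusing the mount from the earlier attempt; the binary path is a placeholder):

file /usr/bin/<pop3-binary>     # reveals whether the daemon is a 32-bit binary
mount -t glusterfs -o enable-ino32 gluster1:/mailtest /homegluster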
On Sun, Nov 24, 2013 at 12:40 AM, Anand Avati av...@gluster.org wrote:
The problematic line is:
9096
Can you provide the following details from the time of the pop3 test on
FUSE mount:
1. mount FUSE client with -LTRACE and logs from that session
2. strace -f -p <pid of pop3 daemon> -o /tmp/pop3-strace.log
Thanks,
Avati
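Roughly, the two steps could look like this (note that -L TRACE is an option of the glusterfs binary itself; with mount -t glusterfs the equivalent is the log-level mount option):

mount -t glusterfs -o log-level=TRACE gluster1:/mailtest /homegluster
strace -f -p <pid of pop3 daemon> -o /tmp/pop3-strace.log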
On Sat, Nov 23, 2013 at 3:56 PM, W K wkm...@bneit.com wrote:
We brought up a
initial testing that patch appears to have addressed the
problem. I will put it through our full system tests, but at least my
example script can no longer reproduce the problem. Thank you.
On Wed, Nov 20, 2013 at 10:25 PM, Anand Avati av...@gluster.org wrote:
Peter,
Thanks, this was helpful
/gfs/test1385000727, /mnt/gfs/test-target..., 4096) = 20
symlink(/mnt/gfs/test1385000727, /tmp/test1385000727) = 0
lstat(/tmp/test1385000727, {st_mode=S_IFLNK|0777, st_size=23, ...}) = 0
On Wed, Nov 13, 2013 at 3:24 PM, Anand Avati av...@gluster.org wrote:
On Wed, Nov 13, 2013 at 12:14 PM
Ravi,
We should not mix up data and entry operation domains; if a file is in data
split-brain, that should not stop a user from performing rename/link/unlink
operations on the file.
Regarding your concern about complications while healing - we should change
our manual fixing instructions to:
- go to
On Wed, Nov 13, 2013 at 9:01 AM, Peter Drake peter.dr...@acquia.com wrote:
I have a replicated Gluster setup, 2 servers (fs-1 and fs-2) x 1 brick. I
have two clients (web-1 and web-2) which are connected and simultaneously
execute tasks. These clients mount the Gluster volume at /mnt/gfs.
On Wed, Nov 13, 2013 at 12:14 PM, Peter Drake peter.dr...@acquia.com wrote:
Thanks for taking the time to look at this and reply. To clarify, the
script that was running and created the log entries is an internal tool
which does lots of other, unrelated things, but the part that caused the
Shawn,
Thanks for the detailed info. I have not yet looked into your logs, but
will do so soon. There have been patches on rebalance which do fix issues
related to ownership. But I am not (yet) sure about bugs which caused data
loss. One question I have is -
[2013-10-29 23:13:49.611069] I
Sounds good! URL please ? :-)
On Fri, Nov 1, 2013 at 12:54 PM, Paul Cuzner pcuz...@redhat.com wrote:
Hi,
Just to let you know that I've updated the deploy tool (aka setup wizard),
to include the creation/tuning of the 1st volume.
Here's the changelog info;
- Added optparse module for
for the
user. Showing the date gluster completed replicating the file to another
node is confusing.
As I described above, that is not the case. Delayed replication (healing)
happens for both data and mtime.
Avati
-bc
--
From: Anand Avati av
Also, have you specified a block size for dd? The default (512 bytes) is
too low, given the number of context switches it generates in FUSE. Use a
higher block size (64-128KB) and check the throughput.
Avati
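For example (the mount path is made up):

dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=128k count=1024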
On Fri, Oct 25, 2013 at 7:53 AM, Joe Julian j...@julianfamily.org wrote:
Have you
by consistently returning the highest of the two mtimes
whenever queried.
Avati
On Fri, Oct 25, 2013 at 11:17 AM, James purplei...@gmail.com wrote:
On Fri, Oct 25, 2013 at 1:46 PM, Anand Avati av...@gluster.org wrote:
Gluster's replication is synchronous. So writes are done in parallel
On Fri, Oct 25, 2013 at 12:51 PM, James purplei...@gmail.com wrote:
On Fri, Oct 25, 2013 at 3:18 PM, Anand Avati av...@gluster.org wrote:
In normal operation they will differ by as much as the time drift between
the servers plus the lag in delivery/issue of write() calls on the servers.
This delta
Gluster does have logic to always show the mtime which is the highest in
value. It is probably a bug if you are witnessing different mtimes at
different times when no writes have happened in between.
Avati
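One way to check is to compare the mtime seen on the mount with the mtimes on each brick (paths are hypothetical):

stat -c '%n %y' /mnt/glusterfs/somefile     # on a client
stat -c '%n %y' /bricks/brick1/somefile     # on each server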
On Thu, Oct 24, 2013 at 4:31 PM, James purplei...@gmail.com wrote:
On Thu, 2013-10-24 at
A very likely reason for getting ENODATA for the posix_acl_default key:
is your backend not mounted with -o acl?
Avati
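To check and fix that on an ext3/ext4 brick, something along these lines (the brick path is a placeholder):

mount | grep /bricks/brick1          # see whether 'acl' is among the mount options
mount -o remount,acl /bricks/brick1  # and add 'acl' to the brick's entry in /etc/fstab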
On Mon, Oct 14, 2013 at 12:41 AM, Vijay Bellur vbel...@redhat.com wrote:
On 10/11/2013 10:20 AM, Dan Mons wrote:
Following up on this:
* GlusterFS 3.4.1 solves the
http://review.gluster.org/#/c/6031/ (patch to remove replace-brick data
migration) is slated for merge before 3.5. Review comments (on gerrit)
welcome.
Thanks,
Avati
On Thu, Oct 3, 2013 at 9:27 AM, Anand Avati av...@gluster.org wrote:
On Thu, Oct 3, 2013 at 8:57 AM, KueiHuan Chen kueihuan.c
h2:/b2 h1:/b1 start .. commit
Let me know if you still have questions.
Avati
Thanks.
Best Regards,
KueiHuan-Chen
Synology Incorporated.
Email: khc...@synology.com
Tel: +886-2-25521814 ext.827
2013/9/30 Anand Avati av...@gluster.org:
On Fri, Sep 27, 2013 at 1:56 AM, James purplei
On Fri, Sep 27, 2013 at 10:15 AM, Amar Tumballi ama...@gmail.com wrote:
I plan to send out patches to remove all traces of replace-brick data
migration code by 3.5 branch time.
Thanks for the initiative, let me know if you need help.
I could use help here, if you have free cycles to pick
On Fri, Sep 27, 2013 at 1:56 AM, James purplei...@gmail.com wrote:
On Fri, 2013-09-27 at 00:35 -0700, Anand Avati wrote:
Hello all,
Hey,
Interesting timing for this post...
I've actually started working on automatic brick addition/removal. (I'm
planning to add this to puppet-gluster
Hello all,
DHT's remove-brick + rebalance has been enhanced in the last couple of
releases to be quite sophisticated. It can handle graceful decommissioning
of bricks, including open file descriptors and hard links.
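For illustration, graceful decommissioning of a brick goes roughly like this (volume and brick names are made up):

gluster volume remove-brick myvol server2:/bricks/b2 start
gluster volume remove-brick myvol server2:/bricks/b2 status   # wait for the rebalance to finish
gluster volume remove-brick myvol server2:/bricks/b2 commit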
This in a way is a feature overlap with replace-brick's data migration
On Wed, Sep 25, 2013 at 12:47 PM, John Mark Walker johnm...@gluster.org wrote:
- Original Message -
Sadly this won't help, but thanks for your effort. We use the Ubuntu
repository for Gluster, so sadly this is not an option. Also, I don't think
I'd be too happy with packages built
I have a theory for #998967 (that posix-acl is not doing the right thing
after chmod/setattr). Preparing a patch; I would appreciate it if you can
test it quickly.
Avati
On Fri, Sep 20, 2013 at 1:26 AM, Lukáš Bezdička lukas.bezdi...@gooddata.com
wrote:
No, I see issues reported in
Can you please confirm if http://review.gluster.org/5979 fixes the problem
of #998967 for you? If so we will backport and include the patch in 3.4.1.
Thanks,
Avati
On Fri, Sep 20, 2013 at 2:03 AM, Anand Avati av...@gluster.org wrote:
I have a theory for #998967 (that posix-acl is not doing
, Anand Avati av...@gluster.org wrote:
Can you please confirm if http://review.gluster.org/5979 fixes the
problem of #998967 for you? If so we will backport and include the patch
in 3.4.1.
Thanks,
Avati
On Fri, Sep 20, 2013 at 2:03 AM, Anand Avati av...@gluster.org wrote:
I have a theory
the issue with patch #2 from
http://review.gluster.org/#/c/5979/
Thank you.
On Fri, Sep 20, 2013 at 11:52 AM, Anand Avati av...@gluster.org wrote:
Please pick #2 resubmission, that is fine.
Avati
On Fri, Sep 20, 2013 at 2:48 AM, Lukáš Bezdička
lukas.bezdi...@gooddata.com wrote
On Thu, Sep 19, 2013 at 11:28 AM, Nux! n...@li.nux.ro wrote:
On 18.09.2013 19:04, Nux! wrote:
Hi,
I'm trying to build and test samba-glusterfs-vfs, but problems appear
from the start:
http://fpaste.org/40562/95274621/
Any pointers?
Anyone from devel
On 2013-9-18, at 1:38 PM, Anand Avati av...@redhat.com wrote:
On 9/17/13 10:34 PM, kane wrote:
Hi Anand,
I use 2 gluster servers; this is my volume info:
Volume Name: soul
Type: Distribute
Volume ID: 58f049d0-a38a-4ebe-94c0-086d492bdfa6
Status: Started
Number of Bricks: 2
Transport-type: tcp
How are you testing this? What tool are you using?
Avati
On Tue, Sep 17, 2013 at 9:02 PM, kane stef...@163.com wrote:
Hi Vijay
I used the code in https://github.com/gluster/glusterfs.git with
the lasted commit:
commit de2a8d303311bd600cb93a775bc79a0edea1ee1a
Author: Anand Avati
.
Any places I did wrong?
Thank you
-Kane
On 2013-9-18, at 1:19 PM, Anand Avati av...@gluster.org wrote:
How are you testing this? What tool are you using?
Avati
On Tue, Sep 17, 2013 at 9:02 PM, kane stef...@163.com wrote:
Hi Vijay
Anand,
This is a great first step. Looking forward to the integration maturing
soon. This is a big step towards supporting NFSv4 and pNFS for GlusterFS.
Thanks!
Avati
On Sat, Sep 14, 2013 at 3:18 AM, Anand Subramanian ana...@redhat.com wrote:
FYI, the FSAL (File System Abstraction Layer) for
On Tue, Sep 10, 2013 at 2:57 PM, Tamas Papp tom...@martos.bme.hu wrote:
--with-glusterfs=/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs
Make that --with-glusterfs=/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/
(exclude /include/glusterfs suffix).
Avati
Fred,
Questions regarding RHS (Red Hat Storage) are best asked of Red Hat
Support rather than the community. That being said, upgrading from one
version to another does not alter hash values / layouts and no new
linkfiles will be created because of the upgrade. You will see new hash
values and
This looks like it might be because you need -
http://review.gluster.org/4591
If you can confirm, we can backport it to 3.4.1.
Thanks,
Avati
On Wed, Sep 4, 2013 at 9:03 PM, Vijay Bellur vbel...@redhat.com wrote:
On 09/05/2013 04:55 AM, higkoohk wrote:
Yes, I'm using GlusterFS 3.4.0
For those interested in what the possible patches are, here is a short list
of commits which are available in master but not yet backported to
release-3.4 (note the actual list is > 500; this is a short list of patches
which fix some kind of issue - crash, leak, incorrect behavior, failure)
names
# file: mnt/brick2/vol_icclab/
trusted.glusterfs.volume-id=0xa2b943d2271a464da2ae7e29ede15552
Cheers,
--
Daniele Stroppa
Researcher
Institute of Information Technology
Zürich University of Applied Sciences
http://www.cloudcomp.ch
From: Anand Avati anand.av...@gmail.com
Date: Fri
On Sun, Aug 25, 2013 at 11:23 PM, Vijay Bellur vbel...@redhat.com wrote:
File size as reported on the mount point and the bricks can vary because
of this code snippet in iatt_from_stat():
{
uint64_t maxblocks;
maxblocks = (iatt->ia_size + 511) / 512;
Michael,
The problem looks very strange. We haven't come across such an issue (in
glusterfs) so far. However I do recall seeing such bit flips at a customer
site in the past, and in the end it was diagnosed to be a hardware issue.
Can you retry a few runs of the same rsync directly to the backends
Please provide the output of the following commands on the respective nodes:
on gluster-node1 and gluster-node4:
getfattr -d -e hex -m . /mnt
getfattr -d -e hex -m . /mnt/brick1
getfattr -d -e hex -m . /mnt/brick1/vol_icclab
getfattr -d -e hex -m . /mnt/brick2
getfattr -d -e hex -m .
This is intentional behavior. We specifically introduced this check because
creating volumes with directories which are subdirectories of other bricks,
or with a subdirectory that belongs to another brick, can result in dangerous
corruption of your data.
Please create volumes with brick directories which are
On Sat, Aug 17, 2013 at 5:20 AM, Jeff Darcy jda...@redhat.com wrote:
On 08/16/2013 11:21 PM, Alexey Shalin wrote:
I wrote a small script:
#!/bin/bash
for i in {1..1000}; do
    size=$((RANDOM%5+1))
    dd if=/dev/zero of=/storage/test/bigfile${i} count=1024 bs=${size}k
done
This script creates
On Tue, Jul 30, 2013 at 11:39 PM, Balamurugan Arumugam
barum...@redhat.com wrote:
- Original Message -
From: Joe Julian j...@julianfamily.org
To: Pablo paa.lis...@gmail.com, Balamurugan Arumugam
b...@gluster.com
Cc: gluster-users@gluster.org, gluster-de...@nongnu.org
Sent:
On Wed, Jul 31, 2013 at 8:57 AM, Nux! n...@li.nux.ro wrote:
On 31.07.2013 16:21, Nux! wrote:
On 31.07.2013 12:29, Nux! wrote:
Hello,
I'm trying to use a volume on Windows via NFS and every operation is
very slow and in the nfs.log I see the following:
[2013-07-31 11:26:22.644794] W
On Mon, Jul 29, 2013 at 10:55 PM, Anand Avati anand.av...@gmail.com wrote:
On Mon, Jul 29, 2013 at 8:36 AM, Roberto De Ioris robe...@unbit.it wrote:
Hi everyone, I have just committed a plugin for the uWSGI application server
for exposing glusterfs filesystems using the new native API
but they
just took forever to get pushed out into an official release.
I'm in favor of closing some bugs and risking introducing new bugs for
the sake of releases happening often.
On Fri, Jul 26, 2013 at 10:26 AM, Anand Avati anand.av...@gmail.com
wrote:
Hello everyone,
We
On Sun, Jul 28, 2013 at 11:32 PM, Bryan Whitehead dri...@megahappy.net wrote:
Weekend activities kept me away from watching this thread, wanted to
add in more of my 2 cents... :)
Major releases would be great to happen more often - but keeping
current releases more current is really what I
What Unix uid is the Windows client mapping the access to? I guess the
permission issue boils down to that. You can create a file under the mode-777
dir and check the uid/gid from a Linux client. Then make sure the dirs
you create are writeable by that uid/gid.
Avati
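A concrete way to check (directory and file names are made up):

# from the Windows client, create a file inside the mode-777 directory, then on a Linux client:
ls -ln /mnt/glusterfs/dir777/newfile.txt   # shows the numeric uid/gid the Windows client mapped to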
On Tue, Jul 30, 2013 at
.
Avati
On Tue, Jul 30, 2013 at 3:21 AM, Nux! n...@li.nux.ro wrote:
On 30.07.2013 11:03, Anand Avati wrote:
What unix uid is the windows client mapping the access to? I guess the
permission issue boils down to that. You can create a file under the mode
777 dir, and check the uid/gid from a linux
On Tue, Jul 30, 2013 at 7:47 AM, Roberto De Ioris robe...@unbit.it wrote:
On Mon, Jul 29, 2013 at 10:55 PM, Anand Avati anand.av...@gmail.com
wrote:
I am assuming the module in question is this -
https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c
.
I
see
On Mon, Jul 29, 2013 at 1:56 AM, Nux! n...@li.nux.ro wrote:
On 29.07.2013 07:16, Daniel Müller wrote:
But you need to have gluster installed!? Which version?
Samba 4.1 does not compile with the latest glusterfs 3.4 on CentOS 6.4.
From what JM said, it builds against EL6 Samba (3.6) and it
On Mon, Jul 29, 2013 at 8:36 AM, Roberto De Ioris robe...@unbit.it wrote:
Hi everyone, I have just committed a plugin for the uWSGI application server
for exposing glusterfs filesystems using the new native API:
https://github.com/unbit/uwsgi-docs/blob/master/GlusterFS.rst
Currently it is
On Sat, Jul 27, 2013 at 12:40 PM, Harshavardhana
har...@harshavardhana.net wrote:
- Be responsible for maintaining release branch.
- Deciding branch points in master for release branches.
- Actively scan commits happening in master and cherry-pick those which
improve stability of a release
On Sun, Jul 28, 2013 at 6:48 AM, Brian Foster bfos...@redhat.com wrote:
On 07/27/2013 02:32 AM, Anand Avati wrote:
On Fri, Jul 26, 2013 at 5:16 PM, Bryan Whitehead dri...@megahappy.net wrote:
I would really like to see releases happen regularly
On Thu, Jul 25, 2013 at 5:52 AM, Marcus Bointon
mar...@synchromedia.co.uk wrote:
This is a silly chicken-and-egg problem. I've found when I issue a reboot
on a server that's mounted its own gluster volume via NFS (with the 'hard'
option set),
Mounting NFS export from localhost is a recipe for
On Sun, Jul 28, 2013 at 8:58 AM, Marcus Bointon
mar...@synchromedia.co.uk wrote:
On 28 Jul 2013, at 17:31, Anand Avati anand.av...@gmail.com wrote:
Mounting NFS export from localhost is a recipe for disaster for many other
reasons (including deadlocks under heavy IO).
Docs? I'm only doing
On Sun, Jul 28, 2013 at 9:23 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
Anand Avati anand.av...@gmail.com wrote:
We are in the process of formalizing the governance model of the
GlusterFS project. Historically, the governance of the project has been
loosely structured
On Sun, Jul 28, 2013 at 9:57 AM, Marcus Bointon
mar...@synchromedia.co.uk wrote:
On 28 Jul 2013, at 18:12, Anand Avati anand.av...@gmail.com wrote:
What is your typical workload, and what kind of tests did you compare
native client perf against NFS perf?
Low load, two web servers sharing
think the ext4 patches had long been available but they
just took forever to get pushed out into an official release.
I'm in favor of closing some bugs and risking introducing new bugs for
the sake of releases happening often.
On Fri, Jul 26, 2013 at 10:26 AM, Anand Avati anand.av...@gmail.com
Hello everyone,
We are in the process of formalizing the governance model of the
GlusterFS project. Historically, the governance of the project has been
loosely structured. This is an invitation to all of you to participate in
this discussion and provide your feedback and suggestions on how we
It would be good if you can make sure a dd write with oflag=direct works on
the mount point (without involving KVM/qemu).
Avati
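For example (the mount path is made up):

dd if=/dev/zero of=/mnt/glusterfs/direct-test bs=1M count=100 oflag=direct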
On Mon, Jul 15, 2013 at 7:37 PM, Jacob Yundt jyu...@gmail.com wrote:
Unfortunately I'm hitting the same problem with 3.4.0 GA. In case it
helps, I increased both
Hello,
I have managed to clear the pending 0031 to 0028 operation by shutting down
all the nodes, deleting the rb_mount file and editing the rb_state file.
However, this did not help reintroduce 00031 to the cluster (0022 also,
but it is offline, so there is no chance to do a peer probe).
I have tried to replicate
On Fri, Jun 14, 2013 at 10:04 AM, John Brunelle
john_brune...@harvard.edu wrote:
Thanks, Jeff! I ran readdir.c on all 23 bricks on the gluster nfs
server to which my test clients are connected (one client that's
working, and one that's not; and I ran on those, too). The results
are attached.
Looks like there might be a firewall (iptables) in the way? Can you flush
all iptables rules and retry - just to confirm?
Avati
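For a quick test (this temporarily removes all filtering, so only do it on a test box):

iptables -L -n       # inspect the current rules first
iptables -F          # flush all rules, then retry the gluster operation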
On Mon, May 20, 2013 at 3:45 PM, Jay Vyas jayunit...@gmail.com wrote:
Hi gluster:
I'm getting the cryptic 107 error (I guess this means gluster can't see a