On Wed, Nov 4, 2015 at 8:41 AM, Amye Scavarda wrote:
> Hi all,
> It's that time of year again! The 2015 Gluster Community Survey is an
> important way to be able to give your feedback to the project as a whole,
> letting us know where you'd like to see improvements, what you
On 11/17/2015 08:50 PM, Pierre LĂ©onard wrote:
Hi all,
I have a cluster with 14 nodes. I have built a striped volume (stripe 7) across the
14 nodes. Underneath I use XFS.
Locally I mount the global volume with NFS:
mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o
_netdev,nosuid,bg,exec
then I
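For reference, the same localhost NFS mount can be made persistent in /etc/fstab. This is a sketch mirroring the mount options used above; the export path and mountpoint are taken from the command as written:

```
127.0.0.1:/gvExport  /glusterfs/gvExport  nfs  _netdev,nosuid,bg,exec  0 0
```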
I am running glusterfs on a 3-node production cluster with thinly provisioned
LVM volumes. My goal is to automate a backup process that is based on
gluster snapshots. The idea is basically to run a shell script via cron
that takes the snapshot, zips it and moves it to a remote server.
Backup works, now
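The cron job described above could be sketched roughly as below. This is a hedged outline, not the poster's actual script: the volume name (gv0), snapshot naming, and backup destination are all placeholder assumptions, and the snapshot mount path follows the usual activated-snapshot convention.

```shell
#!/bin/sh
# Hypothetical backup sketch. VOL, SNAP and DEST are placeholders.
set -eu

VOL=gv0
SNAP="backup-$(date +%Y%m%d)"
DEST=backup@backuphost:/backups

# 1. Take a snapshot of the volume (needs thin-provisioned LVM bricks).
gluster snapshot create "$SNAP" "$VOL" no-timestamp

# 2. Activate and mount the snapshot to get a stable, read-only view.
gluster snapshot activate "$SNAP"
mkdir -p "/mnt/$SNAP"
mount -t glusterfs "localhost:/snaps/$SNAP/$VOL" "/mnt/$SNAP"

# 3. Archive it and ship it to the remote server.
tar -czf "/tmp/$SNAP.tar.gz" -C "/mnt/$SNAP" .
scp "/tmp/$SNAP.tar.gz" "$DEST"

# 4. Clean up the snapshot (--mode=script suppresses the confirmation prompt).
umount "/mnt/$SNAP"
gluster snapshot deactivate "$SNAP"
gluster --mode=script snapshot delete "$SNAP"
```

Running this from cron keeps the live volume untouched while the archive is taken from the snapshot's frozen view.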
On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
Hi:
I am trying to understand if it is technically feasible to have gluster
nodes on one machine, and export a volume from one of these nodes using
an NFS-Ganesha server installed on a totally different machine? I tried
the below and showmount -e
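In principle NFS-Ganesha's GLUSTER FSAL only needs network access to the trusted pool, so the export can live on a separate host. A minimal ganesha.conf export block for that setup might look as follows; "gv0" and "gluster-node1" are placeholder names, not taken from the original mail:

```
# Hypothetical export block on the external NFS-Ganesha host.
EXPORT {
    Export_Id = 1;
    Path = "/gv0";
    Pseudo = "/gv0";
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "gluster-node1";  # any reachable node of the trusted pool
        Volume = "gv0";
    }
}
```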
On 11/17/2015 08:19 AM, Tiemen Ruiten wrote:
I double-checked my config and found out that the filesystem of the
brick on the arbiter node doesn't support ACLs: the underlying filesystem is
ext4 without the acl mount option, while the other bricks are XFS (where it's
always enabled). Do all the bricks need
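To verify and fix the missing acl option on the arbiter brick, something along these lines should work. The device and mountpoint names below are placeholders, not taken from the original mail:

```shell
# Check the current mount options of the arbiter brick
# (look for "acl" in the fourth field).
grep ' /bricks/arbiter ' /proc/mounts

# Remount with ACL support for the current boot.
mount -o remount,acl /bricks/arbiter

# Make it persistent by adding acl to the options in /etc/fstab, e.g.:
# /dev/sdb1  /bricks/arbiter  ext4  defaults,acl  0 2
```

Note that on many distributions ext4 already enables ACLs via the filesystem's default mount options, so the grep check is worth doing before remounting.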
Hi All,
The weekly Gluster community meeting will start in ~90 minutes.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Thank you, everyone who attended today's meeting. We ran slightly
overtime and couldn't cover all topics. We hope to cover any missed
topics in the next meeting, which will happen at the same time next week. A
calendar invite has been attached for next week's meeting.
Today's meeting logs are available at
Lindsay,
I wanted to ask you one more thing: specifically in VM workload with sharding,
do you run into consistency issues with strict-write-ordering being off?
I remember suggesting that this option be enabled. But that was for plain dd on
the mountpoint (and not inside the vm), where it was
Aravinda,
I figured it out. The problem was that I was using the public IPs to create
the gluster cluster, which started giving the transport endpoint issue. I found
a workaround by using the private EC2 DNS names for peering and the public
ones for geo-replication, which worked like a charm. Sorry if this doesn't
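The workaround described above could be sketched as follows. All hostnames are EC2-style placeholders, and "mastervol"/"slavevol" are assumed volume names:

```shell
# Peer the trusted pool over the private EC2 DNS names:
gluster peer probe ip-10-0-0-2.ec2.internal
gluster peer probe ip-10-0-0-3.ec2.internal

# Create and start the geo-replication session against the
# remote slave's public DNS name:
gluster volume geo-replication mastervol \
    ec2-52-0-0-1.compute-1.amazonaws.com::slavevol create push-pem
gluster volume geo-replication mastervol \
    ec2-52-0-0-1.compute-1.amazonaws.com::slavevol start
```

Peering over private addresses keeps intra-pool traffic inside the VPC, while the public name is only needed for the cross-region geo-replication slave.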