Good evening gentlemen,
I would be interested in creating this documentation.
I have recently set up GlusterFS 3.4.2 and 3.6.2 on Ubuntu 14.04 and CentOS 7
and run some benchmarks on our hosting platform; however, I do not have
years-long experience in running GlusterFS (however some more experience wi
On 02/12/2015 01:03 PM, Peter B. wrote:
> Is there anything I can do to make Gluster feel "good" with bricks filling
> up, as long as there is sufficient space on other nodes/bricks?
Anyone? :(
Thanks,
Pb
___
Gluster-users mailing list
Gluster-users@gl
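For what it's worth, there is a DHT option aimed at exactly this situation: cluster.min-free-disk tells the distribute translator to route new files away from bricks whose free space has dropped below a threshold. A hedged sketch — the option is documented in the Gluster admin guide, but the volume name "myvol" here is a placeholder:

```shell
# Stop placing new files on bricks with less than 10% free space.
# "myvol" is a placeholder volume name.
gluster volume set myvol cluster.min-free-disk 10%

# Confirm the option took effect.
gluster volume info myvol | grep -i min-free-disk
```

Note this only affects where *new* files land; files already on a full brick keep growing in place, so a rebalance may still be needed to even things out.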
Yeah, that'd be pretty optimal. How full-on do you want to go?
Should we look into arranging some OnMetal stuff, in addition to the
VM offerings?
Jesse, Ben is our *very best* GlusterFS and RHS performance tuning
expert, so capturing his interest is a *very* good thing. ;)
+ Justin
On 17 Feb 2
This is interesting to me; I'd like the chance to run my performance tests
on a cloud provider's systems. We could put together some recommendations
for configuration and tuning, along with performance numbers. It would also be
cool to enhance my setup scripts to work with cloud instances. Sound like what
On 17 Feb 2015, at 21:49, Josh Boon wrote:
Do we have use cases to focus on? Gluster is part of the answer to many
different questions so if it's things like simple replication and distribution
and basic performance tuning I could help. I also have a heavy Ubuntu tilt so
if it's Red Hat oriented I'm not much help :)
On 17 Feb 2015, at 19:28, Tom Callaway wrote:
> Where: I know we have a lot of international Gluster contributors who
> are not in the United States, so I'm open to suggestion on this point. A
> quick internet search seems to imply that large international airports
> like Washington DC, New York,
Yeah, huge subject line. :)
But it gets the message across... Rackspace provides us with a *bunch* of online
VMs, which we have our infrastructure in and run the majority of our regression
tests with.
They've asked us if we could write up a "How to do GlusterFS in the Cloud: The
Right Way" (technical) d
Hello Gluster people! I'd like to propose that we bring our Gluster
community together for an in-person summit. This event will have a few
goals:
* To come together to discuss the features and changes we want to see in
the future of GlusterFS
* To hear from our user community about their experienc
Hi
I have somehow put my Gluster 3.6 installation in a state where I am unable to
create any volumes. Even after unmounting all Gluster-related mounts and
deleting all relevant files/directories, I get the following error when
attempting to create a volume. Here is what I am trying to do:
> mkdir -p /gfiles/da
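In case it helps: when the error is the usual "... or a prefix of it is already part of a volume", the brick directory still carries Gluster's extended attributes from the previous volume, and glusterd refuses to reuse it. A sketch of the usual cleanup — the brick path is a placeholder, since the original command is cut off; run as root on each affected brick:

```shell
BRICK=/path/to/brick   # placeholder: substitute your actual brick directory

# Remove the volume-id and gfid xattrs left over from the old volume.
setfattr -x trusted.glusterfs.volume-id "$BRICK"
setfattr -x trusted.gfid "$BRICK"

# Remove Gluster's internal metadata directory, then retry volume create.
rm -rf "$BRICK/.glusterfs"
```

The trusted.* xattr namespace is only visible to root, so a plain `ls -la` will not show any of this state; `getfattr -m . -d "$BRICK"` (as root) will.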
On Tue, Feb 17, 2015 at 12:05:05PM +0100, Niels de Vos wrote:
> Hi all,
>
> Later today we will have another Gluster Community Bug Triage meeting.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> - web client: https://webchat.freenode.net/?channels=gluster-meeting
> - dat
Hi everyone,
I've been trying several data sync solutions (git-annex, syncthing)
without much success for my use case. I've been considering glusterfs
and am still not sure it will work out. Here's my situation:
- Three machines, laptop, server1 and server2 all with enough storage
to hold all my
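For a setup like this, a common pattern is to replicate between the two always-on servers and have the laptop mount the volume when connected, rather than making the laptop itself a brick — Gluster expects bricks to stay online. A hedged sketch; hostnames, volume name, and paths are all invented:

```shell
# On the servers: a two-way replicated volume across server1 and server2.
gluster volume create mydata replica 2 \
  server1:/bricks/mydata server2:/bricks/mydata
gluster volume start mydata

# On the laptop: mount over FUSE whenever connected.
mount -t glusterfs server1:/mydata /mnt/mydata
```

Be aware that a two-node replica can split-brain if the servers lose contact with each other while both keep taking writes, so quorum settings are worth reading up on.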
Hi everybody,
I have a cluster with 14 nodes; on each node I take 3 disks to build a RAID 0
ext4 Linux volume. Then I take all the nodes and create a distributed striped
Gluster volume (stripe count 7). That volume is scratch space, so it should not
contain very important data, but if a node fails, how can I k
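Since striped (and plain distributed) volumes simply lose the affected files when a node dies, the usual way to survive a node failure is replication. A hedged sketch, not a recommendation for this exact cluster — hostnames and brick paths are invented; replica 2 across 14 nodes would yield 7 distribute subvolumes:

```shell
# Distributed-replicated volume: consecutive brick pairs are replicas,
# so either node of a pair can fail without data loss.
gluster volume create scratch replica 2 \
  node01:/bricks/raid0 node02:/bricks/raid0 \
  node03:/bricks/raid0 node04:/bricks/raid0
  # ... continue the pattern through node13/node14
gluster volume start scratch
```

The trade-off: replica 2 halves usable capacity, which may not be worth it for scratch data; also note that RAID 0 under each brick means any single disk failure takes out the whole brick.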
Hi all,
Later today we will have another Gluster Community Bug Triage meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- web client: https://webchat.freenode.net/?channels=gluster-meeting
- date: every Tuesday
- time: 12:00 UTC, 13:00 CET (in your terminal, run: date -d