To answer Franco's post, my guess is that accesses will be pretty evenly
balanced among the bricks. There may be a slight bias to newer data, but our
experience is that it's only a slight bias.
I really appreciate all of the great posts from everyone today. I will
consider what you have all
On Wed, Dec 11, 2013 at 6:06 PM, Anand Avati wrote:
> James,
> This is the right way to think about the problem. I have more specific
> comments in the script, but just wanted to let you know this is a great
> start.
>
> Thanks!
Thanks... So I'd like to solve these problems, but I think I need so
RE: meeting, sorry I couldn't make it, but I have some comments:
1) About the pre-packaged VM comments. I've gotten Vagrant working on
Fedora. I'm using this to rapidly spin up and test GlusterFS.
https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
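For anyone who wants to try the same thing, the basic flow looks roughly like
this (the box name is just an example, use whichever Fedora base box you have;
see the post above for the details):

  $ vagrant plugin install vagrant-libvirt
  $ vagrant init my-fedora-box
  $ vagrant up --provider=libvirt
  $ vagrant ssh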
In the coming week or so, I'l
If you are only adding disk space and don't necessarily need to increase
bandwidth, then you won't need to rebalance. It's only a problem if you
are adding clients and your most frequently accessed files are all on
the same brick.
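To put that in concrete terms, this is a rough sketch of the commands involved
(the volume and brick names are only placeholders, adjust them for your setup):

  # add the new brick; existing data stays where it is
  gluster volume add-brick myvol server5:/export/brick1

  # optionally recalculate the layout so new files can land on the new
  # brick, without migrating any existing data
  gluster volume rebalance myvol fix-layout start

  # a full rebalance, which does migrate data, is only needed if you want
  # the existing files spread out as well
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status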
On Thu, 2013-12-12 at 04:49 +, Scott Smith wrote:
> Pretty much
On Wed, Dec 11, 2013 at 8:15 PM, Scott Smith wrote:
> In trying to figure out what was going on with our GlusterFS system after
> the disastrous rebalance, we ran across two posts. The first one was
> http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/. If
> we understand it c
Pretty much, our files are never deleted. We just keep adding more
information. Think of them as write once, read multiple, delete never.
-Original Message-
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Wednesday, December 11, 2013 7:31 PM
To: Scott Smith
Cc: gluster-users@gl
Meeting minutes from 11th December available here:
http://meetbot.fedoraproject.org/gluster-meeting/2013-12-11/gluster-meeting.2013-12-11-15.01.html
-Vijay
- Original Message -
> The following is a new meeting request:
>
> Subject: Gluster Community Weekly Meeting
> Organizer: "Vijay B
How long-lived are your files? We have 400TB and are just about to
double that, but we have decided not to rebalance the data; instead we are
hoping that the disks will rebalance naturally through attrition rather than
wasting any valuable time or bandwidth moving data around.
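For what it's worth, a couple of ways to keep an eye on how evenly the bricks
are filling up over time (the volume name here is just a placeholder):

  gluster volume status data detail    # shows per-brick used/free disk space
  df -h                                # run on each server against the brick mounts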
On Thu, 2013-12-12 at 01:15 +
Scott,
It is really unfortunate that you were bitten by that bug. I am hoping to
convince you to at least not abandon the deployment this early with some
responses:
- Note that you typically don't have to proactively rebalance your volume.
If your new data comes in the form of new directories, they n
We are about to abandon GlusterFS as a solution for our object storage needs.
I'm hoping to get some feedback to tell me whether we have missed something and
are making the wrong decision. We're already a year into this project after
evaluating a number of solutions. I'd like not to abandon G
James,
This is the right way to think about the problem. I have more specific
comments in the script, but just wanted to let you know this is a great
start.
Thanks!
On Wed, Nov 27, 2013 at 7:42 AM, James wrote:
> Hi,
>
> This is along the lines of "tools for sysadmins". I plan on using
> these
Excellent interview.
Thanks to Jason, Justin and John for all their communication and
community involvement. The Gluster community is one of the project's
greatest strengths.
Looking forward to seeing more of these.
-Dan
Dan Mons
R&D SysAdmin
Unbreaker of broken things
Cutting
Thanks to Dan Mons for giving his time for an interview and to Jason Brooks for
writing this up.
This is a great case study that features GlusterFS in a "content cloud" type of
use case. From the article:
"Cutting Edge, a visual effects company that’s worked on films such as The
Great Gatsb
Should have included the fact that there are many of these entries in the logs
with quotas enabled:
"... quota context not set in inode ..."
Google tells me this should be fixed soon
(http://thr3ads.net/gluster-users/2013/08/2670145-Quota-context-not-set-in-inode).
I wonder if this is related
To confirm: Joe's explanation, uncomfortable as it is, looks to be the correct
one. When the servers were powered off and restarted (so the gluster
processes /had/ to be restarted), the new ones started up and used the correct
time format.
The 'problem' clients were the ones which were running the
Hello guys, I'm back =)
This suggestion works, but not fully.
Current scenario:
2 bricks (node1 and node2) - they contain all the data (about 2.0TB).
I added 2 more bricks (node3 and node4) with replica 4.
On a client server (client1) I already have "/data" mounted, and
executed the command "find . |
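For reference, the crawl that is usually recommended to trigger self-heal
after adding replica bricks looks something like this, run from the client
mount ("/data" is the mount mentioned above; "myvol" is just a placeholder).
On GlusterFS 3.3 and later the heal command achieves the same thing:

  find /data -noleaf -print0 | xargs --null stat > /dev/null

  # or, from any of the servers, on GlusterFS 3.3+:
  gluster volume heal myvol full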
I am currently combing through Bugzilla for all bugs reported this weekend so I
can send out swag packages. If you submitted a bug from Friday, December 6,
through Monday, December 9, please let me know by private email and send me
your mailing address.
Thanks!
-JM
- Original Message -
>