On 01/03/2014 01:59 AM, Mikhail T. wrote:
[Please, CC replies to me directly as I am not subscribed to the list.
Thank you.]
Joe Landman wrote:
As mentioned above, four test files were used for the benchmark:
1. Small static file - 429 bytes
2. Larger static file - 93347 bytes
On 01/02/2014 06:26 PM, Mikhail T. wrote:
We are building a new web-serving farm here. Believing, like most
people, that the choice of technology does not affect performance in
read-dominated workloads (such as ours), we picked GlusterFS for its
rich feature set.
[...]
As mentioned
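For context, benchmarks against static files of those two sizes are often
driven with a simple HTTP load generator such as ApacheBench; the snippet
does not show the actual tool or parameters used, so the following is only
an illustrative sketch (hostname, request count, and concurrency are
assumptions):

    # small static file (429 bytes)
    ab -n 10000 -c 32 http://webhost.example.com/small.html
    # larger static file (93347 bytes)
    ab -n 10000 -c 32 http://webhost.example.com/large.html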
On 08/12/2013 03:52 PM, Justin Clift wrote:
On 12/08/2013, at 8:50 PM, Jay Vyas wrote:
Okay, makes sense. So... um, how about JSON? :)
Yeah, might be a bunch more readable. Also easier to
work with from things like Node.js.
Just personally not sure how to structure it well
with JSON.
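As a purely hypothetical sketch (not an existing gluster output format),
status information could be grouped per volume and per brick, for example:

    {
      "volumes": [
        { "name": "testvol",
          "status": "Started",
          "bricks": [
            { "path": "server1:/export/brick1", "online": true },
            { "path": "server2:/export/brick1", "online": true }
          ]
        }
      ]
    }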
On 06/27/2013 10:30 AM, Stephan von Krawczynski wrote:
On Wed, 26 Jun 2013 11:46:36 -0400
Jeff Darcy jda...@redhat.com wrote:
On 06/27/2013 09:37 AM, Stephan von Krawczynski wrote:
On Wed, 26 Jun 2013 11:04:07 -0400 Jeff Darcy jda...@redhat.com
wrote:
[Jeff on UUIDs]
I generally vote
On 06/27/2013 10:58 AM, Stephan von Krawczynski wrote:
On Thu, 27 Jun 2013 10:37:23 -0400
Joe Landman land...@scalableinformatics.com wrote:
No. One of the largest issues we and our customers have been having for
years has been tightly tying gluster volume creation to single IP
addresses
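One commonly suggested mitigation, sketched here with hypothetical
hostnames and brick paths, is to probe peers and create volumes by DNS
name rather than by a literal address, so a brick is not pinned to one
interface; whether this fully addresses the multi-homing concern depends
on the setup:

    gluster peer probe storage01
    gluster peer probe storage02
    gluster volume create webvol replica 2 \
        storage01:/export/brick1 storage02:/export/brick1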
On 03/24/2013 07:03 PM, Jeff Darcy wrote:
On 03/24/2013 04:43 AM, Stephan von Krawczynski wrote:
So the automated distribution of the vol-files does not help
the majority at all.
It's not just about distribution of volfiles. It's also about ensuring
their consistency across many servers and
On 06/10/2011 10:50 AM, Anand Avati wrote:
We have plans to introduce an option to scale up/down replication
counts on a volume. This way you can move between distributed and
distributed-replicated volumes by doing add-brick with a proper 'replica
N' parameter. Running replicate with a [...] is
the replicate translator serving without the second server? In what
way is this more beneficial than loading the replicate translator
only when the second server is introduced to the system?
Avati
On Sat, May 28, 2011 at 7:11 PM, Joe Landman
land...@scalableinformatics.com wrote:
This patch to the cli enables a very simple replica 1 mirror creation
(similar to the 'missing' device keyword in mdadm RAID1 creation).
It will likely need additional supporting work in add-brick so that
incomplete mirrors are assembled correctly for distributed-replicated cases.
The use cases: creating a replica before the second mirror is ready, or
creating the metadata for a mirror so you can prepare a unit for mirroring
later without recreating the volume. It's a common system configuration
situation.
---
Joe Landman
land...@scalableinformatics.com
Please pardon brevity and typos
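Taken together, the workflow described above would look roughly like the
following sketch (server and volume names are hypothetical; the explicit
'replica 1' form is what the patch adds, and the 'replica N' add-brick
form is what Avati describes as planned):

    # create a single-sided mirror now, with only one unit available
    gluster volume create mirrorvol replica 1 server1:/export/brick1
    gluster volume start mirrorvol
    # later, when the second unit is ready, grow it into a real mirror
    gluster volume add-brick mirrorvol replica 2 server2:/export/brick1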
On 05/07/2011 12:58 PM, Nobodys Home wrote:
Hello,
I was afraid to delete anything since I have seen no DR docs related
to this state. We did seem to resolve the situation, however:
1. Did md5sums of all files in /etc/glusterd and /etc/glusterfs on gbe02
and compared results to other 7 peers.
2.
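Step 1 above can be scripted; a rough sketch, assuming the gbe01..gbe08
hostnames and passwordless ssh between the peers:

    for h in gbe01 gbe02 gbe03 gbe04 gbe05 gbe06 gbe07 gbe08; do
        ssh "$h" 'find /etc/glusterd /etc/glusterfs -type f -exec md5sum {} +' \
            | sort -k2 > "sums.$h"
    done
    # compare each peer against a known-good one, e.g.:
    diff sums.gbe01 sums.gbe02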
On 05/06/2011 09:48 PM, Nobodys Home wrote:
Hello All,
I have 8 servers.
7 of the 8 say that gbe02 is in "State: Peer Rejected (Connected)".
gbe08 says it is connected to the other 7, but they are all "State: Peer
Rejected (Connected)".
So it would appear that gbe02 is out of sync with the
On 8/6/2010 2:09 PM, Jacob Shucart wrote:
Hello,
Does anyone have any experience using MPI-IO on top of the Gluster File
System? If so, what was involved? Please let me know. Thank you.
Hi Jacob
We've done this. Nothing special was required, it just worked. We
ran the LANL MPI-IO benchmark
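The snippet does not name the exact LANL benchmark; as an illustrative
stand-in only, an MPI-IO run against a FUSE-mounted Gluster volume is
just an ordinary MPI job pointed at the shared mount, e.g. with IOR:

    # 16 ranks exercising the MPI-IO backend on the gluster mount (paths assumed)
    mpirun -np 16 ./ior -a MPIIO -o /mnt/gluster/ior.testfile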
Hi folks
I seem to remember a while ago we were able to do this. There was
something like --with-ofed=/path/to/ofed on the config line.
Unfortunately I need this now for a system I am testing on. Has
anyone successfully built gluster with OFED in a non-standard location?
Thanks!
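Whether or not a --with-ofed switch exists in the current build system,
the generic autoconf route is to point the preprocessor and linker at the
OFED prefix; a sketch assuming an install under /opt/ofed:

    ./configure CPPFLAGS="-I/opt/ofed/include" \
                LDFLAGS="-L/opt/ofed/lib64 -Wl,-rpath,/opt/ofed/lib64"
    make && make install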
Tom O'Connor wrote:
Hi List,
We currently have a very irritating problem with CentOS 5.3 x86_64
running on a Dell PowerEdge SC1435. The problem is this: we are
experiencing frequent kernel panics while using glusterfs and FUSE.
Across the cluster of servers, we are experiencing roughly 1
Dan Parsons wrote:
Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time
and rearchitect things.
Hardware: Gluster servers: 4 blades connected via 4 Gbit FC to fast,
dedicated storage. Each server has two bonded Gig-E links to the rest
of my network, for 8 Gbit/s theoretical
Dan Parsons wrote:
OK, I did a little bit of testing with this new setting and have not
been able to reproduce the problem. However, this has happened in the
past, as in: I had this problem, made a setting change, the problem seemed
fixed, and then it came back. So it might not totally be fixed, but
Fred Hucht wrote:
Hi!
The glusterfsd.log on all nodes is virtually empty; the only entry on
2008-11-25 reads
2008-11-25 03:13:48 E [io-threads.c:273:iot_flush] sc1-ioth: fd context
is NULL, returning EBADFD
on all nodes. I don't think that this is related to our problems.
Regards,
Fred Hucht wrote:
Hi,
crawling through all /var/log/messages, I found on one of the failing
nodes (node68)
Does your setup use local disk? Is it possible that the backing store
is failing?
If you run
mcelog > /tmp/mce.log 2>&1
on the failing node, do you get any output in
rhubbell wrote:
Yeah I've looked at blastwave before. Just never used it. I see they
have packages for tla.
Basically, if we are constrained to use Solaris, we start by installing
the Blastwave bits so we can pkg-add useful things that Solaris omits.
Who builds the packages they
rhubbell wrote:
That was my first thought when I found out sunfreeware had no pre-built
tla. I optimistically ruled it out when I thought I could build tla
on Solaris. But it turns out that trying to build tla on Solaris
is a gift in disguise. I'm thinking I will look at getting tla
to build
Rohan wrote:
Any feedback?
Hi Rohan:
Could you strace a few of these to see what they are up to? BTW, are
you using xinetd to launch vsftpd or running it standalone? We usually
recommend the latter to our customers. If you are running it through
xinetd, please restart xinetd, and add
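For reference, the kind of strace invocation suggested above might look
like the following sketch (the PID is a placeholder):

    # follow forked children, timestamp each call, and log to a file
    strace -f -tt -o /tmp/vsftpd.strace -p <pid-of-stuck-vsftpd>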
Rohan wrote:
Hi Joe,
Thanks for your reply. As per your instructions, I started strace. I'm getting
read(5,  <unfinished ...> as the last line.
I couldn't understand what it's trying to do.
Your help is really appreciated in this matter.
Let me look at this for a few moments.
--
Joseph
Brandon Lamb wrote:
Any issues running glusterfs on a 64-bit version of Linux?
I am looking at talking the powers that be into getting us some new
64-bit hardware with some SATA RAIDs, but I want to make sure there
aren't any known issues with glusterfs first.
Greetings Brandon:
To date we