> > 1) Is it OK to build my storage on top of Ubuntu Server?
> There are no known limitations to using Ubuntu Server, IMHO.
OK.
> > 2) Is there any GlusterFS driver for FreeBSD? I understood that there is
> > none so far?
> There is an experimental version available to test out - this is fully
> G
Hi,
I am looking at using Gluster to build a multi-purpose network storage
system for my lab; it should serve VMware, FreeBSD, and Linux systems (as
for the Samba part, I am not ready to port it to a Gluster machine yet,
too much local configuration to port).
1) Is it OK to build my storage on top of Ubuntu Server?
istration, I will go with solution 1 any time. Now if RAID is higher
than 0, that's another discussion.
Of course solution 3 was faster, but it has no Gluster :)
Bests,
Olivier
> Thank you very much!
> Don
>
> ----- Original Message -----
> From: "Olivier Nicole"
management; it's all a balance of pros and cons :)
Bests
Olivier
> Jeff White
> Linux/Unix Systems Engineer
> University of Pittsburgh - CSSD
> jaw...@pitt.edu
>
>
> On 10/05/2011 10:45 PM, Olivier Nicole wrote:
> > Hi Don,
> >
> >>
Hi Don,
> Thanks for your reply. Can you explain what you mean by:
>
> > Instead of configuring your 8 disks in RAID 0, I would use JBOD and
> > let Gluster do the concatenation. That way, when you replace a disk,
> > you just have 125 GB to self-heal.
If I am not mistaken, RAID 0 provides no redundancy
Hi Don,
> 1. Remove the brick from the Gluster volume, stop the array, detach the 8
> vols, make new vols from last good snapshot, attach new vols, restart array,
> re-add brick to volume, perform self-heal.
>
> or
>
> 2. Remove the brick from the Gluster volume, stop the array, detach the 8
Hi,
> I have been testing rebalance...migrate-data in GlusterFS version 3.2.3,
> following add-brick and fix-layout. After migrate-data, the volume
> is 97% full, with some bricks being 100% full. I have not added any
> files to the volume, so there should be an amount of free space at least
Hi,
I just did another test this morning.
After rebalancing, I get an error in the logs
/var/log/gluster/etc-glusterfs-glusterd.vol.log about file size:
[2011-09-29 09:36:39.703090] W
[glusterd-rebalance.c:251:gf_glusterd_rebalance_move_data] 0-glusterfs: file
sizes are not same : /etc/gluster
Hi,
I have a hard time understanding how volume rebalance works.
I had 2 bricks: gluster3:/data & gluster4:/data
They contained:
on@gluster3:/data$ ls -lrat
total 5117020
-rwx------ 1 root root 108693504 2011-09-28 19:38
VMware-tools-linux-8.6.0-425873.iso
-rwx------ 1 root root 93786112 2011-09
course they
were all the same :)
Best regards,
Olivier
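For context on why rebalance moves (or leaves alone) particular files: Gluster's distribute translator places each file on a brick by hashing its name against per-brick hash ranges stored in the directory layout. The sketch below is a toy illustration of name-hash placement only, not Gluster's actual elastic-hash implementation; the brick names are taken from the message above, and the use of MD5 with a modulo is an assumption for illustration.

```python
import hashlib

def pick_brick(filename, bricks):
    """Toy name-hash placement: hash the file name and map it onto one
    of the bricks. Gluster's real DHT assigns hash *ranges* per brick;
    this modulo version only mimics the key idea that placement depends
    on the file name, not on file size or brick free space."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["gluster3:/data", "gluster4:/data"]
for name in ["VMware-tools-linux-8.6.0-425873.iso", "another-file.iso"]:
    print(name, "->", pick_brick(name, bricks))
```

This is why adding a brick requires a fix-layout (to extend the hash ranges) before migrate-data will move any existing files.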
> On Wed, Sep 28, 2011 at 3:07 PM, Olivier Nicole wrote:
>
> > Hi,
> >
> > For testing purposes, I have set up 4 virtual machines, running Ubuntu
> > 11.04 and Gluster 3.2.3-1 from the Debian packages
> >
Hi,
For testing purposes, I have set up 4 virtual machines, running Ubuntu
11.04 and Gluster 3.2.3-1 from the Debian packages
(glusterfs_3.2.3-1_amd64_with_rdma.deb).
I start the CLI from one of the servers, and I can peer probe one other
server, but trying to add a 3rd server takes a long time to respond