> I'm only dealing with about 10TiB Gluster volumes, so by far not at your
> planned level, but I really would like to see some results if you go
> for Gluster!
>
> Frank
>
>
> On Tuesday, 08.11.2016, 13:49, Thomas Wakefield wrote:
>> I think we are leaning towards erasure coding with 3 or 4 copies. But
>> open to suggestions.
I think we are leaning towards erasure coding with 3 or 4 copies. But open to
suggestions.
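For reference, Gluster's erasure coding is configured as a "disperse" volume
with data + redundancy counts rather than copies. A rough sketch, with the
volume name, hostnames, and brick paths all placeholders:

# 4 data + 2 redundancy bricks: survives the loss of any 2 of the 6
gluster volume create hpcvol disperse 6 redundancy 2 \
    server{1..6}:/bricks/hpcvol
gluster volume start hpcvol

Wider disperse counts lower the redundancy overhead; the syntax is the same.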
> On Nov 8, 2016, at 8:43 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com>
> wrote:
>
> On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
>> High Performance Computing, we have a small cluster on campus of about 50
>> Linux compute servers.
High Performance Computing, we have a small cluster on campus of about 50 Linux
compute servers.
> On Nov 8, 2016, at 8:37 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com>
> wrote:
>
> On 8/11/2016 9:58 PM, Thomas Wakefield wrote:
>> Still looking for use cases and opinions for Gluster in an education / HPC
>> environment. Thanks.
Still looking for use cases and opinions for Gluster in an education / HPC
environment. Thanks.
> On Nov 4, 2016, at 2:05 PM, Thomas Wakefield <dwake...@gmu.edu> wrote:
>
> Everyone, thanks in advance.
>
> We are looking to add a large filesystem to our compute facility at GMU.
Everyone, thanks in advance.
We are looking to add a large filesystem to our compute facility at GMU. We
are investigating if Gluster can work in a University setting for some HPC
work, and general research computing. Does anyone have use cases where Gluster
has been used in a university setting?
Running 3.3.1 on everything, clients and servers :(
Thomas Wakefield
Sr Sys Admin @ COLA
301-902-1268
On Apr 16, 2013, at 3:23 PM, Ling Ho <l...@slac.stanford.edu> wrote:
On 04/15/2013 06:35 PM, Thomas Wakefield wrote:
Help-
I have multiple gluster filesystems, all with the setting
)) {
        /* record this subvolume's free space as the new maximum */
        max = conf->du_stats[i].avail_space;
        max_inodes = conf->du_stats[i].avail_inodes;
...
ling
On 04/16/2013 12:38 PM, Thomas Wakefield wrote:
Running 3.3.1 on everything, clients and servers :(
Thomas Wakefield
Sr Sys Admin @ COLA
Was there ever a solution for setting min-free-disk? I have a cluster of about
30 bricks, some are 8TB and new bricks are 50TB. I should be able to set
gluster to leave 500GB free on each brick. But one of the 8TB bricks keeps
filling up with new data.
This is my current setting:
You can set a free disk space limit. This will force Gluster to write new
files to another brick.
gluster volume set <volume> cluster.min-free-disk XXGB (insert your volume
name and the amount of free space you want, probably 200-300GB)
Running a rebalance would help move your existing files.
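A minimal sketch of the above, assuming the volume is named "bigvol" (a
placeholder) and using the 500GB floor mentioned earlier:

# New files avoid any brick with less than 500GB free
gluster volume set bigvol cluster.min-free-disk 500GB
# Spread existing data onto the emptier bricks, then watch progress
gluster volume rebalance bigvol start
gluster volume rebalance bigvol status

min-free-disk also accepts a percentage (e.g. 10%), but with mixed 8TB and
50TB bricks an absolute value keeps the reserve the same on every brick.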
Help please-
I am running 3.3.1 on CentOS using a 10Gb network. I get reasonable write
speeds, although I think they could be faster. But my read speeds are REALLY
slow.
Executive summary:
On gluster client-
Writes average about 700-800MB/s
Reads average about 70-80MB/s
On server-
Writes
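A quick way to compare is timing the same file through the client mount with
dd, dropping the page cache before the read so it is not served from RAM.
Paths here are placeholders; run as root:

# Write 10GB through the FUSE mount
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=10240 conv=fdatasync
# Drop caches so the read actually crosses the network
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/gluster/testfile of=/dev/null bs=1M

Repeating the same test directly against a brick filesystem on the server
shows whether the gap is in the disks or in the Gluster/network path.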
Can anyone tell me how to fix having link files show on the client mount
point? This started after an upgrade to 3.3.1.
File names and user and group info have been changed, but this is the basic
problem. There are about 5 files in just this directory, and I am sure there
are more directories affected.
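Background, in case it helps: DHT link files are zero-byte files with only the
sticky bit set (mode ---------T) and a trusted.glusterfs.dht.linkto xattr, and
they normally live only on the bricks, hidden from clients. A sketch for
finding them on a brick, with /export/brick a placeholder:

# Run as root on each server, against the brick path, not the client mount
find /export/brick -type f -perm 1000 -size 0 \
    -exec getfattr -n trusted.glusterfs.dht.linkto {} \;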
Help please-
Last night I tried to upgrade from 3.2.5 to 3.3.1, had no success, and rolled
back to 3.2.5.
I followed the instructions for the upgrade as exactly as possible, but don't
understand this section:
5) If you have installed from RPM, goto 6). Else, start glusterd in upgrade
mode.
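If I am reading the 3.3 upgrade notes correctly, "upgrade mode" means running
glusterd once by hand so it can regenerate the volfiles for the new version,
before starting it normally:

# On each server, after installing the new version
glusterd --xlator-option *.upgrade=on -N

The RPM packages do this in their install scripts, which is why step 5 is
skipped for RPM installs.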
Is it possible to have multiple daemons running on the same server? I have 2
InfiniBand ports, and want to dedicate 1 glusterfs instance to each port. I am
maxing out the CPU on a single glusterfsd, pushing about 1.1GB/s over
InfiniBand to my server. But I know the disks are capable of doing
For back-end storage when not using direct-attached JBODs, what does Gluster
prefer, iSCSI or FC?
Looking at 2 different setups:
1. Hitachi AMS 2300 with 8Gb/s FC (8 ports)
or
2. Dell EqualLogic boxes with 10Gb/s iSCSI (unknown number of ports, at least
6 I think)
Either setup would be
Can you clarify: does your disk mount and work for a period of time, and then
fail? Or does your disk mount, but not become active for a period of time?
I run ib_verbs, and find that sometimes it takes a while for a connection to
be set up, but that it does start working fine in 2-5 minutes. I
I can't seem to delete whole directories with Gluster:
[r...@cola14 gluster]# rm -rf aaron/
rm: cannot remove directory `aaron//code/lib3.2/src/phspf': Directory not empty
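When rm -rf fails this way, the directory usually looks empty on the client
but still holds stray files (often stale link files) on one or more bricks.
A way to check, with the brick path a placeholder:

# Run on each server; lists whatever is left in the brick's copy
ls -la /export/brick/aaron/code/lib3.2/src/phspf

With the usual caveat about touching bricks directly, removing the leftovers
on the bricks and retrying the rm on the client normally clears it.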
I just added more disk space to my /gluster partition. I had 4 bricks, and now
I have 10 bricks. I ran the following 2
Should this command be run on the servers or the clients? I have done both, and
still have issues.
And is it correct as listed below? I get "no such attribute" errors.
[r...@g1 ~]# find /export/g1a -type d -exec setfattr -x trusted.glusterfs.dht
{} \;
setfattr: /export/g1a/data: No such attribute
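The trusted.* xattrs live on the brick filesystems, so the find/setfattr has
to run as root on the servers against the brick path. The "no such attribute"
message is usually harmless here: it just means that particular directory
never had the dht layout xattr. A way to see which directories carry it:

# Run as root on a server; prints the xattr if present, nothing otherwise
getfattr -d -m trusted.glusterfs.dht -e hex /export/g1a/data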
What's the best way to add bricks, and get distribute to use them? I added 2
more bricks, and the total size increased for the filesystem, but I can't get
any traffic on the new disks. I remounted the filesystem, and ran an ls -Rl,
but I still don't see any traffic to the disks. I do see
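On releases with the gluster CLI, the usual sequence (names below are
placeholders) is to add the bricks and then rebalance, since distribute only
assigns hash ranges on the new bricks after a fix-layout:

gluster volume add-brick myvol server5:/export/brick5 server6:/export/brick6
# Recompute directory layouts so new files can land on the new bricks
gluster volume rebalance myvol fix-layout start
# Or also migrate existing data onto them
gluster volume rebalance myvol start
gluster volume rebalance myvol status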
This seems to have helped, thanks.
On Nov 19, 2009, at 12:12 PM, Amar Tumballi wrote:
What's the best way to add bricks, and get distribute to use them? I
added 2 more bricks, and the total size increased for the filesystem,
but I can't get any traffic on the new disks. I remounted the
filesystem, and ran an ls -Rl, but I still don't see any traffic.
...point? I couldn't find an answer in the documentation. Even with XFS, I am
worried about having a single 40TB volume. So I was thinking either 2
or 4 volumes to combine together to get to 40TB.
Thanks in advance,
Thomas Wakefield
Systems Administrator COLA/IGES
tw...@cola.iges.org