thanks, working now
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-04-09 18:51 GMT+02:00 Chen Chen <chenc...@smartquerier.com>:
> Hi Mathieu,
>
> It changed to "
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/rsa.pub
> now.
>
> B
GPG key retrieval failed: [Errno 14] HTTP Error 404 - Not Found
Can someone fix this missing GPG key?
thanks
Mathieu CHATEAU
http://www.lotp.fr
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster
glusterfs
1573 root 20 0 3796044 2.924g 3580 S 0.0 18.8 7:07.05 glusterfs
1571 root 20 0 2469924 1.695g 3588 S 0.0 10.9 1:19.75 glusterfs
thanks
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-02-16 18:54 GMT+01:00 Soumya Koduri <skod...@redhat.com>:
>
sion 3.7.8
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-02-14 10:56 GMT+01:00 Nico Schottelius <
nico-gluster-us...@schottelius.org>:
> Hello everyone,
>
> we have a 2 brick setup running on a raid6 with 19T storage.
>
> We are currently facing the problem tha
Hello,
not sure I understand. Gluster stores files as regular files, which is one
big difference from Ceph and others that store containers/blocks.
Just mount the disk normally; your data should be there.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-24 11:37 GMT+01:00 Serkan
Thanks for all your tests and times, it looks promising :)
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksa...@natalenko.name>:
> OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the
> following
> patches:
>
copying files, to ensure that file healing can
handle your file count.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-18 4:21 GMT+01:00 Pawan Devaiah <pawan.deva...@gmail.com>:
> Hi Guys,
>
> Sorry I was busy with setting up those 2 machines for GlusterFS
> So my machine
RAID 10 provides the best performance (much better than RAID 6)
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-13 5:35 GMT+01:00 Pranith Kumar Karampuri <pkara...@redhat.com>:
> +gluster-users
>
> On 01/13/2016 09:44 AM, Pawan Devaiah wrote:
>
> We would be look
able
performance.read-ahead: disable
performance.client-io-threads: on
performance.write-behind-window-size: 1MB
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
cluster.readdir-optimize: on
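For reference, settings like the list above are applied per volume with the gluster CLI; a sketch using a placeholder volume name (`myvol` is made up):

```shell
# Apply the quoted tuning to a hypothetical volume "myvol"
gluster volume set myvol performance.read-ahead disable
gluster volume set myvol performance.client-io-threads on
gluster volume set myvol performance.write-behind-window-size 1MB
gluster volume set myvol cluster.lookup-optimize on
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4
gluster volume set myvol cluster.readdir-optimize on

# Check the result
gluster volume info myvol
```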
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-13 7:39 GMT+01:00 Arash S
me time (one order, serial numbers close to each other), they may fail close
together in time (if something went wrong at the factory).
I have already seen 3 disks fail within a few days.
just my 2 cents,
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 4:36 GMT+01:00 Pranith Kumar
Hello,
I also experience high memory usage on my gluster clients. Sample :
[image: inline image 1]
Can I help in testing/debugging ?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>:
>
>
> On 01/11/2016 05:1
I tried as suggested (sync first so dirty pages are written back, then drop the caches):
sync
echo 3 > /proc/sys/vm/drop_caches
It lowered usage a bit:
before:
[image: inline image 2]
after:
[image: inline image 1]
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:34 GMT+01:00 Mathieu Chateau <mathieu.chat...@lotp.fr>:
Hello,
did you test the performance of the storage bricks themselves before setting up?
How do they connect to the NetApp for storage? NFS? iSCSI?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-12-11 7:02 GMT+01:00 Vijay Bellur <vbel...@redhat.com>:
> On 12/09/2015 07:03 AM,
static mode (name depends on
vendor)?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-11-25 15:44 GMT+01:00 Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] <
uthra.r@nasa.gov>:
> Thank you all for taking the time to reply to my email:
>
>
>
> Here is some more inf
Hello,
except for NFS, the client writes synchronously to every replica, all the time.
It also fetches metadata from all of them.
Since writes are synchronous, throughput is bounded by the slowest replica;
with 2 nodes, the slower one sets the pace.
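The "slowest replica sets the pace" point can be illustrated with a toy model (the node names and latency figures below are hypothetical, not measurements):

```python
# Toy model: a synchronous replicated write only completes once the
# slowest replica has acknowledged it, so per-write latency is the
# maximum of the replica latencies, not the average.
replica_latency_ms = {"node-a": 2.0, "node-b": 9.0}  # hypothetical figures

write_latency_ms = max(replica_latency_ms.values())
print(write_latency_ms)  # 9.0: a faster second replica cannot lower this
```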
Cordialement,
Mathieu
Hello,
I suggest Tom clarify whether he means geo-replication or replication.
Tom, do you expect automatic failover to the remote site if the local one is down?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-11-23 14:37 GMT+01:00 Kaleb S. KEITHLEY <kkeit...@redhat.com>:
> On 11/23/2015 03:26 AM, T
t shot in this case is to run the nagios script manually in a loop, say
every 10s, to catch it yourself.
If you wait for the nagios alert before logging on to check, a short-lived
issue may already be gone.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-10-20 13:23 GMT+02:00 Gary Armstrong &l
ood.
My 2 cents
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-10-05 8:01 GMT+02:00 Kaamesh Kamalaaharan <kaam...@novocraft.com>:
> Hi everyone,
> I was wondering if anyone has done any performance testing on gluster 3.62
> and above where they test the maximum number of
Hello,
from the Internet I get "Not Found" for the gluster-nagios* URLs.
Are they published?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-28 14:21 GMT+02:00 Ramesh Nachimuthu <rnach...@redhat.com>:
>
>
> On 09/24/2015 10:21 PM, André Bauer wrote:
>
> I
another server to get 3 and 3.
Yes, in this setup it's like a RAID 0: if you lose a disk, that entire
server is down.
But you get much better performance and much more usable space, and still
survive a single server crash.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-21
locally instead for availability.
So it works as you would expect (good), just slowly.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-16 14:23 GMT+02:00 Paul Thomas <p...@thomas3.plus.com>:
> Hi,
>
> I’m new to shared file systems and horizontal cloud scaling.
>
> I have alr
as a backup
if the first one is offline. The gluster client then learns all nodes from
the one it connected to.
So even if you have 50 bricks, nothing changes in fstab.
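A sketch of such an fstab entry (server and volume names are placeholders; `backup-volfile-servers` only affects fetching the volfile at mount time, after which the client talks to all bricks directly):

```
# /etc/fstab — mount via gs1, fall back to gs2 for the volfile
gs1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=gs2  0 0
```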
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-07 2:58 GMT+02:00 Antun Krasic <an...@martuna.co>:
> Hello,
>
> I have two servers
pshot only contains delta blocks).
Rsync with hardlinks is more resilient (the inode stays until the last
reference is removed).
But I would be interested to hear about production setups relying on it.
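The rsync-with-hardlinks idea can be sketched in a few lines (all paths and file names below are made up; this models `rsync --link-dest` rotation rather than invoking rsync itself):

```python
import os
import shutil
import tempfile

# Each new snapshot hard-links unchanged files from the previous one, so
# the file data (the inode) survives until the last snapshot referencing
# it is removed.
root = tempfile.mkdtemp()
src, snap1, snap2 = (os.path.join(root, d) for d in ("src", "snap1", "snap2"))

os.makedirs(src)
with open(os.path.join(src, "data.txt"), "w") as f:
    f.write("important")

shutil.copytree(src, snap1)  # first backup: full copy
os.makedirs(snap2)           # second backup: hard-link the unchanged file
os.link(os.path.join(snap1, "data.txt"), os.path.join(snap2, "data.txt"))

shutil.rmtree(snap1)         # rotate out the old snapshot
with open(os.path.join(snap2, "data.txt")) as f:
    content = f.read()       # inode is still alive via snap2's link
print(content)
```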
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-05 21:03 GMT+02:00 M S Vishwanath Bhat <msvb...@gmail.com>:
rested in solution 1), but they need to be stored on distinct
drives/servers. We can't afford to lose the data and the snapshots to human
error or disaster.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-03 13:05 GMT+02:00 Merlin Morgenstern <merlin.morgenst...@gmail.com>
:
> I have
what gets into the cache or not:
performance.cache-max-file-size 0
performance.cache-min-file-size 0
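If one wanted nonzero bounds, they could be set per volume the usual way (the volume name and the 128MB upper bound below are placeholders, not recommendations):

```shell
gluster volume set myvol performance.cache-min-file-size 0
gluster volume set myvol performance.cache-max-file-size 128MB
```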
just my 2cents
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-01 21:15 GMT+02:00 Christian Rice <cr...@pandora.com>:
> This is still an issue for me, I d
Hello,
did you do some basic tuning to help?
Are you using the latest version?
Replicated, or only distributed?
Why use NFS rather than the native FUSE client to mount the volume?
Did you install the VM tools (if using VMware Fusion)?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-24 15:20 GMT+02
performance.io-thread-count: 16
Options I set on brick in fstab for XFS mounted volumes used by gluster:
defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier
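Put together, such a brick mount could look like this in /etc/fstab (the device and mount point are placeholders; note that `nobarrier` trades crash safety for speed and generally assumes a battery-backed RAID controller):

```
/dev/sdb1  /bricks/brick1  xfs  defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier  0 0
```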
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-24 16:11 GMT+02:00 Merlin Morgenstern
Hello,
this is to be done on the brick, not on the client side.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-24 17:57 GMT+02:00 Merlin Morgenstern merlin.morgenst...@gmail.com
:
thank you for the recommendation on parameters.
I tried:
gs2:/volume1/data/nfs glusterfs
defaults
720 bricks! Respect!
On 24 August 2015 09:48, Mohamed Pakkeer mdfakk...@gmail.com wrote:
Hi,
I have a cluster of 720 bricks, all 4TB in size. I have
changed cluster.min-free-disk from its default of 10% to 3%, so every disk
should keep a minimum of 3% free. But some
Hello,
Just in case: did you create and test from the client (and not locally
on a brick)?
Sent from my iPad
On 21 August 2015 at 13:36, tecni...@gmx.de tecni...@gmx.de wrote:
Hi all,
I have the following problem with GlusterFS:
Setup:
1x Client (Ubuntu 14.04 / VirtualBox)
3x
,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-17 5:54 GMT+02:00 Colin Coe colin@gmail.com:
Hi all
I've setup a two node replicated system that is providing NFS and
SMB/CIFS services to clients (RHEL5,6,7 with NFS and
WinXP,7,2008R2,2012R2).
I'm trying to create a DFS mount point for Windows
? Both ? If both, going to
same shares?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-09 17:56 GMT+02:00 David david.p...@gmail.com:
Hi,
I need some help in making this call choosing between the two.
I have no experience with MS DFS or with Windows server OS as a file
server
I do have DFS-R in production; it sometimes replaced NetApp filers.
But nothing with a workload similar to my current GlusterFS.
In active/active, the most common issue is a file changed on both sides (no
global lock).
Will users access the same content from Linux and Windows?
Cordialement,
Mathieu CHATEAU
http
antivirus scan
- Tune network card (send and receive buffer)
- ...
Start with 2012 R2, to get SMB v3 and all latest stuff
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-10 7:11 GMT+02:00 David david.p...@gmail.com:
No, but files can be accessed from different clients from different
Gluster in
replicated mode. Users won't notice any latency,
At the price that replication is async.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-10 7:26 GMT+02:00 Ira Cooper i...@redhat.com:
Mathieu Chateau mathieu.chat...@lotp.fr writes:
I do have DFS-R in production, that replaced
Maybe related to the insecure-port issue that was reported?
Try:
gluster volume set xxx server.allow-insecure on
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-07 23:47 GMT+02:00 Geoffrey Letessier geoffrey.letess...@cnrs.fr:
I’m not really sure I understand your answer.
I
Sorry, I only read "replicated" in your first mail and overlooked the
distributed part :'(
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-04 9:47 GMT+02:00 Florian Oppermann gluster-us...@flopwelt.de:
In my current configuration I have a distributed and replicated volume
which is (to my
, Mathieu Chateau wrote:
Hello,
As you are in replicate mode, every write is sent synchronously to
all bricks, and in your case to a single HDD.
Writes: at best you get the performance of a single HDD (in practice
you will get less).
Reads: all bricks are queried for metadata, one
date is not the same, but is the content different?
Did you perhaps disable the mtime attribute to get better performance?
What are these 2 GFIDs?
You can use this script to find out what they are:
https://gist.github.com/semiosis/4392640
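The linked script essentially exploits the .glusterfs layout on a brick: each file's GFID is kept as a hard link at `.glusterfs/<first 2 hex>/<next 2 hex>/<gfid>`, so matching inodes recovers the real path. A runnable sketch on a throw-away fake brick (the brick layout and GFID below are made up for the demo):

```shell
# Build a fake brick that mimics gluster's hard-link structure
BRICK=$(mktemp -d)
GFID=abcdefab-1234-5678-9abc-def012345678
mkdir -p "$BRICK/.glusterfs/ab/cd" "$BRICK/data"
echo hello > "$BRICK/data/file.txt"
ln "$BRICK/data/file.txt" "$BRICK/.glusterfs/ab/cd/$GFID"

# Resolve the GFID back to its real path by matching the inode,
# skipping the .glusterfs tree itself
RESOLVED=$(find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
                -samefile "$BRICK/.glusterfs/ab/cd/$GFID" -print)
echo "$RESOLVED"
```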
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-08-03 12:30 GMT+02:00
://www.gluster.org/community/documentation/index.php/OperatingVersions
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-28 20:34 GMT+02:00 Mathieu Chateau mathieu.chat...@lotp.fr:
Hello,
any official doc on how to do that? I can't find any so far, just notes on
what it is used for
thanks
Hello,
sorry, operating-version is a setting like the others; you just need to find
the right name, op-version:
gluster volume get all cluster.op-version
then to set version (global to all volumes):
gluster volume set all cluster.op-version 30702
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015
Hello,
any official doc on how to do that? I can't find any so far, just notes on
what it is used for
thanks
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-28 20:27 GMT+02:00 Atin Mukherjee atin.mukherje...@gmail.com:
That shouldn't be a problem. Ensure that you bump up the op
Hello,
thanks for this guidance, I wasn't aware of it!
Is there any doc that describes all settings and their values?
For example, I can't find documentation for cluster.lookup-optimize
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-27 14:58 GMT+02:00 André Bauer aba...@magix.net:
Some more infos
/vfs_cache_pressure
(I have mainly many small files and things are slow).
I am using version 3.7.1, upgrading to 3.7.2-3
Thanks for your help
Regards,
Mathieu CHATEAU
http://www.lotp.fr
.
Don't play with turning nodes off, as you may create more issues than you solve.
just my 2cents
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-24 19:34 GMT+02:00 John Kennedy skeb...@gmail.com:
I am new to Gluster and have not found anything useful from my friend
Google. I have not dealt
guess gluster can handle that load; you are using big files, and that is
where gluster delivers its highest throughput. Nevertheless, you will need
many disks to provide that I/O, even more if using replicated bricks.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-14 21:15 GMT+02:00 Forrest Aldrich
, and so 20 disks may not be enough anyway.
You must first be sure the storage can physically meet your needs in
terms of capacity and performance.
Then you can choose the solution that best fits those needs.
just my 2cts
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-14 22:21 GMT+02:00
)
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-14 20:49 GMT+02:00 Forrest Aldrich for...@gmail.com:
I'm exploring solutions to help us achieve high throughput and scalability
within the AWS environment. Specifically, I work in a department where we
handle and produce video content
Yes, after that it depends on your average file size
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-12 12:43 GMT+02:00 Geoffrey Letessier geoffrey.letess...@cnrs.fr:
Hello Mathieu,
Thank you for replying.
What do you think about reconstructing every RAID VD with 128KB stripe
size
this issue
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-07 6:43 GMT+02:00 Geoffrey Letessier geoffrey.letess...@cnrs.fr:
Dear all,
We are currently in a very critical situation because all the scientific
production in our computing center has been stopped for more than a week
Great :)
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-22 14:52 GMT+02:00 Kaleb S. KEITHLEY kkeit...@redhat.com:
On 06/21/2015 01:07 AM, Atin Mukherjee wrote:
All,
GlusterFS 3.7.2 has been released. The packages for Centos, Debian,
Fedora and RHEL will be available at
http
# recommended default congestion control is htcp
sysctl -w net.ipv4.tcp_congestion_control=htcp
But it's still really slow, even if it has improved
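The congestion-control switch above is usually paired with larger socket buffer limits so TCP windows can actually grow; a sketch with common example values (not tuned recommendations):

```
# /etc/sysctl.conf — allow TCP windows to grow (example values)
net.ipv4.tcp_congestion_control = htcp
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```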
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-20 2:34 GMT+02:00 Geoffrey Letessier geoffrey.letess...@cnrs.fr:
Re,
For comparison, here is the output
gave you some kernel tuning to help the TCP windows grow bigger,
faster.
Are you using the latest version (3.7.1)?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-20 11:01 GMT+02:00 Geoffrey Letessier geoffrey.letess...@cnrs.fr:
Hello Mathieu,
Thanks for replying.
Previously, i’ve never
to know the impact before setting performance.cache-size
to 4GB, for example.
I have small files and am trying to cache whenever possible to speed
things up
Thanks
Regards,
Mathieu CHATEAU
http://www.lotp.fr
-gluster-xxx[1615]: -
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
]
/usr/sbin/glusterfs(+0x6351)[0x7f66b1c4f351]
-
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-17 16:53 GMT+02:00 Krutika Dhananjay kdhan...@redhat.com:
Hi,
Looks like the process crashed.
Could you provide the logs associated with this process along with the
volume
for reads lookup
would be possible I guess ?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-08 8:11 GMT+02:00 Ravishankar N ravishan...@redhat.com:
On 06/08/2015 11:34 AM, Mathieu Chateau wrote:
Hello Ravi,
thanks for clearing things up.
Anything on the roadmap that would help my
Hello Ravi,
thanks for clearing things up.
Anything on the roadmap that would help my case?
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-06-08 6:37 GMT+02:00 Ravishankar N ravishan...@redhat.com:
On 06/06/2015 12:49 AM, Mathieu Chateau wrote:
Hello,
sorry to bother again
?
Thanks for your help :)
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-05-11 7:26 GMT+02:00 Mathieu Chateau mathieu.chat...@lotp.fr:
Hello,
thanks for helping :)
If a gluster server is rebooted, is there any way to make the client fail
back to that node after the reboot?
How can I know which node a client is using
Hello,
thanks for helping :)
If a gluster server is rebooted, is there any way to make the client fail
back to that node after the reboot?
How can I know which node a client is using? I see TCP connections to both
nodes
Regards,
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-05-11 7:13 GMT+02:00 Ravishankar N
.
Regards,
Mathieu CHATEAU
http://www.lotp.fr