Hello everyone,
Currently one of our production gluster nodes is consuming a lot of
memory; in particular, the gluster NFS process makes heavy use of swap
and does not release it.
Has this happened to anyone else?
# for file in /proc/*/status ; do awk '/VmSwap|Name|Pid/{printf $2 " " $3}END{print ""}' $file; done | sort -k 3 -n -r | head
Hi Kotresh,
Could you please let us know whether it is possible to get this patch, or
to backport it to the Gluster 3.7.6 version?
Regards,
Abhishek
On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL
wrote:
> Is there a way to apply this patch to Gluster 3.7.6, or is upgrading the
> version the only option?
>
>
Hi all,
I was wondering if anyone has run into this problem when running xglfs,
a GlusterFS API FUSE client (https://github.com/gluster/xglfs).
1. I mounted xglfs as a client on each server and tried to write files to
the glusterfs volume through xglfs. However, the files were not written to the glusterfs
On Mon, Apr 24, 2017, at 10:14 AM, atris adam wrote:
> I have two data centers in two different provinces. Each data center
> has 3 servers. I want to set up cloud storage with glusterfs.
> I want to make one glusterfs volume with the following information.
>
> province "a"==> 3 servers, each server
Hi,
I'm only getting 404's when trying to access the debian repos on
download.gluster.org. Did they move someplace else, or is there some
other reason why they're not available?
Cheers,
Thomas
Hi everybody,
I have two data centers in two different provinces. Each data center has
3 servers. I want to set up cloud storage with glusterfs.
I want to make one glusterfs volume with the following information.
province "a" ==> 3 servers, each server has one 5TB brick (brick numbers
1-3)
provin
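For reference, a minimal sketch of how one volume spanning both provinces could be created; the server names and brick paths below are made up, and a replica-2 layout pairing each server in province "a" with one in province "b" is just one possible choice:

  # all servers must be in the trusted pool first, e.g.
  gluster peer probe b-srv1
  # one brick per server; consecutive bricks form a replica pair,
  # so each pair here spans the two provinces
  gluster volume create cloudvol replica 2 \
      a-srv1:/bricks/b1 b-srv1:/bricks/b1 \
      a-srv2:/bricks/b2 b-srv2:/bricks/b2 \
      a-srv3:/bricks/b3 b-srv3:/bricks/b3
  gluster volume start cloudvol

Keep in mind that synchronous replication is sensitive to the latency between the provinces; geo-replication is the more common recommendation for cross-data-center setups.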
Hi All,
Considering we are coming out with a major release plan, we would like to
revisit the QUOTA feature to decide its path forward.
I have been working on the quota feature for a couple of months now and have
come across various issues from performance, usability, and correctness
perspectives.
We do h
Is there a way to apply this patch to Gluster 3.7.6, or is upgrading the
version the only option?
On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL
wrote:
> Hi Kotresh,
>
> I have seen the patch available at the link you shared. It seems we
> don't have some of the files in gluster 3.7.6 which you modified
I've done some more testing with tc and introduced latency on one of my
test servers. With 9ms of latency artificially introduced using tc ( sudo tc
qdisc add dev bond0 root netem delay 9ms ) on a test server in the same DC
as the disperse volume servers, I get more or less the same throughput as I
do wh
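For anyone wanting to reproduce this, the full netem round trip is roughly as follows (bond0 is the interface from my setup; substitute your own):

  # add 9ms of artificial latency on the interface
  sudo tc qdisc add dev bond0 root netem delay 9ms
  # verify the qdisc is in place
  tc qdisc show dev bond0
  # remove it again once the test is done
  sudo tc qdisc del dev bond0 root netem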
I can confirm that mounting the disperse volume locally on one of the three
servers I got 211 MB/s with dd if=/dev/zero of=./local.dd.test bs=1M
count=1.
It's not very good considering the 10gig network, but at least 20x better
than 10-12MB/s
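As a side note, dd figures like this can be inflated by the page cache; a variant along these lines (the count and flags are only a suggestion) usually gives a more realistic number:

  # write 1000 MiB bypassing the page cache
  # (direct I/O is not supported on every FUSE mount)
  dd if=/dev/zero of=./local.dd.test bs=1M count=1000 oflag=direct
  # or flush to stable storage before the rate is reported
  dd if=/dev/zero of=./local.dd.test bs=1M count=1000 conv=fdatasync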
2017-04-24 13:53 GMT+02:00 Pranith Kumar Karampuri :
> +Ash
+Ashish
Ashish,
Could you help Ingard? Do let me know what you find.
On Mon, Apr 24, 2017 at 4:50 PM, Ingard Mevåg wrote:
> Hi. I can't see a fuse thread at all. Please see attached screenshot of
> top process with threads. Keep in mind this is from inside the container.
>
> 2017-04-24 1
2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri :
> At least in the case of EC it is with good reason. If you want to change a
> volume's configuration from 6+2 -> 7+2, you have to compute the encoding
> again and place different data on the resulting 9 bricks, which has to be
> done for all files. It is
On Mon, Apr 24, 2017 at 3:47 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> We were able to saturate the hardware with EC as well. Could you check 'top'
> in threaded mode to see if the fuse thread is saturated when you run dd?
>
This is for the mount process, by the way.
>
> On Mon, Apr 24, 20
We were able to saturate the hardware with EC as well. Could you check 'top' in
threaded mode to see if the fuse thread is saturated when you run dd?
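One way to check this, assuming the fuse client is the only glusterfs process on the box (adjust the pid selection otherwise):

  # per-thread CPU usage for the fuse client process
  top -H -p "$(pgrep -x glusterfs | head -1)"

If a single fuse thread sits near 100% CPU while the rest are idle, the mount process itself is the bottleneck.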
On Mon, Apr 24, 2017 at 3:27 PM, Ingard Mevåg wrote:
> Hi
> I've been playing with disperse volumes for the past week, and so far I can
> not get more than 12
Hi
I've been playing with disperse volumes for the past week, and so far I cannot
get more than 12MB/s when I do a write test. I've tried a distributed
volume on the same bricks and got close to gigabit speeds. iperf
confirms gigabit speeds to all three servers in the storage pool.
The three stora
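For completeness, a typical iperf check of the kind mentioned above looks like this (hostnames are placeholders):

  # on each storage server
  iperf -s
  # from the client, against each server in turn
  iperf -c server1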
Hi Kotresh,
I have seen the patch available at the link you shared. It seems we
don't have some of the files in gluster 3.7.6 which you modified in the patch.
Would it be possible to provide the patch for Gluster 3.7.6?
Regards,
Abhishek
On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravis
Hi Abhishek,
Bitrot requires versioning of files to be done on writes.
This was being done irrespective of whether bitrot is
enabled or not, and it takes considerable CPU. With the
fix https://review.gluster.org/#/c/14442/, it is made
optional and is enabled only with bitrot. If bitrot
is not enabled
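For context, bitrot is toggled per volume with the gluster CLI, so with this fix the versioning cost is only paid when it is switched on (the volume name below is a placeholder):

  # enable bitrot detection (and, with the fix, object versioning)
  gluster volume bitrot myvol enable
  # disable it again
  gluster volume bitrot myvol disable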
On Mon, Apr 24, 2017 at 1:24 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 24 Apr 2017 at 9:40 AM, "Ashish Pandey" wrote:
>
>
> There is a difference between servers and bricks which we should understand.
> When we say m+n = 6+2, then we are talking about the bricks.
>
On Sat, Apr 22, 2017 at 01:47:56PM +, Zhitao Li wrote:
> Hello, everyone,
>
>
> I am installing glusterfs release 3.8.11 on my aarch64 computer. It
> fails when executing configure.
>
> (screenshot of the configure failure attached)
>
>
> The configure file is attached.
On 24 Apr 2017 at 9:40 AM, "Ashish Pandey" wrote:
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, then we are talking about the bricks.
The total number of bricks is m+n = 8.
Now, these bricks could be anywhere on any server. The only thing is t
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, then we are talking about the bricks.
The total number of bricks is m+n = 8.
Now, these bricks could be anywhere on any server. The only thing is that the
server should be a part of the cluster.
You can
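As a concrete sketch (hostnames and brick paths are made up), a 6+2 dispersed volume is created by asking for 8 bricks with redundancy 2, and those bricks may live on any servers in the trusted pool:

  # 8 bricks total: 6 data + 2 redundancy
  gluster volume create disp-vol disperse 8 redundancy 2 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
      server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1 \
      server7:/bricks/b1 server8:/bricks/b1
  gluster volume start disp-vol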