Hi Kotresh,
Could you please let me know whether it is possible to get this patch or
backport it onto Gluster 3.7.6?
Regards,
Abhishek
On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL
wrote:
> Is there a way to apply this patch on Gluster 3.7.6, or is the only way to
Hi all,
I was wondering if anyone has run into a problem when running with xglfs, which
is a GlusterFS API FUSE client (https://github.com/gluster/xglfs).
1. I mounted xglfs as a client on each server and tried to write files to the
glusterfs volume through xglfs. However, the files were not written to the
On Mon, Apr 24, 2017, at 10:14 AM, atris adam wrote:
> I have two data centers in two different provinces. Each data center has
> 3 servers. I want to set up cloud storage with glusterfs. I want to make
> one glusterfs volume with the following information.
>
> province "a"==> 3 servers, each
Hi,
I'm only getting 404s when trying to access the Debian repos on
download.gluster.org. Did they move someplace else, or is there some
other reason why they're not available?
Cheers,
Thomas
___
Gluster-users mailing list
Gluster-users@gluster.org
Hi everybody,
I have two data centers in two different provinces. Each data center has
3 servers. I want to set up cloud storage with glusterfs.
I want to make one glusterfs volume with the following information.
province "a" ==> 3 servers, each server has one 5TB brick (bricks numbered
1-3)
Hi All,
As we are coming up with the major release plan, we would like to
revisit the QUOTA feature and decide its path forward.
I have been working on the quota feature for a couple of months now and have
come across various issues from performance, usability, and correctness
perspectives.
We do
Is there a way to apply this patch on Gluster 3.7.6, or is upgrading the
version the only way?
On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL
wrote:
> Hi Kotresh,
>
> I have seen the patch available on the link you shared. It seems some of
> the files you modified in the patch are not present in gluster
I've done some more testing with tc and introduced latency on one of my
test servers. With 9 ms of latency artificially introduced using tc ( sudo tc
qdisc add dev bond0 root netem delay 9ms ) on a test server in the same DC
as the disperse volume servers, I get more or less the same throughput as I
do
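For reference, a minimal sketch of the tc commands used in the test above (the
interface name bond0 and the 9 ms delay are taken from that test; adjust both
to your setup, and note these need root):

```shell
# Add 9 ms of artificial latency on bond0 (as in the test above)
sudo tc qdisc add dev bond0 root netem delay 9ms

# Verify the netem qdisc is in place
tc qdisc show dev bond0

# Remove the delay again when done
sudo tc qdisc del dev bond0 root netem
```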
+Ashish
Ashish,
Could you help Ingard? Do let me know what you find.
On Mon, Apr 24, 2017 at 4:50 PM, Ingard Mevåg wrote:
> Hi. I can't see a fuse thread at all. Please see the attached screenshot of
> the top process with threads. Keep in mind this is from inside the
2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri :
> At least in the case of EC it is with good reason. If you want to change a
> volume's configuration from 6+2 -> 7+2, you have to compute the encoding again
> and place different data on the resulting 9 bricks, which has to be done
On Mon, Apr 24, 2017 at 3:47 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> We were able to saturate the hardware with EC as well. Could you check 'top'
> in threaded mode to see if the fuse thread is saturated when you run dd?
>
This is for the mount process, by the way.
>
> On Mon, Apr 24,
We were able to saturate the hardware with EC as well. Could you check 'top' in
threaded mode to see if the fuse thread is saturated when you run dd?
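A quick way to get the threaded view non-interactively (a sketch; matching the
process by the name "glusterfs" is an assumption, so adjust the pattern to your
mount process, and it falls back to showing all threads if no match is found):

```shell
# Batch-mode top with per-thread display (-H); -n 1 takes one snapshot.
pid="$(pgrep -f glusterfs | head -n 1)"
top -b -H -n 1 ${pid:+-p "$pid"} | head -n 20
```

In interactive top, pressing `H` toggles the same per-thread view.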
On Mon, Apr 24, 2017 at 3:27 PM, Ingard Mevåg wrote:
> Hi
> I've been playing with disperse volumes for the past week, and so far I can
>
Hi
I've been playing with disperse volumes for the past week, and so far I cannot
get more than 12 MB/s when I do a write test. I've tried a distributed
volume on the same bricks and got close to gigabit speeds. iperf
confirms gigabit speeds to all three servers in the storage pool.
The three
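For comparison, the kind of write test I'm running (a sketch: TESTDIR defaults
to /tmp here, so point it at your disperse-volume fuse mount; oflag=sync keeps
the page cache from inflating the throughput number):

```shell
# Sequential write test: 64 MiB in 1 MiB blocks, synced through to storage.
TESTDIR="${TESTDIR:-/tmp}"
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=64 oflag=sync
rm -f "$TESTDIR/ddtest.bin"
```

dd reports the achieved throughput on stderr when it finishes.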
Hi Kotresh,
I have seen the patch available on the link you shared. It seems some of the
files you modified in the patch are not present in Gluster 3.7.6.
Is there any possibility to provide the patch for Gluster 3.7.6?
Regards,
Abhishek
On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath
Hi Abhishek,
Bitrot requires versioning of files to be done on writes.
This was being done irrespective of whether bitrot is
enabled or not, which takes considerable CPU. With the
fix https://review.gluster.org/#/c/14442/, versioning is made
optional and is enabled only with bitrot. If bitrot
is not
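In other words, after that fix the versioning cost only applies once you turn
bitrot on for a volume, e.g. (the volume name "myvol" is an assumption; these
commands need a running gluster cluster):

```shell
# Enable bitrot detection (and, with it, object versioning) on a volume
gluster volume bitrot myvol enable

# Optionally tune how often the scrubber runs
gluster volume bitrot myvol scrub-frequency weekly
```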
On Mon, Apr 24, 2017 at 1:24 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 24 Apr 2017 at 9:40 AM, "Ashish Pandey" wrote:
>
>
> There is a difference between servers and bricks which we should understand.
> When we say m+n = 6+2, then we are
On Sat, Apr 22, 2017 at 01:47:56PM +, Zhitao Li wrote:
> Hello, everyone,
>
>
> I am installing glusterfs release 3.8.11 on my aarch64 computer. It
> fails when executing configure.
>
> [attached screenshot of the configure failure]
>
>
> Configure file is this:
>
On 24 Apr 2017 at 9:40 AM, "Ashish Pandey" wrote:
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, we are talking about the bricks.
The total number of bricks is m+n = 8.
Now, these bricks could be anywhere on any
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, we are talking about the bricks.
The total number of bricks is m+n = 8.
Now, these bricks could be anywhere on any server. The only requirement is that
the server should be part of the cluster.
You
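As a concrete sketch, a 6+2 volume is created by asking for 8 bricks with
redundancy 2. The server names and brick paths below are assumptions, and the
bricks can sit on any servers that are already part of the trusted pool:

```shell
# 8 bricks total = 6 data + 2 redundancy (m+n = 6+2)
gluster volume create ecvol disperse 8 redundancy 2 \
    server{1..8}:/bricks/ecvol
gluster volume start ecvol
```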
Yes, that's about it. Pranith pretty much summed up whatever I would have
said.
-Krutika
On Sat, Apr 22, 2017 at 12:25 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> +Krutika for any other inputs you may need.
>
> On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri <
>
Hi Kotresh,
Could you please update me on this?
Regards,
Abhishek
On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> +Kotresh who seems to have worked on the bug you mentioned.
>
> On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
>