Hostname: 192.168.3.31
Uuid: 7e085d9f-a0f9-4ed6-a850-44b6ed991081
State: Peer in Cluster (Connected)
--
Paul Watson
Oninit
www.oninit.com
Tel: +1 913 364 0360
Cell: +1 913 387 7529
Oninit® is a registered trademark of Oninit LLC
If you want to improve, be content to be thought foolish and stupid
has
been solved.
So I guess I would recommend using this "direct-io-mode=disable" when working
with numpy files.
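A minimal sketch of such a mount, with hypothetical server, volume, and
mount-point names:

    mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/myvol

The same option can go in the options field of the corresponding /etc/fstab
entry.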
Thanks,
-Paul
From: gluster-users-boun...@gluster.org on
behalf of Jewell, Paul
Sent: Thursday, December 12, 2019 10:52 AM
To: glust
____
From: Jewell, Paul
Sent: Monday, December 9, 2019 1:40 PM
To: gluster-users@gluster.org
Subject: Re: Problems with gluster distributed mode and numpy memory mapped
files
Hi All,
I am using gluster in order to share data between four development servers. It
is just
On 16-05-19 at 05:43, Nithya Balachandran wrote:
>
>
> On Thu, 16 May 2019 at 03:05, Paul van der Vlis <p...@vandervlis.nl> wrote:
>
> On 15-05-19 at 15:45, Nithya Balachandran wrote:
> > Hi Paul,
> >
> > A few questions:
>
On 15-05-19 at 15:45, Nithya Balachandran wrote:
> Hi Paul,
>
> A few questions:
> Which version of gluster are you using?
On the server and some clients: glusterfs 4.1.2
On a new client: glusterfs 5.5
> Did this behaviour start recently? As in were the contents of that
>
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: xxx-vpn:/DATA
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
(I have edited this a bit for privacy of my customer).
I think they have used glusterfs because it can do ACLs.
With reg
don't see any data in /data/ALGEMEEN/.
I don't see anything special in /etc/exports or in /etc/glusterfs on
the server.
Is there maybe a mechanism in GlusterFS that can exclude data from
export? Or is there a way to debug this problem?
With regards,
Paul van der Vlis
# file: VOORBEELD
# o
Recently we updated a Gluster replicated setup from 3.6 to 3.12 (stepping
through 3.8 first before going to 3.12).
Afterwards I noticed the brick logs were filling at an alarming rate on the
server we have the NFS service running from:
$ sudo tail -20
the database operations
to prevent data loss. You also can't do any caching in your volume
mount on the client side. The server-side performance settings appear
not to matter, provided you're up to date on client/server code.
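By way of illustration, client-side caching is controlled at mount time; a
sketch with hypothetical names (attribute-timeout and entry-timeout are
standard glusterfs FUSE mount options, and the right combination depends on
your workload):

    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
        server1:/myvol /mnt/myvol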
I hope this helps someone!
Paul
On Tue, Mar 6, 2018 at 12:32 PM
ile, but on
Debian, it appears to have to be a real file.
Paul
On Tue, Mar 6, 2018 at 10:28 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> On 03/06/2018 05:50 PM, Paul Anderson wrote:
>> When I follow the directions at
>> http://docs.gluster.org/en/latest/Install-G
ume-yes install glusterfs-client
Thanks,
Paul
, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
> +Csaba.
>
> On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <p...@umich.edu> wrote:
>>
>> Raghavendra,
>>
>> Thanks very much for your reply.
>>
>> I fixed our data corruption problem by dis
would like our test scripts, I can either tar them up and
email them or put them in github - either is fine with me. (they rely
on current builds of docker and docker-compose)
Thanks again!!
Paul
On Mon, Mar 5, 2018 at 11:26 AM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
>
>
>
that flushes won't
block as would be needed by SQLite3.
Does anyone have any suggestions? Any words of wisdom would be much appreciated.
Thanks,
Paul
There are two users, u1 and u2, in the cluster. Some files are created by u1,
and they are read-only for u2. Of course u2 can read these files. Later
these files are renamed by u1. Then I switch to user u2. I find that u2
can't list or access the renamed files. I see these errors in the log:
server.keepalive-interval: 1
server.keepalive-time: 2
transport.keepalive: 1
client.keepalive-count: 1
client.keepalive-interval: 1
client.keepalive-time: 2
features.cache-invalidation: off
network.ping-timeout: 30
user.smb.guest: no
user.id: 8148
nfs.disable: on
snap-activate-on-create: enable
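On recent releases those settings can be confirmed against the running
volume; a sketch, with a hypothetical volume name:

    gluster volume get myvol all | grep -E 'keepalive|cache-invalidation'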
Thanks,
Paul
don't see this problem.
Is there a way to solve this problem? If ls doesn't return the correct file
names,
Thanks,
Paul
Vijay,
Yes, I found later that it was a problem with xfs. After upgrading the xfs
code, I've not seen this problem again.
Thanks a lot!
Paul
On Fri, Nov 17, 2017 at 12:08 AM, Vijay Bellur <vbel...@redhat.com> wrote:
>
>
> On Thu, Nov 16, 2017 at 6:23 AM, Paul <fly...@gmail.com> wrote:
Yes, indeed, that is probably what's going on. What filesystem are you using,
and what are the mount options?
Original Message
From: li...@bago.org
Sent: November 22, 2017 4:26 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] error "Not able to add to index" in brick logs
in my
The disks are new
and I don't see any low-level I/O error. Is it a bug related to xfs or
GlusterFS? Is there a workaround?
Thanks,
Paul
the problem happens again. Later
we found the problem seems to happen when creating directories.
The GlusterFS version is 3.11.0. Does anyone know what the problem is? Is
it related to tiering?
Thanks,
Paul
Just wanted to pick up on the EC for VM storage domains option...
> > * Erasure coded volumes with sharding - seen as a good fit for VM disk
> > storage
>
> I am working on this with a customer, we have been able to do 400-500
> MB/sec writes! Normally things max out at ~150-250. The trick
Pinging for a reply to the previous mail.
On Sun, May 7, 2017 at 1:06 AM, Abhijit Paul <er.abhijitp...@gmail.com>
wrote:
> https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/
> The environment tested there is Fedora,
> but I am using RHEL-based Oracle Linux
umar Karampuri <pkara...@redhat.com
> wrote:
>
>
> On Fri, May 5, 2017 at 5:40 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, May 5, 2017 at 5:36 PM, Abhijit Paul <er.abhijitp...@gmail.com>
>> wrote:
>
5, 2017 at 5:06 PM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:
> Abhijit we just started making the efforts to get all of this stable.
>
> On Fri, May 5, 2017 at 4:45 PM, Abhijit Paul <er.abhijitp...@gmail.com>
> wrote:
>
>> I have yet to try gluster-block
ich
> doesn't require solving all these caching issues.
> Here's a blog post on the same -
> https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/
>
> Adding Prasanna and Pranith who worked on this, in case you need more info
> on this.
>
> -Krutika
>
gi?id=1426548> ?
FYI, I am using the glusterfs 3.10.1 tar.gz.
Regards,
Abhijit
On Thu, May 4, 2017 at 10:58 PM, Amar Tumballi <atumb...@redhat.com> wrote:
>
>
> On Thu, May 4, 2017 at 10:41 PM, Abhijit Paul <er.abhijitp...@gmail.com>
> wrote:
>
>> Since i am n
tors to make that work complete. CC'd the relevant folks for more
>> information. Can you please turn off all the perf xlator options as a
>> workaround to move forward?
>>
>> On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul <er.abhijitp...@gmail.com>
>> wrote:
>
>> Dear folks,
>>
>
Dear folks,
I set up GlusterFS (3.10.1) NFS as a persistence volume for
Elasticsearch (5.1.2) but am currently facing an issue with "CorruptIndexException"
in the Elasticsearch logs, and due to that the index health turned RED in
Elasticsearch.
Later I found that there was an issue with gluster < 3.10 (
--node=NFSPROD02 --lifetime=forever --name=grace-active
--update=1 failed
Mar 17 11:47:10 NFSPROD02 ganesha_grace(nfs-grace)[25420]: INFO: crm_attribute
--query --node=NFSPROD02 --name=grace-active failed
Everything is working properly, but any idea what the problem is here?
Paul Cammarata
Paul Cammarata
SIEM System Administrator
SecurIT360
530 Beacon Pkwy W, Suite 901 | Birmingham, AL 35209
O: 205.419.9066 x1022 | P: 205.532.9646 | F: 205.449.1425
www.securit360.com | p...@securit360.com
Paul
here, I
was hoping to have this in production last Friday. If anyone has any
ideas I'd be very grateful.
--
Paul Allen
Inetz System Administrator
know what others have found works well.
Thanks
Paul
Hey all,
What is the maximum storage capacity of GlusterFS, please?
--
Paul Feuvraux <https://super-baleine.github.io/>
s also get properly
started on boot once the /gluster filesystem is there.
Regards, Paul Boven.
--
Paul Boven <bo...@jive.eu> +31 (0)521-596547
Unix/Linux/Networking specialist
Joint Institute for VLBI in Europe - www.jive.eu
VLBI - It's a fringe science
ate-0: All subvolumes are down. Going offline until atleast
one of them comes back up.
Once the machine has fully booted and I log in, simply typing 'mount
/gluster' always succeeds. I would really appreciate your help in making
this happen on boot without intervention.
Regards, Paul Boven
Sounds great!
I had to knit together different cli commands in the past for 'gstatus' to
provide a view of the cluster - so this is cool.
Would it be possible to add an example of the output to the RFE BZ 1353156
<https://bugzilla.redhat.com/show_bug.cgi?id=1353156>?
Paul C
On We
downtime etc. to upgrade my entire clusters, this is not a short-term
plan and takes careful testing, planning and approval of management to
disrupt dependent services.
Best wishes
Paul
--
Paul Osborne
Senior Systems Engineer
Canterbury Christ Church University
Tel: 01
this.
I was following
http://www.gluster.org/community/documentation/index.php/Features/disk-encryption
- but this doesn't exist any more.
Thanks
Paul
Just to pick up on how the block device is defined. I think sharding is the
best option - it's already the 'standard' for virtual disks, and the image
files for iSCSI are no different in my mind. They have pretty much the same
requirements around sizing, fault tolerance and recovery.
Let's keep
Unfortunately that kind of tuning doesn't have any simple answers, and
anyone who says there is should not be listened to.
It really depends on your workload and a lot of other factors such as
your hardware. For example, a 20-platter RAID 1+0 on spinning disks with
a wide stripe needs very little
Just wondering how shards can silently be different across bricks in a
replica? Lindsay caught this issue due to her due diligence taking on 'new'
tech - and resolved the inconsistency, but tbh this shouldn't be an admin's
job :(
On Sun, Apr 24, 2016 at 7:06 PM, Krutika Dhananjay
Hi Lindsay,
As I understand it, the current logic of bitd/scrubd does not address the
problem you asked about
"a process where all replicas are compared for inconsistencies."
bitd/scrubd operate independently within each node, signing each file and
validating the checksum - which is part of the
Copying the VMs across the storage domains can be done with the storage
migrate feature. I did see a few problems in the past with migrating
running VMs this way, but powered-off VMs were fine.
It's not the fastest process, though!
On Wed, Jan 13, 2016 at 4:28 AM, Krutika Dhananjay
ter actually use?
Thanks
Paul
From: gluster-users-boun...@gluster.org <gluster-users-boun...@gluster.org> on
behalf of Osborne, Paul <paul.osbo...@canterbury.ac.uk>
Sent: 15 October 2015 16:40
To: gluster-us
Hi,
*If* you're seeing crashes in glusterd, Atin sent out a workaround that
needs to be applied to 3.7.x to avoid the issue (introduced with epoll):
add
# for epoll issue glusterd crash fix
option ping-timeout 0
option event-threads 1
to your glusterd.vol files
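For context, here is roughly where those lines land in a stock glusterd.vol;
the surrounding volume definition is a sketch and your file may differ:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        # for epoll issue glusterd crash fix
        option ping-timeout 0
        option event-threads 1
    end-volume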
wrote:
>
> On 13 October 2015 at 11:51, Paul Cuzner <pcuz...@redhat.com> wrote:
>
>> add
>> # for epoll issue glusterd crash fix
>> option ping-timeout 0
>> option event-threads 1
>>
>> to your glusterd.vol files (/etc/glusterfs/glusterd.vol)
yep, try gluster vol reset
Paul C
On Sun, Oct 11, 2015 at 11:30 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Once set, is there any way to "unset" a volume option, so that it returns
> to its default v
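The general shape of the command (volume and option names hypothetical):

    gluster volume reset myvol performance.io-cache

Omitting the option name resets every reconfigured option on the volume back
to its default.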
to support each
public instance individually?
Paul
Would you run puppet in init.d of the new node to sync infrastructure?
Then you could use rundeck to trigger the shared config on each
instance, for on demand syncing.
On 16/09/15 13:23, Paul Thomas wrote:
Hi,
I’m new to shared file systems and horizontal cloud scaling.
I have already
Have you considered the disperse volume? We'd normally advocate 6 servers
for a +2 redundancy factor though.
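A sketch of what that looks like at creation time, with hypothetical host
and brick names:

    gluster volume create myvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1

With six bricks and redundancy 2, any two bricks can be lost without losing
data.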
Paul C
On Tue, Sep 15, 2015 at 5:47 AM, <aa...@ajserver.com> wrote:
> Gluster users,
>
> I am looking to implement GlusterFS on my network for large, expandable,
> and
right...
Thanks
Paul
and servers have been sequentially rebooted in
the hope that this would clear any issue - however that does not appear to be
the case.
Thanks
Paul
Paul Osborne
Senior Systems Engineer
Canterbury Christ Church University
Tel: 01227 782751
From: Atin Mukherjee
that this is temporary, as I am loath to move forward to Jessie
at present due to its immaturity.
Many thanks
Paul
Paul Osborne
Senior Systems Engineer
Canterbury Christ Church University
Tel: 01227 782751
files
on them. Gluster is replicating them, so I am not sure if this is just an
'it will take time' issue or if there is a real problem. Also potentially
unrelated, but maybe not: if I ask gluster to do a rebalance, glusterd
crashes.
Thank you for the advice and assistance,
Paul
On Fri, May 22, 2015 at 06:50:40PM +0800, Paul Guo wrote:
Hello,
I wrote two simple single-process sequential read test cases to compare
libgfapi and fuse. The logic looks like this:
char buf[32768];
ssize_t cnt;
while (1) {
    cnt = read(fd, buf, sizeof(buf));
    if (cnt == 0)  /* EOF reached: end of the sequential read */
        break;
}
cache).
I tested direct io because I suspected that fuse kernel readahead
helped more than the read optimization solutions in gluster. I searched
a lot but I did not find much about the comparison between fuse and
libgfapi. Does anyone know about this, and why?
Thanks,
Paul
seeks_cpu: 9.6
Server 2: seeks: 2175.4  seeks_cpu: 65.4
At this point I am confused why there should be such a difference in seek info
and am uncertain how to proceed further.
Suggestions are welcome.
Thanks
Paul
Want a storage cluster? Get Gluster!
On Tue, Apr 7, 2015 at 3:37 PM, Dustin L. Black dbl...@redhat.com wrote:
{Flexible|Adaptive|Versatile} Open Data Store
Dustin L. Black, RHCA
Principal Technical Account Manager
Red Hat, Inc. - Strategic Customer Engagement
(o) +1.212.510.4138 (m)
You do realize you would have to put the iSCSI target disk image on
the mounted Gluster volume, not directly on the brick.
So as long as you have replication, your volume would remain accessible.
You cannot point the iSCSI process directly at the brick, or
replication and striping won't work.
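To make that concrete - a sketch with hypothetical names, with the image
living on the mounted volume rather than on the brick path:

    mount -t glusterfs server1:/myvol /mnt/gluster
    # sparse backing store for the iSCSI target
    truncate -s 100G /mnt/gluster/iscsi-target.img

The target daemon is then pointed at /mnt/gluster/iscsi-target.img, never at
the brick directory itself.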
Ah, that is handy to know.
Will this patch get applied to the 3.5 release stream, or am I going to have to
look at moving onto 3.6 at some point?
Thanks
Paul
--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
-Original
it is not an issue for me as
this is still proof of concept for what we are doing; what I need to know is
whether doing so will stop the continual log churn.
Many thanks
Paul
One thing I've noticed is that you need to make sure that the SSH host
key of _each_ of the slave bricks is in the known_hosts of
each of the master bricks. Failure to ensure this can cause failure in
a non-obvious way.
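One way to seed those entries (hostnames hypothetical; run on every master
brick node):

    ssh-keyscan slave1.example.com slave2.example.com >> /root/.ssh/known_hosts

ssh-keyscan trusts whatever answers, so verify the keys out of band if your
environment requires it.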
Regards,
Paul
On 12 March 2015 at 20:29, John Gardeniers
jgardeni
to do this?
Client side I am not concerned (yet) as I am using autofs with NFS server
failover via weighting - this demonstrably deals with loss of a gluster node
but what I do not want is a client continuing to use a gluster node that is off
in a world of its own.
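For reference, the weighted autofs map entry in play here looks roughly like
this (hosts and export path hypothetical):

    data  -fstype=nfs  gfs1(1),gfs2(2):/export/data

autofs prefers the lowest-weight server that responds, so a dead node is
skipped automatically.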
Thanks
Paul
--
Paul
reasonable long term support?
Many thanks
Paul
--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
Does it make sense for me to have web1 mount gfs1 and gfsq, and web2 mount gfs2
and gfsq, with weightings set away from the quorum server?
I think this makes sense but experienced folk out there may tell me better.
Thanks
Paul
--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
7332621.00 us 1 FSYNC
I repeated the test and the results for this brick are similar:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
     29.60     176.56 us      27.00 us  1008658.00 us         55273   WRITE
     62.41 1714773.67 us   10098.00 us 10996032.00 us            12   FSYNC
Thanks,
Paul
Paul Stallworth
Housing
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
      7.98 1843032.83 us   22261.00 us 16023044.00 us            12   FSYNC
      8.19 1620932.64 us      27.00 us 17171453.00 us            14   INODELK
      9.03      94.52 us      22.00 us     9533.00 us        264526   LOOKUP
     73.67  563743.05 us      14.00 us 17173239.00 us           362   FINODELK
Thanks,
Paul
Paul Stallworth
Housing Information Technology
University of Colorado
the element of setting GLUSTERD_WORKDIR to /var/lib/glusterd
and running /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh
What is the current situation with this?
Thanks,
Paul
I'd like to at least know which direction to head in. Any
other tips or resources that you could send my way would also be appreciated.
Thanks,
Paul
Paul Stallworth
Housing IT
University of Colorado Boulder
Boulder, Colorado 80309
T: 303.735.6623
Yes.
-- Sent from my HP Pre3
On Dec 10, 2014 3:54 PM, Michael Schwartzkopff <m...@sys4.de> wrote:
Hi,
what happens if the number of replicas in a volume is not equal to the number
of bricks?
Sample: I have a volume with 4 bricks (on four peers) and only want to have
two replicas of every file. Is
Of course you can always add space to the volume; that works well. The reason
you may want to consider adding bricks is that if you enable striping in
gluster with the mirroring, you will probably see better performance on your
reads and writes.
-- Sent from my HP Pre3
On Dec 10, 2014 11:21 AM, wodel
then why does the gluster documentation not recommend
this? (This is not intended as a snarky dig - honest.)
Thanks
Paul
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
+44 1227 782751
volume not automounted when peer is down
A much simpler answer is to assign a hostname to multiple IP addresses (round robin dns). When gethostbyname() returns multiple entries, the client will try them all until it's successful.
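A sketch of the zone-file side of that, with hypothetical names and addresses:

    gluster.example.com.  IN  A  192.0.2.11
    gluster.example.com.  IN  A  192.0.2.12
    gluster.example.com.  IN  A  192.0.2.13

A client mounting gluster.example.com:/myvol will then try each address in
turn until one answers.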
On 11/24/2014 06:23 PM, Paul Robert Marino wrote:
This is simple
This is simple and can be handled in many ways. Some background first: the
mount point is a single IP or host name. The only thing the client uses it for
is to download a volfile describing all the bricks in the cluster. The next
thing is it opens connections to all the nodes containing bricks for that
In my experience this usually happens because of NFS lockd trying to traverse
a firewall. Turn off NFS locking on the source host and you will be fine. The
root cause is not a problem with gluster; it's actually a deficiency in the
NFS RFCs about RPC which has never been properly addressed.
-- Sent
You are partially correct.
The -i and -L options have not been implemented in xfs_growfs.
That said:
1) The journal size will increase automatically to the appropriate
size based on the data area size; it's just that you can't manually specify
the size.
2) While it is true you can't switch between internal
Hi,
I had seen it but didn’t want to go down that route without seeing if there was
something obvious that I was missing, hence my mail to the list.
Looks like I will be putting that on the list to look at in anger when I get
time to do so.
Many thanks
Paul
From: Juan José Pavlik Salles
I need; however, rather than just trying what could be random code, is
there anything that the users here can recommend?
Thanks
Paul
to query it through the command line which I can then call
via SNMP – in an ideal world someone will have done that already…
Regards
Paul
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Juan José Pavlik Salles
Sent: 11 November 2014 00:05
Use XFS instead of EXT4. There are many very good reasons it's the new default
filesystem in RHEL 7. Also, SSDs are faster at random I/O and small files;
however, a properly built RAID of spinning disks is still faster at linear
reads of large files. In general qemu does large linear reads, or at least very
that is a fundamental behaviour change should surely be a whole lot
easier to find than something that is blatantly wrong.
Anyhow thanks for the clarification.
Paul
From: Joe Julian [mailto:j...@julianfamily.org]
Sent: 30 October 2014 19:22
To: Todd Stansell; Osborne, Paul (paul.osbo
in the documentation and are there any
other ports that I should be aware of?
Thanks
Paul
--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
Here are the steps to reproduce this issue. (gluster version 3.5.2)
On one server lab1 (There is another server lab2 for replica 2):
[root@lab1 ~]# gluster volume set g1 worm on
volume set: success
[root@lab1 ~]# gluster volume stop g1
Stopping volume will make its data inaccessible. Do you
From: Claudio Kuenzler <c...@claudiokuenzler.com>
Date: Sep 12, 2014
To: Juan José Pavlik Salles <jjpav...@gmail.com>
Cc: gluster-users <gluster-users@gluster.org>; Paul Guo <bigpaul...@foxmail.com>
Subject: Re: [Gluster-users] error when using mount point as a brick directory.
Thanks for the hint about
Hello Shyam. Thanks for the reply. Please see my reply below, starting with
[paul:].
Please add me to the address list besides gluster-users when replying so that I
can reply more easily, since I subscribed to gluster-users in digest mode (no
other choice, if I remember correctly).
Date: Wed, 10 Sep
It's the small files. There is an overhead on any operation on lots of small
files; this is not unique to Gluster. Also, are you using XFS as the underlying
filesystem? If you are not, that would play a big part; ext has issues with
performance when dealing with small files due to its over-reliance
On 04/09/2014 09:24, M S Vishwanath Bhat wrote:
On 04/09/14 00:33, Vijaykumar Koppad wrote:
On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat <vb...@redhat.com> wrote:
On 01/09/14 23:09, Paul Mc Auley wrote:
Is geo-replication from a replica 3 volume
to add the passwordless SSH key back in? (As
opposed to the restricted secret.pem)
For that matter, in the initial setup, is it an expected failure mode that
the initial geo-replication create will fail if the slave host's SSH key
isn't known?
Thanks,
Paul
I've also seen this happen when there is a firewall in the middle and
nfslockd malfunctioned because of it.
On Tue, Jun 10, 2014 at 12:20 PM, Gene Liverman glive...@westga.edu wrote:
Thanks! I turned off drc as suggested and will have to wait and see how that
works. Here are the packages I have
This is a really good initiative Lala.
Anything that helps Operations folks always gets my vote :)
I've added a few items to the etherpad.
Cheers,
PC
- Original Message -
From: Lalatendu Mohanty lmoha...@redhat.com
To: gluster-users@gluster.org, gluster-de...@gluster.org
Just one question relating to thoughts around how you apply a filter to the
snapshot view from a user's perspective.
In the considerations section, it states - We plan to introduce a
configurable option to limit the number of snapshots visible under the USS
feature.
Would it not be possible
brick failure.
What this means to me: there's a problem in libgfapi, gluster 3.4.2
and 3.4.3 (at least) and/or kvm 1.7.1 (I'm running the latest 1.7
source tree in production).
Joe: we're in your hands. I hope you find the problem somewhere.
Paul
on swap-device (252:0:96626664)
Read-error on swap-device (252:0:96626672)
Read-error on swap-device (252:0:96626680)
Read-error on swap-device (252:0:96626688)
This is all. Not much I'm afraid.
Paul
2014-04-21 18:21 GMT+02:00 Paul Penev ppqu...@gmail.com:
Joe,
it will take some time for redo
that libgfapi is responsible for maintaining
connections to the bricks and to reestablish them as needed (makes
sense, but feel free to prove me wrong).
Paul
I sent the brick logs earlier. But I'm not able to produce logs from
events in KVM. I can't find any logging or debugging interface. It is
somewhat weird.
Paul
2014-04-21 18:30 GMT+02:00 Joe Julian j...@julianfamily.org:
I don't expect much from the bricks either, but in combination