Thanks. For small files and random I/O, NFS has been recommended over FUSE.
Would pNFS, with multiple MDSes in the future, be the recommended approach
for small files?
On Wed, Aug 12, 2015 at 11:03 PM, Soumya Koduri skod...@redhat.com wrote:
It depends on the workload. Like native NFS, even
Suggestion for the future: more granular auto-delete for snapshots, something
like a sliding window. For example, keep hourly snapshots for the last day,
daily snapshots for the past week, weekly ones for the month, etc., i.e.
tapering frequency as you go further into the past. Right now, I think it's
just a
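For reference, a rough sketch of how this contrasts with today's knobs, which
are count-based rather than age-based (volume name is a placeholder):

# today: a single count-based limit; with auto-delete enabled the oldest
# snapshot is removed to make room for a new one
gluster snapshot config vol0 snap-max-hard-limit 256
gluster snapshot config auto-delete enable

# a tapering policy would instead walk 'gluster snapshot list vol0' by age:
# keep everything younger than 24h, one per day for a week, one per week
# for a month, and 'gluster snapshot delete <snapname>' the rest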
Hi Prasun,
pNFS was recently released in tech-preview form. With multiple
MDSes, or even an all-symmetric architecture (every Ganesha node can act as
both DS and MDS, which will also be a supported config), you could
potentially see improvements (due to increased throughput) but, from our
It depends on the workload. Like native NFS, even with NFS-Ganesha, data is
routed through the server it is mounted from. In addition, the NFSv4.x
protocol adds more complexity and cannot be directly compared with NFSv3
traffic. However, with pNFS, I/O is routed to the data servers directly by
the clients.
Hi,
Previously we used GlusterFS v3.5.3 and we could achieve around 1.1 GB/s for
read/write on a plain distributed volume.
Now, since I have upgraded GlusterFS to 3.7.3, I can only achieve around
700 MB/s (max).
Here is some additional information concerning my volume.
# gluster volume info
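A quick way to cross-check the before/after numbers is a direct-I/O transfer
that bypasses the client page cache (path and sizes below are only
illustrative):

# sequential write, then read, with client-side caching bypassed
dd if=/dev/zero of=/mnt/gvol/testfile bs=1M count=4096 oflag=direct
dd if=/mnt/gvol/testfile of=/dev/null bs=1M iflag=direct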
Hi,
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't it be
increased and decreased, but never removed?
Could that be the reason why GlusterFS is confused?
What could be the reason for gfid mismatches?
Regards
Andreas
Brick1 getfattr -d -m . -e hex config.ior
# file:
On Mon, Aug 10, 2015 at 09:19:25AM +0100, Thibault Godouet wrote:
Thanks Niels for your helpful answer.
Regarding the locking, indeed that solves my issue. Now I'm wondering how
to monitor this. The best I have so far is to get the list of RPC binds and
the TCP/UDP port in particular, and then
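One scriptable check is rpcinfo against the server: the lock manager should
show up as nlockmgr on the port in question (hostname is a placeholder):

# list the registered RPC programs and their ports
rpcinfo -p gluster-node1 | grep -E 'nlockmgr|status'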
Can we have some volunteers for these BZs?
-Atin
Sent from one plus one
On Aug 12, 2015 12:34 PM, Kaushal M kshlms...@gmail.com wrote:
Hi Csaba,
These are the updates regarding the requirements, after our meeting
last week. The specific updates on the requirements are inline.
In general, we
I am currently testing gluster v3.7.3 on Scientific Linux 7.1 with a newly
created gluster volume. After transferring some files to the volume over
the fuse mount, the volume log is flooded with 2.5 GB of errors like the
following:
[2015-08-13 15:54:36.921622] W [fuse-bridge.c:1230:fuse_err_cbk]
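As a stop-gap while this is being debugged, the client log level can be
lowered so the mount log does not fill the disk; this only hides the
warnings, it does not fix them (volume name is a placeholder):

gluster volume set vol0 diagnostics.client-log-level ERROR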
On 08/13/2015 07:42 PM, Andreas Hollaus wrote:
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't
it be increased and decreased, but never removed?
If one brick of a replica 2 setup is down and files are written to the
volume, the dirty xattr is never set on the brick that is up.
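For anyone following along, these xattrs can be read straight off the bricks;
a healthy file would look roughly like this (path and values are illustrative,
and each trusted.afr value holds data/metadata/entry pending counters):

getfattr -d -m . -e hex /export/brick1/config.ior
# trusted.afr.dirty=0x000000000000000000000000
# trusted.afr.gvol-client-1=0x000000000000000000000000
# trusted.gfid must also be identical on both bricks, otherwise you get a
# gfid mismatch like the one reported above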
Thanks Joe for the correction. However, as of now we do not have a
mechanism to let the clients know how they should behave based on the
op-version, the way glusterd does. This is the only reason why we always
encourage upgrading the servers first, followed by the clients.
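For what it's worth, the current cluster op-version can be read from
glusterd's state file on any server node (the value shown is just an
example):

grep operating-version /var/lib/glusterd/glusterd.info
# operating-version=30703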
~Atin
On
On 08/13/2015 07:30 PM, Lakshmi Anusha wrote:
Hello,
We managed to collect the command outputs below:
Brick1 getfattr -d -m . -e hex
/opt/lvmdir/c2/brick/logfiles/security/EVENT_LOG.xml
getfattr: Removing leading '/' from absolute path names
# file:
-Atin
Sent from one plus one
On Aug 13, 2015 9:11 PM, Taylor Lewick tlew...@adknowledge.com wrote:
As a follow up, I created a test gluster cluster, installed 3.6, and
upgraded it to 3.7. I verified client machines running glusterfs 3.6 and
3.7 could mount the volume via glusterfs…
So
As a follow up, I created a test gluster cluster, installed 3.6, and upgraded
it to 3.7. I verified client machines running glusterfs 3.6 and 3.7 could
mount the volume via glusterfs…
So currently, clients running 3.7 can't mount a gluster volume via glusterfs
if the gluster servers are still running 3.6.
You can find the gluster news of the week #30/2015 below
https://medium.com/@msvbhat/gluster-news-of-the-week-30-2015-30452f44a144
You should be able to see the same on www.planet.gluster.org very shortly.
If you have anything that needs to be mentioned in the news of the week,
please add it here