On Wed, Nov 16, 2016 at 3:31 AM, Ankireddypalle Reddy wrote:
> Kaushal/Pranith,
> Thanks for clarifying this. As I understand, there are two IDs. Please
> correct me if there is a mistake in my assumptions:
> 1) The HASH generated by DHT, which will generate the same ID for a given
> file every time.
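For reference, the hash ranges DHT assigns to each brick are recorded in the
trusted.glusterfs.dht xattr on the brick directories; a quick way to inspect
them (run against a brick path, not the mount; the path below is a
placeholder):

  # Show the DHT layout (hash range) recorded on a directory of one brick.
  getfattr -n trusted.glusterfs.dht -e hex /data/brick1/vol/somedir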
Hi all,
I just subscribed to the list; we have been testing GlusterFS for some time
with a typical small-file workload: more than 90-95% of accesses are stats,
on around 1 TB of data with millions of directories and files of a few KB.
I am doing some tests with md-cache, following instructions
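For anyone trying the same thing, these are the md-cache options commonly
suggested in the 3.8/3.9-era instructions; "myvol" is a placeholder volume
name and the timeout values are only illustrative:

  # Enable upcall-based invalidation so cached metadata stays consistent.
  gluster volume set myvol features.cache-invalidation on
  gluster volume set myvol features.cache-invalidation-timeout 600
  # Let md-cache serve stat/xattr data from cache and honour invalidations.
  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol performance.cache-invalidation on
  gluster volume set myvol performance.md-cache-timeout 600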
On Tue, Nov 15, 2016 at 4:33 AM, Micha Ober wrote:
> Hi,
>
> I upgraded an installation of GlusterFS on Ubuntu 14.04.3 from version
> 3.4.2 to 3.8.5.
> A few hours after the upgrade, I noticed files in "split-brain" state. I
> never had split-brain files in months of operation
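A quick way to list the affected files is the heal-info command ("myvol"
below is a placeholder volume name):

  # List files that AFR currently flags as split-brained.
  gluster volume heal myvol info split-brain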
Pranith,
Thanks for getting back on this. I am trying to see how gfid can be
generated programmatically. Given a file name, how do we generate the gfid
for it? I was reading some of the email threads about it where it was
mentioned that gfid is generated based upon the parent directory
Sorry, I didn't understand the question. Are you asking, given a file on
gluster, how to get the gfid of the file?
# getfattr -d -m . -e hex /path/to/file shows it
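To expand on that: trusted.* xattrs are only visible on the bricks, while a
FUSE mount exposes the same value through a virtual xattr. A sketch, with
placeholder paths:

  # On a brick, the gfid is stored in the trusted.gfid xattr.
  getfattr -n trusted.gfid -e hex /data/brick1/vol/path/to/file
  # On a FUSE mount, glusterfs.gfid.string returns it in UUID form.
  getfattr -n glusterfs.gfid.string /mnt/glustervol/path/to/file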
On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy wrote:
> Hi,
>
> Is the mapping from file name to gfid an
hi everyone
it does not work for me, is it supposed to work?
Autofs does not complain nor report any errors or problems.
* -fstype=glusterfs -rw 10.5.6.49,10.5.6.100:/USER-HOME/&
but this does:
everything -fstype=glusterfs -rw 10.5.6.49,10.5.6.100:/USER-HOME
so wildcard keys do not work?
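For comparison, a minimal sketch of how a wildcard glusterfs map is usually
laid out (assuming an indirect map under /home; note the mount options are
comma-joined after a single dash):

  # /etc/auto.master (sketch)
  /home  /etc/auto.glusterhome
  # /etc/auto.glusterhome -- '&' expands to the matched key
  *  -fstype=glusterfs,rw  10.5.6.49,10.5.6.100:/USER-HOME/&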
Nithya -
Thanks for the reply, I will send this at the top to keep the thread from
getting really ugly.
We did indeed copy from the individual bricks in an effort to speed up the
copy. We had one rsync running from each brick to the mount point for the new
cluster. As stated, we skipped all
Hi Atul,
In short: it is due to client-side quorum behavior.
Detailed info:
I see that there are 3 nodes in the cluster, i.e. master1, master2, and
compute01. However, the volume is hosted only on master1 and master2.
I also see that you have enabled server-side quorum, and client-side quorum
from
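To check how quorum is configured on the volume ("myvol" is a placeholder
volume name):

  # Client-side quorum (enforced by AFR on the clients).
  gluster volume get myvol cluster.quorum-type
  # Server-side quorum (enforced by glusterd).
  gluster volume get myvol cluster.server-quorum-type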
All -
We recently moved from an old cluster running 3.7.9 to a new one running
3.8.4. To move the data we rsync'd all files from the old gluster nodes that
were not in the .glusterfs directory and had a size greater than zero (to
avoid stub files) through the front-end of the new cluster.
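A sketch of that kind of brick-to-mount copy (paths are placeholders; it
skips gluster's internal .glusterfs tree and zero-byte stub files):

  # Copy real data from one brick into the new volume via its mount point.
  rsync -av --exclude='/.glusterfs' --min-size=1 \
      /data/brick1/vol/ /mnt/newvol/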
On 11/15/2016 03:46 AM, Shirwa Hersi wrote:
Hi,
I'm using glusterfs geo-replication on version 3.7.11. One of the
bricks becomes faulty and does not replicate to the slave bricks after I
start the geo-replication session.
Following are the logs related to the faulty brick, can someone please
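The per-brick session state (Active/Passive/Faulty) can be checked with the
status command; the master volume, slave host, and slave volume below are
placeholders:

  # Show detailed geo-replication status for each brick.
  gluster volume geo-replication mastervol slavehost::slavevol status detail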
Hi all,
Apologies for the late notice.
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting)
- date:
If I understand the query correctly, the problem is that gluster takes
more than 20 seconds to time out even though the brick was offline for
more than 35s. With that assumption, I have some
How did you determine that the timer expired only after 35s, from the log
file? If so, glusterfs waits some
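The timeout being discussed here is normally the client-side ping timeout,
which is tunable per volume ("myvol" is a placeholder; 42 seconds is the
usual default):

  # Check the current ping timeout.
  gluster volume get myvol network.ping-timeout
  # Lower it to fail over faster when a brick goes offline.
  gluster volume set myvol network.ping-timeout 20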
ok, thank you.
On 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 12:47 PM, songxin wrote:
>
> Hi Atin,
>
> I think the root cause is in the function glusterd_import_friend_volume as
> below.
>
> int32_t
> glusterd_import_friend_volume (dict_t *peer_data, size_t count)
> {
> ...
> ret =