On Wed, Feb 3, 2016 at 2:25 PM, Daniel Filipazzi wrote:
> I was about to set up GlusterFS with Samba and CTDB. I got just about
> everything to work, up until I was about to access the Samba share.
> From a Linux client I can see the share, but I get "unable to mount
>
Hi, we have multiple clusters of GlusterFS which are mostly alike. The typical
setup is as follows:
- Cluster of 3 nodes
- Replication factor of 3
- Each node has 1 brick on XFS, mounted with relatime and nodiratime (see the
  fstab sketch after this list)
- Each node has 8 disks in hardware RAID 0
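A minimal /etc/fstab sketch for such a brick (the device and mount point are
placeholders, not our real ones):

    /dev/sdb1  /data/brick1  xfs  relatime,nodiratime  0 0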
Hi,
On 02/03/2016 08:09 PM, ML mail wrote:
Dear Aravinda,
Thank you for the analysis and submitting a patch for this issue. I hope it can
make it into the next GlusterFS release 3.7.7.
As suggested, I ran the find_gfid_issues.py script on my brick on the two
master nodes and on the slave nodes, but
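For reference, the check itself is a one-liner (a sketch, assuming the script
takes the brick root as its only argument; /data/brick is a placeholder path):

    python find_gfid_issues.py /data/brick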
On Thu, 2016-02-04 at 12:05 +0530, Anoop C S wrote:
> On Fri, 2016-01-29 at 18:59 +0530, PankaJ Singh wrote:
> > Hi,
> >
> > Thanks Anoop for the help,
> > Could you please tell me when we can expect the new release with
> > this bug fix?
> >
>
> Please find the corresponding patch posted
Kaleb Keithley wrote on 04/02/2016 06:40:
>
> If you're a Debian Wheezy user please give the new packages a try.
Thanks for this Kaleb, I'll give it a try in the next few days.
I realise this is a newbie question, but just for my sanity: what's the best
procedure to upgrade my two nodes?
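For what it's worth, my understanding of the usual rolling pattern is roughly
this (a sketch only; VOLNAME is a placeholder, and I'd do one node at a time,
letting heal finish before touching the other). Is that roughly right?

    # on node 1
    service glusterfs-server stop
    killall glusterfsd glusterfs 2>/dev/null
    apt-get update && apt-get install glusterfs-server glusterfs-client
    service glusterfs-server start
    gluster volume heal VOLNAME info   # wait until no entries remain
    # then repeat on node 2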
- Original Message -
> From: "Ronny Adsetts"
>
> Kaleb Keithley wrote on 04/02/2016 06:40:
> >
> > If you're a Debian Wheezy user please give the new packages a try.
>
> Thanks for this Kaleb, I'll give it a try in the next few days.
>
> I realise
All,
It seems that snapshotting for volumes based on ZFS is still 'in the works'. Is
that the case?
snapshot create: failed: Snapshot is supported only for thin provisioned LV.
Ensure that all bricks of DATA are thinly provisioned LV.
Using glusterfs-3.7.6-1.el6.x86_64
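The error suggests the snapshot feature keys off LVM thin provisioning. For
anyone checking their own bricks, this is roughly how thin LVs are verified
and created (a sketch; vg0 and the sizes/names are placeholders):

    # a thin LV shows its pool in the Pool column:
    lvs -o lv_name,vg_name,pool_lv,lv_attr
    # creating a thin pool and a thin brick LV:
    lvcreate -L 100G -T vg0/thinpool
    lvcreate -V 80G -T vg0/thinpool -n brick1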
Brian Andrus
Interesting. I just encountered a hanging flush problem, too. It's probably
unrelated, but if you want to give this a try: a temporary workaround I found
was to drop caches, "echo 3 > /proc/sys/vm/drop_caches", on all the servers
prior to the flush operation.
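On a small cluster that's quick to script (the hostnames here are
placeholders):

    for h in gluster1 gluster2 gluster3; do
        ssh root@"$h" 'sync; echo 3 > /proc/sys/vm/drop_caches'
    done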
On February 4, 2016 10:06:45 PM PST,
This is regarding a glusterfs (3.7.6) issue we are facing at our end.
We have a logging file which saves event logs for the two nodes, and this
file is kept in sync using a replica volume. When we restart the nodes, we see
that the log file of one board is not in sync.
How to reproduce:
Hi,
I use glusterfs (version 3.7.6) in replicate mode to sync between two boards
in a node.
When one of the boards is locked, replaced with a new board, and restarted, we
see that sync is lost between the two boards. The mounted glusterfs volume is
not present on the replaced board.
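For reference, a common way to re-attach a replaced board's brick is the
following (a sketch; VOL, old-board and new-board are placeholder names):

    gluster peer probe new-board
    gluster volume replace-brick VOL old-board:/brick new-board:/brick \
        commit force
    gluster volume heal VOL full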
Output
Am 2016-02-04 15:51, schrieb Raghavendra Bhat:
It depends upon the memory available and the workload. In this case, the
files being copied are huge, so more I/O happens to completely copy each
file.
Can you please give the output of "gluster volume info <volname>"?
Regards,
Raghavendra
On
Hi Gluster community,
Could someone who has insight into how rpc_client_ping_timer_expired operates
help me out? I would love to learn more about it. The reason behind this is
that last week I had 2 fuse clients produce the same disconnect message, but
they reconnected immediately afterwards. What I'd like to know
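(For background, the timer in question is driven by the network.ping-timeout
volume option, which defaults to 42 seconds, and it can be inspected and tuned
per volume; VOL below is a placeholder.)

    gluster volume get VOL network.ping-timeout
    gluster volume set VOL network.ping-timeout 42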
+soumyak, +rtalur.
On Fri, Jan 29, 2016 at 2:34 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On 01/28/2016 05:05 PM, Pranith Kumar Karampuri wrote:
>
>> With baul jianguo's help I am able to see that FLUSH fops are hanging for
>> some reason.
>>
>> pk1@localhost - ~/Downloads
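For anyone who wants to check for hung fops on their own setup, a statedump
is the usual starting point (a sketch; VOL is a placeholder and the dump path
is the default one):

    gluster volume statedump VOL
    ls /var/run/gluster/*.dump.*   # grep the dumps for pending FLUSH fops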
On 02/05/2016 08:45 AM, songxin wrote:
Hi,
I use glusterfs (*version 3.7.6*) in replicate mode to sync between
two boards in a node.
When one of the boards is locked, replaced with a new board, and
restarted, we see that sync is lost between the two boards. The mounted
glusterfs volume is
On 02/03/2016 10:12 PM, Simon Turcotte-Langevin wrote:
Hi, we have multiple clusters of GlusterFS which are mostly alike. The
typical setup is as follows:
- Cluster of 3 nodes
- Replication factor of 3
- Each node has 1 brick on XFS, mounted with relatime and nodiratime
- Each node has 8 disks
On 02/04/2016 08:26 PM, Khoi Mai wrote:
Hi Gluster community,
Could someone who has insight into how rpc_client_ping_timer_expired
operates help me out? I would love to learn more about it. The reason behind
this is that last week I had 2 fuse clients produce the same disconnect
message, but reconnected
So Kaleb just built updated 3.7.6-2 packages for Wheezy, which should
work. Please check them out.
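Installing them should be the usual apt dance (a sketch; the repository line
is my assumption of the layout on download.gluster.org and may differ):

    # echo 'deb http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.6/Debian/wheezy/apt wheezy main' \
    #   > /etc/apt/sources.list.d/gluster.list
    apt-get update
    apt-get install glusterfs-server glusterfs-client
    dpkg -l glusterfs-server        # confirm 3.7.6-2 is installed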
On Sat, Jan 30, 2016 at 3:14 PM, Kaushal M wrote:
> On Tue, Jan 26, 2016 at 10:05 PM, Kaleb Keithley wrote:
>>
>>
>> - Original Message -
>>>
That's correct, I had in total 394 files and directories which were not
present on either of my two master nodes' bricks. So, as you suggested, I have
now stopped the geo-rep and deleted the files and directories concerned on the
slave node, and restarted the geo-rep. It's all clean again, but I will
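The stop/delete/start cycle, for anyone needing it (a sketch; mastervol,
slavehost and slavevol are placeholder names):

    gluster volume geo-replication mastervol slavehost::slavevol stop
    # ... remove the stray files/directories on the slave brick ...
    gluster volume geo-replication mastervol slavehost::slavevol start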
It depends upon the memory available and the workload. In this case, the
files being copied are huge, so more I/O happens to completely copy each
file.
Can you please give the output of "gluster volume info <volname>"?
Regards,
Raghavendra
On Wed, Feb 3, 2016 at 4:54 PM, Taste-Of-IT
On Mon, Feb 1, 2016 at 2:24 PM, Soumya Koduri wrote:
>
>
> On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
>
>> Wait. It seems to be my bad.
>>
>> Before unmounting I do drop_caches (2), and the glusterfs process's CPU
>> usage goes to 100% for a while. I haven't waited for it