On 16/12/15 22:59, Krutika Dhananjay wrote:
I guess I did not make myself clear. Apologies. I meant to say that printing a single list of counts aggregated from all bricks can be tricky and is susceptible to the possibility of the same entry getting counted multiple times if the inode needs a heal.
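(For reference, the existing CLI already reports these per brick rather than as one aggregate, which sidesteps the double counting; a quick sketch, with <VOLNAME> as a placeholder:

    # List entries pending heal, grouped per brick
    gluster volume heal <VOLNAME> info

    # Per-brick counts only
    gluster volume heal <VOLNAME> statistics heal-count
)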
Hi PuYun,
Would you be able to run rebalance again and take statedumps at intervals when you see high memory usage? Here are the details.

## How to generate a statedump
We can find the directory where statedump files are created using the 'gluster --print-statedumpdir' command. Create that directory if it does not exist.
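A minimal sequence (a sketch; <rebalance-pid> and <VOLNAME> are placeholders you need to fill in):

    # Find where statedump files get written, and make sure it exists
    mkdir -p "$(gluster --print-statedumpdir)"

    # Dump the rebalance daemon's state by sending it SIGUSR1
    # (find its PID with e.g.: ps aux | grep rebalance)
    kill -USR1 <rebalance-pid>

    # Alternatively, dump the brick processes of a volume via the CLI
    gluster volume statedump <VOLNAME>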
We plan to freeze the review gate by the end of this month. I would appreciate it if you could take a look and log your comments, if any.
Thanks,
Atin
On 11/17/2015 12:07 PM, Atin Mukherjee wrote:
> A gentle reminder for your review comments!!
>
> ~Atin
>
> On 11/09/2015 10:17 AM, Atin Mukherjee wrote:
>>
Hi,
One of my gluster deployments requires completely disabling Gluster caching and putting in proprietary caching logic. The setup is as follows:
- PCI flash will be used as flash
- Two nodes connected using 40Gb Mellanox cards
- Gluster version 3.7.6
With a fresh installation of Gluster, using fio with
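For reference, the usual way to switch off Gluster's client-side caching translators is through volume options; a sketch, with <VOLNAME> as a placeholder (which of these matter depends on the workload):

    gluster volume set <VOLNAME> performance.write-behind off
    gluster volume set <VOLNAME> performance.read-ahead off
    gluster volume set <VOLNAME> performance.io-cache off
    gluster volume set <VOLNAME> performance.quick-read off
    gluster volume set <VOLNAME> performance.stat-prefetch off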
Also, in /var/log/messages I see that glusterfsd, which is the brick process, got killed. If the rebalance process also got killed, can you provide the core?
If you can provide the memory usage of the gluster processes [the rebalance process, which runs as glusterfs, plus the brick processes] at the time of the OOM kill, that will be helpful.
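One way to capture that snapshot (a sketch; the bracketed grep pattern just keeps grep itself out of the output):

    # RSS/VSZ of every gluster process: rebalance, bricks, self-heal, etc.
    ps -eo pid,rss,vsz,etime,cmd | grep '[g]luster'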
On 12/17/2015 07:34 AM, Lindsay Mathieson wrote:
On 23/11/15 19:44, Krutika Dhananjay wrote:
The patch http://review.gluster.org/#/c/12717/ might just be the fix to the issue you ran into with performance.stat-prefetch on. With this patch, it should be possible to enable stat-prefetch.
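Once on a build with that fix, re-enabling it is a single volume option; <VOLNAME> is a placeholder:

    gluster volume set <VOLNAME> performance.stat-prefetch on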
On 17/12/15 13:10, Pranith Kumar Karampuri wrote:
Hi Lindsay,
I see that this particular patch was merged after 3.7.6 into the 3.7 branch. You should have this in 3.7.7.
Pranith
Thanks
--
Lindsay Mathieson
On 12/16/2015 02:24 PM, Poornima Gurusiddaiah wrote:
Answers inline
- Original Message -
From: "Pranith Kumar Karampuri"
To: "Ankireddypalle Reddy" , "Vijay Bellur"
, gluster-users@gluster.org,
"Shyam"
- Original Message -
> From: "Lindsay Mathieson"
> To: "Krutika Dhananjay"
> Cc: "Gluster Devel" , "gluster-users"
>
> Sent: Wednesday, December 16, 2015 6:56:03 AM
> Subject: Re:
Hi,
I have upgraded all my server/client gluster packages to version 3.7.6 and started the rebalance task again. It ran much longer than before, but it got OOM-killed and failed again.
== /var/log/messages ==
Dec 16 20:06:41 d001 kernel: glusterfsd invoked
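To pull the full OOM report out of the logs (a sketch):

    # OOM-killer report and the task list it prints
    grep -iE 'oom|out of memory|killed process' /var/log/messages

    # Same information from the kernel ring buffer
    dmesg | grep -i -A 5 'invoked oom-killer'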
1) We are using a gluster volume as backend storage for backup data generated by Commvault Simpana software. We tried creating one glfs_t instance for every glfd_t that was being generated, but that did not work. After around 8 to 10 glfs_t objects were created, glfs_init started failing.
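For context, each glfs_t is a full client instance (its own volfile fetch, threads, and caches), so creating one per file descriptor gets expensive quickly; the intended pattern is one glfs_t per volume, with many glfd_t handles taken from it. A minimal C sketch of that pattern, where the volume name "backupvol", the host "server1", and the file path are placeholders:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <glusterfs/api/glfs.h>   /* build with: gcc demo.c -lgfapi */

    int main(void)
    {
        /* One glfs_t per volume, initialized once for the whole process. */
        glfs_t *fs = glfs_new("backupvol");                   /* placeholder volume */
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007); /* placeholder host */
        if (glfs_init(fs) != 0) {
            perror("glfs_init");
            return 1;
        }

        /* Many glfd_t handles can then be taken from that single glfs_t. */
        const char *msg = "backup chunk\n";
        glfs_fd_t *fd = glfs_creat(fs, "/chunk-0001", O_WRONLY, 0644);
        if (fd) {
            glfs_write(fd, msg, strlen(msg), 0);
            glfs_close(fd);
        }

        glfs_fini(fs);   /* tear the single instance down once, at exit */
        return 0;
    }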
Hi All,
The weekly Gluster community meeting will start in ~90 minutes.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Thanks to everyone who attended today's meeting. The minutes and full chat log can be found here:
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-16/weekly_gluster_community_meeting.2015-12-16-12.01.html
Minutes (text):
On Wed, Dec 16, 2015 at 12:53:12PM +0100, Niels de Vos wrote:
> Hi All,
>
> The weekly Gluster community meeting will start in ~90 minutes.
Note, the subject was correct. The meeting starts in a few minutes.
Sorry for the confusion,
Niels
>
> Meeting details:
> - location: #gluster-meeting
Answers inline
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Ankireddypalle Reddy" , "Vijay Bellur"
> , gluster-users@gluster.org,
> "Shyam" , "Niels de Vos"
> Sent:
On 12/14/2015 03:44 AM, Andrus, Brian Contractor wrote:
All,
I have a small gluster filesystem on 3 nodes.
I have a perl program that multi-threads, and each thread writes its output to one of 3 files depending on some results. My trouble is that I am seeing missing lines from the
On 12/14/2015 04:44 PM, Udo Giacomozzi wrote:
Hi,
it happened again:
today I've upgraded some packages on node #3. Since the kernel had a minor update, I was asked to reboot the server, and did so.
At that time only one (non-critical) VM was running on that node. I've
checked twice and