Hi,
I am researching the kind of hardware that would be best for an archival
use case. We probably need to keep the data for anywhere between 20 and 40 years.
Do let us know what you think would be best.
Pranith
On Tue, Jun 4, 2019 at 7:36 AM Xie Changlong wrote:
> To me, all 'df' commands on specific (not all) NFS clients hung forever.
> The temporary solution is to disable performance.nfs.write-behind and
> cluster.eager-lock.
>
> I'll try to get more info back if I encounter this problem again.
>
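For reference, the two options in that workaround are per-volume settings; a
rough sketch, assuming your volume name in place of <volname>:

    gluster volume set <volname> performance.nfs.write-behind off
    gluster volume set <volname> cluster.eager-lock off
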
If you ob
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Mar 2
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Mar
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa
> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez
>> wrote:
>>
>>> Hi Raghavendra,
>>>
>>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa <
>>> rgowd...@redhat.c
On Tue, Oct 30, 2018 at 2:51 PM Hongzhi, Song
wrote:
> Hi Pranith and other friends,
>
> Does this CVE apply for glusger-v3.11.1?
>
It was later found not to be a CVE, only a memory leak.
No, this bug was introduced in the 3.12 branch and fixed in the 3.12 branch as well.
Patch that introduced leak:
http
andling at early start of gluster ?
>
As far as I understand there shouldn't be. But I would like to double-check
that this is indeed the case if there are any steps to re-create the issue.
>
>
> Regards,
>
> Abhishek
>
> On Tue, Sep 25, 2018 at 2:27 PM Pranith Kumar Karam
; sent in between.
>
But the crash happened inside the exit() code, which is in libc and doesn't
access any glusterfs data structures.
>
> Regards,
> Abhishek
>
> On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
that the RC is correct
and then I will send out the fix.
>
> Regards,
> Abhishek
>
> On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL
>> wr
the maintained releases and run the workloads you have for some time to
test things out; once you feel confident, you can put it in production.
HTH
>
> Thanks
> Ashayam Gupta
>
> On Tue, Sep 18, 2018 at 11:00 AM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
: volume gvol0-md-cache
> 81: type performance/md-cache
> 82: subvolumes gvol0-open-behind
> 83: end-volume
> 84:
> 85: volume gvol0
> 86: type debug/io-stats
> 87: option log-level INFO
> 88: option latency-measurement off
> 89: option cou
Please also attach the logs for the mount points and the glustershd.log files.
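In case it helps, a note on where those usually live (default log directory
assumed; the FUSE mount log is named after the mount point with '/' replaced
by '-'):

    /var/log/glusterfs/glustershd.log
    /var/log/glusterfs/mnt-data.log    # example for a client mounted at /mnt/data
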
On Thu, Sep 20, 2018 at 11:41 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> How did you do the upgrade?
>
> On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa
> wrote:
>
>>
>
How did you do the upgrade?
On Thu, Sep 20, 2018 at 11:01 AM Raghavendra Gowdappa
wrote:
>
>
> On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa wrote:
>
>> Can you give volume info? Looks like you are using 2 way replica.
>>
>
> Yes indeed.
> gluster volume create gvol0 replica 2 gfs
On Mon, Sep 17, 2018 at 4:14 AM Ashayam Gupta
wrote:
> Hi All,
>
> We are currently using glusterfs for storing large files with write-once
> and multiple concurrent reads, and were interested in understanding one of
> the features of glusterfs called sharding for our use case.
>
> So far from th
On Fri, Sep 7, 2018 at 7:31 PM Dave Sherohman wrote:
> On Fri, Sep 07, 2018 at 10:46:01AM +0530, Pranith Kumar Karampuri wrote:
> > On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman
> wrote:
> >
> > > On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman wrote:
> On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> > Is there anything I can do to kick the self-heal back into action and
> > get those final 59 entries cleaned up?
>
> In response to the request about what version of gluster
Which version of glusterfs are you using?
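A quick way to check, in case it is not obvious:

    gluster --version
    glusterfsd --version
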
On Tue, Sep 4, 2018 at 4:26 PM Dave Sherohman wrote:
> Last Friday, I rebooted one of my gluster nodes and it didn't properly
> mount the filesystem holding its brick (I had forgotten to add it to
> fstab...), so, when I got back to work on Monday, its r
> >>> think it will go up when more users will cause more traffic -> more
> >>> work on servers), 'gluster volume heal shared info' shows no entries,
> >>> status:
> >>>
> >>> Status of volume: shared
> >>> Gluster process
ocket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2018-08-22 06:19:23.809366] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
> Just wondered if this could related anyhow.
>
> 2018-08-21 8:17 GMT+02:00 Pranith Kumar Karampuri :
> >
> >
> > On Tue
> ext4_readdir
>
> Or do you want to download the file /tmp/perf.gluster11.bricksdd1.out
> and examine it yourself? If so i could send you a link.
>
Thank you! Yes, a link would be great. I am not as good with the kernel side
of things, so I will have to show this information to s
On Tue, Aug 21, 2018 at 10:13 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote:
>
>> Regarding hardware the machines are identical. Intel Xeon E5-1650 v3
>> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 P
iotwr3[kernel.kallsyms] [k]
> do_syscall_64
>
> Do you need different or additional information?
>
This looks like there are a lot of readdirs going on, which is different from
what we observed earlier. How many seconds did you run perf record for? Will
it be possible for you to d
entries: 0
>
> Looks good to me.
>
>
> 2018-08-20 10:51 GMT+02:00 Pranith Kumar Karampuri :
> > There are a lot of Lookup operations in the system. But I am not able to
> > find why. Could you check the output of
> >
> > # gluster volume heal info | grep -i n
> >>
> >> i hope i did get it right.
> >>
> >> gluster volume profile shared start
> >> wait 10 minutes
> >> gluster volume profile shared info
> >> gluster volume profile shared stop
> >>
> >> If that's ok, i
ile shared info
> gluster volume profile shared stop
>
> If that's ok, i've attached the output of the info command.
>
>
> 2018-08-17 8:31 GMT+02:00 Pranith Kumar Karampuri :
> > Please do volume profile also for around 10 minutes when CPU% is high.
> >
>
Please do volume profile also for around 10 minutes when CPU% is high.
On Fri, Aug 17, 2018 at 11:56 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> As per the output, all io-threads are using a lot of CPU. It is better to
> check what the volume profile is to see what i
: "Running GlusterFS Volume Profile Command" and attach the output of
"gluster volume profile info".
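A rough sketch of that capture sequence, assuming the volume is named "shared"
as above:

    gluster volume profile shared start
    sleep 600    # let it collect for ~10 minutes while CPU% is high
    gluster volume profile shared info > /tmp/profile.shared.txt
    gluster volume profile shared stop
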
On Fri, Aug 17, 2018 at 11:24 AM Hu Bert wrote:
> Good morning,
>
> i ran the command during 100% CPU usage and attached the file.
> Hopefully it helps.
>
> 2018-08-17
Could you do the following on one of the nodes where you are observing high
CPU usage and attach that file to this thread? From it we can find which
threads/processes are leading to the high usage. Do this for, say, 10 minutes
when you see the ~100% CPU.
top -bHd 5 > /tmp/top.${HOSTNAME}.txt
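If you prefer a bounded run, the same capture can be wrapped in timeout
(assuming GNU coreutils timeout is available), 10 minutes at 5-second samples:

    timeout 600 top -bHd 5 > /tmp/top.${HOSTNAME}.txt
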
On Wed, Aug 15, 201
On Fri, Jul 27, 2018 at 1:32 PM, Hu Bert wrote:
> 2018-07-27 9:22 GMT+02:00 Pranith Kumar Karampuri :
> >
> >
> > On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert
> wrote:
> >>
> >> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri >:
> >> >
On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert wrote:
> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri :
> >
> >
> > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert
> wrote:
> >>
> >> > Do you already have all the 19 directories already created? If not
On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert wrote:
> > Do you already have all the 19 directories already created? If not
> could you find out which of the paths need it and do a stat directly
> instead of find?
>
> Quite probable not all of them have been created (but counting how
> much would
ctories already created? If not
could you find out which of the paths need it and do a stat directly
instead of find?
>
> 2018-07-26 11:29 GMT+02:00 Pranith Kumar Karampuri :
> >
> >
> > On Thu, Jul 26, 2018 at 2:41 PM, Hu Bert wrote:
> >>
> >> > Sorry, b
n the good bricks, so this is expected.
> (sry, mail twice, didn't go to the list, but maybe others are
> interested... :-) )
>
> 2018-07-26 10:17 GMT+02:00 Pranith Kumar Karampuri :
> >
> >
> > On Thu, Jul 26, 2018 at 12:59 PM, Hu Bert
> wrote:
> >>
releases. But it is solvable.
You can follow that issue to see progress and when it is fixed etc.
>
> 2018-07-26 8:56 GMT+02:00 Pranith Kumar Karampuri :
> > Thanks a lot for detailed write-up, this helps find the bottlenecks
> easily.
> > On a high level, to handle this direc
rst 3 digits of ID)/(next 3 digits of
> ID)/$ID/$misc_formats.jpg
>
> That's why we have that many (sub-)directories. Files are only stored
> in the lowest directory hierarchy. I hope i could make our structure
> at least a bit more transparent.
>
> i hope there's
On Mon, Jul 23, 2018 at 4:16 PM, Hu Bert wrote:
> Well, over the weekend about 200GB were copied, so now there are
> ~400GB copied to the brick. That's far beyond a speed of 10GB per
> hour. If I copied the 1.6 TB directly, that would be done within max 2
> days. But with the self heal this will
Hi,
We want gluster's monitoring/observability to be as easy as possible
going forward. As part of reaching this goal, we are starting this
initiative to improve existing APIs/commands and create new
APIs/commands in gluster so that the admin can integrate it with whichever
monitori
You can use POSIX locks, i.e. fcntl-based advisory locks, on glusterfs just
like on any other fs.
On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote:
> Hello there,
>
>
>
> I want to know if there is a feature that allows a user to add a lock on a file
> when their app is modifying that file, so that other apps co
ot need to scale out, stick with a single server
> (+DRBD optionally for HA), it will give you the best performance
>
>
>
> Ondrej
>
>
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Tuesday, March 13, 2018 9:10 AM
>
> *To:* Ondr
execute on the NFS share, but 90 seconds on GlusterFS –
> i.e. 10 times slower.
>
Do you have the new features enabled?
performance.stat-prefetch=on
performance.cache-invalidation=on
performance.md-cache-timeout=600
network.inode-lru-limit=5
performance.nl-cache=on
performance.nl-cache-tim
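For reference, these are per-volume options set with "gluster volume set"; a
minimal sketch, assuming a volume named <volname> and leaving out the values
truncated above:

    gluster volume set <volname> performance.stat-prefetch on
    gluster volume set <volname> performance.cache-invalidation on
    gluster volume set <volname> performance.md-cache-timeout 600
    gluster volume set <volname> performance.nl-cache on
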
On Tue, Mar 13, 2018 at 12:58 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
> ondrej.valou...@s3group.com> wrote:
>
>> Hi,
>>
>> Gluster will never perform well for small files.
>
On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
ondrej.valou...@s3group.com> wrote:
> Hi,
>
> Gluster will never perform well for small files.
>
> I believe there is nothing you can do with this.
>
It is bad compared to a disk filesystem but I believe it is much closer to
NFS now.
Andreas,
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi wrote:
> Pranith,
>
>
>
>> We found that compound fops is not giving better performance in
>> replicate and I am thinking of removing that code. Sent the patch at
>> https://review.gluster.org/19655
>>
>>
> If I understand it right, as of now AF
I found the following memory leak present in 3.13, 4.0 and master:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078
I will clone/port to 4.0 as soon as the patch is merged.
On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero wrote:
> Hi all,
>
> Have tested on CentOS Linux release 7.4.1708 (Core)
On Mon, Jan 29, 2018 at 1:26 PM, Samuli Heinonen
wrote:
> Pranith Kumar Karampuri kirjoitti 29.01.2018 07:32:
>
>> On 29 Jan 2018 10:50 am, "Samuli Heinonen"
>> wrote:
>>
>> Hi!
>>>
>>> Yes, thank you for asking. I found out this line
er releasing locks?
Under what circumstances does that happen? Is it only that the process exporting
the brick crashes, or is there a possibility of data corruption?
No data corruption. The brick process where you did clear-locks may crash.
Best regards,
Samuli Heinonen
Pranith Kumar Karampuri wrote:
>
Hi,
Did you find the command from strace?
On 25 Jan 2018 1:52 pm, "Pranith Kumar Karampuri"
wrote:
>
>
> On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen
> wrote:
>
>> Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09:
>>
>>> On Thu, Jan 2
Adding devs who work on it
On 23 Jan 2018 10:40 pm, "Alan Orth" wrote:
> Hello,
>
> I saw that parallel-readdir was an experimental feature in GlusterFS
> version 3.10.0, became stable in version 3.11.0, and is now recommended for
> small file workloads in the Red Hat Gluster Storage Server
> do
On Thu, Jan 25, 2018 at 1:49 PM, Samuli Heinonen
wrote:
> Pranith Kumar Karampuri kirjoitti 25.01.2018 07:09:
>
>> On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen
>> wrote:
>>
>> Hi!
>>>
>>> Thank you very much for your help so far. Could you p
> Samuli Heinonen
>
> Pranith Kumar Karampuri <pkara...@redhat.com>
>> 23 January 2018 at 10.30
>>
>>
>> On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samp...@neutraali.net> wrote:
>>
>> Pranith Kumar Kara
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen
wrote:
> Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34:
>
>> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen
>> wrote:
>>
>> Hi again,
>>>
>>> here is more information regarding issue describ
On Tue, Jan 23, 2018 at 1:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen
> wrote:
>
>> Hi again,
>>
>> here is more information regarding issue described earlier
>>
>> It l
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen
wrote:
> Hi again,
>
> here is more information regarding issue described earlier
>
> It looks like self healing is stuck. According to "heal statistics" crawl
> began at Sat Jan 20 12:56:19 2018 and it's still going on (It's around Sun
> Jan 21 2
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa
wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>
What is the impact on
On Thu, Jan 11, 2018 at 10:44 PM, Darrell Budic
wrote:
> Sounds like a good option to look into, but I wouldn’t want it to take
> time & resources away from other, non-GPU based, methods of improving this.
> Mainly because I don’t have discrete GPUs in most of my systems. While I
> could add them
Serkan,
Will it be possible to provide gluster volume profile
info output with 3.10.5 vs 3.12.0? That should give us clues about what
could be happening.
On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban wrote:
> Hi,
> Servers are in production with 3.10.5, so I cannot provide 3.12
> relate
The following generally means it is not able to connect to any of the
glusterds in the cluster.
[1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140
(Success)
[1970-01-02 10:54:04.420422] I [MSGID: 101190]
[ev
Adding gluster-devel
Raghavendra,
I remember we discussed handling these kinds of errors via ping-timer
expiry? I may have missed the final decision on how this was to be
handled, so I am asking you again ;-)
On Thu, Jul 13, 2017 at 2:14 PM, Øyvind Krosby wrote:
> I have been tryi
now.
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Monday, July 10, 2017 8:31 AM
> *To:* Sanoj Unnikrishnan
> *Cc:* Ankireddypalle Reddy; Gluster Devel (gluster-de...@gluster.org);
> gluster-users@gluste
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina wrote:
> >
> > You should first upgrade servers and then clients. New servers can
> > understand old clients, but it is not easy for old servers to understand
> new
> > clients in case it started doing something new.
>
> But isn't that the reason op
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan
wrote:
> I upgraded from 3.8.12 to 3.8.13 without issues.
>
> Two replicated volumes with online update, upgraded clients first and
> followed by servers upgrade, "stop glusterd, pkill gluster*, update
> gluster*, start glusterd, monitor healing proce
print the backtrace of the glusterfsd process when triggering the xattr
>> removal.
>> I will write the script and reply back.
>>
>> On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> Ram,
>>>
rfs6sds.commvault.com:/ws/disk8/ws_brick
>
> Brick58: glusterfs4sds.commvault.com:/ws/disk9/ws_brick
>
> Brick59: glusterfs5sds.commvault.com:/ws/disk9/ws_brick
>
> Brick60: glusterfs6sds.commvault.com:/ws/disk9/ws_brick
>
> Options Reconfigured:
>
> performance.readdir-ahead: on
>
> d
irectory in ec/afr doesn't have gfid
2) Something else removed these xattrs.
What is your volume info? Maybe that will give more clues.
PS: sys_fremovexattr is called only from posix_fremovexattr(), so that
doesn't seem to be the culprit, as it also has checks to guard against
gfid/vo
ng the
> volume the attributes were again lost. Had to stop glusterd, set the attributes,
> and then start glusterd. After that the volume start succeeded.
>
Which version is this?
>
>
> Thanks and Regards,
>
> Ram
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkar
On Fri, Jul 7, 2017 at 9:15 PM, Pranith Kumar Karampuri wrote:
> Did anything special happen on these two bricks? It can't happen in the
> I/O path:
> posix_removexattr() has:
>         if (!strcmp (GFID_XATTR_KEY, name)) {
>                 gf_msg (
Did anything special happen on these two bricks? It can't happen in the I/O
path:
posix_removexattr() has:
        if (!strcmp (GFID_XATTR_KEY, name)) {
                gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED,
                        "Remove xattr called on gfid
On Thu, Jun 29, 2017 at 8:12 PM, Paolo Margara
wrote:
> Il 29/06/2017 16:27, Pranith Kumar Karampuri ha scritto:
>
>
>
> On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara
> wrote:
>
>> Hi Pranith,
>>
>> I'm using this guide http
ick process (and waiting for the heal to
> complete), this is fixing my problem.
>
> Many thanks for the help.
>
>
> Greetings,
>
> Paolo
>
> Il 29/06/2017 13:03, Pranith Kumar Karampuri ha scritto:
>
> Paolo,
> Which document did you follow for the upgr
> Yes, but ensure there are no pending heals like Pranith mentioned.
> https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.7/
> lists the steps for upgrade to 3.7 but the steps mentioned there are
> similar for any rolling upgrade.
>
> -Ravi
>
>
> Greetin
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N
wrote:
> On 06/28/2017 06:52 PM, Paolo Margara wrote:
>
>> Hi list,
>>
>> yesterday I noted the following lines into the glustershd.log log file:
>>
>> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
>> [afr-self-heald.c:479:afr_shd_index_sweep]
>> 0-
Hi Paolo,
I just checked the code in v3.8.12 and it should have been created when the
brick starts after you upgrade the node. How did you do the upgrade?
On Wed, Jun 28, 2017 at 6:52 PM, Paolo Margara
wrote:
> Hi list,
>
> yesterday I noted the following lines into the glustershd.log log file:
>
via NFS
> to respect the group write permissions?
>
+Niels, +Jiffin
I added 2 more guys who work on NFS to check why this problem happens in
your environment. Let's see what information they may need to find the
problem and solve this issue.
>
> Thanks
>
> Pat
>
>
>
>
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley wrote:
>
>>
>> Hi,
>>
>> Today we experimented with some of the FUSE options that we found in the
>>
> -b
>
> - Original Message -
>
> From: "Pat Haley"
> To: "Ben Turner"
> Sent: Monday, June 12, 2017 4:54:00 PM
> Subject: Re: [Gluster-users] Slow write times to gluster disk
>
>
> Hi Ben,
>
> I guess I'm confused about
On Wed, Jun 21, 2017 at 9:12 PM, Shyam wrote:
> On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote:
>
>>
>>
>> On Tue, Jun 20, 2017 at 7:37 PM, Shyam <srang...@redhat.com> wrote:
>>
>> Hi,
>>
>> Release tagging h
On Tue, Jun 20, 2017 at 7:37 PM, Shyam wrote:
> Hi,
>
> Release tagging has been postponed by a day to accommodate a fix for a
> regression that has been introduced between 3.11.0 and 3.11.1 (see [1] for
> details).
>
> As a result 3.11.1 will be tagged on the 21st June as of now (further
> delay
On Tue, Jun 20, 2017 at 10:49 AM, atris adam wrote:
> Hello everybody
>
> I have 3 datacenters in different regions, Can I deploy my own cloud
> storage with the help of glusterfs on the physical nodes?If I can, what are
> the differences between cloud storage glusterfs and local gluster storage?
On Tue, Jun 6, 2017 at 6:54 PM, Shyam wrote:
> Hi,
>
> It's time to prepare the 3.11.1 release, which falls on the 20th of
> each month [4], and hence would be June-20th-2017 this time around.
>
> This mail is to call out the following,
>
> 1) Are there any pending *blocker* bugs that need to be
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee wrote:
>
> On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 11/06/2017 10:46 AM, WK wrote:
>> > I thought you had removed vna as defective and then ADDED in vnh as
>> > the replacement?
>> >
>> > Why is
On Sat, Jun 10, 2017 at 2:53 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9/06/2017 9:56 PM, Pranith Kumar Karampuri wrote:
>
> > gluster volume remove-brick datastore4 replica 2
>> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
>>
On Fri, Jun 9, 2017 at 12:41 PM, wrote:
> > I'm thinking the following:
> >
> > gluster volume remove-brick datastore4 replica 2
> > vna.proxmox.softlog:/tank/vmdata/datastore4 force
> >
> > gluster volume add-brick datastore4 replica 3
> > vnd.proxmox.softlog:/tank/vmdata/datastore4
>
> I think
+Raghavendra/Nithya
On Tue, Jun 6, 2017 at 7:41 PM, Jarsulic, Michael [CRI] <
mjarsu...@bsd.uchicago.edu> wrote:
> Hello,
>
> I am still working at recovering from a few failed OS hard drives on my
> gluster storage and have been removing, and re-adding bricks quite a bit. I
> noticed yesterday n
This mail was not in the same thread as the earlier one because the subject
has an extra "?==?utf-8?q? ", so I thought it was not answered and answered
it again. Sorry about that.
On Sat, Jun 3, 2017 at 1:45 AM, Xavier Hernandez
wrote:
> Hi Serkan,
>
> On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban
>
On Thu, Jun 8, 2017 at 12:49 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban
> wrote:
>
>> >Is it possible that this matches your observations ?
>> Yes that matches what I see. So 19 files is being i
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban wrote:
> >Is it possible that this matches your observations ?
> Yes, that matches what I see. So 19 files are being healed in parallel by 19
> SHD processes. I thought only one file was being healed at a time.
> Then what is the meaning of disperse.shd-max-thre
> http://mseas.mit.edu/download/phaley/GlusterUsers/TestVol/
> dd_testvol_gluster.txt
>
> Pat
>
>
> On 05/30/2017 09:27 PM, Pranith Kumar Karampuri wrote:
>
> Pat,
>What is the command you used? As per the following output, it seems
> like at least one write ope
fs): 1.4 GB/s
>
> The profile for the gluster test-volume is in
>
> http://mseas.mit.edu/download/phaley/GlusterUsers/TestVol/
> profile_testvol_gluster.txt
>
> Thanks
>
> Pat
>
>
>
>
> On 05/30/2017 12:10 PM, Pranith Kumar Karampuri wrote:
>
> Let's
> Thanks for the tip. We now have the gluster volume mounted under /home.
> What tests do you recommend we run?
>
> Thanks
>
> Pat
>
>
>
> On 05/17/2017 05:01 AM, Pranith Kumar Karampuri wrote:
>
>
>
> On Tue, May 16, 2017 at 9:20 PM, Pat Haley wrote:
On Wed, May 24, 2017 at 9:10 PM, Joe Julian wrote:
> Forwarded for posterity and follow-up.
>
> Forwarded Message
> Subject: Re: GlusterFS removal from Openstack Cinder
> Date: Fri, 05 May 2017 21:07:27 +
> From: Amye Scavarda
> To: Eric Harney , Joe Julian
> , Vijay Bel
Adding gluster-users, developers who work on distribute module of gluster.
On Fri, May 19, 2017 at 12:58 PM, 郭鸿岩(基础平台部)
wrote:
> Hello,
>
> I am a user of Gluster 3.8 from Beijing, China.
> I ran into a problem. I added a brick to a volume, but the brick is on
> the / disk, the same
+Snapshot maintainer. I think he is away for a week or so. You may have to
wait a bit more.
On Wed, May 10, 2017 at 2:39 AM, Chris Jones wrote:
> Hi All,
>
> This was discussed briefly on IRC, but got no resolution. I have a
> Kubernetes cluster running heketi and GlusterFS 3.10.1. When I try to
On Tue, May 9, 2017 at 12:57 PM, Ingard Mevåg wrote:
> You're not counting wrong. We won't necessarily transfer all of these
> files to one volume though. It was more an example of the distribution of
> file sizes.
> But as you say healing might be a problem, but then again. This is archive
> sto
Could you provide gluster volume info, gluster volume status and the output of
the 'top' command so that we know which processes are acting up in the volume?
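Something along these lines, redirected to files so they can be attached
(output paths are just examples):

    gluster volume info   > /tmp/volinfo.txt
    gluster volume status > /tmp/volstatus.txt
    top -bn1              > /tmp/top.$(hostname).txt
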
On Mon, May 15, 2017 at 8:02 AM, Joshua Coyle
wrote:
> Hey Guys,
>
>
>
> I think I’ve got a couple of stuck processes on one of my arbiter machine
The volume size on the client doesn't matter for mounting a volume from the server.
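A minimal sketch of such a mount, assuming a server named server1 exporting a
volume named gvol and a target directory under /var:

    mkdir -p /var/gvol
    mount -t glusterfs server1:/gvol /var/gvol
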
On Mon, May 15, 2017 at 8:22 PM, Dwijadas Dey wrote:
> Hi
>List users
>I am trying to mount a GlusterFS server volume to a
> Gluster Client in /var directory. My intention is to connect a fo
Hey,
3.9.1 reached its end of life; you can use either 3.8.x or 3.10.x, which
are active at the moment.
On Tue, May 16, 2017 at 11:03 AM, Rafał Radecki
wrote:
> Hi All.
>
> I have a 9 node dockerized glusterfs cluster and I am seeing a situation
> that:
> 1) docker daemon on 8th node failes a
+Rafi, +Raghavendra Bhat
On Tue, May 16, 2017 at 11:55 AM, WoongHee Han wrote:
> Hi, all!
>
> I erased the VG having snapshot LV related to gluster volumes
> and then, I tried to restore volume;
>
> 1. vgcreate vg_cluster /dev/sdb
> 2. lvcreate --size=10G --type=thin-pool -n tp_cluster vg_cluste
Next time when this happens, could you collect statedumps of the brick
processes where this activity is going on, at intervals of 10 seconds?
You can refer about how to take statedump at:
https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
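A rough sketch of the collection loop, assuming your volume name in place of
<volname> (dumps land in the statedump directory, /var/run/gluster by default):

    while true; do
        gluster volume statedump <volname>
        sleep 10
    done
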
On Tue, May 16, 2017 at 7:43 PM, Jan Wrona
Gluster creates a .glusterfs directory hierarchy on each brick, so those extra
directories take away some amount of disk space from the available space.
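To see how much of a brick that internal tree actually consumes (brick path
assumed):

    du -sh /path/to/brick/.glusterfs
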
On Wed, May 17, 2017 at 6:28 PM, Tahereh Fattahi
wrote:
> Hi
> I created a brick with size 5G. I want to know if some space of this 5G is
> used by the gluster server
Seems like a frame loss. Could you collect a statedump of the mount process?
You may have to use the kill -USR1 method described in the docs below when the
process hangs. Please also get statedumps of the brick processes.
https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
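A sketch of both steps, assuming your mount point and volume name in place of
the placeholders (dumps land in /var/run/gluster by default):

    # statedump of the (possibly hung) FUSE client process
    kill -USR1 $(pgrep -f 'glusterfs.*<mountpoint>')
    # statedumps of the brick processes, via the CLI
    gluster volume statedump <volname>
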
On Thu, May 18, 20