On Thu, May 19, 2016 at 11:42 AM, Raghavendra Talur wrote:
>
>
> On Thu, May 19, 2016 at 11:39 AM, Kaushal M wrote:
>>
>> On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
>> > On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur
>> > wrote:
>> >>
>> >>
>> >> On Thu, May 19, 2016 at 11:13 AM, K
On Thu, May 19, 2016 at 11:39 AM, Kaushal M wrote:
> On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
> > On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur
> wrote:
> >>
> >>
> >> On Thu, May 19, 2016 at 11:13 AM, Kaushal M
> wrote:
> >>>
> >>> I'm in favour of a stable release every 2 mon
On Thu, May 19, 2016 at 11:35 AM, Kaushal M wrote:
> On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur wrote:
>>
>>
>> On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
>>>
>>> I'm in favour of a stable release every 2 months and an LTS once a
>>> year (option 2).
>>
>>
>> +1
>>
>>>
>>>
>>> A
On Thu, May 19, 2016 at 11:29 AM, Raghavendra Talur wrote:
>
>
> On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
>>
>> I'm in favour of a stable release every 2 months and an LTS once a
>> year (option 2).
>
>
> +1
>
>>
>>
>> As Oleksander already suggested, I'm in favour of having well define
On Thu, May 19, 2016 at 11:13 AM, Kaushal M wrote:
> I'm in favour of a stable release every 2 months and an LTS once a
> year (option 2).
>
+1
>
> As Oleksander already suggested, I'm in favour of having well-defined
> merge windows, freeze dates and a testing period.
> (A slightly modified tim
Hi,
There seems to be some issue with the slave node glusterfs01.sh3.ctripcorp.com.
Can you share the complete logs?
You can increase the verbosity of the debug messages like this:
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-level DEBUG
Also, check /root/.ssh/authorized_keys in glusterfs01.sh3.ct
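(A minimal sketch of the steps above, assuming the session is the filews -> filews_slave one from the report below; adjust the volume and host names to your setup.)

    # raise the geo-replication log level to DEBUG for the session
    gluster volume geo-replication filews \
        glusterfs01.sh3.ctripcorp.com::filews_slave config log-level DEBUG

    # reproduce the problem, then check the session status again
    gluster volume geo-replication filews \
        glusterfs01.sh3.ctripcorp.com::filews_slave status

    # on the slave node, confirm the keys pushed for the session are present
    cat /root/.ssh/authorized_keys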
I'm in favour of a stable release every 2 months and an LTS once a
year (option 2).
As Oleksander already suggested, I'm in favour of having well-defined
merge windows, freeze dates and a testing period.
(A slightly modified timeline from Oleksander's proposal follows)
For every 2 month window,
- 1
Hello,
I have tried to configure a geo-replication volume; all the master nodes'
configuration is the same. When I start this volume, the status shows it as
partially faulty, as follows:
gluster volume geo-replication filews
glusterfs01.sh3.ctripcorp.com::filews_slave status
MASTER NODE MASTER VO
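(If it helps, a hedged sketch of two follow-up commands that usually narrow down which node is faulty; the log path is an assumption and its layout varies between versions.)

    # per-brick detail for the session
    gluster volume geo-replication filews \
        glusterfs01.sh3.ctripcorp.com::filews_slave status detail

    # on the master nodes shown as Faulty, the gsyncd logs usually give the reason
    less /var/log/glusterfs/geo-replication/filews*/*.log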
A bit late but better than never. My vote is for option 2.
~Atin
On 05/18/2016 07:19 PM, Vijay Bellur wrote:
> [Adding gluster-users]
>
> I would like to wrap up this poll by the next community meeting on 25th
> May. Can you please weigh in with your opinions on the options
> provided by Aravinda?
On Wed, May 18, 2016 at 06:54:57PM +0200, Gandalf Corvotempesta wrote:
> On 18/05/2016 13:55, Kevin Lemonnier wrote:
> > Yes, that's why you need to use sharding. With sharding, the heal is
> > much quicker and the whole VM isn't frozen during the heal, only the
> > shard being healed. I'm
On 18/05/2016 13:55, Kevin Lemonnier wrote:
Yes, that's why you need to use sharding. With sharding, the heal is
much quicker and the whole VM isn't frozen during the heal, only the
shard being healed. I'm testing that right now myself and it's
almost invisible to the VM, using 3.7.11.
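(For anyone who wants to try it: a minimal sketch of enabling sharding, with a placeholder volume name; note it only applies to files created after the option is turned on, existing images are not re-sharded.)

    # enable sharding on the volume ("myvol" is a placeholder)
    gluster volume set myvol features.shard on

    # optionally set the shard size; 64MB is the default
    gluster volume set myvol features.shard-block-size 64MB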
On 18/05/2016 11:41 PM, Krutika Dhananjay wrote:
I will try to recreate this issue tomorrow on my machines with the
steps that Lindsay provided in this thread. I will let you know the
result soon after that.
Thanks Krutika, I've been trying to get the shard stats you wanted, but
by the time t
Some additional details in case it helps: there is no cache on the disk,
it's virtio with iothread=1. The file is in qcow format, and qemu-img check
says it's not corrupted, but when the VM is running I get I/O errors.
As you can see in the config, performance.stat-prefetch: off but being
on a debian s
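(The two checks referenced above as a sketch; the image path and volume name are placeholders.)

    # verify the qcow2 image, ideally with the VM shut down
    qemu-img check /var/lib/libvirt/images/vm-disk.qcow2

    # the volume option mentioned in the config above
    gluster volume set myvol performance.stat-prefetch off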
On 13 May 2016 at 13:46, Aravinda wrote:
> Hi,
>
> Based on the discussion in last community meeting and previous discussions,
>
> 1. Too-frequent releases are difficult to manage (without a dedicated
> release manager).
> 2. Users want to see features early for testing or POC.
> 3. Backporting pat
[Adding gluster-users]
I would like to wrap up this poll by the next community meeting on 25th
May. Can you please weigh in with your opinions on the options
provided by Aravinda?
Thanks!
Vijay
On Fri, May 13, 2016 at 4:16 AM, Aravinda wrote:
> Hi,
>
> Based on the discussion in last community me
Hi,
I will try to recreate this issue tomorrow on my machines with the steps
that Lindsay provided in this thread. I will let you know the result soon
after that.
-Krutika
On Wednesday, May 18, 2016, Kevin Lemonnier wrote:
> Hi,
>
> Some news on this.
> Over the week end the RAID Card of the no
Hi,
Some news on this.
Over the weekend the RAID card of the node ipvr2 died, and I thought
that maybe that was the problem all along. The RAID card was changed
and yesterday I reinstalled everything.
Same problem just now.
My test is simple: using the website hosted on the VMs all the time,
I re
On Wed, May 18, 2016 at 01:39:58PM +0200, Gandalf Corvotempesta wrote:
> Hi,
> I'm planning a new infrastructure. I have some questions about
> healing, to better optimize performance in case of a brick failure.
>
> Let's assume this environment:
>
> 3 supermicro servers, replica 3, with 12 SATA
Hi,
I'm planning a new infrastructure. I have some questions about
healing, to better optimize performance in case of a brick failure.
Let's assume this environment:
3 Supermicro servers, replica 3, with 12 SATA disks each.
Each server has 2 bricks in RAID-6 (software or
hardware, I don't know)
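(To make that layout concrete, a hedged sketch of how it would typically be created as a 2 x 3 distributed-replicate volume; hostnames, brick paths and the volume name are placeholders.)

    # one brick per RAID-6 set on each server, replicated 3-way across servers
    gluster volume create vmstore replica 3 \
        srv1:/bricks/r6a/vmstore srv2:/bricks/r6a/vmstore srv3:/bricks/r6a/vmstore \
        srv1:/bricks/r6b/vmstore srv2:/bricks/r6b/vmstore srv3:/bricks/r6b/vmstore
    gluster volume start vmstore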