On 2018-03-06, Amar Tumballi wrote:
>> If anyone would like our test scripts, I can either tar them up and
>> email them or put them in github - either is fine with me. (they rely
>> on current builds of docker and docker-compose)
>>
>>
> Sure, sharing the test cases makes it
Adding csaba
On Tue, Mar 6, 2018 at 9:09 AM, Raghavendra Gowdappa wrote:
> +Csaba.
>
> On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson wrote:
>
>> Raghavendra,
>>
>> Thanks very much for your reply.
>>
>> I fixed our data corruption problem by disabling the
+Csaba.
On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson wrote:
> Raghavendra,
>
> Thanks very much for your reply.
>
> I fixed our data corruption problem by disabling the volume
> performance.write-behind flag as you suggested, and simultaneously
> disabling caching in my client
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same problem with a different peer. I noticed
> with (call it) gluster-2, when I couldn't make a
Tough to do. Like in my case where you would have to install and use Plex.
On March 5, 2018 4:19:23 PM PST, Amar Tumballi wrote:
>>
>>
>> If anyone would like our test scripts, I can either tar them up and
>> email them or put them in github - either is fine with me. (they
Hello,
So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
It actually began as the same problem with a different peer. I noticed it with
(call it) gluster-2, when I couldn't make a new volume. I compared
/var/lib/glusterd between them, and found that somehow the options
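A common first step when chasing a rejected peer (sketched below; `myvol` is a placeholder volume name) is to compare the volume definition each node holds under /var/lib/glusterd, since a checksum mismatch there is a frequent cause:

```shell
# A rejected peer shows up in the status output as
# "State: Peer Rejected (Connected)".
gluster peer status

# Run on each node and compare: differing checksums of the
# volume "info" file usually explain the rejection.
cksum /var/lib/glusterd/vols/myvol/info
```

This only narrows down which node disagrees; reconciling the differing options still has to be done by hand.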
>
>
> If anyone would like our test scripts, I can either tar them up and
> email them or put them in github - either is fine with me. (they rely
> on current builds of docker and docker-compose)
>
>
Sure, sharing the test cases makes it very easy for us to see what the issue
would be. I would
Raghavendra,
Thanks very much for your reply.
I fixed our data corruption problem by disabling the volume
performance.write-behind flag as you suggested, and simultaneously
disabling caching in my client side mount command.
In very modest testing, the flock() case appears to me to work well -
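For reference, the server-side and client-side changes described above can be sketched as follows (the volume name `myvol`, server name, and mount point are placeholders; the option names are the stock Gluster ones):

```shell
# Server side: disable write-behind on the volume.
gluster volume set myvol performance.write-behind off
gluster volume get myvol performance.write-behind   # verify

# Client side: mount with the FUSE metadata caches disabled.
# attribute-timeout and entry-timeout are standard glusterfs
# mount options; 0 turns the caching off.
mount -t glusterfs \
      -o attribute-timeout=0,entry-timeout=0 \
      server1:/myvol /mnt/myvol
```

Disabling these caches trades throughput for consistency, so it is worth re-measuring performance after the change.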
Hi Shyam,
On 2018-03-01 at 22:08 GMT-03:00, Shyam Ranganathan wrote:
> On 02/28/2018 07:25 AM, Javier Romero wrote:
>> This one fails
>> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>>
>> # yum install -y
>>
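If the direct koji URL keeps failing, one alternative sketch (assuming CentOS 7 and the package names published by the CentOS Storage SIG) is to go through the SIG release package instead of the raw build artifact:

```shell
# Pull the Storage SIG release package from the standard repos,
# then install GlusterFS from the repository it enables.
yum install -y centos-release-gluster
yum install -y glusterfs-server
systemctl enable --now glusterd
```

The release package pins the repository metadata, so it avoids hard-coding a build URL that can disappear from koji.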
On Mon, Mar 5, 2018 at 8:21 PM, Paul Anderson wrote:
> Hi,
>
> tl;dr summary of below: flock() works, but what does it take to make
> sync()/fsync() work in a 3 node GFS cluster?
>
> I am under the impression that POSIX flock, POSIX
> fcntl(F_SETLK/F_GETLK,...), and POSIX
Hi,
tl;dr summary of below: flock() works, but what does it take to make
sync()/fsync() work in a 3 node GFS cluster?
I am under the impression that POSIX flock, POSIX
fcntl(F_SETLK/F_GETLK,...), and POSIX read/write/sync/fsync are all
supported in cluster operations, such that in theory,
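The flock() pattern under discussion can be exercised from the shell with util-linux flock(1); the sketch below runs against local placeholder paths (a stand-in for a Gluster mount) and flushes with coreutils sync(1), which accepts file operands in coreutils 8.24 and later:

```shell
# Serialize writers with an exclusive lock, then force the data to
# stable storage before the lock is released. On a real setup
# DATAFILE would live on the glusterfs mount.
LOCKFILE=/tmp/demo.lock
DATAFILE=/tmp/demo.dat
rm -f "$DATAFILE"

flock "$LOCKFILE" sh -c "echo 'record 1' >> '$DATAFILE' && sync '$DATAFILE'"
flock "$LOCKFILE" sh -c "echo 'record 2' >> '$DATAFILE' && sync '$DATAFILE'"

cat "$DATAFILE"
# prints:
# record 1
# record 2
```

flock(1) creates the lock file if it does not exist, so no setup step is needed; each writer blocks until it holds the exclusive lock.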
Hi,
The actual data will be on the hot tier only until demotion. The file
that you see on the cold tier is just a linkto file for the file on the
hot tier.
These linkto files are necessary for the internal working of the tier.
On Mon, Mar 5, 2018 at 1:16 PM, Sherin George
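On the brick itself, a linkto file can be recognised directly (the brick path below is illustrative; the xattr name is the standard DHT one):

```shell
# A linkto file is zero bytes with only the sticky bit set
# (mode shows as ---------T) and carries an xattr naming the
# subvolume that holds the real data.
ls -l /bricks/cold/brick1/dir/file.bin
getfattr -n trusted.glusterfs.dht.linkto -e text \
    /bricks/cold/brick1/dir/file.bin
```

Seeing such placeholders on the cold tier is therefore expected and does not mean the data was copied twice.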
Hi,
There isn't a way to replace the failing tier brick through a single
command, as we don't have support for replace-brick, remove-brick, or
add-brick with tier.
Once you bring the brick online (volume start force), the data in the
brick will be rebuilt by the self-heal daemon (done because it's a
replicated
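The procedure described above amounts to the following (the volume name `myvol` is a placeholder):

```shell
# Force-start the volume so the offline brick comes back up:
gluster volume start myvol force

# The self-heal daemon then rebuilds the brick from its replica
# peers; progress can be watched with:
gluster volume heal myvol info
```

Since the tier bricks are replicated, no data is lost while the brick is offline; heal simply copies it back.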
Hi Guys
Got a quick question regarding hot tier and cold tier.
I've got a gluster volume with a 1 x 3 hot tier and a 1 x 3 cold tier.
watermark-low is 75 and watermark-hi is 90; usage of the volume is very low.
My files always go to the hot tier and the cold tier at the same time.
As I understand it, data should go to hot
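As a rough sketch of how the watermarks are usually described (this is an assumption based on the documented tiering behaviour, not the Gluster source): below watermark-low the hot tier accepts promotions freely; between the two watermarks promotion and demotion become probabilistic; above watermark-hi promotion stops and demotion is aggressive. With usage well below 75, files should stay on the hot tier until demoted:

```shell
# Hypothetical decision sketch, not Gluster's actual code.
USAGE=40   # hot-tier usage, percent (example value)
LOW=75     # watermark-low
HI=90      # watermark-hi

if [ "$USAGE" -lt "$LOW" ]; then
    echo "below low watermark: promote freely, no demotion"
elif [ "$USAGE" -le "$HI" ]; then
    echo "between watermarks: probabilistic promote/demote"
else
    echo "above high watermark: stop promotion, demote aggressively"
fi
# prints: below low watermark: promote freely, no demotion
```

The file simultaneously visible on the cold tier in this situation is typically just a linkto placeholder rather than a second copy of the data.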
Hi,
It's time to prepare the 3.12.7 release, which falls on the 10th of
each month, and hence would be 08-03-2018 this time around.
This mail is to call out the following,
1) Are there any pending *blocker* bugs that need to be tracked for
3.12.7? If so mark them against the provided tracker
15 matches