…the file.
J.
On 02.09.2015 11:50, Yan, Zheng wrote:
>> On Sep 2, 2015, at 17:11, Gregory Farnum wrote:
>>
>> Whoops, forgot to add Zheng.
>>
>> On Wed, Sep 2, 2015 at 10:11 AM, Gregory Farnum wrote:
>>> On Wed, Sep 2, 2015 at 10:00 AM, Janusz Borkowski
>>> wrote:
>>>> Hi!
>>>>
>>>> I mount cephfs using kernel client (3.10.0-229.

Hi!

Do you have replication factor 2?

To test recovery, e.g. kill one OSD process and observe when Ceph notices it and
starts moving data. Reformat the OSD partition, remove the killed OSD from the
cluster, then add a new OSD using the freshly formatted partition. When you
have 3 OSDs again, observe w
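The recovery test described above might look like the following. This is only a
sketch: the OSD id (1), device (/dev/sdb1), and init-system commands are
assumptions, and the exact provisioning step depends on your Ceph release.

```shell
# 1. Kill one OSD and watch Ceph notice it and start recovering
#    (osd id and unit name are assumptions; adjust to your cluster):
sudo systemctl stop ceph-osd@1   # or kill the osd.1 process directly
ceph -w                          # watch PGs go degraded, then recover

# 2. Remove the dead OSD from the cluster:
ceph osd out 1
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1

# 3. Reformat the partition and add it back as a new OSD
#    (device name is an assumption):
sudo mkfs.xfs -f /dev/sdb1
ceph-disk prepare /dev/sdb1
ceph-disk activate /dev/sdb1

# 4. Once 3 OSDs are up again, watch the backfill complete:
ceph -w
```

These are live-cluster administration commands, not something to run against a
machine that is not part of a test cluster.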
first "echo" overwriting the original contents, and each subsequent "echo"
overwriting the bytes written by the preceding one.
Thanks!
J.
On 01.09.2015 18:15, Gregory Farnum wrote:
>
>
> On Sep 1, 2015 4:41 PM, "Janusz Borkowski" <mailto:janusz.borkow...@info
Hi!
open( ... O_APPEND) works fine on a single system: if many processes write to
the same file, their output will never overwrite each other.

On NFS, overwriting is possible because appending is only emulated; each write
is preceded by a seek to the current file size, so a race condition can occur.
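The difference between true O_APPEND and the seek-then-write emulation can be
demonstrated locally. A minimal sketch (thread count, record count, and record
size are arbitrary choices): O_APPEND writers always produce a file of the full
expected size, while "seek to end, then write" leaves a race window between the
two syscalls in which another writer's bytes can be overwritten.

```python
import os
import tempfile
import threading

RECORDS = 500        # appends per writer (arbitrary)
PAYLOAD = b"x" * 64  # fixed-size record, so lost writes show up as a short file

def o_append_writer(path):
    # O_APPEND: the kernel atomically positions at EOF for every write,
    # so concurrent writers never overwrite each other.
    fd = os.open(path, os.O_WRONLY | os.O_APPEND)
    for _ in range(RECORDS):
        os.write(fd, PAYLOAD)
    os.close(fd)

def emulated_writer(path):
    # NFS-style emulation: seek to the current end, then write. Another
    # writer may extend the file between the two calls, and this write
    # then lands on top of its data.
    fd = os.open(path, os.O_WRONLY)
    for _ in range(RECORDS):
        os.lseek(fd, 0, os.SEEK_END)
        os.write(fd, PAYLOAD)
    os.close(fd)

def run(worker, nthreads=4):
    # Run several concurrent writers against one file; return the final size.
    f = tempfile.NamedTemporaryFile(delete=False)
    f.close()
    threads = [threading.Thread(target=worker, args=(f.name,))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    size = os.path.getsize(f.name)
    os.remove(f.name)
    return size

expected = 4 * RECORDS * len(PAYLOAD)
print("O_APPEND size:", run(o_append_writer), "expected:", expected)
print("emulated size:", run(emulated_writer), "expected:", expected)
```

The O_APPEND run always reaches the expected size on a local filesystem; the
emulated run may come up short, depending on how often the race is hit.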