On Tue, Sep 22, 2015 at 6:17 PM, John Griffith <john.griffi...@gmail.com>
wrote:

>
>
> On Tue, Sep 22, 2015 at 5:48 PM, Joshua Harlow <harlo...@outlook.com>
> wrote:
>
>> A present:
>>
>> >>> import contextlib
>> >>> import os
>> >>>
>> >>> @contextlib.contextmanager
>> ... def synced_file(path, mode='wb'):
>> ...   with open(path, mode) as fh:
>> ...      yield fh
>> ...      fh.flush()
>> ...      os.fdatasync(fh.fileno())
>> ...
>> >>> with synced_file("/tmp/b.txt") as fh:
>> ...    fh.write("b")
>> ...
>>
>> Have fun :-P
>>
>> -Josh
>>
>>
>> Robert Collins wrote:
>>
>>> On 23 September 2015 at 09:52, Chris Friesen
>>> <chris.frie...@windriver.com>  wrote:
>>>
>>>> Hi,
>>>>
>>>> I recently had an issue with one file out of a dozen or so in
>>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
>>>> running stable/kilo if it makes a difference.
>>>>
>>>> Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(),
>>>> I'm wondering if we should do an fsync() before the close().  The way
>>>> it stands now, it seems like it might be possible to write the file,
>>>> start making use of it, and then take a power outage before it actually
>>>> gets written to persistent storage.  When we come back up we could have
>>>> an instance expecting to make use of it, but no target information in
>>>> the on-disk copy of the file.
>>>>
>>>
>>> If it's being kept in sync with DB records, and won't self-heal from
>>> this situation, then yes. E.g. if the overall workflow is something
>>> like:
>>>
>>> receive RPC request
>>> do some work, including writing the file
>>> reply to the RPC with 'ok'
>>> ->  gets written to the DB and the state recorded as ACTIVE or whatever..
>>>
>>> then yes, we need it to still be treated as active even if the machine
>>> is power cycled, which means the file has to be written robustly.
>>>
>>>> Even then, fsync() explicitly says that it doesn't ensure that the
>>>> directory changes have reached disk...for that another explicit
>>>> fsync() on the file descriptor for the directory is needed.
>>>> So I think for robustness we'd want to add
>>>>
>>>> f.flush()
>>>> os.fsync(f.fileno())
>>>>
>>>
>>> fdatasync would be better - we don't care about the metadata.
>>>
>>>> between the existing f.write() and f.close(), and then add something
>>>> like:
>>>>
>>>> dirfd = os.open(volumes_dir, os.O_RDONLY)
>>>> os.fsync(dirfd)
>>>> os.close(dirfd)
>>>>
>>>
>>> Yup, but again - fdatasync here I believe.
>>>
>>> -Rob
>>>
>>>
>>>
> That target file pretty much "is" the persistence record, the db entry is
> the iqn and provider info only.  I think that adding the fdatasync isn't a
> bad idea at all.  At the very least it doesn't hurt.  Power losses on
> attach I would expect to be problematic regardless.
>

Let me clarify the statement above: if you lose power to the node in the
middle of an attach process and the file wasn't written properly, you're
most likely 'stuck' and will have to detach (which deletes the file), or it
will be in an error state and the file will be rebuilt when you retry the
attach anyway, IIRC. It's been a while since we've mucked with that code
(thank goodness)!
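
FWIW, a rough sketch of the combined pattern being discussed (completely
untested; write_target_durably and its arguments are just placeholder names,
not the actual Cinder code) would be something along these lines:

    import os

    def write_target_durably(path, data):
        # Write the target file and push its data out to disk before close.
        with open(path, 'wb') as f:
            f.write(data)
            f.flush()
            os.fdatasync(f.fileno())
        # fdatasync() above only covers the file's data blocks; fsync() the
        # directory fd as well so the new directory entry itself survives a
        # power loss.
        dirfd = os.open(os.path.dirname(path), os.O_RDONLY)
        try:
            os.fsync(dirfd)
        finally:
            os.close(dirfd)

The fdatasync() on the file is what should prevent the zero-length file
Chris hit; the extra fsync() on the directory fd covers the entry going
missing entirely.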


>
> Keep in mind that the file is built as part of the attach process,
> stemming from initialize-connection.
>
> John
>
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
