Thanks!!!

Becky

On Fri, Feb 23, 2018 at 5:35 PM Eduardo CAMILO Inacio <
[email protected]> wrote:

> Becky,
>
> Interesting. My sample code is attached to this message. Additionally, I
> observed the same error while using IOR 3.0.1.
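>
> For reference, the file-view path in IOR can be exercised with an
> invocation along these lines (illustrative only; -a MPIIO selects the
> MPI-IO backend and -V enables MPI_File_set_view, while the path, block,
> and transfer sizes below are placeholders, not my exact run):
>
> $ mpiexec -n 40 -f mpihostsfile ./ior -a MPIIO -V -c -w \
>     -b 16m -t 1m -o /mnt/orangefs/ior.dat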
>
> Best regards,
>
>
>
> On Fri, Feb 23, 2018 at 5:52 PM Becky Ligon <[email protected]> wrote:
>
>> Eduardo:
>>
>> The original developer said that file view IS supported.  He looked at
>> the error messages and didn’t see anything obviously wrong.
>>
>> Can you send me the code you are running?  I will try to recreate the
>> problem on my local machine.
>>
>> Becky
>>
>>
>>
>> On Fri, Feb 23, 2018 at 10:17 AM Becky Ligon <[email protected]> wrote:
>>
>>> Eduardo:
>>>
>>> It is possible that file view is not supported.  Let me check with the
>>> developers of ROMIO/PVFS2 and see if that’s the case.
>>>
>>> Becky
>>>
>>> On Thu, Feb 22, 2018 at 6:58 PM Eduardo CAMILO Inacio <
>>> [email protected]> wrote:
>>>
>>>> Becky,
>>>>
>>>> The issue is not observed with other I/O modes, namely POSIX syscalls,
>>>> C stream operations (e.g., fread, fwrite), and MPI-IO offset-based
>>>> operations (both independent and collective). Linux commands, such as
>>>> ls, cat, echo, rm, and so on, also do not seem to be affected. The error
>>>> was only observed with MPI-IO using a file view. I have not yet tried
>>>> the "pvfs2-" commands. If, given this new information, you think testing
>>>> them may be worthwhile, I can try to reserve the nodes tomorrow and
>>>> follow your suggestion, although I am not sure it would be helpful.
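>>>>
>>>> To make the distinction concrete, below is a minimal sketch of the
>>>> file-view pattern that fails here, with the offset-based variant that
>>>> works left as a comment. This is only illustrative, not the attached
>>>> fview.c; the output path and block size are placeholders:
>>>>
>>>> /* Each of N ranks writes one BLOCK-sized chunk of a shared file.
>>>>  * Compile: mpicc fview_sketch.c -Wall -Wextra -o fview_sketch */
>>>> #include <mpi.h>
>>>> #include <string.h>
>>>>
>>>> #define BLOCK (1 << 20)  /* 1 MB per rank; placeholder size */
>>>>
>>>> int main(int argc, char **argv)
>>>> {
>>>>     int rank;
>>>>     static char buf[BLOCK];
>>>>     MPI_File fh;
>>>>
>>>>     MPI_Init(&argc, &argv);
>>>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>>     memset(buf, 'a' + (rank % 26), BLOCK);
>>>>
>>>>     MPI_File_open(MPI_COMM_WORLD, "/mnt/orangefs/out.dat",
>>>>                   MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
>>>>
>>>>     /* Offset-based collective write: no problem observed.
>>>>      * MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK,
>>>>      *                       MPI_BYTE, MPI_STATUS_IGNORE); */
>>>>
>>>>     /* File-view-based collective write: the path that fails. */
>>>>     MPI_Datatype ftype;
>>>>     MPI_Type_contiguous(BLOCK, MPI_BYTE, &ftype);
>>>>     MPI_Type_commit(&ftype);
>>>>     MPI_File_set_view(fh, (MPI_Offset)rank * BLOCK, MPI_BYTE, ftype,
>>>>                       "native", MPI_INFO_NULL);
>>>>     MPI_File_write_all(fh, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);
>>>>     MPI_Type_free(&ftype);
>>>>
>>>>     MPI_File_close(&fh);
>>>>     MPI_Finalize();
>>>>     return 0;
>>>> }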
>>>>
>>>> Thank you for your attention,
>>>>
>>>>
>>>>
>>>> On Thu, Feb 22, 2018 at 7:52 PM Becky Ligon <[email protected]> wrote:
>>>>
>>>>> Eduardo:
>>>>>
>>>>> Can you access the OrangeFS filesystem from the client node?   Please
>>>>> try a pvfs2-ls on your filesystem and send me the output.  Also, try
>>>>> pvfs2-ping -m <mountpoint from pvfs2tab file> and send me the output.
>>>>>
>>>>> Do you have the filesystem mounted?  Please try "ls" on the mount
>>>>> point and send me the results.
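>>>>>
>>>>> For example (substituting your actual mount point from the pvfs2tab
>>>>> file for /mnt/orangefs):
>>>>>
>>>>> $ pvfs2-ls /mnt/orangefs
>>>>> $ pvfs2-ping -m /mnt/orangefs
>>>>> $ ls /mnt/orangefs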
>>>>>
>>>>> Becky
>>>>>
>>>>
>>>>> On Thu, Feb 22, 2018 at 3:30 PM, Eduardo CAMILO Inacio <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Dear Becky,
>>>>>>
>>>>>> Thank you for your quick response.
>>>>>>
>>>>>> Regarding the installation, yes, I compiled MPICH with ROMIO and
>>>>>> PVFS2 support, following the documentation instructions. Further, I
>>>>>> did not observe any errors during the OrangeFS installation.
>>>>>>
>>>>>> Also, during execution, no error message is registered in the client
>>>>>> log. Errors similar to the ones presented in this message are found
>>>>>> in the server log, though.
>>>>>>
>>>>>> These errors were observed with mostly default configuration options
>>>>>> (configuration file attached). The only change is the path of the
>>>>>> storage directories, both meta and data. Additionally, I tried to
>>>>>> solve the problem by increasing the flow buffer size, following a
>>>>>> suggestion found in an old thread, but had no success with sizes up
>>>>>> to 10 MB.
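>>>>>>
>>>>>> Concretely, the change I tried was along these lines in the server
>>>>>> configuration file (I am assuming here that FlowBufferSizeBytes in
>>>>>> the <Defaults> section is the knob that old thread meant, with
>>>>>> 10485760 = 10 MB):
>>>>>>
>>>>>> <Defaults>
>>>>>>     ...
>>>>>>     FlowBufferSizeBytes 10485760
>>>>>> </Defaults>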
>>>>>>
>>>>>> Best regards,
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 22, 2018 at 2:43 PM Becky Ligon <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Eduardo:
>>>>>>>
>>>>>>> Have you resolved your problem with OrangeFS and MPICH?
>>>>>>>
>>>>>>> Becky Ligon
>>>>>>>
>>>>>>> On Wed, Feb 21, 2018 at 9:50 AM, Becky Ligon <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Eduardo:
>>>>>>>>
>>>>>>>> Seems like there might be a problem with your OrangeFS
>>>>>>>> installation.  Can you send me the server conf file?
>>>>>>>>
>>>>>>>> Did you configure mpich with romio and pvfs2 support?
>>>>>>>>
>>>>>>>> Becky Ligon
>>>>>>>>
>>>>>>>> On Wed, Feb 21, 2018 at 6:27 AM Eduardo CAMILO Inacio <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> Dear all,
>>>>>>>>>
>>>>>>>>> I am trying to run some experiments using the ROMIO (MPICH) file
>>>>>>>>> view capabilities for reading and writing shared files on OrangeFS.
>>>>>>>>> I have tried the IOR benchmark and a simple ad-hoc workload
>>>>>>>>> generator (attached to this message) with no success. After a few
>>>>>>>>> trials, I observed that when the dataset is small (i.e., O(100 KB)),
>>>>>>>>> processes finish without errors, but the file ends up empty. For my
>>>>>>>>> target dataset produced by the ad-hoc generator (around 300 MB),
>>>>>>>>> multiple error messages like the following are emitted; the file is
>>>>>>>>> created, but it is incomplete:
>>>>>>>>>
>>>>>>>>> [E 23:05:11.388265] Error: payload_progress: Bad address (error class: 128)
>>>>>>>>> [E 23:05:11.388296] mem_to_bmi_callback_fn: I/O error occurred
>>>>>>>>> [E 23:05:11.388311] handle_io_error: flow proto error cleanup started on 0x2396e18: Bad address
>>>>>>>>> [E 23:05:11.388320] handle_io_error: flow proto 0x2396e18 canceled 0 operations, will clean up.
>>>>>>>>> [E 23:05:11.388333] handle_io_error: flow proto 0x2396e18 error cleanup finished: Bad address
>>>>>>>>>
>>>>>>>>> I verified that the ad-hoc generator works fine when executed
>>>>>>>>> locally (i.e., all processes on the same node writing to the local
>>>>>>>>> file system). This fact, combined with the file system error
>>>>>>>>> messages observed, suggests to me that the problem may be in
>>>>>>>>> OrangeFS.
>>>>>>>>>
>>>>>>>>> System information:
>>>>>>>>> - OrangeFS 2.9.7
>>>>>>>>> - MPICH 3.2.1
>>>>>>>>> - IOR 3.0.1
>>>>>>>>> - GCC 4.4.7 20120313
>>>>>>>>> - CentOS release 6.7 (kernel 2.6.32-573.22.1.el6.x86_64)
>>>>>>>>> - Data/Metadata servers on Hercule cluster and Clients on 20 nodes
>>>>>>>>> of the Nova cluster, both from the Grid’5000 testbed (
>>>>>>>>> https://www.grid5000.fr/mediawiki/index.php/Lyon:Hardware)
>>>>>>>>> - ad-hoc generator compilation: $ mpicc fview.c -Wall -Wextra -o fview
>>>>>>>>> - ad-hoc generator execution: $ mpiexec -n 40 -f mpihostsfile ./fview
>>>>>>>>>
>>>>>>>>> Thank you.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Eduardo Camilo Inacio, M.Sc.
>>>>>>>>> Lattes: http://lattes.cnpq.br/4794169282710899
>>>>>>>>>
>>>>>>>> --
>>>>>>>> Sent from Gmail Mobile
>>>>>>>>
>>>>>>> --
>>>>>> Eduardo Camilo Inacio, M.Sc.
>>>>>> Lattes: http://lattes.cnpq.br/4794169282710899
>>>>>>
>>>>>
>>>>> --
>>>> Eduardo Camilo Inacio, M.Sc.
>>>> Lattes: http://lattes.cnpq.br/4794169282710899
>>>>
>>> --
>>> Sent from Gmail Mobile
>>>
>> --
>> Sent from Gmail Mobile
>>
> --
> Eduardo Camilo Inacio, M.Sc.
> Lattes: http://lattes.cnpq.br/4794169282710899
>
-- 
Sent from Gmail Mobile
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
