RE: Official Ceph RPM build flags needed

2014-06-04 Thread Ilya Storozhilov
Hello Ruben, Jeff, Daniel and others,

thank you very much for the excellent assistance! :)

Best Regards,
Ilya Storozhilov
Lead Software Engineer

EPAM Systems
Tver office, Russia
GMT+4

EPAM Internal ext.:  55529
Office phone:+7 (4822) 630-070 ext. 55529
Office fax:  +7 (4822) 630-073
Mobile phone:+7 (904) 021-0763
E-mail:  ilya_storozhi...@epam.com

http://www.epam.com




From: Ruben Kerkhof ru...@rubenkerkhof.com
Sent: 4 June 2014 11:40
To: Ilya Storozhilov
Cc: Daniel Sterling; ceph-devel@vger.kernel.org
Subject: Re: Official Ceph RPM build flags needed

On Tue, Jun 3, 2014 at 6:19 PM, Ilya Storozhilov
ilya_storozhi...@epam.com wrote:
 So it seems that it is up to the packager what to put into the RPM_OPT_FLAGS 
 environment variable when packaging. Maybe someone knows what particular value 
 is used?
 Thank you!

You can see what the actual value is by running:
$ rpm --eval '%{optflags}'
-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
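
To reproduce the packaged build from a source tree, a minimal sketch (assuming
the autotools build of this Ceph generation; the configure options mirror the
ceph.spec quoted later in this digest and may differ between versions):

$ RPM_OPT_FLAGS="-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions \
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic"
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
      --with-nss --without-cryptopp --with-debug \
      CFLAGS="$RPM_OPT_FLAGS" CXXFLAGS="$RPM_OPT_FLAGS"
$ make -j"$(nproc)"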

Hope it helps,

Ruben Kerkhof


Official Ceph RPM build flags needed

2014-06-03 Thread Ilya Storozhilov
Hello Ceph-developers,
 
at the moment we are doing some performance testing of Ceph v0.72.2 on CentOS 
6.5, 64-bit. We have found that Ceph from the official RPM works faster than the 
one we compiled ourselves from source. Do you know what compiler flags and 
configure options are actually used to build the official Ceph RPM for CentOS? 
Maybe you can give us a hint on where to dig to find out what we need? :)

Thank you very much!

Best Regards,
Ilya Storozhilov
Lead Software Engineer, EPAM Systems



RE: Official Ceph RPM build flags needed

2014-06-03 Thread Ilya Storozhilov
Hello Daniel,

thank you for the rapid response. The ceph.spec file from the SRPM contains the 
following lines:


...
export RPM_OPT_FLAGS=`echo $RPM_OPT_FLAGS | sed -e 's/i386/i486/'`

%{configure} CPPFLAGS="$java_inc" \
--prefix=/usr \
--localstatedir=/var \
--sysconfdir=/etc \
--docdir=%{_docdir}/ceph \
--with-nss \
--without-cryptopp \
--with-rest-bench \
--with-debug \
--enable-cephfs-java \
$MY_CONF_OPT \
%{?_with_ocf} \
CFLAGS="$RPM_OPT_FLAGS" CXXFLAGS="$RPM_OPT_FLAGS"
...


So it seems that it is up to the packager what to put into the RPM_OPT_FLAGS 
environment variable when packaging. Maybe someone knows what particular value is 
used?
Thank you!
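
For reference, rpmbuild exports the RPM_OPT_FLAGS environment variable from the
distribution's %{optflags} macro during %build, so the value on a given build
host can be checked, and the SRPM rebuilt with it, roughly as follows (the SRPM
file name is illustrative):

$ rpm --eval '%{optflags}'                       # value rpmbuild will export
$ rpmbuild --rebuild ceph-0.72.2-0.el6.src.rpm   # illustrative file name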

Best Regards,
Ilya Storozhilov
Lead Software Engineer, EPAM Systems



From: ceph-devel-ow...@vger.kernel.org ceph-devel-ow...@vger.kernel.org on 
behalf of Daniel Sterling sterling.dan...@gmail.com
Sent: 3 June 2014 20:06
To: Ilya Storozhilov
Cc: ceph-devel@vger.kernel.org
Subject: Re: Official Ceph RPM build flags needed

The compiler flags are in the source RPM, is that correct?


On Tue, Jun 3, 2014 at 11:56 AM, Ilya Storozhilov
ilya_storozhi...@epam.com wrote:
 Hello Ceph-developers,

 at the moment we are doing some performance testing of Ceph v0.72.2 on CentOS 
 6.5, 64-bit. We have found that Ceph from the official RPM works faster than 
 the one we compiled ourselves from source. Do you know what compiler flags 
 and configure options are actually used to build the official Ceph RPM for 
 CentOS? Maybe you can give us a hint on where to dig to find out what we need? :)

 Thank you very much!

 Best Regards,
 Ilya Storozhilov
 Lead Software Engineer, EPAM Systems



[no subject]

2014-04-16 Thread Ilya Storozhilov
subscribe ceph-devel


Slow request problem - help please.

2014-04-16 Thread Ilya Storozhilov
, 
received at 2014-04-16 02:20:47.454611: osd_op(client.5793.0:1273 
default.5793.1__shadow__F0jnzwQRYyOtc-QfW0S2_M0ptLm-o-b_1 [write 
2097152~524288] 172.d7dc6717 e608) v4 currently commit sent
2014-04-16 02:21:18.065316 osd.0 [WRN] slow request 30.608710 seconds old, 
received at 2014-04-16 02:20:47.456540: osd_op(client.5793.0:1274 
default.5793.1__shadow__F0jnzwQRYyOtc-QfW0S2_M0ptLm-o-b_1 [write 
2621440~524288] 172.d7dc6717 e608) v4 currently commit sent
2014-04-16 02:21:18.065319 osd.0 [WRN] slow request 30.605942 seconds old, 
received at 2014-04-16 02:20:47.459308: osd_op(client.5793.0:1275 
default.5793.1__shadow__F0jnzwQRYyOtc-QfW0S2_M0ptLm-o-b_1 [write 
3145728~329984] 172.d7dc6717 e608) v4 currently commit sent

After increasing the OSD log verbosity we got the following messages from the OSD:

2014-04-16 03:23:32.337961 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45949 s=2 pgs=15930 cs=2649 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:23:32.338939 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45950 s=2 pgs=15931 cs=2651 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:23:32.339956 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45951 s=2 pgs=15932 cs=2653 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:23:32.341167 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45952 s=2 pgs=15933 cs=2655 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:23:32.342212 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45953 s=2 pgs=15934 cs=2657 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:23:32.343330 7ff93987a700  0 -- 10.44.1.82:6801/2050730 >> 
10.44.1.81:6805/1055074 pipe(0x72c0780 sd=54 :45954 s=2 pgs=15935 cs=2659 l=0 
c=0x565a680).fault, initiating reconnect
2014-04-16 03:25:55.170361 7ff955f87700  0 log 
[WRN] : 15 slow requests, 2 included below; oldest blocked for > 33.673854 secs
2014-04-16 03:25:55.170373 7ff955f87700  0 log [WRN] : slow request 33.636609 
seconds old, received at 2014-04-16 03:25:21.533670: osd_op(client.5793.0:5387 
default.5793.2__shadow__MMYl0ajXAfpaBMg55xl6ZVlD7U1lstn_3 [writefull 0~87104] 
172.12238d13 e617) v4 currently waiting for subops from [2]
2014-04-16 03:25:55.170380 7ff955f87700  0 log [WRN] : slow request 33.122809 
seconds old, received at 2014-04-16 03:25:22.047470: osd_op(client.5793.0:5394 
default.5793.2_myobjects_14 [cmpxattr user.rgw.idtag (17) op 1 mode 1,create 
0~0,delete,setxattr user.rgw.idtag (17),writefull 0~524288,setxattr 
user.rgw.manifest (818),setxattr user.rgw.acl (185),setxattr 
user.rgw.content_type (25),setxattr user.rgw.etag (33)] 172.3a483fca e617) v4 
currently waiting for subops from [2]

This page 
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/ contains 
the following information about this problem:

Possible causes include:
 
A bad drive (check dmesg output)
A bug in the kernel file system (check dmesg output)
An overloaded cluster (check system load, iostat, etc.)
A bug in the ceph-osd daemon.
 
Possible solutions:
 
Remove VMs from Ceph hosts
Upgrade Kernel
Upgrade Ceph
Restart OSDs

We checked our environment for a bad drive, a kernel bug and system overload and 
found that everything works fine, but the OSD consumes 100% of CPU when this 
error appears. Restarting the OSD did not help.
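
A minimal sketch of those checks (assuming the default admin socket path and an
osd.0 on the affected host; dump_ops_in_flight and dump_historic_ops should be
available in Ceph releases of this vintage):

$ dmesg | tail -n 50                      # drive or kernel filesystem errors
$ iostat -x 2 5                           # per-disk load and utilization
$ ceph health detail                      # which OSDs report slow requests
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops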

Can you help us in some way, e.g. give us some advice on where to dig? 
Thank you very much!

Best Regards,
Ilya Storozhilov
Lead Software Engineer, EPAM Systems


wip-libcephfs rebased and pulled up to v.65

2013-07-12 Thread Ilya Storozhilov
Hi Sage, Adam, Matt and David,

we have resolved a couple of compilation issues in the 'wip-libcephfs' branch and 
have created a corresponding pull request, see 
https://github.com/ceph/ceph/pull/424

Regards,
Ilya

P.S. I'm going on vacation next week, so please stay in contact with Andrey 
Kuznetsov (andrey_kuznet...@epam.com) during this period.


From: Sage Weil [s...@inktank.com]
Sent: 11 July 2013 23:01
To: Ilya Storozhilov
Cc: Adam C. Emerson; Matt W. Benjamin; David Zafman; ceph-devel@vger.kernel.org
Subject: Re: wip-libcephfs rebased and pulled up to v.65

Please check out the current wip-libcephfs branch and let me know how it
looks/works for you guys.  I cleaned up your patches a bit and fixed the
root cause of the xattr issue you were seeing.

Thanks!
sage


On Thu, 11 Jul 2013, Ilya Storozhilov wrote:

 Hi Adam (CC: Sage, Matt and David),

 it seems to me we have chosen a bad commit description: instead of 
 "FIX: readlink not copy link path to user's buffer" it should be something 
 like "FIX: ceph_ll_readlink() now fills the user-provided buffer with the link 
 data instead of returning a pointer to the libcephfs-internal memory 
 location". Take a look at the source code - it actually does what you are 
 talking about (see 
 https://github.com/ferustigris/ceph/blob/wip-libcephfs-rebased-v65/src/libcephfs.cc):

 ---
 extern "C" int ceph_ll_readlink(struct ceph_mount_info *cmount, Inode *in, 
 char *buf, size_t bufsiz, int uid, int gid)
 {
   const char *value = NULL;
   int res = (cmount->get_client()->ll_readlink(in, &value, uid, gid));
   if (res < 0)
     return res;
   if (bufsiz < (size_t)res)
     return ENAMETOOLONG;
   memcpy(buf, value, res);  // <-- Here we are copying the link data to the 
 user-provided buffer. This is what you want us to do.
   return res;
 }
 ---

 In your branch (see 
 https://github.com/linuxbox2/linuxbox-ceph/blob/wip-libcephfs-rebased-v65/src/libcephfs.cc)
 this function does not copy the link data to the user-provided buffer, but 
 passes back a pointer to the internal libcephfs structure, which is not a good 
 solution, as you mentioned below:

 ---
 extern "C" int ceph_ll_readlink(struct ceph_mount_info *cmount, Inode *in, 
 char **value, int uid, int gid)
 {
   return (cmount->get_client()->ll_readlink(in, (const char**) value, uid, 
 gid));
 }
 ---

 Regards,
 Ilya

 
 From: Adam C. Emerson [aemer...@linuxbox.com]
 Sent: 10 July 2013 20:41
 To: Ilya Storozhilov
 Cc: Sage Weil; Matt W. Benjamin; David Zafman
 Subject: Re: wip-libcephfs rebased and pulled up to v.65

 At Wed, 10 Jul 2013 12:17:24 +, Ilya Storozhilov wrote:
 [snip]
  The 'wip-libcephfs-rebased-v65' branch of the 
  https://github.com/linuxbox2/linuxbox-ceph repository has not been branched 
  from the 'wip-libcephfs' branch of https://github.com/ceph/ceph as it was 
  made with our 'open_by_handle_api' branch of the 
  https://github.com/ferustigris/ceph repo. That is why we were not able to 
  automatically cherry-pick our changes using the respective git command. So we 
  have manually applied our changes to the 'wip-libcephfs-rebased-v65' branch 
  in the https://github.com/ferustigris/ceph repo as one commit - you can check 
  it out here: 
  https://github.com/ferustigris/ceph/commit/c3f4940b2cfcfd3ea9a004e6f07f1aa3c0b6c419.
 [snip]

 Good afternoon, sir.

 I was looking at your patch and Matt and I have concerns about the
 change you made to readlink.  By passing back a pointer to a buffer
 rather than copying the link into a supplied buffer, we're opening
 ourselves up to the content changing, being deallocated, or otherwise
 having something bad happen to it.

 Thanks.




RE: wip-libcephfs rebased and pulled up to v.65

2013-07-11 Thread Ilya Storozhilov
Hi Sage, Adam, Matt and David,

this branch is looking good at first blush. We will check how it works 
and report back to you by EOD.

Regards,
Ilya



RE: wip-libcephfs rebased and pulled up to v.65

2013-07-03 Thread Ilya Storozhilov
Hi Sage,

Andrey has amended our changes according to your comments, except the one 
regarding re-fetching xattrs from the MDS after setting or removing an extended 
attribute of a filesystem object, because it just requires some resources we do 
not have at the moment (Andrey is sick until Thursday) - you can check Andrey's 
commits here: https://github.com/ferustigris/ceph/commits/open_by_handle_api. 
The other stuff, including the xattr unit tests, has been implemented.

Is it possible to rebase our current changes on top of this and implement 
re-fetching of xattrs from the MDS later, because I suspect we will not be able 
to complete it by Monday?

Best regards,
Ilya

From: Sage Weil [s...@inktank.com]
Sent: 3 July 2013 3:19
To: Matt W. Benjamin; Ilya Storozhilov
Cc: ceph-devel; aemerson; David Zafman
Subject: Re: wip-libcephfs rebased and pulled up to v.65

Hi Matt,

On Tue, 2 Jul 2013, Matt W. Benjamin wrote:
 Hi Sage (et al),

 We have rebased the former wip-libcephfs branch, on the model of the
 rebased example branch, as planned, and also pulled it up to Ceph's
 v65 tag/master, also as planned.

 In addition to cross checking this, Adam has updated our Ganesha client
 driver to use the ll v2 API, and this checks out.

 We've pushed wip-libcephfs-rebased-v65 to our public git repository,
 https://github.com/linuxbox2/linuxbox-ceph, for review.

I made a couple of comments on github with small nits.  In the meantime, I'm
going to run this through our fs test suite.

Looks good!

Ilya, do you want to rebase your changes on top of this?  It would be
great to get both sets of changes in before the dumpling feature freeze
(Monday!).

Thanks!
sage
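
A sketch of the rebase Sage asks for (assuming a remote named 'upstream'
pointing at https://github.com/ceph/ceph and the 'open_by_handle_api' branch
named in this thread; exact branch names may differ):

$ git remote add upstream https://github.com/ceph/ceph.git
$ git fetch upstream
$ git checkout open_by_handle_api
$ git rebase upstream/wip-libcephfs   # replay our commits onto the cleaned-up branch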


libcephfs: Open-By-Handle API question

2013-06-04 Thread Ilya Storozhilov
Hi Ceph developers,

in order to provide an NFS frontend to CephFS data storage, we are trying to use 
the new Open-By-Handle API from the 'src/include/cephfs/libcephfs.h' file, which 
currently lives in the 'wip-libcephfs' branch. The API looks quite consistent and 
useful, but we couldn't find a method to get a pointer to the root inode of the 
mounted Ceph filesystem.

At the moment we have found only one place it could be fetched from: the 
'Inode* root' member of the 'Client' class ('src/client/Client.h'), but it is 
in the 'protected' section, so some hack is needed (e.g. introducing a Client 
descendant that provides a method to access this protected member). Do you 
know how to fetch a pointer to the root inode of the mounted Ceph filesystem 
without any hacking (using only the official CephFS API)?

Thank you and best wishes,
Ilya V. Storozhilov
EPAM Systems
Lead Software Engineer

P.S. What do you think about making the 'Open-By-Handle' API the primary, rather 
than a low-level, API to CephFS, and making the POSIX-like API just a helper 
addendum to it?