RE: Symbolic links like feature on radosgw

2015-11-03 Thread Zhou, Yuan
Hi Guang, 

Does 'copy' work for your case? Copying an object inside one bucket copies 
the head object only; the shadow objects are not copied.
If rgw_max_chunk_size is configured to be small (say 1 byte) then we have a 
'symbolic link'-like file.
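To make the head/shadow layout concrete, here is a toy model (illustrative only - the class, the shadow naming, and the copy semantics are invented for this sketch and are not RGW's actual on-disk format):

```python
# Toy model of the idea above: an RGW object is a "head" rados object plus
# zero or more "shadow" chunk objects. A bucket-local copy duplicates only
# the head, so both heads point at the same shadows. With a tiny chunk size
# the head carries almost no data and behaves like a symbolic link.

class FakeRgwStore:
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.shadows = {}   # shadow oid -> bytes
        self.heads = {}     # object name -> (inline head data, shadow oids)

    def put(self, name, data):
        head, rest = data[:self.chunk_size], data[self.chunk_size:]
        oids = []
        for i in range(0, len(rest), self.chunk_size):
            oid = "shadow_%s_%d" % (name, i)
            self.shadows[oid] = rest[i:i + self.chunk_size]
            oids.append(oid)
        self.heads[name] = (head, oids)

    def copy(self, src, dst):
        # Only the head is copied; the shadow objects are shared.
        self.heads[dst] = self.heads[src]

    def get(self, name):
        head, oids = self.heads[name]
        return head + b"".join(self.shadows[oid] for oid in oids)

store = FakeRgwStore(chunk_size=1)   # 1-byte chunks: head is a pure pointer
store.put("orig", b"hello world")
store.copy("orig", "link")
print(store.get("link") == store.get("orig"))  # True
```

Note the caveat implied by the thread: since both names share the shadow objects, deleting or garbage-collecting the original would break the "link".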

thanks, -yuan

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Guang Yang
Sent: Tuesday, November 3, 2015 1:26 PM
To: ceph-devel@vger.kernel.org; Yehuda Sadeh
Subject: Re: Symbolic links like feature on radosgw

Hi Yehuda,
We have a user requirement that needs a symbolic-link-like feature on radosgw - 
two object IDs pointing to the same object (ideally it could cross buckets, but 
same bucket is fine).

The closest feature on Amazon S3 I could find is [1], but it is not exactly the 
same; the Amazon S3 API feature was designed for static website hosting.

Is this a valid feature request we can put into radosgw? The way I am thinking 
of implementing it is like a symbolic link: the link object just contains a 
pointer to the original object.

 [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html

 --
 Regards,
 Guang




--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the 
body of a message to majord...@vger.kernel.org More majordomo info at  
http://vger.kernel.org/majordomo-info.html

RE: nginx for rgw fcgi frontend

2015-09-18 Thread Zhou, Yuan
Thanks Yehuda for the quick response!

My nginx is 1.4.6 (old, but the default for Ubuntu Trusty) and for some reason 
it's sending both CONTENT_LENGTH and HTTP_CONTENT_LENGTH to the backend, even if 
I comment out the fastcgi_params part in the site conf.

With the config below this issue is fixed:
 
rgw content length compat = true


Thanks, -yuan

-Original Message-
From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com] 
Sent: Friday, September 18, 2015 11:29 PM
To: Zhou, Yuan
Cc: Ceph Development
Subject: Re: nginx for rgw fcgi frontend

On Thu, Sep 17, 2015 at 11:38 PM, Zhou, Yuan  wrote:
> Hi Yehuda,
>
> I was trying to do some tests on nginx over rgw and ran into some issues on 
> the PUT side:
>
> $ swift upload con ceph_fuse.cc
> Object PUT failed: http://localhost/swift/v1/con/ceph_fuse.cc 411 Length 
> Required   MissingContentLength
>
> However the GET/HEAD/POST requests are all working. From the history mails in 
> ceph-users, nginx should be working well. There's no such issue if I switch to 
> the civetweb frontend. Has anything changed in the fcgi frontend? I'm testing on 
> the master branch.
>
> Here's the request log; the CONTENT_LENGTH is actually there.
>
> http://paste2.org/YDJFYIcp
>
>

What version are you running? Note that you're getting an HTTP_CONTENT_LENGTH 
header instead of a CONTENT_LENGTH header. There should be some support for it 
there now, but maybe you can get nginx to send the appropriate header?

Yehuda


>
> rgw part of ceph.conf
> 
> rgw frontends = fastcgi
> rgw dns name = localhost
> rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
> rgw print continue = false
> ...
>
>
> Nginx site.conf:
>
> server {
> listen 80;
>
> client_max_body_size 10g;
>
> access_log /dev/stdout;
> error_log /dev/stderr;
>
> location / {
> fastcgi_pass_header Authorization;
> fastcgi_pass_request_headers on;
>
> if ($request_method = PUT) {
> rewrite ^ /PUT$request_uri;
> }
>
> include fastcgi_params;
>
> fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
> }
>
> location /PUT/ {
> internal;
>
> fastcgi_pass_header Authorization;
> fastcgi_pass_request_headers on;
>
> include fastcgi_params;
> fastcgi_param CONTENT_LENGTH $content_length;
> fastcgi_param HTTP_CONTENT_LENGTH $content_length;
>
> fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
> }
> }
>
>
>
> Sincerely, Yuan
>


nginx for rgw fcgi frontend

2015-09-17 Thread Zhou, Yuan
Hi Yehuda,

I was trying to do some tests on nginx over rgw and ran into some issues on the 
PUT side:

$ swift upload con ceph_fuse.cc
Object PUT failed: http://localhost/swift/v1/con/ceph_fuse.cc 411 Length 
Required   MissingContentLength

However the GET/HEAD/POST requests are all working. From the history mails in 
ceph-users, nginx should be working well. There's no such issue if I switch to 
the civetweb frontend. Has anything changed in the fcgi frontend? I'm testing on 
the master branch.

Here's the request log; the CONTENT_LENGTH is actually there.

http://paste2.org/YDJFYIcp



rgw part of ceph.conf

        rgw frontends = fastcgi
        rgw dns name = localhost
        rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
        rgw print continue = false
...


Nginx site.conf:

server {
    listen 80;

    client_max_body_size 10g;

    access_log /dev/stdout;
    error_log /dev/stderr;

    location / {
        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;

        if ($request_method = PUT) {
            rewrite ^ /PUT$request_uri;
        }

        include fastcgi_params;

        fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
    }

    location /PUT/ {
        internal;

        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;

        include fastcgi_params;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param HTTP_CONTENT_LENGTH $content_length;

        fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
    }
}



Sincerely, Yuan



RE: testing the teuthology OpenStack backend

2015-08-20 Thread Zhou, Yuan
Hi Loic/Li,

We have a local Ceph binary repo for our local teuthology testing. I'm using 
some modified Ceph gitbuilder (https://github.com/ceph/gitbuilder) to create 
the repo. The things I've changed are:

1. modify branch-local to watch our branches only, otherwise there'll be too 
many branches to build
2. modify make-debs.sh to add the *test-dbg packages 
(https://gist.github.com/zhouyuan/f4c1c671c1659ece04e1)
3. modify autobuilder.sh to make a proper directory and move the binaries there 
(https://gist.github.com/zhouyuan/2a2c4d1b20ff42efc351)

It's still very simple (e.g. only Ubuntu Trusty is included) but enough for our 
tests for now. Hope this can help.

Thanks, -yuan


-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Loic Dachary
Sent: Wednesday, August 19, 2015 6:24 PM
To: Li Wang
Cc: Ceph Development
Subject: Re: testing the teuthology OpenStack backend

Hi,

[cc'ing the ceph-devel list for Zack or Andrew's input]

On 19/08/2015 11:32, Li Wang wrote:
> Hi Loic,
>   we did a test; the results are attached. We found that the network 
> may be the bottleneck: it seems it fetches all the packages from the 
> ceph website on the fly, which does take time and causes many failed 
> jobs. There are also many dead jobs - what do those mean? Do you 
> have some suggestions? Is it possible to have it fetch the packages 
> from a configurable mirror or even private package repositories?

You can ssh to the teuthology instance (the host that runs pulpito) and modify 
the ~/.teuthology.yaml with

   gitbuilder_host: gitbuilder.ubuntukylin.com

which will override the default of 

   gitbuilder_host: gitbuilder.ceph.com

I've never done that, however, and maybe Zack or Andrew have advice on how to 
establish a mirror properly.

I think having a private package repository that is useable by teuthology is 
complicated because the package locations, naming conventions and build methods 
are difficult to reproduce (and I don't fully understand what they are right 
now).

Cheers

> 
> Cheers,
> Li Wang
> 
> On 2015/8/8 23:23, Loic Dachary wrote:
>>
>>
>> On 08/08/2015 16:22, Li Wang wrote:
>>> Hi Loic,
>>>Glad to talk with you on IRC about setting up the teuthology 
>>> OpenStack backend. Once it is ready to run the test in private 
>>> cloud, and the results could be exported and uploaded to a public 
>>> place, please let me know asap :)
>>
>> I'm working on it, should be ready RSN.
>>
>>>
>>> Cheers,
>>> Li Wang
>>>
>>>
>>> On 2015/8/6 23:05, Loic Dachary wrote:
 Hi,

 I'm looking into testing the OpenStack backend for teuthology on a new 
 cluster to verify it's portable. I think it is but ... ;-) I'm told you 
 have an OpenStack cluster and would be interested in running teuthology 
 workloads on it. Does it have a public facing API ?

 Cheers

>>

--
Loïc Dachary, Artisan Logiciel Libre



RE: quick way to rebuild deb packages

2015-07-22 Thread Zhou, Yuan
I'm also using make-debs.sh to generate the binaries for some local deployments. 
Note that if you need the *tests.deb packages you'll need to change this script a bit.

@@ -58,8 +58,8 @@ tar -C $releasedir -zxf $releasedir/ceph_$vers.orig.tar.gz
 #
 cp -a debian $releasedir/ceph-$vers/debian
 cd $releasedir
-perl -ni -e 'print if(!(/^Package: .*-dbg$/../^$/))' ceph-$vers/debian/control
-perl -pi -e 's/--dbg-package.*//' ceph-$vers/debian/rules
+#perl -ni -e 'print if(!(/^Package: .*-dbg$/../^$/))' ceph-$vers/debian/control
+#perl -pi -e 's/--dbg-package.*//' ceph-$vers/debian/rules
 #
 # always set the debian version to 1 which is ok because the debian
 # directory is included in the sources and the upstream version will 



-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Loic Dachary
Sent: Wednesday, July 22, 2015 2:32 PM
To: Bartłomiej Święcki; ceph-devel@vger.kernel.org
Subject: Re: quick way to rebuild deb packages

Hi,

Did you try https://github.com/ceph/ceph/blob/master/make-debs.sh ? I would 
recommend running https://github.com/ceph/ceph/blob/master/run-make-check.sh 
first to make sure you can build and test: this will install the dependencies 
you're missing at the same time.

Cheers

On 21/07/2015 18:15, Bartłomiej Święcki wrote:
> Hi all,
> 
> I'm currently working on a test environment for ceph where we're using deb 
> files to deploy new version on test cluster.
> To make this work efficiently I'd have to quickly build deb packages.
> 
> I tried dpkg-buildpackage -nc, which should keep the results of the previous 
> build, but it ends up with a linking error:
> 
>> ...
>>   CXXLDceph_rgw_jsonparser
>> ./.libs/libglobal.a(json_spirit_reader.o): In function 
>> `~thread_specific_ptr':
>> /usr/include/boost/thread/tss.hpp:79: undefined reference to 
>> `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)'
>> /usr/include/boost/thread/tss.hpp:79: undefined reference to 
>> `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)'
>> /usr/include/boost/thread/tss.hpp:79: undefined reference to 
>> `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)'
>> /usr/include/boost/thread/tss.hpp:79: undefined reference to 
>> `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)'
>> /usr/include/boost/thread/tss.hpp:79: undefined reference to 
>> `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)'
>> ./.libs/libglobal.a(json_spirit_reader.o):/usr/include/boost/thread/tss.hpp:79:
>>  more undefined references to `boost::detail::set_tss_data(void const*, 
>> boost::shared_ptr, void*, bool)' follow
>> ./.libs/libglobal.a(json_spirit_reader.o): In function `call_once> (*)()>':
>> ...
> 
> Any ideas on what could go wrong here ?
> 
> Version I'm compiling is v0.94.1 but I've observed same results with 9.0.1.
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


RE: local teuthology testing

2015-07-21 Thread Zhou, Yuan
Loic, thanks for the notes! I will try the new code and report back on the issues 
I met.

Thanks, -yuan

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Loic Dachary
Sent: Tuesday, July 21, 2015 11:48 PM
To: shin...@linux.com; Zhou, Yuan
Cc: David Casier AEVOO; Ceph Devel; se...@lists.ceph.com
Subject: Re: local teuthology testing

Hi,

Since July 18th teuthology no longer uses chef, so this issue has been resolved! 
Using ansible requires configuration (http://dachary.org/?p=3752 explains it 
briefly; maybe there is something in the documentation, but I did not pay enough 
attention to be sure). At the end of http://dachary.org/?p=3752 you will see a 
list of configurable values, and I suspect Andrew & Zack would be more than 
happy to explain how any hardcoded leftovers can be stripped :-)

Cheers

On 21/07/2015 14:58, Shinobu Kinjo wrote:
> Hi,
> 
> I think that you should share those URLs for anyone who runs into the same 
> issue.
> 
> Sincerely,
> Kinjo
> 
> On Tue, Jul 21, 2015 at 9:52 PM, Zhou, Yuan  <mailto:yuan.z...@intel.com>> wrote:
> 
> Hi David/Loic,
> 
> I was also trying to set up some local Teuthology clusters here. The 
> biggest issue I met is in ceph-qa-chef - there are lots of hardcoded URLs 
> related to the sepia lab. I had to trace the code and change them line by 
> line.
> 
> Can you please share how you got this to work? Is there an 
> easy way to fix this?
> 
> Thanks, -yuan
> 
> 
> 
> 
> -- 
> Life w/ Linux <http://i-shinobu.hatenablog.com/>

-- 
Loïc Dachary, Artisan Logiciel Libre



local teuthology testing

2015-07-21 Thread Zhou, Yuan
Hi David/Loic,

I was also trying to make some local Teuthology clusters here. The biggest 
issue I met is in the ceph-qa-chef - there're lots of hardcoded URL related 
with the sepia lab. I have to trace the code and change them line by line. 

Can you please kindly share me how did you get this work? Is there an easy way 
to fix this?

Thanks, -yuan


RGW object versioning status

2015-07-16 Thread Zhou, Yuan
Hi Yehuda,

I see there's a wiki page on RGW object versioning, and from the code it looks 
like it's already there. 
https://wiki.ceph.com/Development/RGW_Object_Versioning

But the RGW docs tell me Swift/S3 object versioning is not supported.

http://ceph.com/docs/master/radosgw/swift/
http://ceph.com/docs/master/radosgw/s3/


What is the status of this feature? Is there any doc on this?


Thanks, -yuan



head object read with range get on RGW

2015-07-13 Thread Zhou, Yuan
Hi Yehuda,

I traced the code and the read op seems to happen at RGWRados::raw_obj_stat(). 
This read looks unnecessary if the range does not fall onto the head object. 
I tried setting rgw_max_chunk_size = 0 and it's not working on the PUT side.
Do you have any ideas on how to avoid this read?

Thanks, -yuan



RE: CDS Jewel Wed/Thurs

2015-07-01 Thread Zhou, Yuan
Hey Patrick, 

Looks like the GMT+8 time for the 1st day is wrong; it should be 10:00 pm - 7:30 
am?

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Patrick McGarry
Sent: Tuesday, June 30, 2015 11:28 PM
To: Ceph Devel; Ceph-User
Subject: CDS Jewel Wed/Thurs

Hey cephers,

Just a friendly reminder that our Ceph Developer Summit for Jewel planning is 
set to run tomorrow and Thursday. The schedule and dial in information is 
available on the new wiki:

http://tracker.ceph.com/projects/ceph/wiki/CDS_Jewel

Please let me know if you have any questions. Thanks!


-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com @scuttlemonkey || @ceph

Monitor clock skew on Teuthology testing

2015-06-23 Thread Zhou, Yuan
Hi Zack,

I was testing on a home-made Teuthology cluster. So far I can use 
teuthology-suite to submit the test cases and start some workers to run the 
tests. From the Pulpito logs I can see most of the tests passed, except for 
some error when aggregating the results in the last step. The error 
message was like:

"2015-06-24 08:41:33.334317 mon.1 192.168.13.117:6789/0 4 : cluster [WRN] 
message from mon.0 was stamped 0.709253s in the future, clocks not 
synchronized" in cluster log  

To my knowledge this was due to the time lag between monitors. I checked the 
clock settings in Teuthology and found the NTP servers are defined in 
ceph-qa-chef/cookbooks/ceph-qa/files/default/ntp.conf. Are there any other 
clock-related settings there?
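For reference - and this is a general tuning note rather than something from this thread - the warning threshold itself is a monitor option, `mon clock drift allowed` (default around 0.05s). A ceph.conf sketch that relaxes it, though keeping the nodes NTP-synchronized is the proper fix, since raising the threshold only hides the skew:

```ini
[mon]
# Illustrative: allow up to 1s of skew between monitors before the
# "clocks not synchronized" warning fires (default is ~0.05s).
mon clock drift allowed = 1.0
```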

Thanks, -yuan



RE: xattrs vs. omap with radosgw

2015-06-16 Thread Zhou, Yuan
FWIW, there was some discussion in OpenStack Swift, and their performance tests 
showed 255 bytes is not the best boundary on recent XFS. They decided to use a 
large xattr boundary size (65535).

https://gist.github.com/smerritt/5e7e650abaa20599ff34


-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Wednesday, June 17, 2015 3:43 AM
To: GuangYang
Cc: ceph-devel@vger.kernel.org; ceph-us...@lists.ceph.com
Subject: Re: xattrs vs. omap with radosgw

On Tue, 16 Jun 2015, GuangYang wrote:
> Hi Cephers,
> While looking at disk utilization on the OSDs, I noticed the disk was constantly 
> busy with a large number of small writes. Further investigation showed that 
> radosgw uses xattrs to store metadata (e.g. etag, content-type, etc.), which 
> made the xattrs spill from inline storage to extents, incurring extra I/O.
> 
> I would like to check if anybody has experience with offloading the metadata 
> to omap:
>   1> Offload everything to omap? If this is the case, should we make the 
> inode size 512 bytes (instead of 2k)?
>   2> Partially offload the metadata to omap, e.g. only offloading the 
> rgw-specific metadata.
> 
> Any sharing is deeply appreciated. Thanks!

Hi Guang,

Is this hammer or firefly?

With hammer the size of object_info_t crossed the 255 byte boundary, which is 
the max xattr value that XFS can inline.  We've since merged something that 
stripes over several small xattrs so that we can keep things inline, but it 
hasn't been backported to hammer yet.  See 
c6cdb4081e366f471b372102905a1192910ab2da.  Perhaps this is what you're seeing?

I think we're still better off with larger XFS inodes and inline xattrs if it 
means we avoid leveldb at all for most objects.
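The striping Sage mentions can be sketched like this (an illustrative model of the idea only; the continuation-key naming here is made up and is not the scheme used by the Ceph commit cited above):

```python
# Stripe one logical xattr value across several small xattrs so each piece
# stays under the ~255-byte value size that XFS can keep inline in the inode.

STRIPE = 255

def set_striped(xattrs, name, value):
    chunks = [value[i:i + STRIPE] for i in range(0, len(value), STRIPE)] or [b""]
    xattrs[name] = chunks[0]
    for n, chunk in enumerate(chunks[1:], 1):
        xattrs["%s@%d" % (name, n)] = chunk  # hypothetical continuation keys

def get_striped(xattrs, name):
    value, n = xattrs[name], 1
    while "%s@%d" % (name, n) in xattrs:
        value += xattrs["%s@%d" % (name, n)]
        n += 1
    return value

attrs = {}
set_striped(attrs, "user.ceph._", b"x" * 600)  # 600-byte value -> 3 stripes
print(len(attrs), len(get_striped(attrs, "user.ceph._")))  # 3 600
```

The trade-off matches the thread: several small inline xattrs cost a little key overhead, but avoid both extent spill-over and a round trip to leveldb/omap for most objects.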

sage


RE: Regarding hadoop over RGW blueprint

2015-06-10 Thread Zhou, Yuan
Hi Somnath,

The background was a bit complicated. This was part of the MOC project, which 
aims to setup an open-exchange cloud between several private cloud inside 
several universities.
https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/the-massachusetts-open-cloud-moc-a-new-model-to-operate-and-innovate-in-a-vendor-neutral-cloud

There was a strong requirement for multi-tenancy, which is lacking in the S3 
interface, so we actually went with the SwiftFS approach. Currently SwiftFS only 
supports one proxy server, which cannot scale to rack level; this is a big gap. 
SwiftFS supports locality-awareness, but this is restricted to a single proxy.

During our tests, we also found some bugs when the data set grows beyond 
20GB; SwiftFS is not able to support large data sets. We have some patches, but 
they are not fully ready.

In conclusion, there were some new requirements that S3/SwiftFS cannot meet, so 
we propose the new plugin for Ceph RGW. 

thanks, -yuan

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, June 11, 2015 11:02 AM
To: Zhang, Jian; ceph-devel
Subject: RE: Regarding hadoop over RGW blueprint

Thanks Jian !
What about my first question :-)? Are you seeing any shortcomings with it?
Maybe a dumb question (not much knowledge on the Hadoop front), but I was asking 
why write a new filesystem interface to plug into Hadoop - why not plug RGWProxy 
in somewhere in between, like Hadoop + S3 + RGWProxy + RGW?

Regards
Somnath

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Zhang, Jian
Sent: Wednesday, June 10, 2015 7:06 PM
To: Somnath Roy; ceph-devel
Cc: Zhang, Jian
Subject: RE: Regarding hadoop over RGW blueprint

Somnath,
For your second question: our blueprint targets the scenario where people try to 
run multiple (geographically distributed) clusters in which only a dedicated 
proxy server has access to the storage cluster; that's one of the biggest 
advantages of this blueprint. 
For the third question, I think most end users still have concerns about CephFS; 
currently we don't have plans to benchmark this solution against CephFS. 

Jian



-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, June 11, 2015 8:54 AM
To: ceph-devel
Subject: Regarding hadoop over RGW blueprint

Hi Yuan/Jian

I was going through your following blueprint.

http://tracker.ceph.com/projects/ceph/wiki/Hadoop_over_Ceph_RGW_status_update

This is very interesting. I have some query though.

1. Did you guys benchmark RGW with the S3 interface integrated with Hadoop? This 
should work as-is today. Are you seeing some shortcomings with this solution 
other than localization?

2. Is the only advantage of your solution getting locality with the RGW proxy, 
or are there other advantages as well?

3. Hadoop with CephFS is the preferred solution from Red Hat. Are you going to 
benchmark your solution against this as well?

Thanks & Regards
Somnath




PLEASE NOTE: The information contained in this electronic mail message is 
intended only for the use of the designated recipient(s) named above. If the 
reader of this message is not the intended recipient, you are hereby notified 
that you have received this message in error and that any review, 
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this communication in error, please notify the sender by 
telephone or e-mail (as shown above) immediately and destroy any and all copies 
of this message in your possession (whether hard copies or electronically 
stored copies).



RE: libcrush.so

2015-05-08 Thread Zhou, Yuan
Hi James,

This usually happens when the storage platform and applications are in 
segmented networks. For example, in a cluster with multiple RGW instances, if 
we could know which RGW instance is closest to the primary copy, then we 
could do more efficient local reads/writes through a particular deployment. 
There's a feature in OpenStack Swift [1] which is able to provide the 
location of objects inside a cluster.

Thanks, -yuan

[1] 
https://github.com/openstack/swift/blob/master/swift/common/middleware/list_endpoints.py

-Original Message-
From: James (Fei) Liu-SSI [mailto:james@ssi.samsung.com] 
Sent: Saturday, May 9, 2015 1:40 AM
To: Zhou, Yuan; Ceph Development
Cc: Cohen, David E; Yu, Zhidong
Subject: RE: libcrush.so

Hi Yuan,
   Very interesting. Would it be possible to know why the application needs to 
access the crush map directly instead of going through the ceph tool?

  Regards,
  James

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Zhou, Yuan
Sent: Thursday, May 07, 2015 6:29 PM
To: Ceph Development
Cc: Cohen, David E; Yu, Zhidong
Subject: libcrush.so

Ceph uses the CRUSH algorithm to provide the mapping of objects to OSD servers. 
This is great for clients, since they can talk to these OSDs directly. However, 
there are some scenarios where the application needs to access the CRUSH map, 
for load balancing as an example. 

Currently Ceph doesn't provide any API to render the layout. If your 
application needs to access the CRUSH map, you're going to rely on the command 
'ceph osd map pool_name obj_name'. With this libcrush.so we could let the 
application choose which nodes to access. The other advantage is that we could 
also provide other bindings (Python, Go) based on it.

From the git log we find libcrush was there before but was removed after 
Argonaut. Can anyone kindly share the background of this change?


Thanks, -yuan



libcrush.so

2015-05-07 Thread Zhou, Yuan
Ceph uses the CRUSH algorithm to provide the mapping of objects to OSD servers. 
This is great for clients, since they can talk to these OSDs directly. However, 
there are some scenarios where the application needs to access the CRUSH map, 
for load balancing as an example. 

Currently Ceph doesn't provide any API to render the layout. If your 
application needs to access the CRUSH map, you're going to rely on the command 
'ceph osd map pool_name obj_name'. With this libcrush.so we could let the 
application choose which nodes to access. The other advantage is that we could 
also provide other bindings (Python, Go) based on it.

From the git log we find libcrush was there before but was removed after 
Argonaut. Can anyone kindly share the background of this change?
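For illustration, lacking a library API, a client today would shell out to that command and parse its output. A sketch follows; the sample line and the regex are assumptions based on one era's output format, which varies between Ceph versions:

```python
import re

# Parse a "ceph osd map <pool> <obj>" output line into pool, pg, and acting set.
# SAMPLE is an illustrative line, not captured from a real cluster.
SAMPLE = ("osdmap e13 pool 'rbd' (0) object 'foo' -> pg 0.7fc1f406 (0.6) "
          "-> up ([0,1], p0) acting ([0,1], p0)")

def parse_osd_map(line):
    m = re.search(r"pool '(?P<pool>[^']+)'.*-> pg (?P<pg>\S+).*"
                  r"acting \(\[(?P<osds>[\d,]+)\]", line)
    if not m:
        raise ValueError("unrecognized 'ceph osd map' output")
    return (m.group("pool"), m.group("pg"),
            [int(x) for x in m.group("osds").split(",")])

print(parse_osd_map(SAMPLE))  # ('rbd', '0.7fc1f406', [0, 1])
```

Screen-scraping like this is exactly the fragility a libcrush.so (and language bindings on top of it) would remove.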


Thanks, -yuan



RE: Ceph code tests / teuthology

2015-04-24 Thread Zhou, Yuan
Sage/Zack, Thanks for the info! 

I did test with check-locks: false when using teuthology-suite, but 
teuthology was warning about some machine_type things. Checking the code, it 
looks like that's related to the lock server. 
So paddles is both the lock server and the results server! I missed that in my 
testing. Thanks for the setup doc; I'm going to have another try.

Thanks, -yuan 

-Original Message-
From: Zack Cerza [mailto:z...@redhat.com] 
Sent: Saturday, April 25, 2015 12:11 AM
To: Sage Weil
Cc: Zhou, Yuan; Loic Dachary; Ceph Development
Subject: Re: Ceph code tests / teuthology

If you're scheduling jobs with teuthology-suite *and* disabling locking, my 
guess is you're in for an adventure.

The lock server is called paddles:
 https://github.com/ceph/paddles/

The setup instructions are up-to-date but note that the API docs are not (but 
you shouldn't need those anyway).

There are additional docs covering new lab setup here:
 http://ceph.com/teuthology/docs/LAB_SETUP.html

Thanks,
Zack

- Original Message -
From: "Sage Weil" 
To: "Yuan Zhou" 
Cc: "Loic Dachary" , "Ceph Development" 
, "Zack Cerza" 
Sent: Friday, April 24, 2015 10:02:01 AM
Subject: RE: Ceph code tests / teuthology

On Fri, 24 Apr 2015, Zhou, Yuan wrote:
> Hi Loic/Zack,
> 
> So I've got some progress here. I was able to run a single job with 
> teuthology xxx.yaml targets.yaml. From the code, teuthology-suite 
> needs to query the lock-server for some machine info, like os_type, platform.
> Are there any documents for the lock server?

You can skip these checks with 

 check-locks: false

in the job yaml.

sage


> 
> Thanks, -yuan
> 
> -Original Message-
> From: Loic Dachary [mailto:l...@dachary.org]
> Sent: Monday, April 13, 2015 5:19 PM
> To: Zhou, Yuan
> Cc: Ceph Development; Zack Cerza
> Subject: Re: Ceph code tests / teuthology
> 
> Hi,
> 
On 13/04/2015 04:39, Zhou, Yuan wrote:
> Hi Loic,
> > 
> >  
> > 
> > I'm trying to set up an internal Teuthology cluster here. I was able to 
> > set up a 3-node cluster now; however there's not much documentation and I'm 
> > confused about some questions here:
> > 
> >  
> > 
> > 1)  How does Ceph upstream do tests? Currently I see there's a) 
> > Jenkins (make check on each PR) 
> 
> Yes.
> 
> > b) Teuthology Integration tests(on important PR only).
> >
> 
> The teuthology tests are run either by cron jobs or by people. 
> http://pulpito.ceph.com/. They are not run on pull request.
> 
> > 2)  Teuthology automatically fetches the binaries from 
> > gitbuilder.ceph.com currently. However, the binaries will not be built for 
> > each pull request? 
> 
> Right. Teuthology can be pointed to an alternate repository but there is a 
> catch: it needs to have the same naming conventions as gitbuilder.ceph.com. 
> These naming conventions are not documented (as far as I know) and you would 
> need to read the code to figure them out. When I tried to customize the 
> repository, I replaced the code locating the repository with something that 
> was configurable instead (reading the yaml file). But I did it in a hackish 
> way and did not take the time to figure out how to contribute that back 
> properly.
> 
> > 3)  Can Teuthology work on VMs? I got some info from your blog; it 
> > looks like you're running Teuthology on OpenStack/Docker.
> 
> The easiest way is to prepare three VMs and make sure you can ssh to them 
> without password. You then create a targets.yaml file with these three 
> machines. And you can run a single job that will use them. It will save you 
> the trouble of setting up a full teuthology cluster (I think 
> http://dachary.org/?p=2204 is still mostly valid). The downside is that it 
> only allows you to run a single job at a time and will not allow you to run 
> teuthology-suite to schedule a number of jobs and have them wait in the queue.
> 
> I'm not actually using the docker backend I hacked together, therefore I 
> don't recommend you try this route, unless you have a week or two to devote 
> to it.  
> 
> > 4)  If I have a working Teuthology cluster now, how do I start a full 
> > run? Or is running only the workunits/* good enough?
> 
> For instance:
> 
> ./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 
> 1000 --suite rados --suite-branch giant --machine-type 
> plana,burnupi,mira --distro ubuntu --email 
> abhishek.lekshma...@gmail.com --owner abhishek.lekshma...@gmail.com 
> --ceph giant-backports
> 
> http://tracker.ceph.com/issues/11153 contains many examples of how teuthology 
> is run to test stable releases.

RE: Ceph code tests / teuthology

2015-04-24 Thread Zhou, Yuan
Hi Loic/Zack,

So I've got some progress here. I was able to run a single job with teuthology 
xxx.yaml targets.yaml. 
From the code, teuthology-suite needs to query the lock-server for some 
machine info, like os_type and platform. Are there any documents for the 
lock-server?

Thanks, -yuan

-Original Message-
From: Loic Dachary [mailto:l...@dachary.org] 
Sent: Monday, April 13, 2015 5:19 PM
To: Zhou, Yuan
Cc: Ceph Development; Zack Cerza
Subject: Re: Ceph code tests / teuthology

Hi,

On 13/04/2015 04:39, Zhou, Yuan wrote:
> Hi Loic,
> 
>  
> 
> I'm trying to set up an internal Teuthology cluster here. I was able to set 
> up a 3-node cluster now; however, there's not much documentation and I'm 
> confused about some questions here:
> 
>  
> 
> 1)  How does Ceph upstream do tests? Currently I see there's a) 
> Jenkins (make check on each PR) 

Yes.

> b) Teuthology integration tests (on important PRs only).
>

The teuthology tests are run either by cron jobs or by people. 
http://pulpito.ceph.com/. They are not run on pull requests.

> 2)  Teuthology automatically fetches the binaries from 
> gitbuilder.ceph.com currently. However, the binaries will not be built for 
> each pull request? 

Right. Teuthology can be pointed to an alternate repository but there is a 
catch: it needs to have the same naming conventions as gitbuilder.ceph.com. 
These naming conventions are not documented (as far as I know) and you would 
need to read the code to figure them out. When I tried to customize the 
repository, I replaced the code locating the repository with something that was 
configurable instead (reading the yaml file). But I did it in a hackish way and 
did not take the time to figure out how to contribute that back properly.

> 3)  Can Teuthology work on VMs? I got some info from your blog; it looks 
> like you're running Teuthology on OpenStack/Docker.

The easiest way is to prepare three VMs and make sure you can ssh to them 
without password. You then create a targets.yaml file with these three 
machines. And you can run a single job that will use them. It will save you the 
trouble of setting up a full teuthology cluster (I think 
http://dachary.org/?p=2204 is still mostly valid). The downside is that it only 
allows you to run a single job at a time and will not allow you to run 
teuthology-suite to schedule a number of jobs and have them wait in the queue.
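
For illustration, a minimal targets.yaml for three such VMs might look like the
sketch below. The hostnames, ssh user, and host keys are placeholders for your
own machines, and the roles shown are just one possible 3-node layout, not a
requirement:

```yaml
# targets.yaml -- three pre-provisioned VMs reachable via passwordless ssh.
# Hostnames and ssh host keys below are placeholders for your own machines.
targets:
  ubuntu@vm1.example.com: ssh-rsa AAAA...
  ubuntu@vm2.example.com: ssh-rsa AAAA...
  ubuntu@vm3.example.com: ssh-rsa AAAA...
roles:
- [mon.a, osd.0]
- [mon.b, osd.1]
- [mon.c, osd.2, client.0]
```

You would then run something like: teuthology job.yaml targets.yaml
(teuthology merges all yaml fragments given on the command line).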

I'm not actually using the docker backend I hacked together, therefore I don't 
recommend you try this route, unless you have a week or two to devote to it.  

> 4)  If I have a working Teuthology cluster now, how do I start a full 
> run? Or is running only the workunits/* good enough?

For instance:

./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 1000 
--suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro 
ubuntu --email abhishek.lekshma...@gmail.com --owner 
abhishek.lekshma...@gmail.com --ceph giant-backports

http://tracker.ceph.com/issues/11153 contains many examples of how teuthology 
is run to test stable releases.

The easiest way to create a single job is probably to run 
./virtualenv/bin/teuthology-suite: it will output calls to teuthology that you 
can probably copy/paste to run a single job. I've not tried that and went a 
more difficult route instead (manually assembling yaml files to create a job). 

Zack will probably have more hints and advice on how to run your own 
teuthology suite.

Cheers

>  
> 
> Thanks for any hints!
> 
> -yuan
> 
>  
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Ceph code tests

2015-04-13 Thread Zhou, Yuan
Hi Loic, 

I'm trying to set up an internal Teuthology cluster here. I was able to set up 
a 3-node cluster now; however, there's not much documentation and I'm confused 
about some questions here:

1) How does Ceph upstream do tests? Currently I see there's a) Jenkins (make 
check on each PR) and b) Teuthology integration tests (on important PRs only).
2) Teuthology automatically fetches the binaries from gitbuilder.ceph.com 
currently. However, the binaries will not be built for each pull request?  
3) Can Teuthology work on VMs? I got some info from your blog; it looks like 
you're running Teuthology on OpenStack/Docker.
4) If I have a working Teuthology cluster now, how do I start a full run? Or 
is running only the workunits/* good enough?

Thanks for any hints!
-yuan



Re: RGW : Transaction Id in response?

2015-02-27 Thread ZHOU Yuan
On Fri, Feb 27, 2015 at 6:31 PM, Abhishek Dixit  wrote:
> On Thu, Feb 26, 2015 at 10:51 PM, Yehuda Sadeh-Weinraub
>  wrote:
>>
>>
>> - Original Message -
>>> From: "Abhishek Dixit" 
>>> To: "ceph-devel" 
>>> Sent: Wednesday, February 25, 2015 8:35:40 PM
>>> Subject: RGW : Transaction Id in response?
>>>
>>> Hi,
>>>
>>> I was doing comparison of Open Stack Swift response headers and Ceph
>>> RGW response.
>>> This is in regard to X-Trans-Id header in response from Open Stack
>>> Swift storage.
>>> Swift response to a request always have the header "X-Trans-Id".
>>> X-Trans-Id : A unique transaction identifier for this request.
>>>
>>> X-Trans-Id seems to serve two purpose:
>>> 1. Every log messages for a request will carry this and aid in
>>> debugging/analyzing.
>>> 2. Benchmarking for latency.
>>>
>>> So, do we have similar unique identifier in Ceph RGW response which
>>> associates with each request?
>>>
>>> Or do we need add support for this?
>>>
>>
>> At the moment there is no such identifier. We can leverage the rados client 
>> instance id that each radosgw instance gets when connecting to the backend, 
>> and there's also a unique running number that we use to identify each 
>> request within that gateway. We can probably concatenate these and return it 
>> as the unique identifier.
>>
>> Yehuda
>>
>
>
> Hi Yehuda,
>
> I will add "X-Trans-Id" as per your suggested approach.
> I have opened a new issue particularly for adding "X-Trans-Id", as many
> other issues report the absence of this header.
>
> I have assigned this to myself.
> http://tracker.ceph.com/issues/10970
>


Hi Yehuda, Abhishek,

maybe we should follow the Swift way, considering the goal is to keep
compatibility with Swift? This might make log analysis a bit easier
for Swift cluster operators.

def generate_trans_id(trans_id_suffix):
    return 'tx%s-%010x%s' % (
        uuid.uuid4().hex[:21], time.time(), quote(trans_id_suffix))

The trans_id_suffix is configurable in swift setup.
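
For reference, here is a self-contained Python 3 sketch of that generator,
together with the concatenation idea Yehuda described. The rgw-side function
and its field widths are made-up illustrations, not rgw's actual format:

```python
import time
import uuid
from urllib.parse import quote


def generate_trans_id(trans_id_suffix=''):
    # Swift-style id: 'tx' + 21 hex chars of a uuid4 + '-' + a 10-digit
    # hex timestamp + an operator-configured suffix.
    return 'tx%s-%010x%s' % (
        uuid.uuid4().hex[:21], int(time.time()), quote(trans_id_suffix))


def rgw_style_trans_id(client_instance_id, request_num):
    # Yehuda's suggestion: concatenate the rados client instance id with
    # the per-gateway running request number. Field names and widths here
    # are hypothetical, chosen only for illustration.
    return 'tx%08x-%08x' % (client_instance_id, request_num)
```

With an empty suffix the Swift-style id is always 34 characters, so operators
can grep logs for a fixed-width token; the suffix lets them tag a cluster or
region.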

Thanks, -yuan

> Thanks
> Abhishek Dixit


RE: 11/19/2014 Weekly Ceph Performance Meeting IS ON!

2014-11-24 Thread Zhou, Yuan
Hi Mark, do you have a recording of this meeting? 

Regards, -yuan

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, November 19, 2014 10:49 AM
To: ceph-devel@vger.kernel.org
Subject: 11/19/2014 Weekly Ceph Performance Meeting IS ON!

Hi All,

8AM PST as usual!  Some lucky folks are out at Supercomputing 14 so the crowd 
might be a bit small this week.  Feel free to add an agenda item if there is 
something you want to talk about!

Here's the links:

Etherpad URL:
http://pad.ceph.com/p/performance_weekly

To join the Meeting:
https://bluejeans.com/268261044

To join via Browser:
https://bluejeans.com/268261044/browser

To join with Lync:
https://bluejeans.com/268261044/lync


To join via Room System:
Video Conferencing System: bjn.vc -or- 199.48.152.152 Meeting ID: 268261044

To join via Phone:
1) Dial:
   +1 408 740 7256
   +1 888 240 2560(US Toll Free)
   +1 408 317 9253(Alternate Number)
   (see all numbers - http://bluejeans.com/numbers)
2) Enter Conference ID: 268261044

Mark


RE: Question on Ceph LRC design

2014-11-16 Thread Zhou, Yuan
[resend with plain format, sorry for the duplicated mail]

Hi Loic/Andreas,

I was trying to understand the LRC design in Ceph EC. Per my understanding, it 
seems Ceph is using a slightly different design from the Microsoft LRC: the 
local parities are calculated with the global parities included. Is there any 
special consideration behind this change? 
I'm asking because in a typical MS LRC design the global and local parities 
could actually be calculated at the same time (I mean inside the erasure code 
library). But with this new design, we lose this potential optimization.
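
To make the difference concrete, here is a toy XOR-only sketch. Plain XOR
stands in for the real erasure-code math, and the grouping (one global parity,
two local groups) is a simplification I'm assuming purely for illustration:

```python
from functools import reduce


def xor(blocks):
    """Bytewise XOR of equal-length byte strings (a toy 'parity')."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))


data = [bytes([i]) * 4 for i in range(4)]  # data chunks d0..d3
g = xor(data)                              # global parity over all data

# MS-LRC style: local parities cover data chunks only, so they can be
# computed in the same pass as the global parity.
l0 = xor(data[0:2])                        # local parity of d0, d1
l1 = xor(data[2:4])                        # local parity of d2, d3

# Ceph layered-LRC style (as described above): the second local group also
# includes the global parity, so g must exist before this parity can be
# computed -- an extra encoding pass.
l1_layered = xor(data[2:4] + [g])
```

Recovery works in both layouts, but in the layered form the second local
parity depends on g, so it cannot be produced in the same encoding pass --
which is the lost optimization the question refers to.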

Thanks, -Yuan