cephfs (hammer) flips directory access bits

2016-01-07 Thread CSa
Hi, we are using cephfs on a ceph cluster (v0.94.5, 3x MON, 1x MDS, ~50x OSD). Recently, we observed a spontaneous (and unwanted) change in the access rights of newly created directories: $ umask 0077 $ mkdir test $ ls -ld test drwx------ 1 me me 0 Jan 6 14:59 test $ touch test/foo $ ls -ld
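For reference, the behaviour the poster expected follows from POSIX mode arithmetic: mkdir's requested mode 0777 is masked by the process umask, so umask 0077 should always yield 0700 (drwx------). A quick sketch of that calculation (plain Python; the helper names are mine):

```python
def expected_dir_mode(umask: int, requested: int = 0o777) -> int:
    """mkdir(2) masks the requested mode with the process umask."""
    return requested & ~umask

def mode_string(mode: int) -> str:
    """Render an ls-style permission string for a directory."""
    bits = "rwxrwxrwx"
    return "d" + "".join(
        ch if mode & (1 << (8 - i)) else "-" for i, ch in enumerate(bits)
    )

print(mode_string(expected_dir_mode(0o077)))  # drwx------
```

Any listing other than drwx------ for the fresh directory above points at the filesystem changing bits after the fact, which is what the report is about.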

The osd process locked itself when I tested cephfs through filebench

2016-01-07 Thread wangsongbo
Hi all, When I tested randomrw on my cluster through filebench (running ceph 0.94.5), one of the osds was marked down, but I could still get the process with the ps command. So I checked the log file and found the following message: > 2016-01-07

Custom STL allocator

2016-01-07 Thread Evgeniy Firsov
I want your opinion, guys, regarding two features implemented in an attempt to greatly reduce the number of memory allocations without major surgery in the code. The features are: 1. Custom STL allocator, which allocates the first N items from the STL container itself. This is a semi-transparent replacement of

Re: Is BlueFS an alternative of BlueStore?

2016-01-07 Thread Sage Weil
On Thu, 7 Jan 2016, Javen Wu wrote: > Hi Sage, > > Sorry to bother you. I am not sure if it is appropriate to send email to you > directly, but I cannot find any useful information to address my confusion > from Internet. Hope you can help me. > > Occasionally, I heard that you are going to

Re: Is BlueFS an alternative of BlueStore?

2016-01-07 Thread Javen Wu
Thanks Sage for your reply. I am not sure I understand the challenges you mentioned about backfill/scrub. I will investigate from the code and let you know if we can conquer the challenge by easy means. Our rough idea for ZFSStore are: 1. encapsulate dnode object as onode and add onode

Re: Is BlueFS an alternative of BlueStore?

2016-01-07 Thread peng.hse
Hi Sage, thanks for your quick response. Javen and I, once zfs developers, are currently focusing on how to leverage some of the zfs ideas to improve the ceph backend performance in userspace. Based on your encouraging reply, we came up with 2 schemes to continue our future work 1. the

two tarballs for ceph 10.0.1

2016-01-07 Thread Ken Dreyer
In http://download.ceph.com/tarballs/ , there are two tarballs: "ceph_10.0.1.orig.tar.gz" and "ceph_10.0.1.orig.tar.gz.1" Which one is correct? Can we delete one? - Ken -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org

Re: FreeBSD Building and Testing

2016-01-06 Thread Willem Jan Withagen
On 6-1-2016 08:51, Mykola Golub wrote: On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote: Hi, Can somebody try to help me and explain why in test: Func: test/mon/osd-crash Func: TEST_crush_reject_empty started Fails with a python error which sort of startles me:

Re: FreeBSD Building and Testing

2016-01-06 Thread Willem Jan Withagen
On 5-1-2016 19:23, Gregory Farnum wrote: On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote: Hi, Can somebody try to help me and explain why in test: Func: test/mon/osd-crash Func: TEST_crush_reject_empty started Fails with a python error which sort of startles me:

Stable releases preparation temporarily stalled

2016-01-06 Thread Loic Dachary
Hi, The stable releases (hammer, infernalis) did not make progress in the past few weeks because we can't run tests. Before xmas the following happened: * the sepia lab was migrated and we discovered the OpenStack teuthology backend can't run without it (that was a problem during a few days

Re: FreeBSD Building and Testing

2016-01-06 Thread Willem Jan Withagen
On 6-1-2016 08:51, Mykola Golub wrote: > > Are you able to reproduce this problem manually? I.e. in src dir, start the > cluster using vstart.sh: > > ./vstart.sh -n > > Check it is running: > > ./ceph -s > > Repeat the test: > > truncate -s 0 empty_map.txt > ./crushtool -c empty_map.txt -o

01/06/2016 Weekly Ceph Performance Meeting IS ON!

2016-01-06 Thread Mark Nelson
8AM PST as usual (ie in 18 minutes)! Discussion topics today include bluestore testing results and a potential performance regression in CentOS/RHEL 7.1 kernels. Please feel free to add your own topics! Here are the links: Etherpad URL: http://pad.ceph.com/p/performance_weekly To join the

Is BlueFS an alternative of BlueStore?

2016-01-06 Thread Javen Wu
Hi Sage, Sorry to bother you. I am not sure if it is appropriate to send email to you directly, but I cannot find any useful information to address my confusion from Internet. Hope you can help me. Occasionally, I heard that you are going to start BlueFS to eliminate the redudancy between XFS

Re: 01/06/2016 Weekly Ceph Performance Meeting IS ON!

2016-01-06 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 The last recording I'm seeing is for 10/07/15. Can we get the newer ones? Thanks, - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Wed, Jan 6, 2016 at 8:43 AM, Mark Nelson wrote: > 8AM PST

Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
This is odd. We are signing all packages before publishing them on the repository. These ceph-deploy releases are following a new release process so I will have to investigate where the disconnect is. Thanks for letting us know. On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell

Re: FreeBSD Building and Testing

2016-01-05 Thread Gregory Farnum
On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote: > Hi, > > Can somebody try to help me and explain why > > in test: Func: test/mon/osd-crash > Func: TEST_crush_reject_empty started > > Fails with a python error which sort of startles me: > test/mon/osd-crush.sh:227:

CBT on an existing cluster

2016-01-05 Thread Deneau, Tom
Having trouble getting a reply from c...@cbt.com so trying ceph-devel list... To get familiar with CBT, I first wanted to use it on an existing cluster (i.e., not have CBT do any cluster setup). Is there a .yaml example that illustrates how to use cbt to run, for example, its radosbench

Re: CBT on an existing cluster

2016-01-05 Thread Gregory Farnum
On Tue, Jan 5, 2016 at 9:56 AM, Deneau, Tom wrote: > Having trouble getting a reply from c...@cbt.com so trying ceph-devel list... > > To get familiar with CBT, I first wanted to use it on an existing cluster. > (i.e., not have CBT do any cluster setup). > > Is there a .yaml

Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
It looks like this was only for ceph-deploy in Hammer. I verified that this wasn't the case in e.g. Infernalis I have ensured that the ceph-deploy packages in hammer are in fact signed and coming from our builds. Thanks again for reporting this! On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza

Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Derek Yarnell
Hi Alfredo, I am still having a bit of trouble though with what looks like the 1.5.31 release. With a `yum update ceph-deploy` I get the following even after a full `yum clean all`. http://ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.31-0.noarch.rpm: [Errno -1] Package does not match intended

PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Derek Yarnell
It looks like the ceph-deploy > 1.5.28 packages in the http://download.ceph.com/rpm-hammer/el6 and http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP signed. What happened? This is causing our yum updates to fail but may be a sign of something much more nefarious? # rpm -qp

deprecation and build warnings

2016-01-05 Thread Gregory Farnum
I was annoyed again at our gitbuilders being all yellow because of compile warnings so I went to check out how many of them are real and how many of them are self-inflicted warnings. I just spot-checked

Docs now building again

2016-01-05 Thread Dan Mick
https://github.com/ceph/ceph/pull/7119 fixed an issue preventing docs from building. Master is fixed; merge that into your branches if you want working docs again. -- Dan Mick Red Hat, Inc. Ceph docs: http://ceph.com/docs -- To unsubscribe from this list: send the line "unsubscribe ceph-devel"

Re: FreeBSD Building and Testing

2016-01-05 Thread Mykola Golub
On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote: > Hi, > > Can somebody try to help me and explain why > > in test: Func: test/mon/osd-crash > Func: TEST_crush_reject_empty started > > Fails with a python error which sort of startles me: > test/mon/osd-crush.sh:227:

Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
It seems that the metadata didn't get updated. I just tried it out and got the right version with no issues. Hopefully *this* time it works for you. Sorry for all the trouble On Tue, Jan 5, 2016 at 3:21 PM, Derek Yarnell wrote: > Hi Alfredo, > > I am still having a bit of

Re: Long peering - throttle at FileStore::queue_transactions

2016-01-05 Thread Guang Yang
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote: > On Mon, 4 Jan 2016, Guang Yang wrote: >> Hi Cephers, >> Happy New Year! I got question regards to the long PG peering.. >> >> Over the last several days I have been looking into the *long peering* >> problem when we start a OSD

hammer mon failure

2016-01-05 Thread Samuel Just
http://tracker.ceph.com/issues/14236 New hammer mon failure in the nightlies (missing a map apparently?), can you take a look? -Sam -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org More majordomo info at

Re: hammer mon failure

2016-01-05 Thread Joao Eduardo Luis
On 01/05/2016 07:55 PM, Samuel Just wrote: > http://tracker.ceph.com/issues/14236 > > New hammer mon failure in the nightlies (missing a map apparently?), > can you take a look? > -Sam Will do. -Joao -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a

Is rbd map/unmap op. configured like an event?

2016-01-04 Thread Wukongming
Hi All, Is rbd map/unmap op. configured like an event in the /etc/init directory, so we can use system/upstart to auto-manage it? - wukongming ID: 12019 Tel:0571-86760239 Dept:2014 UIS2 ONEStor

Long peering - throttle at FileStore::queue_transactions

2016-01-04 Thread Guang Yang
Hi Cephers, Happy New Year! I have a question regarding the long PG peering. Over the last several days I have been looking into the *long peering* problem when we start an OSD / OSD host; what I observed was that the two peering working threads were throttled (stuck) when trying to queue new

Re: OSD data file are OSD logs

2016-01-04 Thread Samuel Just
IIRC, you are running giant. I think that's the log rotate dangling fd bug (not fixed in giant since giant is eol). Fixed upstream 8778ab3a1ced7fab07662248af0c773df759653d, firefly backport is b8e3f6e190809febf80af66415862e7c7e415214. -Sam On Mon, Jan 4, 2016 at 3:37 PM, Guang Yang

Re: OSD data file are OSD logs

2016-01-04 Thread Guang Yang
Thanks Sam for the confirmation. Thanks, Guang On Mon, Jan 4, 2016 at 3:59 PM, Samuel Just wrote: > IIRC, you are running giant. I think that's the log rotate dangling > fd bug (not fixed in giant since giant is eol). Fixed upstream >

OSD data file are OSD logs

2016-01-04 Thread Guang Yang
Hi Cephers, Before I open a tracker, I would like to check if it is a known issue or not. On one of our clusters, there was an OSD crash during repairing; the crash happened after we issued a PG repair for inconsistent PGs, which failed because the recorded file size (within xattr) mismatched with

Re: Long peering - throttle at FileStore::queue_transactions

2016-01-04 Thread Samuel Just
We need every OSDMap persisted before persisting later ones because we rely on there being no holes for a bunch of reasons. The deletion transactions are more interesting. It's not part of the boot process; these are deletions resulting from merging in a log from a peer which logically removed

Re: Long peering - throttle at FileStore::queue_transactions

2016-01-04 Thread Sage Weil
On Mon, 4 Jan 2016, Guang Yang wrote: > Hi Cephers, > Happy New Year! I got question regards to the long PG peering.. > > Over the last several days I have been looking into the *long peering* > problem when we start a OSD / OSD host, what I observed was that the > two peering working threads

Benachrichtigung (Notification)

2016-01-04 Thread EMAIL LOTTERIE
Dear email user! Your email address has won €1,200,000.00 (ONE MILLION TWO HUNDRED THOUSAND EURO), with the lucky numbers 9-3-8-26-28-4-64, in the EURO MILLIONS EMAIL LOTTERY. The sum comes from a prize pool of €22,800,000.00 (

Charity/Donation

2016-01-04 Thread Skoll, Jeff
Hi, My name is Jeffrey Skoll, a philanthropist and the founder of one of the largest private foundations in the world. I believe strongly in ‘giving while living.’ I had one idea that never changed in my mind — that you should use your wealth to help people and I have decided to secretly give

Re: Speeding up rbd_stat() in libvirt

2016-01-04 Thread Jason Dillaman
Short term, assuming there wouldn't be an objection from the libvirt community, I think spawning a thread pool and concurrently executing several rbd_stat calls would be the easiest and cleanest solution. I wouldn't suggest trying to roll your own solution for retrieving image

Re: Speeding up rbd_stat() in libvirt

2016-01-04 Thread Wido den Hollander
On 04-01-16 16:38, Jason Dillaman wrote: > Short term, assuming there wouldn't be an objection from the libvirt > community, I think spawning a thread pool and concurrently executing several > rbd_stat calls concurrently would be the easiest and cleanest solution. I > wouldn't suggest trying

Re: Create one million empty files with cephfs

2016-01-04 Thread Gregory Farnum
On Tue, Dec 29, 2015 at 4:55 AM, Fengguang Gong wrote: > hi, > We create one million empty files through filebench, here is the test env: > MDS: one MDS > MON: one MON > OSD: two OSD, each with one Intel P3700; data on OSD with 2x replica > Network: all nodes are

Re: 答复: Reboot blocked when undoing unmap op.

2016-01-04 Thread Ilya Dryomov
On Mon, Jan 4, 2016 at 10:51 AM, Wukongming wrote: > Hi, Ilya, > > It is an old problem. > When you say "when you issue a reboot, daemons get killed and the kernel > client ends up waiting for them to come back, because of outstanding > writes issued by umount called by

Re: How to configure if there are two network cards in Client

2015-12-31 Thread Linux Chips
it would certainly help those with less knowledge about networking in linux, though I do not know how many people using ceph are in this category. Sage and the others here may have a better idea about its feasibility, but I usually use rule-* and route-* (in CentOS) files; they work with

Re: Fwd: how io works when backfill

2015-12-29 Thread Sage Weil
On Tue, 29 Dec 2015, Dong Wu wrote: > if add in osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0 > [7, 2, 3], is it similar with the example above? > still install a pg_temp entry mapping the PG back to [1, 2, 3], then > backfill happens to 7, normal io write to [1, 2, 3], if io to the
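The pg_temp mechanism Sage describes (remap the PG back to the old, complete set while the new primary backfills, and mirror writes to the backfill target once it already holds the object) can be sketched as pure decision logic. This is illustrative Python only, not Ceph's actual code; all names are mine:

```python
def acting_set(up_set, pg_temp):
    """While a new primary is still backfilling, a pg_temp entry maps the
    PG back to the old, complete OSD set so client io keeps being served."""
    return pg_temp if pg_temp else up_set

def write_targets(acting, backfill_target, last_backfill, obj):
    """A write to an object the backfill target already holds (at or below
    its backfill position) must also be sent to that target; writes past
    the backfill position go only to the acting set."""
    if obj <= last_backfill:
        return acting + [backfill_target]
    return acting

# osd.7 joins and becomes primary of pg1.0; pg_temp pins io to [1, 2, 3].
print(acting_set([7, 2, 3], [1, 2, 3]))  # [1, 2, 3]
```

Once backfill to osd.7 completes, the pg_temp entry is removed and the up set [7, 2, 3] becomes the acting set again.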

Create one million empty files with cephfs

2015-12-29 Thread Fengguang Gong
hi, We create one million empty files through filebench, here is the test env: MDS: one MDS MON: one MON OSD: two OSD, each with one Intel P3700; data on OSD with 2x replica Network: all nodes are connected through 10 gigabit network We use more than one client to create files, to test the

Re: Re: How to configure if there are two network cards in Client

2015-12-29 Thread 蔡毅
Thanks for your replies. So is it reasonable that we could write a file such as a shell script to bind one process to a specific IP and modify the routing tables and rules as one of Ceph's tools? So that it is convenient for users when they want to change the NIC connecting with the

Re: How to configure if there are two network cards in Client

2015-12-29 Thread Linux Chips
On 12/28/2015 07:47 PM, Sage Weil wrote: On Fri, 25 Dec 2015, 蔡毅 wrote: Hi all, When we read the code, we haven't found the function that the client can bind a specific IP. In Ceph's configuration, we could only find the parameter "public network", but it seems to act on the OSD but not the

Re: Fwd: how io works when backfill

2015-12-28 Thread Dong Wu
if add in osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0 [7, 2, 3], is it similar with the example above? still install a pg_temp entry mapping the PG back to [1, 2, 3], then backfill happens to 7, normal io write to [1, 2, 3], if io to the portion of the PG that has already been

Fwd: how io works when backfill

2015-12-28 Thread Zhiqiang Wang
2015-12-27 20:48 GMT+08:00 Dong Wu : > Hi, > When add osd or remove osd, ceph will backfill to rebalance data. > eg: > - pg1.0[1, 2, 3] > - add an osd(eg. osd.7) > - ceph start backfill, then pg1.0 osd set changes to [1, 2, 7] > - if [a, b, c, d, e] are objects needing

Speeding up rbd_stat() in libvirt

2015-12-28 Thread Wido den Hollander
Hi, The storage pools of libvirt know a mechanism called 'refresh' which will scan a storage pool to refresh the contents. The current implementation does: * List all images via rbd_list() * Call rbd_stat() on each image Source:

Re: FreeBSD Building and Testing

2015-12-28 Thread Willem Jan Withagen
Hi, Can somebody try to help me and explain why in test: Func: test/mon/osd-crash Func: TEST_crush_reject_empty started Fails with a python error which sort of startles me: test/mon/osd-crush.sh:227: TEST_crush_reject_empty: local empty_map=testdir/osd-crush/empty_map

Re: Fwd: how io works when backfill

2015-12-28 Thread Sage Weil
On Mon, 28 Dec 2015, Zhiqiang Wang wrote: > 2015-12-27 20:48 GMT+08:00 Dong Wu : > > Hi, > > When add osd or remove osd, ceph will backfill to rebalance data. > > eg: > > - pg1.0[1, 2, 3] > > - add an osd(eg. osd.7) > > - ceph start backfill, then pg1.0 osd set changes

Re: How to configure if there are two network cards in Client

2015-12-28 Thread Sage Weil
On Fri, 25 Dec 2015, 蔡毅 wrote: > Hi all, > When we read the code, we haven't found the function that the client can > bind a specific IP. In Ceph's configuration, we could only find the parameter > "public network", but it seems to act on the OSD but not the client. > There is a scenario

ceph branch status

2015-12-28 Thread ceph branch robot
-- All Branches -- Abhishek Varshney 2015-11-23 11:45:29 +0530 infernalis-backports Adam C. Emerson 2015-12-21 16:51:39 -0500 wip-cxx11concurrency Adam Crume 2014-12-01 20:45:58 -0800

Cordial greeting

2015-12-28 Thread Zahra Robert
Cordial greeting message from Fatima, I am seeking for your help,I will be very glad if you do assist me to relocate a sum of (US$4 Million Dollars) into your Bank account in your country for the benefit of both of us i want to use this money for investment. I will give you more details as you

Re: CEPH build

2015-12-28 Thread Odintsov Vladislav
Hi, resending my letter. Thank you for the attention. Best regards, Vladislav Odintsov From: Sage Weil Sent: Monday, December 28, 2015 19:49 To: Odintsov Vladislav Subject: Re: CEPH build Can you

how io works when backfill

2015-12-27 Thread Dong Wu
Hi, When add osd or remove osd, ceph will backfill to rebalance data. eg: - pg1.0[1, 2, 3] - add an osd(eg. osd.7) - ceph start backfill, then pg1.0 osd set changes to [1, 2, 7] - if [a, b, c, d, e] are objects needing to backfill to osd.7 and now object a is backfilling - when a write io hits

Re: [ceph-users] why not add (offset,len) to pglog

2015-12-25 Thread Dong Wu
Thank you for your reply. I am looking forward to Sage's opinion too @sage. Also I'll keep up with BlueStore's and KStore's progress. Regards 2015-12-25 14:48 GMT+08:00 Ning Yao : > Hi, Dong Wu, > > 1. As I currently work for other things, this proposal is abandon for > a

How to configure if there are two network cards in Client

2015-12-25 Thread 蔡毅
Hi all, When we read the code, we haven't found the function that the client can bind a specific IP. In Ceph's configuration, we could only find the parameter “public network”, but it seems to act on the OSD but not the client. There is a scenario where the client has two network cards named

Re: [ceph-users] why not add (offset,len) to pglog

2015-12-25 Thread Sage Weil
On Fri, 25 Dec 2015, Ning Yao wrote: > Hi, Dong Wu, > > 1. As I currently work for other things, this proposal is abandon for > a long time > 2. This is a complicated task as we need to consider a lots such as > (not just for writeOp, as well as truncate, delete) and also need to > consider the

Re: [ceph-users] why not add (offset,len) to pglog

2015-12-24 Thread Ning Yao
Hi, Dong Wu, 1. As I currently work on other things, this proposal has been abandoned for a long time 2. This is a complicated task, as we need to consider a lot (not just writeOp, but also truncate, delete) and also need to consider the different effects on different backends (Replicated,

Re: [ceph-users] why not add (offset,len) to pglog

2015-12-24 Thread Dong Wu
Thanks, from this pull request I learned that this issue is not completed; is there any new progress on this issue? 2015-12-25 12:30 GMT+08:00 Xinze Chi (信泽) : > Yeah, This is good idea for recovery, but not for backfill. > @YaoNing have pull a request about this >

why not add (offset,len) to pglog

2015-12-24 Thread Dong Wu
Hi, I have a doubt about pglog: the pglog contains (op,object,version) etc. When peering, we use the pglog to construct the missing list, then recover the whole object in the missing list, even if the differing data among replicas is less than a whole object (eg, 4MB). Why not add (offset,len) to the pglog? If so, the
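The proposal can be illustrated with a toy sketch (plain Python, not Ceph code; names are mine): if each log entry carried (offset, len), recovery could union the dirty extents of an object and copy only those bytes instead of the whole 4MB object:

```python
def merge_dirty_extents(entries):
    """Union overlapping (offset, len) extents recorded in pglog entries,
    so recovery only has to copy the dirty bytes of an object."""
    spans = sorted((off, off + ln) for off, ln in entries)
    merged = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous span
        else:
            merged.append([start, end])
    return [(s, e - s) for s, e in merged]

# Three 4k writes logged against one 4MB object: recover 12k, not 4MB.
print(merge_dirty_extents([(0, 4096), (4096, 4096), (16384, 4096)]))
```

As the replies note, this helps log-based recovery but not backfill, and it gets complicated once truncate/delete ops and non-replicated backends enter the picture.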

Re: [ceph-users] why not add (offset,len) to pglog

2015-12-24 Thread 信泽
Yeah, this is a good idea for recovery, but not for backfill. @YaoNing opened a pull request about this https://github.com/ceph/ceph/pull/3837 this year. 2015-12-25 11:16 GMT+08:00 Dong Wu : > Hi, > I have doubt about pglog, the pglog contains (op,object,version) etc. > when

Re: fixing jenkins builds on pull requests

2015-12-23 Thread Loic Dachary
Hi, I triaged the jenkins related failures (from #24 to #49): CentOS 6 not supported: https://jenkins.ceph.com/job/ceph-pull-requests/26/console https://jenkins.ceph.com/job/ceph-pull-requests/28/console https://jenkins.ceph.com/job/ceph-pull-requests/29/console

use object size of 32k rather than 4M

2015-12-23 Thread hzwulibin
Hi, cephers, Sage and Haomai Recently we got stuck on a performance drop problem during recovery. The scenario is simple: 1. run fio with rand write (bs=4k) 2. stop one osd; sleep 10; start the osd 3. the IOPS drop from 6K to about 200 We now know the SSD which that osd is on is the bottleneck when

Re: [ceph-users] use object size of 32k rather than 4M

2015-12-23 Thread hzwulibin
Hi, Robert Thanks for your quick reply. Yeah, the number of files really will be a potential problem. But if it is just a memory problem, we could use more memory in our OSD servers. Also, I tested it on XFS using mdtest; here is the result: $ sudo ~/wulb/bin/mdtest -I 1 -z 1 -b 1024 -R -F

Re: Time to move the make check bot to jenkins.ceph.com

2015-12-23 Thread Ken Dreyer
This is really great. Thanks Loic and Alfredo! - Ken On Tue, Dec 22, 2015 at 11:23 AM, Loic Dachary wrote: > Hi, > > The make check bot moved to jenkins.ceph.com today and ran it's first > successfull job. You will no longer see comments from the bot: it will update > the

Re: [ceph-users] use object size of 32k rather than 4M

2015-12-23 Thread Van Leeuwen, Robert
>In order to reduce the enlarge impact, we want to change the default size of >the object from 4M to 32k. > >We know that will increase the number of the objects of one OSD and make >remove process become longer. > >Hmm, here i want to ask your guys is there any other potential problems will

Re: [ceph-users] use object size of 32k rather than 4M

2015-12-23 Thread Van Leeuwen, Robert
>Thanks for your quick reply. Yeah, the number of file really will be the >potential problem. But if just the memory problem, we could use more memory in >our OSD >servers. Adding more mem might not be a viable solution: Ceph does not say how much data is stored in an inode but the docs say the
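A back-of-envelope on the object-count side of this thread (my assumption: 1 TiB of data stored on a single OSD) shows why shrinking the object size from 4M to 32k worries people: it multiplies the file count per OSD by 128.

```python
# Object count per OSD for the two object sizes discussed, assuming
# (my assumption) 1 TiB of data stored on one OSD.
OSD_DATA = 1 * 2**40

count_4m = OSD_DATA // (4 * 2**20)    # 4M objects
count_32k = OSD_DATA // (32 * 2**10)  # 32k objects

print(count_4m)                # 262144
print(count_32k)               # 33554432
print(count_32k // count_4m)   # 128x more files to cache, scan and remove
```

Tens of millions of files per OSD stresses the inode/dentry caches and the remove path, which is exactly the concern raised in the replies.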

Re: Let's Not Destroy the World in 2038

2015-12-23 Thread Adam C. Emerson
On 22/12/2015, Gregory Farnum wrote: [snip] > So I think we're stuck with creating a new utime_t and incrementing > the struct_v on everything that contains them. :/ [snip] > We'll also then need the full feature bit system to make > sure we send the old encoding to clients which don't understand

rgw: sticky user quota data on bucket removal

2015-12-23 Thread Paul Von-Stamwitz
Hi, We're testing user quotas on Hammer with civetweb and we're running into an issue with user stats. If the user/admin removes a bucket using -force/-purge-objects options with s3cmd/radosgw-admin respectively, the user stats will continue to reflect the deleted objects for quota purposes,

Re: rgw: sticky user quota data on bucket removal

2015-12-23 Thread Yehuda Sadeh-Weinraub
On Wed, Dec 23, 2015 at 3:53 PM, Paul Von-Stamwitz wrote: > Hi, > > We're testing user quotas on Hammer with civetweb and we're running into an > issue with user stats. > > If the user/admin removes a bucket using -force/-purge-objects options with >

RE: rgw: sticky user quota data on bucket removal

2015-12-23 Thread Paul Von-Stamwitz
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Yehuda Sadeh-Weinraub > Sent: Wednesday, December 23, 2015 5:02 PM > To: Paul Von-Stamwitz > Cc: ceph-devel@vger.kernel.org > Subject: Re: rgw: sticky user quota data on

Re: New "make check" job for Ceph pull requests

2015-12-23 Thread Loic Dachary
Hi, For the record the pending issues that prevent the "make check" job (https://jenkins.ceph.com/job/ceph-pull-requests/) from running can be found at http://tracker.ceph.com/issues/14172 Cheers On 23/12/2015 21:05, Alfredo Deza wrote: > Hi all, > > As of yesterday (Tuesday Dec 22nd) we

New "make check" job for Ceph pull requests

2015-12-23 Thread Alfredo Deza
Hi all, As of yesterday (Tuesday Dec 22nd) we have the "make check" job running within our CI infrastructure, working very similarly as the previous check with a few differences: * there are no longer comments added to the pull requests * notifications of success (or failure) are done inline in

jenkins on ceph pull requests: clarify which Operating System is used

2015-12-23 Thread Loic Dachary
Hi Alfredo, I see a make check slave currently runs on jessie and I think to remember it ran on trusty slaves before. It's a good thing operating systems are mixed but there does not seem to be a clear indication about which operating system is used. For instance regarding:

fixing jenkins builds on pull requests

2015-12-23 Thread Loic Dachary
Hi Alfredo, I forgot to mention that the ./run-make-check.sh run currently has no known false negative on CentOS 7. By that I mean that if run on master 100 times, it will succeed 100 times. This is good to debug the jenkins builds on pull requests as we know all problems either come from the

Let's Not Destroy the World in 2038

2015-12-22 Thread Adam C. Emerson
Comrades, Ceph's victory is assured. It will be the storage system of The Future. Matt Benjamin has reminded me that if we don't act fast¹ Ceph will be responsible for destroying the world. utime_t() uses a 32-bit second count internally. This isn't great, but it's something we can fix.
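The arithmetic behind the deadline: a signed 32-bit second counter starting at the Unix epoch, as in utime_t, runs out on 19 January 2038.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Largest value a signed 32-bit second counter can hold.
overflow = EPOCH + timedelta(seconds=2**31 - 1)
print(overflow.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later the counter wraps negative, which is why the thread concludes that a new utime_t encoding (and a struct_v bump on everything containing it) is needed.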

Re: RBD performance with many childs and snapshots

2015-12-22 Thread Josh Durgin
On 12/22/2015 01:55 PM, Wido den Hollander wrote: On 12/21/2015 11:51 PM, Josh Durgin wrote: On 12/21/2015 11:06 AM, Wido den Hollander wrote: Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This

Re: RBD performance with many childs and snapshots

2015-12-22 Thread Josh Durgin
On 12/22/2015 05:34 AM, Wido den Hollander wrote: On 21-12-15 23:51, Josh Durgin wrote: On 12/21/2015 11:06 AM, Wido den Hollander wrote: Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This would

Re: Time to move the make check bot to jenkins.ceph.com

2015-12-22 Thread Loic Dachary
Hi, The make check bot moved to jenkins.ceph.com today and ran its first successful job. You will no longer see comments from the bot: it will update the github status instead, which is less intrusive. Cheers On 21/12/2015 11:13, Loic Dachary wrote: > Hi, > > The make check bot is broken

Re: RFC: tool for applying 'ceph daemon ' command to all OSDs

2015-12-22 Thread Dan Mick
On 12/21/2015 11:29 PM, Gregory Farnum wrote: > On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick wrote: >> I needed something to fetch current config values from all OSDs (sorta >> the opposite of 'injectargs --key value), so I hacked it, and then >> spiffed it up a bit. Does this

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-22 Thread Wido den Hollander
On 12/21/2015 11:20 PM, Josh Durgin wrote: > On 12/21/2015 11:00 AM, Wido den Hollander wrote: >> My discard code now works, but I wanted to verify. If I understand Jason >> correctly it would be a matter of figuring out the 'order' of a image >> and call rbd_discard in a loop until you reach the

Re: RBD performance with many childs and snapshots

2015-12-22 Thread Wido den Hollander
On 12/21/2015 11:51 PM, Josh Durgin wrote: > On 12/21/2015 11:06 AM, Wido den Hollander wrote: >> Hi, >> >> While implementing the buildvolfrom method in libvirt for RBD I'm stuck >> at some point. >> >> $ virsh vol-clone --pool myrbdpool image1 image2 >> >> This would clone image1 to a new RBD

Re: Let's Not Destroy the World in 2038

2015-12-22 Thread Gregory Farnum
On Tue, Dec 22, 2015 at 12:10 PM, Adam C. Emerson wrote: > Comrades, > > Ceph's victory is assured. It will be the storage system of The Future. > Matt Benjamin has reminded me that if we don't act fast¹ Ceph will be > responsible for destroying the world. > > utime_t() uses

RE: tool for applying 'ceph daemon ' command to all OSDs

2015-12-22 Thread igor.podo...@ts.fujitsu.com
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Dan Mick > Sent: Tuesday, December 22, 2015 7:00 AM > To: ceph-devel > Subject: RFC: tool for applying 'ceph daemon ' command to all OSDs > > I needed something to fetch

Re: RBD performance with many childs and snapshots

2015-12-22 Thread Wido den Hollander
On 21-12-15 23:51, Josh Durgin wrote: > On 12/21/2015 11:06 AM, Wido den Hollander wrote: >> Hi, >> >> While implementing the buildvolfrom method in libvirt for RBD I'm stuck >> at some point. >> >> $ virsh vol-clone --pool myrbdpool image1 image2 >> >> This would clone image1 to a new RBD image

Re: Issue with Ceph File System and LIO

2015-12-22 Thread Eric Eastman
On Sun, Dec 20, 2015 at 7:38 PM, Eric Eastman wrote: > On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote: >> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman >> wrote: Hi Yan Zheng, Eric Eastman Similar

Re: FreeBSD Building and Testing

2015-12-21 Thread Willem Jan Withagen
On 21-12-2015 01:45, Xinze Chi (信泽) wrote: sorry for delay reply. Please have a try https://github.com/ceph/ceph/commit/ae4a8162eacb606a7f65259c6ac236e144bfef0a. Tried this one first: Testsuite summary for ceph 10.0.1

RBD performance with many childs and snapshots

2015-12-21 Thread Wido den Hollander
Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This would clone image1 to a new RBD image called 'image2'. The code I've written now does: 1. Create a snapshot called image1@libvirt- 2. Protect the

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Wido den Hollander
On 12/21/2015 04:50 PM, Josh Durgin wrote: > On 12/21/2015 07:09 AM, Jason Dillaman wrote: >> You will have to ensure that your writes are properly aligned with the >> object size (or object set if fancy striping is used on the RBD >> volume). In that case, the discard is translated to remove

Re: FreeBSD Building and Testing

2015-12-21 Thread Willem Jan Withagen
On 20-12-2015 17:10, Willem Jan Withagen wrote: Hi, Most of Ceph is getting there, in a most crude and rough state. So beneath is a status update on what is not working for me yet. Further: A) unittest_erasure_code_plugin fails on the fact that there is a different error code returned

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Josh Durgin
On 12/21/2015 11:00 AM, Wido den Hollander wrote: My discard code now works, but I wanted to verify. If I understand Jason correctly it would be a matter of figuring out the 'order' of a image and call rbd_discard in a loop until you reach the end of the image. You'd need to get the order via
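The loop Josh describes (find the image's order, then discard object-aligned chunks until the end of the image) is just alignment arithmetic. A sketch of that loop with the actual rbd_discard calls left out (illustrative Python, not librbd code):

```python
def discard_ranges(image_size: int, order: int):
    """Yield (offset, length) chunks aligned to the image's object size
    (1 << order); each chunk would be handed to rbd_discard()."""
    object_size = 1 << order
    offset = 0
    while offset < image_size:
        length = min(object_size, image_size - offset)
        yield offset, length
        offset += length

# A 3-object image (order 22 = 4 MiB objects) plus a 100-byte tail.
for off, ln in discard_ranges(3 * (1 << 22) + 100, 22):
    print(off, ln)  # real code: rbd_discard(image, off, ln)
```

Keeping each discard aligned to whole objects lets the OSD remove objects outright instead of writing zeroes into partial ranges.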

Fwd: FileStore : no wait thread queue_sync

2015-12-21 Thread David Casier
FYI. -- Forwarded message -- From: David Casier Date: 2015-12-21 23:19 GMT+01:00 Subject: FileStore : no wait thread queue_sync To: Ceph Development , Sage Weil Cc: Benoît LORIOT ,

Re: RBD performance with many childs and snapshots

2015-12-21 Thread Josh Durgin
On 12/21/2015 11:06 AM, Wido den Hollander wrote: Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This would clone image1 to a new RBD image called 'image2'. The code I've written now does: 1. Create

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Alexandre DERUMIER
>>I just want to know if this is sufficient to wipe a RBD image? AFAIK, ceph writes zeroes in the rados objects when discard is used. There is an option to skip zero writes if needed OPTION(rbd_skip_partial_discard, OPT_BOOL, false) // when trying to discard a range inside an object, set to

Re: Fwd: Client still connect failed leader after that mon down

2015-12-21 Thread Sage Weil
On Mon, 21 Dec 2015, Zhi Zhang wrote: > Regards, > Zhi Zhang (David) > Contact: zhang.david2...@gmail.com > zhangz.da...@outlook.com > > > > -- Forwarded message -- > From: Jaze Lee > Date: Mon, Dec 21, 2015 at 4:08 PM > Subject: Re: Client

Re: Issue with Ceph File System and LIO

2015-12-21 Thread Gregory Farnum
On Sun, Dec 20, 2015 at 6:38 PM, Eric Eastman wrote: > On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote: >> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman >> wrote: Hi Yan Zheng, Eric Eastman Similar

RFC: tool for applying 'ceph daemon ' command to all OSDs

2015-12-21 Thread Dan Mick
I needed something to fetch current config values from all OSDs (sorta the opposite of 'injectargs --key value), so I hacked it, and then spiffed it up a bit. Does this seem like something that would be useful in this form in the upstream Ceph, or does anyone have any thoughts on its design or
