Hi,
we are using cephfs on a ceph cluster (V0.94.5, 3x MON, 1x MDS, ~50x OSD).
Recently, we observed a spontaneous (and unwanted) change in the access
rights of newly created directories:
$ umask
0077
$ mkdir test
$ ls -ld test
drwx------ 1 me me 0 Jan 6 14:59 test
$ touch test/foo
$ ls -ld
Hi all,
When I tested randomrw on my cluster through filebench (running ceph
0.94.5), one of the OSDs was marked down, but I could still see the
process with the ps command.
So I checked the log file and found the following message:
>
2016-01-07
I want your opinion, guys, regarding two features implemented in an attempt to
greatly reduce the number of memory allocations without major surgery in the
code.
The features are:
1. Custom STL allocator, which allocates the first N items from within the STL
container itself. This is a semi-transparent replacement of
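A minimal sketch of what such an allocator could look like (the name
inline_allocator and every detail below are my assumptions, not the actual
patch):

#include <cstddef>
#include <new>
#include <vector>

// Sketch: serve the first N elements from a buffer embedded in the allocator
// object itself, so a container declared with it keeps small contents off the
// heap. Caveat: containers using it must not be moved or swapped, since the
// storage lives inside the allocator.
template <typename T, std::size_t N>
class inline_allocator {
  alignas(T) unsigned char buf[N * sizeof(T)];
  bool used = false;                 // the inline buffer is handed out once
public:
  using value_type = T;
  template <class U> struct rebind { using other = inline_allocator<U, N>; };

  inline_allocator() = default;
  template <class U> inline_allocator(const inline_allocator<U, N>&) {}

  T* allocate(std::size_t n) {
    if (!used && n <= N) {           // first small request: use the inline buffer
      used = true;
      return reinterpret_cast<T*>(buf);
    }
    return static_cast<T*>(::operator new(n * sizeof(T)));  // fall back to heap
  }
  void deallocate(T* p, std::size_t) {
    if (reinterpret_cast<unsigned char*>(p) == buf) { used = false; return; }
    ::operator delete(p);
  }
  friend bool operator==(const inline_allocator& a, const inline_allocator& b)
  { return &a == &b; }
  friend bool operator!=(const inline_allocator& a, const inline_allocator& b)
  { return &a != &b; }
};

int main() {
  // A vector that performs no heap allocation while it holds <= 4 elements.
  std::vector<int, inline_allocator<int, 4>> v;
  v.push_back(1);                    // served from the inline buffer
  v.push_back(2);
}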
On Thu, 7 Jan 2016, Javen Wu wrote:
> Hi Sage,
>
> Sorry to bother you. I am not sure if it is appropriate to send email to you
> directly, but I cannot find any useful information to address my confusion
> from the Internet. Hope you can help me.
>
> I happened to hear that you are going to
Thanks Sage for your reply.
I am not sure I understand the challenges you mentioned about
backfill/scrub.
I will investigate the code and let you know if we can overcome the
challenge by easy means.
Our rough ideas for ZFSStore are:
1. encapsulate the dnode object as an onode and add onode
Hi Sage,
thanks for your quick response. Javen and I were once ZFS developers, and we
are currently focusing on how to
leverage some of the ZFS ideas to improve the ceph backend performance
in userspace.
Based on your encouraging reply, we have come up with 2 schemes to continue
our future work:
1. the
In http://download.ceph.com/tarballs/ , there are two tarballs:
"ceph_10.0.1.orig.tar.gz" and "ceph_10.0.1.orig.tar.gz.1"
Which one is correct? Can we delete one?
- Ken
On 6-1-2016 08:51, Mykola Golub wrote:
On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote:
Hi,
Can somebody help me and explain why
the test test/mon/osd-crash (Func: TEST_crush_reject_empty started)
fails with a Python error which sort of startles me:
On 5-1-2016 19:23, Gregory Farnum wrote:
On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote:
Hi,
Can somebody help me and explain why
the test test/mon/osd-crash (Func: TEST_crush_reject_empty started)
fails with a Python error which sort of startles me:
Hi,
The stable releases (hammer, infernalis) did not make progress in the past few
weeks because we cannot run tests.
Before Christmas the following happened:
* the sepia lab was migrated and we discovered the OpenStack teuthology backend
can't run without it (that was a problem for a few days
On 6-1-2016 08:51, Mykola Golub wrote:
>
> Are you able to reproduce this problem manually? I.e. in src dir, start the
> cluster using vstart.sh:
>
> ./vstart.sh -n
>
> Check it is running:
>
> ./ceph -s
>
> Repeat the test:
>
> truncate -s 0 empty_map.txt
> ./crushtool -c empty_map.txt -o
8AM PST as usual (i.e. in 18 minutes)! Discussion topics today include
bluestore testing results and a potential performance regression in
CentOS/RHEL 7.1 kernels. Please feel free to add your own topics!
Here are the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the
Hi Sage,
Sorry to bother you. I am not sure if it is appropriate to send email to
you
directly, but I cannot find any useful information to address my confusion
from the Internet. Hope you can help me.
I happened to hear that you are going to start BlueFS to eliminate the
redundancy between XFS
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
The last recording I'm seeing is for 10/07/15. Can we get the newer ones?
Thanks,
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jan 6, 2016 at 8:43 AM, Mark Nelson wrote:
> 8AM PST
This is odd. We are signing all packages before publishing them on the
repository. These ceph-deploy releases are following a new release
process, so I will have to investigate where the disconnect is.
Thanks for letting us know.
On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell
On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote:
> Hi,
>
> Can somebody help me and explain why
>
> the test test/mon/osd-crash (Func: TEST_crush_reject_empty started)
>
> fails with a Python error which sort of startles me:
> test/mon/osd-crush.sh:227:
Having trouble getting a reply from c...@cbt.com so trying ceph-devel list...
To get familiar with CBT, I first wanted to use it on an existing cluster.
(i.e., not have CBT do any cluster setup).
Is there a .yaml example that illustrates how to use cbt to run, for example,
its radosbench
On Tue, Jan 5, 2016 at 9:56 AM, Deneau, Tom wrote:
> Having trouble getting a reply from c...@cbt.com so trying ceph-devel list...
>
> To get familiar with CBT, I first wanted to use it on an existing cluster.
> (i.e., not have CBT do any cluster setup).
>
> Is there a .yaml
It looks like this was only for ceph-deploy in Hammer. I verified that
this wasn't the case in e.g. Infernalis.
I have ensured that the ceph-deploy packages in hammer are in fact
signed and coming from our builds.
Thanks again for reporting this!
On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza
Hi Alfredo,
I am still having a bit of trouble though with what looks like the
1.5.31 release. With a `yum update ceph-deploy` I get the following
even after a full `yum clean all`.
http://ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.31-0.noarch.rpm:
[Errno -1] Package does not match intended
It looks like the ceph-deploy > 1.5.28 packages in the
http://download.ceph.com/rpm-hammer/el6 and
http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP
signed. What happened? This is causing our yum updates to fail, but it may
be a sign of something much more nefarious.
# rpm -qp
I was annoyed again at our gitbuilders being all yellow because of
compile warnings, so I went to check out how many of them are real and
how many of them are self-inflicted warnings. I just spot-checked
https://github.com/ceph/ceph/pull/7119 fixed an issue preventing docs
from building. Master is fixed; merge that into your branches if you
want working docs again.
--
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs
On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote:
> Hi,
>
> Can somebody help me and explain why
>
> the test test/mon/osd-crash (Func: TEST_crush_reject_empty started)
>
> fails with a Python error which sort of startles me:
> test/mon/osd-crush.sh:227:
It seems that the metadata didn't get updated.
I just tried it out and got the right version with no issues. Hopefully
*this* time it works for you.
Sorry for all the trouble.
On Tue, Jan 5, 2016 at 3:21 PM, Derek Yarnell wrote:
> Hi Alfredo,
>
> I am still having a bit of
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> On Mon, 4 Jan 2016, Guang Yang wrote:
>> Hi Cephers,
>> Happy New Year! I have a question regarding the long PG peering.
>>
>> Over the last several days I have been looking into the *long peering*
>> problem when we start an OSD
http://tracker.ceph.com/issues/14236
New hammer mon failure in the nightlies (missing a map apparently?),
can you take a look?
-Sam
On 01/05/2016 07:55 PM, Samuel Just wrote:
> http://tracker.ceph.com/issues/14236
>
> New hammer mon failure in the nightlies (missing a map apparently?),
> can you take a look?
> -Sam
Will do.
-Joao
Hi All,
Is the rbd map/unmap operation configured as an event under /etc/init, so
that we can use upstart to manage it automatically?
-
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
Hi Cephers,
Happy New Year! I have a question regarding the long PG peering.
Over the last several days I have been looking into the *long peering*
problem when we start an OSD / OSD host. What I observed was that the
two peering worker threads were throttled (stuck) when trying to
queue new
IIRC, you are running giant. I think that's the log rotate dangling
fd bug (not fixed in giant since giant is EOL). Fixed upstream in
8778ab3a1ced7fab07662248af0c773df759653d; the firefly backport is
b8e3f6e190809febf80af66415862e7c7e415214.
-Sam
On Mon, Jan 4, 2016 at 3:37 PM, Guang Yang
Thanks Sam for the confirmation.
Thanks,
Guang
On Mon, Jan 4, 2016 at 3:59 PM, Samuel Just wrote:
> IIRC, you are running giant. I think that's the log rotate dangling
> fd bug (not fixed in giant since giant is eol). Fixed upstream
>
Hi Cephers,
Before I open a tracker, I would like to check if it is a known issue or not.
On one of our clusters, there was an OSD crash during repair; the
crash happened after we issued a PG repair for inconsistent PGs, which
failed because the recorded file size (within xattr) mismatched with
We need every OSDMap persisted before persisting later ones because we
rely on there being no holes for a bunch of reasons.
The deletion transactions are more interesting. They are not part of the
boot process; these are deletions resulting from merging in a log from
a peer which logically removed
On Mon, 4 Jan 2016, Guang Yang wrote:
> Hi Cephers,
> Happy New Year! I have a question regarding the long PG peering.
>
> Over the last several days I have been looking into the *long peering*
> problem when we start an OSD / OSD host. What I observed was that the
> two peering worker threads
Short term, assuming there wouldn't be an objection from the libvirt community,
I think spawning a thread pool and executing several rbd_stat
calls concurrently would be the easiest and cleanest solution. I wouldn't
suggest trying to roll your own solution for retrieving image
On 04-01-16 16:38, Jason Dillaman wrote:
> Short term, assuming there wouldn't be an objection from the libvirt
> community, I think spawning a thread pool and executing several
> rbd_stat calls concurrently would be the easiest and cleanest solution. I
> wouldn't suggest trying
On Tue, Dec 29, 2015 at 4:55 AM, Fengguang Gong wrote:
> hi,
> We create one million empty files through filebench, here is the test env:
> MDS: one MDS
> MON: one MON
> OSD: two OSDs, each with one Intel P3700; data on OSD with 2x replica
> Network: all nodes are
On Mon, Jan 4, 2016 at 10:51 AM, Wukongming wrote:
> Hi, Ilya,
>
> It is an old problem.
> When you say "when you issue a reboot, daemons get killed and the kernel
> client ends up waiting for them to come back, because of outstanding
> writes issued by umount called by
It would certainly help those with less knowledge about networking in
Linux, though I do not know how many people using Ceph are in this
category. Sage and the others here may have a better idea about its
feasibility.
But I usually use rule-* and route-* (in CentOS) files; they work with
On Tue, 29 Dec 2015, Dong Wu wrote:
> if we add in osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0
> [7, 2, 3], is it similar to the example above?
> still install a pg_temp entry mapping the PG back to [1, 2, 3], then
> backfill happens to 7, normal io write to [1, 2, 3], if io to the
Hi,
We create one million empty files through filebench; here is the test env:
MDS: one MDS
MON: one MON
OSD: two OSDs, each with one Intel P3700; data on OSD with 2x replica
Network: all nodes are connected through 10 gigabit network
We use more than one client to create files, to test the
Thanks for your replies.
So would it be reasonable to write something such as a shell script, as one of
Ceph's tools, to bind one process to a specific IP and modify the routing
tables and rules? That way it is convenient for users when they want to
change the NIC connecting with the
On 12/28/2015 07:47 PM, Sage Weil wrote:
On Fri, 25 Dec 2015, ?? wrote:
Hi all,
When we read the code, we haven't found the function that the client can
bind to a specific IP. In Ceph's configuration, we could only find the parameter
“public network”, but it seems to act on the OSD but not the
if we add in osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0
[7, 2, 3], is it similar to the example above?
still install a pg_temp entry mapping the PG back to [1, 2, 3], then
backfill happens to 7, normal io write to [1, 2, 3], if io to the
portion of the PG that has already been
2015-12-27 20:48 GMT+08:00 Dong Wu :
> Hi,
> When we add or remove an osd, ceph will backfill to rebalance data.
> eg:
> - pg1.0[1, 2, 3]
> - add an osd(eg. osd.7)
> - ceph start backfill, then pg1.0 osd set changes to [1, 2, 7]
> - if [a, b, c, d, e] are objects needing
Hi,
libvirt's storage pools have a mechanism called 'refresh' which
will scan a storage pool to refresh its contents.
The current implementation does:
* List all images via rbd_list()
* Call rbd_stat() on each image
Source:
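For anyone who has not used those two calls, a rough standalone sketch of the
same pattern against the librbd C API looks like this (this is not the libvirt
source; the pool name, config path, and the stripped-down error handling are
assumptions):

#include <rados/librados.h>
#include <rbd/librbd.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
  rados_t cluster;
  rados_create(&cluster, nullptr);
  rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
  rados_connect(cluster);

  rados_ioctx_t io;
  rados_ioctx_create(cluster, "libvirt-pool", &io);

  // First call with no buffer: rbd_list() returns -ERANGE and sets the size
  // needed for the concatenated NUL-terminated image names.
  size_t len = 0;
  rbd_list(io, nullptr, &len);
  std::vector<char> names(len);
  int r = rbd_list(io, names.data(), &len);

  if (r >= 0) {
    for (size_t off = 0; off < len && names[off] != '\0';
         off += std::strlen(&names[off]) + 1) {
      const char* name = &names[off];
      rbd_image_t img;
      if (rbd_open(io, name, &img, nullptr) != 0)
        continue;
      rbd_image_info_t info;
      if (rbd_stat(img, &info, sizeof(info)) == 0)     // one stat per image
        std::printf("%s: size=%llu obj_size=%llu\n", name,
                    (unsigned long long)info.size,
                    (unsigned long long)info.obj_size);
      rbd_close(img);
    }
  }
  rados_ioctx_destroy(io);
  rados_shutdown(cluster);
  return 0;
}

The one-stat-per-image loop is what makes a refresh slow on pools with many
images, which is what the thread-pool suggestion elsewhere in this thread is
about.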
Hi,
Can somebody help me and explain why
the test test/mon/osd-crash (Func: TEST_crush_reject_empty started)
fails with a Python error which sort of startles me:
test/mon/osd-crush.sh:227: TEST_crush_reject_empty: local
empty_map=testdir/osd-crush/empty_map
On Mon, 28 Dec 2015, Zhiqiang Wang wrote:
> 2015-12-27 20:48 GMT+08:00 Dong Wu :
> > Hi,
> > When we add or remove an osd, ceph will backfill to rebalance data.
> > eg:
> > - pg1.0[1, 2, 3]
> > - add an osd(eg. osd.7)
> > - ceph start backfill, then pg1.0 osd set changes
On Fri, 25 Dec 2015, ?? wrote:
> Hi all,
> When we read the code, we haven't found the function that the client can
> bind to a specific IP. In Ceph's configuration, we could only find the parameter
> “public network”, but it seems to act on the OSD but not the client.
> There is a scenario
-- All Branches --
Abhishek Varshney
2015-11-23 11:45:29 +0530 infernalis-backports
Adam C. Emerson
2015-12-21 16:51:39 -0500 wip-cxx11concurrency
Adam Crume
2014-12-01 20:45:58 -0800
Hi,
Resending my letter.
Thank you for your attention.
Best regards,
Vladislav Odintsov
From: Sage Weil
Sent: Monday, December 28, 2015 19:49
To: Odintsov Vladislav
Subject: Re: CEPH build
Can you
Hi,
When we add or remove an osd, ceph will backfill to rebalance data.
eg:
- pg1.0 [1, 2, 3]
- add an osd (eg. osd.7)
- ceph starts backfill, then the pg1.0 osd set changes to [1, 2, 7]
- if [a, b, c, d, e] are objects needing to backfill to osd.7 and now
object a is backfilling
- when a write io hits
Thank you for your reply. I am looking forward to Sage's opinion too @sage.
Also I'll keep following BlueStore's and KStore's progress.
Regards
2015-12-25 14:48 GMT+08:00 Ning Yao :
> Hi, Dong Wu,
>
> 1. As I am currently working on other things, this proposal has been abandoned for
> a
Hi all,
When we read the code, we haven’t found the function that the client can
bind to a specific IP. In Ceph’s configuration, we could only find the parameter
“public network”, but it seems to act on the OSD but not the client.
There is a scenario that the client has two network cards named
On Fri, 25 Dec 2015, Ning Yao wrote:
> Hi, Dong Wu,
>
> 1. As I am currently working on other things, this proposal has been abandoned for
> a long time.
> 2. This is a complicated task, as we need to consider a lot, such as
> (not just writeOp, but also truncate and delete) and we also need to
> consider the
Hi, Dong Wu,
1. As I am currently working on other things, this proposal has been abandoned for
a long time.
2. This is a complicated task, as we need to consider a lot, such as
(not just writeOp, but also truncate and delete) and we also need to
consider the different effects for different backends (Replicated,
Thanks; from this pull request I learned that this issue is not
complete. Is there any new progress on this issue?
2015-12-25 12:30 GMT+08:00 Xinze Chi (信泽) :
> Yeah, this is a good idea for recovery, but not for backfill.
> @YaoNing has opened a pull request about this
>
Hi,
I have a doubt about the pglog: the pglog contains (op, object, version) etc.
When peering, the pglog is used to construct the missing list, and then the
whole object in the missing list is recovered, even if the data that differs
among replicas is less than a whole object (e.g. 4 MB).
Why not add (offset, len) to the pglog? If so, the
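To make the proposal concrete, here is a purely illustrative sketch (the type
and field names are made up and do not match Ceph's actual pg_log_entry_t):

#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Illustrative only: a log entry that records the modified byte range next to
// op/object/version, so recovery could copy just that range instead of the
// whole (e.g. 4 MB) object.
struct LogEntry {
  std::string object;    // object id
  uint64_t    version;   // pg log version of this op
  uint32_t    op;        // write / truncate / delete ...
  uint64_t    offset;    // start of the modified range within the object
  uint64_t    length;    // length of the modified range
};

// Collect the ranges that actually need to be copied for one missing object.
// A real implementation would merge overlaps and handle truncate/delete,
// which is the complication mentioned later in this thread.
std::vector<std::pair<uint64_t, uint64_t>>
ranges_to_recover(const std::vector<LogEntry>& missing_entries) {
  std::vector<std::pair<uint64_t, uint64_t>> ranges;
  for (const auto& e : missing_entries)
    ranges.emplace_back(e.offset, e.length);
  return ranges;
}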
Yeah, this is a good idea for recovery, but not for backfill.
@YaoNing has opened a pull request about this:
https://github.com/ceph/ceph/pull/3837 this year.
2015-12-25 11:16 GMT+08:00 Dong Wu :
> Hi,
> I have a doubt about the pglog: the pglog contains (op, object, version) etc.
> when
Hi,
I triaged the Jenkins-related failures (from #24 to #49):
CentOS 6 not supported:
https://jenkins.ceph.com/job/ceph-pull-requests/26/console
https://jenkins.ceph.com/job/ceph-pull-requests/28/console
https://jenkins.ceph.com/job/ceph-pull-requests/29/console
Hi, cephers, Sage and Haomai
Recently we got stuck on a performance drop problem during recovery. The
scenario is simple:
1. run fio with random write (bs=4k)
2. stop one osd; sleep 10; start the osd
3. the IOPS drops from 6K to about 200
We now know the SSD that the osd is on is the bottleneck when
Hi, Robert
Thanks for your quick reply. Yeah, the number of files really will be a
potential problem. But if it is just a memory problem, we could use more memory
in our OSD
servers.
Also, I tested it on XFS using mdtest; here is the result:
$ sudo ~/wulb/bin/mdtest -I 1 -z 1 -b 1024 -R -F
This is really great. Thanks Loic and Alfredo!
- Ken
On Tue, Dec 22, 2015 at 11:23 AM, Loic Dachary wrote:
> Hi,
>
> The make check bot moved to jenkins.ceph.com today and ran its first
> successful job. You will no longer see comments from the bot: it will update
> the
>In order to reduce the enlarge impact, we want to change the default size of
>the object from 4M to 32k.
>
>We know that will increase the number of objects on one OSD and make
>the remove process take longer.
>
>Hmm, here I want to ask you guys whether there are any other potential problems that will
>Thanks for your quick reply. Yeah, the number of files really will be a
>potential problem. But if it is just a memory problem, we could use more memory in
>our OSD
>servers.
Adding more memory might not be a viable solution:
Ceph does not say how much data is stored in an inode, but the docs say the
On 22/12/2015, Gregory Farnum wrote:
[snip]
> So I think we're stuck with creating a new utime_t and incrementing
> the struct_v on everything that contains them. :/
[snip]
> We'll also then need the full feature bit system to make
> sure we send the old encoding to clients which don't understand
Hi,
We're testing user quotas on Hammer with civetweb and we're running into an
issue with user stats.
If the user/admin removes a bucket using the --force/--purge-objects options with
s3cmd/radosgw-admin respectively, the user stats will continue to reflect the
deleted objects for quota purposes,
On Wed, Dec 23, 2015 at 3:53 PM, Paul Von-Stamwitz
wrote:
> Hi,
>
> We're testing user quotas on Hammer with civetweb and we're running into an
> issue with user stats.
>
> If the user/admin removes a bucket using the --force/--purge-objects options with
>
> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Yehuda Sadeh-Weinraub
> Sent: Wednesday, December 23, 2015 5:02 PM
> To: Paul Von-Stamwitz
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: rgw: sticky user quota data on
Hi,
For the record, the pending issues that prevent the "make check" job
(https://jenkins.ceph.com/job/ceph-pull-requests/) from running can be found at
http://tracker.ceph.com/issues/14172
Cheers
On 23/12/2015 21:05, Alfredo Deza wrote:
> Hi all,
>
> As of yesterday (Tuesday Dec 22nd) we
Hi all,
As of yesterday (Tuesday Dec 22nd) we have the "make check" job
running within our CI infrastructure, working very similarly to the
previous check, with a few differences:
* there are no longer comments added to the pull requests
* notifications of success (or failure) are done inline in
Hi Alfredo,
I see a make check slave currently runs on jessie, and I seem to remember it
ran on trusty slaves before. It's a good thing operating systems are mixed, but
there does not seem to be a clear indication about which operating system is
used. For instance regarding:
Hi Alfredo,
I forgot to mention that the ./run-make-check.sh run currently has no known
false negative on CentOS 7. By that I mean that if run on master 100 times, it
will succeed 100 times. This is useful for debugging the jenkins builds on pull
requests, as we know all problems either come from the
Comrades,
Ceph's victory is assured. It will be the storage system of The Future.
Matt Benjamin has reminded me that if we don't act fast¹ Ceph will be
responsible for destroying the world.
utime_t() uses a 32-bit second count internally. This isn't great, but it's
something we can fix.
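A purely illustrative sketch of what the fix amounts to (the Buffer type and
all names below are placeholders, not Ceph's bufferlist or utime_t): widen the
seconds field and gate the new encoding on a struct_v bump that old peers
never see.

#include <cstdint>
#include <vector>

// Placeholder byte sink, standing in for a real encoder.
struct Buffer {
  std::vector<unsigned char> data;
  void put_u8(uint8_t v)   { data.push_back(v); }
  void put_u32(uint32_t v) { for (int i = 0; i < 4; ++i) data.push_back((v >> (8 * i)) & 0xff); }
  void put_u64(uint64_t v) { for (int i = 0; i < 8; ++i) data.push_back((v >> (8 * i)) & 0xff); }
};

struct wide_time {
  uint64_t sec = 0;     // was a 32-bit count: overflows in 2038
  uint32_t nsec = 0;

  void encode(Buffer& bl, bool peer_has_64bit_time_feature) const {
    if (peer_has_64bit_time_feature) {
      bl.put_u8(2);                             // struct_v = 2: 64-bit seconds
      bl.put_u64(sec);
    } else {
      bl.put_u8(1);                             // struct_v = 1: legacy 32-bit form
      bl.put_u32(static_cast<uint32_t>(sec));   // still truncates after 2038
    }
    bl.put_u32(nsec);
  }
};

The feature-bit gating is the part Greg refers to above: every struct that
embeds the time type needs its struct_v bumped, and old clients must keep
receiving the old encoding.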
On 12/22/2015 01:55 PM, Wido den Hollander wrote:
On 12/21/2015 11:51 PM, Josh Durgin wrote:
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This
On 12/22/2015 05:34 AM, Wido den Hollander wrote:
On 21-12-15 23:51, Josh Durgin wrote:
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would
Hi,
The make check bot moved to jenkins.ceph.com today and ran its first
successful job. You will no longer see comments from the bot: it will update
the github status instead, which is less intrusive.
Cheers
On 21/12/2015 11:13, Loic Dachary wrote:
> Hi,
>
> The make check bot is broken
On 12/21/2015 11:29 PM, Gregory Farnum wrote:
> On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick wrote:
>> I needed something to fetch current config values from all OSDs (sorta
>> the opposite of 'injectargs --key value'), so I hacked it, and then
>> spiffed it up a bit. Does this
On 12/21/2015 11:20 PM, Josh Durgin wrote:
> On 12/21/2015 11:00 AM, Wido den Hollander wrote:
>> My discard code now works, but I wanted to verify. If I understand Jason
>> correctly it would be a matter of figuring out the 'order' of a image
>> and call rbd_discard in a loop until you reach the
On 12/21/2015 11:51 PM, Josh Durgin wrote:
> On 12/21/2015 11:06 AM, Wido den Hollander wrote:
>> Hi,
>>
>> While implementing the buildvolfrom method in libvirt for RBD I'm stuck
>> at some point.
>>
>> $ virsh vol-clone --pool myrbdpool image1 image2
>>
>> This would clone image1 to a new RBD
On Tue, Dec 22, 2015 at 12:10 PM, Adam C. Emerson wrote:
> Comrades,
>
> Ceph's victory is assured. It will be the storage system of The Future.
> Matt Benjamin has reminded me that if we don't act fast¹ Ceph will be
> responsible for destroying the world.
>
> utime_t() uses
> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Dan Mick
> Sent: Tuesday, December 22, 2015 7:00 AM
> To: ceph-devel
> Subject: RFC: tool for applying 'ceph daemon ' command to all OSDs
>
> I needed something to fetch
On 21-12-15 23:51, Josh Durgin wrote:
> On 12/21/2015 11:06 AM, Wido den Hollander wrote:
>> Hi,
>>
>> While implementing the buildvolfrom method in libvirt for RBD I'm stuck
>> at some point.
>>
>> $ virsh vol-clone --pool myrbdpool image1 image2
>>
>> This would clone image1 to a new RBD image
On Sun, Dec 20, 2015 at 7:38 PM, Eric Eastman
wrote:
> On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote:
>> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
>> wrote:
Hi Yan Zheng, Eric Eastman
Similar
On 21-12-2015 01:45, Xinze Chi (信泽) wrote:
Sorry for the delayed reply. Please try
https://github.com/ceph/ceph/commit/ae4a8162eacb606a7f65259c6ac236e144bfef0a.
Tried this one first:
Testsuite summary for ceph 10.0.1
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would clone image1 to a new RBD image called 'image2'.
The code I've written now does:
1. Create a snapshot called image1@libvirt-
2. Protect the
On 12/21/2015 04:50 PM, Josh Durgin wrote:
> On 12/21/2015 07:09 AM, Jason Dillaman wrote:
>> You will have to ensure that your writes are properly aligned with the
>> object size (or object set if fancy striping is used on the RBD
>> volume). In that case, the discard is translated to remove
On 20-12-2015 17:10, Willem Jan Withagen wrote:
Hi,
Most of Ceph is getting there, in a most crude and rough state.
So below is a status update on what is not working for me yet.
Further:
A) unittest_erasure_code_plugin fails because there is a
different error code returned
On 12/21/2015 11:00 AM, Wido den Hollander wrote:
My discard code now works, but I wanted to verify. If I understand Jason
correctly, it would be a matter of figuring out the 'order' of an image
and calling rbd_discard in a loop until you reach the end of the image.
You'd need to get the order via
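A rough sketch of that loop against the librbd C API (assuming an
already-opened image handle; the helper name is made up and error handling is
minimal):

#include <rbd/librbd.h>
#include <cstdint>

// Discard an already-opened image in object-sized steps. rbd_stat() reports
// both the image size and obj_size (1 << order), so each discard stays within
// one object and only the final chunk can be shorter.
int discard_whole_image(rbd_image_t image) {
  rbd_image_info_t info;
  int r = rbd_stat(image, &info, sizeof(info));
  if (r < 0)
    return r;
  for (uint64_t off = 0; off < info.size; off += info.obj_size) {
    uint64_t len = info.obj_size;
    if (off + len > info.size)
      len = info.size - off;              // last chunk may be shorter
    r = rbd_discard(image, off, len);
    if (r < 0)                            // negative errno on error
      return r;
  }
  return 0;
}

Keeping the steps aligned to obj_size also means the rbd_skip_partial_discard
option quoted later in this thread would at most affect the final chunk.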
FYI.
---------- Forwarded message ----------
From: David Casier
Date: 2015-12-21 23:19 GMT+01:00
Subject: FileStore : no wait thread queue_sync
To: Ceph Development , Sage Weil
Cc: Benoît LORIOT ,
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would clone image1 to a new RBD image called 'image2'.
The code I've written now does:
1. Create
>>I just want to know if this is sufficient to wipe an RBD image?
AFAIK, ceph writes zeroes into the rados objects when discard is used.
There is an option to skip writing zeroes if needed:
OPTION(rbd_skip_partial_discard, OPT_BOOL, false) // when trying to discard a
range inside an object, set to
On Mon, 21 Dec 2015, Zhi Zhang wrote:
> Regards,
> Zhi Zhang (David)
> Contact: zhang.david2...@gmail.com
> zhangz.da...@outlook.com
>
>
>
> ---------- Forwarded message ----------
> From: Jaze Lee
> Date: Mon, Dec 21, 2015 at 4:08 PM
> Subject: Re: Client
On Sun, Dec 20, 2015 at 6:38 PM, Eric Eastman
wrote:
> On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote:
>> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
>> wrote:
Hi Yan Zheng, Eric Eastman
Similar
I needed something to fetch current config values from all OSDs (sorta
the opposite of 'injectargs --key value'), so I hacked it, and then
spiffed it up a bit. Does this seem like something that would be useful
in this form in the upstream Ceph, or does anyone have any thoughts on
its design or