- Original Message -
> I'd like your opinion, guys, regarding two features implemented in an attempt
> to greatly reduce the number of memory allocations without major surgery in
> the code.
>
> The features are:
> 1. A custom STL allocator, which allocates the first N items from the STL
> container itself
Hi Sage,
Thanks for your quick response. Javen and I, formerly ZFS developers, are
currently focusing on how to leverage some of the ZFS ideas to improve the
Ceph backend performance in userspace.
Based on your encouraging reply, we came up with 2 schemes to continue our
future work:
1. the
Thanks Sage for your reply.
I am not sure I understand the challenges you mentioned about
backfill/scrub.
I will investigate the code and let you know if we can overcome the
challenge by easy means.
Our rough ideas for ZFSStore are:
1. encapsulate the dnode object as an onode and add onode attribut
On Thu, 7 Jan 2016, Javen Wu wrote:
> Hi Sage,
>
> Sorry to bother you. I am not sure if it is appropriate to send email to you
> directly, but I cannot find any useful information to address my confusion
> from Internet. Hope you can help me.
>
> Occasionally, I heard that you are going to start
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
The last recording I'm seeing is for 10/07/15. Can we get the newer ones?
Thanks,
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jan 6, 2016 at 8:43 AM, Mark Nelson wrote:
> 8AM PST as
On 04-01-16 16:38, Jason Dillaman wrote:
> Short term, assuming there wouldn't be an objection from the libvirt
> community, I think spawning a thread pool and concurrently executing several
> rbd_stat calls would be the easiest and cleanest solution. I
> wouldn't suggest trying
On Tue, 5 Jan 2016, Guang Yang wrote:
> On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> > On Mon, 4 Jan 2016, Guang Yang wrote:
> >> Hi Cephers,
> >> Happy New Year! I've got a question regarding the long PG peering.
> >>
> >> Over the last several days I have been looking into the *long peering*
On 6-1-2016 08:51, Mykola Golub wrote:
>
> Are you able to reproduce this problem manually? I.e. in src dir, start the
> cluster using vstart.sh:
>
> ./vstart.sh -n
>
> Check it is running:
>
> ./ceph -s
>
> Repeat the test:
>
> truncate -s 0 empty_map.txt
> ./crushtool -c empty_map.txt -o em
On 6-1-2016 08:51, Mykola Golub wrote:
On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote:
Hi,
Can somebody try to help me and explain why
in test: Func: test/mon/osd-crash
Func: TEST_crush_reject_empty started
Fails with a python error which sort of startles me:
test/mon/osd
On 5-1-2016 19:23, Gregory Farnum wrote:
On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote:
Hi,
Can somebody try to help me and explain why
in test: Func: test/mon/osd-crash
Func: TEST_crush_reject_empty started
Fails with a python error which sort of startles me:
test/mon/osd-crush
On Mon, Dec 28, 2015 at 05:53:04PM +0100, Willem Jan Withagen wrote:
> Hi,
>
> Can somebody try to help me and explain why
>
> in test: Func: test/mon/osd-crash
> Func: TEST_crush_reject_empty started
>
> Fails with a python error which sort of startles me:
> test/mon/osd-crush.sh:227: TEST_crus
On 01/05/2016 07:55 PM, Samuel Just wrote:
> http://tracker.ceph.com/issues/14236
>
> New hammer mon failure in the nightlies (missing a map apparently?),
> can you take a look?
> -Sam
Will do.
-Joao
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a mess
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> On Mon, 4 Jan 2016, Guang Yang wrote:
>> Hi Cephers,
>> Happy New Year! I've got a question regarding the long PG peering.
>>
>> Over the last several days I have been looking into the *long peering*
>> problem when we start an OSD / OSD host, what I
It seems that the metadata didn't get updated.
I just tried it out and got the right version with no issues. Hopefully
*this* time it works for you.
Sorry for all the trouble.
On Tue, Jan 5, 2016 at 3:21 PM, Derek Yarnell wrote:
> Hi Alfredo,
>
> I am still having a bit of trouble though with what
Hi Alfredo,
I am still having a bit of trouble though with what looks like the
1.5.31 release. With a `yum update ceph-deploy` I get the following
even after a full `yum clean all`.
http://ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.31-0.noarch.rpm:
[Errno -1] Package does not match intended
On Mon, Dec 28, 2015 at 8:53 AM, Willem Jan Withagen wrote:
> Hi,
>
> Can somebody try to help me and explain why
>
> in test: Func: test/mon/osd-crash
> Func: TEST_crush_reject_empty started
>
> Fails with a python error which sort of startles me:
> test/mon/osd-crush.sh:227: TEST_crush_reject_em
It looks like this was only for ceph-deploy in Hammer. I verified that
this wasn't the case in e.g. Infernalis
I have ensured that the ceph-deploy packages in hammer are in fact
signed and coming from our builds.
Thanks again for reporting this!
On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza wro
On Tue, Jan 5, 2016 at 9:56 AM, Deneau, Tom wrote:
> Having trouble getting a reply from c...@cbt.com so trying ceph-devel list...
>
> To get familiar with CBT, I first wanted to use it on an existing cluster.
> (i.e., not have CBT do any cluster setup).
>
> Is there a .yaml example that illustrat
This is odd. We are signing all packages before publishing them on the
repository. These ceph-deploy releases are following a new release
process, so I will
have to investigate where the disconnect is.
Thanks for letting us know.
On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell wrote:
> It looks li
On Mon, 4 Jan 2016, Guang Yang wrote:
> Hi Cephers,
> Happy New Year! I've got a question regarding the long PG peering.
>
> Over the last several days I have been looking into the *long peering*
> problem when we start an OSD / OSD host, what I observed was that the
> two peering working threads were
We need every OSDMap persisted before persisting later ones because we
rely on there being no holes for a bunch of reasons.
The deletion transactions are more interesting. They're not part of the
boot process; these are deletions resulting from merging in a log from
a peer which logically removed an
Thanks Sam for the confirmation.
Thanks,
Guang
On Mon, Jan 4, 2016 at 3:59 PM, Samuel Just wrote:
> IIRC, you are running giant. I think that's the log rotate dangling
> fd bug (not fixed in giant since giant is eol). Fixed upstream
> 8778ab3a1ced7fab07662248af0c773df759653d, firefly backport
IIRC, you are running giant. I think that's the log rotate dangling
fd bug (not fixed in giant since giant is eol). Fixed upstream
8778ab3a1ced7fab07662248af0c773df759653d, firefly backport is
b8e3f6e190809febf80af66415862e7c7e415214.
-Sam
On Mon, Jan 4, 2016 at 3:37 PM, Guang Yang wrote:
> Hi
On Mon, Jan 4, 2016 at 10:51 AM, Wukongming wrote:
> Hi, Ilya,
>
> It is an old problem.
> When you say "when you issue a reboot, daemons get killed and the kernel
> client ends up waiting for the them to come back, because of outstanding
> writes issued by umount called by systemd (or whatever)
Short term, assuming there wouldn't be an objection from the libvirt community,
I think spawning a thread pool and concurrently executing several rbd_stat
calls would be the easiest and cleanest solution. I wouldn't
suggest trying to roll your own solution for retrieving image size
On Tue, Dec 29, 2015 at 4:55 AM, Fengguang Gong wrote:
> hi,
> We created one million empty files through filebench; here is the test env:
> MDS: one MDS
> MON: one MON
> OSD: two OSDs, each with one Intel P3700; data on OSD with 2x replica
> Network: all nodes are connected through 10 gigabit netwo
It would certainly help those with less knowledge about networking in
Linux, though I do not know how many people using Ceph are in this
category. Sage and the others here may have a better idea about its
feasibility.
But I usually use rule-* and route-* (in CentOS) files; they work with
networ
On Tue, 29 Dec 2015, Dong Wu wrote:
> If we add osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0
> [7, 2, 3], is it similar to the example above?
> Do we still install a pg_temp entry mapping the PG back to [1, 2, 3], then
> backfill happens to 7, normal io writes to [1, 2, 3], if io to the
On 12/28/2015 07:47 PM, Sage Weil wrote:
On Fri, 25 Dec 2015, ?? wrote:
Hi all,
When we read the code, we haven't found the function that lets the client
bind a specific IP. In Ceph's configuration, we could only find the parameter
"public network", but it seems to act on the OSD but not the
Hi,
resending my letter.
Thank you for the attention.
Best regards,
Vladislav Odintsov
From: Sage Weil
Sent: Monday, December 28, 2015 19:49
To: Odintsov Vladislav
Subject: Re: CEPH build
Can you resend this to ceph
If we add osd.7 and 7 becomes the primary: pg1.0 [1, 2, 3] --> pg1.0
[7, 2, 3], is it similar to the example above?
Do we still install a pg_temp entry mapping the PG back to [1, 2, 3], then
backfill happens to 7, normal io writes to [1, 2, 3], if io to the
portion of the PG that has already been back
On Mon, 28 Dec 2015, Zhiqiang Wang wrote:
> 2015-12-27 20:48 GMT+08:00 Dong Wu :
> > Hi,
> > When add osd or remove osd, ceph will backfill to rebalance data.
> > eg:
> > - pg1.0[1, 2, 3]
> > - add an osd(eg. osd.7)
> > - ceph start backfill, then pg1.0 osd set changes to [1, 2, 7]
> > - if [a,
Hi,
Can somebody try to help me and explain why
in test: Func: test/mon/osd-crash
Func: TEST_crush_reject_empty started
Fails with a python error which sort of startles me:
test/mon/osd-crush.sh:227: TEST_crush_reject_empty: local
empty_map=testdir/osd-crush/empty_map
test/mon/osd-crush.sh:2
On Fri, 25 Dec 2015, ?? wrote:
> Hi all,
> When we read the code, we haven't found the function that lets the client
> bind a specific IP. In Ceph's configuration, we could only find the parameter
> "public network", but it seems to act on the OSD but not the client.
> There is a scenario tha
On Fri, 25 Dec 2015, Ning Yao wrote:
> Hi, Dong Wu,
>
> 1. As I am currently working on other things, this proposal has been
> abandoned for a long time.
> 2. This is a complicated task, as we need to consider a lot of cases (not
> just writeOp, but also truncate and delete) and also need to
> consider the di
Thank you for your reply. I am looking forward to Sage's opinion too, @sage.
Also I'll keep up with BlueStore's and KStore's progress.
Regards
2015-12-25 14:48 GMT+08:00 Ning Yao :
> Hi, Dong Wu,
>
> 1. As I am currently working on other things, this proposal has been
> abandoned for a long time.
> 2. This is
Hi, Dong Wu,
1. As I am currently working on other things, this proposal has been
abandoned for a long time.
2. This is a complicated task, as we need to consider a lot of cases (not
just writeOp, but also truncate and delete) and also need to
consider the different effects on different backends (Replicated,
Thanks. From this pull request I learned that this issue is not
complete; is there any new progress on it?
2015-12-25 12:30 GMT+08:00 Xinze Chi (信泽) :
> Yeah, this is a good idea for recovery, but not for backfill.
> @YaoNing opened a pull request about this
> https://github.com/ceph/ceph/pul
Yeah, this is a good idea for recovery, but not for backfill.
@YaoNing opened a pull request about this:
https://github.com/ceph/ceph/pull/3837 this year.
2015-12-25 11:16 GMT+08:00 Dong Wu :
> Hi,
> I have doubt about pglog, the pglog contains (op,object,version) etc.
> when peering, use pglog to cons
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Yehuda Sadeh-Weinraub
> Sent: Wednesday, December 23, 2015 5:02 PM
> To: Paul Von-Stamwitz
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: rgw: stic
On Wed, Dec 23, 2015 at 3:53 PM, Paul Von-Stamwitz
wrote:
> Hi,
>
> We're testing user quotas on Hammer with civetweb and we're running into an
> issue with user stats.
>
> If the user/admin removes a bucket using -force/-purge-objects options with
> s3cmd/radosgw-admin respectively, the user st
Hi,
For the record the pending issues that prevent the "make check" job
(https://jenkins.ceph.com/job/ceph-pull-requests/) from running can be found at
http://tracker.ceph.com/issues/14172
Cheers
On 23/12/2015 21:05, Alfredo Deza wrote:
> Hi all,
>
> As of yesterday (Tuesday Dec 22nd) we have
On 22/12/2015, Gregory Farnum wrote:
[snip]
> So I think we're stuck with creating a new utime_t and incrementing
> the struct_v on everything that contains them. :/
[snip]
> We'll also then need the full feature bit system to make
> sure we send the old encoding to clients which don't understand t
This is really great. Thanks Loic and Alfredo!
- Ken
On Tue, Dec 22, 2015 at 11:23 AM, Loic Dachary wrote:
> Hi,
>
> The make check bot moved to jenkins.ceph.com today and ran its first
> successful job. You will no longer see comments from the bot: it will update
> the github status instead
>Thanks for your quick reply. Yeah, the number of files really will be the
>potential problem. But if it's just a memory problem, we could use more
>memory in our OSD servers.
Adding more memory might not be a viable solution:
Ceph does not say how much data is stored in an inode, but the docs say the
xa
--
hzwulibin
2015-12-23
-
From: "Van Leeuwen, Robert"
Sent: 2015-12-23 20:57
To: hzwulibin, ceph-devel, ceph-users
Cc:
Subject: Re: [ceph-users] use object size of 32k rather than 4M
>In order to reduc
>In order to reduce the enlarge impact, we want to change the default size of
>the object from 4M to 32k.
>
>We know that will increase the number of the objects of one OSD and make
>remove process become longer.
>
>Hmm, here i want to ask your guys is there any other potential problems will
>3
Hi,
I triaged the jenkins related failures (from #24 to #49):
CentOS 6 not supported:
https://jenkins.ceph.com/job/ceph-pull-requests/26/console
https://jenkins.ceph.com/job/ceph-pull-requests/28/console
https://jenkins.ceph.com/job/ceph-pull-requests/29/console
https://jenkins.ceph.com/
On Sun, Dec 20, 2015 at 7:38 PM, Eric Eastman
wrote:
> On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote:
>> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
>> wrote:
Hi Yan Zheng, Eric Eastman
Similar bug was reported in f2fs, btrfs, it does affect 4.4-rc4, the fixing
patch w
On Tue, Dec 22, 2015 at 12:10 PM, Adam C. Emerson wrote:
> Comrades,
>
> Ceph's victory is assured. It will be the storage system of The Future.
> Matt Benjamin has reminded me that if we don't act fast¹ Ceph will be
> responsible for destroying the world.
>
> utime_t() uses a 32-bit second count
On 12/22/2015 01:55 PM, Wido den Hollander wrote:
On 12/21/2015 11:51 PM, Josh Durgin wrote:
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This woul
On 12/22/2015 05:34 AM, Wido den Hollander wrote:
On 21-12-15 23:51, Josh Durgin wrote:
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would c
On 12/22/2015 12:21 AM, igor.podo...@ts.fujitsu.com wrote:
>> -Original Message-
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of Dan Mick
>> Sent: Tuesday, December 22, 2015 7:00 AM
>> To: ceph-devel
>> Subject: RFC: tool for applying 'ce
On 12/21/2015 11:29 PM, Gregory Farnum wrote:
> On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick wrote:
>> I needed something to fetch current config values from all OSDs (sorta
>> the opposite of 'injectargs --key value), so I hacked it, and then
>> spiffed it up a bit. Does this seem like something th
On 12/21/2015 11:51 PM, Josh Durgin wrote:
> On 12/21/2015 11:06 AM, Wido den Hollander wrote:
>> Hi,
>>
>> While implementing the buildvolfrom method in libvirt for RBD I'm stuck
>> at some point.
>>
>> $ virsh vol-clone --pool myrbdpool image1 image2
>>
>> This would clone image1 to a new RBD ima
On 12/21/2015 11:20 PM, Josh Durgin wrote:
> On 12/21/2015 11:00 AM, Wido den Hollander wrote:
>> My discard code now works, but I wanted to verify. If I understand Jason
>> correctly it would be a matter of figuring out the 'order' of a image
>> and call rbd_discard in a loop until you reach the e
Hi,
The make check bot moved to jenkins.ceph.com today and ran its first
successful job. You will no longer see comments from the bot: it will update
the github status instead, which is less intrusive.
Cheers
On 21/12/2015 11:13, Loic Dachary wrote:
> Hi,
>
> The make check bot is broken in
On 21-12-15 23:51, Josh Durgin wrote:
> On 12/21/2015 11:06 AM, Wido den Hollander wrote:
>> Hi,
>>
>> While implementing the buildvolfrom method in libvirt for RBD I'm stuck
>> at some point.
>>
>> $ virsh vol-clone --pool myrbdpool image1 image2
>>
>> This would clone image1 to a new RBD image
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Dan Mick
> Sent: Tuesday, December 22, 2015 7:00 AM
> To: ceph-devel
> Subject: RFC: tool for applying 'ceph daemon ' command to all OSDs
>
> I needed something to fetch
On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick wrote:
> I needed something to fetch current config values from all OSDs (sorta
> the opposite of 'injectargs --key value), so I hacked it, and then
> spiffed it up a bit. Does this seem like something that would be useful
> in this form in the upstream C
On 20-12-2015 17:10, Willem Jan Withagen wrote:
Hi,
Most of Ceph is getting there, in a crude and rough state.
So below is a status update on what is not working for me yet.
Further:
A) unittest_erasure_code_plugin fails because a
different error code is returned w
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would clone image1 to a new RBD image called 'image2'.
The code I've written now does:
1. Create
On 12/21/2015 11:00 AM, Wido den Hollander wrote:
My discard code now works, but I wanted to verify. If I understand Jason
correctly it would be a matter of figuring out the 'order' of a image
and call rbd_discard in a loop until you reach the end of the image.
You'd need to get the order via r
On 21-12-2015 01:45, Xinze Chi (信泽) wrote:
Sorry for the delayed reply. Please try
https://github.com/ceph/ceph/commit/ae4a8162eacb606a7f65259c6ac236e144bfef0a.
Tried this one first:
Testsuite summary for ceph 10.0.1
On 12/21/2015 04:50 PM, Josh Durgin wrote:
> On 12/21/2015 07:09 AM, Jason Dillaman wrote:
>> You will have to ensure that your writes are properly aligned with the
>> object size (or object set if fancy striping is used on the RBD
>> volume). In that case, the discard is translated to remove oper
On 12/21/2015 07:09 AM, Jason Dillaman wrote:
You will have to ensure that your writes are properly aligned with the object
size (or object set if fancy striping is used on the RBD volume). In that
case, the discard is translated to remove operations on each individual backing
object. The on
> To: "Wido den Hollander"
> Cc: "ceph-devel"
> Sent: Monday, December 21, 2015 9:25:15 AM
> Subject: Re: Is rbd_discard enough to wipe an RBD image?
>
> >>I just want to know if this is sufficient to wipe a RBD image?
>
> AFAIK, ceph write zeroes in
On Sun, Dec 20, 2015 at 6:38 PM, Eric Eastman
wrote:
> On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote:
>> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
>> wrote:
Hi Yan Zheng, Eric Eastman
Similar bug was reported in f2fs, btrfs, it does affect 4.4-rc4, the fixing
patch w
>>I just want to know if this is sufficient to wipe a RBD image?
AFAIK, Ceph writes zeroes in the RADOS objects when discard is used.
There is an option to skip writing zeroes if needed:
OPTION(rbd_skip_partial_discard, OPT_BOOL, false) // when trying to discard a
range inside an object, set to tr
On Mon, 21 Dec 2015, Zhi Zhang wrote:
> Regards,
> Zhi Zhang (David)
> Contact: zhang.david2...@gmail.com
> zhangz.da...@outlook.com
>
>
>
> -- Forwarded message --
> From: Jaze Lee
> Date: Mon, Dec 21, 2015 at 4:08 PM
> Subject
On Wed, Dec 16, 2015 at 11:33 PM, Sage Weil wrote:
> On Wed, 16 Dec 2015, Adam Kupczyk wrote:
>> On Tue, Dec 15, 2015 at 3:23 PM, Lars Marowsky-Bree wrote:
>> > On 2015-12-14T14:17:08, Radoslaw Zarzynski wrote:
>> >
>> > Hi all,
>> >
>> > great to see this revived.
>> >
>> > However, I have come
> On Dec 19, 2015, at 10:54, Minfei Huang wrote:
>
> The variable pagep will still get an invalid page pointer, although ceph
> fails in function ceph_update_writeable_page.
>
> To fix this issue, assign the page to pagep only when there is no failure
> in function ceph_update_writeable_page.
>
> S
Which msg type and ceph version are you using?
When we used 0.94.1 with async msg, we encountered a similar issue.
A client that had just started would try to connect to a down monitor,
and this connection would hang there. This is because the previous async
msg used blocking connection mode.
After we ba
On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote:
> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
> wrote:
>>> Hi Yan Zheng, Eric Eastman
>>>
>>> Similar bug was reported in f2fs, btrfs, it does affect 4.4-rc4, the fixing
>>> patch was merged into 4.4-rc5, dfd01f026058 ("sched/wait: Fix the sig
Sorry for the delayed reply. Please try
https://github.com/ceph/ceph/commit/ae4a8162eacb606a7f65259c6ac236e144bfef0a.
2015-12-21 0:10 GMT+08:00 Willem Jan Withagen :
> Hi,
>
> Most of Ceph is getting there, in a crude and rough state.
> So below is a status update on what is not worki
> I've been working with Sam Just today and we would like to get some
> performance data around client I/O and recovery I/O to test the new Op
> queue I've been working on. I know that we can just set and OSD out/in
> and such, but there seems like there could be a lot of variation in
> the results
Hi Mike,
On the ESXi server both Header Digest and Data Digest are set to Prohibited.
Eric
On Fri, Dec 18, 2015 at 2:54 PM, Mike Christie wrote:
> Eric,
>
> Do you have iSCSI data digests on?
>
> On 12/15/2015 12:08 AM, Eric Eastman wrote:
>> I am testing Linux Target SCSI, LIO, with a Ceph Fil
Eric,
Do you have iSCSI data digests on?
On 12/15/2015 12:08 AM, Eric Eastman wrote:
> I am testing Linux Target SCSI, LIO, with a Ceph File System backstore
> and I am seeing this error on my LIO gateway. I am using Ceph v9.2.0
> on a 4.4rc4 Kernel, on Trusty, using a kernel mounted Ceph File
>
Nevermind, got it:
CHANGES WITH 214:
* As an experimental feature, udev now tries to lock the
disk device node (flock(LOCK_SH|LOCK_NB)) while it
executes events for the disk or any of its partitions.
Applications like partitioning programs can lock the
>> AFAICT udevd started doing this in v214.
Do you have a specific commit / changelog entry in mind? I'd like to
reference it in the commit message fixing the problem.
Thanks !
--
Loïc Dachary, Artisan Logiciel Libre
On 18/12/2015 16:31, Ilya Dryomov wrote:
> On Fri, Dec 18, 2015 at 1:38 PM, Loic Dachary wrote:
>> Hi Ilya,
>>
>> It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
>> them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way
On Fri, Dec 18, 2015 at 1:38 PM, Loic Dachary wrote:
> Hi Ilya,
>
> It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
> them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way partprobe
> does. It is used intensively by ceph-disk and inev
Hi Ilya,
It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way partprobe
does. It is used intensively by ceph-disk and inevitably leads to races where a
device temporarily disappears. The same command
ify it.
>> :)
>>
>> Thanks,
>>
>>> -Original Message-
>>> From: ceph-devel-ow...@vger.kernel.org
>>> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of
>>> Yan, Zheng
>>> Sent: Friday, December 18, 2015 12:05 PM
>>&
kernel.org
>> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of
>> Yan, Zheng
>> Sent: Friday, December 18, 2015 12:05 PM
>> To: Eric Eastman
>> Cc: Ceph Development
>> Subject: Re: Issue with Ceph File System and LIO
>>
>> On Fri, Dec 18, 2015
On Fri, Dec 18, 2015 at 3:49 AM, Eric Eastman
wrote:
> With cephfs.patch and cephfs1.patch applied and I am now seeing:
>
> [Thu Dec 17 14:27:59 2015] [ cut here ]
> [Thu Dec 17 14:27:59 2015] WARNING: CPU: 0 PID: 3036 at
> fs/ceph/addr.c:1171 ceph_write_begin+0xfb/0x120 [c
On 17/12/15 21:27, Sage Weil wrote:
On Thu, 17 Dec 2015, Jaze Lee wrote:
Hello cephers:
In our test, there are three monitors. We find that a client running a ceph
command is slow when the leader mon is down. Even after a long time, a
client's first ceph command run is still slow.
From strace, w
The script handles UTF-8 fine, the copy/paste is at fault here ;-)
On 24/11/2015 07:59, piotr.da...@ts.fujitsu.com wrote:
>> -Original Message-
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of Sage Weil
>> Sent: Monday, November 23, 2015
On 17/12/2015 16:49, Ilya Dryomov wrote:
> On Thu, Dec 17, 2015 at 1:19 PM, Loic Dachary wrote:
>> Hi Ilya,
>>
>> I'm seeing a partprobe failure right after a disk was zapped with sgdisk
>> --clear --mbrtogpt -- /dev/vdb:
>>
>> partprobe /dev/vdb failed : Error: Partition(s) 1 on /dev/vdb have
On Thu, Dec 17, 2015 at 2:44 PM, Derek Yarnell wrote:
> On 12/17/15 3:15 PM, Yehuda Sadeh-Weinraub wrote:
>>
>> Right. Reading the code again:
>>
>> Try:
>> GET /admin/metadata/user&key=cephtest
>
> Thanks, this is very helpful and works; I was also able to get the PUT
> working. The only question
On 12/17/15 3:15 PM, Yehuda Sadeh-Weinraub wrote:
>
> Right. Reading the code again:
>
> Try:
> GET /admin/metadata/user&key=cephtest
Thanks, this is very helpful and works; I was also able to get the PUT
working. The only question: is it expected to return a 204 No Content?
2015-12-17 17
On 12/17/15 2:36 PM, Yehuda Sadeh-Weinraub wrote:
> Try 'section=user&key=cephtests'
Doesn't seem to work either.
# radosgw-admin metadata get user:cephtest
{
"key": "user:cephtest",
"ver": {
"tag": "_dhpzgdOjqJI-OsR1MsYV5-p",
"ver": 1
},
"mtime": 1450378246,
"
On Thu, Dec 17, 2015 at 12:06 PM, Derek Yarnell wrote:
> On 12/17/15 2:36 PM, Yehuda Sadeh-Weinraub wrote:
>> Try 'section=user&key=cephtests'
>
> Doesn't seem to work either.
>
> # radosgw-admin metadata get user:cephtest
> {
> "key": "user:cephtest",
> "ver": {
> "tag": "_dhpzgdO
With cephfs.patch and cephfs1.patch applied and I am now seeing:
[Thu Dec 17 14:27:59 2015] [ cut here ]
[Thu Dec 17 14:27:59 2015] WARNING: CPU: 0 PID: 3036 at
fs/ceph/addr.c:1171 ceph_write_begin+0xfb/0x120 [ceph]()
[Thu Dec 17 14:27:59 2015] Modules linked in: iscsi_targ
On Thu, Dec 17, 2015 at 11:05 AM, Derek Yarnell wrote:
> On 12/17/15 1:09 PM, Yehuda Sadeh-Weinraub wrote:
>>> Bug? Design?
>>
>> Somewhat a bug. The whole subusers that use s3 was unintentional, so
>> when creating the subuser api, we didn't think of needing the access
>> key. For some reason we
On 12/17/15 1:09 PM, Yehuda Sadeh-Weinraub wrote:
>> Bug? Design?
>
> Somewhat a bug. The whole subusers-using-S3 thing was unintentional, so
> when creating the subuser API, we didn't think of needing the access
> key. For some reason we do get the key type. Can you open a ceph
> tracker issue for t
On Thu, Dec 17, 2015 at 9:04 AM, Derek Yarnell wrote:
> I am having an issue with the 'radosgw-admin subuser create' command
> doing something different than the '/{admin}/user?subuser&format=json'
> admin API. I want to leverage subusers in S3 which looks to be possible
> in my testing for bit m
his).
>>>
>>> It looks like a side effect of a previous partprobe command, the only
>>> command I can think of that removes / re-adds devices. I thought calling
>>> udevadm settle after running partprobe would be enough to ensure
>>> partprobe completed (and sin