On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil wrote:
> On Fri, 9 Jun 2017, Erik McCormick wrote:
>> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote:
>> > On Thu, 8 Jun 2017, Sage Weil wrote:
>> >> Questions:
>> >>
>> >> - Does anybody on the list use a
Dear Cephers,
As seen below, I notice that 12.7% of raw storage is reported as used even
though there are zero pools in the system. These are BlueStore OSDs.
Is this expected, or an anomaly?
Thanks,
Nitin
maruti1:~ # ceph -v
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
maruti1:~ #
If you could install the debug packages and get a gdb backtrace from all
threads, it would be helpful. librbd doesn't utilize any QEMU threads, so
even if librbd were deadlocked, the worst case I would expect would be
your guest OS complaining about hung kernel tasks related to disk IO (since
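A minimal way to capture such a backtrace non-interactively; the process name qemu-system-x86_64 is an assumption and may differ on your distribution:

```shell
# Attach to the running QEMU process and dump backtraces from all threads
gdb --batch -p "$(pidof qemu-system-x86_64)" \
    -ex "thread apply all bt" > qemu-threads.txt
```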
Hi,
I'm using Debian stretch with ceph 12.2.1-1~bpo80+1 and qemu
1:2.8+dfsg-6+deb9u3.
I'm running 3 nodes with 3 monitors and 8 OSDs, all on IPv6.
While testing the cluster, I ran into a strange and severe problem.
On the first node I'm running qemu hosts with librados disk connection to
On Sun, Nov 5, 2017 at 8:19 AM, Brady Deetz wrote:
> My organization has a production cluster primarily used for cephfs upgraded
> from jewel to luminous. We would very much like to have snapshots on that
> filesystem, but understand that there are risks.
>
> What kind of work
An update for the list archive, in case people have similar issues in the
future.
My cluster took about 18 hours after resetting noup for all of the OSDs
to get to the current epoch. In the end there were 5 that took a few
hours longer than the others. Other small issues came up during the
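For the archive, the flag referred to above is toggled like this (a sketch; these are the standard ceph CLI flag commands):

```shell
ceph osd set noup      # newly booted OSDs stay marked down
ceph osd unset noup    # let OSDs be marked up again
```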
Online does that on C14 (https://www.online.net/en/c14).
IIRC, 52 spinning disks per RU, with only 2 disks usable at a time.
There is some custom hardware, though, and it is really designed for cold
storage (as an IO must wait for an idle slot, power on the device, do
the IO, power off the device and
It was a cold and rainy weekend here, so I did some power measurements
of the three types of storage servers we got over a few years of running
Ceph in production, and compared the results:
https://cloudblog.switch.ch/2017/11/06/ceph-storage-server-power-usage/
The last paragraph contains a
On Mon, Nov 6, 2017 at 7:29 AM, Wido den Hollander wrote:
> Hi,
>
> On a Ceph Luminous (12.2.1) environment I'm seeing RGWs stall and about the
> same time I see these errors in the RGW logs:
>
> 2017-11-06 15:50:24.859919 7f8f5fa1a700 0 ERROR: failed to distribute cache
> for
It's pretty much identical to creating a user with radosgw-admin, except
instead of user create, you do subuser create. To create subusers for
user_a, you would do something like this...
# read-only subuser with the name user_a:read-only
radosgw-admin subuser create --uid=user_a --subuser=user_a:read-only --access=read --key-type=s3 --gen-access-key --gen-secret
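For completeness, a read/write subuser could be created the same way; this is a sketch, and the subuser name user_a:read-write is just an illustrative choice:

```shell
# Hypothetical subuser name; --access accepts read, write, readwrite, or full
radosgw-admin subuser create --uid=user_a --subuser=user_a:read-write \
    --access=readwrite --key-type=s3 --gen-access-key --gen-secret
```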
Thanks all
David, if you can explain how to create subusers with keys, I'm happy to try
it and explain it to my boss.
The issue I had with the ACLs: for some reason, when I upload a file to
bucket_a with user_a,
user_b can't read the file even though user_b has read permissions on the
bucket.
And I tried
If you don't mind juggling multiple access/secret keys, you can use
subusers. Just have one user per bucket and create subusers with read,
write, etc. permissions. The objects are all owned by the one user that
created the bucket, and then you pass around the subuser keys to the
various apps that
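If it helps, the keys generated for each subuser can be retrieved later from the owning user's metadata; a sketch, assuming user_a is the bucket-owning user:

```shell
# Subuser S3 keys show up under "keys" in the JSON output
radosgw-admin user info --uid=user_a
```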
Weird but very bad problem with my test cluster 2-3 weeks after upgrading to
Luminous.
All 7 running VMs are corrupted and unbootable: 6 Windows and 1 CentOS 7.
The Windows error is "unmountable boot volume". CentOS 7 will only boot to
emergency mode.
3 VMs that were off during the event work as
On 06/11/2017, nigel davies wrote:
> ok, I am using the Jewel version
>
> when I try setting permissions using s3cmd or a PHP script using s3client
>
> I get the error
>
> encoding="UTF-8"?> InvalidArgument test_bucket
> (truncated...)
> InvalidArgument (client): -
I see this once on both my RGW's today:
rgw01:
2017-11-06 10:36:35.070068 7f4a4f300700 0 ERROR: failed to distribute cache
for default.rgw.meta:.meta:bucket.instance:XXX/YYY:ZZZ.30636654.1::0
2017-11-06 10:36:45.139068 7f4a4f300700 0 ERROR: failed to distribute cache
for
Hi,
On a Ceph Luminous (12.2.1) environment I'm seeing RGWs stall and about the
same time I see these errors in the RGW logs:
2017-11-06 15:50:24.859919 7f8f5fa1a700 0 ERROR: failed to distribute cache
for
On Thu, Oct 5, 2017 at 12:16 AM, Leonardo Vaz wrote:
> On Wed, Oct 04, 2017 at 03:02:09AM -0300, Leonardo Vaz wrote:
>> On Thu, Sep 28, 2017 at 12:08:00AM -0300, Leonardo Vaz wrote:
>> > Hey Cephers,
>> >
>> > This is just a friendly reminder that the next Ceph Developer Monthly
I'll document the resolution here for anyone else who experiences similar
issues.
We determined that the root cause of the long boot time was a combination of
factors involving the ZFS version and tuning, together with how long
filenames are handled.
## 1 ## Insufficient ARC cache
ok, I am using the Jewel version
when I try setting permissions using s3cmd or a PHP script using s3client
I get the error
InvalidArgument test_bucket
(truncated...)
InvalidArgument (client): - InvalidArgument test_bucket tx
a-005a005b91-109f-default 109f-default-default
in
is established; when it gets a command such as "ceph osd pool stat"
or "ceph auth list", it crashes.
Complete log can be found at:
http://files.spacedump.se/ceph03-monerror-20171106-01.txt
Used below settings for logging in ceph.conf at the time:
[mon]
debug mon = 2
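If that level turns out to be too terse, the usual next step is to raise the mon and messenger debug levels before reproducing the crash; the values below are only illustrative:

```ini
[mon]
debug mon = 20
debug ms = 1
```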
"That is not true. "ceph osd crush set-device-class" will fail if the input
OSD has already been assigned a class. Instead you should do "ceph osd
crush rm-device-class" before proceeding."
You are absolutely right, sorry for the confusion!
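For anyone finding this thread later, the sequence being described is, as a sketch (osd.0 is a placeholder id):

```shell
# osd.0 is a placeholder; an existing class must be removed before setting a new one
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0
```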
Caspar
2017-11-04 2:22 GMT+01:00
Issue #22046 created.
With kind regards,
--
Kerio Operator in the Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
From: Orit Wasserman
To: Mark Schouten
I’m not sure how I missed this earlier in the lists but having done a lot
of work on Ceph helm charts, this is of definite interest to us. We’ve been
running various states of Ceph in Docker and Kubernetes (in production
environments) for over a year now.
There is a lot of overlap between the
Hi,
while trying to use the python crush tools on a luminous cluster I get:
crush.ceph.HealthError: expected health overall_status == HEALTH_OK but
got HEALTH_WARN instead
It seems crush 1.0.35 uses the deprecated overall_status element.
Greets,
Stefan
On Mon, Nov 6, 2017 at 8:22 AM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> is there already a kernel available which connects with luminous?
>
> ceph features tells for my kernel clients still release jewel.
require-min-compat-client = jewel is the default for new
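Two commands are handy for checking this; treat the snippet as a sketch:

```shell
ceph osd get-require-min-compat-client   # what the cluster currently requires
ceph features                            # what connected clients actually report
```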