Hi, as I understand it, Pacific and later releases have a performance issue
that does not exist in older releases? Is that why Ceph's new release (Reef)
is delayed this year?
___
Thanks, the postRequest context is what I actually need.
Request.Object.Size is now correct, but MTime is still wrong. Besides that,
I also want to print the owner name of the bucket or object using
Request.User.Tenant, but it doesn't return anything.
Script:
if Request.HTTP.StorageClass == 'COLD' then
There are tags for placement, so any user that has the tag is allowed to access
the placement.
But I can't find how to do the same for a storage class.
Anyone have any idea? Thanks.
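For reference, this is how the script can be loaded into RGW (the file name
script.lua is just an example):

# upload the Lua script into the postRequest context
radosgw-admin script put --infile=script.lua --context=postRequest
# check what is currently installed
radosgw-admin script get --context=postRequest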
___
There is a test bucket whose index and metadata I have removed:
radosgw-admin bi purge --bucket abccc --yes-i-really-mean-it
radosgw-admin metadata rm bucket.instance:abccc:17a4ce99-009e-40f2-a2d2-2afc218ebd9b.425824299.4
Now the index and metadata are gone, but how do I clean up its data? Or is there
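The only approach I can think of is deleting the rados objects that carry the
bucket's marker prefix straight from the data pool; a sketch of what I mean
(pool name and marker prefix are assumptions):

# list and remove the leftover rados objects of the old bucket
rados -p default.rgw.buckets.data ls | \
    grep '^17a4ce99-009e-40f2-a2d2-2afc218ebd9b.425824299.4_' | \
    while read obj; do rados -p default.rgw.buckets.data rm "$obj"; done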
Hi, I have a plan to use a second storage class for cold data in the near
future.
But I have a question: if many old buckets with many objects are set to
transition at the same time, can Ceph handle it by default, or do I have to
configure QoS to avoid the high load?
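For context, this is how I watch the lifecycle progress while testing (a
sketch):

# per-bucket lifecycle status (UNINITIAL / PROCESSING / COMPLETE)
radosgw-admin lc list
# trigger a lifecycle run manually to observe the load
radosgw-admin lc process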
Thanks!
Hi,
I usually install the SRPM and then build from ceph.spec like this:
rpmbuild -bb /root/rpmbuild/SPECS/ceph.spec --without ceph_test_package
But it takes a long time and produces many packages that I don't need. So is
there a way to optimize this build for only the packages I need, for example
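So far the only trimming I know of is disabling optional features via the
spec's bcond switches; the exact switch names depend on the spec version, so
check first:

# see which build conditionals this spec supports
grep '%bcond' /root/rpmbuild/SPECS/ceph.spec
# then disable what is not needed, e.g.
rpmbuild -bb /root/rpmbuild/SPECS/ceph.spec \
    --without ceph_test_package --without cephfs_java --without selinux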
Hi,
In Ceph Radosgw 15.2.17, I get an issue when trying to create a push endpoint
to Kafka.
Here is the push endpoint configuration:
endpoint_args = 'push-endpoint=kafka://abcef:123456@kafka.endpoint:9093&use-ssl=true&ca-location=/etc/ssl/certs/ca.crt'
attributes = {nvp[0]: nvp[1] for nvp in urllib.parse.parse_qsl(endpoint_args, keep_blank_values=True)}
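For reference, the same topic can probably also be created with the AWS CLI,
roughly like this (endpoint URL and topic name are placeholders):

# create the topic against RGW's SNS-compatible API
aws --endpoint-url http://rgw.example.com:8080 sns create-topic --name mytopic \
    --attributes push-endpoint=kafka://abcef:123456@kafka.endpoint:9093,use-ssl=true,ca-location=/etc/ssl/certs/ca.crt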
Hi,
I'm not able to find information about the used size of a storage class in:
- bucket stats
- usage show
- user stats ...
Does Radosgw support it? Thanks
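The closest approximation I have found is pool-level stats, since each storage
class maps to its own data pool (the pool name below is an assumption):

# used size of the pool backing the COLD storage class
rados df | grep cold
ceph df detail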
___
Hi,
I tried to generate a presigned URL using the PHP SDK, but it doesn't work. (I
also tried boto3 with the same configuration and the URL works normally.)
Here is my PHP code:
$s3 = new Aws\S3\S3Client([
    'version' => '2006-03-01',
    'region' => 'us-east-1',
    'signature_version' => 'v4',
    'use_path_style_endpoint' => true,
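For comparison, the same URL can be generated from the CLI (endpoint and
bucket here are placeholders):

# presigned GET URL, valid for 1 hour
aws --endpoint-url https://rgw.example.com s3 presign s3://mybucket/mykey --expires-in 3600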
Thanks for your reply,
Yes, my setup is like the following:
RGWs (port 8084) -> Nginx (80, 443)
So that's why it confused me when :8084 appeared with the domain.
And this behavior only occurs with PHP's generated URL, not with boto3.
___
Thanks for your support,
Using tcpdump and Wireshark, I can see both the boto3 and PHP transactions have
this field:
Host: hn.ss.bfcplatform.vn\r\n
which does not contain 8084, so I'm still confused about how it appeared =)
But anyway, this nginx configuration solved the problem:
proxy_set_header Host $host;
Hi,
You may want to check out this doc:
https://docs.ceph.com/en/quincy/radosgw/config-ref/#lifecycle-settings
As I understand it, in short:
- if there are thousands of buckets, we should increase rgw_lc_max_worker.
- if there are a few buckets that have hundreds of thousands of objects, we
should increase rgw_lc_max_wp_worker.
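A sketch of applying those two settings at runtime (the values are examples,
not recommendations):

# more parallel lifecycle workers across many buckets
ceph config set client.rgw rgw_lc_max_worker 5
# more work-pool threads per worker for buckets with very many objects
ceph config set client.rgw rgw_lc_max_wp_worker 5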
Hi,
I have a Ceph cluster on v16.2.13. I'm not sure why this happens or how to
clean it up:
[2023-07-12 21:23:13 +07] 299B STANDARD null v18 PUT index.txt
[2023-07-12 21:27:54 +07] 299B STANDARD null v17 PUT index.txt
[2023-07-12 21:48:01 +07] 299B STANDARD null v16 PUT index.txt
[2023-0
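If these are just old versions in a versioning-enabled bucket, I assume they
can be deleted explicitly by version ID (endpoint and bucket are
placeholders):

# list all versions of the object
aws --endpoint-url http://rgw.example.com:8080 s3api list-object-versions \
    --bucket mybucket --prefix index.txt
# delete one specific old version
aws --endpoint-url http://rgw.example.com:8080 s3api delete-object \
    --bucket mybucket --key index.txt --version-id <VersionId>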
Ok, I think the only possible way to fix this is to clone that bucket and
remove the old bucket (rename the new bucket if needed)
___
Hi,
I'm using a v16.2.13 Ceph cluster. Yesterday, I added some SSD nodes to
replace HDD nodes. During the process, one SSD node had a different MTU, which
caused some PGs to become inactive for a while. After changing the MTU, all
the PGs are active+clean now. But since then, I can't access some b
Hi,
Currently, I'm trying to create a CNAME record pointing to an S3 website, for
example: s3.example.com => s3.example.com.s3-website.myceph.com. This way, my
subdomain s3. will have HTTPS.
But only HTTP works. If I go to https://s3.example.com, it shows the
metadata of index.html:
This
This issue doesn't occur when using an S3 website domain on AWS. It seems like
it only happens with Ceph.
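One way to narrow it down is to check which certificate (if any) the endpoint
presents for the CNAME name:

# inspect the TLS certificate served for s3.example.com
openssl s_client -connect s3.example.com:443 -servername s3.example.com \
    </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer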
___
Hi, I have a Ceph cluster on v16.2.10.
To use STS Lite, my configuration is like the following:
ceph.conf
...
[client.rgw.ss-rgw-01]
host = ss-rgw-01
rgw_frontends = beast port=8080
rgw_zone=backup-hapu
admin_socket = /var/run/ceph/ceph-client.rgw.ss-rgw-01
rgw_sts_key = qekd3Rd5zXr0adQx
rgw_s3_auth_use_sts = true
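For testing, STS Lite can then be exercised with a plain GetSessionToken call
against the RGW endpoint (credentials and endpoint are placeholders):

# request temporary credentials from the STS endpoint
aws --endpoint-url http://ss-rgw-01:8080 sts get-session-token --duration-seconds 3600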
Hi,
For some reason, I need to recalculate the bucket stats. Is this possible?
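The closest thing I know of is bucket check with --fix, which seems to rebuild
the index accounting (bucket name is a placeholder):

# recompute the bucket stats from the index
radosgw-admin bucket check --bucket=mybucket --check-objects --fix
radosgw-admin bucket stats --bucket=mybucket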
Thanks
___
Hi,
Currently, I'm testing Ceph v17.2.7 with NVMe. When mapping an RBD image to a
physical compute host, "fio bs=4k iodepth=128 randwrite" gives 150k IOPS. In a
VM located on that compute host, the same fio gives ~40k IOPS with
50 %iowait. I know there is a bottleneck, but I'm not sure wh
Hi community,
I'm using Ceph version 16.2.13. I tried to set default_storage_class, but it
seems like it didn't work.
Here are the steps I did:
I already had a storage class named COLD, then I modified the user's
default_storage_class like this:
radosgw-admin user modify --uid testuser --placement-id default-placement --storage-class COLD
Thanks for your reply. You are right, a newly created bucket will now have
"placement_rule": "default-placement/COLD". But then I have another question:
can we specify the default storage class when creating a new bucket? I
found a way to set the placement but not the storage class:
s3.create_bu
Hi community,
I have a user that owns some buckets. I want to create a subuser that has
permission to access only one bucket. What can I do to achieve this?
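The only workaround I can think of is a second user plus a bucket policy on
that one bucket, something like this (user and bucket names are placeholders):

# policy.json: allow user "reader" read-only access to mybucket
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/reader"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
# attach it with the bucket owner's credentials
aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-policy \
    --bucket mybucket --policy file://policy.json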
Thanks
___
Ceph version 14.2.7
The `ceph osd df tree` command takes much longer than usual, but I can't find
out the reason. The monitor node still has plenty of available RAM and CPU
resources. I checked the monitor and mgr logs but nothing seems useful. I
checked an older cluster on version 13.2.10 but Ceph
Hi,
In the traditional way, I can make radosgw listen on IPv6 like this:
# ceph.conf
[...]
rgw_frontends = beast port=8080 endpoint=[2405:f980:0:2:f816:3eff:fed9:8d16]
In Cephadm, it seems specifying an IPv6 endpoint does not work (IPv4 does), and
the rgw services do not become active.
# rgw.yaml
s
Hi, thanks for your reply.
Yes, the ceph.conf configuration works normally.
But I'm learning Cephadm, and I'm trying not to use ceph.conf. Instead, I want
to get used to `ceph config` or the spec file. But I still haven't figured out
how to do that.
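What I am experimenting with now is overriding the frontend via ceph config;
the section name below is an assumption, and cephadm may overwrite it from the
spec:

# set an IPv6 endpoint for a cephadm-managed rgw daemon
ceph config set client.rgw.myrgw rgw_frontends "beast port=8080 endpoint=[2405:f980:0:2:f816:3eff:fed9:8d16]"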
___
Hi,
I'm using Ceph v16.2.13. Using `radosgw-admin bucket list`, I can see there are
2 multipart objects in a bucket. But I cannot see them via boto3
(list_multipart_uploads) or S3 Browser.
What I have tried:
- bucket check: this command can still see the multipart objects, but it does
nothing, n
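For reference, this is how I try to list and abort them from the S3 side
(endpoint and bucket are placeholders):

# list in-progress multipart uploads as the bucket owner
aws --endpoint-url http://rgw.example.com:8080 s3api list-multipart-uploads --bucket mybucket
# abort one by key and UploadId
aws --endpoint-url http://rgw.example.com:8080 s3api abort-multipart-upload \
    --bucket mybucket --key mykey --upload-id <UploadId>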
Hi,
I created a secondary zone whose placement maps to an EC pool. The 2 zones are
under the same zonegroup in the same cluster. But the secondary zone's EC pool
did not receive any data (only user info is synced). So how do I make it
work? Thanks
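Here is what I check so far on the secondary zone (the source zone name is a
placeholder):

# overall replication state
radosgw-admin sync status
# per-shard detail of the data sync from the master zone
radosgw-admin data sync status --source-zone=primary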
___
Thanks, I just found the issue:
https://tracker.ceph.com/issues/43317
___
Hi community,
I tried "radosgw-admin sync group" but it seems to only work between zones
within the same zonegroup. So does radosgw support sync a bucket between
regions/zonegroups? Or could it be done in any other way?
Thanks
___
Hi,
Currently I'm testing the replication ability between 2 zones in the same
cluster. But only metadata is synced, not the data. I checked the endpoints,
system_key, ... all good. If anyone has any idea, please guide me through
resolving this situation.
Thanks
The radosgw shows this log on both sides, primary
Hi,
I know a third option: create a secondary zone mapping to a replicated
pool. The data will be replicated from the primary zone; after that, switch
the master zone and the migration is done, with zero downtime.
This is possible in theory, but I cannot make it work when trying to set up 2
zon
Hi,
Thanks for your reply. I'm not sure what you mean by full sync. I tried
data/metadata sync init/run but nothing happened. I tried with a fresh cluster
but still can't make it work. Here is the sync status:
realm 26c6fd00-56f3-473f-8d9b-e583efb58a2b (multi-region)
zonegroup
Yes, that is the strange part. It says "data is caught up with source",
but in the secondary data pool (hn2.rgw.buckets.data) there is nothing.
The data that was uploaded before the multisite setup is not replicated.
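For the record, the full-sync sequence I am trying on the secondary is (zone
names are from this thread):

# mark data sync for a full re-run from the primary zone
radosgw-admin data sync init --source-zone=hn1
# restart the radosgw instances so the full sync actually starts
systemctl restart ceph-radosgw@<instance>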
___
Yes, it only syncs newly-uploaded objects.
___
I'm very grateful for your detailed guide. I followed your commands and also
uploaded some objects to hn1 before creating hn2. After ensuring realm pull on
both endpoints outputs the same result, I needed to upload one more file to
hn1 to trigger the replication of both old and new objects.
This is a b
Hi,
My lab has two Ceph clusters set up as 2 zonegroups in a realm, Ceph version
16.2.13. As I understand it, if a bucket is created on the secondary
zonegroup, the master zonegroup only knows about its metadata, not its data.
So after the resharding command is run on the master (to avoid inconsistent
metadata), the bucket is s
Hi community,
I'm using Ceph v18.2.4. Each time I commit a period, all of my radosgw
instances pause for about 30s, which is quite long and makes my object service
go down during that time.
How do I avoid the downtime? Is there a better/safer practice?
Thanks
Here are the radosgw logs from each time I did a pe
Hi,
I'm using Ceph v18.2.4, non-cephadm. Over time, my RGW cluster receives more
requests, and when it reaches a certain threshold, I manually scale out more
radosgw instances to handle the increased traffic.
- Is this a normal practice?
- Are there any tunings I can do with RGW to make it han
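So far the only per-instance knobs I have found are these (values are
examples, not recommendations):

# more worker threads per radosgw
ceph config set client.rgw rgw_thread_pool_size 512
# cap on concurrently processed requests
ceph config set client.rgw rgw_max_concurrent_requests 2048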