On Mon, Jun 29, 2020 at 17:27, Liam Monahan wrote:
>
> For example, here is a bucket that all of a sudden reports that it has
> 18446744073709551615 objects! The actual count should be around 20,000.
>
> "rgw.none": {
> "size": 0,
> "size_actual": 0,
>
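That number is 2^64 - 1, i.e. an unsigned 64-bit counter that was decremented below zero. You can reproduce the wraparound with plain shell arithmetic, nothing ceph-specific:

printf '%u\n' -1
# prints 18446744073709551615 on a 64-bit system

If the bucket index is otherwise intact, 'radosgw-admin bucket check --bucket=<name> --fix' may recompute the stats; treat that as a thing to try, not a guarantee.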
Anthony asked about the 'use case'. Well, I haven't gone into details
because I worried it wouldn't help much. From a 'ceph' perspective, the
sandbox layout goes like this: 4 pretty much identical old servers,
each with 6 drives, and a smaller server just running a mon to break
ties. Usual fron
> Thanks for the thinking. By 'traffic' I mean: when a user space rbd
> write has as a destination three replica osds in the same chassis
eek.
> does the whole write get shipped out to the mon and then back
Mons are control-plane only.
> All the 'usual suspects' like lossy ethernets and mis
I've not been replying to the list, apologies.
> just the write metadata to the mon, with the actual write data
> content not having to cross a physical ethernet cable but directly to
> the chassis-local osds via the 'virtual' internal switch?
This is my understanding as well, yes. I've not explored
Thanks for the thinking. By 'traffic' I mean: when a user space rbd
write has as a destination three replica osds in the same chassis, does
the whole write get shipped out to the mon and then back, or just the
write metadata to the mon, with the actual write data content not
having to cross a physical ethernet cable but directly to the
chassis-local osds via the 'virtual' internal switch?
What does “traffic” mean? Reads? Writes will have to hit the net regardless
of any machinations.
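For what it's worth, my understanding of the data path: the client pulls the osdmap/crushmap from the mons, computes placement itself, and sends the write directly to the primary OSD, which replicates to the other OSDs in the PG. No payload ever passes through a mon. You can inspect the client-side mapping for any object (pool and object names here are placeholders):

ceph osd map rbd some-object
# -> ... pg 2.xx -> up ([3,7,11], p3) acting ([3,7,11], p3)

So if all three OSDs in the acting set live in one chassis, the replication hops stay chassis-local; only map traffic touches the mons.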
> On Jun 29, 2020, at 7:31 PM, Harry G. Coin wrote:
>
> I need exactly what ceph is for a whole lot of work, that work just
> doesn't represent a large fraction of the total local traffic.
I need exactly what ceph is for a whole lot of work, that work just
doesn't represent a large fraction of the total local traffic. Ceph is
the right choice. Plainly ceph has tremendous support for replication
within a chassis, among chassis and among racks. I just need
intra-chassis traffic to n
Hello,
I've upgraded ceph to Octopus (15.2.3 from repo) on one of the Ubuntu 18.04
host servers. The upgrade caused a problem with libvirtd, which hangs when it
tries to access the storage pools. The problem doesn't exist on Nautilus. The
libvirtd process simply hangs; nothing seems to happen. The
Hello,
I have been struggling a lot with radosgw bucket space wastage: currently about
2/3 of the utilised space is wasted and unaccounted for. I've tried to use the
tools to find the orphan objects, but these were running in a loop for weeks
without producing any results. Wid
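In case it helps, the orphan scan is job-based and can be listed and resumed; a minimal invocation, assuming the default data pool name (substitute yours):

radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans1
radosgw-admin orphans list-jobs
radosgw-admin orphans finish --job-id=orphans1

Newer releases also ship an rgw-orphan-list script intended to replace this, if your build includes it.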
Hello Rafael,
Can you check the apt sources list that exists on your ceph-deploy
node? Maybe you have luminous debian packages configured there?
Regards,
Anastasios
On Mon, 2020-06-29 at 06:59 -0300, Rafael Quaglio wrote:
> Hi,
>
> We have already installed a new Debian (10.4) server and I ne
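A quick way to verify what the node will actually install, assuming the usual file locations on Debian:

cat /etc/apt/sources.list.d/ceph.list
# expected something like: deb https://download.ceph.com/debian-nautilus/ buster main
apt-cache policy ceph-common
# shows which repo wins and which candidate version apt will pick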
That is an interesting point. We are using 12 OSDs on 1 nvme journal for our
Filestore nodes (which seems to work ok). The workload for wal + db is
different, so that could be a factor. However, when I've looked at the IO
metrics for the nvme it seems to be only lightly loaded, so it does not
appear to b
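For reference, this is roughly how I sample the device; watch %util and await for the nvme (interval and count are arbitrary):

iostat -x 5 3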
Hey Cephers,
Can someone help us out with this? Seems that it could be fixed by just
rerunning that job Goutham pointed out. We have a bunch of changes waiting
for this to merge.
Thanks in advance,
V
On Fri, Jun 26, 2020 at 2:49 PM Goutham Pacha Ravi
wrote:
> Hello!
>
> Thanks for bringing th
I wonder if you should have chosen a different product. Ceph is
meant to distribute data across nodes, racks, data centers etc. For a
nail use a hammer, for a screw use a screwdriver.
-----Original Message-----
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] layout help: nee
Error occurs here:
- name: look up for ceph-volume rejected devices
  ceph_volume:
    cluster: "{{ cluster }}"
    action: "inventory"
  register: rejected_devices
  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry
Hi Amit,
Yes, in the non-EC pool there are around 600 .meta files, but I don't know if
it's safe to move them to the data pool.
Does anyone know if there is a way to generate a synthetic .meta only, so that
the delete command is capable of deleting the file?
Regards
Manuel
From: Amit Ghadge
Sent: Monday, June 29
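Before moving anything, it may help to enumerate the objects first; a minimal sketch, assuming the default multipart-meta pool name:

rados -p default.rgw.buckets.non-ec ls | grep '\.meta'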
Hi,
Since upgrading from Nautilus 14.2.9 -> Octopus 15.2.3 two weeks ago we are
seeing large upticks in the reported size (both space and object count) for a
number of our RGW users. It does not seem to be isolated to just one user, so
I don't think it's something wrong in the users' usage pat
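In case it's just stale accounting, recomputing the user stats is cheap and non-destructive (uid and bucket here are placeholders):

radosgw-admin user stats --uid=someuser --sync-stats
radosgw-admin bucket stats --bucket=somebucket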
Hi
I have a few servers each with 6 or more disks, with a storage workload
that's around 80% done entirely within each server. From a
work-to-be-done perspective there's no need for 80% of the load to
traverse network interfaces, the rest needs what ceph is all about. So
I cooked up a set of c
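In case the concrete rules help the discussion, here is a sketch of a chassis-local replicated rule; the rule body is an assumption about a standard map with host buckets, so adjust names to yours:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add a rule along these lines:
# rule chassis-local {
#     id 1
#     type replicated
#     min_size 1
#     max_size 10
#     step take default
#     step choose firstn 1 type host
#     step chooseleaf firstn 0 type osd
#     step emit
# }
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set mostly-local-pool crush_rule chassis-local

Be aware that a whole-chassis failure then takes all replicas of those PGs with it.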
On 29/06/2020 11:44 pm, Stefan Priebe - Profihost AG wrote:
You need to use:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD}
bluefs-bdev-new-db --dev-target /dev/bluesfs_db/db-osd${OSD}
and
ceph-bluestore-tool --path dev/osd1/ bluefs-bdev-migrate
--devs-source dev/osd1/block --dev-target dev/osd1/block.db
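For completeness, the steps around those commands as I understand them (systemd-managed OSDs assumed; the chown of the new symlink is what ceph-volume would normally handle for you):

systemctl stop ceph-osd@${OSD}
# ... run the two ceph-bluestore-tool commands above ...
chown -h ceph:ceph /var/lib/ceph/osd/ceph-${OSD}/block.db
systemctl start ceph-osd@${OSD}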
On 6/18/20 12:11 AM, Ken Dreyer wrote:
> On Wed, Jun 17, 2020 at 9:25 AM David Galloway wrote:
>> If there will be a 14.2.10 or 14.3.0 (I don't actually know), it will be
>> built and signed for CentOS 8.
>>
>> Is this sufficient?
>
>
> Yes, thanks!
>
I can see the newer 14.2.10 build for el8
Hi Simon,
On 6/29/20 2:12 PM, Simon Sutter wrote:
> I'm trying to get Grafana working inside the Dashboard.
> If I press on "Overall Performance" tab, I get an error, because the iframe
> tries to connect to the internal hostname, which cannot be resolved from my
> machine.
> If I directly open
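The URL the iframe uses is configurable on the mgr side; a minimal fix, with the externally resolvable name as a placeholder:

ceph dashboard set-grafana-api-url https://public-hostname.example.com:3000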
Hi Lindsay,
Am 29.06.20 um 15:37 schrieb Lindsay Mathieson:
> Nautilus - Bluestore OSD's created with everything on disk. Now I have
> some spare SSD's - can I move the location of the existing WAL and/or DB
> to SSD partitions without recreating the OSD?
>
> I suspect not - saw emails from 2018,
Nautilus - Bluestore OSD's created with everything on disk. Now I have
some spare SSD's - can I move the location of the existing WAL and/or DB
to SSD partitions without recreating the OSD?
I suspect not - saw emails from 2018, in the negative :(
Failing that - is it difficult to add lvmcach
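On the lvmcache question: if the OSD was deployed with ceph-volume lvm (so the data sits on an LV), attaching a cache is a handful of commands. VG, LV, and device names below are hypothetical:

pvcreate /dev/sdX
vgextend vg0 /dev/sdX
lvcreate -L 50G -n osd0cache vg0 /dev/sdX
lvconvert --type cache-pool vg0/osd0cache
lvconvert --type cache --cachepool vg0/osd0cache vg0/osd0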
I agree. Please have the check also cover size=2 min_size=1 configs,
as we have done in our software for our users for years. It is important and
it can prevent lots of issues.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
Hi All,
I really like the idea of warning users against using unsafe practices.
Wouldn't it make sense to warn against using min_size=1 instead of size=1?
I've seen data loss happen with size=2 min_size=1 when multiple failures
occur and writes have been done between the failures. Effectively t
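Auditing existing pools for the risky combination is straightforward, and the fix is one command per pool:

ceph osd pool ls detail | grep -E 'replicated size|min_size'
ceph osd pool set <pool> min_size 2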
Hello
I'm trying to get Grafana working inside the Dashboard.
If I press on "Overall Performance" tab, I get an error, because the iframe
tries to connect to the internal hostname, which cannot be resolved from my
machine.
If I directly open grafana, everything works.
How can I tell the dashboar
Hi,
We have already installed a new Debian (10.4) server and I need to put it in a
Ceph cluster.
When I execute the command to install ceph on this node:
ceph-deploy install --release nautilus node1
It starts installing version 12.x on my node...
(...)
[serifos][DEBUG