[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Anthony D'Atri
Depending on what you’re looking to accomplish, setting up a cluster in VMs (VirtualBox, Fusion, cloud provider, etc.) may meet your needs without having to buy anything. > > - Don't think having a few 1Gbit can replace a >10Gbit. Ceph doesn't use > such bonds optimal. I already asked about this y

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Marc Roos
I tested something in the past[1] where I noticed that an OSD saturated a bond link and did not use the available 2nd one. I think I may have made a mistake in writing down that it was a 1x replicated pool. However, it has been written here multiple times that these OSD processes are single thre

[ceph-users] Re: What is the advice, one disk per OSD, or multiple disks

2020-09-21 Thread Robert Sander
On 21.09.20 14:29, Kees Bakker wrote: > Being new to CEPH, I need some advice how to setup a cluster. > Given a node that has multiple disks, should I create one OSD for > all disks, or is it better to have one OSD per disk. The general rule is one OSD per disk. There may be an exception with ve
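
A minimal sketch of the one-OSD-per-disk setup with ceph-volume; the device names /dev/sdb and /dev/sdc below are placeholders for your own empty data disks:

    # one OSD per raw device
    ceph-volume lvm create --data /dev/sdb
    ceph-volume lvm create --data /dev/sdc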

[ceph-users] ceph 14.2.8 tracing ceph with blkin compile error

2020-09-21 Thread 陈晓波
I want to trace a request through Ceph processing in version 14.2.8. I enabled blkin with do_cmake.sh -DWITH_BLKIN=ON, but compilation fails with the error below: ../lib/libblkin.a(tp.c.o): undefined reference to symbol 'lttng_probe_register' //lib64/liblttng-ust.so.0: error adding symbols: DSO missing
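
The "DSO missing from command line" part of that error usually means the final link step needs liblttng-ust passed explicitly. A hedged sketch of two things to try (package names vary per distro, and the linker-flag workaround is an assumption, not a confirmed fix):

    # install the LTTng userspace tracer development files first
    yum install lttng-ust-devel            # RHEL/CentOS
    # or: apt install liblttng-ust-dev     # Debian/Ubuntu

    # if the linker still cannot resolve lttng_probe_register, adding the
    # library to the linker flags is one possible workaround:
    ./do_cmake.sh -DWITH_BLKIN=ON -DCMAKE_EXE_LINKER_FLAGS="-llttng-ust"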

[ceph-users] Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-21 Thread Marc Roos
When I create a new encrypted osd with ceph volume[1] I assume something like this is being done, please correct what is wrong. - it creates the pv on the block device - it creates the ceph vg on the block device - it creates the osd lv in the vg - it uses cryptsetup to encrypt this lv (or
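
A rough sketch of the manual equivalent of those steps; the volume group, logical volume and dm-crypt names below are illustrative placeholders, not the exact names ceph-volume generates:

    pvcreate /dev/sdb
    vgcreate ceph-<vg-uuid> /dev/sdb
    lvcreate -l 100%FREE -n osd-block-<osd-fsid> ceph-<vg-uuid>
    # dm-crypt layer on top of the LV, keyed with a passphrase ceph-volume generates
    cryptsetup luksFormat /dev/ceph-<vg-uuid>/osd-block-<osd-fsid>
    cryptsetup luksOpen /dev/ceph-<vg-uuid>/osd-block-<osd-fsid> <dm-name>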

[ceph-users] Re: What is the advice, one disk per OSD, or multiple disks

2020-09-21 Thread Wout van Heeswijk
Just to expand on Robert's answer. If all devices are of the same class (hdd/ssd/nvme) then a one-on-one relationship is most likely the best choice. If you have very fast devices it might be good to have multiple OSDs on one device, at the cost of some complexity. If you have devices o
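
For the fast-device case, ceph-volume can split one device into several OSDs; a sketch, assuming a single NVMe device at /dev/nvme0n1:

    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1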

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Anthony D'Atri
> we use heavily bonded interfaces (6x10G) and also needed to look at this > balancing question. We use LACP bonding and, while the host OS probably tries > to balance outgoing traffic over all NICs > I tested something in the past[1] where I noticed that an OSD > saturated a bond link an

[ceph-users] Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
Hi, our Ceph cluster is reporting several PGs that have not been scrubbed or deep-scrubbed in time. It has been over a week since these PGs were scrubbed. When I checked `ceph health detail`, there are 29 pgs not deep-scrubbed in time and 22 pgs not scrubbed in time. I tried to manually start
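
A sketch of starting the scrubs by hand; the PG id 2.1f is just an example taken from `ceph health detail` output:

    ceph health detail | grep -E 'not (deep-)?scrubbed in time'
    ceph pg scrub 2.1f
    ceph pg deep-scrub 2.1f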

[ceph-users] What is the advice, one disk per OSD, or multiple disks

2020-09-21 Thread Kees Bakker
Hello, Being new to Ceph, I need some advice on how to set up a cluster. Given a node that has multiple disks, should I create one OSD for all disks, or is it better to have one OSD per disk? -- Kees Bakker

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Frank Schilder
Hi all, we use heavily bonded interfaces (6x10G) and also needed to look at this balancing question. We use LACP bonding and, while the host OS probably tries to balance outgoing traffic over all NICs, the real decision is made by the switches (incoming traffic). Our switches hash packets to a
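
For reference, a typical Linux-side LACP bond configuration; the values are examples, and even with layer3+4 hashing a single TCP connection still stays on one member link, so the switch-side hash policy matters just as much:

    # e.g. in ifcfg-bond0 or the equivalent netplan/ifupdown stanza
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"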

[ceph-users] Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-21 Thread René Bartsch
I'm new on the list, so a "Hello" to all! :-) We're planning a Proxmox cluster. The data-center operator advised us to use a virtual machine with NFS on top of a single CephFS instance to mount the shared CephFS storage on multiple hosts/VMs. As this NFS/CephFS VM could be a bottleneck I was wo

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-21 Thread Wout van Heeswijk
Hi Rene, Yes, CephFS is a good filesystem for concurrent writing. When using CephFS with Ganesha you can even scale out. It will perform better, but why don't you mount CephFS inside the VM? Kind regards, Wout 42on From: René Bartsch Sent: Monday, Sept
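
A minimal sketch of mounting CephFS directly inside a VM with the kernel client; the monitor addresses, CephX user and secret file are placeholders:

    mount -t ceph 192.168.0.10:6789,192.168.0.11:6789:/ /mnt/cephfs \
        -o name=cephfs-client,secretfile=/etc/ceph/cephfs-client.secret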

[ceph-users] Re: Troubleshooting stuck unclean PGs?

2020-09-21 Thread Wout van Heeswijk
Hi Matt, The mon data can grow while PGs are stuck unclean. Don't restart the mons. You need to find out why your placement groups are in "backfill_wait". Likely some of your OSDs are (near)full. If you have space elsewhere you can use the Ceph balancer module or reweighting of OSDs to reba
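
A sketch of both options; the OSD id and weight are examples only:

    # option 1: let the balancer module move PGs (upmap mode needs luminous or newer clients)
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # option 2: manually lower the weight of an overly full OSD
    ceph osd reweight 12 0.95
    # or let Ceph pick the OSDs: ceph osd reweight-by-utilization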

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Hans van den Bogert
Perhaps not SBCs, but I have 4x HP 6300s and have been running Kubernetes together with Ceph/Rook for more than 3 years. The HPs can be picked up for around 80-120 EUR. I learned so much in those 3 years; the last time I learned that much was when I started using Linux. This was money well spent and still is, it runs n

[ceph-users] Re: Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
Hi Wout, None of the OSDs are more than 20% full. However, only 1 PG is backfilling at a time, while the others are in backfill_wait. I had recently added a large amount of data to the Ceph cluster, and this may have caused the number of PGs to increase, creating the need to rebalance or move objects.

[ceph-users] Re: Troubleshooting stuck unclean PGs?

2020-09-21 Thread Matt Larson
I tried this: `sudo ceph tell 'osd.*' injectargs '--osd-max-backfills 4'` This has increased to 10 simultaneous backfills and a 10x higher rate of data movement. It looks like I could increase this further by raising the number of simultaneous recovery operations, but changing
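
The related recovery throttle can be raised the same way; the value below is an example and should be reverted once the cluster is healthy again:

    ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
    # watch progress
    ceph -s
    ceph pg dump pgs_brief | grep -c backfill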

[ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-21 Thread Janne Johansson
On Mon 21 Sep 2020 at 16:15, Marc Roos wrote: > When I create a new encrypted osd with ceph volume[1] > > Q4: Where is this luks passphrase stored? > I think the OSD asks the mon for it after authenticating, so "in the mon DBs" somewhere. -- May the most significant bit of your life be positive.
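
Assuming that is where it lives, the key should be visible in the mon config-key store; the OSD fsid below is a placeholder:

    ceph config-key ls | grep dm-crypt
    ceph config-key get dm-crypt/osd/<osd-fsid>/luks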

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-21 Thread Robert Sander
On 21.09.20 18:44, René Bartsch wrote: > We're planning a Proxmox-Cluster. The data-center operator advised to > use a virtual machine with NFS on top of a single CEPH-FS instance to > mount the shared CEPH-FS storage on multiple hosts/VMs. For what purpose do you plan to use CephFS? Do you know