[ceph-users] Re: why not block gmail?

2024-06-17 Thread Eneko Lacunza
g SPF -all seems better, but not sure about how easy that would be to implement... :) Cheers
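
Context for "-all": it is the SPF qualifier that tells receiving servers to hard-fail mail from hosts not listed in the sender's policy. A minimal sketch of such a DNS TXT record (example.org and the address range are placeholders):

    example.org.  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"
    ; check what a domain currently publishes:
    ;   dig +short TXT example.org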

[ceph-users] Re: Ceph storage project for virtualization

2024-03-05 Thread Eneko Lacunza
erent racks and rows. In this case the latency should be acceptable and low. My question was more related to the redundant NFS and whether you have experience with similar setups. I was trying to find out whether what I'm planning to do is feasible in the first place. Thank you so much :) Cheers! On 2024-03-05
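
On the redundant NFS part: recent cephadm-managed releases can deploy an NFS-Ganesha service across several hosts through the orchestrator. A rough sketch (cluster name vm-nfs and hostnames node1/node2 are placeholders; true client failover still needs a virtual IP / ingress in front):

    ceph nfs cluster create vm-nfs "2 node1,node2"   # two ganesha daemons on two hosts
    ceph nfs cluster info vm-nfs                     # show where they were placed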

[ceph-users] Re: Ceph storage project for virtualization

2024-03-05 Thread Eneko Lacunza

[ceph-users] Re: ceph-crash NOT reporting crashes due to wrong permissions on /var/lib/ceph/crash/posted (Debian / Ubuntu packages)

2024-03-04 Thread Eneko Lacunza
.html * quincy: https://lists.proxmox.com/pipermail/pve-devel/2024-February/061798.html Not sure this has been upstreamed. Cheers
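
For reference, the usual check/workaround on affected Debian/Ubuntu installs looks like this (a sketch, assuming the packaged ceph-crash daemon runs as the 'ceph' user):

    ls -ld /var/lib/ceph/crash /var/lib/ceph/crash/posted   # verify current ownership
    chown -R ceph:ceph /var/lib/ceph/crash                  # let the crash agent post reports
    systemctl restart ceph-crash.service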

[ceph-users] Re: After power outage, osd do not restart

2023-09-21 Thread Eneko Lacunza

[ceph-users] Re: hardware setup recommendations wanted

2023-08-29 Thread Eneko Lacunza
prefer a decent web interface. Any comments/recommendations? Best regards, Kai

[ceph-users] Re: ceph quincy repo update to debian bookworm...?

2023-07-26 Thread Eneko Lacunza

[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread Eneko Lacunza
t pool will continue working with only 2 replicas. For the "near" calculation, you must factor in the nearfull and full ratios for OSDs, and also that data may be unevenly distributed among OSDs... The choice will also affect how well the aggregated IOPS are spread between VMs <-> disks
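
For reference, the ratios mentioned above can be read straight from the cluster (0.85 / 0.90 / 0.95 are the usual defaults; your values may differ):

    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
    ceph df           # pool-level usage and MAX AVAIL
    ceph osd df tree  # per-OSD utilisation, shows how uneven the distribution is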

[ceph-users] Re: Mixed mode ssd and hdd issue

2023-03-14 Thread Eneko Lacunza
(quoted rados bench write output; columns: sec, cur ops, started, finished, avg MB/s, cur MB/s, last lat(s), avg lat(s))
   ...                                       68   0.770237   0.892833
    9   16   173   157   69.7677   104   0.976005   0.878237
   10   16   195   179   71.5891    88   0.755363   0.869603
That is very poor! Why? Thanks
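
For anyone wanting to reproduce numbers like the quoted ones, they come from the standard RADOS benchmark; a sketch ('testpool' is a placeholder pool name):

    rados bench -p testpool 60 write -t 16 --no-cleanup   # 60 s of 4 MB writes, 16 in flight
    rados bench -p testpool 60 seq -t 16                  # sequential reads of the objects just written
    rados -p testpool cleanup                             # remove the benchmark objects afterwards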

[ceph-users] Re: Mysterious HDD-Space Eating Issue

2023-01-17 Thread Eneko Lacunza
Hi, On 17/1/23 at 8:12, duluxoz wrote: Thanks to Eneko Lacunza, E Taka, and Anthony D'Atri for replying - all that advice was really helpful. So, we finally tracked down our "disk eating monster" (sort of). We've got a "runaway" ceph-guest-NN that is

[ceph-users] Re: Mysterious Disk-Space Eater

2023-01-12 Thread Eneko Lacunza
nspect each process' open files and find which file(s) no longer have a directory entry... that would give you a hint. Cheers
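
A quick way to do that inspection (a sketch; run as root on the affected node):

    lsof +L1                                                # open files with link count 0: deleted but still held open
    ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)'    # alternative without lsof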

[ceph-users] Re: Best Disk Brand for Ceph OSD

2022-12-27 Thread Eneko Lacunza
this list for reference). Cheers

[ceph-users] Re: What to expect on rejoining a host to cluster?

2022-11-28 Thread Eneko Lacunza

[ceph-users] Re: Trying to add NVMe CT1000P2SSD8

2022-10-05 Thread Eneko Lacunza
Hi, This is a consumer SSD. Did you test its performance first? Better get a datacenter disk... Cheers On 5/10/22 at 17:53, Murilo Morais wrote: Nobody?
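
The usual suitability test for a Ceph journal/WAL device is single-depth O_DSYNC 4k writes; a sketch (WARNING: writing to a raw device is destructive, use a scratch disk or a test file; /dev/sdX is a placeholder):

    fio --name=dsync-write --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --group_reporting
    # datacenter SSDs with power-loss protection typically sustain tens of thousands of
    # IOPS here; consumer drives often drop to a few hundred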

[ceph-users] Re: Stretch cluster questions

2022-05-05 Thread Eneko Lacunza
Hi Gregory, Thanks for your confirmation. I hope I can start some tests today. Cheers On 5/5/22 at 5:19, Gregory Farnum wrote: On Wed, May 4, 2022 at 1:25 AM Eneko Lacunza wrote: Hi Gregory, On 3/5/22 at 22:30, Gregory Farnum wrote: On Mon, Apr 25, 2022 at 12:57 AM
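
For anyone else starting similar tests, the basic enabling sequence from the stretch-mode docs looks roughly like this (a sketch; monitor names a/b/e, the rule name stretch_rule and the datacenter bucket type are assumptions for a typical two-site layout):

    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=site1
    ceph mon set_location b datacenter=site2
    ceph mon set_location e datacenter=site3      # tiebreaker monitor at a third location
    ceph mon enable_stretch_mode e stretch_rule datacenter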

[ceph-users] Re: Ceph Usage web and terminal.

2021-10-27 Thread Eneko Lacunza
be I misunderstood something. P.S. And the question is: which disk usage figure should I use for the stored data, the one shown on the web or the one in the terminal? Hope this helps ;) Cheers
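
To compare the two views from the command line (a sketch):

    ceph df detail   # pool-level numbers; MAX AVAIL already accounts for replication
    ceph osd df      # raw per-OSD utilisation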

[ceph-users] Re: The cluster expands the osd, but the storage pool space becomes smaller

2021-08-11 Thread Eneko Lacunza
space was restored to the size before the expansion. Can anyone help me, thanks

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Eneko Lacunza
out RAM and CPU, but at 24x enterprise SSDs per server I think you'll be wasting much of those SSDs' performance... I suggest you consider a 4-6 server cluster, with SSDs for WAL/DB + spinning disks for storage. This will give you more redundancy for less money, and more peace of mind when a spi
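
A sketch of how such a layout is typically provisioned with ceph-volume (device names are placeholders; --report only prints the plan):

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        --db-devices /dev/nvme0n1 --report
    # drop --report to actually create the OSDs, with DB/WAL carved out of the NVMe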

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-07 Thread Eneko Lacunza
. Some of the VMs host containers. Cheers -Original Message- From: Eneko Lacunza Sent: Friday, 4 June 2021 15:49 To: ceph-users@ceph.io Subject: *SPAM* [ceph-users] Re: Why you might want packages not containers for Ceph deployments Hi, We operate a few Ceph hyperconverged

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-04 Thread Eneko Lacunza
, which would delay feedback/bug reports to upstream. Cheers and thanks for the great work!

[ceph-users] Re: Cephadm upgrade to Pacific problem

2021-04-15 Thread Eneko Lacunza
me to the link above. The suggested addition to the kernel command line fixed the issue. -Dave -- Dave Hall Binghamton University kdh...@binghamton.edu On Thu, Apr 15, 2021 at 4:07 AM Eneko Lacunza <elacu...@binovo.es> wrote: Hi Dave,

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-15 Thread Eneko Lacunza

[ceph-users] Re: ceph bootstrap initialization :: nvme drives not empty after >12h

2021-03-12 Thread Eneko Lacunza
't think you should use Ceph for this config. The bare minimum you should use is 3 nodes, because the default failure domain is host. Maybe you can explain what your goal is, so people can recommend setups. Regards
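
To see where the host failure domain comes from, inspect the default CRUSH rule (a sketch; replicated_rule is the stock rule name, mypool a placeholder):

    ceph osd crush rule dump replicated_rule   # look for the chooseleaf step with "type": "host"
    ceph osd pool get mypool crush_rule        # which rule a given pool actually uses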

[ceph-users] Re: 10G stackabe lacp switches

2021-02-15 Thread Eneko Lacunza
ting insights to share for ceph 10G ethernet storage networking? Do you really need MLAG (the 2x10G bandwidth)? If not, just use 2 simple switches (Mikrotik for example) and in Proxmox use an active-passive bond, with the default interface on all nodes pointing to the same switch. Cheers
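
A sketch of such a bond in /etc/network/interfaces on Proxmox (interface names and the address are placeholders; bond-primary on every node points at the same switch):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0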

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-04 Thread Eneko Lacunza
size=2) vs risk of data loss (min_size=1). Not everyone needs to max out SSD IOPS; having a decent, HA setup can be of much value... Cheers
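
For reference, these are per-pool settings (a sketch; mypool is a placeholder, and size=3 / min_size=2 is the commonly recommended combination):

    ceph osd pool set mypool size 3       # three replicas
    ceph osd pool set mypool min_size 2   # keep serving I/O with one replica down
    ceph osd pool get mypool size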

[ceph-users] Re: [External Email] Re: Hardware for new OSD nodes.

2020-10-26 Thread Eneko Lacunza
heers Dave Hall Binghamton University kdh...@binghamton.edu 607-760-2328 (Cell) 607-777-4641 (Office) On 10/23/2020 6:00 AM, Eneko Lacunza wrote: Hi Dave, On 22/10/20 at 19:43, Dave Hall wrote: On 22/10/20 at 16:48, Dave Hall wrote: (BTW, Nautilus 14.2.7

[ceph-users] Re: desaster recovery Ceph Storage , urgent help needed

2020-10-23 Thread Eneko Lacunza

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
n you paste the warning message? It shows the spillover size. What size are the partitions on the NVMe disk (lsblk)? Cheers

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
sk. I think some BIOS/UEFIs have settings for a secondary boot/UEFI boot file, but that would have to be prepared and maintained manually, outside the mdraid10; and it would only work on a total failure of the primary disk. Cheers

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
ver failure, so that won't be a problem. With small clusters (like ours) you may want to reorganize OSDs to a new server (let's say, move one OSD of each server to the new server). But this is an uncommon corner case, I agree :) Cheers

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Eneko Lacunza
Hi Brian, On 22/10/20 at 17:50, Brian Topping wrote: On Oct 22, 2020, at 9:14 AM, Eneko Lacunza <elacu...@binovo.es> wrote: Don't stripe them; if one NVMe fails you'll lose all OSDs. Just use 1 NVMe drive for 2 SAS drives and provision 300GB for WAL/DB
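
A sketch of provisioning that layout by hand (device and VG/LV names are placeholders; 300G per OSD as suggested above):

    pvcreate /dev/nvme0n1
    vgcreate ceph-db-nvme0 /dev/nvme0n1
    lvcreate -L 300G -n db-sda ceph-db-nvme0
    lvcreate -L 300G -n db-sdb ceph-db-nvme0
    ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-nvme0/db-sda
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-nvme0/db-sdb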

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Eneko Lacunza
y this is a good size for us, but I'm wondering if my BlueFS spillovers are resulting from using drives that are too big. I also thought I might have seen some comments about cutting large drives into multiple OSDs - could that be? Not using such big disks here, sorry :) (no space ne

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-04 Thread Eneko Lacunza
ces across all nodes. Any ideas what may cause that? Maybe I've missed something important in the release notes?

[ceph-users] Re: dealing with spillovers

2020-06-08 Thread Eneko Lacunza
Hi, We had this issue on a 14.2.8 cluster, although it appeared after resizing the db device to a larger one. After some time (weeks), the spillover was gone... Cheers Eneko On 6/6/20 at 0:07, Reed Dier wrote: I'm going to piggyback on this somewhat. I've battled RocksDB spillovers over the
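
If waiting is not an option, a manual RocksDB compaction usually clears the spillover warning once the DB device has grown (a sketch; osd.12 is a placeholder ID, run the daemon command on that OSD's host):

    ceph health detail | grep -i spillover   # which OSDs are flagged
    ceph daemon osd.12 compact               # trigger a RocksDB compaction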

[ceph-users] Re: move bluestore wal/db

2020-05-26 Thread Eneko Lacunza
Hi, Yes, it can be done (shutting down the OSD, but no rebuild required); we did it to resize the WAL partition to a bigger one. A simple Google search will help; I can paste the procedure we followed but it's in Spanish :( Cheers On 26/5/20 at 17:20, Frank R wrote: Is there a safe way
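
The generic procedure relies on ceph-bluestore-tool with the OSD stopped; a rough sketch (OSD id 12 and the target LV are placeholders; back up first and double-check the block.db symlink/LV tags afterwards):

    systemctl stop ceph-osd@12
    # move the DB/WAL to a new (larger) device:
    ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-12 \
        --devs-source /var/lib/ceph/osd/ceph-12/block.db --dev-target /dev/vgdb/osd12-db
    # or, after growing the existing DB partition/LV in place:
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12
    systemctl start ceph-osd@12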

[ceph-users] Re: Setting up first cluster on proxmox - a few questions

2020-05-21 Thread Eneko Lacunza
Hi, I strongly suggest you read the Ceph documentation at https://docs.ceph.com/docs/master On 21/5/20 at 15:06, CodingSpiderFox wrote: Hello everyone :) When I try to create an OSD, the Proxmox UI asks for: * Data disk * DB disk * WAL disk Which disk will be the limiting factor in terms of st

[ceph-users] telemetry.ceph.com certificate expired

2020-04-15 Thread Eneko Lacunza
Hi all, We're receiving a certificate error for the telemetry module: Module 'telemetry' has failed: HTTPSConnectionPool(host='telemetry.ceph.com', port=443): Max retries exceeded with url: /report (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate
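
Until the server-side certificate is fixed, the module can be switched off and re-enabled later (a sketch; re-enabling asks you to re-accept the data-sharing license):

    ceph telemetry off
    # once telemetry.ceph.com presents a valid certificate again:
    ceph telemetry on --license sharing-1-0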

[ceph-users] Re: Load on drives of different sizes in ceph

2020-03-31 Thread Eneko Lacunza
Hi Andras, On 31/3/20 at 16:42, Andras Pataki wrote: I'm looking for some advice on what to do about drives of different sizes in the same cluster. We have so far kept the drive sizes consistent on our main Ceph cluster (using 8TB drives). We're getting some new hardware with larger,

[ceph-users] Re: Newbie to Ceph jacked up his monitor

2020-03-24 Thread Eneko Lacunza
Hi Jarett, On 23/3/20 at 3:52, Jarett DeAngelis wrote: So, I thought I'd post with what I learned re: what to do with this problem. This system is a 3-node Proxmox cluster, and each node had: 1 x 1TB NVMe, 2 x 512GB HDD. I had maybe 100GB of data in this system total. Then I added: 2 x 2

[ceph-users] Re: Hardware feedback before purchasing for a PoC

2020-03-09 Thread Eneko Lacunza
are rather than renting it, since I want to create a private cloud. Thanks! *Ignacio Ocampo* On Mar 9, 2020, at 4:12 AM, Eneko Lacunza wrote: Hi Ignacio, On 9/3/20 at 3:00, Ignacio Ocampo wrote: Hi team, I'm planning to invest in hardware for a PoC and I would like your feedback

[ceph-users] Re: Hardware feedback before purchasing for a PoC

2020-03-09 Thread Eneko Lacunza
Hi Ignacio, On 9/3/20 at 3:00, Ignacio Ocampo wrote: Hi team, I'm planning to invest in hardware for a PoC and I would like your feedback before the purchase: The goal is to deploy a *16TB* storage cluster, with *3 replicas* thus *3 nodes*. System configuration: https://pcpartpicker.co
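
As a quick sanity check on that sizing (a back-of-the-envelope sketch, assuming the usual 0.85 nearfull ceiling):

    # 16 TB usable at size=3, keeping OSDs under ~85% full:
    echo "16 * 3 / 0.85" | bc -l    # ~56.5 TB raw in total, i.e. roughly 19 TB of disks per node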

[ceph-users] Re: SSD considerations for block.db and WAL

2020-02-28 Thread Eneko Lacunza
Hi Christian, On 27/2/20 at 20:08, Christian Wahl wrote: Hi everyone, we currently have 6 OSDs with 8TB HDDs split across 3 hosts. The main usage is KVM images. To improve speed we planned on putting the block.db and WAL onto NVMe SSDs. The plan was to put 2x1TB in each host. One option
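
For a rough idea of whether 2x1TB NVMe per host is enough, an often-quoted guideline is a block.db of about 4% of the data device (an assumption; actual needs depend on the workload):

    echo "8000 * 0.04" | bc    # ~320 GB of block.db per 8 TB HDD; the WAL lives inside the DB partition
    # with 2 HDD OSDs and 2x1TB NVMe per host, each OSD can get a whole 1 TB NVMe, ample headroom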

[ceph-users] Re: Limited performance

2020-02-25 Thread Eneko Lacunza
Hi Fabian, On 24/2/20 at 19:01, Fabian Zimmermann wrote: we are currently creating a new cluster. This cluster is (as far as we can tell) a config copy (ansible) of our existing cluster, just 5 years later - with new hardware (NVMe instead of SSD, bigger disks, ...) The setup: * NVMe for Jo

[ceph-users] Re: Experience with messenger v2 in Nautilus

2020-01-03 Thread Eneko Lacunza
Hi Stefan, On 2/1/20 at 10:47, Stefan Kooman wrote: I'm wondering how many of you are using messenger v2 in Nautilus after upgrading from a previous release (Luminous / Mimic). Does it work for you? Or why did you not enable it (yet)? Our hyperconverged office cluster (Proxmox) with 5 nodes