, the RAM comes
soldered to the board in most cases. If you want removable RAM, you're
looking at a small-form-factor server board of some kind like the
Supermicro A1SAi boards that have been my storage nodes since 2016.
Regards,
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
Filestore). These were built on Intel NUCs because I needed them in a
hurry and I had some DDR4 SO-DIMMs that I bought by mistake.
So that's 5 WDC WD20SPZX-00U OSDs and 3 Samsung SSD 860 OSDs.
I'm looking to move out of the DIN-rail cases as it looks like I'm
outgrowing them, so maybe in the
nstead of using a BMC with a server board or
a multiplexed serial console is a nuisance.)
Not all of us using Ceph are big corporates with deep pockets.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
On 5/9/20 11:08 am, Stuart Longland wrote:
> I note though that they don't survive a reboot:
>> [2020-09-05 11:05:39,216][ceph_volume][ERROR ] exception caught by decorator
>> …
>> File
>> "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activat
On 29/8/20 2:20 pm, Stuart Longland wrote:
> Step 7.
>> If there are any OSDs in the cluster deployed with ceph-disk (e.g., almost
>> any OSDs that were created before the Mimic release), you need to tell
>> ceph-volume to adopt responsibility for starting the daemons.
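For reference, that adoption step can be sketched with `ceph-volume`'s "simple" subcommands (run on each OSD host; OSD paths and IDs below are whatever your cluster actually has):

```shell
# Scan the legacy ceph-disk OSDs and record their metadata as JSON
# under /etc/ceph/osd/ so ceph-volume knows how to start them.
ceph-volume simple scan

# Enable systemd units for every scanned OSD on this host, so they
# are activated again automatically after a reboot.
ceph-volume simple activate --all
```

Without the activate step, the OSDs start fine by hand but their systemd units never get enabled, which matches the "don't survive a reboot" symptom.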
> rep...@submit.spam.acma.gov.au. Don't change the
> subject line or add any text. You should receive an auto-response to your
> email.
https://www.acma.gov.au/stop-getting-spam
If enough of us do it, the "business" will very soon get shut down.
--
Stuart Longland (aka Redhatter, VK4MSL)
0b57630c",
> "cluster_name": "ceph",
> "data": {
> "path": "/dev/sdc1",
> "uuid": "2a1e6a2b-7742-4cd9-9a39-e4a35ebe74fe"
> },
> "fsid": "2a1e6a2b-7742-4cd9-9a39-e4a35ebe74
NFS-Ganesha on CephFS exported to Windows
Virtualise the Windows Client under KVM (which natively supports Ceph)
and attach the RBD target as a virtual disk.
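As a sketch of that approach (the pool name `rbd` and image name `windows-disk` are placeholders; substitute your own, and add your usual networking and auth options), QEMU can attach an RBD image directly:

```shell
# Boot the Windows guest with the RBD image as a virtio disk.
# 'rbd:<pool>/<image>' uses QEMU's built-in RBD block driver, so the
# guest just sees an ordinary block device -- no Ceph client is
# needed inside Windows itself.
qemu-system-x86_64 \
  -m 4096 -enable-kvm \
  -drive format=raw,file=rbd:rbd/windows-disk,if=virtio
```

Under libvirt the same thing is expressed as a `<disk type='network'>` element with `<source protocol='rbd'>`.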
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.