[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-27 Thread Angelo Hongens

Yes, I'd love this!

A lot of companies want Samba for simple file access from Windows/Mac 
clients. I know quite a few companies that buy NetApp as 'easy SMB storage'.


Having Ceph do built-in (or bolt-on) Samba, instead of having to manage 
external Samba clusters, would be nice, and would make it more accessible 
as a replacement for that kind of storage.


And the result would be better integration between Samba and Ceph. 
Perhaps in code, but also in documentation and example configs.


I set up my own two-physical-node Samba cluster, with Gluster hosting the 
CTDB lock file (plus a third machine, a VM, acting as the third node in 
the Gluster cluster). According to 45Drives, storing the CTDB lock file 
in CephFS is a bad idea, and setting up the RADOS-based mutex helper was too 
complex for me. The whole solution feels a bit hackish, although it 
works wonders. Having a unified, tried-and-tested solution where everyone 
does the same thing sounds great!
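
For reference, the two recovery-lock approaches look roughly like this in 
ctdb.conf (cluster name, user, pool, object and paths are just examples and 
vary per distribution and Samba version):

  # lock file on a clustered filesystem (Gluster in my case, or CephFS):
  [cluster]
      recovery lock = /mnt/glusterfs/ctdb/.reclock

  # or the RADOS-based mutex helper that ships with Samba:
  [cluster]
      recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba ctdb_pool ctdb_reclock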


Angelo.


On 21/03/2024 15:12, John Mulligan wrote:

Hello Ceph List,

I'd like to formally let the wider community know of some work I've been
involved with for a while now: adding Managed SMB Protocol Support to Ceph.
SMB is the well-known network file protocol native to Windows systems and
supported by macOS (and Linux). The other key word, "managed", means
integrating with Ceph management tooling - in this particular case cephadm for
orchestration and eventually a new MGR module for managing SMB shares.

The effort is still in its very early stages. We have a PR adding initial
support for Samba Containers to cephadm [1] and a prototype for an smb MGR
module [2]. We plan on using container images based on the samba-container
project [3] - a team I am already part of. What we're aiming for is a feature
set similar to the current NFS integration in Ceph, but with a focus on
bridging non-Linux/Unix clients to CephFS using a protocol built into those
systems.

A few major features we have planned include:
* Standalone servers (internally defined users/groups)
* Active Directory Domain Member Servers
* Clustered Samba support
* Exporting Samba stats via Prometheus metrics
* A `ceph` cli workflow loosely based on the nfs mgr module
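
For anyone who hasn't used the nfs mgr module: its workflow looks roughly
like the following (exact flags vary a bit between releases), and the smb
equivalents sketched below are purely hypothetical at this point:

  # existing nfs mgr module workflow (roughly):
  ceph nfs cluster create mynfs "2 host1 host2"
  ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /data --fsname myfs

  # hypothetical smb equivalents (nothing is final yet):
  ceph smb cluster create mysmb ...
  ceph smb share create mysmb myshare ...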

I wanted to share this information in case there's wider community interest in
this effort. I'm happy to take your questions / thoughts / suggestions in this
email thread, via Ceph Slack (or IRC), or feel free to attend a Ceph
Orchestration weekly meeting! I try to attend regularly, and we sometimes discuss
design aspects of the smb effort there. It's on the Ceph Community Calendar.
Thanks!


[1] - https://github.com/ceph/ceph/pull/55068
[2] - https://github.com/ceph/ceph/pull/56350
[3] - https://github.com/samba-in-kubernetes/samba-container/


Thanks for reading,
--John Mulligan


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Re: snaptrim number of objects

2023-08-23 Thread Angelo Hongens




On 23/08/2023 08:27, Sridhar Seshasayee wrote:


This also leads me to agree with you that there's 'something wrong' with
the mclock scheduler. I was almost starting to suspect hardware issues
or something like that; I was at my wit's end.

Could you update this thread with the exact quincy version by running:

$ ceph versions

and

$ ceph config show-with-defaults osd.N | grep osd_mclock

Please replace N with any valid OSD id.

I suspect that the quincy version you are running doesn't
have the latest changes that went into the Reef upstream release.
Recent changes introduced significant improvements to the
mClock profiles and address slow recovery/backfill rates. The
improvements to the mClock profiles should also help throttle
snaptrim operations.

Snaptrim with mClock currently uses a static cost, as
defined by osd_snap_trim_cost. There are improvements planned
around this soon; for example, the cost should be dynamic and reflect
the size of the object being trimmed.
-Sridhar



Here's the requested info, even though I'm going to stay on wpq for a 
while.
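
(Staying on / switching back to wpq is just this, followed by an OSD restart
for the scheduler change to take effect:

  ceph config set osd osd_op_queue wpq
)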



# ceph versions
{
    "mon": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "osd": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 117
    },
    "mds": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 126
    }
}

I'm using the docker image 
registry..local/quay-proxy/ceph/ceph@sha256:673b48521fd53e1b4bc7dda96335505c4d4b2e13d7bb92bf2e7782e2083094c9.



# ceph config show-with-defaults osd.0 | grep osd_mclock
osd_mclock_cost_per_byte_usec                     0.00             default
osd_mclock_cost_per_byte_usec_hdd                 2.60             default
osd_mclock_cost_per_byte_usec_ssd                 0.011000         default
osd_mclock_cost_per_io_usec                       0.00             default
osd_mclock_cost_per_io_usec_hdd                   11400.00         default
osd_mclock_cost_per_io_usec_ssd                   50.00            default
osd_mclock_force_run_benchmark_on_init            false            default
osd_mclock_iops_capacity_threshold_hdd            500.00           default
osd_mclock_iops_capacity_threshold_ssd            8.00             default
osd_mclock_max_capacity_iops_hdd                  250.00           mon
osd_mclock_max_capacity_iops_ssd                  21500.00         default
osd_mclock_override_recovery_settings             true             mon
osd_mclock_profile                                high_client_ops  mon
osd_mclock_scheduler_anticipation_timeout         0.00             default
osd_mclock_scheduler_background_best_effort_lim   99               default
osd_mclock_scheduler_background_best_effort_res   1                default
osd_mclock_scheduler_background_best_effort_wgt   1                default
osd_mclock_scheduler_background_recovery_lim      99               default
osd_mclock_scheduler_background_recovery_res      1                default
osd_mclock_scheduler_background_recovery_wgt      1                default
osd_mclock_scheduler_client_lim                   99               default
osd_mclock_scheduler_client_res                   1                default
osd_mclock_scheduler_client_wgt                   1                default
osd_mclock_skip_benchmark                         false            default

# ceph config show-with-defaults osd.0 | grep trim_cost
osd_snap_trim_cost                                1048576          default


Angelo.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Angelo Hongens




On 21/08/2023 12:38, Frank Schilder wrote:

Hi Angelo,

was this cluster upgraded (major version upgrade) before these issues started? 
We observed similar behaviour with certain major-version upgrade paths, and the 
only way to fix it was to re-deploy all OSDs step by step.

You can try a RocksDB compaction first. If that doesn't help, rebuilding the 
OSDs might be the only way out.
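
A compaction can be triggered online, per OSD or for all of them at once 
(the OSD id below is just an example; compacting everything at once causes 
extra load):

  ceph tell osd.0 compact
  ceph tell osd.* compact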

You should also confirm that all Ceph daemons are on the same version and that 
require-osd-release reports the same major version as well:

ceph report | jq '.osdmap.require_osd_release'



Hey Frank,

No, this cluster was clean installed with 17.2.6! All quincy.

Angelo.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: snaptrim number of objects

2023-08-19 Thread Angelo Hongens




On 07/08/2023 18:04, Patrick Donnelly wrote:

I'm trying to figure out what's happening to my backup cluster that
often grinds to a halt when cephfs automatically removes snapshots.


CephFS does not "automatically" remove snapshots. Do you mean the
snap_schedule mgr module?


Yup.



Almost all OSDs go to 100% CPU, Ceph complains about slow ops, and
CephFS stops doing client I/O.


What health warnings do you see? You can try configuring snap trim:

https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_snap_trim_sleep


Mostly a looot of SLOW_OPS, and I guess, as a result of that,
MDS_CLIENT_LATE_RELEASE, MDS_CLIENT_OLDEST_TID, MDS_SLOW_METADATA_IO, and 
MDS_TRIM warnings.



>> That won't explain why my cluster bogs down, but at least it gives
>> some visibility. Running 17.2.6 everywhere by the way.
>
> Please let us know how configuring snaptrim helps or not.
>

When I set nosnaptrim, all I/O immediately recovers. When I unset 
nosnaptrim, I/O stops again.
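
(That is just the cluster-wide flag:

  ceph osd set nosnaptrim
  ceph osd unset nosnaptrim
)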


One of the symptoms is that OSDs go to about 350% CPU per daemon.

I got the feeling for a while that setting osd_snap_trim_sleep_ssd to 1 
helped. I have 120 HDD OSDs with WAL/journal on SSD; does it even use 
this value? Everything seemed stable, but eventually another few days 
passed, and suddenly removing a snapshot brought the cluster down again. 
So I guess that wasn't the cause.


Now what I'm trying to do is set osd_max_trimming_pgs to 0 for all 
disks, and then slowly set it to 1 for a few OSDs. This seems to work 
for a while, but it still brings the cluster down every now and then, 
and even when it doesn't, the cluster is so slow it's almost unusable.
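
Roughly like this (osd.12 is just an example id):

  ceph config set osd osd_max_trimming_pgs 0
  ceph config set osd.12 osd_max_trimming_pgs 1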


This whole troubleshooting process is taking weeks. I just noticed that 
when 'the problem occurs', a lot of OSDs on a host (15 OSDs per host) 
start using a lot of CPU, even though, for example, only 3 OSDs on that 
machine have osd_max_trimming_pgs set to 1 and the rest set to 0. The disks 
don't seem to be the bottleneck.


Restarting the daemons seems to solve the problem for a while, although 
the high CPU usage pops up on a different OSD node every time.


I am at a loss here. I'm almost thinking it's some kind of bug in the 
osd daemons, but I have no idea how to troubleshoot this.


Angelo.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Angelo Hongens
As a side note: there's the Windows RBD driver, which will get you way 
more performance. It's labeled beta, but it seems to work fine for a lot 
of people. If you have a test lab you could try that.


Angelo.

On 19/06/2023 18:16, Work Ceph wrote:

I see, thanks for the feedback guys!

It is interesting that Ceph Manager does not allow us to export iSCSI
blocks without selecting 2 or more iSCSI portals. Therefore, we will always
use at least two, and as a consequence that feature is not going to be
supported. Can I export an RBD image via the iSCSI gateway using only one
portal via gwcli?

@Maged Mokhtar, I am not sure I follow. Do you have an iSCSI
implementation that we can use to replace the default iSCSI server
in the default Ceph iSCSI Gateway? I didn't quite understand what the
PetaSAN project is, and whether it is an open-source solution from which we
can just pick/select/use one of its modules (e.g. just the iSCSI
implementation).

On Mon, Jun 19, 2023 at 10:07 AM Maged Mokhtar  wrote:


Windows Cluster Shared Volumes and Failover Clustering require
support for clustered persistent reservations by the block device to
coordinate access by multiple hosts. The default iSCSI implementation in
Ceph does not support this; you can use the iSCSI implementation in the
PetaSAN project:

www.petasan.org

which supports this feature and provides a high-performance
implementation. We currently use Ceph 17.2.5.


On 19/06/2023 14:47, Work Ceph wrote:

Hello guys,

We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.

Recently, we had the need to add some VMware clusters as clients for the
iSCSI GW and also Windows systems with the use of Clustered Storage Volumes
(CSV), and we are facing a weird situation. In Windows, for instance, the
iSCSI block can be mounted, formatted and consumed by all nodes, but when
we add it to the CSV it fails with some generic exception. The same happens
in VMware: when we try to use it with VMFS, it fails.

We do not seem to find the root cause for these errors. However, the errors
seem to be linked to multiple nodes consuming the same block via shared
file systems. Have you guys seen this before?

Are we missing some basic configuration in the iSCSI GW?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: architecture help (iscsi, rbd, backups?)

2023-04-29 Thread Angelo Hongens


Thanks Alex, interesting perspectives.

I already thought about Proxmox as well, and that would also work quite 
nicely. I think that would be the most performant option for putting VMs on 
RBD.


But my entire goal was to run SMB servers on top of that hypervisor 
layer, to serve SMB shares to Windows.


So I think Bailey's suggestion makes more sense: use CephFS with 
Linux SMB gateways, which cuts out a layer in between, greatly improving 
performance.


That also has the benefit of being able to use a single 1PB CephFS 
filesystem served by multiple SMB gateways, instead of my initial plan of 
having something like 10x100TB Windows SMB file servers (I would not dare 
run a single 1PB Windows VM with an NTFS disk).




Angelo.

On 27/04/2023 20:05, Alex Gorbachev wrote:

Hi Angelo,

Just some thoughts to consider from our experience with similar setups:

1. Use Proxmox instead of VMware, or anything KVM based. These VMs can 
consume Ceph directly, and provide the same level of service (some may 
say better) for live migration, hyperconvergence, etc. Then you run 
Windows VMs in KVM, bring RBD storage to them as virtual disks, and share 
it out as needed.


2. Use NFS - all modern Windows OSs support it. You can use any NFS 
gateway you like, or set up your own machine or cluster (which is what 
we did with Storcium) and export your storage as needed (see the mount 
example after this list).


3. If you must use VMware, you can present datastores via NFS as well; 
this has a lot of indirection but is easier to manage.
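
As a quick illustration of option 2: the Windows NFS client is an optional
feature, and a mount looks roughly like this (server name and export path
are examples; on desktop Windows the client is enabled under "Services for
NFS"):

  # PowerShell on Windows Server:
  Install-WindowsFeature NFS-Client
  mount -o anon \\nfs-gw.example.com\export\data N: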


--
Alex Gorbachev
ISS Storcium
https://www.iss-integration.com 



On Thu, Apr 27, 2023 at 5:06 PM Angelo Höngens wrote:


Hey guys and girls,

I'm working on a project to build storage for one of our departments,
and I want to ask you guys and girls for input on the high-level
overview part. It's a long one, I hope you read along and comment.

SUMMARY

I made a plan last year to build a 'storage solution' including ceph
and some windows VM's to expose the data over SMB to clients. A year
later I finally have the hardware, built a ceph cluster, and I'm doing
tests. Ceph itself runs great, but when I wanted to start exposing the
data using iscsi to our VMware farm, I ran into some issues. I know
the iscsi gateways will introduce some new performance bottlenecks,
but I'm seeing really slow performance, still working on that.

But then I ran into the warning on the iscsi gateway page: "The iSCSI
gateway is in maintenance as of November 2022. This means that it is
no longer in active development and will not be updated to add new
features.". Wait, what? Why!? What does this mean? Does this mean that
iSCSI is now 'feature complete' and will still be supported the next 5
years, or will it be deprecated in the future? I tried searching, but
couldn't find any info on the decision and the roadmap.

My goal is to build a future-proof setup, and using deprecated
components should not be part of that of course.

If the iscsi gateway will still be supported the next few years and I
can iron out the performance issues, I can still go on with my
original plan. If not, I have to go back to the drawing board. And
maybe you guys would advise me to take another route anyway.

GOALS

My goals/considerations are:

- we want >1PB of storage capacity for cheap (on a tight budget) for
research data. Most of it is 'store once, read sometimes'. <1% of the
data is 'hot'.
- focus is on capacity, but it would be nice to have > 200MB/s of
sequential write/read performance and not 'totally suck' on random
i/o. Yes, not very well quantified, but ah. Sequential writes are most
important.
- end users all run Windows computers (mostly VDI's) and a lot of
applications require SMB shares.
- security is a big thing, we want really tight ACL's, specific
monitoring agents, etc.
- our data is incredibly important to us, we still want the 3-2-1
backup rule. Primary storage solution, a second storage solution in a
different place, and some of the data that is not reproducible is also
written to tape. We also want to be protected from ransomware or user
errors (so no direct replication to the second storage).
- I like open source, reliability, no fork-lift upgrades, no vendor
lock-in, blah, well, I'm on the ceph list here, no need to convince
you guys ;)
- We're hiring a commercial company to do ceph maintenance and support
for when I'm on leave or leaving the company, but they won't support
clients, backup software, etc, so I want something as simple as
possible. We do have multiple Windows/VMware admins, but no other real
linux guru's.

THE INITIAL PLAN

Given these considerations, I ordered two identical clusters, each
consisting of 3 monitor nodes and 8 osd nodes, Each osd node has 2
ssd's and 10 

[ceph-users] Re: architecture help (iscsi, rbd, backups?)

2023-04-29 Thread Angelo Hongens

Bailey,

Thanks for your extensive reply. You sent me down the rabbit hole of CephFS 
and SMB (looking at a lot of 45Drives videos and knowledge base articles, 
the Houston dashboard, reading up on CTDB, etc.), and this is a really 
interesting option as well! Thanks for the write-up.



By the way, are you using the RBD driver in Windows in production with 
your customers?


The binaries are still called beta, and the last time I tried it in a proof 
of concept setup (a while back), it would never connect and always crashed 
on me. After reporting an issue, I did not get a response for almost 
three months before a dev responded that it was an unsupported IPv6 
issue. Not a problem, and all very understandable; it's open-source 
software written mostly by volunteers, but it made me a bit cautious about 
deploying this to production ;)


Angelo.





On 27/04/2023 18:20, Bailey Allison wrote:

Hey Angelo,

Just to make sure I'm understanding correctly, the main idea for the use
case is to be able to present Ceph storage to Windows clients as SMB?

If so, you can absolutely use CephFS to get that done. This is something we
do all the time with our cluster configurations; if we're looking to present
Ceph storage to Windows clients for a file server use case, it is our
standard choice. To your point about security/ACLs, we can join the Samba
server to an existing Active Directory and then assign permissions through
Windows.

I will provide a high-level overview of an average setup to hopefully
explain it better, and of course if you have any questions please let me
know. I understand that this is a way different setup from what you
currently have planned, but it's a different choice that could prove useful
in your case.

Essentially how it works is that we have a Ceph cluster with CephFS
configured, we map CephFS kernel mounts onto some gateway nodes, and we then
expose SMB shares to clients via Samba, with CTDB for high availability.

i.e.:

ceph cluster > cephfs > map cephfs kernel mount on linux client > create
smb share on top of cephfs kernel mount > connect to samba share with
windows clients.

The SMB gateway nodes hosting Samba can also be joined to an Active
Directory, which allows setting Windows ACLs for more in-depth control of
permissions.
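
At its simplest the gateway side is just a CephFS kernel mount plus a Samba
share on top of it, something like the following (monitor names, paths,
secrets and share names are made up):

  # /etc/fstab on the gateway node (CephFS kernel mount):
  mon1,mon2,mon3:/ /mnt/cephfs ceph name=samba,secretfile=/etc/ceph/samba.secret,noatime,_netdev 0 0

  # /etc/samba/smb.conf:
  [research]
      path = /mnt/cephfs/research
      read only = no
      vfs objects = acl_xattr
      map acl inherit = yes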

Also I will say +1 for the RBD driver on Windows, something we also make use
of a lot and have a lot of success with.

Again, please let me know if you need any insight or clarification, or have
any further questions. Hope this is of assistance.

Regards,

Bailey

-Original Message-

From: Angelo Höngens 
Sent: April 27, 2023 6:06 PM
To: ceph-users@ceph.io
Subject: [ceph-users] architecture help (iscsi, rbd, backups?)

Hey guys and girls,

I'm working on a project to build storage for one of our departments, and I

want to ask you guys and girls for input on the high-level overview part.
It's a long one, I hope you read along and comment.


SUMMARY

I made a plan last year to build a 'storage solution' including ceph and

some windows VM's to expose the data over SMB to clients. A year later I
finally have the hardware, built a ceph cluster, and I'm doing tests. Ceph
itself runs great, but when I wanted to start exposing the data using iscsi
to our VMware farm, I ran into some issues. I know the iscsi gateways will
introduce some new performance bottlenecks, but I'm seeing really slow
performance, still working on that.


But then I ran into the warning on the iscsi gateway page: "The iSCSI

gateway is in maintenance as of November 2022. This means that it is no
longer in active development and will not be updated to add new features.".
Wait, what? Why!? What does this mean? Does this mean that iSCSI is now
'feature complete' and will still be supported the next 5 years, or will it
be deprecated in the future? I tried searching, but couldn't find any info
on the decision and the roadmap.


My goal is to build a future-proof setup, and using deprecated components

should not be part of that of course.


If the iscsi gateway will still be supported the next few years and I can

iron out the performance issues, I can still go on with my original plan. If
not, I have to go back to the drawing board. And maybe you guys would advise
me to take another route anyway.


GOALS

My goals/considerations are:

- we want >1PB of storage capacity for cheap (on a tight budget) for

research data. Most of it is 'store once, read sometimes'. <1% of the data
is 'hot'.

- focus is on capacity, but it would be nice to have > 200MB/s of

sequential write/read performance and not 'totally suck' on random i/o. Yes,
not very well quantified, but ah. Sequential writes are most important.

- end users all run Windows computers (mostly VDI's) and a lot of

applications require SMB shares.

- security is a big thing, we want really tight ACL's, specific monitoring

agents, etc.

- our data is incredibly important to us, we still want the 3-2-1 backup

rule. 

[ceph-users] Re: Ceph on windows (wnbd) rbd.exe keeps crashing

2022-09-10 Thread Angelo Hongens

Does that Windows driver even support IPv6?

I remember I could not get the driver working on my IPv6 setup either, 
but there were no logs to help me troubleshoot the issue. I created an issue 
on GitHub somewhere, but got no response, so I gave up.


Ah, here's my ticket. It might not be related to your issue, but I can't 
help suspecting it might be IPv6 related: 
https://github.com/cloudbase/ceph-windows-installer/issues/27
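
By the way, if you want the Windows client to actually write a log, you can
point it at a file in ceph.conf and crank up some debug levels, something
like this (the path is just an example):

  [client]
  log file = C:/ProgramData/ceph/$name.$pid.log
  debug ms = 1
  debug rbd = 20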




On 09/09/2022 04:22, Stefan Kooman wrote:

Hi,

I try to get wnbd to work on a Windows 2019 virtual machine (Version 
1809, OS Build 17763.2183). Unfortunately the process rbd.exe keeps 
crashing (according to logs in event viewer).


I have tested with a linux VM in the same network and that just works.

In the ceph.conf I specified the following (besides mon host):

[global]
log to stderr = true
run dir = C:/ProgramData/ceph
crash dir = C:/ProgramData/ceph
debug client = 2

ms bind ipv4 = false
ms bind ipv6 = true

[client]
keyring = C:/ProgramData/ceph/keyring
admin socket = c:/ProgramData/ceph/$name.$pid.asok
debug client = 2

Note: The Ceph network is IPv6 only, and no IPv4 is involved.


I double checked that I can connect with the cluster from the VM. 
Eventually I made a tcpdump and from that dump I can conclude that the 
client keeps on trying to connect to the cluster (probably because the 
rbd.exe process is restarting over and over) but never seems to manage 
to actually connect to it. Although debug logging is defined in the 
ceph.conf, the client does not write any log output.


Here an example of a crash report:


Version=1
EventType=APPCRASH
EventTime=133068416700266255
ReportType=2
Consent=1
UploadTime=133068512230149899
ReportStatus=4196
ReportIdentifier=1c521fd8-325f-494e-9b6b-e7a608d9f1b1
IntegratorReportIdentifier=1a2b76ca-248e-4636-9ed6-1cae6c332c0c
Wow64Host=34404
NsAppName=rbd.exe
AppSessionGuid=03c4-0001-0011-7c4e-cb1b05c1d801
TargetAppId=W:7c6b388ea9ba05b8df74c0e19907c78c0904!e32ad63d5bac11abc70d42f12a6c189e6b9edfdc!rbd.exe
TargetAppVer=1970//01//01:00:00:00!26b9f4!rbd.exe
BootId=4294967295
TargetAsId=2397
IsFatal=1
EtwNonCollectReason=1
Response.type=4
Sig[0].Name=Application Name
Sig[0].Value=rbd.exe
Sig[1].Name=Application Version
Sig[1].Value=0.0.0.0
Sig[2].Name=Application Timestamp
Sig[2].Value=
Sig[3].Name=Fault Module Name
Sig[3].Value=libceph-common.dll
Sig[4].Name=Fault Module Version
Sig[4].Value=0.0.0.0
Sig[5].Name=Fault Module Timestamp
Sig[5].Value=
Sig[6].Name=Exception Code
Sig[6].Value=4015
Sig[7].Name=Exception Offset
Sig[7].Value=0069a3bb
DynamicSig[1].Name=OS Version
DynamicSig[1].Value=10.0.17763.2.0.0.272.7
DynamicSig[2].Name=Locale ID
DynamicSig[2].Value=1033
DynamicSig[22].Name=Additional Information 1
DynamicSig[22].Value=74e0
DynamicSig[23].Name=Additional Information 2
DynamicSig[23].Value=74e0e499b0f720be12f39b443eb7059c
DynamicSig[24].Name=Additional Information 3
DynamicSig[24].Value=f912
DynamicSig[25].Name=Additional Information 4
DynamicSig[25].Value=f9121477e3a172a0e72323a2204f3558
UI[2]=C:\Program Files\Ceph\bin\rbd.exe
LoadedModule[0]=C:\Program Files\Ceph\bin\rbd.exe
LoadedModule[1]=C:\Windows\SYSTEM32\ntdll.dll
LoadedModule[2]=C:\Windows\System32\KERNEL32.DLL
LoadedModule[3]=C:\Windows\System32\KERNELBASE.dll
LoadedModule[4]=C:\Windows\System32\msvcrt.dll
LoadedModule[5]=C:\Windows\System32\WS2_32.dll
LoadedModule[6]=C:\Windows\System32\RPCRT4.dll
LoadedModule[7]=C:\Program Files\Ceph\bin\libboost_program_options.dll
LoadedModule[8]=C:\Program Files\Ceph\bin\libwinpthread-1.dll
LoadedModule[9]=C:\Program Files\Ceph\bin\libgcc_s_seh-1.dll
LoadedModule[10]=C:\Program Files\Ceph\bin\libstdc++-6.dll
LoadedModule[11]=C:\Program Files\Ceph\bin\libceph-common.dll
LoadedModule[12]=C:\Windows\System32\ADVAPI32.dll
LoadedModule[13]=C:\Windows\System32\sechost.dll
LoadedModule[14]=C:\Windows\System32\bcrypt.dll
LoadedModule[15]=C:\Program Files\Ceph\bin\librados.dll
LoadedModule[16]=C:\Program Files\Ceph\bin\librbd.dll
LoadedModule[17]=C:\Program Files\Ceph\bin\libboost_random.dll
LoadedModule[18]=C:\Program Files\Ceph\bin\libboost_thread_pthread.dll
LoadedModule[19]=C:\Program Files\Ceph\bin\libcrypto-1_1-x64.dll
LoadedModule[20]=C:\Windows\System32\USER32.dll
LoadedModule[21]=C:\Windows\System32\win32u.dll
LoadedModule[22]=C:\Windows\System32\GDI32.dll
LoadedModule[23]=C:\Program Files\Ceph\bin\libboost_iostreams.dll
LoadedModule[24]=C:\Windows\System32\gdi32full.dll
LoadedModule[25]=C:\Windows\System32\msvcp_win.dll
LoadedModule[26]=C:\Windows\System32\ucrtbase.dll
LoadedModule[27]=C:\Windows\SYSTEM32\IPHLPAPI.DLL
LoadedModule[28]=C:\Program Files\Ceph\bin\libssl-1_1-x64.dll
LoadedModule[29]=C:\Program Files\Ceph\bin\zlib1.dll
LoadedModule[30]=C:\Windows\System32\IMM32.DLL
LoadedModule[31]=C:\Windows\system32\napinsp.dll
LoadedModule[32]=C:\Windows\System32\mswsock.dll
LoadedModule[33]=C:\Windows\SYSTEM32\DNSAPI.dll
LoadedModule[34]=C:\Windows\System32\NSI.dll

[ceph-users] Moving rbd-images across pools?

2022-06-01 Thread Angelo Hongens

Hey guys and girls, newbie question here (still in planning phase).

I'm thinking about starting out with a mini cluster with 4 nodes and 
perhaps 3x replication, for budgetary reasons. In a few months or 
next year, I'll get extra budget and can extend to 7-8 nodes. I will 
then want to change to EC 4:2.


But how does this work? Can I create a new pool on the same cluster with 
a different policy? And can I move RBD images across while they are 
mounted, without user impact? Or do I need to unmount the images, move 
the images to another pool and then mount them again?
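
For context, what I had in mind is roughly the following (pool, profile and
image names are made up), but I'm not sure how transparent it is for
attached clients; as far as I know the image has to be closed around the
'prepare' step:

  # new EC profile and data pool (RBD metadata stays in a replicated pool):
  ceph osd erasure-code-profile set ec42 k=4 m=2
  ceph osd pool create rbd_ec_data erasure ec42
  ceph osd pool set rbd_ec_data allow_ec_overwrites true

  # live-migrate an image so its data lands in the EC pool:
  rbd migration prepare rbd/myimage --data-pool rbd_ec_data
  rbd migration execute rbd/myimage
  rbd migration commit rbd/myimage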


Angelo.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Recommendations on books

2022-04-30 Thread Angelo Hongens


Thanks for all the responses! I will review your recommendations later.

I'm reading Learning Ceph, 2nd edition, right now for an overview of how all 
the puzzle pieces fit together, and I will play around with a test cluster 
as well, of course.


Angelo.


On 28/04/2022 02:44, Dhairya Parmar wrote:

Hi Angelo,

Publications and research papers: you can follow this link; it contains all 
the Ceph publications and research papers that will substantially help you 
understand Ceph and its umbrella (Ceph's components).

Ceph Architecture: link

Crash Course in CRUSH by Sage Weil: link


I hope it helps you understand Ceph in some or the other way.

Regards,
Dhairya

On Wed, Apr 27, 2022 at 8:47 AM Angelo Höngens wrote:


Hey guys and girls,

Can you recommend some books to get started with Ceph? I know the
docs are probably a good source, but books, in my experience, do a better
job of gluing it all together and painting the big picture. And I can take
a book to places where reading docs on a laptop is inconvenient. I know
Amazon has some books, but what do you think are the best ones?

I hope to read about the different deployment methods (cephadm? Docker?
Native?), what PGs and CRUSH maps are, best practices in building
clusters, ratios between OSD, WAL, DB, etc., what they do and why,
and use cases for CephFS vs RBD vs S3, etc.

Looking forward to your tips!

Angelo.
___
ceph-users mailing list -- ceph-users@ceph.io

To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io