ally depends on your use for Ceph and the I/O activity
expected as to what the best option may be.
On Fri, 20 Sep 2019 14:56:12 +0800, Wladimir Mutel <m...@mwg.dp.ua> wrote:
Dear everyone,
Last year I set up an experimental Ceph cluster (still single node,
failure domain = osd, MB Asus P10S-M WS, CPU Xeon E3-1235L, RAM 64 GB,
HDDs WD30EFRX, Ubuntu 18.04, now with kernel 5.3.0 from Ubuntu mainline
PPA and Ceph 14.2.4 from
David Turner wrote:
Yes, when creating an EC profile, it automatically creates a CRUSH rule
specific for that EC profile. You are also correct that 2+1 doesn't
really have any resiliency built in. 2+2 would allow 1 node to go down
while still having your data accessible. It will use 2x data
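In command form, the kind of profile David describes would look roughly like this (plain 'ceph' CLI; the profile and pool names below are only illustrative):

    # define a k=2, m=2 erasure-code profile; failure domain 'osd' matches
    # the single-node setup in this thread (use 'host' on a multi-node cluster)
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=osd
    # creating a pool from the profile generates the matching CRUSH rule
    ceph osd pool create ecpool 64 64 erasure ec22

With k=2, m=2 any single failure-domain member can go down while the data stays readable, at the cost of 2x raw space.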
Dear all,
I want to share some experience of upgrading my experimental 1-host
Ceph cluster from v13.2.0 to v13.2.1.
First, I fetched the new packages and installed them using 'apt
dist-upgrade', which went smoothly as usual.
Then I noticed from 'lsof' that the Ceph daemons were not restarted
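For anyone hitting the same thing: the upgraded packages left the daemons running on the old binaries, so something like the following is needed afterwards (systemd target names as shipped in the Ubuntu/Debian Ceph packages; restart OSDs one at a time on a real multi-node cluster):

    systemctl restart ceph-mon.target   # monitors first
    systemctl restart ceph-mgr.target   # then managers
    systemctl restart ceph-osd.target   # then OSDs
    ceph versions                       # confirm everything now reports the new version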
Jason Dillaman wrote:
I am doing more experiments with Ceph iSCSI gateway and I am a bit
confused on how to properly repurpose an RBD image from iSCSI target
into QEMU virtual disk and back
This isn't really a use case that we support or intend to support. Your
best bet would
Hi,
I cloned an NTFS filesystem with bad blocks from a USB HDD onto a Ceph RBD
volume (using ntfsclone, so the copy has sparse regions), and decided to clear
the bad blocks within the copy. I ran chkdsk /b from Windows and it fails on
free space verification (step 5 of 5).
In tcmu-runner.log I see that
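For context, the clone step described above can be done roughly like this, assuming the RBD image is mapped through the kernel client (the device and image names below are hypothetical):

    # map the target RBD image and clone the NTFS filesystem onto it;
    # --rescue keeps going over the source disk's bad sectors
    rbd map libvirt/ntfs-copy                               # hypothetical image name
    ntfsclone --rescue --overwrite /dev/rbd0p1 /dev/sdb1    # destination, then source

ntfsclone copies only allocated clusters, which is why the copy ends up sparse on the RBD side.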
Dear all,
I am doing more experiments with Ceph iSCSI gateway and I am a bit confused on
how to properly repurpose an RBD image from iSCSI target into QEMU virtual disk
and back
First, I create an RBD image and set it up as an iSCSI backstore in gwcli,
specifying its size exactly to avoid
Wladimir Mutel wrote:
it back to gwcli/disks), I discover that its size is rounded up to 3
TiB, i.e. 3072 GiB or 786432*4M Ceph objects. As we know, GPT is stored
'targetcli ls /' (there, it is still 3.0T). Also, when I restart
rbd-target-gw.service, it gets resized back up to 3.0T as shown
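The mismatch can be seen by comparing what rbd reports with what the gateway side reports, e.g. (the pool/image name here is a placeholder):

    rbd info rbd/win2016-3tb-1 | grep size    # should still report 715398 objects
    targetcli ls /                            # the LUN shows up rounded to 3.0T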
Dear all,
I create an RBD image to be used as an iSCSI target, with a size close to the most
popular 3TB HDD size: 5860533168 512-byte sectors, or 715398*4M Ceph
objects (2.7 TiB or 2794.4 GiB). Then I add it into gwcli/disks (having to
specify the same size, 2861592M), and then, after some
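In command form that is roughly the following (pool and image names are placeholders; the size is the whole-object round-up of the HDD capacity):

    # 5860533168 sectors * 512 B = 2861588.6 MiB, rounded up to whole
    # 4 MiB objects: 715398 * 4 MiB = 2861592 MiB
    rbd create rbd/win2016-3tb-1 --size 2861592M
    # then, inside gwcli, the same size has to be repeated:
    /disks> create pool=rbd image=win2016-3tb-1 size=2861592M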
Dear all,
I set up a minimal 1-node Ceph cluster to evaluate its performance.
We tried to save as much as possible on the hardware, so now the box has an
Asus P10S-M WS motherboard, a Xeon E3-1235L v5 CPU, 64 GB of DDR4 ECC RAM and
8x3TB HDDs (WD30EFRX) connected to the on-board SATA ports. Also
Jason Dillaman wrote:
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
I don't use either MPIO or MCS on the Windows 2008 R2 or Windows 10
initiator (not Win2016, but I hope there is not much difference). I am trying to make
it work with a single session first. Also, right now I only
Jason Dillaman wrote:
Jun 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 [DEBUG]
dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
Jun 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205
On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote:
> > So, my usual question is - where to look and what logs to enable
> > to find out what is going wrong ?
> If not overridden, tcmu-runner will default to 'client.admin' [1] so
> you shouldn't need to add any
' but I am not sure
about tcmu-runner and/or the user:rbd backstore provider.
Thank you in advance for any useful directions
out of my problem.
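Regarding which logs to enable: tcmu-runner's own verbosity can be raised in its config file (path and option as in the stock tcmu-runner package; please double-check against your version):

    # /etc/tcmu/tcmu.conf
    log_level = 5        # 4 = debug, 5 = also log every SCSI CDB
    # restart the daemon to apply
    systemctl restart tcmu-runner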
Wladimir Mutel wrote:
Failed : disk create/update failed on p10s. LUN allocation failure
Well, this was fixed by updating
Jason Dillaman wrote:
One more question: how should I set the 'rbd-read-only' profile properly?
I tried to set it for 'client.iso' on both the 'iso' and 'jerasure21' pools,
and this did not work. Setting the profile on both pools to 'rbd' worked. But I
don't want my iso images to be accidentally
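For reference, the caps being attempted would look roughly like this with the generic 'ceph auth caps' syntax (user and pool names as mentioned in this thread):

    # read-only RBD access covering both the image pool and its EC data pool
    ceph auth caps client.iso \
        mon 'profile rbd' \
        osd 'profile rbd-read-only pool=iso, profile rbd-read-only pool=jerasure21'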
Jason Dillaman wrote:
The caps for those users looks correct for Luminous and later
clusters. Any chance you are using data pools with the images? It's
just odd that you have enough permissions to open the RBD image but
cannot read its data objects.
Yes, I use erasure-pool as data-pool
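What Jason points at: with a separate erasure-coded data pool, the client's OSD caps have to cover that pool as well, e.g. something like (pool names taken from this thread):

    ceph auth caps client.libvirt \
        mon 'profile rbd' \
        osd 'profile rbd pool=libvirt, profile rbd pool=jerasure21'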
Jason Dillaman wrote:
Can you run "rbd --id libvirt --pool libvirt win206-test-3tb " w/o error? It sounds like your CephX caps for
client.libvirt are not permitting read access to the image data
objects.
I tried to run 'rbd export' with these params,
but it said it was unable
Dear all,
I installed QEMU, libvirtd and their RBD plugins and am now trying
to make QEMU use my Ceph storage. I created an 'iso' pool
and imported a Windows installation image there (rbd import).
Also I created a 'libvirt' pool and there, created a 2.7-TB image
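In command form, that setup was roughly the following (the pg counts, the ISO filename and the image names are placeholders; --data-pool reflects the erasure-coded data pool mentioned above in the thread):

    ceph osd pool create iso 64
    ceph osd pool application enable iso rbd
    rbd import en_windows_server_2016.iso iso/win2016-install    # hypothetical ISO file
    ceph osd pool create libvirt 64
    ceph osd pool application enable libvirt rbd
    rbd create libvirt/win2016-3tb-1 --size 2861592M --data-pool jerasure21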
On Mon, Jun 04, 2018 at 11:12:58AM +0300, Wladimir Mutel wrote:
> /disks> create pool=rbd image=win2016-3tb-1 size=2861589M
> CMD: /disks/ create pool=rbd image=win2016-3tb-1 size=2861589M count=1
> max_data_area_mb=None
> pool 'rbd' is ok to use
> Creating/mapping disk
On Fri, Jun 01, 2018 at 08:20:12PM +0300, Wladimir Mutel wrote:
>
> And still, when I do '/disks create ...' in gwcli, it says
> that it wants 2 existing gateways. Probably this is related
> to the created 2-TPG structure and I should look for more ways
>
rove' that json config so that rbd-target-gw loads it
as I need on a single host.
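For anyone else stuck on the two-gateway requirement: as far as I can tell the check is driven by the ceph-iscsi settings read from /etc/ceph/iscsi-gateway.cfg, where the minimum can be lowered for a test box (option name per ceph-iscsi-config; please verify against your version):

    # /etc/ceph/iscsi-gateway.cfg
    [config]
    cluster_name = ceph
    gateway_keyring = ceph.client.admin.keyring
    # allow a single-gateway, test-only configuration
    minimum_gateways = 1

    # then restart the gateway service(s) to pick it up
    systemctl restart rbd-target-gw    # and rbd-target-api, if present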
Wladimir Mutel wrote:
Well, ok, I moved the second address into a different subnet
(192.168.201.231/24) and also reflected that in the 'hosts' file.
But that did not help much:
/iscsi-target...test/gateways> cr
e P10S host I have in my hands right now?
An additional host and 10GbE hardware would require additional
funding, which would be possible only at some point in the future.
Thanks in advance for your responses
Wladimir Mutel wrote:
I have both its Ethernets connected to the sa
Dear all,
I am experimenting with a Ceph setup. I set up a single node
(Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
Ubuntu 18.04 Bionic, Ceph packages from
http://download.ceph.com/debian-luminous/dists/xenial/
and iscsi parts built