[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-26 Thread duluxoz
I don't know, Marc; I only know what I had to do to get the thing 
working  :-)



[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-26 Thread Marc
> ... is that the RBD Image needs to have a partition entry created for
> it - that might be "obvious" to some, but my ongoing belief is that
> most "obvious" things aren't, so it's better to be explicit about such
> things.

Are you absolutely sure about this? I think you are missing something 
somewhere. I have been using the method of adding physical disks, and now 
RBD devices, to Linux without partitioning them for years. I mostly do this 
with disks that contain data that is expected to grow, because that way it is 
easier to resize them while the VMs stay active/up. (This comes from the days 
when the new partition table was not re-read and you had to reboot.)
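
For what it's worth, the resize workflow that a partition table would only 
get in the way of looks roughly like this - just a sketch, reusing the image 
name from this thread and assuming the filesystem sits directly on the mapped 
device and that your kernel client refreshes the device size on resize 
(re-map it if not):

[code]

# grow the image on the Ceph side, e.g. from 4T to 6T
rbd resize --size 6T my_pool.meta/my_image

# grow the filesystem online - no unmount, no partition table to fix up
xfs_growfs /mnt/my_image

[/code]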






[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-26 Thread duluxoz

Hi All,

OK, an update for everyone: a note about some (what I believe to be) 
missing information in the Ceph Doco, a success story, and an admission 
on my part that I may have left out some important information.


So to start with, I finally got everything working - I now have my 4T 
RBD Image mapped, mounted, and tested on my host.  YA!


The missing Ceph Doco Info:

What I found in the latest Red Hat documentation 
(https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/7/html/block_device_guide/the-rbd-kernel-module) 
that is not in the Ceph documentation (perhaps because it is 
EL-specific? - but a note should be added anyway, even if it is 
EL-specific) is that the RBD Image needs to have a partition entry 
created for it - that might be "obvious" to some, but my ongoing belief 
is that most "obvious" things aren't, so it's better to be explicit about 
such things. Just my $0.02 worth.  :-)


The relevant commands, which are performed after an `rbd map 
my_pool.meta/my_image --id my_image_user`, are:


[codeblock]

parted /dev/rbd0 mklabel gpt
parted /dev/rbd0 mkpart primary xfs 0% 100%

[/codeblock]

From there the RBD Image needs a file system: `mkfs.xfs /dev/rbd0p1`

And a mount: `mount /dev/rbd0p1 /mnt/my_image`
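
For completeness, to have the mapping and mount come back after a reboot I'm 
relying on the same rbdmap/fstab approach as in my original post. A rough 
sketch only, reusing the placeholder names from above - the keyring path and 
the partition device node are assumptions, so check what actually appears 
under /dev/ after mapping before copying this verbatim:

[code]

# /etc/ceph/rbdmap - mapped at boot by rbdmap.service
my_pool.meta/my_image id=my_image_user,keyring=/etc/ceph/ceph.client.my_image_user.keyring

# /etc/fstab - noauto, as per the Ceph doco example in my original post
/dev/rbd0p1  /mnt/my_image  xfs  noauto  0 0

# enable the service
systemctl enable --now rbdmap.service

[/code]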

Now, the omission on my part:

The host I was attempting all this on was an oVirt-managed VM. 
Apparently, an oVirt-managed VM doesn't like/allow (speculation on my 
part) running the `parted` or `mkfs.xfs` commands on an RBD Image. What 
I had to do to test this and get it working was to run the `rbd map`, 
`parted`, and `mkfs.xfs` commands on a physical host (which I did), THEN 
unmount/unmap the image from the physical host and map/mount it on the VM.
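
In command form, the hand-over looked roughly like this (a sketch only; same 
placeholder names as above, and it assumes the keyring/ceph.conf are already 
in place on both the physical host and the VM):

[code]

# on the physical host, once parted and mkfs.xfs are done
umount /mnt/my_image          # only if it was test-mounted there
rbd device unmap /dev/rbd0

# then on the oVirt VM
rbd device map my_pool.meta/my_image --id my_image_user
mount /dev/rbd0p1 /mnt/my_image

[/code]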


So my apologies for not providing all the info - I didn't consider it 
to be relevant - my bad!


So all good in the end. I hope the above helps others if they have 
similar issues.


Thank you all who helped / pitched in with ideas - I really, *really* 
appreciate it.


Thanks too to Wesley Dillingham - although the suggestion wasn't 
relevant to this issue, it did cause me to look at the firewall settings 
on the Ceph Cluster where I found (and corrected) an unrelated issue 
that hadn't reared its ugly head yet. Thanks Wes.


Cheers (until next time)  :-P

Dulux-Oz


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-25 Thread Alwin Antreich
Hi,


March 24, 2024 at 8:19 AM, "duluxoz"  wrote:
> Hi,
> 
> Yeah, I've been testing various configurations since I sent my last
> email - all to no avail.
> 
> So I'm back to the start with a brand new 4T image which is rbdmapped
> to /dev/rbd0.
> 
> It's not formatted (yet) and so not mounted.
> 
> Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
> /dev/rbd/my_pool/my_image) I get the errors I previously mentioned and
> the resulting image then becomes unusable (in every sense of the word).
> 
> If I run a fdisk -l (before trying the mkfs.xfs) the rbd image shows up
> in the list - no, I don't actually do a full fdisk on the image.
> 
> An rbd info my_pool:my_image shows the same expected values on both the
> host and ceph cluster.
> 
> I've tried this with a whole bunch of different sized images from 100G
> to 4T and all fail in exactly the same way. (My previous successful
> 100G test I haven't been able to reproduce).
> 
> I've also tried all of the above using an "admin" CephX(sp?) account -
> I always can connect via rbdmap, but as soon as I try an mkfs.xfs it
> fails. This failure also occurs with a mkfs.ext4 as well (all size
> drives).
> 
> The Ceph Cluster is good (self reported and there are other hosts
> happily connected via CephFS) and this host also has a CephFS mapping
> which is working.
> 
> Between running experiments I've gone over the Ceph Doco (again) and I
> can't work out what's going wrong.
> 
> There's also nothing obvious/helpful jumping out at me from the
> logs/journal (sample below):
> 
> ~~~
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524772 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119
> callbacks suppressed
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0,
> sector 4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524774 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0,
> sector 4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> ~~~
> 
> Any ideas what I should be looking at?

Could you please share the command you've used to create the RBD?

Cheers,
Alwin


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread Wesley Dillingham
I suspect this may be a network / firewall issue between the client and one
OSD-server. Perhaps the 100MB RBD didn't have an object mapped to a PG with
the primary on this problematic OSD host but the 2TB RBD does. Just a
theory.
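
If you want to sanity-check that from the client side, something along these
lines would do it - just a sketch; the address/port pair is a placeholder you
would take from the osd dump output, and 6800-7300 is only the default OSD
port range:

[code]

# list the OSDs and the address/port each one is listening on
ceph osd dump | grep '^osd\.'

# from the RBD client, test raw TCP reachability of one reported OSD address
timeout 3 bash -c 'exec 3<>/dev/tcp/192.0.2.10/6802' && echo reachable || echo blocked

[/code]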

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com




On Mon, Mar 25, 2024 at 12:34 AM duluxoz  wrote:

> Hi Alexander,
>
> Already set (and confirmed by running the command again) - no good, I'm
> afraid.
>
> So I just restarted with a brand new image and ran the following commands
> on the ceph cluster and the host respectively. Results are below:
>
> On the ceph cluster:
>
> [code]
>
> rbd create --size 4T my_pool.meta/my_image --data-pool my_pool.data
> --image-feature exclusive-lock --image-feature deep-flatten
> --image-feature fast-diff --image-feature layering --image-feature
> object-map --image-feature data-pool
>
> [/code]
>
> On the host:
>
> [code]
>
> rbd device map my_pool.meta/my_image --id ceph_rbd_user --keyring
> /etc/ceph/ceph.client.ceph_rbd_user.keyring
>
> mkfs.xfs /dev/rbd0
>
> [/code]
>
> Results:
>
> [code]
>
> meta-data=/dev/rbd0  isize=512agcount=32,
> agsize=33554432 blks
>   =   sectsz=512   attr=2, projid32bit=1
>   =   crc=1finobt=1, sparse=1, rmapbt=0
>   =   reflink=1bigtime=1 inobtcount=1
> nrext64=0
> data =   bsize=4096   blocks=1073741824, imaxpct=5
>   =   sunit=16 swidth=16 blks
> naming   =version 2  bsize=4096   ascii-ci=0, ftype=1
> log  =internal log   bsize=4096   blocks=521728, version=2
>   =   sectsz=512   sunit=16 blks, lazy-count=1
> realtime =none   extsz=4096   blocks=0, rtextents=0
> Discarding blocks...Done.
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x1ff00/0x100, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x0/0x100, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on xfs_sb bno 0x0/0x1, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x10080/0x80, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> mkfs.xfs: read failed: Input/output error
> mkfs.xfs: data size check failed
> mkfs.xfs: filesystem failed to initialize
> [/code]
>
> On 25/03/2024 15:17, Alexander E. Patrakov wrote:
> > Hello Matthew,
> >
> > Is the overwrite enabled in the erasure-coded pool? If not, here is
> > how to fix it:
> >
> > ceph osd pool set my_pool.data allow_ec_overwrites true


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread duluxoz

Hi Alexander,

Already set (and confirmed by running the command again) - no good, I'm 
afraid.


So I just restarted with a brand new image and ran the following commands 
on the ceph cluster and the host respectively. Results are below:


On the ceph cluster:

[code]

rbd create --size 4T my_pool.meta/my_image --data-pool my_pool.data 
--image-feature exclusive-lock --image-feature deep-flatten 
--image-feature fast-diff --image-feature layering --image-feature 
object-map --image-feature data-pool


[/code]

On the host:

[code]

rbd device map my_pool.meta/my_image --id ceph_rbd_user --keyring 
/etc/ceph/ceph.client.ceph_rbd_user.keyring


mkfs.xfs /dev/rbd0

[/code]

Results:

[code]

meta-data=/dev/rbd0              isize=512    agcount=32, agsize=33554432 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=1073741824, imaxpct=5
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
mkfs.xfs: pwrite failed: Input/output error
libxfs_bwrite: write failed on (unknown) bno 0x1ff00/0x100, err=5
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Input/output error
libxfs_bwrite: write failed on (unknown) bno 0x0/0x100, err=5
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Input/output error
libxfs_bwrite: write failed on xfs_sb bno 0x0/0x1, err=5
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: pwrite failed: Input/output error
libxfs_bwrite: write failed on (unknown) bno 0x10080/0x80, err=5
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
mkfs.xfs: read failed: Input/output error
mkfs.xfs: data size check failed
mkfs.xfs: filesystem failed to initialize
[/code]

On 25/03/2024 15:17, Alexander E. Patrakov wrote:

Hello Matthew,

Is the overwrite enabled in the erasure-coded pool? If not, here is
how to fix it:

ceph osd pool set my_pool.data allow_ec_overwrites true



[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread Alexander E. Patrakov
Hello Matthew,

Is the overwrite enabled in the erasure-coded pool? If not, here is
how to fix it:

ceph osd pool set my_pool.data allow_ec_overwrites true
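
You can also check the current value before changing it - a quick sketch
using the data pool name from your create command:

[code]

ceph osd pool get my_pool.data allow_ec_overwrites

# if that reports false, enable it and retry the mkfs
ceph osd pool set my_pool.data allow_ec_overwrites true

[/code]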

On Mon, Mar 25, 2024 at 11:17 AM duluxoz  wrote:
>
> Hi Curt,
>
> Blockdev --getbsz: 4096
>
> Rbd info my_pool.meta/my_image:
>
> ~~~
>
> rbd image 'my_image':
>  size 4 TiB in 1048576 objects
>  order 22 (4 MiB objects)
>  snapshot_count: 0
>  id: 294519bf21a1af
>  data_pool: my_pool.data
>  block_name_prefix: rbd_data.30.294519bf21a1af
>  format: 2
>  features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten, data-pool
>  op_features:
>  flags:
>  create_timestamp: Sun Mar 24 17:44:33 2024
>  access_timestamp: Sun Mar 24 17:44:33 2024
>  modify_timestamp: Sun Mar 24 17:44:33 2024
> ~~~
>
> On 24/03/2024 21:10, Curt wrote:
> > Hey Matthew,
> >
> > One more thing out of curiosity can you send the output of blockdev
> > --getbsz on the rbd dev and rbd info?
> >
> > I'm using 16TB rbd images without issue, but I haven't updated to reef
> > .2 yet.
> >
> > Cheers,
> > Curt
>


-- 
Alexander E. Patrakov


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread duluxoz

Hi Curt,

Blockdev --getbsz: 4096

Rbd info my_pool.meta/my_image:

~~~

rbd image 'my_image':
    size 4 TiB in 1048576 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 294519bf21a1af
    data_pool: my_pool.data
    block_name_prefix: rbd_data.30.294519bf21a1af
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, 
deep-flatten, data-pool

    op_features:
    flags:
    create_timestamp: Sun Mar 24 17:44:33 2024
    access_timestamp: Sun Mar 24 17:44:33 2024
    modify_timestamp: Sun Mar 24 17:44:33 2024
~~~

On 24/03/2024 21:10, Curt wrote:

Hey Matthew,

One more thing out of curiosity can you send the output of blockdev 
--getbsz on the rbd dev and rbd info?


I'm using 16TB rbd images without issue, but I haven't updated to reef 
.2 yet.


Cheers,
Curt



[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread duluxoz

Hi, Alwin,

Command (as requested): rbd create --size 4T my_pool.meta/my_image 
--data-pool my_pool.data --image-feature exclusive-lock --image-feature 
deep-flatten --image-feature fast-diff --image-feature layering 
--image-feature object-map --image-feature data-pool


On 24/03/2024 22:53, Alwin Antreich wrote:

Hi,


March 24, 2024 at 8:19 AM, "duluxoz"  wrote:

Hi,

Yeah, I've been testing various configurations since I sent my last
email - all to no avail.

So I'm back to the start with a brand new 4T image which is rbdmapped to
/dev/rbd0.

It's not formatted (yet) and so not mounted.

Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
/dev/rbd/my_pool/my_image) I get the errors I previously mentioned and the
resulting image then becomes unusable (in every sense of the word).

If I run a fdisk -l (before trying the mkfs.xfs) the rbd image shows up
in the list - no, I don't actually do a full fdisk on the image.

An rbd info my_pool:my_image shows the same expected values on both the
host and ceph cluster.

I've tried this with a whole bunch of different sized images from 100G
to 4T and all fail in exactly the same way. (My previous successful 100G
test I haven't been able to reproduce).

I've also tried all of the above using an "admin" CephX(sp?) account - I
always can connect via rbdmap, but as soon as I try an mkfs.xfs it
fails. This failure also occurs with a mkfs.ext4 as well (all size drives).

The Ceph Cluster is good (self reported and there are other hosts
happily connected via CephFS) and this host also has a CephFS mapping
which is working.

Between running experiments I've gone over the Ceph Doco (again) and I
can't work out what's going wrong.

There's also nothing obvious/helpful jumping out at me from the
logs/journal (sample below):

~~~
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524773 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524772 65536~4128768 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119
callbacks suppressed
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524774 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
524773 65536~4128768 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
~~~

Any ideas what I should be looking at?

Could you please share the command you've used to create the RBD?

Cheers,
Alwin



[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread Curt
Hey Matthew,

One more thing out of curiosity can you send the output of blockdev
--getbsz on the rbd dev and rbd info?

I'm using 16TB rbd images without issue, but I haven't updated to reef .2
yet.
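
That is (a sketch, using the device and image names from earlier in the
thread):

[code]

blockdev --getbsz /dev/rbd0
rbd info my_pool.meta/my_image

[/code]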

Cheers,
Curt


On Sun, 24 Mar 2024, 11:12 duluxoz,  wrote:

> Hi Curt,
>
> Nope, no dropped packets or errors - sorry, wrong tree  :-)
>
> Thanks for chiming in.
>
> On 24/03/2024 20:01, Curt wrote:
> > I may be barking up the wrong tree, but if you run ip -s link show
> > yourNicID on this server or your OSDs do you see any
> > errors/dropped/missed?
>
>


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread duluxoz

Hi Curt,

Nope, no dropped packets or errors - sorry, wrong tree  :-)

Thanks for chiming in.

On 24/03/2024 20:01, Curt wrote:
I may be barking up the wrong tree, but if you run ip -s link show 
yourNicID on this server or your OSDs do you see any 
errors/dropped/missed?



[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread Curt
I may be barking up the wrong tree, but if you run ip -s link show
yourNicID on this server or your OSDs do you see any errors/dropped/missed?

On Sun, 24 Mar 2024, 09:20 duluxoz,  wrote:

> Hi,
>
> Yeah, I've been testing various configurations since I sent my last
> email - all to no avail.
>
> So I'm back to the start with a brand new 4T image which is rbdmapped to
> /dev/rbd0.
>
> It's not formatted (yet) and so not mounted.
>
> Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
> /dev/rbd/my_pool/my_image) I get the errors I previously mentioned and the
> resulting image then becomes unusable (in every sense of the word).
>
> If I run a fdisk -l (before trying the mkfs.xfs) the rbd image shows up
> in the list - no, I don't actually do a full fdisk on the image.
>
> An rbd info my_pool:my_image shows the same expected values on both the
> host and ceph cluster.
>
> I've tried this with a whole bunch of different sized images from 100G
> to 4T and all fail in exactly the same way. (My previous successful 100G
> test I haven't been able to reproduce).
>
> I've also tried all of the above using an "admin" CephX(sp?) account - I
> always can connect via rbdmap, but as soon as I try an mkfs.xfs it
> fails. This failure also occurs with a mkfs.ext4 as well (all size drives).
>
> The Ceph Cluster is good (self reported and there are other hosts
> happily connected via CephFS) and this host also has a CephFS mapping
> which is working.
>
> Between running experiments I've gone over the Ceph Doco (again) and I
> can't work out what's going wrong.
>
> There's also nothing obvious/helpful jumping out at me from the
> logs/journal (sample below):
>
> ~~~
>
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524772 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119
> callbacks suppressed
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
> 4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524774 0~65536 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno
> 524773 65536~4128768 result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
> Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector
> 4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
> ~~~
>
> Any ideas what I should be looking at?
>
> And thank you for the help  :-)
>
> On 24/03/2024 17:50, Alexander E. Patrakov wrote:
> > Hi,
> >
> > Please test again, it must have been some network issue. A 10 TB RBD
> > image is used here without any problems.
> >


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread duluxoz

Hi,

Yeah, I've been testing various configurations since I sent my last 
email - all to no avail.


So I'm back to the start with a brand new 4T image which is rbdmapped to 
/dev/rbd0.


It's not formatted (yet) and so not mounted.

Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs 
/dev/rbd/my_pool/my_image) I get the errors I previously mentioned and the 
resulting image then becomes unusable (in every sense of the word).


If I run a fdisk -l (before trying the mkfs.xfs) the rbd image shows up 
in the list - no, I don't actually do a full fdisk on the image.


An rbd info my_pool:my_image shows the same expected values on both the 
host and ceph cluster.


I've tried this with a whole bunch of different sized images from 100G 
to 4T and all fail in exactly the same way. (My previous successful 100G 
test I haven't been able to reproduce).


I've also tried all of the above using an "admin" CephX(sp?) account - I 
always can connect via rbdmap, but as soon as I try an mkfs.xfs it 
fails. This failure also occurs with a mkfs.ext4 as well (all size drives).


The Ceph Cluster is good (self reported and there are other hosts 
happily connected via CephFS) and this host also has a CephFS mapping 
which is working.


Between running experiments I've gone over the Ceph Doco (again) and I 
can't work out what's going wrong.


There's also nothing obvious/helpful jumping out at me from the 
logs/journal (sample below):


~~~

Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno 
524773 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno 
524772 65536~4128768 result -1

Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: blk_print_req_error: 119 
callbacks suppressed
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector 
4298932352 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno 
524774 0~65536 result -1
Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write at objno 
524773 65536~4128768 result -1

Mar 24 17:38:29 my_host.my_net.local kernel: rbd: rbd0: write result -1
Mar 24 17:38:29 my_host.my_net.local kernel: I/O error, dev rbd0, sector 
4298940544 op 0x1:(WRITE) flags 0x4000 phys_seg 1024 prio class 2

~~~

Any ideas what I should be looking at?

And thank you for the help  :-)

On 24/03/2024 17:50, Alexander E. Patrakov wrote:

Hi,

Please test again, it must have been some network issue. A 10 TB RBD
image is used here without any problems.




[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-24 Thread Alexander E. Patrakov
Hi,

Please test again, it must have been some network issue. A 10 TB RBD
image is used here without any problems.

On Sun, Mar 24, 2024 at 1:01 PM duluxoz  wrote:
>
> Hi Alexander,
>
> DOH!
>
> Thanks for pointing out my typo - I missed it, and yes, it was my
> issue.  :-)
>
> New issue (sort of): The requirement of the new RBD Image is 2 TB in
> size (it's for a MariaDB Database/Data Warehouse). However, I'm getting
> the following errors:
>
> ~~~
>
> mkfs.xfs: pwrite failed: Input/output error
> libxfs_bwrite: write failed on (unknown) bno 0x7f00/0x100, err=5
> mkfs.xfs: Releasing dirty buffer to free list!
> found dirty buffer (bulk) on free list!
> ~~~
>
> I tested with a 100 GB image in the same pool and was 100% successful,
> so I'm now wondering if there is some sort of Ceph RBD Image size limit
> - although, honestly, that seems to be counter-intuitive to me
> considering CERN uses Ceph for their data storage needs.
>
> Any ideas / thoughts?
>
> Cheers
>
> Dulux-Oz
>
> On 23/03/2024 18:52, Alexander E. Patrakov wrote:
> > Hello Dulux-Oz,
> >
> > Please treat the RBD as a normal block device. Therefore, "mkfs" needs
> > to be run before mounting it.
> >
> > The mistake is that you run "mkfs xfs" instead of "mkfs.xfs" (space vs
> > dot). And, you are not limited to xfs, feel free to use ext4 or btrfs
> > or any other block-based filesystem.
> >
>


-- 
Alexander E. Patrakov


[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-23 Thread duluxoz

Hi Alexander,

DOH!

Thanks for pointing out my typo - I missed it, and yes, it was my 
issue.  :-)


New issue (sort of): The requirement of the new RBD Image is 2 TB in 
size (it's for a MariaDB Database/Data Warehouse). However, I'm getting 
the following errors:


~~~

mkfs.xfs: pwrite failed: Input/output error
libxfs_bwrite: write failed on (unknown) bno 0x7f00/0x100, err=5
mkfs.xfs: Releasing dirty buffer to free list!
found dirty buffer (bulk) on free list!
~~~

I tested with a 100 GB image in the same pool and was 100% successful, 
so I'm now wondering if there is some sort of Ceph RBD Image size limit 
- although, honestly, that seems to be counter-intuitive to me 
considering CERN uses Ceph for their data storage needs.


Any ideas / thoughts?

Cheers

Dulux-Oz

On 23/03/2024 18:52, Alexander E. Patrakov wrote:

Hello Dulux-Oz,

Please treat the RBD as a normal block device. Therefore, "mkfs" needs
to be run before mounting it.

The mistake is that you run "mkfs xfs" instead of "mkfs.xfs" (space vs
dot). And, you are not limited to xfs, feel free to use ext4 or btrfs
or any other block-based filesystem.




[ceph-users] Re: Mounting An RBD Via Kernel Modules

2024-03-23 Thread Alexander E. Patrakov
Hello Dulux-Oz,

Please treat the RBD as a normal block device. Therefore, "mkfs" needs
to be run before mounting it.

The mistake is that you run "mkfs xfs" instead of "mkfs.xfs" (space vs
dot). And, you are not limited to xfs, feel free to use ext4 or btrfs
or any other block-based filesystem.
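
So the minimal happy path is roughly the following - a sketch only, reusing
the pool, image, user, and mount point placeholders from your rbdmap file and
fstab entry:

[code]

rbd device map my_pool.meta/my_image --id ceph_user --keyring /etc/ceph/ceph.client.ceph_user.keyring
mkfs.xfs /dev/rbd0      # note the dot: mkfs.xfs, not "mkfs xfs"
mount /dev/rbd0 /mnt/my_image

[/code]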

On Sat, Mar 23, 2024 at 2:28 PM duluxoz  wrote:
>
> Hi All,
>
> I'm trying to mount a Ceph Reef (v18.2.2 - latest version) RBD Image as
> a 2nd HDD on a Rocky Linux v9.3 (latest version) host.
>
> The EC pool has been created and initialised and the image has been
> created.
>
> The ceph-common package has been installed on the host.
>
> The correct keyring has been added to the host (with a chmod of 600) and
> the host has been configure with an rbdmap file as follows:
> `my_pool.meta/my_image
> id=ceph_user,keyring=/etc/ceph/ceph.client.ceph_user.keyring`.
>
> When running the rbdmap.service the image appears as both `/dev/rbd0`
> and `/dev/rbd/my_pool.meta/my_image`, exactly as the Ceph Doco says it
> should.
>
> So everything *appears* AOK up to this point.
>
> My question now is: Should I run `mkfs xfs` on `/dev/rbd0` *before* or
> *after* I try to mount the image (via fstab:
> `/dev/rbd/my_pool.meta/my_image  /mnt/my_image  xfs  noauto  0 0` - as
> per the Ceph doco)?
>
> The reason I ask is that I've tried this *both* ways and all I get is an
> error message (sorry, can't remember the exact messages and I'm not
> currently in front of the host to confirm it  :-) - but from memory it
> was something about not being able to recognise the 1st block - or
> something like that).
>
> So, I'm obviously doing something wrong, but I can't work out what
> exactly (and the logs don't show any useful info).
>
> Do I, for instance, have the process wrong / don't understand the exact
> process, or is there something else wrong?
>
> All comments/suggestions/etc greatly appreciated - thanks in advance
>
> Cheers
>
> Dulux-Oz



-- 
Alexander E. Patrakov