[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Cedric
Hello,

Data on a volume should be the same regardless of how it is accessed.

I would think the volume was previously initialized with an LVM layer; does
"lvs" show any logical volume on the system?

On Sun, Feb 4, 2024, 08:56 duluxoz  wrote:

> Hi All,
>
> All of this is using the latest version of RL and Ceph Reef
>
> I've got an existing RBD Image (with data on it - not "critical" as I've
> got a backup, but it's rather large so I was hoping to avoid the restore
> scenario).
>
> The RBD Image used to be served out via a (Ceph) iSCSI Gateway, but we
> are now looking to use the plain old kernel module.
>
> The RBD Image has been RBD Mapped to the client's /dev/rbd0 location.
>
> So now I'm trying a straight `mount /dev/rbd0 /mount/old_image/` as a test
>
> What I'm getting back is `mount: /mount/old_image/: unknown filesystem
> type 'LVM2_member'.`
>
> All my Google-fu is telling me that to solve this issue I need to
> reformat the image with a new file system - which would mean "losing"
> the data.
>
> So my question is: how can I get to this data using the rbd kernel modules
> (the iSCSI Gateway is no longer available, so not an option), or am I
> stuck with the restore option?
>
> Or is there something I'm missing (which would not surprise me in the
> least)?  :-)
>
> Thanks in advance (as always, you guys and gals are really, really helpful)
>
> Cheers
>
>
> Dulux-Oz


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz

Hi Cedric,

That's what I thought - the access method shouldn't make a difference.

No, no lvs details at all - I mean, yes, the OSDs show up with the lvs
command on the ceph node(s), but not the individual pools/images (on
the ceph node or the client) - this is, of course, assuming that I'm doing
this right (and there's no guarantee of that).


To clarify: entering `lvs` on the client (which has the rbd image 
"attached" as /dev/rbd0) returns nothing, and `lvs` on any of the ceph 
nodes only returns the data for each OSD/HDD.


Full disclosure (as I should have done in the first post): the
pool/image was/is used as a block device for oVirt VM disk images - but as
far as I'm aware this should not be the cause of this issue (because we
also use glusterfs and we've got similar VM disk images on gluster
drives/bricks, and those VM images show up as "simple" files - yes, I'm
simplifying things a bit with that last statement).


On 04/02/2024 19:16, Cedric wrote:

[...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Jayanth Reddy
Hi,
Does anything show up with "pvs" and "vgs" on the client machine where /dev/rbd0 is mapped?

Thanks

[...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz

Hi Jayanth,

Only a couple of glusterfs volumes, i.e. the glusterfs bricks, are sitting
on LVs, which are on a sparse LV on a VG which spans two PVs.


My Google-fu led me to believe that the above set-up would (should?) be
entirely independent of anything to do with rbd/ceph - was I wrong about that?


Cheers

On 04/02/2024 19:34, Jayanth Reddy wrote:

[...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Gilles Mocellin
On Sunday, 4 February 2024 at 09:29:04 CET, duluxoz wrote:
[...]

Hello,

I think that /dev/rbd* devices are filtered "out" or not filtered "in" by the
filter option in the devices section of /etc/lvm/lvm.conf.

So pvscan (and therefore pvs, vgs and lvs) doesn't look at your device.
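As a rough sketch of what to check (the exact syntax depends on your
distribution's default lvm.conf; the rbd pattern below is only an illustration,
not a recommendation):

~~~
# Show the filter and type settings LVM is actually using
lvmconfig --type full devices/filter devices/global_filter devices/types

# Illustrative example of a filter that explicitly accepts /dev/rbd* devices,
# placed in the devices { } section of /etc/lvm/lvm.conf:
#   filter = [ "a|^/dev/rbd.*|", "a|.*|" ]

# Re-scan once the configuration has been adjusted
pvscan
pvs
~~~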




[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Alex Gorbachev
For Ceph based LVM volumes, you would do this to import:

Map every one of the RBDs to the host

Include this in /etc/lvm/lvm.conf:

types = [ "rbd", 1024 ]

pvscan

vgscan

pvs
vgs

If you see the VG:

vgimportclone -n <new_vg_name> /dev/rbd0 /dev/rbd1 ... --import

Now you should be able to vgchange -a y <vg_name> and see the LVs
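Put together, the sequence above might look roughly like the sketch below
(pool/image, VG and LV names are placeholders to substitute with your own;
adjust device names to whatever `rbd showmapped` reports):

~~~
# 1. Map each RBD image on the client
rbd device map <pool>/<image>        # repeat for every image
rbd showmapped                       # note which /dev/rbdX each one became

# 2. In the devices { } section of /etc/lvm/lvm.conf, allow the rbd device type:
#      types = [ "rbd", 1024 ]

# 3. Re-scan so LVM picks up the PVs on the mapped devices
pvscan
vgscan
pvs
vgs

# 4. If the VG is visible, import/rename it if needed, then activate and mount an LV
vgimportclone -n imported_vg /dev/rbd0 /dev/rbd1 --import   # new VG name is arbitrary
vgchange -a y imported_vg
lvs
mount /dev/imported_vg/<lv_name> /mount/old_image
~~~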

--
Alex Gorbachev
www.iss-integration.com



On Sun, Feb 4, 2024 at 2:55 AM duluxoz  wrote:

> [...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz

~~~
Hello,
I think that /dev/rbd* devices are filtered "out" or not filtered "in" by the
filter option in the devices section of /etc/lvm/lvm.conf.
So pvscan (pvs, vgs and lvs) doesn't look at your device.
~~~

Hi Gilles,

So the lvm filter in the lvm.conf file is set to the default of `filter = [
"a|.*|" ]`, which accepts every block device, so no luck there  :-(


~~~
For Ceph based LVM volumes, you would do this to import:
Map every one of the RBDs to the host
Include this in /etc/lvm/lvm.conf:
types = [ "rbd", 1024 ]
pvscan
vgscan
pvs
vgs
If you see the VG:
vgimportclone -n <new_vg_name> /dev/rbd0 /dev/rbd1 ... --import
Now you should be able to vgchange -a y <vg_name> and see the LVs
~~~

Hi Alex,

Did the above as you suggested - the rbd devices (3 of them, none of which were 
originally part of an lvm on the ceph servers - at least, not set up manually 
by me) still do not show up using pvscan, etc.

So I still can't mount any of them (not without re-creating a fs, anyway, and 
thus losing the data I'm trying to read/import) - they all return the same 
error message (see original post).

Anyone got any other ideas?   :-)

Cheers

Dulux-Oz


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Curt
Out of curiosity, how are you mapping the rbd?  Have you tried using
guestmount?

I'm just spitballing - I have no experience with your issue, so this is
probably not much help or of much use.
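For what it's worth, a guestmount attempt (assuming libguestfs-tools is
installed and the image is mapped as /dev/rbd0; the mount point is arbitrary)
might look something like this sketch:

~~~
# Let libguestfs inspect the device (it understands partition tables and LVM
# inside it) and mount whatever OS filesystems it finds, read-only
mkdir -p /mnt/inspect
guestmount -a /dev/rbd0 -i --ro /mnt/inspect

# Note: -i needs an inspectable OS on the device; for a plain data disk you
# would instead name the filesystem explicitly, e.g.
#   guestmount -a /dev/rbd0 -m /dev/<vg>/<lv> --ro /mnt/inspect

# Clean up when finished
guestunmount /mnt/inspect
~~~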

On Mon, 5 Feb 2024, 10:05 duluxoz,  wrote:

> [...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz

Mounting/Mapping commands:

`rbd device map data_pool/data_pool_image_1 --id rbd_user --keyring 
/etc/ceph/ceph.client.rbd_user.keyring`


`mount /dev/rbd0 /mountpoint/rbd_data`

data_pool is showing up in an lsblk command as mapped to /dev/rbd0:

~~~
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
rbd0 252:0    0   8T  0 disk
~~~

*Yes, it's an 8 TB "disk" - lots of space for bulk files - in truth, it's
probably "overkill", but if you've got that much space...  :-)


On 05/02/2024 17:43, Curt wrote:
[...]



[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Gilles Mocellin
Perhaps there is a partition table on your device.

What does this show:
fdisk -l /dev/rbd0

If there is one, you can create additional devices with:
kpartx -a /dev/rbd0
And you'll have /dev/rbd0p1, which will perhaps be a PV.
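Roughly, on the assumption that a partition table is indeed present (device
names as above):

~~~
# Look for a partition table on the mapped device
fdisk -l /dev/rbd0

# If partitions are listed, create device-mapper entries for them
kpartx -av /dev/rbd0

# The partitions then typically show up under /dev/mapper (e.g. rbd0p1);
# re-scan so LVM can find any PV inside them
ls -l /dev/mapper/
pvscan
pvs
~~~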

On 5 February 2024 at 07:51:24 GMT+01:00, duluxoz wrote:
>[...]


[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-05 Thread Alex Gorbachev
Don't try to mount the LVM2 PV as a filesystem.  You need to look at syslog
to determine why you are unable to scan it in.  When you have your PVs
mapped, they should show up in lsblk and pvs.

Once you determine why they are not showing (maybe there is something else
mapping type 1024, so remove it), and you can see the PV - you should also
see the VG in vgs and LV in lvs.  Then you mount the LV:

mount /dev/<vg_name>/<lv_name> /<mountpoint>
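In other words, once the PV finally shows up, the remaining steps are roughly
as follows (the VG/LV names below are placeholders for whatever vgs/lvs report):

~~~
pvs                       # the PV on /dev/rbd0 should be listed
vgs                       # note the VG name
vgchange -a y old_vg      # activate it (substitute the real VG name)
lvs                       # note the LV name(s)
mount /dev/old_vg/old_lv /mount/old_image   # substitute the real VG/LV names
~~~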
--
Alex Gorbachev
www.iss-integration.com



On Mon, Feb 5, 2024 at 1:04 AM duluxoz  wrote:

> [...]
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

