Re: [ceph-users] ceph-deploy and journal on separate disk
On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov wrote:
> Hi.
> With this patch all is ok.
> Thanks for help!

Thanks for confirming this. I have opened a ticket
(http://tracker.ceph.com/issues/6085) and will work on this patch to get it
merged.

> -----Original Message-----
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Wednesday, August 21, 2013 7:16 PM
> To: Pavel Timoschenkov
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] ceph-deploy and journal on separate disk
>
> [snip: quoted earlier messages, kept in full further down the thread]
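Alfredo's point — that `ceph-disk` is a plain Python script which can be patched in place, with no `./configure && make && make install` rebuild — can be illustrated with a self-contained sketch. All file names below are made up for the demo; the real target would be the installed `ceph-disk` executable (locate it with `which ceph-disk`) and the patch from the fpaste link above:

```shell
# Demo: patch a plain-text script in place, no rebuild needed.
# tool.py and fix.patch are hypothetical stand-ins for ceph-disk
# and the proposed fix.
tmpdir=$(mktemp -d)

cat > "$tmpdir/tool.py" <<'EOF'
print("old behaviour")
EOF

cat > "$tmpdir/fix.patch" <<'EOF'
--- tool.py
+++ tool.py
@@ -1 +1 @@
-print("old behaviour")
+print("new behaviour")
EOF

# -p0 uses the paths in the patch header as-is
(cd "$tmpdir" && patch -p0 < fix.patch)

python3 "$tmpdir/tool.py"   # prints: new behaviour
```

Because the script is interpreted, the change takes effect on the next invocation; this is why Pavel's rebuild step removed his installed commands without being necessary.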
Re: [ceph-users] ceph-deploy and journal on separate disk
Hi.
With this patch all is ok.
Thanks for help!

-----Original Message-----
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 21, 2013 7:16 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

[snip: quoted earlier messages, kept in full further down the thread]
Re: [ceph-users] ceph-deploy and journal on separate disk
On Wed, Aug 21, 2013 at 9:33 AM, Pavel Timoschenkov wrote:
> Hi. Thanks for the patch. But after patching the ceph source and
> installing it, I no longer have the ceph-disk or ceph-deploy commands.
> I did the following steps:
> git clone --recursive https://github.com/ceph/ceph.git
> patch -p0 <
> ./autogen.sh
> ./configure
> make
> make install
> What am I doing wrong?

Oh, I meant to patch it directly; there was no need to rebuild/make/install
again because the file is a plain Python file (no compilation needed).

Can you try that instead?

> -----Original Message-----
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Monday, August 19, 2013 3:38 PM
> To: Pavel Timoschenkov
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] ceph-deploy and journal on separate disk
>
> [snip: quoted earlier messages, kept in full further down the thread]
Re: [ceph-users] ceph-deploy and journal on separate disk
Hi. Thanks for the patch. But after patching the ceph source and installing
it, I no longer have the ceph-disk or ceph-deploy commands.
I did the following steps:
git clone --recursive https://github.com/ceph/ceph.git
patch -p0 <
./autogen.sh
./configure
make
make install
What am I doing wrong?

-----Original Message-----
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Monday, August 19, 2013 3:38 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

[snip: quoted earlier messages, kept in full further down the thread]

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] ceph-deploy and journal on separate disk
On Fri, Aug 16, 2013 at 8:32 AM, Pavel Timoschenkov wrote:
> <<I suspect that there are left over partitions in /dev/sdaa that are
> causing this to fail, I *think* that we could pass the `-t` flag with
> the filesystem and prevent this.>>
>
> Hi. Any changes?
>
> Can you create a build that passes the -t flag with mount?

I tried going through these steps again and could not get any other ideas
except to pass in that flag for mounting. Would you be willing to try a
patch? (http://fpaste.org/33099/37691580/)

You would need to apply it to the `ceph-disk` executable.

> From: Pavel Timoschenkov
> Sent: Thursday, August 15, 2013 3:43 PM
> To: 'Alfredo Deza'
> Cc: Samuel Just; ceph-us...@ceph.com
> Subject: RE: [ceph-users] ceph-deploy and journal on separate disk
>
> [snip: quoted earlier messages, kept in full further down the thread]
Re: [ceph-users] ceph-deploy and journal on separate disk
>>>> So that is after running `disk zap`. What does it say after using
>>>> ceph-deploy and failing?

After ceph-disk -v prepare /dev/sdaa /dev/sda1:

root@ceph001:~# parted /dev/sdaa print
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdaa: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name       Flags
 1      1049kB  3001GB  3001GB  xfs          ceph data

And

root@ceph001:~# parted /dev/sda1 print
Model: Unknown (unknown)
Disk /dev/sda1: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

With the same errors:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1           isize=2048   agcount=32, agsize=22892700 blks
         =                     sectsz=512   attr=2, projid32bit=0
data     =                     bsize=4096   blocks=732566385, imaxpct=5
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0
log      =internal log         bsize=4096   blocks=357698, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx with
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime',
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' returned non-zero exit
status 32

From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 7:44 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

[snip: quoted earlier messages, kept in full further down the thread]
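The `more filesystems detected` failure in this thread suggests its own two remedies: wipe the stale signatures with wipefs(8), or name the filesystem explicitly with `mount -t` (which is what the proposed ceph-disk patch does). A dry-run sketch of both, using the device and mount point from the log; commands are echoed rather than executed, because `wipefs --all` is destructive and must only ever be run on a partition that is about to be re-created:

```shell
# Dry run: RUN=echo prints each command instead of executing it.
RUN=echo
DEV=/dev/sdaa1
MNT=/var/lib/ceph/tmp/mnt.UkJbwx

# Option 1: inspect, then erase, the leftover filesystem signatures
# that confuse mount's autodetection (wipefs with no flags only lists).
$RUN wipefs "$DEV"
$RUN wipefs --all "$DEV"

# Option 2: bypass autodetection by passing the filesystem type
# explicitly, as the error message itself recommends.
$RUN mount -t xfs -o noatime -- "$DEV" "$MNT"
```

Dropping `RUN=echo` (or setting `RUN=`) would execute the real commands; in the thread's scenario only option 2 was viable from within ceph-disk, hence the patch.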
Re: [ceph-users] ceph-deploy and journal on separate disk
<<mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.>>

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime',
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' returned non-zero exit
status 32
Re: [ceph-users] ceph-deploy and journal on separate disk
The separate commands (e.g. `ceph-disk -v prepare /dev/sda1`) work because
then the journal is on the same device as the OSD data, so the execution is
different to get them to a working state.

I suspect that there are left over partitions in /dev/sdaa that are causing
this to fail, I *think* that we could pass the `-t` flag with the filesystem
and prevent this.

Just to be sure, could you list all the partitions on /dev/sdaa (if
/dev/sdaa is the whole device)?

Something like:

sudo parted /dev/sdaa print

Or, if you prefer, any other way that could tell us what all the partitions
in that device are.


After

ceph-deploy disk zap ceph001:sdaa ceph001:sda1

root@ceph001:~# parted /dev/sdaa print
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdaa: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

root@ceph001:~# parted /dev/sda1 print
Model: Unknown (unknown)
Disk /dev/sda1: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

So that is after running `disk zap`. What does it say after using
ceph-deploy and failing?

After ceph-disk -v prepare /dev/sdaa /dev/sda1:

[snip: same parted output and ceph-disk mount failure as quoted in full
earlier in the thread]
Re: [ceph-users] ceph-deploy and journal on separate disk
>>> It looks like at some point the filesystem is not passed to the
>>> options. Would you mind running the `ceph-disk-prepare` command again
>>> but with the --verbose flag?
>>> I think that from the output above (correct it if I am mistaken) that
>>> would be something like:
>>> ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:

ceph-deploy disk zap ceph001:sdaa ceph001:sda1

and

ceph-disk -v prepare /dev/sdaa /dev/sda1, I get the same errors:

==

root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1           isize=2048   agcount=32, agsize=22892700 blks
         =                     sectsz=512   attr=2, projid32bit=0
data     =                     bsize=4096   blocks=732566385, imaxpct=5
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0
log      =internal log         bsize=4096   blocks=357698, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime',
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit
status 32

If this command is executed separately for each disk, it looks ok:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1           isize=2048   agcount=32, agsize=22884508 blks
         =                     sectsz=512   attr=2, projid32bit=0
data     =                     bsize=4096   blocks=732304241, imaxpct=5
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0
log      =internal log         bsize=4096   blocks=357570, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal ->
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1            isize=2048   agcount=4, agsize=655360 blks
         =                     sectsz=512   attr=2, projid32bit=0
data     =                     bsize=4096   blocks=2621440, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0
log      =internal log         bsize=4096   blocks=2560, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1

From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Tuesday, August 13, 2013 11:14 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com

[snip: remainder of quoted message truncated in the archive]
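The logs above show the pattern that runs through the whole thread: the two-argument data+journal form fails on the mount step, while preparing each device on its own succeeds (with the journal then colocated on the same device, at the cost of not having it on a separate disk). A dry-run recap of the two invocations, echoed rather than executed since the real commands partition disks:

```shell
# Dry run: RUN=echo prints the commands instead of executing them.
RUN=echo

# Fails in this thread: data on /dev/sdaa, journal on /dev/sda1,
# hitting the "more filesystems detected" mount error.
$RUN ceph-disk -v prepare /dev/sdaa /dev/sda1

# Works in this thread: one device per call; ceph-disk then
# colocates the journal with the data on that device.
$RUN ceph-disk -v prepare /dev/sdaa
$RUN ceph-disk -v prepare /dev/sda1
```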
Re: [ceph-users] ceph-deploy and journal on separate disk
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov <pa...@bayonetteas.onmicrosoft.com> wrote:

>>> It looks like at some point the filesystem is not passed to the options.
>>> Would you mind running the `ceph-disk-prepare` command again but with
>>> the --verbose flag?
>>> I think that from the output above (correct me if I am mistaken) that
>>> would be something like:
>>> ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.

If I run:

ceph-deploy disk zap ceph001:sdaa ceph001:sda1

and

ceph-disk -v prepare /dev/sdaa /dev/sda1

I get the same errors:

==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048 agcount=32, agsize=22892700 blks
         = sectsz=512 attr=2, projid32bit=0
data     = bsize=4096 blocks=732566385, imaxpct=5
         = sunit=0 swidth=0 blks
naming   =version 2 bsize=4096 ascii-ci=0
log      =internal log bsize=4096 blocks=357698, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit status 32

If I execute this command separately for each disk, it looks OK.

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048 agcount=32, agsize=22884508 blks
         = sectsz=512 attr=2, projid32bit=0
data     = bsize=4096 blocks=732304241, imaxpct=5
         = sunit=0 swidth=0 blks
naming   =version 2 bsize=4096 ascii-ci=0
log      =internal log bsize=4096 blocks=357570, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal -> /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1 isize=2048 agcount=4, agsize=655360 blks
         = sectsz=512 attr=2, projid32bit=0
data     = bsize=4096 blocks=2621440, imaxpct=25
         = sunit=0 swidth=0 blks
naming   =version 2 bsize=4096 ascii-ci=0
log      =internal log bsize=4096 blocks=2560, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1
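[Editor's note: the fix discussed in this thread amounts to making ceph-disk name the filesystem explicitly when it mounts the freshly created partition, so that mount(8) no longer has to guess between multiple detected signatures. The sketch below is a hypothetical illustration of that change, not the actual ceph-disk source; `mount_args` is an invented helper.]

```python
def mount_args(dev, path, options, fstype=None):
    """Build the argv for mount(8).

    Without fstype this matches the failing command in the logs above;
    passing fstype adds the `-t` flag the patch introduces.
    """
    args = ['mount']
    if fstype:
        # The fix: state the filesystem type explicitly so mount(8)
        # does not abort on "more filesystems detected".
        args.extend(['-t', fstype])
    args.extend(['-o', options, '--', dev, path])
    return args

# The command that failed, and the same command with the fix applied:
before = mount_args('/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2', 'noatime')
after = mount_args('/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2', 'noatime', fstype='xfs')
print(before)
print(after)
```

With `fstype='xfs'` the generated argv starts with `mount -t xfs`, which sidesteps the ambiguity even when a stale signature is still present on the partition.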
Re: [ceph-users] ceph-deploy and journal on separate disk
On Tue, Aug 13, 2013 at 3:21 AM, Pavel Timoschenkov <pa...@bayonetteas.onmicrosoft.com> wrote:
> [...]
> mount: you must specify the filesystem type
> ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.fZQxiz']' returned non-zero exit status 32
>
> ceph-deploy: Failed to create 1 OSDs

It looks like at some point the filesystem is not passed to the options. Would you mind running the `ceph-disk-prepare` command again but with the --verbose flag?

I think that from the output above (correct me if I am mistaken) that would be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1
Re: [ceph-users] ceph-deploy and journal on separate disk
Hi.
Yes, I zapped all disks before.

More about my situation:
sdaa - one of the disks for data: 3 TB with a GPT partition table.
sda - SSD drive with manually created partitions (10 GB each) for the journal, with an MBR partition table.

===
fdisk -l /dev/sda

Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033624

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    19531775     9764864   83  Linux
/dev/sda2        19531776    39061503     9764864   83  Linux
/dev/sda3        39061504    58593279     9765888   83  Linux
/dev/sda4        78125056    97656831     9765888   83  Linux
===

If I execute ceph-deploy osd prepare without the journal option, it's OK:

ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal None activate False

root@ceph001:~# gdisk -l /dev/sdaa
GPT fdisk (gdisk) version 0.8.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdaa: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 575ACF17-756D-47EC-828B-2E0A0B8ED757
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4061 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1          2099200      5860533134   2.7 TiB           ceph data
   2             2048         2097152   1023.0 MiB        ceph journal

Problems start when I try to create the journal on a separate drive:

ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal /dev/sda1 activate False
[ceph_deploy.osd][ERROR ] ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdaa1 isize=2048 agcount=32, agsize=22892700 blks
         = sectsz=512 attr=2, projid32bit=0
data     = bsize=4096 blocks=732566385, imaxpct=5
         = sunit=0 swidth=0 blks
naming   =version 2 bsize=4096 ascii-ci=0
log      =internal log bsize=4096 blocks=357698, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.fZQxiz']' returned non-zero exit status 32

ceph-deploy: Failed to create 1 OSDs

-----Original Message-----
From: Samuel Just [mailto:sam.j...@inktank.com]
Sent: Monday, August 12, 2013 11:39 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam
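[Editor's note: a quick arithmetic check of the fdisk output quoted above. fdisk reports the Blocks column in 1 KiB units while Start/End are 512-byte sectors, so Blocks = (End - Start + 1) / 2; the "10 GB" journal partitions therefore come out to about 9.31 GiB each. Illustration only, numbers taken from the message.]

```python
SECTOR = 512  # bytes per logical sector, per the fdisk header above

# (start sector, end sector, Blocks column) from `fdisk -l /dev/sda`
partitions = {
    '/dev/sda1': (2048, 19531775, 9764864),
    '/dev/sda2': (19531776, 39061503, 9764864),
    '/dev/sda3': (39061504, 58593279, 9765888),
    '/dev/sda4': (78125056, 97656831, 9765888),
}

for dev, (start, end, blocks) in partitions.items():
    sectors = end - start + 1
    # fdisk's Blocks column is 1 KiB units: sectors * 512 B / 1024 B
    assert sectors * SECTOR // 1024 == blocks
    print(dev, f"{sectors * SECTOR / 2**30:.2f} GiB")
```

Note also the gap between sda3 (ends at sector 58593279) and sda4 (starts at 78125056); the partitions are not contiguous, but each is large enough for the 1024 MB journal ceph-disk creates by default.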
Re: [ceph-users] ceph-deploy and journal on separate disk
Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam

On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov wrote:
> [...]
[ceph-users] ceph-deploy and journal on separate disk
Hi.

I have some problems with creating the journal on a separate disk using the ceph-deploy osd prepare command.

When I try to execute the following command:

ceph-deploy osd prepare ceph001:sdaa:sda1

where:
sdaa - disk for ceph data
sda1 - partition on an SSD drive for the journal

I get the following errors:

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdaa1 isize=2048 agcount=32, agsize=22892700 blks
         = sectsz=512 attr=2, projid32bit=0
data     = bsize=4096 blocks=732566385, imaxpct=5
         = sunit=0 swidth=0 blks
naming   =version 2 bsize=4096 ascii-ci=0
log      =internal log bsize=4096 blocks=357698, version=2
         = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.ek6mog']' returned non-zero exit status 32

Has anyone had a similar problem?
Thanks for the help.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
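[Editor's note: the sketch below illustrates why mount complains about "more filesystems detected": a partition that previously held another filesystem can still carry that signature alongside the freshly written XFS one, because mkfs only overwrites the areas it uses, and zapping the partition table does not wipe signatures inside the partition. With two plausible signatures, mount refuses to guess and demands -t (or wipefs). This is a toy detector, not blkid/wipefs code, though the two magic values shown are the real on-disk ones for XFS and ext-family superblocks.]

```python
# (filesystem name, byte offset in the partition, magic bytes)
KNOWN_SIGNATURES = [
    ('xfs',  0,           b'XFSB'),      # XFS superblock magic at offset 0
    ('ext4', 1024 + 0x38, b'\x53\xef'),  # ext magic: offset 0x38 inside the
                                         # superblock, which starts at 1024
]

def detect_filesystems(image: bytes):
    """Return the names of all filesystem signatures found in the image."""
    found = []
    for name, offset, magic in KNOWN_SIGNATURES:
        if image[offset:offset + len(magic)] == magic:
            found.append(name)
    return found

# Simulate a partition that once held ext4 and was re-formatted as XFS
# without wiping: the stale ext4 signature survives next to the new one.
image = bytearray(4096)
image[1024 + 0x38:1024 + 0x3a] = b'\x53\xef'  # leftover ext4 signature
image[0:4] = b'XFSB'                          # fresh XFS superblock

print(detect_filesystems(bytes(image)))  # ['xfs', 'ext4'] -> ambiguous
```

This is why `wipefs -a` on the journal/data partitions before `ceph-disk prepare` (or the `-t` mount flag discussed later in the thread) resolves the error: either the stale signature is erased, or mount is told which of the two filesystems to trust.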