Re: [ceph-users] luminous ceph-osd crash

2017-09-13 Thread Marcin Dulak
Hi,

It looks like once sdb is around 1.1 GB, ceph (version 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)) no longer crashes.
Please don't raise the minimum disk size requirements more than necessary - it
makes it harder to test new ceph features and operational procedures in
small, virtual testing environments.
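
For anyone checking the same numbers, the failed assertion can be compared
against the raw device size by hand; a minimal sketch (the device path is from
this setup, and whether BlueFS is sized against /dev/sdb1 or another partition
is an assumption here - the commands only illustrate the comparison):

# The assertion that fires during mkfs is (BlueFS.cc:172):
#   FAILED assert(bdev[id]->get_size() >= offset + length)
# i.e. BlueFS was asked to register a block extent ending past the end of the
# device it was given.
blockdev --getsize64 /dev/sdb1   # size in bytes of the small data device
# With the original 128 MB sdb this is roughly 134 million bytes, far smaller
# than offset + length, so the assertion fails; with sdb grown to about 1.1 GB
# the extent fits and ceph-osd --mkfs completes.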

Best regards,

Marcin

On Fri, Sep 1, 2017 at 1:49 AM, Marcin Dulak  wrote:

> Hi,
>
> /var/log/ceph/ceph-osd.0.log is attached.
> My sdb is 128MB and sdc (journal) is 16MB:
>
> [root@server0 ~]# ceph-disk list
> /dev/dm-0 other, xfs, mounted on /
> /dev/dm-1 swap, swap
> /dev/sda :
>  /dev/sda1 other, 0x83
>  /dev/sda2 other, xfs, mounted on /boot
>  /dev/sda3 other, LVM2_member
> /dev/sdb :
>  /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdc1
> /dev/sdc :
>  /dev/sdc1 ceph journal, for /dev/sdb1
>
> Marcin
>
> On Thu, Aug 31, 2017 at 3:05 PM, Sage Weil  wrote:
>
>> Hi Marcin,
>>
>> Can you reproduce the crash with 'debug bluestore = 20' set, and then
>> ceph-post-file /var/log/ceph/ceph-osd.0.log?
>>
>> My guess is that we're not handling a very small device properly?
>>
>> sage
>>
>>
>> On Thu, 31 Aug 2017, Marcin Dulak wrote:
>>
>> > Hi,
>> >
>> > I have a virtual CentOS 7.3 test setup at:
>> > https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh
>> >
>> > It seems to crash reproducibly with luminous, and works with kraken.
>> > Is this a known issue?
>> >
>> > [ceph_deploy.conf][DEBUG ] found configuration file at:
>> > /home/ceph/.cephdeploy.conf
>> > [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy osd
>> activate
>> > server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
>> > server2:/dev/sdb1:/dev/sdc
>> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> > [ceph_deploy.cli][INFO  ]  username  : None
>> > [ceph_deploy.cli][INFO  ]  verbose   : False
>> > [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> > [ceph_deploy.cli][INFO  ]  subcommand: activate
>> > [ceph_deploy.cli][INFO  ]  quiet : False
>> > [ceph_deploy.cli][INFO  ]  cd_conf   :
>> > 
>> > [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> > [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x109fb90>
>> > [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> > [ceph_deploy.cli][INFO  ]  default_release   : False
>> > [ceph_deploy.cli][INFO  ]  disk  : [('server0',
>> > '/dev/sdb1', '/dev/sdc'), ('server1', '/dev/sdb1', '/dev/sdc'),
>> ('server2',
>> > '/dev/sdb1', '/dev/sdc')]
>> > [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
>> > server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
>> > server2:/dev/sdb1:/dev/sdc
>> > [server0][DEBUG ] connection detected need for sudo
>> > [server0][DEBUG ] connected to host: server0
>> > [server0][DEBUG ] detect platform information from remote host
>> > [server0][DEBUG ] detect machine type
>> > [server0][DEBUG ] find the location of an executable
>> > [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
>> > [ceph_deploy.osd][DEBUG ] activating host server0 disk /dev/sdb1
>> > [ceph_deploy.osd][DEBUG ] will use init type: systemd
>> > [server0][DEBUG ] find the location of an executable
>> > [server0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate
>> > --mark-init systemd --mount /dev/sdb1
>> > [server0][WARNIN] main_activate: path = /dev/sdb1
>> > [server0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
>> > /sys/dev/block/8:17/dm/uuid
>> > [server0][WARNIN] command: Running command: /sbin/blkid -o udev -p
>> /dev/sdb1
>> > [server0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o
>> value
>> > -- /dev/sdb1
>> > [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
>> > --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>> > [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
>> > --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>> > [server0][WARNIN] mount: Mounting /dev/sdb1 on
>> /var/lib/ceph/tmp/mnt.wfKzzb
>> > with options noatime,inode64
>> > [server0][WARNIN] command_check_call: Running command: /usr/bin/mount
>> -t xfs
>> > -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.wfKzzb
>> > [server0][WARNIN] command: Running command: /sbin/restorecon
>> > /var/lib/ceph/tmp/mnt.wfKzzb
>> > [server0][WARNIN] activate: Cluster uuid is
>> > 04e79ca9-308c-41a5-b40d-a2737c34238d
>> > [server0][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --cluster=ceph
>> > --show-config-value=fsid
>> > [server0][WARNIN] activate: Cluster name is ceph
>> > [server0][WARNIN] activate: OSD uuid is 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
>> > [server0][WARNIN] activate: OSD id is 0
>> > 

Re: [ceph-users] luminous ceph-osd crash

2017-08-31 Thread Marcin Dulak
Hi,

/var/log/ceph/ceph-osd.0.log is attached.
My sdb is 128MB and sdc (journal) is 16MB:

[root@server0 ~]# ceph-disk list
/dev/dm-0 other, xfs, mounted on /
/dev/dm-1 swap, swap
/dev/sda :
 /dev/sda1 other, 0x83
 /dev/sda2 other, xfs, mounted on /boot
 /dev/sda3 other, LVM2_member
/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdc1
/dev/sdc :
 /dev/sdc1 ceph journal, for /dev/sdb1
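
(For completeness, the partition sizes can also be read straight from the
kernel; these are plain util-linux commands, not ceph tools:)

lsblk -b -o NAME,SIZE /dev/sdb /dev/sdc   # disks and partitions, sizes in bytes
blockdev --getsize64 /dev/sdb1            # data partition
blockdev --getsize64 /dev/sdc1            # journal partition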

Marcin

On Thu, Aug 31, 2017 at 3:05 PM, Sage Weil  wrote:

> Hi Marcin,
>
> Can you reproduce the crash with 'debug bluestore = 20' set, and then
> ceph-post-file /var/log/ceph/ceph-osd.0.log?
>
> My guess is that we're not handling a very small device properly?
>
> sage
>
>
> On Thu, 31 Aug 2017, Marcin Dulak wrote:
>
> > Hi,
> >
> > I have a virtual CentOS 7.3 test setup at:
> > https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh
> >
> > It seems to crash reproducibly with luminous, and works with kraken.
> > Is this a known issue?
> >
> > [ceph_deploy.conf][DEBUG ] found configuration file at:
> > /home/ceph/.cephdeploy.conf
> > [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy osd activate
> > server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
> > server2:/dev/sdb1:/dev/sdc
> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> > [ceph_deploy.cli][INFO  ]  username  : None
> > [ceph_deploy.cli][INFO  ]  verbose   : False
> > [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> > [ceph_deploy.cli][INFO  ]  subcommand: activate
> > [ceph_deploy.cli][INFO  ]  quiet : False
> > [ceph_deploy.cli][INFO  ]  cd_conf   :
> > 
> > [ceph_deploy.cli][INFO  ]  cluster   : ceph
> > [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x109fb90>
> > [ceph_deploy.cli][INFO  ]  ceph_conf : None
> > [ceph_deploy.cli][INFO  ]  default_release   : False
> > [ceph_deploy.cli][INFO  ]  disk  : [('server0',
> > '/dev/sdb1', '/dev/sdc'), ('server1', '/dev/sdb1', '/dev/sdc'),
> ('server2',
> > '/dev/sdb1', '/dev/sdc')]
> > [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
> > server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
> > server2:/dev/sdb1:/dev/sdc
> > [server0][DEBUG ] connection detected need for sudo
> > [server0][DEBUG ] connected to host: server0
> > [server0][DEBUG ] detect platform information from remote host
> > [server0][DEBUG ] detect machine type
> > [server0][DEBUG ] find the location of an executable
> > [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
> > [ceph_deploy.osd][DEBUG ] activating host server0 disk /dev/sdb1
> > [ceph_deploy.osd][DEBUG ] will use init type: systemd
> > [server0][DEBUG ] find the location of an executable
> > [server0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate
> > --mark-init systemd --mount /dev/sdb1
> > [server0][WARNIN] main_activate: path = /dev/sdb1
> > [server0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
> > /sys/dev/block/8:17/dm/uuid
> > [server0][WARNIN] command: Running command: /sbin/blkid -o udev -p
> /dev/sdb1
> > [server0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o
> value
> > -- /dev/sdb1
> > [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
> > --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> > [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
> > --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
> > [server0][WARNIN] mount: Mounting /dev/sdb1 on
> /var/lib/ceph/tmp/mnt.wfKzzb
> > with options noatime,inode64
> > [server0][WARNIN] command_check_call: Running command: /usr/bin/mount -t
> xfs
> > -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.wfKzzb
> > [server0][WARNIN] command: Running command: /sbin/restorecon
> > /var/lib/ceph/tmp/mnt.wfKzzb
> > [server0][WARNIN] activate: Cluster uuid is
> > 04e79ca9-308c-41a5-b40d-a2737c34238d
> > [server0][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph
> > --show-config-value=fsid
> > [server0][WARNIN] activate: Cluster name is ceph
> > [server0][WARNIN] activate: OSD uuid is 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
> > [server0][WARNIN] activate: OSD id is 0
> > [server0][WARNIN] activate: Initializing OSD...
> > [server0][WARNIN] command_check_call: Running command: /usr/bin/ceph
> > --cluster ceph --name client.bootstrap-osd --keyring
> > /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> > /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap
> > [server0][WARNIN] got monmap epoch 1
> > [server0][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd
> > --cluster ceph --mkfs -i 0 --monmap
> > /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap --osd-data
> > /var/lib/ceph/tmp/mnt.wfKzzb --osd-uuid 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
> > 

Re: [ceph-users] luminous ceph-osd crash

2017-08-31 Thread Sage Weil
Hi Marcin,

Can you reproduce the crash with 'debug bluestore = 20' set, and then 
ceph-post-file /var/log/ceph/ceph-osd.0.log?
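
(For reference, one way to do that - assuming the usual /etc/ceph/ceph.conf
location on the OSD node - is roughly:)

# add to /etc/ceph/ceph.conf on the OSD host before re-running the
# activate/mkfs step that crashes:
[osd]
debug bluestore = 20

# after reproducing the crash, upload the log:
ceph-post-file /var/log/ceph/ceph-osd.0.log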

My guess is that we're not handling a very small device properly?

sage


On Thu, 31 Aug 2017, Marcin Dulak wrote:

> Hi,
> 
> I have a virtual CentOS 7.3 test setup at:
> https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh
> 
> It seems to crash reproducibly with luminous, and works with kraken.
> Is this a known issue?
> 
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/ceph/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy osd activate
> server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
> server2:/dev/sdb1:/dev/sdc
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username                      : None
> [ceph_deploy.cli][INFO  ]  verbose                       : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
> [ceph_deploy.cli][INFO  ]  subcommand                    : activate
> [ceph_deploy.cli][INFO  ]  quiet                         : False
> [ceph_deploy.cli][INFO  ]  cd_conf                       :
> 
> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
> [ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x109fb90>
> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
> [ceph_deploy.cli][INFO  ]  default_release               : False
> [ceph_deploy.cli][INFO  ]  disk                          : [('server0',
> '/dev/sdb1', '/dev/sdc'), ('server1', '/dev/sdb1', '/dev/sdc'), ('server2',
> '/dev/sdb1', '/dev/sdc')]
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
> server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
> server2:/dev/sdb1:/dev/sdc
> [server0][DEBUG ] connection detected need for sudo
> [server0][DEBUG ] connected to host: server0 
> [server0][DEBUG ] detect platform information from remote host
> [server0][DEBUG ] detect machine type
> [server0][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
> [ceph_deploy.osd][DEBUG ] activating host server0 disk /dev/sdb1
> [ceph_deploy.osd][DEBUG ] will use init type: systemd
> [server0][DEBUG ] find the location of an executable
> [server0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate
> --mark-init systemd --mount /dev/sdb1
> [server0][WARNIN] main_activate: path = /dev/sdb1
> [server0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
> /sys/dev/block/8:17/dm/uuid
> [server0][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
> [server0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value
> -- /dev/sdb1
> [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> [server0][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
> [server0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.wfKzzb
> with options noatime,inode64
> [server0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs
> -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.wfKzzb
> [server0][WARNIN] command: Running command: /sbin/restorecon
> /var/lib/ceph/tmp/mnt.wfKzzb
> [server0][WARNIN] activate: Cluster uuid is
> 04e79ca9-308c-41a5-b40d-a2737c34238d
> [server0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph
> --show-config-value=fsid
> [server0][WARNIN] activate: Cluster name is ceph
> [server0][WARNIN] activate: OSD uuid is 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
> [server0][WARNIN] activate: OSD id is 0
> [server0][WARNIN] activate: Initializing OSD...
> [server0][WARNIN] command_check_call: Running command: /usr/bin/ceph
> --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap
> [server0][WARNIN] got monmap epoch 1
> [server0][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd
> --cluster ceph --mkfs -i 0 --monmap
> /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap --osd-data
> /var/lib/ceph/tmp/mnt.wfKzzb --osd-uuid 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
> --setuser ceph --setgroup ceph
> [server0][WARNIN] /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc:
> In function 'void BlueFS::add_block_extent(unsigned int, uint64_t, uint64_t)' thread 7fef4f0cfd00 time 2017-08-31 10:05:31.892519
> [server0][WARNIN] /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc:
> 172: FAILED assert(bdev[id]->get_size() >= offset + length)
> 

[ceph-users] luminous ceph-osd crash

2017-08-31 Thread Marcin Dulak
Hi,

I have a virtual CentOS 7.3 test setup at:
https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh

It seems to crash reproducibly with luminous, and works with kraken.
Is this a known issue?

[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy osd activate
server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
server2:/dev/sdb1:/dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: activate
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   :

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  func  : <function osd at 0x109fb90>
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  disk  : [('server0',
'/dev/sdb1', '/dev/sdc'), ('server1', '/dev/sdb1', '/dev/sdc'), ('server2',
'/dev/sdb1', '/dev/sdc')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc
server2:/dev/sdb1:/dev/sdc
[server0][DEBUG ] connection detected need for sudo
[server0][DEBUG ] connected to host: server0
[server0][DEBUG ] detect platform information from remote host
[server0][DEBUG ] detect machine type
[server0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host server0 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[server0][DEBUG ] find the location of an executable
[server0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate
--mark-init systemd --mount /dev/sdb1
[server0][WARNIN] main_activate: path = /dev/sdb1
[server0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
/sys/dev/block/8:17/dm/uuid
[server0][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
[server0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value
-- /dev/sdb1
[server0][WARNIN] command: Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[server0][WARNIN] command: Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[server0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.wfKzzb
with options noatime,inode64
[server0][WARNIN] command_check_call: Running command: /usr/bin/mount -t
xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.wfKzzb
[server0][WARNIN] command: Running command: /sbin/restorecon
/var/lib/ceph/tmp/mnt.wfKzzb
[server0][WARNIN] activate: Cluster uuid is
04e79ca9-308c-41a5-b40d-a2737c34238d
[server0][WARNIN] command: Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[server0][WARNIN] activate: Cluster name is ceph
[server0][WARNIN] activate: OSD uuid is 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
[server0][WARNIN] activate: OSD id is 0
[server0][WARNIN] activate: Initializing OSD...
[server0][WARNIN] command_check_call: Running command: /usr/bin/ceph
--cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
/var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap
[server0][WARNIN] got monmap epoch 1
[server0][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd
--cluster ceph --mkfs -i 0 --monmap
/var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap --osd-data
/var/lib/ceph/tmp/mnt.wfKzzb --osd-uuid
46d7cc0b-a087-4c8c-b00c-ff584c941cf9 --setuser ceph --setgroup ceph
[server0][WARNIN]
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc:
In function 'void BlueFS::add_block_extent(unsigned int, uint64_t,
uint64_t)' thread 7fef4f0cfd00 time 2017-08-31 10:05:31.892519
[server0][WARNIN]
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc:
172: FAILED assert(bdev[id]->get_size() >= offset + length)
[server0][WARNIN]  ceph version 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
[server0][WARNIN]  1: (ceph::__ceph_assert_fail(char const*, char const*,
int, char const*)+0x110) [0x7fef4fb4c510]
[server0][WARNIN]  2: (BlueFS::add_block_extent(unsigned int, unsigned
long, unsigned long)+0x4d8) [0x7fef4fad1f88]
[server0][WARNIN]  3: (BlueStore::_open_db(bool)+0xc4f) [0x7fef4f9f597f]
[server0][WARNIN]  4: