Re: [ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Sarunas Burdulis
On 3/19/19 2:52 PM, Benjamin Cherian wrote:
> Hi,
> 
> I'm getting an error when trying to use the APT repo for Ubuntu bionic.
> Does anyone else have this issue? Is the mirror sync actually still in
> progress? Or was something set up incorrectly?
> 
> E: Failed to fetch
> https://download.ceph.com/debian-nautilus/dists/bionic/main/binary-amd64/Packages.bz2
>  
> File has unexpected size (15515 != 15488). Mirror sync in progress? [IP:
> 158.69.68.124 443]
>    Hashes of expected file:
>     - Filesize:15488 [weak]
>     -
> SHA256:d5ea08e095eeeaa5cc134b1661bfaf55280fcbf8a265d584a4af80d2a424ec17
>     - SHA1:6da3a8aa17ed7f828f35f546cdcf923040e8e5b0 [weak]
>     - MD5Sum:7e5a4ecea4a4edc3f483623d48b6efa4 [weak]
>    Release file created at: Mon, 11 Mar 2019 18:44:46 +

I'm getting the same error for `apt update` with

deb https://download.ceph.com/debian-nautilus/ bionic main
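
One way to check whether the mirror has caught up is to compare what it
currently serves against the size/SHA256 apt expects (the values from the
error above), e.g.:

  curl -sI https://download.ceph.com/debian-nautilus/dists/bionic/main/binary-amd64/Packages.bz2 | grep -i content-length
  curl -s https://download.ceph.com/debian-nautilus/dists/bionic/main/binary-amd64/Packages.bz2 | sha256sum

If those still disagree with the 15488 / d5ea08e0... values quoted above,
the repository metadata and the file are still out of sync on the mirror.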

-- 
Sarunas Burdulis
Systems Administrator, Dartmouth Mathematics
math.dartmouth.edu/~sarunas





Re: [ceph-users] RBD I/O errors with QEMU [luminous upgrade/osd change]

2017-09-11 Thread Sarunas Burdulis
On 2017-09-11 09:31, Nico Schottelius wrote:
> 
> Sarunas,
> 
> may I ask when this happened?

I was following

http://docs.ceph.com/docs/master/release-notes/#upgrade-from-jewel-or-kraken

I can't tell which particular step triggered the issue with the VMs.
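From memory, the documented sequence boils down to roughly the following
(paraphrased, not verbatim from the notes):

  ceph osd set noout                      # avoid rebalancing during restarts
  # upgrade packages and restart ceph-mon daemons, one at a time
  # upgrade packages and restart ceph-osd daemons, one at a time
  ceph osd require-osd-release luminous   # once every OSD runs luminous
  ceph osd unset noout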

> And did you move OSDs or mons after that export/import procedure?

No. In between rbd-exporting the VM images and importing them back, I
upgraded all OSDs to use BlueStore, i.e. removed them one-by-one and
ceph-deploy'ed them anew with luminous defaults (it's only a 6-OSD cluster).
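Per OSD that was roughly the following (from memory; the OSD id, device and
host names here are just placeholders):

  ceph osd out 3
  # wait for rebalancing to finish and the cluster to return to HEALTH_OK
  systemctl stop ceph-osd@3
  ceph osd purge 3 --yes-i-really-mean-it
  ceph-deploy osd create --data /dev/sdb osd-host-1   # recreated as BlueStore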
--
Sarunas Burdulis
Systems Administrator, Dartmouth Mathematics
math.dartmouth.edu/~sarunas





Re: [ceph-users] RBD I/O errors with QEMU [luminous upgrade/osd change]

2017-09-11 Thread Sarunas Burdulis
On 2017-09-10 08:23, Nico Schottelius wrote:
> 
> Good morning,
> 
> yesterday we had an unpleasant surprise that I would like to discuss:
> 
> Many (not all!) of our VMs were suddenly
> dying (qemu process exiting), and when we tried to restart them, the
> qemu process reported I/O errors on the disks and the OS was not able
> to boot (i.e. it stopped in the initramfs).

We experienced the same after upgrading from kraken to luminous, i.e. all
VMs with their system images in the Ceph pool failed to boot due to
filesystem errors, ending up in the initramfs. fsck wasn't able to fix them.

> When we exported the image from rbd and loop mounted it, there were
> however no I/O errors and the filesystem could be cleanly mounted [-1].

Same here.
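
For reference, that check looks roughly like this (pool/image names and
paths are just examples):

  rbd export vms/test-vm /var/tmp/test-vm.raw
  losetup --find --show --partscan /var/tmp/test-vm.raw   # prints e.g. /dev/loop0
  mount /dev/loop0p1 /mnt                                 # mounts cleanly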

We ended up rbd-exporting the images from the Ceph RBD pool to a local
filesystem and importing them back. That "fixed" them without the need
for fsck.
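The round trip was essentially the following (image names are examples; the
original image has to be removed or renamed before importing under the same
name):

  rbd export vms/test-vm /var/tmp/test-vm.raw
  rbd rm vms/test-vm
  rbd import /var/tmp/test-vm.raw vms/test-vm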

-- 
Sarunas Burdulis
Systems Administrator, Dartmouth Mathematics
math.dartmouth.edu/~sarunas


