Test PPA from Luciano:
https://launchpad.net/~lmlogiudice/+archive/ubuntu/ceph-1920-lttng
--
https://bugs.launchpad.net/bugs/2106199
Title:
LTTNG is missing for ceph
Public bug reported:
Currently, our ceph deb package in main is built with the WITH_LTTNG flag
turned OFF. This makes tracing tools like blkin/jaeger unusable, even
though they are enabled by default upstream.
** Affects: ceph (Ubuntu)
Importance: Undecided
Status: New
** Affects:
Regarding the test, this requires some experience deploying and configuring
Ceph and S3 storage; otherwise it is going to take quite some time to make
it work.
Expanding the size limit of the return value from 32 to 64 bytes; this
value is serialized and sent over the network from the
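A minimal sketch of the size problem, assuming a fixed-size return buffer
(the limits and the entry layout below are illustrative, not the actual RGW
encoding):

    import struct

    # Illustrative only: serialized stats that no longer fit in a 32-byte
    # return value, but do fit once the limit is widened to 64 bytes.
    OLD_LIMIT = 32  # hypothetical old size limit (bytes)
    NEW_LIMIT = 64  # hypothetical widened limit (bytes)

    def serialize_stats(num_counters):
        # Pack each counter as two little-endian u64 fields (16 bytes each).
        return b"".join(struct.pack("<QQ", i, i * i) for i in range(num_counters))

    payload = serialize_stats(3)      # 48 bytes of serialized stats
    assert len(payload) > OLD_LIMIT   # too large for the old limit
    assert len(payload) <= NEW_LIMIT  # fits after the expansion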
** Also affects: ceph (Ubuntu Focal)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/2082030
Title:
rgw could not reset user stats: Value too large for defined data type
debdiff file uploaded
** Patch added: "bug2082030.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2082030/+attachment/5836828/+files/bug2082030.debdiff
--
** Tags added: sts
** Tags added: seg sts-sru-needed
--
https://bugs.launchpad.net/bugs/2082030
Title:
rgw could not reset user stats: Value too large for defined data type
** Description changed:
- reset-stats is designed to fix the user stat inconsistency which could
- happen due to rgw bugs
+ [Impact]
+ reset-stats is designed to fix the user stat inconsistency which could happen
due to rgw bugs
But it failed on octopus:
# radosgw-admin user stats --uid taodd --reset-stats
** Description changed:
reset-stats is designed to fix the user stat inconsistency which could
happen due to rgw bugs
# radosgw-admin user stats --uid taodd --reset-stats
ERROR: could not reset user stats: (75) Value too large for defined data type
+
+ Upstream has the fix https://gith
Public bug reported:
reset-stats is designed to fix the user stat inconsistency which could
happen due to rgw bugs
# radosgw-admin user stats --uid taodd --reset-stats
ERROR: could not reset user stats: (75) Value too large for defined data type
** Affects: ceph (Ubuntu)
Importance: Undecided
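For reference, errno 75 in the error above is EOVERFLOW on Linux, which can
be confirmed from Python:

    import errno
    import os

    # The (75) in the radosgw-admin error is Linux's EOVERFLOW errno.
    assert errno.EOVERFLOW == 75
    print(os.strerror(errno.EOVERFLOW))  # Value too large for defined data type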
Public bug reported:
Ceph on bcache can suffer serious performance degradation (a 10x drop) when
the two conditions below are met:
1. bluefs_buffered_io is turned on
2. Any OSD bcache's cache_available_percent is less than 60
As many of us may already know, bcache will force all writes t
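A small sketch for checking the second condition, assuming the standard
bcache sysfs layout (the glob over cache-set UUIDs is illustrative):

    from pathlib import Path

    # Read cache_available_percent for the first registered bcache cache set;
    # it lives under /sys/fs/bcache/<set-uuid>/cache_available_percent.
    def cache_available_percent():
        for p in Path("/sys/fs/bcache").glob("*/cache_available_percent"):
            return int(p.read_text())
        raise FileNotFoundError("no bcache cache set registered")

    if cache_available_percent() < 60:
        print("condition 2 met: writes may start bypassing the cache")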
Verified the bionic-proposed ceph package; I can confirm the bluefs
compaction is performed even with a very low workload.
** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic
--
You received this bug notification because you are a
I used the same steps to verify it and can confirm it succeeded in
bionic-proposed too:
fio --name=test1 --filename=/dev/bcache0 --direct=1 --rw=randrw --bs=32k,4k --ioengine=libaio --iodepth=1
Then I monitored the cache_available_percent and writeback rate.
I no longer see the cache_available_
** Tags added: verification-done-bionic
--
https://bugs.launchpad.net/bugs/1900438
Title:
Bcache bypasses writeback on caching device with fragmentation
I used the same steps to verify it and can confirm it succeeded
fio --name=test1 --filename=/dev/bcache0 --direct=1 --rw=randrw --bs=32k,4k --ioengine=libaio --iodepth=1
Then I monitored the cache_available_percent and writeback rate.
I no longer see the cache_available_percent drop to 3
I've done the verification of the focal-proposed kernel.
I used the below fio command to cause bcache fragmentation:
fio --name=test1 --filename=/dev/bcache0 --direct=1 --rw=randrw --bs=32k,4k --ioengine=libaio --iodepth=1
Then I monitored the cache_available_percent and writeback rate.
I no lon
** Changed in: linux (Ubuntu Bionic)
Importance: Medium => High
** Changed in: linux (Ubuntu Focal)
Importance: Medium => High
** Changed in: linux (Ubuntu Groovy)
Importance: Medium => High
--
** Also affects: linux (Ubuntu Hirsute)
Importance: Undecided
Assignee: dongdong tao (taodd)
Status: Confirmed
--
https://bugs.launchpad.net/bugs/1900438
Title:
Bcache bypasses writeback on caching device with fragmentation
Public bug reported:
If the pool size is reduced, we can end up with pg_temp mappings that are
too big. This can trigger bad behavior elsewhere (e.g., OSDMapMapping,
which assumes that acting and up are always <= pool size).
** Affects: ceph (Ubuntu)
Importance: Undecided
Status: New
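A toy illustration of the invariant described above (plain Python, not
Ceph's C++; the clamp helper is hypothetical):

    # A pg_temp mapping created before a pool-size reduction can list more
    # OSDs than the new pool size, breaking code that assumes
    # len(acting) <= pool size.
    def clamp_pg_temp(pg_temp, pool_size):
        # Drop trailing entries the shrunken pool can no longer hold.
        return pg_temp[:pool_size]

    old_mapping = [4, 7, 2]  # built while the pool had size 3
    new_pool_size = 2        # pool was later reduced
    assert len(old_mapping) > new_pool_size  # the bad state
    assert clamp_pg_temp(old_mapping, new_pool_size) == [4, 7]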
** Also affects: linux (Ubuntu Focal)
Importance: Undecided
Status: New
** Changed in: linux (Ubuntu)
Assignee: (unassigned) => dongdong tao (taodd)
** Changed in: linux (Ubuntu Bionic)
Assignee: (unassigned) => dongdong tao (taodd)
** Changed in: linux (Ubuntu Fo
Public bug reported:
ceph-kvstore-tool, ceph-monstore-tool and ceph-osdomap-tool were shipped
in the ceph-test package, but the ceph-test package was dropped by [0] in
the bionic-train UCA release.
I believe the reason is that most of the binaries (except those 3 tools)
in the ceph-test package are meant for
Public bug reported:
For a certain type of workload, bluefs might never compact its log file,
which causes the bluefs log file to slowly grow to a huge size
(sometimes bigger than 1TB for a 1.5T device).
This bug can eventually cause an osd crash and a failure to restart, as the osd couldn't
get throu
Here is the latest updated patch and the progress being made upstream:
https://marc.info/?l=linux-bcache&m=160981605206306&w=1
--
https://bugs.launchpad.net/bugs/1900438
Title:
Bcache bypasses writeback on caching device with fragmentation
Public bug reported:
upstream implemented a new feature [1] that checks/reports long network
ping times between osds, but it introduced an issue where ceph-mgr might
be very slow, because it needs to dump all the new osd network ping
stats [2] for some tasks; this can be bad especially whe
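A back-of-envelope sketch of why the dump gets expensive (the numbers are
illustrative):

    # Each osd tracks ping times to its heartbeat peers, so the stats the
    # mgr must dump grow roughly with num_osds * peers_per_osd.
    def ping_stat_entries(num_osds, peers_per_osd):
        return num_osds * peers_per_osd

    # 1000 osds with ~20 heartbeat peers each is already 20000 entries,
    # each carrying several fields that must be serialized per dump.
    print(ping_stat_entries(1000, 20))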
I've just submitted a patch [1] upstream for review to help with this problem.
The key is to speed up the writeback rate when the fragmentation is high.
Here are the comments from the patch:
The current way of calculating the writeback rate only considers the
dirty sectors; this usually works fine wh
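A conceptual sketch of that heuristic (the real patch is C code in the
kernel's bcache writeback path; the threshold and scaling here are
illustrative only):

    FRAG_THRESHOLD = 60  # percent; below this, fragmentation starts to hurt

    def adjusted_writeback_rate(base_rate, cache_available_percent):
        # Keep the dirty-sector-based rate while the cache has headroom.
        if cache_available_percent >= FRAG_THRESHOLD:
            return base_rate
        # Scale the rate up as available space shrinks, so dirty data is
        # flushed before writes start bypassing the cache entirely.
        deficit = FRAG_THRESHOLD - cache_available_percent
        return base_rate * (1 + deficit)

    print(adjusted_writeback_rate(512, 30))  # much faster when fragmented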
** Tags removed: verification-needed
** Tags added: verification-done
--
https://bugs.launchpad.net/bugs/1868364
Title:
[SRU] rgw: unable to abort multipart upload after the bucket got resharded
Please verify whether the "Autopkgtest regression report
(ceph/12.2.13-0ubuntu0.18.04.3)" is an issue or not.
** Tags removed: verification-done
** Tags added: verification-needed
--
** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-bionic-done verification-done
** Tags removed: verification-queens-needed
** Tags added: verification-queens-done
--
Hi all,
I've verified that the ceph package 12.2.13-0ubuntu0.18.04.3 (bionic-proposed)
fixed the problem.
The steps I've done:
1. Deploy a ceph cluster with version 12.2.13-0ubuntu0.18.04.2
2. s3cmd mb s3://test
3. s3cmd put testfile s3://test //400MB testfile
4. Ctrl + C to abort the multipart
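For a scripted variant of the same reproduction, a boto3 sketch like the
following could drive rgw directly (endpoint, credentials and names are
placeholders):

    import boto3

    # Start a multipart upload, abandon it (the Ctrl+C above), then try to
    # abort it later; with the fixed package the abort should succeed even
    # after the bucket has been resharded.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example:80",  # placeholder rgw endpoint
        aws_access_key_id="ACCESS_KEY",        # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )
    mp = s3.create_multipart_upload(Bucket="test", Key="testfile")
    # ... upload some parts, then walk away without completing ...
    s3.abort_multipart_upload(Bucket="test", Key="testfile",
                              UploadId=mp["UploadId"])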
Hi Robie and Corey,
Is this autopkgtest regression expected?
It is not caused by my change to rgw; do you need to re-upload the package
or re-run the regression tests?
Thanks,
Dongdong
--
This is only intended for Bionic ceph 12.2.13.
The debdiff file was generated via debdiff.
Let me upload a real patch here, so it is clearer.
** Patch added: "bug1868364.patch"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1868364/+attachment/5400889/+files/bug1868364.patch
--
Hi Corey,
I updated the regression potential section, and this patch is for bionic
luminous.
Thanks,
Dongdong
** Description changed:
[Impact]
This bug leaves the bucket unable to abort the multipart upload,
leaving stale multipart entries behind for those buckets which had par
Uploaded the debdiff.
** Description changed:
+ [Impact]
+ This bug leaves the bucket unable to abort the multipart upload,
leaving stale multipart entries behind for those buckets which had partial
multipart uploads before the resharding.
+
+ [Test Case]
+ Deploy a latest luminous(12
Just to clarify a bit to avoid confusion: in the above comment, at step 5, I
meant wait for about 1.5 hours and then unseal the vault.
--
https://bugs.launchpad.net/bugs/1804261
Title:
Ceph OSD uni
I have verified the fix in bionic-proposed and can confirm it fixes this issue.
The test steps I performed:
1. Deployed a ceph cluster with vault
2. Upgraded some of the osds to 12.2.13
3. Added "Environment=CEPH_VOLUME_SYSTEMD_TRIES=2000" to
/lib/systemd/system/ceph-volume@.service for all osds
4.
Hi All,
I can confirm this release fixed the bug. I used the below steps to test:
1. Deployed a ceph cluster with vault
2. Upgraded all the ceph packages to 12.2.13 from bionic-proposed
3. Added "Environment=CEPH_VOLUME_SYSTEMD_TRIES=2000" to
/lib/systemd/system/ceph-volume@.service for some osd nodes
4. R
https://github.com/ceph/ceph/pull/32617
upstream bug report: https://tracker.ceph.com/issues/43583
** Affects: ceph (Ubuntu)
Importance: Undecided
Assignee: dongdong tao (taodd)
Status: New
** Changed in: ceph (Ubuntu)
Assignee: (unassigned) => dongdong tao (taodd)
--
** Also affects: ceph (Ubuntu Focal)
Importance: High
Assignee: dongdong tao (taodd)
Status: New
** Changed in: ceph (Ubuntu Focal)
Importance: High => Medium
** Changed in: ceph (Ubuntu Focal)
Status: New => Fix Released
--
proposed a cosmic debdiff
** Patch added: "cosmic.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1863704/+attachment/5329514/+files/cosmic.debdiff
--
proposed a disco debdiff
** Patch added: "disco.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1863704/+attachment/5329513/+files/disco.debdiff
--
proposed an eoan debdiff
** Patch added: "eoan.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1863704/+attachment/5329512/+files/eoan.debdiff
--
proposed a bionic debdiff
** Patch added: "bionic.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1863704/+attachment/5329515/+files/bionic.debdiff
** Also affects: ceph (Ubuntu Bionic)
Importance: Undecided
Status: New
** Also affects: ceph (Ubuntu Eoan)
Importance
@Edward, focal does contain the fix; it needs targeting to eoan, disco and
bionic.
--
https://bugs.launchpad.net/bugs/1863704
Title:
wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES
and CEPH_VOLUME_SYSTEMD_INTERVAL
** No longer affects: ceph (Ubuntu Bionic)
--
https://bugs.launchpad.net/bugs/1863704
Title:
wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES
and CEPH_VOLUME_SYSTEMD_INTERVAL
** Also affects: ceph (Ubuntu Bionic)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1863704
Title:
wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES
and CEPH_VOLUME_SYSTEMD_INTERVAL
This is a debian patch that addresses this issue.
** Patch added:
"0001-ceph-volume-fix-the-type-mismatch-covert-the-tries-a.patch"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1863704/+attachment/5329167/+files/0001-ceph-volume-fix-the-type-mismatch-covert-the-tries-a.patch
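The gist of the type mismatch, sketched in Python (ceph-volume is Python,
but the exact upstream code differs and the defaults shown are
illustrative):

    import os

    # os.environ values are always strings, so "2000" from the unit file's
    # Environment= line must be converted before use as a retry count.
    tries = int(os.environ.get("CEPH_VOLUME_SYSTEMD_TRIES", 30))
    interval = int(os.environ.get("CEPH_VOLUME_SYSTEMD_INTERVAL", 5))

    for attempt in range(tries):  # range() needs an int, not the string "2000"
        pass  # ... try to activate the OSD, sleeping `interval` seconds between tries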
** Tags added:
to be less than before, as the previous value was apparently wrong and might
have been wrongly enlarged.
[Other Info]
Upstream bug report: https://tracker.ceph.com/issues/43186
Upstream pull request: https://github.com/ceph/ceph/pull/32106
** Affects: ceph (Ubuntu)
Importance: High
Assignee: dongdong tao (taodd)
Hi James,
I'm wondering if you can help include a simple fix in this SRU for bug
https://bugs.launchpad.net/charm-ceph-osd/+bug/1804261
The fix is here: https://github.com/ceph/ceph/pull/32106. It's a critical
ceph-volume bug that was found by you.
It is now upstream but not merged to luminous
Hi Brian,
I have verified trusty-proposed, and it works well!
I used the second method (the gdb one) to verify it, below is the message I got:
231007:2018-12-04 13:54:53.828152 7ffa7658b700 10 -- 10.5.0.13:6801/15728 >>
10.5.0.20:6802/1017937 pipe(0x7ffa96c8a280 sd=146 :6801 s=2 pgs=3 cs=1 l=0
c=0x7
Xenial has Jewel ceph, and Jewel has that patch,
so this issue does not affect Xenial or releases later than Xenial.
--
https://bugs.launchpad.net/bugs/1798081
Title:
ceph got slow re
** Description changed:
[Impact]
Ceph from Ubuntu trusty is version 0.80.*.
The bug is that when a message seq number exceeds the max value of an
unsigned 32-bit integer, which is 4294967295, the unsigned 64-bit seq
number is truncated to unsigned 32 bits. But the seq number is supposed t
** Patch added: "lp1798081_trusty.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1798081/+attachment/5214357/+files/lp1798081_trusty.debdiff
--
** Description changed:
[Impact]
Ceph from Ubuntu trusty is version 0.80.*.
The bug is that when a message seq number exceeds the max value of an
unsigned 32-bit integer, which is 4294967295, the unsigned 64-bit seq
number is truncated to unsigned 32 bits. But the seq number is supposed t
** Patch added: "lp1798081_trusty.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1798081/+attachment/5211686/+files/lp1798081_trusty.debdiff
** Patch removed: "lp1798081.debdiff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1798081/+attachment/5211680/+files/deb.diff
This is the debdiff file
** Patch added: "deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1798081/+attachment/5211680/+files/deb.diff
--
** Description changed:
- ceph version 0.80.5
- the reason that primary osd drop the secondary osd's subop reply message is
because of
- a bug that the message sequence number is truncated to unsigned 32 bit from
unsigned 64 bit.
+ [Impact]
+ Ceph from ubuntu trusty, the version is 0.80.*.
+ T
Public bug reported:
ceph version 0.80.5
the reason the primary osd drops the secondary osd's subop reply message is
a bug where the message sequence number is truncated to unsigned 32 bits
from unsigned 64 bits.
The bug is already reported upstream:
https://tracker.ceph.com/is
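A minimal illustration of the truncation in Python (the actual bug is in
Ceph's C++ messenger code; the helper is hypothetical):

    U32_MAX = 2**32 - 1  # 4294967295, the max unsigned 32-bit value

    def truncate_to_u32(seq):
        # Storing a 64-bit sequence number in a 32-bit field keeps only the
        # low 32 bits, so the value wraps once it exceeds U32_MAX.
        return seq & 0xFFFFFFFF

    seq = U32_MAX + 5                  # a valid 64-bit sequence number
    print(truncate_to_u32(seq))        # 4 -- looks like a stale, old seq
    assert truncate_to_u32(seq) < seq  # ordering against peers now breaks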