Thanks! I’ll remove my patch from my local build of the 4.19 kernel and
upgrade to 4.19.77. Appreciate the quick fix.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
M: 228.547.8045
15052 Conference Center Dr, Chantilly, VA 20151
perspecta
On Oct 5, 2019, at 7:29 AM, Ilya Dryomov wrote:
crashed machine and to avoid attaching an image, I’ll link to
where they are: http://kvanals.kvanals.org/.ceph_kernel_panic_images/
Am I way off base or has anyone else run into this issue?
Thanks,
--
Kenneth Van Alstyne
Got it! I can calculate individual clone usage using “rbd du”, but does
anything exist to show total clone usage across the pool? Otherwise it looks
like phantom space is just missing.
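For what it’s worth, a pool-wide pass prints a per-image breakdown plus a
<TOTAL> row, which is the closest thing I know of to “total clone usage”
(pool name assumed here):

# rbd du -p rbd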
Thanks,
--
Kenneth Van Alstyne
done
rbd                          size: 3
data                         size: 3
metadata                     size: 3
.rgw.root                    size: 3
default.rgw.control          size: 3
default.rgw.meta             size: 3
default.rgw.log              size: 3
default.rgw.buckets.index    size: 3
default.rgw.buckets.data     size: 3
default.rgw.buckets.non-ec   size: 3
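For reference, a loop along these lines would produce the listing above (a
sketch, since the original command line was truncated):

# for pool in $(rados lspools); do echo "$pool"; ceph osd pool get "$pool" size; done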
Thanks,
--
Kenneth Van Alstyne
Unfortunately it looks like he’s still on Luminous, but if upgrading is an
option, the tooling is indeed significantly better. If I recall correctly, at
least the balancer module is available in Luminous.
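If he does stay on Luminous, enabling it takes only a few commands; a minimal
sketch (crush-compat mode shown, since upmap requires all clients to be
Luminous or newer):

# ceph mgr module enable balancer
# ceph balancer mode crush-compat
# ceph balancer on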
Thanks,
--
Kenneth Van Alstyne
Shain:
Have you looked into doing a “ceph osd reweight-by-utilization” by chance?
I’ve found that data distribution is rarely perfect and on aging clusters, I
always have to do this periodically.
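There’s also a dry-run variant that reports what would change before
committing anything; 120 here is the usual default threshold (percent of mean
utilization):

# ceph osd test-reweight-by-utilization 120
# ceph osd reweight-by-utilization 120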
Thanks,
--
Kenneth Van Alstyne
5.6.1 or wait for 5.8.1
to be released since the issues have already been fixed upstream.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
705696d4fe619afc) nautilus (stable)": 1
    },
    "mds": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)": 1
    },
    "rgw": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)": 1
    }
I’d actually rather it not be an extra cluster, but can the destination pool
name be different? If not, I have conflicting image names in the “rbd” pool on
either side.
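For context, pool-level mirroring is configured against the same pool name on
each side, roughly like this (cluster and client names are placeholders):

# rbd --cluster site-a mirror pool enable rbd pool
# rbd --cluster site-b mirror pool enable rbd pool
# rbd --cluster site-b mirror pool peer add rbd client.mirror@site-a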
Thanks,
--
Kenneth Van Alstyne
mind
— I just didn’t want to risk impacting the underlying cluster too much or hit
any other caveats that perhaps someone else has run into before. I doubt many
people have tried CephFS as a Filestore OSD since, in general, it seems like a
pretty silly idea.
Thanks,
--
Kenneth Van Alstyne
have that. The single OSD is simply due to the underlying cluster already
being either erasure coded or replicated.
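Assuming the redundancy really does live entirely in the layer below, the
single-OSD pool can run at size 1; a sketch with a hypothetical pool name:

# ceph osd pool create onepool 64 64 replicated
# ceph osd pool set onepool size 1
# ceph osd pool set onepool min_size 1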
Thanks,
--
Kenneth Van Alstyne
B 0.02
POOLS:
    NAME    ID    USED     %USED    MAX AVAIL    OBJECTS
    rbd     1     133 B    0        83 GiB       10
# df -h /var/lib/ceph/osd/cephfs-0/
Filesystem             Size  Used  Avail  Use%  Mounted on
10.0.0.1:/ceph-remote   87G   12M   87G     1%  /var/lib/ceph/osd/cephfs-0
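For reference, the pool figures above are what “ceph df” prints; the detail
variant adds further per-pool columns:

# ceph df detail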
D’oh! I was hoping that the destination pools could be unique names,
regardless of the source pool name.
Thanks,
--
Kenneth Van Alstyne
In this case, I’m imagining Clusters A/B both having write access to a third
“Cluster C”. So A/B -> C rather than A -> C -> B / B -> C -> A / A -> B -> C.
I admit, in the event that I need to replicate back to either primary cluster,
there may be challenges.
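On the target side that would look like one pool with two peers, though with
matching image names the conflict mentioned earlier still applies (cluster and
client names are placeholders):

# rbd --cluster site-c mirror pool peer add rbd client.mirror-a@site-a
# rbd --cluster site-c mirror pool peer add rbd client.mirror-b@site-b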
Thanks,
--
Kenneth Van Alstyne
build out a
test lab to see how that would work for us.
Thanks,
--
Kenneth Van Alstyne
. Has anything been done in this
regard? If not, is my best bet perhaps a tertiary cluster that both can reach
and do one-way replication to?
Thanks,
--
Kenneth Van Alstyne
the watcher did
indeed go away and I was able to remove the images. Very, very strange. (But
situation solved… except I don’t know what the cause was, really.)
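For anyone hitting the same thing, the watcher can be listed directly before
resorting to anything drastic; a sketch (image name and header-object id are
placeholders):

# rbd status rbd/myimage
# rados -p rbd listwatchers rbd_header.<image-id>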
Thanks,
--
Kenneth Van Alstyne
| grep -i qemu | grep -i rbd | grep -i 145
# ceph version
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
#
Thanks,
--
Kenneth Van Alstyne
; "Seagate Nytro 1551 DuraWrite 3DWPD Mainstream Endurance 960GB, SATA"?
> Seems really cheap too and has TBW 5.25PB. Anybody tested that? What
> about (RBD) performance?
>
> Cheers
> Corin
>
> On Fri, 2018-10-12 at 13:53 +, Kenneth Van Alstyne wrote:
Cephers:
As the subject suggests, has anyone tested Samsung 860 DCT SSDs? They
are really inexpensive and we are considering buying some to test.
Thanks,
--
Kenneth Van Alstyne
duplicate the issue in a lab,
but highly suspect this is what happened.
Thanks,
--
Kenneth Van Alstyne
amount of logging and debug
information I have available, unfortunately. If it helps, all ceph-mon,
ceph-mds, radosgw, and ceph-mgr daemons were running 12.2.7, while 30 of the 50
total ceph-osd daemons were also on 12.2.7 when the remaining 20 ceph-osd
daemons (on 10.2.10) crashed.
Thanks,
--
Kenneth Van Alstyne
know if I’ve missed something fundamental.
Thanks,
--
Kenneth Van Alstyne
Got it — I’ll keep that in mind. That may just be what I need to “get by” for
now. Ultimately, we’re looking to buy at least three server nodes that can
hold 40+ OSDs backed by 2TB+ SATA disks.
Thanks,
--
Kenneth Van Alstyne
Thanks for the awesome advice, folks. Until I can go larger scale (50+ SATA
disks), I’m thinking my best option here is to just swap out these 1TB SATA
disks with 1TB SSDs. Am I oversimplifying the short-term solution?
Thanks,
--
Kenneth Van Alstyne
I/O coalescing
to deal with my crippling IOPS limit due to the low number of spindles?
Thanks,
--
Kenneth Van Alstyne
ack"
- rbd_concurrent_management_ops is unset, so it appears the default is
“10”
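One way to double-check the effective value on a client (assuming this Ceph
version still ships the flag):

# ceph --show-config | grep rbd_concurrent_management_ops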
Thanks,
--
Kenneth Van Alstyne