Due to the nature of my single-node Ceph instance, I have configured my scrub intervals to be much longer than the defaults pretty much from the beginning.
However, I kept getting
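For reference, these intervals are ordinary Ceph OSD config options. A minimal sketch of stretching them (the option names are real Ceph options; the specific values are illustrative only, not what I actually run):

```shell
# Stretch scrub scheduling well past the defaults (values are in seconds).
# osd_scrub_min_interval / osd_scrub_max_interval / osd_deep_scrub_interval
# are standard Ceph OSD options; these particular values are just examples.
ceph config set osd osd_scrub_min_interval 604800      # 1 week
ceph config set osd osd_scrub_max_interval 1209600     # 2 weeks
ceph config set osd osd_deep_scrub_interval 2419200    # 4 weeks
```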
Recently I encountered my first spillovers after upgrading to Ceph 18.2.
In other places, such as the output of ceph health detail, this situation manifests as:
BLUEFS_SPILLOVER: 6 OSD(s) experiencing BlueFS spillover
osd.1
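When the cluster has a lot of other warnings, one way to pull out just the spillover section is to filter the health output. This assumes the format shown above; adjust the -A context count to at least the number of affected OSDs:

```shell
# Print the BLUEFS_SPILLOVER header plus the following lines,
# which list the affected OSDs one per line.
ceph health detail | grep -A 10 'BLUEFS_SPILLOVER'
```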
I will use osd.32 in this example.
Find your OSD's physical device path (/dev/sdXX):
cephadm ceph-volume lvm list
Look for the devices section next to the ====== osd.32 ======= heading.
Mine is /dev/sdaj.
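To avoid scanning the whole listing by eye, the same lookup can be filtered with grep. The osd.32 ID and the -A context count here are just this example's values:

```shell
# Show only the listing block for osd.32; the "devices" line inside
# that block holds the physical path (/dev/sdaj in my case).
cephadm ceph-volume lvm list | grep -A 20 '== osd.32 =='
```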
Mark