Common Issues
Benchmarking Ceph Storage
Do you want to benchmark the storage of your Ceph cluster(s)? This is a short list of tools for benchmarking storage.
Recommended tools:
- General benchmarking/testing of storage (e.g., plain disks and other storage software)
- Ceph-specific benchmarking
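As one example of a Ceph-specific benchmark, rados bench ships with Ceph itself and gives a quick baseline for a pool. This is a minimal sketch; the pool name testpool and PG count 32 below are placeholders, and the commands need a running cluster:

```shell
# Create a throwaway pool for benchmarking (placeholder name / PG count)
ceph osd pool create testpool 32

# 30 seconds of object writes; keep the objects for the read test
rados bench -p testpool 30 write --no-cleanup

# Sequential read benchmark against the objects written above
rados bench -p testpool 30 seq

# Remove the benchmark objects again
rados -p testpool cleanup
```

Remember to delete the throwaway pool afterwards so it does not skew capacity reporting.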
CephFS mount issues on Hosts
Make sure the host is running a Linux kernel of version 4.17 or higher.
Tip
In general, it is recommended to run a recent Linux kernel, as many improvements have been made to the Ceph kernel drivers in newer kernel versions (5.x or higher).
HEALTH_WARN 1 large omap objects

Issue

Ceph health status reports, e.g., HEALTH_WARN 1 large omap objects.
Solution

Deep-scrubbing the placement group that contains the large omap object should clear the warning, once the underlying cause (e.g., an oversized RGW bucket index) has been addressed.
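A possible sequence, assuming the affected object and placement group can be identified from the health detail and cluster log output (the PG ID 1.2f below is a placeholder):

```shell
# Show details about the current health warning
ceph health detail

# Search the recent cluster log for the object/PG that triggered it
ceph log last 1000 | grep -i 'large omap'

# After addressing the cause, deep-scrub the affected PG
# so the warning is re-evaluated
ceph pg deep-scrub 1.2f
```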
MDSs report oversized cache

Issue

Ceph health status reports, e.g., 1 MDSs report oversized cache.
Solution
You can try to increase the mds_cache_memory_limit setting1.
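For example, raising the limit via the config subsystem could look like this. The 8 GiB value is an assumption for illustration; size it to the memory actually available on your MDS hosts:

```shell
# Show the current value (in bytes; the Ceph default is 4 GiB)
ceph config get mds mds_cache_memory_limit

# Raise the limit to 8 GiB for all MDS daemons
ceph config set mds mds_cache_memory_limit 8589934592
```

Note that the MDS can temporarily exceed this limit; leave headroom on the host.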
Tip
For Rook Ceph users, you can set/increase the memory requests for the MDS daemons on the CephFilesystem object2.
Find the Device an OSD is Using

Issue

You need to find out which disk/device is used by an OSD daemon.
Scenarios: smartctl shows that the disk should be replaced, the disk has already failed, etc.
Solution

Use the various ls* subcommands of ceph device.
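A sketch of the most useful subcommands (device IDs in the output are vendor/model/serial strings; the <devid> placeholder stands for one of them):

```shell
# List all devices known to the cluster, with host, /dev path and daemons
ceph device ls

# Show recorded health metrics for a single device
ceph device info <devid>
```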
The ceph device subcommands allow you to do even more, e.g., turn on the disk locator light in the server chassis.
Enabling the light for the disk helps the datacenter workers locate the disk easily and avoid replacing the wrong one.
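For example, toggling the identification light (the <devid> placeholder is the device ID shown by ceph device ls):

```shell
# Turn the locator (ident) light on for the slot holding the device
ceph device light on <devid> ident

# ... and off again after the disk has been swapped
ceph device light off <devid> ident
```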
Locate the Disk of an OSD by OSD Daemon ID (e.g., OSD 13):
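With the daemon ID from the alert, this could look like the following (osd.13 matches the example above):

```shell
# Show the device(s) backing OSD 13, including host and /dev path
ceph device ls-by-daemon osd.13
```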
Show all disks by host (hostname):
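For example (the hostname storage-node-01 is a placeholder):

```shell
# Show all devices on a given host and which daemons use them
ceph device ls-by-host storage-node-01
```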
CephOSDSlowOps Alerts

Issue
TODO
Things to Try
- Ensure the disks you are using are healthy
- Check the SMART values. A bad disk can lock up an application (such as a Ceph OSD) or, worse, the whole server.
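A few commands that can help narrow down slow ops; the device path /dev/sdX and the OSD ID 13 are placeholders:

```shell
# Which OSDs are currently reporting slow ops?
ceph health detail

# SMART health of the suspect disk (run on the OSD's host)
smartctl -a /dev/sdX

# Inspect in-flight and recent operations of the suspect OSD
# (run on the host where osd.13 lives)
ceph daemon osd.13 dump_ops_in_flight
ceph daemon osd.13 dump_historic_ops
```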
Should this page not have yielded a solution, check out the Rook Ceph Common Issues doc as well.
- Report / source for the information regarding this issue: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-December/037633.html ↩
- Rook Ceph Docs v1.7 - Ceph Filesystem CRD - MDS Resources Configuration Settings ↩