

OSD Maintenance

Gracefully remove OSD

Tip

If you are using the Rook Ceph Operator to run a Ceph cluster in Kubernetes, follow the official documentation instead: Rook Docs - Ceph OSD Management.

The first step is to set the OSD's CRUSH weight to zero, either instantly (straight to 0.0) or gradually.
(The gradual approach should always be used while the cluster is in active use, although any OSD weight change causes data redistribution.)

ceph osd crush reweight osd.<ID> 0.0
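For example, if the OSD to be removed had the (hypothetical) ID 12:

ceph osd crush reweight osd.12 0.0
ceph osd tree    # verify that osd.12 now shows a CRUSH weight of 0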

or gradually:

for i in {9..1}; do
    ceph osd crush reweight osd.<ID> 0.$i
    # Wait five minutes per step, or longer depending on your Ceph cluster's recovery speed
    sleep 300
done
# Finally bring the weight all the way down to zero
ceph osd crush reweight osd.<ID> 0.0
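Instead of a fixed sleep, you can wait for the cluster to settle after each step. A minimal sketch, assuming bash on a node with admin access and HEALTH_OK as the baseline health state (adjust the check to your environment):

for i in {9..1}; do
    ceph osd crush reweight osd.<ID> 0.$i
    # Block until the backfill/recovery triggered by the reweight has finished
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
done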

After the reweight has completed and the OSD holds no more data, mark it out and remove it together with its CRUSH entry and authentication key:

ceph osd out <ID>
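Before the final removal it is worth confirming that the OSD really holds no more data. A quick check, assuming Luminous or newer (where safe-to-destroy is available):

ceph osd df tree                      # the OSD should show weight 0 and (almost) no used space
ceph osd safe-to-destroy osd.<ID>     # reports whether the OSD can be removed without data loss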
ceph osd crush remove osd.<ID>
ceph auth del osd.<ID>
ceph osd rm <ID>
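Note that ceph osd rm only succeeds once the OSD daemon is down, so stop (and disable) the service on its host first. On Luminous and newer the three removal commands can also be collapsed into a single purge call. A sketch, assuming a package-based (non-containerized) deployment with systemd units:

systemctl stop ceph-osd@<ID>                   # run on the host that carries the OSD
systemctl disable ceph-osd@<ID>
ceph osd purge <ID> --yes-i-really-mean-it     # crush remove + auth del + osd rm in one step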