This is a stub for using a Ceph cluster. NAME, USER and SECRET in the commands below are placeholders.
# Check cluster status
ceph -s
# Watch the cluster log live
ceph -w
# Watch warning-level cluster log entries only
ceph --watch-warn
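If you only need a snapshot of recent log entries instead of a live stream, the cluster log can also be queried directly (the entry count is just an example):
# Show the last 100 cluster log entries
ceph log last 100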
# Check cluster health
ceph health
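When the cluster is not HEALTH_OK, the detail variant lists the individual health checks:
# Check cluster health with per-check details
ceph health detail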
# Check MONs have a quorum
ceph quorum_status --format json-pretty
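A shorter summary of the monitor membership and quorum is also available:
# Show monitor status and quorum in one line
ceph mon stat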
# Show the OSD/CRUSH tree with the status of each OSD
ceph osd tree
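To see disk utilization per OSD in addition to the tree, e.g. for spotting unbalanced OSDs:
# Show per-OSD disk usage and PG count
ceph osd df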
# Check CephFS status
ceph fs status ${CEPHFS}
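Related commands for listing filesystems and checking MDS state:
# List all CephFS filesystems
ceph fs ls
# Show MDS daemon states
ceph mds stat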
# List all users and their capabilities
ceph auth list
# Create a new RWX user for a specific pool (the pool restriction only applies to the OSD cap)
ceph auth get-or-create client.NAME mds "allow rw" mon "allow r" osd "allow rwx pool=NAME" -o NAME.keyring
# Create a new read-only user for a specific pool
ceph auth get-or-create client.NAME mds "allow r" mon "allow r" osd "allow r pool=NAME" -o NAME.keyring
# Set the pool quota to 25 TiB
ceph osd pool set-quota NAME max_bytes $((25 * 1024 * 1024 * 1024 * 1024))
# Remove quota
ceph osd pool set-quota NAME max_bytes 0
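The current quota and the actual usage of the pool can be checked with:
# Show the quotas set on a pool
ceph osd pool get-quota NAME
# Show overall and per-pool usage
ceph df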
# Mount the CephFS root
sudo mount -t ceph mon.ceph.bfh.ch:6789:/ /mnt/cephfs/NAME -o name=USER,secret=SECRET
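Passing the secret on the command line exposes it in the process list and shell history; assuming the key was exported to a file as sketched above, the secretfile option of mount.ceph avoids that (the path is just an example):
# Mount using a secret file instead of an inline secret
sudo mount -t ceph mon.ceph.bfh.ch:6789:/ /mnt/cephfs/NAME -o name=USER,secretfile=/etc/ceph/NAME.secret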
Runtime configuration changes can be applied to all daemons at once, e.g.:
ceph tell osd.* injectargs '--osd-recovery-max-active 1'
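To verify that a daemon picked up the new value, its admin socket can be queried; this assumes the command is run where osd.0 (an example ID) actually lives, i.e. inside the respective container or on its host:
# Show the current value of a setting on a specific daemon
ceph daemon osd.0 config get osd_recovery_max_active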
Normally you would expect the Ceph daemons to start automatically when one of the Ceph containers is rebooted. We decided not to go this way, but to assemble the cluster members manually instead, so that no automatism interferes with the recovery. This is not a problem because reboots and recoveries are rare, out-of-the-ordinary events anyway and should be treated as such.
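A rough sketch of what such a manual start could look like with systemd-managed daemons; the unit and instance names below are the standard Ceph packaging ones and may differ in this container setup:
# Start the monitor and an OSD by hand after a reboot (IDs are examples)
sudo systemctl start ceph-mon@$(hostname -s)
sudo systemctl start ceph-osd@0
# Then verify that they rejoined the cluster
ceph -s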