This is a stub for configuring a Ceph cluster: the manager dashboard, an admin user, and CephFS.
Enable the dashboard manager module and bind each manager daemon's dashboard to the address it should listen on (one entry per mgr, here named mgr1, mgr2 and mgr3; replace IP_ADDRESS accordingly):
ceph mgr module enable dashboard
ceph config-key set mgr/dashboard/mgr1/server_addr IP_ADDRESS
ceph config-key set mgr/dashboard/mgr2/server_addr IP_ADDRESS
ceph config-key set mgr/dashboard/mgr3/server_addr IP_ADDRESS
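To confirm the dashboard module is loaded and to see the URL it is serving, the following read-only checks should work (exact output varies by release):
ceph mgr module ls
ceph mgr services
ceph config-key get mgr/dashboard/mgr1/server_addr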
Create a sysadmin user:
ceph auth get-or-create client.sysadmin mds "allow *" mgr "allow *" mon "allow *" osd "allow *" -o sysadmin.keyring
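To sanity-check the new credentials, the caps can be inspected and the keyring used directly with the ceph CLI (the ./sysadmin.keyring path below is simply where the command above wrote it):
ceph auth get client.sysadmin
ceph --name client.sysadmin --keyring ./sysadmin.keyring status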
By default only one CephFS can be created. To allow multiple filesystems, set the enable_multiple flag:
ceph fs flag set enable_multiple true --yes-i-really-mean-it
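To double-check that the flag took effect, it appears in the FSMap dump (field naming can differ slightly between releases):
ceph fs dump | grep enable_multiple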
We consistently name Ceph pools the following way:
POOL.$type.$extra
e.g. test.cephfs.data and test.cephfs.metadata
Following the naming scheme above, create the metadata and data pools and then the filesystem itself (CEPHFS is a placeholder for the filesystem's base name; the trailing numbers are the pg_num for each pool):
ceph osd pool create CEPHFS.cephfs.metadata 100
ceph osd pool create CEPHFS.cephfs.data 1024
ceph fs new CEPHFS.cephfs CEPHFS.cephfs.metadata CEPHFS.cephfs.data
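At this point the filesystem and its pools should be visible; these read-only checks are safe to run at any time:
ceph fs ls
ceph fs status CEPHFS.cephfs
ceph osd pool ls detail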
By default snapshots are disabled for CephFS. Enable them with:
ceph fs set CEPHFS.cephfs allow_new_snaps true --yes-i-really-mean-it
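Once snapshots are allowed, a client creates one simply by making a directory inside the hidden .snap directory of any folder on a mounted CephFS; the mount point and directory below are examples only:
mkdir /mnt/cephfs/mydir/.snap/before-upgrade
rmdir /mnt/cephfs/mydir/.snap/before-upgrade
The rmdir removes the snapshot again.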
Configure multi-MDS:
ceph fs set CEPHFS.cephfs allow_multimds true
ceph fs set CEPHFS.cephfs max_mds 3
ceph fs set CEPHFS.cephfs allow_dirfrags true
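After raising max_mds, the extra ranks become active as soon as enough MDS daemons are running, which can be observed with:
ceph mds stat
ceph fs status CEPHFS.cephfs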
By default we want 3 replicas, but if we're temporarily degraded (down to 2 replicas) we still want to be able to write to the cluster. Set this on every pool (substitute ${POOL} with each pool name; see the loop sketch below):
ceph osd pool set ${POOL} size 3
ceph osd pool set ${POOL} min_size 2
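A small sketch of applying this to both CephFS pools created above (pool names assumed from this document):
for POOL in CEPHFS.cephfs.metadata CEPHFS.cephfs.data; do
    ceph osd pool set ${POOL} size 3
    ceph osd pool set ${POOL} min_size 2
done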
We want 3 standby MDS daemons to be available; if fewer are present, cluster health turns into a warning.
ceph fs set ${CEPHFS} standby_count_wanted 3
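The setting and the current standby situation can be checked with (field names vary slightly by release):
ceph fs get ${CEPHFS} | grep standby_count_wanted
ceph health detail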