Ceph Cluster Configuration

This is a stub for configuring a Ceph cluster.

Enable mgr dashboard

ceph mgr module enable dashboard
ceph config-key set mgr/dashboard/mgr1/server_addr IP_ADDRESS
ceph config-key set mgr/dashboard/mgr2/server_addr IP_ADDRESS
ceph config-key set mgr/dashboard/mgr3/server_addr IP_ADDRESS
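
Here mgr1, mgr2 and mgr3 are the names of the individual mgr daemons, and IP_ADDRESS is the address the dashboard should bind to on that host. As a quick check, the enabled modules and the active dashboard URL can be listed with:

ceph mgr module ls
ceph mgr services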

Add System Administration User

Create a sysadmin user:

ceph auth get-or-create client.sysadmin mds "allow *" mgr "allow *" mon "allow *" osd "allow *" -o sysadmin.keyring
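
To verify the key works, run a command as the new user; a minimal check, assuming sysadmin.keyring sits in the current directory:

ceph --name client.sysadmin --keyring sysadmin.keyring status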

Enable multiple CephFS filesystems

By default, only one CephFS filesystem can be created; the following flag allows creating more than one.

ceph fs flag set enable_multiple true --yes-i-really-mean-it
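
With the flag set, further filesystems can be created next to the existing one. A hypothetical second filesystem named test2, following the pool naming scheme below (pg counts chosen arbitrarily):

ceph osd pool create test2.cephfs.metadata 32
ceph osd pool create test2.cephfs.data 128
ceph fs new test2.cephfs test2.cephfs.metadata test2.cephfs.data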

Ceph Pool Naming Scheme

We consistently name Ceph pools the following way:

POOL.$type.$extra

e.g. test.cephfs.data and test.cephfs.metadata

Create CephFS

ceph osd pool create CEPHFS.cephfs.metadata 100
ceph osd pool create CEPHFS.cephfs.data 1024
ceph fs new CEPHFS.cephfs CEPHFS.cephfs.metadata CEPHFS.cephfs.data
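
Here CEPHFS is a placeholder for the filesystem's base name, and the trailing numbers are each pool's pg_num. To confirm the filesystem exists and see which pools back it:

ceph fs ls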

Enable CephFS Snapshots

By default, snapshots are disabled for CephFS.

ceph fs set CEPHFS.cephfs allow_new_snaps true --yes-i-really-mean-it
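
Once allowed, snapshots are created per directory via the hidden .snap directory of a mounted CephFS; a sketch, assuming a mount at /mnt/cephfs and an existing directory mydir:

mkdir /mnt/cephfs/mydir/.snap/before-upgrade   # create a snapshot of mydir
rmdir /mnt/cephfs/mydir/.snap/before-upgrade   # delete the snapshot again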

Enable multi-MDS

ceph fs set CEPHFS.cephfs allow_multimds true

Configure multi-MDS

ceph fs set CEPHFS.cephfs max_mds 3
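
max_mds caps how many MDS daemons are active at the same time for this filesystem; the remaining daemons stay on standby. The current active/standby layout can be checked with:

ceph mds stat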

Enable dirfrag

Directory fragmentation lets large directories be split into fragments that can be handled by different active MDSs:

ceph fs set CEPHFS.cephfs allow_dirfrags true

Setting Replica defaults

By default we want 3 replicas, but if we’re temporarily degraded (2 replicas) we still want to be able to write to the cluster.

ceph osd pool set ${POOL} size 3
ceph osd pool set ${POOL} min_size 2
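
To double-check a pool afterwards (using the CephFS data pool from above as an example):

ceph osd pool get CEPHFS.cephfs.data size
ceph osd pool get CEPHFS.cephfs.data min_size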

Setting Minimum Number of Standby MDSs

We want 3 standby MDSs to be available; if fewer are available, cluster health changes to HEALTH_WARN.

ceph fs set ${CEPHFS} standby_count_wanted 3
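
The value should then show up in the filesystem's MDS map:

ceph fs get ${CEPHFS} | grep standby_count_wanted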