Reconfiguring Ceph
Configuring Ceph #
Ceph daemons use /etc/ceph/ceph.conf by default for configuration. However, modern Ceph clusters are initialized with cephadm, which deploys each daemon in its own container. How, then, can we apply configuration changes to Ceph daemons?
1. Dynamic Configuration Injection 1 #
Warning: this method is not always reliable; make sure that the changed parameter is actually active after injecting. Otherwise, use method 2.
Use injectargs to inject configuration values into running daemons:
$ ceph tell <type.id> injectargs --<name> <value> [--<name> <value>]...
(e.g.) $ ceph tell osd.* injectargs --osd_max_write_size 50
The above example command changes the osd_max_write_size configuration parameter of all OSDs (note the osd.* target) to 50. The parameter is documented as follows:
osd_max_write_size
The maximum size of a write in megabytes.
- Type: size
- Default: 90
- Min: 4
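Since injected values can be silently ignored (see the warning above), it helps to read back the active value right after injecting. A minimal sketch: the ceph commands require a live cluster and are shown as comments, while the awk extraction is demonstrated on a sample line shaped like the `ceph config show` output later in this post.

```shell
# Inject the value, then read back the active setting (needs a live
# cluster, so shown as comments for illustration):
#   ceph tell osd.* injectargs --osd_max_write_size 50
#   ceph config show osd.0 | awk '$1 == "osd_max_write_size" { print $2 }'

# The awk extraction itself, demonstrated on a sample output line:
sample='osd_max_write_size     50     file'
echo "$sample" | awk '$1 == "osd_max_write_size" { print $2 }'
# prints: 50
```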
2. Changing the Configuration Files 2 #
The best way to change configurations is to put them into /etc/ceph/ceph.conf, and restart the daemons to apply the changes.
However, all Ceph daemons deployed with cephadm run in containers. Is there a way to change the /etc/ceph/ceph.conf file in the container from the host side?
The ceph.conf configuration file is bind-mounted into the container, so it can be accessed from the host.
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
2d80d9f0f02d docker.io/ceph/ceph:v15 -n osd.0 -f --set... 2 weeks ago Up 2 weeks ago ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6-osd.0
2d710a199390 docker.io/ceph/ceph:v15 -n osd.1 -f --set... 2 weeks ago Up 2 weeks ago ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6-osd.1
$ podman inspect ceph-$(ceph fsid)-osd.0
...
[
{
"Id": "2d80d9f0f02dedf32eb21efa5fe159a1eff899b58334d12d316b78f105cbd175",
"Path": "/usr/bin/ceph-osd",
...
"Mounts": [
...
{
"Type": "bind",
"Source": "/var/lib/ceph/31a99598-96aa-11eb-8e83-b06ebf35abc6/osd.0/config",
"Destination": "/etc/ceph/ceph.conf",
"Driver": "",
"Mode": "",
"Options": [
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
...
]
}
]
As you can see, /var/lib/ceph/31a99598-96aa-11eb-8e83-b06ebf35abc6/osd.0/config is mounted to /etc/ceph/ceph.conf in the container for osd.0.
The ceph.conf configuration file for each daemon can be accessed at:
/var/lib/ceph/<cluster-fsid>/<daemon-name>/config
where <daemon-name> can be found with ceph orch ps.
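Putting the pieces together, the host-side path of a daemon's configuration file can be computed from the cluster fsid and the daemon name. A small sketch; the fsid below is this post's example cluster, and on a real host you would substitute $(ceph fsid) and a daemon name taken from ceph orch ps.

```shell
# Build the host-side path of a cephadm daemon's ceph.conf.
fsid="31a99598-96aa-11eb-8e83-b06ebf35abc6"   # example; use $(ceph fsid)
daemon="osd.0"                                # example; see `ceph orch ps`
conf="/var/lib/ceph/${fsid}/${daemon}/config"
echo "$conf"
# prints: /var/lib/ceph/31a99598-96aa-11eb-8e83-b06ebf35abc6/osd.0/config
```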
After modifying the configuration file, you should restart the daemon to apply the changes:
$ systemctl restart ceph-$(ceph fsid)@<daemon-name>
2 illustrates that you can restart the daemon with systemctl restart ceph-<cluster-fsid>-<daemon-name>, which is incorrect: cephadm uses a systemd template instance (@) to refer to a daemon name, as in ceph-<cluster-fsid>@<daemon-name>.
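To avoid getting the unit name wrong, it can be built the same way as the config path. A minimal sketch, again using this post's example fsid (use $(ceph fsid) on a real host); the actual restart is left commented out since it needs root and a running cluster.

```shell
# Build the templated systemd unit name: ceph-<cluster-fsid>@<daemon-name>.
fsid="31a99598-96aa-11eb-8e83-b06ebf35abc6"   # example; use $(ceph fsid)
daemon="osd.0"
unit="ceph-${fsid}@${daemon}"
echo "$unit"
# prints: ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@osd.0

# systemctl restart "$unit"   # requires root and a running cluster
```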
$ systemctl status ceph-$(ceph fsid)@osd.0
● ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@osd.0.service - Ceph osd.0 for 31a99598-96aa-11eb-8e83-b06ebf35abc6
Loaded: loaded (/etc/systemd/system/ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-04-23 17:09:32 KST; 11s ago
Process: 17428 ExecStartPre=/bin/rm -f //run/ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@osd.0.service-pid //run/ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@osd.0.service-cid (code=exited, status=0/SUCCESS)
Process: 17429 ExecStart=/bin/bash /var/lib/ceph/31a99598-96aa-11eb-8e83-b06ebf35abc6/osd.0/unit.run (code=exited, status=0/SUCCESS)
Main PID: 17790 (conmon)
Tasks: 2 (limit: 38107)
Memory: 888.0K
CGroup: /system.slice/system-ceph\x2d31a99598\x2d96aa\x2d11eb\x2d8e83\x2db06ebf35abc6.slice/ceph-31a99598-96aa-11eb-8e83-b06ebf35abc6@osd.0.service
└─17790 /usr/bin/conmon --api-version 1 -c e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb -u e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb -r /usr/bin/crun -b>
Apr 23 17:09:31 insujang.fedora bash[17525]: --> ceph-volume lvm activate successful for osd ID: 0
Apr 23 17:09:31 insujang.fedora podman[17525]: 2021-04-23 17:09:31.175933276 +0900 KST m=+0.730119503 container died 1bb4ff00147ecb939c9bc824e76eb49f46d9f0753461a362f50a8712bf969664 (image=docker.io/ceph/ceph>
Apr 23 17:09:31 insujang.fedora podman[17525]: 2021-04-23 17:09:31.897632645 +0900 KST m=+1.451818766 container remove 1bb4ff00147ecb939c9bc824e76eb49f46d9f0753461a362f50a8712bf969664 (image=docker.io/ceph/ce>
Apr 23 17:09:32 insujang.fedora bash[17741]: WARNING: The same type, major and minor should not be used for multiple devices.
Apr 23 17:09:32 insujang.fedora bash[17741]: WARNING: The same type, major and minor should not be used for multiple devices.
Apr 23 17:09:32 insujang.fedora podman[17741]: 2021-04-23 17:09:32.365594057 +0900 KST m=+0.283835309 container create e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb (image=docker.io/ceph/ce>
Apr 23 17:09:32 insujang.fedora podman[17741]: 2021-04-23 17:09:32.547644796 +0900 KST m=+0.465885960 container init e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb (image=docker.io/ceph/ceph>
Apr 23 17:09:32 insujang.fedora podman[17741]: 2021-04-23 17:09:32.580880134 +0900 KST m=+0.499121329 container start e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb (image=docker.io/ceph/cep>
Apr 23 17:09:32 insujang.fedora bash[17741]: e0fa2f7e05bb32895e31acc758b0ac5dc8dfb103bdc1171d3e50447c012f8dcb
Apr 23 17:09:32 insujang.fedora systemd[1]: Started Ceph osd.0 for 31a99598-96aa-11eb-8e83-b06ebf35abc6.
You can check whether the change has been applied to the daemon by:
$ ceph config show osd.0
NAME VALUE SOURCE OVERRIDES IGNORES
container_image docker.io/ceph/ceph:v15 mon
daemonize false override
keyring $osd_data/keyring default
leveldb_log default
log_stderr_prefix debug default
log_to_file false default
log_to_stderr true default
mon_host [omitted] file
no_config_file false override
osd_max_write_size 50 file <<---
rbd_default_features 61 default
setgroup ceph cmdline
setuser ceph cmdline
… while ceph config get shows that the given value has not actually been applied:
$ ceph config get osd.0
WHO MASK LEVEL OPTION VALUE RO
global basic container_image docker.io/ceph/ceph:v15 *
3. Ceph Configuration Priority and Centralized Configuration Management 3 #
Note that Ceph has a priority order that determines which configuration value is applied to the cluster. According to 3, the overall priority order (from lowest to highest) is:
- Compiled-in default values
- Cluster configuration database (via ceph config set <daemon> ...)
- Local ceph.conf file
- Runtime override (via ceph tell <daemon> injectargs ...)
It seems that osd_max_write_size has a default value of 90, which causes our value to be ignored. To force it, we can use ceph config set with --force:
$ ceph config set osd.0 osd_max_write_size 40 --force
$ ceph config get osd.0
WHO MASK LEVEL OPTION VALUE RO
global basic container_image docker.io/ceph/ceph:v15 *
osd.0 advanced osd_max_write_size 40 <<----
$ ceph config show osd.0
NAME VALUE SOURCE OVERRIDES IGNORES
container_image docker.io/ceph/ceph:v15 mon
daemonize false override
keyring $osd_data/keyring default
leveldb_log default
log_stderr_prefix debug default
log_to_file false default
log_to_stderr true default
mon_host [v2:192.168.7.24:3300/0,v1:192.168.7.24:6789/0] file
no_config_file false override
osd_max_write_size 50 file mon[40]
rbd_default_features 61 default
setgroup ceph cmdline
setuser ceph cmdline
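In the output above, the OVERRIDES column records the losing, lower-priority value: the file's 50 stays active while the monitor database's 40 appears as mon[40]. A small sketch parsing such a line with awk; the sample text mimics the output above, and the whitespace-separated column positions are an assumption about the layout.

```shell
# Extract the active value, its source, and the overridden entry from a
# `ceph config show` line (whitespace-separated columns assumed).
line='osd_max_write_size     50     file     mon[40]'
echo "$line" | awk '{ printf "active=%s source=%s overridden=%s\n", $2, $3, $4 }'
# prints: active=50 source=file overridden=mon[40]
```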