PostgreSQL server
PostgreSQL configuration
PostgreSQL is configured for SSL-only operation, i.e. all connections must use SSL.
PostgreSQL is configured to listen on all IP addresses, but connections to databases are not permitted, except:
if you create a database named main and a user named user, access is pre-allowed, which means you can connect to this database from the BFH internal network right away with the password you configured.
If you need to connect from outside the BFH network (i.e. the internet), and/or you created different/multiple databases/users, please submit a support request with the database name, database user, and the IP address or IP subnet you want to connect from, and we will configure it for you.
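As a sketch, a libpq connection URI for the pre-allowed setup could look like the following (YOUR_SERVER_FQDN and DATABASE_PASSWORD are placeholders; sslmode=require makes the client insist on SSL):

```
postgresql://user:DATABASE_PASSWORD@YOUR_SERVER_FQDN:5432/main?sslmode=require
```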
pgcli
We install pgcli, an interactive PostgreSQL client built on prompt_toolkit that provides auto-completion.
pgcli can be started with:
sudo -u postgres pgcli
postgresql-backup
We automatically back up a database named main, if it exists, once a day and store the dump in /srv/$fqdn/postgresql-backup. If you created different/multiple databases, please submit a support request with the database names and we will configure it for you.
PostgreSQL commands
Creating a user
sudo -u postgres psql -c "create user DATABASE_USER with encrypted password 'DATABASE_PASSWORD';"
Replace DATABASE_USER with the username of your choice and DATABASE_PASSWORD with a secure password (e.g. one generated with apg -m 128 -x 128).
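If apg is not installed, a password of comparable strength can be generated with standard tools; a minimal sketch producing 32 alphanumeric characters (adjust the length to taste):

```shell
# Generate a 32-character alphanumeric password from the kernel's CSPRNG.
# tr and head are part of coreutils, available on any Debian system.
PW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "$PW"
```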
Setting the database owner
sudo -u postgres psql -c "alter database DATABASE_NAME owner to DATABASE_USER;"
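Putting it together with database creation (a step needed before an owner can be set), a typical sequence on the server could look like the following sketch; DATABASE_NAME, DATABASE_USER and DATABASE_PASSWORD are placeholders, and the commands require the local PostgreSQL instance:

```
sudo -u postgres psql -c "create user DATABASE_USER with encrypted password 'DATABASE_PASSWORD';"
sudo -u postgres psql -c "create database DATABASE_NAME owner DATABASE_USER;"
```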
Installing software
We update all systems automatically; however, you can also update manually with:
sudo apt full-upgrade -U
Software from the Debian repository can be installed with:
sudo apt install -U [...]
e.g. sudo apt install -U tmux.
Software not included in the Debian repository needs to be installed manually in your home directory.
Depending on the build system of the software, consider using DESTDIR=$HOME/local or similar.
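As a minimal illustration of the DESTDIR pattern, the following sketch builds a toy project (a hypothetical hello script with a one-target Makefile; real software ships its own build system) and installs it under $HOME/local:

```shell
# Create a toy project in a temp dir and install it under $HOME/local via DESTDIR.
workdir=$(mktemp -d)
cd "$workdir"
printf '#!/bin/sh\necho hello\n' > hello
printf 'install:\n\tinstall -D -m 755 hello $(DESTDIR)/bin/hello\n' > Makefile
make install DESTDIR="$HOME/local"
"$HOME/local/bin/hello"   # prints "hello"
```

With autotools-style projects, ./configure --prefix=$HOME/local achieves the same result without needing DESTDIR at install time.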
BFH data
If you need persistent storage, you can create storage shares on our Ceph cluster via the BFH.science Self-Service Portal.
All data from Ceph is accessible at /mnt/data/group.
Scratch space
/mnt/local/scratch gives you access to local scratch space, i.e. temporary storage on the host's fast SSDs.
Please note:
no backup: if you delete something by accident, data is gone.
no persistence: if we move the container to another host server, all data is gone.
limited capacity: compared to Ceph with hundreds of TB of space, the host servers only have two local SSDs in RAID1, shared among all containers, for scratch space.
no guarantees: we reserve the right to delete data, even without prior notice, in order to ensure stable and secure operation.
rule of thumb: only store temporary data which you would assume to be gone after a reboot (as with /tmp).
best practice: use /mnt/local/scratch (fast local SSDs) instead of /tmp (RAM disk). Excessive use of /tmp will either eventually kill your processes or slow down your calculations because of wasted RAM.
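A minimal sketch for pointing temporary files at the scratch space; the per-user subdirectory is an assumption (check how your group organizes the space), and the fallback covers machines where the path does not exist:

```shell
# Prefer the local scratch SSDs for temporary files; fall back to a
# regular temp dir when /mnt/local/scratch is not available.
SCRATCH="/mnt/local/scratch/$USER"
mkdir -p "$SCRATCH" 2>/dev/null || SCRATCH=$(mktemp -d)
export TMPDIR="$SCRATCH"
echo "$TMPDIR"
```

Most well-behaved tools (sort, mktemp, many scientific libraries) honor TMPDIR, so exporting it once in your shell profile redirects their temporary files to the scratch SSDs.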