Support has frequently seen cases where the database was undersized or not properly configured. A default database deployment uses only a small amount of RAM for caching the underlying data (about 1GB), so even on a big machine with 64GB of memory, most of that memory goes unused. One should start with an initial health check to verify that the database is configured properly.
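The health check above can be sketched as a small script that inspects postgresql.conf for a shared_buffers value still at the small default. The config path is an assumption; adjust it for your installation.

```shell
#!/bin/sh
# Minimal health-check sketch. The path below is an assumption;
# adjust it for your installation (e.g. /var/lib/pgsql/data/postgresql.conf).
PGCONF=${PGCONF:-/etc/postgresql/postgresql.conf}

# Print the active shared_buffers value from a postgresql.conf file,
# ignoring comments and whitespace; the last occurrence wins.
check_shared_buffers() {
  awk -F= '/^[[:space:]]*shared_buffers[[:space:]]*=/ {
    v = $2; sub(/#.*/, "", v); gsub(/[[:space:]]/, "", v); print v
  }' "$1" | tail -n 1
}

if [ -r "$PGCONF" ]; then
  sb=$(check_shared_buffers "$PGCONF")
  case "$sb" in
    ""|32MB|128MB) echo "WARNING: shared_buffers looks like the small default ($sb)" ;;
    *)             echo "shared_buffers = $sb" ;;
  esac
fi
```

Running `SHOW shared_buffers;` in psql gives the same answer from a live server, without parsing the file.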
- PostgreSQL tuning example for a Linux box with 16GB of memory (PostgreSQL configured to consume roughly 8GB):
1. Edit /etc/sysctl.conf and configure enough shared memory. If higher values are already present, skip to step 3.
# Configure shared memory to at least 4GB
kernel.shmmax = 4294967296
kernel.shmall = 1048576
2. To apply the settings without a reboot:
sysctl -w kernel.shmmax=4294967296
sysctl -w kernel.shmall=1048576
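The two values above are related: kernel.shmmax is in bytes, while kernel.shmall is in pages, so 4GB at the usual 4KiB page size gives the 1048576 shown. A small sketch that derives them instead of hard-coding:

```shell
#!/bin/sh
# Sketch: derive the sysctl values above instead of hard-coding them.
# Assumes a 4 KiB page size; verify with `getconf PAGE_SIZE`.
TARGET_BYTES=$((4 * 1024 * 1024 * 1024))   # 4 GiB of shared memory
PAGE_SIZE=4096
SHMMAX=$TARGET_BYTES                       # kernel.shmmax: max single segment, in bytes
SHMALL=$((TARGET_BYTES / PAGE_SIZE))       # kernel.shmall: total shared memory, in pages
echo "kernel.shmmax=$SHMMAX"               # kernel.shmmax=4294967296
echo "kernel.shmall=$SHMALL"               # kernel.shmall=1048576
```

On a machine with a different page size (some architectures use 64KiB pages), recompute SHMALL accordingly.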
3. Configure postgresql.conf
# The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.
shared_buffers = 2GB
# effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating
# system and within the database itself, after taking into account what's used by the OS itself and other applications
effective_cache_size = 6GB
# Increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts which, unsurprisingly, will be faster than disk-based equivalents.
work_mem = 64MB
# Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
maintenance_work_mem = 512MB
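A quick sanity check of the memory budget these settings imply can be sketched as below. This is a rough worst case, not an exact figure: work_mem applies per sort/hash operation (a query can use several), and the default max_connections of 100 is an assumption here.

```shell
#!/bin/sh
# Rough worst-case budget check for the settings above (a sketch).
SHARED_BUFFERS_MB=2048          # shared_buffers = 2GB
MAINTENANCE_WORK_MEM_MB=512     # maintenance_work_mem = 512MB
WORK_MEM_MB=64                  # work_mem = 64MB, allocated per sort/hash node
MAX_CONNECTIONS=100             # assumed PostgreSQL default

# If every backend ran one work_mem-sized sort at once:
WORST_CASE_MB=$((SHARED_BUFFERS_MB + MAINTENANCE_WORK_MEM_MB + WORK_MEM_MB * MAX_CONNECTIONS))
echo "worst case: ${WORST_CASE_MB} MB"   # worst case: 8960 MB
```

The result stays close to the 8GB target from the example, which is why work_mem is kept modest even on a 16GB box: it is multiplied by the number of concurrent sorts, not set once globally.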
4. Restart the PostgreSQL server to apply the new configuration (changing shared_buffers requires a full restart, not just a reload).