Quickly Drive PostgreSQL Database CPU Utilization to 100%

A CPU-intensive query to simulate CPU load in a PostgreSQL database

In my previous blog post, I shared a CPU-intensive query designed to simulate 100% CPU utilization in an Oracle database, inspired by a question raised on the Oracle-L Mailing List. Building on that, I have now created a similar script for PostgreSQL to achieve the same purpose.

The CPU is inherently designed to process tasks in a time-sliced, serve-and-go model. It prioritizes active workloads, serving them as computational resources become available, and queues tasks in a ready-to-run state when resources are occupied. Testing CPU performance often involves artificially generating workloads that mimic this queuing process. Such tests essentially measure the CPU’s ability to handle demand under simulated contention, where processes either execute immediately or wait their turn in the scheduler’s queue.

Database servers are often benchmarked for Transactions Per Second (TPS) and Input/Output Operations Per Second (IOPS). These benchmarks mostly focus on storage and transaction performance. However, in scenarios such as hardware procurement, capacity planning, or bottleneck identification, measuring CPU performance directly can also be the goal.

Database performance testing often involves creating controlled workloads to evaluate system limits or optimize configurations. One such scenario is generating a CPU-intensive workload to assess database server CPU performance, primarily raw processing speed. In this blog post, I will provide a PL/pgSQL script that uses the pg_background extension to create and execute CPU-bound background worker processes.

No table is required, as no data will be retrieved. Instead of focusing on I/O operations, this test is designed to emphasize CPU usage. The core idea involves executing simple calculations to maximize CPU utilization.

First, build and install the pg_background extension (requires PostgreSQL 9.5 or later; ensure pg_config is in your PATH).
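A typical build and install might look like the following (the repository URL is the commonly used community fork; verify it matches the source you intend to use):

```shell
# Clone, build, and install pg_background (pg_config must be in PATH)
git clone https://github.com/vibhorkum/pg_background.git
cd pg_background
make
sudo make install
```

Then register the extension in the target database:

```sql
CREATE EXTENSION pg_background;
```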

The pg_background_launch function requires plain, static SQL strings rather than procedural PL/pgSQL code that relies on variable context. For this reason, I will create a function and then call it. This PostgreSQL function, cpu_intensive_task, simulates CPU load by performing repetitive mathematical calculations. It takes iterations as an input, loops that many times, and computes MOD(num + CAST(SQRT(idx) AS NUMERIC), 13) during each iteration to simulate computational work. The variable num is updated in each loop iteration, continuously performing mathematical operations to utilize CPU cycles. This function is designed to generate CPU load for testing purposes.
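A minimal version of the function described above might look like this (a sketch based on the description; the iteration count is supplied by the caller, and the return value exists mainly so the loop's work is observable):

```sql
-- Simulates CPU load by looping over pure arithmetic; no I/O is involved
CREATE OR REPLACE FUNCTION cpu_intensive_task(iterations BIGINT)
RETURNS NUMERIC AS $$
DECLARE
    num NUMERIC := 0;
    idx BIGINT;
BEGIN
    FOR idx IN 1..iterations LOOP
        -- Update num on every iteration to keep the CPU busy
        num := MOD(num + CAST(SQRT(idx) AS NUMERIC), 13);
    END LOOP;
    RETURN num;
END;
$$ LANGUAGE plpgsql;
```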

The max_parallel_workers parameter in PostgreSQL defines the maximum number of parallel worker processes that can be used for query execution. It essentially acts as a CPU usage limit by controlling how many workers can perform parallel processing simultaneously. Since the current session is also counted towards the limit, with max_parallel_workers set to 8 I will create max_parallel_workers − 1 = 7 background processes to ensure optimal CPU utilization without exceeding the limit.
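Launching the workers can be sketched as follows (the iteration count of 100000000 is an example value; tune it to control how long the test runs):

```sql
-- Launch 7 background workers (max_parallel_workers - 1 in my setup),
-- each executing the CPU-bound function as a plain SQL string
DO $$
DECLARE
    i INT;
BEGIN
    FOR i IN 1..7 LOOP
        PERFORM pg_background_launch('SELECT cpu_intensive_task(100000000)');
    END LOOP;
END;
$$;
```

Each call to pg_background_launch starts an independent backend, so the seven workers run concurrently and each pins one CPU core.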

In my setup, the process completes in 6–7 minutes. If you need to cancel the jobs, you can use the commands provided below.
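One way to cancel is to terminate the worker backends by matching their query text in pg_stat_activity (a sketch; adjust the filter to your workload and be careful not to match unrelated sessions):

```sql
-- Terminate the background workers running the CPU-bound function
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'SELECT cpu_intensive_task%'
  AND pid <> pg_backend_pid();
```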

Docker stats while the test is running:
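To watch CPU usage live, you can run docker stats against the PostgreSQL container (the container name below is an example):

```shell
# Streams per-container CPU and memory usage while the test runs
docker stats postgres-test
```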

During the test, CPU usage peaked at 700%, indicating that 7 CPUs were utilized heavily.

Hope it helps.
