Database performance analysis using pg_profile and pgpro_pwr
May 12, 2025
pgpro_pwr is a database workload monitoring tool that helps database administrators (DBAs) pinpoint the most resource-demanding operations. First released in 2017 under the name pg_profile, the module was written by Andrey Zubkov, who moved from database administration to engineering at Postgres Professional. The two tools differ in where they run: pg_profile works with open-source PostgreSQL (supporting releases up to PostgreSQL 17 as of 2024), while pgpro_pwr provides more detailed statistics and ships as part of Postgres Pro releases.
How it all began
These tools did not set out to reinvent the wheel; they grew out of a recognized need. Existing tools were often awkward to adopt because they came as separate web applications that had to be deployed and maintained apart from the database management system (DBMS). DBAs, particularly those familiar with Oracle AWR, wanted a solution that could run entirely inside PostgreSQL. In response, Andrey developed a platform-independent extension written in PL/pgSQL.
How pg_profile and pgpro_pwr work
These tools do not feature alerting capabilities; instead, they focus on monitoring database workload metrics through counters that increment continuously from the last reset—typically from the cluster’s inception. While the raw counter values may seem insignificant, the increments over time across various perspectives—such as clusters, databases, individual functions, or tables—yield invaluable insights. The tools capture counter values at designated intervals and archive the differences in their repository.
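To make the idea concrete, here is a minimal sketch of what sampling a cumulative counter looks like by hand, using the standard pg_stat_database view (the column choice and the manual subtraction are purely illustrative; the tools automate this across many statistics views and store the deltas in their repository):

    -- Counters in PostgreSQL's cumulative statistics views only grow until reset,
    -- so a single reading says little about the current workload.
    SELECT datname, xact_commit, blks_read, tup_returned
    FROM pg_stat_database
    WHERE datname = current_database();
    -- Read the same counters again after the interval of interest and subtract:
    -- the differences (commits, blocks read, tuples returned per interval)
    -- are the kind of numbers a pg_profile/pgpro_pwr snapshot records.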
A quick test: do you need this tool?
Consider utilizing pg_profile or pgpro_pwr if you:
Want to assess the stability of a long-running system—especially before introducing new features.
Need to analyze the outcomes of load testing.
Aim to identify system-intensive activities, such as:
Queries that are executed too often or take excessive time to complete.
Inefficient execution plans.
Spoiler: currently, only pgpro_pwr can identify “inefficient” execution plans, but we plan to bring this capability to pg_profile in the future.
If you answered “yes” to any of these considerations, then pg_profile or pgpro_pwr will certainly prove beneficial.
Tool structure
pg_profile and pgpro_pwr comprise several key components:
Repository tables for storing snapshot data
Data collection functions for snapshots
Reporting functions
Service tables and functions
The tools primarily serve two functions:
Taking snapshots. The tools are independent of the operating system and rely on an external scheduler to trigger snapshot collection (see the cron sketch after this list).
Generating reports, which are HTML documents summarizing statistics over specified time periods. These reports encompass various metrics, some exclusive to pgpro_pwr, including database cleanup statistics, per-plan statistics, expression-level statistics, workload distribution, and invalidation statistics.
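As a minimal sketch of the scheduling step, the sampling function can be called periodically from an external scheduler such as cron. The 30-minute interval, the database name, and the profile schema are assumptions for illustration; adjust them to your installation:

    -- Take one snapshot for all enabled servers (schema-qualify the call if
    -- the "profile" schema is not on your search_path):
    SELECT profile.take_sample();
    -- Example crontab entry on the database host (shell, shown here as a comment):
    -- */30 * * * *  psql -d postgres -c 'SELECT profile.take_sample()' > /dev/null 2>&1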
How to monitor performance
To set up monitoring, follow the steps outlined in the documentation:
Upon installation, both extensions create a single active server named local, corresponding to the current cluster. This server can be managed using functions (a comprehensive list is available for both pg_profile and pgpro_pwr). For instance, the create_server function establishes a server definition:
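Here is a sketch of such a call, following the create_server example in the pg_profile documentation; the server name, connection string, and the profile schema are placeholders for your own environment:

    -- Register an additional server to be sampled by this instance
    SELECT profile.create_server(
        'omega',                                          -- name used in snapshots and reports
        'host=db.example.com port=5432 dbname=postgres'   -- libpq connection string
    );
    -- The built-in 'local' server already covers the current cluster,
    -- so create_server is only needed for additional servers.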
The database load statistics are preserved in snapshots, which can also be managed (pg_profile / pgpro_pwr). These snapshots can be exported from one instance of the extension and imported into another, facilitating the transfer of accumulated server information or sharing it with support specialists for further analysis.
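A hedged sketch of that transfer, based on the export_data()/import_data() functions described in the documentation (the staging table name is arbitrary, and the exact signatures may differ between versions, so check the docs for your installation):

    -- On the source instance: dump the accumulated snapshot data into a regular table
    CREATE TABLE profile_export AS
      SELECT * FROM profile.export_data('local');
    -- Move that table to the target instance (COPY, pg_dump/pg_restore, etc.), then load it:
    SELECT profile.import_data('profile_export');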
Generating reports
After establishing the extensions, servers, and snapshots, you can generate reports. Reports are categorized into:
Standard reports (pg_profile / pgpro_pwr) – which provide workload statistics for a designated time frame.
Comparison reports (pg_profile / pgpro_pwr) – that compare statistics for the same objects across two distinct time intervals.
For example, to generate a report for the local server over a specific interval:
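A minimal sketch, assuming the extension lives in the profile schema and using placeholder sample IDs (list the ones available on your system with show_samples()):

    SELECT profile.show_samples('local');           -- list available snapshots for a server
    SELECT profile.get_report('local', 480, 482);   -- standard report between two snapshots
    -- Time-range form (if your version supports it):
    SELECT profile.get_report('local', tstzrange('2025-05-12 10:00', '2025-05-12 12:00'));
    -- Comparison report: the same server over two different intervals
    SELECT profile.get_diff_report('local', 480, 482, 490, 492);

The functions return the report as HTML text, so a convenient way to produce a file is via psql, for example: psql -Aqtc "SELECT profile.get_report('local', 480, 482)" -o report_480_482.html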
These reports can be viewed in any web browser. Detailed descriptions of the report types and included statistics are available in the documentation (pg_profile / pgpro_pwr).
Some examples include:
Wait event statistics
Advanced vacuum statistics
Workload distribution
What’s next
We are currently focused on submitting a patch that adds vacuum statistics to vanilla PostgreSQL. We aim to upstream as much as possible, since maintaining these changes out of tree means dealing with complex merge conflicts later. If you have questions or run into problems while using these monitoring tools, we are happy to help. We welcome your thoughts and ideas in the comments!