Introduction
In the high-stakes world of database engineering, memory is your most precious resource—and your most common bottleneck. In my 20+ years as a DBA, I’ve seen countless production environments struggle with latency not because they lacked hardware, but because they relied on “default” settings that ignored the underlying infrastructure. PostgreSQL performance tuning is often reduced to a few “magic formulas,” but the reality is more nuanced. Tuning a 128GB RAM server on bare metal requires a different philosophy than AWS RDS PostgreSQL memory optimization. This guide provides the “Golden Ratios” you need for database memory management and PostgreSQL optimization strategies that actually scale in 2026.
It is strongly recommended to validate these settings in your QA or test environments prior to deploying them in production. This article is designed to provide guidance throughout that process.
Prerequisites
- Superuser/RDS Admin Access: the ability to modify postgresql.conf or DB Parameter Groups.
- Linux Kernel Awareness: familiarity with Linux kernel tuning for Postgres, such as enabling huge pages.
- Installed Extensions: the pg_buffercache and pg_stat_statements extensions are essential for running a PostgreSQL memory diagnostic query.
- Background Reading: How to Install PostgreSQL on Windows.
Best Practices (Senior DBA Implementation)
- The “One at a Time” Rule: Never change shared_buffers and work_mem simultaneously; isolate variables to track performance gains accurately.
- Account for OS Overhead: For PostgreSQL memory management for bare metal, always leave at least 20% of RAM for the OS page cache to prevent OOM (Out of Memory) kills.
- Restart Awareness: Remember that shared_buffers is a static parameter requiring a server restart, unlike work_mem or maintenance_work_mem.
- Analyze the Hit Ratio: Use a PostgreSQL buffer cache hit ratio script to ensure your “hot” data stays in the buffer cache, aiming for >99%.
- Mind the Connections: Each process in max_connections consumes memory. Scaling PostgreSQL for high concurrency often requires a pooler like PgBouncer.
- Effective Cache Hinting: Set effective_cache_size to 75% of total RAM to help the planner choose index scans over sequential ones.
- Sizing for Maintenance: Ensure maintenance_work_mem is large enough (1GB+) for faster index builds and VACUUM operations.
- OOM Prevention: Always calculate work_mem against peak concurrency, not average load, to avoid OOM-kill scenarios.
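The practices above can be collected into a postgresql.conf sketch. This is an illustrative starting point for a hypothetical 128GB bare-metal server capped at 200 connections, not a drop-in configuration; every value here is an assumption that should be validated in QA first.

```ini
# Illustrative postgresql.conf sketch for a hypothetical 128GB bare-metal
# server with max_connections = 200. Validate in QA before production use.
shared_buffers = 32GB            # 25% of RAM; static, restart required
effective_cache_size = 96GB      # ~75% of RAM; planner hint, no allocation
maintenance_work_mem = 2GB       # faster CREATE INDEX and VACUUM
work_mem = 32MB                  # (128GB * 0.05) / 200 connections
max_connections = 200            # keep low; front with PgBouncer
```

Note that work_mem is per sort or hash operation, not per connection, so a complex query can consume several multiples of it; that is why the formula deliberately uses a conservative 5% of RAM.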
The Architecture: Shared_Buffers vs OS Cache Performance
One of the most debated topics in PostgreSQL shared_buffers best practices 2026 is the shared_buffers vs OS cache performance trade-off.
In a standard environment, PostgreSQL uses double-buffering. It has its own buffer cache (shared_buffers) and relies on the Linux kernel’s OS page cache. If you are tuning PostgreSQL for 128GB RAM on bare metal, the “Golden Ratio” is typically 25% (32GB). Setting it higher often leads to diminishing returns as it steals RAM from the OS’s highly efficient file-system cache.
However, PostgreSQL memory configuration for Aurora vs RDS changes the rules. Amazon Aurora uses a log-structured storage engine that bypasses the traditional OS page cache, allowing you to push shared_buffers to 75% without the “double-buffering” penalty.
Diagnostic: Buffer Cache Analysis Script
A senior DBA needs a PostgreSQL memory diagnostic query that reveals the truth about memory pressure.
```sql
-- PostgreSQL memory diagnostic query: analyzing shared_buffers usage
-- Goal: identify "hot" relations and validate shared_buffers vs OS cache sizing
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT
    c.relname AS relation_name,
    count(*) AS buffers_in_ram,
    -- each buffer is 8kB; force numeric division to keep the fraction
    round(count(*) * 8192 / (1024.0 * 1024), 2) AS size_mb,
    -- the shared_buffers setting is stored as a count of 8kB blocks
    round(count(*) * 100.0 /
          (SELECT setting::bigint FROM pg_settings
           WHERE name = 'shared_buffers'), 2) AS pct_of_shared_buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers_in_ram DESC
LIMIT 10;
```
Frequently Asked Questions (FAQs) on PostgreSQL Performance Tuning
Question 1: What is the recommended shared_buffers for 128GB RAM?
On bare metal, 32GB (25%) is the sweet spot. For Aurora, up to 96GB (75%) is optimal due to its bypass of the OS page cache.
Question 2: How do I prevent “Out of Memory” (OOM) kills?
OOM kills typically happen when the combined work_mem allocations of concurrent queries exceed available RAM; remember that a single complex query can use several work_mem allocations at once. A conservative starting formula: work_mem = (Total RAM * 0.05) / max_connections.
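The formula can be sanity-checked with a one-line SQL calculation. The 128GB and 200-connection figures below are illustrative assumptions, not recommendations for your hardware.

```sql
-- Illustrative work_mem sizing: (Total RAM * 0.05) / max_connections
-- Assumes a hypothetical 128GB server capped at 200 connections.
SELECT pg_size_pretty(
    (128 * 1024.0^3 * 0.05 / 200)::bigint
) AS suggested_work_mem;   -- roughly 33 MB
```

Rounding down to a clean 32MB in postgresql.conf is a reasonable choice.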
Question 3: Why should I care about Huge Pages?
Huge pages reduce the overhead of the CPU’s memory management unit. For high-RAM systems, it’s a core part of Linux kernel tuning for Postgres.
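As a sketch of what that kernel tuning looks like: the page count below assumes the 32GB shared_buffers example from earlier (32GB / 2MB pages = 16,384 pages, plus headroom for other shared memory) and is an assumption to verify on your own system. On PostgreSQL 15 and later, the exact requirement can be read with `postgres -D $PGDATA -C shared_memory_size_in_huge_pages` before allocating.

```ini
# /etc/sysctl.d/ sketch -- assumes ~32GB of shared memory and 2MB pages;
# verify the real requirement before applying (then run: sysctl --system)
vm.nr_hugepages = 17000

# postgresql.conf
huge_pages = on    # fail fast at startup if huge pages are unavailable
```

Setting huge_pages = on (rather than the default try) makes a misconfiguration visible immediately instead of silently falling back to regular pages.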
Question 4: Does maintenance_work_mem affect query speed?
No, it speeds up maintenance like CREATE INDEX and VACUUM. It’s a key part of PostgreSQL optimization strategies for write-heavy loads.
Question 5: How does effective_cache_size work?
It’s a “planner hint.” It tells Postgres how much memory is likely available in the OS page cache, influencing the choice between index and sequential scans.
Question 6: Is there a difference in memory tuning for RDS vs. Aurora?
Yes. PostgreSQL memory configuration for Aurora vs RDS is different because Aurora manages its own storage cache, while RDS relies on standard EBS and OS caching.
Question 7: How can I check my Buffer Cache Hit Ratio?
Run a PostgreSQL buffer cache hit ratio script. Anything below 99% for a production OLTP workload suggests you need more RAM or better indexing.
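One common form of that script reads the cumulative counters in the pg_stat_database view; note these counters accumulate since the last statistics reset, so a freshly restarted server needs warm-up time before the ratio is meaningful.

```sql
-- Buffer cache hit ratio for the current database.
-- nullif() guards against division by zero on an idle database.
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
           AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```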
Question 8: Can I change shared_buffers without downtime?
No. This parameter is static and requires a restart to reallocate the shared memory segment in the kernel.
Question 9: Why use a connection pooler for scaling?
Scaling PostgreSQL for high concurrency through a large max_connections alone is impractical because each idle backend consumes roughly 2-3MB of RAM. PgBouncer multiplexes many clients over a small server-side pool, keeping that memory free for work_mem.
Conclusion
Successful PostgreSQL performance tuning is about balance, not just volume. By mastering the “Golden Ratios” and understanding the nuances of PostgreSQL memory management for bare metal versus cloud-native solutions like Aurora, you can build databases that handle millions of transactions with sub-millisecond latency. Use the provided PostgreSQL memory diagnostic query to audit your environment today. My advice is simple: trust the data from your execution plans and the buffer cache behavior, not just the default values in the configuration file.
Next Steps
- Audit Your Parameters: Check your current shared_buffers vs. the architect-specific values provided above.
- Optimize Retrieval: After tuning memory, refine your queries with our PostgreSQL SELECT Clause Masterclass.
- Audit Your Indexes: Use the PostgreSQL buffer cache hit ratio script to see if your indexes actually fit in memory.
- Cloud Scaling: Review your AWS RDS PostgreSQL memory optimization settings against the Aurora bypass strategies discussed here.
