
Bacula Enterprise integrates directly with PostgreSQL to deliver backup and recovery across even the most demanding production environments, including high-transaction databases and large multi-database clusters that cannot afford service interruptions during backup windows.

The PostgreSQL backup software handles the full backup and recovery cycle of your PostgreSQL clusters without scripts and without interrupting cluster operations. It runs as a File Daemon plugin on the database host and captures everything the cluster needs for a clean restore, from roles and tablespaces to per-database schemas and creation scripts.

Bacula’s PostgreSQL backup tool supports Dump and PITR strategies to cover two distinct recovery scenarios. Dump mode runs pg_dump in custom or plain format across all databases or a defined subset, with object-level filtering available at both backup and restore time. This is particularly useful when you need to recover a single table or schema without touching the rest of the database.

In PITR mode, the plugin manages WAL archiving across Full, Incremental, and Differential job levels, so you can recover any cluster to an arbitrary point in time and avoid losing hours of transactions to accidental data loss, corruption, or a failed deployment.

In multi-node HA environments such as Patroni, Dump backups can connect via the cluster endpoint regardless of which node is primary. PITR and WAL-based backups, however, operate at the filesystem level on the active primary host. After a role switch, backup jobs must point to the new primary to maintain a consistent WAL archive.
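Under the hood, PITR depends on PostgreSQL's continuous WAL archiving. The plugin manages this for you, but the underlying mechanism can be illustrated with standard postgresql.conf settings (the archive path below is a placeholder, not Bacula's generated configuration):

```conf
# postgresql.conf -- illustrative WAL archiving settings only;
# the Bacula plugin manages archiving itself, values are examples
wal_level = replica          # WAL must carry enough detail for PITR
archive_mode = on            # enable continuous archiving
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
archive_timeout = 300        # force a segment switch at least every 5 minutes
```

Every completed WAL segment is handed to archive_command exactly once, which is what makes an unbroken, replayable WAL chain possible.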

Key Benefits of Bacula’s PostgreSQL Backup Tool

Dual-Mode Backup Coverage

  • Dump and PITR Support – Back up any PostgreSQL instance using logical dumps via pg_dump or WAL-based point-in-time archiving (PITR), with both modes running concurrently on the same instance. PITR operates at the instance level, since WAL files are generated for the entire PostgreSQL instance rather than for a single database. If you need database-level recovery granularity, Dump mode supports selective restore down to a single table or schema.
  • No Scripting Required – The PostgreSQL backup tool auto-detects all databases in the cluster and automatically backs up configuration, roles, tablespaces and schemas without any manual intervention.
  • Online Backup – Back up PostgreSQL clusters while they remain fully operational, with no downtime required at any backup level.
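In practice, mode and database selection live in the Bacula FileSet. The sketch below is illustrative only; the parameter names after the plugin prefix are examples, so consult the plugin documentation for the exact syntax supported by your version:

```conf
# Illustrative FileSet for the PostgreSQL plugin -- parameter
# names are examples, not verified plugin syntax
FileSet {
  Name = "pg-dump-fs"
  Include {
    Options { Signature = MD5 }
    Plugin = "postgresql: mode=dump database=sales*"
  }
}
```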

Granular Point-in-Time Recovery (PITR)

  • Full Backup – Captures the complete data directory and all WAL files generated during the run, forming the baseline for all subsequent recovery operations.
  • Incremental Backup – Archives WAL files generated since the last job, and keeps backup windows tight on high-activity databases.
  • Differential Backup – Captures data files changed since the last Full plus current WAL files, and cuts restore time without the storage overhead of a new Full.
  • Precise Restore Targets – Replay any WAL chain to recover an instance to an exact transaction, and not just the last scheduled backup.
  • Alternate Location Restore – Recover any database to a different server or directory for testing, migration or disaster recovery without touching the production environment.

Comprehensive Recovery Options

  • Zero Coverage Gaps – Run Incremental jobs concurrently with Full and Differential jobs via the Maximum Concurrent Jobs directive, so no changes fall between backup windows.
  • Copy and Migration Compatibility – Move backup data between volumes using Bacula’s native Copy and Migration framework, without File Daemon involvement.
  • No Temporary Disk Space Required – Data transfers directly from the cluster to the Storage Daemon at every backup level, with no intermediate staging.

The PostgreSQL backup tool is available on 32-bit and 64-bit Linux and supports all officially maintained PostgreSQL releases from version 8.4 onward.

PostgreSQL Backup Software: Detailed Feature Summary

Selecting the right backup strategy depends on whether you need to restore individual objects from a logical dump or recover an entire cluster to a precise point in time. The table below covers the key functional differences between the supported modes of this PostgreSQL backup solution.

Custom and Dump modes produce compact, portable SQL files suited to selective restores and cross-version migrations. PITR produces larger backups because it captures the full data directory plus WAL files, but delivers faster restore speeds and the ability to recover to any specific point in time. Both strategies can run concurrently on the same PostgreSQL instance.

Note: in the table, “Custom” corresponds to the custom dump format of pg_dump, while the “Dump” format of Bacula’s PostgreSQL module corresponds to the plain format of pg_dump.

Feature                                        Custom     Dump       PITR
Restore single object (table, schema)          Yes        No         No
Backup speed                                   Slow       Slow       Fast
Restore speed                                  Slow       Very slow  Fast
Backup size                                    Small      Small      Large
Point-in-time restore                          No         No         Yes
Incremental and Differential support           No         No         Yes
Parallel restore                               Yes        No         No
Online backup                                  Yes        Yes        Yes
Consistent backup                              Yes        Yes        Yes
Restore to previous major PostgreSQL version   No         Yes        No
Restore to newer major PostgreSQL version      Yes        Yes        No

Robust Backup Capabilities

  • Full Backup – Captures the complete data directory and all WAL files generated during the job, forming the baseline for all PITR recovery chains.
  • Incremental Backup – Forces a WAL segment switch and archives all WAL files generated since the previous job, minimizing backup windows on high-activity PostgreSQL instances.
  • Differential Backup – Captures data files changed since the last Full backup plus all current WAL files, balancing storage efficiency with faster restore chain resolution.
  • Dump-Based Backup – Runs pg_dump in custom or plain format across all databases or a defined subset, with object-level filtering available at backup time.
  • Online Backup – All backup types run against a live PostgreSQL instance with no downtime and no cluster interruption required.
  • Automatic Cluster Discovery – The plugin auto-detects all databases in the cluster and captures configuration, roles, tablespaces, schemas and creation scripts without manual configuration.
  • No Temporary Disk Space Required – Bulk data transfers directly from the database host to the Storage Daemon at every backup level.

Restore Capabilities

  • Point-in-Time Restore (PITR) – Replays any WAL chain to recover a PostgreSQL instance to an exact transaction, independent of scheduled backup intervals.
  • Single Object Restore – Restores individual tables, schemas or indexes directly from Custom format dumps using pg_restore, without touching the rest of the database.
  • Alternate Location Restore – Recovers any database to a different server or local directory for migration, testing or disaster recovery without affecting the production environment.
  • Parallel Restore – Custom format dumps support concurrent restore jobs via pg_restore, reducing recovery time on multi-core systems.
  • Cross-Version Restore – Custom and plain Dump formats support restore to newer major PostgreSQL versions; plain Dump supports restore to previous major versions.
  • Granular Role and Schema Recovery – Roles, users and database schemas can be restored independently using roles.sql and schema.sql, with selective editing supported via psql before load.

Operational Features

  • Database Access Verification – The estimate command queries the PostgreSQL plugin to validate cluster connectivity and list all databases detected before any backup job runs.
  • Selective Database Targeting – The database parameter accepts wildcard strings, which allows the plugin to back up only databases matching a defined pattern without modifying the job configuration.
  • Service Connection File Support – PostgreSQL connection parameters can be abstracted into a named pg_service entry. This allows the plugin to connect to remote instances without embedding host, port or credentials directly in the FileSet.
  • Timeout Control – A configurable timeout parameter sets the maximum wait time in seconds for any command sent to PostgreSQL, with a default of 60 seconds. Set abort_on_error to terminate the job immediately on connection failure rather than allowing it to run incomplete.
  • Deduplication Guidance – PostgreSQL does not implement its backup routines with deduplication in mind. Running the CLUSTER command before backup physically reorders table data by index, improving deduplication ratios at the cost of an exclusive table lock and significant CPU and I/O overhead.
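The service connection file mentioned above is standard libpq behavior: a named entry in pg_service.conf bundles the connection parameters, and the plugin can then reference the entry by name instead of embedding credentials. A minimal example (host and user are placeholders):

```conf
# ~/.pg_service.conf -- placeholder values
[prod-cluster]
host=db1.example.com
port=5432
dbname=postgres
user=backup_user
```

With PGSERVICE=prod-cluster set in the environment, PostgreSQL client tools resolve all connection details from this entry.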

Administration and Monitoring

  • Web Interface Management (BWeb) – Configure and monitor PostgreSQL backup jobs through Bacula’s graphical BWeb console without interacting with configuration files directly.
  • Command-Line Control – Use bconsole for job triggering, catalog browsing, restore operations and scriptable automation across all PostgreSQL backup jobs.
  • Known Limitations – dump_opt cannot be used to back up remote PostgreSQL servers; use PGSERVICE instead. Restoring template1, postgres or any database with active connections requires those connections to be terminated first. Replaying long WAL chains on high-activity clusters adds meaningful time to PITR recovery operations.

Platform and Version Support

The Bacula Enterprise PostgreSQL backup solution supports the following configurations:

  • All officially maintained PostgreSQL releases from version 8.4 onward
  • Linux 32-bit
  • Linux 64-bit

Security Built Into Every PostgreSQL Backup

Bacula Enterprise is trusted by defense organizations, government agencies and financial institutions to protect their most sensitive PostgreSQL environments.

Bacula’s security starts at the architecture level. Backup clients have no knowledge of storage targets and hold no credentials to access them, which means a compromised database host cannot read, overwrite, modify or delete backup data. This protection is built into the protocol itself, not toggled through a configuration setting.

Ransomware and Malware Protection

  • Immutable Disk Volumes – Backup volumes can be set to immutable, which prevents modification or deletion once written, including by privileged users.
  • Data Poisoning Detection – Automatically identifies corrupted or tampered data before it propagates into the backup chain.
  • Advanced Ransomware Detection – BGuardian monitors backup activity for suspicious patterns and triggers alerts before damage spreads.
  • Silent Data Corruption Detection – Verifies the integrity of backed-up data independently of the source system.

Encryption and Authentication

  • AES Encryption – Data encryption configurable per client at AES 128, AES 192 or AES 256, applied at the volume level.
  • TLS for All Network Traffic – Automatic TLS encryption across every component communication channel, with CRAM-MD5 password authentication between daemons.
  • Multi-Factor Authentication – MFA and OTP authentication with biometric smartphone support for BWeb access.
  • Active Directory and LDAP Integration – Centralized access control tied directly to your existing identity management infrastructure.

Compliance and Auditability

  • FIPS 140-3 Compliant – Meets federal cryptographic standards required by government and defense environments.
  • SHA256 and SHA512 File Signatures – Cryptographic verification of every backed-up file, with Tripwire-like catalog comparison for break-in detection.
  • SIEM Integration – Security events feed directly into your existing Security Information and Event Management platform.
  • Hardening Reports – Per-host hardening reports for every system where Bacula runs, surfacing insecure configurations before they become vulnerabilities.

Core Enterprise Capabilities for Every Bacula Use

The PostgreSQL backup tool is part of Bacula Enterprise’s unified backup platform. Every capability listed below is available across all Bacula installations, regardless of the environment.

Storage Infrastructure & Efficiency

Bacula Enterprise gives administrators direct control over storage costs through data reduction and flexible destination routing:

  • Block-Level Deduplication – Any data block that appears more than once across the backup catalog is written to storage only once, cutting redundancy at the source rather than after the fact.
  • Adaptive Compression – Compression algorithms are configurable per job, so CPU overhead and storage savings can be balanced against each other based on data type and available resources.
  • Multiple Storage Target Types – Backups write to local disk, NAS, SAN, tape libraries, cloud object storage including S3, Azure and Google Cloud, or any combination within a single policy.
  • S3-Compatible Object Storage – Connects to any S3-compatible provider for long-term retention without vendor lock-in.
  • Tiered Storage Workflows – Backup data moves across storage tiers automatically as it ages, so frequently accessed recovery points stay on fast storage while older data shifts to lower-cost destinations.
  • Incremental Forever – After an initial full backup, every subsequent job captures only what has changed. Recurring full backup windows become unnecessary.
  • Bandwidth-Conscious Transfers – Only modified data crosses the network between backup runs, keeping the load on production infrastructure to a minimum.

Data Protection & Compliance

Security and regulatory compliance are built into every layer of the platform, from data transport and storage encryption to access control and audit logging.

  • End-to-End Encryption – AES-256 encryption covers the full data path from source client to final storage destination, with key management configurable to fit organizational security policies.
  • Immutable Backup Copies – WORM-compatible storage locks backup data against any modification or deletion once written, giving you a recovery point that ransomware and insider threats cannot reach.
  • Granular Access Controls – User permissions scope to specific jobs, restore workflows and management functions, so administrators access only what their role requires.
  • Complete Activity Auditing – Every backup, restore and configuration change is logged with user identity and timestamp. Compliance and security teams get a full, unbroken audit trail.
  • Regulatory Framework Support – Platform controls map to GDPR, HIPAA and SOC 2 requirements through a combination of encryption, configurable retention policies and detailed audit logs.
  • Privacy-Preserving Architectures – Zero-knowledge deployment options let backup infrastructure run without granting administrators any visibility into the protected data itself.

Enterprise Management & Control

Two complementary interfaces and a full suite of management tools provide visibility and control across all backup operations:

  • Dual Interface – BWeb provides a graphical console for day-to-day job management and monitoring, while bconsole gives operators full command-line control for scripting, automation and advanced configuration.
  • Scalability Without Limits – The same platform architecture manages environments from a handful of servers to deployments numbering in the thousands, all under a single management plane.
  • Tenant Isolation – MSPs and large enterprises partition the backup environment into independently administered units, each with its own configuration, policies and access controls.
  • Automatic Resource Discovery – The platform scans infrastructure to identify and catalog backup targets automatically, so protection coverage stays current as environments change.
  • Comprehensive Reporting – Scheduled reports cover job outcomes, capacity trends, compliance status and operational performance, delivered on a defined cadence.
  • External System Integration – Connects to monitoring tools, IT ticketing systems and directory services including LDAP and Active Directory, fitting into existing operational workflows without custom development.

Hybrid Infrastructure Excellence

Physical servers, virtual machines, containers and cloud infrastructure all fall within a single, unified backup strategy:

  • Multi-Platform Virtualization – Native integration for VMware vSphere, Hyper-V, KVM, Red Hat Virtualization, Xen, Azure VM, Proxmox and Nutanix AHV with consistent policy application across all platforms.
  • Physical and Virtual Convergence – Physical servers, workstations and virtual machines are protected through the same management interface with unified backup policies.
  • Container and Cloud-Native Support – Full protection for Docker, Kubernetes and OpenShift environments with persistent volume backups and application-consistent snapshots.
  • Multi-Cloud Storage Integration – Native support for public, private and hybrid cloud storage including S3, S3-IA, Azure, Google Cloud, Oracle Cloud and Glacier, with Minimal Restore Cost functionality.
  • Database and Application Integration – Hot backup for Oracle, SQL Server, MySQL, PostgreSQL, SAP HANA and other mission-critical applications with full transactional consistency.

Economic Advantages

Licensing is based on environment size, not data volume. PostgreSQL databases can grow without triggering higher licensing costs:

  • Volume-Independent Licensing – Growing backup capacity does not translate into higher license fees, so data protection costs stay flat even as data volumes expand.
  • Predictable Cost Structure – A fixed pricing model lets teams plan infrastructure budgets without accounting for variable costs tied to storage growth or workload changes.
  • Workload-Agnostic Pricing – Database sizes, server counts and storage volumes have no effect on licensing costs.
  • Large-Scale Cost Benefits – Organizations protecting substantial or rapidly growing PostgreSQL databases gain increasingly significant economic advantages over capacity-priced competitors.
  • Service Provider Economics – MSPs take on clients with large or fast-growing datasets without absorbing the licensing cost increases that erode margins under per-terabyte pricing models.

Recovery & Business Continuity

Every recovery scenario has a defined path, from single-file restores to full site rebuilds:

  • System-Level Bare Metal Restore – Recovers a complete server from scratch, including OS, applications, configuration and data, without requiring a prior manual installation.
  • Cross-Platform Data Movement – Backup data can be recovered to a different operating system than its source, which gives teams options when like-for-like hardware is unavailable or a migration is underway.
  • Geographic Backup Replication – Backup sets are copied to geographically separate storage locations, so a site-wide outage does not take recovery points down with it.
  • Frequent Backup Scheduling – Backup intervals can be reduced to minutes, which pushes the potential data loss window well below what traditional hourly or nightly schedules allow.
  • Automated Restore Validation – Recoverability is confirmed through automated testing without administrator involvement or a separate validation process.

PostgreSQL Backup Types

There are two main methods to back up PostgreSQL databases with our tool: filesystem-level copies (physical backups) and SQL dumps (logical backups).

Physical Type

Filesystem-level backups, or physical backups, are essentially snapshots of all the files in the database cluster. Backing up those files is not as straightforward as it might seem, because they are constantly being rewritten while the server runs. Physical PostgreSQL backup therefore relies on two complementary techniques: continuous archiving and point-in-time recovery (PITR). Continuous archiving preserves the WAL chain, and PITR replays it to restore the instance to any chosen moment.

For a physical backup to be consistent, the copied files must reflect a single moment in time even though the server keeps writing to them. That is why PostgreSQL uses write-ahead logging: every change is recorded in the WAL before it reaches the data files. WAL segments are the exact files archived during continuous archiving, and the information they contain enables crash recovery and restores consistency when the base backup is replayed.

Databases commonly continue to change while a filesystem backup is running, and copying files mid-write can leave parts of the backup damaged or unusable. To prevent this, PostgreSQL provides a low-level API for the physical backup process. Calling pg_start_backup() before copying and pg_stop_backup() afterward tells the server to perform a checkpoint and mark the backup boundaries, so the copied files can later be brought to a consistent state. All WAL segments generated between those two calls still need to be captured. The filesystem-level copy together with those WAL segments is commonly referred to as a base backup.
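The sequence looks roughly like this on a pre-15 server (the functions were renamed pg_backup_start() and pg_backup_stop() in PostgreSQL 15); the label is arbitrary and the copy step uses ordinary filesystem tools:

```sql
-- Mark the start of the base backup; forces a checkpoint
SELECT pg_start_backup('nightly_base', false, false);

-- ... copy the data directory with filesystem tools ...

-- Mark the end; WAL generated between the two calls must also be archived
SELECT pg_stop_backup(false);
```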

Logical Type

An SQL dump, or logical backup, works differently from a physical backup. It uses PostgreSQL backup commands to recreate the database’s basic structure and populate it with data. An SQL dump consistently represents the state of the database at a given moment, since the dumping process runs like any other database session.

The process works as follows: the software reads through all available tables, then fetches all rows, preserving the order needed to restore everything exactly as it was backed up, including all connections and dependencies.

One thing to keep in mind with SQL dumps is that data from different tables may carry different timestamps. One table might be captured at timestamp A and another at timestamp B. This is worth noting when your database enforces rules about how rows and tables should relate to each other.


Frequently Asked Questions

What is PostgreSQL?

PostgreSQL is an open-source relational database management system with over 35 years of active development. It is one of the most widely used databases in the world, trusted by organizations of all sizes for its reliability, performance and extensibility. It supports both SQL and JSON querying and handles everything from small web applications to large-scale enterprise workloads.

Is pg_dump a real PostgreSQL backup solution?

pg_dump is a data export tool, not a complete backup solution. It does not capture WAL files, which makes point-in-time recovery impossible from a dump alone. It also excludes global objects like roles and tablespaces unless you run pg_dumpall separately. For production environments, pg_dump works best as part of a broader strategy that includes WAL archiving and automated scheduling, which is exactly what Bacula’s PostgreSQL backup tool handles automatically.

If I have replication, do I still need PostgreSQL backups?

Yes. Replication copies every change to your standby servers, including accidental ones. If a developer drops a table in production, that deletion replicates instantly to every replica. Backups with PITR let you recover to the exact moment before the mistake. Replication and backups solve different problems and should both be part of your recovery strategy.

What is the best way to automate PostgreSQL backups?

The most reliable approach for production environments combines scheduled full backups with continuous WAL archiving. Bacula Enterprise handles both automatically. Full, Incremental and Differential jobs run on a defined schedule, WAL archiving runs in the background, and everything is managed centrally without custom scripting.
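A weekly cycle of this kind maps naturally onto a Bacula Schedule resource. The fragment below is a sketch based on the standard Schedule syntax; names and times are examples to adapt to your environment:

```conf
# Illustrative Bacula Schedule -- adjust levels and times as needed
Schedule {
  Name = "pg-weekly-cycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}
```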

How does PostgreSQL point-in-time recovery work?

PostgreSQL records every database change in the Write-Ahead Log before applying it to data files. PITR works by restoring a base backup and then replaying WAL files up to the exact moment you need to recover to. This lets you recover from accidental data loss, dropped tables or failed deployments to a precise point in time rather than just the last scheduled backup.
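On PostgreSQL 12 and later, the replay target is expressed through recovery settings in postgresql.conf together with an empty recovery.signal file in the data directory. The paths and timestamp below are placeholders:

```conf
# postgresql.conf -- recovery settings, illustrative values
restore_command = 'cp /mnt/wal_archive/%f %p'   # fetch archived WAL segments
recovery_target_time = '2024-05-01 13:59:00'    # stop replay just before the mistake
recovery_target_action = 'promote'              # open the server once the target is reached
```

The server restores the base backup, replays WAL until recovery_target_time, then acts according to recovery_target_action.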

What PostgreSQL backup tool do I need for an enterprise environment?

Enterprise PostgreSQL environments need online backup without downtime, PITR, centralized management across multiple clusters, encryption, and integration with the rest of the infrastructure. Bacula Enterprise’s PostgreSQL plugin covers all of these within a single platform that also protects virtual machines, containers, cloud environments and other databases.