[pbs-devel] [RFC proxmox proxmox-backup 00/39] S3 storage backend for datastores

Christian Ebner c.ebner at proxmox.com
Mon May 19 13:46:01 CEST 2025


Disclaimer: These patches are in a development state and are not
intended for production use.

This patch series aims to add S3-compatible object stores as a storage
backend for PBS datastores. A local PBS cache store using the regular
datastore layout is used for faster operation, bypassing requests to
the S3 API where possible. Further, the local cache store keeps
frequently used chunks and avoids expensive metadata updates on the
object store, e.g. by using local marker files during garbage
collection.
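The marker-file idea can be sketched as follows. This is an illustrative
Python sketch with hypothetical names (`garbage_collect`, `marker_dir`), not
the series' actual Rust implementation: phase 1 touches a local marker file
per referenced chunk, phase 2 removes unmarked chunks from the S3 backend,
so chunk liveness is tracked locally instead of via per-object S3 metadata
updates.

```python
import os
import tempfile

def garbage_collect(referenced_digests, s3_objects, marker_dir):
    """Two-phase GC: mark referenced chunks locally, sweep unmarked S3 objects."""
    # Phase 1: touch a local marker file per chunk still referenced by an index.
    for digest in referenced_digests:
        open(os.path.join(marker_dir, digest), "w").close()
    # Phase 2: delete S3 objects that have no local marker.
    removed = [d for d in list(s3_objects)
               if not os.path.exists(os.path.join(marker_dir, d))]
    for d in removed:
        del s3_objects[d]  # stands in for an S3 DeleteObject request
    return removed

with tempfile.TemporaryDirectory() as tmp:
    store = {"aaa": b"x", "bbb": b"y", "ccc": b"z"}  # mock S3 bucket contents
    removed = garbage_collect({"aaa", "ccc"}, store, tmp)
    print(sorted(removed))  # ['bbb']
```

Only the unreferenced chunk is swept; the S3 objects themselves are never
touched during the mark phase.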

Backups are created by uploading chunks to the corresponding S3 bucket,
while keeping the index files in the local cache store. On backup
finish, the snapshot metadata is persisted to the S3 storage backend.

Snapshot restores read chunks preferably from the local cache store,
downloading them from the S3 object store and inserting them into the
cache if not present.
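The cache-first fetch path can be sketched as below; this is a hypothetical
Python sketch (the names `LocalCache` and `fetch_chunk` are illustrative, not
the patch series' API), with a plain callable standing in for the S3 client's
GetObject request.

```python
class LocalCache:
    """Minimal stand-in for the local datastore chunk cache."""
    def __init__(self):
        self._chunks = {}
    def get(self, digest):
        return self._chunks.get(digest)
    def insert(self, digest, data):
        self._chunks[digest] = data

def fetch_chunk(cache, s3_get_object, digest):
    """Prefer the local cache; on miss, download from S3 and cache the chunk."""
    data = cache.get(digest)
    if data is not None:
        return data
    data = s3_get_object(digest)  # S3 GetObject, keyed by chunk digest
    cache.insert(digest, data)
    return data

# Usage with a mock S3 backend that counts round trips:
remote = {"abc123": b"chunk-data"}
calls = []
def s3_get(digest):
    calls.append(digest)
    return remote[digest]

cache = LocalCache()
fetch_chunk(cache, s3_get, "abc123")  # S3 round trip, populates the cache
fetch_chunk(cache, s3_get, "abc123")  # served from the local cache
print(len(calls))  # 1
```

The second fetch never reaches the backend, which is what keeps restore
traffic (and S3 request costs) down for frequently used chunks.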

Listing and snapshot metadata operations currently rely solely on the
local cache store, with the intention to provide a mechanism to
re-sync and merge with objects stored on the S3 backend if requested.

Sending this patch series as an RFC to get some initial feedback, mostly
on the S3 client implementation and the corresponding configuration
integration with PBS, which are already in an advanced stage and
warrant initial review and real-world testing.

Datastore operations on the S3 backend are still work in progress,
but feedback on them is very much appreciated as well.

Among the open points still being worked on are:
- Locking mechanism and consistency between local cache and S3 store.
- Sync and merge of namespace, group snapshot and index files when
  required or requested.
- Advanced packing mechanism for chunks to significantly reduce the
  number of api requests and therefore be more cost effective.
- Reduction of in-memory copies for chunks/blobs and recalculation of
  checksums.
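The packing idea from the list above is not implemented yet; a speculative
sketch (hypothetical names, simple Python model) of what it could look like:
many chunks concatenated into one pack object, uploaded with a single
PutObject, plus a small offset table to locate each chunk on read.

```python
def pack_chunks(chunks):
    """chunks: dict digest -> bytes. Returns (packed_blob, offset_table)."""
    blob = bytearray()
    table = {}
    for digest, data in chunks.items():
        table[digest] = (len(blob), len(data))  # (offset, length) in the pack
        blob.extend(data)
    return bytes(blob), table

def read_from_pack(blob, table, digest):
    """Locate one chunk inside a pack via its offset table."""
    off, length = table[digest]
    return blob[off:off + length]

# Two chunks become a single object (one PutObject instead of two):
blob, table = pack_chunks({"d1": b"aaaa", "d2": b"bb"})
print(read_from_pack(blob, table, "d2"))  # b'bb'
```

On the read side, a ranged GetObject over `(offset, length)` would fetch a
single chunk without downloading the whole pack.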

Testing:
For testing, an S3-compatible object store provided via the Ceph RADOS
gateway can be set up as follows. This was performed on a pre-existing
Ceph Reef 18.2 cluster.

Install radosgw on all the nodes:
```
apt install radosgw
```

On one node, generate the client keyring:
```
ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring
```

For each node, generate a key and add it to the keyring (adapt the name
accordingly):
```
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.pve-c0-n1 --gen-key
```

Set up capabilities for the client keys:
```
ceph-authtool -n client.radosgw.pve-c0-n1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
```

Add the keys (repeat for each) to the cluster:
```
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.pve-c0-n1 -i /etc/pve/priv/ceph.client.radosgw.keyring
```

For each client, add a config section based on the one below to /etc/ceph/ceph.conf:
```
[client.radosgw.pve-c0-n1]
        host = pve-c0-n1
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = s3.pve-c0-n1.local
```

Restart the service on each node, e.g.
```
systemctl daemon-reload
systemctl restart radosgw.service
```

Set up a new user; the generated access key and secret key are shown in
the output:
```
radosgw-admin user create --uid=testuser --display-name="TestUser" --email=your at mail.com
```

Since the configuration and keyring are located on the pmxcfs, add
the following override to
`/etc/systemd/system/radosgw.service.d/override.conf` so the gateway
service is only started after pve-cluster:
```
[Unit]
Documentation=man:systemd-sysv-generator(8)
SourcePath=/etc/init.d/radosgw
Description=LSB: radosgw RESTful rados gateway
After=pve-cluster.service
Wants=pve-cluster.service
```

Since the client enforces TLS, a custom certificate must be added by
extending the config with the paths to a custom-generated certificate
and key:
```
[client.radosgw.pve-c0-n1]
	host = pve-c0-n1
	keyring = /etc/pve/priv/ceph.client.radosgw.keyring
	log file = /var/log/ceph/client.radosgw.$host.log
	rgw_dns_name = s3.pve-c0-n1.local
	rgw_frontends = "beast ssl_port=7480 ssl_certificate=/etc/pve/ceph/server-cert.pem ssl_private_key=/etc/pve/ceph/server-key.pem"
```

Finally, a new bucket can be created using the `s3cmd` CLI tool after
its initial configuration.

proxmox:

Christian Ebner (2):
  pbs-api-types: add types for S3 client configs and secrets
  pbs-api-types: extend datastore config by backend config enum

 pbs-api-types/src/datastore.rs |  58 +++++++++++++-
 pbs-api-types/src/lib.rs       |   3 +
 pbs-api-types/src/s3.rs        | 138 +++++++++++++++++++++++++++++++++
 3 files changed, 198 insertions(+), 1 deletion(-)
 create mode 100644 pbs-api-types/src/s3.rs

proxmox-backup:

Christian Ebner (37):
  fmt: fix minor formatting issues
  verify: refactor verify related functions to be methods of worker
  s3 client: add crate for AWS S3 compatible object store client
  s3 client: implement AWS signature v4 request authentication
  s3 client: add dedicated type for s3 object keys
  s3 client: add helper for last modified timestamp parsing
  s3 client: add helper to parse http date headers
  s3 client: implement methods to operate on s3 objects in bucket
  config: introduce s3 object store client configuration
  api: config: implement endpoints to manipulate and list s3 configs
  api: datastore: check S3 backend bucket access on datastore create
  datastore: allow to get the backend for a datastore
  api: backup: store datastore backend in runtime environment
  api: backup: conditionally upload chunks to S3 object store backend
  api: backup: conditionally upload blobs to S3 object store backend
  api: backup: conditionally upload indices to S3 object store backend
  api: backup: conditionally upload manifest to S3 object store backend
  api: reader: fetch chunks based on datastore backend
  datastore: local chunk reader: read chunks based on backend
  verify worker: add datastore backed to verify worker
  verify: implement chunk verification for stores with s3 backend
  api: remove snapshot from S3 backend on snapshot delete
  datastore: prune groups/snapshots from S3 object store backend
  datastore: implement garbage collection for s3 backend
  ui: add S3 client edit window for configuration create/edit
  ui: add S3 client view for configuration
  ui: expose the S3 client view in the navigation tree
  ui: add s3 bucket selector and allow to set s3 backend
  api/bin: add endpoint and command to test s3 backend for datastore
  tools: lru cache: add removed callback for evicted nodes
  tools: async lru cache: implement insert, remove and contains methods
  datastore: add local datastore cache for network attached storages
  api: backup: use local datastore cache on S3 backend chunk upload
  api: reader: use local datastore cache on S3 backend chunk fetching
  api: backup: add no-cache flag to bypass local datastore cache
  datastore: get and set owner for S3 store backend
  datastore: create namespace marker in S3 backend

 Cargo.toml                                    |   6 +
 examples/upload-speed.rs                      |   1 +
 pbs-client/src/backup_writer.rs               |   4 +-
 pbs-config/src/lib.rs                         |   1 +
 pbs-config/src/s3.rs                          |  73 ++
 pbs-datastore/Cargo.toml                      |   3 +
 pbs-datastore/src/cached_chunk_reader.rs      |   6 +-
 pbs-datastore/src/datastore.rs                | 387 +++++++-
 pbs-datastore/src/dynamic_index.rs            |   1 +
 pbs-datastore/src/lib.rs                      |   4 +
 pbs-datastore/src/local_chunk_reader.rs       |  37 +-
 .../src/local_datastore_lru_cache.rs          | 116 +++
 pbs-s3-client/Cargo.toml                      |  28 +
 pbs-s3-client/src/aws_sign_v4.rs              | 140 +++
 pbs-s3-client/src/client.rs                   | 501 ++++++++++
 pbs-s3-client/src/lib.rs                      | 220 +++++
 pbs-s3-client/src/object_key.rs               |  64 ++
 pbs-s3-client/src/response_reader.rs          | 324 +++++++
 pbs-tools/src/async_lru_cache.rs              |  46 +-
 pbs-tools/src/lru_cache.rs                    |  42 +-
 proxmox-backup-client/src/benchmark.rs        |   1 +
 proxmox-backup-client/src/main.rs             |   8 +
 src/api2/admin/datastore.rs                   | 129 ++-
 src/api2/backup/environment.rs                | 145 ++-
 src/api2/backup/mod.rs                        | 107 +--
 src/api2/backup/upload_chunk.rs               | 112 ++-
 src/api2/config/datastore.rs                  |  41 +-
 src/api2/config/mod.rs                        |   2 +
 src/api2/config/s3.rs                         | 349 +++++++
 src/api2/reader/environment.rs                |  12 +-
 src/api2/reader/mod.rs                        |  60 +-
 src/backup/verify.rs                          | 879 +++++++++---------
 src/bin/proxmox_backup_manager/datastore.rs   |  24 +
 src/server/push.rs                            |   1 +
 src/server/verify_job.rs                      |  12 +-
 www/Makefile                                  |   3 +
 www/NavigationTree.js                         |   6 +
 www/config/S3BucketView.js                    | 144 +++
 www/form/S3BucketSelector.js                  |  40 +
 www/window/DataStoreEdit.js                   |  35 +
 www/window/S3BucketEdit.js                    | 125 +++
 41 files changed, 3618 insertions(+), 621 deletions(-)
 create mode 100644 pbs-config/src/s3.rs
 create mode 100644 pbs-datastore/src/local_datastore_lru_cache.rs
 create mode 100644 pbs-s3-client/Cargo.toml
 create mode 100644 pbs-s3-client/src/aws_sign_v4.rs
 create mode 100644 pbs-s3-client/src/client.rs
 create mode 100644 pbs-s3-client/src/lib.rs
 create mode 100644 pbs-s3-client/src/object_key.rs
 create mode 100644 pbs-s3-client/src/response_reader.rs
 create mode 100644 src/api2/config/s3.rs
 create mode 100644 www/config/S3BucketView.js
 create mode 100644 www/form/S3BucketSelector.js
 create mode 100644 www/window/S3BucketEdit.js

-- 
2.39.5




