Drive snapshot 1.4

RKE clusters can be configured to automatically take snapshots of etcd. In a disaster scenario, you can restore these snapshots, which are stored on other nodes in the cluster. Snapshots are always saved locally in /opt/rke/etcd-snapshots, and RKE can also upload your snapshots to an S3-compatible backend. Note: as of RKE v0.2.0, the file is no longer required because of a change in how the Kubernetes cluster state is stored. You can create one-time snapshots to back up your cluster, and you can also configure recurring snapshots. You can then use RKE to restore your cluster from backup. The exact backup and restore scenarios differ based on your version of RKE.
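As an illustration, a one-time snapshot can be taken with the rke CLI. This is a minimal sketch: the snapshot name and the cluster.yml path are placeholders, and the S3 flags are only needed if you also want the snapshot uploaded to an S3-compatible backend.

    # Take a one-time snapshot; it is written to /opt/rke/etcd-snapshots on the etcd nodes
    rke etcd snapshot-save --config cluster.yml --name one-time-snapshot

    # The same command with optional S3 upload (credentials, bucket, and endpoint are placeholders)
    rke etcd snapshot-save --config cluster.yml --name one-time-snapshot \
        --s3 --access-key <access-key> --secret-key <secret-key> \
        --bucket-name <bucket> --s3-endpoint s3.amazonaws.com

Recurring snapshots, by contrast, are typically configured in the etcd service section of cluster.yml rather than taken from the CLI.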

How Snapshots Work

For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3. The snapshot is stored in /opt/rke/etcd-snapshots. If that directory is configured on the nodes as a shared mount, it will be overwritten; on S3, the snapshot will always be from the last node that uploads it, as all etcd nodes upload it and only the last upload remains. When multiple etcd nodes exist, any snapshot is created only after the cluster has been health checked, so it can be considered a valid snapshot of the data in the etcd cluster. The name of the snapshot is auto-generated, and each snapshot includes the cluster state file in addition to the etcd snapshot file.
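To recover from a snapshot, the restore flow is driven by the same CLI. A minimal sketch, assuming a snapshot named one-time-snapshot already exists locally under /opt/rke/etcd-snapshots (add the S3 flags shown above if the snapshot lives in an S3-compatible backend):

    # Restore etcd from the named snapshot and bring the cluster back up with the restored data
    rke etcd snapshot-restore --config cluster.yml --name one-time-snapshot

Because the restore rebuilds etcd from the snapshot data, any changes made to the cluster after the snapshot was taken are lost.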