Migrate Appsmith Helm Deployment from Non-HA to HA (Kubernetes)
This guide explains how to migrate an existing Appsmith Helm deployment from single-pod storage (typically ReadWriteOnce) to shared storage (ReadWriteMany) so you can run Appsmith in high availability mode with multiple pods.
The default Helm configuration sets autoscaling.enabled: false, which deploys Appsmith as a StatefulSet and creates a pod volume mounted at /appsmith-stacks. In most clusters, this uses the default StorageClass and is not shareable across multiple pods.
To enable HA safely, migrate data to a new ReadWriteMany (RWX) volume first, then cut Appsmith over to that claim.
For all available chart parameters, see Helm values.yaml.
System requirements
- At least 2 GB of free storage space for backup and migration tasks.
Prerequisites
Before you begin, ensure:
- You already have Appsmith installed on Kubernetes using Helm.
- You can run kubectl and helm commands against your cluster.
- You have a maintenance window for the final cutover.
- You have a backup of the instance. See Backup instance.
Step 1: Provision RWX storage outside the Helm chart
Create a new PersistentVolume (PV) and PersistentVolumeClaim (PVC) backed by a RWX-capable storage class using your cloud/on-prem CSI driver.
Use provider guides as needed:
- AWS EKS + EFS CSI driver: Install the Amazon EFS CSI driver
- Google Kubernetes Engine + Filestore CSI driver: Use Filestore with GKE
- Generic Kubernetes storage: Persistent Volumes
For this migration scenario, it is cleaner to create and manage the PV/PVC outside the Appsmith Helm chart and then reference the PVC from values.yaml.
In the examples below, the target PVC name is appsmith-data-ha.
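As an illustration, a statically provisioned RWX pair on AWS EFS might look like the sketch below. The filesystem ID, storage class name, and capacity are placeholders and assumptions for this example, not values prescribed by this guide; adapt them to your CSI driver.

```yaml
# Hypothetical static PV/PVC pair backed by EFS; adjust to your environment.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: appsmith-data-ha-pv
spec:
  capacity:
    storage: 10Gi                        # EFS is elastic, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc               # assumed storage class name
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appsmith-data-ha                 # referenced later from values.yaml
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
```

Apply the PVC in the same namespace as the Appsmith release so the pods can bind to it.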
Step 2: Mount the new PVC as a temporary secondary path
Follow the steps below to mount the new claim as a temporary secondary path:

1. Update values.yaml to add the new claim as an extra volume and mount point:

   ```yaml
   extraVolumes:
     - name: appsmith-data-ha-volume
       persistentVolumeClaim:
         claimName: appsmith-data-ha

   extraVolumeMounts:
     - name: appsmith-data-ha-volume
       mountPath: /appsmith-stacks-ha
   ```

2. Apply the change:

   ```bash
   helm upgrade -i <release_name> appsmith-ee/appsmith -n <namespace> -f values.yaml
   ```

3. Wait until Appsmith is healthy:

   ```bash
   kubectl get pods -n <namespace>
   ```
Step 3: Copy data from current volume to RWX volume
Follow the steps below to copy data from the existing volume to the new RWX volume:

1. Open a shell in the running Appsmith pod:

   ```bash
   kubectl exec -it pod/<appsmith_pod_name> -n <namespace> -- bash
   ```

2. Copy all data from the original path to the temporary RWX path:

   ```bash
   cp -a /appsmith-stacks/. /appsmith-stacks-ha/
   ```

   The trailing `/.` ensures hidden files at the top level are copied, which a `*` glob would skip.
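When copying directory trees like this, note that shell globs (`src/*`) skip top-level hidden files, while copying `src/.` includes them. A quick local sketch of the difference, using temporary scratch directories in place of the pod paths:

```shell
# Scratch dirs stand in for /appsmith-stacks and /appsmith-stacks-ha.
src=$(mktemp -d); glob_dst=$(mktemp -d); dot_dst=$(mktemp -d)
touch "$src/file.txt" "$src/.hidden"

cp -a "$src"/* "$glob_dst"/   # glob form: .hidden is not matched
cp -a "$src"/. "$dot_dst"/    # dot form: copies the full contents

ls -A "$glob_dst"             # only file.txt
ls -A "$dot_dst"              # file.txt and .hidden
```

`cp -a` also preserves ownership, permissions, and timestamps, which matters for data the Appsmith containers expect to own.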
Step 4: Cut over Appsmith to the new claim
Follow the steps below to cut Appsmith over to the new claim:

1. Remove the temporary extraVolumes and extraVolumeMounts entries from values.yaml.

2. Set Appsmith persistence to use the new claim directly:

   ```yaml
   persistence:
     existingClaim:
       enabled: true
       name: appsmith-data-ha
       claimName: appsmith-data-ha
   ```

3. Apply the final cutover:

   ```bash
   helm upgrade -i <release_name> appsmith-ee/appsmith -n <namespace> -f values.yaml
   ```
Step 5: Enable HA replicas
If autoscaling is not already enabled, follow the steps below to enable it:

1. Update values.yaml:

   ```yaml
   autoscaling:
     enabled: true
     minReplicas: 2
     maxReplicas: 2
   ```

2. Apply the update:

   ```bash
   helm upgrade -i <release_name> appsmith-ee/appsmith -n <namespace> -f values.yaml
   ```

3. Verify that multiple healthy pods are running:

   ```bash
   kubectl get pods -n <namespace>
   ```
Data consistency note
Data written between the copy step and final cutover can be missed.
To minimize risk:
- Keep the copy-to-cutover window as short as possible.
- Consider temporarily blocking user traffic (for example, disable ingress) during final cutover.
- If needed, run an additional sync pass (for example, with rsync) just before cutover.
In most cases, the highest-risk loss is recent filesystem writes such as logs, but configuration artifacts can also be affected if the copy-to-cutover window is long.