This guide describes how to install and configure Harbor for high availability. It supplements the [Installation Guides](installation_guide.md) and assumes that you are familiar with the material in those guides.
**Important** This guide was last updated as of the Harbor 1.4.0 release. It does not apply to releases before 1.4.0. Use it at your own discretion when planning your Harbor high availability implementation.
This document discusses some common methods of implementing highly available systems, with an emphasis on the core Harbor services and other open source services that are closely aligned with Harbor.
You will need to address high availability concerns for any application software that you run in your Harbor environment. The important thing is to make sure that your services are redundant and available. How you achieve that is up to you.
### Stateless service
To make a stateless service highly available, you need to provide redundant instances and load balance them. Harbor services that are stateless include:
- Adminserver
- UI
- Registry
- Logs
- Jobservice
- Clair
- Proxy
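Since every Harbor node runs the same set of stateless containers, a quick way to confirm the redundancy is in place is to check that these containers are up on each node. A minimal sketch, assuming Harbor was installed via docker-compose and that ```~/harbor``` is where you unpacked the installer (that path is an assumption):
```
# Run on each Harbor node; the installer directory is an assumption.
cd ~/harbor

# All stateless services (adminserver, ui, registry, jobservice,
# clair, proxy, log) should show a state of "Up".
docker-compose ps
```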
### Stateful service
Stateful services are more difficult to manage. Providing additional instances and load balancing does not solve the problem. Harbor's stateful services include:
- Harbor database (MariaDB)
- Clair database (PostgreSQL)
- Image data (shared storage)
- Redis (UI session data and registry metadata cache)
**VIP**: [Virtual IP](https://en.wikipedia.org/wiki/Virtual_IP_address). Harbor users access Harbor through this virtual IP address. The VIP is active on only one load balancer node at a time, and it automatically switches to the other node if the active load balancer node goes down.
**LoadBalancer 01 and 02**: Together they form a group that avoids a single point of failure on the load balancer nodes. [Keepalived](https://www.keepalived.org) is installed on both load balancer nodes. The two Keepalived instances form a VRRP group that provides the VIP and ensures the VIP is present on only one node at a time. The LVS component in Keepalived is responsible for balancing the requests between the different Harbor servers according to the routing algorithm.
**Harbor server 1..n**: These are the running Harbor instances. They operate in active-active mode, and users can set up multiple nodes according to their workload.
**Harbor DB cluster**: MariaDB is used by Harbor to store user authentication information, image metadata, and so on. Users should follow MariaDB best practices to make it HA protected.
**Clair DB cluster**: PostgreSQL is used by Clair to store the vulnerability data used when scanning images. Users should follow PostgreSQL best practices to make it HA protected.
**Shared Storage**: The shared storage holds the Docker volumes used by Harbor; images pushed by users are actually stored here. It ensures that multiple Harbor instances have a consistent storage backend. The shared storage can be Swift, NFS, S3, Azure, GCS, Ceph, or OSS. Users should follow the best practices of the chosen storage to make it HA protected.
**Redis**: Redis stores UI session data and the registry metadata cache. When one Harbor instance fails, or the load balancer routes a user request to another Harbor instance, any Harbor instance can query Redis to retrieve the session information so that the end user keeps a continuous session. Users should follow Redis best practices to make it HA protected.
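A simple way to verify that every Harbor node can reach the shared Redis is shown in the sketch below (the hostname and port are placeholders for your environment):
```
# Run from each Harbor node; a healthy Redis replies with "PONG".
redis-cli -h your_redis_host -p 6379 ping
```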
As the above high availability architecture shows, we do not set up a load balancer per stateless service. Instead, the stateless services are grouped together, and communication between the services is protected by host-based Docker network isolation. **Note**: Because the components communicate with each other through REST APIs, you can always define the group granularity according to your own scenarios.
By following the setup instructions in this section you can build a Harbor high availability deployment as the following figure shows. You can set up more Harbor nodes if needed.
- 6> n VMs for Harbor stateless services (n >= 2); in this example we will set up 2 Harbor nodes.
- 7> n+1 static IPs (1 for the VIP and the other n IPs will be used by the Harbor stateless servers)
**Important** Items 1, 2, 3, and 4 are stateful components of Harbor. Before configuring Harbor HA, we assume these components are already present and that all of them are HA protected. Otherwise, any of them can be a single point of failure.
The shared storage is replaceable; you can choose another shared storage as long as it is supported by the registry: https://docs.docker.com/registry/storage-drivers
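For example, to use an S3 backend instead of the local filesystem, you would change the registry storage provider settings in harbor.cfg before running the installer. The sketch below assumes the ```registry_storage_provider_config``` property and the S3 option format from the harbor.cfg template; verify both against the template shipped with your Harbor release:
```
# A sketch: run in the Harbor installer directory before installation.
cd ~/harbor
sed -i 's/^registry_storage_provider_name =.*/registry_storage_provider_name = s3/' harbor.cfg
# The option string below is a placeholder; fill in your own region, bucket and keys.
sed -i 's|^registry_storage_provider_config =.*|registry_storage_provider_config = region: us-east-1, bucket: your-bucket, accesskey: your-access-key, secretkey: your-secret-key|' harbor.cfg
```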
Import registry.sql into your MariaDB cluster to initialize the Harbor database:
```
#> mysql -u your_db_username -p -h your_db_ip < registry.sql
```
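You can verify that the import succeeded by listing the tables in the newly created database (a sketch; ```registry``` is the database name created by registry.sql, and the credentials are the same placeholders as above):
```
#> mysql -u your_db_username -p -h your_db_ip -e "USE registry; SHOW TABLES;"
```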
### Load balancer setup
Because all the Harbor nodes are active, a load balancer is needed to efficiently distribute incoming requests between the Harbor nodes. You can choose either a hardware or a software load balancer, whichever is more convenient for you.
Save the [Keepalived configuration file](https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/keepalived_active_active.conf) to ```/etc/keepalived/keepalived.conf```
Save the server [health check](https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/check.sh) script to ```/usr/local/bin/check.sh```
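Keepalived must be installed on the node and restarted after the configuration and health check script are in place, and the script must be executable. A minimal sketch, assuming an Ubuntu load balancer node managed by systemd (package and service names may differ on your distribution):
```
# Install Keepalived, which provides the VRRP and LVS functionality described above.
apt-get update && apt-get install -y keepalived

# Make the health check script executable so Keepalived can invoke it.
chmod +x /usr/local/bin/check.sh

# Restart Keepalived to load /etc/keepalived/keepalived.conf.
systemctl restart keepalived
```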
Follow the same steps 1 to 5 as listed for LoadBalancer01, only change the ```priority``` to 20 in /etc/keepalived/keepalived.conf in step 2. The node with the higher priority number will hold the VIP address.
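Once both load balancer nodes are running Keepalived, you can verify the failover behavior with the following sketch (the interface name ```eth0``` is an assumption, and 192.168.1.220 is the example VIP used later in this guide):
```
# On each load balancer node: the VIP should appear on exactly one node
# at a time, normally the node with the higher priority.
ip addr show eth0 | grep 192.168.1.220

# On the node holding the VIP: list the LVS virtual server and the real
# (Harbor) servers it balances across (requires the ipvsadm tool).
ipvsadm -L -n
```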
If you set ```filesystem``` as the ```registry_storage_provider_name```, you must make sure the registry directory ```/data/registry``` is mounted to a shared storage such as NFS, Ceph, etc. You need to create the /data/registry directory first and change its owner to 10000:10000, because the registry runs as user ID 10000 and group ID 10000.
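With an NFS backend, for example, the preparation might look like the following sketch (the NFS server address and export path are placeholders for your environment):
```
# Create the mount point and mount the shared export so that all Harbor
# nodes see the same image data.
mkdir -p /data/registry
mount -t nfs your_nfs_server:/exports/harbor-registry /data/registry

# Give the directory to the registry user and group (10000:10000) after mounting,
# so the ownership applies to the shared storage itself.
chown 10000:10000 /data/registry
```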
##### 9> (Optional) If you enable HTTPS, you need to prepare the certificate and key and copy them to the ```/data/cert/``` directory (you need to create that folder if it does not exist).
If you want to keep your own filenames for the certificate, you need to modify the ```ssl_cert``` and ```ssl_cert_key``` properties in harbor.cfg. If you use a certificate signed by a private CA, then you need to put your CA file at /data/ca_download/ca.crt.
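A sketch of preparing the certificate files (the filenames below are placeholders; if you keep names other than the defaults, update ```ssl_cert``` and ```ssl_cert_key``` in harbor.cfg as described above):
```
# Create the certificate directory if it does not exist and copy the files in.
mkdir -p /data/cert
cp your_harbor.crt your_harbor.key /data/cert/

# If the certificate is signed by a private CA, make the CA certificate
# available to clients through Harbor.
mkdir -p /data/ca_download
cp your_ca.crt /data/ca_download/ca.crt
```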
You need to change 192.168.1.220 to your VIP address before issuing the following commands. If you just use HTTP for Harbor, then you don't need to run the second command.
Harbor 1.4 supports stopping a running job. However, in HA scenarios you may not be able to stop a job, because the job status is currently stored in memory instead of persistent storage and the stop request may not be scheduled to the node that is executing the job. We plan to refactor the jobservice model to remove this limitation in the next release.