mirror of https://github.com/goharbor/harbor.git (synced 2024-12-18 06:38:19 +01:00)

commit 1a3cc0abf9 (parent ca80e52cb6): Adding missing files
BIN  docs/1.10/img/ha.png  (new binary file, 152 KiB; not shown)
docs/1.10/install_config/harbor_ha_helm.md  (new file, 76 lines)
@@ -0,0 +1,76 @@
# Deploying Harbor with High Availability via Helm

## Goal

Deploy Harbor on Kubernetes via Helm to make it highly available: if one of the nodes running Harbor's containers becomes inaccessible, users do not experience any interruption of Harbor's service.

## Prerequisites

- Kubernetes cluster 1.10+
- Helm 2.8.0+
- Highly available ingress controller (Harbor does not manage the external endpoint)
- Highly available PostgreSQL database (Harbor does not handle the deployment of an HA database)
- Highly available Redis (Harbor does not handle the deployment of HA Redis)
- PVC that can be shared across nodes, or external object storage

## Architecture

Most of Harbor's components are now stateless, so we can simply increase the number of pod replicas to make sure the components are distributed across multiple worker nodes, and leverage the `Service` mechanism of Kubernetes to ensure connectivity across pods.

As for the storage layer, users are expected to provide a highly available PostgreSQL database and Redis cluster for application data, and PVCs or object storage for storing images and charts.

![HA](../img/ha.png)
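
To illustrate the idea, here is a minimal sketch of the pattern the chart relies on (illustrative only, not the chart's actual manifests; the names, labels, and image tag are hypothetical): a `Service` load-balances across the replicas of a stateless component, so losing one node leaves the endpoint reachable.

```yaml
# Illustrative sketch: a Service fronting a multi-replica stateless component.
apiVersion: v1
kind: Service
metadata:
  name: harbor-core          # hypothetical name
spec:
  selector:
    app: harbor-core         # routes traffic to all matching pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-core
spec:
  replicas: 2                # >= 2 so the component survives a node loss
  selector:
    matchLabels:
      app: harbor-core
  template:
    metadata:
      labels:
        app: harbor-core
    spec:
      containers:
        - name: core
          image: goharbor/harbor-core:v1.10.0   # example tag
          ports:
            - containerPort: 8080
```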

## Usage

### Download Chart

Download the Harbor Helm chart:

```bash
helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --untar
```

### Configuration

Configure the following items in `values.yaml`. You can also set them as parameters via the `--set` flag when running `helm install`. A combined sketch of these settings appears after the list below.

- **Ingress rule**
  Configure `expose.ingress.hosts.core` and `expose.ingress.hosts.notary`.
- **External URL**
  Configure `externalURL`.
- **External PostgreSQL**
  Set `database.type` to `external` and fill in the information in the `database.external` section.

  Four empty databases should be created manually for `Harbor core`, `Clair`, `Notary server` and `Notary signer`, and configured in that section. Harbor will create the tables automatically when starting up.
- **External Redis**
  Set `redis.type` to `external` and fill in the information in the `redis.external` section.

  As the Redis client used by Harbor's upstream projects does not support `Sentinel`, Harbor can only work with a single-entry-point Redis. You can refer to this [guide](https://community.pivotal.io/s/article/How-to-setup-HAProxy-and-Redis-Sentinel-for-automatic-failover-between-Redis-Master-and-Slave-servers) to set up HAProxy in front of Redis to expose a single entry point.
- **Storage**
  By default, a default `StorageClass` is needed in the Kubernetes cluster to provision volumes to store images, charts and job logs.

  If you want to specify the `StorageClass`, set `persistence.persistentVolumeClaim.registry.storageClass`, `persistence.persistentVolumeClaim.chartmuseum.storageClass` and `persistence.persistentVolumeClaim.jobservice.storageClass`.

  If you use a `StorageClass`, whether the default or a specified one, set `persistence.persistentVolumeClaim.registry.accessMode`, `persistence.persistentVolumeClaim.chartmuseum.accessMode` and `persistence.persistentVolumeClaim.jobservice.accessMode` to `ReadWriteMany`, and make sure that the persistent volumes can be shared across different nodes.

  You can also use existing PVCs to store data: set `persistence.persistentVolumeClaim.registry.existingClaim`, `persistence.persistentVolumeClaim.chartmuseum.existingClaim` and `persistence.persistentVolumeClaim.jobservice.existingClaim`.

  If you have no PVCs that can be shared across nodes, you can use external object storage to store images and charts, and store the job logs in the database. Set `persistence.imageChartStorage.type` to the value you want to use, fill in the corresponding section, and set `jobservice.jobLogger` to `database`.

- **Replica**
  Set `portal.replicas`, `core.replicas`, `jobservice.replicas`, `registry.replicas`, `chartmuseum.replicas`, `clair.replicas`, `notary.server.replicas` and `notary.signer.replicas` to `n` (`n` >= 2).
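
The following is a minimal sketch of these settings in `values.yaml`, assuming an external PostgreSQL database, an external Redis behind a single entry point, and S3 object storage; all hostnames, credentials, regions, and bucket names are placeholders:

```yaml
expose:
  ingress:
    hosts:
      core: core.harbor.example.com       # placeholder hostname
      notary: notary.harbor.example.com   # placeholder hostname

externalURL: https://core.harbor.example.com

database:
  type: external
  external:
    host: postgres.example.com            # placeholder: your HA PostgreSQL endpoint
    port: "5432"
    username: harbor
    password: changeit                    # placeholder credential
    coreDatabase: registry                # the four manually created empty databases
    clairDatabase: clair
    notaryServerDatabase: notary_server
    notarySignerDatabase: notary_signer

redis:
  type: external
  external:
    host: redis.example.com               # placeholder: e.g. an HAProxy in front of Redis
    port: "6379"

persistence:
  imageChartStorage:
    type: s3
    s3:
      region: us-east-1                   # placeholder
      bucket: harbor-storage              # placeholder

portal:
  replicas: 2
core:
  replicas: 2
jobservice:
  replicas: 2
  jobLogger: database                     # store job logs in the database when no shared PVC exists
registry:
  replicas: 2
chartmuseum:
  replicas: 2
clair:
  replicas: 2
notary:
  server:
    replicas: 2
  signer:
    replicas: 2
```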

### Installation

Install the Harbor Helm chart with the release name `my-release`:

Helm 2:

```bash
helm install --name my-release .
```

Helm 3:

```bash
helm install my-release .
```

docs/1.10/install_config/helm_upgrade.md  (new file, 65 lines)
@@ -0,0 +1,65 @@
# Upgrading Harbor Deployed with Helm

This guide describes how to upgrade a Harbor instance deployed with the chart, starting from chart version 0.3.0.

## Notes

- As the database schema may change between different versions of Harbor, there is a process that migrates the schema during the upgrade, and the downtime cannot be avoided
- The database schema cannot be downgraded automatically, so `helm rollback` is not supported

## Upgrade

### 1. Back up the database

Back up the database used by Harbor in case the upgrade process fails.

### 2. Download the new chart

Download the latest version of the Harbor chart.

### 3. Configure the new chart

Configure the new chart to make sure that the configuration items have the same values as in the old one.

> Note: if TLS is enabled and the certificate was generated automatically by the chart, a new certificate will be generated and will overwrite the old one during the upgrade. This may cause issues if you have already distributed the certificate. You can follow the steps below to configure the new chart to use the old certificate:

1) Get the name of the secret in which the certificate is stored:

```bash
kubectl get secret
```

Find the secret whose name ends with `-harbor-ingress` (service exposed via `Ingress`) or `-harbor-nginx` (service exposed via `ClusterIP` or `NodePort`).

2) Export the secret as a YAML file:

```bash
kubectl get secret <secret-name-from-step-1> -o yaml > secret.yaml
```

3) Rename the secret by setting `metadata.name` in `secret.yaml`, for example:
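
A sketch of the edited `secret.yaml` (the old and new names here are hypothetical; only `metadata.name` changes, the data stays as exported):

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: my-release-harbor-ingress-old     # renamed; was my-release-harbor-ingress
  # remove server-generated fields (uid, resourceVersion, creationTimestamp)
  # before creating the new secret
data:
  tls.crt: <base64-encoded certificate>   # as exported in step 2
  tls.key: <base64-encoded key>
  ca.crt: <base64-encoded CA certificate>
```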
4) Create a new secret:

```bash
kubectl create -f secret.yaml
```

5) Configure the chart to use the new secret by setting `expose.tls.secretName` to the value you set in step **3**, for example:
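
A sketch of the corresponding `values.yaml` entry (the secret name is the hypothetical one from the sketch above):

```yaml
expose:
  tls:
    enabled: true
    secretName: my-release-harbor-ingress-old   # the renamed secret from step 3
```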
### 4. Upgrade

Run the upgrade command:

```bash
helm upgrade release-name --force .
```

> The `--force` flag is necessary when upgrading from version 0.3.0, due to issue [#30](https://github.com/goharbor/harbor-helm/issues/30).

## Known issues

- The job logs will be lost if you upgrade from version 0.3.0, as the logs are stored in an `emptyDir` in 0.3.0.