First wave of doc reorg

This commit is contained in:
Stuart Clements 2019-10-17 15:47:25 +02:00
parent ddb83574a7
commit cb7d6eca93
52 changed files with 9298 additions and 0 deletions

View File

@ -0,0 +1,73 @@
## Table of Contents
### User documents
**[Installation and Configuration Guide](installation_guide.md) (Read this first!)**
Guide for the Harbor online and offline installers.
**[Harbor User Guide](user_guide.md)**
How to use Harbor to manage images, projects, replications and users.
[Configuring HTTPS for Harbor](configure_https.md)
Configure a secure connection between Harbor and the Docker client.
[Upgrade and Data Migration Guide](migration_guide.md)
Data migration may be needed when upgrading Harbor to a newer version.
[Deploy Harbor on Kubernetes](kubernetes_deployment.md)
Guide to deploying Harbor on Kubernetes. (maintained by the community)
### Developer documents
[Architecture Overview of Harbor](https://github.com/vmware/harbor/wiki/Architecture-Overview-of-Harbor)
Developers read this first.
[Build Harbor from Source](compile_guide.md)
How to build Harbor from source code.
[Harbor API Specs by Swagger](configure_swagger.md)
Use Swagger to find out the specs of Harbor API.
[Internationalization Guide](developer_guide_i18n.md)
How to add your local language to Harbor.
[Python SDK](../contrib/registryapi) (by community)
[Harbor CLI](https://github.com/int32bit/harborclient) (by community)
[Deploy Harbor using Docker Machine](../contrib/deploying_using_docker_machine.md) (by community)
[Configuring Harbor as a local registry mirror](../contrib/Configure_mirror.md) (by community)
### Articles from the community
[Remote site replicated Docker Registries with VMware Harbor](http://www.vmtocloud.com/remote-site-replicated-docker-registries-with-vmware-harbor/)
[Hybrid cloud Docker Registry with VMware Harbor](http://www.vmtocloud.com/hybrid-cloud-docker-registry-with-vmware-harbor/)
[Harbor Registry Blueprint for vRA](http://www.vmtocloud.com/harbor-registry-blueprint-is-here/)
[Architecture of Harbor: An Open Source Enterprise-class Registry Server](http://www.think-foundry.com/architecture-of-harbor-an-open-source-enterprise-class-registry-server/)
[Private Harbor Registry Achieves High Availability based on Virtual SAN](http://www.think-foundry.com/private-docker-registry-harbor-achieves-ha-based-on-virtual-san/)
[Working with Harbor Registry REST API via Swagger](http://www.think-foundry.com/working-with-harbor-registry-rest-api-via-swagger/)
[How to use Harbor with Minio](https://blog.minio.io/how-to-use-vmware-harbor-with-minio-c07a5c4ae31b)
[Harbor, an enterprise class registry server](https://vorcunus.blog/2017/03/11/harbor-an-enterprise-class-registry-server/)
[Hybrid Container Management for vCloud Director with Harbor](https://blogs.vmware.com/vcat/2017/03/hybrid-container-management-vcloud-director-vmware-harbor.html)
[Project Harbor Reached Milestone of 2000 Stars](http://www.think-foundry.com/project-harbor-reaches-milestone-2000-stars-github/)
[Project Harbor in action](http://cormachogan.com/2016/08/05/project-harbor-action/)
[Using vSphere docker volume driver to run Project Harbor on VSAN](http://cormachogan.com/2016/07/29/using-vsphere-docker-volume-driver-run-project-harbor-vsan/)
[Overall Architecture of Harbor Registry](http://www.compare-review-information.com/overall-architecture-of-harbor-registry/)
[Making a Private Secured Docker Registry in 15 Minutes](http://alexanderzeitler.com/articles/deploying-a-private-secured-docker-registry-within-15-minutes/)
[Docker Private Registry Using Harbor](https://blog.imaginea.com/docker-private-registry-using-harbor-2/)

View File

@ -0,0 +1,49 @@
# Harbor Documentation
This is the main table of contents for the Harbor documentation.
## Harbor Installation and Configuration
This section describes how to install Harbor and perform the required initial configurations. These day 1 operations are performed by the Harbor Administrator.
- [Installing Harbor](install_config/installation/_index.md)
- [Harbor Installation Prerequisites](install_config/installation/installation_prereqs.md)
- [Download the Harbor Installer](install_config/installation/download_installer.md)
- [Configure the Harbor YML File](install_config/installation/configure_yml_file.md)
- [Run the Installer Script](install_config/installation/run_installer_script.md)
  - [Troubleshooting Harbor Installation](install_config/installation/troubleshoot_installation.md)
- [Configuring Harbor](install_config/configuration/_index.md)
- [Reconfigure Harbor and Manage the Harbor Lifecycle](install_config/configuration/reconfigure_manage_lifecycle.md)
- [Configure HTTPS Access to Harbor](install_config/configuration/configure_https.md)
- [Access Harbor Logs](install_config/configuration/access_logs.md)
## Harbor Administration
This section describes how to use, upgrade, and maintain Harbor after deployment. These day 2 operations are performed by the Harbor Administrator.
- [](administration/)
- [](administration/)
- [](administration/)
## Working with Harbor Projects
This section describes how users with the developer, master, and project administrator roles manage and participate in Harbor projects.
- [](working_with_projects/)
- [](working_with_projects/)
- [](working_with_projects/)
- [](working_with_projects/)
- [](working_with_projects/)
- [](working_with_projects/)
## Build, Customize, and Contribute to Harbor
This section describes how developers can build from Harbor source code, customize their deployments, and contribute to the open-source Harbor project.
- [](build_customize_contribute/)
- [](build_customize_contribute/)
- [](build_customize_contribute/)
- [](build_customize_contribute/)
- [](build_customize_contribute/)
- [](build_customize_contribute/)

View File

@ -0,0 +1,3 @@
# Build, Customize, and Contribute to Harbor
Placeholder text

View File

@ -0,0 +1,178 @@
## Introduction
This guide provides instructions for developers to build and run Harbor from source code.
## Step 1: Prepare a build environment for Harbor
Harbor is deployed as several Docker containers, and most of the code is written in Go. The build environment requires Docker, Docker Compose, and a Golang development environment. Install the following prerequisites:
| Software | Required Version |
| -------------- | ---------------- |
| docker | 17.05 + |
| docker-compose | 1.18.0 + |
| python | 2.7 + |
| git | 1.9.1 + |
| make | 3.81 + |
| golang\* | 1.7.3 + |
\*optional, required only if you use your own Golang environment.
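You can verify that the installed tools meet these requirements before building (the `go` check applies only if you use your own Golang environment):
```sh
$ docker --version
$ docker-compose --version
$ python --version
$ git --version
$ make --version
$ go version    # only if you use your own Golang environment
```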
## Step 2: Get the source code
```sh
$ git clone https://github.com/goharbor/harbor
```
## Step 3: Build and install Harbor
### Configuration
Edit the file **make/harbor.yml** and make the necessary configuration changes, such as the hostname, admin password, and mail server. Refer to the **[Installation and Configuration Guide](installation_guide.md#configuring-harbor)** for more information.
```sh
$ cd harbor
$ vi make/harbor.yml
```
### Compiling and Running
You can compile the code using one of the following approaches:
#### I. Build with official Golang image
- Get the official Golang image from Docker Hub:
```sh
$ docker pull golang:1.12.5
```
- Build, install and bring up Harbor without Notary:
```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage
```
- Build, install and bring up Harbor with Notary:
```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true
```
- Build, install and bring up Harbor with Clair:
```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage CLAIRFLAG=true
```
#### II. Compile code with your own Golang environment, then build Harbor
- Move source code to \$GOPATH
```sh
$ mkdir $GOPATH/src/github.com/goharbor/
$ cd ..
$ mv harbor $GOPATH/src/github.com/goharbor/.
```
- Build, install and run Harbor without Notary and Clair:
```sh
$ cd $GOPATH/src/github.com/goharbor/harbor
$ make install
```
- Build, install and run Harbor with Notary and Clair:
```sh
$ cd $GOPATH/src/github.com/goharbor/harbor
$ make install -e NOTARYFLAG=true CLAIRFLAG=true
```
### Verify your installation
If everything works properly, you will see the following message:
```sh
...
Start complete. You can visit harbor now.
```
Refer to [Installation and Configuration Guide](installation_guide.md#managing-harbors-lifecycle) for more information about managing your Harbor instance.
## Appendix
- Using the Makefile
The `Makefile` contains these configurable parameters:
| Variable | Description |
| ------------------- | ---------------------------------------------------------------- |
| BASEIMAGE | Container base image, default: photon |
| DEVFLAG | Build model flag, default: dev |
| COMPILETAG | Compile model flag, default: compile_normal (local golang build) |
| NOTARYFLAG | Notary mode flag, default: false |
| CLAIRFLAG | Clair mode flag, default: false |
| HTTPPROXY | NPM http proxy for Clarity UI builder |
| REGISTRYSERVER | Remote registry server IP address |
| REGISTRYUSER | Remote registry server user name |
| REGISTRYPASSWORD | Remote registry server user password |
| REGISTRYPROJECTNAME | Project name on remote registry server |
| VERSIONTAG | Harbor images tag, default: dev |
| PKGVERSIONTAG | Harbor online and offline version tag, default:dev |
- Predefined targets:
| Target | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| all | prepare env, compile binaries, build images and install images |
| prepare | prepare env |
| compile | compile ui and jobservice code |
| compile_portal | compile portal code |
| compile_ui | compile ui binary |
| compile_jobservice | compile jobservice binary |
| build | build Harbor docker images (default: using build_photon) |
| build_photon | build Harbor docker images from Photon OS base image |
| install | compile binaries, build images, prepare specific version of compose file and startup Harbor instance |
| start | startup Harbor instance (set NOTARYFLAG=true when with Notary) |
| down | shutdown Harbor instance (set NOTARYFLAG=true when with Notary) |
| package_online | prepare online install package |
| package_offline | prepare offline install package |
| pushimage | push Harbor images to specific registry server |
| clean all | remove binary, Harbor images, specific version docker-compose file, specific version tag and online/offline install package |
| cleanbinary | remove ui and jobservice binary |
| cleanimage | remove Harbor images |
| cleandockercomposefile | remove specific version docker-compose |
| cleanversiontag | remove specific version tag |
| cleanpackage | remove online/offline install package |
#### EXAMPLE:
#### Push Harbor images to specific registry server
```sh
$ make pushimage -e DEVFLAG=false REGISTRYSERVER=[$SERVERADDRESS] REGISTRYUSER=[$USERNAME] REGISTRYPASSWORD=[$PASSWORD] REGISTRYPROJECTNAME=[$PROJECTNAME]
```
**Note**: You need to add a trailing "/" to REGISTRYSERVER. If REGISTRYSERVER is not set, images are pushed directly to Docker Hub.
```sh
$ make pushimage -e DEVFLAG=false REGISTRYUSER=[$USERNAME] REGISTRYPASSWORD=[$PASSWORD] REGISTRYPROJECTNAME=[$PROJECTNAME]
```
#### Clean up binaries and images of a specific version
```sh
$ make clean -e VERSIONTAG=[TAG]
```
**Note**: If new code has been pushed to GitHub, the git commit TAG changes. It is better to use this command to clean up images and files of the previous TAG.
#### By default, the make process creates a development build. To create a release build of Harbor, set the flag below to false.
```sh
$ make XXXX -e DEVFLAG=false
```

View File

@ -0,0 +1,71 @@
# View and test Harbor REST API via Swagger
A Swagger file is provided for viewing and testing Harbor REST API.
### Viewing Harbor REST API
* Open the file **swagger.yaml** under the _docs_ directory in Harbor project;
* Paste all its content into the online Swagger Editor at http://editor.swagger.io. The descriptions of the Harbor API are shown in the right pane of the page.
![Swagger Editor](img/swaggerEditor.png)
### Testing Harbor REST API
From time to time, you may need to manually test the Harbor REST API. You can deploy the Swagger file to Harbor's service node. If you installed Harbor through the online or offline installer, you should have a Harbor directory after untarring the installer, such as **~/harbor**.
**Caution:** When using Swagger to send REST requests to Harbor, you may accidentally alter Harbor's data. For this reason, it is NOT recommended to use Swagger against a production Harbor instance.
* Download _prepare-swagger.sh_ and _swagger.yaml_ under the _docs_ directory to your local Harbor directory, e.g. **~/harbor**.
```sh
wget https://raw.githubusercontent.com/goharbor/harbor/master/docs/prepare-swagger.sh https://raw.githubusercontent.com/goharbor/harbor/master/docs/swagger.yaml
```
* Edit the script file _prepare-swagger.sh_.
```sh
vi prepare-swagger.sh
```
* Change the SCHEME to the protocol scheme of your Harbor server.
```sh
SCHEME=<HARBOR_SERVER_SCHEME>
```
* Change the SERVER_IP to the IP address of your Harbor server.
```sh
SERVER_IP=<HARBOR_SERVER_DOMAIN>
```
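For example, for a Harbor instance served over HTTPS at reg.mydomain.com (illustrative values), the two variables would be:
```sh
SCHEME=https
SERVER_IP=reg.mydomain.com
```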
* Change the file mode.
```sh
chmod +x prepare-swagger.sh
```
* Run the shell script. It downloads a Swagger package and extracts files into the _../static_ directory.
```sh
./prepare-swagger.sh
```
* Edit the _docker-compose.yml_ file under your local Harbor directory.
```sh
vi docker-compose.yml
```
* Add two lines to the file _docker-compose.yml_ under the section _ui.volumes_.
```yaml
...
ui:
...
volumes:
- ./common/config/ui/app.conf:/etc/core/app.conf:z
- ./common/config/ui/private_key.pem:/etc/core/private_key.pem:z
- /data/secretkey:/etc/core/key:z
- /data/ca_download/:/etc/core/ca/:z
## add two lines as below ##
- ../src/ui/static/vendors/swagger-ui-2.1.4/dist:/harbor/static/vendors/swagger
- ../src/ui/static/resources/yaml/swagger.yaml:/harbor/static/resources/yaml/swagger.yaml
...
```
* Recreate Harbor containers
```sh
docker-compose down -v && docker-compose up -d
```
* Because a session ID is usually required by Harbor API, **you should log in first from a browser.**
* Open another tab in the same browser so that the session is shared between tabs.
* Enter the URL of the Swagger page in Harbor as below. The ```<HARBOR_SERVER>``` should be replaced by the IP address or the hostname of the Harbor server.
```
http://<HARBOR_SERVER>/static/vendors/swagger/index.html
```
* You should see a Swagger UI page with the Harbor API _swagger.yaml_ file loaded in the same domain. **Be aware that REST requests submitted through Swagger may change Harbor's data.**
![Harbor API](img/renderedSwagger.png)

View File

@ -0,0 +1,38 @@
# Customize the look & feel of Harbor
The primary look & feel of Harbor can be customized in several simple steps. All the relevant customization configurations are saved in the `setting.json` file under the `$HARBOR_DIR/src/portal/src` folder in `json` format and are loaded when Harbor is launched.
## Configure
Open the `setting.json` file, you'll see the default content as shown below:
```
{
"headerBgColor": "#004a70",
"headerLogo": "",
"loginBgImg": "",
"appTitle": "",
"product": {
"name": "Harbor",
"introduction": {
"zh-cn": "",
"es-es": "",
"en-us": ""
}
}
}
```
Change the configuration values if you want to override the default style with your own. Here are the references:
* **headerBgColor**: Background color of the page header; supports either a HEX or RGB value, e.g. `#004a70` or `rgb(210,110,235)`.
* **headerLogo**: Name path of the logo image in the header, e.g. 'logo.png'. The image file should be put in the `images` folder.
* **loginBgImg**: Name path of the background image displayed on the login page, e.g. 'login_bg.png'. The image file should be put in the `images` folder. The suggested size of this image is larger than 800px*600px.
* **Product**: Contains the metadata / description of the product.
  - **title**: The full product title displayed on the login page.
  - **company**: Name of the company publishing the product.
  - **name**: Name of the product.
  - **introductions**: The introduction to the product in different languages, displayed in the `About` dialog.
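For example, a customization that overrides the header color, logo, and introduction (all values below are purely illustrative) might look like:
```
{
    "headerBgColor": "#1a5276",
    "headerLogo": "my-logo.png",
    "loginBgImg": "my-login-bg.png",
    "appTitle": "My Registry",
    "product": {
        "name": "My Registry",
        "introduction": {
            "zh-cn": "",
            "es-es": "",
            "en-us": "An internal container registry based on Harbor."
        }
    }
}
```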
## Build
Once the `setting.json` configuration has been updated, rebuild your product to apply the new changes, as sketched below.
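A minimal sketch of rebuilding the portal, assuming the npm workflow described in the frontend getting-started guide:
```
# From the Harbor source tree (paths assume the default layout)
cd $HARBOR_DIR/src/portal
npm install          # install dependencies if not already present
npm run build_all    # rebuild the portal so the new setting.json takes effect
```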

View File

@ -0,0 +1,58 @@
## Developer's Guide for Internationalization (i18n)
*NOTE: All the files you create should use UTF-8 encoding.*
### Steps to localize the UI in your language
1. In the folder `src/portal/src/i18n/lang`, copy json file `en-us-lang.json` to a new file and rename it to `<language>-<locale>-lang.json` .
The file contains a JSON object including all the key-value pairs of UI strings:
```
{
"APP_TITLE": {
"VMW_HARBOR": "Harbor",
"HARBOR": "Harbor",
...
},
...
}
```
In the file `<language>-<locale>-lang.json`, translate all the values into your language. Do not change any keys.
2. After creating your language file, you should add it to the language supporting list.
Locate the file `src/portal/src/app/shared/shared.const.ts`.
Append `<language>-<locale>` to the language supporting list:
```
export const supportedLangs = ['en-us', 'zh-cn', '<language>-<locale>'];
```
Define the language display name and append it to the name list:
```
export const languageNames = {
"en-us": "English",
"zh-cn": "中文简体",
"<language>-<locale>": "<DISPLAY_NAME>"
};
```
**NOTE: Don't miss the comma before the new key-value item you've added.**
3. Enable the new language in the view.
Locate the file `src/portal/src/app/base/navigator/navigator.component.html` and then find the following code piece:
```
<div class="dropdown-menu">
<a href="javascript:void(0)" clrDropdownItem (click)='switchLanguage("en-us")' [class.lang-selected]='matchLang("en-us")'>English</a>
<a href="javascript:void(0)" clrDropdownItem (click)='switchLanguage("zh-cn")' [class.lang-selected]='matchLang("zh-cn")'>中文简体</a>
</div>
```
Add new menu item for your language:
```
<div class="dropdown-menu">
<a href="javascript:void(0)" clrDropdownItem (click)='switchLanguage("en-us")' [class.lang-selected]='matchLang("en-us")'>English</a>
<a href="javascript:void(0)" clrDropdownItem (click)='switchLanguage("zh-cn")' [class.lang-selected]='matchLang("zh-cn")'>中文简体</a>
<a href="javascript:void(0)" clrDropdownItem (click)='switchLanguage("<language>-<locale>")' [class.lang-selected]='matchLang("<language>-<locale>")'>DISPLAY_NAME</a>
</div>
```
4. Next, please refer to the [compile guide](compile_guide.md) to rebuild and restart Harbor.

View File

@ -0,0 +1,20 @@
#!/bin/bash
SCHEME=http
SERVER_IP=reg.mydomain.com
set -e
echo "Doing some clean up..."
rm -f *.tar.gz
echo "Downloading Swagger UI release package..."
wget https://github.com/swagger-api/swagger-ui/archive/v2.1.4.tar.gz -O swagger.tar.gz
echo "Untarring Swagger UI package to the static file path..."
mkdir -p ../src/ui/static/vendors
tar -C ../src/ui/static/vendors -zxf swagger.tar.gz swagger-ui-2.1.4/dist
echo "Executing some processes..."
sed -i.bak 's/http:\/\/petstore\.swagger\.io\/v2\/swagger\.json/'$SCHEME':\/\/'$SERVER_IP'\/static\/resources\/yaml\/swagger\.yaml/g' \
../src/ui/static/vendors/swagger-ui-2.1.4/dist/index.html
sed -i.bak '/jsonEditor: false,/a\ validatorUrl: null,' ../src/ui/static/vendors/swagger-ui-2.1.4/dist/index.html
mkdir -p ../src/ui/static/resources/yaml
cp swagger.yaml ../src/ui/static/resources/yaml
sed -i.bak 's/host: localhost/host: '$SERVER_IP'/g' ../src/ui/static/resources/yaml/swagger.yaml
sed -i.bak 's/ \- http$/ \- '$SCHEME'/g' ../src/ui/static/resources/yaml/swagger.yaml
echo "Finish preparation for the Swagger UI."

View File

@ -0,0 +1,18 @@
# Registry Landscape
The cloud native ecosystem is moving rapidly, and registries and their feature sets are no exception. We've made our best effort to survey the container registry landscape and compare it to our core feature set.
If you find something outdated or outright erroneous, please submit a PR and we'll fix it right away.
| Feature | Harbor | Docker Trusted Registry | Quay | Cloud Providers (GCP, AWS, Azure) | Docker Distribution | Artifactory |
| -------------: | :----: | :---------------------: | :--: | :-------------------------------: | :-----------------: | :---------: |
| Local Auth | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |
| LDAP-based Auth | ✓ | ✓ | ✓ | partial | ✗ | ✓ |
| Content Trust and Validation | ✓ | ✓ | ✗ | ✗ | partial | partial |
| Vulnerability Scanning & Monitoring | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Replication | ✓ | ✓ | ✓ | n/a | ✗ | ✓ |
| Multi-Tenancy (projects, teams, etc.) | ✓ | ✓ | ✓ | partial | ✗ | ✓ |
| Role-Based Access Control | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |
| Custom TLS Certificates | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |
| Ability to Determine Version of Binaries in Containers | ✓ | ✓ | ✓ | ✗ | ✗ | ? |
| Upstream Registry Proxy Cache | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ |
| Audit Logs | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |

File diff suppressed because it is too large

View File

@ -0,0 +1,100 @@
# Harbor frontend environment getting started guide
If you already have a Harbor backend environment, you can build a frontend development environment with the following configuration.
1. Create the file proxy.config.json in the directory harbor/src/portal and configure it according to the sample below.
**NOTE:** You should replace "$IP_ADDRESS" with your own IP address.
```
{
"/api/*": {
"target": "$IP_ADDRESS",
"secure": false,
"changeOrigin": true,
"logLevel": "debug"
},
"/service/*": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/c/login": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/sign_in": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/c/log_out": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/sendEmail": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/language": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/reset": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/userExists": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/reset_password": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/i18n/lang/*.json": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug",
"pathRewrite": { "^/src$": "" }
},
"/chartrepo": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
},
"/*.json": {
"target": "$IP_ADDRESS",
"secure": false,
"logLevel": "debug"
}
}
```
2. Open the terminal and run the following commands to install npm packages as third-party dependencies.
```
cd harbor/src/portal
npm install
```
3. Compile the frontend code with the following command.
```
npm run build_all
```
4. Execute the following command to serve Harbor locally.
```
npm run start
```
5. Then you can visit Harbor at https://localhost:4200.

View File

@ -0,0 +1,48 @@
### Variables
Variable | Description
-------------------|-------------
BASEIMAGE | Container base image, default: photon
DEVFLAG | Build model flag, default: dev
COMPILETAG | Compile model flag, default: compile_normal (local golang build)
GOBUILDIMAGE | Golang image to compile harbor go source code.
NOTARYFLAG | Whether to enable notary in harbor, default:false
HTTPPROXY | Clarity proxy to build UI.
### Targets
Target | Description
--------------------|-------------
all | prepare env, compile binaries, build images and install images
prepare | prepare env
compile | compile ui and jobservice code
compile_portal | compile portal code
compile_core | compile core binary
compile_jobservice | compile jobservice binary
build | build Harbor docker images (default: using build_photon)
build_photon | build Harbor docker images from Photon OS base image
install | compile binaries, build images, prepare specific version of compose file and startup Harbor instance
start | startup Harbor instance
down | shutdown Harbor instance
package_online | prepare online install package
package_offline | prepare offline install package
pushimage | push Harbor images to specific registry server
clean all | remove binary, Harbor images, specific version docker-compose file, specific version tag and online/offline install package
cleanbinary | remove ui and jobservice binary
cleanimage | remove Harbor images
cleandockercomposefile | remove specific version docker-compose
cleanversiontag | remove specific version tag
cleanpackage | remove online/offline install package
version | set harbor version
#### EXAMPLE:
#### Build and run harbor from source code
```sh
make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true
```
### Package offline installer
```sh
make package_offline GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true
```
### Start harbor with notary
```sh
make -e NOTARYFLAG=true start
```
### Stop harbor with notary
```sh
make -e NOTARYFLAG=true down
```

View File

@ -0,0 +1,6 @@
# Harbor Installation and Configuration
This document describes how to install Harbor, and how to perform initial configuration.
- [Installing Harbor](install_config/installation/_index.md)
- [Configuring Harbor](install_config/configuration/_index.md)

View File

@ -0,0 +1,7 @@
# Configuring Harbor
After you have deployed Harbor, you can perform certain post-deployment configuration operations.
- [Reconfigure Harbor and Manage the Harbor Lifecycle](reconfigure_manage_lifecycle.md)
- [Configure HTTPS Access to Harbor](configure_https.md)
- [Access Harbor Logs](access_logs.md)

View File

@ -0,0 +1,5 @@
# Access Harbor Logs
By default, registry data is persisted in the host's `/data/` directory. This data remains unchanged even when Harbor's containers are removed and/or recreated. You can edit the `data_volume` setting in the `harbor.yml` file to change this directory.
In addition, Harbor uses *rsyslog* to collect the logs of each container. By default, these log files are stored in the `/var/log/harbor/` directory on the target host for troubleshooting. You can also change the log directory in `harbor.yml`.
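For example, to move the data directory, change `data_volume` in `harbor.yml` (the path below is illustrative) and re-run the `prepare` script before starting Harbor again:
```yaml
# harbor.yml
data_volume: /mnt/harbor-data
```
To follow a component's log on the host, tail the corresponding file under `/var/log/harbor/` (the exact file name, such as `core.log`, depends on the component):
```sh
tail -f /var/log/harbor/core.log
```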

View File

@ -0,0 +1,66 @@
# Administrator options
### Managing users
An administrator can add the "Administrator" role to one or more ordinary users by checking the checkboxes and clicking `SET AS ADMINISTRATOR`. To delete users, check the checkboxes and select `DELETE`. Deleting users is only supported under database authentication mode.
![browse project](../img/new_set_admin_remove_user.png)
### Managing registry
You can list, add, edit and delete registries under `Administration->Registries`. Only registries which are not referenced by any rules can be deleted.
![browse project](../img/manage_registry.png)
### Managing replication
You can list, add, edit and delete rules under `Administration->Replications`.
![browse project](../img/manage_replication.png)
### Managing authentication
You can change the authentication mode between **Database** (the default) and **LDAP** before any user is added. Once there is at least one user (besides admin) in Harbor, you cannot change the authentication mode.
![browse project](../img/new_auth.png)
When using LDAP mode, user self-registration is disabled. The parameters of the LDAP server must be filled in. For more information, refer to [User account](#user-account).
![browse project](../img/ldap_auth.png)
When using OIDC mode, users log in to Harbor via OIDC-based SSO. A client must be registered on the OIDC provider, and Harbor's callback URI must be associated with that client as a redirectURI.
![OIDC settings](../img/oidc_auth_setting.png)
The settings of this auth mode:
* OIDC Provider Name: The name of the OIDC Provider.
* OIDC Provider Endpoint: The URL of the endpoint of the OIDC provider (a.k.a. the Authorization Server in OAuth terminology), which must serve the "well-known" URI for its configuration. For more details, refer to https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
* OIDC Client ID: The ID of the client configured on the OIDC Provider.
* OIDC Client Secret: The secret for this client.
* OIDC Scope: The scope values to be used during authentication, as a comma-separated string that must contain `openid`. Normally it should also contain `profile` and `email`. To get a refresh token it should also contain `offline_access`. Please check with the administrator of the OIDC Provider.
* Verify Certificate: Whether to check the certificate when accessing the OIDC Provider. If you are running the OIDC Provider with a self-signed certificate, make sure this value is set to false.
### Managing project creation
Use the **Project Creation** drop-down menu to set which users can create projects. Select **Everyone** to allow all users to create projects. Select **Admin Only** to allow only users with the Administrator role to create projects.
![browse project](../img/new_proj_create.png)
### Managing self-registration
You can manage whether a user can sign up for a new account. This option is not available if you use LDAP authentication.
![browse project](../img/new_self_reg.png)
### Managing email settings
You can change Harbor's email settings. The mail server is used to send out responses to users who request to reset their password.
![browse project](../img/new_config_email.png)
### Managing registry read-only mode
You can change Harbor's registry read-only setting. Read-only mode allows 'docker pull' while preventing 'docker push' and the deletion of repositories and tags.
![browse project](../img/read_only.png)
If it is set to true, deleting repositories and tags and pushing images are disabled.
![browse project](../img/read_only_enable.png)
```
$ docker push 10.117.169.182/demo/ubuntu:14.04
The push refers to a repository [10.117.169.182/demo/ubuntu]
0271b8eebde3: Preparing
denied: The system is in read only mode. Any modification is prohibited.
```
### Managing roles by LDAP group
If auth_mode is ldap_auth, you can manage project roles by LDAP/AD group. Please refer to the [manage role by LDAP group guide](manage_role_by_ldap_group.md).

View File

@ -0,0 +1,192 @@
# Configure HTTPS Access to Harbor
Because Harbor does not ship with any certificates, it uses HTTP by default to serve registry requests. However, it is highly recommended that security be enabled for any production environment. Harbor has an Nginx instance as a reverse proxy for all services; you can use the prepare script to configure Nginx to enable HTTPS.
In a test or development environment, you may choose to use a self-signed certificate instead of one from a trusted third-party CA. The following sections show you how to create your own CA, and how to use your CA to sign a server certificate and a client certificate.
## Getting a Certificate Authority
```
openssl genrsa -out ca.key 4096
```
```
openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=yourdomain.com" \
-key ca.key \
-out ca.crt
```
## Getting a Server Certificate
Assume that your registry's **hostname** is **yourdomain.com**, and that its DNS record points to the host on which you are running Harbor. In a production environment, you should first get a certificate from a CA. In a test or development environment, you can use your own CA. The certificate usually contains a .crt file and a .key file, for example, **yourdomain.com.crt** and **yourdomain.com.key**.
**1) Create your own Private Key:**
```
openssl genrsa -out yourdomain.com.key 4096
```
**2) Generate a Certificate Signing Request:**
If you use an FQDN like **yourdomain.com** to connect to your registry host, you must use **yourdomain.com** as the CN (Common Name).
```
openssl req -sha512 -new \
-subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=yourdomain.com" \
-key yourdomain.com.key \
-out yourdomain.com.csr
```
**3) Generate the certificate of your registry host:**
Whether you're using an FQDN like **yourdomain.com** or an IP address to connect to your registry host, run this command to generate a certificate for your registry host that complies with the Subject Alternative Name (SAN) and x509 v3 extension requirements:
**v3.ext**
```
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=yourdomain.com
DNS.2=yourdomain
DNS.3=hostname
EOF
```
```
openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in yourdomain.com.csr \
-out yourdomain.com.crt
```
## Configuration and Installation
**1) Configure Server Certificate and Key for Harbor**
After obtaining the **yourdomain.com.crt** and **yourdomain.com.key** files,
copy them into a directory such as ```/data/cert/```:
```
cp yourdomain.com.crt /data/cert/
cp yourdomain.com.key /data/cert/
```
**2) Configure Server Certificate, Key and CA for Docker**
The Docker daemon interprets ```.crt``` files as CA certificates and ```.cert``` files as client certificates.
Convert server ```yourdomain.com.crt``` to ```yourdomain.com.cert```:
```
openssl x509 -inform PEM -in yourdomain.com.crt -out yourdomain.com.cert
```
Deploy ```yourdomain.com.cert```, ```yourdomain.com.key```, and ```ca.crt``` for Docker:
```
cp yourdomain.com.cert /etc/docker/certs.d/yourdomain.com/
cp yourdomain.com.key /etc/docker/certs.d/yourdomain.com/
cp ca.crt /etc/docker/certs.d/yourdomain.com/
```
The following illustrates a configuration with custom certificates:
```
/etc/docker/certs.d/
└── yourdomain.com:port
├── yourdomain.com.cert <-- Server certificate signed by CA
├── yourdomain.com.key <-- Server key signed by CA
└── ca.crt <-- Certificate authority that signed the registry certificate
```
Notice that you may need to trust the certificate at the OS level. Please refer to the [Troubleshooting](#Troubleshooting) section below.
**3) Configure Harbor**
Edit the file `harbor.yml`, update the hostname and uncomment the https block, and update the attributes `certificate` and `private_key`:
```yaml
#set hostname
hostname: yourdomain.com
http:
port: 80
https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /data/cert/yourdomain.com.crt
private_key: /data/cert/yourdomain.com.key
......
```
Generate configuration files for Harbor:
```
./prepare
```
If Harbor is already running, stop and remove the existing instance. Your image data remains in the file system:
```
docker-compose down -v
```
Finally, restart Harbor:
```
docker-compose up -d
```
After setting up HTTPS for Harbor, you can verify it by the following steps:
* Open a browser and enter the address: https://yourdomain.com. It should display the user interface of Harbor.
* Notice that some browsers may still show a warning about an unknown Certificate Authority (CA) for security reasons, even though we signed the certificate with our self-signed CA and deployed the CA as described above. This is because a self-signed CA is essentially not a trusted third-party CA. You can import the CA into the browser yourself to resolve the warning.
* On a machine with the Docker daemon, make sure the option "--insecure-registry" for https://yourdomain.com is not present.
* If you mapped the nginx port 443 to another port, you should instead create the directory ```/etc/docker/certs.d/yourdomain.com:port``` (or your registry host IP:port). Then run any Docker command to verify the setup, e.g.
```
docker login yourdomain.com
```
If you've mapped the nginx 443 port to another port, you need to add the port when logging in, as below:
```
docker login yourdomain.com:port
```
## Troubleshooting
1. You may get an intermediate certificate from a certificate issuer. In this case, you should merge the intermediate certificate with your own certificate to create a certificate bundle. You can achieve this with the following command:
```
cat intermediate-certificate.pem >> yourdomain.com.crt
```
2. On some systems where the Docker daemon runs, you may need to trust the certificate at the OS level.
On Ubuntu, this can be done with the following commands:
```sh
cp yourdomain.com.crt /usr/local/share/ca-certificates/yourdomain.com.crt
update-ca-certificates
```
On Red Hat (CentOS etc), the commands are:
```sh
cp yourdomain.com.crt /etc/pki/ca-trust/source/anchors/yourdomain.com.crt
update-ca-trust
```

View File

@ -0,0 +1,60 @@
# Customize Harbor token service with your key and certificate
Harbor requires the Docker client to access the Harbor registry with a token. The procedure for generating a token is similar to [Docker Registry v2 authentication](https://github.com/docker/distribution/blob/master/docs/spec/auth/token.md). First, you make a request to the token service for a token, which is signed by the private key. After that, you make a new request with the token to the Harbor registry; the Harbor registry verifies the token with the public key in the rootcert bundle and then authorizes the Docker client to push/pull images.
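For illustration, requesting a token from the token service looks roughly like this (a sketch following the Docker Registry v2 token flow; the endpoint path, service name, and scope shown are assumptions that depend on your deployment):
```sh
# Request a pull token for the library/ubuntu repository (illustrative parameters)
curl -u "username:password" \
  "https://yourdomain.com/service/token?service=harbor-registry&scope=repository:library/ubuntu:pull"
```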
By default, Harbor uses a default private key and certificate for authentication. You can also customize your configuration with your own key and certificate, using the following steps:
1. If you already have a certificate, go to step 3.
2. If not, you can generate a root certificate using OpenSSL with the following commands:
**1) Generate a private key:**
```sh
$ openssl genrsa -out private_key.pem 4096
```
**2) Generate a certificate:**
```sh
$ openssl req -new -x509 -key private_key.pem -out root.crt -days 3650
```
You will be asked to enter information to be incorporated into your certificate request, known as a Distinguished Name (DN). There are quite a few fields, but you can leave some blank. For some fields there is a default value; if you enter '.', the field is left blank. The prompts are as follows:
```
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, server FQDN or YOUR name) []:
Email Address []:
```
After you execute these two commands, you will see private_key.pem and root.crt in the **current directory** (type "ls" to verify).
3. Refer to the [Installation Guide](https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md) to install Harbor. After you execute ./prepare, Harbor generates several configuration files. You need to replace the original private key and certificate with your own key and certificate.
4. Replace the default key and certificate. Assuming that your key and certificate are in the directory /root/cert, do the following:
```
$ cd config/ui
$ cp /root/cert/private_key.pem private_key.pem
$ cp /root/cert/root.crt ../registry/root.crt
```
5. After that, go back to the make directory; you can then start Harbor using the following command:
```
$ docker-compose up -d
```
6. Then push/pull images to check whether your own certificate works. Please refer to the [User Guide](https://github.com/goharbor/harbor/blob/master/docs/user_guide.md) for more information.

View File

@ -0,0 +1,29 @@
# Garbage Collection
Online garbage collection enables users to trigger Docker registry garbage collection by clicking a button in the UI.
**NOTES:** Space is not freed when images are deleted from Harbor. Garbage collection is the task that frees up space by removing blobs from the filesystem when they are no longer referenced by a manifest.
For more information about Garbage Collection, please see [Garbage Collection](https://github.com/docker/docker.github.io/blob/master/registry/garbage-collection.md).
### Setting up Garbage Collection
If you are a system admin, you can trigger garbage collection by clicking "GC Now" in the **'Garbage Collection'** tab of **'Configuration'** section under **'Administration'**.
![browse project](../img/gc_now.png)
**NOTES:** Harbor is put into read-only mode while garbage collection executes, and any modification of the Docker registry is prohibited.
To avoid triggering the garbage collection process too frequently, the availability of the button is restricted. It can only be triggered once per minute.
![browse project](../img/gc_now2.png)
**Scheduled Garbage Collection by Policy**
* **None:** No policy is selected.
* **Daily:** Policy is activated daily. A garbage collection job is scheduled to be executed at the specified time every day.
* **Weekly:** Policy is activated weekly. A garbage collection job is scheduled to be executed at the specified time every week.
Once the policy has been configured, you have the option to save the schedule.
![browse project](../img/gc_policy.png)
### Garbage Collection history
If you are a system admin, you can view the latest 10 records of garbage collection execution.
![browse project](../img/gc_history.png)
You can click on the 'details' link to view the related logs.
![browse project](../img/gc_details.png)

View File

@ -0,0 +1,69 @@
## Update an offline Harbor instance with new vulnerability data
Harbor integrates with Clair to scan images for vulnerabilities. When Harbor is installed in an environment without an internet connection, Clair cannot fetch data from the public vulnerability database. In this circumstance, the Harbor administrator needs to update the Clair database manually.
This document provides step-by-step instructions for updating the Clair vulnerability database in Harbor.
**NOTE:** Harbor does not ship with any vulnerability data. For this reason, if Harbor cannot connect to the Internet, the administrator must manually import vulnerability data into Harbor using the instructions given in this document.
### Preparation
A. You need an instance of Clair with an internet connection. If you have another instance of Harbor with internet access, that also works.
B. Check whether your Clair instance has already updated the vulnerability database to the latest version. If it has not, wait for Clair to fetch the data from the public endpoints.
- Use the command `docker ps` to find the container ID of Clair.
- Run the command `docker logs container_id` to check the log of the Clair container. If you are using Harbor, you can find the latest Clair log under /var/log/harbor/2017-xx-xx/clair.log.
- Look for log entries like the following:
```
Jul 3 20:40:45 172.18.0.1 clair[3516]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2017-07-04 03:40:45.890364","updater name":"rhel"}
Jul 3 20:40:46 172.18.0.1 clair[3516]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2017-07-04 03:40:46.768924","updater name":"alpine"}
Jul 3 20:40:47 172.18.0.1 clair[3516]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2017-07-04 03:40:47.190982","updater name":"oracle"}
Jul 3 20:41:07 172.18.0.1 clair[3516]: {"Event":"Debian buster is not mapped to any version number (eg. Jessie-\u003e8). Please update me.","Level":"warning","Location":"debian.go:128","Time":"2017-07-04 03:41:07.833720"}
Jul 3 20:41:07 172.18.0.1 clair[3516]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2017-07-04 03:41:07.833975","updater name":"debian"}
Jul 4 00:26:17 172.18.0.1 clair[3516]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2017-07-04 07:26:17.596986","updater name":"ubuntu"}
Jul 4 00:26:18 172.18.0.1 clair[3516]: {"Event":"adding metadata to vulnerabilities","Level":"info","Location":"updater.go:253","Time":"2017-07-04 07:26:18.060810"}
Jul 4 00:38:05 172.18.0.1 clair[3516]: {"Event":"update finished","Level":"info","Location":"updater.go:198","Time":"2017-07-04 07:38:05.251580"}
```
- The phrase "finished fetching" indicates that Clair has finished a round of vulnerability update from an endpoint. Please make sure all five endpoints (rhel, alpine, oracle, debian, ubuntu) are updated correctly.
## Harbor version < 1.6
If you're using a version of Harbor prior to 1.6, you can access the correct instructions for your version using the following URL.
https://github.com/goharbor/harbor/blob/v\<VERSION NUMBER>/docs/import_vulnerability_data.md
## Harbor version >= 1.6
Databases were consolidated in version 1.6, which moved the Clair database into the harbor-db container and removed the clair-db container.
### Dumping vulnerability data
- Log in to the host (connected to the Internet) where the Clair database (Postgres) is running.
- Dump Clair's vulnerability database with the following commands; two files (`vulnerability.sql` and `clear.sql`) are generated:
_NOTE: The container name 'clair-db' is a placeholder for the database container used by the internet-connected instance of Clair._
```
$ docker exec clair-db /bin/sh -c "pg_dump -U postgres -a -t feature -t keyvalue -t namespace -t schema_migrations -t vulnerability -t vulnerability_fixedin_feature" > vulnerability.sql
$ docker exec clair-db /bin/sh -c "pg_dump -U postgres -c -s" > clear.sql
```
### Back up Harbor's Clair database
Before importing the data, it is strongly recommended to back up the Clair database in Harbor.
```
$ docker exec harbor-db /bin/sh -c "pg_dump -U postgres -c" > all.sql
```
### Update Harbor's Clair database
Copy `vulnerability.sql` and `clear.sql` to the host where Harbor is running. Run the commands below to import the data into Harbor's Clair database:
```
$ docker exec -i harbor-db psql -U postgres < clear.sql
$ docker exec -i harbor-db psql -U postgres < vulnerability.sql
```
### Rescanning images
After importing the data, trigger the scanning process in the administrator's web UI: **Administration**->**Configuration**->**Vulnerability**->**SCAN NOW**. Harbor reflects the new changes after the scanning is completed. (Otherwise the summary of the image vulnerabilities will not be displayed correctly.)

View File

@ -0,0 +1,68 @@
# Reconfigure Harbor and Manage the Harbor Lifecycle
You can use docker-compose to manage the lifecycle of Harbor. Some useful commands are listed below (they must be run in the same directory as *docker-compose.yml*).
Stopping Harbor:
``` sh
$ sudo docker-compose stop
Stopping nginx ... done
Stopping harbor-portal ... done
Stopping harbor-jobservice ... done
Stopping harbor-core ... done
Stopping registry ... done
Stopping redis ... done
Stopping registryctl ... done
Stopping harbor-db ... done
Stopping harbor-log ... done
```
Restarting Harbor after stopping:
``` sh
$ sudo docker-compose start
Starting log ... done
Starting registry ... done
Starting registryctl ... done
Starting postgresql ... done
Starting core ... done
Starting portal ... done
Starting redis ... done
Starting jobservice ... done
Starting proxy ... done
```
To change Harbor's configuration, first stop the existing Harbor instance and update `harbor.yml`. Then run the `prepare` script to populate the configuration. Finally, re-create and start Harbor's instance:
``` sh
$ sudo docker-compose down -v
$ vim harbor.yml
$ sudo ./prepare
$ sudo docker-compose up -d
```
Removing Harbor's containers while keeping the image data and Harbor's database files on the file system:
``` sh
$ sudo docker-compose down -v
```
Removing Harbor's database and image data (for a clean re-installation):
``` sh
$ rm -r /data/database
$ rm -r /data/registry
```
#### *Managing the lifecycle of Harbor when it's installed with Notary, Clair, and the chart repository service*
If you want to install Notary, Clair, and the chart repository service together, you should include all the components in the prepare command:
``` sh
$ sudo docker-compose down -v
$ vim harbor.yml
$ sudo ./prepare --with-notary --with-clair --with-chartmuseum
$ sudo docker-compose up -d
```
Please check the [Docker Compose command-line reference](https://docs.docker.com/compose/reference/) for more on docker-compose.

View File

@ -0,0 +1,46 @@
# Setting Project Quotas
To exercise control over resource use, as a system administrator you can set quotas on projects. You can limit the number of tags that a project can contain and limit the amount of storage capacity that a project can consume. You can set default quotas that apply to all projects globally.
**NOTE**: Default quotas apply to projects that are created after you set or change the default quota. The default quota is not applied to projects that already existed before you set it.
You can also set quotas on individual projects. If you set a global default quota and you set different quotas on individual projects, the per-project quotas are applied.
By default, all projects have unlimited quotas for both tags and storage use.
1. Go to **Configuration** > **Project Quotas**.
![Project quotas](../img/project-quota1.png)
1. To set global default quotas on all projects, click **Edit**.
![Project quotas](../img/project-quota2.png)
1. For **Default artifact count**, enter the maximum number of tags that any project can contain.
Enter `-1` to set the default to unlimited.
1. For **Default storage consumption**, enter the maximum quantity of storage that any project can consume, selecting `MB`, `GB`, or `TB` from the drop-down menu.
Enter `-1` to set the default to unlimited.
![Project quotas](../img/project-quota3.png)
1. Click **OK**.
1. To set quotas on an individual project, click the 3 vertical dots next to a project and select **Edit**.
![Project quotas](../img/project-quota4.png)
1. For **Default artifact count**, enter the maximum number of tags that this individual project can contain, or enter `-1` to set the default to unlimited.
1. For **Default storage consumption**, enter the maximum quantity of storage that this individual project can consume, selecting `MB`, `GB`, or `TB` from the drop-down menu.
After you set quotas, you can see how much of its quota each project has consumed in the **Project Quotas** tab.
![Project quotas](../img/project-quota5.png)
### How Harbor Calculates Resource Usage
When setting project quotas, it is useful to know how Harbor calculates tag numbers and storage use, especially in relation to image pushing, retagging, and garbage collection.
- Harbor computes image size when blobs and manifests are pushed from the Docker client.
- Harbor computes tag counts when manifests are pushed from the Docker client.
**NOTE**: When users push an image, the manifest is pushed last, after all of the associated blobs have been pushed successfully to the registry. If several images are pushed concurrently and if there is an insufficient number of tags left in the quota for all of them, images are accepted in the order that their manifests arrive. Consequently, an attempt to push an image might not be immediately rejected for exceeding the quota. This is because there was availability in the tag quota when the push was initiated, but by the time the manifest arrived the quota had been exhausted.
- Shared blobs are only computed once per project. In Docker, blob sharing is defined globally. In Harbor, blob sharing is defined at the project level. As a consequence, overall storage usage can be greater than the actual disk capacity.
- Retagging images reserves and releases resources:
- If you retag an image within a project, the tag count increases by one, but storage usage does not change because there are no new blobs or manifests.
- If you retag an image from one project to another, the tag count and storage usage both increase.
- During garbage collection, Harbor frees the storage used by untagged blobs in the project.
- If the tag count reaches the limit, image blobs can still be pushed into a project and storage usage is updated accordingly. You can consider these blobs to be untagged blobs. They can be removed by garbage collection, and the storage that they consume is returned after garbage collection.
- Helm chart size is not calculated. Only tag counts are calculated.

View File

@ -0,0 +1,68 @@
# Vulnerability Scanning with Clair
**CAUTION: Clair is an optional component. Please make sure that you have already installed it in your Harbor instance before you go through this section.**
Static analysis of vulnerabilities is provided through the open source project [Clair](https://github.com/coreos/clair). You can initiate scanning on a particular image, or on all images in Harbor. Additionally, you can set a policy to scan all the images at a specified time every day.
**Vulnerability metadata**
Clair depends on vulnerability metadata to complete the analysis process. After the initial installation, Clair automatically starts to update the metadata database from different vulnerability repositories. The update process may take a while, depending on the data size and network connection. If the database has not been fully populated, a warning message is displayed at the footer of the repository datagrid view.
![browse project](../img/clair_not_ready.png)
The 'database not fully ready' warning message is also displayed in the **'Vulnerability'** tab of **'Configuration'** section under **'Administration'** for your awareness.
![browse project](../img/clair_not_ready2.png)
Once the database is ready, an overall database updated timestamp will be shown in the **'Vulnerability'** tab of **'Configuration'** section under **'Administration'**.
![browse project](../img/clair_ready.png)
**Scanning an image**
Enter your project and select the repository. For each tag there is a 'Vulnerability' column displaying the vulnerability scanning status and related information. You can select the image and click the "SCAN" button to trigger the vulnerability scan process.
![browse project](../img/scan_image.png)
**NOTES: Only the users with 'Project Admin' role have the privilege to launch the analysis process.**
The analysis process may have the following statuses, indicated in the 'Vulnerability' column:
* **Not Scanned:** The tag has never been scanned.
* **Queued:** The scanning task is scheduled but not executed yet.
* **Scanning:** The scanning process is in progress.
* **Error:** The scanning process failed to complete.
* **Complete:** The scanning process was successfully completed.
For the **'Not Scanned'** and **'Queued'** statuses, a text label with status information is shown. For **'Scanning'**, a progress bar is displayed.
If an error occurs, you can click the **'View Log'** link to view the related logs.
![browse project](../img/log_viewer.png)
If the process was successfully completed, a result bar is created. The width of the different colored sections indicates the percentage of features with vulnerabilities for a particular severity level.
* **Red:** **High** level of vulnerabilities
* **Orange:** **Medium** level of vulnerabilities
* **Yellow:** **Low** level of vulnerabilities
* **Grey:** **Unknown** level of vulnerabilities
* **Green:** **No** vulnerabilities
![browse project](../img/bar_chart.png)
Move the cursor over the bar to display a tooltip with a summary report. Besides showing the total number of features with vulnerabilities and the total number of features in the scanned image tag, the report also lists the counts of features with vulnerabilities at different severity levels. The completion time of the last analysis process is shown at the bottom of the tooltip.
![browse project](../img/summary_tooltip.png)
Click the tag name link to open the detail page. Besides information about the tag, all the vulnerabilities found in the last analysis process are listed with related information. You can order or filter the list by column.
![browse project](../img/tag_detail.png)
**NOTES: You can initiate the vulnerability analysis for a tag at any time, as long as its status is not 'Queued' or 'Scanning'.**
**Scanning all images**
In the **'Vulnerability'** tab of **'Configuration'** section under **'Administration'**, click on the **'SCAN NOW'** button to start the analysis process for all the existing images.
**NOTES: The scanning process is executed via multiple concurrent asynchronous tasks. There is no guarantee on the order of scanning or the returned results.**
![browse project](../img/scan_all.png)
To avoid triggering the resource-intensive scanning process too frequently, the availability of the button is restricted. It can only be triggered once in a predefined period. The next available time is displayed beside the button.
![browse project](../img/scan_all2.png)
**Scheduled Scan by Policy**
You can set policies to control the vulnerability analysis process. Currently, two options are available:
* **None:** No policy is selected.
* **Daily:** Policy is activated daily. An analysis job is scheduled to be executed at the specified time every day. The scheduled job will scan all the images in Harbor.
![browse project](../img/scan_policy.png)
**NOTES: Once the scheduled job is executed, the completion time of scanning all images is updated accordingly. Be aware that the completion time may differ between images because the analysis of each image may be carried out at a different time.**

View File

@ -0,0 +1,20 @@
# Demo Server Guide
**Important!**
- Please note that this demo server is **ONLY** for your trial of Harbor functionalities.
- Please **DO NOT** upload any sensitive images to this server.
- We will **CLEAN AND RESET** the server every **TWO DAYS**.
- You can only experience the non-admin functionalities on this server. Follow the **[Installation Guide](installation_guide.md)** to set up a Harbor server to try more advanced features.
- Please do not push large images (>100 MB) as the server has limited storage.
If you encounter any problems while using the demo server, please open an issue or ping us on [Slack](https://github.com/goharbor/harbor#community).
**Usage**
1. The address of the demo server is [https://demo.goharbor.io](https://demo.goharbor.io).
2. You can register a new user by yourself.
3. Use the account/password created in step 2 to log in:
```
docker login demo.goharbor.io
```
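For example, assuming you have created a project named `myproject` on the demo server (the project name here is only an illustration), you can tag and push a small image:
```
docker pull busybox
docker tag busybox demo.goharbor.io/myproject/busybox:latest
docker push demo.goharbor.io/myproject/busybox:latest
```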
You can refer to [User Guide](user_guide.md) for more details on how to use Harbor.

View File

@ -0,0 +1,30 @@
# Installing Harbor
The Harbor installation process involves the following stages:
1. Make sure that your target host meets the [Harbor Installation Prerequisites](installation_prereqs.md).
1. [Download the Harbor Installer](download_installer.md)
1. [Configure the Harbor YML File](configure_yml_file.md)
1. [Run the Installer Script](run_installer_script.md)
Harbor does not ship with any certificates, and, by default, uses HTTP to serve requests. While this makes it relatively simple to set up and run - especially for a development or testing environment - it is **not** recommended for a production environment. To enable HTTPS, see [Configure HTTPS Access to Harbor](../configuration/configure_https.md).
**NOTE**: If you run a previous version of Harbor, you may need to update ```harbor.yml``` and migrate the data to fit the new database schema. For more details, please refer to **[Harbor Migration Guide](migration_guide.md)**.
In addition, deployment instructions for Kubernetes have been created by the community. Refer to [Harbor on Kubernetes](kubernetes_deployment.md) for details.
## Harbor Components
The table below lists the components that are deployed when you install this version of Harbor.
|Component|Version|
|---|---|
|Postgresql|9.6.10-1.ph2|
|Redis|4.0.10-1.ph2|
|Clair|2.0.8|
|Beego|1.9.0|
|Chartmuseum|0.9.0|
|Docker/distribution|2.7.1|
|Docker/notary|0.6.1|
|Helm|2.9.1|
|Swagger-ui|3.22.1|

View File

@ -0,0 +1,144 @@
# Configure the Harbor YML File
Configuration parameters are located in the file **harbor.yml**.
There are two categories of parameters: **system level parameters** and **user level parameters**.
- **System level parameters**: These parameters must be set in the configuration file. They take effect when a user updates them in `harbor.yml` and runs the `install.sh` script to reinstall Harbor.
- **User level parameters**: These parameters can be updated on the web Portal after Harbor has started for the first time. In particular, you must set the desired **auth_mode** before registering or creating any new users in Harbor. When there are users in the system (besides the default admin user), **auth_mode** cannot be changed.
The parameters are described below - note that at the very least, you will need to change the **hostname** attribute.
##### Required parameters
- **hostname**: The target host's hostname, which is used to access the Portal and the registry service. It should be the IP address or the fully qualified domain name (FQDN) of your target machine, e.g., `192.168.1.10` or `reg.yourdomain.com`. _Do NOT use `localhost` or `127.0.0.1` or `0.0.0.0` for the hostname - the registry service needs to be accessible by external clients!_
- **data_volume**: The location to store Harbor's data.
- **harbor_admin_password**: The administrator's initial password. This password only takes effect the first time Harbor launches. After that, this setting is ignored and the administrator's password should be set in the Portal. _Note that the default username/password are **admin/Harbor12345**._
- **database**: the configs related to local database
- **password**: The root password for the PostgreSQL database. Change this password for any production use.
- **max_idle_conns**: The maximum number of connections in the idle connection pool. If <= 0, no idle connections are retained. The value in the configuration template is 50; if the parameter is not configured at all, the value falls back to 2.
- **max_open_conns**: The maximum number of open connections to the Harbor database. If <= 0, there is no limit on the number of open connections. The value in the configuration template is 100; if the parameter is not configured at all, the value falls back to 0 (no limit).
- **jobservice**: jobservice related configs
- **max_job_workers**: The maximum number of replication workers in job service. For each image replication job, a worker synchronizes all tags of a repository to the remote destination. Increasing this number allows more concurrent replication jobs in the system. However, since each worker consumes a certain amount of network/CPU/IO resources, please carefully pick the value of this attribute based on the hardware resource of the host.
- **log**: log related configs
- **level**: log level, options are debug, info, warning, error, fatal
- **local**: The default is to retain logs locally.
- **rotate_count**: Log files are rotated **rotate_count** times before being removed. If count is 0, old versions are removed rather than rotated.
- **rotate_size**: Log files are rotated only if they grow bigger than **rotate_size** bytes. If the size is followed by k, the size is in kilobytes; if M is used, the size is in megabytes; and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G are all valid.
- **location**: the directory to store logs
- **external_endpoint**: Enable this option to forward logs to a syslog server.
- **protocol**: Transport protocol for the syslog server. Default is TCP.
- **host**: The URL of the syslog server.
- **port**: The port on which the syslog server listens.
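As a minimal sketch, a `harbor.yml` with only the required parameters set might look like the following. The hostname and passwords are placeholder values, and the numeric values shown are the defaults from the configuration template; adjust them to your environment:
``` yaml
hostname: reg.yourdomain.com
data_volume: /data
harbor_admin_password: Harbor12345
database:
  password: root123
  max_idle_conns: 50
  max_open_conns: 100
jobservice:
  max_job_workers: 10
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
```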
##### Optional parameters
- **http**:
- **port**: the port number for HTTP
- **https**: The protocol used to access the Portal and the token/notification service. If Notary is enabled, this must be set to _https_.
refer to **[Configuring Harbor with HTTPS Access](configure_https.md)**.
- **port**: port number for https
- **certificate**: The path of SSL certificate, it's applied only when the protocol is set to https.
- **private_key**: The path of SSL key, it's applied only when the protocol is set to https.
- **external_url**: Enable this if you use an external proxy; when it is enabled, the hostname is no longer used.
- **clair**: Clair related configs
- **updaters_interval**: The interval of the Clair updaters in hours; set to 0 to disable the updaters.
- **http_proxy**: Config http proxy for Clair, e.g. `http://my.proxy.com:3128`.
- **https_proxy**: Config https proxy for Clair, e.g. `http://my.proxy.com:3128`.
- **no_proxy**: Config no proxy for Clair, e.g. `127.0.0.1,localhost,core,registry`.
- **chart**: chart related configs
- **absolute_url**: if set to `enabled`, the chart service uses absolute URLs; if set to `disabled`, it uses relative URLs.
- **external_database**: external database configs. Currently, only PostgreSQL is supported.
- **harbor**: harbor's core database configs
- **host**: hostname for harbor core database
- **port**: port of harbor's core database
- **db_name**: database name of harbor core database
- **username**: username to connect harbor core database
- **password**: password to harbor core database
- **ssl_mode**: whether to enable SSL mode
- **max_idle_conns**: The maximum number of connections in the idle connection pool. If <=0 no idle connections are retained. The default value is 2.
- **max_open_conns**: The maximum number of open connections to the database. If <= 0 there is no limit on the number of open connections. The default value is 0.
- **clair**: clair's database configs
- **host**: hostname for clair database
- **port**: port of clair database
- **db_name**: database name of clair database
- **username**: username to connect clair database
- **password**: password to clair database
- **ssl_mode**: whether to enable SSL mode
- **notary_signer**: notary's signer database configs
- **host**: hostname for notary signer database
- **port**: port of notary signer database
- **db_name**: database name of notary signer database
- **username**: username to connect notary signer database
- **password**: password to notary signer database
- **ssl_mode**: whether to enable SSL mode
- **notary_server**:
- **host**: hostname for notary server database
- **port**: port of notary server database
- **db_name**: database name of notary server database
- **username**: username to connect notary server database
- **password**: password to notary server database
- **ssl_mode**: whether to enable SSL mode
- **external_redis**: configs for using an external Redis
- **host**: host for external redis
- **port**: port for external redis
- **password**: password to connect external host
- **registry_db_index**: db index for registry use
- **jobservice_db_index**: db index for jobservice
- **chartmuseum_db_index**: db index for chartmuseum
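For example, an `external_redis` section might look like the following sketch. The host and password are placeholders, and the db indexes shown are the defaults from the configuration template:
``` yaml
external_redis:
  host: redis.example.com
  port: 6379
  password: REDIS_PASS
  registry_db_index: 1
  jobservice_db_index: 2
  chartmuseum_db_index: 3
```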
#### Configuring storage backend (optional)
- **storage_service**: By default, Harbor stores images and charts on your local filesystem. In a production environment, you may consider using another storage backend instead of the local filesystem, such as S3, OpenStack Swift, or Ceph. These parameters are configurations for the registry.
- **ca_bundle**: The path to the custom root CA certificate, which is injected into the trust store of the registry's and chart repository's containers. This is usually needed when the user hosts internal storage with a self-signed certificate.
- **provider_name**: Storage configs for the registry; the default is filesystem. For more information about this configuration, refer to https://docs.docker.com/registry/configuration/
- **redirect**:
- **disable**: set disable to true when you want to disable registry redirect
For example, if you use Openstack Swift as your storage backend, the parameters may look like this:
``` yaml
storage_service:
ca_bundle:
swift:
username: admin
password: ADMIN_PASS
authurl: http://keystone_addr:35357/v3/auth
tenant: admin
domain: default
region: regionOne
container: docker_images"
redirect:
disable: false
```
_NOTE: For detailed information on the storage backend of a registry, refer to the [Registry Configuration Reference](https://docs.docker.com/registry/configuration/)._
## Configuring Harbor listening on a customized port
By default, Harbor listens on port 80 (HTTP) and 443 (HTTPS, if configured) for both the admin portal and docker commands. These default ports can be configured in `harbor.yml`.
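For example, to serve the portal and registry on non-default ports, you might set the following in `harbor.yml` (the ports and paths are placeholder values):
``` yaml
http:
  port: 8080
https:
  port: 8443
  certificate: /your/certificate/path
  private_key: /your/private/key/path
```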
## Configuring Harbor using the external database
Currently, only PostgreSQL is supported as an external database for Harbor.
To use an external database, uncomment the `external_database` section in `harbor.yml` and fill in the necessary information. You must first create four databases, for Harbor core, Clair, Notary server, and Notary signer. The tables are generated automatically when Harbor starts up.
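As a sketch, the `harbor` subsection of `external_database` might look like the following. The host, credentials, and database name are placeholders; the `clair`, `notary_signer`, and `notary_server` subsections follow the same pattern:
``` yaml
external_database:
  harbor:
    host: db.example.com
    port: 5432
    db_name: harbor
    username: harbor
    password: HARBOR_DB_PASS
    ssl_mode: disable
```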
## Manage user settings
Since release 1.8.0, user settings are separated from system settings, and they can no longer be configured in the configuration file. All user settings should be configured in the web console or by HTTP request.
Refer to [Configure User Settings](configure_user_settings.md) to configure user settings.
## Performance tuning
By default, Harbor limits the CPU usage of the Clair container to 150000 to prevent it from using up all the CPU resources. This is defined in the `docker-compose.clair.yml` file. You can modify it based on your hardware configuration.
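For reference, the relevant fragment of `docker-compose.clair.yml` looks roughly like the following sketch (other service settings omitted); the `cpu_quota` value is the one to tune:
``` yaml
clair:
  # with the default 100000 cpu_period, 150000 allows at most 1.5 CPUs
  cpu_quota: 150000
```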

View File

@ -0,0 +1,26 @@
# Download the Harbor Installer
The binary of the installer can be downloaded from the [release](https://github.com/goharbor/harbor/releases) page. Choose either online or offline installer.
- **Online installer:** The installer downloads Harbor's images from Docker Hub. For this reason, the installer is very small in size.
- **Offline installer:** Use this installer when the host does not have an Internet connection. The installer contains pre-built images so its size is larger.
All installers can be downloaded from the **[official release](https://github.com/goharbor/harbor/releases)** page.
This guide describes the steps to install and configure Harbor by using the online or offline installer. The installation processes are almost the same.
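For example, to download the v1.9.0 online installer (substitute the version you want; the URL follows the naming convention of the release assets):
```bash
$ wget https://github.com/goharbor/harbor/releases/download/v1.9.0/harbor-online-installer-v1.9.0.tgz
```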
Use the *tar* command to extract the package.
Online installer:
```bash
$ tar xvf harbor-online-installer-<version>.tgz
```
Offline installer:
```bash
$ tar xvf harbor-offline-installer-<version>.tgz
```

View File

@ -0,0 +1,27 @@
# Harbor Installation Prerequisites
Harbor is deployed as several Docker containers and can therefore be deployed on any Linux distribution that supports Docker. The target host requires Docker and Docker Compose to be installed.
### Hardware
|Resource|Capacity|Description|
|---|---|---|
|CPU|2 CPUs minimum|4 CPUs preferred|
|Mem|4 GB minimum|8 GB preferred|
|Disk|40 GB minimum|160 GB preferred|
### Software
|Software|Version|Description|
|---|---|---|
|Docker engine|Version 17.06.0-ce or higher|For installation instructions, refer to the [Docker Engine documentation](https://docs.docker.com/engine/installation/)|
|Docker Compose|Version 1.18.0 or higher|For installation instructions, refer to the [Docker Compose documentation](https://docs.docker.com/compose/install/)|
|OpenSSL|Latest version preferred|Used to generate certificates and keys for Harbor|
### Network ports
|Port|Protocol|Description|
|---|---|---|
|443|HTTPS|The Harbor portal and core API accept HTTPS requests on this port. This port can be changed in the config file.|
|4443|HTTPS|Connections to the Docker Content Trust service for Harbor; only needed when Notary is enabled. This port can be changed in the config file.|
|80|HTTP|The Harbor portal and core API accept HTTP requests on this port.|

View File

@ -0,0 +1,61 @@
# Run the Installer Script
Once **harbor.yml** and the storage backend (optional) are configured, install and start Harbor using the `install.sh` script. Note that it may take some time for the online installer to download Harbor images from Docker Hub.
##### Default installation (without Notary/Clair)
Harbor integrates with Notary and with Clair (for vulnerability scanning). However, the default installation does not include the Notary or Clair services.
``` sh
$ sudo ./install.sh
```
If everything worked properly, you should be able to open a browser to visit the admin portal at `http://reg.yourdomain.com` (change `reg.yourdomain.com` to the hostname configured in your `harbor.yml`). Note that the default administrator username/password are admin/Harbor12345.
Log in to the admin portal and create a new project, e.g. `myproject`. You can then use docker commands to log in and push images (by default, the registry server listens on port 80):
```sh
$ docker login reg.yourdomain.com
$ docker push reg.yourdomain.com/myproject/myrepo:mytag
```
**IMPORTANT:** The default installation of Harbor uses _HTTP_ - as such, you will need to add the option `--insecure-registry` to your client's Docker daemon and restart the Docker service.
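As a sketch, on a Linux client you can typically do this by adding the registry to `/etc/docker/daemon.json` and restarting Docker. Merge the key into the file if it already contains other settings, and replace the hostname with your own:
```sh
$ sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["reg.yourdomain.com"]
}
EOF
$ sudo systemctl restart docker
```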
##### Installation with Notary
To install Harbor with Notary service, add a parameter when you run `install.sh`:
```sh
$ sudo ./install.sh --with-notary
```
**Note**: For installation with Notary, the parameter **ui_url_protocol** must be set to "https". For configuring HTTPS, refer to the following sections.
For more information about Notary and Docker Content Trust, refer to [Docker's documentation](https://docs.docker.com/engine/security/trust/content_trust/).
##### Installation with Clair
To install Harbor with Clair service, add a parameter when you run `install.sh`:
```sh
$ sudo ./install.sh --with-clair
```
For more information about Clair, refer to [Clair's documentation](https://coreos.com/clair/docs/2.0.1/).
##### Installation with chart repository service
To install Harbor with chart repository service, add a parameter when you run ```install.sh```:
```sh
$ sudo ./install.sh --with-chartmuseum
```
**Note**: If you want to install Notary, Clair and chart repository service, you must specify all the parameters in the same command:
```sh
$ sudo ./install.sh --with-notary --with-clair --with-chartmuseum
```
For information on how to use Harbor, please refer to **[User Guide of Harbor](user_guide.md)** .

View File

@ -0,0 +1,27 @@
# Troubleshooting Harbor Installation
1. When Harbor does not work properly, run the command below to find out if all of Harbor's containers are in the **UP** state:
```
$ sudo docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------------------
harbor-core /harbor/start.sh Up
harbor-db /entrypoint.sh postgres Up 5432/tcp
harbor-jobservice /harbor/start.sh Up
harbor-log /bin/sh -c /usr/local/bin/ ... Up 127.0.0.1:1514->10514/tcp
harbor-portal nginx -g daemon off; Up 80/tcp
nginx nginx -g daemon off; Up 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
redis docker-entrypoint.sh redis ... Up 6379/tcp
registry /entrypoint.sh /etc/regist ... Up 5000/tcp
registryctl /harbor/start.sh Up
```
If a container is not in **UP** state, check the log file of that container in directory `/var/log/harbor`. For example, if the container `harbor-core` is not running, you should look at the log file `core.log`.
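For example, to follow the core service log:
```
$ sudo tail -f /var/log/harbor/core.log
```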
2. When setting up Harbor behind an nginx proxy or elastic load balancing, look for the line below in `common/config/nginx/nginx.conf`, and remove it from the `location /`, `location /v2/` and `location /service/` sections if the proxy already has similar settings:
``` sh
proxy_set_header X-Forwarded-Proto $scheme;
```
Then redeploy Harbor, referring to the previous section "Managing Harbor's lifecycle".

View File

@ -0,0 +1,99 @@
# Harbor Upgrade and Migration Guide
This guide covers upgrade and migration to version 1.9.0. This guide only covers migration from v1.7.x and later to the current version. If you are upgrading from an earlier version, refer to the migration guide in the `release-1.7.0` branch to upgrade to v1.7.x first, then follow this guide to perform the migration to this version.
When upgrading an existing Harbor 1.7.x instance to a newer version, you might need to migrate the data in your database and the settings in `harbor.cfg`.
Since the migration might alter the database schema and the settings of `harbor.cfg`, you should **always** back up your data before any migration.
**NOTES:**
- Again, you must back up your data before any data migration.
- Since v1.8.0, the configuration of Harbor has changed to a `.yml` file. If you are upgrading from 1.7.x, the migrator will transform the configuration file from `harbor.cfg` to `harbor.yml`. The command will be a little different to perform this migration, so make sure you follow the steps below.
- In version 1.9.0, some containers are started by `non-root`. This does not pose problems if you are upgrading an officially released version of Harbor, but if you have deployed a customized instance of Harbor, you might encounter permission issues.
- In previous releases, user roles took precedence over group roles in a project. In this version, user roles and group roles are combined so that the user has whichever set of permissions is highest. This might cause the roles of certain users to change during upgrade.
- With the introduction of storage and artifact quotas in version 1.9.0, migration from 1.7.x and 1.8.x might take a few minutes. This is because the `core` walks through all blobs in the registry and populates the database with information about the layers and artifacts in projects.
- With the introduction of storage and artifact quotas in version 1.9.0, replication between version 1.9.0 and a previous version of Harbor does not work. You must upgrade all Harbor nodes to 1.9.0 if you have configured replication between them.
## Upgrading Harbor and Migrating Data
1. Log in to the host that Harbor runs on, and stop and remove the existing Harbor instance if it is still running:
```sh
cd harbor
docker-compose down
```
2. Back up Harbor's current files so that you can roll back to the current version if necessary.
```sh
mv harbor /my_backup_dir/harbor
```
Back up the database (by default in the directory `/data/database`):
```sh
cp -r /data/database /my_backup_dir/
```
3. Get the latest Harbor release package from GitHub:
[https://github.com/goharbor/harbor/releases](https://github.com/goharbor/harbor/releases)
4. Before upgrading Harbor, perform a migration first. The migration tool is delivered as a Docker image, so pull the image from Docker Hub. Replace [tag] with the release version of Harbor (for example, v1.9.0) in the command below:
```sh
docker pull goharbor/harbor-migrator:[tag]
```
5. If your current version is v1.7.x or earlier, migrate the configuration file from `harbor.cfg` to `harbor.yml`:
**NOTE:** You can find ${harbor_yml} in the installer you extracted in step 3. After the migration, the file `harbor.yml`
in that path is updated with the values from ${harbor_cfg}.
```sh
docker run -it --rm -v ${harbor_cfg}:/harbor-migration/harbor-cfg/harbor.yml -v ${harbor_yml}:/harbor-migration/harbor-cfg-out/harbor.yml goharbor/harbor-migrator:[tag] --cfg up
```
Otherwise, if your version is 1.8.x or higher, just upgrade the `harbor.yml` file:
```sh
docker run -it --rm -v ${harbor_yml}:/harbor-migration/harbor-cfg/harbor.yml goharbor/harbor-migrator:[tag] --cfg up
```
**NOTE:** The schema upgrade and data migration of the database are performed by core when Harbor starts. If the migration fails, check the core log to debug.
6. Under the directory `./harbor`, run the `./install.sh` script to install the new Harbor instance. If you choose to install Harbor with components such as Notary, Clair, and chartmuseum, refer to [Installation & Configuration Guide](../docs/installation_guide.md) for more information.
## Roll Back from an Upgrade
If, for any reason, you want to roll back to the previous version of Harbor, perform the following steps:
1. Stop and remove the current Harbor service if it is still running.
```sh
cd harbor
docker-compose down
```
2. Remove the current Harbor instance.
```sh
rm -rf harbor
```
3. Restore the package of the previous Harbor version.
```sh
mv /my_backup_dir/harbor harbor
```
4. Restore the database by copying the data files from the backup directory to your data volume, which is `/data/database` by default.
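A minimal sketch, assuming the default data volume and the backup directory used in step 2:
```sh
cp -r /my_backup_dir/database /data/
```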
5. Restart Harbor service using the previous configuration.
If the previous version of Harbor was installed by a release build:
```sh
cd harbor
./install.sh
```
**NOTE**: While you can roll back an upgrade to the state before you started the upgrade, Harbor does not support downgrades.

View File

@ -0,0 +1,44 @@
Guide for Harbor Upgrade Testing
=======
# Before upgrade
## Prepare data
1. Add users usera, userb, userc, userd, and usere; set usera and userb as system admins.
2. Create projects projecta and projectc as private; create projectb as public.
3. Add usera as projecta's admin, userc as developer, and userd as guest. Do the same for projectb and projectc.
4. Log in to Harbor as usera, push an unsigned image into projecta, then push a signed image to projecta.
5. Log in to Harbor as userc, push an unsigned image into projecta, then push a signed image to projecta.
6. Log in to Harbor as userd, push each image one time.
7. Repeat steps 4, 5, and 6 for projectb and projectc.
8. Add one endpoint to Harbor.
9. Add an immediate replication rule to projecta, a scheduled rule to projectb, and a manual rule to projectc; trigger each rule one time.
10. Add 5 system labels, syslabel1 to syslabel5, and apply syslabel1 and syslabel2 to all unsigned images.
11. In each project, add 5 project labels, projlabela to projlabele; apply projlabela, projlabelb, and projlabelc to the signed images.
12. Trigger one scan-all job to scan all images (for Clair-enabled instances).
13. Update the project public, content trust, severity, and scanning settings.
14. Update the Harbor email, token expiration, read-only, and scan settings.
15. Update the repository info.
**NOTE**: The create-user step is not needed if the auth mode is LDAP.
# Upgrade
## Follow the upgrade guide
1. Run the db migrator image to back up the database.
2. Run the db migrator image to migrate the database.
3. Install the new version of Harbor.
# After upgrade
1. Confirm users exist and are available (not needed for VIC and LDAP mode).
2. Confirm users have the correct roles.
3. Confirm labels exist and are applied correctly (not needed for VIC).
4. Confirm Notary signatures are correct.
5. Confirm the endpoint exists.
6. Confirm the replication rules exist and work well.
7. Confirm project-level settings (public, content trust, scan) are the same as before.
8. Confirm system-level settings (email, token expiration, scan) are the same as before.
9. Confirm scan results are the same as before the upgrade.
10. Confirm access logs are the same as before the upgrade.
11. Confirm repository info is the same as before.
12. Confirm other image metadata (e.g. author, size) is the same as before.

View File

@ -0,0 +1,94 @@
# Role Based Access Control (RBAC)
![rbac](../img/rbac.png)
Harbor manages images through projects. Users can be added to a project as members with one of the following roles:
* **Guest**: Guest has read-only privilege for a specified project.
* **Developer**: Developer has read and write privileges for a project.
* **Master**: Master has elevated permissions beyond those of 'Developer' including the ability to scan images, view replications jobs, and delete images and helm charts.
* **ProjectAdmin**: When creating a new project, you are assigned the "ProjectAdmin" role for the project. Besides read-write privileges, the "ProjectAdmin" also has some management privileges, such as adding and removing members and starting a vulnerability scan.
Besides the above project roles, there are two system-level roles:
* **SysAdmin**: "SysAdmin" has the most privileges. In addition to the privileges mentioned above, "SysAdmin" can also list all projects, set an ordinary user as administrator, delete users and set vulnerability scan policy for all images. The public project "library" is also owned by the administrator.
* **Anonymous**: When a user is not logged in, the user is considered as an "Anonymous" user. An anonymous user has no access to private projects and has read-only access to public projects.
See the detailed permissions matrix listed here: https://github.com/goharbor/harbor/blob/master/docs/permissions.md
## User account
Harbor supports different authentication modes:
* **Database(db_auth)**
Users are stored in the local database.
A user can register himself/herself in Harbor in this mode. To disable user self-registration, refer to the [installation guide](installation_guide.md) for initial configuration, or disable this feature in [Administrator Options](#administrator-options). When self-registration is disabled, the system administrator can add users into Harbor.
When registering or adding a new user, the username and email must be unique in the Harbor system. The password must contain at least 8 characters with 1 lowercase letter, 1 uppercase letter and 1 numeric character.
If you forget your password, follow these steps to reset it:
1. Click the "Forgot Password" link on the sign-in page.
2. Enter the email address you used when you signed up; an email with a password reset link will be sent to you.
3. After receiving the email, click on the link in the email which directs you to a password reset web page.
4. Input your new password and click "Save".
* **LDAP/Active Directory (ldap_auth)**
Under this authentication mode, users whose credentials are stored in an external LDAP or AD server can log in to Harbor directly.
When an LDAP/AD user logs in by *username* and *password*, Harbor binds to the LDAP/AD server with the **"LDAP Search DN"** and **"LDAP Search Password"** described in the [installation guide](installation_guide.md). If this succeeds, Harbor looks up the user under the LDAP entry **"LDAP Base DN"**, including subtree. The attribute (such as uid or cn) specified by **"LDAP UID"** is used to match a user with the *username*. If a match is found, the user's *password* is verified by a bind request to the LDAP/AD server. Uncheck **"LDAP Verify Cert"** if the LDAP/AD server uses a self-signed or untrusted certificate.
Self-registration, deleting user, changing password and resetting password are not supported under LDAP/AD authentication mode because the users are managed by LDAP or AD.
* **OIDC Provider (oidc_auth)**
With this authentication mode, regular users log in to the Harbor Portal via an SSO flow.
After the system administrator configures Harbor to authenticate via OIDC (for more details, refer to [this section](#managing-authentication)),
a button `LOGIN VIA OIDC PROVIDER` appears on the login page.
![oidc_login](../img/oidc_login.png)
Clicking this button kicks off the SSO flow and redirects the user to the OIDC Provider for authentication. After a successful
authentication at the remote site, the user is redirected to Harbor. There is an "onboard" step the first time the user
authenticates with this account, in which a dialog pops up for setting a user name in Harbor:
![oidc_onboar](../img/oidc_onboard_dlg.png)
This user name is the identifier for this user in Harbor, and is used in cases such as adding a member to a project or assigning roles.
It has to be a unique user name; if another user has already onboarded with this user name, the user is prompted to choose another one.
For using the docker CLI with such a user, refer to [Using CLI after login via OIDC based SSO](#using-oidc-cli-secret).
**NOTE:**
1. After the onboard process, you still have to log in to Harbor via the SSO flow; the `Username` and `Password` fields are only for
the local admin to log in when Harbor is configured to authenticate via OIDC.
2. As in LDAP authentication mode, self-registration, updating profiles, deleting users, changing passwords and
resetting passwords are not supported.
## Using OIDC CLI secret
Having authenticated via OIDC SSO and onboarded to Harbor, you can use the Docker/Helm CLI to access Harbor to read and write artifacts.
As the CLI cannot handle redirection for SSO, we introduced the `CLI secret`, which is only available when Harbor's authentication mode
is configured as OIDC-based.
After logging into Harbor, click the drop down list to view user's profile:
![user_profile](../img/user_profile.png)
You can copy your CLI secret via the dialog of profile:
![profile_dlg](../img/profile_dlg.png)
After that, you can authenticate with the Docker/Helm CLI using the Harbor user name that you set during the onboard process, with the
CLI secret as the password, for example:
```sh
docker login -u testuser -p xxxxxx jt-test.local.goharbor.io
```
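The same credentials work for Helm. For example, to add a Harbor project as a chart repository (the project name `myproject` and repo name `myrepo` are assumptions for illustration):
```sh
helm repo add --username=testuser --password=xxxxxx myrepo https://jt-test.local.goharbor.io/chartrepo/myproject
```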
When you click the "..." icon in the profile dialog, a button for generating new CLI secret will appear, and you can generate a new
CLI secret by clicking this button. Please be reminded one user can only have one CLI secret, so when a new secret is generated, the
old one becomes invalid at once.
**NOTE**:
Under the hood, the CLI secret is associated with the ID token, and Harbor tries to refresh the token, so the CLI secret remains
valid after the ID token expires. However, if the OIDC Provider does not provide a refresh token, or the refresh fails for some
reason, the CLI secret becomes invalid. In that case, log out and log in to Harbor via the SSO flow again so that Harbor can get a
new ID token, after which the CLI secret will work again.

View File

@ -0,0 +1,112 @@
# Configure Harbor user settings by command line
Since release 1.8.0, all user settings are separated from system settings and can no longer be configured in the configuration file. Users need to configure them, with admin privileges, in the web console or via HTTP requests.
Update a configuration item:
`curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki <Harbor Server URL>/api/configurations -d'{"<item_name>":"<item_value>"}'`
Get the current configurations:
`curl -u "<username>:<password>" -H "Content-Type: application/json" -ki <Harbor Server URL>/api/configurations`
## Sample config commands:
1. Update Harbor to use LDAP auth
Command
```shell
curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki https://harbor.sample.domain/api/configurations -d'{"auth_mode":"ldap_auth"}'
```
Output
```
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 08 May 2019 08:22:02 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 0
Connection: keep-alive
Set-Cookie: sid=a5803a1265e2b095cf65ce1d8bbd79b1; Path=/; HttpOnly
```
1. Restrict project creation to admin only
Command
```shell
curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki https://harbor.sample.domain/api/configurations -d'{"project_creation_restriction":"adminonly"}'
```
Output
```
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 08 May 2019 08:24:32 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 0
Connection: keep-alive
Set-Cookie: sid=b7925eaf7af53bdefb13bdcae201a14a; Path=/; HttpOnly
```
1. Update the token expiration time
Command
```shell
curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki https://harbor.sample.domain/api/configurations -d'{"token_expiration":"300"}'
```
Output
```
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 08 May 2019 08:23:38 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 0
Connection: keep-alive
Set-Cookie: sid=cc1bc93ffa2675253fc62b4bf3d9de0e; Path=/; HttpOnly
```
## Harbor user settings
| Configure item name | Description | Type | Required | Default Value |
| ------------ |------------ | ---- | ----- | ----- |
auth_mode | Authentication mode, it can be db_auth, ldap_auth, uaa_auth or oidc_auth | string
email_from | Email from | string | required (email feature)
email_host | Email server | string | required (email feature)
email_identity | Email identity | string | optional (email feature)
email_password | Email password | string | required (email feature)
email_insecure | Email verify certificate, true or false |boolean | optional (email feature) | false
email_port | Email server port | number | required (email feature)
email_ssl | Email SSL | boolean | optional | false
email_username | Email username | string | required (email feature)
ldap_url | LDAP URL | string | required |
ldap_base_dn | LDAP base DN | string | required(ldap_auth)
ldap_filter | LDAP filter | string | optional
ldap_scope | LDAP search scope, 0-Base Level, 1- One Level, 2-Sub Tree | number | optional | 2-Sub Tree
ldap_search_dn | LDAP DN to search LDAP users| string | required(ldap_auth)
ldap_search_password | LDAP DN's password |string | required(ldap_auth)
ldap_timeout | LDAP connection timeout | number | optional | 5
ldap_uid | LDAP attribute to indicate the username in Harbor | string | optional | cn
ldap_verify_cert | Verify cert when create SSL connection with LDAP server, true or false | boolean | optional | true
ldap_group_admin_dn | LDAP Group Admin DN | string | optional
ldap_group_attribute_name | LDAP Group Attribute, the LDAP attribute indicate the groupname in Harbor, it can be gid or cn | string | optional | cn
ldap_group_base_dn | The Base DN which to search the LDAP groups | string | required(ldap_auth and LDAP group)
ldap_group_search_filter | The filter to search LDAP groups | string | optional
ldap_group_search_scope | LDAP group search scope, 0-Base Level, 1- One Level, 2-Sub Tree | number | optional | 2-Sub Tree|
ldap_group_membership_attribute | LDAP group membership attribute, to indicate the group membership, it can be memberof, or ismemberof | string | optional | memberof
project_creation_restriction | Controls who can create projects; it can be everyone or adminonly | string | optional | everyone
read_only | The option to set repository read only, it can be true or false | boolean | optional | false
self_registration | User can register account in Harbor, it can be true or false | boolean | optional| true
token_expiration | Security token expiration time in minutes | number |optional| 30
uaa_client_id | UAA client ID | string | required(uaa_auth)
uaa_client_secret | UAA certificate | string | required(uaa_auth)
uaa_endpoint | UAA endpoint | string | required(uaa_auth)
uaa_verify_cert | UAA verify cert, true or false | boolean | optional | true
oidc_name | name for OIDC authentication | string | required(oidc_auth)
oidc_endpoint | endpoint for OIDC auth | string | required(oidc_auth)
oidc_client_id | client id for OIDC auth | string | required(oidc_auth)
oidc_client_secret | client secret for OIDC auth |string | required(oidc_auth)
oidc_scope | scope for OIDC auth | string| required(oidc_auth)
oidc_verify_cert | verify cert for OIDC auth, true or false | boolean | optional| true
robot_token_duration | Robot token expiration time in minutes | number | optional | 43200 (30days)
**Note:** Both booleans and numbers can be enclosed in double quotes in the request JSON; for example, `123`, `"123"`, `true` and `"true"` are all accepted.

View File

@ -0,0 +1,57 @@
## Introduction
This guide provides instructions for managing roles by LDAP/AD group. You can import an LDAP/AD group into Harbor and assign project roles to it. All LDAP/AD users in this group then have the group's assigned roles.
## Prerequisite
1. Harbor's auth_mode is ldap_auth and the **[basic LDAP configuration parameters](https://github.com/vmware/harbor/blob/master/docs/installation_guide.md#optional-parameters)** are configured.
1. Memberof overlay
This feature requires that the LDAP/AD server has the **memberof overlay** feature enabled.
With this feature, the LDAP/AD user entity's **memberof** attribute is updated when the group entity's **member** attribute is updated, for example when adding or removing an LDAP/AD user from the LDAP/AD group.
* OpenLDAP -- Refer to this **[guide](https://technicalnotes.wordpress.com/2014/04/19/openldap-setup-with-memberof-overlay/)** to enable and verify the **memberof overlay**
* Active Directory -- this feature is enabled by default.
## Configure LDAP group settings
Besides the **[basic LDAP configuration parameters](https://github.com/vmware/harbor/blob/master/docs/installation_guide.md#optional-parameters)**, the LDAP group related configuration parameters should be configured. They can be configured before or after installation.
1. Configure LDAP parameters via API, refer to **[Config Harbor user settings by command line](configure_user_settings.md)**
For example:
```
curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki https://harbor.sample.domain/api/configurations -d'{"ldap_group_basedn":"ou=groups,dc=example,dc=com"}'
```
The following parameters are related to LDAP group configuration.
* ldap_group_basedn -- The base DN from which to look up a group in LDAP/AD, for example: ou=groups,dc=example,dc=com
* ldap_group_filter -- The filter used to search for LDAP/AD groups, for example: objectclass=groupOfNames
* ldap_group_gid -- The attribute used to name an LDAP/AD group, for example: cn
* ldap_group_scope -- The scope in which to search for LDAP/AD groups. 0-LDAP_SCOPE_BASE, 1-LDAP_SCOPE_ONELEVEL, 2-LDAP_SCOPE_SUBTREE
2. Or change the configuration parameters in the web console after installation. Go to "Administration" -> "Configuration" -> "Authentication" and change the following settings:
- LDAP Group Base DN -- ldap_group_basedn in the Harbor user settings
- LDAP Group Filter -- ldap_group_filter in the Harbor user settings
- LDAP Group GID -- ldap_group_gid in the Harbor user settings
- LDAP Group Scope -- ldap_group_scope in the Harbor user settings
- LDAP Groups With Admin Privilege -- Specify an LDAP/AD group DN; all LDAP/AD users in this group have Harbor admin privileges.
![Screenshot of LDAP group config](img/group/ldap_group_config.png)
## Assign project role to LDAP/AD group
In "Project" -> "Members" -> "+ GROUP".
![Screenshot of add group](img/group/ldap_group_addgroup.png)
You can "Add an existing user group to project member" or "Add a group from LDAP to project member".
![Screenshot of add group dialog](img/group/ldap_group_addgroup_dialog.png)
Once an LDAP group is assigned a project role, logging in with an LDAP/AD user in this group gives the user the privileges of the group's role.
If a user is in the LDAP groups with admin privilege (ldap_group_admin_dn), the user has the same privileges as the Harbor admin.
## User privileges and group privileges
If a user has both a user-level role and a group-level role, these privileges are merged together.

View File

@ -0,0 +1,9 @@
# Working with Harbor Projects
Placeholder text.
1. Item
1. Item
1. Item

View File

@ -0,0 +1,47 @@
# Configuring CVE Whitelists
When you run vulnerability scans, images that are subject to Common Vulnerabilities and Exposures (CVE) are identified. According to the severity of the CVE and your security settings, these images might not be permitted to run. As a system administrator, you can create whitelists of CVEs to ignore during vulnerability scanning.
You can set a system-wide CVE whitelist or you can set CVE whitelists on a per-project basis.
### Configure a System-Wide CVE Whitelist
System-wide CVE whitelists apply to all of the projects in a Harbor instance.
1. Go to **Configuration** > **System Settings**.
1. Under **Deployment security**, click **Add**.
![System-wide CVE whitelist](../img/cve-whitelist1.png)
1. Enter the list of CVE IDs to ignore during vulnerability scanning.
![Add system CVE whitelist](../img/cve-whitelist2.png)
Either use a comma-separated list or newlines to add multiple CVE IDs to the list.
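For example (these CVE IDs are placeholders for illustration):
```
CVE-2019-12345,CVE-2019-12346
```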
1. Click **Add** at the bottom of the window to add the list.
1. Optionally uncheck the **Never expires** checkbox and use the calendar selector to set an expiry date for the whitelist.
![Add system CVEs](../img/cve-whitelist3.png)
1. Click **Save** at the bottom of the page to save your settings.
After you have created a system whitelist, you can remove CVE IDs from the list by clicking the delete button next to it in the list. You can click **Add** to add more CVE IDs to the system whitelist.
![Add and remove system CVEs](../img/cve-whitelist4.png)
### Configure a Per-Project CVE Whitelist
By default, the system whitelist is applied to all projects. You can configure different CVE whitelists for individual projects, which override the system whitelist.
1. Go to **Projects**, select a project, and select **Configuration**.
1. Under **CVE whitelist**, select **Project whitelist**.
![Project CVE whitelist](../img/cve-whitelist5.png)
1. Optionally click **Copy From System** to add all of the CVE IDs from the system CVE whitelist to this project whitelist.
1. Click **Add** and enter a list of additional CVE IDs to ignore during vulnerability scanning of this project.
![Add project CVEs](../img/cve-whitelist6.png)
Either use a comma-separated list or newlines to add multiple CVE IDs to the list.
1. Click **Add** at the bottom of the window to add the CVEs to the project whitelist.
1. Optionally uncheck the **Never expires** checkbox and use the calendar selector to set an expiry date for the whitelist.
1. Click **Save** at the bottom of the page to save your settings.
After you have created a project whitelist, you can remove CVE IDs from the list by clicking the delete button next to it in the list. You can click **Add** at any time to add more CVE IDs to the whitelist for this project.
If CVEs are added to the system whitelist after you have created a project whitelist, click **Copy From System** to add the new entries from the system whitelist to the project whitelist.
**NOTE**: If CVEs are deleted from the system whitelist after you have created a project whitelist, and if you added the system whitelist to the project whitelist, you must manually remove the deleted CVEs from the project whitelist. If you click **Copy From System** after CVEs have been deleted from the system whitelist, the deleted CVEs are not automatically removed from the project whitelist.

View File

@ -0,0 +1,86 @@
# Configuring Webhook Notifications
If you are a project administrator, you can configure a connection from a project in Harbor to a webhook endpoint. If you configure webhooks, Harbor notifies the webhook endpoint of certain events that occur in the project. Webhooks allow you to integrate Harbor with other tools to streamline continuous integration and development processes.
The action that is taken upon receiving a notification from a Harbor project depends on your continuous integration and development processes. For example, by configuring Harbor to send a `POST` request to a webhook listener at an endpoint of your choice, you can trigger a build and deployment of an application whenever there is a change to an image in the repository.
### Supported Events
You can define one webhook endpoint per project. Webhook notifications provide information about events in JSON format and are delivered by `HTTP` or `HTTPS POST` to an existing webhook endpoint URL that you provide. The following table describes the events that trigger notifications and the contents of each notification.
|Event|Webhook Event Type|Contents of Notification|
|---|---|---|
|Push image to registry|`IMAGE PUSH`|Repository namespace name, repository name, resource URL, tags, manifest digest, image name, push time timestamp, username of user who pushed image|
|Pull manifest from registry|`IMAGE PULL`|Repository namespace name, repository name, manifest digest, image name, pull time timestamp, username of user who pulled image|
|Delete manifest from registry|`IMAGE DELETE`|Repository namespace name, repository name, manifest digest, image name, image size, delete time timestamp, username of user who deleted image|
|Upload Helm chart to registry|`CHART PUSH`|Repository name, chart name, chart type, chart version, chart size, tag, timestamp of push, username of user who uploaded chart|
|Download Helm chart from registry|`CHART PULL`|Repository name, chart name, chart type, chart version, chart size, tag, timestamp of push, username of user who pulled chart|
|Delete Helm chart from registry|`CHART DELETE`|Repository name, chart name, chart type, chart version, chart size, tag, timestamp of delete, username of user who deleted chart|
|Image scan completed|`IMAGE SCAN COMPLETED`|Repository namespace name, repository name, tag scanned, image name, number of critical issues, number of major issues, number of minor issues, last scan status, scan completion time timestamp, vulnerability information (CVE ID, description, link to CVE, criticality, URL for any fix), username of user who performed scan|
|Image scan failed|`IMAGE SCAN FAILED`|Repository namespace name, repository name, tag scanned, image name, error that occurred, username of user who performed scan|
#### JSON Payload Format
The webhook notification is delivered in JSON format. The following example shows the JSON notification for a push image event:
```
{
"event_type": "pushImage"
"events": [
{
"project": "prj",
"repo_name": "repo1",
"tag": "latest",
"full_name": "prj/repo1",
"trigger_time": 158322233213,
"image_id": "9e2c9d5f44efbb6ee83aecd17a120c513047d289d142ec5738c9f02f9b24ad07",
"project_type": "Private"
}
]
}
```
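To sanity-check your own listener before configuring it in Harbor, you can replay a payload like the one above with curl (the listener URL is a placeholder):
```
curl -X POST -H "Content-Type: application/json" \
  -d '{"event_type": "pushImage", "events": []}' \
  http://your-listener.example.com:8080/webhook
```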
### Webhook Endpoint Recommendations
The endpoint that receives the webhook should ideally have a webhook listener that is capable of interpreting the payload and acting upon the information it contains, for example by running a shell script.
### Example Use Cases
You can configure your continuous integration and development infrastructure so that it performs the following types of operations when it receives a webhook notification from Harbor.
- Image push:
- Trigger a new build immediately following a push on selected repositories or tags.
- Notify services or applications that use the image that a new image is available and pull it.
- Scan the image using Clair.
- Replicate the image to remote registries.
- Image scanning:
- If a vulnerability is found, rescan the image or replicate it to another registry.
- If the scan passes, deploy the image.
### Configure Webhooks
1. Select a project and go to the Webhooks tab.
![Webhooks option](../img/webhooks1.png)
1. Enter the URL for your webhook endpoint listener.
![Webhook URL](../img/webhooks2.png)
1. If your webhook listener implements authentication, enter the authentication header.
1. To implement `HTTPS POST` instead of `HTTP POST`, select the **Verify Remote Certificate** check box.
1. Click **Test Endpoint** to make sure that Harbor can connect to the listener.
1. Click **Continue** to create the webhook.
When you have created the webhook, you see the status of the different notifications and the timestamp of the last time each notification was triggered. You can click **Disable** to disable notifications.
![Webhook Status](../img/webhooks3.png)
**NOTE**: You can only disable and reenable all notifications. You cannot disable and enable selected notifications.
If a webhook notification fails to send, or if it receives an HTTP error response with a code other than `2xx`, the notification is re-sent based on the configuration that you set in `harbor.yml`.
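In `harbor.yml`, the webhook retry setting looks like the following sketch (10 is the default in the configuration template; check your own file):
``` yaml
notification:
  webhook_job_max_retry: 10
```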
### Globally Enable and Disable Webhooks
As a system administrator, you can enable and disable webhook notifications for all projects.
1. Go to **Configuration** > **System Settings**.
1. Scroll down and check or uncheck the **Webhooks enabled** check box.
![Enable/disable webhooks](../img/webhooks4.png)

View File

@ -0,0 +1,37 @@
# Creating Robot Accounts
Robot Accounts are accounts created by project admins that are intended for automated operations. They have the following limitations:
1. Robot Accounts cannot log in to the Harbor portal.
2. Robot Accounts can only perform operations by using the Docker and Helm CLIs.
### Add a Robot Account
If you are a project admin, you can create a Robot Account by clicking "New Robot Account" in the `Robot Accounts` tab of a project, entering a name and a description, and granting the account permission to push and pull images and Helm charts.
![add_robot_account](../img/robotaccount/add_robot_account.png)
![add_robot_account](../img/robotaccount/add_robot_account_2.png)
> **NOTE:** The name will become `robot$<accountname>` and will be used to distinguish a robot account from a normal Harbor user.
![copy_robot_account_token](../img/robotaccount/copy_robot_account_token.png)
As Harbor doesn't store your account token, make sure to copy it from the pop-up dialog after creating the account; otherwise, there is no way to retrieve it from Harbor.
### Configure duration of robot account
If you are a system admin, you can configure the robot account token duration in days.
![set_robot_account_token_duration](../img/robotaccount/set_robot_account_token_duration.png)
### Authenticate with a robot account
To authenticate with a Robot Account, use `docker login` as shown below.
```
docker login harbor.io
Username: robot$accountname
Password: Thepasswordgeneratedbyprojectadmin
```
### Disable a robot account
If you are a project admin, you can disable a Robot Account by clicking "Disable Account" in the `Robot Accounts` tab of a project.
![disable_robot_account](../img/robotaccount/disable_delete_robot_account.png)
### Delete a robot account
If you are a project admin, you can delete a Robot Account by clicking "Delete" in the `Robot Accounts` tab of a project.
![delete_robot_account](../img/robotaccount/disable_delete_robot_account.png)

View File

@ -0,0 +1,153 @@
# Creating Tag Retention Rules
A repository can rapidly accumulate a large number of image tags, many of which might not be required after a given time or once they have been superseded by a subsequent image build. These excess tags can obviously consume large quantities of storage capacity. As a system administrator, you can define rules that govern how many tags of a given repository to retain, or for how long to retain certain tags.
### How Tag Retention Rules Work
You define tag retention rules on repositories, not on projects. This allows for greater granularity when defining your retention rules. As the name suggests, when you define a retention rule for a repository, you are identifying which tags to retain. You do not define rules to explicitly remove tags. Rather, when you set a rule, any tags in a repository that are not identified as being eligible for retention are discarded.
A tag retention rule has 3 filters that are applied sequentially, as described in the following table.
|Order|Filter|Description|
|---|---|---|
|First|Repository or repositories|Identify the repository or repositories on which to apply the rule. You can identify repositories that either have a certain name or name fragment, or that do not have that name or name fragment. Wild cards (for example `*repo`, `repo*`, and `**`) are permitted. The repository filter is applied first to mark the repositories to which to apply the retention rule. The identified repositories are earmarked for further matching based on the tag criteria. No action is taken on the nonspecified repositories at this stage.|
|Second|Quantity to retain|Set which tags to retain either by specifying a maximum number of tags, or by specifying a maximum period for which to retain tags.|
|Third|Tags to retain|Identify the tag or tags on which to apply the rule. You can identify tags that either have a certain name or name fragment, or that do not have that name or name fragment. Wild cards (for example `*tag`, `tag*`, and `**`) are permitted.|
For information about how the `**` wildcard is applied, see https://github.com/bmatcuk/doublestar#patterns.
#### Example 1
- You have 5 repositories in a project, repositories A to E.
- Repository A has 100 image tags, all of which have been pulled in the last week.
- Repositories B to E each have 6 images, none of which have been pulled in the last month.
- You set the repository filter to `**`, meaning that all repositories in the project are included.
- You set the retention policy to retain the 10 most recently pulled images in each repository.
- You set the tag filter to `**`, meaning that all tags in the repository are included.
In this example the rule retains the 10 most recently pulled images in repository A, and all 6 of the images in each of the 4 repositories B to E. So, a total of 34 image tags are retained in the project. In other words, the rule does not treat all of the images in repositories A to E as a single pool from which to choose the 10 most recent images. So, even if the 11th to 100th tags in repository A have been pulled more recently than any of the tags in repositories B to E, all of the tags in repositories B to E are retained, because each of those repositories has fewer than 10 tags.
#### Example 2
This example uses the same project and repositories as example 1, but sets the retention policy to retain the images in each repository that have been pulled in the last 7 days.
In this case, all of the images in repository A are retained because they have been pulled in the last 7 days. None of the images in repositories B to E are retained, because none of them has been pulled in the last week. In this example, 100 images are retained, as opposed to 34 images in example 1.
#### Tag Retention Rules and Native Docker Tag Deletion
**WARNING**: Due to native Docker tag deletion behavior, there is an issue with the current retention policy implementation. If you have multiple tags that refer to the same SHA digest, and if a subset of these tags are marked for deletion by a configured retention policy, all of the remaining tags would also be deleted. This violates the retention policy, so in this case all of the tags are retained. This issue will be addressed in a future update release, so that tag retention policies can delete tags without deleting the digest and other shared tags.
For example, you have following tags, listed according to their push time, and all of them refer to the same SHA digest:
- `harbor-1.8`, pushed 8/14/2019 12:00am
- `harbor-release`, pushed 8/14/2019 03:00am
- `harbor-nightly`, pushed 8/14/2019 06:00am
- `harbor-latest`, pushed 8/14/2019 09:00am
You configure a retention policy to retain the two latest tags that match `harbor-*`, so that `harbor-1.8` and `harbor-release` are deleted. However, because all four tags refer to the same SHA digest, this policy would also delete the tags `harbor-nightly` and `harbor-latest`, so all tags are retained.
### Combining Rules on a Repository
You can define up to 15 rules per project. You can apply multiple rules to a repository or set of repositories. When you apply multiple rules to a repository, they are applied with `OR` logic rather than with `AND` logic, so the rules are not prioritized for a given repository. Rules run concurrently in the background, and the resulting sets from each rule are combined at the end of the run.
#### Example 3
This example uses the same project and repositories as examples 1 and 2, but sets two rules:
- Rule 1: Retain all of the images in each repository that have been pulled in the last 7 days.
- Rule 2: Retain a maximum number of 10 images in each repository.
For repository A, rule 1 retains all of the images because they have all been pulled in the last week. Rule 2 retains the 10 most recently pulled images. So, since the two rules are applied with an `OR` relationship, all 100 images are retained in repository A.
For repositories B to E, rule 1 retains no images, because none of their images have been pulled in the last week. Rule 2 retains all 6 images in each repository, because 6 is fewer than 10. So, since the two rules are applied with an `OR` relationship, each of repositories B to E keeps all 6 of its images.
In this example, all of the images are retained.
#### Example 4
This example uses a different repository from the previous examples.
- You have a repository that has 12 tags:
|Production|Release Candidate|Release|
|---|---|---|
|`2.1-your_repo-prod`|`2.1-your_repo-rc`|`2.1-your_repo-release`|
|`2.2-your_repo-prod`|`2.2-your_repo-rc`|`2.2-your_repo-release`|
|`3.1-your_repo-prod`|`3.1-your_repo-rc`|`3.1-your_repo-release`|
|`4.4-your_repo-prod`|`4.4-your_repo-rc`|`4.4-your_repo-release`|
- You define three tag retention rules on this repository:
- Retain the 10 most recently pushed image tags that start with `2`.
- Retain the 10 most recently pushed image tags that end with `-prod`.
- Retain all tags that do not include `2.1-your_repo-prod`.
In this example, the rules are applied to the following 7 tags:
- `2.1-your_repo-rc`
- `2.1-your_repo-release`
- `2.2-your_repo-prod`
- `2.2-your_repo-rc`
- `2.2-your_repo-release`
- `3.1-your_repo-prod`
- `4.4-your_repo-prod`
### How Tag Retention Rules Interact with Project Quotas
The system administrator can set a maximum on the number of tags that a project can contain and the amount of storage that it can consume. For information about project quotas, see [Set Project Quotas](#set-project-quotas).
If you set a quota on a project, this quota cannot be exceeded. The quota is applied to a project even if you set a retention rule that would exceed it. In other words, you cannot use retention rules to bypass quotas.
### Configure Tag Retention Rules
1. Select a project and go to the **Tag Retention** tab.
![Tag Retention option](../img/tag-retention1.png)
1. Click **Add Rule** to add a rule.
1. In the **For the repositories** drop-down menu, select **matching** or **excluding**.
![Select repositories](../img/tag-retention2.png)
1. Identify the repositories on which to apply the rule.
You can define the repositories on which to apply the rule by entering the following information:
- A repository name, for example `my_repo_1`.
- A comma-separated list of repository names, for example `my_repo_1,my_repo_2,your_repo_3`.
- A partial repository name with wildcards, for example `my_*`, `*_3`, or `*_repo_*`.
- `**` to apply the rule to all of the repositories in the project.
If you selected **matching**, the rule is applied to the repositories you identified. If you selected **excluding**, the rule is applied to all of the repositories in the project except for the ones that you identified.
1. Define how many tags to retain or the period for which to retain tags.
![Select retention criteria](../img/tag-retention3.png)
|Option|Description|
|---|---|
|**retain the most recently pushed # images**|Enter the maximum number of images to retain, keeping the ones that have been pushed most recently. There is no maximum age for an image.|
|**retain the most recently pulled # images**|Enter the maximum number of images to retain, keeping the ones that have been pulled most recently. There is no maximum age for an image.|
|**retain the images pushed within the last # days**|Enter the number of days to retain images, keeping only the ones that have been pushed during this period. There is no maximum number of images.|
|**retain the images pulled within the last # days**|Enter the number of days to retain images, keeping only the ones that have been pulled during this period. There is no maximum number of images.|
|**retain always**|Always retain the images identified by this rule.|
1. In the **Tags** drop-down menu, select **matching** or **excluding**.
1. Identify the tags on which to apply the rule.
You can define the tags on which to apply the rule by entering the following information:
- A tag name, for example `my_tag_1`.
- A comma-separated list of tag names, for example `my_tag_1,my_tag_2,your_tag_3`.
- A partial tag name with wildcards, for example `my_*`, `*_3`, or `*_tag_*`.
- `**` to apply the rule to all of the tags in the project.
If you selected **matching**, the rule is applied to the tags you identified. If you selected **excluding**, the rule is applied to all of the tags in the repository except for the ones that you identified.
1. Click **Add** to save the rule.
1. (Optional) Click **Add Rule** to add more rules, up to a maximum of 15 per project.
1. (Optional) Under Schedule, click **Edit** and select how often to run the rule.
![Select retention criteria](../img/tag-retention4.png)
If you select **Custom**, enter a cron expression to schedule the rule (see the example after these steps).
**NOTE**: If you define multiple rules, the schedule is applied to all of the rules. You cannot schedule different rules to run at different times.
1. Click **Dry Run** to test the rule or rules that you have defined.
1. Click **Run Now** to run the rule immediately.
**WARNING**: You cannot revert a rule after you run it. It is strongly recommended to perform a dry run before you run rules.
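For reference, a custom schedule might look like the sketch below — a hypothetical value, assuming your deployment accepts six-field cron expressions with a leading seconds field:
```sh
# Hypothetical custom schedule, assuming six-field cron syntax with a leading
# seconds field: run the retention rules daily at midnight.
0 0 0 * * *
```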
To modify an existing rule, click the three vertical dots next to a rule to disable, edit, or delete that rule.
![Modify tag retention rules](../img/tag-retention5.png)
# Implementing Content Trust
**NOTE: Notary is an optional component. Make sure that you have already installed it in your Harbor instance before you go through this section.**
If you want to enable content trust to ensure that images are signed, please set two environment variables in the command line before pushing or pulling any image:
```sh
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://10.117.169.182:4443
```
If you push the image for the first time, you will be asked to enter the root key passphrase. This will be needed every time you push a new image while the ``DOCKER_CONTENT_TRUST`` flag is set.
The root key is generated at: ``/root/.docker/trust/private/root_keys``
You will also be asked to enter a new passphrase for the image. This is generated at ``/root/.docker/trust/private/tuf_keys/[registry name]/[image path]``.
If you are using a self-signed cert, make sure to copy the CA cert into `/etc/docker/certs.d/10.117.169.182` and `$HOME/.docker/tls/10.117.169.182:4443/`. When an image is signed, it is indicated in the Web UI.
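For example — a sketch assuming the self-signed CA certificate is saved locally as `ca.crt`:
```sh
# Copy the CA cert for the Docker client and for content trust (host is a placeholder):
sudo mkdir -p /etc/docker/certs.d/10.117.169.182
sudo cp ca.crt /etc/docker/certs.d/10.117.169.182/
mkdir -p "$HOME/.docker/tls/10.117.169.182:4443"
cp ca.crt "$HOME/.docker/tls/10.117.169.182:4443/"
```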
**Note: Replace "10.117.169.182" with the IP address or domain name of your Harbor node. In order to use content trust, HTTPS must be enabled in Harbor.**
When an image is signed, a tick is shown in the UI; otherwise, a cross (X) is displayed instead.
![browse project](../img/content_trust.png)
# Managing Helm Charts
[Helm](https://helm.sh) is a package manager for [Kubernetes](https://kubernetes.io). Helm uses a packaging format called [charts](https://docs.helm.sh/developing_charts). Since version 1.6.0, Harbor has been a composite cloud-native registry that supports both container image management and Helm chart management. Access to Helm charts in Harbor is controlled by [role-based access controls (RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) and is restricted by projects.
### Manage Helm Charts via portal
#### List charts
After logging in, click your project to enter the project detail page. The existing Helm charts are listed under the `Helm Charts` tab, which is beside the image `Repositories` tab, with the following information:
* Name of the Helm chart
* Status of the chart: Active or Deprecated
* Number of chart versions
* Creation time of the chart
![list charts](../img/chartrepo/list_charts.png)
You can click the icon buttons on the top right to switch views between card view and list view.
#### Upload new chart
Click the `UPLOAD` button on the top left to open the chart uploading dialog. Choose the chart to upload from your filesystem. Click the `UPLOAD` button to upload it to the chart repository server.
![upload charts](../img/chartrepo/upload_charts.png)
If the chart is signed, you can choose the corresponding provenance file from your filesystem and click the `UPLOAD` button to upload them together at once.
If the chart is successfully uploaded, it is displayed in the chart list immediately.
#### List chart versions
Clicking the chart name from the chart list will show all the available versions of that chart with the following information:
* The chart version number
* The maintainers of the chart version
* The template engine used (default is gotpl)
* The creation timestamp of the chart version
![list charts versions](../img/chartrepo/list_chart_versions.png)
There is at least one version for each chart in the chart list. As with the chart list, you can click the icon buttons on the top right to switch between card view and list view.
Select the checkbox in the first column to select specific chart versions:
* Click the `DELETE` button to delete all the selected chart versions from the chart repository server. Batch operation is supported.
* Click the `DOWNLOAD` button to download the chart artifact file. Batch operation is not supported.
* Click the `UPLOAD` button to upload a new chart version for the current chart.
#### Adding labels to and removing labels from chart versions
Users who have system administrator, project administrator or project developer role can click the `ADD LABELS` button to add labels to or remove labels from chart versions.
![add labels to chart versions](../img/chartrepo/add_labesl_to_chart_versions.png)
#### Filtering chart versions by labels
The chart versions can be filtered by labels:
![filter chart versions by labels](../img/chartrepo/filter_chart_versions_by_label.png)
#### View chart version details
Clicking the chart version number link will open the chart version details view. You can see more details about the specified chart version here. There are three content sections:
* **Summary:**
* Readme of the chart
* Overall metadata such as home, created timestamp, and application version
* Related Helm commands for reference, such as `helm repo add` and `helm install`
![chart details](../img/chartrepo/chart_details.png)
* **Dependencies:**
* Lists all the dependent subcharts with 'name', 'version' and 'repository' fields
![chart dependencies](../img/chartrepo/chart_dependencies.png)
* **Values:**
* Displays the content of the `values.yaml` file with highlighted code preview
* Click the icon buttons on the top right to switch between the YAML file view and the key-value pair list view
![chart values](../img/chartrepo/chart_values.png)
Clicking the `DOWNLOAD` button on the top right will start the downloading process.
### Working with Helm CLI
As a Helm chart repository, Harbor can work smoothly with the Helm CLI. For how to install the Helm CLI, refer to [install helm](https://docs.helm.sh/using_helm/#installing-helm). Run the command `helm version` to make sure the version of the Helm CLI is v2.9.1 or later.
```
helm version
#Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
#Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
```
#### Add Harbor to the repository list
Before working with Harbor, add it to the repository list with the `helm repo add` command. Two different modes are supported.
* Add Harbor as a unified single index entry point
In this mode, Helm is made aware of all the charts located in the different projects that are accessible by the currently authenticated user.
```
helm repo add --ca-file ca.crt --username=admin --password=Passw0rd myrepo https://xx.xx.xx.xx/chartrepo
```
**NOTE:** Having to provide both the CA file and cert files is due to an issue in Helm.
* Add a Harbor project as a separate index entry point
With this mode, helm can only pull charts in the specified project.
```
helm repo add --ca-file ca.crt --username=admin --password=Passw0rd myrepo https://xx.xx.xx.xx/chartrepo/myproject
```
#### Push charts to the repository server by CLI
Alternatively, you can upload charts via the CLI. This is not supported by the native Helm CLI; a plugin from the community must be installed before pushing. Run `helm plugin install` to install the `push` plugin first.
```
helm plugin install https://github.com/chartmuseum/helm-push
```
After a successful installation, run the `push` command to upload your charts:
```
helm push --ca-file=ca.crt --username=admin --password=passw0rd chart_repo/hello-helm-0.1.0.tgz myrepo
```
**NOTE:** The `push` command does not yet support pushing a prov file of a signed chart.
#### Install charts
Before installing, make sure Helm is correctly initialized with the `helm init` command and that the chart index is synchronized with the `helm repo update` command.
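For example — a minimal sketch assuming a Helm v2 client where Tiller is already installed in the cluster:
```sh
# Initialize only the Helm client (skips installing Tiller) and refresh the chart index:
helm init --client-only
helm repo update
```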
Search the chart with the keyword if you're not sure where it is:
```
helm search hello
#NAME CHART VERSION APP VERSION DESCRIPTION
#local/hello-helm 0.3.10 1.3 A Helm chart for Kubernetes
#myrepo/chart_repo/hello-helm 0.1.10 1.2 A Helm chart for Kubernetes
#myrepo/library/hello-helm 0.3.10 1.3 A Helm chart for Kubernetes
```
When everything is ready, install the chart to your Kubernetes cluster:
```
helm install --ca-file=ca.crt --username=admin --password=Passw0rd --version 0.1.10 repo248/chart_repo/hello-helm
```
For more Helm commands, such as how to sign a chart, refer to the [helm doc](https://docs.helm.sh/helm/#helm).
# Managing labels
Harbor provides two kinds of labels to classify kinds of resources (only images for now):
* **Global Level Label**: Managed by system administrators and used to manage the images of the whole system. They can be added to images under any project.
* **Project Level Label**: Managed by project administrators under a project and can only be added to the images of that project.
### Managing global level labels
The system administrators can list, create, update and delete the global level labels under `Administration->Configuration->Labels`:
![manage global level labels](../img/manage_global_level_labels.png)
### Managing project level labels
The project administrators and system administrators can list, create, update and delete the project level labels under the `Labels` tab of the project detail page:
![manage project level labels](../img/manage_project_level_labels.png)
### Adding labels to and removing labels from images
Users who have the system administrator, project administrator or project developer role can click the `ADD LABELS` button to add labels to or remove labels from images. The label list contains both global level labels (listed first) and project level labels:
![add labels to images](../img/add_labels_to_images.png)
### Filtering images by labels
The images can be filtered by labels:
![filter images by labels](../img/filter_images_by_label.png)
# Managing projects
A project in Harbor contains all repositories of an application. No images can be pushed to Harbor before the project is created. RBAC is applied to a project. There are two types of projects in Harbor:
* **Public**: All users have read privileges for a public project. This is a convenient way to share some repositories with others.
* **Private**: A private project can only be accessed by users with proper privileges.
You can create a project after you have signed in. Selecting the "Access Level" checkbox makes the project public.
![create project](../img/new_create_project.png)
After the project is created, you can browse repositories, members, logs, replication and configuration using the navigation tab.
![browse project](../img/new_browse_project.png)
There are two views to show repositories, list view and card view. You can switch between them by clicking the corresponding icon.
![browse repositories](../img/browse_project_repositories.png)
All logs can be listed by clicking "Logs". You can apply a filter by username, or by operations and dates, under "Advanced Search".
![browse project](../img/log_search_advanced.png)
![browse project](../img/new_project_log.png)
Project properties can be changed by clicking "Configuration".
* To make all repositories under the project accessible to everyone, select the `Public` checkbox.
* To prevent unsigned images under the project from being pulled, select the `Enable content trust` checkbox.
* To prevent vulnerable images under the project from being pulled, select the `Prevent vulnerable images from running` checkbox and set the severity level of vulnerabilities. Images cannot be pulled if their vulnerability level is equal to or higher than the currently selected level.
* To activate an immediate vulnerability scan on new images that are pushed to the project, select the `Automatically scan images on push` checkbox.
![browse project](../img/project_configuration.png)
## Managing members of a project
### Adding members
You can add members with different roles to an existing project. Under LDAP/AD authentication mode, you can add an LDAP/AD user to the project members.
![browse project](../img/new_add_member.png)
### Updating and removing members
You can select one or more members, then click `ACTION` and choose a role to switch the selected members' roles in batch, or remove them from the project.
![browse project](../img/new_remove_update_member.png)
## Searching projects and repositories
Entering a keyword in the search field at the top lists all matching projects and repositories. The search result includes both public and private repositories you have access to.
![browse project](../img/new_search.png)
## Build history
Build history makes it easy to see the contents of a container image, find the code that builds an image, or locate the image for a source repository.
In the Harbor portal, enter your project, select the repository, and click the tag name whose build history you want to see. The detail page opens. Switch to the `Build History` tab to see the build history information.
![build_ history](../img/build_history.png)
# Permissions
Users have different abilities depending on the role they have in a project.
In public projects, all users are able to see the list of repositories, images, image vulnerabilities, Helm charts and Helm chart versions; pull images; retag images (push permission for the destination image is required); and download Helm charts and Helm chart versions.
The system administrator has all permissions for the project.
## Project members permissions
The following table depicts the various user permission levels in a project.
| Action | Guest | Developer | Master | Project Admin |
| --------------------------------------- | ----- | --------- | ------ | ------------- |
| See the project configurations | ✓ | ✓ | ✓ | ✓ |
| Edit the project configurations | | | | ✓ |
| See a list of project members | ✓ | ✓ | ✓ | ✓ |
| Create/edit/delete project members | | | | ✓ |
| See a list of project logs | ✓ | ✓ | ✓ | ✓ |
| See a list of project replications | | | ✓ | ✓ |
| See a list of project replication jobs | | | | ✓ |
| See a list of project labels | | | ✓ | ✓ |
| Create/edit/delete project labels | | | ✓ | ✓ |
| See a list of repositories | ✓ | ✓ | ✓ | ✓ |
| Create repositories | | ✓ | ✓ | ✓ |
| Edit/delete repositories | | | ✓ | ✓ |
| See a list of images | ✓ | ✓ | ✓ | ✓ |
| Retag image | ✓ | ✓ | ✓ | ✓ |
| Pull image | ✓ | ✓ | ✓ | ✓ |
| Push image | | ✓ | ✓ | ✓ |
| Scan/delete image | | | ✓ | ✓ |
| See a list of image vulnerabilities | ✓ | ✓ | ✓ | ✓ |
| See image build history | ✓ | ✓ | ✓ | ✓ |
| Add/Remove labels of image | | ✓ | ✓ | ✓ |
| See a list of helm charts | ✓ | ✓ | ✓ | ✓ |
| Download helm charts | ✓ | ✓ | ✓ | ✓ |
| Upload helm charts | | ✓ | ✓ | ✓ |
| Delete helm charts | | | ✓ | ✓ |
| See a list of helm chart versions | ✓ | ✓ | ✓ | ✓ |
| Download helm chart versions | ✓ | ✓ | ✓ | ✓ |
| Upload helm chart versions | | ✓ | ✓ | ✓ |
| Delete helm chart versions | | | ✓ | ✓ |
| Add/Remove labels of helm chart version | | ✓ | ✓ | ✓ |
| See a list of project robots | | | ✓ | ✓ |
| Create/edit/delete project robots | | | | ✓ |
| See configured CVE whitelist | ✓ | ✓ | ✓ | ✓ |
| Create/edit/remove CVE whitelist | | | | ✓ |
| Enable/disable webhooks | | ✓ | ✓ | ✓ |
| Create/delete tag retention rules | | ✓ | ✓ | ✓ |
| Enable/disable tag retention rules | | ✓ | ✓ | ✓ |
| See project quotas | ✓ | ✓ | ✓ | ✓ |
| Edit project quotas | | | | |
# Pulling and pushing images using Docker client
**NOTE: Harbor only supports Registry V2 API. You need to use Docker client 1.6.0 or higher.**
Harbor supports HTTP by default, while the Docker client tries to connect to a registry using HTTPS first. If you encounter the error below when you pull or push images, you need to configure the registry as an insecure registry. Read [this document](https://docs.docker.com/registry/insecure/) to understand how to do this.
```Error response from daemon: Get https://myregistrydomain.com/v1/users/: dial tcp myregistrydomain.com:443 getsockopt: connection refused.```
If this private registry supports only HTTP, or HTTPS with an unknown CA certificate, add
`--insecure-registry myregistrydomain.com` to the daemon's startup arguments.
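Alternatively — a sketch assuming a systemd-based Docker host — you can declare the insecure registry in the daemon configuration file and restart the daemon:
```sh
# Add the registry to /etc/docker/daemon.json (create the file if it does not exist):
#   {
#     "insecure-registries": ["myregistrydomain.com"]
#   }
sudo systemctl restart docker
```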
In the case of HTTPS, if you have access to the registry's CA certificate, simply place the CA certificate at `/etc/docker/certs.d/myregistrydomain.com/ca.crt`.
### Pulling images
If the project that the image belongs to is private, you should sign in first:
```sh
$ docker login 10.117.169.182
```
You can now pull the image:
```sh
$ docker pull 10.117.169.182/library/ubuntu:14.04
```
**Note: Replace "10.117.169.182" with the IP address or domain name of your Harbor node. You cannot pull an unsigned image if you have enabled content trust.**
### Pushing images
Before pushing an image, you must create a corresponding project in the Harbor web UI.
First, log in from Docker client:
```sh
$ docker login 10.117.169.182
```
Tag the image:
```sh
$ docker tag ubuntu:14.04 10.117.169.182/demo/ubuntu:14.04
```
Push the image:
```sh
$ docker push 10.117.169.182/demo/ubuntu:14.04
```
**Note: Replace "10.117.169.182" with the IP address or domain name of your Harbor node.**
### Add description to repositories
After pushing an image, the project admin can add information to describe the repository.
Go into the repository, select the "Info" tab, and click the "EDIT" button. A text area appears in which you can enter a description. Click the "SAVE" button to save it.
![edit info](../img/edit_description.png)
### Download the Harbor certificate
Users can click the "registry certificate" link to download the registry certificate.
![browse project](../img/download_harbor_certs.png)
### Deleting repositories
Repository deletion runs in two steps.
First, delete a repository in Harbor's UI. This is soft deletion. You can delete the entire repository or just one of its tags. After the soft deletion,
the repository is no longer managed in Harbor; however, the repository's files still remain in Harbor's storage.
![browse project](../img/new_delete_repo.png)
![browse project](../img/new_delete_tag.png)
**CAUTION: If both tag A and tag B refer to the same image, tag B is also deleted when you delete tag A. If you have enabled content trust, you need to use the notary command line tool to delete the tag's signature before you delete the image.**
Next, delete the actual files of the repository using the [garbage collection](#online-garbage-collection) in Harbor's UI.
## Pull image from Harbor in Kubernetes
Kubernetes users can easily deploy pods with images stored in Harbor. The settings are similar to those of any other private registry. There are two major issues:
1. If your Harbor instance serves HTTP, or HTTPS with a self-signed certificate, you need to modify daemon.json on each worker node of your cluster. For details, refer to: https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry
2. If your pod references an image under a private project, you need to create a secret with the credentials of a user who has permission to pull images from the project (see the sketch below this list). For details, refer to: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
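A sketch of the second point, with a placeholder host, secret name, and credentials:
```sh
# Create an image pull secret, then reference it from the pod spec via imagePullSecrets:
kubectl create secret docker-registry harbor-cred \
  --docker-server=10.117.169.182 \
  --docker-username=<user> \
  --docker-password=<password>
```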
# Replicating Resources
Replication allows users to replicate resources (images/charts) between Harbor and non-Harbor registries, in both pull and push modes.
Once the system administrator has set a rule, all resources that match the defined [filter](#resource-filter) patterns will be replicated to the destination registry when the [triggering condition](#trigger-mode) is matched. A separate task is started for each resource. If the namespace does not exist on the destination registry, a new namespace will be created automatically. If it already exists and the user configured in the policy has no write privilege to it, the process will fail. The member information will not be replicated.
There may be a bit of delay during replication depending on the network conditions. If a replication task fails, it is re-scheduled a few minutes later and retried several times.
**Note:** Due to API changes, replication between different versions of Harbor is not supported.
### Creating replication endpoints
To replicate image repositories from one instance of Harbor to another Harbor or non-Harbor registry, you first create replication endpoints.
1. Go to **Registries** and click the **+ New Endpoint** button.
![New replication endpoint](../img/replication-endpoint1.png)
1. For **Provider**, use the drop-down menu to select the type of registry to set up as a replication endpoint.
The endpoint can be another Harbor instance, or a non-Harbor registry. Currently, the following non-Harbor registries are supported:
- Docker Hub
- Docker registry
- AWS Elastic Container Registry
- Azure Container Registry
- Ali Cloud Container Registry
- Google Container Registry
- Huawei SWR
- Helm Hub
![Replication providers](../img/replication-endpoint2.png)
1. Enter a suitable name and description for the new replication endpoint.
1. Enter the full URL of the registry to set up as a replication endpoint.
For example, to replicate to another Harbor instance, enter https://harbor_instance_address:443. The registry must exist and be running before you create the endpoint.
1. Enter the Access ID and Access Secret for the endpoint registry instance.
Use an account that has the appropriate privileges on that registry, or an account that has write permission on the corresponding project in a Harbor registry.
**NOTES**:
- AWS ECR adapters should use access keys, not a username and password. The access key should have sufficient permissions, such as storage permission.
- Google GCR adapters should use the entire JSON key generated in the service account. The namespace should start with the project ID.
1. Optionally, select the **Verify Remote Cert** check box.
Deselect the check box if the remote registry uses a self-signed or untrusted certificate.
1. Click **Test Connection**.
1. When you have successfully tested the connection, click **OK**.
### Creating a replication rule
Log in as a system administrator, click `NEW REPLICATION RULE` under `Administration->Replications` and fill in the necessary fields. You can choose different replication modes, [resource filters](#resource-filter) and [trigger modes](#trigger-mode) according to your requirements. If there is no endpoint available in the list, follow the instructions in [Creating replication endpoints](#creating-replication-endpoints) to create one. Click `SAVE` to create a replication rule.
![browse project](../img/create_rule.png)
#### Resource filter
Three resource filters are supported:
* **Name**: Filter resources according to the name.
* **Tag**: Filter resources according to the tag.
* **Resource**: Filter images according to the resource type.
The terms supported in the patterns used by the name filter and tag filter are as follows:
* **\***: Matches any sequence of characters that does not contain the path separator `/`.
* **\*\***: Matches any sequence of characters, including the path separator `/`.
* **?**: Matches any single character other than the path separator `/`.
* **{alt1,...}**: Matches a sequence of characters if one of the comma-separated alternatives matches.
**Note:** `library` must be added if you want to replicate the official images of Docker Hub. For example, `library/hello-world` matches the official hello-world images.
Pattern | Sample string (match or not)
---------- | -------
`library/*` | `library/hello-world`(Y)<br> `library/my/hello-world`(N)
`library/**` | `library/hello-world`(Y)<br> `library/my/hello-world`(Y)
`{library,goharbor}/**` | `library/hello-world`(Y)<br> `goharbor/harbor-core`(Y)<br> `google/hello-world`(N)
`1.?` | `1.0`(Y)<br> `1.01`(N)
#### Trigger mode
* **Manual**: Replicate the resources manually when needed. **Note**: The deletion operations are not replicated.
* **Scheduled**: Replicate the resources periodically. **Note**: The deletion operations are not replicated.
* **Event Based**: When a new resource is pushed to the project, it is replicated to the remote registry immediately. The same applies to the deletion operation if the `Delete remote resources when locally deleted` checkbox is selected.
### Starting a replication manually
Select a replication rule and click `REPLICATE`. The resources to which the rule applies are replicated from the source registry to the destination immediately.
![browse project](../img/start_replicate.png)
### Listing and stopping replication executions
Click a rule to list the execution records that belong to it. Each record represents the summary of one execution of the rule. Click `STOP` to stop the executions that are in progress.
![browse project](../img/list_stop_executions.png)
### Listing tasks
Click the ID of an execution to see the execution summary and the task list. Click the log icon to get detailed information about the replication progress.
**Note**: The count of `IN PROGRESS` status in the summary includes both `Pending` and `In Progress` tasks.
![browse project](../img/list_tasks.png)
### Deleting the replication rule
Select the replication rule and click `DELETE` to delete it. Only rules that have no in-progress executions can be deleted.
![browse project](../img/delete_rule.png)
# Retagging Images
Retagging helps users to tag images within Harbor. Images can be retagged to different repositories and projects, as long as the user has sufficient permissions. For example:
```
release/app:stg --> release/app:prd
develop/app:v1.0 --> release/app:v1.0
```
To retag an image, users should have read permission (guest role or above) to the source project and write permission (developer role or above) to the target project.
In the Harbor portal, select the image you'd like to retag, and click the `Retag` button to open the retag dialog.
![retag image](../img/retag_image.png)
In the retag dialog, specify the project name, the repository name, and the new tag. When you click the `CONFIRM` button, the new tag is created instantly. You can check the new tag in the corresponding project.
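For reference, the same effect can be approximated with the Docker client — a sketch with a placeholder host and repository names; note that the portal retag happens server-side and does not require pulling the image:
```sh
# Pull, retag, and push the image under the new repository and tag:
docker pull 10.117.169.182/develop/app:v1.0
docker tag 10.117.169.182/develop/app:v1.0 10.117.169.182/release/app:v1.0
docker push 10.117.169.182/release/app:v1.0
```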
### Setup
In `harbor.yml`, make sure HTTPS is enabled and that the attributes `ssl_cert` and `ssl_cert_key` point to valid certificates. For more information about generating an HTTPS certificate, refer to [Configuring HTTPS for Harbor](configure_https.md).
### Copy Root Certificate
Suppose the Harbor instance is hosted on a machine at `192.168.0.5`.
If you are using a self-signed certificate, make sure to copy the CA root cert to `/etc/docker/certs.d/192.168.0.5/` and `~/.docker/tls/192.168.0.5:4443/`.
### Enable Docker Content Trust
Enable Docker Content Trust by setting environment variables:
```
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://192.168.0.5:4443
```
### Set an alias for notary (optional)
By default, the notary client stores its metadata files in a different local directory from the Docker client. If you want to use the notary client to manipulate the keys and metadata files generated by Docker Content Trust, set the following alias to reduce the effort:
```
alias notary="notary -s https://192.168.0.5:4443 -d ~/.docker/trust --tlscacert /etc/docker/certs.d/192.168.0.5/ca.crt"
```
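With the alias in place you can, for example, list the trust data for a signed repository (the repository name below is a placeholder):
```sh
# List the signed tags for a repository using the aliased notary client:
notary list 192.168.0.5/library/ubuntu
```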
# User Guide
## Overview
This guide walks you through the fundamentals of using Harbor. You'll learn how to use Harbor to:
* [Manage your projects](#managing-projects)
* [Manage members of a project](#managing-members-of-a-project)
* [Replicate resources between Harbor and non-Harbor registries](#replicating-resources)
* [Retag images within Harbor](#retag-images)
* [Search projects and repositories](#searching-projects-and-repositories)
* [Manage labels](#managing-labels)
* [Configure CVE Whitelists](#configure-cve-whitelists)
* [Set Project Quotas](#set-project-quotas)
* [Manage Harbor system if you are the system administrator:](#administrator-options)
* [Manage users](#managing-user)
* [Manage registries](#managing-registry)
* [Manage replication rules](#managing-replication)
* [Manage authentication](#managing-authentication)
* [Manage project creation](#managing-project-creation)
* [Manage self-registration](#managing-self-registration)
* [Manage email settings](#managing-email-settings)
* [Manage registry read only](#managing-registry-read-only)
* [Manage role by LDAP group](#managing-role-by-ldap-group)
* [Pull and push images using Docker client](#pulling-and-pushing-images-using-docker-client)
* [Add description to repositories](#add-description-to-repositories)
* [Delete repositories and images](#deleting-repositories)
* [Content trust](#content-trust)
* [Vulnerability scanning via Clair](#vulnerability-scanning-via-clair)
* [Pull image from Harbor in Kubernetes](#pull-image-from-harbor-in-kubernetes)
* [Manage Helm Charts](#manage-helm-charts)
* [Manage Helm Charts via portal](#manage-helm-charts-via-portal)
* [Working with Helm CLI](#working-with-helm-cli)
* [Online Garbage Collection](#online-garbage-collection)
* [View build history](#build-history)
* [Using CLI after login via OIDC based SSO](#using-oidc-cli-secret)
* [Manage robot account of a project](#robot-account)
* [Tag Retention Rules](#tag-retention-rules)
* [Webhook Notifications](#webhook-notifications)
* [Using API Explorer](#api-explorer)
# Using the API Explorer
Harbor has integrated the Swagger UI since version 1.8, which means that all APIs can be invoked through the UI. There are two ways to navigate to the API Explorer.
1. Log in to Harbor and click the "API EXPLORER" button. All APIs are invoked with the current user's authorization.
![navigation bar](../img/api_explorer_btn.png)
2. Navigate to the Swagger page directly via the "devcenter" route on the Harbor address. For example: https://10.192.111.118/devcenter. After reaching the page, click the "authorize" button to supply basic authentication credentials for all APIs. All APIs are then invoked with the authorized user's authorization.
![authentication](../img/authorize.png)
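You can also call the same APIs outside the Explorer — a sketch using basic authentication with a placeholder host and credentials (`-k` skips certificate verification for self-signed setups):
```sh
# List projects via the Harbor REST API; host and credentials are placeholders:
curl -k -u admin:Harbor12345 https://10.192.111.118/api/projects
```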