Refresh the status of the execution on every task status change to support filtering executions by status directly
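A minimal sketch of the idea (TaskDAO, ExecutionDAO and RefreshStatus are illustrative names, not the actual Harbor types): whenever a task's status is persisted, the owning execution's status is recalculated and stored, so executions can be filtered with a plain status column query.

```go
package task

import "context"

// Task is a minimal stand-in for the persisted task record.
type Task struct {
	ID          int64
	ExecutionID int64
	Status      string
}

type TaskDAO interface {
	Get(ctx context.Context, id int64) (*Task, error)
	UpdateStatus(ctx context.Context, id int64, status string) error
}

type ExecutionDAO interface {
	// RefreshStatus recalculates the execution status from its tasks
	// and writes it back to the executions table.
	RefreshStatus(ctx context.Context, executionID int64) error
}

type Manager struct {
	taskDAO      TaskDAO
	executionDAO ExecutionDAO
}

// UpdateStatus persists the task status and immediately refreshes the
// stored status of the owning execution.
func (m *Manager) UpdateStatus(ctx context.Context, taskID int64, status string) error {
	if err := m.taskDAO.UpdateStatus(ctx, taskID, status); err != nil {
		return err
	}
	t, err := m.taskDAO.Get(ctx, taskID)
	if err != nil {
		return err
	}
	return m.executionDAO.RefreshStatus(ctx, t.ExecutionID)
}
```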
Signed-off-by: Wenkai Yin <yinw@vmware.com>
fixes #12720
The GC job doesn't remove the manifest of schema1.MediaTypeSignedManifest as it's not recognized by the GC job.
Signed-off-by: wang yan <wangyan@vmware.com>
1. Accept vendorType and vendorID when creating the schedule
2. Provide more methods in the scheduler interface to reduce the duplicated work of callers (sketched below)
3. Use a new ormer and transaction when creating the schedule
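A rough sketch of the scheduler surface this describes; the method names and signatures here are illustrative, not the exact Harbor API:

```go
package scheduler

import "context"

// Schedule is a persisted schedule record tagged with the vendor that owns it.
type Schedule struct {
	ID         int64
	VendorType string // e.g. "GARBAGE_COLLECTION", "P2P_PREHEAT"
	VendorID   int64  // e.g. the ID of the policy the schedule belongs to
	CRON       string
}

// Scheduler lets callers create, query and remove schedules by vendor so
// each caller no longer needs its own schedule-ID bookkeeping.
type Scheduler interface {
	// Schedule creates a schedule for the given vendor and returns its ID.
	Schedule(ctx context.Context, vendorType string, vendorID int64, cron string,
		callbackFuncName string, params interface{}) (int64, error)
	// UnScheduleByID removes the schedule with the specified ID.
	UnScheduleByID(ctx context.Context, id int64) error
	// UnScheduleByVendor removes the schedule owned by the specified vendor.
	UnScheduleByVendor(ctx context.Context, vendorType string, vendorID int64) error
	// GetSchedule returns the schedule owned by the specified vendor.
	GetSchedule(ctx context.Context, vendorType string, vendorID int64) (*Schedule, error)
}
```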
Signed-off-by: Wenkai Yin <yinw@vmware.com>
- fix NPE issue in create/update policy
- fix issue of missing schedule job id in the preheat policy
Signed-off-by: Steven Zou <szou@vmware.com>
- increase the client timeout
1. Fix a typo in the update blob status SQL; the typo does not affect the SQL result.
2. Correct the blob status in the middleware & GC job log.
Signed-off-by: wang yan <wangyan@vmware.com>
- add job stop check points in the preheat job (see the sketch below)
- add missing digest property for the preheat request sent to the provider
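The stop check points amount to polling for a stop command between the expensive steps of the job, so a stop request takes effect promptly. A minimal sketch, with ShouldStop standing in for the job service's real stop-command check:

```go
package preheat

import "errors"

var ErrStopped = errors.New("job stopped")

// JobContext is a stand-in for the job execution context.
type JobContext interface {
	// ShouldStop reports whether a stop command has been received.
	ShouldStop() bool
}

type step func() error

// runWithCheckpoints executes the job steps in order, checking for a stop
// command before each one.
func runWithCheckpoints(ctx JobContext, steps ...step) error {
	for _, s := range steps {
		if ctx.ShouldStop() {
			return ErrStopped
		}
		if err := s(); err != nil {
			return err
		}
	}
	return nil
}
```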
Signed-off-by: Steven Zou <szou@vmware.com>
- fix invalid data type of vulnerability filter param
- add more debug logs
- add more logs in the preheat job
- fix issue of getting an empty list when querying artifacts
Signed-off-by: Steven Zou <szou@vmware.com>
* add debugging env for GC time window
For debugging, testers/users want to run GC to delete removed artifacts immediately instead of waiting for two hours, so add the env var GC_BLOB_TIME_WINDOW to support this.
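A minimal sketch of how such a knob could be read, assuming the value is given in hours and falls back to the default two-hour window; the parsing details are an assumption, not the exact Harbor implementation:

```go
package gc

import (
	"os"
	"strconv"
	"time"
)

const defaultTimeWindowHours = 2

// blobTimeWindow returns how long a blob must have been unreferenced
// before GC is allowed to pick it up.
func blobTimeWindow() time.Duration {
	if v := os.Getenv("GC_BLOB_TIME_WINDOW"); v != "" {
		if hours, err := strconv.Atoi(v); err == nil && hours >= 0 {
			return time.Duration(hours) * time.Hour
		}
	}
	return defaultTimeWindowHours * time.Hour
}
```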
Signed-off-by: wang yan <wangyan@vmware.com>
1. The update blob status method should update the version of the blob object as well; otherwise the GC job cannot handle the blob status transition (none -> delete -> deleting -> deletefailed), because the method uses version equality as the query condition (see the sketch below).
2. A blob that has been marked as deleting for more than 2 hours should be set to deletefailed in the head blob & put manifest requests.
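A sketch of the version-guarded update described in point 1 (illustrative, not the exact Harbor query): the WHERE clause matches the version that was read, and the version is bumped together with the status, so a stale writer updates zero rows.

```go
package blob

import (
	"context"
	"database/sql"
)

// updateStatus moves the blob to the new status only if nobody else has
// updated it since it was read (optimistic locking via the version column).
// It returns the number of rows actually updated; 0 means the caller raced
// with another writer and should re-read the blob.
func updateStatus(ctx context.Context, db *sql.DB, blobID, version int64, status string) (int64, error) {
	res, err := db.ExecContext(ctx,
		`UPDATE blob SET status = $1, version = version + 1, update_time = NOW()
		 WHERE id = $2 AND version = $3`,
		status, blobID, version)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}
```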
Signed-off-by: wang yan <wangyan@vmware.com>
This commit provides the secret manager for proxy cache.
The secret is used for pushing blobs to the local registry when content is proxied from a remote registry.
Each secret can be used only once and has a relatively short expiration
time.
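A minimal sketch of such a one-time, short-lived secret store; the names and the TTL value are assumptions, not the actual implementation:

```go
package secret

import (
	"crypto/rand"
	"encoding/hex"
	"sync"
	"time"
)

const ttl = 2 * time.Minute // assumption: "relatively short expiration"

type entry struct {
	repository string
	expiresAt  time.Time
}

type Manager struct {
	mu      sync.Mutex
	secrets map[string]entry
}

func NewManager() *Manager {
	return &Manager{secrets: map[string]entry{}}
}

// Generate issues a secret bound to a repository.
func (m *Manager) Generate(repository string) (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	s := hex.EncodeToString(buf)
	m.mu.Lock()
	m.secrets[s] = entry{repository: repository, expiresAt: time.Now().Add(ttl)}
	m.mu.Unlock()
	return s, nil
}

// Verify checks the secret against the repository and consumes it, so a
// secret can be used at most once and only before it expires.
func (m *Manager) Verify(secret, repository string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	e, ok := m.secrets[secret]
	if !ok {
		return false
	}
	delete(m.secrets, secret) // single use
	return e.repository == repository && time.Now().Before(e.expiresAt)
}
```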
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
Two phases:
1. Mark: select the GC candidates based on the DB and mark them with status delete.
2. Sweep: select each candidate, mark it with status deleting, and remove it from the backend storage and the database.
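In pseudocode form, the two phases look roughly like this; the manager and registry method names are placeholders, not the real Harbor interfaces:

```go
package gc

import "context"

type Blob struct {
	ID     int64
	Digest string
	Status string
}

type blobManager interface {
	// FindCandidates returns non-referenced blobs eligible for GC.
	FindCandidates(ctx context.Context) ([]*Blob, error)
	UpdateStatus(ctx context.Context, b *Blob, status string) error
	Delete(ctx context.Context, id int64) error
}

type registryDriver interface {
	DeleteBlob(digest string) error
}

// run executes the mark phase followed by the sweep phase.
func run(ctx context.Context, mgr blobManager, reg registryDriver) error {
	// Phase 1: mark. Select candidates from the DB and set them to "delete".
	candidates, err := mgr.FindCandidates(ctx)
	if err != nil {
		return err
	}
	for _, b := range candidates {
		if err := mgr.UpdateStatus(ctx, b, "delete"); err != nil {
			return err
		}
	}
	// Phase 2: sweep. Set each candidate to "deleting", then remove it from
	// the backend storage and from the database.
	for _, b := range candidates {
		if err := mgr.UpdateStatus(ctx, b, "deleting"); err != nil {
			continue // skip blobs whose status could not be flipped (e.g. changed concurrently)
		}
		if err := reg.DeleteBlob(b.Digest); err != nil {
			return err
		}
		if err := mgr.Delete(ctx, b.ID); err != nil {
			return err
		}
	}
	return nil
}
```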
Signed-off-by: wang yan <wangyan@vmware.com>
- use real provider instance manager
- move mock instance manager to testing/pkg
- modify kraken deriver implementation to remove digest fetcher
- update related UT cases
Signed-off-by: Steven Zou <szou@vmware.com>
Add support for listing blobs by update time.
As GC introduces the time window, it needs to list the blobs updated before a specific time.
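A sketch of what the listing API could look like with such a filter; all names here are assumptions, not the actual blob manager interface:

```go
package blob

import (
	"context"
	"time"
)

// Blob is a minimal stand-in for the persisted blob record.
type Blob struct {
	ID         int64
	Digest     string
	UpdateTime time.Time
}

// ListParams carries the optional filters for listing blobs.
type ListParams struct {
	// UpdateTimeBefore, when set, restricts the result to blobs whose
	// update_time is earlier than the given point in time.
	UpdateTimeBefore *time.Time
}

// Manager exposes the listing call GC would use, e.g.
//   cutoff := time.Now().Add(-2 * time.Hour)
//   blobs, err := mgr.List(ctx, ListParams{UpdateTimeBefore: &cutoff})
type Manager interface {
	List(ctx context.Context, params ListParams) ([]*Blob, error)
}
```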
Signed-off-by: wang yan <wangyan@vmware.com>
- define instance's api
- define extension models for api
- implement preheat controller
- implement preheat manager
- most code is picked up from the original P2P feat branch
Signed-off-by: fanjiankong <fanjiankong@tencent.com>
- define policy enforcer interface
- implement the default enforcer
- register the P2P preheat job with JS
- add the missing mock manager & controller in the src/testing pkg
- Add UT cases for enforcer
- fix #12285
- left one TODO: query provider instance by instance Manager
Signed-off-by: Steven Zou <szou@vmware.com>
* add get GC candidate
Select non-referenced blobs from the blob table and exclude the ones inside the time window.
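Roughly the shape of that candidate query (illustrative, not the exact SQL Harbor runs):

```go
package gc

// candidateSQL selects blobs that no project references and that have not
// been updated inside the GC time window ($1 = cut-off timestamp).
const candidateSQL = `
SELECT b.id, b.digest, b.size
  FROM blob AS b
 WHERE b.update_time < $1            -- outside the GC time window
   AND NOT EXISTS (                  -- not referenced by any project
         SELECT 1
           FROM project_blob AS pb
          WHERE pb.blob_id = b.id)
`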
Signed-off-by: wang yan <wangyan@vmware.com>
- add new selector based on vulnerability severity criteria (see the sketch below)
- add new selector based on signature (signed) criteria
- change the selector factory method definition
- make changes to the selector.Candidate model
- add preheat policy filter interface and default implementation
- add UT cases to cover new code
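A simplified sketch of a severity-based selector in the spirit of the changes above; the real Candidate model and selector factory carry more fields and a different signature:

```go
package selector

// Candidate is a simplified stand-in for the artifact candidate model.
type Candidate struct {
	Repository string
	Tags       []string
	// Severity of the worst vulnerability found for the candidate artifact.
	VulnerabilitySeverity int
	Signed                bool
}

type Selector interface {
	Select(candidates []*Candidate) ([]*Candidate, error)
}

// severitySelector keeps candidates whose worst severity does not exceed the
// threshold, e.g. "only preheat images with nothing worse than Medium".
type severitySelector struct {
	maxSeverity int
}

func (s *severitySelector) Select(candidates []*Candidate) ([]*Candidate, error) {
	var out []*Candidate
	for _, c := range candidates {
		if c.VulnerabilitySeverity <= s.maxSeverity {
			out = append(out, c)
		}
	}
	return out, nil
}
```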
Signed-off-by: Steven Zou <szou@vmware.com>
misspelling
- define preheat driver interface (see the sketch below)
- implement dragonfly driver
- implement kraken driver
- add related UT cases with testify framework
- fix #10870 #10871
- some code is picked up from the original P2P feat branch
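A simplified sketch of the driver abstraction; the real interface in Harbor has more methods and richer request/response types:

```go
package provider

// PreheatImage describes one image to push into the P2P provider's cache.
type PreheatImage struct {
	Type      string // e.g. "image"
	URL       string // registry URL the provider should pull from
	ImageName string
	Tag       string
	Digest    string
}

// Driver is implemented once per P2P backend (e.g. Dragonfly, Kraken).
type Driver interface {
	// GetHealth checks whether the provider instance is reachable.
	GetHealth() error
	// Preheat asks the provider to warm its cache with the given image and
	// returns a task ID that can be polled for status.
	Preheat(image *PreheatImage) (taskID string, err error)
	// CheckProgress queries the status of a previously submitted preheat task.
	CheckProgress(taskID string) (status string, err error)
}
```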
Signed-off-by: Steven Zou <szou@vmware.com>
1. Add two more attributes, update_time and status.
2. Add delete and refresh-update-time methods in the blob manager & controller.
Signed-off-by: wang yan <wangyan@vmware.com>
Fixes #9704
As we want to unify error handling, deprecate pkg/errors and use lib/errors instead for Harbor's internal error model.
1. lib/errors covers all the functions of pkg/errors, and it also has a code attribute to define the HTTP return value.
2. lib/errors can produce an OCI-standard error format, like {"errors":[{"code":"UNAUTHORIZED","message":"unauthorized"}]} (illustrated in the sketch below).
If you'd like to use pkg/errors, use lib/errors instead. If it cannot meet your needs, enhance it.
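The following is not the lib/errors API itself, just a self-contained illustration of the two properties above: an error carrying a code that maps to an HTTP status and serializes into the OCI error envelope.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type codedError struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

func (e *codedError) Error() string { return fmt.Sprintf("%s: %s", e.Code, e.Message) }

// httpStatus maps the error code to the status the handler should return.
func httpStatus(code string) int {
	switch code {
	case "UNAUTHORIZED":
		return http.StatusUnauthorized
	case "NOT_FOUND":
		return http.StatusNotFound
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	err := &codedError{Code: "UNAUTHORIZED", Message: "unauthorized"}
	// OCI-style envelope: {"errors":[{"code":"UNAUTHORIZED","message":"unauthorized"}]}
	body, _ := json.Marshal(map[string][]*codedError{"errors": {err}})
	fmt.Println(httpStatus(err.Code), string(body))
}
```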
Signed-off-by: wang yan <wangyan@vmware.com>
- update the Job interface to introduce the MaxCurrency method for declaring the max concurrency of the specified job (see the sketch below)
- change the downstream jobs to implement the new interface method
- GC and sample jobs are set to 1
- other jobs are set to 0, which means unlimited
- add the max concurrency option when doing job registration
- resolve issue #11586
- probably resolve issue #11281
- resolve issue #11570
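An abbreviated, illustrative sketch of the shape this takes: each job declares its limit via MaxCurrency (0 = unlimited) and GC returns 1. Only the relevant method is shown; the real job interface has more, and sampleUnlimitedJob is a made-up name.

```go
package gc

// Job is the subset of the job interface relevant here.
type Job interface {
	// MaxCurrency returns the maximum number of instances of this job that
	// are allowed to run in parallel; 0 means unlimited.
	MaxCurrency() uint
}

// GarbageCollector allows only a single GC run at a time.
type GarbageCollector struct{}

func (gc *GarbageCollector) MaxCurrency() uint { return 1 }

// A job with no limit simply returns 0.
type sampleUnlimitedJob struct{}

func (j *sampleUnlimitedJob) MaxCurrency() uint { return 0 }
```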
Signed-off-by: Steven Zou <szou@vmware.com>
1. Remove `common/quota` package.
2. Remove functions about quota in `common/dao` package.
3. Move `Quota` and `QuotaUsage` models from `common/models` to
`pkg/quota/dao`.
4. Add `Count` and `List` methods to `quota.Controller`.
5. Use `quota.Controller` to implement quota APIs.
Signed-off-by: He Weiwei <hweiwei@vmware.com>
fixes #11533
The GC job uses the filter results to call the registry API to delete manifests.
In the current implementation, the filter function in some cases does not return the deleted artifact, as it uses only the digest as the filter condition.
For example: if one artifact is deleted but another project/repo has an image with the same digest as the deleted one, the filter func will not mark the deleted artifact as a candidate. As a result, the GC job does not call the API to remove the manifest.
To fix it, update the filter to use both the digest and the repository name to filter candidates.
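A sketch of the corrected filter (names are illustrative): a removed artifact stays a GC candidate unless some surviving artifact matches on both repository and digest; matching on digest alone, as before, wrongly dropped candidates when another repository held the same digest.

```go
package gc

type artifact struct {
	Repository string
	Digest     string
}

// filterCandidates returns the deleted artifacts whose manifests should be
// removed from the registry.
func filterCandidates(deleted, existing []artifact) []artifact {
	// Index surviving artifacts by repository+digest, not digest alone.
	inUse := make(map[artifact]bool, len(existing))
	for _, a := range existing {
		inUse[a] = true
	}
	var candidates []artifact
	for _, d := range deleted {
		if !inUse[d] {
			candidates = append(candidates, d)
		}
	}
	return candidates
}
```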
Signed-off-by: wang yan <wangyan@vmware.com>