bump dependencies (#17549)

updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
- dependency-name: golang.org/x/net
- dependency-name: helm.sh/helm/v3

Signed-off-by: Wang Yan <wangyan@vmware.com>

Wang Yan 2022-09-15 16:50:16 +08:00 committed by GitHub
parent a3d96000f5
commit 848167c4e0
1221 changed files with 104982 additions and 28937 deletions

View File

@ -11,7 +11,7 @@ require (
github.com/beego/i18n v0.0.0-20140604031826-e87155e8f0c0
github.com/bmatcuk/doublestar v1.1.1
github.com/casbin/casbin v1.7.0
github.com/cenkalti/backoff/v4 v4.1.1
github.com/cenkalti/backoff/v4 v4.1.2
github.com/coreos/go-oidc/v3 v3.0.0
github.com/dghubble/sling v1.1.0
github.com/docker/distribution v2.8.1+incompatible
@ -30,7 +30,7 @@ require (
github.com/go-sql-driver/mysql v1.5.0
github.com/gocarina/gocsv v0.0.0-20210516172204-ca9e8a8ddea8
github.com/gocraft/work v0.5.1
github.com/golang-jwt/jwt/v4 v4.1.0
github.com/golang-jwt/jwt/v4 v4.2.0
github.com/golang-migrate/migrate/v4 v4.15.1
github.com/gomodule/redigo v2.0.0+incompatible
github.com/google/uuid v1.3.0
@ -42,39 +42,39 @@ require (
github.com/jackc/pgx/v4 v4.12.0
github.com/jinzhu/gorm v1.9.8 // indirect
github.com/jpillora/backoff v1.0.0
github.com/miekg/pkcs11 v1.0.3 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/ncw/swift v1.0.49 // indirect
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646
github.com/olekukonko/tablewriter v0.0.5
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.2
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_golang v1.12.1
github.com/robfig/cron v1.0.0 // indirect
github.com/robfig/cron/v3 v3.0.0
github.com/spf13/viper v1.8.1
github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.7.2
github.com/tencentcloud/tencentcloud-sdk-go v1.0.62
github.com/theupdateframework/notary v0.6.1
github.com/vmihailenco/msgpack/v5 v5.0.0-rc.2
go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux v0.22.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.22.0
go.opentelemetry.io/otel v1.0.0
go.opentelemetry.io/otel v1.3.0
go.opentelemetry.io/otel/exporters/jaeger v1.0.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.0.0
go.opentelemetry.io/otel/sdk v1.0.0
go.opentelemetry.io/otel/trace v1.0.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.3.0
go.opentelemetry.io/otel/sdk v1.3.0
go.opentelemetry.io/otel/trace v1.3.0
go.uber.org/ratelimit v0.2.0
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
golang.org/x/net v0.0.0-20211013171255-e13a2654a71e
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e
golang.org/x/net v0.0.0-20220909164309-bea034e7d591
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8
gopkg.in/h2non/gock.v1 v1.0.16
gopkg.in/yaml.v2 v2.4.0
helm.sh/helm/v3 v3.7.1
k8s.io/api v0.22.1
k8s.io/apimachinery v0.22.1
k8s.io/client-go v0.22.1
helm.sh/helm/v3 v3.9.4
k8s.io/api v0.24.2
k8s.io/apimachinery v0.24.2
k8s.io/client-go v0.24.2
)
require (
@ -83,12 +83,12 @@ require (
)
require (
cloud.google.com/go v0.88.0 // indirect
cloud.google.com/go v0.99.0 // indirect
github.com/Azure/azure-sdk-for-go v37.2.0+incompatible // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.18 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.14 // indirect
github.com/Azure/go-autorest/autorest v0.11.24 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.18 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/autorest/to v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
@ -106,25 +106,26 @@ require (
github.com/bugsnag/panicwrap v1.2.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cloudflare/cfssl v0.0.0-20190510060611-9c027c93ba9e // indirect
github.com/containerd/containerd v1.5.13 // indirect
github.com/cyphar/filepath-securejoin v0.2.2 // indirect
github.com/containerd/containerd v1.6.6 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/denverdino/aliyungo v0.0.0-20191227032621-df38c6fa730c // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/docker/cli v20.10.7+incompatible // indirect
github.com/docker/docker v20.10.9+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.3 // indirect
github.com/dnaeon/go-vcr v1.2.0 // indirect
github.com/docker/cli v20.10.17+incompatible // indirect
github.com/docker/docker v20.10.17+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go v0.0.0-20160303222718-d30aec9fd63c // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/evanphx/json-patch v4.11.0+incompatible // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/felixge/httpsnoop v1.0.2 // indirect
github.com/form3tech-oss/jwt-go v3.2.5+incompatible // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/garyburd/redigo v1.6.3 // indirect
github.com/go-errors/errors v1.0.1 // indirect
github.com/go-logr/logr v0.4.0 // indirect
github.com/go-logr/logr v1.2.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.19.10 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
@ -133,17 +134,16 @@ require (
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/certificate-transparency-go v1.0.21 // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-querystring v1.0.0 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gorilla/securecookie v1.1.1 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
@ -157,36 +157,36 @@ require (
github.com/jackc/pgtype v1.8.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.11 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.13.6 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mitchellh/copystructure v1.1.1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/mapstructure v1.4.1 // indirect
github.com/mitchellh/reflectwalk v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.5.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/satori/go.uuid v1.2.0 // indirect
github.com/shiena/ansicolor v0.0.0-20151119151921-a422bbe96644 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/cobra v1.2.1 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/cobra v1.4.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/objx v0.2.0 // indirect
@ -198,20 +198,21 @@ require (
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca // indirect
go.mongodb.org/mongo-driver v1.7.0 // indirect
go.opentelemetry.io/contrib v0.22.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.0.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.3.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.3.0 // indirect
go.opentelemetry.io/otel/internal/metric v0.22.0 // indirect
go.opentelemetry.io/otel/metric v0.22.0 // indirect
go.opentelemetry.io/proto/otlp v0.9.0 // indirect
go.opentelemetry.io/proto/otlp v0.11.0 // indirect
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
go.uber.org/atomic v1.7.0 // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad // indirect
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d // indirect
google.golang.org/api v0.51.0 // indirect
google.golang.org/appengine v1.6.6 // indirect
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f // indirect
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
google.golang.org/api v0.61.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8 // indirect
google.golang.org/genproto v0.0.0-20211013025323-ce878158c4d4 // indirect
google.golang.org/grpc v1.41.0 // indirect
google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368 // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/dancannon/gorethink.v3 v3.0.5 // indirect
gopkg.in/fatih/pool.v2 v2.0.0 // indirect
@ -219,17 +220,18 @@ require (
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.62.0 // indirect
gopkg.in/square/go-jose.v2 v2.5.1 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/apiextensions-apiserver v0.22.1 // indirect
k8s.io/cli-runtime v0.22.1 // indirect
k8s.io/klog/v2 v2.9.0 // indirect
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e // indirect
k8s.io/utils v0.0.0-20210707171843-4b05e18ac7d9 // indirect
oras.land/oras-go v0.4.0 // indirect
sigs.k8s.io/kustomize/api v0.8.11 // indirect
sigs.k8s.io/kustomize/kyaml v0.11.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.24.2 // indirect
k8s.io/cli-runtime v0.24.2 // indirect
k8s.io/klog/v2 v2.60.1 // indirect
k8s.io/kube-openapi v0.0.0-20220627174259-011e075b9cb8 // indirect
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
oras.land/oras-go v1.2.0 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/kustomize/api v0.11.4 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.6 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)
replace (

File diff suppressed because it is too large

View File

@ -282,6 +282,7 @@ func NewClient(c *http.Client) *Client {
// getETag returns a value from the metadata service as well as the associated ETag.
// This func is otherwise equivalent to Get.
func (c *Client) getETag(suffix string) (value, etag string, err error) {
ctx := context.TODO()
// Using a fixed IP makes it very difficult to spoof the metadata service in
// a container, which is an important use-case for local testing of cloud
// deployments. To enable spoofing of the metadata service, the environment
@ -304,9 +305,25 @@ func (c *Client) getETag(suffix string) (value, etag string, err error) {
}
req.Header.Set("Metadata-Flavor", "Google")
req.Header.Set("User-Agent", userAgent)
res, err := c.hc.Do(req)
if err != nil {
return "", "", err
var res *http.Response
var reqErr error
retryer := newRetryer()
for {
res, reqErr = c.hc.Do(req)
var code int
if res != nil {
code = res.StatusCode
}
if delay, shouldRetry := retryer.Retry(code, reqErr); shouldRetry {
if err := sleep(ctx, delay); err != nil {
return "", "", err
}
continue
}
break
}
if reqErr != nil {
return "", "", reqErr
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {

View File

@ -0,0 +1,114 @@
// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metadata
import (
"context"
"io"
"math/rand"
"net/http"
"time"
)
const (
maxRetryAttempts = 5
)
var (
syscallRetryable = func(err error) bool { return false }
)
// defaultBackoff is basically equivalent to gax.Backoff without the need for
// the dependency.
type defaultBackoff struct {
max time.Duration
mul float64
cur time.Duration
}
func (b *defaultBackoff) Pause() time.Duration {
d := time.Duration(1 + rand.Int63n(int64(b.cur)))
b.cur = time.Duration(float64(b.cur) * b.mul)
if b.cur > b.max {
b.cur = b.max
}
return d
}
// sleep is the equivalent of gax.Sleep without the need for the dependency.
func sleep(ctx context.Context, d time.Duration) error {
t := time.NewTimer(d)
select {
case <-ctx.Done():
t.Stop()
return ctx.Err()
case <-t.C:
return nil
}
}
func newRetryer() *metadataRetryer {
return &metadataRetryer{bo: &defaultBackoff{
cur: 100 * time.Millisecond,
max: 30 * time.Second,
mul: 2,
}}
}
type backoff interface {
Pause() time.Duration
}
type metadataRetryer struct {
bo backoff
attempts int
}
func (r *metadataRetryer) Retry(status int, err error) (time.Duration, bool) {
if status == http.StatusOK {
return 0, false
}
retryOk := shouldRetry(status, err)
if !retryOk {
return 0, false
}
if r.attempts == maxRetryAttempts {
return 0, false
}
r.attempts++
return r.bo.Pause(), true
}
func shouldRetry(status int, err error) bool {
if 500 <= status && status <= 599 {
return true
}
if err == io.ErrUnexpectedEOF {
return true
}
// Transient network errors should be retried.
if syscallRetryable(err) {
return true
}
if err, ok := err.(interface{ Temporary() bool }); ok {
if err.Temporary() {
return true
}
}
if err, ok := err.(interface{ Unwrap() error }); ok {
return shouldRetry(status, err.Unwrap())
}
return false
}
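
To make the new retry policy concrete, here is a rough sketch (not part of the commit; the function name is made up and fmt would also need importing) of how metadataRetryer behaves when every attempt keeps failing, written as if it sat inside the metadata package where these unexported helpers are visible:

```go
// sketchRetryPolicy drives the retryer with a persistent 503 to show the policy.
func sketchRetryPolicy() {
	r := newRetryer()
	for attempt := 1; ; attempt++ {
		// shouldRetry treats any 5xx status (or a temporary/ECONNRESET error) as retryable.
		delay, ok := r.Retry(http.StatusServiceUnavailable, nil)
		if !ok {
			// After maxRetryAttempts (5) the retryer reports that polling should stop.
			fmt.Printf("giving up after %d retries\n", attempt-1)
			return
		}
		// delay is a jittered duration up to the current backoff:
		// roughly 100ms, 200ms, 400ms, ... doubling and capped at 30s.
		fmt.Printf("retry %d after %s\n", attempt, delay)
	}
}
```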

View File

@ -0,0 +1,26 @@
// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build linux
// +build linux
package metadata
import "syscall"
func init() {
// Initialize syscallRetryable to return true on transient socket-level
// errors. These errors are specific to Linux.
syscallRetryable = func(err error) bool { return err == syscall.ECONNRESET || err == syscall.ECONNREFUSED }
}

View File

@ -1,3 +1,4 @@
//go:build modhack
// +build modhack
package adal

View File

@ -16,9 +16,11 @@ package adal
import (
"crypto/tls"
"net"
"net/http"
"net/http/cookiejar"
"sync"
"time"
"github.com/Azure/go-autorest/tracing"
)
@ -72,15 +74,18 @@ func sender() Sender {
// note that we can't init defaultSender in init() since it will
// execute before calling code has had a chance to enable tracing
defaultSenderInit.Do(func() {
// Use behaviour compatible with DefaultTransport, but require TLS minimum version.
defaultTransport := http.DefaultTransport.(*http.Transport)
// copied from http.DefaultTransport with a TLS minimum version.
transport := &http.Transport{
Proxy: defaultTransport.Proxy,
DialContext: defaultTransport.DialContext,
MaxIdleConns: defaultTransport.MaxIdleConns,
IdleConnTimeout: defaultTransport.IdleConnTimeout,
TLSHandshakeTimeout: defaultTransport.TLSHandshakeTimeout,
ExpectContinueTimeout: defaultTransport.ExpectContinueTimeout,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
ForceAttemptHTTP2: true,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
},

View File

@ -37,7 +37,7 @@ import (
"github.com/Azure/go-autorest/autorest/date"
"github.com/Azure/go-autorest/logger"
"github.com/form3tech-oss/jwt-go"
"github.com/golang-jwt/jwt/v4"
)
const (
@ -676,8 +676,6 @@ const (
func (m msiType) String() string {
switch m {
case msiTypeUnavailable:
return "unavailable"
case msiTypeAppServiceV20170901:
return "AppServiceV20170901"
case msiTypeCloudShell:
@ -699,13 +697,9 @@ func getMSIType() (msiType, string, error) {
}
// if ONLY the env var MSI_ENDPOINT is set the msiType is CloudShell
return msiTypeCloudShell, endpointEnvVar, nil
} else if msiAvailableHook(context.Background(), sender()) {
// if MSI_ENDPOINT is NOT set AND the IMDS endpoint is available the msiType is IMDS. This will timeout after 500 milliseconds
return msiTypeIMDS, msiEndpoint, nil
} else {
// if MSI_ENDPOINT is NOT set and IMDS endpoint is not available Managed Identity is not available
return msiTypeUnavailable, "", errors.New("MSI not available")
}
// if MSI_ENDPOINT is NOT set assume the msiType is IMDS
return msiTypeIMDS, msiEndpoint, nil
}
// GetMSIVMEndpoint gets the MSI endpoint on Virtual Machines.
@ -1322,15 +1316,13 @@ func NewMultiTenantServicePrincipalTokenFromCertificate(multiTenantCfg MultiTena
}
// MSIAvailable returns true if the MSI endpoint is available for authentication.
func MSIAvailable(ctx context.Context, sender Sender) bool {
resp, err := getMSIEndpoint(ctx, sender)
func MSIAvailable(ctx context.Context, s Sender) bool {
if s == nil {
s = sender()
}
resp, err := getMSIEndpoint(ctx, s)
if err == nil {
resp.Body.Close()
}
return err == nil
}
// used for testing purposes
var msiAvailableHook = func(ctx context.Context, sender Sender) bool {
return MSIAvailable(ctx, sender)
}

View File

@ -1,3 +1,4 @@
//go:build go1.13
// +build go1.13
// Copyright 2017 Microsoft Corporation
@ -24,7 +25,7 @@ import (
)
func getMSIEndpoint(ctx context.Context, sender Sender) (*http.Response, error) {
tempCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
tempCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
// http.NewRequestWithContext() was added in Go 1.13
req, _ := http.NewRequestWithContext(tempCtx, http.MethodGet, msiEndpoint, nil)

View File

@ -1,3 +1,4 @@
//go:build !go1.13
// +build !go1.13
// Copyright 2017 Microsoft Corporation
@ -23,7 +24,7 @@ import (
)
func getMSIEndpoint(ctx context.Context, sender Sender) (*http.Response, error) {
tempCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
tempCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
req, _ := http.NewRequest(http.MethodGet, msiEndpoint, nil)
req = req.WithContext(tempCtx)

View File

@ -26,6 +26,7 @@ import (
"time"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/logger"
"github.com/Azure/go-autorest/tracing"
)
@ -215,6 +216,7 @@ func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Clien
}
// if the initial response has a Retry-After, sleep for the specified amount of time before starting to poll
if delay, ok := f.GetPollingDelay(); ok {
logger.Instance.Writeln(logger.LogInfo, "WaitForCompletionRef: initial polling delay")
if delayElapsed := autorest.DelayForBackoff(delay, 0, cancelCtx.Done()); !delayElapsed {
err = cancelCtx.Err()
return
@ -234,12 +236,14 @@ func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Clien
var ok bool
delay, ok = f.GetPollingDelay()
if !ok {
logger.Instance.Writeln(logger.LogInfo, "WaitForCompletionRef: Using client polling delay")
delay = client.PollingDelay
}
} else {
// there was an error polling for status so perform exponential
// back-off based on the number of attempts using the client's retry
// duration. update attempts after delayAttempt to avoid off-by-one.
logger.Instance.Writef(logger.LogError, "WaitForCompletionRef: %s\n", err)
delayAttempt = attempts
delay = client.RetryDuration
attempts++

View File

@ -68,7 +68,7 @@ func (se ServiceError) Error() string {
if err != nil {
result += fmt.Sprintf(" Details=%v", se.Details)
}
result += fmt.Sprintf(" Details=%v", string(d))
result += fmt.Sprintf(" Details=%s", d)
}
if se.InnerError != nil {
@ -76,7 +76,7 @@ func (se ServiceError) Error() string {
if err != nil {
result += fmt.Sprintf(" InnerError=%v", se.InnerError)
}
result += fmt.Sprintf(" InnerError=%v", string(d))
result += fmt.Sprintf(" InnerError=%s", d)
}
if se.AdditionalInfo != nil {
@ -84,7 +84,7 @@ func (se ServiceError) Error() string {
if err != nil {
result += fmt.Sprintf(" AdditionalInfo=%v", se.AdditionalInfo)
}
result += fmt.Sprintf(" AdditionalInfo=%v", string(d))
result += fmt.Sprintf(" AdditionalInfo=%s", d)
}
return result
@ -211,7 +211,7 @@ func (r Resource) String() string {
}
// ParseResourceID parses a resource ID into a ResourceDetails struct.
// See https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions-resource#return-value-4.
// See https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource?tabs=json#resourceid.
func ParseResourceID(resourceID string) (Resource, error) {
const resourceIDPatternText = `(?i)subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)`
@ -335,13 +335,13 @@ func WithErrorUnlessStatusCode(codes ...int) autorest.RespondDecorator {
b, decodeErr := autorest.CopyAndDecode(encodedAs, resp.Body, &e)
resp.Body = ioutil.NopCloser(&b)
if decodeErr != nil {
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b.String(), decodeErr)
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b, decodeErr)
}
if e.ServiceError == nil {
// Check if error is unwrapped ServiceError
decoder := autorest.NewDecoder(encodedAs, bytes.NewReader(b.Bytes()))
if err := decoder.Decode(&e.ServiceError); err != nil {
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b.String(), err)
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b, err)
}
// for example, should the API return the literal value `null` as the response
@ -364,7 +364,7 @@ func WithErrorUnlessStatusCode(codes ...int) autorest.RespondDecorator {
rawBody := map[string]interface{}{}
decoder := autorest.NewDecoder(encodedAs, bytes.NewReader(b.Bytes()))
if err := decoder.Decode(&rawBody); err != nil {
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b.String(), err)
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b, err)
}
e.ServiceError = &ServiceError{

View File

@ -45,9 +45,12 @@ type ResourceIdentifier struct {
Datalake string `json:"datalake"`
Batch string `json:"batch"`
OperationalInsights string `json:"operationalInsights"`
OSSRDBMS string `json:"ossRDBMS"`
Storage string `json:"storage"`
Synapse string `json:"synapse"`
ServiceBus string `json:"serviceBus"`
SQLDatabase string `json:"sqlDatabase"`
CosmosDB string `json:"cosmosDB"`
}
// Environment represents a set of endpoints for each of Azure's Clouds.
@ -64,6 +67,10 @@ type Environment struct {
ServiceBusEndpoint string `json:"serviceBusEndpoint"`
BatchManagementEndpoint string `json:"batchManagementEndpoint"`
StorageEndpointSuffix string `json:"storageEndpointSuffix"`
CosmosDBDNSSuffix string `json:"cosmosDBDNSSuffix"`
MariaDBDNSSuffix string `json:"mariaDBDNSSuffix"`
MySQLDatabaseDNSSuffix string `json:"mySqlDatabaseDNSSuffix"`
PostgresqlDatabaseDNSSuffix string `json:"postgresqlDatabaseDNSSuffix"`
SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"`
TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"`
KeyVaultDNSSuffix string `json:"keyVaultDNSSuffix"`
@ -71,7 +78,6 @@ type Environment struct {
ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"`
ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"`
ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"`
CosmosDBDNSSuffix string `json:"cosmosDBDNSSuffix"`
TokenAudience string `json:"tokenAudience"`
APIManagementHostNameSuffix string `json:"apiManagementHostNameSuffix"`
SynapseEndpointSuffix string `json:"synapseEndpointSuffix"`
@ -93,6 +99,10 @@ var (
ServiceBusEndpoint: "https://servicebus.windows.net/",
BatchManagementEndpoint: "https://batch.core.windows.net/",
StorageEndpointSuffix: "core.windows.net",
CosmosDBDNSSuffix: "documents.azure.com",
MariaDBDNSSuffix: "mariadb.database.azure.com",
MySQLDatabaseDNSSuffix: "mysql.database.azure.com",
PostgresqlDatabaseDNSSuffix: "postgres.database.azure.com",
SQLDatabaseDNSSuffix: "database.windows.net",
TrafficManagerDNSSuffix: "trafficmanager.net",
KeyVaultDNSSuffix: "vault.azure.net",
@ -100,7 +110,6 @@ var (
ServiceManagementVMDNSSuffix: "cloudapp.net",
ResourceManagerVMDNSSuffix: "cloudapp.azure.com",
ContainerRegistryDNSSuffix: "azurecr.io",
CosmosDBDNSSuffix: "documents.azure.com",
TokenAudience: "https://management.azure.com/",
APIManagementHostNameSuffix: "azure-api.net",
SynapseEndpointSuffix: "dev.azuresynapse.net",
@ -110,9 +119,12 @@ var (
Datalake: "https://datalake.azure.net/",
Batch: "https://batch.core.windows.net/",
OperationalInsights: "https://api.loganalytics.io",
OSSRDBMS: "https://ossrdbms-aad.database.windows.net",
Storage: "https://storage.azure.com/",
Synapse: "https://dev.azuresynapse.net",
ServiceBus: "https://servicebus.azure.net/",
SQLDatabase: "https://database.windows.net/",
CosmosDB: "https://cosmos.azure.com",
},
}
@ -130,6 +142,10 @@ var (
ServiceBusEndpoint: "https://servicebus.usgovcloudapi.net/",
BatchManagementEndpoint: "https://batch.core.usgovcloudapi.net/",
StorageEndpointSuffix: "core.usgovcloudapi.net",
CosmosDBDNSSuffix: "documents.azure.us",
MariaDBDNSSuffix: "mariadb.database.usgovcloudapi.net",
MySQLDatabaseDNSSuffix: "mysql.database.usgovcloudapi.net",
PostgresqlDatabaseDNSSuffix: "postgres.database.usgovcloudapi.net",
SQLDatabaseDNSSuffix: "database.usgovcloudapi.net",
TrafficManagerDNSSuffix: "usgovtrafficmanager.net",
KeyVaultDNSSuffix: "vault.usgovcloudapi.net",
@ -137,7 +153,6 @@ var (
ServiceManagementVMDNSSuffix: "usgovcloudapp.net",
ResourceManagerVMDNSSuffix: "cloudapp.usgovcloudapi.net",
ContainerRegistryDNSSuffix: "azurecr.us",
CosmosDBDNSSuffix: "documents.azure.us",
TokenAudience: "https://management.usgovcloudapi.net/",
APIManagementHostNameSuffix: "azure-api.us",
SynapseEndpointSuffix: NotAvailable,
@ -147,9 +162,12 @@ var (
Datalake: NotAvailable,
Batch: "https://batch.core.usgovcloudapi.net/",
OperationalInsights: "https://api.loganalytics.us",
OSSRDBMS: "https://ossrdbms-aad.database.usgovcloudapi.net",
Storage: "https://storage.azure.com/",
Synapse: NotAvailable,
ServiceBus: "https://servicebus.azure.net/",
SQLDatabase: "https://database.usgovcloudapi.net/",
CosmosDB: "https://cosmos.azure.com",
},
}
@ -167,6 +185,10 @@ var (
ServiceBusEndpoint: "https://servicebus.chinacloudapi.cn/",
BatchManagementEndpoint: "https://batch.chinacloudapi.cn/",
StorageEndpointSuffix: "core.chinacloudapi.cn",
CosmosDBDNSSuffix: "documents.azure.cn",
MariaDBDNSSuffix: "mariadb.database.chinacloudapi.cn",
MySQLDatabaseDNSSuffix: "mysql.database.chinacloudapi.cn",
PostgresqlDatabaseDNSSuffix: "postgres.database.chinacloudapi.cn",
SQLDatabaseDNSSuffix: "database.chinacloudapi.cn",
TrafficManagerDNSSuffix: "trafficmanager.cn",
KeyVaultDNSSuffix: "vault.azure.cn",
@ -174,7 +196,6 @@ var (
ServiceManagementVMDNSSuffix: "chinacloudapp.cn",
ResourceManagerVMDNSSuffix: "cloudapp.chinacloudapi.cn",
ContainerRegistryDNSSuffix: "azurecr.cn",
CosmosDBDNSSuffix: "documents.azure.cn",
TokenAudience: "https://management.chinacloudapi.cn/",
APIManagementHostNameSuffix: "azure-api.cn",
SynapseEndpointSuffix: "dev.azuresynapse.azure.cn",
@ -184,9 +205,12 @@ var (
Datalake: NotAvailable,
Batch: "https://batch.chinacloudapi.cn/",
OperationalInsights: NotAvailable,
OSSRDBMS: "https://ossrdbms-aad.database.chinacloudapi.cn",
Storage: "https://storage.azure.com/",
Synapse: "https://dev.azuresynapse.net",
ServiceBus: "https://servicebus.azure.net/",
SQLDatabase: "https://database.chinacloudapi.cn/",
CosmosDB: "https://cosmos.azure.com",
},
}
@ -204,6 +228,10 @@ var (
ServiceBusEndpoint: "https://servicebus.cloudapi.de/",
BatchManagementEndpoint: "https://batch.cloudapi.de/",
StorageEndpointSuffix: "core.cloudapi.de",
CosmosDBDNSSuffix: "documents.microsoftazure.de",
MariaDBDNSSuffix: "mariadb.database.cloudapi.de",
MySQLDatabaseDNSSuffix: "mysql.database.cloudapi.de",
PostgresqlDatabaseDNSSuffix: "postgres.database.cloudapi.de",
SQLDatabaseDNSSuffix: "database.cloudapi.de",
TrafficManagerDNSSuffix: "azuretrafficmanager.de",
KeyVaultDNSSuffix: "vault.microsoftazure.de",
@ -211,7 +239,6 @@ var (
ServiceManagementVMDNSSuffix: "azurecloudapp.de",
ResourceManagerVMDNSSuffix: "cloudapp.microsoftazure.de",
ContainerRegistryDNSSuffix: NotAvailable,
CosmosDBDNSSuffix: "documents.microsoftazure.de",
TokenAudience: "https://management.microsoftazure.de/",
APIManagementHostNameSuffix: NotAvailable,
SynapseEndpointSuffix: NotAvailable,
@ -221,9 +248,12 @@ var (
Datalake: NotAvailable,
Batch: "https://batch.cloudapi.de/",
OperationalInsights: NotAvailable,
OSSRDBMS: "https://ossrdbms-aad.database.cloudapi.de",
Storage: "https://storage.azure.com/",
Synapse: NotAvailable,
ServiceBus: "https://servicebus.azure.net/",
SQLDatabase: "https://database.cloudapi.de/",
CosmosDB: "https://cosmos.azure.com",
},
}
)

View File

@ -31,7 +31,7 @@ import (
const (
// DefaultPollingDelay is a reasonable delay between polling requests.
DefaultPollingDelay = 60 * time.Second
DefaultPollingDelay = 30 * time.Second
// DefaultPollingDuration is a reasonable total polling duration.
DefaultPollingDuration = 15 * time.Minute

View File

@ -1,3 +1,4 @@
//go:build modhack
// +build modhack
package autorest

View File

@ -241,6 +241,8 @@ func WithBaseURL(baseURL string) PrepareDecorator {
return r, fmt.Errorf("autorest: No scheme detected in URL %s", baseURL)
}
if u.RawQuery != "" {
// handle unencoded semicolons (ideally the server would send them already encoded)
u.RawQuery = strings.Replace(u.RawQuery, ";", "%3B", -1)
q, err := url.ParseQuery(u.RawQuery)
if err != nil {
return r, err
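
A raw ';' in a query string makes net/url's ParseQuery fail on Go 1.17 and later, which appears to be what the replacement above guards against; a minimal standalone sketch (not part of the diff, query string invented for illustration):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	raw := "api-version=2021-01-01;foo=bar"

	// Unencoded semicolon: ParseQuery returns an error on Go 1.17+.
	if _, err := url.ParseQuery(raw); err != nil {
		fmt.Println("unencoded:", err)
	}

	// Percent-encoding ';' first, as WithBaseURL now does, parses cleanly.
	q, err := url.ParseQuery(strings.Replace(raw, ";", "%3B", -1))
	fmt.Println("encoded:", q.Get("api-version"), err)
}
```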

View File

@ -1,3 +1,4 @@
//go:build !go1.8
// +build !go1.8
// Copyright 2017 Microsoft Corporation

View File

@ -1,3 +1,4 @@
//go:build go1.8
// +build go1.8
// Copyright 2017 Microsoft Corporation

View File

@ -20,12 +20,14 @@ import (
"fmt"
"log"
"math"
"net"
"net/http"
"net/http/cookiejar"
"strconv"
"sync"
"time"
"github.com/Azure/go-autorest/logger"
"github.com/Azure/go-autorest/tracing"
)
@ -128,15 +130,18 @@ func sender(renengotiation tls.RenegotiationSupport) Sender {
// note that we can't init defaultSenders in init() since it will
// execute before calling code has had a chance to enable tracing
defaultSenders[renengotiation].init.Do(func() {
// Use behaviour compatible with DefaultTransport, but require TLS minimum version.
defaultTransport := http.DefaultTransport.(*http.Transport)
// copied from http.DefaultTransport with a TLS minimum version.
transport := &http.Transport{
Proxy: defaultTransport.Proxy,
DialContext: defaultTransport.DialContext,
MaxIdleConns: defaultTransport.MaxIdleConns,
IdleConnTimeout: defaultTransport.IdleConnTimeout,
TLSHandshakeTimeout: defaultTransport.TLSHandshakeTimeout,
ExpectContinueTimeout: defaultTransport.ExpectContinueTimeout,
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
ForceAttemptHTTP2: true,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
Renegotiation: renengotiation,
@ -271,6 +276,7 @@ func DoRetryForAttempts(attempts int, backoff time.Duration) SendDecorator {
if err == nil {
return resp, err
}
logger.Instance.Writef(logger.LogError, "DoRetryForAttempts: received error for attempt %d: %v\n", attempt+1, err)
if !DelayForBackoff(backoff, attempt, r.Context().Done()) {
return nil, r.Context().Err()
}
@ -325,6 +331,9 @@ func doRetryForStatusCodesImpl(s Sender, r *http.Request, count429 bool, attempt
if err == nil && !ResponseHasStatusCode(resp, codes...) || IsTokenRefreshError(err) {
return resp, err
}
if err != nil {
logger.Instance.Writef(logger.LogError, "DoRetryForStatusCodes: received error for attempt %d: %v\n", attempt+1, err)
}
delayed := DelayWithRetryAfter(resp, r.Context().Done())
// if this was a 429 set the delay cap as specified.
// applicable only in the absence of a retry-after header.
@ -391,6 +400,7 @@ func DoRetryForDuration(d time.Duration, backoff time.Duration) SendDecorator {
if err == nil {
return resp, err
}
logger.Instance.Writef(logger.LogError, "DoRetryForDuration: received error for attempt %d: %v\n", attempt+1, err)
if !DelayForBackoff(backoff, attempt, r.Context().Done()) {
return nil, r.Context().Err()
}
@ -438,6 +448,7 @@ func DelayForBackoffWithCap(backoff, cap time.Duration, attempt int, cancel <-ch
if cap > 0 && d > cap {
d = cap
}
logger.Instance.Writef(logger.LogInfo, "DelayForBackoffWithCap: sleeping for %s\n", d)
select {
case <-time.After(d):
return true

View File

@ -1,3 +1,4 @@
//go:build go1.13
// +build go1.13
// Copyright 2017 Microsoft Corporation

View File

@ -1,3 +1,4 @@
//go:build !go1.13
// +build !go1.13
// Copyright 2017 Microsoft Corporation

View File

@ -21,15 +21,16 @@ import (
"bytes"
"compress/gzip"
"context"
"encoding/binary"
"fmt"
"io"
"os"
"os/exec"
"strconv"
"sync"
"github.com/containerd/containerd/log"
"github.com/klauspost/compress/zstd"
exec "golang.org/x/sys/execabs"
)
type (
@ -125,17 +126,52 @@ func (r *bufferedReader) Peek(n int) ([]byte, error) {
return r.buf.Peek(n)
}
const (
zstdMagicSkippableStart = 0x184D2A50
zstdMagicSkippableMask = 0xFFFFFFF0
)
var (
gzipMagic = []byte{0x1F, 0x8B, 0x08}
zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
)
type matcher = func([]byte) bool
func magicNumberMatcher(m []byte) matcher {
return func(source []byte) bool {
return bytes.HasPrefix(source, m)
}
}
// zstdMatcher detects zstd compression algorithm.
// There are two frame formats defined by Zstandard: Zstandard frames and Skippable frames.
// See https://tools.ietf.org/id/draft-kucherawy-dispatch-zstd-00.html#rfc.section.2 for more details.
func zstdMatcher() matcher {
return func(source []byte) bool {
if bytes.HasPrefix(source, zstdMagic) {
// Zstandard frame
return true
}
// skippable frame
if len(source) < 8 {
return false
}
// magic number from 0x184D2A50 to 0x184D2A5F.
if binary.LittleEndian.Uint32(source[:4])&zstdMagicSkippableMask == zstdMagicSkippableStart {
return true
}
return false
}
}
// DetectCompression detects the compression algorithm of the source.
func DetectCompression(source []byte) Compression {
for compression, m := range map[Compression][]byte{
Gzip: {0x1F, 0x8B, 0x08},
Zstd: {0x28, 0xb5, 0x2f, 0xfd},
for compression, fn := range map[Compression]matcher{
Gzip: magicNumberMatcher(gzipMagic),
Zstd: zstdMatcher(),
} {
if len(source) < len(m) {
// Len too short
continue
}
if bytes.Equal(m, source[:len(m)]) {
if fn(source) {
return compression
}
}
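
A hedged usage sketch (not part of the diff) that feeds gzip, Zstandard-frame, and Zstandard skippable-frame magic bytes to DetectCompression, assuming the vendored containerd archive/compression package:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/archive/compression"
)

func main() {
	gzipHdr := []byte{0x1F, 0x8B, 0x08, 0x00}                               // gzip magic
	zstdFrame := []byte{0x28, 0xB5, 0x2F, 0xFD, 0x00, 0x00}                 // Zstandard frame magic
	zstdSkippable := []byte{0x50, 0x2A, 0x4D, 0x18, 0x00, 0x00, 0x00, 0x00} // skippable frame, 0x184D2A50 little-endian

	fmt.Println(compression.DetectCompression(gzipHdr) == compression.Gzip)       // true
	fmt.Println(compression.DetectCompression(zstdFrame) == compression.Zstd)     // true
	fmt.Println(compression.DetectCompression(zstdSkippable) == compression.Zstd) // true with the skippable-frame matcher above
}
```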

View File

@ -18,8 +18,9 @@ package content
import (
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"sync"
"time"
@ -27,7 +28,6 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var bufPool = sync.Pool{
@ -77,7 +77,7 @@ func WriteBlob(ctx context.Context, cs Ingester, ref string, r io.Reader, desc o
cw, err := OpenWriter(ctx, cs, WithRef(ref), WithDescriptor(desc))
if err != nil {
if !errdefs.IsAlreadyExists(err) {
return errors.Wrap(err, "failed to open writer")
return fmt.Errorf("failed to open writer: %w", err)
}
return nil // all ready present
@ -134,28 +134,28 @@ func OpenWriter(ctx context.Context, cs Ingester, opts ...WriterOpt) (Writer, er
func Copy(ctx context.Context, cw Writer, r io.Reader, size int64, expected digest.Digest, opts ...Opt) error {
ws, err := cw.Status()
if err != nil {
return errors.Wrap(err, "failed to get status")
return fmt.Errorf("failed to get status: %w", err)
}
if ws.Offset > 0 {
r, err = seekReader(r, ws.Offset, size)
if err != nil {
return errors.Wrapf(err, "unable to resume write to %v", ws.Ref)
return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
}
}
copied, err := copyWithBuffer(cw, r)
if err != nil {
return errors.Wrap(err, "failed to copy")
return fmt.Errorf("failed to copy: %w", err)
}
if size != 0 && copied < size-ws.Offset {
// Short writes would return its own error, this indicates a read failure
return errors.Wrapf(io.ErrUnexpectedEOF, "failed to read expected number of bytes")
return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
}
if err := cw.Commit(ctx, size, expected, opts...); err != nil {
if !errdefs.IsAlreadyExists(err) {
return errors.Wrapf(err, "failed commit on ref %q", ws.Ref)
return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
}
}
@ -172,11 +172,11 @@ func CopyReaderAt(cw Writer, ra ReaderAt, n int64) error {
copied, err := copyWithBuffer(cw, io.NewSectionReader(ra, ws.Offset, n))
if err != nil {
return errors.Wrap(err, "failed to copy")
return fmt.Errorf("failed to copy: %w", err)
}
if copied < n {
// Short writes would return its own error, this indicates a read failure
return errors.Wrap(io.ErrUnexpectedEOF, "failed to read expected number of bytes")
return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
}
return nil
}
@ -190,13 +190,13 @@ func CopyReaderAt(cw Writer, ra ReaderAt, n int64) error {
func CopyReader(cw Writer, r io.Reader) (int64, error) {
ws, err := cw.Status()
if err != nil {
return 0, errors.Wrap(err, "failed to get status")
return 0, fmt.Errorf("failed to get status: %w", err)
}
if ws.Offset > 0 {
r, err = seekReader(r, ws.Offset, 0)
if err != nil {
return 0, errors.Wrapf(err, "unable to resume write to %v", ws.Ref)
return 0, fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
}
}
@ -212,7 +212,10 @@ func seekReader(r io.Reader, offset, size int64) (io.Reader, error) {
if ok {
nn, err := seeker.Seek(offset, io.SeekStart)
if nn != offset {
return nil, errors.Wrapf(err, "failed to seek to offset %v", offset)
if err == nil {
err = fmt.Errorf("unexpected seek location without seek error")
}
return nil, fmt.Errorf("failed to seek to offset %v: %w", offset, err)
}
if err != nil {
@ -230,12 +233,12 @@ func seekReader(r io.Reader, offset, size int64) (io.Reader, error) {
}
// well then, let's just discard up to the offset
n, err := copyWithBuffer(ioutil.Discard, io.LimitReader(r, offset))
n, err := copyWithBuffer(io.Discard, io.LimitReader(r, offset))
if err != nil {
return nil, errors.Wrap(err, "failed to discard to offset")
return nil, fmt.Errorf("failed to discard to offset: %w", err)
}
if n != offset {
return nil, errors.Errorf("unable to discard to offset")
return nil, errors.New("unable to discard to offset")
}
return r, nil

View File

@ -17,11 +17,11 @@
package local
import (
"fmt"
"sync"
"time"
"github.com/containerd/containerd/errdefs"
"github.com/pkg/errors"
)
// Handles locking references
@ -41,7 +41,13 @@ func tryLock(ref string) error {
defer locksMu.Unlock()
if v, ok := locks[ref]; ok {
return errors.Wrapf(errdefs.ErrUnavailable, "ref %s locked since %s", ref, v.since)
// Returning the duration may help developers distinguish dead locks (long duration) from
// lock contentions (short duration).
now := time.Now()
return fmt.Errorf(
"ref %s locked for %s (since %s): %w", ref, now.Sub(v.since), v.since,
errdefs.ErrUnavailable,
)
}
locks[ref] = &lock{time.Now()}

View File

@ -17,10 +17,9 @@
package local
import (
"fmt"
"os"
"github.com/pkg/errors"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/errdefs"
)
@ -40,7 +39,7 @@ func OpenReader(p string) (content.ReaderAt, error) {
return nil, err
}
return nil, errors.Wrap(errdefs.ErrNotFound, "blob not found")
return nil, fmt.Errorf("blob not found: %w", errdefs.ErrNotFound)
}
fp, err := os.Open(p)
@ -49,7 +48,7 @@ func OpenReader(p string) (content.ReaderAt, error) {
return nil, err
}
return nil, errors.Wrap(errdefs.ErrNotFound, "blob not found")
return nil, fmt.Errorf("blob not found: %w", errdefs.ErrNotFound)
}
return sizeReaderAt{size: fi.Size(), fp: fp}, nil

View File

@ -20,7 +20,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
@ -37,7 +36,6 @@ import (
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var bufPool = sync.Pool{
@ -94,13 +92,13 @@ func NewLabeledStore(root string, ls LabelStore) (content.Store, error) {
func (s *store) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
p, err := s.blobPath(dgst)
if err != nil {
return content.Info{}, errors.Wrapf(err, "calculating blob info path")
return content.Info{}, fmt.Errorf("calculating blob info path: %w", err)
}
fi, err := os.Stat(p)
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrapf(errdefs.ErrNotFound, "content %v", dgst)
err = fmt.Errorf("content %v: %w", dgst, errdefs.ErrNotFound)
}
return content.Info{}, err
@ -129,12 +127,12 @@ func (s *store) info(dgst digest.Digest, fi os.FileInfo, labels map[string]strin
func (s *store) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error) {
p, err := s.blobPath(desc.Digest)
if err != nil {
return nil, errors.Wrapf(err, "calculating blob path for ReaderAt")
return nil, fmt.Errorf("calculating blob path for ReaderAt: %w", err)
}
reader, err := OpenReader(p)
if err != nil {
return nil, errors.Wrapf(err, "blob %s expected at %s", desc.Digest, p)
return nil, fmt.Errorf("blob %s expected at %s: %w", desc.Digest, p, err)
}
return reader, nil
@ -147,7 +145,7 @@ func (s *store) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.
func (s *store) Delete(ctx context.Context, dgst digest.Digest) error {
bp, err := s.blobPath(dgst)
if err != nil {
return errors.Wrapf(err, "calculating blob path for delete")
return fmt.Errorf("calculating blob path for delete: %w", err)
}
if err := os.RemoveAll(bp); err != nil {
@ -155,7 +153,7 @@ func (s *store) Delete(ctx context.Context, dgst digest.Digest) error {
return err
}
return errors.Wrapf(errdefs.ErrNotFound, "content %v", dgst)
return fmt.Errorf("content %v: %w", dgst, errdefs.ErrNotFound)
}
return nil
@ -163,18 +161,18 @@ func (s *store) Delete(ctx context.Context, dgst digest.Digest) error {
func (s *store) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error) {
if s.ls == nil {
return content.Info{}, errors.Wrapf(errdefs.ErrFailedPrecondition, "update not supported on immutable content store")
return content.Info{}, fmt.Errorf("update not supported on immutable content store: %w", errdefs.ErrFailedPrecondition)
}
p, err := s.blobPath(info.Digest)
if err != nil {
return content.Info{}, errors.Wrapf(err, "calculating blob path for update")
return content.Info{}, fmt.Errorf("calculating blob path for update: %w", err)
}
fi, err := os.Stat(p)
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrapf(errdefs.ErrNotFound, "content %v", info.Digest)
err = fmt.Errorf("content %v: %w", info.Digest, errdefs.ErrNotFound)
}
return content.Info{}, err
@ -201,7 +199,7 @@ func (s *store) Update(ctx context.Context, info content.Info, fieldpaths ...str
all = true
labels = info.Labels
default:
return content.Info{}, errors.Wrapf(errdefs.ErrInvalidArgument, "cannot update %q field on content info %q", path, info.Digest)
return content.Info{}, fmt.Errorf("cannot update %q field on content info %q: %w", path, info.Digest, errdefs.ErrInvalidArgument)
}
}
} else {
@ -378,7 +376,7 @@ func (s *store) status(ingestPath string) (content.Status, error) {
fi, err := os.Stat(dp)
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrap(errdefs.ErrNotFound, err.Error())
err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
}
return content.Status{}, err
}
@ -386,19 +384,19 @@ func (s *store) status(ingestPath string) (content.Status, error) {
ref, err := readFileString(filepath.Join(ingestPath, "ref"))
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrap(errdefs.ErrNotFound, err.Error())
err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
}
return content.Status{}, err
}
startedAt, err := readFileTimestamp(filepath.Join(ingestPath, "startedat"))
if err != nil {
return content.Status{}, errors.Wrapf(err, "could not read startedat")
return content.Status{}, fmt.Errorf("could not read startedat: %w", err)
}
updatedAt, err := readFileTimestamp(filepath.Join(ingestPath, "updatedat"))
if err != nil {
return content.Status{}, errors.Wrapf(err, "could not read updatedat")
return content.Status{}, fmt.Errorf("could not read updatedat: %w", err)
}
// because we don't write updatedat on every write, the mod time may
@ -461,7 +459,7 @@ func (s *store) Writer(ctx context.Context, opts ...content.WriterOpt) (content.
// TODO(AkihiroSuda): we could create a random string or one calculated based on the context
// https://github.com/containerd/containerd/issues/2129#issuecomment-380255019
if wOpts.Ref == "" {
return nil, errors.Wrap(errdefs.ErrInvalidArgument, "ref must not be empty")
return nil, fmt.Errorf("ref must not be empty: %w", errdefs.ErrInvalidArgument)
}
var lockErr error
for count := uint64(0); count < 10; count++ {
@ -495,16 +493,16 @@ func (s *store) resumeStatus(ref string, total int64, digester digest.Digester)
path, _, data := s.ingestPaths(ref)
status, err := s.status(path)
if err != nil {
return status, errors.Wrap(err, "failed reading status of resume write")
return status, fmt.Errorf("failed reading status of resume write: %w", err)
}
if ref != status.Ref {
// NOTE(stevvooe): This is fairly catastrophic. Either we have some
// layout corruption or a hash collision for the ref key.
return status, errors.Errorf("ref key does not match: %v != %v", ref, status.Ref)
return status, fmt.Errorf("ref key does not match: %v != %v", ref, status.Ref)
}
if total > 0 && status.Total > 0 && total != status.Total {
return status, errors.Errorf("provided total differs from status: %v != %v", total, status.Total)
return status, fmt.Errorf("provided total differs from status: %v != %v", total, status.Total)
}
// TODO(stevvooe): slow slow slow!!, send to goroutine or use resumable hashes
@ -528,10 +526,10 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
if expected != "" {
p, err := s.blobPath(expected)
if err != nil {
return nil, errors.Wrap(err, "calculating expected blob path for writer")
return nil, fmt.Errorf("calculating expected blob path for writer: %w", err)
}
if _, err := os.Stat(p); err == nil {
return nil, errors.Wrapf(errdefs.ErrAlreadyExists, "content %v", expected)
return nil, fmt.Errorf("content %v: %w", expected, errdefs.ErrAlreadyExists)
}
}
@ -568,7 +566,7 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
// the ingest is new, we need to setup the target location.
// write the ref to a file for later use
if err := ioutil.WriteFile(refp, []byte(ref), 0666); err != nil {
if err := os.WriteFile(refp, []byte(ref), 0666); err != nil {
return nil, err
}
@ -581,7 +579,7 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
}
if total > 0 {
if err := ioutil.WriteFile(filepath.Join(path, "total"), []byte(fmt.Sprint(total)), 0666); err != nil {
if err := os.WriteFile(filepath.Join(path, "total"), []byte(fmt.Sprint(total)), 0666); err != nil {
return nil, err
}
}
@ -589,11 +587,12 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
fp, err := os.OpenFile(data, os.O_WRONLY|os.O_CREATE, 0666)
if err != nil {
return nil, errors.Wrap(err, "failed to open data file")
return nil, fmt.Errorf("failed to open data file: %w", err)
}
if _, err := fp.Seek(offset, io.SeekStart); err != nil {
return nil, errors.Wrap(err, "could not seek to current write offset")
fp.Close()
return nil, fmt.Errorf("could not seek to current write offset: %w", err)
}
return &writer{
@ -615,7 +614,7 @@ func (s *store) Abort(ctx context.Context, ref string) error {
root := s.ingestRoot(ref)
if err := os.RemoveAll(root); err != nil {
if os.IsNotExist(err) {
return errors.Wrapf(errdefs.ErrNotFound, "ingest ref %q", ref)
return fmt.Errorf("ingest ref %q: %w", ref, errdefs.ErrNotFound)
}
return err
@ -626,7 +625,7 @@ func (s *store) Abort(ctx context.Context, ref string) error {
func (s *store) blobPath(dgst digest.Digest) (string, error) {
if err := dgst.Validate(); err != nil {
return "", errors.Wrapf(errdefs.ErrInvalidArgument, "cannot calculate blob path from invalid digest: %v", err)
return "", fmt.Errorf("cannot calculate blob path from invalid digest: %v: %w", err, errdefs.ErrInvalidArgument)
}
return filepath.Join(s.root, "blobs", dgst.Algorithm().String(), dgst.Hex()), nil
@ -656,23 +655,23 @@ func (s *store) ingestPaths(ref string) (string, string, string) {
}
func readFileString(path string) (string, error) {
p, err := ioutil.ReadFile(path)
p, err := os.ReadFile(path)
return string(p), err
}
// readFileTimestamp reads a file with just a timestamp present.
func readFileTimestamp(p string) (time.Time, error) {
b, err := ioutil.ReadFile(p)
b, err := os.ReadFile(p)
if err != nil {
if os.IsNotExist(err) {
err = errors.Wrap(errdefs.ErrNotFound, err.Error())
err = fmt.Errorf("%s: %w", err.Error(), errdefs.ErrNotFound)
}
return time.Time{}, err
}
var t time.Time
if err := t.UnmarshalText(b); err != nil {
return time.Time{}, errors.Wrapf(err, "could not parse timestamp file %v", p)
return time.Time{}, fmt.Errorf("could not parse timestamp file %v: %w", p, err)
}
return t, nil
@ -683,19 +682,23 @@ func writeTimestampFile(p string, t time.Time) error {
if err != nil {
return err
}
return atomicWrite(p, b, 0666)
return writeToCompletion(p, b, 0666)
}
func atomicWrite(path string, data []byte, mode os.FileMode) error {
func writeToCompletion(path string, data []byte, mode os.FileMode) error {
tmp := fmt.Sprintf("%s.tmp", path)
f, err := os.OpenFile(tmp, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, mode)
if err != nil {
return errors.Wrap(err, "create tmp file")
return fmt.Errorf("create tmp file: %w", err)
}
_, err = f.Write(data)
f.Close()
if err != nil {
return errors.Wrap(err, "write atomic data")
return fmt.Errorf("write tmp file: %w", err)
}
return os.Rename(tmp, path)
err = os.Rename(tmp, path)
if err != nil {
return fmt.Errorf("rename tmp file: %w", err)
}
return nil
}

View File

@ -27,7 +27,7 @@ import (
func getATime(fi os.FileInfo) time.Time {
if st, ok := fi.Sys().(*syscall.Stat_t); ok {
return time.Unix(int64(st.Atimespec.Sec), int64(st.Atimespec.Nsec)) //nolint: unconvert // int64 conversions ensure the line compiles for 32-bit systems as well.
return time.Unix(st.Atimespec.Unix())
}
return fi.ModTime()

View File

@ -27,7 +27,7 @@ import (
func getATime(fi os.FileInfo) time.Time {
if st, ok := fi.Sys().(*syscall.Stat_t); ok {
return time.Unix(int64(st.Atim.Sec), int64(st.Atim.Nsec)) //nolint: unconvert // int64 conversions ensure the line compiles for 32-bit systems as well.
return time.Unix(st.Atim.Unix())
}
return fi.ModTime()

View File

@ -27,7 +27,7 @@ import (
func getATime(fi os.FileInfo) time.Time {
if st, ok := fi.Sys().(*syscall.Stat_t); ok {
return time.Unix(int64(st.Atim.Sec), int64(st.Atim.Nsec)) //nolint: unconvert // int64 conversions ensure the line compiles for 32-bit systems as well.
return time.Unix(st.Atim.Unix())
}
return fi.ModTime()

View File

@ -18,6 +18,8 @@ package local
import (
"context"
"errors"
"fmt"
"io"
"os"
"path/filepath"
@ -28,7 +30,6 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
// writer represents a write transaction against the blob store.
@ -88,30 +89,30 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
w.fp = nil
if fp == nil {
return errors.Wrap(errdefs.ErrFailedPrecondition, "cannot commit on closed writer")
return fmt.Errorf("cannot commit on closed writer: %w", errdefs.ErrFailedPrecondition)
}
if err := fp.Sync(); err != nil {
fp.Close()
return errors.Wrap(err, "sync failed")
return fmt.Errorf("sync failed: %w", err)
}
fi, err := fp.Stat()
closeErr := fp.Close()
if err != nil {
return errors.Wrap(err, "stat on ingest file failed")
return fmt.Errorf("stat on ingest file failed: %w", err)
}
if closeErr != nil {
return errors.Wrap(err, "failed to close ingest file")
return fmt.Errorf("failed to close ingest file: %w", closeErr)
}
if size > 0 && size != fi.Size() {
return errors.Wrapf(errdefs.ErrFailedPrecondition, "unexpected commit size %d, expected %d", fi.Size(), size)
return fmt.Errorf("unexpected commit size %d, expected %d: %w", fi.Size(), size, errdefs.ErrFailedPrecondition)
}
dgst := w.digester.Digest()
if expected != "" && expected != dgst {
return errors.Wrapf(errdefs.ErrFailedPrecondition, "unexpected commit digest %s, expected %s", dgst, expected)
return fmt.Errorf("unexpected commit digest %s, expected %s: %w", dgst, expected, errdefs.ErrFailedPrecondition)
}
var (
@ -127,9 +128,9 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
if _, err := os.Stat(target); err == nil {
// collision with the target file!
if err := os.RemoveAll(w.path); err != nil {
log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Errorf("failed to remove ingest directory")
log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Error("failed to remove ingest directory")
}
return errors.Wrapf(errdefs.ErrAlreadyExists, "content %v", dgst)
return fmt.Errorf("content %v: %w", dgst, errdefs.ErrAlreadyExists)
}
if err := os.Rename(ingest, target); err != nil {
@ -142,17 +143,17 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
commitTime := time.Now()
if err := os.Chtimes(target, commitTime, commitTime); err != nil {
log.G(ctx).WithField("digest", dgst).Errorf("failed to change file time to commit time")
log.G(ctx).WithField("digest", dgst).Error("failed to change file time to commit time")
}
// clean up!!
if err := os.RemoveAll(w.path); err != nil {
log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Errorf("failed to remove ingest directory")
log.G(ctx).WithField("ref", w.ref).WithField("path", w.path).Error("failed to remove ingest directory")
}
if w.s.ls != nil && base.Labels != nil {
if err := w.s.ls.Set(dgst, base.Labels); err != nil {
log.G(ctx).WithField("digest", dgst).Errorf("failed to set labels")
log.G(ctx).WithField("digest", dgst).Error("failed to set labels")
}
}
@ -165,7 +166,7 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
// NOTE: Windows does not support this operation
if runtime.GOOS != "windows" {
if err := os.Chmod(target, (fi.Mode()&os.ModePerm)&^0333); err != nil {
log.G(ctx).WithField("ref", w.ref).Errorf("failed to make readonly")
log.G(ctx).WithField("ref", w.ref).Error("failed to make readonly")
}
}
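The writer.go hunks above swap github.com/pkg/errors wrapping for stdlib wrapping. A minimal sketch of the pattern, using only the standard library and a stand-in sentinel (errFailedPrecondition here is hypothetical, standing in for errdefs.ErrFailedPrecondition):

    package main

    import (
        "errors"
        "fmt"
    )

    // errFailedPrecondition is a stand-in for a sentinel such as errdefs.ErrFailedPrecondition.
    var errFailedPrecondition = errors.New("failed precondition")

    func commit(closed bool) error {
        if closed {
            // fmt.Errorf with %w replaces errors.Wrap; the sentinel stays reachable via errors.Is.
            return fmt.Errorf("cannot commit on closed writer: %w", errFailedPrecondition)
        }
        return nil
    }

    func main() {
        fmt.Println(errors.Is(commit(true), errFailedPrecondition)) // true
    }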

View File

@ -17,7 +17,7 @@
// Package errdefs defines the common errors used throughout containerd
// packages.
//
// Use with errors.Wrap and error.Wrapf to add context to an error.
// Use with fmt.Errorf to add context to an error.
//
// To detect an error class, use the IsXXX functions to tell whether an error
// is of a certain type.
@ -28,8 +28,7 @@ package errdefs
import (
"context"
"github.com/pkg/errors"
"errors"
)
// Definitions of common error types used throughout containerd. All containerd

View File

@ -18,9 +18,9 @@ package errdefs
import (
"context"
"fmt"
"strings"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
@ -68,9 +68,9 @@ func ToGRPC(err error) error {
// ToGRPCf maps the error to grpc error codes, assembling the formatting string
// and combining it with the target error string.
//
// This is equivalent to errors.ToGRPC(errors.Wrapf(err, format, args...))
// This is equivalent to errdefs.ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
func ToGRPCf(err error, format string, args ...interface{}) error {
return ToGRPC(errors.Wrapf(err, format, args...))
return ToGRPC(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
}
// FromGRPC returns the underlying error from a grpc service based on the grpc error code
@ -104,9 +104,9 @@ func FromGRPC(err error) error {
msg := rebaseMessage(cls, err)
if msg != "" {
err = errors.Wrap(cls, msg)
err = fmt.Errorf("%s: %w", msg, cls)
} else {
err = errors.WithStack(cls)
err = cls
}
return err
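A hedged sketch of the round trip these helpers implement after the stdlib migration; it assumes ToGRPC, FromGRPC, and IsNotFound behave as described in the hunks above:

    package example

    import (
        "fmt"

        "github.com/containerd/containerd/errdefs"
    )

    // roundTrip shows that a wrapped errdefs sentinel survives the trip through
    // gRPC status codes: ToGRPC maps it to codes.NotFound, FromGRPC maps it back.
    func roundTrip() bool {
        wrapped := fmt.Errorf("image %q: %w", "example", errdefs.ErrNotFound)
        grpcErr := errdefs.ToGRPC(wrapped) // carries codes.NotFound on the wire
        back := errdefs.FromGRPC(grpcErr)  // rebuilt so errdefs.IsNotFound still matches
        return errdefs.IsNotFound(back)
    }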

View File

@ -21,7 +21,6 @@ import (
"io"
"github.com/containerd/containerd/errdefs"
"github.com/pkg/errors"
)
/*
@ -71,7 +70,7 @@ func ParseAll(ss ...string) (Filter, error) {
for _, s := range ss {
f, err := Parse(s)
if err != nil {
return nil, errors.Wrap(errdefs.ErrInvalidArgument, err.Error())
return nil, fmt.Errorf("%s: %w", err.Error(), errdefs.ErrInvalidArgument)
}
fs = append(fs, f)
@ -90,7 +89,7 @@ func (p *parser) parse() (Filter, error) {
ss, err := p.selectors()
if err != nil {
return nil, errors.Wrap(err, "filters")
return nil, fmt.Errorf("filters: %w", err)
}
return ss, nil
@ -284,9 +283,9 @@ func (pe parseError) Error() string {
}
func (p *parser) mkerr(pos int, format string, args ...interface{}) error {
return errors.Wrap(parseError{
return fmt.Errorf("parse error: %w", parseError{
input: p.input,
pos: pos,
msg: fmt.Sprintf(format, args...),
}, "parse error")
})
}

View File

@ -17,9 +17,8 @@
package filters
import (
"errors"
"unicode/utf8"
"github.com/pkg/errors"
)
// NOTE(stevvooe): Most of this code in this file is copied from the stdlib

View File

@ -18,6 +18,7 @@ package images
import (
"context"
"errors"
"fmt"
"sort"
@ -25,7 +26,6 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/platforms"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
@ -33,13 +33,17 @@ import (
var (
// ErrSkipDesc is used to skip processing of a descriptor and
// its descendants.
ErrSkipDesc = fmt.Errorf("skip descriptor")
ErrSkipDesc = errors.New("skip descriptor")
// ErrStopHandler is used to signify that the descriptor
// has been handled and should not be handled further.
// This applies only to a single descriptor in a handler
// chain and does not apply to descendant descriptors.
ErrStopHandler = fmt.Errorf("stop handler")
ErrStopHandler = errors.New("stop handler")
// ErrEmptyWalk is used when the WalkNotEmpty handlers return no
// children (e.g.: they were filtered out).
ErrEmptyWalk = errors.New("image might be filtered out")
)
// Handler handles image manifests
@ -99,6 +103,36 @@ func Walk(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) err
}
}
}
return nil
}
// WalkNotEmpty works the same way Walk does, with the exception that it ensures that
// some children are still found by Walking the descriptors (for example, not all of
// them have been filtered out by one of the handlers). If there are no children,
// then an ErrEmptyWalk error is returned.
func WalkNotEmpty(ctx context.Context, handler Handler, descs ...ocispec.Descriptor) error {
isEmpty := true
var notEmptyHandler HandlerFunc = func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
children, err := handler.Handle(ctx, desc)
if err != nil {
return children, err
}
if len(children) > 0 {
isEmpty = false
}
return children, nil
}
err := Walk(ctx, notEmptyHandler, descs...)
if err != nil {
return err
}
if isEmpty {
return ErrEmptyWalk
}
return nil
}
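A short usage sketch for the new WalkNotEmpty helper, built only from the types shown above:

    package example

    import (
        "context"
        "errors"

        "github.com/containerd/containerd/images"
        ocispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // walkOrFail walks the tree rooted at desc with a handler that may filter out
    // children; if nothing survives the filters, WalkNotEmpty surfaces ErrEmptyWalk.
    func walkOrFail(ctx context.Context, handler images.HandlerFunc, desc ocispec.Descriptor) error {
        err := images.WalkNotEmpty(ctx, handler, desc)
        if errors.Is(err, images.ErrEmptyWalk) {
            return errors.New("all descriptors were filtered out")
        }
        return err
    }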
@ -274,7 +308,7 @@ func LimitManifests(f HandlerFunc, m platforms.MatchComparer, n int) HandlerFunc
if n > 0 {
if len(children) == 0 {
return children, errors.Wrap(errdefs.ErrNotFound, "no match for platform in manifest")
return children, fmt.Errorf("no match for platform in manifest: %w", errdefs.ErrNotFound)
}
if len(children) > n {
children = children[:n]

View File

@ -29,7 +29,6 @@ import (
"github.com/containerd/containerd/platforms"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// Image provides the model for how containerd views container images.
@ -115,7 +114,7 @@ func (image *Image) Size(ctx context.Context, provider content.Provider, platfor
var size int64
return size, Walk(ctx, Handlers(HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if desc.Size < 0 {
return nil, errors.Errorf("invalid size %v in %v (%v)", desc.Size, desc.Digest, desc.MediaType)
return nil, fmt.Errorf("invalid size %v in %v (%v)", desc.Size, desc.Digest, desc.MediaType)
}
size += desc.Size
return nil, nil
@ -156,7 +155,7 @@ func Manifest(ctx context.Context, provider content.Provider, image ocispec.Desc
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "manifest: invalid desc %s", desc.Digest)
return nil, fmt.Errorf("manifest: invalid desc %s: %w", desc.Digest, err)
}
var manifest ocispec.Manifest
@ -200,7 +199,7 @@ func Manifest(ctx context.Context, provider content.Provider, image ocispec.Desc
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "manifest: invalid desc %s", desc.Digest)
return nil, fmt.Errorf("manifest: invalid desc %s: %w", desc.Digest, err)
}
var idx ocispec.Index
@ -236,15 +235,15 @@ func Manifest(ctx context.Context, provider content.Provider, image ocispec.Desc
}
return descs, nil
}
return nil, errors.Wrapf(errdefs.ErrNotFound, "unexpected media type %v for %v", desc.MediaType, desc.Digest)
return nil, fmt.Errorf("unexpected media type %v for %v: %w", desc.MediaType, desc.Digest, errdefs.ErrNotFound)
}), image); err != nil {
return ocispec.Manifest{}, err
}
if len(m) == 0 {
err := errors.Wrapf(errdefs.ErrNotFound, "manifest %v", image.Digest)
err := fmt.Errorf("manifest %v: %w", image.Digest, errdefs.ErrNotFound)
if wasIndex {
err = errors.Wrapf(errdefs.ErrNotFound, "no match for platform in manifest %v", image.Digest)
err = fmt.Errorf("no match for platform in manifest %v: %w", image.Digest, errdefs.ErrNotFound)
}
return ocispec.Manifest{}, err
}
@ -309,7 +308,7 @@ func Check(ctx context.Context, provider content.Provider, image ocispec.Descrip
return false, []ocispec.Descriptor{image}, nil, []ocispec.Descriptor{image}, nil
}
return false, nil, nil, nil, errors.Wrapf(err, "failed to check image %v", image.Digest)
return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", image.Digest, err)
}
// TODO(stevvooe): It is possible that referenced components could have
@ -324,7 +323,7 @@ func Check(ctx context.Context, provider content.Provider, image ocispec.Descrip
missing = append(missing, desc)
continue
} else {
return false, nil, nil, nil, errors.Wrapf(err, "failed to check image %v", desc.Digest)
return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", desc.Digest, err)
}
}
ra.Close()
@ -346,7 +345,7 @@ func Children(ctx context.Context, provider content.Provider, desc ocispec.Descr
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "children: invalid desc %s", desc.Digest)
return nil, fmt.Errorf("children: invalid desc %s: %w", desc.Digest, err)
}
// TODO(stevvooe): We just assume oci manifest, for now. There may be
@ -365,7 +364,7 @@ func Children(ctx context.Context, provider content.Provider, desc ocispec.Descr
}
if err := validateMediaType(p, desc.MediaType); err != nil {
return nil, errors.Wrapf(err, "children: invalid desc %s", desc.Digest)
return nil, fmt.Errorf("children: invalid desc %s: %w", desc.Digest, err)
}
var index ocispec.Index

View File

@ -18,12 +18,12 @@ package images
import (
"context"
"fmt"
"sort"
"strings"
"github.com/containerd/containerd/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// mediatype definitions for image components handled in containerd.
@ -87,7 +87,7 @@ func DiffCompression(ctx context.Context, mediaType string) (string, error) {
}
return "", nil
default:
return "", errors.Wrapf(errdefs.ErrNotImplemented, "unrecognised mediatype %s", mediaType)
return "", fmt.Errorf("unrecognised mediatype %s: %w", mediaType, errdefs.ErrNotImplemented)
}
}

View File

@ -17,8 +17,9 @@
package labels
import (
"fmt"
"github.com/containerd/containerd/errdefs"
"github.com/pkg/errors"
)
const (
@ -31,7 +32,7 @@ func Validate(k, v string) error {
if len(k) > 10 {
k = k[:10]
}
return errors.Wrapf(errdefs.ErrInvalidArgument, "label key and value greater than maximum size (%d bytes), key: %s", maxSize, k)
return fmt.Errorf("label key and value greater than maximum size (%d bytes), key: %s: %w", maxSize, k, errdefs.ErrInvalidArgument)
}
return nil
}
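A small sketch of how callers observe this change, assuming errdefs.IsInvalidArgument matches wrapped sentinels (as it does elsewhere in this update) and that the combined key/value size here exceeds the package's maxSize:

    package example

    import (
        "strings"

        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/labels"
    )

    // checkLabel: an oversized label still comes back as an error wrapping
    // errdefs.ErrInvalidArgument, so IsInvalidArgument matches after the
    // switch to fmt.Errorf with %w.
    func checkLabel() bool {
        err := labels.Validate("containerd.io/example", strings.Repeat("x", 5000))
        return errdefs.IsInvalidArgument(err)
    }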

View File

@ -52,7 +52,8 @@ const (
// WithLogger returns a new context with the provided logger. Use in
// combination with logger.WithField(s) for great effect.
func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
return context.WithValue(ctx, loggerKey{}, logger)
e := logger.WithContext(ctx)
return context.WithValue(ctx, loggerKey{}, e)
}
// GetLogger retrieves the current logger from the context. If no logger is
@ -61,7 +62,7 @@ func GetLogger(ctx context.Context) *logrus.Entry {
logger := ctx.Value(loggerKey{})
if logger == nil {
return L
return L.WithContext(ctx)
}
return logger.(*logrus.Entry)
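A brief sketch of the WithLogger/G pair after this change; with the entry now bound to the context, logrus hooks that read entry.Context see the caller's context:

    package example

    import (
        "context"

        "github.com/containerd/containerd/log"
    )

    // do stores a field-scoped logger in the context and retrieves it again;
    // the retrieved entry carries ctx after the change above.
    func do(ctx context.Context) {
        ctx = log.WithLogger(ctx, log.G(ctx).WithField("module", "example"))
        log.G(ctx).Info("hello")
    }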

View File

@ -38,12 +38,22 @@ func platformVector(platform specs.Platform) []specs.Platform {
switch platform.Architecture {
case "amd64":
if amd64Version, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && amd64Version > 1 {
for amd64Version--; amd64Version >= 1; amd64Version-- {
vector = append(vector, specs.Platform{
Architecture: platform.Architecture,
OS: platform.OS,
OSVersion: platform.OSVersion,
OSFeatures: platform.OSFeatures,
Variant: "v" + strconv.Itoa(amd64Version),
})
}
}
vector = append(vector, specs.Platform{
Architecture: "386",
OS: platform.OS,
OSVersion: platform.OSVersion,
OSFeatures: platform.OSFeatures,
Variant: platform.Variant,
})
case "arm":
if armVersion, err := strconv.Atoi(strings.TrimPrefix(platform.Variant, "v")); err == nil && armVersion > 5 {

View File

@ -18,6 +18,7 @@ package platforms
import (
"bufio"
"fmt"
"os"
"runtime"
"strings"
@ -25,7 +26,6 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
"github.com/pkg/errors"
)
// Presents the ARM instruction set architecture, e.g. v7, v8
@ -48,7 +48,7 @@ func cpuVariant() string {
// by ourselves. We can just parse this information from /proc/cpuinfo
func getCPUInfo(pattern string) (info string, err error) {
if !isLinuxOS(runtime.GOOS) {
return "", errors.Wrapf(errdefs.ErrNotImplemented, "getCPUInfo for OS %s", runtime.GOOS)
return "", fmt.Errorf("getCPUInfo for OS %s: %w", runtime.GOOS, errdefs.ErrNotImplemented)
}
cpuinfo, err := os.Open("/proc/cpuinfo")
@ -75,7 +75,7 @@ func getCPUInfo(pattern string) (info string, err error) {
return "", err
}
return "", errors.Wrapf(errdefs.ErrNotFound, "getCPUInfo for pattern: %s", pattern)
return "", fmt.Errorf("getCPUInfo for pattern: %s: %w", pattern, errdefs.ErrNotFound)
}
func getCPUVariant() string {

View File

@ -38,7 +38,7 @@ func isLinuxOS(os string) bool {
// The OS value should be normalized before calling this function.
func isKnownOS(os string) bool {
switch os {
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
case "aix", "android", "darwin", "dragonfly", "freebsd", "hurd", "illumos", "ios", "js", "linux", "nacl", "netbsd", "openbsd", "plan9", "solaris", "windows", "zos":
return true
}
return false
@ -60,7 +60,7 @@ func isArmArch(arch string) bool {
// The arch value should be normalized before being passed to this function.
func isKnownArch(arch string) bool {
switch arch {
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
case "386", "amd64", "amd64p32", "arm", "armbe", "arm64", "arm64be", "ppc64", "ppc64le", "loong64", "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le", "ppc", "riscv", "riscv64", "s390", "s390x", "sparc", "sparc64", "wasm":
return true
}
return false
@ -86,9 +86,11 @@ func normalizeArch(arch, variant string) (string, string) {
case "i386":
arch = "386"
variant = ""
case "x86_64", "x86-64":
case "x86_64", "x86-64", "amd64":
arch = "amd64"
variant = ""
if variant == "v1" {
variant = ""
}
case "aarch64", "arm64":
arch = "arm64"
switch variant {

View File

@ -16,27 +16,11 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultString returns the default string specifier for the platform.
func DefaultString() string {
return Format(DefaultSpec())
}
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// DefaultStrict returns strict form of Default.
func DefaultStrict() MatchComparer {
return OnlyStrict(DefaultSpec())

View File

@ -0,0 +1,45 @@
//go:build darwin
// +build darwin
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Ordered(DefaultSpec(), specs.Platform{
// darwin runtime also supports Linux binary via runu/LKL
OS: "linux",
Architecture: runtime.GOARCH,
})
}

View File

@ -1,5 +1,5 @@
//go:build !windows
// +build !windows
//go:build !windows && !darwin
// +build !windows,!darwin
/*
Copyright The containerd Authors.
@ -19,6 +19,22 @@
package platforms
import (
"runtime"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
// Default returns the default matcher for the platform.
func Default() MatchComparer {
return Only(DefaultSpec())

View File

@ -1,6 +1,3 @@
//go:build windows
// +build windows
/*
Copyright The containerd Authors.
@ -30,6 +27,18 @@ import (
"golang.org/x/sys/windows"
)
// DefaultSpec returns the current platform's default platform specification.
func DefaultSpec() specs.Platform {
major, minor, build := windows.RtlGetNtVersionNumbers()
return specs.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
OSVersion: fmt.Sprintf("%d.%d.%d", major, minor, build),
// The Variant field will be empty if arch != ARM.
Variant: cpuVariant(),
}
}
type matchComparer struct {
defaults Matcher
osVersionPrefix string

View File

@ -107,6 +107,8 @@
package platforms
import (
"fmt"
"path"
"regexp"
"runtime"
"strconv"
@ -114,7 +116,6 @@ import (
"github.com/containerd/containerd/errdefs"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
var (
@ -166,14 +167,14 @@ func (m *matcher) String() string {
func Parse(specifier string) (specs.Platform, error) {
if strings.Contains(specifier, "*") {
// TODO(stevvooe): need to work out exact wildcard handling
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: wildcards not yet supported", specifier)
return specs.Platform{}, fmt.Errorf("%q: wildcards not yet supported: %w", specifier, errdefs.ErrInvalidArgument)
}
parts := strings.Split(specifier, "/")
for _, part := range parts {
if !specifierRe.MatchString(part) {
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q is an invalid component of %q: platform specifier component must match %q", part, specifier, specifierRe.String())
return specs.Platform{}, fmt.Errorf("%q is an invalid component of %q: platform specifier component must match %q: %w", part, specifier, specifierRe.String(), errdefs.ErrInvalidArgument)
}
}
@ -205,7 +206,7 @@ func Parse(specifier string) (specs.Platform, error) {
return p, nil
}
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: unknown operating system or architecture", specifier)
return specs.Platform{}, fmt.Errorf("%q: unknown operating system or architecture: %w", specifier, errdefs.ErrInvalidArgument)
case 2:
// In this case, we treat it as a regular os/arch pair. We don't care
// about whether or not we know of the platform.
@ -227,7 +228,7 @@ func Parse(specifier string) (specs.Platform, error) {
return p, nil
}
return specs.Platform{}, errors.Wrapf(errdefs.ErrInvalidArgument, "%q: cannot parse platform specifier", specifier)
return specs.Platform{}, fmt.Errorf("%q: cannot parse platform specifier: %w", specifier, errdefs.ErrInvalidArgument)
}
// MustParse is like Parse but panics if the specifier cannot be parsed.
@ -246,20 +247,7 @@ func Format(platform specs.Platform) string {
return "unknown"
}
return joinNotEmpty(platform.OS, platform.Architecture, platform.Variant)
}
func joinNotEmpty(s ...string) string {
var ss []string
for _, s := range s {
if s == "" {
continue
}
ss = append(ss, s)
}
return strings.Join(ss, "/")
return path.Join(platform.OS, platform.Architecture, platform.Variant)
}
// Normalize validates and translate the platform to the canonical value.
@ -269,10 +257,5 @@ func joinNotEmpty(s ...string) string {
func Normalize(platform specs.Platform) specs.Platform {
platform.OS = normalizeOS(platform.OS)
platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
// these fields are deprecated, remove them
platform.OSFeatures = nil
platform.OSVersion = ""
return platform
}
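A short sketch of Parse/Normalize/Format after these hunks; Format now reduces to path.Join over OS, architecture, and variant, and Normalize keeps OSVersion/OSFeatures:

    package example

    import (
        "fmt"

        "github.com/containerd/containerd/platforms"
    )

    // demo parses a specifier, normalizes it, and prints the joined form.
    func demo() error {
        p, err := platforms.Parse("linux/arm/v7")
        if err != nil {
            return err
        }
        p = platforms.Normalize(p)       // no longer clears OSVersion/OSFeatures
        fmt.Println(platforms.Format(p)) // "linux/arm/v7" via path.Join
        return nil
    }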

View File

@ -19,6 +19,8 @@ package auth
import (
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"strings"
@ -27,7 +29,6 @@ import (
"github.com/containerd/containerd/log"
remoteserrors "github.com/containerd/containerd/remotes/errors"
"github.com/containerd/containerd/version"
"github.com/pkg/errors"
"golang.org/x/net/context/ctxhttp"
)
@ -46,7 +47,7 @@ func GenerateTokenOptions(ctx context.Context, host, username, secret string, c
realmURL, err := url.Parse(realm)
if err != nil {
return TokenOptions{}, errors.Wrap(err, "invalid token auth challenge realm")
return TokenOptions{}, fmt.Errorf("invalid token auth challenge realm: %w", err)
}
to := TokenOptions{
@ -58,7 +59,7 @@ func GenerateTokenOptions(ctx context.Context, host, username, secret string, c
scope, ok := c.Parameters["scope"]
if ok {
to.Scopes = append(to.Scopes, scope)
to.Scopes = append(to.Scopes, strings.Split(scope, " ")...)
} else {
log.G(ctx).WithField("host", host).Debug("no scope specified for token auth challenge")
}
@ -73,6 +74,15 @@ type TokenOptions struct {
Scopes []string
Username string
Secret string
// FetchRefreshToken enables fetching a refresh token (aka "identity token", "offline token") along with the bearer token.
//
// For HTTP GET mode (FetchToken), FetchRefreshToken sets `offline_token=true` in the request.
// https://docs.docker.com/registry/spec/auth/token/#requesting-a-token
//
// For HTTP POST mode (FetchTokenWithOAuth), FetchRefreshToken sets `access_type=offline` in the request.
// https://docs.docker.com/registry/spec/auth/oauth/#getting-a-token
FetchRefreshToken bool
}
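A hedged usage sketch of the new FetchRefreshToken flag; the realm, service, and scope values below are placeholders that would normally come from the parsed WWW-Authenticate challenge (GenerateTokenOptions):

    package example

    import (
        "context"
        "net/http"

        "github.com/containerd/containerd/remotes/docker/auth"
    )

    // fetchWithRefresh requests a bearer token plus a refresh token in one call.
    func fetchWithRefresh(ctx context.Context) (string, string, error) {
        to := auth.TokenOptions{
            Realm:             "https://auth.example.com/token",
            Service:           "registry.example.com",
            Scopes:            []string{"repository:library/busybox:pull"},
            FetchRefreshToken: true, // offline_token=true (GET) or access_type=offline (POST)
        }
        resp, err := auth.FetchToken(ctx, http.DefaultClient, http.Header{}, to)
        if err != nil {
            return "", "", err
        }
        return resp.Token, resp.RefreshToken, nil
    }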
// OAuthTokenResponse is the response from fetching a token with an OAuth POST request
@ -101,6 +111,9 @@ func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.
form.Set("username", to.Username)
form.Set("password", to.Secret)
}
if to.FetchRefreshToken {
form.Set("access_type", "offline")
}
req, err := http.NewRequest("POST", to.Realm, strings.NewReader(form.Encode()))
if err != nil {
@ -121,18 +134,18 @@ func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 400 {
return nil, errors.WithStack(remoteserrors.NewUnexpectedStatusErr(resp))
return nil, remoteserrors.NewUnexpectedStatusErr(resp)
}
decoder := json.NewDecoder(resp.Body)
var tr OAuthTokenResponse
if err = decoder.Decode(&tr); err != nil {
return nil, errors.Wrap(err, "unable to decode token response")
return nil, fmt.Errorf("unable to decode token response: %w", err)
}
if tr.AccessToken == "" {
return nil, errors.WithStack(ErrNoToken)
return nil, ErrNoToken
}
return &tr, nil
@ -175,6 +188,10 @@ func FetchToken(ctx context.Context, client *http.Client, headers http.Header, t
req.SetBasicAuth(to.Username, to.Secret)
}
if to.FetchRefreshToken {
reqParams.Add("offline_token", "true")
}
req.URL.RawQuery = reqParams.Encode()
resp, err := ctxhttp.Do(ctx, client, req)
@ -184,14 +201,14 @@ func FetchToken(ctx context.Context, client *http.Client, headers http.Header, t
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 400 {
return nil, errors.WithStack(remoteserrors.NewUnexpectedStatusErr(resp))
return nil, remoteserrors.NewUnexpectedStatusErr(resp)
}
decoder := json.NewDecoder(resp.Body)
var tr FetchTokenResponse
if err = decoder.Decode(&tr); err != nil {
return nil, errors.Wrap(err, "unable to decode token response")
return nil, fmt.Errorf("unable to decode token response: %w", err)
}
// `access_token` is equivalent to `token` and if both are specified
@ -202,7 +219,7 @@ func FetchToken(ctx context.Context, client *http.Client, headers http.Header, t
}
if tr.Token == "" {
return nil, errors.WithStack(ErrNoToken)
return nil, ErrNoToken
}
return &tr, nil

View File

@ -19,6 +19,7 @@ package docker
import (
"context"
"encoding/base64"
"errors"
"fmt"
"net/http"
"strings"
@ -28,7 +29,6 @@ import (
"github.com/containerd/containerd/log"
"github.com/containerd/containerd/remotes/docker/auth"
remoteerrors "github.com/containerd/containerd/remotes/errors"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@ -37,10 +37,12 @@ type dockerAuthorizer struct {
client *http.Client
header http.Header
mu sync.Mutex
mu sync.RWMutex
// indexed by host name
handlers map[string]*authHandler
onFetchRefreshToken OnFetchRefreshToken
}
// NewAuthorizer creates a Docker authorizer using the provided function to
@ -51,9 +53,10 @@ func NewAuthorizer(client *http.Client, f func(string) (string, string, error))
}
type authorizerConfig struct {
credentials func(string) (string, string, error)
client *http.Client
header http.Header
credentials func(string) (string, string, error)
client *http.Client
header http.Header
onFetchRefreshToken OnFetchRefreshToken
}
// AuthorizerOpt configures an authorizer
@ -80,6 +83,16 @@ func WithAuthHeader(hdr http.Header) AuthorizerOpt {
}
}
// OnFetchRefreshToken is called on fetching request token.
type OnFetchRefreshToken func(ctx context.Context, refreshToken string, req *http.Request)
// WithFetchRefreshToken enables fetching "refresh token" (aka "identity token", "offline token").
func WithFetchRefreshToken(f OnFetchRefreshToken) AuthorizerOpt {
return func(opt *authorizerConfig) {
opt.onFetchRefreshToken = f
}
}
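A hedged sketch wiring the new WithFetchRefreshToken option into an authorizer; WithAuthCreds is assumed to be the existing credential option, and the callback body is only a placeholder:

    package example

    import (
        "context"
        "net/http"

        "github.com/containerd/containerd/remotes/docker"
    )

    // newAuthorizer builds an authorizer that surfaces the refresh token the
    // registry returns alongside the bearer token.
    func newAuthorizer(creds func(host string) (string, string, error)) docker.Authorizer {
        return docker.NewDockerAuthorizer(
            docker.WithAuthCreds(creds),
            docker.WithFetchRefreshToken(func(ctx context.Context, refreshToken string, req *http.Request) {
                // persist refreshToken for later offline use, e.g. in a credential store
            }),
        )
    }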
// NewDockerAuthorizer creates an authorizer using Docker's registry
// authentication spec.
// See https://docs.docker.com/registry/spec/auth/
@ -94,10 +107,11 @@ func NewDockerAuthorizer(opts ...AuthorizerOpt) Authorizer {
}
return &dockerAuthorizer{
credentials: ao.credentials,
client: ao.client,
header: ao.header,
handlers: make(map[string]*authHandler),
credentials: ao.credentials,
client: ao.client,
header: ao.header,
handlers: make(map[string]*authHandler),
onFetchRefreshToken: ao.onFetchRefreshToken,
}
}
@ -109,12 +123,21 @@ func (a *dockerAuthorizer) Authorize(ctx context.Context, req *http.Request) err
return nil
}
auth, err := ah.authorize(ctx)
auth, refreshToken, err := ah.authorize(ctx)
if err != nil {
return err
}
req.Header.Set("Authorization", auth)
if refreshToken != "" {
a.mu.RLock()
onFetchRefreshToken := a.onFetchRefreshToken
a.mu.RUnlock()
if onFetchRefreshToken != nil {
onFetchRefreshToken(ctx, refreshToken, req)
}
}
return nil
}
@ -161,6 +184,7 @@ func (a *dockerAuthorizer) AddResponses(ctx context.Context, responses []*http.R
if err != nil {
return err
}
common.FetchRefreshToken = a.onFetchRefreshToken != nil
a.handlers[host] = newAuthHandler(a.client, a.header, c.Scheme, common)
return nil
@ -181,14 +205,15 @@ func (a *dockerAuthorizer) AddResponses(ctx context.Context, responses []*http.R
}
}
}
return errors.Wrap(errdefs.ErrNotImplemented, "failed to find supported auth scheme")
return fmt.Errorf("failed to find supported auth scheme: %w", errdefs.ErrNotImplemented)
}
// authResult is used to control limit rate.
type authResult struct {
sync.WaitGroup
token string
err error
token string
refreshToken string
err error
}
// authHandler is used to handle auth request per registry server.
@ -220,29 +245,29 @@ func newAuthHandler(client *http.Client, hdr http.Header, scheme auth.Authentica
}
}
func (ah *authHandler) authorize(ctx context.Context) (string, error) {
func (ah *authHandler) authorize(ctx context.Context) (string, string, error) {
switch ah.scheme {
case auth.BasicAuth:
return ah.doBasicAuth(ctx)
case auth.BearerAuth:
return ah.doBearerAuth(ctx)
default:
return "", errors.Wrapf(errdefs.ErrNotImplemented, "failed to find supported auth scheme: %s", string(ah.scheme))
return "", "", fmt.Errorf("failed to find supported auth scheme: %s: %w", string(ah.scheme), errdefs.ErrNotImplemented)
}
}
func (ah *authHandler) doBasicAuth(ctx context.Context) (string, error) {
func (ah *authHandler) doBasicAuth(ctx context.Context) (string, string, error) {
username, secret := ah.common.Username, ah.common.Secret
if username == "" || secret == "" {
return "", fmt.Errorf("failed to handle basic auth because missing username or secret")
return "", "", fmt.Errorf("failed to handle basic auth because missing username or secret")
}
auth := base64.StdEncoding.EncodeToString([]byte(username + ":" + secret))
return fmt.Sprintf("Basic %s", auth), nil
return fmt.Sprintf("Basic %s", auth), "", nil
}
func (ah *authHandler) doBearerAuth(ctx context.Context) (token string, err error) {
func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken string, err error) {
// copy common tokenOptions
to := ah.common
@ -255,7 +280,7 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token string, err erro
if r, exist := ah.scopedTokens[scoped]; exist {
ah.Unlock()
r.Wait()
return r.token, r.err
return r.token, r.refreshToken, r.err
}
// only one fetch token job
@ -266,14 +291,16 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token string, err erro
defer func() {
token = fmt.Sprintf("Bearer %s", token)
r.token, r.err = token, err
r.token, r.refreshToken, r.err = token, refreshToken, err
r.Done()
}()
// fetch token for the resource scope
if to.Secret != "" {
defer func() {
err = errors.Wrap(err, "failed to fetch oauth token")
if err != nil {
err = fmt.Errorf("failed to fetch oauth token: %w", err)
}
}()
// credential information is provided, use oauth POST endpoint
// TODO: Allow setting client_id
@ -284,28 +311,29 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token string, err erro
// Registries without support for POST may return 404 for POST /v2/token.
// As of September 2017, GCR is known to return 404.
// As of February 2018, JFrog Artifactory is known to return 401.
if (errStatus.StatusCode == 405 && to.Username != "") || errStatus.StatusCode == 404 || errStatus.StatusCode == 401 {
// As of January 2022, ACR is known to return 400.
if (errStatus.StatusCode == 405 && to.Username != "") || errStatus.StatusCode == 404 || errStatus.StatusCode == 401 || errStatus.StatusCode == 400 {
resp, err := auth.FetchToken(ctx, ah.client, ah.header, to)
if err != nil {
return "", err
return "", "", err
}
return resp.Token, nil
return resp.Token, resp.RefreshToken, nil
}
log.G(ctx).WithFields(logrus.Fields{
"status": errStatus.Status,
"body": string(errStatus.Body),
}).Debugf("token request failed")
}
return "", err
return "", "", err
}
return resp.AccessToken, nil
return resp.AccessToken, resp.RefreshToken, nil
}
// do request anonymously
resp, err := auth.FetchToken(ctx, ah.client, ah.header, to)
if err != nil {
return "", errors.Wrap(err, "failed to fetch anonymous token")
return "", "", fmt.Errorf("failed to fetch anonymous token: %w", err)
}
return resp.Token, nil
return resp.Token, resp.RefreshToken, nil
}
func invalidAuthorization(c auth.Challenge, responses []*http.Response) error {
@ -319,7 +347,7 @@ func invalidAuthorization(c auth.Challenge, responses []*http.Response) error {
return nil
}
return errors.Wrapf(ErrInvalidAuthorization, "server message: %s", errStr)
return fmt.Errorf("server message: %s: %w", errStr, ErrInvalidAuthorization)
}
func sameRequest(r1, r2 *http.Request) bool {

View File

@ -28,7 +28,6 @@ import (
"github.com/containerd/containerd/remotes"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// LegacyConfigMediaType should be replaced by OCI image spec.
@ -52,12 +51,12 @@ func ConvertManifest(ctx context.Context, store content.Store, desc ocispec.Desc
// read manifest data
mb, err := content.ReadBlob(ctx, store, desc)
if err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to read index data")
return ocispec.Descriptor{}, fmt.Errorf("failed to read index data: %w", err)
}
var manifest ocispec.Manifest
if err := json.Unmarshal(mb, &manifest); err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to unmarshal data into manifest")
return ocispec.Descriptor{}, fmt.Errorf("failed to unmarshal data into manifest: %w", err)
}
// check config media type
@ -68,7 +67,7 @@ func ConvertManifest(ctx context.Context, store content.Store, desc ocispec.Desc
manifest.Config.MediaType = images.MediaTypeDockerSchema2Config
data, err := json.MarshalIndent(manifest, "", " ")
if err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to marshal manifest")
return ocispec.Descriptor{}, fmt.Errorf("failed to marshal manifest: %w", err)
}
// update manifest with gc labels
@ -82,7 +81,7 @@ func ConvertManifest(ctx context.Context, store content.Store, desc ocispec.Desc
ref := remotes.MakeRefKey(ctx, desc)
if err := content.WriteBlob(ctx, store, ref, bytes.NewReader(data), desc, content.WithLabels(labels)); err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to update content")
return ocispec.Descriptor{}, fmt.Errorf("failed to update content: %w", err)
}
return desc, nil
}

View File

@ -19,9 +19,9 @@ package docker
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
@ -30,7 +30,6 @@ import (
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type dockerFetcher struct {
@ -42,7 +41,7 @@ func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.R
hosts := r.filterHosts(HostCapabilityPull)
if len(hosts) == 0 {
return nil, errors.Wrap(errdefs.ErrNotFound, "no pull hosts")
return nil, fmt.Errorf("no pull hosts: %w", errdefs.ErrNotFound)
}
ctx, err := ContextWithRepositoryScope(ctx, r.refspec, false)
@ -142,9 +141,9 @@ func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.R
}
if errdefs.IsNotFound(firstErr) {
firstErr = errors.Wrapf(errdefs.ErrNotFound,
"could not fetch content descriptor %v (%v) from remote",
desc.Digest, desc.MediaType)
firstErr = fmt.Errorf("could not fetch content descriptor %v (%v) from remote: %w",
desc.Digest, desc.MediaType, errdefs.ErrNotFound,
)
}
return nil, firstErr
@ -179,19 +178,19 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
// implementation.
if resp.StatusCode == http.StatusNotFound {
return nil, errors.Wrapf(errdefs.ErrNotFound, "content at %v not found", req.String())
return nil, fmt.Errorf("content at %v not found: %w", req.String(), errdefs.ErrNotFound)
}
var registryErr Errors
if err := json.NewDecoder(resp.Body).Decode(&registryErr); err != nil || registryErr.Len() < 1 {
return nil, errors.Errorf("unexpected status code %v: %v", req.String(), resp.Status)
return nil, fmt.Errorf("unexpected status code %v: %v", req.String(), resp.Status)
}
return nil, errors.Errorf("unexpected status code %v: %s - Server message: %s", req.String(), resp.Status, registryErr.Error())
return nil, fmt.Errorf("unexpected status code %v: %s - Server message: %s", req.String(), resp.Status, registryErr.Error())
}
if offset > 0 {
cr := resp.Header.Get("content-range")
if cr != "" {
if !strings.HasPrefix(cr, fmt.Sprintf("bytes %d-", offset)) {
return nil, errors.Errorf("unhandled content range in response: %v", cr)
return nil, fmt.Errorf("unhandled content range in response: %v", cr)
}
} else {
@ -201,12 +200,12 @@ func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string,
// Discard up to offset
// Could use buffer pool here but this case should be rare
n, err := io.Copy(ioutil.Discard, io.LimitReader(resp.Body, offset))
n, err := io.Copy(io.Discard, io.LimitReader(resp.Body, offset))
if err != nil {
return nil, errors.Wrap(err, "failed to discard to offset")
return nil, fmt.Errorf("failed to discard to offset: %w", err)
}
if n != offset {
return nil, errors.Errorf("unable to discard to offset")
return nil, errors.New("unable to discard to offset")
}
}

View File

@ -18,12 +18,11 @@ package docker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
"github.com/pkg/errors"
)
const maxRetry = 3
@ -70,7 +69,7 @@ func (hrs *httpReadSeeker) Read(p []byte) (n int, err error) {
}
if hrs.rc != nil {
if clsErr := hrs.rc.Close(); clsErr != nil {
log.L.WithError(clsErr).Errorf("httpReadSeeker: failed to close ReadCloser")
log.L.WithError(clsErr).Error("httpReadSeeker: failed to close ReadCloser")
}
hrs.rc = nil
}
@ -95,7 +94,7 @@ func (hrs *httpReadSeeker) Close() error {
func (hrs *httpReadSeeker) Seek(offset int64, whence int) (int64, error) {
if hrs.closed {
return 0, errors.Wrap(errdefs.ErrUnavailable, "Fetcher.Seek: closed")
return 0, fmt.Errorf("Fetcher.Seek: closed: %w", errdefs.ErrUnavailable)
}
abs := hrs.offset
@ -106,21 +105,21 @@ func (hrs *httpReadSeeker) Seek(offset int64, whence int) (int64, error) {
abs += offset
case io.SeekEnd:
if hrs.size == -1 {
return 0, errors.Wrap(errdefs.ErrUnavailable, "Fetcher.Seek: unknown size, cannot seek from end")
return 0, fmt.Errorf("Fetcher.Seek: unknown size, cannot seek from end: %w", errdefs.ErrUnavailable)
}
abs = hrs.size + offset
default:
return 0, errors.Wrap(errdefs.ErrInvalidArgument, "Fetcher.Seek: invalid whence")
return 0, fmt.Errorf("Fetcher.Seek: invalid whence: %w", errdefs.ErrInvalidArgument)
}
if abs < 0 {
return 0, errors.Wrapf(errdefs.ErrInvalidArgument, "Fetcher.Seek: negative offset")
return 0, fmt.Errorf("Fetcher.Seek: negative offset: %w", errdefs.ErrInvalidArgument)
}
if abs != hrs.offset {
if hrs.rc != nil {
if err := hrs.rc.Close(); err != nil {
log.L.WithError(err).Errorf("Fetcher.Seek: failed to close ReadCloser")
log.L.WithError(err).Error("Fetcher.Seek: failed to close ReadCloser")
}
hrs.rc = nil
@ -141,17 +140,17 @@ func (hrs *httpReadSeeker) reader() (io.Reader, error) {
// only try to reopen the body request if we are seeking to a value
// less than the actual size.
if hrs.open == nil {
return nil, errors.Wrapf(errdefs.ErrNotImplemented, "cannot open")
return nil, fmt.Errorf("cannot open: %w", errdefs.ErrNotImplemented)
}
rc, err := hrs.open(hrs.offset)
if err != nil {
return nil, errors.Wrapf(err, "httpReadSeeker: failed open")
return nil, fmt.Errorf("httpReadSeeker: failed open: %w", err)
}
if hrs.rc != nil {
if err := hrs.rc.Close(); err != nil {
log.L.WithError(err).Errorf("httpReadSeeker: failed to close ReadCloser")
log.L.WithError(err).Error("httpReadSeeker: failed to close ReadCloser")
}
}
hrs.rc = rc
@ -162,7 +161,7 @@ func (hrs *httpReadSeeker) reader() (io.Reader, error) {
// as the length is already satisfied but we just return the empty
// reader instead.
hrs.rc = ioutil.NopCloser(bytes.NewReader([]byte{}))
hrs.rc = io.NopCloser(bytes.NewReader([]byte{}))
}
return hrs.rc, nil

View File

@ -18,8 +18,9 @@ package docker
import (
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
@ -33,7 +34,6 @@ import (
remoteserrors "github.com/containerd/containerd/remotes/errors"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type dockerPusher struct {
@ -56,7 +56,7 @@ func (p dockerPusher) Writer(ctx context.Context, opts ...content.WriterOpt) (co
}
}
if wOpts.Ref == "" {
return nil, errors.Wrap(errdefs.ErrInvalidArgument, "ref must not be empty")
return nil, fmt.Errorf("ref must not be empty: %w", errdefs.ErrInvalidArgument)
}
return p.push(ctx, wOpts.Desc, wOpts.Ref, true)
}
@ -77,22 +77,22 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
status, err := p.tracker.GetStatus(ref)
if err == nil {
if status.Committed && status.Offset == status.Total {
return nil, errors.Wrapf(errdefs.ErrAlreadyExists, "ref %v", ref)
return nil, fmt.Errorf("ref %v: %w", ref, errdefs.ErrAlreadyExists)
}
if unavailableOnFail {
if unavailableOnFail && status.ErrClosed == nil {
// Another push of this ref is happening elsewhere. The rest of the function
// will continue only when `errdefs.IsNotFound(err) == true` (i.e. there
// is no actively-tracked ref already).
return nil, errors.Wrap(errdefs.ErrUnavailable, "push is on-going")
return nil, fmt.Errorf("push is on-going: %w", errdefs.ErrUnavailable)
}
// TODO: Handle incomplete status
} else if !errdefs.IsNotFound(err) {
return nil, errors.Wrap(err, "failed to get status")
return nil, fmt.Errorf("failed to get status: %w", err)
}
hosts := p.filterHosts(HostCapabilityPush)
if len(hosts) == 0 {
return nil, errors.Wrap(errdefs.ErrNotFound, "no push hosts")
return nil, fmt.Errorf("no push hosts: %w", errdefs.ErrNotFound)
}
var (
@ -144,7 +144,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
},
})
resp.Body.Close()
return nil, errors.Wrapf(errdefs.ErrAlreadyExists, "content %v on remote", desc.Digest)
return nil, fmt.Errorf("content %v on remote: %w", desc.Digest, errdefs.ErrAlreadyExists)
}
} else if resp.StatusCode != http.StatusNotFound {
err := remoteserrors.NewUnexpectedStatusErr(resp)
@ -206,7 +206,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
Offset: desc.Size,
},
})
return nil, errors.Wrapf(errdefs.ErrAlreadyExists, "content %v on remote", desc.Digest)
return nil, fmt.Errorf("content %v on remote: %w", desc.Digest, errdefs.ErrAlreadyExists)
default:
err := remoteserrors.NewUnexpectedStatusErr(resp)
log.G(ctx).WithField("resp", resp).WithField("body", string(err.(remoteserrors.ErrUnexpectedStatus).Body)).Debug("unexpected response")
@ -222,7 +222,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
if strings.HasPrefix(location, "/") {
lurl, err = url.Parse(lhost.Scheme + "://" + lhost.Host + location)
if err != nil {
return nil, errors.Wrapf(err, "unable to parse location %v", location)
return nil, fmt.Errorf("unable to parse location %v: %w", location, err)
}
} else {
if !strings.Contains(location, "://") {
@ -230,7 +230,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
}
lurl, err = url.Parse(location)
if err != nil {
return nil, errors.Wrapf(err, "unable to parse location %v", location)
return nil, fmt.Errorf("unable to parse location %v: %w", location, err)
}
if lurl.Host != lhost.Host || lhost.Scheme != lurl.Scheme {
@ -263,7 +263,7 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
pr, pw := io.Pipe()
respC := make(chan response, 1)
body := ioutil.NopCloser(pr)
body := io.NopCloser(pr)
req.body = func() (io.ReadCloser, error) {
if body == nil {
@ -355,6 +355,12 @@ func (pw *pushWriter) Write(p []byte) (n int, err error) {
}
func (pw *pushWriter) Close() error {
status, err := pw.tracker.GetStatus(pw.ref)
if err == nil && !status.Committed {
// Closing an incomplete writer. Record this as an error so that following write can retry it.
status.ErrClosed = errors.New("closed incomplete writer")
pw.tracker.SetStatus(pw.ref, status)
}
return pw.pipe.Close()
}
@ -375,7 +381,7 @@ func (pw *pushWriter) Digest() digest.Digest {
func (pw *pushWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
// Check whether read has already thrown an error
if _, err := pw.pipe.Write([]byte{}); err != nil && err != io.ErrClosedPipe {
return errors.Wrap(err, "pipe error before commit")
return fmt.Errorf("pipe error before commit: %w", err)
}
if err := pw.pipe.Close(); err != nil {
@ -398,11 +404,11 @@ func (pw *pushWriter) Commit(ctx context.Context, size int64, expected digest.Di
status, err := pw.tracker.GetStatus(pw.ref)
if err != nil {
return errors.Wrap(err, "failed to get status")
return fmt.Errorf("failed to get status: %w", err)
}
if size > 0 && size != status.Offset {
return errors.Errorf("unexpected size %d, expected %d", status.Offset, size)
return fmt.Errorf("unexpected size %d, expected %d", status.Offset, size)
}
if expected == "" {
@ -411,11 +417,11 @@ func (pw *pushWriter) Commit(ctx context.Context, size int64, expected digest.Di
actual, err := digest.Parse(resp.Header.Get("Docker-Content-Digest"))
if err != nil {
return errors.Wrap(err, "invalid content digest in response")
return fmt.Errorf("invalid content digest in response: %w", err)
}
if actual != expected {
return errors.Errorf("got digest %s, expected %s", actual, expected)
return fmt.Errorf("got digest %s, expected %s", actual, expected)
}
status.Committed = true

View File

@ -17,10 +17,9 @@
package docker
import (
"errors"
"net"
"net/http"
"github.com/pkg/errors"
)
// HostCapabilities represent the capabilities of the registry

View File

@ -18,9 +18,9 @@ package docker
import (
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"path"
@ -35,7 +35,6 @@ import (
"github.com/containerd/containerd/version"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/context/ctxhttp"
)
@ -255,7 +254,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
hosts := base.filterHosts(caps)
if len(hosts) == 0 {
return "", ocispec.Descriptor{}, errors.Wrap(errdefs.ErrNotFound, "no resolve hosts")
return "", ocispec.Descriptor{}, fmt.Errorf("no resolve hosts: %w", errdefs.ErrNotFound)
}
ctx, err = ContextWithRepositoryScope(ctx, refspec, false)
@ -280,7 +279,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
resp, err := req.doWithRetries(ctx, nil)
if err != nil {
if errors.Is(err, ErrInvalidAuthorization) {
err = errors.Wrapf(err, "pull access denied, repository does not exist or may require authorization")
err = fmt.Errorf("pull access denied, repository does not exist or may require authorization: %w", err)
}
// Store the error for referencing later
if firstErr == nil {
@ -299,11 +298,11 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
if resp.StatusCode > 399 {
// Set firstErr when encountering the first non-404 status code.
if firstErr == nil {
firstErr = errors.Errorf("pulling from host %s failed with status code %v: %v", host.Host, u, resp.Status)
firstErr = fmt.Errorf("pulling from host %s failed with status code %v: %v", host.Host, u, resp.Status)
}
continue // try another host
}
return "", ocispec.Descriptor{}, errors.Errorf("pulling from host %s failed with unexpected status code %v: %v", host.Host, u, resp.Status)
return "", ocispec.Descriptor{}, fmt.Errorf("pulling from host %s failed with unexpected status code %v: %v", host.Host, u, resp.Status)
}
size := resp.ContentLength
contentType := getManifestMediaType(resp)
@ -319,7 +318,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
if dgstHeader != "" && size != -1 {
if err := dgstHeader.Validate(); err != nil {
return "", ocispec.Descriptor{}, errors.Wrapf(err, "%q in header not a valid digest", dgstHeader)
return "", ocispec.Descriptor{}, fmt.Errorf("%q in header not a valid digest: %w", dgstHeader, err)
}
dgst = dgstHeader
}
@ -359,7 +358,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
return "", ocispec.Descriptor{}, err
}
}
} else if _, err := io.Copy(ioutil.Discard, &bodyReader); err != nil {
} else if _, err := io.Copy(io.Discard, &bodyReader); err != nil {
return "", ocispec.Descriptor{}, err
}
size = bodyReader.bytesRead
@ -367,7 +366,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
// Prevent resolving to excessively large manifests
if size > MaxManifestSize {
if firstErr == nil {
firstErr = errors.Wrapf(errdefs.ErrNotFound, "rejecting %d byte manifest for %s", size, ref)
firstErr = fmt.Errorf("rejecting %d byte manifest for %s: %w", size, ref, errdefs.ErrNotFound)
}
continue
}
@ -388,7 +387,7 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
// means that either no registries were given or each registry returned 404.
if firstErr == nil {
firstErr = errors.Wrap(errdefs.ErrNotFound, ref)
firstErr = fmt.Errorf("%s: %w", ref, errdefs.ErrNotFound)
}
return "", ocispec.Descriptor{}, firstErr
@ -548,7 +547,7 @@ func (r *request) do(ctx context.Context) (*http.Response, error) {
ctx = log.WithLogger(ctx, log.G(ctx).WithField("url", u))
log.G(ctx).WithFields(requestFields(req)).Debug("do request")
if err := r.authorize(ctx, req); err != nil {
return nil, errors.Wrap(err, "failed to authorize")
return nil, fmt.Errorf("failed to authorize: %w", err)
}
var client = &http.Client{}
@ -560,13 +559,16 @@ func (r *request) do(ctx context.Context) (*http.Response, error) {
if len(via) >= 10 {
return errors.New("stopped after 10 redirects")
}
return errors.Wrap(r.authorize(ctx, req), "failed to authorize redirect")
if err := r.authorize(ctx, req); err != nil {
return fmt.Errorf("failed to authorize redirect: %w", err)
}
return nil
}
}
resp, err := ctxhttp.Do(ctx, client, req)
if err != nil {
return nil, errors.Wrap(err, "failed to do request")
return nil, fmt.Errorf("failed to do request: %w", err)
}
log.G(ctx).WithFields(responseFields(resp)).Debug("fetch response received")
return resp, nil
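For orientation, a minimal sketch of resolving a reference through the resolver these hunks touch; options are left empty and error handling is condensed:

    package example

    import (
        "context"

        "github.com/containerd/containerd/remotes/docker"
    )

    // resolve looks up a reference; errors now come back as fmt.Errorf-wrapped
    // errdefs sentinels, so errdefs.IsNotFound-style checks keep working.
    func resolve(ctx context.Context, ref string) error {
        resolver := docker.NewResolver(docker.ResolverOptions{})
        name, desc, err := resolver.Resolve(ctx, ref)
        if err != nil {
            return err
        }
        _, _ = name, desc
        return nil
    }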

View File

@ -21,16 +21,14 @@ import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/sync/errgroup"
"github.com/containerd/containerd/archive/compression"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/errdefs"
@ -40,7 +38,7 @@ import (
digest "github.com/opencontainers/go-digest"
specs "github.com/opencontainers/image-spec/specs-go"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
const (
@ -159,12 +157,12 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
history, diffIDs, err := c.schema1ManifestHistory()
if err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "schema 1 conversion failed")
return ocispec.Descriptor{}, fmt.Errorf("schema 1 conversion failed: %w", err)
}
var img ocispec.Image
if err := json.Unmarshal([]byte(c.pulledManifest.History[0].V1Compatibility), &img); err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to unmarshal image from schema 1 history")
return ocispec.Descriptor{}, fmt.Errorf("failed to unmarshal image from schema 1 history: %w", err)
}
img.History = history
@ -175,7 +173,7 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
b, err := json.MarshalIndent(img, "", " ")
if err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to marshal image")
return ocispec.Descriptor{}, fmt.Errorf("failed to marshal image: %w", err)
}
config := ocispec.Descriptor{
@ -199,7 +197,7 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
mb, err := json.MarshalIndent(manifest, "", " ")
if err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to marshal image")
return ocispec.Descriptor{}, fmt.Errorf("failed to marshal image: %w", err)
}
desc := ocispec.Descriptor{
@ -216,12 +214,12 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
ref := remotes.MakeRefKey(ctx, desc)
if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(mb), desc, content.WithLabels(labels)); err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to write image manifest")
return ocispec.Descriptor{}, fmt.Errorf("failed to write image manifest: %w", err)
}
ref = remotes.MakeRefKey(ctx, config)
if err := content.WriteBlob(ctx, c.contentStore, ref, bytes.NewReader(b), config); err != nil {
return ocispec.Descriptor{}, errors.Wrap(err, "failed to write image config")
return ocispec.Descriptor{}, fmt.Errorf("failed to write image config: %w", err)
}
return desc, nil
@ -230,7 +228,7 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
// ReadStripSignature reads in a schema1 manifest and returns a byte array
// with the "signatures" field stripped
func ReadStripSignature(schema1Blob io.Reader) ([]byte, error) {
b, err := ioutil.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
b, err := io.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
if err != nil {
return nil, err
}
@ -350,7 +348,7 @@ func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) erro
if desc.Size == -1 {
info, err := c.contentStore.Info(ctx, desc.Digest)
if err != nil {
return errors.Wrap(err, "failed to get blob info")
return fmt.Errorf("failed to get blob info: %w", err)
}
desc.Size = info.Size
}
@ -371,7 +369,7 @@ func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) erro
}
if _, err := c.contentStore.Update(ctx, cinfo, "labels.containerd.io/uncompressed", fmt.Sprintf("labels.%s", labelDockerSchema1EmptyLayer)); err != nil {
return errors.Wrap(err, "failed to update uncompressed label")
return fmt.Errorf("failed to update uncompressed label: %w", err)
}
c.mu.Lock()
@ -385,7 +383,7 @@ func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) erro
func (c *Converter) reuseLabelBlobState(ctx context.Context, desc ocispec.Descriptor) (bool, error) {
cinfo, err := c.contentStore.Info(ctx, desc.Digest)
if err != nil {
return false, errors.Wrap(err, "failed to get blob info")
return false, fmt.Errorf("failed to get blob info: %w", err)
}
desc.Size = cinfo.Size
@ -442,7 +440,7 @@ func (c *Converter) schema1ManifestHistory() ([]ocispec.History, []digest.Digest
for i := range m.History {
var h v1History
if err := json.Unmarshal([]byte(m.History[i].V1Compatibility), &h); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal history")
return nil, nil, fmt.Errorf("failed to unmarshal history: %w", err)
}
blobSum := m.FSLayers[i].BlobSum
@ -554,7 +552,7 @@ func stripSignature(b []byte) ([]byte, error) {
}
pb, err := joseBase64UrlDecode(sig.Signatures[0].Protected)
if err != nil {
return nil, errors.Wrapf(err, "could not decode %s", sig.Signatures[0].Protected)
return nil, fmt.Errorf("could not decode %s: %w", sig.Signatures[0].Protected, err)
}
var protected protectedBlock
@ -568,7 +566,7 @@ func stripSignature(b []byte) ([]byte, error) {
tail, err := joseBase64UrlDecode(protected.Tail)
if err != nil {
return nil, errors.Wrap(err, "invalid tail base 64 value")
return nil, fmt.Errorf("invalid tail base 64 value: %w", err)
}
return append(b[:protected.Length], tail...), nil

View File

@ -74,7 +74,7 @@ func ContextWithAppendPullRepositoryScope(ctx context.Context, repo string) cont
// GetTokenScopes returns deduplicated and sorted scopes from ctx.Value(tokenScopesKey{}) and common scopes.
func GetTokenScopes(ctx context.Context, common []string) []string {
var scopes []string
scopes := []string{}
if x := ctx.Value(tokenScopesKey{}); x != nil {
scopes = append(scopes, x.([]string)...)
}
@ -82,6 +82,10 @@ func GetTokenScopes(ctx context.Context, common []string) []string {
scopes = append(scopes, common...)
sort.Strings(scopes)
if len(scopes) == 0 {
return scopes
}
l := 0
for idx := 1; idx < len(scopes); idx++ {
// Note: this comparison is unaware of the scope grammar (https://docs.docker.com/registry/spec/auth/scope/)

View File

@ -17,12 +17,12 @@
package docker
import (
"fmt"
"sync"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/errdefs"
"github.com/moby/locker"
"github.com/pkg/errors"
)
// Status of a content operation
@ -31,6 +31,9 @@ type Status struct {
Committed bool
// ErrClosed contains error encountered on close.
ErrClosed error
// UploadUUID is used by the Docker registry to reference blob uploads
UploadUUID string
}
@ -67,7 +70,7 @@ func (t *memoryStatusTracker) GetStatus(ref string) (Status, error) {
defer t.m.Unlock()
status, ok := t.statuses[ref]
if !ok {
return Status{}, errors.Wrapf(errdefs.ErrNotFound, "status for ref %v", ref)
return Status{}, fmt.Errorf("status for ref %v: %w", ref, errdefs.ErrNotFound)
}
return status, nil
}

View File

@ -19,7 +19,6 @@ package errors
import (
"fmt"
"io"
"io/ioutil"
"net/http"
)
@ -41,7 +40,7 @@ func (e ErrUnexpectedStatus) Error() string {
func NewUnexpectedStatusErr(resp *http.Response) error {
var b []byte
if resp.Body != nil {
b, _ = ioutil.ReadAll(io.LimitReader(resp.Body, 64000)) // 64KB
b, _ = io.ReadAll(io.LimitReader(resp.Body, 64000)) // 64KB
}
err := ErrUnexpectedStatus{
Body: b,

View File

@ -18,6 +18,7 @@ package remotes
import (
"context"
"errors"
"fmt"
"io"
"strings"
@ -29,7 +30,6 @@ import (
"github.com/containerd/containerd/log"
"github.com/containerd/containerd/platforms"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
)
@ -127,13 +127,13 @@ func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc
// most likely a poorly configured registry/web front end which responded with no
// Content-Length header; unable (not to mention useless) to commit a 0-length entry
// into the content store. Error out here otherwise the error sent back is confusing
return errors.Wrapf(errdefs.ErrInvalidArgument, "unable to fetch descriptor (%s) which reports content size of zero", desc.Digest)
return fmt.Errorf("unable to fetch descriptor (%s) which reports content size of zero: %w", desc.Digest, errdefs.ErrInvalidArgument)
}
if ws.Offset == desc.Size {
// If writer is already complete, commit and return
err := cw.Commit(ctx, desc.Size, desc.Digest)
if err != nil && !errdefs.IsAlreadyExists(err) {
return errors.Wrapf(err, "failed commit on ref %q", ws.Ref)
return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
}
return nil
}
@ -243,8 +243,8 @@ func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, st
// as a marker for this problem
if (manifestStack[i].MediaType == ocispec.MediaTypeImageIndex ||
manifestStack[i].MediaType == images.MediaTypeDockerSchema2ManifestList) &&
errors.Cause(err) != nil && strings.Contains(errors.Cause(err).Error(), "400 Bad Request") {
return errors.Wrap(err, "manifest list/index references to blobs and/or manifests are missing in your target registry")
errors.Unwrap(err) != nil && strings.Contains(errors.Unwrap(err).Error(), "400 Bad Request") {
return fmt.Errorf("manifest list/index references to blobs and/or manifests are missing in your target registry: %w", err)
}
return err
}
@ -253,6 +253,43 @@ func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, st
return nil
}
// SkipNonDistributableBlobs returns a handler that skips blobs that have a media type that is "non-distributable".
// An example of this kind of content would be a Windows base layer, which is not supposed to be redistributed.
//
// This is based on the media type of the content:
// - application/vnd.oci.image.layer.nondistributable
// - application/vnd.docker.image.rootfs.foreign
func SkipNonDistributableBlobs(f images.HandlerFunc) images.HandlerFunc {
return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
if images.IsNonDistributable(desc.MediaType) {
log.G(ctx).WithField("digest", desc.Digest).WithField("mediatype", desc.MediaType).Debug("Skipping non-distributable blob")
return nil, images.ErrSkipDesc
}
if images.IsLayerType(desc.MediaType) {
return nil, nil
}
children, err := f(ctx, desc)
if err != nil {
return nil, err
}
if len(children) == 0 {
return nil, nil
}
out := make([]ocispec.Descriptor, 0, len(children))
for _, child := range children {
if !images.IsNonDistributable(child.MediaType) {
out = append(out, child)
} else {
log.G(ctx).WithField("digest", child.Digest).WithField("mediatype", child.MediaType).Debug("Skipping non-distributable blob")
}
}
return out, nil
}
}
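The new handler is meant to wrap another `images.HandlerFunc`; a hedged sketch of one way a caller could combine it with a children handler before pushing follows. The function name and the `store`/`desc` parameters are assumptions for illustration, not code from this diff.

```go
import (
	"context"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/remotes"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// pushableWalk walks the descriptor tree rooted at desc, skipping
// non-distributable (foreign) layers so they are never visited.
func pushableWalk(ctx context.Context, store content.Store, desc ocispec.Descriptor) error {
	handler := remotes.SkipNonDistributableBlobs(images.ChildrenHandler(store))
	return images.Dispatch(ctx, handler, nil, desc)
}
```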
// FilterManifestByPlatformHandler allows Handler to handle non-target
// platform's manifest and configuration data.
func FilterManifestByPlatformHandler(f images.HandlerFunc, m platforms.Matcher) images.HandlerFunc {

View File

@ -23,7 +23,7 @@ var (
Package = "github.com/containerd/containerd"
// Version holds the complete version number. Filled in at linking time.
Version = "1.5.13+unknown"
Version = "1.6.6+unknown"
// Revision is filled with the VCS (e.g. git) revision being used to build
// the program at linking time.

View File

@ -4,10 +4,12 @@
language: go
go:
- 1.7.x
- 1.8.x
- 1.13.x
- 1.16.x
- tip
arch:
- AMD64
- ppc64le
os:
- linux
- osx

View File

@ -7,6 +7,19 @@ standard library][go#20126]. The purpose of this function is to be a "secure"
alternative to `filepath.Join`, and in particular it provides certain
guarantees that are not provided by `filepath.Join`.
> **NOTE**: This code is *only* safe if you are not at risk of other processes
> modifying path components after you've used `SecureJoin`. If it is possible
> for a malicious process to modify path components of the resolved path, then
> you will be vulnerable to some fairly trivial TOCTOU race conditions. [There
> are some Linux kernel patches I'm working on which might allow for a better
> solution.][lwn-obeneath]
>
> In addition, with a slightly modified API it might be possible to use
> `O_PATH` and verify that the opened path is actually the resolved one -- but
> I have not done that yet. I might add it in the future as a helper function
> to help users verify the path (we can't just return `/proc/self/fd/<foo>`
> because that doesn't always work transparently for all users).
This is the function prototype:
```go
@ -16,8 +29,8 @@ func SecureJoin(root, unsafePath string) (string, error)
This library **guarantees** the following:
* If no error is set, the resulting string **must** be a child path of
`SecureJoin` and will not contain any symlink path components (they will all
be expanded).
`root` and will not contain any symlink path components (they will all be
expanded).
* When expanding symlinks, all symlink path components **must** be resolved
relative to the provided root. In particular, this can be considered a
@ -25,7 +38,7 @@ This library **guarantees** the following:
these symlinks will **not** be expanded lexically (`filepath.Clean` is not
called on the input before processing).
* Non-existant path components are unaffected by `SecureJoin` (similar to
* Non-existent path components are unaffected by `SecureJoin` (similar to
`filepath.EvalSymlinks`'s semantics).
* The returned path will always be `filepath.Clean`ed and thus not contain any
@ -57,6 +70,7 @@ func SecureJoin(root, unsafePath string) (string, error) {
}
```
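To make the guarantees above concrete, here is a small, hypothetical usage sketch; the root and unsafe path values are made up:

```go
package main

import (
	"fmt"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// The traversal attempt cannot escape the root; symlinks encountered
	// along the way are resolved relative to the root as well.
	p, err := securejoin.SecureJoin("/var/lib/myapp/root", "../../etc/passwd")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // Prints: /var/lib/myapp/root/etc/passwd
}
```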
[lwn-obeneath]: https://lwn.net/Articles/767547/
[go#20126]: https://github.com/golang/go/issues/20126
### License ###

View File

@ -1 +1 @@
0.2.2
0.2.3

View File

@ -12,39 +12,20 @@ package securejoin
import (
"bytes"
"errors"
"os"
"path/filepath"
"strings"
"syscall"
"github.com/pkg/errors"
)
// ErrSymlinkLoop is returned by SecureJoinVFS when too many symlinks have been
// evaluated in attempting to securely join the two given paths.
var ErrSymlinkLoop = errors.Wrap(syscall.ELOOP, "secure join")
// IsNotExist tells you if err is an error that implies that either the path
// accessed does not exist (or path components don't exist). This is
// effectively a more broad version of os.IsNotExist.
func IsNotExist(err error) bool {
// If it's a bone-fide ENOENT just bail.
if os.IsNotExist(errors.Cause(err)) {
return true
}
// Check that it's not actually an ENOTDIR, which in some cases is a more
// convoluted case of ENOENT (usually involving weird paths).
var errno error
switch err := errors.Cause(err).(type) {
case *os.PathError:
errno = err.Err
case *os.LinkError:
errno = err.Err
case *os.SyscallError:
errno = err.Err
}
return errno == syscall.ENOTDIR || errno == syscall.ENOENT
return errors.Is(err, os.ErrNotExist) || errors.Is(err, syscall.ENOTDIR) || errors.Is(err, syscall.ENOENT)
}
// SecureJoinVFS joins the two given path components (similar to Join) except
@ -68,7 +49,7 @@ func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) {
n := 0
for unsafePath != "" {
if n > 255 {
return "", ErrSymlinkLoop
return "", &os.PathError{Op: "SecureJoin", Path: root + "/" + unsafePath, Err: syscall.ELOOP}
}
// Next path component, p.

View File

@ -1 +0,0 @@
github.com/pkg/errors v0.8.0

View File

@ -104,14 +104,18 @@ func LoadFromReader(configData io.Reader) (*configfile.ConfigFile, error) {
return &configFile, err
}
// TODO remove this temporary hack, which is used to warn about the deprecated ~/.dockercfg file
var printLegacyFileWarning bool
// Load reads the configuration files in the given directory, and sets up
// the auth config information and returns values.
// FIXME: use the internal golang config parser
func Load(configDir string) (*configfile.ConfigFile, error) {
printLegacyFileWarning = false
cfg, _, err := load(configDir)
return cfg, err
}
// TODO remove this temporary hack, which is used to warn about the deprecated ~/.dockercfg file
// so we can remove the bool return value and collapse this back into `Load`
func load(configDir string) (*configfile.ConfigFile, bool, error) {
printLegacyFileWarning := false
if configDir == "" {
configDir = Dir()
@ -127,11 +131,11 @@ func Load(configDir string) (*configfile.ConfigFile, error) {
if err != nil {
err = errors.Wrap(err, filename)
}
return configFile, err
return configFile, printLegacyFileWarning, err
} else if !os.IsNotExist(err) {
// if file is there but we can't stat it for any reason other
// than it doesn't exist then stop
return configFile, errors.Wrap(err, filename)
return configFile, printLegacyFileWarning, errors.Wrap(err, filename)
}
// Can't find latest config file so check for the old one
@ -140,16 +144,16 @@ func Load(configDir string) (*configfile.ConfigFile, error) {
printLegacyFileWarning = true
defer file.Close()
if err := configFile.LegacyLoadFromReader(file); err != nil {
return configFile, errors.Wrap(err, filename)
return configFile, printLegacyFileWarning, errors.Wrap(err, filename)
}
}
return configFile, nil
return configFile, printLegacyFileWarning, nil
}
// LoadDefaultConfigFile attempts to load the default config file and returns
// an initialized ConfigFile struct if none is found.
func LoadDefaultConfigFile(stderr io.Writer) *configfile.ConfigFile {
configFile, err := Load(Dir())
configFile, printLegacyFileWarning, err := load(Dir())
if err != nil {
fmt.Fprintf(stderr, "WARNING: Error loading config file: %v\n", err)
}

View File

@ -119,7 +119,7 @@ func (configFile *ConfigFile) LegacyLoadFromReader(configData io.Reader) error {
// LoadFromReader reads the configuration data given and sets up the auth config
// information with given directory and populates the receiver object
func (configFile *ConfigFile) LoadFromReader(configData io.Reader) error {
if err := json.NewDecoder(configData).Decode(&configFile); err != nil && !errors.Is(err, io.EOF) {
if err := json.NewDecoder(configData).Decode(configFile); err != nil && !errors.Is(err, io.EOF) {
return err
}
var err error

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package configfile

View File

@ -1,3 +1,4 @@
//go:build !windows && !darwin && !linux
// +build !windows,!darwin,!linux
package credentials

View File

@ -4,7 +4,8 @@ import (
"fmt"
"io"
"os"
"os/exec"
exec "golang.org/x/sys/execabs"
)
// Program is an interface to execute external programs.

View File

@ -1,4 +1,4 @@
package credentials
// Version holds a string describing the current version
const Version = "0.6.3"
const Version = "0.6.4"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"

View File

@ -1,78 +1,11 @@
package errdefs // import "github.com/docker/docker/errdefs"
import (
"fmt"
"net/http"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/docker/distribution/registry/api/errcode"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// GetHTTPErrorStatusCode retrieves status code from error message.
func GetHTTPErrorStatusCode(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Do one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case IsNotFound(err):
statusCode = http.StatusNotFound
case IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case IsConflict(err):
statusCode = http.StatusConflict
case IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case IsForbidden(err):
statusCode = http.StatusForbidden
case IsNotModified(err):
statusCode = http.StatusNotModified
case IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case IsSystem(err) || IsUnknown(err) || IsDataLoss(err) || IsDeadline(err) || IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return GetHTTPErrorStatusCode(e.Cause())
}
logrus.WithFields(logrus.Fields{
"module": "api",
"error_type": fmt.Sprintf("%T", err),
}).Debugf("FIXME: Got an API for which error does not match any expected type!!!: %+v", err)
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
// FromStatusCode creates an errdef error, based on the provided HTTP status-code
func FromStatusCode(err error, statusCode int) error {
if err == nil {
@ -100,10 +33,10 @@ func FromStatusCode(err error, statusCode int) error {
err = System(err)
}
default:
logrus.WithFields(logrus.Fields{
logrus.WithError(err).WithFields(logrus.Fields{
"module": "api",
"status_code": fmt.Sprintf("%d", statusCode),
}).Debugf("FIXME: Got an status-code for which error does not match any expected type!!!: %d", statusCode)
"status_code": statusCode,
}).Debug("FIXME: Got an status-code for which error does not match any expected type!!!")
switch {
case statusCode >= 200 && statusCode < 400:
@ -118,74 +51,3 @@ func FromStatusCode(err error, statusCode int) error {
}
return err
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case containerderrors.IsInvalidArgument(err):
return http.StatusBadRequest
case containerderrors.IsNotFound(err):
return http.StatusNotFound
case containerderrors.IsAlreadyExists(err):
return http.StatusConflict
case containerderrors.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case containerderrors.IsUnavailable(err):
return http.StatusServiceUnavailable
case containerderrors.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}

View File

@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package homedir // import "github.com/docker/docker/pkg/homedir"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package homedir // import "github.com/docker/docker/pkg/homedir"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package ioutils // import "github.com/docker/docker/pkg/ioutils"

View File

@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package registry // import "github.com/docker/docker/registry"

View File

@ -0,0 +1,71 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
restful.html
*.out
tmp.prof
go-restful.test
examples/restful-basic-authentication
examples/restful-encoding-filter
examples/restful-filters
examples/restful-hello-world
examples/restful-resource-functions
examples/restful-serve-static
examples/restful-user-service
*.DS_Store
examples/restful-user-resource
examples/restful-multi-containers
examples/restful-form-handling
examples/restful-CORS-filter
examples/restful-options-filter
examples/restful-curly-router
examples/restful-cpuprofiler-service
examples/restful-pre-post-filters
curly.prof
examples/restful-NCSA-logging
examples/restful-html-template
s.html
restful-path-tail
.idea

View File

@ -0,0 +1 @@
ignore

View File

@ -0,0 +1,13 @@
language: go
go:
- 1.x
before_install:
- go test -v
script:
- go test -race -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s https://codecov.io/bash)

372
src/vendor/github.com/emicklei/go-restful/v3/CHANGES.md generated vendored Normal file
View File

@ -0,0 +1,372 @@
# Change history of go-restful
## [v3.8.0] - 2022-06-06
- use exact matching of allowed domain entries, issue #489 (#493)
- this change fixes [security] Authorization Bypass Through User-Controlled Key
by changing the behaviour of the AllowedDomains setting in the CORS filter.
To support the previous behaviour, the CORS filter type now has an AllowedDomainFunc
callback mechanism which is called when a simple domain match fails.
- add test and fix for POST without body and Content-type, issue #492 (#496)
- [Minor] Bad practice to have a mix of Receiver types. (#491)
## [v3.7.2] - 2021-11-24
- restored FilterChain (#482 by SVilgelm)
## [v3.7.1] - 2021-10-04
- fix problem with contentEncodingEnabled setting (#479)
## [v3.7.0] - 2021-09-24
- feat(parameter): adds additional openapi mappings (#478)
## [v3.6.0] - 2021-09-18
- add support for vendor extensions (#477 thx erraggy)
## [v3.5.2] - 2021-07-14
- fix removing absent route from webservice (#472)
## [v3.5.1] - 2021-04-12
- fix handling no match access selected path
- remove obsolete field
## [v3.5.0] - 2021-04-10
- add check for wildcard (#463) in CORS
- add access to Route from Request, issue #459 (#462)
## [v3.4.0] - 2020-11-10
- Added OPTIONS to WebService
## [v3.3.2] - 2020-01-23
- Fixed duplicate compression in dispatch. #449
## [v3.3.1] - 2020-08-31
- Added check on writer to prevent compression of response twice. #447
## [v3.3.0] - 2020-08-19
- Enable content encoding on Handle and ServeHTTP (#446)
- List available representations in 406 body (#437)
- Convert to string using rune() (#443)
## [v3.2.0] - 2020-06-21
- 405 Method Not Allowed must have Allow header (#436) (thx Bracken <abdawson@gmail.com>)
- add field allowedMethodsWithoutContentType (#424)
## [v3.1.0]
- support describing response headers (#426)
- fix openapi examples (#425)
v3.0.0
- fix: use request/response resulting from filter chain
- add Go module
Module consumer should use github.com/emicklei/go-restful/v3 as import path
v2.10.0
- support for Custom Verbs (thanks Vinci Xu <277040271@qq.com>)
- fixed static example (thanks Arthur <yang_yapo@126.com>)
- simplify code (thanks Christian Muehlhaeuser <muesli@gmail.com>)
- added JWT HMAC with SHA-512 authentication code example (thanks Amim Knabben <amim.knabben@gmail.com>)
v2.9.6
- small optimization in filter code
v2.11.1
- fix WriteError return value (#415)
v2.11.0
- allow prefix and suffix in path variable expression (#414)
v2.9.6
- support google custom verb (#413)
v2.9.5
- fix panic in Response.WriteError if err == nil
v2.9.4
- fix issue #400 , parsing mime type quality
- Route Builder added option for contentEncodingEnabled (#398)
v2.9.3
- Avoid return of 415 Unsupported Media Type when request body is empty (#396)
v2.9.2
- Reduce allocations in per-request methods to improve performance (#395)
v2.9.1
- Fix issue with default responses and invalid status code 0. (#393)
v2.9.0
- add per Route content encoding setting (overrides container setting)
v2.8.0
- add Request.QueryParameters()
- add json-iterator (via build tag)
- disable vgo module (until log is moved)
v2.7.1
- add vgo module
v2.6.1
- add JSONNewDecoderFunc to allow custom JSON Decoder usage (go 1.10+)
v2.6.0
- Make JSR 311 routing and path param processing consistent
- Adding description to RouteBuilder.Reads()
- Update example for Swagger12 and OpenAPI
2017-09-13
- added route condition functions using `.If(func)` in route building.
2017-02-16
- solved issue #304, make operation names unique
2017-01-30
[IMPORTANT] For swagger users, change your import statement to:
swagger "github.com/emicklei/go-restful-swagger12"
- moved swagger 1.2 code to go-restful-swagger12
- created TAG 2.0.0
2017-01-27
- remove defer request body close
- expose Dispatch for testing filters and Routefunctions
- swagger response model cannot be array
- created TAG 1.0.0
2016-12-22
- (API change) Remove code related to caching request content. Removes SetCacheReadEntity(doCache bool)
2016-11-26
- Default change! now use CurlyRouter (was RouterJSR311)
- Default change! no more caching of request content
- Default change! do not recover from panics
2016-09-22
- fix the DefaultRequestContentType feature
2016-02-14
- take the quality factor of the Accept header mediatype into account when deciding the content type of the response
- add constructors for custom entity accessors for xml and json
2015-09-27
- rename new WriteStatusAnd... to WriteHeaderAnd... for consistency
2015-09-25
- fixed problem with changing Header after WriteHeader (issue 235)
2015-09-14
- changed behavior of WriteHeader (immediate write) and WriteEntity (no status write)
- added support for custom EntityReaderWriters.
2015-08-06
- add support for reading entities from compressed request content
- use sync.Pool for compressors of http response and request body
- add Description to Parameter for documentation in Swagger UI
2015-03-20
- add configurable logging
2015-03-18
- if not specified, the Operation is derived from the Route function
2015-03-17
- expose Parameter creation functions
- make trace logger an interface
- fix OPTIONSFilter
- customize rendering of ServiceError
- JSR311 router now handles wildcards
- add Notes to Route
2014-11-27
- (api add) PrettyPrint per response. (as proposed in #167)
2014-11-12
- (api add) ApiVersion(.) for documentation in Swagger UI
2014-11-10
- (api change) struct fields tagged with "description" show up in Swagger UI
2014-10-31
- (api change) ReturnsError -> Returns
- (api add) RouteBuilder.Do(aBuilder) for DRY use of RouteBuilder
- fix swagger nested structs
- sort Swagger response messages by code
2014-10-23
- (api add) ReturnsError allows you to document Http codes in swagger
- fixed problem with greedy CurlyRouter
- (api add) Access-Control-Max-Age in CORS
- add tracing functionality (injectable) for debugging purposes
- support JSON parse 64bit int
- fix empty parameters for swagger
- WebServicesUrl is now optional for swagger
- fixed duplicate AccessControlAllowOrigin in CORS
- (api change) expose ServeMux in container
- (api add) added AllowedDomains in CORS
- (api add) ParameterNamed for detailed documentation
2014-04-16
- (api add) expose constructor of Request for testing.
2014-06-27
- (api add) ParameterNamed gives access to a Parameter definition and its data (for further specification).
- (api add) SetCacheReadEntity allows control over whether or not the request body is being cached (default true for compatibility reasons).
2014-07-03
- (api add) CORS can be configured with a list of allowed domains
2014-03-12
- (api add) Route path parameters can use wildcard or regular expressions. (requires CurlyRouter)
2014-02-26
- (api add) Request now provides information about the matched Route, see method SelectedRoutePath
2014-02-17
- (api change) renamed parameter constants (go-lint checks)
2014-01-10
- (api add) support for CloseNotify, see http://golang.org/pkg/net/http/#CloseNotifier
2014-01-07
- (api change) Write* methods in Response now return the error or nil.
- added example of serving HTML from a Go template.
- fixed comparing Allowed headers in CORS (is now case-insensitive)
2013-11-13
- (api add) Response knows how many bytes are written to the response body.
2013-10-29
- (api add) RecoverHandler(handler RecoverHandleFunction) to change how panic recovery is handled. Default behavior is to log and return a stacktrace. This may be a security issue as it exposes sourcecode information.
2013-10-04
- (api add) Response knows what HTTP status has been written
- (api add) Request can have attributes (map of string->interface, also called request-scoped variables)
2013-09-12
- (api change) Router interface simplified
- Implemented CurlyRouter, a Router that does not use|allow regular expressions in paths
2013-08-05
- add OPTIONS support
- add CORS support
2013-08-27
- fixed some reported issues (see github)
- (api change) deprecated use of WriteError; use WriteErrorString instead
2014-04-15
- (fix) v1.0.1 tag: fix Issue 111: WriteErrorString
2013-08-08
- (api add) Added implementation Container: a WebServices collection with its own http.ServeMux allowing multiple endpoints per program. Existing uses of go-restful will register their services to the DefaultContainer.
- (api add) the swagger package has been extended to have a UI per container.
- if panic is detected then a small stack trace is printed (thanks to runner-mei)
- (api add) WriteErrorString to Response
Important API changes:
- (api remove) package variable DoNotRecover no longer works ; use restful.DefaultContainer.DoNotRecover(true) instead.
- (api remove) package variable EnableContentEncoding no longer works ; use restful.DefaultContainer.EnableContentEncoding(true) instead.
2013-07-06
- (api add) Added support for response encoding (gzip and deflate(zlib)). This feature is disabled on default (for backwards compatibility). Use restful.EnableContentEncoding = true in your initialization to enable this feature.
2013-06-19
- (improve) DoNotRecover option, moved request body closer, improved ReadEntity
2013-06-03
- (api change) removed Dispatcher interface, hide PathExpression
- changed receiver names of type functions to be more idiomatic Go
2013-06-02
- (optimize) Cache the RegExp compilation of Paths.
2013-05-22
- (api add) Added support for request/response filter functions
2013-05-18
- (api add) Added feature to change the default Http Request Dispatch function (travis cline)
- (api change) Moved Swagger Webservice to swagger package (see example restful-user)
[2012-11-14 .. 2013-05-18>
- See https://github.com/emicklei/go-restful/commits
2012-11-14
- Initial commit

22
src/vendor/github.com/emicklei/go-restful/v3/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
Copyright (c) 2012,2013 Ernest Micklei
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -0,0 +1,8 @@
all: test
test:
go vet .
go test -cover -v .
ex:
find ./examples -type f -name "*.go" | xargs -I {} go build -o /tmp/ignore {}

110
src/vendor/github.com/emicklei/go-restful/v3/README.md generated vendored Normal file
View File

@ -0,0 +1,110 @@
go-restful
==========
package for building REST-style Web Services using Google Go
[![Build Status](https://travis-ci.org/emicklei/go-restful.png)](https://travis-ci.org/emicklei/go-restful)
[![Go Report Card](https://goreportcard.com/badge/github.com/emicklei/go-restful)](https://goreportcard.com/report/github.com/emicklei/go-restful)
[![GoDoc](https://godoc.org/github.com/emicklei/go-restful?status.svg)](https://pkg.go.dev/github.com/emicklei/go-restful)
[![codecov](https://codecov.io/gh/emicklei/go-restful/branch/master/graph/badge.svg)](https://codecov.io/gh/emicklei/go-restful)
- [Code examples use v3](https://github.com/emicklei/go-restful/tree/v3/examples)
REST asks developers to use HTTP methods explicitly and in a way that's consistent with the protocol definition. This basic REST design principle establishes a one-to-one mapping between create, read, update, and delete (CRUD) operations and HTTP methods. According to this mapping:
- GET = Retrieve a representation of a resource
- POST = Create if you are sending content to the server to create a subordinate of the specified resource collection, using some server-side algorithm.
- PUT = Create if you are sending the full content of the specified resource (URI).
- PUT = Update if you are updating the full content of the specified resource.
- DELETE = Delete if you are requesting the server to delete the resource
- PATCH = Update partial content of a resource
- OPTIONS = Get information about the communication options for the request URI
### Usage
#### Without Go Modules
All versions up to `v2.*.*` (on the master branch) do not support Go modules.
```
import (
restful "github.com/emicklei/go-restful"
)
```
#### Using Go Modules
As of version `v3.0.0` (on the v3 branch), this package supports Go modules.
```
import (
restful "github.com/emicklei/go-restful/v3"
)
```
### Example
```Go
ws := new(restful.WebService)
ws.
Path("/users").
Consumes(restful.MIME_XML, restful.MIME_JSON).
Produces(restful.MIME_JSON, restful.MIME_XML)
ws.Route(ws.GET("/{user-id}").To(u.findUser).
Doc("get a user").
Param(ws.PathParameter("user-id", "identifier of the user").DataType("string")).
Writes(User{}))
...
func (u UserResource) findUser(request *restful.Request, response *restful.Response) {
id := request.PathParameter("user-id")
...
}
```
[Full API of a UserResource](https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go)
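For context, a hedged, minimal program that serves a `WebService` like the one above could look like this; the `UserResource` here is a stand-in, not the full example from the repository:

```go
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

type UserResource struct{}

func (u UserResource) findUser(req *restful.Request, resp *restful.Response) {
	id := req.PathParameter("user-id")
	resp.WriteAsJson(map[string]string{"id": id})
}

func main() {
	u := UserResource{}
	ws := new(restful.WebService)
	ws.Path("/users").
		Consumes(restful.MIME_JSON).
		Produces(restful.MIME_JSON)
	ws.Route(ws.GET("/{user-id}").To(u.findUser))

	// restful.Add registers the service on the DefaultContainer, which
	// dispatches through http.DefaultServeMux.
	restful.Add(ws)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```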
### Features
- Routes for request &#8594; function mapping with path parameter (e.g. {id} but also prefix_{var} and {var}_suffix) support
- Configurable router:
- (default) Fast routing algorithm that allows static elements, [google custom method](https://cloud.google.com/apis/design/custom_methods), regular expressions and dynamic parameters in the URL path (e.g. /resource/name:customVerb, /meetings/{id} or /static/{subpath:*})
- Routing algorithm after [JSR311](http://jsr311.java.net/nonav/releases/1.1/spec/spec.html) that is implemented using (but does **not** accept) regular expressions
- Request API for reading structs from JSON/XML and accessing parameters (path,query,header)
- Response API for writing structs to JSON/XML and setting headers
- Customizable encoding using EntityReaderWriter registration
- Filters for intercepting the request &#8594; response flow on Service or Route level
- Request-scoped variables using attributes
- Containers for WebServices on different HTTP endpoints
- Content encoding (gzip,deflate) of request and response payloads
- Automatic responses on OPTIONS (using a filter)
- Automatic CORS request handling (using a filter)
- API declaration for Swagger UI ([go-restful-openapi](https://github.com/emicklei/go-restful-openapi), see [go-restful-swagger12](https://github.com/emicklei/go-restful-swagger12))
- Panic recovery to produce HTTP 500, customizable using RecoverHandler(...)
- Route errors produce HTTP 404/405/406/415 errors, customizable using ServiceErrorHandler(...)
- Configurable (trace) logging
- Customizable gzip/deflate readers and writers using CompressorProvider registration
## How to customize
There are several hooks to customize the behavior of the go-restful package.
- Router algorithm
- Panic recovery
- JSON decoder
- Trace logging
- Compression
- Encoders for other serializers
- Use [jsoniter](https://github.com/json-iterator/go) by building this package with a tag, e.g. `go build -tags=jsoniter .`
## Resources
- [Example programs](./examples)
- [Example posted on blog](http://ernestmicklei.com/2012/11/go-restful-first-working-example/)
- [Design explained on blog](http://ernestmicklei.com/2012/11/go-restful-api-design/)
- [sourcegraph](https://sourcegraph.com/github.com/emicklei/go-restful)
- [showcase: Zazkia - tcp proxy for testing resiliency](https://github.com/emicklei/zazkia)
- [showcase: Mora - MongoDB REST Api server](https://github.com/emicklei/mora)
Type ```git shortlog -s``` for a full list of contributors.
© 2012 - 2022, http://ernestmicklei.com. MIT License. Contributions are welcome.

View File

@ -0,0 +1,13 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| v3.7.x | :white_check_mark: |
| < v3.0.1 | :x: |
## Reporting a Vulnerability
Create an Issue and put the label `[security]` in the title of the issue.
Valid reported security issues are expected to be solved within a week.

1
src/vendor/github.com/emicklei/go-restful/v3/Srcfile generated vendored Normal file
View File

@ -0,0 +1 @@
{"SkipDirs": ["examples"]}

View File

@ -0,0 +1,10 @@
#go test -run=none -file bench_test.go -test.bench . -cpuprofile=bench_test.out
go test -c
./go-restful.test -test.run=none -test.cpuprofile=tmp.prof -test.bench=BenchmarkMany
./go-restful.test -test.run=none -test.cpuprofile=curly.prof -test.bench=BenchmarkManyCurly
#go tool pprof go-restful.test tmp.prof
go tool pprof go-restful.test curly.prof

View File

@ -0,0 +1,127 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bufio"
"compress/gzip"
"compress/zlib"
"errors"
"io"
"net"
"net/http"
"strings"
)
// OBSOLETE : use restful.DefaultContainer.EnableContentEncoding(true) to change this setting.
var EnableContentEncoding = false
// CompressingResponseWriter is a http.ResponseWriter that can perform content encoding (gzip and zlib)
type CompressingResponseWriter struct {
writer http.ResponseWriter
compressor io.WriteCloser
encoding string
}
// Header is part of http.ResponseWriter interface
func (c *CompressingResponseWriter) Header() http.Header {
return c.writer.Header()
}
// WriteHeader is part of http.ResponseWriter interface
func (c *CompressingResponseWriter) WriteHeader(status int) {
c.writer.WriteHeader(status)
}
// Write is part of http.ResponseWriter interface
// It is passed through the compressor
func (c *CompressingResponseWriter) Write(bytes []byte) (int, error) {
if c.isCompressorClosed() {
return -1, errors.New("Compressing error: tried to write data using closed compressor")
}
return c.compressor.Write(bytes)
}
// CloseNotify is part of http.CloseNotifier interface
func (c *CompressingResponseWriter) CloseNotify() <-chan bool {
return c.writer.(http.CloseNotifier).CloseNotify()
}
// Close the underlying compressor
func (c *CompressingResponseWriter) Close() error {
if c.isCompressorClosed() {
return errors.New("Compressing error: tried to close already closed compressor")
}
c.compressor.Close()
if ENCODING_GZIP == c.encoding {
currentCompressorProvider.ReleaseGzipWriter(c.compressor.(*gzip.Writer))
}
if ENCODING_DEFLATE == c.encoding {
currentCompressorProvider.ReleaseZlibWriter(c.compressor.(*zlib.Writer))
}
// gc hint needed?
c.compressor = nil
return nil
}
func (c *CompressingResponseWriter) isCompressorClosed() bool {
return nil == c.compressor
}
// Hijack implements the Hijacker interface
// This is especially useful when combining Container.EnableContentEncoding
// with websockets (for instance gorilla/websocket)
func (c *CompressingResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hijacker, ok := c.writer.(http.Hijacker)
if !ok {
return nil, nil, errors.New("ResponseWriter doesn't support Hijacker interface")
}
return hijacker.Hijack()
}
// WantsCompressedResponse reads the Accept-Encoding header to see if and which encoding is requested.
// It also inspects the httpWriter to see whether its content-encoding is already set (non-empty).
func wantsCompressedResponse(httpRequest *http.Request, httpWriter http.ResponseWriter) (bool, string) {
if contentEncoding := httpWriter.Header().Get(HEADER_ContentEncoding); contentEncoding != "" {
return false, ""
}
header := httpRequest.Header.Get(HEADER_AcceptEncoding)
gi := strings.Index(header, ENCODING_GZIP)
zi := strings.Index(header, ENCODING_DEFLATE)
// use in order of appearance
if gi == -1 {
return zi != -1, ENCODING_DEFLATE
} else if zi == -1 {
return gi != -1, ENCODING_GZIP
} else {
if gi < zi {
return true, ENCODING_GZIP
}
return true, ENCODING_DEFLATE
}
}
// NewCompressingResponseWriter creates a CompressingResponseWriter for a known encoding = {gzip,deflate}
func NewCompressingResponseWriter(httpWriter http.ResponseWriter, encoding string) (*CompressingResponseWriter, error) {
httpWriter.Header().Set(HEADER_ContentEncoding, encoding)
c := new(CompressingResponseWriter)
c.writer = httpWriter
var err error
if ENCODING_GZIP == encoding {
w := currentCompressorProvider.AcquireGzipWriter()
w.Reset(httpWriter)
c.compressor = w
c.encoding = ENCODING_GZIP
} else if ENCODING_DEFLATE == encoding {
w := currentCompressorProvider.AcquireZlibWriter()
w.Reset(httpWriter)
c.compressor = w
c.encoding = ENCODING_DEFLATE
} else {
return nil, errors.New("Unknown encoding:" + encoding)
}
return c, err
}
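A `CompressingResponseWriter` is only installed when content encoding is enabled on the container; a short, hedged example of opting in via the default container:

```go
import restful "github.com/emicklei/go-restful/v3"

func init() {
	// Opt in to gzip/deflate negotiation; wantsCompressedResponse then picks
	// the encoding from each request's Accept-Encoding header.
	restful.DefaultContainer.EnableContentEncoding(true)
}
```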

View File

@ -0,0 +1,103 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"compress/gzip"
"compress/zlib"
)
// BoundedCachedCompressors is a CompressorProvider that uses a cache with a fixed amount
// of writers and readers (resources).
// If a new resource is acquired and all are in use, it will return a new unmanaged resource.
type BoundedCachedCompressors struct {
gzipWriters chan *gzip.Writer
gzipReaders chan *gzip.Reader
zlibWriters chan *zlib.Writer
writersCapacity int
readersCapacity int
}
// NewBoundedCachedCompressors returns a new BoundedCachedCompressors with a filled cache.
func NewBoundedCachedCompressors(writersCapacity, readersCapacity int) *BoundedCachedCompressors {
b := &BoundedCachedCompressors{
gzipWriters: make(chan *gzip.Writer, writersCapacity),
gzipReaders: make(chan *gzip.Reader, readersCapacity),
zlibWriters: make(chan *zlib.Writer, writersCapacity),
writersCapacity: writersCapacity,
readersCapacity: readersCapacity,
}
for ix := 0; ix < writersCapacity; ix++ {
b.gzipWriters <- newGzipWriter()
b.zlibWriters <- newZlibWriter()
}
for ix := 0; ix < readersCapacity; ix++ {
b.gzipReaders <- newGzipReader()
}
return b
}
// AcquireGzipWriter returns a resettable *gzip.Writer. Needs to be released.
func (b *BoundedCachedCompressors) AcquireGzipWriter() *gzip.Writer {
var writer *gzip.Writer
select {
case writer, _ = <-b.gzipWriters:
default:
// return a new unmanaged one
writer = newGzipWriter()
}
return writer
}
// ReleaseGzipWriter accepts a writer (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseGzipWriter(w *gzip.Writer) {
// forget the unmanaged ones
if len(b.gzipWriters) < b.writersCapacity {
b.gzipWriters <- w
}
}
// AcquireGzipReader returns a *gzip.Reader. Needs to be released.
func (b *BoundedCachedCompressors) AcquireGzipReader() *gzip.Reader {
var reader *gzip.Reader
select {
case reader, _ = <-b.gzipReaders:
default:
// return a new unmanaged one
reader = newGzipReader()
}
return reader
}
// ReleaseGzipReader accepts a reader (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseGzipReader(r *gzip.Reader) {
// forget the unmanaged ones
if len(b.gzipReaders) < b.readersCapacity {
b.gzipReaders <- r
}
}
// AcquireZlibWriter returns a resettable *zlib.Writer. Needs to be released.
func (b *BoundedCachedCompressors) AcquireZlibWriter() *zlib.Writer {
var writer *zlib.Writer
select {
case writer, _ = <-b.zlibWriters:
default:
// return a new unmanaged one
writer = newZlibWriter()
}
return writer
}
// ReleaseZlibWriter accepts a writer (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseZlibWriter(w *zlib.Writer) {
// forget the unmanaged ones
if len(b.zlibWriters) < b.writersCapacity {
b.zlibWriters <- w
}
}

View File

@ -0,0 +1,91 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"compress/gzip"
"compress/zlib"
"sync"
)
// SyncPoolCompessors is a CompressorProvider that uses the standard sync.Pool.
type SyncPoolCompessors struct {
GzipWriterPool *sync.Pool
GzipReaderPool *sync.Pool
ZlibWriterPool *sync.Pool
}
// NewSyncPoolCompessors returns a new ("empty") SyncPoolCompessors.
func NewSyncPoolCompessors() *SyncPoolCompessors {
return &SyncPoolCompessors{
GzipWriterPool: &sync.Pool{
New: func() interface{} { return newGzipWriter() },
},
GzipReaderPool: &sync.Pool{
New: func() interface{} { return newGzipReader() },
},
ZlibWriterPool: &sync.Pool{
New: func() interface{} { return newZlibWriter() },
},
}
}
func (s *SyncPoolCompessors) AcquireGzipWriter() *gzip.Writer {
return s.GzipWriterPool.Get().(*gzip.Writer)
}
func (s *SyncPoolCompessors) ReleaseGzipWriter(w *gzip.Writer) {
s.GzipWriterPool.Put(w)
}
func (s *SyncPoolCompessors) AcquireGzipReader() *gzip.Reader {
return s.GzipReaderPool.Get().(*gzip.Reader)
}
func (s *SyncPoolCompessors) ReleaseGzipReader(r *gzip.Reader) {
s.GzipReaderPool.Put(r)
}
func (s *SyncPoolCompessors) AcquireZlibWriter() *zlib.Writer {
return s.ZlibWriterPool.Get().(*zlib.Writer)
}
func (s *SyncPoolCompessors) ReleaseZlibWriter(w *zlib.Writer) {
s.ZlibWriterPool.Put(w)
}
func newGzipWriter() *gzip.Writer {
// create with an empty bytes writer; it will be replaced before using the gzipWriter
writer, err := gzip.NewWriterLevel(new(bytes.Buffer), gzip.BestSpeed)
if err != nil {
panic(err.Error())
}
return writer
}
func newGzipReader() *gzip.Reader {
// create with an empty reader (but with GZIP header); it will be replaced before using the gzipReader
// we can safely use currentCompressProvider because it is set on package initialization.
w := currentCompressorProvider.AcquireGzipWriter()
defer currentCompressorProvider.ReleaseGzipWriter(w)
b := new(bytes.Buffer)
w.Reset(b)
w.Flush()
w.Close()
reader, err := gzip.NewReader(bytes.NewReader(b.Bytes()))
if err != nil {
panic(err.Error())
}
return reader
}
func newZlibWriter() *zlib.Writer {
writer, err := zlib.NewWriterLevel(new(bytes.Buffer), gzip.BestSpeed)
if err != nil {
panic(err.Error())
}
return writer
}

View File

@ -0,0 +1,54 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"compress/gzip"
"compress/zlib"
)
// CompressorProvider describes a component that can provide compressors for the std methods.
type CompressorProvider interface {
// Returns a *gzip.Writer which needs to be released later.
// Before using it, call Reset().
AcquireGzipWriter() *gzip.Writer
// Releases an acquired *gzip.Writer.
ReleaseGzipWriter(w *gzip.Writer)
// Returns a *gzip.Reader which needs to be released later.
AcquireGzipReader() *gzip.Reader
// Releases an acquired *gzip.Reader.
ReleaseGzipReader(w *gzip.Reader)
// Returns a *zlib.Writer which needs to be released later.
// Before using it, call Reset().
AcquireZlibWriter() *zlib.Writer
// Releases an acquired *zlib.Writer.
ReleaseZlibWriter(w *zlib.Writer)
}
// DefaultCompressorProvider is the actual provider of compressors (zlib or gzip).
var currentCompressorProvider CompressorProvider
func init() {
currentCompressorProvider = NewSyncPoolCompessors()
}
// CurrentCompressorProvider returns the current CompressorProvider.
// It is initialized using a SyncPoolCompessors.
func CurrentCompressorProvider() CompressorProvider {
return currentCompressorProvider
}
// SetCompressorProvider sets the actual provider of compressors (zlib or gzip).
func SetCompressorProvider(p CompressorProvider) {
if p == nil {
panic("cannot set compressor provider to nil")
}
currentCompressorProvider = p
}
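Either provider in this diff can be installed through `SetCompressorProvider`; for example, swapping the default sync.Pool-backed provider for the bounded cache (the capacities below are arbitrary):

```go
import restful "github.com/emicklei/go-restful/v3"

func init() {
	// Keep at most 16 cached writers and 16 cached readers; anything acquired
	// beyond that is allocated on demand and simply dropped on release.
	restful.SetCompressorProvider(restful.NewBoundedCachedCompressors(16, 16))
}
```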

View File

@ -0,0 +1,30 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
const (
MIME_XML = "application/xml" // Accept or Content-Type used in Consumes() and/or Produces()
MIME_JSON = "application/json" // Accept or Content-Type used in Consumes() and/or Produces()
MIME_OCTET = "application/octet-stream" // If Content-Type is not present in request, use the default
HEADER_Allow = "Allow"
HEADER_Accept = "Accept"
HEADER_Origin = "Origin"
HEADER_ContentType = "Content-Type"
HEADER_LastModified = "Last-Modified"
HEADER_AcceptEncoding = "Accept-Encoding"
HEADER_ContentEncoding = "Content-Encoding"
HEADER_AccessControlExposeHeaders = "Access-Control-Expose-Headers"
HEADER_AccessControlRequestMethod = "Access-Control-Request-Method"
HEADER_AccessControlRequestHeaders = "Access-Control-Request-Headers"
HEADER_AccessControlAllowMethods = "Access-Control-Allow-Methods"
HEADER_AccessControlAllowOrigin = "Access-Control-Allow-Origin"
HEADER_AccessControlAllowCredentials = "Access-Control-Allow-Credentials"
HEADER_AccessControlAllowHeaders = "Access-Control-Allow-Headers"
HEADER_AccessControlMaxAge = "Access-Control-Max-Age"
ENCODING_GZIP = "gzip"
ENCODING_DEFLATE = "deflate"
)

View File

@ -0,0 +1,450 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"errors"
"fmt"
"net/http"
"os"
"runtime"
"strings"
"sync"
"github.com/emicklei/go-restful/v3/log"
)
// Container holds a collection of WebServices and a http.ServeMux to dispatch http requests.
// The requests are further dispatched to routes of WebServices using a RouteSelector
type Container struct {
webServicesLock sync.RWMutex
webServices []*WebService
ServeMux *http.ServeMux
isRegisteredOnRoot bool
containerFilters []FilterFunction
doNotRecover bool // default is true
recoverHandleFunc RecoverHandleFunction
serviceErrorHandleFunc ServiceErrorHandleFunction
router RouteSelector // default is a CurlyRouter (RouterJSR311 is a slower alternative)
contentEncodingEnabled bool // default is false
}
// NewContainer creates a new Container using a new ServeMux and default router (CurlyRouter)
func NewContainer() *Container {
return &Container{
webServices: []*WebService{},
ServeMux: http.NewServeMux(),
isRegisteredOnRoot: false,
containerFilters: []FilterFunction{},
doNotRecover: true,
recoverHandleFunc: logStackOnRecover,
serviceErrorHandleFunc: writeServiceError,
router: CurlyRouter{},
contentEncodingEnabled: false}
}
// RecoverHandleFunction declares functions that can be used to handle a panic situation.
// The first argument is what recover() returns. The second must be used to communicate an error response.
type RecoverHandleFunction func(interface{}, http.ResponseWriter)
// RecoverHandler changes the default function (logStackOnRecover) to be called
// when a panic is detected. DoNotRecover must have its default value (=false).
func (c *Container) RecoverHandler(handler RecoverHandleFunction) {
c.recoverHandleFunc = handler
}
// ServiceErrorHandleFunction declares functions that can be used to handle a service error situation.
// The first argument is the service error, the second is the request that resulted in the error and
// the third must be used to communicate an error response.
type ServiceErrorHandleFunction func(ServiceError, *Request, *Response)
// ServiceErrorHandler changes the default function (writeServiceError) to be called
// when a ServiceError is detected.
func (c *Container) ServiceErrorHandler(handler ServiceErrorHandleFunction) {
c.serviceErrorHandleFunc = handler
}
// DoNotRecover controls whether panics will be caught to return HTTP 500.
// If set to true, Route functions are responsible for handling any error situation.
// Default value is true.
func (c *Container) DoNotRecover(doNot bool) {
c.doNotRecover = doNot
}
// Router changes the default Router (currently CurlyRouter)
func (c *Container) Router(aRouter RouteSelector) {
c.router = aRouter
}
// EnableContentEncoding (default=false) allows for GZIP or DEFLATE encoding of responses.
func (c *Container) EnableContentEncoding(enabled bool) {
c.contentEncodingEnabled = enabled
}
// Add a WebService to the Container. It will detect duplicate root paths and exit in that case.
func (c *Container) Add(service *WebService) *Container {
c.webServicesLock.Lock()
defer c.webServicesLock.Unlock()
// if rootPath was not set then lazy initialize it
if len(service.rootPath) == 0 {
service.Path("/")
}
// cannot have duplicate root paths
for _, each := range c.webServices {
if each.RootPath() == service.RootPath() {
log.Printf("WebService with duplicate root path detected:['%v']", each)
os.Exit(1)
}
}
// If not registered on root then add specific mapping
if !c.isRegisteredOnRoot {
c.isRegisteredOnRoot = c.addHandler(service, c.ServeMux)
}
c.webServices = append(c.webServices, service)
return c
}
// addHandler may set a new HandleFunc for the serveMux
// this function must run inside the critical region protected by the webServicesLock.
// returns true if the function was registered on root ("/")
func (c *Container) addHandler(service *WebService, serveMux *http.ServeMux) bool {
pattern := fixedPrefixPath(service.RootPath())
// check if root path registration is needed
if "/" == pattern || "" == pattern {
serveMux.HandleFunc("/", c.dispatch)
return true
}
// detect if registration already exists
alreadyMapped := false
for _, each := range c.webServices {
if each.RootPath() == service.RootPath() {
alreadyMapped = true
break
}
}
if !alreadyMapped {
serveMux.HandleFunc(pattern, c.dispatch)
if !strings.HasSuffix(pattern, "/") {
serveMux.HandleFunc(pattern+"/", c.dispatch)
}
}
return false
}
func (c *Container) Remove(ws *WebService) error {
if c.ServeMux == http.DefaultServeMux {
errMsg := fmt.Sprintf("cannot remove a WebService from a Container using the DefaultServeMux: ['%v']", ws)
log.Print(errMsg)
return errors.New(errMsg)
}
c.webServicesLock.Lock()
defer c.webServicesLock.Unlock()
// build a new ServeMux and re-register all WebServices
newServeMux := http.NewServeMux()
newServices := []*WebService{}
newIsRegisteredOnRoot := false
for _, each := range c.webServices {
if each.rootPath != ws.rootPath {
// If not registered on root then add specific mapping
if !newIsRegisteredOnRoot {
newIsRegisteredOnRoot = c.addHandler(each, newServeMux)
}
newServices = append(newServices, each)
}
}
c.webServices, c.ServeMux, c.isRegisteredOnRoot = newServices, newServeMux, newIsRegisteredOnRoot
return nil
}
// logStackOnRecover is the default RecoverHandleFunction and is called
// when DoNotRecover is false and the recoverHandleFunc is not set for the container.
// Default implementation logs the stacktrace and writes the stacktrace on the response.
// This may be a security issue as it exposes sourcecode information.
func logStackOnRecover(panicReason interface{}, httpWriter http.ResponseWriter) {
var buffer bytes.Buffer
buffer.WriteString(fmt.Sprintf("recover from panic situation: - %v\r\n", panicReason))
for i := 2; ; i += 1 {
_, file, line, ok := runtime.Caller(i)
if !ok {
break
}
buffer.WriteString(fmt.Sprintf(" %s:%d\r\n", file, line))
}
log.Print(buffer.String())
httpWriter.WriteHeader(http.StatusInternalServerError)
httpWriter.Write(buffer.Bytes())
}
// writeServiceError is the default ServiceErrorHandleFunction and is called
// when a ServiceError is returned during route selection. Default implementation
// calls resp.WriteErrorString(err.Code, err.Message)
func writeServiceError(err ServiceError, req *Request, resp *Response) {
for header, values := range err.Header {
for _, value := range values {
resp.Header().Add(header, value)
}
}
resp.WriteErrorString(err.Code, err.Message)
}
// Dispatch the incoming Http Request to a matching WebService.
func (c *Container) Dispatch(httpWriter http.ResponseWriter, httpRequest *http.Request) {
if httpWriter == nil {
panic("httpWriter cannot be nil")
}
if httpRequest == nil {
panic("httpRequest cannot be nil")
}
c.dispatch(httpWriter, httpRequest)
}
// Dispatch the incoming Http Request to a matching WebService.
func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.Request) {
// so we can assign a compressing one later
writer := httpWriter
// CompressingResponseWriter should be closed after all operations are done
defer func() {
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
compressWriter.Close()
}
}()
// Install panic recovery unless told otherwise
if !c.doNotRecover { // catch all for 500 response
defer func() {
if r := recover(); r != nil {
c.recoverHandleFunc(r, writer)
return
}
}()
}
// Find best match Route ; err is non nil if no match was found
var webService *WebService
var route *Route
var err error
func() {
c.webServicesLock.RLock()
defer c.webServicesLock.RUnlock()
webService, route, err = c.router.SelectRoute(
c.webServices,
httpRequest)
}()
if err != nil {
// a non-200 response (may be compressed) has already been written
// run container filters anyway ; they should not touch the response...
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
switch err.(type) {
case ServiceError:
ser := err.(ServiceError)
c.serviceErrorHandleFunc(ser, req, resp)
}
// TODO
}}
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(writer))
return
}
// Unless httpWriter is already a CompressingResponseWriter see if we need to install one
if _, isCompressing := httpWriter.(*CompressingResponseWriter); !isCompressing {
// Detect if compression is needed
// assume without compression, test for override
contentEncodingEnabled := c.contentEncodingEnabled
if route != nil && route.contentEncodingEnabled != nil {
contentEncodingEnabled = *route.contentEncodingEnabled
}
if contentEncodingEnabled {
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
if doCompress {
var err error
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
if err != nil {
log.Print("unable to install compressor: ", err)
httpWriter.WriteHeader(http.StatusInternalServerError)
return
}
}
}
}
pathProcessor, routerProcessesPath := c.router.(PathProcessor)
if !routerProcessesPath {
pathProcessor = defaultPathProcessor{}
}
pathParams := pathProcessor.ExtractParameters(route, webService, httpRequest.URL.Path)
wrappedRequest, wrappedResponse := route.wrapRequestResponse(writer, httpRequest, pathParams)
// pass through filters (if any)
if size := len(c.containerFilters) + len(webService.filters) + len(route.Filters); size > 0 {
// compose filter chain
allFilters := make([]FilterFunction, 0, size)
allFilters = append(allFilters, c.containerFilters...)
allFilters = append(allFilters, webService.filters...)
allFilters = append(allFilters, route.Filters...)
chain := FilterChain{
Filters: allFilters,
Target: route.Function,
ParameterDocs: route.ParameterDocs,
Operation: route.Operation,
}
chain.ProcessFilter(wrappedRequest, wrappedResponse)
} else {
// no filters, handle request by route
route.Function(wrappedRequest, wrappedResponse)
}
}
// fixedPrefixPath returns the fixed part of the pathspec; it may include template vars {}
func fixedPrefixPath(pathspec string) string {
varBegin := strings.Index(pathspec, "{")
if -1 == varBegin {
return pathspec
}
return pathspec[:varBegin]
}
// ServeHTTP implements net/http.Handler therefore a Container can be a Handler in a http.Server
func (c *Container) ServeHTTP(httpWriter http.ResponseWriter, httpRequest *http.Request) {
// Skip, if content encoding is disabled
if !c.contentEncodingEnabled {
c.ServeMux.ServeHTTP(httpWriter, httpRequest)
return
}
// content encoding is enabled
// Skip, if httpWriter is already a CompressingResponseWriter
if _, ok := httpWriter.(*CompressingResponseWriter); ok {
c.ServeMux.ServeHTTP(httpWriter, httpRequest)
return
}
writer := httpWriter
// CompressingResponseWriter should be closed after all operations are done
defer func() {
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
compressWriter.Close()
}
}()
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
if doCompress {
var err error
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
if err != nil {
log.Print("unable to install compressor: ", err)
httpWriter.WriteHeader(http.StatusInternalServerError)
return
}
}
c.ServeMux.ServeHTTP(writer, httpRequest)
}
// Handle registers the handler for the given pattern. If a handler already exists for pattern, Handle panics.
func (c *Container) Handle(pattern string, handler http.Handler) {
c.ServeMux.Handle(pattern, http.HandlerFunc(func(httpWriter http.ResponseWriter, httpRequest *http.Request) {
// Skip, if httpWriter is already a CompressingResponseWriter
if _, ok := httpWriter.(*CompressingResponseWriter); ok {
handler.ServeHTTP(httpWriter, httpRequest)
return
}
writer := httpWriter
// CompressingResponseWriter should be closed after all operations are done
defer func() {
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
compressWriter.Close()
}
}()
if c.contentEncodingEnabled {
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
if doCompress {
var err error
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
if err != nil {
log.Print("unable to install compressor: ", err)
httpWriter.WriteHeader(http.StatusInternalServerError)
return
}
}
}
handler.ServeHTTP(writer, httpRequest)
}))
}
// HandleWithFilter registers the handler for the given pattern.
// Container's filter chain is applied for handler.
// If a handler already exists for pattern, HandleWithFilter panics.
func (c *Container) HandleWithFilter(pattern string, handler http.Handler) {
f := func(httpResponse http.ResponseWriter, httpRequest *http.Request) {
if len(c.containerFilters) == 0 {
handler.ServeHTTP(httpResponse, httpRequest)
return
}
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
handler.ServeHTTP(resp, req.Request)
}}
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(httpResponse))
}
c.Handle(pattern, http.HandlerFunc(f))
}
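// Illustrative usage only (the pattern and handler are examples): serve an
// ordinary http.Handler while still running the container's filters first.
//
//	container.HandleWithFilter("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("./static"))))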
// Filter appends a container FilterFunction. These are called before dispatching
// a http.Request to a WebService from the container
func (c *Container) Filter(filter FilterFunction) {
c.containerFilters = append(c.containerFilters, filter)
}
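// A minimal, illustrative container filter that logs every request before
// handing it to the next filter or the matched RouteFunction:
//
//	container.Filter(func(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
//		log.Printf("%s %s", req.Request.Method, req.Request.URL)
//		chain.ProcessFilter(req, resp)
//	})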
// RegisteredWebServices returns the collections of added WebServices
func (c *Container) RegisteredWebServices() []*WebService {
c.webServicesLock.RLock()
defer c.webServicesLock.RUnlock()
result := make([]*WebService, len(c.webServices))
copy(result, c.webServices)
return result
}
// computeAllowedMethods returns a list of HTTP methods that are valid for a Request
func (c *Container) computeAllowedMethods(req *Request) []string {
// Go through all RegisteredWebServices() and all its Routes to collect the options
methods := []string{}
requestPath := req.Request.URL.Path
for _, ws := range c.RegisteredWebServices() {
matches := ws.pathExpr.Matcher.FindStringSubmatch(requestPath)
if matches != nil {
finalMatch := matches[len(matches)-1]
for _, rt := range ws.Routes() {
matches := rt.pathExpr.Matcher.FindStringSubmatch(finalMatch)
if matches != nil {
lastMatch := matches[len(matches)-1]
if lastMatch == "" || lastMatch == "/" { // include only when the remaining match is empty or "/"
methods = append(methods, rt.Method)
}
}
}
}
}
// methods = append(methods, "OPTIONS") not sure about this
return methods
}
// newBasicRequestResponse creates a pair of Request,Response from its http versions.
// It is basic because no parameter or (produces) content-type information is given.
func newBasicRequestResponse(httpWriter http.ResponseWriter, httpRequest *http.Request) (*Request, *Response) {
resp := NewResponse(httpWriter)
resp.requestAccept = httpRequest.Header.Get(HEADER_Accept)
return NewRequest(httpRequest), resp
}


@ -0,0 +1,193 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"regexp"
"strconv"
"strings"
)
// CrossOriginResourceSharing is used to create a Container Filter that implements CORS.
// Cross-origin resource sharing (CORS) is a mechanism that allows JavaScript on a web page
// to make XMLHttpRequests to another domain, not the domain the JavaScript originated from.
//
// http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
// http://enable-cors.org/server.html
// http://www.html5rocks.com/en/tutorials/cors/#toc-handling-a-not-so-simple-request
type CrossOriginResourceSharing struct {
ExposeHeaders []string // list of Header names
// AllowedHeaders is a list of Header names. Checking is case-insensitive.
// The list may contain the special wildcard string ".*"; then all is allowed.
AllowedHeaders []string
// AllowedDomains is a list of allowed values for Http Origin.
// The list may contain the special wildcard string ".*"; then all is allowed.
// If empty, all are allowed.
AllowedDomains []string
// AllowedDomainFunc is optional and is a function that will do the check
// when the origin is not part of the AllowedDomains and it does not contain the wildcard ".*".
AllowedDomainFunc func(origin string) bool
// AllowedMethods is either empty or has a list of http method names. Checking is case-insensitive.
AllowedMethods []string
MaxAge int // number of seconds before requiring new Options request
CookiesAllowed bool
Container *Container
allowedOriginPatterns []*regexp.Regexp // internal field for origin regexp check.
}
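// An illustrative configuration (all values are examples only); the filter is
// then registered on a container so it runs for every request:
//
//	cors := restful.CrossOriginResourceSharing{
//		AllowedDomains: []string{"https://example.com"},
//		AllowedHeaders: []string{"Content-Type", "Accept"},
//		AllowedMethods: []string{"GET", "POST"},
//		CookiesAllowed: false,
//		Container:      restful.DefaultContainer,
//	}
//	restful.DefaultContainer.Filter(cors.Filter)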
// Filter is a filter function that implements the CORS flow as documented on http://enable-cors.org/server.html
// and http://www.html5rocks.com/static/images/cors_server_flowchart.png
func (c CrossOriginResourceSharing) Filter(req *Request, resp *Response, chain *FilterChain) {
origin := req.Request.Header.Get(HEADER_Origin)
if len(origin) == 0 {
if trace {
traceLogger.Print("no Http header Origin set")
}
chain.ProcessFilter(req, resp)
return
}
if !c.isOriginAllowed(origin) { // check whether this origin is allowed
if trace {
traceLogger.Printf("HTTP Origin:%s is not part of %v, neither matches any part of %v", origin, c.AllowedDomains, c.allowedOriginPatterns)
}
chain.ProcessFilter(req, resp)
return
}
if req.Request.Method != "OPTIONS" {
c.doActualRequest(req, resp)
chain.ProcessFilter(req, resp)
return
}
if acrm := req.Request.Header.Get(HEADER_AccessControlRequestMethod); acrm != "" {
c.doPreflightRequest(req, resp)
} else {
c.doActualRequest(req, resp)
chain.ProcessFilter(req, resp)
return
}
}
func (c CrossOriginResourceSharing) doActualRequest(req *Request, resp *Response) {
c.setOptionsHeaders(req, resp)
// continue processing the response
}
func (c *CrossOriginResourceSharing) doPreflightRequest(req *Request, resp *Response) {
if len(c.AllowedMethods) == 0 {
if c.Container == nil {
c.AllowedMethods = DefaultContainer.computeAllowedMethods(req)
} else {
c.AllowedMethods = c.Container.computeAllowedMethods(req)
}
}
acrm := req.Request.Header.Get(HEADER_AccessControlRequestMethod)
if !c.isValidAccessControlRequestMethod(acrm, c.AllowedMethods) {
if trace {
traceLogger.Printf("Http header %s:%s is not in %v",
HEADER_AccessControlRequestMethod,
acrm,
c.AllowedMethods)
}
return
}
acrhs := req.Request.Header.Get(HEADER_AccessControlRequestHeaders)
if len(acrhs) > 0 {
for _, each := range strings.Split(acrhs, ",") {
if !c.isValidAccessControlRequestHeader(strings.Trim(each, " ")) {
if trace {
traceLogger.Printf("Http header %s:%s is not in %v",
HEADER_AccessControlRequestHeaders,
acrhs,
c.AllowedHeaders)
}
return
}
}
}
resp.AddHeader(HEADER_AccessControlAllowMethods, strings.Join(c.AllowedMethods, ","))
resp.AddHeader(HEADER_AccessControlAllowHeaders, acrhs)
c.setOptionsHeaders(req, resp)
// return http 200 response, no body
}
func (c CrossOriginResourceSharing) setOptionsHeaders(req *Request, resp *Response) {
c.checkAndSetExposeHeaders(resp)
c.setAllowOriginHeader(req, resp)
c.checkAndSetAllowCredentials(resp)
if c.MaxAge > 0 {
resp.AddHeader(HEADER_AccessControlMaxAge, strconv.Itoa(c.MaxAge))
}
}
func (c CrossOriginResourceSharing) isOriginAllowed(origin string) bool {
if len(origin) == 0 {
return false
}
lowerOrigin := strings.ToLower(origin)
if len(c.AllowedDomains) == 0 {
if c.AllowedDomainFunc != nil {
return c.AllowedDomainFunc(lowerOrigin)
}
return true
}
// exact match on each allowed domain
for _, domain := range c.AllowedDomains {
if domain == ".*" || strings.ToLower(domain) == lowerOrigin {
return true
}
}
if c.AllowedDomainFunc != nil {
return c.AllowedDomainFunc(origin)
}
return false
}
func (c CrossOriginResourceSharing) setAllowOriginHeader(req *Request, resp *Response) {
origin := req.Request.Header.Get(HEADER_Origin)
if c.isOriginAllowed(origin) {
resp.AddHeader(HEADER_AccessControlAllowOrigin, origin)
}
}
func (c CrossOriginResourceSharing) checkAndSetExposeHeaders(resp *Response) {
if len(c.ExposeHeaders) > 0 {
resp.AddHeader(HEADER_AccessControlExposeHeaders, strings.Join(c.ExposeHeaders, ","))
}
}
func (c CrossOriginResourceSharing) checkAndSetAllowCredentials(resp *Response) {
if c.CookiesAllowed {
resp.AddHeader(HEADER_AccessControlAllowCredentials, "true")
}
}
func (c CrossOriginResourceSharing) isValidAccessControlRequestMethod(method string, allowedMethods []string) bool {
for _, each := range allowedMethods {
if each == method {
return true
}
}
return false
}
func (c CrossOriginResourceSharing) isValidAccessControlRequestHeader(header string) bool {
for _, each := range c.AllowedHeaders {
if strings.ToLower(each) == strings.ToLower(header) {
return true
}
if each == "*" {
return true
}
}
return false
}


@ -0,0 +1,2 @@
go test -coverprofile=coverage.out
go tool cover -html=coverage.out

src/vendor/github.com/emicklei/go-restful/v3/curly.go generated vendored Normal file

@ -0,0 +1,173 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"net/http"
"regexp"
"sort"
"strings"
)
// CurlyRouter expects Routes with paths that contain zero or more parameters in curly brackets.
type CurlyRouter struct{}
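// Routes that a CurlyRouter can match, for illustration only (handler names
// are hypothetical):
//
//	ws.Route(ws.GET("/users/{id:[0-9]+}").To(u.findUser)) // parameter constrained by a regular expression
//	ws.Route(ws.GET("/files/{subpath:*}").To(serveFile))  // wildcard that matches the remaining path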
// SelectRoute is part of the Router interface and returns the best match
// for the WebService and its Route for the given Request.
func (c CurlyRouter) SelectRoute(
webServices []*WebService,
httpRequest *http.Request) (selectedService *WebService, selected *Route, err error) {
requestTokens := tokenizePath(httpRequest.URL.Path)
detectedService := c.detectWebService(requestTokens, webServices)
if detectedService == nil {
if trace {
traceLogger.Printf("no WebService was found to match URL path:%s\n", httpRequest.URL.Path)
}
return nil, nil, NewError(http.StatusNotFound, "404: Page Not Found")
}
candidateRoutes := c.selectRoutes(detectedService, requestTokens)
if len(candidateRoutes) == 0 {
if trace {
traceLogger.Printf("no Route in WebService with path %s was found to match URL path:%s\n", detectedService.rootPath, httpRequest.URL.Path)
}
return detectedService, nil, NewError(http.StatusNotFound, "404: Page Not Found")
}
selectedRoute, err := c.detectRoute(candidateRoutes, httpRequest)
if selectedRoute == nil {
return detectedService, nil, err
}
return detectedService, selectedRoute, nil
}
// selectRoutes returns a collection of Route from a WebService that matches the path tokens from the request.
func (c CurlyRouter) selectRoutes(ws *WebService, requestTokens []string) sortableCurlyRoutes {
candidates := make(sortableCurlyRoutes, 0, 8)
for _, each := range ws.routes {
matches, paramCount, staticCount := c.matchesRouteByPathTokens(each.pathParts, requestTokens, each.hasCustomVerb)
if matches {
candidates.add(curlyRoute{each, paramCount, staticCount}) // TODO make sure Routes() return pointers?
}
}
sort.Sort(candidates)
return candidates
}
// matchesRouteByPathTokens computes whether the route matches, how many parameters match, and how many static path elements there are.
func (c CurlyRouter) matchesRouteByPathTokens(routeTokens, requestTokens []string, routeHasCustomVerb bool) (matches bool, paramCount int, staticCount int) {
if len(routeTokens) < len(requestTokens) {
// proceed in matching only if last routeToken is wildcard
count := len(routeTokens)
if count == 0 || !strings.HasSuffix(routeTokens[count-1], "*}") {
return false, 0, 0
}
// proceed
}
for i, routeToken := range routeTokens {
if i == len(requestTokens) {
// reached end of request path
return false, 0, 0
}
requestToken := requestTokens[i]
if routeHasCustomVerb && hasCustomVerb(routeToken){
if !isMatchCustomVerb(routeToken, requestToken) {
return false, 0, 0
}
staticCount++
requestToken = removeCustomVerb(requestToken)
routeToken = removeCustomVerb(routeToken)
}
if strings.HasPrefix(routeToken, "{") {
paramCount++
if colon := strings.Index(routeToken, ":"); colon != -1 {
// match by regex
matchesToken, matchesRemainder := c.regularMatchesPathToken(routeToken, colon, requestToken)
if !matchesToken {
return false, 0, 0
}
if matchesRemainder {
break
}
}
} else { // no { prefix
if requestToken != routeToken {
return false, 0, 0
}
staticCount++
}
}
return true, paramCount, staticCount
}
// regularMatchesPathToken tests whether the regular expression part of routeToken matches the requestToken or all remaining tokens
// format routeToken is {someVar:someExpression}, e.g. {zipcode:[\d][\d][\d][\d][A-Z][A-Z]}
func (c CurlyRouter) regularMatchesPathToken(routeToken string, colon int, requestToken string) (matchesToken bool, matchesRemainder bool) {
regPart := routeToken[colon+1 : len(routeToken)-1]
if regPart == "*" {
if trace {
traceLogger.Printf("wildcard parameter detected in route token %s that matches %s\n", routeToken, requestToken)
}
return true, true
}
matched, err := regexp.MatchString(regPart, requestToken)
return (matched && err == nil), false
}
var jsr311Router = RouterJSR311{}
// detectRoute selects from a list of Route the first match by inspecting both the Accept and Content-Type
// headers of the Request. See also RouterJSR311 in jsr311.go
func (c CurlyRouter) detectRoute(candidateRoutes sortableCurlyRoutes, httpRequest *http.Request) (*Route, error) {
// tracing is done inside detectRoute
return jsr311Router.detectRoute(candidateRoutes.routes(), httpRequest)
}
// detectWebService returns the best matching webService given the list of path tokens.
// see also computeWebserviceScore
func (c CurlyRouter) detectWebService(requestTokens []string, webServices []*WebService) *WebService {
var best *WebService
score := -1
for _, each := range webServices {
matches, eachScore := c.computeWebserviceScore(requestTokens, each.pathExpr.tokens)
if matches && (eachScore > score) {
best = each
score = eachScore
}
}
return best
}
// computeWebserviceScore returns whether tokens match and
// the weighted score of the longest matching consecutive tokens from the beginning.
func (c CurlyRouter) computeWebserviceScore(requestTokens []string, tokens []string) (bool, int) {
if len(tokens) > len(requestTokens) {
return false, 0
}
score := 0
for i := 0; i < len(tokens); i++ {
each := requestTokens[i]
other := tokens[i]
if len(each) == 0 && len(other) == 0 {
score++
continue
}
if len(other) > 0 && strings.HasPrefix(other, "{") {
// no empty match
if len(each) == 0 {
return false, score
}
score += 1
} else {
// not a parameter
if each != other {
return false, score
}
score += (len(tokens) - i) * 10 //fuzzy
}
}
return true, score
}


@ -0,0 +1,54 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
// curlyRoute exists for sorting Routes by the CurlyRouter based on number of parameters and number of static path elements.
type curlyRoute struct {
route Route
paramCount int
staticCount int
}
// sortableCurlyRoutes orders by most parameters and path elements first.
type sortableCurlyRoutes []curlyRoute
func (s *sortableCurlyRoutes) add(route curlyRoute) {
*s = append(*s, route)
}
func (s sortableCurlyRoutes) routes() (routes []Route) {
routes = make([]Route, 0, len(s))
for _, each := range s {
routes = append(routes, each.route) // TODO change return type
}
return routes
}
func (s sortableCurlyRoutes) Len() int {
return len(s)
}
func (s sortableCurlyRoutes) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s sortableCurlyRoutes) Less(i, j int) bool {
a := s[j]
b := s[i]
// primary key
if a.staticCount < b.staticCount {
return true
}
if a.staticCount > b.staticCount {
return false
}
// secondary key
if a.paramCount < b.paramCount {
return true
}
if a.paramCount > b.paramCount {
return false
}
return a.route.Path < b.route.Path
}


@ -0,0 +1,29 @@
package restful
import (
"fmt"
"regexp"
)
var (
customVerbReg = regexp.MustCompile(":([A-Za-z]+)$")
)
func hasCustomVerb(routeToken string) bool {
return customVerbReg.MatchString(routeToken)
}
func isMatchCustomVerb(routeToken string, pathToken string) bool {
rs := customVerbReg.FindStringSubmatch(routeToken)
if len(rs) < 2 {
return false
}
customVerb := rs[1]
specificVerbReg := regexp.MustCompile(fmt.Sprintf(":%s$", customVerb))
return specificVerbReg.MatchString(pathToken)
}
func removeCustomVerb(str string) string {
return customVerbReg.ReplaceAllString(str, "")
}
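// Illustrative only: a route whose last path token carries a custom verb
// (the path and handler names are hypothetical):
//
//	ws.Route(ws.POST("/resources/{id}:undelete").To(r.undelete))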

src/vendor/github.com/emicklei/go-restful/v3/doc.go generated vendored Normal file

@ -0,0 +1,185 @@
/*
Package restful, a lean package for creating REST-style WebServices without magic.
WebServices and Routes
A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls.
Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes.
WebServices must be added to a container (see below) in order to handle Http requests from a server.
A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept).
This package has the logic to find the best matching Route and if found, call its Function.
ws := new(restful.WebService)
ws.
Path("/users").
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON, restful.MIME_XML)
ws.Route(ws.GET("/{user-id}").To(u.findUser)) // u is a UserResource
...
// GET http://localhost:8080/users/1
func (u UserResource) findUser(request *restful.Request, response *restful.Response) {
id := request.PathParameter("user-id")
...
}
The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response.
See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation.
Regular expression matching Routes
A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path.
For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters.
Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax)
This feature requires the use of a CurlyRouter.
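For example, an illustrative route using such a constrained parameter could look like:
ws.Route(ws.GET("/persons/{name:[A-Z][A-Z]}").To(u.findPerson))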
Containers
A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests.
Using the statements restful.Add(...) and restful.Filter(...) registers WebServices and Filters with the Default Container.
The Default container of go-restful uses the http.DefaultServeMux.
You can create your own Container and create a new http.Server for that particular container.
container := restful.NewContainer()
server := &http.Server{Addr: ":8081", Handler: container}
Filters
A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses.
You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc.
In the restful package there are three hooks into the request,response flow where filters can be added.
Each filter must define a FilterFunction:
func (req *restful.Request, resp *restful.Response, chain *restful.FilterChain)
Use the following statement to pass the request,response pair to the next filter or RouteFunction
chain.ProcessFilter(req, resp)
Container Filters
These are processed before any registered WebService.
// install a (global) filter for the default container (processed before any webservice)
restful.Filter(globalLogging)
WebService Filters
These are processed before any Route of a WebService.
// install a webservice filter (processed before any route)
ws.Filter(webserviceLogging).Filter(measureTime)
Route Filters
These are processed before calling the function associated with the Route.
// install 2 chained route filters (processed before calling findUser)
ws.Route(ws.GET("/{user-id}").Filter(routeLogging).Filter(NewCountFilter().routeCounter).To(findUser))
See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations.
Response Encoding
Two encodings are supported: gzip and deflate. To enable this for all responses:
restful.DefaultContainer.EnableContentEncoding(true)
If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding.
Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route.
See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go
OPTIONS support
By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request.
Filter(OPTIONSFilter())
CORS
By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.
cors := CrossOriginResourceSharing{ExposeHeaders: []string{"X-My-Header"}, CookiesAllowed: false, Container: DefaultContainer}
Filter(cors.Filter)
Error Handling
Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why.
For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation.
400: Bad Request
If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
404: Not Found
Despite a valid URI, the resource requested may not be available
500: Internal Server Error
If the application logic could not process the request (or write the response) then use http.StatusInternalServerError.
405: Method Not Allowed
The request has a valid URL but the method (GET,PUT,POST,...) is not allowed.
406: Not Acceptable
The request either lacks an Accept header or has one that is not acceptable for this operation.
415: Unsupported Media Type
The request either lacks a Content-Type header or has one that is not supported for this operation.
ServiceError
In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response.
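A minimal, illustrative example of doing both at once, assuming the Response.WriteServiceError and restful.NewError helpers of this package (status code and message are examples only):
resp.WriteServiceError(http.StatusNotFound, restful.NewError(http.StatusNotFound, "user could not be found"))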
Performance options
This package has several options that affect the performance of your service. It is important to understand them and how you can change them.
restful.DefaultContainer.DoNotRecover(false)
DoNotRecover controls whether panics will be caught to return HTTP 500.
If set to false, the container will recover from panics.
Default value is true
restful.SetCompressorProvider(NewBoundedCachedCompressors(20, 20))
If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool.
Because writers are expensive structures, performance improves further when using a preloaded cache. You can also inject your own implementation.
Troubleshooting
This package has the means to produce detailed logging of the complete Http request matching process and filter invocation.
Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance) such as:
restful.TraceLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))
Logging
The restful.SetLogger() method allows you to override the logger used by the package. By default restful
uses the standard library `log` package and logs to stdout. Different logging packages are supported as
long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your
preferred package is simple.
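For example (the prefix and flags are illustrative), the standard library logger already satisfies this interface:
restful.SetLogger(log.New(os.Stderr, "[restful] ", log.LstdFlags))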
Resources
[project]: https://github.com/emicklei/go-restful
[examples]: https://github.com/emicklei/go-restful/blob/master/examples
[design]: http://ernestmicklei.com/2012/11/11/go-restful-api-design/
[showcases]: https://github.com/emicklei/mora, https://github.com/emicklei/landskape
(c) 2012-2015, http://ernestmicklei.com. MIT License
*/
package restful

Some files were not shown because too many files have changed in this diff.