
Building a Docker/Helm registry with Nexus3 + MinIO/S3

1. Standalone deployment with Docker

  • This deployment scheme is recommended for test environments only.
# Create the host data directory
sudo mkdir -p /mnt/disk1/nexus3/data
sudo chmod -R 777 /mnt/disk1/nexus3/

# Start a standalone nexus3 container
docker run -d \
--name nexus3 \
--restart=no \
--ulimit nofile=655350 \
--ulimit memlock=-1 \
--memory=1G \
--memory-swap=-1 \
--cpuset-cpus='1-7' \
-e INSTALL4J_ADD_VM_PARAMS="-XX:InitialRAMPercentage=80.0 -XX:MinRAMPercentage=80.0 -XX:MaxRAMPercentage=80.0 -XX:MaxDirectMemorySize=4G" \
-p 28081:8081 \
-p 28082:8082 \
-p 28083:8083 \
-v /etc/localtime:/etc/localtime \
-v /mnt/disk1/nexus3/data/:/nexus-data/ \
sonatype/nexus3:3.37.3
  • Of the mappings above, port 28081:8081 serves the Nexus3 web console, while 28082:8082 and 28083:8083 serve the hosted and group docker registries respectively. The --cpuset-cpus flag pins the container to CPU cores (numbered from 0); set it to match your hardware, otherwise you may get: Requested CPUs are not available - requested 1-7, available: 0-1.

  • Note: no port is mapped for docker(proxy) repositories, because there are usually several of them (e.g. docker-proxy-gcr.io / docker-proxy-k8s.gcr.io) and they are aggregated under the docker(group) repository, so they only need to be reachable inside the container rather than exposed externally (trivially true for the single-node Nexus3 deployed here, since everything runs in one JVM process). Also note that the free edition of Nexus3 only supports pull, not push, through a docker group repository.

  • INSTALL4J_ADD_VM_PARAMS tunes the default JVM parameters of the Nexus3 container. For the rationale, see: https://blogs.wl4g.com/archives/2969

  • View all logs: tail -f /mnt/disk1/nexus3/data/log/*

  • To change the logging configuration:

## Enter the container as root
docker exec -u root -it nexus3 bash

## Edit the default logging configuration
vi /opt/sonatype/nexus/etc/logback/logback.xml
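For example, to raise verbosity for Nexus's own components you could add a logger element inside the existing <configuration> root of logback.xml (a sketch; the logger name and level here are illustrative, adjust to taste):

```xml
<!-- Illustrative: raise log verbosity for Nexus components (place inside <configuration>) -->
<logger name="org.sonatype.nexus" level="DEBUG"/>
```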

2. Gateway configuration

  • Pseudo domain name configuration (local hosts entries)
sudo tee -a /etc/hosts <<-'EOF'
127.0.0.1 registry.your-domain.io cr.registry.your-domain.io mirror.registry.your-domain.io
EOF
  • Nginx configuration: /etc/nginx/conf.d/nexus3.conf
sudo mkdir -p /etc/nginx/conf.d/
sudo tee /etc/nginx/conf.d/nexus3.conf > /dev/null <<-'EOF'
##
## Nexus3 OSS gateway configuration.
##

## Web console.
server {
   #listen       443;
   #server_name  registry.your-domain.io;
   #ssl on;
   #ssl_certificate cert.d/registry.your-domain.io.pem;
   #ssl_certificate_key cert.d/registry.your-domain.io.key;
   #ssl_session_timeout 5m;
   #ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
   #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   #ssl_prefer_server_ciphers on;
   listen       80;
   server_name  registry.your-domain.io;
   proxy_set_header Host $host:$server_port;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_connect_timeout 3600;
   proxy_send_timeout 3600;
   proxy_read_timeout 3600;
   proxy_buffering off;
   proxy_request_buffering off;
   client_max_body_size 4096m;
   location / {
     proxy_pass http://localhost:28081;
   }
}

## Docker release registry(hosted).
server {
   #listen       443;
   #server_name  cr.registry.your-domain.io;
   #ssl on;
   #ssl_certificate cert.d/cr.registry.your-domain.io.pem;
   #ssl_certificate_key cert.d/cr.registry.your-domain.io.key;
   #ssl_session_timeout 5m;
   #ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
   #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   #ssl_prefer_server_ciphers on;
   listen        80;
   server_name   cr.registry.your-domain.io;
   proxy_set_header Host $host:$server_port;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_connect_timeout 3600;
   proxy_send_timeout 3600;
   proxy_read_timeout 3600;
   proxy_buffering off;
   proxy_request_buffering off;
   client_max_body_size 4096m;
   location / {
     proxy_pass http://localhost:28082; # Note: must match the docker hosted repository port, otherwise push fails; see FAQ 8.2
   }
}

## Docker mirror registry(group->proxy).
server {
   #listen       443;
   #server_name  mirror.registry.your-domain.io;
   #ssl on;
   #ssl_certificate cert.d/mirror.registry.your-domain.io.pem;
   #ssl_certificate_key cert.d/mirror.registry.your-domain.io.key;
   #ssl_session_timeout 5m;
   #ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
   #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   #ssl_prefer_server_ciphers on;
   listen        80;
   server_name   mirror.registry.your-domain.io;
   proxy_set_header Host $host:$server_port;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection "Upgrade";
   proxy_connect_timeout 3600;
   proxy_send_timeout 3600;
   proxy_read_timeout 3600;
   proxy_buffering off;
   proxy_request_buffering off;
   client_max_body_size 4096m;
   location / {
     proxy_pass http://localhost:28083; # Note: should match the docker group repository port
   }
}
EOF

sudo nginx -t && sudo systemctl restart nginx

3. Create the registry repositories

3.1 Create the Blob store

  • Open https://registry.your-domain.io in a browser and log in, then first create the S3-backed Blob store. Reference: configuring a MinIO/S3 blob store for Nexus3

  • Prerequisites:

    1) Nexus version >= 3.12
    2) A registered S3 object storage service endpoint, or an existing MinIO deployment.

  • (Optional) If you do not have an S3 storage service yet, you can deploy a standalone MinIO instance as follows

For MinIO deployment on Kubernetes, see: https://github.com/minio/operator/blob/master/README.md

sudo mkdir -p /mnt/disk1/nexus3_minio
docker run -d \
--name nexus3_minio \
--restart no \
-p 9000:9000 \
-p 9900:9900 \
-v /mnt/disk1/nexus3_minio:/data \
minio/minio \
server /data --console-address ":9900"
  • Open http://127.0.0.1:9900/ in a browser; the default credentials are minioadmin/minioadmin

  • Screenshots of creating the S3 Blob store and the docker registry repositories in Nexus3 are omitted here for brevity; see: gitee.com/wl4g/blogs/tree/master/articles/kubernetes/nexus3-docker-registry-deploy/resources/shots

  • Main steps:

    • a) Create an S3 bucket in MinIO;
    • b) Create an S3-type Blob store in Nexus3;
    • c) In Nexus3, create the repositories my-docker-release(:8082), my-docker-group(:8083), my-docker-proxy-gcr.io(:8084), and my-docker-proxy-k8s.gcr.io(:8085);
    • d) Create the role nx-docker-all in Nexus3; to keep authorization simple, search for the keyword docker and grant all privileges ending in *;
    • e) Create the user wl4g in Nexus3 and assign it the role nx-docker-all;
    • f) Important: under Security -> Realms in Nexus3, be sure to activate the Docker Bearer Token Realm, otherwise image pull/push will fail;
    • g) Configure an HTTP proxy for Nexus3. Note that, at least as of 3.37.3, this can only be set globally, which is a limitation, since you usually want a proxy only for specific proxy-type repositories. A workaround is to give the proxy service (privoxy) a PAC-style list for precise, on-demand proxying (see: blogs.wl4g.com/archives/121). Combined with containerd's mirrors mechanism, pulls are then routed through the private registry only for selected domains, e.g.: crictl pull --creds wl4g:123456 gcr.io/istio-testing/build-tools
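The per-domain forwarding from step g) might look like the privoxy action-file fragment below (a sketch; 127.0.0.1:1080 stands in for whatever upstream SOCKS proxy you actually run, and the domain patterns are illustrative):

```
# user.action fragment: forward only selected registry domains through the
# upstream proxy; everything else connects directly.
{+forward-override{forward-socks5 127.0.0.1:1080 .}}
.gcr.io
.k8s.gcr.io
quay.io
```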

4. Configure the container engine & verify

4.1 Docker environment verification

  • If your container engine is Docker, add the mirror addresses to /etc/docker/daemon.json, e.g.:
{
  "registry-mirrors": [
    "https://hjbu3ivg.mirror.aliyuncs.com",
    "https://mirror.registry.your-domain.io"
  ],
  "insecure-registries": [
    "mirror.registry.your-domain.io"
  ]
}
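Since a malformed daemon.json prevents dockerd from starting at all, it is worth syntax-checking the file before installing it. A minimal sketch (staging to a temp file; python3 is only used here as a JSON validator):

```shell
# Stage the config, validate the JSON, then (commented out) install and restart.
staged=$(mktemp)
cat > "$staged" <<'EOF'
{
  "registry-mirrors": [
    "https://mirror.registry.your-domain.io"
  ],
  "insecure-registries": [
    "mirror.registry.your-domain.io"
  ]
}
EOF
python3 -m json.tool "$staged" > /dev/null && echo "daemon.json syntax OK"
# sudo install -m 0644 "$staged" /etc/docker/daemon.json
# sudo systemctl restart docker
```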

Then restart dockerd: sudo systemctl restart docker

  • Verify by pulling and pushing an image
## First pull explicitly from Docker Hub
docker pull docker.io/alpine

## Log in to the private (hosted) registry
docker login -u wl4g -p 123456 cr.registry.your-domain.io

## Tag and push the image
docker tag docker.io/alpine cr.registry.your-domain.io/mylibrary/alpine
docker push cr.registry.your-domain.io/mylibrary/alpine

## List all tags via the console listen port
curl -v http://registry.your-domain.io/repository/my-docker-release/v2/mylibrary/alpine/tags/list

{"name":"mylibrary/alpine","tags":["latest"]}

## List all tags via the repository connector port
curl http://mirror.registry.your-domain.io/v2/mylibrary/alpine/tags/list

{"name":"mylibrary/alpine","tags":["latest"]}

## More endpoints:
curl http://mirror.registry.your-domain.io/v2/mylibrary/alpine/manifests/latest
curl http://mirror.registry.your-domain.io/v2/_catalog

4.2 Containerd environment verification

  • If your container engine is Containerd, add the mirror addresses to /etc/containerd/config.toml. If the configuration shown below is not present yet, generate the default config first, e.g.:
sudo mkdir -p /etc/containerd/
sudo containerd config default > /etc/containerd/config.toml
...
    [plugins."io.containerd.grpc.v1.cri".registry]    
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://mirror.registry.your-domain.io"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."mirror.registry.your-domain.io".auth]
          auth = "d2w0ZzoxMjM0NTY="
          ## Priority use username-password.
          ## see:https://blogs.wl4g.com/archives/66#4.2
          ## see:https://github.com/containerd/cri/blob/master/docs/registry.md#configure-registry-endpoint
          username = "wl4g"
          password = "123456"
          #identitytoken = ""
...
  • With the config above, when you run crictl pull gcr.io/istio-testing/build-tools, the request is actually forwarded to the private registry mirror.registry.your-domain.io (you can see this in the logs: journalctl -afu containerd), i.e. to Nexus3, which then reaches the upstream gcr.io registry through the configured HTTPS proxy. Pulls are therefore completely transparent to the client.

  • Note: authentication config passed via CRI takes precedence over this file; the registry credentials configured here are only used when Kubernetes does not supply auth config through CRI. See: github.com/containerd/cri/blob/master/docs/registry.md#configure-registry-credentials

After modifying this configuration, restart the containerd service; alternatively, passing the credentials directly to crictl also works, e.g.: crictl pull --auth 'd2w0ZzoxMjM0NTY=' gcr.io/istio-testing/build-tools

  • Important: when computing the base64 for HTTP Basic auth in a shell, you must pass -n to echo, otherwise a trailing newline is appended and registry authentication fails. Correct: echo -n 'wl4g:123456' | base64 -w 0
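To see the difference, compare the two encodings side by side (using the wl4g:123456 credentials from the examples above):

```shell
# With -n: exactly the credential bytes are encoded -> matches the auth value in config.toml.
good=$(echo -n 'wl4g:123456' | base64 -w 0)
# Without -n: echo appends '\n', the token is one byte longer, and registry auth fails.
bad=$(echo 'wl4g:123456' | base64 -w 0)
echo "with -n:    $good"
echo "without -n: $bad"
```

The first prints d2w0ZzoxMjM0NTY= (the value used in config.toml above); the second ends in K instead of the = padding, because the encoded newline changes the final base64 group.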

  • Possibly useful (optional)

    • Point the crictl command at the container runtime socket
      sudo tee /etc/crictl.yaml > /dev/null <<-'EOF'
      ## Use docker engine (dockershim).
      #runtime-endpoint: unix:///var/run/dockershim.sock
      #image-endpoint: unix:///var/run/dockershim.sock
      ## Use containerd engine.
      runtime-endpoint: unix:///run/containerd/containerd.sock
      image-endpoint: unix:///run/containerd/containerd.sock
      ## Use cri-o engine.
      #runtime-endpoint: unix:///var/run/crio/crio.sock
      #image-endpoint: unix:///var/run/crio/crio.sock
      timeout: 10
      debug: true
      pull-image-on-create: true
      disable-pull-on-run: false
      EOF

4.3 K3s environment verification

  • If your container engine is k3s, add the mirror addresses to /etc/rancher/k3s/registries.yaml. After a restart, k3s automatically renders this into /var/lib/rancher/k3s/agent/etc/containerd/config.toml, since k3s simply embeds containerd.
sudo cp /etc/rancher/k3s/registries.yaml /etc/rancher/k3s/registries.yaml.bak
sudo tee /etc/rancher/k3s/registries.yaml > /dev/null <<'EOF'
# see:https://rancher.com/docs/k3s/latest/en/installation/private-registry/#rewrites
mirrors:
  quay.io:
    endpoint:
      - "https://mirror.registry.your-domain.io"
    #rewrite:
    #  "^argoproj/(.*)": "public/argoproj/$1"
  k8s.gcr.io:
    endpoint:
      - "https://mirror.registry.your-domain.io"
  gcr.io:
    endpoint:
      - "https://mirror.registry.your-domain.io"
  docker.io:
    endpoint:
      - "https://mirror.registry.your-domain.io"
configs:
  "mirror.registry.your-domain.io":
    auth:
      username: user1
      password: 123456
EOF

sudo systemctl restart k3s

5. Configure helm & verify

helm plugin install --version main https://github.com/wl4g-k8s/helm-nexus3-push-plugin.git
helm nexus-push --help
#ls -al /Users/jw/Library/helm/plugins/helm-nexus3-push-plugin.git/push.sh
  • Add the helm repo
helm repo add --username myuser --password 123456 myrepo https://registry.your-domain.io/repository/helm-release/
  • Push a chart to nexus3
#helm nexus-push myrepo ./mychart
helm nexus-push myrepo mychart-0.1.0.tgz

Found earlier saved authorization. Will use user [myuser] with password [123456] for chart mychart-0.1.0.tgz
Pushing mychart-0.1.0.tgz to repo https://registry.your-domain.io/repository/helm-release/...
  HTTP/1.1 100 Continue
  HTTP/1.1 200 OK
  Server: nginx/1.14.1
  Date: Thu, 25 Aug 2022 08:11:05 GMT
  Transfer-Encoding: chunked
  Connection: keep-alive
  X-Content-Type-Options: nosniff
  Content-Security-Policy: sandbox allow-forms allow-modals allow-popups allow-presentation allow-scripts allow-top-navigation
  X-XSS-Protection: 1; mode=block
Done

6. Production deployment on a Kubernetes cluster

TODO

7. References

8. FAQ

8.1 Review of problems encountered while writing this article:

Image URI                                                         docker pull    nerdctl pull   crictl pull
gcr.io/istio-testing/build-tools:latest                           ① failed       ② failed       ③ succeeded
mirror.registry.your-domain.io/istio-testing/build-tools:latest   ④ succeeded    ⑤ succeeded    ⑥ succeeded

8.2 docker push fails with an error

  • Error: denied: Deploying to groups is a PRO-licensed feature. See https://links.sonatype.com/product-nexus-repository

  • This error usually means you pushed to the wrong port. In the setup above, cr.registry.your-domain.io must be proxied by nginx to the my-docker-release repository, i.e. port 8082. If the request is instead forwarded to my-docker-group (8083) or my-docker-proxy-gcr.io (8084), this error is triggered. At first glance it is easy to misread it as Nexus3 CE not supporting docker push at all; in fact pushing through the group port (8083) would also work, but only as a Nexus3 PRO (enterprise) feature.

8.3 How do I configure multi-tenancy?

  • As the example above shows, the namespace mylibrary cannot currently (3.37) be created explicitly in the console; it is created automatically on push, provided the user has permission.

  • Multi-tenancy is not separated by path. Instead, each docker (group|proxy|hosted) repository is given its own listen port at creation time, and a gateway such as nginx binds a subdomain to each port. The address a client ends up using therefore looks like the one Alibaba Cloud Container Registry assigns: https://hjbu3ivg.mirror.aliyuncs.com

8.4 After deleting docker images, the Nexus Blob store (e.g. MinIO) does not release space

  • Deleting an image in Nexus only soft-deletes the underlying blobs. Space in the backing store is reclaimed only after running the scheduled cleanup tasks (System -> Tasks), typically "Docker - Delete unused manifests and images" followed by "Admin - Compact blob store".

8.5 After a successful deployment, pulling images fails with not found: manifest unknown

  • Precondition: the image public/library/alpine:3.6.5 has already been pushed to the newly created docker-release repository.

  • Case 1: the Nexus3 console (Repository -> Repositories -> docker-release) shows the repository URL as https://registry.your-domain.io/repository/docker-release/, but pulling an image with that path fails, e.g.: docker pull mirror.registry.your-domain.io/repository/docker-release/public/library/alpine:3.6.5

    Error response from daemon: manifest for mirror.registry.your-domain.io/repository/docker-release/public/library/alpine:3.6.5 not found: manifest unknown: manifest unknown
  • Case 2: the pull succeeds.

    docker pull mirror.registry.your-domain.io/public/library/alpine:3.6.5
    Pulling from public/library/alpine
    5a3ea8efae5d: Pulling fs layer ...
  • Case 3: both requests return exactly the same catalog JSON.

export authCred=$(echo -n 'username:password' | base64 -w 0)
curl -v -H "Authorization: Basic ${authCred}" https://registry.your-domain.io/repository/docker-group/v2/_catalog
curl -v -H "Authorization: Basic ${authCred}" https://mirror.registry.your-domain.io/v2/_catalog
  • Summary:
    Case 3 shows the two requests return identical results. The former is Nexus's internal path-based URL for distinguishing repositories (probably because Nexus grew out of Maven repositories, and was later generalized so that every repository type is mapped under a uniform path by default). But the image path given to docker pull has its own semantics: the Docker registry API uses the path to distinguish user namespaces, unlike Maven, where artifacts are located by groupId/artifactId coordinates. Users unfamiliar with Nexus therefore easily misuse the console-displayed URL to pull images, e.g. docker pull registry.your-domain.io/repository/docker-group/public/library/alpine:3.6.5, which fails with: Error response from daemon: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "\n<!DOCTYPE html>...

    The same distinction shows up when creating repositories in the Nexus3 console: docker-group/docker-hosted/docker-proxy repositories require you to configure connector ports, while other types such as Maven do not.
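The two URL families can be summarized programmatically; only the connector-port form is a valid docker pull reference (repository and image names here follow the examples above):

```shell
repo=docker-group
image=public/library/alpine

# Console-style URL: Nexus's internal path-based addressing; fine for curl, wrong for docker pull.
console_url="https://registry.your-domain.io/repository/${repo}/v2/${image}/tags/list"
# Connector-port URL: plain Docker Registry v2 addressing; this is what docker pull expects.
connector_url="https://mirror.registry.your-domain.io/v2/${image}/tags/list"

echo "$console_url"
echo "$connector_url"
```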
