Original file: a 2.4 GB Docker images tar file

| format | compressed size | ratio | compress time | decompress time |
|---|---|---|---|---|
| bzip2 | 645M | 26.6% | 4m13s | 1m33s |
| lz4 | 1.1G | 42.3% | 7s | 12s |
| gzip | 728M | 30.0% | 1m50s | 21s |
| zstd | 670M | 27.6% | 16s | 10s |
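A quick way to reproduce a row of this table, using a stand-in file in place of the real `docker save` output (the filenames here are illustrative):

```shell
set -e
# Stand-in for `docker save <images> -o images.tar`
head -c 1048576 /dev/zero > images.tar

# gzip round-trip; swap gzip/gunzip for zstd/unzstd or lz4/unlz4
# to trade compression ratio for speed, as in the table above.
gzip -kf images.tar        # writes images.tar.gz, keeps the original
gunzip -kf images.tar.gz   # restores images.tar

ls -l images.tar images.tar.gz
```

Timing each step with `time` on the real tarball gives the numbers in the table; zstd is the clear ratio/speed sweet spot here.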
$body = @{
    model    = "Qwen"
    messages = @(
        @{
            role    = "user"
            content = "tell me a joke"
        }
    )
    stream   = $false
}
# Send the request; $uri is whatever chat-completions endpoint you are calling.
$response = Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/json" -Body ($body | ConvertTo-Json -Depth 5)
stages:
- name: Deploy to external K8S
  steps:
  - runScriptConfig:
      # Any image with the kubectl binary packaged
      image: lawr/kubectl
      shellScript: |-
        mkdir -p ~/.kube
        # Decoding the kubeconfig from a base64 env var (set as a pipeline
        # secret) helps to hide the file content in build logs; the variable
        # name below is an assumption.
        echo "$KUBECONFIG_B64" | base64 -d > ~/.kube/config
We've got users constantly asking for caching of files between pipeline builds. Currently, when a build finishes, the build pod is removed and all intermediate files are lost. For instance, when a Maven build runs a second time, users would like a local cache so that dependencies do not have to be fetched again. They can set up a private Maven repository today to speed up the process, but that still costs additional network traffic and extra time.
For multi-tenancy reasons we don't want to support host-path volumes, and we don't need dynamic provisioning for pipeline use cases. So the plan is to support three volume types: existing PVCs, Secrets, and ConfigMaps.
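A minimal sketch of the caching idea, assuming a pre-created PVC; the pod spec below is plain Kubernetes, and all names (`build-pod`, `pipeline-cache`, the mount path) are illustrative:

```yaml
# Hypothetical build pod mounting an existing PVC so the local Maven
# repository (~/.m2) survives between builds. No dynamic provisioning
# is involved; the PVC is created ahead of time.
apiVersion: v1
kind: Pod
metadata:
  name: build-pod
spec:
  containers:
    - name: maven-build
      image: maven:3-jdk-8
      volumeMounts:
        - name: maven-cache
          mountPath: /root/.m2
  volumes:
    - name: maven-cache
      persistentVolumeClaim:
        claimName: pipeline-cache   # existing PVC, created by the user
```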
Pipeline currently provides extensibility through different context images in pipeline steps: an image bundling certain utilities can be treated as a plugin. But we lack a centralized place to maintain available plugins, whether provided by Rancher or by community users. The user experience of picking a plugin can also be improved, so that users with little container/Kubernetes experience can use existing utility images more easily.
We use questions files to describe the usage of a pipeline plugin. An example for using the drone s3 plugin:
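A hypothetical questions file for the drone s3 plugin might look like the following; the exact schema Rancher consumes and the plugin's full option list are assumptions here:

```yaml
# Illustrative questions file; variable names mirror common drone s3
# plugin settings, but the schema itself is a sketch.
image: plugins/s3
questions:
  - variable: bucket
    label: Bucket
    type: string
    required: true
  - variable: access_key
    label: AWS Access Key
    type: string
    required: true
  - variable: secret_key
    label: AWS Secret Key
    type: password
    required: true
  - variable: source
    label: Source files to upload
    type: string
  - variable: target
    label: Target path in the bucket
    type: string
```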
There are some Drone plugins for integration with K8S:
Drone-Kubernetes: upgrades a Kubernetes deployment with a newer version of an image.