Mirror of https://github.com/rancher/rancher-docs.git (synced 2026-04-14 10:25:40 +00:00)

Commit f0196c62ee ("remove pipeline references"), committed by Billy Tat. Parent: ca01513152.
@@ -35,11 +35,18 @@ The Rancher API server is built on top of an embedded Kubernetes API server and

### Working with Kubernetes

- **Provisioning Kubernetes clusters:** The Rancher API server can [provision Kubernetes](../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) on existing nodes, or perform [Kubernetes upgrades.](installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)
- **Catalog management:** Rancher provides the ability to use a [catalog of Helm charts](../pages-for-subheaders/helm-charts-in-rancher.md) that makes it easy to repeatedly deploy applications.
- **Managing projects:** A project is a group of multiple namespaces and access control policies within a cluster. A project is a Rancher concept, not a Kubernetes concept; it allows you to manage multiple namespaces as a group and perform Kubernetes operations in them. The Rancher UI provides features for [project administration](../pages-for-subheaders/manage-projects.md) and for [managing applications within projects.](../pages-for-subheaders/kubernetes-resources-setup.md)
- **Fleet Continuous Delivery:** Within Rancher, you can leverage [Fleet Continuous Delivery](../pages-for-subheaders/fleet-gitops-at-scale.md) to deploy applications from Git repositories to targeted downstream Kubernetes clusters, without any manual operation.
- **Istio:** Our [integration with Istio](../pages-for-subheaders/istio.md) is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Developers can then use Istio to enforce security policies, troubleshoot problems, or manage traffic for blue/green deployments, canary deployments, or A/B testing.

### Working with Cloud Infrastructure
@@ -1,15 +0,0 @@

---
title: Rancher's CI/CD Pipelines
description: Use Rancher's CI/CD pipeline to automatically check out code, run builds or scripts, publish Docker images, and deploy software to users
---

Using Rancher, you can integrate with a GitHub repository to set up a continuous integration (CI) pipeline.

After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate pipeline execution:

- Build your application from code to image.
- Validate your builds.
- Deploy your built images to your cluster.
- Run unit tests.
- Run regression tests.

For details, refer to the [pipelines](../../../pages-for-subheaders/pipelines.md) section.
@@ -129,7 +129,6 @@ After registering a cluster, the cluster owner can:

- Enable [monitoring, alerts and notifiers](../../../pages-for-subheaders/monitoring-and-alerting.md)
- Enable [logging](../../../pages-for-subheaders/logging.md)
- Enable [Istio](../../../pages-for-subheaders/istio.md)
- Use [pipelines](../../advanced-user-guides/manage-projects/ci-cd-pipelines.md)
- Manage projects and workloads

### Additional Features for Registered K3s Clusters
@@ -47,12 +47,6 @@ After you expose your cluster to external requests using a load balancer and/or

For more information, see [Service Discovery](../how-to-guides/new-user-guides/kubernetes-resources-setup/create-services.md).

## Pipelines

After your project has been [connected to a version control provider](../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md#1-configure-version-control-providers), you can add repositories and start configuring a pipeline for each one.

For more information, see [Pipelines](pipelines.md).

## Applications

Besides launching individual components of an application, you can use the Rancher catalog to launch entire applications, which are Helm charts.
@@ -20,7 +20,6 @@ You can use projects to perform actions like:

- [Set resource quotas](manage-project-resource-quotas.md)
- [Manage namespaces](../how-to-guides/new-user-guides/manage-namespaces.md)
- [Configure tools](../reference-guides/rancher-project-tools.md)
- [Set up pipelines for continuous integration and deployment](../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md)
- [Configure pod security policies](../how-to-guides/advanced-user-guides/manage-projects/manage-pod-security-policies.md)

### Authorization
@@ -1,285 +0,0 @@

---
title: Pipelines
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

:::note Notes

- As of Rancher v2.5, Git-based deployment pipelines are deprecated. We recommend handling pipelines with Rancher Continuous Delivery, powered by [Fleet](../how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md). To get to Fleet in Rancher, click <b>☰ > Continuous Delivery</b>.

- Pipelines in Kubernetes 1.21+ are no longer supported.

- Fleet does not replace Rancher pipelines; the distinction is that Rancher pipelines are now powered by Fleet.

:::

Rancher's pipeline provides a simple CI/CD experience. Use it to automatically check out code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.

Setting up a pipeline helps developers deliver new software as quickly and efficiently as possible. Using Rancher, you can integrate with a GitHub repository to set up a continuous integration (CI) pipeline.

After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate pipeline execution:

- Build your application from code to image.
- Validate your builds.
- Deploy your built images to your cluster.
- Run unit tests.
- Run regression tests.

:::note

Rancher's pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of enterprise-grade Jenkins or the other CI tools your team uses, and it is not a replacement for them.

:::

## Concepts

For an explanation of concepts and terminology used in this section, refer to [this page.](../reference-guides/pipelines/concepts.md)

## How Pipelines Work

After enabling pipelines in a project, you can configure multiple pipelines in that project. Each pipeline is unique and can be configured independently.

A pipeline is configured from a group of files that are checked into source code repositories. Users can configure their pipelines either through the Rancher UI or by adding a `.rancher-pipeline.yml` file to the repository.
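As an illustration, a minimal `.rancher-pipeline.yml` following this pipeline-as-code model might look like the sketch below. The image and script are placeholder assumptions, and exact field names may vary between Rancher versions:

```yaml
# Minimal illustrative .rancher-pipeline.yml (image and script are placeholders)
stages:
  - name: Build
    steps:
      - runScriptConfig:
          image: golang:1.17        # any builder image your project needs
          shellScript: go build ./...
```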
Before pipelines can be configured, you will need to configure authentication to your version control provider, e.g. GitHub, GitLab, or Bitbucket. If you haven't configured a version control provider, you can always use [Rancher's example repositories](../reference-guides/pipelines/example-repositories.md) to view some common pipeline deployments.

When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. The following components are deployed to it:

- **Jenkins:**

  The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked.

  :::note

  There is no option to use an existing Jenkins deployment as the pipeline engine.

  :::

- **Docker Registry:**

  Out of the box, the default target for your build-publish step is an internal Docker registry. However, you can configure the pipeline to push to a remote registry instead. The internal Docker registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, push to an external registry.

- **Minio:**

  Minio storage is used to store the logs for pipeline executions.

  :::note

  The managed Jenkins instance works statelessly, so you don't need to worry about its data persistency. The Docker registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want pipeline logs to survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components](../reference-guides/pipelines/configure-persistent-data.md).

  :::
## Role-based Access Control for Pipelines

If you can access a project, you can enable repositories to start building pipelines.

Only [administrators](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md), [cluster owners or members](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#cluster-roles), or [project owners](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles) can configure version control providers and manage global pipeline execution settings.

Project members can only configure repositories and pipelines.

## Setting up Pipelines

### Prerequisite

:::note Legacy Feature Flag:

Because the pipelines app was deprecated in favor of Fleet, you will need to turn on the feature flag for legacy features before using pipelines. Note that pipelines in Kubernetes 1.21+ are no longer supported.

1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**.
1. Go to the `legacy` feature flag and click **⋮ > Activate**.

:::

1. [Configure version control providers](#1-configure-version-control-providers)
2. [Configure repositories](#2-configure-repositories)
3. [Configure the pipeline](#3-configure-the-pipeline)
### 1. Configure Version Control Providers

Before you can start configuring a pipeline for your repository, you must configure and authorize a version control provider:

- GitHub
- GitLab
- Bitbucket

Select your provider's tab below and follow the directions.

<Tabs>
<TabItem value="GitHub">

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Click the **Configuration** tab.
1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to GitHub to set up an OAuth app there.
1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.
1. If you're using GitHub Enterprise, select **Use a private github enterprise installation** and enter the host address of your GitHub installation.
1. Click **Authenticate**.

</TabItem>
<TabItem value="GitLab">

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Click the **Configuration** tab.
1. Click **GitLab**.
1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab.
1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher.
1. If you're using a self-hosted GitLab setup, select **Use a private gitlab enterprise installation** and enter the host address of your GitLab installation.
1. Click **Authenticate**.

:::note Notes:

1. The pipeline uses the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html); the supported GitLab version is 9.0+.
2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in the GitLab admin settings.

:::

</TabItem>
<TabItem value="Bitbucket Cloud">

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Click the **Configuration** tab.
1. Click **Bitbucket** and leave **Use Bitbucket Cloud** selected by default.
1. Follow the directions displayed to **Setup a Bitbucket Cloud application**. Rancher redirects you to Bitbucket to set up an OAuth consumer there.
1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher.
1. Click **Authenticate**.

</TabItem>
<TabItem value="Bitbucket Server">

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Click the **Configuration** tab.
1. Click **Bitbucket** and choose the **Use private Bitbucket Server setup** option.
1. Follow the directions displayed to **Setup a Bitbucket Server application**.
1. Enter the host address of your Bitbucket Server installation.
1. Click **Authenticate**.

:::note

Bitbucket Server performs SSL verification when sending webhooks to Rancher, so the Rancher server's certificate must be trusted by the Bitbucket Server. There are two options:

1. Set up the Rancher server with a certificate from a trusted CA.
1. If you're using self-signed certificates, import the Rancher server's certificate into the Bitbucket Server. For instructions, see the Bitbucket Server documentation on [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).

:::

</TabItem>
</Tabs>

**Result:** After the version control provider is authenticated, you are automatically redirected to start configuring which repositories you want to use with a pipeline.
### 2. Configure Repositories

After the version control provider is authorized, you are automatically redirected to start configuring which repositories you want to use with pipelines. Even if someone else has set up the version control provider, you will see their repositories and can build a pipeline.

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Click **Configure Repositories**.
1. A list of repositories is displayed. If you are configuring repositories for the first time, click **Authorize & Fetch Your Own Repositories** to fetch your repository list.
1. For each repository that you want to set up a pipeline for, click **Enable**.
1. When you're done enabling all your repositories, click **Done**.

**Results:** You have a list of repositories that you can start configuring pipelines for.
### 3. Configure the Pipeline

Now that repositories are added to your project, you can start configuring the pipeline by adding automated stages and steps. For your convenience, there are multiple built-in step types for dedicated tasks.

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Find the repository that you want to set up a pipeline for.
1. Configure the pipeline through the UI or through a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Pipeline configuration is split into stages and steps. Stages must fully complete before moving on to the next stage, but steps in a stage run concurrently. For each stage, you can add different step types. Note: as you build out each step, there are different advanced options based on the step type. Advanced options include trigger rules, environment variables, and secrets. For more information on configuring the pipeline through the UI or the YAML file, refer to the [pipeline configuration reference.](../reference-guides/pipelines/pipeline-configuration.md)

   * To use the UI, select the vertical **⋮ > Edit Config** and configure the pipeline there. After the pipeline is configured, you must view the YAML file and push it to the repository.
   * To use the YAML file, select the vertical **⋮ > View/Edit YAML** and edit the pipeline configuration. After any changes, you need to push the file to the repository for it to be updated there. When editing the pipeline configuration, it takes a few moments for Rancher to check for an existing pipeline configuration.

1. Select which `branch` to use from the list of branches.
1. Optional: Set up notifications.
1. Set up the trigger rules for the pipeline.
1. Enter a **Timeout** for the pipeline.
1. When all the stages and steps are configured, click **Done**.

**Results:** Your pipeline is now configured and ready to be run.
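For reference, a step with the advanced options mentioned above (trigger rules, environment variables, and secrets) might be sketched as follows in `.rancher-pipeline.yml`. The image, script, secret name, and branch are placeholder assumptions, and exact field names may differ between Rancher versions:

```yaml
# Sketch of a step with advanced options (all names are placeholders)
stages:
  - name: Build
    steps:
      - runScriptConfig:
          image: golang:1.17
          shellScript: go test ./... && go build ./...
        env:
          CGO_ENABLED: "0"            # plain environment variable
        envFrom:
          - sourceName: my-secret     # hypothetical secret in the project
            sourceKey: registry-token
        when:
          branch: [ main ]            # trigger rule: run this step only on main
          event: [ push ]
```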
## Pipeline Configuration Reference

Refer to [this page](../reference-guides/pipelines/pipeline-configuration.md) for details on how to configure a pipeline to:

- Run a script
- Build and publish images
- Publish catalog templates
- Deploy YAML
- Deploy a catalog app

The configuration reference also covers how to configure:

- Notifications
- Timeouts
- The rules that trigger a pipeline
- Environment variables
- Secrets
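The pipeline-level options in that list can also be sketched in YAML. This is an illustrative fragment only; the notifier ID, channel, timeout value, and branch names are placeholders, and the schema may vary by Rancher version:

```yaml
# Illustrative pipeline-level options (all values are placeholders)
notification:
  recipients:
    - recipient: "#build-status"      # hypothetical chat channel
      notifier: "local:n-example"     # hypothetical notifier ID
  condition: ["Failed", "Changed"]    # when to send notifications
timeout: 60                           # minutes before a run is aborted
branch:
  include: [ main ]                   # rules that trigger the pipeline
  exclude: [ docs ]
```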
## Running your Pipelines

Run your pipeline for the first time: find the pipeline and select the vertical **⋮ > Run**.

During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:

- `docker-registry`
- `jenkins`
- `minio`

This process takes several minutes. When it completes, you can view each pipeline component from the project's **Workloads** tab.
## Triggering a Pipeline

When a repository is enabled, a webhook is automatically set up in the version control provider. By default, the pipeline is triggered by a **push** event to the repository, but you can modify which events trigger a run.

Available events:

* **Push**: Whenever a commit is pushed to the configured branch in the repository, the pipeline is triggered.
* **Pull Request**: Whenever a pull request is made to the repository, the pipeline is triggered.
* **Tag**: When a tag is created in the repository, the pipeline is triggered.

:::note

This option doesn't exist for Rancher's [example repositories](../reference-guides/pipelines/example-repositories.md).

:::
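In YAML form, the same event triggers can be expressed with a `when` block on a stage or step. A hedged sketch (event names here follow the UI labels; exact spelling may differ by Rancher version):

```yaml
# Sketch: trigger on pushes and new tags, but not on pull requests
when:
  event: [ push, tag ]
```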
### Modifying the Event Triggers for the Repository

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. Find the repository where you want to modify the event triggers. Select the vertical **⋮ > Setting**.
1. Select which event triggers (**Push**, **Pull Request** or **Tag**) you want for the repository.
1. Click **Save**.
@@ -24,7 +24,6 @@ Here is the complete list of tokens that are generated with `ttl=0`:

| `agent-*` | Token for agent deployment |
| `compose-token-*` | Token for compose |
| `helm-token-*` | Token for Helm chart deployment |
| `*-pipeline*` | Pipeline token for project |
| `telemetry-*` | Telemetry token |
| `drain-node-*` | Token for drain (we use `kubectl` for drain because there is no native Kubernetes API) |
@@ -1,35 +0,0 @@

---
title: Concepts
---

The purpose of this page is to explain common concepts and terminology related to pipelines.

- **Pipeline:**

  A _pipeline_ is a software delivery process that is broken into different stages and steps. Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects. A pipeline is based on a specific repository. It defines the process to build, test, and deploy your code. Rancher uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model: pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

- **Stages:**

  A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the previous stage finish without failure.

- **Steps:**

  A pipeline step is executed inside its stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates.

- **Workspace:**

  The workspace is the working directory shared by all pipeline steps. At the beginning of a pipeline, source code is checked out to the workspace. The command for every step bootstraps in the workspace. During a pipeline execution, the artifacts from a previous step are available in later steps. The working directory is an ephemeral volume and is cleaned up along with the executor pod when a pipeline execution finishes.
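The stage ordering and shared workspace described above can be sketched in a pipeline file like this (the image and commands are placeholder assumptions):

```yaml
# Two stages: "test" runs only after every step in "build" succeeds,
# and it can read artifacts that "build" wrote to the shared workspace.
stages:
  - name: build
    steps:
      - runScriptConfig:
          image: golang:1.17
          shellScript: go build -o bin/app .   # writes into the workspace
  - name: test
    steps:
      - runScriptConfig:
          image: golang:1.17
          shellScript: ./bin/app --help        # reads from the same workspace
```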
Typically, pipeline stages include:

- **Build:**

  Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests.

- **Publish:**

  After the build is completed, either a Docker image is built and published to a Docker registry, or a catalog template is published.

- **Deploy:**

  After the artifacts are published, you release your application so users can start using the updated product.
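A typical build/publish/deploy pipeline file might be sketched like this. Image names, tags, and paths are placeholders, and the step-type field names follow the pipeline configuration reference but may vary by Rancher version:

```yaml
# Sketch of the three typical stages (all names and paths are placeholders)
stages:
  - name: build
    steps:
      - runScriptConfig:
          image: golang:1.17
          shellScript: go test ./... && go build ./...
  - name: publish
    steps:
      - publishImageConfig:
          dockerfilePath: ./Dockerfile
          buildContext: .
          tag: example/app:${CICD_GIT_COMMIT}   # assumes the built-in commit variable
  - name: deploy
    steps:
      - applyYamlConfig:
          path: ./deployment.yaml
```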
@@ -1,95 +0,0 @@

---
title: Configuring Persistent Data for Pipeline Components
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

The pipelines' internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out of the box and makes testing easy, but you lose the built images and build logs if the node running the Docker registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker registry and Minio to use persistent volumes.

This section assumes that you understand how persistent storage works in Kubernetes. For more information, refer to the section on [how storage works.](../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage.md)

:::note Prerequisites for both parts A and B:

[Persistent volumes](../../pages-for-subheaders/create-kubernetes-persistent-storage.md) must be available for the cluster.

:::
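For context, a persistent volume claim that could back either component might look like the following. This is a generic Kubernetes sketch, not a Rancher-specific resource; the claim name, namespace, size, and storage class are all placeholder assumptions:

```yaml
# Hypothetical PVC for the pipeline registry (all values are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
  namespace: pipeline-example-ns   # the pipeline's auto-created namespace varies
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard       # assumes a StorageClass named "standard" exists
```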
### A. Configuring Persistent Data for Docker Registry

1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Workload**.
1. Find the `docker-registry` workload and select **⋮ > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:

   - **Add Volume > Add a new persistent volume (claim)**
   - **Add Volume > Use an existing persistent volume (claim)**

1. Complete the form that displays to choose a persistent volume for the internal Docker registry.

   <Tabs>
   <TabItem value="Add a new persistent volume">

   1. Enter a **Name** for the volume claim.
   1. Select a volume claim **Source**:
      - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
      - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
   1. From the **Customize** section, choose the read/write access for the volume.
   1. Click **Define**.

   </TabItem>
   <TabItem value="Use an existing persistent volume">

   1. Enter a **Name** for the volume claim.
   1. Choose a **Persistent Volume Claim** from the drop-down.
   1. From the **Customize** section, choose the read/write access for the volume.
   1. Click **Define**.

   </TabItem>
   </Tabs>

1. In the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
1. Click **Upgrade**.

### B. Configuring Persistent Data for Minio

1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Workload**.
1. Go to the `minio` workload and select **⋮ > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:

   - **Add Volume > Add a new persistent volume (claim)**
   - **Add Volume > Use an existing persistent volume (claim)**

1. Complete the form that displays to choose a persistent volume for Minio.

   <Tabs>
   <TabItem value="Add a new persistent volume">

   1. Enter a **Name** for the volume claim.
   1. Select a volume claim **Source**:
      - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
      - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
   1. From the **Customize** section, choose the read/write access for the volume.
   1. Click **Define**.

   </TabItem>
   <TabItem value="Use an existing persistent volume">

   1. Enter a **Name** for the volume claim.
   1. Choose a **Persistent Volume Claim** from the drop-down.
   1. From the **Customize** section, choose the read/write access for the volume.
   1. Click **Define**.

   </TabItem>
   </Tabs>

1. In the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
1. Click **Upgrade**.

**Result:** Persistent storage is configured for your pipeline components.

@@ -1,89 +0,0 @@
---
title: Example Repositories
---

Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most closely resembles your environment before using pipelines with your own repositories in a production environment. Use the example repository as a sandbox for repo configuration, build demonstration, and so on. Rancher includes example repositories for:

- Go
- Maven
- PHP

:::note Prerequisites:

- The example repositories are only available if you have not [configured a version control provider](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md).
- Because the pipelines app was deprecated in favor of Fleet, you will need to turn on the feature flag for legacy features before using pipelines.
- Pipelines are no longer supported in Kubernetes 1.21+.

1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**.
1. Go to the `legacy` feature flag and click **⋮ > Activate**.

:::

To start using these example repositories:

1. [Enable the example repositories](#1-enable-the-example-repositories)
2. [View the example pipeline](#2-view-the-example-pipeline)
3. [Run the example pipeline](#3-run-the-example-pipeline)

### 1. Enable the Example Repositories

By default, the example pipeline repositories are disabled. Enable one (or more) to test out the pipeline feature and see how it works.

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. In the **Pipelines** tab, click **Configure Repositories**.

:::note

Example repositories only display if you haven't fetched your own repos.

:::

1. Click **Enable** for one of the example repos (e.g., `https://github.com/rancher/pipeline-example-go.git`). Then click **Done**.

**Results:**

- The example repository is enabled, and its pipeline is available in the **Pipelines** tab.

- The following workloads are deployed to a new namespace:

  - `docker-registry`
  - `jenkins`
  - `minio`

### 2. View the Example Pipeline

After enabling an example repository, review the pipeline to see how it is set up.

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. In the **Pipelines** tab, click **Configure Repositories**.
1. Find the example repository and select **⋮ > Edit Config**.
1. Click **Edit Config** or **View/Edit YAML** to view the stages and steps of the pipeline. The YAML view shows the `./rancher-pipeline.yml` file.

### 3. Run the Example Pipeline

After enabling an example repository, run the pipeline to see how it works.

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. In the **Pipelines** tab, go to the pipeline and select the vertical **⋮ > Run**.

:::note

When you run a pipeline for the first time, it takes a few minutes to pull the relevant images and provision the necessary pipeline components.

:::

**Result:** The pipeline runs. You can see the results in the logs.

### What's Next?

To set up your own pipeline for your repository, [configure a version control provider](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md), enable a repository, and finally configure your pipeline.

@@ -1,71 +0,0 @@
---
title: Example YAML File
---

Pipelines can be configured either through the UI or by using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

In the [pipeline configuration reference](pipeline-configuration.md), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.

Below is a full example `rancher-pipeline.yml` for those who want to jump right in.

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${ALIAS_ENV}
    # Set environment variables in container for the step
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
    # Set environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
- name: Publish my image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: rancher/rancher:v2.0.0
      # Optionally push to remote registry
      pushRemote: true
      registry: reg.example.com
- name: Deploy some workloads
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
# branch conditions for the pipeline
branch:
  include: [ master, feature/*]
  exclude: [ dev ]
# timeout in minutes
timeout: 30
notification:
  recipients:
  - # Recipient
    recipient: "#mychannel"
    # ID of Notifier
    notifier: "c-wdcsr:n-c9pg7"
  - recipient: "test@example.com"
    notifier: "c-wdcsr:n-lkrhd"
  # Select which statuses you want the notification to be sent
  condition: ["Failed", "Success", "Changed"]
  # Ability to override the default message (Optional)
  message: "my-message"
```

@@ -1,634 +0,0 @@
---
title: Pipeline Configuration Reference
---

In this section, you'll learn how to configure pipelines.

## Step Types

Within each stage, you can add as many steps as you'd like. When there are multiple steps in one stage, they run concurrently.

Step types include:

- [Run Script](#step-type-run-script)
- [Build and Publish Images](#step-type-build-and-publish-images)
- [Publish Catalog Template](#step-type-publish-catalog-template)
- [Deploy YAML](#step-type-deploy-yaml)
- [Deploy Catalog App](#step-type-deploy-catalog-app)

<!--
### Clone

The first stage is reserved for a cloning step that checks out source code from your repo. Rancher handles the cloning of the git repository. This action is equivalent to `git clone <repository_link> <workspace_dir>`.
-->

### Configuring Steps by UI

If you haven't added any stages, click **Configure pipeline for this branch** to configure the pipeline through the UI.

1. Add stages to your pipeline execution by clicking **Add Stage**.

1. Enter a **Name** for each stage of your pipeline.
1. For each stage, you can configure [trigger rules](#triggers-and-trigger-rules) by clicking **Show Advanced Options**. Note: this can always be updated at a later time.

1. After you've created a stage, start [adding steps](#step-types) by clicking **Add a Step**. You can add multiple steps to each stage.

### Configuring Steps by YAML

For each stage, you can add multiple steps. Read more about each [step type](#step-types) and its advanced options for full details on how to configure the YAML. The small example below shows multiple stages, each with a single step.

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
- name: Publish my image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: rancher/rancher:v2.0.0
      # Optionally push to remote registry
      pushRemote: true
      registry: reg.example.com
```

## Step Type: Run Script

The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test, and more, using whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.

### Configuring Script by UI

1. From the **Step Type** drop-down, choose **Run Script** and fill in the form.

1. Click **Add**.

### Configuring Script by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: golang
      shellScript: go build
```

## Step Type: Build and Publish Images

The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully.

The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely.

### Configuring Building and Publishing Images by UI

1. From the **Step Type** drop-down, choose **Build and Publish**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). |
Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. |
Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. |
Build Context <br/><br/> (**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/).

### Configuring Building and Publishing Images by YAML

You can use specific arguments for the Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example below. Available environment variables include:

Variable Name | Description
------------------------|------------------------------------------------------------
PLUGIN_DRY_RUN | Disable docker push
PLUGIN_DEBUG | Docker daemon executes in debug mode
PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list

<br/>

```yaml
# This example shows an environment variable being used
# in the Publish Image step. This variable allows you to
# publish an image to an insecure registry:

stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
      pushRemote: true
      registry: example.com
    env:
      PLUGIN_INSECURE: "true"
```
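
`PLUGIN_BUILD_ARGS` is not shown in the example above; the following is a minimal sketch of how it might be set. The image name and argument values are illustrative assumptions, not from the documentation.

```yaml
# Hypothetical sketch: passing Docker build args to the build.
stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
    env:
      # Comma-separated list; each entry is surfaced to the
      # Dockerfile as a build ARG (values here are examples).
      PLUGIN_BUILD_ARGS: "GIT_COMMIT=${CICD_GIT_COMMIT},ENVIRONMENT=staging"
```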

## Step Type: Publish Catalog Template

The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a git hosted chart repository. It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. All variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) are supported in any file in the chart folder.

### Configuring Publishing a Catalog Template by UI

1. From the **Step Type** drop-down, choose **Publish Catalog Template**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Chart Folder | The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. |
Catalog Template Name | The name of the template. For example, wordpress. |
Catalog Template Version | The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file. |
Protocol | You can choose to publish via HTTP(S) or SSH protocol. |
Secret | The secret that stores your Git credentials. You need to create a secret in the dedicated pipeline namespace in the project before adding this step. If you use HTTP(S) protocol, store the Git username and password in the `USERNAME` and `PASSWORD` keys of the secret. If you use SSH protocol, store the Git deploy key in the `DEPLOY_KEY` key of the secret. After the secret is created, select it in this option. |
Git URL | The Git URL of the chart repository that the template will be published to. |
Git Branch | The Git branch of the chart repository that the template will be published to. |
Author Name | The author name used in the commit message. |
Author Email | The author email used in the commit message. |

### Configuring Publishing a Catalog Template by YAML

You can add **Publish Catalog Template** steps directly in the `.rancher-pipeline.yml` file.

Under the `steps` section, add a step with `publishCatalogConfig`. You will provide the following information:

* Path: The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located.
* CatalogTemplate: The name of the template.
* Version: The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file.
* GitUrl: The git URL of the chart repository that the template will be published to.
* GitBranch: The git branch of the chart repository that the template will be published to.
* GitAuthor: The author name used in the commit message.
* GitEmail: The author email used in the commit message.
* Credentials: You should provide Git credentials by referencing secrets in the dedicated pipeline namespace. If you publish via SSH protocol, inject your deploy key into the `DEPLOY_KEY` environment variable. If you publish via HTTP(S) protocol, inject your username and password into the `USERNAME` and `PASSWORD` environment variables.

```yaml
# example
stages:
- name: Publish Wordpress Template
  steps:
  - publishCatalogConfig:
      path: ./charts/wordpress/latest
      catalogTemplate: wordpress
      version: ${CICD_GIT_TAG}
      gitUrl: git@github.com:myrepo/charts.git
      gitBranch: master
      gitAuthor: example-user
      gitEmail: user@example.com
    envFrom:
    - sourceName: publish-keys
      sourceKey: DEPLOY_KEY
```
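
The example above publishes over SSH. Based on the Credentials description, an HTTP(S) variant would inject both credential keys instead; this is a hedged sketch, assuming a secret named `publish-keys` that holds `USERNAME` and `PASSWORD` keys.

```yaml
# Hypothetical HTTP(S) variant (secret name and URL are illustrative)
stages:
- name: Publish Wordpress Template
  steps:
  - publishCatalogConfig:
      path: ./charts/wordpress/latest
      catalogTemplate: wordpress
      version: ${CICD_GIT_TAG}
      gitUrl: https://github.com/myrepo/charts.git
      gitBranch: master
      gitAuthor: example-user
      gitEmail: user@example.com
    envFrom:
    # Git username and password injected from the project secret
    - sourceName: publish-keys
      sourceKey: USERNAME
    - sourceName: publish-keys
      sourceKey: PASSWORD
```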

## Step Type: Deploy YAML

This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file on [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.

### Configure Deploying YAML by UI

1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form.

1. Enter the **YAML Path**, which is the path to the manifest file in the source code.

1. Click **Add**.

### Configure Deploying YAML by YAML

```yaml
# example
stages:
- name: Deploy
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
```

## Step Type: Deploy Catalog App

The **Deploy Catalog App** step deploys a catalog app in the project. It installs a new app if one is not present, or upgrades an existing one.

### Configure Deploying Catalog App by UI

1. From the **Step Type** drop-down, choose **Deploy Catalog App**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Catalog | The catalog from which the app template will be used. |
Template Name | The name of the app template. For example, wordpress. |
Template Version | The version of the app template you want to deploy. |
Namespace | The target namespace where you want to deploy the app. |
App Name | The name of the app you want to deploy. |
Answers | Key-value pairs of answers used to deploy the app. |

### Configure Deploying Catalog App by YAML

You can add **Deploy Catalog App** steps directly in the `.rancher-pipeline.yml` file.

Under the `steps` section, add a step with `applyAppConfig`. You will provide the following information:

* CatalogTemplate: The ID of the template. This can be found by clicking `Launch app` and selecting `View details` for the app. It is the last part of the URL.
* Version: The version of the template you want to deploy.
* Answers: Key-value pairs of answers used to deploy the app.
* Name: The name of the app you want to deploy.
* TargetNamespace: The target namespace where you want to deploy the app.

```yaml
# example
stages:
- name: Deploy App
  steps:
  - applyAppConfig:
      catalogTemplate: cattle-global-data:library-mysql
      version: 0.3.8
      answers:
        persistence.enabled: "false"
      name: testmysql
      targetNamespace: test
```

## Timeouts

By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted.

### Configuring Timeouts by UI

Enter a new value in the **Timeout** field.

### Configuring Timeouts by YAML

In the `timeout` section, enter the timeout value in minutes.

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
# timeout in minutes
timeout: 30
```

## Notifications

You can enable notifications to any notifiers based on the build status of a pipeline. Before enabling notifications, Rancher recommends setting up notifiers so it will be easy to add recipients immediately.

### Configuring Notifications by UI

1. In the **Notification** section, turn on notifications by clicking **Enable**.

1. Select the conditions for the notification. You can choose to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.

1. If you don't have any existing notifiers, Rancher will display a warning that no notifiers are set up and provide a link to the notifiers page. Follow the [instructions](../../../versioned_docs/version-2.0-2.4/explanations/integrations-in-rancher/notifiers.md) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.

:::note

Notifiers are configured at a cluster level and require a different level of permissions.

:::

1. For each recipient, select the notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override it with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**.

### Configuring Notifications by YAML

In the `notification` section, you will provide the following information:

* **Recipients:** The list of notifiers/recipients that will receive the notification.
* **Notifier:** The ID of the notifier. This can be found by finding the notifier and selecting **View in API** to get the ID.
* **Recipient:** Depending on the type of the notifier, the default recipient can be used, or you can override it with a different recipient. For example, when configuring a Slack notifier, you select a channel as your default recipient, but if you want to send notifications to a different channel, you can select a different recipient.
* **Condition:** Select the statuses for which you want the notification to be sent.
* **Message (Optional):** If you want to change the default notification message, you can edit this in the YAML. Note: This option is not available in the UI.

```yaml
# Example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
notification:
  recipients:
  - # Recipient
    recipient: "#mychannel"
    # ID of Notifier
    notifier: "c-wdcsr:n-c9pg7"
  - recipient: "test@example.com"
    notifier: "c-wdcsr:n-lkrhd"
  # Select which statuses you want the notification to be sent
  condition: ["Failed", "Success", "Changed"]
  # Ability to override the default message (Optional)
  message: "my-message"
```

## Triggers and Trigger Rules

After you configure a pipeline, you can trigger it using different methods:

- **Manually:**

  After you configure a pipeline, you can trigger a build using the latest CI definition from the Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then removes it upon completion.

- **Automatically:**

  When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo by pushing code, opening pull requests, or creating a tag, the version control system sends a webhook to the Rancher server, triggering a pipeline execution.

  To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission are shown.

Trigger rules can be created for fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types:

- **Run this when:** This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs.

- **Do Not Run this when:** This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs.

If all conditions evaluate to `true`, then the pipeline, stage, or step is executed. Otherwise it is skipped. When a pipeline is skipped, none of the pipeline is executed. When a stage or step is skipped, it is considered successful and follow-up stages and steps continue to run.

Wildcard character (`*`) expansion is supported in `branch` conditions.
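
As a hedged sketch of that wildcard expansion, a stage condition might look like this (the branch pattern and script are illustrative, not from the docs):

```yaml
# Hypothetical sketch: wildcard branch matching in a stage condition
stages:
- name: Build release
  when:
    # matches release/1.0, release/2.1, and so on
    branch: release/*
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
```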

### Configuring Pipeline Triggers

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.
1. Click **Show Advanced Options**.
1. In the **Trigger Rules** section, configure rules to run or skip the pipeline.

1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline.

1. **Optional:** Add more branches that trigger a build.

1. Click **Done**.

### Configuring Stage Triggers

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.
1. Find the **stage** for which you want to manage trigger rules, and click the **Edit** icon for that stage.
1. Click **Show advanced options**.
1. In the **Trigger Rules** section, configure rules to run or skip the stage.

1. Click **Add Rule**.

1. Choose the **Type** that triggers the stage and enter a value.

   | Type   | Value |
   | ------ | -------------------------------------------------------------------- |
   | Branch | The name of the branch that triggers the stage. |
   | Event  | The type of event that triggers the stage. Values are: `Push`, `Pull Request`, `Tag` |

1. Click **Save**.

### Configuring Step Triggers

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.
1. Find the **step** for which you want to manage trigger rules, and click the **Edit** icon for that step.
1. Click **Show advanced options**.
1. In the **Trigger Rules** section, configure rules to run or skip the step.

1. Click **Add Rule**.

1. Choose the **Type** that triggers the step and enter a value.

   | Type   | Value |
   | ------ | -------------------------------------------------------------------- |
   | Branch | The name of the branch that triggers the step. |
   | Event  | The type of event that triggers the step. Values are: `Push`, `Pull Request`, `Tag` |

1. Click **Save**.

### Configuring Triggers by YAML

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
# branch conditions for the pipeline
branch:
  include: [ master, feature/*]
  exclude: [ dev ]
```

## Environment Variables

When configuring a pipeline, certain [step types](#step-types) allow you to use environment variables to configure the step's script.

### Configuring Environment Variables by UI

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.
1. Within one of the stages, find the **step** to which you want to add an environment variable, and click the **Edit** icon.
1. Click **Show advanced options**.
1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed.
1. Add your environment variable(s) into either the script or file.
1. Click **Save**.

### Configuring Environment Variables by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY}
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
```

## Secrets

If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass it in using Kubernetes [secrets](../../how-to-guides/new-user-guides/kubernetes-resources-setup/secrets.md).

### Prerequisite

Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.

:::note

Secret injection is disabled on [pull request events](#triggers-and-trigger-rules).

:::

### Configuring Secrets by UI

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.
1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.
1. Within one of the stages, find the **step** for which you want to use a secret, and click the **Edit** icon.
1. Click **Show advanced options**.
1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key.
1. Click **Save**.

### Configuring Secrets by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${ALIAS_ENV}
    # environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
```
|
||||
|
||||
## Pipeline Variable Substitution Reference

For your convenience, the following variables are available in your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`.

Variable Name               | Description
----------------------------|------------------------------------------------------------
`CICD_GIT_REPO_NAME`        | Repository name (GitHub organization omitted).
`CICD_GIT_URL`              | URL of the Git repository.
`CICD_GIT_COMMIT`           | Git commit ID being executed.
`CICD_GIT_BRANCH`           | Git branch of this event.
`CICD_GIT_REF`              | Git reference specification of this event.
`CICD_GIT_TAG`              | Git tag name, set on tag events.
`CICD_EVENT`                | Event that triggered the build (`push`, `pull_request`, or `tag`).
`CICD_PIPELINE_ID`          | Rancher ID for the pipeline.
`CICD_EXECUTION_SEQUENCE`   | Build number of the pipeline.
`CICD_EXECUTION_ID`         | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`.
`CICD_REGISTRY`             | Address of the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step.
`CICD_IMAGE`                | Name of the image built in the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.<br/><br/> [Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml)

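As a sketch of how these substitutions might be used (the image name, repository, and script below are illustrative, not from the source), a step could tag a published image with the triggering branch and echo build metadata:

```yaml
# Hypothetical example: referencing substitution variables in steps.
stages:
- name: Publish
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      # tag the image with the branch that triggered this build
      tag: repo/app:${CICD_GIT_BRANCH}
- name: Report
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo "build ${CICD_EXECUTION_SEQUENCE} for commit ${CICD_GIT_COMMIT}"
```
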
## Global Pipeline Execution Settings

After configuring a version control provider, there are several options governing how pipelines are executed that can be configured globally in Rancher.

### Changing Pipeline Settings

:::note Prerequisite:

Because the pipelines app was deprecated in favor of Fleet, you will need to turn on the feature flag for legacy features before using pipelines. Note that pipelines in Kubernetes 1.21+ are no longer supported.

1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**.
1. Go to the `legacy` feature flag and click **⋮ > Activate**.

:::

To edit these settings:

1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
1. In the dropdown menu in the top navigation bar, select the project where you want to configure pipelines.
1. In the left navigation bar, click **Legacy > Project > Pipelines**.

- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
- [Custom CA](#custom-ca)

### Executor Quota

Select the maximum number of pipeline executors. The _executor quota_ determines how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds queue until a vacancy opens. By default, the quota is `2`. A value of `0` or less removes the quota limit.

### Resource Quota for Executors

Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod.

Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**.

You can also configure compute resources for pipeline-step containers in the `.rancher-pipeline.yml` file. In a step, you can provide the following information:

* **CPU Reservation (`CpuRequest`)**: CPU request for the container of a pipeline step.
* **CPU Limit (`CpuLimit`)**: CPU limit for the container of a pipeline step.
* **Memory Reservation (`MemoryRequest`)**: Memory request for the container of a pipeline step.
* **Memory Limit (`MemoryLimit`)**: Memory limit for the container of a pipeline step.

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
    cpuRequest: 100m
    cpuLimit: 1
    memoryRequest: 100Mi
    memoryLimit: 1Gi
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
    cpuRequest: 100m
    cpuLimit: 1
    memoryRequest: 100Mi
    memoryLimit: 1Gi
```

:::note

Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.

:::

### Custom CA

If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.

1. Click **Edit cacerts**.

1. Paste in the CA root certificates and click **Save cacerts**.

**Result:** Pipelines can be used, and new pods will be able to work with the self-signed certificate.

## Persistent Data for Pipeline Components

The internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

For details on setting up persistent storage for pipelines, refer to [this page.](configure-persistent-data.md)

## Example rancher-pipeline.yml

An example pipeline configuration file is on [this page.](example-yaml.md)

@@ -38,7 +38,6 @@ The Rancher API server is built on top of an embedded Kubernetes API server and

- **Provisioning Kubernetes clusters:** The Rancher API server can [provision Kubernetes](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) on existing nodes, or perform [Kubernetes upgrades.](../installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)
- **Catalog management:** Rancher provides the ability to use a [catalog of Helm charts](../../pages-for-subheaders/helm-charts-in-rancher.md) that make it easy to repeatedly deploy applications.
- **Managing projects:** A project is a group of multiple namespaces and access control policies within a cluster. A project is a Rancher concept, not a Kubernetes concept, which allows you to manage multiple namespaces as a group and perform Kubernetes operations in them. The Rancher UI provides features for [project administration](../../pages-for-subheaders/manage-projects.md) and for [managing applications within projects.](../../pages-for-subheaders/kubernetes-resources-setup.md)
- **Pipelines:** Setting up a [pipeline](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md) can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects.
- **Istio:** Our [integration with Istio](../../pages-for-subheaders/istio.md) is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing.

### Working with Cloud Infrastructure

@@ -1,16 +0,0 @@

---
title: Rancher's CI/CD Pipelines
description: Use Rancher’s CI/CD pipeline to automatically checkout code, run builds or scripts, publish Docker images, and deploy software to users
---

Using Rancher, you can integrate with a GitHub repository to set up a continuous integration (CI) pipeline.

After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate a pipeline execution:

- Build your application from code to image.
- Validate your builds.
- Deploy your build images to your cluster.
- Run unit tests.
- Run regression tests.

For details, refer to the [pipelines](../../../pages-for-subheaders/pipelines.md) section.

@@ -164,7 +164,6 @@ After registering a cluster, the cluster owner can:

- Enable [monitoring, alerts and notifiers](../../../pages-for-subheaders/monitoring-and-alerting.md)
- Enable [logging](../../../pages-for-subheaders/logging.md)
- Enable [Istio](../../../pages-for-subheaders/istio.md)
- Use [pipelines](../../advanced-user-guides/manage-projects/ci-cd-pipelines.md)
- Manage projects and workloads

<a id="2-5-8-additional-features-for-registered-k3s-clusters"></a>

@@ -204,7 +203,6 @@ After registering a cluster, the cluster owner can:

- Enable [monitoring, alerts and notifiers](../../../pages-for-subheaders/monitoring-and-alerting.md)
- Enable [logging](../../../pages-for-subheaders/logging.md)
- Enable [Istio](../../../pages-for-subheaders/istio.md)
- Use [pipelines](../../advanced-user-guides/manage-projects/ci-cd-pipelines.md)
- Manage projects and workloads

<a id="before-2-5-8-additional-features-for-registered-k3s-clusters"></a>

@@ -47,11 +47,6 @@ After you expose your cluster to external requests using a load balancer and/or

For more information, see [Service Discovery](../how-to-guides/new-user-guides/kubernetes-resources-setup/create-services.md).

## Pipelines

After your project has been [configured to a version control provider](../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md#1-configure-version-control-providers), you can add the repositories and start configuring a pipeline for each repository.

For more information, see [Pipelines](./pipelines.md).

## Applications

@@ -20,7 +20,6 @@ You can use projects to perform actions like:

- [Set resource quotas](manage-project-resource-quotas.md)
- [Manage namespaces](../how-to-guides/advanced-user-guides/manage-projects/manage-namespaces.md)
- [Configure tools](../reference-guides/rancher-project-tools.md)
- [Set up pipelines for continuous integration and deployment](../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md)
- [Configure pod security policies](../how-to-guides/advanced-user-guides/manage-projects/manage-pod-security-policies.md)

### Authorization

@@ -1,258 +0,0 @@

---
title: Pipelines
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

> As of Rancher v2.5, Git-based deployment pipelines are now deprecated. We recommend handling pipelines with Rancher Continuous Delivery powered by [Fleet](../how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet.md), available in Cluster Explorer.
>
>**Notice:** Fleet does not replace Rancher pipelines; the distinction is that Rancher pipelines are now powered by Fleet.

Rancher's pipeline provides a simple CI/CD experience. Use it to automatically check out code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.

Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Using Rancher, you can integrate with a GitHub repository to set up a continuous integration (CI) pipeline.

After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate a pipeline execution:

- Build your application from code to image.
- Validate your builds.
- Deploy your build images to your cluster.
- Run unit tests.
- Run regression tests.

>**Note:** Rancher's pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of, and is not a replacement for, enterprise-grade Jenkins or other CI tools your team uses.

# Concepts

For an explanation of concepts and terminology used in this section, refer to [this page.](../reference-guides/pipelines/concepts.md)

# How Pipelines Work

After enabling the ability to use pipelines in a project, you can configure multiple pipelines in each project. Each pipeline is unique and can be configured independently.

A pipeline is configured from a group of files that are checked into source code repositories. Users can configure their pipelines either through the Rancher UI or by adding a `.rancher-pipeline.yml` file to the repository.

Before pipelines can be configured, you will need to configure authentication to your version control provider, e.g. GitHub, GitLab, or Bitbucket. If you haven't configured a version control provider, you can always use [Rancher's example repositories](../reference-guides/pipelines/example-repositories.md) to view some common pipeline deployments.

When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. The following components are deployed to it:

- **Jenkins:**

  The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked.

  >**Note:** There is no option to use existing Jenkins deployments as the pipeline engine.

- **Docker Registry:**

  Out-of-the-box, the default target for your build-publish step is an internal Docker Registry. However, you can make configurations to push to a remote registry instead. The internal Docker Registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, please push to an external registry.

- **Minio:**

  Minio storage is used to store the logs for pipeline executions.

>**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistence. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistence for pipeline components](../reference-guides/pipelines/configure-persistent-data.md).

# Roles-based Access Control for Pipelines

If you can access a project, you can enable repositories to start building pipelines.

Only [administrators](../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md), [cluster owners or members](../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#cluster-roles), or [project owners](../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles) can configure version control providers and manage global pipeline execution settings.

Project members can only configure repositories and pipelines.

# Setting up Pipelines

To set up pipelines, you will need to do the following:

1. [Configure version control providers](#1-configure-version-control-providers)
2. [Configure repositories](#2-configure-repositories)
3. [Configure the pipeline](#3-configure-the-pipeline)

### 1. Configure Version Control Providers

Before you can start configuring a pipeline for your repository, you must configure and authorize a version control provider:

- GitHub
- GitLab
- Bitbucket

Select your provider's tab below and follow the directions.

<Tabs>
<TabItem value="GitHub">

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Select **Tools > Pipelines** in the navigation bar.

1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to GitHub to set up an OAuth App in GitHub.

1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.

1. If you're using GitHub Enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation.

1. Click **Authenticate**.

</TabItem>
<TabItem value="GitLab">

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Select **Tools > Pipelines** in the navigation bar.

1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab.

1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher.

1. If you're using a GitLab enterprise setup, select **Use a private gitlab enterprise installation**. Enter the host address of your GitLab installation.

1. Click **Authenticate**.

>**Note:**
> 1. Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html); the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in the GitLab admin settings.

</TabItem>
<TabItem value="Bitbucket Cloud">

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Select **Tools > Pipelines** in the navigation bar.

1. Choose the **Use public Bitbucket Cloud** option.

1. Follow the directions displayed to **Setup a Bitbucket Cloud application**. Rancher redirects you to Bitbucket to set up an OAuth consumer in Bitbucket.

1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher.

1. Click **Authenticate**.

</TabItem>
<TabItem value="Bitbucket Server">

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Select **Tools > Pipelines** in the navigation bar.

1. Choose the **Use private Bitbucket Server setup** option.

1. Follow the directions displayed to **Setup a Bitbucket Server application**.

1. Enter the host address of your Bitbucket Server installation.

1. Click **Authenticate**.

>**Note:**
> Bitbucket Server needs to do SSL verification when sending webhooks to Rancher. Please ensure that the Rancher server's certificate is trusted by the Bitbucket Server. There are two options:
>
> 1. Set up the Rancher server with a certificate from a trusted CA.
> 1. If you're using self-signed certificates, import the Rancher server's certificate into the Bitbucket Server. For instructions, see the Bitbucket Server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).

</TabItem>
</Tabs>

**Result:** After the version control provider is authenticated, you will be automatically redirected to start configuring which repositories you want to start using with pipelines.

### 2. Configure Repositories

After the version control provider is authorized, you are automatically redirected to start configuring the repositories that you want to use pipelines with. Even if someone else has set up the version control provider, you will see their repositories and can build a pipeline.

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Click **Resources > Pipelines.**

1. Click on **Configure Repositories**.

1. A list of repositories is displayed. If you are configuring repositories for the first time, click on **Authorize & Fetch Your Own Repositories** to fetch your repository list.

1. For each repository that you want to set up a pipeline for, click on **Enable**.

1. When you're done enabling all your repositories, click on **Done**.

**Results:** You have a list of repositories that you can start configuring pipelines for.

### 3. Configure the Pipeline

Now that repositories are added to your project, you can start configuring the pipeline by adding automated stages and steps. For your convenience, there are multiple built-in step types for dedicated tasks.

1. From the **Global** view, navigate to the project that you want to configure pipelines for.

1. Click **Resources > Pipelines.**

1. Find the repository that you want to set up a pipeline for.

1. Configure the pipeline through the UI or using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Pipeline configuration is split into stages and steps. Stages must fully complete before moving on to the next stage, but steps in a stage run concurrently. For each stage, you can add different step types. Note: As you build out each step, there are different advanced options based on the step type. Advanced options include trigger rules, environment variables, and secrets. For more information on configuring the pipeline through the UI or the YAML file, refer to the [pipeline configuration reference.](../reference-guides/pipelines/pipeline-configuration.md)

    * If you are going to use the UI, select the vertical **⋮ > Edit Config** to configure the pipeline using the UI. After the pipeline is configured, you must view the YAML file and push it to the repository.
    * If you are going to use the YAML file, select the vertical **⋮ > View/Edit YAML** to configure the pipeline. If you choose to use a YAML file, you need to push it to the repository after any changes in order for it to be updated in the repository. When editing the pipeline configuration, it takes a few moments for Rancher to check for an existing pipeline configuration.

1. Select which `branch` to use from the list of branches.

1. Optional: Set up notifications.

1. Set up the trigger rules for the pipeline.

1. Enter a **Timeout** for the pipeline.

1. When all the stages and steps are configured, click **Done**.

**Results:** Your pipeline is now configured and ready to be run.

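The stage and step layout described in the steps above can be sketched as a minimal `.rancher-pipeline.yml` (the images and shell scripts are illustrative placeholders): stages run in order, while the two steps in the second stage run concurrently.

```yaml
# Hypothetical minimal .rancher-pipeline.yml
stages:
- name: Build            # runs first
  steps:
  - runScriptConfig:
      image: golang:1.17
      shellScript: go build ./...
- name: Test             # starts only after Build fully completes
  steps:
  - runScriptConfig:     # these two steps run concurrently
      image: golang:1.17
      shellScript: go test ./...
  - runScriptConfig:
      image: golang:1.17
      shellScript: go vet ./...
```
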
# Pipeline Configuration Reference

Refer to [this page](../reference-guides/pipelines/pipeline-configuration.md) for details on how to configure a pipeline to:

- Run a script
- Build and publish images
- Publish catalog templates
- Deploy YAML
- Deploy a catalog app

The configuration reference also covers how to configure:

- Notifications
- Timeouts
- The rules that trigger a pipeline
- Environment variables
- Secrets

# Running your Pipelines

Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** Find your pipeline and select the vertical **⋮ > Run**.

During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:

- `docker-registry`
- `jenkins`
- `minio`

This process takes several minutes. When it completes, you can view each pipeline component from the project **Workloads** tab.

# Triggering a Pipeline

When a repository is enabled, a webhook is automatically set in the version control provider. By default, the pipeline is triggered by a **push** event to the repository, but you can modify the event(s) that trigger running the pipeline.

Available Events:

* **Push**: Whenever a commit is pushed to the branch in the repository, the pipeline is triggered.
* **Pull Request**: Whenever a pull request is made to the repository, the pipeline is triggered.
* **Tag**: When a tag is created in the repository, the pipeline is triggered.

> **Note:** This option doesn't exist for Rancher's [example repositories](../reference-guides/pipelines/example-repositories.md).

### Modifying the Event Triggers for the Repository

1. From the **Global** view, navigate to the project in which you want to modify the event triggers for the pipeline.

1. Click **Resources > Pipelines.**

1. Find the repository for which you want to modify the event triggers. Select the vertical **⋮ > Setting**.

1. Select which event triggers (**Push**, **Pull Request** or **Tag**) you want for the repository.

1. Click **Save**.

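Trigger rules can also be expressed per stage in `.rancher-pipeline.yml` with a `when` clause; a sketch under the assumption that the branch name and step contents are placeholders:

```yaml
# Hypothetical example: event and branch trigger rules for a stage.
stages:
- name: Build something
  # run this stage only for pushes and pull requests on master
  when:
    branch: master
    event: [ push, pull_request ]
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo building
```
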
@@ -24,7 +24,6 @@ Here is the complete list of tokens that are generated with `ttl=0`:

| `agent-*` | Token for agent deployment |
| `compose-token-*` | Token for compose |
| `helm-token-*` | Token for Helm chart deployment |
| `*-pipeline*` | Pipeline token for project |
| `telemetry-*` | Telemetry token |
| `drain-node-*` | Token for drain (we use `kubectl` for drain because there is no native Kubernetes API) |

@@ -1,35 +0,0 @@

---
title: Concepts
---

The purpose of this page is to explain common concepts and terminology related to pipelines.

- **Pipeline:**

  A _pipeline_ is a software delivery process that is broken into different stages and steps. Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects. A pipeline is based on a specific repository. It defines the process to build, test, and deploy your code. Rancher uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model. Pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

- **Stages:**

  A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the previous stage finish without failure.

- **Steps:**

  A pipeline step is executed inside a specified stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates.

- **Workspace:**

  The workspace is the working directory shared by all pipeline steps. At the beginning of a pipeline, source code is checked out to the workspace. The command for every step bootstraps in the workspace. During a pipeline execution, the artifacts from a previous step are available in future steps. The working directory is an ephemeral volume and is cleaned out with the executor pod when a pipeline execution is finished.

Typically, pipeline stages include:

- **Build:**

  Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests.

- **Publish:**

  After the build is completed, either a Docker image is built and published to a Docker registry, or a catalog template is published.

- **Deploy:**

  After the artifacts are published, you release your application so users can start using the updated product.

@@ -1,90 +0,0 @@
|
||||
---
|
||||
title: Configuring Persistent Data for Pipeline Components
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
The pipelines' internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
|
||||
|
||||
This section assumes that you understand how persistent storage works in Kubernetes. For more information, refer to the section on [how storage works.](../../how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage.md)
|
||||
|
||||
>**Prerequisites (for both parts A and B):**
|
||||
>
|
||||
>[Persistent volumes](../../pages-for-subheaders/create-kubernetes-persistent-storage.md) must be available for the cluster.
|
||||
|
||||
### A. Configuring Persistent Data for Docker Registry

1. From the project that you're configuring a pipeline for, click **Resources > Workloads**.

1. Find the `docker-registry` workload and select **⋮ > Edit**.

1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:

   - **Add Volume > Add a new persistent volume (claim)**
   - **Add Volume > Use an existing persistent volume (claim)**

1. Complete the form that displays to choose a persistent volume for the internal Docker registry.

<Tabs>
<TabItem value="Add a new persistent volume">

1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
   - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
   - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.

</TabItem>
<TabItem value="Use an existing persistent volume">

1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.

</TabItem>
</Tabs>

4. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.

5. Click **Upgrade**.
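Behind the scenes, the UI edit above amounts to adding a volume and a volume mount to the `docker-registry` workload spec. A rough sketch of the relevant fragment (the claim name `registry-pvc` and volume name are placeholders, not values Rancher generates):

```yaml
# Hypothetical pod spec fragment after the edit; only volume-related fields shown.
spec:
  containers:
    - name: registry
      volumeMounts:
        - name: registry-data
          mountPath: /var/lib/registry   # data storage path inside the registry container
  volumes:
    - name: registry-data
      persistentVolumeClaim:
        claimName: registry-pvc          # the claim chosen or created in step 4
```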

### B. Configuring Persistent Data for Minio

1. From the project view, click **Resources > Workloads**. Find the `minio` workload and select **⋮ > Edit**.

1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:

   - **Add Volume > Add a new persistent volume (claim)**
   - **Add Volume > Use an existing persistent volume (claim)**

1. Complete the form that displays to choose a persistent volume for Minio.

<Tabs>
<TabItem value="Add a new persistent volume">

1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
   - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
   - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.

</TabItem>
<TabItem value="Use an existing persistent volume">

1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.

</TabItem>
</Tabs>

1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.

1. Click **Upgrade**.

**Result:** Persistent storage is configured for your pipeline components.

---
title: Example Repositories
---

Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most closely resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repository configuration, build demonstrations, and so on. Rancher includes example repositories for:

- Go
- Maven
- PHP

> **Note:** The example repositories are only available if you have not [configured a version control provider](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md).

To start using these example repositories,

1. [Enable the example repositories](#1-enable-the-example-repositories)
2. [View the example pipeline](#2-view-the-example-pipeline)
3. [Run the example pipeline](#3-run-the-example-pipeline)

### 1. Enable the Example Repositories

By default, the example pipeline repositories are disabled. Enable one (or more) to test out the pipeline feature and see how it works.

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

1. Click **Resources > Pipelines.**

1. Click **Configure Repositories**.

   **Step Result:** A list of example repositories displays.

   >**Note:** Example repositories only display if you haven't fetched your own repos.

1. Click **Enable** for one of the example repos (e.g., `https://github.com/rancher/pipeline-example-go.git`). Then click **Done**.

**Results:**

- The example repository is enabled to work with a pipeline and is available in the **Pipeline** tab.

- The following workloads are deployed to a new namespace:

  - `docker-registry`
  - `jenkins`
  - `minio`

### 2. View the Example Pipeline

After enabling an example repository, review the pipeline to see how it is set up.

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

1. Click **Resources > Pipelines.**

1. Find the example repository and select the vertical **⋮** menu. There are two ways to view the pipeline:
   * **Rancher UI**: Click **Edit Config** to view the stages and steps of the pipeline.
   * **YAML**: Click **View/Edit YAML** to view the `./rancher-pipeline.yml` file.

### 3. Run the Example Pipeline

After enabling an example repository, run the pipeline to see how it works.

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

1. Click **Resources > Pipelines.**

1. Find the example repository and select the vertical **⋮ > Run**.

   >**Note:** When you run a pipeline the first time, it takes a few minutes to pull relevant images and provision the necessary pipeline components.

**Result:** The pipeline runs. You can see the results in the logs.

### What's Next?

For detailed information about setting up your own pipeline for your repository, [configure a version control provider](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md), enable a repository, and finally configure your pipeline.

---
title: Example YAML File
---

Pipelines can be configured either through the UI or by using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

In the [pipeline configuration reference](pipeline-configuration.md), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.

Below is a full example `rancher-pipeline.yml` for those who want to jump right in.

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${ALIAS_ENV}
    # Set environment variables in container for the step
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
    # Set environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
- name: Publish my image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: rancher/rancher:v2.0.0
      # Optionally push to remote registry
      pushRemote: true
      registry: reg.example.com
- name: Deploy some workloads
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
# branch conditions for the pipeline
branch:
  include: [ master, feature/* ]
  exclude: [ dev ]
# timeout in minutes
timeout: 30
notification:
  recipients:
  - # Recipient
    recipient: "#mychannel"
    # ID of Notifier
    notifier: "c-wdcsr:n-c9pg7"
  - recipient: "test@example.com"
    notifier: "c-wdcsr:n-lkrhd"
  # Select which statuses you want the notification to be sent for
  condition: ["Failed", "Success", "Changed"]
  # Ability to override the default message (Optional)
  message: "my-message"
```

---
title: Pipeline Configuration Reference
---

In this section, you'll learn how to configure pipelines.

## Step Types

Within each stage, you can add as many steps as you'd like. When a stage contains multiple steps, they run concurrently.
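For example, a stage with two concurrent steps might look like the following sketch in `.rancher-pipeline.yml` (the images and commands are illustrative placeholders):

```yaml
# Both steps in this stage start at the same time;
# the stage finishes when both steps complete.
stages:
- name: Test in parallel
  steps:
  - runScriptConfig:
      image: golang
      shellScript: go test ./...
  - runScriptConfig:
      image: golang
      shellScript: go vet ./...
```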

Step types include:

- [Run Script](#step-type-run-script)
- [Build and Publish Images](#step-type-build-and-publish-images)
- [Publish Catalog Template](#step-type-publish-catalog-template)
- [Deploy YAML](#step-type-deploy-yaml)
- [Deploy Catalog App](#step-type-deploy-catalog-app)

<!--
### Clone

The first stage is reserved for a cloning step that checks out source code from your repo. Rancher handles the cloning of the git repository. This action is equivalent to `git clone <repository_link> <workspace_dir>`.
-->

### Configuring Steps by UI

If you haven't added any stages, click **Configure pipeline for this branch** to configure the pipeline through the UI.

1. Add stages to your pipeline execution by clicking **Add Stage**.

1. Enter a **Name** for each stage of your pipeline.
1. For each stage, you can configure [trigger rules](#triggers-and-trigger-rules) by clicking **Show Advanced Options**. Note: these rules can always be updated at a later time.

1. After you've created a stage, start [adding steps](#step-types) by clicking **Add a Step**. You can add multiple steps to each stage.

### Configuring Steps by YAML

For each stage, you can add multiple steps. Read more about each [step type](#step-types) and the advanced options for all the details on how to configure the YAML. The following is a small example of multiple stages with a single step in each stage.

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
- name: Publish my image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: rancher/rancher:v2.0.0
      # Optionally push to remote registry
      pushRemote: true
      registry: reg.example.com
```

## Step Type: Run Script

The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test, and do more, given whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.

### Configuring Script by UI

1. From the **Step Type** drop-down, choose **Run Script** and fill in the form.

1. Click **Add**.

### Configuring Script by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: golang
      shellScript: go build
```

## Step Type: Build and Publish Images

The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully.

The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely.

### Configuring Building and Publishing Images by UI

1. From the **Step Type** drop-down, choose **Build and Publish**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). |
Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. |
Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. |
Build Context <br/><br/> (**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/).

### Configuring Building and Publishing Images by YAML

You can use specific arguments for the Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example below. Available environment variables include:

Variable Name | Description
------------------------|------------------------------------------------------------
PLUGIN_DRY_RUN | Disable docker push
PLUGIN_DEBUG | Docker daemon executes in debug mode
PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list

<br/>

```yaml
# This example shows an environment variable being used
# in the Publish Image step. This variable allows you to
# publish an image to an insecure registry:

stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
      pushRemote: true
      registry: example.com
    env:
      PLUGIN_INSECURE: "true"
```

## Step Type: Publish Catalog Template

The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a git hosted chart repository. It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) are supported in any file in the chart folder.

### Configuring Publishing a Catalog Template by UI

1. From the **Step Type** drop-down, choose **Publish Catalog Template**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Chart Folder | The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. |
Catalog Template Name | The name of the template. For example, wordpress. |
Catalog Template Version | The version of the template you want to publish; it must be consistent with the version defined in the `Chart.yaml` file. |
Protocol | You can choose to publish via HTTP(S) or SSH protocol. |
Secret | The secret that stores your Git credentials. You need to create a secret in the dedicated pipeline namespace in the project before adding this step. If you use the HTTP(S) protocol, store the Git username and password in the `USERNAME` and `PASSWORD` keys of the secret. If you use the SSH protocol, store the Git deploy key in the `DEPLOY_KEY` key of the secret. After the secret is created, select it in this option. |
Git URL | The Git URL of the chart repository that the template will be published to. |
Git Branch | The Git branch of the chart repository that the template will be published to. |
Author Name | The author name used in the commit message. |
Author Email | The author email used in the commit message. |

### Configuring Publishing a Catalog Template by YAML

You can add **Publish Catalog Template** steps directly in the `.rancher-pipeline.yml` file.

Under the `steps` section, add a step with `publishCatalogConfig`. You will provide the following information:

* Path: The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located.
* CatalogTemplate: The name of the template.
* Version: The version of the template you want to publish; it must be consistent with the version defined in the `Chart.yaml` file.
* GitUrl: The git URL of the chart repository that the template will be published to.
* GitBranch: The git branch of the chart repository that the template will be published to.
* GitAuthor: The author name used in the commit message.
* GitEmail: The author email used in the commit message.
* Credentials: You should provide Git credentials by referencing secrets in the dedicated pipeline namespace. If you publish via the SSH protocol, inject your deploy key into the `DEPLOY_KEY` environment variable. If you publish via the HTTP(S) protocol, inject your username and password into the `USERNAME` and `PASSWORD` environment variables.

```yaml
# example
stages:
- name: Publish Wordpress Template
  steps:
  - publishCatalogConfig:
      path: ./charts/wordpress/latest
      catalogTemplate: wordpress
      version: ${CICD_GIT_TAG}
      gitUrl: git@github.com:myrepo/charts.git
      gitBranch: master
      gitAuthor: example-user
      gitEmail: user@example.com
    envFrom:
    - sourceName: publish-keys
      sourceKey: DEPLOY_KEY
```

## Step Type: Deploy YAML

This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file on [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.

### Configure Deploying YAML by UI

1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form.

1. Enter the **YAML Path**, which is the path to the manifest file in the source code.

1. Click **Add**.

### Configure Deploying YAML by YAML

```yaml
# example
stages:
- name: Deploy
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
```

## Step Type: Deploy Catalog App

The **Deploy Catalog App** step deploys a catalog app in the project. It installs a new app if it is not present, or upgrades an existing one.

### Configure Deploying Catalog App by UI

1. From the **Step Type** drop-down, choose **Deploy Catalog App**.

1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.

Field | Description |
---------|----------|
Catalog | The catalog from which the app template will be used. |
Template Name | The name of the app template. For example, wordpress. |
Template Version | The version of the app template you want to deploy. |
Namespace | The target namespace where you want to deploy the app. |
App Name | The name of the app you want to deploy. |
Answers | Key-value pairs of answers used to deploy the app. |

### Configure Deploying Catalog App by YAML

You can add **Deploy Catalog App** steps directly in the `.rancher-pipeline.yml` file.

Under the `steps` section, add a step with `applyAppConfig`. You will provide the following information:

* CatalogTemplate: The ID of the template. This can be found by clicking `Launch app` and selecting `View details` for the app; it is the last part of the URL.
* Version: The version of the template you want to deploy.
* Answers: Key-value pairs of answers used to deploy the app.
* Name: The name of the app you want to deploy.
* TargetNamespace: The target namespace where you want to deploy the app.

```yaml
# example
stages:
- name: Deploy App
  steps:
  - applyAppConfig:
      catalogTemplate: cattle-global-data:library-mysql
      version: 0.3.8
      answers:
        persistence.enabled: "false"
      name: testmysql
      targetNamespace: test
```

## Timeouts

By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted.

### Configuring Timeouts by UI

Enter a new value in the **Timeout** field.

### Configuring Timeouts by YAML

In the `timeout` section, enter the timeout value in minutes.

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
# timeout in minutes
timeout: 30
```

## Notifications

You can enable notifications to any notifiers based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers](../monitoring-v2-configuration/receivers.md) so it will be easy to add recipients immediately.

### Configuring Notifications by UI

1. Within the **Notification** section, turn on notifications by clicking **Enable**.

1. Select the conditions for the notification. You can choose to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.

1. If you don't have any existing notifiers, Rancher will provide a warning that no notifiers are set up and provide a link to the notifiers page. Follow the [instructions](../../reference-guides/monitoring-v2-configuration/receivers.md) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.

   > **Note:** Notifiers are configured at a cluster level and require a different level of permissions.

1. For each recipient, select the notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override it with a different one. For example, if you have a notifier for Slack, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**.

### Configuring Notifications by YAML

In the `notification` section, you will provide the following information:

* **Recipients:** The list of notifiers/recipients that will receive the notification.
* **Notifier:** The ID of the notifier. This can be found by finding the notifier and selecting **View in API** to get the ID.
* **Recipient:** Depending on the type of the notifier, the default recipient can be used, or you can override it with a different recipient. For example, when configuring a Slack notifier, you select a channel as your default recipient, but if you want to send notifications to a different channel, you can select a different recipient.
* **Condition:** Select the statuses for which you want the notification to be sent.
* **Message (Optional):** If you want to change the default notification message, you can edit this in the YAML. Note: This option is not available in the UI.

```yaml
# Example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
notification:
  recipients:
  - # Recipient
    recipient: "#mychannel"
    # ID of Notifier
    notifier: "c-wdcsr:n-c9pg7"
  - recipient: "test@example.com"
    notifier: "c-wdcsr:n-lkrhd"
  # Select which statuses you want the notification to be sent for
  condition: ["Failed", "Success", "Changed"]
  # Ability to override the default message (Optional)
  message: "my-message"
```

## Triggers and Trigger Rules

After you configure a pipeline, you can trigger it using different methods:

- **Manually:**

  After you configure a pipeline, you can trigger a build using the latest CI definition from the Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then removes it upon completion.

- **Automatically:**

  When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo by pushing code, opening pull requests, or creating a tag, the version control system sends a webhook to the Rancher server, triggering a pipeline execution.

  To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission are shown.

Trigger rules can be created for fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types:

- **Run this when:** This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs.

- **Do Not Run this when:** This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs.

If all conditions evaluate to `true`, then the pipeline/stage/step is executed. Otherwise it is skipped. When a pipeline is skipped, none of the pipeline is executed. When a stage/step is skipped, it is considered successful and follow-up stages/steps continue to run.

Wildcard character (`*`) expansion is supported in `branch` conditions.

### Configuring Pipeline Triggers

1. From the **Global** view, navigate to the project for which you want to configure a pipeline trigger rule.

1. Click **Resources > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

1. Click **Show Advanced Options**.

1. In the **Trigger Rules** section, configure rules to run or skip the pipeline.

1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline.

1. **Optional:** Add more branches that trigger a build.

1. Click **Done.**

### Configuring Stage Triggers

1. From the **Global** view, navigate to the project for which you want to configure a stage trigger rule.

1. Click **Resources > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

1. Find the **stage** whose trigger rules you want to manage, and click the **Edit** icon for that stage.

1. Click **Show advanced options**.

1. In the **Trigger Rules** section, configure rules to run or skip the stage.

1. Click **Add Rule**.

1. Choose the **Type** that triggers the stage and enter a value.

   | Type   | Value                                                                 |
   | ------ | --------------------------------------------------------------------- |
   | Branch | The name of the branch that triggers the stage.                        |
   | Event  | The type of event that triggers the stage. Values are: `Push`, `Pull Request`, `Tag` |

1. Click **Save**.

### Configuring Step Triggers

1. From the **Global** view, navigate to the project for which you want to configure a step trigger rule.

1. Click **Resources > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

1. Find the **step** whose trigger rules you want to manage, and click the **Edit** icon for that step.

1. Click **Show advanced options**.

1. In the **Trigger Rules** section, configure rules to run or skip the step.

1. Click **Add Rule**.

1. Choose the **Type** that triggers the step and enter a value.

   | Type   | Value                                                                |
   | ------ | -------------------------------------------------------------------- |
   | Branch | The name of the branch that triggers the step.                        |
   | Event  | The type of event that triggers the step. Values are: `Push`, `Pull Request`, `Tag` |

1. Click **Save**.

### Configuring Triggers by YAML

```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
# branch conditions for the pipeline
branch:
  include: [ master, feature/* ]
  exclude: [ dev ]
```

## Environment Variables

When configuring a pipeline, certain [step types](#step-types) allow you to use environment variables to configure the step's script.

### Configuring Environment Variables by UI

1. From the **Global** view, navigate to the project in which you want to configure pipelines.

1. Click **Resources > Pipelines.**

1. From the pipeline that you want to edit, select **⋮ > Edit Config**.

1. Within one of the stages, find the **step** that you want to add an environment variable for, and click the **Edit** icon.

1. Click **Show advanced options**.

1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed.

1. Add your environment variable(s) into either the script or file.

1. Click **Save**.

### Configuring Environment Variables by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY}
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
```

## Secrets

If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass it in using Kubernetes [secrets](../../how-to-guides/new-user-guides/kubernetes-resources-setup/secrets.md).

### Prerequisite

Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.
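For example, a minimal secret for this purpose might look like the following. The name `my-secret` and key `secret-key` match the YAML example later in this section; the namespace is a placeholder for wherever your pipeline build pods run.

```yaml
# Hypothetical secret later consumed by a pipeline step via envFrom.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-pipeline-namespace   # placeholder: use the namespace where build pods run
type: Opaque
stringData:
  secret-key: my-secret-value        # referenced as sourceKey in the pipeline YAML
```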
|
||||
<br/>
|
||||
|
||||
>**Note:** Secret injection is disabled on [pull request events](#triggers-and-trigger-rules).
|
||||
|
||||
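
As a sketch of the prerequisite, a secret usable by a pipeline could be created with a manifest like the following. The name `my-secret` and key `secret-key` match the YAML example later in this section; the value shown is a placeholder and must be base64-encoded:

```yaml
# Hypothetical secret for pipeline use; name, key, and value are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  secret-key: dG9wLXNlY3JldA==   # base64 encoding of "top-secret"
```

Apply it in the same project (or the pipeline build namespace) so the pipeline build pods can reference it.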

### Configuring Secrets by UI

1. From the **Global** view, navigate to the project where you want to configure pipelines.

1. Click **Resources > Pipelines.**

1. From the pipeline that you want to configure, select **⋮ > Edit Config**.

1. Within one of the stages, find the **step** that you want to use a secret for, and click the **Edit** icon.

1. Click **Show advanced options**.

1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key.

1. Click **Save**.

### Configuring Secrets by YAML

```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${ALIAS_ENV}
    # environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
```

## Pipeline Variable Substitution Reference

For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`.

Variable Name | Description
------------------------|------------------------------------------------------------
`CICD_GIT_REPO_NAME` | Repository name (GitHub organization omitted).
`CICD_GIT_URL` | URL of the Git repository.
`CICD_GIT_COMMIT` | Git commit ID being executed.
`CICD_GIT_BRANCH` | Git branch of this event.
`CICD_GIT_REF` | Git reference specification of this event.
`CICD_GIT_TAG` | Git tag name, set on tag event.
`CICD_EVENT` | Event that triggered the build (`push`, `pull_request` or `tag`).
`CICD_PIPELINE_ID` | Rancher ID for the pipeline.
`CICD_EXECUTION_SEQUENCE` | Build number of the pipeline.
`CICD_EXECUTION_ID` | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`.
`CICD_REGISTRY` | Address of the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step.
`CICD_IMAGE` | Name of the image built from the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.<br/><br/> [Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml)
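
For example, these variables can be substituted into step configuration to make a build traceable. The sketch below (the image name `repo/app` is a placeholder) tags a published image with the branch and commit that triggered the build:

```yaml
# Hypothetical example; repo/app is a placeholder image name.
stages:
- name: Publish
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      # ${CICD_GIT_BRANCH} and ${CICD_GIT_COMMIT} are replaced at execution time
      tag: repo/app:${CICD_GIT_BRANCH}-${CICD_GIT_COMMIT}
```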

## Global Pipeline Execution Settings

After configuring a version control provider, there are several options that can be configured globally for how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar.

- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
- [Custom CA](#custom-ca)

### Executor Quota

Select the maximum number of pipeline executors. The _executor quota_ decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is `2`. A value of `0` or less removes the quota limit.

### Resource Quota for Executors

Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod.

Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**.

You can configure compute resources for pipeline-step containers in the `.rancher-pipeline.yml` file. In a step, you can provide the following information:

* **CPU Reservation (`CpuRequest`)**: CPU request for the container of a pipeline step.
* **CPU Limit (`CpuLimit`)**: CPU limit for the container of a pipeline step.
* **Memory Reservation (`MemoryRequest`)**: Memory request for the container of a pipeline step.
* **Memory Limit (`MemoryLimit`)**: Memory limit for the container of a pipeline step.
```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
    cpuRequest: 100m
    cpuLimit: 1
    memoryRequest: 100Mi
    memoryLimit: 1Gi
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
    cpuRequest: 100m
    cpuLimit: 1
    memoryRequest: 100Mi
    memoryLimit: 1Gi
```

>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.

### Custom CA

If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.

1. Click **Edit cacerts**.

1. Paste in the CA root certificates and click **Save cacerts**.

**Result:** Pipelines can be used, and new pods will be able to work with the self-signed certificate.

## Persistent Data for Pipeline Components

The internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

For details on setting up persistent storage for pipelines, refer to [this page.](configure-persistent-data.md)

## Example rancher-pipeline.yml

An example pipeline configuration file is on [this page.](example-yaml.md)