Catalog Entries

To manage Custom Resources from a ServiceCluster, we have to tell KubeCarrier how to find them and how we want to offer them to our users.

First we need some kind of CustomResourceDefinition or Operator installation in our ServiceCluster. To help you get started, we have a fictional example CRD that can be used without having to set up an Operator. Start by registering the CRD in the ServiceCluster:

Service Cluster

# make sure you are connected to the ServiceCluster
# that's `eu-west-1` if you followed our earlier guide.
$ kubectl config use-context kind-eu-west-1
Switched to context "kind-eu-west-1".

$ kubectl apply \
  -f couchdb.crd.yaml
customresourcedefinition.apiextensions.k8s.io/couchdbs.couchdb.io created

$ kubectl get crd
NAME                  CREATED AT
couchdbs.couchdb.io   2020-03-10T10:27:51Z
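The fictional CRD is not part of KubeCarrier itself. A minimal sketch of what such a manifest might contain is shown below; the group, kind, and schema fields here are assumptions chosen to match the fields exposed later in this guide:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: couchdbs.couchdb.io
spec:
  group: couchdb.io
  names:
    kind: CouchDB
    listKind: CouchDBList
    plural: couchdbs
    singular: couchdb
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    subresources:
      status: {}
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              username: {type: string}
              password: {type: string}
          status:
            type: object
            properties:
              phase: {type: string}
              address: {type: string}
              fauxtonAddress: {type: string}
              observedGeneration: {type: integer}
```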

Now we will tell the KubeCarrier installation to work with this CRD. We can accomplish this by creating a CatalogEntrySet. This object describes which CRD should be fetched from which ServiceCluster, carries metadata for the Service Hub, and limits which fields are available to users.

CatalogEntrySet definition

apiVersion: catalog.kubecarrier.io/v1alpha1
kind: CatalogEntrySet
metadata:
  name: couchdbs
spec:
  metadata:
    displayName: CouchDB
    description: The comfy database
  discover:
    crd:
      name: couchdbs.couchdb.io
    serviceClusterSelector: {}
  derive:
    expose:
    - versions:
      - v1alpha1
      fields:
      - jsonPath: .spec.username
      - jsonPath: .spec.password
      - jsonPath: .status.phase
      - jsonPath: .status.fauxtonAddress
      - jsonPath: .status.address
      - jsonPath: .status.observedGeneration

Management Cluster

# make sure you are connected to the KubeCarrier Management Cluster
# that's `kubecarrier` if you followed our earlier guide.
$ kubectl config use-context kind-kubecarrier
Switched to context "kind-kubecarrier".

$ kubectl apply -n team-a \
  -f catalogentryset.yaml
catalogentryset.catalog.kubecarrier.io/couchdbs created

$ kubectl get catalogentryset -n team-a
NAME       STATUS   CRD   AGE
couchdbs   Ready          19s

As soon as the CatalogEntrySet is ready, you will notice two new CustomResourceDefinitions appearing in the Management Cluster:

Management Cluster

$ kubectl get crd -l kubecarrier.io/origin-namespace=team-a
NAME                                 CREATED AT
couchdbinternals.eu-west-1.team-a    2020-07-31T09:36:04Z
couchdbs.eu-west-1.team-a            2020-07-31T09:35:50Z

The couchdbinternals.eu-west-1.team-a object is just a copy of the CRD present in the ServiceCluster, while couchdbs.eu-west-1.team-a is a “slimmed-down” version, only containing the fields specified in the CatalogEntrySet. Both CRDs are “namespaced” by their API group.


Now that we have successfully registered a CustomResourceDefinition from another cluster, attached metadata to it and created a “public” interface for other people, we can go ahead and actually offer this CouchDB object to other users.

The CatalogEntrySet we created in the previous step is managing CatalogEntries for all ServiceClusters that match the given serviceClusterSelector.
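The empty serviceClusterSelector shown earlier matches every ServiceCluster. To scope a CatalogEntrySet to particular clusters instead, the selector accepts a standard Kubernetes label selector; the following is a sketch, assuming your ServiceCluster objects carry a hypothetical region label:

```yaml
# Hypothetical: only ServiceClusters labeled region=eu are matched,
# so CatalogEntries are only created for those clusters.
serviceClusterSelector:
  matchLabels:
    region: eu
```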

Management Cluster

$ kubectl get catalogentry -n team-a
NAME                 STATUS   BASE CRD                            TENANT CRD                  AGE
couchdbs.eu-west-1   Ready    couchdbinternals.eu-west-1.team-a   couchdbs.eu-west-1.team-a   26s

We can now reference these CatalogEntries in a Catalog and offer them to Tenants. Every Account with the Tenant role has a Tenant object created in each Provider namespace.

Management Cluster

$ kubectl get tenant -n team-a
NAME     AGE
team-b   5m35s

The Provider can organize Tenants by setting labels on these Tenant objects, so they can be selected by a Catalog. The following Catalog selects all CatalogEntries and offers them to all Tenants:

Catalog definition

apiVersion: catalog.kubecarrier.io/v1alpha1
kind: Catalog
metadata:
  name: default
spec:
  # selects all the Tenants
  tenantSelector: {}
  # selects all the CatalogEntries
  catalogEntrySelector: {}

Management Cluster

$ kubectl apply -n team-a \
  -f catalog.yaml
catalog.catalog.kubecarrier.io/default created

$ kubectl get catalog -n team-a
NAME      STATUS   AGE
default   Ready    5s
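Selectors do not have to be empty: because the Provider can label Tenant objects (e.g. kubectl label tenant team-b -n team-a tier=premium; the label name is hypothetical), a second Catalog could offer entries to only a subset of Tenants. A sketch, with the apiVersion assumed to match KubeCarrier's catalog API group:

```yaml
apiVersion: catalog.kubecarrier.io/v1alpha1
kind: Catalog
metadata:
  name: premium
spec:
  # only Tenants the Provider labeled tier=premium
  tenantSelector:
    matchLabels:
      tier: premium
  # still offers every CatalogEntry
  catalogEntrySelector: {}
```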

When the Catalog is ready, selected Tenants can discover the objects available to them, and RBAC is set up to allow users to work with the CRD in their namespace. Here we also use kubectl user impersonation (--as) to showcase RBAC:

Management Cluster

# Offering objects contain information about CRDs that are shared to a Tenant.
# They contain all the information to validate and create new instances.
$ kubectl get offering -n team-b --as=team-b-member
NAME                        DISPLAY NAME   PROVIDER   AGE
couchdbs.eu-west-1.team-a   CouchDB        team-a     3m15s

# Region exposes information about the underlying Clusters.
$ kubectl get region -n team-b --as=team-b-member
NAME               PROVIDER   DISPLAY NAME   AGE
eu-west-1.team-a   team-a     EU West 1      5m14s

# Provider exposes information about the Provider of an Offering.
$ kubectl get provider -n team-b --as=team-b-member
NAME     DISPLAY NAME   AGE
team-a   The A Team     6m11s
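With the Offering in place, a Tenant user can create instances of the public CouchDB kind in their own namespace. Below is a sketch of such an instance, assuming the exposed API group follows the service-cluster-and-provider-namespace pattern seen above and using the spec fields exposed by the CatalogEntrySet:

```yaml
# Hypothetical tenant-facing instance; only exposed fields are available.
apiVersion: eu-west-1.team-a/v1alpha1
kind: CouchDB
metadata:
  name: db1
spec:
  username: hans
  password: hunter2
```

It would be applied by the Tenant user with kubectl apply -n team-b --as=team-b-member -f couchdb.yaml.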