Demystifying RBAC in vSphere with Tanzu
vSphere with Tanzu introduces a unique integration of Kubernetes within the vSphere platform. Multiple articles and blogs have already discussed the architecture, benefits, and advantages of integrating VMs and containers within the same platform. New tenancy models and computational objects have been introduced to the platform. These supplementary objects help provide additional roles and responsibilities to the different personas that interact with the platform. In this article, I will try to demystify the new RBAC model.
Let’s look at a very simplified architecture diagram of a vSphere with Tanzu environment. The Supervisor cluster (blue box in the image below) is at its heart; it comprises a set of K8s control plane VMs (and, depending on the networking stack used, the ESXi servers as worker nodes). The Supervisor cluster runs several infrastructure- and K8s-specific controllers that help provide the abstraction of the infrastructure (compute, networking, storage) and other native K8s functionalities to the tenants living within it.
The next object is the Supervisor namespace (blue resource pool in the image below). These are DevOps user-managed K8s namespaces within the Supervisor Cluster that closely mimic resource pools within a vSphere platform. Since the Supervisor cluster is an opinionated K8s cluster — to guarantee the SLAs and SLOs of the platform it delivers — the Supervisor namespaces are the only allowed namespaces within which users can deploy their workloads.
The last object of interest is the Workload Cluster (blue K8s icon in the bottom right corner in the image below). These are the conformant K8s clusters where the DevOps users can deploy their applications. While users can implement their desired RBAC policies on these clusters, distinct policies get inherited through the Supervisor namespaces (for platform-specific functionalities).
We will be delving deeper into these objects/tenants and looking at their corresponding RBACs. As part of the exercise, we will be looking at important ClusterRoleBindings and the rules associated with the ClusterRoles. If there are RoleBindings of interest defined at the namespace level, we will explore them and their related Roles. Controllers running in Supervisor clusters and Workload clusters are responsible for maintaining the desired state of these RBAC rules.
For discussions in this article, we will reference the following setup as seen in the image below.
- A Supervisor Cluster with SupervisorControlPlaneVM(s)
- A Supervisor Namespace — demo1
- A Workload Cluster

Three generic users, user1, user2, and user3, have been created within the vsphere.local domain. The users have been assigned the following roles within the demo1 Supervisor namespace (see image below).
User1 has been assigned the Viewer persona for the Supervisor namespace, giving a read-only view of its objects.
User2 has been assigned the Editor persona for the Supervisor namespace, allowing them to create/update/delete objects. DevOps engineers, who build and deploy K8s applications, would be suitable owners of this persona.
User3 has been assigned the Owner persona for the Supervisor namespace, with full administrative rights to it. Ideally, platform operators/SRE engineers would fulfill this function.
Note — As vSphere with Tanzu evolves, some of the configurations discussed in this article will organically change and new features will be added. Hence, new RBAC rules could be implemented, while some could be dropped or deprecated.
- The K8s Supervisor clusters have some very well-defined ClusterRoleBindings that are implemented to allow for restricted access to the cluster. Only the firstname.lastname@example.org vCenter group has limited access to some of the cluster resources. They are enforced via these ClusterRoleBindings and ClusterRoles —
Digging a bit deeper, let us look at what verbs and K8s API resources these cluster-level RBACs translate to. At the cluster level, the email@example.com group has the following access —
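As a rough illustration, a restricted cluster-scope grant of this kind is expressed in K8s with a ClusterRole/ClusterRoleBinding pair. The role name, group name, resources, and verbs below are illustrative assumptions for the sketch, not the actual objects shipped by vSphere with Tanzu:

```yaml
# Illustrative sketch only: a minimal, read-only grant at cluster scope,
# bound to a vCenter SSO group. All names here are assumptions.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vmware-system-limited-read        # illustrative name
rules:
- apiGroups: [""]
  resources: ["namespaces", "nodes"]      # illustrative resource list
  verbs: ["get", "list", "watch"]         # read-only verbs
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vmware-system-limited-read-binding
subjects:
- kind: Group
  name: sso:Administrators@vsphere.local  # vCenter SSO group, illustrative format
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: vmware-system-limited-read
  apiGroup: rbac.authorization.k8s.io
```

The key point the pattern captures is that the binding grants only the verbs enumerated in the rules — everything else at cluster scope stays denied.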
- Additionally, the firstname.lastname@example.org vCenter group has limited access to some of the system namespaces through relevant RoleBindings and Roles.
Within the kube-system namespace, these administrators can patch the kube-system-specific objects image-fetcher-ca-bundle and image-fetcher-service —
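A namespace-scoped grant pinned to individual objects is typically expressed with a Role that uses resourceNames. The object kinds below (a ConfigMap and a Service, inferred from the names) and the verb list are assumptions for illustration:

```yaml
# Sketch of a Role pinned to specific objects via resourceNames.
# Object kinds are inferred from the names; verbs are illustrative.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: image-fetcher-admin               # illustrative name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["image-fetcher-ca-bundle"]  # kind assumed from the name
  verbs: ["get", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["image-fetcher-service"]    # kind assumed from the name
  verbs: ["get", "patch"]
```

Note that resourceNames restricts the grant to exactly those named objects; the subjects cannot touch any other ConfigMap or Service in kube-system through this Role.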
NOTE: No additional vSphere users can access the Supervisor resources at the cluster or system namespaces level.
Interestingly, as the lists above indicate, even the almighty email@example.com group has minimal privileges within the Supervisor cluster! This prevents malicious or accidental changes to the Supervisor cluster and its various system configurations.
- The permissions deployed within the vSphere UI (image in the Setup section above) translate to K8s-specific RBAC permissions within the Supervisor namespace. As soon as the Supervisor namespaces are created and the relevant permissions applied through the vSphere UI, the controllers running within the Supervisor cluster enforce these permissions. Digging deeper, we observe the following RoleBindings and Roles enforced within the demo1 namespace.
Let us look at the ClusterRole edit, a modified role that provides edit access to multiple demo1 namespaced resources. The details are provided below. As can be seen, the vSphere users with the Editor and Owner personas (user2 and user3), along with the Administrators@vsphere.local group, have full edit rights to several resources within the Supervisor namespace demo1, allowing them to deploy K8s clusters as well as standard K8s applications (where possible, through an NSX networking stack and using vSphere Pods).
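The wiring described above follows the standard K8s pattern of namespace-scoped RoleBindings that reference a cluster-scoped ClusterRole. The binding name and subject-name format below are illustrative assumptions:

```yaml
# Sketch: a RoleBinding in demo1 granting the "edit" ClusterRole to the
# Editor and Owner users. Binding and subject names are illustrative.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wcp-edit-binding                  # illustrative name
  namespace: demo1                        # scopes the grant to this namespace
subjects:
- kind: User
  name: sso:user2@vsphere.local           # vCenter SSO user, illustrative format
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: sso:user3@vsphere.local
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                              # the (modified) edit ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

Because the roleRef points at a ClusterRole but the binding lives in demo1, the edit rights apply only inside demo1 — the same ClusterRole can be reused across many Supervisor namespaces.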
Let us also look at the ClusterRole view, a modified role that provides limited view access to multiple demo1 namespaced resources. The details are provided below. As can be seen, the vSphere user with the Viewer persona (user1) has limited view rights to several resources within the Supervisor namespace demo1.
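The Viewer grant follows the same RoleBinding-to-ClusterRole pattern, just with the view role. Names below are illustrative assumptions:

```yaml
# Sketch: a RoleBinding in demo1 granting the "view" ClusterRole to the
# Viewer user. Binding and subject names are illustrative.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wcp-view-binding                  # illustrative name
  namespace: demo1
subjects:
- kind: User
  name: sso:user1@vsphere.local           # vCenter SSO user, illustrative format
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                              # the (modified) view ClusterRole
  apiGroup: rbac.authorization.k8s.io
```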
user3, who is the Owner of the namespace demo1, is also granted namespace-delete permissions via the following ClusterRoleBindings and ClusterRole. This includes permissions to delete namespaces that they have created as part of the Owner persona.
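Because namespaces are cluster-scoped resources, a delete grant on a single namespace has to be expressed at cluster scope, pinned to the namespace's name via resourceNames. The object names below are illustrative assumptions:

```yaml
# Sketch: granting delete on one specific namespace. A ClusterRole/
# ClusterRoleBinding is required because namespaces are cluster-scoped.
# All names are illustrative.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wcp-ns-delete-demo1               # illustrative name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  resourceNames: ["demo1"]                # restricts deletion to demo1 only
  verbs: ["delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wcp-ns-delete-demo1-binding
subjects:
- kind: User
  name: sso:user3@vsphere.local           # the Owner, illustrative format
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: wcp-ns-delete-demo1
  apiGroup: rbac.authorization.k8s.io
```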
When an allowed user creates a Tanzu Kubernetes Cluster (TKC) — aka a Workload Cluster — specific roles and permissions are automatically created during the cluster-creation stage. This allows seamless lifecycle management of these clusters by the vSphere personas with the relevant permissions.
This one is quite simple: the Owner and Editor personas, as well as the firstname.lastname@example.org vCenter group, get full cluster-admin rights on the workload cluster!
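Inside the workload cluster, such a grant amounts to a single ClusterRoleBinding that references the built-in cluster-admin ClusterRole. The binding name and subject-name format below are illustrative assumptions:

```yaml
# Sketch: binding the built-in "cluster-admin" ClusterRole to the Editor
# and Owner users inside the workload cluster. Names are illustrative.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vmware-cluster-admins             # illustrative name
subjects:
- kind: User
  name: sso:user2@vsphere.local           # Editor, illustrative format
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: sso:user3@vsphere.local           # Owner, illustrative format
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin                     # built-in superuser role
  apiGroup: rbac.authorization.k8s.io
```

Since cluster-admin allows every verb on every resource, there is nothing these subjects cannot do inside the workload cluster — which is exactly the intent for the personas that own its lifecycle.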