
Documentation to guide you

Check our step-by-step documentation manual to get you started.

Get started

Install

Check our guide on how to install both CLIs.

Quick Start

Understand the links, folder structure, and code involved in working with the tools.

State independency

Many CLIs used for cloud container orchestration depend on state files or metadata created by the CLI when managing resources. There is a chance for this backend to get corrupted or become inaccessible, which can lead to inconsistent resource configuration. We have designed our tools in such a way that they are least likely to depend on state files, decreasing the probability of failure due to missing or corrupted metadata files.

Simple YAML declarative DRY configuration

import boto3

eks = boto3.client("eks")

response = eks.create_cluster(
    name="simpleeks",
    version="1.15",
    roleArn=roleArn,
    resourcesVpcConfig={
        "subnetIds": subnets,
        "securityGroupIds": [secGroupId],
    },
)

waiter = eks.get_waiter("cluster_active")
waiter.wait(name="simpleeks")

Fig 1. AWS EKS boto3 example

Azurepythonsdk.jpg

Fig 2. Azure Python SDK – simple VM instance

Every major cloud has come up with its own predefined SDKs and templates for orchestrating K8s clusters, but many don't provide a built-in maintenance interface for managing the cluster and the applications deployed on it. These cloud-centric interfaces also demand a considerable amount of time to learn, implement, and maintain container infrastructure.


Though there are tools, such as Terraform, that can bring common ground for implementing and maintaining these clusters, they don't provide the flexibility of DRY configuration for cross-account orchestration, and they demand a considerable amount of time for developers and operators to understand and implement configurations.


We would like to simplify this approach by using simple YAML inputs for defining and maintaining both clusters and the applications residing in them. By using this file, similar configurations can be achieved for cross-cloud implementations.

Multi-cloud support - k8cli

The tool helps orchestrate K8s over multi-cloud infrastructure, avoiding vendor lock-in, providing competitive advantages, and enabling better capacity distribution and disaster recovery. It consumes simple, similar, cloud-independent YAML templates for spinning up and maintaining clusters and applications. The cluster.yml file is made up of the resources, i.e., Cloud, VPC, Master, and Nodes, needed for creating and maintaining clusters. For example, the Cloud resource accepts the cloud credentials needed for creating and maintaining resources:

---
Cloud:
    Name: AWS
    AccessKey:
    SecretKey:
    Region: us-east-1
    Cluster: test-eks9
    Bucket: k8cli-test-eks5-cluster

Similarly, VPC is a non-mandatory resource needed for creating the supporting network. It can either be used or left blank if the network is already available, but once created it can't be left blank.
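The tool's actual VPC inputs are shown in the vpc.png figure. Purely as a hedged sketch (the field names below are assumptions, not k8cli's documented schema), a VPC block might look like:

```yaml
# Hypothetical sketch only - field names are assumptions, not k8cli's
# actual schema; refer to the vpc.png figure for the real inputs.
VPC:
    Name: test-eks9-vpc
    CidrBlock: 10.0.0.0/16
    Subnets:
        - 10.0.1.0/24
        - 10.0.2.0/24
```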

vpc.png

The same goes for the Master and Node groups, but they are mandatory, non-cloud-agnostic resources that need to be defined for creating and maintaining K8s clusters.

Mastrgroup.png
Blue-Green Cluster Upgrades

K8s clusters hold applications with different frameworks and dependencies. This makes it difficult to assess the impact on the applications during and after a cluster upgrade. Moreover, many teams prefer to hold both dev and stage applications in the QA environment, which makes the non-prod environments critical in day-to-day operations.

This approach follows a pattern similar to application blue-green deployments, except it is performed at the cluster level. A parallel environment is created, and the cluster is upgraded to the latest version. Application teams are requested to push their apps to the blue cluster and perform their tests. Once they feel comfortable with the approach, the changes are propagated to the green cluster. This method helps teams upgrade the cluster with more confidence than an in-place upgrade.

The green cluster can be defined in cluster-green.yml. It accepts all the parameters except the VPC inputs, as it uses the same VPC as the blue cluster to avoid creating duplicate network resources.
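Under that rule, a cluster-green.yml can mirror the blue cluster's Cloud block with a different cluster name and no VPC section. As a sketch (the green-specific names below are illustrative, not prescribed by the tool):

```yaml
# cluster-green.yml - sketch; reuses the Cloud fields from cluster.yml
# and omits VPC, since the green cluster shares the blue cluster's network.
---
Cloud:
    Name: AWS
    AccessKey:
    SecretKey:
    Region: us-east-1
    Cluster: test-eks9-green          # illustrative green cluster name
    Bucket: k8cli-test-eks9-green-cluster
```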

clustrupgrades.png
Helm integration

Seamless Helm 3 integration with different Helm repos can be achieved with K8Cli. This is an in-place integration enabled at the cluster level. Operators need to specify the path to the addons and define the addons and their respective repos in addons.yml.
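The real addons.yml layout is shown in the figures below. Purely as a hedged sketch (the keys are assumptions, not K8Cli's documented schema), an addon entry might pair a chart with its repo:

```yaml
# Sketch only - key names are assumptions, not K8Cli's documented schema.
Addons:
    - Name: velero
      Repo: https://vmware-tanzu.github.io/helm-charts   # real Velero chart repo
      Chart: velero
      Version: 2.23.6                                    # illustrative version
```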

helmintegration1.png
helmintegration2.png
Cloud independent K8 resource management

K8cli can be used for managing the cluster once it is operational. It can be used for creating Namespaces, Storage Classes, Default Quotas, Resource Quotas, and Service Accounts.

Namespace Operation

This operation will create/update/delete the Namespaces application teams need to push their apps. This is an admin-level operation requiring proper privileges for the service accounts that execute it; this is discussed in the following topics. The operation provides a Delete flag which can be enabled to delete the Namespaces not listed in the configuration file. The default quota and resource quota are defined in separate yml files, discussed below, which are read when creating namespaces.
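Under the hood, each entry ultimately corresponds to a standard Kubernetes Namespace object. For reference, the kind of manifest such an operation applies looks like this (the name and label are illustrative):

```yaml
# Standard Kubernetes Namespace manifest - the kind of object a namespace
# create operation ultimately applies (name and label are illustrative).
apiVersion: v1
kind: Namespace
metadata:
    name: team-a
    labels:
        managed-by: k8cli   # illustrative ownership label
```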

NameSpace-yml.png

Figure 6 - NameSpace.yml

ResourceQuota Operation

This operation will create the resource quotas listed in ResourceQuota.yml. These quotas are attached to the Namespaces defined in NameSpace.yml. The yml file enables the operator to define the CPU / memory / StorageClass limits to attach to the Namespaces.
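The quotas the operation creates correspond to standard Kubernetes ResourceQuota objects. As a reference illustration (all names and values below are examples, not the CLI's defaults):

```yaml
# Standard Kubernetes ResourceQuota manifest - the kind of object this
# operation attaches to a namespace (names and values are illustrative).
apiVersion: v1
kind: ResourceQuota
metadata:
    name: team-a-quota
    namespace: team-a
spec:
    hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
        # per-StorageClass storage limit, matching the CPU/memory/StorageClass
        # limits the yml file lets the operator define
        gold.storageclass.storage.k8s.io/requests.storage: 50Gi
```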

resourcequota.png

Figure 7 - ResourceQuota.yml

StorageClass Operation

This operation is used for creating the Storage Classes needed for cluster operations. It is cloud-dependent, i.e., the respective storage classes listed need to be supported by the cloud environment.

defaultquota.png

Figure 9 - DefaultQuota.yml

DefaultNameSpaceRole Operation

This operation can be used for creating the default roles needed for accessing the Namespaces created by the CLI. Once the operator triggers the namespace operation, it creates a corresponding service account. The role/permission set needed for creating this service account is defined in the default namespace role operation. Updating this yml will update the role in the backend and change the access control for all the service accounts.
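The role the operation maintains maps onto standard Kubernetes RBAC objects. As a reference illustration only (the names, resources, and verbs here are examples, not the CLI's actual defaults):

```yaml
# Standard Kubernetes RBAC pair - the kind of Role/RoleBinding a default
# namespace role operation would maintain (names and verbs are illustrative).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
    name: default-namespace-role
    namespace: team-a
rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "services", "deployments"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
    name: default-namespace-role-binding
    namespace: team-a
subjects:
    - kind: ServiceAccount
      name: team-a-sa          # the service account created per namespace
      namespace: team-a
roleRef:
    kind: Role
    name: default-namespace-role
    apiGroup: rbac.authorization.k8s.io
```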

defaultnamespace.png

Figure 10 - DefaultNameSpaceRole.yml

Integrated backup and restore

This is supported for all clusters and depends on Velero. The Velero version and configuration can be managed via addon.yml.

./K8Cli --operation take_backup --context test-eks9

Audit resources

To avoid removing resources accidentally, the Audit flags provide operators the flexibility to either remove or list the resources that are not defined in the YAML files. This lets teams analyze the impact of removing resources, without actual removal, by listing the resources. To enable the delete-resource feature, operators have to enable the option in the resource config file; otherwise it defaults to listing resources.

auditnew.png
In-place Git integration

The traditional way of managing Cloud Foundry resources, using a cf CLI plugin, is always a cumbersome process. Neither operators nor application teams are provided with an option to supply JSON/YAML inputs, leaving the teams unable to store changes in version control.

 

Similarly, although the open-source automation options available do support version control, they do not enable application teams to execute their functionalities. Using these tools, operators become the only executors and turn into bottlenecks for the application teams, as applications are not provided with any execution controls by default unless workarounds are explored.

 

To avoid these constraints, C9Cli supports a Parent-Child configuration, which enables operators to control platform-level parameters such as quota definitions, default ASGs, org enablement, and access control, while org managers or users focus on org-level controls such as space management, user access, isolation segments, and space ASGs.
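Purely as an illustrative sketch of that split (the file layout and keys below are assumptions, not C9Cli's documented format), the parent repo might pin platform-level settings while a child repo holds the org-level ones:

```yaml
# Sketch only - keys and layout are assumptions, not C9Cli's schema.
# Parent repo (operator-controlled): platform-level parameters.
EnableOrgCreation: true
DefaultASGs:
    - public-egress
Quotas:
    default:
        MemoryLimit: 10G
---
# Child repo (org-managed): org-level controls.
Org: team-a
Spaces:
    - dev
    - stage
IsolationSegment: shared
```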

 

For more information, please refer to the doc below.

Repoconfig.png
Audit resources

To avoid removing resources accidentally, the Audit flags provide operators the flexibility to either remove or list the resources that are not defined in the YAML files. This lets teams analyze the impact of removing resources, without actual removal, by renaming the resources.

           

A few resources, such as quotas, namespaces, and org lists, have options to list/delete/rename resources, while others have options to either list or rename the resources not defined in the YAML inputs. The audit function can be set at the org level, but the administrative setting, defined in config.yml, takes precedence.

c9cli-auditrsrcs1.png
c9cli-auditrsrcs2.png
Manage ASGs and Isolation segments

Platform administrators are provided with administrative capabilities such as enabling global flags for creating ASGs and isolation segments, and setting up user roles. The capabilities submitted by application teams through the child repos are referred to as Self-Service Capabilities, which include flags for enabling isolation segments and ASG audits at the org level.

 

The CLI helps operators control the privileges application teams can avail. For example, operators can define whether application teams can use application security groups or isolation segments, and define the default application security groups needed for orgs, such as system, for privileged access.

 

Moreover, the orgs that need privileged access can be maintained by using a separate flag called protected org ASG, which can be run as part of a cron job to propagate the changes as quickly as possible.

manageasgs.png
Customizable setup

The CLI provides various operations, and each operation is integrated with Git pull and Git push, which makes it independent of the other operations. This helps the platform team move a bottleneck task to a different pipeline without being affected by other tasks. The examples below show C9Cli run using Concourse. They consist of a master pipeline running all the tasks sequentially, and the same master pipeline divided into three more pipelines handling separate tasks.

c9cf-mgmt.png

Figure 1 - C9Cf-mgmt

c9cf-mgmt-orgs.png

Figure 2 - C9cf-mgmt-Orgs

c9cf-mgmt-users.png

Figure 3 - C9Cf-mgmt-Users

c9cf-mgmt-asgs.png

Figure 4 - C9Cf-mgmt-Asgs

Contact
Contact Information

For any query or customized solution related to installation, maintenance, or troubleshooting, reach out to us by filling out the form. Our team will be in touch with you shortly to listen to and resolve your queries.


©2023 by Arna Cloud
