Wednesday, September 14, 2022

Appropriate workloads for Kubernetes

I've been asked about hosting cloud applications in Kubernetes. People seem to assume that Kubernetes is the best practice for hosting containerized workloads in all cases. While I love Kubernetes and have used it successfully on numerous applications, it shouldn't be the default, out-of-hand hosting solution for containerized workloads.

Kubernetes is far from the simplest solution for hosting containerized workloads. Even with cloud vendors making Kubernetes clusters more integrated and easier to manage, they are still very complex and require highly specialized administration skill sets. If you're using Azure, Container Instances is a comparatively more straightforward way to deploy containerized workloads than AKS. In fact, Azure has several different ways to deploy containerized workloads, most of which are easier than AKS/Kubernetes.

If you're using AWS, ECS or Lambda is comparatively easier than Kubernetes. In fact, AWS has at least 17 ways to deploy containerized workloads. Incidentally, with any cloud, it's possible to create a virtual machine and run a containerized workload on it; I don't recommend this. Bottom line: Kubernetes, AWS ECS, and Azure Container Instances are all application runners.

If you adopt Kubernetes as a hosting mechanism, the benefits should pay for the additional complexity. Otherwise, the additional maintenance headaches and costs of Kubernetes are not a wise investment. That raises the question: what types of applications are most appropriate for Kubernetes, and which application types should adopt a more straightforward hosting mechanism?

As an aside, containerizing your applications is the mechanism that separates the concern of hosting from the functionality of your application. Cloud vendors have numerous ways to deploy and run containerized applications. Vendor lock-in usually isn't an issue with containerization.

Workloads Well-Suited for Kubernetes

This section details common types of applications that may benefit from Kubernetes hosting.

Applications with dozens or hundreds of cloud-native services often benefit from Kubernetes. Kubernetes can handle autoscaling and availability concerns across the different services with less setup per service. Additionally, cloud vendors have integrated their security and monitoring frameworks in a way that generally makes management of Kubernetes-hosted services homogeneous with the rest of your cloud footprint.
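
To illustrate how light the per-service setup can be, here's what attaching an autoscaler to one of those services looks like with the official Kubernetes Python client. This is a minimal sketch; the "orders" deployment name, namespace, and thresholds are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()  # use the current kubectl context

    # Scale the hypothetical "orders" deployment between 2 and 10 replicas
    # based on average CPU utilization.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="orders-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="orders"),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)

Repeating this for dozens of services is a small, uniform amount of work compared with wiring up scaling and availability service by service in most other hosting options.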

Applications servicing multiple customers that require single-tenant deployments often benefit from Kubernetes. For these applications, there is one deployment of a set of services per customer: if there are 1,000 customers, there will be 1,000 deployments of the same set of services, one per customer. Given that the number of deployments grows and shrinks as customers come and go, Kubernetes streamlines the setup for each and provides constructs to keep the customer deployments separate.
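
Namespaces are the usual Kubernetes construct for that separation. As a sketch (the onboarding flow and naming convention here are hypothetical, not a prescription):

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    def onboard_customer(customer_id: str) -> None:
        """Create an isolated namespace for a new customer.

        The customer's copy of the service set would then be deployed into
        this namespace, typically from a shared template such as a Helm chart.
        """
        namespace = client.V1Namespace(
            metadata=client.V1ObjectMeta(
                name=f"customer-{customer_id}",
                labels={"tenant": customer_id},
            )
        )
        core.create_namespace(body=namespace)

    onboard_customer("acme")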

Applications requiring custom Domain Name System (DNS) resolution that can't be delegated to the enterprise's custom DNS often benefit from Kubernetes, because Kubernetes has configurable internal networking capabilities. Often this need arises more from organizational structure than from technical reasons: in many enterprises, DNS is managed by infrastructure teams, not application teams.

Applications in enterprises where IP address conservation is necessary can benefit from Kubernetes. For applications with internal services that only Kubernetes-hosted services need access to, Kubernetes' kubenet networking model provides an internal network not visible outside the cluster. For example, an application with hundreds of microservices may only need to expose a small fraction of those services outside the cluster. The kubenet networking model conserves IP addresses because internal services don't need IP addressability outside the cluster.
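
Concretely, a cluster-internal service is just a ClusterIP service: other services in the cluster can reach it, but it consumes no routable IP address outside the cluster. A minimal sketch, with hypothetical service and port values:

    from kubernetes import client, config

    config.load_kube_config()

    # "inventory" is reachable by other services in the cluster, but is not
    # addressable (and consumes no IP space) outside the cluster.
    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="inventory"),
        spec=client.V1ServiceSpec(
            type="ClusterIP",  # the default: cluster-internal only
            selector={"app": "inventory"},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)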

In the cloud, we think of IP addresses as "free" and plentiful. For firms with a large existing on-premises network, IP address space is often not free. In most firms I've seen, on-premises networks are tangled messes without sensible IP address schemes. Often, nobody understands the entirety of which network CIDRs are in use and for what. Additionally, routing is often manually configured, making CIDR block additions labor-intensive.

Summary

Not every application should be hosted in Kubernetes. Simple applications with low transaction volumes often don't require the additional complexity Kubernetes brings.

Thanks for taking the time to read this post. I hope this helps.

Sunday, September 4, 2022

Policy-based Management Challenges and Solutions

One of the most common best practices for managing security in the cloud is policy-based management. Policy-based management optimally prevents security breaches, or at least alerts you to their presence. Additionally, it alleviates the need for many manual reviews and approvals, which slow down the development of new business capabilities. That said, policy-based management presents many challenges. This post details common challenges and tactics to overcome them.

Challenge #1: Introducing New or Changed Policies

New policies, or changes to existing ones, often break the infrastructure code (IaC) supporting existing applications. This occurs because the policy wasn't in place when the IaC was constructed and the actions were allowed at the time. The result is unplanned work for application teams and schedule disruptions. Because policies are usually made by separate teams, the policy makers often don't pay the cost of the unplanned work they create.

Policy change announcements are often ineffective. Partly, this is due to the volume of announcements in most organizations: the announcement of an individual policy gets lost in a sea of others. Additionally, IaC developers sometimes don't completely understand or foresee the ramifications of the policy change.

Challenge #2: Policies with Automatic Remediation

Installing policies that automatically remediate can actually break existing infrastructure and the applications that rely on it. While automatic remediation is appealing from a security perspective because it closes a security hole shortly after it is created, it really just kicks the can down the road. Any resulting breakage still needs to be repaired, sometimes after causing an outage for end users.

The IaC that produced the now-invalid infrastructure no longer matches the infrastructure that physically exists and needs to be changed. In other words, the automatic remediation causes unplanned work for other teams. Sometimes, new policies break common IaC modules used by multiple teams rather than an individual application's infrastructure code.

Challenge #3: Adapting Policies to Advances in Technology 

Many policy makers only consider legacy mutable infrastructure. Mutable infrastructure is common on premises and consists of static virtual machines/servers that are created once and updated in place as new application releases require. With immutable infrastructure, by contrast, VMs are completely disposable. The VMs are still updated, but by updating the images they are created from and replacing the VMs in their entirety.

For example, it is common to see a policy requiring that automatic security updates be applied to virtual machines on a regular basis. The issue is that such policies assume the VM has a long life, as it would in a mutable infrastructure. Such a policy doesn't apply to immutable infrastructures. For immutable infrastructure, the base image needs the security updates applied, and any VMs built from it should be rebuilt and redeployed.
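
To make the contrast concrete, here is the immutable update loop as a sketch. The three helpers are hypothetical stand-ins for an image pipeline (e.g., Packer) and cloud SDK calls, stubbed out so the flow is visible:

    def bake_image(base_image: str) -> str:
        """Build a new VM image with security updates applied (stub)."""
        return base_image + "-patched"

    def update_scale_set_image(scale_set: str, image: str) -> None:
        """Point the scale set at the new image (stub)."""
        print(f"{scale_set} now boots from {image}")

    def replace_instances(scale_set: str, batch_size: int) -> None:
        """Replace VMs in batches; never patch them in place (stub)."""
        print(f"rolling {scale_set} in batches of {batch_size}")

    def apply_security_updates(base_image: str, scale_set: str) -> None:
        # Immutable infrastructure: update the image, not the running VMs.
        new_image = bake_image(base_image)
        update_scale_set_image(scale_set, new_image)
        replace_instances(scale_set, batch_size=2)

    apply_security_updates("ubuntu-22.04-base", "app-scale-set")

A policy written for mutable infrastructure would inspect the running VMs for patch agents; a policy written for immutable infrastructure should instead verify the age and patch level of the image the VMs boot from.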

Cloud vendor technology changes at a rapid pace. Keeping cloud policies up to date with current advances is a challenge. In practice, policy makers are often out of date and make invalid assumptions. Effects I commonly see are:
  • Assuming that cloud vendor capabilities for securing network access remain the same. Often, these capabilities advance.
  • Assuming VM IP addresses are static and can safely be used in firewall rules. In the cloud, IP addresses can change quite frequently.
  • Assuming that VM images are changeable (vendor-provided images might not be).
  • Assuming that there will be no needed exceptions to security policies.

Tactics to Mitigate Challenges

Always audit compliance with policies before installing automatic remediation. That is, alert teams to new compliance issues before changing anything automatically. This allows teams to accommodate a security policy change proactively before the change is forced. Additionally, provide a reasonable lead time so that teams have the opportunity to plan for the additional work.
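
As a sketch of that ordering, assume a hypothetical audit feed of non-compliant resources and an announced remediation start date. Before the date, the policy only alerts; after it, remediation kicks in:

    import datetime

    # Hypothetical audit results: (resource_id, owning_team, compliant).
    AUDIT_RESULTS = [
        ("storage-account-1", "team-payments", False),
        ("storage-account-2", "team-search", True),
    ]

    # Announced lead time: remediation begins only after this date.
    REMEDIATION_START = datetime.date(2022, 11, 1)

    def enforce_policy(today: datetime.date) -> None:
        for resource, team, compliant in AUDIT_RESULTS:
            if compliant:
                continue
            if today < REMEDIATION_START:
                # Audit phase: alert the owning team, change nothing.
                print(f"NOTIFY {team}: {resource} violates the new policy")
            else:
                # Remediation phase: teams have had time to react.
                print(f"REMEDIATE {resource} (owned by {team})")

    enforce_policy(datetime.date.today())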

Test new or changed policies with any related enterprise-wide common IaC modules. It is common for organizations with mature DevOps capabilities to centralize common IaC modules and reuse them for multiple applications. This allows organizations to leverage existing work instead of having multiple teams reinvent the wheel. For example, if a policy regarding AWS S3 buckets or Azure storage accounts is being changed, test any common IaC modules that use those constructs. Make policy compliance part of the test. Note that these tests should be automated so they can easily be rerun.
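
One hedged way to automate such a test is to produce a Terraform plan for the shared module's example configuration and assert compliance against the plan's JSON output. The module path and the rule being checked below are hypothetical:

    import json
    import subprocess

    def plan_json(module_dir: str) -> dict:
        """Plan the module's example configuration and parse the plan."""
        subprocess.run(["terraform", "init"], cwd=module_dir, check=True)
        subprocess.run(["terraform", "plan", "-out=plan.out"],
                       cwd=module_dir, check=True)
        shown = subprocess.run(["terraform", "show", "-json", "plan.out"],
                               cwd=module_dir, check=True, capture_output=True)
        return json.loads(shown.stdout)

    def test_s3_buckets_are_private():
        """Hypothetical rule: every planned S3 bucket must be private."""
        plan = plan_json("modules/s3-bucket/examples/basic")
        resources = (plan.get("planned_values", {})
                         .get("root_module", {})
                         .get("resources", []))
        for resource in resources:
            if resource["type"] == "aws_s3_bucket":
                assert resource["values"].get("acl", "private") == "private"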

Any policy with automatic remediation must provide an exception capability. For example, some VM images are purchased and can't be changed under the terms of their license; it is common for such images to be granted exceptions from related security policies. Additionally, I've seen exceptions granted for cloud vendor-provided Kubernetes clusters whose underlying VMs don't and can't meet policy requirements.

New policies should be deployed to lower environments first. This increases the chance that any errors or issues will be identified before the policy is applied to production. Be sure to allow a reasonable period of time in lower environments to increase the likelihood that issues will be identified and addressed.

Summary

Policy-based management has challenges but should still be considered a best practice. Thanks for taking the time to read this article. Please contact me if you have questions, concerns, or are experiencing challenges I've not listed here.


Monday, August 8, 2022

Move your Network to the Cloud Too!

Over the past year, I've been seeing indications of what will be a big trend in cloud consumption: let's move our network to the cloud along with our data centers. I'm talking primarily about the WAN that many enterprises maintain worldwide. Local offices will still need connectivity to the WAN; it's just that they will increasingly become on-ramps to a worldwide WAN hosted in the cloud. In other words, data centers will no longer be the "center" for all network access.

Graphically, the concept of moving the WAN to the cloud looks like figure 1 below. Notice how all data centers and offices connect to the WAN, which handles traffic between them. While the image doesn't show it, the cloud-based WAN is worldwide and can serve offices and data centers across the globe.

Figure 1: Cloud-based WAN Network

Let's contrast this with figure 2, which depicts the WAN topology common in enterprises today. Note that public cloud access typically routes via data centers, making enterprise application access data-center-centric. Worldwide connectivity is managed by a custom MPLS network.

Figure 2: Traditional Worldwide MPLS Network


I'm seeing several motivations for the change in thinking about how worldwide networks should be organized. I'll separate the reasoning into the following categories:
  • Complexity
  • Performance
  • Financial
  • Speed to Market

Complexity

The complexity of non-cloud MPLS networks, the base for most enterprise worldwide WANs, is tremendous. MPLS networks typically require large amounts of hardware that must be upgraded and replaced regularly. They take a large networking staff; while some firms outsource that to a managed services provider (MSP), the staff is still necessary. They also tend to be replete with vendor contracts. Outsourcing a large portion of the network to cloud vendors outsources this complexity and the associated maintenance to a large degree.

The complexity increases the business risk of change. MPLS networks are rarely supported by testing sandbox environments and automation; many firms still make changes manually, leading to inevitable human error and outages for users. Utilizing cloud vendors makes it much easier to automate the WAN infrastructure and to provide a sandbox environment for testing networking-related changes. This decreases the business risk of changes to networking infrastructure, which is huge: for most enterprises, the WAN that integrates all data centers and offices is essential.

Capacity planning requirements become simpler. The hardware and vendor contracts needed for worldwide MPLS networks require sophisticated capacity planning due to long lead times. This requirement is much simpler with cloud WAN implementations. Capacity planning still exists, but it is far simpler and easily adjustable on the fly.

Performance

Network latency is generally significantly lower (faster) using cloud-provided WAN networking than worldwide MPLS networks. While your mileage will vary depending on your MPLS implementation, so much R&D goes into cloud-provided WANs that the likelihood an enterprise can maintain any network performance advantage over time is low. Face it: most firms just can't compete.

Network latency is higher (slower) when traffic must travel between on-premises networks and the cloud. As more IT workloads move from on premises to the cloud, closer proximity to the cloud yields better performance. To this end, I see more enterprises leveraging cloud VPN services, which sit closer to most application workloads and thus yield better performance.

Financial

Converting networking hardware and infrastructure from capital expense (CapEx) to operational expense (OpEx) is appealing to many enterprises from an accounting perspective. As with computing resources, you pay for what you use for cloud-based WANs without hardware expenditures and management.

Networking labor is expensive, specialized labor. Some enterprises mitigate this cost by enlisting a managed services provider (MSP), but outsourcing that labor to cloud vendors is cheaper still, as it capitalizes on the cloud's economies of scale.

Speed to Market

No more long lead times for MPLS network upgrades and capacity increases. Increasing capacity in a cloud-provided WAN is typically measured in hours, not months. Furthermore, cloud-provided WAN products benefit from the cloud's dynamic scaling capabilities. Increasing MPLS network capacity takes sophisticated capacity planning and typically long lead times due to additional hardware expenditures.

Additional Benefits

The firm gets access to research and development advances made by cloud providers. The R&D resources that cloud providers invest in WAN technologies surpass what most enterprises are able or willing to invest. This means that, over time, advances in functionality and performance are likely to appear with cloud vendors first.

A cloud-based WAN is a natural partner for a cloud-based VPN capability, especially if the cloud hosts a large percentage of application compute resources. Consuming the cloud provider's VPN solution moves users closer to the compute resources they access, and with that closer proximity typically comes better performance.

A cloud-based WAN is also a natural partner for integrating multiple cloud providers. That is, your AWS footprint can be securely connected to your Azure or GCP footprint directly, avoiding the slower connection between cloud providers through an on-premises data center.

Concluding Remarks

I'm reporting what I'm seeing at clients. This idea made no sense when many firms had a small fraction of their IT footprint in the cloud. Now that most firms have most of their footprint in the cloud, thinking about how to provide worldwide access to internal users needs to evolve. And the time for that evolution has come.

If you have thoughts or feedback, please contact me directly via LinkedIn or email. Thanks for taking the time to read this article.


Wednesday, August 3, 2022

Radical Idea: Let's do more Testing in Production

There are many different types of application testing. This article is entirely about system-level testing: the outermost layer of testing, which exercises the user experience. System-level testing is also the most difficult to automate.

To be clear, I mean testing of the application from an end-user perspective only; many use the term system-level testing for this activity. Other types of testing, such as unit, performance, exploratory, and usability testing, are essential but not the focus of this article.

System-level automated testing has too much friction. It can't keep up with modern release cadences, and it's the most challenging type of testing to automate. Because of that, many still perform system-level testing manually. The cost-benefit of automating these types of tests is elusive, and this test automation certainly can't support the high-frequency change rates of high-performing DevOps teams.

System-level automated tests are fragile. The slightest change or refactoring at the outer web layer breaks a large percentage of them. Automated testing at this level usually relies on the labels programmers use for parameters and control identifiers, and programmers usually consider themselves free to refactor those labels for clarity without notice.

The lack of automated system-level testing impedes the firm's ability to implement continuous delivery. In turn, manual system-level testing lengthens lead time for changes, one of the DORA metrics we all seem to be using these days.

What's the Alternative?

Let's outsource system-level testing to end users. More precisely, let's enlist a small percentage of end users to use a release candidate in production and measure their error rate. Those errors can then be provided to the development team for remediation.

Instead of writing system-level tests, implement canary deployments and provide a release candidate version that is considered production and uses production databases and resources. The release candidate is production in every way, except that it's used by a small percentage of users. If the application is hosted in the cloud, it's possible to create a "sister" installation of an application in production that uses production resources in the same way the active version of the application does.
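
Routing "a small percentage of users" to the release candidate can be as simple as a deterministic hash of the user ID at the routing layer, so the same user always lands on the same version. A minimal sketch, assuming user IDs are available where the routing decision is made:

    import hashlib

    CANARY_PERCENT = 2  # share of users sent to the release candidate

    def use_release_candidate(user_id: str) -> bool:
        """Deterministically assign a user to the canary or stable version.

        Hashing rather than random choice keeps each user's experience
        consistent across requests.
        """
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % 100 < CANARY_PERCENT

    version = "release-candidate" if use_release_candidate("user-42") else "stable"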

Remediate the release candidate until the error profile is acceptable for mainstream release. In other words, fail forward, don't roll back when errors are discovered. At some point, the release candidate will be considered stable and is made active for 100% of users. At this point, a new release candidate is created for new features and changes to be tested in the same way. 

This approach sidesteps automating system-level tests and all the friction and fragility that come with it. Sometimes, the winning move is not to play! What I propose doesn't skip testing; it just changes the paradigm under which that testing is conducted.

The testing that end users do is going to be more comprehensive than any test plan can provide. Moreover, testing by end users will concentrate on the most frequently performed tasks.  

There are diminishing returns to increasing the number of users directed to the release candidate. That is, you will discover more defects increasing the number of users on the release candidate from 0% to 2% than you will from 25% to 50%. 

If you monitor error rates on the application, automation can be built to support continuous delivery. In other words, if the release candidate reveals no increase in error rates over the current live version, automation can make the switch based on thresholds you configure.
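
A sketch of that promotion logic follows. The error rates would come from whatever your monitoring system exposes; the tolerance value is a hypothetical placeholder for the thresholds you configure:

    def promotion_decision(stable_error_rate: float,
                           candidate_error_rate: float,
                           tolerance: float = 0.1) -> str:
        """Decide whether the release candidate becomes the active version.

        Promote when the candidate's error rate is no worse than the stable
        version's (within the configured tolerance); otherwise keep failing
        forward and re-evaluate after remediation.
        """
        if candidate_error_rate <= stable_error_rate + tolerance:
            return "promote"  # shift 100% of traffic to the candidate
        return "hold"

    # Example: error rates per 1,000 requests pulled from monitoring.
    print(promotion_decision(stable_error_rate=1.2, candidate_error_rate=1.1))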

This concept sounds scary,  but is it functionally different from what we experience today? We all see defects deployed to production despite our best testing efforts.  I'm just suggesting we use what we experience rather than pretend we can avoid it. 

Thanks for taking the time to read this article. I'm always eager for feedback.



Tuesday, March 8, 2022

Infrastructure Code and the Shifting Sand Problem

Infrastructure code (IaC) has a problem that application code does not: changes outside of infrastructure code impact its ability to function as intended. I call this the shifting sand problem. The goal is for infrastructure code executions (of the same version) to always produce the same result. In this sense, we want infrastructure code executions to be "repeatable" in the same way we strive to make application builds repeatable. When IaC executions aren't repeatable, unplanned work is the result. 

There are many sources of IaC shifting sand. I'll detail those sources and ways to mitigate the problem in this post. 

Common IaC shifting sand sources are listed below. They are caused by a mixture of technology change and organizational management procedures.
  • Automatic dependency updates
  • Cloud backend updates
  • Right-hand/Left-hand issues
  • Managing cloud assets from multiple sources

Automatic dependency updates means always consuming the latest version of IaC software or shared IaC code. Examples include Terraform and cloud provider versions, common IaC code versions, virtual machine image versions, operating system updates, and many more. Most automatic updates are put in place as a convenience: developers want to avoid the additional work of upgrading the versions used. The problem is that breaking changes in these dependencies will cause IaC code to stop functioning. Unplanned work results, and somebody must fix the issue, often at an inconvenient time with a looming deadline.

Some operating system and virtual machine image updates are security-related; anti-virus signature updates are one example. This type of update is often unavoidable. Moreover, the potential cost of delaying these updates can be considerable.

For avoidable automatic updates (those not security-related), the best mitigation is to explicitly specify, or pin, the versions of dependencies used. Never use "latest" or whatever happens to be newest. Let dependency upgrades be driven by changes needed for planned end-user enhancements. Then the upgrade work is planned and scheduled rather than inconveniently timed.
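
A simple guardrail is a check that fails the build whenever a version is left floating. Below is a hedged sketch that scans Terraform files for open-ended version constraints and "latest" image tags; the patterns are illustrative, not exhaustive:

    import re
    import sys
    from pathlib import Path

    # Illustrative patterns for floating versions in Terraform code.
    FLOATING = [
        re.compile(r'version\s*=\s*">=\s*[^"]*"'),  # open-ended constraint
        re.compile(r':latest"'),                    # floating image tag
    ]

    def unpinned(path: Path) -> list:
        """Return the patterns that match floating versions in one file."""
        text = path.read_text()
        return [p.pattern for p in FLOATING if p.search(text)]

    def main(root: str) -> int:
        failed = False
        for tf_file in Path(root).rglob("*.tf"):
            hits = unpinned(tf_file)
            if hits:
                failed = True
                print(f"{tf_file}: floating version(s): {hits}")
        return 1 if failed else 0  # non-zero exit fails the CI job

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))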

For unavoidable automatic updates (e.g., security-related ones), early detection through automated testing is best. In addition, apply these updates in lower-level environments on a regular basis. Forcing such updates into lower-level environments first increases the chance that issues not covered by automated testing are found before production.

Cloud backend updates are breaking changes introduced by cloud vendor software. While cloud vendors do attempt to make their changes backward compatible, they don't always succeed. Without naming names, I've been on support calls with cloud vendor technical support teams and seen them back out product changes or frantically release product fixes. I've also frequently had to scramble to upgrade IaC code to accommodate cloud vendor backend changes. As with the automatic update problem, unplanned (and inconveniently timed) work results.

Scheduled testing in a sandbox environment is the best mitigation strategy we've found. This strategy increases the chance of detection early before changes are actually needed in real environments. Depending on the components used, sandbox testing can be expensive to run. The more often sandbox testing is run, the earlier problems like this will be detected. Unfortunately, increased run frequency drives up costs. 

Right-hand/left-hand issues arise within the organization itself when one department makes changes with ramifications it doesn't see in other departments. One example I've seen frequently: a group in charge of security policy makes changes that effectively "break" IaC code maintained by other departments. In essence, tactics taken by that IaC code are no longer allowed. In this example, making the policy change is often necessary.

Early detection through scheduled sandbox testing (as described above) is the best mitigation strategy for the team maintaining IaC code for specific applications. The same tradeoff between frequency and cost applies.

Managing cloud assets from multiple sources occurs when something besides a single IaC pipeline manages a cloud asset. The most frequent example is manual change. When developers manually change assets that are managed by IaC, drift often results. That is, the cloud asset in reality differs from what is in the IaC pipeline.
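
Drift of this kind is detectable. Terraform, for example, exits with code 2 from "terraform plan -detailed-exitcode" when reality no longer matches the code. A sketch of a scheduled drift check (the stack path and alerting action are hypothetical):

    import subprocess

    def has_drift(stack_dir: str) -> bool:
        """Return True if deployed assets have drifted from the IaC.

        'terraform plan -detailed-exitcode' exits 0 when there are no
        changes, 2 when changes (drift) are detected, and 1 on errors.
        """
        subprocess.run(["terraform", "init"], cwd=stack_dir, check=True)
        result = subprocess.run(
            ["terraform", "plan", "-detailed-exitcode", "-no-color"],
            cwd=stack_dir, capture_output=True, text=True)
        if result.returncode == 1:
            raise RuntimeError(result.stderr)
        return result.returncode == 2

    if has_drift("stacks/networking"):
        # Alert the owning team rather than silently re-applying over
        # someone's manual change.
        print("Drift detected: investigate before the next apply")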

Another example is creating multiple IaC code bases to manage the same asset. I've seen this frequently in cloud asset tagging, which is frequently used for accounting charge-back purposes. Drift always results as multiple IaC code bases rarely come up with the same answer. As a result, all IaC code bases (except the one last executed) differ from reality. 

Bottom line: ensure that one and only one IaC code base manages each cloud asset. This prevents configuration drift. It saves aggravation and labor for staff who are otherwise left with the mystery of figuring out how drift happened. This is a topic I might explore more completely in another article.

I hope this helps.  I'm always interested in other types of problems you encounter maintaining IaC code.  Thanks for taking the time to read this article.