Cyber Threat Briefing: Cloud Security

In recent years, businesses across various sectors have been migrating operational services to the cloud, leveraging the flexibility this brings – a trend that was accelerated by the pandemic and shows no sign of slowing down. Gartner predicts that by 2025, 85% of enterprises will adopt a cloud-first computing approach, compared to just 20% in 2020. In the fourth episode of our live Cyber Threat Briefing, Adversary Simulation Lead Aaron Dobie and Risk Advisory Senior Director Craig Moores discuss some of the benefits of migrating to the cloud, why a shared responsibility model matters, and how to implement secure by design and a compliant cloud infrastructure.

Security and operational benefits

Our session began by highlighting two of the better-known benefits of moving to the cloud, namely flexibility and scalability. As Aaron explained, these are great because they mean you can fit the solution to your business, but there are plenty of security and operational benefits, too.

Native segregation, for example. Something companies tend to struggle with is implementing segregation within their internal networks, so that only the systems that genuinely need to talk to each other can do so. By default, however, cloud providers all but force it upon you. With Amazon Web Services (AWS), for example, there are virtual private clouds (VPCs), and users are encouraged to put each project or solution in its own VPC; anything beyond that requires explicit external routing. So if one VPC does get compromised, it becomes much more difficult for an attacker to move on to other services or other high-value pieces of infrastructure (often referred to as lateral movement).
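
As a rough illustration of that default isolation, here is a minimal sketch using boto3 (the AWS SDK for Python); the project names and CIDR ranges are placeholders, not anything from the briefing:

```python
# Minimal sketch: one VPC per project, using boto3 (AWS SDK for Python).
# Project names and CIDR ranges are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Two VPCs have no network path between them unless you explicitly add
# one (peering, a transit gateway, etc.), which is the native
# segregation that limits lateral movement.
for name, cidr in [("project-a", "10.0.0.0/16"), ("project-b", "10.1.0.0/16")]:
    vpc = ec2.create_vpc(CidrBlock=cidr)
    ec2.create_tags(
        Resources=[vpc["Vpc"]["VpcId"]],
        Tags=[{"Key": "Name", "Value": name}],
    )
```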

Another benefit of the cloud is easier integration with automated DevOps pipelines. Using infrastructure-as-code, organizations are deploying their products and services by linking development and production infrastructure (making use of segregation controls!) with the same tools they use to manage their code. This increased flexibility means you can shift security left in the software development lifecycle and push out new features, or identify and respond to issues, much more quickly.
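
To make "shifting left" concrete, here is a hypothetical pre-deployment check a pipeline could run; the template format is a simplified assumption for illustration, not any specific provider's schema:

```python
# Hypothetical CI gate: fail the build if any resource in a (simplified,
# made-up) infrastructure template opens SSH to the whole internet.
import json
import sys

def find_open_ssh(template):
    offenders = []
    for name, resource in template.get("resources", {}).items():
        for rule in resource.get("ingress", []):
            if rule.get("port") == 22 and rule.get("cidr") == "0.0.0.0/0":
                offenders.append(name)
    return offenders

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        template = json.load(f)
    offenders = find_open_ssh(template)
    if offenders:
        print(f"Blocked by policy: SSH open to the internet in {offenders}")
        sys.exit(1)  # a non-zero exit fails the pipeline stage
```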

Cost of ownership and expenditure

Previously, if you were setting up a new service that also required infrastructure, you’d face a significant capital expense in buying that infrastructure just to be able to start working on the project. The cloud, however, allows you to shift those up-front costs into operating costs that come through on a monthly basis, so you can spread the overall expense over a longer period of time instead of paying a large sum up front. Each cloud provider also gives you the opportunity to buy credits up front, if that fits your finance model better. But on the whole, most companies would rather hold onto that capital, redistribute the funds to further the business, and pay a lower monthly operating cost for their services. So it’s a nice move in terms of financing.
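
As a purely illustrative comparison (the figures below are invented, not from the briefing), the trade-off looks something like this:

```python
# Invented figures for illustration: an up-front hardware purchase
# versus an equivalent pay-monthly cloud bill over the same period.
capex_hardware = 120_000   # servers, networking, installation
monthly_cloud = 3_000      # roughly equivalent managed capacity
years = 3

cloud_total = monthly_cloud * 12 * years
print(f"On-premises, paid up front: ${capex_hardware:,}")
print(f"Cloud, spread over {years} years: ${cloud_total:,}")  # $108,000
```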

Shared responsibility model

No matter which cloud provider you use, there will be a shared responsibility model. From raw networking and hardware to hypervisors and running your own code, the model will determine what the cloud provider will maintain responsibility for, what they will ensure is secure and, more importantly, what your organization’s responsibilities are. Typically, there’s limited to no overlap. There will be a section of things that you, as a consumer, are responsible for and a section of things that the cloud provider is responsible for.

With on-premises infrastructure, you’d be responsible for everything. With a platform-as-a-service (PaaS) offering, you manage little more than the data, and perhaps some customizations on top of that, so your share of the responsibility is much smaller. Although cloud providers often make their shared responsibility matrix publicly available, a key issue we often see is that clients don’t fully understand where the demarcation of responsibility lies. If there’s an oversight and you assume the cloud provider is covering something, they will refer you back to the model in the first instance; most matrices are laid out quite clearly, so it’s important they are understood.
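
The exact split varies by provider and service, but a rough sketch of how responsibility typically shifts across service models looks like this (illustrative only; consult your provider’s published matrix for the authoritative version):

```python
# Illustrative only: who typically owns each layer under each model.
# Your provider's published matrix is the authoritative source.
RESPONSIBILITY = {
    # layer               on-prem   IaaS        PaaS        SaaS
    "physical/network": ("you",    "provider", "provider", "provider"),
    "hypervisor":       ("you",    "provider", "provider", "provider"),
    "os and patching":  ("you",    "you",      "provider", "provider"),
    "application":      ("you",    "you",      "you",      "provider"),
    "data and access":  ("you",    "you",      "you",      "you"),
}
```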

Implementing secure by design

There are two main points when it comes to implementing a secure model. The first is making sure you understand what you need: your requirements for the application or service you’re setting up, but also for your wider cloud environment as a whole. The second is actually implementing it in a way that fits those requirements and adding further hardening steps, such as enforcing relevant validation activities and defining profiles that restrict what users can do (following the principle of least privilege).
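
For instance, a least-privilege profile in AWS might look like the sketch below; the bucket name is a placeholder, and the policy grants read access to one bucket and nothing else:

```python
# Minimal least-privilege sketch (AWS IAM policy syntax); the bucket
# name is a placeholder. A role with this policy can read one bucket
# and do nothing else.
import json

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::example-project-data",    # for ListBucket
                "arn:aws:s3:::example-project-data/*",  # for GetObject
            ],
        }
    ],
}
print(json.dumps(read_only_policy, indent=2))
```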

It’s important to remember this isn’t a ‘lift and shift’ of the environments that have been running in physical data centers for years; migration is the perfect opportunity to re-architect. And because the cloud lets you flexibly deploy and update hosts, it’s rare that you’ll need to make a large-scale change like this again. The underlying software is updated automatically, so you simply have to keep adjusting your codebase to make sure it remains current. The cloud pushes organizations to consider and implement a more dynamic environment, which obviously has its own operational costs, but ultimately it should deliver an improvement for security as a whole.

Compliant cloud infrastructure

Our session then moved on to consider the best way to validate that everything in your cloud infrastructure is operating as it should. The key thing the cloud offers, which wasn’t traditionally possible with on-premises infrastructure, is ongoing validation: continuously assessing the environment to make sure it matches the templates and controls you have put in place. However, it’s wise to balance that with ‘Point in Time Assessments’ as well.

‘Point in Time Assessments’ help you understand the current snapshot state so you can look for areas of improvement. With ongoing validation, you can start to offload a lot of auditing controls into technical controls, and many cloud providers offer services that can be leveraged to do this. Many implement a variety of controls by default that alert when they fail, and you can add further controls to monitor other areas; these can be tested every minute of every day and alert you as soon as something deviates from what is expected. So if a developer creates a system within an environment that doesn’t comply with the security baseline, you can be alerted and investigate before it becomes a security concern.
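
Managed services (AWS Config, Azure Policy, and similar) do this natively, but a stripped-down sketch of the underlying idea in Python might look like this:

```python
# Sketch of a scheduled baseline check: flag any security group that
# allows inbound traffic from anywhere. A managed service would raise
# this alert for you; the code just shows the idea.
import boto3

ec2 = boto3.client("ec2")

def noncompliant_groups():
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                yield sg["GroupId"]

for group_id in sorted(set(noncompliant_groups())):
    print(f"Baseline deviation: {group_id} allows inbound from 0.0.0.0/0")
```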

Whilst there is complexity in operating within a cloud environment that differs from the legacy systems organizations are used to managing, the security model that can be built out within the cloud offers a lot more granularity and flexibility for organizations to secure their environments. However, that additional complexity and granularity are almost a double-edged sword. Yes, you have much more control, and it’s much easier to implement fine-grained policies and deploy things quickly, but that brings the potential for substantial cost: for example, hosts that are spun up but not kept track of, or development projects that are abandoned but left running. While there are lots of benefits to migrating to the cloud, including potential cost and time savings, if things aren’t managed appropriately, those costs can spiral.
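
One simple guard against that kind of sprawl is to look for running hosts nobody has claimed; the ‘Owner’ tag below is an assumption about your tagging policy, not a built-in AWS convention:

```python
# Sketch: list running EC2 instances that have no "Owner" tag, a common
# way orphaned hosts (and surprise bills) accumulate. The tag key is an
# assumed team convention, not an AWS default.
import boto3

ec2 = boto3.client("ec2")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                print(f"Untracked host: {instance['InstanceId']}")
```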

To learn more about best practices when it comes to migrating to the cloud, including how to implement secure by design and compliant cloud infrastructure, you can watch the full briefing here.
