Author: Jeremy Nees, Chief Product & Technology Officer – The Instillery
2019 is of course well underway, and technologies that barely escaped the lab through 2017 and 2018 are now entering the mainstream across enterprise and government alike. Here is what we expect to see more of in 2019.
Use it wisely
The Instillery has been supporting businesses with multi-cloud architectures since 2016. 2018 saw more businesses looking for true data portability and incorporating multi-cloud into their technology strategy, but not always with a view as to how it would be used in the business. 2019 is the year businesses start really using it to deliver tangible business value. That is all about understanding where you can be tactical with multi-cloud. Far from sitting in the camp of just building out multiple cloud platforms for the fun of it, we see multi-cloud as a way both to extract value from existing investments (licensing is a big one here) and to mobilise nimble project teams.
An example would be hosting your corporate workloads in one cloud, and having projects with different requirements in another. Why would you do that? It could be about the availability of a particular service that relates to your project, or it could be a team's experience with a cloud platform that allows them to get going quickly. After all, one of the reasons we like the public cloud is the agility it enables and the pay-for-use consumption model – so why be too quick to wrap it up in red tape?
Instead, businesses will allow for certain elements of “shadow IT”. A smart approach will balance this with one that rationalises the creation of tech debt – that is, making sure all the pieces of the jigsaw puzzle fit together.
Ground control to Major Tom
Private cloud is re-emerging as a key part of a platform strategy. Again, it is about being tactical in how it is used. If the public cloud has taught us one thing, it is how to consume compute as a resource. This has forced the maturation of private cloud into a more consumable form; however, there are obvious limitations. Resources are not infinite – capacity planning is a must, which makes bursting workloads less suited to hybrid cloud. Similarly, there aren't the operational efficiencies provided by the likes of wide-ranging platform services. Sure, there will be DBaaS-type offerings on private cloud; however, the range of PaaS and “as-a-service” options narrows as we take back a level of operational control. Where hybrid will be used effectively is for steady-state workloads, some of which are more challenging to move to public cloud.
Also, where the capex budget is favoured, private cloud will be a go-to. In NZ we may not see many private deployments of Azure Stack or AWS Outposts due to our limited scale. Instead, this space is likely to be dominated by VMware with composable infrastructure for the time being.
And the small matter of a hypothesis.
Big data has become one of the most used tech buzzwords of the last five years. Ever since the story of Target mashing data together and discovering a teenager was pregnant sent a whirlwind through the industry, it has seemed that having something termed “big data” in the digital strategy was a must. In fact, it justified budgets, because this wasn't just a tech thing – “data is the new oil”! The simplistic approach was the idea that putting a whole lot of data together in one place would create magic.
Now what is emerging is a trend towards small data: a smaller, more focused dataset is used, and a tightly formed hypothesis is tested against it.
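The small-data approach boils down to a few lines of analysis: a focused sample, one well-formed hypothesis, and a simple test. A minimal sketch in Python, using only the standard library – the scenario and all figures below are invented for illustration:

```python
import random
import statistics

# Hypothetical question: did a checkout redesign change the average basket size?
# Two small, focused samples (values are made up for illustration).
before = [42.0, 38.5, 45.1, 40.2, 39.8, 44.0, 41.3, 43.7]
after = [46.2, 44.8, 47.5, 43.9, 45.6, 48.1, 44.3, 46.9]

def permutation_test(a, b, trials=10_000, seed=42):
    """Estimate how often a random relabelling of the pooled data produces
    a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / trials

observed, p_value = permutation_test(before, after)
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

No data lake required: when the question is tight, a dozen rows and a permutation test can give you a defensible answer in minutes.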
This is why AWS Lake Formation was one of my favourite announcements of AWS re:Invent.
See the announcement from re:Invent.
A square peg that may fit in a round hole
The adoption of containers to date has largely been focused on a single use case – app development. Containers are ideal for running lightweight and isolated instances to execute code and have typically been deployed for single-purpose processes. One container to one process. However, another use case has begun to emerge, and that is using containers primarily as a vehicle for abstraction from a cloud provider's hypervisor. Essentially, unshackling clients from a specific platform or hosting provider.
A container by nature is highly portable. That is, it can be picked up and transported almost anywhere, and can operate abstractly from the underlying platform. While this has always been a benefit of containers (it’s basically in the name), it has not generally been exploited outside of development environments.
Further to this, the barrier to entry for running containers well has been substantially lowered. All major cloud providers now have managed Kubernetes offerings, and they are maturing at a rate of knots. This means less time building and configuring container networks or maintaining hosts, and more time building and deploying your containers.
Containers are set to become a common component that most businesses use somewhere. They are not just for developers.
Cloud native security
Not an oxymoron
While there is a natural desire to “stretch” your favourite brand of firewall into the public cloud and call it secure, doing so almost certainly has three consequences. One, you are adding cost. Two, you are adding overhead. Three, you are adding complexity. Familiarity is its own vice, and this is often why we do what we have always done.
Cloud native security allows extremely granular and auditable controls to be baked into the build and configuration of workloads. It also functions on cloud services that don’t sit in your network and goes well beyond network security settings. DevSecOps teams will define cloud-native security policies as part of the application and workload configuration.
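Defining security policy as part of the workload configuration can be illustrated with a toy policy-as-code check – the policy rules, workload shape, and field names below are all invented for illustration, not any particular cloud provider's API:

```python
# A minimal policy-as-code sketch: security rules expressed as data and
# checked against a workload definition before it is deployed.
POLICY = {
    "require_encryption_at_rest": True,
    "forbid_public_ingress": True,
    "allowed_regions": {"ap-southeast-2", "us-west-2"},
}

def check_workload(workload: dict) -> list:
    """Return a list of policy violations for a workload definition."""
    violations = []
    if POLICY["require_encryption_at_rest"] and not workload.get("encrypted"):
        violations.append("storage is not encrypted at rest")
    if POLICY["forbid_public_ingress"] and "0.0.0.0/0" in workload.get("ingress", []):
        violations.append("ingress is open to the public internet")
    if workload.get("region") not in POLICY["allowed_regions"]:
        violations.append(f"region {workload.get('region')} is not approved")
    return violations

workload = {"encrypted": False, "ingress": ["0.0.0.0/0"], "region": "us-east-1"}
print(check_workload(workload))
```

Because checks like this run against configuration rather than network traffic, they can sit in the deployment pipeline and block a non-compliant workload before it ever exists.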
Tools like Dome9 (recently acquired by Check Point) will provide security admins with detailed and actionable insights into public cloud security, across multiple cloud platforms. This includes compliance automation, identity protection, visibility and enforcement.
The acceptance of cloud-native security as the way to secure cloud resources will come from the realisation that it provides better security than you could previously practically achieve.
Will there be anything else?
SD-WAN has entered the mainstream. Now it is quickly going to establish itself as the only way to deploy a corporate WAN. And why wouldn't you? SD-WAN has regularly been seen as an all-or-nothing competitor to MPLS networks, but in reality, it is complementary.
SD-WAN can be used with commodity internet circuits, 4G links, MPLS connections and high-throughput data centre connections. That is, of course, the very point: SD-WAN is abstracted from the underlying connectivity medium. On top of that, you get numerous benefits that are not available with just a basic router on the end of a private link.
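That abstraction can be sketched as a simple policy engine: the overlay picks the best underlying link per traffic class, regardless of the medium. The link names, metrics and traffic classes below are invented for illustration, not any vendor's implementation:

```python
# Toy SD-WAN-style path selection: per-class policy over heterogeneous links.
# Link metrics are made up for illustration.
LINKS = {
    "fibre-internet": {"latency_ms": 12, "loss_pct": 0.1},
    "mpls": {"latency_ms": 18, "loss_pct": 0.0},
    "4g-backup": {"latency_ms": 45, "loss_pct": 1.5},
}

def pick_link(traffic_class: str) -> str:
    """Choose an underlay link based on a simple per-class policy."""
    if traffic_class == "voice":
        # Voice is loss-sensitive: prefer the cleanest link, then the fastest.
        return min(LINKS, key=lambda l: (LINKS[l]["loss_pct"], LINKS[l]["latency_ms"]))
    # Everything else just takes the lowest-latency link available.
    return min(LINKS, key=lambda l: LINKS[l]["latency_ms"])

print(pick_link("voice"))  # prints "mpls"
print(pick_link("bulk"))   # prints "fibre-internet"
```

A real SD-WAN controller measures these metrics continuously and re-steers traffic as link conditions change, which is exactly what a static router on a private link cannot do.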
SD-WAN will rapidly become the default way to provide business connectivity.
See how Sanford
It's life, but not as we know it.
How we manage systems and applications is changing as rapidly as the platforms on which they are built. This heralds a change in how we should, and will, manage our various resources. It will also change the talent and skills we need in our business.
Code pipelines (CI/CD) will become a must-have in any business that is building its own digital assets. Add to that a shift from visual management systems to defining infrastructure as code – you will template and configure your resources using tools like Terraform rather than via a GUI. As the diversity of platforms you manage increases, new-world tools like Automox (for automated patch management) will provide policy-driven approaches across a broad range of systems.
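The infrastructure-as-code shift boils down to declaring desired state as data and letting tooling compute the changes, much as a `terraform plan` does. A minimal sketch of that reconciliation idea – the resource names and sizes are invented for illustration:

```python
# Desired state is declared as data; the tool diffs it against actual
# state and plans the create/update/delete actions needed to converge.
desired = {
    "web-server": {"size": "t3.medium", "count": 3},
    "database": {"size": "db.r5.large", "count": 1},
}
actual = {
    "web-server": {"size": "t3.small", "count": 3},
    "cache": {"size": "t3.micro", "count": 1},
}

def plan(desired: dict, actual: dict) -> list:
    """Compute the actions needed to move actual state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

print(plan(desired, actual))
# prints ['update web-server', 'create database', 'delete cache']
```

The point is that the GUI clicks disappear: the definition lives in version control, the plan is reviewable, and the same declaration can be applied across every platform you manage.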
These trends primarily revolve around allowing digital strategies to be executed effectively and at speed by maintaining choice. Key to this is making good decisions about the right balance between abstracting your systems from providers and platforms.