For a long time, companies have worried about attacks targeting cloud and distributed systems. While that worry hasn’t disappeared, many would be surprised to learn that their cloud systems aren’t their most significant vulnerability. Instead, legacy systems are quickly becoming a popular attack vector for threat actors trying to get around the up-to-date security features of cloud services. Legacy system security may soon pose a much bigger risk to companies than their newest cloud systems.
Why legacy systems are more vulnerable now
A legacy system still performs its intended function, but there’s a good chance it no longer receives the latest security updates. Companies may be reluctant to offload their legacy systems completely for fear of losing valuable data and functionality. However, there’s one major problem with keeping a legacy system: it doesn’t evolve.
Because legacy systems no longer grow and evolve with operations, companies are shifting to cloud-based systems that can scale. They keep their legacy systems intact rather than replacing them, but forgotten security protocols may add to the technical debt over time.
These on-premises systems require in-house teams to maintain and manage security updates and responses to threats. Cloud providers, by contrast, make it their business to push the latest security protocols; it’s their core job. As a result, companies have the latest security in the cloud but fall behind with on-premises systems.
And because legacy systems often remain connected to the corporate network, they offer a loophole that could allow threat actors to reach other critical data and systems. Keeping a legacy system in an on-premises data center no longer guarantees its safety, because so much focus is now on newer, faster cloud processing. Budgets gravitate toward cloud investment and away from maintaining on-premises data center best practices.
Critical legacy system vulnerabilities:
- Security features do not evolve: Cybersecurity is not a static field. Many legacy systems are not capable of implementing new security features—think multi-factor authentication or role-based access. In addition, outdated encryption gives a false sense of security, while a lack of audit trails complicates assessment.
- Legacy dependencies compound the issue: Even if a legacy system can update to the latest security features, what other dependencies can’t? If companies must use certain outdated hardware to continue using the legacy system, that creates another vulnerability.
- Orphaned hardware has no oversight: Legacy systems and their dependencies can become exposed externally over time. Company restructuring, mergers and acquisitions, leadership changes—all these things can leave behind software and hardware that no one takes ownership of.
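The three vulnerabilities above can be checked mechanically. As a minimal sketch, assuming a hypothetical inventory format (the `SystemRecord` fields and thresholds here are illustrative, not a real standard), an audit might flag stale patches, missing MFA support, and orphaned ownership in one pass:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical inventory record; field names are illustrative assumptions.
@dataclass
class SystemRecord:
    name: str
    last_patched: date
    supports_mfa: bool
    owner: Optional[str]  # None models "orphaned" hardware with no oversight

def audit(records, today, max_patch_age_days=180):
    """Flag systems exhibiting the legacy vulnerabilities listed above."""
    findings = []
    for r in records:
        if (today - r.last_patched).days > max_patch_age_days:
            findings.append((r.name, "stale patches"))
        if not r.supports_mfa:
            findings.append((r.name, "no MFA support"))
        if r.owner is None:
            findings.append((r.name, "orphaned - no owner"))
    return findings
```

Even a simple sweep like this surfaces the forgotten systems that restructuring and mergers tend to leave behind.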
Why businesses can miss legacy system attacks
Companies that keep legacy systems in their data centers usually intend to keep security cutting-edge. Plus, there is a pervasive belief that on-premises systems are inherently more secure than those with remote or cloud links. So it’s easy to let security updates fall behind while teams are engaged with other digital transformation road maps.
However, just as connected smart appliances let hackers into people’s homes and vulnerable IoT robotics opened loopholes in factories, legacy systems are a vulnerability. Companies might miss the first signs of an attack because the activity appears to be internal.
For example, rather than triggering an external-attack alert, hackers might gain entry through a legacy system. Once inside, their requests appear to come from the company’s internal network, delaying the response. The longer it takes to address an attack, the more damage can be done.
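One common heuristic for catching this internal-looking activity is to watch for a single internal source fanning out to unusually many hosts. This is only a sketch, assuming access logs reduced to (source, target) pairs; the threshold and function name are invented for illustration:

```python
from collections import defaultdict

def flag_lateral_movement(events, max_distinct_targets=3):
    """Flag internal sources touching unusually many distinct hosts.

    `events` is an iterable of (source_ip, target_host) pairs drawn from
    internal access logs; the threshold is illustrative, not a standard.
    """
    targets = defaultdict(set)
    for src, dst in events:
        targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) > max_distinct_targets}
```

Real detection stacks use richer behavioral baselines, but the principle is the same: internal origin alone is not evidence of legitimacy.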
How companies are managing legacy system security
Migrating data to the cloud and replacing legacy systems completely may not always be in the cards, especially for companies that aren’t technology-first organizations with Silicon Valley talent. It’s a vicious cycle: companies need talent that can maintain legacy systems, but fewer developers and IT professionals have knowledge specific to those systems. Business leaders either pay dearly to hire someone who does, or ask IT departments to cobble the skills together.
Businesses should pay close attention to newer security procedures for on-premises systems. Complete upgrades may not be possible while budgets flow toward cloud systems, but regular audits are critical to keeping legacy systems secure.
Inter-system security should be the second layer. A zero trust approach, for example, requires all users, inside and outside a company network, to continuously validate before being granted access. Because zero trust assumes there is no network edge, it can encompass cloud, on-premises, and hybrid network configurations.
NIST’s zero trust guidelines outline three core principles:
- Continuous verification: All resources, all the time, no exceptions
- Minimize the attack surface: Limit the damage that can happen once a breach occurs
- Automate context gathering: Learn from behavioral data to better identify unusual activity
These principles eliminate the traditional trust-based model and assume every vector is vulnerable. The approach adds another layer of complexity to operations, but it’s an essential step.
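The continuous-verification principle can be sketched in a few lines. Assuming short-lived signed credentials (the secret, names, and expiry scheme here are invented for illustration, not a production design), every request is re-validated regardless of whether it originates inside or outside the perimeter:

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; real deployments use a key service

def sign(user: str, expires: int) -> str:
    """Issue a short-lived credential: HMAC over the user and expiry time."""
    msg = f"{user}:{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(user: str, expires: int, signature: str, now: int) -> bool:
    """Zero trust style check: every request re-validates, with no implicit
    trust for traffic that merely looks internal."""
    if now >= expires:
        return False  # short-lived credentials force continuous re-verification
    expected = sign(user, expires)
    return hmac.compare_digest(expected, signature)
```

The design choice that matters is the absence of any allowlist for internal addresses: a request from the legacy subnet is verified exactly like one from the public internet.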
Unifying legacy systems into a modern data architecture is also key
Companies must also find ways to unify their critical legacy systems into the overall data infrastructure to close loopholes. Legacy systems aren’t safer on-premises if they’re forgotten structures in desperate need of security updates. Full integration with newer systems, by contrast, keeps them front of mind.
Attacks will only become more common as hackers exploit those vulnerabilities. While it may not be feasible to dump legacy systems completely, companies should take a long look at their governance policies. Modernization and prioritizing the decommissioning of the oldest systems may not be the easiest path, but old systems can stall cloud migration and digital transformation. It’s a threat companies can’t afford to ignore.