
Why the Zero Trust Model Is Ultimate Trust

The zero trust model is the final departure from the concept of the secure perimeter.

Some people view the zero trust model as surrender. Conceptually, it is an admission that there are no secure perimeters in computing, anywhere. In other words, nothing is trustworthy.

Long ago, the simple act of network address translation stopped probes at a common point of network traffic ingress and egress. Then the stateful firewall was born, tracking connections so that replies were admitted only after an "internal" host had contacted an external one. Tools like Linux iptables further established tables of permitted relationships between internal and external hosts, along with the specific ports at each host that could be accessed. The default setting became Deny All.
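To make the default-deny idea concrete, here is a minimal Python sketch of such a rule table. The hosts and rules are hypothetical, and this is not iptables syntax; the point is simply that traffic passes only when an explicit host-and-port relationship has been allowed.

```python
# A minimal sketch of an iptables-style rule table: explicit allow
# entries, everything else denied by default. All entries hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_host: str   # internal host, or "*" for any
    dst_host: str   # external host
    dst_port: int   # port permitted at the destination

# Only these relationships may pass.
ALLOW = {
    Rule("10.0.0.5", "203.0.113.10", 443),
    Rule("*", "198.51.100.7", 25),
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return Rule(src, dst, port) in ALLOW or Rule("*", dst, port) in ALLOW

print(permitted("10.0.0.5", "203.0.113.10", 443))  # True: allowed pairing
print(permitted("10.0.0.9", "192.0.2.1", 80))      # False: default deny
```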

The Internet required port 80 (and often 8080 and others) to be open on users' machines, and through those openings the enemies of stability marched in the door. Then email-borne phishing malware rose, ready to eat the lunch of hapless users who thought they were being warned about something important. Phishing, and especially spear-phishing, still works reliably.

Firewalls evolved. Endpoint security evolved. Various technologies were spawned to prevent an infected user, especially one outside the ostensible organizational security perimeter, from working with internal systems and assets. VPNs were one such technology, encrypting the network circuit between hosts and anchored by varying methods of authentication (usually static certificates).

Another evolution, network access control (NAC), was built on the idea that an application would check endpoints to ensure they were patched, updated, and scanned for viruses before they were admitted to a network, with their traffic then circuit-encrypted. It was a lofty idea. The BYOD phenomenon arrived to thwart those lofty ideals through the sheer diversity of endpoints that had to be permitted. Authentication couldn't keep up.
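The NAC admission idea can be sketched in a few lines of Python. The field names below are hypothetical, not any vendor's schema; the point is that every posture check must pass before the endpoint is admitted.

```python
# A minimal sketch of a NAC posture check: an endpoint reports its
# state, and admission requires every check to pass. Fields hypothetical.
REQUIRED_PATCH_LEVEL = "2018-03"

def admit(endpoint: dict) -> bool:
    """Admit only endpoints that are patched, updated, and recently scanned."""
    return (
        endpoint.get("patch_level", "") >= REQUIRED_PATCH_LEVEL
        and endpoint.get("av_signatures_current", False)
        and endpoint.get("last_scan_clean", False)
    )

print(admit({"patch_level": "2018-04", "av_signatures_current": True,
             "last_scan_clean": True}))   # True: admitted, traffic encrypted
print(admit({"patch_level": "2017-11"}))  # False: quarantined
```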

As with VPNs, an external host would present credentials, often a certificate or a username/password pair (or more), and an encrypted circuit would be instantiated. Conversations were both encrypted and authenticated. This worked until the encryption methods were found to be substandard (take PPTP and NTLM, for example), or until keyloggers or outright theft captured both the encryption components and a single sign-on (SSO) authorization. Several generations of VPNs emerged to tackle each substandard component, but the bad guys got in. They still do.

VDI was invented to allow interaction with internal organizational resources by sending session screens back and forth across network wires. Users controlled the session as though the computing resource were another computer system, accessing it as a remote host. Remote session access became the domain of Citrix, Microsoft, VMware and many others. But this model exacted high costs in hardware (to support the internal "mimed" session), network speed, and screen geography mapping, as well as serious user support issues. VDI is still used to provide the luxury of asset control, but at the cost of limiting resources (such as disabling free Internet surfing and mail).

VDI evolved into persistent versus non-persistent instances (persistent images can be infected), and then into application virtualization, where a thin presence sits on an endpoint while the compute portion and most of the application logic sit somewhere else in a protected "fortress." Even CAD is now done via VDI, so that remote workers can sit in pajamas and design bridges. The bridges might be secure.

Much can be blocked with firewalls, but not all, and the entry points for malware break-ins are web surfing, email, and successful device probes with subsequent hijackings. A foothold into an organization may have been gained years ago, or yesterday. Regardless, once "in," a bad actor remains, either dormant or slowly, carefully (and usually automatically) probing whatever traffic an infected device can sniff from the network wires. Then: Evil happens.

Two Paths to the Zero Trust Model

The zero trust model has two pathways to implementation. One keeps the firewall in control; the other is based on the theory behind the cloud access security broker (CASB): proxy (and sometimes network circuit) authentication for each and every resource access. Both pathways also add traffic monitoring, anomaly detection and, depending on the vendor, least privilege models.
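What distinguishes the second pathway is that authentication happens per access, not per circuit. Here is a minimal Python sketch of the idea, with hypothetical tokens, devices, and policy entries; it is not any vendor's API.

```python
# A minimal sketch of the CASB-style pathway: a proxy re-checks user,
# device, and policy on every single resource access, rather than
# trusting a previously opened circuit. All entries hypothetical.
VALID_TOKENS = {"tok-alice"}             # issued by an identity provider
REGISTERED_DEVICES = {"laptop-0042"}     # known, managed endpoints
POLICY = {("tok-alice", "payroll-app")}  # least-privilege grants

def authorize(token: str, device: str, resource: str) -> bool:
    """No ambient trust: every request re-checks user, device, and policy."""
    return (
        token in VALID_TOKENS
        and device in REGISTERED_DEVICES
        and (token, resource) in POLICY
    )

print(authorize("tok-alice", "laptop-0042", "payroll-app"))  # True
print(authorize("tok-alice", "laptop-0042", "hr-database"))  # False: no grant
```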

Palo Alto Networks takes the traditional firewall and makes it the nexus of the zero trust model, managing encryption keys and network circuits while examining the traffic it holds the keys to for anomalous behavior (and more).

Centrify breaks the zero trust model into application, endpoint, infrastructure and analytics services, without a firewall appliance (although it does need a strategically located VM or host) acting as an intermediary among all devices talking on an internal or external network. In the Centrify model, every user is validated to every application, and user and device are validated together, with the least privileges possible defined for each user. Centrify then builds models of what "normally" talks to what, with which device, and from what location, so that access attempts from strange places or strange devices can be flagged as potentially bad behavior.
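The baseline-and-flag approach described above can be illustrated with a short sketch. This is not Centrify's implementation, and all names are illustrative: record which user/device/location/application combinations are normal, then flag combinations never seen before.

```python
# A minimal sketch of behavioral baselining: count "normal"
# (user, device, location, application) combinations, flag novel ones.
from collections import Counter

baseline = Counter()

def observe(user, device, location, app):
    """Build the model of what normally talks to what, from where."""
    baseline[(user, device, location, app)] += 1

def suspicious(user, device, location, app, threshold=1):
    """Flag combinations seen fewer than `threshold` times."""
    return baseline[(user, device, location, app)] < threshold

observe("pat", "laptop-0042", "office", "crm")
print(suspicious("pat", "laptop-0042", "office", "crm"))    # False: normal
print(suspicious("pat", "tablet-9001", "overseas", "crm"))  # True: flag it
```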

Zero trust can mean managing hundreds--or perhaps hundreds of thousands--of organizational devices. This becomes an administrative chore, especially at implementation. But when just one device can rob an organization of its assets and reputation, the investment is well worthwhile.
