How to Get Ahead of Container Cryptojacking

Linux container technology introduces several pathways through which cryptojackers can stealthily do their work.

The use of Linux container technology is increasing, and so, too, are container security issues. Cryptojackers are finding ways to leech resources for profit via container vulnerabilities. Here's what's needed to spot, stop and prevent such invasions.

Cryptocurrency profits are made by chewing resources to make hashes. Cumulative hashes represent work, and work represents value in a blockchain. On a good day, this translates into value that represents real currency. Depending on the currency and how it’s mined, almost all crypto software wants plenty of CPU; memory and storage don’t matter very much. The CPU pedal is floored for any instance because CPU represents work, then a payday. Less CPU per instance, but thousands upon thousands of instances, provides a cumulative payday. No data is exfiltrated and no company secrets are sold, so cryptojacking is reported less often than other breaches: it’s a leeching of resources, not an injury to privacy or data assets.

Some mining software can be throttled by an installer/bot so that the process doesn’t scream "CPU LOAD!" and trip policy violations, which, in turn, would make even a drowsy analyst wonder what’s going on in an instance, or many instances. Instead, there will be a steady stream of consumption. The stream isn’t usually random up/down behavior; rather, it will be 45.255 percent or some other number that’s a steady-state rate (that is, static resource robbery).
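A minimal sketch of how that kind of static consumption might be flagged. The function name, thresholds and sample data are illustrative, not taken from any particular monitoring product:

```python
import statistics

def looks_throttled(samples, max_stdev=0.5, min_level=20.0):
    """Flag a CPU-utilization history that sits at a suspiciously flat,
    nontrivial level -- the 'static resource robbery' pattern.
    samples: per-interval CPU percentages for one instance."""
    if len(samples) < 10:
        return False  # not enough history to judge
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    # Legitimate apps swing up and down; a throttled miner hugs one number.
    return mean >= min_level and stdev <= max_stdev

# A miner pinned at ~45.25 percent vs. a normal bursty workload
miner = [45.25, 45.26, 45.24, 45.25, 45.25, 45.27, 45.24, 45.25, 45.26, 45.25]
bursty = [2.0, 90.0, 5.0, 60.0, 1.0, 80.0, 3.0, 70.0, 2.0, 85.0]
print(looks_throttled(miner))   # True
print(looks_throttled(bursty))  # False
```

A real detector would also need to allowlist workloads that legitimately run flat-out, such as batch encoders.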

Containers: No Safety in Numbers

Jacking software can show up in instances made from monolithic apps or in container instances. Miners show up in routers, Drupal, WordPress, Docker images and more.

If there are thousands of containers, or if the container fleets/pods are dynamic in nature, detection becomes more difficult. Instances spin up, expand, contract, then are wiped in the highly scalable world. This makes rate limits difficult to pinpoint because there’s no cumulative trigger to a policy that’s monitored by a control plane manager app. Just before a high load or a steady-state load might trigger something, poof, the instances are gone--as they’ve been directed to perform. No fuss, no muss, but you were robbed just the same. The robber got away with just a few pennies, but when hundreds of thousands of instances appear and go away, the cumulative effect is significant.

Where did these instances come from? There are several infection vectors. The easiest is from containers that aren’t inspected and are in constant reuse. Many organizations thoroughly vet containers, applying sometimes-Draconian rigor to check repos for pristine, totally CVE-vetted source containers. But get a busy group with lots of changes and trampled-on QA, and, well, Monero Happens.
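As a crude first pass at that kind of image vetting, a script could walk an exported image filesystem and flag files named after known mining binaries. The deny-list and helper below are illustrative only; real scanners match hashes and CVE data rather than filenames:

```python
import os

# Illustrative deny-list; a production scanner would use signatures, not names.
KNOWN_MINER_NAMES = {"xmrig", "minerd", "cpuminer", "xmr-stak"}

def suspicious_paths(paths):
    """Given file paths from an exported container rootfs, return those
    whose basename matches a known mining binary name."""
    return [p for p in paths
            if os.path.basename(p).lower() in KNOWN_MINER_NAMES]

print(suspicious_paths(["/usr/bin/xmrig", "/usr/bin/python3"]))
```

A name check like this catches only the laziest repacks, which is why layered CVE scanning still matters.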

Another infection vector is the unwitting public-facing container fleet, as honeypot researchers have shown. The default security in CN/CF instances is only slowly mandating isolated network paths, although storage pools are now getting better about not exposing landed data to the universe. In honeypot tests, bots infected Docker instances in less than three days of exposure on a public-facing IP. Profiteers are hungry, and their bots are efficient--and the hackers now know enough about Docker to snack on Docker fleets.
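One hypothetical self-check in that spirit: parse a host's dockerd daemon.json and flag a configuration that exposes the unauthenticated Docker API on all interfaces. This is a sketch; a real audit would also check systemd unit flags and TLS settings:

```python
import json

def docker_api_publicly_exposed(daemon_json_text):
    """Return True if dockerd is configured to listen on an unauthenticated
    TCP socket bound to every interface (e.g., tcp://0.0.0.0:2375)."""
    cfg = json.loads(daemon_json_text)
    for host in cfg.get("hosts", []):
        if host.startswith("tcp://") and ("0.0.0.0" in host or "[::]" in host):
            return True
    return False

risky = '{"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]}'
safe = '{"hosts": ["unix:///var/run/docker.sock"]}'
print(docker_api_publicly_exposed(risky))  # True
print(docker_api_publicly_exposed(safe))   # False
```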

Still another method is subterfuge, an inside job in which containers are substituted purposefully. Thousands of instances can help provide a nice pension fund. A colleague, having looked long and hard at forensics records, believes this has happened to one of her cloud apps. The evidence was destroyed, but a backup revealed a fleet of infected containers and a Kubernetes script that permitted outbound traffic to anywhere on IPv6. Yes, she has a suspect. No, he doesn’t work there anymore. Were things not in the hands of their legal department, or had the subterfuge been part of a required reportable breach, you’d have read about it. Internal, external, head-desk laxness, or loose and fast, the result is the same: snacking on organizational assets for profit.
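The traffic-to-anywhere problem above can be spotted in a parsed Kubernetes NetworkPolicy. This sketch (a hypothetical helper, applying standard NetworkPolicy semantics) flags policies that either don't restrict egress at all or contain an egress rule with no "to" selector, which matches every destination:

```python
def egress_wide_open(policy):
    """Return True if a parsed Kubernetes NetworkPolicy leaves egress
    unrestricted. Per NetworkPolicy semantics, a policy without 'Egress'
    in policyTypes doesn't restrict egress, and an egress rule with no
    'to' selector permits traffic to any destination."""
    spec = policy.get("spec", {})
    if "Egress" not in spec.get("policyTypes", []):
        return True  # this policy never restricts egress
    for rule in spec.get("egress", []):
        if not rule.get("to"):
            return True  # empty 'to' == anywhere
    return False

wide = {"spec": {"policyTypes": ["Egress"], "egress": [{}]}}
scoped = {"spec": {"policyTypes": ["Egress"],
                   "egress": [{"to": [{"ipBlock": {"cidr": "10.0.0.0/8"}}]}]}}
print(egress_wide_open(wide))    # True
print(egress_wide_open(scoped))  # False
```

Note that Kubernetes combines all policies selecting a pod, so a scan like this must consider every policy in the namespace, not just one.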

How are infected instances detected? The low-hanging-fruit methods look for high CPU utilization or static utilization. Software CPU loads are often highly dynamic, but race conditions that floor CPUs are indicative of problems, and steady-state consumption may not look strange for many apps until one considers that apps are often quiescent until they need to do something. Optimized apps have CPU utilization that looks like a rollercoaster. An exact, unwavering load is suspicious.

With luck, there are also outbound traffic limitations imposed that either monitor or, by policy control, stanch traffic from back-end servers except to “legitimate” front-end sources. There are also low-load monitoring solutions that are part of larger fleet control platforms, like Aqua and Twistlock. There has been an explosion in software-defined networks (SDNs) for container fleets, each with its own methodology for network deployment, trapping unlikely traffic types and destination port addresses, and other methods of identifying cryptojacked payday traffic.
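A toy version of that back-end egress policy, splitting observed flows into allowed and suspect against a front-end allowlist (the names and data structures here are invented for illustration):

```python
def filter_egress(flows, allowed_frontends):
    """Partition observed back-end egress flows into allowed and suspect,
    mimicking a policy that permits traffic only to known front ends.
    flows: iterable of (source, destination) pairs."""
    allowed, suspect = [], []
    for src, dst in flows:
        (allowed if dst in allowed_frontends else suspect).append((src, dst))
    return allowed, suspect

flows = [("db-1", "web-1"), ("db-1", "198.51.100.7")]
allowed, suspect = filter_egress(flows, {"web-1"})
print(suspect)  # [('db-1', '198.51.100.7')]
```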

It’s also possible to use container-construct apps like Envoy with Zipkin or Jaeger. Envoy is a sidecar communications container infrastructure that builds meshes of apps (with network service discovery), and Zipkin does elemental execution monitoring. Zipkin is great at timing transactions, and miners disrupt timing expectations consistently. Jaeger can employ a number of different types of instance sampling, via a plugin method, to see whether instances are within a desired policy. At a lower level, Puppet, Chef and other apps can query instances to sift for known cryptojacking apps or other evidence of file changes not within a desired spec--not unlike the Aqua and Twistlock methods.
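In the spirit of those timing checks, a monitor could compare recent transaction latencies against a learned baseline, since a hidden miner starving the CPU tends to slow every request. The helper and threshold below are illustrative, not part of the Zipkin or Jaeger APIs:

```python
import statistics

def latency_regression(baseline_ms, recent_ms, factor=1.5):
    """Crude tracing-style check: flag an instance whose recent
    transaction latencies have drifted well above its own baseline,
    as a CPU-starved service under a miner slows across the board."""
    return statistics.median(recent_ms) > factor * statistics.median(baseline_ms)

print(latency_regression([10, 11, 12], [30, 29, 31]))  # True
print(latency_regression([10, 11, 12], [11, 12, 10]))  # False
```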

Traffic filtering doesn’t stop a cryptojacking program from stealing CPU, but it does prevent the payday. That said, it only treats a symptom rather than removing the leech. It's important to enforce role-based policies for teams and to vet each and every container image through a container parser.

The ideal way to stanch mining profits is to look for network behavior that isn’t paired correctly. Proxies can send traffic bound for known mining sites into a black hole, but such lists are highly transient, and destinations can change whimsically; there is no comprehensive list of coin cashers. Flagging odd outbound conversations is a great way to monitor security in general, but finding the culprit is sometimes very challenging and time-consuming.
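A simple baseline comparison along those lines -- flagging outbound destinations never seen before -- might look like this illustrative sketch (not a feature of any particular product):

```python
def unpaired_destinations(observed, baseline):
    """Return destinations present in current outbound traffic that never
    appeared in the learned baseline -- candidates for mining-pool
    callbacks, since deny-lists of pool addresses churn too fast to
    be the whole answer."""
    return sorted(set(observed) - set(baseline))

print(unpaired_destinations(["web-1", "web-2", "pool.example"],
                            ["web-1", "web-2"]))  # ['pool.example']
```

In practice the baseline itself must be learned during a window believed to be clean, or the miner's destination simply becomes part of "normal."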
