Moving Past the Blacklist

For many years there has been a gradual, but steady, acceptance that the "old" way of doing IT security was no longer good enough.

I’m sure that some people would disagree with my simplifying security strategies into two buckets of old and new, but bear with me. Old doesn’t mean dead; it means insufficient. I would categorize a few tenets of the old way of protecting an organization / enterprise as the following:

Server Level

  • Ensure that all organization servers are running up-to-date code
    • Doing this means scheduling regular update intervals and, depending on your architecture, could very well mean regularly scheduled downtime.
  • Grant access to resources only when required, and grant as little of it as necessary.
  • Set up regular audit intervals, where sysadmins verify that user x still needs access to y.
    • This means proper workflows for user creation need to be established from the start, along with clear directions on who can grant access to what. Who is the second party to sign off on such decisions? These directives need to be written down and updated when personnel changes are made. Limiting access to only what is needed is often a battle with the limitations of ACL systems, and with deciding how much time should be spent on making the “perfect” ACL.
  • Computing assets need to be either:
    • Tracked from a project standpoint – i.e., how long should we keep this running? With clear end dates recorded for every asset (an asset being a server, a website, anything).

      OR

    • Tracked with setup dates – and clear sunset timelines, where assets must be announced as due for automatic shutdown unless someone says, yes, I use that (see the sketch after this list). Sometimes even a yes isn’t sufficient, depending on how much technology has changed. Things may need upgrading, migration, and so forth.
  • Access granting needs to be tracked via some method, with dates, the specific assets allowed, and who approved what.
  • Architectural changes need to be reviewed and recorded by a team of people, before changes that could have security impacts are made. Trust in the crowd – accepting that there is often no one right answer, only different implications. Reach out for guidance when the in-house familiarity with a product is low.
    • This should be done for many reasons, not just security.
  • Encrypt server assets wherever the performance / capability cost is less than the potential organizational impact of stolen data.
  • Use multiple networks, one for assets that will be touched by the outside, one for assets that will not. Treat the assets that live in the network reachable from the outside with a special level of paranoia.
    • Not to spoil anything, but the biggest changes are in this mantra.
  • Use firewalls – open ports only when necessary, and only to the limited set of assets that need it.
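
To make the sunset idea from the list above a bit more concrete, here is a minimal sketch of what a daily sunset audit could look like. The CSV layout and column names (name, owner, setup_date, sunset_date) are my own invention for illustration – in practice you would feed this from whatever your asset tracker actually exports.

```python
# A minimal sketch of a sunset audit over a hypothetical asset inventory.
# The CSV columns (name, owner, setup_date, sunset_date) are assumptions,
# not a standard -- adapt to whatever your tracking system exports.
import csv
from datetime import date, datetime, timedelta

WARNING_WINDOW = timedelta(days=30)  # announce shutdowns a month ahead


def audit_assets(inventory_path: str) -> None:
    today = date.today()
    with open(inventory_path, newline="") as handle:
        for row in csv.DictReader(handle):
            sunset = datetime.strptime(row["sunset_date"], "%Y-%m-%d").date()
            if sunset <= today:
                print(f"SHUT DOWN: {row['name']} (owner: {row['owner']}) "
                      f"passed its sunset date on {sunset}")
            elif sunset - today <= WARNING_WINDOW:
                print(f"ANNOUNCE: {row['name']} is due for automatic shutdown "
                      f"on {sunset} unless {row['owner']} re-confirms it")


if __name__ == "__main__":
    audit_assets("assets.csv")  # hypothetical export from your asset tracker
```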

End-User Level

  • End-User assets should require authentication, with on-disk encryption as well.
  • Mandate virus protection and updates on Windows endpoints (trust Macs and Linux).
    • Obvious changes are necessary here…
  • Limit a user’s ability to install things on company assets.
  • Decide how to handle the BYOD (Bring Your Own Device) problem from a policy standpoint. Technical control would be nice, but ah well – I guess we just hope.
    • It’s easy to see how this attitude was and remains foolish – but it is fair to say that technical methods of controlling access to organization resources (including the network) have only recently gained acceptance outside of universities.
  • Train users not to click on random links, enforce a good password policy, and hope that they listen…
    • Sadly, a policy without a method of ensuring compliance is doomed to failure.
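
Since I just said that policy without compliance checking is doomed, here is a toy example of what actually checking could look like – assuming a hypothetical endpoint inventory where each machine reports when its antivirus definitions were last updated. The field names, sample data, and the seven-day threshold are placeholders, not a recommendation.

```python
# A toy compliance check over a hypothetical endpoint inventory.
# Field names and the sample data are invented for illustration; in practice
# this would come from your endpoint management / antivirus console.
from datetime import datetime, timedelta

MAX_DEFINITION_AGE = timedelta(days=7)

endpoints = [
    {"hostname": "fin-laptop-01", "os": "windows", "av_definitions_updated": "2019-07-01"},
    {"hostname": "dev-laptop-07", "os": "macos", "av_definitions_updated": "2019-05-12"},
]


def out_of_policy(records, now=None):
    """Yield (hostname, age_in_days) for machines with stale AV definitions."""
    now = now or datetime.now()
    for record in records:
        updated = datetime.strptime(record["av_definitions_updated"], "%Y-%m-%d")
        age = now - updated
        if age > MAX_DEFINITION_AGE:
            yield record["hostname"], age.days


for host, age in out_of_policy(endpoints):
    print(f"{host}: antivirus definitions are {age} days old -- out of policy")
```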

This is by no means an exhaustive list. It amounts to a starting point of what was, and to a certain extent remains, “best practice” – a dangerous term that gets thrown around. If you ask a security expert what best practices are, and then ask a sysadmin in a fast-growing small business… you will get two very different answers. To imagine that there is a “best” is to think that computers are somehow divorced from the normal rules of life, that you can have something for nothing. Putting aside the topic of what’s best though, we can sum up the general perspective on security as one of:

“The outside is scary and full of hackers trying to get in. Put up lots of walls. If someone on the inside does something bad, fire them.”

This perspective may have made some (but not a lot of) sense, at least at one time. I think a better way to sum up the new paradigm of security would be one of:

“The outside and the inside are scary... and full of hackers. They will get in. How long before we detect them, and how many levels of containment did they break through?”

This may sound more like CDC / NIH speak than IT terminology, but just as infectious disease experts have long since accepted that outbreaks will occur, we as IT professionals must accept that breaches will occur. That doesn’t mean we give up, but it does mean that we approach security differently.

If you think that you can absolutely protect against every type of intrusion with some GPOs (Group Policy Objects), an anti-virus suite and good “policy” – you need to move past the 1990s. In fact it wasn’t true back then either – but few enough organizations got visibly burned by thinking this that the mentality was allowed to persist. Enough breaches have occurred of late that management teams are more often willing to listen to doomsday scenarios without sticking their heads in the sand.

If you are an IT professional and you think sticking your head in the sand IS an effective response, consider another field of work. Not just for your sake, but for the sake of all those around you.

What’s an IT manager to do though? How on earth do we limit the damage? First, accept that security is part of the cost of doing business (be it for-profit or non-profit). It’s expensive. There is no way around that.

You do not get to choose whether to invest in IT; you only get to choose how long you would like to exist as an organization. If you are OK with existing no more than a few weeks, go ahead and ignore what I’m about to say. Otherwise, you are going to have to dedicate a large part of your total organizational budget to IT – or use paper and pencil. You can’t build a bridge “on the cheap” – it just collapses. IT is no different. The fact that people deny the bridge will collapse if you don’t do it right doesn’t change that the bridge will in fact collapse.

Second, stop trusting your employees (and their assets) with everything. You don’t have to distrust your users on a personal level. Most people are honest, good folks. That doesn’t change the fact that every single user is an entry point. View a single user as a firewall hole. Would you open a hole in a firewall (or request someone to) without also setting up some process to monitor the activity on the server behind it? Users are no different. You have to monitor everything going on with their assets. This is not the same as monitoring everything the user is doing from a workflow standpoint. Micro-managing your employees, on the other hand, does indicate that you don’t trust them.
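
As a small illustration of treating a user like a firewall hole, here is a minimal sketch that watches an authentication log and flags bursts of failed logins per user. The (timestamp, user, result) event format, the ten-minute window, and the threshold of five are all assumptions standing in for whatever your own systems actually emit.

```python
# A minimal sketch of monitoring a user the way you would monitor a firewall
# hole: watch the activity and flag bursts of failed logins. The event format
# (timestamp, user, result) is a stand-in for whatever your systems emit.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # failed attempts inside the window before we flag


def failed_login_bursts(events):
    """events: iterable of (timestamp, user, result) tuples, oldest first."""
    recent = defaultdict(list)
    for when, user, result in events:
        if result != "failure":
            continue
        recent[user] = [t for t in recent[user] if when - t <= WINDOW]
        recent[user].append(when)
        if len(recent[user]) >= THRESHOLD:
            yield user, when


# Hypothetical sample: six failed logins for one account in six minutes.
sample = [(datetime(2019, 7, 15, 9, m), "pat", "failure") for m in range(6)]
for user, when in failed_login_bursts(sample):
    print(f"{when}: review account '{user}' -- repeated failed logins")
```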

Third, stop treating end-user assets as “theirs”. Your employees aren’t in your organization to listen to the music they like with their application of choice. They are there to fulfill a role, and the reality is that letting them do anything on company assets (be it the device or the network) introduces risk. Thankfully, more and more applications run within the relative safety of a web-browser sandbox. This means that allowing web browser access lets your end users do any number of healthy things at work that don’t directly relate to their position. Listening to Pandora (if you have the bandwidth) doesn’t make an employee a bad one. Of course, the more we trust something like a web browser, the more draconian we have to be about ensuring it gets updated. Even then, zero-days on web browsers are common. You could deny access to everything but a web browser; if you piss off the right group of people, you will still be hacked.
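
To show what “draconian about browser updates” might look like in practice, here is a small sketch that compares installed browser versions against a minimum acceptable version. The inventory, the browser names, and the version floors are placeholders; real data would come from your patch management or MDM tooling.

```python
# A small sketch of enforcing a browser version floor. The inventory and the
# minimum versions are placeholders; real data would come from your patch
# management or MDM tooling.
MINIMUM = {"firefox": (68, 0), "chrome": (75, 0)}  # illustrative floors only

installed = {  # hypothetical per-host report: host -> (browser, version)
    "reception-pc": ("firefox", "60.2"),
    "hr-laptop": ("chrome", "75.0"),
}


def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


for host, (browser, version) in installed.items():
    floor = MINIMUM.get(browser)
    if floor and parse(version) < floor:
        wanted = ".".join(str(part) for part in floor)
        print(f"{host}: {browser} {version} is below the floor {wanted} -- update or block")
```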

Fourth, stop making policy papers and putting them on a shelf. There are any number of real-world technical ways to force policy compliance. Do them. All of them. Your staff will hate you at first, until they hear about their friend’s / partner’s organization being breached and everything having to be burned down. Then they will be happy to have:

  • 802.1X authentication, wired and wireless
  • VLANs to isolate people who need one level of access from people who don’t, and server assets that serve one purpose from assets that serve another.
  • No single sign-on to everything. This was always a bad idea, at least without some two-factor as well. Yes, it annoys people. You can’t make surfing around key data both super easy and super secure. When someone says it’s easy and secure, they are full of it. Stop listening to market-speak and use your brain.
  • Figure out what to do about passwords – and I don’t mean complexity requirements. There are any number of products that can help with this, but be aware that any password manager will be an obvious target of hack attempts. Think through your strategy here very carefully.
  • Access auditing of all the things
  • A staff dedicated to reviewing all of those audit logs. If you don’t have a security staff who spends most of their day doing this, and you have more than 50 people… you are doing it wrong. Even if you have under 50 people, someone should be tasked with this on a daily basis.
  • A staff of people dedicated to reviewing the security patches that are released daily, and either sufficient people to design and maintain the complex, no-single-point-of-failure systems we all desire, or off-hours staff (and an acceptance of small outages) to install and reboot things. Again, if this is one person and you have over 50 people… you are doing it wrong.
  • A regularly scheduled breach test. We have fire drills to test our disaster response plans; why should information security be any different?

We need to stop treating data as “the website”. Data has increasingly become people’s lives – financial data, health data, locational data – all of it could be used to hurt, and yes, even kill people. Stop treating it like it’s just data.

Finally, the blacklist… may it rest in peace. It no longer serves a functional role. We have entered the day of the whitelist and of machine learning for all things. Once upon a time, even firewalls operated on a blacklist basis: you only blocked known “bad” things. That changed over time to a whitelist model, where you assume all things are bad and have to prove otherwise. SIEM software can also use machine learning to block things based on behaviors, not simply ports / IPs / hashes.
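
The difference between the two models fits in a couple of lines of logic: a blacklist permits anything it has not been told about, a whitelist denies it. The port numbers below are illustrative only.

```python
# The shift in one line of logic: a blacklist permits anything it has not been
# told about, a whitelist denies it. The rule sets here are illustrative only.
BLOCKED_PORTS = {23, 135, 445}   # old model: enumerate the known bad
ALLOWED_PORTS = {22, 443}        # new model: enumerate the known good


def blacklist_allows(port: int) -> bool:
    return port not in BLOCKED_PORTS   # unknown traffic slips through


def whitelist_allows(port: int) -> bool:
    return port in ALLOWED_PORTS       # unknown traffic is dropped


for port in (443, 8080):
    print(f"port {port}: blacklist says {blacklist_allows(port)}, "
          f"whitelist says {whitelist_allows(port)}")
```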

This model now must extend beyond the firewall, beyond the RBAC level (SELinux and similar solutions), all the way to telling users: you get to run what I tell you to, when I tell you to, and nothing else.

This doesn’t have to be as horrible as it sounds, given things such as certificate signing, or even better, code-signing and RBAC. And no, this doesn’t prevent all the bad software. Hacks on code-signing certificates can and do happen. All of that being said, it’s a heck of a lot harder to break into an organization where every single application must be approved.
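
As a sketch of what “every single application must be approved” can boil down to, here is a minimal hash allow-list check. In real deployments the enforcement lives at the OS level (code signing, AppLocker, SELinux policy and the like); this only illustrates the default-deny decision, and the approved hash is a made-up placeholder.

```python
# A minimal sketch of an application allow-list keyed on file hashes. Real
# enforcement belongs at the OS level (code signing, AppLocker, SELinux and
# friends); this only illustrates the default-deny decision. The hash below
# is a made-up placeholder, not a real binary.
import hashlib
import sys

APPROVED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000": "example-approved-app",
}


def is_approved(path: str) -> bool:
    with open(path, "rb") as handle:
        digest = hashlib.sha256(handle.read()).hexdigest()
    return digest in APPROVED_SHA256


if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "approved" if is_approved(path) else "DENY -- not on the allow-list"
        print(f"{path}: {verdict}")
```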

The days of blacklist + (simple) heuristic analysis software – i.e., antivirus – have come to a close. We must accept the hassle of whitelist-based / code-signing software into our work lives, and soon our personal lives will necessitate such security as well. That’s a topic for another day though.

(edited 2019-07-15 as part of website migration)