No matter how much you try to spin it, cybersecurity is a cost. You can implement cheap or even free solutions to address a wide range of problems, but there has yet to be a demonstration of security adding real, sustainable value. JPMorgan Chase spends a half-billion dollars on security, but if they got breached tomorrow it would all seem for naught. Discouraging? Absolutely, but it’s a fight we must continue, because ‘the big one’ will come, and the only thing that gets us back to some semblance of normal is those who know the problems and how to fix them (even if it’s after the fact).
The security of the Internet of Things (or lack thereof) is all about scale. Secure IoT devices are a thing, or at least they can be. Layering security mechanisms and protocols on top of the existing IoT “infrastructure” is, or can be, a thing as well. That we CAN do these things is one matter; that we CAN’T do them in a meaningful timeframe is the issue. The sheer scale of replacing all such devices in even a minuscule segment of our critical infrastructure – or even a well-automated widget factory – boggles the mind and vaporizes the budget. Every law passed dealing with IoT security is a law that benefits our grandchildren, not us.
No, the boss does not need to know your cell phone number. No, the boss does not need $50,000 wired to this totally new bank account. If it sounds odd, pick up the phone and call. Bad guys are trying to exploit your sense of duty and trust every. Single. Day.
The idea that a victim of the Equifax compromise who was slow on the draw to file for their $125 in compensation is now left out in the cold is yet another reminder of the lack of seriousness (or power) on the part of regulators. There are few people in this country who have not had their personal information stolen at least once. That means any random Joe or Jane has more than enough free credit monitoring to go around. What they lack is cash for their troubles, or more importantly, something that inflicts pain on the careless or incompetent parties who did them wrong. As we talked about last time, a $5B fine for Facebook seems like a big deal until you realize how much Facebook makes in a year. If the government is serious about holding organizations and their leaders accountable for security, it shouldn’t be settling for pennies on the dollar.
On one end of the security spectrum you have “independent researchers” looking for exploitable vulnerabilities and following a disclosure protocol of some sort, or participating in a bug bounty program for recompense; on the other you have situations like X and Muddy Waters, essentially using the market to turn a profit. Now we’ve got something in between: the False Claims Act. In a nutshell, if you have evidence that someone sold something to the government that they knew was faulty or otherwise did not meet expectations, you can claim a portion of the penalty the government extracts from the vendor as a reward.
Attribution is a double-edged sword. On the one hand, there is value in knowing who is behind attacks against your enterprise; on the other, you’re not compensated for identifying whodunit, you’re compensated for making your numbers. That means getting back to work as quickly as possible. In such situations, attribution is an expensive distraction. If you’re not in the military, government, or critical infrastructure, attribution is your security nerd’s science project. What do you need to do to get back to business?
A generalized capability to monitor threats and coordinate responses is a smart idea, but it’s a tough gig. Too many alerts, too much data, not enough time…the list goes on and on. While some hope ML and AI are the answer, that promise is still a long way off. How to deal with burnout and churn now? Literally a billion-dollar question.
Another horrific act, another bad guy’s locked cell phone, another call for weakening cryptography and the installation of ‘secure’ backdoors. It sounds cold, but the price we pay for living in a free society is risk. No matter how horrific the event, the moment you open a back door (actual or metaphorical), you’ll never close it again. The idea that some store of evidence can’t be examined is frustrating to be sure, but the idea that there isn’t ample evidence to do what needs to be done is laughable.
The ability to monitor social media platforms in real time to detect threats is an understandable capability for a security-oriented agency, but as we’ve seen in the recent past: more data == more problems. Let’s pretend for a moment that a system can be built to access all of this data (reasonable), and that it can sort legitimate threats from people shooting their mouths off with no real intent – because that’s 90% of internet traffic anyway – (far less reasonable), and then let’s say the system successfully identifies the fraction of a percent of posts that are real threats: who’s going to deal with them? There are not, and will never be, enough cops to address this issue. The traditional law enforcement approach cannot work at the proper scale. The responsibility has to fall on the platforms. If we really want change in this arena, we have to start at the root.
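The scale problem above can be made concrete with some back-of-the-envelope base-rate math. A minimal sketch – every number here is an illustrative assumption, not a real platform statistic:

```python
# Back-of-the-envelope base-rate math for automated threat detection.
# All figures below are illustrative assumptions, not real platform numbers.

posts_per_day = 500_000_000      # assumed daily post volume
real_threat_rate = 1e-6          # assume 1 in a million posts is a real threat
true_positive_rate = 0.95        # assumed classifier sensitivity
false_positive_rate = 0.01      # assume 1% of benign posts get flagged anyway

real_threats = posts_per_day * real_threat_rate
benign_posts = posts_per_day - real_threats

flagged_real = real_threats * true_positive_rate
flagged_benign = benign_posts * false_positive_rate
total_flagged = flagged_real + flagged_benign

# Of everything the system flags, what fraction is actually a threat?
precision = flagged_real / total_flagged

print(f"Flags per day: {total_flagged:,.0f}")
print(f"Real threats among them: {flagged_real:,.0f}")
print(f"Precision: {precision:.4%}")
```

Under these assumptions the system flags roughly five million posts a day, of which only a few hundred are genuine – on the order of ten thousand false alarms per real threat. That funnel, not classifier accuracy, is why “hire more cops” can’t be the answer.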
You wouldn’t believe it when it’s Amazon listening in; you shouldn’t believe it when it’s Microsoft. No offense to either outfit, it’s just that we all know your conversations are far too valuable to be allowed to drift off into the ether. If they can’t be monetized now, eventually they will be, and those recordings will be worth their weight in gold. If the convenience of such devices outweighs your concern about security (not your threat model), knock yourself out. If you haven’t considered such devices a threat until now, it’s time to do some math.
The chorus is growing stronger and louder by the day. Take care of the basics, and you take a whole lot of threats off the table (or at least minimize them). It’s (usually) simple, unglamorous, and the opposite of sexy security stuff, but it works, and isn’t that really what we’re going for?
It is, generally speaking, easier to go through life assuming everyone is operating with the best of intentions. But after the umpteenth time a tech company – especially a social media company – “accidentally” violates policies and promises, you have to ask yourself: are they really that careless or stupid? Perhaps a better question: do they think we’re that stupid?
If you’ve been targeted by phishing and successfully managed to avoid becoming a victim, understand that the fight isn’t over. The bad guys will come at you from different angles. One of the prices we pay for the convenience and speed of the Internet is constant vigilance against threats – threats that will vector in via any device with a CPU and a network connection, no matter what form it takes (PC, tablet, phone) or how ‘smart’ it is (voice assistant, connected thermostat, connected appliance, your car, etc.).
The cloud is a lot of things, but fool-proof it is not. There are those who argue that trusting ‘someone else’s computer’ is a mistake, but if you cannot afford Amazon-, Google-, or Microsoft-level security, then that ‘someone else’ is still far better suited to defend that computer than you are. Having said that, if you don’t implement the security mechanisms available to you – cloud or on-premises – nothing is going to save you.
What’s cheaper than building security into your products from the beginning? A bug bounty.
Do you really need that app? Do you? Is the value you get out of it going to outweigh the potential risks associated with what you put into it (e.g. personal information)? I know this can be a lot to ask in the heat of the moment, but it’s worth asking anyway, since the granularity of the data we’re giving up to bad guys is getting finer all the time.