I’m here for the second day of Taia Global’s Spooks & Suits conference. For the morning’s session, I attended “What’s the Downside of Private Sector Offensive Engagement?”, a panel comprising Jeffrey Carr, David Dittrich (who wrote this comprehensive blog post on hackback), Robert Bigman, Dmitri Alperovitch (CrowdStrike) and *EDIT* Greg Hoglund. Dr. Anup Ghosh moderated the panel. This was a lively discussion, as you can imagine. I apologize for my lack of specificity, but I’m going to conflate hackback/active defense throughout this post, though there is a difference.
Jeffrey Carr opened, noting that blowback could accompany hackback unless it is done under the protection of law or a regulatory framework. Mr. Carr went on to discuss irresponsible firms that operate as “cyber ambulance chasers” looking to take advantage of data breaches. His point was to question whether we want to give private firms hackback powers if these firms have acted irresponsibly in the past. If we ever want to consider hackback/private sector active defense, some form of oversight is important.
Dave Dittrich followed, arguing that a company considering hackback/active defense should perform a comprehensive stakeholder analysis, justifying their choices based on clear ethical principles. Mr. Dittrich generally advised caution when considering hackback.
Dmitri Alperovitch presented a measured, reasoned argument in favor of stabilization actions. To be abundantly clear (as Mr. Alperovitch was), CrowdStrike does not hack back. He suggested that we need clarification in our policies and the law. Perhaps Congress could enable private sector actors to take stabilization actions (note that stabilization actions differ from retribution). Indeed, “this is not about vigilantism, this is not about taking extra-judicial action because it feels good.” If the private sector were to take stabilization actions, it should be done responsibly, with records, and not to the detriment of law enforcement.
Robert Bigman noted that hacking back is a stupid idea for CSOs because they’ll have to defend the network when something worse comes back at them. Mr. Bigman agreed that the law needs clarification.
*EDIT* Greg Hoglund also offered some wonderful contributions to the panel. I very much apologize for failing to note his thoughts in my first iteration of this blog post. I integrated many of his thoughts into some of the general themes below. At the risk of falsely attributing (you see what I did there?) comments to Mr. Hoglund that he did not make, I’ll leave this section as is. However, I do remember that he forcefully and persuasively argued for private sector active defense and even hackback under limited circumstances.
After the panelist presentations, the discussion opened up into a very interesting question-and-answer session. A few notable themes/questions:
- The panel disagreed over the proper response to cyberexploitation. Everyone agreed companies are bleeding IP. However, is the proper response to batten down the hatches and plug holes? Does plugging holes work if there will always be new holes? Do you go to the FBI/USG? If so . . .
- The panel disagreed over the private sector’s desire for government help. One of the panelists suggested the private sector turns down government help. A member of the audience suggested the private sector just doesn’t want a set of bureaucratic guidelines that will quickly be out of date.
- The “gray” in the law (primarily the CFAA) was a big theme. For example, some sort of active traceback (like when Georgian authorities traced a Russian hacker, broke into his computer, and took a photo of the hacker using his own camera) would be useful for a private firm to offer to law enforcement or use in litigation, but it’s unclear whether that is legal under the CFAA. What’s more, although DOJ probably won’t prosecute, honeypots are also of uncertain legality under the CFAA. This discussion illustrated that the CFAA is out of date. Essentially, we need to get rid of the gray and bring on the red.
- Interesting concept . . . if a hackback compromises an “innocent computer” in a botnet (i.e., grandma’s computer), is that computer so innocent? Are these “low hanging fruit” computers actually innocent third parties, or, through their negligence, accomplices to a crime? Sort of a similar analysis to whether a person is directly participating in an attack under IHL. Obviously there is a question of intent on the part of grandma, but still . . .
- Disturbingly, the panel suggested that we may get to the point where hackers don’t look to steal IP, but rather “nuke” companies by wiping all of their computers (similar to Saudi Aramco). This raised fears that small- to medium-sized companies could be destroyed by simple malware designed to wipe hard drives.
- Finally, companies are currently engaged in hackback.
*Disclaimer*
These blog posts are my informal summary of these speaker panels, so don’t take these as official quotes from the speakers. Also, I’m under the impression that Taia Global is not recording these discussions, so my intent here is to memorialize their content rather than steal any of Taia Global’s thunder. Again, all credit to the speakers and Taia Global.
My thoughts
This panel was sharply divided, but there was notable agreement: the need for clarification under the law. Let’s amend the CFAA and clarify this, one way or the other. Unfortunately, I don’t know what the impetus for this action would be. The panel discussed whether a firm would ever hack back and go public to make a point; sorta like a test case, perhaps in the court of public opinion, perhaps in a real court. Absent some significant event like that, would Congress tackle a complicated and politically sensitive topic like this? Extremely doubtful. In Maxwell Public Policy speak, we need a policy window. To get the window, we need an event. The event is unlikely to occur. Etc., etc.