Sasha Romanosky’s computer career began at age 13, when his sister won a Commodore 64 at a spelling bee. “I don’t think she ever saw it after she won it,” he confessed with a laugh. “I confiscated it.” Born and raised in Canada, Romanosky earned a BS in electrical engineering from the University of Calgary and a PhD in public policy and management from Carnegie Mellon University. Now a policy researcher at the Rand Corporation, he writes and speaks often about cybersecurity and the law. These days he’s based in Washington, D.C., where he learned a lot about contentious relations between the public and private sectors during a year as a policy adviser to the U.S. Department of Defense.
Legal BlackBook: So what hooked you on the Commodore 64 at age 13?
Sasha Romanosky: Back then I was playing the games, hacking the games. They had a cartridge that could freeze the operating system and dump you into assembly—almost the lowest level of programming for the computer—and then you could play around in memory. There was a small area that controlled all of the game’s parameters, so it was fairly easy to goof around with things.
LBB: How did you get into cybersecurity?
SR: Before there was cybersecurity it was information security. I started doing this professionally after college, working at an ISP [internet service provider]. We were setting up internet services, firewalls, e-commerce sites for people. It was all pretty simple. Until the late 1990s it wasn’t even considered security. After that it was information security, and then somehow the term morphed into cybersecurity, which I think just came from the government and military, because for them “cyber” encompasses electronic warfare. And even to this day, old-school information security people get a twinge when they hear “cyber” because it’s almost like a goofy marketing term.
LBB: These days, some of your work focuses on corporate governance, compliance and corporate crime, areas that are of great interest to lawyers. In what context did you begin delving into this arena?
SR: It started when I was looking at the data breach litigation and state data breach disclosure laws, and whether those had an effect on firms and consumers. These are laws that require companies to notify you when they suffer breaches. And there are good policy questions as to whether the laws are working. Plaintiffs were always losing these cases. We wanted to study what those cases look like. What are the causes of action that are brought? When are data breaches more likely to be litigated, when are firms more likely to be sued, when are those cases more likely to settle, what do the settlements look like? All of that.
LBB: Can you give us a summary of what you found?
SR: First, we found that only a very small percentage of data breaches end up being litigated. A decade ago, the litigation rate was in the high teens, but recently it has fallen to 3 to 4 percent. So the probability that any firm would be sued is already pretty low. However, firms tend to be sued more when the breach relates to financial information and less when the firm provides credit monitoring right away. As for the results, it seems that about half of all these cases settle (with the other half being dismissed right away). However, “settled” typically means that only the named plaintiffs recover, and they get a few thousand dollars each. In addition, firms were more likely to settle when the breach involved medical information. However, we saw no strong correlation between settlement and class action certification, or allegations of violations of statutes with statutory damages. Overall, though, the biggest finding came from coding each of the causes of action from all 200-plus cases. We found over 90 unique causes of action, including common law (torts, contracts, etc.) and state and federal statutes. Contrast that with financial securities law, where there is a single federal statute under which plaintiffs can bring an action.
LBB: You’ve given lots of presentations on these topics to lawyers. Are you still doing that?
SR: Yes. And lawyers are always involved. I’m also getting into the cyber insurance area, which means more insurance people are involved. But there are always lawyers around.
LBB: Is this all from your Rand work?
SR: This is all Rand research. And it has a nice evolution. Looking at the security laws got into the litigation stuff, which then got into the story of costs for all of these incidents. Which then got into insurance because the companies are interested in what all these things cost, and the insurance companies are interested in what all these things cost.
And everyone wants to know: How do we protect ourselves best? What kind of insurance do we need? How much do
LBB: You were a cyber policy adviser to the U.S. Department of Defense for a year that ended last September. Can you tell us about your government work?
SR: There’s a federal statute that enables people to go back and forth from government to the private sector—or nonprofits anyway. This is a vehicle that the federal government can use to get experts to help them out for a short time. I was in cyber policy in OSD [Office of the Secretary of Defense]. It deals with all kinds of cyber issues to help inform the secretary to make decisions. If the Department of Defense is going to engage in cyber operations, there needs to be careful thought and understanding of how you do that—authorities and capabilities and agreements and all that. But the department also needs to defend itself, and I worked mainly on the defensive side.
LBB: We’re interested in the sometimes complicated relationships between the government and corporations. Can you talk about that?
SR: Private-public partnerships are a big priority for the secretary. There are lots of ways that can happen. One of them is what they call DIUx, the Defense Innovation Unit Experimental. It’s about fostering a culture, working with startups to help develop new technology. And not all for robots that shoot guns. There are lots of innovations that could help. It’s that kind of an R&D partnership. There’s information sharing, cooperation between DOD and the defense industrial base, which is a whole collection of companies that supply support to DOD. Maybe they’re cyberattacked and DOD wants to know about that.
There’s a larger question if a company in the U.S. is cyberattacked. What is the role for DOD in helping protect them? Generally the answer is, “There is no role.” DOD is not involved, nor should it be involved in defending or protecting some company that gets hacked. That’s the role of the FBI—until it becomes a national security issue. So if the whole country was suffering some kind of distributed denial of service attack, then one of the primary roles of the Defense Department is to protect the country. Only if and when that were to happen would DOD step in. There are conversations that DOD may have with infrastructure companies, telecommunications companies, finance companies to understand how resilient we are. What do we need? Are there any gaps? And what are the roles and responsibilities for different people? There’s what’s called Defense Support of Civil Authorities, which is what gets triggered when a state has a natural disaster and needs to call the military to help out with a hurricane or an earthquake or whatever. There’s a similar interest in figuring out how DOD could help a state with a cyberattack.
The federal government-state government partnership really needs to be negotiated. And as you can imagine, there are lots of electrical companies and other infrastructure companies that support state and federal operations and bases and military installations, so there needs to be an understanding and cooperation there. Who’s protecting what, and who manages what at what times? So there are lots of ways that DOD interacts with the private sector.
LBB: Did you observe tension between government agencies and the private sector over cybersecurity issues?
SR: Yes. Everyone gets ticked off by everyone else. Everyone wants more information and better information and quicker, faster, stronger. One of the big issues that’s kind of a firestorm is the vulnerabilities equities process (VEP). It’s the U.S. government policy around what they do with the special kinds of vulnerabilities that they may know about. If a vulnerability exists that no one else knows about but the U.S. government, where lots of computers are vulnerable in the U.S., then there’s an equity decision, a decision of whether the government should tell everyone about that so that they can patch their systems and be more secure, or whether they should hold it temporarily and use it for intelligence collection. Maybe U.S. systems are vulnerable, but maybe an adversary’s systems are also vulnerable and maybe they also want to collect intelligence on that adversary. How do you weigh that? How do you balance that? The private sector, of course, is always very adamant that you should tell us every single time, and do it right away. The U.S. government, you can understand, has a different kind of role. They see the interests of everyone and everyone’s equities. And so this VEP process is contentious.
Recently the National Security Council released the charter for the VEP, which is the set of policies and procedures around how it makes these decisions and what the whole process looks like, and who’s involved in decision making and all that. And it’s been fairly well received. I think most reasonable people understand that it’s not an easy decision. And the U.S. is really one of the only countries that has such a process. Most other countries may just use the vulnerabilities and really don’t care or don’t tell their citizens what they’re doing and why. It’s a small piece of cyber operations and cybersecurity at a federal level, but it’s a specialized, important piece of it.
LBB: Is this one way in which the government and the private sector have actually made progress in dealing with these tensions and communicating better?
SR: I think so. It was a big, big step to release this charter to the public; before, it was classified. It had been available through a Freedom of Information Act request, but only part of it. I think it was a really good step, helping to create more trust and be more transparent.
LBB: You recently wrote about an interesting development. Sometimes our government attributes cyberattacks to foreign governments, as it did when it pointed the finger at North Korea for the WannaCry ransomware attacks last year. And sometimes private companies do the same thing. First, what’s unusual about this situation?
SR: This kind of attribution has really been the purview of a government. Is there another instance where private sector companies have had these capabilities to identify attacks or incidents or malicious behavior by other nation states and been able to comment on that with authority and develop capabilities to identify that? I don’t think it has happened. The question is: What should governments do about this? Is this something that helps them in their dialogues with other countries—their negotiations, their diplomacy? Or does it undermine what it is they’re trying to do?
LBB: Is there an upside for business and the public in the fact that private companies are gaining skill and sophistication in this area, and making more information available?
SR: That is definitely true. If nothing else, they’re helping their clients out. If they have advanced capabilities and can create a service out of that and sell that service, that’s what innovation is about, and competition. So that’s good.
LBB: But there’s also a potential downside, right?
SR: The concerns are that it really could undermine any sensitive negotiations. Like if we’re trying to negotiate with North Korea, or even China let’s say on the theft of intellectual property and cyberattacks, and all of a sudden Mandiant blurts out, “Look, there’s all this activity by China doing X, Y and Z.” The risk is that it pisses China off and they leave the table. Now has that happened before? I don’t know. But it’s possible.
On the other hand, if the U.S. wants to negotiate with China or some other country and they have classified information about an attack but they can’t really share it, maybe what they can do is point to a report by FireEye and say, “Look, we all know what you’ve been doing. This very credible company says this. Let’s talk about it.” So it does have the potential of being able to foster discussion in an open forum. The issue is, on balance, is it a good thing or not? And that’s what we’re trying to figure out.
LBB: Has there been much communication between these specialized companies and the government?
SR: I don’t know. I do know that the government can be a consumer of these companies like everyone else. So they may purchase their services and learn everything that the company knows, and that’s all good. And we know that some of the company employees are former government intelligence people, so inherently there are some relationships. But specifically what conversations they’ve had, I can’t say.
LBB: You’ve written a lot about data breaches. Are there clear legal guidelines spelling out when and how companies must share information about breaches and with whom?
SR: State data breach disclosure laws require that companies notify people when their first and last name, along with some other piece of information (like a driver’s license, passport or financial information), have been disclosed without authorization to some other party, whether posted publicly on a website, lost in a shipping container or stolen by an attacker. So 48 states have this law. There is some variation among them, but basically they just say, “Company, you need to tell affected consumers when this happens.” Sometimes there are penalties if you don’t comply. Sometimes there is a private right of action for consumers to bring a lawsuit. Sometimes there are notification requirements to states’ attorneys general. There may be exceptions if the data is encrypted or there’s already a notification requirement through other financial regulation statutes. There have been lots of issues and questions around whether these laws are effective, how they should be changed and what consumers can actually do when this happens.
LBB: Then there’s also the time factor. Sometimes companies realize that they’ve been breached, but they’re not quite sure what has been taken. Or they decide that they really need to get a handle on how broad the breach was. Is it clear how fast they have to reveal what they know?
SR: There is tension. It’s not really settled. People are still trying to figure out the right time window. Some say as quickly as possible. Others say 60 days or 90 days or 30 days. It’s unclear what the perfect answer is because you can see how premature notification can just cause confusion. The company may not have all of the information. Or it may take time for them to figure out exactly what happened, so forcing them to notify too early may not be helpful. In addition, it may corrupt a police investigation. But then you can’t wait too long because you want to notify people as soon as possible so they can take prevention measures—monitor their credit, watch out for charges on their credit cards, close accounts. That’s the best we have in terms of recommendations.
LBB: What roles do in-house and outside lawyers need to play?
SR: The first thing counsel needs to do is figure out if in fact there was a breach. If the answer is yes, then they need to figure out who is affected and which laws for which states they need to comply with. There are lots of firms and lots of practice groups that do this kind of thing. So if they don’t have the capabilities in-house, they can go outside.
LBB: Some experts have talked about the limited knowledge many in-house lawyers bring to this subject, and their failure to make it their business to educate themselves sufficiently to really help protect their companies. What do you think about that?
SR: I suppose as counsel you have lots of laws you really need to figure out and understand. Breach laws are just one set of them. It’s 2018. You should have some awareness of all this cyber stuff. If you’re not the expert, it’s pretty easy to make a call and find someone who can guide you. I guess at the end of the day, they’re supposed to be risk-averse and have some idea of how to manage risks.
LBB: A lot of people have been saying for years now about data breaches: “It’s not a matter of if but when.” Do you buy that?
SR: It’s a familiar marketing story by this one guy from a threat intelligence company. In some sense it’s a little silly because there are what, six or seven million companies in the country? What are you saying, all six million of them have been breached? That just seems hard to believe. There is an overall question of how many breaches we know about. Like what is the underreporting percentage of breaches? There’s something like 15,000 that we know about since 2003, when these breach laws were first adopted. Everyone wants to know whether that’s just the tip of the iceberg. And the answer is probably yes, but we don’t have a good feel.
LBB: You’ve been spending time researching the way insurance functions in this realm. What are you finding?
SR: Everyone is trying to figure out how to fix cyber. And one of the possible fixes people suggest is insurance. Is there an opportunity to create incentives for discounts, like your car insurance will bring you a discount if you drive safely? What can we do in cyber? That’s what people are really trying to figure out. It’s unresolved, but it’s a big market in the sense that there are billions of dollars in premiums. And it’s expected to grow by an order of magnitude. But the real issue, for people looking at where the systemic risks are, is the notion that one attack might affect thousands of companies. And there could be billions of dollars in losses. So it could be something catastrophic, like attacks on critical infrastructure. One incident, many affected people. That’s what’s on everyone’s mind—that’s the real fear. For companies, for insurance companies, for reinsurance companies, for the government especially: How do you protect against it, how do you mitigate it?