Legal BlackBook™
PREDICTIONS AND THREATS FOR THE YEAR IN CYBERSECURITY
welcome. Our interview with Rand researcher Sasha Romanosky delves into some of these issues 
(Challenges in Cybersecurity Provoke Conflict between the Public and Private Sectors). But there always 
seems to be another angle. It may surprise some readers to learn that the government has a history of tech innovation, and some new programs established at universities are designed to lure entrepreneurial 
students to beat a path to Washington rather than Silicon Valley. And they’re actually winning converts 
(Building a Cybersecurity Bridge between Startups and the Military).  
   Finally, it wouldn’t be a new year if we didn’t feature a crystal ball somewhere in the mix. We invited a legal expert to talk to us about two articles that had important things to say. One predicted what we can expect this year; the other warned what we should fear (Predictions and Threats for the Year in Cybersecurity). 
    We’ll be back with another issue of CyberInsecurity next month. Until then, please let us know what you think. Your comments and suggestions are always welcome.

David Hechler, Editor-in-Chief

An expert weighs in on some prognostications.
In late December, the online publication CSO posted an article with the headline Our Top 7 Cyber Security Predictions for 2018, by senior editor Michael Nadeau. In early January, MIT Technology Review posted Six Cyber Threats to Really Worry About in 2018, by San Francisco bureau chief Martin Giles. Each had plenty to knock the complacency out of any general counsel foolish enough to try to relax over the holidays. We asked Daniel Garrie to have a look and talk to us about takeaways for in-house lawyers, including his own recommendations. Garrie, who frequently counsels company lawyers, is co-founder and managing partner of Law & Forensics and the editor-in-chief of the Journal of Law and Cyber Warfare. In addition to his law degree, he earned a bachelor’s and master’s in computer science and has built and sold several tech startups.
Legal BlackBook: The MIT article focused on the likely future targets of cyberattacks, including cloud storage companies, data brokers who store information about people’s Web browsing habits, and infrastructure such as 
electric grids and voting machines. What potential targets strike you as the most important for in-house lawyers to pay attention to?
Daniel Garrie: Vendors and supply chains. They will have the biggest impact, and there is a high likelihood that they will be targeted. This presents a huge risk because larger companies can have many vendors, some of which may have connectivity to a company’s most sensitive information. If vendors get hit and they have the right level of connectivity, that can have the most material consequences to the organization.  

LBB: What can they do to protect their companies?
DG: Three things. First, policies and procedures should be implemented and followed. It is not enough to simply make policies and consider the problem solved. Having a bunch of policies that no one at the company follows is a problem. Having a bunch of procedures that aren’t followed by managers or vendors is also a problem. That’s why it is critical to ensure that policies are being followed by anyone with access to company data. Second, there should be a robust structure for educating employees on good information security practices. It's important that the training is not threat-driven, but rather engages employees and focuses on the constructive rather than destructive aspects of security. The third area is insurance. They should make sure they’ve done an evaluation and adjustment of their insurance framework and coverages, and understand where the gaps are. They need to review their insurance in light of all their risks, and then determine what they may need to insure against from a cyber perspective. 

LBB: Based on your experience, how do you think they’re doing?
DG: Most of the companies I have worked with struggle with all three—at different levels, depending on the resources they have available to them. 

LBB: Both articles discussed new tech weapons. CSO said that internet of things (IoT) devices have been compromised by botnets and used to launch attacks. On the other hand, artificial intelligence has helped companies automate threat detection. But the MIT article pointed out that AI is also being used in spear phishing attacks because it crafts fake messages as effectively as humans. Are these the biggest threats, in your view? 
DG: Those are great, but there are a lot more threats and weapons. There’s some very advanced custom-built ransomware coming to market that has learning algorithms—AI—built into it. That’s one of the known threats, and I think that’s a real issue. 

LBB: Who do you think is winning the cyber arms race, the good guys or the bad guys? 
DG: On the civilian side, the bad guys. On the military side, I’d say that the military is not losing the battle, but they’re not crushing it per se. The military has sophisticated capabilities, but the dynamics of this battle change on a daily basis. 

LBB: What can companies do to keep up with the threats?
DG: They can identify their risk factors and work to understand their specific threat landscape and readiness. Because what’s generic isn’t going to work. And that’s what you have to embrace—the reality of your operating environment, not what everybody else is telling you. It’s different for everybody. And it’s near impossible to say that what works for one is going to work for the other. 

LBB: CSO suggested that 2018 will finally bring an acceleration of multifactor authentication in place of password-only systems. One reason is the rise of what’s called “aftershock breach”—after a data breach at one company, stolen credentials are used to breach accounts at other companies. Do you agree with this prediction?
DG: I think they’re right. Companies are starting to do this because they have to protect the consumer, they have to protect their brand. They’re realizing that they have to take a more proactive, assertive approach to all of this. 

LBB: What role do you think in-house lawyers should play here?
DG: They need to identify the legal risks. It depends on the business, but there’s direct liability, third-party liability, customer agreements. It depends on the type of client company it is. 

LBB: What do you think are the biggest challenges ahead for companies in this realm?
DG: The conversation between in-house counsel and outside counsel on protecting client data—one of the things that nobody talks about. That conversation, that dialogue is changing rapidly. And defining that partnership and how those dynamics work and operate is really interesting. Clients are changing the level of security requirements. They want outside counsel to secure their data. They want to know who can access the information. Company lawyers have to reevaluate 
that relationship. 

LBB: What guidance can you offer general counsel? 
DG: When this subject comes up at their companies, they should make sure they have a seat at the table and are properly engaging the business, risk, cyber and communications teams when addressing cybersecurity issues.

BUILDING A CYBERSECURITY BRIDGE 
BETWEEN STARTUPS AND THE MILITARY
Universities are becoming incubators for innovation.
By David Hechler
Steve Blank wants you to rethink everything you thought you knew about government. Especially the stereotype of agencies that are hidebound, inflexible and out of touch. And he’s especially insistent when he’s speaking to gifted students with a tech background and entrepreneurial aspirations. He has big plans for them, and he believes they can help make us all a little safer.
   Blank was speaking at a panel discussion in January hosted by Columbia University, where he’s a senior fellow for entrepreneurship. It’s a subject he knows pretty well. He founded or worked in eight startups, and half of them went public. His own introduction to the tech industry occurred after he joined the Air Force and volunteered to go to Vietnam. “I spent a year and a half in Southeast Asia learning electronics,” he told the audience. “And when I got out, I ended up in Silicon Valley in the mid 1970s, when we were selling equipment to other businesses.” That was because the consumer electronics business didn’t exist.
   The government and the defense industry were the Valley’s big innovators back then. “Lockheed was the largest employer,” Blank said. “I worked for a startup run by someone named Bill Perry. He ended up eventually as the secretary of defense. So my introduction to the Valley was a halfway house between startups and the military.”
    Now he’s trying to build a bridge between the two. The government needs to innovate to counter the high-tech threats it faces from global adversaries, Blank explained. It needs energy and vision comparable to what you find in Valley startups. But rather than seek to lure talent from there, he is grooming it at universities by creating courses with names like Hacking for Diplomacy and Hacking for Defense.  
   The idea is to present students with some real problems government agencies are trying to solve and see what they can come up with in 10 weeks. The goal is not only to solve problems but to create a public service career path for young tech innovators.

(from left) Justin Fox, Avril Haines and Steve Blank
   Sitting next to Blank on the stage was Avril Haines, a lawyer and former deputy National Security Advisor under Obama who is now a senior research scholar at Columbia and a lecturer in law. Haines was an entrepreneur herself shortly after college, when she founded and ran a successful bookstore/café before she sold it and went to law school. But for this talk, which was moderated by Bloomberg View columnist Justin Fox, she was speaking as a recent government insider with a ringside view of the biggest threats the country faces.
   “In the national security world,” Haines said, “we are consistently facing more threats more quickly—the generation cycle is faster—and they’re more complex in the sense that you typically need greater and diverse expertise to really understand them and deal with them on a consistent basis.”  
   Cyber is high on the list of critical vulnerabilities, she said, noting that 50 percent of households have a smart meter attached to their electric service. “An adversary looks for places where they can innovate and find ways to hold at risk, at relatively low cost to them, things that are of value to us. Similarly we need to innovate in order to think through how we can respond without actually hurting the great value that all of these areas bring to the United States.”
   For this reason, she continued, innovation is “critical to dealing with national security issues.” Effective responses require execution, “but you also need an innovation culture. The two can coexist.”
   Blank had already made incursions into the academy before he introduced his hacking classes. He’d realized that business schools and investors invariably treated startups as smaller versions of large companies. They asked fledgling entities for 40-page business plans without realizing that nearly all of these companies were run by visionaries. For them it wasn’t about executing their business model; it was about finding one.  
   “We had built 100 years of management tools for execution,” he told the Columbia crowd, “but very little methodology 
for searching for business models.” Drawing on the work of Alexander Osterwalder and Eric Ries, Blank helped 
create something called the Lean Startup, which “established a set of rules for startups on how to think about 
building companies.”
   Based on this groundwork, in 2011 he created a class called The Lean LaunchPad. It was all about talking to potential customers, building viable products and designing a business model. As it started to catch on with universities, a funny thing happened, Blank said. He’d been blogging about it, posting his class notes, and through these he developed a small but influential fan club within the government. Washington had long given out research grants that included money for commercialization, but had never offered recipients any guidance on how to start a business. Blank’s Lean Startup struck some people there as just what they needed, so the National Science Foundation adopted it. After that, a host of other agencies followed suit. 
   By 2013, the wider business world had taken notice. “It happened to be the time when large companies were dealing with continuous disruption,” Blank noted. “And for the first time large companies were looking to startups for methodologies.” That’s when the Harvard Business Review ran a cover that said: Why the Lean Start-up Changes Everything. More than 80 universities now teach these courses.
   Three years later Stanford offered Hacking for Defense. The following semester it added Hacking for Diplomacy. The methodology is basically the same as the Lean Startup courses. They talk to lots of potential customers and try to build viable products. But the problems they’re addressing are ones government agencies are trying to solve. About 10 universities offer these classes now, Blank said. 
   He’s particularly pleased that students have responded so positively. But the big deal to him is what he hopes it does for the country. “Think about retail today in the United States,” he told the audience. “It’s literally being taken apart by Amazon. Some form of retail will continue to exist, but it won’t look anything like today. 
   “The problem I observed, and the reason why I’m interested in government,” he concluded, “is that we can afford to have Macy’s go out of business. We can’t afford to have our intelligence community go out of business.”

A serial entrepreneur has had an impact felt in academia, business and government.
CHALLENGES IN CYBERSECURITY PROVOKE CONFLICT BETWEEN THE PUBLIC AND PRIVATE SECTORS 
Observers see progress in the release of the VEP charter last year.
Sasha Romanosky’s computer career began at age 13, when his sister won a Commodore 64 at a spelling bee. “I don’t think she ever saw it after she won it,” he confessed with a laugh. “I confiscated it.” Born and raised in Canada, Romanosky earned a BS in electrical engineering from the University of Calgary and a PhD in public policy and management from Carnegie Mellon University. Now a policy researcher at the Rand Corporation, he writes and speaks often about cybersecurity and the law. These days he’s based in Washington, D.C., where he learned a lot about contentious relations between the public and private sectors during a year as a policy adviser to the U.S. Department
of Defense.

Legal BlackBook: So what hooked you on the Commodore 64 at age 13?
Sasha Romanosky: Back then I was playing the games, hacking the games. They had a cartridge that could freeze the operating system and dump you into assembly—almost the lowest level of programming for the computer—and then you could play around in memory. There was a small area that controlled all of the game’s parameters, so it was fairly easy to goof around with things.

LBB: How did you get into cybersecurity?
SR: Before there was cybersecurity it was information security. I started doing this professionally after college, working at an ISP [internet service provider]. We were setting up internet services, firewalls, e-commerce sites for people. It was all pretty simple. Until the late 1990s it wasn’t even considered security. After that it was information security, and then somehow the term morphed into cybersecurity, which I think just came from the government and military, because for them “cyber” encompasses electronic warfare. And even to this day, old-school information security people get a twinge when they hear “cyber” because it’s almost like a goofy marketing term. 

LBB: These days, some of your work focuses on corporate governance, compliance and corporate crime, areas that are of great interest to lawyers. In what context did you begin delving into this arena?
SR: It started when I was looking at the data breach litigation and state data breach disclosure laws, and whether those had an effect on firms and consumers. These are laws that require companies to notify you when they suffer breaches. And there are good policy questions as to whether the laws are working. Plaintiffs were always losing these cases. We wanted to study what those cases look like. What are the causes of action that are brought? When are data breaches more likely to be litigated, when are firms more likely to be sued, when are those cases more likely to settle, what do the settlements look like? All of that. 

LBB: Can you give us a summary of what you found?
SR: First, we found that only a very small percentage of data breaches end up being litigated. A decade ago, the litigation rate was in the high teens, but recently it has fallen to 3 to 4 percent. So the probability that any firm would be sued is already pretty low. However, firms tend to be sued more when the breach relates to financial information and less when the firm provides credit monitoring right away. As for the results, it seems that about half of all these cases settle (with the other half being dismissed right away). However, “settled” typically means that only the named plaintiffs recover, and they get a few thousand dollars each. In addition, the firm was more likely to settle when the breach involved medical information. However, we saw no strong correlation between settlement and class action certification, or allegations of violations of statutes with statutory damages. Overall, though, the biggest finding was from coding each of the causes of action from all 200-plus cases. We found over 90 unique causes of action, including common law (torts, contracts, etc.) and state and federal statutes. Contrast that with financial securities law, where there is a single federal statute under which plaintiffs can bring an action.

LBB: You’ve given lots of presentations on these topics to lawyers. Are you still doing that?
SR: Yes. And lawyers are always involved. I’m also getting into the cyber insurance area, which means more insurance people are involved. But there are always lawyers around.

LBB: Is this all from your Rand work?
SR: This is all Rand research. And it has a nice evolution. Looking at the security laws got into the litigation stuff, which then got into the story of costs for all of these incidents. Which then got into insurance because the companies are interested in what all these things cost, and the insurance companies are interested in what all these things cost. 
And everyone wants to know: How do we protect ourselves best? What kind of insurance do we need? How much do 
we need? 

LBB: You were a cyber policy adviser to the U.S. Department of Defense for a year that ended last September. Can you tell us about your government work?
SR: There’s a federal statute that enables people to go back and forth from government to the private sector—or nonprofits anyway. This is a vehicle that the federal government can use to get experts to help them out for a short time. I was in cyber policy in OSD [Office of the Secretary of Defense]. It deals with all kinds of cyber issues to help inform the secretary to make decisions. If the Department of Defense is going to engage in cyber operations, there needs to be careful thought and understanding of how you do that—authorities and capabilities and agreements and all that. But the department also needs to defend itself, and I worked mainly on the defensive side.

LBB: We’re interested in the sometimes complicated relationships between the government and corporations. 
Can you talk about that?
SR: Private-public partnerships are a big priority for the secretary. There are lots of ways that can happen. One of them is what they call DIUX, which is the Defense Innovation Unit Experimental. It’s about fostering a culture, working with startups to help develop new technology. And not all for robots that shoot guns. There are lots of innovations that could help. It’s that kind of an R&D partnership. There’s information sharing, cooperation between DOD and the defense industrial base, which is a whole collection of companies that supply support to DOD. Maybe they’re cyberattacked and DOD wants to know about that. 
   There’s a larger question if a company in the U.S. is cyberattacked. What is the role for DOD in helping protect them? Generally the answer is, “There is no role.” DOD is not involved, nor should it be involved in defending or protecting some company that gets hacked. That’s the role of the FBI—until it becomes a national security issue. So if the whole country was suffering some kind of distributed denial of service attack, then one of the primary roles of the Defense Department is to protect the country. Only if and when that were to happen would DOD step in. There are conversations that DOD may have with infrastructure companies, telecommunications companies, finance companies to understand how resilient we are. What do we need? Are there any gaps? And what are the roles and responsibilities for different people? There’s what’s called Defense Support of Civil Authorities, which is what gets triggered when a state has a natural disaster and needs to call the military to help out with a hurricane or an earthquake or whatever. There’s a similar interest to figure out how DOD could help a state with a cyberattack.
   The federal government-state government partnership really needs to be negotiated. And as you can imagine, there are lots of electrical companies and other infrastructure companies that support state and federal operations and bases and military installations, so there needs to be an understanding and cooperation there. Who’s protecting what, and who manages what at what times? So there are lots of ways that DOD interacts with the private sector. 

LBB: Did you observe tension between government agencies and the private sector over cybersecurity issues?
SR: Yes. Everyone gets ticked off by everyone else. Everyone wants more information and better information and quicker, faster, stronger. One of the big issues that’s kind of a firestorm is the vulnerabilities equities process (VEP). It’s the U.S. government policy around what they do with the special kinds of vulnerabilities that they may know about. If a vulnerability exists that no one knows about but the U.S. government, one that leaves lots of computers in the U.S. exposed, then there’s an equity decision, a decision of whether the government should tell everyone about that so that they can patch their system and be more secure, or whether they should hold it temporarily and use it for intelligence collection. Maybe U.S. systems are vulnerable, but maybe an adversary’s systems are also vulnerable and maybe they also want to collect intelligence on that adversary. How do you weigh that? How do you balance that? The private sector, of course, is always very adamant that you should tell us every single time, and do it right away. The U.S. government, you can understand, has a different kind of role. They see the interests of everyone and everyone’s equities. And so this VEP process is contentious.
   Recently the National Security Council released the charter for the VEP, which is the set of policies and procedures around how it makes these decisions and what the whole process looks like, and who’s involved in decision making and all that. And it’s been fairly well received. I think most reasonable people understand that it’s not an easy decision. And the U.S. is really one of the only countries that has such a process. Most other countries may just use the vulnerabilities and really don’t care or don’t tell their citizens what they’re doing and why. It’s a small piece of cyber operations and cybersecurity at a federal level, but it’s a specialized, important piece of it.

LBB: Is this one way in which the government and the private sector have actually made progress in dealing with these tensions and communicating better?
SR: I think so. It was a big, big step to release this charter to the public; before, it was classified. It had been available through a Freedom of Information Act request, but only part of it. I think it was a really good step, helping to create more trust and be more transparent.

LBB: You recently wrote about an interesting development. Sometimes our government attributes cyberattacks to foreign governments, as it did when it pointed the finger at North Korea for the WannaCry ransomware attacks last year. And sometimes private companies do the same thing. First, what’s unusual about this situation? 
SR: This kind of attribution has really been the purview of a government. Is there another instance where private sector companies have had these capabilities to identify attacks or incidents or malicious behavior by other nation states and been able to comment on that with authority and develop capabilities to identify that? I don’t think it has happened. The question is: What should governments do about this? Is this something that helps them in their dialogues with other countries—their negotiations, their diplomacy? Or does it undermine what it is they’re trying to do? 

LBB: Is there an upside for business and the public in the fact that private companies are gaining skill and sophistication in this area, and making more information available?
SR: That is definitely true. If nothing else, they’re helping their clients out. If they have advanced capabilities and can create a service out of that and sell that service, that’s what innovation is about, and competition. So that’s good. 

LBB: But there’s also a potential downside, right? 
SR: The concerns are that it really could undermine any sensitive negotiations. Like if we’re trying to negotiate with North Korea, or even China, let’s say, on the theft of intellectual property and cyberattacks, and all of a sudden Mandiant blurts out, “Look, there’s all this activity by China doing X, Y and Z.” The risk is that it pisses China off and they leave the table. Now has that happened before? I don’t know. But it’s possible.
   On the other hand, if the U.S. wants to negotiate with China or some other country and they have classified information about an attack but they can’t really share it, maybe what they can do is point to a report by FireEye and say, “Look, we all know what you’ve been doing. This very credible company says this. Let’s talk about it.” So it does have the potential of being able to foster discussion in an open forum. The issue is, on balance, is it a good thing or not? And that’s what we’re trying to figure out. 

LBB: Has there been much communication between these specialized companies and the government? 
SR: I don’t know. I do know that the government can be a consumer of these companies like everyone else. So they may purchase their services and learn everything that the company knows, and that’s all good. And we know that some of the company employees are former government intelligence people, so inherently there are some relationships. But specifically what conversations they’ve had, I can’t say.  

LBB: You’ve written a lot about data breaches. Are there clear legal guidelines spelling out when and how companies must share information about breaches and with whom?
SR: State data breach disclosure laws require that companies notify people when their first and last name, in addition to some other piece of information, like a driver’s license, passport or financial information, have been disclosed without authorization to some other party, either publicly on a website, lost in a shipping container or stolen by an attacker. So 48 states have this law. There is some variation among them, but basically they just say, “Company, you need to tell affected consumers when this happens.” Sometimes there are penalties if you don’t comply. Sometimes there is a private right of action for consumers to bring a lawsuit. Sometimes there are notification requirements to states’ attorneys general. There may be exceptions if the data is encrypted or there’s already a notification requirement through other financial regulation statutes. There have been lots of issues and questions around whether they’re effective and how they should be changed and what consumers can actually do when this happens.

LBB: Then there’s also the time factor. Sometimes companies realize that they’ve been breached, but they’re not quite sure what has been taken. Or they decide that they really need to get a handle on how broad the breach was. Is it clear how fast they have to reveal what they know? 
SR: There is tension. It’s not really settled. People are still trying to figure out the right time window. Some say as quickly as possible. Others say 60 days or 90 days or 30 days. It’s unclear what the perfect answer is because you can see how premature notification can just cause confusion. The company may not have all of the information. Or it may take time for them to figure out exactly what happened, so forcing them to notify too early may not be helpful. In addition, it may corrupt a police investigation. But then you can’t wait too long because you want to notify people as soon as possible so they can take prevention measures—monitor their credit, watch out for charges on their credit cards, close accounts. That’s the best we have in terms of recommendations.

LBB: What roles do in-house and outside lawyers need to play?
SR: The first thing counsel needs to do is figure out if in fact there was a breach. If the answer is yes, then they need to figure out who is affected and which states’ laws they need to comply with. There are lots of firms and lots of practice groups that do this kind of thing. So if they don’t have the capabilities in-house, they can go outside.

LBB: Some experts have talked about the limited knowledge many in-house lawyers bring to this subject, and their failure to make it their business to educate themselves sufficiently to really help protect their companies. What do you think about that?
SR: I suppose as counsel you have lots of laws you really need to figure out and understand. Breach laws are just one set of them. It’s 2018. You should have some awareness of all this cyber stuff. If you’re not the expert, it’s pretty easy to make a call and find someone who can guide you. I guess at the end of the day, they’re supposed to be risk-averse and have some idea of how to manage risks.

LBB: A lot of people have been saying for years now about data breaches: “It’s not a matter of if but when.” 
Do you buy that?
SR: It’s a familiar marketing story by this one guy from a threat intelligence company. In some sense it’s a little silly because there are what, six or seven million companies in the country? What are you saying, all six million of them have been breached? That just seems hard to believe. There is an overall question of how many breaches we know about. Like what is the underreporting percentage of breaches? There’s something like 15,000 that we know about since 2003, when these breach laws were first adopted. Everyone wants to know whether that’s just the tip of the iceberg. And the answer is probably yes, but we don’t have a good feel.

LBB: You’ve been spending time researching the way insurance functions in this realm. What are you finding?
SR: Everyone is trying to figure out how to fix cyber. And one of the possible fixes people suggest is insurance. Is there an opportunity to create incentives for discounts, like your car insurance will bring you a discount if you drive safely? What can we do in cyber? That’s what people are really trying to figure out. It’s unresolved, but it’s a big market in the sense that there are billions of dollars in premiums. And it’s expected to grow by an order of magnitude. But the real issue for people watching is systemic risk: the notion that one attack might affect thousands of companies. And there could be billions of dollars in losses. So it could be something catastrophic, like attacks on critical infrastructure. One incident, many affected people. That’s what’s on everyone’s mind—that’s the real fear. For companies, for insurance companies, for reinsurance companies, for the government especially: How do you protect against it, how do you mitigate it?

It’s not clear how quickly companies need to notify customers when there’s been a data breach.
Everyone wants more information and better information and quicker, faster, stronger.
March 2018
WHAT THE NUMBERS SAY
[Chart: survey responses for 2015, 2016 and 2017 in five categories: Businesses That Experienced No Losses, Financial Fraud, Business Email Compromised, Ransomware and Phishing.]
Source: 2017 U.S. State of Cybercrime (published by CSO). Survey responses about cybersecurity events reported by 510 executives at U.S. businesses, law enforcement services and government agencies.
CyberInsecurity
Spring at last! A time of renewal. In this, our second issue of CyberInsecurity, renewal is a bit redundant. It’s all pretty new. But we have added two features in this issue that you can expect to find each month. 
  First, there’s a new section called Cybersecurity in the News. We know that you keep up with the big events; these short takes focus on smaller fare that you may have missed. This month, for instance, we cue up some surprising statistics, a new 

April 2018
CAN A COMPANY BE TOO COOPERATIVE WITH THE GOVERNMENT?
Best Buy’s employees weren’t law enforcement agents, but they acted as if they were.
“How we think about cooperation is different than the way [FBI] Director [Christopher] Wray thinks about it.”
They don’t look like cops, and their cars don’t look like police vehicles. They’re not supposed to. They’re supposed to epitomize computer geeks. But lawyers at the Electronic Frontier Foundation (EFF) have learned that some of the individuals who work for the Geek Squad, the Best Buy employees who repair customers’ computer equipment, have been searching devices for incriminating information and passing what they find to the FBI. Aaron Mackey, a staff attorney at the EFF, has been working on the case for the San Francisco-based nonprofit. The civil liberties group is concerned about the Fourth Amendment implications, and it’s been digging for documents through the Freedom of Information Act (FOIA). Mackey, a former journalist, has also been blogging about it. The interview has been edited for style and length.

Legal BlackBook: How did you first bump into this case?
Aaron Mackey: I first learned of this story when a doctor in Southern California [named Mark Rettenmaier] was prosecuted as a result of the Geek Squad finding potentially illegal material on a hard drive that he had submitted for repairs. 

LBB: Have there been other cases like that one? 
AM: We know of at least one other. It’s actually a state case in Kentucky which the FBI and federal prosecutors declined to prosecute and referred to state authorities, who are prosecuting a man now. There may be others.

LBB: To your knowledge, did all of the Geek Squad searches that resulted in some law enforcement activity—whether they resulted in charges or not—involve alleged child pornography?
AM: The prosecutions that we’ve seen, yes. Some of the documents that we received as part of our Freedom of Information Act request make reference to identity theft investigations, but it’s a cover sheet and there’s a lot that’s redacted. It could be just a file name or a designation. It may not mean that there’s an investigation. What I can say is that all of the investigative reports that we’ve seen as part of our FOIA lawsuit involve investigations of potential child pornography.

LBB: After you found out about the Rettenmaier prosecution, what did you decide to do?
AM: We decided to file a Freedom of Information Act request to try to learn more about the relationship between the FBI and the Geek Squad. We wanted to understand what that relationship looked like and how it worked. We were concerned about the Geek Squad employees being paid to find this material, which was something that came out in the Rettenmaier prosecution. We were concerned also that perhaps the technicians who were being paid were going beyond their repair duties and actually doing investigations as an arm of the federal government. The FBI denied our request and told us they couldn’t confirm or deny anything. Then we filed our lawsuit, and they’ve produced a number of records—and withheld a number of others. 

LBB: Where did you file the lawsuit?
AM: In Washington. We have an attorney in D.C., David Sobel, who does a lot of our FOIA litigation. Our civil liberties director, David Greene, is also working with us. 

LBB: In summary, what is the FBI’s argument?
AM: What the FBI has said from the beginning, and what the Geek Squad and Best Buy have said as well, is that there’s no coordination. They don’t have an established relationship. All that happens is that the Geek Squad employees, in the course of repairing people’s devices, call the FBI when they find child pornography or other illegal material. And then the FBI comes and reviews what the Best Buy employees have found. They make a determination, and then they seize the device and get a search warrant and search for additional material. Then they decide whether to bring charges. Best Buy put out a statement that says there was no high-level coordination. There were four employees who were receiving payments. And three of them no longer work for Best Buy. The other has been reassigned.

LBB: What’s your response to the FBI?
AM: We’re concerned that the documents produced in the Rettenmaier prosecution, as well as the documents produced in our FOIA request, show some sort of relationship that goes back at least to 2008. The FBI was having meetings at the Geek Squad facility, they were receiving tours and they were paying individuals who we later learned were supervisors at the Geek Squad facility. So our No. 1 concern here is that this relationship is something more than just a private employee of Best Buy doing his or her work and finding something illegal and calling the cops. Our concern is that they’re going beyond that and actually doing something at the direction of the FBI. And in so doing, they’ve transformed themselves into agents or extensions of the FBI, and they shouldn’t be searching people’s devices without a warrant. And that’s ultimately what we’re concerned about: the potential Fourth Amendment violations of everyone who sends a device to be repaired at the Geek Squad facility.

LBB: This facility you’re referring to is in Kentucky?
AM: Yes. What happens is there are Geek Squad employees at Best Buys throughout the country. I think they come out to your home, and if you have low-level, minor repairs, you can take them in to the stores. But when you have a hard drive that’s failing or you’ve lost data on your device—what Geek Squad calls “data recovery”—they have to send those computers to this Brooks, Kentucky, facility. And it’s there where technicians do things to try to recover, or, if the hard drive is broken, extract the data and put it on a new hard drive or other media. It’s a wonderful thing if you lose your wedding photos, or photos of your kids. You can send it to them, and they’ll fix it. 

LBB: Is that the only place where, to your knowledge, this kind of law enforcement role has played out? 
AM: Yes. In our FOIA request, we asked for documents that would show whether they had any sort of relationship with other repair facilities—be they national tech businesses or local repair facilities in various cities—and the FBI has refused to confirm or deny whether any of those records exist.

LBB: Has anything changed to date as a result of your efforts and the resulting publicity?
AM: It’s hard to say. Best Buy has basically said about their employees who were paid that they didn’t know about it and they’ve put a stop to it. And in testimony in the state case, one of the special agents of the FBI who was involved said that they’ve stopped paying employees and they’ve stopped having that sort of relationship with the Geek Squad, although she was cagey about it—didn’t even want to call it a relationship. But we don’t have confirmation.

LBB: What are some of the unanswered questions you have at this point?
AM: Are there similar relationships that the FBI has with other repair facilities, say Apple and Apple Care? You bring your device into Apple, whether it’s an iPad or a MacBook, they take it and ship it out somewhere. Is there a similar type of relationship with wherever those repairs occur? Were there additional payments made to Geek Squad employees? How far back does this relationship go? What does the one-on-one interaction between the Best Buy employees and the FBI look like? Those are some of the basic questions that we don’t have answers to. 

LBB: What’s next?
AM: The government has produced all the records that they claim they’re going to produce, and so they’re going to move for summary judgment, arguing that they’ve met their burden under FOIA. What’s up next for us is we’ll be filing a cross-motion challenging their withholdings and asking the court to order further disclosure of records, including ordering the FBI to confirm or deny whether they have relationships with other repair facilities. 

LBB: Do you think you have a good chance at obtaining that information?
AM:  We think that there’s good law on this point in FOIA. When you have some official confirmation of a similar type of pattern or practice, you can’t just say, “We’re not even going to tell you whether or not these documents exist.” So we think we have a fair shot at convincing the court that the FBI has to at least give us a yes or no answer. And then process records if they do have records. Whether or not we’ll ultimately prevail in terms of getting some of the records released or getting less redacted records, we’re optimistic.

LBB: Do you have reason to believe that they do have relationships with other companies similar to the one they had with Best Buy?
AM: We don’t have any documents or anything else that we’re aware of that would show us another relationship. We only learned about the Best Buy relationship, which has been going on for more than 10 years, as a result of this more recent prosecution. So it’s obvious that there are efforts to try to keep these relationships secret. 

LBB: FBI Director Christopher Wray gave a speech at a cybersecurity conference in Boston in March in which he talked about the importance of cooperation—the cooperation of all the government agencies that work on cybersecurity, and cooperation between the government and the private sector. The theme he kept returning to was: We’re all in this together. In your opinion, what are some of the potential benefits of public-private cooperation in this area? 
AM: How we think about cooperation is different than the way Director Wray thinks about it. The one area of cooperation that we would like to see is the government more affirmatively disclosing when they have identified vulnerabilities in software or hardware that’s used by all of us. And they should let the manufacturers know so they can fix the flaws rather than the government’s hoarding the information to use offensively, or keeping it secret so that they can patch their systems but leave us all vulnerable. That’s definitely an area where we think there needs to be better cooperation. But we see that as something where the government needs to do a better job, not necessarily private industry. Our concern, whenever there is private industry cooperation, is that it potentially requires those private actors to side with the government’s interests at the expense of the privacy and security of the company’s users. And that’s something that we think is really problematic. 

LBB: To continue to explore that perspective, what are the potential dangers for businesses when they cooperate with the government? 
AM: You put yourself in the position of harming your relationship with your customers, with the owners of your stock, and with the public goodwill more broadly when you’re seen as siding with the government and perhaps doing things that intrude on the privacy of your users or jeopardize their security.

LBB: In your opinion, has Best Buy made mistakes here? 
AM: It’s hard to tell. What we’re still looking for is whether there was some sort of high-level coordination. It’s clear that there were supervisors at this facility who were taking money from the FBI. What we don’t know is whether they did so and then became active agents for the government. At the same time, I want to recognize that, to the extent that employees at Best Buy or anywhere find illegal material, particularly child pornography, there are legal requirements—usually state law—that say they have to notify law enforcement. To the extent that what they say is true, that people at these facilities are just finding things because they’re making sure that all your photos are still working, then that’s fine. But our concern has been that several employees and supervisors at this facility were potentially going beyond that and actively searching for it. 

LBB: What are the lessons other companies can learn from this?
AM: One lesson is to make sure you really know what’s going on at these types of facilities. The management of the company has to work with law enforcement and respond to government orders and warrants. But they have to be aware of the pitfalls and damage to their brand and their goodwill with their customers if they’re seen as being agents of law enforcement. 

LBB: Are there potential costs for the FBI?
AM: Perhaps if someone were to bring a civil rights lawsuit. There are costs associated with the inability to bring cases and prosecute them through. You see that with the Rettenmaier prosecution. The FBI’s conduct played a central role in what the court ultimately found in terms of tossing out the search warrant because it found that the FBI agents were not truthful and misled or omitted information. And so that cost them a prosecution. And then there are more general costs: people’s perceptions of the bureau, and law enforcement in general, when they see this type of behavior.

Aaron Mackey

The EFF filed a FOIA request to assess the relationship between the Geek Squad and the FBI.
AARON MACKEY / ELECTRONIC FRONTIER FOUNDATION
INTERVIEW: ANDREA BONIME-BLANC / GEC RISK ADVISORY
THE LEGAL PERSPECTIVE HELPS SHAPE AI STRATEGY
Companies need multidisciplinary teams to find the right path.  
Andrea Bonime-Blanc
Andrea Bonime-Blanc has a new book called The Artificial Intelligence Imperative: A Practical Roadmap for Business, which will be published by Praeger in April, and the timing couldn’t be better. AI seems to be everywhere these days, and Bonime-Blanc can talk about it as a business consultant, an expert on corporate governance and a former general counsel. She is the founder and CEO of GEC Risk Advisory, which specializes in governance, risk, ethics and compliance. She worked for two decades as a C-Suite executive. In addition to her stints as a general counsel, she has served as a chief risk officer, ethics officer and compliance officer, among other positions. All of those perspectives come into play when she talks about the risks and rewards of AI. The interview has been edited for style and length.

Legal BlackBook: You’ve talked about the need for “traditional” companies to focus on artificial intelligence. What do you mean by “traditional companies,” and why have you particularly addressed them? 
Andrea Bonime-Blanc: One of the areas that my co-author [Anastassia Lauterbach] and I were concerned about is that you have, on the one hand, the leading technology companies—Apple, Facebook, Amazon, Google and Microsoft. And there are other big players like IBM and more niche technology companies that are also very attuned to AI. But then you have a vast world of businesses where some people are dipping their toe into AI. They don’t really know what to do. And you have a fourth group, which is the old-line industries that haven’t figured out what these new technologies really mean to them: the chemical industry, oil and gas, manufacturing. Some are doing the right thing and starting to inquire. But many are not. Many don’t think it’s important and don’t think it’s going to affect their businesses. We want that large group to start thinking about AI as important, because sooner or later, through data analytics or other forms of information gathering and application of AI algorithms, some of these industries that are very traditional may get disrupted. Look at cases of past companies that missed the technological change that was impacting them. Kodak invented digital photography, and they didn’t pursue it, and someone else ate their lunch. So don’t let this new technology pass you by, put you at a competitive disadvantage or disrupt you out of existence. That’s the attitude that we have in the book.  

LBB: You’ve talked about the risks and rewards that AI offers. And you’ve presented different kinds. Let’s start with cyberattacks. What are the risks that AI poses there?
ABB: AI is going to be used as a tool by people who are savvy about cyber. If you have sophisticated AI algorithms in the wrong hands, that would be a risk. For example, if the bad guys—nation states, criminal gangs, the fat guy on the bed—want to use AI as part of their cyber weapons, that’s where the serious risk occurs. 

LBB: What are potential rewards? 
ABB: The counter to that is that the military, cyber experts and governments are also developing AI tools to use in cyber warfare and cyber defense. AI can become part of the mutually assured destruction that we had with nuclear weapons. And when used by the people who are defending the integrity of business assets or government assets, it can be something very beneficial. It will help you find out about attacks while they’re happening. So it can be both a tool for good and a tool for bad.  

LBB: You also talked about the ethics and governance challenges that AI imposes. What are some key issues there? 
ABB: From an ethics standpoint, what we’re most concerned about is some of the issues that come up with how AI is created in the first place. And it always goes back to the humans involved. Most programmers are young, well-educated men. If you have a bunch of young white males who are well-to-do creating the algorithms, you’re missing out on a very large swath of society. You may be missing the perspective and the wisdom and the knowledge that women might have and that older people might have, that people from other geographies might have. So the diversity piece is an important ethical issue. And who has access to AI is another issue that has societal implications. Who is using AI, who can use AI—there are some inequality issues. Also, there’s the employment and labor impact. How many jobs won’t exist tomorrow as a result of AI and robotics? I was just in the Midwest yesterday visiting with a client that’s in the energy sector, and one of the questions they have is, How will AI and robotics impact their labor force? They have a lot of blue-collar workers doing manual work. As robots and AI become more sophisticated, these jobs are going to disappear. What is the responsibility of a business in planning for that, maybe retraining the workforce? There’s also a good news story buried in there. When there’s major technological disruption in the marketplace, a lot of new jobs that nobody foresaw also emerge. Some of the new jobs that are being talked about are AI data designers, AI data trainers, people who help to decide what data goes into the algorithm.

LBB: A central point you make is that the way a company handles AI can have an impact on its reputation. 
ABB: Take the Equifax situation or Anthem, where privacy data was hacked or stolen and ended up on the dark web. So companies like Anthem and Equifax that have this treasure trove of privacy data, if the AI algorithms they use to understand and develop and manage the data and then to interact with customers are not going through very clear and systematic quality management, they may actually expose themselves to cyberhacking. And that can be a huge reputational risk. 

LBB: We hear all the time about reputations being damaged from cyberattacks. How can a company mitigate the damage?
ABB:  I did a research report for The Conference Board on cyber risk governance, and we looked at several cases, including Anthem, Target, Sony and J.P. Morgan Chase. Each of them was attacked and hacked in different ways, but I looked at the reputational risk profile for the four entities, and they were very different cases. And if you take the J.P. Morgan case, where they were already spending a good $250 million a year on cybersecurity, their reputational hit was lower than, say, an Anthem or Target, who apparently didn’t have very strong defenses, and it was clear that they had not paid as much attention as J.P. Morgan had. I had statistics on that in the report using data from a Swiss company, RepRisk, that creates a reputational risk index on companies based on media and social media responses. The idea there is that if you build the resilience—that is, you know what your risks are, and you’ve actually created the right kinds of programs, training and controls to try to mitigate that risk, and then something actually happens—the market and the stakeholders are going to be a lot more forgiving than if you never do anything or you do it negligently, which certainly came out in cases like Target, Anthem and, more recently, Equifax. 

LBB: You’ve been a general counsel yourself. What role should the GC play in creating and managing a company’s AI strategy?
ABB: The most important thing to do as a general counsel is, first of all, to inform yourself of what AI means and how it might affect your particular business. So do some investigating on your own. But in terms of the responsibility to the company and the shareholders, a general counsel can be a very important player—a key player, actually—in helping to structure the right approach in the company, creating a cross-functional team. You want to have some key players looking at this issue in a very concerted way. Depending on the industry, this could be the general counsel along with other key members of executive management—operations, innovation, strategy, IT, risk management, R&D. Clearly, another important aspect of this is the executive team—making this part of their strategic review. Assuming that the general counsel is part of the executive management—I know that a few aren’t, which is shocking to me—but it really has to start with the executive management. “What are we doing about AI?” Once the C-Suite has figured out what they need to do, then the general counsel can be part of an ad hoc committee that is looking at it more proactively. The GC, of course, is looking at the compliance, legal and regulatory issues that might be involved, but other people are looking at their areas and really sharing the information. 
  Then they can bring in experts to talk to them, do some further research. But it has to start at the C-Suite in terms of the strategy of the company. “Do we bring AI into the picture? Do we need to? Or are we going to be subverted and disrupted by a competitor that is using it? Or is there a completely different business model out there—an unforeseen disrupter—that is going to upend our business?” Look at Amazon with retail, right? They came out of nowhere, and they’re disrupting Walmart. The point is: How are you going to incorporate this into your strategy? Who’s going to look at it within the company? I think the GC should always be part of that group. 

LBB: You talked about the general counsel educating himself or herself. How much expertise in cybersecurity and artificial intelligence does a general counsel need? 
ABB: It depends on what kind of business and industry you’re in. At a minimum, we should all be informed. We should all be curious, because it’s affecting our day-to-day lives—both cyber issues and AI issues. We’re using AI in our phones right now, and we don’t even know that we’re doing that. And we’re being cyberattacked, and our personal information is being stolen by cyber criminals on a daily basis, so as a citizen I would say that we all need to be somewhat informed. As a general counsel, there’s a heightened responsibility not only to be informed but to keep up with the legal, regulatory and compliance requirements that are coming down the pike for your industry. Clearly, it’s part of the big bucket of technology affecting your business, as more and more technology is doing. We’re at the threshold of a potential major technological disruption of the financial sector through blockchain and cryptocurrency. I don’t know enough about that to give you advice, but I know that if those technologies achieve what they aim to achieve, they’re going to completely disrupt the banking sector as it exists today and the way we pay for things—and the way we create transparency and accountability around those things. So depending on what industry you’re in, the GC really has a responsibility to be personally informed, to be legally informed, and then to be informed in a way that they can be a contributor to the discussion at their business—at the executive level and, frankly, vis-à-vis the board as well.

LBB: Speaking of which, let’s turn to the board. A focus of your work has, for a long time, been the board of directors. So what should the board’s role be? 
ABB: To take the 50,000-foot view, boards have an obligation to understand how AI and other technologies intersect with the business. Even if you’re in the most traditional kind, you have a responsibility to understand if, from a strategic standpoint, your business is going to get disrupted. And/or, will your business achieve a competitive advantage if it starts incorporating some of these tools? So if I’m a director of a widget company and we have traditional factories around the world, I would be asking myself: “How do we use technology in our widget factories to enhance value for the shareholders?” And that might lead to a discussion of: Who is in the factory right now? What means of production are they using? Are there robotics involved? If there aren’t, why not? Are our competitors ahead of us in thinking about how technology is incorporated into and improves the productivity of the company? Again, very big-picture, that’s their responsibility. 
  But we’re talking about governance. While the executive team has the responsibility to develop and implement the strategy, the board has oversight responsibilities. Both of them share responsibility for a number of steps toward understanding AI. There has to be discussion for the full board about data. What data do we have in our organization? What data value strategy do we have? Do we have information that could be deployed into an AI algorithm that could create efficiencies, create differentiation and new ways of understanding and delivering our products and services? And then you bring in outside experts to help you think about this. Obviously, you want to have people like your chief information security officer, chief technology officer, chief risk officer, general counsel—those people need to be part of that discussion. 
  A key additional point here, both at the C-Suite level and the board level, is that you want to have a very clear idea of your talent management. Who do you have managing technology information security? Do you have the right people? Do you need new people? And that has to be done in conjunction with the chief technology officer. And then you also need to have that futuristic look of understanding where this is going. How it will affect key stakeholders in the organization, especially employees and labor, but also third parties, customers, the media, the government, the regulators? Another thing that I think is really important with the board is that it needs to have a couple of people who can talk about this—understand what questions to ask. Even if they’re not tech experts, they understand cyber. They understand technology. And they can ask the chief technology officer, the chief information security officer, the general counsel in some cases, the right questions about how they’re managing data, and how it will interface with AI—and how AI will interface with the company’s products and services.  

LBB: If a company doesn’t have directors who match that description, should they be trying to give a couple of people intensive training? Or should they be out there looking for new directors to bring in, who already have at least a pretty good understanding of cybersecurity and artificial intelligence?
ABB: Both options are good. Part of the challenge you have in governance and boards, and this goes beyond technology, is that you usually have similar, homogenous people. Not diverse. And by that I don’t just mean gender and race. You have people who have been CEOs and CFOs. They’re all sophisticated people and have done a lot in terms of their career, but their worldview is based on running businesses and financial matters. And this whole cyber development and these new technologies coming around—AI and blockchain and so on—are complicated and require some additional firepower on the board. You can train people, you can give them intensive courses. NACD, for example, has a cyber risk governance certificate for boards. And that will be helpful. But depending on the business you’re in and how disrupted it might get, you really want to bring in some of that new blood that is conversant with and maybe even very knowledgeable about technology in general and the particular industry that the company is in. I’m also a big proponent of folks who, like me, have a legal background sitting on boards. Again, they bring a view that is a little different from the technical, financial and operations person. Also add chief risk officers—people who have done that kind of work—because they bring the risk lens into the picture, and with it a broader view.  

LBB: What percentage of the Fortune 500 would pass muster right now if their boards were examined to see if they had individuals with the kind of understanding that you’re advocating?
ABB: I do not have numbers. I can only give you my gut on that. My gut tells me that boards are woefully unprepared. I think the ones that are prepared are the big technology companies, which have been very avant-garde about this. The Amazons, Microsofts, Googles. They have very knowledgeable people who are well prepared for this new world we’re entering. And then you might have a few others that have been working in this space. But I would say that the vast majority of the Fortune 500 probably don’t have one of those people, let alone two or three. 

The general counsel can be a key player in helping to devise an AI strategy and creating a cross-functional team to implement it.
Companies that ignore AI may find themselves disrupted out of business.
WHY THE U.S. SHOULD EMBRACE THE GDPR
Even though the EU regulation may be onerous, it may be what the cybersecurity doctor ordered.
By Steven Senz
The EU’s General Data Protection Regulation (GDPR) takes effect on May 25. And all in-house lawyers should be well aware by now that the key provisions require reporting data breaches, removing personal data when requested to do so, and protecting personally identifiable information (PII). It covers all organizations that do business with an EU entity and protects individuals in the EU, including U.S. citizens who move there and establish residency.
  Companies and their lawyers will undoubtedly be busy, as the deadline approaches, consulting outside experts, drafting policies and procedures, and training employees who will carry them out. But they will also need to navigate the inevitable gray areas that accompany any new and complex regulation. 
  As an experienced cybersecurity consultant, I have recently been working with one institution that is trying to figure out how to proceed. Though the organization is a community college, the questions it has raised are likely to surface not only in other academic institutions but in many different business environments. They involve the privacy rights of EU citizens who reside in the United States. 
  The concern for individual privacy on campus has typically been addressed through the Federal Educational Rights and Privacy Act (FERPA) and identity theft legislation. The GDPR raises the bar for organizations to control PII, and it changes the viewpoint because citizens from the EU have the right to control their own information. Individuals may consent (or not) to provide PII and may request that it be deleted or corrected. Now U.S. organizations must comply with these rules.  
  John Williams is an IT director at Anne Arundel Community College in Maryland. He is in the process of incorporating the GDPR into the college’s policies and procedures, and it’s been challenging. “It is difficult to translate the regulation from the EU to an academic environment in the United States,” he said. There are three key questions he’s trying to answer.
    1. When do U.S. privacy laws pre-empt the GDPR?
    2. Can EU citizens request that all their personal data be removed from their academic files, or only the personal data that they provided?
    3. How can an academic institution ensure that identifying students and faculty members as EU citizens isn’t used in a way that either discriminates against them or treats them preferentially?

  Williams asked some good questions that general counsel may also be pondering. In instances where the GDPR conflicts with U.S. privacy laws, it is unclear which laws will prevail and whether rulings will be specific to each case or generally applied. In the area of breaches, the GDPR requires that notification be delivered within 72 hours to the appropriate data protection authority and to any individuals the breach is likely to harm, though the format of notification is not specified. U.S. privacy laws merely require that a breach notification be issued “without unreasonable delay” after a breach has been confirmed. Unfortunately, this can often take many months, as organizations are hesitant to release notifications that hurt their “brands.”
  It is also unclear what information an EU citizen can ask to be changed or deleted. For instance, students do have the right to request that personal information they provided to enroll (date of birth, identification number, country of birth, etc.) be corrected or deleted, but it is not clear if this right extends to personal information about the individual that the school rather than the student generated, such as grades or transcripts obtained from other institutions. 
  Also of concern to academic institutions is another question that sounds like one that a civil rights lawyer may have to answer. Does the request that EU students identify themselves as such on their applications constitute a form of discrimination in the selection process? Will institutions have to convince admissions auditors and regulatory authorities that individuals from EU countries were not denied admission as a means of avoiding GDPR compliance?  
  I have been assisting federal agencies for two decades in accrediting and auditing their information systems. I have written security policies and procedures that codified the expected behavior of federal and contractor staffs. From my perspective, the GDPR should be considered a best practice and adopted in the U.S.
  Since the early stages of computer processing, U.S. organizations have been protecting information that is deemed essential for the well-being of the population (defense, finance, health, intelligence, environmental, etc.), with a focus on the adverse consequences that disclosure of the information could cause to the organization’s mission or operations. The impact on individuals was only considered in the context of loss of life or serious injury. The concern for individual privacy and personal electronic information provided to an organization has only been addressed in the last 10 to 15 years. 
  Initially the concern about PII was how to protect this information from accidental loss or disclosure, with the underlying assumption that once the information was given to the organization, the individual no longer retained “ownership.” GDPR changes that viewpoint. An individual from the EU can request that PII be deleted (or corrected), and the organization is obligated to comply within a reasonable time frame. This seems like an important improvement in today’s data-driven world.
  There are many questions about implementing GDPR compliance in educational and commercial organizations. The federal government requires its contractors to comply with NIST Special Publication (SP) 800-171 Rev. 1. This document will assist organizations in meeting GDPR requirements; however, control selections are based on risk. Each organization must therefore perform a risk assessment, which includes a privacy impact assessment (PIA), to determine the processes and procedures in place and the changes that may be needed to improve them.
  Nowadays most federal organizations have a civil liberties and privacy officer as well as a general counsel to ensure that they are compliant with evolving privacy regulations. It is highly recommended that organizations that collect PII, and don’t have one, add a privacy or data protection officer to work with the general counsel and the IT department to address the technical and legal aspects of GDPR. 
  One of the most important messages they can deliver is that security and privacy are the responsibility of everyone in the organization. If this sounds obvious, you would be surprised to see how rarely it is truly embraced. Too often people believe that information security or privacy is the sole purview of the IT or legal or information assurance department. Nothing could be further from the truth. In the Navy, there is the expression “loose lips sink ships.” In corporate America, security and privacy are only as robust as the least-trained individual. 
  There are many studies that back this up. An estimated 40-50 percent of data breaches are caused by poorly trained employees who fail to practice operational security, which can result in a loss or compromise of data. And I’ve seen it. You probably have too. I have witnessed people discussing sensitive information outside of controlled spaces. I’ve seen them fail to lock their workstations when going for coffee. I’ve watched them print sensitive information on a shared printer and fail to quickly collect the documents, or leave them on their cubicle desks. I’ve heard sensitive topics discussed so loudly that anyone in the hallway could overhear what was being said. 
  So while general counsel focus on the new rules that will roll out of Europe in May, this might be a good opportunity for organizations to redouble their efforts to train everyone about the importance of data security. Whether it’s the laptop stolen from an executive’s car or the phishing email that leads to a ransomware attack, there are vulnerabilities everywhere. Companies should have annual cybersecurity and privacy training for all employees. We need to understand and constantly remind ourselves that security and privacy protection is the responsibility of everyone, because we all represent the first line of defense.

Steven Senz is a consultant who has over 35 years’ experience in the computer, cybersecurity and telecommunications industries, developing new products and services for the public and private sectors. Senz, who has a master’s from Cornell University and an MBA from the University of Michigan, is certified as a CISSP, ISSMP, CISA, CHP, CRSIC and HITECH. He has participated in multiple working groups sponsored by the National Institute of Standards and Technology (NIST) and the Committee for National Security Systems (CNSS). Formerly the director of information assurance for Inscope International, he headed the Center of Excellence for Cybersecurity. He is also the founder of Your Cyber Security Matters and a co-developer of the ASCERTIS application for the authorization of nonfederal information systems. 

In-house lawyers need to consider questions like: When do U.S. privacy laws pre-empt the GDPR?
This might be a good opportunity for organizations to redouble their efforts to train everyone about the importance of data security.
ADD THIS TO YOUR CYBERSECURITY READING LIST
Cisco’s annual report is comprehensive and also accessible to lawyers.
By David Hechler
   In late February, Cisco released its 2018 Annual Cybersecurity Report. It’s the company’s 11th, and it’s undoubtedly required reading for chief information security officers. After all, the people who put out this report aren’t just researchers or bloggers (or newsletter editors). Cisco is intimately involved in building the products and offering the services (including training) designed to help enhance our digital security. It also partners with a network of companies and conducts a benchmarking survey that last year yielded more than 3,600 responses from 26 countries.
  But what about in-house lawyers? Should the 68-page report be required reading for them? Probably not required, but definitely recommended. Given the burgeoning risks, the ever-morphing threats and the host of legal and compliance issues involved in cybersecurity, it’s a good idea for inside lawyers to keep up with developments. And the Cisco report offers an excellent overview. 
  It’s also loaded with detail. And that can be intimidating. Most of us don’t speak tech. Or at least we’re not fluent. But that may actually be an excellent reason to dive in. After all, how do you learn a language? Not all at once. You learn the basics, and then you add words and phrases as you go. If you’re reading this article, you already have a cybersecurity foundation. You know enough to absorb a lot from Cisco’s report, and it may help prepare you for what’s ahead.
  Let’s start with the big picture. One key takeaway is that both the “defenders” and the “attackers” (as they’re called in the report) have come a long way. Cisco’s report is a wide-ranging document that backs up assertions with a wealth of statistics. It studiously avoids hyperbole and even calls out other defenders who have not always been so cautious. 
  For example, the report points out that in May 2017, when the WannaCry attack was first detected, many organizations in both the private and public sectors mistakenly attributed the source to a phishing campaign and malicious email attachments. This proved to be an imprudent rush to judgment, the report said. Wrong information leads organizations to adopt the wrong defensive measures. 
  Sounding like careful—and mature—journalists, Cisco’s team wrote: “Being right is better than being first.” 
  Later, when they’re talking about the attackers, they describe evidence of, if not maturity (which somehow seems like the wrong word), then sophistication. As companies have moved data into the cloud, attackers have found new vulnerabilities, partly due to “the lack of clarity around who exactly is responsible for protecting those environments,” Cisco says. Attackers have managed to conceal their assaults by launching them using legitimate services like Twitter, Google Docs, Dropbox and Hotmail.com. 
  Sounding here like The Wall Street Journal, the report notes: “Attackers benefit from this technique because it allows them to reduce overhead and improve their return on investment.” 
  If the language seems almost respectful, it’s not an aberration. Attackers have clearly upped their game, according to Cisco. Malware attacks have reached “unprecedented levels of sophistication and impact.” And like the cloud, the internet of things (IoT) is an environment left lightly guarded. Supply chains are another such target. Attackers have become adept at recognizing and taking advantage of these weaknesses. 
  Yet another mark of the adversaries’ progress is their use of encryption. It’s not only the defenders who use it to their advantage. Both legitimate and malicious web traffic is often encrypted these days. It’s another way that the attackers conceal their handiwork. And they have also increased their productivity by using automation, machine learning and artificial intelligence.
  The deeper you read, the more evidence you find that each side is fighting fire with fire. AI is used to attack, and AI is used to detect attacks. 
  Toward the end of the report, there are statistics that paint a picture of the volume and types of attacks, the average costs of the damage they do, and the budgetary trends in the departments that struggle to defend against them [see “What the Numbers Say”, below]. Cisco also includes lots of recommendations defenders will want to study.
  It’s not surprising that cybersecurity professionals expect plenty of challenges in the year ahead. Or that companies are having a hard time filling open positions as they try to hire reinforcements. What is surprising, given all of the Sturm und Drang devoted to this topic, is how realistic the leaders of these defense teams seem to be. 
  “Most security leaders said they believe their companies are spending appropriately on security,” Cisco noted near the end of the report. That was probably one of the few passages that would elicit a sigh of relief from their bosses. 
  No, this is not the kind of reading to begin as you try to relax for a comfortable night’s rest. It’s more likely to provoke nightmares. But the upside is that, taken to heart, it may help you avert the waking kind. 
WHAT THE NUMBERS SAY
[Charts: “The Greatest Obstacle to Security,” 2015–2017, comparing Budget Constraints, Compatibility Issues with Legacy Systems and Lack of Trained Personnel; and “Organizations Hire More Security Professionals,” showing the median number of professionals dedicated to security, 2014–2017. Source: Cisco 2018 Security Capabilities Benchmark Study]
CYBERSECURITY: The Tipping Point?
How will we know when cybersecurity has become a household word, a genuine phenomenon? Not by the number of law review articles on the subject, or even the number of Big Law practice groups devoted to it. It’s more likely to be revealed by an unexpected event in the popular culture. 
    Would this qualify? In September, Girl Scouts of the U.S.A. will roll out its first cybersecurity badges that scouts can earn by demonstrating their mastery of the subject. It’s part of an effort to boost girls’ interest in tech, which in turn could lead to their greater representation in the field. 
  Read more from NBC News.
IN THE NEWS
When Will They Ever Learn?
What’s the definition of education? A pretty good one, when you think about it, is the ability to change.
  Now wrap your mind around this. According to a recent report by CyberArk, 46 percent of organizations never change their cybersecurity strategy even after they suffer a cyberattack.
  And only 8 percent of the security professionals surveyed said that their company continuously conducts penetration tests to determine where their vulnerabilities are located.
  These numbers suggest that, where cybersecurity is concerned, some of the pros companies depend on may need to be sent to reeducation camp.
  Read more in TechRepublic.

Collaborating for Security
A few years ago, Siemens was immersed in a bribery scandal. In the wake of it, as the company took major steps to reform, then-General Counsel Peter Solmssen reached out to his company’s competitors, and they agreed to cooperate to combat not just bribery but the competitive advantage it had offered. Solmssen called this joint effort the Cabal of the Good. 
  Flash forward. In February, Siemens and seven of its competitors signed what they called the Charter of Trust, vowing to cooperate in order to enhance cybersecurity worldwide. It’s actually even more ambitious than this may sound. It calls not only for the cooperation of the eight companies (the other seven are Airbus, Allianz, Daimler Group, IBM, NXP, SGS and Deutsche Telekom), but also governments. 
  Read more in AutomationWorld.

The First Cybersecurity Style Guide
  The Bishop Fox Cybersecurity Style Guide, published in February, is billed as the first of its kind. Hacker lexicons have been published, but never one dedicated to cybersecurity, according to lead editor Brianne Hughes.
  It aims for breadth rather than depth, and it does a good job. In 92 pages (including preliminary notes, appendices and an epilogue) it’s got everything from AI to zero day, and it’s almost guaranteed that you won’t know them all.
  One warning: it’s designed for security researchers. That means there’s an emphasis on proper usage. Many of the words listed are not defined. This can be annoying for a more general audience, and a missed opportunity for the editors. (The appendix does include links to other guides that fill in the gaps.)
  There’s one particularly nice feature. If you like it, you’ve got it. You can download it simply by clicking here.
  Read more on The Parallax.
A Company’s Cybersecurity Information May Not Be an Open Book
Let’s say your company just discovered it’s suffered a data breach. The CEO asks whether it should be reported to the state police. As the general counsel, you feel it’s clearly information that’s going to have to be disclosed within a few months, and you point out that the police may help the company counter the attack. 
  But your boss isn’t happy. The company has been struggling lately. “This would be a lousy time for this to get out,” the CEO complains. And what if the media catch wind and file a Freedom of Information Act request with the police? 
  This isn’t purely hypothetical. The issue has come up, and in March Michigan’s Legislature overwhelmingly passed a bill that would exempt a company’s cybersecurity information from the state’s open records laws. 
  Predictably, the vote was not greeted warmly in the media or by the media.
   Still, two weeks later Republican Governor Rick Snyder signed the bill into law.
  Read more on Crain’s Detroit Business here and (for the follow-up) here.

Spring at last! A time of renewal. In this, our second issue of CyberInsecurity, renewal is a bit redundant. It’s all pretty new. But we have added two features in this issue that you can expect to find each month.
  First, there’s a new section called Cybersecurity in the News. We know that you keep up with the big events; these short takes focus on smaller fare that you may have missed. This month, for instance, we cue up some surprising statistics, a new cybersecurity lexicon you may want to download, and a story that suggests this field may have achieved household-name status.
  Also new: our first article by an outside expert. Steven Senz, a consultant who has worked in data security for more than two decades, talks about the questions a client is already trying to navigate months before the EU’s General Data Protection Regulation takes effect in late May. By now the GDPR should be provoking widespread anxiety. This is a good opportunity, Senz points out, for general counsel to emphasize that information security at their companies is everyone’s responsibility.
  We also have two interviews this month. One is with Andrea Bonime-Blanc, who has a book coming out in April on artificial intelligence. In our conversation, she explains why now is the time for general counsel and boards of directors to work with management to craft an AI strategy—before their competitors find ways to use it to disrupt them out of business.
  The second returns to a subject we find particularly provocative: When and how should companies cooperate with the government? We interviewed a staff attorney at the Electronic Frontier Foundation who has been investigating a cozy relationship between retailer Best Buy and the FBI—too cozy, the attorney suggests.
  Finally, we reviewed Cisco’s Annual Cybersecurity Report for the purpose of advising lawyers who don’t have a tech background whether it’s both accessible and worth reading. Since we’re sharing our conclusions, you can probably guess what we think.
  Enjoy the spring. Thanks for tuning in. And let us know what you think.

        Welcome to Legal BlackBook, a platform that features writing about the legal arena. 
        The subject of the moment, and it’s a moment that’s likely to last for some time, is cybersecurity. Or, as we call this newsletter, CyberInsecurity. And if you're wondering whether your insecurity is justified, we've included survey data in a graphic that should provide confirmation (What the Numbers Say).
   One facet of this subject that’s been fascinating to watch is the complicated relationship between government and business. Are they friends, enemies, frenemies? 
   The government can pass along tips to protect companies from attack. It can also pressure them to provide access to their customer information, which may be less