If you are interested in contributing thought leadership or other content to this platform,
please contact Lester Goodman, Publisher
Spring at last! A time of renewal. In this, our second issue of CyberInsecurity, renewal is a bit redundant. It's all pretty new. But we have added two features in this issue that you can expect to find each month.
First, there’s a new section called Cybersecurity in the News. We know that you keep up with the big events; these short takes focus on smaller fare that you may have missed. This month, for instance, we cue up some surprising statistics, a new
David Hechler, Editor-in-Chief
CAN A COMPANY BE TOO COOPERATIVE?
Best Buy’s employees weren’t law enforcement agents, but they acted as if they were.
"How we think about cooperation is different than the way [FBI] Director [Christopher] Wray thinks about it."
They don't look like cops, and their cars don't look like police vehicles. They're not
supposed to. They’re supposed to epitomize computer geeks. But lawyers at the Electronic Frontier Foundation (EFF) have learned that some of the individuals who work for the Geek Squad, the Best Buy employees who repair customers’ computer equipment, have been searching devices for incriminating information and passing what they find to the FBI. Aaron Mackey, a staff attorney at the EFF, has been working on the case for the San Francisco-based nonprofit. The civil liberties group is concerned about the Fourth Amendment implications, and it’s been digging for documents through the Freedom of Information Act (FOIA). Mackey, a former journalist, has also been blogging about it. The interview has been edited for style and length.
Legal BlackBook: How did you first bump into this case?
Aaron Mackey: I first learned of this story when a doctor in Southern California [named Mark Rettenmaier] was prosecuted as a result of the Geek Squad finding potentially illegal material on a hard drive that he had submitted for repairs.
LBB: Have there been other cases like that one?
AM: We know of at least one other. It's actually a state case in Kentucky, which the FBI and federal prosecutors declined to prosecute and referred to state authorities, who are prosecuting a man now. There may be others.
LBB: To your knowledge, did all of the Geek Squad searches that resulted in some law enforcement activity—whether they resulted in charges or not—involve alleged child pornography?
AM: The prosecutions that we’ve seen, yes. Some of the documents that we received as part of our Freedom of Information Act request make reference to identity theft investigations, but it’s a cover sheet and there’s a lot that’s redacted. It could be just a file name or a designation. It may not mean that there’s an investigation. What I can say is that all of the investigative reports that we’ve seen as part of our FOIA lawsuit involve investigations of potential child pornography.
LBB: After you found out about the Rettenmaier prosecution, what did you decide to do?
AM: We decided to file a Freedom of Information Act request to try to learn more about the relationship between the FBI and the Geek Squad. We wanted to understand what that relationship looked like and how it worked. We were concerned about the Geek Squad employees being paid to find this material, which was something that came out in the Rettenmaier prosecution. We were concerned also that perhaps the technicians who were being paid were going beyond their repair duties and actually doing investigations as an arm of the federal government. The FBI denied our request and told us they couldn’t confirm or deny anything. Then we filed our lawsuit, and they’ve produced a number of records—and withheld a number of others.
LBB: Where did you file the lawsuit?
AM: In Washington. We have an attorney in D.C., David Sobel, who does a lot of our FOIA litigation. Our civil liberties director, David Greene, is also working with us.
LBB: In summary, what is the FBI’s argument?
AM: What the FBI has said from the beginning, and what the Geek Squad and Best Buy have said as well, is that there’s no coordination. They don’t have an established relationship. All that happens is that the Geek Squad employees, in the course of repairing people’s devices, call the FBI when they find child pornography or other illegal material. And then
the FBI comes and reviews what the Best Buy employees have found. They make a determination, and then they seize the device and get a search warrant and search for additional material. Then they decide whether to bring charges. Best Buy put out a
statement that says there was no high-level coordination. There were four employees who were receiving payments. And three of them no longer work for Best Buy. The other has
LBB: What’s your response to the FBI?
AM: We’re concerned that the documents produced in the Rettenmaier prosecution, as well as the documents produced in our FOIA request, show some sort of relationship that goes back at least to 2008. The FBI was having meetings at the Geek Squad facility, they were receiving tours and they were paying individuals who we later learned were supervisors at the Geek Squad facility. So our No. 1 concern here is that this relationship is something more than just a private employee of Best Buy doing his or her work and finding something illegal and calling the cops. Our concern is that they’re going beyond that and actually doing something at the direction of the FBI. And in so doing, they’ve transformed themselves into agents or extensions of the FBI, and they shouldn’t be searching people’s devices without a warrant. And that’s ultimately what we’re concerned about: the potential Fourth Amendment violations of everyone who sends a device to be repaired at the Geek Squad facility.
LBB: This facility you’re referring to is in Kentucky?
AM: Yes. There are Geek Squad employees at Best Buy stores throughout the country. I think they come out to your home, and if you have low-level, minor repairs, you can take them into the stores. But when you have a hard drive that's failing or you've lost data on your device—what Geek Squad calls "data recovery"—they have to send those computers to this Brooks, Kentucky, facility. And it's there that technicians try to recover the data or, if the hard drive is broken, extract it and put it on a new hard drive or other media. It's a wonderful thing if you lose your wedding photos, or photos of your kids. You can send it to them, and they'll fix it.
LBB: Is that the only place where, to your knowledge, this kind of law enforcement role has played out?
AM: Yes. In our FOIA request, we asked for documents that would show whether they had any sort of relationship with other repair facilities—be they national tech businesses or local repair facilities in various cities—and the FBI has refused to confirm or deny whether any of those records exist.
LBB: Has anything changed to date as a result of your efforts and the resulting publicity?
AM: It’s hard to say. Best Buy has basically said about their employees who were paid that they didn’t know about it and they’ve put a stop to it. And in testimony in the state case, one of the special agents of the FBI who was involved said that they’ve stopped paying employees and they’ve stopped having that sort of relationship with the Geek Squad, although she was cagey about it—didn’t even want to call it a relationship. But we don’t have confirmation.
LBB: What are some of the unanswered questions you have at this point?
AM: Are there similar relationships that the FBI has with other repair facilities, say Apple and Apple Care? You bring your device into Apple, whether it’s an iPad or a MacBook, they take it and ship it out somewhere. Is there a similar type of relationship with wherever those repairs occur? Were there additional payments made to Geek Squad employees? How far back does this relationship go? What does the one-on-one interaction between the Best Buy employees and the FBI look like? Those are some of the basic questions that we don’t have answers to.
LBB: What’s next?
AM: The government has produced all the records that they claim they’re going to produce, and so they’re going to move for summary judgment, arguing that they’ve met their burden under FOIA. What’s up next for us is we’ll be filing a cross-motion challenging their withholdings and asking the court to order further disclosure of records, including ordering the FBI to confirm or deny whether they have relationships with other repair facilities.
LBB: Do you think you have a good chance at obtaining that information?
AM: We think that there's good law on this point in FOIA. When you have some official confirmation of a similar type of pattern or practice, you can't just say, "We're not even going to tell you whether or not these documents exist." So we think we have a fair shot at convincing the court that the FBI has to at least give us a yes or no answer, and then process records if they do have them. As for whether we'll ultimately prevail in getting some of the records released, or getting less redacted versions, we're optimistic.
LBB: Do you have reason to believe that they do have relationships with other companies similar to the one they had with Best Buy?
AM: We don’t have any documents or anything else that we’re aware of that would show us another relationship. We only learned about the Best Buy relationship, which has been going on for more than 10 years, as a result of this more recent prosecution. So it’s obvious that there are efforts to try to keep these relationships secret.
LBB: FBI Director Christopher Wray gave a speech at a cybersecurity conference in Boston in March in which he talked about the importance of cooperation—the cooperation of all the government agencies that work on cybersecurity, and cooperation between the government and the private sector. The theme he kept returning to was: We’re all in this together. In your opinion, what are some of the potential benefits of public-private cooperation in this area?
AM: How we think about cooperation is different than the way Director Wray thinks about it. The one area of cooperation that we would like to see is the government more affirmatively disclosing when they have identified vulnerabilities in software or hardware that’s used by all of us. And they should let the manufacturers know so they can fix the flaws rather than the government’s hoarding the information to use offensively, or keeping it secret so that they can patch their systems but leave us all vulnerable. That’s definitely an area where we think there needs to be better cooperation. But we see that as something where the government needs to do a better job, not necessarily private industry. Our concern, whenever there is private industry cooperation, is that it potentially requires those private actors to side with the government’s interests at the expense of the privacy and security of the company’s users. And that’s something that we think is really problematic.
LBB: To continue to explore that perspective, what are the potential dangers for businesses when they cooperate with the government?
AM: You put yourself in the position of harming your relationships with your customers, with your shareholders and with the public more broadly when you're seen as siding with the government and perhaps doing things that intrude on the privacy of your users or jeopardize their security.
LBB: In your opinion, has Best Buy made mistakes here?
AM: It's hard to tell. What we're still looking for is whether there was some sort of high-level coordination. It's clear that there were supervisors at this facility who were taking money from the FBI. What we don't know is whether they did so and then became active agents for the government. At the same time, I want to recognize that, to the extent that employees at Best Buy or anywhere find illegal material, particularly child pornography, there are legal requirements—usually state law—that say they have to notify law enforcement. To the extent that what they say is true, that people at these facilities are just finding things because they're making sure that all your photos are still working, then that's fine. But our concern has been that it appears several employees and supervisors at this facility were potentially going beyond that and actively searching for this material.
LBB: What are the lessons other companies can learn from this?
AM: One lesson is to make sure you really know what’s going on at these types of facilities. The management of the company has to work with law enforcement and respond to government orders and warrants. But they have to be aware of the pitfalls and damage to their brand and their goodwill with their customers if they’re seen as being agents of law enforcement.
LBB: Are there potential costs for the FBI?
AM: Perhaps if someone were to bring a civil rights lawsuit. There are also costs associated with the inability to bring cases and prosecute them through. You see that with the Rettenmaier prosecution. The FBI's conduct played a central role in the court's decision to toss out the search warrant: the court found that the FBI agents were not truthful and had misled it or omitted information. And so that cost them a prosecution. And then there are more general costs: people's perceptions of the bureau, and of law enforcement in general, when they see this type of behavior.
The EFF filed a FOIA request to assess the relationship between the Geek Squad and the FBI.
AARON MACKEY / ELECTRONIC FRONTIER FOUNDATION
INTERVIEW: ANDREA BONIME-BLANC / GEC RISK ADVISORY
THE LEGAL PERSPECTIVE HELPS SHAPE AI STRATEGY
Companies need multidisciplinary teams to find the right path.
Andrea Bonime-Blanc has a new book called The Artificial Intelligence Imperative: A Practical Roadmap for Business, which will be published by Praeger in April, and the timing couldn't be better. AI seems to be everywhere these days, and Bonime-Blanc can talk about it as a business consultant, an expert on corporate governance and a former general counsel. She is the founder and CEO of GEC Risk Advisory, which specializes in governance, risk, ethics and compliance. She worked for two decades as a C-Suite executive. In addition to her stints as a general counsel, she has served as a chief risk officer, ethics officer and compliance officer, among other positions. All of those perspectives come into play when she talks about the risks and rewards of AI. The interview has been edited for style and length.
Legal BlackBook: You’ve talked about the need for “traditional” companies to focus on artificial intelligence. What do you mean by “traditional companies,” and why have you particularly addressed them?
Andrea Bonime-Blanc: One of the areas that my co-author [Anastassia Lauterbach] and I were concerned about is that you have, on the one hand, the leading technology companies—Apple, Facebook, Amazon, Google and Microsoft. And there are other big players like IBM and more niche technology companies that are also very attuned to AI. But then you have a vast world of businesses where some people are dipping their toe into AI. They don’t really know what to do. And you have a fourth group, which is the old-line industries that haven’t figured out what these new technologies really mean to them: the chemical industry, oil and gas, manufacturing. Some are doing the right thing and starting to inquire. But many are not. Many don’t think it’s important and don’t think it’s going to affect their businesses. We want that large group to start thinking about AI as important, because sooner or later, through data analytics or other forms of information gathering and application of AI algorithms, some of these industries that are very traditional may get disrupted. Look at cases of past companies that missed the technological change that was impacting them. Kodak invented digital photography, and they didn’t pursue it, and someone else ate their lunch. So don’t let this new technology pass you by, put you at a competitive disadvantage or disrupt you out of existence. That’s the attitude that we have in the book.
LBB: You’ve talked about the risks and rewards that AI offers. And you’ve presented different kinds. Let’s start with cyberattacks. What are the risks that AI poses there?
ABB: AI is going to be used as a tool by people who are savvy about cyber. If you have sophisticated AI algorithms in the wrong hands, that would be a risk. For example, if the bad guys—nation states, criminal gangs, the fat guy on the bed—want to use AI as part of their cyber weapons, that’s where the serious risk occurs.
LBB: What are potential rewards?
ABB: The counter to that is that the military, cyber experts and governments are also developing AI tools to use in cyber warfare and cyber defense. AI can become part of the mutually assured destruction that we had with nuclear weapons. And when used by the people who are defending the integrity of business assets or government assets, it can be something very beneficial. It will help you find out about attacks while they’re happening. So it can be both a tool for good and a tool for bad.
LBB: You also talked about the ethics and governance challenges that AI imposes. What are some key issues there?
ABB: From an ethics standpoint, what we’re most concerned about is some of the issues that come up with how AI is created in the first place. And it always goes back to the humans involved. Most programmers are young, well-educated men. If you have a bunch of young white males who are well-to-do creating the algorithms, you’re missing out on a very large swath of society. You may be missing the perspective and the wisdom and the knowledge that women might have and that older people might have, that people from other geographies might have. So the diversity piece is an important ethical issue. And who has access to AI is another issue that has societal implications. Who is using AI, who can use AI—there are some inequality issues. Also, there’s the employment and labor impact. How many jobs won’t exist tomorrow as a result of AI and robotics? I was just in the Midwest yesterday visiting with a client that’s in the energy sector, and one of the questions they have is, How will AI and robotics impact their labor force? They have a lot of blue-collar workers doing manual work. As robots and AI become more sophisticated, these jobs are going to disappear. What is the responsibility of a business in planning for that, maybe retraining the workforce? There’s also a good news story buried in there. When there’s major technological disruption in the marketplace, a lot of new jobs that nobody foresaw also emerge. Some of the new jobs that are being talked about are AI data designers, AI data trainers, people who help to decide what data goes into the algorithm.
LBB: A central point you make is that the way a company handles AI can have an impact on its reputation.
ABB: Take the Equifax situation or Anthem, where privacy data was hacked or stolen and ended up on the dark web. So companies like Anthem and Equifax that have this treasure trove of privacy data, if the AI algorithms they use to understand and develop and manage the data and then to interact with customers are not going through very clear and systematic quality management, they may actually expose themselves to cyberhacking. And that can be a huge reputational risk.
LBB: We hear all the time about reputations being damaged from cyberattacks. How can a company mitigate the damage?
ABB: I did a research report for The Conference Board on cyber risk governance, and we looked at several cases, including Anthem, Target, Sony and J.P. Morgan Chase. Each of them was attacked and hacked in different ways, but I looked at the reputational risk profile for the four entities, and they were very different cases. And if you take the J.P. Morgan case, where they were already spending a good $250 million a year on cybersecurity, their reputational hit was lower than, say, an Anthem or Target, who apparently didn’t have very strong defenses, and it was clear that they had not paid as much attention as J.P. Morgan had. I had statistics on that in the report using data from a Swiss company, RepRisk, that creates a reputational risk index on companies based on media and social media responses. The idea there is that if you build the resilience—that is, you know what your risks are, and you’ve actually created the right kinds of programs, training and controls to try to mitigate that risk, and then something actually happens—the market and the stakeholders are going to be a lot more forgiving than if you never do anything or you do it negligently, which certainly came out in cases like Target, Anthem and, more recently, Equifax.
LBB: You've been a general counsel yourself. What role should the GC play in creating and managing a company's AI strategy?
ABB: The most important thing to do as a general counsel is, first of all, to inform yourself of what AI means and how it might affect your particular business. So do some investigating on your own. But in terms of the responsibility to the company and the shareholders, a general counsel can be a very important player—a key player, actually—in helping to structure the right approach in the company, creating a cross-functional team. You want to have some key players looking at this issue in a very concerted way. Depending on the industry, this could be the general counsel along with other key members of executive management—operations, innovation, strategy, IT, risk management, R&D. Clearly, another important aspect of this is the executive team—making this part of their strategic review. Assuming that the general counsel is part of the executive management—I know that a few aren’t, which is shocking to me—but it really has to start with the executive management. “What are we doing about AI?” Once the C-Suite has figured out what they need to do, then the general counsel can be part of an ad hoc committee that is looking at it more proactively. The GC, of course, is looking at the compliance, legal and regulatory issues that might be involved, but other people are looking at their areas and really sharing the information.
Then they can bring in experts to talk to them, do some further research. But it has to start at the C-Suite in terms of the strategy of the company. “Do we bring AI into the picture? Do we need to? Or are we going to be subverted and disrupted by a competitor that is using it? Or is there a completely different business model out there—an unforeseen disrupter—that is going to upend our business?” Look at Amazon with retail, right? They came out of nowhere, and they’re disrupting Walmart. The point is: How are you going to incorporate this into your strategy? Who’s going to look at it within the company? I think the GC should always be part of that group.
LBB: You talked about the general counsel educating himself or herself. How much expertise in cybersecurity and artificial intelligence does a general counsel need?
ABB: It depends on what kind of business and industry you’re in. At a minimum, we should all be informed. We should all be curious, because it’s affecting our day-to-day lives—both cyber issues and AI issues. We’re using AI in our phones right now, and we don’t even know that we’re doing that. And we’re being cyberattacked, and our personal information is being stolen by cyber criminals on a daily basis, so as a citizen I would say that we all need to be somewhat informed. As a general counsel, there’s a heightened responsibility not only to be informed but to keep up with the legal, regulatory and compliance requirements that are coming down the pike for your industry. Clearly, it’s part of the big bucket of technology affecting your business, as more and more technology is doing. We’re at the threshold of a potential major technological disruption of the financial sector through blockchain and cryptocurrency. I don’t know enough about that to give you advice, but I know that if those technologies achieve what they aim to achieve, they’re going to completely disrupt the banking sector as it exists today and the way we pay for things—and the way we create transparency and accountability around those things. So depending on what industry you’re in, the GC really has a responsibility to be personally informed, to be legally informed, and then to be informed in a way that they can be a contributor to the discussion at their business—at the executive level and, frankly, vis-à-vis the board as well.
LBB: Speaking of which, let’s turn to the board. A focus of your work has, for a long time, been the board of directors. So what should the board’s role be?
ABB: To take the 50,000-foot view, boards have an obligation to understand how AI and other technologies intersect with the business. Even if you're in the most traditional kind of business, you have a responsibility to understand whether, from a strategic standpoint, your business is going to get disrupted. Or will your business achieve a competitive advantage if it starts incorporating some of these tools? So if I'm a director of a widget company and we have traditional factories around the world, I would be asking myself: "How do we use technology in our widget factories to enhance value for the shareholders?" And that might lead to a discussion of: Who is in the factory right now? What means of production are they using? Are there robotics involved? If there aren't, why not? Are our competitors ahead of us in thinking about how technology is incorporated into and improves the productivity of the company? Again, very big-picture, that's their responsibility.
But we’re talking about governance. While the executive team has the responsibility to develop and implement the strategy, the board has oversight responsibilities. Both of them share responsibility for a number of steps toward understanding AI. There has to be discussion for the full board about data. What data do we have in our organization? What data value strategy do we have? Do we have information that could be deployed into an AI algorithm that could create efficiencies, create differentiation and new ways of understanding and delivering our products and services? And then you bring in outside experts to help you think about this. Obviously, you want to have people like your chief information security officer, chief technology officer, chief risk officer, general counsel—those people need to be part of that discussion.
A key additional point here, both at the C-Suite level and the board level, is that you want to have a very clear idea of your talent management. Who do you have managing technology and information security? Do you have the right people? Do you need new people? And that has to be done in conjunction with the chief technology officer. And then you also need to have that futuristic look of understanding where this is going. How will it affect key stakeholders in the organization, especially employees and labor, but also third parties, customers, the media, the government, the regulators? Another thing that I think is really important with the board is that it needs to have a couple of people who can talk about this and understand what questions to ask. Even if they're not tech experts, they understand cyber. They understand technology. And they can ask the chief technology officer, the chief information security officer and, in some cases, the general counsel the right questions about how they're managing data, how it will interface with AI, and how AI will interface with the company's products and services.
LBB: If a company doesn’t have directors who match that description, should they be trying to give a couple of people intensive training? Or should they be out there looking for new directors to bring in, who already have at least a pretty good understanding of cybersecurity and artificial intelligence?
ABB: Both options are good. Part of the challenge you have in governance and boards, and this goes beyond technology, is that you usually have similar, homogenous people. Not diverse. And by that I don’t just mean gender and race. You have people who have been CEOs and CFOs. They’re all sophisticated people and have done a lot in terms of their career, but their worldview is based on running businesses and financial matters. And this whole cyber development and these new technologies coming around—AI and blockchain and so on—are complicated and require some additional firepower on the board. You can train people, you can give them intensive courses. NACD, for example, has a cyber risk governance certificate for boards. And that will be helpful. But depending on the business you’re in and how disrupted it might get, you really want to bring in some of that new blood that is conversant with and maybe even very knowledgeable about technology in general and the particular industry that the company is in. I’m also a big proponent of folks who, like me, have a legal background sitting on boards. Again, they bring a view that is a little different from the technical, financial and operations person. Also add chief risk officers—people who have done that kind of work—because they bring the risk lens into the picture, and with it a broader view.
LBB: What percentage of the Fortune 500 would pass muster right now if their boards were examined to see if they had individuals with the kind of understanding that you’re advocating?
ABB: I do not have numbers. I can only give you my gut on that. My gut tells me that boards are woefully unprepared. I think the ones that are prepared are the big technology companies, which have been very avant-garde about this. The Amazons, Microsofts, Googles. They have very knowledgeable people who are well prepared for this new world we’re entering. And then you might have a few others that have been working in this space. But I would say that the vast majority of the Fortune 500 probably don’t have one of those people, let alone two or three.
The general counsel can be a key player in helping to devise an AI strategy and creating a cross-functional team to implement it.
Companies that ignore AI may find themselves disrupted out of existence.

Even though the EU regulation may be onerous, it may be what the cybersecurity doctor ordered.
The EU's General Data Protection Regulation (GDPR) takes effect on May 25. And all in-house lawyers should be well aware by now that the key provisions require reporting data breaches, removing privacy data when requested to do so, and protecting personally identifiable information (PII). It covers all organizations that do business with an EU entity, all individuals from the EU, and all U.S. citizens who move there and declare residency.
Companies and their lawyers will undoubtedly be busy, as the deadline approaches, consulting outside experts, drafting policies and procedures, and training employees who will carry them out. But they will also need to navigate the inevitable gray areas that accompany any new and complex regulation.
As an experienced cybersecurity consultant, I have recently been working with one institution that is trying to figure out how to proceed. Though the organization is a community college, the questions it has raised are likely to surface not only in other academic institutions but in many different business environments. They involve the privacy rights of EU citizens who reside in the United States.
The concern for individual privacy on campus has typically been addressed through the Federal Educational Rights and Privacy Act (FERPA) and identity theft legislation. The GDPR raises the bar for organizations to control PII, and it changes the viewpoint because citizens from the EU have the right to control their own information. Individuals may consent (or not) to provide PII and may request that it be deleted or corrected. Now U.S. organizations must comply with these rules.
John Williams is an IT director at Anne Arundel Community College in Maryland. He is in the process of incorporating the GDPR into the college’s policies and procedures, and it's been challenging. “It is difficult to translate the regulation from the EU to an academic environment in the United States,” he said. There are three key questions he’s trying to answer.
1. When do U.S. privacy laws pre-empt the GDPR?
2. Can EU citizens request that all their personal data be removed from their academic files, or only the personal data that they provided?
3. How can an academic institution ensure that identifying students and faculty members as EU citizens isn’t used in a way that either discriminates against them or treats them preferentially?
Williams asked some good questions that general counsel may also be pondering. In instances where the GDPR conflicts with U.S. privacy laws, it is unclear which laws will prevail and whether rulings will be specific to each case or generally applied. In the area of breaches, the GDPR requires that notification be given within 72 hours and delivered to the appropriate data protection authority and to any individuals likely to be harmed by the breach, though the format of notification is not specified. U.S. privacy laws merely require that a breach notification be issued “without unreasonable delay” after a breach has been confirmed. Unfortunately, this can often take many months, as organizations are hesitant to release notifications that hurt their “brands.”
It is also unclear what information an EU citizen can ask to be changed or deleted. For instance, students do have the right to request that personal information they provided to enroll (date of birth, identification number, country of birth, etc.) be corrected or deleted, but it is not clear if this right extends to personal information about the individual that the school rather than the student generated, such as grades or transcripts obtained from other institutions.
Also of concern to academic institutions is another question that sounds like one that a civil rights lawyer may have to answer. Does the request that EU students identify themselves as such on their applications constitute a form of discrimination in the selection process? Will institutions have to convince admissions auditors and regulatory authorities that individuals from EU countries were not denied admission as a means of avoiding GDPR compliance?
I have been assisting federal agencies for two decades in accrediting and auditing their information systems. I have written security policies and procedures that codified the expected behavior of the federal and contractor staffs, and from my perspective, the GDPR should be considered a best practice worth adopting in the U.S.
Since the early stages of computer processing, U.S. organizations have been protecting information that is deemed essential for the well-being of the population (defense, finance, health, intelligence, environmental, etc.), with a focus on the adverse consequences that disclosure of the information could cause to the organization’s mission or operations. The impact on individuals was only considered in the context of loss of life or serious injury. The concern for individual privacy and personal electronic information provided to an organization has only been addressed in the last 10 to 15 years.
Initially the concern about PII was how to protect this information from accidental loss or disclosure, with the underlying assumption that once the information was given to the organization, the individual no longer retained “ownership.” GDPR changes that viewpoint. An individual from the EU can request that PII be deleted (or corrected), and the organization is obligated to comply within a reasonable time frame. This seems like an important improvement in today’s data-driven world.
There are many questions about implementing GDPR compliance in educational and commercial organizations. The federal government requires contractors that handle its sensitive information to comply with Special Publication (SP) 800-171 Rev 1, published by the National Institute of Standards and Technology (NIST). This document will assist organizations in meeting GDPR requirements; however, control selections are based on risk. Each organization must therefore perform a risk assessment, which includes a privacy impact assessment (PIA), to determine the processes and procedures in place and the changes that may be needed to improve them.
Nowadays most federal organizations have a civil liberties and privacy officer as well as a general counsel to ensure that they are compliant with evolving privacy regulations. It is highly recommended that organizations that collect PII, and don’t have one, add a privacy or data protection officer to work with the general counsel and the IT department to address the technical and legal aspects of GDPR.
One of the most important messages they can deliver is that security and privacy are the responsibility of everyone in the organization. If this sounds obvious, you would be surprised to see how rarely it is truly embraced. Too often people believe that information security or privacy is the sole purview of the IT or legal or information assurance department. Nothing could be further from the truth. In the Navy, there is the expression “loose lips sink ships.” In corporate America, security and privacy are only as robust as the least-trained individual.
There are many studies that back this up. An estimated 40-50 percent of data breaches are caused by poorly trained employees who fail to practice operational security, which can result in a loss or compromise of data. And I’ve seen it. You probably have too. I have witnessed people discussing sensitive information outside of controlled spaces. I’ve seen them fail to lock their workstations when going for coffee. I’ve watched them print sensitive information on a shared printer and fail to quickly collect the documents, or leave them on their cubicle desks. I’ve heard sensitive topics discussed so loudly that anyone in the hallway could overhear what was being said.
So while general counsel focus on the new rules that will roll out of Europe in May, this might be a good opportunity for organizations to redouble their efforts to train everyone about the importance of data security. Whether it’s the laptop stolen from an executive’s car or the phishing email that leads to a ransomware attack, there are vulnerabilities everywhere. Companies should have annual cybersecurity and privacy training for all employees. We need to understand and constantly remind ourselves that security and privacy protection is the responsibility of everyone, because we all represent the first line of defense.
Steven Senz is a consultant who has over 35 years’ experience in the computer, cybersecurity and telecommunications industries, developing new products and services for the public and private sectors. Senz, who has a master’s from Cornell University and an MBA from the University of Michigan, is certified as a CISSP, ISSMP, CISA, CHP, CRSIC and HITECH. He has participated in multiple working groups sponsored by the National Institute of Standards and Technology (NIST) and the Committee for National Security Systems (CNSS). Formerly the director of information assurance for Inscope International, he headed the Center of Excellence for Cybersecurity. He is also the founder of Your Cyber Security Matters and a co-developer of the ASCERTIS application for the authorization of nonfederal information systems.
In-house lawyers need to consider questions like: When do U.S. privacy laws pre-empt the GDPR?
This might be a good opportunity for organizations to redouble their efforts to train everyone about the importance of data security.
CYBERSECURITY READING LIST
Cisco’s annual report is comprehensive and also accessible to lawyers.
Cisco’s 2018 Annual Cybersecurity Report is the company’s 11th, and it’s undoubtedly required reading for chief information security officers. After all, the people who put out this report aren’t just researchers or bloggers (or newsletter editors). Cisco is intimately involved in building the products and offering the services (including training) designed to help enhance our digital security. It also partners with a network of companies and conducts a benchmarking
survey that last year yielded more than 3,600 responses from 26 countries.
But what about in-house lawyers? Should the 68-page report be required reading for them? Probably not required, but definitely recommended. Given the burgeoning risks, the ever-morphing threats and the host of legal and compliance issues involved in cybersecurity, it’s a good idea for inside lawyers to keep up with developments. And the Cisco report offers an excellent overview.
It’s also loaded with detail. And that can be intimidating. Most of us don’t speak tech. Or at least we’re not fluent. But that may actually be an excellent reason to dive in. After all, how do you learn a language? Not all at once. You learn the basics, and then you add words and phrases as you go. If you’re reading this article, you already have a cybersecurity foundation. You know enough to absorb a lot from Cisco’s report, and it may help prepare you for what’s ahead.
Let’s start with the big picture. One key takeaway is that both the “defenders” and the “attackers” (as they’re called in the report) have come a long way. Cisco’s report is a wide-ranging document that backs up assertions with a wealth of statistics. It studiously avoids hyperbole and even calls out other defenders who have not always been so cautious.
For example, the report points out that in May 2017, when the WannaCry attack was first detected, many organizations in both the private and public sectors mistakenly attributed the source to a phishing campaign and malicious email attachments. This proved to be an imprudent rush to judgment, the report said. Wrong information leads organizations to adopt the wrong defensive measures.
Sounding like careful—and mature—journalists, Cisco’s team wrote: “Being right is better than being first.”
Later, when they’re talking about the attackers, they describe evidence of, if not maturity (which somehow seems like the wrong word), then sophistication. As companies have moved data into the cloud, attackers have found new vulnerabilities, partly due to “the lack of clarity around who exactly is responsible for protecting those environments,” Cisco says. Attackers have managed to conceal their assaults by launching them using legitimate services like Twitter, Google Docs, Dropbox and Hotmail.com.
Sounding here like The Wall Street Journal, the report notes: “Attackers benefit from this technique because it allows them to reduce overhead and improve their return on investment.”
If the language seems almost respectful, it’s not an aberration. Attackers have clearly upped their game, according to Cisco. Malware attacks have reached “unprecedented levels of sophistication and impact.” And like the cloud, the internet of things (IoT) is an environment left lightly guarded. Supply chains are another such target. Attackers have become adept at recognizing and taking advantage of these weaknesses.
Yet another mark of the adversaries’ progress is their use of encryption. It’s not only the defenders who use it to their advantage. Both legitimate and malicious web traffic is often encrypted these days. It’s another way that the attackers conceal their handiwork. And they have also increased their productivity by using automation, machine learning and artificial intelligence.
The deeper you read, the more evidence you find that each side is fighting fire with fire. AI is used to attack, and AI is used to detect attacks.
Toward the end of the report, there are statistics that paint a picture of the volume and types of attacks, the average costs of the damage they do, and the budgetary trends in the departments that struggle to defend against them [see “What the Numbers Say”, below]. Cisco also includes lots of recommendations defenders will want to study.
It’s not surprising that cybersecurity professionals expect plenty of challenges in the year ahead. Or that companies are having a hard time filling open positions as they try to hire reinforcements. What is surprising, given all of the Sturm und Drang devoted to this topic, is how realistic the leaders of these defense teams seem to be.
“Most security leaders said they believe their companies are spending appropriately on security,” Cisco noted near the end of the report. That was probably one of the few passages that would elicit a sigh of relief from their bosses.
No, this is not the kind of reading to begin as you try to relax for a comfortable night’s rest. It’s more likely to provoke nightmares. But the upside is that, taken to heart, it may help you avert the waking kind.
[Chart: The Greatest Obstacle to Security, including responses citing legacy systems, along with the median number of professionals dedicated to security. Source: Cisco 2018 Security Capabilities Benchmark Study]
How will we know when cybersecurity has become a household word, a genuine phenomenon? Not by the number of law review articles on the subject, or even the number of Big Law practice groups devoted to it. It’s more likely to be revealed by an unexpected event in the popular culture.
Would this qualify? In September, Girl Scouts of the U.S.A. will roll out its first cybersecurity badges that scouts can earn by demonstrating their mastery of the subject. It’s part of an effort to boost girls’ interest in tech, which in turn could lead to their greater representation in the field.
And only 8 percent of the security professionals surveyed said that their company continuously conducts penetration tests to determine where their vulnerabilities lie.
These numbers suggest that, where cybersecurity is concerned, some of the pros companies depend on may need to be sent to reeducation camp.
What’s the definition of education? A pretty good one, when you think about it, is the ability to change.
Now wrap your mind around this. According to a recent report, 46 percent of organizations don’t change their cybersecurity strategy even after they suffer a cyberattack.
A few years ago, Siemens was immersed in a bribery scandal. In the wake of it, as the company took major steps to reform, then-General Counsel Peter Solmssen reached out to his company’s competitors, and they agreed to cooperate to combat not just bribery but the competitive advantage it had offered. Solmssen called this joint effort the Cabal of the Good.
Flash forward. In February, Siemens and seven of its competitors signed what they called the Charter of Trust, vowing to cooperate in order to enhance cybersecurity worldwide. It’s actually even more ambitious than this may sound. It calls not only for the cooperation of the eight companies (the other seven are Airbus, Allianz, Daimler Group, IBM, NXP, SGS and Deutsche Telekom), but also governments.
It is the first of its kind. Hacker lexicons have been published, but never one dedicated to cybersecurity, according to lead editor Brianne Hughes.
It aims for breadth rather than depth, and it does a good job. In 92 pages (including preliminary notes, appendices and an epilogue) it’s got everything from AI to zero day, and it’s almost guaranteed that you won’t know them all.
One warning: it’s designed for security researchers. That means there’s an emphasis on proper usage. Many of the words listed are not defined. This can be annoying for a more general audience, and a missed opportunity for the editors. (The appendix does include links to other guides that fill in the gaps.)
There’s one particularly nice feature. If you like it, you’ve got it. You can download it simply by clicking here.
Let’s say your company just discovered it’s suffered a data breach. The CEO asks whether it should be reported to the state police. As the general counsel, you feel it’s clearly information that’s going to have to be disclosed within a few months, and you point out that the police may help the company counter the attack.
But your boss isn’t happy. The company has been struggling lately. “This would be a lousy time for this to get out,” the CEO complains. And what if the media catch wind and file a Freedom of Information Act request with the police?
This isn’t purely hypothetical. The issue has come up, and in March Michigan’s Legislature overwhelmingly passed a bill that would exempt a company’s cybersecurity information from the state’s open records laws.
Predictably, the vote was not greeted warmly in the media or by the media.
Still, two weeks later Republican Governor Rick Snyder signed the bill into law.
Read more on Crain’s Detroit Business here and (for the follow-up) here.
cybersecurity lexicon you may want to download, and a story that suggests this field may have achieved household-name status.
Also new: our first article by an outside expert. Steven Senz, a consultant who has worked in data security for more than two decades, talks about the questions a client is already trying to navigate months before the EU’s General Data Protection Regulation takes effect in late May. By now the GDPR should be provoking widespread anxiety. This is a good opportunity, Senz points out, for general counsel to emphasize that information security at their companies is everyone’s responsibility.
We also have two interviews this month. One is with Andrea Bonime-Blanc, who has a book coming out in April on artificial intelligence. In our conversation, she explains why now is the time for general counsel and boards of directors to work with management to craft an AI strategy—before their competitors find ways to use it to disrupt them out of business.
The second returns to a subject we find particularly provocative: When and how should companies cooperate with the government? We interviewed a staff attorney at the Electronic Frontier Foundation who has been investigating a cozy relationship between retailer Best Buy and the FBI—too cozy, the attorney suggests.
Finally, we reviewed Cisco’s Annual Cybersecurity Report for the purpose of advising lawyers who don’t have a tech background whether it’s both accessible and worth reading. Since we’re sharing our conclusions, you can probably guess what we think.
Enjoy the spring. Thanks for tuning in. And let us know what you think.