We live in an Age of Conflict. It’s everywhere we look. And the challenges of cybersecurity are grounded in, and manifestations of, that conflict. Cyberattacks erupt from political conflict, international conflict, economic conflict—the gamut. And they’ve certainly added to the pervasive global tension.
Another troubling trend we’ve seen in the past year has been the erosion of so many longstanding alliances. Countries that have long been allies are on the outs. Brexit and the rise of nationalism
David Hechler, Editor-in-Chief
LESSONS FOR ALL COMPANIES
FROM CYBERSECURITY ALLIANCES
The U.K.'s maritime industry has discovered
What could be worse than being a captain out on the open sea worrying about pirates?
Being a captain having to worry about pirates waiting around the next bend—or pirates far away, but about to launch a cyberattack! Former journalist David Rider, based in the U.K., knows about both. In 2009, he helped set up a maritime security company, and he began writing and editing articles on piracy. That evolved into intelligence analysis, which in turn led to cyber and information security. This is his main focus now, though he did help the authorities capture a dozen pirates last year by passing along a tip from an old source in Somalia, along with key information he dug up on the pirates. The lessons he offers lawyers everywhere concern the importance of the EU’s General Data Protection Regulation (GDPR), which takes effect on May 25, and the value of alliances—within industries and with the government—in mitigating cyber risks. He says that those alliances are “where the U.K. is currently ahead of the game.”
Legal BlackBook: What is the CSO Alliance, and how and when did you get involved?
David Rider: CSO Alliance was created in 2013 by two men in the shipping industry. One was a chief security officer who had enjoyed meeting up with his peer group at events and conferences, and realized how valuable this was. There was no real forum for CSOs to meet and swap ideas and best practices. As a result, the CSO Alliance was born. I joined them later that year as intelligence consultant and content editor for the website. The site itself is a very secure online platform for crime reporting, with group areas for members to discuss topics relevant to their specific sector with other CSOs in that same area. My job is to maintain the site’s content and ensure that all crime reports are verified. We want to make sure that we don’t conflate problems and that we present only the best information available.
LBB: And that has led to a second alliance, hasn't it?
DR: The Maritime Cyber Alliance is a joint project between Airbus, CSO Alliance and our technology partner, Wididi. The venture with Airbus is a natural affiliation between transport industries that have much in common. The platform is still in its pilot stage at present, but offers an anonymous cyber crime reporting system and specific Airbus tools, such as the ORION malware checker. We’re also running an e-learning GDPR Awareness Course, in partnership with Templar Executives, a U.K. infosec specialist. Our hope is that we can educate the shipping industry and offer a clear guide to GDPR. My job is to keep the site’s content up to date and make sure that new incident alerts are sent out when needed. We’ve already had one anonymous report from a vessel at sea, which was effectively paralyzed for several hours following a malware infection. We’ve also received reports from ship brokers regarding attempted email fraud and reports from P&I clubs [protection and indemnity mutual insurance associations] regarding fraud attempts against their members.
INTERVIEW: DAVID RIDER / CSO ALLIANCE
How CLOs can help their companies take control.
High-profile breaches at Target and Home Depot, in which hackers gained access to networks through the companies’ service providers, caught the public’s attention. In response, regulators issued new cybersecurity guidance or began to enforce existing regulations against companies to improve their information security and privacy governance. The regulatory pressure has helped somewhat. The Ponemon Institute’s 2017 Cost of Data Breach Study reports that the amount of time from breach to discovery decreased from 201 days in 2016 to 191 days last year, and that the average time to contain the breach declined by four days. Furthermore, the appointment of a chief privacy officer and the use of security analytics saved companies $3 and $7, respectively, per compromised record compared to 2016. Nevertheless, data breaches are ubiquitous. In 2017, 1,579 data breaches were reported, according to the Identity Theft Resource Center, a 45 percent increase over 2016. The average cost to U.S. companies was $7.5 million, a 5 percent increase.
But here’s one of the biggest challenges. As businesses have become more digital and interconnected, cyber criminals have more companies to attack. Hackers have begun to attack companies’ service providers more frequently, which causes breach costs to increase by $17 per compromised record, according to the Ponemon study. Despite the growing threat, only 44 percent of respondents to a recent Ponemon survey reported that managing service provider risk is a priority at their companies.
This lack of urgency is troubling, because a company’s failure to address service provider risk suggests that the firm may have deficiencies in its information security program. More important, outsourcing a service may create the perception that the company is absolved of liability if the service provider experiences a security incident. That perception is wrong: statutes, regulatory guidance and the public all look to the company, not the service provider, for redress.
This is where the chief legal officer should enter the conversation. CLOs need to explain the legal requirements to management; at the same time, they are critical to minimizing service provider risks. But they can’t manage this issue in a vacuum: the right approach depends on the specific facts and circumstances of each vendor relationship. The best way to minimize this risk is to develop a robust privacy and security program to be used whenever the vendor accesses the company’s network or customer data.
Presumably the CLO’s company has already created a program of its own that works well and that it regularly tests. The next step is to review the vendor’s own program to ensure that it is as robust and secure as that of the company it is servicing. (Or, if the vendor does not yet have a strong program, collaborate with it to create one.)
The CLO would be a logical choice to be part of the cross-functional team that should be assembled to lead this effort. Other participants from the CLO’s company might include its chief information security officer, the head of IT and the chief privacy officer. On the vendor’s side, it may wish to tap its own in-house lawyer, the head of IT, the chief risk officer or even the CFO. Realistically, however, certain service providers may lack the technical, human, financial or legal resources that their larger counterparts have at their disposal. The company may have to persuade the service provider to retain an “outsourced” chief information security officer to manage its data security initiatives.
After the team has established or modified the vendor’s program, they can turn to the task of drafting the service provider agreement that they will use to guide the ongoing relationship between the two entities. These agreements are essential to protect the CLO’s company from service provider risk. Among other things, the agreement should: (1) include representations and warranties regarding regulatory compliance and industry standards; (2) impose a standard of care equal to, or better than, the company’s privacy and information security practices; (3) limit disclosures to third parties without consent; (4) require prompt notice after a security incident; (5) create procedures to ensure return or destruction of company data upon termination of the agreement; and (6) develop a cost allocation mechanism for responding to a security incident, including indemnification or insurance.
Alternatively, if the company’s business units negotiate these agreements with limited input from the legal department, the CLO should ensure that an enterprise-wide service provider policy exists, and includes mandatory data security language to be included in all agreements. At a minimum, the company’s service provider policy must require that data security language cannot be changed without the express approval of senior management.
Monitoring the service provider’s compliance with its data security program is essential to ensuring that its risk controls are correlated to address current and emergent security threats. Yet, 56 percent of respondents in the Ponemon report said that their companies do not monitor the security and privacy practices of service providers to which they grant access to sensitive or confidential information, typically because the service providers do not allow independent monitoring to verify security practices. This obstacle is best overcome by having a contractual requirement to allow periodic audits. The sensitivity of the company’s data, or the breadth of the service provider’s access to the company’s network, should determine the frequency and intensity of the audit. As an additional step, for service providers whose access is considered “high-risk,” the company should have the provider participate in the company’s incident response exercises so that the company can observe its processes during a security incident and address any necessary remediation.
In sum, companies must ensure that their service providers’ security practices are equal to or better than their own. Companies must also monitor service providers’ compliance with their stated data security practices to ensure that their controls can contain current and evolving threats. This requires firms to develop strong privacy and data security practices that match the legal framework in which they operate and the risks that they face. The CLO’s role is crucial to ensure that the results will safeguard the company’s reputation, limit breach costs and avoid unwanted regulatory scrutiny and litigation.
Denver Edwards, a principal in the financial institutions practice at the law firm of Bressler, Amery & Ross, P.C., is a former senior counsel in the Enforcement Division of the Securities and Exchange Commission and the Office of the Comptroller of the Currency. One of his areas of focus is cybersecurity and regulation. He can be reached at email@example.com
TO REPORT OR NOT TO REPORT?
Companies embrace new strategies for working with law enforcement after a data breach.
Tired of reading depressing articles about cybersecurity? They always seem to feature statistics that go from bad to worse.
But new data suggests that we’re actually getting better at cybersecurity. These are the kinds of statistics we’ve been waiting for.
Last year in North and South America, 64 percent of security breaches were discovered by the victim companies themselves rather than external sources like law enforcement, according to Axios (citing Mandiant’s M-Trends 2018 report). That’s quite a contrast to the results in 2011, when only 6 percent of breaches were discovered internally.
Axios didn’t mention another positive statistic that’s even more startling. This one can be found in the 2018 Trustwave Global Security Report. Measuring the time from the date of intrusion to the date of detection, the report found that for intrusions detected internally, the median was zero days—meaning they were typically detected in less than one day. Intrusions reported by external sources took far longer to come to light.
So, companies must be doing something right. Maybe more things than they get credit for.
have shaken the European Union to its core. The United States feels a little less stable, and a little more isolated, every day. These developments have been particularly dispiriting to witness because alliances seem to be our best chance of mitigating the problems.
In this issue, in our own small way, we explore the power of alliances. First we look at an alliance of maritime companies working to boost their common cyber defense.
Then we have a piece about the Cybersecurity Tech Accord, and the 34 companies that vowed to make the world safer from cyberattacks.
And our expert article talks about the ways companies need to partner with their service providers in order to enhance the security of both entities.
These are three very different kinds of alliances, but they all make a world of sense. They possess the key components that successful alliances have always needed: they’re designed for the mutual benefit of the partners, and they make them all a little more secure.
Here’s hoping there’s a resurgence of alliances this year.
Here’s a development that’s going to be watched by administrators around the country. Arizona made a bold move to protect itself from cyberattacks.
The state has 133 agencies in all. That’s a lot of data to worry about. And it knows it has vulnerabilities because it was hacked during the last national election.
The solution? It decided to hire a single firm to handle cybersecurity for all of them. And it didn’t even choose an Arizona company. It picked RiskSense, based in neighboring New Mexico.
Interestingly, one of the prime reasons it cited was the ease of using the vendor’s software. It scores an agency’s cyber vulnerabilities with a system modeled on credit ratings, so someone without an IT background can quickly see how each is doing.
You can’t rely on your information technology team to protect your company from data breaches. That’s the message that the United Kingdom’s information commissioner delivered in her keynote address in April at the National Cyber Security Centre’s CYBERUK conference in Manchester.
“Security is a boardroom-level issue,” Commissioner Elizabeth Denham told the audience. “We have seen too many major breaches where companies process data in a technical context, but security gets precious little airtime at board meetings.”
And failing to hire specialists and to properly invest in security is a recipe for disaster, she said. Denham cited companies that paid a steep price for their security failures in last year’s WannaCry attacks.
Nonprofits typically don’t have a lot of extra money to spend. And they don’t figure to be the kinds of targets for cyberattacks that large data-rich (and just plain rich) companies can be. So, how much time and money should they spend on cybersecurity?
It’s a good question. And it’s answered with intelligence and clarity in an article that appeared in NTEN, which stands for Nonprofit Technology Network.
The article discusses insurance, but it doesn’t have an agenda and it isn’t pushing any particular solution. It advises readers how to think about the risks of their own work environments in order to make prudent decisions for their own companies’ needs.
The courts have been waiting for the law to catch up with technology for a long time. But given the warp speed of tech advances and the almost nonexistent pace of legislation these days, not many big cases have been mooted by new laws. One was in April, though, and it was a big one: Microsoft’s cloud case, which began in late 2013 and made it all the way to the Supreme Court.
As you’ll recall, federal agents had obtained a warrant seeking access to the email account of one of the company’s users. The warrant was issued on a showing of probable cause that the email contained evidence of drug trafficking. But Microsoft declined to provide access, arguing that the data was stored on servers in Ireland, and the law did not reach overseas.
The feds got a new warrant to replace the old one, and the Supreme Court, finding that there was no longer a live dispute, sent it back to the district court with instructions to dismiss.
If you’ve been holding your breath, you can exhale now.
A malware attack that resulted from poor BYOD security caused a ship to lose all navigational control at sea.
LBB: The EU’s Network and Information Systems (NIS) Directive is going to take effect in May. Very briefly, what is it, and is there any comparably robust regulation in the U.S. or elsewhere?
DR: I’m not sure of any legislative parallels in the U.S. Essentially, the NIS Directive is a huge, daunting piece of legislation that aims to strengthen risk management and incident reporting across two types of business: critical infrastructure (which has a very wide definition here) and digital service providers, an almost deliberately vague term that encompasses everything from online marketplaces to search engines. Hopefully, the legislation will work as planned and ensure that companies report incidents in a much more timely manner than we’ve seen in recent months and years. Then incidents such as NotPetya won’t happen on the same scale, because more companies will be made aware of them sooner.
LBB: What is the importance of alliances? Do they really mitigate cyber risks?
DR: The CSO Alliance motto is “security through community,” and we believe that it works. It’s interesting to see it in action. We’ve held workshops around the world over the last couple of years, and when you see two or three people all realize that they’ve been subjected to the same attack vector or crime, and then you offer a simple solution, such as timely information sharing, the light bulbs above heads all go on. In the cyber field, it absolutely works. Making members aware of any given threat or malicious campaign means that we simply reduce the risk of it spreading and causing more damage. You only need to look at the costs that [container shipping company] Maersk absorbed last year to get an idea of how useful and valuable that is.
LBB: What about regulations? In addition to the NIS Directive, the GDPR will also take effect in May. Do they really make us safer?
DR: If you know anything at all about data protection, then you know you’ll never be truly safe, short of never going online and throwing away your cellphone. We’ve all left digital footprints behind us. GDPR will hopefully go some way to closing a few doors and loopholes. However, anyone watching the news in the last month will appreciate what a massive task that is, as well as just how much metadata we all leave behind us. GDPR is also a massive hoop to jump through. If you don’t think it applies to you or your company, check again. And then again. The fines for noncompliance are eye-watering, and GDPR affects so many different business types, it’s simply not worth sticking your head in the sand. Anyone with a marketing mailing list will need to take a long, hard look at whether they’re going to be affected.
LBB: You wrote an article in which you conjured up a captain’s worst nightmare: You’re at sea, and all of a sudden you’re not in control of your own vessel. You’re being steered into port—and into the waiting arms of a band of pirates. Has a cyberattack like this ever happened?
DR: That’s a very interesting question. I’ve seen no conclusive report to that effect, although a cellphone network provider suggested in a report that it had happened a couple of years ago. I did an awful lot of digging, given my involvement in counterpiracy, and came up blank. I’d have to ask why pirates would do it, when they’re perfectly capable of taking a vessel themselves at sea, and just what sort of logistics they’d require to secure a port and transport for any cargo they wanted to steal. It’s a curious claim. However, we recently received a report from a vessel that was hit by a malware attack at sea as a result of poor bring-your-own-device (BYOD) security. The vessel lost all navigational control and had to drop anchor to reboot its systems. The estimated financial loss of that delay was $40,000. Who needs pirates?!
LBB: What have been the cyber events that have gotten the attention of your industry?
DR: Without a doubt, the biggest was the Maersk incident. I think the shipping industry as a whole had been happy to regard cybersecurity as someone else’s problem—until that happened. Then the industry began to appreciate just how many other systems and knock-on effects there were. And the cost to mitigate it, as much as $300 million, should be enough to make most boardrooms break out into a sweat.
LBB: Are there special vulnerabilities that set your industry apart from others?
DR: I think it’s the same as other, non-high-tech sectors. Legacy systems and outdated, underpatched software are certainly an issue, but awareness training is the key. Shipping is an incredibly time-sensitive industry, where the smallest delay can have a huge financial impact. Seeing terminals and ports close because of NotPetya last year had a sobering effect, I think.
LBB: What steps does your alliance advise companies to take to protect themselves?
DR: Patch. Train. Patch again. The main problems are older computer operating systems and a lack of investment from companies that didn’t realize what risks they were taking. That’s changing now, as GDPR and NIS heave into view, and companies appreciate that they can’t afford to cut corners. Key to everything, though, is staff training and awareness. So much cyber crime comes via the head office, from CEO fraud, phishing and malware. Having good systems in place to check those—both physical and software—means that firms are far more resilient.
LBB: Let’s talk about the lawyers. What role do in-house lawyers at maritime companies play in this area?
DR: Lawyers are going to be absolutely crucial in the coming months as the new legislation comes into force. There’s still a huge knowledge gap, which can really only be filled by skilled lawyers who have grasped the magnitude of the issues facing commercial operators. I think we’ve traditionally seen a situation where CISOs [chief information security officers] and CSOs have made the board aware of a threat, but it’s only when counsel agree that it’s taken seriously. I think a lot of lawyers are going to be kept very busy this summer.
LBB: Do you think they’re playing active roles at their companies? Are their companies using them effectively?
DR: That’s hard for me to say. The lawyers I know in the infosec space are very diligent, but there’s an element of “we told you so” as well. They’ve warned about the dangers for a long time, and now those dangers are more apparent. A company lawyer is often only listened to when things go badly wrong, I suspect. Now, the tide is turning, and their advice is being taken on board by the management teams.
LBB: Are there lawyers involved in this work at the Alliance?
DR: We don’t have in-house counsel at the Alliance, but we do have access to the excellent legal team at Airbus, who work hand in hand with Airbus Cyber to deliver solutions. So far, we haven’t had to trouble them!
LBB: What are the big legal issues in cybersecurity right now?
DR: It’s definitely the impact of GDPR, simply because it asks so many questions about how personal data is used, stored and given out.
LBB: What do you see in your crystal ball as far as legal issues are concerned?
DR: Well, once the current legislation is in place, I expect we’ll see cases challenging data use and compliance. I’d advise lawyers to keep a very close eye on the current investigations into Facebook, Cambridge Analytica and associated companies. Using personal data for areas other than those agreed to by the user is going to become an even bigger minefield in the coming months.
LBB: What can lawyers from other countries and industries learn from the experience of the U.K.’s maritime industry?
DR: I think the main thing would be that keeping the C-Suite and the board informed of the latest major incidents and possible ramifications for their business is key. Where the U.K. is currently ahead of the game is in linking government and the security agencies together with industry. We’re seeing some good initiatives emerge from the recently formed National Cyber Security Centre, set up by GCHQ [Government Communications Headquarters, the British intelligence and security agency]. They’re working with industry to ensure resilience where needed, and they have worked with the U.K.’s Department of Transport, which covers the maritime sector, to reinforce best practices. Lawyers elsewhere can do similar things with law enforcement agencies in their sectors, making sure that company policies are aligned with the government and best practices so that all these risks are reduced.
INTERVIEW: KIMBERLY PERETTI / ALSTON & BIRD
Companies are often reluctant to report data breaches based on their own false assumptions of what will happen next.
In recent years
law enforcement agencies have demonstrated greater sensitivity to the needs of companies that have been victims of cyberattacks.
Kimberly Peretti is a partner and co-chair of Alston & Bird’s Cybersecurity Preparedness and Response Team and
National Security and Digital Crimes Team. She is the former director of PwC’s cyber forensic services group and a former senior litigator for the U.S. Department of Justice’s Computer Crime and Intellectual Property Section. She draws on her background as both an information security professional and a lawyer in managing technical cyber investigations, assisting clients in responding to data security-related regulator inquiries, and advising boards and senior executives in matters of cybersecurity and risk. Peretti is a Certified Information Systems Security Professional (CISSP).
Legal BlackBook: FBI Director Christopher Wray recently spoke at a cybersecurity conference in Boston, and during his speech he highlighted the FBI’s continuing need for cooperation from victim companies. The FBI “treats victims as victims” and has been working to better share information with them, he said. Has that been your experience in working with the FBI?
Kimberly Peretti: For the most part, yes. And generally over the years I’ve seen a significant transformation of federal law enforcement—both the FBI and the Secret Service—in how they interact with companies that have been victims of cyberattacks. In my experience, law enforcement has become more sensitive to the issues facing victims of cyberattacks, and is often more amenable to developing strategies to help minimize the impact that working with law enforcement may create. Fundamentally, law enforcement is dependent on companies, both for reporting cyber intrusions to them and for investigating those types of crimes. So I think that’s a very important point that separates investigations of cybercrimes from other types of crimes. They’re often more sensitive to understanding the critical role that both sides play. And they understand that they need to work together for each to be successful in their missions. Law enforcement often understands that the victim may be in the best position to gather and preserve the digital evidence in an immediate fashion, and that law enforcement coming in and saying that “we need all relevant information” could be a significant distraction at a time when a company really needs to devote significant attention to protecting the company, its systems and its data.
LBB: Some people who don’t have a lot of experience dealing with cybersecurity investigations may be surprised that the Secret Service can be involved. Can you explain to them why that is?
KP: It just so happened that when I was at the DOJ cyber crime unit, most of the cases I worked were with the Secret Service. We were working with companies that weren’t aware that the Secret Service had jurisdiction to investigate cyber crimes, but they do. They share that jurisdiction with the FBI. You tend to see them more in the area of financial crimes, because they’re an entity that has had jurisdiction to investigate counterfeiting of currency going back to when they were part of the Treasury Department. Now, of course, they’re a part of the Department of Homeland Security.
LBB: It seems as though there should be a natural partnership between a law enforcement agency and a company that has been the victim of a cybercrime and is conducting an internal investigation. After all, they’re both trying to solve the same crime, right? Is that how it works in the real world?
KP: I think most often their interests can be aligned, and they often are aligned in the early stages of an investigation. But it’s important to recognize that there’s a different interest in why they’re conducting the investigation. For companies, the primary interest is often to protect the company—its systems and its data. And to protect its customers and employees as quickly as possible. They also may want to see criminals identified and apprehended. But that can be a secondary benefit or purpose. Whereas for law enforcement, their primary purpose, of course, is to catch the criminals behind the attack. In the later stages of the investigation, though, a company may want to wrap it up, fix the systems, remediate and move on. And that may be just when law enforcement is beginning its investigation in earnest. This can be a place where interests begin to diverge.
LBB: Has the role of law enforcement—and working with law enforcement—changed since you were at the Justice Department and prosecuting hackers, from 2002 to 2008?
KP: Yes, I would say that there have been some significant changes. In the early days of cyber crime, we were often looking at lone wolves—solo hackers hacking into systems for intellectual curiosity or for bragging rights. And we were investigating those individuals in order to bring them to justice and convict them for the crimes they had committed. But as we’ve seen the cyber landscape grow and change over time, there are other purposes for law enforcement’s involvement. It’s not just to apprehend individuals. Now, because of the global nature of cyber crime, and often the inability to identify those behind these crimes, there are other goals. These could be intelligence gathering, disrupting the infrastructure of the criminal organization, or information sharing to help other potential victims protect themselves against a similar attack. And then, of course, we’ve seen over time the significant increase in state-sponsored involvement, which has brought in other law enforcement priorities, including a national security interest, and the increasing need to push out intelligence learned from state-sponsored activity to victim companies. The recent joint Technical Alert issued by DHS, the FBI and the United Kingdom’s National Cyber Security Centre, addressing the worldwide cyber exploitation of network infrastructure devices by Russian state-sponsored cyber actors, is a good example.
LBB: Given the number of different federal, state and foreign law enforcement authorities that may have jurisdiction over cyber crimes, how does a company determine the right law enforcement authority to contact when it’s in the middle of responding to an incident?
KP: Well, hopefully they’ve identified those contacts prior to having an incident. Most companies nowadays face frequent security incidents, whether it’s attempted access to their networks or otherwise. It’s often helpful to have an established relationship with law enforcement prior to an incident, because knowing which agency to contact, and who within that agency, can be challenging to navigate during a crisis. Equally important, law enforcement is increasingly disseminating information on cyber threats through various information-sharing platforms, and companies may want to establish such relationships with law enforcement in order to ensure that they are receiving that information. In terms of who to contact, as a general matter for cyber intrusions, I would say that often companies turn to federal law enforcement, because there is usually jurisdiction, and there are more likely to be resources and capabilities to investigate cyber crime, in contrast to local jurisdictions or state jurisdictions.
So it often becomes a question of which federal law enforcement agency to contact. I often refer companies to the Computer Crime and Intellectual Property Section, an entity within the U.S. Department of Justice that has published a document called “Reporting Computer, Internet-Related or Intellectual Property Crime.” It’s on its website. For most types of cyber crime, you can report to either the FBI’s local office or the Secret Service local office. It may be a preference of the company, whether they’ve worked with one of these agencies, or it could depend on the industry that they’re in. Or it could be because of a relationship they have within their company. But it’s a good idea to establish a point of contact on both the Secret Service side and the FBI side, because both are pushing out information, and both can likely assist in investigating most types of cyber crime.
LBB: Let’s take up the age-old question: To report or not to report? A few years ago, that question often arose when a company discovered evidence that one of its employees or vendors had paid a bribe that may have violated the Foreign Corrupt Practices Act. Now the big issue is data breaches. Can you walk us through some reasons that a company may want to report a cyber crime and some reasons why it may not?
KP: Absolutely. And these haven’t changed over time. I recall that when I worked at DOJ in the computer crime section, we would talk about the myths surrounding what law enforcement does when it gets involved in investigating a cyber crime. I think those myths have largely remained the same. The top reasons a company may not want to report: No. 1 is fear of losing control of the investigation. There is a misguided perception of law enforcement raids. Companies are often more familiar with law enforcement when they are targets, when they are being investigated for committing a crime. So what immediately pops to mind is law enforcement showing up at your door with 50 agents and taking files. No. 2 is fear of the incident becoming public through a government leak or indictment—or fear of losing control over when the incident becomes public. Some cybersecurity incidents never become public and don’t need to. Others will naturally become public because there are reporting obligations to individuals or regulators. But companies would rather stay in control. No. 3 is fear that information will be shared by law enforcement with regulators, who are interested in cyber intrusions for a very different purpose than law enforcement. Regulators protect consumers, and they’re often investigating cyber intrusions to determine whether the victim company complied with various consumer protection-related laws and regulations. So there’s the fear that information will be turned over to regulators. And No. 4 is the sense that no benefit will inure to the company if it does cooperate with law enforcement.
I would say that for each of these, law enforcement is aware of the fears or myths and has taken steps to address them, often telling victims that they will not “show up with 50 agents and start taking things.” Law enforcement is often comfortable with having the victim entity preserve and provide digital evidence in a time frame that works with the company’s competing demands to conduct an internal investigation. As for concerns about information being shared with regulators: That’s generally a misconception, and I think that’s what FBI Director Wray was attempting to address in his recent comments indicating that the FBI does not believe that it has a responsibility to share information it receives with other “less enlightened” enforcement agencies. Certainly, when law enforcement is conducting a grand jury investigation, the Federal Rules of Criminal Procedure dictate with whom responsive information can be shared, which does not include those—such as regulators—that do not have a need to know.
Now, what are the reasons for reporting a cyber intrusion to law enforcement? No. 1, law enforcement may have intelligence to share on the particular threat group or criminal activity that can be critical to the company’s investigation—it may enable the company to short-circuit its investigation, provide clues of where to look in its systems or what to look for. And that information can be extremely valuable. This one factor is increasingly the overarching reason that companies reach out to law enforcement in the aftermath of an incident—especially given the rising level of sophistication of some of the attacks. A second reason is that law enforcement often has more tools available to investigate that aren’t available to victims, so there may be some methods that can be used to identify the perpetrators or to stop the activities.
LBB: It sounds as though this could potentially save a company quite a bit of money, because these investigations are not cheap, are they?
KP: They can be very expensive, because forensic investigations take time and require skilled experts. Often many systems need to be imaged, logs need to be collected, and technical experts need to be hired to do the collection and analysis. Digital evidence is fleeting, so all the clues about what happened may no longer be available. If law enforcement can fill in some of the gaps, that information may ultimately help you mitigate the risk of the incident, or establish that there actually wasn’t access to certain data or systems. Filling in pieces of the puzzle can be very helpful.
LBB: As outside counsel, how do you advise the general counsel and management? Is it your practice to provide a specific recommendation when they’re deciding whether and when to report a breach, for example?
KP: Yes, generally we get involved in that decision. It’s a big decision for the company. And for me it’s often a question of timing and the level of sophistication of the attack. If the incident occurred months prior and isn’t ongoing, sometimes it helps to take some initial investigatory steps, to have a sense of what may have happened before notifying law enforcement. It may influence what agency you notify. So sometimes it’s more of a question of when. Similarly, if there is uncertainty around the type of attack or the threat actors, law enforcement may be able to provide valuable information to assist with the investigation. Either way, there often is an important decision of whether to notify, and many different considerations come into play when making that decision. And we, as outside counsel, help the company navigate those issues.
BANDING TOGETHER AGAINST CYBERATTACKS
An international accord has tech companies collaborating.
In April, 34 tech companies announced that they had signed an agreement vowing to help strengthen cybersecurity globally by adopting core principles to combat attacks. The firms included some of the largest in the industry, among them Facebook, Microsoft, SAP, Oracle and HP.
Coincidentally, that same week the United States and the United Kingdom issued an apparently unprecedented joint warning about Russian cyberattacks that threatened both the public and private sectors.
The tech agreement and the warning vividly demonstrated the growing appeal of alliances as a way to respond to the ever-escalating cyberwars.
Microsoft took the lead in crafting the Cybersecurity Tech Accord. In a blog posted on April 17, the day of the announcement, Brad Smith, Microsoft’s president and chief legal officer, wrote that the idea grew out of his call for action at the RSA conference in San Francisco last year.
“We recognized that supporting an open, free and secure internet is not just the responsibility of individual companies, like ourselves, but a responsibility that must be shared across the entire tech sector and with governments,” Smith wrote.
“But as we also said at RSA last year,” he continued, “the first step in creating a safer internet must come from our own industry, the enterprises that create and operate the world’s online technologies and infrastructure.”
The companies represented are not all American and do not include all of the big tech firms. In addition to Germany’s SAP, the Finnish company Nokia is a signatory, as is Spain’s Telefónica. But the list does not include Apple, Google or Amazon. At least not yet.
The agreement itself is anchored in four pledges:
• We will protect all of our users and customers everywhere.
• We will oppose cyberattacks on innocent citizens and enterprises from anywhere.
• We will help empower users, customers and developers to strengthen cybersecurity protection.
• We will partner with each other and with like-minded groups to enhance cybersecurity.
The brief document elaborates on each. The companies say that they will develop products that prioritize privacy and security while reducing vulnerabilities. They will “not help governments launch cyberattacks against innocent citizens and enterprises from anywhere.” And they will “work with each other and will establish formal and informal partnerships with industry, civil society and security researchers” to coordinate “vulnerability disclosure and threat sharing.”
Smith emphasized the importance of collaboration when he spoke about the agreement to The New York Times. Companies like his are often “first responders” when cyberattacks hit their customers, he said.
“This has become a much bigger problem,” he added, “and I think what we have learned in the past few years is that we need to work together in much bigger ways.”
DATA BREACHES BY INDUSTRY
[Chart: data breaches by industry, with the median number of days between intrusion and detection for detected incidents. Source: 2018 Trustwave Global Security Report, based on Trustwave SpiderLabs’ investigations of malicious data breaches affecting thousands of locations in 21 countries]