Two marketers of genetically customized nutritional supplements have agreed to settle Federal Trade Commission (“FTC”) charges of deceptive advertising claims and lax information security practices. The FTC’s investigation apparently focused on unsubstantiated advertising claims about Genelink’s products, but the FTC took the opportunity to also question the security processes Genelink employed. The FTC’s complaint charges that Genelink deceptively and unfairly claimed that it had taken reasonable and appropriate security measures to safeguard and maintain the personal information of nearly 30,000 consumers. Genelink collected genetic information, social security numbers, bank account information, and credit card numbers. The complaint alleges that Genelink did not require service providers to maintain appropriate safeguards for personal information, and failed to use readily available security measures to limit wireless access to its network. The proposed order requires Genelink to establish and maintain a comprehensive information security program and to submit to security audits by an independent auditor every other year for 20 years. As I have said before, sometimes the ongoing compliance obligations are much more burdensome and costly than any fines or penalties imposed by regulators.
A dermatology practice called Adult & Pediatric Dermatology, P.C. (“Covered Entity”) reported a security breach as required by the Health Insurance Portability and Accountability Act (“HIPAA”) to the Department of Health and Human Services (“HHS”) on October 7, 2011. The Covered Entity reported that an unencrypted thumb drive was stolen from the vehicle of a member of its workforce, and that the drive contained the protected health information (“PHI”) of approximately 2,200 individuals. The thumb drive was never recovered. The Covered Entity notified the impacted patients of the theft as required by applicable law, and provided notice to HHS in accordance with the breach notification rules under HIPAA / HITECH.
As is often the case, HHS decided to investigate the Covered Entity following notice of the security breach. The HHS investigation revealed:
- The Covered Entity did not conduct an accurate and thorough analysis of the potential risks and vulnerabilities to the confidentiality of PHI as part of its security process until October 2012.
- The Covered Entity did not fully comply with the requirements of the HIPAA breach notification rules because it did not have written policies and procedures regarding its breach notification process, nor did it train members of its workforce regarding the breach notice requirements until February 2012.
- On September 14, 2011, the Covered Entity impermissibly disclosed the PHI of 2,200 individuals by permitting an unauthorized individual access to the PHI for a purpose not permitted by the Privacy Rule when it did not reasonably safeguard an unencrypted thumb drive that was stolen from the unattended vehicle of one of its workforce members.
The Covered Entity agreed to pay HHS $150,000 to resolve the investigation, and agreed to enter into and comply with a Corrective Action Plan.
Sometimes, the fine is not as significant as the ongoing cost of the corrective actions required by the regulators. Here, the agreed upon Corrective Action Plan gives the Covered Entity one year to conduct a comprehensive risk analysis of its security risks and vulnerabilities that incorporates all of the Covered Entity’s electronic media and systems, and to develop a risk management plan to address and mitigate the risks and vulnerabilities identified. The risk analysis, risk management plan, and any revised policies and procedures must be forwarded to the HHS Office of Civil Rights (“OCR”) for review and approval within 60 days of the date completed by the Covered Entity. OCR will review the submission and may require revisions. Upon approval by OCR, the Covered Entity must train its workforce on the revised policies and procedures within 30 calendar days. During the time period covered by the Corrective Action Plan, if any workforce member fails to comply with the policies and procedures, the Covered Entity must investigate and report such noncompliance to OCR, including any actions taken by the Covered Entity to mitigate the resulting harm and to prevent recurrence.
Ultimately, the Covered Entity must provide OCR with an Implementation Report describing how the Covered Entity implemented its security management process, and an attestation from an officer of the Covered Entity that any revisions required by OCR were fully implemented and its workforce members were completely trained. An uncured breach of the Corrective Action Plan can lead to the imposition of Civil Monetary Penalties.
As called for in President Obama’s Executive Order 13636, “Improving Critical Infrastructure Cybersecurity,” the National Institute of Standards and Technology (“NIST”) has released the Preliminary Cybersecurity Framework for improving critical infrastructure cybersecurity (the “Framework”). The Executive Order required NIST to develop a Framework that would provide a “prioritized, flexible, repeatable, performance-based, and cost-effective approach” to assist organizations responsible for critical infrastructure in managing cybersecurity risk.
The Executive Order requires the Secretary of the Department of Homeland Security (“DHS”) to coordinate with the Sector Specific Agencies to establish a voluntary program to support the adoption of the Framework by owners and operators of critical infrastructure. In issuing the preliminary Framework, the head of NIST again emphasized the voluntary nature of the Framework. Of course, as required by the Executive Order, the Sector Specific Agencies have published their preliminary recommendations for incentives to adopt the Framework, including the suggestions that adoption of the Framework be a condition for receiving a federal critical infrastructure grant, government services to those who implement the Framework be expedited, and Framework participants be publicly recognized. It remains to be seen whether an entity in the critical infrastructure can remain competitive without adopting the “voluntary” Framework. Further, as with many industry standards, compliance with the Framework may effectively become mandatory if courts look to it as what is reasonable security in the industry. If entities in the critical infrastructure (and beyond) adopt the Framework as the standard for vendor audits, then companies will need to become fluent in using the Framework to communicate about their cybersecurity readiness.
The Framework is intended to help organizations establish a cybersecurity program, assess their existing cybersecurity program, and communicate cybersecurity requirements or expectations to business partners and service providers. The Framework is built around five functions described as the Framework “Core Functions”: Identify, Protect, Detect, Respond, and Recover. Each Core Function is broken down into Categories and Subcategories, and NIST provides Informative References for each Subcategory: existing standards, guidance, and practices that serve as resources for addressing that Subcategory. The five Core Functions lead an organization through the process of (1) conducting a risk assessment that considers the organization’s mission objectives, systems, assets, regulatory requirements, and capabilities, as well as the operational environment, to discern the likelihood of a cybersecurity event that could impact the organization; (2) developing and implementing appropriate safeguards to protect the organization’s systems, data, and assets; (3) developing and implementing activities to detect a cybersecurity event; (4) developing and implementing activities to respond to a detected cybersecurity event; and (5) developing and implementing activities to restore capabilities or critical infrastructure services impaired by a cybersecurity event. The second through fifth Core Functions (steps 2-5 in the process) are approached in light of the current and target profiles created by the organization in the first step of the process, “Identify,” which allows the organization to prioritize.
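The Core hierarchy described above (Functions containing Categories, Categories containing Subcategories, each Subcategory pointing to Informative References) can be sketched as a simple data structure. The identifiers below follow the Framework’s naming style (ID.AM is the Asset Management category under Identify), but the entries shown are an illustrative subset I have filled in for demonstration, not the Framework’s full catalog.

```python
# A minimal sketch of the Framework Core: Functions -> Categories ->
# Subcategories -> Informative References. Entries are an illustrative
# subset; the real Framework defines many more of each.
CORE = {
    "Identify": {
        "ID.AM": {  # Asset Management
            "ID.AM-1": {
                "outcome": "Physical devices and systems are inventoried",
                "informative_references": [
                    "ISO/IEC 27001 A.8.1.1",
                    "NIST SP 800-53 CM-8",
                ],
            },
        },
    },
    "Protect": {},
    "Detect": {},
    "Respond": {},
    "Recover": {},
}

def references_for(subcategory_id):
    """Walk the Core hierarchy and return the Informative References
    for a given Subcategory, or an empty list if it is not present."""
    for categories in CORE.values():
        for subcategories in categories.values():
            if subcategory_id in subcategories:
                return subcategories[subcategory_id]["informative_references"]
    return []

print(references_for("ID.AM-1"))
```

An organization using the Framework for vendor audits could communicate its readiness Subcategory by Subcategory in exactly this shape, pointing auditors to the standards it already follows.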
The preliminary Framework released on Tuesday includes an appendix that presents a methodology to address privacy and civil liberties considerations around the deployment of cybersecurity activities and the protection of personally identifiable information, which is based on the Fair Information Practice Principles (“FIPPs”) referenced in the Executive Order. The appendix includes Informative References related to privacy and civil liberties standards, guidance, and best practices, as well.
The preliminary Framework is open for public comment, with the next version planned for February 2014.
Adding to the controversy surrounding the Affordable Care Act, aka Obamacare, is a new 253-page Obamacare rule that requires state, federal, and local agencies and health insurers to share protected health information (“PHI”) on any individual seeking to join the new “healthcare exchanges.” PHI includes individual medical histories, test and laboratory results, insurance information, and other personal health-related data.
Although PHI is already protected by various federal laws, the new Obamacare rule allows agencies to trade information in order to verify that applicants are receiving the appropriate level of health insurance coverage from the healthcare exchanges. The rule, however, does not require that applicants pre-approve the release of their PHI. In fact, the Department of Health and Human Services already allows the exchange of some PHI without an individual’s pre-approval, especially when it is for a “government program providing public benefits.” Officials state that the swapping of information is simply meant to help determine the best insurance coverage for every Obamacare user.
If enacted as written, the new Obamacare rule will result in the creation of one of the largest collections of personal data in U.S. history whereby information will be managed and shared between numerous federal, state, and local governments. This repository will undoubtedly be an irresistible “pot of gold” for every hacker and identity thief on the planet.
Nish Bhalla, CEO of Security Compass, is an ethical hacker specializing in web security for Fortune 500s, major banks, and well-known technology companies. Drawing on his unique perspective, Bhalla noted that, “Typically, state governments do not have the same level of resources as the federal government when it comes to cybersecurity. In fact, a recent study by Deloitte-NASCIO found that only 24 percent of state chief information security officers are confident they can thwart hack attacks.”
Speculating on how the vulnerable exchanges could be exploited, Bhalla believes we will “see a standard crop of web-based attacks directly targeting the state exchanges and federal data hub. We’re also sure to see a lot of spam, phishing, and ‘waterholing’ attacks that target consumers.” Aside from direct attacks on the exchanges themselves, hackers will seek softer targets, such as public computer terminals (i.e., libraries, schools, unions, small business associations, etc.) that will be made available for people to enroll in an exchange. Other vulnerable targets include various “navigator” companies responsible for helping people enroll online.
While the healthcare exchanges have conducted security audits, the testing has not been as rigorous as one might expect given the amount of PHI at risk. As with many aspects of Obamacare, security testing appears to have been rushed in order to meet specific deadlines. Numerous news stories have already reported on the “glitches” with Obamacare’s online enrollment portal, drawing the evident conclusion that rushing any large project is likely to result in errors.
While it’s too soon to determine how secure our PHI will be in the hands of various government agencies, we do know that hackers will be unable to resist the temptation to grab at such low-hanging fruit.
This year began with a massive security leak by Edward Snowden, then turned to talk of war with Syria, and now looks to be ending in a budget stalemate that has all but crippled the federal government. In the face of these events, it is no surprise that meaningful cybersecurity reform legislation is unlikely to make its way into law. The lack of progress comes a year after the failed effort to advance cybersecurity reform, and months after President Obama called on lawmakers to advance legislation. The tepid pace of reform seems unlikely to change despite the continuing assault on our nation’s IT infrastructure by the Chinese, Iranians, and Syrians.
The fate of cybersecurity reform continues to be bogged down by lingering disputes over protections for information sharing, litigation reform, and privacy standards. Earlier this year, the House passed the Cyber Intelligence Sharing and Protection Act (“CISPA”). The bill went nowhere after drawing objections from Senate Democrats and the White House, who backed a different bill but failed to woo skeptical Republicans and critical interest groups. For its part, the Senate has yet to draft a major cybersecurity bill.
Dianne Feinstein (D-Calif.) and Saxby Chambliss (R-Ga.), who led the Senate’s intelligence efforts, have not released a draft bill, despite extensive negotiations. Instead, they have been preoccupied with the fallout from Snowden’s surveillance leaks and the debate over reforming the National Security Agency. On a substantive level, Chambliss acknowledged that a major hang-up includes the fight over lawsuit immunity for companies that act on government data that proves to be incorrect.
As for the House, there have been efforts to modify CISPA to overcome the Democrats’ concerns and to secure additional support. Reps. Mike Rogers (R-Mich.) and Dutch Ruppersberger (D-Md.) tightened CISPA’s privacy protections, but remained unable to obtain support from the Administration and Senate Democrats. Rep. Adam Schiff (D-Calif.), a member of the House Intelligence Committee, noted “I do think we’ve been too slow to deal with this issue,” and that it has been “much more difficult” to pass cybersecurity legislation for reasons including Snowden’s leaks.
For its part, the White House is too preoccupied with the budget stalemate to spend its precious resources on cybersecurity legislation. “The most important thing that Congress can do for the nation’s cybersecurity right now is to fund the entire government, including cybersecurity missions and operations,” a White House spokesman said.
Giving little room for optimism, when asked if a cybersecurity bill would become law this year, Rogers stated, “You might not expect it, but you ought to pray for it.”
To read more on delayed cybersecurity reform, click here for an article by Politico.
Word on the street is that Google and Amazon have quietly started to offer business associate agreements (“BAAs”) to their healthcare customers using their cloud services. As you probably know, the Health Insurance Portability and Accountability Act (“HIPAA”) now requires that cloud providers comply with the HIPAA Security Rule if they process protected health information (“PHI”) on behalf of a covered entity, regardless of whether they sign a business associate agreement. So, while it is nice that these large cloud providers are beginning to execute such agreements, it is not a surprise, and it is probably to their benefit, as they will be responsible directly for HIPAA violations anyway, and such contracts offer them the opportunity to limit their liability as much as possible under the law.
Cloud providers are notorious for trying to disclaim as much liability as possible related to the services they provide. Entering into these business associate agreements gives them the opportunity to state, once again, exactly what they will be responsible for and what they will not. Further, Google stated publicly that customers who have not entered into a BAA with Google must not store PHI using Google services. I imagine their contracts reflect this idea: that they will not be responsible for protecting PHI they do not know about.
Unless it is a large customer with significant leverage, a company has little power to negotiate responsibility for losses with cloud service providers. Still, companies should try to negotiate what cloud providers are responsible for, including which liabilities and at what levels. Companies should push to conduct their typical vendor audits with cloud providers. Some cloud providers will give representations as to outside security certifications, such as the Federal Information Security Management Act (FISMA), the International Organization for Standardization (ISO), and the Statement on Standards for Attestation Engagements (SSAE), which is helpful. Further, realize that cloud providers may be outsourcing your data to still other cloud service providers. Companies should therefore make sure that contracts with cloud providers, including BAAs, contemplate liability for downstream losses caused by subcontractors.
Researchers have confirmed that a widely used Android mobile ad library poses a significant threat to mobile users. The ad library, dubbed “Vulna” (for “vulnerable and aggressive”), allows attackers to “perform dangerous operations such as downloading and running new components on demand.”
The scope of the problem is significant—researchers “have analyzed all Android apps with over one million downloads on Google Play, and found that over 1.8% of these apps used Vulna. These affected apps have been downloaded more than 200 million times in total.”
Developed by third parties, mobile ad libraries are embedded in “host apps” to display advertisements. This class of software also collects International Mobile Subscriber Identity (commonly referred to as “IMSI”) and International Mobile Equipment Identity (commonly referred to as “IMEI”) codes. What makes Vulna particularly dangerous is its ability to amass call record details and SMS text messages, as well as to allow the execution of malicious code.
“Vulna is aggressive—if instructed by its server, it will collect sensitive information such as text messages, phone call history, and contacts. It also performs dangerous operations such as executing dynamically downloaded code. Second, Vulna contains a number of diverse vulnerabilities. These vulnerabilities when exploited allow an attacker to utilize Vulna’s risky and aggressive functionality to conduct malicious activity, such as turning on the camera and taking pictures without user’s knowledge, stealing two-factor authentication tokens sent via SMS, or turning the device into part of a botnet.”
The likelihood of a cybersecurity breach hitting a company in the near future is as certain as the subsequent drop in shareholder value, finger-pointing, fines, regulatory headaches, and civil litigation alleging the board was asleep at the wheel in the face of a known danger when that danger finally materialized. The question every board member must answer is whether the actions they are currently taking to protect their company’s digital assets are sufficient to withstand the Monday morning quarterbacking that will occur after a cyber attack.
I recently published a series of three articles intended to help boards of directors better understand the breadth of their fiduciary obligation in managing looming cybersecurity threats.
In today’s world, many companies maintain their most valuable assets in digital form. Thieves no longer need to physically enter a company’s facility to steal its valuables. Rather, an individual on the other side of the globe, or right next door, can, with equal impunity, silently steal a company’s most prized possessions by breaching its data network. Due to the evolving nature of cyber risks, there is a lack of authority discussing the scope of a board’s obligation to address such attacks.
Obviously, directors’ fiduciary duties will extend to the protection of significant digital assets. The more difficult question to answer is: What are the contours of a director’s fiduciary obligation when it comes to cybersecurity? As discussed in my articles, the answer to these vexing questions is almost always “it depends.” As with all risks, the extent of a director’s obligation and the amount of attention an issue should receive at the board level will depend on such things as the nature of the company, the foreseeability of an attack, and the potential severity of a cyber breach.
Each of the three articles in my “Cybersecurity and the Board of Directors: Avoiding Personal Liability” series can be read in its entirety by clicking on the links below:
Every day, information security professionals have to make decisions about whether the security measures they have taken are sufficient or if they should spend more money on additional protections. We all know that there is no such thing as perfect security. So, the question always remains: what level of security is necessary to comply with the law?
The FTC has not established by official rulemaking any clear data security standards. Nevertheless, the FTC has brought more than forty data security cases against companies charging, under Section 5, that they have not taken adequate and reasonable security measures to protect consumer data resulting in the unauthorized disclosure of private information.
For the first time, two companies have pushed back against the FTC’s authority, under its unfair and deceptive trade practices powers, to bring data security cases against companies that have suffered a data breach caused by a third party. Because these two companies have refused to settle, the FTC has had to go to court.
In June 2012, the FTC brought an action against Wyndham Hotels following three data breaches in under three years. The FTC charges that Wyndham acted deceptively in representing that it implemented reasonable and appropriate security measures to protect personally identifiable information against unrestricted access, and that Wyndham acted unfairly in failing to employ reasonable and appropriate security measures. The FTC alleges that these failures led to data breaches that resulted in fraudulent charges on consumers’ accounts and the export of payment card information to an Internet domain in Russia. But Wyndham has challenged the FTC’s authority to bring an action based on security breaches caused by a third party. Among its arguments, Wyndham says that the FTC has not published rules that give companies sufficient notice of what data security practices are required in order to be in compliance with Section 5.
In August 2013, the FTC announced it had brought an administrative action against LabMD, alleging that LabMD’s failure to take adequate and reasonable security measures resulted in the unauthorized disclosure of consumer personally identifiable information, including names, social security numbers, and medical procedure diagnostic codes. The FTC requested that LabMD provide information to the FTC to determine what caused the breach. LabMD refused to comply with the FTC requests for information, and the FTC sought a court order. The District Court agreed with LabMD that the FTC’s power under the “unfairness” provision of Section 5 is not unlimited, but nevertheless ruled that the FTC’s investigative authority was broader and so LabMD was required to provide the information to respond to the FTC’s requests.
The FTC has asked Congress in the past for additional authority to mandate data security policies and practices, but so far Congress has not passed a federal data security standard.
So, what is a company to do? If your company is governed by a law that does include more specific security standards, like the Health Insurance Portability and Accountability Act (“HIPAA”) Security Rule, then you have some guidance as to your obligations. With regard to the FTC, no clear standards exist. However, the complaints that the FTC has filed against LabMD, Wyndham, and others are somewhat helpful in revealing the kinds of conduct that the FTC considers to be “unfair,” including:
- failure to implement or maintain a comprehensive data security program to protect consumer information through the use of readily available measures, including things like firewalls and employee training;
- permitting improperly configured software to display passwords, financial information, or login information in unencrypted clear text (for example, it is alleged that Wyndham stored sensitive payment card information in clear readable text);
- failure to ensure and maintain security across user networks (for example, it is alleged that Wyndham did not employ network segmentation between hotels and its corporate network);
- failure to follow best practices for password complexity;
- failure to employ reasonable measures to detect and prevent unauthorized access;
- failure to use reasonable security to design and test privacy sensitive software;
- improper use of peer-to-peer networks;
- failure to follow proper procedures to prevent repeated intrusions—it is not acceptable to suffer repeated security breaches without fixing the problem;
- failure to restrict third party access to data networks.
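To make one of the items above concrete, the clear-text storage problem the FTC alleged against Wyndham can be mitigated by storing only salted, slowly hashed values rather than readable passwords. The sketch below uses PBKDF2 from Python’s standard library; the iteration count and salt size are illustrative assumptions on my part, not a statement of what the FTC would consider reasonable security.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage instead of the clear-text password."""
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Even if an intruder exfiltrates the stored values, there is no “clear readable text” to harvest, which addresses the specific conduct the FTC called out.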
Further, companies should review their online privacy statements and other public statements to determine what representations they have made to the public regarding the security measures that they implement. Companies often see these statements as an opportunity to win over their customers by promising stellar security protection. But this is not wise, given that the FTC is on much stronger ground in bringing an action against a company for failing to live up to its public promises.
California has passed a bill that amends the existing California Online Privacy Protection Act (“CalOPPA”) to require that websites collecting personally identifiable information (“PII”) about California residents be transparent about how they respond to web browser “do not track” (“DNT”) signals. CalOPPA already requires that a covered site’s privacy policy disclose, among other things:
- the categories of PII collected by the site,
- the categories of third parties with whom such information is shared,
- the process (if any) that the site operator uses for an individual to request changes to any PII collected through the site,
- the effective date of the policy, and
- how users are notified about changes to the policy.
The major web browsers offer DNT browser headers that give consumers the choice to elect not to be tracked across websites for purposes such as online behavioral advertising. The amendment to CalOPPA does not require that websites comply with the web browser DNT signals. Rather, websites are free to ignore such signals so long as they are transparent about it to their users in their online privacy policies.
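For context, the DNT signal itself is just an HTTP request header (“DNT: 1” indicates the user has opted out of tracking). A site that chooses to disclose, and perhaps honor, the signal could detect it along these lines; this is a minimal sketch, and, consistent with the amendment, the policy decision of whether to track remains the site’s.

```python
def dnt_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a DNT: 1 header.
    Header names are matched case-insensitively, per HTTP convention."""
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("dnt") == "1"

print(dnt_opt_out({"DNT": "1", "User-Agent": "ExampleBrowser"}))  # True
print(dnt_opt_out({"User-Agent": "ExampleBrowser"}))              # False
```

A site ignoring the signal would simply never call such a check, which is permissible under CalOPPA so long as the privacy policy says so.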
On September 18, the Comptroller of the Currency gave remarks before the Exchequer Club in Washington regarding the risks posed by cyber attacks, which the Comptroller said “have the potential to be as destructive of the financial system as the excesses of the mortgage and securitization markets.” The Comptroller explained that, if the risks posed by increasingly sophisticated and frequent attacks go unchecked, they could threaten the reputation of the country’s financial institutions, as well as public confidence in the system. He echoed the Administration’s desire, reflected in the President’s Executive Order on Cybersecurity, for increased awareness, better information sharing, and collective response.
The Comptroller noted that the trend is toward more technology, including the use of cloud computing, social media, mobile banking, and new payment solutions. As many companies know, each new opportunity brings expanded exposure.
The Comptroller echoed the Administration’s call to work together, and noted the efforts underway to do so, including the work being done by the Federal Financial Institutions Examination Council (“FFIEC”) Cybersecurity and Critical Infrastructure Working Group. The Comptroller expressed the importance of sharing information and augmenting relationships between regulators, law enforcement, and the intelligence communities regarding the threats being seen and the best practices to address them.
The Comptroller noted a particular concern for community banks and thrifts, explaining that smaller players provide a point of access into the system and may have less sophisticated defenses than larger banks. Understandably, smaller banks often rely on third party vendors to support their IT functions and security, and may not have the expertise to identify and mitigate vulnerabilities. So, the Office of the Comptroller of the Currency (“OCC”) is devoting increased resources to community banks and thrifts and has increased outreach to such smaller organizations. The Comptroller noted that the OCC is communicating to banks and thrifts that it is important for them, at the board and senior management level, to be aware and engaged and understand the risks, so that there is a culture of risk management from the top. Again, the Comptroller encouraged communication among institutions, both large and small, and with the relevant government agencies.
In general, the Comptroller emphasized that this is not a problem that can be addressed by one agency or one institution acting alone.