The title says it all – what should smaller companies do to comply with privacy laws?
California has now finalized the California Consumer Privacy Act (CCPA), Cal. Civ. Code §§ 1798.100 to 1798.199 – well, at least for now (please note that this link does not have all of the law changes in it as of the posting of this article). It goes into effect 1/1/2020. Regulations under it will not be issued until December at the earliest and are likely to change over time. While it is a net gain for California consumers, it is a complex law with many incidental effects and traps for the unwary business. How does a small business deal with this mess? Before we address that, let’s discuss some background:
Why is CCPA important?
The CCPA is important because so many businesses do business with California consumers that California law is the “highest common denominator” – meaning, instead of trying to comply with disparate laws in 50 states, a business could target compliance with the most onerous law (typically California law in the pro-consumer sense), and then hope for the best that such compliance will also comply with other laws. This does not always work – for example, Illinois has a much harsher biometric security law than California, and New York has very detailed personal information protection laws and rules as well, particularly in the financial/banking sector. So, a slight modification of the above strategy is to target the “top 3” laws (i.e., California, Illinois, and New York) and again hope for the best in other states. And finally, there is the modified “top 3” strategy of adding compliance with the General Data Protection Regulation of the EU (GDPR).
What many larger companies have done is simply targeted compliance with the GDPR worldwide, assuming it is the most onerous pro-privacy law. However, the CCPA has provisions that differ from, and add to, the GDPR – for example, the CCPA’s regulation of businesses that sell personal information, and of brokers of those sales, has no direct counterpart under the GDPR.
How does this affect small business?
Many small businesses may think they are not subject to these privacy laws because they do not “do business” in a particular state, or in the European Union. Those are interesting issues – whether a state can constitutionally impose liability under a privacy law on a remote business that interacts solely electronically with a resident of that state, and whether GDPR regulators can fine, and enforce fines against, a small US business that has no offices, employees or other contacts in the EU. The practical problem, however, is that many small businesses will contract with larger businesses that are subject to those laws and regulators, and those contracts will require the small business to comply with those laws indirectly. This is particularly true in heavily regulated industries such as banking and health care, where regulators effectively force regulated entities to impose liability on their third party vendors.
In addition, when a small business goes to sell or merge, the acquirer – typically a larger entity – may apply these rules and regulations in analyzing the level of data risk it will carry post-transaction. If the small business has not thought about basic data privacy and data security, this can negatively impact the value of the business.
So, what should a small business do (and, what are the key provisions of CCPA)?
First, do not ignore remote state laws like the CCPA or the GDPR. Someone in the organization should be assigned the responsibility, and be given reasonable time, to make a true assessment of the data privacy and security risks of the company. Ideally that person would have a “C” designation (CIO, CTO, CPO, etc.) and be incentivized to diligently complete such tasks. The business should neither marginalize nor minimize such a role.
Second, do a data inventory – what data is the business collecting? Why? Does it really need it? Where is it collecting this data from (the web, forms, manual entry, data harvesting/scraping, third party lists, etc.)? What agreements are there with those data sources (this includes terms of service)? Is this information personal information? What type of personal information is it (i.e., is it sensitive personal information)? Basically, this is a review of all inbound data flow.
Third, determine where the collected data is being disclosed or shared. This can be the critical step – because if the information being collected is personal information, and particularly if it is sensitive information, this can have significant impacts if there is a data breach. Do adequate agreements cover that data exchange? Is the data encrypted? Should it be? Has any audit or review of the recipient been done to determine the adequacy of their data protection systems? Basically, this step involves tracing all outbound data flows, and determining the business need for the disclosure and the risk level such disclosure presents.
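Steps two and three amount to building a structured inventory of data flows. A minimal sketch in Python is below – the schema, field names, and triage rules are hypothetical illustrations of the questions above, not drawn from any statute:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One inbound or outbound flow of data (hypothetical schema)."""
    name: str                 # e.g. "newsletter signup form"
    direction: str            # "inbound" or "outbound"
    data_elements: list       # e.g. ["email", "name"]
    is_personal: bool         # does it identify a person?
    is_sensitive: bool        # e.g. health, financial, biometric
    counterparty: str         # source or recipient of the data
    agreement: str = "none"   # governing contract / terms of service
    encrypted: bool = False

    def risk_flags(self) -> list:
        """Crude triage: surface the flows that deserve review first."""
        flags = []
        if self.is_sensitive and not self.encrypted:
            flags.append("sensitive data transmitted unencrypted")
        if self.is_personal and self.agreement == "none":
            flags.append("personal data shared with no written agreement")
        return flags

# Example: an outbound analytics export with no contract in place.
flow = DataFlow("analytics export", "outbound", ["email", "device id"],
                is_personal=True, is_sensitive=False,
                counterparty="ad network")
print(flow.risk_flags())  # ['personal data shared with no written agreement']
```

Even a spreadsheet-level inventory of this kind makes the later “who, what, where, when and why” questions far easier to answer.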
Fourth, assess the computer systems used to capture, store, and transmit data, to determine weaknesses in security where a data breach can occur. Computer security is hard, period. It is far harder today, when it’s not just your own computer security at issue, but the security of every link in the data disclosure chain.
Fifth, consider what tools to use to address risk. Do you throw technology at the issue, like better intrusion protection, detection systems, universal threat management devices, etc.? Do you hire experts, which may be costly? Both? Do you have proper agreements in place? Indemnity? Does the downstream recipient have insurance? Are you sure about that? Do you have insurance? Do contracts you have require insurance? Be especially careful with insurance – a good insurance broker will be up on all the various changes in insurance policies that claim to cover “cyber liability” (a generic term that is meaningless without specific context). You simply cannot determine what insurance you need if you have not performed steps 1-4 above.
Sixth, engage in continuous review and management. Hackers are not static, they evolve – your systems must be maintained and modified to address new threats and issues, be updated and patched, and monitored for threats. See item 1 – it is why a dedicated C level person and/or team need to be in place to really address these issues.
The key provisions of CCPA as they relate to small businesses are below:
It only applies to certain “businesses” – namely: a business that has annual gross revenues in excess of $25,000,000; or that, alone or in combination, annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices; or that derives 50 percent or more of its annual revenues from selling consumers’ personal information.
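The three alternative coverage triggers read as a simple disjunction – any one is enough. The sketch below is illustrative only (figures as stated in the text above), not a compliance determination:

```python
def ccpa_covered(annual_revenue: float,
                 pi_records_per_year: int,
                 pct_revenue_from_selling_pi: float) -> bool:
    """Rough check of the CCPA's three alternative coverage triggers.
    Any one trigger suffices; all figures are annual."""
    return (annual_revenue > 25_000_000            # revenue trigger
            or pi_records_per_year >= 50_000       # volume trigger
            or pct_revenue_from_selling_pi >= 50.0)  # data-broker trigger

# A small web shop: $2M in revenue, but 60,000 device identifiers a year.
print(ccpa_covered(2_000_000, 60_000, 0.0))  # True
```

Note that the volume trigger counts devices and households as well as consumers, which is how a small business with modest revenue can still be covered.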
A consumer has a right to an accounting – essentially, a consumer can request that a business disclose to the consumer the categories and specific pieces of personal information the business has collected.
A consumer has the right to request that a business delete any personal information about the consumer which the business has collected from the consumer, subject to several exceptions.
A consumer has the right to an accounting of personal information that has been transferred by the business to third parties, subject to several exceptions. These rights are enhanced if the business “sells” that personal information.
A consumer has the right to opt out of the sale of their personal information. It is important to note that the law does not force a business to offer a service to a consumer who opts out – in other words, a business can condition its service on the right to sell the information. However: (a) a business cannot sell the personal information of consumers under 16 (if it knows they are under 16) without an express opt in, and for consumers under 13, a parent or guardian must opt in; and (b) a business cannot discriminate against a consumer who exercises any of their rights under the law.
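The age tiers in point (a) can be expressed as a small rule. The function below is an illustrative reading of the text above, not a compliance tool:

```python
def may_sell_pi(age: int, consumer_opt_in: bool = False,
                parent_opt_in: bool = False,
                opted_out: bool = False) -> bool:
    """Illustrative reading of the CCPA's age-tiered sale rules:
    under 13  -> a parent or guardian must opt in;
    13 to 15  -> the consumer must opt in;
    16 and up -> sale allowed unless the consumer opts out."""
    if age < 13:
        return parent_opt_in
    if age < 16:
        return consumer_opt_in
    return not opted_out

print(may_sell_pi(14))                        # False – no opt in yet
print(may_sell_pi(14, consumer_opt_in=True))  # True
```

The structural difference is worth noticing: adults default to “sale allowed until opt out,” while minors default to “sale prohibited until opt in.”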
A business must provide two methods of contacting the business to exercise these rights – one of which is a toll free number, unless the business transacts business solely online and has a direct relationship to the consumer, in which case such online business need only provide an email address to send such requests. There are affirmative disclosure requirements for websites of businesses that are subject to the law. Businesses that sell personal information have additional affirmative disclosure requirements for their websites.
A consumer whose personal information has been breached now has an affirmative damage remedy. Previously there was uncertainty in the law as to whether actual damage or harm would have to be shown to recover, or just the risk of future harm. In general the cases have held that actual harm is required, but vary in what they view as “actual harm.”
Significant daily penalties can be assessed for noncompliance after a 30-day notice period.
The above is only a general overview. However, some of those rights – for example, the right to an onward transfer accounting, the right to delete information, and the right to opt out – present not only legal compliance issues, but significant technical hurdles. Many small businesses’ systems were not designed this way, and/or they have so many disparate systems where data is duplicated that it may be hard or nearly impossible to comply. If a small business runs through the above checklist and gets a handle on the who, what, where, when and why questions, it will be easier to then assess the “how hard to comply” question.
For more information or assistance in data security and privacy law compliance, please contact Mike Oliver
Many companies have exactly one year to get their privacy house in order. On May 25, 2018, the European Union’s General Data Protection Regulation (found here in its entirety; the regulation itself without precursors is here: GDPR regulation only) goes into effect. It brings tremendous changes to the previous data protection rules, but in this short post I discuss what I consider to be the “Big 3” issues that the new rule presents, and why, even though US privacy law is almost nonexistent (in the general consumer privacy context), these EU rules will become more and more important even for smaller companies operating solely in the US, due to globalization of data exchange. OK, the Big 3:
Huge fines for small errors. The GDPR allows for fines of up to the greater of 20,000,000 Euros, or 4% of annual global turnover. And, there is every indication that the privacy regulators will be very harsh in doling out these fines, even for fairly innocuous errors. That has certainly been the trend in the U.S. for sensitive data like protected health information.
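Because the cap is “the greater of” the two figures, the flat EUR 20,000,000 floor controls until 4% of turnover exceeds it – that is, until annual global turnover passes EUR 500,000,000:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine: the greater of
    EUR 20,000,000 or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

print(gdpr_max_fine(100_000_000))    # 20000000.0 – the flat floor controls
print(gdpr_max_fine(1_000_000_000))  # 40000000.0 – the 4% prong controls
```

For a small company, in other words, the exposure does not scale down with revenue: EUR 20,000,000 is the floor of the ceiling.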
Information included within the rule is almost everything. The regulation (Article 4, Section (1)) defines “personal data” to mean “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” It is clear this definition encompasses far more information than just “identifying” information – for example, an “online identifier” is just about any technology that tracks a user.
Extra-territorial scope. The regulation (Article 3) extends the reach of the GDPR well beyond the borders of the EU. First, it states that it “applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” So, any data processed by a controller or processor who is located in the EU is subject to this rule, even if the data subject is not an EU resident. Next, it states “This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behaviour as far as their behaviour takes place within the Union.” So, regardless of the location of a business, if it offers goods or services, paid or unpaid, to data subjects in the Union, or monitors their behaviour within the Union, the GDPR applies. Finally, “[t]his Regulation applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.” The US has not yet adopted public international law that accedes to this rule, but other countries may do so. Operating in those countries would impose the rule on the controller or processor.
There is a separate issue about whether the EU could enforce the GDPR against a US based entity in the EU, or whether it would have to try to come to the US and file such a claim; and there is also a separate question of whether a US court would enforce a foreign law against a US based business without an enabling treaty or other enabling statute. However, a company that operates solely in the US would probably have to play ball with the EU authorities if it ever wanted to be able to actually do direct business in the EU. Most large companies have already made that decision. Smaller companies that are wholly located in the US will have to consider whether they want to take the risk of GDPR enforcement, and whether they want to ever expand direct services into the EU.
One year seems like a long time, but the GDPR has been known for some time (it was adopted in 2016), and now the time is short. Companies that might be subject to it really need to be well on their way to making an assessment of what data they are collecting, how they are using it, what efforts they have made to obtain consent to that use, and how they will meet the 6 principles in a timely fashion.
Beginning in 2015, any website or mobile service that is directed to minors under the age of 18 and allows them to post content, will have to delete that content on request of the minor user. SB 568 provides in part that a site directed to minors must “(1) Permit a minor who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application to remove or, if the operator prefers, to request and obtain removal of, content or information posted on the operator’s Internet Web site, online service, online application, or mobile application by the user. (2) Provide notice to a minor who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application that the minor may remove or, if the operator prefers, request and obtain removal of, content or information posted on the operator’s Internet Web site, online service, online application, or mobile application by the registered user. (3) Provide clear instructions to a minor who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application on how the user may remove or, if the operator prefers, request and obtain the removal of content or information posted on the operator’s Internet Web site, online service, online application, or mobile application. (4) Provide notice to a minor who is a registered user of the operator’s Internet Web site, online service, online application, or mobile application that the removal described under paragraph (1) does not ensure complete or comprehensive removal of the content or information posted on the operator’s Internet Web site, online service, online application, or mobile application by the registered user”
Some exemptions to this requirement apply (such as data that must be retained for law enforcement, data that is posted by a third party about the minor, and data that is anonymized). It is not clear (to this writer) that the law would apply after a minor reaches his or her 18th birthday. In other words – it is not clear a minor who does not make the request before their 18th birthday could make the deletion request after their 18th birthday.
That law also prevents a site “directed to minors” from presenting any content or advertising in the following enumerated categories:
(1) Alcoholic beverages
(2) Firearms or handguns
(3) Ammunition or reloaded ammunition
(4) Handgun safety certificates
(5) Aerosol container of paint that is capable of defacing property
(6) Etching cream that is capable of defacing property
(7) Any tobacco, cigarette, or cigarette papers, or blunt wraps, or any other preparation of tobacco, or any other instrument or paraphernalia that is designed for the smoking or ingestion of tobacco, products prepared from tobacco, or any controlled substance
(8) BB device
(9) Dangerous fireworks
(10) Tanning in an ultraviolet tanning device
(11) Dietary supplement products containing ephedrine group alkaloids
(12) Tickets or shares in a lottery game
(13) Salvia divinorum or Salvinorin A, or any substance or material containing Salvia divinorum or Salvinorin A
(14) Body branding
(15) Permanent tattoo
(16) Drug paraphernalia
(17) Electronic cigarette
(18) Obscene matter
(19) A “less lethal weapon”
A site is directed to minors if “[the] Internet Web site, online service, online application, or mobile application, or a portion thereof, is created for the purpose of reaching an audience that is predominately comprised of minors, and is not intended for a more general audience comprised of adults.”
This rule also reaches “advertising services” if the website/mobile operator advises the advertising service that the site is “directed to minors.” Advertisers therefore will need to obtain certification from their customers that the site they are servicing is not directed to minors, or, they will need to add the above filters for such sites.
The California law appears to be the first law that has used the age of 18 in regulating website/platform content; prior to this, under COPPA (enforced by the FTC), the applicable age was “less than 13.”
Sites and services that are “directed to minors” will need to begin technologically addressing the issues raised by SB 568 in 2014, to be ready in 2015.
We meet a lot of clients who fail to obtain a written agreement, or who blindly sign the form provided by the developer – and when a dispute arises, realize only too late the problems created by that lack of diligence. This post addresses critical provisions in a website development agreement.
First, you want to make sure you will own the material and content created by the developer. Thus, you want a provision in the agreement (which must be in writing) that recognizes that the developer’s work for you is considered a “work made for hire” and you want a copyright and intellectual property assignment as well. These clauses ensure that, although the developer is not your employee, you are the owner of the website materials and intellectual property rights. You do not want to find that your website designer created something unique for you only to discover the same unique layout on another website. Many businesses are surprised to learn that in the absence of this statement in a written agreement, an independent contractor (in this case the website developer) typically is the owner of work they create, and the business at most would be a licensee of the material. This means you don’t own the work; rather, you only have permission to use it.
Second, you want to have a provision in the contract that states that the work on the website is the website developer’s original work and/or that the developer has the necessary permission/licenses from the owners to use the work on your site. For instance, the website developer may place photographs on your website – you want the developer to represent that the developer has the right to use those photographs on your website (i.e. either the developer took the photos or it has the permission to use them). If the developer uses photographs owned by a third party on your website without the third party’s permission, the third party could claim you are infringing on their copyright by displaying their work on your website without their permission, and would demand you cease use of the photos and may demand damages as well. Thus, have your website developer represent the work is original or that he has permissions to use all work on your website.
Third, make sure to have an indemnification provision in your agreement. This provision should provide that the developer will indemnify you in the event you incur damages or a loss due to a third party claim that you are infringing their intellectual property rights – where they claim the work on your website is actually their material. For example, a business thinks the graphics on its site are original; however, it receives a cease and desist letter from a third party alleging that its use of the works on its website without the third party’s authorization is copyright infringement and demands damages. Under copyright law, if the third party is the owner of a registered copyright in the work, the business as an unauthorized user could be subject to statutory damages ranging from $750 to $30,000 for unintentional infringement, and up to $150,000 for willful infringement. Thus, if material placed on your website by your developer is subject to a claim or legal action for infringement, you want your developer to indemnify you for these actions since you are relying on their knowledge, creativity and skill in developing and designing your website.
Finally, it is important that you make sure that the developer periodically delivers all source codes and native files to you, and that you control all passwords and access to critical website assets, such as the domain registration. You want to make sure that such files and access rights cannot be withheld in the event of a dispute. Thus, if a dispute arises, the developer’s sole remedy should be money damages. You should not be prevented from transferring the work done (to the point of a dispute) to a new developer, so you can finish your site, and deal with the dispute separately.
Craigslist, Inc. v. 3Taps Inc., No. CV 12-03816 CRB (N.D. Cal. Aug. 16, 2013) is another case in a now long line of cases establishing that, in most situations, access to even an otherwise publicly accessible website can be controlled via selective authorization.
The 3Taps case is very straightforward. 3Taps scraped Craigslist’s website, and replicated it. Craigslist sent them a letter revoking all permissions to access the Craigslist site, but 3Taps ignored that and circumvented IP filters and continued accessing the website, and replicating it. In other words, Craigslist “singled out” 3Taps and told them that they could not access the Craigslist website. 3Taps was singled out because it was copying the entire Craigslist site, in apparent competition with Craigslist.
Note that unlike the Digital Millennium Copyright Act, which requires there to be sufficient technological measures protecting copyrighted content before there can be a finding of circumvention, under the CFAA no such technological measures are required. 3Taps sought to dismiss the complaint filed by Craigslist, which asserted that 3Taps violated the Computer Fraud and Abuse Act (“CFAA”), which generally prohibits a person from “intentionally access[ing] a computer without authorization or exceed[ing] authorized access, and thereby obtain[ing] . . . information from any protected computer.” The essence of 3Taps’ argument was that because the Craigslist website is publicly available, the CFAA does not apply, and therefore, just as anyone else had “authorization” to access and use the website, so did 3Taps. [Note: this decision did not address copyright issues with 3Taps’ conduct.]
A long line of cases enforce “terms of service” either under contract law, under the CFAA, or both – that is, if terms of service authorize access to information on certain conditions, and those conditions are not met, then the access to that information is not authorized and is a violation of the contract and often, the CFAA. See Register.com, Inc. v. Verio, Inc., 126 F. Supp. 2d 238 (S.D.N.Y. 2000), affirmed on other grounds, 356 F.3d 393 (2d Cir. 2004), and its progeny.
You can now add this case to that list. This case even more bluntly stands for the proposition that a website owner can, with only the typical “protected class” exceptions, discriminate against a particular user and revoke authorization, while at the same time generally authorizing the public to access and use the website. This right, moreover, does not make the website operator a so-called common carrier, and the website operator does not give up its other important immunities, such as the immunity under the Communications Decency Act (47 USC 230). There may be other limitations on a website’s right to discriminate – for example, there may be First Amendment interests in the data being accessed, or there might be an argument that certain provisions in a contract limitation constitute a copyright misuse (and hence might make enforcement of the contract, even under the CFAA, problematic). However, in the majority of private interest cases like Craigslist (or Twitter, or Facebook, or virtually any other social media provider) – the owner of the data is going to have a pretty broad right in the U.S. and under U.S. law to protect access to that data via restrictions either in a terms of service, or more directly as was done in the 3Taps case.
Congress is considering an amendment to the CFAA (Aaron’s law – for background, see this Techdirt article, the EFF pages, and what I believe is the current draft here) that might limit a website platform operator’s use of the CFAA to control its content . . . but that issue has come up in various contexts before and Congress has not seemed to have much appetite for monkeying with the CFAA. Also, that would not eliminate the breach of contract claim (see the Verio case above).
The 3Taps case has been cited in some online commentary for the proposition that IP proxies or anonymization systems (like Tor) are “illegal.” That is not what the court held. There are many legal and pro-privacy reasons to use such systems that would not violate the CFAA. The simplest example would be use of such a system to avoid being tracked while browsing the web. In these cases you are not accessing a protected computer without authorization; you are simply sending a false identifier to a computer that is collecting the data of its own volition. The CFAA punishes unauthorized access, not access gained by presenting false location or identification data. However, under the 3Taps case, apparently a terms of service agreement could be written to withdraw consent to any access of the site if a person is using a location or tracking anonymizer/IP spoofer, and hence, a person using such a service and accessing the site could then be in violation of the CFAA. That question, however, also raises substantial First Amendment issues (the right to anonymous speech), which were not present in 3Taps. Thus, it is not at all clear that a court would hold that the CFAA claim would survive in that instance.
Until Congress modifies the CFAA internet users should be cautious about use of “publicly available” but privately owned information on a website, RSS feed, social media firehose, or other resource, and be careful to read and comply with the terms of service. [Note: this blog entry does not address governmental or public information, FOIA or the right (or lack of a right) under a contract or CFAA to “privatize” governmental public data]
FAIR USE FIASCO – BE CAREFUL USING *ANY* IMAGE ONLINE by Mike Oliver
The internet is littered with millions of images – taken by professional and amateur photographers alike – that contain NO identification of the author of the image. In part, this may be because the author is not known, and in part it could be that the metadata in the image – which, if done professionally, will typically have the author’s name and the claim of copyright embedded as text inside the file (known as EXIF data) – has been stripped off.
A recent decision in the 3rd Circuit (Pennsylvania, New Jersey, Delaware, and the Virgin Islands), Murphy v. Millennium Radio Group, LLC, [https://www.ca3.uscourts.gov/opinarch/102163p.pdf] demonstrates how dangerous it is to post on your website an image that does not contain this “copyright management information.”
In Murphy, a professional photographer took a picture of two radio show shock jocks, partially nude, for a print publication. The photographer maintained copyright. A radio station employee scanned the image, posted it on the radio station website, and invited people to “photoshop” the image in a contest. No attribution identifying the photographer was given.
The photographer sued and lost in the trial court – essentially because that court believed the use was a fair use or licensed. The appellate court, however, reversed. It held that under the Digital Millennium Copyright Act, “copyright management information” includes identification of the author of the image. Of important note – the DMCA, for purposes of this provision, has no “fair use” defense – in other words, the copyright management information must always be included.
The takeaway here is that if you are electronically displaying images of which you are not the author, the EXIF data – the data embedded in the image that software can read to see the copyright management information – must not be stripped from the file. In addition, if you are using stock photography, or manage a stock photo site, the EXIF data must be retained in the image.
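As a practical check, a site operator can at least verify that a JPEG still carries its Exif segment after processing (resizing and re-saving pipelines are a common way the metadata gets dropped). The stdlib-only sketch below walks the JPEG marker structure; it is a rough presence check only – copyright notices can also live in IPTC or XMP blocks, which it does not inspect:

```python
import struct

def has_exif_segment(data: bytes) -> bool:
    """Walk the JPEG marker segments and report whether an APP1
    'Exif' segment is present (where author/copyright tags live)."""
    if not data.startswith(b"\xff\xd8"):      # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # corrupt stream; stop
            break
        marker = data[i + 1]
        if marker == 0xDA:                    # SOS: compressed data follows
            break
        # Segment length (big-endian, includes the 2 length bytes).
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + seg_len
    return False
```

A usage pattern would be to run this on an image before and after your publishing pipeline, and flag any file where the segment was present going in but missing coming out.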