by Mike Oliver | May 10, 2024 | Privacy, Privacy Law
TL;DR – privacy laws should prevent companies from denying services to people who use technology to block tracking and ads.
Maryland recently adopted the Maryland Online Data Privacy Act of 2024, available here: https://mgaleg.maryland.gov/2024RS/bills/sb/sb0541e.pdf. It goes into effect in October 2025. Does it increase Maryland consumer privacy rights? Yes (sort of). But it follows many other laws that simply do not address the key issue, and it perpetuates the privacy law and compliance cat-and-mouse game. That “game” is obtaining consent. To understand the cat and mouse, read the Sephora decision. Sure, there are other provisions, like data minimization – but making a claim based solely on those would be hard. Claims are almost always based on a failure to obtain consent. So, as long as a provider can comply with the consent rules, it is 90% of the way in the clear.
The advertising business, which has been around for centuries but really took off with the advent of radio and TV, thrived for more than fifty years without tracking or mass collection of personal information about consumers. The Internet simply acted as a giant enabling device – and all these privacy laws have done virtually nothing to slow it down, because of one word: consent. The new Maryland law does not really change this concept. To elaborate:
There are basically two types of services on the Internet – those that cost money (so-called “paywalled” services), and so-called “free” services, which are primarily the user-to-user social media sites. I say “so-called” because they are not truly free – you are the product; these sites even expressly tell you that your personal data is productized. There are also hybrids – the best examples are hardware makers like Roku and the smart TV manufacturers, where you pay once for the product, but the product comes laden with software that gathers, collects, and sells Personal Information.
So what is the key legal change we need to really make privacy a right? Forcing any data collector to respect a consumer’s technological choice to block tracking and advertising. Period.
As it stands now, US privacy-consent law is mostly opt-out, and when it is opt-in, the providers force you to agree – so it is a Hobson’s choice: don’t use the service, or agree to the use of your Personal Information.
In a truly free, competitive market this would work – because you could make a choice based on the service and on how much you valued your privacy (and, if the laws and user interfaces were better, a knowing understanding of how your Personal Information would be used). However, our actual system is dominated by only a few providers – Meta, X, Reddit, Google/YouTube, TikTok, and maybe one or two others. None of those providers gives you a choice to disallow the use and sale of your Personal Information – because their service simply does not work if you do not consent. All this produces is horrid user interfaces – pop-up consents and re-confirmations full of gibberish, unintelligible explanations of how they use your Personal Information. Try using a browser that technologically blocks those uses – many services just do not work at all.
Those observations are bad enough, but the same happens on paid services – banks, hospitals, insurance companies, streaming providers (easily the worst) – services we pay for. Many will not work at all if you block third-party tracking in a browser.
So, is the new Maryland privacy law good? Apart from the fact that it should have been enacted ten or more years ago, I guess it’s a start, but it’s the same can getting kicked down the road – just a little harder to kick – and all of the service providers that rely on selling your Personal Information already have this figured out. The genie is far too far out of the bottle to meaningfully fix our privacy systems in the US. Instead, users have to suffer through horrible user interfaces and user experiences for the sake of “consenting” to who-knows-what use of our Personal Information, just to connect with friends and family on social media.
Effective advertising does not need all that personal data – in fact, artificial intelligence can now probably do a better job of simply guessing your preferences based on whatever you are searching for, viewing, or browsing, without knowing one actual personal detail about you . . .
by Mike Oliver | Sep 9, 2022 | Privacy, Privacy Law
The first fine under the California Consumer Privacy Act was issued this week against Sephora U.S.A., Inc. The complaint alleged in part “The right to opt-out is the hallmark of the CCPA. This right requires that companies follow certain straightforward rules: if companies make consumer personal information available to third parties and receive a benefit from the arrangement—such as in the form of ads targeting specific consumers—they are deemed to be “selling” consumer personal information under the law.”
There are three important observations arising from this allegation and from the settlement Sephora consented to:
- “Selling” Personal Information does not just mean literally collecting actual personal information and selling it – it means, according to the California attorney general, collecting essentially any information about a website visitor (for example, what browser they are using) and providing it to a third party, who then uses it to track that visitor across its own network of customers – even if the tracking company does not actually know who the person is;
- Websites that use ANY tracking technology must comply with the fairly onerous rules requiring disclosure that the site sells personal information – for example, Sephora had stated (as most websites do today) that they “do not sell personal information”; and
- For all practical purposes any website visitor has the right to completely opt out of “tracking” essentially anything, and the site must provide this ability to opt out and respect it.
The other matter of significance from the complaint and resulting consent fine is that if a user instructs their browser to send a do-not-track-style opt-out signal (the Global Privacy Control, or GPC), the website must honor it.
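As a practical matter, honoring the GPC means checking for the signal on every request and suppressing any “sale” or “share” of data for that visitor. The following is a minimal sketch, assuming a Python/Flask stack; the Sec-GPC request header and its value of “1” come from the GPC specification, while the route and the third-party tag URL are hypothetical.

```python
# Minimal sketch (assumes Flask is installed): detect the Global Privacy Control
# signal and skip third-party tags for that request. Only the Sec-GPC header is
# from the GPC spec; everything else is illustrative.
from flask import Flask, request

app = Flask(__name__)

def gpc_opt_out(req) -> bool:
    """True if the browser sent a Global Privacy Control opt-out signal."""
    return req.headers.get("Sec-GPC", "").strip() == "1"

@app.route("/")
def home():
    if gpc_opt_out(request):
        # Treat the signal as an opt-out of "sale"/"sharing": render the page
        # without injecting any third-party advertising or analytics tags.
        return "<h1>Welcome</h1><!-- no third-party tags injected -->"
    # Visitors without the signal may still need affirmative consent under EU rules;
    # this branch simply shows where a third-party tag would otherwise be added.
    return "<h1>Welcome</h1><script src='https://analytics.example/tag.js'></script>"
```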
Finally, Sephora was unable to establish that the analytics providers were “service providers,” which would have resulted in the transaction not being a sale, because it did not have valid service provider agreements with those providers – indeed, the complaint goes to great lengths to note that Sephora exchanged personal information for free or reduced-price analytics services.
Under the CCPA, a service provider agreement must:
“(1) Specif[y] that the personal information is sold or disclosed by the business only for limited and specified purposes.
(2) Obligat[e] the third party, service provider, or contractor to comply with applicable obligations under this title and obligate those persons to provide the same level of privacy protection as is required by this title.
(3) Grant[] the business rights to take reasonable and appropriate steps to help ensure that the third party, service provider, or contractor uses the personal information transferred in a manner consistent with the business’ obligations under this title.
(4) Require[] the third party, service provider, or contractor to notify the business if it makes a determination that it can no longer meet its obligations under this title.
(5) Grant[] the business the right, upon notice, including under paragraph (4), to take reasonable and appropriate steps to stop and remediate unauthorized use of personal information.”
CCPA, § 1798.100(d).
Virtually no analytics provider online terms of service meet these requirements.
The whole matter is also strange in that Sephora was given 30 days’ notice of the violations and for some reason chose not to comply. Did it decide to contest the claims and then later decide not to? If so, it was a costly decision.
As a result of this decision, every website using any form of third-party (data-sharing) analytics provider needs to review its agreement with the analytics company carefully to see whether it meets the above requirements. If not, the operator needs to either obtain such an agreement or stop using that provider. It also needs to disclose fully what data is shared with (sold to) the analytics provider, provide a full opt-out notice, and of course ensure that the site respects the GPC and any opt-out request. This will be very challenging for many reasons – in part because, in most cases, these analytics providers do not actually know who the person is; they just have the data that identifies the electronic interaction, so they will have to devise a system to scan their identifiers for opt-out requests. In short, this decision is going to make using web analytics all but impossible except where the analytics are limited solely to the website operator.
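To make that identifier-scanning point concrete, here is a hypothetical sketch of how a site operator might honor opt-outs keyed only to a pseudonymous identifier (such as a first-party cookie ID) before any event data leaves the site. The data model and function names are illustrative and not any particular vendor’s API.

```python
# Hypothetical sketch: keep a suppression list keyed by the same pseudonymous
# identifier the analytics provider would receive, and filter events before
# anything is transmitted to a third party.
from typing import Iterable

opted_out_ids: set[str] = set()  # populated from opt-out requests and GPC signals

def record_opt_out(visitor_id: str) -> None:
    """Remember that this pseudonymous visitor opted out of sale/sharing."""
    opted_out_ids.add(visitor_id)

def events_to_share(events: Iterable[dict]) -> list[dict]:
    """Drop events for opted-out visitors before sending anything to a third party."""
    return [e for e in events if e.get("visitor_id") not in opted_out_ids]

# Usage: record_opt_out("cookie-abc123"); only events_to_share(batch) is ever sent
# to the analytics provider, so the opt-out is honored even though no one knows
# the visitor's real identity.
```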
by Mike Oliver | Mar 4, 2022 | Privacy, Privacy Law
Almost every major website you visit today pops up a banner to warn you that it uses “cookies.” This is not legally required in the U.S. or in most places, and where it is required, the vast majority of sites do not comply with the legal requirements. From a policy perspective, cookie pop-ups are just dumb – (virtually) no one reads them. There are vastly better ways to deal with the issue they present, both legally and from a site-usability perspective.
First, no current U.S. law requires cookie pop-ups. Some sites that are available in the European Union are required to post cookie pop-ups – sites that use so-called “tracking cookies.” I discuss below a recent EU case that makes this issue even worse than one would have originally thought.
Second, an anecdotal review of websites shows that the vast, vast majority of them – in my experience, all of the “U.S.” sites – utterly fail to comply with the so-called EU “cookie law.” Why? Because they store the cookie before consent (which the cookie law does not permit), and they simply state “This site uses cookies” and present an “OK” button (and/or an X to close the pop-up) with a link to the privacy policy. See, for example, www.abajournal.com which, as of the date of this post, simply provides an OK button – no option to reject or manage the cookies – and a link to the privacy policy. Just a useless and legally insufficient user-interface distraction.
Finally, except in very, very limited cases, these cookie pop-ups do not in any way increase user privacy protection. Why? Even if a site complies with the notice and consent requirements, it is not legally required to provide the service if a user declines tracking cookies. The site can simply withhold functionality. So in many cases it is not really a choice – the choice is either not to use the site, or to consent to tracking. This is made worse because many governments and third parties use these sites for information dissemination. A truly privacy-focused law would at least require that the site function if a person elected no tracking.
The whole cookie problem was started by our friends in Europe when they promulgated the ePrivacy Directive 2002/58/EC. However, no U.S. company really started focusing on compliance with the “cookie issue” presented in the ePrivacy Directive until the European Union’s General Data Protection Regulation (GDPR), Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, came into effect. The GDPR applies in Europe, not the US; however, so many U.S. companies either do business in, or could ostensibly be regulated by, EU member states that they attempt to comply with both U.S. and EU law.
Many “cookies” – the ones necessary to actually operate a website – are “exempt”: they need not be identified and are not subject to consent. However, sites that use tracking cookies and other tracking technology – even for anonymized data – are required under EU law to obtain prior consent before even storing the cookie or other technology that allows such tracking.
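In implementation terms, the rule means a site sets strictly necessary cookies unconditionally but writes nothing that can track a visitor until an affirmative choice has been recorded. Below is a minimal sketch, assuming a Python/Flask app; the cookie names and the /consent endpoint are hypothetical.

```python
# Minimal sketch (assumes Flask is installed): the strictly necessary session
# cookie is always set, while a tracking cookie is written only after an
# affirmative consent has been recorded.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def page():
    resp = make_response("<h1>Content works with or without tracking</h1>")
    # Exempt, strictly necessary cookie: permitted without consent.
    resp.set_cookie("session_id", "opaque-session-token", httponly=True)
    # Tracking cookie: stored only if consent was previously given.
    if request.cookies.get("consent") == "granted":
        resp.set_cookie("tracking_id", "pseudonymous-visitor-id", max_age=60 * 60 * 24 * 365)
    return resp

@app.route("/consent", methods=["POST"])
def consent():
    # Record the affirmative choice itself; nothing tracking-related is stored before this.
    resp = make_response("", 204)
    resp.set_cookie("consent", "granted")
    return resp
```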
In my opinion, this system has been an utter failure in both policy and actual impact. It has not stopped companies from incessant user tracking. The companies that rely on user tracking have the power to force the choice: allow tracking, or do not use the service. The privacy policies remain mostly unintelligible, or at the very least it is all but impossible to tell exactly what tracking a company does, primarily because they either disclose only the types of tracking or disclose so excessively that the cookie disclosure is indecipherable.
But the EU is doubling down on the concept . . .
In a recent decision (File number: DOS-2019-0137) of the Dispute Chamber of the Data Protection Authority of Belgium, issued 2/2/2022, that regulator held that the “pop-up” framework of the Interactive Advertising Bureau (IAB)’s European arm – used by most of its members and intentionally designed to comply with the GDPR – in fact did not comply. The decision is lengthy (my machine-translated English version is 139 pages long) and will undoubtedly be appealed. As an overview, IAB created a real-time bidding system (RTB) – an automated system of bidding for advertising. That is its framework in the U.S. and many other countries, but in Europe it created the “Transparency and Consent Framework” (TCF). At issue in this case was a subset of the TCF, which the Board described as follows: “Specifically for the TCF, there are also the companies that use so-called “Consent Management Platforms” (CMPs) to offer. Specifically, a CMP takes the form of a pop-up that appears on the first connection to a website appears to request permission from the internet user to collect cookies and other identification data” Para. 40 (Note: all English translations here are machine-created by Google’s translation service). The original decision in Dutch is here (and I can post the English translated version if someone requests it): https://www.gegevensbeschermingsautoriteit.be/publications/beslissing-ten-gronde-nr.-21-2022.pdf.
The basic idea is that IAB manages a “consensu” cookie that indicates whether the web user has already consented to (or rejected) cookies. So a participating site would somehow take information from a user’s initial browser session, send it off to IAB, and IAB would send back a text string indicating whether that user had already consented to accept cookies. If not, a “cookie pop-up” would be presented to the user. The Board found that the IAB maintains a database of users and preferences, which can be used “in order to create an advertising profile of data subjects and to show them personalized advertising” Para. 50. It therefore concluded the IAB was a data controller (a point the IAB disputed). From this point forward the Board found nearly every conceivable violation of the GDPR that could be found. Among them, that “IAB Europe [] failed to observe the principles of due regard for transparency and fairness with regard to data subjects”, in part because some of the information that can be sucked up into the preference model includes “special categories of personal data … For example, participating organizations could become acquainted with the websites previously visited by a data subject, including the political opinions, religious or philosophical beliefs, sexual orientation, health data or also trade union memberships of the data subjects be inferred or disclosed.” Para 51. It also found the IAB’s privacy policy insufficient because, among other reasons, it was only available in English and used unclear, vague terms like “services” and “other means.” Para. 54. It also did not like that the terms “partners” and “third parties” were not explained sufficiently.
To me, this is just evidence that no one really understands the law – or that the regulators think it says one thing and the industry thinks it says another. Not good either way. But after that decision, it seems all but impossible to run a centralized “cookie consent” service – or, to comply, the service would have to be so intrusive as to make the web experience intolerable.
The solution? In my view, just stop with the cookie pop-ups. They are stupid and ineffective. Enact a law that requires a service to respect the do-not-track signal from a browser (currently entirely voluntary) and not to store any tracking cookies, clear GIFs, or other trackers – and require that a site not “discriminate” against users who elect no tracking – basically, provide all functions to users whether or not they consent. I would also prohibit any government organization from using a site that tracks users as a service for information dissemination.
by Mike Oliver | Jan 15, 2021 | HIPAA, Privacy, Privacy Law
In University of Texas M.D. Anderson Cancer Center v. US Dept of Health and Human Services, No. 19-60226 (5th Cir. 1/14/2021) the Fifth Circuit held that the DHHS’ fine for violating the HIPAA Security Rule was “arbitrary, capricious, and contrary to law.” To say that the government lost this case is an understatement – the government’s arguments were roundly rejected in broad language – so much so that the government is going to regret ever having brought this case . . .
In brief, the University of Texas M.D. Anderson Cancer Center (UT) had three computer security lapses in the early 2010s – one laptop and two thumb drives, each storing electronic Protected Health Information (ePHI), were not encrypted and were lost or stolen. DHHS originally fined UT over 4 million dollars for violating rules that in most cases require ePHI to be encrypted and that prohibit disclosure of ePHI to unauthorized persons. UT’s administrative appeals were unsuccessful, but when it petitioned for judicial review, DHHS admitted that the maximum fine it could impose was $450,000. UT objected even to that fine on two grounds: that a state instrumentality is not a “person” under the HIPAA enforcement provisions, and that the fine was arbitrary and capricious under the Administrative Procedure Act. The court did not address the first argument and assumed UT was a person subject to HIPAA enforcement.
Under the HIPAA Security Rule, “a HIPAA-covered entity must “[i]mplement a mechanism to encrypt and decrypt electronic protected health information.” 45 C.F.R. § 164.312(a)(2)(iv)” (emphasis by court). UT had done so – it had policies that required portable and mobile devices to be encrypted, it provided employees certain technology (dongles) to encrypt these devices, and it trained them how to do so. DHHS argued that the mere fact that 3 devices were not encrypted meant that UT had violated the rule. The court disagreed:
[T]he Government argues that the stolen laptop and the two lost USB drives were not encrypted at all. That appears undisputed. But that does not mean M.D. Anderson failed to implement “a mechanism” to encrypt ePHI. It means only that three employees failed to abide by the encryption mechanism, or that M.D. Anderson did not enforce that mechanism rigorously enough. And nothing in HHS’s regulation says that a covered entity’s failure to encrypt three devices means that it never implemented “a mechanism” to encrypt anything at all.
UT v. DHHS, at p. 7 (slip)
The court goes on to provide numerous examples of scenarios in which unauthorized disclosure of unencrypted ePHI would likely not violate the regulation, primarily because the regulation is not written to make data loss a strict-liability offense.
The court reached the same result under the Disclosure Rule. That rule in general prohibits a Covered Entity from “disclosing” PHI except as permitted by the rule. The Disclosure Rule defines “disclosure” to “mean[] the release, transfer, provision of access to, or divulging in any manner of information outside the entity holding the information.” 45 C.F.R. § 160.103. The administrative law judge held that the loss of data on unencrypted devices was a “release”; the court disagreed and stated: “That interpretation departs from the regulation HHS wrote in at least three ways. First, each verb HHS uses to define “disclosure”—release, transfer, provide, and divulge—suggests an affirmative act of disclosure, not a passive loss of information. One does not ordinarily “transfer” or “provide” something as a sideline observer but as an active participant. The ALJ recognized as much when he defined “release” as “the act of setting something free.” But then he made the arbitrary jump to the conclusion that “any loss of ePHI is a ‘release,’” even if the covered entity did not act to set free anything. It defies reason to say an entity affirmatively acts to disclose information when someone steals it.”
Finally, the court was particularly upset that DHHS took the position that it “can arbitrarily and capriciously enforce the CMP rules against some covered entities and not others.” UT had argued that in other similar cases either no fine was imposed, or the fines were much smaller than the one imposed on UT. It also argued that DHHS refused to consider factors expressly stated in its own regulations – none of which DHHS could prove (for example, that any individual suffered financial harm).
This case is an incredible loss for DHHS. It will need to completely overhaul its regulatory enforcement structure, it will most likely need to rewrite regulations, and it will need to better train its ALJs on how to handle administrative appeals in light of the arguments petitioners make. Finally, the case is incredibly helpful for Covered Entities and Business Associates in their efforts to avoid civil money penalties for small and inadvertent infractions (as long as they otherwise meet data security requirements).
Importantly, all entities that store and process PHI should be careful in drafting their Business Associate Agreements and related agreements to distinguish between regulatory violations (which under this case are not strict liability in many scenarios) and contractual liability. Many Business Associate Agreements are written as if *any* “loss” of PHI outside the entity is a breach. Business Associates should review these agreements carefully so as not to undertake greater liability than that imposed under the regulations.
For more information contact Mike Oliver
by Mike Oliver | Jul 22, 2019 | Data Privacy, Privacy Law
Question: How do you cost your company £80,000 with one relatively small computer error?
(Short) Answer: You misconfigure an FTP (file transfer protocol) server . . . and forget and leave it running.
This was the lesson Life at Parliament View Limited recently learned when the Information Commissioner’s Office (https://ico.org.uk) fined it £80,000 for violating the 7th principle of the Data Protection Act 1998 (“DPA”). See https://ico.org.uk/media/action-weve-taken/mpns/2615396/mpn-life-at-parliament-view-limited-20190717.pdf. The ICO could have fined it £500,000 (the maximum under that act) but chose to impose only 16% of the maximum fine.
What happened? Life at Parliament View needed to mass-transfer personal data – though not particularly sensitive data (note 1) – to a data processor, and chose to use an FTP server. It intended to use a feature of the server to require a username and password, but the technicians misunderstood the server documentation from Microsoft and ended up putting the server in Anonymous Authentication mode. In addition, “The FTP server was further misconfigured in that whilst approved data transfers were encrypted, personal data transmitted to non-approved parties was not. As such, transfers of personal data over FTP to non- approved parties had the potential to be compromised or intercepted in transit.” (Though not explained in the opinion, this was likely a fallback setting that allowed the server to transmit over an unencrypted channel if the receiving party did not have a secure channel available.) The server was left in this condition for just shy of two years. Computer logs showed over 500,000 anonymous data requests. Eventually a hacker (well, really a person with ordinary computer skill who located the open FTP server) who had obtained the data began extorting Life at Parliament View.
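The misconfiguration is trivially detectable from the outside. As a rough sketch (not anything from the ICO notice), Python’s standard ftplib attempts an anonymous login when login() is called with no credentials, so a simple periodic check against your own hosts would surface the problem; the hostname below is a placeholder.

```python
# Rough sketch: test whether an FTP server accepts anonymous logins.
import ftplib

def allows_anonymous_ftp(host: str, timeout: float = 10.0) -> bool:
    """Return True if the FTP server at `host` accepts an anonymous login."""
    try:
        with ftplib.FTP(host, timeout=timeout) as ftp:
            ftp.login()                               # no credentials -> anonymous login attempt
            ftp.retrlines("LIST", lambda line: None)  # can we even list files?
            return True
    except ftplib.all_errors:
        return False

# A periodic check like allows_anonymous_ftp("ftp.example.com") against your own
# servers would have caught this configuration long before two years had passed.
```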
While the failure of basic computer security is plain in this case, it is noteworthy that ICO also found the following violations:
- After configuring the server, LPVL failed to monitor access logs, conduct penetration testing, or implement any system to alert LPVL of downloads from the FTP server, any of which would have facilitated early detection and containment of the breach (a rough sketch of such a log check follows this list);
- LPVL failed to provide staff with adequate and timely training, policies, or guidance, either in relation to setting up the FTP server or on information handling and security generally.
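For illustration only, the monitoring the ICO faulted LPVL for lacking could be as simple as scanning the FTP access log for anonymous download entries and alerting on any hit. The log path and line format below are assumptions; real IIS FTP log fields would need to be mapped accordingly.

```python
# Hypothetical sketch: flag FTP access log lines that look like anonymous downloads.
import re
from pathlib import Path

ANON_RETR = re.compile(r"\banonymous\b.*\bRETR\b", re.IGNORECASE)

def anonymous_downloads(log_path: str) -> list[str]:
    """Return log lines that look like anonymous file downloads."""
    lines = Path(log_path).read_text(errors="ignore").splitlines()
    return [line for line in lines if ANON_RETR.search(line)]

if __name__ == "__main__":
    hits = anonymous_downloads("/var/log/ftp/access.log")  # placeholder path
    if hits:
        # In practice this would page an on-call engineer or open a ticket.
        print(f"ALERT: {len(hits)} anonymous download(s) detected")
```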
The ICO has been very active in the general data protection space and in issuing fines, and this decision – while an easy one in light of the poor computer security practices – is telling because the ICO found secondary violations in post-implementation failures to detect and to train.
The same tendency is appearing in the US – the FTC and state attorneys general are increasing their oversight of data protection, and several states (e.g., California with the CCPA) are enacting new data protection and data oversight requirements. While the FTC has had some wins (see a recent consent order, with no fine, against an auto dealer software provider whose unencrypted data was exposed for 10 days – https://www.ftc.gov/news-events/press-releases/2019/06/auto-dealer-software-provider-settles-ftc-data-security), and at least one major setback in its efforts against LabMD (http://media.ca11.uscourts.gov/opinions/pub/files/201616270.pdf), it is likely that government regulators will start going after companies that have engaged in less egregious data security violations but nevertheless have lax training or monitoring, and will probably also pursue smaller businesses that may not have the resources for a robust security system and training.
For more information on our data security and privacy practice contact Mike Oliver.
_______________________
(note 1): The data consisted of “The types of personal data potentially compromised included names, phone numbers, e-mail addresses, postal addresses (current and previous), dates of birth, income/salary, employer details (position, company, salary, payroll number start date, employer address & contact details), accountant’s details (name, email address & phone number). It also contained images of passports, bank statements, tax details, utility bills and driving licences of both tenants and landlords.”
by Mike Oliver | May 26, 2017 | Internet, Technology and Privacy Law, Privacy Law
Many companies have exactly one year to get their privacy house in order. On May 25, 2018 the European Union’s General Data Protection Regulation (found here in its entirety; the regulation itself, without precursors, is here: GDPR regulation only) goes into effect. It brings tremendous changes to the previous data protection rules, but in this short post I discuss what I consider to be the “Big 3” issues the new rule presents, and why, even though US privacy law is almost nonexistent (in the general consumer privacy context), these EU rules will become more and more important even for smaller companies operating solely in the US, due to the globalization of data exchange. OK, the Big 3:
- Huge fines for small errors. The GDPR allows for fines of up to the greater of 20,000,000 Euros, or 4% of annual global turnover. And, there is every indication that the privacy regulators will be very harsh in doling out these fines, even for fairly innocuous errors. That has certainly been the trend in the U.S. for sensitive data like protected health information.
- Information included within the rule is almost everything. The regulation (Article 4, Section (1)) defines “personal data” to mean “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”. It is clear this definition encompasses far more information than just “identifying” information – for example, an “online identifier” is just about any technology that tracks a user.
- Extra-territorial scope. The regulation (Article 3) extends the reach of the GDPR well beyond the borders of the EU. First, it states that it “applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” So, any data processed by a controller or processor located in the EU is subject to the rule, even if the data subject is not an EU resident. Next, it states “This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or (b) the monitoring of their behaviour as far as their behaviour takes place within the Union.” So, regardless of where a business is located, if it offers goods or services (paid or unpaid) to data subjects in the Union, or monitors their behaviour there, the GDPR applies. Finally, “[t]his Regulation applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.” The US has not yet adopted public international law that accedes to this rule, but other countries may do so. Operating in those countries would impose the rule on the controller or processor.
There are many other significant issues raised by the GDPR. For example, in the EU one of the six core principles is data subject control and access. Article 12, Section 3 states: “The controller shall provide information on action taken on a request under Articles 15 to 22 to the data subject without undue delay and in any event within one month of receipt of the request.” A company must ask itself – can it do that? And not just if one data subject asks, but if thousands, or hundreds of thousands, do. Clearly, any “big data” holder will not be able to meet this standard using humans – it will need an automated system. And see my first big issue above – failure to meet this requirement is going to result in a fine for sure. The question will be how big, and that will likely depend on what effort went into at least trying to meet the standard. Another example: the GDPR is crystal clear that consent to use personal data cannot be obtained through an ambiguous “privacy policy” or buried in terms of service. The opt-in must be plain, unambiguous, and intelligible to the data subject. So disclosures of how a company tracks a data subject in a privacy policy are not sufficient consent.
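Returning to the Article 12(3) clock mentioned above: even the simplest automation starts with deadline bookkeeping. The sketch below is purely illustrative (standard-library Python, a rough 30-day stand-in for “one month”, and an invented data model), but it shows the kind of tracking a controller needs before the volume of requests makes manual handling impossible.

```python
# Illustrative sketch only: track data subject requests and flag those whose
# response window has closed. A real system would compute a true calendar month
# and store requests durably rather than in memory.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class SubjectRequest:
    request_id: str
    received: date

    @property
    def due(self) -> date:
        return self.received + timedelta(days=30)  # rough stand-in for "one month"

def overdue(requests: list[SubjectRequest], today: Optional[date] = None) -> list[SubjectRequest]:
    """Return requests whose response window has already closed."""
    today = today or date.today()
    return [r for r in requests if today > r.due]

# Usage: overdue([SubjectRequest("DSR-001", date(2018, 5, 25))]) flags anything the
# controller has let slip past its deadline – at scale this has to be automated.
```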
There is a separate issue about whether the EU could enforce the GDPR against a US-based entity in the EU, or whether it would have to come to the US and file such a claim; and there is also a separate question of whether a US court would enforce a foreign law against a US-based business without an enabling treaty or statute. However, a company that operates solely in the US would probably have to play ball with the EU authorities if it ever wanted to do direct business in the EU. Most large companies have already made that decision. Smaller companies that are wholly located in the US will have to consider whether they want to take the risk of GDPR enforcement, and whether they ever want to expand direct services into the EU.
One year seems like a long time, but the GDPR has been known for some time (it was adopted in 2016), and the time is now short. Companies that might be subject to it really need to be well on their way to assessing what data they collect, how they use it, what efforts they have made to obtain consent to that use, and how they will meet the six principles in a timely fashion.
For more information, contact Mike Oliver.