Tenn. Makes Nine? ‘Tennessee Information Protection Act’ Set to Become Newest Comprehensive State Privacy Law
On Friday, April 21, lawmakers in Nashville approved the Tennessee Information Protection Act (TIPA) following unanimous votes. Tennessee now joins Iowa, Indiana, and Montana as the fourth state in 2023 to advance baseline privacy legislation governing the collection, use, and transfer of consumer data.
TIPA is closely modeled on the Virginia Consumer Data Protection Act (VCDPA) that was enacted in March 2021 and went into effect on January 1 of this year. The frameworks share key definitions, business obligations, and core consumer rights. For example, TIPA and the VCDPA both require companies to obtain consent for the processing of sensitive personal data and allow consumers to opt out of data sales, targeted advertising, and significant profiling decisions.
Nevertheless, the Tennessee proposal deviates in several ways that make it, on the whole, a less protective privacy regime than Virginia’s landmark law. Below, we highlight the key ways that TIPA differs from the VCDPA.
Unique coverage thresholds: TIPA will likely apply to a narrower range of businesses than the VCDPA by covering only companies that make $25 million in annual revenue and that process the data of 175,000 or more state residents (a minimal sketch of this two-part test follows this list).
Broad carve-outs for pseudonymous data: Unlike the VCDPA, TIPA’s carve-out for pseudonymized information extends to the consumer rights to opt out of data sales, targeted advertising, and significant profiling decisions. Depending on how the definition of “pseudonymous data” is interpreted and enforced, this approach could significantly narrow the impact of consumers’ opt-out rights.
Insurance industry exemption: TIPA establishes a blanket, entity-level carveout for licensed insurance companies.
Longer right to cure: Tennessee and Virginia both require the Attorney General to give a business an ‘opportunity to cure’ any alleged violation of the Act. However, Tennessee provides a 60-day cure period rather than Virginia’s 30 days.
NIST ‘Safe Harbor’ Defense: TIPA establishes a first-of-its-kind affirmative defense against enforcement for businesses that “reasonably conform[]” to the NIST Privacy Framework or “other documented policies, standards, and procedures designed to safeguard consumer privacy.” Given that the NIST Framework is intended to provide a flexible way for organizations to identify and manage risks within diverse environments, it is unclear what ‘reasonable conformity’ to the framework would entail or how invoking this affirmative defense would work in litigation.
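To make the coverage thresholds concrete, the following minimal sketch (our own illustration, not statutory text) expresses TIPA’s headline two-part applicability test as summarized above; it ignores any secondary thresholds or exemptions in the bill itself.

```python
# A minimal sketch of TIPA's two-part applicability test as summarized
# above; both conditions must be met. Secondary thresholds and exemptions
# in the bill itself are ignored here.

def tipa_applies(annual_revenue_usd: int, tn_residents_processed: int) -> bool:
    return (
        annual_revenue_usd >= 25_000_000
        and tn_residents_processed >= 175_000
    )

# A firm below the revenue floor falls outside the law regardless of how
# many residents' data it processes:
print(tipa_applies(20_000_000, 500_000))  # False
print(tipa_applies(30_000_000, 200_000))  # True
```

Because both conditions must be satisfied, this conjunctive test is what narrows TIPA’s coverage relative to the VCDPA, which has no revenue floor.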
Not every distinction makes the Tennessee proposal weaker than the VCDPA. For instance, while Tennessee and Virginia both allow the Attorney General to recover $7,500 in civil penalties for each violation of the law, in Tennessee a court may award treble damages (up to $22,500 per violation) for willful or knowing violations. Should TIPA be enacted, it will take effect on July 1, 2025.
The ‘Montana Consumer Data Privacy Act’ Reminds Us That Privacy Is Bipartisan
On Friday, April 21, the Montana State Legislature approved the ‘Montana Consumer Data Privacy Act’ (MCDPA) and sent it to the Governor’s desk. If enacted by Governor Gianforte, Montana would join the six states that have already adopted comprehensive privacy frameworks. Notably, at almost every stage of the legislative process, the MCDPA received unanimous bipartisan support and strengthening amendments.
The MCDPA includes what would be the strongest baseline consumer privacy rights and protections of any Republican-led U.S. state, comparable in substance and scope to leading privacy frameworks in Connecticut and Colorado. Furthermore, the MCDPA is unlikely to require significant modifications to the compliance programs of organizations that are already subject to either of these existing state laws.
Significant privacy-protective elements of the MCDPA include:
Wide Applicability: In recognition of Montana’s status as only the 43rd most populous state, the MCDPA would apply to businesses that handle the personal information of 50,000 or more residents (rather than the typical 100,000 threshold).
Broad definition of data “sales”: If the MCDPA is enacted, Montana will be the first Republican-led state to define the sale of personal information as encompassing transfers for both monetary and “other valuable consideration.” The other states whose laws adopt this definition are California, Colorado, and Connecticut.
Potent Opt-Out Rights: Consumers’ right to opt out of targeted advertising, data sales, and significant profiling decisions would extend to pseudonymous data, could be exercised through authorized agents, and could not be ignored by businesses even when a request cannot be authenticated.
Opt-Out Preference Signals: If enacted, the MCDPA would have Montana join California, Colorado, and Connecticut in allowing individuals to exercise certain rights by default, through the use of technological mechanisms such as browser-level signals (see the sketch following this list).
Right to Cure Sunsets: The MCDPA would offer organizations an opportunity to cure prior to the Attorney General taking enforcement action; however, this provision would expire two years after taking effect.
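As a concrete illustration of what honoring a browser-level signal can involve, the sketch below assumes the Global Privacy Control (GPC) proposal, which participating browsers transmit as the HTTP request header Sec-GPC: 1. The MCDPA itself does not name a specific mechanism, and the function and store names here are illustrative only.

```python
# Illustrative handling of a browser-level opt-out preference signal,
# assuming the Global Privacy Control (GPC) header "Sec-GPC: 1".
# The MCDPA does not name a specific signal; all names here are our own.

OPT_OUTS: dict[str, set[str]] = {}  # user_id -> opted-out processing purposes

def record_opt_out(user_id: str, scopes: tuple[str, ...]) -> None:
    OPT_OUTS.setdefault(user_id, set()).update(scopes)

def apply_preference_signal(headers: dict[str, str], user_id: str) -> None:
    # Treat the signal like an explicit opt-out request covering data
    # sales and targeted advertising, exercised by default.
    if headers.get("Sec-GPC") == "1":
        record_opt_out(user_id, ("sale", "targeted_advertising"))

# A request arriving with the signal set:
apply_preference_signal({"Sec-GPC": "1"}, user_id="user-123")
assert OPT_OUTS["user-123"] == {"sale", "targeted_advertising"}
```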
So far in 2023, three states, including Montana, have passed privacy legislation through their legislatures, and one state, Iowa, has seen privacy legislation signed into law. If enacted, the MCDPA will take effect on October 1, 2024.
Tanzania’s Personal Information Protection Act: Overview, Key Takeaways, and Context
On November 27, 2022, the President of Tanzania signed the Personal Information Protection Act, 2022 (PIPA) after it garnered unanimous Parliamentary support following its September 2022 introduction during the 8th Parliamentary sitting. The Act’s passage makes the United Republic of Tanzania (henceforth referred to as “Tanzania”) the 35th country in Africa to enact a standalone data protection law and effectively extends data protection safeguards to more than 63 million people. The official text of the law is in Swahili.
Prior to the passage of PIPA, Tanzania made several unsuccessful attempts to pass a data protection law. A right to privacy was first recognized in the Bill of Rights of the 1984 Constitution, after failed attempts to include it in earlier iterations of the constitution, and the 2003 National ICT Policy subsequently called for policy changes to facilitate enactment of a specific and effective legislative instrument on privacy. Reforms toward a comprehensive data protection law began in 2013 in connection with the African Union’s Harmonization of ICT Policies in Sub-Saharan Africa (HIPSSA) project. Tanzania received financial and technical support from the International Telecommunication Union (ITU) and the European Commission to develop its first comprehensive data protection law, but that effort was ultimately unsuccessful. A second attempt at a comprehensive data protection bill began in August 2022, when a draft was released for public consultation; this bill ultimately became PIPA.
The commencement date of PIPA will be determined by the Minister of Communications through a gazette notice (Section 1). The stated objectives of PIPA are laid out in Section 4 and include:
Controlling collection and processing of personal data;
Ensuring the collection and processing of personal data is guided by the principles laid down in the law;
Protecting the privacy of individuals; and
Establishing legal and institutional mechanisms for protecting personal information.
Overview of Key Features: From Recognizing Broad Categories of Sensitive Data, to Specifically Allowing Monetization of Personal Data
PIPA establishes a data protection framework for Tanzania that imposes obligations related to the processing of personal data. Specifically, it defines the forms of personal data covered under the law, the covered actors and extent of application of the law, the registration requirements for controllers and processors, and the obligations of controllers and processors toward data subjects. The structure and provisions of PIPA coincide with laws in other parts of the world; however, PIPA contains unique provisions that differentiate Tanzania from other countries.
For example, the law contains broad provisions on categories of sensitive personal data and imposes a mandatory requirement on all controllers and processors to appoint a data protection officer. Further, the law establishes unique situations where it is not applicable, including, among others, situations where processing is carried out for the purpose of identifying and preventing tax evasion, investigating embezzlement of public funds, or performing due diligence prior to appointment in a public service position.
Interestingly, the law obliges controllers to prioritize collecting personal data directly from data subjects. Only where this is not possible can they collect personal data from third parties, under specific conditions that are akin to “lawful grounds for processing”.
With regard to using a data subject’s personal data for commercial advertising, the law specifically allows the monetization of personal data by permitting a data subject to enter into a contract with the data controller, on the basis of which the controller may process the data subject’s personal data for financial gain.
Another unique feature of the law relates to how data subjects exercise their rights. The law mediates the relationship between a data subject and a controller in certain cases. For example, a request by a data subject to have a controller or processor modify, block, delete, or destroy incorrect personal data relating to them must first be made to the Personal Information Protection Commission (the data protection authority established by the law) for onward transmission to the controller or processor.
The structure of the Commission to be created also carries unique features, especially the creation of a board to oversee the conduct of the Commission. With regard to cross-border data transfers, the Commission and the Minister of Communications retain wide discretion over whether a transfer can be made, even when the conditions stipulated in the law are fulfilled.
Territorial Application, Covered Actors, and Data: Introducing a Limited Extraterritorial Scope and “Data Collectors”
Territorial Scope
Per express language in Section 2, PIPA applies to mainland Tanzania as well as to Zanzibar; in Zanzibar, the law applies only to Union matters. The First Schedule of the Constitution of Tanzania enumerates the “Union matters,” which include the Constitution of Tanzania and the government of the United Republic. Laws passed by the Union Parliament can apply to Zanzibar only where there is an express provision declaring so, or where the law relates to Union affairs and complies with the provisions of the Union Constitution.
This specification is necessary because Tanzania was formed from the 1964 merger of two formerly sovereign states: the Republic of Tanganyika and the People’s Republic of Zanzibar. The 1964 union did not extinguish Zanzibar’s sovereignty, and, as such, the unified state maintains two governments. Zanzibar retains its own constitution and governs itself with regard to non-Union matters, while the Union government based in Dodoma (the united republic’s capital) maintains power over the entire territory with regard to Union matters. Zanzibar’s House of Representatives has legislative powers limited to non-Union matters, as stipulated in the 1984 Constitution.
PIPA applies extraterritorially, but in a more limited way than other data protection laws like the EU’s General Data Protection Regulation (GDPR) or Indonesia’s Personal Data Protection Act. According to the law, Section 4 of PIPA applies to processing of personal information carried out by a controller residing in Tanzania or in a place where the laws of Tanzania apply in accordance with international law. It also applies to any processing of personal information carried out by a controller or processor residing outside the United Republic if the processing takes place in the country and is not for the purpose of transferring personal information to another country (Section 22(b) and (c)). The condition that processing take place in the country to trigger the law’s extraterritoriality limits its reach. However, by specifying that extraterritoriality does not apply when personal data is transferred outside of the country to be processed there, the law resolves a conundrum common to other data protection laws: the tension between extraterritorial effect and the rules governing international data transfers.
Like many other African personal data protection laws, PIPA exempts processing of personal data for household purposes (Section 58(2)(a)). Other exemptions under Section 58 include where processing is:
Carried out in accordance with other laws;
Pursuant to a court order and in furtherance of national security or public security; and
Carried out for the purpose of preventing crime, identifying and preventing tax evasion, investigating embezzlement of public funds, or performing due diligence prior to appointment in a public service position.
The Minister of Communications is empowered to expand the list of exempt circumstances and the means of implementing such exemptions are provided in Section 58(3). However, these exemptions do not preclude a data collector (defined below) from complying with the principles relating to collection and processing of personal information or the security safeguards requirement of the law (Section 58(1)).
Covered Actors
Opting to use the term “data collectors”1 to refer to data controllers, PIPA applies to data controllers, processors, and recipients, which may be individuals, private entities, or public entities that process personal data.
The Act defines a controller as a person, individual institution, or public institution that alone or together with other institutions determines the purposes and methods of personal information processing; and where the purposes and methods of processing are specified in the law, the controller is a person, entity, or public institution appointed in accordance with the law and will include its representative.
A processor is defined as a person, individual entity, or public entity that processes personal information for and on behalf of the controller and under the instructions of the controller, except for persons who under the direct authority of the controller are permitted to process personal information, including their representatives. A recipient is defined as a person, entity, public institution, or any other person who receives personal information from the controller.
Covered Data: Broad definition of “sensitive data”
PIPA covers personal data that is defined as information about an identifiable person that is maintained in any form, including (Section 3):
Race, color, ethnicity, religion, age, and marital status of an individual;
Education, medical history, criminal data, and employment information;
Any identification number or any other mark that identifies an individual;
Address, fingerprints, and blood group of an individual;
Name of individual that appears in the personal information of another person related to them or where such disclosure will reveal the personal data of the data subject; and
Personal data that is sent to a data controller by a person where it is clear that such data is personal or confidential and responses to the information may reveal the content of the previous information and the view or opinion of any other person about the data subject.
PIPA further lists the following as “sensitive personal data”:
Genetic data, defined as personal data resulting from genetic analysis;
Children’s data (child has the same meaning assigned to it as under the law concerning children in Tanzania, which defines a “child” as a person below the age of 18);
Criminal records;
Financial transactions; and
Biometric information.
Beyond the categories listed above, PIPA defines sensitive personal data broadly. According to Section 3 of the law, personal data becomes sensitive if, when processed, it reveals the race, ethnicity, political ideologies, religious or philosophical beliefs, trade union associations, gender, health data, or sexual relationships of a data subject. Sensitive personal data also includes “any personal information that, according to the laws of Tanzania, is considered to have a significant impact on the rights and interests of a data subject.” “Significant impact” is not defined in the law but could be clarified at a later time through the Minister’s power to create regulations (Section 64(1)).
The Act imposes restrictions on the processing of these forms of sensitive personal data. PIPA prohibits processing sensitive personal data without the written consent of the data subject (Section 30(1)). A data subject may withdraw consent at any time, without reason and at no cost (Section 30(2)). Additionally, the Minister has regulatory discretion to designate circumstances in which the prohibition on processing sensitive personal data may not be lifted even with a data subject’s written consent (Section 30(3)). Section 30(5) also lists circumstances in which a data controller or processor does not need a data subject’s written consent to process sensitive personal data, including when the:
Processing is necessary to comply with other laws;
Processing is necessary to protect the important interests of the data subject or of another person, if the data subject cannot give their consent or is not represented by a legal representative;
Processing is necessary for the filing, operation, or defense of legal claims;
Processed personal information has been disclosed to the public by the data subject;
Processing is necessary for the purposes of scientific research and the Commission has provided specific guidelines defining the circumstances where such processing can take place; and
Processing is necessary for medical purposes in the interest of the data subject, and the sensitive personal information involved is processed under the supervision of a health professional in accordance with the law governing such services.
Obligations of Controllers and Processors: From Old School Registration Obligations, to Compulsory Appointment of DPOs
Registration as Data Controllers and Processors
PIPA, like many other African data protection laws, requires data controllers and processors to register with the data protection authority before collecting and processing personal data (Section 14). The Communications Minister recently released draft regulations on the registration of data controllers and processors that provide the conditions for registration. These requirements have similarities to those in other African jurisdictions, such as Kenya. Upon fulfilling the conditions for registration, a controller or processor receives a certificate of registration (Section 14(3)), which is valid for 5 years after it is issued (Section 15(2)).
Unlike Kenya’s Data Protection Act, 2019, PIPA does not provide a threshold for registration as a data controller or processor. This lack of a threshold implies that all individuals and private entities acting as controllers or processors are required to register with the authority, regardless of their size. Furthermore, PIPA’s certificates of registration are valid for 5 years, in contrast to Kenya’s, which are valid for 2 years. Interestingly, PIPA deems public bodies automatically registered as controllers and processors upon commencement of the law, with no action required from them (Section 21).
Compliance with the Principles of Data Processing: From purpose limitation to security safeguards
Section 5 of PIPA requires controllers and processors to process personal data in accordance with the principles set forth under the law, including:
Lawfulness, fairness, and transparency;
Specified, legitimate purpose and purpose limitation;
Adequate and relevant for the purpose of the intended processing;
Correctness;
Retention in a manner that allows for identification for a period no longer than is necessary for the purpose of processing;
Processing with regard to the rights of data subjects;
Maintaining security safeguards during processing, including protection against unauthorized processing, loss, damage, or other harm using appropriate technical and administrative measures; and
Transferring personal data out of the country in compliance with the law.
All Data Controllers and Processors Must Appoint Data Protection Officers
Section 27(3) of PIPA requires controllers and processors to appoint a Data Protection Officer (DPO). There are no thresholds or criteria that trigger appointment of a DPO, which means that all data controllers and processors must have a DPO.
Collection of Personal Data: An Obligation to Prioritize Collection Directly from the Data Subjects
According to Section 23(1) of PIPA, data controllers are generally required to collect personal data directly from the data subject. Prior to such direct collection of personal data, a controller shall ensure that a data subject (Section 23(2)):
Recognizes the purpose for which the personal data is collected;
Understands that collection of personal data is for an authorized purpose; and
Knows the recipients of the personal data.
This obligation to collect personal data directly from a data subject is not commonly found in other regional or global frameworks, but a similar provision can be found in Kenya’s Data Protection Act, 2019.
However, a data controller is not obliged to directly collect personal data under certain circumstances (Section 23(3)), including if:
Personal data is publicly available;
The data subject has consented to collection of personal data from another source or person;
It is impossible to directly collect personal data from a data subject;
The law allows for collection of personal data indirectly; or
Direct collection may affect the purpose of collecting personal data.
Notably, the law does not define what “publicly available” means in the context of personal data collection. However, it is possible that this definition will be provided at a later time through the Minister’s power to create regulations (Section 64(1)).
Duty to Ensure Accuracy of Data
PIPA requires a data controller, before any processing occurs, to take steps to ensure that the information is complete, correct, consistent with the intended purpose of processing, and not misleading (Section 24).
Further Processing of Personal Data Beyond the Initial Purpose
Section 25(2) of PIPA sets the conditions for when further processing of personal data is permitted, including when:
The data subject has consented to the new purpose;
Use of personal information for the new purpose is authorized or required by law;
The purpose for which personal information has been used is directly related to the purpose of collecting that information;
The personal data is used in a manner that does not disclose the identity of the data subject;
The personal data is used for statistical or research purposes in a manner that will not identify the data subject; or
The controller reasonably believes that the use of personal information for that other purpose is necessary to prevent or reduce harm to the life or health of the data subject or of another person, or to the health or safety of society.
Establishing a Data Processing Agreement
PIPA requires that the relationship between a controller and processor be mediated by a data processing agreement (Section 27(4)). Activities of the processor must be governed by a contract that specifies the relationship between the processor and the controller and includes the controller’s instructions to the processor.
Data Retention
A controller is required to consider the existing laws that stipulate data retention periods for various data processing activities or develop a retention policy consistent with forthcoming regulations (Section 28(1)).
Security and Data Breach Processes
PIPA obligates controllers to take necessary steps to safeguard personal data (Section 27(1)). A processor has a duty to adhere to the levels of security stipulated under the Act (Section 27(4)). In the event of a security breach relating to data processed on behalf of the controller by a data processor, the data controller is obliged to inform the data protection authority (Section 27(5)). This implies that a processor is obligated to inform the controller in the event of a security breach. However, there is no obligation for controllers under the law to notify data subjects in the event of a data breach.
Creation of Codes of Ethics
Controllers are required to develop codes of ethics for processing personal data in compliance with the provisions of the law and to submit them to the Commission for review and approval. Where the Commission deems fit, it may seek the input of data subjects or their representatives before approval (Section 65). The Act does not specifically require each controller to develop its own code of ethics; the broad provision gives controllers leeway to do so either independently or as a group.
Data Subject Rights: From an Absolute Opt-Out of Commercial Advertising, to the Right Not to Be Subject to Solely Automated Decision-Making
Part 6 of PIPA enumerates the rights of data subjects that controllers must adhere to: the right to access personal data, the right to restriction of processing, an absolute opt-out from commercial advertising (which might have important consequences for online advertising in the country), a right not to be subject to solely automated decision-making, and a right to have personal data modified, blocked, deleted, or destroyed. Protection of the rights of data subjects is one of the principles of data protection under PIPA, which may support interpreting the law’s provisions in ways that enhance protections for individuals exercising their rights.
Under Section 33, data subjects are entitled to know that their personal data is being processed and the details of the processing, including:
What personal data is being processed;
The purpose of processing;
Any recipients of the data; and
Where a decision with significant impact on the data subject has been made solely on the basis of an evaluation derived from the automatic processing of their personal data, as well as the rationale behind the decision.
However, a data controller is not obliged to provide the above information to a data subject if the information is incorrect, if it is being used in an investigation in accordance with the law, or if it is withheld by court order. Notably, data subjects must convince the Commission that data held by a controller is incorrect in order to exercise their right to deletion or modification of that data under Section 38.
As for the right to restriction of processing, where a processing activity “may cause serious harm” to the data subject or any other person, the data subject has the right to ask the data controller to not initiate the processing or to stop the processing. The methodology to restrict processing shall be stipulated in regulations to be issued by the Minister of Communications.
Under Section 35(1), a data subject, through procedures that shall be specified in future regulations, has the right to ask the data controller to stop processing their personal data for the purpose of commercial advertisements (i.e., the presentation, in any form, of a commercial advertisement addressed to a particular person). This provision seemingly amounts to an absolute opt-out from any processing of personal data for “commercial advertising,” which could potentially be interpreted much more broadly than the GDPR’s “direct marketing”.
As per Section 35(2), a data subject may, with regards to commercial advertising, execute a contract with the data controller, on the basis of which the controller may process the data subject’s personal data for financial gain.
According to Section 36(1), data subjects have the right to ask the controller, through procedures that will be stipulated by regulations, to ensure that no decision based solely on automated means is made where that decision has a significant impact on the data subject. The way the right is drafted marks a departure from the GDPR, which frames automated decision-making as a prohibition with exceptions rather than a right that data subjects must actively exercise. Where the data controller nonetheless makes a decision solely on the basis of automated means, the controller must, as soon as possible, inform the data subject that a decision was made based on automated processing, and the data subject has the right to request that the automated decision be reconsidered (Section 36(2)). However, these rights do not apply if a decision based on automated processing is necessary to enter into or enforce a contract between the data controller and the data subject, if it is permitted by any law, or if the data subject has given their consent (Section 36(3)).
Lastly, Section 38 provides that the data subject may ask the Commission to make an order to a controller or processor to modify, block, delete, or destroy personal data relating to them if the personal data is incorrect, even if the controller or processor received this data as part of an accurate record given to them by the data subject or another person.
Cross Border Data Transfers and Data Localization: A Three-Tiered Approach to Data Transfers
Part 5 begins by providing that, consistent with the provisions of PIPA, the Commission may prevent the export of personal data out of Tanzania (Section 31(1)). Such a restriction notwithstanding, personal data may be transferred out of Tanzania to other countries considered to have an adequate level of protection under certain circumstances (Section 31(2)), including when the recipient determines that:
The personal data is necessary for the performance of a duty in the public interest;
Such a transfer is in accordance with the legitimate interests of the data controller; or
The transfer of the personal data is necessary and there is no reason to believe that the legitimate interests of the data subject may be affected by the transfer or by processing in the receiving country.
In transferring personal data to an adequate country, the controller is required to conduct an initial assessment of the importance of transferring the data, and the recipient is required to ensure that the necessity of the transfer is ascertainable at a future date (Section 31(3) and (4)). The controller is required to ensure that the recipient processes the personal data only for the purpose for which it was transferred (Section 31(5)).
Personal data may also be transferred to a country without an adequate level of protection if adequate protection is guaranteed and personal data is transferred for the purpose of processing that is allowed by the controller (Section 32(1)). Criteria for assessing whether adequate protection is offered by a country include (Section 32(2)):
All the circumstances relating to the transfer of relevant personal data;
Type of personal data;
Purpose and duration of the proposed processing;
The country of the recipient;
Relevant laws applicable to the country; and
Professional regulations in use and the security measures observed in the recipient’s country.
Despite the provisions on transferring personal data to countries without adequate protection and the conditions to be fulfilled in this respect, the Minister of Communications is required, after consulting with the Commission and through regulations, to specify the type of processing and the circumstances under which the export of personal information to countries without adequate protections will not be allowed (Section 32(3)). In other words, the Minister of Communications will have the discretion to ban transfers in certain situations and for certain purposes.
Notwithstanding the provision under Section 32(3), personal data may be transferred to non-adequate jurisdictions when:
The data subject has consented;
The transfer is necessary for the performance of a contract or fulfillment of pre-contractual requirements between a data subject and a controller; or
The transfer is necessary for entering into or executing a contract entered into or to be entered into between the controller and another person in the interest of the data subject.
Finally, the Commission may affirmatively permit specific transfers of personal data to a country without adequate protection (even if the other adequacy criteria cannot be fulfilled) where the controller assures the Commission that adequate security safeguards are in place, that the rights and freedoms of the data subject are guaranteed in the domestic laws of the recipient’s country, that the rights of data subjects can be enforced, and that the protection can be implemented through adequate legal, security, and regulatory measures.
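Putting the three tiers together, the schematic below (our own summary, not legal advice or an implementation of the statute) shows the order of analysis described in Sections 31 and 32; each boolean stands in for a legal determination that the controller, the Commission, or the Minister makes elsewhere.

```python
# A schematic of PIPA's three-tiered transfer analysis (Sections 31-32)
# as summarized above. Each flag abstracts a legal determination; this
# illustrates the structure of the analysis, not the law itself.

def transfer_permitted(
    destination_adequate: bool,       # Tier 1: adequate country (Section 31)
    protection_guaranteed: bool,      # Tier 2: adequacy guaranteed (Section 32(1))
    minister_banned_transfer: bool,   # Minister's ban power (Section 32(3))
    subject_consented: bool,          # Tier 3 derogations
    contract_necessity: bool,
    commission_permission: bool,      # Commission's affirmative permission
) -> bool:
    if destination_adequate:
        return True  # still subject to the Section 31(2)-(5) conditions
    if protection_guaranteed and not minister_banned_transfer:
        return True
    # Derogations apply notwithstanding Section 32(3)
    return subject_consented or contract_necessity or commission_permission
```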
Enforcement: New Data Protection Authority, Processes, and International Cooperation
Data Protection Authority
Section 6(1) of the Act establishes the Personal Information Protection Commission. The Commission shall be headed by a Director General who shall be appointed by the president (Section 11(1)) and will have the following duties:
Monitoring the implementation of the Act;
Registering controllers and processors;
Receiving, investigating, and processing complaints;
Investigating and taking action against a matter that the Commission deems to affect the protection of personal information and privacy of people;
Raising awareness;
Establishing a cooperation mechanism with the authorities of other countries;
Advising the Government with regard to the implementation of this Act; and
Performing other duties required for the effective implementation of the Act.
The management of the Commission shall be overseen by a seven-member Board (Section 8) comprising a Chairperson, a Vice-Chairperson, and five at-large members. The Chairperson and the Vice-Chairperson shall be appointed by the President of Tanzania; if the Chairperson is from mainland Tanzania, the Vice-Chairperson shall be appointed from Zanzibar, and vice versa. The Board shall, among other functions, oversee the activities and performance of the Commission (Section 9(2)(b)) and approve and oversee financial management procedures and service rules (Section 9(2)(g)). The Board may form committees to conduct its functions (Section 10).
Financial Resources
Per Section 51, funding for the Commission includes an amount set by the Parliament, along with paid fines, donations, gifts or grants, loans, and any other income derived from the Commission’s activities. The Act also describes the internal mechanisms for managing the Commission’s financial resources, the roles of the Board and the Director General, and the Commission’s accountability duties. Annual budgets must be approved by the Minister of Communications, who has the power to ask the Commission to adjust a proposed budget. Additionally, the Director General must submit an annual report to the Minister, who will in turn submit it to the Parliament (Section 57). The Act does not otherwise provide for the Minister or the Parliament to intervene in the Commission’s activities.
Initiating a Complaint
Data subjects may lodge complaints with the Commission on the basis of a violation of the Act by a controller and/or a processor (Section 39(1)). Upon receipt of a complaint, the Commission shall notify the data controller or processor of the complaint and of its intention to conduct investigations (Section 40). Investigations shall be conducted and completed within 90 days of when the complaint was submitted (Section 39(3)), though the Commission may, depending on the circumstances of the investigation, extend an investigation by up to another 90 days (Section 39(4)). The investigation process shall be conducted confidentially and with all security requirements in place.
Commission’s Authority During Investigations
Section 42 enumerates the Commission’s investigatory powers, including to:
Summon and require a person to appear before the Commission;
Receive evidence;
Enter buildings to ascertain that they meet the security requirements;
Examine and obtain copies of records that it deems necessary and relevant from controllers’ and/or processors’ premises; and
Interrogate people and collect evidence.
The Commission will also receive submissions from the complainants and the data controller or processor. The Commission may engage other individuals or authorities to assist in enforcement of the law (Section 44). The Commission may apply to the courts for preservation orders when personal data involved in an investigation is at the risk of loss or alteration (Section 59).
Section 43 of PIPA makes it an offense to obstruct the Commission during performance of its investigations. The offense of obstructing the Commission attracts a fine between 100,000 and 5,000,000 Tanzania Shillings (approximately between 42 and 2,130 US Dollars) or imprisonment for not more than two years, or both.
Outcome of Investigations
If the Commission concludes that there has been a violation of the Act, the Commission may issue an enforcement notice requiring the controller and/or processor to take appropriate measures to remedy the violation (Section 45). Where the controller or processor fails to comply with the enforcement notice, the Commission may, based on certain factors, issue a penalty notice requiring the controller or processor to pay an administrative fine (Section 46). The elements the Commission must take into account when deciding whether to issue a penalty notice, and the fine to be paid, are enumerated under Section 46(2). Where the Commission decides to issue a penalty notice, the law sets the maximum fine at 100,000,000 Tanzania Shillings (approximately 42,600 US Dollars) (Section 47).
Once the Commission has made a decision, two actions may follow:
The Commission may, at its own discretion or at the request of a party to the complaint, revisit its decision and reverse, change, or suspend the decision or its instructions (Section 48); or
A party dissatisfied with an administrative action taken against it by the Commission may appeal to the High Court (Section 49).
The Commission may order a controller and/or processor to compensate a data subject for harm caused by violations of the Act’s provisions, in addition to other penalties and with regard to Section 37 on the right to compensation.
Offenses, Sanctions, and Compensation: From the Offense of Obstruction to Wide Penalty Bands for Different Offenses
Civil and Criminal Liability
Beyond the offense of obstructing the Commission during investigations mentioned above, PIPA creates an offense for the disclosure of personal data for any reason other than the intended purpose, and for selling personal data obtained contrary to the law (Section 60). Individuals may be punished by a fine between 100,000 and 10,000,000 Tanzania Shillings (approximately between 42 and 4,260 US Dollars), imprisonment for up to 10 years, or both. Companies or organizations may be fined between 100,000,000 and 5 billion Tanzania Shillings (approximately between 42,600 and 2,130,000 US Dollars) (Section 60(6)).
The law also prohibits the destruction, deletion, concealment, misrepresentation, or alteration of personal information in violation of the law (Section 61). These offenses attract a fine between 100,000 and 10,000,000 Tanzania Shillings (approximately between 42 and 4,260 US Dollars), imprisonment for up to 5 years, or both. Where an offense is committed by a company, the company and every officer of the company who knowingly and intentionally violates the law shall be held liable (Section 62). The law creates a “general punishment” for offenses not specifically stipulated that still amount to a violation under the Act (Section 63). The penalty for an offense not specified under the law is between 100,000 and 5,000,000 Tanzania shillings (approximately between 42 and 2,130 US Dollars), imprisonment for up to 5 years, or both.
Compensation Under PIPA
Section 37(1) provides that a data subject who suffers harm due to the violation of the Act’s provisions by a controller or processor is entitled to compensation. A data subject shall be entitled to compensation on condition that (Section 37(2)):
The complainant or their representative (in the case of a child or person of unsound mind) is the affected data subject;
Rights of the data subject have been violated due to a breach of the law; and
The effects of the violation(s) are related to the processing of personal data contrary to the provisions of the Act.
Where the Commission is satisfied that a data subject has suffered harm under compensable circumstances and there is a risk of further violations, it may order the data controller to modify, block, delete, or destroy the personal data. Having made such an order, the Commission may also require the controller and processor to inform any third parties that received the personal data of the order to correct, block, delete, or destroy that data (Section 37(4)). When making such an order, the Commission will consider the number of people to be notified (Section 37(5)).
Section 50 specifies the relative liability of the data controller and the data processor. The controller is conditionally responsible for the results of the processing. The processor is responsible in two cases: (1) if they have not complied with the duties specifically addressed to them under the Act or (2) if they have acted contrary to the controller’s instructions. The controller and/or the processor may only avoid liability if they can prove that they were not involved in any way in the event that caused harm.
Expected Regulations
Finally, Section 64 stipulates the various regulations required for the implementation of the Act, including but not limited to:
Regulations on circumstances excluded from the scope of the law;
Regulations detailing duties of a data protection officer; and
Procedures for storing and disposing of personal information.
As stated previously, the Minister has already released draft regulations that cover registration of data controllers, cross border data transfers, and the handling of complaints.
Conclusion
Tanzania’s adoption of this legislation is a significant development for data protection in the country. The Act reflects common provisions found in many other regional and global data protection frameworks, and also includes unique provisions, particularly related to the governance of the new data protection authority. Tanzania’s differing approach can also be seen in provisions dealing with cross border data transfers. As the country awaits the commencement of the Act and the publication of regulations, Tanzania remains a jurisdiction to watch for those interested in African data protection.
1 The Act uses “data collector” throughout. The definition of a “data collector” provided under the law is similar to that of a “data controller” in many other data protection laws, although in some laws, such as Uganda’s, a “data collector” is differentiated from a “data controller”. Since the definition of “data collector” provided under PIPA is similar to that of a controller in many other laws, we use “data controller” throughout this blog.
Whither Indiana? Somewhere in the Middle for Consumer Privacy Protection
On April 13, 2023, Indiana Senate Bill 5 unanimously cleared the state legislature. If the bill is signed by Governor Holcomb, Indiana will become the seventh state to enact a baseline consumer privacy law.
To help stakeholders assess where Indiana fits into the expanding U.S. state privacy landscape, the Future of Privacy Forum has released a chart comparing SB 5 to the Connecticut Data Privacy Act (“CTDPA”), which currently stands as one of the most protective baseline state privacy laws, and the recently-enacted Iowa SF 262 (“IPA”), which stands as one of the narrowest.
Indiana SB 5 adopts a similar framework for protecting consumer privacy as both the Iowa and Connecticut privacy laws. However, in the scope of its consumer rights, business obligations, and enforcement mechanisms, Indiana lies somewhere between these two existing regimes. In particular, our chart shows that:
Indiana SB 5 applies to a roughly equivalent range of covered entities as both the CTDPA and IPA.
Indiana SB 5 covers much of the same data as the CTDPA and IPA, though it aligns more closely with the IPA by: (1) not recognizing data that reveals a mental or physical health “condition” absent a diagnosis as sensitive information; (2) defining “biometric data” with a broad exclusion for data generated from photographs, video, or audio; and (3) explicitly excluding aggregate data from its scope.
Indiana SB 5 establishes similar consumer rights as the CTDPA, including the rights to access, correct, and delete personal data and to consent to the processing of sensitive personal information. However, SB 5 provides slightly narrower rights of both access and correction and does not provide heightened protections for adolescents’ data.
Indiana SB 5 creates opt-out rights for targeted advertising, profiling, and the sale of personal data, consistent with the CTDPA. However, like the IPA, “sale” is narrowly defined to only include exchanges for “monetary consideration.”
Like most comprehensive U.S. state privacy laws, Indiana SB 5 would require businesses to limit the amount of personal data they collect, to disclose their data processing practices, and to conduct data protection impact assessments for certain processing activities.
Indiana SB 5 would be exclusively enforced by the State Attorney General. Like the IPA, businesses would have a non-sunsetting right to “cure” any alleged violations of the Act, though SB 5’s timeframe to cure is much shorter than both the CTDPA and the IPA.
Indiana’s SB 5 has a substantial compliance on-ramp and will not take effect until January 1, 2026.
FPF CEO Jules Polonetsky Receives IAPP’s Prestigious Privacy Leadership Award
The IAPP Leadership Award is given annually to individuals who “demonstrate an ongoing commitment to furthering privacy policy, promoting recognition of privacy issues, and advancing the growth and visibility of the profession.”
Previous recipients of the award include former US Deputy CTO Nicole Wong, European Data Protection Supervisor Giovanni Buttarelli, Professor Peter Swire of Georgia Tech’s Scheller College of Business, former FTC Commissioner Julie Brill, UK Information Commissioner Elizabeth Denham, Hogan Lovells’ (and FPF founder) Christopher Wolf, and a host of others.
“The Privacy Leadership Award is an incredible recognition, I am honored,” said Jules, who has served as FPF’s CEO for the last 15 years. “I thank the team at IAPP for the award and my staff at FPF, who continue serving as global privacy leaders and publishing influential scholarship that is imperative to advancing privacy safeguards, protections, and policy.”
Considered one of the leading Internet and data privacy experts, Jules served on the founding board of the IAPP and was co-editor of the “Cambridge Handbook of Consumer Privacy”.
Jules was previously the CPO of AOL and of DoubleClick. At both companies, Jules worked with clients to ensure trust, build best practices in product development and implement privacy policies that complied with global data protection requirements.
Building on his public service experience as a former state legislator, congressional staffer and Commissioner of the New York City Department of Consumer Affairs, Jules has testified in Congress, assisted with drafting data protection legislation, and presented expert testimony with global agencies and legislatures.
In addition to leading a global non-profit, he remains active in the larger privacy community as a member of The George Washington University Law School Privacy and Security Advisory Council and through service on the Advisory Boards of Harvard University’s Privacy Tools Project, OpenDP, and the University of California Privacy Lab.
Congratulations as well to Stephen Reynolds, winner of the Diversity in Privacy Award; Peggy Eisenhauer, winner of the Global Vanguard Award – North America; and Marcos Semola, winner of the Global Vanguard Award – Latin America.
FPF Files Comments to Inform New California Privacy Rulemaking Process
On Monday, March 27, the Future of Privacy Forum (FPF) filed comments with the California Privacy Protection Agency to inform the Agency’s forthcoming rulemaking to implement the California Privacy Rights Act amendments to the California Consumer Privacy Act’s provisions on cybersecurity audits, risk assessments, and automated decisionmaking.
FPF’s comments are directed towards ensuring that individuals are able to effectively exercise new consumer rights under the CCPA while maximizing clarity for both individuals and businesses and promoting interoperability with emerging U.S. and global privacy frameworks.
Specifically, FPF recommended the Agency adopt regulations concerning automated decisionmaking and risk assessments that:
Govern automated decisionmaking systems that produce “legal or similarly significant effects”;
Clarify how the California Consumer Privacy Act will apply to automated decisions and profiling subject to varying degrees of human oversight;
Support meaningful access rights with respect to automated decisionmaking systems;
Provide guidance that supports context-appropriate flexibility in developing and conducting data protection assessments; and
Are informed by existing best practices for data protection assessments.
Let’s Look at LLMs: Understanding Data Flows and Risks in the Workplace
Over the last few months, we have seen generative AI systems and Large Language Models (LLMs), like OpenAI’s ChatGPT, Google Bard, Stable Diffusion, and DALL-E, send shockwaves throughout society. Companies are racing to bake AI features into existing products and roll out new services. Many Americans are worrying whether generative AI and LLMs are going to replace them in the workforce, and teachers are downloading ChatGPT-specific software to ensure their students are not plagiarizing homework assignments. Some have called for a pause to AI development. But organizations and individuals are adopting LLMs ever more quickly, and the trend shows no signs of abating.
Organizations have quickly seen employees using generative AI and LLM tools in their workstreams. Few workers are waiting for permission to use the technologies to speed up complex tasks, get tailored answers to full-sentence questions, or draft content like marketing emails. However, the growing use of LLMs creates risks, such as privacy concerns, content inaccuracies, and potential discrimination. Use of LLMs can also be deemed inappropriate in certain contexts and create discontent: students recently criticized their university for lacking empathy when the school used ChatGPT to draft an email notice about a nearby mass shooting.
As organizations navigate these uncertainties, they are asking whether, or when, employees should be permitted to use LLMs for their work activities. Many organizations are establishing or considering internal policies and guidelines for when employees should be encouraged, permitted, discouraged, or prohibited from using such tools. As organizations create new policies, they should be aware that:
1. When workers share personal information with LLMs, it can create legal obligations for data protection and privacy, including regulatory compliance;
2. Many organizations will need to establish norms for originality and ownership, including when it is appropriate for employees to use LLMs or other generative AI systems to create novel content;
3. Organizations need to carefully evaluate any uses for potential bias, discrimination, and misinformation while also considering other potential ethical concerns.
What are LLMs?
In late 2022, OpenAI released its AI chatbot, ChatGPT, which has since gone through several versions and is now available in a version powered by GPT-4. ChatGPT is both a generative AI system and a large language model (LLM), two related but distinct AI terms. A “generative AI system” is a type of AI that has been trained on data so that it can produce or generate content similar to what it has been trained on, such as new text, images, video, music, and audio. For example, a growing number of AI tools are available for generating artwork, and some even create videos based on text. A “large language model” (LLM) is a type of generative AI that generates text. LLMs can perform a variety of language-related tasks, including translation, text summarization, question-answering, sentiment analysis, and more. They can also generate text that mimics human writing and speech, a capability that has been used in applications such as chatbots and virtual assistants. To produce human-like responses, LLMs are trained on vast quantities of text data; in ChatGPT’s case, data drawn from across the internet.
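To ground the discussion that follows, here is a minimal sketch of how an application might call a hosted LLM, using the pre-1.0 OpenAI Python client that was current when ChatGPT launched; the model name, prompt, and key handling are illustrative, and newer client versions expose a different interface.

```python
# A minimal sketch of calling a hosted LLM from application code, using
# the pre-1.0 OpenAI Python client; model name and prompt are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize: LLMs are trained on text..."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Everything placed in `messages` leaves the organization’s systems and is processed by the provider, which is precisely why the workplace risks discussed below arise.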
#1. Legal Obligations.
a. Data protection and privacy.
In general, users of generative AI and LLMs should avoid inputting personal or sensitive information into ChatGPT or similar AI tools. ChatGPT uses the data input by many users to generate individual responses and to further train its model for future responses to all users. If an employee inputs data that contains confidential information, such as trade secrets, medical records, or financial data, that data may be at risk, especially if a data breach of the AI system results in unauthorized access or disclosure. Similarly, if an individual puts personally identifiable information or sensitive data into the model and that data is not properly protected, it could be improperly accessed or used by unauthorized individuals.
Furthermore, personal information disclosed to LLMs could be used in additional ways that violate the expectations of the people to whom the information relates, because ChatGPT continues to use and analyze this data. The sensitive information that an employee inputs about a customer or patient could potentially be revealed to another user who poses a similar question or prompt. Further, every engagement with ChatGPT carries a unique identifier, creating a login trail of the people who use it. An individual’s use of ChatGPT is therefore not truly anonymous, raising questions about OpenAI’s retention of sensitive data.
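One practical safeguard an organization can build into its internal policies is screening prompts for obvious identifiers before they ever reach an LLM. The sketch below is our own illustration, not any vendor’s API; regex patterns like these catch only the most obvious identifiers and are no substitute for a dedicated data loss prevention tool.

```python
# Illustrative pre-submission screen for obvious personal identifiers.
# The patterns and names are a sketch, not an exhaustive PII filter.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with placeholders before sending a prompt to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Patient Jane Doe, jane.doe@example.com, SSN 123-45-6789"))
# -> Patient Jane Doe, [EMAIL REDACTED], SSN [US_SSN REDACTED]
```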
b. Regulatory Compliance.
LLMs are subject to the same regulatory and compliance frameworks as other AI technologies, but as they become more common, they raise novel questions about how such tools can be used in compliance with the General Data Protection Regulation (GDPR) and other regulations. Since ChatGPT processes user data to generate responses, OpenAI or the entities relying on ChatGPT for their own purposes may be considered data controllers under the GDPR, which means they must secure a lawful ground for processing users’ personal data (such as users’ consent) and must inform users about their ChatGPT-powered data processing activities.
OpenAI and companies relying on ChatGPT’s capabilities also need to consider how overarching GDPR principles like data minimization and fairness curtail certain data processing activities, including decisions made on the basis of algorithmic recommendations. Additionally, under the GDPR, data subjects have certain rights regarding their personal data, including the rights to access, rectify, and delete their data. But can users really exercise these rights in practice? With regard to erasure, OpenAI offers users the ability to delete their accounts, but OpenAI has stated that conversations with ChatGPT can be used for AI training. This presents challenges: while the original input can apparently be deleted, that input may already have shaped and improved ChatGPT. Removing a user’s complete digital footprint and its effects on ChatGPT may be unfeasible, which sits uneasily with the GDPR’s “right to be forgotten.” Moreover, the repurposing of prompts to train OpenAI’s algorithm may raise issues under the GDPR’s purpose limitation principle, as well as questions about the applicable lawful ground for service improvement, following a recent restrictive binding decision from the European Data Protection Board (EDPB) on the lawfulness of processing for service improvement purposes.
There are open questions about the future regulation of ChatGPT and similar technologies under the European Union (EU) Artificial Intelligence Act (AI Act), which is currently under review by the European Parliament. Proposed in 2021, the regulation is designed to ban certain AI uses, such as social scoring, manipulation, and some instances of facial recognition. However, recent developments regarding LLMs and related AI services have caused the European Parliament to reassess “high-risk” use cases and how to implement proper safeguards not previously accounted for in today’s rapidly developing tech environment. EU lawmakers have proposed different ways of regulating general-purpose and generative AI systems during their discussions on the text of the AI Act. The consensus at the EU Council is that the European Commission should regulate such systems at a later stage via so-called ‘implementing acts’; at the European Parliament, lawmakers may include such systems in the AI Act’s high-risk list, thereby subjecting them to strict conformity assessment procedures before they are placed on the market.
#2. Ownership and Originality.
Organizations should determine when, depending on context, it is appropriate or ethical for individuals or organizations to use, or take credit for, work generated by LLMs. In education, for example, some teachers are adapting their approaches (e.g., from written to oral presentations) to avoid plagiarism. Some schools have even banned ChatGPT out of concern that the technology will lead students to take shortcuts when writing or to forgo their own research.
In many cases, these issues raise novel questions about legal rights and liability. For example, software developers have used ChatGPT to write new code and improve existing code. Yet LLMs, including ChatGPT, have been shown to regularly produce inaccurate content. If an employee incorporates code generated by ChatGPT into a product that interacts with the public, the organization may face liability if something goes wrong. Relatedly, if employees input copyrighted, patented, or confidential information (e.g., trade secrets) into a generative AI tool, the resulting output could infringe intellectual property rights or breach confidentiality obligations.
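As a purely illustrative example (not actual ChatGPT output), the kind of plausible-looking snippet that can create downstream liability might look like this minimal Python sketch:

```python
# Hypothetical example of generated code that looks correct but hides a bug.
def apply_discount(price_cents: int, percent: float) -> int:
    """Return the discounted price in cents."""
    return int(price_cents * (1 - percent / 100))
    # BUG: int() truncates toward zero rather than rounding, so e.g.
    # apply_discount(101, 15) returns 85 instead of 86. In a public-facing
    # billing system, silent errors like this accumulate into real harm;
    # currency math should generally use round() at minimum, or decimal.Decimal.
```

A snippet like this would pass a casual review, which is precisely why organizations need testing and review processes for generated code rather than relying on its surface plausibility.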
#3. Ethical Concerns, Bias, Discrimination, and Misinformation.
Finally, organizations must carefully consider all uses of AI, including LLMs, for possible discriminatory outcomes and effects. In general, LLMs reflect the underlying data that they are trained on, which is often incomplete, biased, or outdated. For example, AI training datasets often exclude information from marginalized and minority communities, who have historically had less access to technologies such as the internet, or had fewer opportunities to have their writings, songs, or culture digitized. ChatGPT was trained on internet data, and as a result is likely to reflect and perpetuate societal biases that exist in websites, online books, and articles.
For example, a Berkeley professor asked ChatGPT to create code to determine which air travelers pose a safety risk; ChatGPT assigned a higher "risk score" to travelers who were Syrian, Iraqi, Afghan, or North Korean. Similarly, a predictive algorithm used for medical decision-making was biased against black patients because it was trained on data that reflected historical bias. Even though the deployers excluded race as an input when running the system, the algorithm still disadvantaged black patients because it relied on economic factors and healthcare costs, which functioned as proxies for race.
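To illustrate why the first output was alarming, here is a minimal Python reconstruction of the pattern that was reported; it is hypothetical (not the model's verbatim output), with illustrative names and weights:

```python
# Hypothetical reconstruction of the reported pattern: nationality is used
# directly as a risk factor, so discrimination is encoded by construction.
HIGH_RISK_NATIONALITIES = {"Syrian", "Iraqi", "Afghan", "North Korean"}

def risk_score(traveler: dict) -> int:
    score = 0
    if traveler.get("nationality") in HIGH_RISK_NATIONALITIES:
        score += 10  # a trait of birth, not behavior, raises the "risk"
    return score
```

The medical example works the same way one step removed: race never appears in the code, but cost variables correlated with race produce the same discriminatory effect.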
Furthermore, generative AI and LLMs can be disruptive, changing the way we consume and create information. ChatGPT has demonstrated its ability to write news articles, essays, and television scripts. Supplied with a prompt loaded with disinformation or misinformation, LLMs can produce convincing content that could mislead even thoughtful readers. Audiences risk consuming vast amounts of misinformation if they cannot fact-check the information given to them or do not know that content was generated by an LLM. Organizations that use LLMs should be aware that these tools can generate inaccurate or misleading information even when prompts are not intended to mislead, and should exercise vigilance before giving clients, customers, or users information based solely on an LLM's output.
To address ethical concerns, bias, discrimination, and misinformation, organizations have a responsibility to scrutinize their use of generative AI and LLMs. Progress is being made on transparency in generative AI models, though complete solutions remain elusive. Ethical considerations are especially important when AI is used in an outcome-determinative way, such as in hiring or healthcare; in some cases, such uses risk running afoul of employment or other civil rights laws. Organizations should identify the contexts in which this type of AI use is particularly susceptible to bias and discrimination and avoid those situations. They should also engage the communities most affected and seek stakeholder input when drafting internal policies.
Conclusion
Recent developments concerning LLMs and generative AI demonstrate substantial technological advancement while also presenting many uncertainties. There are many unanswered questions and yet-to-be-discovered risks that may result from use of AI in the workplace. However, these harms can be mitigated if organizations take the time to address these issues internally and develop best practices. We encourage organizations to be inclusive and collaborative across disciplines when engaging in these conversations with lawyers, engineers, customers, and the public.
FPF Report: Not-So-Standard Clauses – An Examination of Three Regional Contractual Frameworks for International Data Transfers
On March 30, the Future of Privacy Forum launched a new report comparing three regional model contractual frameworks for cross-border data transfers. The report compares the EU's Standard Contractual Clauses (SCCs), the ASEAN Model Contractual Clauses (MCCs), and the Ibero-American Network's Model Transfer Agreement (MTA). The three frameworks cover a total of 62 jurisdictions on three continents; the report seeks to identify overlaps and key differences among the three while also reflecting on their potential interoperability. Notably, the report does not evaluate contractual frameworks created by individual countries (such as the recently released Chinese Standard Contract Provisions) or frameworks that are still in development, like the recently amended draft Convention 108+ Model Contractual Clauses for the Transfer of Personal Data from the Council of Europe.
International transfers of personal information are an increasingly contentious space in the privacy and data protection world. As more jurisdictions pass laws and develop regulations governing the collection and processing of personal data, limitations on the transfer of that information from one jurisdiction to another necessarily follow. Exceptions go hand-in-hand with these generalized restrictions on cross-border data transfers, and in that context, pre-approved contractual frameworks between transferor and transferee have emerged as critical components of the modern cross-border data transfer environment.
These contractual frameworks set out the responsibilities of the parties to a data transfer, mandating to a greater or lesser extent what information those parties must provide to one another, members of the public, and relevant government authorities while also covering issues ranging from the distribution of liability to the parties’ responsibility to evaluate the laws of destination jurisdictions. This Report outlines how the three chosen frameworks are similar and where they are different in a number of key areas, including:
Underlying Legal Basis for Use of Contractual Framework
Core Party Obligations
Data Subject and Third Party Rights
Response to Government Requests
Relevance to Cross-Border Enforcement
Permissibility of Modifications
This Report also includes a number of Annexes that seek to set summaries of particularly important provisions side-by-side, organized by the type of provision or the specific party it binds.
Our analysis has determined that while the international space for cross-border transfers has begun to converge on some core concepts (such as classifying parties as "controllers" and "processors" as well as "importers" and "exporters") and on some key obligations for parties (such as requiring a certain degree of transparency regarding each transfer and imposing basic security requirements), there remain significant areas where data transfer contracts diverge. As a baseline, the most critical element of any model contractual framework is how it interacts with the underlying legal obligations imposed on the parties it binds. Here, the EU SCCs and their relationship to the GDPR by necessity have a different structure than either the Ibero-American MTA or the ASEAN MCCs, which are designed to interact with multiple jurisdictions governed by different data protection laws (or none at all). Additional issues include whether and how contracts should acknowledge third-party rights, whether contracts should treat different types of processing or personal data differently, the parties' responsibilities in the event of government requests for data, and whether specific concepts like the use of automated decision-making or the processing of children's information should be addressed.
The African Union's Data Policy Framework
On July 28, 2022, the African Union (AU) released its Data Policy Framework (Framework) following extensive multi-stakeholder engagements. The Framework aims to provide a multi-year blueprint for how the AU will accomplish its goals for Africa's digital economy. It also sets forth the AU's vision, scope, and priorities for Africa's data ecosystem, the regulatory policies underpinning the digital economy, and the creation of the African Digital Single Market (DSM). Broadly, the Framework provides data governance guidance for Africa's data market by helping Member States navigate complex regulatory issues. The goal of the Framework is to bolster intra-African digital trade, entrepreneurship, and digital innovation while safeguarding against the risks and harms of the digital economy.
The Framework builds on the work of the Digital Transformation Strategy (DTS), which the AU adopted in 2020 to spur digital development across the continent, as well as other prior initiatives such as the African Continental Free Trade Area (AfCFTA) and the Policy and Regulatory Initiative for Digital Africa (PRIDA). African leaders created the Framework to respond to identified needs, opportunities, and risks of the digital economy, including the need to re-think policy around data and its relationship to larger social goals and institutions. In particular, the Framework recognizes that while data may create value, it also brings harms that regulators must address. The AU acknowledges the vast ongoing transformations to regional and global data policies and the need for African leadership to promote the harmonization of legal frameworks across the continent.
Notably, the Framework contains many features that align with international approaches to data protection such as the need to root data policy in the rule of law, protect fundamental rights, and strike an appropriate balance between innovation and privacy. However, it also conveys unique and nuanced views on key emerging issues, including:
Separating data sovereignty (a principle it generally supports) from data localization under the guise of data security, and taking a stance against using security policies to undermine human rights;
Dissuading Member States from adopting broad data localization requirements, instead focusing localization on certain categories of data to ensure the broad flow of data in line with policies such as the African Continental Free Trade Area Agreement;
Highlighting areas where Member States can take novel approaches that fit the context of Africa, including prioritizing collective privacy rights and the need for data stewardships and other forms of data trusts; and
Contextualizing the Framework within the larger process of creating a digital single market to assert Africa’s voice in ongoing global policy conversations and indicate that Member States will no longer be “standard takers” of data protection policy but rather “standard makers” in the future.
This blog post provides a descriptive analysis of the Data Policy Framework to draw attention to key data protection proposals under it. It does not identify challenges of the Framework or delve into specific policy priorities. Rather, we summarize the scope of the Framework and offer a reference guide to understand its contents.
The Framework consists of six sections, each detailing a core feature of how regulators should balance policy harmonization across Member States with respect to digital policy. These sections include: (i) guiding principles, (ii) definitions and categorization of data, (iii) value enablers, (iv) data governance, (v) international and regional governance, and (vi) an implementation framework.
1. Guiding Principles: From Sovereignty, to Fairness and Inclusiveness
The Framework sets forth high-level principles to guide data policy creation and harmonization across Africa. These principles primarily apply to African Union Member States but extend to other stakeholders such as public-private partnership bodies, civil society organizations, regional cooperation fora, and other entities engaging in the digital economy. The principles aim to ensure that the creation and adoption of digital rules align with international law and standards and remain balanced. These principles include:
Cooperation – Stakeholders (including private, public, and civil society bodies) should cooperate to foster exchange and interoperability of data systems within the African Digital Single Market, as well as promote coherence and harmonization of policies;
Integration – Policies should remove legal barriers to intra-African data flows, subject to necessary data protection, human rights, and security considerations;
Fairness and Inclusiveness – Benefits and opportunities of the digital economy should be equitable and inclusive, redressing national and global inequalities affecting those marginalized by technological developments;
Trust, Safety, and Accountability – Policies should promote a trustworthy data ecosystem that is safe, secure, accountable, and ethical for stakeholders;
Sovereignty – Stakeholders shall cooperate to enable Member States to self-manage, govern, and utilize data;
Comprehensive and Forward-Looking – Policies should strive to create an environment that promotes investment and innovation through the development of infrastructure, human capacity, and harmonized regulations and legislation; and
Integrity and Justice – Member States must ensure that the collection, processing, and use of data is just, lawful, and not used to discriminate or infringe on individual rights.
2. (Not) Defining and Categorizing Data
The Framework does not define data, stating that the variety of uses and types of data pose practical constraints to formulating a comprehensive definition. However, the drafters highlight that a better understanding of how data functions within the larger technological and digital ecosystem will help support policymaking.
The Framework proposes that Member States—and their data protection authorities (DPAs) in particular—categorize data to clarify and differentiate between different types of data, including personal and non-personal information. This clarification will aid companies in aligning their collection, storage, and use of data with data protection regulations. Furthermore, specifying the types of data, especially personal data, could help DPAs more efficiently protect and uphold data subject rights.
3. Driving Value in the Digital Economy
Recognizing the power of data to transform economies and facilitate development, the Framework recommends that Member States create an environment that captures the value of the data economy while also preventing harms. In particular, the Framework encourages the creation of dependable regulatory systems to facilitate trust and enhance human, institutional, and technical capabilities to create value from data. The Framework highlights five areas of focus: (i) foundational infrastructure and trustworthy systems, (ii) institutional arrangements for complex regulation, (iii) the need to rebalance the legal system, (iv) the creation of public value, and (v) coherent sectoral policies.
Enhanced research and development (R&D) plays a prominent role in the Framework, which encourages further investment in fields such as big data analytics, artificial intelligence, blockchain, and quantum computing. For each of these areas, the Framework stresses the need to place the digital economy within the wider complexities of the digital ecosystem, giving special attention to the role of the state in processing data.
The AU recognizes that digital infrastructure is the backbone of the data-driven economy and stresses the need for Member States to coordinate on investment and development. The Framework proposes policy recommendations for deploying broadband, enabling information communication technology (ICT) architectures, and creating trustworthy digital ID systems through public-private partnerships to spur entrepreneurship and public data reuse. Member States are encouraged to build stakeholder engagement at all levels to ensure organizations use data to further public interests. Specific foundational infrastructures identified include:
Cloud computing, including cloud services and cloud-based services, to spur system efficiency and reduce capital expenditure on IT equipment, internal servers, storage resources, and software;
Big data services for both the public and private sector to improve decision-making, forecasting, and consumer segmentation; and
“Platformization” for new business models and e-commerce services to facilitate trade across geographical borders.
Additionally, the Framework identifies the importance of creating trustworthy data systems to underpin the larger political, economic, and societal environment. AU policymakers stress that a key aspect of this system includes safeguarding basic human rights through the rule of law. The continental challenge is to ensure Member States have all the necessary tools and legal requirements to adapt to rapidly evolving technological challenges. To this end, the Framework proposes a comprehensive benchmarking policy centered around five interrelated considerations:
Cybersecurity – The Framework recognizes that while regulatory tools to strengthen cybersecurity can mitigate vulnerability threats, they can also, if misused, undermine fundamental rights of equity, dignity, and security. Policies should therefore be proportional and limit infringement on online human rights;
Cybercrime – The Framework stresses that governments must tailor policies to implement regional and global conventions on cybercrime;
Data Protection – According to the AU, data protection forms the backbone of any data framework as it helps ensure that organizations do not harm individuals when processing their personal data. Data protection policies must fit particular contexts and be adaptive to user interaction and capability online. The Framework cautions against relying too heavily on consent as a regulatory mechanism and promotes other concepts like data stewardships;
Data Justice – The Framework states that in order to expand the safeguarding of rights from the individual to the collective level, Member States should promote data justice in their policies. Data justice extends to social and economic rights to redress inequalities resulting from historical, structural, and discriminatory injustice that have been reproduced through digital technologies; and
Data Ethics – Codes of ethics developed by all stakeholder groups can guide the use and design of systems that run on data. The Framework recommends leveraging such codes to mitigate harm in particular technological contexts. The creation of such codes must be as inclusive as possible.
The Framework recommends Member States establish policies that bolster these five considerations to foster trust and safeguard basic human rights through rule of law at the regional and continental levels.
Highlighting that data economies require future-facing, agile regulatory systems, the Framework specifies areas where regulators can work proactively and recommends that Member States prioritize building regulatory capacity and avoid regulatory silos.
Of note, a key recommendation is for Member States to enable data regulators by creating conditions that build institutional capacity and capabilities to optimize the use of data for enforcement across sectors. Regulators with broad authority and competence over data may also help address issues arising under competition and consumer protection law. Member States should also create a transparency portal to monitor data breaches and consumer data flows.
The Framework proposes concrete recommendations for each of these considerations and recognizes that data regulation cannot happen unless authorities have capacity both internally within Member States and externally on the regional and continental level in collaboration with other regulators. As a result, the AU places special emphasis on regional harmonization mechanisms.
The Framework also identifies key challenges for regulatory coordination including incoherent sectoral policies and incompatible regulatory goals. After outlining these areas and analyzing where regulatory tension could arise, it provides recommendations for overcoming challenges in competition, trade, data flows, and e-commerce policy. Notably, it specifies the privacy and data protection considerations in each of these policies and charts emerging variations in regulatory approaches to data transfers.
Member States are encouraged to harmonize sectoral policies and coordinate in regional fora on these regulatory issues. Complementary policy design choices can help regulators foster intra-African digital trade and data-enabled entrepreneurship while weighing trade-offs of data governance. The Framework recommends coordination in the following areas:
Competition policy instruments that address anti-competitive behaviors;
Data portability regulations, provisions, and other enabling activities for open data;
Collaboration with international bodies like the OECD and the WTO;
Regional data infrastructures and data systems including human, technical, and institutional capacity; and
International harmonization of AI and big data technical standards, ethics, governance, and best practices.
4. Data Governance
The Framework sets forth a multi-prong strategy for data governance on the continent to enable data access, use, combination, and repurposing while limiting the harms and risks of processing. The strategy prioritizes using data for its greatest economic and social value but recognizes that restricting data flows will be necessary in some circumstances to ensure societal protection.
The Framework recognizes that narrowly defining data governance to encompass only data protection is a risk in most African countries. Rather, it acknowledges that data governance interacts with other disciplines, including competition, cybersecurity, electronic transactions, and intellectual property. For this reason, the Framework proposes a multi-prong strategy for understanding and tackling these related policy areas. The prongs of this strategy include: data control, processing and protection, access and interoperability, security, cross-border flows, data demand, and special categories of data.
Data Control
The Framework recognizes the importance of facilitating the control of data by individuals, firms, and governments, and the need for policy to clarify parties' obligations and responsibilities so as to strike an appropriate balance governing when entities may control data. The AU stresses that Member State policies should, at a minimum, design data subject rights to provide control over personal data, but the AU also points toward emerging ownership models such as data trusts and stewardships as alternatives to the individual-rights-focused model.
On the national level, the Framework recognizes data sovereignty and localization as two mechanisms through which states currently exert control over data, but cautions against pursuing both without specifically tailored reasons.
Data Sovereignty – The Framework recognizes that AU Member States have a right to formulate digital rules in line with their national interests and that such states should prioritize politically neutral partnerships to avoid foreign interference in domestic affairs. Exertion of domestic sovereignty should be based on multilateral agreements with avenues of recourse for cases of infringement.
Data Localization – The Framework states that localization must be evaluated against potential harms to human rights and generally cautions against adopting such measures. Localization requirements, if adopted, should be as specific as possible and involve multi-stakeholder engagement to avoid over-restrictive policies.
Data Processing and Protection
The AU stresses the need to construct robust data protection mechanisms for the processing of personal data, including the promulgation of data subject rights. It encourages such mechanisms as a way to realize privacy, foster trust in digital technologies, and create a sound digital economy. The Framework also recognizes the need to ensure that constraints on the processing of personal information do not impede data flows, and for Member States to harmonize policies across regions.
Additionally, the Framework urges Member States to implement a privacy-by-design approach that incentivizes organizations to incorporate privacy into new technology by default via design and development processes. The Framework identifies de-identification (including anonymization and pseudonymization) in its outline, but also acknowledges that such techniques must be accompanied by strong legal rights for data subjects and regulatory capacity to enforce data protection. Specific recommendations in the Framework include:
Creating independent, funded, and effective data protection authorities (DPAs) that are accountable and cover all relevant data processing entities. DPAs should drive multi-stakeholder partnerships across the continent;
Requiring data protection impact assessments (DPIAs) for the deployment of new technologies; and
Promulgating codes of conduct to promote sector-specific needs and ensure best practices in mitigating risks and harms associated with processing.
Data Access and Interoperability (Open Data)
The Framework stresses the need for Member States and regional institutions to take proactive measures to spur data access through open government data, as well as broader data portability to facilitate access and consumer benefits. In particular, the Framework recommends creating open data standards for public data, strengthening data portability rights and policies, promoting data partnerships, and facilitating data categorization. It also urges Member States to establish open data policies, DPAs to issue codes of conduct, and multi-sectoral bodies to implement open data initiatives. Policymakers should make use of regulatory sandboxes and other data hubs to promote data use and management.
Data Security
Throughout the Framework, AU policymakers acknowledge the importance of data security for preserving privacy, confidentiality, and integrity, as well as building trust in the larger digital ecosystem. Data security refers not only to the physical security of hardware systems but also the logical security of networks, applications, and software and the norms and regulations underpinning such systems. The Framework points to the following areas of focus:
Data Security and Localization – The Framework highlights the importance of not allowing data security to serve as a barrier to the free flow of data or a justification for data protectionism. Data security may positively enhance integrity and trust but also undermine other values if used for negative ends;
Transparency Challenges – The Framework also specifies the difficulties of upholding transparency via data security policies. To promote transparency, policymakers should increase efforts to coordinate on incident and vulnerability reporting, adhere to international cybersecurity standards, and create mature markets for cybersecurity and data processing. DPAs and policymakers should especially focus on building capacity and specifying data processing roles for security protection; and
Regional Coordination – The Framework recommends Member States establish a joint sanction regime for cyber-attacks across Africa to promote interoperability and coordination of cybersecurity regimes.
Cross-Border Data Flows
The Framework stresses the importance of aligning national personal data regulations with other African jurisdictions’ regulations to foster trust and data exchange. The Framework acknowledges emerging tensions in cross-border data flows, like the relationship between data sovereignty and cross-border data flows, as well as the regulation of data flows in environments that lack comprehensive data protection laws. Specific recommendations to Member States to facilitate cross-border data flows include:
Providing minimum standards for cross-border data flows;
Enshrining reciprocity as a central principle for permitting such flows;
Prioritizing data specificity to avoid unintended restrictions;
Incorporating law enforcement considerations into policymaking; and
Building enforcement capacity and regional coordination.
Data Demand, Sectoral Governance, and Special Categories of Data
The Framework acknowledges the need to bolster data demand across sectors and avoid creating data silos that render data less usable. To promote access to data, the Framework recommends that Member States clearly identify special categories of data and employ codes of conduct for specific sectors to help organizations comply with regulatory expectations. Special data regimes should be integrated into national data regimes to avoid regulatory distortion. To address potential risks of harm to specific groups, the Framework stresses the need to identify and include different data communities in the policymaking process when crafting special categories of data. Although the AU recognizes that special treatment of data based on its particular characteristics is necessary, the Framework stresses that such policies should be in harmony with general data governance principles.
5. International and Regional Governance
The AU recognizes the importance of promoting cooperation between countries to increase dialogue and enforcement coordination. Over the next few years, the AU will develop a consultation framework for interstate collaboration, strengthen links with other regions such as the EU and APAC to coordinate Africa's common position on data in international negotiations, and support the creation of a continental data infrastructure to enable data-driven technologies.
The Framework acknowledges the importance of aligning Africa’s technical standards with internationally-recognized best practices but also states that such standards may not be sufficient for the continent’s unique needs. Rather, regional engagement on standards should take priority. One area outlined in the document where African policymakers can exert leadership is open data arrangements. The Framework specifies initiatives such as the African Development Bank’s central open-data portal, institutional data portals, and volunteer-driven community data sharing initiatives as unique examples of facilitating data sharing and creating a collaborative digital ecosystem.
Continental Instruments and Regional Institutions
The AU stresses the need to develop and bolster continental instruments and institutions to accomplish core goals, such as facilitating data flows while ensuring data protection and safety online.
The Framework calls for the creation of a regional cross-border data flow mechanism, the ratification of the Malabo Convention, and the implementation of the African Continental Free Trade Agreement. The Framework also gives priority to the Regional Economic Communities (RECs) and various human rights courts in Africa to coordinate governance, and identifies other bodies, like the Network of African Data Protection Authorities, ICT regulatory associations, and the African Competition Forum, that could likewise play a role in fostering cross-border transfer rules.
6. Implementation Framework and Stakeholder Mapping
Finally, the Framework proposes an implementation framework divided into five phases and identifies important stakeholders for each phase of implementation.
Phase 1: Member States would adopt the Framework and work with the AU to develop mechanisms to monitor and centralize regional engagement;
Phase 2: To establish buy-in, institutions would ensure alignment with continental instruments, engage continental and regional structures like the RECs, and assess international frameworks;
Phase 3: Institutions would work towards developing broadband infrastructure and regulatory frameworks before engaging with stakeholders from all sectors;
Phase 4: Institutions would evaluate domestic policy instruments; and
Phase 5: The AU would prioritize intra-African collaboration with the RECs and other continental institutions.
Conclusion
As the most ambitious policy document on data regulation in Africa to date, the Framework represents the AU’s desire to form a lasting roadmap for how African nations can safely and responsibly leverage the power of data through the creation of an African Digital Single Market. The Framework attempts to instill broad principles of transparency and accountability of institutions and actors into the fabric of national and regional approaches to data regulation. It prioritizes the inclusion of multiple stakeholders from both the public and private sectors, equity among citizens, and fair competition amongst market players. It also focuses on regional processes, mechanisms, and instruments that stakeholders can leverage to develop a cohesive data policy framework across the continent.
Particularly for data protection and privacy on the continent, the Framework is significant because it centers part of its proposed solutions on data protection, which it recognizes as the "backbone" of any data framework, while at the same time advancing ideas fit for the African context, such as the prioritization of collective privacy rights. Data justice and data ethics are further pillars proposed for advancing digital policies, recognizing that economic growth and value from digital markets must not come at the expense of the rights of people and communities.
Concerns about a coherent cross-border data transfer policy for the continent add to the Framework's focus on data protection and privacy. Another significant contribution is its separation of the concept of "data sovereignty" from that of data localization pursued under the guise of data security, and its stance against using security policies to undermine human rights. The Framework recognizes value in data-sovereignty-inspired policies, but understands sovereignty as a more complex concept, distinct from mere data localization mandates, which may in fact be more harmful to the rights of individuals and communities.
At its heart, the Framework calls on Member States in Africa to collaborate, through regional institutions such as the Network of African Data Protection Authorities and relevant stakeholders, towards regional and continental harmonization of digital policies. Like the African Continental Free Trade Area Agreement and other initiatives, the Framework is designed to spur the creation of a common digital market. The AU stresses that collaboration between national and regional stakeholders is necessary for African countries to become more competitive in global policy fora. As such, the Framework attempts to set the foundation for African policymakers to engage with stakeholders on a broad set of data regulation issues and prioritizes intra-continental collaboration through regional institutions.
FPF Releases Infographic to Explore Implications of Open Banking Data Flows and Security for Individuals
Today, the Future of Privacy Forum (FPF), a global non-profit focused on privacy and data protection, is pleased to release an infographic, “Open Banking And The Customer Experience,” visualizing the US open banking ecosystem. FPF’s open banking infographic is supported by over a year of meetings and outreach with leaders in banking, credit management, financial data aggregators, and solution providers to comprehensively understand the developing industry of open banking.
Open banking involves customer-permissioned data transfers between organizations holding data and entities that provide financial products and services (e.g., wealth management, payments, and loan access). Open banking is organized around four main steps: (i) signing up for and initiating a service; (ii) authenticating identity; (iii) authorizing data sharing; and (iv) provision of the product or service.
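The following minimal Python sketch maps those four steps onto code. All names here (AccessToken, the scope strings, the function signatures) are hypothetical placeholders rather than any real provider's API:

```python
# Hypothetical sketch of the four-step open banking flow.
from dataclasses import dataclass

@dataclass
class AccessToken:
    customer_id: str
    scopes: list[str]  # e.g., ["read:balances", "read:transactions"]

def sign_up(customer_id: str) -> str:
    # (i) The customer signs up for and initiates a service (e.g., budgeting).
    return customer_id

def authenticate(customer_id: str, credential: str) -> bool:
    # (ii) The customer proves their identity to the data-holding bank,
    # ideally via a redirect/OAuth-style login rather than handing raw
    # credentials to the service provider.
    return bool(credential)

def authorize(customer_id: str, scopes: list[str]) -> AccessToken:
    # (iii) The customer authorizes sharing of specific data categories;
    # the bank issues a scoped (and ideally revocable) token.
    return AccessToken(customer_id, scopes)

def provide_service(token: AccessToken) -> str:
    # (iv) The provider delivers the product using only the permitted data.
    return f"service delivered using scopes: {token.scopes}"

# Example flow:
cid = sign_up("customer-123")
if authenticate(cid, "bank-login"):
    token = authorize(cid, ["read:balances"])
    print(provide_service(token))
```

Keeping authentication, authorization, and provision as distinct steps matters: it lets the customer grant narrow, revocable access instead of sharing full account credentials.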
Open banking can be a catalyst for greater competition by enabling new products and services that depend on the sharing of personal data. While the sharing of personal data is integral to realizing these benefits, it is not without privacy and security risks, including the risk of data breaches and unauthorized transactions. The US open banking ecosystem can also be confusing for customers wishing to use these products and services as well as the organizations that provide them, including in areas related to:
Parties’ roles and responsibilities: Open banking involves multiple parties, each with overlapping roles and responsibilities, creating uncertainties and friction for users. Coordination by appropriate regulators could prevent inconsistent oversight mechanisms and rules.
Notice and consent: Current rules are unclear about which activities in open banking require consent, and from whom. This may lead to inconsistency in data collection and less transparency about uses of personal data.
Secondary data uses: While companies may want to use customer data for purposes other than providing the requested product or service, misuse of financial data can cause harm, including financial loss, loss of account access, and disparate impact. To give customers greater understanding and control over the secondary use of their personal data, companies could be required to segregate secondary use consents from the primary use opt-in.
Data retention: Individuals seeking to have their data deleted by an organization may not understand why they are unable to do so. Greater transparency and clarity are needed from organizations in the open banking ecosystem that are subject to legal requirements to retain user information.
Customer service and terminations: Without clear roles and responsibilities, people engaged in open banking may not know whom to consult to fix issues they encounter. Rules can help clarify how parties should communicate changes with one another, or when to cease use of personal data.
The Consumer Financial Protection Bureau (CFPB) sought comments this year regarding data portability for financial products and services, a prerequisite to issuing a proposed rule later in 2023 to implement Section 1033 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. Subject to rules created by the CFPB, Section 1033 requires covered entities to make certain information related to a person's requested products and services available to that person upon request.
In response to the CFPB request regarding data portability for financial products and services, FPF submitted comments in January 2023, which address the main pain points raised in this infographic in greater detail. FPF has also released a paper, Data Portability in Open Banking: Privacy and Other Cross-Cutting Issues, detailing how different jurisdictions’ laws impacted open banking activities and intersected with data protection law, including issues surrounding consent, security, and data subject portability rights. The paper provided grounds for discussion at an event FPF organized in 2022 with the Organization for Economic Co-Operation and Development (OECD). In February 2023, the OECD issued a paper of the same name about the event.
If you wish to speak with FPF about this infographic or would like to learn more about the organization’s Open Banking Working Group, please reach out to Zoe Strickland ([email protected]) and Daniel Berrick ([email protected]). For media inquiries, please reach out to [email protected].
This infographic would not have been possible without the work of Hunter Dorwart, former FPF Policy Counsel, who devoted significant hours to this project during his time at FPF.