Red Lines under the EU AI Act: Unpacking Social Scoring as a Prohibited AI Practice
Blog 3 | Red Lines under the EU AI Act Series
This blog is the third of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.
The prohibition of AI-enabled social scoring is among the red lines established by the EU AI Act under Article 5. It targets practices that assess or classify individuals or groups based on their social behavior or personal traits and that lead to unfair treatment, particularly when the information is drawn from multiple unrelated social contexts or when the resulting treatment is disproportionate to the behavior assessed. Notably, the prohibition has a broad scope of application across public and private contexts and is not limited to a specific sector or field.
The practice of “social scoring” is not uniquely regulated by the AI Act, as it engages well-established notions under the General Data Protection Regulation (GDPR): profiling, purpose limitation and automated decision-making. Therefore, those practices in the same realm that do not meet the high threshold of the social scoring prohibition under the AI Act must in any case comply with the detailed GDPR provisions relevant to them.
As this analysis will show, the “social scoring” prohibition under the AI Act also engages notions of “personalization” in AI, which may be particularly relevant to the current state of AI development, as prior FPF analysis has shown.
This blog examines the definition and contextual scope of the prohibition of social scoring under Article 5(1)(c) AI Act (Section 1), including its conditions and detailed scenarios (Section 2), as well as the practices that fall outside the scope of the prohibition (Section 3). It then takes a look at how this provision interacts with other areas of EU law, in particular data protection, non-discrimination, and sector-specific frameworks (Section 4). The main takeaways (Section 5) highlight that:
- The AI Act prohibits specific practices of AI-enabled social scoring that lead to detrimental or unfavorable treatment in unrelated social contexts or to treatment that is unjustified or disproportionate to the behavior assessed.
- The prohibition applies across both public and private sectors.
- Lawful evaluation and classification practices carried out for legitimate purposes using relevant data and proportionate safeguards, such as creditworthiness assessments, insurance risk scoring, or fraud detection systems, remain outside the scope of the prohibition, subject to compliance with relevant provisions of the AI Act and other applicable legislation.
1. Social scoring as a “contextual” prohibited AI practice
EU legislators made the policy choice to expressly ban practices of AI systems that enable social scoring because they considered them incompatible with fundamental rights and European Union values. This is reflected in Recital 31 of the AI Act, which states that such practices “may lead to discriminatory outcomes and the exclusion of certain groups” and can violate individuals’ dignity, privacy, and right to non-discrimination. The European Commission characterized AI systems that allow “social scoring” by governments or companies as a “clear threat to people’s fundamental rights”, noting that these are banned outright. The Guidelines the Commission issued on prohibited practices under the AI Act reiterate this framing and clarify the cumulative elements of the prohibition with practical illustrations.
This rationale was backed by EU data protection authorities (DPAs). In June 2021, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) welcomed the intention to ban social scoring in their Joint Opinion 5/2021 on the AI Act proposal, warning that “the use of AI for ‘social scoring’… can lead to discrimination and is against the EU fundamental values”. Since then, the EDPB and national DPAs have continued to develop guidelines around profiling and automated decision-making (ADM), including guidance on legitimate interests (2024) and national tools such as the Dutch DPA’s guidance (AP) on “meaningful human intervention”, which could be relevant when assessing whether an AI-enabled score could fall under the AI Act provisions or Article 22 GDPR, which provides for the right not to be subject to solely automated decision-making.
According to the Commission Guidelines, the AI Act prohibits social scoring practices if the following cumulative conditions are met:
- The AI system is placed on the market, put into service, or used.
- The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics.
- The social score results in (i) detrimental or unfavorable treatment in social contexts unrelated to those where the data was originally collected and/or (ii) treatment that is unjustified or disproportionate to the social behavior or its gravity.
All three conditions must be met simultaneously for Article 5(1)(c) to apply. The prohibition applies to both providers and deployers of AI systems. Of note, the prohibition has been applicable since 2 February 2025, while the supervisory and enforcement provisions related to it have been in force since 2 August 2025. However, no enforcement or regulatory action has been announced so far regarding the social scoring prohibition.
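The cumulative logic of these conditions can be sketched as a simple check. This is a purely illustrative toy model: the class and field names are our own simplification for clarity, not terms defined by the AI Act or the Commission Guidelines, and the reduction of each legal condition to a boolean glosses over the case-by-case assessment the Guidelines require.

```python
from dataclasses import dataclass

@dataclass
class ScoringPractice:
    """Toy representation of an AI scoring practice (illustrative only)."""
    placed_on_market_or_used: bool      # condition 1: placed on the market, put into service, or used
    evaluates_over_time: bool           # condition 2: evaluation/classification over a certain period of time
    unrelated_context_treatment: bool   # condition 3(i): unfavorable treatment in an unrelated social context
    disproportionate_treatment: bool    # condition 3(ii): treatment unjustified/disproportionate to the behavior

def falls_under_art_5_1_c(p: ScoringPractice) -> bool:
    """All three conditions must hold; the third is satisfied by (i) and/or (ii)."""
    harmful_treatment = p.unrelated_context_treatment or p.disproportionate_treatment
    return p.placed_on_market_or_used and p.evaluates_over_time and harmful_treatment

# A credit-scoring system using only relevant financial data with proportionate
# outcomes fails condition 3, so the prohibition does not apply.
credit_scoring = ScoringPractice(True, True, False, False)
print(falls_under_art_5_1_c(credit_scoring))  # False
```

The point the sketch captures is that the absence of any single condition takes a practice outside Article 5(1)(c), which is why the Guidelines stress that many scoring practices, while regulated elsewhere, are not prohibited.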
The prohibition does not extend to all AI-enabled scoring practices. The Guidelines clarify that it targets only unacceptable practices that result in unfair treatment, social control or surveillance. At the same time, the Guidelines note that the prohibition is not meant to affect the “lawful practices that evaluate people for specific purposes that are legitimate and in compliance” with the EU and national law, particularly in the cases where the legislation provides for the types of data that are relevant for the specific evaluation purposes and ensures that any unfavorable or detrimental treatment that results from the practice is justified and proportionate.
In this context, the Guidelines clarify that sector-specific scoring systems, such as creditworthiness assessments, insurance risk scoring or fraud detection systems, are not prohibited in cases where they are carried out for clearly defined purposes and in accordance with EU or national legislation.
For example, the credit scoring systems used by financial institutions to assess a borrower’s creditworthiness based on relevant financial data do not fall under the provision of Article 5(1)(c) of the AI Act, provided that they do not result in unjustified or disproportionate treatment or rely on unrelated social context data. Instead, such systems are typically classified as high-risk AI systems under Article 6 and Annex III of the AI Act and must comply with the applicable requirements, including risk management, transparency, human oversight and data governance obligations.
2. Unpacking how the social scoring prohibition is triggered under the AI Act
2.1 The AI system is intended to evaluate or classify individuals or groups over a certain period of time based on their social behavior or inferred personal or personality characteristics
Article 5(1)(c) AI Act explicitly prohibits “the placing on the market, putting into service or use of an AI system for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behavior or known, inferred or predicted personal or personality characteristics”. The Guidelines clarify that this condition is fulfilled where an AI system assigns individuals or groups scores based on their social behavior or personal or personality characteristics. These scores could take different forms, such as numerical values, rankings or labels. This prohibition applies broadly across public and private sectors and concerns only natural persons or groups of natural persons, thus excluding legal entities.
The Guidelines differentiate between “evaluation” and “classification” as two distinct but related concepts within the scope of Article 5(1)(c) AI Act. “Evaluation” refers to an assessment or judgment about a person or group of persons, and “classification” has a broader scope and includes categorizing individuals or groups based on certain characteristics or behavioral patterns. “Classification” does not necessarily involve an explicit judgement or assessment but may still fall within the scope of the prohibition in cases where individuals are assigned scores, rankings or labels based on their behavior or personal or personality characteristics.
In addition, the Guidelines note that the term “evaluation” is closely linked to “profiling” as defined by EU data protection law, namely in Article 4(4) GDPR, and as referred to in Article 22 GDPR and Article 11 Law Enforcement Directive. Profiling refers to the processing of personal data to evaluate personal aspects of an individual, in particular to analyze or predict aspects such as their ability to perform tasks, interests, likely behavior, or future actions.
It is worth noting that the Guidelines opted for the wording of the Article 29 Working Party Guidelines on Automated Decision-Making and Profiling, adopted in 2017, when referring to profiling. This reflects a broader, functional understanding of profiling that encompasses AI systems assigning behavioral scores or predictive assessments, and clarifies that the scope of the prohibition is not narrowly limited to specific technical forms of automated processing but extends to AI-enabled evaluation and categorization of persons based on their characteristics or behavior.
The Guidelines note that although Article 5(1)(c) AI Act does not explicitly reference profiling under the GDPR as defined in Article 4(4), the act of profiling may still fall under the prohibition when AI systems process personal data to assess individuals.
To illustrate the link between profiling and social scoring, the Guidelines refer to the SCHUFA I judgment (Case C-634/21), in which the CJEU examined a creditworthiness scoring system used in Germany. In that case, the score generated by the computer programme consisted of a probability value estimating an individual’s ability to meet payment commitments. The CJEU found that this score was based on certain personal characteristics and involved establishing a prognosis concerning the likelihood of future behavior, such as the repayment of a loan. The scoring process relied on assigning individuals to groups of persons with comparable characteristics and using the behavior of those groups to predict the individuals’ future conduct.
The CJEU held that this activity constitutes “profiling” within the meaning of Article 4(4) GDPR, and that the automated establishment of that probability value can constitute ADM under Article 22(1) GDPR where a third party draws strongly on it to decide whether to enter into, implement, or terminate a contractual relationship. The Guidelines clarify that such scoring may also constitute an “evaluation” of individuals based on their personal characteristics within the meaning of Article 5(1)(c) AI Act and will be prohibited if carried out through AI systems, provided that all the other conditions are fulfilled.
Additionally, even if not referenced in the Guidelines, the CJEU judgment in CK v Dun & Bradstreet Austria (Case C-203/22) further clarified the legal framework governing profiling and scoring systems. In that case, the CJEU held that the right of access under Article 15(1)(h) GDPR requires controllers to provide data subjects with meaningful information about the logic involved in automated decision-making, including the procedures and principles used to generate a score.
2.1.1 The prohibition requires evaluations to rely on data gathered over a period of time, ensuring that one-off assessments cannot circumvent it
The prohibition in Article 5(1)(c) AI Act applies only where the evaluation or classification is based on data collected over “a certain period of time”. The Guidelines clarify that this temporal requirement indicates that the assessment should not be limited to a one-time rating or grading based solely on data from a single, isolated context. This condition must be assessed in light of all the circumstances of the case to avoid the circumvention of the scope of the prohibition.
To illustrate this, the Guidelines refer to a scenario involving a migration or asylum authority that deploys a partly automated surveillance system in refugee camps using cameras and motion sensors. If such a system analyzes behavioral data collected over a period of time and evaluates individuals to determine, for example, if they may attempt to abscond, this would mean that the temporal condition is met and may fall within the scope of the prohibition, provided that all the other conditions are also met.
2.1.2 The provision prohibits AI evaluations based on social behavior or known, inferred, or predicted personal or personality characteristics
The evaluation or classification of individuals based on AI-enabled processing in relation to either (i) their social behavior or (ii) their known, inferred or predicted personal or personality characteristics, or both, is prohibited under the AI Act provision. This data may be directly provided by the individuals, indirectly collected through surveillance, obtained from third parties, or inferred from other information.
The Guidelines explain that “social behavior” is a broad concept that encompasses a wide range of actions, habits, and interactions within society. This may include behavior in private and social contexts, such as participation in cultural or voluntary activities, as well as behavior in business or institutional contexts, including payment of debts, use of services and interactions with public authorities or private entities. This type of data is often collected from multiple sources and combined, sometimes involving extensive monitoring or tracking of individuals.
The prohibition also applies in cases where “personal or personality characteristics” may involve specific social behavioral aspects. The Guidelines note that personal characteristics may include a wide range of information relating to an individual, such as race, ethnicity, income, profession, other legal status, location, level of debt, and so on. Personality characteristics should, in principle, be interpreted as personal characteristics, but may also involve the creation of specific profiles of individuals as “personalities”. These characteristics may indicate a judgment, made by the individuals themselves, observed by others, or generated by AI systems.
The Guidelines distinguish between three types of characteristics used in scoring systems: (i) “known characteristics” (verifiable inputs provided to the AI systems), (ii) “inferred characteristics” (conclusions drawn from existing data, usually by AI systems), and (iii) “predicted characteristics” (estimates based on patterns, often with some degree of inaccuracy). These distinctions are relevant because inferred and predicted characteristics may be less accurate and more opaque, raising concerns about fairness and transparency in AI-driven scoring systems.
2.2. The social score must lead to detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment to the gravity of the social behavior
2.2.1. Causal link between the social score and the treatment
For the prohibition to apply, the social scoring created by or with the assistance of an AI system must lead to detrimental or unfavorable treatment of the evaluated person or group of persons. There must be a causal link between the score and the resulting treatment, such that the treatment is the consequence of the score. This causal link may also exist where harmful consequences have not yet materialized, provided that the AI system is capable of producing or intended to produce such outcomes.
The Guidelines further note that the AI-enabled score does not need to be the sole cause of the detrimental or unfavorable treatment. The prohibition also covers situations where AI-enabled scoring is combined with human assessment, as long as the AI output plays a sufficiently significant role in the decision. The prohibition is still applicable if the score is obtained by an organization and produced by another (e.g., a public authority using a creditworthiness score from a private company).
2.2.2. Detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment
For the prohibition to apply, the social score must result, or be capable of resulting, in detrimental or unfavorable treatment of the evaluated person or group. This treatment could occur either (i) in a social context different from the one in which the data was originally generated or collected, and/or (ii) in a manner that is unjustified or disproportionate to the social behavior or its gravity.
The Guidelines emphasize that a case-by-case analysis is required to determine if at least one of these conditions is fulfilled, as many AI-enabled scoring practices may fall outside the scope of the prohibition.
The Guidelines further clarify that “unfavorable treatment” refers to situations where, as a result of the scoring, a person or a group is treated less favorably compared to others, even where no specific harm or damage is demonstrated. By contrast, “detrimental treatment” requires that the individual or group suffer harm or disadvantage as a result of the scoring. Such treatment may also be considered discriminatory under EU non-discrimination law and may include the exclusion of certain persons or groups, although discrimination is not a necessary condition for the prohibition to apply. As the Guidelines highlight, the treatment covered by Article 5(1)(c) may go beyond what EU non-discrimination law covers.
The Guidelines further detail the scenarios described under Article 5(1)(c) AI Act:
a. Detrimental or unfavorable treatment in unrelated social contexts, such as when authorities use information like nationality, internet activity, or health status from one area to evaluate people in another
The first scenario concerns situations where the detrimental or unfavorable treatment resulting from a social score occurs in social contexts unrelated to the one in which the data were originally generated or collected. The Guidelines clarify that this condition requires both that the data used for scoring originates from unrelated social contexts and that the resulting score leads to detrimental or unfavorable treatment in a different context.
This scenario typically involves AI systems processing data on individuals’ social behavior or personal characteristics that was generated or collected in contexts unrelated to the purpose of the scoring. The AI system then uses this data to score the individual(s) without an apparent connection to the purpose of the evaluation or classification, or in a way that leads to the generalized surveillance of individuals or groups.
As the Guidelines note, in most situations, these kinds of practices occur against the reasonable expectations of the individuals concerned and may also violate EU law and other applicable rules. To determine if this condition is met, a case-by-case assessment is required, evaluating the purpose of the evaluation and the context in which the data was collected and generated.
There is a clear link between this scenario and the purpose limitation principle under Article 5(1)(b) GDPR, which provides that personal data must be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. When personal data collected in one context is used to generate social scores in an unrelated context, such a practice may violate this principle, particularly where the new use of the data was not foreseeable to the individual or where the new processing lacks a sufficient legal basis or connection to the original purpose.
The Guidelines provide several examples of prohibited practices under this first scenario, highlighting the following situations where:
- AI predictive tools are being used by national tax authorities to select specific taxpayers for inspections based on social habits, internet connections, and other unrelated data; or
- AI systems are being used by social welfare agencies to estimate fraud risk based on characteristics collected or inferred from unrelated contexts (e.g., the nationality or ethnicity of the spouse, internet connection, social media activity, workplace performance); or
- AI systems are used by public labor agencies to score unemployed individuals based on personal data such as age and education, as well as variables inferred or collected from contexts and data unrelated to the purpose of the evaluation (e.g., health conditions, marital status). Such practices can be distinguished from the lawful evaluation practices discussed in Section 3.
On another note, national developments illustrate the risks associated with AI-enabled social scoring and classification systems that rely on data from unrelated contexts. In the Netherlands, the Dutch Tax and Customs Administration used the “Fraude Signaleringsvoorziening” (FSV – Fraud Signaling Provision), a system used to record and assess fraud signals based on personal data collected from multiple sources, including internal systems, other public authorities, and third parties.
The Dutch AP found that the processing of personal data in the FSV was unlawful: the processing had no legal basis, and its purpose was not sufficiently defined. These findings were explored in Case 202401528/1/A3, in which the Council of State held that the letter of the Ministry of Finance informing an individual that they were not eligible for financial compensation following their registration in the FSV was a decision subject to judicial review. It is relevant to note that this case was decided under administrative and data protection law and did not concern the application of the AI Act; nevertheless, it highlights the risks associated with systems that record and use personal data to evaluate and classify individuals in ways that may influence their treatment by public authorities.
b. Situations where the detrimental or unfavorable treatment is disproportionate to the actual behavior
To this end, the Guidelines provide a list of examples of unjustified or disproportionate treatment falling under both Article 5(1)(c)(i) and Article 5(1)(c)(ii):
- AI systems used by tax authorities to profile child benefit recipients and assign fraud risk categories based on criteria such as low income or nationality, which could lead to unjust, discriminatory, and detrimental treatment and severe financial hardship. The Guidelines reference a similar concern that arose in the Netherlands, where automated risk profiling systems used in the administration of childcare benefits contributed to disproportionate enforcement measures, and where the SyRI (Systeem Risico Indicatie) system was subsequently found unlawful.
- AI systems used by public authorities to control fraud in the student housing grant process by considering indicators such as internet connections, family status or the level of education of beneficiaries as distinguishing factors for fraud risk, which do not appear relevant or justified for the purpose of the evaluation.
- An AI system introduced by the government that is used to monitor and rate citizens’ behavior across various aspects of life, including social interactions, online activities, purchasing habits, and punctuality in paying bills. Individuals with lower scores may face restricted access to public services, financial disadvantages and certain limitations, such as in employment, housing or travel. Such systems could lead to excessive surveillance and detrimental treatment in unrelated contexts while also imposing excessive penalties for minor infractions.
The Guidelines note that the prohibition may also cover cases where preferential treatment is granted to certain individuals or groups of people (e.g., in the case of employment support programs, or (de-)prioritization for housing or resettlement).
2.3. AI-enabled social scoring is prohibited regardless of whether the system or the score are provided or used by public or private persons
Article 5(1)(c) prohibits AI-enabled social scoring practices regardless of whether the AI systems or the resulting score are provided or used by public or private persons. While scoring practices in the public sector may have particularly significant consequences due to the individuals’ dependence on public services and the imbalance of power between the authorities and the individuals, similarly harmful consequences may also happen in the private sector.
For instance, as the Guidelines exemplify, an insurance company may use an AI system to analyze spending patterns and financial data obtained from a bank, which are unrelated to the assessment of eligibility for life insurance, in order to determine whether it should refuse the insurance or impose higher premiums on individuals or groups of individuals. In another example, a private credit agency may use an AI system to determine an individual’s creditworthiness for obtaining a housing loan based on unrelated personal characteristics.
In the case of verifications conducted by the competent market surveillance authorities, the responsibility lies with the providers and deployers of the AI systems, within their respective obligations, to demonstrate that their AI systems are legitimate, transparent and only process context-related data. They must also ensure that the systems operate as intended and that any resulting detrimental or unfavorable treatment is justified and proportionate to the social behavior assessed.
The Guidelines also note that compliance with the applicable requirements, including those concerning high-risk AI, may help ensure that the evaluation and classification practices remain lawful and do not constitute prohibited social scoring.
3. What falls outside the scope of the prohibition?
The AI Act makes room for carefully tailored exceptions to the social scoring prohibition. It acknowledges several scenarios where assessing individuals via algorithms is lawful and even necessary, provided that such an assessment is conducted in a targeted and proportionate manner.
First, the prohibition applies only to the scoring of natural persons or groups of natural persons. Scoring of legal entities is, in principle, excluded in situations where the evaluation is not based on the social behavior or personal or personality characteristics of individuals. However, as the Guidelines highlight, where a score attributed to a legal entity aggregates the evaluation of natural persons and directly affects those individuals, the practice may fall within the scope of this prohibition.
Secondly, the Guidelines distinguish AI-enabled social scoring as a “probabilistic value” and prognosis from individual ratings provided by users (for example, the ratings of drivers or service providers on online platforms). These ratings fall outside the prohibition unless they are combined with other data and analyzed by an AI system to evaluate or classify individuals in a manner that fulfills the conditions of Article 5(1)(c).
Finally, Recital 31 AI Act and the Guidelines clarify that lawful evaluation practices conducted for a specific purpose in compliance with EU and national law remain outside the scope of the prohibition. Recital 31 reiterates that this prohibition “should not affect lawful evaluation practices of natural persons carried out for a specific purpose in accordance with Union or national law.”
The Guidelines provide additional examples of legitimate scoring practices that are out of scope, including:
- financial creditworthiness assessments based on relevant financial and economic data, in compliance with consumer protection and financial services law;
- fraud detection systems relying on relevant transactional behavior and metadata in the context of the service provided;
- insurance risk assessments based on telematics data reflecting driving behavior, where premium adjustments are proportionate to the risk;
- online platforms evaluating users’ behavior for safety or service quality purposes, based on relevant data for the given context, when the evaluation does not result in disproportionate detrimental treatment;
- AI-enabled targeted commercial advertising based on users’ preferences, if it complies with the applicable consumer protection, data protection, and digital services law;
- AI systems used for legitimate purposes such as medical diagnosis, fraud prevention, law enforcement, or migration procedures, where the data used is relevant and the resulting treatment is justified and proportionate.
4. Interplay with other EU laws, including consumer protection, data protection, non-discrimination, and sector-specific provisions such as credit, banking, and anti-money laundering
Providers and deployers must assess whether other EU or national laws apply to any particular AI scoring system used in their activities, particularly if more specific legislation strictly defines the type of data considered relevant and necessary for specific evaluation purposes and ensures fair and justified treatment.
AI-enabled social scoring in business-to-consumer relations may also require the application of EU consumer protection laws, such as Directive 2005/29/EC on unfair business-to-consumer commercial practices (the “UCPD”), if it misleads consumers or distorts their economic behavior. The practices which may amount to misleading consumers or distorting their behavior through AI uses or in AI contexts are further explored in Blog 2 of this series, accessible here.
Social scoring may also engage specific data protection rules as encoded in the GDPR, particularly those regarding the legal ground for processing, data protection principles, and other obligations, including the rules on solely automated individual decision-making. AI-enabled social scoring that results in discrimination based on protected characteristics (e.g., age, race, and religion) would also fall under EU non-discrimination law.
Finally, certain sector-specific rules may be applicable. For example, the Consumer Credit Directive (CCD) prohibits the use of special categories of personal data in creditworthiness assessments, as well as the obtaining of data from social networks. Additionally, guidelines from the European Banking Authority provide further specifications on the information relevant for the purpose of creditworthiness assessments, which are relevant to determining whether a practice falls under the scope of Article 5(1)(c). AI systems used for anti-money laundering and counter-terrorism financing purposes must also comply with the applicable EU legislation.
5. Closing reflections and key takeaways
The AI Act prohibits specific practices of AI-enabled social scoring, not scoring in general
Article 5(1)(c) of the AI Act does not prohibit scoring as such, but rather the placing on the market, putting into service or use of AI systems for social scoring practices that meet the conditions set out in the provision. The Guidelines repeatedly focus on the concrete use of the AI system and the effects of the resulting score, rather than on the existence of the scoring mechanisms alone. In particular, the prohibition is determined only when all conditions are cumulatively met, including the evaluation or classification over a certain period of time and the link to detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment.
Public and private uses are equally in scope, with shared accountability across the value chain
The Guidelines clarify that unacceptable AI-enabled social scoring is prohibited regardless of whether the system or score is provided or used by public or private persons. They also place practical weight on accountability: in case of verifications conducted by the competent market surveillance authorities, both providers and deployers, each within their responsibilities, must be able to demonstrate legitimacy and justification, including transparency about system functioning, data types and sources, and the use of only data related to the social context in which the score is used, as well as proportionality of any resulting detrimental or unfavorable treatment.
Out of scope does not mean exempt from scrutiny
Recital 31 of the AI Act and the Guidelines clarify that the prohibition is not intended to affect lawful evaluation practices carried out for a specific purpose in accordance with the existing legislation in place. Whether a scoring practice falls outside the scope of the prohibition depends on several criteria, as examined throughout this blog, including whether the evaluation serves a legitimate and clearly defined purpose, whether the data used is relevant and necessary for that purpose, whether the scoring occurs within the same social context in which the data was collected, and whether any resulting detrimental or unfavorable treatment is justified and proportionate to the behavior assessed.
As the Guidelines emphasize, this assessment is contextual. The same scoring practice may fall outside the scope of the prohibition in one situation, for example, where it is used for a lawful and proportionate creditworthiness assessment based on relevant financial data, but may fall within the scope of Article 5(1)(c) where it relies on unrelated data, produces disproportionate consequences, or is used in a different social context. This reinforces that compliance depends not only on the existence of scoring systems, but on how they are designed, the types of data they process, and the purposes for which they are used.