Location Controls in iOS 11 Highlight the Role of Platforms

~ ~ ~ ~

Author’s Note 9/8/17: Following the release of additional beta versions of iOS 11, it appears that the “Blue Bar” notification discussed in the second part (below) will no longer be a mandatory feature for app developers. Questions? Email [email protected].

~ ~ ~ ~

From Pokémon Go, to the geo-targeting of abortion clinics, to state legislative efforts, the last year has seen significant attention paid to the many ways our apps use and often share location data. In the midst of this heightened awareness of geo-location privacy, iPhone users and app developers may notice a difference this Fall, when Apple releases iOS 11, which will increase users’ control over how their geo-location may be collected and used. The changes highlight the ongoing importance of platform settings for consumer privacy, and their legal implications.

At Apple’s 2017 Worldwide Developers Conference, the company announced a variety of privacy updates to iOS 11 that may have far-reaching impact when the OS becomes broadly available in the Fall. Two significant changes concern how apps can access users’ location: (1) the “Only When in Use” authorization setting will now be required; and (2) the prominent Blue Bar notification, which indicates when persistent, real-time location is being collected, will now appear more often.

Requiring the “Only When In Use” Authorization Setting for Location Services

In iOS 11, mobile apps seeking to access a user’s location will be required to offer the intermediate authorization setting “Only When in Use.” Previously, location-based apps could bypass this option and present the user with only two choices: Always or Never. In iOS 11, this binary consent flow will no longer be possible: if an app wants to request access to location “Always,” it must also provide the user with the option of accessing location “Only When in Use.” (Of course, if “Always” access is not needed, it is still possible to offer only the two lesser options, Never and Only When in Use.)
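
For developers, the change is concrete in the authorization flow. Below is a minimal sketch of an iOS 11-era request using the standard Core Location APIs; the class name and logging are ours for illustration. Under iOS 11, an app that calls requestAlwaysAuthorization() must declare both the NSLocationWhenInUseUsageDescription and NSLocationAlwaysAndWhenInUseUsageDescription strings in its Info.plist, which is how the OS guarantees the “Only When in Use” option is always on the table.

    import CoreLocation

    // A minimal sketch of the iOS 11 location authorization flow.
    final class LocationAuthorizer: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
        }

        // Requesting "Always" presents the user with three choices in iOS 11:
        // Always, Only When in Use, or Never.
        func requestAlwaysAccess() {
            manager.requestAlwaysAuthorization()
        }

        // If "Always" is not needed, request only the lesser permission.
        func requestWhenInUseAccess() {
            manager.requestWhenInUseAuthorization()
        }

        func locationManager(_ manager: CLLocationManager,
                             didChangeAuthorization status: CLAuthorizationStatus) {
            switch status {
            case .authorizedAlways:    print("Always access granted")
            case .authorizedWhenInUse: print("Only When in Use granted")
            case .denied, .restricted: print("Location access unavailable")
            case .notDetermined:       print("Awaiting user decision")
            @unknown default:          break
            }
        }
    }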

Apple reports that about 21% of location-based apps currently offer their users only those two choices: Always or Never. At the June developer conference, Apple technologist Katie Skinner suggested that this is likely due in part to developer confusion. While that is probably true, some apps may also be seeking Always authorization in order to monetize the additional location history data for location-based marketing, interest-based advertising, and other data analysis.

“Why do apps ever need ‘Always’ authorization for location?” This is a question we often hear from privacy-minded observers, and the answer lies in the added functionality that “Always” authorization enables. As shown in the chart below, with “Always” authorization, apps can make use of a number of lower-energy background monitoring services that let them incorporate contextual triggers based on movement or specific geographic areas (“geo-fences”).

For example, a retail app might offer discounts or points when the user enters a store; an airline app might launch a boarding pass when the user enters the airport; or a smart home app might send a parent a notification when their child enters or leaves a location, such as home or school. These sorts of location-based actions could not occur if the user were required to have each app open and running in the foreground for it to access location.

[Chart: background location services available with “Always” authorization]
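
To make the geo-fence example concrete, here is a sketch of the kind of region-monitoring call a retail app might register; the coordinates and identifier are invented for illustration, and a real app would also implement the corresponding delegate callback.

    import CoreLocation

    let manager = CLLocationManager()

    // Hypothetical store location, for illustration only.
    let storeCenter = CLLocationCoordinate2D(latitude: 38.8977, longitude: -77.0365)
    let storeRegion = CLCircularRegion(center: storeCenter,
                                       radius: 100, // meters
                                       identifier: "example-store")
    storeRegion.notifyOnEntry = true
    storeRegion.notifyOnExit = false

    // Region monitoring is one of the low-power background services that
    // requires "Always" authorization: the OS wakes the app when the user
    // crosses the boundary, even when the app is not in the foreground.
    manager.startMonitoring(for: storeRegion)

When the user enters the region, the OS delivers the event to the app’s locationManager(_:didEnterRegion:) delegate method, which could then trigger the discount notification.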

Although there are legitimate reasons to request “Always” authorization, apps do not always provide good explanations for why they are asking for it. Because Apple’s update ensures users will always have the option of choosing “Only When in Use,” developers that want “Always” access to location will now have a greater incentive to justify their request. In a small way, this shifts some of the burden from the iPhone user (of understanding apps’ data practices) to the app developers (of explaining them), which is a consumer-friendly step.

Expanding the “Blue Bar” Notification to Match User Expectations

Another significant update to iOS 11 is that the “Blue Bar” notification will now appear any time an app has requested “Continuous Background Location” and the user subsequently navigates away from the app.

Currently, this Blue Bar does not appear for apps with authorization to collect location “Always.” In theory, users who have agreed to share location “Always” do not need (or necessarily want) this notification for most of the typical, approved uses of background location described above, such as receiving reminders when they enter a geo-fence. But do users expect that an app might also be receiving high-frequency, high-accuracy, battery-intensive location data? For those apps, and only when they are collecting “Continuous Background Location,” the Blue Bar will now appear.

Importantly, most methods of accessing location under the “Always” permission will not trigger the Blue Bar, presumably because they are expected when a user approves “Always” authorization. These include visit monitoring, significant-change monitoring, and geo-fencing. (For more on these background location services, read Apple’s Developer Documentation, “Getting the User’s Location” (here) or the Location & Maps Programming Guide (here).) Only the “Continuous Background Location” service will be affected: the persistent, real-time collection of location after a user leaves an app. Compared to the other services, it is the most power-intensive, but it delivers the most accurate and immediate data.
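
The distinction is visible in code. Below is a sketch contrasting the two configurations, assuming the standard Core Location APIs:

    import CoreLocation

    let manager = CLLocationManager()

    // Continuous Background Location: high-frequency, high-accuracy updates
    // that keep flowing after the user leaves the app. In iOS 11, this is
    // the configuration that surfaces the Blue Bar.
    func startContinuousTracking() {
        manager.desiredAccuracy = kCLLocationAccuracyBest
        // Requires the "location" entry under UIBackgroundModes in Info.plist.
        manager.allowsBackgroundLocationUpdates = true
        manager.startUpdatingLocation()
    }

    // Significant-change monitoring: coarse, battery-friendly updates that
    // fall within the expected uses of "Always" and do not trigger the Blue Bar.
    func startSignificantChangeTracking() {
        manager.startMonitoringSignificantLocationChanges()
    }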


Generally speaking, the only apps that need this particular high-energy location service are the ones providing real-time navigation or mapping a route (e.g., a Runkeeper or a MapMyRun). As Apple tells developers: “Use this service only when you really need it, such as when offering navigation instructions or when recording a user’s path on a hike.” For these kinds of apps, the Blue Bar is more than just a notification; it is also a feature, making it easy to return to the app after leaving it to take a call or do something else on the phone.

For these reasons, industry sources are noting that most location-based mobile marketing is unlikely to be affected. We agree, although it is worth noting that there is almost certainly a small number of actors in the mobile marketing space whose overzealous collection of location data will be halted by this OS upgrade. Until now, the only incentives not to use this service to collect unnecessary data have been that it depletes the user’s battery significantly and that it causes the OS to occasionally generate notifications to users.

Operating Systems Play a Key Role in Shaping Expectations

Understanding the details of iOS and Android upgrades is important not only because of their direct effect on consumer privacy, but because platforms have a powerful effect on user expectations: they build trust by aligning privacy controls with the expectations users already hold, and they shape what those expectations become. When those consumer expectations are particularly strong and clearly expressed, we have seen in recent years that the Federal Trade Commission (FTC) is willing to step in and take action against companies that violate them.

In 2016, mobile ad network InMobi settled with the FTC for ignoring the mobile operating system settings of users who had disabled apps’ location access, and inferring those users’ location anyway through other means. As we discussed at the time, the ad network was able to infer location by detecting users’ proximity to known Wi-Fi access points. The practice not only violated the OS’s Terms of Service; the FTC also considered it a “deceptive business practice,” in part because InMobi had misrepresented it to its app partners. Under its Section 5 authority, the agency can take action against companies that deviate from their stated policies in a way that is considered deceptive.

After the case settled, many within industry wondered whether the FTC could have brought the case had the ad network accurately disclosed in its privacy policy that it was ignoring the mobile operating system’s privacy settings. We suspect that such a disclosure would not have been sufficient. As we have seen from state legislative efforts and cases involving law enforcement access to location history, reasonable people have strong feelings and expectations around the disclosure of geo-location. As a result, platform and operating system settings for controlling the disclosure of this type of information are likely to demand a high degree of respect, because of the expectations users have for how these settings work.

In addition to setting user expectations, recent research demonstrates that operating systems influence how app developers view privacy. In May 2017, Katie Shilton and Daniel Greene at the University of Maryland analyzed hundreds of discussions in iOS and Android developer forums, and concluded that platforms act not just as intermediaries but as regulators who define what privacy means:

Mobile platforms are not just passive intermediaries supporting other technologies. Rather, platforms govern design by promoting particular ways of doing privacy, training [developers] on those practices, and (to varying degrees) rewarding or punishing them based on their performance. . .  “Privacy by design” is thus perhaps better termed, at least in the mobile development space, “privacy by permitted design” or “privacy by platform.”

– Daniel Greene & Katie Shilton, “Platform privacies: Governance, collaboration, and the different meanings of ‘privacy’ in iOS and Android development” (Read full article here)

As consumers increasingly interact with platforms controlling data from their children’s toys, connected cars, and their homes, it is important that privacy settings evolve and become increasingly nuanced. We look forward to seeing Apple and other platforms continue to refine privacy settings to ensure they match consumer expectations.


Resources:

For more, we recommend watching two sessions from Apple technologists at WWDC 2017, “Privacy and Your Apps,” and “What’s New in Location Technologies” (find all WWDC 2017 videos here).

The Future of Microphones in Connected Devices


Today, FPF released a new Infographic: Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices (read the Press Release here). From Amazon Echos to Smart TVs, we are seeing more home devices integrate microphones, often to provide a voice user interface powered by cloud-based speech recognition.

Last year, we wrote about the “voice first revolution” in a paper entitled “Always On: Privacy Implications of Microphone-Enabled Devices.” This paper drew early distinctions between different types of consumer devices, and provided initial best practices for companies to design their devices and policies in a way that builds trust and understanding. Since then, microphones in home devices — and increasingly, in city sensors and other out-of-home systems — have continued to generate privacy concerns. This has been particularly notable in the world of children’s toys, where the sensitivity of the underlying data invites heightened scrutiny (leading the Federal Trade Commission to update its guidance and clarify that the Children’s Online Privacy Protection Act applies to data collected from toys). Meanwhile, voice-first user interfaces are becoming more ubiquitous, and may one day represent the “normal,” default method of interacting with many online services and connected devices, from our cars to our home security systems.

As policymakers consider the existing legal protections and future direction for the Internet of Things, it’s important to first understand the wide range of ways these devices can operate. In this Infographic, we propose that regulators and advocates thinking about microphone-enabled devices should ask three questions: (1) how is the device activated; (2) what kind of data is transmitted; and, on the basis of those two answers, (3) what legal protections may already be in place (or not yet in place).

#1 – Activation

In this section, we distinguish between Manual, Always Ready (i.e., speech-activated), and Always On devices. Always Ready devices often have familiar “wake phrases” (e.g., “Hey Siri”). Careful readers will notice that the term “Always Ready” applies broadly to devices that buffer and re-record locally (for the Amazon Echo, roughly every 1-3 seconds), and transmit data only when they detect a sound pattern. Sometimes that pattern is a specific phrase (“Alexa”), sometimes it is customizable (e.g., Moto Voice lets you record your own launch phrase), and sometimes it need not be a phrase at all; for example, a home security camera might begin recording when it detects any noise. Overall, Always Ready devices offer significant benefits and, if designed with the right safeguards, can be more privacy protective than devices designed to be on and running 100% of the time.
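
The “buffer and re-record” behavior can be sketched in a few lines. The wake-phrase detector and cloud call below are hypothetical placeholders; the point is the data flow: audio lives only in a short local buffer, and nothing leaves the device until the gate condition is met.

    import Foundation

    // A minimal sketch of "Always Ready" gating, with hypothetical helpers.
    final class AlwaysReadyMicrophone {
        private var buffer: [Float] = []     // short rolling audio buffer
        private let bufferLimit = 48_000 * 2 // ~2 seconds at 48 kHz

        // Called for each chunk of locally captured audio.
        func process(samples: [Float]) {
            buffer.append(contentsOf: samples)
            if buffer.count > bufferLimit {
                // "Re-record": discard the oldest audio, so only the last
                // ~2 seconds ever exist on the device.
                buffer.removeFirst(buffer.count - bufferLimit)
            }
            if detectsWakePhrase(in: buffer) {
                transmitToCloud(buffer) // transmission begins only now
                buffer.removeAll()
            }
        }

        // Placeholder: a real device runs an on-device keyword-spotting model.
        private func detectsWakePhrase(in audio: [Float]) -> Bool { return false }

        // Placeholder for the call to the cloud speech-recognition service.
        private func transmitToCloud(_ audio: [Float]) { /* network call */ }
    }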

#2 – Data Transmitted

In this section, we demonstrate the variety of data that can be transmitted via microphones. If a device is designed to enable speech-to-text translation, for example, it will probably need to transmit data from within the normal range of human hearing — which, depending on the sensitivity, might include background noises like traffic or dogs barking. Other devices might be designed to detect sound in specialized ranges, and still others might not require audio to be transmitted at all. With the help of efficient local processing, we may begin to see more devices that operate 100% locally and only transmit data about what they detect. For example, a city sensor might alert law enforcement when a “gunshot” pattern is detected.
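
In code, the privacy difference is simply what crosses the network boundary. Below is a sketch of a sensor that processes audio entirely on-device and transmits only a detected event label; the classifier is a hypothetical stand-in for a trained acoustic model:

    import Foundation

    // Hypothetical on-device classifier; raw audio never leaves the sensor.
    func classifyLocally(_ samples: [Float]) -> String? {
        // A real sensor would run a trained acoustic model here and return
        // a label such as "gunshot" when the pattern is matched.
        return nil
    }

    func handle(samples: [Float]) {
        guard let event = classifyLocally(samples) else { return }
        // Only the label and a timestamp are transmitted, never the audio.
        let report = ["event": event, "timestamp": "\(Date())"]
        print("Transmitting detection only: \(report)")
    }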

#3 – What are the existing legal protections?

In this section, we identify the federal and state laws in the United States that may be leveraged to protect consumers from unexpected or unfair collection of data using microphones. Although not all laws will apply in all cases, it’s important to note that certain sectoral laws (e.g., HIPAA) are likely to apply regardless of whether the same kind of data is collected through writing or through voice. In other instances, state anti-surveillance statutes and privacy torts may apply broadly. Finally, we outline a few considerations for companies seeking to innovate, noting that privacy safeguards must be two-fold: technical and policy-driven.


New Infographic: Understanding Uses of Microphones in Internet of Things (IoT) Devices


FOR IMMEDIATE RELEASE

August 10, 2017

Contact:

Stacey Gray, Policy Counsel, [email protected]

Melanie Bates, Director of Communications, [email protected]

New Infographic: Understanding Uses of Microphones in Internet of Things (IoT) Devices

Washington, DC – Today, the Future of Privacy Forum released an infographic, “Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices.” In order to enable the benefits of new voice-based services while protecting data privacy, this infographic attempts to explain the range of possible uses of microphones in connected devices. Microphones & the Internet of Things describes to consumers in an easily digestible format: 1) how microphones are used in home devices, 2) the different ways these devices can be activated (“Manual,” “Always Ready,” or “Always On”), 3) the types of data that can be transmitted, and 4) current U.S. legal protections and best practices.

“Voice is an increasingly useful interface to engage with technology,” said Stacey Gray, FPF Policy Counsel. “Consider the Amazon Echo, which is activated by a spoken command (“Alexa”), or Apple’s personal assistant Siri (“Hey, Siri”), or the Smart TVs that are incorporating voice interactions. But consumers don’t always understand when and in what ways these devices are actually collecting information, leading to legitimate concerns that companies can address through transparency and strong privacy safeguards.”

By their method of activation, consumer devices can be categorized as manual, always ready, or always on. In the past, most recording devices could be considered on or off. Many new voice-based home personal assistants today are “always ready” because they do not begin transmitting data off-site until they detect a wake phrase. The infographic FPF released today describes these categories:

  1. manual (requiring a press of a button or other intentional physical action);
  2. always ready (requiring a spoken “wake phrase”); and
  3. always on (the device transmits data 100% of the time on a standalone basis, and further processing occurs externally).

Microphones & the Internet of Things also explains that microphone-enabled devices do not all transmit the same kinds of audio data; some transmit no audio data at all. Some devices, such as smart speakers, can use microphones to do things like calibrate sound to the shape of a room for better music quality.

The publication of Microphones & the Internet of Things follows last year’s FPF paper, Always On: Privacy Implications of Microphone-Enabled Devices. The paper identifies emerging practices by which manufacturers and developers can alleviate privacy concerns and build consumer trust in the ways that data is collected, stored, and analyzed. Today’s release also coincides with the 2017 National Conference of State Legislatures Legislative Summit, where Gray spoke on a panel titled “The Future of Artificial Intelligence and Voice Recognition Technology.”

“Information networks and devices that make up the ‘Internet of Things’ promise great benefits for individuals and society,” Gray said. “However, if we do not have the right guiding principles or necessary privacy safeguards, consumers will lose trust in the evolving technologies. We need to address security and privacy issues to allow the Internet of Things to achieve its full potential.”

### 

The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Congratulations to Former FPF Advisory Board Member Jim Byrne, the New General Counsel of the Department of Veterans Affairs

Former FPF Advisory Board Member (and former IAPP Chairman of the Board) James Byrne was confirmed on Friday by the United States Senate to be General Counsel of the Department of Veterans Affairs.

Jim has had a great career as a privacy leader at Lockheed Martin and is an excellent choice to help address the many challenges facing the VA and our veterans. We have appreciated his advice, support, and friendship, and we look forward to his success. We wish him good luck!

Jim most recently served as Associate General Counsel and Chief Privacy Officer at Lockheed Martin Corporation where he was also the company’s lead cyber and counterintelligence attorney.

Prior to joining Lockheed Martin, Mr. Byrne served as the career Senior Executive Service Deputy Special Counsel with the Office of the United States Special Counsel, and both General Counsel and Assistant Inspector General for Investigations with the Office of the Special Inspector General for Iraq Reconstruction.

Jim has over 20 years of experience in the public sector, including service as a deployed Marine Infantry Officer and as a U.S. Department of Justice (DOJ) international narcotics prosecutor. For the past ten years, he has volunteered on the Executive Board of Give an Hour, a non-profit organization that has developed national networks of volunteer professionals capable of providing complimentary and confidential mental health services in response to both acute and chronic conditions that arise within our society, beginning with the mental health needs of post-9/11 veterans, servicemembers, and their families. Mr. Byrne is a Distinguished Graduate of the U.S. Naval Academy, where he received an engineering degree and ultimately held the top leadership position of Brigade Commander. He earned his J.D. from Stetson University College of Law in St. Petersburg, Florida, and began his legal career as a judicial law clerk to the Honorable Malcolm J. Howard, U.S. District Court, Eastern District of North Carolina.

The House’s SELF DRIVE Act Races Ahead on Privacy

In a rare moment of bipartisanship, the House Energy and Commerce Committee yesterday unanimously approved the SELF DRIVE Act, H.R. 3388, sending it to the full House of Representatives for consideration. The bill facilitates the introduction and testing of autonomous cars by clarifying federal and state roles, and by granting exemptions from motor vehicle standards that have impeded the introduction of new automated vehicle technologies. The vote was an important step toward enabling new technologies that have the potential to transform the future of mobility and maximize consumer safety.

The latest version of the bill includes a significant section on consumer privacy, which primarily requires that manufacturers create a written “privacy plan” for every automated vehicle. This privacy plan must explain a manufacturer’s collection, use, sharing, and storage of information about vehicle owners and occupants, and detail manufacturers’ approaches to core privacy principles like data minimization, de-identification, and information retention. Carmaker practices for information that is de-identified, anonymized, or encrypted do not need to be detailed in the privacy plan.

The House’s support for these provisions underscores the growing role that data will play in connected vehicles, and the importance of responsible data practices for this emerging field.

Automakers have proactively tackled this issue, with nearly all automakers developing and committing to the Automotive Privacy Principles in 2014. The Principles, which are enforceable by the Federal Trade Commission, require transparency, affirmative consent for sharing of sensitive data for marketing purposes, and limited sharing of covered information with law enforcement. Many of the provisions in the House bill reflect similar commitments made in these Principles. Moreover, NHTSA’s Federal Automated Vehicles Policy recommends that entities produce a Safety Assessment Letter (SAL) before they introduce new technologies. The SAL, which becomes a legal requirement in the latest version of the House bill, already includes a provision that companies in the ecosystem outline their privacy practices, ensuring basic consumer privacy protection.

The bill also underscores the Federal Trade Commission’s role regarding connected vehicles. While the FTC has authority to bring enforcement actions against unfair or deceptive privacy and data practices across sectors, including transportation, the bill highlights the agency’s ability to enforce violations of the privacy-related sections of the bill, and calls on the FTC to study manufacturer privacy plans and practices. The FTC is actively collaborating with NHTSA on this topic, co-hosting a workshop on privacy and security issues related to connected and automated vehicles in June, where the agencies committed to minimizing duplication while ensuring consumer protection around privacy and cybersecurity in connected cars.

The bill also calls for the creation of a Highly Automated Vehicle Advisory Council that will monitor developments and advise NHTSA on several issues, including the protection of consumer privacy and security. This Council will have the flexibility to monitor this space and recommend best practices going forward.

The House bill provides flexibility for manufacturers to determine best practices in a nascent industry, where data is only beginning to play a part. The exact data that will need to be generated, stored, and shared to facilitate self-driving cars is not yet known, even by industry experts, and a bill that requires a plan but provides flexibility on exact treatment of such data is a promising step.

The Committee’s work on the SELF DRIVE Act has been a successful bipartisan effort, and the bill seems likely to advance with continued support after the House recess. A Senate bill on self-driving cars is expected shortly, and FPF will stay tuned to see whether privacy provisions are included.

Additional Resources

See FPF’s consumer guide to the connected car here

See FPF’s infographic mapping data flows in the connected car here

See FPF’s comments on the Federal Automated Vehicles Policy here

See FPF’s comments on the FTC/NHTSA Workshop here

Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data

This paper provides strategies for organizations to minimize the risks of re-identification and privacy violations for individual data subjects. In addition, it suggests that privacy and ethical concerns can be most effectively managed by supporting the development of administrative data centers. These institutions would serve as centers of expertise for de-identification, certify researchers, provide state-of-the-art data security, organize ethical review boards, and support best practices for cleansing and managing data sets.

The paper is organized around four topics: 1) Privacy and Confidentiality; 2) The Interests of Data Producers; 3) Gaining Access: The Lessons of Experience; and 4) Lessons Learned from Infrastructure Successes in Other Contexts.

The Top 10: Student Privacy News (June – July 2017)

The Future of Privacy Forum tracks student privacy news very closely, and shares relevant news stories with our newsletter subscribers.* Approximately every month, we post “The Top 10,” a blog with our top student privacy stories. 

The Top 10

  1. FERPA|Sherpa continues to grow! FPF published new blogs on protecting your child’s privacy when they go to summer camp (Leah Plunkett from the Berkman Klein Center) and Higher Ed Chief Privacy Officers (Joanna Grama from EDUCAUSE). We have also continued to add new resources to the Resource Search Center. Check out the site!
  2. Carnegie Mellon University grad students released a study on ed tech start-ups and student privacy, finding that they often fail to “prioritize student data protections,” and that investors do not tend to discuss privacy with their investees (the only exceptions I know about are AT&T Aspire and the Michelson 20MM Foundation). The release of the study was widely covered in the press.
  3. The House Subcommittee on Early Childhood, Elementary, and Secondary Education held the hearing “Exploring Opportunities to Strengthen Education Research While Protecting Student Privacy” on June 28th. The consensus: “states need federal guidance on student data privacy,” and “It’s Time” to update FERPA. As mentioned in the previous newsletter, a very similar hearing was held on March 22nd last year, which is probably why very few lawmakers were in attendance. You can read my live tweets from the hearing, and check out my op-ed on this topic from last year.
  4. The Louisiana governor vetoed a bill that would have allowed researchers outside of Louisiana to access student data for research, subject to civil penalties for any violation of student privacy (more about the problem the bill was addressing here). The Louisiana student privacy law is still one of the strictest laws in the country even after being rolled back a year after it passed due to many unintended consequences.
  5. The U.S. Department of Education’s Regulatory Reform Task Force issued a progress report with a list of regulations that need to be updated – including FERPA and PPRA regulations (more info on the task force report via EdWeek) (h/t Doug Levin).
  6. Elana Zeide’s article, “The Structural Consequences of Big Data-Driven Education,” was published in the journal Big Data.
  7. John Warner writes a really interesting article in Inside Higher Ed about “Algorithmic Assessment vs. Critical Reflection.” One particularly thought-inspiring quote: “I am disconcerted by an educational model where students primarily receive attention when they’re “struggling.” This suggests a framework where the goal of education is simply to stay off the algorithm’s radar, rather than maximize each student’s potential.”
  8. In Australia, “An algorithm is using government data to tell those from low socioeconomic backgrounds their likelihood of completing university, but privacy experts say it could be utilised for early intervention instead of discouragement.”
  9. When should schools be able to access student social media? TrustED posted an article about the issue, and EdWeek reported on “10 Social Media Controversies That Landed Students in Trouble This School Year.” A student “tried to expose a schoolmate’s racism by reposting” her remarks on social media and was disciplined by the school, and the ACLU of Ohio is pushing back. A new paper published this month found that “women and young people are more likely to experience the chilling effects of surveillance,” and “the younger the participant, the greater the chilling effect.” For a look at surveillance and student privacy, check out my report from last fall.
  10. Personalized Learning articles proliferated this month in response to a RAND report on personalized learning implementation. Ben Herold at EdWeek reported that “Chan-Zuckerberg to Push Ambitious New Vision for Personalized Learning;” the New Schools Venture Fund Summit emphasized that “philanthropists and school leaders need to make a ‘big bet’ on dramatically reshaping schools” through personalized learning; Common Sense Media’s Bill Fitzgerald was on a podcast about “Personalized Learning and the Disruption of Public Education;” and there were other think pieces on personalized learning in RealClearEducation, The Economist, EdTech Strategies, and the Christensen Institute. It may be worth revisiting the Data & Society paper on “Personalized Learning: The Conversations We’re Not Having” from last year and its discussion of some of the privacy implications of personalized learning.

Image: “image_019” by Brad Flickinger is licensed under CC BY 2.0.

The Future of Digital Privacy

Jules Polonetsky, Future of Privacy Forum’s CEO, was featured on Episode 5 of The Front Row, a podcast by 2U. The conversation centered on responsible data collection and the future of digital privacy. Jules discussed how chief privacy officers and cyber security experts will be able to harness the good in technology and mitigate the risks. He explained:

“If they are empowered to shape responsible decisions, we’ll help make sure that we have a world that is not Orwellian but that uses technology so that we have better health, more free time, more time to do important things like spend it with our family and be healthy and achieve great things.”

LISTEN

Read Transcript

Privacy in the age of data: Regulation for human rights and the economy

Friends of Europe recently released a discussion paper, ‘Policy choices for a digital age – taking a whole economy, whole society approach’, at the closing plenary of the Net Futures 2017 conference in Brussels, which was co-organised by the European Commission.

Jules Polonetsky, Future of Privacy Forum’s CEO, contributed an article titled ‘Privacy in the age of data: Regulation for human rights and the economy.’ His article examines how companies can enhance trust in the digital economy while also strengthening the shared values that citizens and consumers in both Europe and the US cherish.

READ PAPER

Meet FPF’s 2017 Summer Interns!

Pictured Above: FPF Interns during a visit to Google’s Washington, D.C. offices.

We are pleased to introduce FPF’s 2017 Summer Interns. FPF interns work with policy staff on a range of substantive projects. They perform research and craft analysis on the intersection of privacy and emerging technologies, including connected cars, the Internet of Things, education technologies, smart communities, de-identification, advertising technology, biometrics, and genetic analysis. FPF interns meet with influential policymakers, industry leaders, academics, and privacy advocates, and they provide crucial support for FPF projects and stakeholder engagement. We also like to think they have a bit of fun.

Please click below to meet our interns!

Intern Profiles