Essential and Untrusted
The pandemic has exacerbated an existing child care crisis. Platforms like Care.com are growing, while exposing care workers to new forms of surveillance and discrimination.
After months of shutdowns to slow the spread of COVID-19, businesses face intense pressure to restore pre-pandemic productivity and profitability. With no vaccine in sight, many are undertaking novel efforts to prevent workers from getting sick, and to prevent sick workers from infecting others. One tool employers have turned to is the thermal camera, which promises to detect elevated temperatures.
The ACLU and other critics have noted the limited effectiveness of temperature screening, while warning about the invasiveness of this practice. The use of thermal cameras, like other technologies of virus detection, is an example of the increasingly intimate surveillance of our bodies in public life. But these technologies don’t only measure body temperature; like background checks, they also shape our ideas about who is risky, and who is trustworthy. For many, the pressure to hand over personal information in order to prove compliance to employers will feel familiar.
For the past three years, I’ve been studying domestic labor platforms—websites and apps like Care.com and Sittercity—which have come to play an important role in the ways care workers and families find one another. These platforms are not considered as newsworthy as companies like Uber and Lyft, and they are usually excluded from policy research and systematic data collection about the online gig economy. But they are immensely popular: Care.com hosts more than 11 million worker profiles in the United States alone.
Since 2017, researcher Alexandra Mateescu and I have conducted more than forty interviews with nannies, babysitters, and home-care workers who had used these apps to find work. We met them through many of the same avenues where they look for jobs, including nanny agencies, Craigslist, email lists, and Facebook groups. We learned about how these platforms push workers to share ever more about themselves. Features like background checks, profile pictures, and guidance about how to conduct social media searches allow clients to peer into the lives of prospective hires.
Care-work platforms distance themselves from apps like Uber and Lyft by pointing out that they don’t dictate how work should be done, but merely facilitate connections between workers and clients. Care.com’s former CEO has compared the company’s “marketplace” to hiring platforms like Indeed or LinkedIn, which bring employers and job applicants together in the same digital space. But just like ride-hailing apps, online care platforms are management technologies. The difference is that instead of direct “algorithmic management”—using surveillance and data to automate decision-making and maximize productivity—care platforms use “actuarial management,” which also includes surveillance, but to classify workers in an attempt to minimize losses for employers. These platforms incentivize workers to expose themselves as much as possible to establish their trustworthiness. The information collected goes into the algorithm, which sorts the workers into categories that are anything but neutral.
One of the earliest forms of this kind of actuarial management in North America dates back to 1712. After an armed slave insurrection in New York, “lantern laws” required enslaved Africans to carry a lantern when traveling around the city at night. As Simone A. Browne explains in Dark Matters, these laws transformed lanterns into a technology that made the bodies of workers visible and subject to control. Even essential labor posed a potential risk to social order, and authorities used the lanterns to illuminate (in this case, literally) a class of workers and to distinguish between the obedient and the dangerous. Though some of the technology used today might be novel, asking workers to make themselves visible to prove their trustworthiness is an old practice of social control.
The management mechanisms used by care-work companies today are shaped by centuries of suspicion about the mostly Black and brown women who perform essential reproductive labor in the United States. Domestic work in the colonial period was traditionally carried out either by family members or by enslaved and otherwise unfree workers; throughout the nineteenth century, it was one of the only occupations available to women of ethnic and racial minority groups. In the twentieth century, as racism depressed the wages of Black people and excluded them from better-paying work, Black women continued to do paid domestic work even after marriage, which in earlier eras had often marked women’s exit from the occupation. Today, a majority of domestic workers are women of color, and nearly half are immigrants, according to the 2010 American Community Survey.
In recent decades, two cultural transformations helped pave the way for online care-work platforms. In the 1980s, there was an explosion in formal auditing techniques and technologies. Corporations, hospitals, governments, and many large institutions were subject to new demands for accountability and transparency, which spawned a class of professional auditors. In the following decade, as the internet entered our everyday lives, early online marketplaces like eBay combined the tools of institutional accountability with countercultural ideals of decentralized and trusted communities of networked strangers. Peer-to-peer marketplaces were early experiments in what became known as the “sharing economy.” They used ratings and reviewing systems to facilitate transactions between strangers, drawing on economic theories about the essential role of trust in markets. These technologies democratized the audit, putting the tools of institutional accountability—and the responsibility for surveillance—into the inexpert hands of individuals.
The 1980s also saw a rise in concern about children’s safety. News stories about abusive priests, Satanic day-care teachers, and predators lurking around every corner made parents uneasy and skeptical of the people who were supposed to be looking after their children. While poverty remains the most significant threat to children’s safety, and most children are safer than ever before, these moral panics gave many middle-class parents further justification for an existing discomfort with the workers tasked with caring for their children. Along with nanny cams and consumer background-check companies, new digital technologies aimed to dispel the fears of anxious parents. The marketing for this technology also helped sow seeds of doubt about care workers. Suspicion—and the surveillance tools designed to quell it—became not just the norm, but a requirement of careful parenting.
Since their introduction in the 2000s, care-work platforms have expanded the options for finding domestic labor. These companies offer a reassuring interface to parents unfamiliar, and perhaps uncomfortable, with the largely informal market for in-home care. The sanitized and standardized “marketplaces” of Care.com, Sittercity, and UrbanSitter suggest safety while still putting the onus of responsibility on parents. In a Wall Street Journal article that reported on the deaths of two children who drowned in 2018 at a day care advertised on Care.com, the former CEO assured users that the site was “safer than word-of-mouth” but also emphasized that they “don’t verify the information posted by users.” The platform states that parents hold the ultimate responsibility to “hire safely”; the company is there to provide “tools and resources to evaluate risk and stay savvy about safety.” It encourages parents to run separate criminal record and motor vehicle checks (both available for purchase through the site), as well as wider internet and social media searches.

Parents are entreated to “do their digital homework” but given little direction about what to look for or how to interpret the information they find—other than to “hire without discrimination.” This acknowledgement is an important reminder, but insufficient. In online resources on Care.com, parents are encouraged to trust their “gut” when making decisions about prospective workers. As numerous implicit bias studies have shown, however, for many of us, falling back on instincts or relying on ideas about “fit” often leads to discriminatory hiring. Without any deeper education, parents are left to figure out how to navigate the complicated dynamics of a labor market that has been shaped by centuries of racism and racialized surveillance.
In talking to workers who use these platforms, we discovered that many are acutely sensitive to the importance of first impressions, and conscious of the ways racial and gender stereotypes affect how prospective employers judge their profiles. While uploading a photo is optional on Care.com, workers are encouraged to do so through email reminders and prompts on the site. Kendall (all names are pseudonyms), a Black nanny in Atlanta, Georgia, explained the work that went into creating an image for her profile:
There’s a science to that . . . your profile picture has to say, I’m going to love your child. I’m not going to attract your husband . . . I’m safe. If you do video? Please! It takes hours to get it right. You have to tweak it. . . . Don’t do lip gloss. Do ChapStick. . . . Then how you have your hair. . . . I know if my hair is pulled back, or if I have my hair pressed, and down, I’m fine. No curls, it’s too sexy, or too unprofessional.
Many Black nannies told us about deliberately including information in an effort to prevent racist assumptions. As one worker in New York put it, she needed to “get out ahead of ideas” that parents might have about her; that meant listing interests like “hiking” to head off stigma. Amanda, a New York–based nanny, began featuring “swimming” prominently in her interests after receiving a question about her ability to swim during an interview.
While most of the nannies we interviewed did put up profile photos, they were more hesitant to share other pieces of information solicited by the platforms. Whether it was the schools they had attended, their weekly availability, or links to their Facebook and other social media accounts, concerns about surveillance factored into workers’ choices about what to disclose. Explaining her decision not to link her social media accounts to her profile, Shontay, a Black nanny in Atlanta, recalled a story about an acquaintance who had an encounter with Child Protective Services (CPS) because of a photo she had posted to Instagram of her own child sitting in a parked car with an unbuckled seatbelt. “Someone called CPS on her because of that photo,” she explained. “It’s crazy. I don’t want my business out there like that, you never know who’s got something against you . . . jealous of what you’ve got . . . someone can find some crazy way to try and make drama.” The use of visibility to reassure worried families about care workers is just another form of the racist surveillance that Black women face across the internet.
When institutions surveil Black people, as Khalil Gibran Muhammad has argued, the aim is not necessarily to exclude them from white spaces but to ensure compliant labor within them. Black women and other women of color provide essential reproductive labor in the United States; it is therefore unsurprising that they face intense scrutiny and pressure to comply with strict codes of conduct and personal expression. In uncertain times, it’s easy to find solace in technologies that put a “data-driven” veneer on existing systems. But technologies that don’t actively work against this status quo nearly always reinforce it.
The pandemic exacerbated an existing crisis of care. As schools closed, working parents were suddenly expected to look after their children 24/7, or find someone to do it for them. In response, the platforms sprang into action. In the spring, Care.com announced partnerships with four states—Louisiana, Massachusetts, Rhode Island, and Texas—and the Armed Services YMCA to provide ninety days of free membership to members of the military and essential workers searching for child care. Since the announcement of completely online or hybrid school years, the platform has reported “triple digit increases” in tutoring jobs in some cities. The platform now prompts workers to log their “household fever status,” which they can choose to display on their profiles, “letting families know you’re proactively monitoring your health.” As the company explains, this new feature is “to help protect our community,” but at the time of writing clients aren’t asked to report any health information. UrbanSitter has partnered with Collective Health, a company that offers testing and health monitoring for organizations through an app that guides workers through a “risk assessment,” prompting them to record their daily symptoms and upload test results. In the name of protecting workers, technological “fixes” for the COVID-19 crisis threaten to make actuarial management practices more invasive and more ubiquitous.
There have been some small signs of positive change. Care.com recently announced to workers that it is exploring ways to reduce “implicit bias” on the site by tweaking profile photos and other potentially sensitive information. The platform is planning to change its reviewing system to allow care workers to post reviews about clients. When they materialize, these efforts may improve workers’ experiences. However, it’s essential to recognize not only the pernicious effects of these technologies but also the broader cultural anxieties that link risk and suspicion to some bodies and not others. These problems will not be solved by technical experts correcting racist algorithms, or by politicians calling on CEOs to testify, or even by platforms offering more options for workers to control their privacy online, as worthy as these efforts may be. As Dorothy E. Roberts has written, the “regulation of black women’s bodies” is central to the “disciplinary policies which keep socially privileged people from seeing the need for social change.” The problems of these platforms go beyond the tech sector; exposing and divesting from technologies that manufacture doubt about individual care workers is everyone’s responsibility.
We are in a moment of national reckoning with the racism that lurks in so many of our institutions; simultaneously, we are confronting an unprecedented health crisis that has made the critical importance of child care more evident than ever. This is an opportunity to investigate, redesign, and maybe even abolish technological forms of visibility that encourage or fail to prevent discrimination, and for employers to examine the buried forces that drive mistrust of workers. As the growth of platforms accelerates through the pandemic, it’s important to recognize the fundamental flaws of these systems, even as they meet the real needs of many parents. An anti-racist approach can help us resist the allure of technologies that promise to fix social problems by locating risk in individuals, and focus instead on fighting for a just system of care for all.
Julia Ticona is an Assistant Professor at the Annenberg School for Communication at the University of Pennsylvania, where she researches and teaches about digital inequalities in the world of work.