The Guardian, “Smart cities need thick data, not big data”, by Adrian Smith, Professor of Technology and Society at the Science Policy Research Unit, University of Sussex

Residents living around Plaça del Sol joke that theirs is the only square where, despite the name, rain is preferable. Rain means fewer people gather to socialise and drink, reducing noise for the flats overlooking the square. Residents know this with considerable precision because they’ve developed a digital platform for measuring noise levels and mobilising action. I was told the joke by Remei, one of the residents who, with her ‘citizen scientist’ neighbours, is challenging assumptions about Big Data and the Smart City.
The Smart City and data sovereignty
The Smart City is an alluring prospect for many city leaders. Even if you haven’t heard of it, you may have already joined in by looking up bus movements on your phone, accessing Council services online or learning about air contamination levels. By inserting sensors across city infrastructures and creating new data sources – including citizens via their mobile devices – Smart City managers can apply Big Data analysis to monitor and anticipate urban phenomena in new ways, and, so the argument goes, efficiently manage urban activity for the benefit of ‘smart citizens’.
Barcelona has been a pioneering Smart City. The Council’s business partners have been installing sensors and opening data platforms for years. Not everyone is comfortable with this technocratic turn. After Ada Colau was elected Mayor on a mandate of democratising the city and putting citizens centre-stage, digital policy has sought to go ‘beyond the Smart City’. Chief Technology Officer Francesca Bria is opening digital platforms to greater citizen participation and oversight. Worried that the city’s knowledge was being ceded to tech vendors, the Council now promotes technological sovereignty.
On the surface, the noise project in Plaça del Sol is an example of such sovereignty. It even features in Council presentations. Look more deeply, however, and it becomes apparent that neighbourhood activists are really appropriating new technologies into the old-fashioned politics of community development.
Community developments
Plaça del Sol has always been a meeting place. But as the neighbourhood of Gràcia has changed, so the intensity and character of socialising in the square has altered. More bars, restaurants, hotels, tourists and youngsters have arrived, and Plaça del Sol’s long-standing position as venue for large, noisy groups drinking late into the night has become more entrenched. For years, resident complaints to the Council fell on deaf ears. For the Council, Gràcia signified an open, welcoming city and leisure economy. Residents I spoke with were proud of their vibrant neighbourhood. But they recalled a more convivial square, with kids playing games and families and friends socialising. Visitors attracted by Gràcia’s atmosphere also contributed to it, but residents in Plaça del Sol felt this had become a nuisance. It is a story familiar to many cities. Much urban politics turns on the negotiation of convivial uses of space.
What made Plaça del Sol stand out can be traced to a group of technology activists who got in touch with residents early in 2017. The activists were seeking participants in their project called Making Sense, which sought to resurrect a struggling ‘Smart Citizen Kit’ for environmental monitoring. The idea was to provide residents with the tools to measure noise levels, compare them with officially permissible levels, and reduce noise in the square. More than 40 neighbours signed up and installed 25 sensors on balconies and inside apartments.
The neighbours had what project coordinator Mara Balestrini from Ideas for Change calls ‘a matter of concern’. The earlier Smart Citizen Kit had begun as a technological solution looking for a problem: a crowd-funded gadget for measuring pollution, whose data users could upload to a web-platform for comparison with information from other users. Early adopters found the technology trickier to install than developers had presumed. Even successful users stopped monitoring because there was little community purpose. A new approach was needed. Noise in Plaça del Sol provided a problem for this technology fix.
Through meetings and workshops residents learnt about noise monitoring, and, importantly, activists learnt how to make technology matter for residents. The noise data they generated, unsurprisingly, exceeded norms recommended by both the World Health Organisation and municipal guidelines. Residents were codifying something already known: their square is very noisy. However, in rendering their experience into data, these citizen scientists could also compare their experience with official noise levels, refer to scientific studies about health impacts, and correlate levels to different activities in the square during the day and night.
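The kind of comparison the residents made can be sketched in a few lines of code. This is a hypothetical illustration only: the readings and limit values below are invented, and nothing here represents the project’s actual platform or data.

```python
# Hypothetical sketch: checking hourly noise readings against guideline
# thresholds, in the spirit of the Plaça del Sol project. All numbers
# are illustrative, not the residents' real measurements.

WHO_NIGHT_LIMIT_DB = 40      # illustrative night-time guideline value
MUNICIPAL_DAY_LIMIT_DB = 55  # illustrative daytime limit

hourly_readings = {  # hour of day -> average decibel level (invented)
    14: 58, 19: 63, 23: 71, 2: 68, 5: 49,
}

def exceedances(readings, day_limit, night_limit):
    """Return hours whose average level exceeds the applicable limit,
    and by how many decibels."""
    over = {}
    for hour, db in readings.items():
        limit = night_limit if (hour >= 22 or hour < 7) else day_limit
        if db > limit:
            over[hour] = db - limit
    return over

print(exceedances(hourly_readings, MUNICIPAL_DAY_LIMIT_DB, WHO_NIGHT_LIMIT_DB))
```

Tagging each reading with an hour is what let the residents correlate noise with different activities in the square through the day and night.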
The project decided to compare their square with other places in the city. At this point, they discovered the Council’s Sentilo Smart City platform already included a noise monitor in their square. Officials had been monitoring noise but not publicising the open data. Presented with citizen data, officials initially challenged the competence of resident monitoring, even though official data confirmed a noise problem. But as Rosa, one of the residents, said to me, “This is my data. They cannot deny it”.
Thick data
Residents were learning that data is rarely neutral. The kinds of data gathered, the methods used, how it gets interpreted, what gets overlooked, the context in which it is generated, and by whom, and what to do as a result, are all choices that shape the facts of a matter. For experts building Big Data city platforms, one sensor in one square is simply a data point. On the other side of that point, however, are residents connecting that data to life in all its richness in their square. Anthropologist Clifford Geertz argued many years ago that situations can only be made meaningful through ‘thick description’. Applied to the Smart City, this means data cannot really be explained and used without understanding the contexts in which it arises and gets used. Data can only mobilise people and change things when it becomes thick with social meaning.
Noise data in Plaça del Sol was becoming thick with social meaning. Collective data gathering proved more potent than decibel levels alone: it was simultaneously mobilising people into changing the situation. Noise was no longer an individual problem, but a collective issue. And it was no longer just noise. The data project arose through face-to-face meetings in a physical workshop space. Importantly, this meant that neighbours got to know one another better, and had reasons for discussing life in the square when they bumped into one another.
Attention turned to solutions. A citizen assembly convened in the square one weekend publicised the campaign and discussed ideas with passers-by. Some people wanted the local police to impose fines on noisy drinkers, whereas others were wary of heavy-handed approaches. Some suggested installing a children’s playground. Architects helped locals examine material changes that could dampen sound.
The Council response has been cautious. New flowerbeds along one side of the square remove steps where groups used to sit and drink. Banners and community police officers remind people to respect the neighbourhood. The Council recently announced plans for a movable playground, which can be cleared from the centre of the square for events like the Festa Major de Gràcia. Residents will be able to monitor how these interventions change noise in the square. Their demands confront an established leisure economy. As local councillor Robert Soro explained to me, convivial uses also have to address the interests of bar owners, public space managers, tourism, commerce, and others. Beyond economic issues are questions of rights to public space, young peoples’ needs to socialise, neighbouring squares worried about displaced activity, the Council’s vision for Gràcia, and of course, the residents suffering the noise.
The politics beneath Smart City platforms
For the Council, technology activists, and residents of Plaça del Sol, data alone cannot solve their issues. Data cannot transcend the lively and contradictory social worlds that it measures. If data is to act then it needs ultimately to be brought back into those generative social contexts – which, as Jordi Giró at the Catalan Confederation of Neighbourhood Associations reminds us, means cultivating people skills and political capacity. Going beyond the Smart City demands something its technocratic efficiency is supposed to make redundant: investment in old-fashioned, street-level skills in community development. Technology vendors cannot sell such skills. They are cultivated through the kinds of community activism that first brought Ada Colau to prominence, and eventually into office.
Adrian Smith is Professor of Technology and Society at the Science Policy Research Unit at the University of Sussex, and Visiting Professor at the Centro de Innovación en Tecnología para el Desarrollo Humano at the Universidad Politécnica de Madrid. This blog comes from a European research project analysing the knowledge politics of smart urbanism. He is on Twitter as @smithadrianpaul
UBER MAKES PEACE WITH CITIES BY SPILLING ITS SECRETS, Wired
THE TRUCE BETWEEN two old foes—city governments and secretive private companies like Uber—began at the curb.
If you think the curb seems an unlikely Appomattox, you haven’t been paying attention. Today, the curb represents the most contested space in the urban world. Cyclists pedal through bike lanes, cars battle for parking spots. Taxis, Ubers, and Lyfts pick up and drop off riders. Delivery trucks unload Amazon Prime boxes and buses pull in and out of stops. People on foot scuttle through it all, trying not to get hit.
The people running cities believe there should be a place for all these things. Maybe a few designated Uber pick-up and drop off zones, or spaces reserved for trucks making deliveries. The companies want curb space, too, so they can do their thing. But before city governments can start reallocating that space (too long given over to private, parked cars), they need information.
“The autonomous age is upon us but most cities really don’t even have the network password to log in,” says Janette Sadik-Khan, a former New York City transportation commissioner and chair of the National Association of City Transportation Officials. Some don’t have their curbs mapped at all. Others do, but the info is spread out across agencies, file formats, and incompatible maps. (One agency’s master files won’t include intersections; another’s might skimp on curb cuts.)
You know who does have that data? Private sector companies like Uber, which collect piles of information on who goes where, and when. And historically, they’ve been loath to let it out into the sunlight. “The data is essential, but because so many companies wouldn’t share the data, we were planning blind,” says Sadik-Khan.
Until now, perhaps. In January, NACTO quietly rolled out a data-sharing project called SharedStreets. And last week, it landed a very important private sector partner, in Uber. The ride-hail company has started using the project as an intermediary, to share sensitive pick-up and drop-off data for Washington, DC.
DC is pleased. “Data today is worth more than gold, oil and cryptocurrency,” says Ernest Chrappah, the director of the city’s Department of For-Hire Vehicles, which oversees taxis, limos, and ride-hail companies in the district. He says the city could use the newly available info to understand whether, say, drivers are too often blocking traffic to pick up passengers—and reconsider its street designs or traffic patterns to accommodate the new ways of getting around.
Indeed, SharedStreets may be exactly what both sides need. First, it will establish data standards for curbs, traffic speeds, and transit data, formats that can be shared between companies, agencies, even across cities. (No more, My computer can’t open that file.) Now, there’s a common language for curb data and maps, with agreed-upon locations for curb cuts and intersections.
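What a common language for curb data buys you can be shown with a toy record. The field names and identifier below are invented for illustration, not the actual SharedStreets referencing system; the point is simply that once everyone agrees on one structure, files move between agencies and companies without translation.

```python
# Hypothetical shared curb-data record. The schema is an assumption
# for illustration, not SharedStreets' real format.

import json

curb_record = {
    "street_ref": "a1b2c3",       # shared identifier for the block face (invented)
    "side": "right",
    "from_intersection_m": 12.5,  # linear offset along the street, in meters
    "to_intersection_m": 28.0,
    "use": "passenger_loading",
    "hours": "07:00-19:00",
}

# Any party can serialize, exchange, and reload the same structure,
# with no agency-specific file format in the way.
encoded = json.dumps(curb_record, sort_keys=True)
decoded = json.loads(encoded)
assert decoded == curb_record
print(decoded["use"])  # -> passenger_loading
```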
This urban Esperanto is a major help, say the people who work with curb info every day. “All of these debates that people are having, you have to have some kind of shared truth,” says Michal Migurski, an engineer at the startup Remix, which builds transit planning software.1 “You have to have an agreement on how many miles of streets, how many miles of curbs. If not, it ends up devolving into testiness early on.”
SharedStreets’ second key advantage is that it serves as a non-profit, non-political third party, a data-holding buffer between occasionally adversarial cities and private companies. That’s key for the companies that have hesitated to share data, fearing that less cautious or more technically savvy users could compromise their customers’ privacy, or reveal their various secret sauces, like routing algorithms.
“They have made it very clear that they understand private companies have legitimate constraints on what they do with data,” says Andrew Salzberg, who heads up transportation policy at Uber.
So Uber is working with SharedStreets to build a tool that will process and aggregate private companies’ data, put it in the correct format, and leave it completely anonymized. After all, the city says, it isn’t after Uber’s routing info—how your particular car got from, say, the White House to the Capitol building. It just wants to know how often and when vehicles are picking up from that spot outside of 1600 Pennsylvania Avenue. Maybe the city should carve out a designated meeting point there.
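The aggregation step described above can be sketched in miniature: individual trip events go in, anonymized per-location hourly counts come out, and rare cells that could single out one rider are suppressed. Everything below — the field names, the segment IDs, the cutoff — is an invented illustration, not Uber’s or SharedStreets’ actual tool.

```python
# Illustrative sketch of aggregate-and-anonymize: raw trip events are
# reduced to counts per (curb segment, hour, event type), and cells
# with too few events are dropped so no single trip is identifiable.
# All data and names here are invented.

from collections import Counter

trips = [  # invented trip events: (curb_segment_id, hour, event_type)
    ("seg-0042", 8, "pickup"),
    ("seg-0042", 8, "pickup"),
    ("seg-0042", 8, "dropoff"),
    ("seg-0107", 9, "pickup"),
]

def aggregate(events, min_count=2):
    """Count events per (segment, hour, type); suppress cells below
    min_count, a k-anonymity-style cutoff."""
    counts = Counter((seg, hour, kind) for seg, hour, kind in events)
    return {cell: n for cell, n in counts.items() if n >= min_count}

print(aggregate(trips))
# Only the cell with 2+ events survives: ("seg-0042", 8, "pickup") -> 2
```

The city gets what it asked for — how often and when vehicles stop at a given curb — without ever seeing how any particular car moved through the street network.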
This is nice timing for Uber. The ride-hail company is in the midst of a PR glow-up, eight months after dumping former controversial CEO Travis Kalanick in favor of the ultra–apologetic Dara Khosrowshahi. With a spate of announcements this week—about SharedStreets, its acquisition of a city-friendly bike-share company, and a mobile ticketing integration with public transit—Uber is working to prove it can be an excellent partner for cities.
SharedStreets’ success is not quite guaranteed. The platform has plenty of competitors, like Coord from Alphabet’s Sidewalk Labs, and Ford’s Transportation Mobility Cloud. These big dogs want to be the operating system for the modern city, with everyone—governments, companies like Uber and FedEx—feeding their info into their all-knowing, number-crunching transportation platforms. For now, though, cities and private companies say they are attracted to SharedStreets’ non-profit status. With its connection to the National Association of City Transportation Officials, it feels safe. Now the project just needs to execute.
“If SharedStreets could become the place that could assure the private providers that they would protect the information but provide it in a usable form to the city for planning purposes, that would be very helpful,” says Stephen Goldsmith, who studies big data and government at the Harvard Kennedy School. “I think it’s a good first step.”
Now the initiative just has to get more organizations—the bike-share companies, the e-scooters, the cities, the UPSes, the Lyfts, maybe even automakers—onboard. Imagine a glorious world where everyone speaks a common curb language.
1 Correction appended, 4/16/18, 1:15 PM EDT: A previous version of this story misspelled Michal Migurski’s name.

In late 2016, on a conference stage in Palm Springs, California, decision scientist Hannah Bayer made a bold declaration: “We’re going to measure everything we can possibly measure about 10,000 people over the course of the next 20 years or more. We’re going to sequence their genomes; track everywhere they go, everything they eat, everything they buy, everyone they interact with, every time they exercise.” 1
“We” is the Human Project, born as a collaboration between two research labs at New York University — the Institute for the Interdisciplinary Study of Decision Making (a world leader in neuroeconomics) and the Center for Urban Science and Progress (ditto for urban informatics) — with startup funding from the Kavli Foundation. As you might suspect from those origins, the partners are less interested in defining the essential qualities of our species than in understanding how those qualities are operationalized. “Human,” here, is an acronym: Human Understanding through Measurement and Analytics.
Any Quantified Self enthusiasts in that audience who might have relished the chance to be so intimately measured were out of luck. As the Human Project is a scientific study, it needs a representative sample. Researchers started by crunching datasets to identify 100 “micro-neighborhoods” that embody New York City’s diversity, and next they will contact randomly targeted households in those areas, inviting people to join the study, “not just as volunteers, but as representatives of their communities.” With promises of payment and self-enlightenment, recruiters will try to turn 10,000 human subjects into HUMANs. 2
Let’s say your family volunteers. To start, you might submit blood, saliva, and stool samples, so that researchers can sequence your genome and microbiome. You could undergo IQ, mental health, personality, and memory testing; and agree to a schedule of regular physical exams, where the researchers collect more biological samples so they can track epigenetic changes. They might compile your education and employment histories, and conduct “socio-political” assessments of your voting, religious, and philanthropic activity. (As the project leaders did not respond to interview requests, I pieced together this speculative protocol from their promotional materials, academic papers, and public statements.) 3

If you don’t have a smartphone, they may give you one, so they can track your location, activity, and sleep; monitor your socialization and communication behaviors; and push “gamified” tests assessing your cognitive condition and well-being. They may “instrument” your home with sensors to detect environmental conditions and track the locations of family members, so they can see who’s interacting with whom, when, and where (those without a home are presumably ineligible). You may be asked to keep a food diary and wear a silicone wristband to monitor your exposure to chemicals. Audits of your tax and financial records could reveal your socioeconomic position and consumer behavior, and could be cross-referenced with your location data, to make sure you were shopping when and where you said you were.
With your permission, researchers could access new city and state medical records databases, and they could tap public records of your interaction with schools, courts, police, and government assistance programs. They could assess your neighborhood: how safe is it, how noisy is it, how many trees are there? Finally, they could pull city data — some of it compiled and filtered by the Center for Urban Science and Progress — to monitor air quality, toxins, school ratings, crime, water and energy use, and other environmental factors.
What does all this measuring add up to? The researchers assert, “For the first time ever we are now able to quantify the human condition.” By investigating “the feedback mechanisms between biology, behavior, and our environment in the bio-behavioral complex,” they aim to comprehend “all of the factors that make humans … human.” 4 Of course, that requires a huge leap of faith. As Steven Koonin, the theoretical physicist who founded the Center for Urban Science and Progress, observes: “What did Galileo think he was going to see when he turned his telescope on the heavens? He didn’t know.” 5

Now the telescope is turned inward, on the human body in the urban environment. This terrestrial cosmos of data will merge investigations that have been siloed: neuroscience, psychology, sociology, biology, biochemistry, nutrition, epidemiology, economics, data science, urban science. A promotional video boasts that the Human Project has brought together technologists, lawyers, ethicists, and “anthropologists, even!” to ask big questions. Even anthropologists! (It’s notable that several relevant fields — social work, geography, and most of the humanities — don’t make the list.) 6
This is the promise of big data and artificial intelligence. With a sufficiently large dataset we can find meaning even without a theoretical framework or scientific method. As Wired-editor-turned-drone-entrepreneur Chris Anderson famously declared, “Petabytes allow us to say: ‘Correlation is enough.’ We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.” 7 Human Project director Paul Glimcher says that collecting data on “everything we can think of” — at least everything related to biology, behavior, and environment — will allow researchers to model every imaginable “phenotype,” or set of observable characteristics, both for people and the cities they inhabit. 8
Medical researchers have long harbored similar ambitions. The Framingham Heart Study (which began in 1948) and Seven Countries Study (1956) investigated the impact of diet, weight, exercise, genetics, and smoking on cardiovascular health. The Nurses’ Health Study (1976) collected biospecimens and questionnaires from hundreds of thousands of nurses, to better understand how nutrition, weight, physical activity, hormones, alcohol, and smoking affect disease. The English Longitudinal Study of Ageing (2002) periodically interviewed and examined participants over the age of 50, looking for correlations among economic position, physical health, disability, cognition, mental health, retirement status, household structure, social networks, civic and cultural participation, and life expectancy. Some of these studies also considered environmental aspects of public health, although they didn’t have access to today’s rich geospatial data.

Fast-forward to the age of smartphones and neural nets. Apple recently announced that its Health app will allow users to access personal medical records. The company is also developing apps to aid studies and even sponsoring clinical trials. 9 Seemingly everyone is trying to break into the risky but lucrative health tech market, which offers ample opportunities for data harvesting. And many medical providers are happy to cooperate. A few years ago, Google’s AI subsidiary DeepMind and London’s Royal Free Hospital partnered to develop new clinical technologies, but they didn’t adequately inform patients about the use of their data, and were rebuked by the British government. 10 More recently, Facebook has approached hospitals about matching anonymized patient data with social media profiles to find patterns that might inform medical treatment. Plans were “paused” last month, as the Cambridge Analytica scandal came to light. 11 When I brought up this trend in a recent lecture, one of the attendees, a health informatics researcher at a Philadelphia hospital, emphatically declared, “All of us want to work with Google.” It’s easy to see why. More data can lead to better care, and the potential benefits of so-called “precision medicine” are enormous.
To its credit, the Human Project is advised by privacy and security experts and has announced strategies for keeping data safe. Recruiters use videos to secure consent from subjects (some as young as seven years old) who may not understand legalese, and the FAQs state that data will be anonymized, aggregated, and protected from subpoena. According to reports, the data will be compartmentalized so that researchers have access only to the particular slice (or “data mart”) relevant to a given study. These “heavily partitioned data silos” will reside in sealed zones at the project’s data center in Brooklyn: a monitored green zone with limited data; a yellow zone, accessible via thumbprint and ID card, where researchers consult the anonymized data marts; and a high-security red zone, where the “crown jewels” are held. 12 It seems fitting that researchers will have to offer up their own biometrics to access their subjects’ data.
Yet even if personal data are secure, methodological and ethical risks are exacerbated when university research programs are spun off into private companies. The Human Project is run through a partnership with Data Cubed, Inc., a health tech startup founded by Glimcher that aims to monetize the project tools (particularly the Phenome-X phenotyping platform) and ensure that “participants and the study benefit when for-profit companies use insights from [project] data for profitable, socially responsible work.” 13 Given the stakes here, that relationship needs close scrutiny.
What’s more, the blind faith that ubiquitous data collection will lead to “discoveries that benefit everyone” deserves skepticism. Large-scale empirical studies can reinforce health disparities, especially when demographic analyses are not grounded in specific hypotheses or theoretical frameworks. Ethicist Celia Fisher argues that studies like the Human Project need to clearly define “what class, race, and culture mean, taking into account how these definitions are continuously shaped and redefined by social and political forces,” and how certain groups have been marginalized, even pathologized, in medical discourse and practice. Researchers who draw conclusions based on observed correlations — untheorized and not historicized — run the risk, she says, of “attributing health problems to genetic or cultural dispositions in marginalized groups rather than to policies that sustain systemic political and institutional health inequities.” 14 A recent report by Kadija Ferryman and Mikaela Pitcan at the Data & Society Research Institute shows how biases in precision medicine could threaten lives. 15 And history offers many examples of ethical problems that arise when health data circulate beyond the context of their collection. 16

We’ve seen such biases realized in other data-driven models, notably in law enforcement. Contemporary models of “actuarial justice” and “predictive policing” draw correlations between specific risk factors and the probability of future criminal action. Courts and police make decisions based on proprietary technologies with severe vulnerabilities: incomplete datasets, high error rates, demographic bias, opaque algorithms, and discrepancies in administration. 17 “Criminal justice management” software packages like Northpointe’s dramatically overestimate the likelihood of recidivism among black defendants. 18 Even the instruments used to collect data can misfire. Biometric technologies like facial recognition software and fingerprint and retina scanners can misread people of color, women, and disabled bodies. 19 As has always been the case, race and gender determine how “identities, rather than persons, interact with the public sphere.” 20
These problems are compounded as datasets are combined. Palantir software now used by some local governments merges data from disparate city agencies and external organizations, enabling police to collate information about suspects, targets, and locations. 21 In New York, for example, Palantir worked with the Mayor’s Office of Data Analytics and the Office of Special Enforcement to develop a tablet application “that allows inspectors in the field to easily see everything that the City knows about a given location.” 22 Key analyses, even decisions about where to deploy resources, are automated, which means that “no human need ever look at the actual raw data.” 23 Biology, behavior, culture, history, and environment are thus reduced to dots on a map. End users don’t know which agencies supplied the underlying intelligence and how their interests might have shaped data collection. They can’t ask questions about how social and environmental categories are operationalized in the different data sets. They can’t determine whether the data reinscribe historical biases and injustices.
All of this is to say that past efforts to combine vast troves of personal and environmental data should make us wary of new initiatives. As Virginia Eubanks demonstrates in Automating Inequality, “Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the healthcare system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a kind of collective red-flagging, a feedback loop of injustice.” 24

Environmental Epidemiology
While the neuroeconomists on Glimcher’s project gather data on everything “that makes humans … human,” their partners in urban informatics control a voluminous flow of information on what makes New York … New York. With special access to municipal data held by many offices and agencies, researchers at the Center for Urban Science and Progress have built “one of the most ambitious Geographic Information Systems ever aggregated: a block-by-block, moment-by-moment, searchable record of nearly every aspect of the New York City Landscape.” 25 In a video promoting the Human Project, every urban scene is overlaid with a bullseye, a calibration marker, or a cascade of 0’s and 1’s, signaling an aggressive intent to render the environment as data.
The partnership with CUSP may give the Human Project an advantage in the race to quantify health outcomes, but it is not the only such effort. The National Institutes of Health is building All of Us, a research cohort of one million volunteers with “the statistical power to detect associations between environment and/or biological exposures and a wide variety of health outcomes.” 26 The NIH receives data and research support from Verily Life Sciences, an Alphabet company that, in turn, runs Project Baseline, a partnership with Duke, Stanford, and Google that aims to recruit 10,000 volunteers to “share [their] personal health story” — as well as clinical, molecular, imaging, sensor, self-reported, behavioral, psychological, and environmental data — to help “map human health.” 27 Ferryman and Pitcan have diagrammed the complex topology of these projects in their Precision Medicine National Actor Map. Sidewalk Labs, another Alphabet company, recently announced Cityblock Health, which seeks to connect low-income urban residents with community-based health services, including clinics, coaches, tech tools, and “nudges” for self-care. 28 Again, the precise targeting of individual patients and neighborhoods depends on a vast dataset, including in this case Google’s urban data.
All of these initiatives see public health through the lens of geography. The Human Project even refers to its emerging databank as an “atlas.” Programs like Cityblock Health conceive the urban environment not just as a background source of “exposure” or risk, but as a habitat in which biology and behavior inform one another. The qualities of this habitat affect how people make choices about diet and exercise, and how bodies respond to stress or industrial hazards. What seems to set the Human Project apart is that its researchers regard that habitat not as a given, but as something that can be rehabilitated or reengineered. Once researchers have identified relations between the city or neighborhood and the “human condition,” they can tweak or transform the habitat through urban planning, design, and policy. Their insights can also guide “the construction of future cities.” 29 Individual phenotypes are mapped to urban phenotypes, databodies to codespaces.

Constantine Kontokosta, the head of CUSP’s Quantified Community project, is one of the most prominent advocates for this worldview. He wants to “instrument” neighborhoods with sensors and engage citizens in local data collection, so that the urban environment becomes a “test bed for new technologies”; “a real-world experimental site” for evaluating policy and business plans; a 3D model for analyzing “the economic effects of data-driven optimization of operations, resource flows, and quality-of-life indicators.” Machine-learning algorithms will find patterns among data from environmental sensors and residents’ smartphones in order to define each neighborhood’s “pulse,” to determine the community’s “normal” heartbeat. 30 Here, again, we see the resurgence of biomedical metaphors in urban planning.
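Kontokosta’s papers describe this ambition at a high level rather than specifying an algorithm. As a rough illustration of what defining a “normal” heartbeat might involve (entirely hypothetical; the function name, window size, and threshold below are invented, not drawn from any published Quantified Community method), a minimal sketch would fit a rolling baseline to a sensor stream and flag readings that stray from it:

```python
# Hypothetical sketch: flag deviations from a neighborhood's "normal"
# sensor baseline. Window and threshold values are illustrative only.
from statistics import mean, stdev

def neighborhood_pulse(readings, window=24, z_cutoff=3.0):
    """Return indices of readings that deviate sharply from the
    rolling baseline set by the preceding `window` values."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_cutoff:
            anomalies.append(i)
    return anomalies

# A steady hum of hourly noise readings, then one spike:
hourly_noise = [55, 54, 56, 55, 57, 55, 54, 56] * 3 + [95]
print(neighborhood_pulse(hourly_noise, window=24))  # flags the spike at index 24
```

Even this toy version makes the stakes visible: someone has to decide what counts as “normal,” over what window, and what happens to the blocks that get flagged.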
Meanwhile, a group of Human Project-affiliated researchers at Harvard and MIT are using computer vision to assign “Streetscores,” or measurements of perceived safety, to Google Street View images of particular neighborhoods. They then combine those metrics with demographic and economic data to determine how social and economic changes relate to changes in a neighborhood’s physical appearance — its phenotype. 31 This work builds on the Place Pulse project at the MIT Media Lab, which invites participants to vote on which of two paired Street View scenes appears “livelier,” “safer,” “wealthier,” “more boring,” “more depressing,” or “more beautiful.” In such endeavors, Aaron Shapiro argues, “computer-aided, data-mined correlations between visible features, geographic information, and social character of place are framed as objective, if ‘ambient,’ social facts.” 32 The algorithmicization of environmental metrics marks the rise of what Federico Caprotti and colleagues call a new “epidemiology of the urban.” 33 The new epidemiologists echo the “smart city” rhetoric I’ve critiqued often in these pages, but now the discourse is shaded toward the dual bioengineering of cities and inhabitants.
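Before those perceptions can be correlated with demographics, the pairwise Place Pulse votes must be collapsed into one score per image. The published method also corrects for the strength of each image’s “opponents”; the sketch below (function and field names are invented, and the vote log is made up for illustration) shows only the simplest version of the idea, a raw win rate scaled to 0–10:

```python
# Illustrative only: turn pairwise "which scene looks safer?" votes
# into a per-image score via win rate. The actual Place Pulse scoring
# also adjusts for opponents' strength; this sketch omits that step.
from collections import defaultdict

def score_images(votes):
    """votes: list of (winner_id, loser_id) pairs from head-to-head
    comparisons. Returns {image_id: score on a 0-10 scale}."""
    wins, appearances = defaultdict(int), defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {img: 10 * wins[img] / appearances[img] for img in appearances}

votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
print(score_images(votes))  # "a" wins every matchup; "c" wins none
```

The reduction is the point: a crowd’s ambivalent, contextual judgments become a single number that can then be regressed against income or race.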
Cities have long been regarded as biophysical bodies, with their own circulatory, respiratory, and nervous systems — and waste streams. In the mid-19th century, as industrialization transformed cities and spurred their growth, physicians were developing new theories of infectious disease (e.g., miasma, filth), complete with scientific models and maps that depicted cities as unhealthy. City planners and health officials joined forces to advocate for sanitation reform, zoning, new infrastructures, street improvements, and urban parks. 34 Healthy buildings and cities were associated with certain phenotypical expressions, although designers did not always agree on the ideal form. Frederick Law Olmsted’s parks, Daniel Burnham’s City Beautiful movement, Ebenezer Howard’s Garden Cities, 1920s zoning ordinances, Modernist social housing projects and sanatoria: all promised reform, yet produced distinct morphologies. 35
As the 20th century proceeded, epidemiologists focused on germs and the biological causes of disease, while modernist architects turned toward formal concerns and rational master plans. Public health and urban planning drifted apart until the 1960s, when the environmental justice and community health center movements brought them together again. Today, initiatives like the World Health Organization’s European Healthy Cities program and New York City’s Active Design Guidelines encourage the integration of health and planning. Now the focus is on designing cities that promote exercise and social cohesion, and that provide access to healthy food and quality housing. 36 Given the rise of artificial intelligence in both health and urban planning, we might imagine a Streetscore or “pulse” for healthy neighborhoods, which could be used to generate an algorithmic pattern language for urban design: every healthy neighborhood has one playground, two clinics, lots of fresh produce, and a bicycle path.

Where do quantified humans fit in this new planning regime? Consider the fact that China is preparing to use Citizen Scores to rate residents’ trustworthiness and determine their eligibility for mortgages, jobs, and dates; their right to freely travel abroad; and their access to high-speed internet and top schools. “It will forge a public opinion environment where keeping trust is glorious,” the Chinese government proclaims. 37 This is the worst case scenario: obedience gamified, as Rachel Botsman puts it. Humanity instrumentalized.
Will the new data-driven urbanism — with its high-security data centers and black-boxed algorithms and proprietary software — usher in another era of top-down master planning in North America? Perhaps. But at least for now, most urbanists recognize that a city is more than a mere aggregation of spatial features that an AI has correlated with “wellness.” As Jane Jacobs argues, a healthy city is built on social inclusion and communication, and a shot of serendipity. Researchers affiliated with the Human Project are investigating Jacobsian questions like how economic changes affect housing and, in turn, residents’ social networks and health. 38 Others are asking how cities “encourage the free flow of information” and “how geography interacts with … knowledge” — you might say, how a city can be designed to provide the spatial conditions for a public sphere. 39 So in their rhetoric, at least, the project investigators recognize the political importance of involving communities in the research process and in the urban environments that may be reshaped by it.
Kontokosta says his Quantified Community initiatives focus on the neighborhood scale in order to “connect and engage local residents” not only in data collection, but also in “problem identification, data interpretation, and problem-solving.” 40 Locals assume the role of “participatory sensors,” using their own smartphones to collect data and helping build and install ambient sensing devices. They also act as ground-truthers who verify harvested data through direct observations and experiences. On a more fundamental level, Kontokosta says he wants community members involved as research designers who help project leaders understand areas of curiosity and concern. Locals can identify the pressing problems in their neighborhoods and the sources of data that can provide insight. CUSP aims to bring communities typically excluded from “smart city” discussions into the planning process. One might hope that this would lead to a long-term personal investment in neighborhoods and interest in local planning and politics.

Self-Datafication as Civic Duty
The Human Project study design envisions that participants will be motivated by payment and by the promise of insight into their own health and their families’ medical histories. Data are currency. 41 But there’s a civic vision — and a civic aesthetic — behind this work, too. As the researchers gear up to collect data, they have rebranded the website with stock photos representing “diversity” and urban vitality, washed in New York University’s signature violet. The new logo, which evokes a circular genome map, is rendered in watercolor, humanizing all the hard science. 42
Framing the project as a “public service” may help convince New Yorkers to share their most personal data. 43 Contributors are assured that they will be more than mere research subjects; they will also be “partners” in governing the study, responsible for vetting proposals from researchers who want to use the databank. 44 They’ll receive newsletters and updates on research discoveries that their data has made possible, and they’ll have access to visualization tools that allow them to filter and interpret their own data and aggregate data for the study population. Apparently, handing over bank statements and biometrics is a form of activism, too: “instead of giving [their] data for free, to corporations,” they can “take [it] back,” “bring [it] together as a community… to make a better world.” 45 Glimcher maintains that New Yorkers will see the potential to generate new knowledge, therapeutics, and urban policy and will understand “that this is a civic project of enormous importance.” 46
Offering oneself up as data, or as a data-collector, is often framed as an act of civic duty. Participation in the U.S. census and government surveys, for instance, has historically been regarded as part of the “social contract”: citizens yield their personal information, and the government uses it for the public good. 47 In the 19th century, philanthropists, researchers, and activists garnered support for social and industrial reforms by generating an “avalanche of numbers.” 48 And in the early 20th century, as the social sciences popularized new sampling methods, a swarm of surveyors and pollsters began collecting data for other purposes. According to historian Sarah Igo, these modern researchers “billed their methods as democratically useful, instruments of national self-understanding rather than bureaucratic control.” Because they had to rely on voluntary participation, they manufactured consent by emphasizing “the virtues of contributing information for the good of the whole,” for the “public sphere.” Divulging one’s opinions and behaviors to Gallup or Roper pollsters was a means of democratic participation — an opportunity to make one’s voice heard. Pollsters, unlike newspaper editors and political commentators, were “of the people,” and they argued that their methods were “even more representative and inclusive than elections.” 49
Around the same time, A.C. Nielsen, which started off in manufacturing, marketing, and sales research, began acquiring and developing technology that allowed it to monitor people’s radio-listening (and, later, TV-watching and web-surfing) behaviors. Nielsen ratings drove advertising placement and programming decisions. Commercial broadcasters, meanwhile, began funding academic studies and incorporating social-scientific research into their operations, furthering the integration of academic and industry agendas. As Willard D. Rowland shows, the “image of certainty and irrefutability” cultivated by social scientists allowed them to “mesh neatly into the interaction of industrial, political, communications, and academic interests.” 50

Modern survey methods, Igo says, “helped to forge a mass public” and determined how that public saw itself in mediated representations. Surveys shaped beliefs about normalcy and nationality and individuality. 51 But like all methods of data-collection and analysis, those social surveys reflected and reinscribed biases. Consider the canonical Middletown Studies, sociological case studies conducted by Robert and Helen Lynd in the “typical” American city of Muncie, Indiana, in the 1920s and ’30s. Igo shows how the researchers were compelled to paint a picture of cultural wholeness and cohesion, and how they excised non-white, non-native and non-Protestant Americans from their portrait of this “representative” community. 52
We can trace these histories forward to the cutting-edge work being conducted at the Institute for the Interdisciplinary Study of Decision Making. The researchers’ overarching goal, to link decision-making to social policy, is reflected in their motto: “from neurons to nations.” Yet the extraction of neurons will never fully describe the individual subject, let alone the nation in aggregate. Even the myriad data sources collated by the Human Project cannot capture “the human condition.”
As Hannah Arendt observes, the disclosure of who one is “can almost never be achieved as a willful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities.” Who one is, rather than what one is, is revealed to others through speech and action and physical identity. 53 Quantifying humans and habitats turns them into “whats”: into biometric entities and Streetscores. This ontological reduction inevitably leads to impoverished notions of city planning, citizenship, and civic action. Shapiro argues that because planning algorithms like Streetscore embed “indicators of deviance and normativity, worth and risk,” they promote “normative and essentialist … aesthetics.” 54 The computationally-engineered city produces the urban citizen by measuring her. Then, Caprotti argues, “you’re actually producing a subject for governance.” 55

When civic action is reduced to data provision, the citizen can perform her public duties from the privacy of a car or bedroom. If her convictions and preferences can be gleaned through an automated survey of her browser history, network analysis of her social media contacts, and sentiment analysis of her texts and emails, she needn’t even go to the trouble of answering a survey or filling out a ballot. Yet she has no idea how an artificially intelligent agent discerns “what” kind of subject she is, how it calculates her risk of heart attack or recidivism, or how those scores impact her insurance premiums and children’s school assignments. Likewise, the researchers who deploy that agent, like those now working with Palantir and Northpointe, have no need to look at the raw data, let alone develop hypotheses that might inform their methods of collection and analysis. In this emerging paradigm, neither subjects nor researchers are motivated, nor equipped, to challenge the algorithmic agenda. Decision-making is the generation of patterns, a “pulse,” a “score” that will translate into policy or planning initiatives and social service provision. This is a vision of the city — society — as algorithmic assemblage.
And this is the world where we now live. 56 All our bodies and environments are already data — both public and proprietary. 57 So how can we marshal whatever remains of our public sphere to take up these critical issues? How can we respond individually and collectively to the regime of quantitative rationalization? How might we avert its risks, even as we recognize its benefits? We can start by intervening in those venues where pattern recognition is translated to policy and planning. Wouldn’t it be better to use algorithms to identify areas and issues of concern, and then to investigate with more diverse, localized qualitative methods? After the scores are assigned and hotspots are plotted on a map, we could reverse-engineer those urban pulses, dissect the databodies, recontextualize and rehistoricize the datasets that brought them into being. To prepare for this work, the ethicists and social scientists — even anthropologists! — should call in the humanists at every stage of research: from the constitution of the study population; through the collection, analysis, and circulation of data; and finally as those datasets are marshaled to transform the world around us.
Projects like NYU’s and Alphabet’s and the NIH’s could yield tremendous improvements in public health. And even in their methodological and ethical limitations, they can teach us a few things about measuring a public and the spheres in which it is constituted. The methods by which publics and public spheres become visible — to one another and to the sensors that read them — reflect the interests and ideologies of their sponsors. At the same time, these databody projects remind us that public health is a critical precondition for, and should be a regular subject of debate within, the public sphere. 58 They signal that the liberal subject has a physical body, one whose health and illness, pleasure and pain, affect and cognition, race and gender, class and sexual orientation, affect its ability to safely navigate and make itself seen and heard amidst the myriad publics that emerge across our digital and physical worlds.
- Hannah Bayer, “What If We Could Quantify the Entirety of the Human Condition?,” TEDMED, Palm Springs, California, November 30 – December 2, 2016. ↩
- See The Human Project, “Frequently Asked Questions,” for a broad overview of the study methods. In September 2017, the project posted a job ad seeking “18 energetic, passionate, and outgoing” field recruiters who would “conduct in-person household surveys and act as brand ambassadors.” ↩
- The speculative study protocol in these three paragraphs is based on reports in Okan Azmak, Hannah Bayer, Andrew Caplin, Miyoung Chun, Paul Glimcher, Steven Koonin, and Aristides Patrinos, “Using Big Data to Understand the Human Condition: The Kavli HUMAN Project,” Big Data 3:3 (2015), http://doi.org/8bt; Dennis Ausiello and Scott Lipnick, “Real-Time Assessment of Wellness and Disease in Daily Life,” Big Data 3:3 (2015), 203-08, http://doi.org/gb5fkj; Leslie Mertz, “The Case for Big Data,” IEEE Pulse (October 3, 2016); Aviva Rutkin, “Tracking the Health of 10,000 New Yorkers,” New Scientist 228:3044 (October 24, 2015), 20-21; and the FAQs on the Human Project website. The actual study protocol may vary. ↩
- Kavli Foundation, “The Human Project.” Emphasis mine. ↩
- Quoted in Rutkin. Koonin was comparing the ambitions of the Human Project to the Sloan Digital Sky Survey, which “has transformed galactic-level cosmology from a small data science to a big data science and has catalyzed a renaissance in astronomy.” See Azmak, et al. ↩
- The Human Project, “The HUMAN Project (Long Version),” Vimeo. ↩
- Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” Wired (June 23, 2008). See also Rob Kitchin, “Big Data, New Epistemologies and Paradigm Shifts,” Big Data & Society (April-June 2014), 1-12, http://doi.org/gcdzc5. ↩
- Julie Anne Schuck, Social and Behavioral Sciences for National Security: Proceedings of a Summit (Washington, D.C.: The National Academies Press, 2017), 15-16. ↩
- Anesh Chopra and Shafiq Rab, “Apple’s Move to Share Health Care Records is a Game Changer,” Wired (February 19, 2018); Natasha Singer, “Apple, in Sign of Health Ambitions, Adds Medical Records Feature for iPhone,” New York Times (January 24, 2018). ↩
- Julia Powles and Hal Hodson, “Google DeepMind and Healthcare in an Age of Algorithms,” Health and Technology (March 2017), http://doi.org/gcs9ch; Alex Hern, “Royal Free Breached UK Data Law in 1.6m Patient Deal with Google’s DeepMind,” The Guardian (July 3, 2017). ↩
- Jacob Kastrenakes, “Facebook Spoke with Hospitals About Matching Health Data to Anonymized Profiles,” The Verge (April 5, 2018). ↩
- Marc Santora, “10,000 New Yorkers. 2 Decades. A Data Trove About ‘Everything,’” New York Times (June 4, 2017). ↩
- This statement is from a page on the Human Project website that has since been removed: “Powered by d3,” accessed September 2017. ↩
- Celia B. Fisher, “Will Research on 10,000 New Yorkers Fuel Future Racial Health Inequality?” The Ethics and Society Blog (August 30, 2016). Ezekiel Dixon-Román also writes about the ways in which certain bodily norms are embedded into our data, and how “fabrications of race” in education data can shape educational practice and policy. See Ezekiel Dixon-Román, “Toward a Hauntology on Data: On the Sociopolitical Forces of Data Assemblages,” Research in Education 98:1 (2017): 44-58, http://doi.org/cnct. ↩
- Kadija Ferryman and Mikaela Pitcan, Fairness in Precision Medicine (Data and Society, February 2018). ↩
- For example, one of the most important cell lines used in medical research was started in 1951 by doctors at Johns Hopkins Hospital who sampled and cultured cancer cells from an African American patient, Henrietta Lacks, without her permission. Similarly, medical historian Joanna Radin has shown how hereditary and public health data collected from the Pima Gila River Indian Community “has become a dataset now used by statisticians as well as genome scientists.” Her report examined “the persistence of place and personhood in the history of big data.” See Joanna Radin, “Off the Rez: How Indigenous Bodies Became ‘Big Data’” (Max Planck Institute for the History of Science, 2014). See also Kim TallBear, “Genomic Articulations of Indigeneity,” Social Studies of Science 43:4 (2013), 509-33, http://doi.org/gcnznp. ↩
- Robert Brauneis and Ellen P. Goodman, “Algorithmic Transparency for the Smart City,” Yale Journal of Law and Technology 103 (2018), http://doi.org/cncv. Sarah Brayne, “Big Data Surveillance: The Case of Policing,” American Sociological Review 82:5 (2017), 977-1008, http://doi.org/gcsq6p; Glenn Cohen and Harry S. Graver, “Cops, Docs, and Code: A Dialogue Between Big Data in Health Care and Predictive Policing,” UC Davis Law Review 51:437 (2017); Malcom Feeley and Jonathan Simon, “Actuarial Justice: The Emerging New Criminal Law,” Criminal Justice and Crime Control, in David Nelken, ed., The Futures of Criminology (Thousand Oaks: Sage, 1994): 173-201; Andrew Guthrie Ferguson, “Policing Predictive Policing,” Washington University Law Review 94:5 (2017): 1113-95. ↩
- In 2016, ProPublica ran a statistical test on Northpointe software [PDF] showing that, even when isolating the effects of race, age, and gender, black defendants were 77 percent more likely to be flagged as future violent offenders. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica (May 23, 2016). ↩
- Shoshana Amielle Magnet, When Biometrics Fail: Gender, Race, and the Technology of Identity (Durham: Duke University Press, 2011); Simone Browne, Dark Matters: On the Surveillance of Blackness (Durham: Duke University Press, 2015); Riese Jordan Lin, Don’t @ Me: Surveillance, Subject Formation, and the Digital Information Economy, Masters Thesis, San Francisco State University, Fall 2016. ↩
- Karla F. C. Holloway, Private Bodies, Public Texts: Race, Gender, and a Cultural Bioethics (Durham: Duke University Press, 2011): 7. ↩
- Palantir, “Law Enforcement.” See also Mark Harris, “How Peter Thiel’s Secretive Data Company Pushed Into Policing,” Wired (August 9, 2017); Ali Winston, “Palantir Has Secretly Been Using New Orleans to Test its Predictive Policing Technology,” The Verge (February 27, 2018). ↩
- Quoted in Brendan O’Connor, “How Palantir Is Taking Over New York City,” Gizmodo (September 22, 2016). New York City later canceled its Palantir contract because of the company’s lack of transparency and ongoing debates over who controlled the data. See William Alden, “There’s a Fight Brewing Between the NYPD and Silicon Valley’s Palantir,” BuzzFeed News (June 28, 2017). ↩
- Harris, op. cit. ↩
- Virginia Eubanks, Automating Inequality (St. Martin’s Press, 2017): 6-7. ↩
- New York University Institute for the Interdisciplinary Study of Decision Making, “Kavli HUMAN Project: Preliminary Study Design” (Kavli HUMAN Project / New York University, 2015): 19. ↩
- National Institutes of Health, “All of Us” and “Scientific Opportunities.” See also David J. Kaufman, Rebecca Baker, Lauren C. Milner, Stephanie Devaney, Kathy L. Hudson, “A Survey of U.S. Adults’ Opinions About Conduct of a Nationwide Precision Medicine Initiative® Cohort Study of Genes and Environment,” PLOS ONE 11:8 (2016), http://doi.org/gbqdkp. Critics have begun questioning the efficacy and feasibility of the program, given the complexity of other, smaller-scale bio-banking projects. Meanwhile, Israel is engaging in an ambitious project to create a national online medical database including 9 million residents. ↩
- Verily Life Sciences, Project Baseline. See also Adam Rogers, “That Google Spinoff’s Scary, Important, Invasive, Deep New Health Study,” Wired (April 20, 2017). ↩
- Iyah Romm, “Announcing Cityblock: Bringing a New Approach to Urban Health, One Block at a Time,” Sidewalk Talk, the blog of Sidewalk Labs (October 1, 2017). See also the Cityblock website. The Neighborhood Health Hubs are to be more than clinics; they’ll also offer communal areas, classrooms, family counseling, public programming, free internet, and space for community organizations to speak with patients about financial assistance, benefits, and other community services. ↩
- Philip Salesses, Katja Schechtner, and César A. Hidalgo, “The Collaborative Image of the City: Mapping the Inequality of Urban Perception,” PLOS One 8:7 (2013), e68400, http://doi.org/f5bs55. ↩
- Constantine E. Kontokosta, “The Quantifiable Community and Neighborhood Labs: A Framework for Computational Urban Science and Civic Technology Innovation,” Journal of Urban Technology (2016), 7, http://doi.org/cncw; Constantine E. Kontokosta, Nicholas Johnson, and Anthony Schloss, “The Quantified Community at Red Hook: Urban Sensing and Citizen Science in Low-Income Neighborhoods,” Bloomberg Data for Good Exchange Conference (September 25, 2016), 6. See also Kontokosta’s Urban Intelligence Lab. ↩
- Nikhil Naik, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, and Cesar A. Hidalgo, “Do People Shape Cities, or Do Cities Shape People? The Co-Evolution of Physical, Social, and Economic Change in Five Major U.S. Cities,” Harvard Kennedy School, Faculty Research Working Paper Series, RWP15-061 (October 2015); Edward L. Glaeser, Scott Duke Kominers, Michael Luca, and Nikhil Naik, “Big Data and Big Cities: The Promises and Limitations of Improved Measures of Urban Life,” Harvard Kennedy School Faculty Research Working Paper Series, RWP15-075 (December 2015); Nikhil Naik, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, and Cesar A. Hidalgo, “Computer Vision Uncovers Predictors of Physical Urban Change,” PNAS 114:29 (July 18, 2017), 7571-76, http://doi.org/gbsmnm. See also the City Forensics project at the University of California, Berkeley: Sean Arietta, Alexei A. Efros, Ravi Ramamoorthi, Maneesh Agrawala, “City Forensics: Using Visual Elements to Predict Non-Visual City Attributes,” IEEE Transactions on Visualization and Computer Graphics (2014): 2624-33, http://doi.org/226. ↩
- Aaron Shapiro, “Street-level: Google Street View’s Abstraction by Datafication,” New Media and Society (2017), 11, http://doi.org/cncx. ↩
- Federico Caprotti, Robert Cowley, Ayona Datta, Vanesa Castan Broto, Eleanor Gao, Lucien Georgeson, Clare Herrick, Nancy Odendaal, and Simon Joss, “The New Urban Agenda: Key Opportunities and Challenges for Policy and Practice,” Urban Research and Practice (2017), 367-78, http://doi.org/cncz. These researchers are drawing on earlier empirical research on urban appearance by Amos Rapoport, Kevin Lynch, and Jack Nasar, as well as George L. Kelling’s and James Q. Wilson’s “Broken Windows Theory,” which links environmental disorder and incivility to increased crime. ↩
- On the historical relationship between urban planning and public health, see Jon A. Peterson, “The Impact of Sanitary Reform Upon American Urban Planning, 1840-1890,” Journal of Social History 13:1 (Autumn 1979), 83-103; Jason Corburn, “Reconnecting with Our Roots: American Urban Planning and Public Health in the Twenty-first Century,” Urban Affairs Review 42:5 (2007), http://doi.org/cf7mp8; Jocelyn Pak Drummond, “Measuring and Mapping Relationships Between Urban Environment and Urban Health: How New York City’s Active Design Policies Can Be Targeted to Address the Obesity Epidemic,” Masters Thesis, Massachusetts Institute of Technology, 2013 [PDF]; Bonj Szczygiel and Robert Hewitt, “Nineteenth-Century Medical Landscapes: John H. Rauch, Frederick Law Olmsted, and the Search for Salubrity,” Bulletin of the History of Medicine 74:4 (Winter 2000), 708-34, http://doi.org/fp9zqj. ↩
- Urban renewal in Europe after World War I enabled modern architects to design new, hygienic forms of social housing, and many of those same architects also employed the tropes of modernism to design new sanatoria, whose flat roofs, terraces, balconies, and recliner chairs afforded patients plenty of opportunities for open-air relaxation. Margaret Campbell, “What Tuberculosis Did for Modernism: The Influence of a Curative Environment on Modernist Design and Architecture,” Medical History 49:4 (2005), 463-88. See also Giovanna Borasi and Mirko Zardini, eds., Imperfect Health: The Medicalization of Architecture (Montreal: Canadian Centre for Architecture, Lars Muller Publishers, 2012), excerpted in this journal as Giovanna Borasi and Mirko Zardini, “Demedicalize Architecture,” Places Journal (March 2012), https://doi.org/10.22269/120306. ↩
- Hugh Barton and Marcus Grant, “Urban Planning for Healthy Cities: A Review of the Progress of the European Healthy Cities Programme,” Journal of Urban Health: Bulletin of the New York Academy of Medicine 90:1 (2011), http://doi.org/cnc2; Lawrence Frank, Peter Engelke, and Thomas Schmid, Health and Community Design: The Impact of the Built Environment on Physical Activity (Washington, D.C.: Island Press, 2003). ↩
- Rachel Botsman, “Big Data Meets Big Brother as China Moves to Rate its Citizens,” Wired (October 21, 2017). ↩
- Eillie Anzilotti, “Quantifying Everything About Urban Life,” CityLab (October 14, 2016). ↩
- “Kavli HUMAN Project: Preliminary Study Design,” op. cit., 127. ↩
- Kontokosta, “The Quantifiable Community and Neighborhood Labs,” 2. ↩
- For more on incentives for participation in precision-medicine initiatives, see Kaufman, et al., 8. ↩
- For images of the rebrand, see the portfolio of The Human Project’s graphic designer Aerial Sun. ↩
- Schuck, op. cit. ↩
- “Participants as Partners” is the first of six “Core Values” defined on the project’s website: “The Human Project is a partnership between participants, staff, and their company Data Cubed, all of whom play a role in governing the study.” ↩
- In the video cited in footnote 6, Glimcher says, “The Kavli HUMAN project offers each of us this challenge: Take your data back. Instead of giving our data for free, for corporations [sic], let’s bring our data together as a community. Let’s use that data not to sell things, but to make a better world.” ↩
- Quoted in Anzilotti, op. cit. ↩
- The Council of Professional Associations on Federal Statistics, “Providing Incentives to Survey Respondents” (September 22, 1993). I am grateful to Johanna Drucker for highlighting the “social contract” implications of government data. ↩
- Ian Hacking, “Biopower and the Avalanche of Printed Numbers,” Humanities in Society 5 (1982): 279-95. See also Sarah Igo, The Averaged American: Surveys, Citizens, and the Making of a Mass Public (Cambridge: Harvard University Press, 2008): 302-03, n. 6. ↩
- Igo 7, 8, 119, 121. ↩
- Willard D. Rowland, Jr., “The Symbolic Uses of Effects: Notes on the Television Violence Inquiries and the Legitimation of Mass Communication Research,” in Michael Burgoon, ed., Communication Yearbook 5 (International Communication Association, 1982), 391-92. Burgoon describes the Bureau of Applied Social Research at Columbia University, a cooperation largely between Frank Stanton of CBS, Paul Lazarsfeld, Hadley Cantril, and the Rockefeller Foundation, which represented “a structure for the pursuit of audience research in the United States rooted firmly in a combination of fascination with empirical social science methodology, practical marketing research experience and broadcast industry commercial and political needs. … The early efforts of the Bureau involved two important problems faced by the broadcasting industry. One was the vital need to develop a better ratings research capacity that would demonstrate radio’s ability to compete with the print media as an effective advertising tool. The second problem was to show that, in spite of all the then current criticism by some in Congress and the FCC, this privately-held, commercially-motivated, and network-dominated medium was in fact exercising its public trust obligations under the law and was providing a socially responsible service. … [It] served as a vehicle for contract research underwritten by both industry and government, often jointly, and as such it became a model for the development during the post-war arrival of television for a host of centers, institutes and schools of communication research that also depended heavily on commercial and governmental grant funding.” ↩
- Igo, 282. ↩
- Igo, 55. Meanwhile, in the UK, from the 1930s through the 1960s, hundreds of volunteers contributed to the Mass-Observation project, chronicling their own lives and others’ lives in an attempt to understand everyday life in Britain. These volunteers were both data sources and data collectors. And in their attempts to overcome class divisions in recruiting members to their ranks, they imagined themselves to be “build[ing] a new society with the capability to reshape itself through informed civic participation” and “collectivized forms of self-knowledge.” See Tony Bennett, Fiona Cameron, Nelia Dias, Ben Dibley, Rodney Harrison, Ira Jacknis, and Conal McCarthy, “A Liberal Archive of Everyday Life: Mass Observation as Oligopticon,” in Collecting, Ordering, Governing: Anthropology, Museums, and Liberal Government (Duke University Press, 2017), 121, 129. ↩
- Hannah Arendt, The Human Condition, 2nd edition (Chicago: University of Chicago Press, 1998), 159, 179. ↩
- Shapiro, 3, 10-11. ↩
- Quoted in Gregory Scruggs, “The ‘New Urban Citizen’ and the Dangers of the Measurable City,” Citiscope (August 25, 2017). This is more than biopolitics, which Foucault describes as an “endeavor, begun in the 18th century, to rationalize the problems presented to government practice by the phenomena characteristic of a group of living human beings constituted as a population: health, sanitation, birth rate, longevity, race.” From these populations we can model individual phenotypes. See Michel Foucault, “The Birth of Biopolitics” (1979), in Paul Rabinow, ed., Ethics, Subjectivity, and Truth (New York: The New Press, 1997), 73-79. ↩
- See Zeynep Tufekci, “Engineering the Public: Big Data, Surveillance and Computational Politics,” First Monday 19:7 (July 2014). ↩
- Consider the Aadhaar system in India, which assigns residents a unique ID number based on demographic and biometric data. See “Digital India Initiatives Playing Major Role in Smart Cities Mission,” Financial Express (March 16, 2017); Anita Gurumurthy, Nandini Chami, and Sanjana Thomas, “Unpacking Digital India: A Feminist Commentary on Policy Agendas in the Digital Moment,” Journal of Information Policy 6 (2016): 371-402, http://doi.org/cnc3; Rhyea Malik and Subhajit Basu, “India’s Dodgy Mass Surveillance Project Should Concern Us All,” Wired (August 25, 2017); “Will India Overcome Challenges to Build Smart Cities?,” Knowledge @ Wharton (February 26, 2016). ↩
- See Christopher Hamlin, “Public Sphere to Public Health: The Transformation of ‘Nuisance,’” in Steve Sturdy, ed., Medicine, Health and the Public Sphere in Britain, 1600-2000 (London: Routledge, 2002), 189-94. ↩
Shannon Mattern, “Databodies in Codespace,” Places Journal, April 2018. Accessed 23 Apr 2018. <https://placesjournal.org/article/databodies-in-codespace/>