Fitness trackers are redefining what it means to be a human subject
Since getting a Fitbit several months ago, my days have been focused on action and analysis: Wake up, check my sleep stats. Go to the gym, track my workout. Eat breakfast, log my calories. Bike to work, track my miles and steps. Repeat ad infinitum. Variety is the enemy of optimization.
And “optimization” has increasingly become a synonym for “health,” one that conjures a sense of the rational, the ordered, the programmatically ideal. To optimize one’s body is to take it to its functional maximum, to fine-tune its performance to machine-level accuracy.
Then there’s “fitness,” another term that’s been folded into this technological vision of ability and potential. “Fitness,” the Fitbit website states, “is the sum of your life.” And tracking “every part of your day—including activity, exercise, food, weight, and sleep—[helps] you find your fit, stay motivated, and see how small steps make a big impact.” In essence, Fitbit claims that not only is your day-to-day the true marker of fitness, and not only is fitness the key marker of your life, but that quantifying them as a series of inputs and outputs will ultimately improve it, too. Health trackers like the Fitbit—including the Apple Watch, Nike FuelBand, Garmin vívosmart, and Samsung Galaxy Gear—assert that your bodily output is the sum total of your experience, and that sum can be quantified.
THIS IS THE BEDROCK of the Quantified Self (QS) movement, a group of people whose rallying cry is “self knowledge through numbers.” You won’t be surprised to hear that the QS movement was first conceived in San Francisco, by former Wired magazine editors Gary Wolf and Kevin Kelly, in 2007. From determining the peak enjoyment of an album by number of listens to the most effective way to train for physical strength or endurance, QS evangelists believe that gathering data about the self is one of the most effective and meaningful ways to learn about both the human condition and the human body. “If we want to act more effectively in the world,” said Wolf in a 2010 TED Talk, “we have to get to know ourselves better.” By reflecting on ourselves as systems and using data “as a mirror,” Wolf says we can achieve levels of self-awareness—and therefore self-improvement—previously unavailable to us. Who knows what we might achieve once we attain peak personal performance?
Of course, self-tracking has been around for a long time. Cumbersome though they were, computers were small enough to be developed into wearables by fringe enthusiasts in the 1970s; throwing it back even further, women have been tracking their periods since at least 388 AD. We have been seeking ways to understand the body’s behavior for as long as we’ve turned a scientific eye to our own navels. In today’s era of ubiquitous computing, Bluetooth, and microprocessors, it only makes sense that some of our most sophisticated measurement devices be applied to ourselves. Now, the body is best understood through its abstraction: It isn’t until I’ve logged my meals and checked my stats that I’m able to comprehend what I’ve done with my day. There’s little space in the ethos of optimization for the chaotic, unpredictable, and often uncontrollable vicissitudes of being human. Order has always been a human ideal—now that we can apply it to the previously invisible and unquantifiable processes of our physical selves, has it become a defining category of a worthy life?
The answer to the question of why order and optimization are so seductive seems self-evident: Better is better. If we dig into our own incentives for self-improvement, it’s likely we’ll find similar definitions of what “better” means—greater happiness, less pain, more freedom and autonomy. But whether or not optimization through self-regulation is the means to those ends for everyone is another question altogether.
Michel Foucault theorized that a regulated population is easier to control, arguing that regulation itself is the mechanism by which modern-day states manage their constituents, a mechanism he called “biopower.” Health and fitness trackers are tools with enormous potential for smoothing out the kinks in this chain of power from the population level to the individual, not only by gathering detailed social and scientific data on the body and its daily rhythms but also by bringing statistical averages directly to the body. (My average resting heart rate is meaningless without a baseline to compare it to, but the Fitbit app helpfully does just that.) The overall health, wellness, and life expectancy of a population can be more accurately drawn and tightly controlled with better data, which is precisely what fitness trackers provide. Through wearable technologies, we are seeing a new theorization of the modern body from a tech mind-set.
FITBIT'S WEB-BASED dashboard is an array of friendly colors and graphics, full of easy-to-read charts and cheerful icons representing your biometrics. I find sifting through the numbers an enjoyable time sink, a way to represent me to myself. Personally—and this may be anathema to QS diehards—I am less concerned with the strict accuracy of the data; it’s more about seeing trends and feeling accomplished than about acquiring “true” biological information on myself.
The term “self-tracking” is strange. Like following trail signs of an animal in the woods, it conjures a sense of both the past and the future—where it has been, where you will be going. But in the present there is only a watchfulness, an active surveillance. The “self” in self-tracking is surprisingly absent: Whatever peppy, inspirational copy Fitbit uses to move its product, it is a regulatory device, bringing statistical averages and norms to bear on the individual. Regardless of what my sleeping and waking hours are, the Fitbit day ends at 11:59 p.m. and begins at 12 a.m., and my counters, unless I change the default settings, are reset by the clock. My device allows me to compare my resting heart rate and levels of sleep to other women my age. I’m encouraged to move only between the hours of 9 a.m. and 5 p.m., in accordance with the typical desk-jockey lifestyle that still somehow shapes our idea of economic rhythm, despite the relative precarity and unpredictability of lives spent freelancing, contracting, interning, or otherwise shoehorned into the “sharing economy.”
This all happens, of course, with the user’s consent. I shelled out money for the thing, and no government has yet made such devices mandatory. Though in January 2017, Fitbit partnered with UnitedHealthcare and Qualcomm’s cloud-based care platform to roll out a program that would allow users to earn up to $1,500 in healthcare credits, incentivizing employees within their insurance networks to use the trackers.
This surveillance of a body in absentia is a foundational premise of biopower. Emerging in late 18th-century Europe as a new mechanism for control over a population undergoing industrialization, biopower was the technology of demographers, of those who sought control at the population level—birth rates, mortality rates, life expectancy. Biopower, per Foucault, “deals with the population as a political problem,” and develops regulatory mechanisms in order to maintain biological—and therefore also social and political—equilibrium. Rather than having a regent rule by threat of death, we have state powers that rule through regulation: academic and fitness tests, for instance, instead of soldiers marching in the streets.
Biopower works, in Foucault’s estimation, through the mediating force of the norm: a baseline for objectivity “that can be applied to both a body one wishes to discipline and a population one wishes to regularize.” Statistical averages become both a regulatory function for a population and an expectation internalized by any given person. As a former student in California public schools, I recall PE curricula essentially training us for FitnessGram physical-fitness tests, making sure we could at least measure up to the state’s baseline average. It was always a point of pride for my peers when we outperformed other schools, a juvenile satisfaction of superiority attained while unwittingly contributing to state school rankings and, by extension, funding distribution.
The implications of biopower systems go beyond making sure that the nation’s children are, on the whole, physically healthy (to say nothing of the ethical and biological assertions that go into drawing that particular boundary). In drawing power from the regulation of bodies, the norm becomes a deadly force: Anything that does not conform to it can be seen as a justifiable threat to the population. A nation under biopower—in which anyone who is not white, able-bodied, male, and straight is considered a deviation from the norm—is one that can, and does, justify racism and bigotry. This is why no form of visual recording, whether body cameras or livestreams from iPhones, can save the lives of the Black men, women, and children regularly murdered by state police.
When Foucault was theorizing biopower in 1976, he understood it as the new mechanism for exercising sovereign power over subjects. But there’s a new player in town, one that was only just coming to maturity in the late 1970s: the corporation. These days we’re seeing biopower wielded in far greater scope than government regulation. Big Data is its new name, and the ones using it with far more creativity and canniness are based in San Francisco lofts rather than offices in Washington, D.C.
Project Baseline is Alphabet’s (that is, Google’s) newest health study. Its 10,000 subjects (“to represent different ages, backgrounds, and medical histories on behalf of humanity”) are given special watch-style health trackers and sensors to put under their mattresses, and are studied over four years. Participants agree to use the various health trackers daily, to fill out questionnaires and surveys regularly, and to perform up to four annual in-person health tests. All test subjects are volunteers; they are not compensated for their participation, nor are the health tests meant to provide any kind of medical care. If an applicant is not selected, it is likely that Project Baseline has “already met [its] requirements for people of your age, location, health status, etc., or that we do not yet have a study site near you.”
This last part is noteworthy, considering the inextricable link between location and demographics. Though the study aims to be representative of the American population, there are already known limitations to its appeal to universality. Currently, the only study sites are in the San Francisco Bay Area near Stanford University and in North Carolina near Duke University: One can imagine the data sets available in those areas, especially given that volunteering for the project requires one to know about it in the first place. The project hopes to expand globally, but questions about what that expansion looks like are unanswered for the time being.
The limited sample set of Project Baseline is the precise problem with these kinds of extrapolatory projects: There will always be bias depending on how the sample is acquired. And when you’re talking about “creat[ing] a Google Maps for human health,” who gets excluded from the sample is more than just a rounding error. There are entire demographics that would literally be excluded from what constitutes “the human race.” It matters if the requirement of four annual clinic visits makes participation in the study impossible for people who, for instance, have difficulty leaving their homes, whether that’s due to physical or mental disability, or economic reasons such as lack of childcare or free time. It matters if the sample sets can only be derived from areas near clinics with the right tech. It matters if the only people able to participate are those who already believe in the goal. Without addressing these biases, Project Baseline will not be a radical leap forward in human understanding, but a codification of norms that marginalizes more sectors of the population every day.
In his book Disability Aesthetics, cultural scholar Tobin Siebers argues that disability is the most basic form of human disqualification, presumably predicated on biological fact rather than sociocultural conditions. This means that all types of social inequalities, such as racism, sexism, and ableism, stem from a biological justification for oppression—these bodies are less fit, less healthy, less worthy, and ultimately, less human. So when a project like Project Baseline reiterates those justifications rather than challenging them, simply by virtue of whom it lets through the door, we ought to be concerned about which bodies are allowed into futurity.
Health, of course, is already a state issue. State funding determines what foods are available in public-school lunches, what scientific studies get funded, and what insurance premiums look like. The health of the body becomes synecdoche for the health of the state; the precursor to the current iteration of the physical fitness test was the Presidential Fitness Test, a now-defunct testing format that President Kennedy claimed, in a 1960 Sports Illustrated op-ed piece, would combat Americans’ “increasing lack of physical fitness” that he saw as a “menace to our security.” If a healthy body must also conform to standards and regulations developed through state power and state incentives, then the oppressive function of biopower necessarily excludes and disqualifies the disorderly bodies that exist outside of its spectrum. Bodily ideals, codified by scientific argumentation for fitness, are utilized as measures of control—ones that are functionally impossible for certain bodies to achieve. And my Fitbit is the most powerful tool available for this project.
BUT PERHAPS WE'RE dancing around the real issue here, which is death.
From a biopower perspective, the primary goal for programs like Project Baseline is more effective regulation, and therefore more effective control over the lives and deaths of the general population. From an individual perspective, Project Baseline is exciting because it offers up the possibility for deeper understanding of endemic diseases, and therefore the possibility of curing them. The project has great potential to do objectively good things (advance medical understanding) and more questionable ones (allow more granular state control). But the real reason people are volunteering for it is a desire to escape the reaper.
In Tad Friend’s 2017 New Yorker article “Silicon Valley’s Quest to Live Forever,” doctor-cum-hedge-fund-manager Joon Yun describes death as a hackable code: “Thermodynamically, there should be no reason we can’t defer entropy indefinitely. We can end aging forever.” Friend’s exploration of the ways tech-industry players are throwing money at this one seemingly unsolvable problem illustrates a view of death as simply a bug in our otherwise functional operating systems. But it’s also a little presumptuous to argue that the best way to extend lives is through some high-tech fix for shrinking telomeres when there are still millions of people in the United States alone who don’t have access to healthcare, clean water, or food.
The entire impetus for health is that it encourages longevity, and the possibility of staving off a natural death for as long as possible. And what feels closer to avoiding that final fact of biological existence than becoming closer to the machine? As if by technologizing the body, we can transmogrify ourselves into the eternal, efficient, orderly, and immortal cyborgs of our wildest fantasies. But whether you want to theorize it as the final great mystery of existence or as merely a program to be hacked, death is never simple. Perhaps its greatest irony is that it becomes easier to deal with the more you abstract it. At the level of biopower, death is just another metric to control for. At the level of the individual, well.
Even in trying to write the sentence, “When my brother died,” I find myself at a loss to complete it. There have been many sentences since his death that I’ve been unable to finish. Grief is something you learn to live with rather than escape from, a constant companion that sometimes taps you on the shoulder gently and other times lays you out cold on the side of the road, glad that you were at least able to pull over before the real sobbing started. There are no clear metrics for improvement, and no sense of progression. You can go weeks and months feeling like maybe you’re finally done crying before you find yourself on the side of the road again.
In this context, optimization is more than a seductive marketing ploy: It’s a survival strategy. Yes, we must be vigilant about where our data is going, who has access to it, and who benefits from it. We must not allow ourselves to be sorted like so many products in a warehouse, bodies codified and stratified in accordance with fitness, race, and ability. We must not let our data be codified into “objective” knowledge, foreclosing on any possibilities for a dialectic, and repurposed for the benefit of eugenics-by-capitalism.
But I am finding that behavior tracking grounds me. The abstraction of myself into numbers has become the most accessible way for me to be in my body, to remind myself that I am this living thing; the messiness can be left for later. To be able to work toward a quantifiable goal, even one that is more rigid than my body can bear, is to find something tangible in grief. It is satisfying to complete the circles, to fill the bars, to earn the badge. It is comforting to see that I walked farther today than I did yesterday.
My therapist often asks me, “Where are you feeling this in your body?”
I am never able to answer her with any accuracy.
Mailee Hung is a writer and editor based out of San Francisco, California. Reprinted from Bitch (Fall 2017), a quarterly publication of Bitch Media, which is a nonprofit, independent, feminist media organization dedicated to providing and encouraging an engaged, thoughtful feminist response to mainstream media and popular culture. www.bitchmedia.org