
Algorithmic helping

Whether in self-help or the helping professions, the work of improving people and their well-being will be increasingly relegated to algorithms. ChatGPT and its siblings are merely the latest and most visible actors in this game. The use of robots to provide physical help and company to the elderly, apps that surveil and control our physical health and fitness, and algorithmic analysis to measure the benefits of social and educational interventions are now commonplace. Yes, the wholesale, copyright-violating scraping of the entire Internet to create generative AI datasets, then paying minimum wage to the employees traumatized by censoring it, then serving as an echo chamber for its shitty funders like Elon Musk are all great reasons to worry about generative AI. But, not far off in our future, we’re going to need to worry about the ways AI is shaping the consciousness of our children, and of us.

In his latest book, Klara and the Sun, Kazuo Ishiguro explores the use of AI as a childhood companion in a not-too-distant future where friends, and relationships with others generally, are few and far between, at least for some. Like most Ishiguro novels, it is haunting, dystopian, and, unfortunately, probably prescient. Spoiler warning: Klara, the robot AI, becomes friend to her infirm child “owner” and, in her own confusion about her purpose, seeks to heal her by any means necessary. The disposability of consciousness strikes me as part of Ishiguro’s concern in this text, but just as haunting to me is the way a programmed computer might become the thing that, primarily, raises a child. One both remarkably intelligent-sounding and supremely incapable.

Ishiguro’s moral complexity is part of his brilliance. You leave without obvious moralizing, without any clear sense that you’ve sorted right from wrong. This is true as well of Naomi Kritzer’s newest short sci-fi story, available in text and audio for free on Clarkesworld Magazine.

https://clarkesworldmagazine.com/kritzer_05_23/

I won’t ruin the tale for you; it’s a fun one, and I highly suggest a read or listen. In my own work, I’ve studied the ways non-profit agencies, under the guise of “what gets measured gets done” and big data ideologies, are gathering more data than ever and, more and more, using it to drive their activities, despite little evidence that these systems improve client services. The reasoning, of course, always returns to making clients’ lives better. And so too in Kritzer’s newest piece, where an app can be found genuinely improving people’s lives.

And that’s the kicker. Ishiguro, Kritzer, and my own humble research all show that there’s a double-edged sword in all this algorithmic intervention in our lives. Because, yes, it helps. It can make things better. Perhaps it often does. But it is not without costs. Costs that are under-studied, under-discussed, and potentially quite dangerous. Kritzer seems to pull at our innate discomfort with machines intervening intimately in our lives. I, and others like Lauri Goldkind, wonder what these kinds of intervention mean for those we serve (and the workers doing the serving), how they change us, and who else might be negatively impacted.

In my work, I examine this problem in a few ways. First, I tried to understand the infrastructural work being done to establish big data and algorithmic intervention as common practice. I found, unsurprisingly, that these efforts were led primarily by white folx with “good” intentions and little community or client involvement, deciding which metrics mattered most for driving social change in communities. Second, I worked with a small group of young people to understand their experiences interacting with the helping systems that collected data about them. Their message was clear: no data about me, without me. In other words, you can have it with my explicit permission and participation, and you’d better not carry it off anywhere else. Finally, I studied the way non-profit organizations do data work. My colleague and I found organizations and their employees concerned about the data being collected: they felt obligated by their funders, but believed what they collected was poorly used or, worse, perpetuating harm.

Lauri Goldkind, in her work on small data, and I, in my work on what I call the real “big data” (data that matters in the context of a trusting, caring relationship), are both concerned that a hyperfocus on data systems, metrics, and AI is moving at the speed of venture capital rather than of human systems, and worse, that big data misunderstands “big,” replacing meaningful human relationships with big data systems. Kritzer’s piece demonstrates another key concern I haven’t yet found a clear way to research: my fear that human service data systems, with metrics and measures embedded in white supremacy and colonialism (a topic for a future post), will increasingly define the ways we see, understand, interact with, and intervene on those we serve. While Kritzer’s intervention is opt-in, with the goal of helping people be happier, human service data systems serve people who desperately need help and can’t afford to turn it away because of unfavorable terms.

None of this is new, per se. The Charity Organization Societies, seen as one of the founding movements of American social work, believed they could, through friendly visiting and data management, discover and implement interventions that would remake the poor in the image of the better-off (they were embedded in eugenics, by the way, as are the Silicon Valley billionaires helping drive this desire for data in the form of “effective altruism”). The new human service data systems are increasingly capable of realizing this data-fetish wet dream, using client and intervention data to offer suggestions for programmatic improvement and, in some cases, to dictate direct interventions with clients. AI doesn’t have a robotic face; it doesn’t need one. It already has a human face, and it only needs to grow into its control of human interventions. It will be hard to argue that a worker knows better when “the data” shows otherwise.

Algorithmic helping is only just beginning. But it is beginning as uncritically as generative AI generally, and with almost no regulation or transparency. As Ruha Benjamin points out in Race After Technology, believing algorithms will solve our problems, rather than re-enact the racism, colonialism, and so on that provide their basis, is a kind of magical thinking. The new human services, without our intervention, will be less human and more managed by data and algorithm. At present we’re doing almost nothing to fight this. But we could be. One key change: the algorithms themselves are probably less important than who makes them, and even more so, who decides what information they get fed.

