By Steven Carr
Communications Manager | Groundswell Health
From Asimov to Heinlein to Cameron, I’ve read and watched all manner of stories about AI or machine intelligence that, to some degree, frame and inform my interest in and understanding of what’s happening with AI in the real world.
Amid the ongoing corporate arms race for AI dominance, this unique (read: nerdy) perspective helps me think critically about AI, its role in shaping our world, and our responsibility in deciding how we are going to use this technology.
Perhaps there is no industry better suited to lead this effort than the healthcare industry.
I was struck by this thought when I read a recent article about Google (you know, that company we all use for basically everything) barreling ahead with its healthcare AI business plans faster than lawmakers, regulators, or even Google itself can fully understand them. The “move fast and break things” mentality perpetuated by Meta, Google, and other mega-corporations pouncing on AI is exactly the kind of irresponsible behavior the healthcare industry is built to push back against.
When people’s lives and health are at stake, it’s generally understood that it’s a good idea to make sure something works before we do it. Drug trials can take years to complete, new health technologies and techniques face rigorous scrutiny before implementation, healthcare privacy laws are robust and complex, and the list goes on.
The article quotes Sen. Mark Warner (D-Va.), and I think his words sum up the dilemma that healthcare leaders and those in power must wrestle with when it comes to AI in healthcare: “There is great promise in many of these tools to save more lives…they also have the potential to do exactly the opposite — harm patients and their data, reinforce human bias, and add burdens to providers as they navigate a clinical and legal landscape without clear norms.”
We can’t be so afraid of AI that we miss out on the benefits and leaps in innovation it can provide, and we can’t be so thoughtless that we expose people’s health and health data to something we still don’t fully understand. We must avoid the binary choice between labeling AI as a threatening and horrifying creation à la Skynet, or as a harmless but ultimately useless tool like WALL-E (no offense, little guy).
It’s imperative that we stay informed about what the companies we implicitly trust and rely on in our professional and personal lives (how many cookie policies have you blindly accepted this week?) are doing, or planning to do, with AI — and how our representatives are responding to those plans.
We have to engage with the grey areas and universal questions that AI is asking of us, whether we have an answer or not.
If you’re not into reading anything longer than one page, check out this Becker’s article that sums up the original Politico piece.