Can AI predict someone's breakup? - Thomas Hofweber


3m read · Nov 8, 2024

You and your partner Alex have been in a strong, loving relationship for years, and lately you're considering getting engaged. Alex is enthusiastic about the idea, but you can’t get over the statistics. You know a lot of marriages end in divorce, often not amicably.

And over 10% of couples in their first marriage get divorced within the first five years. If your marriage wouldn’t even last five years, you feel like tying the knot would be a mistake. But you live in the near future, where a brand-new company just released an AI-based model that can predict your likelihood of divorce.

The model is trained on data sets containing individuals’ social media activity, online search histories, spending habits, and history of marriage and divorce. And using this information, the AI can predict if a couple will divorce within the first five years of marriage with 95% accuracy. The only catch is the model doesn’t offer any reasons for its results—it simply predicts that you will or won’t divorce without saying why.

So, should you decide whether or not to get married based on this AI’s prediction? Suppose the model predicts you and Alex would divorce within five years of getting married. At this point, you'd have three options. You could get married anyway and hope the prediction is wrong. You could break up now, though there’s no way to know if ending your currently happy relationship would cause more harm than letting the prediction run its course.

Or, you could stay together and remain unmarried, on the off chance that marriage itself is the problem. Though without understanding the reasons for your predicted divorce, you’d never know whether those mystery issues would still emerge to ruin your relationship. The uncertainty undermining all these options stems from a well-known issue with AI: a lack of explainability and transparency.

This problem plagues tons of potentially useful predictive models, such as those that could be used to predict which bank customers are most likely to repay a loan, or which prisoners are most likely to reoffend if granted parole. Without knowing why AI systems reach their decisions, many worry we can’t think critically about how to follow their advice. But the transparency problem doesn’t just prevent us from understanding these models, it also impacts the user’s accountability.

For example, if the AI's prediction led you to break up with Alex, what explanation could you reasonably offer them? That you want to end your happy relationship because some mysterious machine predicted its demise? That hardly seems fair to Alex. We don’t always owe people an explanation for our actions, but when we do, AI’s lack of transparency can create ethically challenging situations.

And accountability is just one of the tradeoffs we make by outsourcing important decisions to AI. If you’re comfortable deferring your agency to an AI model, it’s likely because you’re focused on the accuracy of the prediction. In this mindset, it doesn’t really matter why you and Alex might break up—simply that you likely will.

But if you prioritize authenticity over accuracy, then you'll need to understand and appreciate the reasons for your future divorce before ending things today. Authentic decision making like this is essential for maintaining accountability, and it might be your best chance to prove the prediction wrong.

On the other hand, it’s also possible the model already accounted for your attempts to defy it, and you’re just setting yourself up for failure. 95% accuracy is high, but it’s not perfect—that figure means 1 in 20 couples will receive a false prediction.
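To spell out the arithmetic behind that "1 in 20" figure, here is a minimal sketch. It assumes (the article doesn't say) that the 95% accuracy applies uniformly to every couple and that each couple receives exactly one prediction; the 1,000-couple figure is purely illustrative.

```python
# Minimal sketch of the "1 in 20" arithmetic.
# Assumptions (not from the article): 95% accuracy applies uniformly to every
# couple, and each couple receives exactly one prediction.

accuracy = 0.95
couples = 1000  # hypothetical number of couples asking for a prediction

wrong_predictions = couples * (1 - accuracy)
print(f"Expected wrong predictions out of {couples} couples: {wrong_predictions:.0f}")
# -> 50 out of 1000, i.e. 1 in 20
```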

And as more people use this service, the likelihood increases that someone who was predicted to divorce will do so just because the AI predicted they would. If that happens to enough newlyweds, the AI's success rate could be artificially maintained or even increased by these self-fulfilling predictions. Of course, no matter what the AI might tell you, whether you even ask for its prediction is still up to you.
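To make the self-fulfilling-prophecy point concrete, here is a hypothetical simulation. Every parameter below (base divorce rate, the model's underlying accuracy, the chance that a "will divorce" prediction itself triggers a breakup) is an assumption chosen for illustration, not a figure from the article. The point it demonstrates is only the direction of the effect: when some couples divorce because they were told they would, the measured accuracy creeps above the model's underlying accuracy.

```python
import random

# Hypothetical simulation of how self-fulfilling predictions can inflate
# a model's measured accuracy. All parameters are illustrative assumptions.

random.seed(0)
N = 100_000                # couples using the service
base_divorce_rate = 0.10   # couples who would divorce within five years anyway
true_accuracy = 0.95       # chance the prediction matches the "natural" outcome
compliance = 0.30          # chance a "will divorce" prediction itself causes a divorce

measured_correct = 0
for _ in range(N):
    would_divorce = random.random() < base_divorce_rate
    # The model predicts the natural outcome with probability true_accuracy.
    prediction = would_divorce if random.random() < true_accuracy else not would_divorce

    outcome = would_divorce
    # Self-fulfilling effect: some couples told they will divorce end up
    # divorcing even though they otherwise would not have.
    if prediction and not would_divorce and random.random() < compliance:
        outcome = True

    measured_correct += (prediction == outcome)

print(f"Underlying accuracy (no feedback): {true_accuracy:.0%}")
print(f"Measured accuracy with self-fulfilling effect: {measured_correct / N:.1%}")
```

Run with these numbers, the measured accuracy comes out slightly above 95%, because some predictions that would have been wrong are "rescued" by the breakups they cause.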
