
Can AI predict someone's breakup? - Thomas Hofweber


3 min read · Nov 8, 2024

You and your partner Alex have been in a strong, loving relationship for years, and lately you're considering getting engaged. Alex is enthusiastic about the idea, but you can’t get over the statistics. You know a lot of marriages end in divorce, often not amicably.

And over 10% of couples in their first marriage get divorced within the first five years. If your marriage wouldn’t even last five years, you feel like tying the knot would be a mistake. But you live in the near future, where a brand-new company just released an AI-based model that can predict your likelihood of divorce.

The model is trained on data sets containing individuals’ social media activity, online search histories, spending habits, and history of marriage and divorce. And using this information, the AI can predict if a couple will divorce within the first five years of marriage with 95% accuracy. The only catch is the model doesn’t offer any reasons for its results—it simply predicts that you will or won’t divorce without saying why.

So, should you decide whether or not to get married based on this AI’s prediction? Suppose the model predicts you and Alex would divorce within five years of getting married. At this point, you'd have three options. You could get married anyway and hope the prediction is wrong. You could break up now, though there’s no way to know if ending your currently happy relationship would cause more harm than letting the prediction run its course.

Or, you could stay together and remain unmarried, on the off-chance marriage itself would be the problem. Though without understanding the reasons for your predicted divorce, you’d never know if those mystery issues would still emerge to ruin your relationship. The uncertainty undermining all these options stems from a well-known issue with AI around explainability and transparency.

This problem plagues tons of potentially useful predictive models, such as those that could be used to predict which bank customers are most likely to repay a loan, or which prisoners are most likely to reoffend if granted parole. Without knowing why AI systems reach their decisions, many worry we can’t think critically about how to follow their advice. But the transparency problem doesn’t just prevent us from understanding these models, it also impacts the user’s accountability.

For example, if the AI's prediction led you to break up with Alex, what explanation could you reasonably offer them? That you want to end your happy relationship because some mysterious machine predicted its demise? That hardly seems fair to Alex. We don’t always owe people an explanation for our actions, but when we do, AI’s lack of transparency can create ethically challenging situations.

And accountability is just one of the tradeoffs we make by outsourcing important decisions to AI. If you’re comfortable deferring your agency to an AI model, it’s likely because you’re focused on the accuracy of the prediction. In this mindset, it doesn’t really matter why you and Alex might break up—simply that you likely will.

But if you prioritize authenticity over accuracy, then you'll need to understand and appreciate the reasons for your future divorce before ending things today. Authentic decision making like this is essential for maintaining accountability, and it might be your best chance to prove the prediction wrong.

On the other hand, it’s also possible the model already accounted for your attempts to defy it, and you’re just setting yourself up for failure. 95% accuracy is high, but it’s not perfect—that figure means that, on average, 1 in 20 couples will receive an incorrect prediction.
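The arithmetic behind that figure can be sketched in a few lines (a hypothetical illustration only: it assumes the stated accuracy applies uniformly to every couple, and the number of couples is made up):

```python
# Illustrative sketch: how many couples receive a wrong prediction
# if the model is correct 95% of the time, applied uniformly.
couples = 1000        # hypothetical number of couples who ask the model
accuracy = 0.95       # the model's stated accuracy

wrong = round(couples * (1 - accuracy))
print(wrong)          # 50 couples out of 1000, i.e. 1 in 20
```

In reality the error rate would likely differ across groups (for example, couples whose behavior resembles the training data less), which is exactly the kind of detail a non-transparent model hides.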

And as more people use this service, the likelihood increases that someone who was predicted to divorce will do so just because the AI predicted they would. If that happens to enough newlyweds, the AI's success rate could be artificially maintained or even increased by these self-fulfilling predictions. Of course, no matter what the AI might tell you, whether you even ask for its prediction is still up to you.
