AI and suicide prevention

Tango
Posts: 21
Topic starter
(@tango)
Active Member
Joined: 9 years ago

Ten years from now, will artificial intelligence, blood tests, and other technology effectively predict and prevent suicide attempts?

https://www.theglobeandmail.com/canada/article-can-an-algorithm-stop-suicides-by-spotting-the-signs-of-despair/

9 Replies
Elsa
Posts: 2249
Admin
(@elsa)
Prominent Member
Joined: 19 years ago

I don't think a machine can predict what a person is going to do when they have free will. However, a weak-minded person can lose sight of the fact that they are running their own life... and begin to act in accord with the machines around them.

But if you are a free-thinking person, out in the world, interacting with actual humans... well, your actual soul is out of reach of an algorithm.

Allie
Posts: 623
(@allie120)
Reputable Member
Joined: 10 years ago

My God. Not everything can be accurately boiled down to an algorithm.

NotMyCircus
Posts: 167
(@notmycircus)
Trusted Member
Joined: 13 years ago

Are we really so out of touch with each other as a collective that a MACHINE has to check social media to see if our loved ones are suicidal?

I can tell you now—the people I know who have either committed suicide, attempted it, or simply been in such a dark place that it scared those who know them would NOT appreciate this. If someone wants to die badly enough, they’ll find a way to do it—or they’ll change their mind on their own. People I know who have been in a dark place either push others away or let everyone know they can handle themselves and are NOT suicidal, not depressed, and do not need your “help”, thankyouverymuch. What’s a computer going to do with that? Algorithms will just piss these people off. Some of them might even bust the machine and tell it to algorithm THAT.

NotMyCircus
Posts: 167
(@notmycircus)
Trusted Member
Joined: 13 years ago

Also—the parents in that article saw that their son needed help. He was taken to the ER, where they assessed him and sent him home! There’s something rotten in the healthcare system. So are you telling me that if an inhuman machine gives the doctor some data from the patient, they’ll accept that over the legitimate concerns of the live human beings who talk to the patient and observe them every day?

Allie
Posts: 623
(@allie120)
Reputable Member
Joined: 10 years ago

You make a great point. Also, when he got to the hospital, he may have convinced them that he was fine. Where’s the line between committing someone, even overnight, and letting them go?

I absolutely agree that pitting some formula against eyewitness accounts is Orwellian. How are they getting this information? Parsing social media accounts? Off topic but not really: a lawmaker in NYC wants to propose that people submit their social media accounts going back three years when they apply for a pistol permit. Whatever a person believes about 2A, how much faith should we place in algorithms?
