Facebook is now working on new ways to help troubled users with the use of artificial intelligence and pattern recognition, in addition to expanding its suicide prevention tools.
The algorithm would immediately send a report to a human reviewer, who could then contact the user with suggestions and resources to help if appropriate. At the moment, Facebook relies on a human reporting system for potential suicides, where friends of users can click a button to tell the company about concerning updates.
The new tools are similar to what Facebook launched back in 2015, which allows friends to flag a troubling image or status post. Now, this feature is available on Facebook Live — with the goal of connecting a user with a mental health expert in real time. If Facebook believes a reported Live streamer may need help, that user will receive notifications for suicide prevention resources while they're still on the air. The person who reported the video will also get resources to personally reach out and help their friend, if they wish to identify themselves.
The broadcaster at risk will also be given the option to contact a friend, reach a mental health helpline, or see tips.
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Elon Musk's vision of symbiosis between man and machine is still a long way off; it would require a much more granular understanding of brain networks, going beyond the basics of motor control to more complex cognitive faculties such as language and metaphor.
Professor Panagiotis Artemiadis of Arizona State University has been trying to get more bandwidth using a 128-electrode EEG cap to allow a human to control a swarm of flying robots with their brain.
Humans won't become irrelevant until machines can replicate the human brain, something Nicolelis believes is not possible.
Nicolelis argues that the brain, contrary to what Musk and Singularity proponents like Ray Kurzweil say, is not computable, because human consciousness is the result of unpredictable, nonlinear interactions among billions of cells.
He agrees with Musk that if we can interface directly with machines, we can produce a "quantum leap" beyond what digital infrastructure has produced to date, but he predicts that humans will retain ultimate control.
Under these circumstances, human skills diminish and people become subservient to machines.
Better communication between humans and machines, particularly the transmission of emotional signals from humans, will be a powerful tool for building trust in automated systems, added Artemiadis.
"It's about making the machine more intuitive using brain signals to understand whether the human is distracted or tired."
Computer scientists from Google-owned DeepMind have studied how their AI behaves in social situations, using principles from game theory and the social sciences. They found that agents can act in an "aggressive manner" when they stand to lose out, but will work as a team when there is more to be gained by cooperating.
For the research, the AI was tested on two games: a fruit gathering game and a Wolfpack hunting game.
These are both basic, 2D games that used AI characters (known as agents) similar to those used in DeepMind's original work with Atari.
In the gathering game, the systems were trained using deep reinforcement learning to collect apples (represented by green pixels).
When a player, or in this case an AI agent, collected an apple, it received a reward of 1 and the apple disappeared from the game's map.
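The reward rule described above can be sketched as a tiny gridworld. This is an illustrative toy, not DeepMind's actual environment or code; the class and cell layout are made up.

```python
# Minimal sketch of the gathering game's reward rule.
# Illustrative only; not DeepMind's actual environment.

class GatheringGrid:
    def __init__(self, apples):
        # `apples` is a set of (row, col) cells containing an apple (a green pixel)
        self.apples = set(apples)

    def step(self, agent_pos):
        """Move an agent onto a cell and return the reward for that step."""
        if agent_pos in self.apples:
            self.apples.remove(agent_pos)  # the apple disappears from the map
            return 1                       # collecting an apple yields a reward of 1
        return 0                           # stepping on an empty cell yields nothing

env = GatheringGrid(apples={(0, 1), (2, 2)})
print(env.step((0, 1)))  # agent lands on an apple -> 1
print(env.step((0, 1)))  # apple already gone -> 0
```

In the real experiment this reward signal is what the deep reinforcement learning agents maximize; the "aggressive" tagging behavior emerges only from that incentive, not from any hand-coded rule.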
"Intuitively, a defecting policy in this game is one that is aggressive i.e., involving frequent attempts to tag rival players to remove them from the game," the researchers write in their paper.
For a deeper understanding, check out DeepMind's post: Understanding Agent Cooperation
Design intelligent agents to solve real-world problems, including search, games, machine learning, logic, and constraint satisfaction problems.
What do self-driving cars, face recognition, web search, industrial robots, missile guidance, and tumor detection have in common?
They are all complex real-world problems being solved with applications of artificial intelligence (AI).
This course will provide a broad understanding of the basic techniques for building intelligent computer systems and an understanding of how AI is applied to problems.
You will learn about the history of AI, intelligent agents, state-space problem representations, uninformed and heuristic search, game playing, logical agents, and constraint satisfaction problems.
Hands-on experience will be gained by building a basic search agent. Adversarial search will be explored through the creation of a game, and an introduction to machine learning includes work on linear regression.
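A basic search agent of the kind built in the course can be sketched as breadth-first (uninformed) search over a state space. The graph below is a made-up example, not course material:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: returns a shortest path from start to goal,
    or None if the goal is unreachable."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: nodes connected by directed edges (illustrative)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", "D", lambda s: graph[s]))  # -> ['A', 'B', 'D']
```

Heuristic search (such as A*) refines the same loop by ordering the frontier with a cost estimate instead of expanding states in arrival order.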
What you'll learn:
- Introduction to Artificial Intelligence and intelligent agents, history of Artificial Intelligence
- Building intelligent agents (search, games, logic, constraint satisfaction problems)
- Machine Learning algorithms
- Applications of AI (Natural Language Processing, Robotics/Vision)
- Solving real AI problems through programming with Python
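The machine learning portion mentioned above includes linear regression, which can be sketched in plain Python as ordinary least squares on a single feature (the data here is made up for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(a, b)  # -> 2.0 0.0 (the data lies exactly on y = 2x)
```

In practice the course would likely use a library such as NumPy or scikit-learn, but the closed-form solution above is the same idea.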