A new algorithm can predict when critically ill patients will die. Here’s why that’s a good thing: it could ease their final days.

By using an artificial-intelligence algorithm to predict patient mortality, a research team from Stanford University is hoping to improve the timing of end-of-life care for critically ill patients. In tests, the system proved eerily accurate, correctly predicting mortality outcomes in 90 percent of cases. But while the system can predict when a patient might die, it still cannot tell doctors how it reached its conclusion.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law, and would it be liable if its actions hurt someone or if something went wrong? In a more frightening scenario, might these machines rebel against humans and seek to eliminate us altogether? If so, they would represent the culmination of evolution.

From policing and healthcare to defence and dating sites, AI is being woven into the fabric of our lives, for better and for worse.

At Stanford and Google, Fei-Fei Li is leading the development of artificial intelligence and working to diversify the field.

If developed and used sensitively, artificial intelligence systems could go a long way toward mitigating these inequalities by removing human bias. A careless approach, however, could make the situation worse.

But he may struggle to get his way without hard evidence of what, exactly, needs regulating.

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It’s perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, “Matrix”-like, as some sort of human battery.
