Oh well…
Artificial Intelligence is one of the hottest technologies around today, and when it comes to predictions it has been immensely helpful. Now you can add another feather to its cap: AI can use a person's health records to predict when a patient might die. We believe this information will be very helpful in the treatment of patients.
Reading the cards

In a recent paper published in Nature, the authors say that data extracted from health records can be fed into a machine learning model to increase the accuracy of such predictions.
In trials conducted at two US hospitals, researchers used AI models to improve the accuracy of predictions, such as a patient's time of death. This was achieved by feeding a huge dataset to a neural network. The data consists of information about a patient's vitals and medical history. Once this data is ingested, a novel algorithm generates a timeline of events such as length of stay and death. Like most AI-driven applications, this happens in near real time. A toy sketch of the general recipe follows below.
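The paper's actual models are far larger and run on complete medical records, but the general shape of the pipeline is easy to sketch. Below is a minimal, hypothetical Python example: synthetic "patients" with a handful of invented vital-sign and history features, a small neural network, and a predicted probability of in-hospital death for each held-out patient. The features, the risk rule used to fabricate labels, and all the numbers are made up for illustration; nothing here comes from the paper itself.

```python
# A minimal, illustrative sketch -- NOT the paper's model or data.
# Synthetic patient features stand in for real EHR records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 5000

# Each row is one (fake) patient: vitals plus a bit of history.
X = np.column_stack([
    rng.normal(80, 15, n_patients),    # heart rate (bpm)
    rng.normal(120, 20, n_patients),   # systolic blood pressure (mmHg)
    rng.normal(37, 0.6, n_patients),   # body temperature (deg C)
    rng.integers(18, 95, n_patients),  # age (years)
    rng.poisson(1.5, n_patients),      # prior admissions (count)
])

# Fabricated risk rule, used only to give the toy labels some signal.
risk = 0.02 * (X[:, 0] - 80) + 0.03 * (X[:, 3] - 50) + 0.4 * X[:, 4]
y = (risk + rng.normal(0, 2, n_patients) > 3).astype(int)  # 1 = died

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feed-forward neural network, standing in for the far
# richer sequence models described in the paper.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                  random_state=0),
)
model.fit(X_train, y_train)

# Predicted probability of death for each held-out patient.
p_death = model.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, p_death), 3))
```

The real systems differ mainly in scale and input: instead of five hand-picked numbers, they consume a patient's entire record as a sequence of events, which is what makes the predicted timelines possible.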
Applications of AI predictions

By applying these predictions to their workflows, hospitals can improve treatment protocols, raise the quality of patient care, and even anticipate emergencies before they occur. Such techniques will also free healthcare professionals from the slow work of identifying symptoms, thus improving efficiency.
AI has recently seen increasing use in the medical field. The latest innovations allow health professionals to detect conditions such as cataracts at an early stage, and AI-driven models have been able to diagnose lung cancer and heart disease better than humans.
However, there are caveats to the feasibility of integrating AI with patient databases. Most medical records are decentralised, and many are not kept in a standard format, which makes widespread application difficult; the toy example below shows the kind of mismatch involved. This will, however, change with time.
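To make the format problem concrete, here is a small hypothetical sketch: two hospitals recording the same blood-pressure reading with different field names, nesting, and even units, and the normalisation step any shared model would need first. All field names, record shapes, and values are invented for illustration.

```python
# Hypothetical example: two hospitals store the same measurement in
# incompatible shapes; a shared model must normalise them first.
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    code: str     # what was measured, e.g. "systolic_bp"
    value: float
    unit: str

def from_hospital_a(rec: dict) -> Observation:
    # Hospital A: flat record, value already in mmHg.
    return Observation(rec["pid"], "systolic_bp",
                       float(rec["sys_bp"]), "mmHg")

def from_hospital_b(rec: dict) -> Observation:
    # Hospital B: nested record, value stored in kPa.
    kpa = float(rec["vitals"]["systolic"]["reading"])
    return Observation(rec["patient"]["id"], "systolic_bp",
                       kpa * 7.50062, "mmHg")  # convert kPa -> mmHg

a = from_hospital_a({"pid": "A-17", "sys_bp": "128"})
b = from_hospital_b({"patient": {"id": "B-04"},
                     "vitals": {"systolic": {"reading": "16.5"}}})
print(a)
print(b)
```

Multiply this by thousands of fields and hundreds of hospital systems and the scale of the standardisation problem becomes clear.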
You must now be thinking, 'Well, this is all nice and good, Aspioneer, but what about the negatives?' Well…
‘Gimme everyone’s data’

Since AI-driven applications require large, centralised datasets to work properly, there are growing concerns that placing such sensitive data in the hands of a few entities is a dangerous idea.
There is a worry among professionals that if a company like Google owns all this data, we are creating massive future problems for ourselves. How can we be sure this data won't be shared? What stops anyone from applying this data in cases where it is prohibited? Can we guarantee its safety? At Aspioneer, we think these are valid questions, and they must be addressed.
Caution: Bumpy road ahead

Quite recently, the UK government expressed concerns that DeepMind Health, owned by Google, could become a monopoly. The UK government also believes that DeepMind broke UK law by collecting patient data without the informed consent of the patients involved.
Furthermore, some medical professionals have expressed concerns that AI can have negative effects if it is not implemented and monitored properly. The American Medical Association (AMA), while acknowledging the benefits AI can bring to medicine, notes that AI tools must "strive to meet several key criteria, including being transparent, standards-based, and free from bias." It is not clear how one makes an algorithm transparent. The AMA also notes that the US regulatory framework lags far behind the technology, with the relevant legislation passed over two decades ago.
Anyone can be biased

There is a reason people are excited about using AI in the medical field. Medicine is notorious for being biased against certain groups of people, such as women. The hope is that AI can eliminate this bias, since machines are unfeeling.
However, recent observations tell us that since AIs are coded by people, there is a good chance they will turn out to be biased too. As Cathy O'Neil, the author of "Weapons of Math Destruction", puts it: "Algorithms replace human processes, but they're not held to the same standards, people trust them too much". Some might say people are being unnecessarily paranoid, but we think there is some truth to these observations.