Brown eye

It’s been hot news for the last 24 hours: Google’s developed a machine learning algorithm that can scan images of your eyes and predict your risk of heart disease. While this tech isn’t ready for clinical use yet (it needs more testing), it holds a lot of promise: it predicts heart disease about as accurately as current medical methods, and it’s fast because it doesn’t require analyzing blood results.

What are the implications? Once this tool goes live in a medical setting, it’ll save doctors and patients time, time that doctors can use to better treat patients.

Photo by Liam Welch on Unsplash

Burmese temple affected by earthquake

It’s hard to detect smaller earthquakes in areas that have few seismic stations, and the less data you have, the harder it gets. Now, with a convolutional neural network developed by Harvard and MIT researchers, seismologists can better sift through the data to find earthquakes. By feeding the network training sets from seismically inactive regions, the researchers taught it to recognize and disregard regular activity while parsing the data, allowing it to clearly identify tremors.

What are the implications? We can better identify earthquakes and tremors with less data.
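
As a rough sketch of the idea (and emphatically not the researchers’ actual model), a small 1D convolutional network can classify short seismic waveform windows as routine noise versus a likely event. The architecture, window length, and channel layout below are illustrative assumptions.

```python
# Minimal sketch of a waveform classifier; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    def __init__(self, n_samples: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=9, padding=4),  # 3 channels: N/E/Z ground-motion components
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)  # two classes: noise vs. tremor

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Training would pair windows containing known events with windows drawn from
# quiet periods, so the network learns to disregard routine background activity.
model = WaveformClassifier()
dummy_batch = torch.randn(8, 3, 1000)  # 8 windows, 3 components, 1,000 samples each
logits = model(dummy_batch)            # shape: (8, 2)
```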

Photo Found Here: https://pixabay.com/en/temple-shack-earthquake-burma-2740180/

Walnut in a shell that looks like a brain

Researchers at the University of Pennsylvania conducted a brain study in which they used personalized algorithms to trigger pulses in specific brain regions, hoping to improve their patients’ memories . . . and they did. Patients’ word recall increased by 15 percent. Essentially, the researchers are working to build a brain-activity reader that can tell when the brain is effectively encoding memories. If it isn’t, they send pulses to certain areas of the brain that kick brain activity up a notch.

Photo Found Here: https://pixabay.com/en/walnut-nut-shell-nutshell-open-3072652/

Open book with handwriting

In the 1800s, historians discovered an indecipherable text, now called the Voynich manuscript. It dates back to the 15th century, but no one has been able to figure out its language of origin or what it says. More recently, however, it’s been reported that researchers at the University of Alberta used AI to identify over 80 percent of the manuscript’s words as Hebrew (they initially thought it was probably Arabic), and they believe they’ve translated the text’s first sentence. However, The Verge hotly disputes that original reporting, saying it wasn’t AI after all and that the original study is suspect.

Photo by Kiwihug on Unsplash

Lady with head down

IBM researchers used AI to help predict the likelihood of study subjects developing psychosis. They employed AI to parse transcripts from a prior study and predict the potential for mental illness from linguistic indicators. The AI was 83 percent accurate in its predictions, and interestingly, one indicator was that people at risk used fewer possessive pronouns while talking than those who weren’t at risk.
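
As a toy illustration of that last point, the snippet below computes the rate of possessive pronouns in a transcript. The word list and simple tokenizer are my own assumptions; the actual study combined many linguistic signals in a trained model rather than relying on a single count.

```python
# Toy feature extractor: fraction of tokens that are possessive pronouns.
import re

POSSESSIVES = {"my", "mine", "your", "yours", "his", "her", "hers",
               "its", "our", "ours", "their", "theirs"}

def possessive_rate(transcript: str) -> float:
    """Return the fraction of tokens in the transcript that are possessive pronouns."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    return sum(t in POSSESSIVES for t in tokens) / len(tokens)

sample = "My brother and I took our dog to the park near my house."
print(f"possessive rate: {possessive_rate(sample):.3f}")  # 3 of 13 tokens
```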

Photo by Volkan Olmez on Unsplash

Toothbrush with toothpaste on it

Scientists used an “AI researcher” to discover how one toothpaste ingredient may be able to take down increasingly drug-resistant malaria. This AI helped them figure out which malarial enzyme this toothpaste ingredient inhibits. This is powerful knowledge because researchers can now develop a drug that could hit malaria during two developmental stages: liver and bloodstream.

Photo Found Here: https://www.pexels.com/photo/clean-mouth-teeth-dentist-40798/

A human eye looking at the camera

Scientists have taken a somewhat successful step toward visualizing what your brain sees. In the past, scientists have been able to decode and visualize basic images from human thought, but this new technique is more robust. It decodes more complex images from human brain waves. It not only shows what a person is currently seeing, it also shows an image a person is remembering, and it does it in color.

This video shows some of their research. I think there’s a ways to go before the side-by-side images clearly resemble each other, but there are definitely similarities between the two.


Photo by Daniil Avilov on Unsplash

A building with flood waters coming up to the bottom of the door

As coastal communities face more potential flooding, researchers want to be ready to warn people about incoming floods and protect them. To do this, they need a reliable way to identify early signs of flooding. They’ve seen predictive success while developing this AI application, which they’re training to spot dangerous water levels using data from networks like MyCoast (a network that monitors US coasts) and from social media.

Photo by Cristina Gottardi on Unsplash

Road with palm trees and cars parked on the side

Researchers at Stanford have trained a machine learning model to find cars in Google Street View images, identify each car’s make and model, and glean other information from the neighborhood photos. After processing 50 million images, they used another model to predict the demographics of your neighborhood, including income, race, education, and even likely voting patterns.
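
To make the second stage of that pipeline concrete, here’s a simplified, entirely hypothetical sketch: once the cars in each neighborhood’s images have been detected and categorized, the per-neighborhood counts become features for a regression model that predicts a demographic variable. The categories, numbers, and model choice are illustrative assumptions, not the Stanford setup.

```python
# Hypothetical second-stage model: car-category counts per neighborhood -> median income.
import numpy as np
from sklearn.linear_model import Ridge

# Made-up features: counts of sedans, pickups, luxury cars, and hybrids
# detected in each neighborhood's Street View images.
car_counts = np.array([
    [120, 15, 40, 30],
    [ 60, 80,  5,  2],
    [200, 10, 90, 60],
    [ 90, 50, 20, 10],
])
median_income = np.array([72_000, 41_000, 105_000, 55_000])  # made-up labels

model = Ridge(alpha=1.0).fit(car_counts, median_income)
new_neighborhood = np.array([[150, 20, 55, 35]])
print(f"predicted income: {model.predict(new_neighborhood)[0]:,.0f}")
```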


Photo by Matt Alaniz on Unsplash