- Alice Zhang, co-founder and CEO of drug discovery company Verge Genomics
- Alice Zhang started Verge Genomics in 2015 with Jason Chen to combine innovations in neuroscience, machine learning, and genomics and apply them to the drug discovery process.
- The vision for Verge was to become the first pharmaceutical company that automated its drug discovery engine, helping to rapidly develop multiple lifesaving treatments in diseases like Alzheimer’s disease, ALS, and Parkinson’s disease where no cure exists today.
- On Monday, the San Francisco-based company announced it had raised $32 million in Series A funding, led by Draper Fisher Jurvetson, bringing its total amount raised to $36.5 million.
The drug development process is laden with problems that make it lengthy and expensive. Right now, it takes 12 years and $2.6 billion to get a single drug to market, with the drug discovery and development process costing $1.4 billion.
Verge Genomics, run by 29-year-old Alice Zhang, is trying to address these problems by making drug discovery faster and cheaper.
On Monday, the San Francisco-based company announced it had raised $32 million in Series A funding, led by Draper Fisher Jurvetson, bringing its total amount raised to $36.5 million.
Zhang was three months shy of her MD and PhD graduation from the University of California, Los Angeles when she left school to start Verge Genomics in 2015 with Jason Chen, whom she met during the program.
“I just became very frustrated with the drug discovery process,” she said. “It’s largely a guessing game where companies are essentially brute force screening millions of drugs just to stumble across a single new drug that works.”
At the time, Zhang also recognized the advances in neuroscience, machine learning, and genomics happening all around her. Genome sequencing had become increasingly affordable, and breakthroughs in understanding how function connects with genes opened a new field of possibilities for exploring disease and health. There was an opportunity to take the guesswork out of drug discovery. The vision for Verge was to become the first pharmaceutical company to automate its drug discovery engine, helping to rapidly develop multiple lifesaving treatments for diseases like Alzheimer’s disease, ALS, and Parkinson’s disease, where no cure exists today.
Recently, other big pharmaceutical companies like Novartis have started to follow suit, applying technology to different steps of the clinical trial process.
Verge, a team of 14, covers the full pipeline: computer scientists manage the machine learning front end, while researchers run its own in-house drug discovery and animal lab. The team includes computer scientists, mathematicians, and neurobiologists, as well as industry and drug development veterans.
Verge is using data and software to tackle three main problems in drug discovery. The first is that many diseases, like Alzheimer’s disease, are caused by hundreds of genes; Verge’s algorithms can map these genes out using human genomic data. The second is that drug developers typically rely on mice as the primary mammalian model, yet drugs that work in mice often fail in humans; instead of using animal data alone for preclinical work, Verge uses human data from day one, which may give greater insight into how effective a drug actually is on human cells. The third is that instead of tediously screening millions of drugs, the algorithm computationally predicts which drugs will work.
For its human data, Verge uses brain samples from patients who have died of Alzheimer’s disease or Parkinson’s disease, obtained through partnerships with over a dozen universities, hospitals, and brain banks. The company then RNA-sequences the samples in-house, which lets it measure gene expression in its most current state and capture how all of the genes in the genome are behaving simultaneously. This data helps scientists figure out what is actually causing disease in these patients and look for connections between genes and disease.
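A first pass at this kind of analysis, comparing gene expression between diseased and healthy tissue, can be sketched in a few lines. Everything below is illustrative: the gene names, sample sizes, and significance cutoff are invented, and this is not Verge's actual pipeline.

```python
# Illustrative sketch of a differential-expression screen: which genes
# behave differently in diseased vs. healthy brain tissue? The gene
# names, sample sizes, and cutoff are invented; this is not Verge's
# proprietary method.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
genes = ["GRN", "TARDBP", "SOD1", "ACTB"]

# Simulated expression values: rows are samples, columns are genes.
healthy = rng.normal(loc=10.0, scale=1.0, size=(20, len(genes)))
disease = rng.normal(loc=10.0, scale=1.0, size=(20, len(genes)))
disease[:, 0] += 3.0  # simulate strong up-regulation of the first gene

# Per-gene t-test between the two groups.
hits = []
for i, gene in enumerate(genes):
    _, p = ttest_ind(disease[:, i], healthy[:, i])
    if p < 0.001:
        hits.append(gene)

print(hits)  # the simulated disease gene should be the clear hit
```

Real analyses add multiple-testing corrections and far more samples, but the core question, which genes separate cases from controls, is the same.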
Verge’s scientists can make predictions about what drugs they think will work. They can take a patient’s own skin cell and turn it directly into their own brain cells in a dish. Then the predictions can be tested on these brain cells to see if they can rescue them from dysfunction or death – a basic test of drug efficacy. That validation data can feed back into the platform and continuously improve predictions over time, even across different diseases.
The Verge algorithm identifies druggable targets for treatments, then designs drugs accordingly. It does this by mining human samples to identify groups of genes implicated in a disease, and the crucial hubs in these gene networks that can turn them on or off.
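The "hub" idea can be illustrated with a toy co-expression network: correlate every pair of genes across samples, link the pairs whose correlation is strong, and rank genes by how many links they have. The simulated data, gene names, and 0.8 cutoff below are assumptions for illustration, not Verge's method.

```python
# Toy "hub gene" search: correlate every pair of genes across samples,
# link pairs whose correlation is strong, and rank genes by how many
# links they have. Data, gene names, and the 0.8 cutoff are invented.
import numpy as np

rng = np.random.default_rng(1)
genes = ["HUB1", "G2", "G3", "G4", "G5"]
n = 500  # simulated samples

# HUB1 drives G2-G4 (shared signal plus noise); G5 is independent.
hub = rng.normal(size=n)
expr = np.column_stack([
    hub,
    hub + 0.6 * rng.normal(size=n),
    hub + 0.6 * rng.normal(size=n),
    hub + 0.6 * rng.normal(size=n),
    rng.normal(size=n),
])

corr = np.corrcoef(expr, rowvar=False)
linked = (np.abs(corr) > 0.8) & ~np.eye(len(genes), dtype=bool)
degree = linked.sum(axis=1)  # number of strong links per gene
hub_gene = genes[int(np.argmax(degree))]
print(hub_gene, degree.tolist())
```

The gene with the most strong links is the network hub, and in the drug-target framing, a candidate switch for turning the whole group on or off.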
The latest investment will serve to advance Verge’s ALS and Parkinson’s disease drugs. Six drugs are in development, nearing the clinical stage, and are being tested to make sure they are safe and non-toxic. The funding will also be used to expand the number of diseases in Verge’s portfolio.
Emily Melton, a partner at DFJ, told Business Insider that investment in early-stage startups is largely about the team, the uniqueness of the idea, and the capability and expertise of the research team. But what drew her in most was Zhang. “She was this brilliant founder, with a very organic desire to create an impact,” said Melton. “She felt like it was her calling.”
Using machine learning to recognize patterns that would otherwise go undetected by the human eye can speed up the process while creating a bigger and better feedback loop, said Melton. “We’re rethinking how drug discovery is done, and we’re rethinking how therapeutics are developed.”
- Google has a new algorithm that can quickly sift through thousands of digital documents in a patient’s health record to find important information, Bloomberg reported.
- In some cases, this enables the technology to help doctors make better predictions about how long a patient may stay in a hospital, or the likelihood that the patient may die.
- Google wants to take the tech into clinics and use ‘a slew of AI tools to predict symptoms and disease.’
Google’s artificial intelligence systems can cull through and analyze a person’s medical history to help doctors make more accurate predictions about a patient’s health, and even provide estimates of when a patient may die, according to a Bloomberg report.
In one situation, doctors estimated that a woman with cancer who had arrived at a city hospital with fluids in her lungs had a 9.3 percent chance of dying during her stay, Bloomberg reported. A new form of algorithm from Google said the risks of death were higher, 19.9 percent. The woman died a few days later.
Google is one of many companies trying to apply AI technology to solve some of the problems faced by the health-care sector. AI has shown enormous promise at analyzing vast amounts of data and performing tasks that typically require many hours of human labor.
In this case, Google’s AI uses neural networks, which have proven effective at gathering data and then using it to learn and improve analysis. According to Bloomberg, Google’s tech can “forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances they will soon die.”
AI can help doctors make better diagnoses
Google’s algorithm can retrieve “notes buried in PDFs or scribbled on old charts” and use them to make predictions, Bloomberg wrote. All this could help doctors make better diagnoses.
Predicting death is likely to stoke fears among those who worry that AI may some day hold too much control over humans.
And Google’s technology raises a variety of ethical concerns about how it is used and who gets access to it. Decisions about insurance coverage for patients seeking certain medical treatment, or hospitals trying to allocate scarce beds for patients, are obvious examples of potentially problematic scenarios where such AI predictions could come into play.
According to the report, the findings so far indicate that Google’s system is faster and more accurate than other techniques at evaluating a patient’s medical history. Eventually, Google would like to bring “a slew of new AI tools” that can accurately predict symptoms and disease into clinics.
- A new system created by Verily and Google AI researchers can use photographs of the retina to predict risk factors for cardiovascular disease.
- The system works about as well as presently used predictive methods and is far less invasive.
- In a recent study, researchers could see what the artificial intelligence software was paying attention to as it studied the eye.
Your eyes might be the perfect windows into your heart.
At least, they’re windows that Google-created artificial intelligence software can use to calculate your risk factors for heart disease.
According to a study recently published in the journal Nature Biomedical Engineering, an AI algorithm created by Google AI and Verily Life Sciences (an Alphabet subsidiary that spun off from Google) can predict whether a patient is likely to suffer a major cardiovascular event like a heart attack or stroke within five years, based on a photo of their retina.
So far, the predictions work about as well as presently accepted methods that are more invasive, according to the study.
Learning to predict heart disease
The fact that disease can be spotted in the retina isn’t a surprise. Doctors often spot medical conditions including diabetes, extreme high blood pressure, high cholesterol, and some cancers during eye exams.
To mimic that ability, the Verily and Google researchers trained AI software to identify cardiovascular risks by having the system analyze retina photos and health data from 284,335 patients. Specifically, it looked at retinal fundus images – photos that show blood vessels in the eye.
- The retinal fundus photographs that the AI software uses to assess cardiovascular disease risk.
- Poplin et al., Nature Biomedical Engineering, 2018
Known risk factors for cardiovascular disease include age, blood pressure, and gender, among other things. Based on an eye scan, the algorithm was able to predict a person’s age to within 3.26 years, smoking status with 71% accuracy, and systolic blood pressure (the upper number in a reading) to within 11 mmHg.
Because the algorithm was so effective at assessing these factors, the researchers decided to see how well it could predict actual strokes and heart attacks.
They used data from 150 patients who had suffered major cardiovascular events within five years of their eye scans. (The full data set included 12,026 people, but only several hundred experienced a major cardiac health event, and clinical data was available for 150 of those patients.) When the researchers presented the algorithm with two retina images, one from a patient who went on to suffer a major cardiac event or stroke and one from a patient who did not, and asked it to pick which was which, it chose the correct scan 70% of the time.
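The 70% figure corresponds to a pairwise ranking task: shown one patient who later had a major cardiac event and one who did not, how often does the model assign the event patient the higher risk score? A small sketch of that metric follows; the scores and outcomes below are made up for illustration, not data from the study.

```python
# Pairwise ranking accuracy: over every (event, no-event) pair of
# patients, count how often the event patient's risk score is higher.
# The risk scores and outcomes here are invented, not study data.
from itertools import product

def pairwise_accuracy(scores, outcomes):
    """Fraction of (event, no-event) pairs ranked correctly; ties score half."""
    events = [s for s, o in zip(scores, outcomes) if o == 1]
    non_events = [s for s, o in zip(scores, outcomes) if o == 0]
    correct = 0.0
    for e, ne in product(events, non_events):
        if e > ne:
            correct += 1.0
        elif e == ne:
            correct += 0.5
    return correct / (len(events) * len(non_events))

risk_scores = [0.9, 0.4, 0.7, 0.2, 0.8]
had_event = [1, 0, 1, 0, 0]  # 1 = major cardiac event within five years
print(pairwise_accuracy(risk_scores, had_event))  # 5 of 6 pairs correct
```

This quantity is essentially the concordance statistic (AUC) commonly used to evaluate risk models, which is why a 70% result is directly comparable to the SCORE calculator's 72%.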
By comparison, the European SCORE risk calculator, which requires a blood test and is currently used to predict cardiovascular disease risk, picked the correct scan in 72% of cases from the same data set. That is not much better than the AI’s performance, and the AI did just as well when it also had access to demographic information like age, gender, and BMI.
A powerful demonstration
- From images like this, the researchers were able to identify what exactly the AI was paying attention to while assessing risk.
- Poplin et al., Nature Biomedical Engineering, 2018
Cardiovascular disease is the leading cause of death in the world. Because of that, the idea that a routine retina scan could provide an early warning of heightened risk – hopefully in time to change behavior – is exciting.
The new study suggests there is more information available in the retina than scientists previously realized. The AI system is particularly exciting because it takes medical images that might already exist and gets new and potentially important data from them. And that information can be gathered and used without invasive tests.
Researchers involved in the study were also able to track which factors the algorithm was relying on to make its predictions, since the system created heat-maps of the areas it focused on. In this case, the researchers knew the system was paying particular attention to blood vessels when calculating blood pressure, for example.
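Occlusion sensitivity is one simple way to build this kind of heat-map: blank out each region of the image in turn and record how much the model's score drops, since regions the model relies on produce the biggest drops. The tiny "model" below is a stand-in invented for illustration, not the attention mechanism used in the study.

```python
# Occlusion-sensitivity heat-map sketch: zero out each pixel in turn
# and record how much the model's score drops. The "model" here is a
# stand-in, not the network from the study.
import numpy as np

def model_score(image):
    # Stand-in model that responds only to the bright "vessel" column.
    return float(image[:, 4].sum())

image = np.zeros((8, 8))
image[:, 4] = 1.0  # a bright vertical "blood vessel"

base = model_score(image)
heatmap = np.zeros((8, 8))
for r in range(8):
    for c in range(8):
        occluded = image.copy()
        occluded[r, c] = 0.0  # blank out a single pixel
        heatmap[r, c] = base - model_score(occluded)

# The heat-map is nonzero exactly where the model was "looking".
print(heatmap[:, 4].sum(), heatmap[:, 0].sum())  # 8.0 0.0
```

Overlaying such a map on the original retina photo shows at a glance which structures, such as blood vessels, drove a given prediction.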
Such information isn’t always available in machine learning processes. But in this case, it can help scientists better understand the wealth of data that’s available in retina images in the first place.
Overall, this new study highlights the ways that deep learning is transforming how scientists study the body. Machine learning can even take scans that we already have and use them to generate a far more complete picture of human health.
Still, as promising as these results seem, they are preliminary, according to a blog post by Dr. Michael McConnell, the Head of Cardiovascular Health Innovations at Verily.
“[M]ore work must be done to develop and validate these findings on larger patient cohorts before this can arrive in a clinical setting,” McConnell wrote.