Positive Preclinical Proof-of-Concept Results For Liver Cancer Candidate, TXR-311

In September 2016, we announced a collaboration with the Asian Liver Center at Stanford University School of Medicine (the Asian Liver Center). The goal of this collaboration was to identify new drug candidates targeting hepatocellular carcinoma (HCC, the major form of adult liver cancer). Today, we announced a lead candidate, TXR-311, that has shown positive results in cell-based assays. I wanted to share a bit more background on liver cancer and details on why these results are exciting.

HCC is a primary cancer of the liver that tends to occur in patients with… 

READ THE FULL POST AT MEDIUM.COM

Seeing the power of AI in drug development

Today we announced our collaboration with Santen, a world leader in the development of innovative ophthalmology treatments. Scientists at twoXAR will use our proprietary computational drug discovery platform to discover, screen and prioritize novel drug candidates with potential application in glaucoma. Santen will then develop and commercialize drug candidates arising from the collaboration. This collaboration is an exciting example of how artificial intelligence-driven approaches can move beyond supporting existing hypotheses and lead the discovery of new drugs. Combining twoXAR’s unique capabilities with Santen’s experience in ophthalmic product development and commercialization… 

READ THE FULL POST AT MEDIUM.COM

Consider Your Biases

In the wake of Donald Trump’s victory over Hillary Clinton, pundits and politicians alike have wondered, “How did we not predict this?” Theories range from misrepresentative polling to journalistic bias to confirmation bias, fueled by the echo chambers of social media. These fervent debates about bias in politics had me reflecting on the role that bias plays in science and in R&D. Sampling bias, expectancy bias, publication bias… all hazards of the profession, and yet science is held up against other disciplines as relatively bias-free by virtue of its data-centric approach.

Biopharma R&D has rapidly evolved over the last few years — it is more collaborative, demands greater speed to respond to competition, and challenges many notions of “conventional” drug discovery. In my reflections, I was curious whether this rapid evolution was a harbinger of biases not conventionally associated with science — and wanted to understand how we at twoXAR aim to stay aware and ahead of such biases.

READ THE FULL POST AT MEDIUM.COM

Augmenting Drug Discovery with Computer Science

The short-list for the annual Arthur C. Clarke Award was recently announced and it reminded me of a post we did last fall on augmentation vs. automation. Clarke was a British science fiction writer, famous for co-writing the screenplay of the 1968 film 2001: A Space Odyssey with Stanley Kubrick. He is also known for the so-called Clarke’s Laws, three ideas intended to guide consideration of future scientific developments.

  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.

These laws resonate here at twoXAR, where every week we meet with biopharma research executives who tell us — usually right after we say something like, “using our platform you can evaluate tens of thousands of drug candidates and identify their possible MOAs, evaluate chemical similarity, and screen for clinical evidence in minutes” — that it’s “impossible” or “magic”!

READ THE FULL POST AT MEDIUM.COM

Mission Possible: Software-driven Drug Discovery

Originally published at Life Science Leader Online.

In the 25-plus years since the modern Internet was launched, we have seen virtually every industry evolve by leveraging the connected, global computing infrastructure we can now tap into any time, from anywhere. Today, advances like machine learning, massive data sets, and cloud-based computing are making it easier than ever to rapidly launch and globally scale software-driven services without the capital expense that was once required.

The debate about whether or not software will eat drug discovery is not a new one and remains a topic that can raise voices. As a formally educated computer scientist and cofounder of a company focused on software-driven drug discovery, I come to the discussion with my own biases.

There is no shortage of software in today’s biopharma R&D organization. Cloud-based electronic data capture (EDC), laboratory information management systems (LIMS), process automation, and chemical informatics are just a few of the well-established tools that support R&D and have a meaningful impact. While software has become a …

Read the full piece at Life Science Leader Online.

Let’s Augment, Not Automate

“Any sufficiently advanced technology is indistinguishable from magic.”

Science writer and futurist Arthur C. Clarke’s prescient “third law” only becomes more relevant as technological innovation accelerates and disciplines like computer science, data science and life science converge.

As we have been out in the field demonstrating the power of our technology platform to our collaborators, it has been interesting to hear their reactions when we tell them how it can evaluate tens of thousands of drug candidates and identify their possible MOAs, evaluate chemical similarity, and screen for clinical evidence in minutes. These responses run the gamut from “Wow, this is going to revolutionize drug discovery!” to “this is magic, I don’t believe computers can do this…”

However, whether we are talking to the converted or the skeptical, as we get deeper into conversations about how our technology works, we come to agree that using advanced data science techniques to analyze data about drug candidates is not magic. In fact, we’re doing what scientific researchers have always done – analyzing data that arises from experiments. What’s different is that advances in statistical methods, our proprietary algorithms, and secure cloud computing enable us to do this orders of magnitude faster than by hand or with the naked eye.
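
As one small example of the kind of analysis that can sound like magic but is really just computation, the “chemical similarity” step mentioned above is commonly scored with open-source cheminformatics tools. The sketch below uses RDKit’s Morgan fingerprints compared by Tanimoto similarity; it illustrates the general technique rather than twoXAR’s proprietary platform, and the example molecules are arbitrary.

```python
# Illustrative sketch only: a standard open-source way to score chemical
# similarity between two candidate molecules. Not twoXAR's proprietary method.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    """Tanimoto similarity between two molecules given as SMILES strings."""
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 2, nBits=2048)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

# Example: aspirin vs. salicylic acid share most of their substructure.
print(tanimoto("CC(=O)Oc1ccccc1C(=O)O", "O=C(O)c1ccccc1O"))
```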

The speed of our technology, combined with the massive quantities of data it processes, simply enhances the work that our collaborators have been doing in the lab for years. We believe that the most interesting and powerful new discoveries will arise at the intersection of open-minded life scientists combining their deep expertise with unbiased software.

Technologies like ours are meant to augment* the work of life scientists and help them accelerate drug discovery and fill clinical pipelines while leading society to a more robust and streamlined scientific process. Although DUMA might sound futuristic, today it is enabling therapeutic researchers to better leverage the value of their data and do it more rapidly than ever before.

Don’t believe the magic? Contact me and we’ll get a trial started to show you the science.

 

*Sidenote: I have been particularly interested in this interaction between humans and machines, which led me to a class at MIT called The [Technological] Singularity and Related Topics. One of those major topics was whether or not machines (including software) will replace aspects of society. One of my professors, Erik Brynjolfsson, co-author of The Second Machine Age, stated that “We are racing with machines – let’s augment, not automate.” And we definitely share that view here at twoXAR.

Validating DUMA Independently

When independent scientific validation happens with new technologies, it is an exciting time for both researcher and validator.

Some time ago we used our DUMA drug discovery platform to find new potential drug treatments for Parkinson’s disease. After processing over 25,000 drugs with our system, we identified a handful of promising candidates for further study. We noticed that one of our highest-ranked predictions was already under study at an NIH Udall Center of Excellence in Parkinson’s Disease Research at Michigan State University.

We decided to be good citizens of the research community and provide our findings to the research team at Michigan State University. We prepared a 5-page PDF that summarized our computational prediction. When DUMA ranks a drug highly for efficacy, it also provides the supporting evidence it used to make that prediction (a rough sketch of how such an evidence report might be represented in code follows the list below). This can include:

  • Calculated proteins of significant interest in the disease state,
  • How the drug interacts with those proteins or their binding neighbors,
  • Drugs with similar molecular substructures that have similar effects, and
  • Protective evidence found in clinical medical records.
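
As promised above, here is a purely hypothetical sketch of how a ranked prediction and these four categories of supporting evidence might be represented and rolled up into a single score. The class names, fields, numbers, and weights are invented for illustration only; they are not DUMA’s internal data model or scoring method.

```python
# Hypothetical illustration only: not DUMA's internal format or scoring method.
from dataclasses import dataclass

@dataclass
class EvidenceSummary:
    protein_associations: float = 0.0   # proteins of significant interest in the disease state
    binding_interactions: float = 0.0   # drug interactions with those proteins or binding neighbors
    structural_similarity: float = 0.0  # similar molecular substructures with similar effects
    clinical_signal: float = 0.0        # protective evidence found in clinical records

@dataclass
class DrugPrediction:
    drug_name: str
    evidence: EvidenceSummary

    def score(self, weights=(0.3, 0.3, 0.2, 0.2)) -> float:
        """Combine the four evidence categories into one efficacy score."""
        e = self.evidence
        parts = (e.protein_associations, e.binding_interactions,
                 e.structural_similarity, e.clinical_signal)
        return sum(w * p for w, p in zip(weights, parts))

# Rank a couple of made-up candidates from strongest to weakest evidence.
candidates = [
    DrugPrediction("candidate_A", EvidenceSummary(0.9, 0.8, 0.6, 0.7)),
    DrugPrediction("candidate_B", EvidenceSummary(0.4, 0.5, 0.3, 0.2)),
]
for p in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(p.drug_name, round(p.score(), 3))
```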

We emailed our report to Dr. Tim Collier and figured that was the end of it. Much to our surprise, we found ourselves on a phone call the next day with Tim and his colleague Dr. Katrina Paumier. Tim told us that we had independently validated work that had been going on for years.

As part of their review of the report, Tim and Katrina asked a number of questions about how we came up with the prediction we presented. We explained a bit about DUMA and how it can screen large databases of drugs and make predictions within a few minutes. They told us they had another promising drug under study and asked us to run it through DUMA. We returned the results on this new drug right away. It turned out that DUMA also ranked this second candidate highly for efficacy in treating Parkinson’s disease. Once again our evidence matched their data, independently validating that they were on the right track with their second candidate.

Finally, Tim asked us to run one more drug through our system. He didn’t tell us much about this particular molecule, and we let DUMA process the data we collected on it. The prediction ranked this candidate lower. We informed Tim that our system gave a low-to-moderate indication of efficacy, and supplied the evidence DUMA had used to assign this ranking. This once again matched his own data about the compound.

Our work with Michigan State University continues today. We are working with Tim to provide novel compounds for further study, combining the power of the DUMA drug discovery system with the expertise of Parkinson’s research labs.

Night of the Dead’s Living Data

We often speak of our trove of gene expression data: RNA measurements from different human tissues, which allow us to identify genes that are expressed abnormally in disease patients compared to healthy people. By the time it gets to us, that RNA has been converted first into cDNA, then into a microarray or RNA-seq readout, then into a publication, and finally into an entry in a neat public database. But like babies and sausage, we must eventually pause to consider where this RNA comes from. The answer, especially for brain diseases, is often cadavers (otherwise known as dead people).

Realizing that so much scientific knowledge comes from the dearly departed initially gave me the heebie-jeebies. I knew there were no other options, as brain biopsies are incredibly unpopular among the living. But weren’t readouts from dead tissues vastly different from live ones? My naïve intuition was that biological readouts would be like the electronic displays that report system diagnostics on my motorcycle: once the machine’s been turned off, the measurements become significantly less accurate reflections of the bike’s functioning state.

However, apparently one cannot extrapolate this logic from hogs to humans. It turns out that RNA, particularly in brain tissue, is quite stable post-mortem and provides a reliable snapshot of brain function in life. Post-mortem protein measurements can be very robust as well; a recent study of more than 3,600 human cadaver brains has shifted the paradigm on which protein is the primary driver of Alzheimer’s disease.

In a way, twoXAR’s work corroborates this principle. Our gene expression-based models of Parkinson’s disease, schizophrenia, and Alzheimer’s disease yield excellent predictions of known treatments and exciting, sensible repurposing candidates. Thus, I have come to acknowledge that, like zombies, “undead data” can be surprisingly powerful.

It’s All About the Gene Expression: How Genes Are Turned On and Off in Disease

Hi guys, my name’s Aaron, and as a grad student researcher in Genetics at Stanford, I’m twoXAR’s resident gene expression nerd. And as a co-tinkerer on the twoXAR platform, I focus on finding and incorporating data to continuously improve our disease models, and in turn our algorithm’s predictions. In a previous post, Andrew gave a nice introduction to gene expression measurements and how we use them. Today, I want to give you a little more information on how gene expression—otherwise known as transcription—is biologically controlled and scientifically measured.

As Andrew explained in his excellent cookbook analogy, genes are instructions on how to make proteins, written in DNA. For a cell to execute those instructions, it must make RNA “photocopies” of genes that are relevant to its tasks. Cells therefore select which proteins to make by choosing a set of genes to transcribe from DNA to RNA. The genome has tens of thousands of instructions coding for everything from insulin to the enzymes that make dopamine to the stuff that makes your toenails. For a pancreas cell to do its job correctly, it has to pull out the instructions for the first and ignore the latter two. How do cells do this?

One major player in the gene regulation game is the transcription factor. Transcription factors are proteins that bind to specific sequences of DNA and kick off gene expression. You can think of them as “smart bookmarks” that find their way to the words that begin relevant chapters of DNA.  But where do these bookmarks come from, and how’d they get so smart anyways?

It turns out, a lot of them work two shifts, acting as both transcription factors and signaling proteins: molecules that report the signals a cell receives.  So, a transcription factor will hear the hormones in the cell’s environment shouting, “We need more insulin, STAT!”, and hurry to the DNA to open up the insulin chapter (there’s a little pun in there for you signaling geeks). Once the right transcription factor bookmarks arrive at the right gene chapter, other proteins will come to that page and transcribe its DNA instructions into RNA, allowing protein synthesis.

And there’s an even simpler level at which gene expression is regulated: some pages of the DNA book are open and easily flipped through, while other pages can be temporarily glued shut, preventing bookmarks from finding their way to their chapter headings. The varied accessibility of different DNA regions in the genome is referred to as epigenetics, and it’s so neat that I’m doing a whole PhD about it! Those interested in learning more about this hot, up-and-coming field are encouraged to start here.

But how does all this relate to twoXAR? Well, we’re in the business of finding new roles for drugs in human disease. Human disease manifests through changes in gene expression: DNA pages that are supposed to be sealed become opened, transcription factor bookmarks land excessively on some chapters and insufficiently on others, and the selection and number of RNA photocopies get out of whack. At twoXAR, we compare the gene expression profiles of disease patients to those of healthy individuals and identify the proteins that correspond to each gene, which become the starting points for our drug discovery algorithms, as described here. All very well and good, you say, but what the heck’s a gene expression profile, and how do you get your hands on one?

Some of our gene expression data comes from published databases of federally funded human research. Each dataset indicates the number of RNA photocopies that have been made for thousands of genes in a certain tissue (such as blood, muscle, or brain biopsy samples) from patients and healthy controls. If you’re wondering what kind of healthy people let scientists biopsy their brains, the answer is dead ones; more on that in our next post!
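
To make “comparing profiles” a bit more concrete, here is a deliberately tiny sketch of the underlying idea: for each gene, test whether its expression differs between patient samples and healthy controls. The gene names and numbers are invented, and a real analysis involves thousands of genes, careful normalization, and multiple-testing correction, so treat this as an illustration of the concept rather than our actual pipeline.

```python
# Toy example: per-gene comparison of patients vs. healthy controls.
# All values are invented and assumed to be normalized, log2-scale expression.
import numpy as np
from scipy import stats

genes = ["SNCA", "GBA", "ACTB"]  # ACTB is a housekeeping gene, expected unchanged
patients = np.array([
    [8.1, 7.9, 8.4, 8.0],     # SNCA across four patient samples
    [5.2, 5.0, 5.3, 5.1],     # GBA
    [10.0, 10.1, 9.9, 10.0],  # ACTB
])
controls = np.array([
    [6.9, 7.0, 7.1, 6.8],
    [6.0, 6.1, 5.9, 6.2],
    [10.0, 9.9, 10.1, 10.0],
])

for i, gene in enumerate(genes):
    _, p_value = stats.ttest_ind(patients[i], controls[i])  # two-sample t-test
    log2_fc = patients[i].mean() - controls[i].mean()       # difference of log2 values
    print(f"{gene}: log2 fold change = {log2_fc:+.2f}, p = {p_value:.4f}")
```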

The last thing I want to tell you about is how these RNA measurements are made. We use data collected via two methods: the older (and here we’re still talking only 20 years or so) RNA microarray, and the powerful new kid on the block, RNA sequencing (RNA-seq). The first step in both of these processes is extracting RNA from tissue samples and immediately converting it back into its more stable cousin, DNA (through a process unsurprisingly called reverse transcription); since this DNA is “complementary” to the RNA sequences in each sample, it’s referred to as “cDNA”.

Where these two methods differ is in their mode and range of detection. To run a microarray, you first pick which genes you want to measure, synthesize DNA molecules for each of those genes, and stick thousands of copies of each gene’s molecule at specific locations on a glass slide. You then label your cDNA with fluorescent chemicals and run it over said glass slide. If one of the genes you put on the chip was expressed in your sample tissue, the fluorescent cDNA for that gene will stick to it, because two complementary pieces of DNA that contact each other will form double helices. The more copies of that gene in your sample, the more fluorescence will accumulate at that spot on the chip, which can be quantified. In contrast, RNA-seq takes a simpler, but more expensive, approach: take your whole batch of cDNA and sequence (i.e. use a machine to ‘read’ the cDNA) the sucker! Rather than picking out individual genes to measure, RNA-seq takes an unbiased approach and measures everything. As the experimental costs of sequencing and the computational costs of analyzing such large data sets both come down, this next-generation approach is becoming more prevalent in both the research community and in the twoXAR databases.
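
To give a feel for what “measuring everything” produces downstream, here is a small sketch of one common first step with RNA-seq output: converting raw read counts into comparable expression values (counts per million, then log-transformed). The count matrix is invented, and real pipelines use more sophisticated normalization; this only shows the general shape of the data.

```python
# Toy example: normalizing an RNA-seq count matrix (genes x samples).
# Counts are invented; real datasets cover tens of thousands of genes.
import numpy as np

raw_counts = np.array([
    [ 500,  430,  610],   # reads mapped to gene 1 in each of three samples
    [  20,   25,   18],   # gene 2
    [1500, 1600, 1450],   # gene 3
], dtype=float)

library_sizes = raw_counts.sum(axis=0)        # total reads sequenced per sample
cpm = raw_counts / library_sizes * 1_000_000  # counts per million, comparable across samples
log_expr = np.log2(cpm + 1)                   # log transform; +1 avoids log(0)

print(np.round(log_expr, 2))
```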

Phew! And there you have it, a handy knowledge dump from your friendly neighborhood geneticist. I hope it helps provide a clearer picture of the methods behind our madness.