Speed of progress is an important parameter in drug discovery - not just because moving fast is good by default, but because executing the research work efficiently brings several potential business benefits.
 
Firstly, drug discovery is a very expensive endeavour, so every single day by which an organisation can bring its drug to market earlier, and start the return on that investment sooner, is a benefit.
 
Secondly, as in any business, the competition is fierce, so being first to market and having a period of exclusivity will also benefit revenue.
 
In smaller biotech companies, the output of the drug discovery efforts - the drug - will likely not go all the way to market in the biotech's own hands. Instead, it will be sold to a bigger pharma company. Hence, the sooner you find a good candidate, the sooner you can strike a big pharma deal or secure the next round of funding to get you there.
 
Finally, patients are waiting for better treatments - and in some cases, where no treatment currently exists, for any help at all. So let's make the processes as efficient as we possibly can.


What's the challenge with the way it's often done today?
 
Today we unfortunately see too many manual processes. Either the workflow is built up of several different tools, with manual editing and copy/pasting of data in and out of these tools, or the workflow is completely manual and the scientists have to be very careful and vigilant, keeping track of compound and batch details, plate information, and matching data to plate wells by hand in Excel. Doing it this way is not just slow; it's also very error-prone.
 
How can we improve?
 
There are several ways to improve depending on the actual workflow and the ambition. When we talk about standard work - like an assay run, often with the same or very similar configurations - the minimum should be templates where the data transformations or calculations are handled by well-tested formulas. However, initial data editing and copy/pasting can still introduce errors.
 
To improve the quality and speed further, implement software tools where the raw data files are uploaded without any human editing, so the files move directly from the data-producing equipment to the software performing the transformations and calculations. In other words, let the software do what it's good at: keeping track of data and calculating.
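To make that concrete, here is a minimal Python sketch of what "no human editing" can look like in practice: the unedited instrument export is read directly and normalised against the plate controls. This is purely illustrative - the file format, column names, control labels, and function names are all assumptions, not any particular vendor's implementation.

```python
# Minimal sketch: read a raw plate-reader export as-is and normalise it
# against plate controls, so no values are ever edited or copy/pasted by hand.
# File layout, column names, and control labels are illustrative assumptions.
import pandas as pd

def normalise_plate(raw_csv_path: str, layout: dict[str, str]) -> pd.DataFrame:
    """Read the unedited instrument export and compute % inhibition per well."""
    raw = pd.read_csv(raw_csv_path)          # e.g. columns: well, signal
    raw["role"] = raw["well"].map(layout)    # assign roles from a saved plate layout

    high = raw.loc[raw["role"] == "neg_control", "signal"].mean()  # 0% inhibition
    low = raw.loc[raw["role"] == "pos_control", "signal"].mean()   # 100% inhibition

    raw["pct_inhibition"] = 100 * (high - raw["signal"]) / (high - low)
    return raw

# layout = {"A1": "neg_control", "A2": "pos_control", "B1": "sample", ...}
# normalised = normalise_plate("reader_export.csv", layout)
```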
 
The final step to speed up the process even further would be to introduce full automation: connect equipment and software tools tightly together and let the machinery do all the work. This is naturally costly, requires significant investment and implementation effort, and is hence mostly relevant for bigger organisations and very standardised processes.
 
What could the process look like?
 
Taking our own gritCurvefit tool as an example, this is what a simpler and faster process could look like, from the raw data file produced by the equipment (a plate reader) to the final IC50 results. The tool is stand-alone and therefore does not need any integrations with either upstream or downstream equipment or databases.
 
The user simply imports - or drags and drops - the raw output file from the equipment into the software. No editing needed. The user selects a matching plate layout or assigns roles to the relevant wells. The software then keeps track of and matches wells with roles, concentrations, and readouts, and can therefore perform the calculations and the curvefit without any further involvement from the user. Based on the curves, the platform also calculates and displays the IC50 values, making it quick and easy to get from the uploaded raw data files to the final IC50 values.
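For the curve-fitting step itself, a common approach is a four-parameter logistic (4PL) fit of response against concentration. The sketch below, using SciPy, is an illustrative stand-in rather than gritCurvefit's own algorithm; the function names, starting guesses, and example data are assumptions.

```python
# Minimal sketch of a 4PL dose-response fit returning an IC50 estimate.
# Illustrative only - not the gritCurvefit implementation.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic: response rises from `bottom` towards `top`
    # as the concentration increases past the IC50 (for hill > 0).
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

def fit_ic50(conc: np.ndarray, response: np.ndarray) -> float:
    # Rough starting guesses: observed min/max response, median concentration, slope 1.
    p0 = [response.min(), response.max(), np.median(conc), 1.0]
    params, _ = curve_fit(four_pl, conc, response, p0=p0, maxfev=10000)
    return params[2]  # the fitted IC50

# Hypothetical dilution series (nM) and matching % inhibition values:
# conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
# resp = np.array([2, 5, 15, 35, 60, 82, 93, 97], dtype=float)
# print(fit_ic50(conc, resp))
```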
 
All the data, from the raw reads to the normalised values, the algorithms used, and the final IC50 values, is stored in a database and can therefore be referenced later.
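As a rough sketch of that storage step, the snippet below persists raw reads, normalised values, the model used, and the fitted IC50 so they can be looked up later. SQLite and the table layout are illustrative assumptions, not the storage layer gritCurvefit actually uses.

```python
# Minimal sketch: persist the full data trail for later reference.
# SQLite, table, and column names are illustrative assumptions.
import json
import sqlite3

def store_result(db_path: str, compound_id: str, raw: list, normalised: list,
                 model: str, ic50: float) -> None:
    with sqlite3.connect(db_path) as con:
        con.execute("""
            CREATE TABLE IF NOT EXISTS curvefit_results (
                compound_id TEXT,
                raw_reads   TEXT,   -- JSON-encoded raw signals
                normalised  TEXT,   -- JSON-encoded normalised values
                model       TEXT,   -- e.g. '4PL'
                ic50        REAL
            )""")
        con.execute(
            "INSERT INTO curvefit_results VALUES (?, ?, ?, ?, ?)",
            (compound_id, json.dumps(raw), json.dumps(normalised), model, ic50),
        )

# store_result("results.db", "CMPD-001", raw_reads, normalised_values, "4PL", ic50)
```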
 
So, what's in it for you?
 
It's easier. Drag the files and let go! That's it. There's enough to worry about when running drug discovery lab experiments, so if some of the complexity of the data handling can be reduced, that's good. And there is far less risk of introducing errors in the data if the user doesn't edit anything in the equipment- or software-generated data files.
 
Easier handling and fewer errors are good for the user, but naturally also good for the organisation. As is the fact that the data handling - and thereby the drug discovery process - becomes more efficient and faster, which, as mentioned earlier, means drugs reaching the market sooner.
Post by Claus Stie Kallesøe
March 21, 2024