What level of immunogenicity is acceptable? Should a drug be considered
successful when ten, five, or zero percent of the population
shows an immunogenic response?
If personalized medicine is the path forward, then researchers
and developers must be able to isolate specific populations. In
the near future, scientists may use software that presents a world
map depicting segments of the human population that are likely
to have an immunogenic response to a medication. Such solutions are on the horizon, as are tools that will support even deeper levels of personalized analysis. Because individuals in specific
populations will not all react in the same way to the same drug,
there is a need for tools and tests that can predict individual immune responses.
In the early stages of drug development, most immunogenicity predictions are based on in vitro testing, followed by animal testing. Later, once a drug is given the go-ahead for
human testing, analysis becomes more challenging and predictions less certain. It is difficult to extrapolate from animal tests
precisely what the response will be in humans. How
can toxicologists be sure that what they observed in animal tests
is reflected in human tests? What segment of the human population should be considered as subjects for testing? What characteristics should the human test subjects have?
Pharmaceutical organizations need tools, information, and
protocols that can help them answer these questions, but new
software and algorithms are not enough. The industry also
needs more data about the human patient population. This underscores the importance of collaboration among all kinds of
experts within organizations and the potential advantages of
pre-competitive alliances that span the industry. These cooperative strategies cultivate collective insight that expands scientific knowledge for everyone.
PERSPECTIVE IS EVERYTHING
Science is about reproducing results. When a company develops
a drug, every experiment and test must be recorded. Organizations must be able to prove to regulatory bodies such as the U.S.
Food and Drug Administration (FDA) that development results
are reproducible. So it is important that rules are clearly defined
and enforced for virtual experiments, wet lab experiments, and
all associated data. Different software solutions can provide a
predictive model if the right data is used. The challenge is to
maintain that data so the model and its results are reproducible
in the future.
Data must be complete, consistent, correct, and contextualized by metadata. And it must be the original data as it was captured, not changed in any way. Procedures for data governance
should be in place to ensure data is managed appropriately. This
can include documenting how companies safeguard the quality
and integrity of their data. It can also include management of
where data resides, who has access to it, what is done with it, and
all the policies governing secure and efficient data management.
Proper lifecycle management of virtual experiments, wet lab experiments, and all associated data is as important as the results
of the experiments.
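One common way to guarantee that model inputs are "the original data as it was captured" is to fingerprint the raw data with a cryptographic hash and store that fingerprint alongside contextual metadata. The sketch below illustrates the idea; the function names, record fields, and example metadata are hypothetical, not a reference to any specific pharmaceutical data platform.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(raw_bytes: bytes) -> str:
    """Content hash of the data exactly as it was captured."""
    return hashlib.sha256(raw_bytes).hexdigest()

def make_provenance_record(experiment_id: str, raw_bytes: bytes, metadata: dict) -> dict:
    """Bundle the original data's fingerprint with contextual metadata."""
    return {
        "experiment_id": experiment_id,
        "sha256": fingerprint(raw_bytes),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "metadata": metadata,  # e.g. instrument, protocol version, operator
    }

def verify_unchanged(record: dict, raw_bytes: bytes) -> bool:
    """Before reusing data in a model, confirm it matches the original capture."""
    return record["sha256"] == fingerprint(raw_bytes)

# Example: capture assay data once, then verify it before any later reuse.
data = b"sample_id,response\nS1,0.12\nS2,0.48\n"
record = make_provenance_record("EXP-001", data, {"assay": "ELISA", "protocol": "v2"})
assert verify_unchanged(record, data)             # original data passes
assert not verify_unchanged(record, data + b"X")  # any alteration is detected
```

A record like this makes reproducibility checkable: if a future model run starts from data whose hash no longer matches, the discrepancy is caught before it contaminates results.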
Accurate data is necessary to build good models. A platform
approach to information technology can ensure that all relevant
data is represented while also helping to enforce data integrity.
But beyond data management, scientists need to extract knowledge from data. A single software solution cannot do that. Scientists must have the ability to visualize and leverage data from
many points of view, including physical, biological, and statistical perspectives.
Scientists can attain this perspective by giving virtual experiments and real experiments equal weight. Pharmaceutical
testing relies heavily on wet lab experiments; virtual experiments and their associated results can illuminate additional
criteria to help scientists make better decisions. Leaders at pharmaceutical organizations should establish a company culture
that values both approaches, creating holistic workflows that
integrate both types of experimentation. Most importantly, organizations should apply the same protocols to wet lab and
in silico experiments alike, fostering sound decisions across both.
These best practices won’t pay off unless analytical tools
are easily accessible to all scientific roles. Siloed knowledge
and know-how can undermine usability and collaborative innovation. Many toxicologists and biologists don’t want to work
with mathematics, differential equations, or statistical analysis.
But predictive modeling depends on mathematics and statistics.
To clear this industry hurdle, software developers must provide
analytical tools that are easy to use for people who are not math experts.
A neural network should not be presented in the same way
to a toxicologist as to a statistician in the same company. Ideally,
a technology solution should provide an interface whose usability is tailored to each scientist’s expertise while always accessing
the same back-end software and data. Presentation should vary
based on functional roles; analytical results should not.
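The principle of one shared back end with role-specific presentation can be sketched in a few lines. Everything here is illustrative: the model class, its scoring rule, and the risk threshold are invented stand-ins, not a real immunogenicity model or product API.

```python
# Hypothetical sketch: one back-end model, role-specific presentation.

class ImmunogenicityModel:
    """Shared back end: every role queries the same model and data."""

    def predict(self, features: dict) -> dict:
        # Stand-in for a real neural network; returns a raw score and uncertainty.
        score = min(1.0, 0.2 + 0.5 * features.get("aggregation", 0.0))
        return {"score": round(score, 3), "std_error": 0.05}

def present_for_toxicologist(result: dict) -> str:
    """Plain-language risk band; no statistical machinery exposed."""
    band = "high" if result["score"] >= 0.5 else "low"
    return f"Predicted immunogenicity risk: {band}"

def present_for_statistician(result: dict) -> str:
    """Full numeric detail for statistical review."""
    return f"score={result['score']}, std_error={result['std_error']}"

model = ImmunogenicityModel()
result = model.predict({"aggregation": 0.8})  # same back-end call for both roles
print(present_for_toxicologist(result))       # role-appropriate views of
print(present_for_statistician(result))       # one identical result
```

Because both presenters consume the same `result` object, the analytical output cannot drift between roles; only the framing differs.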
Despite consensus about the value and importance of collaboration, isolated silos of expert proficiency remain an industry challenge. Some mathematicians and statisticians don’t
collaborate optimally with biologists. Some biologists don’t collaborate effectively with toxicologists. Yet all of these experts
can contribute to improving the predictability of immunogenicity in biotherapeutics. Technology providers need to support
them all with software solutions and platforms that help them collaborate.
“Really, the only thing that makes sense is to strive for greater collective
enlightenment.” –Elon Musk