AstraZeneca Lead Architect talks big data process excellence
After attending the 2018 Data Analytics for Pharma Development conference, Pharma IQ stopped to chat with Arun Bondali, Sr. Enterprise Lead Architect, Science and Enabling Units at AstraZeneca about pharma eliminating inefficiencies in how it handles and mines data.
What’s your golden rule for pharma’s use of big data?
The key rule I would suggest to all companies is this: rather than waiting until the required platforms have been built, look at the data as soon as possible. You could begin by applying analytics capabilities to it in a structured way, or by examining a particular kind of dataset.
Do not shy away from potential failure because those failures can be the stepping stones to discovering how and what tools can be applied to what kind of data.
What are two take-homes from your presentation this year on driving process optimisation through integrating structured and unstructured data?
Firstly, as I mentioned, start small, do not shy away from experimentation and do not wait for the next big platform to be built to line up all the data items for you.
Secondly, to bring more maturity into the model, I suggest managing data governance through some sort of hub or application. This would help track where the requirements for big data are generating maximum retention, monitor the usage of analytics, and guide the efficient use of budgets in areas with low analytics activity.
Any other tips for driving operational excellence in data strategies?
This goes back to my earlier point about having a strong data governance team. As far as operational excellence goes, this team helps track unmet data requirements, and forecasts of future requirements, around the data hub. This strategy would also provide the right level of security over what kind of data is being used for analytics, and at what level.
In your session, you spoke about the fog in the pharma industry around the meanings of some advanced analytics buzzwords.
Please give us a quick definition breakdown for descriptive analytics, predictive analytics and prescriptive analytics.
Descriptive analytics is any kind of analytics which analyses pre-existing data to understand a current situation from a data perspective.
Predictive analytics is one step ahead of descriptive analytics and entails the ability to extrapolate future events from descriptive data. This requires a large base of real-time data.
Based on the data you already have, you are able to predict the potential next set of events that could happen.
One good example: you can predict the impact of an infection in a particular community based on the current data pattern.
Prescriptive analytics takes a predicted event and then forecasts what action is likely to be the most beneficial solution.
You examine a predicted event, such as mass infection in a community, and then you look into what that community needs to do to counter and suppress the infection.
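The three levels described above can be sketched in a few lines of code. This is a minimal toy illustration, not anything from AstraZeneca's systems: the case counts, growth model and intervention threshold are all hypothetical, chosen only to show how descriptive, predictive and prescriptive steps build on each other.

```python
# Toy illustration of descriptive, predictive and prescriptive analytics
# using weekly infection counts in a community (hypothetical numbers).

weekly_cases = [12, 15, 21, 30, 44]  # hypothetical historical data

# Descriptive: summarise pre-existing data to understand the current state.
latest = weekly_cases[-1]
average = sum(weekly_cases) / len(weekly_cases)

# Predictive: extrapolate the next data point from the observed trend
# (here, a naive average week-over-week growth rate).
growth_rates = [b / a for a, b in zip(weekly_cases, weekly_cases[1:])]
avg_growth = sum(growth_rates) / len(growth_rates)
predicted_next = latest * avg_growth

# Prescriptive: recommend the action most likely to counter the predicted event.
threshold = 50  # hypothetical intervention threshold
if predicted_next > threshold:
    action = "scale up testing and targeted treatment in the community"
else:
    action = "continue routine monitoring"

print(f"current: {latest}, predicted next week: {predicted_next:.0f}, action: {action}")
```

In a real setting the naive growth-rate extrapolation would be replaced by a proper statistical or machine learning model, but the division of labour is the same: describe what the data shows, predict what comes next, then prescribe a response.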
What do you think 2018 holds for predictive analytics in drug discovery?
I see a lot of pharma companies and biotech ventures taking a more active role in predictive analytics because of the technology's ability to help identify key usage patterns and look for treatments that are targeted to a specific set of patients or conditions.
I see this as a growing trend because companies are starting to realise the value behind this kind of niche market. Strong economic factors will drive them to consider predictive as well as prescriptive analytics.
The biggest mistake the industry makes in how it uses data?
The biggest cause of inefficiency is the duplication of data across multiple applications, be that in a regulated, non-regulated or R&D application environment. There are too many applications and systems holding duplicate copies of the same data.
We end up performing heavy transcription work and, at the end of the day, are unable to establish the validity of a single source of truth across multiple systems.