
AI-powered, Media-based
Outbreak Monitoring
DURATION
18 months
TYPE
User Research | UX/UI | Product management
CLIENT
Wadhwani AI & Ministry of Health & Family Welfare, India
*This is a summary of the full case study, due to an NDA
** The mobile view has limited information; desktop is recommended
Problem
Statement
The Media Scanning & Verification Cell struggles to raise health alerts efficiently because of the massive amount of data processed manually: 300 Google Alerts (10 links each), 20 newspapers in Hindi and English, and general online reading. This leads to omissions by the Media Surveillance Assistant (MSA).
After Covid-19, there has been increased rigour internationally in making outbreak monitoring more robust.
The Beginning
The Assumption
Because media news is gathered manually, its sheer volume creates cognitive overload, and media events with outbreak potential are missed. By automating the capture and filtering of media news, we will reduce the burden on the user and create a more systematic process.
The Integrated Disease Surveillance Programme (IDSP) has a platform called the Integrated Health Information Platform (IHIP).
The IHIP platform was developed by WHO.
The AI tool is integrated through an iframe within the IHIP platform.
All the data from medical facilities (Presumptive data: P form), labs (Laboratory data: L form) and the community (Syndromic data: S form) is collated on IHIP and analysed.
Process 1.0
The first round of research focussed on developing the Minimum Viable Product (MVP) to gain a closer understanding of the user's experience with an AI-based product for disease surveillance. Duration: 1 month
01
Understanding IDSP & Media based surveillance
Literature review
02
Interviews with the user, program manager & epidemiologist
Contextual Interviews
Field research
03
Internal discussions with the product & program managers
Primary Research
04
Designing an MVP based on learnings
Prototype Development
05
Testing the MVP
Usability Testing
Key Learnings 1.0
From the field visit, interviews & internal conversations, we learned some key aspects about the disease surveillance ecosystem at the centre.

1. Media-based news is seen as less "useful & reliable"
There is an attitude that media-based monitoring is less useful than other sources of information. This creates an environment where the Media Surveillance Assistant focuses on ensuring that some health alerts are published every day, rather than on whether all relevant events are captured.

2. No way to ensure all media alerts are captured
The user sifts through 300 Google Alerts, which include many irrelevant articles. Additionally, they use other online sources & physical newspapers to complete their media-based surveillance. However, there is no way of measuring whether all "relevant" media news is covered.

3. Offline is preferred as it offers more control
A form is available for publishing a media-based health alert on IHIP; however, the MSA does the entire process, from shortlisting to approval, manually on paper. The alerts are shared over Gmail and records are maintained in Excel. IHIP offers them limited access to the data they create.

4. One primary user and multiple secondary users
While the MSA is the key user of this product, the alerts generated are responded to by 600+ District Surveillance Officers (DSOs). Their expectations of the alerts are critical to integrate into the design of the AI solution. Multiple users also oversee the MSA's work.
Process
Outcomes 1.0

Using IHIP as the foundation, we mapped all the different users and data sources
1. Mapping of the IHIP platform, data and users
Using extensive secondary research and the programmatic information available online for the states, we prepared an ecosystem map of data sources and people. This mapping was useful for understanding current interactions & developing a future view of AI usage.

2. User journey map of the Media Surveillance Assistant
We identified all the touchpoints the AI tool needed to be designed for. We also identified the areas beyond the scope of the AI tool that required programmatic interventions with the stakeholders, e.g. disease names.
Minimum Viable Product (MVP)
The MVP was a tool for us to understand how users worked with the AI tool. At the same time, it became a vehicle for furthering the AI's capabilities, identifying training needs and aligning expectations with stakeholders.

There were 3 tabs:
- Media Alerts List: the user sees a list of all media alerts & health information processed by ML models, which they verify before shortlisting
- Shortlisted Tab: shortlisted alerts await approval from an epidemiologist
- Published Tab: the user can see the published alert
1. An AI-powered tool, not a replacement for the human
The MVP played a critical role in providing a non-threatening experience, as the user was averse to the tool, fearing it would "take their job". This also applied to the stakeholders, with whom we worked to shift the conversation from the buzz around AI to the process of building the tool.
2. Information flow and language used
We replicated the offline process in the online flow. The language and style came from programmatic guidelines, e.g. what the AI tool captures from media news is an "Unusual Health Event", which becomes a "Health Alert" upon publication by the MSA.
3. Integrating with the existing platform
From a fragmented system spanning Excel, Word, Gmail and printed files, the intention was to shift everything into IHIP. This required us to capture all the communication the tool needed to have with IHIP within an iframe.
Key Learnings from MVP
While the MVP was deployed within the iframe, we worked with the user to identify the challenges we needed to resolve.

1. Reducing duplicates by clustering
Since the same news event is published by multiple news sources, we introduced clustering of news articles based on semantic matching of article text, in addition to date, disease & location.
This reduced the number of media alerts shown to the user by 55%.
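The production clustering relied on ML-based semantic matching, which is under NDA. As a rough, hypothetical sketch of the idea, the snippet below groups articles that share the same date, disease & location and whose texts overlap strongly, using simple word-set (Jaccard) overlap as a stand-in for semantic similarity; all names and the threshold are illustrative.

```python
def tokens(text):
    return set(text.lower().split())

def similarity(a, b):
    # Jaccard overlap as a stand-in for semantic text matching
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cluster_articles(articles, threshold=0.5):
    # Articles with the same (date, disease, location) and similar
    # text join the same cluster; the user then sees one alert per cluster.
    clusters = []
    for art in articles:
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster's first article
            same_keys = (art["date"], art["disease"], art["location"]) == \
                        (rep["date"], rep["disease"], rep["location"])
            if same_keys and similarity(art["text"], rep["text"]) >= threshold:
                cluster.append(art)
                break
        else:
            clusters.append([art])
    return clusters
```

In the real tool the representative article of each cluster is what the MSA reviews, which is what drove the 55% reduction in alerts shown.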


2. Introduction of AI translation as the default
When we compared manually published health alerts with those published using the AI tool, we learned that the user missed information the AI captured in Hindi, as they were not selecting Hindi in the dropdown.
We expanded coverage to 9 other Indian languages and introduced AI translation into English as the default, which reduced missed events by 80%.

Disease field shown as "Others": the usual reason was a lack of article classification based on the disease types IHIP was interested in. Based on this learning, we moved to multi-label article classification.
3. Improved article classification for diseases
Although the disease list initially had 36 disease names, the tool captured multiple other diseases, creating a large "Others" category (70% of diseases were captured as Others). We made a programmatic effort to create a comprehensive disease list (123 diseases) so that media scanning could shift from binary to multi-label classification.
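The shift from binary to multi-label classification can be illustrated with a toy keyword-based tagger (the production system used trained ML classifiers over the expanded 123-disease list; the keyword map below is entirely hypothetical): each article can now receive several disease labels, and only falls into "Others" when nothing matches.

```python
# Hypothetical keyword map; the real list covered 123 diseases.
DISEASE_KEYWORDS = {
    "dengue": {"dengue"},
    "cholera": {"cholera", "acute watery diarrhoea"},
    "measles": {"measles", "rubeola"},
}

def classify_diseases(article_text):
    # Multi-label: one article may mention several diseases at once.
    text = article_text.lower()
    labels = {disease for disease, kws in DISEASE_KEYWORDS.items()
              if any(kw in text for kw in kws)}
    # Fall back to the catch-all bucket only when nothing matched.
    return sorted(labels) or ["Others"]
```

With a 36-disease binary setup, most of these articles would have landed in "Others"; widening the label set is what shrank that bucket.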

Empty location fields: even when the AI models captured a location, it was not matched, as the location directory IHIP used was based only on districts & states, while media news mentioned specific areas & omitted district values.
4. Empty location values due to various issues
The biggest challenge was that media coverage of locations was unstructured. This required us to update the directory and create a better way of reading locations. The second issue was international locations, which were not resolved by the regular domain blocker and required multiple changes.
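The directory update can be pictured as a two-step lookup: first try to map a sub-district area mentioned in the news onto IHIP's district/state directory, then check whether the mention is already a district name. This is a hypothetical sketch (the place names and table are illustrative, not the real directory):

```python
# Hypothetical area-to-district mapping built during the directory update.
AREA_TO_DISTRICT = {
    "andheri": ("Mumbai Suburban", "Maharashtra"),
    "hauz khas": ("South Delhi", "Delhi"),
}
# IHIP's directory only knew (district, state) pairs.
DISTRICTS = {("Mumbai Suburban", "Maharashtra"), ("South Delhi", "Delhi")}

def resolve_location(mention):
    key = mention.strip().lower()
    # Step 1: a specific area named in the article maps to its district.
    if key in AREA_TO_DISTRICT:
        return AREA_TO_DISTRICT[key]
    # Step 2: the mention may already be a district name.
    for district, state in DISTRICTS:
        if key == district.lower():
            return (district, state)
    return None  # left empty for manual review by the MSA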
Wireframing
We worked with the user to identify all the challenges from the MVP and built wireframes for the information flow.
Final Product UX/UI
- Media Alerts List: the user sees a list of all media alerts & health information processed by ML models, which they verify before shortlisting
- Shortlisted Tab: shortlisted alerts await approval from an epidemiologist
- Published Tab: the user can follow the progress of a published alert
1. Bringing transparency to user for actions taken
1. Status of a published alert in real time
The user is able to see the status of an event they published in real time. There are 4 stages to a published event:
- Ongoing Investigation
- Ongoing Outbreak
- Closed Outbreak
- Closed Event
They can also see the investigation report by clicking on the status bar.


2. Status of a shortlisted event
When the Media Surveillance Assistant shortlists an event, it needs to be approved by an epidemiologist before it can be published. A shortlisted event involves 3 states:
- Pending Approval: the MSA can nudge the epidemiologist
- Approval Denied: the MSA can see the reason
- Approved: the MSA can publish the health alert as soon as it is approved



2. System suggestions & prompts
1. Creating space for human error
Once an event is published, the action cannot be reversed. A mistaken publication can lead to unnecessary panic, and this is a huge cognitive burden for the user. We provided the ability to undo an event within a few seconds of publishing it, by creating a buffer before the event is sent to the backend system.


2. A smarter system to alert the user
Since the media tool keeps capturing articles similar to a previously published event, the system alerts the user about such captures and deprioritises those events.
The user also gets alerts on the status of shortlisted and published events.
3. Improved control through automation of various aspects

1. Improved UI based on user suggestions
The filters act as search and were designed to keep the UX familiar to IHIP users. More detailed filters were introduced to provide more accurate search results. One key change was the Date & Time filter: we learned that time was a critical factor and made the default range 6 p.m. (or the time the user logged out) to 10 a.m. (or the time the user logged back in).
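The default time window described above can be sketched as a small helper: start at the user's last logout (falling back to 6 p.m. the previous day) and end at the current login. This is an illustrative sketch, not the product's actual implementation:

```python
from datetime import datetime, time, timedelta

def default_filter_window(now, last_logout=None):
    """Default Date & Time filter: from 6 p.m. yesterday (or the
    user's last logout, if known) up to the current login time."""
    start = last_logout or datetime.combine(
        now.date() - timedelta(days=1), time(18, 0))
    return start, now
```

This default means the MSA's first scan of the day covers everything published overnight without any manual filter setup.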

By including secondary users, we were able to measure the impact of the AI tool end to end across the media scanning process
2. Optimisation of media scanning & raising alerts
By bringing the District Surveillance Officer into this journey and moving the processes online, we were able to track and optimise the time taken by the Media Surveillance Assistant, from scanning the media to following up on the DSO's response.
Impact of the product

01
Time reduction in performance due to automation
- 37.5% in shortlisting a potential health alert (0.8 → 0.5)
- 96.4% in publishing a health alert (2.8 → 0.1)
- 80.6% in the time taken by DSOs to respond (13.9 → 2.7)

02
Increased reach due to increased sources
87.8% increase in districts reached by the Media Scanning & Verification Cell in 2023-2024 (478) compared with 2021-2022 (238), before the AI tool

03
Higher number of alerts published using the AI tool
90.7% of all alerts published from 1st April '22 to 30th March '24 were published via the AI tool

04
Higher number of AI-tool alerts converted into outbreaks
90.9% of all outbreaks detected between 1st April '22 and 30th March '24 were captured by the AI tool
Reduced fear and high adoption of the AI tool by the user
While discussing the event load after a weekend, the MSA said, "You have returned my peaceful night's sleep."
This was because the user now relies on the AI tool to filter media news instead of doing it manually & trusts it not to miss events.

Process 2.0
After stabilising the product, we spent time on the ground to understand the end-to-end process and how media surveillance was actually used. We used this knowledge to align our future goals.
01
Foundational research to understand Outbreak monitoring on ground
Primary research
02
Building the vision and identifying the key metrics along with team
Co-Design workshop
Process
Outcomes 2.0



1. Key goals were applied to the front and back end
The four key goals (Early Alerts, Trustworthiness, Ease of Use, and Smart & Continuous) were applied across ML, engineering, design and programme work. This changed the expectations of the ML model and other teams about the performance standards needed to meet the product's vision.

2. Key metrics led to transparency of impact
The key metrics were used to prepare a dashboard utilised by stakeholders and program managers to oversee the AI-powered media surveillance function. It also provided a shared language for conversations about the benefit of the ML models.
Ongoing

Explainability of model behaviour
One key challenge has been communicating model behaviour to stakeholders & users: identifying gaps between model and system operations and sharing real-time updates. We realised this needed to be integrated into the product rather than relying on training about AI models, as that information was easily forgotten.