Session

Friday, June 23, 2017 - 14:00 to 15:30
Novel Method for Storyboarding Biomedical Videos for Medical Informatics
Abstract: 
We propose a novel method for developing static storyboards for video clips included with biomedical research literature. The technique uses both the visual and audio content of the video to select candidate key frames for the storyboard. From the visual channel, intra-frames are extracted using the FFmpeg tool. The IBM Watson speech-to-text service is used to extract words from the audio channel, from which clinically significant concepts (key concepts) are identified using the U.S. National Library of Medicine's Repository for Informed Decision Making (RIDEM) service. These concepts are synchronized with the key frames, from which our algorithm selects relevant frames to highlight in the storyboard. To test the system, we first created a reference set through a semiautomatic approach and then measured system performance with informativeness and fidelity metrics. Results from pilot testing, based on both subjective visual assessment and quantitative metrics, are promising. We plan to conduct a formal user evaluation in the future.
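The following is a minimal sketch, not the authors' implementation, of the first two steps described above: extracting intra-frames with FFmpeg and aligning them with timestamped key concepts from the audio transcript. The file name, the concept timestamps, and the nearest-timestamp matching rule are assumptions for illustration only; the speech-to-text and RIDEM calls are omitted.

```python
# Sketch: extract I-frames with FFmpeg and pair them with concept timestamps.
# The matching rule (nearest intra-frame in time) is an assumption.
import re
import subprocess

def extract_iframes(video_path, out_pattern="iframe_%04d.png"):
    """Save each intra-frame as an image and return its timestamp in seconds."""
    cmd = [
        "ffmpeg", "-i", video_path,
        "-vf", "select='eq(pict_type,I)',showinfo",  # keep only I-frames, log frame info
        "-vsync", "vfr",                             # one output image per selected frame
        out_pattern,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # showinfo writes per-frame metadata (including pts_time) to stderr
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", result.stderr)]

def match_concepts_to_frames(frame_times, concept_times):
    """For each key-concept timestamp, pick the index of the nearest intra-frame."""
    return [
        min(range(len(frame_times)), key=lambda i: abs(frame_times[i] - t))
        for t in concept_times
    ]

if __name__ == "__main__":
    frames = extract_iframes("biomedical_clip.mp4")   # hypothetical input video
    concepts_at = [12.4, 37.9, 81.2]                  # hypothetical concept times (s)
    print(match_concepts_to_frames(frames, concepts_at))
```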
Sema Candemir
Sameer Antani
U.S. National Library of Medicine / NIH (USA)
Zhiyun Xue
George Thoma