Researchers have developed a computational tool that learns from headcam footage of complex tasks to predict where the user will look next. The tool combines 'visual saliency' maps, which highlight visually distinctive regions of each frame, with 'gaze prediction' based on head movement and previous gaze direction. It could enable real-time guidance from headcam footage during complex tasks such as surgery and manufacturing.
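To make the idea of fusing a saliency map with a motion-informed gaze prior more concrete, here is a minimal Python sketch. It is purely illustrative: the gradient-magnitude saliency stand-in, the Gaussian prior centred on the previous gaze point shifted by head motion, the multiplicative fusion, and all names (`saliency_map`, `gaze_prior`, `predict_gaze`, `sigma`) are assumptions for demonstration, not the researchers' published model.

```python
import numpy as np

def saliency_map(frame: np.ndarray) -> np.ndarray:
    """Crude saliency stand-in: local intensity gradient magnitude.
    (A stand-in only; the actual tool's saliency model is not specified here.)"""
    gy, gx = np.gradient(frame.astype(float))
    sal = np.hypot(gx, gy)
    return sal / (sal.sum() + 1e-9)

def gaze_prior(shape, prev_gaze, head_motion, sigma=20.0):
    """Gaussian prior centred on the previous gaze point shifted by head
    motion (both in pixel coordinates) -- an illustrative modelling choice."""
    h, w = shape
    cy = prev_gaze[0] + head_motion[0]
    cx = prev_gaze[1] + head_motion[1]
    ys, xs = np.mgrid[0:h, 0:w]
    prior = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return prior / (prior.sum() + 1e-9)

def predict_gaze(frame, prev_gaze, head_motion):
    """Fuse the two maps multiplicatively and return the most likely
    future gaze location together with the full probability map."""
    fused = saliency_map(frame) * gaze_prior(frame.shape, prev_gaze, head_motion)
    fused /= fused.sum() + 1e-9
    return np.unravel_index(np.argmax(fused), fused.shape), fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((240, 320))  # placeholder greyscale headcam frame
    gaze_point, prob_map = predict_gaze(frame, prev_gaze=(120, 160), head_motion=(-5, 10))
    print("Predicted gaze (row, col):", gaze_point)
```

The multiplicative fusion reflects the intuition described above: a region is a likely future gaze target only if it is both visually salient and consistent with where the head and eyes have recently been pointing.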