Introduction


The workshop brings together leading researchers in video activity analysis to discuss current challenges in the field. The VIRAT Video Dataset for action recognition is presented, along with initial results obtained on it.


Program


9:00-9:30  Mita Desai, DARPA

        Title: The VIRAT Program

9:30-10:00  Anthony Hoogs & Sangmin Oh, Kitware

        Title: The VIRAT Video Dataset and Competition

10:00-10:30  Jitendra Malik, UC Berkeley

        Title: Action Recognition from Pose and Appearance

10:30-11:00  Break


11:00-11:30  Ram Nevatia, USC

        Title: Simultaneous Tracking and Activity Recognition

11:30-12:00  Jeff Siskind, Purdue University

        Title: Generating English Utterances to Describe Events in Video

12:00-1:30  Lunch


1:30-2:00  Stan Sclaroff, Boston University

        Title: Object, Scene and Actions: Multi-Feature MIL for Human Action Recognition

2:00-2:30  Fei-Fei Li, Stanford University

        Title: Recognizing Human-Object Interaction Activities

2:30-3:00  Deva Ramanan & Carl Vondrick, UC Irvine

        Title: Web-Sourcing Video Annotations on the VIRAT Video Dataset

3:00-3:30  Break


3:30-3:50  Challenge Results: Universitat Autonoma de Barcelona

3:50-4:10  Challenge Results: SUNY Buffalo


4:10-5:10  Panel: Actions to Activities


Dataset


The dataset is available at www.viratdata.org. The VIRAT Video Dataset web page contains all information about the dataset and competition, including download instructions, annotation formats, and the scoring methodology.


People


Organizers:

Prof. Larry Davis (lsd@umiacs.umd.edu)

Dr. Anthony Hoogs (anthony.hoogs@kitware.com)


Acknowledgements


The dataset collection, filtering, annotation, and distribution are sponsored by the DARPA VIRAT program.

VIRAT program participants are not eligible for the competition.


Disclaimer


The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.