
An Expanded Research Focus for ODU Vision Lab Comes Into View

For more than a decade, Old Dominion University's Machine Vision and Computational Intelligence Laboratory (Vision Lab) has conducted complex, groundbreaking vision and video research.

Founded following a three-year project to develop a computer-based facial image detection and recognition system, the Vision Lab has expanded its reach into a range of defense and homeland security, medical image analysis and intelligent transportation applications, receiving numerous grants from the Department of Defense, the Department of Homeland Security, the National Institutes of Health and NASA. In 2012, the lab received about $500,000 in grant funding.

Under the direction of Khan Iftekharuddin, a professor of electrical and computer engineering who came to ODU in 2011, the Vision Lab is also expanding its view.

"Our national security research projects are still a very important part of what we do," Iftekharuddin said. "But there are so many real-world applications of machine vision technology, it's exciting to think of the possibilities."

Here are highlights of some of the new projects being led by Vision Lab researchers:

  • Real-time characterization of traffic

Using real-time traffic video from existing Virginia Department of Transportation (VDOT) camera locations, Vision Lab researchers are developing algorithms to characterize each vehicle that passes through a camera's field of view, from a four-cylinder sports car to an 18-wheel rig. By combining the real-time traffic data with existing databases of traffic flows and emissions, the lab is working to produce an accurate snapshot of the emission levels being generated in the region at any given time.

The project, led by Jeff Flora, a graduate student, and Amr Yousef, a postdoctoral fellow, could potentially support intelligent, green transportation. The Vision Lab is collaborating on this research with ODU's newly established Center for Innovative Transportation Solutions.

"We are developing novel computational algorithms to perform robust vehicle segmentation in low-resolution video source under different weather, lighting and traffic conditions, with a focus on green and sustainable environment," Iftekharuddin said.

  • Three-dimensional modeling of the facial expressions of children with autism spectrum disorder (ASD)

Among the advanced equipment available at the Vision Lab is a camera system that can create 3-D images. Together with researchers at Eastern Virginia Medical School and Children's Hospital of The King's Daughters, the Vision Lab is investigating how that technology can help identify symptoms of autism spectrum disorder through 3-D facial expression analysis. Computer vision techniques can help analyze spontaneous expressions in 3-D facial images of children with and without ASD as they view different stimuli. A 3-D image not only contains more information than conventional two-dimensional data, but it is also less affected by variations in illumination and head pose.

The project is led by graduate students Manar Samad and Lasitha Vidyaratne. With the prevalence of ASD in the United States estimated at 1 in 88 children, the importance of robust tools for early diagnosis cannot be overemphasized. The Vision Lab researchers believe that 3-D image analysis tools may help in this effort by capturing 3-D facial dynamics in a natural social setting.
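As a loose illustration of the kind of measurement involved, rather than the lab's actual analysis, the sketch below computes a simple facial-motion curve from a sequence of 3-D landmarks. The landmark format and frame layout are assumptions.

```python
import numpy as np

def expression_dynamics(landmark_frames):
    """Frame-to-frame displacement of 3-D facial landmarks.

    landmark_frames: array of shape (T, N, 3) -- T frames of N landmarks (x, y, z),
                     assumed to come from a 3-D camera system.
    Returns a length T-1 curve of mean landmark motion, a crude proxy for how
    strongly the face is moving while the child views a stimulus.
    """
    frames = np.asarray(landmark_frames, dtype=float)
    diffs = np.linalg.norm(frames[1:] - frames[:-1], axis=2)  # per-landmark motion, shape (T-1, N)
    return diffs.mean(axis=1)

# Toy example: 5 frames of 3 landmarks drifting slightly.
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(scale=0.1, size=(5, 3, 3)), axis=0)
print(expression_dynamics(frames))
```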

  • Volume computation of tumors through medical image analysis

In 2012 alone, 22,910 new cases of, and 13,700 deaths from, brain and nervous system tumors were recorded in the U.S. Accurate brain tumor segmentation (BTS) is needed for image-guided surgery and radiation therapy, among other treatments. In a typical clinic, radiologists review a large number of multimodal MRIs per day and perform BTS manually. This process is tedious and prone to error because the radiologists work under time constraints. Because of the complex structures of different normal and abnormal tissues in the brain, robust measurement of tumors is challenging.

The Vision Lab has developed stochastic multiresolution fractal computational models and tools that extract information about a brain tumor from magnetic resonance images for accurate measurement. The lab is working with clinical collaborators at the Children's Hospital of Philadelphia and the San Diego VA Hospital on this project.
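As a much-simplified stand-in for those richer stochastic multiresolution fractal models, the sketch below estimates a basic box-counting fractal dimension for a 2-D binary region, such as a thresholded MRI slice. It is an illustration of the general idea of fractal texture measures, not the lab's method.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the box-counting (fractal) dimension of a 2-D binary region."""
    mask = np.asarray(mask, dtype=bool)
    sizes, counts = [], []
    box = max(mask.shape) // 2
    while box >= 1:
        # Count boxes of side `box` that contain at least one foreground pixel.
        count = 0
        for i in range(0, mask.shape[0], box):
            for j in range(0, mask.shape[1], box):
                if mask[i:i + box, j:j + box].any():
                    count += 1
        sizes.append(box)
        counts.append(count)
        box //= 2
    # Slope of log(count) versus log(1/box size) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy example: a filled square should come out close to dimension 2.
demo = np.zeros((64, 64), dtype=bool)
demo[16:48, 16:48] = True
print(box_counting_dimension(demo))
```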

The research, led by Shamim Reza, Linmin Pei and Mahbubul Alam, shows promise in automated extraction and segmentation of brain tumors.

"The prevailing practice of manual tumor boundary and volume estimation can be inaccurate, relying on between one and three planar MRI measurements for often irregular tumor segments," Iftekharuddin said. "Our computational models are expected to help in automating the BTS and volume computation for radiologists in clinics."

  • Shoreline detection and mapping using DEM data and aerial photos

Beach erosion is a chronic problem along most of the open-ocean shores of the U.S. As sea level rises and coastal populations continue to grow, there is increasing demand for accurate knowledge of shoreline position. The primary goal of this project is to develop a novel, accurate technique for mapping and analyzing shoreline position.

The project, led by Amr Yousef, uses digital elevation models (DEMs) and aerial photos to map the precise trace of the shoreline. The Vision Lab is pursuing this goal with two alternative methods: (1) extracting the shoreline position from a DEM and a local tidal datum at the study area; and (2) extracting the shoreline position from a DEM fused with aerial photos.
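As a toy illustration of the first method only, the sketch below marks shoreline cells in a DEM given a tidal datum elevation. The datum value, grid layout and thresholding rule are assumptions for the example, not the project's actual data or technique.

```python
import numpy as np

def shoreline_mask(dem, tidal_datum_m):
    """Boolean mask of shoreline cells: land cells in a DEM that touch water.

    dem: 2-D array of elevations in metres from a digital elevation model.
    tidal_datum_m: local tidal datum elevation; cells at or below it are treated as water.
    """
    dem = np.asarray(dem, dtype=float)
    water = dem <= tidal_datum_m
    land = ~water
    # A land cell is on the shoreline if any 4-connected neighbour is water.
    neighbour_water = np.zeros_like(water)
    neighbour_water[1:, :] |= water[:-1, :]
    neighbour_water[:-1, :] |= water[1:, :]
    neighbour_water[:, 1:] |= water[:, :-1]
    neighbour_water[:, :-1] |= water[:, 1:]
    return land & neighbour_water

# Toy example: a beach sloping from -1 m to +1 m; the detected shoreline is one column of cells.
dem = np.tile(np.linspace(-1.0, 1.0, 10), (5, 1))
print(np.argwhere(shoreline_mask(dem, tidal_datum_m=0.0)))
```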

"The longer-term goal is to link receding shoreline information in the Hampton Roads area to the sea level rise initiative for wetland and flood monitoring," Iftekharuddin said.

  • Autonomous wireless radar sensor mote for perimeter surveillance

Autonomous wireless sensor networks, consisting of different types of sensors, have been receiving growing attention because of their versatility and portability. These autonomous sensor networks commonly include passive sensors such as infrared, acoustic, vibration and magnetic nodes. However, fusing active sensors such as Doppler radars into the integrated network may offer powerful capabilities for many different sensing and classification tasks.

A Vision Lab project continues prior work with the FedEx Institute of Technology in Memphis to design and implement an autonomous wireless sensor network that integrates a Doppler sensor with commercial off-the-shelf components. Vision Lab researchers built a toy experiment to test the radar-mote sensor network: an electric train runs past the wireless motes, tripping the sensors' fields of view. The sensors then pick up different kinds of information, including movement, the type of object, its material and its direction of travel, and wirelessly transmit the data to a remote processing center for further analysis.
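As a rough sketch of what a single mote's detection step could look like, assuming simulated baseband samples rather than real radar hardware, consider the following. The sample rate, threshold and signal model are assumptions for the example.

```python
import numpy as np

def detect_motion(samples, sample_rate_hz, power_threshold=1.0):
    """Toy Doppler-mote detection step: flag motion and report the dominant
    Doppler frequency in one burst of baseband samples.

    Returns (motion_detected, dominant_freq_hz) -- the kind of compact message
    a mote might radio to the processing center instead of raw data.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    spectrum[0] = 0.0                      # ignore the DC (no-motion) component
    peak = int(np.argmax(spectrum))
    motion = spectrum[peak] > power_threshold
    return bool(motion), float(freqs[peak])

# Toy example: a 40 Hz Doppler tone plus noise, roughly what a slow-moving target might produce.
sample_rate = 1000
t = np.arange(0, 1.0, 1.0 / sample_rate)
burst = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.01 * np.random.randn(t.size)
print(detect_motion(burst, sample_rate))
```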

"Such a wireless radar-mote sensor network can be used for high-performance perimeter surveillance purposes at a fraction of cost for similar performing high-end radar-enabled devices," Iftekharuddin said.

  • Psychology-driven human movement analysis for crowd sourcing

In a relatively new project, the Vision Lab is working to interpret observed behavior in full motion video (FMV), using vision-based techniques to recognize the actions and activities of the people in the videos. The work also involves predicting activities from behavioral cues gleaned from observing an individual or a crowd in FMV.

The project, led by Kawsi Afrifa, includes using a computational model that links behavioral cues to particular actions, activities and emotions in order to train the vision-based learning machine. "Development of such a psychology-driven model can be crucial in understanding large-crowd behavior in natural and man-made disasters," Iftekharuddin said.
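As a deliberately crude illustration of the idea of linking movement cues to labels, the sketch below uses hand-written rules; the cue names, thresholds and labels are hypothetical, standing in for what the learned psychology-driven model would do far more robustly.

```python
# Minimal sketch: map simple movement cues extracted from FMV to coarse
# activity labels. All cues, thresholds and labels here are hypothetical.

def label_behavior(speed_m_s, group_size, dispersion_m):
    """Rule-of-thumb stand-in for a learned model: classify a tracked group
    from three hypothetical cues (average speed, head count, spatial spread)."""
    if speed_m_s > 3.0 and dispersion_m > 10.0:
        return "dispersing crowd (possible panic)"
    if speed_m_s > 3.0:
        return "running group"
    if group_size > 20 and speed_m_s < 0.5:
        return "stationary gathering"
    return "routine pedestrian movement"

# Example cues as they might come from the tracking stage.
print(label_behavior(speed_m_s=4.2, group_size=35, dispersion_m=15.0))
```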

The Vision Lab, an enterprise center of ODU's Frank Batten College of Engineering and Technology, focuses on developing new algorithms for real-time applications of machine vision technology, for use in areas such as surveillance cameras, night-vision systems and motion sensing.
