Research

We are living in the age of data. Visualization plays a key role in unraveling the complexity underlying ever-increasing amounts of data: it enables us to visually detect interesting data features, assess situations, and make effective scientific decisions. One of the major challenges faced by visualization systems is the noise inherent in input data and computational processes. Sensor and quantization errors introduced during data acquisition, together with algorithmic errors, can propagate through the visualization pipeline and adversely impact the accuracy of the final visualizations, leading to incorrect analyses of the data. Integrating uncertainty into visualizations can therefore enhance the reliability of scientific decisions by enabling reasoning that accounts for that uncertainty.

In my research, I have studied the interaction of uncertainty in data with key scientific visualization algorithms, specifically the marching cubes algorithm for isosurface rendering, volume ray casting, topology-based feature extraction with Morse complexes, and feature level sets. The results of my research illustrate how uncertainty-aware visualizations can help us gain insight into high- vs. low-confidence features present in the data, thereby increasing the reliability of scientific decisions. I have successfully demonstrated the application of my research to several domain-specific datasets, including an ensemble representing Red Sea eddy simulations (~1.5 TB) and brain imaging for patients implanted with deep brain stimulation (DBS) electrodes.
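As a minimal illustration of one such interaction (a sketch in the spirit of probabilistic marching cubes, not the exact method from my publications), the snippet below estimates per-cell level-crossing probabilities for an ensemble of scalar fields: a cell is counted as crossed by the isosurface in an ensemble member whenever the isovalue lies between the minimum and maximum of the cell's corner values. The function name crossing_probability and the synthetic example data are hypothetical, chosen only for illustration.

```python
import numpy as np

def crossing_probability(ensemble, isovalue):
    """Estimate per-cell level-crossing probability from an ensemble.

    ensemble : array of shape (n_members, nx, ny, nz) of scalar fields.
    isovalue : isovalue of interest.

    Returns an array of shape (nx-1, ny-1, nz-1) giving, for each grid
    cell, the fraction of ensemble members in which the isosurface
    crosses that cell.
    """
    n, nx, ny, nz = ensemble.shape
    # Gather the eight corner values of every grid cell.
    corners = np.stack([
        ensemble[:, dx:nx - 1 + dx, dy:ny - 1 + dy, dz:nz - 1 + dz]
        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
    ], axis=-1)  # shape: (n_members, nx-1, ny-1, nz-1, 8)
    # A cell is crossed in a member iff its corner values straddle the isovalue.
    crossed = (corners.min(axis=-1) <= isovalue) & (corners.max(axis=-1) >= isovalue)
    # Monte Carlo estimate: fraction of members in which each cell is crossed.
    return crossed.mean(axis=0)

# Hypothetical example: a 20-member ensemble of noisy spherical distance fields.
rng = np.random.default_rng(0)
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij")
base = np.sqrt(x**2 + y**2 + z**2)
ensemble = base[None] + 0.05 * rng.standard_normal((20, 32, 32, 32))
prob = crossing_probability(ensemble, isovalue=0.7)
print(prob.shape, prob.max())
```

Cells with probability near 1 correspond to high-confidence portions of the isosurface, while intermediate probabilities flag regions where the noise makes the feature's location uncertain; visual encodings of such per-cell probabilities are one way uncertainty-aware isosurface rendering distinguishes high- from low-confidence features.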