Most computers are boxes that
sit on desks. In my research, the computer is an active part of the world around it.
It observes the environment and acts based on its observations and the intentions of users. I develop systems that
are not mere tools but are interactive parts of their environment that extend human capabilities. To harness the
potential of these systems, interfaces must be developed between the artificial system and its human collaborator.
Toward that end, I am interested in the areas of computer vision, robotics, artificial intelligence, and user interfaces.
My thesis work in computer vision and robotics extended energy-minimizing active deformable models (a.k.a. snakes)
into the real-time domain and applied the results to robot control. I applied these efficient deformable models to
a number of problems, from pedestrian and mobile-robot tracking to visual servo control of robotic manipulators. Recently,
I have applied my fast snakes to the segmentation of 3D real-time ultrasound imagery for minimally invasive intracardiac
procedures. My algorithm's segmentation rate matches the ultrasound device's output of twenty-five volumes (ninety-two
million voxels) per second or, expressed as a video frame rate, over four thousand frames per second. By comparison,
level sets, a similar deformable-model technique, run far slower and require specialized hardware.
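To make the idea concrete, here is a toy sketch (not my actual real-time implementation) of the greedy minimization step behind snakes: each contour point moves to the neighboring pixel that minimizes a weighted sum of an internal continuity term and an external image-energy term. The weights and the synthetic circular image energy are illustrative assumptions.

```python
# Minimal greedy "snake" (energy-minimizing active contour) sketch.
import math

def greedy_snake_step(points, image_energy, alpha=1.0, gamma=1.0):
    """One greedy pass: move each contour point to the 8-neighbor (or
    keep it in place) that minimizes continuity energy + image energy."""
    new_points = list(points)
    for i, (x, y) in enumerate(points):
        px, py = new_points[i - 1]  # previous point; wraps for a closed contour
        best, best_e = (x, y), float("inf")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = x + dx, y + dy
                cont = (cx - px) ** 2 + (cy - py) ** 2  # internal term
                e = alpha * cont + gamma * image_energy(cx, cy)
                if e < best_e:
                    best, best_e = (cx, cy), e
        new_points[i] = best
    return new_points

# Illustrative external energy: low on a circle of radius 5 around (10, 10),
# so a contour initialized outside shrinks onto that circle.
circle_energy = lambda x, y: (math.hypot(x - 10, y - 10) - 5.0) ** 2
```

A full snake adds a curvature term and derives the image energy from gradient magnitude, but the structure of the iteration is the same; the real-time versions reorganize this computation rather than change its logic.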
Image-guided medical procedures are an ideal application domain for surgical robotics research and a logical extension of my work with manipulators. Recently, I have been studying ways in which the sense of touch (haptics) can be transmitted electromechanically. Transmitting haptic information from within the patient to the surgeon through a user interface provides a sense of touch previously unavailable in minimally invasive procedures. The transmitted feeling may come from a sensor in the real world or may be constructed virtually. Synthetic feelings are a necessary part of intuitive user interfaces for interaction with virtual models.
I have demonstrated the use of these haptic systems for remote palpation. I integrated an existing haptic display with a manipulator and conducted a user study examining stiffness discrimination ability with both tactile and kinesthetic (classic) force-feedback components. Subjects exploring a virtual model were able to reliably detect a 20 percent difference in rendered material stiffness. This result is in line with physiological studies of compliance discrimination of real objects by human subjects.
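As a deliberately simplified illustration, the kinesthetic component of such a display is commonly rendered with a penalty-based spring law: force proportional to how far the probe has penetrated the virtual surface. The stiffness values below are hypothetical, chosen only to show what a 20 percent stiffness difference means in rendered force.

```python
def wall_force(position, wall_pos=0.0, stiffness=400.0):
    """Penalty-based virtual wall: spring force proportional to penetration.

    position and wall_pos are in meters; stiffness is in N/m. The probe
    penetrates the wall when position < wall_pos; outside it, no force.
    """
    penetration = wall_pos - position
    return stiffness * penetration if penetration > 0.0 else 0.0

# A 20 percent stiffness difference, felt at 1 cm of penetration:
soft = wall_force(-0.01, stiffness=400.0)   # 4.0 N
stiff = wall_force(-0.01, stiffness=480.0)  # 4.8 N
```

The tactile component of the display layers fingertip deformation on top of this net force; the study compared discrimination with and without that layer.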
My work in user interface research quantified the interface design process for a system used to view confocal microscopy images of rat neurobiology. These large data sets were shared between two sites, one in the US and one in Sweden. One of the problems I addressed was how to build a user interface for synchronous viewing and manipulation of enormous data sets across large geographic (and therefore latency) distances. This work received best paper and best student paper awards when presented.
|Real-time Computer Vision on Clusters|
My work in real-time volume segmentation will allow the computer to take on some of a surgeon's cognitive load by providing quantifiable measures of heart function, orientation cues, and virtual fixtures to assist in tool guidance.
While my segmentation rates are impressive, more complex volumetric processing (e.g., 3D ultrasound motion analysis and on-the-fly segmentation) will require real-time parallel computing if it is to be used in the operating room. Even non-real-time volumetric computer vision applications (e.g., resolution enhancement) can benefit from parallelization. Parallel implementations should cut processing times from days to minutes and greatly extend the usefulness of these applications. I intend to continue working with clinical researchers and surgeons to measure heart performance in real time, to track tools for minimally invasive procedures (heart and prostate), to generate patient-specific models, and to quantify fast-moving structures in the heart using image enhancement (for improved model generation).
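The decomposition I have in mind is straightforward: partition the volume into contiguous slabs and run the same voxel kernel on each slab concurrently. The sketch below is an assumed, minimal illustration using a thread pool and a trivial brightness-counting kernel; in practice the executor would be a process pool or cluster nodes, and the kernel a real vision operation.

```python
from concurrent.futures import ThreadPoolExecutor

def count_bright(slab, threshold=128):
    """Per-slab kernel: count voxels above an intensity threshold.
    A slab is a list of 2D slices (lists of rows of voxel values)."""
    return sum(1 for slice_ in slab for row in slice_ for v in row
               if v > threshold)

def parallel_count(volume, workers=4, threshold=128):
    """Split the volume into contiguous slabs along z, one per worker,
    run the kernel on each slab concurrently, and combine the results."""
    step = max(1, len(volume) // workers)
    slabs = [volume[i:i + step] for i in range(0, len(volume), step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(lambda s: count_bright(s, threshold), slabs))
```

The key property is that the slab results combine associatively, so the same decomposition scales from threads to processes to cluster nodes without changing the kernel.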
For initial funding I intend to apply for an NSF CAREER grant to extend a number of computer vision techniques to parallel architectures. I will also develop real-time deformable models more general than parametric snakes.
As a member of a proposal evaluation panel for the NSF Computing Research Infrastructure (CRI) program, I saw that NSF is willing to fund the construction of small computing clusters for novel applications, especially if the computational resources can be shared across a number of projects. Additional funding for this work, particularly for personnel, would come from NSF computer vision (CV) proposals. Because CRI grants extend existing NSF grants, the CV proposal would be my first target.
|Surgical Assistance and Planning using Robotics and Parallel Processing|
My research plan includes continued exploration of the use of computing and robotics for clinical applications. Computer-aided medicine has the potential to change the face of medicine by enhancing clinical skills, expediting medical procedures, and providing novel diagnostics. Although much of the technology already exists, implementation is lacking. Through collaboration with clinicians, I intend to help identify areas that could benefit from improved user interfaces and from quantitative information supplied by computer vision and modeling.
An example of this kind of collaboration is developing procedures for treating heart defects that require reconstruction of part or all of the heart. Congenital heart defect management remains a subjective and qualitative process despite the detailed quantitative information made available by modern imaging modalities. While patient-specific cardiac geometry, motion, and flow dynamics can be determined from images, this information is not provided to the cardiologist and surgeon in a form conducive to reconstructive surgical planning. As a result, the physiological implications of the patient's anatomy cannot be readily integrated into the reconstructive plan, and the clinician is forced to rely instead on empirical knowledge and instinct.
Surgical planning and technique will benefit substantially from predictive modeling based on quantitative, patient-specific information. This can be achieved by developing surgical planning and simulation tools (software and hardware) that permit the comparison of surgical options and enable physiological "what if?" analysis of proposed reconstructions. Such tools will let non-computing specialists (surgeons) manipulate proposed reconstruction models at interactive rates before simulating them. A promising research direction is to use surgical-robot-style interfaces for interaction between these specialists and the cardiac models and simulations.
In addition to sophisticated robotics, real-time computing will be needed to compute interactions (forces, deformations, etc.) within the models. Computer vision will be used to build patient-specific models from volumetric imagery and to validate the models against the observed cardiac function postoperatively.
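As a rough, assumed sketch of the kind of interaction computation involved, consider one damped time step of a one-dimensional mass-spring chain, about the simplest deformable-tissue model there is; a surgical simulator would run steps like this over a full 3D mesh at interactive (and, for haptics, kilohertz) rates.

```python
def spring_step(positions, velocities, springs, dt=0.001, k=50.0, damping=5.0):
    """One semi-implicit Euler step of a 1D mass-spring chain (unit masses).

    springs is a list of (i, j, rest_length) tuples; each spring pushes or
    pulls its endpoints toward its rest length, and damping bleeds off
    velocity so the mesh settles instead of oscillating forever.
    """
    forces = [0.0] * len(positions)
    for i, j, rest in springs:
        stretch = (positions[j] - positions[i]) - rest
        f = k * stretch
        forces[i] += f   # pulled toward j when stretched
        forces[j] -= f
    new_v = [(v + dt * f) * (1.0 - damping * dt)
             for v, f in zip(velocities, forces)]
    new_p = [p + dt * v for p, v in zip(positions, new_v)]
    return new_p, new_v
```

Repeated steps relax a stretched chain back toward its rest lengths; the real-time challenge is doing the equivalent for millions of elements within each frame.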
NIH often funds collaborations between technology researchers and practicing clinicians. For specific diseases, a number of foundations, such as the American Heart Association (AHA), also fund projects. While the scope of this research is immense, initial funding may be acquired through an AHA Scientist Development Grant or an NIH R03.
|User Interfaces to Scientific Computing for Non-Scientific Computing Researchers|
The idea of providing the power of modern scientific computing to clinicians in a number of medical disciplines has tremendous potential. More generally, mechanisms that let non-specialists harness scientific computing can be useful across many fields. I look forward to finding scientific computing collaborators within the institution to help me bring these capabilities to non-computing specialists. I intend to leverage my expertise in user interface design, robotics, and haptics to produce systems and intuitive interfaces that can be used by clinicians, architects, designers, and civil engineers. These interfaces should be intuitive within the users' own domains and problem types: for example, haptically controlled instruments for surgeons, or digital clay and fabric for designers.
The NSF robotics and Major Research Instrumentation (MRI) programs are likely sources of funding for haptic robot interfaces. NSF Human-Computer Interaction (HCI) is another likely candidate program.
|Parallel Computing in Neural Simulation and Integration|
The full solution to the previously proposed research will involve large parallel computational resources to provide real-time analysis and modeling. This kind of resource has other applications in line with my research agenda. Great strides have been made in the understanding of neural biology and sensorimotor control mechanisms, and in the ability to capture the output of single neurons. Using dedicated, powerful systems, the simulation of increasingly complex neural models in real time is possible. In collaboration with neural biologists and bioengineers, namely Prof. Garrett Stanley at Harvard University, I propose to measure a functional group of neurons and train a real-time simulation of that group to take over function from the biological network. The computer science component of this work is finding a way to perform the required complex neural network modeling in real time.
The mammalian visual pathway between the optic nerve and the visual cortex provides a reasonable structure for this kind of study. By analyzing signals at different positions along the neural path, we can validate the function of the artificial network through the responses of neurons further down the chain. This work will enhance fundamental understanding of how artificial neural networks can be integrated with biological ones. Long-term applications include neural interfaces to prosthetics, advanced pacemaking of the autonomic system, and restoration of function following surgical complications in prostate procedures.
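A toy version of the "train an artificial stand-in" step, under heavy assumptions: record stimulus-response pairs from a neuron group (simulated here by a hidden model), then fit a simple linear-nonlinear firing-rate model by gradient descent until it reproduces the recorded responses. The model form, rates, and learning parameters are purely illustrative, not a claim about the real pathway.

```python
import math
import random

def sigmoid(u):
    """Logistic nonlinearity used as the rate model's output stage."""
    return 1.0 / (1.0 + math.exp(-u))

def fit_rate_model(stimuli, rates, lr=0.5, epochs=3000):
    """Fit r = sigmoid(w . x + b) to recorded (stimulus, rate) pairs by
    stochastic gradient descent on squared error."""
    dim = len(stimuli[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, r in zip(stimuli, rates):
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = 2.0 * (pred - r) * pred * (1.0 - pred)  # dE/d(pre-activation)
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Stand-in for recordings: responses of a hidden "biological" unit.
random.seed(0)
stimuli = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
rates = [sigmoid(1.5 * x[0] - 1.0 * x[1] + 0.3) for x in stimuli]
w, b = fit_rate_model(stimuli, rates)
```

The real problem differs in every hard dimension, with spiking dynamics, recurrence, and a hard real-time budget, which is precisely where the parallel computing resources above come in.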
Due to the speculative nature of this work, NIH R03 or NSF Artificial Intelligence and Cognitive Science (AICS) proposals are the best places to obtain initial funding. Once preliminary results are generated, the NSF Collaborative Research in Computational Neuroscience (CRCNS) program would be a good source of funding to bridge to additional NIH R01 grants from the National Institute of Biomedical Imaging and Bioengineering.
I believe that by extending the state of the art in computer vision, robotics, haptics, and artificial intelligence, and by collaborating with clinicians, a number of interesting problems in biology and medicine can be solved. I intend to build such collaborations using the potential funding sources described above.
Smaller-scale projects, like my planned work in rescue robotics, can be funded by NSF proposals or through targeted programs like the NSF Industry/University Cooperative Research Center for Safety, Security, and Rescue Research. I have benefited tremendously from undergraduate research experiences, both as a mentor and as a student. As a faculty member I intend to provide these experiences to students through NSF Research Experiences for Undergraduates (REU) extensions to my grants, and I will also pursue REU site funding.
The interdisciplinary nature of computer science and its applications to medicine provide a number of research directions and potential funding sources. I look forward to continuing my research exploring the potential of computers to advance science, for they are far more than inanimate objects taking up desk space.