Briefcase 1021
A method for mapping and correcting optical distortion introduced by live cell specimens in microscopy, which cannot be overcome by optical techniques alone. The method applies to both light microscopy and confocal microscopy. The system determines the 3D refractive index of the sample, and provides methods for ray tracing, calculation of the 3D space-variant point spread function, and generalized deconvolution.

Applications


Microscopy: The method was developed and applied for light microscopy, and is of critical importance for detection of weakly fluorescently labeled molecules (such as GFP fusion proteins) in live cells. It may also be applicable to confocal microscopy and to other imaging modalities such as ultrasound, deep-ocean sonar imaging, radioactive imaging, non-invasive deep-tissue optical probing, and photodynamic therapy.

Gradient glasses: The determination of the three-dimensional refractive index of samples allows testing and optimization of techniques for producing gradient glasses. Continuous refractive-index-gradient glasses (GRIN, GRADIUM) were recently introduced, with applications in high-quality optics, microlenses, aspherical lenses, plastic molded optics, etc. Lenses built from such glasses can be aberration-corrected at a level that previously required doublets and triplets of conventional glasses. Optimized performance of such optics requires ray tracing along curved paths, as opposed to straight segments between the surfaces of homogeneous glass lenses. Curved ray tracing is computation-intensive and dramatically slows down optimization of optical properties. Our algorithm for ray tracing in gradient-refractive-index media eliminates this computational burden.
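The curved-path ray tracing discussed above can be illustrated with a minimal numerical sketch. Everything specific here (the parabolic index profile, the Euler-type integrator, the parameter values) is an illustrative assumption, not the institute's actual fast algorithm.

```python
import numpy as np

# Sketch: paraxial ray tracing through a gradient-index (GRIN) medium with
# an assumed radial parabolic profile n(y) = n0 * (1 - a*y^2/2).
# In such a medium a ray follows a curved (approximately sinusoidal) path
# rather than straight segments between surfaces.

def n(y, n0=1.6, a=0.25):
    return n0 * (1.0 - 0.5 * a * y**2)

def dn_dy(y, n0=1.6, a=0.25):
    return -n0 * a * y

def trace_ray(y0, theta0, z_end=3.2, dz=1e-3):
    """Integrate the paraxial ray equation d/dz(n * dy/dz) = dn/dy."""
    y, u = y0, np.tan(theta0)        # u = dy/dz, the ray slope
    zs, ys = [0.0], [y0]
    z = 0.0
    while z < z_end:
        u += dz * dn_dy(y) / n(y)    # slope bends toward higher index
        y += dz * u                   # advance the ray position
        z += dz
        zs.append(z)
        ys.append(y)
    return np.array(zs), np.array(ys)

# A ray entering parallel to the axis at height 0.1 curves back toward
# the axis instead of travelling in a straight line.
zs, ys = trace_ray(y0=0.1, theta0=0.0)
```

With this toy profile the ray oscillates about the optical axis with a period set by the index gradient, which is the behavior GRIN lens optimization must account for.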

Technology's Essence


A computerized package was developed to process three-dimensional images of live biological cells and tissues, computationally correcting specimen-induced distortions that cannot be removed by optical techniques alone. The package includes:

  1. Three-dimensional (3D) mapping of the refractive index of the specimen.
  2. A fast method for ray tracing through a gradient-refractive-index medium.
  3. Calculation of the three-dimensional space-variant point spread function.
  4. Generalized three-dimensional deconvolution.
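To make the deconvolution step concrete, here is a hedged sketch of a standard iterative deconvolution scheme (Richardson-Lucy), shown in 1D with a shift-invariant Gaussian PSF for brevity. The package described above generalizes this idea to a 3D, space-variant point spread function; the PSF, signal, and iteration count below are toy assumptions.

```python
import numpy as np

# Richardson-Lucy deconvolution: iteratively re-estimate the object so
# that, after blurring with the PSF, it matches the measured image.

def convolve(x, psf):
    return np.convolve(x, psf, mode="same")

def richardson_lucy(image, psf, n_iter=50):
    psf_flip = psf[::-1]                      # adjoint of the blur
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = convolve(estimate, psf) + 1e-12
        estimate *= convolve(image / blurred, psf_flip)
    return estimate

# Toy example: two point sources blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf /= psf.sum()
blurred = convolve(x, psf)
restored = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate non-negative, which suits fluorescence intensities; the space-variant case replaces the single convolution with a locally varying PSF.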

  • Prof. Zvi Kam
Briefcase 1250
A robust method of identifying moving or changing objects in a video sequence groups each pixel with other adjacent pixels according to either motion or intensity values. Pixels are then repeatedly regrouped into clusters in a hierarchical manner. As these clusters are regrouped, the motion pattern is refined, until the full pattern is reached.

Applications


These methods for motion-based segmentation may be used in a multitude of applications that need to correctly identify meaningful regions in image sequences and compute their motion. Such applications include:

  1. Surveillance and homeland security - detecting changes, activities, objects.
  2. Medical Imaging - imaging of dynamic tissues.
  3. Quality control in manufacturing, and more.

Technology's Essence


Researchers at the Weizmann Institute of Science have developed a multiscale, motion-based segmentation method which, unlike previous methods, uses the inherent multiple scales of information in images. The method begins by measuring local optical flow at every picture element (pixel). Then, using algebraic multigrid (AMG) techniques, it assembles adjacent pixels that are similar in either their motion or intensity values into small aggregates, with each pixel allowed to belong to several aggregates with different weights. These aggregates are in turn assembled into larger aggregates, then still larger ones, eventually yielding full segments.
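The bottom-up coarsening can be illustrated with a toy sketch. The real AMG-based method uses soft (weighted) memberships over 2D images of flow and intensity; this simplified version merges adjacent 1D segments with similar feature values, and is only meant to show the repeated regrouping into ever-larger aggregates.

```python
# Toy multiscale aggregation: repeatedly merge neighbouring segments whose
# mean feature value (a stand-in for local intensity or motion) differs by
# less than a tolerance, until no further merges are possible.

def aggregate(values, tol=0.5):
    # Each segment is (list of pixel indices, mean feature value).
    segs = [([i], float(v)) for i, v in enumerate(values)]
    changed = True
    while changed:                      # one coarsening pass per loop
        changed = False
        out = []
        for idx, mean in segs:
            if out and abs(out[-1][1] - mean) < tol:
                prev_idx, prev_mean = out.pop()
                new_idx = prev_idx + idx
                # Size-weighted mean of the merged aggregate.
                new_mean = (prev_mean * len(prev_idx)
                            + mean * len(idx)) / len(new_idx)
                out.append((new_idx, new_mean))
                changed = True
            else:
                out.append((idx, mean))
        segs = out
    return segs

# Six "pixels" with three distinct feature levels.
segs = aggregate([0.0, 0.1, 0.2, 3.0, 3.1, 6.0])
# -> three aggregates: pixels {0, 1, 2}, {3, 4}, {5}
```

Each pass plays the role of one level of the hierarchy: small aggregates form first, and their refined mean values drive the merges at the next, coarser level.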

As the aggregation process proceeds, the estimation of the motion of each aggregate is refined and ambiguities are resolved. In addition, an adaptive motion model is used to describe the motion of an aggregate, depending on the amount of flow information available within it. In particular, the method uses a translation model to describe the motion of pixels and small aggregates, switches to an affine model for intermediate-sized aggregates, and finally turns to a perspective model for aggregates at the coarsest levels of scale. Methods for identifying correspondences between aggregates in different images are also being developed; these are suitable for image sequences separated by fairly large motions.
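The mid-scale model in the progression above, the affine motion model, can be sketched as a least-squares fit to point correspondences within an aggregate. The correspondences and parameters below are synthetic; the switching thresholds between translation, affine, and perspective models are not specified here.

```python
import numpy as np

# Fit an affine motion model  p' = A @ p + t  to corresponding points
# (e.g. flow vectors gathered inside one aggregate) by least squares.

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of corresponding 2D points."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])         # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    A = params[:2].T                              # 2x2 linear part
    t = params[2]                                 # translation part
    return A, t

# Synthetic aggregate undergoing a 30-degree rotation plus translation.
theta = np.deg2rad(30)
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
src = np.random.default_rng(0).uniform(-1, 1, (20, 2))
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine(src, dst)
```

A translation model would fit only t from the same data, and a perspective (homography) model adds two more degrees of freedom; the idea is that larger aggregates contain enough flow information to constrain the richer models.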

  • Prof. Ronen Ezra Basri
