SIPL Projects

Image & Video

Areas of Interest Detection in Biopsy Images
2024
Student/s: Lior Gorelik, Yulia Ostrovsky
Supervisor/s: Ori Bryt
Collaborator: Hillel Yaffe Medical Center
The goal of this work is to analyze histopathology images of the thyroid gland using deep learning algorithms. The algorithm is designed to reduce the workload of pathologists in identifying regions of interest in biopsy samples, in order to minimize human errors in the identification process. We chose to perform the segmentation using the U-Net network. Following the segmentation, we performed feature detection to classify the subtype of thyroid cancer using the StarDist network. We conducted our tests using datasets from two hospitals, NKI in the Netherlands and VGH in Canada, for the U-Net algorithm, and the MoNuSeg dataset for the StarDist network.
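As an illustration of how segmentation outputs such as U-Net masks are commonly scored, here is a minimal Dice-coefficient sketch. This is not the project's code, and the toy masks are invented:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the predicted region overlaps 1 of the 3 target pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, 0:2] = 1       # predicted pixels: (0,0), (0,1)
target[0, 1:4] = 1     # target pixels:    (0,1), (0,2), (0,3)
score = dice_coefficient(pred, target)  # 2*1 / (2+3) = 0.4
```

The same scalar is often used directly as a training loss (as 1 minus Dice), which is one reason it is a natural evaluation choice for medical segmentation.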
Estimating BMI from 2D Image
2024
Student/s: Tzvi Tal Noy, Ido Sagi
Supervisor/s: Nurit Spingarn
The BMI (Body Mass Index) is a crucial index that gives a quantitative assessment of whether a person is of normal weight, underweight, or overweight. The index is calculated from height and weight data. The purpose of our project is to estimate a person's BMI from a single 2-dimensional image. This is a complex task because visual inspection of an image is not sensitive to the distance of the subject from the camera or to the angle of the shot. To approach this task, we relied on previous works in the field, on their results, and on the dataset published with them.
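For reference, the index itself is simple once weight and height are known; the hard part of the project is inferring them from a single image. A minimal sketch of the formula and the standard WHO adult cut-offs (illustrative values only):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    """Standard WHO adult categories."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

value = bmi(70.0, 1.75)      # 70 / 1.75^2 ~= 22.86
label = bmi_category(value)  # "normal"
```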
Classification of Parent-child Synchronization During Interaction
2024
Student/s: Ori Zehngut, Shany Zamir
Supervisor/s: Yair Moshe
Collaborator: The Educational Neuroimaging Center
Many studies in the field of child development have shown that the diagnosis of developmental problems in children at a young age significantly improves the ability to treat these problems (about 70-80 percent success rate). In addition, studies have shown that the nature of interaction between a child and their parents can indicate developmental problems. Building upon this premise, this project aims to classify joint activities between a child and parent through visual analysis of shared interaction videos. This system will ultimately contribute to diagnosing developmental problems by cross-referencing its output with additional data (EEG signals, etc.).
Depth-Based Semantic Segmentation for Four–Legged Robot
2024
Student/s: Shany Cohen, Tal Sonis
Supervisor/s: Yair Moshe
Collaborator: RAFAEL
This work’s goal is to enable maneuvering abilities for a four-legged robot in an indoor environment by employing semantic segmentation. The segmentation is performed using deep learning, based on low-resolution grayscale and depth images captured by a Pico Flexx camera mounted atop the robot. While most existing semantic segmentation methods rely on RGB and depth images, there are no pre-trained models specifically designed for grayscale images. In the project, we adapted an architecture intended for semantic segmentation using RGB and depth images, leveraging transfer learning to tailor it to our specific requirements.
Table Tennis Ball Kinematic Parameters Estimation Using a Smartphone Camera
2024
Student/s: Firas Hamoud, Nahdeye Hamoud
Supervisor/s: Ori Bryt
Table tennis is a fast game that requires quick reactions and great hand-eye coordination. Many of the points scored during a game are a direct result of a good serve, which relies heavily on the strategic use of spin. This project aims to leverage smartphone slow-motion cameras and image processing techniques to analyze and quantify the spin of a table tennis ball during service. We use YOLO v7 for object detection on slow-motion videos in which the ball is uniquely patterned for effective feature detection. Additionally, we apply the BRIEF feature descriptor and a brute-force matching algorithm to extract matching features across frames.
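A minimal sketch of the brute-force matching idea over binary, BRIEF-like descriptors (nearest neighbor under Hamming distance). The toy 8-bit descriptors are invented; real BRIEF descriptors are typically 256 bits:

```python
import numpy as np

def brute_force_match(desc_a: np.ndarray, desc_b: np.ndarray) -> list[tuple[int, int]]:
    """For every binary descriptor in desc_a, find the index in desc_b
    with the smallest Hamming distance (count of differing bits)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)  # Hamming distance to each candidate
        matches.append((i, int(np.argmin(dists))))
    return matches

# Toy 8-bit "BRIEF-like" descriptors for two consecutive frames.
frame1 = np.array([[0, 0, 1, 1, 0, 1, 0, 1],
                   [1, 1, 1, 0, 0, 0, 1, 0]], dtype=np.uint8)
frame2 = np.array([[1, 1, 1, 0, 0, 0, 1, 1],   # 1 bit away from frame1[1]
                   [0, 0, 1, 1, 0, 1, 0, 0]],  # 1 bit away from frame1[0]
                  dtype=np.uint8)
pairs = brute_force_match(frame1, frame2)
```

Tracking how matched patch positions rotate around the ball center between frames is what lets the spin rate be estimated.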
Characterizing Pedestrians in Parks
2024
Student/s: Shany Zehavy, Adi Levy
Supervisor/s: Ori Bryt
This work aims to address the pressing need for high-quality public open spaces in urban environments, with a focus on leveraging computer vision and deep learning techniques. The COVID-19 pandemic has emphasized the importance of public open spaces in enhancing the well-being and quality of life for city dwellers. It has become evident that these spaces serve as vital elements in urban landscapes and play a significant role in promoting physical and mental health, social interactions, and overall community resilience.
Image-Guided Image Generation Using Stable Diffusion and CLIP
2024
Student/s: Ido Blayberg, Ohad Amsalem
Supervisor/s: Noam Elata
In recent years, AI-driven image editing has emerged as a promising field with numerous applications. This work explores the capabilities of the generative AI model Stable Diffusion for image editing guided by reference images. We focus on leveraging a set of images that outline the desired editing features and applying them to a target image. By experimenting with various hyper-parameters, modifying the core components of the diffusion model, and integrating the CLIP model, we demonstrate various improvements in image editing performance.
Digitizing The Yerushalmi Catalogue
2024
Student/s: Rami Halabi, Salah Abbas
Supervisor/s: Ori Bryt
Joseph Yerushalmi, a librarian at the University of Haifa Library, created a catalogue with around 65,000 records on paper cards. The catalogue contains articles from the 1940s to the 1970s, focusing on individuals such as artists, writers, philosophers, intellectuals, and historical figures. The collection also includes reviews of books and literary works. To preserve this valuable catalogue, digitization is needed. The project is divided into two parts: the first part is to detect text regions, which means classifying each region with its appropriate label: Title, Author, Text, and Other.
Low-Field Longitudinal MRI Scans Reconstruction
2024
Student/s: Tal Oved
Supervisor/s: Orel Tsioni & Prof. Efrat Shimron
Magnetic Resonance Imaging (MRI) is a critical imaging modality in modern medicine, providing high-resolution images of soft tissues without the use of ionizing radiation. However, the high cost and complexity of traditional high-field MRI scanners have driven interest in low-field MRI systems. While more affordable and portable, low-field MRI suffers from several drawbacks, including lower signal-to-noise ratio (SNR), reduced resolution, and poorer contrast, which limits its clinical utility.
Depth Estimation Algorithm Of Infiltrating Cancer Cells On A Surface Gel
2024
Student/s: Saar Drive, Sham Fahmawy
Supervisor/s: Ori Bryt & Anastasia Simonova, Daphne Weihs
Collaborator: Weihs Mechanobiology Lab (Faculty of Bio-Medical Engineering)
Cancer is a leading cause of death worldwide, responsible for around ten million deaths in 2020. The invasiveness of cancer cells is a principal contributor to these deaths, as it drives tumor growth and metastasis. Current methods for evaluating invasiveness include seeding cells on bio-gel, capturing images, and semi-manually labeling the positions and depths of cells, which is both time-consuming and prone to human error. This study introduces an automated approach for estimating cell position and depth using Differential Interference Contrast (DIC) and surface images.
Denoising for Event Cameras (Awarded Project)
2023
Student/s: Bar Weiss, Asaf Omer
Supervisor/s: Yonatan Shadmi
Collaborator: RAFAEL
In this work we explore the different noise mechanisms in event cameras and discuss known methods for filtering several noise sources. We focus on the threshold mismatch effect in event cameras and introduce a novel correction scheme to reduce its impact. To the best of our knowledge, this is the first algorithmic solution to the threshold mismatch effect. We use a threshold estimation algorithm suggested by Ziwei Wang et al., and further explore it in simulations with knowledge of the thresholds to calculate estimation errors. Using the estimated thresholds, we apply a correction algorithm based on sampling theory.
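To make the threshold's role concrete, here is a toy simulation of an idealized event-camera pixel. This is our own sketch, not the paper's algorithm; the ramp and thresholds are chosen as exact binary fractions to avoid floating-point artifacts. A pixel whose threshold is half the nominal value fires twice as many events for the same intensity ramp, which is exactly the mismatch effect described above:

```python
def simulate_events(log_intensity: list[float], theta: float) -> list[tuple[int, int]]:
    """Idealized event-camera pixel: emit (time, polarity) whenever the
    log-intensity drifts by at least `theta` from the last event level."""
    events = []
    ref = log_intensity[0]
    for t, x in enumerate(log_intensity[1:], start=1):
        while x - ref >= theta:      # positive (ON) events
            ref += theta
            events.append((t, +1))
        while ref - x >= theta:      # negative (OFF) events
            ref -= theta
            events.append((t, -1))
    return events

# A ramp of +0.0625 per step in log intensity (rises by 0.625 overall).
signal = [0.0625 * t for t in range(11)]
on_nominal    = simulate_events(signal, 0.125)    # nominal threshold
on_mismatched = simulate_events(signal, 0.0625)   # mismatched: half the threshold
```

Here `len(on_mismatched)` is twice `len(on_nominal)`: the same scene produces a pixel-dependent event rate, which is the artifact a threshold-correction scheme has to undo.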
Analysis of Waves and Human Interaction Using Beach Webcams
2023
Student/s: Harel Mendelman, Tomer Massas
Supervisor/s: Yair Moshe
This work presents a novel approach for monitoring 'beach scenes', utilizing existing beach webcams and computer vision algorithms based on deep neural networks to analyze the behavior of water bodies, humans, and the interactions between them. We used surf condition analysis as a study case for our method. The proposed approach detects and tracks both people and waves simultaneously, distinguishing between waiting surfers and riding surfers. The system uses a Faster-RCNN object detector for detection and classification and SORT, a fast multi-object tracking algorithm based on the Kalman filter, for tracking.
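SORT's core is a linear Kalman filter run per track. As a hedged sketch of that building block (SORT itself tracks a 7-dimensional bounding-box state; this toy version tracks only a 1-D position and velocity with our own invented noise settings):

```python
import numpy as np

F = np.array([[1.0, 1.0],    # state transition: position += velocity (dt = 1 frame)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # we measure position only
Q = np.eye(2) * 1e-2         # process noise covariance (assumed)
R = np.array([[1e-1]])       # measurement noise covariance (assumed)

def kf_step(x, P, z):
    """One Kalman predict + update cycle for measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving +2 px/frame; the velocity estimate converges to ~2.
x = np.array([[0.0], [0.0]])
P = np.eye(2)
for t in range(1, 30):
    x, P = kf_step(x, P, np.array([[2.0 * t]]))
```

In SORT the predicted box from this filter is matched to new detections (e.g. from Faster-RCNN) via IoU before each update, which is what keeps surfer identities stable across frames.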
Morphology Recognition for Gel Cells Using DIC Images
2023
Student/s: Talya Basha
Supervisor/s: Prof. Daphne Weihs
Collaborator: Weihs Mechanobiology Lab (Faculty of Bio-Medical Engineering)
A new method to predict the development of metastases of cancer cells and accelerate treatment for patients was developed in Prof. Daphne Weihs' lab. The method works by placing cancer cell samples on gel and measuring their level of penetration into it. According to the penetration level of a cell, one can predict the probability that it will develop metastases. In a previous project, an algorithm was developed to automatically detect cells' locations in a DIC image (an image of the cells on the gel's surface). My first goal in this project was to improve the algorithm and reach higher accuracy. To accomplish this, I added morphological operations and incorporated deep-learning ideas.
Depth Estimating of Infiltrating Cancer Cells on a Gel Surface
2023
Student/s: Haitam Hawa
Supervisor/s: Daphne Weihs
Collaborator: Weihs Mechanobiology Lab (Faculty of Bio-Medical Engineering)
The spreading of cancer cells to distant organs is the main cause of all cancer-related mortalities, while early and accurate prediction of tumor cell fate can significantly impact treatment protocols and survival rates. In the Weihs laboratory, a phenomenon was observed where invasive cancer cells can infiltrate the surface of synthetic polyacrylamide gels, while normal cells leave minimal to no indentation in the gel's surface. Estimating such invasiveness of cells manually by a lab expert is time- and resource-consuming. This project aims to predict the depth of each individual cell based on an image of the gel's surface in order to estimate cell invasiveness.
Image Reconstruction from Deep Diffractive Neural Network
2023
Student/s: Iggy Segev Gal, Tamar Sde Chen
Supervisor/s: Matan Kleiner
Deep diffractive neural networks have emerged as a promising framework that combines the speed and energy efficiency of optical computing with the power of deep learning. This has opened new possibilities for optical computing suited to machine learning tasks and all-optical sensors. One proposed application of this framework is the design of a diffractive camera that preserves privacy by only imaging target classes while optically erasing all other unwanted classes. In this work, we investigated whether this camera design truly erases the information in unwanted class data. We used K-NN to achieve up to 94% accuracy in classifying optically erased images.
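A minimal sketch of the K-NN probe used in that last experiment. The toy clusters here are invented stand-ins; in the project, the inputs would be features of the optically "erased" images:

```python
import numpy as np
from collections import Counter

def knn_predict(train_x: np.ndarray, train_y: np.ndarray,
                query: np.ndarray, k: int = 3) -> int:
    """Classify `query` by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[nearest].tolist()).most_common(1)[0][0]

# Two toy clusters standing in for two "erased" classes.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
train_y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(train_x, train_y, np.array([0.95, 1.0]), k=3)
```

If even this simple non-parametric classifier separates "erased" inputs well above chance, residual class information must survive the optical erasure, which is the point the project tests.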
Remote PPG Signal Acquisition and Analysis
2023
Student/s: Amir Mishael, Tamir Malka
Supervisor/s: Hadas Ofir
This work's main goal is to create a demo system that estimates a subject's heart rate in real time by observing changes in light reflection from their skin. The system must be able to do so using a regular RGB camera. The heart rate estimation is based on a PPG (PhotoPlethysmoGraphy) signal, remotely extracted using the camera and digital signal processing. We have created such a system, which operates on a standard laptop using its built-in webcam or any other external camera connected to it. The median of the mean absolute error across the videos in the UBFC-rPPG dataset achieved by our system is 3.8%.
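A common way to turn a remotely extracted PPG trace into a heart-rate estimate is to take the dominant spectral peak within the plausible human band. This sketch is illustrative, not the project's code; the synthetic 72-bpm "green channel" trace is invented:

```python
import numpy as np

def estimate_heart_rate(signal: np.ndarray, fps: float) -> float:
    """Return the dominant frequency (in beats per minute) of a PPG-like
    trace, searching only the plausible human band of 40-240 bpm."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 240 / 60)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s trace: a 1.2 Hz pulse (72 bpm) plus a slow illumination drift.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5 + 0.01 * t
bpm = estimate_heart_rate(trace, fps)
```

Restricting the search band discards the slow drift (and other out-of-band noise) without any explicit detrending, which matters when the skin signal is orders of magnitude weaker than illumination changes.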
Estimating Intestine Blood Flow With Video Analysis (Awarded Project)
2023
Student/s: Ofir Ben Yosef, Stav Bleyy
Supervisor/s: Ori Bryt
Collaborator: Sagiv Tech
Evaluating blood flow in intestinal tissue is important for several medical procedures, such as laparoscopic surgery. However, accurately assessing blood flow using traditional methods, such as Doppler ultrasound, can be challenging. In this project, we propose a new method for evaluating blood flow in intestinal tissue using real-time video from a laparoscopic camera. Our approach involves identifying the Region of Interest (ROI) in the video and then analyzing the video in the color, frequency, and texture spaces to detect temporary changes that may indicate changes in blood flow. We demonstrate the effectiveness of our method through experiments on a dataset of laparoscopic videos.
Identification of Dairy Cows
2023
Student/s: Kfir Bendic, Itzhak Mandelman
Supervisor/s: Ido Cohen
Classifying dairy cows is a critical operation for dairy farms. The primary goal of dairy farms is to maximize milk production, which is achieved by monitoring various aspects of each cow, including milk yield, health status, estrus time, and other characteristics. Therefore, the foremost objective of a dairy farm is to establish a reliable method for identifying each cow accurately. Currently, common methods for cow identification rely on permanent measures such as ear or back tattooing, as well as the use of ear tags equipped with radio frequency identification (RFID) technology. However, these methods have limitations as they can fade, fall off, or break over time.
Object Detection Inspired by the Visual System of Fruit Flies
2022
Student/s: Chen Tasker, Yuval Mayor
Supervisor/s: Ron Amit
Collaborator: RAFAEL
This work examines the feasibility of incorporating models based on insect vision systems for motion detection in computer vision. In recent years, significant advancements have been made in both the research of visual processing in animal systems and in the field of artificial neural networks. Despite these advancements, and despite artificial neural networks already drawing inspiration from biology, there remains a disconnect between these two areas of research. Insects have the ability to quickly and accurately respond to visual input while in flight, using less hardware than comparable computing systems, making them a potential source of inspiration for computer vision systems.
Detection of Breast Cancer in Mammograms
2022
Student/s: Gili Oved, Shir Graus
Supervisor/s: Ori Bryt, Dr. Dror Lederman
Breast cancer is among the leading causes of mortality among women in developing as well as underdeveloped countries. The detection and classification of breast cancer in the early stages of its development may allow patients to receive proper treatment. This work addressed the problem of mass detection in mammograms using a CNN architecture. The initial architecture of the project was given to us by a former student supervised by Dr. Dror Lederman. We reworked and streamlined this initial architecture. In this work we had to stabilize the existing model so that it would converge and not diverge. In addition, we had to improve the accuracy of the model.
Suspicious Moles Detection using Binary Masks
2022
Student/s: Zeev Rispler, Ophir Weiner
Supervisor/s: Yair Moshe
Collaborator: MARPE Technologies
Melanoma skin cancer is common and deadly, and its diagnosis today requires a manual examination by a dermatologist. Any spot on the patient's body marked by the doctor as suspicious should be examined using a dermoscope (polarized light photography), as this is the only way to determine whether the mole is malignant or not. Marpe Technologies develops a system that scans the patient's body in visible light using a high-resolution camera, detects suspicious moles, and sends them for examination by a dermatologist with a dermoscope, thus saving the doctor's manual examination time. A previous SIPL project aimed at improving this classification achieved a 99.
Object Detection Inspired by the Biology of Flying Insects
2022
Student/s: Iddo Bar-Haim, Elior Schneider
Supervisor/s: Yuval Silman
Collaborator: RAFAEL
The goal of this work is to build an algorithm that analyzes spatial parameters using simple and inexpensive sensors and a limited computational budget. The current method of identifying objects in a given space and extracting their active properties requires a large number of advanced sensors and expensive processing power. We suggest the option of reducing the computing power and the number of sensors used while maintaining the output quality. The proposed solution is based on a visual system inspired by flying insects, and the algorithm is implemented using signal processing tools.
Motorized Thermal Camera Slider for Oral Cancer Detection (Awarded Project)
2022
Student/s: Dor Mitz, Yaniv Shlomovich
Supervisor/s: Ori Bryt
Collaborator: HT Bioimaging
Early detection of a cancerous tumor is important and may be the decisive factor in the patient's fate, since if the tumor is caught at an early stage, the treatment is simpler and the chances of recovery are significantly higher. Given the great importance of these tests, the goal is to make them more accessible and more accurate, so that many people can perform them and get accurate answers, which will save human lives. HT Bioimaging is developing a system for detecting and testing cancerous tumors by heating the tumor and sampling the heat dissipation using a thermal camera.
Topological Hyperbolic Representation For Classification of Hyperspectral Images
2022
Student/s: Luiz Fernando Zaima Wainstein, Tomer Danilin
Supervisor/s: Ya-Wei Lin
In this work, we demonstrate the use and application of learning hierarchical data by embedding it into hyperbolic space. By applying the latest approaches in the fields of machine learning and topological data analysis, specifically hyperbolic representations and persistence diagrams, we showed the great potential of this method for classifying data. We showed good results in classifying MNIST and low-resolution hyperspectral images, which may come into use for civilian and military applications alike.
Image Colorization for Thermal Mobile Camera Images
2022
Student/s: Idan Friedman, Tomer Lotan
Supervisor/s: Ori Bryt
Thermal image colorization is a topic that is gaining momentum in the world of artificial intelligence. In recent years, with significant improvement in the tools and with the algorithmic development of deep learning, the world of computer vision has achieved impressive results in everything related to image processing and analysis. A significant development that has driven this rapid progress is the GAN (Generative Adversarial Network). Networks of this type make it possible to produce a new set of information based on the characteristics of the existing information.
High Perceptual Quality Single Image Super Resolution
2022
Student/s: David-Elone Zana, Odelia Bellaiche Bensegnor
Supervisor/s: Theo Adrai
Collaborators: GIP Lab, Dept. of CS, Technion
Nowadays, the metric used to calculate the statistical distance between different datasets is the FID (Fréchet Inception Distance): it uses a pretrained Inception network and a divergence very close to the W2 divergence (in this case) to approximate the distance between them. We assume that the latent representation of each dataset has a Gaussian distribution. We also assume that the Gaussian distribution is not degenerate, i.e., that the covariance matrix is positive definite: none of the principal components are zero. On top of these assumptions, we must add the numerical instability of the metric and the impossibility of scoring a single image.
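Under the Gaussian assumption above, the Fréchet (W2) distance has a closed form: d^2 = ||mu1 - mu2||^2 + Tr(Sigma1 + Sigma2 - 2*(Sigma1*Sigma2)^(1/2)). A sketch for the special case of diagonal covariances, where the trace term reduces to a per-dimension sum of squared standard-deviation differences (toy numbers, not Inception statistics):

```python
import numpy as np

def frechet_gaussian_distance(mu1: np.ndarray, var1: np.ndarray,
                              mu2: np.ndarray, var2: np.ndarray) -> float:
    """Squared Frechet (W2) distance between two Gaussians with *diagonal*
    covariances, given as vectors of per-dimension variances:
        d^2 = ||mu1 - mu2||^2 + sum_i (sqrt(var1_i) - sqrt(var2_i))^2
    """
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)
    return float(mean_term + cov_term)

mu1, v1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mu2, v2 = np.array([3.0, 4.0]), np.array([4.0, 1.0])
d2 = frechet_gaussian_distance(mu1, v1, mu2, v2)  # 25 + (2-1)^2 = 26
```

The full FID uses this formula on Inception-feature statistics with a general covariance matrix, whose square root is one source of the numerical instability noted above.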
Morphology Recognition of Cells Using Nuclear Coloring
2022
Student/s: Avishav Engle, Neomi Cohen
Supervisor/s: Prof. Daphne Weihs, Ori Bryt
Collaborator: Weihs Mechanobiology Lab (Faculty of Bio-Medical Engineering)
Prof. Daphne Weihs' lab researches the probability of metastasis in cancer cells by checking the level of penetration that these cells exhibit on a gel substrate. This project is aimed at performing an automatic analysis of the gel, photographed using DIC microscopy. Using these images, and additional images of nucleus staining of the cells, we were tasked with creating an algorithm for morphological recognition of the cells by use of segmentation. With this algorithm, the researchers in the lab can learn the number of cells, their areas, and their liveliness automatically, promptly, and accurately.
Real Image Denoising with Feature Attention
2022
Student/s: Roei Weiner, Adam Galon
Supervisor/s: Dr. Meir Bar-Zohar
This project focuses on the issue of real image denoising: cleaning noise that is created during the process of capturing the image. In the first part of the project, we designed a CNN based on the RIDNet architecture. In the second part of the project, we wished to improve the results of the network on a dataset it hasn't seen before, i.e., to improve the generalization capability of the network. To achieve this goal, we created datasets with synthetic noise that aspired to be as similar as possible to real noise. In order to train this model, we created 4 datasets, each constructed with a different ratio of real-noise and synthetic-noise images.
Audio-Visual Voice Activity Detection and Localization Using Deep Correlated Representations
2022
Student/s: Kfir Bendic, Itzhak Mandelman
Supervisor/s: Ofir Lindenbaum
One of the problems in performing signal processing operations on sound clips stems from noise added to the measurement device. Noise can drastically damage the performance of accurate analysis of an audio signal. One method to deal with this problem is to use multimodal observations so that one of the modalities is not affected by the noise of the other. An example is a video source independent of noise added to the audio, thus unaffected by this noise. This way, one can try to extract information lost in the audio due to the noise, using the video. The purpose of this project is to perform spatial and temporal detection and recognition of speech, both in audio and video.
Detecting Distracted Pedestrians Crossing a Crosswalk
2022
Student/s: Roy Epstein, Sagi Lizbona
Supervisor/s: Ido Cohen, Ori Bryt, Michal Derhy
This work deals with the identification of pedestrians approaching a crosswalk while using a mobile phone. As part of the research work of Michal Derhy from the Faculty of Architecture and Urban Planning, the option of creating a system that alerts distracted people approaching to cross the road was examined. In an experiment conducted by Michal, a system was set up that included an alarm activated manually when a person using a phone approached the crosswalk. The results of the experiment showed a direct effect of the warning on the pedestrians' alertness to what is happening around the crosswalk.
Quantification of Penetration of Cancer Cells on a Gel Surface
2022
Student/s: Maayan Edri, Yaniv Dasa
Supervisor/s: Prof. Daphne Weihs
The main goal of this work is to develop an algorithm that classifies the depth of indentation for each cell shown in a given surface image. In the first step, we developed an algorithm that receives surface images and identifies which of the cells penetrated the gel substrate. In the second step, we tried to calibrate the pushing depth of cells using a blur map. A close relationship can be found between the penetration depth of cancer cells and their metastatic nature: the deeper the cells penetrate, the more metastatic they are, meaning more dangerous for the patient. Until now, analyzing such images in laboratories was partly manual and partly automatic and took a long time.
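One standard cue for building such a blur map is a local sharpness score, for example the variance of a discrete Laplacian: defocused regions (cells pushed deeper, away from the focal plane) score lower. A hedged numpy sketch on synthetic data, not the project's calibration code:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Sharpness score: variance of a 5-point discrete Laplacian.
    Blurred (defocused) regions score lower than in-focus ones."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))           # stand-in for an in-focus texture patch
# Crude defocus: average each interior pixel with its 4 neighbours.
blurred = sharp.copy()
blurred[1:-1, 1:-1] = (sharp[1:-1, 1:-1] + sharp[:-2, 1:-1] + sharp[2:, 1:-1]
                       + sharp[1:-1, :-2] + sharp[1:-1, 2:]) / 5.0
sharp_score = laplacian_variance(sharp)
blurred_score = laplacian_variance(blurred)   # lower than sharp_score
```

Calibrating such a score against known depths is what would turn a per-region blur value into a depth estimate.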
Seeing Sound: Estimating Image From Sound
2022
Student/s: Sagy Gersh, Yahav Vinokur
Supervisor/s: Yair Moshe
The goal of this work is to train a deep neural network so that it can receive an audio signal as input and output a reconstructed image of the source from which that audio signal was produced. Under the assumption that an audio signal contains spatial properties of the object that produced it, we tried to use an audio classifier to extract these properties and transform them into a feature vector, from which we reconstruct an image of the source using a deep network with a GAN architecture. This project is a follow-up project with the same goal.
Image Denoising Using CNN Autoencoder
2021
Student/s: Avihu Amar, Gil Barum
Supervisor/s: Dr. Meir Bar-Zohar
In this work we show a practical solution for image denoising using a CNN autoencoder neural network. The network we built is easy to implement and provides relatively high performance when compared to classic methods like BM3D, and even compared to other, more complex networks. This network is also very flexible and can be adjusted to match the memory capacity of the graphics cards available for training. We show how we take a relatively simple design and improve it by using custom performance metrics designed to evaluate images, replacing standard layers like MaxPool and UpSampling with convolutional layers, and implementing custom loss functions and comparing between them.
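One of the standard image-oriented performance metrics of the kind alluded to above is PSNR. A minimal sketch (illustrative, not the project's exact metric suite; images assumed normalized to [0, 1]):

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, peak: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the clean image."""
    mse = np.mean((clean - denoised) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

clean = np.zeros((8, 8))
noisy = clean + 0.1        # constant error of 0.1 -> MSE = 0.01 -> 20 dB
score = psnr(clean, noisy)
```

Because PSNR is a monotone function of MSE, optimizing an MSE loss and reporting PSNR are consistent; perceptually oriented metrics (e.g. SSIM) are the usual complement when MSE alone is misleading.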
Pneumonia Detection from Chest X-Rays with Robustness to Deformations
2021
Student/s: Andy Rodan, Or Glassman
Supervisor/s: Yair Moshe
Collaborator: Zebra Medical Vision
In recent years, the field of artificial intelligence and deep learning has been gaining a foothold in almost every area of our lives. In the field of medicine in particular, deep learning has a prominent place in everything related to image processing, information analysis, and disease diagnosis. Data acquisition requires skilled and experienced personnel, who are required to perform tasks that are partly Sisyphean and tedious. Also, sometimes it is a matter of working with rare diseases, about which not enough information has been gathered to date. Many medical photographs contain various deformations which make it difficult for the algorithms to achieve optimal detection performance.
Image Manipulation with GANs Spatial Control (Awarded Project)
2021
Student/s: Karin Jakoel, Liron Efraim
Supervisor/s: Tamar Rott
We suggest a new approach that enables spatial editing and manipulation of images using Generative Adversarial Networks (GANs). Though many tasks have been solved utilizing the powerful abilities of GANs, this is the first time that spatial control is suggested. This ability is made possible by a test-time spatial normalization that uses the trained model as-is and does not require any fine-tuning. Therefore, our method is significantly faster and does not require further training. We demonstrate the new approach for the tasks of class hybridization and saliency manipulation.
Creating Image Segmentation Maps Using GANs
2021
Student/s: Inbal Aharoni, Shani Israelov
Supervisor/s: Idan Kligvasser
The use of GANs has drastically affected low-level vision in graphics, particularly in tasks related to image creation and image-to-image translation. Today, despite all the latest developments, the training process is still unstable. Given a semantic segmentation map, in which we can look at each pixel in the image and tag it with the relevant class it represents, we can (with the help of a GAN) produce images based on this map and hope to reach a more stable model. With the success of GANs, we produced segmentation maps. With these maps and with the help of the generative model, we can get a semantic understanding of the dataset and even create completely new scenes.
Eye Blinking Detection In Video
2021
Student/s: Poliana Rizik
Supervisor/s: Ori Bryt
Melanoma Detection and Segmentation
2021
Student/s: Liel Nagar, Shir Hayut
Supervisor/s: Yair Moshe
Collaborator: MARPE Technologies
Melanoma is a type of skin cancer that is both common and dangerous. Today, in order to diagnose skin cancer, a checkup must be performed by a dermatologist. For the doctor, such a procedure takes a long time. Marpe Technologies has developed an advanced scanning system, which includes a camera that scans the entire body of the patient and alerts the physician when it detects suspicious spots. A previous project on the subject was carried out in collaboration with Marpe Technologies in order to create a trained model that classifies images at technician-level as "suspicious moles" or "unsuspected areas".
Generative Deep Features (Awarded Project)
2021
Student/s: Hila Manor, Da-El Klang
Supervisor/s: Tamar Rott Shaham
The goal of this work is to research the capability of generating a completely new image with the same visual content as a single given natural image, by using unsupervised learning of a deep neural network without the use of a GAN. This project is based on the work presented in the paper "SinGAN: Learning a Generative Model from a Single Natural Image" (Rott Shaham et al.). Different papers published in the last couple of years have already established the connection between the deep features of classification networks and the semantic content of images, such that we can define the visual content of an image by the statistics of its deep features.
Quantification of Penetration of Cancer Cells on a Gel Surface
2021
Student/s: Ofir Fridchay, Elnatan Kadar
Supervisor/s: Prof. Daphna Weihs
Logo of Weihs Mechanobiology Lab (Faculty of Bio-Medical Engineering) Collaborator
The target of this work is to perform automatic analysis of images from experiments in Prof. Daphna Weihs' lab. It is already known that the probability that cancer cells will develop metastases can be evaluated by their level of penetration into a gel surface, and this evaluation ability is essential for fitting the best treatment to patients. In this project we use only two images: the first is a DIC microscopic image of the cells, and the second is a surface image showing the light of beads at the bottom of the gel. Thus, if a cell penetrates the gel, its region appears out of focus. Our first task was to perform automatic analysis of the DIC images in order to find the cells' locations and count them.
Understanding the Ability of Birds to See Underwater Prey
2021
Student/s: Ron Moisseev, Vladimir Kulikov
Supervisor/s: Dr. Marina Alterman, Prof. Gadi Katzir
An image obtained through the air-water interface is determined by light refraction. This affects many parameters, such as the image's apparent shape, position, motion, contour, contrast and more. Birds of many species, such as ospreys, kingfishers, and herons, capture fish after observing and aiming from the air; they can compensate for these effects and successfully strike their prey. In this project we researched the geometrical distortions caused by refraction of light rays through the air-water interface and expanded the process to a sea surface with waves. Afterwards, we defined the type of distortion caused by the water surface and proposed a way to measure it.
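The basic flat-water geometry can be sketched with Snell's law. The small-angle apparent-depth formula below is a textbook approximation for near-vertical viewing, not the project's wave model:

```python
import numpy as np

N_AIR, N_WATER = 1.0, 1.33  # refractive indices

def refracted_angle(theta_air: float) -> float:
    """Snell's law: n_air * sin(theta_air) = n_water * sin(theta_water).
    Angles are in radians, measured from the surface normal."""
    return float(np.arcsin(N_AIR * np.sin(theta_air) / N_WATER))

def apparent_depth(true_depth: float) -> float:
    """Near-vertical approximation: an underwater object appears at
    roughly true_depth / (n_water / n_air), i.e. shallower than it is."""
    return true_depth / (N_WATER / N_AIR)

# A ray arriving at 40 degrees bends toward the normal underwater
theta_w = refracted_angle(np.deg2rad(40.0))
```

This is why prey appears both displaced and closer to the surface than it really is.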
Identification of Suspicious Objects in Security Camera Video
2021
Student/s: Lana Brik, Maor Atias
Supervisor/s: Ido Cohen
Logo of IDF Collaborator
Security and safety problems rank among the most pressing issues of modern times. In day-to-day life, the army encounters terrorist attacks; to thwart them, it is necessary to monitor the movements of people and their belongings in specific places. Sample videos of intentionally placed objects were provided by the ICT Corps. The project's goal is to provide a tool for locating suspicious objects that were left in the scene and classifying them. The identification and tracking part can be solved using DeepSORT. After that, an analysis algorithm associates the objects with their owners and monitors changes in various parameters between frames.
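One simple rule for flagging a left-behind object can be sketched over assumed bounding-box tracks: if a track's box barely moves for long enough, it is a candidate. This is an illustrative baseline, not the project's DeepSORT pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def is_stationary(track, min_frames=30, iou_thresh=0.9):
    """A track whose box stays almost in place for min_frames
    consecutive frames is a candidate 'left-behind' object."""
    if len(track) < min_frames:
        return False
    recent = track[-min_frames:]
    return all(iou(recent[0], b) >= iou_thresh for b in recent)
```

A real system would also check that the owner's track has moved away from the object.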
Augmented Reality for Physics Classes
2021
Student/s: Marom Bar Orion, Elior Portal
Supervisor/s: Yair Moshe
Augmented Reality App for Physics Classrooms is a work that aims to create an Android app that allows taking a photo of a drawing of a physics system on a blackboard and creating a running simulation of the drawn system. This project uses deep learning tools, classic image processing algorithms and animation programming tools. It is based on previous projects in which a method was developed for detecting, classifying and localizing objects in an image, mapping them, and creating an animation. In this project we focused on improving the object detection and recognition accuracy.
Deep Learning Based Image Processing for a Smartphone Camera
2021
Student/s: Alexey Golub, Yanay Dado
Supervisor/s: Dr. Meir Bar-Zohar
In the first part of the project, our focus was on the PyNET network. This network was designed to replace the full ISP pipeline, which is responsible for the conversion of the raw information detected by a digital camera sensor (known as a Bayer image or a RAW image) into the color image seen on the screen (of the DSLR camera, of the smartphone, etc.). Specifically, we tested different loss functions in order to improve PyNET's performance. In the second part of the project, we explored additional ways to improve this performance.
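Loss-function experiments of this kind can be sketched as a weighted sum of pixel losses. The weights and the particular combination below are hypothetical, not PyNET's actual training losses:

```python
import numpy as np

def composite_loss(pred: np.ndarray, target: np.ndarray,
                   w_mse: float = 1.0, w_mae: float = 0.5) -> float:
    """Hypothetical weighted combination of pixel losses, the kind of
    term one might tune when training an ISP-replacement network."""
    mse = np.mean((pred - target) ** 2)   # penalizes large errors
    mae = np.mean(np.abs(pred - target))  # more robust to outliers
    return float(w_mse * mse + w_mae * mae)
```

Perceptual terms (e.g. losses on deep features) are typically added to such a sum in the same way.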
3D Object Detection For Intel RealSense LiDAR
2021
Student/s: Anaelle Yasdi, Judit Ben Ami
Supervisor/s: Ori Bryt
This project's main goal is to examine and study the Intel RealSense LiDAR camera (L515), and to perform object detection and classification on indoor images taken by the L515. The secondary objective of the project is to create a small database of those images and to label them. The first step in the project was to find a deep learning network and the large 3D indoor dataset it was trained on. The second step was to create a small database of labeled images taken by the L515 camera, and the third and final step was to input those images to the chosen network, make adjustments, and achieve the project's goal: object detection and classification.
Optical Character Recognition (OCR) for Old Torah Manuscripts (Award-Winning Project)
2020
Student/s: Tal Stolovich, Ohad Kimelfeld
Supervisor/s: Ori Bryt
In this work, two architectures of Optical Character Recognition (OCR) systems were demonstrated for the Solitreo font of the Hebrew Language. The first architecture demonstrated was based upon text detection and classification. The second architecture demonstrated was based upon cropping a document into separate text rows, and whole row translation using an LSTM network. In addition, handwritten text document processing algorithms were also demonstrated, such as: Binarization, Connected Components Analysis, Text Row Detections, and more.
Early Detection of Cancer Using Thermal Video Analysis (Award-Winning Project)
2019
Student/s: Idan Barazani
Supervisor/s: Aviad Levis, Ori Bryt
Logo of HT Bioimaging Collaborator
Cancer is a major challenge to modern medicine; the disease has many victims all over the world, and therefore many efforts and resources are invested in the attempt to eradicate it. In the characterization of diseases in general, and cancer in particular, early detection has the potential to increase the patient's chances of recovery. The primary goal of the project is early identification of external cancers (tongue, cheek, lip) using the cooling and heating patterns of these biological tissues.
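A common first-order model for such cooling curves is Newton's law of cooling; the function below is an illustrative sketch (the project's actual tissue model is not specified here):

```python
import numpy as np

def tissue_temperature(t: float, t_env: float, t0: float, k: float) -> float:
    """Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k*t).
    The rate constant k differs between tissue types, which is the kind
    of cue a thermal-video classifier can exploit."""
    return float(t_env + (t0 - t_env) * np.exp(-k * t))
```

Fitting k per pixel over a thermal video yields a map in which abnormal tissue may stand out.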
Physics Classroom Augmented Reality with Your Smartphone Part B (Award-Winning Project)
2019
Student/s: Georgee Tsintsadze, Yonatan Sackstein
Supervisor/s: Yair Moshe
The project Physics Classroom Augmented Reality with Your Smartphone is the second project with the same goal as the previous one: creating an Android app that, given a photo of a drawing of a physical system, creates a running simulation of that system. This project uses classic image processing algorithms and animation programming tools. The project is based on a previous one that detects, classifies and localizes objects in an image. The first stage of the project was to create an application for presenting a simple animation of a physical interaction.
Deep Learning for Physics Classroom Augmented Reality App (Award-Winning Project)
2019
Student/s: Tom Kratter, Yonatan Sackstein
Supervisor/s: Yair Moshe
The project Deep Learning for Classroom Augmented Reality Android App is a second project with the same goal as the previous one: creating an Android app that, given an image of a drawing of a physical system, creates a running simulation of that system. The goal of this project, in which the previous project didn't succeed, and which is part of the overall solution, is to classify and localize the different objects in the drawing of the physical system. Our project tries (and usually succeeds) to do so using deep learning algorithms, as opposed to the previous project, which tried and didn't manage to do so using classic image processing algorithms.
Efficient Deep Learning for Pedestrian Traffic Light Recognition (Award-Winning Project)
2019
Student/s: Roni Ash, Dolev Ofri
Supervisor/s: Yair Moshe
Crossing a road is a dangerous activity for pedestrians, and therefore pedestrian crossings and intersections often include pedestrian-directed traffic lights. These traffic lights may be accompanied by audio signals to aid the visually impaired. In many cases, when such an audio signal is not available, a visually impaired pedestrian cannot cross the road without help. In this project, we propose a technique that may help visually impaired people by detecting pedestrian traffic lights and their state (walk/don't walk) in video taken with a mobile phone camera.
From Deep Features To Image Restoration (Award-Winning Project)
2018
Student/s: Ori Sztyglic
Supervisor/s: Tamar Rott, Idan Kligvasser
In recent years, the use of deep features as an image perceptual descriptor has become very popular, mainly for measuring the perceptual similarity between two images. In the field of image restoration, this has proved very useful for tasks such as super-resolution and style transfer. In this project, we suggest a different direction: rather than using deep features as a similarity measure, we suggest using them to construct a natural image prior. This can be done by learning the statistics of natural images' deep features. Using this special prior, we can gain from both worlds: the deep one and the "classic" one.
Relative Camera Pose Estimation Using Smartphone Cameras and Sensors (Award-Winning Project)
2018
Student/s: Omer Movshovitz, Guy Dascalu
Supervisor/s: Ori Bryt, Adam Wolff
Logo of RAFAEL Collaborator
During communication between two people located at a distance from each other, the need often arises to indicate a point of interest in the individuals' shared field of view. Such a situation causes a problem, since it is not always possible to describe where the point of interest is in the other person's field of view, as they see the scene from a different perspective. For example, when describing a window on a building or a rock in a forest, one must use the surrounding environment in order to convey the location of the point of interest, which creates a communication barrier. In this project, we surveyed the possibility of creating the basis of a Visual Social Network.
Video Classification Using Deep Learning
2018
Student/s: Ifat Abramovich, Tomer Ben-Yehuda
Supervisor/s: Dr. Rami Cohen
Much recent advancement in computer vision is attributed to large datasets and the ability to use them to train deep neural networks. In 2016 Google announced the publication of a public dataset, called YouTube-8M, containing about 8 million tagged videos. In this project, we used this dataset to train several deep neural networks for tagging videos in a variety of categories. In the first stage, we downloaded 5000 videos from 5 different categories. Next, we trained two deep networks, with slightly different architectures, to tag a video into one of the five categories. One network uses the LSTM architecture and the other uses the BiLSTM architecture.
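For contrast with the LSTM/BiLSTM models, the simplest baseline for video tagging is to average per-frame features over time and apply a linear classifier. A sketch with hypothetical shapes (not the project's networks):

```python
import numpy as np

def classify_video(frame_feats: np.ndarray, W: np.ndarray) -> int:
    """Baseline video tagger: average per-frame features over time,
    then apply a linear classifier (argmax over class scores).
    frame_feats has shape (T, D); W has shape (C, D)."""
    clip_feat = frame_feats.mean(axis=0)  # (T, D) -> (D,)
    scores = W @ clip_feat                # (C, D) @ (D,) -> (C,)
    return int(np.argmax(scores))
```

Recurrent models improve on this baseline precisely because averaging discards temporal order.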
Identification Of Content In Images Of Promotional Brochures (Award-Winning Project)
2018
Student/s: Gil Gelbert, Ilan Meltzer
Supervisor/s: Ori Bryt
Logo of Quanta Collaborator
The composition of a promotional brochure is the set of characteristics and relations between the different objects within it, among them their location, color and shape. Composition is crucially important to the degree of success with which the brochure manages to convey the desired message. Despite the great importance of composition to the success of the brochure, the designer is tasked with designing it and matching it to the brochure's topic without a convenient tool that allows using generic compositions found suitable for certain topics.
Detection and Localization of Cumulonimbus Clouds in Satellite Images (Award-Winning Project)
2018
Student/s: Etai Wagner, Ron Dorfman
Supervisor/s: Almog Lahav
In this work, we address the problem of Cumulonimbus (Cb) cloud detection in infrared (IR) satellite images. Cb clouds are associated with thunderstorms and atmospheric instability, and their detection is of high importance since they pose extreme danger to aviation. We present a joint spatio-temporal detection method that exploits the distinct spatial characteristics of Cb clouds as well as their prototypical evolution over time. The presented method is unsupervised and does not require labeled data or predefined handcrafted spatial features, such as particular shapes, temperatures, textures, and gradients.
Pedestrian Traffic Light Recognition for the Visually Impaired Using Deep Learning
2018
Student/s: Idan Friedman, Jonathan Brokman
Supervisor/s: Yair Moshe
This project is part of a series of projects carried out in SIPL dedicated to creating an Android application that will assist visually impaired people with pedestrian traffic lights. The current project consists of two parts: 1. Recognition of pedestrian traffic lights in a single image taken with a mobile phone from a pedestrian's perspective. We use the Faster R-CNN object detector with transfer learning on more than 900 pedestrian traffic light images, and achieve 98% accuracy. 2. Using the recognition module from part 1 along with object tracking to detect light switches from red to green or vice versa, for improved recognition robustness. For this aim, we use the KCF object tracker.
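The switch-detection logic of part 2 can be sketched as temporal smoothing of per-frame states followed by a transition scan. This is an illustrative simplification of the tracker-based module, with made-up window sizes:

```python
from collections import Counter

def smooth_states(per_frame, win=5):
    """Majority vote over a trailing window; suppresses single-frame
    misdetections before declaring a state change."""
    out = []
    for i in range(len(per_frame)):
        window = per_frame[max(0, i - win + 1): i + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out

def detect_switch(states):
    """Return the index of the first red -> green transition, or None."""
    for i in range(1, len(states)):
        if states[i - 1] == "red" and states[i] == "green":
            return i
    return None
```

Smoothing first means a lone "green" misdetection in a run of "red" frames does not trigger a false switch.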
Anomaly Detection in Multibeam Echosounder Seabed Scans (Award-Winning Project)
2017
Student/s: Ron Vainshtein, Itay Cohen
Supervisor/s: Yaakov Bucris
Logo of RAFAEL Collaborator
This project deals with the detection of anomalies in seabed surveys collected using multibeam sonar. Detecting anomalies is a common task in the world of signal processing in general and image processing in particular, so there are established methods and methodologies for dealing with such problems. As opposed to typical data in these fields, information collected by multibeam sonar presents various unique problems: sampling on irregular grids, variability of the sampling depending on the position of the boat and the depth of the seabed, and areas lacking samples.
Predicting the Existence of Dyslexia in Children Using fMRI (Award-Winning Project)
2017
Student/s: Chen Cohen, Tom Beer
Supervisor/s: Yoochai Blau
Dyslexia is a learning disorder characterized by difficulties with accurate or fluent word recognition and by poor spelling and decoding abilities. Current diagnosis of dyslexia lacks objective criteria, which can decrease treatment efficacy. Diagnosis relies on a discrepancy between reading ability and intelligence, a measure which can be unreliable, and has been criticized for its poor validity. Functional magnetic resonance imaging (fMRI) is a fairly new and unique tool that enables widespread, noninvasive investigation of brain functions.
Objects Removal from Crowded Image (Award-Winning Project)
2016
Student/s: Guy Weisberger, Asif Zamir
Supervisor/s: Yair Moshe, Ori Bryt
When using a smartphone's camera to take a photo, unwanted objects often enter the frame, such as cars on the road or people walking in the background. Another example is surveillance cameras, where an image free of unwanted objects is sometimes desired. The project's goal is to deal with those situations by allowing the user to select the unwanted object to remove from a list of detected objects, and to remove it in a way that the object's background is filled in from another frame.
Robust Underwater Image Compression (Award-Winning Project)
2016
Student/s: Rotem Mulayoff, Ido Zoref
Supervisor/s: Yair Moshe, Azriel Sinai
Logo of RAFAEL Collaborator
We describe here a JPEG-based compression scheme, adjusted specifically for the underwater acoustic channel. The project goal was to deal with common bit error rates of the underwater acoustic channel. Two measures were used to quantify the different solutions: compression ratio, which we tried to minimize, and SSIM, a measure that describes the quality of the image, which we tried to maximize. The scheme described here was inspired by initial research conducted by RAFAEL, common schemes for dealing with errors in images, and intensive acquaintance with the international JPEG compression standard.
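The channel such a scheme must survive can be simulated by flipping bits of the compressed payload at a given error rate. A sketch only; the actual acoustic channel is more elaborate than independent bit flips:

```python
import random

def flip_bits(payload: bytes, ber: float, seed: int = 0) -> bytes:
    """Simulate a noisy channel: flip each bit independently with
    probability ber (bit error rate). Seeded for reproducibility."""
    rng = random.Random(seed)
    out = bytearray(payload)
    for i in range(len(out)):
        for b in range(8):
            if rng.random() < ber:
                out[i] ^= 1 << b
    return bytes(out)
```

Decoding the corrupted payload and measuring SSIM against the original closes the evaluation loop described above.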
Fast High Efficiency Video Coding (HEVC) (Award-Winning Project)
2016
Student/s: Avi Giterman, Liron Anavi
Supervisor/s: Yair Moshe
High Efficiency Video Coding (HEVC) is a new video coding standard that has recently been finalized. Due to its substantially improved performance, it is expected to replace the H.264 video coding standard and become the most common video coding technique within a few years. A major innovation in HEVC is the use of a quad-tree based coding tree block for images. In this representation, an image is first divided into non-overlapping coding units, which can be recursively divided into smaller coding units. This recursive quad-tree decomposition of the image is an efficient representation of variable block sizes, so that regions of different sizes can be better coded.
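The recursive quad-tree split can be sketched with a simple variance criterion; HEVC's real split decision is rate-distortion driven and far more involved, so treat this as an illustration of the data structure only:

```python
import numpy as np

def quadtree(block: np.ndarray, min_size: int = 8, thresh: float = 10.0):
    """Recursively split a square block while its pixel variance is high,
    the same variable-block-size idea as the HEVC coding tree.
    Returns a list of leaves as (size, variance) tuples."""
    n = block.shape[0]
    if n <= min_size or np.var(block) < thresh:
        return [(n, float(np.var(block)))]
    h = n // 2
    leaves = []
    for r in (0, h):
        for c in (0, h):
            leaves += quadtree(block[r:r + h, c:c + h], min_size, thresh)
    return leaves
```

Flat regions stay as one large leaf, while textured regions are split down to small blocks.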
Distance Estimation of Marine Vehicles (Award-Winning Project)
2015
Student/s: Ran Gladstone, Avihai Barel
Supervisor/s: Yair Moshe
Logo of RAFAEL Collaborator
This project's goal is to estimate the distance of floating objects, such as boats and personal watercraft (water scooters), from a video of a maritime environment, for the Protector USV, a product of RAFAEL. We propose a novel and efficient algorithm to achieve this goal. The algorithm receives as input a video of a marine environment. In addition, for every video frame the algorithm receives the location of a pixel that is on or near the object of interest to which we want to estimate the distance.
Low Complexity Image Compression of Capsule Endoscopy Images
2014
Student/s: Aviv Barabi, Dvir Sason
Supervisor/s: Rami Cohen
Logo of Given Imaging  (Medtronic) Collaborator
Capsule endoscopy is a method for recording images of the digestive tract. A patient swallows a capsule containing a tiny camera, which captures images that are then transmitted wirelessly to an external receiver for examination by a physician. Due to the limited computational capabilities of the capsule and bandwidth constraints deriving from its dimensions, low-complexity and efficient compression of the images is required before transmission. In addition, the images are captured using a Bayer filter mosaic, such that each pixel in the raw captured images represents only one color: red, green or blue.
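The Bayer mosaic structure mentioned above can be illustrated by sampling one color per pixel from an RGB image; an RGGB layout is assumed here for concreteness (the capsule's actual filter layout is not specified):

```python
import numpy as np

def rggb_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate an RGGB Bayer mosaic: keep a single color channel
    per pixel, as the raw sensor data would contain."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```

Compressing this one-channel mosaic directly, rather than a demosaiced RGB image, is one way to keep complexity low.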
Motion Analysis Using Kinect for Monitoring Parkinson's Disease (Award-Winning Project)
2014
Student/s: Ben Dror, Eilon Yanai
Supervisor/s: Alex Frid
Logo of RAMBAM Medical Center Collaborator
Logo of Microsoft Collaborator
Parkinson's Disease (PD) is a degenerative disease of the central nervous system with a profound effect on the motor system. Symptoms include slowness of movement, rigidity of motion and in some patients, tremor. The severity of the disease is quantified using the Unified Parkinson Disease Rating Scale (UPDRS) which is a subjective scale performed and scored by physicians. In this work, we present an automated, objective quantitative analysis of four UPDRS motor examinations of Hand Movement and Finger Taps. For this purpose, a non-invasive system for recording and analysis of fine motor skills of hands was developed.
Video Quality Assessment Prototype System (Award-Winning Project)
2014
Student/s: Tom Mendel, Regev Nir
Supervisor/s: Yair Moshe
Logo of IDF Collaborator
Video quality assessment is becoming increasingly important nowadays; therefore, it is desirable to develop a visual quality metric that correlates well with human visual perception. In previous projects in SIPL, a new technique for quality assessment has been developed, based on DCT sub-band similarity (DSS). The new technique shows excellent results in comparison to subjective results, and its low complexity makes it highly suitable for a variety of live-streaming applications. Yet, some changes had to be made. The main goal of this project is to build a prototype of a video quality assessment system based on a network similar to the IDF network, while using the DSS method.
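DSS compares images in DCT sub-bands; the orthonormal 2-D DCT it builds on can be sketched in a few lines of numpy (this is the standard transform, not SIPL's DSS implementation):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D DCT-II of a square block: the frequency decomposition
    whose sub-bands a DSS-style metric compares between images."""
    m = dct_matrix(block.shape[0])
    return m @ block @ m.T
```

Grouping the resulting coefficients by frequency band and comparing bands between the reference and distorted images is the core idea behind sub-band similarity metrics.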
Augmented Reality Pinball (Award-Winning Project)
2014
Student/s: Gal Alchanati, Gal Steinfeld
Supervisor/s: Yair Moshe
Created in an undergraduate project in the Signal and Image Processing Laboratory (SIPL), Department of Electrical Engineering, Technion - Israel Institute of Technology. We have built a standalone, robust and portable platform that enables an augmented reality pinball game, based on virtual and real objects, using common hardware.
Video Compression for Underwater Acoustic Communications
2013
Student/s: Gilad Avrashi, Shlomi Museri
Supervisor/s: Rami Cohen, Yaakov Bucris
Tone Mapping of SWIR Images (Award-Winning Project)
2012
Student/s: Maya Harel
Supervisor/s: Yair Moshe
Sensing in the Shortwave Infrared (SWIR) range has only recently been made practical. The SWIR band has an important advantage: it is not visible to the human eye, but since it is reflective, it shows shadows and contrast in its imagery. Moreover, SWIR sensors are highly tolerant to challenging atmospheric conditions such as fog and smoke, and can be made extremely sensitive, so they can work in very dark conditions. However, fundamental differences exist in appearance between images sensed in the visible and SWIR bands. In particular, human faces in SWIR images do not match human intuition, which makes it difficult to recognize familiar faces by looking at such images.
Change Detection of Cars in a Parking Lot (Award-Winning Project)
2012
Student/s: Ido Ariav, Tom Zohar
Supervisor/s: Meir Bar-Zohar
Real-Time Airborne Video Stabilization (Award-Winning Project)
2012
Student/s: Amit Shaviv
Supervisor/s: Ori Bryt
Determining Image Origin & Integrity Using Sensor Noise (Award-Winning Project)
2012
Student/s: Eli Schwartz, Roi Hochman
Supervisor/s: Ori Bryt
Image Quality Assessment Based On DCT Subband Similarity (Award-Winning Project)
2012
Student/s: Amnon Balanov, Arik Schwartz
Supervisor/s: Yair Moshe
Cast Shadow Detection in Images & Video (Award-Winning Project)
2011
Student/s: Elad Bullkich, Idan Ilan
Supervisor/s: Yair Moshe, Prof. Yacov Hel-Or
Logo of IDC Herzlia Collaborator
Feather Color Analysis for the Barn Owl (Award-Winning Project)
2011
Student/s: Naftaly Kizner
Supervisor/s: Ori Bryt, Yuval Bahat
Image Compression using Sparse and Redundant Representations
2011
Student/s: Inbal Horev
Supervisor/s: Ori Bryt, Ron Rubinstein
Detection of Score Changes in Sports Video
2010
Student/s: Ezri Sonn, Reuven Berkun
Supervisor/s: Dmitry Rudoy
Content Insertion into H.264 Coded Video - Supporting B-Frames
2009
Student/s: Yan Michalevsky
Supervisor/s: Tamar Shoham
Logo of Negev Consortium Collaborator
Real-Time Pedestrian Detection and Tracking (Award-Winning Project)
2009
Student/s: Tal Rath, Eyal Enav
Supervisor/s: Yair Moshe
Object Reidentification in Video Using Multiple Cameras (Award-Winning Project)
2008
Student/s: Guy Berdugo, Omri Soceanu
Supervisor/s: Yair Moshe, Dmitry Rudoy
Logo of MATE Collaborator
Video Packet Loss Concealment Detection Based On Image Content (Award-Winning Project)
2008
Student/s: Dor Shabtay, Nissan Raviv
Supervisor/s: Yair Moshe
Logo of RADvision Collaborator
Detecting Added Markers and Notes on Printed Text (Award-Winning Project)
2008
Student/s: Gal Gur-Arye, Roee Sulimarski
Supervisor/s: Avishai Adler
Logo of IBM Research Labs Collaborator
Real-Time Pedestrian Motion Detection & Tracking (Award-Winning Project)
2008
Student/s: Yaniv Solomon, Elad Roichman
Supervisor/s: Yair Moshe
Logo of MOD Collaborator
Image Reconstruction from Plenoptic Camera (Award-Winning Project)
2007
Student/s: Gavriel Berger, Nir Berkovich
Supervisor/s: Assaf Cohen
Tracking People Using Video Images and Active Contours (Award-Winning Project)
2006
Student/s: Zohar Tal, Nadav Granot
Supervisor/s: Oleg Kuybeda
Non-Uniformity Correction in Infrared Images
2006
Student/s: Marlene Shehadeh
Supervisor/s: Oleg Kuybeda
H.264 Temporal Post Processing for Flicker Reduction
2006
Student/s: Dimitry Kletsel, Yair Kuszpet
Supervisor/s: Yair Moshe
Logo of Intel-Oplus Collaborator
Intravascular Ultrasound Image Analysis (Award-Winning Project)
2006
Student/s: Mike Sumszyk, Eyal Madar
Supervisor/s: Oleg Kuybeda
Logo of MediGuide Collaborator
Lens Motor Noise Reduction For Digital Camera (Award-Winning Project)
2005
Student/s: Avihay Barazany, Royi Levi
Supervisor/s: Kuti Avargel
Logo of Zoran Collaborator
Face Detection in Video (Award-Winning Project)
2004
Student/s: Tal Kenig, Eran Hertzberg
Supervisor/s: Assaf Cohen
Logo of RADvision Collaborator
Fractional Zoom For Images (Award-Winning Project)
2004
Student/s: Doron Meiraz, Shay Fux
Supervisor/s: Yevgeny Margolis
Logo of Oplus Collaborator
Channel Reduction in Hyperspectral Images
2004
Student/s: Assaf Kagan, Yaakov Lumer
Supervisor/s: Oleg Kuybeda
Logo of MATA Collaborator
Anomaly Detection in Hyperspectral Images, Part A+B (Award-Winning Project)
2003
Student/s: Boaz Matalon, Ayal Hitron
Supervisor/s: Asaf Cohen, Ruti Shapira
Lossless Compression of Images Using Adaptive Scan (Award-Winning Project)
2003
Student/s: Dror Porat, Tomer Michaeli
Supervisor/s: Ilan Sozkover
Detection and Discrimination of Sniffing and Panting Sounds of Dogs
2002
Student/s: Ophir Azulai, Gil Bloch
Supervisor/s: Yizhar Lavner, Irit Gazit
Logo of TAU Collaborator
Automatic Detection of Location and Face Orientation (Award-Winning Project)
2002
Student/s: Mordechai Shor-Haham, Tal Shaul
Supervisor/s: Assaf Cohen
Real-Time Lecturer Tracking (Award-Winning Project)
2001
Student/s: Baleegh Abb, Shadi Saba
Supervisor/s: Assaf Cohen
Representing Images for Video Scene (Award-Winning Project)
2001
Student/s: Viki Alman, Yaron Greenhot
Supervisor/s: Asaf Cohen
Logo of Jigami Collaborator
Progressive Coding of Color Images - Part B - Compression (Award-Winning Project)
2001
Student/s: Elias George, Eyal Erlich
Supervisor/s: Alon Spira
Digital Video Protection for Authenticity Verification (Award-Winning Project)
2000
Student/s: Oren Keidar
Supervisor/s: Ran Bar-Sella
Logo of NICE Collaborator
Low Bit Rate Video Compression with QuadTree Using Adaptive Quantization (Award-Winning Project)
1999
Student/s: Ori Berger
Supervisor/s: Ran Bar-Sella
Wafer Images Compression Using Wavelets (Award-Winning Project)
1999
Student/s: Yaron Efrat, Shay Shvedron
Supervisor/s: Yuval Dorfan
Logo of OPAL Collaborator
Editing of VCD MPEG1 Files (Award-Winning Project)
1994
Student/s: Zelig Weiner, Nir Shachar
Supervisor/s: Ran Bar-Sella
Logo of Optibase Collaborator
Fractal Image Compression using a Motorola DSP96002
1992
Student/s: Raz Barequet, Omer Zifroni
Supervisor/s: Nimrod Peleg, Guillermo Spiro
Logo of ELEX Collaborator
Background Learning of Images
1991
Student/s: Virginie Girardot
Supervisor/s: Zohar Sivan
Compression of Text Images Using Binary Images (Award-Winning Project)
1990
Student/s: Gil Shamir, Zahi Rozenhoiz
Supervisor/s: Zvi Eisips
Scanned Documents Compression
1990
Student/s: Eliezer Brightstein, Shahar Kons
Supervisor/s: Gal Shachor
Image Sequence Compression using DPCM Algorithm
1989
Student/s: Yaron Shliselberg, Boaz Dinaty
Supervisor/s: Doron Adler