SIPL Projects

Musical Signals

Melody Extraction from Polyphonic Music
2024
Student/s: Tomer Massas, Shahar Pickman
Supervisor/s: Ori Bryt, Dr. Lior Arbel
Melody extraction is one of a variety of problems in the field of music information retrieval (MIR), which aims to develop algorithms and techniques for extracting meaningful information from musical content. This work addresses the problem of extracting the melody, the main musical line, from an audio segment using machine learning techniques. The input to our system is an audio file containing a musical composition, and our approach combines signal processing and machine learning algorithms.
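As an illustration of the signal-processing side of such a pipeline, melody extraction typically begins with frame-wise fundamental-frequency (pitch) estimation. The sketch below is a minimal, hypothetical example, not the project's actual method: it estimates the pitch of a synthetic tone by autocorrelation.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame via autocorrelation:
    the lag of the strongest self-similarity peak gives the pitch period."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # restrict lags to the plausible range
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)          # synthetic 440 Hz "melody" note
f0 = estimate_f0(tone[:2048], sr)             # close to 440 Hz
```

A real system would apply this (or a more robust estimator) per frame over the whole recording, then track the melody line through the resulting pitch candidates.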
Music Genre Classifier Using Deep Learning Networks
2024
Student/s: Ilay Yavlovich, Amit Karp
Supervisor/s: Hadas Ofir
In this work, we created a model based on deep neural networks to classify music genres. We segmented each song into excerpts and fed them into the model for training, validation, and testing. Throughout the project, we classified single-genre songs from the MTG-Jamendo database, which is divided into genres in an unbalanced manner (the number of songs per genre varies significantly). We therefore chose to work only with the ten largest genres and used different weighting schemes to improve the results.
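One common weighting scheme for such imbalance is inverse-frequency class weighting in the loss. The sketch below uses made-up genre counts for illustration; the real MTG-Jamendo counts differ.

```python
# Hypothetical per-genre song counts illustrating an unbalanced dataset
counts = {"rock": 5000, "pop": 3000, "jazz": 500}
total = sum(counts.values())
n_classes = len(counts)

# Inverse-frequency weighting: rare genres get proportionally larger weights,
# so each class contributes equally to the (weighted) training loss
weights = {g: total / (n_classes * c) for g, c in counts.items()}
```

By construction the weighted sample count per class is equal, which counteracts the model's tendency to favor the largest genres.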
Bass Generation Based on Vocals via Deep Learning
2024
Student/s: Dror Tiferet and Rom Ben Anat
Supervisor/s: Hila Manor & Gal Gershler
This work aims to create a bass accompaniment track for a solo vocal track. This is achieved using a machine learning model trained on a comprehensive dataset of bass and vocal tracks that sound good together. The system first converts the vocal track into a spectrogram, a representation of the signal's frequency content over time. A generative diffusion model, with the vocal spectrogram as a conditioning input, then produces a corresponding bass track that aligns with the vocals. Throughout the project, an extensive literature review was conducted to select appropriate models, including MelGAN and HiFi-GAN.
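To illustrate the spectrogram step, the sketch below computes a magnitude spectrogram in plain NumPy; the project presumably uses a mel-scaled variant from an audio library, so treat this as a simplified stand-in.

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=512, hop=128):
    """Frame the signal, apply a Hann window, and take |FFT| per frame."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rows = frequency bins, columns = time frames
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 16000
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 220.0 * t)   # stand-in for a vocal track
S = magnitude_spectrogram(vocal)        # shape: (n_fft // 2 + 1, n_frames)
```

The conditioning input to the diffusion model is a 2-D array like `S`; the 220 Hz tone shows up as a bright row near frequency bin 7 (220 / 31.25 Hz per bin).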
Smart Encoding For Musical Notes
2023
Student/s: Tzahala Buchnik, Hana Shechter
Supervisor/s: Neria Uzan
The goal of this work is to explore and find a smart encoding of musical notes that captures meaningful harmonic relationships in music. The underlying assumption is that music is similar to language in that it also follows certain grammatical rules, albeit with more flexibility than ordinary language. Thus, related notes and chords should be close to each other in terms of cosine similarity. During the project, we learned tools for storing and representing music files, such as MIDI, through which we extracted the database for the project.
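To make the cosine-similarity criterion concrete, here is a toy sketch with invented 3-dimensional vectors; the project's learned embeddings would be higher-dimensional, and these particular values are purely illustrative.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine of the angle between two embedding vectors (1 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learned embeddings: harmonically related chords point similarly
c_major = np.array([0.9, 0.1, 0.2])
g_major = np.array([0.8, 0.2, 0.3])     # dominant of C — harmonically close
f_sharp = np.array([-0.1, 0.9, -0.5])   # harmonically distant
```

A good encoding would place C and its dominant G much closer (similarity near 1) than C and a remote chord like F#.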
Humming to Music
2022
Student/s: Ariel Tubul, Yarden Ezra
Supervisor/s: Hadas Ofir
This work deals with converting a recording of humming into a sound segment of a selected musical instrument. The conversion relies on established and proven models based on deep network algorithms combined with digital signal processing methods and techniques. The two main models we worked with are DDSP, developed by Google, and a model inspired by it, developed by Dr. Lior Wolf of Tel Aviv University. We expand on these models and how they work later in the report.
Recognition of Musical Key & Scale
2021
Student/s: Nadav David, Ido Gabay
Supervisor/s: Alon Eilam, Gal Greshler
In collaboration with Waves Audio
Classifying musical keys is one of music theory's oldest problems. In the modern world, a real need arises among home composers for a system that can accurately and reliably identify an audio sample's musical key, mainly for incorporating the sample into whole, complex pieces. The problem becomes even harder for short audio samples, whose keys, unlike those of full songs, are not as clear. In this work we implemented methods to solve the musical key classification problem, focusing on audio pre-processing, researching advanced classification methods using neural networks, and improving their results using semi-supervised learning.
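A classical baseline for this task, often used alongside or before neural approaches (not necessarily the networks used in this project), is template matching between a chroma vector and the Krumhansl-Schmuckler key profiles:

```python
import numpy as np

# Krumhansl-Schmuckler major-key profile: one perceptual weight per pitch class
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(chroma):
    """Correlate the chroma vector with the major profile at all 12 rotations;
    the best-matching rotation names the tonic."""
    scores = [np.corrcoef(chroma, np.roll(MAJOR, k))[0, 1] for k in range(12)]
    return NOTES[int(np.argmax(scores))]

# Toy chroma vector: equal energy on the C-major scale degrees
chroma = np.zeros(12)
chroma[[0, 2, 4, 5, 7, 9, 11]] = 1.0
key = estimate_key(chroma)   # "C"
```

In a full system the chroma vector would be computed from the audio's spectrogram by folding spectral energy into 12 pitch classes, which is exactly the kind of pre-processing the project description emphasizes.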
Symbian Guitar Tuner (award-winning project)
2009
Student/s: Gilad Cohen, Reut Ochayon
Supervisor/s: Yevgeni Litvin, Yoav Shechtman
Automatic Transcription of Piano Polyphonic Music (award-winning project)
2007
Student/s: Gur Harary, Yoav Shechtman
Supervisor/s: Zvi Ben-Haim
MP3 Robust Digital Watermark (award-winning project)
2005
Student/s: Ron Dobrovinsky, Stella Lavetman
Supervisor/s: Hadas Ofir
Automatic Transcription of Music (award-winning project)
2004
Student/s: Nathan Fellman, Yuval Peled
Supervisor/s: Hadas Ofir
In collaboration with Mobixell
Ancillary Instrument for Piano Tuning (award-winning project)
2002
Student/s: Moran Klein, Uri Gold
Supervisor/s: Hagai Krupnik
In collaboration with DigiSpeech