Crossing a road is a dangerous activity for pedestrians, so pedestrian crossings and intersections often include pedestrian-directed traffic lights. These traffic lights may be accompanied by audio signals to aid the visually impaired; where such a signal is unavailable, a visually impaired pedestrian often cannot cross the road without help. In this project, we propose a technique that may help visually impaired people by detecting pedestrian traffic lights and their state (walk/don't walk) in video taken with a mobile phone camera. The proposed technique consists of two main modules: an object detector based on the Tiny YOLO deep convolutional network, and a decision module. We evaluate performance in terms of accuracy and runtime, with the goal of achieving real-time operation. We first test the system on still images, training the network under several configurations: on the original dataset, with data augmentation to improve performance, and with different numbers of classes (training on vehicle traffic lights in addition to pedestrian traffic lights). After selecting the best-performing configuration, we proceed to video processing to evaluate the system toward an application implementation. The proposed technique proves fast and accurate, with a running time of 6 ms per frame on a desktop computer with a GeForce GTX 1080 GPU and a detection accuracy of more than 98%. It currently runs on a mobile phone in a client-server architecture, using a local server.
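To make the two-module pipeline concrete, the following Python sketch shows one plausible shape for the decision module that consumes the detector's per-frame output. The class names, confidence threshold, and the `detect` stub are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

# Assumed class labels and threshold -- illustrative only, not the
# trained model's actual class names or tuning.
WALK, DONT_WALK = "pedestrian_walk", "pedestrian_dont_walk"
CONF_THRESHOLD = 0.5

@dataclass
class Detection:
    label: str         # predicted class name
    confidence: float  # detector confidence in [0, 1]

def detect(frame) -> List[Detection]:
    """Stand-in for Tiny YOLO inference on one video frame.
    In the real system this would run the trained network."""
    raise NotImplementedError

def decide(detections: List[Detection]) -> Optional[str]:
    """Decision module sketch: keep pedestrian-light detections above
    a confidence threshold and report the state of the most confident
    one; return None when no pedestrian light is found."""
    lights = [d for d in detections
              if d.label in (WALK, DONT_WALK) and d.confidence >= CONF_THRESHOLD]
    if not lights:
        return None
    best = max(lights, key=lambda d: d.confidence)
    return "walk" if best.label == WALK else "don't walk"

# Example with hand-made detections: the decision module reports the
# state of the most confident pedestrian light in the frame.
frame_detections = [Detection(DONT_WALK, 0.91), Detection(WALK, 0.40)]
print(decide(frame_detections))  # -> "don't walk"
```

In an application, a per-frame decision like this would likely be smoothed over several consecutive frames before being announced to the user, so that a single misdetection does not flip the reported state.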