Live Demo: Sketches to Photos
The live demo uses a neural network to synthesize photos from face sketches, which is helpful in various applications, e.g., identifying suspects.
The neural network in this example regresses pixel values live in your browser, built on ConvNetJS.
The transformed representations in this visualization can be loosely thought of as the activations of the neurons along the way.
The sketch-photo pairs are from the CUHK Face Sketch Database (CUFS).
By the end of the class, you will know exactly what all these numbers mean.
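To make the idea of "regressing pixel values" concrete, here is a minimal sketch of the same training objective on synthetic data. This is not the ConvNetJS demo itself: a toy linear model and randomly generated "sketch"/"photo" vectors stand in for the convolutional network and the CUFS pairs, but the loss (mean squared error over pixel values) and the gradient-descent update are the same in spirit.

```python
import numpy as np

# Toy stand-in for the sketch-to-photo demo: a linear model trained to
# regress "photo" pixel values from "sketch" pixel values with MSE loss.
# All shapes and data are synthetic and illustrative, not from CUFS.
rng = np.random.default_rng(0)
n_pixels, n_pairs = 64, 200            # flattened 8x8 "images", 200 pairs

sketches = rng.random((n_pairs, n_pixels))
true_map = 0.1 * rng.standard_normal((n_pixels, n_pixels))
photos = sketches @ true_map           # synthetic regression targets

W = np.zeros((n_pixels, n_pixels))     # model parameters
lr = 0.05
mse0 = np.mean(photos ** 2)            # error of the untrained (all-zero) model
for _ in range(1000):
    residual = sketches @ W - photos
    grad = sketches.T @ residual / n_pairs   # gradient of MSE w.r.t. W
    W -= lr * grad                           # gradient-descent step

mse = np.mean((sketches @ W - photos) ** 2)
print(f"MSE: {mse0:.4f} -> {mse:.4f}")
```

The demo's convolutional network replaces the single weight matrix with stacked convolution layers and nonlinearities, but it is trained by minimizing the same kind of per-pixel squared error.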
[Mar 13] Please send your final project proposals to your TA-in-charge by Mar 24. Details on the final project can be found here
[Mar 5] Assignment 2 is now available. Please submit your assignments through BlackBoard by March 28.
[Feb 20] For auditing students who need a certificate, please send your Assignment 1 to firstname.lastname@example.org with the keyword [ELEG5491] in the email subject field.
[Feb 13] We will have a one-hour open-book quiz on Feb 19, at the usual lecture time and classroom. The quiz will cover all lectures up to and including CNN.
[Jan 28] Assignment 1 is now available. Please submit your assignments through BlackBoard by Feb 21.
[Jan 24] Optional tutorials start this week. You can find the tutorial syllabus and slides here.
[Jan 9] If you need the instructor's signature to select this course, please bring your CS-1 form to Room 304, SHB.
[Jan 4] Welcome to ELEG 5491 Introduction to Deep Learning!
This course provides an introduction to deep learning. Students taking this course will learn the theories, models, algorithms, implementation, and recent progress of deep learning, and gain practical experience in training deep neural networks. The course starts with machine learning basics and some classical deep models, followed by optimization techniques for training deep neural networks, implementation of large-scale deep learning, multi-task deep learning, transfer learning, recurrent neural networks, applications of deep learning to computer vision and speech recognition, and understanding why deep learning works. Students are expected to have basic background knowledge of calculus, linear algebra, probability, statistics, and random processes as a prerequisite. The Spring 2019 offering of the course features:
- The latest developments in deep learning, e.g., deep reinforcement learning, GANs, RNNs with language models, video analysis, and so on.
- Hands-on experience with the optimization of deep learning models, using popular DL toolkits (for example, PyTorch).
- A final project that walks you through the whole pipeline of doing research: drafting a proposal, discussing ideas, conducting experiments, writing a report, and sharing your work via a presentation!
Time and Venue
Term 2 (January - April), 2019
- Tuesday, 14:30-16:15
LT, T.Y. Wong Hall
- Tuesday, 16:30-17:15
LT2, Mong Man Wai Building (MMW)
- Thursday, 14:30-15:15
G18, Basic Medicine Science Building
Xiaogang Wang: email@example.com
Hongsheng Li: firstname.lastname@example.org
Xihui Liu: email@example.com
Hang Zhou: firstname.lastname@example.org
Yixiao Ge: email@example.com
Hongyang Li: firstname.lastname@example.org
3 assignments: 30%
2 quizzes: 30%
Final Project: 40%
I am a student outside the EE department. Can I register for the class?
Yes, you are welcome to register. Graduate students outside the EE department can fill in the form and ask for approval from both their supervisor and the course instructor during the add/drop period.
Is this course hard for undergrad students?
The course is designed for senior undergraduate and graduate students. It is not for the faint of heart. However, we will show lots of interesting cases and provide hands-on experience with deep learning models. Some parts of the lectures require calculus and linear algebra, but we will walk you through that material. We believe undergraduates will learn a lot by the end of the course through the lectures, tutorials, and the final project.
Can I work in groups for the Final Project?
No. The final project is completed individually; details will be announced later.
We sincerely thank all the contributors who made great efforts in supporting this course:
Prof. Wanli Ouyang, Prof. Hongsheng Li
Dr. Xingyu Zeng, Dr. Zhe Wang, Dr. Tong Xiao, Dr. Xiao Chu, Dr. Wei Yang, Dr. Kai Kang