
Foundations of Deep Learning (infFDL-01a)

Abstract

This lecture provides a foundational introduction to deep learning, focusing on the core concepts and practical applications of neural networks. The course begins with basic building blocks such as optimization, loss functions, and backpropagation before advancing to key architectures, including Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers. Emphasis is placed on understanding how these models are trained, how they generalize, and the role of regularization and data augmentation in improving performance. The course prepares students to implement and apply deep learning models effectively and to critically evaluate their behavior and results across a range of tasks. It is suitable for students of Bachelor's programs as well as students from other disciplines with an interest in AI.


Learning Objectives

Students will be able to...
 

  • explain the foundational principles of deep learning and artificial neural networks.

  • describe and apply key training concepts including loss functions, optimizers, and backpropagation.

  • implement and train feedforward neural networks (a minimal illustrative sketch follows this list).

  • understand and apply techniques for regularization, data augmentation, and training stability.

  • analyze and compare common activation functions and loss functions.

  • critically evaluate model performance and generalization.

  • apply CNN architectures to image-based tasks and understand design choices in state-of-the-art models.

  • understand RNN and Transformer architectures.

  • work independently and in teams on deep learning problems using Python-based tools.
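
As an illustration of the kinds of exercises these objectives lead to, the following is a minimal NumPy sketch of a one-hidden-layer feedforward network trained by backpropagation on a toy regression task. It is not course material: the task, layer sizes, and hyperparameters are chosen purely for illustration.

import numpy as np

# Toy data: learn y = sin(x) on [-pi, pi] (an illustrative task, not from the course).
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# One hidden layer with tanh activation, trained by plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(3000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations, shape (256, 16)
    y_hat = h @ W2 + b2               # network output, shape (256, 1)
    loss = np.mean((y_hat - y) ** 2)  # mean squared error loss

    # Backward pass: the chain rule, written out by hand for this small model.
    g_out = 2.0 * (y_hat - y) / len(X)  # dLoss/dy_hat
    g_W2 = h.T @ g_out                  # dLoss/dW2
    g_b2 = g_out.sum(axis=0)
    g_h = g_out @ W2.T                  # gradient flowing back into the hidden layer
    g_pre = g_h * (1.0 - h ** 2)        # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")

The same forward/backward structure is what automatic differentiation in frameworks such as PyTorch or TensorFlow computes for you; writing it out once by hand is a common way to make backpropagation concrete.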

 
Course Content

  • Introduction to Deep Learning and Neural Networks

  • Optimization and Backpropagation

  • Loss Functions and Activation Functions

  • Training Neural Networks: Best Practices

  • Regularization Techniques and Data Augmentation

  • Convolutional Neural Networks (CNNs) and CNN Architectures (e.g., AlexNet, ResNet); a short illustrative sketch follows this list

  • Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM)

  • Introduction to Transformers in Deep Learning
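
To give a concrete flavor of the architectures listed above, here is a short sketch of a small CNN together with a single training step. The course description only specifies Python-based tools, so the choice of PyTorch and every layer size below are assumptions made for this illustration.

import torch
import torch.nn as nn

# A small convolutional network in the spirit of the classic architectures
# named above; the layer sizes are illustrative, not taken from the course.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel input image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One training step with cross-entropy loss and SGD on a dummy batch.
model = TinyCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)  # random stand-in for a real image batch
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()                     # backpropagation through the whole network
optimizer.step()                    # gradient descent update of all parameters
print(f"loss after one step: {loss.item():.4f}")

The convolution/nonlinearity/pooling pattern in this sketch is the basic building block that architectures such as AlexNet scale up, and that ResNet extends with skip connections.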

 

Further Requirements

  • Basic knowledge of linear algebra, statistics, and differential calculus.

  • Familiarity with Python programming.

  • All lecture slides and course materials will be provided in English.

 

Exam

Written or oral exam at the end of the semester. Successful participation in the exercises (homework assignments) is required for admission to the final exam.


Teaching and Learning Methods

 

Lectures are delivered as slide-based presentations, supported by occasional whiteboard explanations. The concepts introduced in the lecture will be accompanied by practical examples and applications. Weekly exercises and homework assignments will reinforce and deepen understanding.

This course is suitable for students of computer science, data science, and related fields. It also serves as a basis for deep learning and AI courses in other fields of science, as well as preparation for more advanced AI courses (e.g., Autonomous Learning, Generative AI, Information Retrieval).


Literature

  • Deep Learning, Ian Goodfellow, Yoshua Bengio, Aaron Courville, MIT Press, 2016

  • Neural Networks and Deep Learning, Michael Nielsen (online resource)

  • Deep Learning with Python, François Chollet, Manning Publications

Kiel University
Department of Computer Science   
Visual Computing and Artificial Intelligence
Neufeldtstraße 6 (Ground Floor)
D-24118 Kiel
Germany

