Optimization Techniques
Course Outline

This course introduces students to various optimization techniques used in computer science and their applications. It covers the principles of optimal solutions, various techniques for finding them, and the analysis of these techniques. Students will learn to formulate optimization problems and solve them using appropriate algorithms.

Introduction to Optimization

Understanding optimization: definitions and applications

Classifications of optimization problems

Basics of convexity, concavity, and convex optimization problems

Unconstrained Optimization Techniques

Gradient Descent and Stochastic Gradient Descent

Newton's and Quasi-Newton Methods

Conjugate gradient method

Constrained Optimization Techniques

Lagrange multipliers

Karush-Kuhn-Tucker (KKT) conditions

Projected Gradient Descent

Linear Programming

Introduction to linear programming

Simplex method

Duality and Dual Simplex method

Midterm Examination

Integer Programming and Combinatorial Optimization

Introduction to integer programming

Branch and bound method

Greedy algorithms, Dynamic programming

Heuristic Methods

Genetic algorithms

Simulated Annealing

Particle Swarm Optimization

Machine Learning and Optimization

Least squares and logistic regression

Support Vector Machines

Neural networks and backpropagation
Textbooks and Reference Materials:

"Convex Optimization" by Stephen Boyd and Lieven Vandenberghe

"Optimization for Machine Learning" by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
Introduction to Optimization

Optimization is a fundamental concept that transcends the boundaries of many scientific disciplines, including computer science, mathematics, engineering, economics, and data science. The ability to solve optimization problems efficiently and effectively is essential for any professional in these fields.
This course will introduce you to the fundamental techniques and methodologies used in optimization. Whether you're tuning a machine learning algorithm to get the best performance, determining the shortest path for a delivery network, or scheduling tasks in a busy system to minimize latency, optimization techniques form the bedrock of these processes.

We'll begin by familiarizing ourselves with the landscape of optimization, understanding its key definitions and classifications, such as convex and non-convex problems, constrained and unconstrained optimization, and linear and nonlinear programming. We'll explore the concepts of convexity and concavity, which play crucial roles in determining problem complexity and solvability.
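
As a quick taste of what convexity means in practice, the sketch below numerically probes the midpoint inequality f((x + y) / 2) <= (f(x) + f(y)) / 2 on a grid of sample points. The helper and test functions are illustrative inventions, not part of the course materials, and a sampled check is only evidence, not a proof:

```python
import math

def is_midpoint_convex(f, points, tol=1e-12):
    """Check the midpoint convexity inequality
    f((x + y) / 2) <= (f(x) + f(y)) / 2 on all sampled point pairs."""
    return all(
        f((x + y) / 2) <= (f(x) + f(y)) / 2 + tol
        for x in points for y in points
    )

points = [i * 0.25 - 3 for i in range(25)]          # grid on [-3, 3]
convex_result = is_midpoint_convex(lambda x: x * x, points)   # x^2 is convex
sine_result = is_midpoint_convex(math.sin, points)  # sin is not convex on [-3, 3]
print(convex_result, sine_result)  # True False
```

For continuous functions, midpoint convexity is equivalent to convexity, which is why this simple pairwise check is a reasonable numerical probe.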

In the first part of the course, we'll delve into unconstrained optimization techniques like Gradient Descent, which serves as the backbone of many machine learning algorithms, and Newton's and Quasi-Newton Methods, used for efficiently finding local minima or maxima of a function. We'll apply these techniques to problems like curve fitting or finding the roots of an equation.
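
Both update rules fit in a few lines of Python. The sketch below is illustrative (the quadratic objective f(x) = (x - 3)^2 and the step size are invented for the example); note how a single Newton step is exact for a quadratic, while gradient descent approaches the minimizer iteratively:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step against the gradient of the objective."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def newton_step(x, dfdx, d2fdx2):
    """One Newton update: x - f'(x) / f''(x)."""
    return x - dfdx(x) / d2fdx2(x)

# Toy objective f(x) = (x - 3)**2 with gradient 2 * (x - 3) and curvature 2.
x_gd = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
x_newton = newton_step(0.0, lambda x: 2 * (x - 3), lambda x: 2.0)
print(x_gd, x_newton)  # both land at (approximately) 3.0
```

Quasi-Newton methods follow the same template as the Newton step but replace the exact second derivative with a cheap running approximation.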

Then we'll move on to constrained optimization, discussing concepts like Lagrange multipliers and the Karush-Kuhn-Tucker conditions. We'll apply these techniques to problems like portfolio optimization, where we need to maximize return while limiting risk.
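
As a toy illustration, consider minimizing x^2 + y^2 subject to x + y = 1; setting the Lagrangian's gradient to zero gives x = y = 1/2. The sketch below solves the same problem numerically with projected gradient descent (another technique from the outline); the closed-form projection used here is specific to this single linear constraint:

```python
def project_to_plane(x, y):
    """Euclidean projection onto the constraint x + y = 1."""
    shift = (x + y - 1) / 2
    return x - shift, y - shift

def projected_gradient(lr=0.1, steps=100):
    x, y = 5.0, -2.0                              # arbitrary starting point
    for _ in range(steps):
        x, y = x - lr * 2 * x, y - lr * 2 * y     # gradient step for x**2 + y**2
        x, y = project_to_plane(x, y)             # restore feasibility
    return x, y

x_opt, y_opt = projected_gradient()
print(x_opt, y_opt)  # both approach 0.5, matching the Lagrange solution
```

Agreement between the numerical iterate and the analytic multiplier solution is a useful sanity check when first learning these methods.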

In our discussion on linear programming, you'll learn about the Simplex method and its applications in fields like resource allocation, logistics, and production planning. For instance, we'll tackle a real-world supply chain optimization problem to decide the most cost-effective distribution of goods from multiple warehouses to various retail outlets.
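
A key fact behind the Simplex method is that a linear program's optimum (when it exists) lies at a vertex of the feasible region. For a tiny two-variable LP, invented here for illustration, we can verify this by brute force: intersect every pair of constraint boundaries and keep the feasible intersection points:

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c, including x >= 0 and y >= 0.
constraints = [
    (1, 1, 4),     # x + y <= 4
    (1, 0, 2),     # x <= 2
    (-1, 0, 0),    # -x <= 0  (i.e. x >= 0)
    (0, -1, 0),    # -y <= 0  (i.e. y >= 0)
]

def vertices(cons):
    """Intersect each pair of constraint boundaries; keep feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

# Maximize 3x + 2y over the feasible region by scanning the vertices.
best = max(vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # (2.0, 2.0)
```

Enumerating all vertices is exponential in general; the Simplex method instead walks from vertex to neighboring vertex, improving the objective at each move.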
Our foray into integer programming and combinatorial optimization will have us exploring the branch and bound method and dynamic programming techniques. These skills come in handy when dealing with discrete optimization problems, such as task scheduling, or the famous traveling salesperson problem.

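
As a small worked example of dynamic programming on a discrete problem, here is a 0/1 knapsack sketch; the particular values, weights, and capacity are a common toy instance, not course-specific data:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming.
    dp[w] holds the best value achievable with total weight <= w."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate weights backwards so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

best_value = knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
print(best_value)  # 220: take the items worth 100 and 120
```

The same fill-a-table-of-subproblems pattern reappears throughout combinatorial optimization, from scheduling to shortest paths.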
We'll also uncover the world of heuristic methods, such as genetic algorithms and simulated annealing, which can provide good solutions for complex, multimodal optimization problems. We'll demonstrate their utility in problems like feature selection in machine learning, or even in game-playing AI.

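
A minimal simulated-annealing sketch is shown below on an invented multimodal objective (many local dips, global minimum near x ≈ -0.3). The linear cooling schedule, Gaussian step size, and random restarts are all arbitrary choices for the example, and as a heuristic the method offers no optimality guarantee:

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, temp0=2.0, seed=0):
    """Minimize f by random perturbations, accepting uphill moves with
    probability exp(-delta / T) while the temperature T cools toward zero."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        T = temp0 * (1 - k / steps) + 1e-9       # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)             # random neighbour proposal
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc                     # accept the move
            if fx < best_f:
                best_x, best_f = x, fx           # track best-so-far
    return best_x, best_f

# Invented multimodal objective; plain gradient descent from x = 4 would
# get stuck in a nearby local dip.
f = lambda x: x * x + 3 * math.sin(5 * x)
x_best, f_best = min(
    (simulated_annealing(f, x0=4.0, seed=s) for s in range(3)),
    key=lambda t: t[1],
)
print(x_best, f_best)
```

Accepting occasional uphill moves early (when T is large) is what lets the search escape local minima that would trap a pure descent method.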
Lastly, we'll explore the intersection of machine learning and optimization. We'll delve into backpropagation, the optimization backbone of neural networks, and discuss techniques for optimizing support vector machines and logistic regression models.
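
Logistic regression is effectively a one-neuron network, so its gradient-descent update is the simplest instance of backpropagation. The sketch below fits it by stochastic gradient descent on the log-loss; the toy dataset and hyperparameters are invented for illustration:

```python
import math

def train_logistic(data, lr=0.5, epochs=500):
    """Fit p(y=1 | x) = sigmoid(w.x + b) by stochastic gradient descent.
    The error signal (p - y) is the log-loss gradient w.r.t. the logit,
    i.e. backpropagation through a single neuron."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), y in data:
            p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
            err = p - y
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
            b -= lr * err
    return w, b

# Invented, linearly separable toy data: label 1 iff x0 + x1 > 1.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
        ((1, 1), 1), ((2, 1), 1), ((1, 2), 1)]
w, b = train_logistic(data)
predict = lambda x0, x1: 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b))) > 0.5
print([predict(x0, x1) for (x0, x1), _ in data])  # matches the labels
```

In a multi-layer network, backpropagation applies this same chain-rule error signal layer by layer instead of to a single neuron.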