If the "Traveling Salesman Problem" is convex and difficult to solve, does this imply that there are non-convex discrete/combinatorial problems that are even more difficult to solve?
I do not have a background in optimization and I am trying to teach myself more about this topic.

I find myself having a lot of trouble understanding the different "types" of optimization problems that exist. For example, I understand the idea of optimizing continuous functions (e.g. we could be interested in finding the value of $x$ that results in the smallest value of $y$). I also understand that continuous functions can be optimized subject to some constraints.

However, I find myself very confused when trying to sort through the following types of optimization problems. When I think of these problems, the first thing that comes to mind is that they are fundamentally different from optimizing continuous functions. For instance, the inputs of the above list of problems are usually "categorical" in nature. This is why I have heard that problems belonging to the above list usually require "gradient-free optimization methods" (e.g. evolutionary algorithms, branch and bound, etc.), since it is impossible to take derivatives of the objective functions corresponding to these problems.

For example, if you take problems such as the "Traveling Salesman Problem" or the "Knapsack Problem" (note: I have heard that these problems belong on the above list, but I am not sure), I would visualize the objective function as something like this:

Are the 4 types of optimization on the above list effectively the "same thing"? The way I see it, all 4 types of these problems have "discrete inputs", and in a mathematical sense, "integers" are always considered "discrete". In all 4 types of problems, we are interested in finding a "discrete combination" of inputs.

I have also heard that the "Traveling Salesman Problem" is a very difficult problem to solve. If we consider continuous optimization problems, we usually say that "convex optimization problems are easier than non-convex optimization problems", because non-convex functions can have "saddle points" where the optimization algorithm can get stuck. I have also heard the argument that "any optimization problem that can be formulated as a linear problem is always convex (because linear objective functions are always convex)". Using this logic, I have seen the objective function of the "Traveling Salesman Problem" written as a linear function, and thus the "Traveling Salesman Problem" being considered a convex optimization problem.
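To make the "linear objective" claim concrete, here is a minimal sketch of my own (a hypothetical 4-city instance, not from any particular textbook) of the TSP objective written as a linear function of binary edge variables $x_{ij}$, where $x_{ij} = 1$ if the tour uses the edge from city $i$ to city $j$:

```python
import numpy as np

# Hypothetical 4-city instance: c[i][j] is the travel cost from city i to j.
c = np.array([
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
])

def tsp_objective(x, c):
    """Linear TSP objective: sum over i, j of c[i][j] * x[i][j].
    As a function of the variables x this is linear, hence convex
    (the hardness of TSP lives in the combinatorial constraints on x,
    not in the objective itself)."""
    return float(np.sum(c * x))

# Edge-variable encoding of the tour 0 -> 1 -> 3 -> 2 -> 0.
x = np.zeros((4, 4))
for i, j in [(0, 1), (1, 3), (3, 2), (2, 0)]:
    x[i, j] = 1

print(tsp_objective(x, c))  # 2 + 4 + 12 + 15 = 33.0
```

This is how I understand the "linear, therefore convex" argument: the objective is a dot product of costs and binary variables, which is as convex as a function gets.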
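And to make the "discrete inputs, no derivatives" point concrete, this is how I picture a gradient-free search for problems like these: simply evaluating the objective on every permutation (my own toy sketch, feasible only for tiny instances since there are $(n-1)!$ tours):

```python
from itertools import permutations

# Same hypothetical 4-city cost matrix: c[i][j] is the cost from city i to j.
c = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]

def tour_cost(tour, c):
    """Cost of visiting cities in the given order and returning to the start."""
    return sum(c[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def brute_force_tsp(c):
    """Enumerate every tour starting at city 0 -- a derivative-free search,
    since the input is a permutation, not a point in R^n."""
    n = len(c)
    tours = ((0,) + p for p in permutations(range(1, n)))
    return min(tours, key=lambda t: tour_cost(t, c))

best = brute_force_tsp(c)
print(best, tour_cost(best, c))  # (0, 2, 3, 1) 21
```

Smarter discrete methods (branch and bound, evolutionary algorithms) only explore this permutation space more cleverly; none of them takes a derivative, which is exactly what confuses me about calling the problem "convex".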