# Problem of Longest Increasing Subsequence (LIS)


Let’s explain the problem with an example. Assume the array is {10, 22, 9, 33, 21, 50, 41, 60}. In this case, there are several increasing subsequences like:

1. {10, 22, 33, 50, 60}
2. {10, 33, 50, 60}
3. {10, 21, 50, 60}
4. {10, 21, 41, 60}
…and so on. The idea is to find the maximum length of an increasing subsequence in the main array.

The problem can be solved efficiently using dynamic programming, but let’s first try to solve it using brute force. With brute force, we generate all the possible increasing subsequences and find the longest one. Let’s start with the first element in our main array (10) and see how that goes.

Starting with 10, we either include it or we exclude it. Let’s include it first, in which case the next element in the LIS has to be bigger than 10. We have 22, which is bigger than 10, so there are two cases here: one, we include 22 in our LIS and proceed further, and two, we don’t include 22 and proceed further. For each of the two cases above, we then check the next element, which is 9. Since 9 is smaller than 10 (already in our LIS), there is just one case here: do not include it. Next, we have 33 in the main array. Again, there are two cases: one, we include 33 and proceed further, and two, we exclude 33 from our LIS and proceed further. We keep doing this until we reach the end of the main array. At the end, we take the maximum length achieved over all of these include/exclude choices and return that as the answer. The time complexity of this brute force method is O(2^n), and the space complexity is O(n) for the recursion stack.
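The include/exclude recursion described above can be sketched in Python (the linked code may be in a different language). `prev` tracks the last value included in the subsequence so far:

```python
def lis_length(nums):
    """Brute-force LIS length via include/exclude recursion: O(2^n) time."""
    def helper(prev, i):
        if i == len(nums):
            return 0
        # Case 1: exclude nums[i] and move on.
        best = helper(prev, i + 1)
        # Case 2: include nums[i], but only if it keeps the sequence increasing.
        if prev is None or nums[i] > prev:
            best = max(best, 1 + helper(nums[i], i + 1))
        return best

    return helper(None, 0)

print(lis_length([10, 22, 9, 33, 21, 50, 41, 60]))  # 5
```

Note how the same suffix of the array gets re-explored under many different prefixes; that repeated work is exactly what dynamic programming removes.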

The problem with the brute force method is that often, the same computation is being done more than once. We can solve this problem using dynamic programming (overlapping sub-problems and optimal sub-structure). Let’s see how we do that in THIS video.
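To make the DP formulation concrete, here is a minimal sketch in Python (the linked code may differ), where `dp[i]` holds the length of the LIS that ends at index `i`:

```python
def lis_dp(nums):
    """LIS length via dynamic programming: O(n^2) time, O(n) space."""
    if not nums:
        return 0
    # dp[i] = length of the longest increasing subsequence ending at index i.
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            # nums[i] can extend any increasing subsequence ending in a smaller value.
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_dp([10, 22, 9, 33, 21, 50, 41, 60]))  # 5
```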

HERE is some code that solves it using both brute force and dynamic programming. Note that there is another method, using binary search, that solves this problem in O(n log n) time, and we may cover that in a later post.
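As a small preview of that O(n log n) idea, here is a sketch using Python’s `bisect` module. `tails[k]` stores the smallest possible tail value of an increasing subsequence of length `k + 1`; `tails` stays sorted, so each element is placed with a binary search:

```python
import bisect

def lis_binary_search(nums):
    """LIS length via patience-sorting-style binary search: O(n log n) time."""
    tails = []  # tails[k] = smallest tail of an increasing subsequence of length k+1
    for x in nums:
        pos = bisect.bisect_left(tails, x)
        if pos == len(tails):
            tails.append(x)   # x extends the longest subsequence seen so far
        else:
            tails[pos] = x    # x gives a smaller tail for length pos+1
    return len(tails)

print(lis_binary_search([10, 22, 9, 33, 21, 50, 41, 60]))  # 5
```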

To find out how to print the LIS elements as well, refer to THIS piece of code. The idea is similar; it just needs a small modification that keeps a tab on the subsequences built so far (e.g. a vector of vectors).
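One way to sketch that modification in Python (instead of storing a vector of vectors, this version keeps a parent pointer per index and backtracks from the best ending position):

```python
def lis_sequence(nums):
    """Return one longest increasing subsequence using parent pointers."""
    if not nums:
        return []
    dp = [1] * len(nums)        # dp[i] = LIS length ending at index i
    parent = [-1] * len(nums)   # parent[i] = previous index in that LIS
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i] and dp[j] + 1 > dp[i]:
                dp[i] = dp[j] + 1
                parent[i] = j
    # Backtrack from the index where the longest subsequence ends.
    i = max(range(len(nums)), key=lambda k: dp[k])
    seq = []
    while i != -1:
        seq.append(nums[i])
        i = parent[i]
    return seq[::-1]

print(lis_sequence([10, 22, 9, 33, 21, 50, 41, 60]))  # [10, 22, 33, 50, 60]
```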


# Dynamic Programming – Optimal Sub-structure


In THIS post, we covered the overlapping sub-problems part of dynamic programming. Now we will talk about what we mean by “optimal sub-structure”.

Assume we have a graph like the one below. What is the shortest path from node A to node D? It is A->C->D. The optimal sub-structure property means that if a node (C) lies on the shortest path between two nodes (A and D), then the shortest path between those two nodes (A and D) is the shortest path from the source node (A) to the intermediary node (C) plus the shortest path from the intermediary node (C) to the destination node (D). Note that if we were to find the longest path between two nodes, this property does not hold (try drawing out a graph and see for yourself why). In the next few posts, we are going to solve a few DP questions to understand these concepts better.
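The property can be checked on a small example. Here is a minimal sketch using Dijkstra’s algorithm, with hypothetical edge weights (A->B=4, A->C=1, B->D=1, C->D=2) assumed for illustration, since they are not the ones from the figure:

```python
import heapq

# Hypothetical graph, assumed for illustration: adjacency map with edge weights.
graph = {
    'A': {'B': 4, 'C': 1},
    'B': {'D': 1},
    'C': {'D': 2},
    'D': {},
}

def shortest_distances(graph, src):
    """Dijkstra's algorithm: shortest distance from src to every reachable node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

dist = shortest_distances(graph, 'A')
# Optimal sub-structure: the shortest A->D path goes through C, so
# dist(A, D) == dist(A, C) + weight(C, D).
assert dist['D'] == dist['C'] + graph['C']['D']
```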
