YOE: 1.5 years TC: 120k

Can someone explain to me why this nested loop is linear time? I am taking an algorithms course on educative and I am getting through it pretty well; however, this particular situation is stumping me. All help is appreciated.

First example: time complexity of O(n log n). I fully understand why this example evaluates to a linearithmic time complexity.

```javascript
// Initializations
const n = 10;
const pie = 3.14;
let sum = 0;
var i = 1;

while (i < n) {
  console.log(pie); // log n work
  for (var j = 1; j < n; j += 2) {
    sum = sum + 1; // log n * n work
  }
  i *= 3;
}
console.log(sum);
```

Second example: time complexity (as given in the course) O(n). And this is where I am stumped. When I attempt to calculate the time complexity of this loop using the same method as above, I get linearithmic time complexity.

```javascript
// Initializations
const n = 10;
const pie = 3.14;
let sum = 0;
var i = 1;

while (i < n) {
  // console.log(pie); // log n work
  for (var j = 0; j < i; j++) {
    sum = sum + 1; // log n * n work? Apparently not.
  }
  i *= 2;
}
console.log(sum);
```

The answer given by educative was very mathy and a bit over my head. Can someone explain this to me in English? Thanks for all of the help in advance!

P.S. Will I ever have to do an analysis that is this math-involved in a FAANG interview?

Here is the answer educative provided:

> The outer loop here runs log(n) times. In the first iteration of the outer loop, the body of the inner loop runs once. In the second iteration, it runs twice, and so on. The number of executions of the body of the inner loop increases in powers of 2. So, if k is the number of iterations of the outer loop, the body of the inner loop runs a total of 1 + 2 + 4 + 8 + ⋯ + 2^k times. This [geometric series](https://en.wikipedia.org/wiki/1_%2B_2_%2B_4_%2B_8_%2B_%E2%8B%AF) sums to 2^(k+1) − 1. The inner loop condition requires that the last time the inner loop runs, it runs at most n times. This requires 2^k < n, i.e., k < log_2 n, or k = ⌊log_2 n⌋. This means that the geometric series sums to 2^(⌊log_2 n⌋ + 1) − 1, or 2^(⌈log_2 n⌉) − 1. In other words, the sum is **at least** n − 1. But we need an upper bound on this value for Big O. The number of iterations of the inner loop changes at integer powers of 2, as you would expect with the ⌈log_2 n⌉ exponent. At n = 4 the number of iterations is 3; at n = 8 it is 7, etc. So, on values of n that are integer powers of 2, the number of iterations is n − 1. That conforms to our lower bound on the number of iterations. The upper bound is evident at n = 2^i + 1, where i = 1, 2, 3, 4, ..., i.e., n = 3, 5, 9, 17, ... You will notice that at these values of n, the number of iterations of the inner loop is one less than the next higher integer power of 2, i.e., 2(n − 1) − 1 = 2n − 3. Thus, the number of iterations of the inner loop body is at least n − 1 and at most 2n − 3. As described in the slides, the outer loop statements account for 4⌈log_2 n⌉, and the total contribution of the inner loop is at most 8n − 12. Thus, the total running time is at most 4⌈log_2 n⌉ + 8n − 12. Dropping the constants and the lower-order terms gives us n. Hence, the Big O time complexity is O(n).

Here is the link if you want to read the answer given, since the formatting did not come out the best: [https://www.educative.io/courses/data-structures-coding-interviews-javascript/xVo6KLGVNB3](https://www.educative.io/courses/data-structures-coding-interviews-javascript/xVo6KLGVNB3)

#engineering #software #swe
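One quick way to see the difference without the math: count the inner-loop iterations of both examples directly and watch how the totals grow. This is just a sketch; the function names here are made up for illustration.

```javascript
// Count inner-loop iterations for each example to compare growth rates.

function countExample1(n) {
  let count = 0;
  for (let i = 1; i < n; i *= 3) {      // outer: ~log3(n) passes
    for (let j = 1; j < n; j += 2) {    // inner: ~n/2 per pass, every pass
      count++;
    }
  }
  return count;
}

function countExample2(n) {
  let count = 0;
  for (let i = 1; i < n; i *= 2) {      // outer: ~log2(n) passes
    for (let j = 0; j < i; j++) {       // inner: only i per pass
      count++;
    }
  }
  return count;
}

for (const n of [10, 100, 1000, 10000]) {
  console.log(n, countExample1(n), countExample2(n));
}
// Example 1's count grows faster than n (the extra log factor),
// while example 2's count always stays below 2n.
```

Running this, example 2's total never exceeds 2n, which is exactly what the O(n) claim predicts.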
Heard of stackoverflow.com?
I wanted answers from fang engineers.
There is nothing special about FAANG engineers.
Do not study for or try to be a software engineer if you feel a geometric series is "mathy". It is disgusting to work with so many so-called "coders" from random majors without any high-school-level math basics.
Why so exclusionary? Geometry… is… math… and there’s tons of software engineering work that doesn’t require any knowledge of geometry
@Uber he is asking on this forum because he wants to better himself. I am a random major too but scored 800 on math SAT at least.
The last outer iteration is O(n). The second-to-last outer iteration is O(n/2). Then O(n/4), O(n/8), and so on. If you add up all the outer iterations, you get O(2n), which is equivalent to O(n).
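That halving sum can be sanity-checked in a few lines. A sketch, assuming n is a power of two; `geometricTotal` is a made-up name:

```javascript
// Add up the work of each outer pass, last pass first:
// n + n/2 + n/4 + ... + 1 (treating the final pass as n work).
function geometricTotal(n) {
  let total = 0;
  for (let work = n; work >= 1; work = Math.floor(work / 2)) {
    total += work;
  }
  return total;
}

console.log(geometricTotal(1024)); // 2047: just under 2 * 1024
```

The total is always one less than 2n, so dropping the constant gives O(n).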
The point of big O notation is to find the limiting behavior of the function as n approaches a big value. So if you think about it intuitively, the outer loop runs far fewer times than the inner loop. For a big value of n, the outer loop runs log(n) times, while the inner loop does O(n) work. Here the bounding factor is O(n), since O(log n) is smaller. On the flip side, if it were the outer loop that was running n times, then the time complexity would be O(n log n): here log n is the smaller factor, but the graph is pushed higher because of that log(n) work, and the bound is no longer O(n). If you want to understand it better, try graphing the functions. Please correct me if I am wrong; this is how I understand it.
The key difference is that in example 1, the inner loop is bound by N, so it runs in O(N), which you then multiply by the outer loop's O(log N) to get O(N log N). In example 2, the inner loop is bound by the variable i, which decreases exponentially with each iteration. In other words, ex 1 does the same N units of work for each iteration of the inner loop, while ex 2 does less and less work in each iteration. You can observe that the work done by ex 2 forms a series like 8 + 4 + 2 + 1, etc. The sum of that series is 2 * 8, or 2N where N=8.
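You can tally that series directly. A sketch; `innerWorkTerms` is a made-up helper name:

```javascript
// List the inner-loop work of example 2 pass by pass: i = 1, 2, 4, ...
function innerWorkTerms(n) {
  const terms = [];
  for (let i = 1; i < n; i *= 2) {
    terms.push(i); // the inner loop runs exactly i times on this pass
  }
  return terms;
}

const terms = innerWorkTerms(16);               // [1, 2, 4, 8]
const total = terms.reduce((a, b) => a + b, 0); // 15 = 2 * 8 - 1
console.log(terms, total);
```

The sum of the terms is always just under twice the largest term, which is where the 2N comes from.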
But it is the opposite: in the second example, i does not decrease exponentially. It doubles every run, so example 2 does more and more work in each iteration, not less. Not that it matters, though; whether the series goes from high to low or low to high, it has the same complexity. I think what he is asking is why the outer log(n) loop does not get multiplied by the inner 2n loop like it did in the first example. I might add that I am also confused about this. Since the inner loop runs on every outer iteration, I would also expect it to be outer * inner, which would be O(log n * 2n), or just O(log(n) * n).
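One way to see why multiplying outer by inner over-counts here: the product charges every outer pass for the inner loop's largest bound, but that bound is only reached on the final pass. Counting directly shows the gap (a sketch; names are made up):

```javascript
// Compare the naive log2(n) * n estimate against the actual count
// of inner-loop iterations in example 2.
function actualWork(n) {
  let count = 0;
  for (let i = 1; i < n; i *= 2) {
    for (let j = 0; j < i; j++) count++;
  }
  return count;
}

const n = 1 << 20;                 // 1,048,576
const naive = Math.log2(n) * n;    // pretends every outer pass does n work
console.log(actualWork(n), naive); // actual stays under 2n; naive is ~20n
```

The actual count is bounded by 2n, far below the n log n the multiplication would suggest.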
TC or GTFO