I've seen many stories of people having a bad gauge of how their Google interviews went. How does this performance look? This was for an L4 role, so each round only had ~35-40 min for the problem.

Round 1: LC Medium - optimal solution without hints + follow-up. Not much to say here - straightforward, went smoothly.

Round 2: LC Medium - optimal solution, but had to update the algo while coding it (without hints). Asked a follow-up, but it was more problem solving than coding.

Round 3: LC Medium - optimal solution without hints + walkthrough + tests. Asked a follow-up which I solved, though it had a minor bug that the interviewer helped point out.

Round 4: LC Hard - optimal solution without hints + walkthrough + tests. Asked a follow-up which was more problem solving. Also gave the wrong complexity, but it was not mentioned.

Round 5: Behavioral - typical stuff; I think this went well.

While I thought I did well overall, there were still small things that could have gone better. I keep reading posts from people who feel like they did well but still got rejected. Do interviewers expect perfection? What signals are they evaluating?

TC - 🥜
YOE - 5

Update: Passed HC
Looks like you can pass HC, why weren’t you given an L5 loop with 5 YOE?
How do you know it's LC Hard or Medium? I mean, were they straight out of LC, or do you just think they're equivalent to a Medium or Hard? I'm asking because sometimes the intent of the question is not to solve it but how you get there. For example, with "implement an iterator with some conditions," the correct/optimal answer is very straightforward, but how you describe it matters. But if the questions were straight out of LC, then it's a different story.
Not straight out of LC, but the patterns were extremely similar to specific LC problems.
Unfortunately, in that case it's very difficult to predict what was important to the interviewer: the optimal solution, or how you got to it. I remember in my recent interview the interviewer let me finish the code, but then he was more interested in knowing why I chose, say, DFS over BFS, or disjoint set over DFS, etc.
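To illustrate the kind of trade-off that question is probing (this is my own sketch, not the actual interview problem): counting connected components can be done with either DFS or a disjoint set (union-find), and being able to articulate why you'd pick one is the point. A minimal comparison in Python:

```python
from collections import defaultdict

def count_components_dfs(n, edges):
    """Count connected components with iterative DFS: O(V + E).
    Natural choice when the whole graph is given up front."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = set()
    components = 0
    for start in range(n):
        if start in seen:
            continue
        components += 1
        stack = [start]
        seen.add(start)
        while stack:
            node = stack.pop()
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return components

def count_components_union_find(n, edges):
    """Count components with union-find: near O(E * alpha(N)).
    Better fit when edges arrive one at a time and you need
    connectivity answers between insertions."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components
            components -= 1
    return components
```

Both return the same answer on a static graph; the justification ("DFS because the graph is fixed" vs. "union-find because edges stream in") is what an interviewer in that situation seems to be listening for.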
I haven't done a live onsite with Google, but I have done a few mocks with their engineers. They are tough graders. The first told me they choose questions so that at least some people can solve them, but then they compare candidates on speed of completion, like if you take 40 minutes and another candidate takes 35.
On average, it took me ~5-10 min to figure out and describe the algo, ~10 min to code, ~10 min to walk through a test / edge cases, and ~10 min for the follow-up.
Speed is such a tiny part of it. What matters more is solving the right problem instead of getting stuck on a tangent.
I think you got it.
There are many things that go into evaluating Google interviews. For example, you didn’t mention: Did you ask clarifying questions? Did you solve for edge-cases and bad input?
Solid yes for both of those.
Spill the questions man!
I have my onsites coming up soon. How do you test your code? Do you come up with test cases yourself and dry-run the code, or does the interviewer give test cases?
No, I was asked to come up with test cases and walk through them myself.
Was the LC Hard a DP problem?