Hi there community! I recently completed the coding part of the new grad virtual onsite for FB and wanted to share my experience with all of you while getting your opinions on my chances, based on what I think I did well and what I did wrong. Here we go:

- Phone screen: HR gave me feedback before the onsite and said I was a really strong performer on the phone screen. Is phone screen feedback also used when making the final decision after the onsite?

- 1st interview: Standard introduction followed by two medium questions.
  * First question: a slight modification of a well-known question about random selection. I shared an initial approach with O(N) space. After that I mentioned I knew there was a way to use a probability concept to make it O(1) space, but that I wasn't sure how it worked. I struggled for a few minutes to find the math, but in the end I found it and was able to code it. Good: found the optimal solution without bugs, correct complexity analysis. Bad: struggled to find the logic, and my thought process during those few minutes wasn't shared as clearly as I would have liked.
  * Second question: a tree question. I knew this problem and tried to explain everything clearly from the start: why I chose one particular traversal over the others and why my approach was optimal. I spent about 10-12 minutes before starting to code (do you think I should have been faster?), and then the interviewer seemed satisfied enough to let me code. I wrote clean code for it (as I said, I knew the question), but when done I didn't do a dry run right away. Good: clear explanation, clean optimal code. Bad: the interviewer had to tell me to test with the given example after I finished coding. Since the main logic was a standard traversal, I thought it was self-explanatory, but it was an error on my side not to dry-run the example directly. How much of a red flag is this?
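(For anyone wondering about the O(1)-space probability trick in that first question: the post doesn't name it, but it sounds like reservoir sampling, which picks a uniformly random item from a stream without storing it. A minimal sketch, assuming that's the intended technique:)

```python
import random

def reservoir_pick(stream):
    """Uniformly pick one item from an iterable of unknown length, O(1) extra space."""
    choice = None
    for i, item in enumerate(stream, start=1):
        # Replace the current choice with probability 1/i; by induction,
        # every item ends up selected with probability 1/n overall.
        if random.randrange(i) == 0:
            choice = item
    return choice
```

The key insight to derive in the interview is the induction step: item i survives only if it is picked (probability 1/i) and then never replaced by any later item.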
  Also, I did not have time to do the full dry run (though the interviewer seemed already convinced everything worked).

- 2nd interview: The interviewer told me he'd like me to ask some clarification questions and give an explanation before jumping to code, and that we would go through one question and, if we had time, more.
  * First question: I knew the solution and could have coded it in literally 3 minutes. However, following what he had said, I asked a few questions about edge cases and so on, and explained every detail of the logic in depth (which consumed quite some time) before coding it. I dry-ran the code against a couple of examples and we jumped to the second question. Good: good explanation, optimal solution coded. Bad: used all 20 minutes on a question on the easier side (tagged as medium, though).
  * Second question: another medium question. I didn't know this one, so it took more time to find an algorithm. I found the optimal approach and coded it. During the dry run I found a bug causing an infinite loop and corrected it literally one second before the interviewer said time was up. Good: found the optimal solution, correct complexity analysis. Bad: did not have time for a clean dry run, and the code was initially buggy. My bug fix was right, but I think the interviewer was not fully convinced before time ran out.

As I said before, I am yet to complete my behavioral interview, so that will obviously also matter for the final outcome. However, how would you evaluate my performance in these coding rounds? Is phone screen feedback used? How much of a red flag is it not to dry-run your code? Thanks a lot everyone!

PS: the questions were all among the most frequent questions from the last 6 months (@top130).

#facebook #meta #interview

Edit: got the offer!!
TC 0, this is a new grad process