Blind reviews are not the average of the individual categories. LinkedIn's review, for example, should be: (3.5 + 4.4 + 4.3 + 4.4 + 3.7) / 5 = 4.06. Blind instead asks for an "overall" rating and uses that. Why would they do that when they have more granular data points?! #teamblind
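For concreteness, a minimal sketch of "average the categories" versus "just use the overall rating". The category names and the value-to-category mapping are assumptions for illustration; only the five numbers come from the example above, and Blind's actual pipeline is unknown.

```python
# Hypothetical sketch: equal-weight average of category ratings vs. the overall rating.
# Category names and the value-to-category mapping are assumed for illustration;
# only the five numbers come from the example in the post above.
category_ratings = {
    "career_growth": 3.5,
    "work_life_balance": 4.4,
    "compensation": 4.3,
    "management": 4.4,
    "culture": 3.7,
}

# Simple (equal-weight) average of the category ratings.
category_average = sum(category_ratings.values()) / len(category_ratings)
print(round(category_average, 2))  # 4.06

# What Blind appears to do instead: take the reviewer's single
# "Rate this company" star rating as the overall score, ignoring the breakdown.
```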
Maybe they don't think each category is worth 20% (or any fixed amount) of the overall rating. If someone thought a company absolutely sucked, mgmt 1 star, culture 1 star, wlb 1 star, but growth and comp were amazing, the reviewer would probably want to give 1 star overall (justified, I think), not 3. Same in reverse. What the weights should be is probably really subjective.
They're just averaging this:
Rate this company: 5 stars
Everything else: 1 star
Final Blind rating: 5 stars
I think they are collecting initial data. They can always improve the algorithm.
Lol, like how Google Maps (not singling it out because you work for G, just an example) recommends a "top rated" restaurant with a 4.2 rating while another one with a 4.7 sits at the bottom of the page, both with plenty of reviews. I know the algorithm is doing a lot of work, but in the end the 4.2 always sucks and the 4.7 is always good. Some things should just stay simple.
I don't work on Maps, but Google is probably looking at many parameters beyond the overall rating to customize the order of search results for you.
Your method assumes equal weights for each category, and the overall rating might also capture things the individual categories don't.
Weighted Average, possible?
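A minimal sketch of what that could look like; the weights here are made up purely for illustration (as an earlier reply notes, the "right" weights are subjective):

```python
# Hypothetical weighted average over the category ratings from the example above.
# The weights are arbitrary illustrations and should sum to 1.
ratings = [3.5, 4.4, 4.3, 4.4, 3.7]
weights = [0.15, 0.25, 0.25, 0.20, 0.15]

weighted_average = sum(r * w for r, w in zip(ratings, weights))
print(round(weighted_average, 2))
```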