The unspoken mantra of the OCP is that it must be cheap. The participants are looking at scale, at huge data centres, and such critters have traditionally been (and still are) expensive. So how did FB and Google et al lower their costs?
1. they went to no-name hardware manufacturers for cheap components and minimal engineering...
Their main concern seems to be not the data they are caching and manipulating but cheap builds and cheap operations, i.e. low power consumption and low heat output.
2. they skimp big time on testing: what traditionally takes weeks to months is done in hours to a day.
So why is the traditional approach so time consuming? Umm, because they take minor details such as data integrity seriously: they check - and certify - electrical, thermal, and shock resistance, test error recovery, and all sorts of other stuff that apparently is NOT important to social platforms. And I guess not to search and ad networks either.
Note: I wonder how much of their dispersed architecture is redundancy made necessary by increased failure expectations.
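That trade-off can be made concrete with a toy calculation. This is a sketch with my own illustrative numbers (not anyone's published failure rates), and it assumes device failures are independent, which real deployments only approximate:

```python
# Toy model: replication offsetting a higher per-device failure rate.
# Failure probabilities below are illustrative assumptions, not real data.

def p_all_replicas_fail(p_device: float, replicas: int) -> float:
    """Probability that every replica of a datum fails in the same window,
    assuming independent failures."""
    return p_device ** replicas

# A "cheap" device with an assumed 10% failure rate, 3-way replicated:
cheap = p_all_replicas_fail(0.10, 3)      # 0.1^3 = 0.001
# A "certified" device with an assumed 2% failure rate, single copy:
certified = p_all_replicas_fail(0.02, 1)  # 0.02

print(f"cheap hardware, 3 replicas:  {cheap:.4f}")
print(f"certified hardware, 1 copy:  {certified:.4f}")
```

On this naive model, three cheap replicas beat one well-tested device. The catch, and it supports the skepticism here, is that real failures are correlated (bad component batches, shared racks, shared power), so the independence assumption flatters the cheap-plus-redundant approach.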
A typical quantity-over-quality cost calculation made at scale.
Which is fine until financial institutions (Fidelity, Goldman Sachs...), health care providers, electrical utilities, or... decide that OCP certification is sufficient... and they lose or corrupt your financial, medical, or lighting-and-heating wherewithal...
I consider OCP-certified devices unfit for serious purpose (barring adequate redundancy).
Note: the testing regime I've read about is on par with that of a big-box-store home computer. Quite sad, really.