I was reminded of this again by Google's Panda/Farmer algorithm explanation:

This update is designed to reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful. At the same time, it will provide better rankings for high-quality sites...

and by Barry Schwartz's (aka rustybrick, Cre8 Mod et al) Search Engine Land article Google Disables Starring Your Favorite Search Results, 15-March-2011:

Now that Google has re-introduced a way to block sites in Google, Google has determined you no longer need to star search results from the search results page.
The first quote implies (at the very least) that high-quality sites will rank better not directly because of their high quality, but indirectly, through the removal of previously higher-ranked low-quality sites.
The second quote implies (at the very least, and yes, I know it is Barry's phrasing, not a Google spokesperson's) that Google prefers to collect data not on what users like, but on what they do not.
Google is looking for weeds: rather than identifying instances of 'good', it attempts to identify every instance of 'bad'. Since that is a far more difficult and convoluted approach, one fact stands out: Google (the algo) hasn't a clue what is good. Indeed, Google often needs/wants to be 'told' via canonical, nofollow, etc. how to do its job.
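For concreteness, both of those hints are ordinary markup a site hands to the crawler; a minimal sketch (the example.com URLs are placeholders):

```html
<!-- In the <head>: tell Google which URL is the preferred ("canonical")
     version of this page, e.g. to collapse duplicate-content variants -->
<link rel="canonical" href="https://example.com/preferred-page/">

<!-- In the body: tell Google not to pass ranking credit through this link -->
<a href="https://example.com/some-page/" rel="nofollow">some link</a>
```

In both cases the site, not the algorithm, is supplying the judgment about what counts.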
So, a reminder for webdevs everywhere: it is not as important to follow Google's guidelines (to be good) as it is to NOT be identified as some instance of algo 'bad'.