Given that manufacturers retool their lines every few years, and that at any point the IC supply can produce a wave of bad chips that are hard to QA for, I don't put much stock in a manufacturer's reputation from more than a couple of generations back.
If you're lucky enough to be friendly with a seller who moves a lot of drives across different brands, they can tell you what is coming back _right now_, which gives insight into the DOA and 30-day failure rates of the current crop of drives. Longer-term projections, though, are anyone's guess.
Generally there are consumer drives with 3- or 5-year warranties, and more expensive 'enterprise' drives, also with 5-year warranties. Since the enterprise drives cost several times what the consumer ones do right now, and consumer drives don't fail that often, a decent backup (plus RAID, if you don't want to lose a day or so restoring) is cheaper than the higher-end drives.
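As a back-of-the-envelope illustration of that economics - every price below is an invented placeholder, not a quote; the only thing carried over from above is the 'several times the price' ratio:

    # Toy cost comparison -- all prices are made-up placeholders.
    CONSUMER = 100    # hypothetical price per consumer drive
    ENTERPRISE = 400  # hypothetical 'several times' enterprise price
    n = 4             # drives' worth of capacity needed

    enterprise_cost = n * ENTERPRISE
    # Consumer route: same capacity, plus RAID 1 mirrors, plus a backup set,
    # i.e. buying every drive three times over.
    consumer_cost = n * CONSUMER * 3
    print(f"enterprise: ${enterprise_cost}")              # $1600
    print(f"consumer + RAID + backup: ${consumer_cost}")  # $1200

Even buying each consumer drive three times over, the cheap drives can come out ahead at a 4x price gap.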
IIRC, Google's studies were done on the more expensive of the commodity drives, as they build their rigs to use RAIDs of whatever the 'sanely priced' drives currently are, rather than the really expensive ones.
Fer trivia:
For building certain systems, more RAM or cache isn't the solution; the problem really requires an array of fast disks. And when the speed of those disks really matters, their number can't just be increased freely, as the controllers also introduce some latency. The disks end up being very fast spinners (10-15k RPM) to reduce rotational latency, and the number of disks needs to be kept within a certain bound - e.g. no more than 36, or some other small magic number, hanging off the main controller - which caps the number of disks in the array at, say (not a real number, just a representative one), 42 drives. Mirroring at the top level brings that down to 21 drives, which might be broken into three RAID 5s of 7 drives each, say with 5 active and 2 hot spares. Suddenly, this server's configuration actually has a _maximum size_ in disk space: each 5-drive RAID 5 yields 4 drives' worth of usable space, so with 1 TB drives it might only be able to support 3 x 4 = 12 TB while maintaining a 2.x ms mean read latency (versus nanoseconds for main RAM).
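To make that arithmetic concrete, here's a toy sketch - the controller cap, group size, and spare count are the made-up representative numbers from above, not any real product's limits:

    # Sketch of the capacity/latency arithmetic. All specific numbers
    # (42-drive cap, 7-drive groups, 2 hot spares) are the illustrative
    # figures from the text, not real hardware limits.
    RPM = 15_000
    # Mean rotational latency: on average the head waits half a revolution.
    mean_rot_latency_ms = (60 / RPM) / 2 * 1000
    print(f"mean rotational latency at {RPM} RPM: {mean_rot_latency_ms:.1f} ms")  # 2.0 ms

    controller_cap = 42              # max drives hanging off the controller
    mirrored = controller_cap // 2   # top-level mirroring -> 21 usable drives
    group_size = 7                   # drives per RAID 5 group
    hot_spares = 2                   # per group
    groups = mirrored // group_size  # -> 3 groups

    active = group_size - hot_spares  # 5 active drives per group
    usable_per_group = active - 1     # RAID 5 spends one drive on parity -> 4
    drive_tb = 1
    max_capacity_tb = groups * usable_per_group * drive_tb
    print(f"maximum usable capacity: {max_capacity_tb} TB")  # 12 TB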
So, while big systems can ideally dodge that sort of requirement if architected carefully, not all can - some are genuinely bound by disk latency, and so need to spring for the fastest, largest, and most reliable drives they can afford. And if the manufacturer doesn't stockpile enough old models to cover the system's failure rate, the system needs to be redesigned to some degree around newer, faster drives. Lucky systems will have space and compatibility on their controllers to upgrade, and mebbe an extra controller to handle testing with the new drives, rather than a whole separate (expensive) box... Unlucky systems need the builder to make sure enough drives are bought/contracted from the supplier up front that the system's lifetime can be extended a couple of times, with headroom for some extra projected drive failure rate. Some government or big-client purchase might eat up the entirety of a supplier's old stock of some drive - one can't safely assume that a disk manufacturer will have old expensive drives in stock forever (though, given the massive rates they charge for non-contract old drives, they do seem to like to stock a fair number).
All this ramble to say: There are certain systems that require massively reliable disks.
As a gamer and programmer (specializing in profiling and optimization), I keep my home rig plenty big and fast, but I certainly _don't_ need that kind of crazy expensive ****.
There is a need for some reliability, but beyond getting nailed by a bad run from a manufacturer (which can happen to the best of them at any time), making sure one has decent backups matters more than raw disk reliability. Secondly, if disk reliability actually matters (there is always some risk in restoring from backup, depending on how doubly - or N-times - redundant one's backup hardware and software is), then RAID systems, journalling file systems, and snapshots (e.g. the 'restore previous versions' feature in Windows) will reduce how often one needs to restore from backup.
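To put a rough number on the 'N-times redundant' point - a toy calculation, assuming each copy fails independently with the same made-up probability (correlated failures, like a power event taking out co-located hardware, break this assumption badly):

    # Chance of losing *every* copy, assuming independent failures.
    p_single_loss = 0.05  # made-up annual chance of losing one copy

    for n_copies in range(1, 5):
        print(f"{n_copies} copies: P(all lost) = {p_single_loss ** n_copies:.6f}")
    # 1 copies: 0.050000
    # 2 copies: 0.002500
    # 3 copies: 0.000125
    # 4 copies: 0.000006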
Finally:
For the love of all things pure and beautiful in this world: RAID is not a backup!
RAID does not protect against:
- too many drives failing at once (typically after a power outage hits drives that have rarely been restarted)
- operating-system-level file-system corruption
- drive-controller corruption
- user error
- virus/trojan/compromise
- admin sabotage
- ...
I have seen entire companies fail when their RAID system failed. I have repeatedly seen RAIDs fail - all of the above types of RAID failure are from my personal experience, not stuff I read somewhere. Source code has been lost at every job (and thus every firm) I have ever worked for, aggressive use of RAID, revision control, and backups notwithstanding. The places with better-managed and more redundant storage lost _less_ source, and survived the loss. I have personally had to restore the main source tree from my personal machine four times (at three different firms), having been left with the last surviving copy thanks to my own paranoia and distrust of other folks' 'backup skilz'... (I use the main system, of course, but I _also_ back up whatever I can to a separate machine under my own control - I don't do off-site backups without permission, though.)
Mumble - to be clear, I am not a 'disks guy', a 'security guy' or a hacker - I'm just a regular coder with slightly more gray hairs than some of the managers I've worked with.
Your disks and mine will both fail.
A plan appropriate to the value of your data and your desire to maintain the system (most take a certain amount of babysitting) is ideal, but... just...
Have a plan.
dfj
PS: My home rigs and servers have tended to hold 10-15 disks in total over the last 15 years - hence, I've lost a lot of disks. Altogether, I've only had to play admin at a couple of places (as the dev who could, when they didn't have one yet), so I'm pretty sure I've handled at most 250 disks? I'm not a professional at disk stuff, just babbling about what I've seen and what I think.