Ah yes, the reminder to perform a full backup at least once a year
After ~14 years of service, one of the WD Green drives failed. It had shown a few bad sectors for years, but since the count didn't increase, I didn't replace the drive immediately. A few hours ago it started reporting I/O errors as well.
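Checking those counters is quick with smartctl, assuming smartmontools is present on the box (here /dev/sda just stands in for the suspect drive):

smartctl -A /dev/sda    # dump the SMART attribute table
# The values worth watching: attribute 5 (Reallocated_Sector_Ct),
# 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable).
# A rising raw value on any of them is the cue to swap the drive.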
Since the situation was foreseeable, I had already bought two replacement drives. The first one is now swapped in, and the 9 TB RAID5 will take a bit more than a day to rebuild.
root@DiskStation:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdc3[5] sda3[0] sdd3[4] sdb3[1]
      8776632768 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
      [>....................] recovery = 0.0% (280704/2925544256) finish=2605.1min speed=18713K/sec

md1 : active raid1 sdc2[2] sda2[0] sdb2[1] sdd2[3]
      2097088 blocks [4/4] [UUUU]

md0 : active raid1 sdc1[2] sda1[0] sdb1[1] sdd1[3]
      2490176 blocks [4/4] [UUUU]

unused devices: <none>
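For reference: on a Synology, DSM's Storage Manager drives the disk replacement, but under the hood it presumably maps to plain mdadm, roughly like this (an illustrative sketch using the partition names from the output above, not the exact commands DSM runs):

# Mark the failing member as faulty and take it out of the array:
mdadm --manage /dev/md2 --fail /dev/sdc3
mdadm --manage /dev/md2 --remove /dev/sdc3
# ...swap the physical drive and partition it like the other members...
# Add the new partition; md starts rebuilding onto it automatically:
mdadm --manage /dev/md2 --add /dev/sdc3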
A few minutes after the first check, the estimate had already dropped by roughly 1000 minutes. I'll see how long it really takes in the end.
root@DiskStation:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdc3[5] sda3[0] sdd3[4] sdb3[1]
      8776632768 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
      [>....................] recovery = 0.3% (9309952/2925544256) finish=1584.8min speed=30667K/sec

md1 : active raid1 sdc2[2] sda2[0] sdb2[1] sdd2[3]
      2097088 blocks [4/4] [UUUU]

md0 : active raid1 sdc1[2] sda1[0] sdb1[1] sdd1[3]
      2490176 blocks [4/4] [UUUU]

unused devices: <none>
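If the rebuild crawls, the speed is capped by two kernel knobs (KB/s per device); raising the minimum trades foreground I/O latency for a faster rebuild. These are the stock md tunables, though I haven't checked what DSM sets them to:

cat /proc/sys/dev/raid/speed_limit_min   # guaranteed minimum, default 1000
cat /proc/sys/dev/raid/speed_limit_max   # upper bound, default 200000
# Let the rebuild claim more bandwidth (regular I/O will feel it):
echo 50000 > /proc/sys/dev/raid/speed_limit_min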
After Christmas I will replace the fourth drive as well, since it also reported a bad sector today. Having swapped out two of four drives is somewhat okay-ish for a four-drive RAID5 with no hot spare. I don't expect the remaining two drives to fail completely within such a short period that I couldn't replace at least one of them (and let the RAID rebuild, of course!).
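The scary part of a degraded RAID5 is tripping over a latent bad sector on one of the surviving drives during the rebuild, which is exactly what periodic scrubbing is for. A minimal sketch using the standard md sysfs interface:

# Kick off a consistency check; md reads every sector and repairs
# unreadable blocks from parity while redundancy is still intact:
echo check > /sys/block/md2/md/sync_action
# Progress appears in /proc/mdstat; mismatches are counted here:
cat /sys/block/md2/md/mismatch_cnt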
Luckily I made my full backup just a few days ago. 😁