I had to replace one of the disks in my QNAP RAID, and after a hot swap the rebuild started as expected. After running for a full day I was shocked to see it was only halfway done ... painfully slow!
After some digging I found a way to speed this up ...
First, of course, check what the current speed is:
cat /proc/mdstat
The resulting output told me it would take another 3243 minutes ... 54 hours!
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sdc3[10] sda3[8] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdb3[9]
      17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]
      [==========>..........]  recovery = 50.1% (1470051456/2928697536) finish=3243.0min speed=7495K/sec

md256 : active raid1 sdc2[2](S) sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[2] sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdb4[1]
      458880 blocks [8/8] [UUUUUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdc1[2] sda1[0] sdf1[7] sdg1[6] sdh1[5] sdd1[4] sde1[3] sdb1[1]
      530048 blocks [8/8] [UUUUUUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
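The knobs that govern this speed live under /proc/sys/dev/raid, which the QNAP firmware exposes just like any other Linux md setup. You can read the current values first (both are in KB/s per device; the kernel default minimum is usually as low as 1000, which explains the crawl):

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max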
With the following line I raised the minimum speed:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
The estimated time is now 633 minutes, so roughly 10 hours.
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sdc3[10] sda3[8] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdb3[9]
      17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]
      [==========>..........]  recovery = 50.5% (1480883716/2928697536) finish=633.2min speed=38104K/sec

md256 : active raid1 sdc2[2](S) sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[2] sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdb4[1]
      458880 blocks [8/8] [UUUUUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdc1[2] sda1[0] sdf1[7] sdg1[6] sdh1[5] sdd1[4] sde1[3] sdb1[1]
      530048 blocks [8/8] [UUUUUUUU]
      bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
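A few related commands that may help as well. This is a sketch rather than output from my own box: the 400000 for speed_limit_max is just an example value, and watch may not be present on every QNAP firmware.

# optionally raise the ceiling too (value in KB/s per device, example figure)
echo 400000 > /proc/sys/dev/raid/speed_limit_max

# follow the rebuild live, refreshing every 5 seconds (if watch is installed)
watch -n 5 cat /proc/mdstat

# when the rebuild is done, put the minimum back to the kernel default
echo 1000 > /proc/sys/dev/raid/speed_limit_min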