It’s once again time to reconfigure our ever-growing storage arrays. With about two days’ worth of backup capacity remaining, I have purchased new backup storage and am in the process of copying everything over.
Once the initial data transfer is complete, I will take the Cloud offline by enabling “Maintenance Mode” and run one final quick sync. Then I will reconfigure the Cloud to use one backup copy as the primary data store. That lets me disable Maintenance Mode and resume public Cloud operations with one backup copy serving as primary and the other remaining a true backup.
With the Cloud operating out of backup storage, I can reconfigure the primary Cloud data array without worrying about redundancy or a ticking clock. After the primary array is rebuilt, I can sync it with the temporary primary copy and recycle the freed disks back into the primary array for added redundancy.
The tl;dr is: our service might be spotty this weekend. We will do our best to keep the platform online and stable but there are times when we will need to take things offline for short periods. Please pardon these outages while we work to expand our capabilities and improve our services.
The upgrade is about 75% complete and the platform is stable. Several compatibility hiccups prevented configuring the main storage arrays the way I wanted, but I’ve ordered a new RAID controller that should do the trick once it arrives. In the meantime we’re running the Cloud out of backup storage. Installing the new controller will involve a short changeover window, during which the Cloud will be briefly unavailable. Thanks for sticking with us, and keep checking back for more updates!
I received the RAID parts and will be installing them this week. Expect downtime in the evenings all this week while I get the hardware configured and work out the kinks.
The upgrade is complete and testing has passed. The Cloud has been off backup storage and running out of a new primary storage array since this past weekend, but I kept a close eye on everything for the first few days before calling it complete. The new RAID controller and drives are working well with no signs of trouble, and the new configuration is more robust than before: the primary storage array can withstand multiple simultaneous drive failures without losing any user-supplied data, and the new backup array adds yet another layer of redundancy. The OS drive on this server was also upgraded to a unit roughly ten times faster than the previous configuration, with sub-0.5 ms random seek times and up to 3 GB/s of disk transfer.
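The post doesn’t name the RAID level, but surviving multiple simultaneous drive failures is characteristic of RAID 6, which reserves two disks’ worth of capacity for parity. Assuming a RAID 6 layout with a hypothetical disk count and size (no numbers are given in the post), the trade-off works out as:

```shell
# RAID 6: usable capacity is (n - 2) disks' worth, and any two disks in
# the array can fail at the same time without data loss.
# DISKS and SIZE_TB are hypothetical -- the post gives no figures.
DISKS=6
SIZE_TB=4
echo "usable capacity: $(( (DISKS - 2) * SIZE_TB )) TB"
echo "survivable simultaneous drive failures: 2"
```

The two parity disks are the cost of that redundancy: a six-disk array of 4 TB drives yields 16 TB usable rather than 24 TB.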
This also frees up hardware that can eventually be installed into other HonestRepair servers, further enhancing our capabilities. Before I get around to that, I think I want to install an onboard LCD touch screen on our DNS/DHCP server, and possibly write a How-To about it. Stay tuned!