Everything has been stable for a good period now. We'll still keep an eye on this compute node, but as everything is running as it should, we're closing this incident as resolved. We apologise for the inconvenience caused.
Feb 11, 17:50 GMT
We've restarted the compute node and normal service has resumed. We'll keep an eye on it and investigate the cause of the high load if it recurs.
Feb 11, 17:01 GMT
We've fixed the API and Civo.com, but are still working on compute-13.
Feb 11, 15:36 GMT
We think this is actually a single compute node in our cluster that happens to host some of our services (and client services). We're trying to get it to respond, but it appears to have been slammed by something hogging all of its CPU cores.
Feb 11, 15:00 GMT
Our alerting has just picked up that Civo.com is reporting errors, which are actually coming from our API. We're looking into it and will update when we know more. We've tested a few client instances and they are OK, although we have had some reports of odd behaviour on other internal instances.
Feb 11, 14:57 GMT