Incidents | CilixCloud
Incidents reported on status page for CilixCloud
https://status.cilix.cloud/

WAF Service Availability https://status.cilix.cloud/incident/744286 Wed, 15 Oct 2025 20:37:00 -0000 https://status.cilix.cloud/incident/744286#269563829c7417181642d7291b4a7bf298ed7b18672cf7df3b18f7000805c6fd
#RFO#
##Date and Time of Incident##
15-Oct-2025, 20:28 - 20:49
##Timeline of Events##
- 20:28: Availability alerts triggered, indicating a loss of availability on Web Application Firewall cluster 1.
- 20:32: Technical teams begin fault investigation.
- 20:40: Issue identified as a locked-up worker node.
- 20:45: Worker node removed from WAF cluster.
- 20:49: Service availability returns to normal.
##Summary of Impact##
One of the WAF worker nodes in cluster 1 experienced a software failure that stopped the TCP session state table from synchronising correctly across the WAF cluster. This, in turn, led to the cluster falling out of synchronisation and failing to pass traffic correctly. Services routing through the affected WAF cluster were unavailable for 21 minutes.
##Resolution Status##
We have multiple redundancies in place to prevent a total outage should a worker node fail entirely; however, in this case, a worker node experienced an unusual partial failure: it continued to respond to health checks as healthy while it was unable to pass traffic. We have engaged our WAF vendor's TAC and will deploy remedial fixes as soon as they are made available. As a precaution, we will be reloading all worker nodes overnight. No downtime is expected from this change.

WAF Service Availability https://status.cilix.cloud/incident/744286 Wed, 15 Oct 2025 19:49:00 -0000 https://status.cilix.cloud/incident/744286#1af03ef9ab6be097b4d5197fa364c644dd4c06aedf4776077de200fead070f3e Services are now back online.
Web Application Firewall (WAF) recovered https://status.cilix.cloud/ Wed, 15 Oct 2025 19:48:26 +0000 https://status.cilix.cloud/#7ec5498835c3cb6d7ed954f8becb7a309955925b1c0be6f6d4cc8f654ad3feb6 Web Application Firewall (WAF) recovered
WAF Service Availability https://status.cilix.cloud/incident/744286 Wed, 15 Oct 2025 19:42:00 -0000 https://status.cilix.cloud/incident/744286#d136ce697763457c233ab395125f6ef88318ecdc5dfe0f84d65668a20791e4a5 We have identified the issue and are working to bring the services back online.
WAF Service Availability https://status.cilix.cloud/incident/744286 Wed, 15 Oct 2025 19:32:00 -0000 https://status.cilix.cloud/incident/744286#a4ae49ddd1b57e2b6c3a8cf840c53f7aea065ba4477727f650ddccc4b4700d21 We are investigating availability issues with our Web Application Firewall.
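The partial failure described in the RFO above, a node answering health checks while unable to pass traffic, is the classic gap between a liveness check and an end-to-end check. The Python below is a minimal illustrative sketch of the latter, not CilixCloud's actual tooling: the hostnames, port, probe path and expected body are all hypothetical assumptions.

```python
"""Illustrative end-to-end health probe for a WAF worker node.

Assumptions (hypothetical, not CilixCloud's real endpoints):
  - each worker can be reached directly on port 8443,
  - a synthetic origin behind the WAF serves /healthz with body "ok".
A node that merely accepts connections but cannot forward traffic will
fail this probe, unlike a simple "is the port open" style check.
"""
import ssl
import http.client

WORKERS = ["waf-worker-1.example.net", "waf-worker-2.example.net"]
TIMEOUT_S = 3          # a hung node should fail fast, not hold the probe open
EXPECTED_BODY = b"ok"  # response the synthetic origin is expected to return


def probe(worker: str) -> bool:
    """Return True only if a full request/response cycle succeeds via this worker."""
    ctx = ssl.create_default_context()
    conn = http.client.HTTPSConnection(worker, 8443, timeout=TIMEOUT_S, context=ctx)
    try:
        conn.request("GET", "/healthz", headers={"Host": "probe.example.net"})
        resp = conn.getresponse()
        body = resp.read(64)
        # Require a real, correct answer -- not just an open socket or a 200.
        return resp.status == 200 and body.strip() == EXPECTED_BODY
    except (OSError, http.client.HTTPException):
        return False
    finally:
        conn.close()


if __name__ == "__main__":
    for w in WORKERS:
        state = "healthy" if probe(w) else "UNHEALTHY - remove from rotation"
        print(f"{w}: {state}")
```

The point of the sketch is simply that the probe exercises the forwarding path end to end, so a worker in the "answers OK but cannot forward" state described above would be taken out of rotation.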
Web Application Firewall (WAF) went down https://status.cilix.cloud/ Wed, 15 Oct 2025 19:27:12 +0000 https://status.cilix.cloud/#7ec5498835c3cb6d7ed954f8becb7a309955925b1c0be6f6d4cc8f654ad3feb6 Web Application Firewall (WAF) went down
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Tue, 14 Oct 2025 14:55:53 +0000 https://status.cilix.cloud/#540e378cb667b881be2ebb0d0e8e98d26b7e49a8d8d612c21f7efd5e1f8707cc SMTP Relay 1 (London) recovered
SMTP Relay 1 (London) went down https://status.cilix.cloud/ Tue, 14 Oct 2025 03:24:12 +0000 https://status.cilix.cloud/#540e378cb667b881be2ebb0d0e8e98d26b7e49a8d8d612c21f7efd5e1f8707cc SMTP Relay 1 (London) went down
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Tue, 14 Oct 2025 00:51:30 +0000 https://status.cilix.cloud/#576e43b7128705a2a0e12eec35090815e010a557fb31eceb478df9ebf71d2090 Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Mon, 13 Oct 2025 23:09:51 +0000 https://status.cilix.cloud/#576e43b7128705a2a0e12eec35090815e010a557fb31eceb478df9ebf71d2090 Leased Lines (Ethernet Circuits) went down
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Mon, 13 Oct 2025 15:28:58 +0000 https://status.cilix.cloud/#9f4be7374cd39a873c5b0def963ca4a1836e80d3efbd82be036d5e3ee446fbab SMTP Relay 1 (London) recovered
SMTP Relay 1 (London) went down https://status.cilix.cloud/ Mon, 13 Oct 2025 14:58:57 +0000 https://status.cilix.cloud/#9f4be7374cd39a873c5b0def963ca4a1836e80d3efbd82be036d5e3ee446fbab SMTP Relay 1 (London) went down
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Fri, 10 Oct 2025 04:24:31 +0000 https://status.cilix.cloud/#4dbc812fd5e5df2a187008209320ffdd03b0f1a2b2602b7ce43683cadba252cd Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Fri, 10 Oct 2025 04:24:16 +0000 https://status.cilix.cloud/#3d7d3b72b4ace7780742bae3fdfa29eab2b998744ec77315b8299119f4f32057 Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Fri, 10 Oct 2025 04:23:51 +0000 https://status.cilix.cloud/#a7b22a1ec40ab86207dfb977b1e6a42ed9454ed9b759af4cf8e529564f3a353f Leased Lines (Ethernet Circuits) recovered
LON2: Border Router Maintenance https://status.cilix.cloud/incident/740414 Fri, 10 Oct 2025 04:00:04 -0000 https://status.cilix.cloud/incident/740414#3a72299b1fa0560244054bf7b1510e4f08d6e21169b7a610167e8afe8b51dcb3 Maintenance completed
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Fri, 10 Oct 2025 03:36:57 +0000 https://status.cilix.cloud/#a7b22a1ec40ab86207dfb977b1e6a42ed9454ed9b759af4cf8e529564f3a353f Leased Lines (Ethernet Circuits) went down
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Fri, 10 Oct 2025 03:36:47 +0000 https://status.cilix.cloud/#3d7d3b72b4ace7780742bae3fdfa29eab2b998744ec77315b8299119f4f32057 Leased Lines (Ethernet Circuits) went down
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Fri, 10 Oct 2025 03:36:37 +0000 https://status.cilix.cloud/#4dbc812fd5e5df2a187008209320ffdd03b0f1a2b2602b7ce43683cadba252cd Leased Lines (Ethernet Circuits) went down
LON2: Border Router Maintenance https://status.cilix.cloud/incident/740414 Fri, 10 Oct 2025 02:00:04 -0000 https://status.cilix.cloud/incident/740414#0e9be4633a0b99cc4573fb595a5008edbdc856eb136e60ef5973c04d6b8e95b9
##What's Happening?##
In light of the recent DDoS attacks that have caused major disruption to several large networks, we will be making configuration, topology and capacity changes to our routing infrastructure in our London 2 location. These changes will significantly enhance our on-net DDoS mitigation capacity and will minimise the effects of any congestion in the event of a large DDoS attack. These changes are scheduled for 10-Oct-25 from 3 to 5 AM BST.
##What's the impact?##
We will divert traffic away from our London 2 border routers before maintenance begins, meaning there should be little to no impact for most workloads. Any impact experienced should be brief (less than 1 minute) while traffic is rerouted over available paths.
##Updates##
Updates will be provided on this maintenance note as required.

Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Thu, 09 Oct 2025 07:45:00 -0000 https://status.cilix.cloud/incident/738761#d462e097a6b02c6ef1d93324a7767889df67bc1e2125cd2e211b74a19e2e8b17 The provider has confirmed this disruption was the result of a targeted DDoS attack against their core network. They have taken measures to filter out the malicious traffic and have seen full service restoration.
Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Mon, 06 Oct 2025 21:23:00 -0000 https://status.cilix.cloud/incident/738761#a6493a5918c5a4a1c5ef691c5831f33ca7f7f4a00958b0edb527399974132aaf Services have been stable for the past 40 minutes. We will follow up with an RFO shortly.
Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Mon, 06 Oct 2025 20:40:00 -0000 https://status.cilix.cloud/incident/738761#569c1262f7883866a78ffc76bc2314f7ce71d3048a86d0cd6faf33dc8b5f7cd5 We are seeing the majority of broadband sessions back online.
Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Mon, 06 Oct 2025 19:25:00 -0000 https://status.cilix.cloud/incident/738761#9818d4c6014c8f59de4192e6ae5d9c2fc9c01d3dd8e0538c82ec39f6da077c3f The provider in question has acknowledged the issue and is working towards a resolution.
Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Mon, 06 Oct 2025 19:00:00 -0000 https://status.cilix.cloud/incident/738761#14625414c5648ea3a52265d73b1925bed9156ac78d6a7614b3b555a6efd988bc We have traced the source of this fault to an upstream provider. We have engaged with the L2TP provider for further information.
Broadband: L2TP Provider Issues https://status.cilix.cloud/incident/738761 Mon, 06 Oct 2025 18:42:00 -0000 https://status.cilix.cloud/incident/738761#0c4f4dfcdee82e54de6fca664d280241456a918633c3ef3510b8123d447a6406 We are aware of a fault affecting L2TP dynamic broadband services starting at 19:42. L2TP BGP sessions are currently flapping, resulting in sporadic broadband connectivity. Ethernet Circuits are currently unaffected. We will update this page as more information is available.
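The 18:42 update above describes L2TP BGP sessions "flapping". As a purely illustrative sketch, and not CilixCloud tooling, the snippet below shows one common way to quantify that: count session state transitions inside a sliding window and alert once a threshold is crossed. The peer name, event stream and threshold values are hypothetical.

```python
"""Toy BGP session flap detector: alert when a peer's state changes too often.

The event stream here is hard-coded for illustration; in practice the
transitions would come from router syslog or streaming telemetry.
"""
from collections import deque

WINDOW_S = 300        # sliding window: 5 minutes
FLAP_THRESHOLD = 4    # state changes within the window that count as "flapping"


class FlapDetector:
    def __init__(self) -> None:
        self.events: dict[str, deque] = {}

    def record(self, peer: str, timestamp: float) -> bool:
        """Record one up/down transition; return True if the peer is now flapping."""
        q = self.events.setdefault(peer, deque())
        q.append(timestamp)
        while q and timestamp - q[0] > WINDOW_S:   # drop transitions outside the window
            q.popleft()
        return len(q) >= FLAP_THRESHOLD


if __name__ == "__main__":
    detector = FlapDetector()
    # Hypothetical transitions (peer, seconds since start) for an L2TP BGP session.
    sample = [("l2tp-peer-1", t) for t in (0, 40, 95, 160, 210)]
    for peer, ts in sample:
        if detector.record(peer, ts):
            print(f"{ts:>4}s  {peer}: flapping ({FLAP_THRESHOLD}+ transitions in {WINDOW_S}s)")
        else:
            print(f"{ts:>4}s  {peer}: transition logged")
```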
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Thu, 02 Oct 2025 01:23:45 +0000 https://status.cilix.cloud/#027607a38af90b7acabf8d01c05a185dc805bb20e28f1914bfc5025a09334b31 Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Wed, 01 Oct 2025 23:04:18 +0000 https://status.cilix.cloud/#027607a38af90b7acabf8d01c05a185dc805bb20e28f1914bfc5025a09334b31 Leased Lines (Ethernet Circuits) went down
LON1: Transit Provider Unplanned Maintenance https://status.cilix.cloud/incident/731518 Thu, 25 Sep 2025 04:00:12 -0000 https://status.cilix.cloud/incident/731518#2d7311a656211f5c4974b14a7533c9750e55dc3c37319c4f686695b5d28607be Maintenance completed
LON1: Transit Provider Unplanned Maintenance https://status.cilix.cloud/incident/731518 Thu, 25 Sep 2025 00:00:00 -0000 https://status.cilix.cloud/incident/731518#516c543f674d856dafe48616e8837653831f3a11c56cb6dd18f3f5ba637a80a5 Our transit provider in our London 1 location has informed us that they will be performing urgent maintenance on their infrastructure tomorrow morning from 1 AM to 5 AM BST. This maintenance will affect our uplinks with AS13213. We have implemented traffic shaping measures to steer traffic away from LON1 and route it via LON2 while the maintenance is performed. We do not expect disruption from the provider maintenance, as traffic has been gracefully routed via our London 2 location. During the above times, our network will be operating with reduced redundancy.
Scheduled Maintenance: Web Application Firewall https://status.cilix.cloud/incident/725069 Sat, 20 Sep 2025 05:00:58 -0000 https://status.cilix.cloud/incident/725069#d5a42b3b1d547e1323e6c92e6a94ffba03f376ce584abd6f95d0dffe72d3dc09 Maintenance completed
Scheduled Maintenance: Web Application Firewall https://status.cilix.cloud/incident/725069 Sat, 20 Sep 2025 02:00:58 -0000 https://status.cilix.cloud/incident/725069#fbaafdacaef57548aecf970c37472c8c4e3ef3c5e9d8d1fcd3c9833093609340 We will be performing code upgrades on our Web Application Firewall cluster. Traffic will be moved away from nodes while upgrades are applied. Applications routing through the WAF cluster may experience up to 5 minutes of disruption.
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Wed, 17 Sep 2025 13:57:14 +0000 https://status.cilix.cloud/#1834ed8a94726bf53e5ded28bf3af6c36c9b5f86e7077c11692de3bc969dfb7f Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Wed, 17 Sep 2025 13:20:37 +0000 https://status.cilix.cloud/#1834ed8a94726bf53e5ded28bf3af6c36c9b5f86e7077c11692de3bc969dfb7f Leased Lines (Ethernet Circuits) went down
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Tue, 09 Sep 2025 20:24:42 +0000 https://status.cilix.cloud/#8b5cdc3ecc1ca3ef4c1967476b647698994ea1440f05f2f2d04d1608899ef5f6 Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Tue, 09 Sep 2025 12:25:23 +0000 https://status.cilix.cloud/#8b5cdc3ecc1ca3ef4c1967476b647698994ea1440f05f2f2d04d1608899ef5f6 Leased Lines (Ethernet Circuits) went down
Network Maintenance: LON2 https://status.cilix.cloud/incident/717593 Sat, 06 Sep 2025 07:00:08 -0000 https://status.cilix.cloud/incident/717593#2384f265259da85d53c71cd498d60b629b63161e74d6a273dd3650a4840c17d1 Maintenance completed
LON2 Subscriber Services recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 06:17:28 +0000 https://status.cilix.cloud/#07ba71297facdc082875202acf0d3314bc4450739099d3f1c3a653c9f390e05f LON2 Subscriber Services recovered
LON2 Subscriber Services went down https://status.cilix.cloud/ Sat, 06 Sep 2025 05:59:45 +0000 https://status.cilix.cloud/#07ba71297facdc082875202acf0d3314bc4450739099d3f1c3a653c9f390e05f LON2 Subscriber Services went down
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 03:48:58 +0000 https://status.cilix.cloud/#913924ffa73b9400ab3c07cfbe0312dbfb05a05fef970a108f7697882491657f Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 03:48:47 +0000 https://status.cilix.cloud/#5f9fe12a05af65f34408b3d9738cd1320e81609ddc6a1ef3d08f92e473b9bb15 Leased Lines (Ethernet Circuits) recovered
Leased Lines (Ethernet Circuits) recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 03:48:20 +0000 https://status.cilix.cloud/#713c43a0d3af07d0daf021583d813070a2533c937b9d6ddc85cfe73fcf267bb4 Leased Lines (Ethernet Circuits) recovered
LON2 - Border recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 03:46:39 +0000 https://status.cilix.cloud/#a40c0d61000d227ab351a86b4e52aee3d8c9b408c4710d112f5e6fa46408de2e LON2 - Border recovered
London 2 Data Centre (LON2) recovered https://status.cilix.cloud/ Sat, 06 Sep 2025 03:46:39 +0000 https://status.cilix.cloud/#a40c0d61000d227ab351a86b4e52aee3d8c9b408c4710d112f5e6fa46408de2e London 2 Data Centre (LON2) recovered
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Sat, 06 Sep 2025 03:40:10 +0000 https://status.cilix.cloud/#713c43a0d3af07d0daf021583d813070a2533c937b9d6ddc85cfe73fcf267bb4 Leased Lines (Ethernet Circuits) went down
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Sat, 06 Sep 2025 03:40:10 +0000 https://status.cilix.cloud/#5f9fe12a05af65f34408b3d9738cd1320e81609ddc6a1ef3d08f92e473b9bb15 Leased Lines (Ethernet Circuits) went down
LON2 - Border went down https://status.cilix.cloud/ Sat, 06 Sep 2025 03:39:59 +0000 https://status.cilix.cloud/#a40c0d61000d227ab351a86b4e52aee3d8c9b408c4710d112f5e6fa46408de2e LON2 - Border went down
London 2 Data Centre (LON2) went down https://status.cilix.cloud/ Sat, 06 Sep 2025 03:39:59 +0000 https://status.cilix.cloud/#a40c0d61000d227ab351a86b4e52aee3d8c9b408c4710d112f5e6fa46408de2e London 2 Data Centre (LON2) went down
Leased Lines (Ethernet Circuits) went down https://status.cilix.cloud/ Sat, 06 Sep 2025 03:39:52 +0000 https://status.cilix.cloud/#913924ffa73b9400ab3c07cfbe0312dbfb05a05fef970a108f7697882491657f Leased Lines (Ethernet Circuits) went down
Network Maintenance: LON2 https://status.cilix.cloud/incident/717593 Sat, 06 Sep 2025 01:00:08 -0000 https://status.cilix.cloud/incident/717593#c4e31cecf7b966324adbb7c5a567087e236949f3ab25142dd097806b2f240282 We will be performing maintenance on our London 2 Border Routers on Saturday, 6 September 2025, from 0200 to 0800. A brief disruption to traffic will be seen while we drop BGP sessions to the affected border routers.
Planned Maintenance: Sentinel Web Application Firewall Capacity Enhancements https://status.cilix.cloud/incident/705964 Sat, 16 Aug 2025 04:00:00 -0000 https://status.cilix.cloud/incident/705964#63b829fe68138606e3235fe848b02514bcb99517812c6537d174f50133fec649 Maintenance completed
Planned Maintenance: Sentinel Web Application Firewall Capacity Enhancements https://status.cilix.cloud/incident/705964 Sat, 16 Aug 2025 00:00:00 -0000 https://status.cilix.cloud/incident/705964#4a1d49dd2061557af73b4e73f01795ec614ffb5892a70c4decb7e6e284c2c60e
##What's Happening?##
On 16-Aug-25, we will be making changes to our Sentinel Web Application Firewall system to increase its request processing capacity significantly. These changes will involve adding additional compute capacity to our load-balancing and scrubbing infrastructure components. Works are scheduled to begin at 1 AM BST and are expected to be complete by 5 AM BST.
##What's the impact?##
Traffic that flows through the Sentinel WAF will experience up to 4 consecutive instances of disruption, each lasting up to 5 minutes. During each instance, customer services using the WAF will be unavailable. We have taken steps to minimise disruption where possible; however, due to the nature of the changes being made, we require all traffic to be halted while configuration is synchronised between infrastructure components.
##Updates##
Updates will be provided as required.

Planned Maintenance: LON1 - Layer 3 Services Migration https://status.cilix.cloud/incident/619989 Sat, 09 Aug 2025 06:00:00 -0000 https://status.cilix.cloud/incident/619989#fc2d744dc18b2417a8c1968a2c813e90821870646e86da6b403b4225fe61624b Maintenance completed
Planned Maintenance: LON1 - Layer 3 Services Migration https://status.cilix.cloud/incident/619989 Sat, 09 Aug 2025 03:00:00 -0000 https://status.cilix.cloud/incident/619989#245acbe15d39818d18585e3b979d1203151b4abef0ccd3ccf95f9d4bef3aa9cb
##What's Happening?##
On 09-Aug-25, we will be performing changes on our network infrastructure to migrate Layer 3 services to our new core switches. The changes affect customers with services in racks A2-A4. These changes are part of a project to increase our core network capacity from 12.8T to 33.6T, while enabling us to deliver a wider range of services. Works are scheduled to begin at 2AM BST.
##What's the impact?##
Traffic flow may be briefly interrupted while we drop Layer 3 services on the outgoing core switches and bring Layer 3 services up on the new core switches.
##Updates##
Updates will be provided under the following maintenance note: https://status.as215638.net/maintenance/619982

LON2: LNS Configuration Changes https://status.cilix.cloud/incident/604720 Sat, 28 Jun 2025 05:00:00 -0000 https://status.cilix.cloud/incident/604720#bd4f55aa933f69e9eab0468275ba65686438a7a9827d96017ccfd485070b870c Maintenance completed
LON2: LNS Configuration Changes https://status.cilix.cloud/incident/604720 Sat, 28 Jun 2025 02:00:25 -0000 https://status.cilix.cloud/incident/604720#b616b2d3178b1fb8326ae4b43b13929406c13e80678fa632484f70b24c32449b
##What's Happening?##
On 28-June-25, we will be performing configuration changes on the following L2TP Network Servers (LNS). These changes are to enable a number of technologies used for providing IPv6 over broadband sessions.
- lns1.lon2.as215638.net
- lns2.lon2.as215638.net
Any active sessions will be dropped when we apply the configuration changes, as the LNS' will require reloading. These works are scheduled to begin at 3AM.
##What's the impact?##
Existing broadband sessions will be terminated as the configuration changes are applied and the LNS appliance reloads. Any customers terminating on the above LNS' will experience a brief loss in internet connectivity while their session redials to an available LNS.
##Updates##
We will provide updates as required on this page. If you have any queries, please get in touch.
Regards,
CilixCloud Team

LON1: Transit Provider Maintenance https://status.cilix.cloud/incident/600732 Wed, 25 Jun 2025 01:00:00 -0000 https://status.cilix.cloud/incident/600732#1262ae3e4ee18f43606a9a4c0a60f36c31f1657d319ba2c5c386ef60ed2b4dd8 Maintenance completed
LON1: Transit Provider Maintenance https://status.cilix.cloud/incident/600732 Tue, 24 Jun 2025 23:00:12 -0000 https://status.cilix.cloud/incident/600732#7d2e8279920ac5840c43174c93843e299bf3b6a233bfc5dea3a5d6039885fcb4 Our transit provider in LON1 will be performing maintenance on the switches used to provide transit connectivity in LON1. The maintenance is scheduled to begin at 00:00 on 25-June-25. We do not expect there to be any disruption as traffic will automatically route via LON2.
LON2: Scheduled Maintenance https://status.cilix.cloud/incident/582823 Sat, 31 May 2025 05:00:14 -0000 https://status.cilix.cloud/incident/582823#d1b441232e0df5cbcc884cf0c91cb4e9be434a6f1f46dfc1beb3138706ea657e Maintenance completed
LON2: Scheduled Maintenance https://status.cilix.cloud/incident/582823 Sat, 31 May 2025 02:00:14 -0000 https://status.cilix.cloud/incident/582823#07dbb1aa0ff3d23b5a37b118973877f5366a3e479c3f2074932cfd52b83fd135
##What's Happening?##
On 31-May-25, we will be performing code upgrades on the following L2TP Network Servers (LNS).
* lns1.lon2.as215638.net
* lns2.lon2.as215638.net
Any active sessions will be dropped before the code upgrade starts. These works are scheduled to begin at 3AM.
##What's the impact?##
Existing broadband sessions will be terminated before the code upgrade begins. Any customers terminating on the above LNS' will experience a brief loss in internet connectivity while their session redials to an available LNS.
London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 20:21:00 -0000 https://status.cilix.cloud/incident/561065#56b84c6a043e0eab16a8b1b753115919c21e158a2532606d8bb662337c61b4e3 The on-site team has replaced the B-Feed PDU. The single-fed devices are now back online. We will continue to closely monitor the power availability in London 1. We will provide any updates as required on this incident.
London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 19:38:00 -0000 https://status.cilix.cloud/incident/561065#4611019bea3a4279552c03e118fccd8bed27c273727164db8ece6af7bbccde74 We are aware of an issue with the B-Feed PDU in rack A4. This issue is affecting single-fed devices in rack A4. The on-site team are currently investigating. Further updates to follow as required.
London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 18:25:00 -0000 https://status.cilix.cloud/incident/561065#ef0eecdf1b0391f6125d82f726e19c78772d68276b03f0def8a780d425d15782 The affected power feed is back online. We did not see any disruption to customer services; however, we have asked the power provider for a full RFO.
London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 18:00:00 -0000 https://status.cilix.cloud/incident/561065#4529ec2132fb036cc3d945e755964d9a2c69603f7acdc7d78c1e91e8a7f68fc8 We are aware of an issue in our London 1 facility, where we have lost a power feed. We are not experiencing any major disruption to services, as we have redundant power. A limited set of internal services is currently offline. We are working on bringing these services back online as soon as possible.
London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 13:01:00 -0000 https://status.cilix.cloud/incident/533490#db5951d465d78fb3bddcd97153239a8efd3e86de45ae101ed57cbacc942ff5e9 LINX have resolved the issues with the LON1 Route Servers. We are seeing all 220K prefixes being imported successfully.
London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 12:08:00 -0000 https://status.cilix.cloud/incident/533490#bbdb3a7c9edbbc8cc631aa2c32eabdb731407442b7e388c1b51886b55cad9a93 LINX have identified an issue with the route servers and are working on a resolution.
London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 11:30:00 -0000 https://status.cilix.cloud/incident/533490#a7d483228ae5b0e87dc1e8ea7fde4c03f58fb361d81198a82e73184b8981663c The London Internet Exchange (LINX) are having issues with their Route Servers on their LON1 peering LAN. As a result, we are not receiving any routes from the LON1 LAN’s route servers. Traffic is successfully being routed via alternative paths, meaning our network remains fully operational. We expect a resolution from LINX shortly.
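The 13:01 LINX update above confirms recovery by checking that roughly 220K prefixes are being imported from the route servers again. As a hedged illustration of that kind of check, not CilixCloud's monitoring and with a made-up data source, the snippet below compares the received-prefix count per peer against an expected floor and flags anything well below it.

```python
"""Illustrative check: flag BGP peers whose received-prefix count falls
well below an expected baseline (e.g. route servers that stop sending routes).

The counts below are hard-coded; in practice they would be polled from the
router via SNMP, NETCONF or a CLI/API collector.
"""

EXPECTED_FLOOR = {
    # peer name (hypothetical)      -> minimum healthy prefix count (assumed)
    "linx-lon1-route-server-1": 200_000,
    "linx-lon1-route-server-2": 200_000,
}


def check_received_prefixes(received: dict) -> list:
    """Return alert strings for peers importing fewer prefixes than expected."""
    alerts = []
    for peer, floor in EXPECTED_FLOOR.items():
        count = received.get(peer, 0)
        if count < floor:
            alerts.append(f"{peer}: only {count} prefixes received (expected >= {floor})")
    return alerts


if __name__ == "__main__":
    # Hypothetical snapshot during the incident: one route server sends nothing.
    snapshot = {"linx-lon1-route-server-1": 221_400, "linx-lon1-route-server-2": 0}
    for line in check_received_prefixes(snapshot) or ["all route-server sessions healthy"]:
        print(line)
```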
LON1: Dedicated Servers SVI Migration https://status.cilix.cloud/incident/524353 Sat, 08 Mar 2025 06:30:44 -0000 https://status.cilix.cloud/incident/524353#2e751c800344fd30b25717ddc8a9b1886281e0b02f6eaf447ca99acf7cbf5c50 Maintenance completed
LON1: Cisco Access Switch Firmware Upgrade https://status.cilix.cloud/incident/523766 Sat, 08 Mar 2025 06:30:00 -0000 https://status.cilix.cloud/incident/523766#79d3957ace1e53b938dfae91f29aae8b9f367a8b8b08a355c2ddb7b2faf2e4ff Maintenance completed
LON1: Border Router Code Upgrade https://status.cilix.cloud/incident/524355 Sat, 08 Mar 2025 06:30:00 -0000 https://status.cilix.cloud/incident/524355#c18ee73ef5c77d740f6eb3c82c22bfae34de61aca9a25d41fe4bc53bbd44762e Maintenance completed
LON1: Dedicated Servers SVI Migration https://status.cilix.cloud/incident/524353 Sat, 08 Mar 2025 03:30:44 -0000 https://status.cilix.cloud/incident/524353#f2ad1cbfa3aea980883752fd38ecdfae506ab370814d6bcfaa133e18b9eb9621
##What's Happening?##
We will migrate the SVIs used for self-service dedicated servers away from our core and onto a dedicated VC switch stack in rack A4. This is part of an ongoing project to migrate our network to a BGP-free, L2-free core, improving performance, reliability and scalability.
##What's the impact?##
There will be a brief disruption in connectivity while the SVI is dropped from the core switches and brought up on the new switches. This should last a maximum of 1 minute while our iBGP mesh reconverges.

LON1: Cisco Access Switch Firmware Upgrade https://status.cilix.cloud/incident/523766 Sat, 08 Mar 2025 03:30:00 -0000 https://status.cilix.cloud/incident/523766#1d9e3b799c7ee5a1e3549cd09769cc2a89d6aa0d35f83b2709892b53871780da
##What's Happening?##
We will be performing firmware upgrades on Cisco access switches in racks A3 & A4 to patch a recently disclosed CVE. These works are scheduled to take place on 08-March-2025, starting at 03:30 AM.
##What's the impact?##
Switches will continue to pass traffic while the new firmware is installed. Once complete, they will reload, which will take approximately 5 minutes. During this time, network connectivity will be unavailable. Switches will be updated sequentially to minimise disruption. Customers who take redundant Layer 3 connectivity from us will fail over to their backup uplinks automatically. We will shut down BGP sessions for customers who advertise their routes to us using BGP before starting the upgrade process. This will gracefully route traffic away from the affected uplinks.

LON1: Border Router Code Upgrade https://status.cilix.cloud/incident/524355 Sat, 08 Mar 2025 03:30:00 -0000 https://status.cilix.cloud/incident/524355#e4d5d4e8f3d769a6c471c1b993ae4b6958c47a0338d87b1d1232c0e504aa327e
##What's Happening?##
We will be performing code upgrades on our border routers in LON1.
##What's the impact?##
We will gracefully drain traffic away from the border routers before we begin the code upgrades. As such, we do not anticipate any disruption.

Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Sat, 22 Feb 2025 07:21:00 -0000 https://status.cilix.cloud/incident/516796#8a9291b34636b245d9b1c12622fa638e7e0572eb4fd8532424887a3ba429a64e We have observed an extended period of stability; all broadband services are online. We'll update this incident with further details as we receive them.
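The LON1: Cisco Access Switch Firmware Upgrade note above describes the sequence used to keep disruption low: route traffic off a switch, upgrade it, wait for the reload, verify, then move to the next one. The Python below is a minimal orchestration sketch of that pattern only; the device names and the drain/upgrade/verify helpers are hypothetical placeholders, not CilixCloud tooling.

```python
"""Minimal rolling-upgrade loop: one switch at a time, drained before and
verified after. All device interaction is stubbed out for illustration.
"""
import time

SWITCHES = ["access-a3-1.lon1.example.net", "access-a4-1.lon1.example.net"]


def drain(switch: str) -> None:
    """Placeholder: shut down BGP sessions / move traffic to alternate uplinks."""
    print(f"[{switch}] draining traffic")


def upgrade_and_reload(switch: str) -> None:
    """Placeholder: push firmware and trigger the reload (~5 minutes in practice)."""
    print(f"[{switch}] installing firmware and reloading")


def healthy(switch: str) -> bool:
    """Placeholder: confirm the switch is forwarding traffic and neighbours are up."""
    print(f"[{switch}] verifying forwarding and neighbour state")
    return True


def rolling_upgrade(switches: list) -> None:
    for switch in switches:           # strictly sequential: never two at once
        drain(switch)
        upgrade_and_reload(switch)
        time.sleep(1)                 # stand-in for the real reload wait
        if not healthy(switch):
            raise RuntimeError(f"{switch} failed post-upgrade checks; stopping the rollout")
        print(f"[{switch}] back in service")


if __name__ == "__main__":
    rolling_upgrade(SWITCHES)
```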
LON2: Power Maintenance https://status.cilix.cloud/incident/516682 Sat, 22 Feb 2025 06:00:00 -0000 https://status.cilix.cloud/incident/516682#0d7623277791fc91549ad75bb530fd76e24c582db13202dd0ab7df9b6d073a6f Maintenance completed
LON2: Power Maintenance https://status.cilix.cloud/incident/516682 Sat, 22 Feb 2025 01:00:00 -0000 https://status.cilix.cloud/incident/516682#4a0b4c17c5e20f0aa03b5e977e7063585d04123e18b0ae436224f01a8aff1e1c Our power provider in London 2 will perform maintenance on one of the redundant power connections in London 2. During this time, we will be operating with reduced power redundancy.
Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 15:34:00 -0000 https://status.cilix.cloud/incident/516796#a619c249d77321b21e3492bdb4e32b4090528bc11d592abd4b608cb956fab9f5 All broadband sessions are back online. We have not had confirmation that the issues on the provider's side have been resolved; as such, we are still considering broadband services to be at risk of further disruption.
Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 15:02:00 -0000 https://status.cilix.cloud/incident/516796#619d7b8f4dc8bf0fd1afeeaa00c31244bcff21760ab8549f36a76da8c4480769 We can see broadband sessions are now dialling via alternative paths, with the majority of sessions back online.
Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 14:44:00 -0000 https://status.cilix.cloud/incident/516796#c9510b1aad89a5662b5627cf9c12ae323498e5183074bcbdf8b8138d43b5f3ad We are aware of a network device failure in BT's Telehouse West (THW) POP. Sessions are diverting via alternative paths; however, continued issues in the provider's THW POP are causing disruption. We are working to isolate the THW NNIs and route L2TP traffic via alternative paths.
Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 14:40:00 -0000 https://status.cilix.cloud/incident/516796#a8df48bdec0737c1bda6fb620bc88ef41540f11e15de9a3080322c24c9fe5a18 We are aware of an issue with broadband sessions connecting via BT's Telehouse West POP. We are working with BT to isolate the root cause.
Network Maintenance: LON1 & LON2 https://status.cilix.cloud/incident/489709 Sat, 11 Jan 2025 07:00:44 -0000 https://status.cilix.cloud/incident/489709#aaee44fbfd6010b527db92f1fb181fcfc5d9b06e97072585f7599a63edcf278e Maintenance completed
Network Maintenance: LON1 & LON2 https://status.cilix.cloud/incident/489709 Sat, 11 Jan 2025 01:30:44 -0000 https://status.cilix.cloud/incident/489709#7b7b35e9a1d7c75832fca51f0e22b21b26e5eda72f85d41cbbfc9fb78717ace9
##What's Happening?##
We will be making changes to our core network to extend our EVPN fabric between our London 1 and London 2 facilities. These changes will bring VXLAN tunnels between LON1 and LON2 online, enabling HA, failover, and L2 communication for resources hosted at both sites. These works are scheduled to take place on 11-Jan-2025, starting at 01:30 AM.
##What's the impact?##
We do not anticipate downtime or disruption during these works; however, as changes are being made to our core network, service availability is at risk until the changes are complete.
##Updates##
Updates will be provided on this maintenance note as required.
BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 11:27:00 -0000 https://status.cilix.cloud/incident/489760#e7d0851844d2f2ddd7495741b4e291c9d69a5e5c1f3689dfab2ce197cc63204c
BT NOC has provided the following information:
_________________________________________________________
Date and Time of Incident: 02/01/2025, 10:04 AM
Summary of Impact: At 10:04 AM on 02/01/2025, a brief disruption occurred, impacting all services routed through the THW POP. Leased Line services were restored almost immediately. Broadband services experienced varied recovery times, depending on customer router PPP dialer frequency, with some requiring a manual reboot to re-establish connectivity.
Current Status: We are actively investigating the root cause of the incident. Preliminary analysis suggests a potential hardware fault. As part of our investigation, we are consulting with our vendor, Cisco, to determine the specific nature of the issue and identify any required remedial actions.
Resolution Status: Pending further investigation and vendor feedback.
We apologise for any inconvenience caused and are committed to resolving this issue as quickly as possible.

BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 10:06:00 -0000 https://status.cilix.cloud/incident/489760#bdcc2f752a0c3b9c7729d5e25bd5984251b0665daca9a069f4c101ebd6e31821 All affected services are back online and experienced approximately 90 seconds of disruption before failing over to alternative infrastructure. We are investigating the issue with BT and will provide updates as required.
BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 10:04:00 -0000 https://status.cilix.cloud/incident/489760#1bd124f5b7d26ced28ca2145e0cb861ad7e84291dbf1fffbbda1c99e99b7771d We are aware of connectivity issues for BT Wholesale broadband and leased line circuits terminating in Telehouse West. We are investigating these issues and will provide updates in due course.
LON2 Facility - Network Integration https://status.cilix.cloud/incident/474427 Sat, 14 Dec 2024 12:00:00 -0000 https://status.cilix.cloud/incident/474427#307fec1ce2b86f8ae2848f0343129735e16d38b70fbdf312d4bd8bf2171cf79c Maintenance completed
LON2 Facility - Network Integration https://status.cilix.cloud/incident/474427 Sat, 14 Dec 2024 01:00:00 -0000 https://status.cilix.cloud/incident/474427#707be83adf32a8ff016318d99290d9e7257dd3ebbcdf78c52f1dc1cdb35ce270 We will perform maintenance to integrate our new LON2 hosting facility with our network POPs and our LON1 hosting facility. As part of this work, we will bring DCIs (Data Centre Interconnects) online between each site to enable additional redundancy, resiliency, and flexibility for customer services. We do not expect there to be any disruption as a result of these works; however, as we will be making changes to our core routing equipment, there is a small risk of disruption. Updates will be provided as required.

LON1: Core Switch Lockup https://status.cilix.cloud/incident/450495 Fri, 25 Oct 2024 16:04:00 -0000 https://status.cilix.cloud/incident/450495#a8e3d671a84688875284f86d181297ef6771f1826329dc2bd218d75b6518afb3
##Executive Summary##
On 25th October at 09:15 GMT, monitoring systems detected a loss of network availability to several services connected to access switches in a Virtual Chassis configuration, in racks A2, A3 and A4, triggering a major incident response. Upon investigation, we found that the pair of access switches had moved into a ‘soft-lockup’ state, meaning the switches were actively advertising their availability to carry traffic but were unable to do so. To rectify the situation, our on-site team manually bypassed the affected access switches, which brought the affected services back online. Throughout the incident, our primary focus was restoring services and minimising any potential impact. Total disruption to services was approximately 1 hour and 10 minutes.
##Next Steps##
We have engaged the hardware vendor TAC and concluded that the fault with these access switches is related to the power surge we experienced in early October. These access switches have been scheduled for replacement and won’t be reintroduced into service until they have been replaced and stress tested. We are also investigating replacing any other potentially affected network hardware.
##Timeline##
09:15 GMT – Monitoring systems trigger availability alerts.
09:17 GMT – Alerts acknowledged and technical teams begin fault investigations.
09:30 GMT – Access switches are power cycled.
09:32 GMT – Service is restored to downstream services following the reboot.
09:43 GMT – Access switches move back into a ‘soft-lockup’ fault condition; monitoring alerts re-trigger.
09:45 GMT – Decision is made to physically bypass the access switches, instead connecting to rack aggregation switches.
09:50 GMT – Work begins on re-provisioning ports, VNI to VLAN maps, and L3 SVIs for affected customer services.
10:20 GMT – Re-provisioning work is completed across all 138 ports.
10:21 GMT – On-site team begins re-patching ports into aggregation switches.
10:40 GMT – On-site team completes re-patching.
10:42 GMT – Monitoring systems confirm availability of services.
10:45 GMT – Incident is marked as resolved. Investigations to be completed offline.
##Root Cause##
We identified a similar fault with our core switches on 6th October, where we experienced them moving into a ‘soft-lockup’ state. We have concluded with the hardware vendor that these switches were also affected by the power surge on 6th October. Due to the nature of the issue, there is no precursor or warning before the failure occurs.

DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Tue, 08 Oct 2024 14:18:00 -0000 https://status.cilix.cloud/incident/440763#02d13cf898dd84cdb81177b74d444a119176cc0f9cb58759a3c6b360de6a2d97
# What happened?
### Tuesday 08-October-24
* 19:00: We received alerts from our monitoring systems that our primary power feed had dropped offline. No disruption to service availability at this point.
* 19:20: We received alerts that several access (feed) switches had gone offline and were unreachable internally.
* 20:59: We received alerts that the offline access switches had come back online and were carrying traffic correctly.
* 21:05: We manually verified that workloads were back online and functioning as expected.
# What went wrong?
After our primary power feed dropped, all equipment successfully failed over to the backup power feed and was performing as expected. In an effort to bring service back online for customers without redundant power, our colocation provider enabled a transfer switch that connected the primary and secondary power feeds together. This, in turn, meant that all the previously offline equipment in the data centre began to power back online. This resulted in a voltage drop from the standard 240v to approximately 160v. Our core, border and aggregation network equipment feature power supplies that are able to function within a 100-240v range, and as such weren't affected by the voltage drop. However, our access switches feature power supplies that are validated for 220-240v. This led to our access switches powering down automatically, which in turn caused a loss of connectivity to any services not directly connected to our core switches. Once the voltage drop condition cleared, the access switches powered back online automatically, and service availability was restored.
# What changes are we making?
The service disruption was caused by a failed power feed, which then resulted in out-of-range operating conditions for our access layer switches. As a result, we will be implementing the following changes to our access layer switches:
* Firmware configuration changes to adjust the automatic shutdown voltage range.
* Replacement power supplies, validated for a 100-240v voltage range.
The above changes are scheduled to take place this Saturday and Sunday (12-13 October 2024). We will also provide further updates once we have received a full Root Cause Analysis from our colocation provider.

DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 19:45:00 -0000 https://status.cilix.cloud/incident/440763#d76da99b0fe530cede900de31ff5386c845f97a6f8c5deaecfd8850ece9ba478 We have seen a full restoration of power at DC1. We are currently manually verifying that all equipment has powered on correctly.
DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 19:30:00 -0000 https://status.cilix.cloud/incident/440763#315846d2c15036a30d6adc57f691c86c988db832126c905a18b2e6b3e72f39b9 We have received an update from the data centre that the issue is related to power systems. They believe they have identified the cause of the issues, and are working on remediation. DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 18:59:00 -0000 https://status.cilix.cloud/incident/440763#497ee5768c5a3022a7cf25b8026e2925f0bc23f614cd047c91390eb676c6890e We are aware of an issue affecting the availability of our infrastructure.
Maintenance: Network Infrastructure Upgrades https://status.cilix.cloud/incident/427496 Sat, 14 Sep 2024 12:00:00 -0000 https://status.cilix.cloud/incident/427496#160c4d56300455e52d922e36d95877047722cdba512c17ca83b981d5eef1143b Maintenance completed Maintenance: Network Infrastructure Upgrades https://status.cilix.cloud/incident/427496 Sat, 14 Sep 2024 04:00:00 -0000 https://status.cilix.cloud/incident/427496#581f5d60181913f27aa8c23a1449e131619a2417f2f665e1633a4780ec36477a We will perform maintenance on our network infrastructure in our London 1 location to upgrade our access layer switches to 25Gb-capable equipment. As part of this, we will be re-patching hypervisor nodes into the new access switches; however, we will re-route traffic through alternate uplinks beforehand. Dedicated server customers with the prefix "lon1-a4" will be unaffected by the maintenance. We expect disruption to be minimal and will update this page with more information as required.
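As an aside on the "alternate uplinks" step above: before physically re-patching a hypervisor's uplink, one would typically confirm the host still has a working redundant path. The snippet below is a rough, hypothetical pre-check for a Linux bonded interface, not CilixCloud's actual maintenance procedure; the bond and interface names are assumptions.

```python
# Illustrative only: not CilixCloud's maintenance tooling. Before unplugging one
# uplink of a Linux hypervisor bond, confirm at least one other slave interface is
# up, so traffic can drain onto the alternate uplink first. "bond0"/"eth0" are
# assumed names.

from pathlib import Path
import sys


def active_slaves(bond: str) -> list[str]:
    """Return slave interfaces of `bond` whose operational state is 'up'."""
    slaves = Path(f"/sys/class/net/{bond}/bonding/slaves").read_text().split()
    return [
        s for s in slaves
        if Path(f"/sys/class/net/{s}/operstate").read_text().strip() == "up"
    ]


if __name__ == "__main__":
    bond = sys.argv[1] if len(sys.argv) > 1 else "bond0"
    iface_to_repatch = sys.argv[2] if len(sys.argv) > 2 else "eth0"
    remaining = [s for s in active_slaves(bond) if s != iface_to_repatch]
    if remaining:
        print(f"OK to re-patch {iface_to_repatch}: {bond} keeps {remaining} up")
    else:
        print(f"Do NOT re-patch {iface_to_repatch}: no alternate uplink is up on {bond}")
        sys.exit(1)
```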
Self Service Portal: Software Upgrades https://status.cilix.cloud/incident/406059 Thu, 01 Aug 2024 15:00:00 -0000 https://status.cilix.cloud/incident/406059#e76afb66590ff413ee2b1e2e0c1e6a40d3cb8d6947adad42567485f378f8eb3c Maintenance completed Self Service Portal: Software Upgrades https://status.cilix.cloud/incident/406059 Thu, 01 Aug 2024 14:00:00 -0000 https://status.cilix.cloud/incident/406059#374e776c479932d0e06af797b78d0a0ac61e4e206b444aa884f43456179c7ceb We will be deploying a software update to our self-service portal. During the update works, the portal will be offline.
*Please note that customer virtual and dedicated servers will remain online and aren't affected by these maintenance works.* DDoS Protection System Integration https://status.cilix.cloud/incident/351449 Sat, 06 Apr 2024 11:00:00 -0000 https://status.cilix.cloud/incident/351449#ba933d8207ce38e1f5aad96e026af0157e7a38fce07c43ad710382b6f2f98ec6 Maintenance completed DDoS Protection System Integration https://status.cilix.cloud/incident/351449 Sat, 06 Apr 2024 06:00:00 -0000 https://status.cilix.cloud/incident/351449#c10ed78d595d3ff622d9cbe29b48fa3697260f8960fb31fcb8726c00f5b9813a As part of an ongoing project to enhance the security of our infrastructure, we will be integrating Cloudflare's Magic Transit DDoS protection into our network. As part of these works, we must deploy changes to our routing infrastructure and traffic monitoring systems. These changes will be made during a 2-hour window on the 6th of April 2024, resulting in up to 10 minutes of downtime. We will provide further updates as necessary.
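For customers who want to quantify impact during the 2-hour window, a simple availability probe can be compared against the stated 10-minute downtime budget. The sketch below is illustrative only and not an official tool; the target hostname and port are assumptions.

```python
# Illustrative only: a coarse availability probe to run during the announced
# 2-hour maintenance window and compare observed downtime against the stated
# 10-minute budget. The endpoint "example.cilix.cloud:443" is an assumption.

import socket
import time

TARGET = ("example.cilix.cloud", 443)  # hypothetical endpoint behind the affected routing
INTERVAL = 10          # seconds between probes
WINDOW = 2 * 60 * 60   # the announced 2-hour maintenance window, in seconds
BUDGET = 10 * 60       # the announced worst-case downtime, in seconds


def reachable(addr: tuple[str, int], timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to `addr` succeeds within `timeout`."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    downtime = 0.0
    deadline = time.monotonic() + WINDOW
    while time.monotonic() < deadline:
        if not reachable(TARGET):
            downtime += INTERVAL  # coarse estimate: one missed probe ~= one interval down
        time.sleep(INTERVAL)
    verdict = "within" if downtime <= BUDGET else "over"
    print(f"Observed ~{downtime / 60:.1f} min downtime ({verdict} the 10-minute budget)")
```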