Incidents | CilixCloud Incidents reported on status page for CilixCloud https://status.cilix.cloud/

LON2: LNS Configuration Changes https://status.cilix.cloud/incident/604720 Sat, 28 Jun 2025 05:00:00 +0000 https://status.cilix.cloud/incident/604720#bd4f55aa933f69e9eab0468275ba65686438a7a9827d96017ccfd485070b870c Maintenance completed

LON2: LNS Configuration Changes https://status.cilix.cloud/incident/604720 Sat, 28 Jun 2025 02:00:25 -0000 https://status.cilix.cloud/incident/604720#b616b2d3178b1fb8326ae4b43b13929406c13e80678fa632484f70b24c32449b ##What's Happening? On 28-June-25, we will be performing configuration changes on the following L2TP Network Servers (LNS). These changes will enable a number of technologies used for providing IPv6 over broadband sessions. * lns1.lon2.as215638.net * lns2.lon2.as215638.net Any active sessions will be dropped when we apply the configuration changes, as the LNS' will require reloading. These works are scheduled to begin at 3AM. ##What's the impact? Existing broadband sessions will be terminated as the configuration changes are applied and the LNS appliance reloads. Any customers terminating on the above LNS' will experience a brief loss in internet connectivity while their session redials to an available LNS. ##Updates: We will provide updates as required on this page. If you have any queries, please get in touch.
Regards, CilixCloud Team

LON1: Transit Provider Maintenance https://status.cilix.cloud/incident/600732 Wed, 25 Jun 2025 01:00:00 +0000 https://status.cilix.cloud/incident/600732#1262ae3e4ee18f43606a9a4c0a60f36c31f1657d319ba2c5c386ef60ed2b4dd8 Maintenance completed

LON1: Transit Provider Maintenance https://status.cilix.cloud/incident/600732 Tue, 24 Jun 2025 23:00:12 -0000 https://status.cilix.cloud/incident/600732#7d2e8279920ac5840c43174c93843e299bf3b6a233bfc5dea3a5d6039885fcb4 Our transit provider in LON1 will be performing maintenance on the switches used to provide transit connectivity in LON1. The maintenance is scheduled to begin at 00:00 on 25-June-25. We do not expect there to be any disruption as traffic will automatically route via LON2.
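The LNS configuration note above is aimed at enabling IPv6 over broadband sessions. On PPP-based broadband this is commonly delivered by delegating a prefix to the customer router via DHCPv6-PD, which then carves /64s for its LANs. A minimal sketch of that carving step using Python's standard `ipaddress` module (the /56 delegation below is a documentation-range example, not a CilixCloud allocation):

```python
import ipaddress

def carve_lan_prefixes(delegated: str, count: int) -> list:
    """Split a DHCPv6-PD delegated prefix into /64 LAN prefixes,
    as a customer router would when numbering its interfaces."""
    pd = ipaddress.ip_network(delegated)
    subnets = pd.subnets(new_prefix=64)  # generator of /64s
    return [str(next(subnets)) for _ in range(count)]

# A /56 delegation yields 256 possible /64 LANs; take the first three.
print(carve_lan_prefixes("2001:db8:0:100::/56", 3))
# -> ['2001:db8:0:100::/64', '2001:db8:0:101::/64', '2001:db8:0:102::/64']
```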
LON2: Scheduled Maintenance https://status.cilix.cloud/incident/582823 Sat, 31 May 2025 05:00:14 +0000 https://status.cilix.cloud/incident/582823#d1b441232e0df5cbcc884cf0c91cb4e9be434a6f1f46dfc1beb3138706ea657e Maintenance completed

LON2 Subscriber Services recovered https://status.cilix.cloud/ Sat, 31 May 2025 03:29:57 +0000 https://status.cilix.cloud/#c5fe1fb983703ca257ec8664791b503d5cced637f974845d4a3e592c47ef612d LON2 Subscriber Services recovered

LON2 Subscriber Services went down https://status.cilix.cloud/ Sat, 31 May 2025 03:16:36 +0000 https://status.cilix.cloud/#c5fe1fb983703ca257ec8664791b503d5cced637f974845d4a3e592c47ef612d LON2 Subscriber Services went down

LON2: Scheduled Maintenance https://status.cilix.cloud/incident/582823 Sat, 31 May 2025 02:00:14 -0000 https://status.cilix.cloud/incident/582823#07dbb1aa0ff3d23b5a37b118973877f5366a3e479c3f2074932cfd52b83fd135 ##What's Happening?## On 31-May-25, we will be performing code upgrades on the following L2TP Network Servers (LNS). * lns1.lon2.as215638.net * lns2.lon2.as215638.net Any active sessions will be dropped before the code upgrade starts. These works are scheduled to begin at 3AM. ##What's the impact?## Existing broadband sessions will be terminated before the code upgrade begins. Any customers terminating on the above LNS' will experience a brief loss in internet connectivity while their session redials to an available LNS.
London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 20:21:00 -0000 https://status.cilix.cloud/incident/561065#56b84c6a043e0eab16a8b1b753115919c21e158a2532606d8bb662337c61b4e3 The on-site team has replaced the B-Feed PDU. The single-fed devices are now back online. We will continue to closely monitor the power availability in London 1. We will provide any updates as required on this incident.
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Mon, 12 May 2025 20:07:33 +0000 https://status.cilix.cloud/#d4b2475d8098fdcdd28527297424717ed9ca5a6265c1733bac6b62383c7a6634 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Mon, 12 May 2025 20:04:32 +0000 https://status.cilix.cloud/#d4b2475d8098fdcdd28527297424717ed9ca5a6265c1733bac6b62383c7a6634 SMTP Relay 1 (London) went down

London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 19:38:00 -0000 https://status.cilix.cloud/incident/561065#4611019bea3a4279552c03e118fccd8bed27c273727164db8ece6af7bbccde74 We are aware of an issue with the B-Feed PDU in rack A4. This issue is affecting single-fed devices in rack A4. The on-site team are currently investigating. Further updates to follow as required.

London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 18:25:00 -0000 https://status.cilix.cloud/incident/561065#ef0eecdf1b0391f6125d82f726e19c78772d68276b03f0def8a780d425d15782 The affected power feed is back online.
We did not see any disruption to customer services; however, we have asked the power provider for a full RFO.

Nameserver 2 recovered https://status.cilix.cloud/ Mon, 12 May 2025 18:21:30 +0000 https://status.cilix.cloud/#eedd32d7ef8dd46ec9faad07216d5728a6a0ce2e08515a114482663759309950 Nameserver 2 recovered

Nameserver 2 went down https://status.cilix.cloud/ Mon, 12 May 2025 18:03:31 +0000 https://status.cilix.cloud/#eedd32d7ef8dd46ec9faad07216d5728a6a0ce2e08515a114482663759309950 Nameserver 2 went down

London 1: Power Feed Issues https://status.cilix.cloud/incident/561065 Mon, 12 May 2025 18:00:00 -0000 https://status.cilix.cloud/incident/561065#4529ec2132fb036cc3d945e755964d9a2c69603f7acdc7d78c1e91e8a7f68fc8 We are aware of an issue in our London 1 facility, where we have lost a power feed. We are not experiencing any major disruption to services, as we have redundant power. A limited set of internal services is currently offline. We are working on bringing these services back online as soon as possible.
Nameserver 1 recovered https://status.cilix.cloud/ Thu, 27 Mar 2025 08:28:09 +0000 https://status.cilix.cloud/#b3e7c11d8210a04b3868b9b59ceead4dba47c41bf84c5b61e3e6d2d586ba40c5 Nameserver 1 recovered

Nameserver 2 recovered https://status.cilix.cloud/ Thu, 27 Mar 2025 08:26:49 +0000 https://status.cilix.cloud/#32dddc56cd715c946041cd47ac8b05eda31070bf1ce3d77702b4a9cae2ffb2e8 Nameserver 2 recovered

Nameserver 2 went down https://status.cilix.cloud/ Thu, 27 Mar 2025 00:48:27 +0000 https://status.cilix.cloud/#32dddc56cd715c946041cd47ac8b05eda31070bf1ce3d77702b4a9cae2ffb2e8 Nameserver 2 went down

Nameserver 1 went down https://status.cilix.cloud/ Thu, 27 Mar 2025 00:47:40 +0000 https://status.cilix.cloud/#b3e7c11d8210a04b3868b9b59ceead4dba47c41bf84c5b61e3e6d2d586ba40c5 Nameserver 1 went down

London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 13:01:00 -0000 https://status.cilix.cloud/incident/533490#db5951d465d78fb3bddcd97153239a8efd3e86de45ae101ed57cbacc942ff5e9 LINX have resolved the issues with the LON1 Route Servers. We are seeing all 220K prefixes being imported successfully.

London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 12:08:00 -0000 https://status.cilix.cloud/incident/533490#bbdb3a7c9edbbc8cc631aa2c32eabdb731407442b7e388c1b51886b55cad9a93 LINX have identified an issue with the route servers and are working on a resolution.
London Internet Exchange (LINX) LON1 Route Server Issues https://status.cilix.cloud/incident/533490 Mon, 24 Mar 2025 11:30:00 -0000 https://status.cilix.cloud/incident/533490#a7d483228ae5b0e87dc1e8ea7fde4c03f58fb361d81198a82e73184b8981663c The London Internet Exchange (LINX) are having issues with their Route Servers on their LON1 peering LAN. As a result, we are not receiving any routes from the LON1 LAN’s route servers. Traffic is successfully being routed via alternative paths, meaning our network remains fully operational. We expect a resolution from LINX shortly.
LON1: Dedicated Servers SVI Migration https://status.cilix.cloud/incident/524353 Sat, 08 Mar 2025 06:30:44 +0000 https://status.cilix.cloud/incident/524353#2e751c800344fd30b25717ddc8a9b1886281e0b02f6eaf447ca99acf7cbf5c50 Maintenance completed

LON1: Cisco Access Switch Firmware Upgrade https://status.cilix.cloud/incident/523766 Sat, 08 Mar 2025 06:30:00 +0000 https://status.cilix.cloud/incident/523766#79d3957ace1e53b938dfae91f29aae8b9f367a8b8b08a355c2ddb7b2faf2e4ff Maintenance completed

LON1: Border Router Code Upgrade https://status.cilix.cloud/incident/524355 Sat, 08 Mar 2025 06:30:00 +0000 https://status.cilix.cloud/incident/524355#c18ee73ef5c77d740f6eb3c82c22bfae34de61aca9a25d41fe4bc53bbd44762e Maintenance completed

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Sat, 08 Mar 2025 05:08:54 +0000 https://status.cilix.cloud/#ea39a2012ac59e77fe0b985ae570d6e960c60f5bedb8146f52084780ea3b476e SMTP Relay 1 (London) recovered

LON1 - Border recovered https://status.cilix.cloud/ Sat, 08 Mar 2025 05:03:04 +0000 https://status.cilix.cloud/#301144e60495d1f9682be80f57e5c8b8efe7c78cef9625fe91f9d7dfe406d217 LON1 - Border recovered

London 1 Data Centre (LON1) recovered https://status.cilix.cloud/ Sat, 08 Mar 2025 05:03:04 +0000 https://status.cilix.cloud/#301144e60495d1f9682be80f57e5c8b8efe7c78cef9625fe91f9d7dfe406d217 London 1 Data Centre (LON1) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Sat, 08 Mar 2025 04:23:45 +0000
https://status.cilix.cloud/#ea39a2012ac59e77fe0b985ae570d6e960c60f5bedb8146f52084780ea3b476e SMTP Relay 1 (London) went down

LON1 - Border went down https://status.cilix.cloud/ Sat, 08 Mar 2025 04:21:54 +0000 https://status.cilix.cloud/#301144e60495d1f9682be80f57e5c8b8efe7c78cef9625fe91f9d7dfe406d217 LON1 - Border went down

London 1 Data Centre (LON1) went down https://status.cilix.cloud/ Sat, 08 Mar 2025 04:21:54 +0000 https://status.cilix.cloud/#301144e60495d1f9682be80f57e5c8b8efe7c78cef9625fe91f9d7dfe406d217 London 1 Data Centre (LON1) went down

LON1: Dedicated Servers SVI Migration https://status.cilix.cloud/incident/524353 Sat, 08 Mar 2025 03:30:44 -0000 https://status.cilix.cloud/incident/524353#f2ad1cbfa3aea980883752fd38ecdfae506ab370814d6bcfaa133e18b9eb9621 ##What's Happening?## We will migrate the SVIs used for self-service dedicated servers away from our core and onto a dedicated VC switch stack in rack A4. This is part of an ongoing project to migrate our network to a BGP-free, L2-free core, improving performance, reliability and scalability. ##What's the impact?## There will be a brief disruption in connectivity while the SVI is dropped from the core switches and brought up on the new switches. This should last a maximum of 1 minute, while our iBGP mesh reconverges.

LON1: Cisco Access Switch Firmware Upgrade https://status.cilix.cloud/incident/523766 Sat, 08 Mar 2025 03:30:00 -0000 https://status.cilix.cloud/incident/523766#1d9e3b799c7ee5a1e3549cd09769cc2a89d6aa0d35f83b2709892b53871780da ##What's Happening?## We will be performing firmware upgrades on Cisco access switches in racks A3 & A4 to patch a recently discovered CVE vulnerability. These works are scheduled to take place on 08-March-2025, starting at 03:30 AM. ##What's the impact?## Switches will continue to pass traffic while the new firmware is installed. Once complete, they will reload, which will take approximately 5 minutes. During this time, network connectivity will be unavailable.
Switches will be updated sequentially, to minimise disruption. Customers who take redundant Layer 3 connectivity from us will fail over to their backup uplinks automatically. We will shut down BGP sessions for customers who advertise their routes to us using BGP before starting the upgrade process. This will gracefully route traffic away from the affected uplinks.

LON1: Border Router Code Upgrade https://status.cilix.cloud/incident/524355 Sat, 08 Mar 2025 03:30:00 -0000 https://status.cilix.cloud/incident/524355#e4d5d4e8f3d769a6c471c1b993ae4b6958c47a0338d87b1d1232c0e504aa327e ##What's Happening?## We will be performing code upgrades on our border routers in LON1. ##What's the impact?## We will gracefully drain traffic away from the border routers before we begin the code upgrades. As such, we do not anticipate any disruption.
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Tue, 04 Mar 2025 03:02:17 +0000 https://status.cilix.cloud/#21ff22fca508a9a6baff175ecc1851aae505d744cf5aef719bf34000ceaa81c6 SMTP Relay 1 (London) recovered

LON1: Router Lockup https://status.cilix.cloud/incident/522245 Tue, 04 Mar 2025 02:55:00 -0000 https://status.cilix.cloud/incident/522245#5784562f685a7abc51d7bf6644347c9a22b605a94a6c8d98f824ef40d88baa20 We have executed a reload of the affected router, which has brought it back into service. We will review log telemetry and make any recommended changes. We did not observe any downtime, as traffic was automatically rerouted via our London 2 POP.

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Tue, 04 Mar 2025 02:32:09 +0000 https://status.cilix.cloud/#21ff22fca508a9a6baff175ecc1851aae505d744cf5aef719bf34000ceaa81c6 SMTP Relay 1 (London) went down

LON1: Router Lockup https://status.cilix.cloud/incident/522245 Tue, 04 Mar 2025 02:30:00 -0000 https://status.cilix.cloud/incident/522245#267e28361ca55f490f3b8ff93ca56ae091da48a9fc93129203f6580681ead865 We have identified an issue with one of our border routers at our London 1 site. We have confirmed that traffic has been successfully diverted to our London 2 site; no services are currently offline.
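Several of the maintenance notes above describe gracefully draining traffic away from border routers before a code upgrade. The status page does not say how CilixCloud implements the drain; one common mechanism is the well-known GRACEFUL_SHUTDOWN BGP community (65535:0, RFC 8326): the router being taken down tags its advertisements, and neighbours drop LOCAL_PREF to 0 so best-path selection moves traffic elsewhere before the session closes. A simplified, hypothetical sketch of that receive-side policy (real deployments do this in router policy, e.g. FRRouting's `bgp graceful-shutdown` knob):

```python
from dataclasses import dataclass, field

GRACEFUL_SHUTDOWN = (65535, 0)  # well-known community from RFC 8326

@dataclass
class Route:
    prefix: str
    next_hop: str
    local_pref: int = 100  # typical default LOCAL_PREF
    communities: set = field(default_factory=set)

def apply_graceful_shutdown(route: Route) -> Route:
    """Receive-side policy: deprioritise routes tagged for shutdown."""
    if GRACEFUL_SHUTDOWN in route.communities:
        route.local_pref = 0
    return route

def best_path(routes: list) -> Route:
    """Simplified best-path selection: highest LOCAL_PREF wins."""
    return max(routes, key=lambda r: r.local_pref)

# Border A is about to be upgraded and tags its route; traffic drains to B
# while the session is still up, so the eventual teardown drops no packets.
via_a = apply_graceful_shutdown(
    Route("203.0.113.0/24", "borderA", communities={GRACEFUL_SHUTDOWN}))
via_b = apply_graceful_shutdown(Route("203.0.113.0/24", "borderB"))
print(best_path([via_a, via_b]).next_hop)  # -> borderB
```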
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Wed, 26 Feb 2025 14:50:11 +0000 https://status.cilix.cloud/#7d582c7ca4396d85622fd699e1b40c7ff78366d4003771e8d24b4a3cdc7a1028 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Wed, 26 Feb 2025 14:43:11 +0000 https://status.cilix.cloud/#7d582c7ca4396d85622fd699e1b40c7ff78366d4003771e8d24b4a3cdc7a1028 SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Wed, 26 Feb 2025 14:40:12 +0000 https://status.cilix.cloud/#4182a6f2140450d5de30e5d260a6d7b6f593336fea6149d03992ec96f031a15a SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Wed, 26 Feb 2025 10:53:40 +0000 https://status.cilix.cloud/#4182a6f2140450d5de30e5d260a6d7b6f593336fea6149d03992ec96f031a15a SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Wed, 26 Feb 2025 08:47:58 +0000 https://status.cilix.cloud/#e518498ac3d50603a7754d140136cf29a9b9a2142f055a52233a5b3f1306198f SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Wed, 26 Feb 2025 08:19:35 +0000 https://status.cilix.cloud/#e518498ac3d50603a7754d140136cf29a9b9a2142f055a52233a5b3f1306198f SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Tue, 25 Feb 2025 20:04:53 +0000 https://status.cilix.cloud/#8d2c1a27b587164211137d18467c8bfa11bb436de56458a341c8da1b3f7eddc3 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Tue, 25 Feb 2025 18:03:50 +0000 https://status.cilix.cloud/#8d2c1a27b587164211137d18467c8bfa11bb436de56458a341c8da1b3f7eddc3 SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Tue, 25 Feb 2025 16:08:36 +0000 https://status.cilix.cloud/#e3948006a327b82111ae34c0376f3600b0eae881a329788275a2532faaaf1fec SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down
https://status.cilix.cloud/ Tue, 25 Feb 2025 15:12:19 +0000 https://status.cilix.cloud/#e3948006a327b82111ae34c0376f3600b0eae881a329788275a2532faaaf1fec SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Sun, 23 Feb 2025 12:37:53 +0000 https://status.cilix.cloud/#762a68fbe6516614f21201b7ea80d0d082b50c697200f57858bcf09ba1756d79 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Sun, 23 Feb 2025 12:21:49 +0000 https://status.cilix.cloud/#762a68fbe6516614f21201b7ea80d0d082b50c697200f57858bcf09ba1756d79 SMTP Relay 1 (London) went down

Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Sat, 22 Feb 2025 07:21:00 -0000 https://status.cilix.cloud/incident/516796#8a9291b34636b245d9b1c12622fa638e7e0572eb4fd8532424887a3ba429a64e We have observed an extended period of stability; all broadband services are online. We'll update this incident with further details as we receive them.
LON2: Power Maintenance https://status.cilix.cloud/incident/516682 Sat, 22 Feb 2025 06:00:00 +0000 https://status.cilix.cloud/incident/516682#0d7623277791fc91549ad75bb530fd76e24c582db13202dd0ab7df9b6d073a6f Maintenance completed

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Sat, 22 Feb 2025 04:52:57 +0000 https://status.cilix.cloud/#8dd3a24bbd3764fec545f68bb18a06bfe36613cadd0d6d0d6db6587239cee044 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Sat, 22 Feb 2025 04:37:52 +0000 https://status.cilix.cloud/#8dd3a24bbd3764fec545f68bb18a06bfe36613cadd0d6d0d6db6587239cee044 SMTP Relay 1 (London) went down

LON2: Power Maintenance https://status.cilix.cloud/incident/516682 Sat, 22 Feb 2025 01:00:00 -0000 https://status.cilix.cloud/incident/516682#4a0b4c17c5e20f0aa03b5e977e7063585d04123e18b0ae436224f01a8aff1e1c Our power provider in London 2 will perform maintenance on one of the redundant power connections in London 2. During this time we will be operating with reduced power redundancy.
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Fri, 21 Feb 2025 18:08:09 +0000 https://status.cilix.cloud/#0b0e48976e5e1834575d48743e762dc2e5ae8f21d74148a136fd2bba7e136be4 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Fri, 21 Feb 2025 16:41:41 +0000 https://status.cilix.cloud/#0b0e48976e5e1834575d48743e762dc2e5ae8f21d74148a136fd2bba7e136be4 SMTP Relay 1 (London) went down

Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 15:34:00 -0000 https://status.cilix.cloud/incident/516796#a619c249d77321b21e3492bdb4e32b4090528bc11d592abd4b608cb956fab9f5 All broadband sessions are back online. We have not had confirmation that the issues on the provider's side have been resolved; as such, we are still considering broadband services at risk of further disruption.

Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 15:02:00 -0000 https://status.cilix.cloud/incident/516796#619d7b8f4dc8bf0fd1afeeaa00c31244bcff21760ab8549f36a76da8c4480769 We can see broadband sessions are now dialing via alternative paths, with the majority of sessions back online.
Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 14:44:00 -0000 https://status.cilix.cloud/incident/516796#c9510b1aad89a5662b5627cf9c12ae323498e5183074bcbdf8b8138d43b5f3ad We are aware of a network device failure in BT's POP in THW. Sessions are diverting via alternative paths; however, continued issues in the provider's THW POP are causing disruption. We are working to isolate the THW NNIs and route L2TP traffic via alternative paths.

Broadband Issues: Telehouse West https://status.cilix.cloud/incident/516796 Fri, 21 Feb 2025 14:40:00 -0000 https://status.cilix.cloud/incident/516796#a8df48bdec0737c1bda6fb620bc88ef41540f11e15de9a3080322c24c9fe5a18 We are aware of an issue with broadband sessions connecting via BT's Telehouse West POP. We are working with BT to isolate the root cause.
SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Fri, 21 Feb 2025 12:31:47 +0000 https://status.cilix.cloud/#ab94ddc6e65038b468ad410e2961890dfd71589adf385fd2717cd65d829486a8 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Fri, 21 Feb 2025 12:21:55 +0000 https://status.cilix.cloud/#ab94ddc6e65038b468ad410e2961890dfd71589adf385fd2717cd65d829486a8 SMTP Relay 1 (London) went down

SMTP Relay 1 (London) recovered https://status.cilix.cloud/ Fri, 21 Feb 2025 08:40:07 +0000 https://status.cilix.cloud/#d7dd74c7af5e525ff28833b783499e4cdd2dff173bc0da550ca30405e3b5cbb5 SMTP Relay 1 (London) recovered

SMTP Relay 1 (London) went down https://status.cilix.cloud/ Fri, 21 Feb 2025 08:27:11 +0000 https://status.cilix.cloud/#d7dd74c7af5e525ff28833b783499e4cdd2dff173bc0da550ca30405e3b5cbb5 SMTP Relay 1 (London) went down

Broadband Sessions Failing to Connect https://status.cilix.cloud/incident/494631 Sun, 12 Jan 2025 13:06:00 -0000 https://status.cilix.cloud/incident/494631#a20cab3c5497881c90fe9f7382272d985ff0c50923c7a6943669e7ff21e886dd The provider has made changes to correct the routing issue. We have since seen all affected broadband sessions connect successfully.

Broadband Sessions Failing to Connect https://status.cilix.cloud/incident/494631 Sun, 12 Jan 2025 09:30:00 -0000 https://status.cilix.cloud/incident/494631#c2e514bf822896750e3554428ba623da6eb73656977237d784991b642a582ddb We are aware of an issue where new broadband sessions fail to connect. Following a fault investigation, we believe the issues are related to a set of provider LACs that are failing to forward traffic to our LNS correctly. We are working with the provider in question to provide a resolution ASAP. Currently, established broadband sessions are unaffected.
Network Maintenance: LON1 & LON2 https://status.cilix.cloud/incident/489709 Sat, 11 Jan 2025 07:00:44 +0000 https://status.cilix.cloud/incident/489709#aaee44fbfd6010b527db92f1fb181fcfc5d9b06e97072585f7599a63edcf278e Maintenance completed

LON2 - Core recovered https://status.cilix.cloud/ Sat, 11 Jan 2025 05:15:13 +0000 https://status.cilix.cloud/#53cd3c52bdef8bdd3e3892202377bdae6323edc569bb18f142c63855cc159a32 LON2 - Core recovered

LON2 - Core went down https://status.cilix.cloud/ Sat, 11 Jan 2025 05:10:33 +0000 https://status.cilix.cloud/#53cd3c52bdef8bdd3e3892202377bdae6323edc569bb18f142c63855cc159a32 LON2 - Core went down

Network Maintenance: LON1 & LON2 https://status.cilix.cloud/incident/489709 Sat, 11 Jan 2025 01:30:44 -0000 https://status.cilix.cloud/incident/489709#7b7b35e9a1d7c75832fca51f0e22b21b26e5eda72f85d41cbbfc9fb78717ace9 ##What's Happening? We will be making changes to our core network to extend our EVPN fabric between our London 1 and London 2 facilities. These changes will bring VXLAN tunnels between LON1 and LON2 online, enabling HA, failover, and L2 communication for resources hosted at both sites. These works are scheduled to take place on 11-Jan-2025, starting at 01:30 AM. ##What's the impact?
We do not anticipate downtime or disruption during these works; however, as changes are being made to our core network, service availability is at risk until the changes are complete. ##Updates: Updates will be provided on this maintenance note as required. 
BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 11:27:00 -0000 https://status.cilix.cloud/incident/489760#e7d0851844d2f2ddd7495741b4e291c9d69a5e5c1f3689dfab2ce197cc63204c BT NOC has provided the following information: _________________________________________________________ Date and Time of Incident: 02/01/2025, 10:04 AM Summary of Impact: At 10:04 AM on 02/01/2025, a brief disruption occurred, impacting all services routed through the THW POP. Leased Line services were restored almost immediately. Broadband services experienced varied recovery times, depending on customer router PPP dialer frequency, with some requiring a manual reboot to re-establish connectivity. Current Status: We are actively investigating the root cause of the incident. Preliminary analysis suggests a potential hardware fault. As part of our investigation, we are consulting with our vendor, Cisco, to determine the specific nature of the issue and identify any required remedial actions. Resolution Status: Pending further investigation and vendor feedback. 
We apologise for any inconvenience caused and are committed to resolving this issue as quickly as possible. BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 10:06:00 -0000 https://status.cilix.cloud/incident/489760#bdcc2f752a0c3b9c7729d5e25bd5984251b0665daca9a069f4c101ebd6e31821 All affected services are back online and experienced approximately 90 seconds of disruption before failing over to alternative infrastructure. We are investigating the issue with BT and will provide updates as required. 
BT Wholesale Connectivity Drop https://status.cilix.cloud/incident/489760 Thu, 02 Jan 2025 10:04:00 -0000 https://status.cilix.cloud/incident/489760#1bd124f5b7d26ced28ca2145e0cb861ad7e84291dbf1fffbbda1c99e99b7771d We are aware of connectivity issues for BT Wholesale broadband and leased line circuits terminating in Telehouse West. We are investigating these issues and will provide updates in due course. 
LON2 Facility - Network Integration https://status.cilix.cloud/incident/474427 Sat, 14 Dec 2024 12:00:00 +0000 https://status.cilix.cloud/incident/474427#307fec1ce2b86f8ae2848f0343129735e16d38b70fbdf312d4bd8bf2171cf79c Maintenance completed LON2 Facility - Network Integration https://status.cilix.cloud/incident/474427 Sat, 14 Dec 2024 01:00:00 -0000 https://status.cilix.cloud/incident/474427#707be83adf32a8ff016318d99290d9e7257dd3ebbcdf78c52f1dc1cdb35ce270 We will perform maintenance to integrate our new LON2 hosting facility with our network POPs and our LON1 hosting facility. As part of this work, we will bring DCIs (Data Centre Interconnects) online between each site to enable additional redundancy, resiliency, and flexibility for customer services. We do not expect there to be any disruption as a result of these works; however, as we will be making changes to our core routing equipment, there is a small risk of disruption. Updates will be provided as required. 
DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Tue, 08 Oct 2024 14:18:00 -0000 https://status.cilix.cloud/incident/440763#02d13cf898dd84cdb81177b74d444a119176cc0f9cb58759a3c6b360de6a2d97 # What happened? ### Tuesday 08-October-24 * 19:00: We received alerts from our monitoring systems that our primary power feed had dropped offline. There was no disruption to service availability at this point. * 19:20: We received alerts that several access (feed) switches had gone offline and were unreachable internally. * 20:59: We received alerts that the offline access switches had come back online and were carrying traffic correctly. * 21:05: We manually verified that workloads were back online and functioning as expected. # What went wrong? After our primary power feed dropped, all equipment successfully failed over to the backup power feed and was performing as expected. In an effort to bring service back online for non-power-redundant customers, our colocation provider enabled a transfer switch that connected the primary and secondary power feeds together. This meant that all the previously offline equipment in the data centre began to power back online, resulting in a voltage drop from the standard 240V to approximately 160V. Our core, border, and aggregation network equipment feature power supplies that can operate within a 100-240V range, and as such weren't affected by the voltage drop. However, our access switches feature power supplies that are validated for 220-240V only. This led to our access switches powering down automatically, which in turn caused a loss of connectivity to any services not directly connected to our core switches. 
Once the voltage drop condition cleared, the access switches powered back online automatically and service availability was restored. # What changes are we making? The service disruption was caused by a failed power feed, which then resulted in out-of-range operating conditions for our access layer switches. As a result, we will be implementing the following changes to our access layer switches: * Firmware configuration changes to adjust the automatic shutdown voltage range. * Replacement power supplies, validated for a 100-240V range. The above changes are scheduled to take place this Saturday and Sunday (12-13 October 2024). We will also provide further updates once we have received a full Root Cause Analysis from our colocation provider. 
DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 19:45:00 -0000 https://status.cilix.cloud/incident/440763#d76da99b0fe530cede900de31ff5386c845f97a6f8c5deaecfd8850ece9ba478 We have seen a full restoration of power at DC1. We are currently manually verifying all equipment has powered on correctly. 
DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 19:30:00 -0000 https://status.cilix.cloud/incident/440763#315846d2c15036a30d6adc57f691c86c988db832126c905a18b2e6b3e72f39b9 We have received an update from the data centre that the issue is related to power systems. They believe they have identified the cause of the issue and are working on remediation. 
DC1: Power Feed Issues https://status.cilix.cloud/incident/440763 Mon, 07 Oct 2024 18:59:00 -0000 https://status.cilix.cloud/incident/440763#497ee5768c5a3022a7cf25b8026e2925f0bc23f614cd047c91390eb676c6890e We are aware of an issue affecting the availability of our infrastructure. 
Maintenance: Network Infrastructure Upgrades https://status.cilix.cloud/incident/427496 Sat, 14 Sep 2024 12:00:00 +0000 https://status.cilix.cloud/incident/427496#160c4d56300455e52d922e36d95877047722cdba512c17ca83b981d5eef1143b Maintenance completed Maintenance: Network Infrastructure Upgrades https://status.cilix.cloud/incident/427496 Sat, 14 Sep 2024 04:00:00 -0000 https://status.cilix.cloud/incident/427496#581f5d60181913f27aa8c23a1449e131619a2417f2f665e1633a4780ec36477a We will perform maintenance on our network infrastructure in our London 1 location to upgrade our access layer switches to 25Gb-capable equipment. 
As part of this, we will be re-patching hypervisor nodes into the new access switches; however, we will re-route traffic through alternate uplinks beforehand. Dedicated server customers with the prefix "lon1-a4" will be unaffected by the maintenance. We expect disruption to be minimal and will update this page with more information as required. 
Self Service Portal: Software Upgrades https://status.cilix.cloud/incident/406059 Thu, 01 Aug 2024 15:00:00 +0000 https://status.cilix.cloud/incident/406059#e76afb66590ff413ee2b1e2e0c1e6a40d3cb8d6947adad42567485f378f8eb3c Maintenance completed Self Service Portal: Software Upgrades https://status.cilix.cloud/incident/406059 Thu, 01 Aug 2024 14:00:00 -0000 https://status.cilix.cloud/incident/406059#374e776c479932d0e06af797b78d0a0ac61e4e206b444aa884f43456179c7ceb We will be deploying a software update to our self-service portal. 
During the update works, the portal will be offline. *Please note that customer virtual and dedicated servers will remain online and are not affected by these maintenance works.* DDoS Protection System Integration https://status.cilix.cloud/incident/351449 Sat, 06 Apr 2024 11:00:00 +0000 https://status.cilix.cloud/incident/351449#ba933d8207ce38e1f5aad96e026af0157e7a38fce07c43ad710382b6f2f98ec6 Maintenance completed DDoS Protection System Integration https://status.cilix.cloud/incident/351449 Sat, 06 Apr 2024 06:00:00 -0000 https://status.cilix.cloud/incident/351449#c10ed78d595d3ff622d9cbe29b48fa3697260f8960fb31fcb8726c00f5b9813a As part of an ongoing project to enhance the security of our infrastructure, we will be integrating Cloudflare's Magic Transit DDoS protection into our network. As part of these works, we must deploy changes to our routing infrastructure and traffic monitoring systems. These changes will be made during a 2-hour window on the 6th of April 2024, resulting in up to 10 minutes of downtime. We will provide further updates as necessary.