<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Patmos Status - Incident history</title>
    <link>https://status.patmos.tech</link>
    <description>Patmos</description>
    <pubDate>Mon, 19 Jan 2026 05:28:00 +0000</pubDate>
    
<item>
  <title>Power Outage at KC1</title>
  <description>
    Type: Incident
    Duration: 12 hours and 41 minutes

    Affected Components: Power, Network, Kansas City 1 (Tracy)
    Jan 19, 06:09:00 GMT+0 - Resolved - Root Cause Analysis  
On 1/18/26 at 11:05 PM CST, the KC1 datacenter experienced a loss of utility power due to an upstream outage. UPS systems immediately carried the full datacenter load. Generator startup was initiated but did not complete before UPS battery capacity was exhausted, resulting in a facility-wide power loss. Utility power was restored at 11:43 PM CST.  
  
The generators successfully started and operated during routine testing and annual maintenance the prior week. During that maintenance, a cold-weather component (block heater) was identified for replacement, and repairs were already in progress. At the time of the outage, extreme cold temperatures and high winds prevented the generators from reaching operating conditions quickly enough to assume the load.  
  
Because the outage resulted in a hard shutdown of datacenter systems, additional time was required after power restoration to safely bring servers and supporting services back online, validate system integrity, and address any startup issues.  
  
Block heater replacement has been expedited, cold-start performance is being re-verified, and additional winter readiness checks are being implemented. Jan 19, 05:28:00 GMT+0 - Monitoring - KC1 experienced a power outage due to a loss of utility power beginning at 11:05:19 PM CST. UPS systems carried the load until approximately 11:28 PM CST, at which time the UPS failed and on-site generators did not start as expected.

Utility power was restored at 11:43:38 PM CST.

Most servers are back online at this time. We are actively working to bring any remaining offline servers back into service. If your server is still offline, please submit a support ticket at &lt;https://tickets.patmos.tech&gt; so we can prioritize restoration.

An investigation is underway, and a root cause analysis (RCA) will be provided within the next 24–48 hours. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 12 hours and 41 minutes</p>
    <p><strong>Affected Components:</strong> Power, Network, Kansas City 1 (Tracy)</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:09:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Root Cause Analysis  
On 1/18/26 at 11:05 PM CST, the KC1 datacenter experienced a loss of utility power due to an upstream outage. UPS systems immediately carried the full datacenter load. Generator startup was initiated but did not complete before UPS battery capacity was exhausted, resulting in a facility-wide power loss. Utility power was restored at 11:43 PM CST.  
  
The generators successfully started and operated during routine testing and annual maintenance the prior week. During that maintenance, a cold-weather component (block heater) was identified for replacement, and repairs were already in progress. At the time of the outage, extreme cold temperatures and high winds prevented the generators from reaching operating conditions quickly enough to assume the load.  
  
Because the outage resulted in a hard shutdown of datacenter systems, additional time was required after power restoration to safely bring servers and supporting services back online, validate system integrity, and address any startup issues.  
  
Block heater replacement has been expedited, cold-start performance is being re-verified, and additional winter readiness checks are being implemented.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:28:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  KC1 experienced a power outage due to a loss of utility power beginning at 11:05:19 PM CST. UPS systems carried the load until approximately 11:28 PM CST, at which time the UPS failed and on-site generators did not start as expected.

Utility power was restored at 11:43:38 PM CST.

Most servers are back online at this time. We are actively working to bring any remaining offline servers back into service. If your server is still offline, please submit a support ticket at &lt;https://tickets.patmos.tech&gt; so we can prioritize restoration.

An investigation is underway, and a root cause analysis (RCA) will be provided within the next 24–48 hours.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 19 Jan 2026 05:28:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmkkswxti00gs13hfqeg6wyos</link>
  <guid>https://status.patmos.tech/incident/cmkkswxti00gs13hfqeg6wyos</guid>
</item>

<item>
  <title>Phoenix: Some Servers Unreachable</title>
  <description>
    Type: Incident
    Duration: 4 days, 14 hours and 5 minutes

    Affected Components: Phoenix
    Nov 20, 14:23:59 GMT+0 - Identified - This incident required manual reboots for many servers. Most are now back online and accessible, though a small portion are still being actively troubleshot. We are continuing to investigate the root cause and will share a full root cause analysis once finalized. Nov 19, 23:55:00 GMT+0 - Investigating - Some servers are currently unreachable. All hands are on deck investigating the issue. We’ll provide updates as soon as more information is available. Nov 20, 21:07:24 GMT+0 - Identified - During scheduled power maintenance by the building owner yesterday at 4:25 PM MST, a UPS failure occurred, forcing the system into bypass mode. We are currently operating on generator power with the UPS in maintenance bypass.  
Replacement parts have been expedited and will arrive Friday (November 21). Technicians are scheduled to be onsite at 6:00 PM MST Friday to complete repairs. We expect the UPS to be fully restored by late Friday evening or early Saturday morning.  

Some servers experienced disruptions during the initial event. Our technicians are onsite working through affected systems individually and updating support tickets as work is completed.  
All scheduled maintenance activities have been paused until the UPS is back online and fully operational.

Next update: Friday at 6:00 PM MST when technicians begin repairs. Nov 22, 02:08:20 GMT+0 - Identified - The UPS is back online, and the load has been successfully transferred.

Our team continues working to restore all servers affected by the recent power event. Each server requires individual troubleshooting, so restoration is progressing steadily but not simultaneously.

Thank you for your patience as we work to bring all services fully online. Nov 24, 14:00:00 GMT+0 - Resolved - Root Cause Analysis  
  
On the evening of 11/19 at roughly 5:00 PM MST, we began receiving alerts as a building power changeover unexpectedly caused one of our UPS units to fall into bypass. A battery issue inside the UPS led to a sudden loss of power for the servers connected to it, resulting in hard shutdowns. Our team immediately moved into emergency response, including flying in additional technicians to help expedite service restoration.  
  
By 7:00 PM MST 11/21, UPS functionality had been restored, but the power event affected each server differently, which meant recovery required individualized attention rather than a single fix for all systems. Progress continued steadily through the night; by the morning of 11/22, most servers were back online, with only a small number still offline. By the morning of 11/24, the vast majority of servers had been restored, with only a few systems still undergoing more extensive hardware repairs.  
  
We are reviewing generator-transfer procedures and continuing routine maintenance and checks on the UPS and broader datacenter infrastructure to confirm and strengthen safeguards and to proactively detect any potential future issues. We are also updating our internal response playbooks based on lessons learned from this event. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 4 days, 14 hours and 5 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:23:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This incident required manual reboots for many servers. Most are now back online and accessible, though a small portion are still being actively troubleshot. We are continuing to investigate the root cause and will share a full root cause analysis once finalized.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:55:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Some servers are currently unreachable. All hands are on deck investigating the issue. We’ll provide updates as soon as more information is available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:07:24&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  During scheduled power maintenance by the building owner yesterday at 4:25 PM MST, a UPS failure occurred, forcing the system into bypass mode. We are currently operating on generator power with the UPS in maintenance bypass.  
Replacement parts have been expedited and will arrive Friday (November 21). Technicians are scheduled to be onsite at 6:00 PM MST Friday to complete repairs. We expect the UPS to be fully restored by late Friday evening or early Saturday morning.  

Some servers experienced disruptions during the initial event. Our technicians are onsite working through affected systems individually and updating support tickets as work is completed.  
All scheduled maintenance activities have been paused until the UPS is back online and fully operational.

Next update: Friday at 6:00 PM MST when technicians begin repairs.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:08:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The UPS is back online, and the load has been successfully transferred.

Our team continues working to restore all servers affected by the recent power event. Each server requires individual troubleshooting, so restoration is progressing steadily but not simultaneously.

Thank you for your patience as we work to bring all services fully online.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Root Cause Analysis  
  
On the evening of 11/19 at roughly 5:00 PM MST, we began receiving alerts as a building power changeover unexpectedly caused one of our UPS units to fall into bypass. A battery issue inside the UPS led to a sudden loss of power for the servers connected to it, resulting in hard shutdowns. Our team immediately moved into emergency response, including flying in additional technicians to help expedite service restoration.  
  
By 7:00 PM MST 11/21, UPS functionality had been restored, but the power event affected each server differently, which meant recovery required individualized attention rather than a single fix for all systems. Progress continued steadily through the night; by the morning of 11/22, most servers were back online, with only a small number still offline. By the morning of 11/24, the vast majority of servers had been restored, with only a few systems still undergoing more extensive hardware repairs.  
  
We are reviewing generator-transfer procedures and continuing routine maintenance and checks on the UPS and broader datacenter infrastructure to confirm and strengthen safeguards and to proactively detect any potential future issues. We are also updating our internal response playbooks based on lessons learned from this event.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 19 Nov 2025 23:55:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmi6u6ht201qb2m1hasj7nuty</link>
  <guid>https://status.patmos.tech/incident/cmi6u6ht201qb2m1hasj7nuty</guid>
</item>

<item>
  <title>Networking Issue in Phoenix</title>
  <description>
    Type: Incident
    Duration: 5 hours and 58 minutes

    Affected Components: Phoenix
    Oct 1, 09:39:16 GMT+0 - Investigating - We have identified a networking issue affecting Phoenix DC, and are currently investigating this incident. Oct 1, 10:36:55 GMT+0 - Monitoring - We have identified the network device responsible for the recent issues affecting the Phoenix network. Immediate mitigation measures have been implemented to prevent further service impact while the device remains under observation.  
Our engineering team is currently conducting a thorough investigation into the root cause of the device malfunction. A complete Root Cause Analysis (RCA) will be published once our investigation is concluded and all findings have been verified.  
We will continue to monitor the situation closely and will provide updates as significant developments occur. Oct 1, 15:37:37 GMT+0 - Resolved - Root Cause Analysis

A network device in our datacenter caused temporary routing errors, which disrupted access to some client servers. The issue was resolved by temporarily disabling and re-enabling the affected port, restoring normal connectivity. The device is being moved to a dedicated network segment to prevent similar issues in the future, and all affected services are now fully operational. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 5 hours and 58 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:39:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We have identified a networking issue affecting Phoenix DC, and are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:36:55&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We have identified the network device responsible for the recent issues affecting the Phoenix network. Immediate mitigation measures have been implemented to prevent further service impact while the device remains under observation.  
Our engineering team is currently conducting a thorough investigation into the root cause of the device malfunction. A complete Root Cause Analysis (RCA) will be published once our investigation is concluded and all findings have been verified.  
We will continue to monitor the situation closely and will provide updates as significant developments occur.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:37:37&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Root Cause Analysis

A network device in our datacenter caused temporary routing errors, which disrupted access to some client servers. The issue was resolved by temporarily disabling and re-enabling the affected port, restoring normal connectivity. The device is being moved to a dedicated network segment to prevent similar issues in the future, and all affected services are now fully operational.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 1 Oct 2025 09:39:16 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmg7sodef0s51o5fq5yrh2k94</link>
  <guid>https://status.patmos.tech/incident/cmg7sodef0s51o5fq5yrh2k94</guid>
</item>

<item>
  <title>Phoenix Datacenter Network Accessibility</title>
  <description>
    Type: Incident
    Duration: 4 days, 18 hours and 17 minutes

    
    Sep 26, 00:12:00 GMT+0 - Investigating - We are investigating network connectivity issues in our Phoenix datacenter. Devices hosted in Phoenix are currently unavailable for some customers.

Our network team is actively working to identify and resolve the issue.

Updates will be provided as developments occur. Sep 26, 11:45:22 GMT+0 - Monitoring - The network is now fully operational, and we continue to closely monitor its performance. A detailed root cause analysis will be shared once it is complete. Please refer to this page for any further updates. Sep 30, 18:29:11 GMT+0 - Resolved - Root Cause Analysis  
On 9/25/25 at 4:50 PM MST, clients relying on internal DNS in our Phoenix data center experienced connectivity loss for approximately 90 minutes. The outage was caused by a switch reload that did not retain firewall connectivity settings. Once the settings were restored, full connectivity was re-established. The firewall configuration has now been saved to prevent recurrence during future switch reloads. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 4 days, 18 hours and 17 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:12:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are investigating network connectivity issues in our Phoenix datacenter. Devices hosted in Phoenix are currently unavailable for some customers.

Our network team is actively working to identify and resolve the issue.

Updates will be provided as developments occur.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:45:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  The network is now fully operational, and we continue to closely monitor its performance. A detailed root cause analysis will be shared once it is complete. Please refer to this page for any further updates.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:29:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Root Cause Analysis  
On 9/25/25 at 4:50 PM MST, clients relying on internal DNS in our Phoenix data center experienced connectivity loss for approximately 90 minutes. The outage was caused by a switch reload that did not retain firewall connectivity settings. Once the settings were restored, full connectivity was re-established. The firewall configuration has now been saved to prevent recurrence during future switch reloads.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 26 Sep 2025 00:12:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmg06sydd02gdonq7old4oz1t</link>
  <guid>https://status.patmos.tech/incident/cmg06sydd02gdonq7old4oz1t</guid>
</item>

<item>
  <title>KC Partial Outage</title>
  <description>
    Type: Incident
    Duration: 2 hours and 3 minutes

    Affected Components: Network, Kansas City 1 (Tracy)
    Sep 6, 12:29:03 GMT+0 - Identified - We have identified an issue in the network and are working towards a resolution. Network teams are working to restore service; this applies to a limited number of customers. Sep 6, 14:31:56 GMT+0 - Resolved - This incident has been resolved. A hardware line card failure was identified; the part has been replaced and tested. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 3 minutes</p>
    <p><strong>Affected Components:</strong> Network, Kansas City 1 (Tracy)</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:29:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have identified an issue in the network and are working towards a resolution. Network teams are working to restore service; this applies to a limited number of customers.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:31:56&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. A hardware line card failure was identified; the part has been replaced and tested.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 6 Sep 2025 12:29:03 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmf88qepp007e3rwl0etgdfp3</link>
  <guid>https://status.patmos.tech/incident/cmf88qepp007e3rwl0etgdfp3</guid>
</item>

<item>
  <title>Phoenix DC Connectivity Issues</title>
  <description>
    Type: Incident
    Duration: 6 days, 3 hours and 23 minutes

    Affected Components: Phoenix
    Sep 6, 13:13:00 GMT+0 - Monitoring - We are currently experiencing **intermittent connectivity issues** at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing **mitigation measures** to stabilize service. We will continue to provide updates as work progresses. A **permanent resolution** and a formal **root cause analysis** will be shared once available. Thank you for your patience and understanding. Sep 4, 15:59:00 GMT+0 - Monitoring - Connectivity in the Phoenix Datacenter has been restored. The temporary fix applied to the network switch is currently maintaining service while our team continues to investigate the underlying cause. All services are operating normally, and we will provide further updates, including a root cause analysis, once available. Sep 5, 19:55:00 GMT+0 - Identified - We are aware that connectivity in the Phoenix Datacenter is intermittently going up and down. Our networking team has all hands on deck and is actively applying mitigation measures to stabilize service. We will continue to post updates here as work progresses, and a permanent fix and official root cause analysis will be shared once available. Sep 5, 21:44:05 GMT+0 - Monitoring - We are currently experiencing **intermittent connectivity issues** at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing **mitigation measures** to stabilize service. We will continue to provide updates as work progresses. A **permanent resolution** and a formal **root cause analysis** will be shared once available. Thank you for your patience and understanding. Sep 4, 09:50:00 GMT+0 - Investigating - We are currently experiencing connectivity issues impacting the Phoenix Datacenter. Our networking team is actively investigating the cause. Further updates will be provided as soon as more information becomes available. 
Sep 4, 12:21:00 GMT+0 - Monitoring - Connectivity in the Phoenix Datacenter has been restored. A temporary fix has been applied to one of the network switches while our team continues to investigate the underlying cause. All services are currently operating normally, and we will provide further updates as more information becomes available. Sep 4, 14:31:00 GMT+0 - Identified - Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available. Sep 4, 14:43:00 GMT+0 - Monitoring - Connectivity in the Phoenix Datacenter has been restored. The temporary fix applied to the network switch is currently maintaining service while our team continues to investigate the underlying cause.

All services are operating normally, and we will provide further updates, including a root cause analysis, once available. Sep 4, 15:49:00 GMT+0 - Identified - Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available. Sep 5, 01:03:13 GMT+0 - Identified - Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available. Sep 5, 19:55:17 GMT+0 - Identified - We are aware that connectivity in the Phoenix Datacenter is still going up and down since our last update. Our networking team remains fully engaged, actively working to stabilize service. We understand the impact this has on your operations and appreciate your patience. Further updates will be posted here as progress continues, and a permanent fix along with an official root cause analysis will be shared once available. Sep 5, 21:22:39 GMT+0 - Monitoring - We are currently experiencing server downtime related to the ongoing intermittent issue impacting one of our network switches. Our network team has identified the root cause and is actively working on applying the same temporary fix that previously restored service effectively.

While this workaround has proven successful in the past, we are closely monitoring the situation to confirm full service restoration. Our team remains fully engaged and is continuing to work toward a permanent resolution.

We appreciate your patience and will provide further updates as more information becomes available. Sep 6, 13:13:31 GMT+0 - Monitoring - We are currently experiencing **intermittent connectivity issues** at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing **mitigation measures** to stabilize service. We will continue to provide updates as work progresses. A **permanent resolution** and a formal **root cause analysis** will be shared once available. Thank you for your patience and understanding. Sep 10, 13:13:00 GMT+0 - Resolved - Root Cause Analysis  
A network service disruption affecting clients on a shared VLAN occurred between September 3-6, 2025, caused by MAC address table corruption within the datacenter leaf switch fabric. Root cause investigation definitively identified that the firewall leaf switch cluster was not properly clearing MAC address entries, propagating incorrect records to other leaf switches and disrupting connectivity. Initial detection occurred September 3 at 16:00 UTC through monitoring and client reports. Temporary mitigation via MAC table clearing provided intermittent relief, but the issue persisted until September 6 when all leaf switch clusters were systematically reloaded to rebuild the fabric. Full resolution was achieved by 18:00 UTC with extended monitoring confirming complete stability by 22:00 UTC. The root cause has been eliminated through the fabric rebuild, and additional preventive measures including migrating a subset of servers off the shared VLAN have been implemented to ensure this specific issue cannot recur. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 days, 3 hours and 23 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:13:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are currently experiencing &lt;strong&gt;intermittent connectivity issues&lt;/strong&gt; at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing &lt;strong&gt;mitigation measures&lt;/strong&gt; to stabilize service. We will continue to provide updates as work progresses. A &lt;strong&gt;permanent resolution&lt;/strong&gt; and a formal &lt;strong&gt;root cause analysis&lt;/strong&gt; will be shared once available. Thank you for your patience and understanding.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:59:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter has been restored. The temporary fix applied to the network switch is currently maintaining service while our team continues to investigate the underlying cause. All services are operating normally, and we will provide further updates, including a root cause analysis, once available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:55:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are aware that connectivity in the Phoenix Datacenter is intermittently going up and down. Our networking team has all hands on deck and is actively applying mitigation measures to stabilize service. We will continue to post updates here as work progresses, and a permanent fix and official root cause analysis will be shared once available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:44:05&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are currently experiencing &lt;strong&gt;intermittent connectivity issues&lt;/strong&gt; at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing &lt;strong&gt;mitigation measures&lt;/strong&gt; to stabilize service. We will continue to provide updates as work progresses. A &lt;strong&gt;permanent resolution&lt;/strong&gt; and a formal &lt;strong&gt;root cause analysis&lt;/strong&gt; will be shared once available. Thank you for your patience and understanding.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently experiencing connectivity issues impacting the Phoenix Datacenter. Our networking team is actively investigating the cause. Further updates will be provided as soon as more information becomes available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:21:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter has been restored. A temporary fix has been applied to one of the network switches while our team continues to investigate the underlying cause. All services are currently operating normally, and we will provide further updates as more information becomes available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:31:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:43:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter has been restored. The temporary fix applied to the network switch is currently maintaining service while our team continues to investigate the underlying cause.

All services are operating normally, and we will provide further updates, including a root cause analysis, once available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:49:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:03:13&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Connectivity in the Phoenix Datacenter is currently down. This is related to the earlier incident affecting one of the network switches.

Our networking team is actively investigating and preparing to apply the temporary fix that resolved the previous disruption. While this measure has been effective earlier, we are monitoring closely to confirm full service restoration and to identify the root cause.

We will continue providing updates as more information becomes available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:55:17&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are aware that connectivity in the Phoenix Datacenter has remained intermittent since our last update. Our networking team remains fully engaged and is actively working to stabilize service. We understand the impact this has on your operations and appreciate your patience. Further updates will be posted here as progress continues, and a permanent fix along with an official root cause analysis will be shared once available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:22:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are currently experiencing server downtime related to the ongoing intermittent issue impacting one of our network switches. Our network team has identified the root cause and is actively working on applying the same temporary fix that previously restored service effectively.

While this workaround has proven successful in the past, we are closely monitoring the situation to confirm full service restoration. Our team remains fully engaged and is continuing to work toward a permanent resolution.

We appreciate your patience and will provide further updates as more information becomes available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:13:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are currently experiencing **intermittent connectivity issues** at the Phoenix Datacenter. Our networking team is fully engaged and actively implementing **mitigation measures** to stabilize service. We will continue to provide updates as work progresses. A **permanent resolution** and a formal **root cause analysis** will be shared once available. Thank you for your patience and understanding..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:13:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Root Cause Analysis  
A network service disruption affecting clients on a shared VLAN occurred between September 3-6, 2025, caused by MAC address table corruption within the datacenter leaf switch fabric. Root cause investigation definitively identified that the firewall leaf switch cluster was not properly clearing MAC address entries, propagating incorrect records to other leaf switches and disrupting connectivity. Initial detection occurred September 3 at 16:00 UTC through monitoring and client reports. Temporary mitigation via MAC table clearing provided intermittent relief, but the issue persisted until September 6 when all leaf switch clusters were systematically reloaded to rebuild the fabric. Full resolution was achieved by 18:00 UTC with extended monitoring confirming complete stability by 22:00 UTC. The root cause has been eliminated through the fabric rebuild, and additional preventive measures including migrating a subset of servers off the shared VLAN have been implemented to ensure this specific issue cannot recur..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 4 Sep 2025 09:50:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmf5bsqor0021xln2hhzrc6dl</link>
  <guid>https://status.patmos.tech/incident/cmf5bsqor0021xln2hhzrc6dl</guid>
</item>

<item>
  <title>Phoenix Datacenter Connectivity Issues</title>
  <description>
    Type: Incident
    Duration: 18 minutes

    Affected Components: Phoenix
    Sep 4, 03:59:58 GMT+0 - Investigating - We&#039;re currently seeing connectivity issues in the Phoenix Datacenter. The issue is being looked into by our networking team and we will update you as soon as we have any new information! Sep 4, 04:18:20 GMT+0 - Resolved - This incident has been resolved.  
Connectivity to Phoenix has been restored. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 18 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:59:58&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;re currently seeing connectivity issues in the Phoenix Datacenter. The issue is being looked into by our networking team and we will update you as soon as we have any new information!.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:18:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.  
Connectivity to Phoenix has been restored..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 4 Sep 2025 03:59:58 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmf4vo10n00358wqu4mx8gpkk</link>
  <guid>https://status.patmos.tech/incident/cmf4vo10n00358wqu4mx8gpkk</guid>
</item>

<item>
  <title>Partial Upstream Provider Outage</title>
  <description>
    Type: Incident
    Duration: 22 hours

    Affected Components: Network
    Jul 31, 07:30:00 GMT+0 - Identified - We are currently experiencing issues with two of our upstream network providers, Zayo and Cogent. While our network remains fully operational due to redundant connectivity through additional carriers, some customers may observe BGP session drops or routing changes if their networks prefer those providers.

We are actively working with Zayo and Cogent to investigate and resolve the issue. There is no expected impact to general connectivity, but if you are experiencing route-specific issues or degraded performance, please open a support ticket and include relevant traceroutes or BGP session logs.

We will provide further updates as more information becomes available. Aug 1, 05:30:00 GMT+0 - Resolved - We’ve identified that the earlier connectivity issues were caused by a dark fiber outage affecting one of our backbone paths. While we did not lose connection to any upstream providers, the fiber disruption impacted routing for a small subset of customers whose traffic relied on that path.

Service was fully restored at 12:30 AM today when the fiber came back online. All affected routes have stabilized, and no further impact is expected.

We’ll continue to monitor for any anomalies, but at this time the incident is considered resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 22 hours</p>
    <p><strong>Affected Components:</strong> Network</p>
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are currently experiencing issues with two of our upstream network providers, Zayo and Cogent. While our network remains fully operational due to redundant connectivity through additional carriers, some customers may observe BGP session drops or routing changes if their networks prefer those providers.

We are actively working with Zayo and Cogent to investigate and resolve the issue. There is no expected impact to general connectivity, but if you are experiencing route-specific issues or degraded performance, please open a support ticket and include relevant traceroutes or BGP session logs.

We will provide further updates as more information becomes available..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We’ve identified that the earlier connectivity issues were caused by a dark fiber outage affecting one of our backbone paths. While we did not lose connection to any upstream providers, the fiber disruption impacted routing for a small subset of customers whose traffic relied on that path.

Service was fully restored at 12:30 AM today when the fiber came back online. All affected routes have stabilized, and no further impact is expected.

We’ll continue to monitor for any anomalies, but at this time the incident is considered resolved..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 31 Jul 2025 07:30:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cmdrg8zwr00xh12xeh759cwj9</link>
  <guid>https://status.patmos.tech/incident/cmdrg8zwr00xh12xeh759cwj9</guid>
</item>

<item>
  <title>Isolated Network Performance Degradation </title>
  <description>
    Type: Incident
    Duration: 32 minutes

    Affected Components: Phoenix
    Feb 8, 21:47:00 GMT+0 - Monitoring - We are currently investigating an issue that is impacting a subset of our customers with degraded network performance. Our Network Engineers have been alerted and are investigating the devices impacted.

Your patience and understanding are greatly appreciated as we work to restore normal operations. Stay tuned for updates. Thank you for your cooperation.  Feb 8, 22:19:00 GMT+0 - Monitoring - Our Network Engineering team has identified an issue with one of our upstream internet service providers. They were able to resolve the issue for the impacted customers by shifting traffic to alternate internet service providers. 

We appreciate your patience and understanding in this matter. Feb 8, 22:19:00 GMT+0 - Resolved - This incident has been resolved.  
  
We appreciate your patience and understanding in this matter. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 32 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:47:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are currently investigating an issue that is impacting a subset of our customers with degraded network performance. Our Network Engineers have been alerted and are investigating the devices impacted.

Your patience and understanding are greatly appreciated as we work to restore normal operations. Stay tuned for updates. Thank you for your cooperation. .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:19:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Our Network Engineering team has identified an issue with one of our upstream internet service providers. They were able to resolve the issue for the impacted customers by shifting traffic to alternate internet service providers. 

We appreciate your patience and understanding in this matter..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:19:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.  
  
We appreciate your patience and understanding in this matter..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 8 Feb 2025 21:47:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cm6wr0a2900163kz3hiomvf0h</link>
  <guid>https://status.patmos.tech/incident/cm6wr0a2900163kz3hiomvf0h</guid>
</item>

<item>
  <title>Power Outage Notice</title>
  <description>
    Type: Incident
    Duration: 4 hours and 52 minutes

    Affected Components: Kansas City 1 (Tracy)
    Sep 22, 05:42:04 GMT+0 - Identified - We&#039;re currently experiencing an isolated power outage. Please be advised that our backup power systems are operational. Our team is aware of the situation and is diligently investigating the root cause. 

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you.  Sep 22, 10:34:31 GMT+0 - Resolved - We are glad to announce that power has been restored to our data center. Our operations are back online and running smoothly. 

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team.  
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 4 hours and 52 minutes</p>
    <p><strong>Affected Components:</strong> Kansas City 1 (Tracy)</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:42:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;re currently experiencing an isolated power outage. Please be advised that our backup power systems are operational. Our team is aware of the situation and is diligently investigating the root cause. 

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you. .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:34:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We are glad to announce that power has been restored to our data center. Our operations are back online and running smoothly. 

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team. .&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 22 Sep 2024 05:42:04 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cm1d5iqc6001y13a4sf2uwqaq</link>
  <guid>https://status.patmos.tech/incident/cm1d5iqc6001y13a4sf2uwqaq</guid>
</item>

<item>
  <title>Utility Power Out, Running on Generators </title>
  <description>
    Type: Incident
    Duration: 2 hours and 16 minutes

    Affected Components: Kansas City 1 (Tracy)
    Sep 21, 12:52:14 GMT+0 - Investigating - Currently Evergy is not supplying power to the Kansas City Datacenter.

We successfully transitioned to generator power and are monitoring the situation.

No DC clients or services have been affected. 

We will update as the situation evolves. Sep 21, 15:08:04 GMT+0 - Resolved - This incident has been resolved.

Utility power has been restored.

No clients or services were affected. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 16 minutes</p>
    <p><strong>Affected Components:</strong> Kansas City 1 (Tracy)</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:52:14&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Currently Evergy is not supplying power to the Kansas City Datacenter.

We successfully transitioned to generator power and are monitoring the situation.

No DC clients or services have been affected. 

We will update as the situation evolves..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:08:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.

Utility power has been restored.

No clients or services were affected..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 21 Sep 2024 12:52:14 +0000</pubDate>
  <link>https://status.patmos.tech/incident/cm1c5g32f00323jpmsgpvw402</link>
  <guid>https://status.patmos.tech/incident/cm1c5g32f00323jpmsgpvw402</guid>
</item>

<item>
  <title>Isolated Power Outage Notice</title>
  <description>
    Type: Incident
    Duration: 18 hours and 45 minutes

    Affected Components: Kansas City 1 (Tracy)
    Aug 1, 11:32:00 GMT+0 - Investigating - We&#039;re currently experiencing an isolated power outage due to severe storms in the Kansas and Missouri regions.   

On Wednesday night, strong storms with reported wind speeds as high as 80 mph rolled through Kansas and into Missouri, causing widespread outages. The Salina, Topeka, Lawrence, and Kansas City metro areas had significant damage. While our facility has no physical damage, we first saw commercial power failures at 10:45 PM local time.  

We are currently investigating an issue with our backup power systems for a section of our racks. Our team is aware of the situation and is diligently investigating the root cause.   

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you. Aug 1, 13:40:00 GMT+0 - Monitoring - Our facilities continue to run on backup/generator power. We are at full fuel capacity, which will allow us to maintain operations throughout the repair efforts upstream with the commercial power provider. Cloud, backup, disaster recovery service, and colocation environments have not been impacted. A small portion of our dedicated server clients are impacted, and our team is working to restore services to the zone.   
  
We will provide another update as we have more details and will confirm when all systems are fully operational. Aug 2, 06:16:44 GMT+0 - Resolved - We are glad to announce that power has been fully restored to our data center. Our operations are back online and running smoothly. 

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team.  
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 18 hours and 45 minutes</p>
    <p><strong>Affected Components:</strong> Kansas City 1 (Tracy)</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:32:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;re currently experiencing an isolated power outage due to severe storms in the Kansas and Missouri regions.   

On Wednesday night, strong storms with reported wind speeds as high as 80 mph rolled through Kansas and into Missouri, causing widespread outages. The Salina, Topeka, Lawrence, and Kansas City metro areas had significant damage. While our facility has no physical damage, we first saw commercial power failures at 10:45 PM local time.  

We are currently investigating an issue with our backup power systems for a section of our racks. Our team is aware of the situation and is diligently investigating the root cause.   

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:40:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Our facilities continue to run on backup/generator power. We are at full fuel capacity, which will allow us to maintain operations throughout the repair efforts upstream with the commercial power provider. Cloud, backup, disaster recovery service, and colocation environments have not been impacted. A small portion of our dedicated server clients are impacted, and our team is working to restore services to the zone.   
  
We will provide another update as we have more details and will confirm when all systems are fully operational..&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:16:44&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We are glad to announce that power has been fully restored to our data center. Our operations are back online and running smoothly. 

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team. .&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 1 Aug 2024 11:32:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/clzb9qlqb14698h4odomg6oj9z</link>
  <guid>https://status.patmos.tech/incident/clzb9qlqb14698h4odomg6oj9z</guid>
</item>

<item>
  <title>Customer Portal Maintenance - 4/21/2024</title>
  <description>
    Type: Maintenance
    Duration: 2 hours and 16 minutes

    Affected Components: Phoenix, Dallas
    Apr 21, 09:15:59 GMT+0 - Completed - We have completed this maintenance and restored access to the [my.patmos.tech](http://my.patmos.tech) customer portal.  Apr 21, 07:00:00 GMT+0 - Identified - We will be performing maintenance on our [my.patmos.tech](http://my.patmos.tech) customer portal at 2:00 AM CDT on 4/21/2024 due to an upstream vendor upgrade. During this time, any support needs will be directed to our phone queue at +1-913-890-8250.

This maintenance will not impact any services we offer to our customers. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 hours and 16 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix, Dallas</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:15:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  We have completed this maintenance and restored access to the [my.patmos.tech](http://my.patmos.tech) customer portal. .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be performing maintenance on our [my.patmos.tech](http://my.patmos.tech) customer portal at 2:00 AM CDT on 4/21/2024 due to an upstream vendor upgrade. During this time, any support needs will be directed to our phone queue at +1-913-890-8250.

This maintenance will not impact any services we offer to our customers..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 21 Apr 2024 07:00:00 +0000</pubDate>
  <link>https://status.patmos.tech/maintenance/clv978mb141414broori97bce1</link>
  <guid>https://status.patmos.tech/maintenance/clv978mb141414broori97bce1</guid>
</item>

<item>
  <title>Power Outage Notice</title>
  <description>
    Type: Incident
    Duration: 7 hours and 56 minutes

    Affected Components: Phoenix
    Apr 8, 15:23:00 GMT+0 - Identified - We&#039;re currently experiencing an isolated power outage. Our team is aware of the situation and is diligently investigating the root cause. 

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you.  Apr 8, 15:42:00 GMT+0 - Monitoring - We are glad to announce that power has been restored to the impacted sections of our data center. We are in the process of verifying proper recovery of any impacted systems.  

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team.  Apr 8, 23:18:32 GMT+0 - Resolved - Power has remained stable throughout the day with no further incidents, and services were restored to impacted customers several hours ago.

Our teams are continuing to work with the vendors responsible for maintaining our power delivery protection systems to further investigate what caused the incident today. They will be assessing the health of the power delivery protection systems that support our customers and applying any repairs as needed. 

We appreciate your patience and understanding in this matter. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 hours and 56 minutes</p>
    <p><strong>Affected Components:</strong> Phoenix</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:23:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;re currently experiencing an isolated power outage. Our team is aware of the situation and is diligently investigating the root cause. 

We understand the inconvenience this may cause and assure you that we&#039;re working urgently to address the issue. Your patience and cooperation are greatly appreciated. Stay tuned for further updates. Thank you. .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:42:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are glad to announce that power has been restored to the impacted sections of our data center. We are in the process of verifying proper recovery of any impacted systems.  

We will continue to closely monitor the situation to ensure stability. Thank you for your patience and understanding during the outage. If you encounter further issues, please contact our support team. .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:18:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Power has remained stable throughout the day with no further incidents, and services were restored to impacted customers several hours ago.

Our teams are continuing to work with the vendors responsible for maintaining our power delivery protection systems to further investigate what caused the incident today. They will be assessing the health of the power delivery protection systems that support our customers and applying any repairs as needed. 

We appreciate your patience and understanding in this matter..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 8 Apr 2024 15:23:00 +0000</pubDate>
  <link>https://status.patmos.tech/incident/clur3zn1t26932bqoezh7462jn</link>
  <guid>https://status.patmos.tech/incident/clur3zn1t26932bqoezh7462jn</guid>
</item>

  </channel>
  </rss>