<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <atom:link href="https://status.torproject.org/" rel="alternate" type="text/html"/>
    <title>GitLab on Tor Project status</title>
    <link>https://status.torproject.org/affected/gitlab/</link>
    <description>Incident history</description>
    <generator>github.com/cstate</generator>
    <language>en</language>
    
    <lastBuildDate>Tue, 03 Dec 2024 03:30:00 +0000</lastBuildDate>
    <atom:link href="https://status.torproject.org/affected/gitlab/index.xml" rel="self" type="application/rss+xml" />

      <item>
        <title>[Resolved] Router maintenance at Hetzner</title>
        <link>https://status.torproject.org/issues/2024-12-03-hetzner-router-maintenance/</link>
        <pubDate>Tue, 03 Dec 2024 03:30:00 +0000</pubDate>
        <guid>https://status.torproject.org/issues/2024-12-03-hetzner-router-maintenance/</guid>
        <category>2024-12-03 05:30:00 &#43;0000</category>
        <description>Hetzner has planned an emergency maintenance window on all of their routers, which will cause a network outage on all of our hosts in their datacenters. The maintenance window is scheduled for December 3rd, 2024, from 3:30 to 5:30 UTC, during which the network may experience intermittent outages.
Services should come back online automatically as soon as network connectivity is restored by Hetzner.</description>
        <content type="html">&lt;p&gt;Hetzner has planned an emergency maintenance window on all of their routers,
which will cause a network outage on all of our hosts in their datacenters. The
maintenance window is planned on December 3rd 2024 from 3:30 UTC to 5:30 UTC,
during which the network may experience spurious outages.&lt;/p&gt;
&lt;p&gt;Services should come back online automatically as soon as network connectivity
is restored by Hetzner.&lt;/p&gt;
</content>
      </item>
    
      <item>
        <title>[Resolved] GitLab migration to another machine cluster</title>
        <link>https://status.torproject.org/issues/2024-11-28-gitlab-migration/</link>
        <pubDate>Thu, 28 Nov 2024 18:00:00 +0000</pubDate>
        <guid>https://status.torproject.org/issues/2024-11-28-gitlab-migration/</guid>
        <category>2024-11-29 14:04:00 &#43;0000</category>
        <description>Starting on November 28th at 18:00 UTC, gitlab.torproject.org will be taken offline in order to migrate it to our other machine cluster.
The transfer is currently estimated to take 18 hours, so GitLab should come back online the next day, Friday the 29th. If we&amp;rsquo;re lucky, the transfer might finish sooner.
If you have any issues during that time, please reach out to us on IRC or via email.</description>
        <content type="html">&lt;p&gt;Starting on November 28th at 18:00 UTC, gitlab.torproject.org will be taken
offline in order to migrate it to our other machine cluster.&lt;/p&gt;
&lt;p&gt;The transfer is currently estimated to take 18 hours, so GitLab should come back
online the next day, Friday the 29th. If we&amp;rsquo;re lucky, the transfer might finish sooner.&lt;/p&gt;
&lt;p&gt;If you have any issues during that time, please reach out to us on IRC or via
email.&lt;/p&gt;
</content>
      </item>
    
      <item>
        <title>[Resolved] GitLab and CollecTor outage</title>
        <link>https://status.torproject.org/issues/2023-12-06-gitlab-collector-outage/</link>
        <pubDate>Wed, 06 Dec 2023 14:00:00 +0000</pubDate>
        <guid>https://status.torproject.org/issues/2023-12-06-gitlab-collector-outage/</guid>
        <category>2023-12-06 15:29:09 &#43;0000</category>
        <description>We&amp;rsquo;ve experienced heavy load and unresponsiveness on some of our services (e.g. GitLab and CollecTor), leading to outages and disruptions.
The issue seemed to have resolved itself; initial investigation pointed to an upstream routing issue.
Update: the issue has crept up again. The root cause was elevated temperature of the hard drives in the affected server. Upstream has replaced the fans in the server and the situation has returned to normal.
See issue tpo/tpa/team#41429 for detailed analysis and updates.</description>
        <content type="html">&lt;p&gt;We&amp;rsquo;ve experienced heavy load and unresponsiveness on some of our
services (e.g. Gitlab and CollecTor) leading to outages and
disruptions.&lt;/p&gt;
&lt;p&gt;&lt;!-- raw HTML omitted --&gt;The issue seems to have resolved itself, investigation seemed to show
this was a routing issue upstream.&lt;!-- raw HTML omitted --&gt;&lt;/p&gt;
&lt;p&gt;Update: issue have crept up again, root cause was
elevated temperature with the hard drives on the affected
server. Upstream has replaced fans in the server and situation has
returned to normal.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41429&#34;&gt;issue tpo/tpa/team#41429&lt;/a&gt; for detailed analysis and updates.&lt;/p&gt;
</content>
      </item>
    
      <item>
        <title>[Resolved] Outage at main provider</title>
        <link>https://status.torproject.org/issues/2023-02-04-hetzner-outage/</link>
        <pubDate>Sat, 04 Feb 2023 01:57:31 +0000</pubDate>
        <guid>https://status.torproject.org/issues/2023-02-04-hetzner-outage/</guid>
        <category>2023-02-04 03:33:46 &#43;0000</category>
        <description>We are witnessing a large outage at our main service provider, Hetzner. According to the information we have gathered so far, four switches (4!) have failed, affecting four (yes, again, 4!) of the servers in the 8-node Ganeti cluster.
Affected servers: check-01, chives, colchicifolium, cupani, fsn-node-01, fsn-node-02, fsn-node-04, fsn-node-07, gitlab-02, gnt-fsn, henryi, loghost01, majus, materculae, media-01, onionoo-backend-01, onionoo-backend-02, onionoo-frontend-02, perdulce, polyanthum, relay-01, static-master-fsn, staticiforme, submit-01, tbb-nightlies-master, weather-01.
Upstream resolved the situation after a few hours of downtime.</description>
        <content type="html">&lt;p&gt;We are witnessing a large outage at our main service provider,
Hetzner. According to the information we have gathered so far, four
switches (4!) have failed and that affects four (yes, again, 4!) of the
servers in the 8-node Ganeti cluster.&lt;/p&gt;
&lt;p&gt;Affected servers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;check-01&lt;/li&gt;
&lt;li&gt;chives&lt;/li&gt;
&lt;li&gt;colchicifolium&lt;/li&gt;
&lt;li&gt;cupani&lt;/li&gt;
&lt;li&gt;fsn-node-01&lt;/li&gt;
&lt;li&gt;fsn-node-02&lt;/li&gt;
&lt;li&gt;fsn-node-04&lt;/li&gt;
&lt;li&gt;fsn-node-07&lt;/li&gt;
&lt;li&gt;gitlab-02&lt;/li&gt;
&lt;li&gt;gnt-fsn&lt;/li&gt;
&lt;li&gt;henryi&lt;/li&gt;
&lt;li&gt;loghost01&lt;/li&gt;
&lt;li&gt;majus&lt;/li&gt;
&lt;li&gt;materculae&lt;/li&gt;
&lt;li&gt;media-01&lt;/li&gt;
&lt;li&gt;onionoo-backend-01&lt;/li&gt;
&lt;li&gt;onionoo-backend-02&lt;/li&gt;
&lt;li&gt;onionoo-frontend-02&lt;/li&gt;
&lt;li&gt;perdulce&lt;/li&gt;
&lt;li&gt;polyanthum&lt;/li&gt;
&lt;li&gt;relay-01&lt;/li&gt;
&lt;li&gt;static-master-fsn&lt;/li&gt;
&lt;li&gt;staticiforme&lt;/li&gt;
&lt;li&gt;submit-01&lt;/li&gt;
&lt;li&gt;tbb-nightlies-master&lt;/li&gt;
&lt;li&gt;weather-01&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Upstream resolved the situation after a few hours of
downtime. According to Hetzner it was &amp;ldquo;a short disruption of a line
card in one of our access routers&amp;rdquo;. Details of the incident are
available in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41057&#34;&gt;tpo/tpa/team#41057&lt;/a&gt;.&lt;/p&gt;
</content>
      </item>
    
      <item>
        <title>[Resolved] Routing issues at main provider</title>
        <link>https://status.torproject.org/issues/2022-01-25-routing-issues/</link>
        <pubDate>Tue, 25 Jan 2022 06:00:00 +0000</pubDate>
        <guid>https://status.torproject.org/issues/2022-01-25-routing-issues/</guid>
        <category>2022-01-27 18:58:00 &#43;0000</category>
        <description>We are experiencing intermittent network outages that typically resolve themselves within a few hours. Preliminary investigations point to routing issues at Hetzner, but we have yet to reach a solid diagnosis. We&amp;rsquo;re tracking this in issue 40601.</description>
        <content type="html">&lt;p&gt;We are experiencing intermittent network outages that typically
resolve themselves within a few hours. Preliminary investigations seem
to point at routing issues at Hetzner, but we have yet to get a solid
diagnostic. We&amp;rsquo;re following that issue in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40601&#34;&gt;issue 40601&lt;/a&gt;.&lt;/p&gt;
</content>
      </item>
    
  </channel>
</rss>
