Previous messages


August 24, 2020, 7:37 PM EST: Connectivity is back to normal

August 24, 2020, 6:35 PM EST: We have partial connectivity back, but are still investigating.

August 24, 2020, 3:15 PM EST: There are issues connecting to the data centre. We're investigating.

August 21, 2020, 6:00 PM EST: The pump has been repaired, cooling is restored, systems are up.
Scratch purging is postponed until the evening of Friday Aug 28th, 2020.

August 19, 2020, 4:40 PM EST: Update: The current estimate is to have the cooling restored on Friday and we hope to have the systems available for users on Saturday August 22, 2020.

August 17, 2020, 4:00 PM EST: Unfortunately, after taking the pump apart it was determined that there was a more serious failure of the main drive shaft, not just the seal. As a new one will need to be sourced or fabricated, we estimate that it will take at least a few more days to get the part and complete the repairs to restore cooling. Sorry for the inconvenience.

August 15, 2020, 1:00 PM EST: Due to parts availability for repairing the failed pump and cooling system, it is unlikely that systems can be restored before Monday afternoon at the earliest.

August 15, 2020, 12:04 AM EST: A primary pump seal in the cooling infrastructure has blown, and parts availability cannot be determined until tomorrow. All systems are shut down as there is no cooling. If parts are available, systems may be back late tomorrow at the earliest. Check here for updates.

August 14, 2020, 9:04 PM EST: Tomorrow's /scratch purge has been postponed.

August 14, 2020, 9:00 PM EST: Staff are at the data centre. It looks like one of the pumps has a seal that is leaking badly.

August 14, 2020, 8:37 PM EST: We seem to be undergoing a thermal shutdown at the datacenter.

August 14, 2020, 8:20 PM EST: Network problems to niagara/mist. We are investigating.

August 13, 2020, 10:40 AM EST: Network is fixed, scheduler and other services are back.

August 13, 2020, 8:20 AM EST: We had an IB switch failure, which is affecting a subset of nodes, including the scheduler nodes.

August 10, 2020, 7:30 PM EST: Scheduler fully operational again.

August 10, 2020, 3:00 PM EST: Scheduler partially functional: jobs can be submitted and are running.

August 10, 2020, 2:00 PM EST: Scheduler is temporarily inoperative.

August 7, 2020, 9:15 PM EST: Network is fixed, scheduler and other services are coming back.

August 7, 2020, 8:20 PM EST: Disruption of part of the network in the data centre. This is causing issues with the scheduler, the Mist login node, and possibly other services. We are investigating.

July 30, 2020, 9:00 AM: Project backup in progress but incomplete. Please be aware that after we deployed the new, larger storage appliance for scratch and project two months ago, we started a full backup of project (1.5 PB). This backup is taking a while to complete, and a few areas have not yet been fully backed up. Please be careful not to delete things from project that you still need, in particular recently added material.

July 27, 2020, 5:00 PM: Scheduler issues resolved.

July 27, 2020, 3:00 PM: Scheduler issues. We are investigating.

July 13, 4:40 PM: Most systems are available again. Only Mist is still being brought up.

July 13, 10:00 AM: SciNet/Niagara Downtime In Progress

SciNet/Niagara Downtime Announcement, July 13, 2020
All resources at SciNet will undergo a maintenance shutdown on Monday July 13, 2020, starting at 10:00 am EDT, for file system and scheduler upgrades. There will be no access to any of the SciNet systems (Niagara, Mist, HPSS, Teach cluster, or the file systems) during this time. We expect to be able to bring the systems back around 3 PM (EDT) on the same day.

June 29, 6:21:00 PM: Systems are available again.

June 29, 12:30:00 PM: A power outage caused a thermal shutdown.

June 20, 2020, 10:24 PM: File systems are back up. Unfortunately, all running jobs will have died; users are asked to resubmit them.

June 20, 2020, 9:48 PM: An issue with the file systems is causing trouble. We are investigating the cause.

June 15, 2020, 10:30 PM: A power glitch caused some compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

June 12, 2020, 6:15 PM: Two power glitches during the night caused some compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

June 6, 2020, 6:06 AM: A power glitch caused some compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

May 24, 2020, 8:20 AM: A power glitch this morning caused all compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

May 7, 2020, 6:05 PM: Maintenance shutdown is finished. Most systems are back in production.

May 6, 2020, 7:08 AM: Two-day datacentre maintenance shutdown has started.

SciNet/Niagara Downtime Announcement, May 6-7, 2020

All resources at SciNet will undergo a two-day maintenance shutdown on May 6th and 7th 2020, starting at 7 am EDT on Wednesday May 6th. There will be no access to any of the SciNet systems (Niagara, Mist, HPSS, Teach cluster, or the file systems) or systems hosted at the SciNet data centre. We expect to be able to bring the systems back online the evening of May 7th.

May 4, 2020, 7:51 AM: A power glitch this morning caused compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

May 3, 2020, 8:20 AM: A power glitch this morning caused all compute nodes to be rebooted: jobs running at the time may have failed; users are asked to resubmit these jobs.

April 28, 2020, 7:20 AM: A power glitch this morning caused all compute nodes to be rebooted: jobs running at the time have failed; users are asked to resubmit these jobs.

April 20, 2020: Security Incident at Cedar; implications for Niagara users

Last week, it became evident that the Cedar GP cluster had been compromised for several weeks. The passwords of at least two Compute Canada users were known to the attackers. One of these was used to escalate privileges on Cedar, as explained at https://status.computecanada.ca/view_incident?incident=423.

These accounts were also used to log in to Niagara, but Niagara did not have the same security loophole as Cedar (which has since been fixed), and no further escalation was observed on Niagara.

Reassuring as that may sound, it is not known how the passwords of the two user accounts were obtained. Given this uncertainty, the SciNet team *strongly* recommends that you change your password at https://ccdb.computecanada.ca/security/change_password, and that you remove any existing SSH keys and generate new ones (see https://docs.scinet.utoronto.ca/index.php/SSH_keys and the sketch below).
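
For reference, here is a minimal sketch of generating a replacement key pair with OpenSSH; the file name is only an example, and the full procedure (including where to install the new public key and how to remove the old one) is on the SSH keys page linked above:

  # generate a new ed25519 key pair; choose a strong passphrase when prompted
  ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_scinet
  # display the new public key so it can replace the old one
  cat ~/.ssh/id_ed25519_scinet.pub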

Tue 30 Mar 2020 14:55:14 EDT: Burst Buffer available again.

Fri Mar 27 15:29:00 EDT 2020: SciNet systems are back up. Only the Burst Buffer remains offline, its maintenance is expected to be finished early next week.

Thu Mar 26 23:05:00 EDT 2020: Some aspects of the maintenance took longer than expected. The systems will not be back up until some time tomorrow, Friday March 27, 2020.

Wed Mar 25 7:00:00 EDT 2020: SciNet/Niagara downtime started.

Mon Mar 23 18:45:10 EDT 2020: File system issues were resolved.

Mon Mar 23 18:01:19 EDT 2020: There is currently an issue with the main Niagara filesystems. This affects all systems; all jobs have been killed. The issue is being investigated.

Fri Mar 20 13:15:33 EDT 2020: There was a power glitch at the datacentre at 8:50 AM, which resulted in jobs getting killed. Please resubmit failed jobs.

COVID-19 Impact on SciNet Operations, March 18, 2020

Although the University of Toronto is closing some of its research operations on Friday March 20 at 5 pm EDT, this does not affect the SciNet systems (such as Niagara, Mist, and HPSS), which will remain operational.

SciNet/Niagara Downtime Announcement, March 25-26, 2020

All resources at SciNet will undergo a two-day maintenance shutdown on March 25th and 26th 2020, starting at 7 am EDT on Wednesday March 25th. There will be no access to any of the SciNet systems (Niagara, Mist, HPSS, Teach cluster, or the file systems) during this time.

This shutdown is necessary to finish the expansion of the Niagara cluster and its storage system.

We expect to be able to bring the systems back online the evening of March 26th.

March 9, 2020, 11:24 PM: HPSS services are temporarily suspended for emergency maintenance.

March 7, 2020, 10:15 PM: File system issues have been cleared.

March 6, 2020, 7:30 PM: File system issues; we are investigating

March 2, 2020, 1:30 PM: For the extension of Niagara, the operating system on all Niagara nodes has been upgraded from CentOS 7.4 to 7.6. This required all nodes to be rebooted. Running compute jobs are allowed to finish before the compute node gets rebooted. Login nodes have all been rebooted, as have the datamover nodes and the jupyterhub service.

Feb 24, 2020, 1:30PM: The Mist login node got rebooted. It is back, but we are still monitoring the situation.

Feb 12, 2020, 11:00 AM: The Mist GPU cluster is now available to users.

Feb 11, 2020, 2:00PM: The Niagara compute nodes were accidentally rebooted, killing all running jobs.

Feb 10, 2020, 7:00 PM: HPSS is back to normal.

Jan 30, 2020, 12:01 PM: We are having an issue with HPSS: the disk cache is full. We have put a reservation on the whole system (Globus, plus the archive and vfs queues) until it has had a chance to clear some space in the cache.

Jan 21, 2020, 4:05 PM: There was a partial power outage that took down a large number of the compute nodes. If your job died during this period, please resubmit it.

Jan 13, 2020, 7:35 PM: Maintenance finished.

Jan 13, 2020, 8:20 AM: The announced maintenance downtime started (see below).

Jan 9 2020, 11:30 AM: External ssh connectivity restored; the issue was related to the university network.

Jan 9 2020, 9:24 AM: We received reports of users having trouble connecting into the SciNet data centre; we're investigating. Systems are up and running and jobs are fine.

As a workaround, in the meantime it appears to be possible to log into graham, cedar, or beluga, and then ssh from there to niagara (see the sketch below).
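
For example, a minimal sketch of the hop, assuming OpenSSH and that your Compute Canada credentials work on both clusters (replace USERNAME with your own account; the host names are the usual public login addresses):

  # two-step hop via cedar (graham or beluga work the same way)
  ssh USERNAME@cedar.computecanada.ca
  ssh USERNAME@niagara.scinet.utoronto.ca

  # or, with OpenSSH 7.3 or newer, in a single command using a jump host
  ssh -J USERNAME@cedar.computecanada.ca USERNAME@niagara.scinet.utoronto.ca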

Downtime announcement: To prepare for the upcoming expansion of Niagara, there will be a one-day maintenance shutdown on January 13th 2020, starting at 8 am EST. There will be no access to Niagara, Mist, HPSS or teach, nor to their file systems during this time.

2019

December 13, 9:00 AM EST: Issues resolved.

December 13, 8:20 AM EST: Overnight issue is now preventing logins to Niagara and other services. Possibly a file system issue, we are investigating.

Fri, Nov 15 2019, 11:00 PM (EST) Niagara and most of the main systems are now available.

Fri, Nov 15 2019, 7:50 PM (EST) SOSCIP GPU cluster is up and accessible. Work on the other systems continues.

Fri, Nov 15 2019, 5:00 PM (EST) Infrastructure maintenance done, upgrades still in process.

Fri, Nov 15 2019, 7:00 AM (EST) Maintenance shutdown of the SciNet data centre has started. Note: scratch purging has been postponed until Nov 17.

Announcement: The SciNet datacentre will undergo a maintenance shutdown on Friday November 15th 2019, from 7 am to 11 pm (EST), with no access to any of the SciNet systems (Niagara, P8, SGC, HPSS, Teach cluster, or the filesystems) during that time.

Sat, Nov 2 2019, 1:30 PM (update): The chiller has been fixed; all systems are operational.

Fri, Nov 1 2019, 4:30 PM (update): We are operating on free cooling, so we have brought up about half of the Niagara compute nodes to reduce the cooling load. Access, storage, and other systems should now be available.

Fri, Nov 1 2019, 12:05 PM (update): A power module in the chiller has failed and needs to be replaced. We should be able to operate in free cooling if the temperature stays cold enough, but we may not be able to run all systems. No eta yet on when users will be able to log back in.

Fri, Nov 1 2019, 9:15 AM (update): There was an automated shutdown because of rising temperatures, causing all systems to go down. We are investigating; check here for updates.

Fri, Nov 1 2019, 8:16 AM: Unexpected data centre issue: Check here for updates.

Thu 1 Aug 2019 5:00:00 PM Systems are up and operational.

Thu 1 Aug 2019 7:00:00 AM: Scheduled Downtime Maintenance of the SciNet Datacenter. All systems will be down and unavailable starting 7am until the evening.

Fri 26 Jul 2019, 16:02:26 EDT: There was an issue with the Burst Buffer at around 3 PM; it has since been resolved. The Burst Buffer is OK again.

Sun 30 Jun 2019 The SOSCIP BGQ and P7 systems were decommissioned on June 30th, 2019. The BGQdev front end node and storage are still available.

Wed 19 Jun 2019, 1:20:00 PM: The BGQ is back online.

Wed 19 Jun 2019, 10:00:00 AM: The BGQ is still down; the SOSCIP GPU nodes should be back up.

Wed 19 Jun 2019, 1:40:00 AM: There was an issue with the SOSCIP BGQ and GPU Cluster last night at about 1:42 AM, probably a power fluctuation that took them down.

Wed 12 Jun 2019, 3:30 AM - 7:40 AM: Intermittent system issues on Niagara's project and scratch file systems because the limit on the total number of files was reached. We have increased the total number of files allowed on the file system.

Thu 30 May 2019, 11:00:00 PM: The maintenance downtime of SciNet's data center has finished, and systems are being brought online now. You can check the progress here. Some systems might not be available until Friday morning.
Some action on the part of users will be required when they first connect again to the Niagara login nodes or datamovers. This is due to the security upgrade of the Niagara cluster, which is now in line with currently accepted best practices.
The details of the required actions can be found on the SSH Changes in May 2019 wiki page.

Wed 29-30 May 2019 The SciNet datacentre will undergo a two-day maintenance shutdown, starting at 7 am EDT on Wednesday May 29th. There will be no access to any of the SciNet systems (Niagara, P7, P8, BGQ, SGC, HPSS, Teach cluster, or the file systems) during this time.

SCHEDULED SHUTDOWN:

Please be advised that on Wednesday May 29th through Thursday May 30th, the SciNet datacentre will undergo a two-day maintenance shutdown, starting at 7 am EDT on Wednesday May 29th. There will be no access to any of the SciNet systems (Niagara, P7, P8, BGQ, SGC, HPSS, Teach cluster, or the file systems) during this time.

This is necessary to finish the installation of an emergency power generator, to perform the annual cooling tower maintenance, and to enhance login security.

We expect to be able to bring the systems back online the evening of May 30th. Due to the enhanced login security, users' ssh clients will need to update their known-hosts lists. More detailed information on this procedure will be sent shortly before the systems are back online.

Fri 5 Apr 2019: Software updates on Niagara: The default CCEnv software stack now uses avx512 on Niagara, and there is now a NiaEnv/2019b stack ("epoch").

Thu 4 Apr 2019: The 2019 compute and storage allocations have taken effect on Niagara.

NOTE: There is scheduled network maintenance for Friday April 26th, 12 am-8 am, on the SciNet datacentre external network connection. This will not affect internal connections or running jobs; however, remote connections may see interruptions during this period.


Wed 24 Apr 2019 14:14 EDT: HPSS is back on service. Library and robot arm maintenance finished.

Wed 24 Apr 2019 08:35 EDT: HPSS out of service this morning for library and robot arm maintenance.

Fri 19 Apr 2019 17:40 EDT: HPSS robot arm has been released and is back to normal operations.

Fri 19 Apr 2019 14:00 EDT: Problems with the HPSS library robot have been detected.

Wed 17 Apr 2019 15:35 EDT: Network connection is back.

Wed 17 Apr 2019 15:12 EDT: Network connection down. Investigating.

Tue 9 Apr 2019 22:24:14 EDT: Network connection restored.

Tue 9 Apr 2019, 15:20: Network connection down. Investigating.

Fri 5 Apr 2019: Planned, short outage in connectivity to the SciNet datacentre from 7:30 am to 8:55 am EDT for maintenance of the network. This outage will not affect running or queued jobs.


April 4, 2019: The 2019 compute and storage allocations will take effect on Niagara. Running jobs will not be affected by this change and will run their course. Queued jobs' priorities will be updated to reflect the new fairshare values later in the day. The queue should fully reflect the new fairshare values in about 24 hours.

It may be necessary to reboot the login nodes at some point tomorrow, which could result in a short interruption of connectivity, but which will have no effect on running or queued jobs.

There will be updates to the software stack on this day as well.

March 25, 3:05 PM EST: Most systems back online, other services should be back shortly.

March 25, 12:05 PM EST: Power is back at the datacentre, but it is not yet known when all systems will be back up. Keep checking here for updates.

March 25, 11:27 AM EST: A power outage occurred at the datacentre, causing all services to go down. Check here for updates.

Thu Mar 21 10:37:28 EDT 2019: HPSS is back in service

HPSS out of service on Tue, Mar 19 at 9 AM, for tape library expansion and relocation. It's possible the downtime will extend to Wed, Mar 20.

January 21, 4:00 PM: HPSS is back in service. Thank you for your patience.

January 18, 5:00 PM: We did practically all of the HPSS upgrades (software/hardware); however, the main client node, archive02, is presenting an issue we have not yet been able to resolve. We will try to resume work over the weekend with cool heads, or on Monday. Sorry, but this is an unforeseen delay. Jobs in the queue will remain there, and we will delay the scratch purging by one week.

January 16, 11:00 PM: HPSS is being upgraded, as announced.

January 16, 8:00 PM: Systems are coming back up and should be accessible to users now.

January 15, 8:00 AM: Data centre downtime in effect.

  • Downtime Announcement for January 15 and 16, 2019

The SciNet datacentre will need to undergo a two-day maintenance shutdown in order to perform electrical work, repairs and maintenance. The electrical work is in preparation for the upcoming installation of an emergency power generator and a larger UPS, which will result in increased resilience to power glitches and outages. The shutdown is scheduled to start on Tuesday January 15, 2019, at 7 am and will last until Wednesday January 16, 2019, some time in the evening. There will be no access to any of the SciNet systems (Niagara, P7, P8, BGQ, SGC, HPSS, Teach cluster, or the filesystems) during this time. Check back here for up-to-date information on the status of the systems.

Note: this downtime was originally scheduled for Dec. 18, 2018, but has been postponed and combined with the annual maintenance downtime.

  • December 24, 2018, 11:35 AM EST: Most systems are operational again. If you had compute jobs running yesterday at around 3:30PM, they likely crashed - please check them and resubmit if needed.
  • December 24, 2018, 10:40 AM EST: Repairs have been made, and the file systems are starting to be mounted on the cluster.
  • December 23, 2018, 3:38 PM EST: Issues with the file systems (home, scratch and project). We are investigating, it looks like a hardware issue that we are trying to work around. Note that the absence of /home means you cannot log in with ssh keys. All compute jobs crashed around 3:30 PM EST on Dec 23. Once the system is properly up again, please resubmit your jobs. Unfortunately, at this time of year, it is not possible to give an estimate on when the system will be operational again.
  • Thu Nov 22 14:20:00 EDT 2018: HPSS back in service
  • Thu Nov 22 08:55:00 EDT 2018: HPSS offline for scheduled maintenance
  • Tue Nov 20 16:30:00 EDT 2018: HPSS offline on Thursday at 9 AM for installation of new LTO8 drives in the tape library.
  • Tue Oct 9 12:16:00 EDT 2018: BGQ compute nodes are up.
  • Sun Oct 7 20:24:26 EDT 2018: SGC and BGQ front end are available; BGQ compute nodes are down due to a cooling issue.
  • Sat Oct 6 23:16:44 EDT 2018: There were some problems bringing up SGC and BGQ; they will remain offline for now.
  • Sat Oct 6 18:36:35 EDT 2018: Electrical work finished, power restored. Systems are coming online.
  • July 18, 2018: login.scinet.utoronto.ca is now disabled; GPC $SCRATCH and $HOME are decommissioned.
  • July 12, 2018: There was a short power interruption around 10:30 am which caused most of the systems (Niagara, SGC, BGQ) to reboot and any running jobs to fail.
  • July 11, 2018: P7s moved to the BGQ filesystem; P8s moved to the Niagara filesystem.
  • May 24, 2018, 9:25 PM EST: The data center is up, and all systems are operational again.
  • May 24, 2018, 7:00 AM EST: The data centre is under annual maintenance. All systems are offline. Systems are expected to be back late afternoon today; check for updates on this page.
  • May 18, 2018: Announcement: Annual scheduled maintenance downtime: Thursday May 24, starting 7:00 AM
  • May 16, 2018: Cooling restored, systems online
  • May 16, 2018: Cooling issue at datacentre again, all systems down
  • May 15, 2018: Cooling restored, systems coming online
  • May 15, 2018: Cooling issue at datacentre, all systems down
  • May 4, 2018: HPSS is now operational on Niagara.
  • May 3, 2018: Burst Buffer is available upon request.
  • May 3, 2018: The Globus endpoint for Niagara is available: computecanada#niagara.
  • May 1, 2018: System status moved here.
  • Apr 23, 2018 GPC-compute is decommissioned, GPC-storage available until 30 May 2018.
  • April 10, 2018: Niagara commissioned.