Posts

Possible network outage on 12/13

The network team will perform maintenance next Tuesday (12/13) at 7:30am. This is not expected to affect any systems or running jobs, but there is still a ~20% chance that a network outage could occur and last for about an hour. The team will be on site, prepared to intervene immediately should that happen.
Please note that a network outage would affect running jobs, so you may want to wait until the maintenance is over before submitting large and/or critical jobs. As always, please contact us if you have any concerns or questions.

Department of Energy Computational Science Graduate Fellowship

Applications due January 10, 2012

We are pleased to announce that applications are now open for the Department of Energy Computational Science Graduate Fellowship (DOE CSGF) at https://www.krellinst.org/doecsgf/application/.

This is an exciting opportunity for doctoral students to earn up to four years of financial support along with outstanding benefits and opportunities while pursuing degrees in fields of study that utilize high performance computing technology to solve complex problems in science and engineering.

Benefits of the Fellowship:

  • $36,000 yearly stipend
  • Payment of all tuition and fees
  • $5,000 academic allowance in first year
  • $1,000 academic allowance each renewed year
  • 12-week research practicum at a DOE Laboratory
  • Yearly conferences
  • Career, professional and leadership development
  • Renewable up to four years

Applications for the next class of fellows are due January 10, 2012.

For more information regarding the fellowship and to access the online application, visit: http://www.krellinst.org/csgf/

Updated: Network troubles, redux (FIXED)

We’ve got the switch back.  The outage appears to have caused our virtual machine farm to reboot, so connections to head nodes were dropped.

This also affected the network path between compute nodes and the file servers.  With a little luck, the NFS traffic should resume, but you may want to check on any running jobs to make sure.
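
In case it helps, here is a minimal sketch of one way to do that check from a node: it stats a filesystem path in a child process with a timeout, so a hung NFS mount cannot hang the check itself.  The path and the 10-second timeout are placeholders, not anything PACE-specific.

    # Rough sketch only: stat a filesystem path from a child process with a
    # timeout, so a hung NFS mount cannot hang the check itself.
    # The path below is a placeholder; point it at your own project directory.
    import multiprocessing
    import os

    def _stat_path(path):
        os.statvfs(path)  # blocks indefinitely if the NFS server is unresponsive

    def mount_responsive(path, timeout=10):
        """Return True if `path` answers a statvfs() within `timeout` seconds."""
        proc = multiprocessing.Process(target=_stat_path, args=(path,))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():        # still blocked after the timeout expired
            proc.terminate()
            proc.join()
            return False
        return proc.exitcode == 0  # exit code 0 means statvfs() succeeded

    if __name__ == "__main__":
        test_path = "/path/to/your/project/directory"  # placeholder path
        status = "responding" if mount_responsive(test_path) else "NOT responding"
        print(test_path, status)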

Word from the network team is that they were following published instructions from the switch vendor to integrate the two switches when the failure occurred.  We’ll be looking into this pretty intensely, as these switches are seeing a lot of deployments in other OIT functions.

Network troubles, redux – 11/10 3:00pm

Hi folks,

While attempting to restore the network redundancy lost in the 10/31 switch failure, the Campus Network team has run into trouble connecting the new switch.  At this point, the core of our HPC network is non-functional.  Senior experts from the network team are working to restore connectivity as soon as possible.

Full filesystems this morning

This morning, we found the hp8, hp10, hp12, hp14, hp16, hp18, hp20, hp22, hp24, and hp26 filesystems full.  All of these filesystems reside on the same fileserver and share capacity.  The root cause was an oversight on our part – a lack of quota enforcement on a particular user's home directory.  The proper 5 GB home directory quotas have been reinstated, and we are working with this user to move their data to their project directory.  We’ve managed to free up a little space for the moment, but it will take a little time to move a couple of TB of data.  We’re also doing an audit to ensure that all appropriate storage quotas are in place.

This would have affected users on the following clusters:

  • Athena
  • BioCluster
  • Aryabhata
  • Atlantis
  • FoRCE
  • Optimius (not yet in production)
  • ECE (not yet in production)
  • Prometheus (not yet in production)
  • Math (not yet in production)
  • CEE (not yet in production)
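
For those curious about what the audit looks like, the gist is simply comparing each home directory's usage against the 5 GB quota.  Below is a rough, hypothetical sketch in Python; the mount point is a placeholder, and the real enforcement relies on the fileserver's quota tooling rather than walking directories.

    # Hypothetical sketch of the quota audit: flag home directories whose
    # on-disk usage exceeds the 5 GB quota.  The mount point is a placeholder;
    # actual enforcement uses the fileserver's quota tools, not a directory walk.
    import os

    QUOTA_BYTES = 5 * 1024 ** 3              # 5 GB home directory quota
    HOME_ROOT = "/path/to/home/filesystem"   # placeholder mount point

    def dir_size(path):
        """Sum the sizes of all regular files under `path`, skipping errors."""
        total = 0
        for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
            for name in filenames:
                try:
                    total += os.lstat(os.path.join(dirpath, name)).st_size
                except OSError:
                    pass  # file vanished or permission denied; skip it
        return total

    for entry in sorted(os.listdir(HOME_ROOT)):
        home = os.path.join(HOME_ROOT, entry)
        if os.path.isdir(home):
            used = dir_size(home)
            if used > QUOTA_BYTES:
                print("%s: %.1f GB exceeds the 5 GB quota"
                      % (entry, used / float(1024 ** 3)))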

PACE staffing, week of November 14

Greetings all,

As I’m sure some of you are aware, next week is the annual Supercomputing ’11 conference in Seattle.  Many of the PACE staff will be attending, but Brian MacLeod and Andre McNeill have graciously agreed to hold down the fort here.  The rest of us will be focused on conference activities but will have connectivity and can assist with urgent matters should the need arise.

Updated: Network troubles this morning (FIXED)

All head nodes and critical servers are back online (some required an emergency reboot).  The network link to PACE equipment in TSRB is restored as well.

We do not believe any jobs were lost.

All Inchworm clusters should be back to normal.

Please let us know via pace-support@oit.gatech.edu if you notice anything out of the ordinary at this point.

Network troubles this morning – 9:08am

It looks like we have a problem with a network switch this morning.  Fortunately, our resiliency improvements have mitigated some of the impact, but not all of it, since we haven’t yet extended those improvements down to the individual server level.  We’re working with the OIT network team to get things back to fully functional as soon as possible.