Here’s our present for the holidays: a new OS stack based on RHEL 6.3, which our tests indicate delivers a performance boost across all CPU architectures. Please try your codes on TestFlight now to make sure we haven’t introduced new bugs in this stack, and report any problems you see to us.
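If you’d like a quick way to kick the tires, a one-line submission along these lines should do. This is a generic PBS sketch, not an exact recipe: we assume here that the TestFlight queue is named “testflight”, and mytest.pbs stands in for your own job script.

# assumes a queue named "testflight"; mytest.pbs is a placeholder for your own script
qsub -q testflight -l walltime=1:00:00 mytest.pbs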
Scheduled Quarterly Maintenance on 01/15/2013
The first quarterly maintenance of 2013 will take place on 01/15. All systems will be offline for the entire day. We hope that no jobs will need to be killed, since we have been placing holds on jobs that would still be running on that day. If you submitted jobs with long walltimes (extending past 01/15), you will notice that the scheduler is holding them to protect them from being killed.
Here’s a summary of the tasks that we are planning to accomplish on the maintenance day.
* OS upgrade (6.2 to 6.3): We will upgrade the RHEL OS to version 6.3. This version offers better compatibility with our hardware, with potential performance benefits. We have been testing the existing software stack with this version to verify compatibility and do not expect any problems. We are upgrading the TestFlight nodes to 6.3 (they should be online very soon), so please submit test jobs to this queue to verify that your codes will continue to run on the new system.
* Scratch storage maintenance: As most of you already know, we have been working with Panasas to resolve the ongoing crashes. Panasas has identified the cause, which will require a new release of their system software. We expect to deploy a tested version on this maintenance day.
Important: The new release will be tested on a separate storage system provided by Panasas, not on our production system. Therefore, we must be prepared for the possibility of unforeseen problems that are only triggered by production runs with actual usage patterns. In an effort to shield long-running jobs from such an event, we are placing another reservation that only allows jobs that will complete by 02/17/2013, while holding longer jobs. This way, should we need to declare an emergency downtime on that day, we will be able to do so with minimal impact. This means jobs with more than 31 days of walltime will be held until February 17th, so please keep this in mind while setting walltimes for your jobs (see the walltime sketch following this list). The reservation is contingent upon the stability of the system, and we may remove it earlier than this date if we feel confident enough. We are sorry for the inconvenience.
* Conversion of more RHEL5 nodes to RHEL6: The majority of our users have already made the switch to RHEL6 systems. Therefore, we will migrate more of the FoRCE and Joe nodes to the corresponding RHEL6 queues. We are not getting rid of the RHEL5 queues entirely (just yet), but the number of nodes they contain will be significantly reduced. Please contact us if your jobs still depend on RHEL5, since this version will be deprecated in the near future.
* Deployment of new database-driven configuration builders (dry-run mode only): We are developing a new system to manage user accounts, queues, and many other system management tasks, to minimize human error and maximize efficiency. We will deploy a dry-run-only prototype of this system, which will run alongside the existing mechanisms. This will allow us to test and verify the new system against real usage scenarios to assist in the development effort; it will not be used for actual management tasks.
* New license server: We will start using a new license server, since the existing server is aging. We will migrate the existing licenses to the new server on the maintenance day. We don’t expect any difficulties, but please contact us if you notice any problems with licenses.
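To illustrate the walltime guidance above: a job submitted right after the 01/15 maintenance with a 30-day walltime would finish before 02/17 and should not be held. The PBS directives below are a generic sketch (the node and core counts are placeholders, not recommendations):

# 720 hours = 30 days, comfortably inside the 02/17 reservation window
#PBS -l walltime=720:00:00
#PBS -l nodes=1:ppn=8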
As always, please let us know if you have any concerns or questions at pace-support@oit.gatech.edu.
TestFlight update in progress
We have temporarily stopped the TestFlight queues to allow them to drain so that we can upgrade the TF machines to a new stack based on RHEL 6.3. Once all machines have been upgraded, we will re-enable the queues so that jobs can test the suitability of this new stack.
Should there be no major software issues, this will become the standard OS for RHEL6-based clusters on the next maintenance day, scheduled for January 17, 2013.
Maintenance Day (October 16, 2012) – complete
We have completed our maintenance activities. Head nodes are back online, and queued jobs are being released.
Our filesystem correction activities on the scratch storage found eight “objects” on the v7 volume to be damaged; these were automatically removed. Unfortunately, the process gives no indication of which files or directories were affected.
As always, please follow up with pace-support@oit.gatech.edu about any problems you may see, ideally using the pace-support.sh script discussed here: http://pace.gatech.edu/support.
Maintenance Day (October 16, 2012)
PACE Maintenance Day is underway.
All compute nodes are off, and all login nodes should be inaccessible.
campus network maintenance
The Network team will be performing some scheduled maintenance this Saturday morning. This may impact connectivity from your workstations, laptops, or home machines, but should not affect jobs running within PACE. However, if your job requires access to network services outside of the PACE cluster (e.g. a remote license server), this maintenance may affect it.
For further information please see the maintenance announcement on status.oit.gatech.edu.
Check the status of queue(s) using “pace-check-queue”
Dear PACE Users,
We have a new tool to announce. If you would like to check the status of any PACE queue, you can now run:
pace-check-queue <queuename>
substituting <queuename> with the name of the queue you would like to check. The output includes a column that tells you whether each node is accepting jobs, along with a human-readable explanation. At a glance, this tool provides the following information:
* Which nodes are included in the queue
* Which nodes accept jobs and which don’t (and if they don’t, why)
* How many cores and how much memory each node has, and what percentage of each is in use
* Overall usage (CPU/Memory) levels for the entire queue.
(This information is refreshed every half hour.)
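For instance, to inspect a hypothetical queue named “force-6” (substitute the name of any queue you actually use):

pace-check-queue force-6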
We recently announced another new tool, pace-stat, for checking the status of your queues. These tools complement each other, so feel free to use both. Please report any down or problematic nodes you see in the list to pace-support@oit.gatech.edu.
We hope these new tools will provide you with a better HPC environment. Happy computing!
PS: These tools are continuously being developed, therefore your feedback and suggestions for improvements are always welcome!
upcoming maintenance day, 10/16 – working on the scratch storage
It’s that time again. We’ve been working with our scratch storage vendor (Panasas) quite a lot lately, and think we finally have some good news. Addressing the scratch space will be a major thrust of this quarterly maintenance, and we are cautiously optimistic that we will see improvements. We will also be applying some VMware tuning to our RHEL5 virtual machines that should increase responsiveness of those head nodes & servers. Completing upgrades to RHEL6 for a few clusters and a few other minor items round out our activities for the day.
Scratch storage
We have been testing new firmware on our loaner Panasas storage. Despite our best efforts, we have been unable to replicate our current set of problems after upgrading the loaner equipment to this firmware. This is good news! However, simply upgrading is insufficient to fully resolve our issues, so on maintenance day we will be performing a number of tasks related to the Panasas storage. After the firmware update, we need to perform some basic file integrity checks – the equivalent of a UNIX fsck – on a couple of volumes. This process requires those volumes to be offline for the duration. After this, we need to perform reads of every file on the scratch that was created before the firmware upgrade. Based on our calculations, this will take weeks. Fortunately, this process can happen in the background, with the filesystems online and otherwise operating normally. The net result is that the full impact of our maintenance day improvements to the scratch will likely not be realized for a couple of weeks. If there are files (particularly large ones) that you no longer need and can delete, this process will go faster. We will also be upgrading the Panasas client software on all compute nodes to (hopefully) address performance issues.
Finally, we will also be instituting a 20TB per-user hard quota in addition to the 10TB per-user soft quota currently in place. Users who exceed the soft quota will receive warning emails, but their writes will succeed. Writes will fail for users who attempt to exceed the hard quota.
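If you’d like to see where you stand relative to these quotas, or track down large files to delete before maintenance day, standard tools are sufficient. The sketch below assumes your scratch space is reachable at ~/scratch; adjust the path to wherever yours is mounted.

# total scratch usage, to compare against the 10TB soft / 20TB hard quotas
du -sh ~/scratch
# files larger than 100GB are good candidates for cleanup
find ~/scratch -type f -size +100G -exec ls -lh {} \;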
VMware tuning
With some assistance from the Architecture and Infrastructure directorate in OIT, we will be making a number of adjustments to our VMware environment, the most significant of which is adjusting the filesystem alignment of our RHEL5 virtual machines. Users of RHEL5 head nodes are likely to see the most improvement. We’ll also be installing the VMware Tools package and applying the various tuning parameters it enables.
RHEL6 upgrades
The remaining RHEL5 portions of the clusters below will be upgraded to RHEL6. After maintenance day, RHEL5 will be unavailable to these clusters.
- Uranus
- BioCluster
- Cygnus
Misc items
- Configuration updates to redundant network switches serving some project storage
- Capacity expansion of the ECE file server
- Serial number updates to a small number of compute nodes lacking serial numbers in the BIOS
- Interoperability testing of Mellanox Infiniband switches
- Finish project directory migration of two remaining Optimus users
Cygnus FS pc5 online…mostly.
We have been able to bring /nv/pc5 back online, but at a cost to redundancy. One of the network interfaces, cables, or switches is misbehaving, but when we tried disconnecting various combinations of cables, we found one combination that made the filesystem immediately available to all nodes.
Considering how close maintenance day is (10/16/12), spending time isolating the cable/switch/interface problem now would only mean more time with this filesystem offline as equipment gets retested. Waiting until maintenance day will cause the least disruption for Cygnus pc5 users getting in their last runs of jobs, and it takes some time pressure off of us to make sure we have resolved the issue in its entirety before bringing all resources back online.
Despite the loss of redundancy, functionality is NOT affected. Only an additional switch or cable failure between now and October 16 would impact functionality.
Cygnus File System pc5 offline
It appears that we have an issue with the server housing the /nv/pc5 filesystem, which serves a subset of the Cygnus cluster users. We’re trying to isolate the source of the problem, but we have yet to find a pattern explaining why the filesystem is available on some nodes and not others.