Posts

Free MATLAB Technical Seminars on Tuesday

As a friendly reminder, you are invited to join MathWorks for complimentary MATLAB seminars on Tuesday, September 18, 2012 in Room 144 in Clough Undergraduate Commons.

–Register now–
Register at http://www.mathworks.com/seminars/GATech2012

–Agenda–

5:30 – 6:30 p.m.
Session 1: What’s New in MATLAB?
Presented By: Loren Shure, Principal MATLAB Developer (KEYNOTE SPEAKER)

In this session, we will demonstrate workflow examples that highlight and make use of new MATLAB features. The latest MATLAB release, R2012b, introduces a redesigned Desktop, making it easier for both new and experienced users to navigate the continuously expanding capabilities of MATLAB.

Loren has worked at MathWorks for over 25 years. She has co-authored several MathWorks products in addition to adding core functionality to MATLAB. Loren currently works on the design of the MATLAB language. She graduated from MIT with a B.Sc. in physics and has a Ph.D. in marine geophysics from the University of California, San Diego, Scripps Institution of Oceanography. Loren writes about MATLAB on her blog, The Art of MATLAB.

6:30 – 7:00 p.m.
Georgia Tech Alumni Panel

Hear from a selection of Georgia Tech Alumni who now work at The MathWorks as they discuss their career paths. (Pizza will be served.)

7:00 – 8:30 p.m.
Session 2: Parallel and GPU Computing with MATLAB
Presented By: Jiro Doke, Ph.D., Senior Application Engineer and Georgia Tech alumnus

In this session, you will learn how to solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters. We will introduce you to high-level programming constructs that allow you to parallelize MATLAB applications and run them on multiple processors. We will show you how to overcome the memory limits of your desktop computer by distributing your data on a large-scale computing resource, such as a cluster. We will also demonstrate how to take advantage of GPUs to speed up computations without low-level programming. Highlights include:
· Toolboxes with built-in support for parallel computing
· Creating parallel applications to speed up independent tasks
· Scaling up to computer clusters, grid environments or clouds
· Employing GPUs to speed up your computations
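
As a rough illustration of the high-level constructs the session covers, here is a minimal MATLAB sketch using parfor and gpuArray; the specific computation is only a placeholder, and it assumes the Parallel Computing Toolbox and a supported GPU are available (R2012b-era syntax, not material from the seminar itself):

    % Minimal sketch: parallelize independent iterations with parfor and
    % offload array math to the GPU with gpuArray. The computation below is
    % just a placeholder for illustration.
    matlabpool open                   % start a pool of local workers (R2012b-era command)

    n = 1000;
    results = zeros(1, n);
    parfor i = 1:n                    % independent iterations run across the worker pool
        results(i) = sum(besselj(0, linspace(0, i, 1000)));
    end

    A = gpuArray(rand(4000));         % copy data into GPU memory
    B = fft(A) .* 2;                  % fft and elementwise operations execute on the GPU
    C = gather(B);                    % bring results back to host memory

    matlabpool close                  % release the workers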

Jiro joined MathWorks in May 2006 as an application engineer. He received his B.S. from the Georgia Institute of Technology and his Ph.D. from the University of Michigan, both in Mechanical Engineering. His Ph.D. research was in the biomechanics of human movement, specifically human gait. His experience with MATLAB comes from extensive use in graduate school, where he used the tool for data acquisition, analysis, and visualization. At MathWorks, Jiro focuses on core MATLAB; math, statistics, and optimization tools; and parallel computing tools.

Joe Fileserver fixed

The fileserver that houses Joe users’ data (hp3/pj1) started acting squirrelly this morning, finding itself unable to connect to the PACE LDAP server. That, in turn, caused Joe users to have problems logging in, or their jobs to hang, because the fileserver could not authenticate users and jobs.

Restarting all the services on the fileserver rectified the problem.

Scratch storage issues: update

Scratch storage status update:

We continue to work with Panasas on the difficulties with our high-speed scratch storage system. Since the last update, we have received and installed two PAS-11 test shelves and have successfully reproduced our problems on them under the current production software version. We then updated to their latest release and re-tested, only to observe a similar problem with the new release as well.

We’re continuing to do what we can to encourage the company to find a solution but are also exploring alternative technologies. We apologize for the inconvenience and will continue to update you with our progress.

[updated] Scratch Storage and Scheduler Concerns

Scheduler

The move of the workload scheduler to the new server seems to have gone well.  We haven’t received much user feedback, but what we have received has been positive.  This matches our own observations as well.  Presuming things continue to go well, we will relax some of our rate-limiting tuning parameters on Thursday morning.  This shouldn’t cause any interruptions (even of submitting new jobs), but should allow the scheduler to start new jobs at a faster rate.  The net effect is to try to decrease the wait times some users have been seeing.  We’ll slowly increase this parameter and monitor for bad behavior.

Scratch Storage

The story of the Panasas scratch storage is not going as well.  Last week, we received two “shelves” worth of storage to test.  (For comparison, we have five in production.)  Over the weekend, we put these through synthetic tests designed to mimic the behavior that causes them to fail.  The good news is that we were able to replicate the problem in the testbed.  The bad news is that the highly anticipated new firmware provided by the vendor still does not fix the issues.  We continue to press Panasas quite aggressively for a resolution and are looking into contingency plans, including alternate vendors.  Given that we are five weeks out from our normal maintenance day and have no viable fix, an emergency maintenance between now and then seems unlikely at this point.

RFI-2012, a competitive vendor selection process

Greetings GT community,

PACE is in the midst of our annual competitive vendor selection process. As outlined on the “Policy” page of our web site, we have issued a set of documents to various state contract vendors. This time around we have Dell, HP, IBM and Penguin Computing. Contained within these documents are general specifications based on the computing demand we are anticipating coming from the faculty over the next year. I’ve included a link to the documents (GT login required) below. Please bear in mind that these specs are not intended to limit configurations you may wish to purchase, but rather to normalize vendor responses and help us choose a vendor for the next year.

The document I’m sure you will be most interested in is the timeline. The overall timeline has not been published to the vendors, and I would appreciate it if it were kept confidential. The first milestone, which obviously has been published, is that responses are due to us by 5:00pm today. The next step is for us to evaluate those responses. If any of you are interested in commenting on those responses, please let me know. Your feedback is appreciated.

Please watch this blog, as we will post updates as we move through the process.  We already have a number of people interested in a near-term purchase.  If you are as well, or you know somebody who is, now is the time to get the process started.  Please contact me at your convenience.

 

--
Neil Bright
Chief HPC Architect
neil.bright@oit.gatech.edu

FoRCE project server outage (pf2)

At about 4:30pm, one of the network interfaces on the server hosting the /nv/pf2 filesystem was knocked offline, making the resources hosted by that server unavailable. Normally, this shouldn’t have caused a complete failure, but the loss of the network interface exposed a configuration error in the fail-over components.

At 5:10pm, the misconfiguration was corrected and the failed interface was brought back up, which should have restored all resources provided by this server.

This affected some FoRCE users’ access to project storage. Please double-check whether any of your jobs failed because of this outage. Data should not have been lost, as any transactions in progress should have been held up until connectivity was restored.

pace-stat

In answer to the many requests for insight into the status of your queues, we’ve developed a new tool for you called ‘pace-stat’ (/opt/pace/bin/pace-stat).

When you run pace-stat, it displays a summary of all available queues, showing for each queue:

– The number of jobs you have running, and the total number of running jobs
– The number of jobs you have queued, and the total number of queued jobs
– The total number of cores that all of your running jobs are using
– The total number of cores that all of your queued jobs are requesting
– The current number of unallocated cores free on the queue
– The approximate amount of memory/core that your running jobs are using
– The approximate amount of memory/core that your queued jobs are requesting
– The approximate amount of memory/core currently free in the queue
– The current percentage of the queue that has been allocated (by all running jobs)
– The total number of nodes in the queue
– The maximum wall-time for the queue

Please use pace-stat to help determine resource availability, and where best to submit jobs.

[updated] new server for job scheduler

As of about 3:00 this afternoon, we’re back up on the new server. Things appear to be performing much better. Please let us know if you have trouble. Positive reports on scheduler performance would be appreciated as well.

Thanks!

–Neil Bright

——————————————————————-

[update: 2:20pm, 8/30/12]

We’ve run into a last-minute issue with the scheduler migration.  Rather than rush things going into a long weekend, we will reschedule for next week, at 2:30pm on Tuesday afternoon.

——————————————————————-

We have made our preparations to move the job scheduler to new hardware, and plan to do so this Thursday (8/30) afternoon at 2:30pm.  We expect this to be a very low-impact, low-risk change.  All queued jobs should move to the new server, and all executing jobs should continue to run without interruption.  What you may notice is a period during which you will be unable to submit new jobs and job queries will fail.  You’ll see the usual ‘timeout’ messages from commands like msub and showq.

As usual, please direct any concerns to pace-support@oit.gatech.edu.

–Neil Bright