Friday, 21 May 2010

Performance enhancing shrugs, featuring Lizard

So you've just set up your brand spanking new cluster, and it all seems to work as designed. Diagnostic tests pass with flying colours, and you've run a handful of small test jobs through to get your eye in.
But could it perform better?
Well, that's a good question, and before you start to look for the answer you need to think about how you define current performance, and how best to baseline your cluster.
Personally I perform a set of standard, well defined tests. The results of these, coupled with good awareness of expected results for the type of hardware, not only give benchmarks for the cluster, but also indicate whether everything is behaving itself. The tests I use are:
1. mpipingpong
2. High Performance Linpack
3. An appropriate ISV application benchmark
As you can see they will provide fairly high level results, and aim to increase confidence rather than troubleshoot. Let's look at them a little more closely.


mpipingpong
mpipingpong is used to analyse the latency and bandwidth of passing a message between two processes on one or more computers using MPI. It's a lightweight test which completes quickly, and a couple of simple runs are available in the diagnostic test suite that ships with Windows HPC Server. Result! Simply run the MPI Ping-Pong: Lightweight Throughput and MPI Ping-Pong: Quick Check tests across all cluster nodes to produce bandwidth and latency results respectively.
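If you prefer to kick these off from PowerShell rather than the management console, something along these lines should do it. This is a sketch only; the test names and Invoke-HpcTest parameter names are from memory, so check Get-Help Invoke-HpcTest on your own cluster first.

# Load the HPC cmdlets (already loaded if you're in the HPC PowerShell console)
Add-PSSnapin Microsoft.HPC

# Find the ping-pong diagnostics and run them across every node
$tests = Get-HpcTest | Where-Object { $_.Name -like "MPI Ping-Pong*" }
$nodes = Get-HpcNode
foreach ($test in $tests) {
    # Parameter names may differ slightly between HPC Pack versions
    Invoke-HpcTest -Test $test -Nodes $nodes
}

# Pull back the most recent results once the runs complete
Get-HpcTestResult | Select-Object -Last 5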


High Performance Linpack (HPL)
HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. This is the application used to quantify SuperComputer performance for Top500 qualification, and is generally regarded as the industry standard HPC benchmark. That's not to say that it shows how your cluster performs under your real world workloads, but it certainly allows for analysis of performance when compared against other, similar machines. Once again, Microsoft have come up trumps and provide a packaged version of HPL wrapped in a marvellous application called Lizard. Lizard de-stresses the HPL run process by:
1. Providing a consistent, compiled HPL executable. If you've ever tried to compile HPL yourself you'll know exactly what a benefit this is.
2. Automatically tweaking HPL input parameters in order to obtain the best possible result for your cluster configuration. There are many Linpack parameters, and automation makes the tuning process very simple.


ISV application
This is one that you'll almost certainly have more knowledge of for your environment than me. Let's just say that firing a known real world workload across your cluster will give excellent feedback, particularly if you're able to compare results against other machines you own, or published benchmarks. As an example, in the Engineering field Ansys publish Fluent benchmark results online, which give independent comparisons to onsite test runs.

But what do the results mean? Well, let's think about them one by one.
mpipingpong
Obviously the results you achieve will depend on the hardware configuration of your machine, but for guidance, you should expect the following:

Network Type    Throughput     Latency
GigE            ~110 MB/s      40-50 microseconds
10GigE          ~800 MB/s      10-15 microseconds
DDR IB          ~1400 MB/s     <2 microseconds
QDR IB          ~3000 MB/s     <2 microseconds

If you're way off these numbers you should start troubleshooting.

Lizard
Lizard will provide both an actual performance number, in FLOPS, and a cluster efficiency number, which should give you a good idea of how well your cluster is performing against expected results (based on a comparison with your head node processor). This is a good starting figure, but it's worth digging a bit deeper to determine how well your cluster is really doing. There are lots of resources out there which will tell you the optimum result for your processor type, but these should be taken with a pinch of salt, as you will lose performance through inefficiencies outside the processor (memory, interconnect etc.).
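To put the efficiency number in context it helps to work out the theoretical peak (Rpeak) yourself. The figures below are purely illustrative, so substitute your own node count, core counts and clock speed:

# Illustrative only: 16 nodes, 2 sockets per node, 4 cores per socket,
# 2.66 GHz, 4 double-precision FLOPs per core per cycle
$nodes = 16; $sockets = 2; $cores = 4; $clockGHz = 2.66; $flopsPerCycle = 4
$rpeak = $nodes * $sockets * $cores * $clockGHz * $flopsPerCycle   # GFLOPS

# If Lizard reports, say, 720 GFLOPS as the best run (Rmax), efficiency is simply Rmax / Rpeak
$rmax = 720
"Rpeak = {0:N0} GFLOPS, efficiency = {1:P1}" -f $rpeak, ($rmax / $rpeak)

Interconnect, memory and problem size all eat into that number, so a result in the right ballpark for your network type is a pass; don't expect to hit the vendor's headline figure.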

ISV application
Many ISVs publish well defined benchmark figures which can be used as comparisons. Just be aware that it's unlikely that they will have benchmarked a hardware configuration exactly the same as yours. It's also good value to run a job on (for example) a powerful workstation or alternative cluster. This will help form a good view of where your cluster should be performing.

So what's next?
Well, that all depends on your results. Are they looking good? Great, keep an eye on things, let the users loose, and ask for their feedback. Users are quick to notice when their jobs are not running as they'd like. Slightly concerned about your benchmark results? Nice, we're into performance troubleshooting and diagnosis, which I'll cover in a separate post.

One last thing - make benchmarking a regular exercise. It's not only a good thing to report on, it can also act as an early warning system for cluster issues.

Thursday, 20 May 2010

PowerShell is Powerful

A pretty obvious thing to say I'm sure you'll agree, but for the sysadmin it is an incredibly useful link between the old WSH scripting world and full on object based C# or similar. I've been working with Windows Server for many years, and I'm struggling to come up with another administrative advancement which brings as much to the table as PowerShell.
Anyway, after that generic praise, let's get down to the real topic for this post. I have formed a close allegiance with several Windows HPC Server PowerShell cmdlets over the past few years, so I thought I'd write about some of my favourites. Apologies to those left out, but I know they'll be waiting for me to come their way in the future despite not being given love here :)

Deployment
When you're building out multiple clusters, creating a PowerShell script which removes the need to manually work through the cluster configuration wizard is a sweet deal. As you're probably aware, the cluster set up wizard runs through the following steps:


See the nice green checks in the wizard's to-do list? All achieved using PS cmdlets. Let's break it down a bit...
Configure your network. This could be quite complicated, as it takes into account various options such as network topology, subnet details, firewall settings, DHCP and NAT. But it can all be set up using Set-HpcNetwork. As an example, say you want to create a Topology 1 setup (compute nodes isolated on a private network), using NAT, with the firewall on for the enterprise network and off for the private network, the public address as currently assigned to the NIC, a private address of 192.168.1.253, a private DHCP range of 192.168.1.235 - 192.168.1.250, DHCP on for the private network, and the head node providing the NAT function. Check this out...

Set-HpcNetwork -Topology private -Enterprise 'NICName' -EnterpriseFirewall $True -Private 'NICName2' -PrivateIpAddress 192.168.1.253 -PrivateSubnetMask 255.255.255.0 -PrivateDhcp $True -PrivateDhcpStartAddress 192.168.1.235 -PrivateDhcpEndAddress 192.168.1.250 -PrivateDhcpGateway 192.168.1.253 -PrivateDhcpDns 192.168.1.253 -PrivateNat $True -PrivateFirewall $False

Sweet!
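Once that's run it's worth sanity checking what was applied before moving on. A quick sketch, assuming the Get-HpcNetworkTopology and Get-HpcNetworkInterface cmdlets are available on your version:

# Confirm the topology took, and check which NIC ended up on which cluster network
Get-HpcNetworkTopology
Get-HpcNetworkInterface | Format-Table -AutoSize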
Now then, provide installation credentials. Want to add domain\nodebuild as the node installation account?

Set-HpcClusterProperty -InstallCredential DOMAIN\nodebuild
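You'll need to supply the account's password too. One way to do that (a sketch; as far as I recall the parameter accepts a PSCredential) is:

# Prompt for the password of the node installation account and store it with the cluster
$cred = Get-Credential "DOMAIN\nodebuild"
Set-HpcClusterProperty -InstallCredential $cred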

Configure the naming of new nodes. Let's say you fancy naming the nodes after the cluster, so something like CLUSTER-CN001 onwards. The cmdlet you're looking for here is

Set-HpcClusterProperty -NodeNamingSeries

but you need to find the cluster name first. I use PS to grab the CCP_SCHEDULER environment variable.
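Gluing those two together looks roughly like this. The %...% token is where the node counter goes, but double check the exact pattern your HPC Pack version expects, as I'm quoting it from memory:

# CCP_SCHEDULER holds the head node (cluster) name on any machine with the HPC client bits installed
$cluster = $env:CCP_SCHEDULER
Set-HpcClusterProperty -NodeNamingSeries "$cluster-CN%0001%"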

Create a node template? Simple, knock up a template with the appropriate steps, then export it. If your template has an image associated with it, get that ready, then import the image using

Add-HpcImage -Path

Now import your node template (which should reference your image if applicable)

Import-HpcNodeTemplate -Path

All done, right? Well, you may need to import drivers for the deployment process etc:

Add-HpcDriver -Path

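Putting those last few steps together, with hypothetical paths standing in for wherever you keep your build artefacts:

# Paths below are made up - point them at your own exported image, template and driver files
Add-HpcImage -Path "C:\ClusterBuild\Images\ComputeNode.wim"
Import-HpcNodeTemplate -Path "C:\ClusterBuild\Templates\ComputeNodeTemplate.xml"
Add-HpcDriver -Path "C:\ClusterBuild\Drivers\nic\netdriver.inf"
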
You're all set, and of course by using a scripted method this is all nicely documented and repeatable.

Node Management

How do you control node group membership? I used to use the UI to manually add nodes to groups, which was a little bit painful, so another delve into PS gave me a better way. I use Get-HpcNode in conjunction with Add-HpcGroup like this:


Example - if you populate the node description with the software it has installed (softwarepackage2):

Get-HpcNode|where {$_.description -eq "softwarepackage2"}|Add-HpcGroup -name SoftwarePackage2Group

Example - if you want to add a node to a group based on its config, e.g. installed memory:

Get-HpcNode|where {$_.Memory -ge 8000}|Add-HpcGroup -name Over8GBGroup
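If the group doesn't exist yet you may need to create it first. A sketch using New-HpcGroup (check it's present on your version):

# Create the group once, then populate it from a node query
New-HpcGroup -Name Over8GBGroup
Get-HpcNode | Where-Object { $_.Memory -ge 8000 } | Add-HpcGroup -Name Over8GBGroup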

Keeping an Eye on Business


The Operation log contains many juicy bits of info which are sometimes easy to miss. I tend to run up a monthly report which includes details of recent warnings and errors in the log. PS again comes to my assistance, with the help of the Select-Object cmdlet to hit the last 500 entries (appropriate for my needs).

Get-HpcOperation -State committed | Select-Object -Last 500 | Get-HpcOperationLog -Severity  Error,Warning

Also in the report is the output of any diagnostics that failed or failed to run. I have an Ops Manager environment in place which of course provides overall management and reporting, but it also periodically runs diagnostic jobs. These are useful, and I grab the results like this:

Get-HpcTestResult -teststate -LastRunTime (get-date).AddMonths(-1)
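To drop both of those into the monthly report I just push the output to CSV. The file paths here are illustrative:

# Last 500 committed operations, filtered to warnings and errors
Get-HpcOperation -State Committed | Select-Object -Last 500 |
    Get-HpcOperationLog -Severity Error,Warning |
    Export-Csv -Path "C:\Reports\OperationLog.csv" -NoTypeInformation

# Diagnostic results from the last month
Get-HpcTestResult -LastRunTime (Get-Date).AddMonths(-1) |
    Export-Csv -Path "C:\Reports\DiagnosticResults.csv" -NoTypeInformation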

So, to sum up, PowerShell is awesome! 

Monday, 10 May 2010

A job a day keeps the user at bay

There's always one isn't there? A pesky user who wants to try something fancy on a cluster but isn't entirely sure how to do it. This is the coal face of HPC. It might not have the glamour of procurement, or the technical challenge of system administration, but being able to help a stuck user is possibly the most rewarding part of the job.
I like to set myself a challenge every now and then (job a day? well, maybe not quite that often) to submit a job with crazy parameters, or oddball resource requirements. You never know when a user will want to do something similar, and then a) you'll be able to help, and b) you'll look pretty smart ;)
Thinking more about this, I might fire in a blog post every now and then containing an interesting job submission technique; it might be interesting to see if anyone can come up with any nifty alternatives...

Know your underlying infrastructure

So, you're looking to deploy and support a Windows HPC Server cluster, but you have a sneaky suspicion that there's more under the covers than you bargained for. What do you know, your instincts are correct, and you're suddenly in a world of Microsoft technologies which you should at least be aware of.
The good news is (and this is a big advantage) that these underlying technologies are common, and it may be that your company/organisation has experience in those areas, for example in a corporate IT team. If those skills can be leveraged for your Windows HPC deployment, then it's all gravy - you can stick to what you do best, eke out tip top performance, and write some killer submission scripts for your cluster users.
But wait, what if you have no such skills in house? Well, here's a quick rundown of some of the things involved...


Windows Server 2008
The base operating system. Note that Windows HPC Server can run on various editions of the OS (e.g. HPC Edition, Standard Edition, Enterprise Edition), which should show that the HPC Pack is a separate entity from the OS. Licence wise, it's important to note that you can only run Windows HPC related technologies on 2008 HPC Edition - no chance of saving a few notes by using it as a base OS for your corporate Exchange system ;)
When thinking about the OS, pay particular attention to driver versions and settings. Use the built in reliability, performance & logging tools to your advantage.


Active Directory
There's no getting round the Active Directory thing. It's at the core of everything Windows HPC Server does, from deployment to running jobs to authorising access to data. There are several potential options here depending on your environment. If you have a corporate AD I would strongly suggest that you work with your corp IT guys to integrate the cluster. This type of configuration will smooth the wheels of progress significantly. Using an existing authentication regime, one in which users already have accounts set up, can save a bunch of user admin overhead. If this is not an option, it's worth spending at least a bit of time pondering your AD architecture. It's nice and easy to simply promote your head node to a domain controller, but I would suggest that you also run up another, separate domain controller, as losing your AD can be a royal PITA to recover from.
Either way, take some time to get this bit right, and to learn the basics of AD operation & you'll likely see payback in future.


SQL Server
I'm planning to dive into SQL Server a bit deeper in another post, but suffice to say it's well worth picking up some knowledge in this area.


Windows Deployment Services
WDS provides the platform for the super slick node deployment mechanism within Windows HPC Server. It's wrapped so nicely that you may never need to poke about under the covers, but definitely pay attention to imagex, diskpart, and the \\<headnode>\reminst share and its contents.


DHCP and DNS
OK so these are not necessarily Windows specific, but getting your DNS and DHCP knowledge down is very useful. I'm going to post about network configuration in another post, so will try to include some DNS / DHCP tips there too.


RRAS
Routing and Remote Access Services is an umbrella for a bunch of useful Windows features, including dial-up; VPN (both client-server and site-to-site); IP subnet routing; and Network Address Translation (NAT). In the case of Windows HPC Server it's the NAT part that's of interest. It plays an integral part in the operation of those network topologies where compute nodes only have a connection to the private network. In these cases traffic destined for hosts on subnets other than the private network travels via the head node (acting as the gateway) and is NATed out onto the enterprise network. This may be pertinent e.g. for Windows Updates and the like.


Windows Failover Clustering
If you're going for a high availability head node solution, take some time to become acquainted with how Windows Failover Clustering works: the behaviour and types of shared disks; which quorum model to use (disk witness, file share witness, node majority); Failover Cluster network configuration; cluster resource DNS registration; and verification and support of cluster components. This is a big subject in itself, and an awareness of how the technology works is important.

Thursday, 29 April 2010

It's all in the name





I like names, they have a funny way of telling a story in a single word. Take my name for instance, I'm sure you read the word 'Dan' and assume that I'm a strong, intelligent, handsome guy who's no end of fun to hang out with, right? Or maybe it's the other way round, and the person defines your perception of a name. I mean, if all people named Dan are super cool, does that make the name Dan super cool?
Anyway, what I really want to talk about here is how Windows HPC handles name resolution, particularly across private and application networks.
First off, let's think a little about the general Windows host name resolution order, which looks like this:

1. Checks its own name.
2. Looks in the local DNS cache (you can list entries using ipconfig /displaydns).
3. Local HOSTS file (C:\Windows\System32\Drivers\etc\hosts).
4. Appends the search suffix configured on the machine (if the name isn't fully qualified) and queries DNS.
5. WINS (NetBIOS name resolution).
6. Broadcast on the local subnet.
7. Local LMHOSTS file (C:\Windows\System32\Drivers\etc\lmhosts).

Pretty thorough I'm sure you'll agree.
Now let's look at this with our high performance and management hats on. We want to get an answer to our name resolution queries as quickly as possible, while ensuring consistency across all nodes in the cluster. We also need to resolve a host name to the appropriate network address for the cluster network we're after. Running through the resolution order above, we can ignore 1 for obvious reasons. 2 may be interesting performance wise, but as the cache may not contain records for all nodes, it's an inconsistent choice. 3: hmmm, the good old local hosts file, sounds kinda antiquated and simplistic don't you think? But it's at number 3 on the list, crucially checked before DNS resolution. And maybe it can be managed by one of the HPC services running on all nodes? Oh, this is starting to sound decent. Just to be sure though, let's continue. 4 is our old friend DNS, which sounds like the way to go. But each DNS lookup can take a relatively long time. Seems like it'd be good for management, but is it as good as a cluster managed solution? Once we get to 5, 6 and 7 things are drifting off into desperation, so let's not say too much about those guys.

Well what do you know, Hosts seems to be a very good choice here, and lo and behold that's how it works in practice! Check out this example hosts file taken from a handy dev cluster... 

# Copyright (c) 1993-1999 Microsoft Corp.
# This host file is maintained by the Compute Cluster Configuration
# Management Service. Changes made to the file that match the netbios names
# for existing nodes in the cluster will be removed and replaced by entries
# calculated by the management service.
# Modify the following line to set the property to 'false' to disable this
# behavior. This will prevent the management service from making any
# further modifications to the file
# ManageFile = true

127.0.0.1                localhost
192.168.5.23             HPCDEV-HN02                    #HPC
192.168.100.11           HPCDEV-HN02                    #HPC
192.168.0.11             HPCDEV-CN001                   #HPC
192.168.0.10             HPCDEV-CN002                   #HPC
192.168.0.134            HPCDEV-CN003                   #HPC
192.168.0.1              HPCDEV-HN01                    #HPC
192.168.0.2              HPCDEV-HN02                    #HPC
192.168.0.3              HPCDEV-VHN01                   #HPC
192.168.1.1              HPCDEV-HN01                    #HPC
192.168.1.2              HPCDEV-HN02                    #HPC
192.168.1.3              HPCDEV-VHN01                   #HPC
192.168.5.22             Enterprise.HPCDEV-HN01         #HPC
192.168.5.23             Enterprise.HPCDEV-HN02         #HPC
192.168.5.28             Enterprise.HPCDEV-VHN01        #HPC
192.168.0.11             Private.HPCDEV-CN001           #HPC
192.168.0.10             Private.HPCDEV-CN002           #HPC
192.168.0.134            Private.HPCDEV-CN003           #HPC
192.168.0.1              Private.HPCDEV-HN01            #HPC
192.168.0.2              Private.HPCDEV-HN02            #HPC
192.168.0.3              Private.HPCDEV-VHN01           #HPC
192.168.1.11             Application.HPCDEV-CN001       #HPC
192.168.1.10             Application.HPCDEV-CN002       #HPC
192.168.1.134            Application.HPCDEV-CN003       #HPC
192.168.1.1              Application.HPCDEV-HN01        #HPC
192.168.1.2              Application.HPCDEV-HN02        #HPC
192.168.1.3              Application.HPCDEV-VHN01       #HPC

This file reflects the current addressing of an HA head node cluster configuration with three compute nodes, using network topology 3 (compute nodes isolated on private and application networks). It's interesting to note that only the active head node's addresses appear in the plain format (HPCDEV-HN02) for networks other than private and application (in this case the failover cluster heartbeat network and the enterprise network).
Check out those funky Enterprise., Private. and Application. entries. These allow the cluster service to be very specific in its address resolution requests, ensuring it will always get back the address on the appropriate network.
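If you want to see for yourself which address comes back for one of those prefixed names, a quick check from any node (the host names here are the ones in the sample file above):

# The OS resolver consults the hosts file before DNS, so these return the
# private and application network addresses respectively
[System.Net.Dns]::GetHostAddresses("Private.HPCDEV-CN001")
[System.Net.Dns]::GetHostAddresses("Application.HPCDEV-CN001")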

But what if you want to host non HPC Server managed machines on your private network, therefore requiring hosts to register in DNS (they do not by default)? Well, you can use the awesomeness that is PowerShell...
Set-HpcNetwork -PrivateDnsRegistrationType WithConnectionDnsSuffix

One thing to beware of - check out the warning at the top of the hosts file. If you set ManageFile = false and manually alter entries previously managed by HPC Server, things may get a little broken.

Monday, 26 April 2010

The business is (nearly) always right

I don't know about you, but I enjoy kicking the tyres of new products in my sphere. The only problem is that it's often quite difficult to find time to dedicate to the simple pleasure of learning something new, as real work (and of course family) tends to, um, get in the way! On balance I can see that employers are quite justified in their time demands, and naturally the missus and son are top of the list :) so how to find that perfect balance and make time to follow up interesting developments?
Well, I have a few manoeuvres here:
1. Report, report, report. Managers like to know what's going on, so show them the money. Graphs, charts and pictures are particularly well received, and can buy you some play time. Speaking of reporting, if you've found some time to look into a new piece of tech, it's good to let the boss know your thoughts on it. Spread the word!
2. Stress the business benefits. Management types don't necessarily know anything about IOPS, or whether more FLOPS is a good or bad thing. What they will understand is that something might increase productivity, or even better save money.
3. Show those guys a good time... I'm talking about the family here of course ;)

Friday, 16 April 2010

Digging through the versions

One thing I enjoy about Windows HPC server (and previously CCS) is a good discussion on versions, naming and compatibility. It's a veritable cauldron of confusion!
Here's my quick naming and compatibility matrix for head nodes / compute nodes, hope it helps more than hinders :)

Grouped by HPC Pack version, with the supported node roles for each OS version:

2003 Compute Cluster Pack (+SP1)
- Windows Server 2003 x64 Std / Ent Edition (+SP2; +R2): head node and compute nodes
- Windows Server 2003 x64 HPC Edition (+SP2; +R2): head node and compute nodes
- Windows Server 2008 Std / Ent / HPC Edition x64 (+SP2): not supported
- Windows Server 2008 R2 Std / Ent / HPC Edition: not supported

2008 HPC Pack (+SP1)
- Windows Server 2003 x64 Std / Ent / HPC Edition: not supported
- Windows Server 2008 Std / Ent Edition x64 (+SP2): head node and compute nodes
- Windows Server 2008 HPC Edition x64 (+SP2): head node and compute nodes
- Windows Server 2008 R2 Std / Ent / HPC Edition: not supported

2008 HPC Pack R2 BETA
- Windows Server 2003 x64 Std / Ent / HPC Edition: not supported
- Windows Server 2008 Std / Ent Edition x64 (+SP2): compute nodes only
- Windows Server 2008 HPC Edition x64 (+SP2): compute nodes only
- Windows Server 2008 R2 Std / Ent Edition: head node and compute nodes
- Windows Server 2008 R2 HPC Edition BETA: head node and compute nodes

There's more of this type of thing surrounding SDK and Client component versions, I'll post about that later...

Tuesday, 13 April 2010

Don't forget the basics

I have a few rules which I try to abide by in my working life, one of which is 'Keep it simple, stupid!'. There is a great deal to be said for cutting out overcomplicated and unnecessary configuration, particularly when it comes to troubleshooting an existing system.
Don't get me wrong, I enjoy the cut and thrust of deep technical development and configuration as much as the next (slightly geeky) guy, but experience has shown that the business will thank you for a well designed, simple yet efficient solution, and that there are diminishing returns but increased risks as complexity increases.
This isn't to say that some services don't need in depth configuration, more that it's important to know when to stop.

Friday, 9 April 2010

Windows HPC Server 2008 R2 Beta 2

It's been out for a short while, but the HPC team blog has just announced availability of Windows HPC Server 2008 R2 Beta 2. I've been running it on a test rig and first impressions are extremely favourable. I'm particularly liking the additional diagnostic features, and the inclusion of node image capture will save a bit of manual imagex stress. Some of the new scheduling features (particularly the enhanced activation filter) are looking really nice too!
One other thumbs up for R2 is the ability to use a remote SQL Server instance. There are quite a few technical and management reasons why this is a great addition, and the financial benefit when running HA headnodes in an environment where a SQL cluster is already in place is an obvious plus point.

Try to get to SuperComputing.

I was fortunate enough to attend the SuperComputing conference in Portland, OR last year, and can honestly say it was a blast. I've been to several similar large events in my time, and SC09 was right up there amongst the best. I'm not sure if it was the location (I loved Portland, what an awesome city!), the people (hung out with some fun individuals), or the conference itself, but it was fantastic all round.
I'll be hoping to get out to New Orleans for SC10, maybe I'll see you there...

Wednesday, 7 April 2010

Do you really need it?

The world of HPC is studded with cutting edge technology, all of it at a price. The question is do you need it?
This sounds like a simple issue, but in reality it's often quite a challenge to determine whether something will provide increased performance for your (or your users) jobs.
To cut a long story short, the most useful thing you can do to come up with the answer is to determine the requirements of your problem space. Are your tasks particularly reliant on disk IO? Do you have a sensitivity to latency? Maybe your cluster is used to run lots of single processor jobs and does not require high MPI performance at all. Once you know this it's easier to cut through the admittedly shiny technology and focus on what would be of most benefit to your environment.

Wednesday, 31 March 2010

Damn there are some clever people out there

The technical computing world is populated with some scarily intelligent people, seemingly brought up from an early age on complex algorithms and deep mathematical theory. Unfortunately I am not, and will never be, one of these people no matter how hard I try, so I have to look for other ways to make myself useful.
Luckily, skills developed in the enterprise world are also useful when managing Windows HPC Server, as a lot of the building blocks are common. Aside from a technical tool belt armed with AD, SQL Server, WDS, DNS etc etc, some of the softer skills can be very useful too. Knowledge and experience of support processes, project management, incident resolution, dealing with customers, and service delivery are all as applicable in the HPC world as any other environment.
After all, your carefully constructed and configured cluster is providing a service to your customers which is often as critical as any enterprise offering.