Thursday, December 4, 2008

Cron Security

After the recent Security Challenge, we became aware that any pool user could create at and cron jobs on our cluster: obviously not good for security or scheduling.

Initially we wondered if we'd need to create SELinux policies to restrict this, but it's much simpler than that: cron and at support simple allow and deny files to control which users can use the commands. /etc/cron.deny specifies which users are denied access, and /etc/cron.allow specifies which users are allowed. (For full details, see man crontab.)

In /etc/cron.deny we put:
   ALL
and in /etc/cron.allow we put:
   root
   admina
   adminb
   ...
where admina, adminb and so on are the admin users who should have cron access. /etc/at.deny and /etc/at.allow are configured the same way.
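
If you just want to drop the files in place by hand (outside of any fabric management), something like this minimal Python sketch would do it -- the admin list is just a placeholder:

#!/usr/bin/env python
# Sketch only: generate allow/deny files for cron and at.
# ADMIN_USERS is a placeholder -- replace with your own admin accounts.

ADMIN_USERS = ["root", "admina", "adminb"]

def write_lines(path, lines):
    # One entry per line, which is the format cron and at expect.
    f = open(path, "w")
    try:
        f.write("\n".join(lines) + "\n")
    finally:
        f.close()

for cmd in ("cron", "at"):
    write_lines("/etc/%s.deny" % cmd, ["ALL"])
    write_lines("/etc/%s.allow" % cmd, ADMIN_USERS)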

In production we push these files out through Quattor: for now we're using the filecopy component to install them, but this might be a useful extension to the cron component.

Thursday, September 11, 2008

LHC switch-on in Ireland

We had a great day yesterday at Trinity's Science Gallery, where we had a live feed from CERN running all day. There was a lot of press interest, and the grid featured heavily because the grid group here at TCD makes up half of Ireland's LHC involvement (the other half being the particle physics group at UCD, who are in LHCb). We had the GridPP real-time monitor running all day, which provoked a lot of interest and made it onto national TV. One interesting side-effect of all the publicity is that the man on the street now knows that Ireland is one of the few European countries that isn't a member of CERN -- maybe it will cause the politicians to reconsider.

Friday, July 11, 2008

geclipse: a nice grid UI at last?

I've just been playing around with geclipse and I like what I see. It wraps up the fiddly business of VOMS proxies, information system queries, etc. so you don't have to worry about them. Once I'd downloaded the latest milestone release via Eclipse's update manager and set up a VO, I was able to submit a job. The WMS was discovered from the information system. They use JSDL to describe jobs, but you fill in the description using dialog boxes -- it can also translate to JDL. There are lots of cool things that I haven't even looked at yet, like an interface to Amazon EC2 and to local batch systems (to view queues etc.), as well as visualisation plugins allowing things like interactive jobs.

This looks like a great interface for grid beginners, especially those who're already familiar with Eclipse. I knew that sooner or later someone would get round to writing some good software for submitting grid jobs!

Thursday, April 3, 2008

Who is that masked user?

Trying to get a better handle on usage of our cluster, I realised for the first time that Maui actually provides quite a nice way of displaying the efficiency of jobs. It doesn't sort them the way you'd like, but then that's what "sort" is for. Here are the "bottom 10" jobs on our system:

[root@gridgate gridmapdir]# showq -r|sort -n -k 4|sed -e 's/^[ \t]*//' -e '/^$/d'|head -n 10
JobName S Par Effic XFactor Q User Group MHost Procs Remaining StartTime
447 Jobs 447 of 683 Processors Active (65.45%)
550825_ R DEF 7.53 0.1 DE fus098 fusion wn019 1 7:45:00 Fri Mar 28 11:16:42
550438_ R DEF 9.31 0.1 DE fus098 fusion wn056 1 1:33:37 Fri Mar 28 05:05:04
550818_ R DEF 9.51 0.0 DE fus098 fusion wn072 1 5:19:15 Fri Mar 28 08:50:40
550439_ R DEF 9.65 0.1 DE fus098 fusion wn056 1 1:33:37 Fri Mar 28 05:05:04
550429_ R DEF 10.08 0.0 DE fus098 fusion wn062 1 00:39:47 Fri Mar 28 04:11:18
550437_ R DEF 10.19 0.1 DE fus098 fusion wn056 1 1:33:26 Fri Mar 28 05:05:04
550417 R DEF 10.28 0.1 DE fus098 fusion wn011 1 00:27:03 Fri Mar 28 03:58:28
550441_ R DEF 10.30 0.1 DE fus098 fusion wn056 1 1:33:40 Fri Mar 28 05:05:04

Looks like I need to find out who this fus098 guy is. Normally my method for doing this is to grep through /var/log/globus-gatekeeper.log, but I finally got sick of that and wrote a little Python script to translate the funny system used in /etc/grid-security/gridmapdir (documented here) and output the complete set of pool account mappings. I was going to implement all sorts of fancy options for outputting a particular user's mapping etc., but decided I could do what I needed with grep, so I'll leave the fancification to someone else. The script is available here, and here's some sample usage:

What are the mappings for users with "childs" in their DN?

[childss@gridgate childss]$ ./poolmapping |grep -i childs
dte053:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:dteam
solovo003:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:solovo
webcom050:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:webcom
cosmo007:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs
cosmo004:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:cosmo
gitest042:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:gitest

What DN is mapped to dte053?

[childss@gridgate childss]$ ./poolmapping |grep -i dte053
dte053:/c=ie/o=grid-ireland/ou=cs.tcd.ie/l=ra-tcd/cn=stephen o. childs:dteam
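
For anyone curious about the gridmapdir scheme itself: each pool account has an (empty) file in the directory, and when a DN is leased an account, a hard link named after the URL-encoded DN is created to that account's file, so mappings can be recovered by matching inodes. Here's a stripped-down sketch of that idea (it leaves out the VO column my script prints, and the hard-link convention is my reading of the gridmapdir docs rather than gospel):

#!/usr/bin/env python
# Rough sketch: recover pool account mappings from gridmapdir by
# grouping filenames that share an inode. Assumes the usual convention
# that a URL-encoded DN is hard-linked to its pool account's file.

import os
try:
    from urllib import unquote          # Python 2
except ImportError:
    from urllib.parse import unquote    # Python 3

GRIDMAPDIR = "/etc/grid-security/gridmapdir"

by_inode = {}
for name in os.listdir(GRIDMAPDIR):
    st = os.stat(os.path.join(GRIDMAPDIR, name))
    by_inode.setdefault(st.st_ino, []).append(name)

for names in by_inode.values():
    # Encoded DNs start with "%2f" (an escaped "/"); anything else is
    # taken to be a pool account name.
    dns = [n for n in names if n.startswith("%")]
    accounts = [n for n in names if not n.startswith("%")]
    for account in accounts:
        for dn in dns:
            print("%s:%s" % (account, unquote(dn)))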

Tuesday, March 11, 2008

But my proxy hasn't expired!

We have been plagued with a frustrating problem (especially in our test environment). Users would generate a new proxy, submit a job immediately and then get an error like this:


[childss@ui childss]$ edg-job-status https://cagraidsvr18.cs.tcd.ie:9000/nbPfABOjQHsG7IcFCJcYLg


*************************************************************
BOOKKEEPING INFORMATION:

Status info for the Job : https://cagraidsvr18.cs.tcd.ie:9000/nbPfABOjQHsG7IcFCJcYLg
Current Status: Aborted
Status Reason: Job proxy is expired.
Destination: gridgate02.testgrid.:2119/jobmanager-lcgpbs-test
reached on: Tue Mar 11 08:47:31 2008
*************************************************************


This is very annoying, as the proxy obviously hasn't expired. It turns out to be due to old jobs stuck on the RB whose proxies have expired. The problem can be cleared by logging onto the RB, identifying the old jobs for the user's DN and removing them with condor_rm. I'll leave it to someone else to explain why this arises; I hope it's been fixed in the new WMS.
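
For the record, here's a rough sketch of how the clean-up could be scripted. It's not the exact procedure we use: it only prints the condor_rm commands rather than executing them, and it assumes the user's DN appears somewhere in the condor_q -long output for their jobs (worth checking on your RB before trusting it).

#!/usr/bin/env python
# Sketch: find Condor jobs on the RB whose ClassAds mention a given DN
# and print condor_rm commands to clear them out. Only assumes that the
# DN shows up somewhere in "condor_q -long" output for those jobs.

import os
import sys

dn = sys.argv[1]  # e.g. "/c=ie/o=grid-ireland/.../cn=some user"

# condor_q -long prints one ClassAd per job, separated by blank lines.
output = os.popen("condor_q -long").read()
for classad in output.split("\n\n"):
    if dn.lower() not in classad.lower():
        continue
    attrs = {}
    for line in classad.splitlines():
        if " = " in line:
            key, value = line.split(" = ", 1)
            attrs[key.strip()] = value.strip()
    if "ClusterId" in attrs:
        print("condor_rm %s.%s" % (attrs["ClusterId"], attrs.get("ProcId", "0")))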

Tuesday, February 12, 2008

Can Quattor save the world?

Due to the wonders of planet, I've just seen this post by Andrew from Glasgow with the intriguing comment: "Are there any better tools? (is Quattor the savoiur for this type of problem)". The post stems from the frustration of cobbling together fabric management from a collection of very good but separate tools. So I thought I'd briefly describe some of the advantages of Quattor. I know many were burned in the early days of Quattor by its complexity and obscurity, but times really have changed and I suggest you revisit it. So here are just a few of the reasons I like it:


  • It's got a real programming language: this gives you data structures (e.g. hashes) and types, which allow validation of the values you type in -- i.e. it will recognise that "123.34.32.O7" (spot the deliberate mistake) isn't an IP address (see the sketch after this list).
  • The gLite configuration is up to date with YAIM (and often ahead of it): Michel Jouvin has led the way on a number of deployment issues in LCG (e.g. 64-bit WNs, space tokens, etc.) and all this stuff gets into Quattor before YAIM. (Also DNS-style VO names, Xen configuration, etc., etc.) We have found that whenever we have to do something non-standard (e.g. publishing multiple different jobmanagers from one CE in GIP) it's a doddle in Quattor, thanks to the availability of proper data structures (see above).
  • The Quattor Working Group templates are effectively a complete Grid distribution in a way that gLite itself isn't. What I mean is that they provide all you need to go from bare metal to a complete SL-based Grid site. This is ideal for new/small sites.
  • It's a true community effort: having been involved in YAIM development for MPI, I have first-hand experience of how protracted it can be to get anything fixed in gLite. In contrast, Quattor functions as a true OSS project: if there's a problem, you fix it and check it in. If it passes muster after a lightweight review, it's included in the core release. Problem solved.
  • It provides integration with installation and monitoring: the configuration profiles for a machine are used directly to generate Kickstart files, and monitoring (using Lemon) is also tightly integrated, with a raft of sensors and alerts available.
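
To make the point about type validation concrete -- in Python here rather than Pan, purely to illustrate the idea -- catching the letter-O-for-zero mistake above only takes a few lines:

#!/usr/bin/env python
# Python illustration of the kind of check a Pan type can enforce:
# every octet of an IPv4 address must be a decimal number in 0-255.

import re

def looks_like_ipv4(value):
    octets = value.split(".")
    if len(octets) != 4:
        return False
    for octet in octets:
        if not re.match(r"^\d{1,3}$", octet) or int(octet) > 255:
            return False
    return True

print(looks_like_ipv4("123.34.32.7"))   # True
print(looks_like_ipv4("123.34.32.O7"))  # False -- letter O, not zero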

Monday, February 4, 2008

Play it again, SAM

After much pain, we have finally got a SAM server up and running for Grid-Ireland (see here). We used to run an SFT server, but it was ancient, and when the client software eventually became incompatible with the UI distribution we decided to move to SAM. It looked like there were quite good installation docs available, so we assigned it to someone as a Friday afternoon project. That was two months ago! It turned out that the documentation, while good, had a few critical errors and omissions, and the support was non-existent. We've finally got it sorted (the last problem was solved when I divined, by reading the source code, that you had to define an ACL of approved DNs in the config file) and it looks like it should be useful in keeping track of our non-EGEE sites. We'll try to feed our experience back upstream, or (probably more usefully) stick it on a public page so it makes it into Google.

Friday, February 1, 2008

Stepping through the pgrade portal

As a Grid veteran, I normally submit jobs using edg-job-*, and at this stage I've almost given up hope that there could be a less painful way of getting jobs onto the Grid. I've tried Ganga in the past, and it was promising, but it didn't work well with the broken MPI on the EGEE grid, so I kind of gave up on it. The latest thing we've installed is the p-grade portal, which has been around for a good while and is allegedly getting "mature" now. The first problem after creating an account was getting my cert set up for use in the portal. I had the cert and key on my local machine, and tried to upload them to a MyProxy server to get something the portal could use. At this point I was asked for the hostname and port number of the MyProxy service. Now, I actually administer the MyProxy server, and I still had to ask a colleague which port it ran on. There is no way in the world a user should have to know this, but apparently you can't set defaults in the portal. We're still running version 2.5, so maybe it's fixed in 2.6.

Once I got my cert up and running, I went to submit a job. My first job, the challenging "/bin/hostname" test, failed. I didn't expect that. Apparently pgrade uploads a binary from your local machine by default, rather than executing something hosted on the remote machine. As my local machine is FC6 and the execute node is SL3, the uploaded hostname binary wouldn't run. So if you want to run the hostname program on the remote host, you have to upload a script which runs /bin/hostname (something like the sketch below).
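
For the record, the workaround is tiny: a wrapper like this (Python here, though a shell script would do just as well) runs on whatever the execute node happens to be and reports its hostname.

#!/usr/bin/env python
# Trivial portable wrapper to upload instead of a compiled binary:
# prints the hostname of whatever node the job lands on.
import socket
print(socket.gethostname())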

The next challenge was how to add input files to the job. It turns out that this is done by adding "ports" to the job node you define. Everything in pgrade is a workflow, so files are ports that allow data to flow between nodes (or from the local machine). It takes a little while to get used to this approach.