Viewing remote X graphics from the cluster

The HPC clusters are not designed for graphics, but graphical work is possible. The compute nodes, except for the GPU nodes, have low-capability embedded graphics cards, so simple graphics work acceptably. Complex graphics such as grid generation and Matlab plots are usually better done on a graphics workstation (though moderately complex graphics from a GPU node are possible; see the end of this page).

There are two ways to connect compute node graphics to your workstation: (1) X over ssh, and (2) VNC over ssh. X is easier to set up, but VNC is faster.

X over ssh

X over ssh can be sufficient if you have simple graphics and a good wired ethernet connection on campus. X is an old and inefficient protocol and will be slow over remote links or wireless. X has its own remote display capability, but it is very insecure and is firewalled off, so X has to be routed over ssh, which encrypts the traffic and makes it slower still. Your local machine should be either a Mac or Linux workstation, both of which support X natively, or Windows with an X server such as XMing. On Windows, a command-line ssh program such as plink.exe from the PuTTY site is also helpful. Here is the sequence:

(1) log in with a text terminal to the Razor or Trestles cluster frontend by your usual process
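
For example, from a Mac/Linux terminal, or with plink.exe on Windows (rfeynman is the example username used throughout this page):

ssh rfeynman@razor.uark.edu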

(2) start an interactive batch job with qsub -I

qsub -I -q (queue) -l nodes=1:ppn=(value for queue) -l walltime=(value for queue)

Trestles and Razor examples:

qsub -I -q q30m32c -l nodes=1:ppn=32 -l walltime=30:00
qsub -I -q tiny12core -l nodes=1:ppn=12 -l walltime=3:00:00

On Windows, start XMing. Start a graphics terminal (an XMing terminal on Windows, or a native terminal on Linux/Mac) and log in to the same cluster frontend with X forwarding enabled:

ssh -Y trestles.uark.edu/razor.uark.edu (Mac/Linux)
plink.exe -X trestles.uark.edu/razor.uark.edu (Windows)

When your interactive job starts in the first terminal and shows your compute node, log in from the second terminal to the compute node selected for the job, for example:

ssh -Y tres0804/compute1279

X program displays should then show up directly on your workstation.
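
A quick way to verify that forwarding is working is to run a small X program from the compute node prompt, for example (assuming xclock is installed on the compute node; any X program will do):

xclock &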

It is possible to do X forwarding with a single terminal, but it does not work well with many programs (Matlab in particular), so we don't recommend it; you can still try it. Start a graphics terminal as above (Mac/Linux/XMing). Log in to the frontend with X graphics (ssh -Y or plink.exe -X). Start an interactive batch job with X forwarding by adding “-X” to the command: qsub -X -I -q (queue) -l nodes=1:ppn=(ppn) -l walltime=(time). When your compute node session starts, it should have graphics.
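
A concrete single-terminal example, reusing the tiny12core values from above (adjust the queue and limits for your job):

qsub -X -I -q tiny12core -l nodes=1:ppn=12 -l walltime=3:00:00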

VNC

VNC is much more efficient and faster than remote X, particularly over slower links. The compute nodes have TigerVNC installed. Your workstation will need a VNC client installed; all VNC implementations should inter-operate. TigerVNC, TightVNC, and others have clients for multiple operating systems. The process begins the same way as above:

(1) log in with a text terminal to the Razor or Trestles cluster frontend

(2) start an interactive batch job

When you get an interactive compute node, run on it one of the scripts vnc-display-linux.sh, vnc-display-mac.sh, or vnc-display-windows.sh, according to your workstation. You should get output like the following. The VNC session has a random two-digit index which appears several times; it is “17” in this example.

compute1102:rfeynman:$ vnc-display-linux.sh
Existing VNC configuration found in /home/rfeynman/.vnc
Cleaning up any VNC stale sessions...
starting VNC server on port 5917

Desktop 'TurboVNC: compute1102:17 (rfeynman)' started on display compute1102:17

Starting applications specified in /home/rfeynman/.vnc/xstartup.turbovnc
Log file is /home/rfeynman/.vnc/compute1102:17.log

Please copy and paste this command into a terminal running on your
local **DESKTOP MACHINE**:

  ssh -TMNf rfeynman@razor.uark.edu -L5917:compute1102:5917; vncviewer localhost:17

In this Linux example, the script is asking you to start a second text terminal on your workstation and copy and paste the above ssh and vncviewer commands into it.
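
For reference, here is what that suggested command does, using the same example values:

ssh -TMNf rfeynman@razor.uark.edu -L5917:compute1102:5917
    # -f backgrounds ssh; -T, -M, -N mean no terminal, connection-sharing master mode, no remote command;
    # -L5917:compute1102:5917 forwards local port 5917 to port 5917 on compute1102 through the frontend
vncviewer localhost:17
    # VNC display :17 is TCP port 5900+17 = 5917, which the tunnel makes reachable on localhost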

The Mac/Linux/Windows scripts are identical except for the suggested workstation commands printed at the end. vnc-display-windows.sh, which assumes TightVNC on Windows, will end with something like:

LOCALNODE> plink.exe rfeynman@razor.uark.edu -L 5933:compute1102:5933
LOCALNODE> tightvnc localhost:33

vnc-display-mac.sh will produce something like this, with the ssh command the same as on Linux, assuming the built-in Mac VNC client:

ssh -TMNf rfeynman@razor.uark.edu -L5961:compute1102:5961; open vnc://localhost:5961

The script on the compute node has no way of knowing which VNC client your workstation actually has, so on a Mac with TigerVNC installed you may need to run vnc-display-linux.sh instead to get the right workstation commands printed for TigerVNC's vncviewer client.

On your first run, you will need to set up a VNC password, which controls access to your VNC session and has nothing to do with your cluster or UArk password. On subsequent runs, your local VNC client will ask for that password when connecting.
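
If you need to change that VNC password later, the vncpasswd utility that ships with TigerVNC/TurboVNC can be run on the cluster; by default it rewrites the password file under ~/.vnc:

vncpasswd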

After pasting these commands into a terminal on your workstation, you should get a Linux Gnome session inside a window on your workstation, from which you can open an xterm or gnome-terminal and run graphical programs.
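
For example, from a gnome-terminal inside the VNC desktop (the module name below is an assumption; check module avail for what your site actually provides):

module load matlab     # assumed module name
matlab &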

TurboVNC X with OpenGL

It is possible to run moderately complex OpenGL programs using TurboVNC from the GPU nodes, which have hardware OpenGL support. Please contact HPC-SUPPORT.
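
For reference, hardware-accelerated OpenGL inside TurboVNC is typically launched through VirtualGL's vglrun wrapper; whether and how VirtualGL is set up on the GPU nodes is an assumption here, so confirm the exact procedure with HPC-SUPPORT:

vglrun glxgears     # assumes VirtualGL is installed on the GPU node; glxgears is just a simple OpenGL test program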