Data Transfer to and from AHPCC Clusters

A dedicated data mover node is available, reachable as tgv.uark.edu from the campus network and as dm.uark.edu from outside campus. It should be used for moving data to and from the clusters. tgv/dm has a 10 Gb/s network connection and a dedicated 21 TB storage system mounted at /local_storage. Regular login shells are blocked. The allowed protocols are:

  • scp (secure copy)
  • sftp (secure ftp)
  • rsync

To upload a data file from the current directory on your local desktop machine to your /storage directory on Razor:

pawel@localdesktop$ scp localfile.dat pwolinsk@tgv.uark.edu:/storage/pwolinsk/

To download a data file from your /storage directory on Razor to the current directory on your local desktop machine:

pawel@localdesktop$ scp pwolinsk@tgv.uark.edu:/storage/pwolinsk/remotefile.dat .
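
The other allowed protocols work the same way against tgv/dm. For example, rsync is convenient for large or repeated transfers because it can resume interrupted copies and skips files that are already up to date; a sketch, using an illustrative local directory named results/:

pawel@localdesktop$ rsync -av results/ pwolinsk@tgv.uark.edu:/storage/pwolinsk/results/

sftp provides an interactive session; a sketch, with an illustrative file name:

pawel@localdesktop$ sftp pwolinsk@tgv.uark.edu
sftp> put localfile.dat /storage/pwolinsk/
sftp> quit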

You will also have a staging directory on tgv/dm at /local_storage/$USER/. A new Globus Online instance on tgv/dm is in preparation, and login shells are available for special situations such as batch wget downloads from an HTTP server.
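
For instance, a large archive could be staged on the data mover's local disk before being unpacked or copied onward; a minimal sketch, with an illustrative file name:

pawel@localdesktop$ scp bigdata.tar.gz pwolinsk@tgv.uark.edu:/local_storage/pwolinsk/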

Data Transfer between Razor & Trestles Clusters

A special node named bridge is set aside for moving data between the storage systems of the Razor and Trestles clusters. The bridge node has the home, storage and scratch file systems of both Razor and Trestles mounted under these directories:

Trestles file systems

  • /trestles/home/ (also mounted at /home)
  • /trestles/storage/ (also mounted at /storage)
  • /trestles/scratch/ (also mounted at /scratch)

When you log in to bridge, your working directory will be your Trestles home directory. Also recall that the persistent Trestles /scratch/$USER partition is being phased out; Trestles scratch will be available only as /scratch/$PBS_JOBID for each batch job.
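
Inside a batch job, work in the per-job scratch directory; a minimal sketch of the relevant lines in a PBS script (the input file name is illustrative):

cd /scratch/$PBS_JOBID
cp /storage/pwolinsk/input.dat .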

Razor file systems

  • /razor/home/
  • /razor/storage/
  • /razor/scratch/

Although a direct secure copy to the bridge node from Razor or Trestles compute nodes or front ends is possible, for the best performance we recommend logging in to the bridge node directly and using the cp or mv commands to move files between the cluster file systems:

tres-l1:pwolinsk:$ ssh bridge
Last login: Fri Feb 19 14:03:31 2016 from tres-l1
No Modulefiles Currently Loaded.
bridge:pwolinsk:$ cp /trestles/home/pwolinsk/memusage /razor/home/pwolinsk/
bridge:pwolinsk:$ exit
logout
Connection to bridge closed.
tres-l1:pwolinsk:$ 

Running cp directly on bridge is faster because it uses the 40 Gb/s InfiniBand interconnect, while scp between nodes goes over 1 Gb/s Ethernet.
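
A whole directory tree can be moved the same way with a recursive copy; a sketch, using an illustrative directory name:

bridge:pwolinsk:$ cp -r /trestles/storage/pwolinsk/projectA /razor/storage/pwolinsk/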
