Data Access, Imaging Tools & Protocols

Remote Access to Diamond Light Source (DLS) Data Servers

The details below refer to accessing data from Beamline I13 (X-ray Imaging and Coherence).

Please note that all data created in /dls/tmp/ will be deleted after 30 days, so make sure you transfer it to Dropbox or other online storage at least weekly.
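For example, a minimal sketch of such a weekly transfer using rsync over ssh (the visit folder and the local destination are placeholders; replace YourFedID with your federated ID):

$ rsync -avP YourFedID@nx.diamond.ac.uk:/dls/tmp/VISIT_FOLDER/ ~/dls_tmp_backup/   # archive mode, verbose, resumable with progress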

Registration & Data Management

New users need to complete the following two steps before going any further:

  • DLS Data Archive (access to long-term storage for scans older than 40 days; see data retrieval for more tips).
  • Savu, the preferred Python-based tool for reconstructing raw DLS tomography data (i.e. radiographic projections); see the example sketch below.
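A rough sketch of a typical Savu reconstruction run on the DLS cluster (assuming a savu environment module is available, as survos is below; the year, visit, scan file, process list and output directory are all placeholders):

$ module load savu
$ savu_config                     # interactively build or edit a process list (.nxs)
$ savu /dls/i13/data/YYYY/VISIT/SCAN.nxs process_list.nxs /dls/tmp/OUTPUT_DIR/   # use savu_mpi for a full parallel run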

Setting up and Connecting

Before you start, please install the NoMachine NX Client.

  1. Run NoMachine and click on New Connection
  2. Leave protocol as NX
  3. Enter nx.diamond.ac.uk as the host (and leave 4000 as the port)
  4. Leave default Password Authentication method and No Proxy options
  5. Tick Create a Desktop Link to get a handy shortcut

From now on, all you need to do is double-click on the NoMachine desktop link and enter your FedID (federated user ID) and password. Note that you will need to select an existing Virtual Desktop or create a new one.

Alternatively, you can connect via ssh (e.g. to run a download operation remotely with wget https…):

$ ssh YourFedID@nx.diamond.ac.uk
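If the download is long-running, it is safer to start it so that it survives a dropped connection; once logged in, something like the following should work (the URL and visit folder are placeholders):

$ cd /dls/tmp/VISIT_FOLDER/
$ nohup wget -c https://EXAMPLE.ORG/large_dataset.tar &   # -c resumes a partial download; nohup keeps it running after logout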

Remote SuRVoS operation

Run the following commands in a Terminal (Applications --> System Tools --> Terminal):

$ module load hamilton       # set up access to the Hamilton GPU cluster
$ qlogin -P i13 -l gpu=1 -l gpu_arch=Pascal -l exclusive   # interactive session with one Pascal GPU, exclusive node, under the i13 project
$ module load survos
$ survos &                   # launch SuRVoS in the background

See also SuRVoS Tutorial Materials.



Technical info useful for troubleshooting:
  • Linux Imaging Workstation: i13-ws010.diamond.ac.uk (172.23.113.76, NX port 4000)
  • List of compute nodes: qhost
Other graphical options for high GPU / memory usage:

module load global/cluster
qlogin -q high.q@@com14 -l exclusive -l gpu=1,nvidia_tesla -P i13   # interactive session on the com14 host group with an Nvidia Tesla GPU

Similarly, to run Avizo 2019.1 (on DLS campus only):

module load avizo/2019.1; avizo

Further details are available in the online beamline manual.

Igor Chernyavsky, 2018/03/24 19:00

X-ray Imaging Centre at Alan Turing Bldg

Please book PC1 in advance of your session, and make sure to save your data to an external HDD before the end of the session.

Running SuRVoS

Try either

  Running the shortcut SURVOS on the Desktop (C:\Users\...\Anaconda2\envs\ccpi\Scripts\SurVos.exe)

or

  [Start] --> Anaconda Prompt
  > activate ccpi
  > SuRVoS

Running Avizo

Note that there are Light and Full versions. One 'full' licence consumes the equivalent of four 'light' licences (out of a total of 32), so a 'full' licence is not always available.

Igor Chernyavsky, 2018/03/26 18:00

Maths Compute Servers at Alan Turing Bldg

You need a Maths Linux account to access the servers. If you do not have one, please contact Chris Paul, stating your UoM username and the reason for access.

  • On Linux or MacOS: open a Terminal emulator and run
    $ ssh -Y username@e-a07maat1101X.it.manchester.ac.uk
  • On Windows: install and run PuTTY. Enter e-a07maat1101X.it.manchester.ac.uk as the Host Name, SSH as the Connection Type and hit [Open].

Here username is your UoM username, and X is the reference letter ('a' to 'n') from the table below (if unsure, use 'a' for cs1 as a starting point).

Note 1: If you are using MacOS or Windows, you also need to install and run an X Server first (see more details on X-forwarding).

Note 2: On a university-managed Linux PC, you could connect directly via a name alias, e.g. $ ssh -Y cs1 .
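On a self-managed Linux or MacOS machine you can create a similar alias yourself by adding an entry to ~/.ssh/config (a sketch; the alias name, the letter 'a' and the username are illustrative):

Host cs1
    HostName e-a07maat1101a.it.manchester.ac.uk
    User UoM_USERNAME
    ForwardX11 yes
    ForwardX11Trusted yes

After this, $ ssh cs1 behaves like the full ssh -Y command above.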

| Ref (X) | Name    | Core Count | Core Speed and Type            | RAM (GiB) | Note                                                          |
| a       | cs1     | 12         | 3.4 GHz (Intel Xeon E5-2643v3) | 768       | Memory-intensive                                              |
| b       | cs2     | 8          | 3.3 GHz (Intel Xeon E5-2643)   | 128       |                                                               |
| c       | cs3     | 8          | 3.3 GHz (Intel Xeon E5-2643)   | 128       |                                                               |
| d       | cs4     | 8          | 3.3 GHz (Intel Xeon E5-2643)   | 128       |                                                               |
| e       | cs5     | 12         | 2.5 GHz (Intel Xeon E5-2430v2) | 128       | [offline]                                                     |
| f       | cs6     | 12         | 2.5 GHz (Intel Xeon E5-2430v2) | 128       |                                                               |
| g       | cs7     | 12         | 2.5 GHz (Intel Xeon E5-2430v2) | 128       |                                                               |
| h       | cs8     | 12         | 2.5 GHz (Intel Xeon E5-2430v2) | 128       |                                                               |
| i       | cs9     | 8 (x2)     | 3.0 GHz (Intel Xeon E5-2623v3) | 256       |                                                               |
| j       | cs10    | 8 (x2)     | 3.0 GHz (Intel Xeon E5-2623v3) | 256       | no COMSOL                                                     |
| k       | cs11    | 12 (x2)    | 3.4 GHz (Intel Xeon 6128)      | 1280      | CPU- & Memory-intensive; no COMSOL                            |
| l       | cs12    | 12         | 2.5 GHz (Intel Xeon E5-2430v2) | 192       |                                                               |
| m       | cs13    | 56 (x2)    | 2.2 GHz (Intel Xeon 6238R)     | 1024      | CPU- & Memory-intensive; 892 GB SSD (/tmp)                    |
| n       | cs14    | 56 (x2)    | 2.2 GHz (Intel Xeon 6238R)     | 1024      | CPU- & Memory-intensive; 892 GB SSD (/tmp)                    |
|         | minerva | 20 (x2)    | 2.2 GHz (Intel Xeon 4114)      | 1536      | Memory- & GPU-intensive (2x Nvidia P100 16GB); 2 TB HDD       |
|         | citadel | 8 (x2)     | 3.4 GHz (Intel Xeon E5-1680v4) | 256       | Visualisation & GPU-intensive (Nvidia GTX1080 8GB); 8 TB HDD  |

Note that the cs1-cs8 cores run in single-threaded mode (hyper-threading is switched off).

System info:

free -h           # RAM (or: sudo dmidecode -t memory)
lscpu             # CPU parameters
glxinfo -B        # GPU memory
sudo lshw -short  # further detailed info (omit sudo for partial info)

Load info:

top (followed by pressing the [t], [1] and [m] keys)

Running COMSOL

$ module load COMSOL/5.6 #or COMSOL/6.0 
$ comsol &

Note 1: If there are errors related to OpenGL, try

$ comsol -3drend sw &

Note 2: COMSOL is not available on compute servers cs10 and cs11.

Note 3: You can check the available software versions with

$ module avail

To install COMSOL on a self-managed PC or laptop, download the installation package [6 GB] (a multi-platform ISO disk image supporting Linux, MacOS and Windows) and use the following details during the setup:

licence port@hostname: 15700@lfarm4.eps.manchester.ac.uk; licence number: 7076735
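For long-running models on the compute servers it can be convenient to run COMSOL without the GUI; a sketch of a batch run (the model file names are placeholders):

$ module load COMSOL/5.6
$ nohup comsol batch -inputfile model.mph -outputfile model_solved.mph -batchlog model.log &   # solves the model, keeps running after logout, logs to model.log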

Running MATLAB

$ module load matlab2017a
$ matlab &
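To run a script non-interactively (e.g. over a plain ssh session without X-forwarding), something like the following should work (the script name is a placeholder):

$ matlab -nodisplay -nosplash -r "run('my_script.m'); exit" > matlab_run.log 2>&1 &   # headless run, output captured in matlab_run.log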

Other software

GraphPad Prism 8.0 Installer - see instructions.

UoM Research Software Repository

Note: When running your code on the University compute cluster (known as the Computational Shared Facility, CSF), use the following to enable Internet access (e.g. to install the necessary packages for Julia, Python, R, etc.):

module load tools/env/proxy
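For example, to install a Python package into your user area on the CSF once the proxy module is loaded (the package name is just an illustration, and a Python/Anaconda module is assumed to be loaded already):

pip install --user scikit-image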

Igor Chernyavsky, 2021/07/22 15:00

Mounting Miscellaneous Remote File Systems on Linux

Before you start, make sure there is an empty directory (e.g. ~/Shared) in your home directory to use as a mount point.
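For example, to create all the mount points used below in one go (assuming a bash shell):

mkdir -p ~/Shared/{RDS,PDrive,GDrive,Dropbox,DLS}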

  • Mount UoM RDS-SSH Data Share
    sshfs UoM_USERNAME@rds-ssh.itservices.manchester.ac.uk:/mnt/eps01-rds/Placental-Biophysics-Group/ ~/Shared/RDS/
    fusermount -u ~/Shared/RDS/   # unmount
  • Mount UoM P-Drive
    sudo mount -t cifs -o user=UoM_USERNAME,domain=ds.man.ac.uk,sec=ntlmsspi,uid=`id -u`,gid=`id -g` //nask.man.ac.uk/home$ ~/Shared/PDrive/
    sudo umount ~/Shared/PDrive/   # unmount
  • Mount Google Drive via gdfuse
    google-drive-ocamlfuse ~/Shared/GDrive/
    fusermount -u ~/Shared/GDrive/   # unmount
  • Mount Dropbox via dbxfs (N.B. use the '-o nonempty' option only if you are sure; you might also need to install the following Ubuntu packages: libfuse2 build-essential libssl-dev libffi-dev python3-pip)
    dbxfs ~/Shared/Dropbox/
    fusermount -u ~/Shared/Dropbox/   # unmount

    For uploading a large file (>~ 10 GB) or multiple files, use the dropbox_uploader script:
    ./dropbox_uploader -s -p upload /LOCAL_FOLDER /REMOTE_FOLDER

  • Mount DLS I13 Data Storage (only available for 60 days after the beamtime)
    sshfs FedID_USERNAME@nx.diamond.ac.uk:/dls/i13/data/ ~/Shared/DLS/
    fusermount -u ~/Shared/DLS/   # unmount

Igor Chernyavsky, 2019/05/24 21:12
