Data Access, Imaging Tools & Protocols

===== Remote Access to Diamond Light Source (DLS) Data Servers =====

The details below refer to accessing [[http://www.diamond.ac.uk/Beamlines/Materials/I13.html|Beamline I13]] (Coherence X-ray Imaging). Please note that all data created in ''/dls/tmp/'' will be **deleted after 30 days**, so make sure you transfer it to Dropbox or other online storage at least weekly.

==== Registration & Data Management ====

New users need to complete these two steps before going any further:
  * [[https://uas.diamond.ac.uk/uas/?y=2#register | FedID registration ]]
  * [[http://www.diamond.ac.uk/Users/UserGuide/Before-you-Arrive/Safety-Video-and-Test.html | Safety Training]]

**[[https://icat.diamond.ac.uk/#/login | DLS Data Archive ]]** gives access to long-term storage for scans older than 40 days (see [[https://www.diamond.ac.uk/Users/Experiment-at-Diamond/IT-User-Guide/Not-at-DLS/Retrieve-data.html|data retrieval]] for more tips).

[[https://savu.readthedocs.io/en/latest/user_guides/user_training/ | Savu ]] is the preferred Python-based tool for reconstructing raw DLS tomography data (i.e. radiographic projections).

==== Setting Up and Connecting ====

Before you start, please install the [[https://www.nomachine.com/product&p=NoMachine%20Enterprise%20Client|NoMachine NX Client]].

  - Run ''NoMachine'' and click on //New Connection//
  - Leave the protocol as ''NX''
  - Enter **''nx.diamond.ac.uk''** as the host (and leave 4000 as the port)
  - Leave the default //Password Authentication// method and //No Proxy// options
  - Tick //Create a Desktop Link// to get a handy shortcut

From now on, all you need to do is double-click on the NoMachine desktop link and enter your //Federal User ID// and password. Note that you will need to select (or create) a //Virtual Desktop//.

Alternatively, you can connect via ''ssh'' (e.g. to run a download operation remotely with ''wget https...''):

  $ ssh YourFedID@nx.diamond.ac.uk

==== Remote SuRVoS Operation ====

Run the following commands in a terminal (''Applications --> System Tools --> Terminal''):

  $ module load hamilton
  $ qlogin -P i13 -l gpu=1 -l gpu_arch=Pascal -l exclusive
  $ module load survos
  $ survos &

See also the {{https://diamondlightsource.github.io/SuRVoS/docs/tutorials/2016_11_SuRVoS_Workshop_Final_comp.pdf|SuRVoS Tutorial Materials}}.

----

Technical info useful for troubleshooting:

Linux Imaging Workstation: ''i13-ws010.diamond.ac.uk'' (172.23.113.76, NX port 4000)\\
List of compute nodes: ''qhost''

Other possible graphical options for high GPU / memory usage:

  module load global/cluster
  qlogin -q high.q@@com14 -l exclusive -l gpu=1,nvidia_tesla -P i13

Similarly, to run Avizo 2019.1 (//on the DLS campus only//):

  module load avizo/2019.1; avizo

Further details are available in the [[http://www.diamond.ac.uk/Beamlines/Mx/I24/I24-Manual/Remote-Access/Connection-to-Diamond/data-processing.html|online beamline manual]].
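Given the 30-day purge of ''/dls/tmp/'' noted above, it helps to check weekly which files are approaching the deadline before transferring them. A minimal sketch (the ''list_at_risk'' helper name and the 20-day threshold are illustrative, not DLS tooling):

```shell
# Sketch: list files not modified for more than a given number of days,
# i.e. approaching the 30-day /dls/tmp purge window.
# 'list_at_risk' is an illustrative helper, not a DLS-provided command.
list_at_risk() {
    # $1: directory to scan; $2: age threshold in days
    find "$1" -type f -mtime +"$2" -print
}

# Example (run weekly, then copy anything reported off-site):
#   list_at_risk /dls/tmp/YourFedID 20
```

Anything it reports can then be pulled off-site, e.g. with ''rsync'' over the ''nx.diamond.ac.uk'' gateway.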
--- //Igor Chernyavsky, 2018/03/24 19:00// ---
===== X-ray Imaging Centre at Alan Turing Bldg =====

Please book ''PC1'' in advance of your session, and please make sure to //save your data to an external HDD// before the end of the session.

==== Running SuRVoS ====

Either run the ''SURVOS'' shortcut on the Desktop (''C:\Users\...\Anaconda2\envs\ccpi\Scripts\SurVos.exe''), or from [Start] --> Anaconda Prompt run:

  > activate ccpi
  > SuRVoS

==== Running Avizo ====

Note that there are ''Light'' and ''Full'' versions. One 'full' licence takes 4 'light' tokens (out of a total of 32), so a 'full' version is not always available (e.g. eight concurrent 'full' sessions would exhaust all 32 tokens).
--- //Igor Chernyavsky, 2018/03/26 18:00// ---
===== Maths Compute Servers at Alan Turing Bldg =====

You need a Maths Linux account to access the servers. If you do not have one, please contact Chris Paul, stating your UoM username and the reason for access.

  * **On Linux or MacOS**: open a //Terminal emulator// and run\\ ''$ ssh -Y **username**@e-a07maat1101X.it.manchester.ac.uk''
  * **On Windows**: install and run [[https://www.putty.org|PuTTY]]. Enter ''e-a07maat1101**X**.it.manchester.ac.uk'' as the //Host Name//, SSH as the //Connection Type//, and hit [Open].

Here **username** is your UoM username, and ''X'' is the reference letter ('a' to 'n') from the table below (if unsure, use **''a''** for CS1 as a starting point).

**Note 1**: If you are using MacOS or Windows, you also need to install and run an ''X Server'' first (see more details on [[https://kb.iu.edu/d/bdnt|X-forwarding]]).

**Note 2**: On a university-managed Linux PC, you can connect directly via a name alias, e.g. ''$ ssh -Y cs1''.

^ Ref (X) ^ Name ^ Core Count ^ Core Speed and Type ^ RAM (GiB) ^ Note ^
| a | cs1 | 12 | 3.4 GHz (Intel Xeon E5-2643v3) | 768 | Memory-intensive |
| b | cs2 | 8 | 3.3 GHz (Intel Xeon E5-2643) | 128 | |
| c | cs3 | 8 | 3.3 GHz (Intel Xeon E5-2643) | 128 | |
| d | cs4 | 8 | 3.3 GHz (Intel Xeon E5-2643) | 128 | |
| e | cs5 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | [offline] |
| f | cs6 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| g | cs7 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| h | cs8 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| i | cs9 | 8 (x2) | 3.0 GHz (Intel Xeon E5-2623v3) | 256 | |
| j | cs10 | 8 (x2) | 3.0 GHz (Intel Xeon E5-2623v3) | 256 | no COMSOL |
| k | cs11 | 12 (x2) | 3.4 GHz (Intel Xeon 6128) | 1280 | CPU- & Memory-intensive; no COMSOL |
| l | cs12 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 192 | |
| m | cs13 | 56 (x2) | 2.2 GHz (Intel Xeon 6238R) | 1024 | CPU- & Memory-intensive; 892GB SSD (/tmp) |
| n | cs14 | 56 (x2) | 2.2 GHz (Intel Xeon 6238R) | 1024 | CPU- & Memory-intensive; 892GB SSD (/tmp) |
| | minerva | 20 (x2) | 2.2 GHz (Intel Xeon 4114) | 1536 | Memory- & GPU-intensive (2x Nvidia P100 16GB); 2 TB HDD |
| | citadel | 8 (x2) | 3.4 GHz (Intel Xeon E5-1680v4) | 256 | Visualisation & GPU-intensive (Nvidia GTX1080 8GB); 8 TB HDD |

Note that cs1-cs8 cores run in single-thread mode (hyper-threading is switched off).

System info:

  free -h            # RAM (or: sudo dmidecode -t memory)
  lscpu              # CPU parameters
  glxinfo -B         # GPU memory
  sudo lshw -short   # further detailed info (omit sudo for partial info)

Load info:

  top   # followed by pressing the [t], [1] and [m] keys

==== Running COMSOL ====

  $ module load COMSOL/5.6   # or COMSOL/6.0
  $ comsol &

**Note 1**: If there are errors related to OpenGL, try

  $ comsol -3drend sw &

**Note 2**: COMSOL is //not// available on compute servers ''cs10'' and ''cs11''.

**Note 3**: You can check the available software versions with

  $ module avail

To install COMSOL on a self-managed PC or laptop, download the [[https://livemanchesterac-my.sharepoint.com/:u:/g/personal/chris_paul_manchester_ac_uk/EUu_mH4g5TxOj2Zj02jPpdYBN5fD0RfE5BK9lPLB0RzF7Q?e=4%3aD1k4ox&at=9 | installer [6 GB]]] (a multi-platform ISO disk image supporting Linux, MacOS and Windows) and use the following details during the setup:

  licence port@hostname: 15700@lfarm4.eps.manchester.ac.uk
  licence number: 7076735

==== Running MATLAB ====

  $ module load matlab2017a
  $ matlab &

==== Other software ====

[[https://manchester.saasiteu.com/Modules/SelfService/#knowledgeBase/view/AFC87036FF584A79ABFABA678D76FBA7 | GraphPad Prism 8.0 Installer]] - see instructions.

[[http://www.itservices.manchester.ac.uk/software/ | UoM Research Software Repository]]

**Note**: When running your code on the University compute cluster (the Computational Shared Facility, [[http://ri.itservices.manchester.ac.uk/csf3 | CSF]]), use the following to enable Internet access (e.g. to install the necessary packages for Julia, Python, R, etc.):

  module load tools/env/proxy
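The name alias from Note 2 can also be reproduced on a self-managed machine with a ''~/.ssh/config'' entry along these lines (a sketch: the username is a placeholder, and the hostname follows the ''X'' = 'a' pattern for ''cs1'' from the table above):

```
# Hypothetical ~/.ssh/config entry so that 'ssh cs1' works off-campus too
Host cs1
    HostName e-a07maat1101a.it.manchester.ac.uk
    User your_uom_username
    # equivalent to the -Y flag used above
    ForwardX11 yes
    ForwardX11Trusted yes
```

With this in place, ''ssh cs1'' behaves like the full ''ssh -Y username@e-a07maat1101a.it.manchester.ac.uk'' command.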
--- //Igor Chernyavsky, 2021/07/22 15:00// ---
===== Mounting Misc Remote File Systems on Linux =====

Before you start, make sure there is an empty directory (e.g. ''~/Shared'') in your ''home'' directory to use as a mount point.

  * Mount the UoM **RDS-SSH Data Share**:

  sshfs UoM_USERNAME@rds-ssh.itservices.manchester.ac.uk:/mnt/eps01-rds/Placental-Biophysics-Group/ ~/Shared/RDS/
  fusermount -u ~/Shared/RDS/

  * Mount the UoM **P-Drive**:

  sudo mount -t cifs -o user=UoM_USERNAME,domain=ds.man.ac.uk,sec=ntlmsspi,uid=`id -u`,gid=`id -g` //nask.man.ac.uk/home$ ~/Shared/PDrive/
  sudo umount ~/Shared/PDrive/

  * Mount **Google Drive** via [[https://github.com/astrada/google-drive-ocamlfuse/|gdfuse]]:

  google-drive-ocamlfuse ~/Shared/GDrive/
  fusermount -u ~/Shared/GDrive/

  * Mount **Dropbox** via [[https://github.com/rianhunter/dbxfs|dbxfs]] (N.B. use the ''-o nonempty'' option only if you are sure; you might also need to install the following ''Ubuntu'' packages: ''libfuse2 build-essential libssl-dev libffi-dev python3-pip''):

  dbxfs ~/Shared/Dropbox/
  fusermount -u ~/Shared/Dropbox/

For uploading a large file (>~ 10 GB) or multiple files, use the [[https://github.com/andreafabrizi/Dropbox-Uploader|dropbox_uploader]] script:

  ./dropbox_uploader -s -p upload /LOCAL_FOLDER /REMOTE_FOLDER

  * Mount **DLS I13 Data Storage** (only available for **60 days** after the beamtime):

  sshfs FedID_USERNAME@nx.diamond.ac.uk:/dls/i13/data/ ~/Shared/DLS/
  fusermount -u ~/Shared/DLS/
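All of the mounts above expect an empty mount point; a small guard avoids accidentally mounting over existing files (the usual reason the ''-o nonempty'' warning appears). A sketch only: ''mount_if_empty'' is an illustrative helper, not part of ''sshfs'' or ''dbxfs''.

```shell
# Sketch: run a mount command only if the target directory exists and is
# empty. 'mount_if_empty' is an illustrative wrapper, not a standard tool.
mount_if_empty() {
    dir="$1"; shift
    if [ -d "$dir" ] && [ -z "$(ls -A "$dir")" ]; then
        "$@"    # e.g. sshfs ... "$dir"  or  dbxfs "$dir"
    else
        echo "refusing to mount: $dir is missing or not empty" >&2
        return 1
    fi
}

# Example (placeholders as in the commands above):
#   mount_if_empty ~/Shared/DLS/ sshfs FedID_USERNAME@nx.diamond.ac.uk:/dls/i13/data/ ~/Shared/DLS/
```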
--- //Igor Chernyavsky, 2019/05/24 21:12// ---