====== Understanding living systems ======
**[[https://icat.diamond.ac.uk/#/login | DLS Data Archive ]]** (accessing long-term storage for scans older than 40 days; see [[https://www.diamond.ac.uk/Users/Experiment-at-Diamond/IT-User-Guide/Not-at-DLS/Retrieve-data.html|data retrieval]] for more tips).

[[https://savu.readthedocs.io/en/latest/user_guides/user_training/ | Savu ]] is a preferred Python-based tool for reconstructing raw DLS tomography data (i.e. radiographic projections).
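A minimal sketch of a typical Savu session on a DLS workstation (the scan file, visit directory and output folder below are placeholders; see the linked training guide for the exact workflow):
<code>
$ module load savu
$ savu_config        # interactively build or edit a process list (saved as a .nxs file)
$ savu_mpi /dls/i13/data/YYYY/visit/scan.nxs process_list.nxs /dls/i13/data/YYYY/visit/processing/
</code>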
==== Setting-up and Connecting ====
From now on, all you need to do is double-click the desktop NoMachine link and enter your //Federal User ID// and password. Note that you will need to select an existing //Virtual Desktop// (or create a new one).
Alternatively, you could connect via ''ssh'' (e.g. to run a download operation remotely: ''wget https...''):
<code>
$ ssh YourFedID@nx.diamond.ac.uk
</code>
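For a long transfer it may help to detach the download from your terminal so it survives a dropped connection; a sketch with a placeholder URL:
<code>
$ nohup wget -c https://example.org/large_scan.tar.gz > wget.log 2>&1 &   # -c resumes a partial download
</code>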
==== Remote SuRVoS operation ====
Run the following commands in a Terminal: ''Applications --> System Tools --> Terminal''.
<code>
$ module load hamilton
$ qlogin -P i13 -l gpu=1 -l gpu_arch=Pascal -l exclusive
$ module load survos
$ survos &
</code>
List of Compute Nodes: ''qhost''\\
Other possible graphical options for high GPU / Memory Usage:
<code>
module load global/cluster
qlogin -q high.q@@com14 -l exclusive -l gpu=1,nvidia_tesla -P i13
</code>

Similarly, to run Avizo 2019.1 (//on DLS campus only//):
<code>module load avizo/2019.1; avizo</code>
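To keep an eye on an interactive or batch job, the standard Grid Engine commands should be available once ''global/cluster'' (or ''hamilton'') is loaded; a sketch:
<code>
$ qstat -u $USER     # list your running and queued jobs
</code>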
Further details are available in the [[http://www.diamond.ac.uk/Beamlines/Mx/I24/I24-Manual/Remote-Access/Connection-to-Diamond/data-processing.html|online beamline manual]].
* **On Linux or MacOS**: open a //Terminal emulator// and run\\
''$ ssh -Y **username**@e-a07maat1101<wrap em>X</wrap>.it.manchester.ac.uk''
* **On Windows**: install and run [[https://www.putty.org|PuTTY]]. Enter ''e-a07maat1101**X**.it.manchester.ac.uk'' as the //Host Name//, SSH as the //Connection Type// and hit [Open].
Here **username** is your UoM username, and ''<wrap em>X</wrap>'' is the reference letter ('a' to 'n') from the Table below (if unsure, use **''a''** for CS1 as a starting point).
+ | |||
+ | **Note 1**: If you are using MacOS or Windows, you also need to install and run an ''X Server'' first (see more details on [[https://kb.iu.edu/d/bdnt|X-forwarding]]). | ||
+ | |||
+ | **Note 2**: On a university-managed Linux PC, you could connect directly via a name alias, e.g. ''$ ssh -Y cs1'' .\\ \\ | ||
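To copy data to or from a compute server, ''scp'' works against the same host names; a sketch with placeholder paths (here ''X'' = ''a'', i.e. CS1):
<code>
$ scp -r ./local_results username@e-a07maat1101a.it.manchester.ac.uk:~/results/
</code>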
^ Ref (X) ^ Name ^ Core Count ^ Core Speed and Type ^ RAM (GiB) ^ Note ^
| c | cs3 | 8 | 3.3 GHz (Intel Xeon E5-2643) | 128 | |
| d | cs4 | 8 | 3.3 GHz (Intel Xeon E5-2643) | 128 | |
| e | cs5 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | [offline] |
| f | cs6 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| g | cs7 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| h | cs8 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 128 | |
| i | cs9 | 8 (x2) | 3.0 GHz (Intel Xeon E5-2623v3) | 256 | |
| j | cs10 | 8 (x2) | 3.0 GHz (Intel Xeon E5-2623v3) | 256 | no COMSOL |
| k | cs11 | 12 (x2) | 3.4 GHz (Intel Xeon 6128) | 1280 | CPU- & Memory-intensive; no COMSOL |
| l | cs12 | 12 | 2.5 GHz (Intel Xeon E5-2430v2) | 192 | |
| m | cs13 | 56 (x2) | 2.2 GHz (Intel Xeon 6238R) | 1024 | CPU- & Memory-intensive; 892GB SSD (/tmp) |
| n | cs14 | 56 (x2) | 2.2 GHz (Intel Xeon 6238R) | 1024 | CPU- & Memory-intensive; 892GB SSD (/tmp) |
| | minerva | 20 (x2) | 2.2 GHz (Intel Xeon 4114) | 1536 | Memory- & GPU-intensive (2x Nvidia P100 16GB); 2 TB HDD |
| | citadel | 8 (x2) | 3.4 GHz (Intel Xeon E5-1680v4) | 256 | Visualisation & GPU-intensive (Nvidia GTX1080 8GB); 8 TB HDD |
Note that the cs1-cs8 cores run in single-thread mode (hyper-threading is switched off).
==== Running COMSOL ====
<code>
$ module load COMSOL/5.6   # or COMSOL/6.0
$ comsol &
</code>
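For long simulations it can be more convenient to run COMSOL without the GUI in batch mode; a minimal sketch, where the ''.mph'' file names are placeholders:
<code>
$ comsol batch -inputfile model.mph -outputfile model_solved.mph
</code>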
+ | |||
+ | **Note 1**: If there are errors related to OpenGL, try | ||
+ | <code> | ||
+ | $ comsol -3drend sw & | ||
+ | </code> | ||
+ | |||
+ | **Note 2**: COMSOL is //not// available on compute servers ''cs10'' and ''cs11''. | ||
+ | |||
**Note 3**: You can check the available software versions with
<code>
$ module avail
</code>
+ | |||
+ | To install COMSOL on a self-managed PC or laptop, download the [[https://livemanchesterac-my.sharepoint.com/:u:/g/personal/chris_paul_manchester_ac_uk/EUu_mH4g5TxOj2Zj02jPpdYBN5fD0RfE5BK9lPLB0RzF7Q?e=4%3aD1k4ox&at=9 | distributive [6 GB]]] (multi-platform ISO disk image, supporting Linux, MacOS and Windows) and use the following details during the setup: | ||
+ | <code>licence port@hostname: 15700@lfarm4.eps.manchester.ac.uk; licence number: 7076735</code> | ||
==== Running MATLAB ====
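A minimal sketch of launching MATLAB on the compute servers, assuming it is also provided as an environment module (check the exact module name with ''module avail''):
<code>
$ module load matlab   # module name is an assumption
$ matlab &
</code>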
==== Other software ====
[[https://manchester.saasiteu.com/Modules/SelfService/#knowledgeBase/view/AFC87036FF584A79ABFABA678D76FBA7 | GraphPad Prism 8.0 Installer]] - see instructions.
+ | |||
+ | [[http://www.itservices.manchester.ac.uk/software/ | UoM Research Software Repository]] | ||
+ | |||
+ | **Note**: When running your code at the University compute cluster (known as Computational Shared Facility, [[http://ri.itservices.manchester.ac.uk/csf3 | CSF]]), use the following to enable Internet access (e.g. to install necessary packages for Julia, Python, R, etc.): | ||
<code>
module load tools/env/proxy
</code>
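For example, with the proxy module loaded, package installation from the Internet should then work; a sketch (''numpy'' is just a placeholder package, and a Python module is assumed to be loaded already):
<code>
module load tools/env/proxy
pip install --user numpy    # placeholder package
</code>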
+ | |||
+ | <div rightalign>--- //Igor Chernyavsky, 2021/07/22 15:00// ---</div> | ||
===== Mounting miscellaneous remote file systems on Linux =====
+ | |||
+ | Before you start, make sure there is an empty directory (e.g. ''~/Shared'') in your ''home'' directory that you are going to mount. | ||
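For example, to create the mount points used in the commands below:
<code>
mkdir -p ~/Shared/RDS ~/Shared/PDrive ~/Shared/GDrive ~/Shared/Dropbox ~/Shared/DLS
</code>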
* Mount UoM **RDS-SSH Data Share**
<code>
sshfs UoM_USERNAME@rds-ssh.itservices.manchester.ac.uk:/mnt/eps01-rds/Placental-Biophysics-Group/ ~/Shared/RDS/
fusermount -u ~/Shared/RDS/
</code>
* Mount UoM **P-Drive**
<code>
sudo mount -t cifs -o user=UoM_USERNAME,domain=ds.man.ac.uk,sec=ntlmsspi,uid=`id -u`,gid=`id -g` //nask.man.ac.uk/home$ ~/Shared/PDrive/
sudo umount ~/Shared/PDrive/
</code>
* Mount **Google Drive** via [[https://github.com/astrada/google-drive-ocamlfuse/|gdfuse]]
<code>
google-drive-ocamlfuse ~/Shared/GDrive/
fusermount -u ~/Shared/GDrive/
</code>
* Mount **Dropbox** via [[https://github.com/rianhunter/dbxfs|dbxfs]] (N.B. use the ''-o nonempty'' option only if you are sure; you might also need to install the following ''Ubuntu'' packages: ''libfuse2 build-essential libssl-dev libffi-dev python3-pip'')
<code>
dbxfs ~/Shared/Dropbox/
fusermount -u ~/Shared/Dropbox/
</code>
For uploading a large file (> ~10 GB) or multiple files, use the [[https://github.com/andreafabrizi/Dropbox-Uploader|dropbox_uploader]] script:
<code>
./dropbox_uploader -s -p upload /LOCAL_FOLDER /REMOTE_FOLDER
</code>
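The same script can also be used to pull data back from Dropbox; a sketch with placeholder folders:
<code>
./dropbox_uploader download /REMOTE_FOLDER /LOCAL_FOLDER
</code>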
+ | |||
+ | * Mount **DLS I13 Data Storage** (only available for **60 days** after the beamtime) | ||
+ | <code> | ||
+ | sshfs FedID_USERNAME@nx.diamond.ac.uk:/dls/i13/data/ ~/Shared/DLS/ | ||
+ | fusermount -u ~/Shared/DLS/ | ||
</code> | </code> | ||
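Alternatively, data can be copied from DLS directly over ''ssh'' without mounting; a sketch, where the visit directory is a placeholder:
<code>
rsync -avP FedID_USERNAME@nx.diamond.ac.uk:/dls/i13/data/YYYY/visit/ ~/dls_data/    # YYYY/visit is a placeholder
</code>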
<div rightalign>--- //Igor Chernyavsky, 2019/05/24 21:12// ---</div>