iccluster

Get access to iccluster

GPU allocation:

Server                           GPU   Allocated to
iccluster089.iccluster.epfl.ch   0     Siavash
                                 1     Edo
iccluster046.iccluster.epfl.ch   0     Fayez & Marc
                                 1     Ruofan
iccluster087.iccluster.epfl.ch   0     RK & Leo
                                 1     Doru & Berkay

If you find that someone else is using the resources allocated to you, you have the right to kill their program.
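You can check who is using each GPU with nvidia-smi before killing anything; a minimal sketch, where 12345 stands for the offending PID read off the nvidia-smi output (purely hypothetical here):

    # list GPU utilization and the processes (with PIDs) running on each GPU
    nvidia-smi

    # terminate the offending process by its PID; killing another user's
    # process requires sudo
    sudo kill 12345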

Log on to the server:

ssh {your Gaspar ID}@iccluster{089,087,046}.iccluster.epfl.ch

Create a shortcut for SSH:

Add the following lines to your SSH config file (usually ~/.ssh/config or /etc/ssh/ssh_config):

Host 089
    Hostname iccluster089.iccluster.epfl.ch
    User {your Gaspar ID}

Then you can simply use ssh 089 instead of ssh {your Gaspar ID}@iccluster089.iccluster.epfl.ch.
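If you use all three servers, you can add an entry for each one in the same way; a minimal sketch for the other two nodes, with the hypothetical Gaspar ID jdoe standing in for your own:

Host 046
    Hostname iccluster046.iccluster.epfl.ch
    User jdoe

Host 087
    Hostname iccluster087.iccluster.epfl.ch
    User jdoe

With these entries, ssh 046 and ssh 087 work the same way as ssh 089.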

Reserve a New Node

If you need more GPU resources, please send your request to ruofan.zhou@epfl.ch; normally a new node will be ready within two working days.

Or, if you want to reserve a node yourself, please follow the guidelines below.

Reserve a new node:

  1. go to https://install.iccluster.epfl.ch/Portal/ and log in with your Gaspar account
  2. go to Reservations->Make a reservation
  3. select your days (normally you can modify them later) and the server type (normally we reserve 1 ICCT3, which has 2 Titan X GPUs)
  4. set up your node once it appears under My Servers->List

Run setup for a new node:

  1. go to https://install.iccluster.epfl.ch/Portal/ and log in with your Gaspar account
  2. go to My Servers->Setup
  3. add the newly reserved node to the setup list
  4. choose a boot option; normally select Ubuntu trusty (14.04) or Ubuntu xenial (16.04)
  5. in Customization, select at least Add IC-IT SSH Keys to root and IVRL Customization
  6. in Run setup, click I confirm
  7. wait around half an hour after the boot has finished, then ssh to node 089
  8. sudo vi /etc/exports
  9. add at the end of the line starting with /data the pattern "<new node IP>(rw,no_subtree_check)", separated by a space, like:
    /data       10.90.40.3(rw,no_subtree_check) <IP2>(rw,no_subtree_check) <IP3>(rw,no_subtree_check)
  10. sudo exportfs -r
  11. ssh to the new node and run the following (a verification sketch follows this list):
  12. sudo mount -t nfs 10.90.40.15:/data /data
  13. sudo ln -s /data/home /home
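To check that the export and the mount worked, here is a minimal verification sketch to run on the new node; 10.90.40.15 is the /data server from step 12, and showmount is assumed to be available (on Ubuntu it ships with the nfs-common package):

    # the export list of the /data server should now include the new node's IP
    showmount -e 10.90.40.15

    # /data should show up as an NFS mount with the expected capacity
    df -h /data
    mount | grep /data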