Jul 18, 2019 | Access, Cluster, FAQ, SSH
In order to run graphical applications on Acropolis without VNC, you will need to open an SSH session with X11 forwarding enabled. Unlike VNC, your applications will not continue to run once your SSH session has ended. This method is only suitable for short running...
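A minimal session might look like the sketch below; the username `jdoe` is a placeholder, and you will need an X server running on your local machine (e.g. XQuartz on macOS):

```shell
# Open an SSH session with X11 forwarding enabled (-X); try -Y
# (trusted forwarding) if applications fail with authorization errors.
ssh -X jdoe@acropolis.uchicago.edu

# Graphical programs started from this remote shell then display
# on your local machine, e.g.:
xclock &
```

Remember that closing this SSH session also terminates `xclock` and any other programs started from it.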
Jul 18, 2019 | Cluster, FAQ, Storage
The Acropolis /home and /share directories are hosted over a 10GbE network connection. Jobs that perform large amounts of disk reads and writes can benefit from using scratch space. Each compute node is equipped with a fast solid-state drive mounted at /scratch....
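A job script that stages data through the node-local SSD could be sketched as follows; the directory layout under /scratch, the file names, and the program are assumptions for illustration, not documented specifics:

```shell
#!/bin/bash
#PBS -N io_heavy_job

# Create a private working directory on the node-local SSD
# (the layout under /scratch is assumed here).
WORKDIR=/scratch/$USER/$PBS_JOBID
mkdir -p "$WORKDIR"

# Stage input from the network-mounted home directory, run on the
# local disk, then copy results back and clean up.
cp "$HOME/data/input.dat" "$WORKDIR/"
cd "$WORKDIR"
./analyze input.dat > output.dat   # hypothetical program
cp output.dat "$HOME/results/"
rm -rf "$WORKDIR"
```

Staging this way keeps the heavy I/O off the 10GbE network mounts and on the local SSD for the duration of the job.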
Jul 18, 2019 | Cluster, Computation, FAQ
“showq” will list the full job queue, while “showq -u $USER” will list only your jobs, letting you see the status of everything you have submitted. “checkjob” shows the current status of a single job: run checkjob followed by the job id listed in showq. This...
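In practice the workflow looks like this (the job id shown is illustrative):

```shell
showq               # list the full job queue
showq -u $USER      # list only your own jobs
checkjob 1234567    # detailed status of one job, using the id from showq
```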
Jul 18, 2019 | Cluster, Computation, FAQ, Software
Qsub: qsub is the command used to submit jobs to the cluster. Users are permitted to use up to 350 cores and 1 TB of memory at a time; beyond that, your work will queue until your running jobs have completed. Resource reservations are strictly enforced. Your jobs will...
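A minimal submission script might look like the sketch below; the job name, resource requests, and executable are illustrative, and the requests stay well inside the 350-core / 1 TB limits:

```shell
#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=8        # 1 node, 8 cores (well under the 350-core cap)
#PBS -l mem=16gb             # memory request (the cap is 1 TB across your jobs)
#PBS -l walltime=04:00:00    # maximum run time of 4 hours

cd "$PBS_O_WORKDIR"          # run from the directory qsub was called in
./my_program                 # hypothetical executable
```

Saved as, say, example_job.pbs, it would be submitted with `qsub example_job.pbs`, and the scheduler prints the assigned job id.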
Jul 18, 2019 | Cluster, FAQ, Matlab
Matlab 2016b is the default version on Acropolis. Additional versions of Matlab can be loaded with the module command. Matlab 2016b is configured to work with the Distributed Computing Server. Matlab DCS is integrated with the job scheduler. This allows you to...
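Switching versions with the module command could look like this; the exact version strings available depend on what is installed on Acropolis, so the one below is a placeholder:

```shell
module avail matlab          # list the Matlab versions installed
module load matlab/2018a     # hypothetical version string
matlab -nodisplay            # start Matlab without its desktop GUI
```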
Jul 18, 2019 | Cluster, Computation, FAQ
Head Node: The server you log into when connecting to acropolis.uchicago.edu is what we refer to as the head node or login node. This server is a Dell R920 with “Intel(R) Xeon(R) CPU E7-8891 v2 @ 3.20GHz” CPUs, for a total of 40 cores or 80 computational...