From c7ee1a0d5207694bdf55c215c2942803b8bbe3a0 Mon Sep 17 00:00:00 2001 From: Yehor Mishchyriak Date: Wed, 9 Jul 2025 16:53:10 -0400 Subject: [PATCH 1/3] wrote a guide for linking jupyter to a cluster compute node --- protocols/code.md | 68 ++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 67 insertions(+), 1 deletion(-) diff --git a/protocols/code.md b/protocols/code.md index bf61cf4..d7f0b09 100644 --- a/protocols/code.md +++ b/protocols/code.md @@ -201,4 +201,70 @@ host presentations, etc. Each software or analysis project should have its own repo or repos -## + +## Using Tufts high-performance computing (HPC) cluster +For detailed guidelines regarding Tufts HPC usage please visit [the dedicated website](https://it.tufts.edu/high-performance-computing) + +## "Linking" your Jupyter notebook to a cluster's compute node +1. ssh into the cluster: +```bash + ssh your_name@login.cluster.edu +``` + +2. Request access to an interactive compute node and its resources: +CPU: +```bash + srun -t 0-02:00 --mem 2000 -p interactive --pty bash +``` +GPU: +```bash + srun -t 0-02:00 --mem 2000 -p gpu --gres gpu --pty bash +``` + +This gives a 2-hour time limit interactive session with a bash terminal, with 2GB of memory. +You can adjust the -t or the -mem requests as needed. + +### Requesting specific types of gpus: +Use the --gres option for either srun or sbatch commands +General syntax: ```--gres=gpu[:type][:number]```. Specifying either type or number is optional. +For example: +A100 gpu: ```--gres gpu:a100``` +V100 gpu: ```--gres gpu:v100``` +T4 gpu: ```--gres gpu:t4``` +RTX 6000 gpu: ```--gres gpu:rtx_6000``` +RTX A6000 gpu: ```--gres gpu:rtx_a6000``` + +See [GPU Hardware List](https://www.cs.tufts.edu/cs/152L3D/2024f/tufts_hpc_setup.html#gpu-hardware-list) + +3. 
Activating your conda environment and launching Jupyter notebook on a specific port: +```bash + conda activate + jupyter notebook --no-browser --port= +``` +* NOTE: after you've launched Jupyter, in the output you will see something like: +```.../localhost:6789/tree?token=``` +You need to copy the for this session; you will need it later. + +4. Tunneling the remote HPC port to your local machine port: +After you have entered the "srun" command, you will be allocated a certain node. +Now, go to the terminal on your local machine and enter the following: + +First, tunnel the port you bound Jupyter to (f.e. 6789) to some port on your local machine: +```bash + ssh your_name@login.cluster.edu -L :localhost: + # for example + # ssh yehor@login.cluster.edu -L 6789:localhost:6789 +``` + +Now, do the same for the compute node you were allocated (f.e. i2cmp003): +```bash + ssh -L :localhost: + # for example + # ssh i2cmp003 -L 6789:localhost:6789 +``` + +5. Opening Jupyter in the browser +Go to an internet browser on your machine and look up the following address: +```http://localhost:``` (again, you may use 6789 as the port number throughout the process) +Now, enter the token you copied in the cluster terminal as well as some password you need to come up with for this session only. +Done. Now do science. \ No newline at end of file From d10e31901564aebc0eda4616c49866389c930628 Mon Sep 17 00:00:00 2001 From: Kevin Bonham Date: Thu, 10 Jul 2025 15:29:21 -0400 Subject: [PATCH 2/3] Remove triple ticks for inline --- protocols/code.md | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/protocols/code.md b/protocols/code.md index d7f0b09..a1f44d2 100644 --- a/protocols/code.md +++ b/protocols/code.md @@ -225,14 +225,18 @@ This gives a 2-hour time limit interactive session with a bash terminal, with 2G You can adjust the -t or the -mem requests as needed. 
### Requesting specific types of gpus: -Use the --gres option for either srun or sbatch commands -General syntax: ```--gres=gpu[:type][:number]```. Specifying either type or number is optional. + +Use the `--gres` option for either srun or sbatch commands + +General syntax: `--gres=gpu[:type][:number]`. Specifying either type or number is optional. + For example: -A100 gpu: ```--gres gpu:a100``` -V100 gpu: ```--gres gpu:v100``` -T4 gpu: ```--gres gpu:t4``` -RTX 6000 gpu: ```--gres gpu:rtx_6000``` -RTX A6000 gpu: ```--gres gpu:rtx_a6000``` + +A100 gpu: `--gres gpu:a100` +V100 gpu: `--gres gpu:v100` +T4 gpu: `--gres gpu:t4` +RTX 6000 gpu: `--gres gpu:rtx_6000` +RTX A6000 gpu: `--gres gpu:rtx_a6000` See [GPU Hardware List](https://www.cs.tufts.edu/cs/152L3D/2024f/tufts_hpc_setup.html#gpu-hardware-list) @@ -242,7 +246,7 @@ See [GPU Hardware List](https://www.cs.tufts.edu/cs/152L3D/2024f/tufts_hpc_setup jupyter notebook --no-browser --port= ``` * NOTE: after you've launched Jupyter, in the output you will see something like: -```.../localhost:6789/tree?token=``` +`.../localhost:6789/tree?token=` You need to copy the for this session; you will need it later. 4. Tunneling the remote HPC port to your local machine port: @@ -265,6 +269,6 @@ Now, do the same for the compute node you were allocated (f.e. i2cmp003): 5. Opening Jupyter in the browser Go to an internet browser on your machine and look up the following address: -```http://localhost:``` (again, you may use 6789 as the port number throughout the process) +`http://localhost:` (again, you may use 6789 as the port number throughout the process) Now, enter the token you copied in the cluster terminal as well as some password you need to come up with for this session only. -Done. Now do science. \ No newline at end of file +Done. Now do science. 
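[Editor's note on the guide above] The two-hop tunnel in step 4 can also be collapsed into a single command with OpenSSH's ProxyJump (`-J`, OpenSSH 7.3+), if the cluster permits direct ssh to compute nodes. A minimal sketch, reusing the hypothetical values from the guide's examples (`login.cluster.edu`, node `i2cmp003`, port 6789) -- substitute your own:

```shell
# Hypothetical values taken from the guide's examples -- replace with
# your cluster's login host, your allocated node, and your port.
LOGIN_HOST="login.cluster.edu"
COMPUTE_NODE="i2cmp003"
PORT=6789

# -J (ProxyJump) hops through the login node, so a single command
# forwards local $PORT straight to Jupyter on the compute node.
TUNNEL_CMD="ssh -J your_name@${LOGIN_HOST} your_name@${COMPUTE_NODE} -L ${PORT}:localhost:${PORT}"
echo "$TUNNEL_CMD"
```

Older clients without `-J` can achieve the same with `-o ProxyCommand`.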
From bd8fa90c36a4f2696be694659e072f37fa366a9e Mon Sep 17 00:00:00 2001 From: Kevin Bonham Date: Thu, 28 Aug 2025 16:31:40 -0400 Subject: [PATCH 3/3] additional fixes --- Manifest.toml | 69 +++++++++++++++++---------------- Project.toml | 2 +- protocols/code.md | 98 ++++++++++++++++++++++++++--------------------- 3 files changed, 93 insertions(+), 76 deletions(-) diff --git a/Manifest.toml b/Manifest.toml index f4f9f6d..27f2651 100644 --- a/Manifest.toml +++ b/Manifest.toml @@ -1,8 +1,13 @@ # This file is machine-generated - editing it directly is not advised -julia_version = "1.11.4" +julia_version = "1.11.6" manifest_format = "2.0" -project_hash = "466211ef6f405feebed0a653388154f8873df086" +project_hash = "a45f85a71e35bf3e1235da274041a3eb67efb1ad" + +[[deps.ANSIColoredPrinters]] +git-tree-sha1 = "574baf8110975760d391c710b6341da1afa48d8c" +uuid = "a4c015fc-c6ff-483c-b24f-f7ea428134e9" +version = "0.0.1" [[deps.ArgTools]] uuid = "0dad84c5-d112-42e6-8d28-ef12dabb789f" @@ -21,6 +26,10 @@ git-tree-sha1 = "0691e34b3bb8be9307330f88d1a3c3f25466c24d" uuid = "d1d4a3ce-64b1-5f1a-9ba4-7e7e69966f35" version = "0.1.9" +[[deps.CRC32c]] +uuid = "8bf52ea8-c179-5cab-976a-9e18b702a9bc" +version = "1.11.0" + [[deps.CodecZlib]] deps = ["TranscodingStreams", "Zlib_jll"] git-tree-sha1 = "962834c22b66e32aa10f7611c08c8ca4e20749a9" @@ -51,17 +60,6 @@ deps = ["Printf"] uuid = "ade2ca70-3891-5945-98fb-dc099432e06a" version = "1.11.0" -[[deps.DelimitedFiles]] -deps = ["Mmap"] -git-tree-sha1 = "9e2f36d3c96a820c678f2f1f1782582fcf685bae" -uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab" -version = "1.9.1" - -[[deps.DocStringExtensions]] -git-tree-sha1 = "e7b7e6f178525d17c720ab9c081e4ef04429f860" -uuid = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae" -version = "0.9.4" - [[deps.Downloads]] deps = ["ArgTools", "FileWatching", "LibCURL", "NetworkOptions"] uuid = "f43a241f-c20a-4ad4-852c-f6b1247861c6" @@ -88,17 +86,11 @@ version = "0.1.10" uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee" version = "1.11.0" 
-[[deps.Franklin]] -deps = ["Dates", "DelimitedFiles", "DocStringExtensions", "ExprTools", "FranklinTemplates", "HTTP", "Literate", "LiveServer", "Logging", "Markdown", "NodeJS", "OrderedCollections", "Pkg", "REPL", "Random", "TOML"] -git-tree-sha1 = "31e70717e0640d6576fe04d611a33df1c9c312d6" -uuid = "713c75ef-9fc9-4b05-94a9-213340da978e" -version = "0.10.95" - -[[deps.FranklinTemplates]] -deps = ["LiveServer"] -git-tree-sha1 = "c01813a615149ddb3b3d133f33de29d642fbe57b" -uuid = "3a985190-f512-4703-8d38-2a7944ed5916" -version = "0.10.2" +[[deps.FranklinParser]] +deps = ["PrecompileTools", "REPL"] +git-tree-sha1 = "7daf95d2334d4c0f73353e110c9396e9d5258afb" +uuid = "796511e7-1510-466f-ad0c-1823c64bcafa" +version = "0.7.1" [[deps.Git]] deps = ["Git_jll"] @@ -114,9 +106,9 @@ version = "2.49.0+0" [[deps.HTTP]] deps = ["Base64", "CodecZlib", "ConcurrentUtilities", "Dates", "ExceptionUnwrapping", "Logging", "LoggingExtras", "MbedTLS", "NetworkOptions", "OpenSSL", "PrecompileTools", "Random", "SimpleBufferStream", "Sockets", "URIs", "UUIDs"] -git-tree-sha1 = "f93655dc73d7a0b4a368e3c0bce296ae035ad76e" +git-tree-sha1 = "ed5e9c58612c4e081aecdb6e1a479e18462e041e" uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3" -version = "1.10.16" +version = "1.10.17" [[deps.Hyperscript]] deps = ["Test"] @@ -264,9 +256,9 @@ version = "2.0.0" [[deps.OpenSSL]] deps = ["BitFlags", "Dates", "MozillaCACerts_jll", "OpenSSL_jll", "Sockets"] -git-tree-sha1 = "38cb508d080d21dc1128f7fb04f20387ed4c0af4" +git-tree-sha1 = "f1a7e086c677df53e064e0fdd2c9d0b0833e3f6e" uuid = "4d8831e6-92b7-49fb-bdf8-b643e874388c" -version = "1.4.3" +version = "1.5.0" [[deps.OpenSSL_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] @@ -275,9 +267,9 @@ uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95" version = "3.0.16+0" [[deps.OrderedCollections]] -git-tree-sha1 = "cc4054e898b852042d7b503313f7ad03de99c3dd" +git-tree-sha1 = "05868e21324cede2207c6f0f466b4bfef6d5e7ee" uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d" -version = "1.8.0" 
+version = "1.8.1" [[deps.PCRE2_jll]] deps = ["Artifacts", "Libdl"] @@ -326,6 +318,11 @@ deps = ["SHA"] uuid = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c" version = "1.11.0" +[[deps.Reexport]] +git-tree-sha1 = "45e428421666073eab6f2da5c9d310d99bb12f9b" +uuid = "189a3867-3050-52da-a836-e630ba90ab69" +version = "1.2.2" + [[deps.SHA]] uuid = "ea8e919c-243c-51af-8825-aaa63cd721ce" version = "0.7.0" @@ -392,9 +389,9 @@ uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa" version = "0.11.3" [[deps.URIs]] -git-tree-sha1 = "cbbebadbcc76c5ca1cc4b4f3b0614b3e603b5000" +git-tree-sha1 = "bef26fb046d031353ef97a82e3fdb6afe7f21b1a" uuid = "5c2747f8-b7ea-4ff2-ba2e-563bfd36b1d4" -version = "1.5.2" +version = "1.6.1" [[deps.UUIDs]] deps = ["Random", "SHA"] @@ -405,6 +402,14 @@ version = "1.11.0" uuid = "4ec0a83e-493e-50e2-b9ac-8f72acf5a8f5" version = "1.11.0" +[[deps.Xranklin]] +deps = ["ANSIColoredPrinters", "CRC32c", "Dates", "FranklinParser", "IOCapture", "LiveServer", "Logging", "Markdown", "OrderedCollections", "Pkg", "REPL", "Reexport", "Serialization", "TOML", "URIs"] +git-tree-sha1 = "00c83bd65338e9cf1ca1b8a183bd27f5df60767e" +repo-rev = "main" +repo-url = "git@github.com:tlienart/Xranklin.jl.git" +uuid = "558449b0-171e-4e1f-900f-d076a5ddf486" +version = "0.1.0" + [[deps.Zlib_jll]] deps = ["Libdl"] uuid = "83775a58-1f1d-513f-b197-d71354ab007a" diff --git a/Project.toml b/Project.toml index be5592f..b8af3b6 100644 --- a/Project.toml +++ b/Project.toml @@ -1,8 +1,8 @@ [deps] Dates = "ade2ca70-3891-5945-98fb-dc099432e06a" -Franklin = "713c75ef-9fc9-4b05-94a9-213340da978e" Git = "d7ba0133-e1db-5d97-8f8c-041e4b3a1eb2" Hyperscript = "47d2ed2b-36de-50cf-bf87-49c2cf4b8b91" Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306" NodeJS = "2bd173c7-0d6d-553b-b6af-13a54713934c" TimeZones = "f269a46b-ccf7-5d73-abea-4c690281aa53" +Xranklin = "558449b0-171e-4e1f-900f-d076a5ddf486" diff --git a/protocols/code.md b/protocols/code.md index a1f44d2..6702691 100644 --- a/protocols/code.md +++ b/protocols/code.md 
@@ -201,25 +201,31 @@ host presentations, etc. Each software or analysis project should have its own repo or repos - ## Using Tufts high-performance computing (HPC) cluster + For detailed guidelines regarding Tufts HPC usage please visit [the dedicated website](https://it.tufts.edu/high-performance-computing) ## "Linking" your Jupyter notebook to a cluster's compute node + 1. ssh into the cluster: -```bash + + ```bash ssh your_name@login.cluster.edu -``` + ``` 2. Request access to an interactive compute node and its resources: -CPU: -```bash - srun -t 0-02:00 --mem 2000 -p interactive --pty bash -``` -GPU: -```bash - srun -t 0-02:00 --mem 2000 -p gpu --gres gpu --pty bash -``` + + CPU: + + ```bash + srun -t 0-02:00 --mem 2000 -p interactive --pty bash + ``` + + GPU: + + ```bash + srun -t 0-02:00 --mem 2000 -p gpu --gres gpu --pty bash + ``` This gives a 2-hour time limit interactive session with a bash terminal, with 2GB of memory. You can adjust the -t or the -mem requests as needed. @@ -232,43 +238,49 @@ General syntax: `--gres=gpu[:type][:number]`. Specifying either type or number i For example: -A100 gpu: `--gres gpu:a100` -V100 gpu: `--gres gpu:v100` -T4 gpu: `--gres gpu:t4` -RTX 6000 gpu: `--gres gpu:rtx_6000` -RTX A6000 gpu: `--gres gpu:rtx_a6000` +- A100 gpu: `--gres gpu:a100` +- V100 gpu: `--gres gpu:v100` +- T4 gpu: `--gres gpu:t4` +- RTX 6000 gpu: `--gres gpu:rtx_6000` +- RTX A6000 gpu: `--gres gpu:rtx_a6000` See [GPU Hardware List](https://www.cs.tufts.edu/cs/152L3D/2024f/tufts_hpc_setup.html#gpu-hardware-list) 3. Activating your conda environment and launching Jupyter notebook on a specific port: -```bash - conda activate - jupyter notebook --no-browser --port= -``` -* NOTE: after you've launched Jupyter, in the output you will see something like: -`.../localhost:6789/tree?token=` -You need to copy the for this session; you will need it later. 
+
+   ```bash
+   conda activate <env_name>
+   jupyter notebook --no-browser --port=<port>
+   ```
+
+   > [!NOTE]
+   > After you've launched Jupyter, the output will contain a URL like
+   > `.../localhost:6789/tree?token=<token>`
+   > Copy the token for this session; you will need it later.

 4. Tunneling the remote HPC port to your local machine port:
-After you have entered the "srun" command, you will be allocated a certain node.
-Now, go to the terminal on your local machine and enter the following:
-
-First, tunnel the port you bound Jupyter to (f.e. 6789) to some port on your local machine:
-```bash
- ssh your_name@login.cluster.edu -L :localhost:
- # for example
- # ssh yehor@login.cluster.edu -L 6789:localhost:6789
-```
-
-Now, do the same for the compute node you were allocated (f.e. i2cmp003):
-```bash
- ssh -L :localhost:
- # for example
- # ssh i2cmp003 -L 6789:localhost:6789
-```
+   After you enter the `srun` command, you will be allocated a compute node.
+   Now, go to a terminal on your local machine and enter the following:
+
+
+   First, tunnel the port you bound Jupyter to (e.g. 6789) to a port on your local machine:
+
+   ```bash
+   ssh your_name@login.cluster.edu -L <local_port>:localhost:<remote_port>
+   # for example
+   # ssh yehor@login.cluster.edu -L 6789:localhost:6789
+   ```
+
+   Then, from your session on the login node, forward the same port on to the compute node you were allocated (e.g. i2cmp003):
+   ```bash
+   ssh <node_name> -L <remote_port>:localhost:<remote_port>
+   # for example
+   # ssh i2cmp003 -L 6789:localhost:6789
+   ```

 5. Opening Jupyter in the browser
-Go to an internet browser on your machine and look up the following address:
-`http://localhost:` (again, you may use 6789 as the port number throughout the process)
-Now, enter the token you copied in the cluster terminal as well as some password you need to come up with for this session only.
-Done. Now do science.
+   Go to a web browser on your machine and open the following address:
+   `http://localhost:<local_port>` (again, you may use 6789 as the port number throughout the process)
+   Now, enter the token you copied from the cluster terminal,
+   as well as a password you make up for this session only.
+   Done. Now do science.
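[Editor's note] To recap the resource-request syntax from step 2 and the `--gres` section, the pieces of an interactive GPU request compose as below. This is a sketch: the partition name, GPU type, and limits are the guide's examples, not universal defaults.

```shell
# Assemble an srun request from the guide's pieces. All values are the
# guide's example values -- adjust for your cluster and your job.
TIME="0-02:00"     # -t, D-HH:MM  -> a 2-hour limit
MEM=2000           # --mem, in MB -> ~2 GB
PARTITION="gpu"    # -p
GRES="gpu:a100:1"  # --gres=gpu[:type][:number]; type and number are optional

SRUN_CMD="srun -t ${TIME} --mem ${MEM} -p ${PARTITION} --gres=${GRES} --pty bash"
echo "$SRUN_CMD"
```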