Please give your feedback here
What challenges do you anticipate or have you faced when inheriting scripts/inputs/examples from other persons?
example that the author provided was incomplete
the inherited example was too large (took too long to run and queue before knowing that it even works)
example referred to files/paths that I had no access to
hard-coded paths, abundance of dependencies
variables, function names and comments written in a human language I don’t understand at all
poor or no documentation, including what variables are supposed to be
Missing dependencies, such as other scripts or packages/libraries which no longer exist/are valid
no comments (documentation missing) to explain what the script or its particular parts do
I’m an expert:
Yes, but not on a Unix system
Mamba is better, I think
what does executable python mean? And are the path variables permanent?
If you type `python` and hit Enter, you start the Python shell. What really happened is that you ran an executable called `python` located at a certain path. Using `which python` you can find out where it is located.
Some path variables are defined when you start the terminal (when you log in). You can then change them (typically using `module load`), but the changes only live for the duration of your terminal session; once you close the terminal, the defaults come back. The take-away from this is that it is a good idea to do the module loads inside job scripts, so that the path variables are correctly defined there and the job scripts become more reproducible.
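To make this concrete, here is a minimal sketch of a Slurm job script that loads its own modules. The module name and resource values below are placeholders, not something from this course, and will differ per cluster:

```
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --time=00:10:00
#SBATCH --mem=1G

# Start from a clean environment and load everything the job needs,
# so the job does not depend on whatever happened to be loaded in
# the login shell that submitted it.
module purge
module load Python/3.9.6-GCCcore-11.2.0   # placeholder module name
python script.py
```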
what if the python version that you need is not in the system?
First, make sure this really is the case, by using
`module avail python`
Then, if it is a newer version, you could send a support request (more on this later today) asking for the module to be installed
Alternatively, you could use the Miniconda module to create your own Python environment
If the version is old, or trying with Miniconda leads to errors that you cannot resolve, you can use a container. We have Singularity installed on our system (more on this during the best-practices course)
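The options above, sketched as commands. The module names are assumptions and will differ per cluster; check with `module avail` first:

```
# 1. Check which Python versions are installed as modules
module avail python

# 2. If the version is missing, send a support request, or
# 3. create your own environment with Miniconda:
module load Miniconda3          # placeholder module name
conda create --name myenv python=3.10
conda activate myenv
```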
do you need to remove the standard env as well?
Typically no. We modify the environment by adding variables "to the left", and the system searches the paths from left to right until it finds a match, so it does not matter if the standard paths are somewhere "to the right" and never used.
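The "to the left" idea can be demonstrated in any shell: prepending a directory to `PATH` makes its executables win the left-to-right search. The directory and script names below are made up for the demo:

```shell
# Create a throwaway directory with a tiny executable in it
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho custom\n' > /tmp/demo_bin/hello
chmod +x /tmp/demo_bin/hello

# Prepend it to PATH; the shell now finds it before anything else
PATH="/tmp/demo_bin:$PATH"
hello             # prints "custom"
command -v hello  # shows /tmp/demo_bin/hello
```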
So the number AFTER the package indicates the dependencies in some way or have I misunderstood?
In the modules we often see two number sets (example: `gnuplot/5.2.8-GCCcore-8.3.0`). The first refers to the version of the package itself (here: 5.2.8). The second refers to the version of the so-called toolchain (here: GCCcore 8.3.0); the toolchain is the compiler that was used to build gnuplot from its source code. Both gnuplot and GCCcore may in turn have dependencies on other modules.
Not really a question:
`man module` results in the error `man: manpath list too long`
That is a bit weird. From what operating system/tool are you logging in? Asking because I do not see this error on my computer.
What OS is on your laptop?
macOS 12.0.1 (21A559) (does my OS affect the remote login node?)
The problem is the locale … looking for a solution.
Please try this: `export LANG=en_US.UTF-8`, followed by `man module`. If that helped, then you can add the two exports into your `.bashrc` on the cluster.
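For reference, the two exports in `.bashrc` would look like this. The thread only shows `LANG` explicitly; `LC_ALL` is an assumption here, based on the `$LC_*` hint in the error message:

```shell
# Set a UTF-8 locale so man (and other tools) stop complaining
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8   # assumption: the second export mentioned
```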
Yes; there are some settings that are inherited, especially from zsh on macOS when using iTerm2/Terminal. However, this problem does not exist with Visual Studio Code.
```
@login-2.FRAM ~]$ man module
man: can't set the locale; make sure $LC_* and $LANG are correct
man: manpath list too long
```
I was able to load an older version of Ruby, which caused GCCcore to be reloaded with an earlier version (7.3.0). However, all other modules still depend on GCCcore 10.3.0. Is there a way to prevent me from loading the "wrong" version of Ruby?
Try not to mix modules compiled with different compilers; this will lead to errors that are hard to debug.
You need to find a module combination that is compiled with the same compiler/toolchain
If you give me the list of modules you want to use together, I can show you how (if that is possible)
That’s OK, I just wanted to fail the bonus exercise and see what would happen if I tried to load an incompatible module :)
`module avail node/` returns `No modules found!`. Can Node.js be made available?
That means this module is not installed and has not been requested. You could ask for it to be installed; send a request to firstname.lastname@example.org.
When you send the request, please include a short description on how it will be used
Note that you can also use EasyBuild to install it for yourself. This is not part of this course, but there is documentation on it. It can be a good solution for installing software that only one person or group needs.
You could also use Conda/Miniconda to install this (follow the ongoing lecture)
Conda to install Node.js? Interesting 👀. Will also look at EasyBuild.
If you load the latest R package you will also get Node.js. The package is hidden, but it is there. After `module load R/4.1.2-foss-2021b` you will see:
```
$ which node
/cluster/software/nodejs/14.17.6-GCCcore-11.2.0/bin/node
```
Can you explain again about GCCcore versions?
It basically means that there are some similarities between the free GNU compiler suite and the commercial/costly Intel compiler suite. Modules built with the GCCcore toolchain can be used together both with modules built with the GCC toolchain and with modules built with the intel toolchain, as long as the GCC version is the same.
What does the ml in ml purge do?
`ml --help` shows that it can either list, load, or unload: `ml` alone lists the loaded modules, `ml <name>` loads a module, and `ml -<name>` unloads it. Personal opinion: I find this inconsistent and surprising, so I avoid it and prefer to type the full `module` commands.
About the meaning of `mkdir -p`, from `man mkdir`:
`-p, --parents`: no error if existing, make parent directories as needed
- As always, `man` is your friend (apart from when logging in from a new Mac and trying `cost --man` ;-))
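A quick illustration of `-p` in action (the paths are invented for the example):

```shell
# Creates all intermediate directories in one go
mkdir -p /tmp/demo_proj/data/raw
# Running it again is not an error, unlike plain mkdir
mkdir -p /tmp/demo_proj/data/raw
ls -d /tmp/demo_proj/data/raw   # the directory exists
```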
It was said that Conda can help others reproduce my work. How does it do this?
The reproducibility aspect of Conda is that not only can I install my dependencies into an isolated environment, it also makes it easier to communicate with others who need the same environment on their computers, by sharing the `environment.yml` file, which lists all dependencies and their versions. Personal recommendation: I always create Conda environments and always install packages from the environment.yml file. In other words, I first document the dependency in the environment.yml file and then install from that file. Then all I need to do is share that file, and others can install the environment from it. This makes reproducibility a lot simpler for others and for my future self.
Great! What needs to be given to the reader in order for them to be able to reproduce it? The environment.yml file? Does it contain all the packages (and versions) that were used?
A reproducible environment.yml will list packages and versions (and also the channels to install them from). Example of an environment.yml file: https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#create-env-file-manually
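A minimal sketch of such a file; the environment name, package names, and versions below are just made-up examples:

```yaml
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy=1.24
  - pip
```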
is pip install not recommended here?
Using pip install in combination with virtual environments works similarly and is also fine if your software dependencies are Python packages; it is the same idea. The instructor discourages using pip here, but they mean pip in combination with Conda. It is perfectly fine to use pip in combination with virtual environments (not covered in this course).
If I have already made a Python environment in the home folder, how do I move it to the project folder?
You mean a Conda environment or a virtual environment?
Yes, in my own user, not the course user (on SAGA). Update: sorry, a Conda environment on SAGA!
What I would do, assuming that you installed the environment from environment.yml (Conda) or requirements.txt (virtual environment), is to remove the environment and re-create it in the new place. I highly recommend working as if the environment were disposable and could disappear at any moment. This will help you in the future and make your computations more reproducible. If you are unsure which dependencies and versions are in the environment in question, you can export them into an environment.yml file.
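For the Conda case, the "remove and re-create" workflow could look like this. The environment name and project path are placeholders:

```
# Document the current environment, if not done already
conda env export --from-history > environment.yml

# Remove the copy in the home folder
conda env remove --name myenv

# Re-create it under the project folder using a prefix
conda env create --file environment.yml --prefix /cluster/projects/myproject/envs/myenv
conda activate /cluster/projects/myproject/envs/myenv
```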
So you keep a fresh export of your Conda environment in case it disappears, so that you can recreate and activate it again?
I always have an environment.yml and install from it. Then I always know my dependencies. But if you don’t have it yet, you can use the export function to generate it from an “undocumented” existing environment. In other words I regard environment.yml as the documentation of my dependencies. It is very valuable to have.
Very good idea! I would like to do that! Could you share some code on how to create this file?
try these two and compare the results (they can be useful in different ways):
`conda env export --from-history > environment.yml`
`conda env export > environment.yml`
Thank you very much!
Pleasure! It was after years of suffering and not remembering what I did that I started always installing from files :-)
If you want to ask your colleagues an uncomfortable question in the corridor during the coffee break: "Which package versions does your notebook/project depend on?" If they have a hard time answering, then the project might be hard to re-run in 2 years.