Work in progress
tskisner committed Nov 12, 2022
1 parent 593fb9f commit a356262
Showing 11 changed files with 885 additions and 1,180 deletions.
28 changes: 15 additions & 13 deletions docs/_toc.yml
@@ -6,21 +6,23 @@ chapters:
sections:
- file: install_user
- file: install_dev
- file: nersc
- file: install_nersc
- file: interactive
# - file: quickstart
- file: data_model
# - file: data_io
- file: processing_model
- file: simulation_operators
sections:
- file: simulation_operators_satellite
- file: simulation_operators_ground
- file: simulation_operators_sky
- file: simulation_operators_instrument
- file: reduction_operators
- file: utility_operators
- file: tutorial
sections:
- file: tutorial_intro
- file: pointing
# - file: simulated_observing
# - file: simulated_signals
# - file: preprocessing
# - file: utilities
# - file: mapmaking
# # sections:
# # - file: mapmaking_utilities
# # - file: mapmaking_templates
- file: api
- file: dev
- file: benchmark
- file: changes
- file: api

14 changes: 14 additions & 0 deletions docs/data_model.md
@@ -163,3 +163,17 @@ Here is a picture of what data each process would have. The global process numbers are
shown as well as the rank within the group:

![image](_static/toast_data_dist.png)

(pixel:)=
## Distributed Map Domain Objects


(pixel:dist)=
### Pixel Distribution


(pixel:data)=
### Pixel Data



5 changes: 3 additions & 2 deletions docs/install.md
@@ -9,7 +9,7 @@ needs. We try to clarify the different options in the following sections.
(install:test)=
## Testing the Installation

After installation (regardless of how you did that), you can run both the compiled and
After installation (regardless of method), you can run both the compiled and
python unit tests. These tests will create an output directory named
`toast_test_output` in your current working directory:

@@ -21,7 +21,8 @@ If you have installed the `mpi4py` package, then you can also run the unit tests with
MPI enabled. For example:

```{code-block} console
mpirun -np 4 python -c "import toast.tests; toast.tests.run()"
export OMP_NUM_THREADS=2
mpirun -np 2 python -c "import toast.tests; toast.tests.run()"
```
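The example above pairs 2 MPI processes with 2 OpenMP threads each. As a rule of thumb, processes times threads should not exceed your core count. A small sketch of that arithmetic (a hypothetical helper, not part of TOAST; note that `os.cpu_count()` reports logical cores, which can overcount physical ones):

```python
import os

# Logical core count; on hyperthreaded machines the physical count
# may be half of this.
logical = os.cpu_count() or 1

# MPI process count, capped so we never exceed the core count.
procs = min(2, logical)
threads = max(1, logical // procs)  # candidate OMP_NUM_THREADS value

# Keep the total at or below the core count to avoid oversubscription.
assert procs * threads <= logical
print(f"{procs} processes x {threads} threads on {logical} logical cores")
```

On an 8-core machine this suggests `OMP_NUM_THREADS=4` for 2 processes; cap it lower if you know hyperthreading is inflating the logical count.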

```{important}
47 changes: 47 additions & 0 deletions docs/install_dev.md
@@ -1,3 +1,4 @@

(install:dev)=
# Developer Installation

@@ -25,6 +26,49 @@ MADAM mapmaking operator | [libmadam](https://github.com/hpc4cmb/libmadam)
Conviqt beam convolution operator | [libconviqt](https://github.com/hpc4cmb/libconviqt)
Totalconvolve beam convolution operator | [ducc0](https://pypi.org/project/ducc0)


## Using Conda

One of the easiest ways to set up a development environment is to create a conda environment with all of the toast dependencies installed. Begin by installing or setting up your conda base environment as described in the user installation guide [here](install:user:forge) or [here](install:user:anaconda).

Next, decide on the name of your conda env used for development. For this example, we will use the name `toastdev`. To set up this environment you can run the included script:

cd toast
./platforms/conda_dev_setup.sh toastdev

Now, we will use the conda tools to build toast in "development mode". The compiled extension is built in place and the git source tree will be placed in the python search path:

cd toast
conda activate toastdev
./platforms/conda_dev.sh

With the `toastdev` environment active, you can use the toast git checkout directly. Any changes you make to the python source will show up immediately. If you change the C++ source, re-run the `conda_dev.sh` script to rebuild the compiled extension.
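To check that python will now load toast from your checkout rather than some other install, a quick sanity check like the following may help (this is not an official TOAST tool):

```python
import importlib.util

# Locate the toast package without importing it.  After a successful
# development build, the origin should point into your git checkout.
spec = importlib.util.find_spec("toast")
if spec is None:
    print("toast is not importable; activate the toastdev env first")
else:
    print(f"toast will load from: {spec.origin}")
```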


The full command sequence, starting from a fresh miniforge install:

    bash Miniforge3-Linux-x86_64.sh -b -f -p $HOME/software/condabase

    source $HOME/software/condabase/etc/profile.d/conda.sh
    conda activate
    # conda update --yes -n base conda
    conda update --yes -n base --all

    # Create the development environment with build and test dependencies
    conda create -n toastdev conda-build cmake psutil cython pytest fftw libaatm suitesparse tomlkit traitlets h5py astropy ephem healpy pshmem coverage coveralls pytest-cov

    conda activate toastdev

    # Install compilers: either the Linux-specific packages
    conda install gcc_linux-64 gxx_linux-64
    # or the generic conda-forge "compilers" metapackage
    conda install compilers

    pip install pixell

    ./dev_build.sh

    # On NERSC, build mpi4py against the system compiler wrapper
    MPICC="cc -shared" pip install --force-reinstall --no-cache-dir --no-binary=mpi4py mpi4py

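If mpi4py is installed, you can confirm which MPI it was built against with its `get_config()` helper (guarded here so the snippet also runs in an environment without mpi4py):

```python
import importlib.util

if importlib.util.find_spec("mpi4py") is None:
    print("mpi4py is not installed in this environment")
else:
    import mpi4py
    # get_config() reports the compiler wrappers mpi4py was built with,
    # e.g. the "cc" wrapper when built on NERSC as shown above.
    print(mpi4py.get_config())
```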


## Installing Build Dependencies

The compiled extension within TOAST has several libraries that need to be found at build
@@ -148,6 +192,9 @@ to install these packages:

## Install Optional Dependencies

    conda install jupyter-notebook wurlitzer ipywidgets plotly python-kaleido

    pip install plotly-resampler
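A quick way to see which of these optional packages are present in the active environment (import names are assumed here: `plotly-resampler` imports as `plotly_resampler`, and `python-kaleido` as `kaleido`):

```python
import importlib.util

# Optional packages from the install commands above, by import name.
optional = ["wurlitzer", "ipywidgets", "plotly", "kaleido", "plotly_resampler"]
for name in optional:
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'missing'}")
```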


## Installing TOAST with CMake
File renamed without changes.
2 changes: 2 additions & 0 deletions docs/install_user.md
@@ -108,6 +108,7 @@ update the conda tool itself and the other essential packages in base. If one of your
working environments becomes horribly out of date or broken, just delete it and make a
new one.

(install:user:anaconda)=
### Using Anaconda with conda-forge Packages

If you already have Anaconda python installed, the base conda environment may already be
@@ -135,6 +136,7 @@ conda config --set channel_priority strict

Now skip ahead to the section on [creating an environment](install:user:conda:env).

(install:user:forge)=
### Using a Native conda-forge Base

If you are starting from scratch, we recommend using the "miniforge" installer to set up
240 changes: 240 additions & 0 deletions docs/interactive.ipynb
@@ -0,0 +1,240 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"toc-hr-collapsed": false
},
"source": [
"# Interactive Use\n",
"\n",
"The documentation already discusses general installation (FIXME: link) of TOAST with conda or pip, so we will not cover that here. Instead this notebook focuses on considerations for running interactively.\n",
"\n",
"## Local Serial Use\n",
"\n",
"On a local laptop or workstation, it should be straightforward to install the additional tools needed for these notebooks with either conda or pip:\n",
"\n",
" conda install -c conda-forge jupyter-notebook wurlitzer ipywidgets plotly plotly-resampler\n",
" \n",
"OR\n",
"\n",
" python3 -m pip install jupyter-notebook wurlitzer ipywidgets plotly plotly-resampler\n",
"\n",
"## Local Parallel Use\n",
"\n",
"In addition to the packages needed in the last section, many parts of these notebooks can make use of MPI for parallelism if it is available. To enable MPI with toast from inside a jupyter notebook, you will first need the mpi4py package. If you are using `conda` to manage your environment, doing:\n",
"\n",
" conda install mpi4py\n",
" \n",
"**may** install a working package (this will install conda packages for MPI). If you are using a virtualenv and installing packages with `pip`, then first ensure that your system has a working MPI installation (try typing `which mpicc`). Then make sure your environment is activated and do:\n",
"\n",
" python3 -m pip install mpi4py\n",
"\n",
"Test that you can import mpi4py and that every process has a unique rank:\n",
"\n",
" mpirun -np 4 python3 -c 'from mpi4py import MPI; print(f\"hello from rank {MPI.COMM_WORLD.rank}\")'\n",
" \n",
"If that is working, then install the `ipyparallel` package:\n",
"\n",
" conda install ipyparallel\n",
" \n",
"OR\n",
"\n",
" python3 -m pip install ipyparallel\n",
"\n",
"Next we need to enable this parallel extension in jupyter:\n",
"\n",
" jupyter serverextension enable --py ipyparallel\n",
" \n",
"**IMPORTANT**: Before enabling MPI, decide how many MPI processes you want to use. This should not be more than the number of physical cores on your system. You should be sure to set the number of OpenMP threads per process such that the number of threads times the number of processes is not more than the physical cores on your system. For example, if you have 8 physical cores and want to use 4 MPI processes, then you should set the number of OpenMP threads to be no more than 2. If you do not fail to do this, you may find that every MPI process is using all cores on the system, leading to the notebook (and your system) running very slow.\n",
"\n",
"### Example: 8 cores, 4 MPI processes, 2 threads per process\n",
"\n",
"Set the number of threads **before** launching jupyter:\n",
"\n",
" export OMP_NUM_THREADS=2\n",
" jupyter-notebook\n",
" \n",
"And use `n=4` in the `Cluster()` constructor below.\n",
"\n",
"### Example: 8 cores, 4 MPI processes, 1 thread per process\n",
"\n",
"This might be a conservative choice where you don't want to use the entire resources of the server / laptop. Set the number of threads **before** launching jupyter:\n",
"\n",
" export OMP_NUM_THREADS=1\n",
" jupyter-notebook\n",
" \n",
"And use `n=4` in the `Cluster()` constructor below.\n",
"\n",
"### Example: 8 cores, 8 MPI processes, 1 thread per process\n",
"\n",
"In general, toast parallelizes better with MPI than relying on threading. Set the number of threads **before** launching jupyter:\n",
"\n",
" export OMP_NUM_THREADS=1\n",
" jupyter-notebook\n",
" \n",
"And use `n=8` in the `Cluster()` constructor below.\n",
"\n",
"## Computing Centers\n",
"\n",
"Many large computing centers (e.g. NERSC) offer some kind of jupyter service running on dedicated nodes. These nodes might be shared or their may be other mechanisms for launching ipyparallel processes through the batch system. This kind of custom setup is beyond the scope of this tutorial. For NERSC, [see the online documentation](https://docs.nersc.gov/services/jupyter/) for setting up a custom conda environment with your own set of packages (like toast) and then creating kernel spec file from that which can be used with notebooks."
]
},
{
"cell_type": "markdown",
"metadata": {
"toc-hr-collapsed": false
},
"source": [
"## Imports\n",
"\n",
"If you are using MPI, be sure to initialize the ipyparallel cluster at the start of the notebook, before other imports:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import ipyparallel as ipp\n",
"cluster = ipp.Cluster(engines=\"mpi\", n=4)\n",
"client = cluster.start_and_connect_sync()\n",
"client.block = True\n",
"# We can turn on automatic use of MPI with:\n",
"%autopx"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now import all of the remaining packages you want to use. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Built-in modules\n",
"import os\n",
"import sys\n",
"\n",
"# External modules\n",
"import toast\n",
"import toast.ops\n",
"import toast.widgets\n",
"\n",
"# Capture C++ output in the jupyter cells\n",
"import wurlitzer\n",
"%load_ext wurlitzer\n",
"\n",
"# Display inline plots\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The notebooks in these tutorials also import some helper functions in a file located in the same directory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load helper tools for the docs\n",
"from toast_docs import create_outdir"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following utility function in toast gets information about the MPI environment if it is in use, otherwise it returns `None` for the communicator, `1` for the number of processes, and `0` for the rank of the current process. These notebooks use the `if rank == 0:` guard around some operations that only need to be done on one process. If you are not using MPI in your own notebook, you can skip those kinds of checks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get our MPI world rank to use later when only one process needs to do something,\n",
"# like printing info or making a plot.\n",
"comm, procs, rank = toast.get_world()\n",
"print(f\"rank {rank} of {procs} processes in comm world {comm}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Runtime Environment\n",
"\n",
"The `toast` module can be influenced by a few environment variables, which must be set **before** importing `toast`. You can see these in the doc string for the package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if rank == 0:\n",
" help(toast)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can get the current TOAST runtime configuration from the \"Environment\" class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if rank == 0:\n",
" env = toast.Environment.get()\n",
" print(env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The logging level can be changed by either setting the `TOAST_LOGLEVEL` environment variable to one of the supported levels (`VERBOSE`, `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`) or by using the `set_log_level()` method of the `Environment` class. The maximum number of threads is controlled by the standard `OMP_NUM_THREADS` environment variable."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}