The Hopper system at NERSC has 24 processors per node. Usually we submit jobs that use full nodes, for example:
#PBS -q regular
#PBS -l mppwidth=256
#PBS -l walltime=24:00:00
#PBS -N 256_job
#PBS -e $PBS_JOBID.err
#PBS -o $PBS_JOBID.out
#PBS -V
cd $PBS_O_WORKDIR
echo "Changing to workdir $PBS_O_WORKDIR"
echo "listing workdir contents"
ls -ltr
aprun -n 256 ./cluster > log.txt
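Assuming the script is saved as, say, job.pbs (the file name here is just illustrative), you submit it to the batch system with qsub:
qsub job.pbs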
Here you request 256 processors in the #PBS -l mppwidth=256 line of the script; when the job executes, it runs on the number of processors given in the aprun -n 256 ./cluster > log.txt line.
However, if you want to use fewer processors per node in a run, you have to change the mppwidth line.
Suppose you want to use only 17 of the 24 processors on each node, with a total of 256 processors (we do this when each process needs more memory). The mppwidth value is calculated with:
mppwidth = ceiling(total_processors_requested / processors_used_per_node) * 24
Here total_processors_requested is 256 and processors_used_per_node is 17.
256/17 = 15.058823529..., and applying the ceiling function gives 16 (the number of nodes needed).
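As a quick sanity check, the same calculation can be done in the shell with integer arithmetic (the variable names below are just illustrative):
ntasks=256              # total processors requested
per_node=17             # processors actually used per node
cores_per_node=24       # physical processors per Hopper node
nodes=$(( (ntasks + per_node - 1) / per_node ))  # ceiling division: 16
mppwidth=$(( nodes * cores_per_node ))           # 16 * 24 = 384
echo "mppwidth = $mppwidth"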
So mppwidth = 16 * 24 = 384. When you run the application, it will still run on only 256 processors (17 per node), so your job script will look like this:
#PBS -q regular
#PBS -l mppwidth=384
#PBS -l walltime=24:00:00
#PBS -N 17_24_job
#PBS -e $PBS_JOBID.err
#PBS -o $PBS_JOBID.out
#PBS -V
cd $PBS_O_WORKDIR
echo "Changing to workdir $PBS_O_WORKDIR"
echo "listing workdir contents"
ls -ltr
aprun -n 256 -N 17 ./cluster > log.txt
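The -N 17 option tells aprun to place at most 17 tasks on each node; without it, aprun packs the default 24 tasks per node, so the 256 tasks would be crowded onto fewer nodes and the extra cores reserved by mppwidth=384 would sit idle, defeating the purpose of the larger reservation.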