4.2 How to use batch processing data analysis servers

Figure 4.1: Illustrated outline of batch processing data analysis servers.
MDAS provides batch processing data analysis servers to support batch processing. Batch processing is a mechanism by which a computer automatically and sequentially processes jobs, where a job is a group of programs that share a common purpose.

If multiple users run multiple programs on an interactive data analysis server, processing efficiency may drop because of a lack of computational resources. On the batch processing data analysis servers, a job management system manages and runs jobs sequentially so that they are processed as efficiently as possible within the available computational resources. The batch processing data analysis servers are effective for large numbers of programs and for programs that need large computational resources.

The batch servers consist of 2 servers (kaibm[01-02].ana.nao.ac.jp). A job management system named PBS Professional (hereinafter referred to as PBS) is installed on the servers for batch processing. "kaibm01" functions as the PBS management server, and both "kaibm01" and "kaibm02" function as calculation servers.

The PBS management server manages jobs submitted by users and allocates them to the calculation servers. A submitted job is queued when no computational resources are available on the calculation servers, and a queued job runs once computational resources are allocated. For efficient processing, it can happen that running jobs are killed and requeued in order to run other jobs instead.


4.2.1 System configuration

Batch processing data analysis servers consist of 2 servers (FUJITSU Server PRIMERGY RX2530 M2). Red Hat Enterprise Linux 7 is installed on each server.
Table 4.5: Specification of a kaibm server
Host name   kaibm[01-02].ana.nao.ac.jp
Machine     FUJITSU Server PRIMERGY RX2530 M2
Quantity    2
OS          Red Hat Enterprise Linux 7
CPU         Intel Xeon E5-2667 v4, 3.2 GHz, 16 cores
RAM         DDR4-2400 RDIMM, 192 GB


4.2.2 Queue configuration

PBS has job queues which control the execution order of jobs. When a user submits a job to one of the job queues, the PBS management server judges whether the calculation servers can execute the job. If they can, the job is run on them; if not, the job is queued until computational resources are allocated. The usable computational resources and the execution priority differ between the job queues. Users must select a queue suitable for the scale of their own jobs, because using an unsuitable queue wastes computational resources; for example, a single-threaded program that needs only a few GB of memory belongs in "q1", not "q16".

Table 4.6: Job queue configuration
Queue  CPU cores  Usable memory per job  Time limit per job  Executable jobs per user
q1     1          11 GB                  30 days             hard limit: 32, soft limit: 2
q4     4          44 GB                  30 days             hard limit: 8,  soft limit: 1
q8     8          88 GB                  15 days             hard limit: 4,  soft limit: 1
q16    16         176 GB                 15 days             hard limit: 2,  soft limit: 1


4.2.3 Tutorial

In order to use the batch processing data analysis servers, you have to create a shell script called a job script and submit it to a job queue with the "qsub" command from the "kaim" or "kaih" servers (the interactive data analysis servers). In this section, we introduce the basic steps for submitting your jobs.

  1. How to make a job script
  2. How to submit and delete a job
  3. How to display a job status


1. How to make a job script

A job script is a shell script that describes the directives for PBS and the programs to execute. The following script is an example of a job script for running a program "a.out" in the queue "q1".
#!/bin/bash
#PBS -M taro.tenmon@nao.ac.jp
#PBS -m abe
#PBS -q q1

# Go to this job's working directory
cd $PBS_O_WORKDIR

# Run your executable
./a.out
The "#PBS" lines are the directives for PBS. In this script, we have given the following directives.
#PBS -M taro.tenmon@nao.ac.jp: E-mails will be sent to taro.tenmon@nao.ac.jp. Please make sure to use this directive. If you do not use this directive, administrators will receive error e-mails bounced back from the interactive server, since the default e-mail address "user@host.ana.nao.ac.jp" is invalid.
#PBS -m abe: An e-mail will be sent when the job is aborted (a), when it begins execution (b), and when it ends (e). "#PBS -m a" is enabled by default.
#PBS -q q1: The job will be submitted to the queue "q1".
"$PBS_O_WORKDIR" is an environment variable defined by PBS; it holds the path of the directory from which the job script was submitted.


2. How to submit and delete a job

In order to submit a job to a queue, execute the "qsub" command on an interactive data analysis server.
$ qsub job_script.sh
A submitted job can be deleted by executing the "qdel" command. The job ID can be displayed with the "qstat" command, as described below.
$ qdel job_id
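For example, "qsub" prints the ID of the submitted job, which can then be passed to "qdel" (the job ID below is illustrative):
$ qsub job_script.sh
1000.kaibm01
$ qdel 1000.kaibm01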


3. How to display job status

The "qstat" command shows the status of submitted jobs.
$ qstat
Job id            Name       User       Time Use  S  Queue
----------------  ---------  --------   -------   -  -----
9013.a000         job1       user1      50:20:10  R  q1
9019.a000         job2       user2      40:32:13  R  q1
9030.a000         job3       user3      30:14:19  R  q1
9079.a000         job4       user4      00:59:15  R  q1
9102.a000         job5       user5             0  Q  q1
Each column represents the job ID, job name, user name, CPU time used, job status, and queue name, respectively. The job status has the following states.
Q (Queued): Job is queued and will run once computational resources are allocated.
R (Running): Job is running.
S (Suspended): Job is suspended. This occurs when a higher-priority job needs computational resources.
As a side note, finished jobs can be displayed with the "qstat -x" command.


4.2.4 PBS Professional

In this section, we introduce PBS Professional briefly. Please refer to its User's Guide (https://www.pbsworks.com/pdfs/PBSUserGuide18.2.pdf) for details. This section refers to and quotes from the User's Guide.
  1. About the PBS Professional
  2. Lifecycle of a PBS job
  3. PBS job scripts
  4. PBS commands
  5. PBS directives
  6. PBS environment variables
  7. About a priority control of jobs


1. About the PBS Professional

PBS Professional is a distributed workload management system. PBS manages and monitors the computational workload for one or more computers. PBS does the following:
Queuing jobs
PBS collects jobs (work or tasks) to be run on one or more computers. Users submit jobs to PBS, where they are queued up until PBS is ready to run them.
Scheduling jobs
PBS selects which jobs to run, and when and where to run them, according to the policy specified by the site administrator. PBS allows the administrator to prioritize jobs and allocate resources in a wide variety of ways, to maximize efficiency and/or throughput.
Monitoring jobs
PBS tracks system resources, enforces usage policy, and reports usage. PBS tracks job completion, ensuring that jobs run despite system outages.


2. Lifecycle of a PBS job

Your PBS job has the following lifecycle:
  1. You write a job script.
  2. You submit the job to PBS.
  3. PBS accepts the job and returns a job ID to you.
  4. The PBS scheduler finds the right place and time to run your job, and sends your job to the selected execution host(s).
  5. Licenses are obtained.
  6. On each execution host, PBS creates a job-specific staging and execution directory.
  7. PBS sets PBS_JOBDIR and the job's jobdir attribute to the path of the job's staging and execution directory.
  8. On each execution host allocated to the job, PBS creates a job-specific temporary directory.
  9. PBS sets the TMPDIR environment variable to the pathname of the temporary directory.
  10. If any errors occur during directory creation or the setting of variables, the job is requeued.
  11. Input files or directories are copied to the primary execution host.
  12. The job runs under your login.
  13. Output files or directories are copied to specified locations.
  14. Temporary files and directories are cleaned up.
  15. Licenses are returned to pool.


3. PBS job script

A PBS job script consists of a shebang line specifying the shell, PBS directives, and job tasks (programs or commands). Under Linux, a shell script, Python, Perl, or other script can be used as a job script. Examples of job scripts written as shell scripts are as follows:
#!/bin/sh
# Job script with a single core

#PBS -M taro.tenmon@nao.ac.jp
#PBS -m abe
#PBS -q q1
#PBS -r y
#PBS -N job_name
#PBS -o Log.out
#PBS -e Log.err

# Go to this job's working directory
cd $PBS_O_WORKDIR

# Run your executable
./a.out

#!/bin/bash
# Job script with multiple cores

#PBS -M taro.tenmon@nao.ac.jp
#PBS -m abe
#PBS -r y
#PBS -q q4
#PBS -N job_name
#PBS -o Log.out
#PBS -e Log.err

# Go to this job’s working directory
cd $PBS_O_WORKDIR

# Run your executables in parallel
./a_0.out &
./a_1.out

# Wait for the background process to finish before the job script exits
wait


4. PBS commands

PBS has various commands so that users can submit, monitor, and manage jobs. We will describe commonly used commands.

qsub

This is a command to submit a job to a queue. Specify your job script as the argument.
$ qsub job_script.sh
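Options corresponding to the PBS directives described below can also be given on the command line, in which case they take precedence over the directives written in the script. For example, to submit a job to the queue "q4" regardless of the "-q" directive in the script:
$ qsub -q q4 job_script.sh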

qdel

This is a command to delete a submitted job. The job ID can be displayed with the "qstat" command, as described below.
$ qdel Job_ID

qstat

This is a command to check the status of submitted jobs. The status of a specific job can be displayed by specifying its job ID as an argument. If a queue name is specified, the status of jobs in that queue is displayed.
$ qstat
Job id            Name       User       Time Use  S  Queue
----------------  ---------  --------   -------   -  -----
1000.kaibm01      job1       user1      50:20:10  R  q1
1001.kaibm01      job2       user1      40:32:13  R  q1
1002.kaibm01      job3       user2      30:14:19  R  q1
1003.kaibm01      job4       user2      00:59:15  R  q4
1004.kaibm01      job5       user3             0  Q  q16

$ qstat 1000
Job id            Name       User       Time Use  S  Queue
----------------  ---------  --------   -------   -  -----
1000.kaibm01      job1       user1      50:20:10  R  q1

$ qstat q1
Job id            Name       User       Time Use  S  Queue
----------------  ---------  --------   -------   -  -----
1000.kaibm01      job1       user1      50:20:10  R  q1
1001.kaibm01      job2       user1      40:32:13  R  q1
1002.kaibm01      job3       user2      30:14:19  R  q1
Each column represents the job ID, job name, user name, CPU time used, job status, and queue name, respectively. The representative states of a job are as follows:
W (Waiting): Job is waiting. It will be queued and run when its submitter-assigned start time comes.
Q (Queued): Job is queued. The job will run once computational resources are allocated.
R (Running): Job is running.
S (Suspended): Job is suspended. This occurs when a higher-priority job needs computational resources.
H (Held): Job is held. A job can be held with the "qhold" command.
F (Finished): Job is finished. This means the job was completed, failed, or deleted.
The representative options of the “qstat” command are as follows:
-a: Displays information for all queued and running jobs. You can check the elapsed time of a job.
-x: Displays information for finished jobs in addition to queued and running jobs.
-n: The exec host string is listed on the line below the basic information. If the -1 option is also given, it is listed at the end of the same line.
-T: Displays the estimated start time for queued jobs.
-f: Displays full information for jobs.
-Q: Displays queue status in default format.
-q: Displays queue status in alternate format.
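For example, the status of a job that has already finished can be checked with -x, and the queue status with -Q (the job ID is illustrative):
$ qstat -x 1000
$ qstat -Q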

qhold

This is a command to hold a submitted job. Execution of the job is interrupted and the allocated computational resources are released. The job can be resumed by executing the "qrls" command described below. This command is useful when you would like to let queued or suspended jobs run preferentially.
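$ qhold Job_ID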

qrls

This is a command to release a held job.
$ qrls Job_ID


5. PBS directives

PBS directives, written in a job script, pass options of the "qsub" command to PBS. A directive line needs the prefix "#PBS" at the beginning of the line and must be placed above any commands; directives placed below a command are ignored. Representative PBS directives are shown below.

-M

This directive sets an E-mail address. Please make sure to use this directive. If you do not use this directive, administrators will receive error E-mails bounced back from an interactive server since the default E-mail address “user@host.ana.nao.ac.jp” is invalid.
#PBS -M your.address@example.jp

-m

This directive sets the e-mail notifications sent by PBS. If you do not use this directive, "#PBS -m a" will be set.
#PBS -m n|(one or more of a,b,e)
n: No mail is sent.
a: Mail is sent when the job is aborted by the PBS.
b: Mail is sent when the job begins execution.
e: Mail is sent when the job terminates.
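For example, to receive mail only when the job aborts or ends, but not when it begins:
#PBS -m ae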

-q

This directive specifies the queue to which the job is submitted. If you do not use this directive, "#PBS -q q1" will be set.
#PBS -q q1|q4|q8|q16
q1: Job is submitted to the queue “q1”.
q4: Job is submitted to the queue “q4”.
q8: Job is submitted to the queue “q8”.
q16: Job is submitted to the queue “q16”.

-l

This directive sets limits on the computational resources.
#PBS -l select=ncpus=X:mem=Ygb|walltime=hh:mm:ss
select=ncpus=X:mem=Ygb: The number of CPU cores and the amount of memory used by your job are restricted to X cores and Y GB, respectively. You cannot specify a number of CPU cores or an amount of memory that exceeds the default values of the queue you use. Available units for the memory are b, kb, mb, and gb.
walltime=hh:mm:ss: Sets the maximum job execution time. You cannot specify a walltime that exceeds the default value of the queue you use.
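For example, to restrict a q4 job to 4 CPU cores and 40 GB of memory, and to set a 10-day walltime (the values are illustrative and must stay within the limits of the queue in Table 4.6):
#PBS -l select=ncpus=4:mem=40gb
#PBS -l walltime=240:00:00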

-r

This directive specifies whether PBS restarts submitted jobs after the system is restored. If you do not use this directive, "#PBS -r y" will be set.
#PBS -r y|n
y: Job is rerunnable.
n: Job is not rerunnable.

-a

This directive sets a job start time. The job waits (state W) and becomes eligible to run when the specified time comes.
#PBS -a YYMMDDhhmm.SS
YYMMDDhhmm.SS: For example, to start a job at 07:30 on September 1, 2020, specify "#PBS -a 2009010730.00".

-h

This directive holds a job. The effect is the same as that of the "qhold" command.
#PBS -h

-N

This directive sets a job name. The specified name is displayed in the Name column of the "qstat" output. If you do not use this directive, the job name will be the job script's file name.
#PBS -N Job_name

-o

This directive sets the file name for the standard output. If a relative path is used, it is resolved against the directory where the "qsub" command was executed. If you do not use this directive, the file will be named "(job script name).o(job ID)" and stored in the directory where the "qsub" command was executed.
#PBS -o /path/to/output.log

-e

This directive sets the file name for the standard error output. If a relative path is used, it is resolved against the directory where the "qsub" command was executed. If you do not use this directive, the file will be named "(job script name).e(job ID)" and stored in the directory where the "qsub" command was executed.
#PBS -e /path/to/error.log

-j

This directive merges the standard output and the standard error output.
#PBS -j oe|eo
oe: Standard output and standard error output are merged into standard output.
eo: Standard output and standard error output are merged into standard error.
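For example, a minimal sketch that writes both streams to the single file Log.out by combining -j with -o:
#PBS -o Log.out
#PBS -j oe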

-R

This directive removes the standard output and/or standard error output files.
#PBS -R o|e|oe
o: The standard output stream is removed.
e: The standard error output stream is removed.
oe: The standard output and standard error output streams are removed.


6. PBS environment variables

PBS environment variables, defined by PBS, are available in the job script. Representative variables are as follows:
$PBS_JOBID: Submitted job's job ID.
$PBS_JOBNAME: Submitted job's job name.
$PBS_O_HOME: Value of environment variable $HOME.
$PBS_O_HOST: The host name on which the “qsub” command was executed.
$PBS_O_LANG: Value of environment variable $LANG.
$PBS_O_LOGNAME: Value of environment variable $LOGNAME.
$PBS_O_PATH: Value of environment variable $PATH.
$PBS_O_QUEUE: The queue name to which the job was submitted.
$PBS_O_SHELL: Value of environment variable $SHELL.
$PBS_O_WORKDIR: The absolute path of the directory where the "qsub" command was executed.
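A minimal sketch of a job script that uses some of these variables (the program "a.out" and the output file name are illustrative):
#!/bin/bash
#PBS -M taro.tenmon@nao.ac.jp
#PBS -m abe
#PBS -q q1

# Go to the directory where the job was submitted
cd $PBS_O_WORKDIR

# Record which job this is and where it was submitted from, then run the program
echo "job ${PBS_JOBNAME} (${PBS_JOBID}) submitted from ${PBS_O_HOST} to queue ${PBS_O_QUEUE}" > ${PBS_JOBID}.info
./a.out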


7. About a priority control of jobs

Each queue has a hard limit and a soft limit, which restrict the number of executable jobs per user.

The hard limit is the maximum number of jobs that can run at the same time. If a user submits more jobs than the hard limit, the excess jobs will be queued. For example, if you submit ten jobs to q4, whose hard limit is eight, eight jobs will be executed and two jobs will be queued.

The soft limit is the number of jobs that are executed with normal priority. If a user submits more jobs than the soft limit, the excess jobs run with low priority. For example, if you submit four jobs to q4, whose soft limit is one, one job will have normal priority and the other three will have a priority lower than that of q16, which has the lowest priority among the queues on our system. When you or another user submits additional jobs, the low-priority jobs may be killed and requeued.

Examples of the priority control

We show an example of priority control in a situation where several users submit jobs to the system. In the example below, "1(AAAA)" represents user A's q4 job, which was the first job submitted to the system. Note that the job priority is q1 > q4 > q8 > q16 > jobs lowered by the soft limit, and that the soft-limit values of q1, q4, q8, and q16 are 2, 1, 1, and 1, respectively.

1. User A submits six q4 jobs. All of them will run, but the second and subsequent jobs will have low priority due to the soft limit.
Status              Jobs                                CPU cores used
Running (kaibm01):  1(AAAA) 2(AAAA) 3(AAAA) 4(AAAA)     16/16
Running (kaibm02):  5(AAAA) 6(AAAA)                     8/16
Queued:             (none)


2. User B submits a q16 job. The low-priority q4 jobs running on kaibm02 (jobs 5 and 6) will be requeued to make room for it.
Status              Jobs                                CPU cores used
Running (kaibm01):  1(AAAA) 2(AAAA) 3(AAAA) 4(AAAA)     16/16
Running (kaibm02):  7(BBBBBBBBBBBBBBBB)                 16/16
Queued:             5(AAAA) 6(AAAA)


3. User C submits two q1 jobs. The most recently submitted of the running low-priority q4 jobs (job 4) will be requeued.
Status              Jobs                                CPU cores used
Running (kaibm01):  1(AAAA) 2(AAAA) 3(AAAA) 8(C) 9(C)   14/16
Running (kaibm02):  7(BBBBBBBBBBBBBBBB)                 16/16
Queued:             4(AAAA) 5(AAAA) 6(AAAA)


4. After jobs 1 to 3 finish, the queued q4 jobs will run.
Status              Jobs                                CPU cores used
Running (kaibm01):  4(AAAA) 5(AAAA) 6(AAAA) 8(C) 9(C)   14/16
Running (kaibm02):  7(BBBBBBBBBBBBBBBB)                 16/16
Queued:             (none)



4.2.5 Handling of jobs during maintenance

For the reason described below, if system maintenance is planned, we recommend that you delete your jobs before the maintenance and re-submit them afterwards.

When the batch processing data analysis servers are rebooted during maintenance, running and queued jobs are killed but are re-submitted automatically after the reboot. However, the re-submission fails because the LDAP client has not started yet. This process is repeated 21 times, and you will receive many e-mails from PBS informing you that your jobs have failed if you have set the "#PBS -m" option. After that, the jobs are held by the system. Held jobs cannot be released with general user privileges, but the user can delete them.

ADC
2023-10-17