Right now it's too complicated to set up the huge number of cronjobs
necessary to get the system working, and it's very easy to miss one or
two.
So create an internal job runner that knows which jobs to run and when
to run them. Since the data about the jobs comes from the app itself, it
will automatically update when a newer version is deployed; this is
handled by a post_migrate hook that enumerates the jobs.
All jobs are existing Django management commands. When a member class
called ScheduledJob is added to a management command, it automatically
becomes managed by the job scheduler.
A job can either be scheduled to run at an interval ("every 30 minutes")
or at a fixed time ("at 23:37"). If a fixed time is used, multiple
different times can be given, but only as times of day, so the job runs
at least once per day. A job can also be defined without a schedule (for
manual runs only), and finally one job can trigger an immediate run of a
different job after it finishes.
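A minimal sketch of what such a declaration might look like. The
attribute names (scheduled_interval, scheduled_times, trigger_next_jobs)
are illustrative assumptions, not the actual API, and a stand-in
BaseCommand is used so the sketch runs without Django installed:

```python
from datetime import time, timedelta

class BaseCommand:
    """Stand-in for django.core.management.base.BaseCommand."""
    def handle(self, *args, **options):
        pass

class Command(BaseCommand):
    help = "Send queued notification emails"

    class ScheduledJob:
        # Either an interval ("every 30 minutes")...
        scheduled_interval = timedelta(minutes=30)
        # ...or one or more fixed times of day (mutually exclusive with
        # an interval in this sketch):
        # scheduled_times = [time(23, 37), time(11, 37)]
        # Optionally trigger another job immediately afterwards:
        # trigger_next_jobs = ["cleanup_notifications"]
        internal = False  # default: run externally, in a subprocess

    def handle(self, *args, **options):
        pass  # the normal management command body
```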
All jobs can be enabled/disabled through the web interface, though
normally they should all be enabled. It is also possible to override the
scheduling of individual jobs in the web interface.
Jobs can be defined as internal, meaning they only call internal
functions and access the database in Django, or external (the default),
meaning they do things like call external APIs. Internal jobs are run in
the same process, using the same database connection as the manager;
external jobs are executed in a subprocess. External commands can be
given a timeout (default 2 minutes), after which they are killed.
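A sketch of how the runner might execute an external job in a subprocess
with a timeout. The helper name and return shape are assumptions for
illustration, not the actual implementation:

```python
import subprocess
import sys

DEFAULT_TIMEOUT = 120  # seconds; "default = 2 minutes" per the above

def run_external_job(argv, timeout=DEFAULT_TIMEOUT):
    """Run an external command, capturing output; kill it on timeout."""
    try:
        proc = subprocess.run(argv, capture_output=True, text=True,
                              timeout=timeout)
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        # subprocess.run() kills the child before raising
        return False, "killed after %d seconds" % timeout

ok, output = run_external_job([sys.executable, "-c", "print('done')"],
                              timeout=10)
```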
Each job can optionally be given a static method called should_run,
which is executed before the job. If it returns False, the job is
skipped. This is typically used to avoid the fork+exec of a potentially
expensive external command for jobs that need to react reasonably
quickly but are expensive to run. This method is called inside the
runner process even for external jobs.
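A hypothetical should_run check; everything here besides the should_run
name itself is an illustrative assumption. The point is that the runner
makes a cheap in-process decision before paying for fork+exec:

```python
def pending_work_exists():
    # stand-in for e.g. a single cheap database query
    return False

class Command:
    class ScheduledJob:
        @staticmethod
        def should_run():
            return pending_work_exists()

# The runner calls this in its own process, even for external jobs:
skipped = not Command.ScheduledJob.should_run()
```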
All jobs are run through one django management command that is intended
to run as a systemd service. That means they are "single threaded" and
there is no risk of overlapping jobs.
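One pass of such a single-threaded loop might look like the sketch
below; the job representation and field names are assumptions. Because
each job blocks until it finishes, no two jobs can ever overlap:

```python
from datetime import datetime, timedelta

def run_due_jobs(jobs, now, run_job):
    """Run every job whose next_run has passed, strictly in sequence,
    then reschedule it."""
    for job in sorted(jobs, key=lambda j: j["next_run"]):
        if job["next_run"] <= now:
            run_job(job)  # blocks until finished; no overlap possible
            job["next_run"] = now + job["interval"]

ran = []
jobs = [
    {"name": "cleanup", "interval": timedelta(minutes=30),
     "next_run": datetime(2024, 1, 1, 12, 0)},
    {"name": "sync", "interval": timedelta(hours=1),
     "next_run": datetime(2024, 1, 1, 13, 0)},
]
run_due_jobs(jobs, datetime(2024, 1, 1, 12, 5), ran.append)
# only "cleanup" was due; "sync" stays scheduled for 13:00
```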
By default, notification emails are sent for all failed runs. It is also
possible to configure, on a per-job basis, notifications to be sent on
successful runs as well.
A history of all jobs is kept in the database, including the output from
both successful and failed jobs.
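The shape of such a history record might be roughly the following; the
field names are assumptions, and in the real app this would be a Django
model rather than a dataclass:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class JobHistory:
    job_name: str
    started_at: datetime
    success: bool
    output: str = ""  # captured for successful and failed runs alike

h = JobHistory("cleanup", datetime(2024, 1, 1, 12, 0), True, "done\n")
```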