Age | Commit message (Collapse) | Author |
|
Commit 5a07ecd9 only worked in standalone mode, and would error in
Django mode. Fix that.
|
|
|
|
They provide a service similar to Plaid, but currently better laid out for
small organisations (in Europe at least). Let's see how long that lasts,
but for now here's a basic provider that works very similarly to the
Plaid one, except it does not support webhooks.
|
|
|
|
This is on by default in Django 4.2, but that makes it sometimes pick a
different default date format than we normally want. Since we want to
be in full control of that, turn this off in django.
|
|
|
|
This is currently used only for unmanaged bank transfers, and is only
used to show the instructions for how to pay such an invoice.
|
|
This change allows the currency format to be configured in the
instance. This allows us to use alternate symbols (e.g. £), and
to format amounts using the format required in different
jurisdictions.
The currency format is set to Euro by default.
Note that this change requires an update to any Jinja templates
that use the currency_format tag. This must be changed to
format_currency.
|
|
We don't enable the URLs, and it's just a couple of empty tables in the
db. This one app was handled differently from the others (the membership
app was added even if not used), and causes some issues like migration
dependencies.
|
|
This module will use the plaid.com service to download bank transaction
lists from any supported bank. If available, it will also respond to
webhooks sent by plaid whenever transactions show up, but failing that
will just poll twice per day.
|
|
This adds a new type of provider to the system for handling digital
signatures.
Initially the only consumer is conference sponsorships, but it could be
added for other parts of the system as well in the future. Regular
"old-style" sponsorship contracts are still supported, but will gain the
feature to auto-fill sponsor name and VAT number if wanted. The sponsor
signup workflow is adjusted to support either one or both of the two
methods.
Initially the only implementation is Signwell, but the system is made
pluggable just like e.g. the payment providers, so other suppliers can
be added in the future.
This should be considered fairly beta at this point, as several parts of
it cannot be fully tested until a production account is in place. But
the basics are there...
|
|
Adds it as an asset (cdn loaded by default) and changes all internal
references. Do so in a backwards compatible way where possible, so that
site-skins continue working.
|
|
There is a mix of things loaded off CDNs and things loaded from the
local media directory, which is, ahem, inconsistent. This first step
makes the assets a configurable set that can be overridden in
local_settings.py. It does not at this point change any assets, so the
result remains inconsistent, but that will happen in a future update.
Reviewed by Daniel Gustafsson, along with the associated smaller commits
|
|
This adds support for registering fonts other than DejaVu in the
system, and tries to do so in a backwards compatible fashion. That means
the old FONTROOT setting remains, but now means the root for the
DejaVu fonts only. Further fonts can be added in local_settings.py by
specifying both the name of the font and the full path to the ttf file.
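As a sketch, the local_settings.py entries might look like this. FONTROOT is the real setting from the message; the EXTRA_FONTS name, the tuple layout, and the paths are assumptions for illustration only:

```python
# local_settings.py -- hypothetical shape. FONTROOT comes from the commit
# message; EXTRA_FONTS and its structure are assumptions, not the actual
# setting name used by the code.
FONTROOT = '/usr/share/fonts/truetype/dejavu'  # now the DejaVu root only

EXTRA_FONTS = [
    # (font name, full path to the ttf file)
    ('Liberation Sans',
     '/usr/share/fonts/truetype/liberation/LiberationSans-Regular.ttf'),
]
```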
|
|
This simplifies deployments, since django_markwhat has a tendency to
create conflicting requirements that make upgrades harder. Showdown
doesn't have that problem, but this way we have a single defined
markdown process instead of having two subtly different ones.
Most of the code behind this is adapted from the pgweb project, which
went through this some months ago.
Fixes #72
|
|
We want these to use the configured datetime format and not fall back to a
system default (which includes timezone specifications, and we don't
want that since a single event is only in one timezone).
To make the view even easier to parse, if both the start and the end are
on the same day (which they normally are), just show the date once, and
the interval between the two times.
Fixes #69
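The collapsing logic described above could be sketched like this. The function name and the format strings are placeholders, not the project's actual code, which uses the configured formats:

```python
from datetime import datetime

def format_datetime_range(start, end, datefmt='%Y-%m-%d', timefmt='%H:%M'):
    """Render a start/end pair, showing the date only once when both
    fall on the same day (formats stand in for the configured ones)."""
    if start.date() == end.date():
        return '{} {} - {}'.format(start.strftime(datefmt),
                                   start.strftime(timefmt),
                                   end.strftime(timefmt))
    return '{} {} - {} {}'.format(start.strftime(datefmt),
                                  start.strftime(timefmt),
                                  end.strftime(datefmt),
                                  end.strftime(timefmt))
```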
|
|
|
|
|
|
Most things still work so it's still a usable setting, but it does break
the registration dashboard for example (for any conference that's not
run in UTC).
|
|
Instead of requiring people to get on IRC, support member meetings in a
web browser. To make this work, there will be a simple websockets based
server (hosted in a separate repository) that will act as a relay,
and a trivial web app to handle the frontend.
Also include native handling of polls in the system, including
timeouts, as these are typical actions during these meetings.
Meeting log and handling goes in the database, making it easy to extract
later for generation of official meeting minutes.
Code by me, layout and styling by Ilaria Battiston.
|
|
Django used to have this, but at some point replaced it with a
dependency on facebook/watchman. Since we don't require that (and it's a
heavy dependency, including running a separate daemon -- which is not
packaged on at least Debian stable versions), and thus didn't have it, the
django implementation would fall back on polling all files in a loop
once per second. And since there are thousands of files to depend on in a
django environment, that could use a substantial amount of CPU.
We don't use this for the webserver itself, as that runs under something
like uwsgi, but we have two daemons at this point (scheduled task runner
and social media poster) that do.
So implement a local reloader based on inotify which is of course a lot
more efficient. We don't try any fancy detection magic, and instead just
watch every .py and .pyc file in a configurable set of directories (which
would typically consist of the code checkout and the virtualenv root,
but can be adapted).
If used, this adds a dependency on pyinotify, but the default
configuration is still to fall back on the django implementation.
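A minimal sketch of such a reloader follows. Only pyinotify's WatchManager/Notifier/ProcessEvent API is real; the helper names and the fallback signalling are invented for illustration:

```python
# Sketch of an inotify-based reloader. If pyinotify is not importable we
# signal the caller to fall back on django's polling implementation,
# matching the default configuration described above.
import os

WATCH_EXTENSIONS = ('.py', '.pyc')

def files_to_watch(roots):
    """Yield every .py/.pyc file under the configured root directories
    (typically the code checkout and the virtualenv root)."""
    for root in roots:
        for dirpath, _dirs, filenames in os.walk(root):
            for f in filenames:
                if f.endswith(WATCH_EXTENSIONS):
                    yield os.path.join(dirpath, f)

def start_watching(roots, on_change):
    try:
        import pyinotify
    except ImportError:
        return False  # caller falls back to the django polling reloader

    class Handler(pyinotify.ProcessEvent):
        def process_default(self, event):
            if event.pathname.endswith(WATCH_EXTENSIONS):
                on_change(event.pathname)

    wm = pyinotify.WatchManager()
    mask = pyinotify.IN_MODIFY | pyinotify.IN_CREATE | pyinotify.IN_DELETE
    for root in roots:
        wm.add_watch(root, mask, rec=True)
    pyinotify.Notifier(wm, Handler()).loop()  # blocks, dispatching events
    return True
```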
|
|
This requires the web server to also configure a static mapping for
/media/django_toolbar/ pointing into the django toolbar directories.
|
|
* Remove the hard-coded twitter implementation and replace it with an
infrastructure for pluggable implementations.
* Separate out "social media broadcasting" (public twitter posts) from
"private notifications" (DMs) and invent "private broadcasting"
(notifications sent only to attendees of a conference) and
"organisation notifications" (sent to organisers).
* Add the concept of a Messaging Provider that's configured on a
Conference Series, which maps to a twitter account (or similar in
other providers). Replace "incoming twitter active" flag on a
conference with a setting on this messaging provider for "route
messages to conference". This way the messaging doesn't have to be
reconfigured for each new conference in a series, and we also
automatically avoid the risk of having two conferences getting the
same input.
* For each conference in a series, the individual Messaging Providers
can be enabled or disabled for the different functionality, and
individual channels be configured when applicable.
* Add implementations of Twitter (updated, social broadcasting and
private messaging support), Mastodon (social broadcasting and private
messaging) and Telegram (attendee broadcasts, private notifications,
and organisation broadcasts)
* Add webhook support for Twitter and Telegram, making for much faster
reactions to incoming messages.
* Remove the hardcoded news twitter post accounts, replacing them with
  MessagingProviders per above that are not attached to a conference.
* Add a daemon that listens to PostgreSQL notifications and sends out
broadcasts and notifications for quicker action (if not enabled, a
scheduled task will send them out every 10 minutes like before)
* In making broadcast posts, add support for the fact that different
providers have different maximum post lengths (e.g. Twitter currently has
280 and Mastodon 500), and also roughly account for the effects of
URL shorteners on posts.
* Add a button to registration dashboards to send DMs to attendees that
have configured notifications.
* Send "private broadcasts" ahead of any talks to keep people posted of
talks. For now this is always enabled if a channel is set up for
private broadcasts, we may want to consider making it more
configurable in the future.
There are still a lot of tables and files referring to Twitter in the tree,
and some of those will be renamed in a future commit to make tracking of
changes easier.
Fixes #29
Fixes #13
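The length budgeting mentioned above could be sketched like this. The 280/500 limits are from the message; the flat 23-character cost per URL is an assumption modeled on Twitter's t.co shortener, and all names are invented:

```python
# Sketch of per-provider post length budgeting. SHORTENED_URL_LENGTH is
# an assumed flat cost per URL after shortening, not a documented value.
import re

SHORTENED_URL_LENGTH = 23
PROVIDER_MAX = {'twitter': 280, 'mastodon': 500}

def effective_length(post):
    """Length of a post with every URL counted at its shortened size."""
    length = len(post)
    for url in re.findall(r'https?://\S+', post):
        length += SHORTENED_URL_LENGTH - len(url)
    return length

def fits(post, provider):
    return effective_length(post) <= PROVIDER_MAX[provider]
```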
|
|
Turns out django does not reset the timezone on requests, unless it's
done explicitly. So add a new middleware that does exactly this -- and
then individual requests will set it back to the conference timezone as
required.
Not doing this could lead to very interesting results in a
multi-threaded server...
|
|
Switch the system to properly use django and postgres timezone support,
by allowing each conference to render all date related information in a
conference specific timezone (using the one that has already been
specified on the conference, per a previous commit).
All non-conference parts of the system keep using the default timezone
as specified in settings.TIME_ZONE.
This includes a migration that updates the existing sessions, session
slots and volunteer slots based on what timezone has been configured
for the conference (since previously everything was stored in the
wrong timezone if the conference was in anything but the default
one).
In order to make this work for non-django-orm queries, a context
manager that swaps the timezone to the conference and back out is
introduced, and related to that a way to get a cursor that turns off
django's protection against doing exactly this.
This finally removes the very ugly "timediff" column on the conference
which was a quick hack back in the days to support ical feeds using utc.
In passing, this also:
* Fixes ical feeds to include all required fields (uid and dtstamp
were missing on schedule entries)
* Fixes xml feed to use conference local time (fixes #8)
* Clarify what "valid until" and "active until" mean in the help text
  on discount codes and registration types.
* Don't duplicate dates in schedule xml feeds (seems others don't, and
there is no clear spec anywhere that I can find)
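The context manager for non-django-orm queries could look roughly like this. It is a sketch reconstructed from the description, assuming a psycopg2-style cursor; the real implementation also supplies a cursor with django's timezone protection turned off:

```python
# Hypothetical shape of the timezone-swapping context manager described
# above; names and the default zone are assumptions.
from contextlib import contextmanager

@contextmanager
def conference_timezone(cursor, tz_name, default_tz='UTC'):
    """Run raw SQL with the session timezone set to the conference's
    zone, swapping back to the default afterwards."""
    cursor.execute("SET TIME ZONE %s", (tz_name,))
    try:
        yield cursor
    finally:
        cursor.execute("SET TIME ZONE %s", (default_tz,))
```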
|
|
The version we have doesn't work in django 2.2. And while there might be
newer ones available, we only use it in the admin interface, and the
newer django has its own functionality for delivering the same thing
there.
|
|
This includes removing the FilterPersistMiddleware which really isn't
needed much anymore, so not worth the effort to port.
Changes are all backwards compatible.
|
|
Instead of the even more hackish solution to use an app called _initial
with an init module, move the code to inject the concurrency protection
into the app in util/, and instead make sure this app is loaded *before*
the django.contrib.admin.
Also move the check for svgcairo and qrencode to the util app, because
that's cleaner.
This removes the _initial app completely.
|
|
We already did this for DATETIME_FORMAT, but for some reason manually
set the format of date-only output. In setting this, also remove those
manual setups as long as they match the Y-m-d pattern, which is
what we should be using everywhere.
|
|
|
|
Not sure how that got there...
|
|
Create a namespace under /monitor/ that can be used for more monitoring
endpoints in the future, and a setting for MONITOR_SERVER_IPS to control
who is allowed to get information from them. Intended to be extended
with further points in the future.
For now, implement a "git" endpoint that returns information about the
current git position of this deployment (branch, tag if there is one,
and latest commit seen).
|
|
This controls the sending address for status from the scheduled jobs
runner. If not set, it will be automatically set to
SCHEDULED_JOBS_EMAIL.
|
|
Instead of assuming that sender == receiver for notifications, make it
possible to have the receiver be a different address, configured with a
separate parameter. If not set, it will be automatically set to
INVOICE_SENDER_EMAIL.
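In local_settings.py terms, the fallback might look like the following. INVOICE_SENDER_EMAIL is from the commit message; the receiver setting name is hypothetical, and the actual defaulting happens in the app's settings handling rather than literally like this:

```python
# local_settings.py -- sketch only. INVOICE_NOTIFICATION_RECEIVER is an
# invented name for the new parameter; the real defaulting is done by
# the application, not by this assignment.
INVOICE_SENDER_EMAIL = 'invoices@example.org'

# If not set explicitly, the receiver defaults to the sender address:
INVOICE_NOTIFICATION_RECEIVER = INVOICE_SENDER_EMAIL
```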
|
|
TREASURER_EMAIL is supposed to be used in templates, not actively by the
code, but usage in the wrong way had snuck in. Remove that, and to deal
with it add a field to the TransferWise configuration for
notification_receiver.
While at it, put a comment in the config file explaining what it's for.
|
|
No support for payout tracking yet, since Stripe makes it impossible to
test that until 7 days after you sign up for the test account...
|
|
This makes it easier to set up initial sites that are not using the
postgresql community auth system, by instead letting them use things
like google and facebook to log users in.
|
|
This prevents a situation where the groups don't exist and one has to
consult the source code to figure out what they are supposed to be
called.
|
|
This uses the TransferWise REST API to get access to an IBAN account,
allowing "traditional" bank paid invoices to be reasonably automated.
The provider integrates with the "managed bank transfer" system, thereby
handling automated payments using the payment reference. Since this
reference is created by us it can be printed on the invoice, making it
easier to deal with in traditional corporate environments. Payments that
are incorrect in either amount or payment reference will now also show
up in the regular "pending bank transactions" view and can be processed
manually as necessary.
For most SEPA transfers, TransferWise will be able to provide the IBAN
number to the sending account. When this is the case, the provider also
supports refunds, which will be issued as general IBAN transfers to this
account. Note that refunds require the API token to have "full access"
as its permission level in the TW system, meaning it can make arbitrary
transfers of any funds. There is no way to specifically tie it to just
refunds, as these are just transfers and not payments.
|
|
Right now it's too complicated to set up the huge amount of cronjobs
necessary to get the system working, and it's very easy to miss one or
three.
So create an internal jobs runner that will know which jobs to run, and
when to run them. Since the data about the jobs come from the app
itself, it will automatically update when a newer version is deployed.
This is handled by a post_migrate hook that enumerates jobs.
All jobs are the existing django management commands. When a member
class called ScheduledJob is added to a management command it
automatically becomes managed by the job scheduler.
A job can either be scheduled to run at an interval ("every 30 minutes")
or at a fixed time ("at 23:37"). If time is chosen, multiple different
ones can be used, but only in the form of a time, so at least once per
day. A job can also be defined without a schedule (for manual runs), and
finally one job can trigger the immediate run of a different job after
it.
All jobs can be enabled/disabled through the web interface, though
normally they should all be enabled. It is also possible to override the
scheduling of individual jobs in the web interface.
Jobs can be defined as internal, meaning that they only call internal
functions and database accesses in django, or external (default),
meaning they do things like call external APIs. Internal functions will
be run in the same process using the same database connection as the
manager, and external jobs will be executed in a subprocess. External
commands can be given a timeout (default = 2 minutes) after which they
will get killed.
Each job can optionally be given a static method called should_run,
which will be executed before the job. If this returns False, the job
will be skipped. This can typically be used to avoid the
fork+exec+potentially-expensive-check of some external commands for jobs
that need to react reasonably quickly but are expensive to run.
This function will be called internally in the runner even for external
jobs.
All jobs are run through one django management command that is intended
to run as a systemd service. That means they are "single threaded" and
there is no risk of overlapping jobs.
By default notification emails are sent for all failed runs. It is also
possible to on a per-job basis configure notifications to be sent on
successful runs as well.
A history of all jobs is kept in the database, including the output from
both successful and failed jobs.
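A hedged sketch of how a management command opts in follows. The inner-class name ScheduledJob comes from the commit message; the attribute names (scheduled_interval, scheduled_times, internal, timeout) are assumptions about its shape:

```python
# Sketch of a job-enabled management command. Command stands in for
# django.core.management.base.BaseCommand; all ScheduledJob attribute
# names are assumptions based on the description above.
from datetime import timedelta

class Command:
    help = 'Send pending notifications'

    class ScheduledJob:
        scheduled_interval = timedelta(minutes=30)  # "every 30 minutes"...
        # ...or instead one or more fixed times of day ("at 23:37"):
        # scheduled_times = [time(23, 37)]
        internal = False  # external (the default) runs in a subprocess
        timeout = timedelta(minutes=2)  # external jobs killed after this

        @staticmethod
        def should_run():
            # Cheap pre-check run in the manager even for external jobs;
            # returning False skips the fork+exec entirely.
            return True
```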
|
|
Out of settings.py and into the database, and while at it make it
possible to select which invoice methods can be used for membership
invoices.
|
|
This is a major refactoring of how the payment method integrates, with
the intention of making it more flexible and easier to use.
1. Configuration now lives in the database instead of local_settings.py.
2. This configuration is edited through the /admin/ interface, which
makes it a lot easier to add constraints (and instructions), thus
preventing misconfiguration.
3. Invoice payment methods are now separate from invoice payment
implementations. That means there can be multiple instances of the
same payment method, such as multiple different paypal accounts,
being managed.
4. All payment method implementations are now available in all
installations, including Braintree and Trustly. This retires the
x_ENABLED settings in local_settings.py. The code won't actually run
unless there are any payment methods defined with them.
5. On migration, all payment methods that are marked as inactive and
have never been used are removed. Any payment method that has been
used is left around, since there are old invoices connected to it.
Likewise, any payment method that is selected as available for any
sponsorship level (past or future) is left in the system.
XXXXXX manual action needed on production systems XXXXXX
1. Settings for payment methods should be migrated automatically, but
should of course be verified!
2. The template for Manual Bank Transfer is *not* migrated, since it
wasn't in settings.py, but in a template and overridden downstream.
Migrate the contents of the template invoices/banktransfer.html to the
database using the /admin/ interface. When this is done, the template
can be removed.
3. Notification URLs in Adyen must be updated in the Adyen backoffice to
include the payment method id in the url (adding a /n/ to the end of the
URL, with n being the id of the payment method).
4. Notification URLs in Paypal must be updated the same way.
|
|
|
|
|
|
Instead of never doing anything about them other than sending an email,
add a setting for ADYEN_IS_TEST_SYSTEM. If this setting is True
(default!) and the notification comes in from the adyen test
environment, process it as normal. If it's False and the notification
comes in from the live environment, process as normal. Only if there is
a mismatch is the email generated.
In passing, change the term "test system" to "test environment", to make
it more clear.
|
|
Sibling imports should be prefixed with a period. Good idea in py2, will
eventually become required in py3, so another small step.
|
|
Python 2.6 introduced the better syntax, Python 3 removes the old one,
so one small step towards py3.
|
|
It no longer has any pgeu specifics in it.
|
|
Split apart the settings for Invoice PDF and Refund PDF builders, as
they can theoretically be loaded from different places.
Remove all PGEU specific contents from the invoices. This will now
instead go in a class that lives in the skin, if there is one. If there
is no skin, an "anonymous" invoice with basically no info will be
generated. Enough for testing, but invoices should really always be
skinned.
Rename invoices to BaseInvoice and BaseRefund to make this clear.
Make the invoice properly handle both VAT and non-VAT cases, so that it
can be used in skinned versions of both.
BREAKING change - this changes the way the invoices are configured and
called, so all skinned installations must update to cover this.
|
|
In passing, remove some comments that were pointless.
|