This will allow us to increase the cache times in the browser for our
CSS, which almost never changes.
Enables a new value to be used in templates, {{gitrev}}, which can be
used to bust pretty much any URL. We could do this for all the images
in the templates as well, but since most of them almost never change,
we'll just enable it manually for each individual image as it becomes
necessary - or just use a ?1, ?2 etc. for those.
Enabled by default for CSS and JavaScript links, since those are much
more likely to change without their URLs changing.
Cache times aren't increased yet - we'll do that later, once we're sure
that all existing caches have expired first.
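Roughly, the value can be provided by a context processor like this (the
SOURCE_ROOT setting, and using a context processor at all, are assumptions,
not the actual implementation):

    import subprocess
    from django.conf import settings

    def gitrev(request):
        # Short hash of the current checkout; a real version would read this
        # once at startup instead of forking git on every request.
        rev = subprocess.check_output(
            ['git', 'rev-parse', '--short', 'HEAD'],
            cwd=getattr(settings, 'SOURCE_ROOT', '.'),
        ).decode('ascii').strip()
        return {'gitrev': rev}

A CSS link in a template then becomes something like
<link rel="stylesheet" href="/media/css/base.css?{{gitrev}}">, so the URL
changes whenever the checkout does.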
This is required if the queue is dropped and recreated in pgq as it
gets a new id, which needs to be used when viewing the current status
of the queue in the admin interface.
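A minimal sketch of the lookup, assuming the stock pgq schema (a pgq.queue
table with queue_id/queue_name columns):

    import psycopg2

    def get_queue_id(conn, queue_name):
        # Look the queue up by name on every use, so a drop/recreate (and the
        # new queue_id that comes with it) is picked up automatically.
        with conn.cursor() as cur:
            cur.execute("SELECT queue_id FROM pgq.queue WHERE queue_name = %s",
                        (queue_name,))
            row = cur.fetchone()
            return row[0] if row else None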
Instead of having to update this list manually in multiple places when
releasing new versions, just take the information out of the database
where it has to be anyway.
Fixes #90, Closes #93
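Illustration only - the model and field names (Version, supported, tree) are
guesses at how the version list is stored:

    from pgweb.core.models import Version

    def supported_versions():
        # Release pages and download links all render from this queryset, so a
        # new release only needs a row update in the database.
        return Version.objects.filter(supported=True).order_by('-tree')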
Also make the code automatically pick up which PDF files exist in the
static checkout, and auto-detect their size, both A4 and US sizes. This
removes yet one more manual step, yay!
Fixes #163
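The detection itself can be as simple as something like this (the directory
layout and the "-A4"/"-US" naming convention are assumptions):

    import os

    def find_pdfs(staticdir):
        # Map basename -> {'a4': size, 'us': size}, assuming files are named
        # like "<name>-A4.pdf" and "<name>-US.pdf".
        found = {}
        for fn in os.listdir(staticdir):
            if not fn.lower().endswith('.pdf'):
                continue
            base, _, paper = fn[:-4].rpartition('-')
            if paper.upper() in ('A4', 'US'):
                found.setdefault(base, {})[paper.lower()] = \
                    os.path.getsize(os.path.join(staticdir, fn))
        return found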
This is required for the old style community auth system that is still
in use by the commitfest app. Once that has been retired or upgraded,
this patch should be reverted.
Existing passwords are automatically converted the first time the user logs
in to the main website.
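Roughly, the conversion step can look like this, assuming the legacy hashes
are bare hex md5 digests (that format, and the helper name, are assumptions):

    import hashlib

    def check_and_upgrade_password(user, raw_password):
        # Legacy hashes are assumed to be plain hex md5 (no '$' separators,
        # unlike Django's own "algorithm$salt$hash" format).
        if user.password and '$' not in user.password:
            if hashlib.md5(raw_password.encode('utf-8')).hexdigest() == user.password:
                user.set_password(raw_password)   # re-hash with the current scheme
                user.save(update_fields=['password'])
                return True
            return False
        return user.check_password(raw_password)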
The new archives have an HTTP API - use that for searching instead of
talking directly to the database.
With the new API, we always fetch the complete search results (still
capped server-side at 1000 items), and store them locally in memcached
for 10 minutes. That way, paging will only hit the local memcached and
not the remote HTTP API *or* the SQL API.
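A hedged sketch of that flow (the URL and the 'q' parameter name are
placeholders, not the real API):

    import hashlib
    import requests
    from django.core.cache import cache

    ARCHIVES_SEARCH_URL = 'http://archives.example.org/search/'  # placeholder

    def search_archives(query, page=1, pagesize=20):
        key = 'archsearch_' + hashlib.md5(query.encode('utf-8')).hexdigest()
        results = cache.get(key)
        if results is None:
            r = requests.get(ARCHIVES_SEARCH_URL, params={'q': query}, timeout=10)
            r.raise_for_status()
            results = r.json()            # capped server-side at 1000 items
            cache.set(key, results, 10 * 60)
        # Page locally from the cached copy.
        start = (page - 1) * pagesize
        return results[start:start + pagesize]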
Most of these forms look pretty benign, but the user profile form, which
includes an SSH key field, certainly needs to be protected.
The survey form is unprotected because it's served over insecure HTTP and
the Varnish proxy strips the cookies that the builtin CSRF protection
requires.
Marti Raudsepp
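The exemption for the survey form then looks roughly like this (the view
name and signature are assumptions):

    from django.http import HttpResponseRedirect
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def survey_vote(request, surveyid):
        # ...record the vote here...
        return HttpResponseRedirect('/')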
We want an API for this so they end up in the queue with all the other
requests, and get delivered to all our frontends without needing each node
to know about which frontends exist.
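A sketch of what such an API can look like; the varnish_purge() database
function picked up by the queue is an assumption about the plumbing:

    from django.db import connection

    def varnish_purge(urlpattern):
        # Queue a purge of every URL matching the pattern; the queue delivers
        # it to all frontends, so the caller never needs to know which exist.
        curs = connection.cursor()
        curs.execute("SELECT varnish_purge(%s)", (urlpattern,))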
This increases session security, obviously... It will also break local development
installs, which will have to add the two rows that this patch adds to the
documentation.
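Assuming the two rows are the secure-cookie settings (a guess - check the
updated documentation), a local install served over plain http would need
something like this in settings_local.py:

    # Assumed settings for a local http-only development install; the
    # production site keeps both of these set to True.
    SESSION_COOKIE_SECURE = False
    CSRF_COOKIE_SECURE = False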
Previously this had to be rsynced outside of the website. By allowing the
upload here, and automatically purging the data from varnish, we will reach
"almost instant" updates of the ftp site structure on the web.
This makes it possible to render the search results on the main engine.
We still run the query on the separate search server, so one has to be
configured in settings_local.py with the key SEARCH_DSN (a standard
PostgreSQL/psycopg2 connection string).
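A sketch of running the query over that connection; the table and column
names in the SQL are placeholders, not the real search schema:

    import psycopg2
    from django.conf import settings

    def run_site_search(query):
        conn = psycopg2.connect(settings.SEARCH_DSN)
        try:
            curs = conn.cursor()
            # Placeholder query - adjust to the actual search schema.
            curs.execute("SELECT title, suburl FROM webpages"
                         " WHERE fti @@ plainto_tsquery(%s) LIMIT 1000",
                         (query,))
            return curs.fetchall()
        finally:
            conn.close()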
This deals with the fact that when pages are served through Varnish, the
request will come from the Varnish server and not from the client.
Create a /system_information page that shows some information about the
connection to help diagnose how the caches work.
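A rough sketch of what the page can show (the template name is an
assumption):

    from django.shortcuts import render

    def system_information(request):
        # Echo what the server sees, so it's easy to tell whether a request
        # came in through Varnish or straight from the client.
        return render(request, 'core/system_information.html', {
            'remote_addr': request.META.get('REMOTE_ADDR'),
            'forwarded_for': request.META.get('HTTP_X_FORWARDED_FOR'),
            'is_secure': request.is_secure(),
        })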
of the RSS feed. (Which will receive a new URL now that it lives in the
actual app and not with the static files, so a redirect will be needed
there.)
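The redirect itself is a one-liner in the URL configuration; both the old
and the new feed URLs below are made up for illustration:

    from django.urls import path
    from django.views.generic.base import RedirectView

    urlpatterns = [
        # Old static-file location of the feed -> new app-served URL.
        path('news.rss', RedirectView.as_view(url='/feeds/news/', permanent=True)),
    ]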
on the ftp server, instead of crawling the directories directly. This
removes the requirement to sync almost 10 GB worth of the ftp site onto the
web server...
The pickle file for this is currently around 1 MB, so it's not a huge
burden on the server. If it grows larger in the future, we may want to
re-think this and split it up, or put it in a database format or something
like that.
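As a sketch of the consuming side, assuming the dump is a pickled dict keyed
by directory path (the real structure may differ):

    import pickle

    def list_ftp_directory(picklepath, subpath):
        with open(picklepath, 'rb') as f:
            allnodes = pickle.load(f)
        # One key per directory path, each value a list of entries.
        return allnodes.get(subpath, [])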