The new archives server has an HTTP API - use that one for searching instead
of talking directly to the database.
With the new API, we always fetch the complete search results (still
capped server-side at 1000 items) and store them locally in memcached
for 10 minutes. That way, paging will only hit the local memcached and
not the remote HTTP API *or* the SQL API.
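As a rough sketch of how that can look in the Django code (the endpoint URL, query parameter and cache key are made up for illustration; only the 1000-item cap and the 10-minute lifetime come from the description above):

    import hashlib

    import requests
    from django.core.cache import cache

    ARCHIVES_SEARCH_URL = "https://archives.example.org/search/"  # hypothetical

    def archives_search(query):
        # One memcached entry holds the complete, server-capped result set.
        key = "archives_search_" + hashlib.md5(query.encode("utf-8")).hexdigest()
        hits = cache.get(key)
        if hits is None:
            r = requests.get(ARCHIVES_SEARCH_URL, params={"q": query}, timeout=10)
            r.raise_for_status()
            hits = r.json()  # at most 1000 items, capped server-side
            cache.set(key, hits, 10 * 60)  # keep for 10 minutes
        return hits

    def search_page(query, page, per_page=20):
        # Paging just slices the locally cached list - no remote call per page.
        return archives_search(query)[(page - 1) * per_page:page * per_page]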
Most of these forms look pretty benign, but the user profile form, which
includes an SSH key field, certainly needs to be protected.
The survey form is left unprotected because it's served over insecure HTTP
and the Varnish proxy strips the cookies that the builtin CSRF protection
requires.
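With Django, opting a single view out of the builtin protection is a one-decorator change; a minimal sketch, with the view name invented for illustration:

    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt  # no CSRF cookie survives this HTTP + Varnish setup anyway
    def survey_vote(request):
        # handle the cookie-less survey submission here
        return HttpResponse("thanks")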
Marti Raudsepp
We want an API for this so they end up in the queue with all the other
requests, and get delivered to all our frontends without each node needing
to know which frontends exist.
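A minimal sketch of the queue idea, with all names assumed: callers only insert into a queue model, and a separate consumer (the only place that knows the frontend list) delivers each entry to every frontend.

    from django.db import models

    class PurgeQueue(models.Model):  # hypothetical model
        added = models.DateTimeField(auto_now_add=True)
        url_pattern = models.TextField()

    def enqueue_purge(url_pattern):
        # Frontend-agnostic: we just queue the request and return.
        PurgeQueue(url_pattern=url_pattern).save()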
This increases session security, obviously... It will also break local development
installs, which will have to add the two lines that this patch adds to the
documentation.
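The documentation carries the exact two lines; purely as an assumed illustration, they would plausibly be secure-cookie overrides like these in a local settings_local.py, since serving over plain HTTP is what breaks secure-cookie sessions:

    # settings_local.py on a local development install (assumed content -
    # the authoritative two lines are the ones added to the documentation)
    SESSION_COOKIE_SECURE = False
    CSRF_COOKIE_SECURE = False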
Previously this had to be rsynced outside of the website. By allowing the
upload here, and automatically purging the data from Varnish, we get
"almost instant" updates of the ftp site structure on the web.
This makes it possible to render the search results on the main engine.
We still run the query on the separate search server, so it has to be
configured in settings_local.py with the key SEARCH_DSN (a standard
PostgreSQL/psycopg2 connection string).
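A minimal sketch of both sides, with the host, database and search function made up; only the setting name and the fact that it's a plain libpq/psycopg2 DSN come from the description:

    # settings_local.py:
    #   SEARCH_DSN = "host=search.example.org dbname=search user=web"
    import psycopg2
    from django.conf import settings

    def run_search(query):
        conn = psycopg2.connect(settings.SEARCH_DSN)
        try:
            cur = conn.cursor()
            # site_search() is a hypothetical function on the search server
            cur.execute("SELECT * FROM site_search(%s)", (query,))
            return cur.fetchall()
        finally:
            conn.close()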
This accounts for the fact that when pages are served through Varnish, the
request will come from the Varnish server and not from the client.
Create a /system_information page that shows some information about the
connection to help diagnose how the caches work.
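A minimal sketch of what such a page can print; behind Varnish, REMOTE_ADDR is the proxy's address and the real client only shows up in X-Forwarded-For, which makes the difference easy to spot:

    from django.http import HttpResponse

    def system_information(request):
        lines = [
            "REMOTE_ADDR: %s" % request.META.get("REMOTE_ADDR", ""),
            "X-Forwarded-For: %s" % request.META.get("HTTP_X_FORWARDED_FOR", "(not set)"),
        ]
        return HttpResponse("\n".join(lines), content_type="text/plain")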
of the RSS feed. (It will receive a new URL now that it lives in the
actual app rather than with the static files, so a redirect will be needed
there.)
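A minimal sketch of such a redirect, with both the old and the new URL invented for illustration:

    from django.urls import path
    from django.views.generic import RedirectView

    urlpatterns = [
        # old static-file location -> the feed's new home inside the app
        path("news.rss", RedirectView.as_view(url="/news/feed/", permanent=True)),
    ]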
Read the ftp site structure from a pickle generated on the ftp server,
instead of crawling the directories directly. This removes the requirement
to sync almost 10Gb worth of ftp site onto the web server...
The pickle file for this is currently around 1Mb, so it's not a huge
burden on the server. If it grows larger in the future, we may want to
re-think this and split it up, or put it in a database format or something
like that.
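A minimal sketch of both ends, with the paths and the exact structure assumed: the ftp server walks the tree once and pickles a plain dict, and the web server just unpickles that file instead of needing the ~10Gb tree locally.

    import os
    import pickle

    def build_structure(root):
        # directory and file names only - never the file contents themselves
        structure = {}
        for dirpath, dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            structure[rel] = {"dirs": sorted(dirnames), "files": sorted(filenames)}
        return structure

    # On the ftp server:
    #   with open("ftpsite.pickle", "wb") as f:
    #       pickle.dump(build_structure("/srv/ftp"), f)

    def load_structure(path="/usr/local/pgweb/ftpsite.pickle"):  # assumed path
        with open(path, "rb") as f:
            return pickle.load(f)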