There is no need to browse the repositories on the main website.
It's still possible to browse them directly on the ftp servers, of course,
for those who need to debug a repository install and things like that.
we actually use in production. This mostly consists of the "new style" support
for raw data passthrough, but it does not yet fix the actual problem, which is
dealing with URL "double" encoding/decoding - or rather the loss of information
about whether a value has already been encoded.
Will fix that and sync up exactly with the production code for MW 1.19 soon...
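To illustrate the information loss, a minimal standard-library sketch (not
taken from the codebase):

    from urllib.parse import quote, unquote

    literal = "foo%20bar"   # really contains the characters "%20"
    spaced = "foo bar"      # contains an actual space

    # Both map onto the same on-the-wire form:
    assert quote(spaced) == literal
    assert unquote(literal) == spaced

    # So when "foo%20bar" arrives, there is no way to know whether to
    # decode it - the information about how many times it was encoded
    # is gone. Escaping it again only compounds the problem:
    assert quote(literal) == "foo%2520bar"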
This makes it possible to pass URLs that would otherwise fail in some cases
when they end up being double escaped, since they contain non-URL-safe
characters. Instead, they are base64-encoded, and thus safe.
Also update the django community auth provider to do just this, including
encrypting the data with the site secret key to make sure it can't be
changed/injected by tricking the user into going directly to the wrong URL.
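A rough sketch of the idea, using the cryptography library's Fernet
(authenticated encryption) in place of whatever the real provider uses;
deriving the key from the site secret and the helper names encode_redirect/
decode_redirect are assumptions made for this sketch:

    import base64
    from cryptography.fernet import Fernet

    def _fernet(secret_key: bytes) -> Fernet:
        # Fernet wants a urlsafe-base64 encoded 32-byte key; derive one
        # from the site secret (an assumption for this sketch).
        return Fernet(base64.urlsafe_b64encode(secret_key[:32].ljust(32, b"\0")))

    def encode_redirect(url: str, secret_key: bytes) -> str:
        # Encrypted and authenticated, and the token itself is urlsafe
        # base64, so it survives any further escaping untouched.
        return _fernet(secret_key).encrypt(url.encode()).decode()

    def decode_redirect(token: str, secret_key: bytes) -> str:
        # Raises if the token was tampered with (e.g. a user hand-built
        # a redirect URL), which is exactly what we want.
        return _fernet(secret_key).decrypt(token.encode()).decode()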
User preferences in MW post-1.18 were moved to a separate table, and only
settings that are _NOT_ default should be stored there. Despite what the
documentation says, actually having data left in user_options is harmful
and will break random functionality like preference handling.
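A sketch of the kind of cleanup this implies, assuming a MySQL-backed wiki
reachable via MySQLdb; the exact statement the production fix runs may differ:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="wiki",
                           passwd="secret", db="wikidb")
    curs = conn.cursor()
    # On MW >= 1.18 preferences live in user_properties; leftovers in
    # the legacy user.user_options column actively break preference
    # handling, so blank the column out entirely.
    curs.execute("UPDATE user SET user_options = ''")
    conn.commit()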
Previously we would only purge based on URLs, but some of the upcoming
new work requires arbitrary expression purging.
NOTE! Requires the creation of the new SQL procedure in the database,
either from varnish.sql or varnish_local.sql depending on whether it's
prod or dev.
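A hedged sketch of what invoking such a procedure from Django could look
like; the procedure name varnish_purge_expr is an assumption based on the
description, not necessarily the real name:

    from django.db import connection

    def varnish_purge_expr(expr):
        # Queue a purge of every cached object whose URL matches the
        # given expression, via the new stored procedure.
        curs = connection.cursor()
        curs.execute("SELECT varnish_purge_expr(%s)", [expr])

    # Old-style single-URL purge and a new arbitrary-expression purge:
    varnish_purge_expr("^/about/press/$")
    varnish_purge_expr("^/docs/devel/")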
Replaces the old search code with something that's not quite as much
spaghetti (i.e. not evolved over too much time), and more stable (actual
error handling instead of random crashes).
Crawlers are now also multithreaded, to deal with higher latency to some
sites.
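A minimal sketch of the multithreaded crawl pattern described (a work queue
plus worker threads; all names here are illustrative):

    import queue
    import threading
    import urllib.request

    NUM_WORKERS = 5
    jobs = queue.Queue()

    def index_page(url, contents):
        # Stub: the real crawler parses and stores the page here.
        pass

    def worker():
        while True:
            url = jobs.get()
            if url is None:  # sentinel: shut this worker down
                jobs.task_done()
                return
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    index_page(url, resp.read())
            except Exception as e:
                # Log and continue; one slow or broken site must not
                # take down the whole crawl.
                print("failed to crawl %s: %s" % (url, e))
            jobs.task_done()

    threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for url in ("https://www.postgresql.org/", "https://wiki.postgresql.org/"):
        jobs.put(url)
    for _ in threads:
        jobs.put(None)
    jobs.join()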
In order to provide a consistent user experience, we must sign the
user out from the main website if the community site provides a logout
button - otherwise that button will appear not to work...
This system relies on HTTP redirects and signing in to the main website,
instead of using cross-internet pgsql connections and signing in
individually to each website.
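A hedged sketch of the redirect-based logout leg of that flow (the view
name and redirect target are illustrative, not the real endpoint):

    from django.contrib.auth import logout
    from django.http import HttpResponseRedirect

    # Hypothetical view: a community site redirects the user's browser
    # here when its own logout button is pressed.
    def communityauth_logout(request, redirect_to="/"):
        logout(request)  # terminate the session on the main website too
        # Send the browser back, so the community site's logout button
        # visibly works.
        return HttpResponseRedirect(redirect_to)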
This should hopefully get rid of transient errors caused by automirror hitting
the site during reload, before our script has a chance to pull the local site.
Sometimes we get an HTTP 503 error from lighttpd if we hit the system
right after reload - make sure that we hit these errors from
the update script instead of leaking them to the end user or our
mirror script.
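A minimal sketch of the retry loop this implies (the URL and retry policy
are illustrative):

    import time
    import urllib.error
    import urllib.request

    def fetch_with_retry(url, attempts=5, delay=2):
        # lighttpd can answer 503 briefly right after a reload; absorb
        # those here instead of letting a mirror or end user see them.
        for i in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except urllib.error.HTTPError as e:
                if e.code != 503 or i == attempts - 1:
                    raise
                time.sleep(delay)

    page = fetch_with_retry("http://localhost/some/page")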
Previously this had to be rsynced outside of the website. By allowing the
upload here, and automatically purging the data from varnish, we get
"almost instant" updates of the ftp site structure on the web.
This makes it possible to figure out when the docs were actually
loaded, since developer docs don't carry a version number. This is
actually going to be the docs *load* timestamp, and not the build
timestamp, but they should be close enough together that it shouldn't
matter.
Fixes #108
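A minimal sketch of recording that timestamp at load time (table and
column names are assumptions made for illustration):

    from django.db import connection

    def record_docs_load():
        # Stamp "now" when the devel docs are loaded; with no version
        # number to show, the load time is the best proxy for freshness.
        curs = connection.cursor()
        curs.execute("UPDATE core_version SET docsloaded=now() WHERE tree=0")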