This lets us separate things like project news from other OSS and
commercial postings, for example, allowing people to subscribe to
different feeds with just the parts they are interested in.
Publish the community recognition guidelines developed by the PostgreSQL Core
Committee, reachable from a navigation entry in the "Community" section. Add
links to the guidelines from several other pages on the PostgreSQL website:
* Donate
* Events
* User Groups
Additionally, this patch updates the contact email address for donation questions
to the PostgreSQL Funds Group.
Using the crawled data, populate dropdown boxes with versions and
platforms, so we can show simple instructions with exactly which
commands to use to install from the yum repository.
The ftp server can then submit a list (and structure) of which platforms
are supported for yum downloads, which can later (in a separate
commit) be used to generate a nicer download page for yum repo rpms.
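A hedged sketch of what that submitted structure could look like (every
key and value below is invented for illustration, not the actual format):

    # invented example of a platform structure the ftp server could submit
    yum_platforms = {
        'EL-7-x86_64': {'versions': ['9.4', '9.3', '9.2']},
        'EL-6-i386':   {'versions': ['9.3', '9.2']},
        'F-21-x86_64': {'versions': ['9.4']},
    }

    # the dropdowns can then be populated from the keys and version lists
    platforms = sorted(yum_platforms.keys())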
We'll use this to index some things in our own search engine without
exposing them to external sitemap parsers. Not for security reasons,
of course, but because it will make it possible to search the devel
docs again.
We have so many users now that loading these forms takes forever.
Instead, implement a textbox with autocomplete using django-selectable,
which does not load the whole list of users at once.
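A minimal sketch of how that can look with django-selectable (model,
form, and field names are examples, not necessarily what the site uses):

    # lookups.py -- server-side lookup queried as the user types
    from django.contrib.auth.models import User
    from selectable.base import ModelLookup
    from selectable.registry import registry

    class UserLookup(ModelLookup):
        model = User
        search_fields = ('username__icontains', )

    registry.register(UserLookup)

    # forms.py -- renders as a textbox with autocomplete, not a <select>
    from django import forms
    from selectable.forms import AutoCompleteSelectField

    class ModerationForm(forms.Form):
        user = AutoCompleteSelectField(lookup_class=UserLookup)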
The pwn module has never been used, as the PWN postings are simply sent
to -announce and nothing else. We've kept the code and model around for
doing it on the site for years now and it's unused, so let's remove it
to cut down on maintenance cost.
1. Prefix all our local modules with "pgweb" as required by the new
project layout.
2. Change the django core imports to match 1.8
3. redirect_to has been changed to RedirectView (sketched below)
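As a sketch of change 3 (the URLs below are placeholders):

    from django.conf.urls import url
    from django.views.generic import RedirectView

    # before (function-based generic view, removed before 1.8):
    #   (r'^some/old/url/$', 'django.views.generic.simple.redirect_to',
    #    {'url': '/some/new/url/'}),

    # after (class-based view):
    url(r'^some/old/url/$', RedirectView.as_view(url='/some/new/url/')),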
In passing, also tabify the urls files, which used a horrible mix of
tabs and spaces. The Python standard is spaces, but since the rest of
the pgweb project uses tabs, make the urls.py files do that as well.
We were already using signals for everything except delete, and the
delete signal exists even in our old version of django (it didn't exist
when this code was first written).
Django doesn't really like models to be OOP like this, so keeping PgModel
would cause issues with upcoming changes in django 1.8. Using simple functions
is easier, and the actual functionality is replicated straight off.
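A hedged sketch of the resulting pattern (handler name and module path
are examples):

    from django.db.models.signals import post_save, post_delete

    from pgweb.news.models import NewsArticle   # example model/path

    def content_changed(sender, instance, **kwargs):
        # react to the change, e.g. queue a varnish purge
        pass

    post_save.connect(content_changed, sender=NewsArticle)
    post_delete.connect(content_changed, sender=NewsArticle)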
The idea is that a git repository hook will send a POST to this URL,
which will drop a trigger file somewhere. A cronjob (or an inotify
listener, if we want to be really fancy sometime in the future) will
pick up that trigger file and run the update script. The goal is to
shorten the time required to process an update.
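A minimal sketch of such an endpoint (view name and trigger path are
examples, not the actual implementation):

    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def doc_update_hook(request):               # name is an example
        if request.method != 'POST':
            return HttpResponse("POST only", status=405)
        # drop an (empty) trigger file for the cronjob to find
        with open('/tmp/docload.trigger', 'w'):  # path is an example
            pass
        return HttpResponse("OK")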
This makes CSS download in a single request, instead of the 6-7
requests per page we make now. It also moves all the URL definitions
for CSS into the templates rather than the raw CSS itself, which will
make it possible to enable client-side caching in the future.
Fixes #91
Instead of having to update this list manually in multiple places when
releasing new versions, just take the information out of the database
where it has to be anyway.
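A hedged sketch of the idea (the module path and field name are
assumptions about the Version model):

    from pgweb.core.models import Version   # module path is an assumption

    # ask the database instead of maintaining a hardcoded list
    versions = Version.objects.filter(supported=True)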
Fixes #90, closes #93
Also make the code automatically pick up which PDF files exist in the
static checkout, and auto-detect their sizes, for both the A4 and US
versions. This removes yet one more manual step, yay!
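A minimal sketch of the detection (directory layout is an example):

    import glob
    import os

    pdfdir = '/usr/local/pgweb/docs/pdf'    # example path
    # map each PDF that actually exists (A4 and US variants alike)
    # to its size in bytes
    pdfs = {os.path.basename(f): os.path.getsize(f)
            for f in sorted(glob.glob(os.path.join(pdfdir, '*.pdf')))}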
Fixes #163
This also changes the main URL for looking at events to be /about/events/
instead of /about/eventsarchive/, and reuses /about/eventsarchive/ as
the actual archive. Also separates out the training archive to its
own page at /about/eventsarchive/training/, for easier browsing.
We want an API for this so the requests end up in the queue with all the
other requests, and get delivered to all our frontends without each node
needing to know which frontends exist.
Now that we have more metadata, we can render this on the main page instead of
on the wiki. This commit copies all the text from the current wiki page, and
uses the new fields in the Version model to render the table on the same URL
as the reference was on before.
Previously this had to be rsynced outside of the website. By allowing the
upload here, and automatically purging the data from varnish, we will reach
"almost instant" updates of the ftp site structure on the web.
This makes it possible to render the search results on the main website.
We still run the query on the separate search server, so one has to be
configured in settings_local.py with the key SEARCH_DSN (a standard
PostgreSQL/psycopg2 connection string).
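For example (hostname and credentials are placeholders):

    # settings_local.py
    SEARCH_DSN = "host=search.example.org dbname=search user=web"

    # the site then runs queries over a plain psycopg2 connection
    import psycopg2
    conn = psycopg2.connect(SEARCH_DSN)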
This allows all models inherited from PgModel to specify which
URLs to purge by either setting a field or defining a function
called purge_urls, at which point they will be purged whenever
the save signal is fired.
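A hedged sketch of the two variants (model names and URLs are examples,
and the PgModel import path is an assumption):

    from django.db import models

    class NewsArticle(PgModel, models.Model):
        # variant 1: a field listing URL patterns to purge on save
        purge_urls = ('/about/news/', )

    class Event(PgModel, models.Model):
        # variant 2: a function computing the URLs per instance
        def purge_urls(self):
            yield '/about/events/'
            yield '/about/event/%s/' % self.pk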
Also implements a form under /admin/purge/ that allows for manual
purging. This should probably be extended in the future to show
the status of the pgq slaves, but that will come later.
Includes a SQL function that posts the expires to a pgq queue. For
a local deployment, this can be replaced with a simple void function
to turn off varnish purging.
Each module now contains a struct.py file that will return all
the URLs that it can generate (yes, this is a small break of the
abstraction of urls.py, but we've broken that elsewhere as well),
and also which search engine weight (0.1-1.0) each URL should be
given.
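A hedged sketch of one module's struct.py (the entry point name, module
path, and field names are assumptions):

    # news/struct.py
    from pgweb.news.models import NewsArticle   # example module path

    def get_struct():
        # yield (url, search_engine_weight) for every URL we generate
        for a in NewsArticle.objects.filter(approved=True):
            yield ('/about/news/%s/' % a.id, 0.5)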
Account for the fact that when pages are served through Varnish, the
request will come from the Varnish server and not from the client.
Create a /system_information page that shows some information about the
connection to help diagnose how the caches work.
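A hedged sketch of handling that in (old-style) middleware, not
necessarily how it's actually done here:

    class XForwardedForMiddleware(object):
        # trust the X-Forwarded-For header set by our own Varnish
        # frontends and expose the real client address to the rest
        # of the code
        def process_request(self, request):
            xff = request.META.get('HTTP_X_FORWARDED_FOR')
            if xff:
                request.META['REMOTE_ADDR'] = xff.split(',')[0].strip()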