Although the new feeds made that quite straightforward, there was a small detail to sort out: the Ubuntu Finder is visually dynamic, but it is actually a fully static web site served from S3, while the JSON feeds are served from the Canonical servers. This means the browser's same-origin policy won't allow that kind of cross-domain import without further action.
The typical workaround for this kind of situation is to put a tiny proxy within the site server that loads the JSON and dispatches it to the browser from the same origin. Unfortunately, this isn't an option here because there's no custom server backing the data. There's a similar option that does work, though: deploying that tiny proxy somewhere else and forwarding the JSON payload either as JSONP or with cross-origin resource sharing (CORS) enabled, so that browsers can bypass the same-origin restriction. That's what was done.
Rather than writing yet another special-purpose tiny server for that one service, though, this time around a slightly more general tool came out of it, and as an experiment it has been put live so anyone can use it. The server logic is pretty simple, and the idea is even simpler. Using the services from jsontest.com as an example, the following URL will serve a JSON document that can only be loaded from a page in a location allowed by the same-origin policy:
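For instance, a representative jsontest.com endpoint (assumed here for illustration; it returns the caller's IP address as a JSON document):

```
http://ip.jsontest.com/
```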
If one wanted to load that document from a different location, it might be transformed into a JSONP document by loading it from:
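Along these lines, and assuming the service is hosted at jsonpeer.appspot.com (the hostname and the myHandler callback name here are illustrative assumptions), the request would look something like:

```
http://jsonpeer.appspot.com/ip.jsontest.com?jsonpeercb=myHandler
```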
Alternatively, modern browsers that support cross-origin resource sharing (CORS) can simply load pure JSON by omitting the jsonpeercb parameter. The jsonpeer server will emit the proper header to allow the browser to load it:
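Again assuming the jsonpeer.appspot.com hostname from the example above, that would be simply:

```
http://jsonpeer.appspot.com/ip.jsontest.com
```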
This service is backed by a tiny Go server that lives in App Engine, so it's fast, secure (hopefully), and maintenance-free.
Some further details about the service:
- Results are JSON with cross-origin resource sharing by default
- With a query parameter jsonpeercb=<callback name>, results are JSONP
- The callback name must consist of characters in the set [_.a-zA-Z0-9]
- Query parameters provided to jsonpeer are used when doing the upstream request
- HTTP headers are discarded in both directions
- Results are cached for 5 minutes on memcache before being re-fetched
- Upstream results must be valid JSON
- Upstream results must have Content-Type application/json or text/plain
- Upstream results must be under 500kb
- Both http and https work; just tweak the URL and the path accordingly
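Two of the rules above are easy to sketch in Go, the language the service itself is written in. The function names below are illustrative, not the service's actual internals: validating a callback name against the [_.a-zA-Z0-9] character set, and wrapping a JSON payload into a JSONP response.

```go
package main

import (
	"fmt"
	"regexp"
)

// callbackPattern matches the characters allowed in a JSONP
// callback name: one or more of [_.a-zA-Z0-9].
var callbackPattern = regexp.MustCompile(`^[_.a-zA-Z0-9]+$`)

// validCallback reports whether name is an acceptable jsonpeercb value.
func validCallback(name string) bool {
	return callbackPattern.MatchString(name)
}

// asJSONP wraps a JSON payload in a call to the named callback,
// which is the shape of the response when jsonpeercb is provided.
func asJSONP(callback, json string) string {
	return fmt.Sprintf("%s(%s);", callback, json)
}

func main() {
	fmt.Println(validCallback("handle.ip")) // true
	fmt.Println(validCallback("evil()"))    // false: parentheses are not allowed
	fmt.Println(asJSONP("handleip", `{"ip": "8.8.8.8"}`))
}
```

The dot is allowed in callback names so that handlers can be namespaced (e.g. myapp.handleip) without the proxy having to understand anything about the page's JavaScript.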
Have fun if you need it, and please get in touch before abusing it.
UPDATE: The service and blog post were tweaked so that it defaults to returning plain JSON with CORS enabled, thanks to a suggestion by James Henstridge.