changeset 3:f485ba500df3
Gigantic change to basically make PieCrust 2 vaguely functional.
- Serving works, with debug window.
- Baking works, multi-threading, with dependency handling.
- Various things not implemented yet.
--- a/.hgignore Wed Dec 25 22:16:46 2013 -0800
+++ b/.hgignore Sun Aug 10 23:43:16 2014 -0700
@@ -1,4 +1,6 @@
 syntax: glob
 *.pyc
 venv
+tags
+build/messages/_cache
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/README.md Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,55 @@
+# PieCrust
+
+## Basic Configuration
+
+PieCrust comes with a simple way to work with the 90% case: a personal website with a blog. You don't have anything to define, but you can customize some aspects of that default setup. From most likely to least likely:
+
+* changing the URL format of posts
+* changing the URL format of tags/categories
+* defining new taxonomies (by default PieCrust comes with tags and categories)
+* adding secondary blogs
+
+## Advanced Configuration
+
+PieCrust defines content using 3 concepts: *sources*, *taxonomies*, and *routes*.
+
+### Sources
+
+Sources define where your content is on disk, and how it's organized. By default, a source will use the `simple` scanner, but you can use other scanners that can look for files differently, or can lift metadata information from the file names. For example, the `ordered` scanner will return the page files in the order defined by their file name prefix, and the `posts` collection of scanners will associate a date to each page based on their file name.
+
+    sources:
+        posts:
+            type: posts/flat
+        recipes
+        reviews:
+            type: ordered
+
+### Taxonomies
+
+Taxonomies are used by PieCrust to generate listings of pages based on the metadata they have. For instance, you usually want pages listing posts for each existing tag.
+
+    taxonomies:
+        tags:
+            multiple: true
+        category
+        course
+        ingredients:
+            multiple: true
+
+### Routes
+
+Routes define the shape of the URLs used to access your content. URLs for the built-in `pages` source cannot be changed, but you can specify URL routes for all custom sources and taxonomies.
+
+    routes:
+        /%year%/%month%/%slug%:
+            source: posts
+        /recipes/%slug%:
+            source: recipes
+        /recipes/tag/%value%:
+            source: recipes
+            taxonomy: tags
+        /recipes/ingredient/%value%:
+            source: recipes
+            taxonomy: ingredients
+        /reviews/%slug%:
+            source: reviews
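A minimal sketch, not part of this changeset, of how the configuration described above gets consumed by the application code further down in this diff. The `PieCrust` constructor and the slash-separated `config.get()` calls come from the `piecrust/app.py` changes below; the site root path is hypothetical.

    from piecrust.app import PieCrust

    # Hypothetical site root containing a _content/config.yml like the
    # examples above.
    app = PieCrust('/path/to/mysite', cache=False)

    # Configuration values are addressed with slash-separated paths.
    print(app.config.get('site/title'))
    print(app.config.get('site/sources'))

    # The validated configuration is turned into source and route objects.
    for source in app.sources:
        print(source.name)
    for route in app.routes:
        print(route.uri_pattern)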
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/generate_messages.cmd Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,11 @@
+@echo off
+setlocal
+
+set CUR_DIR=%~dp0
+set CHEF=%CUR_DIR%..\bin\chef
+set OUT_DIR=%CUR_DIR%..\piecrust\res\messages
+set ROOT_DIR=%CUR_DIR%messages
+
+%CHEF% --root=%ROOT_DIR% bake -o %OUT_DIR%
+del %OUT_DIR%\index.html
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/generate_messages.sh Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,9 @@
+#!/bin/sh
+
+CUR_DIR="$( cd "$( dirname "$0" )" && pwd )"
+CHEF=${CUR_DIR}/../bin/chef
+OUT_DIR=${CUR_DIR}/../piecrust/res/messages
+ROOT_DIR=${CUR_DIR}/messages
+
+$CHEF --root=$ROOT_DIR bake -o $OUT_DIR
+rm ${OUT_DIR}/index.html
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/config.yml Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,2 @@
+site:
+    title: PieCrust System Messages
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/pages/_index.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,12 @@
+---
+title: PieCrust System Messages
+---
+
+Here are the **PieCrust** system message pages:
+
+* [Requirements Not Met]({{ pcurl('requirements') }})
+* [Error]({{ pcurl('error') }})
+* [Not Found]({{ pcurl('error404') }})
+* [Critical Error]({{ pcurl('critical') }})
+
+This very page you're reading, however, is only here for convenience.
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/pages/critical.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,9 @@
+---
+title: The Whole Kitchen Burned Down!
+layout: error
+---
+Something critically bad happened, and **PieCrust** needs to shut down. It's probably our fault.
+
+{% raw %}
+<div id="error-details">{{ details }}</div>
+{% endraw %}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/pages/error.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,9 @@
+---
+title: The Cake Just Burned!
+layout: error
+---
+
+{% raw %}
+<div id="error-details">{{ details }}</div>
+{% endraw %}
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/pages/error404.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,5 @@
+---
+title: Can't find the sugar!
+---
+
+It looks like the page you were trying to access does not exist around here. Try going somewhere else.
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/pages/requirements.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,9 @@
+---
+title: You're Missing A Few Things
+---
+
+It looks like you're not meeting some of the requirements for running **PieCrust**:
+
+* PHP 5.3+
+
+For more information, be sure to check out the [documentation](http://bolt80.com/piecrust/doc).
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/templates/default.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,67 @@
+<!doctype html>
+<html>
+<head>
+    <title>{{ page.title }}</title>
+    <meta name="generator" content="PieCrust" />
+    <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster">
+    <style>
+        body {
+            margin: 0;
+            padding: 1em;
+            background: #eee;
+            color: #000;
+            font-family: Georgia, serif;
+        }
+        h1 {
+            font-size: 4.5em;
+            font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif;
+            text-align: center;
+            font-weight: bold;
+            margin-top: 0;
+            color: #333;
+            text-shadow: 0px 2px 5px rgba(0,0,0,0.3);
+        }
+        h2 {
+            font-size: 2.5em;
+            font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif;
+        }
+        code {
+            background: #ddd;
+            padding: 0 0.2em;
+        }
+        #preamble {
+            font-size: 1.2em;
+            font-style: italic;
+            text-align: center;
+            margin-bottom: 0;
+        }
+        #container {
+            margin: 0 20%;
+        }
+        #content {
+            margin: 2em 1em;
+        }
+        .note {
+            margin: 3em;
+            color: #888;
+            font-style: italic;
+        }
+    </style>
+</head>
+<body>
+    <div id="container">
+        <div id="header">
+            <p id="preamble">A Message From The Kitchen:</p>
+            <h1>{{ page.title }}</h1>
+        </div>
+        <hr />
+        <div id="content">
+        {% block content %}
+        {{ content|raw }}
+        {% endblock %}
+        </div>
+        <hr />
+        {% block footer %}{% endblock %}
+    </div>
+</body>
+</html>
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/build/messages/_content/templates/error.html Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,7 @@
+{% extends "default.html" %}
+
+{% block footer %}
+{% pcformat textile %}
+p(note). You're seeing this because something wrong happened. To see detailed errors with callstacks, run chef with the @--debug@ parameter, append @?!debug@ to the URL, or initialize the @PieCrust@ object with @{'debug'=>true}@. On the other hand, to see your custom error pages, set the @site/display_errors@ setting to @false@.
+{% endpcformat %}
+{% endblock %}
--- a/chef.py Wed Dec 25 22:16:46 2013 -0800 +++ b/chef.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,113 +1,4 @@ -import sys -import os.path -import logging -import argparse -from piecrust.app import PieCrust, PieCrustConfiguration, APP_VERSION -from piecrust.environment import StandardEnvironment -from piecrust.pathutil import SiteNotFoundError, find_app_root -from piecrust.plugins.base import PluginLoader - - -logger = logging.getLogger(__name__) -logging.basicConfig(level=logging.INFO, - format="%(message)s") - - -class NullPieCrust: - def __init__(self): - self.root = None - self.cache = False - self.debug = False - self.templates_dirs = [] - self.pages_dir = [] - self.posts_dir = [] - self.plugins_dirs = [] - self.theme_dir = None - self.cache_dir = None - self.config = PieCrustConfiguration() - self.plugin_loader = PluginLoader(self) - self.env = StandardEnvironment() - self.env.initialize(self) - - -def main(): - root = None - cache = True - debug = False - config_variant = None - i = 0 - while i < len(sys.argv): - arg = sys.argv[i] - if arg.startswith('--root='): - root = os.path.expanduser(arg[len('--root='):]) - elif arg == '--root': - root = sys.argv[i + 1] - ++i - elif arg.startswith('--config='): - config_variant = arg[len('--config='):] - elif arg == '--config': - config_variant = sys.argv[i + 1] - ++i - elif arg == '--no-cache': - cache = False - elif arg == '--debug': - debug = True - - if arg[0] != '-': - break - - if debug: - logger.setLevel(logging.DEBUG) - - if root is None: - root = find_app_root() - - if not root: - app = NullPieCrust() - else: - app = PieCrust(root, cache=cache) - - # Handle a configuration variant. - if config_variant is not None: - if not root: - raise SiteNotFoundError() - app.config.applyVariant('variants/' + config_variant) - - # Setup the arg parser. - parser = argparse.ArgumentParser( - description="The PieCrust chef manages your website.") - parser.add_argument('--version', action='version', version=('%(prog)s ' + APP_VERSION)) - parser.add_argument('--root', help="The root directory of the website.") - parser.add_argument('--config', help="The configuration variant to use for this command.") - parser.add_argument('--debug', help="Show debug information.", action='store_true') - parser.add_argument('--no-cache', help="When applicable, disable caching.", action='store_true') - parser.add_argument('--quiet', help="Print only important information.", action='store_true') - parser.add_argument('--log', help="Send log messages to the specified file.") - - commands = sorted(app.plugin_loader.getCommands(), - lambda a, b: cmp(a.name, b.name)) - subparsers = parser.add_subparsers() - for c in commands: - p = subparsers.add_parser(c.name, help=c.description) - c.setupParser(p) - p.set_defaults(func=c._runFromChef) - - # Parse the command line. - result = parser.parse_args() - - # Setup the logger. - if result.debug and result.quiet: - raise Exception("You can't specify both --debug and --quiet.") - if result.debug: - logger.setLevel(logging.DEBUG) - elif result.quiet: - logger.setLevel(logging.WARNING) - if result.log: - from logging.handlers import FileHandler - logger.addHandler(FileHandler(result.log)) - - # Run the command! - result.func(app, result) +from piecrust.main import main if __name__ == '__main__':
--- a/piecrust/__init__.py Wed Dec 25 22:16:46 2013 -0800
+++ b/piecrust/__init__.py Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,21 @@
+
+APP_VERSION = '2.0.0alpha'
+
+CACHE_DIR = '_cache'
+CONTENT_DIR = '_content'
+TEMPLATES_DIR = '_content/templates'
+PLUGINS_DIR = '_content/plugins'
+THEME_DIR = '_content/theme'
+
+CONFIG_PATH = '_content/config.yml'
+THEME_CONFIG_PATH = '_content/theme_config.yml'
+
+DEFAULT_FORMAT = 'markdown'
+DEFAULT_TEMPLATE_ENGINE = 'jinja2'
+DEFAULT_POSTS_FS = 'flat'
+DEFAULT_DATE_FORMAT = '%b %d, %Y'
+DEFAULT_PLUGIN_SOURCE = 'http://bitbucket.org/ludovicchabant/'
+DEFAULT_THEME_SOURCE = 'http://bitbucket.org/ludovicchabant/'
+
+PIECRUST_URL = 'http://bolt80.com/piecrust/'
+
--- a/piecrust/app.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/app.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,3 +1,4 @@ +import re import json import os.path import types @@ -5,30 +6,28 @@ import hashlib import logging import yaml -from cache import SimpleCache -from decorators import lazy_property -from plugins.base import PluginLoader -from environment import StandardEnvironment -from configuration import Configuration, merge_dicts - - -APP_VERSION = '2.0.0alpha' -CACHE_VERSION = '2.0' - -CACHE_DIR = '_cache' -TEMPLATES_DIR = '_content/templates' -PAGES_DIR = '_content/pages' -POSTS_DIR = '_content/posts' -PLUGINS_DIR = '_content/plugins' -THEME_DIR = '_content/theme' - -CONFIG_PATH = '_content/config.yml' -THEME_CONFIG_PATH = '_content/theme_config.yml' +from werkzeug.utils import cached_property +from piecrust import (APP_VERSION, + CACHE_DIR, TEMPLATES_DIR, + PLUGINS_DIR, THEME_DIR, + CONFIG_PATH, THEME_CONFIG_PATH, + DEFAULT_FORMAT, DEFAULT_TEMPLATE_ENGINE, DEFAULT_POSTS_FS, + DEFAULT_DATE_FORMAT, DEFAULT_PLUGIN_SOURCE, DEFAULT_THEME_SOURCE) +from piecrust.cache import ExtensibleCache, NullCache, NullExtensibleCache +from piecrust.plugins.base import PluginLoader +from piecrust.environment import StandardEnvironment +from piecrust.configuration import Configuration, ConfigurationError, merge_dicts +from piecrust.routing import Route +from piecrust.sources.base import REALM_USER, REALM_THEME +from piecrust.taxonomies import Taxonomy logger = logging.getLogger(__name__) +CACHE_VERSION = 10 + + class VariantNotFoundError(Exception): def __init__(self, variant_path, message=None): super(VariantNotFoundError, self).__init__( @@ -36,10 +35,10 @@ class PieCrustConfiguration(Configuration): - def __init__(self, paths=None, cache_dir=False): - super(PieCrustConfiguration, self).__init__() + def __init__(self, paths=None, cache=None, values=None, validate=True): + super(PieCrustConfiguration, self).__init__(values, validate) self.paths = paths - self.cache_dir = cache_dir + self.cache = cache or NullCache() self.fixups = [] def applyVariant(self, variant_path, raise_if_not_found=True): @@ -58,29 +57,29 @@ if self.paths is None: self._values = self._validateAll({}) return - - path_times = filter(self.paths, - lambda p: os.path.getmtime(p)) - cache_key = hashlib.md5("version=%s&cache=%s" % ( - APP_VERSION, CACHE_VERSION)) - - cache = None - if self.cache_dir: - cache = SimpleCache(self.cache_dir) + + path_times = map(lambda p: os.path.getmtime(p), self.paths) + cache_key = hashlib.md5("version=%s&cache=%d" % ( + APP_VERSION, CACHE_VERSION)).hexdigest() - if cache is not None: - if cache.isValid('config.json', path_times): - config_text = cache.read('config.json') - self._values = json.loads(config_text) - - actual_cache_key = self._values.get('__cache_key') - if actual_cache_key == cache_key: - return + if self.cache.isValid('config.json', path_times): + logger.debug("Loading configuration from cache...") + config_text = self.cache.read('config.json') + self._values = json.loads(config_text) + + actual_cache_key = self._values.get('__cache_key') + if actual_cache_key == cache_key: + return + logger.debug("Outdated cache key '%s' (expected '%s')." 
% ( + actual_cache_key, cache_key)) values = {} + logger.debug("Loading configuration from: %s" % self.paths) for i, p in enumerate(self.paths): with codecs.open(p, 'r', 'utf-8') as fp: loaded_values = yaml.load(fp.read()) + if loaded_values is None: + loaded_values = {} for fixup in self.fixups: fixup(i, loaded_values) merge_dicts(values, loaded_values) @@ -90,51 +89,307 @@ self._values = self._validateAll(values) - if cache is not None: - self._values['__cache_key'] = cache_key - config_text = json.dumps(self._values) - cache.write('config.json', config_text) + logger.debug("Caching configuration...") + self._values['__cache_key'] = cache_key + config_text = json.dumps(self._values) + self.cache.write('config.json', config_text) + + def _validateAll(self, values): + # Put all the defaults in the `site` section. + default_sitec = { + 'title': "Untitled PieCrust website", + 'root': '/', + 'default_format': DEFAULT_FORMAT, + 'default_template_engine': DEFAULT_TEMPLATE_ENGINE, + 'enable_gzip': True, + 'pretty_urls': False, + 'slugify': 'transliterate|lowercase', + 'timezone': False, + 'locale': False, + 'date_format': DEFAULT_DATE_FORMAT, + 'auto_formats': { + 'html': '', + 'md': 'markdown', + 'textile': 'textile'}, + 'default_auto_format': 'md', + 'pagination_suffix': '/%num%', + 'plugins_sources': [DEFAULT_PLUGIN_SOURCE], + 'themes_sources': [DEFAULT_THEME_SOURCE], + 'cache_time': 28800, + 'display_errors': True, + 'enable_debug_info': True + } + sitec = values.get('site') + if sitec is None: + sitec = {} + for key, val in default_sitec.iteritems(): + sitec.setdefault(key, val) + values['site'] = sitec + + # Add a section for our cached information. + cachec = {} + values['__cache'] = cachec + + # Cache auto-format regexes. + if not isinstance(sitec['auto_formats'], dict): + raise ConfigurationError("The 'site/auto_formats' setting must be a dictionary.") + cachec['auto_formats_re'] = r"\.(%s)$" % ( + '|'.join( + map(lambda i: re.escape(i), sitec['auto_formats'].keys()))) + if sitec['default_auto_format'] not in sitec['auto_formats']: + raise ConfigurationError("Default auto-format '%s' is not declared." % sitec['default_auto_format']) + + # Cache pagination suffix regex. + pgn_suffix = re.escape(sitec['pagination_suffix']) + pgn_suffix = pgn_suffix.replace("\\%num\\%", "(?P<num>\\d+)") + '$' + cachec['pagination_suffix_re'] = pgn_suffix + + # Make sure plugins and theme sources are lists. + if not isinstance(sitec['plugins_sources'], list): + sitec['plugins_sources'] = [sitec['plugins_sources']] + if not isinstance(sitec['themes_sources'], list): + sitec['themes_sources'] = [sitec['themes_sources']] + + # Setup values for posts/items. + ipp = sitec.get('posts_per_page') + if ipp is not None: + sitec.setdefault('items_per_page', ipp) + pf = sitec.get('posts_filters') + if pf is not None: + sitec.setdefault('items_filters', pf) + + # Figure out if we need to validate sources/routes, or auto-generate + # them from simple blog settings. 
+ if 'sources' not in sitec: + posts_fs = sitec.setdefault('posts_fs', DEFAULT_POSTS_FS) + blogsc = sitec.setdefault('blogs', ['posts']) + + g_post_url = sitec.get('post_url', '%year%/%month%/%slug%') + g_tag_url = sitec.get('tag_url', 'tag/%tag%') + g_category_url = sitec.get('category_url', '%category%') + g_posts_per_page = sitec.get('items_per_page', 5) + g_posts_filters = sitec.get('items_filters') + g_date_format = sitec.get('date_format', DEFAULT_DATE_FORMAT) + + sourcesc = {} + sourcesc['pages'] = { + 'type': 'default', + 'data_endpoint': 'site/pages', + 'item_name': 'page'} + sitec['sources'] = sourcesc + + routesc = [] + sitec['routes'] = routesc + + taxonomiesc = {} + taxonomiesc['tags'] = { + 'multiple': True, + 'term': 'tag'} + taxonomiesc['categories'] = { + 'term': 'category'} + sitec['taxonomies'] = taxonomiesc + + for blog_name in blogsc: + blogc = values.get(blog_name, {}) + url_prefix = blog_name + '/' + endpoint = 'posts/%s' % blog_name + item_name = '%s-post' % blog_name + items_per_page = blogc.get('posts_per_page', g_posts_per_page) + items_filters = blogc.get('posts_filters', g_posts_filters) + date_format = blogc.get('date_format', g_date_format) + if len(blogsc) == 1: + url_prefix = '' + endpoint = 'posts' + item_name = 'post' + sourcesc[blog_name] = { + 'type': 'posts/%s' % posts_fs, + 'fs_endpoint': endpoint, + 'data_type': 'blog', + 'item_name': item_name, + 'items_per_page': items_per_page, + 'items_filters': items_filters, + 'date_format': date_format, + 'default_layout': 'post'} + tax_page_prefix = '' + if len(blogsc) > 1: + tax_page_prefix = blog_name + '/' + sourcesc[blog_name]['taxonomy_pages'] = { + 'tags': ('pages:%s_tag.%%ext%%;' + 'theme_pages:_tag.%%ext%%' % + tax_page_prefix), + 'categories': ('pages:%s_category.%%ext%%;' + 'theme_pages:_category.%%ext%%' % + tax_page_prefix)} + + post_url = blogc.get('post_url', url_prefix + g_post_url) + post_url = '/' + post_url.lstrip('/') + tag_url = blogc.get('tag_url', url_prefix + g_tag_url) + tag_url = '/' + tag_url.lstrip('/') + category_url = blogc.get('category_url', url_prefix + g_category_url) + category_url = '/' + category_url.lstrip('/') + routesc.append({'url': post_url, 'source': blog_name, + 'func': 'pcposturl(year,month,day,slug)'}) + routesc.append({'url': tag_url, 'source': blog_name, + 'taxonomy': 'tags', + 'func': 'pctagurl(tag)'}) + routesc.append({'url': category_url, 'source': blog_name, + 'taxonomy': 'categories', + 'func': 'pccaturl(category)'}) + + routesc.append({'url': '/%path:path%', 'source': 'pages', + 'func': 'pcurl(path)'}) + + # Validate sources/routes. + sourcesc = sitec.get('sources') + routesc = sitec.get('routes') + if not sourcesc: + raise ConfigurationError("There are no sources defined.") + if not routesc: + raise ConfigurationError("There are no routes defined.") + if not isinstance(sourcesc, dict): + raise ConfigurationError("The 'site/sources' setting must be a dictionary.") + if not isinstance(routesc, list): + raise ConfigurationError("The 'site/routes' setting must be a list.") + + # Add the theme page source if no sources were defined in the theme + # configuration itself. 
+ has_any_theme_source = False + for sn, sc in sourcesc.iteritems(): + if sc.get('realm') == REALM_THEME: + has_any_theme_source = True + break + if not has_any_theme_source: + sitec['sources']['theme_pages'] = { + 'theme_source': True, + 'fs_endpoint': 'pages', + 'data_endpoint': 'site/pages', + 'item_name': 'page', + 'realm': REALM_THEME} + sitec['routes'].append({ + 'url': '/%path:path%', + 'source': 'theme_pages', + 'func': 'pcurl(path)'}) + + # Sources have the `default` scanner by default, duh. Also, a bunch + # of other default values for other configuration stuff. + for sn, sc in sourcesc.iteritems(): + if not isinstance(sc, dict): + raise ConfigurationError("All sources in 'site/sources' must be dictionaries.") + sc.setdefault('type', 'default') + sc.setdefault('fs_endpoint', sn) + sc.setdefault('data_endpoint', sn) + sc.setdefault('data_type', 'iterator') + sc.setdefault('item_name', sn) + sc.setdefault('items_per_page', 5) + sc.setdefault('date_format', DEFAULT_DATE_FORMAT) + sc.setdefault('realm', REALM_USER) + + # Check routes are referencing correct routes, have default + # values, etc. + for rc in routesc: + if not isinstance(rc, dict): + raise ConfigurationError("All routes in 'site/routes' must be dictionaries.") + rc_url = rc.get('url') + if not rc_url: + raise ConfigurationError("All routes in 'site/routes' must have an 'url'.") + if rc_url[0] != '/': + raise ConfigurationError("Route URLs must start with '/'.") + if rc.get('source') is None: + raise ConfigurationError("Routes must specify a source.") + if rc['source'] not in sourcesc.keys(): + raise ConfigurationError("Route is referencing unknown source: %s" % + rc['source']) + rc.setdefault('taxonomy', None) + rc.setdefault('page_suffix', '/%num%') + + # Validate taxonomies. + sitec.setdefault('taxonomies', {}) + taxonomiesc = sitec.get('taxonomies') + for tn, tc in taxonomiesc.iteritems(): + tc.setdefault('multiple', False) + tc.setdefault('term', tn) + tc.setdefault('page', '_%s.%%ext%%' % tc['term']) + + # Validate endpoints, and make sure the theme has a default source. + reserved_endpoints = set(['piecrust', 'site', 'page', 'route', + 'assets', 'pagination', 'siblings', + 'family']) + for name, src in sitec['sources'].iteritems(): + endpoint = src['data_endpoint'] + if endpoint in reserved_endpoints: + raise ConfigurationError( + "Source '%s' is using a reserved endpoint name: %s" % + (name, endpoint)) + + + # Done validating! 
+ return values class PieCrust(object): - def __init__(self, root, cache=True, debug=False, env=None): - self.root = root + def __init__(self, root_dir, cache=True, debug=False, theme_site=False, + env=None): + self.root_dir = root_dir self.debug = debug - self.cache = cache + self.theme_site = theme_site self.plugin_loader = PluginLoader(self) + + if cache: + self.cache = ExtensibleCache(self.cache_dir) + else: + self.cache = NullExtensibleCache() + self.env = env if self.env is None: self.env = StandardEnvironment() self.env.initialize(self) - @lazy_property + @cached_property def config(self): - logger.debug("Loading site configuration...") + logger.debug("Creating site configuration...") paths = [] if self.theme_dir: paths.append(os.path.join(self.theme_dir, THEME_CONFIG_PATH)) - paths.append(os.path.join(self.root, CONFIG_PATH)) + paths.append(os.path.join(self.root_dir, CONFIG_PATH)) - config = PieCrustConfiguration(paths, self.cache_dir) + config_cache = self.cache.getCache('app') + config = PieCrustConfiguration(paths, config_cache) if self.theme_dir: # We'll need to patch the templates directories to be relative # to the site's root, and not the theme root. def _fixupThemeTemplatesDir(index, config): - if index == 0: - sitec = config.get('site') - if sitec: - tplc = sitec.get('templates_dirs') - if tplc: - if isinstance(tplc, types.StringTypes): - tplc = [tplc] - sitec['templates_dirs'] = filter(tplc, - lambda p: os.path.join(self.theme_dir, p)) + if index != 0: + return + sitec = config.get('site') + if sitec is None: + return + tplc = sitec.get('templates_dirs') + if tplc is None: + return + if isinstance(tplc, types.StringTypes): + tplc = [tplc] + sitec['templates_dirs'] = filter(tplc, + lambda p: os.path.join(self.theme_dir, p)) + config.fixups.append(_fixupThemeTemplatesDir) - config.fixups.append(_fixupThemeTemplatesDir) + # We'll also need to flag all page sources as coming from + # the theme. 
+ def _fixupThemeSources(index, config): + if index != 0: + return + sitec = config.get('site') + if sitec is None: + sitec = {} + config['site'] = sitec + srcc = sitec.get('sources') + if srcc is not None: + for sn, sc in srcc.iteritems(): + sc['realm'] = REALM_THEME + config.fixups.append(_fixupThemeSources) return config - @lazy_property + @cached_property def templates_dirs(self): templates_dirs = self._get_configurable_dirs(TEMPLATES_DIR, 'site/templates_dirs') @@ -147,34 +402,88 @@ return templates_dirs - @lazy_property - def pages_dir(self): - return self._get_dir(PAGES_DIR) - - @lazy_property - def posts_dir(self): - return self._get_dir(POSTS_DIR) - - @lazy_property + @cached_property def plugins_dirs(self): return self._get_configurable_dirs(PLUGINS_DIR, 'site/plugins_dirs') - @lazy_property + @cached_property def theme_dir(self): - return self._get_dir(THEME_DIR) + td = self._get_dir(THEME_DIR) + if td is not None: + return td + return os.path.join(os.path.dirname(__file__), 'resources', 'theme') + + @cached_property + def cache_dir(self): + return os.path.join(self.root_dir, CACHE_DIR) + + @cached_property + def sources(self): + defs = {} + for cls in self.plugin_loader.getSources(): + defs[cls.SOURCE_NAME] = cls + + sources = [] + for n, s in self.config.get('site/sources').iteritems(): + cls = defs.get(s['type']) + if cls is None: + raise ConfigurationError("No such page source type: %s" % s['type']) + src = cls(self, n, s) + sources.append(src) + return sources + + @cached_property + def routes(self): + routes = [] + for r in self.config.get('site/routes'): + rte = Route(self, r) + routes.append(rte) + return routes - @lazy_property - def cache_dir(self): - if self.cache: - return os.path.join(self.root, CACHE_DIR) - return False + @cached_property + def taxonomies(self): + taxonomies = [] + for tn, tc in self.config.get('site/taxonomies').iteritems(): + tax = Taxonomy(self, tn, tc) + taxonomies.append(tax) + return taxonomies + + def getSource(self, source_name): + for source in self.sources: + if source.name == source_name: + return source + return None + + def getRoutes(self, source_name, skip_taxonomies=False): + for route in self.routes: + if route.source_name == source_name: + if not skip_taxonomies or route.taxonomy is None: + yield route + + def getRoute(self, source_name, source_metadata): + for route in self.getRoutes(source_name, True): + if route.isMatch(source_metadata): + return route + return None + + def getTaxonomyRoute(self, tax_name, source_name): + for route in self.routes: + if route.taxonomy == tax_name and route.source_name == source_name: + return route + return None + + def getTaxonomy(self, tax_name): + for tax in self.taxonomies: + if tax.name == tax_name: + return tax + return None def _get_dir(self, default_rel_dir): - abs_dir = os.path.join(self.root, default_rel_dir) + abs_dir = os.path.join(self.root_dir, default_rel_dir) if os.path.isdir(abs_dir): return abs_dir - return False + return None def _get_configurable_dirs(self, default_rel_dir, conf_name): dirs = [] @@ -182,11 +491,14 @@ # Add custom directories from the configuration. conf_dirs = self.config.get(conf_name) if conf_dirs is not None: - dirs += filter(conf_dirs, - lambda p: os.path.join(self.root, p)) + if isinstance(conf_dirs, types.StringTypes): + dirs.append(os.path.join(self.root_dir, conf_dirs)) + else: + dirs += filter(lambda p: os.path.join(self.root_dir, p), + conf_dirs) # Add the default directory if it exists. 
- default_dir = os.path.join(self.root, default_rel_dir) + default_dir = os.path.join(self.root_dir, default_rel_dir) if os.path.isdir(default_dir): dirs.append(default_dir)
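A hedged sketch of the lookup helpers added to the `PieCrust` object above; the method names (`getSource`, `getRoute`, `getTaxonomy`, `getTaxonomyRoute`) and the default post route's metadata keys come from this diff, while the site path and metadata values are invented for illustration.

    from piecrust.app import PieCrust

    app = PieCrust('/path/to/mysite')   # hypothetical site root

    # Look up a source and the route that serves it, then build a URI
    # from some (made up) source metadata.
    posts_source = app.getSource('posts')
    metadata = {'year': 2014, 'month': 8, 'slug': 'hello-world'}
    route = app.getRoute('posts', metadata)
    if route is not None:
        print(route.getUri(metadata))

    # Taxonomy lookups follow the same pattern.
    tags = app.getTaxonomy('tags')
    tags_route = app.getTaxonomyRoute('tags', 'posts')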
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/baking/baker.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,483 @@ +import time +import os.path +import codecs +import urllib2 +import hashlib +import logging +import threading +from Queue import Queue, Empty +from piecrust.baking.records import TransitionalBakeRecord, BakeRecordPageEntry +from piecrust.chefutil import format_timed +from piecrust.data.filters import (PaginationFilter, HasFilterClause, + IsFilterClause, AndBooleanClause) +from piecrust.processing.base import ProcessorPipeline +from piecrust.rendering import PageRenderingContext, render_page +from piecrust.sources.base import (PageFactory, + REALM_NAMES, REALM_USER, REALM_THEME) + + +logger = logging.getLogger(__name__) + + +class PageBaker(object): + def __init__(self, app, out_dir, force=False, record=None, + copy_assets=False): + self.app = app + self.out_dir = out_dir + self.force = force + self.record = record + self.force = force + self.copy_assets = copy_assets + self.pretty_urls = app.config.get('site/pretty_urls') + self.pagination_suffix = app.config.get('site/pagination_suffix') + + def getOutputUri(self, uri, num): + suffix = self.pagination_suffix.replace('%num%', str(num)) + if self.pretty_urls: + # Output will be: + # - `uri/name` + # - `uri/name/2` + # - `uri/name.ext` + # - `uri/name.ext/2` + if num <= 1: + return uri + return uri + suffix + else: + # Output will be: + # - `uri/name.html` + # - `uri/name/2.html` + # - `uri/name.ext` + # - `uri/name/2.ext` + if uri == '/': + if num <= 1: + return '/' + return '/' + suffix.lstrip('/') + else: + if num <= 1: + return uri + #TODO: watch out for tags with dots in them. + base_uri, ext = os.path.splitext(uri) + return base_uri + suffix + ext + + def getOutputPath(self, uri): + bake_path = [self.out_dir] + decoded_uri = urllib2.unquote(uri.lstrip('/')).decode('utf8') + if self.pretty_urls: + bake_path.append(decoded_uri) + bake_path.append('index.html') + else: + name, ext = os.path.splitext(decoded_uri) + if ext: + bake_path.append(decoded_uri) + else: + bake_path.append(decoded_uri + '.html') + + return os.path.join(*bake_path) + + def bake(self, factory, route, taxonomy_name=None, taxonomy_term=None): + page = factory.buildPage() + + pagination_filter = None + custom_data = None + if taxonomy_name and taxonomy_term: + # Must bake a taxonomy listing page... we'll have to add a + # pagination filter for only get matching posts, and the output + # URL will be a bit different. + tax = self.app.getTaxonomy(taxonomy_name) + pagination_filter = PaginationFilter() + if tax.is_multiple: + if isinstance(taxonomy_term, tuple): + abc = AndBooleanClause() + for t in taxonomy_term: + abc.addClause(HasFilterClause(taxonomy_name, t)) + pagination_filter.addClause(abc) + slugified_term = '/'.join(taxonomy_term) + else: + pagination_filter.addClause(HasFilterClause(taxonomy_name, + taxonomy_term)) + slugified_term = taxonomy_term + else: + pagination_filter.addClause(IsFilterClause(taxonomy_name, + taxonomy_term)) + slugified_term = taxonomy_term + custom_data = {tax.term_name: taxonomy_term} + uri = route.getUri({tax.term_name: slugified_term}) + else: + # Normal page bake. + uri = route.getUri(factory.metadata) + + cur_sub = 1 + has_more_subs = True + cur_record_entry = BakeRecordPageEntry(page) + cur_record_entry.taxonomy_name = taxonomy_name + cur_record_entry.taxonomy_term = taxonomy_term + prev_record_entry = self.record.getPreviousEntry(page, taxonomy_name, + taxonomy_term) + + logger.debug("Baking '%s'..." 
% uri) + while has_more_subs: + sub_uri = self.getOutputUri(uri, cur_sub) + out_path = self.getOutputPath(sub_uri) + + # Check for up-to-date outputs. + do_bake = True + if not self.force and prev_record_entry: + try: + in_path_time = os.path.getmtime(page.path) + out_path_time = os.path.getmtime(out_path) + if out_path_time > in_path_time: + do_bake = False + except OSError: + # File doesn't exist, we'll need to bake. + pass + + # If this page didn't bake because it's already up-to-date. + # Keep trying for as many subs as we know this page has. + if not do_bake: + if (prev_record_entry is not None and + prev_record_entry.num_subs < cur_sub): + logger.debug("") + cur_sub += 1 + has_more_subs = True + logger.debug(" %s is up to date, skipping to next " + "sub-page." % out_path) + continue + + # We don't know how many subs to expect... just skip. + logger.debug(" %s is up to date, skipping bake." % out_path) + break + + # All good, proceed. + try: + logger.debug(" p%d -> %s" % (cur_sub, out_path)) + ctx, rp = self._bakeSingle(page, sub_uri, cur_sub, out_path, + pagination_filter, custom_data) + except Exception as ex: + logger.exception("Error baking page '%s' for URI '%s': %s" % + (page.ref_spec, uri, ex)) + raise + + cur_record_entry.out_uris.append(sub_uri) + cur_record_entry.out_paths.append(out_path) + cur_record_entry.used_source_names |= ctx.used_source_names + cur_record_entry.used_taxonomy_terms |= ctx.used_taxonomy_terms + + has_more_subs = False + if ctx.used_pagination is not None: + cur_record_entry.used_source_names.add( + ctx.used_pagination._source.name) + if ctx.used_pagination.has_more: + cur_sub += 1 + has_more_subs = True + + if self.record: + self.record.addEntry(cur_record_entry) + + return cur_record_entry + + def _bakeSingle(self, page, sub_uri, num, out_path, + pagination_filter=None, custom_data=None): + ctx = PageRenderingContext(page, sub_uri) + ctx.page_num = num + if pagination_filter: + ctx.pagination_filter = pagination_filter + if custom_data: + ctx.custom_data = custom_data + + rp = render_page(ctx) + + out_dir = os.path.dirname(out_path) + if not os.path.isdir(out_dir): + os.makedirs(out_dir, 0755) + + with codecs.open(out_path, 'w', 'utf-8') as fp: + fp.write(rp.content.decode('utf-8')) + + return ctx, rp + + +class Baker(object): + def __init__(self, app, out_dir=None, force=False, portable=False, + no_assets=False): + self.app = app + self.out_dir = out_dir or os.path.join(app.root_dir, '_counter') + self.force = force + self.portable = portable + self.no_assets = no_assets + self.num_workers = app.config.get('baker/workers') or 4 + + # Remember what taxonomy pages we should skip + # (we'll bake them repeatedly later with each taxonomy term) + self.taxonomy_pages = [] + logger.debug("Gathering taxonomy page paths:") + for tax in self.app.taxonomies: + for src in self.app.sources: + path = tax.resolvePagePath(src.name) + if path is not None: + self.taxonomy_pages.append(path) + logger.debug(" - %s" % path) + + def bake(self): + logger.debug(" Bake Output: %s" % self.out_dir) + logger.debug(" Root URL: %s" % self.app.config.get('site/root')) + + # Get into bake mode. + start_time = time.clock() + self.app.config.set('baker/is_baking', True) + self.app.env.base_asset_url_format = '%site_root%%uri%' + + # Make sure the output directory exists. + if not os.path.isdir(self.out_dir): + os.makedirs(self.out_dir, 0755) + + # Load/create the bake record. 
+ record = TransitionalBakeRecord() + record_cache = self.app.cache.getCache('bake_r') + record_name = hashlib.md5(self.out_dir).hexdigest() + '.record' + if not self.force and record_cache.has(record_name): + t = time.clock() + record.loadPrevious(record_cache.getCachePath(record_name)) + logger.debug(format_timed(t, 'loaded previous bake record', + colored=False)); + + # Gather all sources by realm -- we're going to bake each realm + # separately so we can handle "overlaying" (i.e. one realm overrides + # another realm's pages). + sources_by_realm = {} + for source in self.app.sources: + srclist = sources_by_realm.setdefault(source.realm, []) + srclist.append(source) + + # Bake the realms. + realm_list = [REALM_USER, REALM_THEME] + for realm in realm_list: + srclist = sources_by_realm.get(realm) + if srclist is not None: + self._bakeRealm(record, realm, srclist) + + # Bake taxonomies. + self._bakeTaxonomies(record) + + # Bake the assets. + if not self.no_assets: + self._bakeAssets(record) + + # Save the bake record. + t = time.clock() + record.collapseRecords() + record.saveCurrent(record_cache.getCachePath(record_name)) + logger.debug(format_timed(t, 'saved bake record', colored=False)) + + # All done. + self.app.config.set('baker/is_baking', False) + logger.info('-------------------------'); + logger.info(format_timed(start_time, 'done baking')); + + def _bakeRealm(self, record, realm, srclist): + # Gather all page factories from the sources and queue them + # for the workers to pick up. Just skip taxonomy pages for now. + logger.debug("Baking realm %s" % REALM_NAMES[realm]) + pool, queue, abort = self._createWorkerPool(record, self.num_workers) + + for source in srclist: + factories = source.getPageFactories() + for fac in factories: + if fac.path in self.taxonomy_pages: + logger.debug("Skipping taxonomy page: %s:%s" % + (source.name, fac.ref_spec)) + continue + + route = self.app.getRoute(source.name, fac.metadata) + if route is None: + logger.error("Can't get route for page: %s" % fac.ref_spec) + continue + + logger.debug("Queuing: %s" % fac.ref_spec) + queue.put_nowait(BakeWorkerJob(fac, route)) + + self._waitOnWorkerPool(pool, abort) + + def _bakeTaxonomies(self, record): + logger.debug("Baking taxonomies") + + # Let's see all the taxonomy terms for which we must bake a + # listing page... first, pre-populate our big map of used terms. + buckets = {} + tax_names = [t.name for t in self.app.taxonomies] + source_names = [s.name for s in self.app.sources] + for sn in source_names: + source_taxonomies = {} + buckets[sn] = source_taxonomies + for tn in tax_names: + source_taxonomies[tn] = set() + + # Now see which ones are 'dirty' based on our bake record. + logger.debug("Gathering dirty taxonomy terms") + for prev_entry, cur_entry in record.transitions.itervalues(): + for tax in self.app.taxonomies: + changed_terms = None + # Re-bake all taxonomy pages that include new or changed + # pages. 
+ if not prev_entry and cur_entry and cur_entry.was_baked: + changed_terms = cur_entry.config.get(tax.name) + elif prev_entry and cur_entry and cur_entry.was_baked: + changed_terms = [] + prev_terms = prev_entry.config.get(tax.name) + cur_terms = cur_entry.config.get(tax.name) + if tax.is_multiple: + if prev_terms is not None: + changed_terms += prev_terms + if cur_terms is not None: + changed_terms += cur_terms + else: + if prev_terms is not None: + changed_terms.append(prev_terms) + if cur_terms is not None: + changed_terms.append(cur_terms) + if changed_terms is not None: + if not isinstance(changed_terms, list): + changed_terms = [changed_terms] + buckets[cur_entry.source_name][tax.name] |= ( + set(changed_terms)) + + # Re-bake the combination pages for terms that are 'dirty'. + known_combinations = set() + logger.debug("Gathering dirty term combinations") + for prev_entry, cur_entry in record.transitions.itervalues(): + if cur_entry: + known_combinations |= cur_entry.used_taxonomy_terms + elif prev_entry: + known_combinations |= prev_entry.used_taxonomy_terms + for sn, tn, terms in known_combinations: + changed_terms = buckets[sn][tn] + if not changed_terms.isdisjoint(set(terms)): + changed_terms.add(terms) + + # Start baking those terms. + pool, queue, abort = self._createWorkerPool(record, self.num_workers) + for source_name, source_taxonomies in buckets.iteritems(): + for tax_name, terms in source_taxonomies.iteritems(): + if len(terms) == 0: + continue + + logger.debug("Baking '%s' for source '%s': %s" % + (tax_name, source_name, terms)) + tax = self.app.getTaxonomy(tax_name) + route = self.app.getTaxonomyRoute(tax_name, source_name) + tax_page_ref = tax.getPageRef(source_name) + if not tax_page_ref.exists: + logger.debug("No taxonomy page found at '%s', skipping." 
% + tax.page_ref) + continue + + tax_page_source = tax_page_ref.source + tax_page_rel_path = tax_page_ref.rel_path + logger.debug("Using taxonomy page: %s:%s" % + (tax_page_source.name, tax_page_rel_path)) + + for term in terms: + fac = PageFactory(tax_page_source, tax_page_rel_path, + {tax.term_name: term}) + logger.debug("Queuing: %s [%s, %s]" % + (fac.ref_spec, tax_name, term)) + queue.put_nowait( + BakeWorkerJob(fac, route, tax_name, term)) + + self._waitOnWorkerPool(pool, abort) + + def _bakeAssets(self, record): + baker_params = self.app.config.get('baker') or {} + skip_patterns = baker_params.get('skip_patterns') + force_patterns = baker_params.get('force_patterns') + proc = ProcessorPipeline( + self.app, self.out_dir, force=self.force, + skip_patterns=skip_patterns, force_patterns=force_patterns, + num_workers=self.num_workers) + proc.run() + + def _createWorkerPool(self, record, pool_size=4): + pool = [] + queue = Queue() + abort = threading.Event() + for i in range(pool_size): + ctx = BakeWorkerContext(self.app, self.out_dir, self.force, + record, queue, abort) + worker = BakeWorker(i, ctx) + worker.start() + pool.append(worker) + return pool, queue, abort + + def _waitOnWorkerPool(self, pool, abort): + for w in pool: + w.join() + if abort.is_set(): + raise Exception("Worker pool was aborted.") + + +class BakeWorkerContext(object): + def __init__(self, app, out_dir, force, record, work_queue, + abort_event): + self.app = app + self.out_dir = out_dir + self.force = force + self.record = record + self.work_queue = work_queue + self.abort_event = abort_event + + +class BakeWorkerJob(object): + def __init__(self, factory, route, taxonomy_name=None, taxonomy_term=None): + self.factory = factory + self.route = route + self.taxonomy_name = taxonomy_name + self.taxonomy_term = taxonomy_term + + @property + def source(self): + return self.factory.source + + +class BakeWorker(threading.Thread): + def __init__(self, wid, ctx): + super(BakeWorker, self).__init__() + self.wid = wid + self.ctx = ctx + self.num_bakes = 0 + self._page_baker = PageBaker(ctx.app, ctx.out_dir, ctx.force, + ctx.record) + + def run(self): + while(not self.ctx.abort_event.is_set()): + try: + job = self.ctx.work_queue.get(True, 0.1) + except Empty: + logger.debug("[%d] No more work... shutting down." % self.wid) + break + + try: + self._unsafeRun(job) + logger.debug("[%d] Done with page." % self.wid) + self.ctx.work_queue.task_done() + except Exception as ex: + self.ctx.abort_event.set() + logger.error("[%d] Critical error, aborting." % self.wid) + logger.exception(ex) + break + + def _unsafeRun(self, job): + start_time = time.clock() + + bake_res = self._page_baker.bake(job.factory, job.route, + taxonomy_name=job.taxonomy_name, + taxonomy_term=job.taxonomy_term) + + if bake_res.was_baked: + uri = bake_res.out_uris[0] + friendly_uri = uri if uri != '' else '[main page]' + friendly_count = '' + if bake_res.num_subs > 1: + friendly_count = ' (%d pages)' % bake_res.num_subs + logger.info(format_timed(start_time, '[%d] %s%s' % + (self.wid, friendly_uri, friendly_count))) + self.num_bakes += 1 +
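For readers skimming the threading code above: a stripped-down, self-contained sketch of the worker-pool shape the baker uses, with a shared `Queue`, an abort `Event`, and workers that exit once the queue stays empty for a short `get()` timeout. The job objects, bake records and page rendering exist only in the real code above.

    import threading
    from Queue import Queue, Empty   # same Python 2 module the baker imports

    def worker(wid, work_queue, abort_event):
        while not abort_event.is_set():
            try:
                job = work_queue.get(True, 0.1)
            except Empty:
                break   # no more work, shut down
            try:
                print("[%d] baking %s" % (wid, job))
                work_queue.task_done()
            except Exception:
                abort_event.set()   # tell the other workers to stop
                break

    work_queue = Queue()
    abort_event = threading.Event()
    for job in ('page-one', 'page-two', 'page-three'):
        work_queue.put_nowait(job)

    pool = [threading.Thread(target=worker, args=(i, work_queue, abort_event))
            for i in range(4)]
    for w in pool:
        w.start()
    for w in pool:
        w.join()
    if abort_event.is_set():
        raise Exception("Worker pool was aborted.")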
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/baking/records.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,113 @@ +import logging +from piecrust.records import Record + + +logger = logging.getLogger(__name__) + + +def _get_transition_key(source_name, rel_path, taxonomy_name=None, + taxonomy_term=None): + key = '%s:%s' % (source_name, rel_path) + if taxonomy_name and taxonomy_term: + key += ';%s:' % taxonomy_name + if isinstance(taxonomy_term, tuple): + key += '/'.join(taxonomy_term) + else: + key += taxonomy_term + return key + + +class BakeRecord(Record): + VERSION = 1 + + def __init__(self): + super(BakeRecord, self).__init__() + self.out_dir = None + self.bake_time = None + + +class BakeRecordPageEntry(object): + def __init__(self, page): + self.path = page.path + self.rel_path = page.rel_path + self.source_name = page.source.name + self.config = page.config.get() + self.taxonomy_name = None + self.taxonomy_term = None + self.out_uris = [] + self.out_paths = [] + self.used_source_names = set() + self.used_taxonomy_terms = set() + + @property + def was_baked(self): + return len(self.out_paths) > 0 + + @property + def num_subs(self): + return len(self.out_paths) + + @property + def transition_key(self): + return _get_transition_key(self.source_name, self.rel_path, + self.taxonomy_name, self.taxonomy_term) + +class TransitionalBakeRecord(object): + DELETION_MISSING = 1 + DELETION_CHANGED = 2 + + def __init__(self, previous_path=None): + self.previous = BakeRecord() + self.current = BakeRecord() + self.transitions = {} + if previous_path: + self.loadPrevious(previous_path) + self.current.entry_added += self._onCurrentEntryAdded + + def loadPrevious(self, previous_path): + self.previous = BakeRecord.load(previous_path) + for e in self.previous.entries: + self.transitions[e.transition_key] = (e, None) + + def saveCurrent(self, current_path): + self.current.save(current_path) + + def addEntry(self, entry): + self.current.addEntry(entry) + + def getPreviousEntry(self, page, taxonomy_name=None, taxonomy_term=None): + key = _get_transition_key(page.source.name, page.rel_path, + taxonomy_name, taxonomy_term) + pair = self.transitions.get(key) + if pair is not None: + return pair[0] + return None + + def collapseRecords(self): + for pair in self.transitions.itervalues(): + prev = pair[0] + cur = pair[1] + + if prev and cur and not cur.was_baked: + # This page wasn't baked, so the information from last + # time is still valid (we didn't get any information + # since we didn't bake). + cur.out_uris = list(prev.out_uris) + cur.out_paths = list(prev.out_paths) + cur.used_source_names = set(prev.used_source_names) + cur.used_taxonomy_terms = set(prev.used_taxonomy_terms) + + def _onCurrentEntryAdded(self, entry): + key = entry.transition_key + te = self.transitions.get(key) + if te is None: + logger.debug("Adding new record entry: %s" % key) + self.transitions[key] = (None, entry) + return + + if te[1] is not None: + raise Exception("A current entry already exists for: %s" % + key) + logger.debug("Setting current record entry: %s" % key) + self.transitions[key] = (te[0], entry) +
--- a/piecrust/cache.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/cache.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,13 +1,43 @@ import os import os.path import codecs +import logging +import threading + + +logger = logging.getLogger(__name__) + + +class ExtensibleCache(object): + def __init__(self, base_dir): + self.base_dir = base_dir + self.lock = threading.Lock() + self.caches = {} + + @property + def enabled(self): + return True + + def getCache(self, name): + c = self.caches.get(name) + if c is None: + with self.lock: + c = self.caches.get(name) + if c is None: + c_dir = os.path.join(self.base_dir, name) + if not os.path.isdir(c_dir): + os.makedirs(c_dir, 0755) + + c = SimpleCache(c_dir) + self.caches[name] = c + return c class SimpleCache(object): def __init__(self, base_dir): + self.base_dir = base_dir if not os.path.isdir(base_dir): - os.makedirs(base_dir, 0755) - self.base_dir = base_dir + raise Exception("Cache directory doesn't exist: %s" % base_dir) def isValid(self, path, time): cache_time = self.getCacheTime(path) @@ -33,6 +63,7 @@ def read(self, path): cache_path = self.getCachePath(path) + logger.debug("Reading cache: %s" % cache_path) with codecs.open(cache_path, 'r', 'utf-8') as fp: return fp.read() @@ -44,3 +75,40 @@ with codecs.open(cache_path, 'w', 'utf-8') as fp: fp.write(content) + def getCachePath(self, path): + if path.startswith('.'): + path = '__index__' + path + return os.path.join(self.base_dir, path) + + +class NullCache(object): + def isValid(self, path, time): + return False + + def getCacheTime(self, path): + return None + + def has(self, path): + return False + + def read(self, path): + raise Exception("Null cache has no data.") + + def write(self, path, content): + pass + + def getCachePath(self, path): + raise Exception("Null cache can't make paths.") + + +class NullExtensibleCache(object): + def __init__(self): + self.null_cache = NullCache() + + @property + def enabled(self): + return False + + def getCache(self, name): + return self.null_cache +
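A small usage sketch of the cache classes above, assuming a throw-away directory; the methods exercised (`getCache`, `isValid`, `write`, `read`, `getCachePath`) are the ones `app.py` and the baker call elsewhere in this changeset.

    import time
    from piecrust.cache import ExtensibleCache

    cache = ExtensibleCache('/tmp/piecrust-cache')   # hypothetical cache root
    config_cache = cache.getCache('app')             # sub-cache, created on demand

    # isValid compares the cached copy's timestamp against the given times
    # (app.py passes the configuration files' modification times here).
    path_times = [time.time()]
    if not config_cache.isValid('config.json', path_times):
        config_cache.write('config.json', u'{"site": {}}')

    print(config_cache.read('config.json'))
    print(config_cache.getCachePath('config.json'))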
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/piecrust/chefutil.py Sun Aug 10 23:43:16 2014 -0700
@@ -0,0 +1,11 @@
+import time
+from colorama import Fore
+
+
+def format_timed(start_time, message, colored=True):
+    end_time = time.clock()
+    time_str = '%8.1f ms' % ((end_time - start_time) * 1000.0)
+    if colored:
+        return '[%s%s%s] %s' % (Fore.GREEN, time_str, Fore.RESET, message)
+    return '[%s] %s' % (time_str, message)
+
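A trivial, hedged example of how `format_timed` is used elsewhere in this changeset (the baker wraps `time.clock()` start times with it):

    import time
    import logging
    from piecrust.chefutil import format_timed

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    start_time = time.clock()
    # ... do some work here ...
    logging.info(format_timed(start_time, 'done baking'))
    # colored=False skips the ANSI color codes, e.g. when writing to a file.
    logging.debug(format_timed(start_time, 'saved bake record', colored=False))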
--- a/piecrust/commands/base.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/commands/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,4 +1,5 @@ import logging +import functools from piecrust.pathutil import SiteNotFoundError @@ -9,6 +10,7 @@ def __init__(self, app, args): self.app = app self.args = args + self.result = 0 class ChefCommand(object): @@ -17,14 +19,71 @@ self.description = '__unknown__' self.requires_website = True - def setupParser(self, parser): + def setupParser(self, parser, app): raise NotImplementedError() def run(self, ctx): raise NotImplementedError() def _runFromChef(self, app, res): - if app.root is None and self.requires_website: + if app.root_dir is None and self.requires_website: raise SiteNotFoundError() - self.run(CommandContext(app, res)) + ctx = CommandContext(app, res) + self.run(ctx) + return ctx.result + + +class ExtendableChefCommand(ChefCommand): + def __init__(self): + super(ExtendableChefCommand, self).__init__() + self._extensions = None + + def setupParser(self, parser, app): + self._loadExtensions(app) + subparsers = parser.add_subparsers() + for e in self._extensions: + p = subparsers.add_parser(e.name, help=e.description) + e.setupParser(p, app) + p.set_defaults(func=e._runFromChef) + + def _loadExtensions(self, app): + if self._extensions is not None: + return + self._extensions = [] + for e in app.plugin_loader.getCommandExtensions(): + if e.command_name == self.name and e.supports(app): + self._extensions.append(e) + +class ChefCommandExtension(ChefCommand): + def __init__(self): + super(ChefCommandExtension, self).__init__() + self.command_name = '__unknown__' + + def supports(self, app): + return True + + +class _WrappedCommand(ChefCommand): + def __init__(self, func, name, description): + super(_WrappedCommand, self).__init__() + self.func = func + self.name = name + self.description = description + + def run(self, ctx): + self.func(ctx) + + +def simple_command(f, name, description=None): + @functools.wraps(f) + def wrapper(*args, **kwargs): + return f(*args, **kwargs) + cmd = _WrappedCommand(f, name, description) + f.__command_class__ = cmd + return wrapper + + +def get_func_command(f): + return getattr(f, '__command_class__') +
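The command API above is what the built-in commands in the following files implement. As a hedged illustration, a third-party command would follow the same shape (how a plugin actually registers it with the `PluginLoader` is outside this diff):

    import logging
    from piecrust.commands.base import ChefCommand

    logger = logging.getLogger(__name__)


    class HelloCommand(ChefCommand):
        def __init__(self):
            super(HelloCommand, self).__init__()
            self.name = 'hello'
            self.description = "Says hello to the current website."

        def setupParser(self, parser, app):
            parser.add_argument('--shout',
                    help="Print the greeting in upper-case.",
                    action='store_true')

        def run(self, ctx):
            message = "Hello, %s!" % ctx.app.config.get('site/title')
            if ctx.args.shout:
                message = message.upper()
            logger.info(message)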
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/commands/builtin/baking.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,78 @@ +import os.path +import logging +import hashlib +from piecrust.baking.baker import Baker +from piecrust.baking.records import BakeRecord +from piecrust.commands.base import ChefCommand + + +logger = logging.getLogger(__name__) + + +class BakeCommand(ChefCommand): + def __init__(self): + super(BakeCommand, self).__init__() + self.name = 'bake' + self.description = "Bakes your website into static HTML files." + + def setupParser(self, parser, app): + parser.add_argument('-o', '--output', + help="The directory to put all the baked HTML files into " + "(defaults to `_counter`)") + parser.add_argument('-f', '--force', + help="Force re-baking the entire website.", + action='store_true') + parser.add_argument('--portable', + help="Uses relative paths for all URLs.", + action='store_true') + parser.add_argument('--no-assets', + help="Don't process assets (only pages).", + action='store_true') + + def run(self, ctx): + baker = Baker( + ctx.app, + out_dir=ctx.args.output, + force=ctx.args.force, + portable=ctx.args.portable, + no_assets=ctx.args.no_assets) + if ctx.args.portable: + # Disable pretty URLs because there's likely not going to be + # a web server to handle serving default documents. + ctx.app.config.set('site/pretty_urls', False) + + baker.bake() + + +class ShowRecordCommand(ChefCommand): + def __init__(self): + super(ShowRecordCommand, self).__init__() + self.name = 'showrecord' + self.description = "Shows the bake record for a given output directory." + + def setupParser(self, parser, app): + parser.add_argument('output', + help="The output directory for which to show the bake record " + "(defaults to `_counter`)", + nargs='?') + + def run(self, ctx): + out_dir = ctx.args.output or os.path.join(ctx.app.root_dir, '_counter') + record_cache = ctx.app.cache.getCache('bake_r') + record_name = hashlib.md5(out_dir).hexdigest() + '.record' + if not record_cache.has(record_name): + raise Exception("No record has been created for this output path. " + "Did you bake there yet?") + + record = BakeRecord.load(record_cache.getCachePath(record_name)) + logging.info("Bake record for: %s" % record.out_dir) + logging.info("Last baked: %s" % record.bake_time) + logging.info("Entries:") + for entry in record.entries: + logging.info(" - ") + logging.info(" path: %s" % entry.path) + logging.info(" source: %s" % entry.source_name) + logging.info(" config: %s" % entry.config) + logging.info(" base URL: %s" % entry.uri) + logging.info(" outputs: %s" % entry.out_paths) +
--- a/piecrust/commands/builtin/info.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/commands/builtin/info.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,4 +1,6 @@ +import os.path import logging +import fnmatch from piecrust.commands.base import ChefCommand @@ -11,9 +13,112 @@ self.name = 'root' self.description = "Gets the root directory of the current website." - def setupParser(self, parser): + def setupParser(self, parser, app): + pass + + def run(self, ctx): + logger.info(ctx.app.root_dir) + + +class ShowConfigCommand(ChefCommand): + def __init__(self): + super(ShowConfigCommand, self).__init__() + self.name = 'showconfig' + self.description = "Prints part of, or the entirety of, the website's configuration." + + def setupParser(self, parser, app): + parser.add_argument('path', + help="The path to a config section or value", + nargs='?') + + def run(self, ctx): + show = ctx.app.config.get(ctx.args.path) + if show is not None: + if isinstance(show, (dict, list)): + import yaml + out = yaml.safe_dump(show, default_flow_style=False) + logger.info(out) + else: + logger.info(show) + elif ctx.args.path: + logger.error("No such configuration path: %s" % ctx.args.path) + ctx.result = 1 + + +class ShowRoutesCommand(ChefCommand): + def __init__(self): + super(ShowRoutesCommand, self).__init__() + self.name = 'routes' + self.description = "Shows the routes defined for this website." + + def setupParser(self, parser, app): pass def run(self, ctx): - logger.info(ctx.app.root) + for route in ctx.app.routes: + logger.info("%s:" % route.uri_pattern) + logger.info(" source: %s" % route.source_name) + logger.info(" taxonomy: %s" % (route.taxonomy or '')) + + +class ShowPathsCommand(ChefCommand): + def __init__(self): + super(ShowPathsCommand, self).__init__() + self.name = 'paths' + self.description = "Shows the paths that this website is using." + + def setupParser(self, parser, app): + pass + + def run(self, ctx): + app = ctx.app + paths = ['theme_dir', 'templates_dirs', 'plugins_dirs', 'cache_dir'] + for p in paths: + value = getattr(app, p) + if value is list: + logging.info("%s: %s" % (p, ', '.join(value))) + else: + logging.info("%s: %s" % (p, value)) + + +class FindCommand(ChefCommand): + def __init__(self): + super(FindCommand, self).__init__() + self.name = 'find' + self.description = "Find pages in the website." + def setupParser(self, parser, app): + parser.add_argument('pattern', + help="The pattern to match with page slugs", + nargs='?') + parser.add_argument('--endpoint', + help="The endpoint(s) to look into", + nargs='+') + parser.add_argument('--full-path', + help="Return full paths instead of root-relative paths", + action='store_true') + parser.add_argument('--metadata', + help="Return metadata about the page instead of just the path", + action='store_true') + + def run(self, ctx): + pattern = ctx.args.pattern + sources = list(ctx.app.sources) + if ctx.args.endpoint: + endpoints = ctx.args.endpoint + sources = filter(lambda s: s.endpoint in endpoints, sources) + for src in sources: + page_facs = src.getPageFactories() + for pf in page_facs: + name = os.path.relpath(pf.path, ctx.app.root_dir) + if pattern is None or fnmatch.fnmatch(name, pattern): + if ctx.args.full_path: + name = pf.path + if ctx.args.metadata: + logger.info("path:%s" % pf.path) + for key, val in pf.metadata.iteritems(): + logger.info("%s:%s" % (key, val)) + logger.info("---") + else: + logger.info(name) +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/commands/builtin/serving.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,30 @@ +import logging +from piecrust.serving import Server +from piecrust.commands.base import ChefCommand + + +logger = logging.getLogger(__name__) + + +class ServeCommand(ChefCommand): + def __init__(self): + super(ServeCommand, self).__init__() + self.name = 'serve' + self.description = "Runs a local web server to serve your website." + + def setupParser(self, parser, app): + parser.add_argument('-p', '--port', + help="The port for the web server", + default=8080) + parser.add_argument('-a', '--address', + help="The host for the web server", + default='localhost') + + def run(self, ctx): + server = Server( + ctx.app.root_dir, + host=ctx.args.address, + port=ctx.args.port, + debug=ctx.args.debug) + server.run() +
--- a/piecrust/commands/builtin/util.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/commands/builtin/util.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,10 +1,12 @@ import os import os.path +import shutil import codecs import logging import yaml from piecrust.app import CONFIG_PATH from piecrust.commands.base import ChefCommand +from piecrust.sources.base import IPreparingSource, MODE_CREATING logger = logging.getLogger(__name__) @@ -17,7 +19,7 @@ self.description = "Creates a new empty PieCrust website." self.requires_website = False - def setupParser(self, parser): + def setupParser(self, parser, app): parser.add_argument('destination', help="The destination directory in which to create the website.") @@ -28,10 +30,10 @@ if not os.path.isdir(destination): os.makedirs(destination, 0755) - + config_path = os.path.join(destination, CONFIG_PATH) if not os.path.isdir(os.path.dirname(config_path)): - os.makedirs(os.path.dirname(config_path)) + os.makedirs(os.path.dirname(config_path), 0755) config_text = yaml.dump({ 'site': { @@ -46,4 +48,58 @@ default_flow_style=False) with codecs.open(config_path, 'w', 'utf-8') as fp: fp.write(config_text) - + + +class PurgeCommand(ChefCommand): + def __init__(self): + super(PurgeCommand, self).__init__() + self.name = 'purge' + self.description = "Purges the website's cache." + + def setupParser(self, parser, app): + pass + + def run(self, ctx): + cache_dir = ctx.app.cache_dir + if os.path.isdir(cache_dir): + logger.info("Purging cache: %s" % cache_dir) + shutil.rmtree(cache_dir) + + +class PrepareCommand(ChefCommand): + def __init__(self): + super(PrepareCommand, self).__init__() + self.name = 'prepare' + self.description = "Prepares new content for your website." + + def setupParser(self, parser, app): + subparsers = parser.add_subparsers() + for src in app.sources: + if not isinstance(src, IPreparingSource): + logger.debug("Skipping source '%s' because it's not preparable.") + continue + p = subparsers.add_parser(src.name) + src.setupPrepareParser(p, app) + p.set_defaults(source=src) + + def run(self, ctx): + app = ctx.app + source = ctx.args.source + metadata = source.buildMetadata(ctx.args) + page_path = source.findPagePath(metadata, MODE_CREATING) + name, ext = os.path.splitext(page_path) + if ext == '.*': + page_path = '%s.%s' % (name, + app.config.get('site/default_auto_format')) + if os.path.exists(page_path): + raise Exception("'%s' already exists." % page_path) + + logger.info("Creating page: %s" % os.path.relpath(page_path, app.root_dir)) + if not os.path.exists(os.path.dirname(page_path)): + os.makedirs(os.path.dirname(page_path), 0755) + with open(page_path, 'w') as f: + f.write('---\n') + f.write('title: %s\n' % 'Unknown title') + f.write('---\n') + f.write("This is a new page!\n") +
--- a/piecrust/configuration.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/configuration.py Sun Aug 10 23:43:16 2014 -0700 @@ -6,18 +6,25 @@ logger = logging.getLogger(__name__) +class ConfigurationError(Exception): + pass + + class Configuration(object): def __init__(self, values=None, validate=True): if values is not None: - self.set_all(values, validate) + self.setAll(values, validate) else: self._values = None - def set_all(self, values, validate=True): + def setAll(self, values, validate=True): if validate: self._validateAll(values) self._values = values + def getAll(self): + return self.get() + def get(self, key_path=None): self._ensureLoaded() if key_path is None: @@ -73,10 +80,12 @@ return value -def merge_dicts(source, merging, validator=None): +def merge_dicts(source, merging, validator=None, *args): if validator is None: validator = lambda k, v: v _recurse_merge_dicts(source, merging, None, validator) + for other in args: + _recurse_merge_dicts(source, other, None, validator) def _recurse_merge_dicts(local_cur, incoming_cur, parent_path, validator): @@ -105,7 +114,7 @@ m = header_regex.match(text) if m is not None: header = unicode(m.group('header')) - config = yaml.safe_load(header) + config = yaml.load(header, Loader=yaml.BaseLoader) offset = m.end() else: config = {}
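`merge_dicts` now accepts any number of extra dictionaries after the validator argument. A small usage sketch, assuming `piecrust.configuration` is importable and that merging adds missing keys into nested sections:

    from piecrust.configuration import merge_dicts

    site = {'site': {'title': 'My Site'}}
    local = {'site': {'tags': ['blog']}}
    extra = {'site': {'author': 'Someone'}}

    # Merge `local` and then `extra` into `site` in place, recursing into
    # nested dictionaries; the validator argument is left as None.
    merge_dicts(site, local, None, extra)
    print(site['site']['tags'])     # ['blog']
    print(site['site']['author'])   # 'Someone'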
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/assetor.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,75 @@ +import os.path +import logging +from piecrust.uriutil import multi_replace + + +logger = logging.getLogger(__name__) + + +def build_base_url(app, uri, assets_path): + base_url_format = app.env.base_asset_url_format + site_root = app.config.get('site/root') + rel_path = os.path.relpath(assets_path, app.root_dir) + pretty = app.config.get('site/pretty_urls') + if not pretty: + uri, _ = os.path.splitext(uri) + base_url = multi_replace( + base_url_format, + { + '%site_root%': site_root, + '%path%': rel_path, + '%uri%': uri}) + return base_url.rstrip('/') + '/' + + +class Assetor(object): + ASSET_DIR_SUFFIX = '-assets' + + debug_render_doc = """Helps render URLs to files in the current page's + asset folder.""" + debug_render = [] + debug_render_dynamic = ['_debugRenderAssetNames'] + + def __init__(self, page, uri): + self._page = page + self._uri = uri + self._cache = None + + def __getattr__(self, name): + try: + self._cacheAssets() + return self._cache[name] + except KeyError: + raise AttributeError() + + def __getitem__(self, key): + self._cacheAssets() + return self._cache[key] + + def __iter__(self): + self._cacheAssets() + return self._cache.__iter__() + + def iterkeys(self): + return self.__iter__(self) + + def _debugRenderAssetNames(self): + self._cacheAssets() + return self._cache.keys() + + def _cacheAssets(self): + if self._cache is not None: + return + + self._cache = {} + name, ext = os.path.splitext(self._page.path) + assets_dir = name + Assetor.ASSET_DIR_SUFFIX + if not os.path.isdir(assets_dir): + return + + base_url = build_base_url(self._page.app, self._uri, assets_dir) + for _, __, filenames in os.walk(assets_dir): + for fn in filenames: + name, ext = os.path.splitext(fn) + self._cache[name] = base_url + fn +
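`build_base_url` fills placeholders in the environment's `base_asset_url_format` (which defaults to `%site_root%%uri%`). A rough standalone sketch with hypothetical values, using plain `str.replace` in place of `multi_replace`:

    base_url_format = '%site_root%%uri%'     # default format
    site_root = '/'
    uri = 'posts/2014/08/my-post'            # hypothetical page URI
    rel_path = '_content/posts/2014-08-10_my-post-assets'

    base_url = (base_url_format
                .replace('%site_root%', site_root)
                .replace('%path%', rel_path)
                .replace('%uri%', uri))
    base_url = base_url.rstrip('/') + '/'
    print(base_url)   # '/posts/2014/08/my-post/'
    # An asset named `diagram.png` in the page's `-assets` folder would then
    # be exposed to templates as that base URL plus the file name.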
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,132 @@ +import time +import logging +from piecrust.data.assetor import Assetor + + +logger = logging.getLogger(__name__) + + +class LazyPageConfigData(object): + """ An object that represents the configuration header of a page, + but also allows for additional data. It's meant to be exposed + to the templating system. + """ + def __init__(self, page): + self._page = page + self._values = None + self._loaders = None + + @property + def page(self): + return self._page + + def __getitem__(self, name): + self._load() + + if self._loaders: + loader = self._loaders.get(name) + if loader is not None: + try: + self._values[name] = loader(self, name) + except Exception as ex: + logger.error("Error while loading attribute '%s' for: %s" + % (name, self._page.path)) + logger.exception(ex) + raise Exception("Internal Error: %s" % ex) + + # We need to double-check `_loaders` here because + # the loader could have removed all loaders, which + # would set this back to `None`. + if self._loaders is not None: + del self._loaders[name] + if len(self._loaders) == 0: + self._loaders = None + + return self._values[name] + + def setValue(self, name, value): + self._values[name] = value + + def mapLoader(self, attr_name, loader): + if loader is None: + if self._loaders is None or attr_name not in self._loaders: + return + del self._loaders[attr_name] + if len(self._loaders) == 0: + self._loaders = None + return + + if self._loaders is None: + self._loaders = {} + if attr_name in self._loaders: + raise Exception("A loader has already been mapped for: %s" % + attr_name) + self._loaders[attr_name] = loader + + def _load(self): + if self._values is not None: + return + self._values = dict(self._page.config.get()) + try: + self._loadCustom() + except Exception as ex: + logger.error("Error while loading data for: %s" % self._page.path) + logger.exception(ex) + raise Exception("Internal Error: %s" % ex) + + def _loadCustom(self): + pass + + +def build_uri(page): + route = page.app.getRoute(page.source.name, page.source_metadata) + if route is None: + raise Exception("Can't get route for page: %s" % page.path) + return route.getUri(page.source_metadata) + + +def load_rendered_segment(data, name): + from piecrust.rendering import PageRenderingContext, render_page_segments + + uri = build_uri(data.page) + try: + ctx = PageRenderingContext(data.page, uri) + segs = render_page_segments(ctx) + except Exception as e: + logger.exception("Error rendering segments for '%s': %s" % (uri, e)) + raise + + for k, v in segs.iteritems(): + data.mapLoader(k, None) + data.setValue(k, v) + + if 'content.abstract' in segs: + data.setValue('content', segs['content.abstract']) + data.setValue('has_more', True) + if name == 'content': + return segs['content.abstract'] + + return segs[name] + + +class PaginationData(LazyPageConfigData): + def __init__(self, page): + super(PaginationData, self).__init__(page) + + def _loadCustom(self): + page_url = build_uri(self.page) + self.setValue('url', page_url) + self.setValue('slug', page_url) + self.setValue('timestamp', + time.mktime(self.page.datetime.timetuple())) + date_format = self.page.app.config.get('site/date_format') + if date_format: + self.setValue('date', self.page.datetime.strftime(date_format)) + + assetor = Assetor(self.page, page_url) + self.setValue('assets', assetor) + + segment_names = self.page.config.get('segments') + for name in segment_names: + self.mapLoader(name, 
load_rendered_segment) +
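The loader mechanism above means expensive values, such as rendered content segments, are only computed when a template actually asks for them. Here is a minimal standalone sketch of that pattern, independent of any page object:

    class LazyData(object):
        """ A toy version of LazyPageConfigData's loader mechanism. """
        def __init__(self, initial):
            self._values = dict(initial)
            self._loaders = {}

        def mapLoader(self, name, loader):
            self._loaders[name] = loader

        def __getitem__(self, name):
            loader = self._loaders.pop(name, None)
            if loader is not None:
                # Compute the value on first access and cache it.
                self._values[name] = loader(self, name)
            return self._values[name]

    data = LazyData({'title': 'Hello'})
    data.mapLoader('content', lambda d, n: '<p>rendered on demand</p>')
    print(data['title'])     # cheap: straight from the config header
    print(data['content'])   # triggers the loader exactly once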
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/builder.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,116 @@ +import time +import logging +from piecrust.configuration import merge_dicts +from piecrust.data.assetor import Assetor +from piecrust.data.debug import build_debug_info +from piecrust.data.linker import Linker +from piecrust.data.paginator import Paginator + + +logger = logging.getLogger(__name__) + + +class DataBuildingContext(object): + def __init__(self, page, uri, page_num=1): + self.page = page + self.uri = uri + self.page_num = page_num + self.pagination_source = None + self.pagination_filter = None + + +def build_page_data(ctx): + page = ctx.page + app = page.app + + pgn_source = ctx.pagination_source or get_default_pagination_source(page) + paginator = Paginator(page, pgn_source, ctx.uri, ctx.page_num, + ctx.pagination_filter) + assetor = Assetor(page, ctx.uri) + linker = Linker(page) + data = { + 'piecrust': build_piecrust_data(), + 'page': dict(page.config.get()), + 'assets': assetor, + 'pagination': paginator, + 'siblings': linker, + 'family': linker + } + page_data = data['page'] + page_data['url'] = ctx.uri + page_data['timestamp'] = time.mktime(page.datetime.timetuple()) + date_format = app.config.get('site/date_format') + if date_format: + page_data['date'] = page.datetime.strftime(date_format) + + #TODO: handle slugified taxonomy terms. + + site_data = build_site_data(page) + merge_dicts(data, site_data) + + # Do this at the end because we want all the data to be ready to be + # displayed in the debugger window. + if (app.debug and app.config.get('site/enable_debug_info') and + not app.config.get('baker/is_baking')): + data['piecrust']['debug_info'] = build_debug_info(page, data) + + return data + + +def build_layout_data(page, page_data, contents): + data = dict(page_data) + for name, txt in contents.iteritems(): + if name in data: + logger.warning("Content segment '%s' will hide existing data." % + name) + data[name] = txt + return data + + +try: + from piecrust.__version__ import VERSION +except ImportError: + from piecrust import APP_VERSION as VERSION + + +def build_piecrust_data(): + data = { + 'version': VERSION, + 'url': 'http://bolt80.com/piecrust/', + 'branding': 'Baked with <em><a href="%s">PieCrust</a> %s</em>.' % ( + 'http://bolt80.com/piecrust/', VERSION) + } + return data + + +def build_site_data(page): + app = page.app + data = dict(app.config.get()) + for source in app.sources: + endpoint_bits = source.data_endpoint.split('/') + endpoint = data + for e in endpoint_bits[:-1]: + if e not in endpoint: + endpoint[e] = {} + endpoint = endpoint[e] + user_data = endpoint.get(endpoint_bits[-1]) + provider = source.buildDataProvider(page, user_data) + if endpoint_bits[-1] in endpoint: + provider.user_data = endpoint[endpoint_bits[-1]] + endpoint[endpoint_bits[-1]] = provider + return data + + +def get_default_pagination_source(page): + app = page.app + source_name = page.config.get('source') or page.config.get('blog') + logger.debug("Got source name %s for page %s" % (source_name, page.path)) + if source_name is None: + blog_names = app.config.get('site/blogs') + if blog_names is not None: + source_name = blog_names[0] + else: + source_name = app.sources[0].name + source = app.getSource(source_name) + return source +
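In `build_site_data`, a source's `data_endpoint` decides where its data provider gets mounted inside the template data. A simplified sketch of that nesting, with a placeholder provider object:

    data = {'site': {'title': 'My Site'}}    # simplified site configuration
    data_endpoint = 'blog'                   # hypothetical source endpoint
    provider = object()                      # stands in for a data provider

    endpoint_bits = data_endpoint.split('/')
    endpoint = data
    for e in endpoint_bits[:-1]:
        endpoint = endpoint.setdefault(e, {})
    endpoint[endpoint_bits[-1]] = provider
    # Templates can now reach the provider through `blog` (its posts,
    # archives, etc. for the blog data provider).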
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/debug.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,370 @@ +import re +import cgi +import logging +import StringIO +from piecrust import APP_VERSION, PIECRUST_URL + + +logger = logging.getLogger(__name__) + + +css_id_re = re.compile(r'[^\w\d\-]+') + + +# CSS for the debug window. +CSS_DEBUGINFO = """ +text-align: left; +font-style: normal; +padding: 1em; +background: #a42; +color: #fff; +position: fixed; +width: 50%; +bottom: 0; +right: 0; +overflow: auto; +max-height: 50%; +box-shadow: 0 0 10px #633; +""" + +# HTML elements. +CSS_P = 'margin: 0; padding: 0;' +CSS_A = 'color: #fff; text-decoration: none;' + +# Headers. +CSS_BIGHEADER = 'margin: 0.5em 0; font-weight: bold;' +CSS_HEADER = 'margin: 0.5em 0; font-weight: bold;' + +# Data block elements. +CSS_DATA = 'font-family: Courier, sans-serif; font-size: 0.9em;' +CSS_DATABLOCK = 'margin-left: 2em;' +CSS_VALUE = 'color: #fca;' +CSS_DOC = 'color: #fa8; font-size: 0.9em;' + +# 'Baked with PieCrust' text +BRANDING_TEXT = 'Baked with <em><a href="%s">PieCrust</a> %s</em>.' % ( + PIECRUST_URL, APP_VERSION) + + +def build_debug_info(page, data): + """ Generates HTML debug info for the given page's data. + """ + output = StringIO.StringIO() + try: + _do_build_debug_info(page, data, output) + return output.getvalue() + finally: + output.close() + + +def _do_build_debug_info(page, data, output): + app = page.app + exec_info = app.env.exec_info_stack.current_page_info + + print >>output, '<div id="piecrust-debug-info" style="%s">' % CSS_DEBUGINFO + + print >>output, '<div>' + print >>output, '<p style="%s"><strong>PieCrust %s</strong> — ' % (CSS_P, APP_VERSION) + + # If we have some execution info in the environment, + # add more information. + if exec_info: + if exec_info.was_cache_valid: + output.write('baked this morning') + else: + output.write('baked just now') + + if app.cache.enabled: + if app.env.was_cache_cleaned: + output.write(', from a brand new cache') + else: + output.write(', from a valid cache') + else: + output.write(', with no cache') + + else: + output.write('no caching information available') + + output.write(', ') + if app.env.start_time != 0: + output.write('in __PIECRUST_TIMING_INFORMATION__') + else: + output.write('no timing information available') + + print >>output, '</p>' + print >>output, '</div>' + + if data: + print >>output, '<div>' + print >>output, ('<p style="%s cursor: pointer;" onclick="var l = ' + 'document.getElementById(\'piecrust-debug-details\'); ' + 'if (l.style.display == \'none\') l.style.display = ' + '\'block\'; else l.style.display = \'none\';">' % CSS_P) + print >>output, ('<span style="%s">Template engine data</span> ' + '— click to toggle</a>.</p>' % CSS_BIGHEADER) + + print >>output, '<div id="piecrust-debug-details" style="display: none;">' + print >>output, ('<p style="%s">The following key/value pairs are ' + 'available in the layout\'s markup, and most are ' + 'available in the page\'s markup.</p>' % CSS_DOC) + + filtered_data = dict(data) + for k in filtered_data.keys(): + if k.startswith('__'): + del filtered_data[k] + + renderer = DebugDataRenderer(output) + renderer.external_docs['data-site'] = ( + "This section comes from the site configuration file.") + renderer.external_docs['data-page'] = ( + "This section comes from the page's configuration header.") + renderer.renderData(filtered_data) + + print >>output, '</div>' + print >>output, '</div>' + + print >>output, '</div>' + + +class DebugDataRenderer(object): + 
MAX_VALUE_LENGTH = 150 + + def __init__(self, output): + self.indent = 0 + self.output = output + self.external_docs = {} + + def renderData(self, data): + if not isinstance(data, dict): + raise Exception("Expected top level data to be a dict.") + self._writeLine('<div style="%s">' % CSS_DATA) + self._renderDict(data, 'data') + self._writeLine('</div>') + + def _renderValue(self, data, path): + if data is None: + self._write('<null>') + return + + if isinstance(data, dict): + self._renderCollapsableValueStart(path) + with IndentScope(self): + self._renderDict(data, path) + self._renderCollapsableValueEnd() + return + + if isinstance(data, list): + self._renderCollapsableValueStart(path) + with IndentScope(self): + self._renderList(data, path) + self._renderCollapsableValueEnd() + return + + data_type = type(data) + if data_type is bool: + self._write('<span style="%s">%s</span>' % (CSS_VALUE, + 'true' if bool(data) else 'false')) + return + + if data_type is int: + self._write('<span style="%s">%d</span>' % (CSS_VALUE, data)) + return + + if data_type is float: + self._write('<span style="%s">%4.2f</span>' % (CSS_VALUE, data)) + return + + if data_type in (str, unicode): + if data_type == str: + data = data.decode('utf8') + if len(data) > DebugDataRenderer.MAX_VALUE_LENGTH: + data = data[:DebugDataRenderer.MAX_VALUE_LENGTH - 5] + data += '[...]' + data = cgi.escape(data).encode('ascii', 'xmlcharrefreplace') + self._write('<span style="%s">%s</span>' % (CSS_VALUE, data)) + return + + self._renderCollapsableValueStart(path) + with IndentScope(self): + self._renderObject(data, path) + self._renderCollapsableValueEnd() + + def _renderList(self, data, path): + self._writeLine('<div style="%s">' % CSS_DATABLOCK) + self._renderDoc(data, path) + self._renderAttributes(data, path) + rendered_count = self._renderIterable(data, path, lambda d: enumerate(d)) + if rendered_count == 0: + self._writeLine('<p style="%s %s">(empty array)</p>' % (CSS_P, CSS_DOC)) + self._writeLine('</div>') + + def _renderDict(self, data, path): + self._writeLine('<div style="%s">' % CSS_DATABLOCK) + self._renderDoc(data, path) + self._renderAttributes(data, path) + rendered_count = self._renderIterable(data, path, + lambda d: sorted(d.iteritems(), key=lambda i: i[0])) + if rendered_count == 0: + self._writeLine('<p style="%s %s">(empty dictionary)</p>' % (CSS_P, CSS_DOC)) + self._writeLine('</div>') + + def _renderObject(self, data, path): + if hasattr(data.__class__, 'debug_render_func'): + # This object wants to be rendered as a simple string... 
+ render_func_name = data.__class__.debug_render_func + render_func = getattr(data, render_func_name) + value = render_func() + self._renderValue(value, path) + return + + self._writeLine('<div style="%s">' % CSS_DATABLOCK) + self._renderDoc(data, path) + rendered_attrs = self._renderAttributes(data, path) + + if (hasattr(data, '__iter__') and + hasattr(data.__class__, 'debug_render_items') and + data.__class__.debug_render_items): + rendered_count = self._renderIterable(data, path, + lambda d: enumerate(d)) + if rendered_count == 0: + self._writeLine('<p style="%s %s">(empty)</p>' % (CSS_P, CSS_DOC)) + + elif rendered_attrs == 0: + self._writeLine('<p style="%s %s">(empty)</p>' % (CSS_P, CSS_DOC)) + + self._writeLine('</div>') + + def _renderIterable(self, data, path, iter_func): + rendered_count = 0 + with IndentScope(self): + for i, item in iter_func(data): + self._writeStart('<div>%s' % i) + if item is not None: + self._write(' : ') + self._renderValue(item, self._makePath(path, i)) + self._writeEnd('</div>') + rendered_count += 1 + return rendered_count + + def _renderDoc(self, data, path): + if hasattr(data.__class__, 'debug_render_doc'): + self._writeLine('<span style="%s">– %s</span>' % + (CSS_DOC, data.__class__.debug_render_doc)) + + doc = self.external_docs.get(path) + if doc is not None: + self._writeLine('<span style="%s">– %s</span>' % + (CSS_DOC, doc)) + + def _renderAttributes(self, data, path): + if not hasattr(data.__class__, 'debug_render'): + return 0 + + attr_names = list(data.__class__.debug_render) + if hasattr(data.__class__, 'debug_render_dynamic'): + drd = data.__class__.debug_render_dynamic + for ng in drd: + name_gen = getattr(data, ng) + attr_names += name_gen() + + invoke_attrs = [] + if hasattr(data.__class__, 'debug_render_invoke'): + invoke_attrs = list(data.__class__.debug_render_invoke) + if hasattr(data.__class__, 'debug_render_invoke_dynamic'): + drid = data.__class__.debug_render_invoke_dynamic + for ng in drid: + name_gen = getattr(data, ng) + invoke_attrs += name_gen() + + rendered_count = 0 + for name in attr_names: + value = None + render_name = name + should_call = name in invoke_attrs + + try: + attr = getattr(data.__class__, name) + except AttributeError: + # This could be an attribute on the instance itself, or some + # dynamic attribute. + attr = getattr(data, name) + + if callable(attr): + attr_func = getattr(data, name) + argcount = attr_func.__code__.co_argcount + var_names = attr_func.__code__.co_varnames + if argcount == 1 and should_call: + render_name += '()' + value = attr_func() + else: + if should_call: + logger.warning("Method '%s' should be invoked for " + "rendering, but it has %s arguments." 
% + (name, argcount)) + should_call = False + render_name += '(%s)' % ','.join(var_names[1:]) + elif should_call: + value = getattr(data, name) + + self._writeLine('<div>%s' % render_name) + with IndentScope(self): + if should_call: + self._write(' : ') + self._renderValue(value, self._makePath(path, name)) + self._writeLine('</div>') + rendered_count += 1 + + return rendered_count + + def _renderCollapsableValueStart(self, path): + self._writeLine('<span style="cursor: pointer;" onclick="var l = ' + 'document.getElementById(\'piecrust-debug-data-%s\'); ' + 'if (l.style.display == \'none\') {' + ' l.style.display = \'block\';' + ' this.innerHTML = \'[-]\';' + '} else {' + ' l.style.display = \'none\';' + ' this.innerHTML = \'[+]\';' + '}">' + '[+]' + '</span>' % + path) + self._writeLine('<div style="display: none"' + 'id="piecrust-debug-data-%s">' % path) + + def _renderCollapsableValueEnd(self): + self._writeLine('</div>') + + def _makePath(self, parent_path, key): + return '%s-%s' % (parent_path, css_id_re.sub('-', str(key))) + + def _writeLine(self, msg): + self.output.write(self.indent * ' ') + self.output.write(msg) + self.output.write('\n') + + def _writeStart(self, msg=None): + self.output.write(self.indent * ' ') + if msg is not None: + self.output.write(msg) + + def _write(self, msg): + self.output.write(msg) + + def _writeEnd(self, msg=None): + if msg is not None: + self.output.write(msg) + self.output.write('\n') + + +class IndentScope(object): + def __init__(self, target): + self.target = target + + def __enter__(self): + self.target.indent += 1 + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.target.indent -= 1 +
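Objects opt into the debug window by declaring a few class attributes that `DebugDataRenderer._renderAttributes` looks for. A small sketch of such a class, modeled on the `Paginator` and `Assetor` declarations elsewhere in this changeset:

    class ExampleProvider(object):
        debug_render_doc = "Example data exposed to templates."
        debug_render = ['title', 'count']          # names shown in the window
        debug_render_invoke = ['title', 'count']   # names whose values get evaluated

        def __init__(self):
            self.title = 'Hello'

        def count(self):
            # Rendered as `count()` with its return value in the window.
            return 42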
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/filters.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,159 @@ +import logging + + +logger = logging.getLogger(__name__) + + +class PaginationFilter(object): + def __init__(self): + self.root_clause = None + + @property + def is_empty(self): + return self.root_clause is None + + def addClause(self, clause): + self._ensureRootClause() + self.root_clause.addClause(clause) + + def addClausesFromConfig(self, config): + self._ensureRootClause() + self._addClausesFromConfigRecursive(config, self.root_clause) + + def pageMatches(self, page): + if self.root_clause is None: + return True + return self.root_clause.pageMatches(page) + + def _ensureRootClause(self): + if self.root_clause is None: + self.root_clause = AndBooleanClause() + + def _addClausesFromConfigRecursive(self, config, parent_clause): + for key, val in config.iteritems(): + if key == 'and': + if not isinstance(val, list) or len(val) == 0: + raise Exception("The given boolean 'AND' filter clause " + "doesn't have an array of child clauses.") + subcl = AndBooleanClause() + parent_clause.addClause(subcl) + for c in val: + self._addClausesFromConfigRecursive(c, subcl) + + elif key == 'or': + if not isinstance(val, list) or len(val) == 0: + raise Exception("The given boolean 'OR' filter clause " + "doesn't have an array of child clauses.") + subcl = OrBooleanClause() + parent_clause.addClause(subcl) + for c in val: + self._addClausesFromConfigRecursive(c, subcl) + + elif key == 'not': + if isinstance(val, list): + if len(val) != 1: + raise Exception("'NOT' filter clauses must have " + "exactly one child clause.") + val = val[0] + subcl = NotClause() + parent_clause.addClause(subcl) + self._addClausesFromConfigRecursive(val, subcl) + + elif key[:4] == 'has_': + setting_name = key[4:] + if isinstance(val, list): + wrappercl = AndBooleanClause() + for c in val: + wrappercl.addClause(HasFilterClause(setting_name, c)) + parent_clause.addClause(wrappercl) + else: + parent_clause.addClause(HasFilterClause(setting_name, val)) + + elif key[:3] == 'is_': + setting_name = key[3:] + parent_clause.addClause(IsFilterClause(setting_name, val)) + + else: + raise Exception("Unknown filter clause: %s" % key) + + + +class IFilterClause(object): + def addClause(self, clause): + raise NotImplementedError() + + def pageMatches(self, page): + raise NotImplementedError() + + +class NotClause(IFilterClause): + def __init__(self): + self.child = None + + def addClause(self, clause): + if self.child is not None: + raise Exception("'NOT' filtering clauses can only have one " + "child clause.") + self.child = clause + + def pageMatches(self, page): + if self.child is None: + raise Exception("'NOT' filtering clauses must have one child " + "clause.") + return not self.child.pageMatches(page) + + +class BooleanClause(IFilterClause): + def __init__(self): + self.clauses = [] + + def addClause(self, clause): + self.clauses.append(clause) + + +class AndBooleanClause(BooleanClause): + def pageMatches(self, page): + for c in self.clauses: + if not c.pageMatches(page): + return False + return True + + +class OrBooleanClause(BooleanClause): + def pageMatches(self, page): + for c in self.clauses: + if c.pageMatches(page): + return True + return False + + +class SettingFilterClause(IFilterClause): + def __init__(self, name, value, coercer=None): + self.name = name + self.value = value + self.coercer = coercer + + def addClause(self, clause): + raise Exception("Setting filter clauses can't have child clauses. 
" + "Use a boolean filter clause instead.") + + +class HasFilterClause(SettingFilterClause): + def pageMatches(self, page): + actual_value = page.config.get(self.name) + if actual_value is None or not isinstance(actual_value, list): + return False + + if self.coercer: + actual_value = map(self.coercer, actual_value) + + return self.value in actual_value + + +class IsFilterClause(SettingFilterClause): + def pageMatches(self, page): + actual_value = page.config.get(self.name) + if self.coercer: + actual_value = self.coercer(actual_value) + return actual_value == self.value +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/iterators.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,291 @@ +import logging +from piecrust.data.base import PaginationData +from piecrust.data.filters import PaginationFilter +from piecrust.events import Event + + +logger = logging.getLogger(__name__) + + +class SliceIterator(object): + def __init__(self, it, offset=0, limit=-1): + self.it = it + self.offset = offset + self.limit = limit + self.current_page = None + self.has_more = False + self.inner_count = -1 + self.next_page = None + self.prev_page = None + self._cache = None + + def __iter__(self): + if self._cache is None: + inner_list = list(self.it) + self.inner_count = len(inner_list) + self.has_more = self.inner_count > (self.offset + self.limit) + self._cache = inner_list[self.offset:self.offset + self.limit] + if self.current_page: + idx = inner_list.index(self.current_page) + if idx >= 0: + if idx < self.inner_count - 1: + self.next_page = inner_list[idx + 1] + if idx > 0: + self.prev_page = inner_list[idx - 1] + return iter(self._cache) + + +class SettingFilterIterator(object): + def __init__(self, it, fil_conf, page_accessor=None): + self.it = it + self.fil_conf = fil_conf + self._fil = None + self.page_accessor = page_accessor + + def __iter__(self): + if self._fil is None: + self._fil = PaginationFilter() + self._fil.addClausesFromConfig(self.fil_conf) + + for i in self.it: + if self.page_accessor: + page = self.page_accessor(i) + else: + page = i + if self._fil.pageMatches(page): + yield i + + +class SettingSortIterator(object): + def __init__(self, it, name, reverse=False, value_accessor=None): + self.it = it + self.name = name + self.reverse = reverse + self.value_accessor = value_accessor + + def __iter__(self): + def comparer(x, y): + if self.value_accessor: + v1 = self.value_accessor(x, self.name) + v2 = self.value_accessor(y, self.name) + else: + v1 = x.config.get(self.name) + v2 = y.config.get(self.name) + + if v1 is None and v2 is None: + return 0 + if v1 is None and v2 is not None: + return 1 if self.reverse else -1 + if v1 is not None and v2 is None: + return -1 if self.reverse else 1 + + if v1 == v2: + return 0 + if self.reverse: + return 1 if v1 < v2 else -1 + else: + return -1 if v1 < v2 else 1 + + return sorted(self.it, cmp=self._comparer, reverse=self.reverse) + + +class DateSortIterator(object): + def __init__(self, it, reverse=True): + self.it = it + self.reverse = reverse + + def __iter__(self): + return iter(sorted(self.it, + key=lambda x: x.datetime, reverse=self.reverse)) + + +class PaginationFilterIterator(object): + def __init__(self, it, fil): + self.it = it + self._fil = fil + + def __iter__(self): + for page in self.it: + if self._fil.pageMatches(page): + yield page + + +class SourceFactoryIterator(object): + def __init__(self, source): + self.source = source + self.it = None # This is to permit recursive traversal of the + # iterator chain. It acts as the end. 
+ + def __iter__(self): + for factory in self.source.getPageFactories(): + yield factory.buildPage() + + +class PaginationDataBuilderIterator(object): + def __init__(self, it): + self.it = it + + def __iter__(self): + for page in self.it: + yield PaginationData(page) + + +class PageIterator(object): + def __init__(self, source, current_page=None, pagination_filter=None, + offset=0, limit=-1, locked=False): + self._source = source + self._current_page = current_page + self._locked = False + self._pages = SourceFactoryIterator(source) + self._pagesData = None + self._pagination_slicer = None + self._has_sorter = False + self._next_page = None + self._prev_page = None + self._iter_event = Event() + + # Apply any filter first, before we start sorting or slicing. + if pagination_filter is not None: + self._simpleNonSortedWrap(PaginationFilterIterator, + pagination_filter) + + if offset > 0 or limit > 0: + self.slice(offset, limit) + + self._locked = locked + + @property + def total_count(self): + self._load() + if self._pagination_slicer is not None: + return self._pagination_slicer.inner_count + return len(self._pagesData) + + @property + def next_page(self): + self._load() + return self._next_page + + @property + def prev_page(self): + self._load() + return self._prev_page + + def __len__(self): + self._load() + return len(self._pagesData) + + def __getitem__(self, key): + self._load() + return self._pagesData[key] + + def __iter__(self): + self._load() + self._iter_event.fire() + return iter(self._pagesData) + + def __getattr__(self, name): + if name[:3] == 'is_' or name[:3] == 'in_': + def is_filter(value): + conf = {'is_%s' % name[3:]: value} + return self._simpleNonSortedWrap(SettingFilterIterator, conf) + return is_filter + + if name[:4] == 'has_': + def has_filter(value): + conf = {name: value} + return self._simpleNonSortedWrap(SettingFilterIterator, conf) + return has_filter + + if name[:5] == 'with_': + def has_filter(value): + conf = {'has_%s' % name[5:]: value} + return self._simpleNonSortedWrap(SettingFilterIterator, conf) + return has_filter + + raise AttributeError() + + def skip(self, count): + return self._simpleWrap(SliceIterator, count) + + def limit(self, count): + return self._simpleWrap(SliceIterator, 0, count) + + def slice(self, skip, limit): + return self._simpleWrap(SliceIterator, skip, limit) + + def filter(self, filter_name): + if self._current_page is None: + raise Exception("Can't use `filter()` because no parent page was " + "set for this page iterator.") + filter_conf = self._current_page.config.get(filter_name) + if filter_conf is None: + raise Exception("Couldn't find filter '%s' in the configuration " + "header for page: %s" % + (filter_name, self._current_page.path)) + return self._simpleNonSortedWrap(SettingFilterIterator, filter_conf) + + def sort(self, setting_name, reverse=False): + self._ensureUnlocked() + self._unload() + self._pages = SettingSortIterator(self._pages, setting_name, reverse) + self._has_sorter = True + return self + + def reset(self): + self._ensureUnlocked() + self._unload + return self + + @property + def _has_more(self): + self._load() + if self._pagination_slicer: + return self._pagination_slicer.has_more + return False + + def _simpleWrap(self, it_class, *args, **kwargs): + self._ensureUnlocked() + self._unload() + self._ensureSorter() + self._pages = it_class(self._pages, *args, **kwargs) + if self._pagination_slicer is None and it_class is SliceIterator: + self._pagination_slicer = self._pages + return self + + def 
_simpleNonSortedWrap(self, it_class, *args, **kwargs): + self._ensureUnlocked() + self._unload() + self._pages = it_class(self._pages, *args, **kwargs) + return self + + def _ensureUnlocked(self): + if self._locked: + raise Exception( + "This page iterator has been locked, probably because " + "you're trying to tamper with pagination data.") + + def _ensureSorter(self): + if self._has_sorter: + return + self._pages = DateSortIterator(self._pages) + self._has_sorter = True + + def _unload(self): + self._pagesData = None + self._next_page = None + self._prev_page = None + + def _load(self): + if self._pagesData is not None: + return + + self._ensureSorter() + + it_chain = PaginationDataBuilderIterator(self._pages) + self._pagesData = list(it_chain) + + if self._current_page and self._pagination_slicer: + self._prev_page = PaginationData(self._pagination_slicer.prev_page) + self._next_page = PaginationData(self._pagination_slicer.next_page) +
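`PageIterator.__getattr__` turns attribute names like `has_tags` or `is_draft` into filter calls. Here is a standalone, simplified sketch of that name-based dispatch; the real iterator wraps a `SettingFilterIterator` instead of collecting dictionaries:

    class FilterDispatcher(object):
        """ Toy version of PageIterator's dynamic filter methods. """
        def __init__(self):
            self.applied = []

        def __getattr__(self, name):
            if name.startswith(('is_', 'in_', 'has_', 'with_')):
                def add_filter(value):
                    # Record a filter and return `self` so calls can be
                    # chained, like the real iterator.
                    self.applied.append({name: value})
                    return self
                return add_filter
            raise AttributeError(name)

    it = FilterDispatcher()
    it.has_tags('recipe').is_draft(False)
    print(it.applied)   # [{'has_tags': 'recipe'}, {'is_draft': False}]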
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/linker.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,6 @@ + + +class Linker(object): + def __init__(self, page): + self.page = page +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/paginator.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,216 @@ +import math +import logging +from werkzeug.utils import cached_property +from piecrust.data.filters import PaginationFilter +from piecrust.data.iterators import PageIterator + + +logger = logging.getLogger(__name__) + + +class Paginator(object): + debug_render = ['has_more', 'items', 'has_items', 'items_per_page', + 'items_this_page', 'prev_page_number', 'this_page_number', + 'next_page_number', 'prev_page', 'next_page', + 'total_item_count', 'total_page_count', + 'next_item', 'prev_item'] + debug_render_invoke = ['has_more', 'items', 'has_items', 'items_per_page', + 'items_this_page', 'prev_page_number', 'this_page_number', + 'next_page_number', 'prev_page', 'next_page', + 'total_item_count', 'total_page_count', + 'next_item', 'prev_item'] + + def __init__(self, page, source, uri, page_num=1, pgn_filter=None): + self._parent_page = page + self._source = source + self._uri = uri + self._page_num = page_num + self._iterator = None + self._pgn_filter = pgn_filter + self._pgn_set_on_ctx = False + + @property + def is_loaded(self): + return self._iterator is not None + + @property + def has_more(self): + return self.next_page_number is not None + + @property + def unload(self): + self._iterator = None + + # Backward compatibility with PieCrust 1.0 {{{ + @property + def posts(self): + return self.items + + @property + def has_posts(self): + return self.has_items + + @property + def posts_per_page(self): + return self.items_per_page + + @property + def posts_this_page(self): + return self.items_this_page + + @property + def total_post_count(self): + return self.total_item_count + + @property + def next_post(self): + return self.next_item + + @property + def prev_post(self): + return self.prev_item + # }}} + + @property + def items(self): + self._load() + return self._iterator + + @property + def has_items(self): + return self.posts_this_page > 0 + + @cached_property + def items_per_page(self): + return (self._parent_page.config.get('items_per_page') or + self._source.items_per_page) + + @property + def items_this_page(self): + self._load() + return len(self._iterator) + + @property + def prev_page_number(self): + if self._page_num > 1: + return self._page_num - 1 + return None + + @property + def this_page_number(self): + return self._page_num + + @property + def next_page_number(self): + self._load() + if self._iterator._has_more: + return self._page_num + 1 + return None + + @property + def prev_page(self): + num = self.prev_page_number + if num is not None: + return self._getPageUri(num) + return None + + @property + def this_page(self): + return self._getPageUri(self._page_num) + + @property + def next_page(self): + num = self.next_page_number + if num is not None: + return self._getPageUri(num) + return None + + @property + def total_item_count(self): + self._load() + return self._iterator.total_count + + @property + def total_page_count(self): + total_count = self.total_item_count + per_page = self.items_per_page + return int(math.ceil(total_count / per_page)) + + @property + def next_item(self): + self._load() + return self._iterator.prev_page + + @property + def prev_item(self): + self._load() + return self._iterator.next_page + + def all_page_numbers(self, radius=-1): + total_page_count = self.total_page_count + if total_page_count == 0: + return [] + + if radius <= 0 or total_page_count < (2 * radius + 1): + return range(1, total_page_count) + + first_num = 
self._page_num - radius + last_num = self._page_num + radius + if first_num <= 0: + last_num += 1 - first_num + first_num = 1 + elif last_num > total_page_count: + first_num -= (last_num - total_page_count) + last_num = total_page_count + first_num = max(1, first_num) + last_num = min(total_page_count, last_num) + return range(first_num, last_num) + + def page(self, index): + return self._getPageUri(index) + + def _load(self): + if self._iterator is not None: + return + + if self._source is None: + raise Exception("Can't load pagination data: no source has been defined.") + + pag_filter = self._getPaginationFilter() + offset = (self._page_num - 1) * self.items_per_page + self._iterator = PageIterator(self._source, + current_page=self._parent_page, + pagination_filter=pag_filter, + offset=offset, limit=self.items_per_page, + locked=True) + self._iterator._iter_event += self._onIteration + + def _getPaginationFilter(self): + f = PaginationFilter() + + if self._pgn_filter is not None: + f.addClause(self._pgn_filter.root_clause) + + conf = (self._parent_page.config.get('items_filters') or + self._parent_page.app.config.get('site/items_filters')) + if conf == 'none' or conf == 'nil' or conf == '': + conf = None + if conf is not None: + f.addClausesFromConfig(conf) + + return f + + def _getPageUri(self, index): + uri = self._uri + if index > 1: + if uri != '': + uri += '/' + uri += str(index) + return uri + + def _onIteration(self): + if not self._pgn_set_on_ctx: + eis = self._parent_page.app.env.exec_info_stack + eis.current_page_info.render_ctx.setPagination(self) + self._pgn_set_on_ctx = True +
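Sub-page URLs are derived by appending the page number to the page's URI, skipping the number for the first page. A tiny sketch mirroring `_getPageUri`:

    def get_page_uri(uri, index):
        # Page 1 keeps the original URI; later pages get '/2', '/3', etc.
        if index > 1:
            if uri != '':
                uri += '/'
            uri += str(index)
        return uri

    print(get_page_uri('blog', 1))   # 'blog'
    print(get_page_uri('blog', 2))   # 'blog/2'
    print(get_page_uri('', 3))       # '3' (the site's root page)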
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/data/provider.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,249 @@ +import time +import itertools +from piecrust.data.iterators import PageIterator +from piecrust.sources.base import ArraySource + + +class DataProvider(object): + debug_render_dynamic = ['_debugRenderUserData'] + debug_render_invoke_dynamic = ['_debugRenderUserData'] + + def __init__(self, source, page, user_data): + if source.app is not page.app: + raise Exception("The given source and page don't belong to " + "the same application.") + self._source = source + self._page = page + self._user_data = user_data + + def __getattr__(self, name): + if self._user_data is not None: + return self._user_data[name] + raise AttributeError() + + def __getitem__(self, name): + if self._user_data is not None: + return self._user_data[name] + raise KeyError() + + def _debugRenderUserData(self): + if self._user_data: + return self._user_data.keys() + return [] + + +class CompositeDataProvider(object): + def __init__(self, providers): + self._providers = providers + + def __getattr__(self, name): + for p in self._providers: + try: + return getattr(p, name) + except AttributeError: + pass + raise AttributeError() + + +class IteratorDataProvider(DataProvider): + PROVIDER_NAME = 'iterator' + + debug_render_doc = """Provides a list of pages.""" + + def __init__(self, source, page, user_data): + super(IteratorDataProvider, self).__init__(source, page, user_data) + self._pages = PageIterator(source, current_page=page) + self._pages._iter_event += self._onIteration + self._ctx_set = False + + def __len__(self): + return len(self._pages) + + def __getitem__(self, key): + return self._pages[key] + + def __iter__(self): + return iter(self._pages) + + def _onIteration(self): + if not self._ctx_set: + eis = self._page.app.env.exec_info_stack + eis.current_page_info.render_ctx.used_source_names.add( + self._source.name) + self._ctx_set = True + + +class BlogDataProvider(DataProvider): + PROVIDER_NAME = 'blog' + + debug_render_doc = """Provides a list of blog posts and yearly/monthly + archives.""" + debug_render = ['posts', 'years', 'months'] + debug_render_dynamic = (['_debugRenderTaxonomies'] + + DataProvider.debug_render_dynamic) + + def __init__(self, source, page, user_data): + super(BlogDataProvider, self).__init__(source, page, user_data) + self._yearly = None + self._monthly = None + self._taxonomies = {} + self._ctx_set = False + + def __getattr__(self, name): + if self._source.app.getTaxonomy(name) is not None: + return self._buildTaxonomy(name) + return super(BlogDataProvider, self).__getattr__(name) + + @property + def posts(self): + it = PageIterator(self._source, current_page=self._page) + it._iter_event += self._onIteration + return it + + @property + def years(self): + return self._buildYearlyArchive() + + @property + def months(self): + return self._buildMonthlyArchive() + + def _debugRenderTaxonomies(self): + return [t.name for t in self._source.app.taxonomies] + + def _buildYearlyArchive(self): + if self._yearly is not None: + return self._yearly + + self._yearly = [] + for fac in self._source.getPageFactories(): + post = fac.buildPage() + year = post.datetime.strftime('%Y') + + posts_this_year = next( + itertools.ifilter(lambda y: y.name == year, self._yearly), + None) + if posts_this_year is None: + timestamp = time.mktime( + (post.datetime.year, 1, 1, 0, 0, 0, 0, 0, -1)) + posts_this_year = BlogArchiveEntry(self._page, year, timestamp) + self._yearly.append(posts_this_year) 
+ + posts_this_year._data_source.append(post) + self._yearly = sorted(self._yearly, + key=lambda e: e.timestamp, + reverse=True) + self._onIteration() + return self._yearly + + def _buildMonthlyArchive(self): + if self._monthly is not None: + return self._monthly + + self._monthly = [] + for fac in self._source.getPageFactories(): + post = fac.buildPage() + month = post.datetime.strftime('%B %Y') + + posts_this_month = next( + itertools.ifilter(lambda m: m.name == month, self._monthly), + None) + if posts_this_month is None: + timestamp = time.mktime( + (post.datetime.year, post.datetime.month, 1, + 0, 0, 0, 0, 0, -1)) + posts_this_month = BlogArchiveEntry(self._page, month, timestamp) + self._monthly.append(posts_this_month) + + posts_this_month._data_source.append(post) + self._monthly = sorted(self._monthly, + key=lambda e: e.timestamp, + reverse=True) + self._onIteration() + return self._monthly + + def _buildTaxonomy(self, tax_name): + if tax_name in self._taxonomies: + return self._taxonomies[tax_name] + + posts_by_tax_value = {} + for fac in self._source.getPageFactories(): + post = fac.buildPage() + tax_values = post.config.get(tax_name) + if not isinstance(tax_values, list): + tax_values = [tax_values] + for val in tax_values: + posts_by_tax_value.setdefault(val, []) + posts_by_tax_value[val].append(post) + + entries = [] + for value, ds in posts_by_tax_value.iteritems(): + source = ArraySource(self._page.app, ds) + entries.append(BlogTaxonomyEntry(self._page, source, value)) + self._taxonomies[tax_name] = sorted(entries, key=lambda k: k.name) + + self._onIteration() + return self._taxonomies[tax_name] + + def _onIteration(self): + if not self._ctx_set: + eis = self._page.app.env.exec_info_stack + eis.current_page_info.render_ctx.used_source_names.add( + self._source.name) + self._ctx_set = True + + +class BlogArchiveEntry(object): + def __init__(self, page, name, timestamp): + self.name = name + self.timestamp = timestamp + self._page = page + self._data_source = [] + self._iterator = None + + def __str__(self): + return self.name + + @property + def posts(self): + self._load() + self._iterator.reset() + return self._iterator + + def _load(self): + if self._iterator is not None: + return + source = ArraySource(self._page.app, self._data_source) + self._iterator = PageIterator(source, current_page=self._page) + + +class BlogTaxonomyEntry(object): + def __init__(self, page, source, property_value): + self._page = page + self._source = source + self._property_value = property_value + self._iterator = None + + def __str__(self): + return self._property_value + + @property + def name(self): + return self._property_value + + @property + def posts(self): + self._load() + self._iterator.reset() + return self._iterator + + @property + def post_count(self): + return self._source.page_count + + def _load(self): + if self._iterator is not None: + return + + self._iterator = PageIterator(self._source, self._page) +
--- a/piecrust/decorators.py Wed Dec 25 22:16:46 2013 -0800 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,17 +0,0 @@ -import functools - - -def lazy(f): - @functools.wraps(f) - def lazy_wrapper(*args, **kwargs): - if f.__lazyresult__ is None: - f.__lazyresult__ = f(*args, **kwargs) - return f.__lazyresult__ - - f.__lazyresult__ = None - return lazy_wrapper - - -def lazy_property(f): - return property(lazy(f)) -
--- a/piecrust/environment.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/environment.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,51 +1,77 @@ +import time import logging -from decorators import lazy_property +import threading +import repoze.lru logger = logging.getLogger(__name__) -class PageRepository(object): - pass +class MemCache(object): + def __init__(self, size=2048): + self.cache = repoze.lru.LRUCache(size) + self.lock = threading.RLock() + + def get(self, key, item_maker): + item = self.cache.get(key) + if item is None: + logger.debug("Acquiring lock for: %s" % key) + with self.lock: + item = self.cache.get(key) + if item is None: + logger.debug("'%s' not found in cache, must build." % key) + item = item_maker() + self.cache.put(key, item) + return item + + +PHASE_PAGE_PARSING = 0 +PHASE_PAGE_FORMATTING = 1 +PHASE_PAGE_RENDERING = 2 -class ExecutionContext(object): - pass +class ExecutionInfo(object): + def __init__(self, page, phase, render_ctx): + self.page = page + self.phase = phase + self.render_ctx = render_ctx + self.was_cache_valid = False + self.start_time = time.clock() + + +class ExecutionInfoStack(threading.local): + def __init__(self): + self._page_stack = [] + + @property + def current_page_info(self): + if len(self._page_stack) == 0: + return None + return self._page_stack[-1] + + @property + def is_main_page(self): + return len(self._page_stack) == 1 + + def pushPage(self, page, phase, render_ctx): + self._page_stack.append(ExecutionInfo(page, phase, render_ctx)) + + def popPage(self): + del self._page_stack[-1] class Environment(object): def __init__(self): - self.page_repository = PageRepository() - self._execution_ctx = None + self.start_time = time.clock() + self.exec_info_stack = ExecutionInfoStack() + self.was_cache_cleaned = False + self.page_repository = MemCache() + self.rendered_segments_repository = MemCache() + self.base_asset_url_format = '%site_root%%uri%' def initialize(self, app): pass - @lazy_property - def pages(self): - logger.debug("Loading pages...") - return self._loadPages() - - @lazy_property - def posts(self): - logger.debug("Loading posts...") - return self._loadPosts() - - @lazy_property - def file_system(self): - return None - - def get_execution_context(self, auto_create=False): - if auto_create and self._execution_ctx is None: - self._execution_ctx = ExecutionContext() - return self._execution_ctx - - def _loadPages(self): - raise NotImplementedError() - - def _loadPosts(self): - raise NotImplementedError() - class StandardEnvironment(Environment): def __init__(self):
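The new `MemCache` wraps `repoze.lru` with a lock so concurrent baking threads only build a missing item once. A small usage sketch, assuming `repoze.lru` is installed so the module imports:

    from piecrust.environment import MemCache

    cache = MemCache(size=2048)

    def make_item():
        # Stand-in for expensive work such as parsing and rendering a page.
        return {'title': 'Hello'}

    # The first call builds the item under the lock; later calls, from any
    # thread, return the cached value without calling `make_item` again.
    item = cache.get('pages/hello.md', make_item)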
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/events.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,20 @@ + +class Event(object): + def __init__(self): + self._handlers = [] + + def __iadd__(self, handler): + self._handlers.append(handler) + return self + + def __isub__(self, handler): + self._handlers.remove(handler) + return self + + def fire(self, *args, **kwargs): + # Make a copy of the handlers list in case some handler removes + # itself while executing. + handlers = list(self._handlers) + for handler in handlers: + handler(*args, **kwargs) +
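`Event` provides a minimal publish/subscribe mechanism, with `+=` and `-=` to manage handlers. A short usage sketch with a hypothetical publisher:

    from piecrust.events import Event

    class Publisher(object):
        def __init__(self):
            self.on_done = Event()

        def do_work(self, path):
            # ...do the actual work, then notify subscribers...
            self.on_done.fire(path)

    def log_done(path):
        print("done: %s" % path)

    p = Publisher()
    p.on_done += log_done          # subscribe
    p.do_work('_content/pages/foo.md')
    p.on_done -= log_done          # unsubscribe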
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/formatting/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,19 @@ + +PRIORITY_FIRST = -1 +PRIORITY_NORMAL = 0 +PRIORITY_LAST = 1 + + +class Formatter(object): + FORMAT_NAMES = None + OUTPUT_FORMAT = None + + def __init__(self): + self.priority = PRIORITY_NORMAL + + def initialize(self, app): + self.app = app + + def render(self, format_name, txt): + raise NotImplementedError() +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/formatting/markdownformatter.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,11 @@ +from markdown import markdown +from piecrust.formatting.base import Formatter + + +class MarkdownFormatter(Formatter): + FORMAT_NAMES = ['markdown', 'mdown', 'md'] + OUTPUT_FORMAT = 'html' + + def render(self, format_name, txt): + return markdown(txt) +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/main.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,152 @@ +import sys +import time +import os.path +import logging +import argparse +import colorama +from piecrust.app import PieCrust, PieCrustConfiguration, APP_VERSION +from piecrust.chefutil import format_timed +from piecrust.environment import StandardEnvironment +from piecrust.pathutil import SiteNotFoundError, find_app_root +from piecrust.plugins.base import PluginLoader + + +logger = logging.getLogger(__name__) + + +class ColoredFormatter(logging.Formatter): + COLORS = { + 'DEBUG': colorama.Fore.BLACK + colorama.Style.BRIGHT, + 'INFO': '', + 'WARNING': colorama.Fore.YELLOW, + 'ERROR': colorama.Fore.RED, + 'CRITICAL': colorama.Back.RED + colorama.Fore.WHITE + } + + def __init__(self, fmt=None, datefmt=None): + super(ColoredFormatter, self).__init__(fmt, datefmt) + + def format(self, record): + color = self.COLORS.get(record.levelname) + res = super(ColoredFormatter, self).format(record) + if color: + res = color + res + colorama.Style.RESET_ALL + return res + + +class NullPieCrust: + def __init__(self): + self.root_dir = None + self.debug = False + self.templates_dirs = [] + self.plugins_dirs = [] + self.theme_dir = None + self.cache_dir = None + self.config = PieCrustConfiguration() + self.plugin_loader = PluginLoader(self) + self.env = StandardEnvironment() + self.env.initialize(self) + + +def main(): + start_time = time.clock() + + # We need to parse some arguments before we can build the actual argument + # parser, because it can affect which plugins will be loaded. Also, log- + # related arguments must be parsed first because we want to log everything + # from the beginning. + root = None + cache = True + debug = False + quiet = False + log_file = None + config_variant = None + i = 1 + while i < len(sys.argv): + arg = sys.argv[i] + if arg.startswith('--root='): + root = os.path.expanduser(arg[len('--root='):]) + elif arg == '--root': + root = sys.argv[i + 1] + ++i + elif arg.startswith('--config='): + config_variant = arg[len('--config='):] + elif arg == '--config': + config_variant = sys.argv[i + 1] + ++i + elif arg == '--log': + log_file = sys.argv[i + 1] + ++i + elif arg == '--no-cache': + cache = False + elif arg == '--debug': + debug = True + elif arg == '--quiet': + quiet = True + + if arg[0] != '-': + break + + i = i + 1 + + # Setup the logger. + if debug and quiet: + raise Exception("You can't specify both --debug and --quiet.") + + colorama.init() + root_logger = logging.getLogger() + root_logger.setLevel(logging.INFO) + log_handler = logging.StreamHandler(sys.stdout) + if debug: + root_logger.setLevel(logging.DEBUG) + log_handler.setFormatter(ColoredFormatter("[%(name)s] %(message)s")) + else: + if quiet: + root_logger.setLevel(logging.WARNING) + log_handler.setFormatter(ColoredFormatter("%(message)s")) + root_logger.addHandler(log_handler) + if log_file: + root_logger.addHandler(logging.FileHandler(log_file)) + + # Setup the app. + if root is None: + root = find_app_root() + + if not root: + app = NullPieCrust() + else: + app = PieCrust(root, cache=cache) + + # Handle a configuration variant. + if config_variant is not None: + if not root: + raise SiteNotFoundError() + app.config.applyVariant('variants/' + config_variant) + + # Setup the arg parser. 
+ parser = argparse.ArgumentParser( + description="The PieCrust chef manages your website.") + parser.add_argument('--version', action='version', version=('%(prog)s ' + APP_VERSION)) + parser.add_argument('--root', help="The root directory of the website.") + parser.add_argument('--config', help="The configuration variant to use for this command.") + parser.add_argument('--debug', help="Show debug information.", action='store_true') + parser.add_argument('--no-cache', help="When applicable, disable caching.", action='store_true') + parser.add_argument('--quiet', help="Print only important information.", action='store_true') + parser.add_argument('--log', help="Send log messages to the specified file.") + + commands = sorted(app.plugin_loader.getCommands(), + lambda a, b: cmp(a.name, b.name)) + subparsers = parser.add_subparsers() + for c in commands: + p = subparsers.add_parser(c.name, help=c.description) + c.setupParser(p, app) + p.set_defaults(func=c._runFromChef) + + # Parse the command line. + result = parser.parse_args() + logger.debug(format_timed(start_time, 'initialized PieCrust')) + + # Run the command! + exit_code = result.func(app, result) + return exit_code +
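Every chef command is a `ChefCommand` whose `setupParser` gets its own argparse subparser, as shown above. A hypothetical plugin command wired the same way as the built-ins (the names here are illustrative only):

    from piecrust.commands.base import ChefCommand

    class HelloCommand(ChefCommand):
        def __init__(self):
            super(HelloCommand, self).__init__()
            self.name = 'hello'
            self.description = "Says hello."

        def setupParser(self, parser, app):
            parser.add_argument('who', nargs='?', default='world')

        def run(self, ctx):
            # ctx.app is the PieCrust application, ctx.args holds the
            # parsed arguments for this sub-command.
            print("Hello, %s!" % ctx.args.who)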
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/mime.types Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,617 @@ +application/activemessage +application/andrew-inset ez +application/applefile +application/atomicmail +application/batch-SMTP +application/beep+xml +application/cals-1840 +application/commonground +application/cu-seeme csm cu +application/cybercash +application/dca-rft +application/dec-dx +application/dsptype tsp +application/dvcs +application/edi-consent +application/edifact +application/edi-x12 +application/eshop +application/font-tdpfr +application/futuresplash spl +application/ghostview +application/hta hta +application/http +application/hyperstudio +application/iges +application/index +application/index.cmd +application/index.obj +application/index.response +application/index.vnd +application/iotp +application/ipp +application/isup +application/mac-compactpro cpt +application/marc +application/mac-binhex40 hqx +application/macwriteii +application/mathematica nb +application/mathematica-old +application/msaccess mdb +application/msword doc dot +application/news-message-id +application/news-transmission +application/octet-stream bin +application/ocsp-request +application/ocsp-response +application/oda oda +application/ogg ogg +application/parityfec +application/pics-rules prf +application/pgp-encrypted +application/pgp-keys key +application/pdf pdf +application/pgp-signature pgp +application/pkcs10 +application/pkcs7-mime +application/pkcs7-signature +application/pkix-cert +application/pkixcmp +application/pkix-crl +application/postscript ps ai eps +application/prs.alvestrand.titrax-sheet +application/prs.cww +application/prs.nprend +application/qsig +application/riscos +application/remote-printing +application/rss+xml rss +application/rtf rtf +application/sdp +application/set-payment +application/set-payment-initiation +application/set-registration +application/set-registration-initiation +application/sgml +application/sgml-open-catalog +application/sieve +application/slate +application/smil smi smil +application/timestamp-query +application/timestamp-reply +application/vemmi +application/whoispp-query +application/whoispp-response +application/wita +application/wordperfect5.1 wp5 +application/x400-bp +application/xhtml+xml xht xhtml +application/xml +application/xml-dtd +application/xml-external-parsed-entity +application/zip zip +application/vnd.3M.Post-it-Notes +application/vnd.accpac.simply.aso +application/vnd.accpac.simply.imp +application/vnd.acucobol +application/vnd.aether.imp +application/vnd.anser-web-certificate-issue-initiation +application/vnd.anser-web-funds-transfer-initiation +application/vnd.audiograph +application/vnd.bmi +application/vnd.businessobjects +application/vnd.canon-cpdl +application/vnd.canon-lips +application/vnd.cinderella cdy +application/vnd.claymore +application/vnd.commerce-battelle +application/vnd.commonspace +application/vnd.comsocaller +application/vnd.contact.cmsg +application/vnd.cosmocaller +application/vnd.ctc-posml +application/vnd.cups-postscript +application/vnd.cups-raster +application/vnd.cups-raw +application/vnd.cybank +application/vnd.dna +application/vnd.dpgraph +application/vnd.dxr +application/vnd.ecdis-update +application/vnd.ecowin.chart +application/vnd.ecowin.filerequest +application/vnd.ecowin.fileupdate +application/vnd.ecowin.series +application/vnd.ecowin.seriesrequest +application/vnd.ecowin.seriesupdate +application/vnd.enliven +application/vnd.epson.esf +application/vnd.epson.msf 
+application/vnd.epson.quickanime +application/vnd.epson.salt +application/vnd.epson.ssf +application/vnd.ericsson.quickcall +application/vnd.eudora.data +application/vnd.fdf +application/vnd.ffsns +application/vnd.flographit +application/vnd.framemaker +application/vnd.fsc.weblaunch +application/vnd.fujitsu.oasys +application/vnd.fujitsu.oasys2 +application/vnd.fujitsu.oasys3 +application/vnd.fujitsu.oasysgp +application/vnd.fujitsu.oasysprs +application/vnd.fujixerox.ddd +application/vnd.fujixerox.docuworks +application/vnd.fujixerox.docuworks.binder +application/vnd.fut-misnet +application/vnd.grafeq +application/vnd.groove-account +application/vnd.groove-identity-message +application/vnd.groove-injector +application/vnd.groove-tool-message +application/vnd.groove-tool-template +application/vnd.groove-vcard +application/vnd.hhe.lesson-player +application/vnd.hp-HPGL +application/vnd.hp-PCL +application/vnd.hp-PCLXL +application/vnd.hp-hpid +application/vnd.hp-hps +application/vnd.httphone +application/vnd.hzn-3d-crossword +application/vnd.ibm.MiniPay +application/vnd.ibm.afplinedata +application/vnd.ibm.modcap +application/vnd.informix-visionary +application/vnd.intercon.formnet +application/vnd.intertrust.digibox +application/vnd.intertrust.nncp +application/vnd.intu.qbo +application/vnd.intu.qfx +application/vnd.irepository.package+xml +application/vnd.is-xpr +application/vnd.japannet-directory-service +application/vnd.japannet-jpnstore-wakeup +application/vnd.japannet-payment-wakeup +application/vnd.japannet-registration +application/vnd.japannet-registration-wakeup +application/vnd.japannet-setstore-wakeup +application/vnd.japannet-verification +application/vnd.japannet-verification-wakeup +application/vnd.koan +application/vnd.lotus-1-2-3 +application/vnd.lotus-approach +application/vnd.lotus-freelance +application/vnd.lotus-notes +application/vnd.lotus-organizer +application/vnd.lotus-screencam +application/vnd.lotus-wordpro +application/vnd.mcd +application/vnd.mediastation.cdkey +application/vnd.meridian-slingshot +application/vnd.mif mif +application/vnd.minisoft-hp3000-save +application/vnd.mitsubishi.misty-guard.trustweb +application/vnd.mobius.daf +application/vnd.mobius.dis +application/vnd.mobius.msl +application/vnd.mobius.plc +application/vnd.mobius.txf +application/vnd.motorola.flexsuite +application/vnd.motorola.flexsuite.adsi +application/vnd.motorola.flexsuite.fis +application/vnd.motorola.flexsuite.gotap +application/vnd.motorola.flexsuite.kmr +application/vnd.motorola.flexsuite.ttc +application/vnd.motorola.flexsuite.wem +application/vnd.mozilla.xul+xml +application/vnd.ms-artgalry +application/vnd.ms-asf +application/vnd.ms-excel xls xlb +application/vnd.ms-lrm +application/vnd.ms-pki.seccat cat +application/vnd.ms-pki.stl stl +application/vnd.ms-powerpoint ppt pps pot +application/vnd.ms-project +application/vnd.ms-tnef +application/vnd.ms-works +application/vnd.mseq +application/vnd.msign +application/vnd.music-niff +application/vnd.musician +application/vnd.netfpx +application/vnd.noblenet-directory +application/vnd.noblenet-sealer +application/vnd.noblenet-web +application/vnd.novadigm.EDM +application/vnd.novadigm.EDX +application/vnd.novadigm.EXT +application/vnd.osa.netdeploy +application/vnd.palm +application/vnd.pg.format +application/vnd.pg.osasli +application/vnd.powerbuilder6 +application/vnd.powerbuilder6-s +application/vnd.powerbuilder7 +application/vnd.powerbuilder7-s +application/vnd.powerbuilder75 +application/vnd.powerbuilder75-s 
+application/vnd.previewsystems.box +application/vnd.publishare-delta-tree +application/vnd.pvi.ptid1 +application/vnd.pwg-xhtml-print+xml +application/vnd.rapid +application/vnd.s3sms +application/vnd.seemail +application/vnd.shana.informed.formdata +application/vnd.shana.informed.formtemplate +application/vnd.shana.informed.interchange +application/vnd.shana.informed.package +application/vnd.sss-cod +application/vnd.sss-dtf +application/vnd.sss-ntf +application/vnd.stardivision.calc sdc +application/vnd.stardivision.draw sda +application/vnd.stardivision.impress sdd sdp +application/vnd.stardivision.math smf +application/vnd.stardivision.writer sdw vor +application/vnd.stardivision.writer-global sgl +application/vnd.street-stream +application/vnd.sun.xml.calc sxc +application/vnd.sun.xml.calc.template stc +application/vnd.sun.xml.draw sxd +application/vnd.sun.xml.draw.template std +application/vnd.sun.xml.impress sxi +application/vnd.sun.xml.impress.template sti +application/vnd.sun.xml.math sxm +application/vnd.sun.xml.writer sxw +application/vnd.sun.xml.writer.global sxg +application/vnd.sun.xml.writer.template stw +application/vnd.svd +application/vnd.swiftview-ics +application/vnd.triscape.mxs +application/vnd.trueapp +application/vnd.truedoc +application/vnd.tve-trigger +application/vnd.ufdl +application/vnd.uplanet.alert +application/vnd.uplanet.alert-wbxml +application/vnd.uplanet.bearer-choice +application/vnd.uplanet.bearer-choice-wbxml +application/vnd.uplanet.cacheop +application/vnd.uplanet.cacheop-wbxml +application/vnd.uplanet.channel +application/vnd.uplanet.channel-wbxml +application/vnd.uplanet.list +application/vnd.uplanet.list-wbxml +application/vnd.uplanet.listcmd +application/vnd.uplanet.listcmd-wbxml +application/vnd.uplanet.signal +application/vnd.vcx +application/vnd.vectorworks +application/vnd.vidsoft.vidconference +application/vnd.visio +application/vnd.vividence.scriptfile +application/vnd.wap.sic +application/vnd.wap.slc +application/vnd.wap.wbxml wbxml +application/vnd.wap.wmlc wmlc +application/vnd.wap.wmlscriptc wmlsc +application/vnd.webturbo +application/vnd.wrq-hp3000-labelled +application/vnd.wt.stf +application/vnd.xara +application/vnd.xfdl +application/vnd.yellowriver-custom-menu +application/x-123 wk +application/x-apple-diskimage dmg +application/x-bcpio bcpio +application/x-cdf cdf +application/x-cdlink vcd +application/x-chess-pgn pgn +application/x-core +application/x-cpio cpio +application/x-csh csh +application/x-debian-package deb +application/x-director dcr dir dxr +application/x-doom wad +application/x-dms dms +application/x-dvi dvi +application/x-executable +application/x-font pfa pfb gsf pcf pcf.Z +application/x-futuresplash spl +application/x-gnumeric gnumeric +application/x-go-sgf sgf +application/x-graphing-calculator gcf +application/x-gtar gtar tgz taz +application/x-hdf hdf +application/x-httpd-php phtml pht php +application/x-httpd-php-source phps +application/x-httpd-php3 php3 +application/x-httpd-php3-preprocessed php3p +application/x-httpd-php4 php4 +application/x-ica ica +application/x-internet-signup ins isp +application/x-iphone iii +application/x-java-applet +application/x-java-archive jar +application/x-java-bean +application/x-java-jnlp-file jnlp +application/x-java-serialized-object ser +application/x-java-vm class +application/x-javascript js +application/x-kdelnk +application/x-kchart chrt +application/x-killustrator kil +application/x-kpresenter kpr kpt +application/x-koan skp skd skt skm +application/x-kspread ksp 
+application/x-kword kwd kwt +application/x-latex latex +application/x-lha lha +application/x-lzh lzh +application/x-lzx lzx +application/x-maker frm maker frame fm fb book fbdoc +application/x-mif mif +application/x-ms-wmz wmz +application/x-ms-wmd wmd +application/x-msdos-program com exe bat dll +application/x-msi msi +application/x-netcdf nc +application/x-ns-proxy-autoconfig pac +application/x-object o +application/x-oz-application oza +application/x-perl pl pm +application/x-pkcs7-certreqresp p7r +application/x-pkcs7-crl crl +application/x-quicktimeplayer qtl +application/x-redhat-package-manager rpm +application/x-rx +application/x-sh +application/x-shar shar +application/x-shellscript +application/x-shockwave-flash swf swfl +application/x-sh sh +application/x-stuffit sit +application/x-sv4cpio sv4cpio +application/x-sv4crc sv4crc +application/x-tar tar +application/x-tcl tcl +application/x-tex tex +application/x-tex-gf gf +application/x-tex-pk pk +application/x-texinfo texinfo texi +application/x-trash ~ % bak old sik +application/x-troff t tr roff +application/x-troff-man man +application/x-troff-me me +application/x-troff-ms ms +application/x-ustar ustar +application/x-wais-source src +application/x-wingz wz +application/x-x509-ca-cert crt +application/x-xfig fig + +audio/32kadpcm +#audio/aiff aif aifc aiff +audio/basic au snd +audio/g.722.1 +audio/l16 +audio/midi mid midi kar +audio/mp4a-latm +audio/mpa-robust +audio/mpeg mpga mpega mp2 mp3 +audio/mpegurl m3u +audio/parityfec +audio/prs.sid sid +audio/telephone-event +audio/tone +#audio/wav wav +audio/vnd.cisco.nse +audio/vnd.cns.anp1 +audio/vnd.cns.inf1 +audio/vnd.digital-winds +audio/vnd.everad.plj +audio/vnd.lucent.voice +audio/vnd.nortel.vbk +audio/vnd.nuera.ecelp4800 +audio/vnd.nuera.ecelp7470 +audio/vnd.nuera.ecelp9600 +audio/vnd.octel.sbc +audio/vnd.qcelp +audio/vnd.rhetorex.32kadpcm +audio/vnd.vmx.cvsd +audio/x-aiff aif aiff aifc +audio/x-gsm gsm +audio/x-mpegurl m3u +audio/x-ms-wma wma +audio/x-ms-wax wax +audio/x-pn-realaudio-plugin rpm +audio/x-pn-realaudio ra rm ram +audio/x-realaudio ra +audio/x-scpls pls +audio/x-sd2 sd2 +audio/x-wav wav + +chemical/x-pdb pdb +chemical/x-xyz xyz + +image/bmp bmp +image/cgm +image/g3fax +image/gif gif +image/ief ief +image/jpeg jpeg jpg jpe +image/naplps +image/pcx pcx +image/png png +image/prs.btif +image/prs.pti +image/svg+xml svg svgz +image/tiff tiff tif +image/vnd.cns.inf2 +image/vnd.dwg +image/vnd.dxf +image/vnd.fastbidsheet +image/vnd.fpx +image/vnd.fst +image/vnd.fujixerox.edmics-mmr +image/vnd.fujixerox.edmics-rlc +image/vnd.mix +image/vnd.net-fpx +image/vnd.svf +image/vnd.wap.wbmp wbmp +image/vnd.xiff +image/x-cmu-raster ras +image/x-coreldraw cdr +image/x-coreldrawpattern pat +image/x-coreldrawtemplate cdt +image/x-corelphotopaint cpt +image/x-djvu djvu djv +image/x-icon ico +image/x-jg art +image/x-jng jng +image/x-ms-bmp bmp +image/x-photoshop psd +image/x-portable-anymap pnm +image/x-portable-bitmap pbm +image/x-portable-graymap pgm +image/x-portable-pixmap ppm +image/x-rgb rgb +image/x-xbitmap xbm +image/x-xpixmap xpm +image/x-xwindowdump xwd + +inode/chardevice +inode/blockdevice +inode/directory-locked +inode/directory +inode/fifo +inode/socket + +message/delivery-status +message/disposition-notification +message/external-body +message/http +message/s-http +message/news +message/partial +message/rfc822 + +model/iges igs iges +model/mesh msh mesh silo +model/vnd.dwf +model/vnd.flatland.3dml +model/vnd.gdl +model/vnd.gs-gdl +model/vnd.gtw +model/vnd.mts +model/vnd.vtu 
+model/vrml wrl vrml + +multipart/alternative +multipart/appledouble +multipart/byteranges +multipart/digest +multipart/encrypted +multipart/form-data +multipart/header-set +multipart/mixed +multipart/parallel +multipart/related +multipart/report +multipart/signed +multipart/voice-message + +text/calendar +text/comma-separated-values csv +text/css css +text/directory +text/english +text/enriched +text/h323 323 +text/html htm html +text/iuls uls +text/mathml mml +text/parityfec +text/plain asc txt text diff +text/prs.lines.tag +text/rfc822-headers +text/richtext rtx +text/rtf rtf +text/scriptlet sct wsc +text/t140 +text/texmacs tm ts +text/tab-separated-values tsv +text/uri-list +text/vnd.abc +text/vnd.curl +text/vnd.DMClientScript +text/vnd.flatland.3dml +text/vnd.fly +text/vnd.fmi.flexstor +text/vnd.in3d.3dml +text/vnd.in3d.spot +text/vnd.IPTC.NewsML +text/vnd.IPTC.NITF +text/vnd.latex-z +text/vnd.motorola.reflex +text/vnd.ms-mediapackage +text/vnd.wap.si +text/vnd.wap.sl +text/vnd.wap.wml wml +text/vnd.wap.wmlscript wmls +text/xml xml xsl +text/x-c++hdr h++ hpp hxx hh +text/x-c++src c++ cpp cxx cc +text/x-chdr h +text/x-crontab +text/x-csh csh +text/x-csrc c +text/x-java java +text/x-makefile +text/xml-external-parsed-entity +text/x-moc moc +text/x-pascal p pas +text/x-pcs-gcd gcd +text/x-server-parsed-html shtml +text/x-setext etx +text/x-sh sh +text/x-tcl tcl tk +text/x-tex tex ltx sty cls +text/x-vcalendar vcs +text/x-vcard vcf + +#video/avi avi +video/dl dl +video/fli fli +video/gl gl +video/mpeg mpeg mpg mpe +video/quicktime qt mov +video/mp4v-es +video/parityfec +video/pointer +video/vnd.fvt +video/vnd.motorola.video +video/vnd.motorola.videop +video/vnd.mpegurl mxu +video/vnd.mts +video/vnd.nokia.interleaved-multimedia +video/vnd.vivo +video/x-dv dif dv +video/x-la-asf lsf lsx +video/x-mng mng +video/x-ms-asf asf asx +video/x-ms-wm wm +video/x-ms-wmv wmv +video/x-ms-wmx wmx +video/x-ms-wvx wvx +video/x-msvideo avi +video/x-sgi-movie movie + +x-conference/x-cooltalk ice + +x-world/x-vrml vrm vrml wrl +
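The `mime.types` list above follows the classic Apache format: one MIME type per line, optionally followed by the file extensions mapped to it, with `#` marking commented-out entries. How PieCrust consumes this file is not part of this changeset (presumably the development server uses it to pick a `Content-Type`), but as an illustration, a table like this folds into an extension lookup with a few lines of Python:

    # Illustrative only -- not PieCrust's own loader. Parses a mime.types-style
    # file into an {extension: mime_type} dictionary.
    def load_mimetype_map(path):
        mapping = {}
        with open(path, 'r') as fp:
            for line in fp:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                parts = line.split()
                mime_type, extensions = parts[0], parts[1:]
                for ext in extensions:
                    mapping[ext] = mime_type
        return mapping

    # load_mimetype_map('piecrust/mime.types')['css']  ->  'text/css'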
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/page.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,284 @@ +import re +import sys +import json +import codecs +import os.path +import hashlib +import logging +import datetime +import dateutil.parser +import threading +from piecrust.configuration import (Configuration, ConfigurationError, + parse_config_header) +from piecrust.environment import PHASE_PAGE_PARSING + + +logger = logging.getLogger(__name__) + + +class PageConfiguration(Configuration): + def __init__(self, values=None, validate=True): + super(PageConfiguration, self).__init__(values, validate) + + def _validateAll(self, values): + values.setdefault('title', 'Untitled Page') + values.setdefault('content_type', 'html') + ppp = values.get('posts_per_page') + if ppp is not None: + values.setdefault('items_per_page', ppp) + pf = values.get('posts_filters') + if pf is not None: + values.setdefault('items_filters', pf) + return values + + +class Page(object): + def __init__(self, source, source_metadata, rel_path): + self.source = source + self.source_metadata = source_metadata + self.rel_path = rel_path + self.path = source.resolveRef(rel_path) + self._config = None + self._raw_content = None + self._datetime = None + + @property + def app(self): + return self.source.app + + @property + def ref_spec(self): + return '%s:%s' % (self.source.name, self.rel_path) + + @property + def config(self): + self._load() + return self._config + + @property + def raw_content(self): + self._load() + return self._raw_content + + @property + def datetime(self): + if self._datetime is None: + if 'datetime' in self.source_metadata: + self._datetime = self.source_metadata['datetime'] + elif 'date' in self.source_metadata: + page_date = self.source_metadata['date'] + timestr = self.config.get('time') + if timestr is not None: + try: + time_dt = dateutil.parser.parse(timestr) + except Exception as e: + raise ConfigurationError( + "Invalid time '%s' in page: %s" % + (timestr, self.path), e) + page_time = datetime.time(time_dt.hour, time_dt.minute, time_dt.second) + else: + page_time = datetime.time(0, 0, 0) + self._datetime = datetime.datetime.combine(page_date, page_time) + else: + self._datetime = datetime.datetime.fromtimestamp(os.path.getmtime(self.path)) + return self._datetime + + @datetime.setter + def datetime(self, value): + self._datetime = value + + def getSegment(self, name='content'): + return self.raw_content[name] + + def _load(self): + if self._config is not None: + return + + eis = self.app.env.exec_info_stack + eis.pushPage(self, PHASE_PAGE_PARSING, None) + try: + config, content = load_page(self.app, self.path) + self._config = config + self._raw_content = content + finally: + eis.popPage() + + +class PageLoadingError(Exception): + def __init__(self, path, inner=None): + super(PageLoadingError, self).__init__( + "Error loading page: %s" % path, + inner) + + +class ContentSegment(object): + debug_render_func = 'debug_render' + + def __init__(self, content=None, fmt=None): + self.parts = [] + if content is not None: + self.parts.append(ContentSegmentPart(content, fmt)) + + def debug_render(self): + return '\n'.join([p.content for p in self.parts]) + + +class ContentSegmentPart(object): + def __init__(self, content, fmt=None, line=-1): + self.content = content + self.fmt = fmt + self.line = line + + def __str__(self): + return '%s [%s]' % (self.content, self.fmt or '<default>') + + +def json_load_segments(data): + segments = {} + for key, seg_data in data.iteritems(): + seg = 
ContentSegment() + for p_data in seg_data: + part = ContentSegmentPart(p_data['c'], p_data['f'], p_data['l']) + seg.parts.append(part) + segments[key] = seg + return segments + + +def json_save_segments(segments): + data = {} + for key, seg in segments.iteritems(): + seg_data = [] + for part in seg.parts: + p_data = {'c': part.content, 'f': part.fmt, 'l': part.line} + seg_data.append(p_data) + data[key] = seg_data + return data + + +def load_page(app, path): + try: + return _do_load_page(app, path) + except Exception as e: + logger.exception("Error loading page: %s" % + os.path.relpath(path, app.root_dir)) + _, __, traceback = sys.exc_info() + raise PageLoadingError(path, e), None, traceback + + +def _do_load_page(app, path): + exec_info = app.env.exec_info_stack.current_page_info + if exec_info is None: + raise Exception("Loading page '%s' but not execution context has " + "been created for it." % path) + + # Check the cache first. + cache = app.cache.getCache('pages') + cache_path = "%s.json" % hashlib.md5(path).hexdigest() + page_time = os.path.getmtime(path) + if cache.isValid(cache_path, page_time): + exec_info.was_cache_valid = True + cache_data = json.loads(cache.read(cache_path)) + config = PageConfiguration(values=cache_data['config'], + validate=False) + content = json_load_segments(cache_data['content']) + return config, content + + # Nope, load the page from the source file. + exec_info.was_cache_valid = False + logger.debug("Loading page configuration from: %s" % path) + with codecs.open(path, 'r', 'utf-8') as fp: + raw = fp.read() + header, offset = parse_config_header(raw) + + if not 'format' in header: + auto_formats = app.config.get('site/auto_formats') + name, ext = os.path.splitext(path) + header['format'] = auto_formats.get(ext, None) + + config = PageConfiguration(header) + content = parse_segments(raw, offset) + config.set('segments', list(content.iterkeys())) + + # Save to the cache. + cache_data = { + 'config': config.get(), + 'content': json_save_segments(content)} + cache.write(cache_path, json.dumps(cache_data)) + + return config, content + + +segment_pattern = re.compile( + r"""^\-\-\-\s*(?P<name>\w+)(\:(?P<fmt>\w+))?\s*\-\-\-\s*$""", + re.M) +part_pattern = re.compile( + r"""^<\-\-\s*(?P<fmt>\w+)\s*\-\->\s*$""", + re.M) + + +def parse_segments(raw, offset=0): + matches = list(segment_pattern.finditer(raw, offset)) + num_matches = len(matches) + if num_matches > 0: + contents = {} + + first_offset = matches[0].start() + if first_offset > 0: + # There's some default content segment at the beginning. + seg = ContentSegment() + seg.parts = parse_segment_parts(raw, offset, first_offset) + contents['content'] = seg + + for i in range(1, num_matches): + m1 = matches[i - 1] + m2 = matches[i] + seg = ContentSegment() + seg.parts = parse_segment_parts(raw, m1.end() + 1, + m2.start(), m1.group('fmt')) + contents[m1.group('name')] = seg + + # Handle text past the last match. + lastm = matches[-1] + seg = ContentSegment() + seg.parts = parse_segment_parts(raw, lastm.end() + 1, + len(raw), lastm.group('fmt')) + contents[lastm.group('name')] = seg + + return contents + else: + # No segments, just content. + seg = ContentSegment() + seg.parts = parse_segment_parts(raw, offset, len(raw)) + return {'content': seg} + + +def parse_segment_parts(raw, start, end, first_part_fmt=None): + matches = list(part_pattern.finditer(raw, start, end)) + num_matches = len(matches) + if num_matches > 0: + parts = [] + + # First part, before the first format change. 
+ parts.append( + ContentSegmentPart(raw[start:matches[0].start()], + first_part_fmt, + start)) + + for i in range(1, num_matches): + m1 = matches[i - 1] + m2 = matches[i] + parts.append( + ContentSegmentPart( + raw[m1.end() + 1:m2.start()], + m1.group('fmt'), + m1.end() + 1)) + + lastm = matches[-1] + parts.append(ContentSegmentPart(raw[lastm.end() + 1:end], + lastm.group('fmt'), + lastm.end() + 1)) + + return parts + else: + return [ContentSegmentPart(raw[start:end], first_part_fmt)] +
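`page.py` above splits a page file into named content segments: the regular body goes into the `content` segment, a line such as `---sidebar---` (optionally `---sidebar:markdown---`) opens a new segment, and a line such as `<--none-->` switches the formatter for the parts that follow. A small sketch of that syntax in action, assuming the `piecrust` package from this changeset is importable; the sample text itself is made up:

    from piecrust.page import parse_segments

    raw = ("Main body of the page, rendered with the default format.\n"
           "---sidebar:markdown---\n"
           "* a sidebar item\n"
           "<--none-->\n"
           "<p>a raw HTML part of the sidebar</p>\n")

    segments = parse_segments(raw)
    print(sorted(segments.keys()))         # ['content', 'sidebar']
    print(len(segments['sidebar'].parts))  # 2: a 'markdown' part, then a 'none' part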
--- a/piecrust/plugins/base.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/plugins/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -23,12 +23,18 @@ def getCommands(self): return [] + def getCommandExtensions(self): + return [] + def getRepositories(self): return [] def getBakerAssistants(self): return [] + def getSources(self): + return [] + def initialize(self, app): pass @@ -68,12 +74,18 @@ def getCommands(self): return self._getPluginComponents('getCommands') + def getCommandExtensions(self): + return self._getPluginComponents('getCommandExtensions') + def getRepositories(self): return self._getPluginComponents('getRepositories', True) def getBakerAssistants(self): return self._getPluginComponents('getBakerAssistants') + def getSources(self): + return self._getPluginComponents('getSources') + def _ensureLoaded(self): if self._plugins is not None: return
--- a/piecrust/plugins/builtin.py Wed Dec 25 22:16:46 2013 -0800 +++ b/piecrust/plugins/builtin.py Sun Aug 10 23:43:16 2014 -0700 @@ -1,6 +1,18 @@ -from piecrust.commands.builtin.info import RootCommand -from piecrust.commands.builtin.util import InitCommand +from piecrust.commands.builtin.baking import (BakeCommand, ShowRecordCommand) +from piecrust.commands.builtin.info import (RootCommand, ShowConfigCommand, + FindCommand, ShowRoutesCommand, ShowPathsCommand) +from piecrust.commands.builtin.serving import (ServeCommand) +from piecrust.commands.builtin.util import (InitCommand, PurgeCommand, + PrepareCommand) +from piecrust.data.provider import (IteratorDataProvider, BlogDataProvider) +from piecrust.formatting.markdownformatter import MarkdownFormatter from piecrust.plugins.base import PieCrustPlugin +from piecrust.processing.base import CopyFileProcessor +from piecrust.processing.less import LessProcessor +from piecrust.sources.base import DefaultPageSource +from piecrust.sources.posts import (FlatPostsSource, ShallowPostsSource, + HierarchyPostsSource) +from piecrust.templating.jinjaengine import JinjaTemplateEngine class BuiltInPlugin(PieCrustPlugin): @@ -11,5 +23,42 @@ def getCommands(self): return [ InitCommand(), - RootCommand()] + RootCommand(), + PurgeCommand(), + ShowConfigCommand(), + FindCommand(), + PrepareCommand(), + ShowRoutesCommand(), + ShowPathsCommand(), + BakeCommand(), + ShowRecordCommand(), + ServeCommand()] + + def getCommandExtensions(self): + return [] + def getSources(self): + return [ + DefaultPageSource, + FlatPostsSource, + ShallowPostsSource, + HierarchyPostsSource] + + def getDataProviders(self): + return [ + IteratorDataProvider, + BlogDataProvider] + + def getTemplateEngines(self): + return [ + JinjaTemplateEngine()] + + def getFormatters(self): + return [ + MarkdownFormatter()] + + def getProcessors(self): + return [ + CopyFileProcessor(), + LessProcessor()] +
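The built-in plugin above simply returns instances of the commands, sources, data providers, template engines, formatters and processors it contributes. A third-party plugin would look much the same, overriding only the hooks it needs. The sketch below is hypothetical and assumes the base class exposes the same hooks `BuiltInPlugin` overrides; `MyCommand` and `MyProcessor` are placeholders, not part of PieCrust.

    from piecrust.plugins.base import PieCrustPlugin

    class MyPlugin(PieCrustPlugin):
        def getCommands(self):
            return []   # e.g. [MyCommand()]

        def getProcessors(self):
            return []   # e.g. [MyProcessor()]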
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/processing/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,336 @@ +import re +import time +import shutil +import os.path +import logging +import threading +from Queue import Queue, Empty +from piecrust.chefutil import format_timed +from piecrust.processing.tree import (ProcessingTreeBuilder, + ProcessingTreeRunner, STATE_DIRTY, print_node) +from piecrust.records import Record + + +logger = logging.getLogger(__name__) + + +PRIORITY_FIRST = -1 +PRIORITY_NORMAL = 0 +PRIORITY_LAST = 1 + + +class Processor(object): + PROCESSOR_NAME = None + + def __init__(self): + self.priority = PRIORITY_NORMAL + self.is_bypassing_structured_processing = False + self.is_delegating_dependency_check = True + + def initialize(self, app): + self.app = app + + def onPipelineStart(self, pipeline): + pass + + def onPipelineEnd(self, pipeline): + pass + + def supportsExtension(self, ext): + return False + + def getDependencies(self, path): + return None + + def getOutputFilenames(self, filename): + return None + + def process(self, path, out_dir): + pass + + +class CopyFileProcessor(Processor): + PROCESSOR_NAME = 'copy' + + def __init__(self): + super(CopyFileProcessor, self).__init__() + self.priority = PRIORITY_LAST + + def supportsExtension(self, ext): + return True + + def getOutputFilenames(self, filename): + return [filename] + + def process(self, path, out_dir): + out_path = os.path.join(out_dir, os.path.basename(path)) + logger.debug("Copying: %s -> %s" % (path, out_path)) + shutil.copyfile(path, out_path) + return True + + +class SimpleFileProcessor(Processor): + def __init__(self, extensions=None): + super(SimpleFileProcessor, self).__init__() + self.extensions = extensions or {} + + def supportsExtension(self, ext): + return ext.lstrip('.') in self.extensions + + def getOutputFilenames(self, filename): + basename, ext = os.path.splitext(filename) + ext = ext.lstrip('.') + out_ext = self.extensions[ext] + return ['%s.%s' % (basename, out_ext)] + + def process(self, path, out_dir): + _, in_name = os.path.split(path) + out_name = self.getOutputFilenames(in_name)[0] + out_path = os.path.join(out_dir, out_name) + return self._doProcess(path, out_path) + + def _doProcess(self, in_path, out_path): + raise NotImplementedError() + + +class ProcessorPipelineRecord(Record): + VERSION = 1 + + def __init__(self): + super(ProcessorPipelineRecord, self).__init__() + self.is_multi_mount = False + + def addEntry(self, item): + self.entries.append(item) + + def hasOverrideEntry(self, rel_path): + if not self.is_multi_mount: + return False + return self.findEntry(rel_path) is not None + + def findEntry(self, rel_path): + rel_path = rel_path.lower() + for entry in self.entries: + for out_path in entry.rel_outputs: + if out_path.lower() == rel_path: + return entry + return None + + +class ProcessorPipelineRecordEntry(object): + def __init__(self, rel_input, is_processed=False, is_overridden=False): + self.rel_input = rel_input + self.rel_outputs = [] + self.is_processed = is_processed + self.is_overridden = is_overridden + + +class ProcessingContext(object): + def __init__(self, base_dir, job_queue, record=None): + self.base_dir = base_dir + self.job_queue = job_queue + self.record = record + + +class ProcessorPipeline(object): + def __init__(self, app, out_dir, force=False, mounts=None, + skip_patterns=None, force_patterns=None, num_workers=4): + self.app = app + tmp_dir = app.cache_dir + if not tmp_dir: + import tempfile + tmp_dir = 
os.path.join(tempfile.gettempdir(), 'piecrust') + self.tmp_dir = os.path.join(tmp_dir, 'proc') + self.out_dir = out_dir + self.force = force + self.mounts = mounts or {} + self.skip_patterns = skip_patterns or [] + self.force_patterns = force_patterns or [] + self.processors = app.plugin_loader.getProcessors() + self.num_workers = num_workers + + if app.theme_dir is not None: + self.mounts['theme'] = app.theme_dir + + self.skip_patterns += ['_cache', '_content', '_counter', + 'theme_info.yml', + '.DS_Store', 'Thumbs.db', + '.git*', '.hg*', '.svn'] + + self.skip_patterns = make_re(self.skip_patterns) + self.force_patterns = make_re(self.force_patterns) + + def run(self, src_dir_or_file=None): + record = ProcessorPipelineRecord() + + # Create the workers. + pool = [] + queue = Queue() + abort = threading.Event() + pipeline_lock = threading.Lock() + for i in range(self.num_workers): + ctx = ProcessingWorkerContext(self, record, queue, abort, + pipeline_lock) + worker = ProcessingWorker(i, ctx) + worker.start() + pool.append(worker) + + # Invoke pre-processors. + for proc in self.processors: + proc.onPipelineStart(self) + + if src_dir_or_file is not None: + # Process only the given path. + # Find out if this source directory is in a mount point. + base_dir = self.app.root_dir + for name, path in self.mounts.iteritems(): + if src_dir_or_file[:len(path)] == path: + base_dir = path + + ctx = ProcessingContext(base_dir, queue, record) + logger.debug("Initiating processing pipeline on: %s" % src_dir_or_file) + if os.path.isdir(src_dir_or_file): + self.processDirectory(ctx, src_dir_or_file) + elif os.path.isfile(src_dir_or_file): + self.processFile(ctx, src_dir_or_file) + + else: + # Process everything. + ctx = ProcessingContext(self.app.root_dir, queue, record) + logger.debug("Initiating processing pipeline on: %s" % self.app.root_dir) + self.processDirectory(ctx, self.app.root_dir) + ctx.is_multi_mount = True + for name, path in self.mounts.iteritems(): + mount_ctx = ProcessingContext(path, queue, record) + logger.debug("Initiating processing pipeline on: %s" % path) + self.processDirectory(mount_ctx, path) + + # Wait on all workers. + for w in pool: + w.join() + if abort.is_set(): + raise Exception("Worker pool was aborted.") + + # Invoke post-processors. 
+ for proc in self.processors: + proc.onPipelineEnd(self) + + return record + + def processDirectory(self, ctx, start_dir): + for dirpath, dirnames, filenames in os.walk(start_dir): + rel_dirpath = os.path.relpath(dirpath, start_dir) + dirnames[:] = [d for d in dirnames + if not re_matchany(os.path.join(rel_dirpath, d), + self.skip_patterns)] + + for filename in filenames: + if re_matchany(os.path.join(rel_dirpath, filename), + self.skip_patterns): + continue + self.processFile(ctx, os.path.join(dirpath, filename)) + + def processFile(self, ctx, path): + logger.debug("Queuing: %s" % path) + job = ProcessingWorkerJob(ctx.base_dir, path) + ctx.job_queue.put_nowait(job) + + +class ProcessingWorkerContext(object): + def __init__(self, pipeline, record, work_queue, abort_event, + pipeline_lock): + self.pipeline = pipeline + self.record = record + self.work_queue = work_queue + self.abort_event = abort_event + self.pipeline_lock = pipeline_lock + + +class ProcessingWorkerJob(object): + def __init__(self, base_dir, path): + self.base_dir = base_dir + self.path = path + + +class ProcessingWorker(threading.Thread): + def __init__(self, wid, ctx): + super(ProcessingWorker, self).__init__() + self.wid = wid + self.ctx = ctx + + def run(self): + while(not self.ctx.abort_event.is_set()): + try: + job = self.ctx.work_queue.get(True, 0.1) + except Empty: + logger.debug("[%d] No more work... shutting down." % self.wid) + break + + try: + self._unsafeRun(job) + logger.debug("[%d] Done with file." % self.wid) + self.ctx.work_queue.task_done() + except Exception as ex: + self.ctx.abort_event.set() + logger.error("[%d] Critical error, aborting." % self.wid) + logger.exception(ex) + break + + def _unsafeRun(self, job): + start_time = time.clock() + pipeline = self.ctx.pipeline + record = self.ctx.record + + rel_path = os.path.relpath(job.path, job.base_dir) + + # Figure out if a previously processed file is overriding this one. + # This can happen if a theme file (processed via a mount point) + # is overridden in the user's website. + if record.hasOverrideEntry(rel_path): + record.addEntry(ProcessorPipelineRecordEntry(rel_path, + is_processed=False, is_overridden=True)) + logger.info(format_timed(start_time, + '%s [not baked, overridden]' % rel_path)) + return + + builder = ProcessingTreeBuilder(pipeline.processors) + tree_root = builder.build(rel_path) + print_node(tree_root, recursive=True) + leaves = tree_root.getLeaves() + fi = ProcessorPipelineRecordEntry(rel_path) + fi.rel_outputs = [l.path for l in leaves] + record.addEntry(fi) + + force = pipeline.force + if not force: + force = re_matchany(rel_path, pipeline.force_patterns) + + if force: + tree_root.setState(STATE_DIRTY, True) + + runner = ProcessingTreeRunner(job.base_dir, pipeline.tmp_dir, + pipeline.out_dir, self.ctx.pipeline_lock) + if runner.processSubTree(tree_root): + fi.is_processed = True + logger.info(format_timed(start_time, "[%d] %s" % (self.wid, rel_path))) + + +def make_re(patterns): + re_patterns = [] + for pat in patterns: + if pat[0] == '/' and pat[-1] == '/' and len(pat) > 2: + re_patterns.append(pat[1:-1]) + else: + escaped_pat = (re.escape(pat) + .replace(r'\*', r'[^/\\]*') + .replace(r'\?', r'[^/\\]')) + re_patterns.append(escaped_pat) + return map(lambda p: re.compile(p), re_patterns) + + +def re_matchany(filename, patterns): + for pattern in patterns: + if pattern.match(filename): + return True + return False +
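`SimpleFileProcessor` above takes care of the extension bookkeeping: the dictionary passed to its constructor drives both `supportsExtension()` and `getOutputFilenames()`, leaving subclasses to implement only `_doProcess()`. Here is a hedged sketch of a custom processor built on it; this one is hypothetical and not shipped with PieCrust, and it would be returned from a plugin's `getProcessors()` hook:

    from piecrust.processing.base import SimpleFileProcessor

    class TextToHtmlProcessor(SimpleFileProcessor):
        PROCESSOR_NAME = 'txt2html'

        def __init__(self):
            # Map input extension to output extension: foo.txt -> foo.html.
            super(TextToHtmlProcessor, self).__init__({'txt': 'html'})

        def _doProcess(self, in_path, out_path):
            with open(in_path, 'r') as fp:
                text = fp.read()
            with open(out_path, 'w') as fp:
                fp.write('<pre>%s</pre>' % text)
            # True tells the pipeline the output was (re)generated.
            return True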
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/processing/less.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,77 @@ +import os +import os.path +import json +import hashlib +import logging +import subprocess +from piecrust.processing.base import SimpleFileProcessor +from piecrust.processing.tree import FORCE_BUILD + + +logger = logging.getLogger(__name__) + + +class LessProcessor(SimpleFileProcessor): + PROCESSOR_NAME = 'less' + + def __init__(self): + super(LessProcessor, self).__init__({'less': 'css'}) + self._conf = None + self._map_dir = None + + def onPipelineStart(self, pipeline): + self._map_dir = os.path.join(pipeline.tmp_dir, 'less') + if not os.path.isdir(self._map_dir): + os.makedirs(self._map_dir) + + def getDependencies(self, path): + map_path = self._getMapPath(path) + try: + with open(map_path, 'r') as f: + dep_map = json.load(f) + source = dep_map.get('sources') + # The last one is always the file itself, so skip that. Also, + # make all paths absolute. + path_dir = os.path.dirname(path) + def _makeAbs(p): + return os.path.join(path_dir, p) + return map(_makeAbs, source[:-1]) + except IOError: + # Map file not found... rebuild. + logger.debug("No map file found for LESS file '%s' at '%s'. " + "Rebuilding." % (path, map_path)) + return FORCE_BUILD + + def _doProcess(self, in_path, out_path): + self._ensureInitialized() + + map_path = self._getMapPath(in_path) + map_path = os.path.relpath(map_path) + args = [self._conf['bin'], '--source-map=%s' % map_path] + args += self._conf['options'] + args.append(in_path) + args.append(out_path) + logger.debug("Processing LESS file: %s" % args) + retcode = subprocess.call(args) + if retcode != 0: + raise Exception("Error occurred in LESS compiler. Please check " + "log messages above for more information.") + return True + + def _ensureInitialized(self): + if self._conf is not None: + return + + self._conf = self.app.config.get('less') or {} + self._conf.setdefault('bin', 'lessc') + self._conf.setdefault('options', + ['--compress']) + if not isinstance(self._conf['options'], list): + raise Exception("The `less/options` configuration setting " + "must be an array of arguments.") + + def _getMapPath(self, path): + map_name = "%s.map" % hashlib.md5(path).hexdigest() + map_path = os.path.join(self._map_dir, map_name) + return map_path +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/processing/tree.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,278 @@ +import os +import time +import os.path +import logging + + +logger = logging.getLogger(__name__) + + +STATE_UNKNOWN = 0 +STATE_DIRTY = 1 +STATE_CLEAN = 2 + + +FORCE_BUILD = object() + + +class ProcessingTreeError(Exception): + pass + + +class ProcessorNotFoundError(ProcessingTreeError): + pass + + +class ProcessingTreeNode(object): + def __init__(self, path, available_procs, level=0): + self.path = path + self.available_procs = available_procs + self.outputs = [] + self.level = level + self.state = STATE_UNKNOWN + self._processor = None + + def getProcessor(self): + if self._processor is None: + _, ext = os.path.splitext(self.path) + for p in self.available_procs: + if p.supportsExtension(ext): + self._processor = p + self.available_procs.remove(p) + break + else: + raise ProcessorNotFoundError() + return self._processor + + def setState(self, state, recursive=True): + self.state = state + if recursive: + for o in self.outputs: + o.setState(state, True) + + @property + def is_leaf(self): + return len(self.outputs) == 0 + + def getLeaves(self): + if self.is_leaf: + return [self] + leaves = [] + for o in self.outputs: + for l in o.getLeaves(): + leaves.append(l) + return leaves + + +class ProcessingTreeBuilder(object): + def __init__(self, processors): + self.processors = processors + + def build(self, path): + start_time = time.clock() + tree_root = ProcessingTreeNode(path, list(self.processors)) + + loop_guard = 100 + walk_stack = [tree_root] + while len(walk_stack) > 0: + loop_guard -= 1 + if loop_guard <= 0: + raise ProcessingTreeError("Infinite loop detected!") + + cur_node = walk_stack.pop() + proc = cur_node.getProcessor() + + # If the root tree node (and only that one) wants to bypass this + # whole tree business, so be it. + if proc.is_bypassing_structured_processing: + if proc != tree_root: + raise ProcessingTreeError("Only root processors can " + "bypass structured processing.") + break + + # Get the destination directory and output files. 
+ rel_dir, basename = os.path.split(cur_node.path) + out_names = proc.getOutputFilenames(basename) + if out_names is None: + continue + + for n in out_names: + out_node = ProcessingTreeNode( + os.path.join(rel_dir, n), + list(cur_node.available_procs), + cur_node.level + 1) + cur_node.outputs.append(out_node) + + if proc.PROCESSOR_NAME != 'copy': + walk_stack.append(out_node) + + logger.debug(format_timed(start_time, "Built processing tree for: %s" % path)) + return tree_root + + +class ProcessingTreeRunner(object): + def __init__(self, base_dir, tmp_dir, out_dir, lock=None): + self.base_dir = base_dir + self.tmp_dir = tmp_dir + self.out_dir = out_dir + self.lock = lock + + def processSubTree(self, tree_root): + did_process = False + walk_stack = [tree_root] + while len(walk_stack) > 0: + cur_node = walk_stack.pop() + + self._computeNodeState(cur_node) + if cur_node.state == STATE_DIRTY: + did_process_this_node = self.processNode(cur_node) + did_process |= did_process_this_node + + if did_process_this_node: + for o in cur_node.outputs: + if not o.is_leaf: + walk_stack.append(o) + else: + for o in cur_node.outputs: + if not o.is_leaf: + walk_stack.append(o) + return did_process + + def processNode(self, node): + full_path = self._getNodePath(node) + proc = node.getProcessor() + if proc.is_bypassing_structured_processing: + try: + start_time = time.clock() + proc.process(full_path, self.out_dir) + print_node(format_timed(start_time, "(bypassing structured processing)")) + return True + except Exception as e: + import sys + _, __, traceback = sys.exc_info() + raise Exception("Error processing: %s" % node.path, e), None, traceback + + # All outputs of a node must go to the same directory, so we can get + # the output directory off of the first output. + base_out_dir = self._getNodeBaseDir(node.outputs[0]) + rel_out_dir = os.path.dirname(node.path) + out_dir = os.path.join(base_out_dir, rel_out_dir) + if not os.path.isdir(out_dir): + if self.lock: + with self.lock: + if not os.path.isdir(out_dir): + os.makedirs(out_dir, 0755) + else: + os.makedirs(out_dir, 0755) + + try: + start_time = time.clock() + proc_res = proc.process(full_path, out_dir) + if proc_res is None: + raise Exception("Processor '%s' didn't return a boolean " + "result value." % proc) + if proc_res: + print_node(node, "-> %s" % out_dir) + return True + else: + print_node(node, "-> %s [clean]" % out_dir) + return False + except Exception as e: + import sys + _, __, traceback = sys.exc_info() + raise Exception("Error processing: %s" % node.path, e), None, traceback + + def _computeNodeState(self, node): + if node.state != STATE_UNKNOWN: + return + + proc = node.getProcessor() + if (proc.is_bypassing_structured_processing or + not proc.is_delegating_dependency_check): + # This processor wants to handle things on its own... + node.setState(STATE_DIRTY, False) + return + + start_time = time.clock() + + # Get paths and modification times for the input path and + # all dependencies (if any). 
+ base_dir = self._getNodeBaseDir(node) + full_path = os.path.join(base_dir, node.path) + in_mtime = (full_path, os.path.getmtime(full_path)) + force_build = False + try: + deps = proc.getDependencies(full_path) + if deps == FORCE_BUILD: + force_build = True + elif deps is not None: + for dep in deps: + dep_mtime = os.path.getmtime(dep) + if dep_mtime > in_mtime[1]: + in_mtime = (dep, dep_mtime) + except Exception as e: + logger.warning("%s -- Will force-bake: %s" % (e, node.path)) + node.setState(STATE_DIRTY, True) + return + + if force_build: + # Just do what the processor told us to do. + node.setState(STATE_DIRTY, True) + message = "Processor requested a forced build." + print_node(node, message) + else: + # Get paths and modification times for the outputs. + message = None + for o in node.outputs: + full_out_path = self._getNodePath(o) + if not os.path.isfile(full_out_path): + message = "Output '%s' doesn't exist." % o.path + break + o_mtime = os.path.getmtime(full_out_path) + if o_mtime < in_mtime[1]: + message = "Input '%s' is newer than output '%s'." % ( + in_mtime[0], o.path) + break + if message is not None: + node.setState(STATE_DIRTY, True) + message += " Re-processing sub-tree." + print_node(node, message) + else: + node.setState(STATE_CLEAN, False) + + state = "dirty" if node.state == STATE_DIRTY else "clean" + logger.debug(format_timed(start_time, "Computed node dirtyness: %s" % state, node.level)) + + def _getNodeBaseDir(self, node): + if node.level == 0: + return self.base_dir + if node.is_leaf: + return self.out_dir + return os.path.join(self.tmp_dir, str(node.level)) + + def _getNodePath(self, node): + base_dir = self._getNodeBaseDir(node) + return os.path.join(base_dir, node.path) + + +def print_node(node, message=None, recursive=False): + indent = ' ' * node.level + try: + proc_name = node.getProcessor().PROCESSOR_NAME + except ProcessorNotFoundError: + proc_name = 'n/a' + + message = message or '' + logger.debug('%s%s [%s] %s' % (indent, node.path, proc_name, message)) + + if recursive: + for o in node.outputs: + print_node(o, None, True) + + +def format_timed(start_time, message, indent_level=0): + end_time = time.clock() + indent = indent_level * ' ' + build_time = '{0:8.1f} ms'.format((end_time - start_time) / 1000.0) + return "%s[%s] %s" % (indent, build_time, message) +
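Tying the two processing modules together: the builder turns a relative asset path into a tree of output nodes, keyed by which processor claims each extension, and the runner re-processes only the sub-trees whose outputs are missing or stale. A rough usage sketch with made-up directory names follows; it assumes `assets/css/site.css` exists, and in practice `ProcessorPipeline` builds the worker threads and drives this for you.

    from piecrust.processing.base import CopyFileProcessor
    from piecrust.processing.tree import (ProcessingTreeBuilder,
                                          ProcessingTreeRunner)

    # All paths here are illustrative.
    builder = ProcessingTreeBuilder([CopyFileProcessor()])
    tree_root = builder.build('css/site.css')

    runner = ProcessingTreeRunner(base_dir='assets', tmp_dir='_cache/proc',
                                  out_dir='_counter')
    if runner.processSubTree(tree_root):
        print("Re-processed css/site.css")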
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/records.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,51 @@ +import os +import os.path +import logging +from piecrust import APP_VERSION +from piecrust.events import Event + +try: + import cPickle as pickle +except ImportError: + import pickle + + +logger = logging.getLogger(__name__) + + +class Record(object): + VERSION = 1 + + def __init__(self): + self.app_version = None + self.record_version = None + self.entries = [] + self.entry_added = Event() + + def isVersionMatch(self): + return (self.app_version == APP_VERSION and + self.record_version == self.VERSION) + + def addEntry(self, entry): + self.entries.append(entry) + self.entry_added.fire(entry) + + def save(self, path): + path_dir = os.path.dirname(path) + if not os.path.isdir(path_dir): + os.makedirs(path_dir, 0755) + + with open(path, 'wb') as fp: + pickle.dump(self, fp, pickle.HIGHEST_PROTOCOL) + + def __getstate__(self): + odict = self.__dict__.copy() + del odict['entry_added'] + return odict + + @staticmethod + def load(path): + logger.debug("Loading bake record from: %s" % path) + with open(path, 'rb') as fp: + return pickle.load(fp) +
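`Record` above is the pickled state that the baker and the processing pipeline keep about a run, so a later run can be compared against it. A hedged sketch of how a subclass could be persisted and read back; the entry class and the file name are made up:

    from piecrust.records import Record

    class MyRecordEntry(object):
        def __init__(self, path):
            self.path = path

    class MyRecord(Record):
        VERSION = 1

    record = MyRecord()
    record.app_version = '2.0.0alpha'   # normally set by the owning component
    record.addEntry(MyRecordEntry('pages/foo.md'))
    record.save('_cache/my_record')      # pickles the record, minus its events

    loaded = MyRecord.load('_cache/my_record')
    print("%d entries, version match: %s" %
          (len(loaded.entries), loaded.isVersionMatch()))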
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/rendering.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,198 @@ +import re +import os.path +import codecs +import logging +from piecrust.data.builder import (DataBuildingContext, build_page_data, + build_layout_data) +from piecrust.environment import PHASE_PAGE_FORMATTING, PHASE_PAGE_RENDERING + + +logger = logging.getLogger(__name__) + + +content_abstract_re = re.compile(r'^<!--\s*(more|(page)?break)\s*-->\s*$', + re.MULTILINE) + + +class PageRenderingError(Exception): + pass + + +class RenderedPage(object): + def __init__(self, page, uri, num=1): + self.page = page + self.uri = uri + self.num = num + self.data = None + self.content = None + self.execution_info = None + + @property + def app(self): + return self.page.app + + +class PageRenderingContext(object): + def __init__(self, page, uri, page_num=1): + self.page = page + self.uri = uri + self.page_num = page_num + self.pagination_source = None + self.pagination_filter = None + self.custom_data = None + self.use_cache = False + self.used_pagination = None + self.used_source_names = set() + self.used_taxonomy_terms = set() + + @property + def app(self): + return self.page.app + + @property + def source_metadata(self): + return self.page.source_metadata + + def reset(self): + self.used_pagination = None + + def setPagination(self, paginator): + if self.used_pagination is not None: + raise Exception("Pagination has already been used.") + self.used_pagination = paginator + + +def render_page(ctx): + eis = ctx.app.env.exec_info_stack + eis.pushPage(ctx.page, PHASE_PAGE_RENDERING, ctx) + try: + page = ctx.page + + # Build the data for both segment and layout rendering. + data_ctx = DataBuildingContext(page, ctx.uri, ctx.page_num) + data_ctx.pagination_source = ctx.pagination_source + data_ctx.pagination_filter = ctx.pagination_filter + page_data = build_page_data(data_ctx) + if ctx.custom_data: + page_data.update(ctx.custom_data) + + # Render content segments. + repo = ctx.app.env.rendered_segments_repository + if repo: + cache_key = '%s:%s' % (ctx.uri, ctx.page_num) + contents = repo.get(cache_key, + lambda: _do_render_page_segments(page, page_data)) + else: + contents = _do_render_page_segments(page, page_data) + + # Render layout. 
+ layout_name = page.config.get('layout') + if layout_name is None: + layout_name = page.source.config.get('default_layout', 'default') + null_names = ['', 'none', 'nil'] + if layout_name not in null_names: + layout_data = build_layout_data(page, page_data, contents) + output = render_layout(layout_name, page, layout_data) + else: + output = contents['content'] + + rp = RenderedPage(page, ctx.uri, ctx.page_num) + rp.data = page_data + rp.content = codecs.encode(output, 'utf8') + rp.execution_info = eis.current_page_info + return rp + finally: + eis.popPage() + + +def render_page_segments(ctx): + repo = ctx.app.env.rendered_segments_repository + if repo: + cache_key = '%s:%s' % (ctx.uri, ctx.page_num) + return repo.get(cache_key, + lambda: _do_render_page_segments_from_ctx(ctx)) + + return _do_render_page_segments_from_ctx(ctx) + + +def _do_render_page_segments_from_ctx(ctx): + eis = ctx.app.env.exec_info_stack + eis.pushPage(ctx.page, PHASE_PAGE_FORMATTING, ctx) + try: + data_ctx = DataBuildingContext(ctx.page, ctx.uri, ctx.page_num) + page_data = build_page_data(data_ctx) + return _do_render_page_segments(ctx.page, page_data) + finally: + eis.popPage() + + +def _do_render_page_segments(page, page_data): + app = page.app + engine_name = page.config.get('template_engine') + format_name = page.config.get('format') + + engine = get_template_engine(app, engine_name) + if engine is None: + raise PageRenderingError("Can't find template engine '%s'." % engine_name) + + formatted_content = {} + for seg_name, seg in page.raw_content.iteritems(): + seg_text = u'' + for seg_part in seg.parts: + part_format = seg_part.fmt or format_name + part_text = engine.renderString(seg_part.content, page_data, + filename=page.path, line_offset=seg_part.line) + part_text = format_text(app, part_format, part_text) + seg_text += part_text + formatted_content[seg_name] = seg_text + + if seg_name == 'content': + m = content_abstract_re.search(seg_text) + if m: + offset = m.start() + content_abstract = seg_text[:offset] + formatted_content['content.abstract'] = content_abstract + + return formatted_content + + +def render_layout(layout_name, page, layout_data): + names = layout_name.split(',') + default_template_engine = get_template_engine(page.app, None) + default_exts = ['.' + e.lstrip('.') for e in default_template_engine.EXTENSIONS] + full_names = [] + for name in names: + if '.' not in name: + full_names.append(name + '.html') + for ext in default_exts: + full_names.append(name + ext) + else: + full_names.append(name) + + _, engine_name = os.path.splitext(full_names[0]) + engine_name = engine_name.lstrip('.') + engine = get_template_engine(page.app, engine_name) + if engine is None: + raise PageRenderingError("No such template engine: %s" % engine_name) + output = engine.renderFile(full_names, layout_data) + return output + + +def get_template_engine(app, engine_name): + if engine_name == 'html': + engine_name = None + engine_name = engine_name or app.config.get('site/default_template_engine') + for engine in app.plugin_loader.getTemplateEngines(): + if engine_name in engine.ENGINE_NAMES: + return engine + return None + +def format_text(app, format_name, txt): + format_name = format_name or app.config.get('site/default_format') + for fmt in app.plugin_loader.getFormatters(): + if fmt.FORMAT_NAMES is None or format_name in fmt.FORMAT_NAMES: + txt = fmt.render(format_name, txt) + if fmt.OUTPUT_FORMAT is not None: + format_name = fmt.OUTPUT_FORMAT + return txt +
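One detail of `_do_render_page_segments()` above worth calling out: after formatting, the `content` segment is scanned for a standalone `<!-- more -->` (or `<!-- break -->` / `<!-- pagebreak -->`) comment, and everything before it is exposed as the `content.abstract` segment. A self-contained illustration using the same regular expression; the sample text is made up:

    import re

    content_abstract_re = re.compile(r'^<!--\s*(more|(page)?break)\s*-->\s*$',
                                     re.MULTILINE)

    seg_text = ("<p>This is the teaser paragraph.</p>\n"
                "<!-- more -->\n"
                "<p>This is the rest of the post.</p>\n")

    m = content_abstract_re.search(seg_text)
    if m:
        print(seg_text[:m.start()])   # only the teaser paragraph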
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/messages/404.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,65 @@ +<!doctype html> +<html> +<head> + <title>Can't find the sugar!</title> + <meta name="generator" content="PieCrust" /> + <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster"> + <style> + body { + margin: 0; + padding: 1em; + background: #eee; + color: #000; + font-family: Georgia, serif; + } + h1 { + font-size: 4.5em; + font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif; + text-align: center; + font-weight: bold; + margin-top: 0; + color: #333; + text-shadow: 0px 2px 5px rgba(0,0,0,0.3); + } + h2 { + font-size: 2.5em; + font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif; + } + code { + background: #ddd; + padding: 0 0.2em; + } + #preamble { + font-size: 1.2em; + font-style: italic; + text-align: center; + margin-bottom: 0; + } + #container { + margin: 0 20%; + } + #content { + margin: 2em 1em; + } + .note { + margin: 3em; + color: #888; + font-style: italic; + } + </style> +</head> +<body> + <div id="container"> + <div id="header"> + <p id="preamble">A Message From The Kitchen:</p> + <h1>Can't find the sugar!</h1> + </div> + <hr /> + <div id="content"> + <p>It looks like the page you were trying to access does not exist around here. Try going somewhere else.</p> + + </div> + <hr /> + </div> +</body> +</html> \ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/messages/critical.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,67 @@ +<!doctype html> +<html> +<head> + <title>The Whole Kitchen Burned Down!</title> + <meta name="generator" content="PieCrust" /> + <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster"> + <style> + body { + margin: 0; + padding: 1em; + background: #eee; + color: #000; + font-family: Georgia, serif; + } + h1 { + font-size: 4.5em; + font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif; + text-align: center; + font-weight: bold; + margin-top: 0; + color: #333; + text-shadow: 0px 2px 5px rgba(0,0,0,0.3); + } + h2 { + font-size: 2.5em; + font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif; + } + code { + background: #ddd; + padding: 0 0.2em; + } + #preamble { + font-size: 1.2em; + font-style: italic; + text-align: center; + margin-bottom: 0; + } + #container { + margin: 0 20%; + } + #content { + margin: 2em 1em; + } + .note { + margin: 3em; + color: #888; + font-style: italic; + } + </style> +</head> +<body> + <div id="container"> + <div id="header"> + <p id="preamble">A Message From The Kitchen:</p> + <h1>The Whole Kitchen Burned Down!</h1> + </div> + <hr /> + <div id="content"> + <p>Something critically bad happened, and <strong>PieCrust</strong> needs to shut down. It's probably our fault.</p> + +<div id="error-details">{{ details }}</div> + + </div> + <hr /> + <p class="note">You’re seeing this because something went wrong. To see detailed errors with call stacks, run chef with the <code>--debug</code> parameter, append <code>?!debug</code> to the <span class="caps">URL</span>, or initialize the <code>PieCrust</code> object with <code>{'debug'=>true}</code>. On the other hand, to see your custom error pages, set the <code>site/display_errors</code> setting to <code>false</code>.</p> </div> +</body> +</html> \ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/messages/error.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,65 @@ +<!doctype html> +<html> +<head> + <title>The Cake Just Burned!</title> + <meta name="generator" content="PieCrust" /> + <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster"> + <style> + body { + margin: 0; + padding: 1em; + background: #eee; + color: #000; + font-family: Georgia, serif; + } + h1 { + font-size: 4.5em; + font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif; + text-align: center; + font-weight: bold; + margin-top: 0; + color: #333; + text-shadow: 0px 2px 5px rgba(0,0,0,0.3); + } + h2 { + font-size: 2.5em; + font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif; + } + code { + background: #ddd; + padding: 0 0.2em; + } + #preamble { + font-size: 1.2em; + font-style: italic; + text-align: center; + margin-bottom: 0; + } + #container { + margin: 0 20%; + } + #content { + margin: 2em 1em; + } + .note { + margin: 3em; + color: #888; + font-style: italic; + } + </style> +</head> +<body> + <div id="container"> + <div id="header"> + <p id="preamble">A Message From The Kitchen:</p> + <h1>The Cake Just Burned!</h1> + </div> + <hr /> + <div id="content"> + <div id="error-details">{{ details }}</div> + + </div> + <hr /> + <p class="note">You’re seeing this because something went wrong. To see detailed errors with call stacks, run chef with the <code>--debug</code> parameter, append <code>?!debug</code> to the <span class="caps">URL</span>, or initialize the <code>PieCrust</code> object with <code>{'debug'=>true}</code>. On the other hand, to see your custom error pages, set the <code>site/display_errors</code> setting to <code>false</code>.</p> </div> +</body> +</html> \ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/messages/error404.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,65 @@ +<!doctype html> +<html> +<head> + <title>Can't find the sugar!</title> + <meta name="generator" content="PieCrust" /> + <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster"> + <style> + body { + margin: 0; + padding: 1em; + background: #eee; + color: #000; + font-family: Georgia, serif; + } + h1 { + font-size: 4.5em; + font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif; + text-align: center; + font-weight: bold; + margin-top: 0; + color: #333; + text-shadow: 0px 2px 5px rgba(0,0,0,0.3); + } + h2 { + font-size: 2.5em; + font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif; + } + code { + background: #ddd; + padding: 0 0.2em; + } + #preamble { + font-size: 1.2em; + font-style: italic; + text-align: center; + margin-bottom: 0; + } + #container { + margin: 0 20%; + } + #content { + margin: 2em 1em; + } + .note { + margin: 3em; + color: #888; + font-style: italic; + } + </style> +</head> +<body> + <div id="container"> + <div id="header"> + <p id="preamble">A Message From The Kitchen:</p> + <h1>Can't find the sugar!</h1> + </div> + <hr /> + <div id="content"> + <p>It looks like the page you were trying to access does not exist around here. Try going somewhere else.</p> + + </div> + <hr /> + </div> +</body> +</html> \ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/messages/requirements.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,71 @@ +<!doctype html> +<html> +<head> + <title>You're Missing A Few Things</title> + <meta name="generator" content="PieCrust" /> + <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Lobster"> + <style> + body { + margin: 0; + padding: 1em; + background: #eee; + color: #000; + font-family: Georgia, serif; + } + h1 { + font-size: 4.5em; + font-family: Lobster, 'Trebuchet MS', Verdana, sans-serif; + text-align: center; + font-weight: bold; + margin-top: 0; + color: #333; + text-shadow: 0px 2px 5px rgba(0,0,0,0.3); + } + h2 { + font-size: 2.5em; + font-family: 'Lobster', 'Trebuchet MS', Verdana, sans-serif; + } + code { + background: #ddd; + padding: 0 0.2em; + } + #preamble { + font-size: 1.2em; + font-style: italic; + text-align: center; + margin-bottom: 0; + } + #container { + margin: 0 20%; + } + #content { + margin: 2em 1em; + } + .note { + margin: 3em; + color: #888; + font-style: italic; + } + </style> +</head> +<body> + <div id="container"> + <div id="header"> + <p id="preamble">A Message From The Kitchen:</p> + <h1>You're Missing A Few Things</h1> + </div> + <hr /> + <div id="content"> + <p>It looks like you're not meeting some of the requirements for running <strong>PieCrust</strong>:</p> + +<ul> +<li>PHP 5.3+</li> +</ul> + +<p>For more information, be sure to check out the <a href="http://bolt80.com/piecrust/doc">documentation</a>.</p> + + </div> + <hr /> + </div> +</body> +</html> \ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/prepare/atom.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,48 @@ +--- +description: +read_more_text: "Read more..." +language: +categories: +ttl: +layout: none +format: none +post_count: 10 +content_type: xml +--- +<?xml version="1.0" encoding="utf-8"?> +<feed xmlns="http://www.w3.org/2005/Atom"> + <title>{{site.title}}</title> + {% if page.description %} + <subtitle>{{page.description}}</subtitle> + {% else %} + <subtitle>Latest news from {{site.title}}</subtitle> + {% endif %} + <link href="{{page.url}}" rel="self" /> + <link href="{{site.root}}" /> + <id>{{site.root}}</id> + <updated>{{now|atomdate}}</updated> + + {% for post in blog.posts.limit(page.post_count) %} + {% set author = site.author %} + {% if post.author %}{% set author = post.author %}{% endif %} + {% if not author %}{{pcfail("Atom feeds require an author for each post. You can specify a global author with the 'site.author' config.")}}{% endif %} + <entry> + <title>{{post.title}}</title> + <link href="{{post.url}}" /> + <link rel="alternate" type="text/html" href="{{post.url}}"/> + <id>{{post.url}}</id> + <updated>{{post.timestamp|atomdate}}</updated> + <content type="html">{{post.content}} + {% if post.has_more and page.read_more_text %} + {{ ("<a href=\"" ~ post.url ~ "\">" ~ page.read_more_text ~ "</a>")|escape }} + {% endif %} + </content> + {% if author %} + <author> + <name>{{author}}</name> + </author> + {% endif %} + </entry> + {% endfor %} + +</feed>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/prepare/rss.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,60 @@ +--- +description: +read_more_text: "Read more..." +language: +categories: +ttl: +layout: none +format: none +post_count: 10 +content_type: xml +--- +<?xml version="1.0" encoding="UTF-8"?> +<rss version="2.0" + xmlns:atom="http://www.w3.org/2005/Atom" + xmlns:content="http://purl.org/rss/1.0/modules/content/" + xmlns:dc="http://purl.org/dc/elements/1.1/"> + <channel> + <title>{{site.title}}</title> + <link>{{site.root}}</link> + <atom:link href="{{page.url}}" rel="self" type="application/rss+xml" /> + {% if page.description %} + <description>{{page.description}}</description> + {% else %} + <description>Latest news from {{site.title}}</description> + {% endif %} + <lastBuildDate>{{now|date("r")}}</lastBuildDate> + <pubDate>{{now|date("r")}}</pubDate> + <generator>PieCrust {{piecrust.version}}</generator> + {% if page.language %}<language>{{page.language}}</language>{% endif %} + {% for c in page.categories %} + <category>{{c}}</category> + {% endfor %} + {% if page.ttl %}<ttl>{{page.ttl}}</ttl>{% endif %} + + {% for post in blog.posts.limit(page.post_count) %} + {% set author = site.author %} + {% if post.author %}{% set author = post.author %}{% endif %} + <item> + <title>{{post.title}}</title> + <link>{{post.url}}</link> + {% if author %} + <author>{{author}}</author> + <dc:creator>{{author}}</dc:creator> + {% endif %} + {% if post.category %}<category>{{post.category}}</category>{% endif %} + {% for t in post.tags %} + <category>{{t}}</category> + {% endfor %} + <pubDate>{{post.timestamp|date("r")}}</pubDate> + <guid>{{post.url}}</guid> + <description>{{post.content}} + {% if post.has_more and page.read_more_text %} + {{ ("<a href=\"" ~ post.url ~ "\">" ~ page.read_more_text ~ "</a>")|escape }} + {% endif %} + </description> + </item> + {% endfor %} + + </channel> +</rss>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/pages/_category.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,16 @@ +--- +title: +format: none +--- +<h2>Posts in {{ category }}</h2> + +<section> + {% for post in pagination.posts %} + {% include 'partial_post.html' %} + {% endfor %} +</section> +<section> + {% if pagination.prev_page %}<div class="prev"><a href="{{ pagination.prev_page }}">Next Posts</a></div>{% endif %} + {% if pagination.next_page %}<div class="next"><a href="{{ pagination.next_page }}">Previous Posts</a></div>{% endif %} +</section> +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/pages/_index.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,59 @@ +--- +title: +format: none +--- + +{% if pagination.has_posts %} +<section> + {% for post in pagination.posts %} + {% include 'partial_post.html' %} + {% endfor %} +</section> +<section> + {% if pagination.prev_page %}<div class="prev"><a href="{{ pagination.prev_page }}">Next Posts</a></div>{% endif %} + {% if pagination.next_page %}<div class="next"><a href="{{ pagination.next_page }}">Previous Posts</a></div>{% endif %} +</section> +{% endif %} + + +<--markdown--> + +{% if not pagination.has_posts or not site.hide_quickstart %} + +## Quick Start + +Welcome to your new [PieCrust][] website! + +Since you don't seem to have any blog posts or a home page created yet, here's a +quick reference of things you probably want to do next. All `chef` commands need +to be run from inside your website's directory. + +For more information, refer to the [documentation][doc]. + + +### Create a new blog post + +Run `chef prepare post my-new-post-slug`, where `my-new-post-slug` is the [URL slug][slug] you want for your new post. + + +### Change this page + +To override this default home page, you can do one of the following: + +* Just hide this "quick start" section by setting the `hide_quickstart` property + to `true` in your site configuration. To do this, edit `_content/config.yml` + and add `hide_quickstart: true` under the `site` section. + +* Manually rewrite this page by creating `_content/pages/_index.md` or running + `chef prepare page _index`. + +* Install a theme with `chef themes install theme-name`, where `theme-name` is + the name of a theme. You can list existing official themes with `chef themes + find`. + + +[piecrust]: {{ piecrust.url }} +[doc]: http://bolt80.com/piecrust/doc/ +[slug]: http://en.wikipedia.org/wiki/Clean_URL#Slug +{% endif %} +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/pages/_tag.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,16 @@ +--- +title: +format: none +--- +<h2>Posts tagged with {{ tag }}</h2> + +<section> + {% for post in pagination.posts %} + {% include 'partial_post.html' %} + {% endfor %} +</section> +<section> + {% if pagination.prev_page %}<div class="prev"><a href="{{ pagination.prev_page }}">Next Posts</a></div>{% endif %} + {% if pagination.next_page %}<div class="next"><a href="{{ pagination.next_page }}">Previous Posts</a></div>{% endif %} +</section> +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/templates/default.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,89 @@ +<!doctype html> +<head> + <meta charset="utf-8"> + <meta name="description" content="{{ site.description }}"> + <meta name="author" content="{{ site.author }}"> + <meta name="viewport" content="width=device-width,initial-scale=1"> + <title>{{ site.title }}{% if page.title %} — {{ page.title }}{% endif %}</title> + {% if site.css %} + <link rel="stylesheet" type="text/css" href="{{ site.root }}{{ site.css }}"> + {% else %} + <style type="text/css"> +body { + font-family: 'Baskerville', Georgia, Times, serif; + font-size: 1.3em; + line-height: 1.5em; + padding: 0 1.5em; + margin: 0; +} +h1, h2, h3, h4, h5, h6 { font-weight: bold; } +h1 { font-size: 2em; line-height: 0.75em; margin: 0.75em 0; } +h2 { font-size: 1.5em; line-height: 1em; margin: 1em 0; } +h3 { font-size: 1.2em; line-height: 1.25em; margin: 1.25em 0; } +h4 { font-size: 1em; line-height: 1.5em; margin: 1.5em 0; } +h5 { font-size: 0.8em; line-height: 1.875em; margin: 1.875em 0; } +h6 { font-size: 0.75em; line-height: 2em; margin: 2em 0; } +p { + font-size: 1em; + line-height: 1.5em; + margin: 1.5em 0; +} +code { background: #fee; padding: 0 0.3em; } +#container > header { + text-align: center; +} +#container > footer { + text-align: center; + font-style: italic; + font-size: 0.8em; + color: #888; +} + +.site-title { + font-size: 2em; + font-weight: bold; + line-height: 0.75em; + margin: 0.75em 0; +} +.post h2 { margin-bottom: 0.33em; } +.date { + font-style: italic; + font-size: 0.75em; + line-height: 0.5em; + margin-bottom: 1.5em; +} +blockquote { + font-style: italic; +} + </style> + {% endif %} +</head> +<body> + <div id="container"> + <header> + {% block header %} + {% if page.title %} + <p class="site-title"><a href="{{ site.root }}">{{ site.title }}</a></p> + {% else %} + <h1><a href="{{ site.root }}">{{ site.title }}</a></h1> + {% endif %} + {% endblock %} + </header> + <section id="main" role="main"> + {% block main %} + {% if page.title %} + <h1 class="page-title">{{ page.title }}</h1> + {% endif %} + + {{ content|raw }} + {% endblock %} + </section> + <footer> + {% block footer %} + {% endblock %} + {{ piecrust.debug_info|raw }} + <p>{{ piecrust.branding|raw }}</p> + </footer> + </div> +</body> +</html>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/templates/partial_post.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,19 @@ +<article> + <header> + <h2><a href="{{ post.url }}">{{ post.title }}</a></h2> + <small>{{ post.date }} + {% if post.tags %} + in + {% for t in post.tags %} + <a href="{{ pctagurl(t) }}">{{ t }}</a>{% if not loop.last %}, {% endif %} + {% endfor %} + {% endif %} + {% if post.category %} + | <a href="{{ pccaturl(post.category) }}">{{ post.category }}</a> + {% endif %} + </small> + </header> + <section> + {{ post.content|raw }} + </section> +</article>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/_content/templates/post.html Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,2 @@ +{% extends "default.html" %} +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/theme/theme_info.yml Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,6 @@ +name: Built-in +description: The default theme for new PieCrust websites. +authors: + - Ludovic Chabant <ludovic@chabant.com> +url: http://bolt80.com/piecrust +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/webinit/htaccess Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,18 @@ +# Uncomment this if you want to set 'pretty_urls' to 'true'. +#RewriteEngine on + +# If you're running from a sub-directory, uncomment the RewriteBase statement and +# change it to the sub-directory name you're using. +#RewriteBase /yourbase + +# Don't rewrite requests to stuff that physically exists. +RewriteCond %{REQUEST_FILENAME} -f [OR] +RewriteCond %{REQUEST_FILENAME} -l [OR] +RewriteCond %{REQUEST_FILENAME} -d +RewriteRule ^.*$ - [NC,L] + +# Rewrite all the rest through the gateway. +RewriteRule ^.*$ index.php [NC,L] + +# Various other configuration stuff. +AddType text/css .less
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/resources/webinit/web.config Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,29 @@ +<?xml version="1.0" encoding="UTF-8"?> +<configuration> + <system.webServer> + <staticContent> + <mimeMap fileExtension=".less" mimeType="text/css" /> + </staticContent> + <!-- + Uncomment the following section if you want to set 'pretty_urls' to 'true'. + You'll need IIS' URLRewrite module installed for this to work. + Get it here: http://www.iis.net/download/urlrewrite + --> + <!-- + <rewrite> + <rules> + <rule name="Don't rewrite existing resources" stopProcessing="true"> + <conditions logicalGrouping="MatchAny"> + <add input="{REQUEST_FILENAME}" matchType="IsFile" /> + <add input="{REQUEST_FILENAME}" matchType="IsDirectory" /> + </conditions> + <action type="None" /> + </rule> + <rule name="Rewrite to index.php" stopProcessing="true"> + <action type="Rewrite" url="index.php" /> + </rule> + </rules> + </rewrite> + --> + </system.webServer> +</configuration>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/routing.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,177 @@ +import re +import logging + + +logger = logging.getLogger(__name__) + + +route_re = re.compile(r'%((?P<qual>path):)?(?P<name>\w+)%') +template_func_re = re.compile(r'^(?P<name>\w+)\((?P<first_arg>\w+)(?P<other_args>.*)\)\s*$') +template_func_arg_re = re.compile(r',\s*(?P<arg>\w+)') + + +class Route(object): + """ Information about a route for a PieCrust application. + Each route defines the "shape" of an URL and how it maps to + sources and taxonomies. + """ + def __init__(self, app, cfg): + self.app = app + uri = cfg['url'] + self.uri_root = app.config.get('site/root').rstrip('/') + '/' + self.uri_pattern = uri.lstrip('/') + self.uri_format = route_re.sub(self._uriFormatRepl, self.uri_pattern) + p = route_re.sub(self._uriPatternRepl, self.uri_pattern) + self.uri_re = re.compile(p) + self.source_name = cfg['source'] + self.taxonomy = cfg.get('taxonomy') + self.required_source_metadata = [] + for m in route_re.finditer(uri): + self.required_source_metadata.append(m.group('name')) + self.template_func = None + self.template_func_name = None + self.template_func_args = [] + self._createTemplateFunc(cfg.get('func')) + + @property + def source(self): + for src in self.app.sources: + if src.name == self.source_name: + return src + raise Exception("Can't find source '%s' for route '%'." % ( + self.source_name, self.uri)) + + @property + def source_realm(self): + return self.source.realm + + def isMatch(self, source_metadata): + return True + + def getUri(self, source_metadata): + #TODO: fix this hard-coded shit + for key in ['year', 'month', 'day']: + if key in source_metadata and isinstance(source_metadata[key], str): + source_metadata[key] = int(source_metadata[key]) + return self.uri_root + (self.uri_format % source_metadata) + + def _uriFormatRepl(self, m): + name = m.group('name') + #TODO: fix this hard-coded shit + if name == 'year': + return '%(year)04d' + if name == 'month': + return '%(month)02d' + if name == 'day': + return '%(day)02d' + return '%(' + name + ')s' + + def _uriPatternRepl(self, m): + name = m.group('name') + qualifier = m.group('qual') + if qualifier == 'path': + return r'(?P<%s>.*)' % name + return r'(?P<%s>[^/\?]+)' % name + + def _createTemplateFunc(self, func_def): + if func_def is None: + return + + m = template_func_re.match(func_def) + if m is None: + raise Exception("Template function definition for route '%s' " + "has invalid syntax: %s" % + (self.uri_pattern, func_def)) + + self.template_func_name = m.group('name') + self.template_func_args.append(m.group('first_arg')) + arg_list = m.group('other_args') + if arg_list: + self.template_func_args += template_func_arg_re.findall(arg_list) + + if self.taxonomy: + # This will be a taxonomy route function... this means we can + # have a variable number of parameters, but only one parameter + # definition, which is the value. + if len(self.template_func_args) != 1: + raise Exception("Route '%s' is a taxonomy route and must have " + "only one argument, which is the term value." % + self.uri_pattern) + + def template_func(*args): + if len(args) == 0: + raise Exception( + "Route function '%s' expected at least one " + "argument." % func_def) + + # Term combinations can be passed as an array, or as multiple + # arguments. + values = args + if len(args) == 1 and isinstance(args[0], list): + values = args[0] + + # We need to register this use of a taxonomy term. 
+ if len(values) == 1: + registered_values = values[0] + else: + registered_values = tuple(values) + eis = self.app.env.exec_info_stack + eis.current_page_info.render_ctx.used_taxonomy_terms.add( + (self.source_name, self.taxonomy, registered_values)) + + if len(values) == 1: + str_values = values[0] + else: + str_values = '/'.join(values) + term_name = self.template_func_args[0] + metadata = {term_name: str_values} + + return self.getUri(metadata) + + else: + # Normal route function. + def template_func(*args): + if len(args) != len(self.template_func_args): + raise Exception( + "Route function '%s' expected %d arguments, " + "got %d." % + (func_def, len(self.template_func_args), + len(args))) + metadata = {} + for arg_name, arg_val in zip(self.template_func_args, args): + metadata[arg_name] = arg_val + return self.getUri(metadata) + + self.template_func = template_func + + +class CompositeRouteFunction(object): + def __init__(self): + self._funcs = [] + self._arg_names = None + + def addFunc(self, route): + if self._arg_names is None: + self._arg_names = sorted(route.template_func_args) + + if sorted(route.template_func_args) != self._arg_names: + raise Exception("Cannot merge route function with arguments '%s' " + "with route function with arguments '%s'." % + (route.template_func_args, self._arg_names)) + self._funcs.append((route, route.template_func)) + + def __call__(self, *args, **kwargs): + if len(args) == len(self._arg_names): + f = self._funcs[0][1] + return f(*args, **kwargs) + + if len(args) == len(self._arg_names) + 1: + f_args = args[:-1] + for r, f in self._funcs: + if r.source_name == args[-1]: + return f(f_args, **kwargs) + raise Exception("No such source: %s" % args[-1]) + + raise Exception("Incorrect number of arguments for route function. " + "Expected '%s', got '%s'" % (self._arg_names, args)) +
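For reference, here is a minimal standalone sketch of the placeholder handling that `Route` implements above: `%name%` markers in a route are turned both into a Python format string (for building URIs) and into a named-group regular expression (for parsing incoming URIs). The sample route and values are illustrative and not part of the changeset; the sketch also skips the zero-padding the real code applies to year/month/day.

    import re

    # Same placeholder syntax as routing.py; qualifiers like %path:name% are
    # recognized by the pattern but ignored in this simplified sketch.
    route_re = re.compile(r'%((?P<qual>path):)?(?P<name>\w+)%')

    uri_pattern = '%year%/%month%/%slug%'

    # Build a format string for generating URIs...
    uri_format = route_re.sub(lambda m: '%(' + m.group('name') + ')s', uri_pattern)
    # ...and a regex for matching URIs back to their metadata.
    uri_re = re.compile(route_re.sub(
        lambda m: r'(?P<%s>[^/\?]+)' % m.group('name'), uri_pattern))

    print(uri_format % {'year': '2014', 'month': '08', 'slug': 'hello-world'})
    # -> 2014/08/hello-world
    print(uri_re.match('2014/08/hello-world').groupdict())
    # -> {'year': '2014', 'month': '08', 'slug': 'hello-world'}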
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/serving.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,329 @@ +import re +import gzip +import time +import os.path +import hashlib +import logging +import StringIO +from werkzeug.exceptions import (NotFound, MethodNotAllowed, + InternalServerError) +from werkzeug.serving import run_simple +from werkzeug.wrappers import Request, Response +from werkzeug.wsgi import wrap_file +from jinja2 import FileSystemLoader, Environment +from piecrust.app import PieCrust +from piecrust.data.filters import (PaginationFilter, HasFilterClause, + IsFilterClause) +from piecrust.page import Page +from piecrust.processing.base import ProcessorPipeline +from piecrust.rendering import PageRenderingContext, render_page +from piecrust.sources.base import MODE_PARSING + + +logger = logging.getLogger(__name__) + + + + +class Server(object): + def __init__(self, root_dir, host='localhost', port='8080', + debug=False, static_preview=True): + self.root_dir = root_dir + self.host = host + self.port = port + self.debug = debug + self.static_preview = static_preview + self._out_dir = None + self._skip_patterns = None + self._force_patterns = None + self._record = None + self._mimetype_map = load_mimetype_map() + + def run(self): + # Bake all the assets so we know what we have, and so we can serve + # them to the client. We need a temp app for this. + app = PieCrust(root_dir=self.root_dir, debug=self.debug) + self._out_dir = os.path.join(app.cache_dir, 'server') + self._skip_patterns = app.config.get('baker/skip_patterns') + self._force_patterns = app.config.get('baker/force_patterns') + pipeline = ProcessorPipeline( + app, self._out_dir, + skip_patterns=self._skip_patterns, + force_patterns=self._force_patterns) + self._record = pipeline.run() + + # Run the WSGI app. + wsgi_wrapper = WsgiServer(self) + run_simple(self.host, self.port, wsgi_wrapper, + use_debugger=True, use_reloader=True) + + def _run_request(self, environ, start_response): + try: + return self._run_piecrust(environ, start_response) + except Exception as ex: + if self.debug: + raise + return self._handle_error(ex, environ, start_response) + + def _run_piecrust(self, environ, start_response): + request = Request(environ) + + # We don't support anything else than GET requests since we're + # previewing something that will be static later. + if self.static_preview and request.method != 'GET': + logger.error("Only GET requests are allowed, got %s" % request.method) + raise MethodNotAllowed() + + # Create the app for this request. + app = PieCrust(root_dir=self.root_dir, debug=self.debug) + + # We'll serve page assets directly from where they are. + app.env.base_asset_url_format = '/_asset/%path%' + + # See if the requested URL is an asset. + response = self._try_serve_asset(app, environ, request) + if response is not None: + return response(environ, start_response) + + # It's not an asset we know of... let's see if it can be a page asset. + response = self._try_serve_page_asset(app, environ, request) + if response is not None: + return response(environ, start_response) + + # Nope. Let's hope it's an actual page. 
+ try: + response = self._try_serve_page(app, environ, request) + return response(environ, start_response) + except (RouteNotFoundError, SourceNotFoundError) as ex: + logger.exception(ex) + raise NotFound() + except Exception as ex: + logger.exception(ex) + if app.debug: + raise + raise InternalServerError() + + def _try_serve_asset(self, app, environ, request): + logger.debug("Searching for asset with path: %s" % request.path) + rel_req_path = request.path.lstrip('/') + entry = self._record.findEntry(rel_req_path) + if entry is None: + return None + + # Yep, we know about this URL because we processed an asset that + # maps to it... make sure it's up to date by re-processing it + # before serving. + asset_in_path = os.path.join(app.root_dir, entry.rel_input) + asset_out_path = os.path.join(self._out_dir, rel_req_path) + pipeline = ProcessorPipeline( + app, self._out_dir, + skip_patterns=self._skip_patterns, + force_patterns=self._force_patterns) + pipeline.run(asset_in_path) + + logger.debug("Serving %s" % asset_out_path) + wrapper = wrap_file(environ, open(asset_out_path)) + response = Response(wrapper) + _, ext = os.path.splitext(rel_req_path) + response.mimetype = self._mimetype_map.get( + ext.lstrip('.'), 'text/plain') + return response + + def _try_serve_page_asset(self, app, environ, request): + if not request.path.startswith('/_asset/'): + return None + + full_path = os.path.join(app.root_dir, request.path[len('/_asset/'):]) + if not os.path.isfile(full_path): + return None + + logger.debug("Serving %s" % full_path) + wrapper = wrap_file(environ, open(full_path)) + response = Response(wrapper) + _, ext = os.path.splitext(full_path) + response.mimetype = self._mimetype_map.get( + ext.lstrip('.'), 'text/plain') + return response + + def _try_serve_page(self, app, environ, request): + # Try to find what matches the requested URL. + req_path = request.path + page_num = 1 + pgn_suffix_re = app.config.get('__cache/pagination_suffix_re') + pgn_suffix_m = re.search(pgn_suffix_re, request.path) + if pgn_suffix_m: + req_path = request.path[:pgn_suffix_m.start()] + page_num = int(pgn_suffix_m.group('num')) + + routes = find_routes(app.routes, req_path) + if len(routes) == 0: + raise RouteNotFoundError("Can't find route for: %s" % req_path) + + taxonomy = None + for route, route_metadata in routes: + source = app.getSource(route.source_name) + if route.taxonomy is None: + path, fac_metadata = source.findPagePath( + route_metadata, MODE_PARSING) + if path is not None: + break + else: + taxonomy = app.getTaxonomy(route.taxonomy) + term_value = route_metadata.get(taxonomy.term_name) + if term_value is not None: + tax_page_ref = taxonomy.getPageRef(source.name) + path = tax_page_ref.path + source = tax_page_ref.source + fac_metadata = {taxonomy.term_name: term_value} + break + else: + raise SourceNotFoundError("Can't find path for: %s " + "(looked in: %s)" % + (req_path, [r.source_name for r, _ in routes])) + + # Build the page and render it. + page = Page(source, fac_metadata, path) + render_ctx = PageRenderingContext(page, req_path, page_num) + if taxonomy is not None: + flt = PaginationFilter() + if taxonomy.is_multiple: + flt.addClause(HasFilterClause(taxonomy.name, term_value)) + else: + flt.addClause(IsFilterClause(taxonomy.name, term_value)) + render_ctx.pagination_filter = flt + + render_ctx.custom_data = { + taxonomy.term_name: term_value} + rendered_page = render_page(render_ctx) + rp_content = rendered_page.content + + # Start response. 
+ response = Response() + + etag = hashlib.md5(rp_content).hexdigest() + if not app.debug and etag in request.if_none_match: + response.status_code = 304 + return response + + response.set_etag(etag) + response.content_md5 = etag + + cache_control = response.cache_control + if app.debug: + cache_control.no_cache = True + cache_control.must_revalidate = True + else: + cache_time = (page.config.get('cache_time') or + app.config.get('site/cache_time')) + if cache_time: + cache_control.public = True + cache_control.max_age = cache_time + + content_type = page.config.get('content_type') + if content_type and '/' not in content_type: + mimetype = content_type_map.get(content_type, content_type) + else: + mimetype = content_type + if mimetype: + response.mimetype = mimetype + + if app.debug: + now_time = time.clock() + timing_info = ('%8.1f ms' % + ((now_time - app.env.start_time) * 1000.0)) + rp_content = rp_content.replace('__PIECRUST_TIMING_INFORMATION__', + timing_info) + + if ('gzip' in request.accept_encodings and + app.config.get('site/enable_gzip')): + try: + gzip_buffer = StringIO.StringIO() + gzip_file = gzip.GzipFile( + mode='wb', + compresslevel=9, + fileobj=gzip_buffer) + gzip_file.write(rp_content) + gzip_file.close() + rp_content = gzip_buffer.getvalue() + response.content_encoding = 'gzip' + except Exception: + logger.exception("Error compressing response, " + "falling back to uncompressed.") + rp_content = rendered_page.content + response.set_data(rp_content) + + return response + + def _handle_error(self, exception, environ, start_response): + path = 'error' + if isinstance(exception, NotFound): + path = '404' + env = Environment(loader=ErrorMessageLoader()) + template = env.get_template(path) + context = {'details': str(exception)} + response = Response(template.render(context), mimetype='text/html') + return response(environ, start_response) + + +class WsgiServer(object): + def __init__(self, server): + self.server = server + + def __call__(self, environ, start_response): + return self.server._run_request(environ, start_response) + + +class RouteNotFoundError(Exception): + pass + + +class SourceNotFoundError(Exception): + pass + + +content_type_map = { + 'html': 'text/html', + 'xml': 'text/xml', + 'txt': 'text/plain', + 'text': 'text/plain', + 'css': 'text/css', + 'xhtml': 'application/xhtml+xml', + 'atom': 'application/atom+xml', # or 'text/xml'? + 'rss': 'application/rss+xml', # or 'text/xml'? + 'json': 'application/json'} + + +def find_routes(routes, uri): + uri = uri.lstrip('/') + res = [] + for route in routes: + m = route.uri_re.match(uri) + if m: + metadata = m.groupdict() + res.append((route, metadata)) + return res + + +class ErrorMessageLoader(FileSystemLoader): + def __init__(self): + base_dir = os.path.join(os.path.dirname(__file__), 'resources', + 'messages') + super(ErrorMessageLoader, self).__init__(base_dir) + + def get_source(self, env, template): + template += '.html' + return super(ErrorMessageLoader, self).get_source(env, template) + + +def load_mimetype_map(): + mimetype_map = {} + sep_re = re.compile(r'\s+') + path = os.path.join(os.path.dirname(__file__), 'mime.types') + with open(path, 'r') as f: + for line in f: + tokens = sep_re.split(line) + if len(tokens) > 1: + for t in tokens[1:]: + mimetype_map[t] = tokens[0] + return mimetype_map +
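As a rough usage sketch, the preview server above can also be driven directly from Python; the `chef` command line normally sets this up, and the website root below is purely illustrative.

    from piecrust.serving import Server

    # Bakes assets into the cache directory, then serves pages on demand
    # with Werkzeug's development server (see Server.run() above).
    server = Server('/path/to/mywebsite', host='localhost', port=8080, debug=True)
    server.run()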
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/sources/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,339 @@ +import re +import os +import os.path +import logging +from werkzeug.utils import cached_property +from piecrust import CONTENT_DIR +from piecrust.configuration import ConfigurationError +from piecrust.page import Page + + +REALM_USER = 0 +REALM_THEME = 1 +REALM_NAMES = { + REALM_USER: 'User', + REALM_THEME: 'Theme'} + + +MODE_PARSING = 0 +MODE_CREATING = 1 + + +logger = logging.getLogger(__name__) + + +page_ref_pattern = re.compile(r'(?P<src>[\w]+)\:(?P<path>.*?)(;|$)') + + +class PageNotFoundError(Exception): + pass + + +class InvalidFileSystemEndpointError(Exception): + def __init__(self, source_name, fs_endpoint): + super(InvalidFileSystemEndpointError, self).__init__( + "Invalid file-system endpoint for source '%s': %s" % + (source_name, fs_endpoint)) + + +class PageFactory(object): + """ A class responsible for creating a page. + """ + def __init__(self, source, rel_path, metadata): + self.source = source + self.rel_path = rel_path + self.metadata = metadata + + @property + def ref_spec(self): + return '%s:%s' % (self.source.name, self.rel_path) + + @cached_property + def path(self): + return self.source.resolveRef(self.rel_path) + + def buildPage(self): + repo = self.source.app.env.page_repository + if repo is not None: + cache_key = '%s:%s' % (self.source.name, self.rel_path) + return repo.get(cache_key, self._doBuildPage) + return self._doBuildPage() + + def _doBuildPage(self): + logger.debug("Building page: %s" % self.path) + page = Page(self.source, self.metadata, self.rel_path) + # Load it right away, especially when using the page repository, + # because we'll be inside a critical scope. + page._load() + return page + + +class CachedPageFactory(object): + """ A `PageFactory` (in appearance) that already has a page built. + """ + def __init__(self, page): + self._page = page + + @property + def rel_path(self): + return self._page.rel_path + + @property + def metadata(self): + return self._page.source_metadata + + @property + def ref_spec(self): + return self._page.ref_spec + + @property + def path(self): + return self._page.path + + def buildPage(self): + return self._page + + +class PageRef(object): + """ A reference to a page, with support for looking a page in different + realms. 
+ """ + def __init__(self, app, page_ref): + self.app = app + self._page_ref = page_ref + self._paths = None + self._first_valid_path_index = -2 + self._exts = app.config.get('site/auto_formats').keys() + + @property + def exists(self): + try: + self._checkPaths() + return True + except PageNotFoundError: + return False + + @property + def source_name(self): + self._checkPaths() + return self._paths[self._first_valid_path_index][0] + + @property + def source(self): + return self.app.getSource(self.source_name) + + @property + def rel_path(self): + self._checkPaths() + return self._paths[self._first_valid_path_index][1] + + @property + def path(self): + self._checkPaths() + return self._paths[self._first_valid_path_index][2] + + @property + def possible_rel_paths(self): + self._load() + return [p[1] for p in self._paths] + + @property + def possible_paths(self): + self._load() + return [p[2] for p in self._paths] + + def _load(self): + if self._paths is not None: + return + + it = list(page_ref_pattern.finditer(self._page_ref)) + if len(it) == 0: + raise Exception("Invalid page ref: %s" % self._page_ref) + + self._paths = [] + for m in it: + source_name = m.group('src') + source = self.app.getSource(source_name) + if source is None: + raise Exception("No such source: %s" % source_name) + rel_path = m.group('path') + path = source.resolveRef(rel_path) + if '%ext%' in rel_path: + for e in self._exts: + self._paths.append((source_name, + rel_path.replace('%ext%', e), + path.replace('%ext%', e))) + else: + self._paths.append((source_name, rel_path, path)) + + def _checkPaths(self): + if self._first_valid_path_index >= 0: + return + if self._first_valid_path_index == -1: + raise PageNotFoundError("No valid paths were found for page reference:" % + self._page_ref) + + self._load() + for i, path_info in enumerate(self._paths): + if os.path.isfile(path_info[2]): + self._first_valid_path_index = i + break + + +class PageSource(object): + """ A source for pages, e.g. a directory with one file per page. 
+ """ + def __init__(self, app, name, config): + self.app = app + self.name = name + self.config = config + self._factories = None + self._provider_type = None + + def __getattr__(self, name): + try: + return self.config[name] + except KeyError: + raise AttributeError() + + @property + def is_theme_source(self): + return self.realm == REALM_THEME + + @property + def root_dir(self): + if self.is_theme_source: + return self.app.theme_dir + return self.app.root_dir + + def getPageFactories(self): + if self._factories is None: + self._factories = list(self.buildPageFactories()) + return self._factories + + def buildPageFactories(self): + raise NotImplementedError() + + def resolveRef(self, ref_path): + raise NotImplementedError() + + def findPagePath(self, metadata, mode): + raise NotImplementedError() + + def buildDataProvider(self, page, user_data): + if self._provider_type is None: + cls = next((pt for pt in self.app.plugin_loader.getDataProviders() + if pt.PROVIDER_NAME == self.data_type), + None) + if cls is None: + raise ConfigurationError("Unknown data provider type: %s" % + self.data_type) + self._provider_type = cls + + return self._provider_type(self, page, user_data) + + def getTaxonomyPageRef(self, tax_name): + tax_pages = self.config.get('taxonomy_pages') + if tax_pages is None: + return None + return tax_pages.get(tax_name) + + +class IPreparingSource: + def setupPrepareParser(self, parser, app): + raise NotImplementedError() + + def buildMetadata(self, args): + raise NotImplementedError() + + +class ArraySource(PageSource): + def __init__(self, app, inner_source, name='array', config=None): + super(ArraySource, self).__init__(app, name, config or {}) + self.inner_source = inner_source + + @property + def page_count(self): + return len(self.inner_source) + + def getPageFactories(self): + for p in self.inner_source: + yield CachedPageFactory(p) + + +class SimplePageSource(PageSource): + def __init__(self, app, name, config): + super(SimplePageSource, self).__init__(app, name, config) + self.fs_endpoint = config.get('fs_endpoint', name) + self.fs_endpoint_path = os.path.join(self.root_dir, CONTENT_DIR, self.fs_endpoint) + self.supported_extensions = app.config.get('site/auto_formats').keys() + + def buildPageFactories(self): + logger.debug("Scanning for pages in: %s" % self.fs_endpoint_path) + if not os.path.isdir(self.fs_endpoint_path): + raise InvalidFileSystemEndpointError(self.name, self.fs_endpoint_path) + + for dirpath, dirnames, filenames in os.walk(self.fs_endpoint_path): + rel_dirpath = os.path.relpath(dirpath, self.fs_endpoint_path) + dirnames[:] = filter(self._filterPageDirname, dirnames) + for f in filter(self._filterPageFilename, filenames): + slug, ext = os.path.splitext(os.path.join(rel_dirpath, f)) + if slug.startswith('./') or slug.startswith('.\\'): + slug = slug[2:] + if slug == '_index': + slug = '' + metadata = {'path': slug} + fac_path = f + if rel_dirpath != '.': + fac_path = os.path.join(rel_dirpath, f) + yield PageFactory(self, fac_path, metadata) + + def resolveRef(self, ref_path): + return os.path.join(self.fs_endpoint_path, ref_path) + + def findPagePath(self, metadata, mode): + uri_path = metadata['path'] + if uri_path == '': + uri_path = '_index' + path = os.path.join(self.fs_endpoint_path, uri_path) + _, ext = os.path.splitext(path) + + if mode == MODE_CREATING: + if ext == '': + return '%s.*' % path + return path, metadata + + if ext == '': + paths_to_check = ['%s.%s' % (path, e) + for e in self.supported_extensions] + else: + paths_to_check = [path] 
+ for path in paths_to_check: + if os.path.isfile(path): + return path, metadata + + return None, None + + def _filterPageDirname(self, d): + return not d.endswith('-assets') + + def _filterPageFilename(self, f): + name, ext = os.path.splitext(f) + return (f[0] != '.' and + f[-1] != '~' and + ext.lstrip('.') in self.supported_extensions and + f not in ['Thumbs.db']) + + +class DefaultPageSource(SimplePageSource, IPreparingSource): + SOURCE_NAME = 'default' + + def __init__(self, app, name, config): + super(DefaultPageSource, self).__init__(app, name, config) + + def setupPrepareParser(self, parser, app): + parser.add_argument('uri', help='The URI for the new page.') + + def buildMetadata(self, args): + return {'path': args.uri} +
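A page reference like the ones `PageRef` resolves is a semicolon-separated list of `source:path` pairs, with `%ext%` expanded against the site's auto-formats. Here is a small standalone sketch of how the `page_ref_pattern` regex above splits such a string; the reference string itself is a made-up example.

    import re

    # Same pattern as in sources/base.py.
    page_ref_pattern = re.compile(r'(?P<src>[\w]+)\:(?P<path>.*?)(;|$)')

    ref = 'pages:_tag.%ext%;theme_pages:_tag.%ext%'
    for m in page_ref_pattern.finditer(ref):
        print('%s -> %s' % (m.group('src'), m.group('path')))
    # -> pages -> _tag.%ext%
    # -> theme_pages -> _tag.%ext%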
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/sources/posts.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,230 @@ +import os +import os.path +import re +import glob +import logging +import datetime +from piecrust import CONTENT_DIR +from piecrust.sources.base import (PageSource, IPreparingSource, + PageNotFoundError, InvalidFileSystemEndpointError, + PageFactory, MODE_CREATING) + + +logger = logging.getLogger(__name__) + + +class PostsSource(PageSource, IPreparingSource): + PATH_FORMAT = None + + def __init__(self, app, name, config): + super(PostsSource, self).__init__(app, name, config) + self.fs_endpoint = config.get('fs_endpoint', name) + self.fs_endpoint_path = os.path.join(self.root_dir, CONTENT_DIR, self.fs_endpoint) + self.supported_extensions = app.config.get('site/auto_formats').keys() + + @property + def path_format(self): + return self.__class__.PATH_FORMAT + + def resolveRef(self, ref_path): + return os.path.join(self.fs_endpoint_path, ref_path) + + def findPagePath(self, metadata, mode): + year = metadata.get('year') + month = metadata.get('month') + day = metadata.get('day') + slug = metadata.get('slug') + + ext = metadata.get('ext') + if ext is None: + if len(self.supported_extensions) == 1: + ext = self.supported_extensions[0] + + replacements = { + 'year': year, + 'month': month, + 'day': day, + 'slug': slug, + 'ext': ext + } + needs_recapture = False + if year is None: + needs_recapture = True + replacements['year'] = '????' + if month is None: + needs_recapture = True + replacements['month'] = '??' + if day is None: + needs_recapture = True + replacements['day'] = '??' + if slug is None: + needs_recapture = True + replacements['slug'] = '*' + if ext is None: + needs_recapture = True + replacements['ext'] = '*' + path = os.path.join(self.fs_endpoint_path, self.path_format % replacements) + + if needs_recapture: + if mode == MODE_CREATING: + raise ValueError("Not enough information to find a post path.") + possible_paths = glob.glob(path) + if len(possible_paths) != 1: + raise PageNotFoundError() + path = possible_paths[0] + elif not os.path.isfile(path): + raise PageNotFoundError() + + regex_repl = { + 'year': '(?P<year>\d{4})', + 'month': '(?P<month>\d{2})', + 'day': '(?P<day>\d{2})', + 'slug': '(?P<slug>.*)', + 'ext': '(?P<ext>.*)' + } + pattern = os.path.join(self.fs_endpoint_path, self.path_format) % regex_repl + m = re.match(pattern, path) + if not m: + raise Exception("Expected to be able to match path with path " + "format: %s" % path) + fac_metadata = { + 'year': m.group('year'), + 'month': m.group('month'), + 'day': m.group('day'), + 'slug': m.group('slug') + } + + return path, fac_metadata + + def setupPrepareParser(self, parser, app): + parser.add_argument('-d', '--date', help="The date of the post, " + "in `year/month/day` format (defaults to today).") + parser.add_argument('slug', help="The URL slug for the new post.") + + def buildMetadata(self, args): + today = datetime.date.today() + year, month, day = today.year, today.month, today.day + if args.date: + year, month, day = filter( + lambda s: int(s), + args.date.split('/')) + return {'year': year, 'month': month, 'day': day, 'slug': args.slug} + + def _checkFsEndpointPath(self): + if not os.path.isdir(self.fs_endpoint_path): + raise InvalidFileSystemEndpointError(self.name, self.fs_endpoint_path) + + def _makeFactory(self, path, slug, year, month, day): + timestamp = datetime.date(year, month, day) + metadata = { + 'slug': slug, + 'year': year, + 'month': month, + 'day': day, + 'date': 
timestamp} + return PageFactory(self, path, metadata) + + +class FlatPostsSource(PostsSource): + SOURCE_NAME = 'posts/flat' + PATH_FORMAT = '%(year)s-%(month)s-%(day)s_%(slug)s.%(ext)s' + + def __init__(self, app, name, config): + super(FlatPostsSource, self).__init__(app, name, config) + + def buildPageFactories(self): + logger.debug("Scanning for posts (flat) in: %s" % self.fs_endpoint_path) + pattern = re.compile(r'(\d{4})-(\d{2})-(\d{2})_(.*)\.(\w+)$') + _, __, filenames = next(os.walk(self.fs_endpoint_path)) + for f in filenames: + match = pattern.match(f) + if match is None: + name, ext = os.path.splitext(f) + logger.warning("'%s' is not formatted as 'YYYY-MM-DD_slug-title.%s' " + "and will be ignored. Is that a typo?" % (f, ext)) + continue + yield self._makeFactory( + f, + match.group(4), + int(match.group(1)), + int(match.group(2)), + int(match.group(3))) + + +class ShallowPostsSource(PostsSource): + SOURCE_NAME = 'posts/shallow' + PATH_FORMAT = '%(year)s/%(month)s-%(day)s_%(slug)s.%(ext)s' + + def __init__(self, app, name, config): + super(ShallowPostsSource, self).__init__(app, name, config) + + def buildPageFactories(self): + logger.debug("Scanning for posts (shallow) in: %s" % self.fs_endpoint_path) + year_pattern = re.compile(r'(\d{4})$') + file_pattern = re.compile(r'(\d{2})-(\d{2})_(.*)\.(\w+)$') + _, year_dirs, __ = next(os.walk(self.fs_endpoint_path)) + year_dirs = filter(lambda d: year_pattern.match(d), year_dirs) + for yd in year_dirs: + if year_pattern.match(yd) is None: + logger.warning("'%s' is not formatted as 'YYYY' and will be ignored. " + "Is that a typo?") + continue + year = int(yd) + year_dir = os.path.join(self.fs_endpoint_path, yd) + + _, __, filenames = os.walk(year_dir) + for f in filenames: + match = file_pattern.match(f) + if match is None: + name, ext = os.path.splitext(f) + logger.warning("'%s' is not formatted as 'MM-DD_slug-title.%s' " + "and will be ignored. Is that a typo?" % (f, ext)) + continue + yield self._makeFactory( + os.path.join(yd, f), + match.group(3), + year, + int(match.group(1)), + int(match.group(2))) + + +class HierarchyPostsSource(PostsSource): + SOURCE_NAME = 'posts/hierarchy' + PATH_FORMAT = '%(year)s/%(month)s/%(day)s_%(slug)s.%(ext)s' + + def __init__(self, app, name, config): + super(HierarchyPostsSource, self).__init__(app, name, config) + + def buildPageFactories(self): + logger.debug("Scanning for posts (hierarchy) in: %s" % self.fs_endpoint_path) + year_pattern = re.compile(r'(\d{4})$') + month_pattern = re.compile(r'(\d{2})$') + file_pattern = re.compile(r'(\d{2})_(.*)\.(\w+)$') + _, year_dirs, __ = next(os.walk(self.fs_endpoint_path)) + year_dirs = filter(lambda d: year_pattern.match(d), year_dirs) + for yd in year_dirs: + year = int(yd) + year_dir = os.path.join(self.fs_endpoint_path, yd) + + _, month_dirs, __ = next(os.walk(year_dir)) + month_dirs = filter(lambda d: month_pattern.match(d), month_dirs) + for md in month_dirs: + month = int(md) + month_dir = os.path.join(year_dir, md) + + _, __, filenames = next(os.walk(month_dir)) + for f in filenames: + match = file_pattern.match(f) + if match is None: + name, ext = os.path.splitext(f) + logger.warning("'%s' is not formatted as 'DD_slug-title.%s' " + "and will be ignored. Is that a typo?" % (f, ext)) + continue + rel_name = os.path.join(yd, md, f) + yield self._makeFactory( + rel_name, + match.group(2), + year, + month, + int(match.group(1))) +
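The three post sources only differ in how the date is encoded in the file layout (the `PATH_FORMAT` of each class above). A quick sketch of the `posts/flat` naming convention, using an illustrative file name:

    import re

    # Same filename pattern used by FlatPostsSource.buildPageFactories().
    flat_pattern = re.compile(r'(\d{4})-(\d{2})-(\d{2})_(.*)\.(\w+)$')

    m = flat_pattern.match('2014-08-10_hello-world.md')
    print(m.groups())
    # -> ('2014', '08', '10', 'hello-world', 'md')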
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/taxonomies.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,30 @@ +from piecrust.sources.base import PageRef, PageNotFoundError + + +class Taxonomy(object): + def __init__(self, app, name, config): + self.app = app + self.name = name + self.term_name = config['term'] + self.is_multiple = config['multiple'] + self.page_ref = config['page'] + self._source_page_refs = {} + + def resolvePagePath(self, source_name): + pr = self.getPageRef(source_name) + try: + return pr.path + except PageNotFoundError: + return None + + def getPageRef(self, source_name): + if source_name in self._source_page_refs: + return self._source_page_refs[source_name] + + source = self.app.getSource(source_name) + ref_path = (source.getTaxonomyPageRef(self.name) or + '%s:%s' % (source_name, self.page_ref)) + page_ref = PageRef(self.app, ref_path) + self._source_page_refs[source_name] = page_ref + return page_ref +
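For context, `Taxonomy` expects a configuration mapping with `term`, `multiple` and `page` entries (the keys read by its constructor above); the values below are illustrative only and would presumably be filled in by the application's default configuration.

    # Hypothetical config for a 'tags' taxonomy, shaped like what
    # Taxonomy.__init__() reads above; keys are real, values illustrative.
    tags_config = {
        'term': 'tag',          # metadata/template name for a single term
        'multiple': True,       # pages can have several tags
        'page': '_tag.%ext%',   # page ref used to render term listing pages
    }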
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/templating/base.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,17 @@ + + +class TemplateNotFoundError(Exception): + pass + + +class TemplateEngine(object): + EXTENSIONS = [] + + def initialize(self, app): + self.app = app + + def renderString(self, txt, data, filename=None, line_offset=0): + raise NotImplementedError() + + def renderFile(self, paths, data): + raise NotImplementedError()
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/templating/jinjaengine.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,260 @@ +import re +import time +import logging +import strict_rfc3339 +from jinja2 import Environment, FileSystemLoader, TemplateNotFound +from jinja2.exceptions import TemplateSyntaxError +from jinja2.ext import Extension, Markup +from jinja2.nodes import CallBlock, Const +from pygments import highlight +from pygments.formatters import HtmlFormatter +from pygments.lexers import get_lexer_by_name, guess_lexer +from piecrust.rendering import format_text +from piecrust.routing import CompositeRouteFunction +from piecrust.templating.base import TemplateEngine, TemplateNotFoundError + + +logger = logging.getLogger(__name__) + + +class JinjaTemplateEngine(TemplateEngine): + # Name `twig` is for backwards compatibility with PieCrust 1.x. + ENGINE_NAMES = ['jinja', 'jinja2', 'twig'] + EXTENSIONS = ['jinja', 'jinja2', 'twig'] + + def __init__(self): + self.env = None + + def renderString(self, txt, data, filename=None, line_offset=0): + self._ensureLoaded() + tpl = self.env.from_string(txt) + try: + return tpl.render(data) + except TemplateSyntaxError as tse: + tse.lineno += line_offset + if filename: + tse.filename = filename + import sys + _, __, traceback = sys.exc_info() + raise tse, None, traceback + + def renderFile(self, paths, data): + self._ensureLoaded() + tpl = None + logger.debug("Looking for template: %s" % paths) + for p in paths: + try: + tpl = self.env.get_template(p) + break + except TemplateNotFound: + pass + if tpl is None: + raise TemplateNotFoundError() + return tpl.render(data) + + + def _ensureLoaded(self): + if self.env: + return + loader = FileSystemLoader(self.app.templates_dirs) + self.env = PieCrustEnvironment( + self.app, + loader=loader, + extensions=['jinja2.ext.autoescape', + PieCrustHighlightExtension, + PieCrustCacheExtension]) + + +class PieCrustEnvironment(Environment): + def __init__(self, app, *args, **kwargs): + super(PieCrustEnvironment, self).__init__(*args, **kwargs) + self.app = app + self.globals.update({ + 'fail': raise_exception}) + self.filters.update({ + 'formatwith': self._formatWith, + 'markdown': lambda v: self._formatWith(v, 'markdown'), + 'textile': lambda v: self._formatWith(v, 'textile'), + 'nocache': add_no_cache_parameter, + 'wordcount': get_word_count, + 'stripoutertag': strip_outer_tag, + 'stripslash': strip_slash, + 'titlecase': title_case, + 'atomdate': get_atom_date, + 'date': get_date}) + # Backwards compatibility with PieCrust 1.x. + self.globals.update({ + 'pcfail': raise_exception}) + + # Backwards compatibility with Twig. + twig_compatibility_mode = app.config.get('jinja/twig_compatibility') + if twig_compatibility_mode is None or twig_compatibility_mode is True: + self.trim_blocks = True + self.filters['raw'] = self.filters['safe'] + + # Add route functions. + for route in app.routes: + name = route.template_func_name + func = self.globals.get(name) + if func is None: + func = CompositeRouteFunction() + func.addFunc(route) + self.globals[name] = func + elif isinstance(func, CompositeRouteFunction): + self.globals[name].addFunc(route) + else: + raise Exception("Route function '%s' collides with an " + "existing function or template data." 
% + name) + + def _formatWith(self, value, format_name): + return format_text(self.app, format_name, value) + + +def raise_exception(msg): + raise Exception(msg) + + +def add_no_cache_parameter(value, param_name='t', param_value=None): + if not param_value: + param_value = time.time() + if '?' in value: + value += '&' + else: + value += '?' + value += '%s=%s' % (param_name, param_value) + return value + + +def get_word_count(value): + return len(value.split()) + + +def strip_outer_tag(value, tag=None): + tag_pattern = '[a-z]+[a-z0-9]*' + if tag is not None: + tag_pattern = re.escape(tag) + pat = r'^\<' + tag_pattern + r'\>(.*)\</' + tag_pattern + '>$' + m = re.match(pat, value) + if m: + return m.group(1) + return value + + +def strip_slash(value): + return value.rstrip('/') + + +def title_case(value): + return value.title() + + +def get_atom_date(value): + return strict_rfc3339.timestamp_to_rfc3339_localoffset(int(value)) + + +def get_date(value, fmt): + return time.strftime(fmt, time.localtime(value)) + + +class PieCrustHighlightExtension(Extension): + tags = set(['highlight', 'geshi']) + + def __init__(self, environment): + super(PieCrustHighlightExtension, self).__init__(environment) + + def parse(self, parser): + lineno = next(parser.stream).lineno + + # Extract the language name. + args = [parser.parse_expression()] + + # Extract optional arguments. + kwarg_names = {'line_numbers': 0, 'use_classes': 0, 'class': 1, 'id': 1} + kwargs = {} + while not parser.stream.current.test('block_end'): + name = parser.stream.expect('name') + if name.value not in kwarg_names: + raise Exception("'%s' is not a valid argument for the code " + "highlighting tag." % name.value) + if kwarg_names[name.value] == 0: + kwargs[name.value] = Const(True) + elif parser.stream.skip_if('assign'): + kwargs[name.value] = parser.parse_expression() + + # body of the block + body = parser.parse_statements(['name:endhighlight', 'name:endgeshi'], + drop_needle=True) + + return CallBlock(self.call_method('_highlight', args, kwargs), + [], [], body).set_lineno(lineno) + + def _highlight(self, lang, line_numbers=False, use_classes=False, + css_class=None, css_id=None, caller=None): + # Try to be mostly compatible with Jinja2-highlight's settings. + body = caller() + + if lang is None: + lexer = guess_lexer(body) + else: + lexer = get_lexer_by_name(lang, stripall=False) + + if css_class is None: + try: + css_class = self.environment.jinja2_highlight_cssclass + except AttributeError: + pass + + if css_class is not None: + formatter = HtmlFormatter(cssclass=css_class, + linenos=line_numbers) + else: + formatter = HtmlFormatter(linenos=line_numbers) + + code = highlight(Markup(body.rstrip()).unescape(), lexer, formatter) + return code + + +class PieCrustCacheExtension(Extension): + tags = set(['pccache']) + + def __init__(self, environment): + super(PieCrustCacheExtension, self).__init__(environment) + + environment.extend( + piecrust_cache_prefix='', + piecrust_cache={} + ) + + def parse(self, parser): + # the first token is the token that started the tag. In our case + # we only listen to ``'pccache'`` so this will be a name token with + # `pccache` as value. We get the line number so that we can give + # that line number to the nodes we create by hand. + lineno = parser.stream.next().lineno + + # now we parse a single expression that is used as cache key. 
+ args = [parser.parse_expression()] + + # now we parse the body of the cache block up to `endpccache` and + # drop the needle (which would always be `endpccache` in that case) + body = parser.parse_statements(['name:endpccache'], drop_needle=True) + + # now return a `CallBlock` node that calls our _cache_support + # helper method on this extension. + return CallBlock(self.call_method('_cache_support', args), + [], [], body).set_lineno(lineno) + + def _cache_support(self, name, caller): + key = self.environment.piecrust_cache_prefix + name + + # try to load the block from the cache + # if there is no fragment in the cache, render it and store + # it in the cache. + rv = self.environment.piecrust_cache.get(key) + if rv is not None: + return rv + rv = caller() + self.environment.piecrust_cache[key] = rv + return rv +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/piecrust/uriutil.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,74 @@ +import re +import string +import logging +import functools + + +logger = logging.getLogger(__name__) + + +class UriError(Exception): + def __init__(self, uri): + super(UriError, self).__init__("Invalid URI: %s" % uri) + + +@functools.total_ordering +class UriInfo(object): + def __init__(self, uri, source, args, taxonomy=None, page_num=1): + self.uri = uri + self.source = source + self.args = args + self.taxonomy = taxonomy + self.page_num = page_num + + def __eq__(self, other): + return ((self.uri, self.source, self.args, self.taxonomy, + self.page_num) == + (other.uri, other.source, other.args, other.taxonomy, + other.page_num)) + + def __lt__(self, other): + return ((self.uri, self.source, self.args, self.taxonomy, + self.page_num) < + (other.uri, other.source, other.args, other.taxonomy, + other.page_num)) + + +pagenum_pattern = re.compile(r'/(\d+)/?$') + + +def parse_uri(routes, uri): + if string.find(uri, '..') >= 0: + raise UriError(uri) + + page_num = 1 + match = pagenum_pattern.search(uri) + if match is not None: + uri = uri[:match.start()] + page_num = int(match.group(1)) + + uri = '/' + uri.strip('/') + + for rn, rc in routes.iteritems(): + pattern = route_to_pattern(rn) + m = re.match(pattern, uri) + if m is not None: + args = m.groupdict() + return UriInfo(uri, rc['source'], args, rc.get('taxonomy'), + page_num) + + return None + + +r2p_pattern = re.compile(r'%(\w+)%') + + +def route_to_pattern(route): + return r2p_pattern.sub(r'(?P<\1>[\w\-]+)', route) + + +def multi_replace(text, replacements): + reps = dict((re.escape(k), v) for k, v in replacements.iteritems()) + pattern = re.compile("|".join(reps.keys())) + return pattern.sub(lambda m: reps[re.escape(m.group(0))], text) +
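A short standalone illustration of `route_to_pattern` above, which is what `parse_uri` relies on to turn a route string into a matchable regex; the route and URI are examples only.

    import re

    from piecrust.uriutil import route_to_pattern

    pattern = route_to_pattern('/articles/%slug%')
    print(pattern)
    # -> /articles/(?P<slug>[\w\-]+)
    print(re.match(pattern, '/articles/foo').groupdict())
    # -> {'slug': 'foo'}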
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/pytest.ini Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,3 @@ +[pytest] +norecursedirs = .* _darcs CVS .svn .git .hg *.egg-info dist node_modules venv tmp* {args} +
--- a/tests/__init__.py Wed Dec 25 22:16:46 2013 -0800 +++ b/tests/__init__.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,1 @@ +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/mockutil.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,9 @@ +import mock +from piecrust.app import PieCrust, PieCrustConfiguration + + +def get_mock_app(config=None): + app = mock.MagicMock(spec=PieCrust) + app.config = PieCrustConfiguration() + return app +
--- a/tests/test_configuration.py Wed Dec 25 22:16:46 2013 -0800 +++ b/tests/test_configuration.py Sun Aug 10 23:43:16 2014 -0700 @@ -14,7 +14,7 @@ def test_config_set_all(): config = Configuration() - config.set_all({'foo': 'bar'}) + config.setAll({'foo': 'bar'}) assert config.get() == {'foo': 'bar'} @@ -103,4 +103,3 @@ } assert config.get() == expected -
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test_page.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,72 @@ +import pytest +from piecrust.page import parse_segments + + + +test_parse_segments_data1 = ("", {'content': ''}) +test_parse_segments_data2 = ("Foo bar", {'content': 'Foo bar'}) +test_parse_segments_data3 = ("""Something that spans +several lines +like this""", + {'content': """Something that spans +several lines +like this"""}) +test_parse_segments_data4 = ("""Blah blah +---foo--- +Something else +---bar--- +Last thing +""", + { + 'content': "Blah blah\n", + 'foo': "Something else\n", + 'bar': "Last thing\n"}) +test_parse_segments_data5 = ("""Blah blah +<--textile--> +Here's some textile +""", + { + 'content': [ + ("Blah blah\n", None), + ("Here's some textile\n", 'textile')]}) +test_parse_segments_data6 = ("""Blah blah +Whatever +<--textile--> +Oh well, that's good +---foo--- +Another segment +With another... +<--change--> +...of formatting. +""", + { + 'content': [ + ("Blah blah\nWhatever\n", None), + ("Oh well, that's good\n", 'textile')], + 'foo': [ + ("Another segment\nWith another...\n", None), + ("...of formatting.\n", 'change')]}) + +@pytest.mark.parametrize('text, expected', [ + test_parse_segments_data1, + test_parse_segments_data2, + test_parse_segments_data3, + test_parse_segments_data4, + test_parse_segments_data5, + test_parse_segments_data6, + ]) +def test_parse_segments(text, expected): + actual = parse_segments(text) + assert actual is not None + assert actual.keys() == expected.keys() + for key, val in expected.iteritems(): + if isinstance(val, str): + assert len(actual[key].parts) == 1 + assert actual[key].parts[0].content == val + assert actual[key].parts[0].fmt is None + else: + assert len(actual[key].parts) == len(val) + for i, part in enumerate(val): + assert actual[key].parts[i].content == part[0] + assert actual[key].parts[i].fmt == part[1] +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test_pagebaker.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,40 @@ +import pytest +from piecrust.baking.baker import PageBaker +from .mockutil import get_mock_app + + +@pytest.mark.parametrize('uri, page_num, pretty, expected', [ + # Pretty URLs + ('', 1, True, 'index.html'), + ('', 2, True, '2/index.html'), + ('foo', 1, True, 'foo/index.html'), + ('foo', 2, True, 'foo/2/index.html'), + ('foo/bar', 1, True, 'foo/bar/index.html'), + ('foo/bar', 2, True, 'foo/bar/2/index.html'), + ('foo.ext', 1, True, 'foo.ext/index.html'), + ('foo.ext', 2, True, 'foo.ext/2/index.html'), + ('foo.bar.ext', 1, True, 'foo.bar.ext/index.html'), + ('foo.bar.ext', 2, True, 'foo.bar.ext/2/index.html'), + # Ugly URLs + ('', 1, False, 'index.html'), + ('', 2, False, '2.html'), + ('foo', 1, False, 'foo.html'), + ('foo', 2, False, 'foo/2.html'), + ('foo/bar', 1, False, 'foo/bar.html'), + ('foo/bar', 2, False, 'foo/bar/2.html'), + ('foo.ext', 1, False, 'foo.ext'), + ('foo.ext', 2, False, 'foo/2.ext'), + ('foo.bar.ext', 1, False, 'foo.bar.ext'), + ('foo.bar.ext', 2, False, 'foo.bar/2.ext') + ]) +def test_get_output_path(uri, page_num, pretty, expected): + app = get_mock_app() + if pretty: + app.config.set('site/pretty_urls', True) + assert app.config.get('site/pretty_urls') == pretty + + baker = PageBaker(app, '/destination') + path = baker.getOutputPath(uri, page_num) + expected = '/destination/' + expected + assert expected == path +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test_processing_tree.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,38 @@ +from piecrust.processing.base import CopyFileProcessor, SimpleFileProcessor +from piecrust.processing.tree import ProcessingTreeBuilder, ProcessingTreeNode + + +class MockProcessor(SimpleFileProcessor): + def __init__(self): + super(MockProcessor, self).__init__({'mock': 'out'}) + self.processed = [] + + def _doProcess(self, in_path, out_path): + self.processed.append((in_path, out_path)) + + +mock_processors = [MockProcessor(), CopyFileProcessor()] +IDX_MOCK = 0 +IDX_COPY = 1 + + +def test_mock_node(): + node = ProcessingTreeNode('/foo.mock', list(mock_processors)) + assert node.getProcessor() == mock_processors[IDX_MOCK] + + +def test_copy_node(): + node = ProcessingTreeNode('/foo.other', list(mock_processors)) + assert node.getProcessor() == mock_processors[IDX_COPY] + + +def test_build_simple_tree(): + builder = ProcessingTreeBuilder(mock_processors) + root = builder.build('/foo.mock') + assert root is not None + assert root.getProcessor() == mock_processors[IDX_MOCK] + assert not root.is_leaf + assert len(root.outputs) == 1 + out = root.outputs[0] + assert out.getProcessor() == mock_processors[IDX_COPY] +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test_server.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,34 @@ +import re +import pytest +import mock +from piecrust.serving import find_routes +from piecrust.sources.base import REALM_USER, REALM_THEME + + +@pytest.mark.parametrize('uri, route_specs, expected', + [ + ('/', + [{'src': 'pages', 'pat': '(?P<path>.*)'}], + [('pages', {'path': ''})]), + ('/', + [{'src': 'pages', 'pat': '(?P<path>.*)'}, + {'src': 'theme', 'pat': '(?P<path>.*)', 'realm': REALM_THEME}], + [('pages', {'path': ''}), ('theme', {'path': ''})]) + ]) +def test_find_routes(uri, route_specs, expected): + routes = [] + for rs in route_specs: + m = mock.Mock() + m.source_name = rs['src'] + m.source_realm = rs.setdefault('realm', REALM_USER) + m.uri_re = re.compile(rs['pat']) + routes.append(m) + matching = find_routes(routes, uri) + + assert len(matching) == len(expected) + for i in range(len(matching)): + route, metadata = matching[i] + exp_source, exp_md = expected[i] + assert route.source_name == exp_source + assert metadata == exp_md +
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test_uriutil.py Sun Aug 10 23:43:16 2014 -0700 @@ -0,0 +1,26 @@ +import pytest +from piecrust.uriutil import UriInfo, parse_uri + + +@pytest.mark.parametrize('routes, uri, expected', [ + ({}, '/foo', None), + ( + {'/articles/%slug%': {'source': 'dummy'}}, + '/articles/foo', + UriInfo('', 'dummy', {'slug': 'foo'})), + ( + {'/foo/%bar%': {'source': 'foo'}, + '/other/%one%-%two%': {'source': 'other'}}, + '/other/some-thing', + UriInfo('', 'other', {'one': 'some', 'two': 'thing'})) + ]) +def test_parse_uri(routes, uri, expected): + if expected is not None: + expected.uri = uri + for pattern, args in routes.iteritems(): + if 'taxonomy' not in args: + args['taxonomy'] = None + + actual = parse_uri(routes, uri) + assert actual == expected +