Tab Completion


Avoiding Novas, or, Encouraging Dramatic Escalation in D&D


I (as you probably know from my previous post and others on this blog) love D&D, and roleplaying games in general. A recent tweet from @dungeonbastard brought up a problem that I'd never consciously noticed before, but which I now realize I've been constantly fighting against.

When we play D&D, we pretend it's a narrative game, that we're telling a story. But that's just one layer - the actual mechanics of the game, which drive the story, are descended from wargaming. This creates a conflict of goals during scenes like combat - in stories, combat has a dramatic arc, with an opening, a build-up, and a climax where the characters pull out their strongest abilities; in wargames, action economy rules everything, and you want to minimize overkill, so it's almost always best to blow your strongest moves immediately (the so-called "nova") and then mop up whatever's left. D&D is about roleplay, but it's also a game, and we play games to win, so this conflict is frustrating.

In the replies people had a lot of suggestions, often based on mechanics from games that explicitly have a stronger narrative focus, and which have crafted their mechanics to support it. One in particular I found extremely compelling: it comes from a game called 13th Age, which is based on D&D but with a stronger focus on getting the mechanics to support the narrative.

The Escalation Die

The basic idea is that there's a special d6, preferably a large, special-looking one, called the Escalation Die. After the first round of combat, assuming it was exciting, you put it on the table turned to 1. Each round thereafter, as long as the combat stays exciting, you turn it to the next number, maxing out at 6. If things get dull (players being safe and defensive instead of pressing the attack), you can leave it at its current value, or even decrease it.

The players, then, get a bonus to all attacks, saves, and checks (and save DCs) equal to the current value of the Escalation Die. To compensate for this added bonus, all enemies get a +2 to their AC, which is canceled out by round 3 of a combat.

The point of this is that when combat starts, players are less likely to hit, but after a few rounds, they have a substantial bonus. This means their big, flashy attacks are best saved for a few rounds, to maximize their chance of succeeding - instead of nova-ing at the start of battle, you spend the time setting yourself up and beginning the engagement.

To enhance the effect, you can tie the bigger, flashier class abilities to the Escalation Die as well - you can't use your highest level of spells, or your X/day abilities, or what-have-you, until the Escalation Die is at 3+ or something.

While most monsters don't pay attention to the Escalation Die (and thus the combat gets easier as it escalates), "boss" monsters do (so both parties hit more often).


All in all, I just really like this concept. The restrictions ensure that you don't pull out your flashy abilities until later in the fight, which is nice narratively, but it's not all downside - the escalating bonus is just genuinely good, and helps protect against the "everyone misses several times in a row" runs of bad luck that sometimes happen.

I think I'll introduce this to the campaign I'm running for my little brother. He's already shown the power of the Paladin's smites, and so restricting that by the Escalation Die seems like it would be fun. This is also a solo campaign, so his fights will either be solo or with a single companion run by me, and the decreased chance of a bad run of luck later in the battle will be really effective at preventing a bad-luck death.

Fantasy World: Spirits as Corporations, Gods as States


(This is another entry in my collection of fantasy world-building ideas.)

I've picked up the habit, from people like Doctorow and Stross, of thinking of corporations and other things that are large groupings of people as a lifeform all their own. We humans, as complex multi-cellular organisms, are made from living cells, heavily-adapted from previously free-living single-celled organisms, but don't share many traits with those cells - we're a pattern on top of them, forming a totally novel form of life, with motivations and behaviors dramatically different from those of our component cells.

Similarly, corporations act like living things, built out of component humans, but acting as a novel pattern on top of those humans. Very small companies are still dominated by the individual humans in them, but at a certain size they inevitably start acting like something new, something beyond the humans leading them. You can swap out the management of most companies, and the company will continue living on mostly unchanged - they're composed of independent structure that uses humans, but is not actually driven by humans, just as we humans use cells. (The analogy isn't perfect, of course - we humans are built of trillions of cells, so any one cell has approximately zero chance of influencing us (except in rare cases like a lucky cancer cell), but corporations are made of a comparatively much smaller number of humans, so individual humans can have a larger effect on the whole.)

Biological organisms survive by expending their stores of energy to hunt down other biological organisms, consuming them and gaining more calories than they spent, to distribute to the cells that they're made up of. (Only on the extreme margins do biologicals do other things for energy, like photosynthesis or chemosynthesis. Plants and bacteria are clearly on a separate order of life than humans and other animals.)

Similarly, economic organisms like corporations survive by expending money to develop product lines and advertising, to convince other economic organisms (humans, or other corporations) to give up some of their own money, which the corporation then distributes to the humans they're made up of. They survive if they gain more money than they spent.

There's a closely related entity to the corporation that lives a dramatically different lifestyle - the state. While states do still have a nucleus of humans that drive them and which need to be fed with money (government employees), the way they feed is completely different. Rather than convincing other economic entities to give up their money via advertising and such, states simply claim space, and any economic organisms living in that space have to pay taxes to the state for the privilege of living there. States do still compete to a point, making their spaces more attractive, or their laws friendlier, but for the most part the organisms they extract money from are a captive populace.

To put the analogy a little differently, states are the brontosauruses of the economy - huge, lumbering titans that feed on whatever's around, and more or less immune to attack unless they're young or already severely injured. Corporations are the carnivores - mostly small and feisty, grabbing money where they can, but occasionally growing large and old and fearsome.

Get To The Fantasy Already

Okay, so let's shift gears to my fantasy world.

The basic idea of my metaphysics is that the world exists simultaneously in two layers: a physical one and a narrative one. Things that happen in one layer are reflected in the other; this is the basis of how magic works. At its core the world does not look like our familiar physical world, as many physical processes are driven by things in the narrative layer.

For example, atomic physics as we know it doesn't exist - the world is instead composed of a small number of complex-at-their-core elements, as I explained in the earlier Magical Metaphysics post. Our physical world is mostly composed of Physical and Biological elements, with Mental, Divine, and Energetic elements playing a more transient role. (That is, Mental elements don't "stick around" as part of physical reality; they're generated by and affect the physical world, but don't have a stable existence on their own.) The narrative layer is the opposite - it's mostly Mental and Divine, with Physical, Biological, and Energetic elements playing only a transient role.

So, spiritual beings exist mostly in the narrative layer, composed of thoughts and moral forces. They're literally made of stories, and are fed by people thinking of them (or more strongly, by worshiping and devoting themselves to the spirit). Even just reading about a spirit in detail strengthens that spirit, as your own mental impressions feed into the narrative layer and reinforce their existence.

Now to D&D. I really like the Warlock class in 5e - it's fun and unique, and I end up mixing at least a few levels into most of my character concepts. But I absolutely abhor the default flavor given to them. They're cast as mini-Clerics, devoting themselves in some way to the entity they've made a pact with and draw power from. But the default pact-capable entities are all bad! The most neutral one is an Archfey, but 5e has tried hard to make it clear that the Fey are amoral as we understand morality. The other two are just straight-up demons and eldritch horrors from beyond reality. We're supposed to believe that people who have literally contracted with demons, or promised to bring chaos and corruption into the world for magical power, won't be shunned, jailed, or killed on sight? It's just straight-up evil! This is hard enough to reconcile in a standard D&D game, where everyone can just conveniently ignore most of the implications, but it's extremely troublesome for worldbuilding, where I want to have Pact Magic be the easiest and most common form of magic among people.

So, I've recast things a bit. All spiritual beings of middling power are naturally subject to Pacts, which bind them against their will and allow people to draw power from them. Pacts are a deeply-worn groove in the narrative structure of reality, and it's well-known that forging a pact with an entity requires zero agreement from that entity - there's no implication that you agree with them at all. In fact, many holy organizations heavily use demonic pacts, with the goal of draining the power of the demons and limiting their influence on the world.

But it's not totally one-sided. As I said, spiritual/narrative entities survive on attention. Being aware of them, thinking about them, recording details of them for future generations to read - all of this supports their existence. Entities who are subject to pacts are actually made more secure in existence - they're less likely to be forgotten and fade away. Pacts require careful study of an entity, to form the proper binding circle and rites, and this sort of study and recording is exactly what they want and need.

(Becoming more powerful with pact magic does require aligning yourself with the being you've bound, but that doesn't mean morally aligning yourself with them. Narrative entities have various domains, and you can align yourself with one while ignoring the more distasteful ones. For example, many demons are spiritually aligned with fire, and so embracing fire magic or what-have-you aligns you spiritually with them, enabling you to raise your binding and increase your power.)

Note that only middling-power entities are subject to pacts. Weak entities either don't have enough "heft" to be properly grabbed by a Pact, or simply don't have enough power to survive a binding - it's in the binder's best interests to find an entity that can actually supply the power they want to draw.

Powerful spiritual entities, on the other hand, are immune to pacts as a matter of their nature. Much as the corporate life structure transitions to the surface-similar but deep-different feeding structure of the state, at some point in their growth spirits might lay claim to some area in narrative-space directly. They're no longer trying to spread and strengthen their own story, they're squatting on the abstract idea of some story, so that whenever that particular narrative plays out in the world, even if the people don't acknowledge the entity at all, it's still able to draw power from it. These are what we would traditionally call Gods, with their Domains: a God of Murder, for example, is strengthened by any murder, regardless of how or why it was performed; a God of the Harvest is strengthened by any agriculture, regardless of what it is; etc. There's still a notion of space and location in this - there can be multiple murder Gods, each feeding off of a different area.

Direct devotion to the God is still more nourishing, of course, and so they are incentivized to maintain a priesthood and spread their individual stories as well. If they become forgotten and live only off of anonymous narratives, they're vulnerable to being displaced by a fresh god, flush with worship and narrative control and looking to expand or establish their Domain. That is how Gods die - not directly by the hand of anyone physical, but by being replaced by other Gods.

Newbie's Guide to Python Module Management


I've always been confused by Python modules and how they work, but was able to kinda muddle thru on my own, as I expect most of us do. I recently sat down and actually taught myself how they work, and I think I have a handle on them now. To test it, I refactored a chunk of code in one of my projects that was bothering me, and as far as I can tell, I did so successfully!

This post documents that process, to hopefully help others in figuring stuff out.

The Problem

My main open-source project, Bikeshed, has to maintain a set of data files. These get updated frequently, so users can call bikeshed update to get new data for them, straight from the data sources. Each data source gets its own independent processing; there's not really any shared code between the different data files.

Originally there were only two types of data, and I wasn't doing anything too complicated with either of them, so I just went ahead and crammed both of the update functions into the same file, update.py. Fast-forward two years, and I've now got six independent update functions in this file, and several of them have gotten substantially more complicated. Refactoring code into helper functions is becoming hard, because it makes it more difficult to find the "main" functions buried in the sea of code.

What I'd really like is for each independent updater function, and its associated suite of helper functions, to live in a separate file. But I've already got a lot of files in my project - it would be great to have them all grouped into a subfolder.

Intro to Python Packages/Modules

So each foo.py file in your project automatically defines a module, named foo. You can import these files and get access to their variables with from . import foo, or from .foo import someVariable. (These are explicit relative imports, which you should be using, rather than the "implicit relative imports" that Python 2 originally shipped with; the . means "look in the package this module lives in".)

Each foo folder in your project defines a package named foo, if the folder has an __init__.py file in it. Packages are imported exactly like modules, with from . import foo/etc; the only difference is that packages can contain submodules (and subpackages) in addition to variables. This is how you get imports like import foo.bar.baz - foo and bar are packages (with bar a subpackage of foo), baz is either a package or a module.

Whenever you import a package, Python will run the __init__.py file and expose its variables for importing. (That means all the global variable names the code in __init__.py can see, including any modules that the code itself imports!) It also automatically exposes any submodules in the package, regardless of whether __init__.py imports them or not: you can write import foo.bar if the foo/ folder contains a bar.py file, without foo/__init__.py having to do anything special. (Same for nested packages.)
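
To make that concrete, here's a tiny hypothetical package (the names foo, baz, and the variables are just placeholders, not anything from a real project):

# foo/__init__.py
import json
someVariable = 1

# foo/baz.py
anotherVariable = 2

# elsewhere in your project
import foo
foo.someVariable         # 1, defined in __init__.py
foo.json                 # the json module, visible because __init__.py imported it
import foo.baz           # works even though __init__.py never mentions baz
foo.baz.anotherVariable  # 2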

Finally, whenever you do a * import (like from foo import *), Python will go ahead and pull in all the variables that foo/__init__.py defines and dump them into your namespace, but it does not dump submodules in unless __init__.py explicitly imported them already. (This is because the submodules might not be supposed to be part of the public API, and importing them may have side-effects, since importing a module just runs its code, and you might not want those side-effects to automatically happen.) Instead, it looks to see if __init__.py defined a magical __all__ variable; if it did, it assumes it's a list of strings naming all the submodules that should be imported by a * import, and does so.

(AKA, if your __init__.py already imports all the submodules you use or intend to expose, you're fine. If there are more that __init__.py doesn't use, but you want to expose to *-imports, set __all__ = ["sub1", "sub2"] in __init__.py.)
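
As a quick sketch of how the *-import rules play out (again with hypothetical names):

# foo/__init__.py
from . import bar          # bar is exposed because we imported it here
__all__ = ["bar", "baz"]   # baz is exposed too, because it's listed in __all__

# elsewhere in your project
from foo import *
bar                        # available
baz                        # also available, pulled in thanks to __all__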

The Solution

So now we have all the information we need.

Step 1 is creating an update/ folder, and adding a blank __init__.py file. We now have an update package ready to import, even tho it's empty right now.

Step 2 is copying out all the code into submodules; I created an update/updateCrossRefs.py file and copied the cross-ref updater code into it, and so on. Now that the code is in separate files, I can rename the updater functions to all be just def update() for simplicity; no need to mention what they're updating when that's already in the module name.
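
Each submodule then ends up shaped roughly like this (just a sketch - the helper names and bodies are invented for illustration, not the actual Bikeshed code):

# update/updateCrossRefs.py

def update():
    # grab the latest cross-ref data and rewrite the data files
    data = fetchCrossRefData()
    writeDataFiles(data)

# helpers specific to cross-refs can live here without cluttering anything else
def fetchCrossRefData():
    ...

def writeDataFiles(data):
    ...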

Now that the code has moved from a top-level module in my project into submodules, its import statements are wrong - anything that says from . import foo will now look in the update package, not the overall project. That's easy to fix: I just change these to from .. import foo; you can add as many dots as you need to move further up the package tree.

At this point I'm already mostly done; I can run import update, then later call update.updateCrossRefs.update(), and it magically works! The last step is in handling "global" code, and putting together a good __init__.py.

For Step 3, I have one leftover piece of code, the general update() function that updates everything (or whatever subset of stuff I want). This is the only function the outside world ever actually calls; it's the only thing that calls the more specific updaters.

There are a few ways to do this - you can just put it directly in __init__.py and call it a day. But that exposes the imports it uses, and I want to keep the update module’s API surface nice and clean. Instead, I create another submodule, main.py, and put the function over there. Then, in __init__.py, I just write from .main import update. Now the outside world can say from . import update, and then just call update.update(), without having to know that the function is actually defined in a submodule.
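
Here's a minimal sketch of that arrangement (the keyword arguments and the particular updaters are just illustrative):

# update/main.py
from . import updateBiblio, updateCrossRefs

def update(biblio=True, crossRefs=True):
    # call whichever specific updaters were requested
    if biblio:
        updateBiblio.update()
    if crossRefs:
        updateCrossRefs.update()

# update/__init__.py
from .main import update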

Now that this is all done, I can finally delete the original update.py file in my main project directory. It's empty at this point, after all. ^_^

The End Result

I end up with the following directory structure:

bikeshed/
  ...other stuff...
  update/
    __init__.py
    main.py
    updateCrossRefs.py
    updateBiblio.py
    ...

And my __init__.py just says:

from .main import update, fixupDataFiles

__all__ = ["updateCrossRefs", "updateBiblio", 
           "updateCanIUse", "updateLinkDefaults", 
           "updateTestSuites", "updateLanguages"]

Then my project code, which was already doing from . import update, and calling update.update() (or update.fixupDataFiles()), continues to work and never realizes anything has changed at all!

How To: Clone your dang website you manage via git


This is a reminder post for myself, because I just had to re-clone my website on my local machine and futzed about for half an hour before figuring it out.

So, you've already set up your website as a bare git repository. Good for you! You've also got ssh working, and remember your ssh login details: the @ address and the password. The last thing you need is the location, on the remote server, of the actual git repository. (In my case, /home/public, the directory structure nearlyfreespeech.net uses by default.)

Now just:

git clone <ssh-address>:<remote-location> website

It'll ask for a password, then ta-da!
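
For example, with made-up values (substitute your own ssh user, host, and repository path):

git clone myuser_mysite@ssh.example.com:/home/public website
cd website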

Later, if I remember to, I'll update this with a reminder about getting an ssh key set up correctly, so I don't have to type a password on every push.

Why I Abandoned @apply


Some time ago, I created the modern spec for CSS Variables, which lets you store uninterpreted values into "custom properties", then substitute those values into real properties later on.

This worked great, and was particularly convenient for Shadow DOM, which wanted a simple, controllable way to let the outside page twiddle the styling of a component, but without giving them full access to the internals of the component. The component author could just use a couple of var() functions, with default values so that it worked automatically, and then the component user could set custom properties on the shadow host and let them inherit into the component.
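
A minimal sketch of that pattern, with invented property and class names:

/* shadow DOM stylesheet - falls back to gray if the page doesn't say otherwise */
.button {
  background: var(--button-background, gray);
}

/* outer page - the custom property inherits through the shadow boundary */
x-component {
  --button-background: rebeccapurple;
}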

After a while, tho, we realized that this still had some issues. It was perfect for small amounts of customization - providing a "theme color" for a component or similar - but it fell down when you wanted to allow arbitrary styling of a component. For example, an alert widget might want to expose its title area to whatever inline styling the user wants. If the component could use light-DOM elements, pulled into the shadow DOM via slots, this works fine, as the user can target and fully style that element, but if the component itself generated the element, they were out of luck. The only way to get close is to define a whole bunch of custom properties, mimicking the actual CSS properties you want to allow, and then apply them all to the element in the shadow's stylesheet. This is awkward at best, and at worst can mean literally hundreds of boilerplate custom properties per styleable element, which is ugly, awkward, and slow - the trifecta!
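
That "mimic every property" workaround looks something like this (property names invented for illustration):

/* shadow DOM stylesheet */
.heading {
  color: var(--heading-color, black);
  font-weight: var(--heading-font-weight, bold);
  text-decoration: var(--heading-text-decoration, none);
  /* ...and so on, for every single property you want to expose */
}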

So I thought - hey, custom properties can hold anything, right? Why not have them hold an entire ruleset instead of a single value, and then add a new capability to let you substitute the whole shebang into an element's style? And thus the @apply rule was born:

/* outer page */
x-component {
  --heading-style: {
    color: red;
    text-decoration: underline;
    font-weight: bold;
  };
}
/* shadow DOM stylesheet */
.heading {
  @apply(--heading-style);
}

Mixing Levels

This seemed like a really elegant solution for a while - I got to reuse some existing functionality to solve a new problem! But gradually, we realized that it came with its own problems.

First, folding this into custom properties meant we were mixing levels, in a way that turned out awkward. For example, using a var() in a ruleset didn't do what you might think:

.list {
  --heading-style: {
    color: var(--theme-color);
  };
}
.list > x-component:first-child {
  --theme-color: red;
}
.list > x-component:last-child {
  --theme-color: blue;
}

The above code looks like it sets up the heading styles for all the components in the list, deferring the color property's value to the --theme-color variable, set on the individual components. Instead, tho, it subs in the value of --theme-color on the .list element itself.

This is due to the fact that custom properties don't care what's inside of them. A value, a ruleset, it all looks the same. As far as the CSS engine is concerned, that first rule was:

.list {
  --heading-style: █████var(--theme-color)█████;
}

And so it happily says "hey look, a variable reference! I know how to handle those!" and eagerly substitutes in the value from that element, rather than waiting until it's actually @apply'd.

I am planning on adding a feature to variables that makes them resolve "late", at use-time rather than definition-time, which would "solve" this. But it wouldn't really work: it requires the user to remember that they have to do this special thing for variables in custom property rulesets, and it doesn't solve the problem of wanting to use a late variable in a property meant to be @apply'd. Basically, this just kicks the problem a little further down the road; it doesn't actually solve anything.

This "mixing levels" thing would persist and cause problems in many other different ways.

Setting Custom Properties Inside

An obvious thing you might want to do in an @apply ruleset is set more custom properties, to provide styles for further-nested components. This brings up some new difficulties.

For one, can you set variables intended for the @apply'd element itself? You can do that with normal variables:

/* This is fine */
.foo {
  --one: blue;
  color: var(--one);
}

/* But does this work? */
.foo {
  --one: { color: blue; }
  @apply(--one)? 
}

/* How about this? */
.foo {
  --one: { --two: blue; }
}
.foo > .child {
  @apply(--one);
  color: var(--two);
}

There are use-cases for these. In particular, the last one makes sense; if it didn't work, then when you wanted to style an element in a shadow, you'd have to write all the normal properties in a custom property intended for @apply, then separately write all the custom properties you want to define:

x-component {
  --heading-style: {
    text-decoration: underline;
  };
  --theme-color: blue;
}
/* But this wouldn't work: */
x-component {
  --heading-style: {
    text-decoration: underline;
    --theme-color: blue;
  };
}

That's pretty annoying. But letting this work brings up some interesting issues. For example, you now have to guard against circularity:

x-component {
  --one: { --one: blue; }
  @apply(--one);
  color: var(--one);
}

What does the above even mean? Does the meaning change if I swap the ordering of the lines? We ended up defining that it does work, by carefully ordering the steps: first you do any var() substitution, letting custom properties define more variables, then you do @apply substitution, then you repeat var() substitution with the new values that the @apply might have brought in. So the above example ends up giving the color property a value of blue. Circuitous and confusing, since --one is interpreted in two totally different ways, and kinda annoying/expensive for the CSS engine, but it technically works.

But then we hit a further stumbling block - animations! We want to allow animations to be able to animate custom properties, but also to use custom properties (so you can, for example, animate a background from var(--theme-color-1) to var(--theme-color-2)). The way this ended up working is that the element first does variable substitution and definition as normal, then any animations defined to run on the element get to use the variables so defined, and define new ones, then the properties on the element get variable-substituted again. Sound familiar? Combining animations with @apply meant figuring out precisely how to interleave them, and how many times to re-substitute variables, and it turns out there isn't even a "correct" answer - whatever you choose, you'll exclude some reasonable use-cases.

Interacting With Selectors, and More

But ok, all that's possible to define, even if it's clumsy and confusing in some cases. Now real JS frameworks, in particular Polymer, started using a polyfilled version of @apply, in expectation of it eventually landing in browsers natively. And they ran into problems.

See, the original reason for @apply was to avoid an explosion of custom properties when you wanted to allow arbitrary styling - instead, you just had one single custom property. Much more elegant!

And that works fine, as long as you just want to throw some styles at an element and be done with it. But often, we want more than that. We want to define hover styles, focus styles, active styles. If it's an input, we want to define placeholder and error styles.

With @apply, the user doesn't have access to selectors anymore, so pseudo-classes don't exist. The component author has to reinvent them themself, adding a --heading-style-hover, --heading-style-focus, etc. And it's not uncommon to want to combine these, meaning you also need a --heading-style-hover-focus property, and more. The possibilities explode combinatorially, eliminating our nice "just one property" thing we had going, and ensuring that component users have to memorize the precise set of pseudo-classes each component chooses to expose, and precisely how they name things (is it --heading-style-hover-focus, or --heading-style-focus-hover? Or was it --heading-style_hover_focus for this one? Maybe --heading-style--hf, or --heading-style_hocus?).

This problem pervades the @apply rule, because in general it moves all the various pieces of a style rule one level down:

  • selectors get pushed into property names, losing reordering, syntax, optionality
  • property names get pushed into property values, losing the ability to easily cascade and override things - you can't define a block of properties for the normal element and then easily override just some of those properties for the element when hovered
  • property values get pushed into untyped property values, losing early grammar checking, and causing the problems explained in earlier sections

Ultimately, there's probably ways around all of these issues. For example, we toyed for a while with a "macro" facility that would auto-define the hover/focus/etc variants for you. But these "solutions" would just be reinventing, in a messy and ad-hoc way, the existing features of CSS that we so cavalierly threw away. I became increasingly disillusioned with the feature.

Enter ::part()

At the recent January 2017 CSSWG face-to-face meeting, while discussing these issues with my coworker Shane, we realized that we could avoid all of this by reviving the older ::part() proposal for Shadow DOM. This was a proposal to let the component author "tag" certain elements in their shadow with a "part name", and then the component user could target those parts by name with the ::part() pseudo-element on the component, like x-component::part(heading).

This had all the good stuff: it used CSS things at the correct level, so selectors lived in the selector space (allowing ::part(foo):hover, etc), property names lived in the property name space (allowing the page to define the same property multiple times and let the cascade figure things out), and property values lived in the property value space (var() worked correctly, no complications with animations or circularity, grammar checking works properly).
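
For example, styling a part might look roughly like this (a sketch of the proposal as discussed at the time, not necessarily its final syntax):

/* outer page; assumes the component tagged an internal element with the part name "heading" */
x-component::part(heading) {
  color: red;
  text-decoration: underline;
}
x-component::part(heading):hover {
  color: blue;
}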

It also allowed some new useful powers - ::part() only targets the parts exposed by the component itself, not any sub-components it happens to use (unless it actively chooses to surface those sub-parts), which means better information-hiding. (Custom properties, because they inherit, default to the opposite - you have to specifically block them from inheriting into sub-components and styling them.) This also means name collisions are less of a problem - setting a custom property that the component and a sub-component uses can be a problem, but if they both use the same part name, that's just fine.

(We also have a ::theme() pseudo-element in the proposal that does automatically apply down into sub-components, for the rare times when that's exactly what you want to do.)

The one downside of ::part() is that it only works for shadow DOM. If you wanted to use @apply within your normal light-DOM page, you're out of luck. However, I'm okay with this. For one, @apply isn't actually all that useful for light DOM uses - just using normal selectors does the job better. For two, this might encourage more usage of Shadow DOM, which I consider a good result - more encapsulation is more better. (Tho we really need to explore a simple declarative version of Shadow DOM as well, to make simple structural usage of it possible without having to invoke JS.) For three, within a light DOM page we can potentially do even more powerful stuff, like inventing a real mixin facility, or a selector-variables thing, or what-have-you.

There's plenty more space to experiment here, and while it does suck to lose a tool that you might have gotten excited about, @apply really is just quite a bad idea technically. Let's solve these problems correctly. ^_^